2205.10633v1
http://arxiv.org/abs/2205.10633v1
Certain properties of Generalization of $L^p-$Spaces for $0 < p < 1$
\documentclass[reqno]{amsart} \usepackage{color} \usepackage{hyperref} \usepackage{graphicx} \usepackage[all]{xy} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{propr}[thm]{Properties} \newtheorem{nota}[thm]{Notation} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \theoremstyle{example} \newtheorem{exm}[thm]{Example} \DeclareMathOperator{\RE}{Re} \DeclareMathOperator{\IM}{Im} \DeclareMathOperator{\ess}{ess} \DeclareMathOperator{\diverg}{div} \newcommand{\eps}{\varepsilon} \newcommand{\To}{\longrightarrow} \newcommand{\h}{\mathcal{H}} \newcommand{\s}{\mathcal{S}} \newcommand{\A}{\mathcal{A}} \newcommand{\J}{\mathcal{J}} \newcommand{\M}{\mathcal{M}} \newcommand{\W}{\mathcal{W}} \newcommand{\X}{\mathcal{X}} \newcommand{\BOP}{\mathbf{B}} \newcommand{\BH}{\mathbf{B}(\mathcal{H})} \newcommand{\KH}{\mathcal{K}(\mathcal{H})} \newcommand{\Real}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\Field}{\mathbb{F}} \newcommand{\RPlus}{\Real^{+}} \newcommand{\Polar}{\mathcal{P}_{\s}} \newcommand{\Poly}{\mathcal{P}(E)} \newcommand{\EssD}{\mathcal{D}} \newcommand{\Lom}{\mathcal{L}} \newcommand{\States}{\mathcal{T}} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\seq}[1]{\left\langle #1\right\rangle} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\essnorm}[1]{\norm{#1}_{\ess}} \begin{document} \renewcommand{\thefootnote}{\arabic{footnote}} \title[\it{Certain properties of Generalization of $L^p-$SPACES FOR $0 < p < 1$ } ]{} \begin{center}{\large{CERTAIN PROPERTIES OF GENERALIZATION OF $L^p-$SPACES FOR $0 < p < 1$}} \end{center} \author[ R. E\MakeLowercase{larabi} M. E\MakeLowercase{l-arabi} \MakeLowercase{and} M. R\MakeLowercase{houdaf} ]{R\MakeLowercase{abab} E\MakeLowercase{larabi}$^{ 1 * },$ \ M\MakeLowercase{ouhssine} E\MakeLowercase{l-}A\MakeLowercase{rabi}$^2$ \ \MakeLowercase{and} \ M\MakeLowercase{ohamed} R\MakeLowercase{houdaf}$^3$ } \maketitle \vspace*{-5mm}\begin{center}\footnotesize $^{1,3}$Laboratoire de Math\'ematiques et leurs Applications, \'equipe: EDP et Calcul Scientifique, Universit\'e Moulay Ismail Facult\'e des Sciences, Mekn\`es, Morocco \\ [5mm] $^{2}$Laboratoire (MSISI), \'equipe: Analyse fonctionnelle, Th\'eorie spectrale, Th\'eorie des codes et Applications, Universit\'e Moulay Ismail Facult\'e des Sciences et T\'echniques, Errachidia, Morocco \\ [5mm] \end{center} \vspace*{-5mm}\begin{center}\footnotesize $^{1}$\texttt{ [email protected]}\\ $^{2}$\,\,\,\texttt {[email protected]}\,\,\,\,\,\,\,\,\,\\ $^{3}$\texttt{ [email protected]}\\ $^*$ Corresponding author.\\ \end{center} \begin{abstract} This paper introduces the notion of $N^*-$function and gives a generalization of $L^p,$ for $0<p<1$ denoted by $L_\Phi$ where $\Phi$ is an $N^*-$function. As well as, this paper examines some properties regarding to this generalized spaces and its linear forms, including some analogies and common features to some other well known spaces. As well as, we prove this space is a quasi-normed space but it is not normed space. \end{abstract} \section{\textbf{Introduction}} Function spaces, in particular $L^p(X)$ spaces, play a central role in many questions in analysis. 
It is the space of measurable functions $f$ on $X$ whose absolute value raised to the $p$-th power is integrable with respect to Lebesgue measure, that is, for which $\int_X |f|^p<\infty$ (for $p<\infty$). For $p\geq 1,$ $L^p(X)$ is a Banach space with the norm $ \|f\|_{L^p}= \big(\int_{X}|f|^p dx\big)^{1/p}.$ Observe that the triangle inequality $\|f_1+f_2\|_{L^p}\leq\|f_1\|_{L^p}+\|f_2\|_{L^p}$ fails in general when $0 <p< 1,$ so $\|.\|_{L^p}$ is not a norm on $L^p$ for this range of $p,$ and hence $L^p$ is not a Banach space. Consequently, many theorems on Banach spaces which apply to the spaces $L^p$ with $p\geq 1$ may fail to hold in the spaces $L^p$ with $0<p<1.$ In this work we introduce a new class of functions, called $N^*-$functions, and a generalization of the $L^p$ spaces for $0<p<1,$ denoted by $L_\Phi,$ where $\Phi $ is an $N^*-$function. We concentrate on the basic structural facts about $L_\Phi$: we present the linear forms on $L_\Phi$ and develop its dual space. A central theorem about Banach spaces is the Hahn-Banach theorem, which links a Banach space with its dual space; it has no obvious extension to our setting, since the dual space may be zero, with $L^p$ for $0<p<1$ as a model. We will see how the corresponding arguments can be adapted to our spaces $L_\Phi.$ \\ The paper is organized as follows: in order not to disturb our discussions later on, we use Section 2 to present some preliminaries, including some concavity results. In Section 3, we introduce the notion of $N^*-$function and its properties, while the $L_\Phi$ spaces are discussed in Sections 4 and 5. The $L_\Phi$ spaces with dual space zero are studied in Section 6. \section{\textbf{Preliminaries }} In this section, we recall some basic tools that are important for our main results.\\ Throughout this paper our vector spaces are real vector spaces. \begin{defn} Let $X$ be a vector space. A norm on $X$ is an assignment of a real number $\|x\|$ to every point $x$ in $X$ (that is, a map $\|\cdot\|: X \rightarrow \mathbb{R}$) satisfying the following: \begin{itemize} \item (Separation) For all $x \in X,$ \ $ \|x\| \geq 0$ and $\|x\| = 0$ if and only if $x =0.$ \item (Triangle inequality) For all $x,y \in X,$ $\|x + y\| \leq\| x\| + \|y\|.$ \item (Homogeneity) For all $x \in X$ and $\alpha\in\mathbb{R},$ $\|\alpha x\| = |\alpha|\|x\|.$ \end{itemize} Given a norm on a vector space, we get a metric by $d(x,y) = \|x -y\|.$ \end{defn} \begin{defn} A Banach space is a normed vector space $(X,\|\cdot\|)$ which is complete with respect to the metric $d(x,y) = \|x-y\|.$ \end{defn} \begin{defn} A topological vector space is called locally convex if the convex open sets form a base for the topology: given any open set $U$ around a point, there is a convex open set $C$ containing that point such that $C \subset U.$ \end{defn} \begin{exm} Any Banach space is locally convex, since all open balls are convex. This follows from the definition of a norm.\\ The spaces $L^p$ for $p\geq 1$ are Banach spaces, so they are locally convex; in contrast, Examples 2.19 and 2.20 in \cite{K} show that, for $0 < p < 1,$ $L^p[0, 1]$ is not locally convex.
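For a concrete instance of the failure of the triangle inequality when $0<p<1,$ take $f=\chi_{[0,\frac{1}{2}]}$ and $g=\chi_{[\frac{1}{2},1]}$ in $L^p[0,1]$: then $$\|f+g\|_{L^p}=1 \qquad \mbox{while} \qquad \|f\|_{L^p}+\|g\|_{L^p}=2\big(\tfrac{1}{2}\big)^{1/p}=2^{1-\frac{1}{p}}<1.$$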
\end{exm} \begin{defn} A set $A\in \mathcal{M}$ is called atom (in symbols $A\in \Lambda$) if, $\mu(A) > 0$ and for any measurable subset $B$ of $A$ with $\mu(B)<\mu(A)$ we have $\mu(B) = 0.$\\ A measure is called non-atomic if for any measurable set $A$ with $\mu(A) > 0,$ there is a measurable subset $B$ of $A$ such that $$0 < \mu(B) < \mu(A) .$$ That is, $A$ is not an atom.\\ A measure space $(X,\mathcal{M}, \mu )$ is called an atomic if there is a countable partition $\{A_i\}$ of $X$ formed by atoms.\\ A set $A\in \mathcal{M}$ is called non-atomic (or atomless) if neither $A$ nor any of it measurable subsets is the atom. A measure space $(X,\mathcal{M}, \mu )$ is called a non-atomic ( or atomless) if for any measurable set $A$ with $\mu(A)>0$ is an atomless. Hence $(X,\mathcal{M}, \mu )$ is non-atomic if and only if $\Lambda=\emptyset.$ There are several obvious properties of the sets of $\Lambda.$ \begin{enumerate} \item If $A$ has only one element and $\mu(A)>0,$ then $A\in \Lambda.$ \item If $A\in \Lambda$ and $f$ is function measurable, then $f$ is equal to a constant $\overline{f}$ almost everywhere on $A.$ \item If $A_1\in \Lambda$ and $A_2\in \mathcal{M},$ either $\mu(A_1\cap A_2)=0$ or $\mu(A_1\cap A_2)=\mu(A_1).$ \item $\Lambda=\emptyset$ if and only if for every $A\in \mathcal{M},$ with $\mu(A)>0,$ there exists a sequence of measurable sets $\{B_n \}_{n\geq 1}$ such that $B_{n+1}\subset B_n\subset A,$ $\mu(B_n)>0$ and $\lim\limits_{n\rightarrow +\infty}\mu(B_n)=0.$ \item If $A\in \Lambda $ and $\mu(A')=\mu(A")=0,$ then $A_0=A+A'-A"\in \Lambda.$\\ \end{enumerate} \end{defn} Let's recall some definitions and properties which will be used in the sequel of this paper. \begin{defn}[\textbf{$N-$function\footnote {{\it }For more details about N-functions see \cite{Kra}.}}] Let $M\colon\Real\to \Real^{+}$ be an $N$-function, i.e., $M $ is a convex function, with $M(t)>0$ for $t\neq0,$ $$\frac{M(t)}{t}\to 0 \ \mbox{as} \ t\to 0$$ and $$\frac{M(t)}{t}\to \infty \ \mbox{ as} \ t\to \infty .$$ \end{defn} Equivalently, $M$ admits the following representation: $$M(t)=\int_0^{|t|} m(s)\,{\rm d}s $$ where $m\colon \Real^{+}\to \Real^{+}$ is a non-decreasing and right continuous function, with $m(0)=0$, $m(t)>0$ for $t>0$, and $m(t)\to \infty $ as $t\to \infty$.\\ The $N$-function $\bar{M}$ conjugate to $M$ is defined by $$\bar{M}(t)=\int_0^{|t|}\bar{m}(s)\,{\rm d}s ,$$ where $\bar{m}\ :\ \Real^{+}\to \Real^{+}$ is given by $\bar{m}(t)=\sup \{s\,/\, m(s)\leq t\}$. \begin{defn}[\textbf{Orlicz spaces}](see \cite{A,Kra}) Let $X$ be an open subset of $\Real^d$, $d\in\N$. The Orlicz class $ {\mathcal{L}_M}(X)$ (resp.~the Orlicz space $L_M(X)$) is defined as the set of (equivalence classes of) real-valued measurable functions $u$ on $X $ such that: \begin{equation}\label{10} \displaystyle \int_X M(u(x))\,{\rm d}x<+\infty \quad {\mbox{(resp.\ }} \int_X M\Big(\frac{u(x)}\lambda \Big)\,{\rm d}x<+\infty {\mbox{ for some $\lambda>0$}}). \end{equation} \end{defn} Notice that $L_M(X)$ is a Banach space under the so-called Luxemburg norm, namely \begin{equation}\label{11} \| u\| _{M}=\inf \Big\{ \lambda >0\,/\, \int_X M\Big(\frac{u(x)}{\lambda}\Big)\,{\rm d}x\leq 1\Big\} \end{equation} and ${\mathcal{L}_M}(X)$ is a convex subset of $L_M(X)$. 
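For example, if $M(t)=|t|^{p}$ with $p>1$ (which is an $N$-function), then $\displaystyle \int_X M\Big(\frac{u(x)}{\lambda}\Big)\,{\rm d}x=\lambda^{-p}\int_X|u(x)|^{p}\,{\rm d}x\leq 1$ exactly when $\lambda\geq \big(\int_X|u|^{p}\,{\rm d}x\big)^{1/p},$ so the Luxemburg norm (\ref{11}) reduces to the usual norm: $\| u\| _{M}=\|u\|_{L^p}.$ In this sense the Orlicz spaces generalize the spaces $L^p$ for $p>1,$ just as the spaces $L_\Phi$ introduced below generalize $L^p$ for $0<p<1.$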
\begin{defn}[\textbf{Concave function}] A real-valued function $\Phi(x)$ of the real variable $x$ is said to be concave if the inequality \begin{equation}\label{concave} \Phi(\alpha x + ( 1 -\alpha )y)\geq \alpha \Phi(x)+(1-\alpha )\Phi(y) \end{equation} is satisfied for all values of $x, y$ and $0\leq \alpha \leq 1 .$ \end{defn} Inequality (\ref{concave}) admits still another generalization: \begin{equation}\label{Gconcave } \displaystyle\Phi\big(\frac{x_1+x_2+...+x_n}{n}\big)\geq\frac{1}{n}[ \Phi(x_1)+\Phi(x_2)+...+\Phi(x_n)] \end{equation} for arbitrary $x_l, x_2, ... , x_n.$ By successive application of (\ref{concave}). \begin{lem}\label{lem1} A continuous concave function $\Phi(x)$ has, at every point, a right derivative $p_+ (x)$ and a left derivative $p_- (x)$ such that \begin{equation}\label{p} p_-(x)\geq p_+(x). \end{equation} \end{lem} \begin{lem}\label{1.3} A concave function $\Phi$ is absolutely continuous and satisfies the Lipschitz condition in every finite interval. \end{lem} The above Lemmas \ref{lem1} and \ref{1.3} are proven by the same way as that indicated respectively in the proof of lemma 1.1 and lemma 1.3 in \cite{Kra}. \begin{thm}\label{thm} Every concave function $\Phi$ which satisfies the condition $\Phi(a)=0$ can be represented in the form \begin{equation}\label{form} \Phi(x)=\int_{a}^{x}p(t)dt, \end{equation} where $p$ is a decreasing right-continuous function. \end{thm} \begin{proof} We note first of all that the function $\Phi$ has a derivative almost everywhere. In fact , we have in virtue of (\ref{p}) and \begin{equation} p_+(x_1)\geq p_-(x_2). \end{equation} so, we have that \begin{equation}\label{1.10} p_-(x_2)\leq p_+(x_1)\leq p_-(x_1) \end{equation} for $x_2>x_1.$ Since the function $p_-$ is monotonic, it is continuous almost everywhere. Let $x_1$ be a point of continuity of the function $p_-.$ Passing to the limit in (\ref{1.10}) as $x_2\rightarrow x_1,$ we obtain that $p_-(x_1)\leq p_+(x_1) \leq p_-(x_1) ,$ i.e. $p_-(x_1) = p_+(x_1) .$\\ Similarly, we have that $\Phi'(x) = p(x) = p_+(x)$ almost everywhere.\\ Since the function $\Phi$ is absolutely continuous (in virtue of Lemma \ref{1.3}), it is the indefinite integral of its derivative. \end{proof} \section{ \textbf{$N^*$-function.}} A function $\Phi$ is called an $N^*-$function if it admits of the representation \begin{equation}\label{N*-function} \displaystyle \Phi(x)=\int_{0}^{|x|}p(t)dt<+\infty; \end{equation} where the function $p(t)$ is right-continuous for $t\geq 0,$ positive for $t > 0$ and decreasing which satisfies the conditions \begin{equation}\label{limit_p} \lim\limits_{t\rightarrow 0^+}p(t)=p(0^+)=+\infty, \ \lim\limits_{t\rightarrow +\infty}p(t) =p(+\infty)=0. \end{equation} For example, the functions $\displaystyle \Phi_1(x)=\displaystyle e^{\frac{\ln(\alpha|x| )}{\alpha}}\ (\alpha>1),\ \Phi_2(x)=\sqrt{\ln(|x|+1)}$ are $N^*-$functions. For the first of these, $\displaystyle p_1(t) = \Phi'_1(t) = \frac{1}{\alpha t}e^{\frac{\ln(\alpha t)}{\alpha}}$ and, for the second, $\displaystyle p_2(t) = \Phi'_2(t) = \frac{1}{2(t+1)\sqrt{\ln(t+1)}}.$ \subsection{Properties of $N^*-$functions.} It follows from representation (\ref{form}) that every $N^*-$function is even, continuous, assumes the value zero at the origin, and non-decreases for positive values of the argument. $N^*-$functions are concave. 
In fact , if $0< x_1 < x_2,$ then, in virtue of the monotonicity of the function $p,$ we have that \begin{eqnarray*} \Phi(\frac{x_1+x_2}{2}) =\int_{0}^{\frac{x_1+x_2}{2}}p(t)dt&=&\int_{0}^{\frac{x_1}{2}}p(t)dt+\int_{\frac{x_1}{2}}^{\frac{x_1+x_2}{2}}p(t)dt\\ &=& \int_{0}^{\frac{x_1}{2}}p(t)dt+\int_{0}^{\frac{x_2}{2}}p(t+\frac{x_1}{2})dt\\ &\geq& \frac{1}{2}\int_{0}^{\frac{x_1}{2}}p(t)dt+\frac{1}{2}\int_{0}^{\frac{x_2}{2}}p(t)dt\\ &\geq& \frac{1}{2}[\Phi(x_1)+\Phi(x_2)]. \end{eqnarray*} In the case of arbitrary $x_1, x_2,$ we have that \begin{eqnarray*} \Phi(\frac{x_1+x_2}{2}) =\Phi(\frac{|x_1+x_2|}{2})&\geq& \Phi(\frac{|x_1|+|x_2|}{2})\\ &\geq& \frac{1}{2}[\Phi(x_1)+\Phi(x_2)]. \end{eqnarray*} Setting $x_2 = 0$ in (\ref{concave}) , we obtain that \begin{equation}\label{1.13} \Phi(\alpha x_1)\geq \alpha \Phi(x_1) \ \ (0\leq \alpha \leq 1). \end{equation} The first of conditions (\ref{limit_p}) signifies that \begin{equation}\label{1.14} \lim\limits_{x\rightarrow 0^+}\frac{\Phi(x)}{x}=+\infty. \end{equation} It follows from the second condition in (\ref{limit_p}) that \begin{equation}\label{1.15} \lim\limits_{x\rightarrow +\infty}\frac{\Phi(x)}{x}=0. \end{equation} We denote by $\Phi^{-1} \ (0\leq x< \infty)$ the inverse function to the $N^*-$function $\Phi$ considered for non-negative values of the argument. This function is convexe since, in virtue of inequality (\ref{concave}) , we have that $\Phi^{-1}[\alpha x_1 + (1 - \alpha) x_2]\leq \alpha\Phi^{-1}( x_1) + (1 - \alpha)\Phi^{-1} (x_2)$ for $x_1 , x_2\geq 0.$ The monotonicity of the right derivative $p$ of the $N^*-$function $\Phi$ implies the inequality \begin{eqnarray}\label{1.16} \Phi(x+y)=\int_{0}^{|x+y|}p(t)dt&\leq &\int_{0}^{|x|+|y|}p(t)dt\nonumber\\ &=&\int_{0}^{|x|}p(t)dt+\int_{|x|}^{|x|+|y|}p(t)dt\nonumber\\ &\leq& \int_{0}^{|x|}p(t)dt+\int_{0}^{|y|}p(t+|x|)dt\nonumber\\ &\leq& \int_{0}^{|x|}p(t)dt+\int_{0}^{|y|}p(t)dt. \end{eqnarray} \subsection{Second definition of an $N^*-$function} It is sometimes expedient to use the following definition: a continuous concave function $\Phi$ is called an $N^*-$function if it is even and satisfies conditions (\ref{1.14}) and (\ref{1.15}). We shall show that this definition is equivalent to the fact that it follows from the second definition of an $N^*-$function that it is possible to represent it in the form (\ref{N*-function}). In virtue of (\ref{1.14}), $\Phi(0) =0.$ Therefore, in virtue of the evenness of the function $\Phi$ and Theorem \ref{thm}, it can be represented in the form \begin{equation*} \Phi(x)=\int_{0}^{|x|}p(t)dt, \end{equation*} where the function $\Phi$ is non-decreasing for $x > 0$ and rightcontinuous (i.e. the right derivative of the function $\Phi$). For $\displaystyle x > 0, \ \frac{\Phi(x)}{x}=\frac{1}{x}\int_{0}^{x}p(t)dt\geq \frac{1}{x}\int_{\frac{x}{2}}^{x}p(t)dt\geq \frac{p(x)}{2}>0,$ and in virtue of (\ref{1.15}), $\lim\limits_{x\rightarrow +\infty}p(x)=0.$ On the other hand, $\displaystyle p(0^+)=\lim\limits_{x\rightarrow 0^+}p(x)=\lim\limits_{x\rightarrow 0^+}\frac{\Phi(x)}{x}=+\infty.$ \begin{prop}\label{phi-1} A function $\Phi$ is an $N^*-$function if and only if $\Phi^{-1}:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is an $N-$function. \end{prop} \begin{proof} \begin{enumerate} \item $\Phi^{-1}(0)=0\Leftrightarrow \Phi(0)=0$ (from the fact that $\Phi$ is bijective). 
\item \begin{eqnarray*} \displaystyle\lim\limits_{x\rightarrow 0^+}\frac{\Phi^{-1}(x)}{x}&=&\lim\limits_{x\rightarrow 0^+}\frac{\Phi^{-1}(x)}{\Phi(\Phi^{-1}(x))}\\ &=&\displaystyle\lim\limits_{x\rightarrow 0^+}\displaystyle\frac{1}{\frac{\Phi(\Phi^{-1}(x))}{\Phi^{-1}(x)}}\\ &=& \displaystyle\lim\limits_{X\rightarrow 0^+}\frac{1}{\frac{\Phi(X)}{X}} =0. \end{eqnarray*} \item \begin{eqnarray*} \displaystyle\lim\limits_{x\rightarrow +\infty}\frac{\Phi^{-1}(x)}{x}&=&\lim\limits_{x\rightarrow +\infty}\frac{1}{\frac{\Phi(\Phi^{-1}(x))}{\Phi^{-1}(x)}}\\ &=&\displaystyle\lim\limits_{x\rightarrow +\infty}\displaystyle\frac{1}{\frac{\Phi(\Phi^{-1}(x))}{\Phi^{-1}(x)}}\\ &=& \displaystyle\lim\limits_{X\rightarrow +\infty}\frac{1}{\frac{\Phi(X)}{X}} =+\infty. \end{eqnarray*} The inverse implication proceed as the same way. \end{enumerate} \end{proof} \begin{rem} we finish this section by the complementary $N^*-$function defined as follows: $$\widehat{\Phi}=\big(\overline{\Phi^{-1}})^{-1}$$ deduced from the young's inequality, which is also an $N^*-$function. \end{rem} \textbf{Example.} As an example of $\widehat{\Phi},$ let $X=[0,a]$ with $a\geq1$ and $\Phi(t)=\displaystyle\frac{t^p}{p^p}$ where $0<p\leq 1.$ For any $t\geq 0$ we have $$\Phi^{-1}(t)=\frac{t^{1/p}}{1/p} \ \mbox{and} \ \overline{\Phi^{-1}}(t)=\frac{t^{\frac{1}{1-p}}}{\frac{1}{1-p}}.$$ So that $$\widehat{\Phi}(t)=\big(\overline{\Phi^{-1}})^{-1}(t)=\frac{t^{1-p}}{(1-p)^{1-p}}.$$ \section{\textbf{The $L_\Phi$ spaces}} In this section, we present a new definition of a generalized spaces denoted by $L_{\Phi}$ which is a generalisation of $L^p$ spaces with $0<p<1,$ for any measure space and then extend some results in this generalized framework. \subsection{Definition and Elementary Properties of $L_\Phi$ Spaces} \begin{defn} Let $(X,\mathcal{M},\mu)$ be a measure space and $\Phi$ be an $N^*-$function, we define the space \begin{equation*} L_\Phi(X)\stackrel{def}{=}:\{ f:X\rightarrow \mathbb{R} \ \mbox{measurable and } \ \int_{X}\Phi(f )d\mu<\infty \}, \end{equation*} with functions that are equal almost everywhere being identified with one another. \end{defn} \begin{rem} The $L_\Phi(X)$ is a vector metric spaces and the metric used on it is defined as follows : \begin{equation*} d_\Phi(f,g)=\int_{X}\Phi(f-g)d\mu. \end{equation*} \begin{itemize} \item $d_\Phi(f,g)=0 \Leftrightarrow \int_{X}\Phi(f-g)d\mu=0 \Leftrightarrow f=g.$ \item If $f, g, h \in L_\Phi(X),$ then \begin{eqnarray*} d_\Phi(f,g)=\int_{X}\Phi(f-g)d\mu&=& \int_{X}\Phi(f-h+h-g)d\mu\\ &\leq& \int_{X}\Phi(f-h)d\mu+\int_{X}\Phi(h-g)d\mu\\ &\leq& d_\Phi(f,h)+d_\Phi(h,g). \end{eqnarray*} \end{itemize} \end{rem} The notion of these spaces extends the usual notion of $L^p$ with $(0<p<1).$ The function $s \mapsto s^p$ entering the definition of $L^p$ is replaces by a more general concave function $s\mapsto\Phi(s)$ which is the $N^*-$function.\\ Let's give some analogies between the Orlicz Spaces $L_M$ where $M$ is an $N-$function and our Spaces $L_\Phi,$ where $\Phi$ is an $N^*-$function. Some properties of the Orlicz spaces $L_M$ are: \begin{itemize} \item[$P_1)$] $(L_M,||.||_M)$ is complete, and thus is a Banach space for the Luxemburg norm $||.||_M.$ \item[$P_2)$] (H\"{o}lder's inequality) \[ \int_{X}|u(x)v(x)|\,{\rm d}x \le ||u||_{M}||v||_{\bar{M}} \mbox{ for all $u\in L_{M}(X)$ and $v\in L_{\bar{M}}(X)$,} \] where $M$ is an $N-$function and $\bar{M}$ its complementary function. 
\item[$P_3)$](Jensen's inequality) $$M\big( \frac{1}{\mu(X)}\int_Xf(t) d\mu\big)\leq \frac{1}{\mu(X)} \int_X M(f(t)) d\mu$$ for all $f\in L_M(X).$ \item[$P_4)$] In particular, if $X$ has finite measure, both H\"older's inequality and Jensen's inequality yield the continuous inclusion $L_{M}(X)\subset L^1(X)$.\\ For more properties of Orlicz spaces see \cite{A,Kra,M,R}. \end{itemize} Let us now look at the $L_\Phi$ spaces and develop their properties: \begin{thm} For $\Phi$ an $N^*-$function, $(L_\Phi,d_\Phi)$ is a complete metric space. \end{thm} \begin{proof} Let $(f_n)$ be a Cauchy sequence in $L_\Phi.$ We select representatives and still denote them by $f_n.$ We can build a subsequence $(f_{n_i})$ such that $$ d_\Phi(f_{n_{i+1}},f_{n_i})\leq 2^{-i}.$$ We then set $$g_k=\sum_{i=1}^{k}|f_{n_{i+1}}-f_{n_i}|$$ and $$g=\sum_{i=1}^{+\infty}|f_{n_{i+1}}-f_{n_i}|;$$ these are functions with values in $[0,+\infty],$ and the subadditivity of $\Phi$ (inequality (\ref{1.16})) shows that $$d_\Phi(g_k,0)\leq 1.$$ Fatou's lemma or the monotone convergence theorem then gives $$d_\Phi(g,0)\leq 1.$$ In particular, $g$ is finite almost everywhere, and therefore, for almost every $x,$ the series $\displaystyle\sum_i \big(f_{n_{i+1}}(x)-f_{n_i}(x)\big)$ converges absolutely. As $ \mathbb{R}$ is complete, this series converges, and since it is telescoping, $$\displaystyle \sum_{i=1}^{k}\big(f_{n_{i+1}}(x)-f_{n_i}(x)\big)=f_{n_{k+1}}(x)-f_{n_{1}}(x),$$ so $f_{n_i}(x)$ converges to a limit, which we denote by $f(x).$ This defines $f$ almost everywhere, and we complete the definition by setting $f=0$ elsewhere. It remains to be shown that $f$ belongs to $L_\Phi$ and that $(f_n)$ converges to $f$ for the distance $d_\Phi$ of $L_\Phi.$\\ Let us go back to the fact that $(f_n)$ is a Cauchy sequence: $$\forall \varepsilon>0, \exists N, \forall n,m\geq N, \ \int_{X}\Phi(f_n-f_m)d\mu<\Phi(\varepsilon).$$ Taking $n=n_i$ and letting $i\rightarrow+\infty,$ Fatou's lemma gives $$\int_X\Phi(f-f_m)d\mu\leq \liminf\limits_{i\rightarrow+\infty} \int_X \Phi(f_{n_i}-f_m)d\mu\leq\Phi(\varepsilon)$$ and $$\displaystyle\int_X\Phi(f)d\mu= \int_X\Phi(f-f_m+f_m)d\mu\leq \int_X\Phi(f-f_m)d\mu+\int_X\Phi(f_m)d\mu<\infty,$$ so $f\in L_\Phi \ \mbox{and}\ f_n\rightarrow f \ \mbox{in} \ L_\Phi.$\\ \end{proof} \begin{thm}\label{ThmY} For $f\in L_\Phi$ and $g\in L_{\widehat{\Phi}},$ \begin{equation}\label{notHolder} \int_{X}\Phi(f) \widehat{\Phi}(g)\,d\mu \le \int_{X}|f|d\mu+\int_{X}|g|d\mu, \end{equation} where $\widehat{\Phi}$ is the complementary $N^*-$function of $\Phi.$ \end{thm} \begin{proof} It suffices to apply Young's inequality $$uv\leq \Phi^{-1}(u)+\overline{\Phi^{-1}}(v), \qquad u,v\geq 0,$$ where $\Phi^{-1}$ is an $N-$function owing to Proposition \ref{phi-1}, with $u=\Phi(f)$ and $v=\widehat{\Phi}(g).$ This gives (\ref{notHolder}) with $\widehat{\Phi}=[\overline{\Phi^{-1}}]^{-1}.$ Moreover, it is straightforward to check that $\widehat{\widehat{\Phi} } =\Phi.$ \end{proof} If $X$ has finite measure we have the following results. \begin{prop}\label{Ojens} For $f\in L_\Phi(X),$ \begin{equation*} \Phi\big(\frac{1}{\mu(X)}\int_{X}|f(t)|d\mu\big)\geq \frac{1}{\mu(X)}\int_{X}\Phi(f)d\mu. \end{equation*} \end{prop} \begin{proof} The $N^*-$function $\Phi$ is concave, so it can be written as the lower envelope of a family of affine functions, \begin{equation*} \Phi(x)= \inf_{n\geq0}(a_nx+b_n), \end{equation*} where $a_n,b_n \in \mathbb{R}.$ This gives \begin{eqnarray*} \Phi\big(\frac{1}{\mu(X)}\int_{X}|f(t)|d\mu\big)&=&\inf_{n\geq0}\big[\frac{a_n}{\mu(X)}\int_{X}|f(t)|d\mu+b_n\big]\\ &\geq&\frac{1}{\mu(X)}\int_{X}\inf_{n\geq0}(a_n|f(t)|+b_n)d\mu\\ &\geq&\frac{1}{\mu(X)}\int_{X}\Phi(f)d\mu.
\\ \end{eqnarray*} \end{proof} Theorem \ref{ThmY} and Proposition \ref{Ojens} yield the continuous inclusion $L^1(X)\subset L_{\Phi}(X) $. \begin{thm} Let $f_n,f\in L^1(X).$ If $\parallel f_n-f\parallel_1\rightarrow0,$ then $d_\Phi(f_n,f)\rightarrow0.$ \end{thm} \begin{proof} We use the fact that $L^1(X)\subset L_\Phi(X)$: as $f_n,f\in L^1(X),$ we have $f_n-f\in L_\Phi(X).$ So $$d_\Phi(f_n,f)=\int_X\Phi(|f_n-f|)d\mu\leq \mu(X)\Phi\big(\frac{1}{\mu(X)}\int_X|f_n-f|d\mu\big);$$ as $\parallel f_n-f\parallel_1\rightarrow0$ and $\Phi$ is continuous with $\Phi(0)=0,$ it follows that $d_\Phi(f_n,f)\rightarrow0.$ \end{proof} \begin{thm} The elements of $L^1(X)$ form a dense subset of $L_\Phi(X).$ \end{thm} \begin{proof} Let $f\in L_\Phi\setminus L^1$ and let $(f_n)_{n=1}^\infty$ be a sequence of simple functions converging to $f$ almost everywhere and such that $|f_n|\leq |f|$ for all $n.$ Then $$\Phi(|f-f_n|)\leq \Phi(|f|)+\Phi(|f_n|)\leq 2\Phi(|f|)\in L^1.$$ Therefore, by the dominated convergence theorem, $$\lim\limits_{n\rightarrow+\infty}d_\Phi(f_n,f)=\lim\limits_{n\rightarrow+\infty}\int_{X}\Phi(|f_n-f|)d\mu=\int_{X}\lim\limits_{n\rightarrow+\infty}\Phi(|f_n-f|)d\mu=0.$$ \end{proof} We can conclude from these analogies that our space $L_\Phi$ contains $L^1,$ the spaces $L^p$ for $0<p<1,$ and the Orlicz spaces $L_M$ where $M$ is an $N-$function.\\ It is known that if $M$ is an $N-$function, then for every $\alpha>0$ we have $\alpha<M^{-1}(\alpha)+(\bar{M})^{-1}(\alpha)\leq 2\alpha.$ Since $\Phi$ is an $N^*-$function, $\Phi^{-1}$ is an $N-$function, therefore \begin{equation}\label{result} \alpha<\Phi(\alpha)+\widehat{\Phi}(\alpha) \leq 2 \alpha \end{equation} for $\alpha>0.$ From this property we get the following proposition: \begin{prop} We have $L_{\Phi}\cap L_{\widehat{\Phi}}=L^1.$ \end{prop} \begin{proof} Owing to the inequality (\ref{result}), we have for all $f\in L^1$ \begin{equation*} \int_X \Phi(|f|)d\mu +\int_X {\widehat{\Phi}}(|f|)d\mu\leq 2 \int_X |f|\,d\mu <+\infty, \end{equation*} hence $$\int_X \Phi(|f|)d\mu<+\infty \ \mbox{and}\ \int_X \widehat{\Phi}(|f|)d\mu<+\infty , $$ so that $$f\in L_{\Phi}\cap L_{\widehat{\Phi}}. $$ Now let $ f\in L_{\Phi}\cap L_{\widehat{\Phi}};$ then $\int_X \Phi(|f|)d\mu<+\infty $ and $\int_X {\widehat{\Phi}}(|f|)d\mu<+\infty,$ so, again by (\ref{result}), $$\int_X |f|d\mu<\int_X \Phi(|f|)d\mu+\int_X \widehat{\Phi}(|f|)d\mu<+\infty,$$ therefore $f\in L^1,$ so that $L_{\Phi}\cap L_{\widehat{\Phi}}\subseteq L^1.$ This gives the result. \end{proof} \subsection{\textbf{$L_\Phi$ spaces with $\Phi^{-1}\in \Delta_2$}} \vspace*{0.5cm}Let us first recall the definition of the $\Delta_2$-condition: \begin{defn} An $N$-function $M$ is said to satisfy the $\Delta _2$-condition if, for some $k>2$, \begin{equation}\label{7} M(2t)\leq k\,M(t)\quad {\mbox{for all }}t\geq 0. \end{equation} We write $M\in \Delta_2$ when $M$ satisfies the $\Delta_2$-condition. \end{defn} \begin{rem} Let $\Phi^{-1}\in \Delta_2;$ then there exists $k_0>2$ such that $\Phi^{-1}(2x)\leq k_0\Phi^{-1}(x)$ for all $x>0,$ which holds if and only if there exists $ k_0>2 $ such that \begin{equation}\label{Ndelta2} 2\Phi(x)\leq \Phi(k_0x) \ \mbox{for all}\ x>0. \end{equation} We take $\psi_x(t)=\Phi(tx)-2\Phi(x), \ \forall t\geq 2 \ \mbox{and}\ \forall x>0.
$ Then \begin{itemize} \item[i)] $\psi_x$ is continuous on $[2,+\infty[$ for each $x>0.$ \item[ii)] $\psi_x(k_0)=\Phi(k_0x)-2\Phi(x)\geq 0 \ \forall x>0$ and $\psi_x(2)=\Phi(2x)-2\Phi(x)\leq 0 \ \forall x>0$ \end{itemize} there is $k\in [2,k_0],$ such that $\Phi_x(k)=0 \ \forall x>0$ equivalently to \begin{equation}\label{Ndelta22} 2\Phi(x)=\Phi(kx) \ \forall x>0. \end{equation} \end{rem} In the sequel we suppose that $\Phi^{-1}\in \Delta_2,$ so that $\Phi$ is satisfying the assumption (\ref{Ndelta22}). \\ We recall the following definition of a quasi-norm on a real vector space $X$ is a map $x\mapsto \|x\| \ (X\rightsquigarrow \mathbb{R})$ such that: \begin{itemize} \item $\|x\|>0$ if $x\neq 0,$ \item $\|tx\|=|t|\|x\|$ for $x\in X, \ t\in \mathbb{R},$ \item $\|x+y\|\leq k(\|x\|+\|y\|)$ for $x,y\in X,$ \end{itemize} where $k$ is a constant independent of $x$ and $y.$ The best such constant $k$ is called the modulus of concavity of the quasi-norm. If $k\leq 1,$ then the quasi-norm is called a norm.\\ The sets $\{x\in X: \|x\|<\varepsilon\}$ for $\varepsilon>0$ form a base of neighbourhoods of a Hausdorff vector topology on $X.$ This topology is (locally) p-convex, where $0<p\leq1$ if for some constant $A$ and any $x_1,x_2,..,x_n\in X, \ \|x_1+x_2+..+x_n\|\leq A (\|x_1\|^p+..+\|x_n\|^p)^{1/p}.$ In this case we may endow $X$ with an equivalent quasi-norm: $$\|x\|^* =\inf \{ (\sum_{i=1}^{n}\|x_i\|^p)^{1/p}; \ x_1+x_2+..+x_n=x \}$$ and then $$\|x\|\geq \|x\|^*\geq A^{-1}\|x\|; \ x\in X;$$ and $\|.\|^*$ is $p$-sub-additive, i.e. $$\|x_1+x_2+..+x_n\|^*\leq (\sum_{i=1}^{n}\|x_i\|^{*p})^{1/p}.$$ \begin{prop} The function defined by: $\displaystyle\|f\|_\Phi = \inf\{\lambda>0, \int_X \Phi(\frac{|f|}{\lambda})d\mu \leq 1 \}$ is a quasi-norm on $L_\Phi$ space. $(L_\Phi, \|\|_\Phi)$ is a quasi-normed space. 
\end{prop} \begin{proof} \begin{itemize} \item[$1)$] If $f=0,$ then $\displaystyle\Phi(\frac{|f|}{\lambda})=0 \ \forall \lambda >0,$ so $\|f\|_\Phi=0.$ \item[$2)$] If $\|f\|_\Phi=0,$ then $\displaystyle\int_X\Phi(\alpha|f|)d\mu \leq 1 \ \forall \alpha >0.$ \end{itemize} We have $\displaystyle\int_X\Phi(|f|)d\mu=\frac{1}{2}\int_X\Phi(k|f|)d\mu=...=\frac{1}{2^n}\int_X\Phi(k^n|f|)d\mu \ \forall n\in \mathbb{N},$ so by $2)$ we get $\displaystyle\int_X\Phi(|f|)d\mu\leq \frac{1}{2^n} \ \forall n\in \mathbb{N}.$ Since $\displaystyle\frac{1}{2^n}\rightarrow 0 \ \mbox{as} \ n\rightarrow +\infty,$ it follows that $\int_X\Phi(|f|)d\mu=0,$ so $f=0.$\\ Let $\alpha\in\mathbb{R}$ and $f\in L_\Phi.$ Then \begin{itemize} \item If $\alpha=0,$ then $\|\alpha f \|_\Phi=\|0\|_\Phi=0=0\|f\|_\Phi.$ \item If $\alpha\neq0,$ then \begin{eqnarray*} \|\alpha f \|_\Phi&=&\inf\Big\{ \lambda >0, \int_X \Phi\big(\frac{\alpha f}{\lambda}\big)d\mu \leq1 \Big\}\\ &=&\inf\Big\{ |\alpha|\frac{\lambda}{|\alpha|}, \int_X \Phi\big(\frac{ f}{\frac{\lambda}{|\alpha|}}\big)d\mu \leq1 \Big\}\\ &=&|\alpha|\inf\Big\{ \frac{\lambda}{|\alpha|}>0 , \int_X \Phi\big(\frac{ f}{\frac{\lambda}{|\alpha|}}\big)d\mu \leq1 \Big\}\\ &=&|\alpha|\|f\|_\Phi. \end{eqnarray*} \end{itemize} Let $f,g\in L_\Phi$ and $a,b \in \mathbb{R}$ such that $a\geq \|f\|_\Phi$ and $b \geq \|g\|_\Phi;$ then $$\int_X \Phi(\frac{|f|}{a})d\mu\leq1 \ \mbox{and} \ \int_X \Phi(\frac{|g|}{b})d\mu\leq1.$$ Since \begin{eqnarray*} \int_X \Phi\big(\frac{|f+g|}{k(a+b)}\big)d\mu&=&\frac{1}{2}\int_X \Phi\big(\frac{k|f+g|}{k(a+b)}\big)d\mu=\frac{1}{2}\int_X \Phi\big(\frac{|f+g|}{(a+b)}\big)d\mu\\ &\leq&\frac{1}{2}\big[\int_X \Phi\big(\frac{|f|}{(a+b)}\big)d\mu + \int_X \Phi\big(\frac{|g|}{(a+b)}\big)d\mu\big]\\ &\leq&\frac{1}{2}\big[\int_X \Phi\big(\frac{|f|}{a}\big)d\mu + \int_X \Phi\big(\frac{|g|}{b}\big)d\mu\big] \leq 1, \end{eqnarray*} we get $\|f+g\|_\Phi\leq k(a+b).$ Taking the infimum over such $a$ and $b,$ we obtain $\|f+g\|_\Phi\leq k\big(\|f\|_\Phi+\|g\|_\Phi\big).$ \end{proof} \begin{rem} As is well known, whenever $0<p<1$ the function $\|f\|_p=\big(\int_X |f|^p d\mu\big)^{\frac{1}{p}}$ no longer satisfies the triangle inequality $\|f_1+f_2\|_p\leq \|f_1\|_p+\|f_2\|_p$ but, in general, only the weaker condition $\|f_1+f_2\|_p\leq 2^{\frac{1-p}{p}}(\|f_1\|_p+\|f_2\|_p).$ Hence the function $\|.\|_p$ is a quasi-norm on $L^p,$ and it coincides with the quasi-norm $\|.\|_\Phi.$ Indeed, since $\Phi(t)=t^p$ is an $N^*-$function with $2\Phi(t)=(2^{1/p})^p t^p=(2^{1/p}t)^p$ for all $t\geq 0,$ we may take $k=2^{1/p}\geq 2.$ On the one hand, $$\int_X \big(\frac{|f|}{\|f\|_\Phi}\big)^p d\mu=\frac{1}{\|f\|_\Phi^p}\int_X |f|^p d\mu\leq 1,$$ so $\big[\int_X |f|^pd\mu\big]^{1/p}\leq \|f\|_\Phi $ for all $f\in L_\Phi,$ that is, $ \|f\|_p\leq \|f\|_\Phi.$\\ On the other hand, we have $\int_X \big(\frac{|f|}{\|f\|_p}\big)^p d\mu=\frac{1}{\|f\|^p_p}\int_X |f|^pd\mu=1,$ so $ \|f\|_\Phi\leq \|f\|_p.$ Finally, we conclude that $ \|f\|_p= \|f\|_\Phi.$ \end{rem} \begin{prop} Let $\mu(X)<+\infty$ and $f\in L^1(X);$ then \begin{equation} \|f\|_\Phi\leq \frac{1}{\mu(X)\Phi^{-1}(\frac{1}{\mu(X)})}\|f\|_1.
\end{equation} \end{prop} \begin{proof} \begin{eqnarray*} \int_X\Phi\big(\frac{|f|}{\|f\|_1}\mu(X)\Phi^{-1}(\frac{1}{\mu(X)})\big)d\mu&=&\int_X\big[\inf_n \{a_n \frac{|f|}{\|f\|_1}\mu(X)\Phi^{-1}(\frac{1}{\mu(X)})+b_n \} \big]d\mu\\ &\leq& \inf_n \{a_n \int_X \frac{|f|}{\|f\|_1}\mu(X)\Phi^{-1}(\frac{1}{\mu(X)})d\mu+b_n \mu(X) \}\\ &\leq& \mu(X)\big[\inf_n \{a_n \int_X \frac{|f|}{\|f\|_1}\Phi^{-1}(\frac{1}{\mu(X)})d\mu+b_n \}\big]\\ &\leq& \mu(X)\Phi\big( \frac{1}{\|f\|_1}(\int_X |f|d\mu)\Phi^{-1}(\frac{1}{\mu(X)})\big)\\ &\leq& \mu(X)\Phi\big(\Phi^{-1}(\frac{1}{\mu(X)})\big)\\ &\leq& \mu(X)\frac{1}{\mu(X)}=1. \end{eqnarray*} Then \begin{equation*} \|f\|_\Phi\leq \frac{1}{\mu(X)\Phi^{-1}(\frac{1}{\mu(X)})}\|f\|_1. \end{equation*} \end{proof} \begin{prop} Let $f\in L_\Phi.$ If $\displaystyle\int_X \Phi(|f|)d\mu<c,$ then $\displaystyle\|f\|_\Phi \leq k^{n_0},$ where $\displaystyle n_0=\big[\frac{\ln(c)}{\ln(2)}\big]+1.$ \end{prop} \begin{proof} We have \begin{eqnarray*} \int_X \Phi(\frac{|f|}{k^{n_0}})d\mu&=&\frac{1}{2^{n_0}}\int_X \Phi(\frac{|f|k^{n_0}}{k^{n_0}})d\mu\\ &=& \frac{1}{2^{n_0}}\int_X \Phi(|f|)d\mu \leq \frac{c}{2^{n_0}}; \end{eqnarray*} since $\displaystyle n_0>\frac{\ln(c)}{\ln(2)},$ we have $2^{n_0}>c,$ hence $\displaystyle\frac{c}{2^{n_0}}<1,$ and we get $\displaystyle\|f\|_\Phi \leq k^{n_0}.$ \end{proof} \begin{defn} \begin{enumerate} \item A sequence $\{f_n\}$ in $L_\Phi (X)$ is called quasi-convergent to a point $f\in L_\Phi$ if and only if $\|f_n-f\|_\Phi\rightarrow 0;$ we then write $\xymatrix{f_n\ar[r]^{\|.\|_\Phi}&f} \ \mbox{as}\ n\rightarrow \infty.$ \item A sequence $\{f_n\}$ in $L_\Phi (X)$ is a quasi-Cauchy sequence if and only if $\|f_n-f_m\|_\Phi \rightarrow 0$ as $n,m\rightarrow \infty.$ \end{enumerate} It is clear that every quasi-convergent sequence is a quasi-Cauchy sequence. \end{defn} \begin{prop}\label{prop2} Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence in $L_\Phi(X)$ and $f\in L_\Phi(X).$ Then $\xymatrix{f_n\ar[r]^{\|.\|_\Phi}&f}$ if and only if $\xymatrix{f_n\ar[r]^{d_\Phi}&f.}$ \end{prop} \begin{proof} $\Rightarrow]$ Assume that $\xymatrix{f_n\ar[r]^{\|.\|_\Phi}&f.}$ Then, for all $i\in \mathbb{N},$ there exists $N_i$ such that, for all $n>N_i,$ $\|f_n-f\|_\Phi\leq \frac{1}{k^i}.$ This implies that \begin{equation*} 2^i \int_X \Phi(|f_n-f|)d\mu=\int_X \Phi(k^i|f_n-f|)d\mu\leq 1. \end{equation*} Hence, \begin{equation*} d_\Phi(f_n,f)= \int_X \Phi(|f_n-f|)d\mu\leq \frac{1}{2^i}. \end{equation*} Thus $\xymatrix{f_n\ar[r]^{d_\Phi}&f.}$ \\ $\Leftarrow]$ Assume that $\xymatrix{f_n\ar[r]^{d_\Phi}&f.}$ Then, for all $i\in \mathbb{N},$ there exists $N_i$ such that, for all $n>N_i,$ $ \int_X \Phi(|f_n-f|)d\mu\leq \frac{1}{2^i}.$ By the preceding proposition, $\|f_n-f\|_\Phi\leq k^{n_0(i)},$ where $n_0(i)=\big[\frac{\ln(\frac{1}{2^i})}{\ln(2)}\big]+1=1-i.$ Hence $\|f_n-f\|_\Phi\leq k^{1-i}= \frac{1}{k^{i-1}}.$ Thus $\xymatrix{f_n\ar[r]^{\|.\|_\Phi}&f.}$ \end{proof} We conclude this section with the following result: \begin{thm} $L_\Phi(X)$ is a quasi-Banach space. \end{thm} \begin{proof} This is immediate from Proposition \ref{prop2}, since $(L_\Phi, d_\Phi)$ is a complete metric space. \end{proof} \section{\textbf{Linear forms on $L_\Phi$ }} In this section we present the linear forms on $L_\Phi$ by discussing each case of the measure.
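Before treating the general case, it may help to keep two model situations in mind. If $X=[0,1]$ with Lebesgue measure and $\Phi(t)=\sqrt{|t|},$ then $L_\Phi=L^{1/2}[0,1]$ and, by the classical result of Day \cite{D}, the only continuous linear functional on this space is the zero functional. By contrast, if $X=\{1,\dots,m\}$ carries a purely atomic measure with $\mu(\{i\})=a_i>0,$ then $L_\Phi(X)$ can be identified with $\mathbb{R}^m$ and every linear functional takes the form $f\mapsto\sum_{i=1}^{m}u_i f(i)$ with $u_i\in\mathbb{R}.$ The theorems below interpolate between these two situations through the set of atoms $\Lambda.$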
\\ \textbf{Case 1:} Let $\mu(X)<+\infty$ \begin{lem}\cite{D} If $\Lambda \neq \emptyset, $ there exists a finite or countable sequence of sets $\{A_i \}\subset \mathcal{U}$ such that every $A \in \mathcal{U}$ differs from just one $\{A_i \}$ only by sets of measure zero. \end{lem} In what follows we shall let $\{A_i \}$ be the family of sets of $\Lambda$ of the preceding lemma, let $a_i=\mu(A_i)$ and let $\overline{f}_i$ be the value of the measurable function $f$ almost everywhere on $\{A_i \}.$ Then we have \begin{thm}\label{thm4.2} Any linear functional on $L_\Phi(X)$ is identically zero, if $\Lambda=\emptyset,$ or can be expressed in the form $U(f)=\sum_{i}u_i \overline{f}_i$ where $\|U\|= \sup_i |u_i|\Phi^{-1}(\frac{1}{a_i}).$\\ If $u_i$ are given so that $\displaystyle|u_i|<k \frac{1}{\Phi^{-1}(\frac{1}{a_i})}$ for all $i,$ the function $U:f\mapsto \sum_i u_i \overline{f}_i $ is linear on $L_\Phi(X).$ \end{thm} \begin{proof} It is easily seen from Theorem 4.5, applied to $L_\Phi(X-\cup_i A_i),$ that $U(f)$ is independent of the values of $f$ on $X-\cup_i A_i ;$ hence no generality is lost if we assume $X=\cup_iA_i$ for simplicity.\\ Let $\chi_{A_i}$ be the characteristic function on $A_i,$ and take any $f\in L_\Phi(X),$ then $$\|\sum_{i=1}^n \overline{f}_i\xi_{A_i}-f\|_\Phi\rightarrow 0 \ \mbox{ as}\ n\rightarrow +\infty,$$ so by continuity, $$U(f)=\lim\limits_{n\rightarrow+\infty} U(\sum_{i=1}^{n}\overline{f}_i \chi_{A_i})=\lim\limits_{n\rightarrow+\infty}\sum_{i=1}^{n} U(\overline{f}_i \chi_{A_i})=\sum_{i=1}^{n}\overline{f}_i u_i,$$ where $u_i=U(\chi_{A_i}).$ Postponing the computation of $\|U\|$ for a moment.\\ Let us assume $u_i$ given, such that $\displaystyle|u_i|<k\frac{1}{\Phi^{-1}(\frac{1}{a_i})},$ and take $\|f\|_\Phi\leq 1.$ Then $$|\Phi(|\overline{f}_i|)a_i|\leq \|f\|_\Phi\leq 1,$$ so that $$|\overline{f}_i|=\Phi^{-1}(\Phi(|\overline{f}_i)a_i\times \frac{1}{a_i})\leq \Phi(\overline{f}_i)a_i\Phi^{-1}(\frac{1}{a_i})$$ since $\Phi^{-1}$ is convex, therefore \begin{eqnarray*} |U(f)|\leq \sum |u_i\overline{f}_i| &\leq& k\sum_{i=1}^{+\infty}\frac{1}{\Phi^{-1}(\frac{1}{a_i})}\times \Phi(|\overline{f}_i|)a_i \Phi^{-1}(\frac{1}{a_i})\\ &\leq& k\sum_{i=1}^{+\infty}\Phi(|\overline{f}_i|)a_i \\ &\leq& k\int_{X}\Phi(|f|)d\mu\\ &\leq& k, \end{eqnarray*} so $U$ is bounded hence it is continuous, and $\|U\|\leq k.$ It follows, if $k=\sup_i |u_i|\Phi^{-1}(\frac{1}{a_i}),$ that $\|U\|\geq k$ also for $$|U(\Phi^{-1}(\frac{1}{a_i}\chi_{A_i}))|=|u_i|\Phi^{-1}(\frac{1}{a_i})\leq \|U\|,$$ therefore, $\sup_i|u_i|\Phi^{-1}(\frac{1}{a_i})\leq \|U\|.$ This shows that the series $\sum |u_i \overline{f}_i| $ converges. \end{proof} \textbf{Case 2:} $\mu(X)=+\infty$\\ Let's first present the following Lemmma: \begin{lem}\label{lemma1} We have for $\Lambda\neq\emptyset$ if and only if $L_\Phi^* \neq0.$ \end{lem} \begin{proof} B y using the theorem 5.4 and 5.5 in \cite{Arabi}. \end{proof} For greater convenience we assume the $(X,\mathfrak{m},\mu)$ is $\sigma-$finite, (or $X\in B $), that is, there exists a sequence of measurable subsets $X_i\in \mathfrak{m} $ such that $X=\bigcup_i^{\infty}X_i$ and $\mu(X_i)<+\infty$ for all $i\in \mathbb{N}.$ We let $E_i,\ a_i$ and $\overline{f}_i$ have their previous meanings. 
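As a simple illustration of the situation treated below, let $X=\mathbb{N}$ with the counting measure, so that the atoms are $A_i=\{i\}$ and $a_i=1$ for all $i.$ Then $L_\Phi(X)$ is the space of sequences $f$ with $\sum_{i}\Phi(f(i))<\infty,$ and the functionals described in the next theorem take the form $U(f)=\sum_i u_i f(i)$ with $\sup_i |u_i|<\infty$ (here $a_i=1,$ so the norm in the theorem reads $\|U\|=\sup_i|u_i|\Phi^{-1}(1)$).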
\begin{thm} If $(X,\mathfrak{m},\mu)$ is $\sigma-$finite, a functional $U$ on $L_\Phi(X)$ is linear if and only if: \begin{enumerate} \item[a)] it is identically zero and $\Lambda=\emptyset,$ \\ or \item[b)] $\Lambda\neq\emptyset,$ and $U$ can be expressed in the form $U(f)=\sum_iu_i\overline{f}_i$ with $\|U\|=\sup_i|u_i|\Phi^{-1}(\frac{1}{a_i}).$ \end{enumerate} \end{thm} \begin{proof} The sets $X_i$ which exists as $(X,\mathfrak{m},\mu)$ is $\sigma-$finite can obviously be taken disjoint; we let $U_j$ be a linear functional on $L_\Phi(X)$ defined by $U_j(f)=U(f.\chi_{X_j}).$ Then $U_j$ is linear on $L_\Phi(X_j),$ so by Theorem \ref{thm4.2} and Lemma \ref{lemma1} either $U_j$ is identically zero on $U_j(f)=\sum_iu_{ji}\overline{f}_{ji}.$ \\ Now $\lim\limits_{n\rightarrow +\infty}\|\sum_{i=1}^{n}f\chi_{X_i}-f\|_\Phi=0.$ So $U(f)=\lim\limits_{n\rightarrow\infty}U(f\chi_{X_j})=\sum_jU_j(f).$ Moreover there is an $f_0\in L_\Phi(X)$ such that $|f_0(y)|=|f(y)|$ for $y\in X$ and $U_j(f_0)=|U_j(f)|$ for all $j;$ hence $$|U_j(f)|\leq \sum_j|U_j(f)|=\sum_jU_j(f_0)=U(f_0)\leq \|U\|.\|f\|_\Phi,$$so $\|U_j\|\leq \|U\|$ and the series $\sum_jU_j(f)$ converges absolutely for each $f\in L_\Phi.$ Then $$|u_{ji}|\Phi^{-1}(\frac{1}{a_{ji}})\leq \|U_j\|\leq \|U\|$$ and $U(f)=\sum_j\sum_iu_{ji}\overline{f}_{ji}$ unless all $U_j$ are identically zero, that is, unless $\mathcal{U}=\emptyset.$ Rearranging, we get $U(f)=\sum u_if_i.$ \\ The other conclusions follow as in Theorem \ref{thm4.2}. \end{proof} \textbf{Case 3:} There remains the case in which $(X,\mathfrak{m},\mu)$ is not $\sigma-$finite. As we did in the proof of Lemma 7 in \cite{D}, we can define a well-ordered set $\{A_\gamma\}$ of sets of $\Lambda$ disjoint up to sets of measure of measure zero, equal to some $A_\gamma;$ however we have no assurance that the sequence will be countable. As before we let $a_\gamma=\mu(A_\gamma)$ and $\overline{f}_\gamma$ be the value of $f$ almost everywhere on $A_\gamma.$\\ Now, if $f\in L_\Phi(X)$ if and only if $\Phi(f)\in L^1(X).$ By Lemma 8 in \cite{D} $$\{x\in X,f(x)\neq 0\}=\{x\in X;\Phi(f)(x)\neq 0\}\in B.$$ By this, it is possible to put each function $f$ of $L_\Phi(X)$ into at least one class $D_E$ such that $f(x)=0$ if $x\in X\backslash E$ and $E\in B;$ then $D_E$ is equivalent to $L_\Phi(E),$ and if we set $D_E(f)=U(f)$ for $f\in D_E,$ $U_E$ is linear on $L_\Phi(E)$ and therefore can be expressed as before where the $A_i$ of theorem above will be those $E_\gamma$ which except for sets of measure zero, lie in $E.$ But $f=0$ on $X\backslash E;$ so we can write $U(f)=\sum_{\gamma}U_\gamma \overline{f}_\gamma $ if it is not identically zero, with the convention that the sum of any number of terms in which $\overline{f}_\alpha $ or $u_\alpha$ is zero shall be zero.\\ Since this can be done for each $f\in L_\Phi(X),$ we get our final result, including the previous theorems as special cases. \begin{thm} A functional $U$ on $L_\Phi(X)$ is linear if and only if: \begin{enumerate} \item[a)] $\Lambda=\emptyset,$ and $U$ is identically zero \\ or \item[b)] $\Lambda\neq\emptyset,$ and $U$ can be expressed in the form $U(f)=\sum_\gamma u_i\overline{f}_\gamma,$ and $\|U\|=\sup_\gamma|u_\gamma|\Phi^{-1}(\frac{1}{a_\gamma}).$ \end{enumerate} \end{thm} \begin{rem} $L_\Phi$ is a quasi-normed linear space , by using Theorem 3.4 in \cite{Gobardhan} then the Dual space of $L_\Phi$ is a complete normed linear space. 
\end{rem} \section{\textbf{The $L_\Phi$ Spaces with Dual Space 0}} In this section we consider topological vector spaces that are not, in general, locally convex. For example, the function $d(f,g) =\int_{0}^{1} |f(x)- g(x)|^{1/2}dx$ is a metric on $L^{1/2}[0,1],$ and the topology that $L^{1/2}[0,1]$ obtains from this metric is not locally convex. Theorem 3.2 in \cite{K} proves that $L^p(\mu)$ for $0<p<1$ is a locally convex space when the measure $\mu$ assumes finitely many values.\\ For a measure space $(X,\mathcal{M},\mu)$ and an $N^*-$function $\Phi,$ when is $L_\Phi$ locally convex? \begin{thm} $L_\Phi(X)$ is locally convex if and only if the measure $\mu$ assumes finitely many values. \end{thm} \begin{proof} If $\mu$ takes finitely many values, then $X$ is the disjoint union of finitely many atoms $B_1,...,B_m.$ A measurable function is constant almost everywhere on each atom, so $L_\Phi(X)$ is topologically just a Euclidean space (of dimension equal to the number of atoms of finite measure), which is locally convex.\\ Now assume $\mu$ takes infinitely many values. Then there is a sequence of subsets $Y_i\subset X$ such that $$0<\mu(Y_1)<\mu(Y_2)<...$$ From the sets $Y_i,$ we can construct recursively a sequence of disjoint measurable sets $A_i$ such that $\mu(A_i)>0.$\\ Fix $\displaystyle \varepsilon>0.$ Let $f_k=\Phi^{-1}(\displaystyle\frac{\varepsilon}{\mu(A_k)})\chi_{A_k},$ so that \begin{equation*} \int_{X}\Phi(|f_k|)d\mu=\int_{X}\frac{\varepsilon}{\mu(A_k)}\chi_{A_k} d\mu=\frac{\varepsilon}{\mu(A_k)}\times \mu(A_k)=\varepsilon. \end{equation*} If $L_\Phi(X)$ is locally convex, then any open set around $0$ contains a convex open set around $0,$ which in turn contains some $\varepsilon-$ball (and thus every $f_k$). We will show that an average of enough $f_k$'s is arbitrarily far from $0$ in the metric on $L_\Phi(X),$ and that will contradict local convexity. Let $h_n=\displaystyle \frac{1}{n}\sum_{k=1}^{n}f_k.$ Since the $f_k$'s are supported on disjoint sets, $$\displaystyle \int_{X}\Phi(|h_n|)d\mu=\sum_{k=1}^{n}\int_{A_k}\Phi(|h_n|)d\mu=\sum_{k=1}^{n}\varepsilon \Phi(\frac{1}{n})=n\Phi(\frac{1}{n})\varepsilon.$$ Since $\displaystyle \lim\limits_{x\rightarrow 0^+}\frac{\Phi(x)}{x}=+\infty,$ we have $\displaystyle\lim\limits_{n\rightarrow +\infty}\varepsilon n \Phi(\frac{1}{n})=\displaystyle\varepsilon \lim\limits_{n\rightarrow +\infty}\frac{\Phi(\frac{1}{n})}{\frac{1}{n}}=+\infty;$ thus $L_\Phi(X)$ is not locally convex. \end{proof} \begin{lem}\label{lemma1} If $(X,\mathcal{M},\mu)$ is a non-atomic measure space, then for any non-negative $F\in \mathcal{L}^1(\mu,\mathbb{R}),$ $Fd\mu$ is a finite non-atomic measure. \end{lem} \begin{lem}\label{lemma2} If $(X,\mathcal{M},\nu)$ is any finite non-atomic measure space, then for all $t$ in $[0,\nu(X)]$ there is some $A\in \mathcal{M}$ such that $\nu(A)=t.$ \end{lem} \begin{thm} If $(X,\mathcal{M},\mu)$ is a non-atomic measure space, then $L_\Phi(X)^*=0.$ \end{thm} \begin{proof} We argue by contradiction.
Assume there is $\varphi \in L_\Phi(X)^*$ with $\varphi \neq 0.$ Then $\varphi$ has image $\mathbb{R}$ (a non zero linear map to a one-dimensional space is surjective ),so there is some $f\in L_\Phi(X) $ such that $|\varphi(f)|\geq 1.$\\ When $(X,\mathcal{M},\mu)$ is non-atomic and $\Phi(f)\in L^1(X),$ Lemma \ref{lemma1} tells us $\Phi(|f|)d\mu$ is a finite non-atomic measure on $X,$ and Lemma \ref{lemma2} then tells us there is an $A\in \mathcal{M}$ such that $\displaystyle \int_{A}\Phi(|f|)d\mu=\frac{1}{3}\int_X\Phi(|f|)d\mu.$ Then set $g_1=f \chi_A$ and $g_2=f \chi_{X-A}$ so $f=g_1+g_2$ and $\Phi(|f|)=\Phi(|g_1|)+\Phi(|g_2|).$ So \begin{equation*} \displaystyle \int_{X}\Phi(|g_1|)d\mu= \int_{A}\Phi(|f|)d\mu=\frac{1}{3} \int_{X}\Phi(|f|)d\mu. \end{equation*} Hence \begin{equation*} \displaystyle \int_{X}\Phi(|g_2|)d\mu= \frac{1}{3} \int_{X}\Phi(|f|)d\mu. \end{equation*} Since $\displaystyle|\varphi(|f|)|\geq 1, \ |\varphi(|g_i|)|\geq \displaystyle\frac{1}{2} $ for some $i.$ Let $f_1=2g_i,$ so $|\varphi(f_1)|\geq 1$ and \begin{eqnarray*} \displaystyle \int_{X}\Phi(|f_1|)d\mu=\int_{X}\Phi(|2g_1|)d\mu &\leq&2\int_{X}\Phi(|g_1|)d\mu\\ &\leq&\frac{2}{3}\int_{X}\Phi(|f|)d\mu. \end{eqnarray*} Iterate this to get a sequence $\{f_n\}$ in $L_\Phi (X)$ such that $|\varphi(f_n)|\geq 1$ and $d(f_n,0)=\displaystyle \int_X\Phi(|f_n|)d\mu\leq \displaystyle \big(\frac{2}{3}\big)^n\int_X\Phi(|f|)d\mu .$ Then $\lim\limits_{n\rightarrow +\infty}d(f_n,0)=0,$ a contradiction of continuity of $\varphi.$ \end{proof} \begin{thm} If the measure $\mu$ contains an atom with finite measure, then $L_\Phi^*(X)\neq 0.$ \end{thm} \begin{proof} Let $B$ be an atom with finite measure. Any measurable function $f:X\rightarrow \mathbb{R}$ is constant almost everywhere on $B.$ Call the almost everywhere common value $\varphi(f).$ The reader can check $\varphi$ is a non-zero continuous linear functional on $L_\Phi(X).$ \end{proof} \begin{rem} If $L^*_\Phi\neq 0,$ how to express the elements of $L^*_\Phi \backslash \{0\}?$\\ If $A\in L^*_\Phi$ and any $f\in L_\Phi.$ By the density of $L^1,$ there exists sequence $(f_n)$ in $L^1$ such that $d_\Phi(f,f_n)\rightarrow 0,$ as $n\rightarrow +\infty.$ We have, $$A(f)=A(\lim\limits_{n\rightarrow+\infty}f_n)=\lim\limits_{n\rightarrow+\infty}A(f_n),$$ by Lemma 2 in \cite{D}, we get $A(f)=\displaystyle\lim\limits_{n\rightarrow+\infty}\int_Xf_nud\mu,$ where $u$ is a bounded function in $X.$ \\\\ \end{rem} \hspace{-0.3cm}\textbf{\small{Author contributions}} \small{The study was carried out in collaboration of all authors. All authors read and approved the final manuscript.}\\ \hspace{-0.3cm}\textbf{\small{Funding}} \small{Not Applicable.}\\ \hspace{-0.3cm}\textbf{\large{Compliance with ethical standards}}\\ \hspace{-0.3cm}{\textbf{\small{Conflicts of interest}} \small{The authors declare that they have no conflict of interest.}\\ \hspace{-0.3cm}{\textbf{\small{Ethical approval}}\small{ This article does not contain any studies with human participants or animals performed by any of the authors.} \bibliographystyle{plain} \begin{thebibliography}{00} \bibitem{A}R. Adams, {\it Sobolev Spaces.} Academic Press, New York (1975). \bibitem{apps} D. Arnold and K. Yokoyama , {\it The Leslie Matrix}, Math 45-linear Algebra, David-Arnold @ Eurka. redwoods.cc.ca.us, pp. 1-11. \bibitem{D} M. M. Day, { \it The spaces $\mathcal {L}_p$ with $0<p<1$}, Bull. Amer. Math. Soc. 46 (1940), 816-823. \bibitem{Arabi} M. El-Arabi, R. Elarabi and M. 
Rhoudaf, { \it Generalization of $\mathcal {L}_p$ spaces for $0<p<1$}, submitted. \bibitem{Gobardhan}R. Gobardhan ,B. Tarapada, { \it Bounded linear operators in quasi-normed linear space,} J. of the Egy. Math. Soc.(2014) http://dx.doi.org/10.1016/j.joems.2014.06.003. \bibitem{H1} P. R. Halmos, {\it On the Set of Values of a Finite Measure}, Bull. Amer. Math. Soc. 53 (1947), 138-141. \bibitem{H2} P. R. Halmos, {\it The Range of a Vector Measure}, Bull. Amer. Math. Soc. 54 (1948), 416-421. \bibitem{H3} P. R. Halmos, {\it Measure Theory}, Van Nostrand, New York, 1950. \bibitem{K} K. Conrad, {\it $L^p$-Spaces for $p\in]0,1]$} Lecture notes, (2012) http://www.math.uconn.edu/~kconrad/blurbs/analysis/lpspace.pdf. \bibitem{Kra} M. A. Krasnosel'skii and Ya. B. Rutickii, {\em Convex functions and Orlicz spaces}. Noordhoff, Groningen, 1969. \bibitem{Maligranda} L. Maligranda, {\it Type, cotype and convexity properties of quasi-Banach spaces,} in: proc. of the International symposium on Banach and function spaces Kitakyushu, Japan, 2003, pp. 83-120. \bibitem{M} J. Musielak, {\it Orlicz spaces and modular spaces,} Lecture Notes in Mathematics, Volume 1034 (Springer, 1983). \bibitem{R} M.M. Rao, Z.D. Ren, {\it Theory of Orlicz Spaces,} Marcel Dekker, New York, 1991. \bibitem{apps1} A. H. Siddiqi , {\it Functional Analysis With Applications,} Tata MacGraw-Hill Publishing Company, Ltd, New Delhi, India, 1986. \bibitem{apps2} Z. M. Sykes , {\it On Discrete Stable Population Theory}, Biometrics, 25, PP. 285-293, 1969 \end{thebibliography} \end{document}
2205.10552v2
http://arxiv.org/abs/2205.10552v2
Smoothing Codes and Lattices: Systematic Study and New Bounds
\documentclass[a4paper,reqno]{amsart} \usepackage{a4wide} \usepackage[foot]{amsaddr} \usepackage{etoolbox} \pagestyle{plain} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{amsmath} \usepackage{amssymb,empheq,mathtools,dsfont} \usepackage{physics} \usepackage{stmaryrd} \usepackage[mathscr]{eucal} \usepackage{color} \usepackage{nicefrac} \usepackage{thm-restate} \usepackage{calrsfs} \usepackage{hyperref} \DeclareMathAlphabet{\pazocal}{OMS}{zplm}{m}{n} \let\ds\mathds \usepackage{scalerel,stackengine} \stackMath \newcommand\reallywidehat[1]{\savestack{\tmpbox}{\stretchto{\scaleto{\scalerel*[\widthof{\ensuremath{#1}}]{\kern.1pt\mathchar"0362\kern.1pt}{\rule{0ex}{\textheight}}}{\textheight}}{2.4ex}}\stackon[-6.9pt]{#1}{\tmpbox}} \newcommand{\QFT}[1]{\reallywidehat{\ket{#1}}} \newcommand{\Ac}{{\mathcal A}} \newcommand{\Bc}{{\mathcal B}} \newcommand{\Cpc}{{\pazocal C}} \newcommand{\Dc}{{\pazocal D}} \newcommand{\Ec}{{\pazocal E}} \newcommand{\Gc}{{\mathcal G}} \newcommand{\Hc}{{\mathcal H}} \newcommand{\Pc}{{\mathcal P}} \newcommand{\Sc}{{\mathcal S}} \newcommand{\Tc}{{\mathcal T}} \newcommand{\Uc}{{\mathcal U}} \newcommand{\Vc}{{\mathcal V}} \newcommand{\Xc}{{\mathcal X}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Zq}{\Z_q} \newcommand{\Fq}{\F_q} \newcommand{\ic}{{\text{\bf i}}} \newcommand{\interv}{[\![-\frac{q-1}{2},\frac{q-1}{2}]\!]} \newcommand{\interval}[2]{\left[\!\!\left[#1,#2\right]\!\!\right]} \newcommand{\CC}{\ensuremath{\mathscr{C}}} \newcommand{\DD}{\ensuremath{\mathscr{D}}} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\av}{\vec{a}} \newcommand{\bv}{\vec{b}} \newcommand{\cv}{\vec{c}} \renewcommand{\dv}{\vec{d}} \renewcommand{\ev}{\vec{e}} \renewcommand{\pv}{\vec{p}} \newcommand{\qv}{\vec{q}} \newcommand{\sv}{\vec{s}} \newcommand{\tv}{\vec{t}} \newcommand{\uv}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\wv}{\vec{w}} \newcommand{\xv}{\vec{x}} \newcommand{\yv}{\vec{y}} \newcommand{\zv}{\vec{z}} \newcommand{\zero}[1]{\vec{0}_{#1}} \newcommand{\un}{\vec{1}} \newcommand{\und}{\mathds{1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\Am}{\mat{A}} \newcommand{\Bm}{\mat{B}} \newcommand{\Cm}{\mat{C}} \newcommand{\Dm}{\mat{D}} \newcommand{\Em}{\mat{E}} \newcommand{\Fm}{\mat{F}} \newcommand{\Gm}{\mat{G}} \newcommand{\Hm}{\mat{H}} \newcommand{\Mm}{\mat{M}} \newcommand{\Nm}{\mat{N}} \newcommand{\Pm}{\mat{P}} \newcommand{\Rm}{\mat{R}} \newcommand{\Sm}{\mat{S}} \newcommand{\Tm}{\mat{T}} \newcommand{\Um}{\mat{U}} \newcommand{\Xm}{\mat{X}} \newcommand{\Ym}{\mat{Y}} \newcommand{\Zm}{\mat{Z}} \newcommand{\Had}[2]{\Hm_{#1,#2}} \newcommand{\Id}{\mat{Id}} \newcommand{\esp}{\mathbb{E}} \newcommand{\Prob}{\mathbb{P}} \newcommand{\prob}{\pi_{\ev}} \newcommand{\probdual}{\hat{\pi}_{\cv^\perp}} \newcommand{\subV}{\substack{V \le \Fq^n \\ \dim V = t}} \newcommand{\sumV}{\sum_{\subV}} \newcommand{\sumW}{\sum_{\substack{W \le \Fq^n \\ \dim W = t \\ W \neq V}}} \newcommand{\subyt}{\substack{\vec{y} \in \Fq^m \\ \wt{y} = t}} \newcommand{\sumyt}{\sum_{\subyt}} \newcommand{\sumFq}[1]{\sum_{#1 \in \Fq^\ast}} \newcommand{\sumFqn}[1]{\sum_{\vec{#1} \in \Fq^n}} \newcommand{\sumcode}{\sum_{\cv \in \CC}} \newcommand{\sumdual}{\sum_{\cv^\perp \in \CC^\perp}} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newcommand{\cov}[1]{{\left| #1\right|}} \newcommand{\gauss}[2]{\genfrac{[}{]}{0pt}{}{#1}{#2}} \newcommand{\innerprod}[2]{\vec{#1} \cdot 
\vec{#2}} \newcommand\wt[1]{\abs{\vec{#1}}} \newcommand{\dr}{d_{\textup{rank}}} \newcommand\wtr[1]{\abs{\vec{#1}}_{\textup{rank}}} \newcommand\wth[1]{\abs{\vec{#1}}_{\textup{Ham}}} \newcommand\wtl[1]{\abs{\vec{#1}}_{\textup{Lee}}} \newcommand\dl{d_{\textup{Lee}}} \newcommand{\weasy}{w_{\textup{easy}}} \newcommand{\omeasy}{\omega_{\textup{easy}}} \newcommand{\dgv}{d_{\textup{GV}}} \newcommand{\ogv}{\omega_{\textup{GV}}} \newcommand{\drgv}{\tau_{\textup{GV}}} \newcommand{\dmin}{d_{\textup{min}}} \newcommand{\dtr}{D_{\textup{tr}}} \newcommand{\dstat}{D_{\textup{stat}}} \newcommand{\psiA}{\ket{\psi_{\Ac}}} \newcommand{\psiAQFT}{\ket{\psi_{\Ac}^{\textup{QFT}}}} \newcommand{\psiideal}{\ket{\psi_{\text{ideal}}}} \newcommand{\psiidealQFT}{\ket{\psi_{\text{ideal}}^{\textup{QFT}}}} \newcommand{\psiAs}{\psi_{\Ac}} \newcommand{\psiideals}{\psi_{\text{ideal}}} \newcommand{\Trac}[2]{\textup{Tr}_{#1 / #2}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} \newcommand{\OOtsp}[1]{O(#1)} \newcommand{\Thtsp}[1]{\Theta(#1)} \newcommand{\OO}[1]{O\left( #1 \right)} \newcommand{\Th}[1]{\Theta\left( #1 \right)} \newcommand{\Tht}[1]{\tilde{\Theta}\left( #1 \right)} \newcommand{\OOt}[1]{\tilde{O}\left( #1 \right)} \newcommand{\Om}[1]{\Omega \left( #1 \right)} \newcommand{\Omt}[1]{\tilde{\Omega} \left( #1 \right)} \makeatletter \newcommand*{\transp}{{\mathpalette\@transpose{}}} \newcommand*{\@transpose}[2]{\raisebox{\depth}{$\m@th#1\intercal$}} \makeatother \newcommand{\transpose}[1]{{#1}^{\transp}} \newcommand{\tran}[1]{\transpose{#1}} \newcommand*{\eqdef}{\stackrel{\text{def}}{=}} \newcommand{\decP}{\textup{$\mathsf{DP}$}} \newcommand{\DP}{\decP} \newcommand{\scodeP}{\textup{$\mathsf{SCP}$}} \newcommand{\SCP}{\scodeP} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{fact}[theorem]{Fact} \newtheorem{hypothesis}[theorem]{Hypothesis} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{rthumb}[theorem]{Rule of thumb} \newcommand*\bang{!} \renewcommand{\thefootnote}{\texttt{(\arabic{footnote})}} \renewcommand{\baselinestretch}{1.1} \sloppy \setlength{\parindent}{0pt} \setlength{\parskip}{3pt} \renewcommand*\thesection{\arabic{section}} \newcommand{\scp}[2]{\langle #1,#2 \rangle} \newcommand{\scpc}[2]{\langle #1,#2 \rangle_{\textup{c}}} \newcommand{\scpr}[2]{\langle #1,#2 \rangle_{\textup{rad}}} \newcommand{\vol}[1]{V_{n}\left(#1\right)} \newcommand\dual[1]{#1^{*}} \newcommand\LP[1]{#1_{\textup{LP1}}} \newcommand\Vol[1]{\textup{Vol}(#1)} \usepackage{todonotes} \newcommand{\CKL}{C_{\textup{KL}}} \newcommand{\fmod}[2]{#1^{ \textup{ mod } #2}} \newcommand{\Nb}[2]{N_{#1}( #2 )} \newcommand{\Neq}[2]{\Nb{#1}{#2}} \newcommand{\Neqs}[1]{N_{#1}} \newcommand{\Nlt}[2]{\Nb{\leq #1}{#2}} \newcommand{\crand}[2]{\Cpc_{#1,#2}} \newcommand{\uniff}{u_{\textup{full}}} \newcommand{\unifq}{u} \newcommand{\unifc}{u_{\CC}} \newcommand{\funif}[1]{f_{\textup{unif},#1}} \newcommand{\unifs}[1]{u_{#1}} \newcommand{\fber}[1]{f_{\textup{ber},#1}} \newcommand{\fberTrunc}[1]{f_{\textup{truncBer},#1}} \newcommand{\esphere}{\mathcal S} \newcommand{\eball}{\mathcal B} \newcommand{\gunif}[1]{u_{#1 \eball}} \newcommand{\DTrunc}[1]{D_{\textup{trunc},#1}} \newcommand{\gunifTrunc}[1]{g_{\textup{runcUnif},#1}} \newcommand{\toto}[1]{\textcolor{purple}{\bf [Toto: #1]}} 
\newcommand{\lolo}[1]{\textcolor{red}{\bf [Leo: #1]}} \newcommand{\jp}[1]{\textcolor{blue}{\bf [jp: #1]}} \author{Thomas Debris-Alazard$^{1}$} \email{[email protected]} \author{L\'eo Ducas$^{2,3}$} \email{[email protected]} \author{Nicolas Resch$^{4}$} \email{[email protected]} \author{Jean-Pierre Tillich$^{1}$} \email{[email protected]} \address{$^{1}$ Inria} \address{$^{2}$ CWI, Amsterdam, The Netherlands} \address{$^{3}$ Mathematical Institute, Leiden University} \address{$^{4}$ Informatics' Institute, University of Amsterdam} \thanks{The work of TDA and JPT was funded by the French Agence Nationale de la Recherche through ANR JCJC COLA (ANR-21-CE39-0011) for TDA and ANR CBCRYPT (ANR-17-CE39-0007) for JPT. Part of this work was done while NR was affiliated with the CWI and partially supported by ERC H2020 grant No.74079 \mbox{(ALGSTRONGCRYPTO)}. LD is supported by an ERC starting Grant 947821 (ARTICULATE)} \title{Smoothing Codes and Lattices: \\ Systematic Study and New Bounds} \begin{document} \maketitle \begin{abstract} In this article we revisit smoothing bounds in parallel between lattices {\em and} codes. Initially introduced by Micciancio and Regev, these bounds were instantiated with Gaussian distributions and were crucial for arguing the security of many lattice-based cryptosystems. Unencumbered by direct application concerns, we provide a systematic study of how these bounds are obtained for both lattices {\em and} codes, transferring techniques between both areas. We also consider multiple choices of spherically symmetric noise distribution. We found that the best strategy for a worst-case bound combines Parseval's Identity, the Cauchy-Schwarz inequality, and the second linear programming bound, and this holds for both codes and lattices and all noise distributions at hand. For an average-case analysis, the linear programming bound can be replaced by a tight average count. This alone gives optimal results for spherically uniform noise over random codes and random lattices. This also improves previous Gaussian smoothing bound for worst-case lattices, but surprisingly this provides even better results with uniform ball noise than for Gaussian (or Bernoulli noise for codes). This counter-intuitive situation can be resolved by adequate decomposition and truncation of Gaussian and Bernoulli distributions into a superposition of uniform noise, giving further improvement for those cases, and putting them on par with the uniform cases. \end{abstract} \section{Introduction} \subsection{Smoothing bounds.} In either a code or a lattice, smoothing refers to fact that, as an error distribution grows wider and wider, the associated syndrome distribution tends towards a uniform distribution. In other words, the error distribution, reduced modulo the code or the lattice, becomes essentially flat. This phenomenon is pivotal in arguing security of cryptosystems~\cite{MR07,GPV08,DST19}. In information theoretic literature, it is also sometimes referred to as flatness~\cite{LLBS14}. Informally, by a ``smoothing bound'' we are referring to a result which lower bounds the amount of noise which needs to be added so that the smoothed distribution ``looks'' flat. To be more concrete, by a ``flat distribution'', we are referring to a uniform distribution over the ambient space modulo the group of interest. For a (linear) code $\CC \subseteq \F_2^n$, this quotient space is $\F_2^n/\CC$; for a lattice $\Lambda \subseteq \R^n$, it is $\R^n/\Lambda$. 
We then consider some ``noise'' vector $\vec e$ distributed over the ambient space $\F_2^n$ ({\em resp.} $\R^n$), and attempt to prove that $\vec e \mod \CC$ ({\em resp.} $\vec e \mod \Lambda$) is ``close'' to the uniform distribution over the quotient space $\F_2^n/\CC$ ({\em resp.} $\R^n/\Lambda$). To quantify ``closeness'' between distributions, we will use the standard choice of \emph{statistical distance}. An important question to be addressed is the choice of distribution for the noise vector $\vec e$. In lattice-based cryptography (where such smoothing bounds originated~\cite{MR07}), the literature ubiquitously uses Gaussian distributions for errors, and smoothness is guaranteed for an error growing as the inverse of the minimum distance of the dual lattice. The original chain~\cite{MR07} of argument goes as follows: \begin{itemize} \item Apply the Poisson summation formula (PSF); \item Bound variations via the triangle inequality (TI) over all non-zero dual lattice points; \item Bound the absolute sum above via the Banaszczyk tail bound~\cite{B93} for discrete Gaussian (BT). \end{itemize} An intermediate quantity called the smoothing parameter introduced by~\cite{MR07} before the last step is also often used in the lattice-based cryptographic literature. Each bounding step is potentially non-tight, and indeed more recent works have replaced the last step by the following~\cite{ADRS15}: \begin{itemize} \item Bound the number of lattice points in balls of a given radius via the Linear Programming bound~\cite{L79} (LP) and ``sum over all radii'' (with care). \end{itemize} With this LP strategy, it is in principle possible to also compute a smoothing bound for spherically symmetric distributions of errors other than the Gaussian; however, we are not aware of prior work doing this explicitly. A very natural choice would be uniform distributions over Euclidean balls. For codes, there are also two natural distributions of errors: Bernoulli noise, {\em i.e.} flip each bit independently with some probability $p$ ({\em a.k.a.} the binary symmetric $\mathrm{BSC}_p$ channel), and a uniform noise over a Hamming sphere of a fixed radius. The latter is typically preferred for the design of concrete and practical cryptosystems~\cite{M78,A11,MTSB13,DST19}, while the former appears more convenient in theoretical works \footnote{A third choice of distribution, described as a discrete-time random walk, also made an appearance for a complexity theoretic result~\cite{BLVW19}. The expert reader may note that the Bernoulli distribution can also be treated as a continuous-time random walk, and both can be analysed via the heat kernel formalism ~\cite[Chap. 10]{C97}.}. Cryptographic interest for code smoothing has recently arisen~\cite{BLVW19,YZ20}, but results are so far limited to codes with extreme parameters and specific ``balancedness'' constraints. However we note that the question is not entirely new in the coding literature (see for instance ~\cite{K07}). In particular, an understanding of the smoothing properties of Bernoulli noise is intimately connected to the \emph{undetected error probability} of a code transmitted through the $\mathrm{BSC}_p$. In this light, it is interesting to revisit and systematize our understanding of smoothing bounds, unencumbered by direct application concerns. We find it enlightening to do this exploration in parallel between codes and lattices, transferring techniques back and forth between both areas whenever possible. 
Furthermore, we keep our arguments agnostic to the specific choice of error distribution, allowing us to apply them with different error distributions and compare the results. To compare different (symmetric) distributions, we advocate parametrizing them by the expected weight/norm of a vector. That is, we quantify the magnitude of a noise vector $\vec e$ by $t = \mathbb E(|\vec e|)$ (where $|\cdot|$ denotes either the Hamming weight or the Euclidean norm of the vector). Our smoothing bounds will depend on this parameter, and we consider a smoothing bound to be more effective if for the smoothed distribution to be close to uniform we require a smaller lower-bound on $t$. \subsection{Contributions.} In this work, we collect the techniques that have been used for smoothing, both in the code and lattice contexts. We view individual steps as modular components of arguments, and consider all permissible combinations of steps, thereby determining the most effective arguments. In the following, we outline our systematization efforts, describing the various proof frameworks that we tried before settling on the most effective argument. \paragraph{\bf Code smoothing bounds.} Given the relative dearth of results concerning code smoothing, it seems natural to start by adapting the first argument (PSF+TI+BT) to codes following the proof techniques of~\cite{B93,MR07}. And indeed, the whole strategy translates flawlessly, with only one caveat: it leads to a very poor result, barely better than the trivial bound. Namely, smoothness is established only for Bernoulli errors with parameter very close to $p=1/2$. The adaptation of Banaszczyk tail bound~\cite{B93} to codes (together with replacing the Gaussian by a Bernoulli distribution) is rather na\"ive, and it is therefore not very surprising that it leads to a disappointing result. Instead, we can also follow the improved strategy for lattices from~\cite{ADRS15}, and resort to linear programming bounds for codes~\cite{B65,MRRW77,ABL01}. Briefly, by an LP bound we are referring to a result that bounds the number of codewords ({\em resp.} lattice vectors) of a certain weight ({\em resp.} norm) in terms of the dual distance ({\em resp.} shortest dual vector) of the code ({\em resp.} lattice). In both cases, the results are obtained by considering a certain LP relaxation of the combinatorial quantities one wishes to bound, hence the name. Even more, the bounds for codes and lattices are obtained via essentially the same arguments \cite{MRRW77,DL98,CE03}. We therefore find it natural to apply LP bounds in our effort to develop proof techniques which apply to both code- and lattice-smoothing. The strategy (PSF+TI+LP) turns out to give a significantly better result, but it nevertheless still appears to be far from optimal. We believe that the application of the triangle inequality in the second step to bound the sum of Fourier coefficients given by the Poisson summation formula leads to the unsatisfactory bound. Indeed, a common heuristic when dealing with sums of Fourier coefficients is that, unless there is a good reason otherwise, the sum should have magnitude roughly the square-root of the order of the group (as is the case for random signs): the triangle inequality is far too crude to notice this. Instead, we turn to another common upper-bound on a sum, namely, the Cauchy-Schwarz (CS) inequality. It is natural to subsequently apply Parseval's Identity (PI). It turns out that this strategy yields very promising results, upon which we now elucidate. 
The upper-bound is described in terms of the \emph{weight distribution} of a code, {\em i.e.} the number of codewords of weight $w$ for each $w=1,\dots,n$. Unfortunately, it is quite difficult to understand the weight distribution of arbitrary codes, and the bounds that we do have are quite technical. \paragraph{\bf Random codes.} For this reason, we first apply our proof template to \emph{random codes}, as it is quite simple to compute the (expected) weight distribution of a random code. Quite satisfyingly, the simple two steps arguments (PI+CS) already yields \emph{optimal} results for this case, but when the error is sampled uniformly at random from a sphere! That is, we can show that the support size of the error distribution matches the obvious lower bound that applies to \emph{any} distribution that successfully smooths a code: namely, for a code $\CC$ the support size must be at least $\sharp(\F_2^n/\CC)$. Using coding-theoretic terminology, the weight of the error vector that we need to smooth is given by the ubiquitous Gilbert-Varshamov bound \[ \omega_{\textup{GV}}(R) = h^{-1}(1-R) \] which characterizes the trade-off between a random code's rate $R$ and its minimum distance. Here, $h^{-1}$ is the inverse of the binary entropy function. Moreover, as the argument is versatile enough to apply to essentially all spherical error distributions, we also tried applying it to the Bernoulli distribution, and the random walk distribution of~\cite{BLVW19}. Comparing them, we were rather surprised that our argument provided better bounds for the uniform distribution over a Hamming sphere than the other two distributions for the same average Hamming weight. However, while the (PI+CS) sequence of arguments is more effective when the noise is sampled uniformly on the sphere, we can exploit the fact that the Hamming weight of a Bernoulli-distributed vector is tightly concentrated to recover the same smoothing bound for this distribution. In more detail, we use a ``truncated'' argument. First, we decompose the Bernoulli distribution into a convex combination of uniform sphere distributions. But, by Chernoff's bound, a Bernoulli distribution is concentrated on vectors whose weight lies in a width $\varepsilon n$ interval around its expected weight. Therefore, outside of this interval, the contribution of the Bernoulli on the statistical distance is negligible. Then apply the (PI+CS) sequence of arguments to each constituent distribution close to the expected weight. In this way, we are able to demonstrate that Bernoulli distributions also optimally smooth random codes. \paragraph{\bf Arbitrary codes.} Next, we turn our attention to smoothing worst-case codes. Motivated by our success in smoothing random codes, we again follow the (PI+CS) sequence of arguments and combine this with LP bounds to derive smoothing bounds when the dual distance of the code is sufficiently large. Again, the sequence of arguments is most effective when the error is distributed uniformly over the sphere, with one caveat: we are also required to assume that the dual code is \emph{balanced} in the sense that it also does not contain any vectors of too large weight. While this assumption has appeared in other works~\cite{BLVW19,YZ20}, we find it somewhat unsatisfactory. Fortunately, this condition is not required if the error is sampled according to the Bernoulli distribution. 
But then we run into the same issue that we had earlier with random codes: the (PI+CS) argument, followed by LP bounds, natively yields a lesser result when instantiated with Bernoulli noise. Fortunately, we have already seen how to resolve this issue: we pass to the truncated Bernoulli distribution and decompose it into uniform sphere distributions. This yields a best-of-both-worlds result: we obtain the strongest smoothing bound we can in terms of the noise magnitude, while requiring the weakest assumption on the code. \paragraph{\bf And back to lattices.} Having now uncovered this better strategy for codes, we can return to lattices and apply our new proof template. Indeed, as we outline in Section~\ref{subsec:fourier-analysis}, the (PI+CS) sequence of arguments can be applied in a very broad context; see, in particular, Corollary~\ref{coro:FB}. \paragraph{\bf Random lattices.} First, just as we set our expectations for code-smoothing by first studying the random case, we analogously start here by considering random lattices. However, defining a random lattice is a non-trivial task. We actually consider two distributions. The first, which is based on the deep Minkowski-Hlwaka-Siegel (MHS) Theorem, we only abstractly describe. Thanks to the MHS Theorem, we can very easily compute the (expected value) of our upper-bound. For the MHS distribution of lattices, we consider two natural error distributions: the Gaussian distribution (which is used ubiquitously in the literature), as well as the uniform distribution over the Euclidean ball. And again, perhaps surprisingly (although less so now thanks to our experience with the code case), we obtain a better result with the uniform distribution over the Euclidean ball. And moreover, the Euclidean ball result is \emph{optimal} in the same sense that we had for codes: the support volume of the error distribution is exactly equal to the covolume of the lattice \footnote{That is, for a lattice $\Lambda$, the volume of the torus $\R^n/\Lambda$. We will denote this quantity by $\cov{\Lambda}$ from now on. }. We view the value $w$ such that the volume of the $n$-ball of radius $w$ is equal to the covolume of a lattice (which is half the quantity that appears in Minkowski bound) as being the lattice-theoretic analogue of the Gilbert-Varshamov quantity: \[ w_{\textup{M}/2} \eqdef \frac{\sqrt[n]{\cov{\Lambda} \; \Gamma(n/2+1)}}{\sqrt{\pi}} \ . \] However, as Gaussian vectors satisfy many pleasing properties that are often exploited in lattice-theoretic literature, we would like to obtain the same smoothing bound for this error distribution. Fortunately, our experience with codes also tells us how to recover the result for Gaussian noise from the Euclidean ball noise smoothing bound: we decompose the Gaussian distribution appropriately into a convex combination of Euclidean ball distributions. Together with a basic tail bound, we recover the same smoothing bound for Gaussian noise that we had for the uniform ball noise. We also study random $q$-ary lattices, which are more concretely defined: following the traditional lattice-theoretic terminology, they are obtained by applying Construction A to a random code. This does lead to a slight increase in the technicality of the argument -- in particular, we need to apply a certain ``summing over annuli'' trick -- but the computations are still relatively elementary. 
Again, we find that the argument naturally works better when the errors are distributed uniformly over a ball, but we can still transfer the bound to the Gaussian noise. Interestingly, the same optimal bound has been recovered in a concurrent work \cite[Theorem 1.]{LLB22} for Gaussian distributions. Their arguments are quite unlike ours: \cite{LLB22} uses the Kullback–Leibler divergence in combination with other information-theoretic arguments. However, contrary to our bounds obtained via the (PI + CS) sequence of arguments, \cite[Theorem 1]{LLB22} only holds for random $q$-ary lattices. \paragraph{\bf Arbitrary lattices.} Next, we address the challenge of smoothing arbitrary lattices. And again, we follow the (PI+CS) sequence of arguments, and subsequently use the Kabatiansky and Levenshtein bound~\cite{KL78} to obtain a smoothing bound in terms of the minimum distance of the dual lattice. The Kabatiansky and Levenshtein bound is the lattice-analogue of the second LP bound from coding theory. We can directly apply the arguments with both of our error distributions of interest, and again, the uniform ball distribution wins. But the decomposition and tail-bound trick again applies to yield the same result for the Gaussian distribution that we had for the uniform ball distribution. \paragraph{\bf Comparison.} We summarize how our work improves on the state of the art in Table~\ref{tab:lattice_smoothing_bound} for lattices, and in Table~\ref{tab:code_smoothing_bound} and Figure~\ref{figure:figureCompSmoothingRandCode} for codes. For this discussion, we let $U(\R^n/\Lambda)$ ({\em resp.} $U(\F_2^n/\CC$)) denote the uniform distribution over $\R^n/\Lambda$ ({\em resp.} $\F_2^n/\CC)$, and let $\Delta$ denote the statistical distance. In the case of lattices (Table~\ref{tab:lattice_smoothing_bound}), we fix the smoothing bound target to exponentially small, that is we state the minimal value of $F>0$ such that the bound over the statistical distance implies $\Delta(\vec{e} \bmod \Lambda, U(\mathbb R^n / \Lambda)) \leq 2^{-\Omega(n)}$ when the error follows the prescribed distribution and of an average Euclidean length of $\mathbb E(|\vec{e}|_{2}) = F \; n / \lambda_1^*(\Lambda)$.\footnote{In fact, the values in this table guarantee exponentially small statistical distance from the uniform distribution.} \begin{table} \begin{tabular}{l|l|c|l} Distribution & Proof strategy & smoothing factor $F$ & General statement \\ \hline Gaussian & PSF+TI+BT & $1/(2\pi) \approx 0.15915$ & Lemma 3.2 ~\cite{MR07} \\ Gaussian & PSF+TI+LP & $\CKL/(2\pi \sqrt{e}) \approx 0.12746$ & Lemma 6.1 ~\cite{ADRS15} \\ Gaussian & PI+CS+LP & $\CKL/(2\pi \sqrt{2e}) \approx 0.09013$ & Theorem~\ref{theo:bSDEgauss} (this work) \\ \hline Unif. Euclidean ball & PI+CS+LP & $\CKL/(2\pi e) \approx 0.07731$ & Theorem~\ref{theo:bSDEuc} (this work) \\ \hline Gaussian & via Unif. + Trunc. & $\CKL/(2\pi e) \approx 0.07731$ & Theorem~\ref{theo:bSDEgauss_better} (this work) \end{tabular} \caption{Comparison of smoothing bounds for various proof strategies and error distributions. The smoothing constant $F$ is the smallest constant $C$ such that the bounds proves exponential smoothness when the average norm (over $n$, the length of the ambient space) of an error is at least $C$ times the inverse of the minimal distance of the dual lattice. 
Here $\CKL \approx 2^{0.401}$ denotes the constant involved in the Kabatiansky and Levenshtein bound \cite{KL78}.\label{tab:lattice_smoothing_bound}} \end{table} In the case of codes we also fix the smoothing bound target to negligible,\footnote{Again, it is the same if we insist that the statistical distance to uniform be exponentially small.} but we compare two cases: smoothing bounds for random codes (on average) and for a fixed code (worst case). In Figure \ref{figure:figureCompSmoothingRandCode} we compare the minimal value $F>0$ such that $\mathbb{E}_{\CC}\left(\Delta(\vec{e} \bmod \CC, U(\F_{2}^n / \CC))\right) \leq 2^{-\Omega(n)}$ when the error $\vec{e}$ follows the prescribed distribution, the expectation being taken over codes of rate $R$. In Table \ref{tab:code_smoothing_bound} we make the same comparison, but to reach $\Delta(\vec{e} \bmod \CC, U(\F_{2}^n / \CC)) \leq 2^{-\Omega(n)}$ for a fixed code $\CC$ such that the minimum distance of its dual $\dual{\CC}$ is known. \begin{center} \begin{figure} \includegraphics[height=6cm]{figureCompSmoothingRandCode.pdf} \caption{Comparison of smoothing constants for {\em random} codes as a function of their rate $R$ for various error distributions. The smoothing constant is the smallest constant $C$ such that the bound proves exponential smoothness when the average Hamming weight of an error is at least $C n$.} \label{figure:figureCompSmoothingRandCode} \end{figure} \end{center} \begin{table} \begin{tabular}{l|c|c|l} Distribution & smoothing factor $F$ & Balanced-code & General statement \\ \hline Bernoulli & $\approx 0.24$ & NO & Eq.~\eqref{eq:BwithBer}, Prop.~\ref{propo:ABL}, \ref{propo:2LPB} \\ \hline Discrete Rand. Walk & $\approx 0.27$ & YES & Theorem~\ref{theo:smoothingBoundsUnifRW} \\ \hline Unif. Hamming sphere & $\approx 0.17$ & YES & Theorem~\ref{theo:smoothingBoundsUnifRW} \\ \hline Bernoulli + Trunc. & $\approx 0.17$ & NO & Theorem~\ref{theo:finalUBSD} \\ \end{tabular} \caption{Comparison of smoothing bounds for a code $\CC$ of length $n$ such that its dual $\dual{\CC}$ has minimum distance $0.11n$ (which is the typical case for a code of rate $1/2$) for various error distributions. The smoothing constant $F$ is the smallest constant $C$ such that the bound proves exponential smoothness when the average Hamming weight of an error is at least $Cn$. Furthermore, the balanced-code hypothesis means that we suppose there are no dual codewords $\dual{\vec{c}}\in\dual{\CC}$ of Hamming weight larger than $(1-0.11)n$.\label{tab:code_smoothing_bound}} \end{table} \section{Preliminaries: Notations and Fourier Analysis over Locally Compact Abelian Groups}\label{sec:prelim} \subsection{General Notation.} The notation $x \eqdef y$ means that $x$ is defined as being equal to $y$. Given a set $\mathcal{S}$, its indicator function will be denoted $1_{\mathcal{S}}$. For a finite set $\mathcal{S}$, we will denote by $\sharp\mathcal{S}$ its cardinality. Vectors will be written with bold letters (such as $\vec{x}$). Furthermore, we denote by $\llbracket a,b \rrbracket$ the set of integers $\{a,a+1,\dots,b\}$. The statistical distance between two discrete probability distributions $f$ and $g$ over the same space $\mathcal{S}$ is defined as: $$ \Delta(f,g) \eqdef \frac{1}{2} \sum_{x \in \mathcal{S}} |f(x) - g(x)|.
$$ Similarly, for two continuous probability density functions $f$ and $g$ over a same measure space $\mathcal{E}$, the statistical distance is defined as $$ \Delta(f,g) \eqdef \frac{1}{2} \int_{\mathcal{E}} |f - g|. $$ \subsection{Codes and Lattices} We give here some basic definitions and notation about linear codes and lattices. {\bf \noindent Linear codes.} In the whole paper, we will deal exclusively with binary linear codes, namely subspaces of $\F_2^n$ for some positive integer $n$. The space $\F_{2}^{n}$ will be embedded with the Hamming weight $|\cdot|$, namely $$ \forall \vec{x}\in\F_{2}^{n}, \quad |\vec{x}| \eqdef \sharp \left\{ i \in \llbracket 1,n \rrbracket \mbox{ : } x_{i} \neq 0 \right\}. $$ We will denote by $\mathcal{S}_{w}$ the sphere with center $\mathbf{0}$ and radius $w$; its size is given by $\binom{n}{w}$ and we have $\frac{1}{n}\log_2 \binom{n}{w} = h(w/n) + o(1)$ where $h$ denotes the binary-entropy, namely $h(x) \eqdef -x \log_2 (x) - (1-x) \log_{2}(1-x)$. An $\lbrack n ,k \rbrack$-code $\CC$ is defined as a dimension $k$ subspace of $\F_{2}^{n}$. The rate of $\CC$ is $\frac{k}{n}$. Its minimal distance is given by \begin{align*} \dmin(\CC) &\eqdef \min \left\{ |\cv - \cv'| \text{ : }\cv,\cv'\in \CC \text { and } \cv \neq \cv' \right\} \\ &= \min \left\{ |\vec{c}| \mbox{ : } \vec{c}\in \CC \mbox{ and } \vec{c} \neq \mathbf{0} \right\}. \end{align*} The number of codewords of $\CC$ of weight $t$ will be denoted by $\Neq{t}{\CC}$, namely $$ \Neq{t}{\CC} \eqdef \sharp \left\{ \vec{c} \in \CC \mbox{ and } |\vec{c}| = t \right\}. $$ The dual of a code $\CC$ is defined as $\dual{\CC} \eqdef \left\{ \dual{\vec{c}} \in \F_{2}^{n} \mbox{ : } \forall \vec{c}\in\CC, \mbox{ } \vec{c}\cdot\dual{\vec{c}} = 0 \right\}$ where $\cdot$ denotes the standard inner product on $\F_{2}^{n}$. {\bf \noindent Lattices.} We will consider lattices of $\mathbb{R}^{n}$ which is embedded with the Euclidean norm $|\cdot|_{2}$, namely $$ \forall \vec{x}\in\mathbb{R}^{n}, \quad |\vec{x}|_{2} \eqdef \sqrt{\sum_{i=1}^{n} x_{i}^{2}}. $$ We will denote by $\mathcal{B}_{w}$ the ball with center $\mathbf{0}$ and radius $w$; its volume is given by $$ \vol{w} \eqdef \frac{\pi^{n/2}w^{n}}{\Gamma(n/2+1)}. $$ An $n$-dimension lattice $\Lambda$ is defined as a discrete subgroup of $\mathbb{R}^{n}$. The covolume $|\Lambda| \eqdef \textup{vol}\left( \mathbb{R}^{n}/\Lambda\right)$ of $\Lambda$ is the volume of any fundamental parallelotope. The minimal distance of $\Lambda$ is given by $ \lambda_{1}(\Lambda) \eqdef \min \left\{ |\vec{x}|_{2} \mbox{ : } \vec{x}\in \Lambda \mbox{ and } \vec{x} \neq \mathbf{0} \right\}. $ The number of lattice points of $\Lambda$ of weight $\leq t$ will be denoted by $\Nb{\leq t}{\Lambda}$, namely $$ \Nb{\leq t}{\Lambda} \eqdef \sharp \left\{ \vec{x} \in \Lambda \mbox{ : } |\vec{x}|_{2} \leq t \right\}. $$ \subsection{Fourier Analysis} \label{subsec:fourier-analysis} We give here a brief introduction to Fourier analysis over arbitrary locally compact Abelian groups. Our general treatment will allow us to apply directly some basic results in a code and lattice context, obviating the need in each case to introduce essentially the same definitions and to provide the same proofs. Corollary \ref{coro:FB} at the end of this subsection is the starting point of our smoothing bounds: all of our results are obtained by using different facts to bound the right hand side of the inequality. 
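Before introducing the Fourier-analytic machinery, let us note that for very small parameters all the quantities at stake can be evaluated by brute force. The following Python sketch is purely illustrative and is not part of the formal development; the $[4,2]$ code and the Bernoulli parameter $p=0.3$ are arbitrary choices made for the example. It reduces a Bernoulli error modulo a toy code and computes its statistical distance to the uniform distribution over $\F_{2}^{n}/\CC$.
\begin{verbatim}
# Illustrative only: brute-force statistical distance between a Bernoulli
# error reduced modulo a toy [4,2] code and the uniform distribution on
# F_2^n / C.  The code and the parameter p are arbitrary example choices.
from itertools import product

n, p = 4, 0.3                              # toy length and Bernoulli parameter
G = [(1, 0, 1, 1), (0, 1, 0, 1)]           # generator matrix of a toy [4,2] code

def add(x, y):                             # addition in F_2^n
    return tuple((a + b) % 2 for a, b in zip(x, y))

C = set()                                  # the code C = { mG : m in F_2^2 }
for m0, m1 in product((0, 1), repeat=2):
    C.add(tuple((m0 * a + m1 * b) % 2 for a, b in zip(G[0], G[1])))

def coset(x):                              # canonical representative of x + C
    return min(add(x, c) for c in C)

mass = {}                                  # distribution of e mod C, e ~ Ber(p)^n
for e in product((0, 1), repeat=n):
    w = sum(e)
    mass[coset(e)] = mass.get(coset(e), 0.0) + p ** w * (1 - p) ** (n - w)

cosets = {coset(x) for x in product((0, 1), repeat=n)}
u = 1.0 / len(cosets)                      # uniform distribution over F_2^n / C
delta = 0.5 * sum(abs(mass.get(q, 0.0) - u) for q in cosets)
print(f"Delta(e mod C, uniform) = {delta:.6f}")
\end{verbatim}
Of course, such an exhaustive computation is hopeless beyond toy lengths; the point of this section is precisely to bound this quantity analytically.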
\newline {\bf \noindent Groups and Their Duals.} In what follows $G$ will denote a locally compact Abelian group. Such a group admits a Haar measure $\mu$. For instance $G = \mathbb{R}$ with $\mu$ the Lebesgue measure $\lambda$, or $G = \F_{2}^{n}$ with $\mu$ the counting measure $\sharp$. The dual group $\widehat{G}$ is given by the continuous group homomorphisms $\chi$ from $G$ into the multiplicative group of complex numbers of absolute value $1$, and it is again a locally compact Abelian group. In Figure \ref{fig:groups} we give groups, their duals as well as their associated Haar measures that will be considered in this work. \begin{figure} \renewcommand{\arraystretch}{1.4}. \begin{tabular}{|c|c||c|c|} \hline $G$ & $\mu$ & $\widehat{G}$ & $\mu$ \\ \hline\hline $\mathbb{F}_{2}^{n}$ & $\frac{1}{2^{n}} \; \sharp$ & & \\ $\mathbb{F}_{2}^{n}/\CC$ & $ \frac{\sharp\CC}{2^{n}}\; \sharp$ & $\widehat{\mathbb{F}_{2}^{n}/\CC} \simeq \dual{\CC}$ & $\sharp$ \\ $\CC$ & $\frac{1}{\sharp\CC}\;\sharp$ & & \\ \hline $\mathbb{R}^{n}$ & $\lambda$ & & \\ $\mathbb{R}^{n}/\Lambda$ & $\frac{1}{|\Lambda|}\; \lambda$ & $\widehat{\mathbb{R}^{n}/\Lambda} \simeq \dual{\Lambda}$ & $\sharp$ \\ $\Lambda$ & $\sharp\; |\Lambda|$ & & \\ \hline \end{tabular} \caption{Some groups $G$, their duals $\widehat{G}$ and their associated Haar measures. Here $\lambda$ denotes the Lebesgue measure and $\sharp$ the counting measure.} \label{fig:groups} \end{figure} It is important to note that if $H \subseteq G$ is a closed subgroup, then $G/H$ and $H$ are also locally compact groups. Furthermore, $G/H$ has a dual group that satisfies the following isomorphism $$ \widehat{G/H} \simeq H^{\perp} \eqdef \left\{ \chi \in \widehat{G} \mbox{ : } \forall h \in H, \mbox{ } \chi(h) = 1 \right\}. $$ {\bf \noindent Norms and Fourier Transforms.} For any $p \in [1, \infty[$, $L_{p}(G)$ will denote the space of measurable functions $f : G \rightarrow \mathbb{C}$ (up to functions which agree almost everywhere) with finite norm $\| f \|_{p}$ which is defined as $$ \| f \|_{p} \eqdef \sqrt[p]{\int_{G} |f|^{p} d\mu}. $$ The Fourier transform of $f \in L_{1}(G)$ is defined as $$ \widehat{f} : \chi \in \widehat{G} \longmapsto \int_{G}f\overline{\chi}d\mu. $$ We omitted here the dependence on $G$. It will be clear from the context. \begin{theorem}[Parseval's Identity]\label{theo:Parseval} Let $f\in L_{1}(G) \cap L_{2}(G)$, then with appropriate normalization of the Haar measure $$ \| f \|_{2} = \| \widehat{f} \|_{2}. $$ \end{theorem} {\bf \noindent Poisson Formula.} Given $H \subseteq G$ and any function $f : G \rightarrow \mathbb{C}$, its restriction over $H$ is defined as $f_{|H}: h \in H \mapsto f(h)\in \mathbb{C}$. We define its periodization as follows. \begin{definition}[Periodization]\label{def:perio} Let $H$ be a closed subgroup of $G$ and $f\in L^{1}(G)$. We define the $H$-periodization of $f$ as $$ f^{|H} : (g+H) \in G/H \longmapsto \int_{H} f(g+h)d\mu_{H}(h) \in \mathbb{C} $$ where $\mu_{H}$ denotes any choice of the Haar measure for $H$. \end{definition} There always exists a Haar measure $\mu_{G/H}$ such that for any continuous function with compact support $f : G \rightarrow \mathbb{C}$ the quotient integral formula holds \begin{equation}\label{eq:quoIntFormula} \int_{G/H} \left( \int_{H} f(g+h)d\mu_{H}(h) \right) d\mu_{G/H}(g + H) = \int_{G} f(g)d\mu(g). 
\end{equation} \begin{theorem}[Poisson Formula] Let $H\subseteq G$ be a closed subgroup and $f\in L^{1}(G)$, then with appropriate normalization of the Haar measures, $$ \widehat{\left( f^{|H} \right)} = \left(\widehat{f}\right)_{|\widehat{G/H}}. $$ \end{theorem} The following corollary is a simple consequence of the Cauchy-Schwarz inequality, Parseval's identity and the Poisson formula. Our results on smoothing bounds are all based on this corollary. \begin{corollary}\label{coro:FB} Let $H$ be a closed subgroup of $G$. Let $a : x \in G/H \mapsto 1$ and $f \in L^{1}(G)$ be such that $\int_{G} f d\mu = \mu_{G/H}(G/H)$. Then with appropriate normalization of the Haar measure,\footnote{We choose the Haar measures $\mu_{G}$, $\mu_{H}$, $\mu_{G/H}$ and $\widehat{\mu_{G/H}}$ for which both the Poisson formula and Parseval's Identity hold.} $$ \| a - f^{|H} \|_{1} \leq \sqrt{\mu_{G/H}(G/H)} \; \sqrt{\int_{\widehat{G/H} \backslash \{\chi_{\mathbf{0}}\}} |\widehat{f}|^{2} \; d\mu_{\widehat{G/H}}} $$ where $\chi_{\mathbf 0}$ denotes the identity element of $\widehat{G/H}$. \end{corollary} \begin{proof} We have \begin{align} \|a-f^{|H}\|_{1} &= \int_{G/H}|a-f^{|H}|d\mu_{G/H} \nonumber \\ &\leq \sqrt{\mu_{G/H}(G/H)} \; \|a-f^{|H}\|_2 \quad (\mbox{By Cauchy-Schwarz})\nonumber \\ &= \sqrt{\mu_{G/H}(G/H)} \; \|\widehat{a}-\widehat{f^{|H}}\|_2 \quad (\mbox{By Parseval}) \nonumber\\ &= \sqrt{\mu_{G/H}(G/H)} \; \sqrt{\int_{\widehat{G/H}\setminus\{\chi_{\mathbf{0}}\}}|\widehat{f^{|H}}|^2d\mu_{\widehat{G/H}}} \label{eq:fourier-values}\\ &= \sqrt{\mu_{G/H}(G/H)} \; \sqrt{\int_{\widehat{G/H}\setminus\{\chi_{\mathbf{0}}\}}|\widehat{f}|^2d\mu_{\widehat{G/H}}} \quad (\mbox{By Poisson})\nonumber \end{align} where in Equation \eqref{eq:fourier-values} we used the following equalities: \begin{align*} \widehat{f^{|H}}(\chi_{\mathbf 0}) &= \int_{G/H}f^{|H}\overline{\chi_{\mathbf 0}}\; d\mu_{G/H} \\ &= \int_{G/H}\left( \int_{H}f(g+h)d\mu_{H}(h)\right)d\mu_{G/H}(g+H) \\ & = \int_{G} f \quad (\mbox{By Equation \eqref{eq:quoIntFormula}})\\ & = \mu_{G/H}(G/H) \quad (\mbox{By assumption on $f$}) \end{align*} and $$ \widehat{a}(\chi_{\mathbf{0}}) = \int_{G/H}a\overline{\chi_{\mathbf 0}}d\mu_{G/H} = \mu_{G/H}(G/H) \quad \mbox{and} \quad \forall \chi\in \widehat{G/H}\setminus\{\chi_{\mathbf{0}}\}, \mbox{ } \widehat{a}(\chi) =\int_{G/H}\overline{\chi}d\mu_{G/H} = 0, $$ which concludes the proof. \end{proof} In this work we will choose $G = \mathbb{R}^{n}$ and $H = \Lambda$ or $G = \mathbb{F}_{2}^{n}$ and $H = \CC$. Haar measures associated to $G$, $G/H$ and $\widehat{G/H}$ for which the corollary holds are given in Figure \ref{fig:groups}. Furthermore, we will use Fourier transforms over $\widehat{G}$ and $\widehat{G/H}$. We describe in Figure \ref{table:FT} the dual groups and Fourier transforms that we will consider. \begin{figure}[htb] \begin{center} \renewcommand{\arraystretch}{1.8} \begin{tabular}{|c|c|} \hline $\R^n$ & $\F_2^n$ \\ \hline $\widehat{\mathbb{R}^{n}/\Lambda} = \left\{ \chi_{\vec{x}} \mbox{ : } \vec{x}\in\dual{\Lambda} \right\}$ & $\widehat{\mathbb{F}_{2}^{n}/\CC} = \left\{ \chi_{\vec{x}}\mbox{ : } \vec{x}\in\dual{\CC} \right\}$ \\ \hline $\widehat{f}(\xv) = \int_{\R^n} f(\yv) e^{2i\pi \xv \cdot \yv} d\yv$ & $ \widehat{f}(\xv) = \frac{1}{2^n} \sum_{\yv \in \F_2^n} f(\yv) (-1)^{\xv \cdot \yv} $ \\ \hline \end{tabular} \end{center} \caption{Dual groups and Fourier transforms that we will consider. We identify $\widehat{f}(\chi_{\vec{x}})$ with $\widehat{f}(\vec{x})$.
\label{table:FT}} \end{figure} \section{Smoothing Bounds: Code Case}\label{sec:SBCode} Given a binary linear code $\CC$ of length $n$, the aim of a smoothing bound is to quantify at which condition on the noise $\cv+\ev$ is statistically close to the uniform distribution over $\F_2^n$ when $\cv$ is uniformly drawn from $\CC$ and $\ev$ sampled according to some noise distribution $f$. Equivalently, we want to understand when $\left(\vec{e} \mod \CC\right) \in \F_{2}^{n}/\CC$ is close to the uniform distribution. We will focus on the case where the distribution of $\ev$ is radial, meaning that it only depends on the Hamming weight of $\ev$. \begin{notation} We will use throughout this section the following notation. \begin{itemize} \item The uniform probability distribution over the quotient space $\F_{2}^{n}/\CC$ will frequently recur and for this reason we just denote it by $\unifq$. The uniform distribution over the whole space $\F_2^n$ is denoted by $\uniff$ and the uniform distribution over the codewords of $\CC$ is denoted by $\unifc$. \item We also use the uniform distribution over the sphere $\Sc_w$ which we denote by $\unifs{w}$. \item For two probability distributions $f$ and $g$ over $\F_2^n$ we denote by $f \star g$ the convolution over $\F_2^n$: $f \star g(\xv) = \sum_{\yv \in \F_2^n} f(\xv-\yv)g(\yv)$. \end{itemize} \end{notation} It will be more convenient to work in the quotient space and for this we use the following proposition. \begin{proposition} Let $f$ be a probability distribution over $\F_2^n$ and $\CC$ be an $\lbrack n,k \rbrack$-code. We have \begin{equation*} \Delta(\uniff,\unifc \star f) = \Delta(u,f^{\CC}), \quad \mbox{where } f^{\CC}(\xv) \eqdef 2^{k} \; f^{|\CC}(\xv) = \sum_{\cv \in \CC} f(\xv-\cv). \end{equation*} \end{proposition} \begin{proof}Let $\vec{c}$ and $\vec{e}$ be distributed according to $u_{\CC}$ and $f$. We have the following computation: \begin{align} \Delta(\uniff,\unifc \star f)& = \frac{1}{2}\; \sum_{\vec{x}\in\F_{2}^{n}} \left| \frac{1}{2^{n}} - \mathbb{P}_{\unifc,f}\left( \vec{c} + \vec{e} = \vec{x} \right) \right| \nonumber \\ &= \frac{1}{2}\;\sum_{\vec{x}\in\F_{2}^{n}} \left| \frac{1}{2^{n}} - \sum_{\vec{c}_{0}\in\CC} \mathbb{P}_{f}(\vec{c}+\vec{e} = \vec{x}\mid \vec{c} = \vec{c}_{0})\;\frac{1}{2^{k}} \right| \nonumber \\ &= \frac{1}{2}\;\sum_{\vec{x}\in\F_{2}^{n}} \left| \frac{1}{2^{n}} - \frac{1}{2^{k}}\;\sum_{\vec{c}_{0}\in\CC} f(\vec{x} - \vec{c}_{0}) \right| \nonumber \\ &= \frac{1}{2}\; \sum_{\vec{x}\in\F_{2}^{n}/\CC} \left| \frac{1}{2^{n-k}} - \sum_{\vec{c}_{0}\in \CC} f(\vec{x}-\vec{c}_{0}) \right| \label{eq:modC} \\ & = \frac{1}{2}\; \sum_{\vec{x}\in\F_{2}^{n}/\CC} \left| \frac{1}{2^{n-k}} - f^{\CC}(\xv) \right| \nonumber \end{align} where in Equation \eqref{eq:modC} we used that each term of the sum is constant on $\vec{x}+\CC$. \end{proof} As a rewriting of Corollary \ref{coro:FB} we get the following proposition that upper-bounds $\Delta(u,f^{\CC})$, namely: \begin{proposition}\label{propo:FBSDCod} Let $\CC$ be an $\lbrack n,k\rbrack$-code and $f$ be a radial distribution on $\F_2^n$. We have $$ \Delta\left(u, f^{\CC}\right) \leq 2^{n} \; \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n} \Neq{t}{\dual{\CC}}|\widehat{f}(t)|^{2}} $$ where by abuse of notation we denote by $\widehat{f}(t)$ the common value of $\widehat{f}$ on vectors of weight $t$. 
\end{proposition} \begin{proof}We have that $\CC$ is a closed subgroup of $\F_{2}^{n}$ with associated Haar measures: $$ \mu_{\F_{2}^{n}} = \frac{1}{2^{n}} \; \sharp \quad \mbox{and} \quad \mu_{\F_{2}^{n}/\CC} = \frac{2^{k}}{2^{n}}\;\sharp $$ for which we can apply Corollary \ref{coro:FB}. Let $a \eqdef 2^{n-k} u$ and $b \eqdef 2^{n} f$. First, it is clear that $a : \vec{x}\in\F_{2}^{n}/\CC \mapsto 1$ and that $$ \int_{\F_{2}^{n}} b \;d\mu_{\F_{2}^{n}} = \frac{1}{2^{n}}\sum_{\vec{x}\in\F_{2}^{n}} 2^{n}f(\vec{x}) = 1 = \mu_{\F_{2}^{n}/\CC}(\F_{2}^{n}/\CC) $$ where we used that $f$ is a distribution. Therefore we can apply Corollary \ref{coro:FB} with functions $a$ and $b$. Furthermore, $b^{|\CC} = 2^{n} f^{|\CC} = 2^{n-k}f^{\CC}$ by definition of $f^{\CC}$. We get the following computation: \begin{align} \| a - b^{|\CC} \|_{1} &= \| a - 2^{n-k} f^{\CC}\|_{1} \nonumber \\ &= \sum_{\vec{x}\in\F_{2}^{n}/\CC} \left| 1 - 2^{n-k} f^{\CC}(\vec{x}) \right| \; \frac{1}{2^{n-k}} \nonumber\\ &= \sum_{\vec{x}\in\F_{2}^{n}/\CC} \left| \frac{1}{2^{n-k}} - f^{\CC}(\vec{x}) \right| \nonumber\\ &= 2\; \Delta(u,f^{\CC}) \ . \label{eq:toApplyCoro} \end{align} To conclude the proof it remains to apply Corollary \ref{coro:FB} with Equation \eqref{eq:toApplyCoro} and then to use that $f$ is radial and therefore also $\widehat{f}$. \end{proof} Our upper-bound of Proposition \ref{propo:FBSDCod} involves the weight distribution of the code $\dual{\CC}$, namely $(\Neq{t}{\dual{\CC}})_{t \geq \dmin(\dual{\CC})}$. To understand how our bound behaves for a given distribution $f$, we will start (in the following subsection) with the case of random codes. The expected value for $\Neqs{t}$ is well known in this case. This will lead us to estimate our bound on almost all codes and gives us some hints about the best distribution to choose for our smoothing bound in the worst case (which is the case that we treat in Subsection \ref{subsec:smoothFCode}). \subsection{Smoothing Random Codes.} The probabilistic model $\crand{n}{k}$ that we use for our random code of length $n$ is defined by sampling uniformly at random a generator matrix $\vec{G} \in \F_{2}^{k\times n}$ for it, \textit{i.e.} $$ \CC = \left\{ \vec{m}\vec{G} \mbox{ : } \vec{m} \in \F_{2}^{k} \right\}. $$ It is straightforward to check that the expected number of codewords of weight $t$ in the dual $\dual{\CC}$ is given by: \begin{fact}\label{fact:RCode}For $\CC$ chosen according to $\crand{n}{k}$ \begin{equation*} \mathbb{E}_{\CC}(\Neq{t}{\dual{\CC}}) = \frac{\binom{n}{t}}{2^k}. \end{equation*} \end{fact} This estimation combined with Proposition \ref{propo:FBSDCod} enables us to upper-bound $\mathbb{E}_{\CC}\left(\Delta( u,f^{\CC})\right)$. \begin{proposition}\label{propo:Rcode} We have: \begin{equation}\label{bound:RandCodes} \mathbb{E}_{\CC}\left(\Delta(u,f^{\CC})\right) \leq 2^{n}\; \sqrt{ \sum_{t > 0} \frac{\binom{n}{t}}{2^{k}}\; |\widehat{f}(t)|^{2} } . 
\end{equation} \end{proposition} \begin{proof} By using Proposition \ref{propo:FBSDCod}, we obtain: \begin{align*} \mathbb{E}_{\CC}\left(\Delta(u,f^{\CC})\right) &\leq \mathbb{E}_{\CC}\left(2^{n} \; \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n} \Neq{t}{\dual{\CC}}|\widehat{f}(t)|^{2}} \right) \\ &\leq 2^{n}\; \sqrt{\mathbb{E}_{\CC}\left( \sum_{t = \dmin(\dual{\CC})}^{n} \Neq{t}{\dual{\CC}}|\widehat{f}(t)|^{2} \right)} \quad (\mbox{Jensen's inequality})\\ &= 2^{n}\; \sqrt{ \sum_{t > 0} \frac{\binom{n}{t}}{2^{k}}\; |\widehat{f}(t)|^{2} } \end{align*} where in the last line we used the linearity of the expectation and Fact \ref{fact:RCode}. \end{proof} It remains now to choose the distribution $f$. A natural choice in code-based cryptography is the uniform distribution $\unifs{w}$ over the sphere $\Sc_w$ of radius $w$ centered around $\mathbf{0}$. \noindent {\bf \noindent Uniform Distribution over a Sphere.} The Fourier transform of $\unifs{w}$ is intimately connected to Krawtchouk polynomials. The Krawtchouk polynomial of order $n$ and degree $w\in \{ 0,\dots,n\}$ is defined as $$ K_{w}(X;n) \eqdef \sum_{j=0}^{w} (-1)^j \binom{X}{j} \binom{n-X}{w-j}. $$ To simplify notation, since $n$ is clear here from context, we will drop the dependency on $n$ and simply write $K_w(X)$. The following fact allows to relate $K_{w}$ with $\widehat{\unifs{w}}$ (see for instance \cite[Lem. 3.5.1, \S 3.5]{L99}) \begin{fact}\label{fact:Kraw} For any $\vec{y} \in \mathcal{S}_{t}$, \begin{equation} \sum_{\vec{e}\in\mathcal{S}_{w}} (-1)^{ \vec{y}\cdot \vec{e}} = K_{w}(t). \end{equation} \end{fact} This leads us to $$ \widehat{\unifs{w}}(\vec{x}) = \frac{1}{2^{n}} \; K_{w}(|\vec{x}|)\bigg/\binom{n}{w}.$$ By plugging this in Equation \eqref{bound:RandCodes} of Proposition \ref{propo:Rcode} we obtain \begin{equation} \mathbb{E}_{\CC}\left( \Delta( u, \unifs{w}^{\CC})\right) \leq \sqrt{\sum_{t > 0} \frac{\binom{n}{t}}{2^{k}} \left( \frac{K_{w}(t)}{\binom{n}{w}}\right)^{2}}. \end{equation} The above sum can be upper-bounded by observing that $\left( K_{w}/\sqrt{\binom{n}{w}}\right)_{0\leq w \leq n}$ is an orthonormal basis of functions $f:\{0,1,\cdots,n\}\rightarrow \C$ for the inner product $\langle f,g \rangle_{\textup{rad}} \eqdef \sum_{t=0}^{n} f(t)\overline{g(t)} \binom{n}{t}/2^{n}$. It can be viewed as the standard inner product between radial functions over $\F_2^n$. In particular, $\sum_{t=0}^{n} \frac{K_{w}(t)^{2}}{\binom{n}{w}} \; \frac{\binom{n}{t}}{2^{n}} = 1$ \cite[Corollary 2.3]{L95}. Therefore, for random codes we obtain the following proposition \begin{proposition}\label{propo:BUnifRandCase} We have for random $\CC$ chosen according to $\crand{n}{k}$ \begin{equation} \label{eq:BUnifRandCase} \mathbb{E}_{\CC}\left( \Delta(u,\unifs{w}^{\CC}) \right) \leq \sqrt{2^{n-k}\bigg/\binom{n}{w}}. \end{equation} \end{proposition} In other words, if one wants to smooth a random code with target distance $2^{-\Omega(n)}$ via the uniform distribution over a sphere, one has to choose its radius $w\leq n/2$ such that $\binom{n}{w} = 2^{\Omega(n)} \; 2^{n-k}$. It is readily seen that for fixed code rate $R \eqdef \frac{k}{n}$, choosing any fixed ratio $\omega \eqdef \frac{w}{n}$ such that $\omega >\ogv(R)$ is enough, where $\ogv(R)$ corresponds to the asymptotic relative Gilbert-Varshamov (GV) bound \[ \omega_{\textup{GV}}(R) \eqdef h^{-1}(1-R) \ , \] with $h^{-1}:[0,1] \to [0,1/2]$ being the inverse of the binary entropy function $h(p) = -p\log_2(p)-(1-p)\log_2(1-p)$. 
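As a quick numerical sanity check of this radius, the following Python sketch (illustrative only; the parameters $n=400$ and $R=1/2$ are arbitrary choices made for the example) computes $\ogv(R)=h^{-1}(1-R)$ by bisection and evaluates the base-two exponent of the bound $\sqrt{2^{n-k}/\binom{n}{w}}$ of Proposition~\ref{propo:BUnifRandCase} for a relative radius slightly below and slightly above $\ogv(R)$. Only above the GV radius does the exponent become negative, i.e. only then is the bound exponentially small in $n$.
\begin{verbatim}
# Illustrative only: GV radius omega_GV(R) = h^{-1}(1-R) and the exponent of
# the random-code smoothing bound sqrt(2^{n-k} / binom(n, w)).
from math import comb, log2

def h(x):                                   # binary entropy
    return 0.0 if x in (0.0, 1.0) else -x*log2(x) - (1-x)*log2(1-x)

def h_inv(y):                               # inverse of h on [0, 1/2], by bisection
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < y else (lo, mid)
    return (lo + hi) / 2

n, R = 400, 0.5                             # arbitrary example parameters
k = int(R * n)
w_gv = h_inv(1 - R)                         # relative GV radius, ~0.11 for R = 1/2
for omega in (w_gv - 0.01, w_gv + 0.01):    # just below / just above the GV radius
    w = round(omega * n)
    exponent = 0.5 * ((n - k) - log2(comb(n, w)))   # log2 of the smoothing bound
    print(f"omega = {omega:.3f}: bound <= 2^({exponent:.1f})")
\end{verbatim}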
The GV bound $\ogv(R)$ appears ubiquitously in the coding-theoretic literature: amongst other contexts, it arises as the (expected) relative minimum distance of a random code of dimension $Rn$, or as the maximum relative error weight for which decoding over the binary symmetric channel can be successful with non-vanishing probability. This value of radius $n \omega_{\textup{GV}}(R)$ is optimal: clearly, the support size of an error distribution smoothing a code $\CC$ must be at least $\sharp\F_2^n/\CC$. Thus, we cannot expect to smooth a code $\CC$ with errors in the sphere $\mathcal{S}_{w}$ if its volume is smaller than $2^{n-k} = \sharp\F_{2}^{n}/\CC$. Therefore the uniform distribution over a sphere is optimal for random codes. By this, we mean that it leads to the smallest possible amount of noise (when it is concentrated on a ball) needed to smooth a random code. Notice that we obtained this result after applying the chain of arguments Cauchy-Schwarz, Parseval and Poisson to bound the statistical distance. {\bf \noindent About the original chain of arguments of Micciancio and Regev.} It can be verified that by coming back to the original steps of \cite{MR07,ADRS15}, namely the Poisson summation formula and then the triangle inequality, we would obtain \begin{equation}\label{eq:PFTI} \Delta\left(u, f^{\CC} \right) \leq 2^{n} \sum_{t \geq \dmin(\dual{\CC})} \Neq{t}{\dual{\CC}} |\widehat{f}(t)| . \end{equation} By using that $a^{2}+b^{2} \leq (a+b)^{2}$ (when $a,b \geq 0$) we see that our bound (Proposition \ref{propo:FBSDCod}) is sharper. It turns out that our bound is exponentially sharper for random codes (and even in the worst case) when choosing $f$ as the uniform distribution over a sphere of radius $w$, namely $f = \unifs{w}$. In this case the Micciancio-Regev argument yields the following computation: \begin{align} \mathbb{E}_{\CC}\left(\Delta\left( u, \unifs{w}^{\CC} \right) \right) &\leq \mathbb{E}_{\CC}\left( \sum_{t \geq \dmin(\dual{\CC})} \Neq{t}{\dual{\CC}} \; \frac{|K_{w}(t)|}{\binom{n}{w}} \right) \nonumber\\ &= \sum_{t > 0} \frac{\binom{n}{t}}{2^{k}} \; \frac{|K_{w}(t)|}{\binom{n}{w}} . \label{eq:BUnifMRRandCase} \end{align} To carefully estimate this upper-bound (and to compare it with \eqref{eq:BUnifRandCase}) we are going to use the following proposition, which gives the asymptotic behaviour of $K_{w}$ (see for instance \cite{IS98,DT17}). \begin{proposition}\label{prop:expansion} Let $n,t$ and $w$ be three positive integers. We set $\tau \eqdef \frac{t}{n}$, $\omega \eqdef \frac{w}{n}$ and $\omega^{\perp} \eqdef 1/2 - \sqrt{\omega(1-\omega)}$. We assume $w \leq n/2$. Let $z \eqdef \frac{1-2\tau - \sqrt{D}}{2 (1-\omega)}$ where $D \eqdef \left(1- 2 \tau \right)^2-4 \omega(1-\omega)$. In the case $\tau \in ( 0,\omega^\perp)$, \begin{equation*} K_{w}(t) = O\left( 2^{n(a(\tau,\omega)+o(1))}\right) \quad \mbox{where} \quad a(\tau,\omega) \eqdef \tau \log_{2}(1-z) + (1-\tau)\log_{2}(1+z) - \omega\log_{2}z . \end{equation*} In the case $\tau \in (\omega^\perp,1/2)$, $D$ is negative, and \begin{equation*} K_{w}(t) = O\left( 2^{n(a(\tau,\omega)+o(1))} \right) \quad \mbox{where} \quad a(\tau,\omega) \eqdef \frac{1}{2}(1+h(\omega)-h(\tau)).
\end{equation*} \end{proposition} We let, \begin{align*} \omega_0 &\eqdef \varlimsup_{n \rightarrow \infty}\left\{\frac{w}{n}: \;\sqrt{2^{n(1-R)}\bigg/\binom{n}{w}} \geq 1\right\},\\ \omega_1 &\eqdef \varlimsup_{n \rightarrow \infty}\left\{\frac{w}{n}: \;\sum_{t > 0} \frac{\binom{n}{t}}{2^{Rn}} \;\frac{|K_{w}(t)|}{\binom{n}{w}} \geq 1 \right\}. \end{align*} In Figure \ref{figure:compareBothApproach} we compare the asymptotic values of $\omega_0$ and $\omega_1$ as functions of $R$. Notice that $\omega_0 = \ogv(R)$. We see that $\omega_1$ is undefined for a rate $R < 1/2$. In other words, it is impossible to show that $\mathbb{E}_{\CC}\left( \Delta(u, \unifs{w}^{\CC}) \right) \leq 2^{-\Omega(n)}$ with the standard approach of \cite{MR07,ADRS15} when $R<1/2$. Furthermore, for larger rates (and sufficiently large $n$), $\omega_0$ is much smaller than $\omega_1$. \begin{center} \begin{figure}[htb] \includegraphics[height=6cm]{figure_distribu.pdf} \caption{$\omega_{0}$ and $\omega_{1}$ as functions of $R \eqdef \frac{k}{n}$. \label{figure:compareBothApproach}} \end{figure} \end{center} \medskip {\noindent \bf Bernoulli Distribution.} Another natural distribution to consider when dealing with codes is the so-called ``Bernoulli'' distribution $\fber{p}$, which is defined for $p\in[0,1/2]$ as $$ \forall \vec{x}\in\F_{2}^{n}, \quad \fber{p}(\vec{x}) \eqdef p^{|\vec{x}|} (1-p)^{n-|\vec{x}|} . $$ This choice leads to simpler computations compared to the uniform distribution over a sphere. For instance we have $\widehat{\fber{p}}(\vec{x}) = \frac{1}{2^{n}}(1-2p)^{|\vec{x}|}$. By plugging this in Equation \eqref{bound:RandCodes} of Proposition \ref{propo:Rcode} we obtain \begin{align} \mathbb{E}_{\CC}\left( \Delta(u,\fber{p}^{\CC}) \right) &\leq \sqrt{\sum_{t > 0}\frac{\binom{n}{t}}{2^{k}}(1-2p)^{2t}} \nonumber \\ &\leq \sqrt{\frac{1}{2^{k}} \; (1+(1-2p)^{2})^{n} } \label{eq:BUnifBerCase} \end{align} Thus, if one wants to smooth a random code at target distance $2^{-\Omega(n)}$ with the Bernoulli distribution, the above argument says that one has to choose $p > p_{0} \eqdef \frac{1}{2}\left( 1-\sqrt{2^{R}-1} \right)$ where $R = k/n$. As $\mathbb E_{\fber{p}}(|\vec x|) = pn$, it is meaningful to compare $p_{0}$ and $\omega_0$. It is readily seen that $\omega_0=\ogv(R)=h^{-1}(1-R) < \frac{1}{2}\left( 1-\sqrt{2^{R}-1}\right) =p_0$. In other words, this time the upper-bound given by Proposition \ref{propo:Rcode} does not give what would be optimal, namely the Gilbert-Varshamov relative distance $\ogv(R)$, but a quantity which is bigger. However, it is expected that the average amount of noise to smooth a random code is the same in both cases, since a Bernoulli distribution of parameter $p$ is extremely concentrated over words of Hamming weight $pn$ and that therefore $\Delta(u, \fber{p}^{\CC}) \approx \Delta(u, \unifs{pn}^{\CC})$. This suggests that Proposition \ref{propo:Rcode} is not tight in this case. This is indeed the case, we can prove that we can smooth a random code with the Bernoulli noise as soon as $p > \ogv(R)$. This follows from the following proposition. \begin{restatable}{proposition}{BerVSUnif}\label{propo:BerVSUnif} Let $\varepsilon > 0$ and $p\in [0,1/2]$. Then, $$ \Delta(u,f^{\CC}_{\textup{ber},p}) \leq \sum_{r=(1-\varepsilon)np}^{(1+\varepsilon)np} \Delta(u,\unifs{r}^{\CC}) + 2^{-\Omega(n)} . $$ \end{restatable} \begin{proof} See Appendix \ref{app:BerVSUnif}. 
\end{proof} This proposition shows that if one wants $\Delta(u,f^{\CC}_{\textup{ber},p}) \leq 2^{-\Omega(n)}$ it is enough to have $\Delta(u,f^{\CC}_{\textup{unif},r}) \leq 2^{-\Omega(n)}$ for any $r \in \left[(1-\varepsilon)np,(1+\varepsilon)np\right]$. This can be achieved by choosing $\varepsilon$ and $p$ such that $(1-\varepsilon)p > \ogv(R)$. To summarize this subsection we have the following theorem \begin{theorem}Let $\CC$ be a random code chosen according to $\crand{n}{k}$, $R \eqdef \frac{k}{n}$. Let $u$ (resp. $\unifs{\lceil pn \rceil}$) be the uniform distribution over $\F_{2}^{n}/\CC$ (resp. $\mathcal{S}_{w}$) and $\fber{p}$ be the Bernoulli distribution over $\F_{2}^{n}$ of parameter $p$. We have, $$ \mathbb{E}_{\CC}\left( \Delta( u, \unifs{\lceil pn \rceil}^{\CC})\right) \leq \; 2^{\frac{n}{2}\left( 1-R - h(p) + o(1) \right)} \quad \mbox{and} \quad \mathbb{E}_{\CC}\left( \Delta( u, \fber{p}^{\CC})\right) \leq 2^{\frac{n}{2}\left( 1-R - h(p) + o(1) \right)}. $$ In particular, $\mathbb{E}_{\CC}\left( \Delta( u, \unifs{\lceil pn \rceil}^{\CC})\right) \leq 2^{-\Omega(n)}$ and $\mathbb{E}_{\CC}\left( \Delta( u, \fber{p}^{\CC})\right) \leq 2^{-\Omega(n)}$ for any fixed $p > \ogv(R)$. \end{theorem} \subsection{Smoothing a Fixed Code\label{subsec:smoothFCode}.} Our upper-bound on $\Delta(u,f^{\CC})$ given in Proposition \ref{propo:FBSDCod} involves the weight distribution of the dual of $\CC$, namely the $\Neq{t}{\dual{\CC}}$'s. To derive smoothing bounds on a fixed code our strategy will simply consist in using the best known upper bounds on the $\Neq{t}{\dual{\CC}}$'s. Roughly speaking, these bounds show that $\Neq{t}{\dual{\CC}} \leq \binom{n}{t}2^{-Kn}$ for some constant $K$ which is function of $\dmin(\dual{\CC})$. \newline {\bf \noindent Notation.} Let $\delta \in (0,1/2)$ and $\delta \leq \tau \leq 1$, \begin{equation}\label{eq:bDeltaTau} b(\delta,\tau) \eqdef \mathop{\overline{\lim}}\limits_{n \to \infty} \mathop{\max}\limits_{\CC} \left\{ \frac{1}{n}\log_{2} \Neq{\lfloor\tau n\rfloor}{\CC} \right\} \end{equation} where the maximum is taken over all codes $\CC$ of length $n$ and minimum distance $\geq \delta n$. We recall (or slightly extend) results taken from \cite{ABL01}: \begin{restatable}{proposition}{PropoABL}\label{propo:ABL} Let $\delta \in (0,1/2)$ and $\delta^{\perp} \eqdef 1/2 - \sqrt{\delta(1-\delta)}$. For any $\delta \leq \tau \leq 1$ \begin{equation} b(\delta,\tau) \leq c(\delta,\tau) \eqdef \left\{ \begin{array}{ll} h(\tau) + h\left( \delta^{\perp} \right) - 1 & \mbox{if } \tau \in [\delta,1-\delta], \\ 2\left( h(\delta^{\perp}) - a(\tau,\delta^{\perp}) \right) & \mbox{otherwise,} \end{array} \right. \end{equation} where $a(\cdot,\cdot)$ is defined in Proposition \ref{prop:expansion}. \end{restatable} \begin{proof}See Appendix \ref{app:proofPropoABL}. \end{proof} \begin{proposition}[{\cite[Proposition 4]{ABL01}}]\label{propo:2LPB} Let $\delta_{\textup{JSB}} \eqdef \left( 1 - \sqrt{1-2\delta}\right)/2$ and $$ \tau_{0} \eqdef \mathop{\argmin}\limits_{\delta_{\textup{JSB}} \leq \alpha \leq 1/2} 1- h(\alpha) + R_{1}(\alpha,\delta) $$ where $$ R_{1}(\tau,\delta) \eqdef h\left(\frac{1}{2}\left( 1 - \sqrt{1- \left(\sqrt{4\tau(1-\tau)-\delta(2-\delta)} - \delta\right)^{2}} \right) \right). 
$$ For any $\delta \leq \tau \leq 1$ \begin{equation} b(\delta,\tau) \leq d(\delta,\tau) \eqdef \left\{ \begin{array}{ll} h(\tau) - h(\tau_{0}) + R_{1}(\tau_{0},\delta) & \mbox{if } \tau \in (\delta_{\textup{JSB}},1-\delta_{\textup{JSB}}) \mbox{ and } \tau_{0} \leq \tau, \\ R_{1}(\tau,\delta) & \mbox{if } \tau \in (\delta_{\textup{JSB}},1-\delta_{\textup{JSB}}) \mbox{ and } \tau_{0} > \tau, \\ 0 & \mbox{otherwise.} \end{array} \right. \end{equation} \end{proposition} Both of these bounds are derived from ``linear programming arguments'' which were initially used to upper-bound the size of a code given its minimum distance. Proposition \ref{propo:ABL} is an extension of \cite[Theorem 3]{ABL01} in the case of linear codes; in particular, we give an upper-bound for any $\tau \in [\delta,1]$ (and not only for $\tau \in [\delta,1/2]$). The proof is in the appendix. The second bound is usually called {\em the second linear programming bound}. In terms of $\delta$ and $\tau$, Propositions \ref{propo:ABL} and \ref{propo:2LPB} are among the best known upper-bounds on $b(\delta,\tau)$. In the case where $0 \leq \delta \leq 0.273$, Proposition \ref{propo:2LPB} leads to better smoothing bounds than Proposition \ref{propo:ABL}. \begin{remark} There exist many other bounds on $b(\delta,\tau)$, like \cite[Theorem 8]{ACKL05}, which holds only for linear codes, or \cite[Theorem 7]{ACKL05}. However, for our smoothing bounds, Propositions \ref{propo:ABL} and \ref{propo:2LPB} lead to the best results, partly because these are the best bounds on the number of codewords of Hamming weight close to the minimum distance of the code. \end{remark} We draw in Figures \ref{figure:Bounds01} and \ref{figure:Bounds035} the bounds of Propositions \ref{propo:ABL} and \ref{propo:2LPB} as functions of $\tau \in [\delta,1]$ for two values of $\delta$. \begin{center} \begin{figure} \includegraphics[height=6cm]{figure01.pdf} \caption{Bounds of Propositions \ref{propo:ABL} and \ref{propo:2LPB} on $b(\delta,\tau)$ as a function of $\tau \in [\delta,1]$ for $\delta = 0.1$. \label{figure:Bounds01}} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[height=6cm]{figure035.pdf} \caption{Bounds of Propositions \ref{propo:ABL} and \ref{propo:2LPB} on $b(\delta,\tau)$ as a function of $\tau \in [\delta,1]$ for $\delta = 0.35$. \label{figure:Bounds035}} \end{figure} \end{center} Equipped with these bounds we are ready to give our smoothing bounds for codes in the worst case, namely for a fixed code. Our study of random codes gave a hint that the choice of the uniform distribution over a sphere could give better results than the Bernoulli distribution. However, as we will show now, the distribution over a sphere forces us to assume that no codewords of large weight belong to the dual $\dual{\CC}$ when we want to smooth $\CC$. This corresponds to the balanced-code hypothesis made in \cite{BLVW19} to obtain a worst-to-average case reduction. We would like to avoid making this assumption, as nothing forbids large weight vectors from belonging to a fixed code. Fortunately, as we will later show, we can avoid this hypothesis while still keeping the advantages of the uniform distribution over a sphere.
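For concreteness, the first of these two bounds is easy to evaluate on its middle branch. The following Python sketch is illustrative only: it implements only the branch $\tau\in[\delta,1-\delta]$ of Proposition~\ref{propo:ABL} and omits the outer branches, which involve $a(\tau,\delta^{\perp})$; the value $\delta=0.1$ matches Figure~\ref{figure:Bounds01}.
\begin{verbatim}
# Illustrative only: middle branch c(delta, tau) = h(tau) + h(delta_perp) - 1
# of Proposition [ABL01-type bound], valid for tau in [delta, 1 - delta].
from math import log2, sqrt

def h(x):                                    # binary entropy
    return 0.0 if x in (0.0, 1.0) else -x*log2(x) - (1-x)*log2(1-x)

def c_mid(delta, tau):
    assert delta <= tau <= 1 - delta         # middle branch only
    delta_perp = 0.5 - sqrt(delta * (1 - delta))
    return h(tau) + h(delta_perp) - 1

delta = 0.1                                  # relative dual distance, as in the figure
for tau in (0.2, 0.3, 0.5, 0.7, 0.9):
    print(f"tau = {tau:.1f}: exponent bound on N_t <= {c_mid(delta, tau):+.4f}")
\end{verbatim}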
\newline {\bf \noindent Impossibility of smoothing a code whose dual is not balanced with the uniform distribution over a sphere.} It is readily seen that in the case where the dual code $\dual{\CC}$ is not balanced, meaning that it contains the all-one vector (and therefore that the dual weight distribution is symmetric: $\Neq{w}{\dual{\CC}}=\Neq{n-w}{\dual{\CC}}$ for any $w \in \{0,\cdots,n\}$ when the codelength is $n$), then it is impossible to smooth $\CC$ with the uniform distribution $\unifs{w}$ over a sphere. Indeed, this implies that all codewords of $\CC$ have an even Hamming weight (they have to be orthogonal to the all-one vector). Consequently, all vectors in a given coset ({\em i.e.} in the class of representatives of some element in $\F_{2}^{n}/\CC$) have Hamming weights of the same parity. Therefore, half of the cosets cannot be reached when periodizing $\unifs{w}$ over $\CC$. \newline {\bf Difficulty of using Proposition \ref{propo:FBSDCod} for proving smoothness of the uniform distribution if the dual has large weight codewords.} Even in the case where the dual is balanced, difficulties can arise if we want to use Proposition \ref{propo:FBSDCod} for proving smoothness of the uniform distribution over a sphere when the dual has large weight codewords. First of all, the obstruction caused by the all-one codeword is also visible in the upper-bound of Proposition \ref{propo:FBSDCod}. Recall that $\widehat{\unifs{w}}(\vec{x}) = \frac{1}{2^{n}}K_{w}(|\vec{x}|)/\binom{n}{w}$ and that we have $K_{w}(n) = (-1)^{w}\binom{n}{w}$ (see Fact \ref{fact:Kraw}). Therefore, when the full weight vector belongs to $\dual{\CC}$, our upper-bound on $\Delta(u,\unifs{w}^{\CC})$ of Proposition \ref{propo:FBSDCod} cannot be smaller than $1$. Furthermore, even if the dual does not contain the all-one codeword, codewords of weight, say, $t=n - O(\log n)$ also give a non-negligible contribution to the upper-bound of Proposition \ref{propo:FBSDCod}: their contribution is only polynomially small, namely $n^{-O(1)}$. \newline {\bf Difficulty of using Proposition \ref{propo:FBSDCod} for proving smoothness of the ``discrete walk distribution'' if the dual has large weight codewords.} Other distributions that are meaningful in the cryptographic context display the same problem as the uniform distribution when it comes to applying Proposition \ref{propo:FBSDCod} in the presence of large weight dual codewords. This applies to the discrete time random walk distribution $f_{\textup{RW},w}$ introduced in \cite{BLVW19} for worst-to-average case reductions. The authors were only able to prove smoothness of this distribution if the dual code has no small {\em and no large} weight codewords. This distribution is given by $$ \forall \vec{x}\in\F_{2}^{n},\quad f_{\textup{RW},w}(\vec{x}) \eqdef \mathbb{P}\left( \sum_{i=1}^{w} \vec{e}_{u_{i}} = \vec{x} \right) $$ where the $u_{i}$'s are independently and uniformly drawn at random in $\{1,\dots,n\}$ and $\vec{e}_{j}$ denotes the $j$-th canonical basis vector. Recall from \cite{BLVW19} that $$ \widehat{f_{\textup{RW},w}}(\vec{y}) = \frac{1}{2^{n}} \; \left( 1-2\frac{|\vec{y}|}{n}\right)^{w}. $$ Therefore, $\widehat{f_{\textup{RW},w}}(\vec{y}) = \frac{1}{2^{n}} (-1)^{w}$ when $|\vec{y}| = n$, just as for the Fourier transform of the uniform distribution over a sphere, showing that $f_{\textup{RW},w}$ cannot smooth a code when the full weight vector belongs to its dual.
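\begin{remark} A minimal illustration of the obstructions discussed above is given by the $[n,n-1]$ parity-check code $\CC \eqdef \{\vec{c}\in\F_{2}^{n} : \sum_{i}c_{i} = 0\}$, whose dual is $\dual{\CC} = \{\vec{0},\vec{1}\}$. The quotient $\F_{2}^{n}/\CC$ has exactly two cosets, the even-weight words and the odd-weight words, while $\unifs{w}^{\CC}$ puts all of its mass on the single coset whose parity matches that of $w$. Hence, for every radius $w$, $$ \Delta\left(u,\unifs{w}^{\CC}\right) = \frac{1}{2}, $$ and the same parity argument applies verbatim to $f_{\textup{RW},w}$. \end{remark}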
In summary, {\em a direct application} of Proposition \ref{propo:FBSDCod} is quite unsatisfactory for these distributions $\unifs{w}$ and $f_{\textup{RW},w}$. If we are willing to also make an assumption on the largest weight of a codeword, then certainly a direct application of Proposition \ref{propo:FBSDCod} is able to provide meaningful smoothing bounds for them. Indeed, the following theorem is obtained by just combining Propositions \ref{propo:FBSDCod}, \ref{propo:ABL} and \ref{propo:2LPB}. \begin{theorem}\label{theo:smoothingBoundsUnifRW} Let $\CC$ be a binary linear code of length $n$ and $\omega\in(0,1)$. Suppose that $\dmin(\dual{\CC}) = \dual{\delta} n$ and that $\dual{\CC}$ has no element of Hamming weight $\geq \beta n$ for some $\beta \in (\dual{\delta},1)$. We have $$ \frac{1}{n} \log_{2} \Delta\left(u,\unifs{\omega n}^{\CC}\right) \leq \mathop{\max}\limits_{\dual{\delta} \leq \tau \leq \beta} \left\{ \frac{1}{2} \min\left\{ c(\dual{\delta},\tau),d(\dual{\delta},\tau) \right\} + a(\omega,\tau) \right\} - h(\omega) $$ and $$ \frac{1}{n} \log_{2} \Delta\left(u,f_{\textup{RW},\omega n}^{\CC}\right) \leq \mathop{\max}\limits_{\dual{\delta} \leq \tau \leq \beta} \left\{ \frac{1}{2} \min\left\{ c(\dual{\delta},\tau),d(\dual{\delta},\tau) \right\} + \omega\log_{2}\left|1-2\tau\right| \right\} $$ where $a(\cdot,\cdot)$, $c(\cdot,\cdot)$ and $d(\cdot,\cdot)$ are defined respectively in Propositions \ref{prop:expansion}, \ref{propo:ABL} and \ref{propo:2LPB}. \end{theorem} {\bf \noindent Avoiding making an assumption on the largest dual codeword: the case of the Bernoulli distribution.} Even if the Bernoulli distribution has some drawbacks compared to the uniform distribution over a sphere when applying Proposition \ref{propo:FBSDCod} with random codes, it has a nice property concerning large weight codewords: the large weight dual codewords have a negligible contribution to the upper-bound of Proposition \ref{propo:FBSDCod}. To see this, let us first recall that \begin{equation} \widehat{\fber{p}}(\vec{x}) = \frac{1}{2^{n}} \; (1-2p)^{|\vec{x}|}. \end{equation} Therefore, by Proposition \ref{propo:FBSDCod} we have \begin{equation}\label{eq:BerToBound} \Delta(u,\fber{p}^{\CC}) \leq \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n} \Neq{t}{\dual{\CC}} (1-2p)^{2t}}. \end{equation} On the other hand, we have the following lemma which shows that large weight codewords can only have an exponentially small contribution to the above upper-bound. \begin{lemma}\label{lemma:LCodewords} Let $\CC$ be a linear code of length $n$ and let $t > n-\dmin(\CC)/2$. There is at most one codeword $\vec{c}\in\CC$ of weight $t$. \end{lemma} \begin{proof} Suppose by contradiction that there exist two distinct codewords $\vec{c},\vec{c}'\in\CC$ of Hamming weight $t$. By using the triangle inequality we obtain (where $\vec{1}$ denotes the all-one vector) \begin{align*} |\vec{c} - \vec{c}'| & \leq |\vec{c} - \vec{1}| + |\vec{1} - \vec{c}'| \\ &= 2\left( n-t \right) \\ &< \dmin(\CC) \end{align*} which contradicts the fact that $\CC$ has minimum distance $\dmin(\CC)$. \end{proof} Therefore, using Lemma \ref{lemma:LCodewords} in Equation \eqref{eq:BerToBound} gives, for $p\in(0,1/2]$, \begin{equation}\label{eq:BwithBer} \Delta(u,\fber{p}^{\CC}) \leq \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n-\dmin(\dual{\CC})/2} \Neq{t}{\dual{\CC}} (1-2p)^{2t}} + 2^{-\Omega(n)}.
\end{equation} In other words, large weight dual codewords (if they exist) have only an exponentially small contribution to our smoothing bound with the Bernoulli distribution. In principle, we could plug into Equation \eqref{eq:BwithBer} the bounds on the $\Neq{t}{\dual{\CC}}$'s given in Propositions \ref{propo:ABL} and \ref{propo:2LPB}. We will improve on the bounds obtained in this way by truncating the Bernoulli distribution. More precisely, we will \\ \begin{itemize} \item[$(i)$] prove that the Bernoulli distribution and its appropriately truncated version have the same smoothing properties, \item[$(ii)$] show that the truncated distribution keeps the same nice behaviour with respect to large weight dual codewords, \item[$(iii)$] show that we can apply Proposition \ref{propo:FBSDCod} to the truncated distribution and get the desired smoothing bound. \end{itemize} We obtain in this way: \begin{restatable}{theorem}{thFinalCode}\label{theo:finalUBSD} Let $\CC$ be a binary linear code of length $n$, $p \in (0,1/2]$ and $\varepsilon > 0$. Suppose that $\dmin(\dual{\CC}) \geq \dual{\delta} n$ for some $\dual{\delta}\in[0,1]$. We have asymptotically, \begin{multline*} \frac{1}{n} \log_{2} \Delta\left(u,\fber{p}^{\CC}\right) \leq \mathop{\max}\limits_{\dual{\delta} \leq \tau \leq 1 - \dual{\delta}/2} \{ \frac{1}{2} \min \left\{c(\dual{\delta},\tau),d(\dual{\delta},\tau) \right\} + \\ \mathop{\max}\limits_{(1-\varepsilon)p \leq \lambda \leq (1+\varepsilon)p} \left\{ \lambda\log_{2}p + (1-\lambda)\log_{2}(1-p) + a(\lambda,\tau) \right\} \} + O\left(\frac{1}{n}\right) \end{multline*} where $a(\cdot,\cdot)$, $c(\cdot,\cdot)$ and $d(\cdot,\cdot)$ are defined respectively in Propositions \ref{prop:expansion}, \ref{propo:ABL} and \ref{propo:2LPB}. \end{restatable} \begin{proof} See Appendix \ref{app:proofThFinalCode}. \end{proof} Let $i\in\{0,1\}$ and $p_{i}$ be the smallest $p\in(0,1/2]$ that enables one to reach $\Delta\left(u,\fber{p}^{\CC}\right) \leq 2^{-\Omega(n)}$ with \begin{itemize} \item Theorem \ref{theo:finalUBSD} when $i=0$, \item Equation \eqref{eq:BwithBer} and Propositions \ref{propo:ABL}, \ref{propo:2LPB} when $i =1$. \end{itemize} In Figure \ref{figure:compBerTrunc} we compare the smallest $p$ that enables one to reach $\Delta\left(u,\fber{p}^{\CC}\right) \leq 2^{-\Omega(n)}$ with Equation \eqref{eq:BwithBer} and with Theorem \ref{theo:finalUBSD}. As we can see, Theorem \ref{theo:finalUBSD} leads to significantly better bounds. Furthermore, it turns out that $p_{0}n$ is roughly equal to the smallest radius $w$ such that $\Delta(u,\unifs{w}^{\CC}) \leq 2^{-\Omega(n)}$ had we supposed that no codewords of weight $> n - \dmin(\dual{\CC})$ belong to $\dual{\CC}$. In other words, the tweak of truncating the Bernoulli distribution enables us to obtain, without any hypothesis on dual codewords of large Hamming weight, a smoothing bound which is as good as the one we would get with the uniform distribution over a sphere under this assumption.
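This last point can also be seen directly on the formulas: formally letting $\varepsilon$ tend to $0$ in Theorem \ref{theo:finalUBSD} (and assuming the continuity of $a(\cdot,\tau)$ in its first argument), the inner maximum collapses to its value at $\lambda = p$, where $\lambda\log_{2}p + (1-\lambda)\log_{2}(1-p) = -h(p)$, so that the right-hand side becomes $$ \mathop{\max}\limits_{\dual{\delta} \leq \tau \leq 1 - \dual{\delta}/2} \left\{ \frac{1}{2} \min\left\{ c(\dual{\delta},\tau),d(\dual{\delta},\tau) \right\} + a(p,\tau) \right\} - h(p), $$ which is precisely the exponent given by Theorem \ref{theo:smoothingBoundsUnifRW} for $\unifs{pn}$ with $\omega = p$ and $\beta = 1-\dual{\delta}/2$.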
\begin{center} \begin{figure} \includegraphics[height=6cm]{figure_comp_ber.pdf} \caption{Smoothing bounds for a code $\CC$ as function of $\dual{\delta} \eqdef \dmin(\dual{\CC})/n$ via Theorem \ref{theo:finalUBSD} (for $\varepsilon = 10^{-2}$) and Equation \eqref{eq:BwithBer} \label{figure:compBerTrunc}.} \end{figure} \end{center} \section{Smoothing Bounds: Lattice Case} \label{sec:SBLat} Given an $n$-dimensional lattice $\Lambda$ the aim of smoothing bounds is to give a non-trivial model of noise $\vec{e} \in \mathbb{R}^{n}$ for $(\vec{e}\mod \Lambda)\in\mathbb{R}^{n}/\Lambda$ (namely the reduction of $\vec{e}$ modulo $\Lambda$) to be uniformly distributed. Following Micciancio and Regev \cite{MR07}, the standard choice of noise is given by the Gaussian distribution, defined via $$ \forall \vec{x}\in \mathbb{R}^{n},\quad D_{s}(\vec{x}) \eqdef \frac{1}{s^{n}}\;\rho_{s}(\vec{x}) \quad \mbox{where} \quad \rho_{s}(\vec{x}) \eqdef e^{-\pi(|\vec{x}|_{2}/s)^{2}} \ . $$ The parametrization is chosen such that $s\sqrt{n/2\pi}$ is the standard deviation of $D_s$. Micciancio and Regev showed that when $\vec{e}$ is distributed according to $D_{s}$, choosing $s$ large enough enables $\vec{e} \mod \Lambda$ to be statistically close to the uniform distribution. However, following the intuition from the case of codes we will first analyze the case where $\vec e$ is sampled uniformly from a Euclidean ball. Interestingly, just as with codes where our methodology led to stronger bounds when the uniform distribution over a sphere was used to smooth rather than the Bernoulli distribution, we will obtain better results when we work with the uniform distribution over a ball. Fortunately, using concentration of the Gaussian measure one can translate results from the case where $\vec e$ is uniformly distributed over a ball to the case that it is sampled according to $D_s$; see Proposition~\ref{propo:gauss-to-unif}. This is analogous to the translation from results for the uniform distribution over a sphere to the Bernoulli distribution for codes elucidated in Proposition~\ref{propo:BerVSUnif}. For either choice of noise, to obtain a smoothing bound we are required to bound the statistical distance between the distribution of $\vec{e} \mod \Lambda$ if $\vec{e}$ has density $g$, and the uniform distribution over $\mathbb{R}^{n}/\Lambda$. It is readily seen that $\vec{e} \mod \Lambda$ has density $|\Lambda| g^{|\Lambda}$ which is defined as (see Definition \ref{def:perio} with the choice of Haar measures given in Table \ref{fig:groups}) $$ g^{|\Lambda}(\vec{x}) = \frac{1}{|\Lambda|}\; \sum_{\vec{y}\in\Lambda} g(\vec{x} + \vec{y}). $$ {\noindent \bf Notation.} For any $g : \mathbb{R}^{n} \rightarrow \mathbb{C}$, $$ g^{\Lambda} \eqdef |\Lambda| \; g^{|\Lambda}. $$ In the following proposition we specialize Corollary~\ref{coro:FB} to the case of lattices. \begin{proposition}\label{propo:FBSDLat} Let $\Lambda$ be an $n$-dimensional lattice. Let $g$ be some density function on $\mathbb{R}^{n}$ and $v$ be the density of the uniform distribution over $\mathbb{R}^{n}/\Lambda$. We have $$ \Delta\left(v,g^{\Lambda}\right) \leq \frac{1}{2}\;\sqrt{\sum_{\vec x \in \dual{\Lambda}\setminus \{\vec 0\}} |\widehat{g}(\vec x)|^{2}} \ . $$ \end{proposition} We will restrict our instantiations to functions $g$ whose Fourier transforms are radial, that is, $\widehat{g}(\vec{x})$ depends only on the Euclidean norm of $\vec{x}$, namely $|\vec{x}|_{2}$. 
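As a simple instance of such a radial Fourier transform, note that, with the standard normalization $\widehat{g}(\vec{y}) \eqdef \int_{\R^{n}} g(\vec{x})\, e^{-2i\pi \langle \vec{x},\vec{y}\rangle}\, d\vec{x}$, the Gaussian density satisfies $\widehat{D_{s}}(\vec{y}) = \rho_{1/s}(\vec{y})$, hence $|\widehat{D_{s}}(\vec{y})|^{2} = \rho_{1/(s\sqrt{2})}(\vec{y})$, and Proposition \ref{propo:FBSDLat} specializes for Gaussian noise to $$ \Delta\left(v,D_{s}^{\Lambda}\right) \leq \frac{1}{2}\;\sqrt{\sum_{\vec{x} \in \dual{\Lambda}\setminus \{\vec{0}\}} \rho_{1/(s\sqrt{2})}(\vec{x})}\,, $$ an $\ell_{2}$ analogue of the Gaussian mass $\sum_{\vec{x} \in \dual{\Lambda}\setminus\{\vec{0}\}}\rho_{1/s}(\vec{x})$ which governs the smoothing parameter of \cite{MR07}.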
\subsection{Smoothing Random Lattices} As with codes, we begin our investigation of smoothing lattices by considering the random case. However, defining a ``random lattice'' is much more involved than the analogous notion for random codes. Fortunately for us, we can apply the Siegel version of the Minkowski-Hlawka theorem to conclude that there exists a random lattice model which behaves very nicely from the perspective of ``test functions''. We first state the technical theorem that we require. \begin{theorem} [Minkowski-Hlawka-Siegel] \label{theo:Mink-Hl-Si} On the set of all lattices of covolume $M$ in $\R^n$ there exists a probability measure $\mu$ such that, for any Riemann integrable function $g(\vec{x})$ which vanishes outside some bounded region,\footnote{This statement holds for a larger class of functions. In particular it holds for our instantiation with the Gaussian distribution.} \[ \mathop{\mathbb E}_{\Lambda \sim \mu}\left(\sum_{\vec x \in \Lambda\setminus\{\vec 0\}}g(\vec x)\right) = \frac{1}{M}\int_{\R^n}g(\vec x)d\vec x \ . \] \end{theorem} As intuition for the above theorem, consider the case where $g$ is the indicator function of a bounded, measurable subset $S \subseteq \R^n$. Then, Theorem~\ref{theo:Mink-Hl-Si} promises that the expected number of lattice points (other than the origin\footnote{Note that as $\vec 0 \in \Lambda$ with certainty, there is really no ``randomness'' for this event.}) in $S$ is equal to the volume of $S$ over the covolume of the lattice. {\bf \noindent Uniform Distribution over a Ball.} Let $$ \gunif{w} \eqdef \frac{1_{\Bc_{w}}}{\vol w} $$ be the density of the uniform distribution over the Euclidean ball of radius $w$. Let us recall that $\vol w$ denotes the volume of any ball of radius $w$. From Theorem~\ref{theo:Mink-Hl-Si}, we may obtain the following proposition. This should be compared with Proposition~\ref{propo:BUnifRandCase}. \begin{proposition} \label{propo:Mink-Hl-Si-unif} On the set of all lattices of covolume $M$ in $\R^n$ there exists a probability measure $\nu$ such that, for any $w>0$, \[ \mathop{\mathbb E}_{\Lambda \sim \nu} \left(\Delta(u,\gunif{w}^\Lambda)\right) \leq \frac{1}{2}\;\sqrt{\frac{M}{\vol{w}}} . \] In particular, defining \[ w_0 \eqdef \sqrt{n/2\pi e} \; M^{1/n}, \] if $w > w_0$ we have \[ \mathop{\mathbb E}_{\Lambda \sim \nu} \left(\Delta(u,\gunif{w}^\Lambda)\right) \leq O(1)\;\left(\frac{w_0}{w}\right)^{n/2}. \] \end{proposition} \begin{proof} We define $\nu$ to be the procedure that samples a lattice according to the measure $\mu$ on lattices of covolume $M^{-1}$, and then outputs its dual. In the following chain, we first apply Proposition~\ref{propo:FBSDLat}; then, Jensen's inequality; then, the Minkowski-Hlawka-Siegel (MHS)~Theorem (Theorem~\ref{theo:Mink-Hl-Si}) to the function $|\widehat{\gunif{w}}|^2$; and, lastly, Parseval's Identity (Theorem~\ref{theo:Parseval}).
This yields: \begin{align*} \mathop{\mathbb E}_{\Lambda \sim \nu} \left(2\Delta(u,\gunif{w}^\Lambda)\right) &\leq \mathop{\mathbb E}_{\Lambda^* \sim \mu} \left(\sqrt{\sum_{\vec x \in \Lambda^*\setminus\{\vec 0\}}|\widehat{\gunif{w}}(\vec x)|^2}\right) \quad \text{(Proposition~\ref{propo:FBSDLat})} \\ &\leq \sqrt{\mathop{\mathbb E}_{\Lambda^* \sim \mu} \left(\sum_{\vec x \in \Lambda^*\setminus\{\vec 0\}}|\widehat{\gunif{w}}(\vec x)|^2\right)} \quad \text{(Jensen's Inequality)} \\ &=\sqrt{\frac{1}{M^{-1}} \; \left(\int_{\R^n}|\widehat{\gunif{w}}(\vec x)|^2d\vec x\right)} \quad \text{(MHS Theorem)} \\ &= \sqrt{M \int_{\R^n}|\gunif{w}(\vec x)|^2d\vec x} \quad \text{(Parseval's Identity)} \\ &= \sqrt{\frac{M}{V_n(w)^2}\int_{\R^n} 1_{\Bc_{w}}(\vec x)d\vec x}\\ & = \sqrt{\frac{M}{V_n(w)}} . \end{align*} For the ``in particular'' part of the proposition, we use Stirling's estimate to derive \[ \vol{w} = \frac{\pi^{n/2}\; w^n}{\Gamma(n/2+1)} = \frac{\pi^{n/2}\;w^n}{\left(\frac{n}{2e}\right)^{n/2}}\;(1+o(1))^n \] from which it follows that if \[ w > w_0 = \sqrt{n/2\pi e}\; M^{1/n} , \] we have \[ \sqrt{\frac{M}{V_n(w)}} \leq O(1) \left(\frac{w_0}{w}\right)^{n/2} \] which concludes the proof. \end{proof} It is easily verified that the value of $w_{0}$ defined in Proposition~\ref{propo:Mink-Hl-Si-unif} corresponds to the so-called Gaussian heuristic. We view the condition $w>w_{0}$ as the equivalent of the Gilbert-Varshamov bound for codes as we discussed just below Proposition~\ref{propo:BUnifRandCase}. In particular, as we need the support of the noise to have volume at least $M$ if we hope to smooth a lattice of covolume $M$, we see that the uniform distribution over a ball is optimal for smoothing random lattices, just as the uniform distribution over a sphere was optimal for smoothing random codes. {\bf \noindent Gaussian Noise.} We now turn to the case of Gaussian noise. Following the proof of Proposition~\ref{propo:Mink-Hl-Si-unif} to the point where we apply Parseval's identity, but replacing $\gunif{w}$ by $D_s$, we obtain that \[ \mathbb E \left(\Delta(u,D_s^\Lambda)\right) \leq \sqrt{M \int_{\R^n}|D_s(\vec x)|^2d\vec x} \ . \] To conclude, one uses the following routine computation: \begin{align*} \int_{\R^n}|D_s(\vec x)|^2d\vec x = \frac{1}{s^{2n}}\int_{\R^n}e^{-2\pi\left(\frac{|\vec x|_2}{s}\right)^2} d\vec x = \frac{1}{s^{2n}}\int_{\R^n}\rho_{s/\sqrt 2}(\vec x)d\vec x = \left(\frac{1}{s\sqrt{2}}\right)^n. \end{align*} Thus, we obtain: \begin{proposition} \label{propo:Mink-Hl-Si-gauss} On the set of all lattices of covolume $M$ in $\R^n$ there exists a probability measure $\nu$ such that, for any $s>0$, \[ \mathop{\mathbb E}_{\Lambda \sim \nu} \left(\Delta(u,D_s^\Lambda)\right) \leq \frac{1}{2}\; \sqrt{\frac{M}{\left(s\sqrt{2}\right)^n}} \ . \] In particular, if $s>s_0\eqdef M^{1/n}/\sqrt 2$, we have \[ \mathop{\mathbb E}_{\Lambda \sim \nu} \left(\Delta(u,D_s^\Lambda)\right) \leq \left(\frac{s_0}{s}\right)^{n/2}. \] \end{proposition} To compare Propositions \ref{propo:Mink-Hl-Si-unif} and \ref{propo:Mink-Hl-Si-gauss}, we note that a random vector sampled according to $D_{s}$ has an expected Euclidean norm given by $s\frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{n}{2}\right)} \sim s\sqrt{\frac{n}{2\pi}}$. So, it is fair to compare the effectiveness of smoothing with a parameter $s$ Gaussian distribution and the uniform distribution over a ball of radius $s\sqrt{\frac{n}{2\pi}}$.
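Indeed, each coordinate of a vector $\vec{X}$ distributed according to $D_{s}$ is a centered Gaussian of variance $s^{2}/(2\pi)$, so writing $\vec{X} = \frac{s}{\sqrt{2\pi}}\,\vec{Z}$ with $\vec{Z}$ a standard normal vector gives $$ \mathbb{E}\left(|\vec{X}|_{2}\right) = \frac{s}{\sqrt{2\pi}}\;\mathbb{E}\left(|\vec{Z}|_{2}\right) = \frac{s}{\sqrt{2\pi}}\cdot\frac{\sqrt{2}\;\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)} = s\,\frac{\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\,\Gamma\left(\frac{n}{2}\right)}. $$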
We note that, if $s_0$ is as in Proposition~\ref{propo:Mink-Hl-Si-gauss} and $w_0$ is the radius of the so-called Gaussian heuristic, then \[ s_0\sqrt{\frac{n}{2\pi}} = \frac{M^{1/n}}{\sqrt 2} \sqrt{\frac{n}{2\pi}} = w_0 \; \sqrt{e/2} . \] Thus, we conclude that the parameter $s_0$ from Proposition~\ref{propo:Mink-Hl-Si-gauss} is larger than what we could hope by a factor $\sqrt{e/2}$. \subsection{Connecting Uniform Ball Distribution to Gaussian} However, recall that in the code-case we argued that, as the Hamming weight of a vector sampled according to the Bernoulli distribution is tightly concentrated, we could obtain the same smoothing bound for the Bernoulli distribution as we did for the uniform sphere distribution, essentially by showing that we can approximate a Bernoulli distribution by a convex combination of uniform sphere distributions. Similarly, we can relate the Gaussian distribution to the uniform distribution over a ball, and thereby remove this additional $\sqrt{e/2}$ factor. We state a general proposition that allows us to translate smoothing bounds for the uniform ball distribution to the Gaussian distribution. It guarantees that if the uniform ball distribution smooths whenever $w>w_0$, the Gaussian distribution smooths whenever $s > w_0 \; \sqrt{\frac{2\pi}{n}}$. While the intuition for the argument is the same as that which we used in the code-case, the argument is itself a bit more sophisticated. \begin{restatable}{proposition}{GaussianVSUnif} \label{propo:gauss-to-unif} Let $\Lambda$ be a random lattice of covolume $M$ and let $u \eqdef u_{\R^n/\Lambda}$ be the uniform distribution over its cosets. Suppose that for all $w>w_0$ there is a function $f(n)$ such that \[ \mathbb{E}_{\Lambda}\left(\Delta(u,\gunif{w}^{\Lambda})\right) \leq f(n) \left(\frac{w_0}{w}\right)^{n/2}. \] Let $s_0 \eqdef w_0 \sqrt{\frac{2\pi}{n}}$. Then, for all $s>s_0$, defining $\eta \eqdef 1-\frac{s_0}{s} \in (0,1)$, we have \[ \mathbb{E}_{\Lambda}\left(\Delta(u,D_s^{\Lambda})\right) \leq \exp(-\frac{\eta^2}{8} \; n) + f(n) \left(\frac{s_0}{s}\right)^{n/4}. \] \end{restatable} \begin{proof} See Appendix \ref{app:GaussianUnif}. \end{proof} Combining the above proposition with Theorem~\ref{theo:Mink-Hl-Si}, setting $f(n)=O(1)$, we obtain the following theorem. \begin{theorem} \label{theorem:random-lattice-gauss-to-unif} Let $\Lambda$ be a random lattice of covolume $M$ sampled according to $\nu$, let $u \eqdef u_{\R^n/\Lambda}$ be the uniform distribution over its cosets, and let \[s_0 \eqdef M^{1/n}/\sqrt{e}.\] Then, for any $s>s_0$, setting $\eta \eqdef 1-\frac{s_0}{s} \in (0,1)$, we have \[ \mathbb{E}_{\Lambda}\left(\Delta(u,D_s^{\Lambda})\right) \leq \exp(-\frac{\eta^2}{8} \; n) + O(1) \left(\frac{s_0}{s}\right)^{n/4}. \] \end{theorem} \subsection{Smoothing Random $q$-ary Lattices} While the method of sampling lattices promised by the Minkowski-Hlawka-Siegel Theorem (Theorem~\ref{theo:Mink-Hl-Si}) is indeed very convenient for computations, it does not tell us much about how to explicitly sample from the distribution. Furthermore it is not very relevant if one is interested in the random lattices that are used in cryptography. For a more concrete sampling procedure that is relevant to cryptography, we can consider the randomized Construction A (or, more precisely, its dual), which gives a very popular random model of lattices which are easily constructed from random codes. Specifically, for a prime $q$ and a linear code $\CC \subseteq (\Z/q\Z)^n$ we obtain a lattice as follows. 
First, we ``lift'' the codewords $\vec c \in \CC$ to vectors in $\R^n$ in the natural way by identifying $\Z/q\Z$ with the set $\{0,1,\dots,q-1\}$; denote the lifted vector as $\widetilde{\vec{c}}$. Then, we can define the following lattice \[ \Lambda_{\CC} \eqdef \{\widetilde{\vec{c}} : \vec c \in \CC\} + q\Z^n. \] In other words: $\Lambda_{\CC}$ consists of all vectors in the integer lattice $\Z^n$ whose reductions modulo $q$ give an element of $\CC$. Fix integers $1 \leq k \leq n$, a prime $q$ and a desired covolume $M$. We sample a random lattice $\Lambda$ as follows \begin{itemize} \item First, sample a random linear code $\CC\subseteq (\Z/q\Z)^n$ of dimension $k$ (recall this means that we sample a random $k \times n$ matrix $\vec G$ and define $\CC=\{\vec m \vec G:\vec m \in (\Z/q\Z)^k\}$), \item Then, we scale $\Lambda_{\CC}$ by $\frac{1}{M^{1/n}}\;\frac{1}{q^{1-k/n}}$, \item Lastly, we output the dual of $\frac{1}{M^{1/n}}\;\frac{1}{q^{1-k/n}}\Lambda_{\CC}$. \end{itemize} Notice that the scaling is chosen so that, as long as $\vec G$ is of full rank, the lattice $\Lambda$ we output has the desired covolume $M$. We denote this procedure of sampling $\Lambda$ by $\nu_{\textup{A}}$ (the dependence on $q$, $k$ and $n$ is left implicit). The important fact is that, up to an error term (which decreases as $q$ increases), the expected number of lattice points from $\dual \Lambda$ in a Euclidean ball of radius $r$ is roughly $\frac{\vol r}{M}$, as one would hope. \begin{proposition}[{\cite[Lemma 7.9.2]{Z14}}] \label{propo:unif-of-q-ary-lattice} For every $n \geq 2$, $1 \leq k < n$ and prime power $q$, for $\Lambda \sim \nu_{\textup{A}}$ the expected number of lattice points from $\dual \Lambda$ in a Euclidean ball of radius $w \eqdef t\sqrt{n}$ satisfies \[ \sqrt[n]{\frac{M\; \mathbb E_{\Lambda}(\Nb{\leq w}{\dual\Lambda})}{\vol{w}}} = 1 \pm \delta/t \quad \mbox{where } \delta \eqdef \frac{1}{q^{1-k/n}}. \] \end{proposition} We now turn to bounding the expected statistical distance between $u$ and $\gunif{w}^{\Lambda}$, where $\Lambda\sim\nu_{\textup{A}}$ and $w>0$ is the radius of the Euclidean ball from which the noise is uniformly sampled. First, we state an explicit formula for the Fourier transform of $1_{\Bc_{w}}$, the indicator function of a Euclidean ball of radius $w$, in terms of \emph{Bessel functions}. \begin{notation} For a positive real number $\mu>0$, we denote by $J_\mu:\R\to\R$ the Bessel function of the first kind of order $\mu$. \end{notation} The important fact concerning Bessel functions that we will use is the following. \begin{fact} We have \begin{align} \label{eq:fourier-transform-bessel} \widehat{1_{\Bc_w}}(\vec y) = \left(\frac{w}{|\vec y|_2}\right)^{n/2} J_{n/2}(2\pi w|\vec y|_2) . \end{align} \end{fact} We will refrain from providing an explicit formula for Bessel functions, and instead use the following upper-bound as a black-box. \begin{proposition}[\cite{K06}]\label{propo:bound-on-bessel} For any $x \in \R$ we have \[ |J_{n/2}(x)| \leq |x|^{-1/3} . \] \end{proposition} Using this proposition, we first prove a technical lemma that will be reused when we discuss smoothing arbitrary lattices. In order to state the lemma, we introduce the following auxiliary function. \begin{notation} \label{not:g_w} For a real $w>0$, we define $g_w:\R \to \R$ via \[ g_w(t) \eqdef \frac{1}{\vol w}\widehat{{1}_{\Bc_w}}(\vec x)^2 \] where $\vec x$ is any vector in $\R^n$ of norm $t$. 
Note that as $\widehat{{1}_{\Bc_w}}(\vec x)$ depends only on $|\vec x|_2$, this is indeed well-defined. \end{notation} The following lemma leverages Proposition~\ref{propo:bound-on-bessel} to upper-bound $g_w$ on a closed interval. \begin{lemma} \label{lemma:varphi-bound} For any $w>0$, any $a > 0$ and $b = \left(1+\frac{1}{n}\right)a$ we have, for some constant $C>0$, \[ \max_{a \leq t \leq b}g_w(t) \leq \frac{C}{\vol b w^{2/3}}\;\frac{1}{a^{2/3}} . \] \end{lemma} \begin{proof} First, we notice that for all $t \in [a,b]$ \begin{align*} \vol t = \left(\frac{t}{b}\right)^n\vol b \geq \left(\frac{a}{b}\right)^n\vol b = \left(1+\frac{1}{n}\right)^{-n}\vol b \geq \frac{1}{C'}\vol b \end{align*} for some constant $C'>0$. We now use Proposition~\ref{propo:bound-on-bessel} to derive \begin{align*} \max_{a \leq t \leq b}\;g_w(t) \leq \frac{C'}{\vol b} \;\max_{a \leq t \leq b} J_{n/2}(2\pi wt)^2 \leq \frac{C}{\vol bw^{2/3}}\;\frac{1}{a^{2/3}} \end{align*} for an appropriate constant $C>0$, which concludes the proof. \end{proof} We now provide the main theorem of this section. It demonstrates that to smooth our ensemble of random $q$-ary lattices (in expectation) with the uniform distribution over the ball of radius $w$, it still suffices to choose $w > w_0 \eqdef \sqrt{n/2\pi e}\;M^{1/n}$, assuming $q$ is not too small. \begin{theorem} \label{theo:smoothing-q-ary} Let $n>2$ and $1 \leq k < n$. Let $q$ be a prime and set $\gamma \eqdef \frac{n^{3/2}}{q^{1-k/n}}$. Let $\Lambda \sim \nu_{\textup{A}}$. For some constant $C>0$, we have \[ \mathbb E_{\Lambda}\left(\Delta(u,\gunif{w}^{\Lambda})\right) \leq C \left(\frac{n}{w}\right)^{1/3} e^{\gamma/2} \sqrt{\frac{M}{\vol w}} . \] In particular, if $w>w_0 \eqdef \sqrt{n/2\pi e}\;M^{1/n}$, we have \[ \mathbb E_{\Lambda}\left(\Delta(u,\gunif{w}^{\Lambda})\right) \leq O\left(\left(\frac{n}{w}\right)^{1/3} e^{\gamma/2}\right) \left(\frac{w_0}{w}\right)^{n/2} . \] \end{theorem} \begin{proof} Let $t_j\eqdef\left(1+\frac{1}{n}\right)^j$ for $j\in\mathbb{N}$ and \[ N_j \eqdef \sharp\{\dual{\vec x} \in \dual \Lambda : t_j \leq |\dual{\vec x}|_2 < t_{j+1}\} \quad ; \quad \varphi_j \eqdef \max_{t_j \leq t \leq t_{j+1}}g_w(t). \] Now, we apply Proposition~\ref{propo:FBSDLat} and the above definitions to obtain \begin{align*} \mathbb E_{\Lambda}\left(2\Delta(u,\gunif{w}^{\Lambda})\right) &\leq \mathbb E_{\Lambda}\left(\sqrt{\sum_{\vec x \in \dual{\Lambda}\setminus \{\vec 0\}}|\widehat{\gunif{w}}(\vec x)|^2}\right) \\ &\leq \sqrt{\frac{1}{\vol w}\mathbb E_{\Lambda} \left(\sum_{\vec x \in \dual{\Lambda}\setminus \{\vec 0\}}g_w(\vec x)\right)} \quad (\mbox{Jensen's inequality}) \\ &\leq \sqrt{\frac{1}{\vol w}\mathbb E_{\Lambda} \left(\sum_{j=0}^{\infty}N_j\varphi_j\right)} \\ &\leq \sqrt{\frac{1}{\vol w} \sum_{j=0}^{\infty} \mathbb E\left(\Nb{\leq t_{j+1}}{\dual\Lambda}\right)\varphi_j} \ . \end{align*} By Proposition~\ref{propo:unif-of-q-ary-lattice}, we may upper-bound \begin{align} \mathbb E_{\Lambda}\left(\Nb{\leq t_{j+1}}{\dual\Lambda}\right) \leq M \; \vol{t_{j+1}}\left(1 + \left(\frac{\sqrt n}{\left(1+\frac{1}{n}\right)^{jn}q^{1-k/n}}\right)\right)^n \ . \end{align} Now, recalling $\gamma = \frac{n^{3/2}}{q^{1-k/n}}$ we have for any $j \geq 0$ \[ \left(1 + \left(\frac{\sqrt n}{\left(1+\frac{1}{n}\right)^{jn}q^{1-k/n}}\right)\right)^n \leq \left(1 + \left(\frac{\sqrt n}{q^{1-k/n}}\right)\right)^n \leq e^{n\frac{\sqrt{n}}{q^{1-k/n}}} = e^\gamma.
\] Thus, we conclude \begin{align*} \mathbb E_{\Lambda}\left(2\Delta(u,\gunif{w}^{\Lambda})\right) \leq \sqrt{\frac{e^\gamma M}{\vol w} \sum_{j=0}^{\infty}\vol{t_{j+1}}\varphi_j} \ . \end{align*} Now, by Lemma~\ref{lemma:varphi-bound} we have $\varphi_j \leq \frac{C_1}{\vol{t_{j+1}}w^{2/3}}\frac{1}{t_j^{2/3}}$ for all $j \geq 0$. Hence, \begin{align*} \sum_{j=0}^{\infty}\vol{t_{j+1}}\varphi_j &\leq \frac{C_1}{w^{2/3}}\sum_{j=0}^{\infty}\frac{\vol{t_{j+1}}}{\vol{t_{j+1}}}\frac{1}{t_j^{2/3}}\\ &= \frac{C_1}{w^{2/3}}\sum_{j=0}^{\infty}\frac{1}{(1+1/n)^{2j/3}}\\ &= \frac{C_1}{w^{2/3}}\frac{1}{1-(1+1/n)^{-2/3}}\\ &\leq \frac{C_2 \; n^{2/3}}{w^{2/3}} \ , \end{align*} for an appropriate constant $C_2>0$. Thus, putting everything together we derive \begin{align*} \mathbb E_{\Lambda}\left(\Delta(u,\gunif{w}^{\Lambda})\right) \leq \sqrt{\frac{e^\gamma M}{2\vol w}\; \frac{C_2 n^{2/3}}{w^{2/3}}} \leq C \left(\frac{n}{w}\right)^{1/3} e^{\gamma/2} \sqrt{\frac{M}{\vol w}} \end{align*} for some constant $C>0$. The ``in particular'' part of the theorem follows analogously to the corresponding argumentation (Stirling's estimate) used in the proof of Proposition \ref{propo:Mink-Hl-Si-unif}. \end{proof} Next, turning to Gaussian noise, we could again prove a smoothing bound ``directly,'' but this would lose the same factor of $\sqrt{e/2}$ as we had earlier. Instead, we apply Proposition~\ref{propo:gauss-to-unif} with the function $f(n) = O\left(\left(\frac{n}{w}\right)^{1/3} e^{\gamma/2}\right)$ to conclude the following. \begin{theorem} \label{theorem:random-q-ary-gauss-to-unif} Let $n>2$ and $1 \leq k < n$. Let $q$ be a prime and set $\gamma \eqdef \frac{n^{3/2}}{q^{1-k/n}}$. Let $\Lambda$ be a random $q$-ary lattice sampled according to $\nu_A$, let $u=u_{\R^n/\Lambda}$ be the uniform distribution over its cosets, and let $$ s_0 \eqdef M^{1/n}/\sqrt{e}. $$ Then, for any $s>s_0$, setting $\eta \eqdef 1-\frac{s_0}{s} \in (0,1)$, we have \[ \mathbb{E}_{\Lambda}\left(\Delta\left(u,D_s^\Lambda\right)\right) \leq \exp\left(-\frac{\eta^2}{8} \; n\right) + O(1)\;(s_0/s)^{n/4} \; e^{\gamma/2}. \] \end{theorem} \subsection{Smoothing Arbitrary Lattices} We now turn our attention to the task of smoothing arbitrary lattices. Analogously to how we used the minimum distance of the dual code to give our smoothing bound for worst-case codes, we will use the shortest vector of the dual lattice in order to provide our smoothing bound for worst-case lattices. The lemma that we will apply is the following, where $$ \CKL \eqdef 2^{0.401}. $$ \begin{lemma}[{\cite[Lemma 3]{PS09}}]\label{lemma:KLt} For any $n$-dimensional lattice $\Lambda$, $$ \forall t \geq \lambda_{1}(\Lambda), \quad N_{\leq t}(\Lambda) \leq \frac{\vol{t}}{\vol{\lambda_{1}(\Lambda)}} \; \CKL^{n(1+o(1))}. $$ \end{lemma} \begin{remark} This lemma is a consequence of the Kabatiansky--Levenshtein bound \cite{KL78} on the size of spherical codes, historically known as the ``second linear programming bound''. This is why we may refer to the aforementioned bound of Lemma \ref{lemma:KLt} as the second linear programming bound. \end{remark} We begin by considering the effectiveness of smoothing with noise uniformly sampled from the ball. The following theorem is proved using similar techniques to those we used for Theorem~\ref{theo:smoothing-q-ary}, although instead of using Proposition~\ref{propo:unif-of-q-ary-lattice} to bound the $\Nb{\leq t}{\dual \Lambda}$'s, we use Lemma~\ref{lemma:KLt}.
\begin{theorem}\label{theo:bSDEuc} Let $\Lambda$ be an $n$-dimensional lattice and $u \eqdef u_{\mathbb R^n/\Lambda}$ be the uniform distribution over its cosets. Then, it holds that $$ \Delta\left(u,\gunif{w}^{\Lambda} \right) \leq \sqrt{\frac{\CKL^{n(1+o(1))}}{\vol{\lambda_{1}(\dual{\Lambda})} \; \vol{w}} }. $$ In particular, setting \[w_0 \eqdef n \; \frac {\CKL^{1+o(1/n)}} {2 \pi \; e \; \lambda_1(\Lambda^*)}\] for all $w>w_0$, it holds that \[ \Delta\left(u,\gunif{w}^{\Lambda} \right) \leq O(1) (w_0/w)^{n/2}. \] \end{theorem} \begin{proof} Define $$ t_0 \eqdef \lambda_1(\dual \Lambda), \quad t_{j+1} \eqdef \left(1+\tfrac{1}{n}\right)t_j \quad \mbox{and} \quad \varphi_j\eqdef \max_{t_j \leq t \leq t_{j+1}}\{g_w(t)\} ~~\text{ for } j\geq 0, $$ where we recall the definition of $g_w(t) = \frac{1}{\vol w}\widehat{{1}_{\Bc_w}}(\vec x)^2$ with $|\vec x|_2=t$ (see Notation~\ref{not:g_w}). We also define \[ N_j \eqdef \sharp\{\dual{\vec x} \in \dual \Lambda: t_j \leq |\dual{\vec x}|_2 \leq t_{j+1}\} \ . \] With this notation and Proposition~\ref{propo:FBSDLat} we have \begin{align} 2\Delta\left(u,\gunif{w}^{\Lambda} \right) &\leq \sqrt{\sum_{\vec x \in \dual{\Lambda}\setminus \{\vec 0\}}|\widehat{\gunif{w}}(\vec x)|^2} \nonumber \\ &\leq \sqrt{\frac{1}{\vol w}\sum_{\vec x \in \dual{\Lambda}\setminus \{\vec 0\}}g_w(\vec x)} \nonumber \\ &\leq \sqrt{\frac{1}{\vol w}\sum_{j=0}^{\infty}N_j\varphi_j} \nonumber \\ &\leq \sqrt{\frac{1}{\vol w}\sum_{j=0}^\infty \Nb{\leq t_{j+1}}{\dual \Lambda} \varphi_j} \label{eq:worst-case-smooth-start} \ . \end{align} By Lemma~\ref{lemma:varphi-bound}, for some constant $C_1>0$, we obtain \[ \varphi_j \leq \frac{C_1}{V_n(t_{j+1})w^{2/3}}\frac{1}{t_j^{2/3}} \ . \] Combining this with the upper-bound on $\Nb{\leq t_{j+1}}{\dual \Lambda}$ provided by Lemma~\ref{lemma:KLt} (note that $t_{j+1} \geq \lambda_1(\dual \Lambda)$ for all $j \geq 0$), we find \begin{align*} \sum_{j=0}^\infty \Nb{\leq t_{j+1}}{\dual \Lambda} \varphi_j &\leq \sum_{j=0}^\infty \frac{V_n(t_{j+1})}{V_n(\lambda_1(\dual \Lambda))}\CKL^{n(1+o(1))}\; \frac{C_1}{V_n(t_{j+1})w^{2/3}}\; \frac{1}{t_j^{2/3}} \\ &= \frac{\CKL^{n(1+o(1))}}{V_n(\lambda_1(\dual \Lambda))w^{2/3}}\;\sum_{j=0}^{\infty}\frac{1}{t_{j}^{2/3}} \\ &= \frac{\CKL^{n(1+o(1))}}{V_n(\lambda_1(\dual \Lambda))w^{2/3}}\; \sum_{j=0}^{\infty}\frac{1}{\lambda_1(\dual \Lambda)^{2/3}\left(1+\frac{1}{n}\right)^{2j/3}} \\ &\leq \frac{\CKL^{n(1+o(1))}}{V_n(\lambda_1(\dual \Lambda))w^{2/3}}\; \left(\frac{n}{w \lambda_{1}(\dual{\Lambda})}\right)^{2/3}. \end{align*} In the above, all necessary constants were absorbed into the $\CKL^{o(n)}$ term. Combining this with \eqref{eq:worst-case-smooth-start}, we obtain the first part of the theorem. The ``in particular'' part again follows using Stirling's approximation. \end{proof} Next, we can consider the effectiveness of smoothing with the Gaussian distribution. As usual, we could follow the steps of the proof of Theorem~\ref{theo:bSDEuc} and obtain the same result, but with an additional multiplicative factor of $\sqrt{\frac{e}{2}}$. That is, we obtain \begin{theorem}\label{theo:bSDEgauss} Let $\Lambda$ be an $n$-dimensional lattice and $u \eqdef u_{\mathbb R^n/\Lambda}$ be the uniform distribution over its cosets. Then, it holds. $$ \Delta\left(u, D_{s}^{\Lambda} \right) \leq \sqrt{\frac{\CKL^{n(1+o(1))}}{\vol{\lambda_{1}(\dual{\Lambda})} \; \vol{s\sqrt{n/(2\pi)}} } \; \left(\frac{e}{2}\right)^{n/2}}. 
$$ In particular, setting \[s_0 \eqdef \sqrt n \; \frac {\CKL^{1+o(1/n)}} {2\sqrt {\pi e} \; \lambda_1(\Lambda^*)}, \] it holds for any $s> s_0$ that $\Delta\left(u, D_{s}^{\Lambda} \right) \leq O(1) \; (s_0 / s)^{n/2}$. \end{theorem} However, as usual it is more effective to combine the bound for the uniform ball distribution and decompose the Gaussian as a convex combination of uniform ball distributions, {\em i.e.} to apply Proposition~\ref{propo:gauss-to-unif}. In this way, we can obtain the following theorem, improving the smoothing bound $s_0$ by another $\sqrt {e/2}$ factor. In the following theorem, we are setting the $f(n)$ function of Proposition~\ref{propo:gauss-to-unif} with the $O(1)$ term in the bound of Theorem~\ref{theo:bSDEuc}. \begin{theorem} \label{theo:bSDEgauss_better} Let $\Lambda$ be an $n$-dimensional lattice, $u \eqdef u_{\mathbb R^n/\Lambda}$ the uniform distribution over its cosets, and \[ s_0 \eqdef \sqrt n \; \frac{\CKL^{1+o(1/n)}}{\sqrt{2\pi} \; e \; \lambda_1(\Lambda^*)} \ . \] Then, for any $s>s_0$ and letting $\eta\eqdef= 1 - \frac{s_0}{s} \in (0, 1)$, it holds that \[\Delta\left(u, D_{s}^{\Lambda} \right) \leq \exp\left(- \frac{\eta^{2}}{8} \; n\right) + O(1) \; \left(\frac {s_0} {s}\right)^{n/4}.\] \end{theorem} \section{Acknowledgement} We would like to thank Iosif Pinelis for help with the proof of Proposition~\ref{propo:gauss-to-unif}. \bibliographystyle{alpha} \begin{thebibliography}{MRJW77} \bibitem[ABL01]{ABL01} Alexei~E. Ashikhmin, Alexander Barg, and Simon Litsyn. \newblock Estimates of the distance distribution of codes and designs. \newblock {\em Electron. Notes Discret. Math.}, 6:4--14, 2001. \bibitem[ACKL05]{ACKL05} Alexei~E. Ashikhmin, G{\'{e}}rard~D. Cohen, Michael Krivelevich, and Simon Litsyn. \newblock Bounds on distance distributions in codes of known size. \newblock {\em {IEEE} Trans. Inf. Theory}, 51(1):250--258, 2005. \bibitem[ADRS15]{ADRS15} Divesh Aggarwal, Daniel Dadush, Oded Regev, and Noah Stephens{-Davidowitz}. \newblock Solving the shortest vector problem in {$2^n$} time using discrete {Gaussian} sampling. \newblock In {\em Proceedings of the forty-seventh annual ACM symposium on Theory of computing}, pages 733--742, 2015. \bibitem[Ale11]{A11} Michael Alekhnovich. \newblock More on average case vs approximation complexity. \newblock {\em Computational Complexity}, 20(4):755--786, 2011. \bibitem[Ban93]{B93} Wojciech Banaszczyk. \newblock New bounds in some transference theorems in the geometry of numbers. \newblock {\em Mathematische Annalen}, 296(1):625--635, 1993. \bibitem[Bas65]{B65} LA~Bassalygo. \newblock New upper bounds for codes correcting errors. \newblock {\em Probl. Peredachi Inform}, 1(4):41--44, 1965. \bibitem[BLVW19]{BLVW19} Zvika Brakerski, Vadim Lyubashevsky, Vinod Vaikuntanathan, and Daniel Wichs. \newblock Worst-case hardness for {LPN} and cryptographic hashing via code smoothing. \newblock In {\em Annual international conference on the theory and applications of cryptographic techniques}, pages 619--635. Springer, 2019. \bibitem[CE03]{CE03} Henry Cohn and Noam Elkies. \newblock New upper bounds on sphere packings {I}. \newblock {\em Ann. of Math}, (157-2):689--714, 2003. \bibitem[Chu97]{C97} Fan R.~K. Chung. \newblock {\em Spectral graph theory}, volume~92 of {\em CBMS Regional Conference Series in Mathematics}. \newblock American Mathematical Society, 1997. \bibitem[DL98]{DL98} Philippe Delsarte and Vladimir~Iossifovitch Levenshtein. \newblock Association schemes and coding theory. \newblock {\em IEEE Trans. 
Inform. Theory}, 44(6):2477--2504, 1998. \bibitem[DST19]{DST19} Thomas {Debris-Alazard}, Nicolas Sendrier, and Jean-Pierre Tillich. \newblock Wave: A new family of trapdoor one-way preimage sampleable functions based on codes. \newblock In {\em Advances in Cryptology - ASIACRYPT~2019}, LNCS, Kobe, Japan, December 2019. \bibitem[DT17]{DT17} Thomas {Debris-Alazard} and Jean-Pierre Tillich. \newblock Statistical decoding. \newblock preprint, January 2017. \newblock arXiv:1701.07416. \bibitem[GPV08]{GPV08} Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. \newblock Trapdoors for hard lattices and new cryptographic constructions. \newblock In {\em Proceedings of the fortieth annual ACM symposium on Theory of computing}, pages 197--206, 2008. \bibitem[IS98]{IS98} Mourad~E.H. Ismail and Plamen Simeonov. \newblock Strong asymptotics for {Krawtchouk} polynomials. \newblock {\em Journal of Computational and Applied Mathematics}, pages 121--144, 1998. \bibitem[KL78]{KL78} Grigory Kabatiansky and Vladimir~I. Levenshtein. \newblock Bounds for packings on a sphere and in space. \newblock {\em Problems of Information Transmission}, (14):1--17, 1978. \bibitem[Kl{\o}07]{K07} Torleiv Kl{\o}ve. \newblock {\em Codes for Error Detection}, volume~2 of {\em Series on Coding Theory and Cryptology}. \newblock WorldScientific, 2007. \bibitem[Kra06]{K06} Ilia Krasikov. \newblock Uniform bounds for bessel functions. \newblock {\em Journal of Applied Analysis}, 12:83--91, 06 2006. \bibitem[Lev79]{L79} Vladimir~I. Levenshtein. \newblock On bounds for packings in $n$-dimensional euclidean space. \newblock {\em Dokl. Akad. Nauk SSSR}, 245:1299--1303, 1979. \bibitem[Lev95]{L95} Vladimir~I. Levenshtein. \newblock Krawtchouk polynomials and universal bounds for codes and designs in hamming spaces. \newblock {\em {IEEE} Trans. Inf. Theory}, 41(5):1303--1321, 1995. \bibitem[LLB22]{LLB22} Laura Luzzi, Cong Ling, and Matthieu~R. Bloch. \newblock Secret key generation from {Gaussian} sources using lattice-based extractors. \newblock {\em CoRR}, abs/2206.10443, 2022. \bibitem[LLBS14]{LLBS14} Cong Ling, Laura Luzzi, Jean-Claude Belfiore, and Damien Stehl{\'e}. \newblock Semantically secure lattice codes for the {Gaussian} wiretap channel. \newblock {\em IEEE Transactions on Information Theory}, 60(10):6399--6416, 2014. \bibitem[McE78]{M78} Robert~J. McEliece. \newblock {\em A Public-Key System Based on Algebraic Coding Theory}, pages 114--116. \newblock Jet Propulsion Lab, 1978. \newblock DSN Progress Report 44. \bibitem[MR07]{MR07} Daniele Micciancio and Oded Regev. \newblock Worst-case to average-case reductions based on {Gaussian} measures. \newblock {\em SIAM Journal on Computing}, 37(1):267--302, 2007. \bibitem[MRJW77]{MRRW77} Robert~J. McEliece, Eugene~R. Rodemich, Howard~Rumsey Jr., and Lloyd~R. Welch. \newblock New upper bounds on the rate of a code via the {Delsarte-MacWilliams} inequalities. \newblock {\em {IEEE} Trans. Inf. Theory}, 23(2):157--166, 1977. \bibitem[MTSB13]{MTSB13} Rafael Misoczki, Jean-Pierre Tillich, Nicolas Sendrier, and Paulo S. L.~M. Barreto. \newblock {MDPC-McEliece}: New {McEliece} variants from moderate density parity-check codes. \newblock In {\em Proc. IEEE Int. Symposium Inf. Theory - ISIT}, pages 2069--2073, 2013. \bibitem[PS09]{PS09} Xavier Pujol and Damien Stehl{\'{e}}. \newblock Solving the shortest lattice vector problem in time 2\({}^{\mbox{2.465n}}\). \newblock {\em {IACR} Cryptol. ePrint Arch.}, 2009:605, 2009. \bibitem[vL99]{L99} Jacobus~Hendricus van Lint. 
\newblock {\em Introduction to coding theory}. \newblock Graduate texts in mathematics. Springer, 3rd edition edition, 1999. \bibitem[Wai19]{W19} Martin~J Wainwright. \newblock {\em High-dimensional statistics: A non-asymptotic viewpoint}, volume~48. \newblock Cambridge University Press, 2019. \bibitem[YZ21]{YZ20} Yu~Yu and Jiang Zhang. \newblock Smoothing out binary linear codes and worst-case sub-exponential hardness for {LPN}. \newblock In Tal Malkin and Chris Peikert, editors, {\em Advances in Cryptology - {CRYPTO} 2021 - 41st Annual International Cryptology Conference, {CRYPTO} 2021, Virtual Event, August 16-20, 2021, Proceedings, Part {III}}, volume 12827 of {\em Lecture Notes in Computer Science}, pages 473--501. Springer, 2021. \bibitem[Zam14]{Z14} Ram Zamir. \newblock {\em Lattice Coding for Signals and Networks: A Structured Coding Approach to Quantization, Modulation and Multiuser Information Theory}. \newblock Cambridge University Press, 2014. \end{thebibliography} \appendix \section{Proof of Proposition \ref{propo:BerVSUnif}}\label{app:BerVSUnif} Our aim in this section is to prove the following proposition \BerVSUnif* Roughly speaking, this proposition is a consequence of the fact that a Bernoulli distribution concentrates Hamming weights over a small number of slices close to the expected weight (here $np$) and, on each slice the Bernoulli distribution is uniform. Let us introduce the truncated Bernoulli distribution over words of Hamming weight $[(1-\varepsilon)pn,(1+\varepsilon)pn]$ for some $\varepsilon > 0$, namely \begin{equation}\label{eq:fTrunc} \fberTrunc{p}(\vec{x}) \eqdef \left\{ \begin{array}{ll} \frac{1}{Z} \; \fber{p}(\vec{x}) & \mbox{if } |\vec{x}| \in \left[ (1-\varepsilon)pn,(1+\varepsilon)pn \right] \\ 0 & \mbox{otherwise.} \end{array} \right. \end{equation} where \begin{equation} \label{eq:Z} Z \eqdef \mathop{\sum}\limits_{|\vec{y}| = (1-\varepsilon)np}^{(1+\varepsilon)np} \fber{p}(\vec{y}) \end{equation} is the probability normalizing constant. Proposition \ref{propo:BerVSUnif} is a consequence of the following lemmas. \begin{lemma}\label{lemma:gepsFber} Let $\varepsilon>0$. We have $$ \Delta\left(\fber{p}, \fberTrunc{p} \right) = 2^{-\Omega(n)} . $$ \end{lemma} \begin{proof}By Chernoff's bound \begin{equation}\label{eq:chernoff} 1-Z =\sum_{\substack{\vec{y} : \\ |\vec{y}| \notin \left[ (1-\varepsilon)np,(1+\varepsilon)np \right]}} \fber{p}(\vec{y}) \leq 2e^{-\varepsilon^{2}n} = 2^{-\Omega(n)} . \end{equation} Therefore for any $|\vec{x}| \in \left[ (1-\varepsilon)np,(1+\varepsilon)np \right]$, \begin{align}\label{eq:geps} \fberTrunc{p}(\vec{x}) &= \frac{1}{1-2^{-\Omega(n)}} \; \fber{p}(\vec{x}) \nonumber \\ &= \left(1+2^{-\Omega(n)} \right) \; \fber{p}(\vec{x}) . \end{align} We have now the following computation: \begin{align*} 2\Delta\left(\fber{p},\fberTrunc{p} \right) &= \sum_{\vec{x}\in\F_{2}^{n}} \left| \fber{p}(\vec{x}) - \fberTrunc{p} (\vec{x}) \right| \\ &= \sum_{|\vec{x}| \in \left[ (1-\varepsilon)np,(1+\varepsilon)np \right]} \left| \fber{p}(\vec{x}) - \fberTrunc{p} (\vec{x}) \right| + \sum_{|\vec{x}| \notin \left[ (1-\varepsilon)np,(1+\varepsilon)np \right]} \left| \fber{p}(\vec{x}) \right| \\ &= 2^{-\Omega(n)}\left( \sum_{|\vec{x}| \in \left[ (1-\varepsilon)np,(1+\varepsilon)np \right]} \left| \fber{p}(\vec{x}) \right| \right) + 2^{-\Omega(n)} \quad \mbox{(Equations \eqref{eq:chernoff} and \eqref{eq:geps})} \\ &= 2^{-\Omega(n)} \end{align*} where in the last line we used that $\fber{p}$ is a probability distribution. 
\end{proof} \begin{lemma}\label{lemma:gepsCodeFber}We have $$ \Delta\left(u,\fber{p}^{\CC}\right) \leq \Delta\left(u,\fberTrunc{p} ^{\CC}\right) + 2^{-\Omega(n)}. $$ \end{lemma} \begin{proof} By the triangle inequality, \[ \Delta\left(u,\fber{p}^{\CC}\right) \leq \Delta\left(u,\fberTrunc{p}^{\CC}\right) + \Delta\left(\fber{p}^{\CC},\fberTrunc{p}^{\CC}\right) . \] Focusing on the second term now \begin{align*} \Delta\left(\fber{p}^{\CC},\fberTrunc{p}^{\CC}\right) &= \frac{1}{2} \sum_{\vec{y} \in \F_2^n/\CC}\left|\fber{p}^{\CC}(\vec{y}) - \fberTrunc{p}^{\CC}(\vec{y})\right| \\ &= \frac{1}{2} \sum_{\vec{y} \in \F_2^n/\CC}\left|\sum_{\vec c \in \CC}\fber{p}(\vec{c}+\vec y) - \sum_{\vec c \in \CC}\fberTrunc{p}(\vec{c}+\vec y)\right| \\ &\leq \frac{1}{2} \sum_{\vec{y} \in \F_2^n/\CC}\sum_{\vec c \in \CC}\left|\fber{p}(\vec c + \vec y) - \fberTrunc{p}(\vec c + \vec y) \right| \\ &= \Delta\left(\fber{p},\fberTrunc{p}\right). \end{align*} which concludes the proof by Lemma \ref{lemma:gepsFber}. \end{proof} The following lemma is a basic property of the statistical distance. \begin{lemma}\label{lemma:statIneqCvx} For any distribution $f$ and $(g_i)_{1 \leq i \leq m}$ we have $$ \Delta\left(f, \sum_{i=1}^{m}\lambda_{i} g_{i} \right) \leq \sum_{i=1}^{m} \lambda_{i} \; \Delta(f,g_{i}) $$ where the $\lambda_{i}$'s are positive and sum to one. \end{lemma} We are now ready to prove Proposition \ref{propo:BerVSUnif}. \begin{proof}[Proof of Proposition \ref{propo:BerVSUnif}] First, by Lemma \ref{lemma:gepsCodeFber} we have \begin{equation}\label{eq:uFber} \Delta\left(u, \fber{p}^{\CC}\right) \leq \Delta\left(u, \fberTrunc{p}^{\CC}\right) + 2^{-\Omega(n)}. \end{equation} To upper-bound $\Delta\left(u, \fberTrunc{p}^{\CC}\right)$ we are going to use Lemma \ref{lemma:statIneqCvx}. Notice that $$ \fber{p} = \sum_{r=0}^{n} \binom{n}{r}p^{r}(1-p)^{n-r} \unifs{r}. $$ Therefore it is readily seen that \begin{equation*} \fberTrunc{p} = \sum_{r = (1-\varepsilon)np}^{(1+\varepsilon)np } \lambda_{r} \; \unifs{r} \quad \mbox{where} \quad \lambda_{r} \eqdef \frac{1}{ Z} \; \binom{n}{r}p^{r}(1-p)^{n-r}. \end{equation*} By using Lemma~\ref{lemma:statIneqCvx} we obtain: \begin{align} \Delta\left(u, \fberTrunc{p}^{\CC}\right) &\leq \sum_{r = (1-\varepsilon)np}^{(1+\varepsilon)np} \lambda_{r} \; \Delta\left(u,\unifs{r}^{\CC}\right) \nonumber\\ &\leq \sum_{r = (1-\varepsilon)np}^{(1+\varepsilon)np} \Delta\left(u, \unifs{r}^{\CC}\right)\label{eq:gCodeu} \end{align} where in the last line we used that the $\lambda_{r}$'s are smaller than one. To conclude the proof we plug Equation \eqref{eq:gCodeu} in \eqref{eq:uFber}. \end{proof} \section{Proof of Proposition \ref{propo:ABL}}\label{app:proofPropoABL} Our aim in this section is to prove the following proposition which is an extension of \cite[Theorem 3]{ABL01} for $\tau \in [\delta,1]$ (\cite[Theorem 3]{ABL01} only applied for $\tau \in [\delta,1/2]$.) \PropoABL* Our proof is mainly a rewriting of the proof of \cite[Theorem 3]{ABL01} which relies on the following proposition. \begin{proposition}[{\cite[Proposition $2$ with $d' = 0$]{ABL01}}]\label{propo:BoundBarg} Let $\CC$ be a binary code of length $n$ such that $\dmin(\CC) = \Omega(n)$. Let $t \eqdef \frac{n}{2} - \sqrt{\dmin(\CC)(n-\dmin(\CC))}$ and $a$ be such that $$ x_{1}^{(t+1)} < a < x_{1}^{(t)} \quad \mbox{;} \quad \frac{K_{t}(a)}{K_{t+1}(a)} = -1 $$ where $x_{1}^{(\mu)}$ denotes the first root of the Krawtchouk polynomial of order $\mu$, namely $K_{\mu}$. 
When $0 \leq w < t \leq n/2$, we have \begin{equation} \sum_{\vec{c} \in \CC \backslash \{\mathbf{0}\}} K_{w}(|{\vec{c}}|)^{2} \leq \frac{t+1}{2a} \; \frac{\binom{n}{w}}{\binom{n}{t}} \left( \binom{n}{t+1} + \binom{n}{t} \right)^{2} \end{equation} \end{proposition} The approach is to optimize on the choice of $w$ in Proposition \ref{propo:BoundBarg} to give an upper-bound on $N_{\ell}(\CC)$. More precisely we observe that \begin{equation}\label{eq:toBoundL1} \Neq{\ell}{\CC} \leq \frac{1}{K_{w}(\ell)^{2}} \sum_{\vec{c}\in\CC \backslash \{\mathbf{0}\}} K_{w}(|\vec{c}|)^{2} \leq \frac{1}{K_{w}(\ell)^{2}}\; \frac{t+1}{2}\frac{\binom{n}{w}}{\binom{n}{t}} \left( \binom{n}{t+1} + \binom{n}{t} \right)^{2} \end{equation} and then choose $w$ to minimize $\frac{\binom{n}{w}}{K_{w}(\ell)^{2}}$. \begin{proof}[Proof of Proposition \ref{propo:ABL}] It will be helpful to bring in the following map: $$ x\in[0,1] \mapsto x^{\perp} \eqdef \frac{1}{2} - \sqrt{x(1-x)}. $$ It can be verified that this application is an involution, is symmetric $(1-x)^\perp = x^\perp$ and decreasing on $[0,\frac{1}{2}]$. Let $\CC$ be a binary code of length $n$ such that $\dmin(\CC) = \delta n$ where $\delta\in(0,1/2]$ and $t$ be defined as in Proposition \ref{propo:BoundBarg}. Let $\omega \eqdef \frac{w}{n}, \lambda \eqdef\frac{\ell}{n}$ and $\delta^{\perp} \eqdef 1/2 - \sqrt{\delta(1-\delta)}$. Then by Proposition \ref{propo:BoundBarg} we have (see Equation \eqref{eq:toBoundL1}) \begin{equation}\label{eq:KrawUPB} \frac{\log_2 \Neq{\ell}{\CC} }{n}  \leq h(\omega) + h(\delta^{\perp}) - \frac{2 \log_{2} |K_{w}(\ell)|}{n} + o(1). \end{equation} \noindent {\bf Case 1: $\lambda \in [\delta,1-\delta]$.} \\ It is optimal to choose in this case $w$ such that $\omega= \lambda^\perp - \varepsilon$ where $\varepsilon > 0$ and $\varepsilon = o(1)$ as $n$ tends to infinity. Let us first notice that $ \lambda \in [\delta,1-\delta]$ implies that $\lambda^\perp \leq \delta^\perp$ which together with $\omega < \lambda^\perp$ implies that $\omega < \delta^\perp$ which in turn is equivalent to the condition $w <t$ for being able to apply Proposition \ref{propo:BoundBarg}. Moreover $\omega < \lambda^\perp$ also implies $\lambda < \omega^\perp$ and by using Proposition \ref{prop:expansion} we obtain $$ \frac{2\log_{2} |K_{w}(\ell)|}{n} \leq h(\omega) + 1 - h(\lambda) +o(1).$$ Therefore $$\frac{\log_2 \Neq{\ell}{\CC} }{n}  \leq h(\omega) + h(\delta^{\perp}) - h(\omega) -1 + h(\lambda) +o(1)= h(\delta^{\perp}) + h(\lambda) -1 +o(1). $$ {\bf Case 2: $\lambda \in (1-\delta,1]$.}\\ In that case, let $\omega= \delta^{\perp} - \varepsilon$ with $\varepsilon > 0$ and $\varepsilon = o(1)$ as $n$ tends to infinity. Here we can write $$\frac{2\log_{2} |K_{w}(\ell)|}{n} = \frac{\log_{2} (K_{w}(\ell)^2)}{n}= \frac{\log_{2} (K_{w}(n-\ell)^2)}{n}.$$ Since $\lambda > 1 - \delta$, we have $1-\lambda < \delta$. On the other hand, $\omega< \delta^{\perp}$ implies $\delta < \omega^\perp$. We deduce from these two inequalities that $1 - \lambda < \omega^\perp$. By using Proposition \ref{prop:expansion} again, we get $$ \frac{\log_{2} (K_{w}(n-\ell)^2)}{n} = 2a(1-\lambda,\delta^\perp)+o(1)=2a(\lambda,\delta^\perp)+o(1). $$ By plugging this estimate in \eqref{eq:KrawUPB} we get $$ \frac{\log_2 \Neq{\ell}{\CC} }{n} \leq 2 h(\delta^{\perp}) - 2 a(\lambda,\delta^{\perp}). $$ This concludes the proof. \end{proof} \section{Proof of Theorem \ref{theo:finalUBSD} }\label{app:proofThFinalCode} Our aim in this appendix is to prove the following theorem. 
\thFinalCode* {\bf Sketch of proof.} We will use the following proof strategy: \begin{itemize} \item[1.] By Lemma \ref{lemma:gepsCodeFber} we know on the one hand that \begin{equation} \label{eq:difference} \Delta\left(u,\fber{p}^{\CC}\right) \leq \Delta\left(u,\fberTrunc{p}^{\CC}\right) + 2^{-\Omega(n)}. \end{equation} This is actually a consequence of Chernoff's bound. This argument can also be used to show that the Fourier transforms are close to each other pointwise: \begin{equation}\label{eq:TFBer} \forall \vec{x}\in \F_{2}^{n}, \quad 2^{n}\; \left|\widehat{\fberTrunc{p}}(\vec{x}) - \widehat{\fber{p}}(\vec{x})\right| = 2^{-\Omega(n)}. \end{equation} \item[2.] Equation \eqref{eq:TFBer} and Lemma \ref{lemma:LCodewords} are then used to show that: \begin{equation}\label{eq:step1} \Delta\left(u,\fberTrunc{p}^{\CC}\right) \leq 2^n\sqrt{\sum_{t = \dmin(\dual{\CC})}^{n-\dmin(\dual{\CC})/2} \Neq{t}{\dual{\CC}} \widehat{\fberTrunc{p}}(t)^{2}} + 2^{-\Omega(n)}. \end{equation} \item[3.] We use the two previous points to upper-bound $\Delta\left(u,\fber{p}^{\CC}\right)$ as in the equation above and conclude by using the bounds of Propositions \ref{propo:ABL} and \ref{propo:2LPB}. \end{itemize} \noindent {\bf Proof of Step 1.} As explained above, \eqref{eq:difference} is just Lemma \ref{lemma:gepsCodeFber}. Let us now prove the following lemma. \begin{lemma}\label{lemma:lemmfBerVSTrunc} We have $$ \forall \vec{x}\in \F_{2}^{n}, \quad 2^{n}\; \left|\widehat{\fberTrunc{p}}(\vec{x}) - \widehat{\fber{p}}(\vec{x})\right| = 2^{-\Omega(n)}. $$ \end{lemma} \begin{proof} Recall that $Z = \mathop{\sum}\limits_{|\vec{y}| = (1-\varepsilon)np}^{(1+\varepsilon)np} \fber{p}(\vec{y})$ where, by Chernoff's bound, we have \begin{equation}\label{eq:M} Z = 1 - 2^{-\Omega(n)}. \end{equation} Notice now that $$ \fber{p} = \sum_{r=0}^{n} \binom{n}{r}p^{r}(1-p)^{n-r} \unifs{r} \quad \mbox{and} \quad \fberTrunc{p} = \frac{1}{Z}\;\sum_{r=(1-\varepsilon)pn}^{(1+\varepsilon)pn} \binom{n}{r}p^{r}(1-p)^{n-r} \unifs{r}. $$ Let $\mathcal{I} \eqdef \llbracket (1-\varepsilon)pn, (1+\varepsilon)pn \rrbracket$. Notice that $Z = \sum_{r\in \mathcal{I}} \binom{n}{r}p^{r}(1-p)^{n-r}$. By linearity of the Fourier transform and the triangle inequality we obtain the following computation: \begin{align} \left|\widehat{\fberTrunc{p}}(\vec{x}) - \widehat{\fber{p}}(\vec{x})\right| &\leq \left( \frac{1}{Z} - 1 \right) \sum_{r\in \mathcal{I}} \binom{n}{r}p^{r}(1-p)^{n-r} \left| \widehat{\unifs{r}}(\vec{x}) \right| \nonumber \\ & \qquad\qquad\qquad\qquad+ \sum_{r\notin \mathcal{I}} \binom{n}{r}p^{r}(1-p)^{n-r} \left| \widehat{\unifs{r}}(\vec{x}) \right| \nonumber\\ &\leq 2^{-\Omega(n)} \sum_{r \in \mathcal{I}}\binom{n}{r}p^{r}(1-p)^{n-r} \left| \widehat{\unifs{r}}(\vec{x}) \right| + 2^{-\Omega(n)} \max_{r} \left| \widehat{\unifs{r}}(\vec{x}) \right| \label{ineq:truncBer} \end{align} where in the last line we used Equation \eqref{eq:M}. Recall now that by definition of the Fourier transform for functions over $\F_{2}^{n}$ we have: $$ \left| \widehat{\unifs{r}}(\vec{x}) \right| = \left| \frac{1}{2^{n}} \sum_{\vec{y} : |\vec{y}|=r} \frac{(-1)^{\vec{x}\cdot\vec{y}}}{\binom{n}{r}} \right| \leq \frac{1}{2^{n}}. $$ By plugging this into Equation \eqref{ineq:truncBer} we get: \begin{align*} \left|\widehat{\fberTrunc{p}}(\vec{x}) - \widehat{\fber{p}}(\vec{x})\right| &\leq \frac{2^{-\Omega(n)}}{2^{n}} \underbrace{\sum_{r\in \mathcal{I}} \binom{n}{r}p^{r}(1-p)^{n-r}}_{\leq 1} + \frac{2^{-\Omega(n)}}{2^{n}} \\ &= \frac{2^{-\Omega(n)}}{2^{n}} \end{align*} which concludes the proof.
\end{proof} {\bf Proof of Step 2.} This corresponds to proving the following lemma. \begin{lemma}\label{lem:step1} \begin{equation*} \Delta\left(u,\fberTrunc{p}^{\CC}\right) \leq 2^n\sqrt{\sum_{t = \dmin(\dual{\CC})}^{n-\dmin(\dual{\CC})/2} \Neq{t}{\dual{\CC}} \widehat{,\fberTrunc{p}}(t)^{2}} + 2^{-\Omega(n)}. \end{equation*} \end{lemma} \begin{proof} By applying Proposition \ref{propo:FBSDCod} to $\fberTrunc{p}$ we obtain \begin{equation}\label{eq:boundfBerTrunc1} \Delta\left(u,\fberTrunc{p} ^{\CC}\right) \leq 2^{n} \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n} \Neq{t}{\dual{\CC}}|\widehat{\fberTrunc{p}}(t)|^{2}} \end{equation} where $\widehat{\fberTrunc{p}}(t)$ denotes the common value of the radial function $\widehat{\fberTrunc{p}}$ on vectors of Hamming weight $t$. Recall now that $\widehat{\fber{p}}(\vec{x}) = \frac{1}{2^{n}}\; (1-2p)^{|\vec{x}|}$ and by Lemma \ref{lemma:lemmfBerVSTrunc} that $2^{n}\; \left|\widehat{\fberTrunc{p}}(\vec{x}) - \widehat{\fber{p}}(\vec{x})\right| = 2^{-\Omega(n)}$. Therefore, $$ \forall \vec{x}\in\F_{2}^{n}, \mbox{ } |\vec{x}|\geq n-\frac{\dmin(\dual{\CC})}{2} \quad \mbox{:} \quad 2^{n}\; \left|\widehat{\fberTrunc{p}}(\vec{x})\right| = 2^{-\Omega(n)}. $$ By plugging this in Equation \eqref{eq:boundfBerTrunc1} we obtain (as there is at most one dual codeword of weight $\ell$ for each $\ell >n-\dmin(\dual{\CC})/2$, see Lemma \ref{lemma:LCodewords}) \begin{equation}\label{eq:boundfBerTrunc2} \Delta\left(u,\fberTrunc{p} ^{\CC}\right) \leq 2^{n} \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n - \dmin(\dual{\CC})/2} \Neq{t}{\dual{\CC}}|\widehat{\fberTrunc{p}}(t)|^{2}} + 2^{-\Omega(n)} \end{equation} which concludes the proof. \end{proof} {\bf Proof of Step 3.} We finish the proof of Theorem \ref{theo:finalUBSD} by noticing that $$ \fberTrunc{p} = \frac{1}{Z}\;\sum_{\ell=(1-\varepsilon)pn}^{(1+\varepsilon)pn} \binom{n}{\ell}p^{\ell}(1-p)^{n-\ell} \unifs{\ell} $$ where $Z \eqdef \mathop{\sum}\limits_{|\vec{y}| = (1-\varepsilon)np}^{(1+\varepsilon)np} \fber{p}(\vec{y}) = 1 - 2^{-\Omega(n)}$ by Chernoff's bound. Therefore, $$ \widehat{\fberTrunc{p}} = \left( 1+ 2^{-\Omega(n)}\right)\;\sum_{\ell=(1-\varepsilon)pn}^{(1+\varepsilon)pn} \binom{n}{\ell}p^{\ell}(1-p)^{n-\ell} \;\widehat{\unifs{\ell}}. $$ By plugging this in Equation \eqref{eq:boundfBerTrunc2} and using $\widehat{\unifs{\ell}} = \frac{1}{2^{n}} \; \frac{K_{\ell}}{\binom{n}{\ell}}$ we obtain $$ \Delta\left(u,\fberTrunc{p}^{\CC}\right) \leq \left( 1+2^{-\Omega(n)}\right) \; \sqrt{\sum_{t = \dmin(\dual{\CC})}^{n-\dmin(\dual{\CC})/2} \Neq{t}{\dual{\CC}}\left( \sum_{\ell = (1-\varepsilon)pn}^{(1+\varepsilon)pn} p^{\ell}(1-p)^{n-\ell} K_{\ell}(t) \right)^{2}} + 2^{-\Omega(n)}. $$ We then use in the righthand term, Propositions \ref{propo:ABL}, \ref{propo:2LPB} which give bounds on the $\frac{1}{n} \; \log_{2} N_{\ell}(\dual{\CC})$'s (where $\dmin(\dual{\CC}) \geq \dual{\delta}n$) and Proposition \ref{prop:expansion} which gives an asymptotic expansion of Krawtchouk polynomials to upper-bound $\Delta\left(u,\fberTrunc{p}^{\CC}\right)$. We finish the proof of the theorem by using this upper-bound in the righthand term of \eqref{eq:difference}. \section{Proof of Proposition~\ref{propo:gauss-to-unif}}\label{app:GaussianUnif} Our aim in this section is to prove the following proposition. \GaussianVSUnif* It will be a consequence of the following lemmas. We begin with the following result decomposing the Gaussian as a convex combination of balls. 
\begin{lemma} \label{lemma:gaussian_convex_combi_ball} The Gaussian distribution in dimension $n$ of parameter $s$ is the following convex combination of uniform distributions over balls: \[ D_s = \frac {1}{s} \int_{0}^\infty G_n(w/s) \; \gunif{w}\, dw \] where $G_{n}(x) = x^{n+1} \; \vol{1} \; 2\pi\; \exp\left(-\pi x^2\right) \geq 0$. Furthermore, we have $\frac{1}{s}\int_{0}^{\infty}G_n(w/s) \, dw = 1$. \end{lemma} \begin{proof} First, let $g_s(w) \eqdef \frac{1}{s^n}\; \exp\left(-\pi\tfrac{w^2}{s^2}\right)$ ({\em i.e.} the value the probability density function $D_s$ takes on vectors of weight $w$) and denote $h_s(w) = -g_s'(w) = \frac{2\pi w}{s^{n+2}} \; \exp\left(-\pi\tfrac{w^2}{s^2}\right)$. For any $\vec x \in \R^n$, setting $u = |\vec x|_2$, as $\lim_{w \to \infty} g_s(w)=0$ we have \begin{align*} D_s(\vec x) &= g_s(u) = \int_{u}^\infty h_s(w) \, dw = \int_{0}^{\infty}h_s(w)\; 1\{u \leq w\} \ dw = \int_{0}^{\infty}h_s(w) \; 1_{\mathcal{B}_{w}}(\vec x)\ dw \ . \end{align*} Above, we denoted by $1\{u\leq w\}$ the function which takes value $1$ on input $w$ if $u \leq w$, and $0$ otherwise. To conclude, note that $\frac{1}{s}\; G_n(w/s) = h_s(w) \; \vol{w}$ and recall $\gunif{w} = \frac{1_{\mathcal B_w}}{\vol{w}}$. For the ``furthermore'' part of the lemma, we compute \begin{align} \label{eq:int-pre-sub} \frac{1}{s}\int_{0}^{\infty}G_n(w/s) \, dw = \frac{1}{s}\int_{0}^{\infty} (w/s)^{n+1} \; \vol{1} \; 2\pi\; \exp\left(-\pi (w/s)^2\right)\, dw \ . \end{align} We make the substitution $t = \pi \left(\frac{w}{s}\right)^2$, which means $dw = \frac{s^2\, dt}{2\pi w} = \frac{s}{2\sqrt{t\pi}}\, dt$. Also, we recall $\vol{1} = \frac{\pi^{n/2}}{\Gamma(n/2+1)}$. Thus, \begin{align*} \frac{1}{s}\int_{0}^{\infty}G_n(w/s) \, dw &= \frac{1}{s}\; \frac{\pi^{n/2}}{\Gamma(n/2+1)}\int_{0}^{\infty} \left(\frac{t}{\pi}\right)^{(n+1)/2} \; 2\pi \; e^{-t} \; \frac{s}{2\sqrt{t\pi}}\, dt \\ &= \frac{1}{\Gamma(n/2+1)}\int_{0}^{\infty}t^{n/2} \; e^{-t} \; dt = \frac{\Gamma(n/2+1)}{\Gamma(n/2+1)} = 1 \end{align*} which concludes the proof. \end{proof} We now quote the following bound, which makes precise the intuition that it is exponentially unlikely that a random Gaussian vector has norm $(1-\eta)$ factor smaller than its expected norm. This result provides the analogy for the Chernoff bound that we used for the code-case. \begin{lemma} [{\cite[Example 2.5]{W19}}]\label{propo:gaussian-tail-bound} Let $\vec X$ be a random Gaussian vector of dimension $n$ and parameter $1$. Let $0 < \eta < 1$. Then \[ \mathbb{P}\left(|\vec X|_{2}^2 \leq (1-\eta)\;\frac{n}{2\pi}\right) \leq \exp(-\frac{\eta^2}{8} \; n). \] \end{lemma} This lemma allows us to prove the following lemma bounding $\frac{1}{s}\int_{0}^{\overline w}G_n(w/s)dw$ when $\overline w < s \; \sqrt{n/(2\pi)}$. \begin{lemma} \label{lem:bound_G_n} Let $\eta \in (0,1)$ and $\overline w = \sqrt{1-\eta} \; s \; \sqrt{n/(2\pi)}$. Then \[ \frac{1}{s}\int_{0}^{\overline w}G_n(w/s)dw \leq \exp(-\frac{\eta^2}{8} \; n) \ . \] \end{lemma} \begin{proof} Let $\overline u \eqdef \sqrt{1-\eta} \; \sqrt{n/(2\pi)}$. By Lemma~\ref{propo:gaussian-tail-bound}, if $\vec X$ denotes a random Gaussian vector of dimension $n$ and parameter $1$, we have \begin{equation} \label{ineq:ini} \int_{0 \leq |\vec x|_{2} \leq \overline u}\exp(-\pi |\vec x|_{2}^2) \ d\vec x = \mathbb{P}\left(|\vec X|_{2}^2 \leq (1-\eta)\;\frac{n}{2\pi}\right) \leq \exp(-\frac{\eta^2}{8} \; n). 
\end{equation} To compute this last integral, note that \begin{align}\label{eq:1} \int_{0 \leq |\vec x|_{2} \leq \overline u}\exp(-\pi |\vec x|_{2}^2) \ d\vec x &= \int_0^{\overline u} \int_{u\mathcal{S}^{n-1}} e^{-\pi u^2}dA du \ , \end{align} where $u\mathcal{S}^{n-1}$ denotes the Euclidean sphere of radius $u$ and $dA$ is the area element. If $A_{n-1}(u)$ denotes the surface area of $u\mathcal{S}^{n-1}$, then $A_{n-1}(u) = u^{n-1} A_{n-1}(1)$ and thus \begin{align}\label{eq:2} \int_0^{\overline u} \int_{u\mathcal{S}^{n-1}} e^{-\pi u^2}dA du = A_{n-1}(1)\int_{0}^{\overline u}u^{n-1}\exp(-\pi u^2) \, du \ . \end{align} Further, it is known that $A_{n-1}(1) = \frac{2\pi^{n/2}}{\Gamma(n/2)}$. Therefore, plugging Equations \eqref{eq:1} and \eqref{eq:2} into \eqref{ineq:ini} leads to \begin{equation}\label{ineq:R} \int_{0}^{\overline u}u^{n-1}\exp(-\pi u^2) \, du \leq \frac{1}{A_{n-1}(1)} \; \exp(-\frac{\eta^2}{8} \; n) \ . \end{equation} Now, we look at the left-hand side of the inequality we wish to prove. We begin by making the substitution $u = w/s$. So then $dw = s\;du$. Moreover, let $\overline u \eqdef \sqrt{1-\eta} \; \sqrt{n/(2\pi)}$ and note that when $w = \overline w$ we have $u = \overline w/s = \sqrt{1-\eta} \; \sqrt{n/(2\pi)} = \overline u$. \begin{align*} \frac{1}{s}\int_{0}^{\overline w} G_n(w/s) \ dw &= \int_{0}^{\overline u}G_n(u) \ du \\ &= \vol{1} \; 2\pi \int_{0}^{\bar u} u^{n+1} \; \exp(-\pi u^2) \ du \\ &\leq \vol{1} \; 2\pi\bar{u}^2 \int_{0}^{\bar u} u^{n-1} \exp(-\pi u^2) \ du\ . \end{align*} Plugging this last inequality with \eqref{ineq:R} yields \begin{align*} \frac{1}{s}\int_{0}^{\overline w} G_n(w/s) \ dw \leq \frac{\vol{1} 2\pi \; \overline{u}^2}{A_{n-1}(1)} \exp(-\frac{\eta^2}{8} \; n) \ . \end{align*} To conclude the proof, note that $V_n(1)=\int_0^1 A_{n-1}(u) du= \int_0^1 u^{n-1} A_{n-1}(1) du=\frac{A_{n-1}(1)}{n}$ and therefore \[ \frac{\vol{1} 2\pi \; \overline{u}^2}{A_{n-1}(1)} = \frac{2 \pi (1-\eta)n}{2 \pi n}=1-\eta \leq 1. \] It concludes the proof. \end{proof} We are now ready to prove Proposition \ref{propo:gauss-to-unif}. \begin{proof}[Proof of Proposition \ref{propo:gauss-to-unif}.] By Lemma~\ref{lemma:gaussian_convex_combi_ball}, $D_s$ is a convex combination of uniform distribution over balls, namely $D_s = \frac{1}{s}\int_0^{\infty}G_n(w/s) \; \gunif{w} \;dw$. Therefore (we use here the analogue of Lemma \ref{lemma:statIneqCvx} in the context of the statistical distance between two probability density functions) \[ \mathbb{E}_{\Lambda}\left(\Delta(u,D_s^{\Lambda})\right) \leq \frac{1}{s}\int_0^{\infty} G_n(w/s)\; \mathbb{E}_{\Lambda}\left(\Delta(u,\gunif{w}^\Lambda)\right) dw. \] We split the integral in two parts at radius $\overline w = \sqrt{1-\eta} \; s \; \sqrt{n/(2\pi)}$. For the first part $w \leq \overline w$, we use the trivial bound $\mathbb{E}_{\Lambda}\left(\Delta(u,\gunif{w}^\Lambda)\right) \leq 1$ which gives: \[ \frac{1}{s}\int_0^{\overline w} G_n(w/s)\; \mathbb{E}_{\Lambda}\left(\Delta(u,\gunif{w}^\Lambda)\right) dw \leq \frac{1}{s} \int_0^{\overline w} G_n(w/s)dw . \] We then apply Lemma~\ref{lem:bound_G_n}, which bounds this part by $\exp(-\frac{\eta^2}{8} \; n)$. 
For the second part $w \geq \overline w$, we use the trivial bound $\frac 1s \int_{\overline w}^{\infty}G_n(w/s)dw \leq 1$ and, noting \[ w \geq \overline w = \sqrt{1-\eta} \; s \; \sqrt{n/(2\pi)} = \frac{1}{\sqrt{1-\eta}} \; s_0 \; \sqrt{n/(2\pi)} > s_0 \; \sqrt{n/(2\pi)} = w_0, \] we may apply the assumption of the proposition, yielding \begin{align*} \mathbb{E}_{\Lambda}\left(\Delta(u,\gunif{w}^{\Lambda})\right) &\leq f(n)\left(\frac{w_0}{w}\right)^{n/2} \leq f(n) \left(\frac{w_0}{\overline w}\right)^{n/2} = f(n) \left(\sqrt{1-\eta}\right)^{n/2} = f(n) \left(\frac{s_0}{s}\right)^{n/4}. \end{align*} Adding these two bounds yields the proposition. \end{proof} \end{document}
2205.10503v3
http://arxiv.org/abs/2205.10503v3
A Rademacher type theorem for Hamiltonians $H(x,p)$ and application to absolute minimizers
\documentclass[11pt,twoside]{article}\usepackage{bbm} \usepackage{mathrsfs} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{color} \usepackage{graphicx} \usepackage[active]{srcltx} \usepackage{CJK} \usepackage{geometry} \geometry{left=1.5cm,right=1.5cm,top=2cm,bottom=1cm} \usepackage[cp1252]{inputenc} \usepackage{mathrsfs} \usepackage{graphicx} \usepackage[active]{srcltx} \textwidth 169truemm \textheight 226truemm \oddsidemargin -1.0mm \evensidemargin -1.0mm \topmargin -10mm \headsep 6mm \footskip 11mm \baselineskip 4.5mm \allowdisplaybreaks \usepackage{titletoc} lright} {\contentspush{\thecontentslabel\ }} {}{\titlerule*[8pt]{.}\contentspage} \pagestyle{headings} \setlength{\topmargin}{-1.5cm} \setlength{\headsep}{0.3cm} \setlength{\footskip}{0.6cm} \def\rr{{\mathbb R}} \def\rn{{{\rr}^n}} \def\rnn{{{\rr}^{2n}}} \def\rrm{{{\rr}^m}} \def\rnm{{\rn\times\rrm}} \def\zz{{\mathbb Z}} \def\nn{{\mathbb N}} \def\hh{{\mathbb H}} \def\bd{{\mathbb D}} \def\ch{{\mathcal H}} \def\cc{{\mathbb C}} \def\cS{{\mathcal S}} \def\cx{{\mathscr X}} \def\csgv{\mathcal {S}_{g;V}} \def\csgu{\mathcal {S}_{g;U}} \def\dlxu{d_{\lambda,x}^{U}} \def\hd{\widehat{d}_{\lambda,x}^{U}} \def\fk{{\mathfrak g}} \def\hn{{{\mathbb H}^n}} \def\bi{{\bf I}} \def\od{{\overline d}} \def\odl{\overline {d}_{\lambda}} \def\lfz{{\lfloor}} \def\rfz{{\rfloor}} \def\ce{{\mathcal E}} \def\cf{{\mathcal F}} \def\cg{{\mathcal G}} \def\cl{{\mathcal L}} \def\cj{{\mathcal J}} \def\cq{{\mathcal Q}} \def\cp{{\mathcal P}} \def\cm{{\mathcal M}} \def\car{{\mathcal R}} \def\cb{{\mathcal B}} \def\ca{{\mathcal A}} \def\ccc{{\mathcal C}} \def\dcc{{d_{CC}}} \def\ct{{\mathcal T^{\lambda}}} \def\ctl{{\mathcal T^{\lambda}_{loc}}} \def\dl{d_\lambda} \def\dlu{d^{U}_\lambda} \def\al{{\mathcal A^{\lambda}}} \def\alu{\mathcal A^{\lambda}(U)} \def\rl{\rho_\lambda} \def\rlu{\rho^{U}_\lambda} \def\xd{(x,\cdot)} \def\dx{(\cdot,x)} \def\fz{\infty} \def\az{\alpha} \def\ac{{\mathrm ac}} \def\dist{{\mathop\mathrm{\,dist\,}}} \def\loc{{\mathop\mathrm{\,loc\,}}} \def\weak{{\mathop\mathrm{ \,weak\,}}} \def\lip{{\mathop\mathrm{\,Lip}}} \def\llc{{\mathop\mathrm{\,LLC}}} \def\carr{{\mathop\mathrm{\,Carrot\,}}} \def\lz{\lambda} \def\dz{\delta} \def\bdz{\Delta} \def\ez{\epsilon} \def\ezl{{\epsilon_1}} \def\ezz{{\epsilon_2}} \def\ezt{{\epsilon_3}} \def\kz{\kappa} \def\bz{\beta} \def\fai{\varphi} \def\gz{{\gamma}} \def\oz{{\omega}} \def\Oz{{\Omega}} \def \boz{{\Omega}} \def\toz{{\wz{ U}}} \def\ttoz{{\wz{\toz}}} \def\vz{\varphi} \def\tz{\theta} \def\sz{\sigma} \def\pa{\partial} \def\pau{\partial{U}} \def\pav{\partial{V}} \def\wz{\widetilde} \def\hs{\hspace{0.3cm}} \def\tl{\wzde} \def\ls{\lesssim} \def\gs{\gtrsim} \def\tr{\triangle} \def\bl{\bigg (} \def\br{\bigg )} \def\bbl{\bigg [} \def\bbr{\bigg ]} \def\bbbl{\bigg \{} \def\bbbr{\bigg \}} \def\ext{{\mathrm{\,Ext\,}}} \def\osc{{\textrm{osc}}} \def\bint{{\ifinner\rlap{\bf\kern.35em--} }\ignorespaces} \def\dbint{\displaystyle\bint} \def\bbint{{\ifinner\rlap{\bf\kern.35em--} }\ignorespaces} \def\ocg{{\mathring{\cg}}} \def\ocd{{\mathring{\cd}}} \def\occ{{\mathring{\cal C}}} \def\cgbz{{\cg(\bz,\,\gz)}} \def\cgbb{{\cg(\bz_1,\bz_2;\gz_1,\gz_2)}} \def\cgm{{\cg(x_1,\,r,\,\bz,\,\gz)}} \def\cgom{{\ocg(x_1,\,r,\,\bz,\,\gz)}} \def\scale{{\rm scale}} \def\aplip{{\rm \,apLip\,}} \def\esssup{{\rm \,esssup\,}} \def\dmsp{{\dot M^{s,p}( U)}} \def\msp{{M^{s,p}( U)}} \def\dwp{{\dot W^{1,p}( U)}} \def\wp{{W^{1,p}( U)}} \def\lp{{L^{p}( U)}} \def\esup{\mathop\mathrm{\,esssup\,}} \def\dsum{\displaystyle\sum} 
\def\osc{ \mathop \mathrm{\, osc\,} } \def\dosc{\displaystyle\osc} \def\diam{{\mathop\mathrm{\,diam\,}}} \def\atom{{\mathop\mathrm{\,atom\,}}} \def\dint{\displaystyle\int} \def\doint{\displaystyle\oint} \def\dlimsup{\displaystyle\limsup} \def\usc{\mathop\mathrm{\,USC\,}} \def\lsc{\mathop\mathrm{\,LSC\,}} \def\cida{\mathop\mathrm{\,CDA\,}} \def\cidb{\mathop\mathrm{\,CDB\,}} \def\cid{\mathop\mathrm{\,CIDF\,}} \def\Lip{\mathop\mathrm{Lip}} \def\ldl{\mathrm{Lip}_{d,loc}^1} \def\ld{\mathrm{Lip}_{d}^1} \def\dfrac{\displaystyle\frac} \def\dsup{\displaystyle\sup} \def\dlim{\displaystyle\lim} \def\dlimsup{\displaystyle\limsup} \def\r{\right} \def\lf{\left} \def\la{\langle} \def\ra{\rangle} \def\subsetneq{{\hspace{0.2cm}\stackrel \subset{\scriptstyle\ne}\hspace{0.15cm}}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma}\newtheorem{prop}[thm]{Proposition}\newtheorem{rem}[thm]{Remark}\newtheorem{cor}[thm]{Corollary}\newtheorem{defn}[thm]{Definition}\newtheorem{example}[thm]{Example}\newtheorem{ques}[thm]{Problem}\numberwithin{equation}{section} \newtheorem{Assumption}[]{Assumption} \begin{document} \arraycolsep=1pt \title{\Large\bf A Rademacher type theorem for Hamiltonians $H(x,p)$ and {\color{black} an } application to absolute minimizers \footnotetext{\hspace{-0.35cm} \endgraf 2020 {\it Mathematics Subject Classification:} Primary 49J10 $\cdot$ 49J52; Secondary 35F21 $\cdot$ 51K05. \endgraf The first author is supported by the Academy of Finland via the projects: Quantitative rectifiability in Euclidean and non-Euclidean spaces, Grant No. 314172, and Singular integrals, harmonic functions, and boundary regularity in Heisenberg groups, Grant No. 328846. The second author is supported by the National Natural Science Foundation of China (No. 12025102 \& No. 11871088) and by the Fundamental Research Funds for the Central Universities. \endgraf Data sharing not applicable to this article as no datasets were generated or analysed during the current study.} } \author{Jiayin Liu, Yuan Zhou} \date{ } \maketitle \begin{center} \begin{minipage}{13.5cm}\small {\noindent{\bf Abstract.}\quad We establish a Rademacher type theorem involving Hamiltonians $H(x,p)$ under very weak conditions in both of Euclidean and Carnot-Carath\'eodory spaces. In particular, $H(x,p)$ is assumed to be only measurable in the variable $x$, and to be quasiconvex and lower-semicontinuous in the variable $p$. {\color{black}Without the lower-semicontinuity in the variable $p$, we provide a counter example showing the failure of such a Rademacher type theorem.} Moreover, {\color{black} by applying such a Rademacher type theorem} we build up an existence result of absolute minimizers for {\color{black}the} corresponding $L^\infty$-functional. These improve or extend several known results in the literature. {\it Keywords:} $L^\infty$-functional, absolute minimizer, Carnot-Carath\'{e}odory metric space, Rademacher theorem } \end{minipage} \end{center} \tableofcontents \contentsline{section}{References\numberline{ }}{50} \section{Introduction}\label{s1} Let $ \Omega\subset \rn$ be a domain (that is, an open and connected {\color{black}subset}) of $\rn$ with $n\ge2$. {\color{black}We first recall Rademacher's theorem in Euclidean spaces. See Appendix for some of its consequence related to Sobolev and Lipschitz spaces. 
} \begin{thm} \label{rade} If $u:{\color{black}\Omega}\to\rr$ is a Lipschitz function, that is, \begin{equation}\label{lip}|u(x)-u(y)|\le \lz|x-y|\quad\forall x,y\in \Omega \quad \mbox{for some $0\le \lz<\fz$,} \end{equation} then, at almost all $x\in\Omega$, $u$ is differentiable and $|\nabla u(x)| =\lip u(x)$. Here $|\nabla u(x)|$ is the Euclidean length of the derivative $\nabla u(x)$ at $x$, and $\lip u (x)$ is the pointwise Lipschitz {\color{black}constant at $x$ defined by } \begin{equation}\label{pointlip} \lip u(x):=\limsup_{y\to x}\frac{|u(y)-u(x)|}{|y-x|} . \end{equation} \end{thm} {\color{black}The above Rademacher's theorem was extended to Carnot-Carath\'{e}odory spaces $(\Omega, X)$}, where $X$ is a family of smooth vector fields in $\Oz$ satisfying the H\"ormander condition (See Section 2). {\color{black} Denote by $Xu$ the distributional horizontal derivative of $u\in L^1_\loc(\Omega)$. Write $\dcc$ as the Carnot-Carath\'{e}odory distance with respect to $X$. One then has the following; see \cite{gn,FSS97,fhk99,ksz} and, for the better result in Carnot group and Carnot type vector field, see \cite{monti,p89}. \begin{thm} \label{radcc} If $u:\Omega\to\rr$ is a Lipschitz function with respect to $\dcc$, that is, \begin{equation}\label{lip01}|u(x)-u(y)|\le \lz \dcc(x,y)\quad\forall x,y\in \Omega \quad \mbox{for some $0\le \lz<\fz$,} \end{equation} then, $Xu \in L^\fz(\Omega,\rr^m)$ and, for almost all $x\in\Omega$, the length $|X u(x)| \le \lz$. Under the additional assumption that $X$ is a Carnot type vector field in $\Omega$, or in particular, $(\Omega, X)$ is a domain in some Carnot group, one further has $|X u(x)| = \lip_{d_{CC}}u(x)%=\min\{\lz \ge0 \ | \ \lz \text{ satisfies \eqref{lip01}} \} $ for almost all $x\in\Omega$, where $\lip_\dcc u(x) $ is defined by \eqref{pointlip} with $|y-x|$ replaced by $d_{CC}(y,x)$. \end{thm} } This paper aims to build up some Rademacher type theorem involving Hamiltonians $H(x,p)$ in both of Euclidean and Carnot-Carath\'eodory spaces. {\color{black} Throughout this paper, the following assumptions are always held for $H(x,p)$.} \begin{Assumption} \label{ham} Suppose that $H:\Omega\times\rr^m\to{\color{black}[0,+\fz)}$ is measurable and satisfies {\color{black} \begin{enumerate} \vspace{-0.15cm}\item [(H1)] For each $x\in\Omega$, $H(x,\cdot)$ is quasi-convex, that is, \begin{equation*} H(x,t p + (1 - t)q) \le \max \{H(x,p), H(x,q)\}, \quad \forall p, q \in \rr^m, \ \forall \, t\in[0,1] \ \text{and} \ \forall x\in\Omega. \end{equation*} \item [(H2)] \vspace{-0.15cm} For each $ x\in \Omega$, $H(x,0)=\min_{p\in\rrm}H(x,p)=0$. \vspace{-0.15cm} \item [(H3)] It holds that $R_\lz < \fz$ for all $\lz \ge 0$, and $\lim_{\lz \to \fz}R_\lz' = \fz$, where and in below, $$R_\lz:= \sup \{|p| \ | \ (x,p)\in \Omega \times \rr^{m},H(x,p)\le \lz \} $$ and $$ R'_\lz:= \inf \{|p| \ | \ (x,p)\in \Omega \times \rr^{m},H(x,p)\ge \lz \}.$$ \end{enumerate}} \end{Assumption} For any $\lz\ge 0$, we define \begin{equation}\label{d311} d _{\lambda} (x,y):= \sup\{u(y)-u(x) \ | \ u \in \dot W^{1,\infty}_{X }(\Omega) \ \mbox{ with }\ \|H(\cdot, Xu)\|_{L^\infty(\Omega)} \le \lz \}\quad\forall \mbox{$x,y \in \Omega$}. \end{equation} Recall that $\dot W^{1,\fz}_{X }(\Omega)$ denotes the set of all functions $u\in L^\fz (\Omega)$ whose distributional horizontal derivatives $Xu\in L^\fz(\Omega;\rr^m)$. 
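To fix ideas, the following small numerical sketch (written in Python purely for illustration; it plays no role in the proofs) evaluates sampled approximations of $R_\lz$ and $R'_\lz$ from (H3) and spot-checks the quasi-convexity (H1) for the model Hamiltonian $H(x,p)=\max\{|p_1|,2|p_2|\}$ with $m=2$; this Hamiltonian and the sampling box are assumptions made only for this example. For this model one expects $R_\lz=\frac{\sqrt{5}}{2}\lz$ and $R'_\lz=\frac{\lz}{2}$.
\begin{verbatim}
import numpy as np

# Model Hamiltonian chosen only for illustration (independent of x, m = 2):
# H(x, p) = max(|p_1|, 2|p_2|) is convex (hence quasi-convex) and vanishes
# exactly at p = 0, so (H1) and (H2) hold.
def H(x, p):
    return max(abs(p[0]), 2.0 * abs(p[1]))

rng = np.random.default_rng(0)
x0 = np.zeros(2)                              # H does not depend on x here
P = rng.uniform(-10.0, 10.0, (200000, 2))     # samples of the p-variable
vals = np.array([H(x0, p) for p in P])
norms = np.linalg.norm(P, axis=1)

def R(lam):        # sampled proxy for sup{ |p| : H(x,p) <= lam }
    sel = norms[vals <= lam]
    return sel.max() if sel.size else 0.0

def R_prime(lam):  # sampled proxy for inf{ |p| : H(x,p) >= lam }
    sel = norms[vals >= lam]
    return sel.min() if sel.size else np.inf

for lam in (0.5, 1.0, 2.0):
    print(lam, R(lam), R_prime(lam))   # compare with sqrt(5)/2*lam and lam/2

# spot check of the quasi-convexity (H1) along random segments
for _ in range(1000):
    p, q, t = rng.uniform(-5, 5, 2), rng.uniform(-5, 5, 2), rng.uniform()
    assert H(x0, t * p + (1 - t) * q) <= max(H(x0, p), H(x0, q)) + 1e-12
\end{verbatim}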
{\color{black} It is known that any function $ u\in \dot W^{1,\fz}_{X }(\Omega)$ admits a continuous representative $\wz u$; see \cite[Theorem 1.4]{FSS97} and also Theorem \ref{holder} and Remark \ref{conrep} below. In this paper, and in particular in \eqref{d311} above, we always identify functions in $\dot W^{1,\fz}_X(\Omega)$ with their continuous representatives. We remark that $d_\lz$ is not necessarily a distance, but Lemma \ref{lem311} says that $d_\lz$ is always a pseudo-distance as defined in Definition \ref{pseudo} below.} Given any $\lz\ge 0$, by the definition \eqref{d311}, if $u\in \dot W_{X}^{1,\fz}(\Omega)$ and $\|H(\cdot,Xu)\|_{L^\fz(\Omega)}\le \lz$, then $ u(y)-u(x)\leq d_{\lambda}(x,y) \ \forall\, x,y\in \Omega.$ It is natural to ask whether the converse holds. However, the converse is not necessarily true, as witnessed by the Hamiltonian \begin{equation}\label{[p]} \lfloor|p|\rfloor =\max\{t\in \nn \ | \ |p|-t\ge 0\}\quad \forall x\in \Omega,\, p\in\rrm; \end{equation} for details see Remark \ref{eg} below. The point is that $ \lfloor |p|\rfloor$ is not lower-semicontinuous. Below, the converse is shown to be true if $H(x,p)$ is assumed additionally to be lower-semicontinuous in the variable $p$, that is, \begin{enumerate}\vspace{-0.15cm} \em\item[(H0)] For almost all $x\in\Omega$, $H(x,p)\le\liminf_{q\to p}H(x,q) \quad\forall p\in\rr^m.$ \end{enumerate} \begin{thm} \label{rad} Suppose that $H$ satisfies (H0)-(H3). Given any $\lz \ge 0 $ and any function $u:\Omega\to \rr$, the following are equivalent: \begin{enumerate} \vspace{-0.15cm} \item[(i)] $u\in \dot W_{X}^{1,\fz}(\Omega) \ \text{and} \ \|H(\cdot,Xu)\|_{L^\fz(\Omega)}\le\lz$; \vspace{-0.15cm} \item[(ii)] $u(y)-u(x)\le d_\lz (x,y)$ $\forall x,y\in \Omega$; \vspace{-0.15cm} \item[(iii)] For any $x\in\Omega$, there exists a neighborhood $N(x)\subset\Omega$ such that $$u(y)-u(z)\le d_\lz (z,y)\quad \forall y,z \in N(x).$$ \end{enumerate} \noindent In particular, {\color{black}if $u:\Omega\to \rr$ satisfies any one of (i)-(iii), then} \begin{align}\label{ident}\|H(\cdot,Xu)\|_{L^\fz(\Omega)}&={\color{black}\inf}\{\lz \ge 0 \ | \ \lz \ \mbox{satisfies (ii)}\}={\color{black}\inf}\{\lz \ge 0 \ | \ \lz \ \mbox{satisfies (iii)}\}. \end{align} \end{thm} Using Theorem \ref{rad}, when $\lz\ge\lz_H$ we prove that $d_\lz$ has a pseudo-length property, which allows us to obtain the following. Here and below, define \begin{equation}\label{lh} \lz_H:=\inf\{\lz\ge0 \ | \ R'_\lz>0\}. \end{equation} Since $R_\lz'$ defined in (H3) of Assumption \ref{ham} is always nonnegative and increasing in $\lz\ge0$ and tends to $\fz $ as $\lz\to\fz$, we know that $0\le \lz_H<\fz$, and moreover, $\lz>\lz_H$ if and only if $R'_\lz>0$. \begin{thm} \label{radiv} Suppose that $H$ satisfies (H0)-(H3). Given any $\lz \ge \lz_H$ and any function $u:\Omega\to \rr$, the statement (i) in Theorem \ref{rad} is equivalent to the following: \begin{enumerate} \item[(iv)] For any $x\in\Omega$, there exists a neighborhood $N(x)\subset\Omega$ such that $$u(y)-u(x)\le d_\lz (x,y)\quad \forall y \in N(x).$$ \end{enumerate} \noindent In particular, {\color{black}if $u:\Omega\to \rr$ satisfies (iv), then} \begin{align}\label{ident2}\max\{\lz_H,\|H(\cdot,Xu)\|_{L^\fz(\Omega)}\} =\min\{\lz \ge \lz_H \ | \ \lz \ \mbox{satisfies (iv)}\}. \end{align} \end{thm} As a consequence of Theorem \ref{rad} and Theorem \ref{radiv}, we have the following Corollary \ref{global}. Associated with the Hamiltonian $H(x,p)$, we introduce some notions and notation.
Denote by $\dot W^{1,\fz}_H(\Omega)$ the collection of all $u\in \dot W^{1,\fz}_X(\Omega)$ with $\|H(\cdot,Xu)\|_{L^\fz(\Omega)}<\fz$. Denote by $\lip_H(\Omega,X)$ the class of functions $u:\Omega\to\rr$ {\color{black}satisfying (ii) for some $\lz>0$}, equipped with the (semi-)norm \begin{equation}\label{liph} \lip_H(u,\Omega)={\color{black}\inf}\{\lz \ge \lz_H \ | \ \lz \ \mbox{satisfies (ii)}\}. \end{equation} Denote by $\lip_H^\ast( \Omega)$ the collection of all functions $u$ with $$\lip_H^\ast(u,\Omega)=\sup_{x\in\Omega}\lip_Hu(x)<\fz,$$ where we write the pointwise ``Lipschitz'' constant \begin{equation}\label{ptliph} \lip_Hu(x)={\color{black}\inf}\{\lz\ge\lz_H \ | \ \lz \ \mbox{satisfies (iv)}\}. \end{equation} {\color{black}Thanks to the right continuity of the map $\lz\in[\lz_H,\fz)\mapsto d_\lz(x,y)$ as given in Lemma \ref{dpro}, the infima in \eqref{liph} and \eqref{ptliph} are actually minima.} \begin{cor} \label{global} Suppose that $H$ satisfies (H0)-(H3) with $\lz_H=0$. Then $\dot W^{1,\fz}_H(\Omega)=\lip_H(\Omega)=\lip_H^\ast( \Omega)$ and $$\|H(\cdot,Xu)\|_{L^\fz(\Omega)}=\lip_H(u,\Omega)=\lip_H^\ast(u, \Omega).$$ \end{cor} Next, we apply the above Rademacher type property {\color{black}to study a minimization problem} for $L^\fz$-functionals corresponding {\color{black}to the above Hamiltonian} $H(x,p)$ in both Euclidean and Carnot-Carath\'eodory spaces: $$\cf(u,U):=\big\|H(\cdot, Xu )\big\|_{L^\fz(U)}\ \mbox{for any $u\in W^{1,\infty}_{X,\loc}(U)$ and domain $ U\subset \Omega$}.$$ Aronsson \cite{a1,a2,a3,a4} initiated the study in this direction in the 1960s by introducing absolute minimizers. A function $u\in W^{1,\fz}_{X,\loc}(U)$ is called an absolute minimizer in $U$ for $H$ and $X$ (write $u\in AM(U;H,X)$ for short) if for any domain $V\Subset U$, it holds that $${\color{black}\cf(u,V) \le \cf(v,V)} \mbox{\ whenever}\ v\in \dot W^{1,\fz}_X(V)\cap C(\overline V)\ \mbox{and} \ u\big|_{\partial V}=v\big|_{\partial V}.$$ {\color{black}Here and throughout this paper, for domains $A$ and $B$, the notation $A\Subset B$ means that $A$ is a bounded subdomain of $B$ whose closure satisfies $\overline A\subset B$.} {\color{black} The existence of absolute minimizers with a given boundary value has been extensively studied. Apart from the pioneering work by Aronsson mentioned above, we refer the readers to \cite{acj,bjw,cp,cpp,c03,gxy,j98} and the references therein in the Euclidean setting. For existence results in Heisenberg groups, Carnot-Carath\'{e}odory spaces and general metric spaces with special types of Hamiltonians, we refer the readers to \cite{b1,j02,ksz,kz,ksz2,wcy}. Usually, there are two major approaches to obtaining the existence of absolute minimizers. When dealing with $C^2$ Hamiltonians, one usually transfers the study of absolute minimizers to the study of viscosity solutions of the Aronsson equation (the Euler-Lagrange equation of the $L^\fz$-functional $\cf$). Thus, to get the existence of absolute minimizers, it suffices to show the existence of the corresponding viscosity solutions. This approach was employed, for instance, in \cite{a3,a68,bjw,c03,gwy,j98,wcy}. To study the existence of absolute minimizers for Hamiltonians $H(x,p)$ with less regularity, one efficient way is to use Perron's method to first get the existence of absolute minimizing Lipschitz extensions (ALME), and then show the equivalence between ALMEs and absolute minimizers. This idea was adopted in \cite{a1,a2,j02,acj,kz,ksz2,gxy}.
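As a toy illustration of the notion of absolute minimizer, and of nothing more, the following Python sketch discretises the one-dimensional situation $\Omega=(0,1)$, $X=\{\frac{d}{dx}\}$ and $H(x,p)=|p|$, where the affine interpolation of the boundary data is expected to be absolutely minimizing; the discretisation, the competitors and the tolerance are ad hoc choices made only for this example.
\begin{verbatim}
import numpy as np

# Toy 1-D discretisation (illustration only): Omega = (0,1), H(x,p) = |p|,
# F(u,V) = sup_V |u'|.  The affine function u(x) = x should not be beaten
# by any competitor with the same values at the endpoints of a subinterval.
n = 200
x = np.linspace(0.0, 1.0, n + 1)

def F(u, i, j):
    """Discrete L^infty functional: max |u'| on the subinterval [x_i, x_j]."""
    return np.max(np.abs(np.diff(u[i:j + 1]) / np.diff(x[i:j + 1])))

u_lin = x.copy()                     # candidate absolute minimizer, u(x) = x
rng = np.random.default_rng(1)

for _ in range(100):
    i, j = sorted(rng.choice(np.arange(n + 1), size=2, replace=False))
    if j - i < 2:
        continue                     # need at least one interior node
    v = u_lin.copy()                 # competitor with the same endpoint values
    v[i + 1:j] += 0.1 * np.sin(np.pi * (x[i + 1:j] - x[i]) / (x[j] - x[i]))
    assert F(u_lin, i, j) <= F(v, i, j) + 1e-12

print("the affine interpolation was never beaten on the sampled subintervals")
\end{verbatim}
Of course, in this one-dimensional convex setting the conclusion is classical; the sketch is meant only to make the definition of $AM(U;H,X)$ concrete.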
To see the close connection between ALMEs and absolute minimizers, we refer the readers to \cite{js,ksz,dmv} and references therein. Theorem \ref{rad}, Theorem \ref{radiv} and Corollary \ref{global} allow us to apply Perron's method directly and then to establish the following existence result of absolute minimizers. This is partially motivated by \cite{cpp}. However, since we are faced with measurable Hamiltonians, there are several new barriers to be overcome as illustrated at the end of this section. } \begin{thm}\label{t231} Suppose that $H$ satisfies (H0)-(H3) with $\lz_H=0$. Given any domain $U\Subset\Oz$ and $g\in \lip_{d_{CC}}(\partial U)$, there must be a function $u\in AM(U;H,X)\cap \lip_{d_{CC}}(\overline U)$ so that $u|_{\partial U}=g$. \end{thm} \medskip Theorem \ref{rad}, Theorem \ref{radiv}, Corollary \ref{global} and Theorem \ref{t231} improve or extend several previous studies in the literature including Theorem \ref{rade} and Theorem \ref{radcc} above; see Remark \ref{re1} and Remark \ref{re2} below. \begin{rem} \label{re1} \rm \begin{enumerate} \item[ (i)] In Euclidean spaces, that is, $X=\{\frac{\partial }{\partial x_i} \}_{1\le i\le n}$, if $H(x,p)=|p|$, then Corollary \ref{global} coincides with Lemma \ref{rade'}, which is a consequence of Theorem \ref{rade}. In Carnot-Carath\'{e}odory spaces $(\Omega, X)$, if $H(x,p)=|p|$, then Corollary \ref{global} coincides with Lemma \ref{radcclem}, which is a consequence of Theorem \ref{radcc}. \item[ (ii)] In Euclidean spaces, if $H(x,p)$ is lower semi-continuous in $U \times \rn$, $H(x,\cdot)$ is quasi-convex for each $x \in U$, and satisfies (H2) and (H3), (ii)$\Leftrightarrow$(i) in Theorem \ref{rad} was proved in Champion-De Pascale \cite{cp} (see also \cite{c03,acjs} for convex $H(p)$ in Euclidean spaces). The proof in \cite{cp} relies on the lower semi-continuity in both of $x$ and $p$ heavily, which {\color{black} allows for approximation $H(x,p)$ via a continuous} Hamiltonian in $x$ and $p$. But such an approach fails under the weaker assumptions (H0)\&(H1) here. We refer to Section 7 for more details and related further discussions. \item[ (iii)]{\color{black}In both Euclidean and Carrot-Carath\'{e}odory} spaces, if $H(x,p)=\sqrt{\langle A(x)p,p\rangle}$, where {\color{black}$A(x)$ is a measurable symmetric matrix-valued function} satisfying uniform ellipticity, then Theorem \ref{rad} was established in \cite{ksz,ksz2}. The proofs therein rely on the inner product structure, and also do not work here. For measure spaces {\color{black}endowed with strongly regular} nonlocal Dirichlet energy forms, where the Hamiltonian is given by the square root of Dirichlet form, {\color{black}we refer to} \cite{sp,kz,ksz2,flw} for some corresponding Rademacher type property. \item[ (iv)] Under merely (H0)-(H3), one can not expect that $\lip_H(x)=H(x,Xu(x)) $ almost everywhere. Recall that in Euclidean spaces, there does exist $A(x)$, which satisfying Remark \ref{re1}(iii) above, so that such pointwise property fails for the Hamiltonian $\sqrt {\langle A(x)p,p\rangle}$. For more details see \cite{s97,ksz}. \end{enumerate} \end{rem} \begin{rem}\rm \label{re2} \begin{enumerate} \item[ (i)] {\color{black}In Euclidean spaces}, that is, $X=\{\frac{\partial }{\partial x_i}\}_{1\le i\le n}$, if $H(x,p)$ is given by Euclidean norm and also any Banach norm, the existence of absolute minimizers was established in Aronsson \cite{a1,a2} and Aronsson et al \cite{acj}. 
If $H(x,p)$ is given by $\sqrt{\langle A(x)p,p\rangle}$ with $A$ being as in Remark \ref{re1} (iii) above, existence of absolute minimizers is given by \cite{ksz} with the aid of \cite{js}. In a similar way, with the aid of \cite{js}, Guo et al \cite{gxy} also obtained the existence of absolute minimizers if $H(x,p)$ is a measurable function in $\Omega\times\rn$, and satisfies that $\frac1C< H(x,p)<C$ for all $x\in\Omega$ and $p\in S^{n-1}$, where $C\ge1$ is a constant, and that $H(x,\eta p) = |\eta| H(x,p)$ for all $x\in \Omega$, $p\in\rn$ and $\eta\in\rr$. \item[ (ii)] In Euclidean spaces, if $H(x,p)$ is continuous in both variables $x,p$ and quasi-convex in the variable $p$, with an additional growth assumption in $p$, Barron et al \cite{bjw} built up the existence of absolute minimizers. If $H(x,p)$ is lower semi-continuous in $x,p$ and quasi-convex in $p$ and satisfies (H1)-(H3), Champion-De Pascale \cite{cp} established the existence of absolute minimizers with the help of their Rademacher type theorem in Remark \ref{re1}(ii). Recall that the lower semi-continuity of $H$ plays a key role in \cite{cp} to obtain the pseudo-length property for $d_\lz$. \item[ (iii)] In Heisenberg groups, if $H(x,p)=\frac12|p|^2$ we refer to \cite{b1} for the existence of absolute minimizers. In any Carnot group, if $H\in C^2(\Omega\times \rr^m)$, $D^2_{pp}H(x, \cdot)$ is positive definite, and there exists $ \alpha\ge1$ such that \begin{equation}\label{e1.2} H(x,\eta p) = \eta^{\az}H(x,p)\quad\forall x\in \Omega,\ \eta>0,\ p\in\rn, \end{equation} then the existence of absolute minimizers was obtained by Wang \cite{wcy} via considering viscosity solutions to the corresponding Aronsson equations. \end{enumerate} \end{rem} The following remark explains that, without the assumption (H0), Theorem \ref{rad} does not necessarily hold. \begin{rem} \label{eg}\rm The Hamiltonian $H(x,p)=\lfloor|p|\rfloor$ given in $\eqref{[p]}$ satisfies (H1)-(H3) but does not satisfy (H0). Given any $\lz\in(0,1)$, we have \begin{align*} d_{\lz} (x,y) & =\sup\{u(y)-u(x) \ | \ u \in \dot W^{1,\fz}_X(\Omega)\ \mbox{with} \ \|H(\cdot,X u)\|_{L^\fz(\Omega)}=\|\lfloor|X u|\rfloor\|_{L^\fz(\Omega)} \le \lz\} \\ & =\sup\{u(y)-u(x) \ | \ u \in \dot W^{1,\fz}_X(\Omega) \ \mbox{with} \ \| |X u| \|_{L^\fz(\Omega)} \le 1\} \\ &=d_{CC}(x,y)\quad\forall x,y\in\Omega. \end{align*} Fix any $ z\in\Omega$ and write $u(x)=d_\lz(z,x)\ \forall x\in\Omega$. By the triangle inequality we have $$u(y)-u(x)=d_\lz(z,y)-d_\lz(z,x)\le d_\lz(x,y)$$ Recall that, when $X=\{\frac{\partial}{\partial x_j}\}_{1\le j\le n}$ or when $X$ is given by Carnot type H\"ormander vector fields, one always has $|X d_{CC} (z,\cdot) | =1$ almost everywhere; see \cite{monti}. For such $X$, we conclude that $$\|H(\cdot,X u)\|_{L^\fz(\Omega)}=1>\lz.$$ Thus Theorem \ref{rad} fails. \end{rem} The following remark explains the reasons why we need $\lz\ge \lz_H $ in Theorem \ref{radiv}, and why we assume $\lz_H=0$ in Theorem \ref{t231}. Note that, in Theorem \ref{rad} where $\lz_H$ maybe not $0$, we do get the equivalence among (i), (ii) and (iii) for any $\lz\ge 0$. \begin{rem}\label{rlambda}\rm \begin{enumerate} {\color{black} \item[ (i)] To prove (iv) in Theorem \ref{radiv} $\Rightarrow$ (i) in Theorem \ref{rad}, we need a pseudo-length property for $d_\lz$ as in Proposition \ref{len}. 
When $\lz>\lz_H$ (equivalently, $R'_\lz>0$), to get such a pseudo-length property for $d_\lz$, our proof does use $R_\lz'>0$ so as to guarantee that the topology induced by $\{d_\lz(x ,\cdot)\}_{x \in \Omega}$ (see Definition \ref{pseudo}) is the same as the Euclidean topology; see Remark \ref{exp1}. When $\lz=\lz_H$, we get such a pseudo-length property for $d_{\lz_H}$ via approximation by $d_{\lz_H+\ez}$ with sufficiently small $\ez>0$. \item[ (ii)] When $ \lz_H>0$ and $0\le\lz<\lz_H$, we do not know whether $d_\lz$ enjoys such a pseudo-length property. We remark that there does exist a Hamiltonian $H(x,p)$ which satisfies the assumptions (H0)-(H3) with $\lz_H>0$ (that is, $R_\lz'=0$ for some $\lz>0$); but for $0<\lz<\lz_H$, the topology induced by $\{d_\lz(x ,\cdot)\}_{x \in \Omega}$ does not coincide with the Euclidean topology; see Remark \ref{d1} (ii). \item[ (iii)] To get the existence of absolute minimizers, our approach does need Theorem \ref{radiv} and also several properties of $d^U_\lz$, whose proofs rely heavily on the pseudo-length property for $d_\lz$ and on $R_\lz'>0$. In Theorem \ref{t231}, we assume $\lz_H=0$ so that we can work with every Lipschitz boundary datum $g$ and obtain the existence of absolute minimizers. In the case $\lz_H>0$, our approach gives the existence of an absolute minimizer when the boundary datum $g:\partial U\to\rr$ satisfies $\mu(g,\partial U)>\lz_H$, but does not work when $\mu(g,\partial U)\le \lz_H$. Here $ \mu(g,\partial U)$ is the infimum of all $\lz$ such that $g(y)-g(x)\le d^U_\lz(x,y)$ for all $x,y\in\pa U$. } \end{enumerate} \end{rem} {\color{black} The paper is organized as follows, where we also clarify the ideas and main novelties in the proofs of Theorem \ref{rad}, Theorem \ref{radiv} and Theorem \ref{t231}. We emphasize that in our results from Section 2 to Section 6, $X$ is a fixed family of smooth vector fields in a domain $\Oz$ satisfying the H\"ormander condition, and the Hamiltonian $H(x,p)$ always enjoys (H0)-(H3). In all results of Sections 5 and 6, we further assume $\lz_H=0$. } In Section 2, we state several facts about the analysis and geometry of Carnot-Carath\'{e}odory spaces employed in the proofs. In Section 3, we prove (i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii) in Theorem \ref{rad}. Since (i)$\Rightarrow$(ii) follows from the definition and (ii)$\Rightarrow $(iii) is obvious, it suffices to prove (iii)$\Rightarrow $(i). To this end, we borrow some ideas from \cite{sk,sp,flw,ksz}, which were originally designed for nonlocal Dirichlet energy forms. The key is that, by employing assumptions (H0), (H1) and Mazur's theorem, we are able to prove that if $v_j\in\dot W^{1,\fz}_X(\Omega)$ with $\|H(\cdot, Xv_j)\|_{L^\fz(\Omega)}\le\lz$ and $v_j\to v$ in $C^0(\Omega)$ as $j\to\fz$, then $v \in\dot W^{1,\fz}_X(\Omega)$ with $\|H(\cdot, Xv)\|_{L^\fz(\Omega)}\le\lz$; see Lemma \ref{l6.1} for details. Thanks to this, choosing a suitable sequence of approximating functions via the definition of $d_\lz$, we then show that $$\mbox{$ d _{\lz}(x,\cdot) \in W_{X}^{1,\fz}(\Omega)$ and $\|H(\cdot,Xd _{\lz}(x,\cdot))\|_{L^\fz(\Omega)}\le\lz$ for all $\lz>0$ and $x\in \Omega$}.$$ See Lemma \ref{lem3.2}. Given any $u$ satisfying (iii), we construct approximating functions $u_j$ from $d_\lz$ and use Lemma \ref{l6.2} to show $H(x,Xu_j)\le \lz$. That is, (i) holds. In Section 4, we prove (iii) in Theorem \ref{rad} $\Leftrightarrow $ (iv) in Theorem \ref{radiv}. Since (iii)$\Rightarrow $(iv) is obvious, it suffices to show (iv)$\Rightarrow$(iii).
This follows from the pseudo-length property of the pseudo-metric $d_\lz$ established in Proposition \ref{len}. To get such a length property we find some special functions which fulfill the assumption of {\color{black}Theorem \ref{rad}(iii)}, and hence we show that the pseudo-metric $d_\lz$ has a pseudo-length property. In Section 5, we introduce McShane extensions and minimizers, and then gather several properties of them and of the pseudo-distance in Lemma \ref{property} to Lemma \ref{lem5.10}, which are required in Section 6. These properties are also of independent interest. Given any domain $U\Subset\Oz$, via the intrinsic distance $d_\lz^U$ induced from $U$, we introduce McShane extensions $\csgv^\pm$ of any $g\in \lip_{d_{CC}}(\partial U)$ in $U$. There are several reasons to use $d^U_\lz$ rather than $d_\lz$: for example, $d^U_\lz$ has {\color{black}the pseudo-length property in $U$} but the restriction of $d_\lz$ may not; moreover, Theorem \ref{rad} and Theorem \ref{radiv} hold if $(\Omega,d_\lz)$ therein is replaced by $ (U,d^U_\lz)$, but do not necessarily hold if $(\Omega,d_\lz)$ therein is replaced by $ (U,d _\lz)$. However, the use of $d_\lz^U$ causes several difficulties. For example, $d_\lz^U$ may be infinite when extended to $\overline U$. This makes the continuity of the McShane extensions near $\pa U$ far from obvious from the definition. In Lemma \ref{cont}, we obtain such continuity by analyzing the behaviour of $d_\lz^U$ near $\pa U$. Moreover, as required in Section 6, we have to study the relations between $d_\lz^U$ and $d^V_\lz$ for subdomains $V$ of $U$ in Lemma \ref{ll} and Lemma \ref{lem5.9}. In Section 6, we prove Theorem \ref{t231} in a constructive way by using the above Rademacher type property and Perron's approach, where we borrow some ideas from \cite{bjw,cp,cpp}. The proof consists of the crucial Lemma \ref{lem234}, Lemma \ref{lem235} and Proposition \ref{p42}. Lemma \ref{lem234} says that the McShane extensions $\csgu^\pm$ in $U$ of a function $g$ on $\pa U$ are local super/sub minimizers in $U$. Since $\csgu^\pm$ are the maximum/minimum minimizers, the proof of Lemma \ref{lem234} is reduced to showing that for any subdomain $V \subset U$, the McShane extensions $\cS^\pm_{h^\pm; V}$ in $V$ with boundary data $h^\pm=\csgu^\pm|_{\pa V}$ satisfy $$\mbox{$\cS^\pm_{h^\pm; V}(y) - \cS^\pm_{h^\pm; V}(x) \le d_\lz^U(x,y)$ for all $x,y \in \overline V$};$$ see Lemma \ref{lem5.7} and Lemma \ref{lem5.10} and the proof of Lemma \ref{lem234}. However, since Lemma \ref{cont} only gives $$\mbox{$\cS^\pm_{h^\pm; V}(y) - \cS^\pm_{h^\pm; V}(x) \le d_\lz^V(x,y)$ for all $x,y \in \overline V$,}$$ we must improve $d_\lz^V(x,y)$ here to the smaller quantity $d_\lz^U(x,y)$. To this end, we show $d_\lz^V= d_\lz^U$ locally in Lemma \ref{ll}, and also use the pseudo-length property of $ d_\lz^U$ heavily. Lemma \ref{lem235} says that a function which is both a local superminimizer and a local subminimizer must be an absolute minimizer. To get the required local minimizing property, we use McShane extensions to construct approximating functions and also need the fact that $d_\lz^V= d_\lz^{V\setminus\{x_i\}_{1\le i\le m}}$ in $\overline V\times \overline V$, as in Lemma \ref{lem5.9}. Proposition \ref{p42} says that the supremum of local subminimizers is an absolute minimizer. Due to Lemma \ref{lem235}, it suffices to prove the local super/sub minimizing property of such a supremum. We prove this by using Lemma \ref{lem234} and Lemma \ref{lem5.10} repeatedly, together with a contradiction argument. A schematic numerical illustration of McShane type extensions is sketched below.
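Since McShane type extensions recur throughout Sections 5 and 6, we record a schematic numerical sketch (Python, illustration only) of the two classical McShane--Whitney formulas taken with respect to a distance. The intrinsic pseudo-distance $d^U_\lz$ of Section 5 is replaced here by the Euclidean distance of the unit square, the boundary datum is an ad hoc $1$-Lipschitz function, and the sign conventions are not claimed to match the notation $\csgu^\pm$ used later.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Boundary points on the two vertical sides of the unit square and a
# boundary datum g that is 1-Lipschitz for the Euclidean distance.
B = rng.uniform(0.0, 1.0, size=(40, 2))
B[:, 0] = np.round(B[:, 0])            # push points onto {x1 = 0} and {x1 = 1}
g = 0.3 * np.sin(3.0 * B[:, 1]) * B[:, 0]

def d(x, y):                           # stand-in for the intrinsic distance
    return np.linalg.norm(x - y)

def S_lower(x):                        # sup_y [ g(y) - d(x, y) ]
    return max(g[k] - d(x, B[k]) for k in range(len(B)))

def S_upper(x):                        # inf_y [ g(y) + d(y, x) ]
    return min(g[k] + d(B[k], x) for k in range(len(B)))

# Both formulas reproduce g at the boundary points and are ordered inside.
for k in range(len(B)):
    assert abs(S_lower(B[k]) - g[k]) < 1e-12
    assert abs(S_upper(B[k]) - g[k]) < 1e-12
for x in rng.uniform(0.0, 1.0, size=(200, 2)):
    assert S_lower(x) <= S_upper(x) + 1e-12
print("McShane-Whitney sketch: boundary data matched and S_lower <= S_upper")
\end{verbatim}
In the paper the Euclidean distance must of course be replaced by $d^U_\lz$, which in general is neither symmetric nor finite up to the boundary; this is precisely the source of the difficulties addressed in Lemma \ref{cont}.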
In Section 7, we aim at explaining some obstacles to using previous approaches to establish the Rademacher type theorem and the existence of absolute minimizers. Indeed, in the literature studying Hamiltonians with better regularity or homogeneity, for instance \cite{cp,gxy}, another intrinsic distance $\bar d_\lz$ is more commonly used in the proofs, and it is hard to adapt to our setting. In the Appendix, we revisit Rademacher's theorem in the Euclidean space to show that Theorem \ref{rad} and Corollary \ref{global} are indeed extensions of Rademacher's theorem. {\color{black} \begin{center} \textsc{Acknowledgement } \end{center} The authors would like to thank Katrin F\"{a}ssler for the references \cite{ls16,l16} and her comments on this paper. The authors are also grateful to the anonymous referee for many valuable suggestions which made the paper more self-contained and improved its final presentation.} \section{Preliminaries} In this section, we introduce the background and some known results related to Carnot-Carath\'{e}odory spaces employed in the proofs. Let $X:=\{X_1, ... ,X_m\}$ for some $m\le n$ be a family of smooth vector fields in $\Oz$ which satisfies the H\"ormander condition, that is, there is a step $k \ge 1$ such that, at each point, $\{X_i\}^{m}_{i=1}$ and all their commutators up to order at most $k$ generate the whole $\rn$. Then for each $i=1,\cdots,m$, $X_i$ can be written as $$ X_i = \sum_{l=1}^n b_{il} \frac{\pa}{\pa x_l} \mbox{ in $\Omega$} $$ with $b_{il} \in C^\fz(\Omega)$ for all $i=1,\cdots,m$ and $l=1,\cdots,n$. Define the Carnot-Carath\'{e}odory distance corresponding to $X$ by \begin{equation} \label{dcc0} d_{CC}(x,y):=\inf\{\ell_\dcc(\gz) \ | \ \mbox{$\gz\in \mathcal {ACH}(0,1;x,y;\Omega)$}\}.\end{equation} Here and below, we write $\gz\in \mathcal {ACH}(0,1;x,y;\Omega)$ if $\gz:[0,1]\to\Omega$ is absolutely continuous, $\gz(0)=x,\gz(1)=y$, and there exist measurable functions $c_{i}:[0,1] \to \rr$ with $1 \le i \le m$ such that $\dot\gz(t) = \sum_{i=1}^m c_i(t)X_i(\gz(t))$ whenever $\dot\gz(t)$ exists. The length of $\gz$ is $$\ell_\dcc(\gz):=\int_0^1|\dot\gz(t)|\,dt=\int_0^1 \sqrt{\sum_{i=1}^m c_i^2(t)}\,dt.$$ In the Euclidean case, we have the following remark. \begin{rem}\label{dccde} \rm In the Euclidean case, that is, $X=\{\frac{\partial }{\partial {x_i}}\}_{1\le i\le n}$, $d_{CC}$ coincides with the intrinsic distance $d_E^\Omega$ as given in (A.2). In particular, $d_{CC}(x,y)=d^\Omega_E(x,y)$ for all $ x,y\in\Omega$ with $|x-y|<\dist(x,\partial\Omega)$. When $\Omega$ is convex, one further has $d_{CC}(x,y)=|x-y|$ for all $x,y\in\Omega$; however, when $\Omega$ is not convex, this is not necessarily true. See Lemma A.4 in the Appendix for more details. \end{rem} Since $X$ is a H\"ormander vector field in $\Omega$, for any compact set $K \subset \Omega$, there exists a constant $C(K) \ge 1$ such that \begin{equation*} C(K)^{-1} |x - y | \le \dcc(x,y) \le C(K) |x - y |^{\frac{1}{k}} \quad {\color{black}\forall x, y \in K}, \end{equation*} see for example \cite{nsw85} and \cite[Chapter 11]{HK}. This shows that the topology induced by $(\Omega,d_{CC})$ is exactly {\color{black}the Euclidean topology}. Given a function $u \in L^1_{\loc}(\Omega)$, its distributional derivative along $X_i$ is defined by the identity $$ \la X_i u , \phi \ra = \int_\Omega u X_i^\ast \phi \, dx \mbox{ for all $\phi \in C_0^\fz(\Omega)$,} $$ where $X_i^\ast = -\sum_{l=1}^n \frac{\pa}{\pa x_l}(b_{il} \cdot)$ denotes the formal adjoint of $X_i$.
Write $X^\ast= (X_1^\ast, \cdots,X_m^\ast)$. We call $Xu:=(X_1u, \cdots,X_m u)$ the horizontal distributional derivative for $u \in L^1_{\loc}(\Omega)$ and the norm $|Xu|$ is defined by $$ |Xu|= \sqrt{\sum_{i=1}^m|X_i u|^2} .$$ For $1\le p\le \fz$, denote by $\dot W^{1,p}_X(\Omega)$ the $p$-th integrable horizontal Sobolev space, that is, the collection of all functions $u\in L^1_\loc(\Omega)$ with its distributional derivative $Xu\in L^p(\Omega)$. Equip $\dot W^{1,p}_X(\Omega)$ with the semi-norm $\|u\|_{\dot W^{1,p}_X(\Omega)}=\||Xu|\|_{L^p(\Omega)}$. The following was proved in \cite[Lemma 3.5 (II)]{gn96}. \begin{lem}\label{uplus} If $ u \in \dot W^{1,p}_X(U)$ with $1\le p<\fz$ and $ U\Subset \Omega$, then $u^+=\max\{u,0\} \in \dot W^{1,p}_X(U)$ with $Xu^+=(Xu)\chi_{\{x\in U|u>0\}}$ almost everywhere. \end{lem} We recall the following imbedding of horizontal Sobolev spaces from \cite[Theorem 1.4]{gn}. For any set $U\subset\Omega$, the Lipschitz class $\lip_{d_{CC}}(U)$ is the collection of all functions $u:U\to\rr$ with its seminorm $$\lip_{d_{CC}}(u,U) := \sup_{x \ne y, x,y \in U} \frac{|u(x)-u(y)|}{\dcc(x,y) } < \fz. $$ \begin{thm} \label{holder} For any subdomain $U \Subset \Omega$, if $u \in \dot W^{1,\fz}_X(U)$, then there is a continuous function $\wz u \in \lip_{d_{CC}}(U)$ with $\wz u=u$ almost everywhere and $$\lip_{d_{CC}}(\wz u,U)\le C(U,\Omega) [\|u\|_{L^\fz(U)}+\|u\|_{\dot W^{1,\fz}_X(U)}].$$ \end{thm} \begin{rem}\label{conrep}\rm For any $u \in\dot W^{1,\fz}_X(U)$, we call above $\wz u$ given in Theorem \ref{holder} as the continuous representative of $u$. Up to considering $\wz u $, in this paper we always assume that $u$ itself is continuous. \end{rem} We have the following dual formula of $\dcc$. \begin{lem} \label{jerison} For any $x,y\in \Omega$, we have \begin{equation} \label{dcc} d_{CC}(x,y) =\sup\{u(y)-u(x) \ | \ u\in \dot W^{1,\fz}_X(\Omega) \mbox{ with $ \||Xu|\|_{L^\fz(\Omega)}\le1$}\}. \end{equation} \end{lem} To prove this we need the following bound for the norm of horizontal derivative of smooth approximation of functions in $\dot W^{1,\fz}_{X}(\Omega)$, see for example \cite[Proposition 11.10]{HK}. Denote by $\{\eta_\ez\}_{\ez\in(0,1)}$ the standard smooth mollifier, that is, $\eta_\ez(x) = \ez^{-n} \eta(\frac{x}{\ez}) \quad\forall x\in\rn$, where $\eta\in C^\fz(\rn)$ is supported in unit ball of $\rn$ (with Euclidean distance), $\eta \ge 0$ and $\int_{\rn} \eta\,dx=1$. \begin{prop}\label{mollif2} Given any compact set $K\subset \Omega$, there is $\ez_K\in(0,1)$ such that for any $\ez<\ez_K$ and $u \in \dot W^{1,\fz}_{X}(\Omega)$ one has \begin{equation}\label{hk1} |X(u\ast \eta_\ez)(x)| \le \||Xu|\|_{L^\fz(\Omega)} +A_\ez(u)\quad\forall x\in K, \end{equation} where $A_\ez(u)\ge0$ and $\lim_{\ez \to 0}A_\ez(u) \to 0$ in $K$. \end{prop} \begin{proof}[Proof of Lemma \ref{jerison}] Recall that it was shown by \cite[Proposition 3.1]{js87} that \begin{equation}\label{dcc2} d_{CC}(x,y) =\sup\{u(y)-u(x) \ | \ u\in C^\fz(\Omega) \mbox{ with $ \||Xu|\|_{L^\fz(\Omega)}\le1$}\} \quad\forall x,y\in \Omega. \end{equation} It then suffices to show that for any $u\in \dot W^{1,\fz}_X(\Omega) $ with $ \||Xu|\|_{L^\fz(\Omega)}\le1$, we have $$\mbox{$ u(y)-u(x)\le d_{CC}(x,y) \quad\forall x,y\in\Omega$.}$$ Note that $u$ is assumed to be continuous as in Remark \ref{conrep}. To this end, given any $x,y\in\Omega$, for any $ \ez>0$ there exists a curve $ \gz_\ez \subset \mathcal {ACH}(0,1;x,y;\Omega)$ such that $ \ell_{d_{CC}}(\gz_\ez)\le (1+\ez)d_{CC}(x,y). 
$ We can find a domain $U\Subset \Omega$ such that $\gz_\ez \subset U$. It is standard that $u\ast \eta_t\to u$ uniformly in $\overline U$ and hence $$u(y)-u(x) =\lim_{t\to 0}[u\ast \eta_t(y)-u\ast \eta_t(x)] . $$ Next, by Proposition \ref{mollif2}, for $0<t<t_{\overline U}$ one has $$|X(u\ast \eta_t)(z)| \le \||Xu|\|_{L^\fz(\Omega)} +A_t u(z)\quad\forall z\in \overline U ,$$ and moreover, $A_t u(z) \to 0$ uniformly in $\overline U $ as $t \to 0$. Obviously, we can find $t_{\ez,\overline U}<t_{\overline U}$ such that for any $0<t<t_{\ez,\overline U}$, we have $A_t u(x)\le \ez$ and hence, by $\||Xu|\|_{L^\fz(\Omega)}\le1$, $|X(u\ast \eta_t)(z)| \le 1+\ez ,$ for all $z\in\overline U$. Therefore \begin{align*}u\ast \eta_t(y)-u\ast \eta_t(x)&=\int_0^1 [(u\ast \eta_t)\circ\gz_t]'(s)\,ds\\ &= \int_0^1 X(u\ast \eta_t)(\gz_t(s))\cdot \dot \gz_t(s)\,ds\\ &\le (1+\ez)\ell_{d_{CC}}(\gz_\ez)\\ &\le (1+\ez)(1+\ez)d_{CC}(x,y). \end{align*} Sending $t\to0$ and $\ez\to0$, one concludes $ u(y)-u(x)\le d_{CC}(x,y) $ as desired. \end{proof} As a consequence of Rademacher type theorem (that is, Theorem \ref{radcc}), we have the following, which is an analogue of Lemma \ref{rade'}. Denote by $\lip^\ast_{d_{CC}}(\Omega)$ the collection of all functions $u$ in $\Omega$ with \begin{equation}\label{suplip1}\lip^\ast_{d_{CC}}(u,\Omega):=\sup_{x \in\Omega }\lip_{d_{CC}} u(x)<\fz. \end{equation} \begin{lem}\label{radcclem} We have $ \dot W^{1,\fz}_X(\Omega)=\lip_{d_{CC}}(\Omega)=\lip^\ast_{d_{CC}}(u,\Omega)$ with \begin{equation}\label{1.2} \||X u|\|_{L^\fz(\Omega)} = \lip_{d_{CC}}( u,\Omega)= \lip^\ast_{d_{CC}}( u,\Omega) \end{equation} \end{lem} \begin{proof} First, we show $\lip_{d_{CC}}(\Omega)= \lip^\ast_{d_{CC}}(\Omega)$ and $\lip_{d_{CC}}( u,\Omega)= \lip^\ast_{d_{CC}}( u,\Omega)$. Notice that $\lip_{d_{CC}}(u,\Omega)\subset \lip^\ast_{d_{CC}}(u,\Omega)$ and $\lip^\ast_{d_{CC}}(u,\Omega) \le \lip_{d_{CC}}(u,\Omega)$ are obvious. We prove $$\mbox{$\lip^\ast_{d_{CC}}(u,\Omega) \subset \lip_{d_{CC}}(u,\Omega)$ and $\lip_{d_{CC}}(u,\Omega) \le \lip_{d_{CC}}^\ast(u,\Omega).$}$$ Let $u \in \lip^\ast_{d_{CC}}(u,\Omega)$. Given any $x,y\in\Omega$, and $\gz\in \mathcal {ACH}(0,1;x,y;\Omega)$, parameterise $\gz$ such that $|\dot\gz(t)| = \ell_{\dcc}(\gz)$ for almost every $t \in [0,1]$. Since $$A_{x,y}:=\sup_{t\in[0,1]} \lip u(\gz(t))<\fz,$$ for each $t\in[0,1]$ we can find $r_t>0$ such that $$\mbox{$|u(\gz(s))-u(\gz(t))|\le A_{x,y}|\gz(s)-\gz(t)|=A_{x,y}\ell_{\dcc}(\gz)|s-t|$ whenever $|s-t|\le r_t$ and $s \in [0,1]$.}$$ Since $[0,1]\subset\cup_{t\in[0,1]}(t-r_t,t+r_t)$, we can find an increasing sequence $t_i\in[0,1]$ with $t_0 = 0$ and $t_N=1$ such that $$[0,1]\subset \cup_{i=1}^N(t_i-\frac12r_{t_i},t_i+\frac12 r_{t_i}).$$ Write $x_i=\gz(t_i)$ for $i =0 , \cdots , N.$ We have \begin{align*}|u(x)-u(y)|&=|\sum_{i=0}^{N-1}[u(x_i)-u(x_{i+1})]|\\ &\le \sum_{i=0}^{N-1}|u(x_i)-u(x_{i+1})| \\&\le A_{x,y}\ell_{\dcc}(\gz)\sum_{i=0}^{N-1}|t_i-t_{i+1}|\\&=A_{x,y}\ell_{\dcc}(\gz). \end{align*} Noticing that $A_{x,y} \le \lip^\ast(u,\Omega) < \fz$ for all $x,y \in \Omega$, we deduce that \begin{equation}\label{dcc3} |u(y)-u(x)|\le \lip_{d_{CC}}^\ast(u,\Omega)\ell_{d_{CC}}(\gz) \quad \forall x,y \in \Omega. \end{equation} For any $\ez>0$, recalling the definition of $\dcc$ in \eqref{dcc0}, there exists $\{\gz_\ez\}_{\ez >0} \subset \mathcal {ACH}(0,1;x,y;\Omega)$ such that \begin{equation}\label{dcc4} \ell_{d_{CC}}(\gz_\ez)\le (1+\ez)d_{CC}(x,y) \quad \forall x,y \in \Omega. 
\end{equation} Combining \eqref{dcc3} and \eqref{dcc4}, we have $$ \frac{|u(y)-u(x)|}{d_{CC}(x,y)} \le \lim_{\ez \to 0} \frac{|u(y)-u(x)|}{(1+\ez)d_{CC}(x,y)} \le \lip_{d_{CC}}^\ast(u,\Omega) \quad \forall x,y \in \Omega.$$ Taking supremum among all $x,y \in \Omega$ in the above inequality, we deduce that $u \in \lip_{d_{CC}}(u,\Omega)$ and $\lip_{d_{CC}}(u,\Omega) \le \lip_{d_{CC}}^\ast(u,\Omega)$. Hence the second equality in \eqref{1.2} holds. Next, we show $\dot W^{1,\fz}_X(\Omega)=\lip_{d_{CC}}(\Omega)$ and $\lip_{d_{CC}}(u,\Omega) = \| | Xu| \|_{L^\fz(\Omega)}$. By Theorem \ref{radcc}, we know $\lip_{\dcc}(\Omega) \subset \dot W^{1,\fz}_X (\Omega)$ and $\| | Xu| \|_{L^\fz(\Omega)} \le\lip_{d_{CC}}(u,\Omega)$. To see $\dot W^{1,\fz}_X (\Omega) \subset \lip_{\dcc}(\Omega)$ and $\lip_{d_{CC}}(u,\Omega) \le \| | Xu| \|_{L^\fz(\Omega)}$, let $u \in \dot W^{1,\fz}_X (\Omega)$. Then $\||Xu|\|_{L^\fz(\Omega)} =:\lz < \fz.$ If $ \lz >0$, then $\lz^{-1}u \in \dot W^{1,\fz}_X (\Omega)$ and $\||X (\lz^{-1} u )|\|_{L^\fz(\Omega)} =1$. Hence $\lz^{-1}u$ could be the test function in \eqref{dcc}, which implies $$ \lz^{-1} u(y)- \lz^{-1}u(x) \le \dcc (x,y) \ \forall x,y \in \Omega, $$ or equivalently, $$ \frac{|u(y)- u(x) |}{\||Xu|\|_{L^\fz(\Omega)}} \le \dcc (x,y) \ \forall x,y \in \Omega. $$ Therefore, $u \in \lip_{\dcc}(\Omega)$ and $ \lip_{\dcc} (u,\Omega) \le \||Xu|\|_{L^\fz(\Omega)}$. If $\lz=0$, then similar as the above discussion, we have for any $\lz'>0$ $$ \frac{|u(y)- u(x) |}{\lz'} \le \dcc (x,y) \ \forall x,y \in \Omega. $$ Therefore, $u \in \lip_{\dcc}(\Omega)$ and $ \lip_{\dcc} (u,\Omega) \le \lz'$ for any $\lz'>0$. Hence $ \lip_{\dcc} (u,\Omega) =0 = \||Xu|\|_{L^\fz(\Omega)}$ and we complete the proof. \end{proof} Next, we recall some concepts from metric geometry. First we recall the notion of pseudo-distance. \begin{defn}\label{pseudo} We say that $\rho$ is a pseudo-distance in a set $\Omega \subset \rn$ if $\rho $ is a function in $\Omega \times \Omega$ such that \begin{enumerate} \vspace{-0.3cm} \item[(i)] $\rho(x,x)=0$ for all $x \in \Omega$ and $\rho(x,y) \ge 0$ for all $x,y \in \Omega$; \vspace{-0.3cm} \item[(ii)] $\rho(x,z) \le \rho(x,y) + \rho(y,z)$ for all $x,y,z \in \Omega$. \end{enumerate} We call $(\Omega,\rho)$ as a pseudo-metric space. The topology induced by $\{\rho(x, \cdot)\}_{x \in \Omega}$ (resp. $\{\rho(\cdot, x)\}_{x \in \Omega}$) is the weakest topology on $\Omega$ such that $\rho(x, \cdot)$ (resp. $\rho(\cdot, x)$) is continuous for all $x \in \Omega$. \end{defn} We remark that since the above pseudo-distance $\rho$ may not have symmetry, the topology induced by $\{\rho(x, \cdot)\}_{x \in \Omega}$ in $\Omega$ may be different from that induced by $\{\rho(\cdot, x)\}_{x \in \Omega}$. Suppose that $H(x,p)$ is an Hamiltonian in $\Omega$ satisfying assumptions (H0)-(H3). Let $\{d_\lz\}_{\lz \ge 0}$ be defined as in \eqref{d311}. Thanks to the convention in Remark \ref{conrep}, one has \begin{equation} \label{dlz} d _{\lambda} (x,y):= \sup\{u(y)-u(x) \ | \ u \in \dot W^{1,\infty}_{X }(\Omega) \ \mbox{with}\ \|H(\cdot, Xu)\|_{L^\infty(\Omega)} \le \lz \}\quad\forall \mbox{$x,y \in \Omega$}. \end{equation} The following properties holds for $\dl$. \begin{lem} \label{lem311} The following holds. \begin{enumerate} \item[(i)] For any $\lz\ge0$, $\dl$ is a pseudo distance on $\Omega$. \item[(ii)] For any $\lz\ge0$, \begin{eqnarray}\label{e311} R'_\lz\dcc(x,y) \leq \dl (x,y) \leq R_\lz\dcc(x,y)<\fz\quad\forall x,y\in\Omega. 
\end{eqnarray} \item[(iii)] If $H(x,p)=H(x,-p)$ for all $p \in \rr^m$ and almost all $x \in \Omega$, then $d_\lz(x,y)=d_\lz(y,x)$ for all $x, y\in \Omega$. \end{enumerate} \end{lem} \begin{proof} To see Lemma \ref{lem311} (i), by choosing constant functions as test functions in \eqref{dlz}, one has $\rho(x,y)\ge0$ for all $x,y\in\Omega$. Obviously, one has $d_\lz(x,x)=0$ for all $x\in\Omega$. Besides, \begin{align*} d _{\lambda} (x,z)&= \sup\{u(z)-u(x) : u \in \dot W^{1,\infty}_{X }(\Omega),\ \|H(\cdot, Xu)\|_{L^\infty(\Omega)} \le \lz \} \\ & \le \sup\{u(y)-u(x) : u \in \dot W^{1,\infty}_{X }(\Omega),\ \|H(\cdot, Xu)\|_{L^\infty(\Omega)} \le \lz \} \\ & \quad + \sup\{u(z)-u(y) : u \in \dot W^{1,\infty}_{X }(\Omega),\ \|H(\cdot, Xu)\|_{L^\infty(\Omega)} \le \lz \} \\ & = d _{\lambda} (x,y) + d _{\lambda} (y,z). \end{align*} By Definition \ref{pseudo}, $\dl$ is a pseudo distance. To see Lemma \ref{lem311} (ii), by (H3), we have \begin{align*} \{u\in \dot W^{1,\infty}_{X }(\Omega ) \ | \ \||Xu|\|_{L^\fz(\Omega )} \le R'_\lz\} &\subset \{u\in \dot W^{1,\infty}_{X }(\Omega ) \ | \ \|H(x,Xu)\|_{L^\fz(\Omega )} \le \lz\} \\ & \subset \{u\in \dot W^{1,\infty}_{X }(\Omega ) \ | \ \||Xu|\|_{L^\fz(\Omega )} \le R_\lz\}. \end{align*} From this and the definitions of $\dcc$ in \eqref{dcc} and $\dl$ in \eqref{dlz}, we deduce \eqref{e311} as desired. Finally we show Lemma \ref{lem311} (iii), since $H(x,p)=H(x,-p)$ for all $p \in \rr^m$ and almost all $x \in \Omega$, then $$ \|H(x,Xu)\|_{L^\fz(\Omega )} = \|H(x,X(-u))\|_{L^\fz(\Omega )} \mbox{ for all $u \in \dot W^{1,\infty}_{X }(\Omega ) $}.$$ Hence for any $x, y\in \Omega$, $u$ can be a test function for $d_\lz(x,y)$ in the right hand side of \eqref{dlz} if and only if $-u$ can be a test function for $d_\lz(y,x)$ in the right hand side of \eqref{dlz}. As a result, $d_\lz(x,y)=d_\lz(y,x)$, which completes the proof. \end{proof} As a consequence of Lemma \ref{lem311}, we obtain the following. \begin{cor} \label{lem311-1} For any $\lz> \lz_H$, $d_\lz$ is comparable with $d_{CC}$, that is , \begin{eqnarray}\label{e311-1} 0<R'_\lz \leq \frac{\dl (x,y)} {\dcc(x,y)} \leq R_\lz <\fz\quad\forall x,y\in\Omega. \end{eqnarray} Consequently, the topology induced by $\{d_\lz(x ,\cdot)\}_{x \in \Omega}$ and $\{d_\lz(\cdot,x)\}_{x \in \Omega}$ coincides with the one induced by $\dcc$ in $\Omega$, and hence, is the Euclidean topology. \end{cor} \begin{rem} \label{d1} \rm (i) We remark that in Lemma \ref{lem311} (iii), without the assumption $H(x,p)=H(x,-p)$ for all $p \in \rr^m$ and almost all $x \in \Omega$, $d_\lz$ may not be symmetric, that is, $d_\lz(x,y)=d_\lz(y,x)$ may not hold for all $x, y\in \Omega$. (ii) If $R'_\lz=0$ for some $\lz>0$, then the topology induced by $\{\dl(x,\cdot)\}_{x \in \Omega}$ may be different from the Euclidean topology. To wit this, we construct an Hamiltonian $H(p)$ in Euclidean disk $\Omega=\{x\in\rr^2||x|<1\}$ with $X=\{\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2} \}$, which satisfies (H0)-(H3) with $\lz_H>0$. Define $H: \Omega \times \rr^2 \to [0,\fz)$ by $$ H(p) = H(p_1,p_2) =\max\{ |p| ,2 \}\chi_{\{p \in \rr^2 | p_1<0\}} + |p|\chi_{\{p \in \rr^2 | p_1 \ge 0\}}$$ where $\chi_E$ is the characteristic function of the set $E$. One can check that $H(p)$ satisfies (H0)-(H3). We omit the details. Now we show $$ R_{1}' = \inf\{ |p| \ | \ H(p) \ge 1 \}=0 ,$$ and thus $\lz_H \ge 1 >0$. Indeed, for any $ p=(p_1,p_2)$ and $p_1\ge 0$, one has $H(p )=|p|$ and hence $H(p)\ge 1$ implies $|p|\ge1$. 
On the other hand, for any $ p=(p_1,p_2)$ and $p_1< 0$, we always have $H(p)=\max\{|p|,2\}\ge2$, and hence $$ R_{1}' = \inf\{ |p| | \ p=(p_1,p_2)\in\rr^2 \ \mbox{with $ |p|<1$ and $ p_1<0$} \} =0.$$ Writing $e_1=(1,0)$, we claim that \begin{equation}\label{a1} d_1(x,x+ae_1) = 0 \quad \forall x, x+ae_1\in\Omega\ \mbox{with}\ a\in (-1,0] . \end{equation} This claim implies that the topology induced by $\{d_1(x,\cdot)\}_{x\in\Omega}$ is different with the Euclidean topology. To see the claim \eqref{a1}, writing $o=(0,0)$, we only need to show that \begin{equation}\label{a1-1} d_1(o,ae_1) = 0 \quad \forall a\in (-1,0). \end{equation} It then suffices to show that $ u(ae_1)- u(0) \le 0 $ for all $u \in W^{1,\fz}(\Omega) \mbox{ with } \|H(\nabla u)\|_{L^\fz(\Omega)} \le 1$. Given such a function $u$, observe that $ \|H(\nabla u)\|_{L^\fz(\Omega)} \le 1$ implies $\frac{\partial u}{\partial x_1}(x) \ge 0 $ and $|\nabla u(x)|\le 1$ for almost all $x \in \rr^2$. Let $\{\gz_\dz\}_{0\le \dz \ll 1}$ be the line segment joining $\dz e_2$ and $ae_1+\dz e_2$ with $e_2=(0,1)$, that is, $$\gz_\dz (t) := t(a,\dz) + (1-t)(0,\dz) \quad \forall t \in [0,1].$$ Since $u \in W^{1,\fz}(\Omega)$ implies that $u$ is ACL (see \cite[Section 6.1]{hkst}), there exists $\{\dz_n\}_{n \in \nn}$ depending on $u$ such that $u$ is differentiable almost everywhere on $\gz_{\dz_n}$. Noting that $ \dot \gz_{\dz_n}(t) =-e_1 $ and by $\frac{\partial u}{\partial x_1}(x) \ge 0 $, one has $$ \nabla u (\gz_{\dz_n}(t)) \cdot \dot \gz_{\dz_n}(t) = -\frac{\pa u}{\pa x_1}(\gz_{\dz_n}(t))\le0 \quad \forall t \in [0,1],$$ and hence \begin{align*} u((a,\dz_n)) - u((0,\dz_n)) & = \int_0^1 \nabla u (\gz_{\dz_n}(t)) \cdot \dot \gz_{\dz_n}(t) \, dt = \int_0^1 -\pa_1 u(\gz_{\dz_n}(t)) \, dt \le 0. \end{align*} Thus \begin{align*} u(ae_1)- u(0) & = \lim_{n \to \fz} [u((a,\dz_n)) - u((0,\dz_n))] \le 0 \end{align*} as desired. \end{rem} Finally, we introduce the pseudo-length property. \begin{defn}\label{leng}\rm We say a pseudo-metric space $(\Omega,\rho)$ is a pseudo-length space if for all $x,y \in \Omega$, \begin{equation*} \rho (x, y) := \inf \{\ell_{\rho} (\gz) \ | \ \gz \in \mathcal C(a, b; x, y; \Omega )\} \end{equation*} where $ \mathcal C(a, b; x, y; \Omega)$ denotes the class of all continuous curves $\gz:[a,b]\to {\color{black}\Omega}$ with $\gz(a)=x$ and $\gz(b)=y$, and $$\ell_{\rho} (\gz):= \sup \bbbl \sum_{i=0}^{N-1} \rho (\gz(t_{i}), \gz(t_{i+1}))\ \bigg | \ a=t_0 < t_1 < \cdots < t_N=b \bbbr.$$ \end{defn} \section{Proof of Theorem \ref{rad}} In this section, we always suppose that the Hamiltonian $H(x,p)$ enjoys assumptions (H0)-(H3). To prove Theorem \ref{rad}, we first need several auxiliary lemmas. \begin{lem} \label{l6.1} Suppose that $ \{u_j\}_{j\in\nn}\subset \dot W^{1,\fz}_X(\Omega )$, and there exists $\lz\ge0$ such that $$\mbox{$\|H(x,Xu_j)\|_{L^\fz(\Omega )}\le \lz$ for all $j\in\nn$.}$$ If $u_j\to u$ in $C^0(\Omega )$, then $u\in \dot W^{1,\fz}_X(\Omega )$ and $\|H(x,Xu)\|_{L^\fz(\Omega )}\le \lz$. {\color{black} Here and in below, for any open set $V \subset \Omega$, $u_j\to u$ in $C^0(V)$ refers to for any $K \Subset V$, $u_j\to u$ in $C^0(\overline K)$. } \end{lem} \begin{proof} By Lemma \ref{radcclem}, one has $$|u(x)-u(y)|=\lim_{j\to\fz}|u_j(x)-u_j(y)|\le\limsup_{j\to\fz} \||Xu_j|\|_{L^\fz(\Omega )}d_{CC}(x,y).$$ By (H3) and $\|H(\cdot,Xu_j)\|_{L^\fz(\Omega )}\le \lz$, we have $\||Xu_j|\|_{L^\fz(\Omega )}\le R_\lz$ for all $j$, and hence $$|u(x)-u(y)| \le R_\lz d_{CC}(x,y),$$ that is, $ u\in\lip_{d_{CC}}(\Omega)$. 
By Lemma \ref{radcclem} again, we have $u\in {\color{black}\dot W^{1,\fz}_X(\Omega )}$. Next we show that $\|H(x,Xu)\|_{L^\fz(\Omega )}\le \lz$. It suffices to show that $\|H(x,X(u|_U))\|_{L^\fz(U )}\le \lz$ for any $U\Subset\Omega$. {\color{black}Given any $U\Subset\Omega$, we claim that $X(u_j|_{ U })$ converges to $X(u|_{ U })$ weakly in $L^2( U ,\rr^m)$, that is, for all $1\le i\le m$, one has $$ \lim_{j \to \fz}\int_{ U }(X_i u_j)\,\phi \, dx =\int_{ U } (X_i u)\,\phi \, dx\ \ \forall \phi\in L^2( U ).$$ } To see this claim, note that $\||Xu_j|\|_{L^\infty(\Omega )}\le R_\lz$ for all $j\in\nn$, and hence $\||Xu_j|\|_{L^2( U )}\le R_\lz {\color{black}| U |^{1/2}}$ for all $j\in\nn$. In other words, the set $\{X(u_{j }|_{ U })\}_{j\in\nn}$ is bounded in $L^2( U ,\rr^m)$. {\color{black} By the weak compactness of bounded sets in $L^2( U ,\rr^m)$, any subsequence of $\{Xu_{j }\}_{j\in\nn}$ admits a subsubsequence which converges weakly in $L^2( U ,\rr^m)$. Therefore, to get the above claim, by a contradiction argument we only need to show that for any subsequence $\{Xu_{j_s}\}_{s\in\nn}$ of $\{Xu_{j }\}_{j\in\nn}$, if $ Xu_{j_s} \rightharpoonup q$ weakly in $L^2( U , \rr^m )$ as $s\to\fz$, then $X(u|_{ U }) =q$. For such $\{Xu_{j_s}\}_{s\in\nn}$, recalling that $u_j\to u$ in $C^0(\Omega )$ as $j\to\fz$, for all $1\le i \le m$ one has $$ \int_{ U } u X_i^\ast \phi \, dx =\lim_{j \to \fz}\int_{ U }u_j X_i^\ast \phi \, dx =\lim_{s \to \fz}\int_{ U }u_{j_s} X_i^\ast \phi \, dx =\lim_{s \to \fz}\int_{ U }(X_i u_{j_s})\phi \, dx =\int_{ U }q_i\phi \, dx$$ for any $\phi\in C^\fz_c( U )$. This implies that $Xu|_{ U } =q$ as desired. } By Mazur's theorem, for each $l\in\nn$ we can find a finite convex combination $w_l$ of $\{X(u_j|_{ U })\}_{j\in\nn}$ so that $\|w_l-X(u|_{U})\|_{L^2( U )}\to0 $ as $l\to\fz$. Here $w_l$ is a finite convex combination of $\{X(u_j|_{ U })\}_{j\in\nn}$ if there exist $\{\eta_{j}\}_{j=1}^{k_l}\subset[0,1]$ for some $k_l\in\nn$ such that $$ \sum_{j=1}^{k_l}\eta_j=1\ \mbox{ and}\ w_l=\sum_{j=1}^{k_l}\eta_j X(u_j|_{ U }).$$ By the quasi-convexity of $H(x,\cdot)$ as in {\color{black}(H1)}, we have $$H(x,w_l)=H(x,\sum_{j=1}^{k_l}\eta_j X(u_j|_{ U }))\le \sup_{1\le j\le k_l}H(x, X(u_j|_{ U }))\le\lz\quad\mbox{for almost all $x\in U $}.$$ Up to passing to a subsequence, we may assume that $w_l\to X(u|_{ U })$ almost everywhere in $ U $. By the lower-semicontinuity of $H(x,\cdot)$ as in {\color{black}(H0)}, we conclude that $$H(x,X(u|_{ U }))\le {\color{black}\liminf}_{l\to\fz}H(x,w_l)\le \lz\quad\mbox{for almost all $x\in U $}.$$ The proof is complete. \end{proof} \begin{lem} \label{l21} If $v \in \dot W^{1,\infty}_{X }(\Omega)$, then \begin{equation}\label{claimx}\mbox{$v^+=\max\{v,0\}\in \dot W^{1,\infty}_{X }(\Omega)$ and $Xv^+=(Xv)\chi_{\{x\in\Omega, v>0\}} $ almost everywhere.} \end{equation} Consequently, let $\{v_i\}_{1\le i\le j} \subset \dot W^{1,\infty}_{X }(\Omega)$ for some $j\in\nn$. If $ u=\max_{1\le i \le j}\{v_i\}$ or $ u=\min_{1\le i \le j}\{v_i\}$, then \begin{equation}\label{claimxx}\mbox{$u\in \dot W^{1,\infty}_{X }(\Omega)$ and $\|H(x,Xu )\|_{L^\fz(\Omega)} \le \max_{1\le i\le j}\{\|H(x,Xv_i)\|_{L^\fz(\Omega)} \}$.}\end{equation} \end{lem} \begin{proof} {\color{black} First we prove \eqref{claimx}. Let $v \in \dot W^{1,\infty}_{X }(\Omega)$. By Lemma \ref{radcclem}, $v \in \lip_{\dcc}(\Omega)$. Observe that $$|v^+(x)-v^+(y)| \le |v(x)-v(y)| \le \lip_{\dcc}(v,\Omega) \dcc(x,y) \quad \forall x,y \in \Omega,$$ that is, $v^+\in \lip_{\dcc}(\Omega)$.
By Lemma \ref{radcclem} again, $v^+ \in \dot W^{1,\infty}_{X }(\Omega)$. To get $Xv^+=(Xv)\chi_{\{x\in\Omega, v>0\}}$ almost everywhere, it suffices to consider the restriction $v|_U$ of $v$ in any bounded domain $U \Subset \Omega$, that is, to prove $X(v|_U)^+=(Xv|_U)\chi_{\{x\in U, v>0\}}$ almost everywhere. But this always holds thanks to Lemma \ref{uplus} and the fact $v|_U \in \dot W^{1,p}_{X}(U)$ for any $1 \le p < \fz$.} Next we prove \eqref{claimxx}. If $u =\max\{v_1,v_2\}$, where $v_i \in \dot W^{1,\infty}_{X }(\Omega)$ for $i=1,2$, then $u =v_2+(v_1-v_2)^+.$ By \eqref{claimx}, $u \in \dot W^{1,\infty}_{X }(\Omega)$ and \begin{align*} Xu &=Xv_2+X(v_1-v_2)^+ \\ &=Xv_2+[(X(v_1-v_2)]\chi_{\{x\in\Omega, v_1>v_2\}}\\ &=(Xv_2)\chi_{\{x\in\Omega, v_1\le v_2\}}+ (Xv_1 ) \chi_{\{x\in\Omega, v_1>v_2\}}. \end{align*} Thus $$H(x,Xu(x))= H(x, Xv_2)\chi_{\{x\in\Omega, v_1\le v_2\}}+ H(x, Xv_1 ) \chi_{\{x\in\Omega, v_1>v_2\}} \quad \mbox{for almost all $x\in\Omega$}.$$ {\color{black}A similar argument} holds for $u=\min\{v_1,v_2\}$. This gives \eqref{claimxx} when $j=2$. By an induction argument, we get \eqref{claimxx} for all $j\ge2$. \end{proof} \begin{lem}\label{lem3.2} For any $ \lz \ge 0$ and $x \in \Omega $, we have $ d_\lz(x, \cdot), \dl(\cdot,x)\in \dot W^{1,\infty}_{X }(\Omega )$ and $$\|H(\cdot,Xd_\lz(x, \cdot))\|_{L^\fz(\Omega )} \le \lz\quad\mbox{and}\quad\|H(\cdot,-X\dl(\cdot,x))\|_{L^\fz(\Omega )} \le \lz.$$ \end{lem} \begin{proof} Given any $x\in \Omega $ and $\lz \ge 0$, write $v(z)=d_\lz(x,z)$ for all $z\in \Omega $. To see $H(\cdot,Xv)\le \lz$ almost everywhere, by Lemma \ref{l6.1}, it suffices to find a sequence of function $u_j\in {\color{black} \dot W_X^{1,\fz}(\Omega )}$ so that $H(\cdot,Xu_j)\le \lz$ almost everywhere and $u_j\to v$ in $C^0(\Omega )$ as $j\to\fz$. To this end, let $\{K_j\}_{j \in \nn}$ be a sequence of compact subsets in $\Omega $ with $$\mbox{$\Omega = \bigcup_{j \in \nn} K_j$ and $K_j \subset K_{j+1}^{\circ}$.}$$ For $j \in \nn$ and $y \in K_j$, by definition of $\dl$ we can find a function $v_{y,j} \in\dot W^{1,\fz}_X(\Omega)$ such that $H(\cdot,Xv_{y,j})\le\lz$ almost everywhere, $$d_\lz(x,y)-\frac{1}{2j}\le v_{y,j}(y) -v_{y,j}(x).$$ Since Lemma \ref{lem311} implies that $d_\lz(x, \cdot)$ is continuous, there exists an open neighbourhood $N_{y,j}$ of $y$ with $$d_\lz(x, z) - \frac{1}{j} \le v_{y,j}(z)-v_{y,j}(x), \quad\forall z\in N_{y,j}. $$ Thanks to the compactness of $K_j$, there exist $y_1, \cdots , y_l\in K_j$ such that $K_j \subset \bigcup_{i=1}^{l} N_{y_i,j}$. Write $$u_j(z) := \max\{v_{y_i,j}(z)-v_{y_i,j}(x) : i = 1, \cdots , l\}\quad\forall z\in K_j.$$ Then $u_j(x)=0$, and $$d_\lz(x,z) \le u_j(z)+ \frac{1}{j} \ \text{for all} \ z\in K_j.$$ Moreover by Lemma \ref{l21} we have $$H(\cdot,Xu_j)\le \lz\quad \mbox{in}\ \Omega .$$ Since $$d_\lz(x,z) \ge u_j(z) \ \text{for all} \ z\in K_j$$ is clear, we conclude that $u_j\to {\color{black}v} $ in $C^0(K_i)$ as $j\to\fz$ for all $i$, and hence {\color{black}$u_j\to {\color{black}v} $ uniformly in any compact subset of $\Omega$} as $j\to\fz$. Similarly, we can show $\|H(x,-Xd_\lz(\cdot, x ))\|_{L^\fz(\Omega )} \le \lz$ which finishes the proof. \end{proof} In general, for any $E \subset \Omega $, we define \begin{equation*} d_{\lz,E}(z) := \inf_{x \in E} \{d_\lz(x,z)\}. \end{equation*} \begin{lem}\label{l6.2} For any set $E\subset\Omega $, we have $d_{\lz,E} \in \dot W^{1,\infty}_{X }(\Omega )$ and $\|H(x,Xd_{\lz,E} )\|_{L^\fz(\Omega )}\le \lz$. 
\end{lem} \begin{proof} Let $\{K_j\}_{j \in \nn}$ be a sequence of compact subsets in $\Omega $ with $\Omega = \bigcup_{j \in \nn} K_j$ and $K_j \subset K_{j+1}^{\circ}$. For each $j$ and $y\in K_j$, we can find $z_{y,j}\in E$ such that $$d_{\lz,E}(y)\le d_{\lz,z_{y,j}}(y)\le d_{\lz,E}(y)+\frac{1}{2j}.$$ Thus there exists a neighborhood $N(y)$ of $y$ such that $$d_{\lz,E}(z)\le d_{\lz,z_{y,j}}(z)\le d_{\lz,E}(z)+\frac{1}{j}\quad\forall z\in N(y).$$ By the compactness of $K_j$, we can find $\{y_1,\cdots, y_{l_j}\}\subset K_j$ such that $K_j\subset\cup_{i=1}^{l_j}N(y_i) $. Write $$u_j(z)={\color{black}\min_{i=1,\cdots, l_j}}\{d_{\lz,z_{y_i,j}}(z)\} \quad\forall z\in \Omega.$$ Then $$d_{\lz,E}\le u_j\le d_{\lz,E}+\frac{1}{j}\quad \mbox{in}\ K_j .$$ This means that $u_j\to d_{\lz,E}$ in $C^0(\Omega)$ as $j\to\fz$. Note that, by Lemma \ref{lem3.2} and Lemma \ref{l21}, $$\|H(\cdot, Xu_j )\|_{L^\fz(\Omega)}\le \lz.$$ By Lemma {\color{black}\ref{l6.1}} we have $d_{\lz,E}\in \dot W^{1,\fz}_X(\Omega)$ and $\|H(\cdot, Xd_{\lz,E})\|_{L^\fz(\Omega)}\le \lz$ as desired. \end{proof} We are able to prove (i)$\Rightarrow$(ii)$\Rightarrow$(iii)$\Rightarrow$(i) in Theorem \ref{rad} as follows. \begin{proof} [Proof of (i)$\Rightarrow$(ii)$\Rightarrow$(iii)$\Rightarrow$(i) in Theorem \ref{rad}.] The definition of $d_\lz$ directly gives (i)$\Rightarrow$(ii). Obviously, (ii)$\Rightarrow$(iii). Below we prove (iii)$\Rightarrow$(i). Recall that (iii) says that $u(y)-u(z)\le d_\lz (z,y)$ for all $x \in \Omega$ and $y,z \in N(x)$, where $N(x)\subset \Omega$ is a neighbourhood of $x$. To get (i), since $\Omega=\cup_{x\in \Omega}N(x)$, it suffices to show that for all $x\in \Omega$, one has $u\in \dot W_{X}^{1,\fz}(N(x))$ and $\|H(\cdot,Xu)\|_{L^\fz(N(x))}\le\lz$. Fix any $x$, and write $ U=N(x)$. Without loss of generality we assume that $U$ is bounded. Notice that $u\in L^\fz(U)$. Let $M\in \nn$ be such that $M\ge \sup_{U} |u|$. For each $k \in \nn$ and $l \in\{ -Mk, -M(k-1), \cdots , Mk\}$, set $$u_{k,l}(z) := l/k + {\color{black}d_{\lz,F_{k,l}} (z)}\quad\forall z\in U $$ where $$F_{k, l} := \{ y \in U \ | \ u(y) \le \frac{l}{k} \}.$$ By Lemma \ref{l6.2}, one has $$\|H(\cdot,{\color{black}Xu_{k,l}})\|_{L^\fz(U)}\le\lz.$$ Set $$ u_k (z):= \min_{-k\le j\le k}u_{k,Mj}(z)\quad\forall z\in U.$$ {\color{black} To get $u\in \dot W_{X}^{1,\fz}(U)\ \text{and} \ \|H(\cdot,Xu)\|_{L^\fz(U)}\le\lz$, thanks to Lemma \ref{l6.1} with $\Omega=U$, we only need to show $u_k \to u$ in $C^0(U)$ as $k\to\fz$. To see $u_k \to u$ in $C^0(U)$ as $k\to\fz$, note that, for any $k \in \nn$, $-M\le u \le M$ in $U$ implies $-Mk\le ku \le Mk$ in $U$. Thus, at any $z \in U$, we can find an integer $j$ with $-k\le j\le k$, which depends on $z$, such that $Mj\le ku(z)\le M(j+1)$. Letting $l=Mj$, we have $ \frac{l}{k}\le u(z)\le \frac{l+M}k$. We claim that $u_k(z) \in [u(z),\frac{l+M}{k}]$. Obviously, this claim gives $$|u(z)-u_k(z)| \le \frac{M}{k}\quad\forall z\in U,$$ and hence, the desired convergence $u_k \to u$ in $C^0(U)$ as $k\to\fz$. Below we prove the above claim $u_k(z) \in [u(z),\frac{l+M}{k}]$. Recall that $u(z)\in [\frac{l}{k},\frac{l+M}{k}]$ for some $l =Mj$ with $-k\le j\le k$. First, we prove $u_k(z) \le \frac{l+M}{k}$. If $l+M > Mk$, then $M < (l+M)/k$. Since $$F_{k, Mk} = \{ y \in U \ | \ u(y) \le M \}=U,$$ we have $d_{\lz,F_{k,Mk}} (z)=0$ and hence, $$u_{k,Mk}(z)=M+d_{\lz,F_{k,Mk}} (z)=M < \frac{l+M}{k}.$$ Therefore, \begin{equation}\label{upper2} u_k (z) \le u_{k,Mk}(z) < \frac{l+M}{k}.
\end{equation} } {\color{black} If $l+M\le Mk$, then $u(z)\in [\frac{l}{k},\frac{l+M}{k}]$ implies $z \in F_{k,l+M}$ and hence $d_{\lz,F_{k,l+M}}(z)=0$. Thus \begin{equation}\label{eq2121} u_k(z) \le u_{k,l+M}(z)=\frac{l+M}k+d_{\lz,F_{k,l+M}}(z) =\frac{l+M}{k}. \end{equation} Combining \eqref{eq2121} and \eqref{upper2}, we have $ u_k(z) \le \frac{l+M}{k} $ as desired. } Next, we prove $u(z)\le u_k(z)$. {\color{black} For any $-k\le j\le k$ with $Mj\le l$, since $u(z) \ge \frac{l}{k} \ge \frac{Mj}{k}$, we can find $w \in \partial F_{k,Mj}$ such that $d_{\lz,F_{k,Mj}}(z)=\dl(w,z)$. Since $w \in \partial F_{k,Mj}$, we deduce that $u(w)=\frac{Mj} k$ and \begin{equation}\label{eq2122} u_{k,Mj}(z)=\frac{Mj}{k}+d_{\lz,F_{k,Mj}}(z)=u(w)+\dl(w,z). \end{equation} Note that $w \in \overline U$, hence there exists a sequence $\{w_s\}_{s \in \nn}\subset U$ such that $w_s\to w$ as $s\to\fz$. Thus by the assumption (iii), \begin{equation*} u(z)-u(w) =\lim _{s\to\fz} [u(z)-u(w_s)] \le \lim _{s\to\fz}\dl(w_s,z). \end{equation*} By the triangle inequality and $\dl(w_s,w)\le R_\lz d_{CC}(w_s,w)$ given in Lemma \ref{lem311}, we then obtain \begin{equation}\label{eq2123} u(z)-u(w) \le \lim _{s\to\fz}[\dl(w_s,w)+ d_\lz(w,z)]=d_\lz(w,z). \end{equation} Combining \eqref{eq2122} and \eqref{eq2123}, we have \begin{equation}\label{eq2124} u(z)\le u(w)+ \dl(w,z) = u_{k,Mj}(z). \end{equation} } On the other hand, for any $-k\le j\le k$ with $Mj>l$, we have $$u_{k,Mj}(z)\ge \frac{ Mj}{k}\ge \frac{l+M}k\ge u(z).$$ From this and \eqref{eq2124}, it follows that $$u_k(z)= \min_{-k\le j\le k}u_{k,Mj}(z) \ge u(z) $$ as desired. \end{proof} \section{Proof of Theorem \ref{radiv}} In this section, we always suppose that the Hamiltonian $H(x,p)$ enjoys assumptions (H0)-(H3). To prove Theorem \ref{radiv}, we need to show that {\color{black} $(\Omega,\dl)$ is a pseudo-length space for all $\lz \ge \lz_H$ in the sense of Definition \ref{leng}. In other words, define \begin{equation*} \rl (x, y) := \inf \{\ell_{d_\lz} (\gz) \ | \ \gz \in \mathcal C(a, b; x, y; \Omega )\}, \end{equation*} where we recall the pseudo-length $\ell_{d_\lz}(\gz)$ induced by $\dl$ defined in Definition \ref{leng} and $\lz_H$ in \eqref{lh}. We have the following.} \begin{prop}\label{len} For any $\lz \ge \lz_H$, we have $d _\lz=\rl$. \end{prop} To prove Proposition \ref{len}, we need the following approximate midpoint property of $d_\lz$. \begin{prop} \label{p301} For any $\lz\ge 0$, we have \begin{equation}\label{eq2211} \inf_{z\in\Omega}\max\{\dl (x,z),\dl (z,y)\} \le \frac{1}{2}\dl (x,y) \quad\mbox{for all $x,y \in \Omega $.} \end{equation} \end{prop} \begin{proof} We argue by contradiction. Suppose that \eqref{eq2211} were not true. Then there exist $x_0,y_0\in\Omega$ such that \begin{equation}\label{lambda} \inf_{z\in\Omega}\max\{\dl (x_0,z),\dl (z,y_0)\} \ge \frac{1}{2}\dl (x_0,y_0)+\ez_0=:r_0 \end{equation} for some $\ez_0>0$. Given any $\dz\in(0,\ez_0)$, define $f(z):=f_1(z)+f_2(z)$ with $$f_1(z) := \min\{\dl(x_0, z)-(r_0 - \dz), 0\},\ \mbox{ } f_2(z):= \max\{(r_0 - \dz) - \dl(z, y_0), 0\}\quad\forall z\in\Omega.$$ We claim that $f$ satisfies Theorem \ref{rad}(iii), that is, for any $z\in\Omega$, there is an open neighborhood $N(z)$ such that \begin{equation}\label{thm3iii}f(y)-f(w)\le d_\lz(w,y)\quad\forall w,y\in N(z).\end{equation} Assume the claim \eqref{thm3iii} holds for the moment.
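For orientation, in the model Euclidean case where $X=\{\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_n}\}$, $H(x,p)=|p|$ and $\Omega\subset\rn$ is convex (so that $\dl(x,y)=\lz|y-x|$), the function $f$ is simply the superposition of a downward cone of slope $\lz$ and depth $r_0-\dz$ centered at $x_0$ and an upward cone of the same slope and height centered at $y_0$, namely $$ f(z)= -\big((r_0-\dz)-\lz|z-x_0|\big)^+ + \big((r_0-\dz)-\lz|z-y_0|\big)^+\quad\forall z\in\Omega. $$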
Since we have already shown the equivalence between (ii) and (iii) in Theorem \ref{rad}, we know that $f$ satisfies Theorem \ref{rad}(ii), that is, $$f(y)-f(w)\le d_\lz(w,y)\quad\forall w,y\in \Omega.$$ In particular, \begin{equation}\label{glob1} f(y_0)-f(x_0)\le \dl(x_0, y_0). \end{equation} On the other hand, we have $f_1(x_0)=-(r_0-\dz)$ and $f_2(y_0)=r_0-\dz$. Since \eqref{lambda} implies $$\dl (x_0,y_0)=\max\{\dl (x_0,x_0),\dl (x_0,y_0)\} \ge \frac{1}{2}\dl (x_0,y_0)+\ez_0 =r_0 $$ and $f_2(x_0)=0$ and $f_1(y_0)=0$. Therefore, $$ f(y_0) - f(x_0) = f_2(y_0)-f_1(x_0)= 2r_0 - 2\dz= \dl (x_0,y_0)+2\ez_0 - 2\dz,$$ By $\dz<\ez_0$, one has $$ f(y_0) - f(x_0) >\dl (x_0,y_0),$$ which contradicts to \eqref{glob1}. Finally we prove the above claim \eqref{thm3iii}. Firstly, thanks to Lemma \ref{l21} and \ref{lem3.2}, $H(x,Xf_1)\le \lz$ and $H(x,Xf_2)\le \lz$ almost everywhere in $ \Omega$, and hence, by the definition of $d_\lz$, $$f_1(y)-f_1(w)\le d_\lz(w,y)\ \mbox{and}\ f_2(y)-f_2(w)\le d_\lz(w,y) \quad\forall w,y\in \Omega.$$ Next, {\color{black} set $$\mbox{$\Lambda_1:=\{z\in \Omega \ | \ d_\lz(x_0,z)<r_0\}$,\quad $\Lambda_2:=\{z\in \Omega \ | \ d_\lz(z,y_0)<r_0 \}$.}$$ and $$\Lambda_3:=\{z\in \Omega \ | \ d_\lz(x_0,z)> r_0-\dz\ \mbox{ and }\ d_\lz(z,y_0)> r_0-\dz\}$$ For any $z \in \Lambda_1$, that is, $d_\lz(x_0,z)<r_0$, \eqref{lambda} implies $d_\lz(z,y_0) \ge r_0$. Consequently, $$f_2(z)=\max\{(r_0 - \dz) - \dl(z, y_0), 0\}= 0$$ and hence, $f(z)=f_1(z).$ Consequently, \begin{equation}\label{lz1} f(w)-f(y)=f_1(w)-f_1(y) \le d_\lz(y,w)\quad\forall w,y\in \Lambda_1. \end{equation} Similarly, for any $z \in \Lambda_2$, that is, $d_\lz(z,y_0)<r_0$, \eqref{lambda} implies $d_\lz(x_0,z) \ge r_0$. Consequently, $$f_1(z)=\min\{\dl(x_0, z)-(r_0 - \dz), 0\}= 0$$ and hence, $ f(z)= f_2(z).$ Consequently, \begin{equation}\label{lz2} f(w)-f(y)=f_2(w)-f_2(y) \le d_\lz(y,w)\quad\forall w,y\in \Lambda_2. \end{equation} For any $z\in\Lambda_3$, that is, $d_\lz(x_0,z)> r_0-\dz$ and $ d_\lz(z,y_0)> r_0-\dz$, we have $f_1(z)=0=f_2(z)$ and hence $f(z) =0.$ Consequently \begin{equation}\label{lz3} f(w)-f(y)=0\le d_\lz(y,w)\quad\forall w,y\in \Lambda_3. \end{equation} Noticing that $\{\Lambda_i\}_{i=1,2,3}$ forms an open cover of $\Omega$, for any $z \in \Omega$, we choose \begin{equation}\label{nz} N(z)= \left \{ \begin{array}{ll} \Lambda_1 & \quad \text{if } z \in \Lambda_1, \\ \Lambda_2 & \quad \text{if } z \in \Lambda_2\setminus\Lambda_1, \\ \Lambda_3 & \quad \text{if } z\in \Omega\setminus (\Lambda_1\cup\Lambda_2). \end{array} \right. \end{equation} From \eqref{lz1}, \eqref{lz2} and \eqref{lz3}, we obtain \eqref{thm3iii} as desired with the choice of $N(z)$ in \eqref{nz}. The proof is complete. } \end{proof} {\color{black} \begin{lem} \label{dpro} Given any $x,y\in \Omega$, the map $\lz\in[\lz_H ,\fz)\mapsto d_\lz(x,y)\in[0,\fz)$ is nondecreasing and right continuous. \end{lem} } \begin{proof} The fact that $d_\lz(x,y)$ is non-decreasing with respect to $\lz$ is obvious for any $x,y \in \Omega$ from the definition of $\dl$. Given any $x,y \in \Omega$, we show the right-continuity the map $\lz\in [\lz_H ,\fz)\mapsto d_\lz(x,y)\in[0,\fz)$. We argue by contradiction. Assume there exists $\lz_0 \ge \lz_H$ and $x,y \in \Omega$ such that \begin{equation}\label{rad0} \liminf_{\lz \to \lz_0+}d_\lz(x,y)=\lim_{\lz \to \lz_0+}d_\lz(x,y) =c > d_{\lz_0}(x,y) . \end{equation} Let $w_\lz(\cdot):= d_\lz(x,\cdot)$. By Lemma \ref{lem3.2}, we know $\|H(\cdot,Xw_\lz)\|_{L^\fz(\Omega)}\le \lz$. 
Since $\{w_\lz\}_{\lz>\lz_0}$ is a non-decreasing sequence with respect to $\lz$, $\{w_\lz\}$ converges pointwise to a function $w$ as $\lz \to \lz_0+$ and for any set $V \Subset \Omega$ and $x,y \in \overline V$, we have $w_\lz \to w$ in $C^0(\overline V)$. Then applying Lemma \ref{l6.1}, we have $$ \|H(\cdot,Xw)\|_{L^\fz(\overline V)}\le \lz $$ for any $\lz>\lz_0$, which implies \begin{equation}\label{rada} \|H(\cdot,Xw)\|_{L^\fz(\overline V)}\le \lz_0. \end{equation} By the definition of $w$, we have \begin{equation}\label{radb} w(x)=\lim_{\lz \to \lz_0+}d_\lz(x,x)=0, \text{ and } w(y)= \lim_{\lz \to \lz_0+}d_\lz(x,y)=c \end{equation} Combining \eqref{rada} and \eqref{radb} and applying Theorem \ref{rad}, we have $$c - 0 =w(y) -w(x) \le d_{\lz_0}(x,y),$$ which contradicts to \eqref{rad0}. The proof is complete. \end{proof} We are in the position to show \begin{proof}[Proof of Proposition \ref{len}.] {\color{black}We consider the cases $\lz>\lz_H$ and $\lz=\lz_H$ separately.} \medskip \textbf{Case 1. $\lz>\lz_H$.} First, $\dl \le \rho_\lz $ follows from the triangle inequality for $d_\lz$. To see $\rho_\lz\le d_\lz$, it suffices to prove that for any $z\in \Omega$, the function $\rho_\lz(z,\cdot):\Omega\to\rr$ satisfies Theorem \ref{rad}(iii), that is, for any $ x\in\Omega $ we can find a neighborhood $N(x)$ of $ x$ such that \begin{equation}\label{rhoth3iii} \rho_\lz (z,y) -\rho_\lz (z,w) \le \dl (w,y)\quad\forall w,y \in N(x). \end{equation} Indeed, since we have already shown the equivalence of (i) and (iii) in Theorem \ref{rad}, \eqref {rhoth3iii} implies that $\rho_\lz(z,\cdot)$ satisfies Theorem \ref{rad}(i), that is, $\rho_\lz(z,\cdot)\in \dot W^{1,\fz}_X(\Omega)$ and $\|H(\cdot,X\rho_\lz(z,\cdot))\|_{L^\fz(\Omega)}\le\lz$. Taking $\rho_\lz(z,\cdot)$ as the test function in the definition of $\dl(z,x)$, one has $$\rho_\lz (z,x) \le \dl(z,x)\quad\forall x\in \Omega $$ as desired. To prove \eqref{rhoth3iii}, let $z\in\Omega$ be fixed. {\color{black} For any $x\in\Omega$ and any $t>0$, write $$ B_{\dl}^+(x, t) := \{ y \in \Omega \ | \ \dl(x,y) < t \mbox{ or } \dl(y,x) < t \} $$ and $$ B_{\dl}^-(x, t) := \{ y \in \Omega \ | \ \dl(x,y) < t \mbox{ and } \dl(y,x) < t \} . $$ } For any $x\in\Omega$, {\color{black}letting $r_x = \min\{\frac{R_\lz'}{10} \dcc(x, \pa \Omega),1\}$, by Corollary \ref{lem311-1}, we have \begin{equation}\label{Sub} B_{\dl}^{+}(x, 6r_x) \subset B_\dcc(x, \frac{6r_x}{R_\lz'} ) \Subset \Omega, \end{equation} where $R_\lz'>0$ thanks to (H3). } Write $N(x)= B_{\dl}^-(x, {\color{black}r_x})$. Given any $w, y \in N(x)$, it then suffices to prove $\rho_\lz(z,y) -\rho_\lz (z,w)\le \dl (w, y ) $. To this end, for any $0 < \ez < \frac12 \dl(w, y)$, we will construct a curve \begin{equation} \label{ccurve} \gz_\ez:[0,1]\to B_{d_\lz}^+(x,6r_x) \ \mbox{with $ \gz_\ez(0)=w$, $ \gz_\ez(1)=y$ and $\ell_{d_\lz} (\gz_\ez)\le \dl(w, y)+\ez$}. \end{equation} Assume the existence of $\gz_\ez$ for the moment. By the triangle inequality for $\rho _\lz$, we have $$\rho_\lz(z,y) -\rho_\lz (z,w)\le \ell_{\dl}(\gz_\ez ) \le \dl (w, y )+\ez $$ By sending $\ez\to0$, this yields $\rho_\lz(z,y) -\rho_\lz (z,w)\le \dl (w, y ) $ as desired. \medskip {\bf Construction of a curve $\gz_\ez$ satisfying \eqref{ccurve}}. 
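The idea is a dyadic bisection: Proposition \ref{p301} provides approximate $d_\lz$-midpoints, and iterating it over the dyadic rationals $s\in[0,1]$ will produce points $y_s$, with $y_0=w$ and $y_1=y$, such that $d_\lz(y_{s_1},y_{s_2})\le (s_2-s_1)(\dl(w,y)+\ez)$ whenever $s_1\le s_2$. Granting this for the moment, the resulting limiting map $\gz_\ez(s):=y_s$ satisfies, for every partition $0=s_0<s_1<\cdots<s_N=1$, $$\sum_{i=0}^{N-1} d_\lz(\gz_\ez(s_{i}),\gz_\ez(s_{i+1}))\le \sum_{i=0}^{N-1}(s_{i+1}-s_i)(\dl(w,y)+\ez)=\dl(w,y)+\ez,$$ that is, $\ell_{d_\lz}(\gz_\ez)\le \dl(w,y)+\ez$. We now carry this out in detail.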
{\color{black} For each $t \in \nn$, set $$D_t:= \{k2^{-t} \ | \ k=0,1,\cdots, 2^t\}.$$ We will use induction and Proposition \ref{p301} to construct a set \begin{equation}\label{adj4} Y_t =\{y_s\}_{s \in D_t} \subset B_{d_\lz}^+(x,5r_x ) \end{equation} with $y_0=w$ and $y_1=y$ so that $Y_t \subset Y_{t+1}$, and that \begin{equation}\label{adj5} d_\lz(y_{j2^{-t}},y_{(j+1)2^{-t}})\le 2^{-t}(\dl (w, y )+\ez) \mbox{ for any $0\le j\le 2^t-1$}. \end{equation} } The construction of $\{Y_t\}_{t\in\nn}$ is postponed to the end of this proof. Assuming that $\{Y_t\}_{t\in\nn}$ have been constructed, we are able to construct $\gz_\ez$ as below. Firstly, set $D:=\cup_{t\in\nn} D_t$ and $Y:=\cup_{t\in\nn} Y_t$. Given any $s_1, s_2 \in D$ with $s_1 <s_2$, there exists $t \in \nn$ such that $s_1=l2^{-t} \in D_{t}$, $s_2=k2^{-t} \in D_{t}$ for some $l < k$ and hence $y_{s_1},y_{s_2} \in Y_t$. Using \eqref{adj5} and the triangle inequality for $d_\lz$, we have \begin{equation}\label{adj6} d_\lz(y_{s_1},y_{s_2})\le \sum_{j=l}^{k-1} \dl(y_{j2^{-t}}, y_{(j+1)2^{-t}}) \le (k-l) 2^{-t}( \dl(w,y) + \ez)= |s_1-s_2|(\dl(w,y) +\ez). \end{equation} {\color{black} Next, define a map $\gz_\ez^0 : D \to Y$ by $\gz_\ez^0(s) = y_s$ for all $s \in D$. The above inequality \eqref{adj6} implies that \begin{equation*} \lim_{D \ni s' \to s}\gz_\ez^0(s') = \gz_\ez^0(s) \mbox{ for all }s \in D. \end{equation*} } Since $D$ is dense in $[0, 1]$ and $\overline {B_{\dl}^+(x, {\color{black}5r_x})}$ is complete, it is standard to extend $\gz_\ez^0$ uniquely to a continuous map $\gz_\ez: [0, 1] \to B_{\dl}^+(x, {\color{black}6r_x})$, that is, $\gz_\ez(s)=\gz_\ez^0(s)$ for any $s\in D$ and $$ \gz_\ez(s)=\lim_{D \ni s'\to s}\gz_\ez(s')=\lim_{D \ni s'\to s}y_{s'} \mbox{ for any $s\in [0,1]\setminus D$.} $$ Recalling \eqref{adj6}, one therefore has $$d_\lz(\gz_\ez (s_1),\gz_\ez(s_2))\le |s_1-s_2|(\dl(w,y)+\ez)\quad\forall s_1,s_2\in [0,1],\ s_1\le s_2,$$ which gives $\ell_{d_\lz} (\gz_\ez)\le \dl(w,y)+\ez $. Thus the curve $\gz_\ez$ satisfies \eqref{ccurve} as desired. \medskip {\bf Construction of $\{Y_t\}_{t\in\nn}$ via induction and Proposition \ref{p301}}. {\color{black}Since $y,w \in B_{\dl}^-(x,r_x)$, we have $\dl(w,x)<r_x$ and $ \dl(x,y) < r_x$, which implies $$\dz:= \dl(w,y) \le \dl(w,x)+\dl(x,y) <2r_x.$$ We construct $Y_1=\{y_0,y_{1/2},y_1\}$ which satisfies \eqref{adj5} with $t=1$ and \begin{equation}\label{y1sub}Y_1 \subset B_{d_\lz}^+(x,3r_x)\subset B_{d_\lz}^+(x,5r_x). \end{equation} We set $y_0=w$ and $y_1=y$. Noting that Proposition \ref{p301} gives $$\inf_{z\in\Omega}\max\{\dl (y_0,z),\dl (z,y_1)\} \le \frac{1}{2}\dl (y_0,y_1),$$ we choose $y_{1/2}\in\Omega$ so that \begin{equation}\label{induction1} \max\{d_\lz(y_0,y_{1/2}),d_\lz(y_{1/2},y_1) \}\le \frac12\dz+\frac14\ez. \end{equation} Obviously, \eqref{induction1} gives \eqref{adj5}.
To see \eqref{y1sub}, obviously, $$y_0,y_1 \in B_{d_\lz}^-(x,r_x) \subset B_{d_\lz}^+(x,3r_x).$$ Moreover, noting that $0<\ez <\frac12\dz<r_x$ implies $$\frac12\dz+\frac14\ez< \dz<2r_x$$ and that $ y\in B_{\dl}^-(x,r_x)$ implies $\dl(y,x)\le r_x$, we have $$\dl(y_{1/2},x) \le \dl(y_{1/2},y_1)+\dl(y_1,x) \le \frac12\dz+\frac14\ez + \dl(y,x) \le 3r_x,$$ which gives $y_{1/2} \in B_{d_\lz}^+(x,3r_x).$ In general, by induction given any $t\ge2$, assume that $Y_{t-1}=\{y_s\}_{s\in D_{t-1}}$ is constructed so that \begin{equation}\label{yt-1sub}Y_{t-1}\subset B_{d_\lz}^+(x,(3+\sum_{l=1}^{t-2}2^{-l})r_x+\ez\sum_{l=1}^{t-1}2^{-l}) \end{equation} and that \begin{equation}\label{adj} \dl(y_{j2^{-(t-1)}}, y_{(j+1)2^{-(t-1)}}) \le 2^{-(t-1)}\bl \dz + \ez\sum_{l=1}^{t-1}2^{-l} \br \mbox{ for any $0 \le j \le 2^{t-1}-1$}. \end{equation} Here and in what follows, we make the convention that $\sum_{l=1}^{t-2}2^{-l} =0$ if $t=2$. Since \begin{equation}\label{le5rx}(3+\sum_{l=1}^\fz2^{-l})r_x+\ez\sum_{l=1}^\fz 2^{-l}\le 4r_x+\ez<4r_x+\dz\le 5r_x, \end{equation} the inclusion \eqref{yt-1sub} implies $Y_{t-1}\subset B_{d_\lz}^+(x,5r_x)$ and hence \eqref{adj4}. Below, we construct $Y_t=\{y_s\}_{s\in D_t}$ satisfying \eqref{adj5} and \begin{equation}\label{adj7} Y_{t}\subset B_{d_\lz}^+(x,(3+\sum_{l=1}^{t-1}2^{-l})r_x+\ez\sum_{l=1}^{t}2^{-l}), \end{equation} Note that \eqref{le5rx} and \eqref{adj7} imply $Y_{t}\subset B_{d_\lz}^+(x,5r_x)$ and hence \eqref{adj4}. We define $Y_t=\{y_s\}_{s\in D_t}$ first. Since $D_{t-1}\subsetneqq D_{t}$, for $s \in D_{t-1}$, $y_s \in Y_{t-1}$ is defined. It is left to define $y_s$ for $s\in D_t \setminus D_{t-1}$. Given any $s\in D_t\setminus D_{t-1}$, we know that $s=j2^{-t}\in D_t \setminus D_{t-1}$ for some odd $j$ with $1\le j\le 2^t-1$. Write $s'=(j-1)2^{-t} $ and $s''=(j+1)2^{-t} $. Then $s',s''\in D_{t-1}$ and hence $y_{s'}\in Y_{t-1}$ and $y_{s''}\in Y_{t-1}$ are defined. Since Proposition \ref{p301} gives \begin{equation} \inf_{z\in\Omega} \max\{d_\lz(y_{s'},z),d_\lz(z,y_{s''}) \}\le \frac12d_\lz(y_{s'},y_{s''}) \end{equation} we choose $y_s\in\Omega$ such that \begin{equation}\label{adj3} \max\{d_\lz(y_{s'},y_{s}),d_\lz(y_{s},y_{s''}) \}\le \frac12d_\lz(y_{s'},y_{s''})+2^{-2t}\ez. \end{equation} Note that \eqref{adj3} and \eqref{adj} gives \eqref{adj5} directly. Indeed, for any $0 \le j \le 2^t-1$, if $j$ is odd, applying \eqref{adj3} with $s=j2^{-t}$, $ s'=(j-1)2^{-t}$ and $s''=(j+1)2^{-t}$, we deduce that $$\dl(y_{j2^{-t}}, y_{(j+1)2^{-t}})=d_\lz(y_{s},y_{s''})\le \frac12d_\lz(y_{s'},y_{s''})+2^{-2t}\ez.$$ Since $s',s''\in D_{t-1}$ and $s''=s'+2^{-(t-1)}$, applying \eqref{adj} to $y_{s'},y_{s''}$ we have \begin{align}\label{adj2} \dl(y_{j2^{-t}}, y_{(j+1)2^{-t}}) & \le \frac12 2^{-(t-1)}\bl \dz + \ez\sum_{l=1}^{t-1}2^{-l} \br + 2^{-t}2^{-t}\ez = 2^{-t}\bl \dz + \ez\sum_{l=1}^{t}2^{-l} \br . \end{align} If $j$ is even, then $j\le 2^t-2$. Applying \eqref{adj3} with $s=(j+1)2^{-t}$, $ s'=j2^{-t}$ and $s''=(j+2)2^{-t}$, we deduce that $$\dl(y_{j2^{-t}}, y_{(j+1)2^{-t}})=d_\lz(y_{s'},y_{s})\le \frac12d_\lz(y_{s'},y_{s''})+2^{-2t}\ez.$$ Similarly, we also have \eqref{adj2}. 
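In either case, since $\sum_{l=1}^{t}2^{-l}\le 1$ and $\dz=\dl(w,y)$, estimate \eqref{adj2} gives $$\dl(y_{j2^{-t}}, y_{(j+1)2^{-t}})\le 2^{-t}\Big(\dz+\ez\sum_{l=1}^{t}2^{-l}\Big)\le 2^{-t}(\dl(w,y)+\ez)\quad\mbox{for any $0\le j\le 2^t-1$},$$ which is exactly \eqref{adj5}.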
To see \eqref{adj7}, since \eqref{yt-1sub} gives \begin{equation}\label{adj8} Y_{t-1}\subset B_{d_\lz}^+(x,(3+\sum_{l=1}^{t-2}2^{-l})r_x+\ez\sum_{l=1}^{t-1}2^{-l}) \subset B_{d_\lz}^+(x,(3+\sum_{l=1}^{t-1}2^{-l})r_x+\ez\sum_{l=1}^{t }2^{-l}), \end{equation} it suffices to check $$Y_{t} \setminus Y_{t-1}=\{y_{j2^{-t}} \ | \ {1\le j\le 2^t-1,\\ \mbox{$j$ is odd}} \} \subset B_{d_\lz}^+(x,(3+\sum_{l=1}^{t-1}2^{-l})r_x+\ez\sum_{l=1}^{t}2^{-l}).$$ For any odd number $j$ with $1 \le j \le 2^t-1$, since $y_{(j-1)2^{-t}} \in Y_{t-1}$, combining \eqref{adj2} and \eqref{adj8} and noting $\ez <\dz <2r_x$, we obtain \begin{align*} \dl(y_{j2^{-t}},x) & \le \dl(y_{j2^{-t}},y_{(j-1)2^{-t}}) + \dl(y_{(j-1)2^{-t}},x) \\ & \le 2^{-t}\bl 2r_x + \ez\sum_{l=1}^{t}2^{-l} \br + \bl 3+\sum_{l=1}^{t-2}2^{-l} \br r_x+\ez\sum_{l=1}^{t-1}2^{-l} \\ & \le \bl 3+\sum_{l=1}^{t-1}2^{-l} \br r_x + \ez\sum_{l=1}^{t}2^{-l} \end{align*} which implies \eqref{adj7}. We finish the proof of Case 1. } \bigskip \textbf{ Case 2. $\lz=\lz_H$.} Fix $x,y \in \Omega$. For any $\ez>0$ sufficiently small, by the right continuity of the map $\lz \mapsto \dl(x,y)$ at $\lz=\lz_H$ from Lemma \ref{dpro}, there exists $\mu >\lz_H$ such that \begin{equation}\label{d01} d_\mu(x,y) < d_{\lz_H}(x,y) +\frac{\ez}2. \end{equation} By Case (i), there exists $\gz:[0,1] \to \Omega$ joining $x$ and $y$ such that \begin{equation}\label{d02} \ell_{d_\mu}(\gz) < d_\mu(x,y) +\frac{\ez}2. \end{equation} By the definition of the pseudo-length and recalling from Lemma \ref{dpro} that the map $\lz \mapsto \dl(z,w)$ is non-decreasing for all $z,w \in \Omega$, we have \begin{equation}\label{d03} \ell_{d_{\lz_H}}(\gz) \le \ell_{d_\mu}(\gz). \end{equation} Combining \eqref{d01}, \eqref{d02} and \eqref{d03}, we conclude $$ \ell_{d_{\lz_H}}(\gz) < d_{\lz_H}(x,y) + \ez. $$ The proof is complete. \end{proof} We are ready to prove Theorem \ref{radiv}. \begin{proof}[Proof of Theorem \ref{radiv}] Obviously, (iii) in Theorem \ref{rad} $\Rightarrow$ (iv) in Theorem \ref{radiv}. To see the converse, let $\lz\ge 0$ be as in (iv). Given any $x$ and $y,z \in N(x)$, where $N(x)$ is given in (iv) we need to show $$ u(y)-u(z)\le d_\lz(z,y). $$ By Proposition \ref{len}, we know $(\Omega,d_\lz)$ is a pseudo-length space. Hence for any $\ez>0$, there exists a curve $\gz_\ez : [0,1] \to \Omega$ joining $z$ and $y$ such that \begin{equation}\label{ez1} \ell_{d_\lz}(\gz_\ez) \le d_\lz(z,y) +\ez. \end{equation} Since $\gz_\ez \subset \Omega$ is compact, we can find a finite set $\{t_i\}_{i=0}^n \subset [0,1]$ satisfying $$ t_0=0, \ t_n=1, \ \text{and } \gz_\ez(t_{i+1}) \in N(\gz_\ez(t_i)), \ i=0,\cdots , n-1 $$ where $N(\gz_\ez(t_i))$ is the neighbourhood of $\gz_\ez(t_i)$ in (iv). Hence by (iv) we have \begin{equation*} u(\gz_\ez(w_{i+1})) - u(\gz_\ez(w_i)) \le d_\lz(\gz_\ez(w_{i}),\gz_\ez(w_{i+1})), \ i=0, \cdots i-1. \end{equation*} Summing the above inequalities from $0$ to $n-1$, we have \begin{align*} u(y) - u(z) & = u(\gz_\ez(w_n)) - u(\gz_\ez(w_0)) \le \sum_{i=0}^{n-1}d_\lz(\gz_\ez(w_{i}),\gz(w_{i+1})) \le \sum_{i=0}^{n-1} \ell_{d_\lz}(\gz_\ez| _{[w_i,w_{i+1}]}) \\ & = \ell_{d_\lz}(\gz_\ez ) \le d_\lz(y,z) +\ez \end{align*} where in the last inequality we applied \eqref{ez1}. Letting $\ez\to0$ in the above inequality, we obtain (iii) in Theorem \ref{rad}. Finally, \eqref{ident2} is a direct consequence of (iv)$\Leftrightarrow$(i) and thanks to Lemma \ref{dpro}, the minimum in \eqref{ident2} is achieved. 
\end{proof} {\color{black} \begin{rem} \label{exp1}\rm The assumption $R_\lz'>0$ is needed in the proof of Proposition \ref{len}. Indeed, recall the construction of $\gz_\ez$ in the proof of Proposition \ref{len} below \eqref{adj6}. To guarantee $\gz_\ez$ is a continuous map, especially $\gz_\ez([0,1])$ is compact under the topology induced by $\dcc$, we need $\{\dl(x,\cdot)\}_{x \in \Omega}$ induces the same topology as the one by $\dcc$. By Corollary \ref{lem311-1}, $R_\lz'>0$ can guarantee this. Moreover, to show $\gz_\ez \Subset \Omega$ in \eqref{Sub} in the proof of Proposition \ref{len}, for each $x \in \Omega$, we need the existence of $r_x>0$, such that \begin{equation}\label{Sub2} B^+_{d_\lz}(x,r_x) \Subset \Omega. \end{equation} Again, by Corollary \ref{lem311-1}, $R_\lz'>0$ can guarantee this. If $R_\lz'>0$ does not hold for some $\lz>0$, in Remark \ref{d1} (ii), the example shows that \eqref{Sub2} may fail for some $x \in \Omega$. \end{rem} } \section{McShane extensions and minimizers} In this section, we always suppose that the Hamiltonian $H(x,p)$ enjoys assumptions (H0)-(H3) and further that $\lz_H=0$. Let $U\Subset \Omega$ be any domain. Note that the restriction of $d_\lz$ in $U$ {\color{black}may not have the pseudo-length property} in $U$, and moreover, Theorem \ref{rad} with $\Omega $ replaced by $U$ may not hold for the restriction of $d_\lz$ in $U$. Thus instead of $d_\lz$, below we use intrinsic pseudo metrics $\{d^U_\lz\}_{\lz> 0}$ in $U $, which are defined via \eqref{d311} with $ \Omega$ replaced by $ U$, that is, \begin{equation}\label{du} d^{U} _{\lambda} (x,y):= \sup\{u(y)-u(x) : u \in \dot W^{1,\infty}_{X }(U),\ \|H(\cdot, Xu)\|_{L^\infty(U)} \le \lz \}\quad\forall \mbox{$x,y \in U$ and $\lz \ge 0$}. \end{equation} Obviously we have proved the following. \begin{cor} \label{radu} Theorem \ref{rad}, Theorem \ref{radiv} and Proposition \ref{len} hold with $\Omega$ replaced by $U$ and $d_\lz$ replaced by $d_\lz^U$. In particular, $d_\lz^U$ {\color{black}has the pseudo-length property} in $U$ for all $\lz \ge 0$. \end{cor} Observe that, apriori, $d^U_\lz$ is only defined in $U$ but not in $\overline U$. Naturally, we extend $d^U_\lz:U\times U\to[0,\fz)$ as a function $\wz d^U_\lz: \overline U\times \overline U\to[0,\fz]$ by $$ \wz d_\lz^U(x,y)=\lim_{r\to0}\inf\{d_\lz^U(z,w) \ | \ (z,w)\in U\times U, |(z,w)-(x,y)|\le r\}.$$ Obviously, $\wz d^U_\lz(x,y)=d^U_\lz(x,y)$ for all {\color{black}$(x,y)\in U\times U$}, and $\wz d_\lz^U$ is {\color{black}lower semicontinuous} in $\overline U\times \overline U$, that is, for any $a\in\rr$, the set $$ \{(x,y)\in \overline U\times \overline U \ | \ \wz d^U_\lz(x,y)>a\}$$ is open in $\overline U$. One may also note that it may happen that $\dlu(x,y)=+\fz$ for some $(x,y)\notin U\times U$. Below, {\color{black} for the sake of simplicity, we write $\wz d_\lz^U$ as $d_\lz ^U$. We define $d_{CC}^U$ by letting $H(x,p)=|p|$ in $U$ and $\lz=1$ in \eqref{du}.} The following property will be used later. \begin{lem} \label{property} Let $U \Subset \Omega$ be a subdomain and $\lz\ge 0$. \begin{itemize} \item [(i)] For any $x,y\in \overline U$, we have $\dlu(x,y) \ge \dl(x,y)\ge R'_\lz d_{CC}(x,y)$. \item [(ii)] For any $x\in U$ and {\color{black}$y \in U$ with $\dcc(x,y) < \dcc(x,\pau)$}, we have $ \dlu(x, y)\le R_\lz d_{CC}(x,y). $ \item [(iii)] For any $x\in U$, {\color{black}let $x^\ast\in\partial U$ be the point such that $ d_{CC}(x,x^\ast)=d_{CC}(x,\partial U)$. 
Then $$d_\lz^U (x,x^\ast)\le R_\lz d_{CC}(x,x^\ast)<\fz.$$} \vspace{-0.8cm} \item [(iv)] For any $x\in \overline U$ and $ y \in U$ we have $$\mbox{$\dlu(x, z) \le \dlu(x,y) + \dlu(y,z) $ and $ d^U_\lz(z,x) \le d^U_\lz(z, y)+ d^U_\lz(y,x) \quad\forall z\in\pa U$} .$$ \item[(v)] Given any $z\in \partial U$, if $\dlu(x,z)= \fz$ for some $x\in \overline U$, then $\dlu(y,z)=+\fz$ for any $y \in \overline U$. \item[(vi)] Given any $x,y\in \overline U\times \overline U$ the map $\lz\in[0,\fz)\mapsto d_\lz^U(x,y)\in[0,\fz]$ is nondecreasing and for ${\color{black} 0<\lz<\mu<\fz}$, $$d^U_\lz(x,y)<\fz \text{ if and only if } d^U_\mu(x,y)<\fz.$$ As a consequence, $$\overline U^\ast:=\{y\in\overline U \ | \ d_{\lz}^U(x,y)<\fz\quad\mbox{for some $x\in U$ and $\lz >0$} \}$$ is well-defined independent of the choice of $\lz >0$. \item[(vii)] Given any $x,y\in \overline U^\ast\times \overline U^\ast$, the map $\lz\in {\color{black} [0 ,\fz)}\mapsto d_\lz^U(x,y)\in[0,\fz]$ is right continuous. \end{itemize} \end{lem} \begin{proof} To see (i), for any $x,y\in U$, since the restriction $u|_U$ is a test function in the definition of $\dlu(x,y)$ whenever $u$ is a test function in the definition of $\dl(x,y)$, we have $d^U_\lz(x,y)\ge d_\lz(x,y)$. In general, given any $(x,y)\in (\overline U\times\overline U)\setminus (U\times U)$, for any $r>0$ sufficiently small, we have $d^U_\lz(z,w)\ge d_\lz(z,w)$ whenever $z,w\in U$ and $|(z,w)-(x,y)|\le r$. By the continuity of $d_\lz$ in $\Omega\times\Omega$, we have $$\lim_{r\to0}\inf\{d^U_\lz(z,w) \ | \ \mbox{$z,w\in U$ and $|(z,w)-(x,y)|\le r$}\}\ge d_\lz(x,y),$$ that is, $ d_\lz^U(x,y)\ge d_\lz(x,y)$. Recall that $d_\lz(x,y)\ge R'd_{CC}(x,y)$ comes from Lemma 2.1. \medskip To see (ii), given any $y\in U$ with $\dcc(x,y)<\dcc(x,\pau)$, there is a geodesic $\gz$ with respect to $\dcc$ connecting $x$ and $y$ so that $\gz\subset B_{d_{CC}}(x,d_{CC}(x,\partial U))$. For any function $u\in \dot W^{1,\infty}_{X }(U)$ with $\|H(\cdot,Xu)\|_{L^\fz(U)}\le \lz$, we know that $\|Xu\|_{L^\fz(U)}\le R_\lz$. {\color{black}Let $U' \Subset U$ and $x,y \in U'$. Thanks to Proposition \ref{mollif2}, we can find a sequence $\{u_k\}_{k\in\nn} \subset C^\fz(U')$ such that $u_k\to u$ everywhere as $k\to\fz$ and $\|Xu_k\|_{L^\fz(U')}\le R_\lz + A_k(u)$ with $\lim_{k \to \fz} A_k(u) \to 0$.} Since $$u_k(y)-u_k(x)= \int_\gz Xu_k\cdot \dot\gz \le \int_\gz |Xu_k||\dot\gz| \le \int_\gz {\color{black}(R_\lz+A_k(u))}|\dot\gz| = {\color{black}(R_\lz+A_k(u))} \dcc(x,y) \mbox{ for all $k \in\nn$},$$ we have $$u(y)-u(x) =\lim_{k \to \fz}\{u_k(y)-u_k(x)\}\le \lim_{k \to \fz} [R_\lz+A_k(u)] \dcc(x,y) = R_\lz \dcc(x,y).$$ Taking supremum in the above inequality over all such $u$, we have $d^U_\lz(x,y)\le R_\lz \dcc(x,y)$. \medskip To see (iii), given any $x\in U$, there exists $x^\ast\in\partial U$ such that $ d_{CC}(x,x^\ast)=d_{CC}(x,\partial U)$. By (ii) and the definition of $d^U_\lz(x,x^\ast)$, we know that $d_\lz^U (x,x^\ast)\le R_\lz d_{CC}(x,x^\ast)<\fz$. \medskip To get (iv), for any $ x\in\overline U$, $y\in U$ and $z\in \partial U$, choose $x_k,z_k\in U$ such that $\dlu(x_k,y)\to \dlu(x,y)$ and $\dlu(y,z_k)\to \dlu(y,z)$ as $k\to\fz$. Since $$\dlu(x_k, z_k) \le \dlu(x_k,y) + \dlu(y,z_k),$$ letting $k\to\fz$ and by the lower-semicontinuous of $d^U_\lz$ we get $$\dlu(x, z ) \le \dlu(x,y) + \dlu(y,z )$$ In a similar way, we also have $$ d^U_\lz(z,x) \le d^U_\lz(z, y)+ d^U_\lz(y,x) $$ \medskip Note that (v) is a direct consequence of (iv). \medskip We show (vi). 
The fact that $d_\lz^U(x,y)$ is non-decreasing with respect to $\lz$ is obvious for any $x,y \in \overline U$. Assume $0< \lz<\mu<\fz$. Then $d^U_\mu(x,y)<+\fz$ implies $d^U_\lz(x,y)<+\fz$. Conversely, if $d_{\mu}^U(x,y)=+\fz$, we show $d_\lz^U(x,y)=+\fz$. This can happen only if at least one of $x$ and $y$ lies in $\pau$. Then for any $\{x_k\}_{k\in \nn} \subset U$ and $\{y_k\}_{k\in \nn}\subset U$ converging to $x$ and $y$, it holds that $$ \liminf_{k \to \fz}d_{\mu}^U(x_k,y_k) = +\fz $$ where we let $x_k \equiv x$ (resp. $y_k \equiv y$) if $x \in U$ (resp. $y\in U$). By (i) and (ii), we have for any $\lz\le \mu$ $$ \liminf_{k \to \fz} d_{\lz}^U(x_k,y_k) \ge R'_\lz \liminf_{k \to \fz} d_{CC} (x_k,y_k) \ge \frac{R'_\lz}{R_{\mu}} \liminf_{k \to \fz} d_{\mu}^U(x_k,y_k) = +\fz $$ where we recall that $R'_\lz>0$ for all $\lz>0$. \medskip Finally, we show (vii). Since we only consider $x,y \in \overline U^\ast$, by an approximation argument, it is enough to show the right-continuity for $x,y \in U$. The proof is similar to the one of Lemma \ref{dpro}. We omit the details. The proof of Lemma \ref{property} is complete. \end{proof} {\color{black} \begin{lem} \label{ll} Suppose that $ U \Subset \Omega$ and that $V$ is a subdomain of $U$. For any $\lz\ge0$, one has $$d^U_\lz(x,y)\le d^V_\lz(x,y)\quad\forall x,y\in \overline V.$$ Conversely, given any $\lz>0$ and $x \in V$, there exists a neighborhood $N _\lz(x) \Subset V$ of $x$ such that $$\mbox{ $d^U_\lz ( x,y) = d^V_\lz(x,y)$ and $d^U_\lz (y,x) = d^V_\lz(y,x)$ for any $y \in N_\lz(x)$.}$$ \end{lem} } \begin{proof} For any $u \in \dot W^{1,\infty}_{X }(U)$ with $\|H(\cdot, Xu)\|_{L^\infty(U)} \le \lz$, we know that the restriction $u|_{V} \in \dot W^{1,\infty}_{X }(V)$ with $\|H(\cdot, Xu|_V)\|_{L^\infty(V)} \le \lz$. Hence by the definition of $d^U_\lz $ and $d^V_\lz$, $ d^U_\lz (x,y) \le d^V_\lz (x,y) $ for all $ x,y \in V $ and then for all $ x,y \in \overline V.$ Conversely, we only show $d^U_\lz (x,\cdot) = d^V_\lz(x,\cdot)$ in some neighborhood $N(x)$. In a similar way, we can prove $d^U_\lz (\cdot , x) = d^V_\lz(\cdot ,x)$ in some neighborhood $N(x)$. By Lemma \ref{property} (i), one has $\dl^V (x,y) \ge R'_\lz d_{CC} (x,y) $ for any $x,y\in V$. Thus for any $r>0$, $d^V_\lz(x,y)\le r$ implies $ d_{CC} (x,y)\le r/R'_\lz$. Given any $x\in V$ and $0<r<R'_\lz\, \dcc(x,\pa V)$, we therefore have $$N _r(x):=\{y\in V \ | \ d^V_\lz(x,y)\le r\}\subset B_{d_{CC}}(x,r/R'_\lz)\Subset V.$$ Define $u_{x,r}: \Omega \to \rr$ by $$u_{x,r}(z):=\min\{d^V_\lz(x,z),r\}\quad\forall z\in V\qquad\mbox{and}\qquad u_{x,r}(z):=r\quad\forall z\in \Omega\setminus V.$$ If $ r< \dcc(x,\pa V)R'_\lz/4$, we claim that \begin{equation}\label{claim5.3}\mbox{$u_{x,r}\in\lip_{d_{CC}}(\Omega)$ with $H(z,Xu_{x,r}(z))\le \lz$ for almost all $z\in \Omega$.} \end{equation} Assume claim \eqref{claim5.3} holds for the moment. By \eqref{claim5.3}, we are able to take $u_{x,r}|_{U}$ as a test function in $d^U_\lz$ so that \begin{equation*} u_{x,r} (z)-u_{x,r} (w)\le d^U_\lz (w,z)\quad\forall (w,z)\in U\times U. \end{equation*} On the other hand, for any $y\in N_r(x)$, since $d^V _\lz(x,y)= u_{x,r}(y)-u_{x,r}(x)$, we get $d^V _\lz(x,y)\le d^U_\lz(x,y)$ as desired. Finally, we prove the claim \eqref{claim5.3}. {\bf Proof of the claim \eqref{claim5.3}.} First, by Lemma 3.3 and Lemma 3.4, the restriction $u_{x,r}|_V$ of $u_{x,r}$ in $V$ belongs to $\dot W^{1,\fz}_X(V)$ with $H(z,X u_{x,r}|_V(z))\le \lz$ almost everywhere in $V$, and hence \begin{equation}\label{eq11} u_{x,r}|_V(z)-u_{x,r}|_V(w)\le d^V_\lz(w,z)\quad\forall (w,z)\in V\times V.
\end{equation} Next, we show $u \in \lip_\dcc(\Omega)$. Given any $w,z\in\Omega$, we consider 3 cases separately. \medskip {\bf Case 1.} $w \in \overline {B_{d_{CC}}(x,r/R'_\lz)}$ and $z\in \overline { B_{d_{CC}}(x, r /R'_\lz)}$. We have $z\in \overline {B_{d_{CC}}(w,2 r/R'_\lz )}$. Since $$ d_{CC}(w,\pa V) \ge d_{CC}(x,\pa V)-d_{CC}(x,w) \ge \frac{4r}{R_\lz'} - \frac{r}{R_\lz'} = \frac{3r}{R_\lz'} > \frac{2r}{R_\lz'} \ge d_{CC}(w,z), $$ by Lemma \ref{property} (ii), $d^V_\lz(w,z)\le R_\lz d_{CC}(w,z)$, which combined with \eqref{eq11}, gives $$ u_{x,r}(z)-u_{x,r} (w)\le R_\lz d_{CC}(w,z).$$ \medskip {\bf Case 2.} $w,z\notin B_{d_{CC}}(x,r / R'_\lz )$. Then $w,z \in \Omega\setminus B_{d_{CC}}(x,r / R'_\lz )$, since $u_{x,r} $ is constant $r$ in $\Omega\setminus B_{d_{CC}}(x,r /R'_\lz)$, we know that $$ u_{x,r} (z)-u_{x,r} (w)=r-r=0\le R_\lz d_{CC}(w,z).$$ \medskip {\bf Case 3.} $w \in \overline {B_{d_{CC}}(x,r/R'_\lz)}$ and $z\notin \overline {B_{d_{CC}}(x,r/R'_\lz)}$. Then for any $\ez>0$, there exists a curve $\gz_\ez:[0,1] \to \Omega$ joining $z$ and $w$ such that $$\ell_{\dcc}(\gz_\ez) \le \dcc(w,z) + \ez$$ and there exists $t \in [0,1]$ such that $\gz_\ez(t) \in \pa B_\dcc(x,r/R'_\lz)$. Thus using Case 1 and Case 2, we deduce \begin{align*} u_{x,r}(z)-u_{x,r} (w)& = u_{x,r}(z)-u_{x,r} (\gz_\ez(t)) +u_{x,r}(\gz_\ez(t))-u_{x,r} (w) \\ & \le R_\lz d_{CC}(z,\gz_\ez(t)) + R_\lz d_{CC}(\gz_\ez(t),w)\\ & \le R_\lz \ell_{\dcc}(\gz_\ez|_{[0,t]}) + R_\lz \ell_{\dcc}(\gz_\ez|_{[t,1]}) \\ & \le R_\lz[\dcc(w,z)+\ez]. \end{align*} Letting $\ez \to 0$ in the above inequality, we conclude $$ u_{x,r}(z)-u_{x,r} (w)\le R_\lz\dcc(w,z). $$ Finally, by Lemma \ref{radcclem}, $u_{x,r}\in \dot W^{1,\fz}_X(\Omega)$. Note that $X u_{x,r}=0$ in $\Omega \setminus \overline {B_{d_{CC}}(x,r/R'_\lz)}$ implies $H(z,X u_{x,r}(z) )=0$ almost everywhere. Therefore recalling $H(z,X u_{x,r}(z) )\le \lz$ almost everywhere in $V$ and $\Omega= V \cup (\Omega\setminus \overline {B_{d_{CC}}(x,r/R'_\lz)})$, we conclude $H(z,X u_{x,r}(z) )\le \lz$ almost everywhere in $\Omega$. \end{proof} {\color{black} \begin{lem} \label{lem5.9} Suppose that $ U \Subset \Omega$ and that $V=U\setminus\{x_i\}_{1\le i\le m}$ for some $m\in\nn$ and $\{x_i\}_{1\le i\le m}\subset U$. Then for any $\lz \ge 0$, one has $$\mbox{$d^V_\lz(x,y)=d^U_\lz(x,y)$ for all $x,y\in\overline U$.}$$ \end{lem} } \begin{proof} Obviously $d_\lz^U\le d^V_\lz$ in $\overline U$. Conversely, we show $d_\lz^V\le d^U_\lz$ in $\overline U$. First, by an approximation argument, it suffices to consider $x,y \in U$. By the right continuity of $\lz\in[0,\fz)\to d_\lz^U(x,y)$ for any $x,y\in \overline U^\ast$, up to considering $ d_{\mu+\ez}^U$ for sufficiently small $\ez>0$, we may assume that $\mu>0$. For any $\lz>0$, by the pseudo-length property of $d^U_\lz$ as in Proposition \ref{len}, it suffices to prove that for any curve $\gz:[a,b] \to U$, one has \begin{equation}\label{xx-4}d^V_\lz(\gz(a),\gz(b)) \le \ell_{d^U_\lz}(\gz) .\end{equation} We consider the following 3 cases. \medskip {\bf Case 1. } $\gz((a,b))\subset V$ and $\gz(\{a,b\}) \subset V$, that is, $\gz( [a,b])\subset V$. Recall from Lemma \ref{ll} that, for each $x \in V$, there exists a neighborhood $N(x)$ such that $$ d^V_\lz(x,y) = d^U_\lz(x,y) \quad \forall y \in N(x). $$ Since $\gz \subset \cup_{t \in [a,b]}N(\gz(t))$, we can find $a=t_0=0<t_1<\cdots< t_m=b$ such that $\gz\subset \cup_{i=0}^m N(\gz(t_i))$ and $\gz([t_i,t_{i+1}])\subset N(\gz(t_i))$. 
By the triangle inequality, one has \begin{equation*} d_\lz^V(\gz(a),\gz(b))\le \sum_{i=0}^{m-1}d^U_\lz (\gz(t_{i}),\gz(t_{i+1}))\le \ell_{d^U_\lz}(\gz). \end{equation*} \medskip {\bf Case 2. } $\gz((a,b))\subset V$ and $\gz(\{a,b\})\not\subset V$. Applying Case 1 to $\gz|_{[a+\ez,b-\ez]}$ for sufficiently small $\ez>0$, we get \begin{equation*} d_\lz^V(\gz(a),\gz(b))\le \lim_{\ez\to 0}d_\lz^V(\gz(a+\ez),\gz(b-\ez)) \le \liminf_{\ez\to 0} \ell_{d^U_\lz}(\gz|_{[a+\ez,b-\ez]}) \le \ell_{d^U_\lz}(\gz). \end{equation*} \medskip {\bf Case 3. } $\gz((a,b))\not\subset V$. Without loss of generality, for each $x_i$, there is at most one $t \in(a,b)$ such that $\gz(t )=x_i$. Indeed, let $t ^\pm$ as the maximum/mimimum $s\in(a,b)$ such that $\gz(s)=x_1$. Then $ a\le t ^-\le t^+ \le b$. If $t ^-<t ^+ $, we consider $ \gz_1:[a,b-(t^+ -t^- )]\to U$ with $\gz_1(t)=\gz(t)$ for $t\in[a,t ^-]$, and $\gz_1(t)=\gz(t-(t^+ -t^- ))$ for $t\in [t^+ ,b]$. Then $\ell_{d^U_\lz}(\gz_1) \le \ell_{d^U_\lz}(\gz) $. Repeating this procedure for $x_2,\cdots,x_m$ in order, we may get a new curve $\eta$ such that for each $x_i$, there is at most one $t\in(a,b)$ such that $\gz(t )=x_i$ and $\ell_{d^U_\lz}(\eta)\le \ell_{d^U_\lz}(\gz).$ Denote by $\{a_j\}_{j=0}^s$ with $a=a_0<a_1<\cdots <a_s=b$ such that $\gz( \{a_1,\cdots,a_{s-1}\}) \subset U\setminus V=\{x_i\}_{1\le i\le m}$ and $\gz([a,b]\setminus\{a_1,\cdots,a_{s-1}\})\subset V$. Applying Case 2 to $\gz|_{[a_j,a_{j+1}]}$ for all $0\le j\le s-1$, we obtain \begin{equation*} d_\lz^V(\gz(a),\gz(b))\le \sum_{j=0}^{s-1} d_\lz^V(\gz(a_j),\gz(a_{j+1}))\le \sum_{j=0}^{s-1}\ell_{d^U_\lz}(\gz|_{[a_j,a_{j+1}]}) \le \ell_{d^U_\lz}(\gz) \end{equation*} as desired. The proof is complete. \end{proof} {\color{black}For any $g\in C^0(\partial U)$, write} \begin{align*} \mu(g, \partial U) := \inf \big \{ \lz \ge0 \ | \ g(y) - g(x) \le d^{U}_\lz(x, y) \quad \forall \ x, y \in \pa{U} \big \}. \end{align*} The following lemma says the infimum can be reached. \begin{lem} We have \begin{align*} \mu(g, \partial U) & =\min \big \{ \lz\ge0 \ | \ g(y) - g(x) \le d^{U}_\lz(x, y) \quad \forall \ x, y \in \pa{U}\cap \overline U^\ast \big \}. \end{align*} \end{lem} \begin{proof} First, if $x \in \pa{U}\setminus \overline U^\ast $ or $y \in \pa{U}\setminus \overline U^\ast $, we have $d^{U}_\lz(x, y) = \fz$. Hence $g(y) - g(x) \le d^{U}_\lz(x, y)$ holds trivially, which implies that \begin{align*} \mu(g, \partial U) =\inf \big \{ \lz\ge0 \ | \ g(y) - g(x) \le d^{U}_\lz(x, y) \quad \forall \ x, y \in \pa{U}\cap \overline U^\ast \big \}. \end{align*} Thanks to Lemma \ref{property}(vii), we finish the proof. \end{proof} If $\mu=\mu(g,\partial U)<\fz$, we define $$\cS^+_{g;U}(x) := \inf_{y\in\partial U} \{g(y) + d^{U}_\mu (y, x) \} \quad{\rm and}\quad \cS^-_{g; U}(x) := \sup_{y\in\partial U} \{g(y) - d^{U}_\mu (x, y) \}\quad \forall x \in \overline U.$$ Note that $\cS^\pm_{g; U}$ serve as ``McShane" extensions of $g$ in $U$. \begin{lem} \label{cont} If $\mu=\mu(g,\partial U)<\fz$, then we have \begin{enumerate} \item[(i)]$\csgu^\pm \in \dot W^{1,\fz}_X(U)\cap C^0(\overline U)$ with $\csgu^\pm=g$ on $\partial U$; \item[(ii)] for any $ x,y\in\overline U$, \begin{equation}\label{yy-3}\csgu^\pm (y)-\csgu^\pm (x)\le d^U_\mu(x,y) ;\end{equation} \item[(iii)] $ \|H(\cdot,{\color{black}X\csgu^\pm} )\|_{L^\fz(U)}\le \mu.$ \end{enumerate} \end{lem} \begin{proof} By Corollary \ref{radu}, (ii) implies (iii) and $ \csgu^\pm \in \dot W^{1,\fz}_X(U)$. 
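Before turning to the details, we note for orientation that in the model Euclidean case where $X=\{\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_n}\}$, $H(x,p)=|p|$ and $U$ is convex (so that $d^U_\mu(x,y)=\mu|y-x|$), the number $\mu=\mu(g,\partial U)$ is just the Lipschitz constant of $g$ on $\partial U$, and $\cS^\pm_{g;U}$ reduce to the classical McShane--Whitney extensions $$\cS^+_{g;U}(x)=\inf_{y\in\partial U}\big\{g(y)+\mu|y-x|\big\},\qquad \cS^-_{g;U}(x)=\sup_{y\in\partial U}\big\{g(y)-\mu|y-x|\big\}\quad\forall x\in\overline U.$$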
Below we show $\csgu^\pm=g$ on $\partial U$, (ii) and $\csgu^\pm \in C^0(\overline U)$ in order. \medskip {\bf Proof for $\csgu^\pm=g$ on $\partial U$}. For any $x\in\partial U$, by definition we have $$\csgu^-(x) \ge g(x)\ge \csgu^+(x).$$ Conversely, for $y \in \pau$, one has $$g(y) - d^U_\mu(x, y) \le g(x)\le g(y) + d^U_\mu(y, x)$$ and hence $\csgu^-(x) \le g(x)\le \csgu^+(x) $ as desired. \medskip {\bf Proof of (ii). } We only prove (ii) for $\csgu^-$; the proof for $\csgu^+$ is similar. For $x,y\in\partial U$, by the definition of $\mu$ one has $$ \csgu^- (y)-\csgu^- (x)=g(y)-g(x) \le d^U_\mu(x,y).$$ For $x\in U$ and $y\in \pa U$, by definition \begin{align*} \csgu^-(x) & \ge g(y) - d^U_\mu(x, y) = \csgu^- (y) - d^U_\mu(x, y) \end{align*} and hence $$ \csgu^- (y)-\csgu^- (x) \le d^U_\mu(x,y).$$ For $x\in\overline U$ and $ y \in U$, by Lemma \ref{property}(iv), we have $$d^U_\mu(x,z)\le d^U_\mu(x,y)+d^U_\mu(y,z) \quad\forall z\in\partial U.$$ One then has \begin{align*} \csgu^-(y) & = \sup_{z \in \pau }\{g(z) - d^U_\mu(y, z) \} \\ & \le \sup_{z \in \pau }\{g(z) - d^U_\mu (x, z)+d^U_\mu(x,y) \} \\ & \le \csgu^-(x) +d^U_\mu(x,y), \end{align*} and hence $$ \csgu^- (y)-\csgu^- (x) \le d^U_\mu(x,y).$$ {\bf Proof of $\csgu^\pm\in C^0(\overline U)$}. We only consider $\csgu^-\in C^0(\overline U)$; the proof for $\csgu^+\in C^0(\overline U)$ is similar. It suffices to show that for any $x \in \pau$ and a sequence $\{x_j\} \subset U$ converging to $x$, \begin{equation*} \lim_{j \to \fz}\csgu^-(x_j)=\csgu^-(x)=g(x). \end{equation*} Choosing $x^\ast_j\in\partial U$ such that $d_{CC}(x_j,x^\ast_j)=\dist_{d_{CC}}(x_j,\partial U)$, one has $$\mbox{$d_{CC}(x_j,x^\ast_j)\le d_{CC}(x_j,x)\to0$, and hence $d_{CC}(x_j^\ast, x)\to0 $.}$$ Thanks to {\color{black}Lemma \ref{property}(iii) with $x=x_j$ and $x^\ast=x^\ast_j$ therein, we deduce} $$d^U _\mu(x_j,x^\ast_j)\le R_\mu d_{CC}(x_i,x^\ast_j)\to0.$$ Since $$\csgu^-(x_j)\ge g(x_j^\ast)-d^U _\mu(x_j,x^\ast_j),$$ by the continuity of $g$, we have $$\liminf_{j \to \fz}\csgu^-(x_j)\ge g(x).$$ Assume that $$\liminf_{j \to \fz}\csgu^-(w_j)\ge g(x)+2\ez\ \mbox{ for some $\{w_j\}_{j\in\nn}\subset U$ with $w_j\to x$ and some $\ez>0$.}$$ By the definition we can find $z_j\in\partial U$ such that $$ \csgu^-(w_j)-\ez\le g(z_j)-d^U_\mu(w_j,z_j) .$$ Thus {\color{black} for $j \in \nn$ sufficiently large, we have} $$g(z_j)-d^U_\mu(w_j,z_j) \ge g(x)+\ez. $$ Up to some subsequence, we assume that $z_j\to z \in\partial U$. Note that $$d^U_\mu(x,z) \le\liminf_{j\to\fz} d^U_\mu(w_j,z_j)$$ By the continuity of $g$, we conclude $$g(z )-d^U_\mu(x,z) \ge g(x)+\ez,$$ which is a contradiction with the definition of $\mu$ and $\mu<\fz$. \end{proof} Write \begin{equation}\label{minimizer} \mathbf I(g,U) =\inf\{\|H(\cdot,Xu)\|_{L^\fz(U)} \ | \ u\in \dot W_X^{1,\fz}(U)\cap C^0(\overline U), u|_{\partial U}=g \}. \end{equation} {\color{black} A function $ u:\overline U\to\rr$ is called as a minimizer for $\mathbf I(g,U)$ if $$\mbox{$ u\in \dot W_X^{1,\fz}(U)\cap C^0(\overline U)$, $ u|_{\partial U}=g$ and $\|H(\cdot,Xu)\|_{L^\fz(U)}=\mathbf I(g,U)$.}$$} We have the following existence and properties for minimizers. \begin{lem}\label{lem231} For any $g\in C(\partial U)$ with $\mu(g,\partial U)<\fz$, we have the following: \begin{enumerate} \item[(i)] We have $ \mu(g,\partial U)=\mathbf I(g,U). $ Both of $\csgu^\pm$ are minimizers for $\mathbf I(g,U)$. 
\item[(ii)] If $u$ is a minimizer for $\mathbf I(g,U)$, then $$ \csgu^- \le u \le \csgu^+ \quad\mbox{in $\overline U$ and } \| H(x,Xu) \|_{L^\fz(U)} = \mathbf I(g,U)=\mu(g,\partial U) . $$ \item[(iii)] If $u,v$ are minimizers for $\mathbf I(g,U)$, then $tu+(1-t)v$ with $t\in(0,1)$, $\max\{u, v\}$ and $\min\{u, v\}$ are minimizers for $\mathbf I(g,U)$. \end{enumerate} \end{lem} \begin{proof} (i) Since $\mu(g,\partial U) <\fz$, by {\color{black}Lemma \ref{cont}} we know that $\csgu^\pm $ satisfy the conditions required in \eqref{minimizer} and hence \begin{equation} \label{xx0} \mathbf I(g,U)\le \|H(\cdot,{\color{black}X\csgu^\pm} )\|_{L^\fz(U)}\le \mu.\end{equation} Below we show that $\mu(g,\partial U)\le \mathbf I(g,U) .$ Note that combining this and \eqref{xx0} we know that $$\mathbf I(g,U)= \|H(\cdot,{\color{black}X\csgu^\pm} )\|_{L^\fz(U)}= \mu$$ and moreover, $\csgu^\pm$ are minimizers for ${ \bf I}(g,U)$. For any $\lz>\mathbf I(g,U)$, there is a function $u\in \dot W^{1,\fz}_X(U)\cap C^0(\overline U)$ with $u=g$ on $\partial U$ such that $ \|H(x,Xu)\|_{L^\fz(U)}\le \lz$. By Corollary \ref{radu}, $${\color{black} u(y)-u(x)\le d_\lz^U(x,y), \ \forall x,y\in U.}$$ By the continuity of $u$ in $\overline U$ and the definition of $d^U_\lz$ in $\overline U\times\overline U$ we have $$g(y)-g(x)\le d^U_\lz(x,y) {\color{black} \mbox{ for all } x,y \in \pau}.$$ Thus $\mu(g,\partial U)\le \mathbf I(g,U) .$ \medskip (ii) If $u$ is a minimizer for ${ \bf I}(g,U)$, one has $$\|H(\cdot, Xu )\|_{L^\fz(U)} = \bi(g,U)=\mu.$$ By {\color{black}Corollary \ref{radu}}, $u(y) - u(x) \le d_\mu^U(x, y)$ for any $x, y \in U$ and hence, by the continuity of $u$ and the definition of $d_\mu^U$, for all $x, y \in \overline U$. Since $u= g$ on $\partial U$, for any $x \in U$, one has $g(y) - d_\mu^U(x, y) \le u(x)$ for any $y \in \partial U$, which yields $\csgu^-(x) \le u(x)$. By a similar argument, one also has $u \le \csgu^+$ in $U$ as desired. \medskip (iii) Suppose that $u_1,u_2$ are minimizers for $\bi(g,U)$. Set $$u_\eta:=(1-\eta) u_1+ \eta u_2 \mbox{ for any $\eta \in [0, 1]$.}$$ Then $u_\eta\in \dot W^{1,\fz}_X(U)\cap C^0(\overline U)$, and $u_\eta= g$ on $\partial U$, and by (H1), $$H(\cdot, Xu_\eta(\cdot)) \le \max_{i=1,2}\{ H(\cdot, Xu_i(\cdot)) \}\le \mu \ \mbox{ a.e. on $U$}.$$ This, {\color{black}combined with the definition of $\bi (g,U)$}, implies that $u_\eta$ is also a minimizer for ${ \bf I}(g,U)$. Finally, note that $$\mbox{$\max \{u_1, u_2\} \in \dot W^{1,\fz}_X(U)\cap C^0(\overline U)$, $\max \{u_1, u_2\} = g$ on $\partial U$}.$$ By Lemma \ref{l21}, one has $$H(\cdot, X\max \{u_1, u_2\}(\cdot)) \le \max_{i=1,2}\{ H(\cdot, Xu_i(\cdot)) \}\le \mu \ \mbox{ a.e. on $U$}.$$ We know that \begin{align*} \bi (g, U )\le \cf (\max\{u_1, u_2\}, U) & \le \max \{\cf (u_1, U), \cf (u_2, U)\} = \bi (g, U ), \end{align*} that is, $\max \{u_1, u_2\}$ is a minimizer for ${ \bf I}(g,U)$. Similarly, $\min\{u_1, u_2\}$ is a minimizer for ${ \bf I}(g,U)$. \end{proof} We have the following improved regularity for the McShane extension via $d_\lz$. {\color{black} \begin{lem} \label{lem5.7} Suppose that $U\Subset\Omega$ and that $V \subset U$ is a subdomain. If $g:\partial V\to\rr$ satisfies \begin{equation}\label{xx2} g(y)-g(x)\le d^U_\lz (x,y)\quad\forall x,y\in\partial V\end{equation} for some $\lz \ge 0$, then $ \mu(g,\partial V)\le \lz$ and \begin{equation}\label{yy-1} S^\pm_{g,V}(y)- S^\pm_{g,V}(x)\le d^U_\lz (x,y)\quad\forall x,y\in\overline V.
\end{equation} \end{lem} } \begin{proof} Since $d_\lz^U\le d^V_\lz$ in $\overline V\times\overline V$, we know that $$g(y)-g(x)\le d^V_\lz (x,y)\quad\forall x,y\in\partial V$$ and hence $$ \mu(g,\partial V)=\min\{\eta\ge 0 \ | \ g(y)-g(x)\le d^V_\eta (x,y)\quad\forall x,y\in\partial V \}\le \lz.$$ To prove \eqref{yy-1}, by the pseudo-length property of $d_\lz^U$ as in Corollary \ref{radu}, it suffices to prove that for any curve $\gz:[a,b] \to \Omega$ with $\gz(a),\gz(b)\in \overline V$, one has \begin{equation}\label{xx-2}S^\pm_{g,V}(\gz(b))-S^\pm_{g,V}(\gz(a))\le \ell_{d_\lz^U}(\gz) . \end{equation} We consider the following 4 cases. \medskip {\bf Case 1.} $\gz((a,b))\subset V$ and $\gz(\{a,b\}) \subset V$, that is, $\gz( [a,b])\subset V$. Noting $\mu=\mu(g,\partial V)\le \lz$, one then has $d_\mu^V \le d_\lz^V $ in $ V\times V$. From the definition of $S^\pm_{g,V}$, it follows that $$S^\pm_{g,V}(y)- S^\pm_{g,V}(x)\le d_\mu^V (x,y) \quad\forall x,y\in V.$$ Recall from Lemma \ref{ll} that, for each $x \in V$, there exists a neighborhood $N(x) \Subset V$ such that $$ d^V_\lz(x,y) = d_\lz^U(x,y) \quad \forall y \in N(x). $$ We therefore have \begin{equation}\label{ell1} S^\pm_{g,V}(y)- S^\pm_{g,V}(x)\le d_\lz^U (x,y)\quad\forall x \in V, y\in N(x). \end{equation} Since $\gz \subset \cup_{t \in [a,b]}N(\gz(t))$, we can find $a=t_0<t_1<\cdots< t_m=b$ such that $\gz\subset \cup_{i=0}^m N(\gz(t_i))$ and $\gz([t_i,t_{i+1}])\subset N(\gz(t_i))$. Applying \eqref{ell1} to $\gz(t_i)$ and $\gz(t_{i+1})$, we have $$S^\pm_{g,V}(\gz(t_{i+1}))-S^\pm_{g,V}(\gz(t_{i})) \le d_\lz^U(\gz(t_{i}),\gz(t_{i+1})).$$ Thus \begin{equation*}S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))=\sum_{i=0}^{m-1} [S^\pm_{g,V}(\gz(t_{i+1}))-S^\pm_{g,V}(\gz(t_{i}))]\le \sum_{i=0}^{m-1}d_\lz^U (\gz(t_{i}),\gz(t_{i+1}))\le \ell_{d_\lz^U}(\gz). \end{equation*} \medskip {\bf Case 2.} $\gz((a,b))\subset V$ and $\gz(\{a,b\})\not\subset V$. Applying Case 1 to $\gz|_{[a+\ez,b-\ez]}$ for sufficiently small $\ez>0$, we get \begin{equation*} S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))= \lim_{\ez\to 0}[S^\pm_{g,V}(\gz(b-\ez))- S^\pm_{g,V}(\gz(a+\ez))] \le \liminf_{\ez\to 0} \ell_{d_\lz^U}(\gz|_{[a+\ez,b-\ez]}) \le \ell_{d_\lz^U}(\gz). \end{equation*} \medskip {\bf Case 3.} $\gz((a,b))\not\subset V$ and $\gz(\{a,b\})\subset V$. Set $$ t_\ast=\min\{t\in[a,b] \ | \ \gz(t)\notin V\} \mbox{ and } t^\ast=\max\{t\in[a,b] \ | \ \gz(t)\notin V\}.$$ Then $\gz(t_\ast) \in \pa V$ and $\gz(t^\ast) \in \pa V$. Write $$ S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))=S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(t^\ast))+ S^\pm_{g,V}(\gz(t^\ast))- S^\pm_{g,V}(\gz(t_\ast))+S^\pm_{g,V}(\gz(t_\ast))- S^\pm_{g,V}(\gz(a)).$$ Note that $$S^\pm_{g,V}(\gz(t^\ast))- S^\pm_{g,V}(\gz(t_\ast))=g(\gz(t^\ast))-g(\gz(t_\ast))\le d_\lz^U (\gz(t_\ast),\gz(t^\ast))\le \ell_{d_\lz^U}(\gz|_{[t_\ast,t^\ast]}) .$$ By this, and applying Case 2 to $\gz|_{[a,t_\ast]}$ and $\gz|_{[t^\ast,b]}$, we obtain $$ S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))\le \ell_{d_\lz^U}(\gz|_{[t^\ast,b]}) + \ell_{d_\lz^U}(\gz|_{[t_\ast,t^\ast ]})+\ell_{d_\lz^U}(\gz|_{[a,t_\ast]}) =\ell_{d_\lz^U}(\gz) .$$ \medskip {\bf Case 4.} $\gz((a,b))\not\subset V$ and $\gz(\{a,b\})\not \subset V$. If $\gz(\{a,b\})\subset \pa V$, then $$S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))=g(\gz(b))-g(\gz(a))\le d_\lz^U (\gz(a),\gz(b))\le \ell_{d_\lz^U}(\gz ) .$$ If $\gz(a)\in V$ and $\gz(b)\in\pa V$, set $s^\ast=\min\{s\in[a,b] \ | \ \gz(s) \in \pa V\}$.
Obviously $a<s^\ast\le b$, and we can find a sequence of $\ez_i>0$ so that $\ez_i\to0$ as $i\to\fz$ and $\gz(s^\ast-\ez_i)\in V$. Write \begin{align*} S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))= S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(s^\ast ))+ S^\pm_{g,V}(\gz(s^\ast ))-S^\pm_{g,V}(\gz(a)). \end{align*} Since $\gz(s^\ast ),\gz(b)\in\pa V$, we have $$S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(s^\ast))=g(\gz(b))-g(\gz(s^\ast))\le d_\lz^U (\gz(b),\gz(s^\ast))\le \ell_{d_\lz^U}(\gz|_{[s^\ast,b]} ) .$$ Applying Case 3 to $\gz|_{[a,s^\ast-\ez_i]}$, one has $$ S^\pm_{g,V}(\gz(s^\ast ))-S^\pm_{g,V}(\gz(a)) = \lim_{i\to\fz}[S^\pm_{g,V}(\gz(s^\ast-\ez_i))-S^\pm_{g,V}(\gz(a))]\le\lim_{i\to\fz}\ell_{d_\lz^U}(\gz|_{[a,s^\ast-\ez_i]} )\le \ell_{d_\lz^U}(\gz|_{[a,s^\ast]} ).$$ We therefore have \begin{align*} S^\pm_{g,V}(\gz(b))- S^\pm_{g,V}(\gz(a))\le \ell_{d_\lz^U}(\gz|_{[s^\ast,b]} ) +\ell_{d_\lz^U}(\gz|_{[a,s^\ast]} )=\ell_{d_\lz^U}(\gz). \end{align*} If $\gz(a)\in \pa V$ and $\gz(b)\in V$, we could prove in a similar way. Thus \eqref{xx-2} holds and the proof is complete. \end{proof} The following will be used in Section 6. Let $U\Subset\Omega$ and $u\in \dot W^{1,\fz}_X(U)\cap C^0(\overline U) $ satisfying \begin{equation}\label{udmu}\mbox{$u(y)-u(x)\le d^U_\mu(x,y) \quad\forall x,y\in\overline U$} \end{equation} for some $0\le \mu<\fz$. Given any subdomain $V\subset U$, write $h =u|_{\partial V}$ as the restriction of $u$ in $\overline V$. Since $d^U_\mu\le d^V_\lz$ in $\overline V\times\overline V$ as given in Lemma \ref{ll}, one has $$u (y)-u (x) \le d^V_\mu(x,y) \quad\forall x,y\in\pa V$$ and hence $\mu(u|_{\pa V}, \partial V) \le \mu$. Denote by $\cS_{u|_{\pa V} ,V}^\mp$ the McShane extension of $u|_{\pa V} $ in $V$. Define \begin{equation}\label{upm} u^\pm:= \left \{ \begin{array}{lll} \cS_{u|_{\pa V},V}^\pm & \quad \text{in} & \quad V \\ u & \quad \text{in}&\quad \overline U \backslash V. \end{array} \right. \end{equation} {\color{black} \begin{lem} \label{lem5.10} Under the assumption \eqref{udmu} for some $0\le \mu<\fz$, the functions $u^\pm$ defined in \eqref{upm} are continuous in $\overline U$ and satisfy \begin{equation}\label{claim1} u^\pm(y)-u^\pm(x)\le d_\mu^U(x,y)\quad\forall x,y\in \overline U. \end{equation} In particular, $u^\pm\in \dot W^{1,\fz}_X(U) \cap C^0(\overline U)$ and $ \|H(\cdot,Xu^\pm)\|_{L^\fz(U)}\le \mu $. \end{lem} } \begin{proof} We only prove Lemma \ref{lem5.10} for $u^+$; the proof of Lemma \ref{lem5.10} for $u^-$ is similar. By Lemma \ref{cont}, $\cS^+_{h,V}\in C^0(\overline V)$ and $\cS^+_{h,V}=u$ on $\pa V$. These, together with $u\in C^0(\overline U)$ imply that $u^+\in C^0(\overline U)$. Moreover, by Corollary \ref{radu}, if $u^+$ satisfies \eqref{claim1}, then $u^+\in \dot W^{1,\fz}_X(U)$ and $ \|H(\cdot,Xu^\pm)\|_{L^\fz(U)}\le \mu $. Below we prove \eqref{claim1} for $u^+$ via the following 3 cases. By the right continuity of $\lz\in[0,\fz)\to d_\lz^U(x,y)$ for any $x,y\in \overline U$, up to considering $ d_{\mu+\ez}^U$ for sufficiently small $\ez>0$, we may assume that $\mu>0$. {\bf Case 1.} $x,y\in \overline U\setminus V $. By \eqref{udmu} we have $$u^+(y)-u^+(x)= {\color{black}u(y)-u(x)} \le d_\mu^U(x,y).$$ {\bf Case 2.} $x \in \overline V $ and $y \in V$ or $x \in V $ and $y \in \overline V$. 
Applying Lemma \ref{lem5.7} with $(U, d_\lz^U,V,d^V_\lz,g)$ replaced by $( U,d^U_\mu,V,d^V_\mu, u|_{\partial V})$, one has $$ u^+(y)-u^+(x)= \cS_{h, V}^+(y)- \cS_{h, V}^+(x)\le d^U_\mu(x,y)\quad\forall x,y\in \overline V.$$ \medskip {\bf Case 3.} $x\in V$ and $y\in\overline U\setminus V $ or $x\in \overline U\setminus V$ and $y\in V $. For any $\ez>0$, {\color{black}by Corollary \ref{radu}}, there exists a curve $\gz$ joining $x, y$ such that $$\ell_{d^U_\mu}(\gz)\le d_\mu^U(x,y)+\ez.$$ Let $z\in \gz\cap \partial V$. Applying Case 2 and Case 1, we have $$ u^+(y)-u^+(x)=u^+(y)-u^+(z)+u^+(z)-u^+(x)\le d^U_\mu(z,y)+d^U_\mu(x,z)\le \ell_{d^U_\mu}(\gz)\le d^U_\mu(x,y)+\ez .$$ By the arbitrariness of $\ez>0$, we have $ u^+(y)-u^+(x) \le d^U_\mu(x,y) $ as desired. \end{proof} \section{Proof of Theorem \ref{t231}} In this section, we always suppose that the Hamiltonian $H(x,p)$ enjoys assumptions (H0)-(H3) and further that $\lz_H=0$. \begin{defn}\label{d231}\rm Let $U\Subset\Omega$ be a domain and $g\in C^0(\pa U)$ with $\mu(g,\partial U)<\fz$. \begin{enumerate} \item[(i)] A minimizer $u $ for ${ \bf I}(g,U)$ is called a local superminimizer for ${ \bf I}(g,U)$ if $u \ge \cS_{u|_{\partial V}; V}^-$ in $V$ for any subdomain $V\subset U$. \item[(ii)] A minimizer $u $ for ${ \bf I}(g,U)$ is called a local subminimizer for ${ \bf I}(g,U)$ if $u \le \cS_{u|_{\partial V}; V}^+$ in $V$ for any subdomain $V\subset U$. \end{enumerate} \end{defn} The next lemma shows McShane extensions are local super/sub minimizers. \begin{lem}\label{lem234} Let $U\Subset\Omega$ and $g\in C^0(\pa U)$ with $\mu(g,\partial U)<\fz$. \begin{enumerate} \item[(i)] For any subdomain $V\subset U$, we have \begin{equation}\label{e601} \cS_{h^+, V}^- \le \cS_{h^+, V}^+ \le \csgu^+ \text{ in $V$, where $h^+=\csgu^+|_{\partial V}$.} \end{equation} In particular, $\csgu^+$ is a local superminimizer for ${ \bf I}(g,U)$. \item[(ii)]For any subdomain $V\subset U$, we have \begin{equation}\label{e601-} \csgu^-\le \cS_{h^-, V}^- \le \cS_{h^-, V}^+ \text{ in $V$, where $h^-=\csgu^-|_{\partial V}$.} \end{equation} In particular, $\csgu^-$ is a local subminimizer for ${ \bf I}(g,U)$. \end{enumerate} \end{lem} \begin{proof} We only prove (i); the proof for (ii) is similar. Write $\mu=\mu(g,\partial U).$ By Lemma \ref{lem231}, $\csgu^+$ is a minimizer for $\bi(g,U)$, $\mu=\bi(g,U)=\|H(\cdot,X\csgu^+)\|_{L^\fz(U)} $, and \begin{equation}\label{udmus}\mbox{$\csgu^+(y)-\csgu^+(x)\le d^U_\mu(x,y) \quad\forall x,y\in\overline U$} \end{equation} Fix any subdomain $V\subset U$. Denote by $\cS_{h,V}^\pm$ the McShane extension of $h^+=\csgu^+|_{\partial V}$ in $V$. Note that $\cS_{h, V}^-\le \cS_{h, V}^+ $ in $\overline V$, that is, the first inequality in \eqref{e601} holds. Below we show $\cS_{h, V}^+\le \csgu^+$ in $V$, that is, the second inequality in \eqref{e601}. Let $ u^+$ be as in the \eqref{upm} with $u=\cS_{g,U}^+$, that is, \begin{equation*} u^+:= \left \{ \begin{array}{lll} \cS_{h^+,V}^+ & \quad \text{in} & \quad V \\ \csgu^+ & \quad \text{in}&\quad \overline U \backslash V. \end{array} \right. \end{equation*} By \eqref{udmus}, we apply Lemma \ref{lem5.10} to conclude that $u^+\in \dot W^{1,\fz}_X(U) \cap C^0(\overline U)$ and $ \|H(\cdot,Xu^+)\|_{L^\fz(U)}\le \mu $. Note that $u^+=\csgu^+$ on $\pa U$ and hence, by definition of $\bi (g, U )$, one has $\bi (g, U )\le \|H(\cdot,Xu^+)\|_{L^\fz(U)}.$ Recalling $\mu= \bi (g, U )$, one obtains that $\bi (g, U )= \|H(\cdot,Xu^+)\|_{L^\fz(U)},$ that is, $u^+$ is a minimizer for $\bi (g, U )$. 
By Lemma \ref{lem231}(i) again, $u^+ \le \csgu^+$ in $U$. Since $\cS_{h, V}^+= u^+$ in $V$, we conclude that $\cS_{h, V}^+\le \csgu^+$ in $V$ as desired. The proof is complete. \end{proof} {\color{black} \begin{lem}\label{c0} Let $V \Subset \Omega$ be a domain and $P=\{x_j\}_{j \in \nn}\subset V$ be a dense subset of $V$. Assume $u \in C^{0}(\overline V)$ and $\{u_j\}_{j \in \nn} \subset \dot W_X^{1,\fz}(V) \cap C^{0}(\overline V)$ such that for any $j \in \nn$, \begin{equation}\label{pp01} |u_j(x_i)-u(x_i)| \le \frac1j \mbox{ for any $i=1, \cdots,j$, } \end{equation} and \begin{equation}\label{pp02} \|H(\cdot,Xu_j(\cdot))\|_{L^\fz(V)} \le \lz <\fz. \end{equation} Then $u_j \to u$ in $C^0(V)$, and moreover, \begin{equation}\label{pp03} u\in \dot W^{1,\fz}_X(V ) \mbox{ and } \|H(x,Xu)\|_{L^\fz(V)}\le \lz. \end{equation} \end{lem}} \begin{proof} We only need to prove $u_j \to u$ in $C^0(V)$. Note that \eqref{pp03} follows from this and Lemma 3.1. Since $u \in C^0(\overline V)$ and $\overline V$ is compact, $u$ is uniform continuous in $\overline V$, that is, for any $\ez>0$, there exists $h_\ez\in(0,\ez)$ such that for all \begin{equation}\label{u00} |u(x)-u(y)| \le \ez\quad \mbox{whenever $x,y \in \overline V$ with $|x-y|<h_\ez$.} \end{equation} Recalling the assumption (H3), by \eqref{pp02} one has \begin{equation}\label{mini5} \||Xu_j(\cdot)|\|_{L^\fz(V)} \le R_\lz \quad \forall j \in \nn. \end{equation} By Lemma \ref{radcclem}, $$u_j(y)-u_j(x)\le R_\lz d^V_{CC}(x,y ) \quad\forall x,y\in V.$$ Given any $K\Subset V$, recall from \cite{nsw85} that $$d_{CC}^V(x,y)\le C(K,V)|x-y|^{1/k} \quad\forall x,y\in \overline K.$$ It then follows $$|u_j(y)-u_j(x)|\le R_\lz C(K,V)|x-y|^{1/k} \quad\forall x,y\in \overline K.$$ Given any $\ez>0$, thanks to the density of $\{x_i\}_{i\in\nn}$ in $V$, one has $\overline K\subset \cup_{x_i\in \overline K}B(x_i,h_\ez)$. By the compactness of $\overline K$, we have $$\overline K\subset \cup\{B(x_i,h_\ez) \ | \ 1\le i\le i_K \ \mbox{ and }\ x_i\in\overline K\}$$ for some $i_K\in\nn$. For any $j\ge \max\{i_K,1/\ez\}$ and for any $x\in\overline K$, choose $1\le i\le i_K$ such that $x_i\in \overline K$ and $|x-x_i|\le h_\ez<\ez$. Thus $| u(x_i)-u(x)|\le \ez$. By \eqref{pp01} we have $| u_j(x_i)-u(x_i)|\le \frac1j\le \ez$. Thus $$| u_j(x)-u(x)|\le | u_j(x)-u_j(x_i)|+ | u_j(x_i)-u(x_i)|+| u(x_i)-u(x)| \le R_\lz C(K,V)\ez^{1/k}+ 2\ez.$$ This implies that $u_j\to u$ in $C^0(\overline K)$ as $j\to\fz$. The proof is complete. \end{proof} The following clarifies the relations between absolute minimizers and local super/subminimizers. \begin{lem} \label{lem235} Let $U\Subset\Omega$ and $g\in C^0(\pa U)$ with $\mu(g,\partial U)<\fz$. Then a function $u:\overline U\to\rr$ is an absolute minimizer for ${ \bf I}(g,U)$ if and only if it is both a local superminimizer and a local subminimizer for ${ \bf I}(g,U)$. \end{lem} \begin{proof} If $u$ is an absolute minimizer for $\bi (g, U )$, then for every subdomain $V \subset U$, $u$ is a minimizer for ${ \bf I}(u|_{\pa V},V)$. By Lemma \ref{cont}, $\cS_{u|_{\pa V}, V}^- \le u \le \cS_{u|_{\pa V}, V}^+$, that is, $u$ is both a local superminimizer and a local subminimizer for ${ \bf I}(g,U)$. Conversely, suppose that $u$ is both a local superminimizer and a local subminimizer for ${ \bf I}(g,U)$. We need to show that $u$ is absolute minimizer for ${\bf I}(g,U)$. It suffices to prove that for any domain $V\Subset U$, $u$ is a minimizer for $\bi(u, V )$, in particular, $\|H(\cdot,Xu (\cdot))\|_{L^\fz(V)} \le \bi (u, V)$. 
The proof consists of 3 steps. \medskip {\bf Step 1.} Given any subdomain $V \subset U$, choose a dense subset $\{x_j\}_{j \in \nn}$ of $V$. Set $V_j=V\setminus\{x_i\}_{1\le i\le j}$ and $V_0=V$. Note that $$\partial V_j=\partial V_{j-1}\cup\{x_j\}=\partial V_0\cup \{x_i\}_{1\le i\le j} \quad \forall j\in\nn. $$ For each $j\ge0$, set $$\mu_j=\mu(u|_{\partial V_j},\partial V_j)=\inf\{\lz\ge 0 \ | \ u(y) -u(x) \le d_\lz^{V_j}(x,y) \quad \forall x,y \in \pa V_j\}.$$ Write $\mu=\mu(g,\partial U)$. Since $u$ is in particular a minimizer for ${ \bf I}(g,U)$, by Lemma \ref{lem231}(ii) and Corollary \ref{radu} we have $u(y)-u(x)\le d_\mu^U(x,y)$ for all $x,y\in\overline U$. Since $\overline V \subset \overline U$, we have $d_\mu^U(x,y)\le d_\mu^V(x,y)$ for all $ x,y\in \overline V$, and hence $$ u(y) -u(x) \le d_\mu^V(x,y) \quad \forall x,y \in \overline V, $$ so that $\mu_0\le \mu$. By Lemma \ref{lem231} (i), $\bi (u, V_0)=\mu_0$. In a similar way and by induction, for all $j\ge0$, since $V_{j+1}\subset V_j$, we have \begin{equation}\label{uj4} \bi (u, V_{j+1})=\mu_{j+1}\le \mu_j=\bi (u, V_j)\le \mu_0=\bi (u, V_0)\le\mu=\bi (u, U). \end{equation} {\bf Step 2. } We construct a sequence $\{u_j\}_{j \in \nn}$ of functions so that, for each $j\in\nn$, \begin{equation}\label{pp1} u_j\in \dot W^{1,\fz}_X(V_j)\cap C^0(\overline V) \mbox{ and $u_j(x )=u(x ) $ for any $ x\in\pa V_j= \pa V\cup\{x_i\}_{1\le i\le j}$ } \end{equation} and \begin{equation}\label{pp2j} \mbox{ $\|H(\cdot,Xu_j(\cdot))\|_{L^\fz(V_{j-1})}=\mu_{j-1}$ for any $j \in \nn$}. \end{equation} For any $j \ge 1$, since $u$ is both a local superminimizer and a local subminimizer for ${ \bf I}(g,U)$, by Definition \ref{d231}, $$ \cS_{u|_{\pa V_{j-1}}, V_{j-1}}^- \le u \le \cS_{u|_{\pa V_{j-1}}, V_{j-1}}^+ \mbox{ in $V_{j-1}$} .$$ {\color{black}At $x_{j}$, we have \begin{equation}\label{uj} a_{j}:=\cS^-_{u|_{\pa V_{j-1}},V_{j-1}}(x_{j}) \le u(x_{j}) \le b_{j}:=\cS^+_{u|_{\pa V_{j-1}},V_{j-1}}(x_{j}). \end{equation} Define $u_{j}: \overline V_{j-1} =\overline V \to \rr$ by \begin{displaymath} u_{j}:= \left \{ \begin{array}{lll} \cS^+_{u|_{\pa V_{j-1}},V_{j-1}} & \quad \text{if} & \quad a_{j}=b_{j}, \\ \frac{u(x_{j})-a_{j}}{b_{j}-a_{j}}\cS^+_{u|_{\pa V_{j-1}},V_{j-1}}+ (1-\frac{u(x_{j})-a_{j}}{b_{j}-a_{j}})\cS^-_{u|_{\pa V_{j-1}},V_{j-1}} & \quad \text{if}&\quad a_{j}<b_{j}. \end{array} \right. \end{displaymath} To see \eqref{pp1}, observe that Lemma \ref{lem231} gives $u_j\in \dot W^{1,\fz}_X(V_j)\cap C^0(\overline V) $. Moreover, for any $x\in\pa V_j$, one has either $x\in\pa V_{j-1}$ or $x=x_j$. In the case $x\in \pa V_{j-1}$, by Lemma \ref{cont} one has $$ u_{j}(x)=\cS^+_{u|_{\pa V_{j-1}},V_{j-1}}(x) = \cS^-_{u|_{\pa V_{j-1}},V_{j-1}}(x)= u(x).$$ In the case $x=x_{j}$, if $a_{j}=b_{j}$, then \eqref{uj} implies $$u_{j}(x_{j}) = \cS^+_{u|_{\pa V_{j-1}},V_{j-1}} (x_{j}) =b_{j}=u(x_{j});$$ if $a_{j}<b_{j}$, then \begin{align*} u_{j}(x_{j}) & =\frac{u(x_{j})-a_{j}}{b_{j}-a_{j}}\cS^+_{u|_{\pa V_{j-1}},V_{j-1}} (x_{j})+ (1-\frac{u(x_{j})-a_{j}}{b_{j}-a_{j}})\cS^-_{u|_{\pa V_{j-1}},V_{j-1}} (x_{j}) \\ & = \frac{u(x_{j})-a_{j}}{b_{j}-a_{j}} b_{j} + (1-\frac{u(x_{j})-a_{j}}{b_{j}-a_{j}})a_{j} \\ &= u(x_{j}). \end{align*} To see \eqref{pp2j}, applying Lemma \ref{lem231}(iii) with $t=\frac{u(x_{j})-a_{j}}{b_{j}-a_{j}}$, we deduce that $u_{j}$ is a minimizer for $\bi(u|_{\pa V_{j-1}},V_{j-1})$, that is, \begin{equation}\label{uj1} \| H(\cdot,Xu_{j}(\cdot)) \|_{L^\fz(V_{j-1})} = \bi(u|_{\pa V_{j-1}},V_{j-1})=\mu_{j-1}. \end{equation} } {\bf Step 3.
} We show that, for all $j \in \nn$, \begin{equation}\label{pp3} \mbox{ $u_j(z)-u_j(y) \le d^V_{\mu}( y,z) \quad \forall y,z \in V.$} \end{equation} Note that, by Corollary \ref{radu} in $V$, \eqref{pp3} yields that $u_j \in \dot W^{1,\fz}_X(V)$ and $\|H(x,Xu_j)\|_{L^\fz(V)}\le \mu$. Applying Lemma \ref{c0} to $\{u_j\}_{j\in\nn}$ and $u$, we conclude that $u\in \dot W^{1,\fz}_X(V)$ and $\|H(x,Xu )\|_{L^\fz(V)}\le \mu$ as desired. To see \eqref{pp3}, using \eqref{pp2j} and Corollary \ref{radu} in $V_{j-1}$, we have \begin{equation*} u_j(z)-u_j(y) \le d^{V_{j-1}}_{\mu_{j-1}}(y,z) \quad \forall y,z \in V_{j-1}. \end{equation*} Thus \eqref{uj4} implies \begin{equation*} u_j(z)-u_j(y) \le d^{V_{j-1}}_{\mu }(y,z) \quad \forall y,z \in V_{j-1}. \end{equation*} Thanks to Lemma \ref{lem5.9} we have $d^{V_{j-1}}_{\mu }=d_\mu^V$ in $V\times V$ and hence \begin{equation*} u_j(z)-u_j(y) \le d^V_{\mu }(y,z) \quad \forall y,z \in V_{j-1}. \end{equation*} By the continuity of $u_j$ in $\overline V$ we have \eqref{pp3} and hence finish the proof. \end{proof} Finally, using Perron's approach, we obtain the existence of absolute minimizers. \begin{prop} \label{p42} Let $U\Subset\Omega$ and $g\in C^0(\pa U)$ with $\mu(g,\partial U)<\fz$. Define \begin{equation}\label{ug} U^+_g(x):=\sup\{u(x) | u :\overline U\to \rr\text{\ is a local subminimizer for ${ \bf I}(g,U)$} \}\quad\forall x\in\overline U \end{equation} and $$U^-_g(x):=\inf\{u(x) | u :\overline U\to \rr\ \text{\ is a local superminimizer for ${ \bf I}(g,U)$} \}\quad\forall x\in\overline U.$$ Then $U^\pm_g$ are absolute minimizers for ${ \bf I}(g,U)$. \end{prop} \begin{proof} We only show that $U^+_g$ is an absolute minimizer for ${ \bf I}(g,U)$; similarly one can prove that $U^-_g$ is also an absolute minimizer for ${ \bf I}(g,U)$. {\color{black} Thanks to Lemma \ref{lem235}, it suffices to show that $U^+_g$ is a minimizer for $\bi(g,U)$, a local subminimizer for $\bi(g,U)$ and a local superminimizer for $\bi(g,U)$. } Note that $\bi(g,U)=\mu(g,\partial U)<\fz. $ \medskip \textbf{\bf Prove that $U^+_g$ is a minimizer for $\bi(g,U)$.} Firstly, since any local subminimizer $w$ for $\bi(g,U)$ is a minimizer for $\bi(g,U)$. We know $$ w \in \dot W^{1,\infty}_{X }(U) \cap C^0(\overline U), \ w|_{\pa U} =g, \mbox{ and } \|H(\cdot, Xw(\cdot))\|_{L^\fz(U)} = \bi(g,U)<\fz. $$ Recalling the assumption (H3), we have $ \| |Xw |\|_{L^\fz(U)} \le R_{\bi(g,U)} $ and hence $w\in\lip_{d_{CC}}( \overline U)$ with $\lip_{d_{CC}}(w,\overline U)\le R_{\bi(g,U)}$. By a direct calculation, one has \begin{equation}\label{ugc0} \mbox{$U_g^+\in\lip_{d_{CC}}( \overline U)$ with $\lip_{d_{CC}}(U^+_g,\overline U)\le R_{\bi(g,U)}$ and $U_g^+|_{\pa U}= g.$} \end{equation} Next, let $\{x_i\}_{i \in \nn}$ be a dense set of $U$. For any $i,j\in\nn$, {\color{black} by the definition of $U_g^+$,} there exists a local subminimizer $u_{ij} $ for $\bi(g,U)$ such that $$u_{ij} (x_i) \ge U^+_g(x_i) - \frac{1}{j}.$$ Note that, by Definition \ref{d231}, $u_{ij}$ is also a minimizer for ${ \bf I}(g,U)$. Moreover, for each $j\in\nn$, write $$u_j := \max\{u_{ij} \}_{1\le i\le j}.$$ Lemma \ref{lem231}(iii) implies that $u_j$ is a minimizer for $\bi(g,U)$ and hence \begin{equation}\label{ujj0} u_j \in \dot W_X^{1,\fz}(U) \mbox{ and } \|H(\cdot,Xu_j(\cdot))\|_{L^\fz(U)} \le \bi(g,U) \quad \forall j \in \nn. \end{equation} For $1\le i\le j$, from the definition of $U_g^+$, it follows that \begin{equation}\label{ujj1} \quad U^+_g(x_i) \ge u_j(x_i) \ge U^+_g(x_i) - \frac{1}{j} . 
\end{equation} Finally, combining \eqref{ugc0}, \eqref{ujj0} and \eqref{ujj1}, we are able to apply Lemma \ref{c0} to $U_g^+$ and $\{u_j\}_{j \in \nn}$ so to get \begin{equation}\label{uniform} u_j \to U^+_g \mbox{ in } C^0(U), \ U^+_g \in \dot W^{1,\infty}_{X }(U) \mbox{ and } \|H(\cdot,XU^+_g)\|_{L^\fz(U)}\le \bi(g,U). \end{equation} Hence $$ \|H(\cdot,XU^+_g)\|_{L^\fz(U)}= \bi(g,U).$$ Together with \eqref{ugc0} yields that $U^+_g$ is a minimizer for $\bi(g,U)$. \medskip \textbf{Prove that $U^+_g$ is a local subminimizer for $\bi(g,U)$}. We argue by contradiction. {\color{black} Assume on the contrary that $U^+_g$ is not a local subminimizer for $\bi(g,U)$. Then, by definition, there exists a subdomain $V \subset U$, and some $x_0$ in $V$ such that $$U^+_g(x_0) > \cS_{h^+,V}^+(x_0),$$ where $\cS_{h^+,V}$ is the McShane extension in $V$ of $h^+=U^+_g|_{\pa V}$. } By the definition of $U^+_g$, there exists a local subminimizer $u$ for $\bi(g,U)$ such that \begin{equation}\label{x0} U^+_g(x_0)\ge u(x_0) > \cS_{h^+,V}^+(x_0). \end{equation} The definition of $U^+_g$ also gives \begin{equation}\label{ug1} u\le U^+_g \mbox{ in $U$.} \end{equation} Define $$E := \{x\in \overline V \ | \ u(x) > \cS_{h^+,V}^+(x)\}.$$ {\color{black} Since both $u $ and $\cS_{h^+,V}^+$ are continuous, $E$ is an open subset of $\overline V$. Since $$U^+_g = \cS_{h^+,V}^+ \mbox{ on $\pa V$},$$ by \eqref{ug1}, we infer that $u \le \cS_{h^+,V}^+ \mbox{ on $\pa V$}$ and hence $E \subset V.$ Obviously, $x_0\in E$. Denote by $E_0$ the component of $E$ containing $x_0$. } Recalling that $u$ is a local subminimizer for $\bi(g,U)$, {\color{black}by Definition \ref{d231},} we have $u \le \cS_{u|_{\pa E_0}, E_0}^+$ in $ E$. Since $x_0 \in E_0$, we have $ \cS_{u|_{\pa E_0}, E_0}^+(x_0) \ge u(x_0 ),$ which, combined with \eqref{x0}, gives \begin{equation}\label{x01} \cS_{u|_{\pa E_0}, E_0}^+(x_0) \ge u(x_0 ) > \cS_{h^+,V}^+(x_0). \end{equation} {\color{black}On the other hand, we are able to apply Lemma \ref{lem234} with $(U,V,g, h=\csgu^+|_{\partial V} )$ therein replaced by $(V,E_0, h^+, h= \cS^+_{h^+,V}| _{E_0})$ here and then obtain $$ \cS_{h^+, E_0}^+ \le \cS_{h^+,V}^+ \mbox{ in $E_0$.} $$ Since $u|_{E_0}= \cS^+_{h^+,V}| _{E_0}=h$, at $x_0 \in E$, we arrive at \begin{equation}\label{x02} \cS_{u|_{\pa E_0}, E_0}^+(x_0) \le \cS_{h^+,V}^+(x_0). \end{equation} } Note that \eqref{x01} contradicts with \eqref{x02} as desired. \medskip \textbf{Prove that $U^+_g$ is a local superminimizer for $\bi(g,U)$.} By definition, it suffices to prove that, for any given subdomain $V \subset U$, we have $\cS_{h^+, V}^-\le U^+_g$ in $V$, where we write $h^+=U^+_g|_{\pa V}$. To this end, define $u^+$ as in \eqref{upm} with $u$ therein replaced by $U^+_g$, that is, \begin{equation}\label{cla} u^+:= \left \{ \begin{array}{lll} \cS_{h^+, V}^- & \quad \text{in} & \quad V \\ U^+_g & \quad \text{in} & \quad \overline U \backslash V. \end{array} \right. \end{equation} Then $u^+$ is a minimizer for $\bi(g,U)$. Indeed, since $U^+_g$ is a minimizer for $\bi(g,U)$, we know that $U^+_g$ satisfies \eqref{udmu} with $\mu=\bi(g,U)$ therein. This allows us to apply Lemma \ref{lem5.10} with $u= U^+_g$ therein and then conclude that $u^+\in \dot W^{1,\fz}_X(U)\cap C^0(U)$, $u^+=U^+_g =g$ on $\pa U$, and $\|H(x,Xu^+ )\|\le \bi(g,U) $. Therefore, by definition of $\bi(g,U) $, $\|H(x,Xu^+ )\|= \bi(g,U) $, and hence $u^+$ is a minimizer for $\bi(g,U)$. 
We further claim that \begin{equation}\label{cllsub} \mbox{$u^+$ is a local subminimizer for $\bi(g,U)$.} \end{equation} Assume that this claim holds. Choosing $u^+$ as a test function in the definition of $U^+_g$ {\color{black}in \eqref{ug}, we know that $$U^+_g \ge u^+ \mbox{ in $U$}.$$ } and in particular $U^+_g \ge \cS_{h^+, V}^- $ in $V$ as desired. \medskip {\bf Proof the claim \eqref{cllsub}.} To get \eqref{cllsub}, by Definition \ref{d231}, we still need to show for any subdomain $B \subset U$, \begin{equation}\label{deft} u ^+\le \cS_{u^+|_{\pa B},B}^+ \text{ in $B$}. \end{equation} To prove \eqref {deft}, we argue by contradiction. Assume that \eqref {deft} is not correct, that is, \begin{equation}\label{ccc} W := \{x \in B \ | \ u^+(x) > \cS_{u^+|_{\pa B},B}^+ (x)\}\ne \emptyset. \end{equation} Up to considering some connected component of $W$, we may assume that $W$ is connected. Note that \begin{equation}\label{paW1} \mbox{$u^+ = \cS_{u^+|_{\pa B},B}^+ $ on $\pa W$. } \end{equation} Consider the set \begin{equation}\label{defd} D := \{x \in B \ | \ U^+_g(x) > \cS_{u^+|_{\pa B},B}^+ (x)\}. \end{equation} By continuity, both of $W$ and $D$ are open. Below, we consider two cases: $D=\emptyset$ and $D\ne \emptyset$. \medskip {\bf Case $D=\emptyset$}. If $D$ is empty, then we always have $U^+_g \le \cS_{u^+|_{\pa B},B}^+$ in $B$. Thus $$\mbox{ $U^+_g \le \cS_{u^+|_{\pa B},B}^+< u^+$ in $W$. }$$ Since $U^+_g=u^+\in \overline U\setminus V$, this implies \begin{equation}\label{incl04} W \subset V. \end{equation} Since $u^+= \cS_{h^+, V}^-$ in $\overline V$ and $W \subset V$ gives $ \pa W \subset \overline V$, we have \begin{equation}\label{paW2}\mbox{$u^+= \cS_{h^+, V}^-$ on $\pa W$. }\end{equation} Moreover, by Lemma \ref{lem234}, we know that $ \cS_{h^+, V}^-$ with $h^+=U^+_g|_{\pa V}$ is a local subminimizer for $\bi(U^+_g,V)$. By the definition of local subminimizer, and by $W\subset V$, we have \begin{equation}\label{e91} \cS_{h^+, V}^- \le \cS_{u^+|_{\pa W}, W}^+ \quad \text{in } W, \end{equation} where we recall $\cS_{h^+, V}^-|_{\pa W}=u^+|_{\pa W}$ from \eqref{paW2}. {\color{black} Applying Lemma \ref{lem234} with $(U,V,g,h^+)$ therein replaced by $(B,W, u^+|_{\pa B}, \cS_{u^+|_{\pa B},B}^+|_{\pa W})$ here, recalling $\cS_{u^+|_{\pa B},B}^+|_{\pa W}=u^+|_{\pa W}$ from \eqref{paW1}, we obtain \begin{equation}\label{e931} \cS_{u^+|_{\pa W}, W}^+ \le \cS_{u^+|_{\pa B},B}^+ \mbox{ in $W$.} \end{equation} Combing \eqref{e91} and \eqref{e931}, by $W\subset V$, one arrives at $$ u^+= \cS_{h^+, V}^- \le \cS_{u^+|_{\pa B},B}^+ \mbox{ in $W$,} $$ which contradicts with \eqref{ccc}. } \medskip {\bf Case $D\ne\emptyset$.} Up to considering some connected component of $D$, we may assume that $D$ is connected. By the definition of $D$ {\color{black} in \eqref{defd}}, we infer that \begin{equation}\label{paD}U^+_g = \cS_{u^+|_{\pa B},B}^+ \mbox{ on $\pa D$.} \end{equation} {\color{black} Since $U^+_g$ is a local subminimizer for $\bi(g,U)$ as proved above, we know} \begin{equation}\label{e95} U^+_g \le \cS_{U^+_g|_{\pa D},D}^+ \text{ in } D. 
\end{equation} {\color{black} Applying Lemma \ref{lem234} with $(U,V,g,h^+)$ therein replaced by $(B,D, u^+|_{\pa B}, \cS_{u^+|_{\pa B},B}^+|_{\pa D}) $, recalling $\cS_{u^+|_{\pa B},B}^+|_{\pa D}=U^+_g |_{\pa D}$ from \eqref{paD}, we obtain \begin{equation}\label{e951} \cS_{U^+_g |_{\pa D}, D}^+ \le \cS_{u^+|_{\pa B},B}^+ \mbox{ in $D$.} \end{equation} Combining \eqref{e95} and \eqref{e951}, we deduce $$ U^+_g \le \cS_{u^+|_{\pa B},B}^+\mbox{ in $D$.}$$ Recalling \eqref{defd}, this contradicts with $D\ne\emptyset$. The proof is complete. } \end{proof} Theorem \ref{t231} is now a direct consequence of the above series of results. \begin{proof}[Proof of Theorem \ref{t231}] Let $g\in\lip_{d_{CC}} (\partial U)$. It suffices to show that $\mu(g,\pa U)<\fz$, which allows us to use Proposition \ref{p42} and then conclude the desired absolute minimizer $U_g^+ $ therein. Taking $0<\lz<\fz$ such that $R_\lz' \ge \lip_\dcc(g,\pa U)$, we have $$ g(y) -g(x) \le R_\lz'\dcc(x,y) \quad \forall x,y \in \pa U.$$ From Lemma \ref{lem311} (ii), that is, $R'_\lz d_{CC} \le d^U_\lz,$ it follows that $$ g(y) -g(x) \le d_\lz^U(x,y) \quad \forall x,y \in \pa U.$$ and hence that $$\mu(g,\partial U) = \inf\{\mu \ge 0 \ | g(y) -g(x) \le d_\mu(x,y)\} \le \lz<\fz.$$ The proof is complete. \end{proof} \section{Further discussion} Note that Rademacher type Theorem \ref{rad} is a cornerstone when showing the existence of absolute minimizers. Indeed, Champion and Pascale \cite{cp} and Guo-Xiang-Yang \cite{gxy} established partial results similar to Theorem \ref{rad} for a special class of Hamiltonians considered in this paper to show the existence of absolute minimizers. However, their method seems to be invalid for more general Hamiltonians considered in this paper. We briefly explain the reason below. \begin{rem}\rm Champion and Pascale \cite{cp} showed the McShane extension is a minimizer for $H$ when $H$ is lower semi-continuous on $U \times \rn$. In fact, they defined another intrinsic distance induced by $H(x,p)$. For every $\lz\ge 0$, \begin{equation*} L_\lz (x,q):=\sup_{\{p\in H_\lz(x)\}} p\cdot q, \quad \forall \ x\in\overline U \ \mbox{and }\ q\in\rrm. \end{equation*} where $H_\lz(x)$ is the sub-level set at $x$, namely, $H_\lz(x)=\{p \in \rr^m \mid H(x,p) \le \lz\}$. For $0\le a<b\le +\infty$, let $\gz:[a,\,b]\to \overline U$ be a Lipschitz curve, that is, there exists a constant $C>0$ such that $|\gz(s)-\gz(t)|\le C|s-t|$ whenever $s,t\in[a,b]$. The $L_\lz$-length of $\gz$ is defined by \begin{equation*} \ell_\lz(\gz):= \int_a^bL_\lz\lf(\gz(\theta),\gz'(\theta)\r)\,d\theta, \end{equation*} which is nonnegative, since $L_\lambda(x,q)\ge 0$ for any $x\in\overline U$ and $q\in\rn$. For a pair of points $x,\,y\in\overline U$, the $\odl$-distance from $x$ to $y$ is defined by \begin{equation*} \odl(x,y):= \inf\Big\{\ell_\lz(\gz)\ |\ \gamma\in\mathcal C(a,b,x,y, \overline U)\Big\}. \end{equation*} Then, they prove two intrinsic pseudo-distance are equal, that is \begin{equation}\label{e15} \dl(x,y)=\odl(x,y) \quad \text{for any} \ \lz >0 \ \text{and for any} \ x,y \in \overline U. \end{equation} Thanks to the definition of $\odl$, they can justify (i)$\Leftrightarrow$(ii) in Rademacher type Theorem \ref{rad}. However, when asserting \eqref{e15}, we will meet obstacles in generalizing \cite{cp} Proposition A.2 since we are faced with measurable $H$. 
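To orient the reader, let us record the simplest model computation; it is only an illustration, is stated under the extra assumptions that $m=n$, $X=\nabla$ is the Euclidean gradient and $H(x,p)=|p|$, and is not used elsewhere in the paper. In this case $H_\lz(x)=\{p\in\rrm \ | \ |p|\le\lz\}$ for every $x$, so that $$ L_\lz(x,q)=\sup_{|p|\le\lz}p\cdot q=\lz|q| \quad\mbox{and}\quad \ell_\lz(\gz)=\lz\int_a^b|\gz'(\theta)|\,d\theta, $$ and therefore $\odl(x,y)$ is exactly $\lz$ times the intrinsic Euclidean distance of $\overline U$ (compare \eqref{dual} in the Appendix). On the other hand, for this $H$ the constraint $\|H(\cdot,\nabla u)\|_{L^\fz(U)}\le\lz$ is simply $\|\nabla u\|_{L^\fz(U)}\le\lz$, and by Lemma \ref{la.4}(i) in the Appendix the supremum of the increments $u(y)-u(x)$ over such $u$ is again $\lz$ times the intrinsic Euclidean distance. So in this model case the identity \eqref{e15} causes no difficulty (indeed $H$ is continuous here, so it is covered by \cite{cp}); the obstacles described above appear only for genuinely $x$-dependent, merely measurable $H$.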
On the other hand, Guo-Xiang-Yang \cite{gxy} provided another method to identify a weak version of \eqref{e15} for measurable Finsler metrics $H$, that is \begin{equation}\label{e16} \lim_{y \to x}\frac{\dl(x,y)}{\wz d_\lz(x,y)}=1 \quad \text{for any} \ \lz >0 \ \text{and for any} \ x,y \in \overline U. \end{equation} Here $\wz d_\lz$ induced by measurable Finsler metrics $H$ is defined in the following way. \begin{equation} \label{q2} \wz d_\lz(x,y):= \sup_N\inf\Big\{\ell_\lz(\gz)\ |\ \gamma\in \Gamma_N(a,b,x,y, \overline U)\Big\} \end{equation} where the supremum is taken over all subsets $N$ of $ \overline U$ such that $|N| = 0$ and $\Gamma_N(a,b,x,y, U)$ denotes the set of all Lipschitz continuous curves $\gz$ in $ \overline U$ with end points $x$ and $y$ such that $\mathcal H^1(N \cap \gz) = 0$ with $\mathcal H^1$ being the one dimensional Hausdorff measure. In fact, \eqref{e16}, combined with the method in \cite{cp} will be sufficient for validating (i)$\Leftrightarrow$(ii) in Rademacher type Theorem \ref{rad}. Unfortunately, since we are coping with H\"{o}rmander vector field, a barrier arises when modifying their proofs. Indeed, their method uses a result by \cite{da} that every $x$-measurable Hamiltonian $H$ can be approximated by a sequence of smooth Hamiltonians $\{H_n\}_n$ such that two intrinsic distances $\wz d_\lz^{H_n}$ and $d_\lz^{H_n}$ induced by $H_n$ by means of \eqref{q2} and \eqref{d311} satisfy $ \lim_{n \to \fz}d_\lz^{H_n} = d_\lz$ and $ \limsup_{n \to \fz}\wz d_\lz^{H_n} \le \wz d_\lz$ uniformly on $ U\times U$ respectively. The process of the proof of \cite{da} is based on a $C^1$ Lusin approximation property for curves. Namely, given a Lipschitz curve $\gz: \ [0,1] \to U$ joining $x$ and $y$, for any $\ez >0$, there exists a $C^1$ curve $\widetilde{\gz}: \ [0,1] \to U$ with the same endpoints such that $$ \mathcal{L}^1 ( \{t \in [0,1] \ | \ \widetilde{\gz}(t) \neq \gz(t) \quad \text{or} \quad \widetilde{\gz}'(t)\neq \gz'(t)\})<\ez$$ where $\mathcal{L}^1$ denotes the one dimensional Lebesgue measure. Besides, $$ \|\widetilde{\gz}\|_{L^\fz} \le c\|\gz\|_{L^\fz}, $$ for some constant $c$ depending only on $n$. Although this version of $C^1$ Lusin approximation property holds for horizontal curves in Heisenberg groups (\cite{l16}) and step 2 Carnot groups (\cite{ls16}), it fails for some horizontal curve in Engel group (\cite{l16}). In summary, it is difficult to generalize the properties of the pseudo metric $\wz d_\lz$ not only from Euclidean space to the case of H\"{o}rmander vector fields but also from lower-semicontinuous $H(x,p)$ to measurable $H(x,p)$. Hence we would like to pose the following open problem. \end{rem} \begin{ques} Under the assumptions (H0)-(H3), does \eqref{e16} holds? \end{ques} \renewcommand{\thesection}{Appendix } \newtheorem{lemapp}{Lemma \hspace{-0.15cm}}\newtheorem{thmapp}[lemapp] {Theorem \hspace{-0.15cm}}\newtheorem{corapp}[lemapp] {Corollary \hspace{-0.15cm}} \newtheorem{remapp}[lemapp] {Remark \hspace{-0.15cm}} \newtheorem{defnapp}[lemapp] {Definition \hspace{-0.15cm}} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thelemapp}{A.\arabic{lemapp}} \section {Rademacher's theorem in Euclidean domains---revisit } In this appendix, we state some consequence of Rademacher's theorem (Theorem \ref{rade}) for Sobolev and Lipschitz classes, see Lemma \ref{coin} and Lemma \ref{rade'} below. They were well-known in the literature and also partially motivated our Theorem \ref{rad} and Corollary \ref{global}. 
For reader's convenience, we give the details. Recall that $\Omega\subset\rn$ is always a domain. The homogeneous Sobolev space $\dot W^{1,\fz}(\Omega)$ is the collection of all functions $u\in L^1_\loc(\Omega)$ with its distributional derivative $\nabla u=(\frac{\partial u}{\partial x_i})_{1\le i\le n}\in L^\fz(\Omega)$. We equip $\dot W^{1,\fz}(\Omega)$ with the semi-norm $$\|u\|_{\dot W^{1,\fz}(\Omega)}=\|\nabla u\|_{L^\fz(\Omega)}.$$ Write $\dot W^{1,\fz}_\loc(\Omega)$ as the collection of all functions $u$ in $\Omega$ so that $u\in \dot W^{1,\fz}(V)$ whenever $V\Subset\Omega$. Here and below, $V\Subset\Omega$ means that $V$ is bounded domain with $\overline V\subset\Omega$. On the other hand, denote by $\lip(\Omega)$ the collection of all Lipschitz functions $u$ in $\Omega$, that is, all functions $u$ satisfying \eqref{lip}. We equip $\lip(\Omega)$ with the semi-norm $$\lip(u,\Omega)=\sup_{x,y\in\Omega,x\ne y}\frac{|u(y)-u(x)|}{|x-y|}=\inf\{ \mbox{$\lz\ge 0$ satisfiying \eqref{lip}}\} .$$ Denote by $\lip_\loc(\Omega)$ the collection of all functions $u$ in $\Omega$ so that $u\in\lip(V)$ for any $V\Subset\Omega$. Moreover, denote by $\lip^\ast(\Omega)$ the collection of all functions $u$ in $\Omega$ with \begin{equation}\label{suplip}\lip^\ast(u,\Omega):=\sup_{x \in\Omega }\lip u(x)<\fz. \end{equation} Obviously, $ \lip(\Omega)\subset\lip^\ast(\Omega)$ with the seminorm bound $\lip^\ast(u,\Omega)\le \lip(u,\Omega).$ Next, we have the following relation. \begin{lemapp} \label{lemast} We have $ \lip^\ast(\Omega)\subset\lip_\loc(\Omega)$. For any convex subdomain $V\subset\Omega$ and $u\in \lip^\ast(\Omega)$, we have \begin{equation}\label{ast} | u(x)- u(y)|\le \|u\|_{\lip^\ast(V ) } |x-y|\quad\forall x,y\in V. \end{equation} \end{lemapp} \begin{proof} Let $u\in \lip^\ast(\Omega)$. To see $ u\in \lip_\loc(\Omega)$, it suffices to prove $u\in\lip(B)$ for any ball $B\Subset\Omega$. Given any $x,y\in B$, denote by $\gz(t)=x+t(y-x)$. Since $A_{x,y}=\sup_{t\in[0,1]} \lip u(\gz(t))<\fz$, for each $t\in[0,1]$ we can find $r_t>0$ such that $|u(\gz(s))-u(\gz(t))|\le A_{x,y}|\gz(s)-\gz(t)|=A_{x,y}|s-t||x-y| $ whenever $|s-t|\le r_t$ and $s \in [0,1]$. Since $[0,1]\subset\cup_{t\in[0,1]}(t-r_t,t+r_t)$ we can find an increasing sequence $t_i\in[0,1]$ with $t_0 = 0$ and $t_N=1$ such that $[0,1]\subset \cup_{i=1}^N(t_i-\frac12r_{t_i},t_i+\frac12 r_{t_i})$. Write $x_i=\gz(t_i)$ for $i =0 , \cdots , N$. We have $$|u(x)-u(y)|=|\sum_{i=0}^{N-1}[u(x_i)-u(x_{i+1})]|\le \sum_{i=0}^{N-1}|u(x_i)-u(x_{i+1})|\le A_{x,y}\sum_{i=0}^{N-1}|x_i-x_{i+1}|=A_{x,y}|x-y|.$$ Noticing that $A_{x,y} \le \lip^\ast(u,B) \le \lip^\ast(u,\Omega) < \fz$ for all $x,y \in B$, we deduce that $u \in \lip(B)$ and hence $u \in \lip_\loc(\Omega)$. If $V \Subset \Omega$ is convex, then for any $x,y \in \Omega$, the line-segment joining $x$ and $y$ lies in $V$. Hence similar to the above discussion, we have $$ |u(x)-u(y)|\le A_{x,y}|x-y|, \ \forall x,y \in V $$ and $A_{x,y} \le \|u\|_{\lip^\ast(V)}< \fz$ for all $x,y \in V$. Therefore, \eqref{ast} holds and the proof is complete. \end{proof} On the other hand, functions $\dot W^{1,\fz}_\loc(\Omega)$ admit continuous representatives. \begin{lemapp} \label{sub} (i) Each $u\in \dot W^{1,\fz}_\loc(\Omega)$ admits a unique continuous representative $\wz u$, that is, $\wz u\in \dot W^{1,\fz}_\loc(\Omega)$ with $\wz u=u$ almost everywhere in $\Omega$. 
Moreover, $\wz u\in \lip_\loc(\Omega)$, and for any convex subdomain $V\subset\Omega$, we have $$|\wz u(x)-\wz u(y)|\le \|u\|_{\dot W^{1,\fz}(\Omega)}|x-y|\quad\forall x,y\in V.$$ (ii) Each $u\in \dot W^{1,\fz} (\Omega)$ admits a unique continuous representative $\wz u$, that is, $\wz u\in \dot W^{1,\fz} (\Omega)$ with $\wz u=u$ almost everywhere in $\Omega$. Moreover, $\wz u\in \lip^\ast(\Omega)$ with $\lip^\ast(\wz u,\Omega)\le \|u\|_{\dot W^{1,\fz}(\Omega)}$. \end{lemapp} \begin{proof} Since (ii) can be shown in a similar way as (i), we only prove (i). Given any convex domain $V\Subset\Omega$, for any pair $x,y$ of Lebesgue points of $ u$, we have $$ u(y)-u(x)=\lim_{\dz\to0} [u\ast\eta_\dz(y)-u\ast\eta_\dz(x)]=\lim_{\dz\to 0}\nabla (u\ast\eta_\dz)(x+t_\dz(y-x))\cdot (y-x)$$ where $\eta_\dz$ is the standard mollifier in $\rr^n$ with its support ${\rm spt}\eta_\dz \subset B(0,\dz)$ and $t_\dz \in [0,1]$. Also, since for any $z\in V$, $$|\nabla (u\ast\eta_\dz)(z)|= |(\nabla u)\ast\eta_\dz)(z)|\le \|\nabla u\|_{L^\fz(B(z,\dz))}, $$ we deduce that for any pair $x,y$ of Lebesgue points of $ u$, $$ |u(y)-u(x)|\le \|\nabla u\|_{L^\fz(V)} |y-x|.$$ If $z \in V$ is not a Lebesgue point of $u$, let $\{z_i\}_{i \in \nn} \subset V$ be a sequence of Lebesgue points of $u$ converging to $z$. We have $$ \lim_{i \to \fz} |u(z_i)-u(z_{i+1})|\le \lim_{i \to \fz} \|\nabla u\|_{L^\fz(V)} |z_i-z_{i+1}| = 0, $$ which implies $\{u(z_i)\}_{i \in \nn} \subset V$ is a Cauchy sequence. Since $\|u\|_{L^\fz(V)} < \fz$, we know $\{u(z_i)\}_{i \in \nn}$ has a limit in $\rr$ independent of the choice of the sequence $\{u(z_i)\}_{i \in \nn}$. Define $\wz u(z):= u(z) $ if $z \in V$ is a Lebesgue point of $u$ and $\wz u(z)= \lim_{i \to \fz} u(z_i)$ if $z \in V$ is not a Lebesgue point of $u$ where $\{z_i\}_{i \in \nn} \subset V$ is a sequence of Lebesgue points of $u$ converging to $z$. We know $\wz u : V \to \rr$ is well-defined and moreover, $$ | \wz u(y)-\wz u(x)|\le \|\nabla u\|_{L^\fz(V)} |y-x|\quad\forall x,y\in V.$$ Thus $\wz u\in\lip(V)$ with $\sup_{x\in V}\lip \wz u(x)\le \lip(\wz u,V)\le \|\nabla u\|_{L^\fz(V)}$. In particular, $\wz u$ is continuous, which shows (i). \end{proof} Thanks to lemma \ref{sub}, below for any function $u\in \dot W^{1,\fz}_\loc(\Omega)$ or $u\in \dot W^{1,\fz}(\Omega)$, up to considering its continuous representative $\wz u$, we may assume that $u$ is continuous. Under this assumption, Lemma \ref{sub} further gives $\dot W^{1,\fz}_\loc(\Omega)\subset \lip_\loc(\Omega)$, and $\dot W^{1,\fz} (\Omega)\subset \lip^\ast(\Omega)$ with a norm bound $\lip^\ast(u,\Omega)\le \|u\|_{\dot W^{1,\fz}(\Omega)}$. Rademacher's theorem (Theorem \ref{rade}) tells that their converse are also true. Indeed, we have the following. \begin{lemapp} \label{coin} (i) We have $ \dot W^{1,\fz}_\loc(\Omega)=\lip_\loc(\Omega)$ and $\lip( \Omega)\subset \dot W^{1,\fz} (\Omega)= \lip^\ast(\Omega)$ with $\lip^\ast(u,\Omega)= \|u\|_{\dot W^{1,\fz}(\Omega)}\le \lip(u,\Omega)$. (ii) If $\Omega$ is convex, then $\lip( \Omega)= \dot W^{1,\fz} (\Omega)= \lip^\ast(\Omega)$ with $\lip^\ast(u,\Omega)= \|u\|_{\dot W^{1,\fz}(\Omega)}= \lip(u,\Omega)$. \end{lemapp} \begin{proof} (i) If $u\in \lip_\loc(\Omega)$, applying Rademacher's theorem (Theorem \ref{rade}) to all subdomains $V\Subset\Omega$, one has $u\in \dot W^{1,\fz}_\loc(\Omega)$ and $ |\nabla u(x)|=\lip u(x)$ for almost all $x\in \Omega$ (whenever $u$ is differentiable at $x$). Hence $\lip_\loc(\Omega) \subset W^{1,\fz}_\loc(\Omega)$. 
Combining Lemma \ref{sub}(i), we know $\lip_\loc(\Omega) = W^{1,\fz}_\loc(\Omega)$. If $u\in \lip^\ast(\Omega)$, that is, $\lip^\ast(u,\Omega) = \sup_{x\in\Omega}\lip u(x)<\fz$. We have $u\in \lip_{\loc}(\Omega)$ and hence $u\in \dot W^{1,\fz}_\loc(\Omega)$ and $ |\nabla u(x)|= \lip u(x) \le \lip^\ast(u,\Omega)<\fz$ for almost all $x\in \Omega$. Thus $u\in \dot W^{1,\fz}(\Omega)$. By definition, it is obvious that $\lip( \Omega)\subset \lip^\ast(\Omega)$. Hence Lemma \ref{coin} (i) holds. (ii) By Lemma \ref{coin} (i), we only need to show $\lip^\ast(\Omega) \subset \lip( \Omega)$. Applying Lemma \ref{lemast} with $V=\Omega$ therein, \eqref{ast} becomes $$ \frac{ | u(x)- u(y)|}{|x-y|} \le \|u\|_{\lip^\ast(\Omega ) } \quad\forall x,y\in \Omega. $$ Taking supremum among all $x,y \in \Omega$ in the left hand side of the above inequality, we arrive at $$ \|u\|_{\lip(\Omega ) } \le \|u\|_{\lip^\ast(\Omega ) }, $$ which gives the desired result. \end{proof} \begin{remapp} \label{eg2}\rm (i) Lemma \ref{lemast} and Lemma \ref{coin} fail if we relax $ \sup_{x\in\Omega}\lip u(x)$ in the definition \eqref{suplip} to be $\|\lip u\|_{L^\fz(\Omega)}=\esup_{x\in\Omega}\lip u(x)$. This is witted by the standard Cantor function $w$ in $[0,1]$. Denote by $E$ the standard Cantor set. It is well-known that $w$ is continuous but not absolute continuous in $[0,1]$. Since Lipschitz functions are always absolutely continuous, we know that $w$ is neither Lipschitz nor locally Lipschitz in $\Omega=(0,1)$, and hence $w\notin \lip_\loc(\Omega)$. On the other hand, observe that $\Omega\setminus E$ consists of a sequence of open intervals which mutually disjoint, and $w$ is a constant in each such intervals and hence $\lip w(x)=0 $ therein. So we know that $\lip w(x)=0 $ in $\Omega\setminus E$. Since $|E|=0$, we have $\|\lip u\|_{L^\fz(\Omega)}=0$. (ii) In general, if $\Omega$ is not convex, one cannot expect that $ \dot W^{1,\fz} (\Omega)\subset \lip (\Omega)$ with a norm bound. Indeed, consider the planar domain \begin{equation}\label{domainu}U:=\{x=(x_1,x_2) \in \rr^2 \ | \ |x|<1 \} \setminus [0,1) \times \{0\}. \end{equation} Indeed, in the polar coordinate $(r,\tz)$, let $w: U \to \rr$ be $$w(r,\tz) := r\tz \mbox{ for all $0< r <1$ and $0< \tz< 2\pi$}.$$ One can show that $w\in \dot W^{1,\fz}(U)$ so that $w(x_1,x_2)< \pi/3$ when $1/2\le x_1<1$ and $0<x_2<1/10$, and $w(x_1,x_2)> 5\pi/6$ when $1/2\le x_1<1$ and $-1/10<x_2<0$. One has $\lip (w,\Omega)=\fz$ and hence $w\notin \lip(\Omega)$. \end{remapp} The example in Remark \ref{eg2} (ii) also indicates that the Euclidean distance does not match the geometry of domains and hence $\lip (\Omega)$ defined via Euclidean distance is not the prefect one to understand $ \dot W^{1,\fz} (\Omega)$. Instead of Euclidean distance, for any domain $\Omega$, we consider the intrinsic distance \begin{equation}\label{dual} d^\Omega_E(x,y)=\inf\{\ell(\gz) \ | \ \mbox{$\gz:[0,1]\to\Omega$ is absolute coninuous curve joining $x,y$}\}, \end{equation} where $\ell(\gz):=\int_0^1|\dot\gz(t)|\,dt$ is the Euclidean length. We have the dual formula. \begin{lemapp}\label{la.4} (i) For any $x,y\in\Omega$, \begin{equation}\label{intrinsic} d^\Omega_E(x,y) =\sup\{u(y)-u(x) \ | \ u\in\dot W^{1,\fz}(\Omega),\ \|\nabla u\|_{L^\fz(\Omega)}\le 1\}. \end{equation} (ii) If $x,y\in\Omega$ with $|x-y|\le \dist(x,\partial\Omega)$, then $d^\Omega_E(x,y)=|x-y|$. (iii) If $\Omega$ is convex, then $d^\Omega_E(x,y)=|x-y|$ for all $x,y\in\Omega$. 
\end{lemapp} \begin{proof} (i) Set \begin{equation}\label{wzd} \wz d^\Omega_E(x,y) =\sup\{u(y)-u(x) \ | \ u\in\dot W^{1,\fz}(\Omega),\ \|\nabla u\|_{L^\fz(\Omega)}\le 1\}. \end{equation} Notice that $d^\Omega_E(x, \cdot) \in \lip^\ast(\Omega) = \dot W^{1,\fz}(\Omega)$ (Lemma \ref{coin} (i)) and $\|\nabla d^\Omega_E(x, \cdot)\|_{L^\fz(\Omega)}\le 1$ for all $x \in \Omega$. Hence letting $d^\Omega_E(x, \cdot)$ be a test function in \eqref{wzd}, we see $$ d^\Omega_E(x,y) \le \wz d^\Omega_E(x,y) \ \forall x,y \in \Omega.$$ To see the converse inequality, fix $x,y \in \Omega$. Let $\{u_i\}_{i \in \nn}$ be a sequence of test functions in \eqref{wzd} such that $$ \wz d^\Omega_E(x,y) = \lim_{i \to \fz } (u_i(y)-u_i(x)). $$ Let $\gz:[0,1] \to \Omega$ be an arbitrary absolutely continuous curve joining $x$ and $y$. Then there exists a domain $U \Subset \Omega$ with $\gz \subset U$. Let $\{\eta_\dz\}_{\dz >0}$ be the standard mollifiers in $\rr^n$. For each $i \in \nn$ and all sufficiently small $\dz>0$, we know $u_i \ast \eta_\dz \in C^\fz(U)$ and $\|\nabla (u_i \ast \eta_\dz)\|_{L^\fz(U)}\le \|\nabla u_i \|_{L^\fz(\Omega)} \le 1$. Then we have \begin{align*} \wz d^\Omega_E(x,y) & = \lim_{i \to \fz} [u_i(y)-u_i(x)] \\ & =\lim_{i \to \fz} \lim_{\dz \to 0}[u_i \ast \eta_\dz(y) - u_i \ast \eta_\dz(x)] \\ & =\lim_{i \to \fz} \lim_{\dz \to 0} \int_{0}^1 \nabla (u_i \ast \eta_\dz) (\gz(t)) \cdot \dot \gz (t) \, dt \\ & \le \lim_{i \to \fz} \lim_{\dz \to 0} \int_{0}^1 |\nabla (u_i \ast \eta_\dz)(\gz(t))| |\dot \gz (t)| \, dt \\ & \le \int_{0}^1 |\dot \gz (t)| \, dt \\ & = \ell(\gz). \end{align*} Finally, taking the infimum among all absolutely continuous curves joining $x$ and $y$ in the above inequality, we conclude $$ \wz d^\Omega_E(x,y) \le d^\Omega_E(x,y) \ \forall x,y \in \Omega.$$ (ii) If $|x-y|\le \dist(x,\partial\Omega)$, then the line-segment $\gz$ joining $x$ and $y$ is contained in $\Omega$. Taking this $\gz$ as a competitor curve in \eqref{dual}, we get $$ |x-y| \le d^\Omega_E(x,y) \le \ell(\gz) = |x-y|.$$ (iii) If $\Omega$ is convex, for all $x,y\in\Omega$, since the line-segment joining them is contained in $\Omega$, similarly to (ii), we have $d^\Omega_E(x,y)=|x-y|$. The proof is complete. \end{proof} Note that if $\Omega$ is not convex, one cannot expect $d^\Omega_E(x,y)=|x-y|$ for all $x,y\in\Omega$. Indeed, if $\Omega$ is given by the domain $U$ as in \eqref{domainu}, for points $(1/2,\ez)$ and $(1/2,-\ez)$ with $\ez\in(0,1/10)$, the Euclidean distance between them is $2\ez$. However, any curve $\gz:[0,1]\to\Omega$ joining them must intersect $(-1,0)\times\{0\}$, say at a point $z$. One then deduces that $$\ell(\gz)\ge |(1/2,\ez)-z|+|(1/2,-\ez)-z|\ge \frac12+\frac12=1.$$ Thus the intrinsic distance between $(1/2,\ez)$ and $(1/2,-\ez)$ is always at least $1$, which is much larger than their Euclidean distance $2\ez$. With the help of Lemma \ref{la.4}, we show that the Lipschitz space defined via the intrinsic distance perfectly matches the Sobolev space $ \dot W^{1,\fz}(\Omega)$; see Lemma \ref{rade'} below.
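As a concrete illustration of this point, and anticipating Lemma \ref{rade'} below, consider again the slit disc $U$ from \eqref{domainu} and the function $w(r,\tz)=r\tz$ of Remark \ref{eg2}(ii); the following elementary computation, carried out in the polar coordinates used there, is included only for the reader's convenience. One has $$ |\nabla w|^2=\Big(\frac{\partial w}{\partial r}\Big)^2+\Big(\frac1r\,\frac{\partial w}{\partial \tz}\Big)^2=\tz^2+1\le 1+4\pi^2 \quad\mbox{in } U, $$ so that $\|\nabla w\|_{L^\fz(U)}\le\sqrt{1+4\pi^2}$. Thus, although $\lip(w,U)=\fz$ with respect to the Euclidean distance, Lemma \ref{rade'} yields $w\in\lip_{d^{U}_E}(U)$ with $\lip_{d^{U}_E}(w,U)\le\sqrt{1+4\pi^2}$.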
Denote by $\lip_{d^\Omega_E}(\Omega) $ the collection of all Lipschitz functions $u$ in $\Omega$ with respect to $d^\Omega_E$, that is, $$ \lip_{d^\Omega_E}(u,\Omega):=\sup\frac{|u(x)-u(y)|}{d^\Omega_E(x,y)}<\fz.$$ We also denote by $\lip^\ast_{d^\Omega_E}(\Omega)$ the collection of all functions $u$ in $\Omega$ with $$\lip^\ast_{d^\Omega_E}(u,\Omega):=\sup_{x \in\Omega }\lip_{d^\Omega_E} u(x)<\fz.$$ \begin{lemapp}\label{rade'} We have $ \lip_{d^\Omega_E}(\Omega)= \dot W^{1,\fz} (\Omega)= \lip^\ast(\Omega)$ and $$\|\nabla u\|_{L^\fz(\Omega)}=\lip_{d^\Omega_E} (u,\Omega)=\lip^\ast(u,\Omega)=\sup_{x\in\Omega}\lip_{d^\Omega_E} u(x).$$ \end{lemapp} \begin{proof} Recall that Lemma \ref{wzd} gives $d^\Omega_E(x,y)=|x-y|$ whenever $|x-y|\le \dist(x,\partial\Omega)$. One then has $\lip_{d^\Omega_E}(\Omega) \subset\lip_\loc(\Omega)$, and moreover, $\lip u(x)= \lip_{d^\Omega_E} u(x)$ for all $x\in\Omega$, which gives $\lip^\ast_{d^\Omega_E}(\Omega)=\lip^\ast (\Omega)$. Next, we show $\dot W^{1,\fz} (\Omega) \subset \lip_{d^\Omega_E}(\Omega)$ and $ \lip_{d^\Omega_E} (u,\Omega) \le \|\nabla u\|_{L^\fz(\Omega)}$. Let $u \in \dot W^{1,\fz} (\Omega)$. Then $\|\nabla u\|_{L^\fz(\Omega)} =:\lz < \fz.$ If $ \lz >0$, then $\lz^{-1}u \in \dot W^{1,\fz} (\Omega)$ and $\|\nabla (\lz^{-1} u )\|_{L^\fz(\Omega)} =1$. Hence $\lz^{-1}u$ could be the test function in \eqref{intrinsic}, which implies $$ \lz^{-1} u(y)- \lz^{-1}u(x) \le d^\Omega_E (x,y) \ \forall x,y \in \Omega, $$ or equivalently, $$ \frac{|u(y)- u(x) |}{\|\nabla u\|_{L^\fz(\Omega)}} \le d^\Omega_E (x,y) \ \forall x,y \in \Omega. $$ Therefore, $u \in \lip_{d^\Omega_E}(\Omega)$ and $ \lip_{d^\Omega_E} (u,\Omega) \le \|\nabla u\|_{L^\fz(\Omega)}$. If $\lz=0$, then similar as the above discussion, we have for any $\lz'>0$ $$ \frac{|u(y)- u(x) |}{\lz'} \le d^\Omega_E (x,y) \ \forall x,y \in \Omega. $$ Therefore, $u \in \lip_{d^\Omega_E}(\Omega)$ and $ \lip_{d^\Omega_E} (u,\Omega) \le \lz'$ for any $\lz'>0$. Hence $ \lip_{d^\Omega_E} (u,\Omega) =0 = \|\nabla u\|_{L^\fz(\Omega)}$. Moreover, we show $ \lip_{d^\Omega_E}(\Omega) \subset \lip^\ast(\Omega)$ and $ \lip^\ast(u,\Omega) \le \lip_{d^\Omega_E} (u,\Omega)$. Let $u \in \lip_{d^\Omega_E}(\Omega)$. Then $\lip_{d^\Omega_E} (u,\Omega)<\fz$. Since $\lip^\ast_{d^\Omega_E}(u,\Omega) \le \lip_{d^\Omega_E} (u,\Omega)$ and $\lip^\ast_{d^\Omega_E}(u,\Omega) = \lip (u,\Omega)$, we arrive at $$ \lip (u,\Omega) \le \lip_{d^\Omega_E} (u,\Omega) < \fz. $$ Therefore, $u \in \lip^\ast(\Omega)$. Finally, recalling that $\dot W^{1,\fz} (\Omega) = \lip^\ast(\Omega)$ and $\|\nabla u\|_{L^\fz(\Omega)} = \lip (u,\Omega) $ in Lemma \ref{coin}, we finish the proof. \end{proof} \begin{thebibliography}{99} \vspace{-0.3cm} \bibitem{a1} G. Aronsson, {\em Minimization problems for the functional $\sup_x F(x, f(x), f' (x))$}. Ark. Mat. {\bf 6} (1965), 33-53. \vspace{-0.3cm} \bibitem{a2} G. Aronsson, {\em Minimization problems for the functional $\sup_x F(x, f(x), f' (x))$. II}. Ark. Mat. {\bf 6} (1966), 409-431. \vspace{-0.3cm} \bibitem{a3} G. Aronsson, {\em Extension of functions satisfying Lipschitz conditions}. Ark. Mat. {\bf 6} (1967), 551-561. \vspace{-0.3cm} \bibitem{a68} G. Aronsson, {\em On the partial differential equation $u^2_x u_{xx} + 2u_x u_y u_{xy} + u^2_y u_{yy} = 0.$ } Ark. Mat. 7, (1968), 395-425. \vspace{-0.3cm} \bibitem{a4} G. Aronsson, {\em Minimization problems for the functional $\sup_x F(x, f(x), f' (x))$. III}. Ark. Mat. {\bf 7} (1969), 509-512. \vspace{-0.3cm} \bibitem{acjs} S. N. Armstrong, M. G. Crandall, V. 
Julin, and C. K. Smart, {\em Convexity criteria and uniqueness of absolutely minimizing functions.} Arch. Ration. Mech. Anal. {\bf 200} (2011), no. 2, 405-443. \vspace{-0.3cm} \bibitem{acj} G. Aronsson, M. G. Crandall, and P. Juutinen, {\em A tour of the theory of absolutely minimizing functions}. Bull. Amer. Math. Soc. (N.S.) {\bf 41} (2004), 439-505. \vspace{-0.3cm} \bibitem{b99} N. Barron, {\em Viscosity solutions and analysis in $L^\fz$.} In: Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998). NATO Sci. Ser. C Math. Phys. Sci. 528. Dordrecht: Kluwer Acad. Publ., pp. 1-60.(1999). \vspace{-0.3cm} \bibitem{bjw} E. N. Barron, R. R. Jensen, and C. Y. Wang, {\em The Euler equation and absolute minimizers of $L^\fz$ functionals.} Arch. Ration. Mech. Anal. {\bf 157} (2001), 255-283. \vspace{-0.3cm} \bibitem{b1} T. Bieske, {\em On $\infty$-harmonic functions on the Heisenberg group.} Comm. Partial Differential Equations 27 (2002), no. 3-4, 727-761. \vspace{-0.3cm} \bibitem{bls} A. Boutet de Monvel, D. Lenz and P. Stollmann, {\em Schnol's theorem for strongly local forms}, Israel J. Math. 173 (2009), 189-211. \vspace{-0.3cm} \bibitem{cpp} T. Champion, L. De Pascale, and F. Prinari, {\em $\Gamma$-Convergence and absolute minimizers for supremal functionals.} ESAIM Control Optim. Calc. Var. {\bf 10} (2004), 14-27. \vspace{-0.3cm} \bibitem{cv96} V. M. Chernikov, S. K. Vodop'yanov, {\em Sobolev Spaces and hypoelliptic equations I,II.} Siberian Advances in Mathematics. 6 (1996) no. 3, 27-67, no. 4, 64-96. Translation from: Trudy In-ta matematiki RAN. Sib. otd-nie. 29 (1995), 7-62. \vspace{-0.3cm} \bibitem{cp} T. Champion and L. De Pascale, {\em Principles of comparison with distance functions for absolute minimizers.} J. Convex Anal. {\bf 14} (2007), 515-541. \vspace{-0.3cm} \bibitem{c03} M. Crandall, {\em An efficient derivation of the Aronsson equation.} Arch. Ration. Mech. Anal. {\bf 167} (2003), 271-279. \vspace{-0.3cm} \bibitem{da} A. Davini, {\em Smooth approximation of weak Finsler metrics.} Differential Integral Equations 18 (5) (2005) 509-530. \vspace{-0.3cm} \bibitem{dmv} F. Dragoni, J. J. Manfredi and D. Vittone, {\em Weak Fubini property and infinity harmonic functions in Riemannian and sub-Riemannian manifolds}, Trans. Amer. Math. Soc. Volume 365, Number 2, (2013), 837-859. \vspace{-0.3cm} \bibitem{f44} K. O. Friedrichs, {\em The identity of weak and strong extensions of differential operators.} Trans. Amer. Math. Soc. 55 (1944), 132-151. \vspace{-0.3cm} \bibitem{fhk99} B. Franchi, P. Haj{\l}asz, P. Koskela. {\em Definitions of Sobolev classes on metric spaces.} Annales de l'Institut Fourier, Volume 49 (1999) no. 6, pp. 1903-1924. \vspace{-0.3cm} \bibitem{FSS} B. Franchi, R. Serapioni, F. Serra Cassano, {\em Meyers-Serrin type theorems and relaxation of variational integrals depending on vector fields.} Houston J. Math. 22 (1996), no. 4, 859-890. \vspace{-0.3cm} \bibitem{FSS97} B. Franchi, R. Serapioni, F. Serra Cassano, {\em Approximation and imbedding theorems for weighted Sobolev spaces associated with Lipschitz continuous vector fields}. Boll. Un. Mat. Ital. (7) 11-B (1997), 83-117. \vspace{-0.3cm} \bibitem{flw} R. L. Frank, D. Lenz, D. Wingert, {\em Intrinsic metrics for non-local symmetric Dirichlet forms and applications to spectral theory} Journal of Functional Analysis, Volume 266, Issue 8, 2014, Pages 4765-4808. \vspace{-0.3cm} \bibitem{gn96} N. Garofalo and D. M. 
Nhieu, {\em Isoperimetric and Sobolev inequalities for Carnot-Carath\'{e}odory} spaces and the existence of minimal surfaces, Comm. Pure Appl. Math. 49 (1996), 1081-1144. \vspace{-0.3cm} \bibitem{gn} N. Garofalo, D. Nhieu, {\em Lipschitz continuity, global smooth approximations and extension theorems for Sobolev functions in Carnot-Carath\'{e}odory spaces}. J. Anal. Math. 74 (1998), 67-97. \vspace{-0.3cm} \bibitem{gwy} R. Gariepy, C. Y. Wang, and Y. Yu, {\em Generalized cone comparison principle for viscosity solutions of the Aronsson equation and absolute minimizers.} Comm. Partial Differential Equations {\bf 31} (2006), 1027-1046. \vspace{-0.3cm} \bibitem{gxy} C. Y. Guo, C. Xiang, D. Yang, {\em $L^\infty$-variational problems associated to measurable Finsler structures.} Nonlinear Analysis. 132 (2015), 126-140. \vspace{-0.3cm} \bibitem{HK} P. Hajlasz, P. Koskela, {\em Sobolev met Poincar\'{e}}. Mem. Amer. Math. Soc. 145 (2000), no. 688. \vspace{-0.3cm} \bibitem{hkst} J. Heinonen, P. Koskela, N. Shanmugalingam and J. T. Tyson, {\em Sobolev spaces on metric measure spaces: an approach based on upper gradients}, Cambridge Studies in Advanced Mathematics Series, Cambridge University Press, 2015. \vspace{-0.3cm} \bibitem{j93} R. Jensen, {\em Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient}. Arch. Ration. Mech. Anal. {\bf 123} (1993), 51-74. \vspace{-0.3cm} \bibitem{jwy} R. Jensen, C. Y. Wang, and Y. Yu, {\em Uniqueness and nonuniqueness of viscosity solutions to Aronsson's equation.} Arch. Ration. Mech. Anal. {\bf 190} (2008), no. 2, 347-370. \vspace{-0.3cm} \bibitem{j86} D. Jerison, {\em The Poincar\'{e} inequality for vector fields satisfying H\"{o}rmander's condition}, Duke Math. J. 53 (1986), 503-523. \vspace{-0.3cm} \bibitem{js87} D. Jerison, A. Sanchez-Calle, {\em Subelliptic, second order differential operators. In: Complex analysis, III (College Park, Md., 1985-86)}. pp. 46-77, Lecture Notes in Math., 1277, Springer, 1987. \vspace{-0.3cm} \bibitem{j98} P. Juutinen, {\em Minimization problems for Lipschitz functions via viscosity solutions}, Ann. Acad. Sci. Fenn. Math. Diss. No. 115 (1998), 53 pp. \vspace{-0.3cm} \bibitem{j02} P. Juutinen, {\em Absolutely minimizing Lipschitz extensions on a metric space}, Ann. Acad. Sci. Fenn. Math. 27 (2002), no. 1, 57-67. \vspace{-0.3cm} \bibitem{js} P. Juutinen and N. Shanmugalingam, {\em Equivalence of AMLE, strong AMLE, and comparison with cones in metric measure space. } Math. Nachr. 279, (2006), 1083-1098. \vspace{-0.3cm} \bibitem{ksz} P. Koskela, N. Shanmugalingam, and Y. Zhou, {\em $L^\infty$-Variational problem associated to Dirichlet forms.} Math. Res. Lett. {\bf 19} (2012), 1263-1275. \vspace{-0.3cm} \bibitem{kz} P. Koskela, Y. Zhou, {\em Geometry and analysis of Dirichlet forms.} Adv. Math. 231, (2012), 2755-2801. \vspace{-0.3cm} \bibitem{ksz2} P. Koskela, N. Shanmugalingam, and Y. Zhou, {\em Intrinsic geometry and analysis of diffusion process and $L^\infty$-variational problem. } Arch. Rational Mech. Anal. {\bf 214} (2014), no.1, 99-142. \vspace{-0.3cm} \bibitem{ls16} E. Le Donne and G. Speight {\em Lusin Approximation for Horizontal Curves in Step 2 Carnot Groups.} Calculus of Variations and Partial Differential Equations 55(5) (2016), 1-22. \vspace{-0.3cm} \bibitem{monti} R. Monti , F S .Cassano, {\em Surface measures in Carnot- Caratheodory spaces}. Calc.var.partial Differential Equations. {\bf 13} (2001), 339-376. \vspace{-0.3cm} \bibitem{nsw85} A. Nagel, E. M. 
Stein and S. Wainger, {\em Balls and metrics defined by vector fields I: basic properties}, Acta Math. 155 (1985), 103-147. \vspace{-0.3cm} \bibitem{p89} P. Pansu, {\em M\'{e}triques de Carnot-Carath\'{e}odory et quasiisom\'{e}tries des espaces sym\'{e}triques de rang un}, Annals of Mathematics 129 (1989), 1-60. \vspace{-0.3cm} \bibitem{l16} G. Speight, {\em Lusin Approximation and Horizontal Curves in Carnot Groups.} Revista Matematica Iberoamericana 32 (4) (2016), 1425-1446. \vspace{-0.3cm} \bibitem{sp} P. Stollmann, {\em A dual characterization of length spaces with application to Dirichlet metric spaces.} Stud. Math. 198, (2010), 221-233. \vspace{-0.3cm} \bibitem{sk} K. T. Sturm, {\em Analysis on local Dirichlet spaces. I. Recurrence, conservativeness and $L^p$-Liouville properties.} J. Reine Angew. Math. 456, (1994), 173-196. \vspace{-0.3cm} \bibitem{s97} K. T. Sturm, {\em Is a diffusion process determined by its intrinsic metric?} Chaos Solitons Fractals 8 (1997), 1855-1860. \vspace{-0.3cm} \bibitem{wcy} C. Y. Wang, {\em The Aronsson equation for absolute minimizers of $L^\infty$ functionals associated with vector fields satisfying H\"ormander's conditions}. Trans. Amer. Math. Soc. 359 (2007), 91-113. \end{thebibliography} \bigskip \noindent Jiayin Liu \noindent School of Mathematical Science, Beihang University, Changping District Shahe Higher Education Park South Third Street No. 9, Beijing 102206, P. R. China {\it and} \noindent Department of Mathematics and Statistics, University of Jyv\"{a}skyl\"{a}, P.O. Box 35 (MaD), FI-40014, Jyv\"{a}skyl\"{a}, Finland \noindent{\it E-mail }: \texttt{[email protected]} \bigskip \noindent Yuan Zhou \noindent School of Mathematical Sciences, Beijing Normal University, Haidian District Xinjiekou Waidajie No.19, Beijing 100875, P. R. China \noindent {\it E-mail }: \texttt{[email protected]} \end{document}
2205.10156v1
http://arxiv.org/abs/2205.10156v1
A note on the maximum number of $k$-powers in a finite word
\documentclass[runningheads,envcountsame,a4paper]{llncs} \usepackage{amssymb,amsmath} \usepackage{wasysym} \usepackage{graphicx} \usepackage{epsfig} \usepackage{latexsym} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{csquotes} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{tikz} \usetikzlibrary{arrows,shapes.geometric,positioning,calc} \usepackage[colorinlistoftodos]{todonotes} \usepackage{changepage} \usepackage{subfig} \usepackage{soul} \usepackage{hyperref} \usepackage{todonotes} \usepackage{enumerate} \newtheorem{thm}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \newtheorem{exa}[thm]{Example} \newtheorem{defn}[thm]{Definition} \newtheorem{lemd}[thm]{Lemma and Notation} \newcommand{\QED}{\usebox{\qed} \par\medskip} \newsavebox{\qedB} \newcommand{\D}{\mathcal{D}} \newcommand{\LL}{\mathcal{L}} \sbox{\qedB}{\setlength{\unitlength}{1mm} \begin{picture}(4,4)(0,0) \thinlines {\put(0,0){\framebox(2.83,2.83){}}} {\put(1.17,1.17){\framebox(2.83,2.83){}}} {\put(0,0){\framebox(4,4){}}} {\put(1.17,1.17){{\rule{1ex}{1ex} }}} \end{picture}} \newcommand{\QEDB}{\ifmmode\def\next{\tag"\usebox{\qedB}"} \else\let\next=\relax {\unskip\nobreak\hfil\penalty50 \hskip2em\hbox{}\nobreak\hfil\usebox{\qedB} \next} \newcommand{\bib}{thebibliography} \newcommand{\nl}{\par\medskip\noindent} \newcommand\mbx[1]{\makebox[10pt]{#1}} \newcommand{\Alphabet}{\hbox{\rm Alph}} \newcommand{\Ind}{\hbox{\rm Ind}} \newcommand{\Pref}{\hbox{\rm Pref}} \newcommand{\Suff}{\hbox{\rm Suff}} \newcommand{\USS}{\hbox{\rm USS}} \newcommand{\fac}{\mathrm{Fac}} \newcommand{\Sp}{\hbox{\rm Sp}} \newcommand{\Class}{\mathrm{Class}} \newcommand{\Index}{\mathrm{Index}} \newcommand{\Pal}{\hbox{\rm Pal}} \newcommand{\Powers}{\mathrm{Powers}} \newcommand{\Dim}{\hbox{\rm dim}} \newcommand{\Num}{\hbox{\rm Num}} \newcommand{\RS}{\hbox{\rm RS}} \newcommand{\Res}{\mathrm{MaxPow}} \renewcommand{\mp}{\mathrm{mp}} \newcommand{\Squares}[1]{\hbox{\rm Sq}\left(#1\right)} \newcommand{\Overlaps}{\hbox{\rm Overlaps}} \newcommand{\Lpc}{\hbox{\rm Lpc}} \newcommand{\Card}{\hbox{\rm Card}} \newcommand{\Last}{\hbox{\rm Last}} \newcommand{\First}{\hbox{\rm First}} \newcommand{\map}{\longrightarrow} \newcommand{\donne}{\longmapsto} \newcommand{\vphi}{\varphi} \newcommand{\N}{\mathbb N} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \newcommand{\prim}{\mathrm{Prim}} \newcommand{\bprop}{\begin{prop}} \newcommand{\eprop}{\end{prop}} \newcommand{\bcor}{\begin{cor}} \newcommand{\ecor}{\end{cor}} \newcommand{\blem}{\begin{lem}} \newcommand{\elem}{\end{lem}} \newcommand{\Ex}{\par\noindent{\bf Example}. } \newcommand{\Exs}{\par\noindent{\bf Examples}. } \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\comment}[1]{{ \begin{center} \fbox{ \begin{minipage}[h]{0.9\textwidth} \textsf{#1} \end{minipage} } \end{center} }} \title{A note on the maximum number of $k$-powers in a finite word} \author{Shuo Li\inst{1}, Jakub Pachocki\inst{2}\fnmsep\thanks{Author participated in this research while a student at University of Warsaw, Poland.}, Jakub Radoszewski\inst{3}\fnmsep\thanks{Supported by the Polish National Science Center, grant number 2018/31/D/ST6/03991.}} \institute{Laboratoire de Combinatoire et d'Informatique Mathématique,\\ Université du Québec \`a Montréal,\\ CP 8888 Succ. 
Centre-ville, Montréal (QC) Canada H3C 3P8\\ \email{[email protected]} \and Open AI, San Francisco, CA, USA\\ \email{[email protected]} \and Institute of Informatics, University of Warsaw, Poland\\ \email{[email protected]} } \begin{document} \maketitle \input{Abstract} \input{Introduction} \input{Number-of-k-powers} \input{lower} \bibliographystyle{splncs03} \bibliography{biblio} \end{document} \begin{abstract} A \emph{power} is a word of the form $\underbrace{uu...u}_{k \; \text{times}}$, where $u$ is a word and $k$ is a positive integer; the power is also called a {\em $k$-power} and $k$ is its {\em exponent}. We prove that for any $k \ge 2$, the maximum number of different non-empty $k$-power factors in a word of length $n$ is between $\frac{n}{k-1}-\Theta(\sqrt{n})$ and $\frac{n-1}{k-1}$. We also show that the maximum number of different non-empty power factors of exponent at least 2 in a length-$n$ word is at most $n-1$. Both upper bounds generalize the recent upper bound of $n-1$ on the maximum number of different square factors in a length-$n$ word by Brlek and Li (2022). \end{abstract} \section{Introduction} Let $k$ be an integer greater than $1$, the {\em $k$-power} (or simply the {\em power}) of a word $u$ is a word of the form $\underbrace{uu...u}_{k \; \text{times}}$. Here $k$ is called the {\em exponent} of the power. We consider only powers of non-empty words. A factor (subword) of a word is its fragment consisting of a number of consecutive letters. In this paper, we investigate the bounds for the maximum number of different $k$-power factors in a word of length $n$. This subject is one of the fundamental topics in combinatorics on words~\cite{lothaire3}. For any pair of positive integers $(n,k)$ with $k >1$, let $N(n,k)$ denote the maximum number of different non-empty $k$-powers that can appear as factors of a word of length $n$. For 2-powers (squares), the bounds for $N(n,2)$ were studied by many authors; see~\cite{FraenkelS98,Ilie,lam,DezaFT15,thie,Brlekli}. The best known lower bound from \cite{FraenkelS98} and a very recent upper bound from \cite{Brlekli} match up to sublinear terms: $$n-o(n) \leq N(n,2) \leq n-1.$$ Actually, one can check that the lower bound from \cite{FraenkelS98} is of the form $n-\Theta(\sqrt{n})$. For $k=3$, it was proved in~\cite{KUBICA} that $$\frac12 n-2\sqrt{n} \leq N(n,3) \leq \frac{4}{5}n.$$ More generally, for $k\geq3$, it was studied in~\cite{li2022} and proved that $$N(n,k) \leq \frac{n-1}{k-2},$$ with the same notation as above. Further in \cite{KUBICA} it was shown that the maximum number of different factors of a word of length $n$ being powers of exponent {\em at least} 3 is $n-2$. In this article, we generalize the methods provided in~\cite{Brlekli} and~\cite{KUBICA} to give an upper and a lower bound for the number of different $k$-powers in a finite word. The main result is announced as follows: \begin{thm} \label{th:main} Let $k$ be an integer greater than 1. For any integer $n>1$, let $N(n,k)$ denote the maximum number of different $k$-powers being factors of a word of length $n$. Then we have $$\frac{n}{k-1}-\Theta(\sqrt{n})\leq N(n,k) \leq \frac{n-1}{k-1}.$$ \end{thm} We also show the following result. It implies, in particular, that a word that contains powers of exponent greater than 2 has fewer squares than $n-1$. \begin{thm} \label{th:main2} The maximum number of different factors in a word of length $n$ being powers of exponent at least 2 is $n-1$. 
\end{thm} \section{Preliminaries} Let us first recall the basic terminology related to words. By a {\em word} we mean a finite concatenation of symbols $w = w_1 w_2 \cdots w_{n}$, with $n$ being a non-negative integer. The {\em length} of $w$, denoted $|w|$, is $n$ and we say that the symbol $w_i$ is at the {\em position} $i$. The set $\Alphabet(w)=\left\{w_i| 1\leq i \leq n\right\}$ is called the {\em alphabet} of $w$ and its elements are called {\em letters}. Let $|\Alphabet(w)|$ denote the cardinality of $\Alphabet(w)$. A word of length $0$ is called the {\em empty word} and it is denoted by $\varepsilon$. \emph{Concatenation} of two words $u$, $v$ is denoted as $uv$. A word $u$ is called a {\em factor} of a word $w$ if $w = pus$ for some words $p$, $s$; $u$ is called a {\em prefix} ({\em suffix}) of $w$ if $p=\varepsilon$ ($s=\varepsilon$, respectively). The set of all factors of a word $w$ is denoted by $\fac(w)$. Two words $u$ and $v$ {\em conjugate} when there exist words $x,y$ such that $u=xy$ and $v=yx$. The conjugacy class of a word $v$ is denoted by $[v]$. If $v=v_1v_2\cdots v_m$ is word, then for any $i \in \{1,\ldots,m\}$, we define $v_p(i)=v_1v_2\cdots v_i$ and $v_{s}(i)=v_{i+1}v_{i+2}\cdots v_{m}$. Thus, $[v]=\left\{v_s(i)v_p(i), i=1,2,\dots,m\right\}$. For any positive integer $k$, we define the {\em $k$-power} (or simply a {\em power}) of a word $u$ to be the concatenation of $k$ copies of $u$, denoted by $u^k$. Here $k$ is the {\em exponent} of the power. A word $w$ is said to be {\em primitive} if it is not a power of another word, that is, if $w=u^k$ implies $k=1$. For any word $w$, there is exactly one primitive word $u$ such that $w=u^k$ for integer $k \geq 1$; the word $u$ is called the \emph{primitive root} of word $w$, see~\cite{lothaire1}. Furthermore, two words that conjugate are either both primitive or none of them is (\cite{lothaire1}). For a given word $w$, let $N_k(w)$ denote the number of different non-empty $k$-power factors of $w$ and $\prim(w)$ denote all primitive factors of $w$. For any word $u$ and any rational number $\alpha$, the \emph{$\alpha$-power} of $u$ is defined to be $u^au_0$ where $u_0$ is a prefix of $u$, $a$ is the integer part of $\alpha$, and $|u^au_0|=\alpha |u|$. The $\alpha$-power of $u$ is denoted by $u^{\alpha}$. If $\alpha$ is a rational number greater than 1 and there exists a word $u$ such that $w=u^\alpha$, then the word $w$ is said to have a \emph{period} $|u|$. \begin{lem}[Fine and Wilf~\cite{FineWilf}] \label{perlemma} Let $w$ be a word having $p$ and $q$ for periods. If $|w| \geq p + q - \gcd(p, q)$, then $\gcd(p, q)$ is also a period of $w$. \end{lem} \section{Lower bound for $N(n,k)$} We show a family of binary words which yields a lower bound of $\frac{n}{k-1}-\Theta(\sqrt{n})$ for the number of different factors which are $k$-powers, for an integer $k \geq 2$. For integers $i \geq 1$ and $k \geq 2$ we denote $$q^{(k)}_i = (\mathtt{1}\mathtt{0}^i)^{k-1}.$$ Let $r^{(k)}_m$ be the concatenation $$r^{(k)}_m = q^{(k)}_1 q^{(k)}_2 \cdots q^{(k)}_m \mathtt{10}^m.$$ E.g., for $k=2$, we obtain the family of words: $$\mathtt{1010},\ \mathtt{10100100},\ \mathtt{1010010001000},\ \mathtt{1010010001000010000},\ldots$$ and for $k=3$, the family: $$\mathtt{101010},\ \mathtt{1010100100100},\ \mathtt{1010100100100010001000},\ldots$$ \begin{lem}\label{l:lenqn} The length of $r^{(k)}_m$ is $(k-1)\left(\frac{m^2}{2} + \frac{3m}{2}\right) + m+1$. 
\end{lem} \begin{proof} The length of $q^{(k)}_i$ is $(k-1)(i+1)$, so $$|r^{(k)}_m| = \left(\sum_{i=1}^m (k-1)(i+1)\right)+m+1 = (k-1)\left(\frac{m^2}{2} + \frac{3m}{2}\right) + m+1.\ \ \qed$$ \end{proof} \begin{lem}\label{l:cubesqn} $N_k(r^{(k)}_m) \geq \frac{m^2}2 + \frac{m}2 + \left\lfloor\frac{m}k\right\rfloor$. \end{lem} \begin{proof} Let us note that for a positive integer $i$, the concatenation $\mathtt{0}^{i-1}q^{(k)}_i\mathtt{1}\mathtt{0}^i = \mathtt{0}^{i-1} (\mathtt{1}\mathtt{0}^i)^k$ contains as factors all the $k$-powers that conjugate with the $k$-power $(\mathtt{0}^i\mathtt{1})^k$ that are different from this $k$-power. Let us note that this concatenation is a factor of $r^{(k)}_m$ for each $i \in \{1,\ldots,m\}$. Indeed, for $i \in \{1,\ldots,m\}$, the factor $q^{(k)}_i$ in $r^{(k)}_m$ is preceded by $\mathtt{0}^{i-1}$ (for $i=1$ this is the empty string, and otherwise it is a suffix of $q^{(k)}_{i-1}$) and followed by $\mathtt{1}\mathtt{0}^i$ (for $i<m$ it is a prefix of $q^{(k)}_{i+1}$, and for $i=m$ it is a suffix of $r^{(k)}_m$). Additionally, in $r^{(k)}_m$ there are $\left\lfloor\frac{m}k\right\rfloor$ unary $k$-powers $\mathtt{0}^k,\mathtt{0}^{2k},\ldots$ In total we obtain $$\left(\sum_{i=1}^m i\right) + \left\lfloor\frac{m}k\right\rfloor = \frac{m^2}2 + \frac{m}2 + \left\lfloor\frac{m}k\right\rfloor$$ $k$-powers, all pairwise different. \qed\end{proof} \begin{thm}[Lower Bound]~ \label{lower} Let $k\geq 2$ be an integer. For infinitely many positive integers $n$ there exists a word $w$ of length $n$ for which $N_k(w) > \frac{n}{k-1}-2.2\sqrt{n}$. \end{thm} \begin{proof} Due to Lemmas~\ref{l:lenqn} and~\ref{l:cubesqn}, for any word $r^{(k)}_m$ we have: \begin{align*} \frac{|r^{(k)}_m|}{k-1} - N_k(r^{(k)}_m) &\leq \frac{m^2}2 + \frac{3m}2 + \frac{m+1}{k-1} - \frac{m^2}2 - \frac{m}2 - \left\lfloor\frac{m}{k}\right\rfloor \\ &= m + \frac{m+1}{k-1} - \left\lfloor\frac{m}{k}\right\rfloor \leq m + \frac{m+1}{k-1} - \frac{m-k+1}{k}\\ &=m + \frac{m+1}{k(k-1)} + 1 \leq m+\frac{m+1}{2}+1 = \frac32m + \frac32. \end{align*} This value is smaller than $c \sqrt{|r^{(k)}_m|}$ for $c^2 \ge \frac92$; indeed, in this case, we have: $$\left(\frac32m + \frac32\right)^2 = \frac94m^2 + \frac92m + \frac94 < c^2\left(\frac12m^2 + \frac52m + 1\right) \leq c^2|r^{(k)}_m|.$$ Hence, for $c \ge 2.2$ we conclude that: $$ \frac{|r^{(k)}_m|}{k-1} - N_k(r^{(k)}_m) < c\sqrt{|r^{(k)}_m|} \quad\Rightarrow\quad N_k(r^{(k)}_m) > \frac{|r^{(k)}_m|}{k-1} - c\sqrt{|r^{(k)}_m|}.\ \ \qed $$ \end{proof} \paragraph{\bf Note.} For $k=2$ we obtain a family of words containing $n-o(n)$ different squares that is simpler than the example by Fraenkel and Simpson~\cite{FraenkelS98}: we concatenate the words $q^{(2)}_i = \mathtt{10}^i$ whereas they concatenate the words $q'_i = \mathtt{0}^{i+1}\mathtt{10}^i\mathtt{10}^{i+1}\mathtt{1}$. \begin{proof}[of Theorem~\ref{th:main}] It is a direct consequence of Theorems~\ref{upper} and~\ref{lower}.\ \ \qed \end{proof} \section{Rauzy graphs of a finite word} In this section, we recall the notion of Rauzy graph and some results obtained in~\cite{Brlekli}. Let $w$ be a word of length $n$. For any integer $l \in \{1,\ldots,n\}$, let $L_l(w)$ be the set of all length-$l$ factors of $w$. 
For any integer $l \in \{1,\ldots,n\}$, let the Rauzy graph $\Gamma_l(w)$ be an oriented graph whose set of vertices is $L_l(w)$ and the set of edges is $L_{l+1}(w)$ (here $L_{n+1}(w)=\emptyset$); an edge $e \in L_{l+1}(w)$ starts at the vertex $u$ and ends at the vertex $v$, if $u$ is a prefix and $v$ is a suffix of $e$. Let us define $\Gamma(w)=\cup_{l=1}^{n}\Gamma_l(w)$. Let $\Gamma_l(w)$ be a Rauzy graph of $w$. A sub-graph in $\Gamma_l(w)$ is called an {\em elementary circuit} if there are $j$ distinct vertices $v_1,v_2,\dots, v_j$ and $j$ distinct edges $e_1,e_2,\dots,e_j$ for some integer $j$, such that for each $t$ with $1 \leq t \leq j-1$, the edge $e_t$ starts at $v_t$ and ends at $v_{t+1}$, and for the edge $e_j$, it starts at $v_j$ and ends at $v_1$; further, $j$ is called the {\em size} of the circuit. The {\em small circuits} in the graph $\Gamma_l(w)$ are those elementary circuits whose sizes are no larger than $l$. \begin{lemd}[Brlek and Li~\cite{Brlekli}] \label{small-cycle} Let $w$ be a word and let $\Gamma_l(w)$ be a Rauzy graph of $w$ for some $l \in \{1,\ldots,|w|\}$. Then for any small circuit $C$ on $\Gamma_l(w)$, there exists a unique primitive word $q$, up to conjugacy, such that $|q| \leq l$ and the vertex set of $C$ is $\left\{p^{\frac{l}{|p|}}| p \in [q]\right\}$ and its edge set is $\left\{p^{\frac{l+1}{|p|}}| p \in [q]\right\}$. Further, each small circuit can be identified by an associated primitive word $q$ and an integer $l$ such that $\Gamma_l(w)$ is the Rauzy graph in which the circuit is located. Let each small circuit be denoted by $C(q,r)$ with the parameters defined as above. \end{lemd} \begin{lem}[Brlek and Li~\cite{Brlekli}] \label{bound} Let $w$ be a word. Then there are at most $|w| -|\Alphabet(w)|$ small circuits in $\Gamma(w)$. \end{lem} \section{Upper bound for $N(n,k)$} Let $w$ be a word and let $v \in \prim(w)$. A factor $u \in \fac(w)$ is said to be {\em in the class of factor $v$} if there is a (primitive) word $y \in [v]$ and an integer $p \geq 2$ such that $u=y^{p}$. Let $\Class_w(v)$ denote the set of all factors in the class of $v$. By $|\Class_w(v)|$ we denote the cardinality of $\Class_w(v)$. For a factor $v$ of $w$, let us define $m_w(v)=\max\left\{n| v^{n} \in \fac(w), n \in \mathbb{N^+} \right\}$. Now given $\Class_w(v)$, let us define its {\em index} to be an integer $\Index_w(v)$ such that $\Index_w(v)=\max\left\{m_w(u)|u \in [v]\right\}$. From the definition, the elements in $\Class_w(v)$ are all of the form $v_s(i)v^{j-1}v_p(i)$ with $1\leq i \leq |v|$ and $1\leq j \leq \Index_w(v)$. By $\prim'(w)$ we denote the set of primitive words $v$ such that $v^{n}$ is in the class of $v$, where $n$ is the index of this class. In other words, $$\prim'(w) = \{v \in \prim(w)| v^{\Index_w(v)} \in \Class_w(v)\}.$$ For $v \in \prim'(w)$, let $\Res_w(v)=\left\{u^{\Index_w(v)}| u \in [v] \right\} \cap \fac(w)$ and $\mp_w(v)$ denote the cardinality of $\Res_w(v)$. \begin{example}\label{exclass} Let $v=\mathtt{00001}$ and let us consider the following class of size 8 for some unspecified word $w$: \begin{align*} \Class_w(v)=\{&(\mathtt{00001})^2,(\mathtt{00010})^2,(\mathtt{00100})^2,(\mathtt{01000})^2,(\mathtt{10000})^2,\\ &(\mathtt{00001})^3,(\mathtt{00100})^3,(\mathtt{01000})^3\}. \end{align*} In this case, $\Index_w(v)=3$, $\Res_w(v)=\{(\mathtt{00001})^3,(\mathtt{00100})^3,(\mathtt{01000})^3\}$, and $\mp_w(v)=3$. Moreover, the only words that conjugate with $v$ in $\prim'(w)$ are $\mathtt{00001},\mathtt{00100},\mathtt{01000}$. 
\end{example} \begin{lem} \label{classes} Let $u$ and $v$ be primitive words. If words $u$ and $v$ conjugate, then $\Class_w(u)=\Class_w(v)$. Otherwise, classes $\Class_w(u)$ and $\Class_w(v)$ are disjoint. \end{lem} \begin{proof} The first part of the statement is obvious. Assume to the contrary that $y \in \Class_w(u) \cap \Class_w(v)$ for primitive words $u$ and $v$. This means that there exist words $u' \in [u]$ and $v' \in [v]$ and integers $k,t>1$ such that $y = (u')^k = (v')^t$. Word $y$ has periods $|u'|$ and $|v'|$, so by Lemma~\ref{perlemma} it has period $p=\gcd(|u'|,|v'|)$. If $p < |u'|$ ($p<|v'|$, respectively), then $p$ would divide $|u'|$ ($|v'|$, respectively); consequently, $u'$ (respectively $v'$) would not be primitive. Hence, $p=|u'|=|v'|$, so $u'=v'$ and $u$ and $v$ conjugate. \qed\end{proof} In this section we give an upper bound for $N_k(w)$. The strategy is as follows: first, we compute the exact number of powers in each class $\Class_w(v)$ of $w$; second, we prove that there exists an injection from $\cup_{v \in \prim'(w)} \Class_w(v)$ to the set of small circuits in $\Gamma(w)$; third, we conclude by using the proprieties of Rauzy graphs introduced in the previous section. \begin{lem} \label{number} Let $w$ be a word and $v \in \prim'(w)$. If $\Index_w(v) \geq 2$, then we have $|\Class_w(v)|=|v|(\Index_w(v)-2)+\mp_w(v)$. Further, we have $\Class_w(v)=\left\{u^k| u \in [v], 2\leq k \leq \Index_w(v)-1\right\} \cup \Res_w(v)$. \end{lem} \begin{proof} We only need to prove that for any $u \in [v]$ and any integer $k$ satisfying $2\leq k \leq \Index_w(v)-1$, $u^k \in \fac(w)$. We can easily check that $u^k \in \fac(v^{k+1})$. However, from the hypothesis, $k+1 \leq \Index_w(v)$ and $v^{\Index_w(v)} \in \fac(w)$, thus, $v^{k+1} \in \fac(w)$. Consequently, $u^k \in \fac(w)$. \qed \end{proof} Example~\ref{excycle} below shows the main ideas from the proof of the following lemma for a concrete class. \begin{lem} \label{cycle} Let $w$ be a word and $v \in \prim'(w)$ such that $|v|=l$ and $\Index_w(v)\geq 2$. For any integer $t$ satisfying $1 \leq t \leq |\Class_w(v)|$, there exists a small circuit $C(v,t+l-1)$ in the Rauzy graph $\Gamma_{t+l-1}(w)$. Hence, there exists a bijective function $f_v$ which associates each word in $\Class_w(v)$ to a small circuit in the set $\left\{C(v, t+l-1)|1 \leq t \leq |\Class_w(v)|\right\}$. \end{lem} \begin{proof} To prove the existence of the circuit $C(v, t+l-1)$ for any integer $t \in \{1,\ldots,|\Class_w(v)|\}$, it is enough to prove that $$S_t:=\left\{u^{\frac{t+l}{l}}|u \in [v]\right\} \subset \fac(w).$$ If this is the case, then there exists a circuit in $\Gamma_{t+l-1}(w)$ such that its edge set is $S_t$; further, it can be identified by $C(v, t+l-1)$. If integers $t',t$ satisfy $1 \leq t' < t \leq |\Class_w(v)|$, then each word in $S_{t'}$ is a prefix of a word in $S_t$. Hence, it is enough to prove that $S_t \subset \fac(w)$ for $t=|\Class_w(v)|$. From Lemma~\ref{number}, we have $t=l(\Index_w(v)-2)+\mp_w(v)$, so $\frac{t+l}{l}=\Index_w(v)-1+\frac{\mp_w(v)}{l}$. Let $i=\Index_w(v)$ and $j=\mp_w(v)$. For any $u^{i-1+\frac{j}{l}} \in S_t$, the word $u^{i-1+\frac{j}{l}}=u^{i-1}u_p(j)$ is a factor of the word $u_s(m) u^{i-1} u_p(m)=(u_s(m)u_p(m))^i$ for all $m \in \{j,\ldots,l\}$. Hence, there are at most $j-1$ distinct words $y$ that conjugate with $u$ such that $u^{i-1+\frac{j}{l}} \not\in \fac(y^{i})$. In particular, there are at most $j-1$ distinct words $y^i\in \Res_w(v)$ which do not contain $u^{i-1+\frac{j}{l}}$ as a factor. 
However, there are exactly $j$ elements in $\Res_w(v)$, so there exists at least one word in $\Res_w(v)$ containing $u^{i-1+\frac{j}{l}}$. Thus, $S_t \subset \fac(w)$. The existence of the bijective function $f_v$ follows from the fact that the cardinalities of $\Class_w(v)$ and $\left\{C(v, t+|v|-1)|1 \leq t \leq |\Class_w(v)|\right\}$ are the same.\qed \end{proof} \begin{example}\label{excycle} Let us consider the class from Example~\ref{exclass}. We need to show that all words in $S_8=\left\{u^{\frac{13}{5}}|u \in [v]\right\}$, for $v=\mathtt{00001}$, are factors of $w$. Let us consider $u^{\frac{13}{5}}=(\mathtt{00010})^{\frac{13}{5}}=\mathtt{0001000010000} \in S_8$. It is a factor of three words $y^3$, where $u$ and $y$ conjugate: \begin{align*} (\mathtt{10000})^3&=\mathtt{10} \underline{\mathtt{0001000010000}}\\ (\mathtt{00001})^3&=\mathtt{0} \underline{\mathtt{0001000010000}} \mathtt{1}\\ (\mathtt{00010})^3&=\underline{\mathtt{0001000010000}} \mathtt{10} \end{align*} and is not a factor of the two remaining such words. The set $\Res_w(v)$ contains three words and indeed $u^{\frac{13}{5}}$ is a factor of $y^3$ for $y$ being one of them, namely $y=\mathtt{00001}$. \end{example} For a word $w$, let $\Powers(w)$ denote the set of all powers of exponent at least 2 that are factors of $w$, i.e.\ $\Powers(w)=\{u^t|t \ge 2, u^t \in \fac(w)\}$. \begin{lem} \label{inj} There exists an injective function $f$ from the set $\Powers(w)$ to the set of small circuits in $\Gamma(w)$. \end{lem} \begin{proof} Each power factor of $w$ of exponent at least 2 belongs to some class. Hence, $\Powers(w) = \cup_{v \in \prim'(w)} \Class_w(v)$. For any $v \in \prim'(w)$, from Lemma~\ref{cycle}, there is a bijection $f_v$ from $\Class_w(v)$ to $\left\{C(v,t+|v|-1)|1 \leq t \leq |\Class_w(v)| \right\}$. Let us define the function $f$ as follows: for any $\Class_w(v)$, we set $f|_{\Class_w(v)}=f_v$ with $f_v$ defined as above. This function is well defined by Lemma~\ref{classes}. Now we prove that $f$ is injective. Let $y,z$ be two powers such that $f(y)=f(z)=C(v,t+|v|-1)$ for some $v$ and $t$. In this case, $y,z$ are both in $\Class_w(v)$. However, for a given class $\Class_w(v)$, $f_v$ is bijective, thus $y=z$. \qed \end{proof} \begin{proof}[of Theorem~\ref{th:main2}] The theorem can be stated as: $|\Powers(w)| \leq |w|-1$ for any word $w$. It is a direct consequence of Lemmas~\ref{inj} and~\ref{bound}.\ \ \qed \end{proof} \begin{thm}[Upper Bound]~\label{upper}\\ Let $k$ be an integer greater than 1. For any word $w$, we have $$N_k(w) \leq \frac{|w|-|\Alphabet(w)|}{k-1}.$$ Consequently, for any integer $n \geq 1$, we have $$N(n,k) \leq \frac{n-1}{k-1}.$$ \end{thm} \begin{proof} To each $k$-power factor in $\Powers(w)$ we can assign at least $k-2$ powers in the set $\Powers(w)$ that are not $k$-powers. More precisely, if the $k$-power factor is $v^{kp}$, for a positive integer $p$ and a primitive word $v$, then the words $v^{kp-1},\dots,v^{kp-k+2}$ are elements of $\Powers(w)$ and are not $k$-powers by uniqueness of primitive roots. (If $p>1$, we could also assign $v^{kp-k+1}$ to $v^{kp}$; however, for $p=1$ this would be a 1-power.) Moreover, this way the sets of powers assigned to different $k$-powers are disjoint. By Lemmas~\ref{inj} and~\ref{bound}, $$N_k(w)\leq \frac{|\Powers(w)|}{k-1} \leq \frac{|w|-|\Alphabet(w)|}{k-1}.\quad \qed$$ \end{proof}
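\paragraph{\bf Computational check.} The construction and the bounds above are easy to test by brute force. The following short Python sketch (not part of the argument; the function names \texttt{q\_word}, \texttt{r\_word} and \texttt{count\_k\_powers} are chosen here only for illustration) builds the words $r^{(k)}_m$ of Section 3, counts their distinct non-empty $k$-power factors directly, and compares the result with the length formula of Lemma~\ref{l:lenqn}, the lower bound of Lemma~\ref{l:cubesqn}, and the upper bound of Theorem~\ref{upper}.
\begin{verbatim}
# Brute-force check of the lower-bound construction r^(k)_m and of the bounds
# N_k(w) >= m(m+1)/2 + floor(m/k) and (k-1) * N_k(w) <= |w| - 1.
# The function names below are illustrative and not taken from the paper.

def q_word(i, k):
    # q^{(k)}_i = (1 0^i)^(k-1)
    return ("1" + "0" * i) * (k - 1)

def r_word(m, k):
    # r^{(k)}_m = q^{(k)}_1 q^{(k)}_2 ... q^{(k)}_m 1 0^m
    return "".join(q_word(i, k) for i in range(1, m + 1)) + "1" + "0" * m

def count_k_powers(w, k):
    # N_k(w): number of different non-empty factors of w that are k-powers.
    found = set()
    n = len(w)
    for start in range(n):
        for length in range(k, n - start + 1, k):
            f = w[start:start + length]
            if f == f[:length // k] * k:
                found.add(f)
    return len(found)

if __name__ == "__main__":
    for k in (2, 3, 4):
        for m in range(1, 8):
            w = r_word(m, k)
            n = len(w)
            assert n == (k - 1) * m * (m + 3) // 2 + m + 1   # length formula
            nk = count_k_powers(w, k)
            assert nk >= m * (m + 1) // 2 + m // k           # lower bound
            assert (k - 1) * nk <= n - 1                     # upper bound
            print(f"k={k}, m={m}: |w|={n}, N_k(w)={nk}")
\end{verbatim}
For instance, for $k=2$ and $m=2$ the word is $\mathtt{10100100}$ and the sketch reports $N_2=4$ (the squares $\mathtt{00}$, $\mathtt{1010}$, $\mathtt{010010}$ and $\mathtt{100100}$), which attains the bound of Lemma~\ref{l:cubesqn}.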
2205.09929v2
http://arxiv.org/abs/2205.09929v2
On Thakur's basis conjecture for multiple zeta values in positive characteristic
\documentclass[12pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage[all]{xy} \usepackage{longtable} \usepackage[mathscr]{eucal} \usepackage{xcolor} \usepackage[bookmarks=false]{hyperref} \usepackage{multirow,bigdelim} \usepackage{nccmath} \usepackage{url} \usepackage{ulem} \usepackage{comment} \usepackage{mathrsfs} \usepackage{arydshln} \setlength{\topmargin}{0truein} \setlength{\headheight}{.35truein} \setlength{\headsep}{.25truein} \setlength{\textheight}{9.25truein} \setlength{\footskip}{.25truein} \setlength{\oddsidemargin}{0truein} \setlength{\evensidemargin}{0truein} \setlength{\textwidth}{6.5truein} \setlength{\voffset}{-0.625truein} \setlength{\hoffset}{0truein} \newtheorem{theorem}[equation]{Theorem} \newtheorem{lemma}[equation]{Lemma} \newtheorem{proposition}[equation]{Proposition} \newtheorem{corollary}[equation]{Corollary} \newtheorem{conjecture}[equation]{Conjecture} \newtheorem{definition}[equation]{Definition} \newtheorem{question}[equation]{Question} \newtheorem{claim}[equation]{Claim} \newtheorem{theorem-n}{Theorem} \newtheorem{claim-n}[theorem-n]{Claim} \newtheorem{lemma-n}[theorem-n]{Lemma} \newtheorem{proposition-n}[theorem-n]{Proposition} \theoremstyle{definition} \newtheorem{definition-n}[theorem-n]{Definition} \newtheorem{example}[equation]{Example} \theoremstyle{remark} \newtheorem{remark}[equation]{Remark} \newtheorem{remark-n}[theorem-n]{Remark} \numberwithin{equation}{subsection} \allowdisplaybreaks[1] \newcommand{\FF}{\mathbb{F}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\RR}{\mathbb{R}} \newcommand{\LL}{\mathbb{L}} \newcommand{\TT}{\mathbb{T}} \newcommand{\GG}{\mathbb{G}} \newcommand{\EE}{\mathbb{E}} \newcommand{\CC}{\mathbb{C}} \newcommand{\PP}{\mathbb{P}} \newcommand{\KK}{\mathbb{K}} \newcommand{\MM}{\mathbb{M}} \newcommand{\NN}{\mathbb{N}} \newcommand{\A}{\mathbb{A}} \newcommand{\Fq}{\mathbb{F}_{q}} \newcommand{\Fp}{\mathbb{F}_{p}} \newcommand{\ba}{\mathbf{a}} \newcommand{\bb}{\mathbf{b}} \newcommand{\be}{\mathbf{e}} \newcommand{\bff}{\mathbf{f}} \newcommand{\bg}{\mathbf{g}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bk}{\mathbf{k}} \newcommand{\bm}{\mathbf{m}} \newcommand{\bn}{\mathbf{n}} \newcommand{\bA}{\mathbf{A}} \newcommand{\bS}{\mathbf{S}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bX}{\mathbf{X}} \newcommand{\bu}{\mathbf{u}} \newcommand{\bv}{\mathbf{v}} \newcommand{\bp}{\mathbf{p}} \newcommand{\bs}{\mathbf{s}} \newcommand{\bC}{\mathbf{C}} \newcommand{\bF}{\mathbf{F}} \newcommand{\bG}{\mathbf{G}} \newcommand{\bH}{\mathbf{H}} \newcommand{\bz}{\mathbf{z}} \newcommand{\bV}{\mathbf{V}} \newcommand{\bi}{\mathbf{i}} \newcommand{\bj}{\mathbf{j}} \newcommand{\bw}{\mathbf{w}} \newcommand{\by}{\mathbf{y}} \newcommand{\bM}{\mathbf{M}} \newcommand{\bW}{\mathbf{W}} \newcommand{\bZ}{\mathbf{Z}} \newcommand{\bone}{\mathbf{1}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cV}{\mathcal{V}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\IT}{\mathcal{I}^{\mathrm{T}}} \newcommand{\ITw}{\mathcal{I}^{\mathrm{T}}_{w}} \newcommand{\IND}{\mathcal{I}^{\mathrm{ND}}} 
\newcommand{\INDz}{\mathcal{I}^{\mathrm{ND}_{0}}} \newcommand{\INDw}{\mathcal{I}^{\mathrm{ND}}_{w}} \newcommand{\INDzw}{\mathcal{I}^{\mathrm{ND}_{0}}_{w}} \newcommand{\sI}{\mathscr{I}} \newcommand{\sL}{\mathscr{L}} \newcommand{\sLL}{\mathscr{L}^{\Li}} \newcommand{\sLZ}{\mathscr{L}^{\zeta}} \newcommand{\sLB}{\mathscr{L}^{\bullet}} \newcommand{\sDB}{\mathscr{D}^{\bullet}} \newcommand{\sz}{*^{\zeta}} \newcommand{\sLi}{*^{\Li}} \newcommand{\SB}{S^{\bullet}} \DeclareMathAlphabet{\matheur}{U}{eur}{m}{n} \newcommand{\eC}{\matheur{C}} \newcommand{\eF}{\matheur{F}} \newcommand{\eL}{\matheur{L}} \newcommand{\eR}{\matheur{R}} \newcommand{\eZ}{\matheur{Z}} \newcommand{\fA}{\mathfrak{A}} \newcommand{\fI}{\mathfrak{I}} \newcommand{\fJ}{\mathfrak{J}} \newcommand{\fL}{\mathfrak{L}} \newcommand{\fF}{\mathfrak{F}} \newcommand{\fP}{\mathfrak{P}} \newcommand{\fQ}{\mathfrak{Q}} \newcommand{\fR}{\mathfrak{R}} \newcommand{\fs}{\mathfrak{s}} \newcommand{\fm}{\mathfrak{m}} \newcommand{\fz}{\mathfrak{z}} \newcommand{\fh}{\mathfrak{h}} \newcommand{\ff}{\mathfrak{f}} \newcommand{\fg}{\mathfrak{g}} \newcommand{\fk}{\mathfrak{k}} \newcommand{\ft}{\mathfrak{t}} \newcommand{\fn}{\mathfrak{n}} \newcommand{\fu}{\mathfrak{u}} \newcommand{\fx}{\mathfrak{x}} \newcommand{\fy}{\mathfrak{y}} \newcommand{\sA}{\mathscr{A}} \newcommand{\sB}{\mathscr{B}} \newcommand{\sC}{\mathscr{C}} \newcommand{\sR}{\mathscr{R}} \newcommand{\sT}{\mathscr{T}} \newcommand{\sU}{\mathscr{U}} \newcommand{\sX}{\mathscr{X}} \newcommand{\sUB}{\mathscr{U}^{\bullet}} \newcommand{\sBC}{\mathscr{BC}} \newcommand{\rB}{\mathrm{B}} \newcommand{\rP}{\mathrm{P}} \newcommand{\rT}{\mathrm{T}} \newcommand{\rtr}{\mathrm{tr}} \newcommand{\rpr}{\mathrm{pr}} \newcommand{\rh}{\mathrm{h}} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Lie}{Lie} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\Mat}{Mat} \DeclareMathOperator{\Cent}{Cent} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\DR}{DR} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\trdeg}{tr.deg} \DeclareMathOperator{\wt}{wt} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\Li}{Li} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\adj}{adj} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\dep}{dep} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\Coker}{Coker} \DeclareMathOperator{\den}{den} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\pr}{pr} \DeclareMathOperator{\Frac}{Frac} \DeclareMathOperator{\len}{len} \DeclareMathOperator{\Supp}{Supp} \DeclareMathOperator{\Coeff}{Coeff} \DeclareMathOperator{\Init}{Init} \newcommand{\oA}{\overline{A}} \newcommand{\oF}{\overline{F}} \newcommand{\ok}{\overline{k}} \newcommand{\oK}{\overline{K}} \newcommand{\oL}{\overline{L}} \newcommand{\oU}{\overline{U}} \newcommand{\oX}{\overline{X}} \newcommand{\omu}{\overline{\mu}} \newcommand{\okv}{\overline{k_{v}}} \newcommand{\od}{\overline{d}} \newcommand{\okinf}{\overline{k_{\infty}}} \newcommand{\adm}{\mathrm{adm}} \newcommand{\sep}{\mathrm{sep}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\hPhi}{\widehat{\Phi}} \newcommand{\hPsi}{\widehat{\Psi}} \newcommand{\tpi}{\widetilde{\pi}} \newcommand{\ttheta}{\widetilde{\theta}} \newcommand{\tPsi}{\widetilde{\Psi}} \newcommand{\oFqt}{\overline{\FF_q(t)}} 
\newcommand{\iso}{\stackrel{\sim}{\to}} \newcommand{\Ga}{\GG_{\mathrm{a}}} \newcommand{\Gm}{\GG_{\mathrm{m}}} \newcommand{\Lis}{\Li^{\star}} \newcommand{\tLis}{\widetilde{\Li}^{\star}} \newcommand{\fLis}{\mathfrak{Li}^{\star}} \newcommand{\fLi}{\mathfrak{Li}} \newcommand{\zetaAmot}{\zeta_{A}^{\text{\tiny{\rm{mot}}}}} \newcommand{\smfrac}[2]{{\textstyle \frac{#1}{#2}}} \newcommand{\power}[2]{{#1 [\![ #2 ]\!]}} \newcommand{\laurent}[2]{{#1 (\!( #2 )\!)}} \newcommand{\Rep}[2]{\mathbf{Rep}(#1,#2)} \newcommand{\brac}[2]{\genfrac{\{}{\}}{0pt}{}{#1}{#2}} \newcommand{\g}[1]{\mbox{\boldmath$#1$}} \newcommand{\comma}{\raisebox{3.0pt}{ , }} \newcommand{\Phit}{\widetilde{\Phi}} \newcommand{\PhiMPL}[1]{\Phi^{\mathrm{MPL}}_{#1}} \newcommand{\MMPL}[1]{M^{\mathrm{MPL}}_{#1}} \newcommand{\GMPL}[1]{G^{\mathrm{MPL}}_{#1}} \newcommand{\bvMPL}[1]{\bv^{\mathrm{MPL}}_{#1}} \newcommand{\bVMPL}[1]{\bV^{\mathrm{MPL}}_{#1}} \newcommand{\PhiMZV}[1]{\Phi^{\mathrm{MZV}}_{#1}} \newcommand{\PhitMZV}[1]{\widetilde{\Phi}^{\mathrm{MZV}}_{#1}} \newcommand{\MMZV}[1]{M^{\mathrm{MZV}}_{#1}} \newcommand{\GMZV}[1]{G^{\mathrm{MZV}}_{#1}} \newcommand{\bvMZV}[1]{\bv^{\mathrm{MZV}}_{#1}} \newcommand{\bVMZV}[1]{\bV^{\mathrm{MZV}}_{#1}} \newcommand{\bVdMZV}[1]{\bV^{\mathrm{'MZV}}_{#1}} \newcommand{\bVddMZV}[1]{\bV^{\mathrm{''MZV}}_{#1}} \newcommand{\red}[1]{{\color{red} #1}} \newcommand{\blue}[1]{{\color{blue} #1}} \definecolor{ForestGreen}{rgb}{0.0, 0.5, 0.0} \newcommand{\green}[1]{{\color{ForestGreen}#1}} \newcommand{\cn}{{\color{red}n}} \newcommand{\dn}{{\color{blue}n}} \newcommand{\size}[1]{\overline{\left|#1\right|}} \newcommand{\Var}{\mathrm{Var}} \newcommand{\Eq}{\mathrm{Eq}} \newcommand{\Size}{\mathrm{Size}} \newcommand{\zetaC}{\zeta^{\mathbf{C}}} \newcommand{\SC}[1]{S^{\mathbf{C}}_{#1}} \newcommand{\cLC}{\mathcal{L}^{\mathbf{C}}} \makeatletter \newcommand{\xequal}[2][]{\ext@arrow 0055{\equalfill@}{#1}{#2}} \def\equalfill@{\arrowfill@\Relbar\Relbar\Relbar} \makeatother \title [On Thakur's basis conjecture for multiple zeta values]{On Thakur's basis conjecture for multiple zeta values in positive characteristic} \author{Chieh-Yu Chang} \address{Department of Mathematics, National Tsing Hua University, Hsinchu City 30042, Taiwan R.O.C.} \email{[email protected]} \author{Yen-Tsung Chen} \address{Department of Mathematics, National Tsing Hua University, Hsinchu City 30042, Taiwan R.O.C.} \email{[email protected]} \author{Yoshinori Mishiba} \address{Department of Mathematical Sciences, University of the Ryukyus, 1 Senbaru, Nishihara-cho, Okinawa 903-0213, Japan} \email{[email protected]} \thanks{The firs and second authors are partially supported by MOST Grant 107-2628-M-007-002-MY4. The third author was supported by JSPS KAKENHI Grant Number JP18K13398.} \keywords{Multiple zeta values, Carlitz multiple polylogarithms, Thakur's basis conjecture, Todd's dimension conjecture} \subjclass[2010]{Primary 11R58, 11J93} \date{\today} \begin{document} \maketitle \begin{abstract} In this paper, we study multiple zeta values (abbreviated as MZV's) over function fields in positive characteristic. Our main result is to prove Thakur's basis conjecture, which plays the analogue of Hoffman's basis conjecture for real MZV's. As a consequence, we derive Todd's dimension conjecture, which is the analogue of Zagier's dimension conjecture for classical real MZV's. 
\end{abstract} \section{Introduction} \subsection{Classical conjectures} In this paper, we study multiple zeta values (abbreviated as MZV's) over function fields in positive characteristic introduced by Thakur~\cite{T04}. Our motivation arises from Zagier's dimension conjecture and Hoffman's basis conjecture for classical real MZV's. The special value of the Riemann $\zeta$-function at positive integer $s\geq 2$ is the following series \[\zeta(s):=\sum_{n=1}^{\infty} \frac{1}{n^{s}}\in \RR^{\times}. \] Classical real MZV's are generalizations of the special $\zeta$-values above. It was initiated by Euler on double zeta values and fully generalized by Zagier~\cite{Za94} in the 1990s. An {\it{admissible index}} is an $r$-tuple of positive integer $\fs=(s_{1},\ldots,s_{r})\in \ZZ_{>0}^{r}$ with $s_{1}\geq 2$. The real MZV at $\fs$ is defined by the following multiple series \[ \zeta(\fs):=\sum_{n_{1}> \cdots > n_{r}\geq 1} \frac{1 }{n_{1}^{s_{1}} \cdots n_{r}^{s_{r}} } \in \RR^{\times}. \] We call $\wt(\fs):=\sum_{i=1}^{r}s_{i}$ and $\dep(\fs):=r$ the weight and depth of the presentation $\zeta(\fs)$ respectively. Over the past decades, the study of real MZV's has attracted many researchers' attention as MZV's have many interesting and important connections with various topics. For example, MZV's occur as periods of mixed Tate motives by Terasoma~\cite{Te02}, Goncharov~\cite{Gon02} and Deligne-Goncharov~\cite{DG05}, and MZV's of depth two have close connection with modular forms by Gangl-Kaneko-Zagier~\cite{GKZ06} etc. For more details and relevant references, we refer the reader to the books~\cite{An04, Zh16, BGF19}. By the theory of regularized double shuffle relations~\cite{R02, IKZ06}, there are rich $\QQ$-linear relations among the same weight MZV's. One core problem on this topic is Zagier's following dimension conjecture: \begin{conjecture}[Zagier's dimension conjecture]\label{Con:Zagier} For an integer $w\geq 2$, we let $\mathfrak{Z}_w$ be the $\QQ$-vector space spanned by real MZV's of weight $w$. We put $d_{0}:=1$, $d_{1}:=0$, $d_{2}:=1$ and $d_{w}:=d_{w-2}+d_{w-3}$ for integers $w\geq 3$. Then for each integer $w\geq 2$, we have \[ {\rm{dim}}_{\QQ}\mathfrak{Z}_{w}=d_{w} . \] \end{conjecture} The best known result towards Zagier's dimension conjecture until now has been the {\it{upper bound}} result proved by Terasoma~\cite{Te02} and Goncharov~\cite{Gon02} independently. Namely, they showed that ${\rm{dim}}_{\QQ}\mathfrak{Z}_{w}\leq d_{w}$ for all integers $w\geq 2$. Due to numerical computation, Hoffman~\cite{Ho97} proposed the following conjectural basis for $\mathfrak{Z}_{w}$ for each $w\geq 2$. \begin{conjecture}[Hoffman's basis conjecture]\label{Con:Hoffman} For an integer $w\geq 2$, we let $\mathcal{I}_{w}^{\rm{H}}$ be the set of admissible indices $\fs=(s_{1},\ldots,s_{r})$ with $s_{i}\in \left\{2,3 \right\}$ satisfying $\wt(\fs)=w$. Then the following set \[\mathcal{B}_{w}^{\rm{H}}:=\left\{ \zeta(\fs)|\fs\in \mathcal{I}_{w}^{\rm{H}} \right\}\] is a basis of the $\QQ$-vector space $\mathfrak{Z}_{w}$. \end{conjecture} We mention that Hoffman's basis conjecture implies Zagier's dimension conjecture. In~\cite{Br12}, Brown proved Hoffman's basis conjecture for {\it{motivic}} MZV's. As there is a surjective map from motivic MZV's to real MZV's, Brown's theorem implies that Hoffman's conjectural basis would be a generating set for $\mathfrak{Z}_{w}$, and as a consequence the upper bound result of Terasoma and Goncharov would be derived. 
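We note that the implication from Conjecture~\ref{Con:Hoffman} to Conjecture~\ref{Con:Zagier} is a counting statement: every index in $\mathcal{I}_{w}^{\rm{H}}$ either starts with $2$ and continues with an index in $\mathcal{I}_{w-2}^{\rm{H}}$, or starts with $3$ and continues with an index in $\mathcal{I}_{w-3}^{\rm{H}}$, so $|\mathcal{I}_{w}^{\rm{H}}|$ satisfies the same recurrence as $d_{w}$ and hence $|\mathcal{I}_{w}^{\rm{H}}|=d_{w}$ for all $w\geq 2$. The following short Python sketch (the function names are ours and purely illustrative) enumerates the Hoffman indices of small weight and checks this count against the recurrence of Conjecture~\ref{Con:Zagier}.
\begin{verbatim}
# Illustrative check (names are ours): the number of Hoffman indices of weight w,
# i.e. tuples (s_1,...,s_r) with every s_i in {2,3} and total weight w, equals
# the conjectural dimension d_w with d_0=1, d_1=0, d_2=1, d_w=d_{w-2}+d_{w-3}.

def zagier_d(w):
    d = [1, 0, 1]
    for n in range(3, w + 1):
        d.append(d[n - 2] + d[n - 3])
    return d[w]

def hoffman_indices(w):
    # All tuples with entries in {2, 3} summing to w; all of them are
    # admissible, since every entry is at least 2.
    if w == 0:
        return [()]
    out = []
    for s in (2, 3):
        if s <= w:
            out.extend((s,) + rest for rest in hoffman_indices(w - s))
    return out

if __name__ == "__main__":
    for w in range(2, 13):
        assert len(hoffman_indices(w)) == zagier_d(w)
        print(w, zagier_d(w))   # 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12 for w = 2,...,12
\end{verbatim}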
Our main results in this paper are to prove the analogues of Conjecture~\ref{Con:Zagier} and Conjecture~\ref{Con:Hoffman} in the function fields setting. \subsection{The main results} In the positive characteristic setting, we let $A:=\FF_{q}[\theta]$ the polynomial ring in the variable $\theta$ over a finite field $\FF_{q}$ of $q$ elements, where $q$ is a power of a prime number $p$. We let $k$ be the field of fractions of $A$, and $|\cdot|_{\infty}$ be the normalized absolute value on $k$ at the infinite place $\infty$ of $k$ for which $|\theta|_{\infty}=q$. Let $k_{\infty}:=\laurent{\FF_{q}}{1/\theta}$ be the completion of $k$ with respect to $|\cdot|_{\infty}$. We then fix an algebraic closure $\overline{k_{\infty}}$ of $k_{\infty}$ and still denote by $|\cdot|_{\infty}$ the extended absolute value on $\overline{k_{\infty}}$. We let $\CC_{\infty}$ be the completion of $\overline{k_{\infty}}$ with respect to $|\cdot|_{\infty}$. Finally, we let $\ok$ be the algebraic closure of $k$ inside $\CC_{\infty}.$ Let $A_{+}$ be the set of monic polynomials of $A$, which plays an analogous role to the set of positive integers now in the function field setting. In~\cite{T04}, Thakur introduced the following positive characteristic MZV's: for any $r$-tuple of positive integers $\fs=(s_{1},\ldots,s_{r})\in \ZZ_{>0}^{r}$, \begin{equation}\label{E:MZV's} \zeta_{A}(\fs):=\sum_{a_{1},\cdots, a_{r}\in A_{+}} \frac{1 }{a_{1}^{s_{1}} \cdots a_{r}^{s_{r}} } \in k_{\infty} \end{equation} with the restriction that $|a_{1}|_{\infty}>\cdots >|a_{r}|_{\infty}$. As our absolute value $|\cdot|_{\infty}$ is non-archimedean, the series $\zeta_{A}(\fs)$ converges in $k_{\infty}$. However, Thakur~\cite{T09} showed that such series are in fact non-vanishing. The weight and the depth of the presentation $\zeta_{A}(\fs)$ are defined to be $\wt(\fs):=\sum_{i=1}^{r}s_{i}$ and $\dep(\fs):=r$ respectively. When $\dep(\fs)=1$, these values are called Carlitz zeta values as initiated by Carlitz in~\cite{Ca35}. As from now on we focus on MZV's in positive characteristic, in what follows MZV's will be Thakur's ($\infty$-adic) MZV's. In~\cite{T10}, Thakur established a product on MZV's, which is called $q$-shuffle relation in this paper. Namely, the product of two MZV's can be expressed as an $\FF_{p}$-linear combination of MZV's whose weights are the same. It follows that the $k$-vector space $\cZ$ spanned by all MZV's form an algebra. Note that $\ok$-algebraic relations among MZV's are $\ok$-linear relations among monomials of MZV's, which can be expressed as $\ok$-linear relations among MZV's by $q$-shuffle relations mentioned above. In~\cite{C14}, the first named author of the present paper proved that all $\ok$-linear relations come from $k$-linear relations among MZV's of the same weight. Therefore, determination of the dimension of the $k$-vector space of MZV's of weight $w$ for $w\in \ZZ_{>0}$ is the central problem in this topic. Via numerical computation, Todd~\cite{To18} provided the following analogue of Zagier's dimension conjecture. \begin{conjecture}[Todd's dimension conjecture]\label{Todd's Conj} For any positive integer $w$, let $\cZ_{w}$ be the $k$-vector space spanned by all MZV's of weight $w$. Define \[ d_{w}' = \begin{cases} 2^{w-1} & \textnormal {if $1\leq w < q$}, \\ 2^{q-1}-1 & \textnormal{if $w=q$},\\ \sum_{i=1}^{q} d'_{w-i} & \textnormal{if $w >q$}. 
\end{cases} \] Then one has \[ \dim_{k} \cZ_{w}=d_{w}'.\] \end{conjecture} In analogy with Hoffman's basis conjecture, Thakur~\cite{T17} proposed the following conjecture. \begin{conjecture}[Thakur's basis conjecture]\label{Thakur's Conj} Let $w$ be a positive integer, and let $\ITw$ be the set consisting of all $\fs= (s_{1}, \ldots, s_{r} )\in \ZZ_{>0}^{r}$ (for varying positive integers $r$) for which \begin{itemize} \item $\wt(\fs)=w$; \item $s_{i} \leq q$ for all $1 \leq i \leq r - 1$; \item $s_{r}<q$. \end{itemize} Then the following set \[\mathcal{B}_{w}^{\rm{T}}:=\left\{\zeta_{A}(\fs)| \fs\in \ITw \right\} \] is a basis of the $k$-vector space $\cZ_{w}$. \end{conjecture} We mention that for every positive integer $w$, the cardinality of $\IT_w$ is equal to $d_{w}'$ given in Conjecture~\ref{Todd's Conj}. It follows that, as in the classical case, Thakur's basis conjecture implies Todd's dimension conjecture. The main result of this paper is to prove Thakur's basis conjecture. \begin{theorem}\label{T:Main Thm} For any positive integer $w$, Conjecture~\ref{Thakur's Conj} is true. As a consequence, Conjecture~\ref{Todd's Conj} is also true. \end{theorem} As we have determined the dimension of $\cZ_{w}$ for each $w\in \NN$, it is natural to ask how to describe all the $k$-linear relations among the MZV's of weight $w$. We establish a concrete and simple mechanism in Theorem~\ref{T: DetermineLR}, showing that~\eqref{E:Main Relations} account for all the $k$-linear relations. Note that Theorem~\ref{T: DetermineLR} basically verifies the $\sB^{*}$-version of~\cite[Conjecture~5.1]{To18}. \begin{remark} In~\cite{ND21}, Ngo Dac showed that for each $w\geq 1$, $\mathcal{B}_{w}^{\rm{T}}$ is a generating set for the $k$-vector space $\cZ_{w}$. As a consequence of Ngo Dac's result, one has the {\it{upper bound result}}: \[ \dim_{k}\cZ_{w}\leq d_{w}',\ \forall w\geq 1. \] Theoretically, to prove Thakur's basis conjecture it suffices to show that his conjectural basis is linearly independent over $k$. However, when developing our approaches in a unified framework, we reprove Ngo Dac's result mentioned above in Corollary~\ref{Cor:generating set}, which also provides a generating set for the $k$-vector space spanned by the special values $\Li_{\fs}(\bone)$ for all indices $\fs$ with $\wt(\fs)=w$; these special values are described in the next section. \end{remark} \begin{remark} As mentioned above, by~\cite{C14} $k$-linear independence of MZV's implies $\ok$-linear independence. It follows that for each positive integer $w$, $\mathcal{B}_{w}^{\rm{T}}$ is also a basis for the $\ok$-vector space spanned by the MZV's of weight $w$. \end{remark} Given a finite place $v$ of $k$, we mention that $v$-adic MZV's $\zeta_{A}(\fs)_{v}$ were introduced in~\cite{CM21} for $\fs\in \ZZ_{>0}^{r}$. By~\cite[Thm.~1.2.2]{CM21}, there is a natural $k$-linear map from ($\infty$-adic) MZV's to $v$-adic MZV's, and it was further shown that the map is indeed an algebra homomorphism with kernel containing the ideal generated by $\zeta_{A}(q-1)$ since $\zeta_{A}(q - 1)_{v} = 0$ by \cite{Go79}. As a consequence of Theorem~\ref{T:Main Thm}, we have the following upper bound result for $v$-adic MZV's. \begin{corollary} Let $v$ be a finite place of $k$. For a positive integer $w$, we let $\cZ_{v,w}$ be the $\ok$-vector space spanned by the $v$-adic MZV's of weight $w$ defined in~\cite{CM21}. Then we have \[ \dim_{k} \cZ_{v,w}\leq d_{w}'-d_{w-(q-1)}', \] where $d_{0}':=1$ and $d_{n}':=0$ for $n< 0$.
\end{corollary} \subsection{Strategy of proofs} The key ingredient of our proof of Theorem~\ref{T:Main Thm} is to switch the study of MZV's to that of the special values of Carlitz multiple polylogarithms (abbreviated as CMPL's) $\Li_{\fs}$ for $\fs\in \ZZ_{>0}^{r}$ defined by the first named author in~\cite{C14}. For the definition of $\Li_{\fs}$, see~\eqref{E:CMPL}. Note that CMPL's are higher depth generalization of the Carlitz polylogarithms initiated by Anderson-Thakur~\cite{AT90}. Based on the interpolation formula of Anderson-Thakur~\cite{AT90, AT09}, one knows from~\cite{C14} that $\zeta_{A}(\fs)$ can be expressed as a $k$-linear combinations of CMPL's at some integral points, and in the particular case for $\fs \in \ITw$, one has the simple identity \eqref{E:sL=zeta} that \[\zeta_{A}(\fs) =\Li_{\fs}(\bone), \] where $\bone := (1,\ldots,1) \in \ZZ_{>0}^{\dep(\fs)}$. Given a positive integer $w$, we let $\INDw$ be the set consisting of all tuples of positive integers $\fs = (s_{1}, \ldots, s_{r})$ with $\wt(\fs)=w$ and $q \nmid s_{i}$ for all $1 \leq i \leq r$, and note that $\INDw$ was used and studied in~\cite{ND21}. The overall arguments in the proof of Theorem~\ref{T:Main Thm} are divided into the following two parts: \begin{itemize} \item [(I)] We show in Corollary~\ref{Cor:generating set} that the two sets $ \mathcal{B}_{w}^{\Li,\rm{T}}:=\left\{\Li_{\fs}| \fs\in \ITw \right\}$ and $\mathcal{B}_{w}^{T}$ are generating sets of the $k$-vector space $\cZ_{w}$ for every positive integer $w$ (the latter one was known by Ngo Dac~\cite{ND21}), and further show in Theorem~\ref{theorem-generator} that the set $\left\{ \Li_{\fs}(\bone)| \fs\in \INDw \right\}$ is a generating set for $\cZ_{w}$. \item [(II)] We prove in Theorem~\ref{theorem-IND-basis} that $\left\{ \Li_{\fs}(\bone)| \fs\in \INDw \right\}$ is a linearly independent set over $k$. Since $|\ITw|=|\INDw|$ (see~Proposition~\ref{Pop:|ITw|=|INDw|}), it follows that $\mathcal{B}_{w}^{T}$ is a $k$-basis of $\cZ_{w}$. \end{itemize} Regarding the part (I) above, we try to abstract and unify our methods to have a wide scope. We consider a formal $k$-space $\cH$ generated by indices, on which the harmonic product $*^{\Li}$ and $q$-shuffle product $*^{\zeta}$ are defined. We also consider the power/truncated sums maps arising from $\Li_{\fs}(\bone)$ and $\zeta(\fs)$. We then define the maps $\sLL, \sLZ \colon \cH\rightarrow k_{\infty}$ as the CMPL and MZV-realizations. These realizations preserve the harmonic product and $q$-shuffle product respectively, illustrating the stuffle relations~\cite{C14} for CMPL's and the $q$-shuffle relations~\cite{T10, Ch15} for MZV's in a formal framework. See Theorem~\ref{P:product of sLbullet}. Then we adopt the ideas and methods rooted in~\cite{To18, ND21} to achieve Theorem~\ref{theorem-algo}, which enables us to show the results mentioned in (I). Since some essential arguments are from those in~\cite{ND21}, we leave the detailed proofs in the appendix. However, under such abstraction the maps $\sUB$ given in Definition~\ref{def-sU} are explicit. In particular, Theorem~\ref{theorem-algo}~(2) allows us to provide a concrete and effective way to express $\Li_{\fs}(\bone)$ (resp.~$\zeta_{A}(\fs)$) as linear combinations in terms of $\mathcal{B}_{w}^{\Li,\rm{T}}$ (resp.~$\mathcal{B}_{w}^{T}$) simultaneously. 
This is more concrete and explicit than those developed in~\cite{ND21}, and enables us to establish a simple mechanism in Theorem~\ref{T: DetermineLR} that generates all the $k$-linear relations among MZV's of the same weight (cf.~the $\sB^{*}$-version of~\cite[Conjecture~5.1]{To18}). Concerning the second part (II), the primary tool that we use is the Anderson-Brownawell-Papanikolas criterion~\cite[Thm.~3.1.1]{ABP04} (abbreviated as ABP-criterion), which has very strong applications in transcendence theory in positive characteristic. We prove (II) by induction on $w$. We start with a linear equation \begin{equation}\label{E:IntrodtionLE} \sum_{\fs\in \INDw }\alpha_{\fs}(\theta) \Li_{\fs}(\bone)=0,\ \hbox{ for } \alpha_{\fs}=\alpha_{\fs}(t)\in \FF_{q}(t), \ \forall \fs\in \INDw, \end{equation} and aim to show that $\alpha_{\fs}=0$ for all $\fs\in \INDw$. We outline our arguments as below. \begin{enumerate} \item[(II-1)] Using the period interpretation~\cite[(3.4.5)]{C14} for special values of CMPL's (inspired by~\cite{AT09} for MZV's), we can construct a system of difference equations $\widetilde{\psi}^{(-1)}=\widetilde{\Phi} \widetilde{\psi}$ fitting into the conditions of ABP-criterion for which~\eqref{E:IntrodtionLE} can be expressed as a $k$-linear identity among the entries of $\widetilde{\psi}(\theta)$. We then apply the arguments in the proof of~\cite[Thm.~2.5.2]{CPY19}, eventually we obtain the big system of Frobenius difference equations~\eqref{eq-Frob-w}, which is essentially from~\cite[(3.1.3)]{CPY19}. \item [(II-2)] Denote by $\sX_{w}$ the $\FF_{q}(t)$-vector space of the solution space of~\eqref{eq-Frob-w}. We then show in Theorem~\ref{theorem-dimension} that $\dim_{\FF_{q}(t)}\sX_{w}$ is either $1$ if $(q-1) \mid w$, or $0$ if $(q-1) \nmid w$. \item [(II-3)] Using the trick of~\cite{C14, CPY19} together with the induction hypothesis, we establish Lemma~\ref{lemma-rational}. By the first part of Lemma~\ref{lemma-rational}, we see that if $\alpha_{\fs} \neq 0$, then $\fs$ must be in $\INDzw$, which is given in~\eqref{E:INDzw}. Then we apply Lemma~\ref{lemma-rational}~(2) to conclude that there exists $(\varepsilon_{\fs})\in \sX_w$ for which $\alpha_{\fs}=\varepsilon_{\fs}$ for all $\fs \in \INDzw$. Note that the description of indices in $\INDzw$ arises from the simultaneously Eulerian phenomenon in~\cite[Cor.~4.2.3]{CPY19}, which was first witnessed by Lara Rodr\'iguez and Thakur in~\cite{LRT14}. \item [(II-4)] When $(q - 1) \nmid w$, we use the fact of this case that $\sX_{w}=0$ to conclude that $\varepsilon_{\fs}=0$ for every $\fs$, and hence $\alpha_{\fs}(\theta)=0$ for every $\fs$. When $(q-1) \mid w$, we apply Theorem~\ref{theorem-generator} together with the fact of this case that $\dim_{\FF_{q}(t)}\sX_{w}=1$. Having one trick further in the proof of Theorem~\ref{theorem-IND-basis} shows that $\alpha_{\fs}(\theta)=0$ for every $\fs$. \end{enumerate} \begin{remark} The above is the overall strategy of our proof. However, to save some length without affecting logical arguments, we avoid many details in II-(1). Instead, we directly go to~\eqref{E:IntrodtionLE} in this paper. \end{remark} \subsection{Organization of the paper} In Sec.~\ref{Sec: Indices}, we introduce the abstract $k$-vector space $\cH$, on which the harmonic product $*^{\Li}$ and $q$-shuffle product $*^{\zeta}$ are defined. The purpose of Sec.~\ref{Sec: Indices} is to derive the product formulae, stated as Proposition~\ref{P:product of sLbullet}, for the CMPL and MZV realizations $\sLL$ and $\sLZ$. 
In Sec.~\ref{Sec: Generators} we follow \cite{To18, ND21} to transfer their $\sB, \sC, \sBC$ maps into our formal framework in a concrete way. The primary result of this section is to establish Theorem~\ref{theorem-generator}, which is an application of Theorem~\ref{theorem-algo}, whose detailed proof is given in the appendix. We study the specific system of Frobenius difference equations~\eqref{eq-Frob-w} mentioned in the (II) above in Sec.~\ref{Sec:FDE}. We carefully analyze the solution space $\sX_{w}$ of \eqref{eq-Frob-w}, and determine its dimension in Theorem~\ref{theorem-dimension}. In Sec.~\ref{Sec:Linear Indep}, we establish the key Lemma~\ref{lemma-rational}, which is used to prove the $k$-linear independence of $\left\{ \Li_{\fs}(\bone)| \fs\in \INDw \right\}$ in Theorem~\ref{theorem-IND-basis}. With these results at hand, we prove Theorem~\ref{T:Main Thm} in Sec.~\ref{Sec: Proof of Main Thm} as a short conclusion. Finally, combining Theorems~\ref{theorem-algo} and \ref{T:Main Thm} we give a proof of Theorem~\ref{T: DetermineLR} in Sec.~\ref{Sub:Generating set of linear relations}. As mentioned above, the appendix consists of a detailed proof of Theorem~\ref{theorem-algo}. \begin{remark} When this paper was nearly in its final version, we announced our results to Thakur and soon after Ngo Dac sent his paper \cite{IKLNDP22} to Thakur on the same day announcing that he and his coauthors show the same results for alternating MZV's. Our paper was finished a few days later than theirs. The key strategy of their proofs is in the same direction as ours. They switch the study of alternating MZV's to the $k$-vector space spanned by the following special values of CMPL's: \[\left\{ \Li_{\fs} (\gamma_{1},\ldots,\gamma_{r} )| \fs\in \ITw,\ (\gamma_{1},\ldots,\gamma_{r} )\in (\FF_{q^{q-1}}^{\times})^{r}\right\} .\] Note that any value in the set above is equal to an alternating MZV at $\fs$ up to an algebraic multiple (cf.~\cite[Prop.~2.12]{CH21}). However, by the first author's result~\cite[Thm.~5.4.3]{C14} showing that CMPL's at algebraic points form a $\ok$-graded algebra defined over $k$, one can remove the algebraic factor without affecting the study for the $k$-vector space of alternating MZV's. We find that some directions of our ideas and theirs are similar and the primary methods rooted in~\cite{ABP04, C14, To18, CPY19, ND21} are the same, but presentations and detailed arguments are different. \end{remark} \section{Two products on $\cH$} \subsection{Indices}\label{Sec: Indices} By an index, we mean the empty set $\emptyset$ or an $r$-tuple of positive integers $\fs=(s_{1},\ldots,s_{r})$. In the former case, its depth and weight are defined to be $\dep(\emptyset)=0$ and $\wt(\emptyset)=0$. The depth and weight of the latter case are defined to be $\dep(\fs)=r$ and $\wt(\fs)=\sum_{i=1}^{r}s_{i}$ respectively. We denote by $\cI := \bigsqcup_{r \geq 0} \ZZ_{\geq 1}^{r}$ the set of indices, where $\ZZ_{\geq 0}^{0}$ is referred to the empty set $\emptyset$. Throughout this paper, we adapt the following notations. \begin{align*} \cI_{w} &:= \{ \fn \in \cI \ | \ \wt(\fn) = w \} \ \ (w \geq 0), \\ \cI_{> 0} &:= \{ \fn \in \cI \ | \ \wt(\fn) > 0 \} = \cI \setminus \{ \emptyset \}, \\ \ITw &:= \{ (s_{1}, \ldots, s_{r} ) \in \cI_{w} \ | \ s_{i} \leq q \ (1 \leq \forall i \leq r - 1) \ \textrm{and} \ s_{r} < q \} \ \ (w > 0), \\ \INDw &:= \{ (s_{1}, \ldots, s_{r} ) \in \cI_{w} \ | \ q \nmid s_{i} \ (1 \leq \forall i \leq r) \} \ \ (w > 0), \\ \IT_{0} &:= \IND_{0} := \{ \emptyset \}. 
\end{align*} We mention that $\ITw$ refers to the set of indexes arising from Thakur's basis, and $\INDw$ is the set studied in~\cite{ND21}. For any subset $S$ of $\cI$, we denote by $|S|$ the cardinality of $S$ when no confusions arise. \begin{proposition}\label{Pop:|ITw|=|INDw|} For each weight $w\geq 0$, we have a bijection between $\INDw$ and $\ITw$, and hence $|\INDw|=|\ITw|$. \end{proposition} \begin{proof} The desired result follows from the following correspondence \begin{align*} (m_{1} q + n_{1}, \ldots, m_{r} q + n_{r}) \longleftrightarrow (q^{\{ m_{1} \}}, n_{1}, \ldots, q^{\{ m_{r} \}}, n_{r}) \ \ \ (m_{i} \geq 0, \ 1 \leq n_{i} \leq q - 1), \end{align*} where $q^{\{m\}}$ denotes the sequence $(q, \ldots, q) \in \ZZ_{\geq 1}^{m}$. It is understood that in the case of $m=0$, $q^{\left\{0 \right\}}$ is referred to the empty index $\emptyset$. \end{proof} Given $\ell$ indices $\fs_{1},\ldots,\fs_{\ell} \in \cI$ with $\fs_{i} = (s_{i1}, \ldots, s_{ir_{i}})$ ($1 \leq i \leq \ell$), we define \begin{align}\label{E:s1,..,sl} (\fs_{1}, \ldots \fs_{\ell}) := (s_{11}, \ldots, s_{1r_{1}}, s_{21}, \ldots, s_{2r_{2}}, \ldots, s_{\ell 1}, \ldots, s_{\ell r_{\ell}}) \end{align} to be the index obtained by putting the given indices consecutively. For each non-empty index $\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0}$, we define \begin{equation} \fs_{+} := (s_{1}, \ldots, s_{r - 1}) \ \textrm{and} \ \fs_{-} := (s_{2}, \ldots, s_{r}). \end{equation} In the depth one case, we note that $\fs_{+}=\emptyset$ and $\fs_{-}=\emptyset$. Let $\cH = \bigoplus_{w \geq 0} \cH_{w}$ be the $k$-vector space with basis $\cI$ graded by weight and let $\cH_{> 0} := \bigoplus_{w > 0} \cH_{w}$. For each index $\fs = (s_{1}, \ldots, s_{r}) \in \cI$, the corresponding generator in $\cH$ is denoted by $[\fs] = [s_{1}, \ldots, s_{r}]$ or $\fs$. For each $P = \sum_{\fs \in \cI} a_{\fs} [\fs] \in \cH$, the support of $P$ is defined by \begin{align*} \Supp(P) := \{ \fs \in \cI \ | \ a_{\fs} \neq 0 \}. \end{align*} \begin{definition}\label{Def:multi-linear[,]} Given a positive integer $\ell$, we define the following $k$-multilinear map \[ [-,-, \ldots,-]:\cH^{\oplus \ell}\rightarrow \cH \] as follows. For any indices $\fs_{1},\ldots,\fs_{\ell}\in \cI$, let $(\fs_{1},\ldots,\fs_{\ell})$ be given in~\eqref{E:s1,..,sl}. We define \[ [\fs_{1},\ldots,\fs_{\ell}]:= [(\fs_{1},\ldots,\fs_{\ell})]\in \cH. \] \end{definition} \begin{remark}\label{Rem: [s,0]=0} As the map above is multilinear, for any $\fs\in\cI$ we particularly have the following identity \[ [\fs,0]=0\in \cH \] which will be used in the appendix. As $\emptyset$ is also an index by our definition, we mention that \[[\fs,\emptyset] \neq [\fs,0]=0\in \cH. \] \end{remark} \subsection{The maps $\sLL$ and $\sLZ$} Recall that the Carlitz logarithm is given by \[\log_{C}(z):=\sum_{d\geq 0} \frac{z^{q^{d}}}{L_{d}} \in \power{k}{z} ,\] where $L_{0} := 1$ and $L_{d} := (\theta - \theta^{q}) \cdots (\theta - \theta^{q^{d}})$ for $d \geq 1$ (see~\cite{Go96, T04}), and the $n$th Carlitz polylogarithm defined by Anderson-Thakur~\cite{AT90} is the following power series \[ \Li_{n}(z):= \sum_{d \geq 0}\frac{z^{q^{d}}}{L_{d}^{n}}\in \power{k}{z}. 
\] For any index $\fs=(s_{1},\ldots,s_{r})\in \cI_{>0}$ of positive depth, the $\fs$th Carlitz multiple polylogarithm (abbreviated as CMPL) is given by the following series (see~\cite{C14}) \begin{equation}\label{E:CMPL} \Li_{\fs}(z_{1},\ldots,z_{r}):=\sum_{d_{1} > \cdots > d_{r} \geq 0} \dfrac{z_{1}^{q^{d_1}} \cdots z_{r}^{q^{d_r}}}{L_{d_{1}}^{s_{1}} \cdots L_{d_{r}}^{s_{r}}}\in \power{k}{z_{1},\ldots,z_{r}} . \end{equation} For each $s \in \ZZ_{\geq 1}$ and $d \in \ZZ_{\geq 0}$, we set \begin{align*} S^{\Li}_{d}(s) := \dfrac{1}{L_{d}^{s}}\in k \ \ \textrm{and} \ \ S^{\zeta}_{d}(s) := \sum_{a \in A_{+, d}} \dfrac{1}{a^{s}}\in k, \end{align*} where $A_{+, d}$ is the set of monic polynomials in $A$ of degree $d$. We then define $k$-linear maps $\sLB_{d}$, $\sLB_{< d}$ and $\sLB$ on $\cH$. \begin{definition}\label{Def:sLB} Let $\bullet \in \{ \Li, \zeta \}$ and $d \in \ZZ$. \begin{enumerate} \item The map $\sLB_{d} \colon \cH \to k$ is the $k$-linear map defined by \begin{align}\label{E:sLBd} \sLB_{d}(\fs) &:= \left\{ \begin{array}{cl} {\displaystyle \sum_{d = d_{1} > \cdots > d_{r} \geq 0}} \SB_{d_{1}}(s_{1}) \cdots \SB_{d_{r}}(s_{r}) & (\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0} \ \textrm{and} \ d \geq \dep(\fs) - 1) \\ 1 & (\fs = \emptyset \ \textrm{and} \ d = 0) \\ 0 & (\textrm{otherwise}) \end{array} \right. \end{align} \item The map $\sLB_{< d} \colon \cH \to k$ is the $k$-linear map defined by \begin{align}\label{E:sLB<d} \sLB_{< d}(\fs) &:= \left\{ \begin{array}{cl} {\displaystyle \sum_{d > d_{1} > \cdots > d_{r} \geq 0}} \SB_{d_{1}}(s_{1}) \cdots \SB_{d_{r}}(s_{r}) & (\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0} \ \textrm{and} \ d \geq \dep(\fs)) \\ 1 & (\fs = \emptyset \ \textrm{and} \ d \geq 1) \\ 0 & (\textrm{otherwise}) \end{array} \right. \end{align} \item The map $\sLB \colon \cH \to k_{\infty}$ is the $k$-linear map defined by \begin{align}\label{E:sLB} \sLB(\fs) &:= \left\{ \begin{array}{cl} {\displaystyle \sum_{d_{1} > \cdots > d_{r} \geq 0}} \SB_{d_{1}}(s_{1}) \cdots \SB_{d_{r}}(s_{r}) & (\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0}) \\ 1 & (\fs = \emptyset) \\ 0 & (\textrm{otherwise}). \end{array} \right. \end{align} \end{enumerate} \end{definition} It follows that for each $P \in \cH$, $\fs \in \cI$, $s \in \ZZ_{\geq 1}$ and $d \in \ZZ$, we have \begin{align*} \sLB_{d}(P) &= \sLB_{< d + 1}(P) - \sLB_{< d}(P), \\ \sLB_{< d}(P) &= \sum_{d' < d} \sLB_{d'}(P) = \sum_{0 \leq d' < d} \sLB_{d'}(P), \\ \sLB(P) &= \sum_{d \in \ZZ} \sLB_{d}(P) = \lim_{d \to \infty} \sLB_{< d}(P) \in k_{\infty}, \\ \sLB_{d}(s) \sLB_{< d}(P) &= \sLB_{d}([s, P]), \\ \sLL(\fs) &= \Li_{\fs}(\bone), \\ \sLZ(\fs) &= \zeta_{A}(\fs), \end{align*} where $\bone$ simply refers to $(1,\ldots,1)\in \ZZ_{>0}^{\dep \fs}$ when this is clear from the context, and we set $\Li_{\emptyset}(\bone) := \zeta_{A}(\emptyset) := 1$. \begin{remark} \label{rmk-C=A} For $1 \leq s \leq q$ and $d \geq 0$, we have the following equality due to Carlitz (see also \cite[Thm.~5.9.1]{T04}) \begin{align*} \dfrac{1}{L_{d}^{s}} = \sum_{a \in A_{+, d}} \dfrac{1}{a^{s}}. \end{align*} Therefore, for each $\fs = (s_{1}, \ldots, s_{r})$ with $1 \leq s_{i} \leq q$ and $d \in \ZZ$, we have \begin{align*} \sLL_{d}(\fs) = \sLZ_{d}(\fs), \ \ \ \sLL_{< d}(\fs) = \sLZ_{< d}(\fs), \end{align*} and \begin{equation}\label{E:sL=zeta} \Li_{\fs}(\bone)=\sLL(\fs) = \sLZ(\fs)=\zeta_{A}(\fs). \end{equation} \end{remark} \subsection{Product formulae} The harmonic product on $\cH$ is denoted by $*$ or $\sLi$.
Thus $*$ is a $k$-bilinear map $\cH \times \cH \to \cH$ such that \begin{align*} &[\emptyset] * P = P * [\emptyset] = P, \\ &[\fs] * [\fn] = [s_{1}, \fs_{-} * \fn] + [n_{1}, \fs * \fn_{-}] + [s_{1} + n_{1}, \fs_{-} * \fn_{-}] \end{align*} for each $P \in \cH$, $\fs = (s_{1}, \fs_{-}), \fn = (n_{1}, \fn_{-}) \in \cI_{> 0}$. For each $\fs, \fn \in \cI_{> 0}$, we put \begin{equation}\label{E:DLi} D^{\Li}_{\fs, \fn} := 0 \in \cH. \end{equation} For each $s, n \geq 1$, H.-J. Chen showed in \cite{Ch15} that \begin{equation}\label{E:Chen} \sLZ_{d}(s) \sLZ_{d}(n) = \sLZ_{d}(s + n) + \sum_{j = 1}^{s + n - 1} \Delta_{s, n}^{[j]} \sLZ_{d}(s + n - j, j), \end{equation} where we set \begin{align*} \Delta_{s, n}^{[j]} := \left\{ \begin{array}{ll} {\displaystyle (- 1)^{s - 1} \binom{j - 1}{s - 1} + (- 1)^{n - 1} \binom{j - 1}{n - 1}} & \textrm{if $(q - 1) \mid j$ and $1 \leq j < s + n$} \\ 0 & \textrm{otherwise} \end{array} \right.. \end{align*} The $q$-shuffle product on $\cH$ is denoted by $\sz$. Thus $\sz$ is a $k$-bilinear map $\cH \times \cH \to \cH$ such that \begin{align*} &[\emptyset] *^{\zeta} P = P *^{\zeta} [\emptyset] = P, \\ &[\fs] *^{\zeta} [\fn] = [s_{1}, \fs_{-} *^{\zeta} \fn] + [n_{1}, \fs *^{\zeta} \fn_{-}] + [s_{1} + n_{1}, \fs_{-} *^{\zeta} \fn_{-}] + D^{\zeta}_{\fs, \fn} \end{align*} for each $P \in \cH$, $ \fs = (s_{1}, \fs_{-}),\fn = (n_{1}, \fn_{-}) \in \cI_{> 0}$, where we set \begin{equation}\label{E:DZetaS} D^{\zeta}_{\fs, \fn} := \sum_{j = 1}^{s_{1} + n_{1} - 1} \Delta_{s_{1}, n_{1}}^{[j]} [s_{1} + n_{1} - j, (j) \sz (\fs_{-} *^{\zeta} \fn_{-})]. \end{equation} By induction on $w + w'$, we can show that $P *^{\bullet} Q \in \cH_{w + w'}$ for each $w, w' \in \ZZ_{\geq 0}$, $P \in \cH_{w}$, $Q \in \cH_{w'}$ and $\bullet \in \{ \Li, \zeta \}$. In particular, when $w, w' \in \ZZ_{\geq 1}$ we have $D^{\zeta}_{\fs, \fn} \in \cH_{w + w'}$ for each $\fs \in \cI_{w}$ and $\fn \in \cI_{w'}$. \begin{remark} \label{remark-D-vanish} When $s + n \leq q$, we observe that $\Delta_{s, n}^{[j]} = 0$ for all $j$. Thus for $\fs = (s_{1}, \ldots), \fn = (n_{1}, \ldots) \in \cI_{> 0}$, if $s_{1} + n_{1} \leq q$ then we have $D^{\zeta}_{\fs, \fn} = 0$. \end{remark} The following product formulae are crucial when proving Theorem~\ref{theorem-algo}, whose detailed proof is given in the appendix. \begin{proposition}\label{P:product of sLbullet} Let $\bullet\in \left\{\Li, \zeta \right\}$. Given any $P,Q\in \cH$ and $\fs = (s_{1}, \fs_{-}), \fn = (n_{1}, \fn_{-}) \in \cI_{> 0}$, we have the following identities for each $d\in \ZZ$. \begin{enumerate} \item $\sLB(P) \sLB(Q) = \sLB(P *^{\bullet} Q)$. \item $ \sLB_{< d}(P) \sLB_{< d}(Q) = \sLB_{< d}(P *^{\bullet} Q)$. \item \begin{align*} \sLB_{d}(\fs) \sLB_{d}(\fn) &= \sLB_{d}([s_{1} + n_{1}, \fs_{-} *^{\bullet} \fn_{-}]) + \sLB_{d}(D^{\bullet}_{\fs, \fn}) \\ &= \sLB_{d}(\fs *^{\bullet} \fn) - \sLB_{d}([s_{1}, \fs_{-} *^{\bullet} \fn]) - \sLB_{d}([n_{1}, \fs *^{\bullet} \fn_{-}]). \end{align*} \end{enumerate} \end{proposition} \begin{proof} We first mention that, for each $d$, identity (3) follows from identity (2) for the same $d$.
Indeed, for each $\fs = (s_{1}, \ldots), \fn = (n_{1}, \ldots) \in \cI_{> 0}$ we have \begin{align*} \sLL_{d}(\fs) \sLL_{d}(\fn) &= \sLL_{d}(s_{1}) \sLL_{d}(n_{1}) \sLL_{< d}(\fs_{-}) \sLL_{< d}(\fn_{-}) \\ &= \sLL_{d}(s_{1} + n_{1}) \sLL_{< d}(\fs_{-} *^{\Li} \fn_{-}) \\ &= \sLL_{d}([s_{1} + n_{1}, \fs_{-} *^{\Li} \fn_{-}]) + \sLL_{d}(D^{\Li}_{\fs, \fn}), \end{align*} where the second equality comes from (2) and the definition of $\sLL_{d}$, and the third comes from the identity $\sLL_{d}(s) \sLL_{< d}(P) = \sLL_{d}([s, P])$ together with \eqref{E:DLi}. Similarly we have \begin{align*} \sLZ_{d}(\fs) \sLZ_{d}(\fn) &= \sLZ_{d}(s_{1}) \sLZ_{d}(n_{1}) \sLZ_{< d}(\fs_{-}) \sLZ_{< d}(\fn_{-}) \\ &= \left( \sLZ_{d}(s_{1} + n_{1}) + \sum_{j} \Delta_{s_{1}, n_{1}}^{[j]} \sLZ_{d}(s_{1} + n_{1} - j) \sLZ_{< d}(j) \right) \sLZ_{< d}(\fs_{-} *^{\zeta} \fn_{-}) \\ &= \sLZ_{d}([s_{1} + n_{1}, \fs_{-} *^{\zeta} \fn_{-}]) + \sum_{j} \Delta_{s_{1}, n_{1}}^{[j]} \sLZ_{d}(s_{1} + n_{1} - j) \sLZ_{< d}((j) *^{\zeta} (\fs_{-} *^{\zeta} \fn_{-}) ) \\ &= \sLZ_{d}([s_{1} + n_{1}, \fs_{-} *^{\zeta} \fn_{-}]) + \sLZ_{d}(D^{\zeta}_{\fs, \fn}), \end{align*} where the second equality comes from~\eqref{E:Chen} and (2), the third comes from (2) and the identity $\sLZ_{d}(s) \sLZ_{< d}(P) = \sLZ_{d}([s, P])$, and the last one comes from the same identity and~\eqref{E:DZetaS}. To prove the formula~(2), we first mention that when $d \leq 0$, the formula holds as both sides of the identity are zero. We then prove the formula~(2) by induction on $d$. Now, let $d$ be any positive integer. By bilinearity, we may assume that $P = \fs$ and $Q = \fn$ are indices, and since the case where $\fs$ or $\fn$ equals $\emptyset$ is clear, we may further assume $\fs, \fn \in \cI_{> 0}$. Then we have \begin{align*} &\sLB_{< d}(\fs) \sLB_{< d}(\fn) \\ &= \sum_{d_{1} < d} \sLB_{d_{1}}(s_{1}) \sLB_{< d_{1}}(\fs_{-}) \sLB_{< d_{1}}(\fn) + \sum_{d_{1} < d} \sLB_{d_{1}}(n_{1}) \sLB_{< d_{1}}(\fs) \sLB_{< d_{1}}(\fn_{-}) + \sum_{d_{1} < d} \sLB_{d_{1}}(\fs) \sLB_{d_{1}}(\fn) \\ &= \sum_{d_{1} < d} \sLB_{d_{1}}(s_{1}) \sLB_{< d_{1}}(\fs_{-} *^{\bullet} \fn) + \sum_{d_{1} < d} \sLB_{d_{1}}(n_{1}) \sLB_{< d_{1}}(\fs *^{\bullet} \fn_{-}) \\ & \ \ \ + \sum_{d_{1} < d} \sLB_{d_{1}}([s_{1} + n_{1}, \fs_{-} *^{\bullet} \fn_{-}] + D^{\bullet}_{\fs, \fn}) \\ &= \sLB_{< d}(\fs *^{\bullet} \fn), \end{align*} where the second equality comes from the induction hypothesis together with (3) (note that (2) holds for $d_{1}<d$ by the induction hypothesis, and hence so does (3)). Finally, the formula~(1) follows from (2) by taking the limit $d \to \infty$. \end{proof} \section{Generators}\label{Sec: Generators} The purpose of this section is to establish Theorem~\ref{theorem-algo}, which allows us to obtain the desired generating sets for $\cZ_w$. To achieve this, we need to set up the box-plus operator on $\cH^{\oplus 2}$ as well as the $k$-linear maps $\sUB$ on $\cH$. \subsection{The box-plus operator} \begin{definition}\label{Def:boxplus} Let $\boxplus \colon \cH^{\oplus 2} \to \cH$ be the $k$-linear map defined by \begin{align*} \emptyset \boxplus P = P \boxplus \emptyset := 0 \ \ \textrm{and} \ \ \fs \boxplus \fn := (\fs_{+}, s_{r} + n_{1}, \fn_{-}) \end{align*} for $P \in \cH$ and $\fs = (\fs_{+}, s_{r}), \fn = (n_{1}, \fn_{-}) \in \cI_{> 0}$. \end{definition} \begin{remark} To avoid confusion, we mention that for each $n \geq 1$ and $P = \sum_{\fs \in \cI} a_{\fs} [\fs] \in \cH$, \begin{align*} [n, P] & = \sum_{\fs = (s_{1}, \ldots, s_{r}) \in \cI} a_{\fs} [n, s_{1}, \ldots, s_{r}] \in \cH_{> 0}, \\ (n) \boxplus P & = \sum_{\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0}} a_{\fs} [n + s_{1}, s_{2}, \ldots, s_{r}] \in \cH_{> 0} \ \ \ (\neq [n] + P).
\end{align*} Furthermore, in the depth one case for $\fs=(s)$ and $\fn=(n)$, we have $\fs_{+}=\emptyset=\fn_{-}$ and hence in this case \[\fs \boxplus \fn=[s+n].\] \end{remark} We define a $k$-linear endomorphism $\sU^{\bullet} \colon \cH \to \cH$ as follows. Let $\fs \in \cI$. We write \begin{equation}\label{E:s^T} \fs = (s_{1}, \ldots ) = (\fs^{\rT}, q^{\{ m \}}, \fs') \end{equation} with $\fs^{\rT} \in \IT$, $m \geq 0$, and $\fs' = (s'_{1}, \ldots)$ with $s'_{1} > q$ or $\fs' = \emptyset$. When $\fs' \neq \emptyset$, we set \begin{equation}\label{E:s''} \fs'' := (s'_{1} - q, \fs'_{-}). \end{equation} \begin{example} Let $\fs=(q-1,q,q,q+1,1)$. Then $\fs^{\rT}=(q-1)$, $m=2$, and $\fs'=(q+1,1)$. Since $\fs'\neq\emptyset$, we also have $\fs''=(1,1)$. \end{example} \subsection{Generating sets} We set \begin{equation}\label{E:alpha} \alpha^{\bullet}_{q}(P) := [1, (q - 1) *^{\bullet} P] \end{equation} for each $P \in \cH$. For $m=0$, $\alpha_{q}^{\bullet, 0}$ is defined to be the identity map on $\cH$, and for $m\in \ZZ_{>0}$, $\alpha^{\bullet,m}_{q}$ is defined to be the $m$th iteration of $\alpha^{\bullet}_{q}$. \begin{definition} \label{def-sU} For $\bullet\in \left\{ \Li, \zeta \right\}$, we define the $k$-linear map $\sUB:\cH\rightarrow \cH$ given by \begin{align*} \sUB(\fs) := \left\{ \begin{array}{@{}ll} - [\fs^{\rT}, q^{\{m + 1\}}, \fs''] + L_{1}^{m + 1} [\fs^{\rT}, \alpha_{q}^{\bullet, m + 1}(\fs'')] & \\ \hspace{5.0em} + L_{1}^{m + 1} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m + 1}(\fs'')) - [\fs^{\rT}, q^{\{m\}}, D^{\bullet}_{q, \fs''}] & (\fs' \neq \emptyset) \\[0.5em] L_{1}^{m} [\fs^{\rT}, \alpha_{q}^{\bullet, m}(\emptyset)] + L_{1}^{m} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m}(\emptyset)) & (\fs' = \emptyset) \end{array} \right. \end{align*} \end{definition} \begin{remark}\label{Rem:Us=s} From the definition, one sees that $\sU^{\bullet}(\cH_{w}) \subset \cH_{w}$. We also mention that when $\fs\in \IT$, then $\sUB(\fs)=\fs$ since in this case, we have $m=0$ and $\fs'=\emptyset$. \end{remark} \begin{theorem} \label{theorem-algo} Let $\bullet\in \left\{ \Li, \zeta \right\}$. For each $P \in \cH$, the following statements hold. \begin{enumerate} \item $\sLB(\sUB(P)) = \sLB(P)$. \item There exists an explicit integer $e \geq 0$ such that $\Supp((\sU^{\bullet})^{e}(P)) \subset \IT$, where $(\sU^{\bullet})^{0}$ is defined to be the identity map and for $e\in\ZZ_{>0}$, $(\sU^{\bullet})^{e}$ is defined to be the $e$th iteration of $\sU^{\bullet}$. \end{enumerate} \end{theorem} Note that our $\sUB$ is concrete and explicit, and simultaneously deals with the two products $*^{\Li}$ and $*^{\zeta}$. Our strategy for proving Theorem \ref{theorem-algo} arises from~\cite{ND21}, and so we leave the detailed proof to the appendix. However, we give an outline of the proof of Theorem \ref{theorem-algo} (1) as follows. Let $\bullet\in \left\{\Li, \zeta \right\}$. \begin{enumerate} \item[(I)] We first define the space of {\it{binary relations}} $\cP^{\bullet}\subset \cH^{\oplus 2}$ in~\eqref{E:Pbullet}, and note that for $(P,Q)\in \cP^{\bullet}$, we have \[\sLB(P, Q) := \sLB(P)+\sLB(Q)=0 .\] \item[(II)] For any $\fs\in \cI_{>0}$ and integer $m\geq 0$, we define in Sec.~\ref{Sec:Maps B,C,BC} the following maps \[ \sB_{\fs}^{\bullet},\sC_{\fs}^{\bullet},\sBC_{q}^{\bullet,m}:\cH_{>0}^{\oplus 2}\rightarrow \cH_{>0}^{\oplus 2}, \] and further show in Proposition~\ref{prop-BC} that they preserve $\cP^{\bullet}$. 
\item[(III)] In order to give a clear outline, we drop the subscripts to avoid heavy notation, as all details are given in the proof of Theorem~\ref{theorem-sLsU}. We start with $R_{1}\in \cP_{q}^{\bullet}$, and consider $\sDB(R_{1})$ for a suitable choice of $\sDB$ arising from one of $\sB^{\bullet}, \sC^{\bullet}, \sBC^{\bullet}$ as well as their composites (see~\eqref{E:Appendix EQ1}, \eqref{E:Appendix EQ2}, \eqref{E:Appendix EQ3} and \eqref{E:Appendix EQ4}). Based on (II), we have $\sDB(R_{1})\in \cP^{\bullet}$ and hence $\sLB(\sDB(R_{1}))=0$. In this case, the expansion of $\sLB(\sDB(R_{1}))$ is equal to $\sLB(\fs)-\sLB(\sUB(\fs))$, and hence $\sLB(\fs)=\sLB(\sUB(\fs))$ as desired. \end{enumerate} As a consequence of Theorem~\ref{theorem-algo}, we obtain the following important equality. \begin{corollary}\label{Cor:generating set} Let $\bullet\in \left\{\Li, \zeta\right\}$. For any $w\geq 0$, we let $\cZ_{w}^{\bullet} $ be the $k$-vector space spanned by the elements $\sLB(\fs)\in k_{\infty}$ for $\fs \in \cI_{w}$. Then \[\left\{ \sLB(\fs)| \fs\in \ITw \right\}\] is a generating set for $\cZ_{w}^{\bullet} $. In particular, we have \[\cZ^{\Li}_{w}=\cZ_{w}^{\zeta} \ (=\cZ_{w}) .\] \end{corollary} \begin{proof} The first assertion comes from Theorem~\ref{theorem-algo}. We note that for $\fs\in \ITw$, we have \[\sLL(\fs):= \Li_{\fs}(\bone)=\zeta_{A}(\fs) =:\sLZ(\fs)\] by~\eqref{E:sL=zeta}, whence we obtain $\cZ_{w}^{\Li}= \cZ_{w}^{\zeta}$. \end{proof} \begin{theorem} \label{theorem-generator} The $k$-vector space $\cZ_{w}$ is spanned by the elements $\Li_{\fs}(\bone)$ for $\fs \in \INDw$. \end{theorem} \begin{proof} We set $d_{w}' := |\ITw| = |\INDw|$. By the definition of $\sU^{\Li}$ and Theorem \ref{theorem-algo}~(2), there exists $e \gg 0$ such that $(\sU^{\Li})^{e}(\cH_{w / \Fp[L_{1}]}) \subset \cH^{\rT}_{w / \Fp[L_{1}]}$, where $\cH_{w / \Fp[L_{1}]}$ (resp.\ $\cH^{\rT}_{w / \Fp[L_{1}]}$) is the $\Fp[L_{1}]$-module spanned by $\fs \in \cI_{w}$ (resp.\ $\fs \in \IT_{w}$) in $\cH_{w}$. We note that, according to Remark~\ref{Rem:Us=s}, the restriction of $(\sU^{\Li})^{e}$ to $\cH_{w / \Fp[L_{1}]}$ is independent of $e$ for $e\gg 0$. To show the desired result, it suffices to prove that the determinant of the matrix \[U = (u_{\fs, \fn})_{\fs \in \INDw, \fn \in \ITw} \in \Mat_{d_{w}'}(\Fp[L_{1}])\] arising from \begin{align*} (\sU^{\Li})^{e}(\fs) = \sum_{\fn \in \ITw} u_{\fs, \fn} [\fn] \ \ (\fs \in \INDw) \end{align*} is non-zero. We mention that $\INDw$ is equal to $\ITw$ when $w \leq q$; in this case $U$ is the identity matrix and the result is clear. So, in what follows, we assume that $\INDw \neq \ITw$. Take any $\fs = (s_{1}, \ldots, s_{r}) \in \INDw$ with $\fs \notin \ITw$. Based on~\eqref{E:s^T} and the definition of $\INDw$, we write $\fs = (\fs^{\rT}, q^{\{0\}}, \fs')$ with $\fs^{\rT} \in \IT$ ($q^{\left\{ 0\right\}}:=\emptyset$) and $\fs' \neq \emptyset$. Note that $\fs'$ is of the form $\fs' = (s'_{1}, \ldots)$ with $s'_{1} > q$. By definition~\eqref{E:DLi}, $D^{\Li}_{q, \fs''} = 0$. Thus \begin{align}\label{E:sULis} \sU^{\Li}(\fs) = - [\fs^{\rT}, q, \fs''] + L_{1} [\fs^{\rT}, \alpha_{q}^{\Li}(\fs'')] + L_{1} (\fs^{\rT} \boxplus \alpha_{q}^{\Li}(\fs'')), \end{align} where $\fs'' := (s'_{1} - q, \fs'_{-})$.
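As a quick illustration of~\eqref{E:sULis} (this example is only meant to make the formula concrete and is not used elsewhere), take the depth one index $\fs=(q+1)\in \IND_{q+1}\setminus \IT_{q+1}$. Then $\fs^{\rT}=\emptyset$, $\fs'=(q+1)$ and $\fs''=(1)$. Since $(q-1)*^{\Li}(1)=[q-1,1]+[1,q-1]+[q]$ and $\emptyset \boxplus P=0$ for every $P\in \cH$, formula~\eqref{E:sULis} reads
\[
\sU^{\Li}((q+1)) = -[q,1]+L_{1}\left([1,q-1,1]+[1,1,q-1]+[1,q]\right),
\]
whose first term is $-[q^{\{1\}},1]$ and whose remaining terms all carry a factor $L_{1}$, in accordance with the computation below.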
Thus for each $\fs = (m_{1} q + n_{1}, \ldots, m_{r} q + n_{r}) \in \INDw$ with $m_{i} \geq 0$ (the $m_{i}$ are not all zero since $\fs\notin \ITw$) and $1 \leq n_{i} \leq q - 1$, we have \begin{align*} (\sU^{\Li})^{e}(\fs) &= (\sU^{\Li})^{e + m_{1} + \cdots + m_{r}}(\fs) \\ & \in (\sU^{\Li})^{e}\left((- 1)^{m_{1} + \cdots + m_{r}} [q^{\{ m_{1} \}}, n_{1}, \ldots, q^{\{ m_{r} \}}, n_{r}] + L_{1} \cdot \cH_{w / \Fp[L_{1}]}\right) \\ &\subset (- 1)^{m_{1} + \cdots + m_{r}} [q^{\{ m_{1} \}}, n_{1}, \ldots, q^{\{ m_{r} \}}, n_{r}] + L_{1} \cdot \cH^{\rT}_{w / \Fp[L_{1}]}. \end{align*} Note that the first identity above comes from the fact that $(\sU^{\Li})^{e}(\fs)\in \cH^{\rT}_{w / \Fp[L_{1}]}$, which is fixed by $(\sU^{\Li})^{m_{1} + \cdots + m_{r}}$ by Remark~\ref{Rem:Us=s}. The containment in the second line follows from repeated applications of~\eqref{E:sULis}, and the last inclusion then follows because $(q^{\{ m_{1} \}}, n_{1}, \ldots, q^{\{ m_{r} \}}, n_{r}) \in \ITw$ and $ (\sU^{\Li})^{e} (\cH_{w / \Fp[L_{1}]}) \subset \cH^{\rT}_{w / \Fp[L_{1}]}$. Since \[(m_{1} q + n_{1}, \ldots, m_{r} q + n_{r}) \mapsto (q^{\{ m_{1} \}}, n_{1}, \ldots, q^{\{ m_{r} \}}, n_{r})\] gives a bijection between $\INDw$ and $\ITw$, we have $\det(U \bmod L_{1}) = \pm 1$ in $\Fp$. In particular, $\det(U) \neq 0$. \end{proof} \section{Frobenius difference equations}\label{Sec:FDE} In this section, we focus on the specific systems~\eqref{eq-Frob} of Frobenius difference equations arising from our study of MZV's, and the major result is Theorem~\ref{theorem-dimension}, which gives the precise dimension of the solution space of the system~\eqref{eq-Frob-w}. \subsection{ABP-criterion} Let $t$ be a new variable, and $\laurent{\CC_{\infty}}{t}$ be the field of Laurent series in $t$ over $\CC_{\infty}$. For any integer $n$, we define the following {\it{$n$-fold Frobenius twisting}}: \[\tau^{n}:= \left(f\mapsto f^{(n)} \right): \laurent{\CC_{\infty}}{t} \rightarrow \laurent{\CC_{\infty}}{t}, \] where $f^{(n)}:=\sum a_{i}^{q^{n}}t^{i}$ for $f=\sum a_{i}t^{i}\in \laurent{\CC_{\infty}}{t}$. We further extend $\tau^{n}$ to an action on $\Mat_{r\times s}(\laurent{\CC_{\infty}}{t})$ by letting it act entrywise. Note that as an automorphism of the field $\laurent{\CC_{\infty}}{t}$, $\tau^{n}$ stabilizes several subrings such as the power series ring $\power{\CC_{\infty}}{t}$, the Tate algebra \[\TT:=\left\{f\in \power{\CC_{\infty}}{t}| f\hbox{ converges on }|t|_{\infty}\leq 1 \right\},\] and the polynomial rings $\CC_{\infty}[t], \ok[t]$ as well as the subfields $\CC_{\infty}(t)$ and $\ok(t)$. When $n=-1$, we write $\sigma := \tau^{-1}$. For $n\in \left\{1,-1 \right\}$, the following rings and fields of fixed elements are particularly used in this paper: \[ \TT^{\tau}=\ok[t]^{\tau}=\FF_{q}[t]=\TT^{\sigma}=\ok[t]^{\sigma} \hbox{ and }\ok(t)^{\tau}=\FF_{q}(t)=\ok(t)^{\sigma}.
\] Following~\cite{ABP04}, we denote by $\cE\subset \power{\CC_{\infty}}{t}$ the subring consisting of power series $f=\sum_{i=0}^{\infty}a_{i}t^{i}$ with algebraic coefficients ($a_{i}\in\ok$, $\forall i\in \ZZ_{\geq 0}$) for which \[\lim_{i\rightarrow \infty} \sqrt[i]{|a_{i}|_{\infty}}=0\hbox{ and }[k_{\infty}\left(a_{0},a_{1},\ldots \right):k_{\infty} ]<\infty .\] It follows that any such power series in $\cE$ converges on the whole of $\CC_{\infty}$, and we have \[ \cE^{\tau}=\FF_{q}[t]=\cE^{\sigma}.\] The primary tool that we use in this paper to show linear independence results is the following ABP-criterion. \begin{theorem}[{\cite[Theorem~3.1.1]{ABP04}}] \label{T:ABP} Let $\Phi\in \Mat_{\ell}(\ok[t])$ be a matrix such that, as a polynomial in $\ok[t]$, $\det \Phi$ vanishes only (if at all) at $t=\theta$. Suppose that a column vector $\psi=\psi(t)\in \Mat_{\ell \times 1}(\cE)$ satisfies the following difference equation \[\psi^{(-1)}=\Phi \psi .\] We denote by $\psi(\theta)\in \Mat_{\ell \times 1}(\CC_{\infty})$ the evaluation of $\psi$ at $t=\theta$. Then for any row vector $\rho\in \Mat_{1\times \ell}(\ok)$ for which \[\rho \psi(\theta)=0,\] there exists a row vector $\matheur{P}\in \Mat_{1\times \ell}(\ok[t])$ such that \[ \matheur{P}\psi=0,\hbox{ and }\matheur{P}(\theta)=\rho .\] \end{theorem} The spirit of the ABP-criterion is that any $\ok$-linear relation among the entries of $\psi(\theta)$ can be lifted to a $\ok[t]$-linear relation among the entries of $\psi$. We mention that the condition on $\Phi$ in the theorem above arises from {\it{dual $t$-motives}}, and we refer the reader to~\cite{ABP04, P08} and~\cite{A86} for more details. \subsection{Some lemmas} For each $\fs = (s_{1}, \ldots, s_{r}) \in \cI$ and $M \subset \cI$, we set \begin{align*} \sI(\fs) := \{ \emptyset, (s_{1}), (s_{1}, s_{2}), \ldots, (s_{1}, \ldots, s_{r}) \} \ \ \textrm{and} \ \ \sI(M) := \bigcup_{\fs \in M} \sI(\fs). \end{align*} Throughout this paper, we denote by $\eR$ the following $\FF_{q}(t)$-algebra \begin{align*} \eR := \{ f / g \ | \ f \in \ok[t], \ g \in \Fq[t] \setminus \{ 0 \} \} \subset \ok(t). \end{align*} For any nonempty set $J$, it is understood that $\eR^{J}$ refers to the $\FF_{q}(t)$-vector space $\bigoplus_{x\in J} \eR$, and any element of $\eR^{J}$ is written in the form $(r_x)_{x\in J}$ with $r_{x}\in \eR$. Given any nonempty subset $M \subset \cI_{w}$ for some $w \geq 0$, we consider the system of Frobenius equations \begin{align} \label{eq-Frob} \varepsilon_{\fs}^{(1)} = \varepsilon_{\fs} (t - \theta)^{w - \wt(\fs)} + \sum_{\substack{s' > 0 \\ (\fs, s') \in \sI(M)}} \varepsilon_{(\fs, s')} (t - \theta)^{w - \wt(\fs)} \ \ (\fs \in \sI(M)) \tag{E$_{M}$} \end{align} with $(\varepsilon_{\fs})_{\fs \in \sI(M)} \in \eR^{\sI(M)}$. We mention that~\eqref{eq-Frob} is essentially the case $Q_i=1$ of~\cite[(3.1.3)]{CPY19}, which arose from the study of establishing a criterion for {\it{zeta-like}} MZV's. Note that our $\varepsilon_{\fs}$ essentially corresponds to $\delta_i^{(-1)}$ used in~\cite[(3.1.3)]{CPY19} (with $Q_{i}=1$ there), but we follow~\cite{ND21} in expressing \eqref{eq-Frob} with subscripts parameterized by indices. See also~\cite[(6.3), (6.4)]{ND21} for the depth two case. \begin{remark} Note that if $w=0$, we have $M=\cI_{0}=\left\{\emptyset\right\}$ and so $\sI(M)=\left\{ \emptyset\right\}$. In this special case, the equation~\eqref{eq-Frob} becomes $\varepsilon_{\emptyset}^{(1)}=\varepsilon_{\emptyset}$.
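As a further illustration (this special case is not needed in the sequel), if $M=\left\{(w)\right\}$ is a singleton consisting of a depth one index with $w>0$, then $\sI(M)=\left\{\emptyset,(w)\right\}$ and \eqref{eq-Frob} consists of the two equations
\[
\varepsilon_{(w)}^{(1)}=\varepsilon_{(w)} \ \hbox{ and } \ \varepsilon_{\emptyset}^{(1)}=\left(\varepsilon_{\emptyset}+\varepsilon_{(w)}\right)(t-\theta)^{w}.
\]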
\end{remark} \begin{lemma} \label{lem-in-A} Let $F = \sum_{i = 0}^{n} f_{i} t^{i}, G \in A[t]$ be polynomials over $A$ such that $n = \deg_{t} F \geq 1$ and $f_{n} \in \Fq^{\times}$. If $\varepsilon \in \eR$ satisfies $\varepsilon^{(1)} = \varepsilon F + G$, then we have $\varepsilon \in \Fq(t)[\theta]$. \end{lemma} \begin{proof} Our method is inspired by H.-J. Chen's formulation in the proof of \cite[Thm.~2 (a)]{KL16} (see also Step I of the proof of \cite[Thm.~6.1.1]{C16}). By multiplying both sides of the equation by the denominator of $\varepsilon$ (which lies in $\Fq[t]$ and hence is fixed by the twist), we may assume without loss of generality that $\varepsilon \in \ok[t] \setminus \{ 0 \}$. We put $m := \deg_{t} \varepsilon \geq 0$. Since $\deg_{t} \varepsilon^{(1)} = \deg_{t} \varepsilon < \deg_{t} (\varepsilon F)$, we have $\deg_{t} G = m + n$. We set \begin{align*} \varepsilon = \sum_{i = 0}^{m} a_{i} t^{i} \ (a_{i} \in \ok) \ \ \textrm{and} \ \ G = \sum_{i = 0}^{m + n} g_{i} t^{i} \ (g_{i} \in A). \end{align*} Moreover, we set $a_{i} := 0$ (resp.\ $f_{i} := 0$) when $i$ does not satisfy $0 \leq i \leq m$ (resp.\ $0 \leq i \leq n$). Then we have \begin{align*} \sum_{i = 0}^{m} a_{i}^{q} t^{i} = \sum_{j = 0}^{m} a_{j} t^{j} \sum_{\ell = 0}^{n} f_{\ell} t^{\ell} + \sum_{i = 0}^{m + n} g_{i} t^{i} = \sum_{i = 0}^{m + n} \sum_{j + \ell = i} a_{j} f_{\ell} t^{i} + \sum_{i = 0}^{m + n} g_{i} t^{i}. \end{align*} By comparing the coefficients of $t^{i}$ for $m + 1 \leq i \leq m + n$, we have \begin{align*} a_{i - n} f_{n} = - g_{i} - \sum_{j > i - n} a_{j} f_{i - j}. \end{align*} Then we can show that $a_{m}, a_{m - 1}, \ldots, a_{m + 1 - n} \in A$ inductively. Suppose that $m + 1 - n \geq 1$ $(\Leftrightarrow m \geq n)$. Then by comparing the coefficients of $t^{i}$ for $n \leq i \leq m$, we have \begin{align*} a_{i - n} f_{n} = a_{i}^{q} - g_{i} - \sum_{j > i - n} a_{j} f_{i - j}. \end{align*} Then we can show that $a_{m - n}, a_{m - n - 1}, \ldots, a_{0} \in A$ inductively. Therefore, in any case we have $\varepsilon \in A[t]$. \end{proof} The next lemma can be viewed as an improvement of Step II of the proof of \cite[Thm.~6.1.1]{C16} and of \cite[Thm.~2 (b)]{KL16}, as it gives a better upper bound for the degrees of the solutions $(\varepsilon_{\fs})_{\fs \in \sI(M)} \in \eR^{\sI(M)}$ of \eqref{eq-Frob}. \begin{lemma} \label{lem-degree} Let $w \geq 0$ and $\emptyset\neq M \subset \cI_{w}$. If $(\varepsilon_{\fs})_{\fs \in \sI(M)} \in \eR^{\sI(M)}$ satisfies \eqref{eq-Frob} \begin{align*} \varepsilon_{\fs}^{(1)} = \varepsilon_{\fs} (t - \theta)^{w - \wt(\fs)} + \sum_{\substack{s' > 0 \\ (\fs, s') \in \sI(M)}} \varepsilon_{(\fs, s')} (t - \theta)^{w - \wt(\fs)} \ \ (\fs \in \sI(M)), \end{align*} then we have $\varepsilon_{\fs} \in \Fq(t)[\theta]$ and $\deg_{\theta} \varepsilon_{\fs} \leq \dfrac{w - \wt(\fs)}{q - 1}$ for each $\fs \in \sI(M)$. \end{lemma} \begin{proof} When $\fs \in M$, we have $\varepsilon_{\fs} \in \Fq(t)$ since in this case $\varepsilon_{\fs}^{(1)}=\varepsilon_{\fs}$, and thus the statements are clearly valid. By Lemma \ref{lem-in-A} and induction on $w - \wt(\fs)$, we have $\varepsilon_{\fs} \in \Fq(t)[\theta]$ for all $\fs \in \sI(M)$. We now consider $\fs \in \sI(M) \setminus M$ and suppose that $\deg_{\theta} \varepsilon_{(\fs, s')} \leq \dfrac{w - \wt(\fs, s')}{q - 1}$ for all $s' > 0$ with $(\fs, s') \in \sI(M)$. Suppose on the contrary that $\deg_{\theta} \varepsilon_{\fs} > \dfrac{w - \wt(\fs)}{q - 1}$. Then we have $\deg_{\theta} \varepsilon_{\fs}^{(1)} = q \deg_{\theta} \varepsilon_{\fs} > \dfrac{q (w - \wt(\fs))}{q - 1}$.
On the other hand, the induction hypothesis implies \begin{align*} \deg_{\theta} (\varepsilon_{(\fs, s')} (t - \theta)^{w - \wt(\fs)}) \leq \dfrac{w - \wt(\fs, s')}{q - 1} + w - \wt(\fs) < \dfrac{q (w - \wt(\fs))}{q - 1}. \end{align*} It follows that $\deg_{\theta} \varepsilon_{\fs}^{(1)} = \deg_{\theta} (\varepsilon_{\fs} (t - \theta)^{w - \wt(\fs)})$, and hence $\deg_{\theta} \varepsilon_{\fs} = \dfrac{w - \wt(\fs)}{q - 1}$, which is a contradiction. \end{proof} \subsection{Dimension of $\sX_{w}$} For $w\in \ZZ_{w\geq 0}$, We define $\INDzw \subset \INDw$ as follows: \begin{equation}\label{E:INDzw} \INDzw = \left\{ \begin{array}{@{}ll} \left\{\emptyset\right\} & \textrm{if} \ w=0 \\ \left\{ (s_{1},\ldots,s_{r})\in \INDw|s_{2},\ldots,s_{r} \hbox{ are divisible by }q-1 \right\} & \textrm{if} \ w>0 \end{array} \right. \end{equation} Note that the description of indices in $\INDzw$ comes from the simultaneously Eulerian phenomenon in~\cite[Cor.~4.2.3]{CPY19}. For each $w \geq 0$, we consider the system of Frobenius equations \eqref{eq-Frob} for $M = \INDzw$: \begin{align} \label{eq-Frob-w} \varepsilon_{\fs}^{(1)} = \varepsilon_{\fs} (t - \theta)^{w - \wt(\fs)} + \sum_{\substack{s' > 0 \\ (\fs, s') \in \sI(\INDzw)}} \varepsilon_{(\fs, s')} (t - \theta)^{w - \wt(\fs)} \ \ (\fs \in \sI(\INDzw)) \tag{E$_{w}$} \end{align} with $(\varepsilon_{\fs})_{\fs \in \sI(\INDzw)} \in \eR^{\sI(\INDzw)}$. We emphasize that $(E_{w}) = (E_{\INDzw})$ and $(E_{w}) \neq (E_{\{w\}})$. \begin{example} We give some explicit examples for the system of Frobenius equations \eqref{eq-Frob-w}. The simplest example is the case $w=0$. We note that $\INDz_0=\sI(\INDz_0)=\{\emptyset\}$. Thus, \eqref{eq-Frob-w} consists of a single equation \[ \varepsilon_{\emptyset}^{(1)} = \varepsilon_{\emptyset}. \] The second example is the case $w=q$. In this case, $\INDz_{q}=\{(1,q-1)\}$ and \begin{align*} \sI(\INDz_{q})=\{\emptyset,(1),(1,q-1)\}. \end{align*} Thus, \eqref{eq-Frob-w} consists of three equations \begin{align*} \left\{ \begin{array}{@{}ll} \varepsilon_{(1,q-1)}^{(1)} &= \varepsilon_{(1,q-1)}\\ \varepsilon_{(1)}^{(1)} &= \varepsilon_{(1)} (t - \theta)^{q-1}+\varepsilon_{(1,q-1)}(t-\theta)^{q-1}\\ \varepsilon_{\emptyset}^{(1)} &= \varepsilon_{\emptyset} (t - \theta)^{q}+\varepsilon_{(1)}(t-\theta)^{q}. \end{array} \right. \end{align*} The third example is the case $w=2q-2$ with $q \geq 3$. Since $\INDz_{2q-2}=\{(q-1,q-1),(2q-2)\}$, we have \[ \sI(\INDz_{2q-2})=\{\emptyset,(q-1),(q-1,q-1),(2q-2)\}. \] Then \eqref{eq-Frob-w} consists of four equations \begin{align*} \left\{ \begin{array}{@{}ll} \varepsilon_{(2q-2)}^{(1)} &= \varepsilon_{(2q-2)}\\ \varepsilon_{(q-1,q-1)}^{(1)} &= \varepsilon_{(q-1,q-1)}\\ \varepsilon_{(q-1)}^{(1)} &= \varepsilon_{(q-1)} (t - \theta)^{q-1}+\varepsilon_{(q-1,q-1)}(t-\theta)^{q-1}\\ \varepsilon_{\emptyset}^{(1)} &= \varepsilon_{\emptyset} (t - \theta)^{2q-2}+\varepsilon_{(q-1)}(t-\theta)^{2q-2}+\varepsilon_{(2q-2)}(t-\theta)^{2q-2}. \end{array} \right. \end{align*} \end{example} \begin{definition} For each $w\in \ZZ_{\geq 0}$, we let $\sX_{w}$ be the set of $\eR$-valued solutions of \eqref{eq-Frob-w}. \end{definition} \begin{remark} Since $f^{(1)}=f$ for any $f\in \FF_{q}(t)\subset \eR$, we see that $\sX_{w}$ forms an $\Fq(t)$-vector subspace of $\eR^{\sI(\INDzw)}$. \end{remark} The main result of this section is the following dimension formula for $\sX_{w}$. 
\begin{theorem} \label{theorem-dimension} For any $w\in \ZZ_{\geq 0}$, we have \[\dim_{\Fq(t)} \sX_{w} = \left\{ \begin{array}{@{}ll} 1 & \textrm{if} \ (q - 1) \mid w \\ 0 & \textrm{if} \ (q - 1) \nmid w \end{array} \right.\] When $w = \ell (q - 1)$ for some $\ell\in \ZZ_{\geq 0}$, we can choose a generator $(\eta_{\ell; \fs})_{\fs \in \sI(\INDzw)} \in \sX_{w}$ such that \begin{align*} \eta_{\ell; \emptyset} = \sum_{i = 0}^{\ell} b_{\ell i} (t - \theta)^{i}, \ \ b_{\ell i} \in T \Fp[T]_{(T)} \ (0 \leq i < \ell) \ \ \textrm{and} \ \ b_{\ell \ell} = 1, \end{align*} where $T := t - t^{q}$ and $\Fp[T]_{(T)} \subset \Fp(t)$ is the localization of $\Fp[T]$ at the prime ideal $(T)$. \end{theorem} \begin{proof} Note that by Lemma \ref{lem-degree}, we have $\sX_{w} \subset \Fq(t)[\theta]^{\sI(\INDzw)}$. We write \begin{align*} w &= \ell (q - 1) + s \ \ (\ell \geq 0, \ \ 0 \leq s < q - 1), \\ \ell &= m q + n \ \ (m \geq 0, \ \ 0 \leq n < q). \end{align*} First we assume that $s = 0$ and hence $w = \ell (q - 1)$ with $\ell \geq 0$. When $\ell = 0$, we have $\sI(\cI^{\mathrm{ND}_{0}}_{0}) = \{ \emptyset \}$ and the equation is $\varepsilon_{\emptyset}^{(1)} = \varepsilon_{\emptyset}$. Thus $\sX_0=\Fq(t)$, and we have a generator $\eta_{0; \emptyset} = 1$. Let $\ell \geq 1$ and suppose that the desired properties hold for weight $\ell' (q - 1)$ with $\ell' < \ell$. The system of Frobenius equations (E$_{\ell (q - 1)}$) becomes \begin{align} \label{eq-Fr-empty-even} \varepsilon_{\emptyset}^{(1)} = \varepsilon_{\emptyset} (t - \theta)^{\ell (q - 1)} + \sum_{\substack{0 \leq j < \ell \\ j \not\equiv \ell \bmod q}} \varepsilon_{((\ell - j) (q - 1))}(t - \theta)^{\ell (q - 1)} \end{align} and \begin{align*} \varepsilon_{((\ell - j) (q - 1), \fs)}^{(1)} &= \varepsilon_{((\ell - j) (q - 1), \fs)} (t - \theta)^{j (q - 1) - \wt(\fs)} \\ & \ \ \ + \sum_{\substack{s' > 0 \\ (\fs, s') \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})}} \varepsilon_{((\ell - j) (q - 1), \fs, s')} (t - \theta)^{j (q - 1) - \wt(\fs)} \end{align*} for $0 \leq j < \ell$ with $j \not\equiv \ell \bmod q$ and $\fs \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})$. Since $(\varepsilon_{((\ell - j) (q - 1), \fs)})_{\fs \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})} \in \sX_{j (q - 1)}$ for each $j$, the induction hypothesis implies that we have \begin{align*} \varepsilon_{((\ell - j) (q - 1))} = f_{j} \sum_{i = 0}^{j} b_{j i} (t - \theta)^{i}, \ \ f_{j} \in \Fq(t), \ \ b_{j i} \in T \Fp[T]_{(T)} \ \ \textrm{and} \ \ b_{j j} = 1. \end{align*} By Lemma \ref{lem-degree}, we have $\deg_{\theta} \varepsilon_{\emptyset} \leq \ell$. We write $\varepsilon_{\emptyset} = \sum_{i = 0}^{\ell} a_{i} (t - \theta)^{i}$ with $a_{i} \in \Fq(t)$, and plug it into \eqref{eq-Fr-empty-even}, then we obtain \begin{align*} \sum_{j = 0}^{\ell} a_{j} (t - \theta^{q})^{j} = \sum_{i = 0}^{\ell} a_{i} (t - \theta)^{\ell (q - 1) + i} + \sum_{\substack{0 \leq j < \ell \\ j \not\equiv \ell \bmod q}} f_{j} \sum_{i = 0}^{j} b_{j i} (t - \theta)^{\ell (q - 1) + i}. \end{align*} Note that the equation above can be expressed as \begin{align*} \sum_{i = 0}^{\ell - 1} \sum_{j = i}^{\ell - 1} \binom{j}{i} T^{j - i} a_{j} (t - \theta)^{i q} &- \sum_{i = 0}^{\ell - 1} a_{i} (t - \theta)^{\ell (q - 1) + i} - \sum_{i = 0}^{\ell - 1} \sum_{\substack{i \leq j < \ell \\ j \not\equiv \ell \bmod q}} b_{j i} f_{j} (t - \theta)^{\ell (q - 1) + i} \\ &= a_{\ell} \sum_{i = 0}^{\ell - 1} \binom{\ell}{i} T^{\ell - i} (t - \theta)^{i q}. 
\end{align*} Note further that \begin{align*} i q < w = (\ell - m) q - n \ \Longleftrightarrow \ i < \ell - m - \dfrac{n}{q}. \end{align*} By comparing the coefficients of $(t - \theta)^{\nu}$ for $\nu = i q$ $(0 \leq i < \ell - m)$ or $\ell (q - 1) \leq \nu < \ell q$, this gives a system of linear equations with $(2 \ell - m)$-equations and $(2 \ell - m + 1)$-variables. We write $\ba := (a_{0}, \ldots, a_{\ell - 1})^{\tr}$ and $\bff := (f_{0}, \ldots, f_{\ell - 1})^{\tr}$ (excluding $f_{n}, f_{q + n}, \ldots, f_{(m - 1) q + n}$). Then the system mentioned above can be expressed as \begin{align}\label{E:matrx U} U \left( \begin{array}{@{}c@{}} \ba \\ \bff \end{array} \right) = a_{\ell} \bb, \ \ U \in \Mat_{2 \ell - m}(\Fp[T]_{(T)}) \ \ \textrm{and} \ \ \bb \in (T \Fp[T]_{(T)})^{2 \ell - m}. \end{align} As our goal of this case $w=\ell (q-1)$ is to show $\dim_{\FF_{q}(t)}\sX_{w}=1$, it is to show that the existence of $\begin{pmatrix} \ba\\ \bff \end{pmatrix}$ is unique up to $\FF_{q}(t)$-scalar multiple. Therefore, from~\eqref{E:matrx U} it suffices to show that $\det(U \bmod T) \neq 0$ in $\Fp$. Indeed, we have \begin{align*} &(U \bmod T) + \left( \begin{array}{@{}cc@{}} O & O \\ \Id_{\ell} & O \end{array} \right) \\ &= \begin{array}{rc@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}|@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{}ll} & \multicolumn{7}{c}{\overbrace{\hspace{6.0em}}^{\ell}} & \multicolumn{16}{c}{\overbrace{\hspace{20.5em}}^{\ell - m}} & & \\ \ldelim( {24}{4pt}[] & 1 & & & & & & & & & & & & & & & & & & & & & & & \rdelim) {24}{2pt}[] & \rdelim\}{3}{10pt}[{\footnotesize $\ell - m$}] \\ & & \ddots & & & & & & & & & & & & & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & & & & & & & & & & & & & \\ \cline{2-22} & & & & & & & & - 1 & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $n$}] \\ & & & & & & & & & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \\ & & & & & & & & & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & 1 & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & 1 & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & \ddots & & & & & & & & & & & \\ & & & & & & \ddots & & & & & & & & & \ddots & & & & & & & & & & \vdots \\ & & & & & & & & & & & & & & & & \multicolumn{1}{c@{}}{\ddots} & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & 
& & \\ \cdashline{2-22} & & & & & & & 1 & & & & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & & & & & \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & - 1 & & & & \end{array}. \end{align*} The matrix $\left(U \bmod T\right)+ \left( \begin{array}{@{}cc@{}} O & O \\ \Id_{\ell} & O \end{array} \right)$ can be described as follows: \begin{itemize} \item [(i)] We first set up the lower right submatrix by beginning with a matrix of diagonal blocks, the first one being $-\Id_n$ and the remaining $m$ of them being equal to $-\Id_{q-1}$. Then we insert zero rows between the blocks to obtain our submatrix of size $\ell \times (\ell -m)$. \item [(ii)] We then put the identity matrix $\Id_{\ell-m}$ on the upper left corner. \item [(iii)] Finally we define the $\left((\ell-m)+1\right)$st column to the $\ell$th column, each of length $2\ell -m$. In the $\left((\ell-m)+i\right)$th column, all entries are zero except for the entry $1$ occurring in the same row as the $i$th row of zeroes inserted into the matrix constructed in step (i), i.e. the row of zeroes lying above the $i$th diagonal block equal to $-\Id_{q-1}$ in step (i). Thus we have our $2(\ell-m)\times2(\ell-m)$ matrix $\left(U \bmod T\right)+ \left( \begin{array}{@{}cc@{}} O & O \\ \Id_{\ell} & O \end{array} \right)$. \end{itemize} For example, let $q = 3$, $w = 14 = 7 (3 - 1)$ \ ($\ell = 7 = 2 \cdot 3 + 1$, $m = 2$, $n = 1$). Then we have \begin{align*} U \bmod T = \left( \begin{array}{@{}ccccccc|ccccc@{}} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 \\ \end{array} \right) \end{align*} To evaluate $\det \left(U \bmod T\right)$ we appeal to elementary row operations which do not change the value of the determinant. Adding each of the top $\ell-m$ rows to the corresponding rows in the lower left matrix kills all entries there coming from $-\Id_{\ell}$ except the $m$ bottom ones, which lie in the $((\ell-m)+1)$st one to the $\ell$th column. But each $-1$ in these last $m$ rows can be killed by adding the appropriate row from step (iii) which has all zero entries except $1$ in a unique column between $\ell -m +1$ and $ \ell$ . Doing this leaves only the non-zero entries in those $m$ columns the ones inserted in step (iii). We end up with a matrix which has a unique $\pm 1$ in each row and column. So the elementary product expansion evaluates the determinant as $\pm 1$. Note that one can also use elementary column operations to show the desired identity. Next we assume that $1 \leq s < q - 1$ and put $w := \ell (q - 1) + s$. 
The system of Frobenius equations (E$_{w}$) is \begin{align} \label{eq-Fr-empty-odd} \varepsilon_{\emptyset}^{(1)} = \varepsilon_{\emptyset} (t - \theta)^{w} + \sum_{\substack{0 \leq j \leq \ell \\ j \not\equiv \ell - s \bmod q}} \varepsilon_{((\ell - j) (q - 1) + s)}(t - \theta)^{w} \end{align} and \begin{align*} \varepsilon_{((\ell - j) (q - 1) + s, \fs)}^{(1)} &= \varepsilon_{((\ell - j) (q - 1) + s, \fs)} (t - \theta)^{j (q - 1) - \wt(\fs)} \\ & \ \ \ + \sum_{\substack{s' > 0 \\ (\fs, s') \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})}} \varepsilon_{((\ell - j) (q - 1) + s, \fs, s')} (t - \theta)^{j (q - 1) - \wt(\fs)} \end{align*} for $0 \leq j \leq \ell$ with $j \not\equiv \ell - s \bmod q$ and $\fs \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})$. Since $(\varepsilon_{((\ell - j) (q - 1) + s, \fs)})_{\fs \in \sI(\cI^{\mathrm{ND}_{0}}_{j (q - 1)})} \in \sX_{j (q - 1)}$ for each $j$, we have \begin{align*} \varepsilon_{((\ell - j) (q - 1) + s)} = f_{j} \sum_{i = 0}^{j} b_{j i} (t - \theta)^{i}, \ \ f_{j} \in \Fq(t), \ \ b_{j i} \in T \Fp[T]_{(T)} \ \ \textrm{and} \ \ b_{j j} = 1. \end{align*} By Lemma \ref{lem-degree}, we have $\deg_{\theta} \varepsilon_{\emptyset} \leq \ell$. We write $\varepsilon_{\emptyset} = \sum_{i = 0}^{\ell} a_{i} (t - \theta)^{i}$ with $a_{i} \in \Fq(t)$. Then \eqref{eq-Fr-empty-odd} becomes \begin{align*} \sum_{j = 0}^{\ell} a_{j} (t - \theta^{q})^{j} = \sum_{i = 0}^{\ell} a_{i} (t - \theta)^{w + i} + \sum_{\substack{0 \leq j \leq \ell \\ j \not\equiv \ell - s \bmod q}} f_{j} \sum_{i = 0}^{j} b_{j i} (t - \theta)^{w + i}. \end{align*} This identity can be written as \begin{align*} \sum_{i = 0}^{\ell} \sum_{j = i}^{\ell} \binom{j}{i} T^{j - i} a_{j} (t - \theta)^{i q} &- \sum_{i = 0}^{\ell} a_{i} (t - \theta)^{w + i} - \sum_{i = 0}^{\ell} \sum_{\substack{i \leq j \leq \ell \\ j \not\equiv \ell - s \bmod q}} b_{j i} f_{j} (t - \theta)^{w + i} = 0. \end{align*} We note that \begin{align*} i q < w = (\ell - m) q + (s - n) \ \Longleftrightarrow \ i < \ell - m + \dfrac{s - n}{q}. \end{align*} When $s \leq n$, by comparing the coefficients of $(t - \theta)^{\nu}$ for $\nu = i q$ $(0 \leq i < \ell - m)$ or $w \leq \nu \leq w + \ell$, this gives a system of linear equations with $(2 \ell - m + 1)$-equations and $(2 \ell - m + 1)$-variables. We write $\ba := (a_{0}, \ldots, a_{\ell})^{\tr}$ and $\bff := (f_{0}, \ldots, f_{\ell})^{\tr}$ (excluding $f_{n - s}, f_{q + n - s}, \ldots, f_{\ell - s}$). Then the system can be written as \begin{align*} U \left( \begin{array}{@{}c@{}} \ba \\ \bff \end{array} \right) = \mathbf{0} \ \ \textrm{and} \ \ U \in \Mat_{2 \ell - m + 1}(\Fp[T]_{(T)}). \end{align*} Since we aim to show that the solution space $\sX_{w}$ is trivial in this case, it suffices to show that $\det(U \bmod T) \neq 0$ in $\Fp$. 
Indeed, we have \begin{align*} &(U \bmod T) + \left( \begin{array}{@{}cc@{}} O & O \\ \Id_{\ell + 1} & O \end{array} \right) \\ &= \begin{array}{rc@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}|@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{}ll} & \multicolumn{7}{c}{\overbrace{\hspace{6.0em}}^{\ell + 1}} & \multicolumn{16}{c}{\overbrace{\hspace{20.5em}}^{\ell - m}} & & \\ \ldelim( {24}{4pt}[] & 1 & & & & & & & & & & & & & & & & & & & & & & & \rdelim) {24}{2pt}[] & \rdelim\}{3}{10pt}[{\footnotesize $\ell - m$}] \\ & & \ddots & & & & & & & & & & & & & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & & & & & & & & & & & & & \\ \cline{2-22} & & & & & & & & - 1 & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $n - s$}] \\ & & & & & & & & & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \\ & & & & & & & & & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & 1 & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & 1 & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & \ddots & & & & & & & & & & & \\ & & & & & & \ddots & & & & & & & & & \ddots & & & & & & & & & & \vdots \\ & & & & & & & & & & & & & & & & \multicolumn{1}{c@{}}{\ddots} & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & \\ \cdashline{2-22} & & & & & & & 1 & & & & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $s$}] \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & & & & & \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & - 1 & & & & \end{array} \end{align*} and hence $\det(U \bmod T) = \pm 1$. When $s > n$, by comparing the coefficients of $(t - \theta)^{\nu}$ for $\nu = i q$ $(0 \leq i \leq \ell - m)$ or $w \leq \nu \leq w + \ell$, this gives a system of linear equations with $(2 \ell - m + 2)$-equations and $(2 \ell - m + 2)$-variables. We write $\ba := (a_{0}, \ldots, a_{\ell})^{\tr}$ and $\bff := (f_{0}, \ldots, f_{\ell})^{\tr}$ (excluding $f_{q + n - s}, f_{2 q + n - s}, \ldots, f_{\ell - s}$). Then the system can be written as \begin{align*} U \left( \begin{array}{@{}c@{}} \ba \\ \bff \end{array} \right) = \mathbf{0} \ \ \textrm{and} \ \ U \in \Mat_{2 \ell - m + 2}(\Fp[T]_{(T)}). 
\end{align*} It is enough to show that $\det(U \bmod T) \neq 0$ in $\Fp$. Indeed, we have \begin{align*} &(U \bmod T) + \left( \begin{array}{@{}cc@{}} O & O \\ \Id_{\ell + 1} & O \end{array} \right) \\ &= \begin{array}{rc@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}|@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{\hspace{0.3em}}c@{}ll} & \multicolumn{7}{c}{\overbrace{\hspace{6.0em}}^{\ell + 1}} & \multicolumn{16}{c}{\overbrace{\hspace{20.5em}}^{\ell - m + 1}} & & \\ \ldelim( {24}{4pt}[] & 1 & & & & & & & & & & & & & & & & & & & & & & & \rdelim) {24}{2pt}[] & \rdelim\}{3}{10pt}[{\footnotesize $\ell - m + 1$}] \\ & & \ddots & & & & & & & & & & & & & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & & & & & & & & & & & & & \\ \cline{2-22} & & & & & & & & - 1 & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q + n - s$}] \\ & & & & & & & & & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & & & & \\ & & & & & & & & & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & 1 & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & & & & & & \\ & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & & & & & & \\ \cdashline{2-22} & & & & & 1 & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & \ddots & & & & & & & & & & & \\ & & & & & & \ddots & & & & & & & & & \ddots & & & & & & & & & & \vdots \\ & & & & & & & & & & & & & & & & \multicolumn{1}{c@{}}{\ddots} & & & & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & \multicolumn{1}{c@{}:}{} & & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $q - 1$}] \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & \multicolumn{1}{c@{}:}{} & & & & & & & \\ & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & \multicolumn{1}{c@{}:}{- 1} & & & & & & & \\ \cdashline{2-22} & & & & & & & 1 & & & & & & & & & & & \multicolumn{1}{c@{}:}{} & 0 & & & & & & \\ \cdashline{2-22} & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{- 1} & & & & & & \rdelim\}{3}{10pt}[{\footnotesize $s$}] \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & \ddots & & & & & \\ & & & & & & & & & & & & & & & & & & & \multicolumn{1}{:c@{}}{} & & - 1 & & & & \end{array} \end{align*} and hence $\det(U \bmod T) = \pm 1$. \end{proof} \section{Linear independence}\label{Sec:Linear Indep} In this section, we aim to show that $\{\Li_\fs(\bone)\in k_\infty\mid\fs\in\INDw\}$ is a $k$-linearly independent set. To begin with, we adopt the following setting. Let $(-\theta)^{\frac{1}{q-1}}\in \ok^\times$ be a fixed $(q-1)$st root of $-\theta$. Following~\cite{ABP04}, we consider the following power series \begin{align*} \Omega(t):=(-\theta)^{\frac{-q}{q-1}}\prod_{i=1}^\infty\left(1-\frac{t}{\theta^{q^i}}\right)\in\power{\overline{k_\infty}}{t}. 
\end{align*} Note that it satisfies the Frobenius difference equation \begin{align*} \Omega^{(-1)}(t)=(t-\theta)\Omega(t). \end{align*} In addition, we have $\Omega(t)\in\mathcal{E}$ and $\tpi:=1/\Omega(\theta)$ is the fundamental period of the Carlitz module (see~\cite[Sec.~3.1.2]{ABP04}). We also set $\LL_{0} := 1$ and $\LL_{d} := (t - \theta^{q}) \cdots (t - \theta^{q^{d}})$ for $d \geq 1$. Put $\cL(\emptyset) := 1$ and for each index $\fs = (s_{1}, \ldots, s_{r}) \in\cI_{>0}$, we define the following deformation series \begin{align*} \cL(\fs) := \Omega^{\wt(\fs)} \sum_{d_{1} > \cdots > d_{r} \geq 0} \dfrac{1}{\LL_{d_{1}}^{s_{1}} \cdots \LL_{d_{r}}^{s_{r}}} \in \TT, \end{align*} which was first introduced by Papanikolas~\cite{P08} for $\dep(\fs)=1$. It is shown in \cite[Lemma 5.3.1]{C14} that $\cL(\fs)\in\cE$. Moreover, by \cite[Proposition 2.3.3]{CPY19}, we have \begin{align*} \cL(\fs)|_{t = \theta^{q^{j}}} = (\cL(\fs)|_{t = \theta})^{q^{j}} = (\Li_{\fs}(\bone) / \tpi^{\wt(\fs)})^{q^{j}} \end{align*} for all $j \geq 0$ (see also \cite[Lemma 5.3.5]{C14}). Let $P = \sum_{\fs \in \cI} a_{\fs} [\fs] \in \cH$ $(a_{\fs} \in k)$. For the convenience of later use, we set \begin{align*} \Li_P(\bone):=\sum_{\fs} a_{\fs} \Li_{\fs}(\bone). \end{align*} \subsection{The key lemma} The fundamental system of Frobenius difference equations that $\left\{ \cL(\bn)\right\}_{\bn\in \sI(\fs)}$ satisfy is given in~\cite[(5.3.3),(5.3.4)]{C14} as well as~\cite[(2.3.4), (2.3.7)]{CPY19} with all $Q_{i}=1$ there, and it plays a crucial role in the proof of the following Lemma when applying ABP-criterion. We further mention that the first formulation of the following Lemma arises from the ideas in the proof of~\cite[Thm.~2.5.2]{CPY19}, and the second one is an extension of part of Step~3 in the proof of~\cite[Thm.~6]{ND21}, which dealt with $w\leq 2q-2$. \begin{lemma} \label{lemma-rational} Let $w \geq 0$ and $\emptyset\neq M \subset \cI_{w}$. Let $P = \sum_{\fs \in \cI} a_{\fs} [\fs] \in \cH$ $(a_{\fs} \in k)$ and suppose that \begin{itemize} \item $\Supp(P) \subset M$, \item $\Li_{P}(\bone)$ is Eulerian, that is, $\Li_{P}(\bone) = \sum_{\fs} a_{\fs} \Li_{\fs}(\bone) \in k \cdot \tpi^{w}$, \item $\Li_{\fs}(\bone)$ $(\fs \in \sI(M) \cap \cI_{\ell})$ are linearly independent over $k$ for each $0 \leq \ell < w$. \end{itemize} Then the following hold: \begin{enumerate} \item We have \begin{align*} \sum_{\fs' \in \cI_{w - \wt(\fs)}} a_{(\fs, \fs')} \Li_{\fs'}(\bone) = 0 \ \ \textrm{or} \ \ (q - 1) \mid (w - \wt(\fs)) \end{align*} for each $\fs \in \sI(M)$. In particular, if $\Li_{\fs'}(\bone)$ $(\fs' \in \cI_{w - \wt(\fs)}, (\fs, \fs') \in \Supp(P))$ are linearly independent over $k$ for each $\fs \in \sI(\Supp(P)) \setminus \{ \emptyset \}$ then \begin{align*} \Supp(P) \subset \{ (s_{1}, \ldots, s_{r}) \in \cI_{w} \ | \ (q - 1) \mid s_{2}, \ldots, s_{r} \}. \end{align*} \item The system of Frobenius equations \eqref{eq-Frob} has a solution $(\varepsilon_{\fs})_{\fs \in \sI(M)} \in \Fq(t)[\theta]^{\sI(M)}$ such that $\varepsilon_{\fs} = a_{\fs}|_{\theta = t}$ for each $\fs \in M$. \end{enumerate} \end{lemma} \begin{proof} In what follows, our essential arguments are rooted in the ideas of the proof of~\cite{CPY19}. Let $\alpha_{\fs} = \alpha_{\fs}(t) := a_{\fs}|_{\theta = t} \in \Fq(t)$, $c := \Li_{P}(\bone) / \tpi^{w} \in k$ and $\sI(M)' := \sI(M) \setminus M$. We may assume that $w > 0$, $P \neq 0$ and $\alpha_{\fs} \in \Fq[t]$ $(\fs \in \cI)$. In particular, $\sI(M)' \neq \emptyset$. 
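To fix ideas before giving the general construction (this special case serves only as an illustration and is not used in the argument), consider $M=\INDz_{q}=\left\{(1,q-1)\right\}$, so that $w=q$, $\sI(M)=\left\{\emptyset,(1),(1,q-1)\right\}$ and $\sI(M)'=\left\{\emptyset,(1)\right\}$. Ordering the rows and columns as $(\emptyset,(1))$, the matrices $\Phi'$ and $\Psi'$ introduced just below specialize to
\[
\Phi'=\left( \begin{array}{cc} (t-\theta)^{q} & 0 \\ (t-\theta)^{q} & (t-\theta)^{q-1} \end{array} \right) \ \hbox{ and } \ \Psi'=\left( \begin{array}{cc} \Omega^{q} & 0 \\ \cL((1)) \Omega^{q-1} & \Omega^{q-1} \end{array} \right),
\]
and one can check directly (using $\Omega^{(-1)}=(t-\theta)\Omega$ and the definition of $\cL((1))$) that $(\Psi')^{(-1)}=\Phi' \Psi'$, in accordance with the difference equations recorded below.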
We define matrices $\Phi' = (\Phi'_{\fn, \fs})_{\fn, \fs \in \sI(M)'} \in \Mat_{|\sI(M)'|}(\ok[t])$ and $\Psi' = (\Psi'_{\fn, \fs})_{\fn, \fs \in \sI(M)'} \in \GL_{|\sI(M)'|}(\TT)$ indexed by the set $\sI(M)'$. They are defined by \begin{align*} \Phi'_{\fn, \fs} := \left\{ \begin{array}{ll} (t - \theta)^{w - \wt(\fs)} & (\fs = \fn \ \textrm{or} \ \fs = \fn_{+}) \\ 0 & (\textrm{otherwise}) \end{array} \right. \textrm{and} \ \Psi'_{\fn, \fs} := \left\{ \begin{array}{ll} \cL(\fs') \Omega^{w - \wt(\fn)} & (\fn = (\fs, \fs')) \\ 0 & (\textrm{otherwise}) \end{array} \right.. \end{align*} In particular, we have $\Psi'_{\fn, \emptyset} = \cL(\fn) \Omega^{w - \wt(\fn)}$. We also define row vectors $\bv = (v_{\fs})_{\fs \in \sI(M)'} \in \Mat_{1 \times |\sI(M)'|}(\ok[t])$ and $\bff = (f_{\fs})_{\fs \in \sI(M)'} \in \Mat_{1 \times |\sI(M)'|}(\TT)$ by \begin{align*} v_{\fs} := \left\{ \begin{array}{ll} \alpha_{(\fs, w - \wt(\fs))} (t - \theta)^{w - \wt(\fs)} & ((\fs, w - \wt(\fs)) \in M) \\ 0 & (\textrm{otherwise}) \end{array} \right. \textrm{and} \ f_{\fs} := \sum_{\fs' \in \cI_{w - \wt(\fs)}} \alpha_{(\fs, \fs')} \cL(\fs'). \end{align*} In particular, we have $f_{\emptyset} := \sum_{\fs' \in \cI_{w}} \alpha_{\fs'} \cL(\fs') = \cL(P)$. Then we set \begin{align*} \Phi := \left( \begin{array}{cc} \Phi' & 0 \\ \bv & 1 \end{array} \right) \in \Mat_{|\sI(M)'| + 1}(\ok[t]) \ \ \textrm{and} \ \ \Psi := \left( \begin{array}{cc} \Psi' & 0 \\ \bff & 1 \end{array} \right) \in \GL_{|\sI(M)'| + 1}(\TT). \end{align*} We also set \begin{align*} \widetilde{\Phi} := \left( \begin{array}{cc} 1 & 0 \\ 0 & \Phi \end{array} \right) \in \Mat_{|\sI(M)'| + 2}(\ok[t]) \ \ \textrm{and} \ \ \widetilde{\psi} := \left( \begin{array}{c} 1 \\ (\Psi'_{\fn, \emptyset})_{\fn \in \sI(M)'} \\ f_{\emptyset} \end{array} \right) \in \Mat_{(|\sI(M)'| + 2) \times 1}(\TT). \end{align*} Then using \cite[(2.3.4), (2.3.7)]{CPY19} one checks that \begin{align*} \Psi^{(- 1)} = \Phi \Psi \ \ \textrm{and} \ \ \widetilde{\psi}^{(- 1)} = \widetilde{\Phi} \widetilde{\psi}. \end{align*} We mention that we follow~\cite{ND21} in using double indices to indicate the entries of $\Phi$ here. Putting such an $f_{\emptyset}$ into a system of Frobenius difference equations was first used by the first named author in~\cite{C16}, and later used in \cite{CH21, ND21}; such an $f_{\emptyset}$ naturally appears in the period matrix of the fiber coproduct of rigid analytically trivial dual $t$-motives in \cite{CM21}. By \cite[Theorem 3.1.1]{ABP04}, there exist $g \in \ok(t)$ and $\bg = (g_{\fs})_{\fs \in \sI(M)'} \in \Mat_{1 \times |\sI(M)'|}(\ok(t))$ such that \begin{align*} \widetilde{\bg} \widetilde{\psi} = 0, \ \ \textrm{$g$ and $\bg$ are regular at $t = \theta$} \ \ \textrm{and} \ \ \widetilde{\bg}|_{t = \theta} = (- c, 0, \ldots, 0, 1), \end{align*} where $\widetilde{\bg} := (g, \bg, 1)$. We set $B = B(t)$ and $B_{\fs} = B_{\fs}(t)$ in $\ok(t)$ by \begin{align*} (B, (B_{\fs})_{\fs \in \sI(M)'}, 0) := \widetilde{\bg} - \widetilde{\bg}^{(- 1)} \widetilde{\Phi} \in \Mat_{1 \times (|\sI(M)'| + 2)}(\ok(t)). \end{align*} We claim that $\widetilde{\bg}^{(- 1)} \widetilde{\Phi} = \widetilde{\bg}$, that is, $B = B_{\fs} = 0$ for all $\fs \in \sI(M)'$.
Indeed, \begin{align*} B + \sum_{\fs \in \sI(M)'} B_{\fs} \cL(\fs) \Omega^{w - \wt(\fs)} = (B, (B_{\fs})_{\fs \in \sI(M)'}, 0) \widetilde{\psi} = (\widetilde{\bg} - \widetilde{\bg}^{(- 1)} \widetilde{\Phi}) \widetilde{\psi} = \widetilde{\bg} \widetilde{\psi} - (\widetilde{\bg} \widetilde{\psi})^{(- 1)} = 0 \end{align*} and $\sI(M)' = \bigsqcup_{\ell = 0}^{w - 1} \sI(M) \cap \cI_{\ell}$ imply the equality \begin{align} \label{eq-B} B + \sum_{\fs \in \sI(M) \cap \cI_{w - 1}} B_{\fs} \cL(\fs) \Omega + \sum_{\fs \in \sI(M) \cap \cI_{w - 2}} B_{\fs} \cL(\fs) \Omega^{2} + \cdots + \sum_{\fs \in \sI(M) \cap \cI_{0}} B_{\fs} \cL(\fs) \Omega^{w} = 0. \end{align} We note that \begin{itemize} \item $B$ and $B_{\fs}$ $(\fs \in \sI(M)')$ are regular at $t = \theta^{q^{i}}$ for $j \gg 0$, \item $\Omega$ and $\cL(\fs)$ $(\fs \in \cI)$ are entire, \item $\Omega(\theta^{q^{j}}) = 0$ for $j \geq 1$, \item $\cL(\fs)|_{t = \theta^{q^{j}}} = (\Li_{\fs}(\bone) / \tpi^{\wt(\fs)})^{q^{j}}$ for $\fs \in \cI$ and $j \geq 0$. \end{itemize} Thus evaluating at $t = \theta^{q^{j}}$ for $j \gg 0$ in \eqref{eq-B} implies $B(\theta^{q^{j}}) = 0$. Since $B$ is rational, we have $B = 0$. Then dividing \eqref{eq-B} by $\Omega$ and evaluating at $t = \theta^{q^{j}}$ for $j \gg 0$ imply \begin{align*} \sum_{\fs \in \sI(M) \cap \cI_{w - 1}} B_{\fs}(\theta^{q^{j}}) (\Li_{\fs}(\bone) / \tpi^{w - 1})^{q^{j}} = 0. \end{align*} This is equivalent to \begin{align*} \sum_{\fs \in \sI(M) \cap \cI_{w - 1}} B_{\fs}(\theta^{q^{j}})^{q^{- j}} \Li_{\fs}(\bone) = 0. \end{align*} By the assumption of linear independence and \cite[Theorem 2.2.1]{C14}, we have $B_{\fs}(\theta^{q^{j}}) = 0$. Thus we have $B_{\fs} = 0$ for all $\fs \in \sI(M) \cap \cI_{w - 1}$. Repeating this process, we have $B_{\fs} = 0$ for all $\fs \in \sI(M)'$. We note that the claim implies \begin{align} \label{eq-lemma-claim} \left( \begin{array}{cc} \Id & 0 \\ \bg & 1 \end{array} \right)^{(- 1)} \left( \begin{array}{cc} \Phi' & 0 \\ \bv & 1 \end{array} \right) = \left( \begin{array}{cc} \Phi' & 0 \\ 0 & 1 \end{array} \right) \left( \begin{array}{cc} \Id & 0 \\ \bg & 1 \end{array} \right). \end{align} (1) We set \begin{align*} X := \left( \begin{array}{cc} \Id & 0 \\ \bg & 1 \end{array} \right) \Psi = \left( \begin{array}{cc} \Psi' & 0 \\ \bg \Psi' + \bff & 1 \end{array} \right) \in \GL_{|\sI(M)'| + 1}(\Frac(\TT)). \end{align*} By using \eqref{eq-lemma-claim}, we can verify that $X^{(- 1)} = \left( \begin{array}{cc} \Phi' & 0 \\ 0 & 1 \end{array} \right) X$. Thus by \cite[\S 4.1.6]{P08}, \begin{align*} \GL_{|\sI(M)'| + 1}(\Fq(t)) \ni \left( \begin{array}{cc} \Psi' & 0 \\ 0 & 1 \end{array} \right)^{- 1} X = \left( \begin{array}{cc} \Id & 0 \\ \bg \Psi' + \bff & 1 \end{array} \right) =: \left( \begin{array}{cc} \Id & 0 \\ (h_{\fs})_{\fs \in \sI(M)'} & 1 \end{array} \right). \end{align*} Then evaluating at $t = \theta^{q^{j}}$ for $j \gg 0$ in $\bg \Psi' + \bff = (h_{\fs})_{\fs \in \sI(M)'}$, we have $f_{\fs}(\theta^{q^{j}}) = h_{\fs}(\theta^{q^{j}}) = h_{\fs}(\theta)^{q^{j}}$. Since \begin{align*} f_{\fs}(\theta^{q^{j}}) = \sum_{\fs' \in \cI_{w - \wt(\fs)}} \alpha_{(\fs, \fs')}(\theta^{q^{j}}) \cL(\fs')|_{t = \theta^{q^{j}}} = \sum_{\fs' \in \cI_{w - \wt(\fs)}} \alpha_{(\fs, \fs')}(\theta)^{q^{j}} (\Li_{\fs'}(\bone) / \tpi^{w - \wt(\fs)})^{q^{j}} = f_{\fs}(\theta)^{q^{j}}, \end{align*} we have $f_{\fs}(\theta) = h_{\fs}(\theta) \in k$ for each $\fs \in \sI(M)'$. 
Thus we have \begin{align*} \sum_{\fs' \in \cI_{w - \wt(\fs)}} a_{(\fs, \fs')} \Li_{\fs'}(\bone) \in k \cdot \tpi^{w - \wt(\fs)} \end{align*} for each $\fs \in \sI(M)'$. We note that this holds whenever $\fs \in M$. Then the first statement of (1) follows from this relation because $\tpi \in (- \theta)^{\frac{1}{q - 1}} \cdot k_{\infty}^{\times}$. Next we prove the second statement of (1). Let $(\fs, \fs'') \in \Supp(P)$ with $\fs \neq \emptyset$. Then by the first statement of (1) and the assumption on the linear independence of the second statement of (1), we have $(q - 1) \mid (w - \wt(\fs)) = \wt(\fs'')$. (2) By \eqref{eq-lemma-claim} and \cite[Proposition 2.2.1]{CPY19}, there exists $\alpha \in \Fq[t] \setminus \{ 0 \}$ such that $\alpha \bg \in \Mat_{1 \times |\sI(M)|}(\ok[t])$. If we set $\varepsilon_{\fs} := g_{\fs}^{(- 1)}$ $(\fs \in \sI(M)')$ and $\varepsilon_{\fs} := \alpha_{\fs} \in \Fq(t)$ $(\fs \in M)$, then \begin{align*} ((\varepsilon_{\fs})_{\fs \in \sI(M)'}, 1) \left( \begin{array}{c} \Phi' \\ \bv \end{array} \right) = (\varepsilon_{\fs}^{(1)})_{\fs \in \sI(M)'} \ \ \textrm{and} \ \ \varepsilon_{\fs}^{(1)} = \varepsilon_{\fs} \ (\fs \in M) \end{align*} give the desired equations. \end{proof} \subsection{Linear independence} With the crucial properties of Lemma~\ref{lemma-rational} established, we are able to show the following linear independence result. \begin{theorem} \label{theorem-IND-basis} For each $w \geq 0$, the elements \begin{align*} \Li_{\fs}(\bone) \ \ \textrm{with} \ \ \fs \in \INDw \end{align*} form a basis of $\cZ_{w}$. \end{theorem} \begin{proof} (cf.~\cite[p.~388]{ND21} for the special case of $k$-linear independence of $\zeta_{A}(w)$ and $\zeta_{A}(w - (q - 1), q - 1)$ with $w \leq 2 q - 2$.) By Theorem~\ref{theorem-generator}, $\left\{ \Li_{\fs}(\bone)| \fs\in \INDw \right\}$ is a generating set for $\cZ_{w}$. Thus it is enough to show that the given elements are linearly independent over $k$. This is proved by induction on $w$. When $w = 0$ we have $\IND_{0} = \{ \emptyset \}$, and $\Li_{\emptyset}(\bone) = 1$ is linearly independent over $k$. Let $w \geq 1$ and assume that the elements $\Li_{\fs}(\bone)$ with $\fs \in \IND_{w'}$ are linearly independent over $k$ for $w' < w$. Let $P = \sum_{\fs \in \cI} \alpha_{\fs}(\theta) [\fs] \in \cH$ $(\alpha_{\fs} = \alpha_{\fs}(t) \in \Fq(t))$ be a linear relation among $\Li_{\fs}(\bone)$'s over $k$ such that $\Supp(P) \subset \INDw$. By Lemma \ref{lemma-rational} (1) for $M = \Supp(P)$ and the induction hypothesis, we have $\Supp(P) \subset \INDzw$. By Lemma \ref{lemma-rational} (2) for $M = \INDzw$, there exists a solution $(\varepsilon_{\fs}) \in \sX_{w}$ such that $\varepsilon_{\fs} = \alpha_{\fs}$ for all $\fs \in \INDzw$. When $(q - 1) \nmid w$, Theorem \ref{theorem-dimension} implies that $\alpha_{\fs} = 0$ for all $\fs \in \INDzw$. When $(q - 1) \mid w$, Theorem \ref{theorem-generator} implies that there exists $P_{w} = \sum_{\fs \in \cI} \beta_{\fs}(\theta) [\fs] \in \cH_{w}$ $(\beta_{\fs}(t) \in \Fq(t))$ such that $\Supp(P_{w}) \subset \INDw$ and $\Li_{P_{w}}(\bone) = \tpi^{w}$. It is clear that $P_{w} \neq 0$. By Lemma \ref{lemma-rational} (1) for $M = \Supp(P_{w})$, we have $\Supp(P_{w}) \subset \INDzw$. By Lemma \ref{lemma-rational} (2) for $M = \INDzw$, there exists a solution $(\varepsilon_{\fs}') \in \sX_{w}$ such that $\varepsilon'_{\fs} = \beta_{\fs}$ for all $\fs \in \INDzw$. 
Then by Theorem~\ref{theorem-dimension}, there exists an $\alpha \in \FF_{q}(t)$ for which $(\varepsilon_{\fs})=\alpha (\varepsilon_{\fs}')$, and hence we have $P = \alpha(\theta) P_{w}$. Then we have \begin{align*} 0 = \Li_{P}(\bone) = \Li_{\alpha (\theta) P_{w}}(\bone) = \alpha(\theta) \tpi^{w}. \end{align*} Thus $\alpha = 0$ and $P = 0$. \end{proof} \subsection{Proof of Theorem~\ref{T:Main Thm}}\label{Sec: Proof of Main Thm} With the fundamental results established, we can now give a short proof of Theorem~\ref{T:Main Thm}. Given $w\in \ZZ_{>0}$, we claim that $\mathcal{B}_{w}^{\rm{T}}$ is a basis of the $k$-vector space $\cZ_{w}$. By Theorem~\ref{theorem-IND-basis}, $\left\{ \Li_{\fs}(\bone)| \fs\in \INDw \right\}$ is a $k$-basis of $\cZ_{w}$. Since $|\INDw|=|\ITw|$ by Proposition~\ref{Pop:|ITw|=|INDw|}, and $\cB^{\rm{T}}_{w}$ is a generating set of $\cZ_{w}$ by Corollary \ref{Cor:generating set}, we have that \[|\mathcal{B}_{w}^{\rm{T}}| \geq \dim_{k} \cZ_{w}=|\INDw|=|\ITw|\geq |\mathcal{B}_{w}^{\rm{T}}|,\] whence the desired result follows. \subsection{Generating set of linear relations}\label{Sub:Generating set of linear relations} Recall the $k$-linear map $\sU^{\zeta}$ defined in Definition~\ref{def-sU}, and Theorem~\ref{theorem-algo} asserts that for $\fs\in \cI_{w} \setminus \ITw$, \begin{equation}\label{E:Main Relations} \sL^{\zeta}\left( [\fs]-\sU^{\zeta}(\fs) \right)=0. \end{equation} We prove in the following theorem, which verifies the $\sB^{*}$-version of~\cite[Conjecture~5.1]{To18}, that these relations account for all $k$-linear relations among MZV's of the same weight. \begin{theorem}\label{T: DetermineLR} Let $w$ be a positive integer. Then all the $k$-linear relations among the MZV's of weight $w$ are generated by~\eqref{E:Main Relations} for $\fs\in \cI_{w} \setminus \ITw$. In other words, if we denote by $\sLZ_{w}:= \sLZ|_{\cH_{w}}:=\left( [\fs]\mapsto \zeta_{A}(\fs) \right) :\cH_{w} \twoheadrightarrow \cZ_{w}$ and put \[ \sR_{w}:=\Span_{k}\left\{[\fs]-\sU^{\zeta}(\fs) \mid \fs\in \cI_{w} \setminus \ITw \right\}\subset \cH_{w} ,\] then \[\Ker\sLZ_{w}= \sR_{w}.\] Moreover, we have \[\Ker \sLZ=\bigoplus_{w\in \NN}\sR_{w}.\] \end{theorem} \begin{proof} Theorem~\ref{theorem-algo} implies the inclusion $\Ker \sLZ_{w} \supset \sR_{w}$, and Remark~\ref{Rem:Us=s} implies \[ \sR_{w}=\Span_{k}\left\{P-\sU^{\zeta}(P) \mid P \in \cH_{w} \right\}.\] We fix a positive integer $e$ given in Theorem~\ref{theorem-algo}. Then for each $P \in \cH_{w}$, we have \[ P \equiv \sU^{\zeta}(P) \equiv \cdots \equiv (\sU^{\zeta})^{e}(P) \bmod \sR_{w}.\] By these relations and Theorem~\ref{theorem-algo}, the quotient space $\cH_{w} / \sR_{w}$ is spanned by the image of $\ITw$. It follows that \[ |\IT_{w}| \geq \dim_{k} \cH_{w} / \sR_{w} \geq \dim_{k} \cH_{w} / \Ker \sLZ_{w} = \dim_{k} \cZ_{w} = |\IT_{w}|, \] where the last equality comes from Theorem~\ref{T:Main Thm}. So we have $\Ker \sLZ_{w}=\sR_{w}$. By~\cite[Thm.~2.2.1]{C14}, the last assertion follows. \end{proof} \appendix \section{Proof of Theorem \ref{theorem-algo}} The aim of this appendix is to give a detailed proof of Theorem \ref{theorem-algo}. \subsection{Inequalities of depth} \begin{proposition} \label{prop-product-depth} Let $\bullet\in \left\{\Li,\zeta \right\}$ and $\fs, \fn \in \cI$ be indices. Then for each $\fu \in \Supp(\fs *^{\bullet} \fn)$, we have \begin{align*} \max\{ \dep(\fs), \dep(\fn) \} \leq \dep(\fu) \leq \dep(\fs) + \dep(\fn). 
\end{align*} \end{proposition} \begin{proof} Put $r := \dep(\fs)$ and $\ell := \dep(\fn)$. We prove the inequalities by induction on $r + \ell$. Note that in the case of $r = 0$ or $\ell = 0$, namely, $\fs=\emptyset $ or $\fn=\emptyset$, the result follows from the definition of $*^{\bullet}$. Suppose that $r\geq 1$ and $\ell \geq 1$. Since the binary operation $*^{\bullet}$ is commutative, without loss of generality we may assume that $r \geq \ell \geq 1$. We first consider the case that $r > \ell$. By the induction hypothesis, we have: \begin{itemize} \item when $\fu \in \Supp([s_{1}, \fs_{-} *^{\bullet} \fn])$, then $1 + (r - 1) \leq \dep(\fu) \leq 1 + (r - 1) + \ell$; \item when $\fu \in \Supp([n_{1}, \fs *^{\bullet} \fn_{-}])$, then $1 + r \leq \dep(\fu) \leq 1 + r + (\ell - 1)$; \item when $\fu \in \Supp([s_{1} + n_{1}, \fs_{-} *^{\bullet} \fn_{-}])$, then $1 + (r - 1) \leq \dep(\fu) \leq 1 + (r - 1) + (\ell - 1)$; \item when $\fu \in \Supp(D^{\zeta}_{\fs, \fn})$, then there exist $j$, $\fu'$ and $\fu''$ such that $1 \leq j < s_{1} + n_{1}$, $\fu'' \in \Supp(\fs_{-} *^{\zeta} \fn_{-})$ and $\fu' \in \Supp((j) *^{\zeta} \fu'')$ with $\fu = (s_{1} + n_{1} - j, \fu')$. It follows that $r - 1 \leq \dep(\fu'') \leq (r - 1) + (\ell - 1)$, and hence $r - 1 \leq \dep(\fu') \leq 1 + (r - 1) + (\ell - 1)$. \end{itemize} In any case, we have $r \leq \dep(\fu) \leq r + \ell$. In the case of $r = \ell$, we can verify that $r \leq \dep(\fu) \leq 2 r$ by a similar argument as above. \end{proof} \begin{corollary} \label{cor-depth} Let $\fs, \fn \in \cI$ be indices. \begin{enumerate} \item Suppose that both $\fs$ and $\fn$ are non-empty indices. Let $D^{\zeta}_{\fs, \fn}$ be defined in~\eqref{E:DZetaS}. Then for each $\fu \in \Supp(D^{\zeta}_{\fs, \fn})$, we have \begin{align*} \max\{ \dep(\fs), \dep(\fn) \} \leq \dep(\fu) \leq \dep(\fs) + \dep(\fn). \end{align*} \item For any integer $m \geq 0$, we let $\alpha^{\bullet,m}_{q}$ be the $m$-th iteration of $\alpha^{\bullet}_{q}$ defined in~\eqref{E:alpha}. Then for each $\fu \in \Supp(\alpha^{\bullet, m}_{q}(\fs))$, we have \begin{align*} \dep(\fu) \geq \left\{ \begin{array}{@{}ll} 1 + m & (\fs = \emptyset, \ m \geq 1) \\ \dep(\fs) + m & (\textrm{otherwise}) \end{array} \right. \ \ \textrm{and} \ \ \dep(\fu) \leq \dep(\fs) + 2 m. \end{align*} \end{enumerate} \end{corollary} \begin{proof} This is a direct consequence of Proposition \ref{prop-product-depth}. \end{proof} \subsection{Binary relations} Let $\bullet \in \{ \Li, \zeta \}$. For each $d \in \ZZ$ and $R = (P, Q) \in \cH^{\oplus 2}$, we define \begin{align*} \sLB(R) := \sLB(P) + \sLB(Q) \ \ \textrm{and} \ \ \sLB_{d}(R) := \sLB_{d}(P) + \sLB_{d + 1}(Q). \end{align*} We set \begin{align}\label{E:Pbullet} \cP^{\bullet} &:= \{ (P, Q)\in \cH^{\oplus 2} \ | \ \sLB_{d}(P) + \sLB_{d + 1}(Q) = 0 \ \textrm{for all} \ d \in \ZZ \} \end{align} and \begin{equation}\label{E:cPw} \cP^{\bullet}_{w} := \cP^{\bullet} \cap \cH_{w}^{\oplus 2} \end{equation} for $w \geq 0$. Elements in $\cP^{\bullet}$ are called \textit{binary relations}. We note that each binary relation $R = (P, Q) \in \cP^{\bullet}$ induces a $k$-linear relation $\sLB(R) = \sLB(P) + \sLB(Q) = 0$. Indeed, we have \begin{align*} \sLB(P) + \sLB(Q) = \sum_{d \in \ZZ} \sLB_{d}(P) + \sum_{d \in \ZZ} \sLB_{d + 1}(Q) = \sum_{d \in \ZZ} (\sLB_{d}(P) + \sLB_{d + 1}(Q)) = 0. \end{align*} The above ideas were introduced by Todd~\cite{To18}. 
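For the reader who wishes to experiment with the index combinatorics, the following is a minimal computational sketch (in Python; it is an illustration only and not part of the arguments of this paper) which brute-forces the depth bounds of Proposition~\ref{prop-product-depth}. It implements only the three generic terms of the recursion for $*^{\bullet}$ and omits the correction term $D^{\bullet}_{\fs, \fn}$, so it illustrates the classical stuffle recursion rather than $*^{\zeta}$ itself.
\begin{verbatim}
# Indices are tuples of positive integers; depth = length, weight = sum.
# Three-term stuffle recursion (the correction term D_{s,n} is omitted).
from itertools import product

def stuffle(s, n):
    """Formal stuffle product of two indices, as a list of indices."""
    if not s:
        return [n]
    if not n:
        return [s]
    terms = [(s[0],) + u for u in stuffle(s[1:], n)]
    terms += [(n[0],) + u for u in stuffle(s, n[1:])]
    terms += [(s[0] + n[0],) + u for u in stuffle(s[1:], n[1:])]
    return terms

def indices(max_weight):
    """All indices (compositions) of weight at most max_weight."""
    out = [()]
    for w in range(1, max_weight + 1):
        for r in range(1, w + 1):
            out += [c for c in product(range(1, w + 1), repeat=r)
                    if sum(c) == w]
    return out

for s in indices(5):
    for n in indices(5):
        for u in stuffle(s, n):
            assert max(len(s), len(n)) <= len(u) <= len(s) + len(n)
print("depth bounds hold for all stuffle terms up to weight 5")
\end{verbatim}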
\begin{example} Thakur proved the identity \cite[Thm.~5]{T09b} \begin{align*} \sLZ_{d}(q) - L_{1} \sLZ_{d + 1}(1, q - 1) &= 0 \ \ \ (d \in \ZZ). \end{align*} Note that by Remark \ref{rmk-C=A} we can replace `$\zeta$' by `$\Li$' in the above equation. It follows that for $\bullet\in \left\{\Li, \zeta \right\}$, we have a binary relation \begin{equation}\label{E:R1} R_{1} := ([q], \ - L_{1} [1, q - 1]) \in \cP^{\bullet}_{q}; \end{equation} namely for each $d\in \ZZ$, the following equation holds: \[ \sLB_{d}(q) - L_{1} \sLB_{d + 1}(1, q - 1) = 0.\] \end{example} \subsection{Maps between relations}\label{Sec:Maps B,C,BC} In what follows, the essential ideas we use are rooted in~\cite{To18, ND21}. Let $\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0}$ be a non-empty index, we recall that $\fs_{+} := (s_{1}, \ldots, s_{r - 1})$ and $\fs_{-} := (s_{2}, \ldots, s_{r})$, and the operator $\boxplus: \cH^{\oplus 2}\rightarrow \cH$ is defined in Definition~\ref{Def:boxplus}. For $\bullet\in \left\{\Li,\zeta \right\}$, we define the maps $\sB^{\bullet}_{\fs}, \sC^{\bullet}_{\fs}:\cH_{> 0}^{\oplus 2}\rightarrow \cH_{> 0}^{\oplus 2} $ as follows: For each $R = (P, Q) = (\sum_{\fn \in \cI_{> 0}} a_{\fn} [\fn], \ \sum_{\fn \in \cI_{> 0}} b_{\fn} [\fn]) \in \cH_{> 0}^{\oplus 2}$, we set \begin{align*} \sB^{\bullet}_{\fs}(R) &:= \left( [\fs, P] + [\fs, Q] + (\fs \boxplus Q) + [\fs_{+}, D^{\bullet}_{s_{r}, Q}], \ 0 \right), \\ \sC^{\bullet}_{\fs}(R) &:= \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{> 0}} \left( a_{\fn} [n_{1} + s_{1}, \fn_{-} *^{\bullet} \fs_{-}] + a_{\fn} [n_{1}, \fn_{-} *^{\bullet} \fs] + a_{\fn} D^{\bullet}_{\fn, \fs}, \ b_{\fn} [n_{1}, \fn_{-} *^{\bullet} \fs] \right), \end{align*} where \[D^{\bullet}_{s, Q} := \sum_{\fn \in \cI_{> 0}} b_{\fn} D^{\bullet}_{s, \fn}.\] For any integer $m\geq 0$, we further define the map $\sBC^{\bullet, m}_{q}: \cH_{> 0}^{\oplus 2}\rightarrow \cH_{> 0}^{\oplus 2}$ given by: \[ \sBC^{\bullet, m}_{q}(R) := \left( [q^{\{m\}}, P], \ L_{1}^{m} \alpha^{\bullet, m}_{q}(Q) \right). \] It is clear that these maps are $k$-linear endomorphisms on $\cH_{> 0}^{\oplus 2}$. \begin{proposition} \label{prop-BC} Let $\bullet\in \left\{\Li,\zeta \right\}$. For each $\fs \in \cI_{> 0}$, and integers $m, w$ satisfying $m \geq 0, w > 0$, the maps $\sB^{\bullet}_{\fs}$, $\sC^{\bullet}_{\fs}$ and $\sBC^{\bullet, m}_{q}$ satisfy \begin{align*} \sB^{\bullet}_{\fs}(\cP^{\bullet}_{w}) \subset \cP^{\bullet}_{w + \wt(\fs)}, \ \ \ \sC^{\bullet}_{\fs}(\cP^{\bullet}_{w}) \subset \cP^{\bullet}_{w + \wt(\fs)} \ \ \ \textrm{and} \ \ \ \sBC^{\bullet, m}_{q}(\cP^{\bullet}_{w}) \subset \cP^{\bullet}_{w + m q}, \end{align*} where $\cP^{\bullet}_w$ is defined in~\eqref{E:cPw}. \end{proposition} \begin{proof} When $R \in \cP^{\bullet}_{w}$, it is clear that \begin{align*} \sB^{\bullet}_{\fs}(R) \in \cH_{w + \wt(\fs)}^{\oplus 2}, \ \ \sC^{\bullet}_{\fs}(R) \in \cH_{w + \wt(\fn)}^{\oplus 2} \ \ \textrm{and} \ \ \sBC^{\bullet}_{\fs}(R) \in \cH_{w + \wt(\fs)}^{\oplus 2}. \end{align*} Thus it suffices to show that $\sB^{\bullet}_{\fs}(R), \sC^{\bullet}_{\fs}(R), \sBC^{\bullet, m}_{q}(R) \in \cP^{\bullet}$. For each $R = (\sum_{\fn \in \cI_{w}} a_{\fn} [\fn], \ \sum_{\fn \in \cI_{w}} b_{\fn} [\fn]) \in \cP^{\bullet}_{w}$, the corresponding equalities \begin{align*} \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{i}(\fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{i + 1}(\fn) = 0 \ \ \ (i \in \ZZ) \end{align*} hold. 
Thus for each $s \geq 1$ and $d \in \ZZ$, we have \begin{align*} 0 &= \sLB_{d}(s) \sum_{i < d} \left( \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{i}(\fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{i + 1}(\fn) \right) \\ &= \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{d}(s) \sLB_{< d}(\fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{d}(s) \sLB_{< d}(\fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{d}(s) \sLB_{d}(\fn) \\ &= \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{d}(s, \fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{d}(s, \fn) + \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} b_{\fn} \sLB_{d}(s + n_{1}, \fn_{-}) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{d}(D^{\bullet}_{s, \fn}) \\ &= \sLB_{d}(\sB^{\bullet}_{s}(R)), \end{align*} where the third equality comes from Proposition~\ref{P:product of sLbullet}. This means that $\sB^{\bullet}_{s}(R) \in \cP^{\bullet}$. Note that for each $\fs = (s_{1}, \ldots, s_{r}) \in \cI_{> 0}$, we have $\sB^{\bullet}_{\fs}(R) \in \cP^{\bullet}$ since from the definition of $\sB^{\bullet}_{\fs}$ we have \begin{align*} \sB^{\bullet}_{\fs} = \sB^{\bullet}_{s_{1}} \circ \cdots \circ \sB^{\bullet}_{s_{r}}. \end{align*} Similarly, we have \begin{align*} 0 &= \left( \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{d}(\fn) + \sum_{\fn \in \cI_{w}} b_{\fn} \sLB_{d + 1}(\fn) \right) \sLB_{< d + 1}(\fs) \\ &= \sum_{\fn \in \cI_{w}} a_{\fn} \sLB_{d}(\fn) \sLB_{d}(\fs) + \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} a_{\fn} \sLB_{d}(n_{1}) \sLB_{< d}(\fn_{-}) \sLB_{< d}(\fs) \\ & \ \ \ + \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} b_{\fn} \sLB_{d + 1}(n_{1}) \sLB_{< d + 1}(\fn_{-}) \sLB_{< d + 1}(\fs) \\ & = \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} a_{\fn} \left( \sLB_{d}([n_{1} + s_{1}, \fn_{-} *^{\bullet} \fs_{-}]) + \sLB_{d}(D^{\bullet}_{\fn, \fs}) \right) + \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} a_{\fn} \sLB_{d}([n_{1}, \fn_{-} *^{\bullet} \fs]) \\ & \ \ \ + \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} b_{\fn} \sLB_{d + 1}([n_{1}, \fn_{-} *^{\bullet} \fs]) \\ & = \sLB_{d}(\sC^{\bullet}_{\fn}(R)), \end{align*} where the third equality comes from Proposition~\ref{P:product of sLbullet}. Thus, we have $\sC^{\bullet}_{\fs}(R) \in \cP^{\bullet}$. Finally, we have \begin{align*} \cP^{\bullet} &\ni \sB^{\bullet}_{q}(R) - \sum_{\fn \in \cI_{w}} b_{\fn} \sC^{\bullet}_{\fn}(R_{1}) \\ &= \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} \left( a_{\fn} [q, \fn] + b_{\fn} [q, \fn] + b_{\fn} [q + n_{1}, \fn_{-}] + b_{\fn} D^{\bullet}_{q, \fn}, \ 0 \right) \\ & \ \ \ \ \ - \sum_{\fn = (n_{1}, \fn_{-}) \in \cI_{w}} b_{\fn} \left( \left( [q + n_{1}, \fn_{-}] + [q, \fn] + D^{\bullet}_{q, \fn} \right), \ - L_{1} [1, (q - 1) *^{\bullet} \fn] \right) \\ &= \sum_{\fn \in \cI_{w}} \left( a_{\fn} [q, \fn], \ b_{\fn} L_{1} \alpha^{\bullet}_{q}(\fn) \right) \\ &= \sBC^{\bullet}_{q}(R), \end{align*} where the second equality comes from the definition of $\alpha^{\bullet}_{q}$ given in~\eqref{E:alpha}. Since \begin{align*} \sBC^{\bullet, m}_{q} = \sBC^{\bullet}_{q} \circ \cdots \circ \sBC^{\bullet}_{q} \ \ (\textrm{$m$th iterate of $\sBC^{\bullet}_{q}$}), \end{align*} it shows that $\sBC^{\bullet, m}_{q}(R) \in \cP^{\bullet}$. \end{proof} Let $\fs \in \cI$. We write $\fs = (s_{1}, \ldots ) = (\fs^{\rT}, q^{\{ m \}}, \fs')$ with $\fs^{\rT} \in \IT$, $m \geq 0$ and $\fs' = (s'_{1}, \ldots)$ with $s'_{1} > q$ or $\fs' = \emptyset$, and set $\Init(\fs) := (\fs^{\rT}, q^{\{ m \}})$ and $\ell_{1} := \dep(\Init(\fs))$. 
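To fix the notation with a small example: if, say, $q = 3$ and $(2) \in \IT$, then for $\fs = (2, 3, 3, 5, 1)$ the decomposition above reads $\fs^{\rT} = (2)$, $q^{\{m\}} = (3, 3)$ (so $m = 2$) and $\fs' = (5, 1)$ (note $s'_{1} = 5 > q$), whence $\Init(\fs) = (2, 3, 3)$ and $\ell_{1} = 3$.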
When $\fs' \neq \emptyset$, we set \[\fs'' := (s'_{1} - q, \fs'_{-}) = (s_{\ell_{1} + 1} - q, \fs'_{-}).\] We recall $\sUB(\fs)$ given in Definition \ref{def-sU} by \begin{align*} \sUB(\fs) := \left\{ \begin{array}{@{}ll} - [\fs^{\rT}, q^{\{m + 1\}}, \fs''] + L_{1}^{m + 1} [\fs^{\rT}, \alpha_{q}^{\bullet, m + 1}(\fs'')] & \\ \hspace{5.0em} + L_{1}^{m + 1} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m + 1}(\fs'')) - [\fs^{\rT}, q^{\{m\}}, D^{\bullet}_{q, \fs''}] & (\fs' \neq \emptyset) \\[0.5em] L_{1}^{m} [\fs^{\rT}, \alpha_{q}^{\bullet, m}(\emptyset)] + L_{1}^{m} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m}(\emptyset)) & (\fs' = \emptyset) \end{array} \right. \end{align*} It is clear that $\sUB(\fs) \in \cH_{\wt(\fs)}$. \begin{theorem}[Theorem \ref{theorem-algo} (1)] \label{theorem-sLsU} For each $P \in \cH$, we have $\sLB(\sUB(P)) = \sLB(P)$. \end{theorem} \begin{proof} We may assume that $P = \fs \in \cI$ as above. By the definition of $\sU^{\bullet}$, it suffices to show the desired result in the case $\fs \notin \IT$. We first consider the case that $\fs' \neq \emptyset$. Recall that $R_1$ is given in~\eqref{E:R1} and $\alpha^{\bullet}_{q}$ is defined in~\eqref{E:alpha}. Note that by definition, we have \begin{align*} \sBC^{\bullet, m}_{q}\left( (\sC^{\bullet}_{\fs''}(R_{1}))\right) &= \sBC^{\bullet, m}_{q} \left( [\fs'] + [q, \fs''] + D^{\bullet}_{q, \fs''}, \ - L_{1} \alpha^{\bullet}_{q}(\fs'') \right) \\ &= \left( [q^{\{m\}}, \fs'] + [q^{\{m + 1\}}, \fs''] + [q^{\{m\}}, D^{\bullet}_{q, \fs''}], \ - L_{1}^{m + 1} \alpha^{\bullet, m + 1}_{q}(\fs'') \right). \end{align*} If $\fs^{\rT} = \emptyset$, we can express \begin{equation}\label{E:Appendix EQ1} \sBC^{\bullet, m}_{q}(\sC^{\bullet}_{\fs''}(R_{1})) = \left( [\fs] + [\fs^{\rT}, q^{\{m + 1\}}, \fs''] + [\fs^{\rT}, q^{\{m\}}, D^{\bullet}_{q, \fs''}], \ - L_{1}^{m + 1} [\fs^{\rT}, \alpha^{\bullet, m + 1}_{q}(\fs'')] \right). \end{equation} If $\fs^{\rT} \neq \emptyset$, we have \begin{align}\label{E:Appendix EQ2} &\sB^{\bullet}_{\fs^{\rT}}(\sBC^{\bullet, m}_{q}(\sC^{\bullet}_{\fs''}(R_{1}))) \\ &= \left( [\fs] + [\fs^{\rT}, q^{\{m + 1\}}, \fs''] + [\fs^{\rT}, q^{\{m\}}, D^{\bullet}_{q, \fs''}] - L_{1}^{m + 1} [\fs^{\rT}, \alpha^{\bullet, m + 1}_{q}(\fs'')] - L_{1}^{m + 1} [\fs^{\rT} \boxplus \alpha^{\bullet, m + 1}_{q}(\fs'')], \ 0 \right), \nonumber \end{align} where we use Remark~\ref{remark-D-vanish} and the fact $[\fs^{\rT}_{+},0]=0\in \cH$ from Remark~\ref{Rem: [s,0]=0}. Next we suppose that $\fs' = \emptyset$. Then $m \geq 1$ and we have \begin{align*} \sBC^{\bullet, m - 1}_{q}(R_{1}) = \left( [q^{\{m\}}], \ L_{1}^{m - 1} \alpha^{\bullet, m - 1}_{q}(- L_{1} [1, q - 1]) \right) = \left( [q^{\{m\}}], \ - L_{1}^{m} \alpha^{\bullet, m}_{q}(\emptyset) \right). \end{align*} If $\fs^{\rT} = \emptyset$ then we have \begin{align}\label{E:Appendix EQ3} \sBC^{\bullet, m - 1}_{q}(R_{1}) = \left( [\fs], \ - L_{1}^{m} [\fs^{\rT}, \alpha^{\bullet, m}_{q}(\emptyset)] \right), \end{align} and if $\fs^{\rT} \neq \emptyset$ then by Remark~\ref{remark-D-vanish} and $[\fs^{\rT}_{+},0]=0\in \cH$, we have \begin{align}\label{E:Appendix EQ4} \sB^{\bullet}_{\fs^{\rT}}(\sBC^{\bullet, m - 1}_{q}(R_{1})) = \left( [\fs] - L_{1}^{m} [\fs^{\rT}, \alpha^{\bullet, m}_{q}(\emptyset)] - L_{1}^{m} (\fs^{\rT} \boxplus \alpha^{\bullet, m}_{q}(\emptyset)), \ 0 \right). \end{align} Recall from~\eqref{E:R1} that $R_{1}\in \cP_{q}^{\bullet}$. 
It follows by Proposition~\ref{prop-BC} that the left hand side (LHS) of each of the equations~\eqref{E:Appendix EQ1}, \eqref{E:Appendix EQ2}, \eqref{E:Appendix EQ3} and \eqref{E:Appendix EQ4} belongs to $\cP_{\wt(\fs)}^{\bullet}$. Thus, in each of the four cases above we have \begin{align*} 0 = \sLB(LHS) = \sLB(RHS) = \sLB(\fs) - \sLB(\sUB(\fs)), \end{align*} whence we obtain \[ \sLB(\fs)=\sLB(\sUB(\fs)). \] \end{proof} We set \begin{align*} \sUB_{1}(\fs) &:= \left\{ \begin{array}{ll} - [\fs^{\rT}, q^{\{m + 1\}}, \fs''] + L_{1}^{m + 1} [\fs^{\rT}, \alpha_{q}^{\bullet, m + 1}(\fs'')] & (\fs' \neq \emptyset) \\ L_{1}^{m} [\fs^{\rT}, \alpha_{q}^{\bullet, m}(\emptyset)] & (\fs' = \emptyset) \end{array} \right., \\ \sUB_{2}(\fs) &:= \left\{ \begin{array}{ll} L_{1}^{m + 1} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m + 1}(\fs'')) & (\fs' \neq \emptyset) \\ L_{1}^{m} (\fs^{\rT} \boxplus \alpha_{q}^{\bullet, m}(\emptyset)) & (\fs' = \emptyset) \end{array} \right., \\ \sUB_{3}(\fs) &:= \left\{ \begin{array}{ll} - [\fs^{\rT}, q^{\{m\}}, D^{\bullet}_{q, \fs''}] & (\fs' \neq \emptyset) \\ 0 & (\fs' = \emptyset) \end{array} \right.. \end{align*} It is clear that $\sUB_{i}(\fs) \in \cH_{\wt(\fs)}$ for $1 \leq i \leq 3$ and $\sUB(\fs) = \sUB_{1}(\fs) + \sUB_{2}(\fs) + \sUB_{3}(\fs)$. \begin{lemma} \label{lemma-123} Let $\fs \in \cI \setminus \IT$. Then the following hold: \begin{enumerate} \item If $\fn \in \Supp(\sUB_{1}(\fs))$, then $\dep(\fn) > \dep(\fs)$. \item If $\fn \in \Supp(\sUB_{2}(\fs))$, then $\dep(\fn) \geq \dep(\fs)$ and $\Init(\fn) > \Init(\fs)$ (using lexicographical order). \item If $\fn \in \Supp(\sUB_{3}(\fs))$, then $\dep(\fn) \geq \dep(\fs) > \ell_{1} := \dep(\Init(\fs))$, \ $\Init(\fn) \geq \Init(\fs)$ and $1 \leq n_{\ell_{1} + 1} < s_{\ell_{1} + 1}$. \end{enumerate} \end{lemma} \begin{proof} First we note that since $\fs \notin \IT$, if $\fs' = \emptyset$ then $m \geq 1$. By Corollary \ref{cor-depth}, we have $\dep(\fn) \geq \dep(\fs)$ for all $1 \leq i \leq 3$ and $\fn \in \Supp(\sUB_{i}(\fs))$, and $\dep(\fn) > \dep(\fs)$ for all $\fn \in \Supp(\sUB_{1}(\fs))$. By the definition of $\alpha_{q}^{\bullet}$, when $\fn \in \Supp(\sUB_{2}(\fs))$ we can write $\fn = \fs^{\rT} \boxplus [1, \fn_{0}]$ for some $\fn_{0} \in \cI$. We may assume $\fs^{\rT}\neq \emptyset$ since if $\fs^{\rT}=\emptyset$, we have $\sUB_{2}(\fs)=\fs^{\rT}\boxplus \alpha_{q}^{\bullet, \ell}(\fs'')=0\in \cH$. Let $j:=\dep(\fs^{\rT})$, so that $s_{j}<q$. Since $s_{j} + 1 \leq q$, we have $\Init(\fn) = (s_{1}, \ldots, s_{j - 1}, s_{j} + 1, \ldots) > (\fs^{\rT}, q^{\{m\}}) = \Init(\fs)$. Let $\fn = (n_{1}, \ldots) \in \Supp(\sUB_{3}(\fs))$. We may assume that $\fs' \neq \emptyset$ and $\bullet = \zeta$. In particular, $\dep(\fs) > \ell_{1}$. Note that any $\fn^{\dagger} \in \Supp(D^{\zeta}_{q, \fs''})$ can be written as $\fn^{\dagger} = (s_{\ell_{1} + 1} - i, \fn^{\dagger}_{-})$ for some $1 \leq i < s_{\ell_{1} + 1}$ and $\fn^{\dagger}_{-} \in \cI$. Then by the definition of $\sUB_3(\fs)$, we have $\fn = (\fs^{\rT}, q^{\{m\}}, \fn^{\dagger}) = (\fs^{\rT}, q^{\{m\}}, s_{\ell_{1} + 1} - i, \fn^{\dagger}_{-})$. Thus $\Init(\fn) = (\fs^{\rT}, q^{\{m\}}, \ldots) \geq (\fs^{\rT}, q^{\{m\}}) = \Init(\fs)$ and $n_{\ell_{1} + 1} = s_{\ell_{1} + 1} - i < s_{\ell_{1} + 1}$.
\end{proof} \begin{theorem}[Theorem \ref{theorem-algo} (2)] \label{theorem-sUe} For each $P \in \cH$, there exists $e \geq 0$ such that $\Supp((\sU^{\bullet})^{e}(P)) \subset \IT$, where $(\sU^{\bullet})^{0}$ is defined to be the identity map and for $e\in\ZZ_{>0}$, $(\sU^{\bullet})^{e}$ is defined to be the $e$-th iteration of $\sU^{\bullet}$. \end{theorem} \begin{proof} We may assume that $P = \fs = (s_{1}, \ldots) \in \cI_{w}$ for some $w \geq 0$. By Lemma \ref{lemma-123}, for each index $\fn = (n_{1}, \ldots) \in \Supp(\sUB(\fs))$ one of the following conditions holds: \begin{enumerate} \item $\fs \in \ITw$ (and hence $\fn = \fs$); \item $\fs \notin \ITw$, $\dep(\fn) > \dep(\fs)$; \item $\fs \notin \ITw$, $\dep(\fn) = \dep(\fs)$, \ $\Init(\fn) > \Init(\fs)$ (using lexicographical order); \item $\fs \notin \ITw$, $\dep(\fn) = \dep(\fs)$, \ $\Init(\fn) = \Init(\fs)$, \ $n_{\dep(\Init(\fn)) + 1} < s_{\dep(\Init(\fs)) + 1}$. \end{enumerate} This means that $\fn = \fs \in \ITw$ or \begin{align*} (\dep(\fn); \Init(\fn); - n_{\dep(\Init(\fn)) + 1}) > (\dep(\fs); \Init(\fs); - s_{\dep(\Init(\fs)) + 1}) \end{align*} in $\{ 0, 1, \ldots, w \} \times \Init(\cI_{w}) \times \{ - w, \ldots, - 2, - 1 \}$ with the lexicographical order. Here, when $\Init(\fs) = \fs$ (resp.\ $\Init(\fn) = \fn$), we temporarily put $s_{\dep(\Init(\fs)) + 1} := 1$ (resp.\ $n_{\dep(\Init(\fn)) + 1} := 1$). Since this totally ordered set is finite, this procedure will stop after a finite number of steps. \end{proof} \begin{thebibliography}{99} \bibitem[A86]{A86} G.\ W.\ Anderson, \textit{$t$-motives}, Duke Math. J. \textbf{53} (1986), no. 2, 457--502. \bibitem[ABP04]{ABP04} G. W. Anderson, W. D. Brownawell and M. A. Papanikolas, \textit{Determination of the algebraic relations among special $\Gamma$-values in positive characteristic}, Ann. of Math. (2) \textbf{160} (2004), no. 1, 237--313. \bibitem[AT90]{AT90} G.\ W.\ Anderson and D.\ S.\ Thakur, \textit{Tensor powers of the Carlitz module and zeta values}, Ann. of Math. (2) \textbf{132} (1990), no. 1, 159--191. \bibitem[AT09]{AT09} G.\ W.\ Anderson and D.\ S.\ Thakur, \textit{Multizeta values for $\FF_{q}[t]$, their period interpretation, and relations between them}, Int. Math. Res. Not. IMRN (2009), no. 11, 2038--2055. \bibitem[An04]{An04} Y.\ Andr\'e, \textit{Une introduction aux motifs (motifs purs, motifs mixtes, p\'eriodes)}, Panoramas et Synth\`eses, \textbf{17}. Soci\'et\'e Math\'ematique de France, Paris, 2004. \bibitem[Br12]{Br12} F.\ Brown, \textit{Mixed Tate motives over $\mathbb{Z}$}, Ann.\ of Math.\ (2) \textbf{175} (2012), no.\ 2, 949--976. \bibitem[BGF19]{BGF19} J.\ I.\ Burgos Gil and J.\ Fres\'an, \textit{Multiple zeta values: from numbers to motives}, to appear in Clay Mathematics Proceedings. \bibitem[Ca35]{Ca35} L.\ Carlitz, \textit{On certain functions connected with polynomials in a Galois field}, Duke Math. J. \textbf{1} (1935), no. 2, 137-168. \bibitem[C14]{C14} C.\-Y.\ Chang, \textit{Linear independence of monomials of multizeta values in positive characteristic}, Compositio Math. \textbf{150} (2014), 1789-1808. \bibitem[C16]{C16} C.\-Y.\ Chang, \textit{Linear relations among double zeta values in positive characteristic}, Camb. J. Math. \textbf{4} (2016), no. 3, 289-331. \bibitem[CCM21]{CCM21} C.\-Y.\ Chang, Y.\-T.\ Chen and Y.\ Mishiba, \textit{Algebra structure on multiple zeta values in positive characteristic}, \url{https://arxiv.org/abs/2007.08264}. \bibitem[CH21]{CH21} Y.-T. Chen and R.
Harada, \textit{On lower bounds of the dimension of multizeta values in positive characteristic}, Doc. Math. \textbf{26} (2021), 537-559. \bibitem[CM21]{CM21} C.\-Y.\ Chang and Y.\ Mishiba, \textit{On a conjecture of Furusho over function fields}, Invent. math. \textbf{223} (2021), 49-102. \bibitem[CPY19]{CPY19} C.\-Y.\ Chang, M.\ A.\ Papanikolas and J.\ Yu, \textit{An effective criterion for Eulerian multizeta values in positive characteristic}, J. Eur. Math. Soc. (JEMS) \textbf{21} (2019), no. 2, 405-440. \bibitem[Ch15]{Ch15} H.\-J.\ Chen, \textit{On shuffle of double zeta values for $\FF_q[t]$}, J. Number Theory \textbf{148} (2015), 153-163. \bibitem[DG05]{DG05} P.\ Deligne and A.\ B.\ Goncharov, \textit{Groupes fondamentaux motiviques de Tate mixte}. (French) [Mixed Tate motivic fundamental groups] Ann. Sci. \'Ecole Norm. Sup. (4) 38 (2005), no. 1, 1-56. \bibitem[GKZ06]{GKZ06} H.\ Gangl, M.\ Kaneko and D.\ Zagier, \textit{Double zeta values and modular forms}, Automorphic forms and zeta functions, 71-106, World Sci. Publ., Hackensack, NJ, 2006. \bibitem[Gon02]{Gon02} A.\ B.\ Goncharov, \textit{Periods and mixed motives}, arXiv:math/0202154. \bibitem[Go79]{Go79} D.\ Goss, \textit{$v$-adic zeta functions, $L$-series, and measures for function fields. With an addendum}, Invent. Math. \textbf{55} (1979), no. 2, 107-119. \bibitem[Go96]{Go96} D.\ Goss, \textit{Basic structures of function field arithmetic}, Springer-Verlag, Berlin, 1996. \bibitem[Ho97]{Ho97} M.\ Hoffman, \textit{The algebra of multiple harmonic series}, J. Algebra \textbf{194} (1997), 477-495. \bibitem[IKLNDP22]{IKLNDP22} B. H. Im, H. Kim, K. N. Le, T. Ngo Dac, and L. H. Pham, \textit{Zagier-Hoffman’s conjectures in positive characteristic}, \url{https://arxiv.org/abs/2205.07165}. \bibitem[IKZ06]{IKZ06} K.\ Ihara, M.\ Kaneko and D.\ Zagier, \textit{Derivation and double shuffle relations for multiple zeta values}, Compositio Math. \textbf{142} (2006), 307-338. \bibitem[KL16]{KL16} Y.-L.\ Kuan and Y.-H.\ Lin, \textit{Criterion for deciding zeta-like multizeta values in positive characteristic}. Exp. Math. \textbf{25} (2016), no. 3, 246–256. \bibitem[LRT14]{LRT14} J.\ A.\ Lara Rodr\'{i}guez and D.\ S.\ Thakur, \textit{Zeta-like multizeta values for $\mathbb{F}_{q}[t]$}, Indian J. Pure Appl. Math. 45 (5), 785-798 (2014). \bibitem[ND21]{ND21} T.\ Ngo Dac, \textit{On Zagier-Hoffman's conjectures in positive characteristic}. Ann. of Math. (2), \textbf{194} (2021), no. 1, 361–392. \bibitem[P08]{P08} M.\ A.\ Papanikolas, \textit{Tannakian duality for Anderson-Drinfeld motives and algebraic independence of Carlitz logarithms}, Invent. Math. \textbf{171} (2008), no.~1, 123-174. \bibitem[R02]{R02} G.\ Racinet, \textit{Double mixtures of multiple polylogarithms at the roots of unity}, Publ. Math. Inst. Hautes \'Etudes Sci. \textbf{95} (2002), 185-231. \bibitem[Te02]{Te02} T.\ Terasoma, \textit{Mixed Tate motives and multiple zeta values}. Invent. Math. \textbf{149} (2002), no. 2, 339-369. \bibitem[To18]{To18} G.\ Todd, \textit{A Conjectural Characterization for $\FF_{q}(t)$-Linear Relations between Multizeta Values}, J. Number Theory \textbf{187}, 264-287 (2018). \bibitem[T04]{T04} D.\ S.\ Thakur, \textit{Function field arithmetic}, World Scientific Publishing, River Edge, NJ, 2004. \bibitem[T09a]{T09} D.\ S.\ Thakur, \textit{Power sums with applications to multizeta and zeta zero distribution for $\FF_{q}[t]$}, Finite Fields Appl. \textbf{15} (2009), no. 4, 534-552.
\bibitem[T09b]{T09b} D.\ S.\ Thakur, \textit{Relations between multizeta values for $\FF_{q}[t]$}, Int. Math. Res. Not. IMRN (2009), no. 12, 2318-2346. \bibitem[T10]{T10} D.\ S.\ Thakur, \textit{Shuffle relations for function field multizeta values}, Int. Math. Res. Not. IMRN (2010), no. 11, 1973-1980. \bibitem[T17]{T17} D.\ S.\ Thakur, \textit{Multizeta values for function fields: a survey}, J. Th\'eor. Nombres Bordeaux \textbf{29} no. 3 (2017), 997–1023. \bibitem[Za94]{Za94} D.\ Zagier, \textit{Values of zeta functions and their applications}, in ECM volume, Progress in Math., \textbf{120} (1994), 497-512. \bibitem[Zh16]{Zh16} J.\ Zhao, \textit{Multiple zeta functions, multiple polylogarithms and their special values}, Series on Number Theory and its Applications, 12. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2016. \end{thebibliography} \end{document}
2205.09926v4
http://arxiv.org/abs/2205.09926v4
Smoothing, scattering, and a conjecture of Fukaya
\documentclass[11pt]{amsart} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \usepackage{pb-diagram} \usepackage{epstopdf} \usepackage{amscd} \usepackage{pdfpages} \usepackage{bbm} \usepackage{multirow} \usepackage[mathscr]{eucal} \usepackage{epsfig,epsf,epic} \usepackage{pdfsync} \usepackage{enumerate} \usepackage[all]{xy} \usepackage{fancyref} \usepackage{mathtools} \usepackage{scalerel,stackengine} \usepackage{comment} \usepackage{hyperref} \usepackage{subfig} \usepackage{tabularx} \stackMath \ExecuteOptions{dvips} \addtolength{\textwidth}{+4cm} \addtolength{\textheight}{+2cm} \hoffset-2cm \voffset-1cm \setlength{\parskip}{5pt} \setlength{\parskip}{5pt} \newtheorem{theorem}{Theorem}[section] \newtheorem{prop}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{assum}[theorem]{Assumption} \newtheorem{condition}[theorem]{Condition} \newtheorem{notation}[theorem]{Notation} \numberwithin{equation}{section} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Image}{Im} \DeclareMathOperator{\Realpart}{Re} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\trace}{tr} \DeclareMathOperator{\vol}{vol} \newcommand{\real}{\mathbb{R}} \newcommand{\comp}{\mathbb{C}} \newcommand{\realp}{\mathbb{R}_{+}} \newcommand{\inte}{\mathbb{Z}} \newcommand{\bb}[1]{\mathbb{#1}} \newcommand{\cu}[1]{\mathcal{#1}} \newcommand{\til}[1]{\widetilde{#1}} \newcommand{\msc}[1]{\mathscr{#1}} \newcommand{\pd}{\partial} \newcommand{\pdb}{\bar{\partial}} \newcommand{\dd}[1]{\frac{\partial}{\partial #1}} \newcommand{\lied}[1]{\mathcal{L}_{#1}} \newcommand{\lieg}[1]{\mathcal{L}_{\nabla{#1}}} \newcommand{\half}{\frac{1}{2}} \newcommand{\ddd}[2]{\frac{\partial#1}{\partial{#2}}} \newcommand{\ddb}[2]{\frac{\partial#1}{\partial{\bar{#2}}}} \newcommand{\pdddb}[3]{\frac{\partial^2 #1}{\partial z^{#2}\partial \bar{z}^{#3}}} \newcommand{\pdpd}[3]{\frac{\partial^2 #1}{\partial #2\partial #3}} \newcommand{\pdd}[2]{\frac{\partial^2 #1}{\partial #2^2}} \newcommand{\ppd}[1]{\frac{\partial^2}{\partial #1^2}} \newcommand{\tanpoly}{\Lambda} \newcommand{\reint}{\mathrm{int}_\mathrm{re}} \newcommand{\pcate}{\underline{\mathrm{LPoly}}} \newcommand{\pdecomp}{\mathscr{P}} \newcommand{\norpoly}{\mathscr{Q}} \newcommand{\centerfiber}[1]{\prescript{#1}{}{X}} \newcommand{\spec}{\mathrm{Spec}_{\mathrm{an}}} \newcommand{\spf}{\mathrm{Spf}} \newcommand{\localmod}[1]{\prescript{#1}{}{\mathbb{V}}} \newcommand{\linebundle}{\mathcal{L}} \newcommand{\moment}{\mu} \newcommand{\amoeba}{\cu{A}} \newcommand{\tsing}{\mathscr{S}} \newcommand{\modmap}{\nu} \newcommand{\tinclude}{\iota} \newcommand{\nsf}{\hat{\mathscr{S}}} \newcommand{\bchprod}{\odot} \newcommand{\blat}{\mathbf{K}} \newcommand{\cfr}{R} \newcommand{\cfrk}[1]{\prescript{#1}{}{\cfr}} \newcommand{\logs}{S^{\dagger}} \newcommand{\logsk}[1]{\prescript{#1}{}{S}^{\dagger}} \newcommand{\logsf}{\hat{S}^{\dagger}} \newcommand{\logsdrk}[2]{\prescript{#1}{}{\Omega}^{#2}_{S^{\dagger}}} \newcommand{\logsvfk}[1]{\prescript{#1}{}{\Theta}_{S^{\dagger}}} \newcommand{\logsdrf}[1]{\hat{\Omega}^{#1}_{S^{\dagger}}} \newcommand{\logsvff}{\hat{\Theta}_{S^{\dagger}}} \newcommand{\bva}[1]{\prescript{#1}{}{\mathcal{G}}} \newcommand{\tbva}[2]{\prescript{#1}{#2}{\mathcal{K}}} 
\newcommand{\gmiso}[2]{\prescript{#1}{#2}{\sigma}} \newcommand{\dpartial}[1]{\prescript{#1}{}{\partial}} \newcommand{\volf}[1]{\prescript{#1}{}{\omega}} \newcommand{\rdr}[2]{\prescript{#1}{#2 \parallel}{\mathcal{K}}} \newcommand{\bvd}[1]{\prescript{#1}{}{\Delta}} \newcommand{\gmc}[1]{\prescript{#1}{}{\nabla}} \newcommand{\rest}[1]{\prescript{#1}{}{\flat}} \newcommand{\patch}[1]{\prescript{#1}{}{\psi}} \newcommand{\hpatch}[1]{\prescript{#1}{}{\hat{\psi}}} \newcommand{\resta}[1]{\prescript{#1}{}{\mathfrak{b}}} \newcommand{\patchij}[1]{\prescript{#1}{}{\mathfrak{p}}} \newcommand{\cocyobs}[1]{\prescript{#1}{}{\mathfrak{o}}} \newcommand{\bvdobs}[1]{\prescript{#1}{}{\mathfrak{w}}} \newcommand{\lscript}[3]{\prescript{#1}{#2}{#3}} \newcommand{\simplex}{\blacktriangle} \newcommand{\simplexbdy}{\vartriangle} \newcommand{\hsimplex}{\blacksquare} \newcommand{\hsimplexbdy}{\square} \newcommand{\polyv}[1]{\prescript{#1}{}{PV}} \newcommand{\totaldr}[2]{\prescript{#1}{#2}{\mathcal{A}}} \newcommand{\glue}[1]{\prescript{#1}{}{g}} \newcommand{\mdga}{\Omega} \newcommand{\md}{d} \newcommand{\hp}{\hslash} \newcommand{\tform}{\mathscr{T}} \newcommand{\sfbva}[1]{\prescript{#1}{}{\mathsf{G}}} \newcommand{\sftvbva}[1]{\prescript{#1}{}{\mathfrak{h}}} \newcommand{\sftbva}[2]{\prescript{#1}{#2}{\mathsf{K}}} \newcommand{\sflocmod}[1]{\prescript{#1}{}{\mathsf{V}}} \newcommand{\sfpolyv}[1]{\prescript{#1}{}{\mathsf{PV}}} \newcommand{\sftropv}[1]{\prescript{#1}{}{\mathsf{TL}}} \newcommand{\sftotaldr}[2]{\prescript{#1}{#2}{\mathsf{A}}} \newcommand{\tvbva}[1]{\prescript{#1}{}{\mathscr{H}}} \newcommand{\mpatch}[1]{\prescript{#1}{}{\tilde{\psi}}} \newcommand{\tropv}[1]{\prescript{#1}{}{TL}} \newcommand{\wcs}[1]{\prescript{#1}{}{\mathscr{O}}} \begin{document} \title[Smoothing, scattering, and a conjecture of Fukaya]{Smoothing, scattering, and a conjecture of Fukaya} \author[Chan]{Kwokwai Chan} \address{Department of Mathematics\\ The Chinese University of Hong Kong\\ Shatin\\ Hong Kong} \email{[email protected]} \author[Leung]{Naichung Conan Leung} \address{The Institute of Mathematical Sciences and Department of Mathematics\\ The Chinese University of Hong Kong\\ Shatin\\ Hong Kong} \email{[email protected]} \author[Ma]{Ziming Nikolas Ma} \address{Department of Mathematics\\ Southern University of Science and Technology\\ Nanshan District \\ Shenzhen\\ China} \email{[email protected]} \begin{abstract} In 2002, Fukaya \cite{fukaya05} proposed a remarkable explanation of mirror symmetry detailing the SYZ conjecture \cite{syz96} by introducing two correspondences: one between the theory of pseudo-holomorphic curves on a Calabi-Yau manifold $\check{X}$ and the multi-valued Morse theory on the base $\check{B}$ of an SYZ fibration $\check{p}\colon \check{X}\to \check{B}$, and the other between deformation theory of the mirror $X$ and the same multi-valued Morse theory on $\check{B}$. In this paper, we prove a reformulation of the main conjecture in Fukaya's second correspondence, where multi-valued Morse theory on the base $\check{B}$ is replaced by tropical geometry on the Legendre dual $B$. In the proof, we apply techniques of asymptotic analysis developed in \cite{kwchan-leung-ma, kwchan-ma-p2} to tropicalize the pre-dgBV algebra which governs smoothing of a maximally degenerate Calabi-Yau log variety $\centerfiber{0}^{\dagger}$ introduced in \cite{chan2019geometry}. 
Then a comparison between this tropicalized algebra with the dgBV algebra associated to the deformation theory of the semi-flat part $X_{\mathrm{sf}} \subseteq X$ allows us to extract consistent scattering diagrams from appropriate Maurer-Cartan solutions. \end{abstract} \maketitle \input{introduction} \input{list_of_notation} \input{gs_construction} \input{moment_map} \input{dgBV} \input{tropical} \bibliographystyle{amsplain} \bibliography{geometry} \end{document} \section{Introduction} Two decades ago, in an attempt to understand mirror symmetry using the SYZ conjecture \cite{syz96}, Fukaya \cite{fukaya05} proposed two correspondences: \begin{itemize} \item Correspondence I: between the theory of pseudo-holomorphic curves (instanton corrections) on a Calabi--Yau manifold $\check{X}$ and the multi-valued Morse theory on the base $\check{B}$ of an SYZ fibration $\check{p}\colon \check{X}\to \check{B}$, and \item Correspondence II: between deformation theory of the mirror $X$ and the same multi-valued Morse theory on the base $\check{B}$. \end{itemize} In this paper, we prove a reformulation of the main conjecture \cite[Conj 5.3]{fukaya05} in Fukaya's Correspondence II, where multi-valued Morse theory on the SYZ base $\check{B}$ is replaced by tropical geometry on the Legendre dual $B$. Such a reformulation of Fukaya's conjecture was proposed and proved in \cite{kwchan-leung-ma} in a local setting; the main result of the current paper is a global version of the main result in {\it loc. cit}. A crucial ingredient in the proof is a precise link between tropical geometry on an integral affine manifold with singularities and smoothing of maximally degenerate Calabi--Yau varieties. The main conjecture \cite[Conj. 5.3]{fukaya05} in Fukaya's Correspondence II asserts that there exists a Maurer--Cartan element of the Kodaira--Spencer dgLa associated to deformations of the semi-flat part $X_{\mathrm{sf}}$ of $X$ that is asymptotically close to a Fourier expansion (\cite[Eq. (42)]{fukaya05}), whose Fourier modes are given by smoothings of distribution-valued 1-forms defined by moduli spaces of gradient Morse flow trees which are expected to encode counting of non-trivial (Maslov index 0) holomorphic disks bounded by Lagrangian torus fibers (see \cite[Rem. 5.4]{fukaya05}). Also, the complex structure defined by this Maurer--Cartan element can be compactified to give a complex structure on $X$. At the same time, Fukaya's Correspondence I suggests that these gradient Morse flow trees arise as adiabatic limits of loci of those Lagrangian torus fibers which bound non-trivial (Maslov index 0) holomorphic disks. This can be reformulated as a holomorphic/tropical correspondence, and much evidence has been found \cite{Floer88, fukaya-oh, Mikhalkin05, Nishinou-Siebert06, Cho-Hong-Lau17, Cho-Hong-Lau18, Lin21, Cheung-Lin21, BE-C-H-Lin21}. The tropical counterpart of such gradient Morse flow trees are given by consistent scattering diagrams, which were invented by Kontsevich--Soibelman \cite{kontsevich-soibelman04} and extensively used in the Gross--Siebert program \cite{gross2011real} to solve the reconstruction problem in mirror symmetry, namely, the construction of the mirror $X$ from smoothing of a maximally degenerate Calabi--Yau variety $\centerfiber{0}$. It is therefore natural to replace the distribution-valued 1-form in each Fourier mode in the Fourier expansion \cite[Eq. (42)]{fukaya05} by a distribution-valued 1-form associated to a wall-crossing factor of a consistent scattering diagram. 
This was exactly how Fukaya's conjecture \cite[Conj. 5.3]{fukaya05} was reformulated and proved in the local case in \cite{kwchan-leung-ma}. In order to reformulate the global version of Fukaya's conjecture, however, we must also relate deformations of the semi-flat part $X_{\mathrm{sf}}$ with smoothings of the maximally degenerate Calabi--Yau variety $\centerfiber{0}$. This is because consistent scattering diagrams were used by Gross--Siebert \cite{Gross-Siebert-logII} to study the deformation theory of the compact log variety $\centerfiber{0}^{\dagger}$ (whose log structure is specified by \emph{slab functions}), instead of $X_{\mathrm{sf}}$. For this purpose, we consider the open dense part $$\centerfiber{0}_{\mathrm{sf}} := \moment^{-1}(W_0) \subset \centerfiber{0},$$ where $\moment\colon \centerfiber{0} \rightarrow B$ is the \emph{generalized moment map} in \cite{ruddat2019period} and $W_0 \subseteq B$ is an open dense subset such that $B\setminus W_0$ contains the tropical singular locus and all codimension $2$ cells of $B$. Equipping $\centerfiber{0}_{\mathrm{sf}}$ with the \emph{trivial} log structure, there is a \emph{semi-flat dgBV algebra} $\sfpolyv{}^{*,*}$ governing its smoothings, and the general fiber of a smoothing is given by the semi-flat Calabi--Yau $X_{\mathrm{sf}}$ that appeared in Fukaya's original conjecture \cite[Conj. 5.3]{fukaya05}. However, the Maurer--Cartan elements of $\sfpolyv{}^{*,*}$ cannot be compactified to give complex structures on $X$. On the other hand, in our previous work \cite{chan2019geometry} we constructed a \emph{Kodaira--Spencer--type pre-dgBV algebra} $\polyv{}^{*,*}$ which controls the smoothing of $\centerfiber{0}$. A key observation is that a \emph{twisting} of $\sfpolyv{}^{*,*}$ by slab functions is isomorphic to the restriction of $\polyv{}^{*,*}$ to $\centerfiber{0}_{\mathrm{sf}}$ (Lemma \ref{lem:comparing_sheaf_of_dgbv}). Our reformulation of the global Fukaya conjecture now claims the existence of a Maurer--Cartan element $\phi$ of this twisted semi-flat dgBV algebra that is asymptotically close to a Fourier expansion whose Fourier modes give rise to the wall-crossing factors of a consistent scattering diagram. This conjecture follows from (the proof of) our main result, stated as Theorem \ref{thm:introduction_theorem} below, which is a combination of Theorem \ref{prop:Maurer_cartan_equation_unobstructed}, the construction in \S \ref{subsubsec:consistent_diagram_from_solution} and Theorem \ref{thm:consistency_of_diagram_from_mc}: \begin{theorem}\label{thm:introduction_theorem} There exists a solution $\phi$ to the classical Maurer--Cartan equation \eqref{eqn:classical_maurer_cartan_equation} giving rise to a smoothing of the maximally degenerate Calabi--Yau log variety $\centerfiber{0}^{\dagger}$ over $\comp[[q]]$, from which a consistent scattering diagram $\mathscr{D}(\phi)$ can be extracted by taking asymptotic expansions. \end{theorem} A brief outline of the proof of Theorem \ref{thm:introduction_theorem} is now in order. First, recall that the pre-dgBV algebra $\polyv{}^{*,*}$ which governs smoothing of the maximally degenerate Calabi--Yau variety $\centerfiber{0}$ was constructed in \cite[Thm. 1.1 \& \S 3.5]{chan2019geometry}, and we also proved a Bogomolov--Tian--Todorov--type theorem \cite[Thm. 
1.2 \& \S 5]{chan2019geometry} showing unobstructedness of the extended Maurer--Cartan equation \eqref{eqn:extended_maurer_cartan_equation}, under the Hodge-to-de Rham degeneracy Condition \ref{cond:Hodge-to-deRham} and a holomorphic Poincar\'{e} Lemma Condition \ref{cond:holomorphic_poincare_lemma} (both proven in \cite{Gross-Siebert-logII, Felten-Filip-Ruddat}). In Theorem \ref{prop:Maurer_cartan_equation_unobstructed}, we will further show how one can extract from the extended Maurer--Cartan equation \eqref{eqn:extended_maurer_cartan_equation} a smoothing of $\centerfiber{0}$, described as a solution $\phi \in \polyv{}^{-1,1}(B)$ to the \emph{classical Maurer--Cartan equation} \eqref{eqn:classical_maurer_cartan_equation} $$ \pdb \phi + \half[\phi,\phi] + \mathfrak{l} = 0, $$ together with a holomorphic volume form $e^{f} \volf{}$ which satisfies the \emph{normalization condition} \begin{equation}\label{eqn:introduction_normalization_of_volume_form} \int_T e^f \volf{} = 1, \end{equation} where $T$ is a nearby vanishing torus in the smoothing. Next, we need to tropicalize the pre-dgBV algebra $\polyv{}^{*,*}$. However, the original construction of $\polyv{}^{*,*}$ in \cite{chan2019geometry} using the Thom--Whitney resolution \cite{whitney2012geometric, dupont1976simplicial} is too algebraic in nature. Here, we construct a geometric resolution exploiting the affine manifold structure on $B$. Using the generalized moment map $\moment \colon \centerfiber{0} \rightarrow B$ \cite{ruddat2019period} and applying the techniques of asymptotic analysis (in particular the notion of \emph{asymptotic support}) in \cite{kwchan-leung-ma}, we define the sheaf $\tform^*$ of \emph{monodromy invariant tropical differential forms} on $B$ in \S \ref{sec:asymptotic_support}. According to Definition \ref{def:sheaf_of_tropical_dga}, a tropical differential form can be regarded as a distribution-valued form supported on polyhedral subsets of $B$. Using the sheaf $\tform^*$, we can take asymptotic expansions of elements in $\polyv{}^{*,*}$, and hence connect differential geometric operations in dgBV/dgLa with tropical geometry. In this manner, we can extract \emph{local} scattering diagrams from Maurer--Cartan solutions as we did in \cite{kwchan-leung-ma}, but we need to glue them together to get a global object. To achieve this, we need the aforementioned comparison between $\polyv{}^{*,*}$ and the semi-flat dgBV algebra $\sfpolyv{}^{*,*}_{\mathrm{sf}}$ which governs smoothing of the semi-flat part $\centerfiber{0}_{\mathrm{sf}} := \moment^{-1}(W_0) \subset \centerfiber{0}$ equipped with the trivial log structure. The key Lemma \ref{lem:comparing_sheaf_of_dgbv} says that the restriction of $\polyv{}^{*,*}$ to the semi-flat part is isomorphic to $\sfpolyv{}^{*,*}_{\mathrm{sf}}$ precisely after we \emph{twist} the semi-flat operator $\pdb_{\circ}$ by elements corresponding to the \emph{slab functions} associated to the \emph{initial walls} of the form: $$ \phi_{\mathrm{in}} = - \sum_{v \in \rho} \delta_{v,\rho} \otimes \log(f_{v,\rho}) \partial_{\check{d}_{\rho}}; $$ here the sum is over vertices in codimension one cells $\rho$'s which intersect with the \emph{essential singular locus} $\tsing_e$ (defined in \S \ref{subsec:tropical_singular_locus}), $\delta_{v,\rho}$ is a distribution-valued $1$-form supported on a component of $\rho\setminus \tsing_e$ containing $v$, $\partial_{\check{d}_{\rho}}$ is a holomorphic vector field and $f_{v,\rho}$'s are the slab functions associated to the initial walls. 
We remark that slab functions were used to specify the log structure on $\centerfiber{0}$ as well as the local models for smoothing $\centerfiber{0}$ in the Gross--Siebert program; see \S \ref{sec:gross_siebert} for a review. Now, the Maurer--Cartan solution $\phi \in \polyv{}^{-1,1}(B)$ obtained in Theorem \ref{prop:Maurer_cartan_equation_unobstructed} defines a new operator $\pdb_{\phi}$ on $\polyv{}^{*,*}$ which squares to zero. Applying the above comparison of dgBV algebras (Lemma \ref{lem:comparing_sheaf_of_dgbv}) and the gauge transformation from Lemma \ref{lem:vector field}, we show that, after restricting to $W_0$, there is an isomorphism $$\left(\polyv{}^{-1,1}(W_0), \pdb_{\phi}\right) \cong \left(\sfpolyv{}^{-1,1}_{\mathrm{sf}}(W_0),\pdb_{\circ} + [\phi_{\mathrm{in}}+\phi_{\mathrm{s}},\cdot] \right)$$ for some element $\phi_{\mathrm{s}}$, where `s' stands for {\it scattering terms}. From the description of $\tform^*$, the element $\phi_{\mathrm{s}}$, to any fixed order $k$, is written locally as a finite sum of terms supported on codimension one walls/slabs (Definitions \ref{def:walls} and \ref{def:slabs}. For the purpose of a brief discussion in this introduction, we will restrict ourselves to a wall $\mathbf{w}$ below, though the same argument applies to a slab; see \S \ref{subsubsec:consistent_diagram_from_solution} for the details. In a neighborhood $U_{\mathbf{w}}$ of each wall $\mathbf{w}$, the operator $\pdb_{\circ}+[\phi_{\mathrm{in}}+\phi_{\mathrm{s}},\cdot]$ is gauge equivalent to $\pdb_{\circ}$ via some vector field $\theta_{\mathbf{w}} \in \sfpolyv{}^{-1,0}_{\mathrm{sf}}(W_0)$, i.e. $$ e^{[\theta_{\mathbf{w}},\cdot]}\circ \pdb_{\circ} \circ e^{-[\theta_{\mathbf{w}},\cdot]} = \pdb_{\circ}+ [\phi_{\mathrm{in}}+\phi_{\mathrm{s}},\cdot]. $$ Employing the techniques for analyzing the gauge which we developed in \cite{kwchan-leung-ma, kwchan-ma-p2, matt-leung-ma}, we see that the gauge will jump across the wall, resulting in a wall-crossing factor $\varTheta_{\mathbf{w}}$ satisfying $$ e^{[\theta_{\mathbf{w}},\cdot]}|_{ \cu{C}_{\pm}} = \begin{dcases} \varTheta_{\mathbf{w}}|_{ \cu{C}_+} & \text{on $U_{\mathbf{w}} \cap \cu{C}_+$,}\\ \mathrm{id} & \text{on $U_{\mathbf{w}} \cap \cu{C}_-$,} \end{dcases} $$ where $\cu{C}_{\pm}$ are the two chambers separated by $\mathbf{w}$. Then from the fact that the volume form $e^{f}\volf{}$ is normalized as in \eqref{eqn:introduction_normalization_of_volume_form}, it follows that $\phi_{\mathrm{s}}$ is closed under the semi-flat BV operator $\bvd{}$, and hence we deduce that the wall-crossing factor $\varTheta_{\mathbf{w}}$ lies in the {\it tropical vertex group}. This defines a scattering diagram $\mathscr{D}(\phi)$ on the semi-flat part $W_0$ associated to $\phi$. Finally, we prove consistency of the scattering diagram $\mathscr{D}(\phi)$ in Theorem \ref{thm:consistency_of_diagram_from_mc}. We emphasize that the consistency is over the {\it whole} $B$ even though the diagram is only defined on $W_0$, because the Maurer--Cartan solution $\phi$ is globally defined on $B$. 
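To orient the reader, we recall the simplest instance of consistency in the tropical vertex group, namely the well-known example of Kontsevich--Soibelman \cite{kontsevich-soibelman04} (see also \cite{gross2011real}): two initial walls carrying the functions $1 + x$ and $1 + y$ force exactly one new wall carrying the function $1 + xy$. The following short computational sketch (in Python with \texttt{sympy}; the sign and composition conventions are fixed only for this illustration and are independent of the constructions in the body of the paper) verifies the corresponding identity of wall-crossing automorphisms exactly:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def act(sigma, expr):
    # Apply the substitution automorphism sigma = (image of x, image of y).
    gx, gy = sigma
    return sp.cancel(expr.subs({x: gx, y: gy}, simultaneous=True))

# Wall-crossing automorphisms of the two initial walls.
alpha     = (x*(1 + y), y)        # attached to the wall with function 1 + y
alpha_inv = (x/(1 + y), y)
beta      = (x, y*(1 + x))        # attached to the wall with function 1 + x
beta_inv  = (x, y/(1 + x))

def commutator(expr):
    # alpha o beta o alpha^{-1} o beta^{-1}, applied to expr (innermost first).
    for sigma in (beta_inv, alpha_inv, beta, alpha):
        expr = act(sigma, expr)
    return expr

# The commutator is the wall-crossing automorphism attached to the single
# new wall with function 1 + x*y:  x -> x/(1 + x*y),  y -> y*(1 + x*y).
assert sp.simplify(commutator(x) - x/(1 + x*y)) == 0
assert sp.simplify(commutator(y) - y*(1 + x*y)) == 0
print("two initial walls scatter into one new wall with function 1 + x*y")
\end{verbatim}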
\begin{remark}\label{rmk:relaxed_scattering_diagram} Our notion of scattering diagrams (Definition \ref{def:scattering_diagram}) is a little bit more relaxed than the usual notion defined in \cite{kontsevich-soibelman04, gross2011real} in two aspects: One is that we do not require the generator of the exponents of the wall-crossing factor to be orthogonal to the wall.\footnote{It seems reasonable to relax this orthogonality condition because one cannot require such a condition in more general settings \cite{bridgeland2016scattering, matt-leung-ma}.} The other is that we allow possibly infinite number of walls/slabs approaching strata of the tropical singular locus. See the paragraph after Definition \ref{def:scattering_diagram} for more details. In practice, this simply means that we are considering a larger gauge equivalence class (or equivalently, a weaker gauge equivalence), which is natural from the point of view of both the Bogomolov--Tian--Todorov Theorem and mirror symmetry (in the A-side, this amounts to flexibility in the choice of the almost complex structure). We also have a different, but more or less equivalent, formulation of the consistency of a scattering diagram; see Definition \ref{def:consistency_of_scattering_diagram} and \S \ref{subsubsec:scattering_diagram}. \end{remark} Along the way of proving Fukaya's conjecture, besides figuring out the precise relation between the semi-flat part $X_{\mathrm{sf}}$ and the maximally degenerate Calabi--Yau log variety $\centerfiber{0}^{\dagger}$, we also find the correct description of the Maurer--Cartan solutions near the singular locus, namely, they should be extendable to the local models prescribed by the log structure (or slab functions), as was hinted by the Gross--Siebert program. This is related to a remark by Fukaya \cite[Pt. (2) after Conj. 5.3]{fukaya05}. Another important point is that we have established in the global setting an interplay between the differential-geometric properties of the tropical dgBV algebra and the scattering (and other combinatorial) properties of tropical disks, which was speculated by Fukaya as well (\cite[Pt. (1) after Conj. 5.3]{fukaya05}) although he considered holomorphic disks instead of tropical ones. Furthermore, by providing a direct linkage between Fukaya's conjecture with the Gross--Siebert program \cite{Gross-Siebert-logI, Gross-Siebert-logII, gross2011real} and Katzarkov--Kontsevich--Pantev's Hodge theoretic viewpoint \cite{KKP08} through $\polyv{}^{*,*}$ (recall from \cite{chan2019geometry} that a semi-infinite variation of Hodge structures can be constructed from $\polyv{}^{*,*}$, using the techniques of Barannikov--Kontsevich \cite{Barannikov-Kontsevich98, Barannikov99} and Katzarkov--Kontsevich--Pantev \cite{KKP08}), we obtain a more transparent understanding of mirror symmetry through the SYZ framework. \begin{remark} A future direction is to apply the framework in this paper and the works \cite{kwchan-leung-ma, chan2019geometry} to develop a local-to-global approach to understand genus $0$ mirror symmetry. In view of the ideas of Seidel \cite{Seidel-ICM} and Kontsevich \cite{Kontsevich-sheaf}, and also recent breakthroughs by Ganatra--Pardon--Shende \cite{Ganatra-Pardon-Shende20, Ganatra-Pardon-Shende18a, Ganatra-Pardon-Shende18b} and Gammage--Shende \cite{Gammage-Shende17, gammage2021homological}, we expect that there is a sheaf of $L_\infty$ algebras on the A-side mirror to (the $L_\infty$ enhancement of) $\polyv{}^{*,*}$ that can be constructed by gluing local models. 
More precisely, a large volume limit of a Calabi--Yau manifold $\check{X}$ can be specified by removing from it a normal crossing divisor $\check{D}$ which represents the K\"ahler class of $\check{X}$. This gives rise to a Weinstein manifold $\check{X} \setminus \check{D}$, and produces a mirror pair $\check{X} \setminus \check{D} \leftrightarrow \centerfiber{0}$ at the large volume/complex structure limits. In \cite{gammage2021homological}, Gammage--Shende constructed a Lagrangian skeleton $\Lambda(\Phi) \subset \check{X} \setminus \check{D}$ from a combinatorial structure $\Phi$ called \emph{fanifold}, which can be extracted from the integral tropical manifold $B$ equipped with a polyhedral decomposition $\pdecomp$ (here we assume that the gluing data $s$ is trivial). They also proved an HMS statement at these limits. We expect that an A-side analogue of $\polyv{}^{*,*}$ can be constructed from the Lagrangian skeleton $\Lambda(\Phi)$ in $\check{X} \setminus \check{D}$, possibly together with a nice and compatible SYZ fibration on $\check{X} \setminus \check{D}$, via gluing of local models. A local-to-global comparison on the A-side and isomorphisms between the local models on the two sides should then yield an isomorphism of Frobenius manifolds. \end{remark} \section*{Acknowledgement} We thank Kenji Fukaya, Mark Gross and Richard Thomas for their interest and encouragement, and also Helge Ruddat for useful comments on an earlier draft of this paper. We are very grateful to the anonymous referees for numerous constructive and extremely detailed comments/suggestions which have helped to greatly enhance the exposition of the whole paper. K. Chan was supported by grants of the Hong Kong Research Grants Council (Project No. CUHK14301420 \& CUHK14301621) and direct grants from CUHK. N. C. Leung was supported by grants of the Hong Kong Research Grants Council (Project No. CUHK14301619 \& CUHK14306720) and a direct grant (Project No. 4053400) from CUHK. Z. N. Ma was supported by the National Science Fund for Excellent Young Scholars (Overseas). These authors contributed equally to this work. \section*{List of notations} \begin{center} \begin{tabular}{l l l} $M$, $M_{A}$ & \S \ref{subsec:integral_affine_manifolds} & lattice, $M_{A} := M \otimes_{\inte} A$ for any $\inte$-module $A$\\ $N$, $N_{A}$ & \S \ref{subsec:integral_affine_manifolds} & dual lattice of $M$, $N_{A} := N \otimes_{\inte} A$ for any $\inte$-module $A$\\ $(B,\pdecomp)$ & Def.
\ref{def:integral_tropical_manifold} & integral tropical manifold equipped with a polyhedral\\ & & decomposition\\ $\tanpoly_{\sigma}$ & \S \ref{subsec:integral_affine_manifolds} & lattice generated by integral tangent vectors along $\sigma$\\ $\reint(\tau)$ & \S \ref{subsec:integral_affine_manifolds} & relative interior of a polyhedron $\tau$\\ $U_\tau$ & \S \ref{subsec:integral_affine_manifolds} & open neighborhood of $\reint(\tau)$\\ $\norpoly_{\tau}$ & \S \ref{subsec:integral_affine_manifolds} & lattice generated by normal vectors to $\tau$\\ $S_\tau \colon U_\tau \rightarrow \norpoly_{\tau,\real}$ & \S \ref{subsec:integral_affine_manifolds} & fan structure along $\tau$\\ $\Sigma_{\tau}$ & \S \ref{subsec:integral_affine_manifolds} & complete fan in $\norpoly_{\tau,\real}$ constructed from $S_{\tau}$\\ $K_{\tau}\sigma$ & \S \ref{subsec:integral_affine_manifolds} & $K_{\tau}\sigma=\real_{\geq 0} S_{\tau}(\sigma \cap U_{\tau})$ is a cone in $\Sigma_{\tau}$ corresponding to $\sigma$\\ $T_{x}$ & \S \ref{subsec:monodromy_data} & lattice of integral tangent vectors of $B$ at $x$\\ $\Delta_i(\tau)$, $\check{\Delta}_i(\tau)$ & Def. \ref{def:simplicity} & monodromy polytope of $\tau$, dual monodromy polytope of $\tau$\\ $\cu{A}\mathit{ff}$ & Def. \ref{def:piecewise_linear} & sheaf of affine functions on $B$\\ $\cu{PL}_{\pdecomp}$ & Def. \ref{def:piecewise_linear} & sheaf of piecewise affine functions on $B$ with respect to $\pdecomp$\\ $\cu{MPL}_{\pdecomp}$ & Def. \ref{def:strictly_convex_piecewise_affine} & sheaf of multi-valued piecewise affine functions on $B$\\ & & with respect to $\pdecomp$\\ $\varphi$ & Def. \ref{def:strictly_convex_multi_valued_function} & strictly convex multi-valued piecewise linear function\\ $\tau^{-1}\Sigma_v$ & \S \ref{subsec:open_construction} & localization of the fan $\Sigma_v$ at $\tau$\\ $V(\tau)$ & \S \ref{subsec:open_construction} & local affine scheme associated to $\tau$ used for open gluing\\ $\mathrm{PM}(\tau)$ & \S \ref{subsec:open_construction} & group of piecewise multiplicative maps on $\tau^{-1}\Sigma_v$\\ $D(\mu,\rho,v)$ & Def. 
\ref{def:alternative_description_open_gluing_data} & number encoding the change of $\mu \in \mathrm{PM}(\tau)$ across $\rho$ through $v$\\ $\centerfiber{0}_\tau$ & \S \ref{subsec:open_construction} & closed stratum of $\centerfiber{0}$ associated to $\tau$\\ $C_{\tau}$ & \S \ref{subsec:log_structure_and_slab_function} & cone defined by the strictly convex function $\bar{\varphi}_{\tau}\colon \Sigma_{\tau} \rightarrow \real$\\ & & representing $\varphi$\\ $\bar{P}_{\tau}$ & \S \ref{subsec:log_structure_and_slab_function} & monoid of integral points in $C_{\tau}$\\ $q = z^{\varrho}$ & \S \ref{subsec:log_structure_and_slab_function} & parameter for a toric degeneration\\ $\cu{N}_{\rho}$ & \S \ref{subsec:log_structure_and_slab_function} & line bundle on $\centerfiber{0}_{\rho}$ having slab functions $f_{\rho}$ as sections\\ $f_{v\rho}$ & \S \ref{subsec:log_structure_and_slab_function} & local slab function associated to $\rho$ in the chart $V(v)$\\ $\varkappa_{\tau,i} \colon \centerfiber{0}_{\tau} \rightarrow \bb{P}^{r_{\tau,i}}$ & \S \ref{subsec:log_structure_and_slab_function} & toric morphism induced from the monodromy polytope $\Delta_i(\tau)$ \\ $ P_{\tau,x}$ & \S \ref{subsec:log_structure_and_slab_function} & toric monoid describing the local model of toric degeneration\\ & & near $x \in \centerfiber{0}_{\tau}$ \\ $Q_{\tau,x}$ & \S \ref{subsec:log_structure_and_slab_function} & toric monoid isomorphic to $P_{\tau,x} /( \varrho + P_{\tau,x} )$ \\ $\mathscr{N}_{\tau}$ & \S \ref{subsec:log_structure_and_slab_function} & normal fan of a polytope $\tau$\\ $\moment\colon \centerfiber{0}\rightarrow B$ & \S \ref{subsec:moment_map} & generalized moment map\\ \end{tabular} \end{center} \begin{center} \begin{tabular}{lll} $\Upsilon_{\tau}$ & \S \ref{subsubsec:charts_on_B} & coordinate chart on $W(\tau) \subset B$\\ $\tsing$ (resp. $\tsing_e$) & \S \ref{subsec:tropical_singular_locus} & (resp. essential) tropical singular locus in $B$\\ $\modmap \colon \centerfiber{0} \rightarrow B $ & Def. \ref{def:modified_moment_map} & surjective map with $\modmap(Z) \subset \tsing_{e}$\\ $\mathcal{W} = \{W_{\alpha}\}_{\alpha}$ & \S \ref{sec:deformation_via_dgBV} & good cover (Condition \ref{cond:good_cover_of_B}) of $B$ with $V_{\alpha}:= \modmap^{-1}(W_{\alpha})$ being Stein\\ $\localmod{k}_{\alpha}^{\dagger}$ & \S \ref{sec:deformation_via_dgBV} & $k^{\text{th}}$-order local smoothing model of $V_{\alpha}$\\ $\bva{k}_{\alpha}^*$ & Def. \ref{def:higher_order_thickening_data_from_gross_siebert} & sheaf of $k^{\text{th}}$-order holomorphic relative log polyvector fields on $\localmod{k}_{\alpha}^{\dagger}$\\ $\tbva{k}{}_{\alpha}^*$ & Def. \ref{def:higher_order_thickening_data_from_gross_siebert} & sheaf of $k^{\text{th}}$-order holomorphic log de Rham differentials on $\localmod{k}_{\alpha}^{\dagger}$\\ $\tbva{k}{\parallel}_{\alpha}^*$ & \S \ref{subsubsec:local_deformation_data} & sheaf of $k^{\text{th}}$-order holomorphic relative log de Rham differentials on $\localmod{k}_{\alpha}^{\dagger}$\\ $\volf{k}_{\alpha}$ & Def. \ref{def:higher_order_thickening_data_from_gross_siebert} & $k^{\text{th}}$-order relative log volume form on $\localmod{k}_{\alpha}^{\dagger}$\\ $\bvd{k}_{\alpha}$ & \S \ref{subsubsec:local_deformation_data} & BV operator on $\bva{k}_{\alpha}$\\ $\polyv{k}^{*,*}_{\alpha}$ & Def. \ref{def:local_dgBV_from_resolution} & local sheaf of $k^{\text{th}}$-order polyvector fields\\ $\totaldr{k}{}^{*,*}_{\alpha}$ & Def.
\ref{def:local_dga_from_resolution} & local sheaf of $k^{\text{th}}$-order de Rham forms\\ $\polyv{k}^{*,*}$ & Def. \ref{def:global_polyvector_and_de_rham} & global sheaf of $k^{\text{th}}$-order polyvector fields from gluing of $\polyv{k}^{*,*}_{\alpha}$'s \\ $\totaldr{k}{}^{*,*}$ & Def. \ref{def:global_polyvector_and_de_rham} & global sheaf of $k^{\text{th}}$-order de Rham forms from gluing of $\totaldr{k}{}^{*,*}_{\alpha}$'s\\ $\tform^*$ & Def. \ref{def:global_sheaf_of_monodromy_invariant_tropical_forms} & global sheaf of tropical differential forms on $B$\\ $W_0$ & \S \ref{subsubsec:semi-flat} & semi-flat locus\\ $\sfbva{k}^*_{\mathrm{sf}}$ & \S \ref{subsubsec:semi-flat} & sheaf of $k^{\text{th}}$-order semi-flat holomorphic relative vector fields \\ $\sftbva{k}{}^*_{\mathrm{sf}}$ & \S \ref{subsubsec:semi-flat} & sheaf of $k^{\text{th}}$-order semi-flat holomorphic log de Rham forms \\ $\sftvbva{k}$ & eqt. \eqref{eqn:tropical_vertex_lie_algebra} & sheaf of $k^{\text{th}}$-order semi-flat holomorphic tropical vertex Lie algebras \\ $\sfpolyv{k}^{*,*}_{\mathrm{sf}}$ & Def. \ref{def:sheaf_of_sf_polyvector} & sheaf of $k^{\text{th}}$-order semi-flat polyvector fields\\ $\sftotaldr{k}{}^{*,*}_{\mathrm{sf}}$ & Def. \ref{def:sheaf_of_sf_polyvector} & sheaf of $k^{\text{th}}$-order semi-flat log de Rham forms \\ $\sftropv{k}^*_{\mathrm{sf}}$ & Def. \ref{def:semi_flat_tropical_vertex_lie_algebra} & sheaf of $k^{\text{th}}$-order semi-flat tropical vertex Lie algebras \\ $(\mathbf{w},\Theta_{\mathbf{w}})$ & Def. \ref{def:walls} & wall equipped with a wall-crossing factor \\ $(\mathbf{b},\Theta_{\mathbf{b}})$ & Def. \ref{def:slabs} & slab equipped with a wall-crossing factor \\ $\mathscr{D}$ & Def. \ref{def:scattering_diagram} & scattering diagram \\ $W_0(\mathscr{D})$ & \S \ref{subsubsec:scattering_diagram} & complement of joints in the semi-flat locus \\ $\mathfrak{i}$ & \S \ref{subsubsec:scattering_diagram} & the embedding $\mathfrak{i}\colon W_0(\mathscr{D}) \rightarrow B$ \\ $\wcs{k}_{\mathscr{D}}$ & \S \ref{subsubsec:scattering_diagram} & $k^{\text{th}}$-order wall-crossing sheaf associated to $\mathscr{D}$ \\ \end{tabular} \end{center} \vspace{3mm} \begin{notation}\label{not:universal_monoid} We usually fix a rank $s$ lattice $\blat$ together with a strictly convex $s$-dimensional rational polyhedral cone $Q_\real \subset \blat_\real = \blat\otimes_\inte \real$. We call $Q := Q_\real \cap \blat$ the {\em universal monoid}. We consider the ring $\cfr:=\comp[Q]$, a monomial element of which is written as $q^m \in \cfr$ for $m \in Q$, and the maximal ideal $\mathbf{m}:= \comp[Q\setminus \{0\}]$. Then $\cfrk{k}:= \cfr / \mathbf{m}^{k+1}$ is an Artinian ring, and we denote by $\hat{\cfr}:= \varprojlim_{k} \cfrk{k}$ the completion of $\cfr$. We further equip $\cfr$, $\cfrk{k}$ and $\hat{\cfr}$ with the natural monoid homomorphism $Q \rightarrow \cfr$, $m \mapsto q^m$, which gives them the structure of a {\em log ring} (see \cite[Definition 2.11]{gross2011real}); the corresponding log analytic spaces are denoted as $\logs$, $\logsk{k}$ and $\logsf$ respectively. 
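For example, in the special case $\blat = \inte$ and $Q_{\real} = \real_{\geq 0}$ (so that $Q = \bb{N}$), which is the case used for the toric degenerations in \S \ref{subsec:open_construction}, we simply have $\cfr = \comp[q]$, $\mathbf{m} = (q)$, $\cfrk{k} = \comp[q]/(q^{k+1})$ and $\hat{\cfr} = \comp[[q]]$.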
Furthermore, we let $\logsdrk{}{*} := \cfr \otimes_{\comp} \bigwedge^*\blat_{\comp}$, $\logsdrk{k}{*}:= \cfrk{k} \otimes_{\comp} \bigwedge^*\blat_{\comp}$ and $\logsdrf{*} := \hat{\cfr} \otimes_{\comp} \bigwedge^*\blat_{\comp}$ (here $\blat_\comp = \blat \otimes_\inte \comp$) be the spaces of log de Rham differentials on $\logs$, $\logsk{k}$ and $\logsf$ respectively, where we write $1 \otimes m = d \log q^m$ for $m \in \blat$; these are equipped with the de Rham differential $\partial$ satisfying $\partial(q^m) = q^m d\log q^m$. We also denote by $\logsvfk{}:= \cfr \otimes_{\comp} \blat_{\comp}^{\vee}$, $\logsvfk{k}$ and $\logsvff$, respectively, the spaces of log derivations, which are equipped with a natural Lie bracket $[\cdot,\cdot]$. We write $\partial_n$ for the element $1\otimes n$ with action $\partial_n (q^m) = (m,n) q^m$, where $(m,n)$ is the natural pairing between $\blat_\comp$ and $\blat^{\vee}_\comp$. \end{notation} \section{Gross--Siebert's cone construction of maximally degenerate Calabi--Yau varieties}\label{sec:gross_siebert} This section is a brief review of Gross--Siebert's construction of the maximally degenerate Calabi--Yau variety $\centerfiber{0}$ from the affine manifold $B$ and its log structures from slab functions \cite{Gross-Siebert-logI, Gross-Siebert-logII, gross2011real}. \subsection{Integral tropical manifolds}\label{subsec:integral_affine_manifolds} We first recall the notion of integral tropical manifolds from \cite[\S 1.1]{gross2011real}. Given a lattice $M$ of rank $n$, a \textit{rational convex polyhedron} $\sigma$ is a convex subset in $M_{\real}$ given by a finite intersection of rational (i.e. defined over $M_{\bb{Q}}$) affine half-spaces. We usually drop the attributes ``rational'' and ``convex'' for polyhedra. A polyhedron $\sigma$ is said to be \textit{integral} if all its vertices lie in $M$; a \textit{polytope} is a compact polyhedron. The group $\mathbf{Aff}(M):= M \rtimes \mathrm{GL}(M)$ of integral affine transformations acts on the set of polyhedra in $M_{\real}$. Given a polyhedron $\sigma \subset M_\real$, let $\tanpoly_{\sigma,\real} \subset M_\real$ be the linear subspace parallel to the smallest affine subspace containing $\sigma$, and denote by $\tanpoly_{\sigma} := \tanpoly_{\sigma,\real} \cap M$ the corresponding lattice of integral tangent vectors. The \textit{relative interior} $\reint(\sigma)$ refers to taking the interior of $\sigma$ in the smallest affine subspace containing it. There is an identification $T_{\sigma,x} \cong \tanpoly_{\sigma,\real}$ for the tangent space at $x \in \reint(\sigma)$. Write $\partial \sigma = \sigma \setminus \reint(\sigma)$. Then a \textit{face} of $\sigma$ is the intersection of $\partial \sigma$ with a supporting hyperplane. Codimension one faces are called \textit{facets}. Let $\pcate$ be the category whose objects are integral polyhedra and morphisms consist of the identity and integral affine isomorphisms onto faces (i.e. an integral affine morphism $\tau \rightarrow \sigma$ which is an isomorphism onto its image and identifies $\tau$ with a face of $\sigma$). An \textit{integral polyhedral complex} is a functor $\mathtt{F}\colon \pdecomp \rightarrow \pcate$ from a finite category $\pdecomp$ to $\pcate$ such that every face of $\mathtt{F}(\sigma)$ still lies in the image of $\mathtt{F}$, and there is at most one arrow $\tau \rightarrow \sigma$ for every pair $\tau ,\sigma \in \pdecomp$. By abuse of notation, we usually drop the notation $\mathtt{F}$ and write $\sigma \in \pdecomp$ to represent an integral polyhedron in the image of the functor.
From an integral polyhedral complex, we obtain a topological space $B := \varinjlim_{\sigma \in \pdecomp} \sigma$ via gluing of the polyhedra along faces. We further assume that: \begin{enumerate} \item the natural map $\sigma \rightarrow B$ is injective for each $\sigma \in \pdecomp$, so that $\sigma$ can be identified with a closed subset of $B$ called a \textit{cell}, and a morphism $\tau \rightarrow \sigma$ can be identified with an inclusion of subsets; \item a finite intersection of cells is a cell; and \item $B$ is an orientable connected topological manifold of dimension $n$ without boundary which in addition satisfies the condition that $H^1(B,\mathbb{Q}) = 0$. \end{enumerate} \begin{remark} The condition $H^1(B,\mathbb{Q}) = 0$ will be used only in Theorem \ref{prop:Maurer_cartan_equation_unobstructed} to ensure that $H^{1}(\centerfiber{0},\mathcal{O}) = H^1(B,\comp) = 0$, where $\centerfiber{0}$ is the degenerate Calabi--Yau variety that we are going to construct.\footnote{In his recent work \cite{Felten23}, Felten was able to prove Theorem \ref{prop:Maurer_cartan_equation_unobstructed} without assuming that $H^1(B,\mathbb{Q}) = 0$.} This corresponds to the condition that $b_1=0$ for smooth Calabi--Yau manifolds. \end{remark} The set of $k$-dimensional cells is denoted by $\pdecomp^{[k]}$, and the $k$-skeleton by $\pdecomp^{[\leq k]}$. For every $\tau \in \pdecomp$, we define its \textit{open star} by $$ U_{\tau}:= \bigcup_{\sigma \supset \tau} \reint(\sigma), $$ which is an open subset of $B$ containing $\reint(\tau)$. A \textit{fan structure along $\tau \in \pdecomp^{[n-k]}$} is a continuous map $S_{\tau} \colon U_{\tau} \rightarrow \real^{k}$ such that \begin{itemize} \item $S^{-1}_{\tau}(0) = \reint(\tau)$, \item for every $\sigma \supset \tau$, the restriction $S_{\tau}|_{\reint(\sigma)}$ is an integral affine submersion onto its image (meaning that it is induced by some epimorphism $\tanpoly_{\sigma} \rightarrow W \cap \inte^k$ for some vector subspace $W\subset \real^k$), and \item the collection of cones $\{ K_{\tau}\sigma:= \real_{\geq 0} S_{\tau}(\sigma \cap U_{\tau}) \}_{\sigma \supset \tau}$ forms a complete finite fan $\Sigma_{\tau}$. \end{itemize} Two fan structures along $\tau$ are \emph{equivalent} if they differ by composition with an integral affine transformation of $\real^{k}$. If $S_\tau$ is a fan structure along $\tau$ and $\sigma \supset \tau$, then $U_{\sigma} \subset U_{\tau}$ and there is a fan structure along $\sigma$ induced from $S_{\tau}$ via the composition: $$ U_{\sigma} \hookrightarrow U_{\tau} \rightarrow \bb{R}^{k} \twoheadrightarrow \bb{R}^{l}, $$ where $\bb{R}^{k} \rightarrow \bb{R}^{k}/ \real S_{\tau}(\sigma \cap U_{\tau}) \cong \bb{R}^{l}$ is the quotient map. \begin{definition}[\cite{gross2011real}, Def. 1.2]\label{def:integral_tropical_manifold} An \emph{integral tropical manifold} is an integral polyhedral complex $(B,\pdecomp)$ together with a fan structure $S_{\tau}$ along each $\tau \in \pdecomp$ such that whenever $\tau \subset \sigma$, the fan structure induced from $S_{\tau}$ is equivalent to $S_{\sigma}$. \end{definition} Taking sufficiently small and mutually disjoint open subsets $W_{v} \subset U_{v}$ for $v \in \pdecomp^{[0]}$ and $\reint(\sigma)$ for $\sigma \in \pdecomp^{[n]}$, there is an integral affine structure on $\bigcup_{v \in \pdecomp^{[0]}} W_v \cup \bigcup_{\sigma \in \pdecomp^{[n]}} \reint(\sigma)$. 
We will further choose the open subsets $W_v$'s and $\reint(\sigma)$'s so that the affine structure is defined outside a closed subset $\Gamma$ of codimension two in $B$, as in \cite[\S 1.3]{Gross-Siebert-logI}. This affine structure allows us to use parallel transport to identify the tangent spaces $T_x B$ for different points $x$ outside the closed subset. For every $\tau$ we choose a maximal cell $\sigma\supset \tau$ and consider the lattice of normal vectors $\norpoly_{\tau}=\tanpoly_{\sigma}/\tanpoly_{\tau}$ (we suppress the dependence on $\sigma$ because we will see that $\tanpoly_{\tau}$ is invariant under the monodromy transformation given by any two vertices of $\tau$ and any two maximal cells containing $\tau$). We can identify $\norpoly_{\tau}$ with $\inte^{k}$ via $S_{\tau}$, and write the fan structure as $S_{\tau} \colon U_{\tau} \rightarrow \norpoly_{\tau,\real}$. \begin{example}\label{eg:K3_example} We take a $2$-dimensional example from \cite[Ex. 6.74]{dbrane} to illustrate the above definitions. Let $\Xi$ be the convex hull of the points $$ p_0 = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}, \ p_1 = \begin{bmatrix} 3\\-1\\-1\end{bmatrix}, \ p_2 = \begin{bmatrix} -1 \\ 3 \\ -1 \end{bmatrix}, \ p_3 = \begin{bmatrix} -1 \\ -1 \\3 \end{bmatrix}, $$ so $\Xi$ is a $3$-simplex. Take $B$ (as a topological space) to be the boundary of $\Xi$. The polyhedral decomposition $\pdecomp$ is defined so that the integral points are vertices as shown in Figure \ref{fig:k3_polytope}. \begin{figure}[h!] \includegraphics[scale=0.3]{k3_polytope.png} \caption{The polyhedral decomposition}\label{fig:k3_polytope} \end{figure} Then we define affine coordinate charts on $\bigcup_{\sigma \in \pdecomp^{[n]}} \reint(\sigma) \cup \bigcup_{v\in \pdecomp^{[0]}} W_v$ as follows. On $\reint(\sigma)$, we take $\psi_{\sigma} \colon \reint(\sigma) \rightarrow \tanpoly_{\sigma,\real}$ which maps homeomorphically onto its image. At a vertex $v$ treated as a vector in $\real^3$, we let $\psi_v\colon W_v \subset \real^3 \rightarrow \real^3 / \real v$ be the restriction of the natural projection $\real^3 \rightarrow \real^3/\real v$ onto the quotient. By \cite[Prop. 6.81]{dbrane}, this gives an integral affine manifold with singularities. The affine structure can be extended to the complement of a subset $\Gamma$ consisting of $24$ points lying on the six edges of $\Xi$, with each edge containing $4$ points (colored in red in Figure \ref{fig:k3_polytope}). The fan structure $S_\tau$ can be defined similarly. Locally near each singular point $p \in \Gamma$ contained in an edge $\rho$, the affine structure is described as a gluing of two affine charts $U_{\mathrm{I}}\subset \real^2 \setminus (\{0\} \times \real_{\geq 0})$ and $U_{\mathrm{II}}\subset \real^2 \setminus (\{0\} \times \real_{\leq 0})$ as in \cite[\S 3.2]{gross2011invitation}. The change of coordinates from $U_{\mathrm{I}}$ to $U_{\mathrm{II}}$ is given by the restriction of the map $\Upsilon$ from $(\real \setminus \{0\}) \times \real$ to itself defined by $$ (x,y) \mapsto \begin{cases} (x,y), & x<0\\ (x,x+y), & x>0. \end{cases} $$ The fan structure $S_\rho\colon U_{\rho} \rightarrow \real$ is given as $S_{\rho}(x,y) = x$ and the fan $\Sigma_{\rho}$ is the toric fan for $\mathbb{P}^1$. Figure \ref{fig:affine_chart} below illustrates the situation. \begin{figure}[h!]
\includegraphics[scale=0.5]{affine_chart.png} \caption{Affine coordinate charts}\label{fig:affine_chart} \end{figure} With the structure of an integral tropical manifold, the corners and edges in Figure \ref{fig:k3_polytope} are flattened via the affine coordinate charts, and we can view $(B,\pdecomp)$ as the 2-sphere equipped with a polyhedral decomposition and with $24$ affine singularities. Such an affine structure with singularities also appears in the base $B$ of an SYZ fibration of a K3 surface. \end{example} \begin{example}\label{eg:3d_example} A $3$-dimensional example can be constructed as in \cite[Ex. 6.74]{dbrane}. Take $\Xi$ to be the convex hull of the points $$ p_0 = \begin{bmatrix} -1 \\ -1 \\ -1 \\-1 \end{bmatrix}, \ p_1 = \begin{bmatrix} 4 \\-1\\-1 \\ -1 \end{bmatrix}, \ p_2 = \begin{bmatrix} -1 \\ 4 \\ -1 \\ -1 \end{bmatrix}, \ p_3 = \begin{bmatrix} -1 \\ -1 \\4 \\ -1 \end{bmatrix}, \ p_4 = \begin{bmatrix} -1 \\ -1 \\-1 \\ 4 \end{bmatrix}, $$ which gives a $4$-simplex. Take $B$ (as a topological space) to be the boundary of $\Xi$. There are five $3$-dimensional maximal cells intersecting along ten $2$-dimensional facets. The polyhedral decomposition $\pdecomp$ on each facet is as in Figure \ref{fig:three_d_polyhedral_decomposition}. \begin{figure}[h!] \includegraphics[scale=0.8]{three_d_polyhedral_decomposition.png} \caption{The polyhedral decomposition on a facet}\label{fig:three_d_polyhedral_decomposition} \end{figure} The affine structure can be extended to the complement of a codimension two closed subset $\Gamma$ whose intersection with a triangle in Figure \ref{fig:three_d_polyhedral_decomposition} is a $Y$-shaped locus. Locally near each of these triangles, the locus $\Gamma$ looks like Figure \ref{fig:3_d_singular_locus_1}. \begin{figure}[h!] \centering \subfloat[$Y$-vertex of type I]{ \includegraphics[width=0.35\textwidth]{3_d_singular_locus_1.png} \label{fig:3_d_singular_locus_1} } \hspace{12mm} \subfloat[$Y$-vertex of type II]{ \includegraphics[width=0.45\textwidth]{3_d_singular_locus_2.png} \label{fig:3_d_singular_locus_2} } \end{figure} $\Xi$ has ten $1$-dimensional faces, each of which is an edge with affine length $5$. The polyhedral decomposition $\pdecomp$ divides each edge into $5$ intervals as we can see in Figure \ref{fig:three_d_polyhedral_decomposition}. Locally near each of these length $1$ intervals, there are three $2$-cells of $\pdecomp$ intersecting along it. The locus $\Gamma$ on each of these $2$-cells meets the interval as shown in Figure \ref{fig:3_d_singular_locus_2}. \end{example} \begin{definition}[\cite{Gross-Siebert-logI}, Def. 1.43]\label{def:piecewise_linear} An \emph{integral affine function} on an open subset $U \subset B$ is a continuous function $\varphi$ on $U$ which is integral affine on $U \cap \reint(\sigma)$ for $\sigma \in \pdecomp^{[n]}$ and on $U \cap W_v$ for $v \in \pdecomp^{[0]}$. We denote by $\cu{A}\mathit{ff}_{B}$ (or simply $\cu{A}\mathit{ff}$) \emph{the sheaf of integral affine functions on $B$}. A \emph{piecewise integral affine function} (abbrev.\ as \emph{PA-function}) on $U$ is a continuous function $\varphi$ on $U$ which can be written as $\varphi = \psi + S_{\tau}^*(\bar{\varphi})$ on $U \cap U_{\tau}$ for every $\tau \in \pdecomp$, where $\psi \in \cu{A}\mathit{ff}(U \cap U_\tau)$ and $\bar{\varphi}$ is a piecewise linear function on $\norpoly_{\tau,\real}$ with respect to the fan $\Sigma_{\tau}$. \emph{The sheaf of PA-functions on $B$} is denoted by $\cu{PL}_{\pdecomp}$.
\end{definition} There is a natural inclusion $\cu{A}\mathit{ff}\hookrightarrow\cu{PL}_{\pdecomp}$, and we let $\cu{MPL}_{\pdecomp}$ be the quotient: $$0\to \cu{A}\mathit{ff}\to\cu{PL}_{\pdecomp}\to\cu{MPL}_{\pdecomp}\to 0.$$ Locally, an element $\varphi\in\Gamma(B,\cu{MPL}_{\pdecomp})$ is a collection of piecewise affine functions $\{\varphi_U\}$ such that on each overlap $U\cap V$, the difference $\varphi_U|_{V}-\varphi_V|_{U}$ is an integral affine function on $U\cap V$. \begin{definition}[\cite{Gross-Siebert-logI}, Def. 1.45 and 1.47]\label{def:strictly_convex_piecewise_affine} The sheaf $\cu{MPL}_{\pdecomp}$ is called \emph{the sheaf of multi-valued piecewise affine functions (abbrev.\ as MPA-functions) of the pair $(B,\pdecomp)$}. A section $\varphi\in H^0(B,\cu{MPL}_{\pdecomp})$ is said to be \emph{convex} (resp. \emph{strictly convex}) if for any vertex $\{v\}\in\pdecomp$, there is a convex (resp. strictly convex) representative $\varphi_v$ on $U_v$. (Here, convexity (resp. strict convexity) means that if we take any maximal cone $\sigma \subset U_v$ with the affine function $l_{\sigma}\colon U_v\rightarrow \real$ defined by requiring $\varphi_v|_{\sigma} = l_{\sigma}$, we always have $\varphi_v(y)\geq l_{\sigma}(y)$ (resp. $\varphi_v(y)> l_{\sigma}(y)$) for $y\in U_v \setminus \sigma$). \end{definition} The set of all convex multi-valued piecewise affine functions gives a sub-monoid of $H^0(B,\cu{MPL}_{\pdecomp})$ under addition, denoted as $H^0(B,\cu{MPL}_{\pdecomp},\bb{N})$; we let $Q$ be the dual monoid. \begin{definition}[\cite{Gross-Siebert-logI}, Def. 1.48]\label{def:strictly_convex_multi_valued_function} The polyhedral decomposition $\pdecomp$ is said to be \emph{regular} if there exists a strictly convex multi-valued piecewise linear function $\varphi\in H^0(B,\cu{MPL}_{\pdecomp})$. \end{definition} We always assume that $\pdecomp$ is regular with a fixed strictly convex $\varphi \in H^0(B,\cu{MPL}_{\pdecomp})$. \subsection{Monodromy, positivity and simplicity}\label{subsec:monodromy_data} To describe monodromy, we consider two maximal cells $\sigma_{\pm}$ and two of their common vertices $v_{\pm}$. Taking a path $\gamma$ going from $v_+$ to $v_-$ through $\sigma_+$, and then from $v_-$ back to $v_+$ through $\sigma_-$, we obtain a monodromy transformation $T_{\gamma}$. As in \cite[\S 1.5]{Gross-Siebert-logI}, we are interested in two cases. The first case is when $v_+$ is connected to $v_-$ via a bounded edge $\omega \in \pdecomp^{[1]}$. Let $d_{\omega} \in \tanpoly_{\omega}$ be the unique primitive vector pointing to $v_-$ along $\omega$. For an integral tangent vector $m \in T_{v_+} := T_{v_+,\inte}B$, the monodromy transformation $T_{\gamma}$ is given by \begin{equation}\label{eqn:monodromy_transformation_edge_fixed} T_{\gamma}(m) = m + \langle m , n^{\sigma_+ \sigma_-}_{\omega} \rangle d_{\omega} \end{equation} for some $n^{\sigma_+ \sigma_-}_{\omega} \in \norpoly_{\sigma_+ \cap \sigma_-}^*\subset T_{v_+}^* $, where $\langle \cdot, \cdot \rangle$ is the natural pairing between $T_{v_+}$ and $T_{v_+}^*$. The second case is when $\sigma_+$ and $\sigma_-$ are separated by a codimension one cell $\rho \in \pdecomp^{[n-1]}$. Let $\check{d}_{\rho}\in \norpoly_{\rho}^*$ be the unique primitive covector which is positive on $\sigma_+$.
The monodromy transformation is given by \begin{equation}\label{eqn:monodromy_transformation_rho_fixed} T_{\gamma}(m) = m + \langle m , \check{d}_{\rho} \rangle m^{\rho}_{v_+v_-} \end{equation} for some $m^{\rho}_{v_+v_-} \in \tanpoly_{\tau}$, where $\tau \subset \rho$ is the smallest face of $\rho$ containing $v_{\pm}$. In particular, if we fix both $v_{\pm} \in \omega \subset \rho \subset\sigma_{\pm}$, one obtains the formula \begin{equation}\label{eqn:monodromy_transformation_both_fixed} T_{\gamma}(m) = m + \kappa_{\omega\rho}\langle m , \check{d}_{\rho} \rangle d_{\omega} \end{equation} for some integer $\kappa_{\omega\rho}$. \begin{definition}[\cite{Gross-Siebert-logI}, Def. 1.54]\label{def:positivity_assumption} We say that $(B,\pdecomp)$ is \emph{positive} if $\kappa_{\omega\rho} \geq 0$ for all $\omega \in \pdecomp^{[1]}$ and $\rho \in \pdecomp^{[n-1]}$ with $\omega \subset \rho$. \end{definition} Following \cite[Definition 1.58]{Gross-Siebert-logI}, we package the monodromy data into polytopes associated to $\tau \in \pdecomp^{[k]}$ for $1\leq k \leq n-1$. The simplest case is when $\rho \in \pdecomp^{[n-1]}$, whose \emph{monodromy polytope} is defined by fixing a vertex $v_0 \in \rho$ and setting \begin{equation}\label{eqn:monodromy_polytope_for_rho} \Delta(\rho):= \mathrm{Conv}\{ m^{\rho}_{v_0 v} \ | \ v \in \rho, \ v \in \pdecomp^{[0]} \} \subset \tanpoly_{\rho,\real}, \end{equation} where $\mathrm{Conv}$ refers to taking the convex hull. It is well-defined up to translation and independent of the choice of $v_0$. The normal fan of $\rho$ in $\tanpoly_{\rho,\real}^*$ is a refinement of the normal fan of $\Delta(\rho)$. Similarly, when $\omega \in \pdecomp^{[1]}$, one defines the \emph{dual monodromy polytope} by fixing $\sigma_0 \supset \omega$ and setting \begin{equation}\label{eqn:dual_monodromy_polytope_for_omega} \check{\Delta}(\omega):= \mathrm{Conv}\{ n^{\sigma_0 \sigma}_{\omega} \ | \ \sigma\supset \omega, \ \sigma \in \pdecomp^{[n]} \} \subset \norpoly_{\omega,\real}^*. \end{equation} Again, this is well-defined up to translation and independent of the choice of $\sigma_0$. The fan $\Sigma_{\omega}$ in $\norpoly_{\omega,\real}$ is a refinement of the normal fan of $\check{\Delta}(\omega)$. For $1< \dim_{\real}(\tau) <n-1$, a combination of monodromy and dual monodromy polytopes is needed. We let $\pdecomp_1(\tau) = \{ \omega \ | \ \omega \in \pdecomp^{[1]}, \ \omega \subset \tau \}$ and $\pdecomp_{n-1}(\tau) = \{ \rho \ | \ \rho \in \pdecomp^{[n-1]}, \ \rho \supset \tau \}$. For each $\rho \in \pdecomp_{n-1}(\tau)$, we choose a vertex $v_0 \in \rho$ and let $$\Delta_{\rho}(\tau):= \mathrm{Conv}\{ m^{\rho}_{v_0 v} \ | \ v \in \tau, \ v \in \pdecomp^{[0]} \} \subset \tanpoly_{\tau,\real}.$$ Similarly, for each $\omega \in \pdecomp_1(\tau)$, we choose $\sigma_0 \supset \tau$ and let $$\check{\Delta}_{\omega}(\tau):= \mathrm{Conv} \{ n^{\sigma_0 \sigma}_{\omega} \ | \ \sigma\supset \tau, \ \sigma \in \pdecomp^{[n]} \} \subset \norpoly_{\tau,\real}^*.$$ These are well-defined up to translation and independent of the choices of $v_0$ and $\sigma_0$ respectively. \begin{definition}[\cite{Gross-Siebert-logI}, Def.
1.60]\label{def:simplicity} We say $(B,\pdecomp)$ is \emph{simple} if, for every $\tau \in \pdecomp$, there are disjoint non-empty subsets $$ \Omega_1,\dots,\Omega_p \subset \pdecomp_1(\tau), \quad R_1,\dots, R_p \subset \pdecomp_{n-1}(\tau) $$ (where $p$ depends on $\tau$) such that \begin{enumerate} \item for $\omega \in \pdecomp_1(\tau)$ and $\rho \in \pdecomp_{n-1}(\tau)$, $\kappa_{\omega\rho} \neq 0$ if and only if $\omega \in \Omega_i$ and $\rho \in R_i$ for some $1 \leq i \leq p$; \item $\Delta_{\rho}(\tau)$ is independent (up to translation) of $\rho \in R_i$ and will be denoted by $\Delta_i(\tau)$; similarly, $\check{\Delta}_{\omega}(\tau)$ is independent (up to translation) of $\omega \in \Omega_i$ and will be denoted by $\check{\Delta}_i(\tau)$; \item if $\{e_1,\dots,e_p\}$ is the standard basis in $\inte^{p}$, then $$ \Delta(\tau):= \mathrm{Conv} \left\{ \bigcup_{i=1}^{p} \Delta_i(\tau) \times \{e_i\} \right\}, \quad \check{\Delta}(\tau):= \mathrm{Conv} \left\{ \bigcup_{i=1}^{p} \check{\Delta}_i(\tau) \times \{e_i\} \right\} $$ are elementary simplices (i.e. a simplex whose only integral points are its vertices) in $\left(\tanpoly_{\tau} \oplus \inte^{p}\right)_{\real}$ and $\left( \norpoly_{\tau}^*\oplus \inte^{p} \right)_{\real}$ respectively. \end{enumerate} \end{definition} We need the following stronger condition in order to apply \cite[Thm. 3.21]{Gross-Siebert-logII} in a later stage: \begin{definition}\label{def:strongly simple} We say $(B,\pdecomp)$ is \emph{strongly simple} if it is simple, and for every $\tau \in \pdecomp$, both $\Delta(\tau)$ and $\check{\Delta}(\tau)$ are standard simplices. \end{definition} \begin{example}\label{eg:2d_monodromy} Consider the $2$-dimensional example in Example \ref{eg:K3_example}. Following \cite[Ex. 6.82(1)]{dbrane}, we may choose the two adjacent vertices in Figure \ref{fig:k3_polytope} to be $v_1 = \begin{bmatrix} -1 & -1 & -1 \end{bmatrix}^T$ and $v_2 = \begin{bmatrix} 0 & -1 & -1 \end{bmatrix}^T$ which bound a $1$-cell $\rho$. The two adjacent maximal cells are given by $\sigma_+ \subset \{ b \ | \ \langle w_+,b\rangle =1\}$ where $w_+ = \begin{bmatrix} 0 & 0 & -1 \end{bmatrix}^T$ and $\sigma_- \subset \{ b \ | \ \langle w_-,b\rangle =1\}$ where $w_- = \begin{bmatrix} 0 & -1 & 0 \end{bmatrix}^T$. The tangent lattice $T_{v_1}$ can be identified with $\inte^3/\inte \cdot v_1$ equipped with the basis $e_1 = \begin{bmatrix} 1 & 0 &0 \end{bmatrix}^T$, $e_2 = \begin{bmatrix} 0 & 1& 0 \end{bmatrix}^T$. If we let $\gamma$ be a loop going from $v_1$ to $v_2$ through $\sigma_+$ and going back to $v_1$ through $\sigma_-$, we have $$ T_{\gamma}(m) = m + \langle \begin{bmatrix} 0 & 1 & -1 \end{bmatrix}^T, m \rangle e_1 $$ for $m \in T_{v_1}$. Therefore, we have $p=1$, $\Delta_1(\rho) = \mathrm{Conv} \{0,e_1\}$ and $\check{\Delta}_{1}(\rho) = \mathrm{Conv} \{ 0 , w_+ - w_-\}$. This is an example of a positive and strongly simple $(B,\pdecomp)$ (Definitions \ref{def:positivity_assumption} and \ref{def:strongly simple}). \end{example} \begin{example}\label{eg:3d_monodromy} Next we consider the two types of $Y$-vertex in Example \ref{eg:3d_example}. We begin with $Y$-vertex of type $I$ in Figure \ref{fig:3_d_singular_locus_1}. Following \cite[Ex. 
6.82(2)]{dbrane}, the three vertices $v_1,v_2,v_3$ can be chosen to be $$ v_1 = \begin{bmatrix} -1 & -1& -1 & -1 \end{bmatrix}^T, \ v_2 = \begin{bmatrix} 0 & -1 & -1 & -1 \end{bmatrix}^T, \ v_3 = \begin{bmatrix} -1 & 0 & -1 & -1 \end{bmatrix}^T, $$ and $\sigma_+ \subset \{ b \in \real^4 \ | \ \langle w_+ , b \rangle = 1 \}$, $\sigma_- \subset \{ b \in \real^4 \ | \ \langle w_- , b \rangle = 1 \}$ are $3$-cells of $B$ lying in the affine hyperplanes with dual vector $w_+ = \begin{bmatrix} 0 & 0 & -1 & 0 \end{bmatrix}^T$ and $w_- = \begin{bmatrix} 0 & 0 & 0 & -1 \end{bmatrix}^T$ respectively. If we identify $T_{v_i}$ with $\tanpoly_{\sigma_+}$ via parallel transport and choose the basis of $\tanpoly_{\sigma_+}$ as $$ e_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T, \ e_2 = \begin{bmatrix} 0 & -1 & 0 & 0 \end{bmatrix}^T, \ e_3 = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}^T, $$ then the monodromy transformations are given by $$ T_{\gamma_1}= \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \ T_{\gamma_2} = \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}, \ T_{\gamma_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, $$ where $\gamma_i$ is the loop going from $v_i$ to $v_{i+1}$ through $\sigma_+$ and going back to $v_i$ through $\sigma_-$, with indices of $v_i$'s taken modulo $3$. In this case, we have $p=1$, $\Delta_1(\rho) = \mathrm{Conv}\{0, e_1,-e_2\}$ is a $2$-simplex and $\check{\Delta}_{1}(\rho) = \mathrm{Conv}\{0, w_+ - w_-\}$ is a $1$-simplex. For the $Y$-vertex of type II in Figure \ref{fig:3_d_singular_locus_2}, we can choose $$v_1 = \begin{bmatrix} -1 & -1 & -1 & -1 \end{bmatrix}^T,\ v_2 = \begin{bmatrix} 0 & -1 & -1 & -1 \end{bmatrix}^T,$$ which are the end-points of a $1$-cell $\tau$. We choose the three maximal cells $\sigma_1$, $\sigma_2$ and $\sigma_3$ intersecting at $\tau$ to be the $3$-cells lying in affine hyperplanes defined by $\{b \ | \ \langle w_i, b \rangle = 1\}$, where $$ w_1 = \begin{bmatrix} 0 & 0& -1 & 0 \end{bmatrix}^T, \ w_2 = \begin{bmatrix} 0 & 0 & 0 & -1 \end{bmatrix}^T, \ w_3 = \begin{bmatrix} 0 & -1 & 0 & 0 \end{bmatrix}^T. $$ Let $\tilde{\gamma}_i$ be the loop going from $v_1$ to $v_2$ through $\sigma_i$ and then going back to $v_1$ through $\sigma_{i+1}$, with indices taken to be modulo $3$. Then the corresponding monodromy transformations are given by $$ T_{\tilde{\gamma}_1}= \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \ T_{\tilde{\gamma}_2} = \begin{bmatrix} 1 & 1& 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \ T_{\tilde{\gamma}_3} = \begin{bmatrix} 1 & -1& -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$ with respect to the basis $$ e_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T, \ e_2 = \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix}^T, \ e_3 = \begin{bmatrix} 0 & 0 & -1 & 0 \end{bmatrix}^T. $$ In this case, $p=1$, $\Delta_1(\tau) = \mathrm{Conv}\{0, v_2 - v_1\}$ is a $1$-simplex and $\check{\Delta}_1(\tau)= \mathrm{Conv}\{0, w_1 - w_2, w_1-w_3\}$ is a $2$-simplex. Both examples are positive and strongly simple. \end{example} Throughout this paper, we always assume that $(B,\pdecomp)$ is positive and strongly simple. In particular, both $\Delta_i(\tau)$ and $\check{\Delta}_i(\tau)$ are standard simplices of positive dimensions, and $\tanpoly_{\Delta_1(\tau)}\oplus \cdots \oplus \tanpoly_{\Delta_p(\tau)}$ (resp. $\tanpoly_{\check{\Delta}_1(\tau)} \oplus \cdots \oplus \tanpoly_{\check{\Delta}_p(\tau)}$) is an internal direct summand of $\tanpoly_{\tau}$ (resp. $\norpoly_{\tau}^*$).
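For instance, in Example \ref{eg:2d_monodromy}, comparing the displayed monodromy with \eqref{eqn:monodromy_transformation_both_fixed} (where $\omega = \rho$ since $n=2$, $d_{\omega} = e_1$ and $\check{d}_{\rho} = w_+ - w_-$) gives $\kappa_{\rho\rho} = 1$, so positivity can be read off directly from the monodromy matrix, while $\Delta_1(\rho)$ and $\check{\Delta}_1(\rho)$ are visibly standard $1$-simplices; moreover, in both Examples \ref{eg:2d_monodromy} and \ref{eg:3d_monodromy}, the direct summands above are in fact equal to $\tanpoly_{\tau}$ and $\norpoly_{\tau}^*$ themselves.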
\subsection{Cone construction by gluing open affine charts}\label{subsec:open_construction} In this subsection, we recall the cone construction of the maximally degenerate Calabi--Yau $\centerfiber{0} = \centerfiber{0}(B,\pdecomp,s)$, following \cite{Gross-Siebert-logI} and \cite[\S 1.2]{gross2011real}. For this purpose, we take $\blat = \inte$ and $Q_{\real}$ to be the non-negative real axis in Notation \ref{not:universal_monoid}, so that $Q = \bb{N}$. Throughout this paper, we will work in the category of analytic schemes. We will construct $\centerfiber{0}$ as a gluing of affine analytic schemes $V(v)$ parametrized by the vertices of $\pdecomp$. For each vertex $v$, we consider the fan $\Sigma_v$ and take the analytic affine toric variety $$V(v):=\spec(\bb{C}[\Sigma_v]),$$ where $\spec$ means analytification of the algebraic affine scheme given by $\mathrm{Spec}$. Here, the monoid structure for a general fan $\Sigma \subset M_{\real}$ is given by $$p+q=\begin{cases} p+q &\text{ if }p,q \in M \text{ are in a common cone of } \Sigma, \\\infty & \text{ otherwise}, \end{cases}$$ and we set $z^{\infty} =0$ in taking $\mathrm{Spec}(\bb{C}[\Sigma])$ (by abuse of notation, we use $\Sigma$ to stand for both the fan and the monoid associated to the fan if there is no confusion); in other words, the ring $\comp[\Sigma]$ is defined explicitly as $$\comp[\Sigma]:= \bigoplus_{p \in |\Sigma| \cap M} \comp \cdot z^p, \quad z^p\cdot z^q=\begin{cases} z^{p+q} &\text{ if }p,q \in M \text{ are in a common cone of } \Sigma, \\0 & \text{ otherwise}, \end{cases}$$ where $|\Sigma|$ denotes the support of the fan $\Sigma$. To glue these affine analytic schemes together, we need affine subschemes $\{V(\tau)\}$ associated to $\tau\in \pdecomp$ with $v\in\tau$ and natural open embeddings $V(\tau)\hookrightarrow V(\omega)$ for $v \in \omega\subset\tau$. First, for $\tau\in \pdecomp$ such that $v\in\tau$, we consider the \emph{localization of $\Sigma_v$ at $\tau$} defined by $$\tau^{-1}\Sigma_v:=\{K_v\sigma + \tanpoly_{\tau,\bb{R}}\,|\,K_v\sigma\text{ is a cone in } \Sigma_v \text{ such that }\sigma \supset \tau\};$$ here recall that $K_v\sigma = \real_{\geq 0} S_{v}(\sigma \cap U_v)$ is the cone in $\Sigma_v$ (see the definition of a fan structure before Definition \ref{def:integral_tropical_manifold}). This defines a new complete fan in $T_{v,\real}$ consisting of convex, but not necessarily strictly convex, cones. If $\tau$ contains another vertex $v'$, we can identify the fans $\tau^{-1}\Sigma_v$ and $\tau^{-1}\Sigma_{v'}$ as follows: for each maximal $\sigma \supset \tau$, we identify the maximal cones $K_{v}\sigma + \tanpoly_{\tau,\bb{R}}$ and $K_{v'}\sigma + \tanpoly_{\tau,\bb{R}}$ by identifying the tangent spaces $T_v\cong T_{v'}$ using parallel transport through $\sigma\supset\tau$. Patching these identifications for all $\sigma \supset \tau$ together, we get a piecewise linear transformation from $T_{v}$ to $T_{v'}$, identifying the fans $\tau^{-1}\Sigma_v$ and $\tau^{-1}\Sigma_{v'}$ and hence the corresponding monoids. This defines the affine analytic scheme $$V(\tau):=\spec(\bb{C}[\tau^{-1}\Sigma_v]),$$ up to a unique isomorphism. Notice that $\tau^{-1}\Sigma_v$ can be identified (non-canonically) with the fan $\Sigma_{\tau} \times \tanpoly_{\tau,\real}$ in $\norpoly_{\tau,\real}\times \tanpoly_{\tau,\real}$, so actually $$V(\tau) \cong \spec(\comp[\tanpoly_{\tau}]) \times \spec(\comp[\Sigma_{\tau}]),$$ where $\spec(\comp[\tanpoly_{\tau}]) \cong \tanpoly_{\tau}^* \otimes_\inte \comp^* \cong (\comp^{*})^l$ is a complex torus.
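For instance, if $\Sigma$ is the complete fan in $\real$ consisting of the two rays $\real_{\geq 0}$, $\real_{\leq 0}$ and the origin (the fan of $\bb{P}^1$), then $$\comp[\Sigma] = \bigoplus_{p \in \inte} \comp \cdot z^{p} \quad \text{with} \quad z^{1}\cdot z^{-1} = 0,$$ so that, writing $x = z^{1}$ and $y = z^{-1}$, we have $\comp[\Sigma] \cong \comp[x,y]/(xy)$ and $\spec(\comp[\Sigma])$ is a union of two affine lines meeting at a point. In particular, for a $1$-cell $\rho$ in Example \ref{eg:K3_example} (where $\Sigma_{\rho}$ is the fan of $\bb{P}^1$), the above description gives $V(\rho) \cong \comp^{*} \times \{xy = 0\}$.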
For any $v\in \omega\subset\tau$, there is a map of monoids $ \omega^{-1}\Sigma_v \to\tau^{-1}\Sigma_v$ given by $$p\mapsto\begin{cases} p & \text{ if }p\in K_{v}\sigma +\Lambda_{\omega,\bb{R}}\text{ for some }\sigma\supset\tau, \\\infty & \text{ otherwise} \end{cases}$$ (though there is no fan map from $\omega^{-1}\Sigma_v$ to $\tau^{-1}\Sigma_v$ in general), and hence a ring map $$\iota_{\omega\tau}^*\colon \bb{C}[\omega^{-1}\Sigma_v]\to\bb{C}[\tau^{-1}\Sigma_v].$$ This gives an open inclusion of affine schemes $$\iota_{\omega\tau}\colon V(\tau)\hookrightarrow V(\omega),$$ and hence a functor $F\colon \pdecomp\to{\bf{Sch}}_{\mathrm{an}}$ defined by $$ F(\tau):= V(\tau), \quad F(e):= \iota_{\omega\tau} \colon V(\tau)\to V(\omega) $$ for $\omega \subset \tau$. We can further introduce twistings of the gluing of the affine analytic schemes $\{V(\tau)\}_{\tau\in\pdecomp}$. Toric automorphisms $\mu$ of $V(\tau)$ are in bijection with the set of $\bb{C}^{*}$-valued piecewise multiplicative maps on $T_v\cap|\tau^{-1}\Sigma_v|$ with respect to the fan $\tau^{-1}\Sigma_v$. Explicitly, for each maximal cell $\sigma\in \pdecomp^{[n]}$ with $\tau\subset\sigma$, there is a monoid homomorphism $\mu_{\sigma}\colon \tanpoly_{\sigma}\to\bb{C}^{*}$ such that if $\sigma'\in \pdecomp^{[n]}$ also contains $\tau$, then $\mu_{\sigma}|_{\tanpoly_{\sigma\cap\sigma'}}=\mu_{\sigma'}|_{\tanpoly_{\sigma\cap\sigma'}}$. Denote by $\mathrm{PM}(\tau)$ the multiplicative group of $\bb{C}^{*}$-valued piecewise multiplicative maps on $T_v\cap|\tau^{-1}\Sigma_v|$. The group $\mathrm{PM}(\tau)$ a priori depends on the choice of $v \in \tau$; however, for different choices, say $v$ and $v'$, the groups can be identified via the identification $\tau^{-1}\Sigma_v \cong \tau^{-1}\Sigma_{v'}$. For $\omega \subset \tau$, there is a natural restriction map $|_{\tau} \colon \mathrm{PM}(\omega) \rightarrow \mathrm{PM}(\tau)$ given by restricting to those maximal cells $\sigma \supset \omega$ with $\sigma \supset \tau$. \begin{definition}[\cite{gross2011real}, Def. 1.18]\label{def:open_gluing_data} A choice of \emph{open gluing data} (for the cone construction) for $(B,\pdecomp)$ is a set $s=(s_{\omega\tau})_{\omega \subset \tau}$ of elements $s_{\omega \tau}\in \mathrm{PM}(\tau)$ such that \begin{enumerate} \item $s_{\tau\tau}=1$ for all $\tau\in\pdecomp$, and \item if $\omega \subset \tau \subset \rho$, then $$s_{\omega\rho}=s_{\tau\rho}\cdot s_{\omega\tau}|_{\rho}.$$ \end{enumerate} Two choices of open gluing data $s,s'$ are said to be \emph{cohomologous} if there exists a system $\{t_{\tau}\}_{\tau \in \pdecomp}$, with $t_{\tau}\in \mathrm{PM}(\tau)$ for each $\tau\in\pdecomp$, such that $s_{\omega\tau}=t_{\tau}(t_{\omega}|_{\tau})^{-1}s_{\omega\tau}'$ whenever $\omega\subset \tau$. \end{definition} The set of cohomology classes of choices of open gluing data is a group under multiplication, denoted as $H^1(\msc{P},\msc{Q}_{\msc{P}}\otimes\bb{C}^{\times})$. For $s\in \mathrm{PM}(\tau)$, we will denote also by $s$ the corresponding toric automorphism on $V(\tau)$ which is explicitly given by $s^*(z^m) = s_{\sigma}(m) z^m$ for $m \in \sigma \supset \tau$. If $s$ is a choice of open gluing data, then we can define an \emph{$s$-twisted functor} $F_s \colon \pdecomp \to{\bf{Sch}}_{\mathrm{an}}$ by setting $F_s(\tau):=F(\tau)=V(\tau)$ on objects and $F_s(\omega \subset \tau):=F(\omega \subset \tau)\circ s_{\omega\tau}^{-1} \colon V(\tau) \to V(\omega)$ on morphisms.
This defines the analytic scheme $$\centerfiber{0}=\centerfiber{0}(B,\pdecomp,s):=\lim_{\longrightarrow}F_s.$$ Gross--Siebert \cite{Gross-Siebert-logI} showed that $\centerfiber{0}(B,\pdecomp,s)\cong\centerfiber{0}(B,\pdecomp,s')$ as schemes when $s,s'$ are cohomologous. \begin{remark}\label{rem:closed_Construction} Given $\tau \in \pdecomp^{[k]}$, one can define a closed stratum $\iota_{\tau} \colon \centerfiber{0}_{\tau} \rightarrow \centerfiber{0}$ of dimension $k$ by gluing together the $k$-dimensional toric strata $V_\tau(\omega) \subset V(\omega) = \spec(\comp[\omega^{-1}\Sigma_v])$ corresponding to the cones $K_v\tau + \tanpoly_{\omega,\bb{R}}$ in $\omega^{-1}\Sigma_v$, for all $\omega \subset \tau$. Abstractly, it is isomorphic to the toric variety associated to the polyhedron $\tau \subset \tanpoly_{\tau,\real}$. Also, for every pair $\omega \subset \tau$, there is a natural inclusion $\iota_{\omega \tau}\colon \centerfiber{0}_{\omega}\rightarrow \centerfiber{0}_{\tau} $. One can alternatively construct $\centerfiber{0}$ by gluing along the closed strata $\centerfiber{0}_\tau$'s according to the polyhedral decomposition; see \cite[\S 2.2]{Gross-Siebert-logI}. \end{remark} We recall the following definition from \cite{Gross-Siebert-logI}, which serves as an alternative set of combinatorial data for encoding $\mu\in \mathrm{PM}(\tau)$. \begin{definition}[\cite{Gross-Siebert-logI}, Def. 3.25 and \cite{gross2011real}, Def. 1.20]\label{def:alternative_description_open_gluing_data} Let $\mu\in \mathrm{PM}(\tau)$ and $\rho\in\pdecomp^{[n-1]}$ with $\tau\subset\rho$. For a vertex $v\in\tau$, we define $$D(\mu,\rho,v):=\frac{\mu_{\sigma}(m)}{\mu_{\sigma'}(m')}\in\bb{C}^{\times},$$ where $\sigma,\sigma'$ are the two unique maximal cells such that $\sigma\cap\sigma'=\rho$, $m \in \tanpoly_{\sigma}$ is an element projecting to the generator in $\norpoly_{\rho} \cong \tanpoly_{\sigma}/\tanpoly_{\rho}\cong\bb{Z}$ pointing to $\sigma'$, and $m'$ is the parallel transport of $m\in\tanpoly_{\sigma}$ to $\tanpoly_{\sigma'}$ through $v$. $D(\mu,\rho,v)$ is independent of the choice of $m$. \end{definition} Let $\rho\in\pdecomp^{[n-1]}$ and $\sigma_+,\sigma_-$ be the two unique maximal cells such that $\sigma_+\cap\sigma_-=\rho$. Let $\check{d}_{\rho}\in\norpoly_{\rho}^*$ be the unique primitive covector which is positive on $\sigma_+$. For any two vertices $v,v'\in\tau$, we have the formula \begin{equation}\label{eqn:gluing_data_ratio_under_monodromy} D(\mu,\rho,v)=\mu(m_{vv'}^{\rho})^{-1}\cdot D(\mu,\rho,v') \end{equation} relating monodromy data to the open gluing data, where $m_{vv'}^{\rho}\in\tanpoly_{\rho}$ is as discussed in \eqref{eqn:monodromy_transformation_rho_fixed}. The formula \eqref{eqn:gluing_data_ratio_under_monodromy} describes the interaction between monodromy and a fixed $\mu \in \mathrm{PM}(\tau)$. We shall further impose the following lifting condition from \cite[Prop. 4.25]{Gross-Siebert-logI} relating $s_{v\tau}, s_{v'\tau} \in \mathrm{PM}(\tau)$ and monodromy data: \begin{condition}\label{cond:open_gluing_data_lifting_condition} We say a choice of open gluing data $s$ satisfies the \emph{lifting condition} if for any two vertices $v, v'\in \tau \subset \rho$ with $\rho \in \pdecomp^{[n-1]}$, we have $D(s_{v\tau},\rho,v) = D(s_{v'\tau},\rho,v')$ whenever $m^{\rho}_{vv'} = 0$. \end{condition} \subsection{Log structures}\label{subsec:log_structure_and_slab_function} We need to equip the analytic scheme $\centerfiber{0}=\centerfiber{0}(B,\pdecomp,s)$ with log structures.
The main reference is \cite[\S 3 - 5]{Gross-Siebert-logI}. \begin{definition}\label{def:log_structure} Let $X$ be an analytic space. A \emph{log structure} on $X$ is a sheaf of monoids $\cu{M}_X$ together with a homomorphism $\alpha_X\colon \cu{M}_X \rightarrow \cu{O}_X$ of sheaves of (multiplicative) monoids such that $\alpha_X \colon \alpha_X^{-1}(\cu{O}_X^*) \rightarrow \cu{O}_X^*$ is an isomorphism. The \emph{ghost sheaf} $\overline{\cu{M}}_X$ of a log structure is defined as the quotient sheaf $\cu{M}_X/\alpha_X^{-1}(\cu{O}_X^*)$, whose monoid structure is written additively. \end{definition} \begin{example}\label{example:divisor_log_structure} Let $X$ be an analytic space and $D\subset X$ be a closed analytic subspace of pure codimension one. We denote by $j \colon X\setminus D \hookrightarrow X $ the inclusion. Then the sheaf of monoids $$ \cu{M}_{X}:=j_*(\cu{O}_{X\setminus D}^*) \cap \cu{O}_X, $$ together with the natural inclusion $\alpha_X \colon \cu{M}_{X} \rightarrow \cu{O}_X$ defines a log structure on $X$. \end{example} We write $X^{\dagger}$ if we want to emphasize the log structure on $X$. A general way to define a log structure is to take an arbitrary homomorphism of sheaves of monoids $$ \tilde{\alpha} \colon \cu{P} \rightarrow \cu{O}_X, $$ and then define the associated log structure by \begin{equation*} \cu{M}_X := (\cu{P}\oplus \cu{O}_X^*)/\{(p,\tilde{\alpha}(p)^{-1}) \ | \ p \in \tilde{\alpha}^{-1}(\cu{O}_X^*) \}. \end{equation*} In particular, this allows us to define log structures on an analytic space $Y$ by pulling back those on another analytic space $X$ via a morphism $f \colon Y \rightarrow X$. More precisely, given a log structure on $X$, the \emph{pullback log structure} on $Y$ is defined to be the log structure associated to the composition $\tilde{\alpha}_Y \colon f^{-1}(\cu{M}_X) \rightarrow f^{-1}(\cu{O}_X) \rightarrow \cu{O}_Y$. For more details of the theory of log structures, readers are referred to, e.g., \cite[\S 3]{Gross-Siebert-logI}. \begin{example}\label{example:toric_log_structure} Taking a toric monoid $P$ (i.e. $P= C \cap M$ for a cone $C \subset M_\real$), we can define $\tilde{\alpha} \colon \underline{P} \rightarrow \cu{O}_{\spec(\comp[P])}$ by sending $m \mapsto z^{m}$, where $\underline{P}$ is the constant sheaf with stalk $P$. From this we obtain a log structure on the analytic toric variety $\spec(\comp[P])$. Note that this is a special case of Example \ref{example:divisor_log_structure}, where we take $X = \spec(\comp[P])$ and $D$ to be the toric boundary divisor. \end{example} Before we describe the log structures on $\centerfiber{0}=\centerfiber{0}(B,\pdecomp,s)$, let us first specify a ghost sheaf $\overline{\cu{M}}$ over $\centerfiber{0}$. Recall that the polyhedral decomposition $\pdecomp$ is assumed to be regular, namely, there exists a strictly convex multi-valued piecewise linear function $\varphi\in H^0(B,\cu{MPL}_{\pdecomp})$. For any $\tau \in \pdecomp$, we take a strictly convex representative $\bar{\varphi}_{\tau}$ of $\varphi$ on $\norpoly_{\tau,\real}$, and define $$\Gamma(V(\tau),\overline{\cu{M}}) := \bar{P}_{\tau} = C_{\tau} \cap (\norpoly_{\tau} \oplus \bb{Z}),$$ where $C_\tau:=\{(m,h)\in \norpoly_{\tau,\real} \oplus\bb{R} \,|\, h\geq \bar{\varphi}_\tau(m)\}$.
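For instance, when $\tau = \rho \in \pdecomp^{[n-1]}$, we have $\norpoly_{\rho,\real} \cong \real$ with $\Sigma_{\rho}$ the fan of $\bb{P}^1$, and we may take $\bar{\varphi}_{\rho}(m) = \kappa \max(0,m)$ for some integer $\kappa \geq 1$ (the change of slope of $\varphi$ across $\rho$). Then $\bar{P}_{\rho}$ is generated by $(1,\kappa)$, $(-1,0)$ and $(0,1)$, subject to the single relation $(1,\kappa) + (-1,0) = \kappa\,(0,1)$, so that, writing $x = z^{(1,\kappa)}$, $y = z^{(-1,0)}$ and $q = z^{(0,1)}$, $$ \comp[\bar{P}_{\rho}] \cong \comp[x,y,q]/(xy - q^{\kappa}); $$ this is the monoid underlying the expected local model $xy = q^{\kappa}$ for a smoothing of $\centerfiber{0}$ transverse to the stratum $\centerfiber{0}_{\rho}$ (compare the local models $\spec(\comp[P_v])$ constructed below).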
For any $\omega \subset \tau$, we take an integral affine function $\psi_{\omega\tau}$ on $U_{\omega}$ such that $\psi_{\omega\tau}+ S_{\omega}^*( \bar{\varphi}_{\omega})$ vanishes on $K_{\omega}\tau$, and agrees with $S_{\tau}^*(\bar{\varphi}_{\tau})$ on all of $\sigma \cap U_{\tau}$ for any $\sigma \supset \tau$. This induces a map $C_{\omega} \rightarrow C_{\omega \tau} :=\{(m,h)\in \norpoly_{\omega,\real} \oplus\bb{R} \,|\, h\geq \psi_{\omega\tau}(m)+\bar{\varphi}_\omega(m)\}$ by sending $(m,h)\mapsto (m,h+\psi_{\omega\tau}(m))$, whose composition with the quotient map $\norpoly_{\omega,\real}\oplus \real \rightarrow \norpoly_{\tau,\real}\oplus \real$ gives a map $C_{\omega} \rightarrow C_{\tau}$ of cones that corresponds to the monoid homomorphism $\bar{P}_{\omega}\rightarrow \bar{P}_{\tau}$. The $\bar{P}_{\tau}$'s glue together to give the ghost sheaf $\overline{\cu{M}}$ over $\centerfiber{0}$. There is a well-defined section $\bar{\varrho} \in \Gamma(\centerfiber{0},\overline{\cu{M}})$ given by gluing $(0,1) \in C_{\tau}$ for each $\tau$. One may then hope to find a log structure on $\centerfiber{0}$ which is log smooth and with ghost sheaf given by $\overline{\cu{M}}$. However, due to the presence of non-trivial monodromies of the affine structure, this can only be done away from a complex codimension $2$ subset $Z \subset \centerfiber{0}$ not containing any toric strata. Such log structures can be described by sections of a coherent sheaf $\cu{LS}^+_{\mathrm{pre}}$ supported on the scheme-theoretic singular locus $\centerfiber{0}_{\mathrm{sing}} \subset \centerfiber{0}$. We now describe the sheaf $\cu{LS}^+_{\mathrm{pre}}$ and some of its sections called \emph{slab functions}; readers are referred to \cite[\S 3 and 4]{Gross-Siebert-logI} for more details. For every $\rho \in \pdecomp^{[n-1]}$, we consider $\iota_{\rho} \colon \centerfiber{0}_{\rho} \rightarrow \centerfiber{0}$, where $\centerfiber{0}_{\rho}$ is the toric variety associated to the polytope $\rho \subset \tanpoly_{\rho,\real}$. From the fact that the normal fan $\mathscr{N}_{\rho} \subset \tanpoly_{\rho,\real}^*$ of $\rho$ is a refinement of the normal fan $\mathscr{N}_{\Delta(\rho)}\subset \tanpoly_{\rho,\real}^*$ of the $r_{\rho}$-dimensional simplex $\Delta(\rho)$ (as in \S \ref{subsec:monodromy_data}), we have a toric morphism \begin{equation}\label{eqn:toric_map_to_monodromy_projective_spaces} \varkappa_{\rho}\colon \centerfiber{0}_{\rho} \rightarrow \bb{P}^{r_\rho}. \end{equation} Now, $\Delta(\rho)$ corresponds to $\cu{O}(1)$ on $\bb{P}^{r_\rho}$. We let $\cu{N}_{\rho}:= \varkappa_{\rho}^*(\cu{O}(1))$ on $\centerfiber{0}_{\rho}$, and define \begin{equation}\label{eqn:line_bundle_parametrizing_log_structure} \cu{LS}^+_{\mathrm{pre}} := \bigoplus_{\rho \in \pdecomp^{[n-1]}} \iota_{\rho,*} (\cu{N}_{\rho}). \end{equation} Sections of $\cu{LS}^+_{\mathrm{pre}}$ can be described explicitly. For each $v \in \pdecomp^{[0]}$, we consider the open subscheme $V(v)$ of $\centerfiber{0}$ and the local trivialization $$ \cu{LS}^+_{\mathrm{pre}}|_{V(v)} = \bigoplus_{\rho: v \in \rho} \cu{O}_{V_{\rho}(v)}, $$ whose sections over $V(v)$ are given by $(f_{v\rho})_{v \in \rho}$. 
Given $v, v' \in \tau$, where $\tau$ corresponds to the chart $V(\tau)$, these local sections obey the change of coordinates given by \begin{equation}\label{eqn:change_of_coordinates_of_line_bundle_for_log_structures} D(s_{v'\tau},\rho, v')^{-1} s_{v'\tau}^{-1} (f_{v'\rho}) = z^{-m^{\rho}_{vv'}} D(s_{v\tau},\rho, v)^{-1} s_{v\tau}^{-1} (f_{v\rho}), \end{equation} where $\rho \supset \tau$ and $s_{v\tau}, s_{v'\tau}$ are part of the open gluing data $s$. The section $f := (f_{v\rho})_{v \in \rho}$ is said to be \emph{normalized} if $f_{v\rho}$ takes the value $1$ at the $0$-dimensional toric stratum corresponding to the vertex $v$, for all $\rho$. We will restrict ourselves to normalized sections $f$ of $\cu{LS}^+_{\mathrm{pre}}$. The complex codimension $2$ subset $Z \subset \centerfiber{0}$ is taken to be the zero locus of $f$ on $\centerfiber{0}_{\mathrm{sing}}$. Only a subset of normalized sections of $\cu{LS}^+_{\mathrm{pre}}$ corresponds to log structures. For every vertex $v \in \pdecomp^{[0]}$ and $\tau \in \pdecomp^{[n-2]}$ containing $v$, we choose a cyclic ordering $\rho_1,\dots,\rho_l$ of codimension one cells containing $\tau$ according to an orientation of $\norpoly_{\tau,\real}$. Let $\check{d}_{\rho_i} \in \norpoly_v^*$ be the positively oriented normal to $\rho_i$. Then the condition for $f = (f_{v\rho})_{v\in \rho}\in \cu{LS}^+_{\mathrm{pre}}|_{V(v)}$ to define a log structure is given by \begin{equation}\label{eqn:condition_for_sections_to_define_log_structure} \prod_{i=1}^l \check{d}_{\rho_i} \otimes f_{v\rho_i}|_{V_{\tau}(v)} = 0 \otimes 1,\quad \text{in } \norpoly_v^* \otimes \Gamma(V_{\tau}(v)\setminus Z, \cu{O}_{V_{\tau}(v)}^*), \end{equation} where the group structure on $\norpoly_v^*$ is additive and that on $\Gamma(V_{\tau}(v)\setminus Z, \cu{O}_{V_{\tau}(v)}^*)$ is multiplicative. If $f = (f_{v\rho})_{v\in \rho}$ is a normalized section satisfying this condition, we call the $f_{v\rho}$'s \emph{slab functions}. \begin{theorem}[\cite{Gross-Siebert-logI}, Thm. 5.2]\label{thm:log_structures} Suppose that $B$ is compact and the pair $(B,\pdecomp)$ is simple and positive. Let $s$ be a choice of open gluing data satisfying the lifting condition (Condition \ref{cond:open_gluing_data_lifting_condition}). Then there exists a unique normalized section $f \in \Gamma(\centerfiber{0},\cu{LS}^+_{\mathrm{pre}})$ which defines a log structure on $\centerfiber{0}$ (i.e. satisfying the condition \eqref{eqn:condition_for_sections_to_define_log_structure}). \end{theorem} From now on, we always assume that $B$ is compact. To describe the log structure in Theorem \ref{thm:log_structures}, we first construct some local smoothing models: For each vertex $v \in \pdecomp^{[0]}$, we represent the strictly convex piecewise linear function $\varphi$ in a small neighborhood $U$ of $v$ by a strictly convex piecewise linear function $\varphi_v\colon \norpoly_{v,\real}\to\bb{R}$ (so that $\varphi = S_v^*(\varphi_v)$) and set \begin{align*} C_v := \{(m,h)\in \norpoly_{v,\real} \oplus\bb{R} \,|\, h\geq\varphi_v(m)\},\quad P_v := C_v\cap(\norpoly_{v}\oplus\bb{Z}). \end{align*} The element $\varrho = (0,1)\in\norpoly_v\oplus\bb{Z}$ gives rise to a regular function $q:=z^{\varrho}$ on $\spec(\bb{C}[P_v])$.
We have a natural identification $$V(v):=\spec(\bb{C}[\Sigma_v]) \cong \spec(\bb{C}[P_v]/q),$$ through which we view $V(v)$ as the toric boundary divisor in $\spec(\bb{C}[P_v])$ that corresponds to the holomorphic function $q$, and $\pi_v \colon \spec(\bb{C}[P_v]) \rightarrow \spec(\comp[q])$ as a local model for smoothing $V(v)$. Using these local models, we can now describe the log structure around a point $x \in \centerfiber{0} \setminus Z$. On a neighborhood $V \subset V(v)\setminus Z$ of $x$, the local smoothing model is given by composing the two inclusions $\rest{} \colon V \hookrightarrow V(v)$ and $V(v) \hookrightarrow \spec(\comp[P_{v}])$. The natural monoid homomorphism $P_v \rightarrow \comp[P_v]$ defined by sending $m \mapsto z^m$ determines a log structure on $\spec(\comp[P_v])$ which restricts to one on the toric boundary divisor $V(v) = \spec(\comp[\Sigma_v])$. We further twist the inclusion $\rest{} \colon V \hookrightarrow V(v)$ as \begin{equation}\label{eqn:zero_order_embedding_encode_log_structure} z^{m} \mapsto h_m\cdot z^{m}\text{ for $m \in \Sigma_v$;} \end{equation} here, for each $m \in \Sigma_v$, $h_m$ is chosen as an invertible holomorphic function on $V \cap \mathrm{Zero}(z^m;v)$, where we denote $\mathrm{Zero}(z^m;v):=\overline{ \{x \in V(v) \ | \ z^{m} \in \cu{O}_{x}^* \}}$, and such that they satisfy the relations \begin{equation}\label{eqn:local_log_structure_from_monoid_homomorphism} h_m \cdot h_{m'} = h_{m+m'},\quad \text{on } V \cap \mathrm{Zero}(z^{m+m'};v). \end{equation} Then pulling back the log structure on $V(v)$ via $\rest{} \colon V \hookrightarrow V(v)$ produces a log structure on $V$ which is log smooth. These local choices of $h_m$'s are also required to be determined by the slab functions $f_{v\rho}$'s, up to equivalences. Here, we shall just give the formula relating them; see \cite[Thm. 3.22]{Gross-Siebert-logI} for details. For any $\rho \in \pdecomp^{[n-1]}$ containing $v$ and two maximal cells $\sigma_{\pm}$ such that $\sigma_{+} \cap \sigma_- = \rho$, we take $m_+ \in \norpoly_{v} \cap K_{v} \sigma_+$ generating $\norpoly_{\rho}$ with some $m_0 \in \norpoly_v\cap K_v \rho$ such that $m_0 - m_+ \in \norpoly_v \cap K_v \sigma_-$. Then the required relation is given by \begin{equation}\label{eqn:relation_between_log_smooth_structure_and_slab_functions} f_{v\rho} = \frac{h_{m_0}^2}{h_{m_0-m_+} \cdot h_{m_0 + m_+}}\Big|_{V_\rho(v)\cap V} \in \cu{O}^*_{V_\rho(v)}( V_\rho(v) \cap V), \end{equation} which is independent of the choices of $m_0$ and $m_+$. By abuse of notation, we also let $\rest{} \colon V \rightarrow \localmod{k}$ be the $k$-th order thickening of $V$ over $\comp[q]/q^{k+1}$ in the model $\spec(\comp[P_v])$ under the above embedding. Then there is a natural divisorial log structure on $\localmod{k}^{\dagger}$ over $\logsk{k} $ coming from restriction of the log structure on $\spec(\comp[P_{v}])^{\dagger}$ over $\logs$ (i.e. Example \ref{example:divisor_log_structure}, which is the same as the one given by Example \ref{example:toric_log_structure} in this case). Restricting to $V$ reproduces the log structure we constructed above, which is the log structure of $\centerfiber{0}^{\dagger}$ over the log point $\logsk{0}$ locally around $x$. 
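To fix ideas, we spell out the simplest instance of these local smoothing models; the identifications made below are chosen only for illustration and are not needed in the sequel.

\begin{example}
Suppose that $\norpoly_v \cong \bb{Z}$, that $\Sigma_v$ is the fan of $\bb{P}^1$, and that $\varphi_v(m) = \max(0,m)$. Then $P_v = \{(m,h)\in \bb{Z}^2 \ | \ h \geq \max(0,m)\}$ is generated by $(1,1)$, $(-1,0)$ and $\varrho = (0,1)$, subject to the single relation $(1,1)+(-1,0) = \varrho$. Writing $z_+ := z^{(1,1)}$ and $z_- := z^{(-1,0)}$, we get $\spec(\bb{C}[P_v]) \cong \spec(\bb{C}[z_+,z_-])$ with $q = z^{\varrho} = z_+ z_-$, so that $\pi_v$ is the familiar family $(z_+,z_-) \mapsto z_+ z_-$, whose central fiber $V(v) = \{z_+ z_- = 0\} = \spec(\bb{C}[\Sigma_v])$ is the union of the two coordinate axes.
\end{example}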
We have a Cartesian diagram of log spaces \begin{equation}\label{eqn:cartesian_diagram_of_log_spaces} \xymatrix@1{ V^{\dagger}\ \ar@{^{(}->}[r] \ar[d] & \localmod{k}^{\dagger} \ar[d]\\ \logsk{0}\ \ar@{^{(}->}[r] & \logsk{k} } \end{equation} Next we describe the log structure around a singular point $x \in Z \cap \left( \centerfiber{0}_{\tau} \setminus \bigcup_{\omega \subset\tau} \centerfiber{0}_{\omega}\right)$ for some $\tau$. Viewing $f = \sum_{\rho \in \pdecomp^{[n-1]}} f_{\rho}$ where $f_{\rho}$ is a section of $\cu{N}_{\rho}$, we let $Z_{\rho} = Z(f_{\rho}) \subset \centerfiber{0}_{\rho} \subset \centerfiber{0}$ and write $Z = \bigcup_{\rho} Z_{\rho}$. For every $\tau \in \pdecomp$, we have the data $\Omega_i$'s, $R_i$'s, $\Delta_i(\tau)$ and $\check{\Delta}_i(\tau)$ described in Definition \ref{def:simplicity} because $(B,\pdecomp)$ is simple. Since the normal fan $\mathscr{N}_\tau \subset \tanpoly_{\tau,\real}^*$ of $\tau$ is a refinement of $\mathscr{N}_{\Delta_i(\tau)} \subset \tanpoly_{\tau,\real}^*$, we have a natural toric morphism \begin{equation}\label{eqn:i_th_map_to_toric_variety_of_monodromy} \varkappa_{\tau,i} \colon \centerfiber{0}_{\tau} \rightarrow \bb{P}^{r_{\tau,i}}, \end{equation} and the identification $\iota_{\tau\rho}^*(\cu{N}_{\rho}) \cong \varkappa_{\tau,i}^*(\cu{O}(1))$. By the proof of \cite[Thm. 5.2]{Gross-Siebert-logI}, $\iota_{\tau\rho}^*(f_{\rho})$ is completely determined by the gluing data $s$ and the associated monodromy polytope $\Delta_i(\tau)$ where $\rho \in R_i$. In particular, we have $\iota_{\tau\rho}^*(f_{\rho}) = \iota_{\tau \rho'}^*(f_{\rho'})$ and $Z_\rho \cap \centerfiber{0}_{\tau} = Z_{\rho'}\cap \centerfiber{0}_{\tau} =: Z_{i}^{\tau}$ for $\rho,\rho' \in R_i$. Locally, if we write $V(\tau) = \spec(\comp[\tau^{-1}\Sigma_v])$ by choosing some $v \in \tau$, then, for each $ 1 \leq i \leq p$, there exists an analytic function $f_{v,i}$ on $V(\tau)$ such that $f_{v,i}|_{V_{\rho}(\tau)} =s_{v\tau}^{-1}( f_{v\rho})$ for $\rho \in R_i$. According to \cite[\S 2.1]{Gross-Siebert-logII}, for each $ 1 \leq i \leq p$, we have $\check{\Delta}_i(\tau) \subset \norpoly_{\tau,\real}^*$, which gives \begin{equation}\label{eqn:piecewise_linear_function_associated_to_dual_monodromy_polytope} \psi_i(m) = - \inf \{ \langle m,n \rangle \ | \ n \in \check{\Delta}_i(\tau)\}. \end{equation} By convention, we write $\psi_0:= \bar{\varphi}_{\tau}$. By rearranging the indices $i$'s, we can assume that $x \in Z^\tau_{1} \cap \cdots \cap Z^\tau_{r}$ and $x \notin Z^{\tau}_{r+1} \cup \cdots \cup Z^{\tau}_{p} $. We introduce the convention that $\psi_{x,i} = \psi_{i}$ for $0\leq i \leq r$ and $\psi_{x,i} \equiv 0$ for $r<i\leq \dim_{\real}(\tau)$. Then the local smoothing model near $x$ is constructed as $\spec(\comp[P_{\tau,x}])$, where \begin{equation}\label{eqn:local_monoid_near_singularity} P_{\tau,x}:= \{ (m,a_0,\dots,a_{l})\in \norpoly_{\tau} \times \inte^{l+1} \ | \ a_i \geq \psi_{x,i}(m) \}, \end{equation} $l = \dim_{\real}(\tau)$, and the distinguished element $\varrho = (0,1,0,\dots,0)$ defines a family $$\spec(\comp[P_{\tau,x}]) \rightarrow \spec(\comp[q])$$ by sending $q \mapsto z^{\varrho}$. 
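Before turning to the central fiber of this family, let us illustrate the monoid $P_{\tau,x}$ in the simplest singular situation; the specific identifications made below are assumptions chosen only for illustration.

\begin{example}
Let $n = 2$ and let $\tau \in \pdecomp^{[1]}$ be an edge with $x \in Z^{\tau}_1$, so that $l = 1$ and $r = 1$. Identify $\norpoly_{\tau} \cong \bb{Z}$, and suppose that $\psi_{x,0} = \bar{\varphi}_{\tau}$ is given by $\psi_{x,0}(m) = \max(0,m)$ and that $\check{\Delta}_1(\tau)$ is a unit segment, so that $\psi_{x,1}(m) = \max(0,-m)$ after a suitable identification $\norpoly_{\tau}^* \cong \bb{Z}$. Then $$P_{\tau,x} = \{(m,a_0,a_1)\in \bb{Z}^3 \ | \ a_0 \geq \max(0,m), \ a_1 \geq \max(0,-m)\}$$ is generated by $z_+ := z^{(1,1,0)}$, $z_- := z^{(-1,0,1)}$, $w := z^{(0,0,1)}$ and $q = z^{\varrho} = z^{(0,1,0)}$, subject to the single relation $z_+ z_- = qw$. Hence $\spec(\comp[P_{\tau,x}]) \cong \{z_+ z_- = qw\} \subset \comp^4$, and the family over $\spec(\comp[q])$ smooths out the central fiber $\{z_+ z_- = 0\} \subset \comp^3$, the union of two affine planes.
\end{example}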
The central fiber is given by $\spec(\comp[Q_{\tau,x}])$, where \begin{equation}\label{eqn:central_fiber_local_model} Q_{\tau,x} = \{ (m,a_0,\dots,a_l) \ | \ a_0 = \psi_{x,0}(m) \} \cong P_{\tau,x}/(\varrho+P_{\tau,x}) \end{equation} is equipped with the monoid structure $$m + m' = \begin{cases} m+m' & \text{if } m+m' \in Q_{\tau,x},\\ \infty & \text{otherwise.} \end{cases} $$ We have the ring isomorphism $\comp[Q_{\tau,x}] \cong\comp[\Sigma_{\tau} \oplus \bb{N}^{l}]$ induced by the monoid isomorphism defined by sending $(m,a_0,a_1,\dots,a_l) \mapsto (m,a_1 - \psi_1(m),\dots,a_l - \psi_l(m))$. We also fix an isomorphism $\comp[\tau^{-1}\Sigma_v] \cong \comp[\Sigma_{\tau}\oplus \bb{Z}^{l}]$ coming from the identification of $\tau^{-1}\Sigma_v $ with the fan $\Sigma_{\tau} \oplus \real^{l} = \{ \omega \oplus \real^{l} \ | \ \omega \text{ is a cone of } \Sigma_{\tau} \} $ in $\norpoly_{\tau,\real} \oplus \real^{l}$. Taking a sufficiently small neighborhood $V$ of $x$ such that $Z_\rho \cap V = \emptyset$ if $x \notin Z_{\rho}$, we define a map $V \rightarrow \spec(\comp[Q_{\tau,x}])$ by composing $V \hookrightarrow \spec(\comp[\tau^{-1}\Sigma_v]) \cong \spec(\comp[\Sigma_{\tau}\oplus \bb{Z}^{l}])$ with the map $\spec(\comp[\Sigma_{\tau}\oplus \bb{Z}^{l}]) \rightarrow \spec(\comp[\Sigma_{\tau}\oplus \bb{N}^{l}])$ described on generators by \begin{equation}\label{eqn:singular_locus_local_model} \begin{cases} z^m \mapsto h_m \cdot z^m & \text{if } m\in \Sigma_{\tau} ;\\ u_i \mapsto f_{v,i} & \text{if } 1\leq i \leq r;\\ u_i \mapsto z_i-z_i(x) & \text{if } r<i\leq l; \end{cases} \end{equation} here $u_i$ is the $i$-th coordinate function of $\comp[\bb{N}^{l}]$, $z_i$ is the $i$-th coordinate function of $\comp[\bb{Z}^{l}]$ chosen so that $\left(\ddd{f_{v,i}}{z_j}\right)_{1\leq i \leq r, 1\leq j \leq r}$ is non-degenerate on $V$; also, each $h_m$ is an invertible holomorphic function on $V \cap \mathrm{Zero}(z^m;v)$, and they satisfy the equations \eqref{eqn:local_log_structure_from_monoid_homomorphism} and \eqref{eqn:relation_between_log_smooth_structure_and_slab_functions} where we replace $f_{v\rho}$ by $$\tilde{f}_{v\rho} = \begin{cases} s_{v\tau}^{-1}( f_{v\rho}) & \text{if } x \notin Z_{\rho},\\ 1 & \text{if } x \in Z_{\rho}. \end{cases} $$ Letting $\rest{} \colon V \rightarrow \localmod{k}$ be the $k$-th order thickening of $V$ over $\comp[q]/q^{k+1}$ in the model $\spec(\comp[P_{\tau,x}])$ under the above embedding, we have a natural divisorial log structure on $\localmod{k}^{\dagger}$ over $\logsk{k}$ induced from the inclusion $\spec(\comp[Q_{\tau,x}]) \hookrightarrow \spec(\comp[P_{\tau,x}])$ (i.e. Example \ref{example:divisor_log_structure}). Restricting it to $V$ gives the log structure of $\centerfiber{0}^{\dagger}$ over the log point $\logsk{0}$ locally around $x$. \section{A generalized moment map and the tropical singular locus on $B$}\label{sec:modified_moment_map} In this section, we recall the construction of a generalized moment map $\moment \colon \centerfiber{0} \rightarrow B$ from \cite[Prop. 2.1]{ruddat2019period}. Then we construct some convenient charts on the base tropical manifold $B$ and study its singular locus. \subsection{A generalized moment map}\label{subsec:moment_map} From this point onward, we will assume the vanishing of an obstruction class associated to the open gluing data $s$, namely, $o(s) = 1$, where the obstruction class $o(s)$ is written multiplicatively (see \cite[Thm. 2.34]{Gross-Siebert-logI}).
Under this assumption, one can construct an ample line bundle $\cu{L}$ on $\centerfiber{0}$ as follows: For each polytope $\tau \subset \tanpoly_{\tau,\real}$, by identifying $\centerfiber{0}_{\tau}$ (a closed stratum of $\centerfiber{0}$ described in Remark \ref{rem:closed_Construction}) with the projective toric variety associated to $\tau$, we obtain an ample line bundle $\cu{L}_\tau$ on $\centerfiber{0}_{\tau}$. When the assumption holds, there exists an isomorphism $\mathbf{h}_{\omega\tau}\colon \iota_{\omega\tau}^*(\cu{L}_{\tau}) \cong \cu{L}_{\omega}$, for every pair $\omega \subset \tau$, such that the isomorphisms $\mathbf{h}_{\omega\tau}$'s satisfy the \emph{cocycle condition}, i.e. $\mathbf{h}_{\omega\tau}\circ\iota_{\omega\tau}^*(\mathbf{h}_{\tau\sigma}) = \mathbf{h}_{\omega\sigma}$ for every triple $\omega\subset\tau\subset\sigma$.\footnote{In fact, the vanishing of the obstruction class corresponds exactly to the validity of the cocycle condition.} The $\cu{L}_{\tau}$'s therefore glue to give an ample line bundle $\cu{L}$ on $\centerfiber{0}$; in particular, the degenerate Calabi--Yau $\centerfiber{0} = \centerfiber{0}(B,\pdecomp,s)$ is projective. Sections of $\cu{L}$ correspond to the lattice points $B_\inte \subset B$. More precisely, given $m\in B_\inte$, there is a unique $\tau \in \pdecomp$ such that $m \in \reint(\tau)$, and this determines a section $\vartheta_{m,\tau}$ of $\cu{L}_{\tau}$ by toric geometry. This section extends uniquely to a section $\vartheta_{m}$ over each $\sigma \supset \tau$ such that $\mathbf{h}_{\tau\sigma}(\vartheta_m) = \vartheta_{m,\tau}$. Further extending $\vartheta_m$ by $0$ to other cells gives a section of $\cu{L}$ corresponding to $m$, called a \emph{($0^{\text{th}}$-order) theta function}. Now for a vertex $v \in \pdecomp^{[0]}$, we can trivialize $\cu{L}$ over $V(v)$ using $\vartheta_{v}$ as the holomorphic frame. Then, for $m$ lying in a cell $\sigma$ that contains $v$, $\vartheta_{m}$ is of the form $g \vartheta_v$, where $g$ is a constant multiple of $z^{m}$. Under the above projectivity assumption, one can define a \emph{generalized moment map} \begin{equation} \moment \colon \centerfiber{0} \rightarrow B \end{equation} following \cite[Prop. 2.1]{ruddat2019period}: First of all, the theta functions $\{\vartheta_m\}_{m \in B_\inte}$ define an embedding $\Phi \colon \centerfiber{0}\hookrightarrow \bb{P}^{N}$, where $N + 1 = \# B_{\inte}$. Restricting to each closed toric stratum $\centerfiber{0}_{\tau} \subset \centerfiber{0}$, the only non-zero theta functions are those corresponding to $m \in B_{\inte} \cap \tau$. Also, there is an embedding $\mathfrak{j}_{\tau}\colon\mathtt{T}_{\tau} := \tanpoly_{\tau,\real}^*/\tanpoly_{\tau,\inte}^* \hookrightarrow \mathtt{U}(1)^{N}$ of real tori such that the composition $\Phi_{\tau}\colon\centerfiber{0}_{\tau} \rightarrow \bb{P}^N$ of $\Phi$ with the inclusion $\centerfiber{0}_{\tau} \hookrightarrow \centerfiber{0}$ is equivariant.
The map $\moment$ is then defined by setting \begin{equation}\label{eqn:moment_map_equation_on_maximal_cell} \moment|_{\centerfiber{0}_{\tau}} (z):= \frac{1}{\sum_{m \in B_{\inte} \cap \tau} |\vartheta_m(z) |^2} \sum_{m \in B_{\inte} \cap \tau} |\vartheta_m(z)|^2 \cdot m, \end{equation} which can be understood as a composition of maps $$ \xymatrix{ \centerfiber{0}_{\tau} \ar[r]^{\Phi_{\tau}}& \bb{P}^N \ar[r]^{\moment_{\bb{P}}} & (\real^{N})^* \ar[r]^{d \mathfrak{j}^*_{\tau}} & \tanpoly_{\tau,\real}, } $$ where $\moment_{\bb{P}}$ is the standard moment map for $\bb{P}^{N}$ and $d\mathfrak{j}_{\tau}\colon \tanpoly_{\tau,\real}^* \rightarrow \real^{N}$ is the Lie algebra homomorphism induced by $\mathfrak{j}_{\tau}\colon\mathtt{T}_{\tau} \rightarrow \mathtt{U}(1)^N$. Fixing a vertex $v \in \pdecomp^{[0]}$, we can naturally embed $\tanpoly_{\tau,\real} \hookrightarrow T_{v,\real}$ for all $\tau$ containing $v$. Furthermore, we can patch the $d\mathfrak{j}^*_{\tau}$'s into a linear map $d\mathfrak{j}^* \colon (\real^{N})^* \rightarrow T_{v,\real}$ so that $\moment_{\tau} = d\mathfrak{j}^* \circ \moment_{\bb{P}} \circ \Phi_{\tau}$ for each $\tau$ which contains $v$. In particular, on the local chart $V(\tau)=\spec(\comp[\tau^{-1}\Sigma_v])$ associated with $v \in \tau$, we have the local description $\moment|_{V(\tau)} = d \mathfrak{j}^*\circ \moment_{\bb{P}}\circ \Phi|_{V(\tau)}$ of the generalized moment map $\moment$. We consider the \emph{amoeba} $\amoeba:= \moment(Z)$. As $\centerfiber{0}_{\tau} \cap Z = \bigcup_{i=1}^p Z^{\tau}_i$, where $Z^{\tau}_i$ is the zero set of a section of $\varkappa_{\tau,i}^*(\cu{O}(1))$ (see the discussion right after equation \eqref{eqn:i_th_map_to_toric_variety_of_monodromy}), we can see that $\amoeba \cap \tau = \bigcup_{i=1}^{p} \moment_{\tau}(Z^{\tau}_i)$ is a union of amoebas $\amoeba^{\tau}_i:= \moment_{\tau}(Z^{\tau}_i)$. It was shown in \cite{ruddat2019period} that the affine structure defined right after Definition \ref{def:integral_tropical_manifold} extends to $B \setminus \amoeba$. \subsection{Construction of charts on $B$}\label{subsubsec:charts_on_B} For any $\tau \in \pdecomp$, we have $$\moment(V(\tau)) = \bigcup_{\tau \subset \omega } \reint(\omega) =: W(\tau).$$ For later purposes, we would like to relate sufficiently small open convex subsets $W \subset W(\tau)$ with \emph{Stein} (or \emph{strongly $1$-completed}, as defined in \cite{demailly1997complex}) open subsets $U \subset V(\tau)$. To do so, we need to introduce a specific collection of (non-affine) charts on $B$. Recall that there are natural maps $\tanpoly_{\tau} \hookrightarrow \tau^{-1} \Sigma_v $ and $\tau^{-1} \Sigma_v \twoheadrightarrow \Sigma_{\tau}$. By choosing a piecewise linear splitting $\mathsf{split}_\tau\colon \Sigma_{\tau} \rightarrow \tau^{-1} \Sigma_v$, we have an identification of monoids $\tau^{-1} \Sigma_v \cong \Sigma_{\tau} \times \tanpoly_{\tau}$, which induces the biholomorphism $$V(\tau) = \spec(\comp[\tau^{-1} \Sigma_v]) \cong \spec(\comp[\tanpoly_{\tau}]) \times \spec(\comp[\Sigma_{\tau}]),$$ where $\tanpoly_{\tau,\comp^*}^*:=\spec(\comp[\tanpoly_{\tau}]) \cong \tanpoly_{\tau}^* \otimes_\inte \comp^* \cong (\comp^{*})^l$ is a complex torus. Fixing a set of generators $\{m_i \}_{i \in \mathtt{B}_{\tau}}$ of the monoid $\Sigma_{\tau}$, which is not necessarily a minimal set, we can define an embedding $\spec(\comp[\Sigma_{\tau}])\hookrightarrow \comp^{|\mathtt{B}_{\tau}|}$ as an analytic subset using the functions $z^{m_i}$'s. 
We consider the real torus $\mathtt{T}_{\tau,\perp} := \norpoly_{\tau,\real}^*/\norpoly_{\tau}^* \cong \mathtt{U}(1)^{n - l}$ and its action on $\spec(\comp[\Sigma_{\tau}])$ defined by $t\cdot z^{m} = e^{2\pi i (t,m)} z^m$, together with an embedding $\mathtt{T}_{\tau,\perp} \hookrightarrow \mathtt{U}(1)^{|\mathtt{B}_{\tau}|}$ of real tori via $t\mapsto (e^{2\pi i (t,m_i)})_{i \in \mathtt{B}_\tau}$, so that $\spec(\comp[\Sigma_{\tau}])\hookrightarrow \comp^{|\mathtt{B}_{\tau}|}$ is $\mathtt{T}_{\tau,\perp}$-equivariant. We consider the moment map $\hat{\moment}_{\tau} \colon \spec(\comp[\Sigma_{\tau}]) \rightarrow \norpoly_{\tau,\real}$ defined by \begin{equation}\label{eqn:local_moment_map} \hat{\moment}_{\tau} := \sum_{i \in \mathtt{B}_{\tau}} \half |z^{m_i}|^2 \cdot m_i, \end{equation} which is obtained by composing the standard moment map $\comp^{|\mathtt{B}_{\tau}|} \rightarrow \real^{|\mathtt{B}_{\tau}|}_{\geq 0}$, $(z_i)_{i\in \mathtt{B}_{\tau}} \mapsto (\half |z_i|^2)_{i\in \mathtt{B}_{\tau}}$ with the projection $\real^{|\mathtt{B}_{\tau}|} \rightarrow \norpoly_{\tau,\real}$, $e_i \mapsto m_i$. By \cite[\S 4.2]{Fulton_toric_book}, $\hat{\mu}_{\tau}$ induces a homeomorphism between the quotient $\spec(\comp[\Sigma_{\tau}])/\mathtt{T}_{\tau,\perp}$ and $\norpoly_{\tau,\real}$. Taking product with the log map $\log\colon \tanpoly_{\tau,\comp^*}^* \rightarrow \tanpoly_{\tau,\real}^*$ (which is induced from the standard log map $\log\colon \comp^* \rightarrow \real$ defined by $\log(e^{2\pi (x+i\theta)}) = x$), we obtain a map $\moment_{\tau} := (\log, \hat{\moment}_{\tau}) \colon V(\tau) \rightarrow \tanpoly_{\tau,\real}^* \times \norpoly_{\tau,\real}$,\footnote{It depends on the choices of the splitting $\mathsf{split}_{\tau}\colon \Sigma_{\tau} \rightarrow \tau^{-1} \Sigma_v$ and the generators $\{m_i \}_i$, but we omit these dependencies from our notations.} and the following diagram \begin{equation}\label{eqn:local_chart_for_stein_subsets} \xymatrix@1{ & V(\tau) \ar[d]^{\moment} \ar[dl]_{\moment_{\tau}}\\ \tanpoly_{\tau,\real}^* \times \norpoly_{\tau,\real} \ar[r]^{\Upsilon_{\tau}} & W(\tau), } \end{equation} where $\Upsilon_{\tau}$ is a homeomorphism which serves as a chart. The homeomorphism $\Upsilon_{\tau}$ exists because if we fix a vertex $v \in \tau$, then we can equip $V(\tau)$ with an action by the real torus $\mathtt{T}^{n} := T^*_{v, \mathbb{R}} / T^*_v$ such that both $\moment$ and $\moment_\tau$ induce homeomorphisms from the quotient $V(\tau)/\mathtt{T}^{n}$ onto the images. The restriction of $\Upsilon_{\tau}$ to $\tanpoly_{\tau,\real}^* \times\{o\}$, where $\{o\}$ is the zero cone, is a homeomorphism onto $\reint(\tau) \subset W(\tau)$, which is nothing but (a generalized version of) the Legendre transform (see \cite[\S 4.2]{Fulton_toric_book} for the explicit formula); also, this homeomorphism is independent of the choices of the splitting $\mathsf{split}_{\tau}$ and the generators $\{m_i \}_{i \in \mathtt{B}_{\tau}}$. The dependences of the chart $\Upsilon_{\tau}$ on the choices of the splitting $\mathsf{split}_{\tau}\colon \Sigma_{\tau} \rightarrow \tau^{-1} \Sigma_v$ and the generators $\{m_i \}_i$ can be described as follows. First, if we choose another piecewise linear splitting $\widetilde{\mathsf{split}}_{\tau}\colon \Sigma_{\tau} \rightarrow \tau^{-1} \Sigma_v$, then there is a piecewise linear map $b \colon \Sigma_{\tau} \rightarrow \tanpoly_{\tau,\real}$ recording the difference between $\mathsf{split}_\tau$ and $\widetilde{\mathsf{split}}_{\tau}$. 
The two corresponding coordinate charts $\Upsilon_{\tau}$ and $\tilde{\Upsilon}_{\tau}$ are then related by a homeomorphism $\gimel$ such that $$ \gimel\left(x,\sum_{i} y_i m_i\right) = \left(x , \sum_{i}y_i e^{4\pi \langle b(m_i),x\rangle } m_i \right), $$ where $y_i= \half |z^{m_i}|^2$ for some point $z\in \spec(\comp[\Sigma_{\tau}])$ and $i$ runs through $m_i \in \sigma$, via the formula $\tilde{\Upsilon}_{\tau} = \Upsilon_{\tau} \circ \gimel$. Second, if we choose another set of generators $\tilde{m}_j$'s, then the corresponding maps $\hat{\moment}_{\tau}, \tilde{\moment}_{\tau} \colon \spec(\comp[\Sigma_{\tau}]) \rightarrow \norpoly_{\tau,\real}$ are related by a continuous map $\hat{\gimel} \colon \norpoly_{\tau,\real} \rightarrow \norpoly_{\tau,\real}$ which maps each cone $\sigma \in \Sigma_{\tau}$ back to itself. This is because both $\hat{\moment}_{\tau}, \tilde{\moment}_{\tau} $ induce a homeomorphism between $\spec(\comp[\Sigma_{\tau}])/\mathtt{T}_{\tau,\perp}$ and $\norpoly_{\tau,\real}$. Now suppose that $\omega \subset \tau$. We want to see how the charts $\Upsilon_{\omega}$, $\Upsilon_{\tau}$ can be glued together in a compatible manner. We first make a compatible choice of splittings. So we fix a vertex $v \in \omega$ and a piecewise linear splitting $\mathsf{split}_{\omega}\colon \Sigma_{\omega} \rightarrow \omega^{-1} \Sigma_{v}$. We then choose a piecewise linear splitting $\mathsf{split}_{\omega\tau}\colon \Sigma_{\tau} \rightarrow \Sigma_{\omega}$ such that $K_\tau \sigma$ is mapped into $K_\omega \sigma$ for any $\sigma \supset \tau$. Together with the natural maps $\tanpoly_{\tau}/\tanpoly_{\omega} \hookrightarrow \tau^{-1}\Sigma_{\omega}$ and $\tau^{-1}\Sigma_{\omega} \twoheadrightarrow \Sigma_{\tau}$, we obtain an isomorphism $\tau^{-1}\Sigma_{\omega} \cong (\tanpoly_{\tau}/\tanpoly_{\omega}) \times \Sigma_{\tau}$. By composing together $\mathsf{split}_{\omega\tau}\colon \Sigma_{\tau} \rightarrow \Sigma_{\omega}$, $\mathsf{split}_{\omega}\colon \Sigma_{\omega} \rightarrow \omega^{-1} \Sigma_{v}$ and the natural monoid homomorphism $ \omega^{-1} \Sigma_{v} \rightarrow \tau^{-1}\Sigma_{v}$, we get a splitting $\mathsf{split}_{\tau}\colon \Sigma_{\tau} \rightarrow \tau^{-1} \Sigma_{v}$. Using these choices of splittings, we have a biholomorphism $$\spec(\comp[\tau^{-1}\Sigma_{\omega}]) \cong (\tanpoly_{\tau}/\tanpoly_{\omega})^*\otimes_{\inte}\comp^* \times \spec(\comp[\Sigma_{\tau}])$$ which fits into the following diagram \begin{equation}\label{eqn:change_of_chart_omega_tau} \xymatrix@1{ \tanpoly_{\omega,\comp^*}^* \times \spec(\comp[\Sigma_{\omega}]) \ar[r]^{\cong} & \spec(\comp[\omega^{-1}\Sigma_v])& \\ \tanpoly_{\omega,\comp^*}^* \times \spec(\comp[\tau^{-1}\Sigma_{\omega}]) \ar@{^{(}->}[u] \ar[d]^{\cong} &\spec(\comp[\tau^{-1}\Sigma_v]) \ar[l]^{\cong} \ar[d]^{\cong} \ar@{^{(}->}[u]& \ar[l]^{s_{\omega\tau}^{-1}} \spec(\comp[\tau^{-1}\Sigma_v]) \ar[d]^{\cong} \ar@{_{(}->}[ul]_{F_s(\omega\subset \tau)}\\ (\tanpoly_{\omega}\oplus \tanpoly_{\tau}/\tanpoly_{\omega})^*\otimes_{\inte}\comp^* \times \spec(\comp[\Sigma_{\tau}])& \ar[l] \tanpoly_{\tau,\comp^*}^*\times \spec(\comp[\Sigma_{\tau}])& \ar[l]^{s_{\omega\tau}^{-1}} \tanpoly_{\tau,\comp^*}^* \times \spec(\comp[\Sigma_{\tau}]). 
} \end{equation} Here, the bottom left horizontal map is induced from a splitting $(\tanpoly_{\tau}/\tanpoly_{\omega}) \rightarrow \tanpoly_{\tau}$ obtained by composing $\tanpoly_{\tau}/\tanpoly_{\omega} \rightarrow \tau^{-1}\Sigma_{\omega}$ with the splitting $\tau^{-1}\Sigma_{\omega} \rightarrow \tau^{-1}(\omega^{-1} \Sigma_{v})$, and then identifying with the image lattice $\tanpoly_{\tau}$. The appearance of $s_{\omega\tau}$ in the diagram is due to the twisting of $V(\tau)$ by the open gluing data $(s_{\omega\tau})_{\omega\subset\tau}$ when it is glued to $V(\omega)$. We also have to make a compatible choice of the generators $\{m_i\}_{i \in \mathtt{B}_{\omega}}$ and $\{m_i\}_{i \in \mathtt{B}_{\tau}}$. First note that the restriction of $\hat{\moment}_{\omega}$ to the open subset $ \spec(\comp[\tau^{-1}\Sigma_{\omega}]) \subset \spec(\comp[\Sigma_{\omega}])$ depends only on the subcollection $\{m_i\}_{i \in \mathtt{B}_{\omega\subset \tau}}$ of $\{m_i\}_{i \in \mathtt{B}_{\omega}}$ which contains those $m_i$'s that belong to some cone $\sigma \supset \tau$. We choose the set of generators $\{\tilde{m}_i\}_{i \in \mathtt{B}_{\tau}}$ for $\Sigma_{\tau}$, with $\mathtt{B}_{\tau}=\mathtt{B}_{\omega\subset \tau}$, to be the projection of $\{m_i\}_{i \in \mathtt{B}_{\omega\subset \tau}}$ through the natural map $\tau^{-1} \Sigma_{\omega} \rightarrow \Sigma_{\tau}$. Each $m_i$ can be expressed as $m_i = \mathsf{split}_{\omega\tau}(\tilde{m}_i) + b_i$ for some $b_i \in \tanpoly_{\tau}/\tanpoly_{\omega}$, through the splitting $\mathsf{split}_{\omega\tau}\colon \Sigma_{\tau} \rightarrow \Sigma_{\omega}$. Notice that if $m_i \in K_{\omega} \tau$, then we have $\tilde{m}_i= o$ and hence $b_i \in K_{\omega}\tau $. By tracing through the biholomorphism in \eqref{eqn:change_of_chart_omega_tau} and taking either the modulus or the log map, we have a map $$\gimel \colon \tanpoly_{\omega,\real}^* \times (\tanpoly_{\tau,\real}/\tanpoly_{\omega,\real})^* \times \norpoly_{\tau,\real} \rightarrow \tanpoly_{\omega,\real}^* \times \norpoly_{\omega,\real}$$ satisfying \begin{equation}\label{eqn:affine_local_chart_change_of_strata} \gimel\left(x_1 - c_{\omega\tau,1},x_2-c_{\omega\tau,2},\sum_i y_i |s_{\omega\tau}(\mathsf{split}_{\omega\tau}(\tilde{m}_i))|^{-2} \tilde{m}_i\right) = \left(x_1,\sum_{i} y_i e^{4\pi \langle b_i,x_2 \rangle} m_i\right), \end{equation} where $y_i = \half | z^{\tilde{m}_i}|^2$. Here, $s_{\omega\tau} \in \mathrm{PM}(\tau)$ is the part of the open gluing data associated to $\omega \subset \tau$, and $c_{\omega\tau}=c_{\omega\tau,1}+c_{\omega\tau,2} \in \tanpoly_{\tau,\real}^*$ is the unique element representing the linear map $\log|s_{\omega\tau}| \colon \tanpoly_{\tau,\real} \rightarrow \real$ defined by $\log|s_{\omega\tau}|(b) = \log|s_{\omega\tau}(b)|$. For instance, the holomorphic function $z^{m_i} \in \comp[\tau^{-1}\Sigma_{\omega}]$ is identified with $z^{b_i}\cdot z^{\tilde{m}_i}$ in $(\tanpoly_{\tau}/\tanpoly_{\omega})^*\otimes_{\inte}\comp^* \times \spec(\comp[\Sigma_{\tau}])$, resulting in the expression $\sum_{i} y_i e^{4\pi \langle b_i,x_2 \rangle} m_i$ on the right hand side. We have $\Upsilon_{\tau} = \Upsilon_{\omega} \circ \gimel$, where we use the splitting $(\tanpoly_{\tau}/\tanpoly_{\omega}) \rightarrow \tanpoly_{\tau}$ to obtain an isomorphism $\tanpoly_{\omega,\real}^* \times (\tanpoly_{\tau,\real}/\tanpoly_{\omega,\real})^* \cong \tanpoly_{\tau,\real}^*$ and an identification of the domains of the two maps $\Upsilon_{\tau}$ and $\Upsilon_{\omega} \circ \gimel$. 
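As a simple illustration of the map $\hat{\moment}_{\tau}$ and the resulting chart (the specific identifications below are made only for illustration), suppose that $\norpoly_{\tau} \cong \bb{Z}$, that $\Sigma_{\tau}$ is the fan of $\bb{P}^1$, and that we choose the generators $m_1 = 1$ and $m_2 = -1$. Then $\spec(\comp[\Sigma_{\tau}]) \cong \{z_+ z_- = 0\} \subset \comp^2$, where $z_+ := z^{m_1}$ and $z_- := z^{m_2}$, the torus $\mathtt{T}_{\tau,\perp} \cong \mathtt{U}(1)$ acts by $t\cdot (z_+,z_-) = (e^{2\pi i t} z_+, e^{-2\pi i t} z_-)$, and \eqref{eqn:local_moment_map} reads $$\hat{\moment}_{\tau}(z_+,z_-) = \half\left(|z_+|^2 - |z_-|^2\right) \in \norpoly_{\tau,\real} \cong \real,$$ which indeed induces a homeomorphism $\spec(\comp[\Sigma_{\tau}])/\mathtt{T}_{\tau,\perp} \cong \norpoly_{\tau,\real}$ sending the two coordinate axes onto the two rays of $\Sigma_{\tau}$.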
\begin{lemma}\label{lem:stein_open_subset_lemma} There is a base $\mathscr{B}$ of open subsets of $B$ such that the preimage $\moment^{-1}(W)$ is Stein for any $W \in \mathscr{B}$. \end{lemma} \begin{proof} First of all, it is well-known that analytic spaces associated to affine varieties are Stein. So $V(\tau)$ is Stein for any $\tau$. Now we fix a point $x \in \reint(\tau) \subset B$. It suffices to show that there is a local base $\mathscr{B}_x$ of $x$ such that the preimage $\moment^{-1}(W)$ is Stein for each $W \in \mathscr{B}_x$. We work locally on $\moment|_{V(\tau)} \colon V(\tau) \rightarrow W(\tau)$. Consider the diagram \eqref{eqn:local_chart_for_stein_subsets} and write $\Upsilon^{-1}(x) = (\underline{x},o)$, where $o \in \norpoly_{\tau,\real}$ is the origin. By \cite[Ch. 1, Ex. 7.4]{demailly1997complex}, the preimage $\log^{-1}(W)$ under the log map $\log \colon (\comp^*)^l \rightarrow \tanpoly_{\tau,\real}^*$ is Stein for any convex $W \subset \tanpoly_{\tau,\real}^*$ which contains $\underline{x}$. Again by \cite[Ch. 1, Ex. 7.4]{demailly1997complex}, any subset $$\bigcap_{j=1}^N\{z \in \spec(\comp[\Sigma_{\tau}]) \ | \ |f_j(z)|<\epsilon \},$$ where $f_j$'s are holomorphic functions, is Stein. By taking $f_j$'s to be the functions $z^{m_j}$'s associated to the set of all non-zero generators in $\{m_j\}_{j\in \mathtt{B}_{\tau}}$ and $\epsilon$ sufficiently small, we have a subset $$W = \left\{ y \ \Big| \ y = \sum_{j} y_j m_j \text{ with } |y_j|<\frac{\epsilon^2}{2}, \ \text{where $y_j = \half |z^{m_j}|^2$ at some point $z \in \spec(\comp[\Sigma_{\tau}])$}\right\}$$ of $\norpoly_{\tau,\real}$ such that the preimage $\hat{\moment}_{\tau}^{-1}(W)$ is Stein. Therefore, we can construct a local base $\mathscr{B}_{o}$ of $o$ such that the preimage $\hat{\moment}_{\tau}^{-1}(W)$ is Stein for any $W \in \mathscr{B}_o$. Finally, since a product of Stein open subsets is Stein, we obtain our desired local base $\mathscr{B}_x$ by taking the products of these subsets. \end{proof} \subsection{The tropical singular locus $\tsing$ of $B$}\label{subsec:tropical_singular_locus} We now specify a codimension $2$ singular locus $\tsing \subset B$ of the affine structure using the charts $\Upsilon_{\tau}$ introduced in \eqref{eqn:local_chart_for_stein_subsets} for $\tau$ such that $\dim_{\real}(\tau)<n$. Given the chart $\Upsilon_{\tau}$ that maps $\tanpoly_{\tau,\real}^*$ to $\reint(\tau)$, we define the \emph{tropical singular locus} $\tsing$ by requiring that \begin{equation}\label{eqn:definition_of_tropical_singular_locus} \Upsilon_{\tau}^{-1}(\tsing \cap \reint(\tau)) = \bigcup_{\substack{\rho \in \mathscr{N}_{\tau};\\ \dim_{\real}(\rho) < \dim_{\real}(\tau)}} \big( (\reint(\rho) +c_{\tau}) \times \{o\} \big) , \end{equation} where $\mathscr{N}_{\tau} \subset \tanpoly_{\tau,\real}^*$ is the normal fan of the polytope $\tau$, and $\{o\}$ is the zero cone in $\Sigma_{\tau} \subset \norpoly_{\tau,\real}$; here, $c_{\tau} = \log|s_{v\tau}|$ is the element in $\tanpoly_{\tau,\real}^*$ representing the linear map $\log|s_{v\tau}|\colon \tanpoly_{\tau,\real} \rightarrow \real$, which is independent of the vertex $v\in \tau$. A subset of the form $\tsing_{\tau,\rho} := (\reint(\rho)+c_{\tau}) \times \{o\}$ in \eqref{eqn:definition_of_tropical_singular_locus} is called a \emph{stratum} of $\tsing$ in $\reint(\tau)$. 
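As a simple illustration of the definition: if $n = 2$ and $\tau \in \pdecomp^{[1]}$ is an edge, then $\mathscr{N}_{\tau} \subset \tanpoly_{\tau,\real}^* \cong \real$ consists of two rays and the zero cone, so the only cone of dimension less than $\dim_{\real}(\tau) = 1$ is $\{o\}$, and \eqref{eqn:definition_of_tropical_singular_locus} gives $\Upsilon_{\tau}^{-1}(\tsing \cap \reint(\tau)) = \{c_{\tau}\} \times \{o\}$; that is, $\tsing$ meets $\reint(\tau)$ in a single point determined by the open gluing data.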
The locus $\tsing$ is independent of the choices of the splittings $\mathsf{split}_{\tau}$'s and generators $\{m_i\}_{i \in \mathtt{B}_{\tau}}$ used to construct the charts $\Upsilon_{\tau}$'s. \begin{remark} Our definition of the singular locus is similar to those in \cite{Gross-Siebert-logI, gross2011real}; the only difference is that our locus is a collection of polyhedra in $\tanpoly_{\tau,\real}^*$, instead of $\reint(\tau)$. Note that $\tanpoly_{\tau,\real}^*$ is homeomorphic to $\reint(\tau)$ by the Legendre transform. This modification is needed for our construction of the contraction map $\mathscr{C}$ below, where we need to consider the convex open subsets in $\tanpoly_{\tau,\real}^*$, instead of those in $\reint(\tau)$.\end{remark} \begin{lemma}\label{lem:stratification_structure_on_singular_locus} For $\omega \subset \tau$ and a stratum $\tsing_{\tau,\rho}$ in $\reint(\tau)$, the intersection of the closure $\overline{\tsing_{\tau,\rho}}$ in $B$ with $\reint(\omega)$ is a union of strata of $\tsing$ in $\reint(\omega)$. \end{lemma} \begin{proof} We consider the map $\gimel$ described in equation \eqref{eqn:affine_local_chart_change_of_strata} and take a neighborhood $W = W_1 \times \norpoly_{\omega,\real}$ of a point $(\underline{x},o)$ in $ \tanpoly_{\omega,\real}^* \times \norpoly_{\omega,\real}$, where $W_1$ is some sufficiently small neighborhood of $\underline{x}$ in $\tanpoly_{\omega,\real}^*$. By shrinking $W$ if necessary, we may assume that $\gimel^{-1}(W) = W_1 \times (a -\reint(K_{\omega}\tau^{\vee})) \times \norpoly_{\tau,\real}$, where $a$ is some element in $-\reint(K_{\omega}\tau^{\vee}) \subset (\tanpoly_{\tau,\real}/\tanpoly_{\omega,\real})^*$. Writing $c_{\tau} = c_{\tau,1}+c_{\tau,2}$, where $c_{\tau,1},c_{\tau,2}$ are the components of $c_{\tau}$ according to the chosen decomposition $\tanpoly_{\tau,\real}^* \cong \tanpoly_{\omega,\real}^* \times (\tanpoly_{\tau,\real}/\tanpoly_{\omega,\real})^*$, the equality $c_{\tau,1} + c_{\omega\tau,1} = c_{\omega}$ follows from the compatibility of the open gluing data in Definition \ref{def:open_gluing_data}. If $\tsing_{\tau,\rho}$ intersects the open subset $\gimel^{-1}(W)$, then $\rho \subset \tanpoly_{\tau,\real}^*$ must be the dual cone of some face $\rho^{\vee} \subset \omega \subset \tau$ in $\tanpoly_{\tau,\real}^*$. The intersection is of the form $$(\reint(\underline{\rho})+c_{\tau,1}) \times (a-\reint(K_{\omega}\tau^{\vee})) \times \{o\}$$ for some $\underline{\rho} \in \mathscr{N}_{\omega}$ ($c_{\tau,2}$ is absorbed by $a$), where $\underline{\rho} \subset \tanpoly_{\omega,\real}^*$ is the dual cone of $\rho^{\vee}$ in $\tanpoly_{\omega,\real}^*$, and hence we have $W \cap \tsing_{\tau,\rho} =\gimel((\reint(\underline{\rho})+c_{\tau,1}) \times (a-\reint(K_{\omega}\tau^{\vee})) \times \{o\})$. Therefore, the intersection of $\overline{\tsing_{\tau,\rho}}$ with $\tanpoly_{\omega,\real}^*$ in the open subset $W \subset \tanpoly_{\omega,\real}^* \times \norpoly_{\omega,\real}$ is given by $(\underline{\rho}+c_{\omega}) \times \{o\}$, which is a union of strata. \end{proof} The tropical singular locus $\tsing$ is naturally equipped with a stratification, where a stratum is given by $\tsing_{\tau,\rho}$ for some cone $\rho \subset \mathscr{N}_{\tau}$ of $\dim_{\real}(\rho) < \dim_{\real}(\tau)$ for some $\tau \in \pdecomp^{[<n]}$. We use the notation $\tsing^{[k]}$ to denote the set of $k$-dimensional strata of $\tsing$. 
The affine structure on $\bigcup_{v \in \pdecomp^{[0]}} W_v \cup \bigcup_{\sigma \in \pdecomp^{[n]}} \reint(\sigma)$ introduced right after Definition \ref{def:integral_tropical_manifold} in \S \ref{subsec:integral_affine_manifolds} can be naturally extended to $B \setminus \tsing$ as in \cite{gross2011real}. If we consider $\omega \subset \tau \subset \rho$ for some $\omega \in \pdecomp^{[1]}$ and $\rho \in \pdecomp^{[n-1]}$, the corresponding monodromy transformation $T_{\gamma}$ is non-trivial if and only if $\omega \in \Omega_i$ and $\rho \in R_i$ for some common index $i$, where the subsets $\Omega_i$ and $R_i$ are as in Definition \ref{def:simplicity}. Therefore, the part of the singular locus $\tsing$ lying in $\Upsilon_{\tau}^{-1}(\reint(\tau)) = \tanpoly_{\tau,\real}^* \times \{o\}$ is determined by the subsets $\Omega_i$'s. We may further define the \emph{essential singular locus} $\tsing_e$ to include only those strata contained in $\tsing^{[n-2]}$ with non-trivial monodromy around them. We observe that the affine structure can be further extended to $B \setminus \tsing_e$. More explicitly, we have a projection $$\mathtt{i}_{\tau} = \mathtt{i}_{\tau,1} \oplus \cdots \oplus \mathtt{i}_{\tau,p} \colon \tanpoly_{\tau}^* \rightarrow \tanpoly_{\Delta_1(\tau)}^* \oplus \cdots \oplus \tanpoly_{\Delta_p(\tau)}^*,$$ in which $\tanpoly_{\Delta_1(\tau)}^* \oplus \cdots \oplus \tanpoly_{\Delta_p(\tau)}^*$ can be treated as a direct summand as in \S \ref{subsec:monodromy_data}. So we can consider the pullback of the fan $\mathscr{N}_{\Delta_1(\tau)} \times \cdots \times \mathscr{N}_{\Delta_p(\tau)}$ via the map $\mathtt{i}_{\tau}$, and realize $\mathscr{N}_{\tau} \subset \tanpoly_{\tau,\real}^*$ as a refinement of this fan. Similarly, we have $\check{\mathtt{i}}_{\tau} =\check{\mathtt{i}}_{\tau,1} \oplus \cdots \oplus \check{\mathtt{i}}_{\tau,p} \colon \norpoly_{\tau}^* \rightarrow \tanpoly^*_{\check{\Delta}_1(\tau)} \oplus \cdots \oplus \tanpoly^*_{\check{\Delta}_p(\tau)}$ and the fan $\mathscr{N}_{\check{\Delta}_1(\tau)} \times \cdots \times \mathscr{N}_{\check{\Delta}_p(\tau)}$ in $\norpoly_{\tau,\real}^*$ under pullback via $\check{\mathtt{i}}_{\tau}$. The intersection $\tsing_e \cap \reint(\tau)$ can be described by replacing the condition $\rho \in \mathscr{N}_{\tau}$ in \eqref{eqn:definition_of_tropical_singular_locus} with the condition $\rho \in \mathtt{i}_{\tau}^{-1}(\mathscr{N}_{\Delta_1(\tau)} \times \cdots \times \mathscr{N}_{\Delta_p(\tau)})$; a stratum of $\tsing_e$ in $\reint(\tau)$ is then denoted by $\tsing_{e,\tau,\rho}$. This gives a stratification on $\tsing_e$. \begin{lemma}\label{lem:stratification_structure_on_essential_singular_locus} For $\omega \subset \tau$ and a stratum $\tsing_{e,\tau,\rho}$ in $\reint(\tau)$, the intersection of the closure $\overline{\tsing_{e,\tau,\rho}}$ in $B$ with $\reint(\omega)$ is a union of strata of $\tsing_e$ in $\reint(\omega)$. \end{lemma} \begin{proof} Given $\omega \subset \tau$, we take a change of coordinate map $\gimel$ together with a neighborhood $W$ as in the proof of Lemma \ref{lem:stratification_structure_on_singular_locus}. We need to show that $W \cap \tsing_{\tau,\rho} =\gimel((\reint(\rho)+c_{\tau,1}) \times (a-\reint(K_{\omega}\tau^{\vee})) \times \{o\})$ for some cone $\rho \in \mathtt{i}_{\tau}^{-1}(\prod_{i=1}^p \mathscr{N}_{\Delta_i(\tau)})$. Let $\Delta_1(\tau),\dots,\Delta_r(\tau),\dots,\Delta_p(\tau)$ be the monodromy polytopes of $\tau$, and $\Delta_1(\omega), \dots, \Delta_r(\omega),\dots,\Delta_{p'}(\omega)$ be those of $\omega$ such that $\Delta_j(\omega)$ is the face of $\Delta_j(\tau)$ parallel to $\tanpoly_{\omega}$ for $j=1,\dots,r$.
Then we have direct sum decompositions $\tanpoly_{\Delta_{1}(\tau)}\oplus \cdots \oplus \tanpoly_{\Delta_{p}(\tau)} \oplus A_{\tau} = \tanpoly_{\tau}$ and $ \tanpoly_{\Delta_{1}(\omega)}\oplus \cdots \oplus \tanpoly_{\Delta_{p'}(\omega)} \oplus A_{\omega} = \tanpoly_{\omega}$. We can further choose an inclusion $$ \mathsf{a}_{\omega\tau}\colon \tanpoly_{\Delta_{r+1}(\omega)}\oplus \cdots \oplus \tanpoly_{\Delta_{p'}(\omega)} \oplus A_{\omega} \hookrightarrow A_{\tau}; $$ in other words, for every $j=r+1,\dots,p'$, no $f \in R_j \subset \pdecomp_{n-1}(\omega)$ from Definition \ref{def:simplicity} contains $\tau$. For every $j=r+1,\dots,p$ and any $f \in R_j \subset \pdecomp_{n-1}(\tau)$, the element $m^{f}_{v_1v_2}$ is zero for any two vertices $v_1,v_2$ of $\omega$. We have the identification $$ \tanpoly_{\tau}/\tanpoly_{\omega} = \bigoplus_{j=1}^r (\tanpoly_{\Delta_j(\tau)}/\tanpoly_{\Delta_j(\omega)}) \oplus \bigoplus_{l=r+1}^{p} \tanpoly_{\Delta_l(\tau)} \oplus \mathrm{coker}(\mathsf{a}_{\omega\tau}). $$ As a result, any cone $\mathtt{i}^{-1}_{\tau}(\prod_{j=1}^{p} \rho_j) \in \mathtt{i}^{-1}_{\tau}\big(\prod_{i=1}^p \mathscr{N}_{\Delta_i(\tau)} \big)$ of codimension greater than $0$ intersecting $\gimel^{-1}(W)$ will be a pullback of a cone under the projection to $\tanpoly_{\Delta_1(\tau),\real}^* \oplus \cdots \oplus \tanpoly_{\Delta_r(\tau),\real}^*$. Considering the commutative diagram of projection maps \begin{equation}\label{eqn:compatibility_of_essential_singular_locus} \xymatrix@1{ \tanpoly_{\omega,\real}^* \ar[d]^{\mathtt{p}_{\omega}} & & \ar[ll]^{\mathtt{p}_{\omega\subset \tau}} \tanpoly_{\tau,\real}^* \ar[d]^{\mathtt{p}_{\tau}}\\ \prod_{j=1}^r \tanpoly_{\Delta_j(\omega),\real}^*& & \ar[ll]^{\Pi_{\omega\subset \tau}} \prod_{j=1}^r \tanpoly_{\Delta_j(\tau),\real}^*, } \end{equation} we see that, in the open subset $\gimel^{-1}(W)$, every cone of codimension greater than $0$ coming from pullback via $\mathtt{p}_\tau$ is a further pullback via $\Pi_{\omega\subset\tau} \circ \mathtt{p}_{\tau}$. As a consequence, it must be of the form $\gimel((\reint(\rho)+c_{\tau,1}) \times (a-\reint(K_{\omega}\tau^{\vee})) \times \{o\})$ in $W$. \end{proof} \subsubsection{Contraction of $\amoeba$ to $\tsing$} We would like to relate the amoeba $\amoeba = \moment(Z)$ with the tropical singular locus $\tsing$ introduced above. \begin{assum}\label{assum:existence_of_contraction} We assume the existence of a surjective \emph{contraction map} $\mathscr{C} \colon B \rightarrow B$ which is isotopic to the identity and satisfies the following conditions: \begin{enumerate} \item $\mathscr{C}^{-1}(B \setminus \tsing) \subset (B \setminus \tsing)$ and the restriction $\mathscr{C}|_{\mathscr{C}^{-1}(B \setminus \tsing)}\colon \mathscr{C}^{-1}(B \setminus \tsing) \to B \setminus \tsing$ is a homeomorphism. \item $\mathscr{C}$ maps $\amoeba$ into the essential singular locus $\tsing_e$. \item For each $\tau \in \pdecomp$, we have $\mathscr{C}^{-1}(\reint(\tau)) \subset \reint(\tau)$. \item For each $\tau \in \pdecomp$ with $0<\dim_{\real}(\tau)<n$, we have a decomposition $$\tau \cap \mathscr{C}^{-1}(B\setminus \tsing) = \bigcup_{v \in \tau^{[0]}} \tau_{v}$$ of the intersection $\tau \cap \mathscr{C}^{-1}(B\setminus \tsing)$ into connected components $\tau_{v}$'s, where each $\tau_{v}$ is contractible and is the unique component containing the vertex $v \in \tau$.
\item For each $\tau \in \pdecomp$ and each point $x \in \reint(\tau) \cap \tsing$, $\mathscr{C}^{-1}(x) \subset \reint(\tau)$ is a connected compact subset. \item For each $\tau \in \pdecomp$ and each point $x \in \reint(\tau) \cap \tsing$, there exists a local base $\mathscr{B}_x$ around $x$ such that $(\mathscr{C} \circ \moment)^{-1}(W) \subset V(\tau)$ is Stein for every $W \in \mathscr{B}_x$, and for any open subset $U \supset \mathscr{C}^{-1}(x)$, we have $\mathscr{C}^{-1}(W) \subset U$ for sufficiently small $W \in \mathscr{B}_x$. \end{enumerate} \end{assum} Similar contraction maps appear in \cite[Rem. 2.4]{ruddat2019period} (see also \cite{Ruddat-Zharkov21, Ruddat-Zharkov22}). When $\dim_{\real}(B) = 2$, we can take $\mathscr{C} = \mathrm{id}$ because from \cite[Ex. 1.62]{Gross-Siebert-logI}, we see that $Z$ is a finite collection of points, with at most one point lying in each closed stratum $\centerfiber{0}_{\tau}$, and the amoeba $\amoeba$ is exactly the image of $Z$ under the generalized moment map $\moment$. When $\dim_{\real}(B) = 3$, the amoeba $\amoeba$ can possibly be of codimension one and we need to construct a contraction map as shown in Figure \ref{fig:contraction map}. \begin{center} \begin{figure}[h] \includegraphics[scale=0.16]{contraction.png} \caption{Contraction map $\mathscr{C}$ when $\dim_{\real}(B) = 3$}\label{fig:contraction map} \end{figure} \end{center} For $\dim_{\real}(\tau) = 1$, again from \cite[Ex. 1.62]{Gross-Siebert-logI}, we see that if $\amoeba \cap \reint(\tau) \neq \emptyset$, then there is exactly one pair of subsets $\Omega_1$ and $R_1$ (i.e. $p = 1$), and $\Delta_{1}(\tau)$ is a line segment of affine length $1$. In this case, $Z \cap \centerfiber{0}_{\tau}$ consists of only one point, given by the intersection of the zero locus of $s_{v\tau}^{-1}(f_{v\rho})$ with $\comp^* \cong V_{\tau}(\tau) \subset V(\tau)$. Taking $m$ to be the primitive vector in $\tanpoly_{\tau}$ starting at $v$ that points into $\tau$, we can write $s_{v\tau}^{-1}(f_{v\rho}) = 1 + s_{v\tau}^{-1}(m) z^{m}$. Applying the log map $\log \colon \comp^* \rightarrow \real$, we see that $\amoeba \cap \reint(\tau)$ is the single point corresponding to $c_{\tau}$. Therefore, for an edge $\tau \in \pdecomp^{[1]}$, we can define $\mathscr{C}$ to be the identity on $\tau$. \begin{center} \begin{figure}[h] \includegraphics[scale=0.6]{contraction_to_tropical.png} \caption{Contraction at $\rho$}\label{fig:contraction_at_rho} \end{figure} \end{center} On a codimension one cell $\rho$ such that $\reint(\rho) \cap \amoeba \neq \emptyset$ (see Figure \ref{fig:contraction_at_rho}), we consider the log map $\log \colon \spec(\comp[\tanpoly_{\rho}]) \cong (\comp^*)^2 \rightarrow \tanpoly_{\rho,\real}^* \cong \real^2$, and take a sufficiently large polytope $\mathtt{P}$ (colored purple in Figure \ref{fig:contraction_at_rho}) so that $\amoeba \setminus \reint(\mathtt{P})$ is a disjoint union of legs. We first contract each leg to the tropical singular locus (colored blue in Figure \ref{fig:contraction_at_rho}) along the normal direction to the tropical singular locus. Next, we contract the polytope $\mathtt{P}$ to the $0$-dimensional stratum of $\tsing_e$. Notice that the restriction of $\mathscr{C}$ to the tropical singular locus $\tsing$ is not the identity but rather a contraction onto itself. Once the contraction map is constructed for all codimension one cells $\rho$, we can then extend it continuously to the whole of $B$ so that it is a diffeomorphism on $\reint(\sigma)$ for every maximal cell $\sigma$.
The map is chosen such that the preimage $\mathscr{C}^{-1}(x)$ for every point $x \in \reint(\rho)$ is a convex polytope in $\real^2$. Therefore, given any open subset $U\subset \real^2$ which contains $\mathscr{C}^{-1}(x)$, we can find some convex open neighborhood $W_1 \subset U$ of $\mathscr{C}^{-1}(x)$ giving the Stein open subset $\log^{-1}(W_1) \subset (\comp^*)^2$. By taking $W = W_1 \times W_2$ in the chart $\tanpoly_{\rho,\real}^* \times \norpoly_{\rho,\real}$ as in the proof of Lemma \ref{lem:stein_open_subset_lemma}, we have the open subset $W$ that satisfies condition (6) in Assumption \ref{assum:existence_of_contraction}. In general, we need to construct $\mathscr{C}|_{\reint(\tau)}$ inductively for each $\tau \in \pdecomp$, so that the preimage $\mathscr{C}^{-1}(x) \subset \reint(\tau)$ is convex in the chart $\tanpoly_{\tau,\real}^*\cong \reint(\tau)$ and the codimension one amoeba $\amoeba$ is contracted to the codimension 2 tropical singular locus $\tsing_e$. The reason for introducing such a contraction map is that we can modify the generalized moment map $\moment$ to one which is more closely related to tropical geometry: \begin{definition}\label{def:modified_moment_map} We call the composition $\modmap := \mathscr{C} \circ \moment \colon \centerfiber{0} \rightarrow B$ the \emph{modified moment map}. \end{definition} One immediate consequence of property $(6)$ in Assumption \ref{assum:existence_of_contraction} is that we have $R\modmap_* (\mathcal{F}) = \modmap_*(\mathcal{F})$ for any coherent sheaf $\mathcal{F}$ on $\centerfiber{0}$, thanks to Lemma \ref{lem:stein_open_subset_lemma} and Cartan's Theorem B: \begin{theorem}[Cartan's Theorem B \cite{cartan1957varietes}; see e.g. Ch. IX, Cor. 4.11 in \cite{demailly1997complex}]\label{thm:cartan_theorem_B} For any coherent sheaf $\mathcal{F}$ over a Stein space $U$, we have $ H^{>0}(U,\mathcal{F}) = 0. $ \end{theorem} \subsubsection{Monodromy invariant differential forms on $B$}\label{subsec:derham_for_B} Outside of the essential singular locus $\tsing_e$, we have a nice integral affine manifold $B \setminus \tsing_e$, on which we can talk about the sheaf $\mdga^*$ of ($\real$-valued) de Rham differential forms. But in fact, we can extend its definition to $\tsing_e$ as well using \emph{monodromy invariant differential forms}. We consider the inclusion $\tinclude \colon B_0 :=B \setminus \tsing_e \rightarrow B$ and the natural exact sequence \begin{equation}\label{eqn:affine_function_cotangent_lattice_relation} 0 \rightarrow \underline{\inte} \rightarrow \cu{A}\mathit{ff} \rightarrow \tinclude_* \tanpoly_{B_0}^* \rightarrow 0, \end{equation} where $\tanpoly_{B_0}^*$ denotes the sheaf of integral cotangent vectors on $B_0$. For any $\tau \in \pdecomp$, the stalk $(\tinclude_*\tanpoly_{B_0}^*)_x$ at a point $x \in \reint(\tau) \cap \tsing_e$ can be described using the chart $\Upsilon_{\tau}$ in \eqref{eqn:local_chart_for_stein_subsets}. Using the description in \S \ref{subsec:tropical_singular_locus}, we have $x \in \tsing_{e,\tau,\rho} = \reint(\rho) \times \{o\}$ for some $\rho \in \mathtt{i}_{\tau}^{-1}(\mathscr{N}_{\Delta_1(\tau)} \times \cdots \times \mathscr{N}_{\Delta_p(\tau)})$. Taking a vertex $v \in \tau$, we can consider the monodromy transformations $T_{\gamma}$'s around the strata $\tsing_{e,\eta,\rho}$'s that contain $x$ in their closures. We can identify the stalk $\tinclude_*(\tanpoly_{B_0}^*)_{x}$ with the subset of elements of $T_{v}^*$ that are invariant under all such monodromy transformations.
Since $\rho \subset \tanpoly_{\tau,\real}^*$ is a cone, we have $\tanpoly_{\rho} \subset \tanpoly_{\tau}^*$. Using the natural projection map $\pi_{v\tau}\colon T_{v}^* \rightarrow \tanpoly_{\tau}^*$, we have the identification $\tinclude_*(\tanpoly_{B_0}^*)_{x} \cong \pi_{v\tau}^{-1}(\tanpoly_{\rho})$. There is a direct sum decomposition $\tinclude_*(\tanpoly_{B_0}^*)_{x} = \tanpoly_{\rho} \oplus \norpoly_{\tau}^*$, depending on a decomposition $T_{v} = \tanpoly_{\tau} \oplus \norpoly_{\tau}$. This gives the map \begin{equation}\label{eqn:monodromy_invariant_affine_functions_near_x_o} \mathtt{x} \colon U_{x} \rightarrow \pi_{v\tau}^{-1}(\tanpoly_{\rho})_{\real}^* \end{equation} in a sufficiently small neighborhood $U_{x}$, locally defined up to a translation in $\pi_{v\tau}^{-1}(\tanpoly_{\rho})^*_{\real}$. We need to describe the compatibility between the map associated to a point $x \in \tsing_{e,\omega,\rho}$ and that to a point $\tilde{x} \in \tsing_{e,\tau,\tilde{\rho}}$ such that $\tsing_{e,\omega,\rho} \subset \overline{\tsing_{e,\tau,\tilde{\rho}}}$. The first case is when $\omega =\tau$. We let $\tilde{x} \in \reint(\tilde{\rho}) \times \{o\} \cap U_{x}$ for some $\rho \subset \tilde{\rho}$. Then, after choosing suitable translations in $\pi_{v\tau}^{-1}(\tanpoly_{\rho})^*_{\real}$ for the maps $\mathtt{x}$ and $\tilde{\mathtt{x}}$, we have the following commutative diagram: \begin{equation}\label{eqn:monodromy_invariant_differential_form_change_of_chart} \xymatrix@1{ U_{\tilde{x}} \cap U_{x} \ar[d] \ar[rr]^{\tilde{\mathtt{x}}} & & \pi_{v\tau}^{-1}(\tanpoly_{\tilde{\rho}})^*_{\real} \ar[d]^{\mathtt{p}}\\ U_{x} \ar[rr]^{\mathtt{x}} & & \pi_{v\tau}^{-1}(\tanpoly_{\rho})^*_{\real}.} \end{equation} The second case is when $\omega \subsetneq \tau$. Making use of the change of charts $\gimel$ in equation \eqref{eqn:affine_local_chart_change_of_strata}, and the description in the proof of Lemma \ref{lem:stratification_structure_on_essential_singular_locus}, we write $$\tilde{x} \in \reint(\tilde{\rho}) \times \{o\}$$ for some cone $\tilde{\rho} = \mathtt{i}_{\tau}^{-1}(\prod_{j=1}^p \tilde{\rho}_j) \in \mathtt{i}_{\tau}^{-1}\big(\prod_{j=1}^p \tanpoly_{\Delta_{j}(\tau)}^* \big)$ of positive codimension. In $\gimel^{-1}(W)$, we may assume $\tilde{\rho}$ is the pullback of a cone $\breve{\rho}$ via $\Pi_{\omega\subset\tau} \circ \mathtt{p}_{\tau}$ as in equation \eqref{eqn:compatibility_of_essential_singular_locus}. Since $\tsing_{e,\omega,\rho} \subset \overline{\tsing_{e,\tau,\tilde{\rho}}}$, we have $\rho \subset \mathtt{p}_{\omega}^{-1}(\breve{\rho})$ and hence $\mathtt{p}_{\omega\subset \tau}^{-1}(\tanpoly_{\rho}) \subset \tanpoly_{\tilde{\rho}}$. Therefore, from $\mathtt{p}_{\omega\subset \tau} \circ \pi_{v\tau} = \pi_{v\omega}$, we obtain $\pi_{v\omega}^{-1}(\tanpoly_{\rho}) \subset \pi_{v\tau}^{-1}(\tanpoly_{\tilde{\rho}})$, inducing the map $\mathtt{p}\colon \pi_{v\tau}^{-1}(\tanpoly_{\tilde{\rho}})_{\real}^* \rightarrow \pi_{v\omega}^{-1}(\tanpoly_{\rho})_{\real}^*$. As a result, we still have the commutative diagram \eqref{eqn:monodromy_invariant_differential_form_change_of_chart} for a point $\tilde{x}$ sufficiently close to $x$. 
\begin{definition}\label{def:monodromy_invariant_differential_form_near_x_o} Given $x \in \tsing_e$ as above, the stalk of $\mdga^*$ at $x$ is defined as the stalk $\mdga^*_{x}:= (\mathtt{x}^{-1}\mdga^*)_{x}$ of the pullback of the sheaf of smooth de Rham forms on $\pi_{v\tau}^{-1}(\tanpoly_{\rho})^*_{\real}$, which is equipped with the de Rham differential $\md$. This defines the \emph{complex $(\mdga^*, \md)$ of monodromy invariant smooth differential forms} on $B$. A section $\alpha \in \mdga^*(W)$ is a collection of elements $\alpha_{x} \in \mdga^*_{x}$, $x \in W$ such that each $\alpha_{x}$ can be represented by $\mathtt{x}^{-1}\beta_{x}$ in a small neighborhood $U_{x} \subset \mathtt{x}^{-1}(\mathtt{U}_{x})$ for some smooth form $\beta_{x}$ on $\mathtt{U}_{x}$, and satisfies the relation $\alpha_{\tilde{x}} = \tilde{\mathtt{x}}^{-1}(\mathtt{p}^* \beta_{x})$ in $\mdga^*_{\tilde{x}}$ for every $\tilde{x} \in U_{x}$. \end{definition} \begin{example} In the $2$-dimensional case in Example \ref{eg:2d_monodromy}, we consider a singular point $$\{ x \} = \tsing_e \cap \reint(\tau)$$ for some $\tau \in \pdecomp^{[1]}$. In this case, we can take $\rho$ to be the $0$-dimensional stratum in $\mathscr{N}_{\tau}=\mathtt{i}_{\tau}^{-1}(\mathscr{N}_{\Delta_1(\tau)})$ and we have $\tinclude_*(\tanpoly_{B_0}^*)_{x} = \norpoly_{\tau}^*$. Taking a generator of $\norpoly_{\tau}^*$, we get an invariant affine coordinate $\mathtt{x}\colon U_{x} \rightarrow \real$ which is the normal affine coordinate of $\tau$. The stalk $\mdga^*_{x}$ is then identified with the pullback of the space of germs of smooth differential forms from $(\real,0)$ via $\mathtt{x}$. In particular, $\mdga^2_{x} =0$. For the $Y$-vertex of type II in Example \ref{eg:3d_monodromy}, the situation is similar to the $2$-dimensional case. For $\{ x \} = \tsing_e \cap \reint(\tau)$, we still have $\tinclude_*(\tanpoly_{B_0}^*)_{x} = \norpoly_{\tau}^*$, and in this case, $\mathtt{x} \colon U_{x} \rightarrow \real^2$ consists of the two invariant affine coordinates. We can identify $\mdga^*_{x}$ as the pullback of the space of germs of smooth differential forms from $(\real^2,0)$ via $\mathtt{x}$. For the $Y$-vertex of type I in Example \ref{eg:3d_monodromy}, we use the identification $ \tanpoly_{\tau,\real}^* \cong \reint(\tau)$ via $\Upsilon_{\tau}$ for the $2$-dimensional cell $\tau$ separating two maximal cells $\sigma_+$ and $\sigma_-$. In this case, $\tsing_e$ is as shown (in blue color) in Figure \ref{fig:contraction_at_rho} and $\mathscr{N}_{\tau}=\mathtt{i}_{\tau}^{-1}(\mathscr{N}_{\Delta_1(\tau)})$ is the fan of $\mathbb{P}^2$. If $x$ is the $0$-dimensional stratum of $\tsing_e \cap \reint(\tau)$, we have $\tinclude_*(\tanpoly_{B_0}^*)_{x} = \norpoly_{\tau}^*$ and $\mathtt{x}\colon U_{x} \rightarrow \real$ as an invariant affine coordinate. If $x$ is a point on a leg of the $Y$-vertex, we have $\mathtt{x} =(\mathtt{x}_1,\mathtt{x}_2)\colon U_{x} \rightarrow \real^2$ with $\mathtt{x}_1$ coming from a generator of $\tanpoly_{\rho}$ and $\mathtt{x}_2$ coming from a generator of $\norpoly_{\tau}^*$. \end{example} It follows from the definition that $\underline{\real} \rightarrow \mdga^*$ is a resolution. We shall also prove the existence of a partition of unity. \begin{lemma}\label{lem:for_contruction_of_partition_of_unity} Given any $x \in B$ and a sufficiently small neighborhood $U$, there exists $\varrho \in \mdga^{0}(U)$ with compact support in $U$ such that $0 \leq \varrho \leq 1$ and $\varrho \equiv 1$ near $x$.
(Since $\mdga^0$ is a subsheaf of the sheaf $\cu{C}^{0}$ of continuous functions on $B$, we can talk about the value $f(x)$ for $f \in \mdga^{0}(W)$ and $x \in W$.) \end{lemma} \begin{proof} If $x \notin \tsing_e$, the statement is a standard fact. So we assume that $x \in \reint(\tau) \cap \tsing_e$ for some $\tau \in \pdecomp$. As above, we can write $x \in \reint(\rho) \times \{o\}$. Furthermore, since $\rho$ is a cone in the fan $\mathtt{i}_{\tau}^{-1}(\mathscr{N}_{\Delta_1(\tau)} \times \cdots \times \mathscr{N}_{\Delta_p(\tau)})$, $\tanpoly_{\tau}^*$ has $\tanpoly_{\Delta_1(\tau)}^* \oplus \cdots \oplus \tanpoly_{\Delta_p(\tau)}^*$ as a direct summand, and the description of $\tinclude_*(\tanpoly_{B_0}^*)_x$ is compatible with the direct sum decomposition of $\tanpoly_{\tau}^*$. We may further assume that $p=1$ and $\tau = \Delta_1(\tau)$ is a simplex. If $\rho$ is not the smallest cone (i.e. the one consisting of just the origin in $\mathscr{N}_{\tau}$), we have a decomposition $\tanpoly_{\tau}^* = \tanpoly_{\rho} \oplus \norpoly_{\rho}$ and the natural projection $\mathtt{p}\colon \tanpoly_{\tau}^* \rightarrow \norpoly_{\rho}$. Then, locally near $x_0$, we can write the normal fan $\mathscr{N}_{\tau}$ as $\mathtt{p}^{-1}(\Sigma_{\rho})$ for some normal fan $\Sigma_{\rho} \subset \norpoly_{\rho}$ of a lower dimensional simplex. For any vector $v$ tangent to $\rho$ at $x_0$ and the corresponding affine function $l_{v}$ locally near $x_0$, we always have $\ddd{l_{v}}{v}>0$. This allows us to construct a bump function $\varrho = \sum_{v_i} (l_{v_i}(x) - l_{v_i}(x_0))^2$ along the $\tanpoly_{\rho,\real}$-direction. So we are reduced to the case when $\rho = \{o\}$ is the smallest cone in the fan $\mathscr{N}_{\tau}$. Now we construct the function $\varrho$ near the origin $o \in \mathscr{N}_{\tau}$ by induction on the dimension of the fan $\mathscr{N}_{\tau}$. When $\dim_{\real}(\mathscr{N}_{\tau}) = 1$, it is the fan of $\mathbb{P}^1$ consisting of three cones $\real_-$, $\{o\}$ and $\real_+$. One can construct the bump function which is equal to $1$ near $o$ and supported in a sufficiently small neighborhood of $o$. For the induction step, we consider an $n$-dimensional fan $\mathscr{N}_\tau$. For any point $x$ near but not equal to $o$, we have $x \in \reint(\rho)$ for some $\rho \neq \{o\}$. Then we can decompose $\mathscr{N}_\tau$ locally as $\tanpoly_{\rho} \oplus \norpoly_{\rho}$. Applying the induction hypothesis to $\norpoly_{\rho}$ gives a bump function $\varrho_x$ compactly supported in any sufficiently small neighborhood of $x$ (for the $\tanpoly_{\rho}$ directions, we do not need the induction hypothesis to get the bump function). This produces a partition of unity $\{\varrho_i\}$ \emph{outside} $o$. Finally, letting $\varrho := 1 - \sum_i \varrho_i$ and extending it continuously to the origin $o$ gives the desired function. \end{proof} Lemma \ref{lem:for_contruction_of_partition_of_unity} produces a partition of unity for the complex $(\mdga^*, \md)$ of monodromy invariant differential forms on $B$, which satisfies the requirement in Condition \ref{cond:requirement_of_the_de_rham_dga} below. In particular, the cohomology of $(\mdga^*(B),\md)$ computes $R\Gamma(B,\underline{\real})$. Given a point $x \in B \setminus \tsing_e$, we can take an element $\varrho_x \in \mdga^n(B)$, compactly supported in an arbitrarily small neighborhood $U_x \subset B \setminus \tsing_e$, to represent a non-zero element in the cohomology $H^n(\mdga^*,\md) = H^n(B,\comp) \cong \comp$. 
\section{Smoothing of maximally degenerate Calabi--Yau varieties via dgBV algebras}\label{sec:deformation_via_dgBV} In this section, we review and refine the results in \cite{chan2019geometry} concerning smoothing of the maximally degenerate Calabi--Yau log variety $\centerfiber{0}^{\dagger}$ over $\logsf = \spec(\hat{\cfr})^{\dagger} = \spec(\comp[[q]])^{\dagger}$ using the local smoothing models $V^{\dagger} \rightarrow \localmod{k}^{\dagger}$'s specified in \S \ref{subsec:log_structure_and_slab_function}. In order to relate with tropical geometry on $B$, we will choose $V$ so that it is the pre-image $\modmap^{-1}(W)$ of an open subset $W$ in $B$. \subsection{Good covers and local smoothing data}\label{subsubsec:local_deformation_data} Given $\tau \in \pdecomp$ and a point $x \in \reint(\tau) \subset B$, we take a sufficiently small open subset $W \in \mathscr{B}_x$. We need to construct a local smoothing model on the Stein open subset $V = \modmap^{-1}(W)$. \begin{itemize} \item If $x \notin \tsing_e$, then we can simply take the local smoothing $\localmod{}^{\dagger}$ introduced in \eqref{eqn:cartesian_diagram_of_log_spaces} in \S \ref{subsec:log_structure_and_slab_function}. \item If $x \in \tsing_e \cap \reint(\tau)$, we assume that $\mathscr{C}^{-1}(W) \cap \amoeba^{\tau}_i \neq \emptyset$ for $i = 1,\dots,r$ and $\mathscr{C}^{-1}(W) \cap \amoeba^{\tau}_i = \emptyset $ for other $i$'s. Note that $\mathscr{C}^{-1}(W) \cap \reint(\tau)$ may not be a small open subset in $\reint(\tau)$ as we may contract a polytope $\mathtt{P}$ via $\mathscr{C}$ (Figure \ref{fig:contraction_at_rho}). If we write $\tanpoly_{\Delta_{1}(\tau)}\oplus \cdots \oplus \tanpoly_{\Delta_{p}(\tau)} \oplus A_{\tau} = \tanpoly_{\tau}$ as lattices, then for each direct summand $\tanpoly_{\Delta_{i}(\tau)}$, we have a commutative diagram $$ \xymatrix@1{ \tanpoly_{\tau,\comp^*}^*\ar[rr]^{\mathtt{i}_{\tau,i,\comp^*}} \ar[d]^{\log} & & \tanpoly_{\Delta_{i}(\tau),\comp^*}^* \ar[d]^{\log}\\ \tanpoly_{\tau,\real}^* \ar[rr]^{\mathtt{i}_{\tau,i,\real}} & & \tanpoly_{\Delta_{i}(\tau),\real}^*,} $$ so that both $Z^{\tau}_i$ and $\amoeba^{\tau}_i$ are coming from pullbacks of some subsets under the projection maps $\mathtt{i}_{\tau,i,\comp^*}$ and $\mathtt{i}_{\tau,i,\real}$ respectively. From this, we see that $\mathscr{C}^{-1}(W) \cap \amoeba^{\tau}_1 \cap \cdots\cap \amoeba^{\tau}_r \neq \emptyset$ and $\modmap^{-1}(W) \cap Z^{\tau}_1 \cap \cdots \cap Z^{\tau}_r \neq \emptyset$ while $\modmap^{-1}(W) \cap Z^{\tau}_i = \emptyset $ for other $i$'s. Now we take $\psi_{x,i} = \psi_i$ for $1\leq i\leq r$ and $\psi_{x,i} =0$ otherwise accordingly. Then we can take $P_{\tau,x}$ introduced in \eqref{eqn:local_monoid_near_singularity} and the map $V = \modmap^{-1}(W) \rightarrow \spec(\comp[\Sigma_{\tau}\oplus \mathbb{N}^{l}])$ defined by \begin{equation}\label{eqn:modified_local_mod_for_singularity} \begin{cases} z^m \mapsto h_m \cdot z^m & \text{if } m\in \Sigma_{\tau} ;\\ u_i \mapsto f_{v,i} & \text{if } 1\leq i \leq r;\\ u_i \mapsto z_i & \text{if } r<i\leq l. \end{cases} \end{equation} Note that the third line of this formula is different from that of equation \eqref{eqn:singular_locus_local_model} because we do not specify a point $x \in Z^{\tau}_1 \cap \cdots \cap Z^{\tau}_r$. By shrinking $W$ if necessary, one can show that it is an embedding using an argument similar to \cite[Thm. 2.6]{Gross-Siebert-logII}. This is possible because we can check that the Jacobian appearing in the proof of \cite[Thm. 
2.6]{Gross-Siebert-logII} is invertible at every point of $\modmap^{-1}(x) = \moment^{-1}(\mathscr{C}^{-1}(x))$, which is a connected compact subset by property $(5)$ in Assumption \ref{assum:existence_of_contraction}. \end{itemize} \begin{condition}\label{cond:good_cover_of_B} An open cover $\{ W_{\alpha} \}_{\alpha}$ of $B$ is said to be \emph{good} if \begin{enumerate} \item for each $W_{\alpha}$, there exists a unique $\tau_{\alpha} \in \pdecomp$ such that $W_{\alpha} \in \mathscr{B}_x$ for some $x \in \reint(\tau_{\alpha})$; \item $W_{\alpha\beta}=W_{\alpha} \cap W_{\beta} \neq \emptyset$ only when $\tau_{\alpha} \subset \tau_{\beta}$ or $\tau_{\beta} \subset \tau_{\alpha}$, and if this is the case, we have either $\reint(\tau_{\alpha}) \cap W_{\alpha\beta} \neq \emptyset$ or $\reint(\tau_{\beta}) \cap W_{\alpha\beta} \neq \emptyset$. \end{enumerate} \end{condition} Given a good cover $\{ W_{\alpha} \}_{\alpha}$ of $B$, we have the corresponding Stein open cover $\cu{V} := \{V_\alpha\}_\alpha$ of $\centerfiber{0}$ given by $V_{\alpha} := \modmap^{-1}(W_{\alpha})$ for each $\alpha$. For each $V_{\alpha}^{\dagger}$, the infinitesimal local smoothing model is given as a log space $\localmod{}_{\alpha}^{\dagger}$ over $\logsf$ (see \eqref{eqn:cartesian_diagram_of_log_spaces}). Let $\localmod{k}_{\alpha}$ be the $k^{\text{th}}$-order thickening over $\logsk{k}= \spec(\cfr/\mathbf{m}^{k+1})^{\dagger}$ and $j \colon V_{\alpha} \setminus Z \hookrightarrow V_{\alpha}$ be the open inclusion. As in \cite[\S 8]{chan2019geometry}, we obtain coherent sheaves of BV algebras (and modules) over $V_\alpha$ from these local smoothing models. But for the purpose of this paper, we would like to push forward these coherent sheaves to $B$ and work with the open subsets $W_{\alpha}$'s. This leads to the following modification of \cite[Def. 7.6]{chan2019geometry} (see also \cite[Def. 2.14 and 2.20]{chan2019geometry}): \begin{definition}\label{def:higher_order_thickening_data_from_gross_siebert} For each $k \in \inte_{\geq 0}$, we define \begin{itemize} \item the \emph{sheaf of $k^{\text{th}}$-order polyvector fields} to be $\bva{k}_{\alpha}^* := \modmap_* j_* ( \bigwedge^{-*} \Theta_{\localmod{k}_{\alpha}^{\dagger}/\logsk{k}})$ (i.e. push-forward of relative log polyvector fields on $\localmod{k}_\alpha^{\dagger}$); \item the \emph{$k^{\text{th}}$-order log de Rham complex} to be $\tbva{k}{}^*_\alpha :=\modmap_* j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})$ (i.e. push-forward of log de Rham differentials) equipped with the de Rham differential $\dpartial{k}_{\alpha} = \dpartial{}$ which is naturally a dg module over $\logsdrk{k}{*}$; \item the \emph{local log volume form} $\volf{}_\alpha$ to be a nowhere vanishing element in $\modmap_* j_*(\Omega^n_{\localmod{}_{\alpha}^{\dagger}/\logsf})$ and the \emph{$k^{\text{th}}$-order volume form} to be $\volf{k}_{\alpha} = \volf{}_{\alpha} \ (\text{mod $\mathbf{m}^{k+1}$})$. \end{itemize} \end{definition} Given $k > l$, there are natural maps $\rest{k,l}\colon j_* ( \bigwedge^{-*} \Theta_{\localmod{k}_{\alpha}^{\dagger}/\logsk{k}}) \rightarrow j_* ( \bigwedge^{-*} \Theta_{\localmod{l}_{\alpha}^{\dagger}/\logsk{l}})$ which induce the maps $\rest{k,l}\colon \bva{k}^*_{\alpha} \rightarrow \bva{l}^*_{\alpha}$.
Before taking the push-forward $\moment_*$, each $j_* ( \bigwedge^{r} \Theta_{\localmod{k}_{\alpha}^{\dagger}/\logsk{k}})$ is a sheaf of flat $\cfrk{k}$-modules with the property that $j_* ( \bigwedge^{r} \Theta_{\localmod{k}_{\alpha}^{\dagger}/\logsk{k}}) \cong j_* ( \bigwedge^{r} \Theta_{\localmod{k+1}_{\alpha}^{\dagger}/\logsk{k+1}}) \otimes_{\cfrk{k+1}} \cfrk{k}$ by \cite[Cor. 7.4 and 7.9]{Felten-Filip-Ruddat}. In other words, we have a short exact sequence of coherent sheaves $$ \xymatrix@1{ 0 \ar[r] & j_* ( \bigwedge^{r} \Theta_{\localmod{0}_{\alpha}^{\dagger}/\logsk{0}}) \ar[r]^{\cdot q^{k+1}} \ar[r] & j_* ( \bigwedge^{r} \Theta_{\localmod{k+1}_{\alpha}^{\dagger}/\logsk{k+1}}) \ar[r] & j_* ( \bigwedge^{r} \Theta_{\localmod{k}_{\alpha}^{\dagger}/\logsk{k}}) \ar[r] & 0. } $$ Applying $\moment_*$, which is exact, we get $$ \xymatrix@1{ 0 \ar[r] & \bva{0}_{\alpha}^{-r} \ar[r]^{\cdot q^{k+1}} \ar[r] & \bva{k+1}_{\alpha}^{-r} \ar[r] & \bva{k}_{\alpha}^{-r} \ar[r] & 0. } $$ As a result, we see that $\bva{k}_{\alpha}^{-r}$ is a sheaf of flat $\cfrk{k}$-modules on $W_{\alpha}$, so we have $\bva{k+1}_{\alpha}^{-r} \otimes_{\cfrk{k+1}} \cfrk{k} \cong \bva{k}^{-r}_{\alpha}$ for each $r$; a similar statement holds for $\tbva{k}{}^{r}_{\alpha}$. A natural filtration $\tbva{k}{\bullet}^*_{\alpha}$ is given by $\tbva{k}{s}^*_{\alpha}:= \logsdrk{k}{\geq s} \wedge \tbva{k}{}^*_{\alpha}[s]$ and taking wedge product defines the natural sheaf isomorphism $\gmiso{k}{r}^{-1} \colon \logsdrk{k}{r} \otimes_{\cfrk{k}} (\tbva{k}{0}^*_\alpha/ \tbva{k}{1}^*_\alpha[-r]) \rightarrow \tbva{k}{r}^*_\alpha/ \tbva{k}{r+1}^*_\alpha$. We have the space $\tbva{k}{\parallel}^*_{\alpha} :=\tbva{k}{0}^*_{\alpha}/\tbva{k}{1}^*_{\alpha} \cong \modmap_* j_* (\Omega^*_{\prescript{k}{}{\mathbf{V}}_{\alpha}^{\dagger}/\logsk{k}})$ of \emph{relative log de Rham differentials}. There is a natural action $v \mathbin{\lrcorner} \varphi$ for $v \in \bva{k}_{\alpha}^*$ and $\varphi \in \tbva{k}{}^*$ given by contracting a logarithmic holomorphic vector field $v$ with a logarithmic holomorphic form $\varphi$. To simplify notations, for $v \in \bva{k}_{\alpha}^0$, we often simply write $v\varphi$, suppressing the contraction $\mathbin{\lrcorner}$. We define the \emph{Lie derivative} via the formula $\cu{L}_{v} := (-1)^{|v|} \partial \circ (v\mathbin{\lrcorner}) - (v\mathbin{\lrcorner}) \circ \partial$ (or equivalently, $(-1)^{|v|}\cu{L}_{v}:= [\dpartial{},v\mathbin{\lrcorner}]$). By contracting with $\volf{k}_{\alpha}$, we get a sheaf isomorphism $\mathbin{\lrcorner} \volf{k}_{\alpha} \colon \bva{k}_{\alpha}^* \rightarrow \tbva{k}{\parallel}^*_{\alpha}$, which defines the \emph{BV operator} $\bvd{k}_{\alpha}$ by $\bvd{k}_{\alpha}(\varphi) \mathbin{\lrcorner} \volf{k} := \dpartial{k}_{\alpha} (\varphi \mathbin{\lrcorner} \volf{k})$. We call it the BV operator because the BV identity: \begin{equation}\label{eqn:BV_identity} (-1)^{|v|}[v,w] : = \Delta(v\wedge w ) - \Delta(v) \wedge w -(-1)^{|v|} v\wedge \Delta(w) \end{equation} for $v, w \in \bva{k}_{\alpha}^*$, where we put $\Delta = \bvd{k}_{\alpha}$, defines a graded Lie bracket. This gives $\bva{k}^*_{\alpha}$ the structure of a sheaf of BV algebras. \subsection{An explicit description of the sheaf of log de Rham forms}\label{subsubsec:local_description_for_log_de_rham_forms} Here we apply the calculations in \cite{Gross-Siebert-logII,Felten-Filip-Ruddat} to give an explicit description of the stalk $\tbva{k}{}_{\alpha,x}^*$. 
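Before doing so, let us record an elementary toy computation which may help to fix conventions for the BV operator $\bvd{k}_{\alpha}$ introduced above. Purely for illustration (this model situation is not meant to reproduce the precise local charts below), consider an algebraic torus $(\comp^*)^{n}$ with coordinates $z_1,\dots,z_n$, take the volume form to be $\volf{} = d\log z_1 \wedge \cdots \wedge d\log z_n$, and use the interior product convention $v \mathbin{\lrcorner} (\alpha_1\wedge\cdots\wedge\alpha_n) = \sum_{j}(-1)^{j-1}\alpha_j(v)\, \alpha_1\wedge\cdots\wedge\widehat{\alpha_j}\wedge\cdots\wedge\alpha_n$. For a monomial log vector field $z^{m}\, z_i\partial_{z_i}$ with $z^m = \prod_j z_j^{m_j}$, $m=(m_1,\dots,m_n)\in\inte^{n}$, one has
$$ \big(z^{m}\, z_i\partial_{z_i}\big)\mathbin{\lrcorner}\volf{} = (-1)^{i-1} z^{m}\, d\log z_1\wedge\cdots\wedge\widehat{d\log z_i}\wedge\cdots\wedge d\log z_n, $$
and hence
$$ \dpartial{}\Big(\big(z^{m}\, z_i\partial_{z_i}\big)\mathbin{\lrcorner}\volf{}\Big) = (-1)^{i-1} m_i\, z^{m}\, d\log z_i\wedge d\log z_1\wedge\cdots\wedge\widehat{d\log z_i}\wedge\cdots\wedge d\log z_n = m_i\, z^{m}\,\volf{}, $$
so that $\bvd{}\big(z^{m}\, z_i\partial_{z_i}\big) = m_i\, z^{m}$ by the defining relation $\bvd{}(\varphi)\mathbin{\lrcorner}\volf{} = \dpartial{}(\varphi\mathbin{\lrcorner}\volf{})$. In particular, $\bvd{}$ annihilates the translation-invariant log vector fields $z_i\partial_{z_i}$, and the bracket determined by the BV identity \eqref{eqn:BV_identity} reduces, in this model situation, to the usual Schouten--Nijenhuis bracket on polyvector fields.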
Let us consider $K=\modmap^{-1}(x)$ and the local model near $K$ described in \S \ref{subsubsec:local_deformation_data}, with $P_{\tau,x}$ and $Q_{\tau,x}$ as in \eqref{eqn:local_monoid_near_singularity}, \eqref{eqn:central_fiber_local_model} and an embedding $V \rightarrow \spec(\comp[Q_{\tau,x}])$. We may treat $K \subset V$ as a compact subset of $\comp^{l} = \spec(\comp[\bb{N}^l]) \hookrightarrow \spec(\comp[Q_{\tau,x}])$ via the identification $\spec(\comp[\Sigma_{\tau}\oplus\bb{N}^l]) \cong \spec(\comp[Q_{\tau,x}])$. For each $m \in \Sigma_{\tau}$, we denote the corresponding element $(m,\psi_{x,0}(m),\dots,\psi_{x,l}(m)) \in P_{\tau,x}$ by $\hat{m}$ and the corresponding function by $z^{\hat{m}} \in \comp[P_{\tau,x}]$. Similar to \cite[Lem. 7.14]{Felten-Filip-Ruddat}, the germs of holomorphic functions $\cu{O}_{\localmod{k},K}$ near $K$ in the space $\localmod{k} = \spec(\comp[P_{\tau,x}]/q^{k+1})$ can be written as \begin{equation}\label{eqn:local_germ_of_holomorphic_functions_near_K} \cu{O}_{\localmod{k},K} = \Bigg\{ \sum_{m \in \Sigma_{\tau},\ 0\leq i \leq k } \alpha_{m,i} q^{i} z^{\hat{m}} \,\Big|\, \alpha_{m,i} \in \cu{O}_{\comp^{l}}(U)\ \text{for some neighborhood $U\supset K$}, \ \sup_{m \in \Sigma_{\tau} \setminus \{0\}} \frac{\log|\alpha_{m,i}|}{\mathtt{d}(m)} < \infty \Bigg\}, \end{equation} where $\mathtt{d}\colon \Sigma_{\tau} \rightarrow \bb{N}$ is a monoid morphism such that $\mathtt{d}^{-1}(0) = 0$, and $\cu{O}_{\localmod{k},K}$ is equipped with the product $z^{\hat{m}_1} \cdot z^{\hat{m}_2} := z^{\hat{m}_1+\hat{m}_2}$ (but note that $\widehat{m_1+m_2}\neq \hat{m}_1 + \hat{m}_2$ in general). Thus we have $\tbva{k}{}^0_{\alpha,x} \cong \bva{k}_{\alpha,x}^0 \cong \cu{O}_{\localmod{k},K}$. To describe the differential forms, we consider the vector space $\mathscr{E} = P_{\tau,x,\comp}$, regarded as the space of $1$-forms on $\spec(\comp[P_{\tau,x}^{\mathrm{gp}}]) \cong (\comp^*)^{n+1}$. Write $d\log z^{p}$ for $p \in P_{\tau,x,\comp}$ and set $\mathscr{E}_1 := \comp \langle d\log u_i \rangle_{i=1}^l$, as a subset of $\mathscr{E}$. For an element $m\in \norpoly_{\tau,\comp}$, we have the corresponding $1$-form $d\log z^{\hat{m}} \in P_{\tau,x,\comp}$ under the association between $m$ and $z^{\hat{m}}$. Let $\mathtt{P}$ be the power set of $\{1,\dots,l\}$ and write $u^{I} = \prod_{i\in I} u_i$ for $I \in \mathtt{P}$. A computation for sections of the sheaf $j_*(\Omega^r_{\localmod{k}^{\dagger}/\comp})$ from \cite[Prop. 1.12]{Gross-Siebert-logII} and \cite[Lem. 7.14]{Felten-Filip-Ruddat} can then be rephrased as the following lemma.
\begin{lemma}[\cite{Gross-Siebert-logII,Felten-Filip-Ruddat}]\label{lem:local_computation_of_log_de_rham_forms} The space of germs of sections of $j_*(\Omega^*_{\localmod{k}^{\dagger}/\comp})_K$ near $K$ is a subspace of $\cu{O}_{\localmod{k},K} \otimes \bigwedge^* \mathscr{E}$ given by elements of the form $$\displaystyle \alpha = \sum_{\substack{m \in \Sigma_{\tau}\\ 0\leq i \leq k }} \sum_{I} \alpha_{m,i,I} q^i z^{\hat{m}} u^I \otimes \beta_{m,I}, \quad \beta_{m,I} \in \bigwedge\nolimits^* \mathscr{E}_{m,I} = \bigwedge\nolimits^*(\mathscr{E}_{1,m,I}\oplus \mathscr{E}_{2,m,I} \oplus \langle d\log q\rangle), $$ where $\mathscr{E}_{1,m,I} = \langle d\log u_i \rangle_{i\in I} \subset \mathscr{E}_1$ and the subspace $\mathscr{E}_{2,m,I} \subset \mathscr{E}$ is given as follows: we consider the pullback of the product of normal fans $\prod_{i \notin I}\mathscr{N}_{\check{\Delta}_i(\tau)}$ to $ \norpoly_{\tau,\real}$ and take $\mathscr{E}_{2,m,I} = \langle d\log z^{\hat{m}'} \rangle$ for $m' \in \sigma_{m,I}$, where $\sigma_{m,I}$ is the smallest cone in $\prod_{i \notin I}\mathscr{N}_{\check{\Delta}_i(\tau)} \subset \norpoly_{\tau,\real}$ containing $m$. \end{lemma} Here we can treat $\prod_{i\notin I} \mathscr{N}_{\check{\Delta}_i(\tau)}\subset \norpoly_{\tau,\real}$ since $\bigoplus_{i} \tanpoly_{\check{\Delta}_i(\tau)}$ is a direct summand of $\norpoly_{\tau}^*$. A similar description for $j_*(\Omega^*_{\localmod{k}^{\dagger}/\comp^{\dagger}})_K$ is simply given by quotienting out the direct summand $\langle d\log q\rangle$ in the above formula for $\alpha$. In particular, if we restrict ourselves to the case $k=0$, a general element $\alpha$ can be written as $$\displaystyle \alpha = \sum_{m \in \Sigma_{\tau}} \sum_{I} \alpha_{m,I} z^{\hat{m}} u^I \otimes \beta_{m,I}, \quad \beta_{m,I} \in \bigwedge\nolimits^* \mathscr{E}_{m,I} = \bigwedge\nolimits^*(\mathscr{E}_{1,m,I}\oplus \mathscr{E}_{2,m,I}). $$ One can choose a nowhere vanishing element $$\Omega= du_1\cdots du_l \otimes \eta \in u_1\cdots u_l \otimes \wedge^l \mathscr{E}_1 \otimes \wedge^{n-\dim_{\real}(\tau)} \mathscr{E}_2 \subset j_*(\Omega^n_{\localmod{0}^{\dagger}/\comp^{\dagger}})_K$$ for some nonzero element $\eta \in \wedge^{n-\dim_{\real}(\tau)} \mathscr{E}_2$, which is well defined up to rescaling. Any element in $j_*(\Omega^n_{\localmod{0}^{\dagger}/\comp^{\dagger}})_K$ can be written as $f \Omega$ for some $f =\sum_{m \in \Sigma_{\tau}} f_{m}z^{\hat{m}} \in \cu{O}_{\localmod{0},K}$. Recall that the subset $K \subset \comp^{l}$ is intersecting the singular locus $Z^{\tau}_1,\dots,Z^{\tau}_{r}$ (as in \S \ref{subsubsec:local_deformation_data}), where $u_i$ is the coordinate function of $\comp^{l}$ with simple zeros along $Z_i^{\tau}$ for $i=1,\dots,r$. There is a change of coordinates between a neighborhood of $K$ in $\comp^{l}$ and that of $K$ in $(\comp^*)^l$ given by \begin{equation*} \begin{cases} u_i \mapsto f_{v,i}|_{(\comp^{*})^l} & \text{if } 1\leq i \leq r;\\ u_i \mapsto z_i &\text{if } r<i\leq l. \end{cases} \end{equation*} Under the map $\log \colon (\comp^*)^l \rightarrow \real^l$, we have $K = \log^{-1}(\mathscr{C})$ for some connected compact subset $\mathscr{C}\subset \real^l$. In the coordinates $z_1,\dots,z_l$, we find that $d\log z_1 \cdots d\log z_l\otimes \eta$ can be written as $f \Omega$ near $K$ for some nowhere vanishing function $f \in \cu{O}_{\localmod{0},K}$. \begin{lemma}\label{lem:local_computation_for_top_cohomology} When $K \cap Z = \emptyset$ (i.e. 
$r=0$ in the above discussion), the top cohomology group $\cu{H}^n(j_*(\Omega^*_{\localmod{0}^{\dagger}/\comp^{\dagger}})_K,\dpartial{}) :=j_*(\Omega^n_{\localmod{0}^{\dagger}/\comp^{\dagger}})_K/\mathrm{Im}(\dpartial{})$ is isomorphic to $\comp$, which is generated by the element $d\log z_1 \cdots d\log z_l\otimes \eta$. \end{lemma} \begin{proof} Given a general element $f \Omega$ as above, first observe that we can write $f = f_0 + f_{+}$, where $f_+ = \sum_{m \in \Sigma_{\tau}\setminus \{0\}} f_m z^{\hat{m}}$ and $f_0 \in \cu{O}_{\comp^l,K}$. We take a basis $e_1,\dots,e_s$ of $\norpoly_{\tau,\real}^*$, and also a partition $I_1,\dots,I_s$ of the lattice points in $\Sigma_{\tau}\setminus \{0\}$ such that $\langle e_j,m\rangle \neq 0$ for $m \in I_j$. Letting $$ \alpha =(-1)^l \sum_{j} \sum_{m \in I_j} \frac{ f_m}{\langle e_j,m\rangle} z^{\hat{m}} du_1 \cdots du_l \otimes \iota_{e_j} \eta, $$ we have $\dpartial{}(\alpha) = f_+ \Omega$. So we only need to consider elements of the form $f_0 \Omega$. If $f_0 \Omega = \dpartial{}(\alpha)$ for some $\alpha$, we may take $\alpha = \sum_j \alpha_j du_1\cdots \widehat{du_j} \cdots du_l \otimes \eta$ for some $\alpha_j \in \cu{O}_{\comp^{l},K}$. Now this is equivalent to $f_0 du_1 \cdots du_l = \dpartial{} \big(\sum_j \alpha_j du_1\cdots \widehat{du_j} \cdots du_l \big) $ as forms in $\Omega^l_{\comp^l,K}$. This reduces the problem to $\comp^l$. Working in $(\comp^*)^l$ with coordinates $z_i$'s, we can write $$ \cu{O}_{(\comp^*)^l,K} = \left\{ \sum_{m\in \inte^l} a_m z^{m} \ \Big| \ \sum_{m \in \inte^l} |a_m| e^{\langle v, m \rangle } < \infty, \ \text{for all $v \in W$, for some open $W \supset \mathscr{C}$} \right\}, $$ using the fact that $K$ is multi-circular. By writing $\Omega^*_{(\comp^*)^l,K} = \cu{O}_{(\comp^*)^l,K} \otimes \bigwedge^* \mathscr{F}_1$ with $\mathscr{F}_1 = \langle d\log z_i \rangle_{i=1}^l$, we can see that any element can be represented as $c d\log z_1 \cdots d\log z_l$ in the quotient $\Omega^l_{(\comp^*)^l,K}/\mathrm{Im}(\dpartial{})$, for some constant $c$. \end{proof} From this lemma, we conclude that the top cohomology sheaf $\cu{H}^n(\tbva{0}{\parallel}^*,\dpartial{})$ is isomorphic to the locally constant sheaf $\underline{\comp}$ over $B \setminus \tsing_e$. \begin{lemma}\label{lem:holomorphic_volume_form_non_vanishing_in_cohomology} The volume element $\volf{0}$ is non-zero in $\cu{H}^n(\tbva{0}{\parallel}^*,\dpartial{})_x$ for every $x \in B$. \end{lemma} \begin{proof} We first consider the case when $x \in \reint(\sigma)$ for some maximal cell $\sigma \in \pdecomp^{[n]}$. The toric stratum $\centerfiber{0}_{\sigma}$ associated to $\sigma$ is equipped with the natural divisorial log structure induced from its boundary divisor. Then the sheaf $\Omega^n_{\centerfiber{0}_{\sigma}^{\dagger}/\comp^{\dagger}}$ of log de Rham forms of top degree is isomorphic to $\bigwedge^n \tanpoly_{\sigma} \otimes_{\inte} \cu{O}_{\centerfiber{0}_{\sigma}}$. By \cite[Lem. 3.12]{Gross-Siebert-logII}, we have $\volf{0}_{x} = c (\mu_{\sigma})_{\modmap^{-1}(x)}$ in $\modmap_*(\Omega^n_{\centerfiber{0}_{\sigma}^{\dagger}/\comp^{\dagger}})_x \cong \tbva{0}{\parallel}^n_x$, where $\mu_{\sigma} \in \bigwedge^n \tanpoly_{\sigma,\comp}$ is nowhere vanishing and $c$ is a non-zero constant. Thus $\volf{0}_{x}$ is non-zero in the cohomology as the same is true for $\mu_{\sigma}\in\modmap_*(\Omega^n_{\centerfiber{0}_{\sigma}^{\dagger}/\comp^{\dagger}})_x$. Next we consider a general point $x \in \reint(\tau)$.
If the statement is not true, we will have $\volf{0}_x = \dpartial{0} (\alpha)$ for some $\alpha \in \tbva{0}{\parallel}^{n-1}_x$. Then there is an open neighborhood $U \supset \mathscr{C}^{-1}(x)$ such that this relation continues to hold. As $U \cap \reint(\sigma) \neq \emptyset$ for those maximal cells $\sigma$ which contain the point $x$, we can take a nearby point $y \in U \cap \reint(\sigma)$ and conclude that $c \mu_{\sigma} = \dpartial{0}(\alpha) $ in $\modmap_*(\Omega^n_{\centerfiber{0}_{\sigma}^{\dagger}/\comp^{\dagger}})_y$. This contradicts the previous case. \end{proof} \begin{lemma}\label{lem:preserving_volume_element_by_vector_fields} Suppose that $x \in W_{\alpha} \setminus \tsing_e$. For an element of the form $$e^{f } (\volf{k}_{\alpha}) \in \tbva{k}{\parallel}^n_{\alpha,x}$$ with $f \in \bva{k}^{0}_{\alpha,x} \cong \cu{O}_{\localmod{k}_{\alpha},x}$ satisfying $f \equiv 0 \ (\text{mod $\mathbf{m}$})$, there exist $h(q) \in \cfrk{k}=\comp[q]/(q^{k+1})$ and $v \in \bva{k}^{-1}_{\alpha,x}$ with $h,v \equiv 0 \ (\text{mod $\mathbf{m}$})$ such that \begin{equation}\label{eqn:preserving_volume_element_by_vector_fields} e^{f} (\volf{k}_{\alpha}) = e^{h} e^{\cu{L}_{v}} (\volf{k}_{\alpha}) \end{equation} in $\tbva{k}{\parallel}^n_{\alpha,x}$, where we recall that $\cu{L}_{v} := (-1)^{|v|} \partial \circ (v\mathbin{\lrcorner}) - (v\mathbin{\lrcorner}) \circ \partial$. \end{lemma} \begin{proof} To simplify notations in this proof, we will drop the subscript $\alpha$. We prove the statement by induction on $k$. The initial case is trivial. Assuming that this has been done for the $(k-1)^{\text{st}}$-order and taking an arbitrary lifting $\tilde{v}$ of $v$ to the $k^{\text{th}}$-order, we have $$ e^{-h + f +q^{k}\epsilon}(\volf{k}) = e^{\cu{L}_{\tilde{v}}} (\volf{k}) $$ for some $\epsilon \in \cu{O}_{\localmod{0},x}$. By Lemmas \ref{lem:local_computation_for_top_cohomology} and \ref{lem:holomorphic_volume_form_non_vanishing_in_cohomology}, we have $\epsilon \volf{0}= c \volf{0} + \dpartial{}(\gamma)$ for some $\gamma$ and some suitable constant $c$. Letting $\theta \mathbin{\lrcorner} (\volf{0}) = \gamma$ and $\breve{v} = \tilde{v} + q^{k} \theta$, we have $$e^{\cu{L}_{\breve{v}}} (\volf{k}) = e^{\cu{L}_{\tilde{v}}} (\volf{k}) - q^k \dpartial{}( \theta \mathbin{\lrcorner} (\volf{0})) = e^{-h+f + c q^k } (\volf{k}). $$ By defining $\tilde{h}(q) := h(q) - cq^k$ in $\comp[q]/(q^{k+1})$, we obtain the desired expression. \end{proof} \subsection{A global pre-dgBV algebra from gluing}\label{subsubsec:gluing_construction} One approach for smoothing $\centerfiber{0}$ is to look for gluing morphisms $\patch{k}_{\alpha\beta}\colon \localmod{k}_{\alpha}^{\dagger}|_{V_{\alpha\beta}} \rightarrow \localmod{k}_{\beta}^{\dagger}|_{V_{\alpha\beta}}$ between the local smoothing models which satisfy the cocycle condition, from which one obtains a $k^{\text{th}}$-order thickening $\centerfiber{k}$ over $\logsk{k}$. This was done by Kontsevich--Soibelman \cite{kontsevich-soibelman04} (in 2d) and Gross--Siebert \cite{gross2011real} (in general dimensions) using {\em consistent scattering diagrams}. If such gluing morphisms $\patch{k}_{\alpha\beta}$'s are available, one can certainly glue the global $k^{\text{th}}$-order sheaves $\bva{k}^*$, $\tbva{k}{}^*$ and the volume form $\volf{k}$.
In \cite{chan2019geometry}, we instead took suitable dg-resolutions $\polyv{k}^{*,*}_{\alpha}:=\mdga^*(\bva{k}_{\alpha}^*)$'s of the sheaves $\bva{k}_{\alpha}^*$'s (more precisely, we used the Thom--Whitney resolution in \cite[\S 3]{chan2019geometry}) to construct gluings $$\glue{k}_{\alpha\beta} \colon \mdga^*(\bva{k}_{\alpha}^*)|_{V_{\alpha\beta}} \rightarrow \mdga^*(\bva{k}_{\beta}^*)|_{V_{\alpha\beta}}$$ of sheaves which only preserve the \emph{Gerstenhaber algebra} structure but not the differential. The key discovery in \cite{chan2019geometry} was that, as the sheaves $\mdga^*(\bva{k}_{\alpha}^*)$'s are soft, such a gluing problem could be solved \emph{without} any information from the complicated scattering diagrams. What we obtained is a \emph{pre-dgBV algebra}\footnote{This was originally called an \emph{almost dgBV algebra} in \cite{chan2019geometry}, but we later found the name \emph{pre-dgBV algebra} from \cite{felten2020log} more appropriate.} $\polyv{k}^{*,*}(\centerfiber{})$, in which the differential squares to zero only modulo $\mathbf{m} = (q)$. Using well-known algebraic techniques \cite{terilla2008smoothness, KKP08}, we can solve the {\em Maurer--Cartan equation} and construct the thickening $\centerfiber{k}$. In this subsection, we will summarize the whole procedure, incorporating the nice reformulation by Felten \cite{felten2020log} in terms of deformations of Gerstenhaber algebras. To begin with, we assume the following condition holds: \begin{condition}\label{cond:requirement_of_the_de_rham_dga} There is a sheaf $(\mdga^*,\md)$ of unital differential graded algebras (abbrev.\ as dga) (over $\real$ or $\comp$) over $B$, with degrees $0\leq * \leq L$ for some $L$, such that \begin{itemize} \item the natural inclusion $\underline{\real} \rightarrow \mdga^*$ (or $\underline{\comp} \rightarrow \mdga^*$) of the locally constant sheaf (concentrated at degree $0$) gives a resolution, and \item for any open cover $\cu{U} = \{ U_i \}_{i \in \cu{I}}$, there is a partition of unity subordinate to $\cu{U}$, i.e. we have $\{ \rho_i\}_{i\in \cu{I}}$ with $\rho_i \in \Gamma(U_i,\mdga^0)$ and $\overline{\mathrm{supp}(\rho_i)} \subset U_i$ such that $\{\overline{\mathrm{supp}(\rho_i)} \}_i$ is locally finite and $\sum_i \rho_i \equiv 1$. \end{itemize} \end{condition} It is easy to construct such an $\mdga^*$ and there are many natural choices. For instance, if $B$ is a smooth manifold, then we can simply take the usual de Rham complex on $B$. In \S \ref{subsec:derham_for_B}, the sheaf of monodromy invariant differential forms we constructed using the (singular) integral affine structure on $B$ is another possible choice for $\mdga^*$ (with degrees $0 \leq * \leq n$). Yet another variant, namely the sheaf of \emph{monodromy invariant tropical differential forms}, will be constructed in \S \ref{sec:asymptotic_support}; this links tropical geometry on $B$ with the smoothing of the maximally degenerate Calabi--Yau variety $\centerfiber{0}$. Let us recall how to obtain a gluing of the dg resolutions of the sheaves $\bva{k}_{\alpha}^*$ and $\tbva{k}{}_{\alpha}^*$ using any possible choice of such an $\mdga^*$. We fix a good cover $\cu{W} := \{W_{\alpha}\}_{\alpha}$ of $B$ and the corresponding Stein open cover $\cu{V} := \{V_\alpha\}_\alpha$ of $\centerfiber{0}$, where $V_{\alpha} = \modmap^{-1}(W_{\alpha})$ for each $\alpha$. 
\begin{definition}\label{def:local_dgBV_from_resolution} We define $\polyv{k}_{\alpha}^{p,q}= \mdga^q(\bva{k}_{\alpha}^p):= \mdga^q|_{W_{\alpha}} \otimes_{\real} \bva{k}^p_{\alpha}$ and $\polyv{k}_{\alpha}^{*,*} = \bigoplus_{p,q} \polyv{k}_{\alpha}^{p,q}$, which gives a sheaf of dgBV algebras over $W_{\alpha}$. The dgBV structure $(\wedge,\pdb_{\alpha}, \bvd{}_{\alpha})$ is defined componentwise by \begin{align*} (\varphi \otimes v) \wedge ( \psi \otimes w) & := (-1)^{|v||\psi|} (\varphi \wedge \psi) \otimes (v \wedge w),\\ \pdb_{\alpha} (\varphi \otimes v) := (\md\varphi) \otimes v ,&\quad \bvd{}_{\alpha}(\varphi \otimes v) := (-1)^{|\varphi|} \varphi \otimes (\bvd{} v) , \end{align*} for $\varphi, \psi \in \mdga^*(U)$ and $v, w \in \bva{k}_{\alpha}^*(U)$ for each open subset $U \subset W_{\alpha}$. \end{definition} \begin{definition}\label{def:local_dga_from_resolution} We define $\totaldr{k}{}_{\alpha}^{p,q}= \mdga^q(\tbva{k}{}_{\alpha}^p):= \mdga^q|_{W_{\alpha}} \otimes_{\real} \tbva{k}{}^p_{\alpha}$ and $\totaldr{k}{}_{\alpha}^{*,*} = \bigoplus_{p,q} \totaldr{k}{}_{\alpha}^{p,q}$, which gives a sheaf of dgas over $W_{\alpha}$ equipped with the natural filtration $\totaldr{k}{\bullet}_{\alpha}^{*,*}$ inherited from $\tbva{k}{\bullet}^*_{\alpha}$. The structures $(\wedge,\pdb_{\alpha},\dpartial{}_{\alpha} )$ are defined componentwise by \begin{align*} (\varphi \otimes v) \wedge ( \psi \otimes w) & := (-1)^{|v||\psi|} (\varphi \wedge \psi) \otimes (v \wedge w),\\ \pdb_{\alpha} (\varphi \otimes v) := (\md\varphi) \otimes v ,&\quad \dpartial{}_{\alpha}(\varphi \otimes v) = (-1)^{|\varphi|} \varphi \otimes (\dpartial{}v), \end{align*} for $\varphi, \psi \in \mdga^*(U)$ and $v, w \in \tbva{k}{}_{\alpha}^*(U)$ for each open subset $U \subset W_{\alpha}$. \end{definition} There is an action of $\polyv{k}_{\alpha}^{*,*}$ on $\totaldr{k}{}_{\alpha}^{*,*}$ by contraction $\mathbin{\lrcorner}$ defined by the formula $$ (\varphi \otimes v) \mathbin{\lrcorner} (\psi \otimes w):= (-1)^{|v||\psi|} (\varphi \wedge \psi) \otimes (v\mathbin{\lrcorner} w), $$ for $\varphi, \psi \in \mdga^*(U)$, $v\in \bva{k}_{\alpha}^*(U)$ and $ w \in \tbva{k}{}_{\alpha}^*(U)$ for each open subset $U \subset W_{\alpha}$. Note that the local holomorphic volume form $\volf{k}_{\alpha} \in \totaldr{k}{\parallel}_{\alpha}^{n,0}(W_{\alpha})$ satisfies $\pdb_{\alpha}(\volf{k}_{\alpha}) = 0 $, and we have the identity $\dpartial{k}_{\alpha} (\phi \mathbin{\lrcorner} \volf{k}_{\alpha}) = \bvd{k}_{\alpha}(\phi) \mathbin{\lrcorner} \volf{k}_{\alpha}$ of operators. The next step is to consider gluing of the local sheaves $\polyv{k}_{\alpha}$'s for higher orders $k$. Similar constructions have been done in \cite{chan2019geometry,felten2020log} using the combinatorial Thom--Whitney resolution for the sheaves $\bva{k}_{\alpha}$'s. We make suitable modifications of those arguments to fit into our current setting. First, since $\localmod{k}^{\dagger}_{\alpha}|_{V_{\alpha\beta}}$ and $\localmod{k}^{\dagger}_{\beta}|_{V_{\alpha\beta}}$ are divisorial deformations (in the sense of \cite[Def. 2.7]{Gross-Siebert-logII}) of the intersection $V^{\dagger}_{\alpha\beta}:= V^{\dagger}_{\alpha} \cap V^{\dagger}_{\beta}$, we can use \cite[Thm. 
2.11]{Gross-Siebert-logII} and the fact that $V_{\alpha\beta}$ is Stein to obtain an isomorphism $\patch{k}_{\alpha\beta} \colon \localmod{k}^{\dagger}_{\alpha}|_{V_{\alpha\beta}} \rightarrow \localmod{k}^{\dagger}_{\beta}|_{V_{\alpha\beta}}$ of divisorial deformations which induces the gluing morphism $\patch{k}_{\alpha\beta} \colon \bva{k}_{\alpha}^*|_{W_{\alpha\beta}} \rightarrow \bva{k}_{\beta}^*|_{W_{\alpha\beta}}$ that in turn gives $\patch{k}_{\alpha\beta} \colon \polyv{k}_{\alpha}|_{W_{\alpha\beta}} \rightarrow \polyv{k}_{\beta}|_{W_{\alpha\beta}}$. \begin{definition}\label{def:deformation_of_Gerstenhaber_algebra} A \emph{$k^{\text{th}}$-order Gerstenhaber deformation} of $\polyv{0}$ is a collection of gluing morphisms $\glue{k}_{\alpha\beta} \colon \polyv{k}_{\alpha}|_{W_{\alpha\beta}} \rightarrow \polyv{k}_{\beta}|_{W_{\alpha\beta}}$ of the form $$\glue{k}_{\alpha\beta} = e^{[\vartheta_{\alpha\beta},\cdot]} \circ \patch{k}_{\alpha\beta}$$ for some $\vartheta_{\alpha\beta} \in \polyv{k}_{\beta}^{-1,0}(W_{\alpha\beta})$ with $\vartheta_{\alpha\beta} \equiv 0 \ (\text{mod $\mathbf{m}$})$, such that the cocycle condition $$\glue{k}_{\gamma\alpha} \circ \glue{k}_{\beta\gamma} \circ \glue{k}_{\alpha\beta} = \mathrm{id}$$ is satisfied. An \emph{isomorphism between two $k^{\text{th}}$-order Gerstenhaber deformations} $\{\glue{k}_{\alpha\beta}\}_{\alpha\beta}$ and $\{\glue{k}_{\alpha\beta}'\}_{\alpha\beta}$ is a collection of automorphisms $\prescript{k}{}{h}_{\alpha} \colon \polyv{k}_{\alpha} \rightarrow \polyv{k}_{\alpha}$ of the form $$\prescript{k}{}{h}_{\alpha} = e^{[\mathtt{b}_{\alpha},\cdot]}$$ for some $\mathtt{b}_{\alpha} \in \polyv{k}_{\alpha}^{-1,0}(W_{\alpha})$ with $\mathtt{b}_{\alpha} \equiv 0 \ (\text{mod $\mathbf{m}$})$, such that $$\glue{k}_{\alpha\beta}'\circ \prescript{k}{}{h}_{\alpha} = \prescript{k}{}{h}_{\beta} \circ \glue{k}_{\alpha\beta}.$$ \end{definition} A slight modification of \cite[Lem. 6.6]{felten2020log}, with essentially the same proof, gives the following: \begin{prop}\label{prop:classification_of_gerstenhaber_deformation} Given a $k^{\text{th}}$-order Gerstenhaber deformation $\{\glue{k}_{\alpha\beta}\}_{\alpha\beta}$, the obstruction to the existence of a lifting to a $(k+1)^{\text{st}}$-order deformation $\{\glue{k+1}_{\alpha\beta}\}_{\alpha\beta}$ lies in the \v{C}ech cohomology (with respect to the cover $\cu{W} = \{W_{\alpha}\}_{\alpha}$) $$ \check{H}^{2}(\cu{W},\polyv{0}^{-1,0}) \otimes_{\mathbb{C}} (\mathbf{m}^{k+1}/\mathbf{m}^{k+2}). $$ The isomorphism classes of $(k+1)^{\text{st}}$-order liftings form a torsor under $$ \check{H}^{1}(\cu{W},\polyv{0}^{-1,0}) \otimes_{\mathbb{C}} (\mathbf{m}^{k+1}/\mathbf{m}^{k+2}). $$ Fixing a $(k+1)^{\text{st}}$-order lifting $\{\glue{k+1}_{\alpha\beta}\}_{\alpha\beta}$, the automorphisms fixing $\{\glue{k}_{\alpha\beta}\}_{\alpha\beta}$ are given by $$ \check{H}^{0}(\cu{W},\polyv{0}^{-1,0}) \otimes_{\mathbb{C}} (\mathbf{m}^{k+1}/\mathbf{m}^{k+2}). $$ \end{prop} Since $\mdga^*$ satisfies Condition \ref{cond:requirement_of_the_de_rham_dga}, we have $\check{H}^{>0}(\cu{W},\polyv{0}^{-1,0}) = 0$. In particular, we always have a set of compatible Gerstenhaber deformations $\glue{} = (\glue{k})_{k \in \mathbb{N}}$ where $\glue{k} = \{\glue{k}_{\alpha\beta} \}_{\alpha\beta}$ and any two of them are equivalent. Fixing such a set $\glue{}$, we obtain a set $\{\polyv{k}\}_{k \in \mathbb{N}}$ of Gerstenhaber algebras which is compatible, in the sense that there are natural identifications $\polyv{k+1} \otimes_{\cfrk{k+1}} \cfrk{k} = \polyv{k}$.
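For completeness, we sketch the standard mechanism behind the vanishing $\check{H}^{>0}(\cu{W},\polyv{0}^{-1,0}) = 0$ used above; this is a textbook partition-of-unity argument and involves no feature special to our setting. Choose a partition of unity $\{\rho_{\gamma}\}_{\gamma}$ subordinate to $\cu{W}$ as in Condition \ref{cond:requirement_of_the_de_rham_dga}, and for a \v{C}ech $p$-cochain $\sigma$ with $p \geq 1$ define
$$ (\mathtt{h}\sigma)_{\alpha_0\cdots\alpha_{p-1}} := \sum_{\gamma} \rho_{\gamma}\,\sigma_{\gamma\alpha_0\cdots\alpha_{p-1}}, $$
where each summand is extended by zero outside $\overline{\mathrm{supp}(\rho_{\gamma})}$; this uses that $\polyv{0}^{-1,0}$ is a sheaf of modules over $\mdga^{0}$. A direct check gives $\delta\mathtt{h} + \mathtt{h}\delta = \mathrm{id}$ in positive degrees, so every positive-degree cocycle is a coboundary. (The operator $\mathtt{h}$ is introduced only for this remark.)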
We can also glue the local sheaves $\totaldr{k}{}^{*,*}_{\alpha}$'s of dgas using $\glue{} = (\glue{k})_{k \in \mathbb{N}}$. First, we can define $\patch{k}_{\alpha\beta} \colon \tbva{k}{}_{\alpha}^*|_{W_{\alpha\beta}} \rightarrow \tbva{k}{}_{\beta}^*|_{W_{\alpha\beta}}$ using $\patch{k}_{\alpha\beta} \colon \localmod{k}^{\dagger}_{\alpha}|_{V_{\alpha\beta}} \rightarrow \localmod{k}^{\dagger}_{\beta}|_{V_{\alpha\beta}}$. For each fixed $k$, we can write $\glue{k}_{\alpha\beta} = e^{[\vartheta_{\alpha\beta},\cdot]} \circ \patch{k}_{\alpha\beta}$ as before. Then \begin{equation}\label{eqn:gluing_formula_for_local_de_rham_dga} \glue{k}:=e^{\cu{L}_{\vartheta_{\alpha\beta}}} \circ \patch{k}_{\alpha\beta} \colon \totaldr{k}{}_{\alpha}^{*,*}|_{W_{\alpha\beta}} \rightarrow \totaldr{k}{}_{\beta}^{*,*}|_{W_{\alpha\beta}}, \end{equation} where we recall that $\cu{L}_{v} := (-1)^{|v|} \partial \circ (v\mathbin{\lrcorner}) - (v\mathbin{\lrcorner}) \circ \partial$, preserves the dga structure $(\wedge,\dpartial{}_{\alpha} )$ and the filtration on $\totaldr{k}{\bullet}_{\alpha}^{*,*}$'s. As a result, we obtain a set of compatible sheaves $\{(\totaldr{k}{}^{*,*}, \wedge,\dpartial{})\}_{k \in \mathbb{N}}$ of dgas. The contraction action $\mathbin{\lrcorner}$ is also compatible with the gluing construction, so we have a natural action $\mathbin{\lrcorner}$ of $\polyv{k}^{*,*}$ on $\totaldr{k}{}^{*,*}$. Next, we glue the operators $\pdb_{\alpha}$'s and $\bvd{}_{\alpha}$'s. \begin{definition}\label{def:predifferential} A \emph{$k^{\text{th}}$-order pre-differential} $\pdb$ on $\polyv{k}^{*,*}$ is a degree $(0,1)$ operator obtained from gluing the operators $\pdb_{\alpha}+[\eta_{\alpha},\cdot]$ specified by a collection of elements $\eta_\alpha \in \polyv{k}^{-1,1}_{\alpha}(W_{\alpha})$ such that $\eta_{\alpha} \equiv 0 \ (\text{mod $\mathbf{m}$})$ and $$ \glue{k}_{\beta\alpha }\circ (\pdb_{\beta} + [\eta_{\beta},\cdot]) \circ \glue{k}_{\alpha\beta} = (\pdb_{\alpha} + [\eta_{\alpha},\cdot]). $$ Two pre-differentials $\pdb$ and $\pdb'$ are \emph{equivalent} if there is a Gerstenhaber automorphism (for the deformation $\glue{k}$) $h \colon \polyv{k}^{*,*} \rightarrow \polyv{k}^{*,*}$ such that $h^{-1}\circ \pdb \circ h = \pdb'$. \end{definition} Notice that we only have $\pdb^2 \equiv 0$ $(\text{mod $\mathbf{m}$})$, which is why we call it a pre-differential. Using the argument in \cite[Thm. 3.34]{chan2019geometry} or \cite[Lem. 8.1]{felten2020log}, we can always lift any $k^{\text{th}}$-order pre-differential $\prescript{k}{}{\pdb}$ to a $(k+1)^{\text{st}}$-order pre-differential. Furthermore, any two such liftings differ by a global element $\mathfrak{d} \in \polyv{0}^{-1,1} \otimes \mathbf{m}^{k+1}/\mathbf{m}^{k+2}$. We fix a set $\pdb := \{\prescript{k}{}{\pdb}\}_{k \in \mathbb{N}}$ of such compatible pre-differentials. For each $k$, the action of $\prescript{k}{}{\pdb}$ on $\totaldr{k}{}^{*,*}$ is given by gluing of the action of $\pdb_{\alpha} + \cu{L}_{\eta_{\alpha}}$ on $\totaldr{k}{}^{*,*}_{\alpha}$. On the other hand, the elements \begin{equation}\label{eqn:initial_obstruction_for_pdb_square_to_zero} \mathfrak{l}_{\alpha}:= \pdb_{\alpha}(\eta_{\alpha}) + \half [\eta_{\alpha},\eta_{\alpha} ] \in \polyv{k}^{-1,2}_{\alpha}(W_{\alpha}) \end{equation} glue to give a global element $\mathfrak{l} \in \polyv{k}^{-1,2}(B)$, and for different $k$'s, these elements are compatible. 
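Locally, $\mathfrak{l}_{\alpha}$ is precisely the failure of the twisted operator $\pdb_{\alpha}+[\eta_{\alpha},\cdot]$ to square to zero. Indeed, under the assumptions (implicit in the construction above) that $\pdb_{\alpha}^{2}=0$ and that $\pdb_{\alpha}$ is an odd derivation of the bracket, the standard twisting computation, sketched here only for the reader's convenience, gives
$$ \big(\pdb_{\alpha}+[\eta_{\alpha},\cdot]\big)^{2} = \pdb_{\alpha}^{2} + \big(\pdb_{\alpha}\circ[\eta_{\alpha},\cdot]+[\eta_{\alpha},\cdot]\circ\pdb_{\alpha}\big) + [\eta_{\alpha},\cdot]\circ[\eta_{\alpha},\cdot] = \big[\pdb_{\alpha}(\eta_{\alpha})+\tfrac{1}{2}[\eta_{\alpha},\eta_{\alpha}],\cdot\big] = [\mathfrak{l}_{\alpha},\cdot], $$
where the middle equality uses $\pdb_{\alpha}\circ[\eta_{\alpha},\cdot]+[\eta_{\alpha},\cdot]\circ\pdb_{\alpha} = [\pdb_{\alpha}(\eta_{\alpha}),\cdot]$ together with the graded Jacobi identity in the form $[\eta_{\alpha},[\eta_{\alpha},\cdot]] = \tfrac{1}{2}[[\eta_{\alpha},\eta_{\alpha}],\cdot]$, both valid because $\eta_{\alpha}$ is of odd degree with respect to the grading for which $[\cdot,\cdot]$ is a graded Lie bracket.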
Computation shows that $\pdb^{2} = [\mathfrak{l},\cdot]$ on $\polyv{k}^{*,*}$ and $\pdb^{2} = \cu{L}_{\mathfrak{l}}$ on $\totaldr{k}{}^{*,*}$. To glue the operators $\bvd{}_{\alpha}$'s, we need to glue the local volume elements $\volf{k}_{\alpha}$'s to a global $\volf{k}$. We consider an element of the form $e^{\mathfrak{f}_{\alpha} \mathbin{\lrcorner}} \cdot \volf{k}_{\alpha}$, where $\mathfrak{f}_{\alpha} \in \polyv{k}^{0,0}(W_{\alpha})$ satisfies $\mathfrak{f}_{\alpha} \equiv 0 \ (\text{mod $\mathbf{m}$})$. Given a $k^{\text{th}}$-order global volume element, written locally as $e^{\mathfrak{f}_{\alpha} \mathbin{\lrcorner}} \cdot \volf{k}_{\alpha}$, we take a lifting $e^{\tilde{\mathfrak{f}}_{\alpha} \mathbin{\lrcorner}}\cdot \volf{k+1}_{\alpha}$ such that $$ \glue{k+1}_{\alpha\beta} \big( e^{\tilde{\mathfrak{f}}_{\alpha} \mathbin{\lrcorner}} \cdot \volf{k+1}_{\alpha} \big)= e^{ (\tilde{\mathfrak{f}}_{\beta} - \mathfrak{o}_{\alpha\beta}) \mathbin{\lrcorner}} \cdot \volf{k+1}_{\beta}, $$ for some element $\mathfrak{o}_{\alpha\beta} \in \polyv{0}^{0,0}(W_{\beta}) \otimes \mathbf{m}^{k+1}/\mathbf{m}^{k+2}$. By construction, $\{\mathfrak{o}_{\alpha\beta}\}_{\alpha\beta}$ gives a \v{C}ech $1$-cocycle in $\polyv{0}^{0,0} \otimes \mathbf{m}^{k+1}/\mathbf{m}^{k+2}$, which is a coboundary by the same vanishing of \v{C}ech cohomology as above. So there exist $\mathfrak{u}_{\alpha}$'s such that $\mathfrak{u}_{\beta}|_{W_{\alpha\beta}} - \mathfrak{u}_{\alpha}|_{W_{\alpha\beta}} = \mathfrak{o}_{\alpha\beta}$, and we can modify $\tilde{\mathfrak{f}}_{\alpha}$ to $\tilde{\mathfrak{f}}_{\alpha} + \mathfrak{u}_{\alpha}$, which gives the desired $(k+1)^{\text{st}}$-order volume element. Inductively, we can construct compatible volume elements $\volf{k} \in \totaldr{k}{\parallel}^{n,0}(B)$, $k\in \mathbb{N}$. Any two such volume elements $\volf{k}$ and $\volf{k}'$ differ by $\volf{k} = e^{\mathfrak{f} \mathbin{\lrcorner}} \cdot \volf{k}'$, where $\mathfrak{f} \in \polyv{k}^{0,0}(B)$ is some global element. Notice that in general $\prescript{k}{}{\pdb}(\volf{k}) \neq 0$, although it vanishes modulo $\mathbf{m}$. Using the volume element $\volf{}$ (we omit the dependence on $k$ if there is no confusion), we may now define the \emph{global BV operator} $\bvd{}$ by \begin{equation}\label{eqn:defining_global_BV_operator} (\bvd{} \varphi) \mathbin{\lrcorner} \volf{} = \dpartial{} (\varphi\mathbin{\lrcorner} \volf{}), \end{equation} which can locally be written as $\bvd{k}_{\alpha} + [\mathfrak{f}_{\alpha},\cdot]$. We have $\bvd{}^2 = 0$. The local elements \begin{equation}\label{eqn:bv_operator_obstruction} \mathfrak{n}_{\alpha}:= \bvd{k}_{\alpha}(\eta_{\alpha}) + \pdb_{\alpha}(\mathfrak{f}_{\alpha}) + [\eta_{\alpha},\mathfrak{f}_{\alpha}] \end{equation} glue to give a global element $\mathfrak{n} \in \polyv{k}^{0,1}(B)$ which satisfies $\pdb\bvd{}+\bvd{} \pdb = [\mathfrak{n},\cdot]$. Also, the elements $\mathfrak{l}$ and $\mathfrak{n}$ satisfy the relation $\pdb(\mathfrak{n}) + \bvd{}(\mathfrak{l}) = 0$ by a local calculation. In summary, we obtain pre-dgBV algebras $(\polyv{k},\pdb,\bvd{},\wedge)$ and pre-dgas $(\totaldr{k}{},\pdb,\dpartial{},\wedge)$ with a natural contraction action $\mathbin{\lrcorner}$ of $\polyv{k}^{*,*}$ on $\totaldr{k}{}^{*,*}$, and also volume elements $\volf{}$.
We set $$\polyv{} := \varprojlim_{k} \polyv{k},\ \totaldr{}{}:= \varprojlim_{k} \totaldr{k}{},$$ and define a \emph{total de Rham operator} $\mathbf{d} \colon \totaldr{}{}^{*,*} \rightarrow \totaldr{}{}^{*,*}$ by \begin{equation}\label{eqn:total_de_rham_operator} \mathbf{d}:= \pdb + \dpartial{} + \mathfrak{l}\mathbin{\lrcorner}, \end{equation} which preserves the filtration $\totaldr{}{\bullet}^{*,*}$. Using the contraction $\mathbin{\lrcorner}\volf{} \colon \polyv{}^{*,*} \rightarrow \totaldr{}{\parallel}^{*+n,*}$ to pull back the operator, we obtain the operator $\mathbf{d} = \pdb + \bvd{} + (\mathfrak{l} + \mathfrak{n})\wedge$ acting on $\polyv{}^{*,*}$. Direct computation shows that $\mathbf{d}^2 = 0$, and indeed it plays the role of the de Rham differential on a smooth manifold. Readers may consult \cite[\S 4.2]{chan2019geometry} for the computations and more details. \begin{definition}\label{def:global_polyvector_and_de_rham} We call $\polyv{}^{*,*}$ (resp. $\polyv{k}^{*,*}$) the \emph{sheaf of (resp. $k^{\text{th}}$-order) smooth relative polyvector fields over $\logs$}, and $\totaldr{}{}^{*,*}$ (resp. $\totaldr{k}{}^{*,*}$) the \emph{sheaf of (resp. $k^{\text{th}}$-order) smooth forms over $\logs$}. We denote the corresponding total complexes by $\polyv{}^{*} = \bigoplus_{p+q=*} \polyv{}^{p,q}$ (resp. $\polyv{k}^{*} $) and $\totaldr{}{}^{*} = \bigoplus_{p+q=*} \totaldr{}{}^{p,q}$ (resp. $\totaldr{k}{}^{*} $). \end{definition} \subsection{Smoothing by solving the Maurer--Cartan equation}\label{subsubsec:smoothing_via_maurer_cartan} With the sheaf $\polyv{}^{*}$ of pre-dgBV algebras defined, we can now consider the \emph{extended Maurer--Cartan equation} \begin{equation}\label{eqn:extended_maurer_cartan_equation} (\pdb+t\bvd{})\varphi + \half [\varphi,\varphi] + \mathfrak{l} + t \mathfrak{n} = 0 \end{equation} for $\varphi = (\prescript{k}{}{\varphi})_k$, where $\prescript{k}{}{\varphi} \in \polyv{k}^{0}(B)[[t]] := \polyv{k}^{0}(B)\otimes_{\comp} \comp[[t]]$. Setting $t = 0$ gives the \emph{(classical) Maurer--Cartan equation} \begin{equation}\label{eqn:classical_maurer_cartan_equation} \pdb \varphi + \half[\varphi,\varphi] + \mathfrak{l} = 0 \end{equation} for $\varphi \in \polyv{}^0(B)$. To solve these equations inductively, we need two conditions, namely the \emph{holomorphic Poincar\'{e} Lemma} and the \emph{Hodge-to-de Rham degeneracy}. We begin with the holomorphic Poincar\'{e} Lemma, which is a local condition on the sheaves $j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})$'s. We consider the complex $(j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})[u], \widetilde{\dpartial{}_{\alpha}})$, where $$\widetilde{\dpartial{}_{\alpha}}\left(\sum_{s=0}^l \nu_s u^s\right) := \sum_{s} \left( (\dpartial{}_{\alpha} \nu_s)\, u^s + s\, d\log(q) \wedge \nu_s\, u^{s-1}\right).$$ There is a natural exact sequence \begin{equation}\label{eqn:equation_for_holomorphic_poincare_lemma} \xymatrix@1{ 0 \ar[r] & \prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha} \ar[r] & j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})[u] \ar[r]^{\widetilde{\rest{k,0}}} & j_* (\Omega^*_{\localmod{0}_{\alpha}^{\dagger}/\logsk{0}}) \ar[r] & 0, } \end{equation} where $\widetilde{\rest{k,0}} (\sum_{s=0}^l \nu_s u^s) := \rest{k,0}(\nu_0)$ as elements in $j_* (\Omega^*_{\localmod{0}_{\alpha}^{\dagger}/\logsk{0}}) $.
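As an elementary consistency check (recorded here because it is used implicitly), the operator $\widetilde{\dpartial{}_{\alpha}}$ indeed squares to zero: for a monomial $\nu u^{s}$,
$$ \widetilde{\dpartial{}_{\alpha}}^{2}(\nu u^{s}) = (\dpartial{}_{\alpha}^{2}\nu)\,u^{s} + s\, d\log (q) \wedge (\dpartial{}_{\alpha}\nu)\, u^{s-1} + s\, \dpartial{}_{\alpha}\big(d\log (q)\wedge \nu\big)\, u^{s-1} + s(s-1)\, d\log (q)\wedge d\log (q)\wedge \nu\, u^{s-2} = 0, $$
since $\dpartial{}_{\alpha}^{2}=0$, $\dpartial{}_{\alpha}(d\log (q) \wedge \nu) = - d\log (q)\wedge \dpartial{}_{\alpha}\nu$ and $d\log (q)\wedge d\log (q) = 0$. Thus $\widetilde{\dpartial{}_{\alpha}}$ makes $j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})[u]$ into a complex, as used in Condition \ref{cond:holomorphic_poincare_lemma} below.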
\begin{condition}\label{cond:holomorphic_poincare_lemma} We say that the {\em holomorphic Poincar\'{e} Lemma} holds if at every point $x \in \centerfiber{0}^{\dagger}$, the complex $(\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha,x},\widetilde{\dpartial{}_{\alpha}})$ is acyclic, where $\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha,x}$ denotes the stalk of $\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha}$ at $x$. \end{condition} The holomorphic Poincar\'{e} Lemma for our setting was proved in \cite[proof of Thm. 4.1]{Gross-Siebert-logII}, but a gap was subsequently pointed out by Felten--Filip--Ruddat in \cite{Felten-Filip-Ruddat}, who used a different strategy to close the gap and give a correct proof in \cite[Thm. 1.10]{Felten-Filip-Ruddat}. From this condition, we can see that the cohomology sheaf $\cu{H}^*(\tbva{k}{\parallel}^*_{\alpha},\dpartial{k}_{\alpha})$ is free over $\cfrk{k} = \comp[q]/(q^{k+1})$ (cf. \cite[Lem. 4.1]{Kawamata-Namikawa}). We will need freeness of the cohomology $H^*(\totaldr{k}{\parallel}^*(B),\mathbf{d})$ over $\cfrk{k}$, which can be seen by the following lemma (see \cite{Kawamata-Namikawa} and \cite[\S 4.3.2]{chan2019geometry} for similar arguments). \begin{lemma} Under Condition \ref{cond:holomorphic_poincare_lemma} (the holomorphic Poincar\'{e} Lemma), the natural map $$\rest{k,0}\colon H^*(\totaldr{k}{\parallel}^*(B),\mathbf{d}) \rightarrow H^*(\totaldr{0}{\parallel}^*(B),\mathbf{d})$$ is surjective for each $k \geq 0$. \end{lemma} \begin{proof} First of all, applying the functor $\nu_*$ to the exact sequence $$ \xymatrix@1{ 0 \ar[r] & \prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha} \ar[r] & j_* (\Omega^*_{\localmod{k}_{\alpha}^{\dagger}/\comp})[u] \ar[r]^{\widetilde{\rest{k,0}}} & j_* (\Omega^*_{\localmod{0}_{\alpha}^{\dagger}/\logsk{0}}) \ar[r] & 0 } $$ gives the following exact sequence of sheaves on $B$: $$ \xymatrix@1{ 0 \ar[r] & \prescript{k}{}{\mathfrak{K}}^*_{\alpha} \ar[r] & \tbva{k}{}^*_{\alpha}[u] \ar[r]^{\widetilde{\rest{k,0}}} & \tbva{0}{}^*_{\alpha} \ar[r]& 0. } $$ This is true because every sheaf in the first exact sequence is a direct limit of coherent analytic sheaves, $R\nu_{!}$ commutes with direct limits of sheaves, and $R\nu_{!} = R\nu_* $ as the fiber $\nu^{-1}(x)$ is a compact Hausdorff topological space; see e.g. \cite{Kashiwara-Schapira94}. By taking a Cartan--Eilenberg resolution, we have the implication: $$ (\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha,x},\widetilde{\dpartial{k}_{\alpha}}) \text{ is acyclic} \Longrightarrow R\Gamma_U((\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\dpartial{k}_{\alpha}})) = 0 $$ for any open subset $U$, where $R\Gamma_U$ is the derived global section functor in the derived category of sheaves. In our case, $U = \nu^{-1}(W)$ and we have $R\Gamma_{\nu^{-1}(W)} = R\Gamma_{W} \circ R\nu_*$. Furthermore, we see that $$R\nu_* (\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\partial_{\alpha}}) = (\prescript{k}{}{\mathfrak{K}}^*_{\alpha},\widetilde{\partial_{\alpha}}).$$ This can be seen by taking a double complex $C^{*,*}$ resolving $(\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\partial_{\alpha}})$ such that $\nu_*(C^{*,*})$ computes $R\nu_* (\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\partial_{\alpha}})$. 
The spectral sequence associated to the double complex has the $E_1$-page given by $R^q \nu_* (\prescript{k}{}{\bar{\mathfrak{K}}}^p_{\alpha})$, which is $0$ if $q >0$ because $\prescript{k}{}{\bar{\mathfrak{K}}}^p_{\alpha}$ is a direct limit of coherent analytic sheaves. Therefore, $\nu_* (\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\partial_{\alpha}}) \rightarrow \nu_*(C^{*,*})= R\nu_* (\prescript{k}{}{\bar{\mathfrak{K}}}^*_{\alpha},\widetilde{\partial_{\alpha}})$ is a quasi-isomorphism. Combining these, we obtain that $R\Gamma^i_W (\prescript{k}{}{\mathfrak{K}}^*_{\alpha},\widetilde{\partial_{\alpha}}) = 0$ for each $i$. Next, by Condition \ref{cond:requirement_of_the_de_rham_dga}, $(\Omega^*|_{W_{\alpha}} \otimes_{\real} \prescript{k}{}{\mathfrak{K}}^*_{\alpha})$ is a resolution with a partition of unity, so the cohomology of the complex $$\left(\prescript{k}{}{\mathcal{B}}^{*}_{\alpha}(W), \pdb_{\alpha}+\widetilde{\dpartial{}_{\alpha}} \right) :=\left((\Omega^*|_{W_{\alpha}} \otimes_{\real} \prescript{k}{}{\mathfrak{K}}^*_{\alpha}) (W), \pdb_{\alpha}+\widetilde{\dpartial{}_{\alpha}} \right) $$ computes $R\Gamma_{W} (\prescript{k}{}{\mathfrak{K}}^*_{\alpha})$. Through an isomorphism $e^{\eta_{\alpha}\mathbin{\lrcorner}} \colon \prescript{k}{}{\mathcal{B}}^{*}_{\alpha} \rightarrow \prescript{k}{}{\mathcal{B}}^{*}_{\alpha}$, we can identify the operator: $$ \mathbf{d}_{\alpha}:=\pdb_{\alpha} + \cu{L}_{\eta_{\alpha}} + \widetilde{\dpartial{}_{\alpha}} + \iota_{\pdb_{\alpha} (\eta_{\alpha}) + \frac{1}{2} [ \eta_{\alpha},\eta_{\alpha}]} $$ with $\pdb_{\alpha}+\widetilde{\dpartial{}_{\alpha}}$, and hence deduce that $(\prescript{k}{}{\mathcal{B}}_{\alpha}^{*}(W),\mathbf{d}_{\alpha})$ is acyclic for any open subset $W$. Now, we consider the global sheaf $(\prescript{k}{}{\mathcal{B}}^{*},\mathbf{d})$ of complexes on $B$ obtained by gluing the local sheaves $(\prescript{k}{}{\mathcal{B}}_{\alpha}^{*},\mathbf{d}_{\alpha})$. We also have $(\widetilde{\prescript{k}{}{\mathcal{A}}^{*}},\mathbf{d})$ obtained by gluing $(\Omega^*|_{W_{\alpha}} \otimes \prescript{k}{}{\mathcal{K}}^*_{\alpha}[u],\mathbf{d}_{\alpha})$, and $(\prescript{0}{\parallel}{\mathcal{A}}^{*},\mathbf{d})$ obtained by gluing $(\Omega^*|_{W_{\alpha}} \otimes \prescript{0}{\parallel}{\mathcal{K}}^*_{\alpha},\mathbf{d}_{\alpha})$. Then there is an exact sequence of complexes of sheaves $$ \xymatrix@1{ 0 \ar[r] & \prescript{k}{}{\mathcal{B}}^{*} \ar[r] & \widetilde{\prescript{k}{}{\mathcal{A}}^{*}} \ar[r] & \prescript{0}{\parallel}{\mathcal{A}}^{*} \ar[r]& 0. } $$ To see that the complex $(\prescript{k}{}{\mathcal{B}}^{*}(B),\mathbf{d})$ is acyclic, we consider the total \v{C}ech complex associated to the cover $\{W_{\alpha}\}_{\alpha}$. The associated spectral sequence has zero $E_1$ page, thus $(\prescript{k}{}{\mathcal{B}}^{*}(B),\mathbf{d})$ is indeed acyclic. As a result, the map $H^i(\widetilde{\prescript{k}{}{\mathcal{A}}^{*}_{\alpha}}(B),\mathbf{d}_{\alpha}) \rightarrow H^i(\prescript{0}{\parallel}{\mathcal{A}}^{*}_{\alpha}(B),\mathbf{d}_{\alpha})$ is an isomorphism. Finally, surjectivity of the map $\rest{k,0}$ follows from the fact that the isomorphism $H^i(\widetilde{\prescript{k}{}{\mathcal{A}}^{*}_{\alpha}}(B),\mathbf{d}_{\alpha}) \rightarrow H^i(\prescript{0}{\parallel}{\mathcal{A}}^{*}_{\alpha}(B),\mathbf{d}_{\alpha})$ factors through $\rest{k,0}$. \end{proof} The Hodge-to-de Rham degeneracy is a global Hodge-theoretic condition on $\centerfiber{0}^{\dagger}$. 
We consider the Hodge filtration $F^{\geq r} j_* (\Omega^*_{\centerfiber{0}^{\dagger}/\logsk{0}}) =\bigoplus_{p\geq r} j_* (\Omega^{p}_{\centerfiber{0}^{\dagger}/\logsk{0}})$; the spectral sequence associated to it computes the hypercohomology of the complex of sheaves $(j_* (\Omega^*_{\centerfiber{0}^{\dagger}/\logsk{0}}),\dpartial{0})$. \begin{condition}\label{cond:Hodge-to-deRham} We say that the \emph{Hodge-to-de Rham degeneracy} holds for $\centerfiber{0}^{\dagger}$ if the spectral sequence associated to the above Hodge filtration degenerates at $E_1$. \end{condition} Under the assumption that $(B,\pdecomp)$ is strongly simple (Definition \ref{def:strongly simple}), the Hodge-to-de Rham degeneracy for the maximally degenerate Calabi--Yau scheme $\centerfiber{0}^{\dagger}$ was proved in \cite[Thm. 3.26]{Gross-Siebert-logII}. This was later generalized to the case when $(B,\pdecomp)$ is only simple (instead of strongly simple)\footnote{The subtle difference between the log Hodge group and the affine Hodge group when $(B,\pdecomp)$ is just simple, instead of strongly simple, was studied in detail by Ruddat in his thesis \cite{Ruddat10}.} and further to log toroidal spaces in Felten--Filip--Ruddat \cite{Felten-Filip-Ruddat} using different methods. We consider the dgBV algebra $\polyv{0}^*(B)[[t]]$ equipped with the operator $\pdb + t \bvd{}$. \begin{lemma} Under Condition \ref{cond:Hodge-to-deRham} (the Hodge-to-de Rham degeneracy), $H^*(\polyv{0}^*(B)[[t]],\pdb + t \bvd{})$ is a free $\comp[[t]]$-module. \end{lemma} \begin{proof} Recall that we are working with a good cover $\cu{W} = \{ W_\alpha\}_{\alpha}$, so that the inverse image $V_{\alpha} = \modmap^{-1}(W_{\alpha})$ is Stein for each $\alpha$. We have $R\Gamma_{\modmap^{-1}(W)} = R\Gamma_{W} \circ R\modmap_*$ and $$R\modmap_* (j_* (\Omega^*_{\centerfiber{0}^{\dagger}/\logsk{0}}),\dpartial{})= (\tbva{0}{\parallel}^*,\dpartial{}).$$ If $\modmap^{-1}(W)$ is Stein, then $R\Gamma_{\modmap^{-1}(W)}(j_* (\Omega^r_{\centerfiber{0}^{\dagger}/\logsk{0}})) = \Gamma_{\modmap^{-1}(W)} (j_* (\Omega^r_{\centerfiber{0}^{\dagger}/\logsk{0}}))$ and hence $$R\Gamma_{W}(\tbva{0}{\parallel}^{r}) = \Gamma_W(\tbva{0}{\parallel}^{r}).$$ The hypercohomology of $(j_*(\Omega^*_{\centerfiber{0}^{\dagger}/\logsk{0}}),\dpartial{})$ is computed using the \v{C}ech double complex $$\check{\cu{C}}^*(\cu{V},j_* (\Omega^*_{\centerfiber{0}^{\dagger}/\logsk{0}}))$$ with respect to the Stein open cover $\cu{V} = \{\modmap^{-1}(W_{\alpha})\}_{\alpha}$. Similarly, the hypercohomology of the complex $(\tbva{0}{\parallel}^*,\dpartial{})$ is computed using the \v{C}ech double complex $\check{\cu{C}}^*(\cu{W},\tbva{0}{\parallel}^*)$ with respect to the cover $\cu{W} = \{ W_\alpha\}_{\alpha}$; here, the Hodge filtration is induced from the filtration $F^{\geq r}\tbva{0}{\parallel}^* = \bigoplus_{p\geq r} \tbva{0}{\parallel}^{p}$. These two \v{C}ech complexes, as well as their corresponding Hodge filtrations, are identified via $\tbva{0}{\parallel}^r(W) = j_* (\Omega^r_{\centerfiber{0}^{\dagger}/\logsk{0}})(\modmap^{-1}(W))$ for each $W = W_{\alpha_1} \cap \cdots \cap W_{\alpha_k}$. Hence, under Condition \ref{cond:Hodge-to-deRham}, we have $E_1$ degeneracy also for $\check{\cu{C}}^*(\cu{W},\tbva{0}{\parallel}^*)$, or equivalently, the cohomology of $(\check{\cu{C}}^*(\cu{W},\tbva{0}{\parallel}^*)[[t]], \delta + t \dpartial{})$ is a free $\comp[[t]]$-module.
In view of the isomorphisms $( \bva{0}^*,\bvd{})\cong (\tbva{0}{\parallel}^*,\dpartial{})$ and $$H^*(\polyv{0}^*(B)[[t]],\pdb + t \bvd{}) \cong H^*(\check{\cu{C}}^*(\cu{W},\tbva{0}{\parallel}^*)[[t]], \delta + t \dpartial{}),$$ we conclude that $H^*(\polyv{0}^*(B)[[t]],\pdb + t \bvd{}) $ is a free $\comp[[t]]$-module as well. \end{proof} For the purpose of this paper, we restrict ourselves to the case that $$\prescript{k}{}{\varphi} = \prescript{k}{}{\phi} + t (\prescript{k}{}{f}),$$ where $\prescript{k}{}{\phi} \in \polyv{k}^{-1,1}(B)$ and $\prescript{k}{}{f} \in \polyv{k}^{0,0}(B)$. The extended Maurer--Cartan equation \eqref{eqn:extended_maurer_cartan_equation} can be decomposed, according to powers of $t$, into the (classical) Maurer--Cartan equation \eqref{eqn:classical_maurer_cartan_equation} for $\prescript{k}{}{\phi}$ and the equation \begin{equation}\label{eqn:volume_form_equation} \pdb(\prescript{k}{}{f}) + [\prescript{k}{}{\phi},\prescript{k}{}{f} ] + \bvd{}(\prescript{k}{}{\phi}) + \mathfrak{n} = 0. \end{equation} \begin{theorem}\label{prop:Maurer_cartan_equation_unobstructed} Suppose that both Conditions \ref{cond:holomorphic_poincare_lemma} and \ref{cond:Hodge-to-deRham} hold. Then for any $k^{\text{th}}$-order solution $\prescript{k}{}{\varphi} = \prescript{k}{}{\phi} + t (\prescript{k}{}{f}) $ to the extended Maurer--Cartan equation \eqref{eqn:extended_maurer_cartan_equation}, there exists a $(k+1)^{\text{st}}$-order solution $\prescript{k+1}{}{\varphi} = \prescript{k+1}{}{\phi} + t (\prescript{k+1}{}{f})$ to \eqref{eqn:extended_maurer_cartan_equation} lifting $\prescript{k}{}{\varphi}$. The same statement holds for the Maurer--Cartan equation \eqref{eqn:classical_maurer_cartan_equation} if we restrict to $\prescript{k}{}{\phi} \in \polyv{k}^{-1,1}(B)$. \end{theorem} \begin{proof} The first statement follows from \cite[Thm. 5.6]{chan2019geometry} and \cite[Lem. 5.12]{chan2019geometry}: Starting with a $k^{\text{th}}$-order solution $\prescript{k}{}{\varphi} = \prescript{k}{}{\phi} + t (\prescript{k}{}{f})$ for \eqref{eqn:extended_maurer_cartan_equation}, one can always use \cite[Thm. 5.6]{chan2019geometry} to lift it to a general $\prescript{k+1}{}{\varphi} \in \polyv{k+1}^0(B)[[t]]$. The argument in \cite[Lem. 5.12]{chan2019geometry} shows that we can choose $\prescript{k+1}{}{\varphi}$ such that the component of $\prescript{k+1}{}{\varphi}|_{t=0}$ in $\polyv{k+1}^{0,0}(B)$ is zero. As a result, the component $\prescript{k+1}{}{\phi} + t (\prescript{k+1}{}{f})$ of $\prescript{k+1}{}{\varphi}$ in $\polyv{k+1}^{-1,1}(B) \oplus t \, \polyv{k+1}^{0,0}(B)$ is again a solution to \eqref{eqn:extended_maurer_cartan_equation}. For the second statement, we argue that, given $\prescript{k}{}{\phi}$, there always exists $\prescript{k}{}{f} \in \polyv{k}^{0,0}(B)$ such that $\prescript{k}{}{\phi}+t (\prescript{k}{}{f})$ is a solution to \eqref{eqn:extended_maurer_cartan_equation}. We need to solve the equation \eqref{eqn:volume_form_equation} by induction on the order $k$. The initial case is trivial by taking $\prescript{0}{}{f} = 0$. Suppose the equation can be solved for $\prescript{j-1}{}{f}$. Then we take an arbitrary lifting ${\prescript{j}{}{\tilde{f}}}$ to the $j^{\text{th}}$-order. We can define an element $\mathfrak{o} \in \polyv{0}^{0,1}(B)$ by $$ q^j \mathfrak{o} = \pdb(\prescript{j}{}{\tilde{f}}) + [\prescript{j}{}{\phi},\prescript{j}{}{\tilde{f}} ] + \bvd{}(\prescript{j}{}{\phi}) + \mathfrak{n}, $$ which satisfies $\pdb(\mathfrak{o}) = 0$.
Therefore, the class $[\mathfrak{o}]$ lies in the cohomology
$$H^1(\polyv{0}^{0,*},\pdb) \cong H^1(\centerfiber{0},\cu{O}) \cong H^1(B,\comp),$$
where the last isomorphism is from \cite[Prop. 2.37]{Gross-Siebert-logI}. By our assumption in \S \ref{sec:gross_siebert}, we have $H^1(B,\comp)=0$, and hence we can find an element $\breve{f}$ such that $\pdb(\breve{f}) = \mathfrak{o}$. Letting $\prescript{j}{}{f} = \prescript{j}{}{\tilde{f}} - q^j \cdot \breve{f} \ (\text{mod $q^{j+1}$})$ proves the induction step from the $(j-1)^{\text{st}}$-order to the $j^{\text{th}}$-order. Now, applying the first statement, we can lift the solution $\prescript{k}{}{\varphi}:=\prescript{k}{}{\phi}+t (\prescript{k}{}{f})$ to $\prescript{k+1}{}{\varphi} = \prescript{k+1}{}{\phi}+t (\prescript{k+1}{}{f})$ which satisfies equation \eqref{eqn:extended_maurer_cartan_equation}, and hence $\prescript{k+1}{}{\phi}$ solves \eqref{eqn:classical_maurer_cartan_equation}.
\end{proof}
From Theorem \ref{prop:Maurer_cartan_equation_unobstructed}, we obtain a solution $\phi \in \polyv{}^{-1,1}(B)$ to the Maurer--Cartan equation \eqref{eqn:classical_maurer_cartan_equation}, from which we obtain the sheaves $\ker(\pdb+[\phi,\cdot])\subset \polyv{k}^{*,*}$ and $\ker(\pdb+\cu{L}_{\phi}) \subset \totaldr{k}{\parallel}^{*,*}$ over $B$. These sheaves are locally isomorphic to $\bva{k}^*_{\alpha}$ and $\tbva{k}{\parallel}^*_{\alpha}$, so we may treat them as obtained by gluing the local sheaves $\bva{k}^*_{\alpha}$'s and $\tbva{k}{\parallel}^*_{\alpha}$'s. From these, we can extract consistent and compatible gluings $\prescript{k}{}{\varPhi}_{\alpha\beta} \colon \localmod{k}^{\dagger}_{\alpha}|_{V_{\alpha\beta}} \rightarrow \localmod{k}^{\dagger}_{\beta}|_{V_{\alpha\beta}}$ satisfying the cocycle condition, and hence obtain a $k^{\text{th}}$-order thickening $\centerfiber{k}$ of $\centerfiber{0}$ over $\logsk{k}$; see \cite[\S 5.3]{chan2019geometry}. Also, $e^{f} \mathbin{\lrcorner} \volf{}$, as a section of $\ker(\pdb+\cu{L}_{\phi})$ over $B$, defines a holomorphic volume form on the $k^{\text{th}}$-order thickening $\centerfiber{k}$.
\subsubsection{Normalized volume form}\label{subsec:normalization_condition}
For later purposes, we need to further normalize the holomorphic volume form
$$\varOmega := e^{f} \mathbin{\lrcorner} \volf{} \in \ker(\pdb+\cu{L}_{\phi})(B) \subset \totaldr{k}{\parallel}^{n,0}(B)$$
by adding a suitable power series $h(q) \in (q) \subset \comp[[q]]$ to $f$, so that $\int_{T} e^{f} \mathbin{\lrcorner} \volf{} = 1$, where $T$ is a nearby $n$-torus in the smoothing. The \emph{$k^{\text{th}}$-order Hodge bundle} over $\spec(\comp[q]/q^{k+1})$ is defined as the cohomology
$$\prescript{k}{}{\cu{H}}:= H^n(\totaldr{k}{\parallel}^*,\mathbf{d}),$$
equipped with a Gauss--Manin connection $\gmc{k}$, where $\gmc{k}_{\dd{\log q}}$ is the connecting homomorphism of the long exact sequence associated to
\begin{equation}\label{eqn:Gauss_Manin_connection_definition}
0 \rightarrow \totaldr{k}{\parallel}^{*-1} \otimes_{\comp} \comp \langle d\log q \rangle \rightarrow \totaldr{k}{}^* \rightarrow \totaldr{k}{\parallel}^* \rightarrow 0;
\end{equation}
here $\comp \langle d\log q \rangle $ is the $1$-dimensional graded vector space spanned by the degree $1$ element $d\log q$. We denote $\widehat{\cu{H}} := \varprojlim_{k} \prescript{k}{}{\cu{H}}$.
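Concretely, the connecting homomorphism admits the following standard description (a Katz--Oda-type sketch, stated up to sign conventions, with $\mathbf{d}$ denoting the total differential on $\totaldr{k}{}^*$): represent a class $[\alpha] \in \prescript{k}{}{\cu{H}}$ by a $\mathbf{d}$-closed element $\alpha \in \totaldr{k}{\parallel}^{n}$ and choose a lift $\tilde{\alpha} \in \totaldr{k}{}^{n}$. Since $\mathbf{d}\alpha = 0$, the element $\mathbf{d}\tilde{\alpha}$ lies in the subcomplex of \eqref{eqn:Gauss_Manin_connection_definition}, i.e.
$$ \mathbf{d}\tilde{\alpha} = \beta \otimes d\log q \quad \text{for some } \beta \in \totaldr{k}{\parallel}^{n}, $$
and one sets $\gmc{k}_{\dd{\log q}}[\alpha] := [\beta]$, which is independent of the choices made.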
Restricting to the $0^{\text{th}}$-order, we have $N=\gmc{0}_{\dd{\log q}}$, which is a nilpotent operator acting on $\prescript{0}{}{\cu{H}} = H^{n}(\totaldr{0}{\parallel}^*) \cong \mathbb{H}^n(\centerfiber{},j_* \Omega^*_{\centerfiber{}^{\dagger}/\comp^{\dagger}})$, where $\centerfiber{} = \centerfiber{0}$. If we consider the top cohomology $H^{2n}(\totaldr{0}{\parallel}^*)$, which is $1$-dimensional, we see that $N=\gmc{0}_{\dd{\log q}} = 0$. So $\gmc{k}_{\dd{\log q}}$ defines a flat connection on $H^{2n}(\totaldr{k}{\parallel}^*)$ without log pole at $q=0$. Hence, we can find a basis (order by order in $q$) to identify $H^{2n}(\totaldr{k}{\parallel}^*) \cong H^{2n}(\totaldr{0}{\parallel}^*) \otimes \comp[q]/q^{k+1}$, which also trivializes the flat connection $\gmc{}$ as $\dd{\log q}$. Since $H^n(B,\comp)\cong \comp$, we can fix a non-zero generator and choose a representative $\varrho \in \mdga^n(B)$. Then the element $\varrho\otimes 1 \in \totaldr{k}{\parallel}^n(B)$ (which may simply be written as $\varrho$) represents a section $[\varrho]$ in $\widehat{\cu{H}}$. A direct computation shows that $\gmc{} [\varrho] = 0$, i.e.\ $[\varrho]$ is a flat section to all orders. The pairing with the $0^{\text{th}}$-order volume form $\volf{0}$ gives a non-zero element $[\volf{0}\wedge \varrho ]$ in $H^{2n}(\totaldr{0}{\parallel}^*)$.
\begin{definition}\label{def:normalized_volume_form}
The volume form $\varOmega=e^{f} \mathbin{\lrcorner} \volf{}$ is said to be \emph{normalized} if $[\varOmega \wedge \varrho]$ is flat under $\gmc{}$.
\end{definition}
In other words, we can write $[\varOmega \wedge \varrho]= [\volf{0}\wedge \varrho]$ under the identification
$$H^{2n}(\totaldr{k}{\parallel}^*) \cong H^{2n}(\totaldr{0}{\parallel}^*) \otimes \comp[q]/q^{k+1}.$$
By modifying $f$ to $f+h(q)$, this can always be achieved. Further, after the modification, $\varphi = \phi + t f$ still solves \eqref{eqn:extended_maurer_cartan_equation}.
\section{From smoothing of Calabi--Yau varieties to tropical geometry}\label{sec:tropical_geometry_and_mc_equation}
\subsection{Tropical differential forms}\label{sec:asymptotic_support}
To tropicalize the pre-dgBV algebra $\polyv{}^{*,*}$, we need to replace the Thom--Whitney resolution used in \cite{chan2019geometry} by a geometric resolution. To do so, we first need to recall some background material from our previous works \cite[\S 4.2.3]{kwchan-leung-ma} and \cite[\S 3.2]{kwchan-ma-p2}. Of crucial importance is the notion of \emph{differential forms with asymptotic support} (which will be called \emph{tropical differential forms} in this paper), which originated from multi-valued Morse theory and Witten deformations. Such differential forms can be regarded as distribution-valued forms supported on tropical polyhedral subsets. This key notion allows us to develop tropical intersection theory via differential forms and, in particular, to define the intersection pairing between possibly non-transversal tropical polyhedral subsets simply using the wedge product. Let $U$ be an open subset of $M_\real$. We consider the space $\Omega^k_\hp(U) := \Gamma(U \times \mathbb{R}_{>0}, \bigwedge^{\raisebox{-0.4ex}{\scriptsize $k$}} T^{\vee} U)$, where we take $\cu{C}^{\infty}$ sections of $\bigwedge^{\raisebox{-0.4ex}{\scriptsize $k$}} T^{\vee} U$ and $\hp$ is a coordinate on $\mathbb{R}_{>0}$.
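To fix ideas before the formal definitions below, here is a minimal illustration (the two specific forms are chosen here for illustration only and are not taken from the cited references): on $U = \real$ with affine coordinate $x$, the $\hp$-dependent $1$-forms
$$ e^{-1/\hp}\, dx \qquad \text{and} \qquad \hp^{-1}\, dx $$
both lie in $\Omega^1_\hp(U)$; as $\hp \rightarrow 0^+$, the first decays exponentially while the second blows up only polynomially. The two subsheaves introduced next are designed to separate precisely these two types of behaviour.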
Let $\mathcal{W}^{k}_{-\infty}(U) \subset \Omega^k_\hp(U)$ be the subset of $k$-forms $\alpha$ such that, for each $q \in U$, there exist a neighborhood $q \in V \subset U$, constants $D_{j,V}$, $c_V$ and a sufficiently small real number $\hp_0 > 0$ such that $\|\nabla^j \alpha\|_{L^\infty(V)} \leq D_{j,V} e^{-c_V/\hp}$ for all $j \geq 0$ and for $0<\hp < \hp_0$; here, the $L^\infty$-norm is defined by $\| \alpha \|_{L^{\infty}(V)} = \sup_{x\in V} \|\alpha(x)\|$ for any section $\alpha$ of the tensor bundle $TU^{\otimes k} \otimes T^{\vee}U^{\otimes l}$, where we fix a constant metric on $M_{\real}$ and use the induced metric on $TU^{\otimes k} \otimes T^{\vee}U^{\otimes l}$; $\nabla^j$ denotes an operator of the form $\nabla_{\dd{x_{l_1}}}\cdots \nabla_{\dd{x_{l_j}}}$, where $\nabla$ is a torsion-free, flat connection defining an affine structure on $U$ and $x = (x_1,\dots, x_n)$ is an affine coordinate system (note that $\nabla$ is {\it not} the Gauss--Manin connection in the previous section). Similarly, let $\mathcal{W}^{k}_{\infty}(U) \subset \Omega^k_\hp(U)$ be the set of $k$-forms $\alpha$ such that, for each $q \in U$, there exist a neighborhood $q \in V \subset U$, a constant $D_{j,V}$, $N_{j,V} \in \inte_{>0}$ and a sufficiently small real number $\hp_0 > 0$ such that $\|\nabla^j \alpha\|_{L^\infty(V)} \leq D_{j,V} \hp^{-N_{j,V}}$ for all $j \geq 0$ and for $0<\hp < \hp_0$. The assignment $U \mapsto \mathcal{W}^{k}_{-\infty}(U)$ (resp. $U \mapsto \mathcal{W}^{k}_{\infty}(U)$) defines a sheaf $\mathcal{W}^{k}_{-\infty}$ (resp. $\mathcal{W}^{k}_{\infty}$) on $M_\real$ (\cite[Defs. 4.15 \& 4.16]{kwchan-leung-ma}). Note that $\mathcal{W}^{k}_{-\infty}$ and $\mathcal{W}^{k}_{\infty}$ are closed under the wedge product, $\nabla_{\dd{x}}$ and the de Rham differential $d$. Since $\mathcal{W}^{k}_{-\infty}$ is a dg ideal of $\mathcal{W}^{k}_{\infty}$, the quotient $\mathcal{W}^{*}_{\infty}/\mathcal{W}^{*}_{-\infty}$ is a sheaf of dgas when equipped with the de Rham differential. Now suppose $U$ is a convex open set. By a {\em tropical polyhedral subset} of $U$, we mean a connected convex subset of $U$ which is defined by finitely many affine equations or inequalities over $\mathbb{Q}$ of the form $a_1 x_1 + \cdots + a_n x_n \leq b$. \begin{definition}[\cite{kwchan-leung-ma}, Def. 4.19]\label{def:asypmtotic_support_pre} A $k$-form $\alpha \in \mathcal{W}_{\infty}^k(U)$ is said to have {\em asymptotic support on a closed codimension $k$ tropical polyhedral subset $P \subset U$ with weight $s \in \inte$}, denoted as $\alpha \in \mathcal{W}_{P,s}(U)$, if the following conditions are satisfied: \begin{enumerate} \item For any $p \in U \setminus P$, there is a neighborhood $p \in V \subset U \setminus P$ such that $\alpha|_V \in \mathcal{W}_{-\infty}^k(V)$. \item There exists a neighborhood $W_P \subset U$ of $P$ such that $\alpha = h(x,\hp) \nu_P + \eta$ on $W_P$, where $\nu_P \in \bigwedge^k N_{\real}$ is a non-zero affine $k$-form (defined up to non-zero constant) which is normal to $P$, $h(x,\hp) \in C^\infty(W_P \times \real_{>0})$ and $\eta \in \mathcal{W}_{-\infty}^k(W_P)$. \item For any $p \in P$, there exists a convex neighborhood $p \in V \subset U$ equipped with an affine coordinate system $x = (x_1,\dots, x_n)$ such that $x' := (x_1, \dots, x_k)$ parametrizes codimension $k$ affine linear subspaces of $V$ parallel to $P$, with $x' = 0$ corresponding to the subspace containing $P$. 
With the foliation $\{(P_{V, x'})\}_{x' \in N_V}$, where $P_{V,x'} = \{ (x_1,\dots,x_n) \in V \ | \ (x_1,\dots,x_k) = x' \}$ and $N_V$ is the normal bundle of $V$, we require that, for all $j \in \inte_{\geq 0}$ and multi-indices $\beta = (\beta_1,\dots,\beta_k) \in \inte_{\geq 0}^k$, the estimate
\[
\int_{x'} (x')^\beta \left(\sup_{P_{V,x'}}|\nabla^j (\iota_{\nu_P^\vee} \alpha)| \right) \nu_P \leq D_{j,V,\beta} \hp^{-\frac{j+s-|\beta|-k}{2}}
\]
holds for some constant $D_{j,V,\beta}$ and $s \in \inte$, where $|\beta| = \sum_l \beta_l$ and $\nu_P^\vee = \dd{x_1}\wedge\cdots \wedge\dd{x_k}$.\footnote{For $k=0$, we use the convention that $\nu_P = 1 \in \bigwedge^0 N_{\real} = \real$ and also set $\nu_P^\vee =1$.}
\end{enumerate}
\end{definition}
Observe that $\nabla_{\dd{x_l}} \mathcal{W}_{P,s}(U) \subset \mathcal{W}_{P,s+1}(U)$ and $(x')^{\beta}\mathcal{W}_{P,s}(U) \subset \mathcal{W}_{P,s-|\beta|}(U)$. It follows that
\[
(x')^{\beta} \nabla_{\dd{x_{l_1}}}\cdots \nabla_{\dd{x_{l_j}}} \mathcal{W}_{P,s}(U) \subset \mathcal{W}_{P,s+j-|\beta|}(U).
\]
The weight $s$ defines a filtration of $\mathcal{W}^k_{\infty}$ (we drop the $U$ dependence from the notation whenever it is clear from the context):\footnote{Note that $k$ is equal to the codimension of $P \subset U$.}
\[
\mathcal{W}_{-\infty}^k \subset \cdots \subset \mathcal{W}_{P,-1}\subset \mathcal{W}_{P,0} \subset \mathcal{W}_{P,1} \subset \cdots \subset \mathcal{W}_{\infty}^k \subset \Omega^k_\hp(U).
\]
This filtration, which keeps track of the polynomial order of $\hp$ for $k$-forms with asymptotic support on $P$, provides a convenient tool to express and prove results in asymptotic analysis.
\begin{definition}[\cite{kwchan-ma-p2}, Def. 3.10]\label{def:asymptotic_support}\label{def:asy_support_algebra}
A differential $k$-form $\alpha$ is \emph{in $\tilde{\mathcal{W}}_{s}^k(U)$} if there exist polyhedral subsets $P_1, \dots, P_l \subset U$ of codimension $k$ such that $\alpha \in \sum_{j=1}^l \mathcal{W}_{P_j,s}(U)$. If, moreover, $d \alpha \in \tilde{\mathcal{W}}_{s+1}^{k+1}(U)$, then we write $\alpha \in \mathcal{W}_s^k(U)$. For every $s \in \inte$, let $\mathcal{W}_s^*(U) = \bigoplus_k \mathcal{W}_{s+k}^k(U)$.
\end{definition}
\begin{example}\label{example:bump_form}
Let $U = \real$ and $x$ be an affine coordinate on $U$. Then we consider the $\hp$-dependent $1$-form
$$ \delta:= \left( \frac{1}{\hp \pi} \right)^{\half} e^{-\frac{x^2}{\hp}} dx. $$
Direct calculations in \cite[Lem. 4.12]{kwchan-leung-ma} show that $\delta \in \mathcal{W}_1^1(U)$ has asymptotic support on the hyperplane $P$ defined by $x=0$. The hyperplane $P$ separates $U$ into two chambers $H_+$ and $H_-$. If we fix a base point in $H_-$ and apply the integral operator $I$ in \cite[Lem. 4.23]{kwchan-leung-ma}, we obtain $I(\delta) \in \mathcal{W}^0_0(U)$ which has asymptotic support on $H_+ \cup P$, playing the role of a step function. Taking finite products of elements of the above form, we obtain $\alpha \in \mathcal{W}^k_k(U)$ with asymptotic support on arbitrary tropical polyhedral subsets of $U$. Any form obtained from a finite number of steps of applying the differential $d$, applying the integral operator $I$ and taking wedge products lies in $\mathcal{W}^*_0(U)$.
\end{example}
We say that two closed tropical polyhedral subsets $P_1, P_2 \subset U$ of codimension $k_1, k_2$ {\em intersect transversally} if the affine subspaces of codimension $k_1$ and $k_2$ which contain $P_1$ and $P_2$, respectively, intersect transversally.
This definition applies also when $P_1 \cap P_2 = \emptyset $ or $\partial P_i \neq \emptyset$. \begin{lemma}[{\cite[Lem. 4.22]{kwchan-leung-ma}}] \phantomsection \label{lem:support_product}\label{prop:support_product}\label{prop:asy_support_algebra_ideal} \begin{enumerate} \item Let $P_1, P_2, P \subset U$ be closed tropical polyhedral subsets of codimension $k_1$, $k_2$ and $k_1+k_2$, respectively, such that $P$ contains $P_1 \cap P_2$ and is normal to $\nu_{P_1} \wedge \nu_{P_2}$. Then $\mathcal{W}_{P_1,s}(U) \wedge \mathcal{W}_{P_2,r}(U) \subset \mathcal{W}_{P,r+s}(U)$ if $P_1$ and $P_2$ intersect transversally with $P_1 \cap P_2 \neq \emptyset$, and $\mathcal{W}_{P_1,s}(U) \wedge \mathcal{W}_{P_2,r}(U) \subset \mathcal{W}_{-\infty}^{k_1 + k_2}(U)$ otherwise. \item We have $\mathcal{W}_{s_1}^{k_1}(U) \wedge \mathcal{W}_{s_2}^{k_2}(U) \subset \mathcal{W}_{s_1+s_2}^{k_1+k_2}(U)$. In particular, $\mathcal{W}_0^*(U) \subset \mathcal{W}_{\infty}^*(U)$ is a dg subalgebra and $\mathcal{W}_{-1}^*(U) \subset \mathcal{W}_0^*(U)$ is a dg ideal. \end{enumerate} \end{lemma} \begin{definition}\label{def:sheaf_of_tropical_dga} Let $\mathcal{W}_s^*$ be the sheafification of the presheaf defined by $U \mapsto \mathcal{W}_s^*(U)$. We call the quotient sheaf $\tform^*:=\mathcal{W}_0^*/\mathcal{W}_{-1}^*$ the \emph{sheaf of tropical differential forms}, which is a sheaf of dgas on $M_{\real}$ with structures $(\wedge,\md)$. \end{definition} From \cite[Lem. 3.6]{kwchan-ma-p2}, we learn that $\underline{\real} \rightarrow \tform^*$ is a resolution. Furthermore, given any point $x \in U$ and a sufficiently small neighborhood $x \in W \subset U$, we can show that there exists $f \in \mathcal{W}_0^0(W)$ with compact support in $W$ and satisfying $f \equiv 1$ near $x$ (using an argument similar to the proof of Lemma \ref{lem:for_contruction_of_partition_of_unity}). Therefore, $\tform^*$ has a partition of unity subordinate to a given open cover. Replacing the sheaf of de Rham differential forms on $\tanpoly_{\rho_1,\real}^* \oplus \norpoly_{\tau,\real}$ by the sheaf $\tform^*$ of tropical differential forms, we can construct a particular complex on the integral tropical manifold $B$ satisfying Condition \ref{cond:requirement_of_the_de_rham_dga}, which dictates the tropical geometry of $B$. \begin{definition}\label{def:global_sheaf_of_monodromy_invariant_tropical_forms} Given a point $x$ as in \S\ref{subsec:derham_for_B} (with a chart as in equation \eqref{eqn:monodromy_invariant_affine_functions_near_x_o}), the stalk of $\tform^*$ at $x$ is defined as $\tform^*_{x}:= (\mathtt{x}^{-1}\tform^*)_{x}$. This defines the \emph{complex $(\tform^*,\md)$ (or simply $\tform^*$) of monodromy invariant tropical differential forms on $B$}. A section $\alpha \in \tform^*(W)$ is a collection of elements $\alpha_{x} \in \tform^*_{x}$, $x \in W$ such that each $\alpha_{x}$ can be represented by $\mathtt{x}^{-1}\beta_{x}$ in a small neighborhood $U_{x} \subset \mathtt{p}^{-1}(\mathtt{U}_{x})$ for some tropical differential form $\beta_{x}$ on $\mathtt{U}_{x}$, and satisfies the relation $\alpha_{\tilde{x}} = \tilde{\mathtt{x}}^{-1}(\mathtt{p}^* \beta_{x})$ in $\tform^*_{\tilde{x}}$ for every $\tilde{x} \in U_{x}$. \end{definition} Notice that the definition of $\tform^*$ requires the projection map $\mathtt{p}$ in equation \eqref{eqn:monodromy_invariant_differential_form_change_of_chart} to be affine, while that of $\mdga^*$ in \S \ref{subsec:derham_for_B} does not. 
But like $\mdga^*$, $\tform^*$ satisfies Condition \ref{cond:requirement_of_the_de_rham_dga} and can be used for the purpose of gluing the sheaf $\polyv{}^*$ of dgBV algebras in \S \ref{subsubsec:gluing_construction}. In the rest of this section, we shall use the notations $\polyv{}^*$ and $\totaldr{}{}^*$ to denote the complexes of sheaves constructed using $\tform^*$. \subsection{The semi-flat dgBV algebra and its comparison with the pre-dgBV algebra $\polyv{}^{*,*}$}\label{subsec:semi-flat_dgBV} In this section, we define a twisting of the semi-flat dgBV algebra by the slab functions (or initial wall-crossing factors) in \S \ref{subsec:log_structure_and_slab_function}, and compare it with the dgBV algebra we constructed in \S \ref{subsubsec:gluing_construction} using gluing of local smoothing models. The key result is Lemma \ref{lem:comparing_sheaf_of_dgbv}, which is an important step in the proof of our main result. We start by recalling some notations from \S \ref{subsec:log_structure_and_slab_function}. Recall that for each vertex $v$, we fix a representative $\varphi_v\colon U_v \rightarrow \real$ of the strictly convex multi-valued piecewise linear function $\varphi \in H^0(B,\cu{MPL}_{\pdecomp})$ to define the cone $C_v$ and the monoid $P_v$. The natural projection $T_v \oplus \inte \rightarrow T_v$ induces a surjective ring homomorphism $\comp[\rho^{-1}P_v] \rightarrow \comp[\rho^{-1}\Sigma_v]$; we denote by $\bar{m} \in \rho^{-1}\Sigma_v$ the image of $m \in \rho^{-1} P_v$ under the natural projection. We consider $\mathbf{V}(\tau)_v := \spec(\comp[\tau^{-1}P_v])$ for some $\tau$ containing $v$, and write $z^m$ for the function corresponding to $m \in \tau^{-1} P_v$. The element $\varrho$ together with the corresponding function $z^{\varrho}$ determine a family $\spec(\comp[\tau^{-1}P_v]) \rightarrow \comp$, whose central fiber is given by $\spec(\comp[\tau^{-1}\Sigma_v])$. The variety $\mathbf{V}(\tau)_v = \spec(\comp[\tau^{-1}P_v])$ is equipped with the divisorial log structure induced by $\spec(\comp[\tau^{-1}\Sigma_v])$, which is log smooth. We write $\mathbf{V}(\tau)_v^{\dagger}$ if we need to emphasize the log structure. Since $B$ is orientable, we can choose a nowhere vanishing integral element $\mu \in \Gamma(B\setminus \tsing_e,\bigwedge^n T_{B,\inte})$. We fix a local representative $\mu_v \in \bigwedge^n T_{v}$ for every vertex $v$ and $\mu_{\sigma} \in \bigwedge^n \tanpoly_{\sigma}$ for every maximal cell $\sigma$. Writing $\mu_v = m_1 \wedge \cdots \wedge m_n$, we have the corresponding relative volume form $$\mu_v = d\log z^{m_1} \wedge \cdots \wedge d\log z^{m_n}$$ in $\Omega^n_{\mathbf{V}(\tau)_v^{\dagger}/\comp^{\dagger}}$. Now the relative log polyvector fields can be written as $$ \bigwedge\nolimits^{-l} \Theta_{\mathbf{V}(\tau)_v^{\dagger}/\comp^{\dagger}} = \bigoplus_{m \in \tau^{-1}P_v} z^m \partial_{n_1} \wedge \cdots \wedge \partial_{n_l}. $$ The volume form $\mu_v$ defines a BV operator via contraction $(\bvd{}\alpha) \mathbin{\lrcorner} \mu_v := \partial(\alpha \mathbin{\lrcorner} \mu_v)$, which is given explicitly by $$ \bvd{}(z^m \partial_{n_1} \wedge \cdots \wedge \partial_{n_l}) = \sum_{j=1}^l (-1)^{j-1} \langle m,n_j\rangle z^m \partial_{n_1} \wedge \cdots \widehat{\partial}_{n_j} \cdots \wedge \partial_{n_l}. 
$$
A Schouten--Nijenhuis-type bracket is given by extending the following formulae skew-symmetrically:
\begin{align*}
[z^{m_1} \partial_{n_1},z^{m_2}\partial_{n_2}] & = z^{m_1+m_2} \partial_{\langle \bar{m}_1, n_2 \rangle n_{1} - \langle \bar{m}_2, n_1 \rangle n_{2}},\\
[z^m , \partial_n] & = \langle \bar{m}, n \rangle z^m.
\end{align*}
This gives $\bigwedge^{-*} \Theta_{\mathbf{V}(\tau)_v^{\dagger}/\comp^{\dagger}}$ the structure of a BV algebra.
\subsubsection{Construction of the semi-flat sheaves}\label{subsubsec:semi-flat}
For each $k \in \mathbb{N}$, we shall define a sheaf $\sfbva{k}^*_{\mathrm{sf}}$ (resp. $\sftbva{k}{}^*_{\mathrm{sf}}$) of $k^{\text{th}}$-order semi-flat log polyvector fields (resp. semi-flat log de Rham forms) over the open dense subset $W_0 \subset B$ defined by
$$
W_0 := \bigcup_{\sigma \in \pdecomp^{[n]}} \reint(\sigma) \cup \bigcup_{\rho \in \pdecomp^{[n-1]}_0} \reint(\rho) \cup \bigcup_{\rho \in \pdecomp^{[n-1]}_1} \big( \reint(\rho) \setminus (\tsing \cap \reint(\rho)) \big),
$$
where $\pdecomp^{[n-1]}_0$ consists of the $\rho$'s such that $\reint(\rho) \cap \tsing_e = \emptyset$, and $\pdecomp^{[n-1]}_1$ consists of those $\rho$'s with $\reint(\rho) \cap \tsing_e \neq \emptyset$. These sheaves use the natural divisorial log structure on $\mathbf{V}(\rho)_v^{\dagger}$ and will \emph{not} depend on the slab functions $f_{v,\rho}$'s. This construction is possible because we are using the much more flexible Euclidean topology on $W_0$, instead of the Zariski topology on $\centerfiber{0}$. For $\sigma \in \pdecomp^{[n]}$, recall that we have $V(\sigma) = \spec(\comp[\sigma^{-1}\Sigma_{v}])$ for some $v \in \sigma^{[0]}$. We also have $\spec(\comp[\sigma^{-1}\Sigma_{v}]) = \tanpoly^*_{\sigma,\comp}/\tanpoly^*_{\sigma} $, which is isomorphic to $(\comp^{*})^n$, because $\sigma^{-1}\Sigma_v = \tanpoly_{\sigma,\real} = T_{v,\real}$. The local $k^{\text{th}}$-order thickening
$$\localmod{k}(\sigma)^{\dagger}:= \spec(\comp[\sigma^{-1}P_{v}]/q^{k+1}) \cong (\comp^{*})^n \times \spec(\comp[q]/q^{k+1})$$
is obtained by identifying $\sigma^{-1}P_v$ with $\tanpoly_{\sigma} \times \mathbb{N}$. Choosing a different vertex $v'$, we can use the parallel transport $T_{v} \cong T_{v'}$ from $v$ to $v'$ within $\reint(\sigma)$ and the difference $\varphi_v|_{\sigma} - \varphi_{v'}|_{\sigma}$ between the two affine functions to identify the monoids $\sigma^{-1}P_{v}\cong \sigma^{-1}P_{v'}$. We take
$$\sfbva{k}^*_{\mathrm{sf}}|_{\reint(\sigma)}:= \modmap_*\Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\sigma)^{\dagger}/\logsk{k}}\Big) \cong \modmap_*(\mathcal{O}_{\localmod{k}(\sigma)^{\dagger}}) \otimes_{\real} \bigwedge\nolimits^{-*} \tanpoly_{\sigma,\real}^*.$$
Next, we need to glue the sheaves $\sfbva{k}^*_{\mathrm{sf}}|_{\reint(\sigma)}$'s along neighborhoods of codimension one cells $\rho$'s. For each codimension one cell $\rho$, we fix a primitive normal $\check{d}_{\rho}$ to $\rho$ and label the two adjacent maximal cells $\sigma_+$ and $\sigma_-$ so that $\check{d}_{\rho}$ points into $\sigma_+$. There are two situations to consider. The simpler case is when $\tsing_e \cap \reint(\rho) = \emptyset$, where the monodromy is trivial. In this case, we have $V(\rho) = \spec(\comp[\rho^{-1}\Sigma_{v}])$, with the gluing $V(\sigma_{\pm}) \hookrightarrow V(\rho)$ as described below Definition \ref{def:open_gluing_data} using the open gluing data $s_{\rho\sigma_{\pm}}$.
We take the $k^{\text{th}}$-order thickening given by
$$\localmod{k}(\rho)^{\dagger}:= \spec(\comp[\rho^{-1}P_{v}]/q^{k+1})^{\dagger},$$
equipped with the divisorial log structure induced by $V(\rho)$. We extend the open gluing data
$$s_{\rho\sigma_{\pm}} \colon \tanpoly_{\sigma_{\pm}} \rightarrow \comp^{*}$$
to
$$s_{\rho\sigma_{\pm}} \colon \tanpoly_{\sigma_{\pm}} \oplus \inte \rightarrow \comp^{*}$$
so that $s_{\rho\sigma_{\pm}}(0,1) = 1$; such an extension acts as an automorphism of $\spec(\comp[\sigma^{-1}\Sigma_{v}])$. In this way we can extend the gluing $V(\sigma_{\pm}) \hookrightarrow V(\rho)$ to
$$\spec(\comp[\sigma_{\pm}^{-1}P_{v}]/q^{k+1}) \rightarrow \spec(\comp[\rho^{-1}P_{v}]/q^{k+1})$$
by twisting with the ring homomorphism induced by $z^{m} \mapsto s_{\rho\sigma_{\pm}}(m)^{-1}z^{m}$. On a sufficiently small neighborhood $\mathscr{W}_{\rho}$ of $\reint(\rho)$, we take
$$\sfbva{k}^*_{\mathrm{sf}}|_{\mathscr{W}_{\rho}}:= \modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\rho)^{\dagger}/\logsk{k}}\Big)\Big|_{\mathscr{W}_{\rho}}.$$
Choosing a different vertex $v'$, we may use parallel transport to identify the fans $\rho^{-1} \Sigma_{v} \cong \rho^{-1} \Sigma_{v'}$, and further use the difference $\varphi_v|_{\mathscr{W}_{\rho}} - \varphi_{v'}|_{\mathscr{W}_{\rho}}$ to identify the monoids $\rho^{-1}P_v \cong \rho^{-1}P_{v'}$. One can check that the sheaf $\sfbva{k}^*_{\mathrm{sf}}|_{\mathscr{W}_{\rho}}$ is well-defined. The more complicated case is when $\tsing_e \cap \reint(\rho) \neq \emptyset$, where the monodromy is non-trivial. We write $\reint(\rho) \setminus \tsing = \bigcup_{v} \reint(\rho)_v$, where $\reint(\rho)_v$ is the unique component which contains the vertex $v$ in its closure. We fix one $v$, the corresponding $\reint(\rho)_v$, and a sufficiently small open neighborhood $\mathscr{W}_{\rho,v}$ of $\reint(\rho)_v$. We assume that the neighborhood $\mathscr{W}_{\rho,v}$ intersects neither $\mathscr{W}_{\rho',v'}$ for any $(\rho',v') \neq (\rho,v)$ nor $\mathscr{W}_{\rho'}$ for any possible $\rho'$. Then we consider the scheme-theoretic embedding
$$V(\rho) = \spec(\comp[\rho^{-1}\Sigma_v]) \rightarrow \spec(\comp[\rho^{-1}P_v])$$
given by
$$ z^{m} \mapsto \begin{cases} z^{\bar{m}} & \text{if $m$ lies on the boundary of the cone $\rho^{-1}P_{v}$,}\\ 0 & \text{if $m$ lies in the interior of the cone $\rho^{-1}P_{v}$.} \end{cases} $$
We denote by $\sflocmod{k}(\rho)_v^{\dagger}$ the $k^{\text{th}}$-order thickening of $V(\rho)|_{\modmap^{-1}(\mathscr{W}_{\rho,v})}$ in $\spec(\comp[\rho^{-1}P_{v}])$ and equip it with the divisorial log structure which is log smooth over $\logsk{k}$ (note that it is {\em different} from the local model $\localmod{k}(\rho)^{\dagger}$ introduced earlier in \S \ref{sec:deformation_via_dgBV}, because the latter depends on the slab functions $f_{v,\rho}$, as we can see explicitly in \S \ref{subsubsec:explicit_gluing_away_from_codimension_2}, while the former does not).
We take $$\sfbva{k}^*_{\mathrm{sf}}|_{\mathscr{W}_{\rho,v}} := \bigwedge\nolimits^{-*} \Theta_{\sflocmod{k}(\rho)_v^{\dagger}/\logsk{k}}.$$ The gluing with nearby maximal cells $\sigma_{\pm}$ on the overlap $\reint(\sigma_{\pm}) \cap \mathscr{W}_{\rho,v}$ is given by parallel transporting through the vertex $v$ to relate the monoids $\sigma_{\pm}^{-1}P_v$ and $\rho^{-1}P_v$ constructed from $P_v$, and twisting the map $\spec(\comp[\sigma^{-1}_{\pm}P_v]) \rightarrow \spec(\comp[\rho^{-1}P_v])$ with the open gluing data $$ z^{m} \mapsto s_{\rho \sigma_{\pm}}^{-1}(m) z^{m}, $$ using previous liftings of $s_{\rho\sigma_{\pm}}$ to $\tanpoly_{\sigma_{\pm}}\oplus \inte$. We obtain a commutative diagram of holomorphic maps $$ \xymatrix@1{ V(\sigma_{\pm})|_{\mathscr{D}} \ar[r] \ar[d] & \localmod{k}(\sigma_{\pm})^{\dagger}|_{\mathscr{D}} \ar[d]\\ V(\rho)|_{\mathscr{D}} \ar[r] & \sflocmod{k}(\rho)^{\dagger}|_{\mathscr{D}} }, $$ where $\mathscr{D} =\modmap^{-1}( \mathscr{W}_{\rho,v} \cap \reint(\sigma_{\pm}))$ and the vertical arrow on the right hand side respects the log structures. The induced isomorphism $$\modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\sflocmod{k}(\rho)_v^{\dagger}/\logsk{k}}\Big) \cong \modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\sigma_{\pm})_v^{\dagger}/\logsk{k}}\Big)$$ of sheaves on the overlap $\mathscr{W}_{\rho,v} \cap \reint(\sigma_{\pm})$ then gives the desired gluing for defining the sheaf $\sfbva{k}^*_{\mathrm{sf}}$ on $W_0$. Note that the cocycle condition is trivial here as there is no triple intersection of any three open subsets from $\reint(\sigma)$, $\mathscr{W}_\rho$ and $\mathscr{W}_{\rho,v}$. Similarly, we can define the sheaf $\sftbva{k}{}^*_{\mathrm{sf}}$ of semi-flat log de Rham forms, together with a relative volume form $\volf{k}_0 \in \sftbva{k}{\parallel}^n_{\mathrm{sf}}(W_0)$ obtained from gluing the local $\mu_v$'s specified by the element $\mu$ as described in the beginning of \S \ref{subsec:semi-flat_dgBV}. It would be useful to write down elements of the sheaf $\sfbva{k}^*_{\mathrm{sf}}$ more explicitly. For instance, fixing a point $x \in \reint(\rho)_v$, we may write \begin{equation}\label{eqn:explicit_description_of_semi_flat_bva} \sfbva{k}^*_{\mathrm{sf},x}= \modmap_*(\cu{O}_{\sflocmod{k}(\rho)_v})_x \otimes_{\real} \bigwedge\nolimits^{-*} T^*_{v,\real}, \end{equation} and use $\partial_n$ to stand for the semi-flat holomorphic vector field associated to an element $n \in T^*_{v,\real}$. Note that analytic continuation around the singular locus $\tsing_e \cap \reint(\rho)$ acts non-trivially on the semi-flat sheaf $\sfbva{k}^*_{\mathrm{sf}}$ due to the presence of non-trivial monodromy of the affine structure. Below is a simple example. \begin{example}\label{eg:monodromy_action_on_semi_flat_sheaf} We consider the local affine charts which appeared in Example \ref{eg:K3_example}, equipped with a strictly convex piecewise linear affine function $\varphi$ on $\Sigma_{\rho}$ whose change of slopes is $1$. Let us study the analytic continuation of a local section along the loop $\gamma$ which starts at a point $b_+$, as shown in Figure \ref{fig:affine_chart_2}. \begin{figure}[h!] \includegraphics[scale=0.5]{affine_chart_2.png} \caption{Analytic continuation along $\gamma$}\label{fig:affine_chart_2} \end{figure} First, we can identify both $\rho^{-1}P_{v_+}$ and $\rho^{-1}P_{v_-}$ with the monoid in the cone $P = \{ (x,y,z) \ | \ z\geq \varphi(x) \}$ via parallel transport through $\sigma_+$. 
Writing $u = z^{(1,0,1)}$, $v= z^{(-1,0,0)}$, $w = z^{(0,-1,0)}$ and $q = z^{(0,0,1)}$, we have $\comp[P] \cong \comp[u,v,w^{\pm}, q] / (uv-q)$. Now the analytic continuation of $u \in \modmap_*(\cu{O}_{\sflocmod{k}(\rho)_{v_+}})_{b_+}$ along $\gamma$ (going from the chart $U_{\mathrm{II}}$ to the chart $U_{\mathrm{I}}$ and then back to $U_{\mathrm{II}}$) is given by the following sequence of elements:
$$
\xymatrix@1{
u \ar[r] & s_{\rho\sigma_+}((1,0))^{-1}u \ar[r] & uw \ar[r] & s_{\rho\sigma_-}((1,0))^{-1} qv^{-1}w \ar[r] & wu, }
$$
via the following sequence of maps between the stalks over $b_+, c_+ \in U_{\mathrm{II}}$ and $b_-, c_- \in U_{\mathrm{I}}$:
$$
\xymatrix@1{ \modmap_*(\cu{O}_{\sflocmod{k}(\rho)_{v_+}})_{b_+} \ar[r] & \modmap_*(\cu{O}_{\localmod{k}(\sigma_{+})^{\dagger}})_{c_+} \ar[r] & \modmap_*(\cu{O}_{\sflocmod{k}(\rho)_{v_-}})_{b_-} \ar[r] & \modmap_*(\cu{O}_{\localmod{k}(\sigma_{-})^{\dagger}})_{c_-} \ar[r] & \modmap_*(\cu{O}_{\sflocmod{k}(\rho)_{v_+}})_{b_+}}.
$$
So we see that the analytic continuation along $\gamma$ maps $u$ to $wu$.
\end{example}
The sheaf $\sfbva{k}^*_{\mathrm{sf}}$ is equipped with the BV algebra structure inherited from $\spec(\comp[\rho^{-1}P_v])^{\dagger}$ (as described in the beginning of \S \ref{subsec:semi-flat_dgBV}), which agrees with the one induced from the volume form $\volf{k}_0$. This allows us to define the \emph{sheaf of semi-flat tropical vertex Lie algebras} as
\begin{equation}\label{eqn:tropical_vertex_lie_algebra}
\sftvbva{k}:= \mathrm{Ker}(\bvd{})|_{\sfbva{k}^{-1}_{\mathrm{sf}}}[-1].
\end{equation}
\begin{remark}
The sheaf can actually be extended over the non-essential singular locus $\tsing \setminus \tsing_e$ because the monodromy around that locus acts trivially, but this is not necessary for our later discussion.
\end{remark}
\subsubsection{Explicit gluing away from codimension $2$}\label{subsubsec:explicit_gluing_away_from_codimension_2}
When defining the sheaves $\bva{k}^{*}_{\alpha}$'s in \S \ref{subsubsec:local_deformation_data}, the open subset $W_{\alpha}$ is taken to be a sufficiently small neighborhood of $x \in \reint(\tau)$ for some $\tau \in \pdecomp$. In fact, we can choose one of these open subsets to be the large open dense subset $W_0$. In this subsection, we construct the sheaves $\bva{k}^*_0$ and $\tbva{k}{}^*_{0}$ on $W_0$ using an explicit gluing of the underlying complex analytic space. Over $\reint(\sigma)$ for $\sigma \in \pdecomp^{[n]}$ or over $\mathscr{W}_{\rho}$ for $\rho \in \pdecomp^{[n-1]}$ with $\tsing_e \cap \reint(\rho) = \emptyset$, we have $\bva{k}^*_0 = \sfbva{k}^*_{\mathrm{sf}}$, which was just constructed in \S \ref{subsubsec:semi-flat}. So it remains to consider $\rho \in \pdecomp^{[n-1]}$ such that $\tsing_e \cap \reint(\rho) \neq \emptyset$. The log structure of $V(\rho)^{\dagger}$ is prescribed by the slab functions $f_{v,\rho} \in \Gamma(\cu{O}_{V_{\rho}(v)})$, which restrict to the functions $s^{-1}_{v\rho}(f_{v,\rho})$ on the torus $\spec(\comp[\tanpoly_{\rho}]) \cong (\comp^*)^{n-1}$. Each of these can be pulled back via the natural projection $\spec(\comp[\rho^{-1}\Sigma_{v}]) \rightarrow \spec(\comp[\tanpoly_{\rho}])$ to give a function on $\spec(\comp[\rho^{-1}\Sigma_{v}])$.
In this case, we may fix the log chart $V(\rho)^{\dagger}|_{\modmap^{-1}(\mathscr{W}_{\rho,v})} \rightarrow \spec(\comp[\rho^{-1}P_{v}])^{\dagger}$ given by the equation $$ z^m \mapsto \begin{dcases*} z^{\bar{m}} & \text{if $\langle \check{d}_{\rho}, \bar{m} \rangle \geq 0$ }, \\ z^{\bar{m}} \big( s^{-1}_{v\rho}(f_{v,\rho})\big)^{\langle \check{d}_{\rho} , \bar{m} \rangle } & \text{if $\langle \check{d}_{\rho}, \bar{m} \rangle \leq 0$ }. \end{dcases*} $$ Write $\localmod{k}(\rho)_{v}^{\dagger}$ for the corresponding $k^{\text{th}}$-order thickening in $\spec(\comp[\rho^{-1}P_{v}])$, which gives a local model for smoothing $V(\rho)|_{\modmap^{-1}(\mathscr{W}_{\rho,v})}$ (as in \S \ref{sec:deformation_via_dgBV}). We take $$\bva{k}_0^*|_{\mathscr{W}_{\rho,v}} :=\modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\rho)_v^{\dagger}/\logsk{k}}\Big).$$ We have to specify the gluing on the overlap $\mathscr{W}_{\rho,v} \cap \reint(\sigma_\pm)$ with the adjacent maximal cells $\sigma_{\pm}$. This is given by first using parallel transport through $v$ to relate the monoids $\sigma_{\pm}^{-1}P_v$ and $\rho^{-1}P_v$ as in the semi-flat case, and then an embedding $\spec(\comp[\sigma_{\pm}^{-1}P_{v}/q^{k+1}]) \rightarrow \spec(\comp[\rho^{-1}P_{v}/q^{k+1}])$ via the formula \begin{equation}\label{eqn:correction_by_wall_crossing} z^m \mapsto \begin{dcases*} s_{\rho\sigma_+}^{-1}(m) z^{m} & \text{for $\sigma_+$ }, \\ s_{\rho\sigma_-}^{-1}(m) z^{m} \big( s^{-1}_{v\sigma_-}(f_{v,\rho})\big)^{\langle \check{d}_{\rho} , \bar{m} \rangle } & \text{for $\sigma_-$ }, \end{dcases*} \end{equation} where $s_{v\sigma_\pm}$, $s_{\rho\sigma_\pm}$ are treated as maps $\tanpoly_{\sigma_{\pm}} \oplus \inte \rightarrow \comp^*$ as before. We observe that there is a commutative diagram of log morphisms $$ \xymatrix@1{ V(\sigma_{\pm})^{\dagger}|_{\mathscr{D}} \ar[r] \ar[d] & \localmod{k}(\sigma_{\pm})^{\dagger}|_{\mathscr{D}} \ar[d]\\ V(\rho)^{\dagger}|_{\mathscr{D}} \ar[r] & \localmod{k}(\rho)^{\dagger}|_{\mathscr{D}} }, $$ where $\mathscr{D} = \modmap^{-1}(\mathscr{W}_{\rho,v} \cap \reint(\sigma_{\pm}))$. The induced isomorphism $$\modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\rho)_v^{\dagger}/\logsk{k}}\Big) \cong \modmap_* \Big(\bigwedge\nolimits^{-*} \Theta_{\localmod{k}(\sigma_{\pm})_v^{\dagger}/\logsk{k}}\Big)$$ of sheaves on the overlap $\mathscr{D}$ then provides the gluing for defining the sheaf $\bva{k}^*_0$ on $W_0$. Hence, we obtain a sheaf $\bva{k}^*_0$ of BV algebras, where the BV structure is inherited from the local models $\spec(\comp[\sigma^{-1}P_v])$ and $\spec(\comp[\rho^{-1}P_v])$. Similarly, we can define the sheaf $\tbva{k}{}^*_0$ of log de Rham forms over $W_0$, together with a relative volume form $\volf{k}_0 \in \tbva{k}{\parallel}^n_0(W_0)$ by gluing the local $\mu_v$'s. \subsubsection{Relation between the semi-flat dgBV algebra and the log structure}\label{subsubsec:relating_semi_flat_and_actual_construction} The difference between $\bva{k}^*_0$ and $\sfbva{k}^*_{\mathrm{sf}}$ is that analytic continuation along a path $\gamma$ in $\reint(\sigma_{\pm}) \cup \reint(\rho)$, where $\rho = \sigma_+ \cap \sigma_-$, induces a non-trivial action on $\sfbva{k}^*_{\mathrm{sf}}$ (the semi-flat sheaf) but not on $\bva{k}^*_0$ (the corrected sheaf). 
This is because, near a singular point $p \in \Gamma$ of the affine structure on $B$, there is another local model $\bva{k}^*_{\alpha}$ for $p \in W_{\alpha}$ constructed in \ref{subsubsec:local_deformation_data}, where restrictions of sections are invariant under analytic continuation (cf. Example \ref{eg:monodromy_action_on_semi_flat_sheaf}). This is in line with the philosophy that monodromy is being cancelled by the slab functions $f_{v,\rho}$'s (which we also call \emph{initial wall-crossing factors}). In view of this, we should be able to relate the sheaves $\bva{k}^*_0$ and $\sfbva{k}^*_{\mathrm{sf}}$ by adding back the initial wall-crossing factors $f_{v,\rho}$'s. Recall that the slab function $f_{v,\rho}$ is a function on $V_\rho(v) \subset \centerfiber{0}_{\rho}$, whose zero locus is $Z^{\rho}_1 \cap V_{\rho}(v)$ for $\rho$ such that $\tsing_e \cap \reint(\rho) \neq \emptyset$. Also recall that, for $\rho$ containing $v$, $\rho_{v}$ is the unique contractible component in $\rho \cap \mathscr{C}^{-1}(B\setminus \tsing)$ such that $v \in \rho_{v}$, as defined in Assumption \ref{assum:existence_of_contraction}. Note that the inverse image $\mu^{-1}(\rho_v) \subset V_{\rho}(v)$ under the generalized moment map $\mu$ is also a contractible open subset. It contains the $0$-dimensional stratum $x_v$ in $V_\rho(v)$ that corresponds to $v$. Since $f_{v,\rho} (x_v) =1$, we can define $\log(f_{v,\rho})$ in a small neighborhood of $x_v$, and it can further be extended to the whole of $\mu^{-1}(\rho_v) \subset V_{\rho}(v)$ because this subset is contractible. Restricting to the open dense torus orbit $\spec(\comp[\tanpoly_{\rho}]) \cap \mu^{-1}(\rho_v)$, we obtain $\log(s_{v\rho}^{-1}(f_{v,\rho}))$, which can in addition be lifted to a section in $\sfbva{k}^0_{\mathrm{sf}}(\mathscr{W}_{\rho,v}) = \Gamma(\mathscr{W}_{\rho,v}, \cu{O}_{\sflocmod{k}(\rho)_{v}})$ for a sufficiently small $\mathscr{W}_{\rho,v}$. Now we resolve the sheaves $\bva{k}^*_0$ and $\sfbva{k}^*_{\mathrm{sf}}$ by the complex $\tform^*$ introduced in \S \ref{sec:asymptotic_support}. We let $$\sfpolyv{k}^{*,*}_{\mathrm{sf}} := \tform^*|_{W_0} \otimes_{\real} \sfbva{k}^*_{\mathrm{sf}}$$ and equip it with $\pdb_{\circ}=d \otimes 1$, $\bvd{}$ and $\wedge$, making it a sheaf of dgBV algebras. Over the open subset $\mathscr{W}_{\rho,v}$, using the explicit description of $\sfbva{k}^*_{\mathrm{sf}}|_{\mathscr{W}_{\rho,v}}$, we consider the element \begin{equation}\label{eqn:defining_wall} \phi_{v,\rho}:= -\delta_{v,\rho} \otimes \log(s_{v\rho}^{-1}(f_{v,\rho}))\partial_{\check{d}_\rho} \in \sfpolyv{k}^{-1,1}_{\mathrm{sf}}(\mathscr{W}_{\rho,v}), \end{equation} where $\delta_{v,\rho}$ is any $1$-form with asymptotic support in $\reint(\rho)_v$ and whose integral over any curve transversal to $\reint(\rho)_v$ going from $\sigma_-$ to $\sigma_+$ is asymptotically $1$; such a 1-form can be constructed using a family of bump functions in the normal direction of $\reint(\rho)_v$ similar to Example \ref{example:bump_form} (see also \cite[\S 4]{kwchan-leung-ma}). We can further extend the section $\phi_{v,\rho}$ to the whole $W_0$ by setting it to be $0$ outside a small neighborhood of $\reint(\rho)_v$ in $\mathscr{W}_{\rho,v}$. 
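For concreteness, one possible shape of such a $1$-form is the following sketch (the affine coordinates and the cut-off function $\chi$ below are auxiliary choices made here for illustration, in the spirit of Example \ref{example:bump_form} and \cite[\S 4]{kwchan-leung-ma}): choosing affine coordinates $(x_1,\dots,x_{n-1},x_n)$ near $\reint(\rho)_v$ in which $\rho$ is locally given by $x_n = 0$ with $x_n > 0$ on $\sigma_+$, one may take
$$ \delta_{v,\rho} = \chi(x_1,\dots,x_{n-1}) \left( \frac{1}{\hp \pi} \right)^{\half} e^{-\frac{x_n^2}{\hp}}\, dx_n, $$
where $\chi$ is a smooth cut-off function supported in $\mathscr{W}_{\rho,v}$ with $\chi \equiv 1$ near the relevant part of $\reint(\rho)_v$. For a curve going from $\sigma_-$ to $\sigma_+$ along which $x_n$ increases from $-a$ to $b$ and which stays in the region where $\chi \equiv 1$, a change of variables gives
$$ \int \delta_{v,\rho} = \pi^{-1/2} \int_{-a/\sqrt{\hp}}^{b/\sqrt{\hp}} e^{-s^2}\, ds = 1 + O(e^{-c/\hp}) $$
for some $c > 0$, so the integral is indeed asymptotically $1$.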
\begin{definition}\label{def:sheaf_of_sf_polyvector} The \emph{sheaf of semi-flat polyvector fields} is defined as $$\sfpolyv{k}^{*,*}_{\mathrm{sf}} := \tform^*|_{W_0} \otimes_{\real} \sfbva{k}^*_{\mathrm{sf}},$$ which is equipped with a BV operator $\bvd{}$, a wedge product $\wedge$ (and hence a Lie bracket $[\cdot,\cdot]$) and the operator $$ \pdb_{\mathrm{sf}} := \pdb_{\circ} + [\phi_{\mathrm{in}},\cdot] = \pdb_{\circ} + \sum_{v,\rho} [\phi_{v,\rho}, \cdot], $$ where $\pdb_{\circ} = d\otimes 1$ and $\phi_{\mathrm{in}} := \sum_{v,\rho} \phi_{v,\rho}$. We also define the \emph{sheaf of semi-flat log de Rham forms} as $$\sftotaldr{k}{}^{*,*}_{\mathrm{sf}}:= \tform^*|_{W_0} \otimes_{\real} \sftbva{k}{}^*_{\mathrm{sf}},$$ equipped with $\dpartial{}$, $\wedge$, $$ \pdb_{\mathrm{sf}} := \pdb_{\circ} + \sum_{v,\rho} \cu{L}_{\phi_{v,\rho}}, $$ and a contraction action $\mathbin{\lrcorner}$ by elements in $\sfpolyv{k}^{*,*}_{\mathrm{sf}}$. \end{definition} It can be easily checked that $\pdb_{\mathrm{sf}}^2 = [\pdb_{\mathrm{sf}},\bvd{}] = 0$, so we have a sheaf of dgBV algebras. On the other hand, we write $$\polyv{k}^{*,*}_0 := \tform^*|_{W_0} \otimes_{\real} \bva{k}^*_0,$$ which is equipped with the operators $\pdb_0 = d\otimes 1$, $\bvd{}$ and $\wedge$. The following important lemma is a comparison between the two sheaves of dgBV algebras. \begin{lemma}\label{lem:comparing_sheaf_of_dgbv} There exists a set of compatible isomorphisms $$ \varPhi \colon \polyv{k}^{*,*}_0 \rightarrow \sfpolyv{k}^{*,*}_{\mathrm{sf}},\ k \in \mathbb{N} $$ of sheaves of dgBV algebras such that $\varPhi\circ \pdb_0 = \pdb_{\mathrm{sf}} \circ \varPhi$ for each $k \in \mathbb{N}$. There also exists a set of compatible isomorphisms $$ \varPhi \colon \totaldr{k}{}^{*,*}_0 \rightarrow \sftotaldr{k}{}^{*,*}_{\mathrm{sf}},\ k \in \mathbb{N} $$ of sheaves of dgas preserving the contraction action $\mathbin{\lrcorner}$ and such that $\varPhi\circ \pdb_0 = \pdb_{\mathrm{sf}} \circ \varPhi$ for each $k \in \mathbb{N}$. Furthermore, the relative volume form $\volf{k}_0$ is identified via $\varPhi$. \end{lemma} \begin{proof} Outside those $\reint(\rho)$'s such that $\tsing_e \cap \reint(\rho) \neq \emptyset$, the two sheaves are identical. So we will take a component $\reint(\rho)_v$ of $\reint(\rho) \setminus \tsing$ and compare the sheaves on a neighborhood $\mathscr{W}_{\rho,v}$. We fix a point $x \in \reint(\rho)_v$ and describe the map $\Phi$ at the stalks of the two sheaves. First, the preimage $K:=\modmap^{-1}(x) \cong \tanpoly_{\rho,\real}^*/\tanpoly_{\rho}^*$ can be identified as a real $(n-1)$-dimensional torus in $\spec(\comp[\tanpoly_{\rho}]) \cong (\comp^*)^{n-1}$. We have an identification $\rho^{-1}\Sigma_v \cong \Sigma_{\rho} \times \tanpoly_{\rho}$, and we choose the unique primitive element $m_{\rho} \in \Sigma_{\rho}$ in the ray pointing into $\sigma_+$. As analytic spaces, we write $$\spec(\comp[\Sigma_\rho]) = \{ uv=0\} \subset \comp^2,$$ where $u = z^{m_\rho}$ and $v = z^{-m_{\rho}}$, and $$\spec(\comp[\rho^{-1}\Sigma_v]) = (\comp^*)^{n-1} \times \{uv = 0 \}.$$ The germ $\cu{O}_{V(\rho),K}$ of analytic functions can be written as $$ \cu{O}_{V(\rho),K} = \left\{a_0 + \sum_{i=1}^{\infty} a_i u^i + \sum_{i=-1}^{-\infty}a_i v^{-i} \ \Big| \ a_i \in \cu{O}_{(\comp^*)^{n-1}}(U) \ \text{for neigh. $U\supset K$}, \ \sup_{i\neq 0} \frac{\log|a_i|}{|i|} < \infty \right\}. 
$$
Using the embedding $V(\rho)|_{\modmap^{-1}(\mathscr{W}_{\rho,v})} \rightarrow \localmod{k}(\rho)_v^{\dagger}$ in \S \ref{subsubsec:explicit_gluing_away_from_codimension_2}, we can write
\begin{align*}
&\bva{k}^0_{0,x}=\cu{O}_{\localmod{k}(\rho)_v,K}=\\
& \left\{\sum_{j=0}^k (a_{0,j} + \sum_{i=1}^{\infty} a_{i,j} u^i + \sum_{i=-1}^{-\infty}a_{i,j} v^{-i}) q^j \Big| \ a_{i,j} \in \cu{O}_{(\comp^*)^{n-1}}(U) \ \text{for neigh. $U\supset K$}, \ \sup_{i\neq 0} \frac{\log|a_{i,j}|}{|i|} < \infty \right\},
\end{align*}
with the relation $uv = q^l s_{v\rho}^{-1}(f_{v,\rho})$ (here $l$ is the change of slopes for $\varphi_v$ across $\rho$). For the elements $(m_{\rho},\varphi_v(m_{\rho}))$ and $(-m_{\rho},\varphi_v(-m_{\rho}))$ in $\rho^{-1}P_v$, we have the identities (we omit the dependence on $k$ when we write elements in the stalks of sheaves):
\begin{align*}
z^{(m_{\rho},\varphi_v(m_{\rho}))} & = u, \\
z^{(-m_{\rho},\varphi_v(-m_{\rho}))} & = s_{v\rho}^{-1}(f_{v,\rho})^{-1} v,
\end{align*}
describing the embedding $\localmod{k}(\rho)_v^{\dagger} \hookrightarrow \spec(\comp[\rho^{-1}P_v])^{\dagger}$. For polyvector fields, we can write
$$\bva{k}^*_{0,x} = \bva{k}^0_{0,x} \otimes_{\real} \bigwedge^{-*} T_{v,\real}^*.$$
The BV operator is described by the relations $\bvd{}(\partial_n) = 0$, $[\partial_{n_1},\partial_{n_2}] =0$, and
\begin{equation}\label{eqn:BV-relations}
\begin{cases}
[z^m,\partial_n] = \bvd{}(z^{m}\partial_n) = \langle m,n\rangle z^m & \text{for $m$ with $\bar{m} \in \tanpoly_{\rho}, \ n \in T^*_{v,\real}$;}\\
[u,\partial_n] =\bvd{} (u\partial_n) = \langle m_{\rho},n\rangle u & \text{for $n \in T^*_{v,\real}$};\\
[v,\partial_n] =\bvd{}(v\partial_n) = \langle -m_{\rho},n \rangle v + \partial_n(\log s_{v\rho}^{-1}(f_{v,\rho})) v & \text{for $n \in T^*_{v,\real}$}.
\end{cases}
\end{equation}
Similarly, we can write down the stalk $\sfbva{k}^*_{\mathrm{sf},x} = \sfbva{k}^0_{\mathrm{sf},x} \otimes_{\real} \bigwedge^{-*} T_{v,\real}^*$. As a module over $\cu{O}_{(\comp^*)^{n-1},K}\otimes_{\comp} \comp[q]/(q^{k+1})$, we have $\sfbva{k}^*_{\mathrm{sf},x} = \bva{k}^*_{0,x}$; the ring structure on $\sfbva{k}^0_{\mathrm{sf},x}$ differs from that on $\bva{k}^0_{0,x}$ and is determined by the relation $uv = q^l$. The embedding $\sflocmod{k}(\rho)_v^{\dagger} \hookrightarrow \spec(\comp[\rho^{-1}P_v])^{\dagger}$ is given by
\begin{align*}
z^{(m_{\rho},\varphi_v(m_{\rho}))} & = u, \\
z^{(-m_{\rho},\varphi_v(-m_{\rho}))} & = v.
\end{align*}
The formulae for the BV operator are the same as those for $\bva{k}^*_{0,x}$, except that for the last equation in \eqref{eqn:BV-relations}, we have $[v,\partial_n] = \bvd{}(v\partial_n) = \langle -m_\rho,n\rangle v$ instead. We apply the argument in \cite[\S 4]{kwchan-leung-ma}, where we considered a scattering diagram consisting of only one wall, to relate these two sheaves. We can find a set of compatible elements $\theta = (\prescript{k}{}{\theta})_{k\in \mathbb{N}}$, where $\prescript{k}{}{\theta} \in \sfpolyv{k}^{-1,0}_{\mathrm{sf}}(\mathscr{W}_{\rho,v})$ for $k\in \mathbb{N}$, such that $e^{\theta}*\pdb_{\circ} = \pdb_{\mathrm{sf}}$ and $\bvd{}(\theta) = 0$.
Explicitly, $\theta$ is a step-function-like section of the form $$ \theta = \begin{dcases} \log(s_{v\rho}^{-1}(f_{v,\rho})) \partial_{\check{d}_{\rho}} & \text{on $\reint(\sigma_+) \cap \mathscr{W}_{\rho,v}$,}\\ 0 & \text{on $\reint(\sigma_-) \cap \mathscr{W}_{\rho,v}$.} \end{dcases} $$ For each $k\in \mathbb{N}$, we also define $\theta_0:= \log(s_{v\rho}^{-1}(f_{v,\rho})) \partial_{\check{d}_{\rho}}$, as an element in $\sfbva{k}^{-1}_{\mathrm{sf}}(\mathscr{W}_{\rho,v})$. Now we define the map $\varPhi_{x} \colon \polyv{k}^{*,*}_{0,x} \rightarrow \sfpolyv{k}_{\mathrm{sf},x}^{*,*}$ at the stalks by writing $$\polyv{k}^{*,*}_{0,x} = \tform_x^* \otimes_{\real} \bva{k}^0_{0,x} \otimes_{\real} \bigwedge^{-*} T_{v,\real}^*,$$ (and similarly for $\sfpolyv{k}^{*,*}_{\mathrm{sf},x}$), and extending the formulae \begin{equation*} \begin{cases} \varPhi_x(\alpha) = \alpha & \text{for $\alpha \in \tform_x$},\\ \varPhi_x(f) = e^{[\theta,\cdot]}f = f & \text{for $f \in \cu{O}_{(\comp^*)^{n-1},K}$}, \\ \varPhi_x(u) = e^{[\theta-\theta_0,\cdot]} u, & \\ \varPhi_x(v) = e^{[\theta,\cdot]} v ,& \\ \varPhi_x(\partial_n) = e^{[\theta-\theta_0,\cdot]} \partial_n & \text{for $n \in T_{v,\real}^*$} \end{cases} \end{equation*} through the tensor product $\otimes_{\real}$ and skew-symmetrically in $\partial_n$'s. To see that $\varPhi$ is the desired isomorphism, we check all the relations by computations: \begin{itemize} \item Since $e^{[\theta,\cdot]} \circ \pdb_{\circ} \circ e^{-[\theta,\cdot]} = \pdb_{\mathrm{sf}}$, we have $$ \pdb_{\mathrm{sf}} \varPhi_x(u) = e^{[\theta,\cdot]} \pdb_{\circ} ( e^{-[\theta_0,\cdot]} u) = 0; $$ similarly, we have $\pdb_{\mathrm{sf}}( \varPhi_x(v) ) = 0 = \pdb_{\mathrm{sf}}(\varPhi_x (\partial_n))$. Hence, we have $\varPhi_x \circ \pdb_0 = \pdb_{\mathrm{sf}} \circ \varPhi_x$. \item We have $e^{-[\theta_0,\cdot]} u = s^{-1}_{v\rho}(f_{v,\rho}) u$ and $$\varPhi_x(u) \varPhi_x(v) = e^{[\theta,\cdot]} (s^{-1}_{v\rho}(f_{v,\rho}) u) e^{[\theta,\cdot]}v =s^{-1}_{v\rho}(f_{v,\rho}) e^{[\theta,\cdot]} (uv) = q^l s^{-1}_{v\rho}(f_{v,\rho}) = \varPhi_x(uv ), $$ i.e. the map $\varPhi_x$ preserves the product structure. \item From the fact that $\bvd{}(\theta) = 0 = \bvd{}(\theta_0)$, we see that $e^{[\theta-\theta_0,\cdot]}$ commutes with $\bvd{}$, and hence $\bvd{}(\varPhi_x(\partial_n)) = e^{[\theta-\theta_0,\cdot]} \bvd{}(\partial_n)=0$. We also have $[\varPhi_x(\partial_{n_1}),\varPhi_x(\partial_{n_2})] = e^{[\theta-\theta_0,\cdot]} [\partial_{n_1},\partial_{n_2}] = 0$. \item Again from $\bvd{}(\theta) = 0 = \bvd{}(\theta_0)$, we have \begin{align*} \bvd{}(\varPhi_x(u) \varPhi_x(\partial_n)) & = \bvd{}(e^{[\theta-\theta_0,\cdot]} (u\partial_n)) =e^{[\theta-\theta_0,\cdot]} \left( \bvd{}(u\partial_n) \right) \\ & = \langle m_{\rho} , n \rangle e^{[\theta-\theta_0,\cdot]}(u) = \langle m_{\rho} , n \rangle \varPhi_x(u) = \varPhi_x(\bvd{}(u\partial_n)). 
\end{align*} \item Finally, we have \begin{align*} \bvd{}(\varPhi_x(v) \varPhi_x(\partial_n)) & = \bvd{}\big(e^{[\theta-\theta_0,\cdot]} ( (e^{[\theta_0,\cdot]}v) \partial_n)\big) =e^{[\theta-\theta_0,\cdot]} \big( \bvd{}(s_{v\rho}^{-1}(f_{v,\rho}) v \partial_n) \big)\\ & = e^{[\theta-\theta_0,\cdot]} \big(\langle -m_{\rho}, n \rangle s^{-1}_{v\rho}(f_{v,\rho}) v + \partial_n(s^{-1}_{v\rho}(f_{v,\rho})) v \big) \\ & = \langle -m_{\rho}, n \rangle (e^{[\theta,\cdot]}v) + \partial_n \big(\log s^{-1}_{v\rho}(f_{v,\rho}) \big) (e^{[\theta,\cdot]}v)\\ & = \langle -m_{\rho}, n \rangle \varPhi_x(v) + \partial_n \big(\log s^{-1}_{v\rho}(f_{v,\rho}) \big) \varPhi_x(v)\\ & = \varPhi_x(\bvd{}(v\partial_n)). \end{align*} \end{itemize} We conclude that $\varPhi_x \colon \polyv{k}^{*,*}_{0,x} \rightarrow \sfpolyv{k}_{\mathrm{sf},x}^{*,*}$ is an isomorphism of dgBV algebras. We need to check that the map $\varPhi_x$ agrees with the isomorphism $\polyv{k}^{*,*}_{0}|_{\mathscr{C}} \rightarrow \sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{C}}$ induced simply by the identity $\bva{k}^*_0|_{\mathscr{C}} \cong \sfbva{k}^*_{\mathrm{sf}}|_{\mathscr{C}}$, where $\mathscr{C} = W_0 \setminus \bigcup_{\tsing_e \cap \reint(\rho) \neq \emptyset} \reint(\rho)$. For this purpose, we consider two nearby maximal cells $\sigma_{\pm}$ such that $\sigma_+ \cap \sigma_- = \rho$. We have $\localmod{k}(\sigma_{\pm}) = \spec(\comp[\sigma^{-1}_{\pm} P_v]/q^{k+1})$, and the gluing of $\bva{k}^*_0$ over $\mathscr{W}_{\rho,v} \cap \sigma_+$ is given by parallel transporting through $v$, and then by the formulae \begin{equation}\label{eqn:PV-gluing} \begin{cases} z^{m} \mapsto s^{-1}_{\rho\sigma_{+}}(m) z^m & \text{for $m \in \tanpoly_{\rho}$},\\ u \mapsto s^{-1}_{\rho\sigma_+}(m_{\rho}) z^{m_{\rho}}, & \\ v \mapsto q^{l} s^{-1}_{v\sigma_+}(f_{v,\rho}) s^{-1}_{\rho\sigma_+}(-m_{\rho}) z^{-m_{\rho}}. \end{cases} \end{equation} The only difference for gluing of $\sfbva{k}^*_{\mathrm{sf}}$ is the last equation in \eqref{eqn:PV-gluing}, which is now replaced by the formula $v \mapsto q^{l} s^{-1}_{\rho\sigma_+}(-m_{\rho}) z^{-m_{\rho}}$. Since we have $$ \varPhi_x(v) = \begin{dcases} s_{v\rho}^{-1}(f_{v,\rho}) v & \text{on $U_x \cap \reint(\sigma_+)$}, \\ v & \text{on $U_x \cap \reint(\sigma_-)$} \end{dcases} $$ on a sufficiently small neighborhood $U_x$ of $x$, we see that $\varPhi_x(v) \mapsto q^{l} s^{-1}_{v\sigma_+}(f_{v,\rho}) s^{-1}_{\rho\sigma_+}(-m_{\rho}) z^{-m_{\rho}}$ under the gluing map of $\sfbva{k}^*_{\mathrm{sf}}$ on $U_x \cap \reint(\sigma_+)$. This shows the compatibility of $\varPhi_x$ with the gluing of $\bva{k}^*_0$ and $\sfbva{k}^*_{\mathrm{sf}}$ over $U_x \cap \reint(\sigma_+)$. A similar argument applies for $U_x \cap \reint(\sigma_-)$. The proof for $\varPhi\colon \totaldr{k}{}^{*,*}_0 \rightarrow \sftotaldr{k}{}^{*,*}_{\mathrm{sf}}$ is similar and will be omitted. The volume form is preserved under $\varPhi$ because we have $\bvd{}(\theta) = 0 = \bvd{}(\theta_0)$. This completes the proof of the lemma. \end{proof} \subsubsection{A global sheaf of dgLas from gluing of the semi-flat sheaves}\label{subsubsec:gluing_using_semi_flat} We shall apply the procedure described in \S \ref{subsubsec:gluing_construction} to the semi-flat sheaves to glue a global sheaf of dgLas. 
First of all, we choose an open cover $\{W_{\alpha}\}_{\alpha \in \mathscr{I}}$ satisfying the Condition \ref{cond:good_cover_of_B}, together with a decomposition $\mathscr{I} = \mathscr{I}_1 \sqcup \mathscr{I}_2$ such that $\mathcal{W}_1 = \{W_{\alpha}\}_{\alpha \in \mathscr{I}_1}$ is a cover of the semi-flat part $W_0$, and $\mathcal{W}_2 = \{W_{\alpha}\}_{\alpha \in \mathscr{I}_2}$ is a cover of a neighborhood of $\big( \bigcup_{\tau \in \pdecomp^{[n-2]}}\tau \big) \cup \big( \bigcup_{\rho \cap \tsing_e \neq \emptyset} \tsing \cap \reint(\rho) \big)$. For each $W_{\alpha}$, we have a compatible set of local sheaves $\bva{k}^*_{\alpha}$ of BV algebras, local sheaves $\tbva{k}{}^*_{\alpha}$ of dgas, and relative volume elements $\volf{k}_{\alpha}$, $k \in \mathbb{N}$ (as in \S \ref{subsubsec:local_deformation_data}). We can further demand that, over the semi-flat part $W_0$, we have $\bva{k}^*_{\alpha} = \bva{k}^*_0|_{W_{\alpha}}$, $\tbva{k}{}^*_{\alpha} = \tbva{k}{}^*_0|_{W_{\alpha}}$ and $\volf{k}_{\alpha} = \volf{k}_0 |_{W_{\alpha}}$, and hence $\polyv{k}^{*,*}_{\alpha} = \polyv{k}^{*,*}_0|_{W_{\alpha}}$ and $\totaldr{k}{}^{*,*}_{\alpha} = \totaldr{k}{}^{*,*}_0|_{W_{\alpha}}$ for $\alpha \in \mathscr{I}_1$. Using the construction in \S \ref{subsubsec:gluing_construction}, we obtain a Gerstenhaber deformation $\glue{k}_{\alpha\beta} = e^{[\theta_{\alpha\beta},\cdot]} \circ \patch{k}_{\alpha\beta}$ specified by $\theta_{\alpha\beta} \in \polyv{k}^{-1,0}_\beta(W_{\alpha\beta})$, which give rise to sets of compatible global sheaves $\polyv{k}^{*,*}$ and $\totaldr{k}{}^{*,*}$, $k \in \mathbb{N}$. Restricting to the semi-flat part, we get two Gerstenhaber deformations $\polyv{k}^{*,*}_0$ and $\polyv{k}^{*,*}|_{W_{0}}$, which must be equivalent as $\check{H}^{>0}(\mathcal{W}_1, \polyv{0}^{-1,0}|_{W_0}) = 0$. So we have a set of compatible isomorphisms locally given by $h_{\alpha} = e^{[\mathbf{b}_\alpha,\cdot]}\colon \polyv{k}^{*,*}_0|_{W_{\alpha}} \rightarrow \polyv{k}^{*,*}|_{W_{\alpha}} \cong \polyv{k}^{*,*}_{\alpha} $ for some $\mathbf{b}_{\alpha} \in \polyv{k}^{-1,0}_0(W_{\alpha})$, for each $k \in \mathbb{N}$, and they fit into the following commutative diagram $$ \xymatrix@1{ \polyv{k}^{*,*}_0|_{W_{\alpha\beta}} \ar[r]^{\mathrm{id}} \ar[d]^{h_{\alpha}} &\polyv{k}^{*,*}_0|_{W_{\alpha\beta}} \ar[d]^{h_{\beta}} \\ \polyv{k}^{*,*}_{\alpha}|_{W_{\alpha\beta}} \ar[r]^{\glue{k}_{\alpha\beta}} & \polyv{k}^{*,*}_{\beta}|_{W_{\alpha\beta}}. } $$ Since the pre-differential on $\polyv{k}^{*,*}|_{W_{0}}$ obtained from the construction in \S \ref{subsubsec:gluing_construction} is of the form $\pdb_\alpha + [\eta_{\alpha},\cdot]$ for some $\eta_{\alpha} \in \polyv{k}^{-1,1}_{0}(W_{\alpha})$, pulling back via $h_{\alpha}$ gives a global element $\eta \in \polyv{k}^{-1,1}_{0}(W_0)$ such that $$ h_{\alpha}^{-1}\circ (\pdb_\alpha + [\eta_{\alpha},\cdot]) \circ h_{\alpha} = \pdb_0 + [\eta,\cdot]. $$ Theorem \ref{prop:Maurer_cartan_equation_unobstructed} gives a Maurer--Cartan solution $\phi \in \polyv{k}^{-1,1}(B)$ such that $(\pdb+[\phi,\cdot])^2=0$, together with a holomorphic volume form $e^{f} \volf{}$, compatible for each $k$. We denote the pullback of $\phi$ under $h_{\alpha}$'s to $\polyv{k}^{-1,1}_0(W_0)$ as $\phi_0$, and that of volume form to $\totaldr{k}{\parallel}^{n,0}_0(W_0)$ as $e^{g} \volf{}_0$. 
We see that the equation $$(\pdb_0 + \cu{L}_{\eta+\phi_0}) e^{g} \volf{}_0=0$$ is satisfied, or equivalently, that $\eta+\phi_0 + tg$ is a solution to the extended Maurer--Cartan equation \ref{eqn:extended_maurer_cartan_equation}. \begin{lemma}\label{lem:vector field} If the holomorphic volume form $e^{f} \volf{}$ is normalized in the sense of Definition \ref{def:normalized_volume_form}, then we can find a set of compatible $\cu{V} \in \polyv{k}^{-1,0}_0(W_0)$, $k \in \mathbb{N}$ such that $$ e^{-\cu{L}_{\cu{V}}} \volf{}_0 = e^{g} \volf{}_0. $$ As a consequence, the Maurer--Cartan solution $\eta+\phi_0 + tg$ is gauge equivalent to a solution of the form $\zeta_0 + t\cdot 0$ for some $\zeta_0 \in \polyv{k}^{-1,1}_0(W_0)$, via the gauge transformation $e^{[\cu{V},\cdot]} \colon \polyv{k}^{*,*}_0 \rightarrow \polyv{k}^{*,*}_0$. \end{lemma} \begin{proof} We should construct $\cu{V}$ by induction on $k$ as in the proof of Lemma \ref{lem:preserving_volume_element_by_vector_fields}. Namely, suppose $\cu{V}$ is constructed for the $(k-1)^{\text{st}}$-order, then we shall lift it to the $k^{\text{th}}$-order. We prove the existence of a lifting $\cu{V}_x \in \polyv{k}^{-1,0}_{0,x}$ at every stalk $x \in W_0$ and use partition of unity to glue a global lifting $\cu{V}$. First of all, we can always find a gauge transformation $\theta \in \polyv{k}^{-1,0}_{0,x}$ such that $$ e^{-[\theta,\cdot]} \circ \pdb_0 \circ e^{[\theta,\cdot]} = \pdb_0 + [\eta+\phi_0,\cdot]. $$ So we have $\pdb_0 (e^{\cu{L}_\theta} e^{g} \volf{}_0)=0$, which implies that $e^{\cu{L}_\theta} e^{g} \volf{}_0 \in \tbva{k}{\parallel}^n_{0,x}$. We can write $e^{\cu{L}_\theta} e^{g} \volf{}_0 = e^{h} \volf{}_0$ in the stalk at $x$ for some germ $h\in \bva{k}^0_{0,x}$ of holomorphic functions. Applying Lemma \ref{lem:preserving_volume_element_by_vector_fields}, we can further choose $\theta$ so that $h=h(q) \in (q)\subset \comp[q]/q^{k+1}$. In a sufficiently small neighborhood $U_x$, we find an element $\varrho_x \in \tform^n(U_x)$ as in Definition \ref{def:normalized_volume_form}. The fact that the volume form is normalized forces $e^{h(q)} [\volf{}_0\wedge \varrho_x]$ to be constant with respect to the Gauss--Manin connection $\gmc{k}$. Tracing through the exact sequence \eqref{eqn:Gauss_Manin_connection_definition} on $U_x$, we can lift $\volf{}_0$ to $\tbva{k}{}^n_{0}(U_x)$ which is closed under $\dpartial{}$. As a consequence, we have $\gmc{k}_{\dd{\log q}}[\volf{}_0\wedge \varrho_x] = 0$, and hence we conclude that $h(q) =0$. Now we have to solve for a lifting $\cu{V}_x$ such that $e^{\cu{L}_{\theta}}e^{-\cu{L}_{\cu{V}_x}} \volf{}_0 = \volf{}_0$ up to the $k^{\text{th}}$-order. This is equivalent to solving for a lifting $u$ satisfying $e^{\cu{L}_{u}} \volf{}_0 = \volf{}_0$ for the $k^{\text{th}}$-order once the $(k-1)^{\text{st}}$-order is given. Take an arbitrary lifting $\tilde{u}$ to the $k^{\text{th}}$-order, and making use of the formula in \cite[Lem. 2.8]{chan2019geometry}, we have $$ e^{\cu{L}_{\tilde{u}}} \volf{}_0 = \exp\left( \sum_{s=0}^{\infty} \frac{\delta_{\tilde{u}}^{s}}{(s+1)!} \bvd{}(\tilde{u}) \right) \volf{}_0, $$ where $\delta_{\tilde{u}} = -[\tilde{u},\cdot]$. From $e^{\cu{L}_{\tilde{u}}} \volf{}_0 = \volf{}_0 \ (\text{mod $\mathbf{m}^k$})$, we use induction on the order $j$ to prove that $\bvd{}(\tilde{u})=0$ up to order $(k-1)$. 
Therefore we can write $$\bvd{}(\tilde{u}) =q^{k} \bvd{}(\breve{u}) \ (\text{mod $\mathbf{m}^k$})$$ for some $\breve{u} \in \polyv{0}^{-1,0}_{0,x}$, by the fact that the cohomology sheaf under $\bvd{}$ is free over $\cfrk{k}= \comp[q]/(q^{k+1})$ (see the discussion right after Condition \ref{cond:holomorphic_poincare_lemma}). Setting $u = \tilde{u} - q^{k} \breve{u}$ will then solve the equation. \end{proof} The element $\cu{V}$ obtained in Lemma \ref{lem:vector field} can be used to conjugate the operator $\pdb_0+[\eta+\phi_0,\cdot]$ to get $\pdb_0 + [\zeta_0,\cdot]$, i.e. $$ e^{-[\cu{V},\cdot]}\circ (\pdb_0 + [\zeta_0,\cdot]) \circ e^{[\cu{V},\cdot]} = \pdb_0+[\eta+\phi_0,\cdot]. $$ The volume form $\volf{}_0$ will be holomorphic under the operator $\pdb_0 + [\zeta_0,\cdot]$. From equation \eqref{eqn:volume_form_equation}, we observe that $\bvd{}(\zeta_0) = 0$. Furthermore, the image of $\zeta_0$ under the isomorphism $\varPhi \colon \polyv{k}^{*,*}_0 \rightarrow \sfpolyv{k}^{*,*}_{\mathrm{sf}}$ in Lemma \ref{lem:comparing_sheaf_of_dgbv} gives $\phi_{\mathrm{s}} \in \sfpolyv{k}^{-1,1}_{\mathrm{sf}}(W_0)$, and an operator of the form \begin{equation}\label{eqn:dbar_operator_on_semi-flat} \pdb_{\circ} + [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot] = \pdb_{\circ}+ \sum_{v,\rho} [\phi_{v,\rho},\cdot] + [\phi_{\mathrm{s}},\cdot], \end{equation} where $\phi_{\mathrm{in}} = \sum_{v,\rho} \phi_{v,\rho}$, acting on $\sfpolyv{k}^{*,*}_{\mathrm{sf}}$. Equipped with this operator, the semi-flat sheaf $\sfpolyv{k}^{*,*}_{\mathrm{sf}}$ can be glued to the sheaves $\polyv{k}^{*,*}_{\alpha}$'s for $\alpha \in \mathscr{I}_2$, preserving all the operators. More explicitly, on each overlap $W_{0\alpha}:=W_0 \cap W_{\alpha}$, we have \begin{equation}\label{eqn:gluing_of_semi-flat_sheaf_to_others} \glue{k}_{0\alpha} \colon \sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{W_{0\alpha}}\rightarrow \polyv{k}^{*,*}|_{W_{0\alpha}} \end{equation} defined by $$\glue{k}_{\alpha\beta}\circ \glue{k}_{0\alpha}|_{W_{\alpha\beta}} := h_{\beta}\circ e^{-[\cu{V},\cdot]} \circ \varPhi^{-1}|_{W_{\alpha\beta}}$$ for $\beta \in \mathscr{I}_1$, which sends the operator $\pdb_{\circ} + [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]$ to $\pdb_\alpha + [\eta_{\alpha}+\phi,\cdot]$. \begin{definition}\label{def:semi_flat_tropical_vertex_lie_algebra} We call $\sftropv{k}^*_{\mathrm{sf}}:= \mathrm{Ker}(\bvd{})[-1] \subset \sfpolyv{k}^{-1,*}_{\mathrm{sf}}[-1]$, equipped with the structure of a dgLa using $\pdb_{\circ}$ and $[\cdot,\cdot]$ inherited from $\sfpolyv{k}^{-1,*}_{\mathrm{sf}}$, the \emph{sheaf of semi-flat tropical vertex differential graded Lie algebras (abbrev.\ as sf-TVdgLa)}. \end{definition} Note that $\sftropv{k}^*_{\mathrm{sf}} \cong \tform^*|_{W_0} \otimes_{\real} \sftvbva{k}$. Also, we have $\bvd{}(\phi_{\mathrm{s}})=0$ since $\bvd{}(\zeta_0)=0$, and a direct computation shows that $\bvd{}(\phi_{\mathrm{in}})=0$. Thus $\phi_{\mathrm{in}}, \phi_{\mathrm{s}} \in \sftropv{k}^{1}_{\mathrm{sf}}(W_0)$, and the operator $\pdb_{\circ} + [\phi_{\mathrm{in}}+\phi_{\mathrm{s}},\cdot]$ preserves the sub-dgLa $\sftropv{k}^*_{\mathrm{sf}}$. From the description of the sheaf $\tform^*$, we can see that, locally on $U \subset W_0$, $\phi_{\mathrm{s}}$ is supported on finitely many codimension one polyhedral subsets, called \emph{walls} or \emph{slabs}, which are constituents of a scattering diagram. This is why we use the subscript `s' in $\phi_{\mathrm{s}}$, which stands for `scattering'.
\subsection{Consistent scattering diagrams and Maurer--Cartan solutions}\label{subsec:scattering_diagram_from_MC_solution} \subsubsection{Scattering diagrams}\label{subsubsec:scattering_diagram} In this subsection, we recall the notion of scattering diagrams introduced by Kontsevich--Soibelman \cite{kontsevich-soibelman04} and Gross--Siebert \cite{gross2011real}, and make modifications to suit our needs. We begin with the notion of walls from \cite[\S 2]{gross2011real}. Let $$\nsf = \left( \bigcup_{\tau \in \pdecomp^{[n-2]}}\tau \right) \cup \left( \bigcup_{\substack{\rho \in \pdecomp^{[n-1]}\\ \rho \cap \tsing_e \neq \emptyset}} \tsing \cap \reint(\rho) \right)$$ be equipped with a polyhedral decomposition induced from $\pdecomp$ and $\tsing$. For the exposition below, we will always fix $k>0$ and consider all these structures modulo $\mathbf{m}^{k+1} = (q^{k+1})$. \begin{definition}\label{def:walls} A \emph{wall} $(\mathbf{w},\sigma_{\mathbf{w}}, \check{d}_{\mathbf{w}}, \Theta_{\mathbf{w}})$ consists of \begin{itemize} \item a maximal cell $\sigma_\mathbf{w}\in \pdecomp^{[n]}$, \item a closed $(n-1)$-dimensional tropical polyhedral subset $\mathbf{w}$ of $\sigma_{\mathbf{w}}$ such that $$\reint(\mathbf{w}) \cap \left( \bigcup_{\substack{\rho \in \pdecomp^{[n-1]}\\ \rho \cap \tsing_e \neq \emptyset}} \reint(\rho) \right) = \emptyset,$$ \item a choice of a primitive normal $\check{d}_{\mathbf{w}}$, and \item a section $\Theta_{\mathbf{w}}$ of the {\em tropical vertex group} $\exp(q \cdot \sftvbva{k})$ over a sufficiently small neighborhood of $\mathbf{w}$. \end{itemize} We call $\Theta_{\mathbf{w}}$ the \emph{wall-crossing factor associated to the wall $\mathbf{w}$}. We may write a wall as $(\mathbf{w},\Theta_{\mathbf{w}})$ for simplicity. \end{definition} A wall cannot be contained in $\rho$ with $\rho \cap \tsing_e \neq \emptyset$. We define a notion of slabs for these subsets of codimension one strata $\rho$ intersecting $\tsing_e$. The difference is that we have an extra term $\varTheta_{v,\rho}$ coming from the slab function $f_{v,\rho}$. \begin{definition}\label{def:slabs} A \emph{slab} $(\mathbf{b}, \rho_{\mathbf{b}}, \check{d}_{\rho}, \varXi_{\mathbf{b}})$ consists of \begin{itemize} \item an $(n-1)$-cell $\rho_{\mathbf{b}} \in \pdecomp^{[n-1]}$ such that $\rho_{\mathbf{b}} \cap \tsing_e \neq \emptyset$, \item a closed $(n-1)$-dimensional tropical polyhedral subset $\mathbf{b}$ of $\rho_{\mathbf{b}} \setminus (\rho_{\mathbf{b}} \cap \tsing)$, \item a choice of a primitive normal $\check{d}_{\rho}$, and \item a section $\varXi_{\mathbf{b}}$ of $\exp(q\cdot \sftvbva{k})$ over a sufficiently small neighborhood of $\mathbf{b}$. \end{itemize} The \emph{wall-crossing factor associated to the slab $\mathbf{b}$} is given by $$ \Theta_{\mathbf{b}}:=\varTheta_{v,\rho} \circ \varXi_{\mathbf{b}}, $$ where $v$ is the unique vertex such that $\reint(\rho)_v$ contains $\mathbf{b}$ and $$ \varTheta_{v,\rho} = \exp([\log(s_{v\rho}^{-1} (f_{v,\rho})) \partial_{\check{d}_{\rho}},\cdot]) $$ (cf. equation \eqref{eqn:defining_wall}). We may write a slab as $(\mathbf{b},\Theta_{\mathbf{b}})$ for simplicity. \end{definition} \begin{remark} In the above definition, a slab is \emph{not} allowed to intersect the singular locus $\tsing$. This is different from the situation in \cite[\S 2]{gross2011real}. However, in our definition of consistent scattering diagrams, we will require consistency around each stratum of $\tsing$. 
\end{remark} \begin{example}\label{eg:wall_and_slab} We consider the 3-dimensional example shown in Figure \ref{fig:wall_and_slab}, from which we can see possible supports of the walls and slabs. There are two adjacent maximal cells intersecting at $\rho \in \pdecomp^{[n-1]}$ with $\tsing_e \cap \rho= \tsing \cap \rho$ colored in red. The $2$-dimensional polyhedral subsets colored in blue can support walls, and the polyhedral subset colored in green can support a slab because it lies inside $\rho$ with $\tsing_e \cap \rho \neq \emptyset$. \begin{figure}[h!] \includegraphics[scale=0.2]{wall_and_slab.png} \caption{Supports of walls/slabs}\label{fig:wall_and_slab} \end{figure} \end{example} \begin{definition}\label{def:scattering_diagram} A \emph{($k^{\text{th}}$-order) scattering diagram} is a countable collection $$\mathscr{D} = \{(\mathbf{w}_i,\Theta_i)\}_{i\in \mathbb{N}} \cup \{(\mathbf{b}_j,\Theta_{j})\}_{j\in \mathbb{N}}$$ of walls or slabs such that the intersection of any two walls/slabs is at most an $(n-2)$-dimensional tropical polyhedral subset, and $\{\mathbf{w}_i \cap W_0 \}_{i\in \mathbb{N}} \cup \{ \mathbf{b}_j\cap W_0\}_{j\in \mathbb{N}}$ is locally finite in $W_0$.\end{definition} Our notion of scattering diagrams is more flexible than the one defined in \cite{kontsevich-soibelman04, gross2011real} in two ways. First, there is no relation between the affine direction orthogonal to a wall $\mathbf{w}$ or a slab $\mathbf{b}$ and its wall crossing factor. As a result, we cannot allow walls/slabs to overlap in their relative interiors because in that case their associated wall crossing factors do not necessarily commute. Second, we only require the intersection of $\mathscr{D}$ with $W_0$ to be locally finite in $W_0$, which means that we allow a possibly infinite number of walls/slabs approaching strata of $\hat{\tsing}$. In the construction of the scattering diagram $\mathscr{D}(\varphi)$ associated to a Maurer--Cartan solution $\varphi$ below, all the walls/slabs will be compact subsets of $W_0$. These walls will not intersect $\hat{\tsing}$, as illustrated in Figure \ref{fig:wall_and_slab}. However, there could be a union of infinitely many walls limiting to some strata of $\hat{\tsing}$. See also Remark \ref{rmk:relaxed_scattering_diagram}. \begin{example} For the 2-dimensional example shown in Figure \ref{fig:infinite_many_walls}, we see a vertex $v$ and its adjacent cells, and the singular locus $\tsing_e$ consists of the red crosses. In our version of scattering diagrams, we allow infinitely many intervals limiting to $\{v\}$ or $\tsing_e$. \begin{figure}[h!] \includegraphics[scale=0.4]{infinite_walls.png} \caption{Walls/slabs around $\hat{\tsing}$}\label{fig:infinite_many_walls} \end{figure} \end{example} Given a scattering diagram $\mathscr{D}$, we define its \emph{support} as $|\mathscr{D}|:= \bigcup_{i \in \bb{N}} \mathbf{w}_i \cup \bigcup_{j \in \bb{N}} \mathbf{b}_j$. There is an induced polyhedral decomposition on $|\mathscr{D}|$ such that its $(n-1)$-cells are closed subsets of some walls or slabs, and all intersections of walls or slabs lie in the union of the $(n-2)$-cells. We write $|\mathscr{D}|^{[i]}$ for the collection of all the $i$-cells in this polyhedral decomposition. We may assume, after further subdividing the walls or slabs in $\mathscr{D}$ if necessary, that every wall or slab is an $(n-1)$-cell in $|\mathscr{D}|$.
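To give a concrete feeling for wall-crossing factors and for the consistency condition that will be formalized in Definition \ref{def:consistency_of_scattering_diagram} below, the following short computational sketch checks, modulo $t^3$, the simplest two-wall scattering example from the tropical vertex literature \cite{kontsevich-soibelman04, gross2011real}: two walls with functions $1+tx$ and $1+ty$ force a third wall with function $1+t^2xy$. This is only an illustration in a two-dimensional toy model; the helper names, the choice of normals and composition conventions, and the use of the computer algebra package sympy are assumptions made for the sketch and are not taken from the constructions of this paper.
\begin{verbatim}
# A minimal sketch (a toy model, not part of the constructions above): the
# classical two-wall scattering example, checked modulo t^3 with sympy.
# The two initial walls carry the wall-crossing automorphisms
#   theta1: x -> x,            y -> y*(1 + t*x)
#   theta2: x -> x*(1 + t*y),  y -> y
# and consistency around the origin forces a third wall with
#   theta3: x -> x*(1 + t^2*x*y),  y -> y*(1 + t^2*x*y)^(-1),
# in the sense that  theta1 o theta2 = theta3 o theta2 o theta1  (mod t^3).
import sympy as sp

x, y, t = sp.symbols('x y t')

def truncate(expr, order=3):
    # drop all terms of degree >= order in t
    return sp.series(sp.expand(expr), t, 0, order).removeO()

def compose(*autos):
    # automorphisms are given as dicts {x: image, y: image}; leftmost acts last
    def composed(expr):
        for auto in reversed(autos):
            expr = truncate(expr.subs([(x, auto[x]), (y, auto[y])],
                                      simultaneous=True))
        return expr
    return composed

theta1 = {x: x, y: y*(1 + t*x)}
theta2 = {x: x*(1 + t*y), y: y}
theta3 = {x: x*(1 + t**2*x*y), y: truncate(y/(1 + t**2*x*y))}

lhs = compose(theta1, theta2)
rhs = compose(theta3, theta2, theta1)
assert sp.expand(lhs(x) - rhs(x)) == 0 and sp.expand(lhs(y) - rhs(y)) == 0
\end{verbatim}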
We call an $(n-2)$-cell $\mathfrak{j} \in |\mathscr{D}|^{[n-2]}$ a \emph{joint}, and a connected component of $W_0 \setminus |\mathscr{D}|$ a \emph{chamber}. Given a wall or slab, we shall make sense of \emph{wall crossing} in terms of the jumping of holomorphic functions across it. Instead of formulating the definition in terms of path-ordered products of elements in the tropical vertex group as in \cite{gross2011real}, we will express it in terms of the action by the tropical vertex group on the local sections of $\sfbva{k}^0_{\mathrm{sf}}$. There is no harm in doing so since we have the inclusion $\sfbva{k}^{-1}_{\mathrm{sf}} \hookrightarrow \mathrm{Der}(\sfbva{k}^0_{\mathrm{sf}},\sfbva{k}^0_{\mathrm{sf}})$, i.e. a relative vector field is determined by its action on functions. In this regard, we would like to define the \emph{($k^{\text{th}}$-order) wall-crossing sheaf} $\wcs{k}_{\mathscr{D}}$ on the open set $$W_{0}(\mathscr{D}):= W_0 \setminus \bigcup_{\mathfrak{j} \in |\mathscr{D}|^{[n-2]}} \mathfrak{j},$$ which captures the jumping of holomorphic functions described by the wall-crossing factor when crossing a wall/slab. We first consider the sheaf $\sfbva{k}^{0}_{\mathrm{sf}}$ of holomorphic functions over the subset $W_0 \setminus |\mathscr{D}|$, and let $$\wcs{k}_{\mathscr{D}}|_{W_0 \setminus |\mathscr{D}|} := \sfbva{k}^{0}_{\mathrm{sf}}|_{W_0 \setminus |\mathscr{D}|}.$$ To extend it through the walls/slabs, we will specify the analytic continuation through $\reint(\mathbf{w})$ for each $\mathbf{w} \in |\mathscr{D}|^{[n-1]}$. Given a wall/slab $\mathbf{w}$ with two adjacent chambers $\cu{C}_+$, $\cu{C}_-$ and $\check{d}_{\mathbf{w}}$ pointing into $\cu{C}_+$, and a point $x \in \reint(\mathbf{w})$ with the germ $\Theta_{\mathbf{w},x}$ of wall-crossing factors near $x$, we let $$\wcs{k}_{\mathscr{D},x} := \sfbva{k}^0_{\mathrm{sf},x},$$ but with a different gluing to nearby chambers $\cu{C}_{\pm}$: in a sufficiently small neighborhood $U_x$ of $x$, the gluing of a local section $f \in \wcs{k}_{\mathscr{D},x}$ is given by \begin{equation}\label{eqn:wall_crossing_gluing} f|_{U_x \cap \cu{C}_{\pm}} := \begin{dcases} \Theta_{\mathbf{w},x}(f)|_{U_x \cap \cu{C}_+} & \text{on $U_x \cap \cu{C}_+$,}\\ f|_{U_x \cap \cu{C}_-} & \text{on $U_x \cap \cu{C}_-$.} \end{dcases} \end{equation} In this way, the sheaf $\wcs{k}_{\mathscr{D}}|_{W_0 \setminus |\mathscr{D}|}$ extends to $W_0(\mathscr{D})$. Now we can formulate consistency of a scattering diagram $\mathscr{D}$ in terms of the behaviour of the sheaf $\wcs{k}_{\mathscr{D}}$ over the joints $\mathfrak{j}$'s and $(n-2)$-dimensional strata of $\nsf$. More precisely, we consider the push-forward $\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})$ along the embedding $\mathfrak{i} \colon W_0(\mathscr{D}) \rightarrow B$, and its stalk at $x \in \reint(\mathfrak{j})$ and $x \in \reint(\tau)$ for strata $\tau \subset \nsf$. As above, we can define the ($l^{\text{th}}$-order) sheaf $\wcs{l}_{\mathscr{D}}$ by using $\sfbva{l}^0_{\mathrm{sf}}$ and considering equation \eqref{eqn:wall_crossing_gluing} modulo $(q)^{l+1}$. There is a natural restriction map $\rest{k,l} \colon \mathfrak{i}_*(\wcs{k}_{\mathscr{D}}) \rightarrow \mathfrak{i}_*(\wcs{l}_{\mathscr{D}})$. Taking the tensor product, we have $\rest{k,l}\colon \mathfrak{i}_*(\wcs{k}_{\mathscr{D}}) \otimes_{\cfrk{k}} \cfrk{l} \rightarrow \mathfrak{i}_*(\wcs{l}_{\mathscr{D}})$, where $\cfrk{k} = \comp[q]/(q^{k+1})$. The proof of the following lemma will be given in Appendix \S \ref{sec:hartogs}.
\begin{lemma}[Hartogs extension property]\label{lem:hartogs_extension_2} We have $$\iota_* (\bva{0}^0|_{W_0}) = \bva{0}^0,$$ where $\iota \colon W_0 \rightarrow B$ is the inclusion. Moreover, for any scattering diagram $\mathscr{D}$, we have $$\mathfrak{i}_*(\bva{0}^0|_{W_0(\mathscr{D})}) = \bva{0}^0,$$ where $\mathfrak{i} \colon W_0(\mathscr{D}) \rightarrow B$ is the inclusion. \end{lemma} \begin{lemma}\label{lem:0_order_wall_crossing_sheaf_vs_structure_sheaf} The $0^{\text{th}}$-order sheaf $\mathfrak{i}_*(\wcs{0}_{\mathscr{D}})$ is isomorphic to the sheaf $\bva{0}^0$. \end{lemma} \begin{proof} In view of Lemma \ref{lem:hartogs_extension_2}, we only have to show that the two sheaves are isomorphic on the open subset $W_0(\mathscr{D})$. Since we work modulo $(q)$, only the wall-crossing factor $\varTheta_{v,\rho}$ associated to a slab matters. So we take a point $x \in \reint(\mathbf{b}) \subset \reint(\rho)_v$ for some vertex $v$, and compare $\wcs{0}_{\mathscr{D},x}$ with $\bva{0}^0_{x} = \sfbva{0}^0_{\mathrm{sf},x}$. From the proof of Lemma \ref{lem:comparing_sheaf_of_dgbv}, we have \begin{align*} \bva{0}^0_{x} & = \sfbva{0}^0_{\mathrm{sf},x}=\cu{O}_{\localmod{k}(\rho)_v,K}\\ & = \left\{a_{0,j} + \sum_{i=1}^{\infty} a_{i} u^i + \sum_{i=-1}^{-\infty}a_{i} v^{-i} \ \Big| \ a_{i} \in \cu{O}_{(\comp^*)^{n-1}}(U) \ \text{for some neigh. $U\supset K$}, \ \sup_{i\neq 0} \frac{\log|a_i|}{|i|} < \infty \right\}, \end{align*} with the relation $uv= 0$. The gluings with nearby maximal cells $\sigma_{\pm}$ of both $\bva{0}^0$ and $\sfbva{0}^0_{\mathrm{sf}}$ are simply given by the parallel transport through $v$ and the formulae \begin{equation*} \sigma_+\colon\begin{cases} z^{m} \mapsto s^{-1}_{\rho\sigma_{+}}(m) z^m & \text{for $m \in \tanpoly_{\rho}$},\\ u \mapsto s^{-1}_{\rho\sigma_+}(m_{\rho}) z^{m_{\rho}}, & \\ v \mapsto 0, & \end{cases} \qquad\quad \sigma_-\colon\begin{cases} z^{m} \mapsto s^{-1}_{\rho\sigma_{-}}(m) z^m & \text{for $m \in \tanpoly_{\rho}$},\\ u \mapsto 0, & \\ v \mapsto s^{-1}_{\rho\sigma_-}(-m_{\rho})z^{-m_{\rho}} & \end{cases} \end{equation*} in the proof of Lemma \ref{lem:comparing_sheaf_of_dgbv}. Now for the wall-crossing sheaf $\wcs{0}_{\mathscr{D},x} \cong \sfbva{0}^0_{\mathrm{sf},x}$, the wall-crossing factor $\varTheta_{v,\rho}$ acts trivially except on the two coordinate functions $u,v$ because $\langle m ,\check{d}_{\rho} \rangle = 0$ for $m \in \tanpoly_{\rho}$. The gluing of $u$ to the nearby maximal cells which obeys wall crossing is given by $$ u|_{U_x \cap \sigma_{\pm}} := \begin{dcases} u|_{U_x \cap \sigma_+} & \text{on $U_x \cap \sigma_+$,}\\ \varTheta^{-1}_{v,\rho,x}(u)|_{U_x \cap \sigma_-} = 0 & \text{on $U_x \cap \sigma_-$,} \end{dcases} $$ in a sufficiently small neighborhood $U_x$ of $x$. Here, the reason that we have $\varTheta^{-1}_{v,\rho,x}(u)|_{U_x \cap \sigma_-} = 0$ on $U_x \cap \sigma_-$ is simply because we have $u \mapsto 0$ in the gluing of $\sfbva{0}^0_{\mathrm{sf}}$. For the same reason, we see that the gluing of $v$ agrees with that of $\bva{0}^0$ and $\sfbva{0}^0_{\mathrm{sf}}$. \end{proof} \begin{definition}\label{def:consistency_of_scattering_diagram} A ($k^{\text{th}}$-order) scattering diagram $\mathscr{D}$ is said to be \emph{consistent} if there is an isomorphism $\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})|_{W_{\alpha}} \cong \bva{k}^0_{\alpha}$ as sheaves of $\comp[q]/(q^{k+1})$-algebras on each open subset $W_{\alpha}$. 
\end{definition} The above consistency condition implies that $\rest{k,l}\colon \mathfrak{i}_*(\wcs{k}_{\mathscr{D}}) \rightarrow \mathfrak{i}_*(\wcs{l}_{\mathscr{D}})$ is surjective for any $l < k$, and hence that $\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})$ is a sheaf of free $\comp[q]/(q^{k+1})$-modules on $B$. We are going to see that $\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})$ agrees with the push-forward of the sheaf of holomorphic functions on a ($k^{\text{th}}$-order) thickening $\centerfiber{k}$ of the central fiber $\centerfiber{0}$ under the modified moment map $\modmap$. Let us elaborate a bit on the relation between this definition of consistency and that in \cite{gross2011real}. If we have a consistent scattering diagram in the sense of \cite{gross2011real}, then we obtain a $k^{\text{th}}$-order thickening $\centerfiber{k}$ of $\centerfiber{0}$ which is locally modeled on the thickenings $\localmod{k}_{\alpha}$'s by \cite[Cor. 2.18]{Gross-Siebert-logII}. Pushing forward via the modified moment map $\modmap$, we obtain a sheaf of algebras over $\comp[q]/(q^{k+1})$ lifting $\bva{0}^0$, which is locally isomorphic to the $\bva{k}^0_{\alpha}$'s. This consequence is exactly what we use to formulate our definition of consistency. \begin{lemma}\label{lem:ring_hom_induce_space_hom} Suppose we have $W \subset W_{\alpha} \cap W_{\beta}$ such that $V = \modmap^{-1}(W)$ is Stein, and an isomorphism $ h \colon \bva{k}^0_{\beta}|_W \rightarrow \bva{k}^0_{\alpha}|_W$ of sheaves of $\comp[q]/(q^{k+1})$-algebras which is the identity modulo $(q)$. Then there is a unique isomorphism $\psi\colon \localmod{k}_{\alpha}|_V \rightarrow \localmod{k}_{\beta}|_V$ of analytic spaces inducing $h$. \end{lemma} \begin{proof} From the description in \S \ref{subsec:log_structure_and_slab_function}, we can embed both families $\localmod{k}_{\alpha}$, $\localmod{k}_{\beta}$ over $\spec(\comp[q]/(q^{k+1}))$ as closed analytic subschemes of $\comp^{N+1} = \comp^{N}\times \comp_{q}$ and $\comp^{L+1} = \comp^{L} \times \comp_q$ respectively, where projection to the second factor defines the family over $\comp[q]/(q^{k+1})$. Let $\cu{J}_{\alpha}$ and $\cu{J}_{\beta}$ be the corresponding ideal sheaves, which can be generated by finitely many elements. We can take Stein open subsets $U_{\alpha} \subseteq \comp^{N+1}$ and $U_{\beta} \subseteq \comp^{L+1}$ such that their intersections with the subschemes give $\localmod{k}_{\alpha}|_{V}$ and $\localmod{k}_{\beta}|_{V}$ respectively. By taking global sections of the sheaves over $W$, we obtain the isomorphism $h\colon \cu{O}_{\localmod{k}_{\beta}}(V) \rightarrow \cu{O}_{\localmod{k}_{\alpha}}(V)$. Using the fact that $U_{\alpha}$ is Stein, we can lift the $h(z_i)$'s, where the $z_i$'s are restrictions of coordinate functions to $\localmod{k}_{\beta}|_V \subset U_{\beta}$, to holomorphic functions on $U_{\alpha}$. In this way, $h$ can be lifted to a holomorphic map $\psi\colon U_{\alpha} \rightarrow U_{\beta}$. Restricting to $\localmod{k}_{\alpha}|_{V}$, we see that the image lies in $\localmod{k}_{\beta}|_{V}$, and hence we obtain the isomorphism $\psi$. The uniqueness follows from the fact that $\psi$ is determined by $\psi^*(z_i) = h(z_i)$. \end{proof} Given a consistent scattering diagram $\mathscr{D}$ (in the sense of Definition \ref{def:consistency_of_scattering_diagram}), the sheaf $\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})$ can be treated as a gluing of the local sheaves $\bva{k}^0_{\alpha}$'s.
Then from Lemma \ref{lem:ring_hom_induce_space_hom}, we obtain a gluing of the local models $\localmod{k}_{\alpha}$'s yielding a thickening $\centerfiber{k}$ of $\centerfiber{0}$. This justifies Definition \ref{def:consistency_of_scattering_diagram}. \subsubsection{Constructing consistent scattering diagrams from Maurer--Cartan solutions}\label{subsubsec:consistent_diagram_from_solution} We are finally ready to demonstrate how to construct a consistent scattering diagram $\mathscr{D}(\varphi)$ in the sense of Definition \ref{def:consistency_of_scattering_diagram} from a Maurer--Cartan solution $\varphi = \phi + t f$ obtained in Theorem \ref{prop:Maurer_cartan_equation_unobstructed}. As in \S \ref{subsubsec:gluing_using_semi_flat}, we obtain a $k^{\text{th}}$-order Maurer--Cartan solution $\zeta_0$ and define its scattered part as $\phi_{\mathrm{s}} \in \sftropv{k}^{1}_{\mathrm{sf}}(W_0)$. From this, we want to construct a $k^{\text{th}}$-order scattering diagram $\mathscr{D}(\varphi)$. We take an open cover $\{U_i\}_{i}$ by pre-compact convex open subsets of $W_0$ such that, locally on $U_i$, $\phi_{\mathrm{in}}+\phi_{\mathrm{s}}$ can be written as a finite sum $$ (\phi_{\mathrm{in}}+\phi_{\mathrm{s}})|_{U_i} = \sum_{j} \alpha_{ij} \otimes v_{ij}, $$ where $\alpha_{ij} \in \tform^1(U_i)$ has asymptotic support on a codimension one polyhedral subset $P_{ij} \subset U_i$, and $v_{ij} \in \sftvbva{k}(U_i)$. We take a partition of unity $\{\varrho_i\}_{i}$ subordinate to the cover $\{U_i\}_{i}$ such that $\mathrm{supp}(\varrho_i)$ has asymptotic support on a compact subset $C_i$ of $U_i$. As a result, we can write \begin{equation} \phi_{\mathrm{in}}+\phi_{\mathrm{s}} = \sum_i \sum_j (\varrho_i \alpha_{ij}) \otimes v_{ij}, \end{equation} where each $(\varrho_i \alpha_{ij})$ has asymptotic support on the compact codimension one subset $C_i \cap P_{ij} \subset U_i$. The subset $\bigcup_{ij}C_i \cap P_{ij}$ will be the support $|\mathscr{D}|$ of our scattering diagram $\mathscr{D} = \mathscr{D}(\varphi)$. We may equip $|\mathscr{D}| := \bigcup_{ij}C_i \cap P_{ij}$ with a polyhedral decomposition such that all the boundaries and mutual intersections of $C_i \cap P_{ij}$'s are contained in $(n-2)$-dimensional strata of $|\mathscr{D}|$. So, for each $(n-1)$-dimensional cell $\tau$ of $|\mathscr{D}|$, if $\reint(\tau) \cap (C_i \cap P_{ij} ) \neq \emptyset$ for some $i,j$, then we must have $\tau \subset C_i \cap P_{ij}$. Let $\mathtt{I}(\tau):= \{ (i,j) \ | \ \tau \subset C_i \cap P_{ij} \}$, which is a finite set of indices. We will equip the $(n-1)$-cells $\tau$'s of $|\mathscr{D}|$ with the structure of walls or slabs. We first consider the case of a wall. Take $\tau \in |\mathscr{D}|^{[n-1]}$ such that $\reint(\tau) \cap \reint(\rho) = \emptyset$ for all $\rho$ with $\rho \cap \tsing_e \neq \emptyset$. We let $\mathbf{w} = \tau$, choose a primitive normal $\check{d}_{\mathbf{w}}$ of $\tau$, and give the labels $\cu{C}_{\pm}$ to the two adjacent chambers $\cu{C}_{\pm}$ so that $\check{d}_{\mathbf{w}}$ is pointing into $\cu{C}_{+}$. In a sufficiently small neighborhood $U_\tau$ of $\reint(\tau)$, we have $\phi_{\mathrm{in}}|_{U_{\tau}} = 0$ and we may write $$ \phi_{\mathrm{s}}|_{U_{\tau}} = \sum_{(i,j)\in \mathtt{I}(\tau)}(\varrho_i \alpha_{ij}) \otimes v_{ij}, $$ where each $(\varrho_i \alpha_{ij})$ has asymptotic support on $\reint(\tau)$. 
Since locally on $U_{\tau}$ any Maurer--Cartan solution is gauge equivalent to $0$, there exists an element $\theta_{\tau} \in \tform^0(U_{\tau})\otimes q\cdot \sftvbva{k}(U_{\tau})$ such that $$e^{[\theta_{\tau},\cdot]} \circ \pdb_{\circ} \circ e^{-[\theta_{\tau},\cdot]} = \pdb_{\circ} + [\phi_{\mathrm{s}},\cdot].$$ Such an element can be constructed inductively using the procedure in \cite[\S 3.4.3]{matt-leung-ma}, and can be chosen to be of the form \begin{equation}\label{eqn:construction_of_wall_crossing_factor_from_solution_wall_case} \theta_{\tau}|_{U_{\tau} \cap \cu{C}_{\pm}} = \begin{dcases} \theta_{\tau,0}|_{U_{\tau} \cap \cu{C}_+} & \text{on $U_{\tau} \cap \cu{C}_+$,}\\ 0 & \text{on $U_{\tau} \cap \cu{C}_-$,} \end{dcases} \end{equation} for some $\theta_{\tau,0} \in q \cdot \sftvbva{k}(U_{\tau})$. From this we obtain the wall-crossing factor associated to the wall $\mathbf{w}$ \begin{equation}\label{eqn:construction_wall_crossing_factor_wall} \Theta_{\mathbf{w}} := e^{[\theta_{\tau,0},\cdot]}. \end{equation} \begin{remark} Here we need to apply the procedure in \cite[\S 3.4.3]{matt-leung-ma}, which is a generalization of that in \cite{kwchan-leung-ma}, because of the potential non-commutativity: $[v_{ij},v_{ij'}]\neq 0$ for $j \neq j'$. \end{remark} For the case where $\tau \subset \reint(\rho)_v$ for some $\rho$ with $\rho \cap \tsing_e \neq \emptyset$, we will define a slab. We take $U_{\tau}$ and $\mathtt{I}(\tau)$ as above, and let the slab $\mathbf{b} = \tau$. The primitive normal $\check{d}_{\rho}$ is the one we chose earlier for each $\rho$. Again we work in a small neighborhood $U_{\tau}$ of $\reint(\tau)$ with two adjacent chambers $\cu{C}_{\pm}$. As in the proof of Lemma \ref{lem:comparing_sheaf_of_dgbv}, we can find a step-function-like element $\theta_{v,\rho}$ of the form $$ \theta_{v,\rho} = \begin{dcases} \log(s_{v\rho}^{-1}(f_{v,\rho})) \partial_{\check{d}_{\rho}} & \text{on $ U_{\tau} \cap \cu{C}_+$},\\ 0 & \text{on $ U_{\tau} \cap \cu{C}_-$} \end{dcases} $$ to solve the equation $e^{[\theta_{v,\rho},\cdot]}\circ \pdb_{\circ} \circ e^{-[\theta_{v,\rho},\cdot]} = \pdb_{\circ} + [\phi_{\mathrm{in}},\cdot]$ on $U_\tau$. In other words, $$\varPsi:=e^{-[\theta_{v,\rho},\cdot]} \colon (\sftropv{k}^*_{\mathrm{sf}}|_{U_{\tau}},\pdb_{\mathrm{sf}}) \rightarrow (\sftropv{k}^*_{\mathrm{sf}}|_{U_{\tau}},\pdb_{\circ})$$ is an isomorphism of sheaves of dgLas. Computations using the formula in \cite[Lem. 2.5]{chan2019geometry} then gives the identity $$ \varPsi^{-1} (\pdb_{\circ} + [\varPsi(\phi_{\mathrm{s}}),\cdot] ) \circ \varPsi = \pdb_{\circ} + [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]. $$ Once again, we can find an element $\theta_{\tau}$ such that $$ e^{[\theta_{\tau},\cdot]} \circ \pdb_{\circ} \circ e^{-[\theta_{\tau},\cdot]} = \pdb_{\circ} + [\varPsi(\phi_{\mathrm{s}}),\cdot], $$ and hence a corresponding element $\theta_{\tau,0} \in q\cdot \sftvbva{k}(U_\tau)$ of the form \eqref{eqn:construction_of_wall_crossing_factor_from_solution_wall_case}. From this we get \begin{equation}\label{eqn:construction_wall_crossing_factor_slab} \varXi_{\mathbf{b}}:= e^{[\theta_{\tau,0},\cdot]} \end{equation} and hence the wall-crossing factor $\Theta_{\mathbf{b}} :=\varTheta_{v,\rho} \circ \varXi_{\mathbf{b}} $ associated to the slab $\mathbf{b}$. Next we would like to argue that consistency of the scattering diagram $\mathscr{D}$ follows from the fact that $\phi$ is a Maurer--Cartan solution. 
First of all, on the global sheaf $\polyv{k}^{*,*}$ over $B$, we have the operator $\pdb_{\phi}:=\pdb+[\phi,\cdot]$ which satisfies $[\bvd{},\pdb_{\phi}] = 0$ and $\pdb_{\phi}^2 = 0$. This allows us to define the \emph{sheaf of $k^{\text{th}}$-order holomorphic functions} as $$\prescript{k}{}{\cu{O}}_{\phi}:=\mathrm{Ker}(\pdb_{\phi}) \subset \polyv{k}^{0,0},$$ for each $k\in \mathbb{N}$. It is a sequence of sheaves of commutative $\comp[q]/(q^{k+1})$-algebras over $B$, equipped with a natural map $\rest{k,l}\colon \prescript{k}{}{\cu{O}}_{\phi} \rightarrow \prescript{l}{}{\cu{O}}_{\phi}$ for $l<k$ that is induced from the maps for $\polyv{k}^{*,*}$. By construction, we see that $\prescript{0}{}{\cu{O}}_{\phi} \cong \bva{0}^0 \cong \modmap_*(\cu{O}_{\centerfiber{0}})$. We claim that the maps $\rest{k,l}$'s are surjective. To prove this, we fix a point $x\in B$ and take an open chart $W_{\alpha}$ containing $x$ in the cover of $B$ we chose at the beginning of \S \ref{subsubsec:gluing_using_semi_flat}. There is an isomorphism $\varPhi_{\alpha}\colon \polyv{k}^{*,*}|_{W_{\alpha}} \cong \polyv{k}^{*,*}_{\alpha}$ identifying the differential $\pdb$ with $\pdb_{\alpha}+ [\eta_{\alpha},\cdot]$ by our construction. Write $\phi_{\alpha} = \varPhi_{\alpha}(\phi)$ and notice that $\pdb_{\alpha} + [\eta_{\alpha}+\phi_{\alpha},\cdot]$ squares to zero, which means that $\eta_{\alpha} + \phi_{\alpha}$ is a solution to the Maurer--Cartan equation for $\polyv{k}^{*,*}_{\alpha}(W_{\alpha})$. We apply the same trick as above to the local open subset $W_{\alpha}$, namely, any Maurer--Cartan solution lying in $\polyv{k}^{-1,1}_{\alpha}(W_{\alpha})$ is gauge equivalent to the trivial one, so there exists $\theta_{\alpha} \in \polyv{k}^{-1,0}_{\alpha}(W_{\alpha})$ such that $$ e^{[\theta_{\alpha},\cdot]} \circ \pdb_{\alpha} \circ e^{-[\theta_{\alpha},\cdot]} = \pdb_{\alpha} + [\eta_{\alpha}+\phi_{\alpha},\cdot]. $$ As a result, the map $e^{-[\theta_{\alpha},\cdot]} \circ \varPhi_{\alpha} \colon (\polyv{k}^{*,*}|_{W_{\alpha}},\pdb+[\phi,\cdot]) \cong (\polyv{k}^{*,*}_{\alpha},\pdb_{\alpha})$ is an isomorphism of dgLas, sending $\prescript{k}{}{\cu{O}}_{\phi}$ isomorphically onto $\bva{k}_{\alpha}^0$. We shall now prove the consistency of the scattering diagram $\mathscr{D} = \mathscr{D}(\varphi)$ by identifying the associated wall-crossing sheaf $\wcs{k}_{\mathscr{D}}$ with the sheaf $\prescript{k}{}{\cu{O}}_{\phi}|_{W_0(\mathscr{D})}$ of $k^{\text{th}}$-order holomorphic functions. \begin{theorem}\label{thm:consistency_of_diagram_from_mc} There is an isomorphism $\Phi \colon \prescript{k}{}{\cu{O}}_{\phi}|_{W_0(\mathscr{D})} \rightarrow \wcs{k}_{\mathscr{D}}$ of sheaves of $\comp[q]/(q^{k+1})$-algebras on $W_0(\mathscr{D})$. Furthermore, the scattering diagram $\mathscr{D} = \mathscr{D}(\varphi)$ associated to the Maurer--Cartan solution $\phi$ is consistent in the sense of Definition \ref{def:consistency_of_scattering_diagram}. \end{theorem} \begin{proof} To prove the first statement, we first notice that there is a natural isomorphism $$\prescript{k}{}{\cu{O}}_{\phi}|_{W_0\setminus | \mathscr{D}|} \cong \wcs{k}_{\mathscr{D}}|_{W_0\setminus | \mathscr{D}|},$$ so we only need to consider those points $x \in \reint(\tau)$ where $\tau$ is either a wall or a slab. Since $W_0(\mathscr{D}) \subset W_0$, we will work on the semi-flat locus $W_0$ and use the model $\sfpolyv{k}^{*,*}_{\mathrm{sf}}$, which is equipped with the operator $ \pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]$. 
Via the isomorphism $$\varPhi \colon (\polyv{k}^{*,*}_0,\pdb_{\phi}) \rightarrow (\sfpolyv{k}^{*,*}_{\mathrm{sf}}, \pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot])$$ from Lemma \ref{lem:comparing_sheaf_of_dgbv}, we may write $$\prescript{k}{}{\cu{O}}_{\phi}|_{W_0} = \mathrm{Ker}(\pdb_\phi) \subset \sfpolyv{k}^{0,0}_{\mathrm{sf}}.$$ We fix a point $x \in W_0(\mathscr{D}) \cap |\mathscr{D}|$ and consider the stalk at $x$ for both sheaves. In the above construction of walls and slabs from the Maurer--Cartan solution $\phi$, we first take a sufficiently small open subset $U_x$ and then find a gauge transformation of the form $\varPsi = e^{[\theta_{\tau},\cdot]}$ in the case of a wall, and of the form $\varPsi =e^{[\theta_{v,\rho},\cdot]} \circ e^{[\theta_{\tau},\cdot]} $ in the case of a slab. We have $$\varPsi^{} \circ \pdb_{\circ} \circ \varPsi^{-1} = \pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]$$ by construction, so this further induces an isomorphism $$\varPsi \colon \sfbva{k}^0_{\mathrm{sf}}|_{U_x} \rightarrow \prescript{k}{}{\cu{O}}_{\phi}|_{U_x}$$ of $\comp[q]/(q^{k+1})$-algebras. It remains to see how the stalk $\varPsi\colon \sfbva{k}^0_{\mathrm{sf},x} \rightarrow \prescript{k}{}{\cu{O}}_{\phi,x}$ is glued to nearby chambers $\cu{C}_{\pm}$. For this purpose, we let $$\Psi_0 := e^{[\theta_{\tau,0},\cdot]}$$ as in equation \eqref{eqn:construction_wall_crossing_factor_wall} in the case of a wall, and $$\Psi_0 := \varTheta_{v,\rho} \circ e^{[\theta_{\tau,0},\cdot]}$$ as in \eqref{eqn:construction_wall_crossing_factor_slab} in the case of a slab. Then, the restriction of an element $f \in \sfbva{k}^0_{\mathrm{sf},x}$ to a nearby chamber is given by $$ \varPsi (f) = \begin{dcases} \Psi_0(f) & \text{on $ U_{x} \cap \cu{C}_+$},\\ f & \text{on $ U_{x} \cap \cu{C}_-$} \end{dcases} $$ in a sufficiently small neighborhood $U_x$. This agrees with the description of the wall-crossing sheaf $\wcs{k}_{\mathscr{D},x}$ in equation \eqref{eqn:wall_crossing_gluing}. Hence we obtain an isomorphism $\prescript{k}{}{\cu{O}}_{\phi}|_{W_0( \mathscr{D})} \cong \wcs{k}_{\mathscr{D}}$. To prove the second statement, we first apply pushing forward via $\mathfrak{i}\colon W_0(\mathscr{D}) \rightarrow B$ to the first statement to get the isomorphism $$\mathfrak{i}_* (\prescript{k}{}{\cu{O}}_{\phi}|_{W_0( \mathscr{D})}) \cong \mathfrak{i}_*(\wcs{k}_{\mathscr{D}}).$$ Now, by the discussion right before this proof, we may identify $\prescript{k}{}{\cu{O}}_{\phi}$ with $\bva{k}^0_{\alpha}$ locally. But the sheaf $\bva{k}^0_{\alpha}$, which is isomorphic to the restriction of $\bva{0}^0 \otimes_{\comp} \comp[q]/(q^{k+1})$ to $W_\alpha$ as sheaves of $\comp[q]/(q^{k+1})$-modules, satisfies the Hartogs extension property from $W_0(\mathscr{D})\cap W_{\alpha}$ to $W_{\alpha}$ by Lemma \ref{lem:hartogs_extension_2}. So we have $\mathfrak{i}_* (\prescript{k}{}{\cu{O}}_{\phi}|_{W_0( \mathscr{D})}) \cong \prescript{k}{}{\cu{O}}_{\phi}$. Hence, we obtain $$\mathfrak{i}_*(\wcs{k}_{\mathscr{D}})|_{W_{\alpha}} \cong (\prescript{k}{}{\cu{O}}_{\phi})|_{W_{\alpha}} \cong \bva{k}^0_{\alpha},$$ from which follows the consistency of the diagram $\mathscr{D} = \mathscr{D}(\varphi)$. \end{proof} \begin{remark} From the proof of Theorem \ref{thm:consistency_of_diagram_from_mc}, we actually have a correspondence between step-function-like elements in the gauge group and elements in the tropical vertex group as follows. 
We fix a generic point $x$ in a joint $\mathfrak{j}$, and consider a neighborhood of $x$ of the form $U_x \times D_x$, where $U_x$ is a neighborhood of $x$ in $\reint(\mathfrak{j})$ and $D_x$ is a disk in the normal direction of $\mathfrak{j}$. We pick a compact annulus $A_x \subset D_x$ surrounding $x$, intersecting finitely many walls/slabs. We let $\tau_1,\dots,\tau_s$ be the walls/slabs in anti-clockwise direction. For each $\tau_i$, we take an open subset $\mathscr{W}_{i} $ just containing the wall $\tau_i$ such that $\mathscr{W}_i \setminus \tau_i =\mathscr{W}_{i,+}\cup \mathscr{W}_{i,-}$. The following Figure \ref{fig:annulus} below illustrates the situation. As in the proof of Theorem \ref{thm:consistency_of_diagram_from_mc}, there is a gauge transformation on each $\mathscr{W}_i$ of the form $$ \varPsi_i \colon (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_i}, \pdb_{\circ}) \rightarrow (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_i}, \pdb_{\circ} + [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]), $$ where $\varPsi_i = e^{[\theta_{v,\rho},\cdot]} \circ e^{[\theta_{\tau},\cdot]}$ for a slab and $\varPsi_i = e^{[\theta_{\tau},\cdot]}$ for a wall. These are step-function-like elements in the gauge group satisfying $$ \varPsi_i = \begin{dcases} \Theta_{i} & \text{on $\mathscr{W}_{i,+}$,}\\ \mathrm{id} & \text{on $\mathscr{W}_{i,-}$,} \end{dcases} $$ where $\Theta_{i}$ is the wall crossing factor associated to $\tau_i$. On the overlap $\mathscr{W}_{i,+} = \mathscr{W}_i \cap \mathscr{W}_{i+1}$ (where we set $i+1=1$ if $i=s$), there is a commutative diagram $$ \xymatrix@1{ (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}) \ar[rr]^{\Theta_i} \ar[d]^{\varPsi_i} & & (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}) \ar[d]^{\varPsi_{i+1}} \\ (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]) \ar[rr]^{\mathrm{id}} & & (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]) } $$ allowing us to interpret the wall crossing factor $\Theta_i$ as the gluing between the two sheaves $\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i}}$ and $\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i+1}}$ over $\mathscr{W}_{i,+}$. Notice that the Maurer--Cartan element $\phi$ is global. On a small neighborhood $W_{\alpha}$ containing $U_x\times D_x$, we have the sheaf $(\polyv{k}_{\alpha}^{*,*},\pdb_{\phi})$ on $W_{\alpha}$, and there is an isomorphism $$ e^{[\theta_{\alpha},\cdot]} \colon (\polyv{k}^{*,*}_{\alpha},\pdb_{\alpha}) \cong (\polyv{k}^{*,*}_{\alpha},\pdb_\phi). $$ Composing with the isomorphism $$(\polyv{k}^{*,*}_{\alpha}|_{ \mathscr{W}_i},\pdb_\phi) \cong (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i}},\pdb_{\circ}+ [\phi_{\mathrm{in}} + \phi_{\mathrm{s}},\cdot]),$$ we have a commutative diagram of isomorphisms $$ \xymatrix@1{ (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}) \ar[rr]^{\varPsi_{i,0}} \ar[rd] & & (\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_{i,+}},\pdb_{\circ}) \ar[ld]\\ &(\polyv{k}^{*,*}_{\alpha}|_{\mathscr{W}_{i,+}},\pdb_{\alpha}) & }. $$ This is a \v{C}ech-type cocycle condition between the sheaves $\sfpolyv{k}^{*,*}_{\mathrm{sf}}|_{\mathscr{W}_i}$'s and $\polyv{k}^{*,*}_{\alpha}$, which can be understood as the original consistency condition defined using path-ordered products in \cite{kontsevich-soibelman04, gross2011real}. 
In particular, taking a local holomorphic function in $\bva{k}^{0}_{\alpha}(W_{\alpha})$ and restricting it to $U_x \times A_x$, we obtain elements in $\sfbva{k}^0_{\mathrm{sf}}(\mathscr{W}_i)$ that jump across the walls according to the wall crossing factors $\Theta_i$'s. \begin{center} \begin{figure}[h] \includegraphics[scale=0.8]{annulus.png} \caption{Wall crossing around a joint $\mathfrak{j}$}\label{fig:annulus} \end{figure} \end{center} \end{remark} \appendix \section{The Hartogs extension property}\label{sec:hartogs} The following lemma is an application of the Hartogs extension theorem \cite{rossi1963vector}. \begin{lemma}\label{lem:hartog_extension_1} Consider the analytic space $(\comp^*)^k \times \spec(\comp[\Sigma_{\tau}])$ for some $\tau$ and an open subset of the form $U \times V$, where $U \subset (\comp^*)^k$ and $V$ is a neighborhood of the origin $o \in \spec(\comp[\Sigma_{\tau}])$. Let $W := V \setminus \big( \bigcup_{\omega} V_{\omega} \big)$, where $\dim_{\real}(\omega)+2 \leq \dim_{\real}(\Sigma_{\tau})$ (i.e. $W$ is the complement of complex codimension $2$ orbits in $V$). Then the restriction $\cu{O}(U\times V) \rightarrow \cu{O}(U\times W)$ is a ring isomorphism. \end{lemma} \begin{proof} We first consider the case where $\dim_{\real}(\Sigma_{\tau}) \geq 2$ and $W = V \setminus \{0\}$. We can further assume that $\Sigma_{\tau}$ consists of just one cone $\sigma$, because the holomorphic functions on $V$ are those on $V \cap \sigma$ that agree on the overlaps. So we can write $$ \cu{O}(U \times W) = \left\{ \sum_{m \in \tanpoly_{\sigma}} a_m z^m \ \Big| \ a_m \in \cu{O}_{(\comp^*)^k}(U) \right\}, $$ i.e. as Laurent series converging in $W$. We may further assume that $W$ is a sufficiently small Stein open subset. Take $f = \sum_{m \in \tanpoly_{\sigma}} a_m z^m \in \cu{O}(U\times W)$. We have the corresponding holomorphic function $\sum_{m \in \tanpoly_{\sigma}} a_m(u) z^m$ on $W$ for each point $u \in U$, which can be extended to $V$ using the Hartogs extension theorem \cite{rossi1963vector} because $\{0\}$ is a compact subset of $V$ such that $W = V \setminus \{0\}$ is connected. Therefore, we have $a_m(u) = 0$ for $m \notin \sigma \cap \tanpoly_{\sigma}$ for each $u$, and hence $f = \sum_{\sigma \cap \tanpoly_{\sigma}} a_m z^m$ is an element in $\cu{O}(U\times V)$. For the general case, we use induction on the codimension of $\omega$ to show that any holomorphic function can be extended through $V_\omega \setminus \bigcup_{\tau} V_{\tau}$ with $\dim_{\real}(\tau) < \dim_{\real}(\omega)$. Taking a point $x \in V_{\omega} \setminus \bigcup_{\tau} V_\tau$, a neighborhood of $x$ can be written as $(\comp^*)^l \times \spec(\comp[\Sigma_{\omega}])$. By the induction hypothesis, we know that holomorphic functions can already be extended through $(\comp^*)^l \times \{0\}$. We conclude that any holomorphic function can be extended through $V_\omega \setminus \bigcup_{\tau} V_{\tau}$. \end{proof} We will make use of the following version of the Hartogs extension theorem, which can be found in e.g. \cite[p. 58]{jarnicki2008first}, to handle extension within codimension one cells $\rho$'s and maximal cells $\sigma$'s. \begin{theorem}[Hartogs extension theorem, see e.g. \cite{jarnicki2008first}]\label{thm:hartogs_extension_C_n_version} Let $U \subset \comp^n$ be a domain with $n\geq 2$, and $A \subset U$ such that $U \setminus A$ is still a domain. 
Suppose $\pi(U) \setminus \pi(A)$ is a non-empty open subset, and $\pi^{-1}(\pi(x)) \cap A$ is compact for every $x\in A$, where $\pi \colon \comp^{n} \rightarrow \comp^{n-1}$ is projection along one of the coordinate direction. Then the natural restriction $\cu{O}(U) \rightarrow \cu{O}(U\setminus A)$ is an isomorphism. \end{theorem} \begin{proof}[Proof of Lemma \ref{lem:hartogs_extension_2}] To prove the first statement, we apply Lemma \ref{lem:hartog_extension_1}. So we only need to show that, for $\rho \in \pdecomp^{[n-1]}$, a holomorphic function $f$ in $U_x \setminus \tsing \subset V(\rho)$ can be extended uniquely to $U_x$, where $U_x$ is some neighborhood of $x \in \reint(\rho) \cap \tsing$. Writing $V(\rho) = (\comp^*)^{n-1} \times \spec(\comp[\Sigma_{\rho}])$, we may simply prove that this is the case with $\Sigma_{\rho}$ consisting of a single ray $\sigma$ as in the proof of Lemma \ref{lem:hartog_extension_1}. Thus we can assume that $V(\rho) = (\comp^*)^{n-1} \times \comp$ and the open subset $U_x = U \times V$ for some connected $U$. We observe that extensions of holomorphic functions from $(U\setminus \tsing) \times V$ to $U \times V$ can be done by covering the former open subset with Hartogs' figures. To prove the second statement, we need to further consider extensions through $\reint(\mathfrak{j})$ for a joint $\mathfrak{j}$. For those joints lying in some codimension one stratum $\rho$, the argument is similar to the above. So we assume that $\sigma_{\mathfrak{j}} = \sigma$ is a maximal cell. We take a point $x \in \reint(\mathfrak{j})$ and work in a sufficiently small neighborhood $U$ of $x$. In this case, we may find a codimension one rational hyperplane $\omega$ containing $\mathfrak{j}$, together with a lattice embedding $\tanpoly_{\omega} \hookrightarrow \tanpoly_{\sigma}$ which induces the projection $\pi \colon (\comp^*)^{n} \rightarrow (\comp^*)^{n-1}$ along one of the coordinate directions. Letting $A = \modmap^{-1}(A \cap U)$ and applying Theorem \ref{thm:hartogs_extension_C_n_version}, we obtain extensions of holomorphic functions in $U$. \end{proof}
2205.09686v1
http://arxiv.org/abs/2205.09686v1
A new statistic on Dyck paths for counting 3-dimensional Catalan words
\documentclass[11pt,reqno, oneside]{amsart} \usepackage{pdfsync} \usepackage{geometry, tikz} \usepackage{hyperref, fullpage} \usepackage{diagbox} \usepackage{subcaption, enumitem} \usepackage{color} \usepackage{amsmath} \usepackage{multirow} \usepackage{bm} \usetikzlibrary{fit} \usepackage{makecell}\setcellgapes{2pt} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{definition} \newtheorem{remark}[thm]{Remark} \theoremstyle{definition} \newtheorem{question}[thm]{Question} \theoremstyle{definition} \newtheorem{obs}[thm]{Observation} \theoremstyle{definition} \newtheorem{ex}[thm]{Example} \newcommand\sumz[1]{\sum_{#1=0}^\infty} \newcommand{\egf}{exponential generating function} \newcommand{\inverse}{^{-1}} \newcommand{\D}{\mathcal{D}} \newcommand{\T}{\mathcal{T}} \newcommand{\M}{\mathcal{M}} \newcommand{\DL}{\mathcal{D}} \renewcommand{\S}{\mathcal{S}} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\ol}{\overline} \newcommand{\red}[1]{{\color{red}#1}} \DeclareMathOperator{\inv}{inv} \DeclareMathOperator{\des}{des} \DeclareMathOperator{\edge}{e} \DeclareMathOperator{\Av}{Av} \DeclareMathOperator{\Type}{Type} \DeclareMathOperator{\Asc}{Asc} \DeclareMathOperator{\Des}{Des} \DeclareMathOperator{\Step}{Step} \renewcommand{\thesubsection}{\arabic{subsection}} \newcommand{\todo}[1]{\vspace{2 mm}\par\noindent \marginpar[\flushright\textsc{ToDo}]{\textsc{ToDo}}\framebox{\begin{minipage}[c]{\textwidth} \tt #1 \end{minipage}}\vspace{2 mm}\par} \title{A new statistic on Dyck paths for counting 3-dimensional Catalan words} \author{Kassie Archer} \address[K. Archer]{University of Texas at Tyler, Tyler, TX 75799 USA} \email{[email protected]} \author{Christina Graves} \address[C. Graves]{University of Texas at Tyler, Tyler, TX 75799 USA} \email{[email protected]} \begin{document} \maketitle \begin{abstract} A 3-dimensional Catalan word is a word on three letters so that the subword on any two letters is a Dyck path. For a given Dyck path $D$, a recently defined statistic counts the number of Catalan words with the property that any subword on two letters is exactly $D$. In this paper, we enumerate Dyck paths with this statistic equal to certain values, including all primes. The formulas obtained are in terms of Motzkin numbers and Motzkin ballot numbers. \end{abstract} \section{Introduction} Dyck paths of semilength $n$ are paths from the origin $(0,0)$ to the point $(2n,0)$ that consist of steps $u=(1,1)$ and $d=(1,-1)$ and do not pass below the $x$-axis. Let us denote by $\D_n$ the set of Dyck paths of semilength $n$. It is a well-known fact that $\D_n$ is enumerated by the Catalan numbers. A \emph{3-dimensional Catalan path} (or just \emph{Catalan path}) is a higher-dimensional analog of a Dyck path. It is a path from $(0,0,0)$ to $(n,n,n)$ with steps $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$, so at each lattice point $(x,y,z)$ along the path, we have $ x\geq y\geq z$. A \emph{3-dimensional Catalan word} (or just \emph{Catalan word}) is the word on the letters $\{x,y,z\}$ associated to a Catalan path where $x$ corresponds to the step in the $x$-direction $(1,0,0)$, $y$ corresponds to the step in the $y$-direction $(0,1,0)$, and $z$ corresponds to a step in the $z$ direction $(0,0,1)$. 
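Since the defining condition is just a prefix inequality, 3-dimensional Catalan words are easy to generate directly. The following short Python sketch (an illustration only, with hypothetical helper names; it is not part of the results of this paper) enumerates them and reproduces the list for $n=2$ displayed below.
\begin{verbatim}
# Illustrative sketch: generate all 3-dimensional Catalan words of length 3n
# by enforcing the prefix condition #x >= #y >= #z at every step.
def catalan_words(n):
    def rec(word, cx, cy, cz):
        if cx == cy == cz == n:
            yield word
            return
        if cx < n:                  # an x-step never violates the condition
            yield from rec(word + 'x', cx + 1, cy, cz)
        if cy < cx:                 # a y-step is allowed only while #y < #x
            yield from rec(word + 'y', cx, cy + 1, cz)
        if cz < cy:                 # a z-step is allowed only while #z < #y
            yield from rec(word + 'z', cx, cy, cz + 1)
    yield from rec('', 0, 0, 0)

print(list(catalan_words(2)))
# ['xxyyzz', 'xxyzyz', 'xyxyzz', 'xyxzyz', 'xyzxyz']  -- the five words below
print(len(list(catalan_words(3))))  # 42 Catalan words of length 9
\end{verbatim}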
As an example, the complete list of Catalan words with $n=2$ is: $$xxyyzz \quad xxyzyz \quad xyxyzz \quad xyxzyz \quad xyzxyz.$$ Given a Catalan word $C$, the subword consisting only of $x$'s and $y$'s corresponds to a Dyck path by associating each $x$ to a $u$ and each $y$ to a $d$. Let us call this Dyck path $D_{xy}(C)$. Similarly, the subword consisting only of $y$'s and $z$'s corresponds to a Dyck path, denoted $D_{yz}(C)$, obtained by relabeling each $y$ with a $u$ and each $z$ with a $d$. For example, if $C=xxyxyzzxyyzz$, then $D_{xy}(C) = uudududd$ and $D_{yz}(C) =uudduudd$. Catalan words have been studied previously; see, for example, \cite{GuProd20, Prod, Sulanke, Zeil}. In \cite{GuProd20} and \cite{Prod}, the authors study Catalan words $C$ of length $3n$ with $D_{xy}(C)=udud\ldots ud$ and determine that the number of such Catalan words is equal to $\frac{1}{2n+1}{{3n}\choose{n}}$. Notice that when $n=2$, the three Catalan words with this property are those in the above list whose $x$'s and $y$'s alternate. In \cite{ArcGra21}, though it was not stated explicitly, it was found that the number of Catalan words $C$ of length $3n$ with $D_{xy}(C)=D_{yz}(C)$ is also $\frac{1}{2n+1}{{3n}\choose{n}}$. Such Catalan words have the property that the subword consisting of $x$'s and $y$'s is the same pattern as the subword consisting of $y$'s and $z$'s. For $n=2$, the three Catalan words with this property are: \[ xxyyzz \quad xyxzyz \quad xyzxyz.\] The authors further show that for any fixed Dyck path $D$, the number of Catalan words $C$ with $D_{xy}(C)=D_{yz}(C)=D$ is given by $$L(D) = \prod_{i=1}^{n-1} {r_i(D) + s_i(D) \choose r_i(D)},$$ where $r_i(D)$ is the number of down steps between the $i^{\text{th}}$ and $(i+1)^{\text{st}}$ up step in $D$, and $s_i(D)$ is the number of up steps between the $i^{\text{th}}$ and $(i+1)^{\text{st}}$ down step in $D$. The table in Figure~\ref{CatWord} shows all Dyck words $D \in \D_3$ and all corresponding Catalan words $C$ with $D_{xy}(C)=D_{yz}(C)=D$. \begin{figure} \begin{center} \begin{tabular}{c|c|l} ${D}$ & ${L(D)}$ & Catalan words $C$ with ${D_{xy}(C)=D_{yz}(C)=D}$\\ \hline $uuuddd$ & 1 & $xxxyyyzzz$\\ \hline $uududd$ & 1 & $xxyxyzyzz$\\ \hline $uuddud$ & 3 & $xxyyzzxyz, \ xxyyzxzyz,\ xxyyxzzyz$\\ \hline $uduudd$ & 3 & $xyzxxyyzz, \ xyxzxyyzz, \ xyxxzyyzz$\\ \hline $ududud$ & 4 & $xyzxyzxyz, \ xyzxyxzyz, \ xyxzyzxyz, \ xyxzyxzyz$ \end{tabular} \end{center} \caption{ All Dyck words $D \in \D_3$, and all corresponding Catalan words $C$ with ${D_{xy}(C)=D_{yz}(C)=D}$. There are $\frac{1}{7}{9 \choose 3} = 12$ total Catalan words $C$ of length $9$ with ${D_{xy}(C)=D_{yz}(C)}$. } \label{CatWord} \end{figure} As an application of the statistic $L(D)$, in \cite{ArcGra21} it was found that the number of 321-avoiding permutations of length $3n$ composed only of 3-cycles is equal to the following sum over Dyck paths: \begin{equation}\label{eqnSumL2} |\S_{3n}^\star(321)| = \sum_{D \in \D_n} L(D)\cdot 2^{h(D)}, \end{equation} where $h(D)$ is the number of \emph{returns}, that is, the number of times a down step in the Dyck path $D$ touches the $x$-axis. In this paper, we study this statistic more directly, asking the following question. \begin{question} For a fixed $k$, how many Dyck paths $D \in \D_n$ have $L(D)=k$?\end{question} Equivalently, we could ask: how many Dyck paths $D \in \D_n$ correspond to exactly $k$ Catalan words $C$ with $D_{xy}(C) = D_{yz}(C) = D$? We completely answer this question when $k=1$, $k$ is a prime number, or $k=4$.
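The statistic $L$ and the counts of Dyck paths with a prescribed value of $L$ are easy to check by brute force for small $n$. The following Python sketch (an illustration with hypothetical helper names, not code from \cite{ArcGra21}) computes the $r$-$s$ counts and $L(D)$ directly from a Dyck word, reproduces the $L$-values in the table in Figure~\ref{CatWord}, and confirms that exactly $9$ Dyck paths of semilength $5$ have $L=1$, in agreement with Theorem~\ref{TheoremL1} below.
\begin{verbatim}
# Illustrative sketch: compute r_i, s_i and L(D) from a Dyck word, and count
# Dyck paths of semilength n by their L-value (brute force, small n only).
from math import comb
from collections import Counter

def rs_counts(word):
    # r_i = number of d's between the i-th and (i+1)-st u,
    # s_i = number of u's between the i-th and (i+1)-st d,  i = 1, ..., n-1
    ups = [j for j, c in enumerate(word) if c == 'u']
    downs = [j for j, c in enumerate(word) if c == 'd']
    n = len(ups)
    r = [sum(1 for j in downs if ups[i] < j < ups[i + 1]) for i in range(n - 1)]
    s = [sum(1 for j in ups if downs[i] < j < downs[i + 1]) for i in range(n - 1)]
    return r, s

def L(word):
    r, s = rs_counts(word)
    value = 1
    for ri, si in zip(r, s):
        value *= comb(ri + si, ri)
    return value

def dyck_words(n):
    # all Dyck words of semilength n
    def rec(word, u, d):
        if u == d == n:
            yield word
            return
        if u < n:
            yield from rec(word + 'u', u + 1, d)
        if d < u:
            yield from rec(word + 'd', u, d + 1)
    yield from rec('', 0, 0)

print({D: L(D) for D in dyck_words(3)})
# {'uuuddd': 1, 'uududd': 1, 'uuddud': 3, 'uduudd': 3, 'ududud': 4}

print(Counter(L(D) for D in dyck_words(5))[1])   # 9 paths with L = 1, i.e. M_4
\end{verbatim}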
The Dyck paths with $L=1$ are found to be counted by the Motzkin numbers; see Theorem~\ref{TheoremL1}. When $k$ is prime, the number of Dyck paths with $L=k$ can be expressed in terms of the Motzkin numbers; these results are found in Theorems~\ref{TheoremL2} and~\ref{TheoremLp}. Finally, when $k=4$, the number of Dyck paths with $L=4$ can also be expressed in terms of the Motzkin numbers; these results are found in Theorem~\ref{thm:L4}. A summary of these values for $k \in \{1,2,\ldots, 7\}$ can be found in the table in Figure~\ref{TableL}. \begin{figure}[h] \renewcommand{\arraystretch}{1.2} \begin{tabular}{|r|l|c|c|} \hline $|\D_n^k|$ & \textbf{Sequence starting at $n=k$} & \textbf{OEIS} & \textbf{Theorem} \\ \hline \hline $|\D_n^1|$ & $1, 1, 2, 4, 9, 21, 51, 127, 323, \ldots$ & A001006 & Theorem \ref{TheoremL1}\\ \hline $|\D_n^2|$ & $1,0,1,2,6,16,45,126,357,\ldots$ & A005717& Theorem \ref{TheoremL2}\\ \hline $|\D_n^3|$ &$2, 2, 4, 10, 26, 70, 192, 534, \ldots$ & $2\cdot($A005773$)$ & \multirow{1}{*}{Theorem \ref{TheoremLp}} \\ \hline $|\D_n^4|$ & $2, 5, 9, 25, 65, 181, 505, 1434, \ldots$ &$2\cdot($A025565$)$ + A352916 & Theorem \ref{thm:L4}\\ \hline $|\D_n^5|$ &$2, 6, 14, 36, 96, 262, 726, 2034, \ldots$ & $2\cdot($A225034$)$ &\multirow{1}{*}{Theorem \ref{TheoremLp}} \\ \hline $|\D_n^6|$ & $14, 34, 92, 252, 710, 2026, 5844, \ldots$ && Section~\ref{SecRemarks}\\ \hline $|\D_n^7|$ &$2, 10, 32, 94, 272, 784, 2260, 6524, \ldots$ & $2\cdot($A353133$)$ & \multirow{1}{*}{Theorem \ref{TheoremLp}}\\ \hline \end{tabular} \caption{The number of Dyck paths $D$ of semilength $n$ with $L(D)=k$.} \label{TableL} \end{figure} \section{Preliminaries} We begin by stating a few basic definitions and introducing relevant notation. \begin{defn} Let $D \in \D_n$. \begin{enumerate} \item An \emph{ascent} of $D$ is a maximal set of contiguous up steps; a \emph{descent} of $D$ is a maximal set of contiguous down steps. \item If $D$ has $k$ ascents, the \emph{ascent sequence} of $D$ is given by $\Asc(D) = (a_1, a_2, \ldots, a_k)$ where $a_1$ is the length of the first ascent and $a_i - a_{i-1}$ is the length of the $i$th ascent for $2 \leq i \leq k$. \item Similarly, the \emph{descent sequence} of $D$ is given by $\Des(D) = (b_1, \ldots, b_k)$ where $b_1$ is the length of the first descent and $b_i - b_{i-1}$ is the length of the $i$th descent for $2 \leq i \leq k$. We also occasionally use the convention that $a_0=b_0 = 0$. \item The \emph{$r$-$s$ array} of $D$ is the $2 \times (n-1)$ array \[ \begin{pmatrix} r_1 & r_2 & \cdots & r_{n-1}\\ s_1 & s_2 & \cdots & s_{n-1} \end{pmatrix} \] where $r_i$ is the number of down steps between the $i^{\text{th}}$ and $(i+1)^{\text{st}}$ up step, and $s_i$ is the number of up steps between the $i^{\text{th}}$ and $(i+1)^{\text{st}}$ down step. \item The statistic $L(D)$ is defined by $$L(D) = \prod_{i=1}^{n-1} {r_i(D) + s_i(D) \choose r_i(D)}.$$ \end{enumerate} \end{defn} We note that both the ascent sequence and the descent sequence are increasing, that $a_i \geq b_i > 0$ for any $i$, and that $a_k = b_k = n$ for any Dyck path of semilength $n$. Furthermore, it is clear that any pair of sequences satisfying these properties produces a unique Dyck path.
There is also a relationship between the $r$-$s$ array of $D$ and the ascent and descent sequences as follows: \begin{equation}\label{rs} r_k = \begin{cases} 0 & \text{if } k \notin \Asc(D) \\ b_i - b_{i-1}& \text{if } k = a_i \text{ for some } a_i \in \Asc(D), \end{cases} \end{equation} \begin{equation}\label{rs2} s_k = \begin{cases} 0 & \text{if } k \notin \Des(D) \\ a_{i+1} - a_i & \text{if } k = b_i \text{ for some } b_i \in \Des(D). \end{cases} \end{equation} The following example illustrates these definitions. \begin{figure} \begin{tikzpicture}[scale=.45] \draw[help lines] (0,0) grid (30,5); \draw[thick] (0,0)--(2,2)--(4,0)--(6,2)--(7,1)--(10,4)--(12,2)--(15,5)--(16,4)--(17,5)--(19,3)--(20,4)--(22,2)--(25,5)--(30,0); \end{tikzpicture} \caption{Dyck path $D$ with $L(D)=24$.} \label{fig:dyckexample} \end{figure} \begin{ex} \label{RSEx} Consider the Dyck path \[ D = uudduuduuudduuududdudduuuddddd, \] which is pictured in Figure~\ref{fig:dyckexample}. The ascent sequence and descent sequence of $D$ are \[ \Asc(D) = (2, 4, 7, 10, 11, 12, 15) \quad\text { and } \quad \Des(D) = (2, 3, 5, 6, 8, 10, 15), \] and the $r$-$s$ array of $D$ is \[ \left( \begin{array}{cccccccccccccc} 0 & 2 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 0\\ 0 & 2 & 3 & 0 & 3 & 1 & 0 & 1 & 0 & 3 & 0 & 0 & 0 & 0\end{array} \right). \] In order to compute $L(D)$, we note that if the $r$-$s$ array has at least one 0 in column $i$, then ${r_i + s_i \choose r_i} = 1$. There are only two columns, columns 2 and 10, where both entries are nonzero. Thus, \[ L(D) = {r_2 + s_2 \choose r_2}{r_{10} + s_{10} \choose r_{10}}={2 + 2 \choose 2} {1 + 3 \choose 3} = 24. \] \end{ex} The results in this paper rely on Motzkin numbers and Motzkin paths. A \emph{Motzkin path of length $n$} is a path from $(0,0)$ to $(n,0)$ composed of up steps $u=(1,1),$ down steps $d=(1,-1)$, and horizontal steps $h=(1,0)$, that does not pass below the $x$-axis. The set of Motzkin paths of length $n$ will be denoted $\mathcal{M}_n$ and the $n$th Motzkin number is $M_n = |\mathcal{M}_n|$. (See OEIS A001006.) We will also be considering modified Motzkin words as follows. Define $\mathcal{M}^*_n$ to be the set of words of length $n$ on the alphabet $\{h, u, d, *\}$ where the removal of all the $*$'s results in a Motzkin path. For each modified Motzkin word $M^* \in \M_{n-1}^*$, we can find a corresponding Dyck path in $\D_n$ by the procedure described in the following definition. \begin{defn} \label{theta} Let $M^* \in \mathcal{M}^*_{n-1}$. Define $D_{M^*}$ to be the Dyck path in $\D_n$ where $\Asc(D_{M^*})$ is the increasing sequence with elements from the set \[ \{j : m_j = d \text{ or } m_j=*\} \cup \{n\} \] and $\Des(D_{M^*})$ is the increasing sequence with elements from the set \[ \{j : m_j = u \text{ or } m_j=*\} \cup \{n\}. \] Furthermore, given $D\in\D_n$, define $M^*_D = m_1m_2\cdots m_{n-1} \in \mathcal{M}^*_{n-1}$ by \[ m_i = \begin{cases} * & \text{if } r_i > 0 \text{ and } s_i > 0\\ u & \text{if } r_i=0 \text{ and } s_i>0\\ d & \text{if } r_i>0 \text{ and } s_i=0\\ h & \text{if } r_i=s_i=0.\\ \end{cases} \] \end{defn} Notice that this process defines a one-to-one correspondence between $\mathcal{M}^*_{n-1}$ and $\D_n$. That is, $D_{M_D^*} = D$ and $M^*_{D_{M^*}} = M^*$. Because this is used extensively in future proofs, we provide the following example. 
\begin{ex} Let $D$ be the Dyck path defined in Example~\ref{RSEx}, pictured in Figure~\ref{fig:dyckexample}, with $r$-$s$ array: \[ \left( \begin{array}{cccccccccccccc} 0 & 2 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 0\\ 0 & 2 & 3 & 0 & 3 & 1 & 0 & 1 & 0 & 3 & 0 & 0 & 0 & 0\end{array} \right). \] The columns of the $r$-$s$ array help us to easily find $M^*_D$: \begin{itemize} \item if column $i$ has two 0's, the $i$th letter in $M^*_D$ is $h$; \item if column $i$ has a 0 on top and a nonzero number on bottom, the $i$th letter in $M^*_D$ is $u$; \item if column $i$ has a 0 on bottom and a nonzero number on top, the $i$th letter in $M^*_D$ is $d$; and \item if column $i$ has two nonzero entries, the $i$th letter in $M^*_D$ is $*$. \end{itemize} Thus, \[ M^*_D = h*uduuduh*ddhh. \] Conversely, given $M^*_D$ as above, we find $D=D_{M_D^*}$ by first computing $\Asc(D)$ and $\Des(D)$. The sequence $\Asc(D)$ contains all the positions in $M^*_D$ that are either $d$ or $*$ while $\Des(D)$ contains all the positions in $M^*_D$ that are either $u$ or $*$. Thus, \[ \Asc(D) = (2, 4, 7, 10, 11, 12, 15) \quad \text{and} \quad \Des(D) = (2, 3, 5, 6, 8, 10, 15).\] \end{ex} Notice that $L(D)$ is determined by the product of the binomial coefficients corresponding to the positions of $*$'s in $M^*_D$. Finally, we let $\D_n^k$ denote the set of Dyck paths $D$ with semilength $n$ and $L(D) = k$. With these definitions at hand, we are now ready to prove our main results. \section{Dyck paths with $L=1$ or $L=\binom{r_k+s_k}{s_k}$ for some $k$} \label{SecRS} In this section, we enumerate Dyck paths $D \in \D_n$ where $M^*_D$ has at most one $*$. Because $L(D)$ is determined by the product of the binomial coefficients corresponding to the $*$ entries in $M^*_D$, Dyck paths with $L=1$ correspond exactly to the cases where $M^*_D$ has no $*$'s, that is, where $M^*_D$ is a Motzkin path. Therefore, these Dyck paths will be enumerated by the well-studied Motzkin numbers. \begin{thm} \label{TheoremL1} For $n\geq 1$, the number of Dyck paths $D$ with semilength $n$ and $L(D)=1$ is \[ |\D_n^1| = M_{n-1}, \] where $M_{n-1}$ is the $(n-1)^{\text{st}}$ Motzkin number. \end{thm} \begin{proof} Let $D \in \D_n^1$. Since $L(D) = 1$, it must be the case that either $r_i(D) = 0$ or $s_i(D) = 0$ for all $i$. By Definition~\ref{theta}, $M^*_D$ consists only of elements in $\{h, u, d\}$ and is thus a Motzkin path in $\mathcal{M}_{n-1}$. This process is invertible: given any Motzkin path $M \in \mathcal{M}_{n-1} \subseteq \mathcal{M}^*_{n-1}$, the Dyck path $D_M$ satisfies $M^*_{D_M} = M$ and hence $L(D_M) = 1$. \end{proof} As an example, the table in Figure \ref{L1Figure} shows the $M_4 = 9$ Dyck paths in $\D_5^1$ and their corresponding Motzkin paths.
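Theorem~\ref{TheoremL1} is easy to confirm by brute force for small $n$. The Python sketch below is again only an illustration (it reuses the function \texttt{L} from the earlier sketch and is not part of the argument); it generates all Dyck words of semilength $n$ and compares the number of those with $L=1$ to the Motzkin numbers, computed from the standard recurrence $(k+2)M_k = (2k+1)M_{k-1} + 3(k-1)M_{k-2}$.
\begin{verbatim}
def dyck_words(n):
    # generate all Dyck words of semilength n as strings over {'u','d'}
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + 'u', ups + 1, downs)
        if downs < ups:
            yield from rec(word + 'd', ups, downs + 1)
    return rec('', 0, 0)

def motzkin(k):
    # k-th Motzkin number via (k+2) M_k = (2k+1) M_{k-1} + 3(k-1) M_{k-2}
    M = [1, 1]
    for m in range(2, k + 1):
        M.append(((2 * m + 1) * M[m - 1] + 3 * (m - 1) * M[m - 2]) // (m + 2))
    return M[k]

# |D_n^1| = M_{n-1} for small n (L as in the earlier sketch)
for n in range(1, 9):
    assert sum(1 for w in dyck_words(n) if L(w) == 1) == motzkin(n - 1)
\end{verbatim}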
\begin{figure} \begin{center} \begin{tabular}{c|c|c|c} Dyck path $D$& $r$-$s$ array & $M^*_D$ & Motzkin path\\ \hline \begin{tikzpicture}[scale=.2, baseline=0] \draw[help lines] (0,0) grid (10,5); \draw[thick] (0,0)--(5,5)--(10,0); \node at (0,5.2) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 0 & 0\\0&0&0&0\end{pmatrix}$ & $hhhh$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(4,0); \end{tikzpicture} \\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,4); \draw[thick] (0,0)--(4,4)--(5,3)--(6,4)--(10,0); \node at (0,4.2) {\color{red!90!black}\ }; \end{tikzpicture} & \begin{tabular}{c}$\begin{pmatrix} 0 & 0 & 0 & 1\\1&0&0&0\end{pmatrix}$\end{tabular} & $uhhd$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,1)--(2,1)--(3,1)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,4); \draw[thick] (0,0)--(4,4)--(6,2)--(7,3)--(10,0); \node at (0,4.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 0 & 2\\0&1&0&0\end{pmatrix}$ & $huhd$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,0)--(2,1)--(3,1)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,4); \draw[thick] (0,0)--(4,4)--(7,1)--(8,2)--(10,0); \node at (0,4.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 0 & 3\\0&0&1&0\end{pmatrix}$ & $hhud$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,0)--(2,0)--(3,1)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,4); \draw[thick] (0,0)--(3,3)--(4,2)--(6,4)--(10,0); \node at (0,4.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 1 & 0\\2&0&0&0\end{pmatrix}$ & $uhdh$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,1)--(2,1)--(3,0)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,3); \draw[thick] (0,0)--(3,3)--(5,1)--(7,3)--(10,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 2 & 0\\0&2&0&0\end{pmatrix}$ & $hudh$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,0)--(2,1)--(3,0)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,4); \draw[thick] (0,0)--(2,2)--(3,1)--(6,4)--(10,0); \node at (0,4.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 1 & 0 & 0\\3&0&0&0\end{pmatrix}$ & $udhh$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); \draw[thick] (0,0)--(1,1)--(2,0)--(3,0)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,3); \draw[thick] (0,0)--(3,3)--(4,2)--(5,3)--(6,2)--(7,3)--(10,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 0 & 1 & 1\\1&1&0&0\end{pmatrix}$ & $uudd$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,2); \draw[thick] (0,0)--(1,1)--(2,2)--(3,1)--(4,0); \end{tikzpicture}\\ \hline \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (10,3); \draw[thick] (0,0)--(2,2)--(3,1)--(5,3)--(7,1)--(8,2)--(10,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture} & $\begin{pmatrix} 0 & 1 & 0 & 2\\2&0&1&0\end{pmatrix}$ & $udud$ & \begin{tikzpicture}[scale=.3] \draw[help lines] (0,0) grid (4,1); 
\draw[thick] (0,0)--(1,1)--(2,0)--(3,1)--(4,0); \end{tikzpicture}\\ \hline \end{tabular} \end{center} \caption{The nine Dyck paths of semilength 5 having $L=1$ and their corresponding Motzkin paths of length 4.} \label{L1Figure} \end{figure} We now consider Dyck paths $D \in \D_n$ where $M^*_D$ has exactly one $*$. Such Dyck paths have $L=\binom{r_k+s_k}{s_k}$ where $k$ is the position of the $*$ in $M^*_D$. We denote the set of Dyck paths of semilength $n$ with $L=\binom{r+s}{s}$ obtained in this way by $\D_{n}^{r,s}$. For ease of notation, if $D \in \D_{n}^{r,s}$, define \begin{itemize} \item $x(D)$ to be the number of ups before the $*$ in $M^*_D$, and \item $y(D)$ to be the number of downs before the $*$ in $M^*_D$. \end{itemize} We can then easily compute the value of $L(D)$ based on $x(D)$ and $y(D)$ as stated in the following observation. \begin{obs}\label{obsRS} Suppose $D \in \D_{n}^{r,s}$ and write $x=x(D)$ and $y=y(D)$. Then in $M^*_D$, the following are true. \begin{itemize} \item The difference in positions of the $(y+1)$st occurrence of either $u$ or $*$ and the $y$th occurrence of $u$ is $r$; or, when $y=0$, the first occurrence of $u$ is in position $r$. \item The difference in positions of the $(x+2)$nd occurrence of either $d$ or $*$ and the $(x+1)$st occurrence of either $d$ or $*$ is $s$; or, when $x$ is the number of downs in $M^*_D$, the last occurrence of $d$ is in position $n-s$. \end{itemize} \end{obs} \begin{ex} Consider the Dyck path \[ D = uuuuudduudddduuduudddd. \] The ascent sequence and descent sequence of $D$ are \[ \Asc(D) = (5, 7, 9, 11) \quad\text { and } \quad \Des(D) = (2, 6, 7, 11), \] and the $r$-$s$ array of $D$ is \[ \left( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 0 \end{array} \right). \] There is only one column, column 7, where both entries are nonzero. Thus, \[ L(D) = {r_7 + s_7 \choose r_7}={4 + 2 \choose 4} = 15, \] and $D \in \D_{11}^{4,2}$. Note also that \[ M^*_D = huhhdu*hdh \] has exactly one $*$. Now let us compute $L(D)$ more directly using Observation~\ref{obsRS}. Notice $x(D) = 2$ and $y(D) = 1$ since there are two $u$'s before the $*$ in $M^*_D$ and one $d$ before the $*$. In this case, the position of the second occurrence of either $u$ or $*$ is 6 and the position of the first occurrence of $u$ is 2, so $r=6-2=4$. Since there are only two downs in $M^*_D$, we note the last $d$ occurs in position 9, so $s=11-9=2$. \end{ex} In order to proceed, we need to define the Motzkin ballot numbers. The \emph{Motzkin ballot numbers} are the number of Motzkin paths that have their first down step in a fixed position. These numbers appear in \cite{Aigner98} and are similar to the well-known Catalan ballot numbers (see \cite{Brualdi}). If $n \geq k$, we let $\mathcal{T}_{n,k}$ be the set of Motzkin paths of length $n$ with the first down in position $k$, and we define $\T_{k-1, k}$ to be the set containing the single Motzkin path consisting of $k-1$ horizontal steps. Given any Motzkin path $M$, define the \emph{reverse of $M$}, denoted $M^R$, to be the Motzkin path found by reading $M$ in reverse and switching $u$'s and $d$'s. For example, if $M=huuhdhd$, $M^R = uhuhddh$. Given $M \in \mathcal{T}_{n,k}$, the Motzkin path $M^R$ has its last up in position $n-k+1$. The following lemma gives the generating function for the Motzkin ballot numbers $T_{n,k} = |\mathcal{T}_{n,k}|$. \begin{lem} \label{lemGFt} For positive integers $n \geq k$, let $T_{n,k} = |\T_{n,k}|$.
Then for a fixed $k$, the generating function for $T_{n,k}$ is given by \[ \sum_{n=k-1}^{\infty} T_{n,k}x^n = \left(1+xm(x)\right)^{k-1}x^{k-1}. \] \end{lem} \begin{proof} Consider a Motzkin path of length $n$ with the first down in position $k$. It can be rewritten as \[ a_1a_2\cdots a_{k-1} \alpha_1 \alpha_2 \cdots \alpha_{k-1} \] where either \begin{itemize} \item $a_i = h$ and $\alpha_i$ is the empty word, or \item $a_i = u$ and $\alpha_i$ is $dM_i$ for some Motzkin word $M_i$, \end{itemize} for any $1 \leq i \leq k-1$. The generating function is therefore $(x + x^2m(x))^{k-1}$. \end{proof} In later proofs we decompose certain Motzkin paths as shown in the following definition. \begin{defn} \label{PrPs} Let $r$, $s$, and $n$ be positive integers with $n \geq r+ s -2$, and let $P \in \mathcal{T}_{n, r+s-1}$. Define $P_s$ to be the maximal Motzkin subpath in $P$ that begins at the $r$th entry, and define $P_r$ to be the Motzkin path formed by removing $P_s$ from $P$. \end{defn} Given $P \in \mathcal{T}_{n, r+s-1}$, notice that $P_r \in \mathcal{T}_{\ell, r}$ for some $r-1 \leq \ell \leq n-s + 1$ and $P_s \in \mathcal{T}_{n-\ell, s}$. In other words, the first down in $P_s$ must be in position $s$ (or $P_s$ consists of $s-1$ horizontal steps), and the first down in $P_r$ must be in position $r$ (or $P_r$ consists of $r-1$ horizontal steps). This process is invertible as follows. Given $P_r \in \mathcal{T}_{\ell,r}$ and $P_s \in \mathcal{T}_{n-\ell,s}$, form a Motzkin path $P \in \mathcal{T}_{n, r+s-1}$ by inserting $P_s$ after the $(r-1)$st element in $P_r$. Because this process is used extensively in subsequent proofs, we illustrate this process with an example below. \begin{ex} \label{exBreakM} Let $r=3$, $s=4$, and $n=13$. Suppose $P = uhuhhdhhdudud \in \mathcal{T}_{13,6}$. By definition, $P_s$ is the maximal Motzkin path obtained from $P$ by starting at the 3rd entry: \[ P = uh\framebox{$uhhdhh$}dudud. \] Thus, $P_s = uhhdhh \in \mathcal{T}_{6, 4}$ as seen in the boxed subword of $P$ above, and $P_r = uhdudud \in \mathcal{T}_{7, 3}$. Conversely, given $P$ as shown above and $r=3$, we note that the maximal Motzkin subpath in $P$ starting at position 3 is exactly the boxed part $P_s$. \end{ex} Using the Motzkin ballot numbers and this decomposition of Motzkin paths, we can enumerate the set of Dyck paths in $\mathcal{D}_n^{r,s}$. These are enumerated by first considering the number of returns. Suppose a Dyck path $D \in \D_n$ has a return after $2k$ steps with $k < n$. Then $r_k(D)$ is the length of the descent ending where $D$ has the return, and $s_k(D)$ is the length of the ascent starting in position $2k+1$. Thus, the binomial coefficient ${r_k+ s_k \choose r_k} > 1$. This implies that if $D \in \mathcal{D}_n^{r,s}$, it can have at most two returns (including the end). Dyck paths in $\mathcal{D}_n^{r,s}$ that have exactly two returns are counted in Lemma~\ref{RSHit2}, and those that have a return only at the end are counted in Lemma~\ref{RSHit1}. \begin{lem}\label{RSHit2} For $r\geq 1, s\geq 1$, and $n\geq r+s$, the number of Dyck paths $D \in \D_n^{r,s}$ that have two returns is $T_{n-2, r+s-1}$. \end{lem} \begin{proof} We will find a bijection between the set of Dyck paths in $\D_n^{r,s}$ that have exactly two returns and $\mathcal{T}_{n-2, r+s-1}$. First, suppose $P \in \mathcal{T}_{n-2, r+s-1}$. Thus, there is some $r-1 \leq \ell \leq n-s+1$ so that $P_r \in \mathcal{T}_{\ell, r}$ and $P_s \in \mathcal{T}_{n-2-\ell, s}$ where $P_r$ and $P_s$ are as defined in Definition~\ref{PrPs}.
Now create the modified Motzkin word $M^* \in \M_{n-1}^*$ by concatenating the reverse of $P_r$, the letter $*$, and the word $P_s$; that is, $M^* = P_r^R*P_s$. Because $P_r$ and $P_s$ have a combined total length of $n-2$, the modified Motzkin word $M^*$ is length $n-1$. Let $D = D_{M^*}$ as defined in Definition~\ref{theta} and let $x = x(D)$ and $y= y(D)$. Since $M^*$ has only the Motzkin word $P_r^R$ before $*$, we have $x=y$ and $D$ must have exactly two returns. Using Observation~\ref{obsRS}, we can show that $D \in \D_n^{r,s}$ as follows. The $(y+1)$st occurrence of either a $u$ or $*$ is the $*$ and the $y$th occurrence of $u$ is the last $u$ in $P_r^R$; the difference in these positions is $r$. Also, the $(x+1)$st occurrence of either a $d$ or $*$ is the $*$ and the $(x+2)$nd occurrence of either a $d$ or $*$ is the first $d$ in $P_s$; the difference in these positions is $s$. To see that this process is invertible, consider any Dyck path $D\in\D_n^{r,s}$ that has exactly two returns. Since $D\in\D_n^{r,s}$, $M^*_D$ has exactly one $*$. Furthermore, since $D$ has a return after $2k$ steps for some $k < n$, it must be that $*$ decomposes $M^*_D$ into two Motzkin paths. That is, the subword of $M^*_D$ before the $*$ is a Motzkin path as well as the subword of $M^*_D$ after the $*$. We will call the subword of $M^*_D$ consisting of the first $k-1$ entries $M_r$ and the subword of $M^*_D$ consisting of the last $n-1-k$ entries $M_s$. Since $r_k=r$ and there are the same number of ups and downs before the $*$ in $M^*_D$, the last up before $*$ must be in position $k-r$. Similarly, since $s_k=s$, the first down after $*$ must be in position $k+s$. Thus, $M_r^R \in \T_{k-1,r}$ and $M_s \in \T_{n-1-k, s}$. Let $P$ be the Motzkin path formed by inserting $M_s$ after the $(r-1)$st element in $M_r^R$. Then $P \in \T_{n-2, r+s-1}$ as desired. \end{proof} The following example shows the correspondence. \begin{ex} Let $r=3$, $s=4$, and $n=15$. Suppose $P = uhuhhdhhdudud \in \mathcal{T}_{13,6}$. The corresponding Dyck path $D \in \D_{15}^{3, 4}$ is found as follows. First, find $P_r = uhdudud$ and $P_s = uhhdhh$ as in Example~\ref{exBreakM}. Then let $M^* = P_r^R*P_s$ or \[ M^* = ududuhd*uhhdhh.\] Letting $D = D_{M^*}$, we see that $x(D) = y(D) = 3$. The fourth occurrence of either $u$ or $*$ is the $*$ in position $8$, and the third occurrence of $u$ is in position $5$, so $r=8-5=3$. Similarly, the fourth occurrence of either $d$ or $*$ is the $*$ in position 8, and the fifth occurrence of $d$ is in position 12, so $s=12-8=4$ as desired. \sloppypar{For completion, we write the actual Dyck path $D$ using Definition~\ref{theta} by first seeing $\Asc(D)~=~(2, 4, 7, 8, 12,15)$ and $\Des(D) = (1, 3, 5, 8, 9, 15)$. Thus} \[ D = uuduudduuudduddduuuuduuudddddd.\] \end{ex} Lemma~\ref{RSHit2} counted the Dyck paths in $\D_n^{r,s}$ that have exactly two returns; the ensuing lemma counts those Dyck paths in $\D_n^{r,s}$ that have only one return (at the end). \begin{lem} \label{RSHit1} For $r\geq 1, s\geq 1$, and $n\geq r+s+2$, the number of Dyck paths $D \in \D_n^{r,s}$ that only have a return at the end is \[ \sum_{i=0}^{n-2-s-r}(i+1)M_i T_{n-4-i, r+s-1}. \] \end{lem} \begin{proof} Consider a pair of Motzkin paths, $M$ and $P$, where $M$ is length $i$ with $0 \leq i \leq n-2-s-r$, and $P \in \mathcal{T}_{n-4-i, r+s-1}$. For each such pair, we consider $1 \leq j \leq i+1$ and find a corresponding Dyck path $D\in\D_n^{r,s}$. Thus, there will be $i+1$ corresponding Dyck paths for each pair $M$ and $P$. 
Each Dyck path $D$ will have exactly one $*$ in $M^*_D$. We begin by letting $\ol{M}^*$ be the modified Motzkin path obtained by inserting $*$ before the $j$th entry in $M$ or at the end if $j=i+1$. Let $\ol{x}$ be the number of ups before the $*$ in $\ol{M}^*$, and let $\ol{y}$ be the number of downs before the $*$ in $\ol{M}^*$. Recall that by Definition~\ref{PrPs}, there is some $r-1 \leq \ell \leq n-3-s-i$ so that $P$ can be decomposed into $P_r \in \mathcal{T}_{\ell, r}$ and $P_s \in \mathcal{T}_{n-4-i-\ell, s}$. We now create a modified Motzkin word, $M^* \in \M^*_{n-1}$ by inserting one $u$, one $d$, $P_r^R$, and $P_s$ into $\ol{M}^*$ as follows. \begin{enumerate} \item Insert a $d$ followed by $P_s$ immediately before the $(\ol{x}+1)$st $d$ in $\ol{M}^*$ or at the end if $\ol{x}$ is equal to the number of downs in $\ol{M}^*$. \item Insert the reverse of $P_r$ followed by $u$ after the $\ol{y}$th $u$ or at the beginning if $\ol{y}=0$. \end{enumerate} Call the resulting path $M^*$. We claim that $D_{M^*}\in \mathcal{D}_n^{r,s}$ and that $D_{M^*}$ only has one return at the end. For ease of notation, let $D = D_{M^*}, x=x(D)$, and $y=y(D)$. Notice that the number of downs (and thus the number of ups) in $P_r$ is $y-\ol{y}$. Then the $(y+1)$st $u$ or $*$ in $M^*$ is the inserted $u$ following $P_r^R$ from Step (2), and the $y$th $u$ is the last $u$ in $P_r^R$. The difference in these positions is $r$. Similarly, the $(x+1)$st $d$ or $*$ in $M^*$ is the inserted $d$ before the $P_s$ from Step (1), and the $(x+2)$nd $d$ or $*$ in $M^*$ is the first down in $P_s$. The difference in these positions is $s$, and thus by Observation~\ref{obsRS}, $D \in \mathcal{D}_n^{r,s}$. To see that $D$ only has one return at the end, we note that the only other possible place $D$ can have a return is after $2k$ steps where $k = \ell + j + 1$, the position of $*$ in $M^*$. However, $x > y$ so $D$ only has one return at the end. We now show that this process is invertible. Consider any Dyck path $D\in\D_n^{r,s}$ that has one return at the end. Since $D$ only has one return at the end, the $*$ does not decompose $M^*_D$ into two Motzkin paths, and we must have $x(D)>y(D)$. Let $P_1$ be the maximal Motzkin word immediately following the $(x+1)$st occurrence of $d$ or $*$ in $M^*_D$. Note that $P_1$ must have its first down in position $s$ or $P_1$ consists of $s-1$ horizontal steps. Let $P_2$ be the maximal Motzkin word preceding the $(y+1)$st up in $M^*$. Then either $P_2$ consists of $r-1$ horizontal step or the last $u$ in $P_2$ is $r$ from the end; that is, the first $d$ in $P_2^R$ is in position $r$. Since $x>y$, the $(y+1)$st $u$ comes before the $x$th $d$. Thus, deleting the $*$, the $(y+1)$st $u$, the $x$th $d$, $P_1$, and $P_2$ results in a Motzkin path we call $M$. Note that if $M$ is length $i$, then the combined lengths of $P_1$ and $P_2$ is length $n-4-i$. This inverts the process by letting $P_s=P_1$ and $P_r=P_2^R.$ \end{proof} We again illustrate the correspondence from the above proof with an example. \begin{ex} Let $r=3$, $s=4$, $n=24$, and consider the following pair of Motzkin paths \[ M = uudhudd \quad \text{ and } \quad P = uhuhhdhhdudud. \] As in Example~\ref{exBreakM}, $P_r = uhdudud$ and $P_s = uhhdhh$. Following the notation in the proof of Lemma~\ref{RSHit1}, we have $i = 7$. Our goal is to find $8$ corresponding Dyck paths for each $1 \leq j \leq 8$. 
If $j = 1$, we first create $\ol{M}^*$ by inserting $*$ before the 1st entry in M: \[ \ol{M}^* = *uudhudd.\] Now there are $\ol{x} = 0$ ups and $\ol{y}=0$ downs before the $*$ in $\ol{M}^*$. Thus, we form $M^*$ by inserting $P^R_ru$ at the beginning of $\ol{M}^*$ and $dP_s$ immediately before the $1$st down in $\ol{M}^*$ yielding \[ M^*= \framebox{$ududuhd$}\ \bm{u}* uu\bm{d}\ \framebox{$uhhdhh$}\ dhudd. \] The paths $P_r^R$ and $P_s$ are boxed in the above notation and the inserted $u$ and $d$ are in bold. If $D=D_{M^*}$, then $x(D) = 4$ and $y(D) = 3$ because there are four $u$'s and three $d$'s before $*$ in $M^*$. The $(y+1)$st (or fourth) occurrence of $u$ or $*$ in $M^*$ is the bolded $u$ in position 8, and the third occurrence of $u$ is the last $u$ in $P_r^R$ in position 5; thus $r=3$. Similarly, the $(x+2)$nd (or sixth) occurrence of $d$ or $*$ is the first $d$ in $P_s$ in position 16, and the fifth occurrence of $d$ or $*$ is the bolded $d$ in position 12 giving us $s=4$. It is clear that $D$ only has one return since $x > y$. This process can be followed in the same manner for $2 \leq j \leq 8$ to find all $8$ corresponding Dyck paths for the pair $M$ and $P$. The table in Figure~\ref{RSEx2} shows these paths. \end{ex} \begin{figure} \begin{center} {\renewcommand{\arraystretch}{2} \begin{tabular}{c|c|c|c|c} $j$ & $\ol{M}^*$ & $\ol{x}$ & $\ol{y}$ & $M^*$ \\ \hline 1 & $*uudhudd$ & 0 & 0 & $ \framebox{$ududuhd$}\ \bm{u}* uu\bm{d}\ \framebox{$uhhdhh$}\ dhudd$\\ \hline 2 & $u*udhudd$ & 1 & 0 & $ \framebox{$ududuhd$}\ \bm{u} u*udhu\bm{d}\ \framebox{$uhhdhh$}\ dd$\\ \hline 3 & $uu*dhudd$ & 2 & 0 & $ \framebox{$ududuhd$}\ \bm{u} uu*dhud\bm{d}\ \framebox{$uhhdhh$}\ d$\\ \hline 4 & $uud*hudd$ & 2 & 1 & $u \framebox{$ududuhd$}\ \bm{u}ud*hud\bm{d}\ \framebox{$uhhdhh$}\ d$\\ \hline 5 & $uudh*udd$ & 2 & 1 & $u\framebox{$ududuhd$}\ \bm{u}udh*ud\bm{d}\ \framebox{$uhhdhh$}\ d$\\ \hline 6 & $uudhu*dd$ & 3 & 1 & $u\framebox{$ududuhd$}\ \bm{u}udhu*dd\bm{d}\ \framebox{$uhhdhh$}\ $\\ \hline 7 & $uudhud*d$ & 3 & 2 & $uu \framebox{$ududuhd$}\ \bm{u}dhud*d\bm{d}\ \framebox{$uhhdhh$}\ $\\ \hline 8 & $uudhudd*$ & 3 & 3 & $uudhu\framebox{$ududuhd$}\ \bm{u}dd*\bm{d}\ \framebox{$uhhdhh$}\ $\\ \hline \end{tabular}} \end{center} \caption{Given $r=3,$ $s=4,$ $n=24$, and the pair of Motzkin paths $M~=~uudhudd \in \M_7$ and $P = uhuhhdhhdudud \in \T_{13, 6}$, the Dyck words formed by $D_{M^*}$ are the 8 corresponding Dyck paths in $\D_{24}^{3,4}$ that only have one return.} \label{RSEx2} \end{figure} By combining Lemmas~\ref{RSHit2} and \ref{RSHit1}, we have the following proposition which enumerates $\D_n^{r,s}$. \begin{prop} \label{oneterm} For $r\geq 1, s\geq 1$, and $n\geq r+s$, the number of Dyck paths $D \in \D_n^{r,s}$ is \[ |\D_n^{r,s}| =T_{n-2,r+s-1} + \sum_{i=0}^{n-2-s-r}(i+1)M_i T_{n-4-i, r+s-1}.\] \end{prop} \begin{proof} Dyck paths in $\mathcal{D}_n^{r,s}$ can have at most two returns. Thus, this is a direct consequence of Lemmas ~\ref{RSHit2} and \ref{RSHit1}. \end{proof} Interestingly, we remark that the formula for $|\D_n^{r,s}|$ only depends on the sum $r+s$ and not the individual values of $r$ and $s$. For example, $|\D_n^{1,3}| = |\D_n^{2,2}|$. Also, because the formula for $|\D_n^{r,s}|$ is given in terms of Motzkin paths, we can easily extract the generating function for these numbers using Lemma~~\ref{lemGFt}. \begin{cor} For $r, s \geq 1$, the generating function for $|\D_n^{r,s}|$ is \[ x^{r+s}(1+xm(x))^{r+s-2}\left(1 + x^2(xm(x))' \right). 
\] \end{cor} \section{Dyck paths with $L=p$ for prime $p$} When $L=p$, for some prime $p$, we must have that every term in the product $\prod_{i=1}^{n-1} {r_i + s_i \choose r_i}$ is equal to 1 except for one term which must equal $p$. In particular, we must have that there is exactly one $1\leq k\leq n-1$ with $r_k\neq 0$ and $s_k\neq 0$. Furthermore, we must have that either $r_k=1$ and $s_k=p-1$ or $r_k=p-1$ and $s_k=1$. Therefore, when $L =2$, we have \[ |\mathcal{D}_n^2| = |\mathcal{D}_n^{1,1}|. \] When $L=p$ for an odd prime number, we have \[ |\mathcal{D}_n^p| = |\mathcal{D}_n^{1,p-1}| + |\mathcal{D}_n^{p-1,1}| = 2|\mathcal{D}_n^{1,p-1}|. \] Thus the results from the previous section can be used in the subsequent proofs. \begin{thm} \label{TheoremL2} For $n\geq 4$, the number of Dyck paths with semilength $n$ and $L=2$ is \[ |\D_n^2| = (n-3)M_{n-4}, \] where $M_{n-4}$ is the $(n-4)$th Motzkin number. Additionally, $|\D_2^2| =1$ and $|\D_3^2| = 0.$ Thus the generating function for $|\D_n^2|$ is given by \[ L_2(x) = x^2 + x^4\left(xm(x)\right)' \] where $m(x)$ is the generating function for the Motzkin numbers. \end{thm} \begin{proof} By Proposition~\ref{oneterm}, for $n \geq 2$, \[ |\D_n^{1,1}| =T_{n-2,1} + \sum_{i=0}^{n-4}(i+1)M_i T_{n-4-i, 1}.\] In the cases $n=2$ and $n=3$, the summation is empty and thus $|\D_2^2| = T_{0,1} = 1$ and $|\D_3^2| = T_{1,1} = 0$. For $n \geq 4$, the term $T_{n-2,1} = 0$. Furthermore, the terms in the summation are all 0 except when $i=n-4$. Thus, \[ |\D_n^{1,1}| = (n-3)M_{n-4}T_{0,1} \] or \[ |\D_n^2| = (n-3)M_{n-4}. \] \end{proof} The sequence for the number of Dyck paths of semilength $n$ with $L=2$, beginning at $n=2$, is given by: \[ |\D_n^2| = 1,0,1,2,6,16,45,126,357,\ldots \] This sequence can be found at OEIS A005717. Because the formula for $|\D_n^2|$ is much simpler than the one found in Proposition~\ref{oneterm}, the correspondence between Dyck paths in $\D_n^2$ and Motzkin paths of length $n-4$ is actually fairly straightforward. For each Motzkin word of length $n-4$, there are $n-3$ corresponding Dyck paths of semilength $n$ having $L=2$. These corresponding Dyck paths are found by modifying the original Motzkin word in $n-3$ different ways. Each modification involves adding a $u$, $d$, and placeholder $*$ to the original Motzkin word. The $n-3$ distinct modifications correspond to the $n-3$ possible positions of the placeholder $*$ in the original Motzkin word. As an example, in the case where $n=6$, there are $M_2 = 2$ Motzkin words of length 2. For each word, we can insert a placeholder $*$ in $n-3=3$ different positions, and thus there are a total of 6 corresponding Dyck paths of semilength 6 having $L=2$. Figure~\ref{L2Figure} provides the detailed process when $n=6$.
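As a sanity check on Theorem~\ref{TheoremL2}, the formula $(n-3)M_{n-4}$ and the initial values $|\D_2^2|=1$ and $|\D_3^2|=0$ can be compared against a brute-force count for small $n$. The Python sketch below is an illustration only (reusing \texttt{dyck\_words}, \texttt{motzkin}, and \texttt{L} from the earlier sketches); it reproduces the terms $1, 2, 6, 16, 45, 126$ of A005717 for $4 \leq n \leq 9$.
\begin{verbatim}
# brute-force check of the counts in the theorem above
assert sum(1 for w in dyck_words(2) if L(w) == 2) == 1
assert sum(1 for w in dyck_words(3) if L(w) == 2) == 0
for n in range(4, 10):
    count = sum(1 for w in dyck_words(n) if L(w) == 2)
    assert count == (n - 3) * motzkin(n - 4)
    print(n, count)
\end{verbatim}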
\begin{figure}[t] \begin{center} \begin{tabular}{c|c|c|c|c} $\begin{matrix} \text{Motzkin}\\ \text{word} \end{matrix}$ & $M^*$ & $\begin{matrix} \Asc(D)\\ \Des(D) \end{matrix}$ & $r$-$s$ array & Dyck path, $D$\\ \hline & \vspace*{-.1cm} $\bm{u}*hh\bm{d}$ & $\begin{matrix} (2&5&6)\\(1&2&6)\end{matrix}$ & $\begin{pmatrix}0&1&0&0&1\\3&1&0&0&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,4); \draw[thick] (0,0)--(2,2)--(3,1)--(6,4)--(7,3)--(8,4)--(12,0); \node at (0,5) {\color{red!90!black}\ }; \end{tikzpicture}\\ $hh$ &$\bm{u}h*h\bm{d}$ & $\begin{matrix} (3&5&6)\\(1&3&6) \end{matrix}$ & $\begin{pmatrix}0&0&1&0&2\\2&0&1&0&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,4); \draw[thick] (0,0)--(3,3)--(4,2)--(6,4)--(8,2)--(9,3)--(12,0); \node at (0,5) {\color{red!90!black}\ }; \end{tikzpicture}\\ & $\bm{u}hh*\bm{d}$ & $\begin{matrix} (4&5&6)\\(1&4&6)\end{matrix}$ & $\begin{pmatrix}0&0&0&1&3\\1&0&0&1&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,4); \draw[thick] (0,0)--(4,4)--(5,3)--(6,4)--(9,1)--(10,2)--(12,0); \node at (0,4.5) {\color{red!90!black}\ }; \end{tikzpicture}\\ \hline & $\bm{u}*u\bm{d}d$ & $\begin{matrix} (2&4&5&6)\\(1&2&3&6)\end{matrix}$ & $\begin{pmatrix}0&1&0&1&1\\2&1&1&0&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,3); \draw[thick] (0,0)--(2,2)--(3,1)--(5,3)--(6,2)--(7,3)--(8,2)--(9,3)--(12,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture}\\ $ud$ & $\bm{u}u*d\bm{d}$ & $\begin{matrix} (3&4&5&6)\\(1&2&3&6)\end{matrix}$ & $\begin{pmatrix}0&0&1&1&1\\1&1&1&0&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,3); \draw[thick] (0,0)--(3,3)--(4,2)--(5,3)--(6,2)--(7,3)--(8,2)--(9,3)--(12,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture}\\ & $u\bm{u}d*\bm{d}$ & $\begin{matrix} (3&4&5&6)\\(1&2&4&6)\end{matrix}$ & $\begin{pmatrix}0&0&1&1&2\\1&1&0&1&0\end{pmatrix}$& \begin{tikzpicture}[scale=.2] \draw[help lines] (0,0) grid (12,3); \draw[thick] (0,0)--(3,3)--(4,2)--(5,3)--(6,2)--(7,3)--(9,1)--(10,2)--(12,0); \node at (0,3.5) {\color{red!90!black}\ }; \end{tikzpicture}\\ \hline \end{tabular} \end{center} \caption{The six Dyck paths of semilength 6 having $L=2$ and their corresponding Motzkin paths of length 2} \label{L2Figure} \end{figure} When $L=p$ for an odd prime number, we can also enumerate $\D_n^p$ using Proposition~\ref{oneterm} as seen in the following theorem. \begin{thm}\label{TheoremLp} For a prime number $p \geq 3$ and $n \geq p$, the number of Dyck paths with semilength $n$ and $L=p$ is \[ |\D_n^p| = 2\left(T_{n-2, p-1} + \sum_{i=0}^{n-2-p} (i+1)M_i T_{n-4-i, p-1}\right). \] Thus, the generating function for $|\D_n^p|$ is \[ 2x^p(1+xm(x))^{p-2}\left(x^2(xm(x))' + 1 \right). \] \end{thm} \begin{proof} This lemma is a direct corollary of Proposition~\ref{oneterm} with $r=1$ and $s=p-1$. We multiply by two to account for the case where $r=p-1$ and $s=1$, and $r=1$ and $s=p-1$. \end{proof} \section{Dyck paths with $L=4$}\label{SecL4} When $L(D)=4$, things are more complicated than in the cases for prime numbers. If $D \in \D_n^4$, then one of the following is true: \begin{itemize} \item $D \in \D_n^{1,3}$ or $D \in \D_n^{3,1}$; or \item All but two terms in the product $\prod_{i=1}^{n-1} {r_i + s_i \choose r_i}$ are equal to 1, and those terms must both equal $2$. 
\end{itemize} Because the first case is enumerated in Section~\ref{SecRS}, this section will be devoted to counting the Dyck paths $D \in \D_n^4$ where $M^*_D$ has exactly two $*$'s in positions $k_1$ and $k_2$ and \[ L(D) = {r_{k_1} + s_{k_1} \choose r_{k_1}} {r_{k_2} + s_{k_2} \choose r_{k_2}} \] with $r_{k_1} = s_{k_1} = r_{k_2} = s_{k_2} = 1$. For ease of notation, let $\widehat{\D}_n$ be the set of Dyck paths $D \in \D_n^4$ with the property that $M^*_D$ has exactly two $*$'s. Also, given $D \in \widehat{\D}_n$, define $x_i(D)$ to be the number of ups before the $i$th $*$ in $M^*_D$ and let $y_i(D)$ be the number of downs before the $i$th $*$ for $i \in \{1, 2\}$. \begin{ex} Let $D$ be the Dyck path with ascent sequence and descent sequence \[ \Asc(D) = (3, 6, 7, 8, 10, 11) \quad \text{and} \quad \Des(D) = (1, 3, 4, 5, 8, 11) \] and thus $r$-$s$ array \[ \left( \begin{array}{cccccccccc} 0 & 0 & 1 & 0 & 0 & 2 & 1 & 1 & 0 & 3 \\ 3 & 0 & 1 & 1 & 2 & 0 & 0 & 1 & 0 & 0 \end{array} \right).\] By inspection of the $r$-$s$ array and noticing that only columns 3 and 8 have two nonzero entries, we see that $L(D) = {1 + 1 \choose 1}{1 + 1 \choose 1} = 4$ and thus $D \in \widehat{\D}_{11}$. Furthermore, we can compute \[ M^*_D = uh*uudd*hd.\] Since there is one $u$ before the first $*$ and no $d$'s, we have $x_1(D) = 1$ and $y_1(D) = 0$. Similarly, there are three $u$'s before the second $*$ and two $d$'s so $x_2(D) = 3$ and $y_2(D) = 2$. \end{ex} In this section, we will construct $M^*$ from smaller Motzkin paths. To this end, let us notice what $M^*_{D}$ should look like if $D\in \widehat{\D}_n$. \begin{lem}\label{DM} Suppose $M^*\in \M_{n-1}^*$ has exactly two $*$'s. Then $D_{M^*} \in \widehat{\D}_n$ if and only if, writing $x_i=x_i(D_{M^*})$ and $y_i=y_i(D_{M^*})$, we have: \begin{itemize} \item The $(x_1+1)$st occurrence of either a $d$ or $*$ is followed by another $d$ or $*$; \item The $(x_2+2)$nd occurrence of either a $d$ or $*$ is followed by another $d$ or $*$, or $x_2$ is equal to the number of $d$'s and $M^*$ ends in $d$ or $*$; \item The $(y_1)$th occurrence of either a $u$ or $*$ is followed by another $u$ or $*$, or $y_1=0$ and $M^*$ begins with $u$ or $*$; \item The $(y_2+1)$st occurrence of either a $u$ or $*$ is followed by another $u$ or $*$. \end{itemize} \end{lem} \begin{proof} Suppose $M^*\in \M_{n-1}^*$ has two stars in positions $k_1$ and $k_2$. Then it is clear that \[L(D) ={r_{k_1} + s_{k_1} \choose r_{k_1}} {r_{k_2} + s_{k_2} \choose r_{k_2}} \] so it suffices to show that the four conditions above hold if and only if $r_{k_1} = s_{k_1} = r_{k_2} = s_{k_2} = 1$. Recall that $\Asc(D) = (a_1, a_2, \ldots, a_k)$ is the increasing sequence of positions $i$ in $M^*$ with $m_i=d$ or $*$. Similarly, $\Des(D)=(b_1, b_2, \ldots, b_k)$ is the increasing sequence of positions $i$ in $M^*$ with $m_i=u$ or $*$. First notice that $r_{k_1} = 1$ if and only if $b_i-b_{i-1}=1$ where $a_i=k_1$. However, $i=y_1+1$ since the first star must be the $(y_1+1)$st occurrence of $d$ or $*$. Therefore $b_i$ is the position of the $(y_1+1)$st $u$ or $*$ and $b_{i-1}$ is the position of the $(y_1)$th $u$ or $*$. The difference in positions is 1 exactly when they are consecutive in $M^*$. The other three bullet points follow similarly. \end{proof} We enumerate the Dyck paths $D \in \widehat{\D}_n$ by splitting into cases based on the values of $x_1(D)$ and $y_2(D)$. The cases we consider are \begin{itemize} \item $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$; \item $x_1(D) = 1$ and $y_2(D) = 0$; \item $x_1(D) = y_2(D) + 1 \geq 2$; and \item $x_1(D) = y_2(D)$.
\end{itemize} The next four lemmas address each of these cases separately. Each lemma is followed by an example showing the correspondence the proof provides. \begin{lem} \label{L4Type1} For $n \geq 7$, the number of Dyck paths $D \in \widehat{\D}_n$ with $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$ is ${n-5 \choose 2}M_{n-7}.$ \end{lem} \begin{proof} We will show that for any $M \in \M_{n-7}$, there are ${n-5 \choose 2}$ corresponding Dyck paths $D \in \widehat{\D}_n$ with $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$. To this end, let $M \in \M_{n-7}$ and let $1 \leq j_1< j_2 \leq n-5$. There are ${n-5 \choose 2}$ choices for $j_1$ and $j_2$, each corresponding to a Dyck path with the desired properties. We create a modified Motzkin word $\ol{M}^* \in \M_{n-5}^*$ with $*$'s in positions $j_1$ and $j_2$ such that the subword of $\ol{M}^*$ with the $*$'s removed is equal to $M$. Let $\overline{x}_i$ be the number of ups before the $i$th $*$ in $\ol{M}^*$ and let $\overline{y}_i$ be the number of downs before the $i$th $*$ in $\ol{M}^*$ for $i \in \{1, 2\}$. We create the modified Motzkin word $M^* \in \M^*_{n-1}$ from $\ol{M}^*$ as follows: \begin{enumerate} \item Insert $d$ before the $(\overline{x}_2+1)$th down or at the very end if $\ol{x}_2$ is the number of downs in $\ol{M}^*$. \item Insert $d$ before the $(\overline{x}_1+1)$th down or at the very end if $\ol{x}_1$ is the number of downs in $\ol{M}^*$. \item Insert $u$ after the $\overline{y}_2$th up or at the beginning if $\overline{y}_2 = 0$. \item Insert $u$ after the $\overline{y}_1$th up or at the beginning if $\overline{y}_1 = 0$. \end{enumerate} Notice that in Step (1), the $d$ is inserted after the second $*$ and in Step~(4), the $u$ is inserted before the first $*$. Let $D=D_{M^*}$. We first show that $D \in \widehat{\D}_n$ by showing $L(D)=4$. We proceed by examining two cases. In the first case, assume $\ol{x}_1 +1 \leq \ol{y}_2.$ In this case, the inserted $d$ in Step~$(2)$ must occur before the second $*$ since there were $\ol{y}_2$ $d$'s before the second $*$ in $\ol{M}^*$. Similarly, the inserted $u$ in Step~$(3)$ must occur after the first $*$. Thus, we have \[ x_1(D) = \ol{x}_1 + 1, \quad y_1(D) = \ol{y_1}, \quad x_2(D) = \ol{x}_2 + 2, \quad \text{and} \quad y_2(D) = \ol{y}_2 + 1.\] We now use the criteria of Lemma~\ref{DM} to see that $L(D) = 4$: \begin{itemize} \item The $(x_1 + 1)$th occurrence of a $d$ or $*$ is the inserted $d$ from Step (2) and is thus followed by $d$; \item The $(x_2 + 2)$th occurrence of a $d$ or $*$ is the inserted $d$ from Step (1) and is thus followed by $d$; \item The $y_1$th occurrence of a $u$ is the inserted $u$ from Step (4) and is thus followed by $u$; and \item The $(y_2 + 1)$th occurrence of a $u$ is the inserted $u$ from Step (3) and is thus followed by $u$. \end{itemize} We also have \[ x_1(D) = \ol{x}_1 + 1 \leq \ol{y}_2 < \ol{y_2} + 1 = y_2(D), \] and thus $D \in \widehat{\D}_n$ with $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$ as desired. In the second case where $\ol{x}_1 \geq \ol{y}_2$, the inserted $d$ in Step (2) occurs after the second $*$ and the inserted $u$ in Step (3) occurs before the first $*$. Here we have \[ x_1(D) = \ol{x}_1 + 2, \quad y_1(D) = \ol{y_1}, \quad x_2(D) = \ol{x}_2 + 2, \quad \text{and} \quad y_2(D) = \ol{y}_2.\] We can easily check that the criteria of Lemma~\ref{DM} are satisfied to show that $L(D) = 4$. Also, \[ x_1(D) = \ol{x}_1 + 2 \geq \ol{y}_2 + 2 = y_2(D) + 2,\] and thus $D$ has the desired properties.
\sloppypar{To see that this process is invertible, consider any $D \in \widehat{\D}_n$ with $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$ and let $k_1 \leq k_2$ be the positions of the $*$'s in $M^*_D$. We consider the two cases where $x_1(D) < y_2(D)$ and where $x_1(D) \geq y_2(D)+2$. Since, for each case, we have established the relationship between $x_i$ and $\overline{x}_i$ and between $y_i$ and $\overline{y}_i$, it is straightforward to undo the process. } Begin with the case where $x_1(D) < y_2(D)$. In this case: \begin{itemize} \item Delete the $(x_2(D))$th $d$ and the $(x_1(D))$th $d$. \item Delete the $(y_2(D) + 1)$th $u$ and the $(y_1(D)+1)$th $u$. \item Delete both $*$'s. \end{itemize} Now consider the case where $x_1(D) \geq y_2(D)+2$. In this case: \begin{itemize} \item Delete the $(x_2(D)+2)$th $d$ and the $(x_1(D)+1)$th $d$. \item Delete the $(y_2(D) + 1)$th $u$ and the $(y_1(D)+2)$th $u$. \item Delete both $*$'s. \end{itemize} \end{proof} \begin{ex} Suppose $n=11$ and let $M = hudh \in \M_{4}$. There are ${6 \choose 2} = 15$ corresponding Dyck paths $D \in \widehat{\D}_n$ with $x_1(D) \notin \{y_2(D), y_2(D) + 1\}$, and we provide two of these in this example. First, suppose $j_1 = 2$ and $j_2=5$ so that $\ol{M}^* = h*ud*h$. We then count the number of ups and downs before each $*$ to get \[ \ol{x}_1 = 0, \quad \ol{y}_1 = 0, \quad \ol{x}_2 = 1, \quad \text{and} \quad \ol{y}_2 = 1.\] Following the steps in the proof, we insert two $u$'s and two $d$'s to get \[ M^* = \bm{u}h*u\bm{ud}d*h\bm{d}.\] Let $D=D_{M^*}$ and notice that the number of ups before the first $*$ is $x_1(D) = 1$ and the number of downs before the second $*$ is $y_2(D) = 2$ and thus $x_1(D) < y_2(D)$. Since $L(D) = 4$, $D$ satisfies the desired criteria. To see that the process is invertible, we would delete the third and first $d$, the third and first $u$, and the two $*$'s. Now, suppose $j_1=2$ and $j_2=4$ so that $\ol{M}^* = h*u*dh$. We again count the number of ups and downs before each $*$ to get \[ \ol{x}_1 = 0, \quad \ol{y}_1 = 0, \quad \ol{x}_2 = 1, \quad \text{and} \quad \ol{y}_2 = 0,\] and insert two $u$'s and two $d$'s to get \[ M^* = \bm{uu}h*u*\bm{d}dh\bm{d}. \] Now, if $D=D_{M^*}$ we have $x_1(D)= 2$ and $y_2(D) = 0$ and so $x_1(D) \geq y_2(D) + 2$ as desired. We can also easily check that $L(D) = 4$. \end{ex} \begin{lem}\label{L4Type2} For $n \geq 5$, the number of Dyck paths $D \in \widehat{\D}_n$ with $x_1(D)= 1$ and $y_2(D)= 0$ is $M_{n-5}.$ \end{lem} \begin{proof} We find a bijection between the set of Dyck paths $D \in \widehat{\D}_n$ with $x_1(D)= 1$ and $y_2(D)= 0$ and the set $\M_{n-5}$. First, suppose $M \in \M_{n-5}$. Let $\ol{x}_2$ be the number of ups before the first down in $M$. Now create the modified Motzkin word $M^* \in \M^*_{n-1}$ as follows. \begin{enumerate} \item Insert $d$ before the $(\ol{x}_2 + 1)$st $d$ in $M$ or at the end if the number of downs in $M$ is $\ol{x}_2$. \item Insert $*$ before the first $d$. \item Insert $u$ followed by $*$ before the first entry. \end{enumerate} Let $D = D_{M^*}$. By construction, we have \[ x_1(D) = 1, \quad y_1(D) = 0, \quad x_2(D) = \ol{x}_2 + 1, \quad \text{and} \quad y_2(D) =0.\] In particular, Step (3) gives us $x_1(D)$ and $y_1(D)$, while Step (2) gives us $x_2(D)$ and $y_2(D)$.
We also have the four criteria of Lemma~\ref{DM}: \begin{itemize} \item The second occurrence of a $d$ or $*$ is the second $*$ which is followed by $d$; \item The $(x_2+2)$nd occurrence of a $d$ or $*$ is the inserted $d$ from Step (1) which is followed by $d$ (or at the end); \item $M^*$ begins with $u$; and \item The first occurrence of a $u$ or $*$ is the first entry $u$ which is followed by $*$. \end{itemize} Let us now invert the process, starting with a Dyck path $D$ with $x_1(D)= 1$ and $y_2(D)= 0$ and its corresponding modified Motzkin word $M^*_D$. Since $y_2(D)=0$, we also have $y_1(D)=0$, and thus by Lemma~\ref{DM}, we must have that the first two entries are either $uu$, $u*$, $**$, or $*u$. However, since we know $x_1(D)=1$, it must be the case that $M^*_D$ starts with $u*$. As usual, let $x_2(D)$ be the number of $u$'s before the second star. Obtain $M\in \M_{n-5}$ by starting with $M^*_{D}$ and then: \begin{itemize} \item Delete the $(x_2)$th $d$. \item Delete the first $u$. \item Delete both $*$'s. \end{itemize} \end{proof} \begin{ex} Suppose $n=11$ and let $M=huudhd \in \M_6$. By Lemma~\ref{L4Type2}, there is one corresponding Dyck path $D \in \widehat{\D}_{11}$ with $x_1(D)= 1$ and $y_2(D)= 0$. Following the notation in the proof, we have $\ol{x}_2 = 2$ and we get \[ M^* = \bm{u*}huu\bm{*d}hd\bm{d}.\] Let $D=D_{M^*}$. We can easily check that $L(D) = 4$. Also, $x_1(D) = 1$ and $y_2(D) = 0$ as desired. \end{ex} \begin{lem} \label{L4Type3} For $n \geq 7$, the number of Dyck paths $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)+1 \geq 2$ is $$\sum_{i=0}^{n-7} (i+1)M_iM_{n-7-i}.$$ \end{lem} \begin{proof} Consider a pair of Motzkin paths, $M$ and $P$, where $M \in \M_i$ and $P \in \M_{n-7-i}$ with $0 \leq i \le n-7$. For each such pair, we consider $1 \leq j \leq i+1$ and find a corresponding Dyck path $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)+1 \geq 2$. Thus, there will be $i+1$ corresponding Dyck paths for each pair $M$ and $P$. We begin by creating a modified Motzkin word $\ol{M}^* \in \M^*_{n-4}$ by inserting $u$, followed by $*$, followed by the path $P$, followed by $d$ before the $j$th entry in $M$ (or at the end if $j = i+1$). Then, let $\overline{x}_1$ be the number of ups before $*$ in $\ol{M}^*$ and $\overline{y}_1$ be the number of downs before the $*$ in $\ol{M}^*$. Notice that $\ol{x}_1 \geq 1$. We now create a modified Motzkin word $M^* \in \M^*_{n-1}$ as follows. \begin{enumerate} \item Insert $*$ before the $(\ol{x}_1 + 1)$st $d$ or at the end if $\ol{x}_1$ equals the number of downs in $\ol{M}^*$. Now let $\overline{x}_2$ be the number of ups before this second $*$. \item Insert $u$ after the $\ol{y}_1$th $u$ in $\ol{M}^*$ or at the beginning if $\ol{y}_1 = 0$. \item Insert $d$ before the $(\ol{x}_2 + 1)$st $d$ (or at the end). \end{enumerate} Let $D = D_{M^*}$. By construction, we have \[ x_1(D) = \ol{x}_1 + 1, \quad y_1(D) = \ol{y}_1, \quad x_2(D) = \ol{x}_2 + 1, \quad \text{and} \quad y_2(D) = \ol{x}_1,\] and thus $x_1(D)= y_2(D)+1 \geq 2$. We also have the four criteria of Lemma~\ref{DM}: \begin{itemize} \item The $(x_1 + 1)$st occurrence of a $d$ or $*$ is the second $*$ from Step (1) which is followed by $d$; \item The $(x_2+2)$nd occurrence of a $d$ or $*$ is the inserted $d$ from Step (3) which is followed by $d$ (or at the end); \item The $(y_1+1)$st occurrence of a $u$ or $*$ is the inserted $u$ from Step (2) and thus is preceded by $u$; and \item The $(y_2 + 2)$nd occurrence of a $u$ or $*$ is the first $*$ which immediately follows a $u$.
\end{itemize} To see that this process is invertible, consider any Dyck path $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)+1 \geq 2$. To create $\ol{M}^*$, start with $M^*_D$ and then: \begin{itemize} \item Delete the $(x_2)$th $d$. \item Delete the second $*$. \item Delete the $(y_1 + 1)$st $u$. \end{itemize} Because $x_1 = y_2 + 1$, we have $y_1 + 1 \leq x_2$ and so this process results in $\ol{M}^* \in \M^*_{n-4}$. Now let $P$ be the maximal subpath in $\ol{M}^*$ beginning with the entry immediately following the $*$. Deleting the $u$ and the $*$ preceding $P$, all of $P$, and the $d$ following $P$ inverts the process. \end{proof} \begin{ex} Suppose $n=11$ and let $M = ud \in \M_{2}$ and $P = hh \in \M_2$. There are 3 corresponding Dyck paths with $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)+1 \geq 2$ and we provide one example. First, let $j=1$ and create the word $\ol{M}^*\in \M^*_{7}$ by inserting $u*Pd$ before the first entry in $M$: \[ \ol{M}^* = \framebox{$u*hhd$}\ ud.\]Notice $\ol{x}_1 = 1$ and $\ol{y}_1=0$ since there is only one entry, $u$, before the $*$. Then, following the procedure in the proof of Lemma~\ref{L4Type3}, we insert $*$ before the second $d$ and note that $\ol{x}_2 = 2.$ Then we insert $u$ at the beginning and $d$ at the end to get \[ M^* = \bm{u} \framebox{$u*hhd$}\ u\bm{*}d\bm{d}. \] Let $D=D_{M^*}$. By inspection, we note $x_1(D) = 2$ and $y_2(D) = 1$, and we can easily check that $L(D) = 4$. \end{ex} \begin{lem} \label{L4Type4} For $n \geq 7$, the number of Dyck paths $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)$ is $$\sum_{i=0}^{n-7} (i+1)M_iM_{n-7-i}.$$ Also, for $n=3$, there is exactly 1 Dyck path $D \in \widehat{\D}_3$ with $x_1(D)= y_2(D)$. \end{lem} \begin{proof} Similar to the proof of Lemma~\ref{L4Type3}, consider a pair of Motzkin paths, $M$ and $P$, where $M \in \M_i$ and $P \in \M_{n-7-i}$ with $0 \leq i \le n-7$. For each such pair, we consider $1 \leq j \leq i+1$ and find a corresponding Dyck path $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)$. Thus, there will be $i+1$ corresponding Dyck paths for each pair $M$ and $P$. We begin by creating a modified Motzkin word $\ol{M}^* \in \M^*_{n-4}$ by inserting $*$, followed by $u$, followed by the path $P$, followed by $d$ before the $j$th entry in $M$. Then, let $\overline{x}_1$ be the number of ups before $*$ in $\ol{M}^*$ and $\overline{y}_1$ be the number of downs before the $*$ in $\ol{M}^*$. We now create a modified Motzkin word $M^* \in \M^*$ as follows. \begin{enumerate} \item Insert $*$ after the $(\ol{x}_1 + 1)$st $d$ in $\ol{M}^*$. Let $\overline{x}_2$ be the number of ups before this second $*$. \item Insert $u$ after the $\ol{y}_1$th $u$ in $\ol{M}^*$ or at the beginning if $\ol{y}_1 = 0$. \item Insert $d$ before the $(\ol{x}_2 + 1)$st $d$ (or at the end). \end{enumerate} Let $D = D_{M^*}$. By construction, we have \[ x_1(D) = \ol{x}_1 + 1, \quad y_1(D) = \ol{y}_1, \quad x_2(D) = \ol{x}_2 + 1, \quad \text{and} \quad y_2(D) = \ol{x}_1+1.\] It is easy to verity that the criteria in Lemma~\ref{DM} are satisfied and so $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)$. To see that this process is invertible, consider any Dyck path $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)$. Since $x_1=y_2$, there are $y_2$ ups before the first $*$, in $M^*_D$ and thus the first $*$ in $M^*_D$ is the $(y_2+1)$th occurrence of a $u$ or $*$. By the fourth criterium in Lemma~\ref{DM}, the first $*$ must be followed by another $u$ or $*$. Similarly, the $(x_1 + 2)$th occurrence of either a $d$ or $*$ is the second $*$. 
Thus, by the second criterium of Lemma~\ref{DM}, the second $*$ must be immediately preceded by a $d$ or $*$. We now show that the case where the first $*$ is immediately followed by the second $*$ results in only one Dyck path. In this case, $x_1=y_1=x_2=y_2$, and thus $M^*_D$ can be decomposed as a Motzkin path, followed by $**$, followed by another Motzkin path. By the second criterium in Lemma~\ref{DM}, the entry after the second $*$ must be a $d$ (which is not allowed) and thus $M^*_D$ ends in $*$. Similarly, the third criterium in Lemma~\ref{DM} tells us $M^*_D$ begins with $*$ and so $M^*_D = **$. Thus, $D \in \widehat{\D}_n$ and is the path $D=ududud$. We now assume the first $*$ is followed by $u$ which implies the second $*$ is preceded by $d$. In this case, we must have at least $y_1 + 1$ downs before the second $*$ and at least $x_1 + 1$ ups before the second $*$ yielding \[ y_1 + 1 \leq x_1 = y_2 \leq x_2 - 1.\] Thus the $(y_1+1)$th $u$ comes before the first $*$ and the $x_2$th $d$ comes after the second $*$. To find $\ol{M}^*$ from $M^*_D$: \begin{itemize} \item Delete the $x_2$th $d$. \item Delete the second $*$. \item Delete the $(y_1 + 1)$st $u$; \end{itemize} which results in $\ol{M}^* \in \M^*_{n-4}$. Now let $P$ be the maximal subpath in $\ol{M}^*$ beginning with the entry after the $u$ that immediately follows the remaining $*$. (The entry after $P$ must be $d$ since $P$ is maximal and $\ol{M}^*$ is a Motzkin path when ignoring the $*$.) Removing $uPd$ and the remaining $*$ from $\ol{M^*}$ results in a Motzkin path $M$ as desired. \end{proof} \begin{ex} Suppose $n=11$ and let $M = ud \in \M_{2}$ and $P = hh \in \M_2$. There are 3 corresponding Dyck paths with $D \in \widehat{\D}_n$ with $x_1(D)= y_2(D)$ and we provide one example. First, let $j=1$ and create the word $\ol{M}^*\in \M^*_{7}$ by inserting $*uPd$ before the first entry in $M$: \[ \ol{M}^* = \framebox{$*uhhd$}\ ud.\]Notice $\ol{x}_1 = 0$ and $\ol{y}_1=0$ since there are no entries before the $*$. Then, following the procedure in the proof of Lemma~\ref{L4Type4}, we insert $*$ after the first $d$ and note that $\ol{x}_2 = 1.$ Then we insert $u$ at the beginning and $d$ before the second $d$ to get \[ M^* = \bm{u} \framebox{$*uhhd$}\bm{*}u\bm{d}d. \] Let $D=D_{M^*}$. By inspection, we note $x_1(D) = y_2(D) = 1$, and we can easily check that $L(D) = 4$. \end{ex} \begin{thm}\label{thm:L4} The number of Dyck paths with semilength $n \geq 4$ and $L=4$ is \[ |\D_n^4| =2\left(T_{n-2, 3} + \sum_{i=0}^{n-6} (i+1)M_i T_{n-4-i, 3}\right) + \binom{n-5}{2}M_{n-7} + M_{n-5} + 2\sum_{i=0}^{n-7} (i+1)M_i M_{n-7-i}. \] Also, $|\D_3^4| = 1$. \end{thm} \begin{proof} This is a direct consequence of Proposition~\ref{oneterm} along with Lemmas~\ref{L4Type1}, \ref{L4Type2}, \ref{L4Type3}, and \ref{L4Type4}. \end{proof} \section{Further Remarks}\label{SecRemarks} As seen in Section~\ref{SecL4}, finding $|\D_n^k|$ is more complicated when $k$ is not prime, as there could be many ways to write $k$ as a product of binomial coefficients. For example, consider $k=6$. If $D \in \D_n^6$, then one of the following is true: \begin{itemize} \item $D \in \D_n^{1,5}$ or $D \in \D_n^{5,1}$; \item $D \in \D_n^{2,2}$; or \item All but two terms in the product $\prod_{i=1}^{n-1} {r_i + s_i \choose r_i}$ are equal to 1, and those terms must equal $2$ and $3$. 
\end{itemize} The number of Dyck paths in the first two cases is given by Proposition~\ref{oneterm}: \[ |\D_n^{1,5}| = |\D_n^{5,1}| = T_{n-2,5} + \sum_{i=0}^{n-8}(i+1)M_i T_{n-4-i, 5}\] and \[ |\D_n^{2,2}| = T_{n-2,3} + \sum_{i=0}^{n-6}(i+1)M_i T_{n-4-i, 3}.\] In the final case, we have \[ L(D) = {r_{k_1} + s_{k_1} \choose r_{k_1}} {r_{k_2} + s_{k_2} \choose r_{k_2}} \] where exactly one of $\{r_{k_1}, r_{k_2}, s_{k_1},s_{k_2}\}$ is equal to 2 and the other three values are 1. By symmetry, we need only to consider two cases: when $r_{k_1} = 2$ and when $s_{k_1} = 2$. We can appreciate that these cases can become quite involved; the proofs would involve similar techniques to those found in Section~\ref{SecL4} along with the proof of Proposition~\ref{oneterm}. Although we do not provide a closed form, the number of Dyck paths $D \in \D_n^6$ in this case are (starting at $n=4$): \[ 2, 4, 8, 16, 44, 122, 352, 1028, 3036, \ldots.\] Combining the first two cases with this case, we provide the first terms of the values of $|\D_n^6|$ (starting at $n=4$): \[ 3, 6, 14, 34, 92, 252, 710, 2026, 5844, \ldots.\] Further work in this area could involve finding formulas for $|\D_n^k|$ when $k$ is a non-prime number greater than 4. It also still remains open to refine the enumeration of $\D_n^k$ with respect to the number of returns. Having such a refinement in terms of number of returns would yield a new formula for the number of 321-avoiding permutations of length $3n$ composed only of 3-cycles as seen in Equation~(\ref{eqnSumL2}). \bibliographystyle{amsplain} \begin{thebibliography}{99} \bibitem{ArcGra21} K. Archer and C. Graves, Pattern-restricted permutations composed of 3-cycles, \emph{Discrete Mathematics} \textbf{345 (7)} (2022) doi: 10.1016/j.disc.2022.112895. \bibitem{Aigner98} M. Aigner, Motzkin numbers, \emph{European Journal of Combinatorics} \textbf{19} (1998) 663-675. \bibitem{Brualdi} R. A. Brualdi, \emph{Introductory Combinatorics, 4th ed.} New York: Elsevier (1997). \bibitem{GuProd20} N. S. S. Gu and H. Prodinger, A bijection between two subfamilies of Motzkin paths, \emph{Applicable Analysis and Discrete Mathematics}, \textbf{15(2)}, (2021), 460--466. \bibitem{Prod} H. Prodinger, An elementary approach to solve recursions relative to the enumeration of S-Motzkin paths. \emph{Journal of Difference Equations and Applications}, \textbf{27(5)}, (2021) 776-785. \bibitem{Sulanke} R. A. Sulanke, Generalizing Narayana and Schroeder Numbers to Higher Dimensions, \emph{Electronic Journal of Combinatorics} \textbf{11} (2004), Research Paper 54, 20 pp. \bibitem{Zeil} D. Zeilberger, Andre's reflection proof generalized to the many-candidate ballot problem, \emph{Discrete Mathematics} \textbf{44(3)} (1983), 325--326. \end{thebibliography} \end{document}
2205.09657v1
http://arxiv.org/abs/2205.09657v1
On the generalized multiplicities of maximal minors and sub-maximal pfaffians
\documentclass[11pt]{amsart} \usepackage[utf8]{inputenc} \usepackage[margin=1in]{geometry} \usepackage[titletoc,title]{appendix} \usepackage{pifont} \usepackage{yfonts} \RequirePackage[dvipsnames,usenames]{xcolor} \usepackage[colorlinks=true,linkcolor=blue]{hyperref}\hypersetup{ colorlinks=true, linkcolor=RoyalBlue, filecolor=magenta, urlcolor=RoyalBlue, pdfpagemode=FullScreen, citecolor=BurntOrange } \usepackage{amsmath,amsfonts,amssymb,mathtools} \usepackage{enumitem} \usepackage{graphicx,float} \usepackage[ruled,vlined]{algorithm2e} \usepackage{algorithmic} \usepackage{amsthm} \usepackage{ytableau} \usepackage{comment} \usepackage{bigints} \usepackage{setspace} \numberwithin{equation}{section} \newtheorem{defn}{Definition}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newcommand{\frakm}{\mathfrak{m}} \newcommand{\frakj}{\mathfrak{J}} \newcommand{\CC}{\mathbb{C}} \newcommand{\ZZ}{\mathbb{Z}} \DeclareMathOperator{\Soc}{Socle} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\Tor}{Tor} \DeclareMathOperator{\Dim}{dim} \DeclareMathOperator{\Sup}{sup} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\Pf}{Pf} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \title{On the Generalized Multiplicities of Maximal Minors and Sub-Maximal Pfaffians} \author{Jiamin Li} \address{Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, IL 60607} \email{[email protected]} \subjclass[2020]{} \begin{document} \maketitle \begin{abstract} Let $S=\mathbb{C}[x_{ij}]$ be a polynomial ring of $m\times n$ generic variables (resp. a polynomial ring of $(2n+1) \times (2n+1)$ skew-symmetric variables) over $\mathbb{C}$ and let $I$ (resp. $\Pf$) be the determinantal ideal of maximal minors (resp. sub-maximal pfaffians) of $S$. Using the representation theoretic techniques introduced in the work of Raicu et al, we study the asymptotic behavior of the length of the local cohomology module of determinantal and pfaffian thickenings for suitable choices of cohomological degrees. This asymptotic behavior is also defined as a notion of multiplicty. We show that the multiplicities in our setting coincide with the degrees of Grassmannian and Orthogonal Grassmannian. \end{abstract} \section{Introduction} Let $S=\mathbb{C}[x_{ij}]_{m\times n}$ be a polynomial ring of $m \times n$ variables with $m \geq n$. When $[x_{ij}]_{m \times n}$ is a generic matrix and $m>n$, we denote the determinantal ideals generated by $p \times p$ minors by $I_p$. On the other hand if $[x_{ij}]_{m \times n}$ is a skew-symmetric matrix with $m=n$ then we denote the ideals generated by its $2p \times 2p$ pfaffians by $\Pf_{2p}$. Our goal in this paper is to study the generalized multiplicities of $I_n$ and $\Pf_{2n}$, which is also a study of asymtoptic behavior of the length of the local cohomology modules. The precise definition of the generalized multiplicities will be given later. Our main theorems are the following. 
\begin{theorem}\label{main_thm}(Theorem \ref{formula_limit}) Let $S = \mathbb{C}[x_{ij}]_{m\times n}$ where $m>n$ and $[x_{ij}]_{m\times n}$ is a generic matrix. Then we have \begin{enumerate} \item If $j\neq n^2-1$, then $\ell(H^j_\frakm(S/I_n^D))$ and $\ell(H^j_\frakm(I_n^{d-1}/I_n^d))$ are either $0$ or $\infty$. \item If $j=n^2-1$, then $\ell(H^j_\frakm(S/I_n^D))$ and $\ell(H^j_\frakm(I^{d-1}_n/I^d_n))$ are nonzero and finite. Moreover we have \begin{align}\label{main_thm_formula} \begin{split} &\lim_{d \rightarrow \infty}\dfrac{(mn-1)!\ell(H^j_\frakm(I_n^{d-1}/I_n^d))}{d^{mn-1}} \\&= (mn)!\prod^{n-1}_{i=0}\dfrac{i!}{(m+i)!} \end{split} \end{align} \item In fact the limit $$\lim_{D\rightarrow \infty} \dfrac{(mn)!\ell(H^j_\frakm(S/I_n^D))}{D^{mn}}$$ is equal to (\ref{main_thm_formula}) as well. \end{enumerate} \end{theorem} Surprisingly, $(\ref{main_thm_formula})$ is in fact the degree of the Grassmannian $G(n,m+n)$, see Remark \ref{Grassmannian}. In particular, it must be an integer. Moreover, (\ref{main_thm_formula}) can be interpreted as the number of fillings of the $m \times n$ Young diagram with the integers $1,...,mn$ and with strictly increasing rows and columns, see \cite[Ex 4.38]{EisenbudHarris}. Analogously, we have \begin{theorem}\label{main_thm2}(Theorem \ref{formula_limit_pfaffian}) Let $S = \mathbb{C}[x_{ij}]_{(2n+1)\times (2n+1)}$ where $[x_{ij}]_{(2n+1)\times (2n+1)}$ is a skew-symmetric matrix. Then we have \begin{enumerate} \item If $j \neq 2n^2-n-1$, then $\ell(H^j_\frakm(S/\Pf_{2n}^D))$ and $\ell(H^j_\frakm(\Pf_{2n}^{d-1}/\Pf_{2n}^d))$ are either $0$ or $\infty$. \item If $j=2n^2-n-1$, then $\ell(H^j_\frakm(S/\Pf_{2n}^D))$ and $\ell(H^j_\frakm(\Pf_{2n}^{d-1}/\Pf_{2n}^d))$ are finite and nonzero. Moreover we have \begin{align}\label{main_thm2_formula} \begin{split} &\lim_{d \rightarrow \infty}\dfrac{(2n^2+n-1)!\ell(H^j_\frakm(\Pf_{2n}^{d-1}/\Pf_{2n}^d))}{d^{2n^2+n-1}} \\&= (2n^2+n)!\prod^{n-1}_{i=0}\dfrac{(2i)!}{(2n+1+2i)!} \end{split} \end{align} \item In fact the limit $$\lim_{D\rightarrow \infty} \dfrac{(2n^2+n)!\ell(H^j_\frakm(S/\Pf_{2n}^D))}{D^{2n^2+n}}$$ is equal to (\ref{main_thm2_formula}) as well. \end{enumerate} \end{theorem} As with Theorem \ref{main_thm}, $(\ref{main_thm2_formula})$ has a geometric interpretation: it is the degree of the Orthogonal Grassmannian $OG(2n,4n+1)$, see Remark \ref{Orthogonal_Grass}. Therefore it must be an integer as well. This explains the similarities between the Hilbert-Samuel multiplicity and the multiplicities we discuss above. Furthermore, as in the Grassmannian case, (\ref{main_thm2_formula}) can be interpreted in terms of shifted standard tableaux; see \cite[p91]{Totaro} for the discussion. As mentioned before, the above limits are notions of multiplicity. The Hilbert-Samuel multiplicity $e(I)$ (see \cite[Ch4]{BrunsHerzog} for a more detailed discussion) has played an important role in the study of commutative algebra and algebraic geometry. Attempts at generalizing it can be traced back to the work of Buchsbaum and Rim \cite{BuchsbaumRim} in 1964. One of the more recent generalizations is defined via the $0$-th local cohomology (see for example \cite{CutkoskyHST},\cite{KatzValidashti}, \cite{UlrichValidashti}), which coincides with the Hilbert-Samuel multiplicity when the ideal is $\mathfrak{m}$-primary. In \cite{CutkoskyHST}, the authors proved the existence of the $0$-multiplicities when the ring is a polynomial ring.
Later, Cutkosky showed in \cite{Cutkosky} that the $0$-multiplicities exist under mild assumptions on the ring. In \cite{JeffriesMontanoVarbaro} the authors studied these $0$-multiplicities for several classical varieties; in particular, they calculated formulas for the $0$-multiplicities of determinantal ideals of non-maximal minors and of pfaffians. A further generalized multiplicity is defined in \cite{DaoMontano19} via the local cohomology of arbitrary indices, which is necessary in some situations, e.g. for the determinantal ideals of maximal minors. However, the existence of such multiplicities is not known in general. In the unpublished work \cite{Kenkel} the author calculated a closed formula, and thus showed the existence, of the generalized $j$-multiplicity defined in \cite{DaoMontano19} for determinantal ideals of maximal minors of $m\times 2$ matrices. Thus our Theorem \ref{main_thm} and Theorem \ref{main_thm2} are extensions of the results in \cite{JeffriesMontanoVarbaro} and \cite{Kenkel}. We give the definition of the generalized multiplicities here. \begin{defn}\label{defn_dm}(see \cite{DaoMontano19} for more details) Let $S$ be a Noetherian ring of dimension $k$ and $\frakm$ a maximal ideal of $S$. Let $I$ be an ideal of $S$. Define $$\epsilon_+^j(I) := \limsup_{D\rightarrow\infty}\dfrac{k!\ell(H^j_\frakm(S/I^D))}{D^k}.$$ Suppose $\ell(H^j_\frakm(S/I^D)) < \infty$. Then we define $$\epsilon^j(I) := \lim_{D \rightarrow \infty} \dfrac{k!\ell(H^j_\frakm(S/I^D))}{D^{k}}$$ if the limit exists, and we call it the $j$-$\epsilon$-multiplicity. \end{defn} \begin{defn}\label{defn_higher_mul} Under the same setting, we define $$\mathfrak{J}^j_+(I) := \limsup_{d\rightarrow \infty} \dfrac{(k-1)!\ell(H^j_\frakm(I^{d-1}/I^d))}{d^{k-1}}.$$ If $\ell(H^j_\frakm(I^{d-1}/I^d)) < \infty$, then we define $$\mathfrak{J}^j(I) := \lim_{d\rightarrow \infty} \dfrac{(k-1)!\ell(H^j_\frakm(I^{d-1}/I^d))}{d^{k-1}}$$ if the limit exists, and we call it the $j$-multiplicity. \end{defn} When $I$ is a $\frakm$-primary ideal, we have $$e(I) = (\Dim(S))!\lim_{t\rightarrow \infty} \dfrac{\ell(S/I^t)}{t^{\Dim(S)}} = (\Dim(S)-1)!\lim_{t\rightarrow \infty} \dfrac{\ell(I^{t-1}/I^t)}{t^{\Dim(S)-1}}.$$ However, in general we may have $\epsilon^j(I) \neq \frakj^j(I)$, as can be seen from the results below on $I_p$ and $\Pf_{2p}$ for $p < n$ from \cite{JeffriesMontanoVarbaro}. \begin{theorem}(See \cite[Theorem 6.1]{JeffriesMontanoVarbaro})\label{0_multiplicities_result} Let $I_p$ be the determinantal ideal of $p \times p$-minors of $S$ where $S$ is a polynomial ring of generic $m \times n$ variables over $\mathbb{C}$ and $0<p<n\leq m$. Let $$c=\dfrac{(mn-1)!}{(n-1)!...(n-m)!m!(m-1)!...1!},$$ then we have \begin{enumerate} \item $$\epsilon^0(I_p) = cmn\int_{\Delta_1}(z_1...z_n)^{m-n}\prod_{1\leq i < j\leq n}(z_j-z_i)^2dz$$ where $\Delta_1=\{\operatorname{max}_i\{z_i\}+t-1\leq \sum z_i \leq t\} \subseteq [0,1]^n$, \item $$\frakj^0(I_p) = cp\int_{\Delta_2}(z_1...z_n)^{m-n}\prod_{1\leq i<j \leq n}(z_j-z_i)^2 dz$$ where $\Delta_2 = \{\sum z_i = t\} \subseteq [0,1]^n$. \end{enumerate} \end{theorem} The authors also proved a corresponding theorem in the skew-symmetric case. \begin{theorem}(See \cite[Theorem 6.3]{JeffriesMontanoVarbaro}) Let $\Pf_{2p}$ be the ideal of $2p \times 2p$ pfaffians of a polynomial ring $S$ with $n \times n$ skew-symmetric variables. Let $m := \lfloor n/2 \rfloor$.
Then for $0 < p < m$, letting $$c = \dfrac{(\binom{n}{2}-1)!}{m!(n-1)!...1!},$$ we have \begin{enumerate} \item $$\epsilon^0(\Pf_{2p}) = c\binom{n}{2}\int_{\Delta_1}(z_1...z_m)^{2y}\prod_{1\leq i < j \leq m}(z_j-z_i)^4dz$$ \item $$\frakj^0(\Pf_{2p}) = cp\int_{\Delta_2}(z_1...z_m)^{2y}\prod_{1\leq i < j\leq m}(z_j-z_i)^4 dz.$$ \end{enumerate} where $y=0$ if $n$ is even and $1$ otherwise, and $\Delta_1$ and $\Delta_2$ are the same as those in Theorem \ref{0_multiplicities_result}. \end{theorem} Note that when $S$ is a polynomial ring of $m\times n$ generic variables and $p=n$, $H^0_\frakm(S/I_n^D)$ is always $0$; similarly, when $S$ is a polynomial ring of $(2n+1) \times (2n+1)$ skew-symmetric variables and $p=n$, we have $H^0_\frakm(S/\Pf_{2n}^D) = 0$. To avoid this triviality we will instead study the multiplicities of $I_n$ and $\Pf_{2n}$ for higher cohomological indices, which will require more tools from representation theory. It was proved in \cite{DaoMontano19} that when $S$ is a polynomial ring of $k$ variables and when $J$ is a homogeneous ideal of $S$, we have for all $\alpha \in \mathbb{Z}$, $$\limsup_{D\rightarrow\infty}\dfrac{k!\ell(H^j_\frakm(S/J^D)_{\geq \alpha D})}{D^k} < \infty.$$ As a corollary of the above result, combined with the result from \cite{Raicu}, which states that if $S$ is a polynomial ring of $m \times n$ variables and $I_p$ is a determinantal ideal of $p \times p$-minors of $S$, then $H^i_\frakm(S/I_p^D)_j = 0$ for $i\leq m+n-2$ and $j<0$, we get that $\epsilon_+^j(I_p) < \infty$ for $j\leq m+n-2$ (see \cite[Ch 5]{DaoMontano19}). Note that, as mentioned in \cite[Ch 7]{DaoMontano19}, even if $\epsilon^j(I)$ exists, it doesn't have to be rational (see the example in \cite[Ch 3]{CutkoskyHST}). Therefore it is natural to ask for which $j$ the multiplicities exist, and, if they exist, whether they are rational. As we see in Theorem \ref{main_thm} and Theorem \ref{main_thm2}, the only interesting cohomological indices for our question are $n^2-1$ for maximal minors and $2n^2-n-1$ for sub-maximal pfaffians, and we solve the problem of calculating the generalized multiplicities of determinantal ideals of maximal minors and sub-maximal pfaffians completely. \textbf{Organization.} In section 2 we will briefly recall the construction of Schur functors. In section 3 we will review the $\Ext$-module decompositions in the case of determinantal thickenings of a generic matrix and derive some useful properties. Then we will show the existence of, and calculate, the $j$-multiplicity in section 4. We will follow the same strategy for the skew-symmetric case in sections 5 and 6. Finally, we will discuss some future directions of this line of work in section 7. \medskip \textbf{Notations.} In this paper $\ell(M)$ will denote the length of a module $M$, and $S$ will denote the polynomial ring $\mathbb{C}[x_{ij}]$. We will use $D$ to denote the powers of ideals when we discuss modules related to $\epsilon^j(I)$ and use $d$ to denote the powers of ideals when we discuss modules related to $\frakj^j(I)$. All rings are assumed to be unital and commutative. \section{Preliminaries on Schur Functors} We will recall the basic construction of the Schur functors; more information can be found in \cite{FultonHarris} and \cite{Weyman}. Let $V$ be an $n$-dimensional vector space over $\mathbb{C}$. Denote the collection of partitions with at most $n$ nonzero parts by $\mathcal{P}(n)$. We define a dominant weight of $V$ to be $\lambda = (\lambda_1,...,\lambda_n) \in \mathbb{Z}^n$ such that $\lambda_1\geq ...
\geq \lambda_n$ and denote the set of dominant weights by $\ZZ^n_{\text{dom}}$. Note that $(\lambda_1,\lambda_2,0,0,...,0) = (\lambda_1,\lambda_2)$. Furthermore we denote $(c,...,c)$ by $(c^n)$. We say $\lambda=(\lambda_1,\lambda_2,...) \geq \alpha = (\alpha_1,\alpha_2,...)$ if each $\lambda_i \geq \alpha_i$. Given a weight we can define an associated Young diagram with its boxes numbered. For example if $\lambda=(3,2,1)= (3,2,1,0,0,0)\in \mathbb{Z}^6$, then we can draw the Young diagram \[ \begin{ytableau} 1 & 2 & 3\\ 4 & 5\\ 6\\ \end{ytableau} \] Let $\mu = |\lambda|$ and let $\mathfrak{S}_\mu$ be the permutation group on $\mu$ elements. Let $P_\lambda = \{g\in\mathfrak{S}_\mu:g \text{ preserves each row}\}$ and $Q_\lambda=\{g\in\mathfrak{S}_\mu:g \text{ preserves each column}\}$. Then we define $a_\lambda = \sum_{g\in P_\lambda}e_g$, $b_\lambda = \sum_{g\in Q_\lambda}\operatorname{sgn}(g)e_g$, and moreover $c_\lambda = a_\lambda \cdot b_\lambda$. Recall that the Schur functor $S_\lambda(-)$ is defined by $$S_\lambda(V) = \operatorname{Im}(c_\lambda\big|_{V^{\otimes \mu}}).$$ Let $V$ be an $n$-dimensional $\mathbb{C}$-vector space. We have a formula for the dimension of $S_\lambda V$ as a $\mathbb{C}$-vector space. \begin{prop}(See \cite[Ch2]{FultonHarris})\label{dim_schur} Suppose $\lambda = (\lambda_1,...,\lambda_n) \in \mathbb{Z}^n_\text{dom}$. Then we have $$\Dim(S_\lambda V) = \prod_{1\leq i < j \leq n}\dfrac{\lambda_i-\lambda_j+j-i}{j-i}.$$ \end{prop} From the formula of $\Dim(S_\lambda V)$ it is easy to see the following. \begin{corollary}\label{same_dim} For any $c\in \mathbb{N}$ we have $$\Dim(S_\lambda V) = \Dim(S_{\lambda+(c^n)}V).$$ \end{corollary} \section{Decompositions of Ext modules of determinantal thickenings of maximal minors} In this section we recall the $\operatorname{GL}$-equivariant $\mathbb{C}$-vector space decompositions of $\Ext^j_S(S/I_p^D,S)$ given in \cite{Raicu}. This will be the key ingredient in the discussion of multiplicities in section 4. Following the notations in \cite{Raicu}, we denote $$\mathcal{X}^d_p = \{\underline{x} \in \mathcal{P}(n) : |\underline{x}| = pd, x_1 \leq d\}.$$ Recall the following construction associated to a finite set of partitions. First we define $x_i'$ to be the number of boxes in the $i$-th column of the Young diagram defined by $\underline{x}$. Then we define $\underline{x}(c)$ to be such that $\underline{x}(c)_i = \operatorname{min}(x_i,c)$. \begin{defn}\label{Z_set}(See \cite[Definition 3.1]{Raicu}) Suppose $\mathcal{X} \subset \mathcal{P}(n)$ is a finite subset. Then we define $\mathcal{Z}(\mathcal{X})$ to be the set consisting of the pairs $(\underline{z},l)$ with $\underline{z} \in \mathcal{P}(n)$ and $l \geq 0$ such that, writing $z_1 = c$, we have \begin{enumerate} \item There exists a partition $\underline{x} \in \mathcal{X}$ such that $\underline{x}(c) \leq \underline{z}$ and $x'_{c+1} \leq l+1$. \item If $\underline{x} \in \mathcal{X}$ satisfies (1) then $x'_{c+1} = l+1$. \end{enumerate} \end{defn} \begin{lemma}\label{z(x)_set}(See \cite[Lemma 5.3]{Raicu}) Denote $\mathcal{Z}(\mathcal{X}^d_p)$ by $\mathcal{Z}^d_p$. Then we have \begin{align*} \mathcal{Z}^d_p=\big\{(\underline{z},l): 0\leq l \leq p-1, \underline{z}\in \mathcal{P}(n), z_1=...=z_{l+1}\leq d-1&, \\|\underline{z}| + (d-z_1)\cdot l +1 \leq p\cdot d \leq |\underline{z}|+(d-z_1)\cdot(l+1)\big\}.
\end{align*} \end{lemma} Next we recall the construction of the quotient $J_{\underline{z},l}$ from \cite{RaicuWeyman}, which will be crucial in the decomposition of the $\Ext$ modules of $\GL$-equivariant ideals. Let $\underline{z} = (z_1,...,z_m) \in \mathcal{P}(m)$ be such that $z_1 = ... = z_{l+1}$ for some $0 \leq l \leq m-1$. Then we define $$\mathfrak{succ}(\underline{z}, l) = \{\underline{y} \in \mathcal{P}(m) | \underline{y} \geq \underline{z} \text{ and } y_i > z_i \text{ for some } i>l \}.$$ It is easy to see that $I_{\mathfrak{succ}(\underline{z},l)} \subseteq I_{\underline{z}}$, so we can define the quotient $J_{\underline{z},l} = I_{\underline{z}} / I_{\mathfrak{succ}(\underline{z},l)}$. The above definition and lemma will be used again later when we study the case of pfaffians of a skew-symmetric matrix. In sections 3 and 4 we consider $S=\mathbb{C}[x_{ij}]$ where $[x_{ij}]$ is a generic matrix of $m \times n$ variables. Recall that we have the $\operatorname{GL}$-equivariant decomposition (Cauchy formula) of $S$:$$S=\bigoplus_{\lambda\in \mathbb{Z}^{\text{dom}}_{\geq 0}} S_\lambda\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n.$$ Denote by $I_\lambda$ the ideal generated by $S_\lambda\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n$. It was shown in \cite{DeConciniEisenbudProcesi} that $$I_\lambda = \bigoplus_{\mu\geq \lambda}S_\mu\mathbb{C}^m \otimes S_\mu\mathbb{C}^n,$$ and in particular the ideal of $p\times p$ minors is equal to $I_{(1^p)}$; moreover we have $I^d_p = I_{\mathcal{X}^d_p}$. We also get that the $\GL$-invariant ideals are of the form $$I_\mathcal{X} = \bigoplus_{\lambda \in \mathcal{X}} I_\lambda$$ for $\mathcal{X} \subset \mathcal{P}(n)$. The following is the key tool of this paper. Note that in \cite{Raicu} the author considered the decomposition of $\Ext^j_S(S/I_\mathcal{X},S)$ in general, but here we only consider specifically the determinantal ideals. \begin{theorem}\label{schur-decompos}(See \cite[Theorem 3.3]{RaicuWeyman}, \cite[Theorem 2.5, Theorem 3.2]{Raicu}) There exists a $GL$-equivariant filtration of $S/I_p^d$ with factors $J_{\underline{z},l}$ which are quotients of $I_{\underline{z}}$. Therefore we have the following vector space decomposition of $\Ext^j_S(S/I^d_p,S)$: \begin{align}\label{decompose_first} \Ext_S^j(S/I_p^d,S) = \bigoplus_{(\underline{z},l) \in \mathcal{Z}^d_p} \Ext_S^j(J_{\underline{z},l},S) \end{align} and we have \begin{align} \operatorname{Ext}_S^j(J_{(\underline{z},l)},S) = \bigoplus_{\substack{0\leq s \leq t_1 \leq ... \leq t_{n-l}\leq l\\ mn -l^2 -s(m-n)-2(\sum^{n-l}_{i=1}t_i)=j \\ \lambda \in W(\underline{z},l;\underline{t},s)}} S_{\lambda(s)}\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n \end{align} where $\underline{z} \in \mathcal{P}(n)$, the collection of partitions with at most $n$ nonzero parts, i.e. $z_1\geq z_2 \geq ... \geq z_n \geq 0$. Moreover the set $W(\underline{z},l,\underline{t},s)$ consists of the dominant weights satisfying the following conditions: \begin{align}\label{res_weight_gen} \begin{cases} \lambda_n \geq l -z_l -m, \\ \lambda_{t_i+i} = t_i-z_{n+1-i}-m & i=1,...,n-l,\\ \lambda_s \geq s-n \text{ and } \lambda_{s+1} \leq s-m. \end{cases} \end{align} and $\lambda(s)$ is given by $$\lambda(s)=(\lambda_1,...,\lambda_s,(s-n)^{m-n},\lambda_{s+1}+(m-n),...,\lambda_n+(m-n)) \in \mathbb{Z}^m_{\text{dom}}.$$ In fact in our case we have $\lambda_n=l-z_l-m$. This also implies that $t_{n-l}=l$.
\end{theorem} In the rest of the paper we will assume $p=n$, i.e. we focus only on the case of maximal minors. \begin{lemma}\label{weights_max_minors} In Theorem \ref{schur-decompos} we have $l=n-1$. Therefore the pair $(\underline{z},l)$ in Theorem \ref{schur-decompos} is of the form $((c)^n,n-1)$ for $c\leq d-1$. In particular we have $((d-1)^n,n-1)$ in $\mathcal{Z}^d_n$. \end{lemma} \begin{proof} Note that the restriction $l\leq p-1$ gives $l\leq n-1$. It is easy to check that $((d-1)^n,n-1)$ is in $\mathcal{Z}^d_n$. On the other hand, assume that there exists $(\underline{z},l)$ in $\mathcal{Z}^d_n$ such that $l\leq n-2$. From Lemma \ref{z(x)_set} we have the restriction $$|\underline{z}|+(d-z_1)\cdot(l+1) \geq nd$$ when $p=n$. However by our assumption we have \begin{align*} |\underline{z}|+(d-z_1)(l+1) &\leq |\underline{z}|+(d-z_1)(n-1) \\&= |\underline{z}|+d(n-1)-z_1(n-1) \\&\leq nz_1+d(n-1)-z_1(n-1) \\&= z_1+d(n-1) \\&\leq d-1 +d(n-1) = nd - 1 < nd. \end{align*} This contradicts the restriction above. Therefore we must have $l=n-1$. Moreover, by the definition of $(\underline{z},l)$ we have $z_1=...=z_{l+1}$, therefore in our case we have $z_1=...=z_n$. So $(\underline{z},l)$ is of the form $((c)^n,n-1)$ for $c\leq d-1$. \end{proof} For the rest of sections 3 and 4 we will denote $I:=I_n$. Using this information we can also give a criterion for the vanishing of the $\Ext$ modules. Recall that the highest non-vanishing cohomological degree of $S/I^d_n$ is $n(m-n)+1$ (see \cite{Huneke}). This can be seen from the following lemma as well. \begin{lemma}\label{vanishing_degree} In our setting $\Ext^j_S(S/I^d,S)\neq 0$ if and only if $m-n$ divides $1-j$ and $j\geq 2$. Moreover, $\Ext^{n(m-n)+1}_S(S/I^d,S)\neq 0$ if and only if $d\geq n$. \end{lemma} \begin{proof} By Lemma \ref{weights_max_minors}, the weights $\lambda\in W:=W(\underline{z},n-1,(n-1),s)$ have the restrictions \begin{align}\label{res_weight_max} \begin{cases} \lambda_n=n-1-z_{n-1}-m,\\ \lambda_s\geq s-n \text{ and } \lambda_{s+1}\leq s-m. \end{cases} \end{align} We also have \begin{align}\label{important_eq} mn-(n-1)^2-s(m-n)-2(n-1)=j \implies s(m-n)=n(m-n)+1-j. \end{align} By Theorem \ref{schur-decompos}, $\Ext^j_S(S/I^d_n,S)\neq 0$ if and only if the set $W$ is not empty; by (\ref{res_weight_max}) and (\ref{important_eq}) this means that $m-n$ divides $n(m-n)+1-j$, i.e. $m-n$ divides $1-j$, and that $s=(n(m-n)+1-j)/(m-n) \leq l = n-1$, which gives $j\geq m-n+1\geq 2$. This proves the first statement. To see the second statement, note that when $j=n(m-n)+1 = mn-(n-1)^2-2(n-1)$ we have $s=0$. In this case we have the restriction \begin{align*} \begin{cases} \lambda_n = n-1-z_{n-1}-m,\\ \lambda_1 \leq -m. \end{cases} \end{align*} If $d<n$, then $\lambda_n \geq n-d-m > -m \geq \lambda_1$, a contradiction, so the set $W$ is empty. On the other hand if $d\geq n$ then $W$ is not empty. So $\Ext^{n(m-n)+1}_S(S/I^d,S) \neq 0$ if and only if $d\geq n$. \end{proof} In our proof of the main theorem, we will need an important property of the $\Ext$-modules, which only holds for maximal minors. \begin{prop}\label{isom_ext_modules}(See \cite[Corollary 4.4]{RaicuWeymanWitt}) We have $\Hom_S(I^d,S)=S$, $\Ext^1_S(S/I^d,S)=0$ and $\Ext^{j+1}_S(S/I^d,S) = \Ext^j_S(I^d,S)$ for $j>0$.
\end{prop} \begin{lemma}(See \cite[Theorem 4.5]{RaicuWeymanWitt})\label{injectivity} Given the short exact sequence $$0\rightarrow I^{d} \rightarrow I^{d-1} \rightarrow I^{d-1}/I^d \rightarrow 0,$$ the induced map $$\Ext^j_S(I^{d-1},S)\hookrightarrow \Ext^j_S(I^{d},S)$$ is injective for any $j$ such that $\Ext^j_S(I^d,S)\neq 0$. \end{lemma} In order to prove our main theorem, we need to investigate the lengths of the $\Ext$-modules. We will need the following fact. \begin{lemma}\label{length=dim} Given a graded $S$-module $M$ we have $\ell(M) = \Dim_\mathbb{C}(M)$. \end{lemma} \begin{proof} First assume $M$ is finitely graded over $\mathbb{C}$ and write $M=\bigoplus_{i\leq \alpha} M_i$, where $M_\alpha$ is the top nonzero graded piece. We will use the $\mathbb{C}$-vector space basis of each $M_i$ to construct a composition series of $M$ over $S$. Suppose $M_\alpha = \operatorname{span}(x_1,...,x_r)$ and consider the series $$0\subsetneq \operatorname{span}(x_1) \subsetneq \operatorname{span}(x_1,x_2) \subsetneq ... \subsetneq \operatorname{span}(x_1,...x_r)=M_\alpha.$$ Note that each $x_i$ is annihilated by the maximal ideal $\frakm$ of $S$, since multiplying $x_i$ by elements of $\frakm$ increases the degree. Since $Sx_i$ is cyclic and annihilated by $\frakm$, we have $Sx_i \cong S/\frakm$. Therefore each quotient of the above series is isomorphic to $S/\frakm$, so the series above is a composition series. Repeating this procedure for each graded piece of $M$, working from the top degree down, we get a composition series of $M$ and conclude that $\ell(M) = \Dim_\mathbb{C}(M)$. On the other hand if $M$ has infinitely many graded pieces over $\mathbb{C}$ so that $\Dim_\mathbb{C}(M) = \infty$, then the above argument shows that we can form a composition series of infinite length, and so $\ell(M)=\infty$. \end{proof} \begin{prop}\label{length_ext_fin} In our setting, $\ell(\Ext^j_S(S/I^d,S))$ is finite and nonzero if and only if $j=n(m-n)+1$ (which corresponds to $s=0$ in Theorem \ref{schur-decompos}) and $d\geq n$. \end{prop} \begin{proof} The correspondence of the cohomological index and $s$ can be seen in the proof of Lemma \ref{vanishing_degree}, and the condition $d\geq n$ can be seen from Lemma \ref{vanishing_degree} as well. Observe that the decomposition (\ref{decompose_first}) is finite, so we need to consider the decomposition of each $\Ext^j_S(J_{(\underline{z},l)},S)$. Suppose $s=0$. Then we have the restriction \begin{align*} \begin{cases} \lambda_n=n-1-{z_{n-1}}-m,\\ \lambda_1 \leq -m. \end{cases} \end{align*} Therefore in this case the set $W(\underline{z},n-1,(n-1),0)$ is bounded above by $(-m,...,-m,n-1-z_{n-1}-m)$ and below by $(n-1-z_{n-1}-m,...,n-1-z_{n-1}-m)$ and so is a finite set. Thus $\Ext^j_S(J_{(\underline{z},l)},S)$ can be decomposed as a finite direct sum of $S_{\lambda(s)}\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n$ for $\lambda \in W(\underline{z},n-1,(n-1),0)$. By Proposition \ref{dim_schur} it is clear that the dimension of each $S_{\lambda(s)}\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n$ is finite. So by Lemma \ref{length=dim}, $\ell(\Ext^j_S(J_{(\underline{z},l)},S)) = \Dim_\mathbb{C}(\Ext^j_S(J_{(\underline{z},l)},S)) < \infty$. On the other hand suppose $s\neq 0$. Then we have the restriction \begin{align*} \begin{cases} \lambda_n=n-1-{z_{n-1}}-m,\\ \lambda_s \geq s-n, \lambda_{s+1} \leq s-m.
\end{cases} \end{align*} Since $\lambda_s \geq s-n$ implies that any weight that is greater than $(s-n,...,s-n,s-m,...,s-m,n-1-z_{n-1}-m)$ is in $W(\underline{z},n-1,(n-1),s)$, the set $W(\underline{z},n-1,(n-1),s)$ is infinite, and therefore the decomposition of $\Ext^j_S(J_{(\underline{z},l)},S)$ in this case is infinite. So by Lemma \ref{length=dim} again we have $\ell(\Ext^j_S(J_{(\underline{z},l)},S)) = \Dim_\mathbb{C}(\Ext^j_S(J_{(\underline{z},l)},S)) = \infty$. Therefore $\ell(\Ext^j_S(S/I^d,S)) < \infty$ if and only if $j=n(m-n)+1$. \end{proof} \begin{corollary}\label{sum_of_length} Let $j=n(m-n)+1$. Then we have $$\ell(\Ext^j_S(S/I^D,S)) = \sum^{D}_{d=n}\ell(\Ext^j_S(I^{d-1}/I^d,S)).$$ \end{corollary} \begin{proof} Given the short exact sequence $$0\rightarrow I^{d-1}/I^d \rightarrow S/I^d \rightarrow S/I^{d-1} \rightarrow 0$$ we have the induced long exact sequence of $\Ext$-modules \begin{align*} ... \rightarrow &\Ext^{j-1}_S(I^{d-1}/I^d,S) \rightarrow \Ext^j_S(S/I^{d-1},S) \rightarrow \Ext^j_S(S/I^d,S)\\ \rightarrow &\Ext^j_S(I^{d-1}/I^d,S) \rightarrow \Ext^{j+1}_S(S/I^{d-1},S) \rightarrow ... \end{align*} By Proposition \ref{isom_ext_modules} and Lemma \ref{injectivity} the map $\Ext^j_S(S/I^{d-1},S) \rightarrow \Ext^j_S(S/I^d,S)$ from the above long exact sequence is injective. Therefore we can split the above long exact sequence into short exact sequences $$0\rightarrow \Ext^j_S(S/I^{d-1},S) \rightarrow \Ext^j_S(S/I^d,S) \rightarrow \Ext^j_S(I^{d-1}/I^d,S) \rightarrow 0.$$ By Lemma \ref{vanishing_degree}, $\Ext^j_S(S/I^d,S) = 0$ for $d<n$, so $\Ext^j_S(I^{d-1}/I^d,S) = 0$ for $d<n$ as well. Then by Proposition \ref{length_ext_fin} we have \begin{align*} \begin{gathered} \ell(\Ext^j_S(I^{d-1}/I^d,S)) = \ell(\Ext^j_S(S/I^d,S)) - \ell(\Ext^j_S(S/I^{d-1},S)) \implies \\ \sum^D_{d=2} \ell(\Ext^j_S(I^{d-1}/I^d,S)) = \ell(\Ext^j_S(S/I^D,S))-\ell(\Ext^j_S(S/I,S)) \xRightarrow{\text{Lemma \ref{vanishing_degree}}}\\ \sum^D_{d=n}\ell(\Ext^j_S(I^{d-1}/I^d,S)) = \ell(\Ext^j_S(S/I^D,S)), \end{gathered} \end{align*} as desired. \end{proof} \section{Multiplicities of the maximal minors} In this section we will prove the main result for maximal minors. We recall the statement here, and recall that $I := I_n$. \begin{theorem}\label{formula_limit} Under the same setting as in section 3, we have \medspace \begin{enumerate} \item If $j \neq n^2-1$ then $\ell(H^j_\frakm(I^{d-1}/I^d))$ and $\ell(H^j_\frakm(S/I^D))$ are either zero or infinite. \item If $j=n^2-1$ then $\ell(H^j_\frakm(I^{d-1}/I^d))$ and $\ell(H^j_\frakm(S/I^D))$ are finite and nonzero for $d$ and $D$ at least $n$. Moreover we have $$\frakj^j(I) = (mn)!\prod^{n-1}_{i=0}\dfrac{i!}{(m+i)!}.$$ \item In fact $$\frakj^j(I) = \epsilon^j(I).$$ \end{enumerate} \end{theorem} \begin{remark}\label{Grassmannian} As mentioned in the introduction, this formula has a geometric interpretation. Recall that the degree of the Grassmannian $G(a,b)$ is \begin{align*} \operatorname{deg}(G(a,b)) = (a(b-a))! \prod^{a-1}_{i=0}\dfrac{i!}{(b-a+i)!}, \end{align*} see \cite[Ch 4]{EisenbudHarris}. Replacing $a$ with $n$ and $b$ with $m+n$, we get \begin{align*} \operatorname{deg}(G(n,m+n)) = (mn)! \prod^{n-1}_{i=0}\dfrac{i!}{(m+i)!}, \end{align*} which is precisely $\epsilon^{n^2-1}(I_n)$ (and $\frakj^{n^2-1}(I_n)$), and so it must be an integer. \end{remark} We will first prove the existence of $\frakj^j(I)$, then use it to prove the existence of $\epsilon^j(I)$. After that we will discuss their formulae.
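Before doing so, we record a quick numerical sanity check of the closed form in Theorem \ref{formula_limit} and of its identification with $\operatorname{deg}(G(n,m+n))$ from Remark \ref{Grassmannian}. The sketch below is purely illustrative and is not used in any proof; it assumes only the Python standard library, and the helper function name is ours.
\begin{verbatim}
from fractions import Fraction
from math import comb, factorial as fact

def j_mult_max_minors(m, n):
    # (mn)! * prod_{i=0}^{n-1} i!/(m+i)!, the closed form for the
    # j-multiplicity of the ideal of maximal minors (illustrative helper)
    val = Fraction(fact(m * n))
    for i in range(n):
        val *= Fraction(fact(i), fact(m + i))
    return val

# n = 2 recovers the Catalan number binom(2m, m)/(m+1), as in the m x 2 case
for m in range(3, 10):
    assert j_mult_max_minors(m, 2) == Fraction(comb(2 * m, m), m + 1)

# the worked examples below give 462 for (m, n) = (4, 3) and 6006 for (5, 3)
assert j_mult_max_minors(4, 3) == 462
assert j_mult_max_minors(5, 3) == 6006

# integrality, as predicted by deg G(n, m+n)
for m in range(2, 8):
    for n in range(1, m):
        assert j_mult_max_minors(m, n).denominator == 1
\end{verbatim}
The values $462$ and $6006$ reappear in the examples worked out after Proposition \ref{existence_limit}.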
\begin{prop}\label{existence_limit} Let $$C=(mn-1)!\prod_{1\leq i \leq n}\dfrac{1}{(n-i)!(m-i)!}$$ and let $\delta=\{0\leq x_{n-1} \leq ... \leq x_1 \leq 1\}\subseteq \mathbb{R}^{n-1}$, $dx=dx_{n-1}...dx_1$. Then $\ell(H^{n^2-1}_\frakm(I^{d-1}/I^d)) < \infty$ and is nonzero for $d\geq n$. Moreover the limit $$\frakj^{n^2-1}(I) = \lim_{d\rightarrow \infty}\dfrac{(mn-1)!\ell(H^{n^2-1}_\frakm(I^{d-1}/I^d))}{d^{mn-1}}$$ exists and the formula is given by \begin{align}\label{integral_formula} C\bigintss_\delta(\prod_{1\leq i \leq n-1}(1-x_i)^{m-n}x_i^2)(\prod_{1\leq i < j\leq n-1}(x_i-x_j)^2)dx. \end{align} \end{prop} Before we give the proof of the above proposition, we need to state some well-known results. We will use local duality to study $\ell(H^j_\frakm(I^{d-1}/I^d))$ and $\ell(H^j_\frakm(S/I^D))$. Let $M^\vee$ denote the graded Matlis dual of a graded $R$-module $M$, where $R$ is a polynomial ring over $\mathbb{C}$, defined by $$(M^\vee)_\alpha := \Hom_\mathbb{C}(M_{-\alpha},\mathbb{C}),$$ and recall that Matlis duality preserves the length of finite length modules. \begin{lemma}\label{isom_of_length} Let $M$ be a finite length module over $S$. Then we have $$\ell(\Ext^j_S(M,S)) = \ell(H^{\Dim(S)-j}_\frakm(M)).$$ \end{lemma} \begin{proof} By local duality (see \cite[Theorem 3.6.19]{BrunsHerzog}), we have $$\Ext^j_S(M,S(-\Dim(S))) \cong H^{\Dim(S)-j}_\frakm(M)^\vee.$$ Then the assertion of our lemma is immediate. \end{proof} Using this lemma we turn the problem into studying the lengths of $\Ext$-modules of cohomological degree $n(m-n)+1$. In the proof of Proposition \ref{existence_limit}, we will employ part of the strategy used in \cite{Kenkel}. However we will not resort to binomial coefficients since they become too complicated in higher-dimensional rings. We will instead use the following elementary but powerful facts. \begin{theorem}(Euler-Maclaurin formula, see \cite{Apostol})\label{EM} Suppose $f$ is a function with continuous derivatives on the interval $[a,b]$, then $$\sum^b_{i=a}f(i) = \int^b_a f(x)dx+\dfrac{f(b)+f(a)}{2}+\sum^{\lfloor p/2 \rfloor}_{k=1}\dfrac{B_{2k}}{(2k)!}(f^{(2k-1)}(b)-f^{(2k-1)}(a))+R_p$$ where $B_{2k}$ denotes the Bernoulli numbers and $R_p$ is the remainder term. \end{theorem} For our application we only need to use the integral part on the RHS of the above formula. A well-known consequence is the following. \begin{corollary}(Faulhaber's formula)\label{Faulhaber} The sum of the $p$-th powers of the first $b$ positive integers can be written in closed form as \begin{align*} \sum^b_{k=1}k^p = \dfrac{b^{p+1}}{p+1}+\dfrac{1}{2}b^p+\sum^p_{k=2}\dfrac{B_k}{k!}\dfrac{p!}{(p-k+1)!}b^{p-k+1}. \end{align*} Again $B_k$ denotes the Bernoulli numbers. In particular, the sum on the LHS can be expressed as a polynomial of degree $p+1$ in $b$ with leading coefficient $\dfrac{1}{p+1}$. \end{corollary} \begin{proof}[Proof of Proposition \ref{existence_limit}] Let $s$ be as in Theorem \ref{schur-decompos}. By Lemma \ref{vanishing_degree}, we have $\Ext^j_S(I^{d-1}/I^d,S)\neq 0$ for $s=0$, so $\frakj^j(I) \neq 0$. The first claim follows from Proposition \ref{length_ext_fin} and Lemma \ref{isom_of_length}. We will prove the second claim. We first consider the length of $\Ext^j_S(I^{d-1}/I^d,S)$. By Lemma \ref{injectivity}, in order to calculate $\ell(\Ext^j_S(I^{d-1}/I^d,S))$ we only need to calculate the dimensions of the tensor products of Schur modules that appear in $\Ext^j_S(S/I^d,S)$ but not in $\Ext^j_S(S/I^{d-1},S)$.
By Lemma \ref{weights_max_minors}, we need to consider the $\underline{z} \in \mathcal{P}(n)$ with $z_1=...=z_n=d-1$. This means we are considering the weights \begin{align*} \begin{cases} \lambda_n=n-d-m,\\ \lambda_1\leq -m. \end{cases} \end{align*} i.e. $$\Ext^j_S(I^{d-1}/I^d,S) = \bigoplus S_{\lambda(0)}\mathbb{C}^m \otimes S_\lambda \mathbb{C}^n$$ where $\lambda$ satisfies the above conditions. Adopting the notations of \cite{Kenkel}, we can write \begin{align*} \lambda &= (\lambda_1,\lambda_2,...\lambda_n)\\ &= (\lambda_n+\epsilon_1,\lambda_n+\epsilon_2,...,\lambda_n)\\ &= (n-d-m+\epsilon_1, n-d-m+\epsilon_2,...,n-d-m). \end{align*} Since $\lambda_1\leq -m$, it follows that $n-d-m \leq n-d-m+\epsilon_1 \leq -m \implies 0\leq \epsilon_1 \leq d-n$. Since $\lambda$ is dominant, we have $0\leq \epsilon_{n-1}\leq ... \leq \epsilon_1 \leq d-n$. By Corollary \ref{same_dim}, we have $$\Dim(S_\lambda\mathbb{C}^n) = \Dim(S_{(\epsilon_1,...,\epsilon_{n-1},0)}\mathbb{C}^n)$$ by subtracting $((n-d-m)^n)$ from $\lambda$. Therefore the dimension of $S_\lambda \mathbb{C}^n$ is given by \begin{align}\label{dim_small} \Dim(S_\lambda \mathbb{C}^n) = \Dim(S_{(\epsilon_1,...,\epsilon_{n-1},0)}\mathbb{C}^n) = (\prod_{1\leq i < j\leq n-1}\dfrac{\epsilon_i - \epsilon_j + j-i}{j-i})(\prod_{1\leq i \leq n-1}\dfrac{\epsilon_i+n-i}{n-i}). \end{align} Now we look at $S_{\lambda(0)}\mathbb{C}^m$. Up to adding a constant weight, which does not change dimensions by Corollary \ref{same_dim}, we have $\lambda(0)=((-m)^{m-n},\lambda_1,...,\lambda_n)$. Using Corollary \ref{same_dim} again, by subtracting $((n-d-m)^m)$ from $\lambda(0)$ we get that \begin{align}\label{dim_large} \begin{split} \Dim(S_{\lambda(0)}\mathbb{C}^m) &= \Dim(S_{((d-n)^{(m-n)},\epsilon_1,...,\epsilon_{n-1},0)}\mathbb{C}^m)\\ &= (\prod_{1\leq i\leq n-1}\dfrac{j-i}{j-i})(\prod_{1\leq i \leq m-n}\dfrac{d-\epsilon_1+m-2n+1-i}{m-n+1-i})\\ & (\prod_{1\leq i\leq m-n}\dfrac{d-\epsilon_2+m-2n+2-i}{m-n+2-i})(\epsilon_1-\epsilon_2+1)\\ &...\\ & (\prod_{1\leq i\leq m-n}\dfrac{d-n+m-i}{m-i})(\epsilon_{n-1}+1)(\dfrac{\epsilon_{n-2}+2}{2})...(\dfrac{\epsilon_1+n-1}{n-1}). \end{split} \end{align} Multiplying (\ref{dim_small}) and (\ref{dim_large}) we get that \begin{align}\label{dim_multiply} \begin{split} \Dim(S_{\lambda(0)}\mathbb{C}^m \otimes S_\lambda\mathbb{C}^n) &= \Dim(S_{\lambda(0)}\mathbb{C}^m) \times \Dim(S_\lambda\mathbb{C}^n) \\ &= \big(\prod_{1\leq i \leq m-n}\dfrac{d-n+m-i}{m-i}\big)\\ &\Big(\big(\prod_{1\leq i \leq m-n}\dfrac{d-\epsilon_1-2n+m+1-i}{m-n+1-i}\big)\big(\dfrac{\epsilon_1+n-1}{n-1}\big)^2\\ &\big(\prod_{1\leq i \leq m-n}\dfrac{d-\epsilon_2-2n+m+2-i}{m-n+2-i}\big)(\epsilon_1-\epsilon_2+1)^2\big(\dfrac{\epsilon_2+n-2}{n-2}\big)^2\\ &...\\ &\big(\prod_{1\leq i\leq m-n}\dfrac{d-n-\epsilon_{n-1}+m-1-i}{m-1-i}\big)(\epsilon_{n-2}-\epsilon_{n-1}+1)^2...(\epsilon_{n-1}+1)^2\Big) \end{split} \end{align} The formula (\ref{dim_multiply}) is for a particular choice of $\epsilon_1,...,\epsilon_{n-1}$. To calculate $\ell(\Ext^j_S(I^{d-1}/I^d,S))$ we need to add the result over all possible choices of $\epsilon_1,...,\epsilon_{n-1}$. After some manipulations we will end up with \begin{align}\label{dim_formula_sep} \ell(\Ext^j_S(I^{d-1}/I^d,S)) &= \sum_{0\leq \epsilon_{n-1}\leq ...
\leq \epsilon_1\leq d-n}(\ref{dim_multiply})\\ \tag{\ref{dim_formula_sep} - 0} &= (\prod_{1\leq i\leq m-n}\dfrac{d-n+m-i}{m-i}) \\ \tag{\ref{dim_formula_sep} - 1} &\Big(\sum^{d-n}_{\epsilon_1=0}\big(\prod_{1\leq i \leq m-n}\dfrac{d-\epsilon_1-2n+m+1-i}{m-n+1-i}\big)\big(\dfrac{\epsilon_1+n-1}{n-1}\big)^2\\ \tag{\ref{dim_formula_sep} - 2} &(\sum^{\epsilon_1}_{\epsilon_2=0}\big(\prod_{1\leq i\leq m-n}\dfrac{d-\epsilon_2-2n+m+2-i}{m-n+2-i}\big)(\epsilon_1-\epsilon_2+1)^2\big(\dfrac{\epsilon_2+n-2}{n-2}\big)^2\\ \notag &...\\ \tag{\ref{dim_formula_sep} - (n-2)} &(\sum^{\epsilon_{n-3}}_{\epsilon_{n-2}=0}\big(\prod_{1\leq i\leq m-n}\dfrac{d-n-\epsilon_{n-2}+m-2-i}{m-2-i}\big)(\epsilon_{n-3}-\epsilon_{n-2}+1)^2...(\dfrac{\epsilon_{n-2}+2}{2})^2\big)\\ \tag{\ref{dim_formula_sep} - (n-1)} &(\sum^{\epsilon_{n-2}}_{\epsilon_{n-1}=0}\big(\prod_{1\leq i\leq m-n}\dfrac{d-n-\epsilon_{n-1}+m-1-i}{m-1-i}\big)(\epsilon_{n-2}-\epsilon_{n-1}+1)^2...(\epsilon_{n-1}+1)^2\big)...\Big) \end{align} Now Corollary \ref{Faulhaber} shows that the above sum will be a polynomial in $d$, and we need to calculate its degree. Corollary \ref{Faulhaber} also implies that when looking at each sum of (\ref{dim_formula_sep}) we only need to look at the summands that will contribute to the highest degree of the resulting polynomial. We see that the sum (\ref{dim_formula_sep} - (n-1)) can be expressed as a degree $m-n+2(n-1)+1 = m+n-1$ polynomial in $\epsilon_{n-2}$. Similarly (\ref{dim_formula_sep} - (n-2)) can be expressed as a degree $2m+2n-4$ polynomial in $\epsilon_{n-3}$. Continuing in this fashion we see that the sum (\ref{dim_formula_sep} - 1) can be expressed as a degree $mn-m+n-1$ polynomial in $d$. Multiplying (\ref{dim_formula_sep} - 0) with (\ref{dim_formula_sep} - 1) will result in a degree $mn-1$ polynomial. Moreover, after factoring out the coefficients of the terms that will eventually contribute to the highest degree of the resulting polynomial of (\ref{dim_formula_sep}) and then apply Theorem \ref{EM} to the sum of said terms, the leading coefficient of the resulting polynomial of (\ref{dim_formula_sep}) is given by \begin{align*} \begin{gathered} \prod_{1\leq i\leq n}\dfrac{1}{(n-i)!(m-i)!} \lim_{d\rightarrow \infty}\dfrac{\bigintss_\Delta(\prod_{1\leq i \leq n-1}(d-x_i)^{m-n})(\prod_{1\leq i \leq n-1}x_i^2)(\prod_{1\leq i < j\leq n-1}(x_i-x_j)^2)dA}{d^{mn-m+n-1}},\\ \Delta = \{0 \leq x_{n-1} \leq ... \leq x_1 \leq d-n \}. \end{gathered} \end{align*} where the factor $\frac{1}{(m-1)!(n-1)!}$ comes from $(\ref{dim_formula_sep} - 0)$ and the coefficients of $(\dfrac{\epsilon_i}{n-i})^2$, and the product $\prod_{2\leq i\leq n}\frac{1}{(n-i)!(m-i)!}$ comes from the coefficients of the needed terms from the rest of $(\ref{dim_formula_sep})$. Since the above limit exists and the integrand is a homogeneous polynomial in $d,x_1,...,x_{n-1}$, we can simplify it to \begin{align}\label{formula_mid} \begin{gathered} \prod_{1\leq i\leq n}\dfrac{1}{(n-i)!(m-i)!} \bigintss_\delta(\prod_{1\leq i \leq n-1}(1-x_i)^{m-n}x_i^2)(\prod_{1\leq i < j\leq n-1}(x_i-x_j)^2)dA,\\ \delta = \{0 \leq x_{n-1} \leq ... \leq x_1 \leq 1 \}. \end{gathered} \end{align} Multiplying the result with $(mn-1)!$ completes the proof. \end{proof} We will give some examples of the above formula. \begin{example} Let $n=2$ and $j=3$. By Lemma \ref{vanishing_degree} and Lemma \ref{length_ext_fin}, $H^3_\frakm(I^{d-1}/I^d)\neq 0$ and has finite length. 
The integral we need to calculate is simply $$\int^{1}_{0}(1-x_1)^{m-2}x_1^2dx_1 = \dfrac{2}{m^3-m}.$$ Since $C=\frac{(2m-1)!}{(m-1)!(m-2)!}$, we get that $$\frakj^3(I) = \dfrac{1}{(m+1)!m!}(2m)! = \dfrac{1}{m+1}\binom{2m}{m}.$$ This recovers the result from \cite[Corollary 1.2]{Kenkel}. \end{example} \begin{example} Let $n=3$ and $j=8$. Again one can check, using Lemma \ref{vanishing_degree} and Proposition \ref{length_ext_fin}, that $H^8_\frakm(I^{d-1}/I^d)$ and $H^8_\frakm(S/I^D)$ are nonzero and have finite length for $D > d \geq n$. By Proposition \ref{existence_limit} we first calculate the integral $$\int_{0\leq x_2 \leq x_1 \leq 1}(1-x_1)^{m-3}(1-x_2)^{m-3}x_1^2x_2^2(x_1-x_2)^2 dx_2dx_1.$$ This can be done by integrating by parts multiple times or simply by using Sage. The result is $\frac{12}{m^2(m^2-4)(m^2-1)^2}$. We also have $C=(3m-1)!\frac{1}{(m-3)!}\frac{1}{(m-2)!}\frac{1}{2(m-1)!}$. Therefore \begin{align*} \frakj^8(I) &= (3m-1)!\dfrac{12}{m^2(m^2-4)(m^2-1)^2(m-3)!(m-2)!2(m-1)!}\\ &= (3m)!\dfrac{2}{(m+2)!(m+1)!m!}. \end{align*} More specifically, consider the case $m=4$, so that $m-n=1$ and $n(m-n)+1 = 4$. By Lemma \ref{vanishing_degree} and Proposition \ref{length_ext_fin} we get \begin{enumerate} \item The non-vanishing cohomological degrees of $\Ext^j_S(I^{d-1}/I^d,S)$ are $j=2,3,4$. \item Only $\Ext^4_S(I^{d-1}/I^d,S)$ is nonzero and has finite length. \item $\frakj^8(I) = (12)!2/(4!5!6!) = 462$. \end{enumerate} When $m=5$, $m-n = 2$ and $n(m-n)+1 = 7$. \begin{enumerate} \item The non-vanishing cohomological degrees of $\Ext^j_S(I^{d-1}/I^d,S)$ are $j=3, 5, 7$. \item Only $\Ext^7_S(I^{d-1}/I^d,S)$ is nonzero and has finite length. \item $\frakj^8(I) = (15)!2/(5!6!7!) = 6006$. \end{enumerate} \end{example} The examples above hint that $\frakj^{n^2-1}(I)$ should be $(mn)!\prod^{n-1}_{i=0}\dfrac{i!}{(m+i)!}$ as stated in the main theorem, and we will prove that this is indeed the case. We first recall a classical result of Atle Selberg. For an English reference one may consult \cite[(1.1)]{ForresterWarnaar}. \begin{theorem}(See \cite{Selberg})\label{Selberg_int_thm} For $a, b$ and $c$ in $\mathbb{C}$ such that $\operatorname{Re}(a) > 0$, $\operatorname{Re}(b) > 0$ and $\operatorname{Re}(c) > -\operatorname{min}\{1/n, \operatorname{Re}(a)/(n-1), \operatorname{Re}(b)/(n-1)\}$ we have \begin{align*}\label{Selberg_int} S_n(a,b,c) &= \bigintss_{[0,1]^n}\prod^n_{i=1}x_i^{a-1}(1-x_i)^{b-1}\prod_{1\leq i < j \leq n}|x_i-x_j|^{2c}dA \\ &= \prod^{n-1}_{i=0} \dfrac{\Gamma(a+ic)\Gamma(b+ic)\Gamma(1+(i+1)c)}{\Gamma(a+b+(n+i-1)c)\Gamma(1+c)}. \end{align*} where $\Gamma$ is the usual Gamma function, so that $\Gamma(k)=(k-1)!$ for positive integers $k$. \end{theorem} Now we can prove Theorem \ref{formula_limit}. \begin{proof}[Proof of Theorem \ref{formula_limit}] (1) Follows from Lemma \ref{vanishing_degree}, Proposition \ref{length_ext_fin} and Lemma \ref{isom_of_length}. (2) By Proposition \ref{existence_limit} it remains to evaluate $$C\bigintss_\delta \prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1}(x_i-x_j)^2 dA$$ where $$C = (mn-1)!\prod^n_{i=1}\dfrac{1}{(m-i)!(n-i)!}, \delta = \{0 \leq x_{n-1} \leq ...
\leq x_1 \leq 1\}.$$ By Theorem \ref{Selberg_int_thm} we have \begin{align*} & C\bigintss_{[0,1]^{n-1}}\prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1}(x_i-x_j)^2 dA \\ &= C \prod^{n-2}_{i=0}\dfrac{\Gamma(3+i)\Gamma(m-n+1+i)\Gamma(2+i)}{\Gamma(m+i+2)\Gamma(2)} \\ &= C \prod^{n-2}_{i=0}\dfrac{(2+i)!(m-n+i)!(1+i)!}{(m+i+1)!} \\ &= \dfrac{(mn)!}{mn}\dfrac{1}{(m-n)!}\prod^{n-1}_{i=1}\dfrac{1}{(m-i)!(n-i)!}\prod^{n-1}_{i=1}\dfrac{(1+i)!(m-n+i-1)!(i)!}{(m+i)!} \\ &= \dfrac{(mn)!}{mn}\dfrac{1}{(m-n)!}\prod^{n-1}_{i=1}\dfrac{(1+i)!}{(m-n+i)!(m-n+i)...(m+i)} \\ &= \dfrac{(mn)!}{mn}\dfrac{1}{(m-n)!}\dfrac{(m-n)!}{(m-1)!}\prod^{n-1}_{i=1}\dfrac{(1+i)!}{(m+i)!} \\ &= \dfrac{(mn)!}{n}\prod^{n-1}_{i=0}\dfrac{1}{(m+i)!}\prod^{n-1}_{i=1}(1+i)! \end{align*} Since the integrand $\prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1} (x_i-x_j)^2$ does not change under permutation of variables, we have \begin{align*} & \bigintss_{[0,1]^{n-1}}\prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1}(x_i-x_j)^2 dA \\ = & (n-1)!\bigintss_\delta \prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1}(x_i-x_j)^2 dA \end{align*} Hence we have \begin{align*} \frakj^{n^2-1}(I) & = C\bigintss_\delta \prod^{n-1}_{i=1}(1-x_i)^{m-n}x_i^2\prod_{1\leq i < j \leq n-1}(x_i-x_j)^2 dA \\ & = \dfrac{1}{(n-1)!}\dfrac{(mn)!}{n}\prod^{n-1}_{i=0}\dfrac{1}{(m+i)!}\prod^{n-1}_{i=1}(1+i)! \\ & = (mn)!\prod^{n-1}_{i=0}\dfrac{i!}{(m+i)!}. \end{align*} (3) Let $j=mn-n^2+1$. By Corollary \ref{sum_of_length} we need to sum $\ell(\Ext^j_S(I^{d-1}/I^d,S))$ over all $n \leq d\leq D$ to get $\ell(\Ext^j_S(S/I^D,S))$. It is clear that by Corollary \ref{Faulhaber} the sum \begin{equation}\label{sum_all_d} \ell(\Ext^j_S(S/I^D,S))=\sum^D_{d=n}\ell(\Ext^j_S(I^{d-1}/I^d,S)) \end{equation} can be expressed as a polynomial in $D$ of degree $mn$. By Lemma \ref{isom_of_length} we see that $\ell(H^{mn-j}_\frakm(S/I^D))$ is a polynomial in $D$ of degree $mn$ as well. Therefore we have $$\epsilon^{mn-j}(I) = \lim_{D\rightarrow \infty}\dfrac{(mn)!\ell(H^{mn-j}_\frakm(S/I^D))}{D^{mn}} < \infty,$$ where $mn-j = mn-n(m-n)-1 = n^2-1$. Finally, apply Corollary 4.4 to (\ref{sum_all_d}), we see that the leading coefficient of the resulting polynomial of (\ref{sum_all_d}) is given by multiplying $1/mn$ to $(\ref{formula_mid})$, then multiplying the result with $(mn)!$ yields the desired formula, which is precisely $\epsilon^{n^2-1}(I) = \frakj^{n^2-1}(I)$. \end{proof} \section{Decompositions of Ext modules of sub-maximal Pfaffians} We will follow the same strategies to prove the existence of the $j$-multiplicities of the $\Ext$-module of the Pfaffians for a suitable $j$. Let $\Pf_{2k}$ be the $2k \times 2k$ Pfaffian of $S = \Sym(\bigwedge^2 \mathbb{C}^n)$ which can be considered as a polynomial ring with variables in a skew-symmetric matrix. In this section we recall the result of the decomposition of $\Ext^\bullet_S(S/\Pf_{2k}^d,S)$ from \cite{Perlman}. We first recall some notations from \cite{Perlman}. Recall that $\mathcal{P}(k) = \{\underline{z} = (z_1 \geq ... \geq z_k \geq 0)\}$ and $\mathcal{P}_e(k)$ the partitions with columns of even lengths. We denote $$\underline{z}^{(2)} = (z_1,z_1,z_2,z_2,...,z_k,z_k) \in \mathcal{P}_e(2k).$$ It is well-known that $$S=\Sym(\bigwedge^2 \mathbb{C}^n) = \bigoplus_{\underline{z}\in \mathcal{P}(m)}S_{\underline{z}^{(2)}}\mathbb{C}^n,$$ see for example \cite[Proposition 2.3.8]{Weyman}. In \cite{AbeasisDelFra} the authors classified the $\GL$-invariant ideals in $S$. 
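As with the maximal minors, the closed form for the Pfaffian multiplicity in Theorem \ref{main_thm2} can be checked numerically against the degree of the Orthogonal Grassmannian described in Remark \ref{Orthogonal_Grass} below. The following sketch is again purely illustrative, assumes only the Python standard library, and uses helper names of our own choosing.
\begin{verbatim}
from fractions import Fraction
from math import factorial as fact

def j_mult_pfaffians(n):
    # (2n^2+n)! * prod_{i=0}^{n-1} (2i)!/(2n+1+2i)!, the closed form for the
    # multiplicity of the sub-maximal pfaffians (illustrative helper)
    val = Fraction(fact(2 * n * n + n))
    for i in range(n):
        val *= Fraction(fact(2 * i), fact(2 * n + 1 + 2 * i))
    return val

def deg_og(a):
    # deg OG(a, 2a+1) = ((a^2+a)/2)! * (1! 2! ... (a-1)!) / (1! 3! ... (2a-1)!)
    val = Fraction(fact((a * a + a) // 2))
    for i in range(1, a):
        val *= fact(i)
    for i in range(1, 2 * a, 2):
        val /= fact(i)
    return val

for n in range(1, 7):
    assert j_mult_pfaffians(n) == deg_og(2 * n)   # matches deg OG(2n, 4n+1)
    assert j_mult_pfaffians(n).denominator == 1   # hence an integer
\end{verbatim}
We now return to the description of the $\GL$-invariant ideals.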
As in the case of a generic matrix, we can consider the ideal $I_{\underline{x}}$ generated by $S_{\underline{x}^{(2)}}\mathbb{C}^n$. It can be shown that $I_{\underline{z}} = \bigoplus_{\underline{y} \geq \underline{z}} S_{\underline{y}^{(2)}}\mathbb{C}^n$. Again the $\GL$-invariant ideals in $S$ can be written as $$I_\mathcal{X} = \bigoplus_{\underline{x} \in \mathcal{X}} I_{\underline{x}}$$ for $\mathcal{X} \subset \mathcal{P}(m)$. Recall that we denote $$\mathcal{X}^d_p = \{\underline{x} \in \mathcal{P}(m) : |\underline{x}| = pd, x_1 \leq d\},$$ and it was shown in \cite{AbeasisDelFra} that $\Pf_{2p}^d = I_{\mathcal{X}^d_p}$. Now we are in a position to state the main tool for pfaffians. Note that the result in \cite{Perlman} was stated in terms of the dual vector space $(\mathbb{C}^n)^*$, and we will follow this convention here. \begin{theorem}(see \cite[Theorem F, Theorem 3.3]{Perlman})\label{perlman_main} Let \begin{equation}\label{Tl(z)} \mathcal{T}_l(\underline{z}) = \{\underline{t} = (l = t_1 \geq ... \geq t_{n-2l}) \in \mathbb{Z}^{n-2l}_{\geq 0} | \substack{z^{(2)}_{2l+i} - z^{(2)}_{2l+i+1}\geq 2t_i - 2t_{i+1}\\ 1\leq i \leq n-2l-1}\} \end{equation} and let $W(\underline{z},l,\underline{t})$ denote the set of dominant weights $\lambda$ satisfying the following conditions: \begin{align}\label{weight_restrict_pfaffian} \begin{cases} \lambda_{2l+i-2t_i} = z_{2l+i}^{(2)} + n - 1 -2t_i, i=1,...,n-2l,\\ \lambda_{2i} = \lambda_{2i-1}, 0 < 2i < n-2t_{n-2l},\\ \lambda_{n-2i} = \lambda_{n-2i-1}, 0 \leq i \leq t_{n-2l}-1. \end{cases} \end{align} Then for each $j\geq 0$ we have \begin{align}\label{decompose_ext_schur_pfaffian} \Ext^j_S(J_{\underline{z},l},S) = \bigoplus_{\substack{\underline{t} \in \mathcal{T}_l(\underline{z}) \\ \binom{n}{2} - \binom{2l}{2} - 2\sum^{n-2l}_{i=1}t_i = j \\ \lambda \in W(\underline{z}, l, \underline{t})}} S_\lambda (\mathbb{C}^n)^* \end{align} where $J_{\underline{z},l}$ is defined the same way as in section 3, and $S_\lambda (\mathbb{C}^n)^*$ appears in degree $-|\lambda|/2$. Moreover, we have \begin{align}\label{decompose_ext_pfaffian} \Ext^j_S(S/\Pf^d_{2p},S) = \bigoplus_{(\underline{z},l) \in \mathcal{Z}^d_p} \Ext^j_S(J_{\underline{z},l},S). \end{align} where $\mathcal{Z}^d_p$ is defined the same way as in Lemma \ref{z(x)_set}. \end{theorem} From now on we focus on the sub-maximal pfaffians. Let $S = \Sym(\bigwedge^2 \mathbb{C}^{2n+1})$ and $\Pf := \Pf_{2n}$. \begin{lemma}\label{non_vanishing_ext_pfaffian} In our setting $\Ext^j_S(S/\Pf^d,S) \neq 0$ if and only if $j=2(n-t_3)+1$. Moreover, $\Ext^{2n+1}_S(S/\Pf^d,S) \neq 0$ if and only if $d\geq 2n-1$. \end{lemma} \begin{proof} Recall that when $p=n$, by Lemma \ref{weights_max_minors} we have that $(\underline{z},l) = ((c^n),n-1) \in \mathcal{Z}^d_n$, and so for such $\underline{z}$ we have $\underline{z}^{(2)} = (c^{2n})$ for $0 \leq c \leq d-1$. Applying this information to (\ref{Tl(z)}) we see that $t_1 = t_2 = n-1$ and since $z^{(2)}_{2n+1} = 0$, we have $z^{(2)}_{2n} - z^{(2)}_{2n+1} \geq 2(t_2 - t_3) \implies t_3 \geq (2(n-1)-c) / 2$. Then we get that \begin{align*} &\binom{2n+1}{2} - \binom{2n-2}{2} - 2(2n-2 + t_3) = j\\ &\implies 2(n-t_3)+1 = j.
\end{align*} Moreover, applying this information to (\ref{weight_restrict_pfaffian}), we get that $W(\underline{z}, l, \underline{t}) = W((c^n),n-1,\underline{t})$ consists of weights of the form \begin{align}\label{weights_restriction_pfaffian} \begin{cases} \lambda_1 = \lambda_2 = c+2 \leq d+1,\\ \lambda_{2(n-t_3)+1} = 2n-2t_3,\\ \lambda_{2i} = \lambda_{2i-1}, 0 \leq 2i < 2n+1-2t_3,\\ \lambda_{2n+1-2i} = \lambda_{2n-2i}, 0 \leq i \leq t_3-1. \end{cases} \end{align} In particular, when $j=2n+1$, we have $t_3 = 0$ and \begin{align}\label{weights_restriction_pfaffian_2} \begin{cases} \lambda_1 = \lambda_2 = c+2 \leq d+1,\\ \lambda_{2i} = \lambda_{2i-1}, 0 \leq 2i < 2n+1,\\ \lambda_{2n+1} = 2n. \end{cases} \end{align} Since the weights are dominant, $2n \leq d+1 \implies 2n-1 \leq d$. \end{proof} \begin{prop}\label{finite_length_pfaffian} In our setting, $\ell(\Ext^j_S(S/\Pf^d,S)) < \infty$ and is nonzero if and only if $j=2n+1$. \end{prop} \begin{proof} We would like to identify the sets $W(\underline{z},l,\underline{t})$ that are finite. For each $d$ we have $\lambda_1 = \lambda_2 \leq d+1$, so we have an upper bound. The lower bound comes from $\lambda_{2n}$. We see from the proof of Lemma \ref{non_vanishing_ext_pfaffian} that if $j\neq 2n+1$ then there is no lower bound for the set. On the other hand we see from the proof of Lemma \ref{non_vanishing_ext_pfaffian} that when $j=2n+1$ the set is finite and nonempty. This means by Lemma \ref{length=dim} the only $\Ext^j_S(S/\Pf^d, S)$ with finite length is the one with $j=2n+1$. \end{proof} As in the case of maximal minors of a generic matrix, we have the corresponding injectivity maps for the $\Ext$ modules. \begin{lemma}(see \cite[Corollary 5.4]{RaicuWeymanWitt})\label{ext_pfaffian} In our setting, we have $$\Hom_S(\Pf^d,S) = S, \Ext^1_S(S/\Pf^d,S) = 0,$$ and $$\Ext^{j+1}_S(S/\Pf^d,S) = \Ext^j_S(\Pf^d,S)$$ for $j>0$. \end{lemma} \begin{prop}\label{injection_ext_pfaffian}\cite[Theorem 5.5]{RaicuWeymanWitt} Given the short exact sequence $$0 \rightarrow \Pf^d \rightarrow \Pf^{d-1} \rightarrow \Pf^{d-1}/\Pf^d \rightarrow 0,$$ the induced map $$\Ext^j_S(\Pf^{d-1},S) \hookrightarrow \Ext^j_S(\Pf^d,S)$$ is injective for any $j$ such that $\Ext^j_S(\Pf^d,S) \neq 0$. \end{prop} Combining the above results we get the following. \begin{theorem}\label{sum_of_length_pfaffian} In our setting we have $$\ell(\Ext^j_S(S/\Pf^D,S)) = \sum^D_{d=2n-1}\ell(\Ext^j_S(\Pf^{d-1}/\Pf^d,S))$$ for $j=2n+1.$ \end{theorem} \begin{proof} The argument is identical to that of Corollary \ref{sum_of_length} and follows from Lemma \ref{non_vanishing_ext_pfaffian}, Proposition \ref{finite_length_pfaffian} and Proposition \ref{injection_ext_pfaffian}. \end{proof} \section{Multiplicities of thickenings of sub-maximal Pfaffians} In this section we will prove the main theorem for sub-maximal pfaffians. \begin{theorem}\label{formula_limit_pfaffian} Under the same setting as in section 5, we have \begin{enumerate} \item If $j \neq 2n^2-n-1$ then $\ell(H^j_\frakm(\Pf^{d-1}/\Pf^d))$ and $\ell(H^j_\frakm(S/\Pf^D))$ are either zero or $\infty$. \item If $j = 2n^2-n-1$ then $\ell(H^j_\frakm(\Pf^{d-1}/\Pf^d))$ and $\ell(H^j_\frakm(S/\Pf^D))$ are finite and nonzero for $d$ and $D$ at least $2n-1$. Moreover we have $$\epsilon^j(\Pf) = (2n^2+n)!\prod^{n-1}_{i=0}\dfrac{(2i)!}{(2n+1+2i)!}.$$ \item In fact $$\frakj^j(\Pf) = \epsilon^j(\Pf).$$ \end{enumerate} \end{theorem} \begin{remark}\label{Orthogonal_Grass} As in the case of maximal minors (Remark \ref{Grassmannian}), there is a geometric interpretation of this formula.
Let $OG(a,b)$ denote the Orthogonal Grassmannian. By the discussion in \cite[p83, p88]{Totaro}, we have \begin{align*} \operatorname{deg}(SO(2a+1)/U(a)) = \operatorname{deg}(OG(a,2a+1)) = ((a^2+a)/2)!\dfrac{1!2!...(a-1)!}{1!3!...(2a-1)!}. \end{align*} Replacing $a$ by $2n$, we get \begin{align*} \operatorname{deg}(OG(2n,4n+1)) = (2n^2+n)!\dfrac{1!2!...(2n-1)!}{1!3!...(4n-1)!}. \end{align*} After cancellation, we end up with the same formula as for $\epsilon^{2n^2-n-1}(\Pf)$, and thus $\epsilon^{2n^2-n-1}(\Pf)$ (and $\frakj^{2n^2-n-1}(\Pf)$) must be an integer. \end{remark} Again we will first prove the existence of $\frakj^j(\Pf)$, then use it to prove the existence of $\epsilon^j(\Pf)$. \begin{prop}\label{existence_limit_pfaffian_j} Let $$C = (2n^2+n-1)!\prod_{1\leq i \leq 2n} \dfrac{1}{i!},$$ and let $\delta = \{0 \leq x_{n-1} \leq ... \leq x_1 \leq 1\} \subset \mathbb{R}^{n-1}$, $dx = dx_{n-1}...dx_1$. Then $\ell(H^{2n^2-n-1}_\frakm(\Pf^{d-1}/\Pf^d)) < \infty$ and is nonzero for $d \geq 2n-1$. Moreover the limit $$\frakj^{2n^2-n-1}(\Pf) = \lim_{d\rightarrow \infty}\dfrac{(2n^2+n-1)!\ell(H^{2n^2-n-1}_\frakm(\Pf^{d-1}/\Pf^d))}{d^{2n^2+n-1}}$$ exists and is equal to $$C\bigintsss_{\delta} \prod_{1\leq i < j\leq n-1}(x_i - x_j)^4\prod_{1\leq i \leq n-1}x_i^2(1-x_i)^4dx.$$ \end{prop} \begin{proof} Let $m:=\Dim(S) = 2n^2+n$ and fix $j=2n+1$. As in section 4, we first calculate $\ell(\Ext^j_S(\Pf^{d-1}/\Pf^d,S))$. Since by Lemma \ref{length=dim} this is equal to the dimension of $\Ext^j_S(\Pf^{d-1}/\Pf^d,S)$ as a $\mathbb{C}$-vector space, we only need to consider the direct summands appearing in (\ref{decompose_ext_schur_pfaffian}) with weights $\lambda$ satisfying $\lambda_1 = \lambda_2 = d+1$. Therefore we get $$\Ext^j_S(\Pf^{d-1}/\Pf^d,S) = \bigoplus S_\lambda (\mathbb{C}^{2n+1})^*$$ where by (\ref{weight_restrict_pfaffian}) $\lambda$ is of the form $\lambda = (d+1,d+1,\lambda_2,\lambda_2,...,\lambda_n,\lambda_n,2n)$, and we will calculate \begin{align*} \ell(\Ext^j_S(\Pf^{d-1}/\Pf^d,S)) = \sum_\lambda \Dim(S_\lambda (\mathbb{C}^{2n+1})^*). \end{align*} Let $0\leq \epsilon_{n-1} \leq ... \leq \epsilon_1 \leq d+1-2n$; then we can rewrite $$\lambda = (d+1,d+1, 2n+\epsilon_1, 2n+\epsilon_1,...,2n+\epsilon_{n-1}, 2n+\epsilon_{n-1}, 2n).$$ Now by Corollary \ref{same_dim} again we have $$S_\lambda (\mathbb{C}^{2n+1})^* \cong S_{(d+1-2n,d+1-2n,\epsilon_1,\epsilon_1,...,\epsilon_{n-1},\epsilon_{n-1},0)}(\mathbb{C}^{2n+1})^*,$$ and by Proposition \ref{dim_schur} we calculate, for a fixed set of $\epsilon_1,...,\epsilon_{n-1}$, the dimension of $S_\lambda(\mathbb{C}^{2n+1})^*$. For simplicity we let $c := d+1-2n$. \begin{align}\label{dim_each_schur} \begin{split} &\Dim(S_\lambda (\mathbb{C}^{2n+1})^*) =\prod_{1\leq i < j\leq 2n+1}\dfrac{\lambda_i-\lambda_j + j-i}{j-i}=\\ &\dfrac{c+2n+1-1}{2n+1-1}\dfrac{c+2n+1-2}{2n+1-2}\prod_{1\leq i\leq n-1}\dfrac{\epsilon_i+2n+1-(2i+1)}{2n+1-(2i+1)}\dfrac{\epsilon_i+2n+1-(2i+2)}{2n+1-(2i+2)}\\ &\prod_{1\leq i \leq n-1}\big(\dfrac{c-\epsilon_i+(2i+1)-1}{(2i+1)-1}\dfrac{c-\epsilon_i+(2i+2)-1}{(2i+2)-1}\dfrac{c-\epsilon_i+(2i+1)-2}{(2i+1)-2}\dfrac{c-\epsilon_i+(2i+2)-2}{(2i+2)-2}\big)\\ &\prod_{1\leq i<j \leq n-1}\dfrac{\epsilon_i-\epsilon_j+(2j+2)-(2i+2)}{(2j+2)-(2i+2)}\dfrac{\epsilon_i-\epsilon_j+(2j+1)-(2i+1)}{(2j+1)-(2i+1)}\\ &\dfrac{\epsilon_i-\epsilon_j+(2j+2)-(2i+1)}{(2j+2)-(2i+1)}\dfrac{\epsilon_i-\epsilon_j+(2j+1)-(2i+2)}{(2j+1)-(2i+2)} \end{split} \end{align} Next, we need to add together (\ref{dim_each_schur}) for all possible $\epsilon_1,...,\epsilon_{n-1}$.
\begin{align}\label{sum_dim_schur_pfaffian} \begin{split} &\ell(\Ext^j_S(\Pf^{d-1}/\Pf^d,S)) \\ &= \sum_{0\leq \epsilon_{n-1} \leq ... \leq \epsilon_1 \leq c} (\ref{dim_each_schur})\\ &= \dfrac{c+2n+1-1}{2n+1-1}\dfrac{c+2n+1-2}{2n+1-2}\\ &\sum_{\epsilon_1 = 0}^c \dfrac{\epsilon_1+2n-2}{2n-2}\dfrac{\epsilon_1+2n-3}{2n-3}\Big(\dfrac{c-\epsilon_{1}+2}{2}\dfrac{c-\epsilon_{1}+1}{1}\dfrac{c-\epsilon_1+3}{3}\dfrac{c-\epsilon_{1}+2}{2}\Big)\\ &...\\ &\sum_{\epsilon_{n-1}=0}^{\epsilon_{n-2}} \dfrac{\epsilon_{n-1}+1}{1}\dfrac{\epsilon_{n-1}+2}{2}\\ &\Big(\dfrac{c-\epsilon_{n-1}+2n-2}{2n-2}\dfrac{c-\epsilon_{n-1}+2n-1}{2n-1}\dfrac{c-\epsilon_{n-1}+2n-3}{2n-3}\dfrac{c-\epsilon_{n-1}+2n-2}{2n-2}\Big)\\ &\Big(\prod_{1\leq i \leq n-2}\dfrac{\epsilon_i-\epsilon_{n-1}+2n-(2i+2)}{2n-(2i+2)}\dfrac{\epsilon_i-\epsilon_{n-1}+2n-1-(2i+1)}{(2n-1)-(2i+1)}\\ &\dfrac{\epsilon_i-\epsilon_{n-1}+2n-(2i+1)}{2n-(2i+1)}\dfrac{\epsilon_i-\epsilon_{n-1}+2n-1-(2i+2)}{(2n-1)-(2i+2)}\Big) \end{split} \end{align} An argument similar to one in the proof of Proposition \ref{existence_limit} shows that the above sum can be written as a polynomial of degree $2n^2+n-1$ in $d$. Moreover, an argument similar to the one in the proof of Proposition \ref{existence_limit} shows that, using (\ref{sum_dim_schur_pfaffian}), $\frakj^{2n^2-n-1}(\Pf)$ can be written as \begin{align}\label{limit_int_pfaffian} \begin{split} &(2n^2+n-1)!\dfrac{1}{(2n)(2n-1)}\dfrac{1}{(2n-2)!}\dfrac{1}{(2n-1)!(2n-2)!}\prod_{1\leq i \leq 2n-3}\dfrac{1}{i!}\cdot\\ &\bigintsss_{\delta} \prod_{1\leq i < j\leq n-1}(x_i - x_j)^4\prod_{1\leq i \leq n-1}x_i^2(1-x_i)^4 dA \end{split} \end{align} where $\delta = \{0 \leq \epsilon_{n-1} \leq ... \leq \epsilon_1 \leq d+1-2n\}$, which, after simplification, is precisely the formula in our assertion. To be more precise, the factor $1/(2n-2)!$ comes from the terms of the form $\epsilon_i/(2i+1-(2n+1))$ and $\epsilon_i/(2i+2-(2n+1))$. The factor $1/(2n-1)!(2n-2)!$ comes from the terms $$\dfrac{c-\epsilon_i}{2i+1-1}\dfrac{c-\epsilon_i}{2i+1-2}\dfrac{c-\epsilon_i}{2i+2-1}\dfrac{c-\epsilon_i}{2i+2-2}.$$ Finally, the factor $\prod_{1\leq i \leq 2n-3}(1/i!) $ comes from the rest of the products involving the terms $\epsilon_i-\epsilon_j$. \end{proof} The proof of the main theorem of this section is similar to Theorem \ref{formula_limit}. \begin{proof}[Proof of Theorem \ref{formula_limit_pfaffian}] (1) is clear from Lemma \ref{non_vanishing_ext_pfaffian} and Proposition \ref{finite_length_pfaffian}. (2) Apply the Selberg integral (Theorem \ref{Selberg_int_thm}) to the formula we got in Proposition \ref{existence_limit_pfaffian_j}. In this case we have $a=3, b=5, c=2$. Therefore we can further simplified (\ref{limit_int_pfaffian}) to \begin{align*} &\dfrac{(2n^2+n)!}{2n^2+n}\dfrac{1}{(n-1)!}\prod_{1\leq i \leq 2n}\dfrac{1}{i!}\prod^{n-2}_{i=0}\dfrac{(2+2i)!(2+2i)!(4+2i)!}{2(2n+3+2i)!}\\ &=\dfrac{(2n^2+n)!}{2n^2+n}\dfrac{1}{(n-1)!}\prod_{1\leq i \leq n}\dfrac{1}{(2i)!}\dfrac{1}{(2i-1)!}\prod^{n-2}_{i=0}\dfrac{(2+2i)!(2+2i)!(4+2i)!}{2(2n+3+2i)!}\\ &=\dfrac{(2n^2+n)!}{2n^2+n}\dfrac{1}{(n-1)!}\dfrac{1}{(2n)!}\prod_{0\leq i \leq n-2}\dfrac{1}{(2i+3)!}\prod^{n-2}_{i=0}\dfrac{(2+2i)!(4+2i)!}{2(2n+3+2i)!}\\ &=(2n^2+n)!\dfrac{1}{(2n+1)!n!}\prod^{n-2}_{i=0}\dfrac{(2+2i)!(i+2)}{(2n+3+2i)!}\\ &=(2n^2+n)!\dfrac{1}{(2n+1)!}\prod^{n-2}_{i=0}\dfrac{(2+2i)!}{(2n+3+2i)!}\\ &=(2n^2+n)!\prod^{n-1}_{i=0}\dfrac{(2i)!}{(2n+1+2i)!}. 
\end{align*} Therefore by Proposition \ref{existence_limit_pfaffian_j} we have $$\frakj^{2n^2-n-1}(\Pf) = (2n^2+n)!\prod^{n-1}_{i=0} \dfrac{(2i)!}{(2n+1+2i)!},$$ which completes the proof of (2). (3) Applying Lemma \ref{ext_pfaffian} and Proposition \ref{injection_ext_pfaffian}, the proof is similar to that of Theorem \ref{formula_limit} (3). \end{proof} \section{Open questions} Our approach relies on the vector space decompositions of $\Ext^j_S(S/I_n^d,S)$ and $\Ext^j_S(S/\Pf_{2n}^d,S)$ for $S=\Sym(\mathbb{C}^m \otimes \mathbb{C}^n)$ and $S=\Sym(\bigwedge^2 \mathbb{C}^n)$, respectively. When $S=\Sym(\Sym^2(\mathbb{C}^n))$, the corresponding decomposition of the $\Ext$ modules is still unknown, see also \cite[Remark 3.8]{Perlman} and \cite[Remark 2.7]{RaicuWeyman2}. However, we have seen the unexpected connection between $\epsilon^j(I_n)$ (resp. $\epsilon^j(\Pf_{2n})$) and the degree of the Grassmannian (resp. the Orthogonal Grassmannian). Therefore it is natural to ask the following questions: \begin{question} Let $S=\Sym(\Sym^2(\mathbb{C}^n))$. Let $\mathcal{S}_p$ be the ideal generated by the $p \times p$ symmetric minors of $S$. \begin{enumerate} \item For which $j$ does $\Ext^j_S(S/\mathcal{S}_{n-1}^D,S)$ have finite length? \item Suppose that $\ell(\Ext^j_S(S/\mathcal{S}_{n-1}^D,S))$ is finite and nonzero. Do $\epsilon^j(\mathcal{S}_{n-1})$ and $\frakj^j(\mathcal{S}_{n-1})$ exist? \item Suppose that $\epsilon^j(\mathcal{S}_{n-1})$ or $\frakj^j(\mathcal{S}_{n-1})$ exists. Can we identify it with the degree of the Lagrangian Grassmannian? \end{enumerate} \end{question} Moreover, considering the results in \cite{JeffriesMontanoVarbaro}, we ask: \begin{question} \begin{enumerate} \item Are there any geometric interpretations for $\epsilon^0(I_p)$, $\epsilon^0(\Pf_{2p})$ and $\epsilon^0(\mathcal{S}_{p})$? \item Are there any geometric interpretations for $\frakj^0(I_p)$, $\frakj^0(\Pf_{2p})$ and $\frakj^0(\mathcal{S}_{p})$? \end{enumerate} \end{question} \section{Acknowledgement} The author would like to thank her advisor, Wenliang Zhang, for suggesting this problem and for his guidance throughout the preparation of this paper, and Tian Wang for helpful suggestions. \begin{thebibliography}{HKM} \bibitem[Ap]{Apostol} T.M.~Apostol, \emph{An Elementary View of Euler's Summation Formula}, The American Mathematical Monthly \textbf{106} (1999), 409--418. \bibitem[ADF]{AbeasisDelFra} S.~Abeasis and A.~Del Fra, \emph{Young diagrams and ideals of Pfaffians}, Adv. Math. \textbf{35} (1980), 158--178. \bibitem[BR]{BuchsbaumRim} D.~Buchsbaum and D.~Rim, \emph{A Generalized Koszul Complex. II. Depth and Multiplicity}, Trans. Amer. Math. Soc. \textbf{111} (1963), 197--224. \bibitem[BH]{BrunsHerzog} W.~Bruns and J.~Herzog, \emph{Cohen-Macaulay rings}, Cambridge Studies in Advanced Mathematics, \textbf{39}, Cambridge University Press, Cambridge, 1993. \bibitem[Cu]{Cutkosky} S.D.~Cutkosky, \emph{Asymptotic multiplicities of graded families of ideals and linear series}, Adv. Math. \textbf{264} (2014), 55--113. \bibitem[CHST]{CutkoskyHST} S.D.~Cutkosky, H.T.~H\`{a}, H.~Srinivasan and E.~Theodorescu, \emph{Asymptotic Behavior of the Length of Local Cohomology}, Canad. J. Math. \textbf{57} (2005), 1178--1192. \bibitem[DEP]{DeConciniEisenbudProcesi} C.~De Concini, D.~Eisenbud and C.~Procesi, \emph{Young Diagrams and Determinantal Varieties}, Invent. Math. \textbf{56} (1980), 129--165. \bibitem[DM]{DaoMontano19} H.~Dao and J.
~Monta\~{n}o, \emph{Length of Local Cohomology of Powers of Ideals}, Trans. Amer. Math. Soc. \textbf{371} (2019), no.~5, 3483--3503. \bibitem[DM2]{DaoMontano20} H.~Dao and J.~Monta\~{n}o, \emph{On asymptotic vanishing behavior of local cohomology}, Math. Z. \textbf{295} (2020), no. 1-2, 73--86. \bibitem[EH]{EisenbudHarris} D.~Eisenbud and J.~Harris, \emph{3264 and all that -- A second course in algebraic geometry}, Cambridge University Press, Cambridge, 2016. xiv+616 pp. ISBN: 978-1-107-60272-4; 978-1-107-01708-5 \bibitem[FW]{ForresterWarnaar} P.J.~Forrester and S.O.~Warnaar, \emph{The importance of the Selberg integral}, Bull. Amer. Math. Soc. \textbf{45} (2008), 489--534. \bibitem[FH]{FultonHarris} W.~Fulton and J.~Harris, \emph{Representation Theory: A First Course}, Graduate Texts in Mathematics \textbf{129}, Springer, New York, 2004. \bibitem[Hu]{HunekeLecture} C.~Huneke, \emph{Lectures on local cohomology}, Interactions between homotopy theory and algebra, Contemp. Math. \textbf{436}, Amer. Math. Soc., Providence, RI, 2007, 51--99. \bibitem[Hu81]{Huneke} C.~Huneke, \emph{Powers of Ideals Generated by Weak d-Sequences}, J. Algebra \textbf{68} (1981), 471--509. \bibitem[JMV]{JeffriesMontanoVarbaro} J.~Jeffries, J.~Monta\~{n}o and M.~Varbaro, \emph{Multiplicities of classical varieties}, Proc. Lond. Math. Soc. (3) \textbf{110} (2015), 1033--1055. \bibitem[KV]{KatzValidashti} D.~Katz and J.~Validashti, \emph{Multiplicities and Rees valuations}, Collect. Math. \textbf{61} (2010), 1--24. \bibitem[Ke]{Kenkel} J.~Kenkel, \emph{Length of Local Cohomology of Thickenings}, arXiv:1912.02917. \bibitem[Ra]{Raicu} C.~Raicu, \emph{Regularity and cohomology of determinantal thickenings}, Proc. Lond. Math. Soc. (3) \textbf{116} (2018), no. 2, 248--280. \bibitem[RW14]{RaicuWeyman} C.~Raicu and J.~Weyman, \emph{Local cohomology with support in generic determinantal ideals}, Algebra Number Theory \textbf{8} (2014), no. 5, 1231--1257. \bibitem[RW16]{RaicuWeyman2} C.~Raicu and J.~Weyman, \emph{Local cohomology with support in ideals of symmetric minors and Pfaffians}, J. Lond. Math. Soc. (2) \textbf{94} (2016), no. 3, 709--725. \bibitem[RWW]{RaicuWeymanWitt} C.~Raicu, J.~Weyman and E.~Witt, \emph{Local cohomology with support in ideals of maximal minors and sub-maximal Pfaffians}, Adv. Math. \textbf{250} (2014), 596--610. \bibitem[Pe]{Perlman} M.~Perlman, \emph{Regularity and cohomology of Pfaffian thickenings}, J. Commut. Algebra \textbf{13} (2021), 523--548. \bibitem[Se]{Selberg} A.~Selberg, \emph{Bemerkninger om et multipelt integral}, Norsk Mat. Tidsskr. \textbf{24} (1944), 71--78. \bibitem[To]{Totaro} B.~Totaro, \emph{Towards a Schubert calculus for complex reflection groups}, Math. Proc. Cambridge Philos. Soc. \textbf{134} (2003), no. 1, 83--93. \bibitem[UV]{UlrichValidashti} B.~Ulrich and J.~Validashti, \emph{Numerical criteria for integral dependence}, Math. Proc. Cambridge Philos. Soc. \textbf{151} (2011), no. 1, 95--102. \bibitem[We]{Weyman} J.~Weyman, \emph{Cohomology of Vector Bundles and Syzygies}, Cambridge University Press, Cambridge, 2003. \end{thebibliography} \end{document}
2205.09503v2
http://arxiv.org/abs/2205.09503v2
On the Gross-Prasad conjecture with its refinement for $\left(\mathrm{SO}\left(5\right),\mathrm{SO}\left(2\right)\right)$ and the generalized Böcherer conjecture
\documentclass[11pt,letterpaper]{amsart} \synctex=1 \usepackage{amsmath,amssymb,amsthm,amsfonts, amscd, color, mathdots} \usepackage{newtxtext,newtxmath} \usepackage{mathrsfs} \usepackage{verbatim} \numberwithin{equation}{subsection} \newtheorem{lemma}{Lemma}[section] \newtheorem{LemmaAppendix}{Lemma} \newtheorem{proposition}{Proposition}[section] \newtheorem{propositionA}{Proposition} \newtheorem{Fact}{Fact} \newtheorem{theorem}{Theorem}[section] \newtheorem{mainRemark}{Remark} \newtheorem{mainCorollary}{Corollary} \newtheorem{corollary}{Corollary}[section] \newtheorem*{mainTheorem}{Main Theorem} \newtheorem{Definition}{Definition}[section] \newtheorem{Remark}{Remark}[section] \newtheorem{Conjecture}{Conjecture} \newtheorem{Assumption}{Assumption} \renewcommand{\theLemmaAppendix}{} \renewcommand{\theAssumption}{A} \newcommand{\mQ}{\mathbb{Q}} \newcommand{\mR}{\mathbb{R}} \newcommand{\mN}{\mathbb{N}} \newcommand{\mZ}{\mathbb{Z}} \newcommand{\mA}{\mathbb{A}} \newcommand{\mC}{\mathbb{C}} \newcommand{\mO}{\mathcal{O}} \newcommand{\mB}{\mathcal{B}} \newcommand{\gl}{\mathrm{GL}} \newcommand{\gsp}{\mathrm{GSp}} \begin{document} \title[Gross-Prasad conjecture and B\"ocherer conjecture] {On the Gross-Prasad conjecture with its refinement for $\left(\mathrm{SO}\left(5\right), \mathrm{SO}\left(2\right)\right)$ and the generalized B\"ocherer conjecture } \author{Masaaki Furusawa} \address[Masaaki Furusawa]{ Department of Mathematics, Graduate School of Science, Osaka Metropolitan University, Sugimoto 3-3-138, Sumiyoshi-ku, Osaka 558-8585, Japan } \email{[email protected]} \thanks{The research of the first author was supported in part by JSPS KAKENHI Grant Number 19K03407, 22K03235 and by Osaka Central Advanced Mathematical Institute (MEXT Promotion of Distinctive Joint Research Center Program JPMXP0723833165). The first author would like to dedicate this paper to the memory of his brother, Akira Furusawa (1952--2020).} \author{Kazuki Morimoto} \address[Kazuki Morimoto]{ Department of Mathematics, Graduate School of Science, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe 657-8501, Japan } \email{[email protected]} \thanks{The research of the second author was supported in part by JSPS KAKENHI Grant Number 17K14166, 21K03164 and by Kobe University Long Term Overseas Visiting Program for Young Researchers Fund.} \subjclass[2020]{Primary: 11F55, 11F67; Secondary: 11F27, 11F46} \keywords{B\"ocherer conjecture, central $L$-values, Gross-Prasad conjecture, periods of automorphic forms} \begin{abstract} We investigate the Gross-Prasad conjecture and its refinement for the Bessel periods in the case of $(\mathrm{SO}(5), \mathrm{SO}(2))$. In particular, by combining several theta correspondences, we prove the Ichino-Ikeda type formula for any tempered irreducible cuspidal automorphic representation. As a corollary of our formula, we prove an explicit formula relating certain weighted averages of Fourier coefficients of holomorphic Siegel cusp forms of degree two which are Hecke eigenforms to central special values of $L$-functions. The formula is regarded as a natural generalization of the B\"{o}cherer conjecture to the non-trivial toroidal character case. \end{abstract} \maketitle \section{Introduction} To investigate relations between periods of automorphic forms and special values of $L$-functions is one of the focal research subjects in number theory. The central special values are of keen interest in light of the Birch and Swinnerton-Dyer conjecture and its generalizations.
In the early 1990s, Gross and Prasad~\cite{GP1,GP2} formulated a global conjecture relating the non-vanishing of certain period integrals on special orthogonal groups to the non-vanishing of central special values of certain tensor product $L$-functions, together with its local counterpart. Later, with Gan~\cite{GGP}, they extended the conjecture to classical groups and metaplectic groups. Meanwhile a refinement of the Gross-Prasad conjecture, which is a precise formula for the central special values of the tensor product $L$-functions for tempered cuspidal automorphic representations, was formulated by Ichino and Ikeda~\cite{II} in the co-dimension one special orthogonal case. Subsequently Harris~\cite{Ha} formulated a refinement of the Gan-Gross-Prasad conjecture in the co-dimension one unitary case. Later an extension of the work of Ichino-Ikeda and Harris to the general Bessel period case was formulated by Liu~\cite{Liu2}, and the one to the general Fourier-Jacobi period case for symplectic-metaplectic groups was formulated by Xue~\cite{Xue2}. In \cite{FM1} we investigated the Gross-Prasad conjecture for Bessel periods for $\mathrm{SO}\left(2n+1\right)\times \mathrm{SO}\left(2\right)$ when the character on $\mathrm{SO}\left(2\right)$ is trivial, i.e. the special Bessel period case, and then, in the sequel~\cite{FM2}, we proved its refinement, i.e. the Ichino-Ikeda type precise $L$-value formula, under the condition that the base field is totally real and all components at archimedean places are discrete series representations. As a corollary of our special value formula in \cite{FM2}, we obtained a proof of the long-standing conjecture by B\"ocherer in \cite{Bo}, concerning central critical values of imaginary quadratic twists of spinor $L$-functions for holomorphic Siegel cusp forms of degree two which are Hecke eigenforms, thanks to the explicit calculations of the local integrals by Dickson, Pitale, Saha and Schmidt~\cite{DPSS}. In this paper, for $\left(\mathrm{SO}(5), \mathrm{SO}(2)\right)$, we vastly generalize the main results in \cite{FM1} and \cite{FM2}. Namely we prove the Gross-Prasad conjecture and its refinement for any Bessel period in the case of $(\mathrm{SO}(5), \mathrm{SO}(2))$. As a corollary, we prove the generalized B\"ocherer conjecture in the square-free case formulated in \cite{DPSS}. Let us introduce some notation and then state our main results precisely. \subsection{Notation}\label{ss: notation} Let $F$ be a number field. We denote its ring of adeles by $\mA_F$, which is mostly abbreviated as $\mA$ for simplicity. Let $\psi$ be a non-trivial character of $\mA \slash F$. For $a \in F^\times$, we denote by $\psi^a$ the character of $\mA \slash F$ defined by $\psi^a(x) = \psi(ax)$. For a place $v$ of $F$, we denote by $F_v$ the completion of $F$ at $v$. When $v$ is non-archimedean, we denote by $\varpi_v$ and $q_v$ a uniformizer of $F_v$ and the cardinality of the residue field of $F_v$, respectively. Let $E$ be a quadratic extension of $F$ and $\mA_E$ be its ring of adeles. We denote by $x\mapsto x^\sigma$ the unique non-trivial automorphism of $E$ over $F$. Let us denote by $\mathrm{N}_{E \slash F}$ the norm map from $E$ to $F$. We choose and fix $\eta\in E^\times$ such that $\eta^\sigma=-\eta$. Let $d=\eta^2$. We denote by $\chi_E$ the quadratic character of $\mA^\times$ corresponding to the quadratic extension $E \slash F$. We fix a character $\Lambda$ of $\mA_E^\times \slash E^\times$ whose restriction to $\mA^\times$ is trivial once and for all.
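For instance, when $F=\mQ$ and $E$ is an imaginary quadratic field, as in the generalized B\"ocherer conjecture below (see Theorem~\ref{Boecherer:scalar}), any character of the ideal class group $\mathrm{Cl}_E$ gives such a $\Lambda$: viewed via the idelic description
\[
\mA_E^\times\slash E^\times\mC^\times\widehat{\mO}_E^\times\;\simeq\;\mathrm{Cl}_E
\]
(where $\widehat{\mO}_E$ denotes the profinite completion of the ring of integers of $E$) as a character of $\mA_E^\times\slash E^\times$, it is automatically trivial on $\mA^\times=\mQ^\times\mR_{>0}\widehat{\mZ}^\times$, since these three factors are contained in $E^\times$, $\mC^\times$ and $\widehat{\mO}_E^\times$, respectively.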
\subsection{Measures}\label{ss: measures} Throughout the paper, for an algebraic group $\mathbf G$ defined over $F$, we write $\mathbf G_v$ for $\mathbf G\left(F_v\right)$, the group of rational points of $\mathbf G$ over $F_v$, and we always take the measure $dg$ on $\mathbf G\left(\mA\right)$ to be the Tamagawa measure unless specified otherwise. For each $v$, we take the self-dual measure with respect to $\psi_v$ on $F_v$. Then recall that the product measure on $\mA$ is the self-dual measure with respect to $\psi$ and is also the Tamagawa measure since $\mathrm{Vol}\left(\mA\slash F\right)=1$. For a unipotent algebraic group $\mathbf U$ defined over $F$, we also specify the local measure $du_v$ on $\mathbf{U}(F_v)$ to be the measure corresponding to the gauge form defined over $F$, together with our choice of the measure on $F_v$, at each place $v$ of $F$. Thus in particular we have \[ du=\prod_v\, du_v\qquad\text{and}\qquad \mathrm{Vol}\left(\mathbf U\left(F\right)\backslash \mathbf U\left(\mA\right), du\right)=1. \] \subsection{Similitudes} Various similitude groups appear in this article. Unless there exists a fear of confusion, we denote by $\lambda\left(g\right)$ the similitude of an element $g$ of a similitude group for simplicity. \subsection{Bessel periods}\label{ss: bessel periods} First we recall that when $V$ is a five dimensional vector space over $F$ equipped with a non-degenerate symmetric bilinear form whose Witt index is at least one, there exists a quaternion algebra $D$ over $F$ such that \begin{equation}\label{e: PG_D} \mathrm{SO}\left(V\right)=\mathbb G_D \end{equation} where $\mathbb G_D=G_D\slash Z_D$, $G_D$ is a similitude quaternionic unitary group over $F$ defined by \begin{equation}\label{e: G_D} G_D(F): = \left\{g \in \mathrm{GL}_2(D) : {}^{t}\overline{g} \,\begin{pmatrix}0&1\\ 1& 0\end{pmatrix}\, g= \lambda(g ) \begin{pmatrix}0&1\\ 1&0 \end{pmatrix} , \, \lambda(g) \in F^\times \right\} \end{equation} and $Z_D$ is the center of $G_D$. Here \[ \overline{g}:=\begin{pmatrix}\overline{t}&\overline{u}\\ \overline{w}&\overline{v}\end{pmatrix} \quad\text{for}\quad g=\begin{pmatrix}t&u\\ w&v\end{pmatrix} \in\mathrm{GL}_2\left(D\right) \] where denoted by $x\mapsto\overline{x}$ for $x\in D$ is the canonical involution of $D$. Also, we define a quaternionic unitary group $G_D^1$ over $F$ by \[ \label{G_D^1} G_D^1 := \left\{ g \in G_D : \lambda(g) = 1\right\}. \] Let \[ D^-:=\left\{x\in D: \mathrm{tr}_D\left(x\right)=0\right\} \] where $\mathrm{tr}_D$ denotes the reduced trace of $D$ over $F$. We recall that when $D\simeq \mathrm{Mat}_{2\times 2}\left(F\right)$, $G_D$ is isomorphic to the similitude symplectic group $\mathrm{GSp}_2$ which we denote by $G$, i.e. \begin{equation}\label{gsp} G\left(F\right):=\left\{g\in\mathrm{GL}_4\left(F\right): {}^tg\begin{pmatrix}0&1_2\\-1_2&0\end{pmatrix}g= \lambda\left(g\right)\begin{pmatrix}0&1_2\\-1_2&0\end{pmatrix},\, \lambda\left(g\right)\in F^\times\right\}. \end{equation} Also, we define the symplectic group $\mathrm{Sp}_2$, which we denote by $G^1$, as \[ \label{G^1} G^1 := \left\{ g \in G : \lambda(g) =1 \right\}. \] We denote $\mathrm{PGSp}_2 = G \slash Z_G$ by $\mathbb G$, where $Z_G$ denotes the center of $G$. Thus when $D$ is split, $G_D\simeq G=\mathrm{GSp}_2$, $G_D^1 \simeq G^1 = \mathrm{Sp}_2$ and $\mathbb G_D\simeq \mathbb G=\mathrm{PGSp}_2$. 
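As a quick sanity check on these definitions, we note that for $u\in D$ a direct computation with the canonical involution gives
\[
{}^{t}\overline{\begin{pmatrix}1&u\\0&1\end{pmatrix}}
\begin{pmatrix}0&1\\ 1&0\end{pmatrix}
\begin{pmatrix}1&u\\0&1\end{pmatrix}
=\begin{pmatrix}1&0\\ \overline{u}&1\end{pmatrix}
\begin{pmatrix}0&1\\ 1&0\end{pmatrix}
\begin{pmatrix}1&u\\0&1\end{pmatrix}
=\begin{pmatrix}0&1\\ 1&u+\overline{u}\end{pmatrix},
\]
so that $\begin{pmatrix}1&u\\0&1\end{pmatrix}$ belongs to $G_D$ if and only if $\mathrm{tr}_D\left(u\right)=0$, i.e. $u\in D^-$, in which case necessarily $\lambda=1$. This is the reason why the unipotent group $N_D$ appearing in the Levi decomposition below is parametrized by $D^-$.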
The Siegel parabolic subgroup $P_D$ of $G_D$ has the Levi decomposition $P_D = M_{D} N_{D}$ where \[ \label{d:N_D} M_{D}(F): = \left\{ \begin{pmatrix} x&0\\ 0&\mu \cdot x\end{pmatrix} : x \in D^\times, \mu \in F^\times \right\}, \quad N_{D}(F): = \left\{ \begin{pmatrix} 1&u\\ 0&1\end{pmatrix} : u \in D^-\right\}. \] For $\xi\in D^-\left(F\right)$, let us define a character $\psi_\xi$ on $N_D\left(\mathbb A\right)$ by \begin{equation}\label{e: psi_xi} \psi_\xi\begin{pmatrix}1&u\\0&1\end{pmatrix}:=\psi\left(\mathrm{tr}_D\left(\xi u\right) \right). \end{equation} We note that for $\begin{pmatrix}x&0\\0&\mu\cdot x\end{pmatrix}\in M_D\left(F\right)$, we have \begin{equation}\label{e: psi-conjugation} \psi_\xi \left[ \begin{pmatrix}x&0\\0&\mu\cdot x\end{pmatrix} \begin{pmatrix}1&u\\0&1\end{pmatrix} \begin{pmatrix}x&0\\0&\mu\cdot x\end{pmatrix}^{-1}\right] =\psi_{\mu^{-1}\cdot x^{-1}\xi x}\begin{pmatrix}1&u\\0&1\end{pmatrix}. \end{equation} Suppose that $F\left(\xi\right)\simeq E$. Let us define a subgroup $T_\xi$ of $D^\times$ by \begin{equation}\label{e: T_xi} T_\xi:=\left\{x\in D^\times: x\,\xi \,x^{-1}=\xi\right\}. \end{equation} Then since $F\left(\xi\right)$ is a maximal commutative subfield of $D$, we have \begin{equation}\label{e: maximal} T_\xi(F)=F\left(\xi\right)^\times\simeq E^\times. \end{equation} We identify $T_\xi$ with the subgroup of $M_D$ given by \begin{equation}\label{e: identify} \left\{\begin{pmatrix}x&0\\0&x\end{pmatrix}: x\in T_\xi\right\}. \end{equation} We note that by \eqref{e: psi-conjugation}, we have \[ \psi_\xi\left(tnt^{-1}\right)=\psi_\xi\left(n\right)\quad \text{for $t\in T_\xi\left(\mathbb A\right)$ and $n\in N_D\left(\mathbb A\right)$}. \] We define the Bessel subgroup $R_{\xi}$ of $G_D$ by \begin{equation}\label{d: Bessel} R_\xi:=T_\xi N_D. \end{equation} Then the Bessel periods defined below are indeed the periods in question in the Gross-Prasad conjecture for $\left(\mathrm{SO}\left(5\right), \mathrm{SO}\left(2\right)\right)$. \begin{Definition}\label{d: bessel model} Let $\pi$ be an irreducible cuspidal automorphic representation of $G_D\left(\mathbb A\right)$ whose central character is trivial and $V_\pi$ its space of automorphic forms. Let $\Lambda$ be a character of $\mathbb A_E^\times\slash E^\times$ whose restriction to $\mA^\times$ is trivial. Let $\xi\in D^-\left(F\right)$ such that $F\left(\xi\right)\simeq E$. Fix an $F$-isomorphism $T_\xi\simeq E^\times$ and regard $\Lambda$ as a character of $T_\xi\left(\mathbb A\right) \slash T_\xi\left(F\right)$. We define a character $\chi^{\xi,\Lambda}$ on $R_\xi\left(\mathbb A\right)$ by \begin{equation}\label{d: Bessel char} \chi^{\xi,\Lambda} \left(tn\right): = \Lambda\left(t\right)\psi_\xi\left(n\right) \quad \text{for $t\in T_\xi\left(\mathbb A\right)$ and $n\in N_D\left(\mathbb A\right)$}. \end{equation} Then for $f\in V_\pi$, we define $B_{\xi, \Lambda,\psi}\left(f\right)$, the $\left(\xi, \Lambda,\psi\right)$-Bessel period of $f$, by \begin{equation}\label{e: def of bessel period} B_{\xi, \Lambda,\psi}\left(f\right):= \int_{\mathbb A^\times R_\xi\left(F\right) \backslash R_\xi\left(\mathbb A\right)} f\left(r\right)\chi^{\xi,\Lambda}\left(r\right)^{-1}\, dr. \end{equation} We say that $\pi$ has the $\left(\xi, \Lambda, \psi\right)$-Bessel period when the linear form $B_{\xi,\Lambda, \psi}$ is not identically zero on $V_\pi$. \end{Definition} \begin{Remark}\label{r: dependency} Here we record the dependency of $B_{\xi,\Lambda,\psi}$ on the choices of $\xi$ and $\psi$. 
First we note that for $\xi^\prime\in D^-\left(F\right)$, we have $F\left(\xi^\prime\right)\simeq E$ if and only if \begin{equation}\label{e: s-n} \xi^\prime=\mu\cdot \alpha^{-1} \xi\alpha \quad\text{for some $\alpha\in D^\times\left(F\right)$ and $\mu\in F^\times$} \end{equation} by the Skolem-Noether theorem. Suppose that $\xi^\prime\in D^-\left(F\right)$ satisfies \eqref{e: s-n} and $\psi^\prime=\psi^a$ where $a\in F^\times$. Let $m_0=\begin{pmatrix}\alpha&0\\ 0&a^{-1}\mu\cdot \alpha\end{pmatrix} \in M_D\left(F\right)$. Then by \eqref{e: psi-conjugation}, we have \begin{align}\label{e: dependency} B_{\xi, \Lambda, \psi}\left(\pi\left(m_0\right)f\right) &=\int_{\mathbb A^\times T_{\xi^\prime}\left(F\right)\backslash T_{\xi^\prime}\left(\mathbb A\right)} \int_{N_D\left(F\right)\backslash N_D\left(\mathbb A\right)} \\ \notag &\qquad\qquad\qquad f\left(t^\prime n^\prime\right) \Lambda\left(t^{\prime}\right)^{-1} \psi^\prime_{\xi^\prime}\left(n^\prime\right)\, dt^\prime\, dn^\prime \\ \notag &= B_{\xi^\prime,\Lambda, \psi^\prime}\left(f\right) \end{align} where we identify $T_{\xi^\prime}(F)$ with $E^\times$ via the $F$-isomorphism $F\left(\xi^\prime\right)\ni x\mapsto \alpha x\alpha^{-1}\in F\left(\xi\right) \simeq E$. \end{Remark} \begin{Definition}\label{def of E,Lambda-Bessel period} Let $\left(\pi, V_\pi\right)$ be an irreducible cuspidal automorphic representation of $G_D(\mA)$ whose central character is trivial. Let $\Lambda$ be a character of $\mathbb A_E^\times\slash E^\times$ whose restriction to $\mathbb A^\times$ is trivial. Then we say that $\pi$ has the $\left(E,\Lambda\right)$-Bessel period if there exist $\xi\in D^-\left(F\right)$ such that $F\left(\xi\right)\simeq E$ and a non-trivial character $\psi$ of $\mathbb A\slash F$ so that $\pi$ has the $\left(\xi,\Lambda,\psi\right)$-Bessel period. This terminology is well-defined because of the relation \eqref{e: dependency}. \end{Definition} \subsection{Gross-Prasad conjecture} \label{s:Gross-Prasad conjecture} First we introduce the following definition which is inspired by the notion of \emph{locally $G$-equivalence} in Hiraga and Saito~\cite[p.23]{HiSa}. \begin{Definition} Let $\left(\pi, V_\pi\right)$ be an irreducible cuspidal automorphic representation of $G_D(\mA)$ whose central character is trivial. Let $D^\prime$ be a quaternion algebra over $F$ and $(\pi^\prime, V_{\pi^\prime})$ an irreducible cuspidal automorphic representation of $G_{D^\prime}(\mA)$. Then we say that $\pi$ is locally $G^+$-equivalent to $\pi^\prime$ if at almost all places $v$ of $F$ where $D\left(F_v\right)\simeq D^\prime \left(F_v\right)$, there exists a character $\chi_v$ of $G_D\left(F_v\right) \slash G_D\left(F_v\right)^+$ such that $\pi_v \otimes \chi_v \simeq \pi_v^\prime$. Here \begin{equation}\label{e: G^+_D} G_D\left(F\right)^+:=\left\{g\in G_D\left(F\right) : \lambda\left(g\right)\in \mathrm{N}_{E\slash F}\left(E^\times\right)\right\}. \end{equation} \end{Definition} \begin{Remark} \label{rem loc ne} When $\pi$ and $\pi^\prime$ have weak functorial lifts to $\mathrm{GL}_4\left(\mA\right)$, say $\Pi$ and $\Pi^\prime$, respectively, the notion of locally $G^+$-equivalence is described simply as the following. Suppose that $\pi$ and $\pi^\prime$ are locally $G^+$-equivalent. Then there exists a character $\omega$ of $G_D\left(\mA\right)$ such that $\pi\otimes\omega$ is nearly equivalent to $\pi^\prime$, where $\omega$ may not be automorphic. 
Since $\omega_v$ is either $\chi_{E_v}$ or trivial at almost all places $v$ of $F$, we have $\mathrm{BC}_{E\slash F}\left(\Pi\right)\simeq \mathrm{BC}_{E\slash F}\left(\Pi^\prime\right)$ where $\mathrm{BC}_{E\slash F}$ denotes the base change lift to $\mathrm{GL}_4\left(\mathbb A_E\right)$. Then by Arthur-Clozel~\cite[Theorem~3.1]{AC}, we have $\Pi\simeq \Pi^\prime$ or $\Pi^\prime\otimes\chi_E$. Hence $\pi$ is nearly equivalent to either $\pi^\prime$ or $\pi^\prime\otimes\chi_E$. The converse is clear. \end{Remark} Then our first main result is on the Gross-Prasad conjecture for $(\mathrm{SO}(5), \mathrm{SO}(2))$. \begin{theorem} \label{ggp SO} Let $E$ be a quadratic extension of $F$. Let $\left(\pi, V_\pi\right)$ be an irreducible cuspidal automorphic representation of $G_D\left(\mA\right)$ with a trivial central character and $\Lambda$ a character of $\mathbb A_E^\times\slash E^\times$ whose restriction to $\mathbb A^\times$ is trivial. \begin{enumerate} \item\label{theorem1-1-(1)} Suppose that $\pi$ has the $\left(E,\Lambda\right)$-Bessel period. Moreover assume that: \begin{multline}\label{genericity} \text{there exists a finite place $w$ of $F$ such that} \\ \text{ $\pi_w$ and its local theta lift to $\mathrm{GSO}_{4,2}\left(F_w\right)$ are generic.} \end{multline} Here $\mathrm{GSO}_{4,2}$ denote the identity component of $\mathrm{GO}_{4,2}$, the similitude orthogonal group associated to the six dimensional orthogonal space $(E, \mathrm{N}_{E \slash F}) \oplus \mathbb{H}^2$ over $F$ where $\mathbb H$ denotes the hyperbolic plane over $F$. Then there exists a finite set $S_0$ of places of $F$ containing all archimedean places of $F$ such that the partial $L$-function \begin{equation}\label{e: partial non-vanishing} L^S \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0 \end{equation} for any finite set $S$ of places of $F$ with $S\supset S_0$. Here, $\mathcal{AI}\left(\Lambda \right)$ denotes the automorphic induction of $\Lambda$ from $\mathrm{GL}_1(\mA_E)$ to $\mathrm{GL}_2(\mA)$. Moreover there exists a globally generic irreducible cuspidal automorphic representation $\pi^\circ$ of $G(\mA)$ which is locally $G^+$-equivalent to $\pi$. \item\label{theorem1-1-(2)} Assume that: \begin{multline}\label{arthur classification} \text{ the endoscopic classification of Arthur,} \\ \text{ i.e. \cite[Conjecture~9.4.2, Conjecture~9.5.4]{Ar} holds for $\mathbb G_{D_\circ}$ . } \end{multline} Here $D_\circ$ denotes an arbitrary quaternion algebra over $F$. Suppose that $\pi$ has a generic Arthur parameter, namely the parameter is of the form $\Pi_0$ or $\Pi_1 \boxplus \Pi_2$ where $\Pi_i$ is an irreducible cuspidal automorphic representation of $\mathrm{GL}_4\left(\mA\right)$ for $i=0$ and of $\mathrm{GL}_2\left(\mA\right)$ for $i=1,2$, respectively, such that $L(s, \Pi_i, \wedge^2)$ has a pole at $s=1$. Then we have \begin{equation} \label{e:L non-zero} L \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0 \end{equation} if and only if there exists a pair $\left(D^\prime, \pi^\prime\right)$ where $D^\prime$ is a quaternion algebra over $F$ containing $E$ and $\pi^\prime$ an irreducible cuspidal automorphic representation of $G_{D^\prime}$ which is nearly equivalent to $\pi$ such that $\pi^\prime$ has the $\left(E,\Lambda\right)$-Bessel period. Moreover, when $\pi$ is tempered, the pair $\left(D^\prime, \pi^\prime\right)$ is uniquely determined. 
\end{enumerate} \end{theorem} \begin{Remark} \label{L-fct def rem} In \eqref{e:L non-zero}, $L \left(s, \pi \times \mathcal{AI} \left(\Lambda\right) \right)$ denotes the complete $L$-function defined as the following. When $\mathcal{AI} \left(\Lambda \right)$ is not cuspidal, i.e. $\Lambda=\Lambda_0 \circ \mathrm{N}_{E \slash F}$ for a character $\Lambda_0$ of $\mA^\times \slash F^\times$, we define \[ L\left(s,\pi \times \mathcal{AI} \left(\Lambda \right) \right) : = L\left(s,\pi \times \Lambda_0 \right) L\left(s,\pi \times \Lambda_0 \chi_E \right) \] where each factor on the right hand side is defined by the doubling method as in Lapid-Rallis~\cite{LR} or Yamana \cite{Yam}. When $\mathcal{AI} \left(\Lambda \right)$ is cuspidal, the partial $L$-function $L^S\left(s,\pi\times\mathcal{AI}\left(\Lambda \right)\right)$ may be defined by Theorem~\ref{thm mero} in Appendix~\ref{appendix c} for a finite set $S$ of places of $F$ such that $\pi_v$ and $\Pi \left(\Lambda \right)_v$ are unramified at $v \not \in S$. Further, we define the local $L$-factor at each place $v \in S$ by the local Langlands parameters for $\pi_v$ and $\Pi \left(\Lambda \right)_v$, where the local Langlands parameters are given by Gan-Takeda~\cite{GT11} for $\mathbb{G}(F_v)$ (also Arthur~\cite{Ar}), Gan-Tantono~\cite{GaTan} for $\mathbb{G}_D(F_v)$ and Kutzko~\cite{Kutz} for $\mathrm{GL}_2(F_v)$ at finite places, and, by Langlands~\cite{Lan} at archimedean places. We note that the condition~\eqref{e: partial non-vanishing} and the condition~\eqref{e:L non-zero} are equivalent from the definition of local $L$-factors when $\pi$ is tempered. \end{Remark} \begin{Remark} Suppose that at a finite place $w$ of $F$, the group $G_D\left(F_w\right)$ is split and the representation $\pi_w$ is generic and tempered. Then by Gan and Ichino~\cite[Proposition~C.4]{GI1}, the big theta lift of $\pi_w$ and the local theta lift of $\pi_w$ coincide. Thus the genericity of the local theta lift of $\pi_w$ follows from Gan and Takeda~\cite[Corollary~4.4]{GT0} for the dual pair $\left(G, \mathrm{GSO}_{3,3}\right)$ and from a local analogue of the computations in \cite[Section~3.1]{Mo} for the dual pair $\left(G^+,\mathrm{GSO}_{4,2}\right)$, respectively. Here \begin{equation}\label{e: G+} G\left(F\right)^+:=\left\{g\in G: \lambda\left(g\right)\in \mathrm{N}_{E\slash F}\left(E^\times\right)\right\}. \end{equation} When a local representation $\pi_w$ is unramified and tempered, $\pi_w$ is generic as remarked in \cite[Remark~2]{FM1}. Hence the assumption~\eqref{genericity} is fulfilled when $\pi$ is tempered. \end{Remark} In our previous paper~\cite{FM1}, Theorem~\ref{ggp SO} for the pair $\left(\mathrm{SO}\left(2n+1\right),\mathrm{SO}\left(2\right) \right)$ was proved when $\Lambda$ is trivial. Meanwhile Jiang and Zhang~\cite{JZ} studied the Gross-Prasad conjecture in a very general setting assuming the endoscopic classification of Arthur in general by using the twisted automorphic descent. Though Theorem~\ref{ggp SO} is subsumed in \cite{JZ} as a special case, we believe that our method, which is different from theirs, has its own merits because of its concreteness. 
We also note that because of the temperedness of $\pi$, the uniqueness of the pair $\left(D^\prime,\pi^\prime\right)$ in Theorem~\ref{ggp SO} (2) follows from the local Gan-Gross-Prasad conjecture for $(\mathrm{SO}(5), \mathrm{SO}(2))$ by Prasad and Takloo-Bighash~\cite[Theorem~2]{PT} (see also Waldspurger~\cite{Wal} in the general case) at finite places and by Luo~\cite{Luo} at archimedean places. We shall give another proof of this uniqueness by reducing it to a similar assertion in the unitary group case. \subsection{Refined Gross-Prasad conjecture} Let $(\pi, V_\pi)$ be an irreducible cuspidal tempered automorphic representation of $G_D\left(\mA\right)$ with trivial central character. For $\phi_1, \phi_2 \in V_{\pi}$, we define the Petersson inner product $\left(\phi_1,\phi_2\right)_{\pi}$ on $V_\pi$ by \[ (\phi_1, \phi_2)_{\pi} = \int_{Z_D\left(\mathbb A\right) G_D(F) \backslash G_D\left(\mA\right)} \phi_1(g) \overline{\phi_2(g)} \,dg \] where $dg$ denotes the Tamagawa measure. Then at each place $v$ of $F$, we take a $G_D\left(F_v\right)$-invariant hermitian inner product on $V_{\pi_v}$ so that we have a decomposition $\left(\,\, ,\,\,\right)_\pi=\prod_v\left(\,\,,\,\,\right)_{\pi_v}$. In the definition of the Bessel period~\eqref{e: def of bessel period}, we take $dr=dt\,du$ where $dt$ and $du$ are the Tamagawa measures on $T_\xi\left(\mathbb A\right)$ and $N_D\left(\mathbb A\right)$, respectively. We take and fix the local measures $du_v$ and $dt_v$ so that $du=\prod_v du_v$ and \begin{equation} \label{C_{S_D}} dt = C_\xi \prod_v dt_v \end{equation} where $C_\xi$ is a constant called the Haar measure constant in \cite{II}. Then the local Bessel period $\alpha^{\xi,\Lambda}_v:V_{\pi_v}\times V_{\pi_v}\to\mathbb C$ and the local hermitian inner product $\left(\,\,,\,\,\right)_{\pi_v}$ are defined as in Section~\ref{s:def local bessel}. Suppose that $D$ is not split. Then by Li~\cite{JSLi}, there exists a pair $\left(\xi^\prime, \Lambda^\prime\right)$ such that $\pi$ has the $\left(\xi^\prime,\Lambda^\prime, \psi\right)$-Bessel period. Here $\xi^\prime\in D^-\left(F\right)$ is such that $E^\prime:=F\left(\xi^\prime\right)$ is a quadratic extension of $F$ and $\Lambda^\prime$ is a character of $\mathbb A_{E^\prime}^\times\slash \mathbb A^\times {E^\prime}^\times$. Then by Proposition~\ref{exist gen prp}, which is a consequence of the proof of Theorem~\ref{ggp SO} (\ref{theorem1-1-(1)}), there exists an irreducible cuspidal automorphic representation $\pi^\circ$ of $G\left(\mathbb A\right)$ which is generic and locally $G^+$-equivalent to $\pi$. We take the functorial lift of $\pi^\circ$ to $\mathrm{GL}_4 \left(\mathbb A\right)$ by Cogdell, Kim, Piatetski-Shapiro and Shahidi~\cite{CKPSS}, which is of the form $\Pi_1 \boxplus \cdots \boxplus \Pi_{\ell_0}$ with $\Pi_i$ an irreducible cuspidal automorphic representation of $\mathrm{GL}_{m_i}\left(\mA\right)$ for each $i$. Then we define an integer $\ell\left(\pi\right)$ by $\ell\left(\pi\right)=\ell_0$. We note that $\pi^\circ$ may not be unique, but $\ell\left(\pi\right)$ does not depend on the choice of the pair $\left(\xi^\prime,\Lambda^\prime\right)$ by Proposition~\ref{exist gen prp} and Lemmas~\ref{compo number} and \ref{comp number not dep on S}, and thus it depends only on $\left(\pi, V_\pi\right)$. When $D$ is split, $\pi$ has the functorial lift to $\mathrm{GL}_4\left(\mathbb A\right)$ by Arthur~\cite{Ar} (see also Cai-Friedberg-Kaplan~\cite{CFK}) and we define $\ell\left(\pi\right)$ in a similar way.
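For orientation, we note that by definition $\ell\left(\pi\right)$ counts the cuspidal factors in the isobaric decomposition of the lift. Thus, for the generic parameters appearing in Theorem~\ref{ggp SO}~(\ref{theorem1-1-(2)}), the factor $2^{-\ell\left(\pi\right)}$ occurring in the explicit formula \eqref{e: main identity} below is given by
\[
2^{-\ell\left(\pi\right)}=
\begin{cases}
1\slash 2, & \text{if the lift of $\pi$ is cuspidal on $\mathrm{GL}_4\left(\mA\right)$},\\
1\slash 4, & \text{if the lift of $\pi$ is of the form $\Pi_1\boxplus\Pi_2$ with $\Pi_i$ cuspidal on $\mathrm{GL}_2\left(\mA\right)$},
\end{cases}
\]
the latter case including the Yoshida-type (endoscopic) lifts discussed later in this introduction.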
Our second main result is the refined Gross-Prasad conjecture formulated by Liu~\cite{Liu2}, i.e. the Ichino-Ikeda type explicit central value formula, in the case of $\left(\mathrm{SO}\left(5\right), \mathrm{SO}\left(2\right)\right)$. \begin{theorem} \label{ref ggp} Let $\left(\pi, V_\pi\right)$ be an irreducible cuspidal tempered automorphic representation of $G_D(\mA)$ with a trivial central character. Then for any non-zero decomposable cusp form $\phi=\otimes_v\,\phi_v\in V_\pi$, we have \begin{multline} \label{e: main identity} \frac{\left|B_{\xi, \Lambda,\psi}\left(\phi\right)\right|^2}{ ( \phi,\phi )_\pi} \\ =2^{-\ell(\pi)}\,C_{\xi} \cdot \left(\prod_{j=1}^2\zeta_F\left(2j\right)\right) \frac{L\left(\frac{1}{2},\pi \times \mathcal{AI} \left(\Lambda \right) \right) }{ L\left(1,\pi,\mathrm{Ad}\right)L\left(1,\chi_E\right)} \cdot\prod_v \frac{\alpha_v^\natural\left(\phi_v\right)}{ (\phi_v,\phi_v)_{\pi_v}}. \end{multline} Here $\zeta_F(s)$ denotes the complete zeta function of $F$ and $\alpha_v^\natural\left(\phi_v\right)$ is defined by \[ \alpha_v^\natural\left(\phi_v\right) = \frac{L\left(1,\pi_v,\mathrm{Ad}\right)L\left(1,\chi_{E,v}\right)}{L\left(1/2,\pi_v \times \Pi \left(\Lambda \right)_v \right) \prod_{j=1}^2\zeta_{F_v}\left(2j\right) } \cdot \alpha_{\Lambda_v, \psi_{\xi, v}}\left(\phi_v,\phi_v\right). \] We note that $\displaystyle{\frac{\alpha_v^\natural\left(\phi_v\right)}{\left(\phi_v,\phi_v\right)_{\pi_v}}=1}$ for almost all places $v$ of $F$ by \cite{Liu2}. \end{theorem} \begin{Remark} \label{rem Arthur + ishimoto} Under the assumption~\eqref{arthur classification}, we have $|\mathcal{S}(\phi_\pi)| =2^{\ell(\pi)}$, where $\phi_\pi$ denotes the Arthur parameter of $\pi$ and $\mathcal{S}\left(\phi_\pi\right)$ the centralizer of $\phi_\pi$ in the complex dual group $\hat{G}$. Hence \eqref{e: main identity} coincides with the conjectural formula in Liu~\cite[Conjecture~2.5 (3)]{Liu2}. Thus when $D$ is split, i.e. $G_D\simeq G$, our theorem proves Liu's conjecture since the assumption~\eqref{arthur classification} is indeed fulfilled. After submitting this paper, Ishimoto posted a preprint~\cite{Ishimoto} on arXiv, in which he gives the endoscopic classification of representations of non-quasi split orthogonal groups for generic Arthur parameters. Hence, our theorem proves \cite[Conjecture~2.5 (3)]{Liu2} completely in the case of $(\mathrm{SO}(5), \mathrm{SO}(2))$. \end{Remark} \begin{Remark} Let $\pi_{\rm gen}$ denote the irreducible cuspidal globally generic automorphic representation of $G\left(\mathbb A\right)$ which has the same $L$-parameter as $\pi$. When $\pi_v$ is unramified at any finite place $v$ of $F$, Chen and Ichino~\cite{CI} proved an explicit formula of the ratio $L\left(1,\pi,\mathrm{Ad}\right)\slash \left(\Phi_{\rm gen}, \Phi_{\rm gen} \right)$ for a suitably normalized cusp form $\Phi_{\rm gen}$ in the space of $\pi_{\rm gen}$. \end{Remark} \begin{Remark} In the unitary case, a remarkable progress has been made in the Gan-Gross-Prasad conjecture and its refinement for Bessel periods, by studying the Jacquet-Rallis relative trace formula. In the striking paper~\cite{BPLZZ} by Beuzart-Plessis, Liu, Zhang and Zhu, a proof in the co-dimension one case for irreducible cuspidal tempered automorphic representations of unitary groups such that their base change lifts are cuspidal was given by establishing an ingenious method to isolate the cuspidal spectrum. 
In yet another striking paper by Beuzart-Plessis, Chaudouard and Zydor~\cite{BPCZ}, a proof for all endoscopic cases in the co-dimension one setting was given by a precise study of the relative trace formula. Very recently, in a remarkable preprint by Beuzart-Plessis and Chaudouard~\cite{BPC}, the above results were extended to arbitrary co-dimension cases. Thus the Gan-Gross-Prasad conjecture and its refinement for Bessel periods on unitary groups are now proved in general. By contrast, the orthogonal case in general is still open. We note that, in the $\left(\mathrm{SO}\left(5\right),\mathrm{SO}\left(2\right)\right)$ case, the first author has formulated relative trace formulas to approach the formula~\eqref{e: main identity} and proved the fundamental lemmas in his joint work with Shalika~\cite{FS}, Martin~\cite{FuMa} and Martin-Shalika~\cite{FuMaS}. In order to deduce the $L$-value formula from these relative trace formulas, several issues such as smooth transfer of test functions must be overcome. In the above-mentioned co-dimension one unitary group case, reductions to Lie algebras played crucial roles in solving similar issues. However, Bessel periods in our case involve integration over unipotent subgroups, and it is not clear, at least to the first author, how to make the reduction to Lie algebras work. \end{Remark} \begin{Remark} In the co-dimension one orthogonal group case, the refined Gross-Prasad conjecture has been deduced from the Waldspurger formula~\cite{Wal} in the $\left(\mathrm{SO}\left(3\right), \mathrm{SO}\left(2\right)\right)$ case and from the Ichino formula~\cite{Ich2} in the $\left(\mathrm{SO}\left(4\right), \mathrm{SO}\left(3\right)\right)$ case, respectively. Gan and Ichino~\cite{GI0} studied the $\left(\mathrm{SO}\left(5\right), \mathrm{SO}\left(4\right)\right)$-case when the representation of $\mathrm{SO}\left(5\right)$ is a theta lift from $\mathrm{GSO}(4)$ by reduction to the $\left(\mathrm{SO}\left(4\right), \mathrm{SO}\left(3\right)\right)$ case. Liu~\cite{Liu2} proved Theorem~\ref{ref ggp} when $D$ is split and $\pi$ is an endoscopic lift, i.e. a Yoshida lift, by reducing it to the Waldspurger formula~\cite{Wal}. The case when $\pi$ is a non-endoscopic Yoshida lift was proved later by Corbett~\cite{Co} in a similar manner. \end{Remark} As a corollary of Theorem~\ref{ref ggp}, we prove the $\left(\mathrm{SO}(5), \mathrm{SO}(2)\right)$ case of the Gan-Gross-Prasad conjecture in the form stated in \cite[Conjecture~24.1]{GGP}. \begin{corollary} \label{ggp alpha ver} Let $(\pi, V_\pi)$ be an irreducible cuspidal tempered automorphic representation of $G_D\left(\mA\right)$ with a trivial central character. Then the following three conditions are equivalent. \begin{enumerate} \item\label{1-ggp alpha ver} The $\left(\xi,\Lambda, \psi \right)$-Bessel period does not vanish on $\pi$. \item\label{2-ggp alpha ver} $L \left(\frac{1}{2}, \pi \times \mathcal{AI}\left(\Lambda \right) \right) \ne 0$ and the local Bessel period $\alpha_{\Lambda_v, \psi_{\xi, v}}\not\equiv 0$ on $\pi_v$ at any place $v$ of $F$. \item $L \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0$ and $\mathrm{Hom}_{R_{\xi,v}}\left(\pi_v,\chi^{\xi,\Lambda}_v\right) \ne\left\{0\right\}$ at any place $v$ of $F$. \end{enumerate} \end{corollary} \begin{Remark} The equivalence between the conditions (\ref{1-ggp alpha ver}) and (\ref{2-ggp alpha ver}) is immediate from Theorem~\ref{ref ggp}.
The equivalence \begin{equation}\label{equiv wald} \alpha_{\Lambda_v, \psi_{\xi, v}}\not\equiv 0 \Longleftrightarrow \mathrm{Hom}_{R_{\xi,v}}\left(\pi_v,\chi^{\xi,\Lambda}_v\right) \ne\left\{0\right\} \end{equation} is proved by Waldspurger~\cite{Wal12} at any non-archimedean place $v$ and by Luo~\cite{Luo} recently at any archimedean place $v$, respectively. \end{Remark} \subsection{Method}\label{ss:method} In \cite{FM1} and \cite{FM2} we used the theta correspondence for the dual pair $\left(\mathrm{SO}\left(2n+1\right), \mathrm{Mp}_n\right)$. The main tool in \cite{FM1} was the pull-back formula by the first author~\cite{Fu} for the Whittaker period on $\mathrm{Mp}_n$, which is expressed by a certain integral involving the \emph{Special} Bessel period on $\mathrm{SO}\left(2n+1 \right)$. This forced us the restriction that the character $\Lambda$ on $\mathrm{SO}\left(2\right)$ is trivial. In \cite{FM2}, to prove the refined Gross-Prasad conjecture for $\left(\mathrm{SO}\left(2n+1\right),\mathrm{SO}\left(2\right)\right)$ when $\Lambda$ is trivial, the following additional restrictions were necessary: \begin{enumerate} \item\label{c2} The base field $F$ is totally real and at every archimedean place $v$ of $F$, the representation $\pi_v$ is a discrete series representation. \item\label{c3} The assumption \eqref{arthur classification}. \end{enumerate} Additional main tool needed in \cite{FM2} was the Ichino-Ikeda type formula for the Whittaker periods on $\mathrm{Mp}_n$ by Lapid and Mao~\cite{LM17}, which imposed on us the condition~(\ref{c2}). In fact, their proof was to reduce the global identity to certain local identities. They proved the local identities in general at non-archimedean places. On the other hand, at archimedean places, their proof was to note the equivalence between their local identities and the formal degree conjecture by Hiraga-Ichino-Ikeda~\cite{HII1, HII2} and then to prove the latter when $\pi$ is a discrete series representation. Our proof in \cite{FM2} was to reduce to the case when $\pi$ has the special Bessel period by the assumption~\eqref{arthur classification} and to combine these two main tools with the Siegel-Weil formula. It does not seem plausible that a straightforward generalization of the method of \cite{FM1} and \cite{FM2} would allow us to remove these restrictions. Thus we need to adopt a new strategy in this paper. Our main method here is again theta correspondence but we use it differently and in a more intricate way. First we consider the quaternionic dual pair $\left(G^+_D,\mathrm{GSU}_{3,D}\right)$ where $\mathrm{GSU}_{3,D}$ denotes the identity component of the similitude quaternion unitary group $\mathrm{GU}_{3,D}$ defined by \eqref{def of gu_3,D} and $G^+_D$ defined by \eqref{e: G^+_D}. Then we recall the accidental isomorphism \begin{equation}\label{e: accidental 3,D} \mathrm{PGSU}_{3,D}\simeq \mathrm{PGU}_{4,\varepsilon} \end{equation} when $D\simeq D_\varepsilon$ given by \eqref{d: quaternion} and $\mathrm{GU}_{4,\varepsilon}$ is the similitude unitary group defined by \eqref{d: unitary GD}. Hence we have \begin{equation}\label{e: accidental U_4} \mathrm{GU}_{4,\varepsilon}\simeq \begin{cases}\mathrm{GU}_{2,2},&\text{when $D$ is split, i.e. $\varepsilon\in\mathrm{N}_{E\slash F} \left(E^\times\right)$}; \\ \mathrm{GU}_{3,1},&\text{when $D$ is non-split, i.e. $\varepsilon\notin\mathrm{N}_{E\slash F} \left(E^\times\right)$ }. 
\end{cases} \end{equation} Thus our theta correspondence for $\left(G^+_D,\mathrm{GSU}_{3,D}\right)$ induces a correspondence for the pair $\left(\mathbb G_D,\mathrm{PGU}_{4,\varepsilon}\right)$. Then we note that the pull-back of a certain Bessel period on $\mathrm{PGU}_{4,\varepsilon}$ is an integral involving the $\left(\xi,\Lambda, \psi\right)$-Bessel period on $G_D$. Theorem~\ref{ggp SO} is reduced essentially to the Gan-Gross-Prasad conjecture for the Bessel periods on $\mathrm{GU}_{4,\varepsilon}$, which we proved in \cite{FM3} using the theta correspondence for the pair $\left(\mathrm{GU}_{4,\varepsilon},\mathrm{GU}_{2,2}\right)$. Similarly Theorem~\ref{ref ggp} is reduced to the refined Gan-Gross-Prasad conjecture for the Bessel periods on $\mathrm{GU}_{4,\varepsilon}$. For the reader's convenience, we present here an outline of the proof when the $\left(\xi,\Lambda,\psi\right)$-Bessel period does not vanish. Note that in the following paragraph the notation used is provisional and the argument is not rigorous, since our intention here is to present a rough sketch of the main idea. Let $\left(\pi, V_\pi\right)$ be an irreducible cuspidal tempered automorphic representation of $G_D(\mA)$ with a trivial central character. Suppose that the $\left(\xi,\Lambda,\psi\right)$-Bessel period, which we denote by $B$, does not vanish on $\pi$. Let $\theta\left(\pi\right)$ be the theta lift of $\pi$ to $\mathrm{GSU}_{3,D}$. When $G_D=G$ and the theta lift of $\pi$ to $\mathrm{GSO}_{3,1}$ is non-zero, $\theta\left(\pi\right)$ is not cuspidal but the explicit formula \eqref{e: main identity} has already been proved by Corbett~\cite{Co}. Thus suppose otherwise. Then $\theta\left(\pi\right)$ is a non-zero irreducible cuspidal tempered automorphic representation. The pull-back of a certain Bessel period on $\mathrm{GSU}_{3,D}$, which we denote by $\mathcal B$, is written as an integral involving $B$. As in our previous paper~\cite{FM2}, the explicit formula for $B$ is reduced to the one for $\mathcal B$, which we obtain in the following steps. \begin{enumerate} \item Via the isomorphism~\eqref{e: accidental 3,D}, regard $\theta\left(\pi\right)$ as an automorphic representation of $\mathrm{GU}_{4,\varepsilon}$ and then consider its theta lift $\theta_\Lambda\left(\theta\left( \pi\right)\right)$, which depends on $\Lambda$, to $\mathrm{GU}_{2,2}$. The temperedness of $\pi$ implies that $\theta_\Lambda\left(\theta\left( \pi\right)\right)$ is an irreducible cuspidal automorphic representation of $\mathrm{GU}_{2,2}$. Then the pull-back of a certain Whittaker period $\mathcal W$ on $\mathrm{GU}_{2,2}$ is written as an integral involving the Bessel period $\mathcal B$. Then in \cite{FM3}, it is shown that the explicit formula for $\mathcal B$ follows from the one for $\mathcal W$. Thus we are reduced to showing the explicit formula for $\mathcal W$. \item Via the isomorphism $\mathrm{PGU}_{2,2}\simeq\mathrm{PGSO}_{4,2}$, regard $\theta_\Lambda\left(\theta\left( \pi\right)\right)$ as an automorphic representation of $\mathrm{GSO}_{4,2}$. Let $\pi^\prime$ be the theta lift of $\theta_\Lambda\left(\theta\left( \pi\right)\right)$ to $G=\mathrm{GSp}_2$. Then it is shown that $\pi^\prime$ is a globally generic cuspidal automorphic representation of $G$ and indeed the pull-back of the Whittaker period $W$ on $G$ is expressed as an integral involving $\mathcal W$. Hence we are reduced to the explicit formula for $W$.
\item Since the theta lift of the globally generic cuspidal automorphic representation $\pi^\prime$ of $G$ to either $\mathrm{GSO}_{2,2}$ or $\mathrm{GSO}_{3,3}$ is non-zero and cuspidal, we are further reduced to the explicit formulas for the Whittaker periods on $\mathrm{PGSO}_{2,2}$ and $\mathrm{PGSO}_{3,3}$ by the pull-back computation. \item Recall the accidental isomorphisms $\mathrm{PGSO}_{2,2}\simeq \mathrm{PGL}_2\times \mathrm{PGL}_2$, $\mathrm{PGSO}_{3,3}\simeq\mathrm{PGL}_4$. Since the explicit formula for the Whittaker period on $\mathrm{PGL}_n$ is already proved by Lapid and Mao~\cite{LM}, we are done. \end{enumerate} \begin{Remark} Though we only consider the case when $\mathrm{SO}\left(2\right)$ is non-split in this paper, the split case is proved by a similar argument as follows. First we note that $D$ is necessarily split when $\mathrm{SO}\left(2\right)$ is split and hence $G_D\simeq G$. If the theta lift to $\mathrm{GSO}_{2,2}$ is non-zero, it is a Yoshida lift and Liu~\cite{Liu2} proved the explicit formula. Suppose otherwise. Then the theta lift to $\mathrm{GSO}_{3,3}$ is non-zero and cuspidal. The pull-back of a certain Bessel period on $\mathrm{GSO}_{3,3}$ is an integral involving the split Bessel period on $G$ (see Section~\ref{sp4 so42}). We recall the accidental isomorphism $\mathrm{PGSO}_{3,3} \simeq \mathrm{PGL}_4$. We consider the theta correspondence for the pair $\left(\mathrm{GL}_4,\mathrm{GL}_4\right)$ instead of $\left(\mathrm{GU}_{4,\varepsilon},\mathrm{GU}_{4,\varepsilon}\right)$ in the non-split case. Then the pull-back computation may be interpreted as expressing the pull-back of the Whittaker period on $\mathrm{GL}_4$ as an integral involving the Bessel period on $\mathrm{GSO}_{3,3}$, which is given in \cite{FM3}. Thus as in the non-split case, we are reduced to the Ichino-Ikeda type explicit formula for the Whittaker period on $\mathrm{GL}_4$. \end{Remark} Here is the statement of the theorem in the split case. \begin{theorem} \label{main thm split} Let $(\pi, V_\pi)$ be an irreducible cuspidal automorphic representation of $G(\mA)$ with trivial central character. Suppose that $D$ is split and the Arthur parameter of $\pi$ is generic. Let $\xi\in D^-\left(F\right)$ such that $F\left(\xi\right)\simeq F\oplus F$ and fix an $F$-isomorphism $T_\xi\simeq F^\times\times F^\times$. For a character $\Lambda$ of $\mathbb A^\times\slash F^\times$, we also denote by $\Lambda$ the character of $T_\xi\left(\mathbb A\right)$ defined by $\Lambda\left(a,b\right):=\Lambda\left(ab^{-1}\right)$. The following assertions hold. \begin{enumerate} \item The $\left(\xi, \Lambda,\psi\right)$-Bessel period does not vanish on $V_\pi$ if and only if $\pi$ is generic and $L\left(\frac{1}{2},\pi\times\Lambda\right)\ne 0$. Here we note that $L\left(\frac{1}{2},\pi\times\Lambda^{-1}\right)$ is the complex conjugate of $L\left(\frac{1}{2},\pi\times\Lambda\right)$ since $\pi$ is self-dual. \item Further assume that $\pi$ is tempered. 
Then for any non-zero decomposable cusp form $\phi=\otimes_v\,\phi_v\in V_\pi$, we have \begin{multline*} \frac{\left|B_{\xi, \Lambda,\psi}\left(\phi\right)\right|^2}{ ( \phi,\phi )_\pi} =2^{-\ell(\pi)}\,C_{\xi} \cdot \left(\prod_{j=1}^2\zeta_F\left(2j\right)\right) \\ \times \frac{L\left(\frac{1}{2},\pi \times \Lambda \right)L\left(\frac{1}{2},\pi \times \Lambda^{-1} \right) }{ L\left(1,\pi,\mathrm{Ad}\right)\zeta_F(1)} \cdot\prod_v \frac{\alpha_v^\natural\left(\phi_v\right)}{ (\phi_v,\phi_v)_{\pi_v}} \end{multline*} where $\zeta_F\left(1\right)$ stands for $\mathrm{Res}_{s=1}\, \zeta_F(s)$. \end{enumerate} \end{theorem} \subsection{Generalized B\"ocherer conjecture} Thanks to the meticulous local computation by Dickson, Pitale, Saha and Schmidt~\cite{DPSS}, Theorem~\ref{ref ggp} implies the generalized B\"ocherer conjecture. For brevity we only state the scalar valued full modular case here in the introduction. Indeed a more general version shall be proved in \ref{generalized boecherer statement} as Theorem~\ref{t: vector valued boecherer}. \begin{theorem} \label{Boecherer:scalar} Let $\varPhi$ be a holomorphic Siegel cusp form of degree two and weight $k$ with respect to $\mathrm{Sp}_2\left(\mathbb Z\right)$ which is a Hecke eigenform and $\pi\left(\varPhi\right)$ the associated automorphic representation of $\mathbb{G}\left(\mathbb A_\mQ\right)$. Let \begin{equation}\label{e: Fourier} \varPhi\left(Z\right)=\sum_{T>0}a\left(\varPhi, T\right) \exp\left[2\pi\sqrt{-1}\operatorname{tr}\left(TZ\right)\right], \,\, Z\in\mathfrak H_2, \end{equation} be the Fourier expansion of $\varPhi$ where $T$ runs over semi-integral positive definite two by two symmetric matrices and $\mathfrak H_2$ denotes the Siegel upper half space of degree two. Let $E$ be an imaginary quadratic extension of $\mathbb Q$. We denote by $-D_E$ its discriminant, $\mathrm{Cl}_E$ its ideal class group and $w\left(E\right)$ the number of distinct roots of unity in $E$. In \eqref{e: Fourier}, when $T^\prime={}^t\gamma T\gamma$ for some $\gamma\in\mathrm{SL}_2\left(\mathbb Z\right)$, we have $a\left(\varPhi, T^\prime\right)=a\left(\varPhi,T\right)$. By the Gauss composition law, we may naturally identify the $\mathrm{SL}_2\left(\mathbb Z\right)$-equivalence classes of binary quadratic forms of discriminant $-D_E$ with the elements of $\mathrm{Cl}_E$. Thus the notation $a(\varPhi, c)$ for $c \in \mathrm{Cl}_E$ makes sense. For a character $\Lambda$ of $\mathrm{Cl}_E$, we define $\mathcal{B}_\Lambda\left(\varPhi , E\right)$ by \[ \label{mathcal B Phi E} \mathcal{B}_\Lambda\left(\varPhi , E\right): = w\left(E\right)^{-1} \cdot \sum_{c \in \mathrm{Cl}_E} a\left(\varPhi, c\right) \Lambda^{-1} \left(c\right). \] Suppose that $\varPhi$ is not a Saito-Kurokawa lift. Then we have \begin{equation} \label{intro B conj} \frac{|\mathcal{B}_\Lambda(\varPhi , E)|^2}{\langle \varPhi, \varPhi \rangle} =2^{2k-4} \cdot D_E^{k-1} \cdot \frac{L\left(\frac{1}{2},\pi \left( \varPhi\right) \times \mathcal{AI} \left(\Lambda \right) \right) }{ L\left(1,\pi \left(\varPhi \right),\mathrm{Ad}\right)}. \end{equation} Here \[ \langle\varPhi,\varPhi\rangle=\int_{\mathrm{Sp}_2\left(\mathbb Z\right) \backslash \mathfrak H_2} \left|\varPhi\left(Z\right)\right|^2\det\left(Y\right)^{k-3}\,dX\, dY \quad\text{where $Z=X+\sqrt{-1}\,Y$.} \] \end{theorem} \begin{Remark} In Theorem~\ref{t: vector valued boecherer}, we prove \eqref{intro B conj} allowing $\varPhi$ to have a square-free level and to be vector-valued. 
Moreover, assuming the temperedness of $\pi\left(\varPhi\right)$, the weight $2$ case, which is of significant interest because of the modularity conjecture for abelian surfaces, is also included. The formula~\eqref{intro B conj} and its generalization~\eqref{e: vector valued boecherer} are expected to have a broad spectrum of interesting applications both arithmetic and analytic. Some of the examples are \cite{Blo}, \cite[Section~3]{DPSS}, \cite{Dummigan},\cite{HY}, \cite{Saha} and \cite{Waibel}. \end{Remark} \subsection{Organization of the paper} This paper is organized as follows. In Section~2, we introduce some more notation and define local and global Bessel periods. In Section~3, we carry out the pull-back computation of Bessel periods. In Section~4, we shall prove Theorem~\ref{ggp SO} using the results in Section~3. We also note some consequences of our proof of Theorem~\ref{ggp SO} (\ref{theorem1-1-(1)}), which will be used in the proof of Theorem~\ref{ref ggp} later. In Section~5, we recall the Rallis inner product formula for similitude groups. In Section~6, we will give an explicit formula for Bessel periods on $\mathrm{GU}_{4,\varepsilon}$ in certain cases as explained in our strategy for the proof of Theorem~\ref{ref ggp}. In Section~7, we complete our proof of Theorem~\ref{ref ggp}. In Section~8, we prove the generalized B\"ocherer conjecture, including the vector valued case. In Appendix~\ref{appendix A}, we will give an explicit formula of Whittaker periods for irreducible cuspidal tempered automorphic representations of $G$. In Appendix~\ref{s:e comp}, we compute the local Bessel periods explicitly for representation of $G\left(\mR\right)$ corresponding to vector valued holomorphic Siegel modular forms. This result is used in Section~8. In Appendix~\ref{appendix c}, we consider the meromorphic continuation of the $L$-function for $\mathrm{SO}\left(5\right) \times \mathrm{SO}\left(2\right)$. \subsection*{Acknowledgement} This paper was partly written while the second author stayed at National University of Singapore. He would like to thank the people at NUS for their warm hospitality. The authors would like to thank the anonymous referee for his/her careful reading of the earlier version of the manuscript and providing many helpful comments and suggestions. \section{Preliminaries} \subsection{Groups} \label{ss:groups} \subsubsection{Quaternion algebras}\label{ss: quaternion} Let $X(E : F)$ denote the set of $F$-isomorphism classes of central simple algebras over $F$ containing $E$. Then we recall that the map $\varepsilon \mapsto D_{\varepsilon}$ gives a bijection between $F^\times \slash \mathrm{N}_{E \slash F}(E^\times)$ and $X(E : F)$ (see \cite[Lemma~1.3]{FS}) where \begin{equation}\label{d: quaternion} D_\varepsilon := \left\{ \begin{pmatrix}a&\varepsilon b\\ b^\sigma&a^\sigma \end{pmatrix} : a, b \in E \right\} \quad\text{for $\varepsilon\in F^\times$}. \end{equation} Here we regard $E$ as a subalgebra of $D_\varepsilon$ by \[ E \ni a \mapsto \begin{pmatrix}a&0\\ 0&a^\sigma \end{pmatrix} \in D_\varepsilon. \] We also note that $D_\varepsilon\simeq \mathrm{Mat}_{2 \times 2}\left(F\right)$ when $\varepsilon\in \mathrm{N}_{E\slash F}\left(E^\times\right)$. The canonical involution $D_\varepsilon\ni x\mapsto \bar{x}\in D_\varepsilon$ is given by \[ \bar{x}= \begin{pmatrix}a^\sigma&-\varepsilon b\\ -b^\sigma&a\end{pmatrix} \quad\text{for $x=\begin{pmatrix}a&\varepsilon b\\ b^\sigma&a^\sigma\end{pmatrix}$}. \] We denote the reduced trace of $D$ by $\mathrm{tr}_D$. 
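For orientation, we record the elementary computation that, for $x=\begin{pmatrix}a&\varepsilon b\\ b^\sigma&a^\sigma\end{pmatrix}\in D_\varepsilon$ as in \eqref{d: quaternion},
\[
x+\bar{x}=\left(a+a^\sigma\right)1_2
\qquad\text{and}\qquad
x\,\bar{x}=\bar{x}\,x=\left(a a^\sigma-\varepsilon\, b b^\sigma\right)1_2 ,
\]
so that $\mathrm{tr}_D\left(x\right)=a+a^\sigma$ and the reduced norm of $x$ equals $\mathrm{N}_{E\slash F}\left(a\right)-\varepsilon\,\mathrm{N}_{E\slash F}\left(b\right)$. In particular the reduced norm form represents zero non-trivially if and only if $\varepsilon\in\mathrm{N}_{E\slash F}\left(E^\times\right)$, consistently with the fact recalled above that $D_\varepsilon\simeq \mathrm{Mat}_{2\times 2}\left(F\right)$ for such $\varepsilon$.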
\subsubsection{Orthogonal groups}\label{ss: orthogonal} For a non-negative integer $n$, a symmetric matrix $S_n\in \mathrm{Mat}_{(2n+2) \times (2n+2)}\left(F\right)$ is defined inductively by \begin{equation}\label{d: symmetric} S_0: = \begin{pmatrix}2&0\\ 0&-2d \end{pmatrix} \quad \text{and} \quad S_n: = \begin{pmatrix}0&0&1\\ 0&S_{n-1}&0\\ 1&0& 0\end{pmatrix}\quad \text{for $n\ge 1$}. \end{equation} We recall that $E=F\left(\eta\right)$ where $\eta^2=d$. Then we write the corresponding orthogonal group, the special orthogonal group and the similitude orthogonal group by \begin{equation}\label{d: orthogonal groups} \mathrm{O}\left(S_n\right)= \mathrm{O}_{n+2,n},\quad \mathrm{SO}\left(S_n\right)= \mathrm{SO}_{n+2,n} \quad \text{and}\quad \mathrm{GO}\left(S_n\right)=\mathrm{GO}_{n+2,n}, \end{equation} respectively. Let $\mathrm{GSO}_{n+2,n}$ denote the identity component of $\mathrm{GO}_{n+2,n}$. Thus \begin{equation}\label{d: GSO_n+2,n} \mathrm{GSO}_{n+2,n}\left(F\right) = \{ g \in \mathrm{GO}_{n+2,n}\left(F\right) : \det (g) = \lambda(g)^{n+1} \} \end{equation} where \begin{equation}\label{d: GO_n+2,n} \mathrm{GO}_{n+2,n}(F) = \left\{ g \in \mathrm{GL}_{2n+2}(F) : {}^{t}g\, S_{n} \,g = \lambda(g)S_n, \,\lambda(g) \in F^\times \right\}. \end{equation} For a positive integer $n$, we denote by $J_{2n}$ the $2n\times 2n$ symmetric matrix with ones on the non-principal diagonal and zeros elsewhere, i.e. \begin{equation}\label{d: J_m} J_2=\begin{pmatrix}0&1\\1&0\end{pmatrix} \quad\text{and}\quad J_{2\left(n+1\right)}= \begin{pmatrix}0&0&1\\0&J_{2n}&0\\1&0&0\end{pmatrix} \quad\text{for $n\ge 1$}. \end{equation} Then the similitude orthogonal group $\mathrm{GO}_{n,n}$ is defined by \begin{equation}\label{d: GO_n,n} \mathrm{GO}_{n,n}\left(F\right):= \left\{g \in\mathrm{GL}_{2n}\left(F\right): {}^{t}g\, J_{2n}\, g = \lambda(g) J_{2n}, \,\lambda\left(g\right)\in F^\times \right\} \end{equation} and we denote by $\mathrm{GSO}_{n,n}$ its identity component, which is given by \begin{equation}\label{d: GSO_n,n} \mathrm{GSO}_{n,n}\left(F\right) = \{ g \in \mathrm{GO}_{n,n}\left(F\right) : \det (g) = \lambda(g)^{n} \}. \end{equation} \subsubsection{Quaternionic unitary groups}\label{ss: quaternionic unitary groups} Let $D$ be a quaternion algebra over $F$ containing $E$. Recall that $G_D$ denotes the similitude quaternionic unitary group of degree $2$ defined by \eqref{e: G_D}. We define a similitude quaternionic unitary group $\mathrm{GU}_{3,D}$ of degree $3$ by \begin{equation}\label{def of gu_3,D} \mathrm{GU}_{3,D}(F):= \left\{g \in \mathrm{GL}_3(D) : {}^t \bar{g} \,{\bf J}_\eta\, g = \lambda(g) {\bf J}_\eta, \, \lambda\left(g\right)\in F^\times\right\} \end{equation} where we define a skew-hermitian matrix $ {\bf J}_\eta$ by \begin{equation}\label{d: j_eta} {\bf J}_\eta := \begin{pmatrix} 0&0&\eta\\0 &\eta&0\\ \eta&0&0\end{pmatrix}. \end{equation} Here $\bar{A}=\left(\bar{a}_{ij}\right)$ for $A=\left(a_{ij}\right)\in\mathrm{Mat}_{m \times n}\left(D\right)$. Let us denote by $\mathrm{GSU}_{3,D}$ the identity component of $\mathrm{GU}_{3,D}$. 
Then unlike the orthogonal case, as noted in \cite[p.21--22]{MVW}, we have \[ \mathrm{GSU}_{3,D}(F) = \mathrm{GU}_{3,D}(F) \] and \[ \text{$\mathrm{GSU}_{3,D}(F_v) = \mathrm{GU}_{3,D}(F_v)$ when $D \otimes_F F_v$ is not split.} \] Moreover when $D\otimes_F F_v$ is split at a place $v$ of $F$, we have \begin{equation}\label{e: GU3D cases} \mathrm{GU}_{3, D}(F_v) \simeq \begin{cases} \mathrm{GO}_{4,2}(F_v) &\text{if $E\otimes F_v$ is a quadratic extension of $F_v$}; \\ \mathrm{GO}_{3,3}(F_v)&\text{if $E\otimes F_v\simeq F_v\oplus F_v$.} \end{cases} \end{equation} We also define $\mathrm{GU}_{1,D}$ by \begin{equation}\label{d: gu_1,d} \mathrm{GU}_{1,D}(F):= \left\{\alpha\in D^\times :\bar{\alpha} \eta \alpha = \lambda\left(\alpha\right) \eta,\,\lambda\left(\alpha\right) \in F^\times \right\} \end{equation} and denote its identity component by $\mathrm{GSU}_{1,D}$. Then we note that \begin{align}\label{d: gsu1,d} \mathrm{GSU}_{1,D}\left(F\right)&=\left\{\alpha\in D^\times :\bar{\alpha} \eta \alpha = \mathrm{n}_D\left(\alpha\right) \eta \right\} \\ \notag &=\left\{x\in D^\times\mid x\eta=\eta x\right\}=T_\eta \end{align} where $T_\eta$ is defined by \eqref{e: T_xi} with $\xi=\eta$ and $\mathrm{n}_D$ denotes the reduced norm of $D$. \subsubsection{Unitary groups}\label{ss: unitary} Suppose that $D=D_\varepsilon$ defined by \eqref{d: quaternion}. Then we define $\mathrm{GU}_{4,\varepsilon}$ a similitude unitary group of degree $4$ by \begin{equation}\label{d: unitary GD} \mathrm{GU}_{4,\varepsilon}\left(F\right):= \left\{ g \in \mathrm{GL}_4(E) : {}^{t}g^\sigma \mathcal J_\varepsilon g = \lambda(g) \mathcal J_\varepsilon,\, \lambda\left(g\right)\in F^\times \right\} \end{equation} where we define a hermitian matrix $\mathcal J_\varepsilon$ by \[ \mathcal J_\varepsilon := \begin{pmatrix} 0&0&0&1\\ 0&-1&0&0\\ 0&0&\varepsilon&0\\ 1&0&0&0\end{pmatrix}. \] Here $A^\sigma=\left(a_{ij}^\sigma\right)$ for $A=\left(a_{ij}\right)\in\mathrm{Mat}_{m \times n}\left(E\right)$. Then we have \begin{equation}\label{e: gu(2,2) or gu(3,1)} \mathrm{GU}_{4,\varepsilon}\simeq \begin{cases}\mathrm{GU}_{2,2},&\text{when $D$ is split, i.e. $\varepsilon\in \mathrm{N}_{E\slash F}\left(E^\times\right)$}; \\ \mathrm{GU}_{3,1},&\text{when $D$ is non-split, i.e. $\varepsilon\notin \mathrm{N}_{E\slash F}\left(E^\times\right)$}. \end{cases} \end{equation} We also define $\mathrm{GU}_{2,\varepsilon}$ a similitude unitary group of degree $2$ by \begin{multline}\label{d: GU2D} \mathrm{GU}_{2,\varepsilon}(F) := \left\{ g \in \mathrm{GL}_{2}(E) : {}^{t}g^\sigma J_\varepsilon g =\lambda(g) J_\varepsilon, \,\lambda(g) \in F^\times \right\} \\ \text{where}\quad J_\varepsilon=\begin{pmatrix}-1&0\\0&\varepsilon \end{pmatrix}. \end{multline} \subsection{Accidental isomorphisms} \label{acc isom} We need to explicate the accidental isomorphisms of our concern, since we use them in a crucial way to transfer an automorphic period on one group to the one on the other group. The reader may consult, for example, Satake~\cite{Satake} and Tsukamoto~\cite{Tsukamoto} about the details of the material here. \subsubsection{$\mathrm{PGSU}_{3,D}\simeq \mathrm{PGU}_{4,\varepsilon}$} Suppose that $D=D_\varepsilon$. Then we may naturally realize $\mathrm{GSU}_{3,D}(F)$ as a subgroup of $\mathrm{GL}_6(E)$. 
We note that \[ \begin{pmatrix} 1&0&0&0&0&0\\ 0&-\varepsilon&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&-\varepsilon&0&0\\0 &0&0&0&1&0\\ 0&0&0&0&0&-\varepsilon \end{pmatrix} \,{}^{t}\bar{g}\, \begin{pmatrix} 1&0&0&0&0&0\\ 0&-\varepsilon&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&-\varepsilon&0&0\\0 &0&0&0&1&0\\ 0&0&0&0&0&-\varepsilon \end{pmatrix}^{-1}= {}^{t}g^\sigma \] and \[ \begin{pmatrix}0&1&0&0&0&0\\ -1&0&0&0&0&0\\ 0&0&0&1&0&0\\ 0&0&-1&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&-1& 0\end{pmatrix} \,{}^{t}\bar{g}\, \begin{pmatrix}0&1&0&0&0&0\\ -1&0&0&0&0&0\\ 0&0&0&1&0&0\\ 0&0&-1&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&-1& 0\end{pmatrix}^{-1} = {}^{t}g. \] Thus in this realization, we have \begin{multline}\label{d: gsu_3,D} \mathrm{GSU}_{3,D}(F)= \{ g \in \mathrm{GSO}_{3,3}(E) : {}^{t} g^\sigma\, \mathcal J_\varepsilon^\circ \,g =\lambda(g)\mathcal J_{\varepsilon}^\circ,\,\lambda\left(g\right)\in F^\times \} \\ \text{where}\quad \mathcal J_{\varepsilon}^\circ =- \begin{pmatrix} 0&0&0&0&1&0\\ 0&0&0&0&0&\varepsilon\\0&0&1&0&0&0 \\0&0&0&\varepsilon&0&0\\1&0&0&0&0&0\\0&\varepsilon&0&0&0&0\end{pmatrix}. \end{multline} Here we recall that \begin{equation}\label{iso: GSO(3,3)} \mathrm{GSO}_{3,3}(E) \simeq \mathrm{GL}_4(E) \times \mathrm{GL}_1(E) \slash \{ (z, z^{-2} ) :z \in E^\times \}. \end{equation} In fact the isomorphism \eqref{iso: GSO(3,3)} is realized as follows. Let us take the standard basis \[ b_1 ={}^{t}(1,0,0,0),\quad b_2 ={}^{t}(0,1,0,0), \quad b_3 ={}^{t}(0,0,1,0), \quad b_4 ={}^{t}(0,0,0,1), \] of $E^4$. Then we may consider $V:=\wedge^2 E^4$ as an orthogonal space over $E$ with a quadratic form $\left(\,\, ,\,\,\right)_V$ defined by \[ v_1\wedge v_2=(v_1 ,v_2)_V \cdot b_1 \wedge b_2 \wedge b_3 \wedge b_4 \] for $v_1, v_2 \in V$. As a basis of $V$ over $E$, we take $\{ \varepsilon_i: 1\le i\le 6 \}$ given by \[ \varepsilon_1 =b_1 \wedge b_2, \varepsilon_2 =b_1 \wedge b_3, \varepsilon_3 = b_1 \wedge b_4, \varepsilon_4 = b_2 \wedge b_3, \varepsilon_5 = b_4 \wedge b_2, \varepsilon_6 = b_3 \wedge b_4. \] Let the group $\mathrm{GL}_4(E) \times \mathrm{GL}_1(E)$ act on $V$ by $(g, a)(w_1 \wedge w_2) = a \cdot \left( gw_1 \wedge gw_2 \right)$ where $w_1,w_2\in E^4$. This action defines a homomorphism \begin{equation}\label{e: hom from GL4xGL1} \mathrm{GL}_4(E) \times \mathrm{GL}_1(E)\to \mathrm{GSO}_{3,3}\left(E\right) \end{equation} where we take $\left\{\varepsilon_i: 1\le i\le 6\right\}$ as a basis of $V$ and the homomorphism \eqref{e: hom from GL4xGL1} induces the isomorphism~\eqref{iso: GSO(3,3)}. By a direct computation we observe that $\left(-\mathcal J_\varepsilon, 1\right)$ is mapped to $\mathcal J_\varepsilon^\circ$ under \eqref{e: hom from GL4xGL1} and the restriction of the homomorphism~\eqref{e: hom from GL4xGL1} gives a homomorphism \begin{equation}\label{e: hom from GU4} \mathrm{GU}_{4,\varepsilon}\left(F\right)\to \mathrm{GSU}_{3,D}\left(F\right). \end{equation} Then it is easily seen that the isomorphism \begin{equation} \label{acc isom1} \Phi_D : \mathrm{PGU}_{4,\varepsilon}(F) \overset{\sim}{\to} \mathrm{PGSU}_{3,D}(F) \end{equation} is induced. \subsubsection{$\mathrm{PGU}_{2,2} \simeq\mathrm{PGSO}_{4,2}$} When $\varepsilon \in \mathrm{N}_{E \slash F}(E^\times)$, the quaternion algebra $D=D_\varepsilon$ is split and the isomorphism~\eqref{acc isom1} gives an isomorphism $\mathrm{PGU}_{2,2}\simeq \mathrm{PGSO}_{4,2}$. We recall the concrete realization of this isomorphism. 
First we define $\mathrm{GU}_{2,2}$ by \begin{multline*} \mathrm{GU}_{2,2}:= \left\{ g\in\mathrm{GL}_4\left(E\right): {}^tg^\sigma\, J_4\,g =\lambda\left(g\right)J_4, \,\lambda\left(g\right)\in F^\times\right\} \\ \text{where $J_4=\begin{pmatrix}0&0&0&1\\0&0&1&0\\ 0&1&0&0\\1&0&0&0\end{pmatrix}$} \end{multline*} as \eqref{d: J_m}. Let \begin{equation*} {\mathcal V}: = \left\{ B\left(\left(x_i\right)_{1\le i\le 6}\right):= \left(\begin{smallmatrix} 0&\eta x_1& x_3 + \eta x_4 &x_2\\ -\eta x_1 &0 &x_5& -x_3 +\eta x_4\\ -x_3 -\eta x_4 &-x_5& 0 &\eta^{-1} x_6\\ -x_2 &x_3 -\eta x_4 &-\eta^{-1} x_6 &0\\ \end{smallmatrix}\right) : x_i \in F \, \left(1\le i\le6\right) \right\}. \end{equation*} We define $\Psi : {\mathcal V} \rightarrow F$ by \[ \Psi \left(B\right): = {\rm Tr} \left(B\, \begin{pmatrix} 0&1_2\\ 1_2&0\end{pmatrix} \,{}^{t}B^\sigma\, \begin{pmatrix}0 &1_2\\ 1_2&0\end{pmatrix}\right). \] Then we have \[ \Psi\left(B\left(\left(x_i\right)_{1\le i\le 6}\right)\right)= -4\left\{x_1x_6+x_2x_5-\left(x_3^2-dx_4^2\right)\right\}. \] Let $\mathrm{GSU}_{2,2}$ denote the identity component of $\mathrm{GU}_{2,2}$, i.e. \[ \mathrm{GSU}_{2,2} = \{ g \in \mathrm{GU}_{2,2}: \det(g) = \lambda(g)^2 \}. \] We let $\mathrm{GSU}_{2,2}$ act on $\mathcal V$ by \[ \mathrm{GSU}_{2,2}\times \mathcal V\ni \left(g,B\right)\mapsto \left(wg w \right)B\left(w\,{}^tg w \right)\in\mathcal V \quad\text{where $w=\begin{pmatrix} 1&0&0&0\\ 0&1&0&0 \\ 0&0&0&1\\ 0&0&1& 0\end{pmatrix}$.} \] Then this action induces a homomorphism $\phi : \mathrm{GSU}_{2,2} \rightarrow \mathrm{GO}({\mathcal V})$. We note that \[ \lambda(\phi(g)) = \det\left(g\right)\quad \text{for $g\in \mathrm{GSU}_{2,2}$} \] and this implies that the image of $\phi$ is contained in $\mathrm{GSO}\left(\mathcal V\right)$. As a basis of ${\mathcal V}$, we may take \begin{align*} f_1&=\begin{pmatrix} 0&\eta&0&0\\ -\eta&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}, \quad &f_2&=\begin{pmatrix} 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ -1&0&0&0\\ \end{pmatrix}, \quad &f_3&=\begin{pmatrix} 0&0&1&0\\ 0&0&0&-1\\ -1&0&0&0\\ 0&1&0&0\\ \end{pmatrix}, \\ f_4&= \begin{pmatrix} 0&0&\eta &0\\ 0&0&0&\eta\\ -\eta &0&0&0\\ 0&-\eta &0&0\\ \end{pmatrix}, \quad &f_5&=\begin{pmatrix} 0&0&0&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&0\\ \end{pmatrix}, \quad &f_6&=\begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&0&0&\eta^{-1}\\ 0&0&-\eta^{-1}&0\\ \end{pmatrix}. \end{align*} With respect to this basis, we may regard $\phi$ as a homomorphism from $\mathrm{GSU}_{2,2}$ to $\mathrm{GO}_{4,2}$, where the group $\mathrm{GO}_{4,2}$ is given by \eqref{d: GO_n+2,n} for $n=2$. Let us consider $\mathrm{GSU}_{2,2}\rtimes E^\times$ where the action of $\alpha\in E^\times$ on $g\in\mathrm{GSU}_{2,2}$ is given by \[ \alpha \cdot g = \begin{pmatrix} \alpha&0&0&0\\0 &1&0&0\\0 &0&1&0\\ 0&0&0&\left(\alpha^{\sigma} \right)^{-1}\end{pmatrix} \,g \,\begin{pmatrix} \alpha&0&0&0\\0 &1&0&0\\0 &0&1&0\\ 0&0&0&\left(\alpha^{\sigma} \right)^{-1}\end{pmatrix}^{-1}. \] Then as in \cite[p.32--34]{Mo}, $\phi$ may be extended to $\mathrm{GSU}_{2,2}\rtimes E^\times$ and we have a homomorphism $\mathrm{GSU}_{2,2}\rtimes E^\times\to \mathrm{PGSO}_{4,2}$ which induces the isomorphism \begin{equation}\label{acc isom2} \Phi : \mathrm{PGU}_{2,2} \overset{\sim}{\to} \mathrm{PGSO}_{4,2}. \end{equation} \subsection{Bessel periods} Let us introduce Bessel periods on various groups. 
\subsubsection{Bessel periods on $G=\mathrm{GSp}_2$} \label{s:def bessel G} Though we already introduced Bessel periods on $G_D$ in general as \eqref{e: def of bessel period}, we would like to describe them concretely in the case of $G$ here for our explicit pull-back computations in the next section. Let $P$ be the Siegel parabolic subgroup of $G$ with the Levi decomposition $P = MN$ where \[ \label{d:N} M(F) = \left\{\begin{pmatrix} g&0\\ 0&\lambda \cdot {}^tg^{-1}\end{pmatrix} : \begin{aligned} &g \in \mathrm{GL}_2(F), \\ &\lambda \in F^\times \end{aligned} \right\},\,\, N(F) = \left\{ \begin{pmatrix} 1&X\\ 0&1\end{pmatrix} : X \in \mathrm{Sym}_2(F) \right\}. \] Here $\mathrm{Sym}_n(F)$ denotes the set of $n$ by $n$ symmetric matrices with entries in $F$ for a positive integer $n$. For $S \in \mathrm{Sym}_2(F)$, let us define a character $\psi_{S}$ of $N(\mA)$ by \[ \psi_{S} \begin{pmatrix} 1&X\\ 0&1\end{pmatrix} = \psi \left[\mathrm{tr}(SX)\right]. \] For $S\in\mathrm{Sym}_2\left(F\right)$ such that $\det S\ne 0$, let \[ \label{T_S} T_S := \left\{ g \in\mathrm{GL}_2 : {}^{t}gSg = \det(g)S \right\}. \] We identify $T_S$ with the subgroup of $G$ given by \[ \left\{ \begin{pmatrix}g&0\\ 0&\det (g) \cdot {}^{t}g^{-1} \end{pmatrix} : g \in T_S \right\}. \] \begin{Definition} Let us take $S\in\mathrm{Sym}_2\left(F\right)$ such that $T_S\left(F\right)$ is isomorphic to $E^\times$. Let $\pi$ be an irreducible cuspidal automorphic representation of $G\left(\mathbb A\right)$ whose central character is trivial and $V_\pi$ its space of automorphic forms. Fix an $F$-isomorphism $T_S\left(F\right)\simeq E^\times$. Let $\Lambda$ be a character of $\mathbb A_E^\times\slash E^\times$ such that $\Lambda\mid_{\mathbb A^\times}$ is trivial. We regard $\Lambda$ as a character of $T_S\left(\mathbb A\right) \slash \mathbb A^\times\, T_S\left(F\right)$. Then for $\varphi\in V_\pi$, we define $ B_{S,\Lambda,\psi}\left(\varphi\right)$, the $\left(S,\Lambda,\psi\right)$-Bessel period of $\varphi$ by \begin{equation} \label{Beesel def gsp} B_{S, \Lambda,\psi}(\varphi) = \int_{\mA^\times \,T_{S}(F) \backslash T_{S}(\mA)} \int_{N(F) \backslash N(\mA)} \varphi(uh) \Lambda^{-1}(h) \psi_{S}^{-1}(u) \, du\, dh. \end{equation} We say that $\pi$ has the $\left(S,\Lambda,\psi\right)$-Bessel period when $B_{S, \Lambda,\psi}\not\equiv 0$ on $V_\pi$. Then we also say that $\pi$ has the $\left(E,\Lambda\right)$-Bessel period as in Definition~\ref{def of E,Lambda-Bessel period}. \end{Definition} \subsubsection{Bessel periods on $\mathrm{GSU}_{3,D}$} Let us introduce Bessel periods on the group $\mathrm{GSU}_{3,D}$ defined in \ref{ss: quaternionic unitary groups}. Let $P_{3,D}$ be a maximal parabolic subgroup of $\mathrm{GSU}_{3,D}$ with the Levi decomposition $P_{3,D}=M_{3,D}N_{3,D}$ where \[ \label{d:N3,D} M_{3,D} = \left\{ \begin{pmatrix}g&0&0\\ 0&h&0\\ 0&0&g \end{pmatrix} : \begin{aligned} &g\in D^\times, \\ &h \in T_\eta, \\ & \mathrm{n}_D\left(g\right)=\mathrm{n}_D\left(h\right) \end{aligned} \right\}, \,\, N_{3,D} = \left\{ \begin{pmatrix} 1&A^\prime&B\\ 0&1&A\\0 &0&1 \end{pmatrix} \in \mathrm{GSU}_{3,D} \right\}. \] As for $T_\eta$, we recall \eqref{d: gsu1,d} and $T_\eta \simeq E^\times$. For $X \in D^\times$, we define a character $\psi_{X, D}$ of $N_{3, D}(\mA)$ by \[ \psi_{X, D} \begin{pmatrix} 1&A^\prime&B\\ 0&1&A\\0 &0&1 \end{pmatrix} = \psi \left[\mathrm{tr}_D(XA) \right]. 
\] Then the identity component of the stabilizer of $\psi_{X, D}$ in $M_{3,D}$ is \[ \label{M_X D} M_{X, D} = \left\{ \begin{pmatrix} h^{X}&0&0\\ 0&h&0\\0 &0&h^{X}\end{pmatrix} : h \in T_\eta \right\} \quad \text{where}\quad h^X = XhX^{-1}. \] We identify $M_{X, D}$ with $T_\eta$ by \begin{equation}\label{e: identify M_X with T_eta} M_{X, D}\ni \begin{pmatrix} h^{X}&0&0\\ 0&h&0\\ 0&0&h^{X}\end{pmatrix} \mapsto h\in T_\eta \end{equation} and we fix an $F$-isomorphism $T_\eta\simeq E^\times$. \begin{Definition}\label{d: bessel for GSU_3,D} Let $\sigma_D$ be an irreducible cuspidal automorphic representation of $\mathrm{GSU}_{3,D}\left(\mA \right)$ and $V_{\sigma_D}$ its space of automorphic forms. Let $\chi$ be a character of $\mathbb A_E^\times\slash E^\times$ and we regard $\chi$ as a character of $M_{X,D}\left(\mathbb A\right) \slash M_{X,D}\left(F\right) $. Suppose that $\chi|_{\mA^\times} = \omega_{\sigma_D}$, the central character of $\sigma_D$. Then for $\varphi\in V_{\sigma_D}$, we define $\mathcal{B}^D_{X, \chi, \psi} (\varphi)$, the $\left(X,\chi, \psi \right)$-Bessel period of $\varphi$ by \begin{multline}\label{Besse def gsud} \mathcal{B}^D_{X, \chi, \psi} (\varphi) =\int_{\mA^\times M_{X,D}(F) \backslash M_{X,D}(\mA)} \int_{N_{3,D}(F) \backslash N_{3,D}(\mA)} \varphi(uh) \\ \times \chi (h)^{-1} \psi_{X, D}(u)^{-1} \, du \, dh. \end{multline} \end{Definition} \subsubsection{Bessel periods on $\mathrm{GU}_{4,\varepsilon}$} In light of the accidental isomorphism \eqref{acc isom1}, Bessel periods on the group $\mathrm{GU}_{4,\varepsilon}$ are defined as follows. Let $P_{4,\varepsilon}$ be a maximal parabolic subgroup of $\mathrm{GU}_{4,\varepsilon}$ with the Levi decomposition $P_{4,\varepsilon}=M_{4,\varepsilon}N_{4,\varepsilon}$ where \begin{align*} M_{4,\varepsilon}(F) &= \left\{ \begin{pmatrix} a&0&0\\ 0&g&0\\ 0&0&\lambda(g) (a^\sigma)^{-1} \end{pmatrix} : a \in E^\times, g \in \mathrm{GU}_{2,\varepsilon}\left(F\right) \right\} , \\ N_{4,\varepsilon}(F) &= \left\{ \begin{pmatrix} 1&A&B\\ 0&1_2&A^\prime\\ 0&0&1\end{pmatrix} \in \mathrm{GU}_{4,\varepsilon}\left(F\right) \right\}. \end{align*} Let us take an anisotropic vector $e\in E^4$ of the form ${}^{t}(0, \ast, \ast, 0)$. Then we define a character $\chi_e$ of $N_{4,\varepsilon}(\mA)$ by \[ \chi_e \left(u\right) = \psi( (ue, b_1)_\varepsilon) \quad\text{where $\left(x,y\right)_\varepsilon={}^tx^\sigma J_{\varepsilon}y$}. \] Here we recall that $J_\varepsilon$ is as given in \eqref{d: GU2D} and $b_1={}^t\left(1,0,0,0\right)$. Let $D_e$ denote the subgroup of $M_{4,\varepsilon}$ given by \[ D_{e} := \left\{ \begin{pmatrix}1&0&0\\ 0&h&0\\ 0&0&1\end{pmatrix} : h \in \mathrm{U}_{2,\varepsilon}, \, h e = e \right\}. \] Then the group $D_e\left(\mA\right)$ stabilizes the character $\chi_e$ by conjugation. We note that \[ D_e(F) \simeq \mathrm{U}_1(F):= \{ a \in E^\times : \bar{a}a =1 \}. \] Hence for a character $\Lambda$ of $\mathbb A_E^\times$ which is trivial on $\mA^\times$, we may regard $\Lambda$ as a character of $D_e(\mA)$ by $d \mapsto \Lambda(\det d)$. Then we define a character $\chi_{e, \Lambda}$ of $R_e(\mA)$, where $R_e:=D_eN_{4,\varepsilon}$, by \begin{equation}\label{d of chi_e,Lambda} \chi_{e, \Lambda}(ts) := \Lambda(t) \chi_e(s) \quad \text{for} \quad t \in D_e(\mA), \, s \in N_{4,\varepsilon}(\mA).
\end{equation} \begin{Definition}\label{d: Bessel on G_4,varepsilon} For a cusp form $\varphi$ on $\mathrm{GU}_{4,\varepsilon}(\mA_F)$ with a trivial central character, we define $B_{e, \Lambda,\psi}(\varphi)$, the $(e, \Lambda,\psi)$-Bessel period of $\varphi$, by \begin{equation}\label{d bessel period on GU_4,varepsilon} B_{e, \Lambda,\psi}(\varphi) = \int_{D_e(F) \backslash D_e(\mA_F)} \int_{N_{4,\varepsilon}(F) \backslash N_{4,\varepsilon}(\mA_F)} \chi_{e, \Lambda}(ts)^{-1}\, \varphi(ts) \, ds \, dt. \end{equation} \end{Definition} \subsubsection{Bessel periods on $\mathrm{GSO}_{4,2}$ and $\mathrm{GSO}_{3,3}$} By combining the accidental isomorphisms \eqref{acc isom1} and \eqref{acc isom2} in the split case, we shall define Bessel periods on $\mathrm{GSO}_{4,2}$ and $\mathrm{GSO}_{3,3}$ as the following. Let $P_{4,2}$ denote a maximal parabolic subgroup of $\mathrm{GSO}_{4,2}$ with the Levi decomposition $P_{4,2}= M_{4,2}N_{4,2}$ where \[ \label{d:N4,2} M_{4,2} = \left\{ \begin{pmatrix} g&0&0\\0 &h&0\\0 &0& g^\ast\cdot \det h\end{pmatrix} : \begin{aligned} &g \in \mathrm{GL}_2, \\ &h \in \mathrm{GSO}_{2 ,0} \end{aligned} \right\}, \,\, N_{4,2}= \left\{ \begin{pmatrix}1_2&A^\prime&B\\ 0&1_2&A\\ 0&0&1_2 \end{pmatrix} \in \mathrm{GSO}_{4,2}\right\}. \] Here \[ g^\ast = \begin{pmatrix}0&1\\1&0\end{pmatrix} {}^{t}g^{-1} \begin{pmatrix}0&1\\1&0\end{pmatrix}\quad\text{ for}\quad g\in\mathrm{GL}_2. \] Then for $X\in\mathrm{Mat}_{2\times 2}\left(F\right)$, we define a character $\psi_X$ of $N_{4,2}\left(\mA\right)$ by \[ \psi_{X} \begin{pmatrix}1_2&A^\prime&B\\ 0&1_2&A\\0 &0&1_2 \end{pmatrix} = \psi \left[\mathrm{tr}(XA) \right]. \] Suppose that $\det X \ne 0$ and let \[ \label{M_X} M_{X} := \left\{ \begin{pmatrix} (\det h) \cdot (h^X)^\ast&0&0\\ 0&h&0\\ 0&0&h^X \end{pmatrix} : h \in \mathrm{GSO}_{2,0} \right\} \] where $h^X = XhX^{-1}$ . Then $M_X\left(\mA\right)$ stabilizes the character $\psi_X$ and $M_X$ is isomorphic to $\mathrm{GSO}_{2,0}$. We fix an isomorphism $\mathrm{GSO}_{2,0}(F)\simeq E^\times$ and we regard a character of $\mathbb A_E^\times$ as a character of $M_X\left(\mA\right)$. \begin{Definition}\label{def of Bessel on GS)_4,2} Let $\sigma$ be an irreducible cuspidal automorphic representation of $\mathrm{GSO}_{4,2}(\mA)$ with its space of automorphic forms $V_\sigma$ and the central character $\omega_\sigma$. For a character $\chi$ of $\mA_E^\times$ such that $\chi |_{\mA^\times} = \omega_{\sigma}$, we define $\mathcal B_{X,\chi, \psi}\left(\varphi\right)$, the $(X, \chi, \psi)$-Bessel period of $\varphi \in V_\sigma$ by \begin{equation}\label{e: Bessel GSO(4,2)} \mathcal{B}_{X, \chi, \psi}(\varphi) = \int_{N_{4,2}(F) \backslash N_{4,2}(\mA)} \int_{M_{X}(F) \mA^\times \backslash M_{X}(\mA)} \varphi(u h) \chi(h)^{-1} \psi_{X}(u)^{-1} \, du \, dh. \end{equation} \end{Definition} When $d \in (F^\times)^2$, we know that $\mathrm{GSO}(S_2) \simeq \mathrm{GSO}_{3,3}$. Hence, as above, for a cusp form $\varphi$ on $\mathrm{GSO}_{3,3}$ with central character $\omega$ and characters $\Lambda_1, \Lambda_2$ of $\mA^\times \slash F^\times$ such that $\Lambda_1 \Lambda_2=\omega$, we define $(X, \Lambda_1, \Lambda_2, \psi)$-Bessel period by \[ \mathcal{B}_{X, \Lambda, \psi}(\varphi) = \int_{N_{4,2}(F) \backslash N_{4,2}(\mA)} \int_{M_{X}(F) \mA^\times \backslash M_{X}(\mA)} \varphi(u h) \chi_{\Lambda_1, \Lambda_2}(h)^{-1} \psi_{X}(u)^{-1} \, du \, dh. 
\] Here, since $M_{4,2} \simeq \mathrm{GL}_2 \times \mathrm{GSO}_{1,1}$ and $\mathrm{GSO}_{1,1}(F) = \left\{ \left( \begin{smallmatrix} a&\\ &b\end{smallmatrix} \right) : a, b \in F^\times \right\}$, we define a character $\chi_{\Lambda_1, \Lambda_2}$ of $\mathrm{GSO}_{1,1}(\mA)$ by \[ \chi_{\Lambda_1, \Lambda_2} \begin{pmatrix} a&\\ &b\end{pmatrix} =\Lambda_1(a) \Lambda_2(b). \] When $\omega$ is trivial, we have $\Lambda_2 = \Lambda_1^{-1}$. In this case, we simply call the $(X, \Lambda_1, \Lambda_1^{-1}, \psi)$-Bessel period the $(X, \Lambda_1, \psi)$-Bessel period and write $\chi_{\Lambda_1, \Lambda_1^{-1}} = \Lambda_1$. \subsection{Local Bessel periods} \label{s:def local bessel} Let us introduce local counterparts to the global Bessel periods. Let $k$ be a local field of characteristic zero and $D$ a quaternion algebra over $k$. Since the local Bessel periods are deduced from the global ones in a uniform way, by abuse of notation, let a quintuple $\left(H, T, N,\chi,\psi_N\right)$ stand for one of \begin{align*} &\text{$\left(G_D, T_\xi, N_D, \Lambda,\psi_\xi\right)$ in \eqref{e: def of bessel period},} \\ &\text{$\left(\mathrm{GSp}_2, T_S, N, \Lambda,\psi_S\right)$ in \eqref{Beesel def gsp}, or,} \\ &\text{ $\left(\mathrm{GSU}_{3,D}, M_{X,D}, N_{3,D},\chi,\psi_{X,D}\right)$ in \eqref{Besse def gsud}.} \end{align*} Let $(\pi, V_\pi)$ be an irreducible tempered representation of $H=H\left(k\right)$ with trivial central character and $[\,,\,]$ an $H$-invariant hermitian pairing on $V_\pi$, the space of $\pi$. Let us denote by $V_\pi^\infty$ the space of smooth vectors in $V_\pi$. When $k$ is non-archimedean, clearly $V_\pi^\infty = V_\pi$. Let $\chi$ be a character of $T=T\left(k\right)$ which is trivial on $Z_H=Z_H\left(k\right)$, where $Z_H$ denotes the center of $H$. Suppose that $k$ is non-archimedean. Then for $\phi,\phi^\prime\in V_\pi$, we define the local Bessel period $\alpha_{\chi, \psi_N}^H(\phi, \phi^\prime) = \alpha_{\chi, \psi_N}(\phi, \phi^\prime) = \alpha\left(\phi,\phi^\prime\right)$ by \begin{equation}\label{e: local integral 1} \alpha\left(\phi,\phi^\prime\right) := \int_{T \slash Z_H }\int_{N}^{\mathrm{st}} [\pi \left(ut \right)\phi,\phi^\prime]\, \chi\left(t\right)^{-1} \psi_N(u)^{-1}\, du\,dt. \end{equation} Here the inner integral of \eqref{e: local integral 1} is the stable integral in the sense of Lapid and Mao~\cite[Definition~2.1, Remark~2.2]{LM}. Indeed it is shown by Liu~\cite[Proposition~3.1, Theorem~2.1]{Liu2} that for any $t\in T$ the inner integral stabilizes at a certain compact open subgroup of $N=N\left(k\right)$ and that the outer integral converges. We note that it is also shown in Waldspurger~\cite[Section~5.1, Lemme]{Wa2} that \eqref{e: local integral 1} is well-defined. We often simply write $\alpha(\phi) = \alpha(\phi, \phi)$. Now suppose that $k$ is archimedean. Then the local Bessel period is defined as a regularized integral whose regularization is achieved by the Fourier transform as in Liu~\cite[3.4]{Liu2}. Let us briefly recall the definition.
We define a subgroup $N_{-\infty}$ of $N=N\left(k\right)$ by: \begin{align*} N_{-\infty}:&= \left\{\begin{pmatrix}1&u\\0&1\end{pmatrix}\in N_D: \mathrm{tr}_D\left(\xi u\right)=0\right\} \quad\text{in the $G_D$-case}; \\ N_{-\infty}:&=\left\{\begin{pmatrix}1&Y\\0&1\end{pmatrix}\in N: \mathrm{tr}\left(SY\right)=0\right\} \quad\text{in the $\mathrm{GSp}_2$-case}; \\ N_{-\infty}:&=\left\{\begin{pmatrix}1&A^\prime&B\\0&1&A\\0&0&1\end{pmatrix}\in N_{3,D}: \mathrm{tr}_D\left(XA\right)=0\right\} \quad\text{in the $\mathrm{GSU}_{3,D}$-case}, \end{align*} respectively. Then it is shown in Liu~\cite[Corollary~3.13]{Liu2} that for $u \in N$, \[ \alpha_{\phi, \phi^\prime}(u):= \int_{T \slash Z_H} \int_{N_{-\infty}} [ \pi\left(us t \right)\phi,\phi^\prime] \,\chi(t)^{-1} \, ds \, dt \] converges absolutely for $\phi,\phi^\prime\in V_\pi^\infty$ and it gives a tempered distribution on $N \slash N_{-\infty}$. For an abelian Lie group $\mathcal{N}$, we denote by $\mathcal{D}(\mathcal{N})$ (resp. $\mathcal{S}(\mathcal{N})$) the space of tempered distributions (resp. Schwartz functions) on $\mathcal{N}$. Then we recall that the Fourier transform $\hat{} : \mathcal{D}(\mathcal{N}) \rightarrow \mathcal{D}(\mathcal{N})$ is defined by the formula \[ \left(\hat{\mathfrak{a}}, \phi\right) = \left(\mathfrak{a}, \hat{\phi}\right) \quad\text{for $\mathfrak a\in \mathcal D\left(\mathcal N\right)$ and $\phi\in \mathcal S\left(\mathcal N\right)$}, \] where $\left(\, ,\right)$ denotes the natural pairing $\mathcal{D}(\mathcal{N}) \times \mathcal{S}(\mathcal{N}) \rightarrow \mC$ and $\hat{\phi}$ is the Fourier transform of $\phi\in \mathcal S\left(\mathcal N\right)$. Then by Liu~\cite[Proposition~3.14]{Liu2}, the Fourier transform $\widehat{\alpha_{\phi, \phi^\prime}}$ is smooth on the regular locus $(\widehat{N \slash N_{-\infty}})^{\rm reg}$ of the Pontryagin dual $\widehat{N \slash N_{-\infty}}$ and we define the local Bessel period $\alpha\left(\phi,\phi^\prime\right)$ by \begin{equation}\label{local archimedean Bessel} \alpha_{\chi, \psi_N}^H(\phi, \phi^\prime) = \alpha_{\chi, \psi_N}(\phi, \phi^\prime) = \alpha\left(\phi,\phi^\prime\right) := \widehat{\alpha_{\phi, \phi^\prime}} \left( \psi_N \right). \end{equation} As in the non-archimedean case, we often simply write $\alpha(\phi) = \alpha(\phi, \phi)$. \section{Pull-back of Bessel periods} In this section, we establish the pull-back formulas of the global Bessel periods with respect to the dual pairs $\left(\mathrm{GSp}_2,\mathrm{GSO}_{4,2}\right)$, $\left(\mathrm{GSp}_2,\mathrm{GSO}_{3,3}\right)$ and $\left(G_D,\mathrm{GSU}_{3,D}\right)$. We recall that the first two cases may be regarded as the special case of the last one where $D$ is split, by the accidental isomorphisms explained in \ref{acc isom}. \subsection{$\left(\mathrm{GSp}_2,\mathrm{GSO}_{4,2}\right)$ and $\left(\mathrm{GSp}_2,\mathrm{GSO}_{3,3}\right)$ case} \subsubsection{Symplectic-orthogonal theta correspondence with similitudes} \label{def theta} Let $X$ (resp. $Y$) be a finite dimensional vector space over $F$ equipped with a non-degenerate alternating (resp. symmetric) bilinear form. Assume that $\dim_F Y$ is even. We denote their similitude groups by $\mathrm{GSp}(X)$ and $\mathrm{GO}(Y)$, and their isometry groups by $\mathrm{Sp}(X)$ and $\mathrm{O}(Y)$, respectively. We denote the identity components of $\mathrm{GO}(Y)$ and $\mathrm{O}(Y)$ by $\mathrm{GSO}(Y)$ and $\mathrm{SO}(Y)$, respectively. We let $\mathrm{GSp}(X)$ (resp. $\mathrm{GO}(Y)$) act on $X$ (resp. $Y$) from the right (resp. left).
The space $Z = X \otimes Y$ has a natural non-degenerate alternating form $\langle \,, \, \rangle$, and we have an embedding $\mathrm{Sp}(X) \times \mathrm{O}(Y) \rightarrow \mathrm{Sp}(Z)$ defined by \begin{equation} \label{SP times O to SP} (x \otimes y)(g, h) = x g \otimes h^{-1}y , \quad \text{for } x \in X, y \in Y, h \in \mathrm{O}(Y), g \in \mathrm{Sp}(X). \end{equation} Fix a polarization $Z = Z_{+} \oplus Z_{-}$. Let us denote by $(\omega_{\psi}, \mathcal{S}(Z_{+}(\mA)))$ the Schr\"{o}dinger model of the Weil representation of $\widetilde{\mathrm{Sp}}(Z)$ corresponding to this polarization with the Schwartz-Bruhat space $\mathcal{S}(Z_{+})$ on $Z_{+}$. We write a typical element of $\mathrm{Sp}(Z)$ as \[ \begin{pmatrix} A&B\\ C&D\\ \end{pmatrix} \quad\text{where} \quad \begin{cases} A \in {\rm Hom}(Z_{+} , Z_{+}),\,\, B \in {\rm Hom}(Z_{+} , Z_{-}), \\ C \in {\rm Hom}(Z_{-} , Z_{+}),\,\, D \in {\rm Hom}(Z_{-} , Z_{-}). \end{cases} \] Then the action of $\omega_{\psi}$ on $\phi\in \mathcal{S}(Z_+)$ is given by the following formulas: \begin{equation} \label{weil act 1} \omega_{\psi} \left( \begin{pmatrix} A&B\\ 0&^{t}A^{-1}\\ \end{pmatrix},\varepsilon \right)\phi(z_{+}) = \varepsilon \frac{\gamma _{\psi }(1)}{\gamma _{\psi }({\rm det}A)} |{\rm det}(A)|^{\frac{1}{2}} \psi \left(\frac{1}{2} \langle z_{+}A ,z_{+}B \rangle\right)\phi(z_{+}A) \end{equation} \begin{equation} \label{weil act 2} \omega_{\psi} \left( \begin{pmatrix} 0&I\\ -I&0\\ \end{pmatrix},\varepsilon \right)\phi(z_{+}) =\varepsilon (\gamma _{\psi }(1))^{-\dim Z_{+}} \int _{Z_{+}} \psi \left( \langle z^{\prime} , z \begin{pmatrix} 0&I\\ -I&0\\ \end{pmatrix} \rangle \right) \phi(z^{\prime}) \, d z^{\prime}, \end{equation} where $\gamma_{\psi}(t)$ is a certain eighth root of unity called the Weil factor. Moreover, since the embedding given by \eqref{SP times O to SP} splits in the metaplectic group $\mathrm{Mp}(Z)$, we obtain the Weil representation of $\mathrm{Sp}(X, \mA) \times \mathrm{O}(Y, \mA)$ by restriction. We also denote this representation by $\omega_\psi$. We have a natural homomorphism \[ i : \mathrm{GSp}(X) \times \mathrm{GO}(Y) \rightarrow \mathrm{GSp}(Z) \] given by the action \eqref{SP times O to SP}. Then we note that $\lambda(i(g,h)) = \lambda(g)\lambda(h)^{-1}$. Let \[ R:=\{(g,h) \in \mathrm{GSp}(X) \times \mathrm{GO}(Y) \, | \, \lambda(g) = \lambda(h) \} \supset \mathrm{Sp}(X) \times \mathrm{O}(Y). \] We may define an extension of the Weil representation of $\mathrm{Sp}(X, \mA) \times \mathrm{O}(Y, \mA)$ to $R(\mA)$ as follows. Let $X= X_+ \oplus X_-$ be a polarization of $X$ and use the polarization $Z_{\pm} = X_{\pm} \otimes Y$ of $Z$ to realize the Weil representation $\omega_\psi$. Then we note that \[ \omega_{\psi}(1, h)\phi(z) = \phi\left( i \left(h \right)^{-1} z \right) \quad \text{for $h\in\mathrm{O}\left(Y, \mA\right)$ and $\phi\in \mathcal{S}(Z_+(\mA))$}. \] Thus we define an action $L$ of $\mathrm{GO}\left(Y,\mA\right)$ on $\mathcal{S}(Z_+(\mA))$ by \[ L\left(h\right)\phi\left(z\right)=|\lambda(h)|^{-\frac{1}{8} \dim X \cdot \dim Y } \phi\left(i \left( h \right)^{-1}z\right). \] Then we may extend the Weil representation $\omega_\psi$ of $\mathrm{Sp}(X, \mA) \times \mathrm{O}(Y, \mA)$ to $R(\mA)$ by \[ \omega_{\psi}(g, h) \phi = \omega_\psi(g_1, 1) L\left(h\right)\phi \quad\text{for $\phi\in\mathcal{S}(Z_+(\mA))$ and $\left(g,h\right)\in R\left(\mA\right)$}, \] where \[ g_1 = g \begin{pmatrix}\lambda(g)^{-1}&0\\ 0&1 \end{pmatrix} \in \mathrm{Sp}(X,\mA).
\] In general, for any polarization $Z= Z_+^\prime \oplus Z_-^\prime$, there exists an $\mathrm{Sp}(X, \mA) \times \mathrm{O}(Y, \mA)$-isomorphism $p : \mathcal{S}(Z_+(\mA)) \to \mathcal{S}(Z_+^\prime(\mA))$ given by an integral transform (see Ichino-Prasanna~\cite[Lemma~3.3]{IP}). Let us denote the realization of the Weil representation of $\mathrm{Sp}(X, \mA) \times \mathrm{O}(Y, \mA)$ on $\mathcal{S}(Z_+^\prime(\mA))$ by $\omega_\psi^\prime$. Then we may extend $\omega_\psi^\prime$ to $R\left(\mA\right)$ by \[ \omega_\psi^\prime\left(g,h\right)=p\circ \omega_\psi\left(g,h\right)\circ p^{-1}\quad\text{for $(g, h) \in R(\mA)$}. \] For $\phi \in \mathcal{S}(Z_{+}(\mA))$, we define the theta kernel $\theta^\phi$ by \[ \theta_{\psi}^{\phi}(g , h) = \theta^{\phi}(g , h) := \sum _{z_{+} \in Z_{+}(F) } \omega_{\psi} \left(g,h\right) \phi(z_{+}) \quad \text{for $(g, h) \in R(\mA)$}. \] Let \begin{equation} \label{gl def gap+} \mathrm{GSp}(X, \mA)^{+}= \left\{ g \in \mathrm{GSp}(X, \mA) \mid \, \lambda(g) = \lambda(h) \text{ for some } h \in \mathrm{GO}(Y, \mA) \right\} \end{equation} and $\mathrm{GSp}(X, F)^{+} = \mathrm{GSp}(X, \mA)^{+} \cap \mathrm{GSp}(X, F)$. As in \cite[Section~5.1]{HK}, for a cusp form $f$ on $\mathrm{GSp}(X, \mA)^{+}$, we define its theta lift to $\mathrm{GO}(Y, \mA)$ by \[ \Theta_\psi^{X, Y} (f, \phi)(h) = \Theta(f, \phi)(h) := \int _{\mathrm{Sp}(X, F) \backslash \mathrm{Sp}(X, \mA)} \theta ^{\phi}(g_1g , h)f(g_1 g) \, dg_1 \] for $h \in \mathrm{GO}(Y, \mA)$, where $g \in \mathrm{GSp}(X, \mA)^+$ is chosen so that $\lambda(g) = \lambda(h)$. It defines an automorphic form on $\mathrm{GO}(Y, \mA)$. For a cuspidal automorphic representation $(\pi_+, V_{\pi_+})$ of $\mathrm{GSp}(X, \mA)^+$, we denote by $\Theta_\psi(\pi_+)$ the theta lift of $\pi_+$ to $\mathrm{GO}(Y, \mA)$. Namely \[ \Theta_\psi^{X, Y}(\pi_+) = \Theta_\psi(\pi_+): = \left\{ \Theta (f, \phi) : f \in V_{\pi_+}, \, \phi \in \mathcal{S}(Z_+(\mA)) \right\}. \] Furthermore, for an irreducible cuspidal automorphic representation $(\pi, V_\pi)$ of $\mathrm{GSp}(X, \mA)$, we define \[ \Theta_\psi(\pi): = \Theta_\psi(\pi|_{\mathrm{GSp}(X, \mA)^+}) \] where $\pi|_{\mathrm{GSp}(X, \mA)^+}$ denotes the automorphic representation of $\mathrm{GSp}(X, \mA)^+$ with its space of automorphic forms $\left\{ \varphi |_{\mathrm{GSp}(X, \mA)^+} : \varphi \in V_\pi \right\}$. As for the opposite direction, for a cusp form $f^\prime$ on $\mathrm{GO}(Y, \mA)$, we define its theta lift $\Theta (f^\prime, \phi)$ to $\mathrm{GSp}(X, \mA)^+$ by \[ \Theta (f^\prime, \phi)(g): = \int _{\mathrm{O}(Y, F) \backslash \mathrm{O}(Y, \mA)} \theta ^{\phi}(g , h_1h)f^\prime(h_1 h) \, dh_1 \quad\text{for $g\in\mathrm{GSp}\left(X,\mA\right)^+$}, \] where $h \in \mathrm{GO}(Y,\mA)$ is chosen so that $\lambda(g) = \lambda(h)$. For an irreducible cuspidal automorphic representation $(\sigma, V_\sigma)$ of $\mathrm{GO}(Y, \mA)$, we define the theta lift $\Theta_\psi(\sigma)$ of $\sigma$ to $\mathrm{GSp}(X, \mA)^+$ by \[ \Theta_\psi(\sigma):= \left\{\Theta (f^\prime, \phi): f^\prime\in V_\sigma,\, \phi \in \mathcal{S}(Z_+(\mA)) \right\}. \] Moreover we extend $\Theta (f^\prime, \phi)$ to an automorphic form on $\mathrm{GSp}(X, \mA)$ by the natural embedding \[ \mathrm{GSp}(X, F)^+ \backslash \mathrm{GSp}(X, \mA)^+ \rightarrow \mathrm{GSp}(X, F) \backslash \mathrm{GSp}(X, \mA) \] and extension by zero.
Then we define the theta lift $\Theta_\psi(\sigma)$ of $\sigma$ to $\mathrm{GSp}(X, \mA)$ as the $\mathrm{GSp}\left(X,\mA\right)$-representation generated by such $\Theta\left(f^\prime,\phi\right)$ for $f^\prime\in V_\sigma$ and $\phi\in \mathcal S\left(Z_+\left(\mA\right)\right)$. For certain $X$ and $Y$, the theta correspondence for the dual pair $(\mathrm{GSp}(X)^+, \mathrm{GO}(Y))$ gives a theta correspondence between $\mathrm{GSp}(X)^+$ and $\mathrm{GSO}(Y)$ by the restriction of representations of $\mathrm{GO}(Y)$ to $\mathrm{GSO}(Y)$. Indeed, when $\dim X=4$ and $\dim Y = 6$, we may consider the theta correspondence for the pair $(\mathrm{GSp}(X)^+, \mathrm{GSO}(Y))$. Gan and Takeda~\cite{GT10, GT0} study the case when $\mathrm{GSO}(Y) \simeq \mathrm{GSO}_{3,3}$ or $\mathrm{GSO}_{5,1}$, and the case when $\mathrm{GSO}(Y) \simeq \mathrm{GSO}_{4,2}$ is studied in \cite{Mo}. In these cases, for a cusp form $f$ on $\mathrm{GSp}(X, \mA)^{+}$, we denote by $\theta (f, \phi)$ the restriction of $\Theta (f, \phi)$ to $\mathrm{GSO}(Y, \mA)$. Moreover, for a cuspidal automorphic representation $(\pi_+, V_{\pi_+})$ of $\mathrm{GSp}(X, \mA)^+$, we define the theta lift $\theta_\psi\left(\pi_+\right)$ of $\pi_+$ to $\mathrm{GSO}(Y, \mA)$ by \[ \theta_\psi^{X, Y}(\pi_+) = \theta_\psi(\pi_+) := \left\{ \theta (f, \phi) : f \in V_{\pi_+}, \phi \in \mathcal{S}(Z_+(\mA)) \right\}. \] Similarly, for a cusp form $f^\prime$ on $\mathrm{GSO}(Y, \mA)$, we define its theta lift $\theta (f^\prime, \phi)$ to $\mathrm{GSp}(X, \mA)^+$ by \[ \theta (f^\prime, \phi)(g): = \int _{\mathrm{SO}(Y, F) \backslash \mathrm{SO}(Y, \mA)} \theta ^{\phi}(g , h_1h)f^\prime(h_1 h) \, dh_1 \quad\text{for $g\in\mathrm{GSp}\left(X,\mA\right)^+$}, \] where $h \in \mathrm{GSO}(Y,\mA)$ is chosen so that $\lambda(g) = \lambda(h)$. We extend it to an automorphic form on $\mathrm{GSp}(X, \mA)$ as above. For a cuspidal automorphic representation $(\sigma, V_\sigma)$ of $\mathrm{GSO}(Y, \mA)$, we define the theta lift $\theta_\psi(\sigma)$ of $\sigma$ to $\mathrm{GSp}(X, \mA)^+$ by \[ \theta_\psi(\sigma):= \left\{\theta (f^\prime, \phi): f^\prime\in V_\sigma,\, \phi \in \mathcal{S}(Z_+(\mA)) \right\}. \] \begin{Remark} \label{theta irr rem} Suppose that $\Theta_\psi(\pi_+)$ (resp. $\theta_\psi(\sigma)$) is non-zero and cuspidal where $(\pi_+, V_{\pi_+})$ (resp. $(\sigma, V_\sigma)$) is an irreducible cuspidal automorphic representation of $\mathrm{GSp}(X, \mA)^+$ (resp. $\mathrm{GO}(Y, \mA)$). Then Gan~\cite[Proposition~2.12]{Gan} has shown that the Howe duality, which was proved by Howe~\cite{Ho1} at archimedean places, by Waldspurger~\cite{Wa} at odd finite places and finally by Gan and Takeda~\cite{GT} at all finite places, implies that $\Theta_\psi(\pi_+)$ (resp. $\theta_\psi(\sigma)$) is irreducible and cuspidal. Moreover in the case of our concern, namely when $\dim_FX=4$ and $\dim_FY=6$, the irreducibility of $\Theta_\psi(\pi_+)$ implies that of $\theta_\psi(\pi_+)$ by the conservation relation due to Sun and Zhu~\cite{SZ}. \end{Remark} \subsubsection{Pull-back of the global Bessel periods for the dual pairs $\left(\mathrm{GSp}_2,\mathrm{GSO}_{4,2}\right)$ and $\left(\mathrm{GSp}_2,\mathrm{GSO}_{3,3}\right)$} \label{sp4 so42} Our goal here is to prove the pull-back formula~\eqref{f: first pull-back formula}. First we introduce the set-up. Let $X$ be the space of $4$-dimensional row vectors over $F$ equipped with the symplectic form \[ \langle w_1,w_2\rangle=w_1\begin{pmatrix}0&1_2\\-1_2&0\end{pmatrix} \,{}^tw_2.
\] Let us take the standard basis of $X$ and name the basis vectors as \begin{equation}\label{sb for X} x_1 = (1, 0, 0, 0), \quad x_2 = (0, 1, 0, 0), \quad x_{-1} = (0, 0, 1, 0), \quad x_{-2} = (0, 0, 0, 1). \end{equation} Then the matrix representation of $\mathrm{GSp}\left(X\right)$ with respect to the standard basis is $G=\mathrm{GSp}_2$ defined by \eqref{gsp}. We let $G$ act on $X$ from the right. Let $Y$ be the space of $6$-dimensional column vectors over $F$ equipped with the non-degenerate symmetric bilinear form \[ \left(v_1,v_2\right)={}^tv_1 S_2 v_2 \] where the symmetric matrix $S_2$ is given by \eqref{d: symmetric}. Let us take the standard basis of $Y$ and name the basis vectors as \begin{align*} y_{-2} &= {}^{t}(1, 0, 0, 0, 0, 0), \quad y_{-1} = {}^{t}(0, 1, 0, 0, 0, 0),\\ e_1&= {}^{t}(0, 0, 1, 0, 0, 0), \quad e_2 = {}^{t}(0, 0, 0, 1, 0, 0),\\ y_{1} &= {}^{t}(0, 0, 0, 0, 1, 0), \quad y_{2} = {}^{t}(0, 0, 0, 0, 0, 1). \end{align*} We note that $( y_i , y_j )=\delta_{i,-j}$, $( e_1 , e_1)=2$ and $( e_2 , e_2) =-2d$. Since $d \in F^\times \setminus (F^\times)^2$, with respect to the standard basis, the matrix representations of $\mathrm{GO}\left(Y\right)$ and $\mathrm{GSO}\left(Y\right)$ are $\mathrm{GO}_{4,2}$ defined by \eqref{d: GO_n+2,n} and $\mathrm{GSO}_{4,2}$ defined by \eqref{d: GSO_n+2,n}, respectively. In this section, we also study the theta correspondence for the dual pair $(\mathrm{GSp}(X), \mathrm{GSO}_{3,3})$, for which we may use the above matrix representation with $d \in (F^\times)^2$. Hence, in the remainder of this section, we study the theta correspondence for $(\mathrm{GSp}(X), \mathrm{GSO}(Y))$ for an arbitrary $d \in F^\times$. We shall denote $\mathrm{GSp}(X, \mA)^+$ by $G(\mA)^+$ and $\mathrm{GSp}(X, F)^+$ by $G(F)^+$. We note that when $d \in (F^\times)^2$, $\mathrm{GSp}(X)^+ = \mathrm{GSp}(X)$. Let $Z=X\otimes Y$. We take a polarization $Z=Z_+\oplus Z_-$ as follows. First we take $X=X_+\oplus X_-$ where \[ X_{+} = F \cdot x_1 + F \cdot x_2 \quad \text{and} \quad X_{-}= F \cdot x_{-1} + F \cdot x_{-2} \] as the polarization of $X$. Then we decompose $Y$ as $Y=Y_+\oplus Y_0\oplus Y_-$ where \[ Y_{+} =F \cdot y_1+ F \cdot y_2, \quad Y_0 = F \cdot e_1 + F \cdot e_2\quad \text{and $ Y_{-} =F \cdot y_{-1} + F \cdot y_{-2}$}. \] Then let \[ Z_{\pm} = \left(X \otimes Y_{\pm} \right)\oplus \left(X_{\pm} \otimes Y_0\right) \] where the signs on both sides are taken in the same order. To simplify the notation, we sometimes write $z_+\in Z_+$ as $z_+=\left(a_1,a_2;b_1,b_2\right)$ when \[ z_{+} = a_1 \otimes y_1 + a_2 \otimes y_2 + b_1 \otimes e_1 + b_2 \otimes e_2\in Z_+, \quad\text{where $ a_i \in X, \, b_i \in X_{+}\, \left(i=1,2\right)$}. \] Let us compute the pull-back of $(X, \chi, \psi)$-Bessel periods on $\mathrm{GSO}(Y)$ defined by \eqref{e: Bessel GSO(4,2)} with respect to the theta lift from $G$. \begin{proposition} \label{pullback Bessel gsp} Let $\left(\pi ,V_\pi \right)$ be an irreducible cuspidal automorphic representation of $G\left(\mA\right)$ whose central character is $\omega_\pi$ and $\chi$ a character of $\mathbb A_E^\times$ such that $\chi\mid_{\mathbb A^\times}=\omega_\pi^{-1}$. Let $X\in\mathrm{Mat}_{2\times 2}\left(F\right)$ be such that $\det X\ne 0$.
Then for $f \in V_\pi$ and $\phi \in \mathcal{S}(Z_+(\mA))$, we have \begin{equation}\label{f: first pull-back formula} \mathcal{B}_{X, \chi, \psi}(\theta(f: \phi)) = \int_{N(\mA) \backslash G^1(\mA)} B_{S_X, \chi^{-1},\psi}(\pi(g)f)\left( \omega_\psi(g, 1) \phi\right)(v_X) \, dg \end{equation} where $B_{S_X, \chi^{-1},\psi}$ is the $\left(S_X,\chi^{-1},\psi\right)$-Bessel period on $G$ defined by \eqref{Beesel def gsp}. Here, for $X= \begin{pmatrix}x_{11}&x_{12}\\x_{21}&x_{22} \end{pmatrix}$, we define a vector $v_X\in Z_+$ by \begin{equation}\label{e: def of v_X} v_X := \left(x_{-2}, x_{-1};\frac{x_{21}}{2}x_1+\frac{x_{11}}{2} x_2, -\frac{x_{22}}{2d}x_1-\frac{x_{12}}{2d} x_2\right) \end{equation} and a $2$ by $2$ symmetric matrix $S_X$ by \begin{equation}\label{e: def of S_X} S_X: = \frac{1}{4d}{}^{t}(J_2 \,{}^{t}X J_2)S_0 ( J_2\,{}^{t}X J_2). \end{equation} We regard $\chi$ as a character of $\mathrm{GSO}(S_X)(\mA)$ by \begin{equation}\label{chi as a character} \mathrm{GSO}(S_X) \ni k \mapsto \chi((J_2{}^{t}XJ_2) k (J_2{}^{t}XJ_2)^{-1})\in\mathbb C^\times. \end{equation} In particular, the $(S_X, \chi^{-1},\psi)$-Bessel period does not vanish on $V_\pi$ if and only if the $\left(X,\chi, \psi\right)$-Bessel period does not vanish on $\theta_\psi\left(\pi\right)$. \end{proposition} \begin{proof} We compute the $\left(X,\chi, \psi\right)$-Bessel period defined by \eqref{e: Bessel GSO(4,2)} in stages. We consider subgroups of $N_{4,2}$ given by: \begin{align} N_0(F) &= \left\{ u_0(x) := \begin{pmatrix} 1&-{}^{t}X_0 S_1& 0\\ 0&1_4&X_0\\ 0&0&1 \end{pmatrix} \mid X_0 = \begin{pmatrix} x\\0\\0\\0\end{pmatrix} \right\};\label{d: N_0} \\ N_1(F) &= \left\{ u_1(s_1, t_1) := \begin{pmatrix} 1&-{}^{t}X_1 S_1&-\frac{1}{2}{}^{t}X_1 S_1 X_1\\ 0&1_4&X_1\\ 0&0&1 \end{pmatrix} \mid X_1 = \begin{pmatrix} 0\\s_1\\t_1\\0 \end{pmatrix} \right\};\label{d: N_1} \\ N_2(F) &= \left\{ u_2(s_2, t_2) := \begin{pmatrix} 1&0&0&0&0\\ 0&1&-{}^{t}X_2 S_0&-\frac{1}{2}{}^{t}X_2 S_0 X_2&0\\ 0&0&1_2&X_2&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{pmatrix} \mid X_2 = \begin{pmatrix} s_2\\t_2\end{pmatrix} \right\}\label{d: N_2} \end{align} where $S_0$ and $S_1$ are given by \eqref{d: symmetric}. Then we have \[ N_0 \lhd N_0 N_1 \lhd N_0 N_1 N_2 =N_{4,2}. \] Thus we may write \begin{multline} \label{bessel u0 u1 u2} \mathcal{B}_{X, \chi, \psi}(\theta(f: \phi)) = \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{(F \backslash \mA_{F})^2} \int_{(F \backslash \mA_{F})^2} \int_{F \backslash \mA_{F}} \\ \theta(f, \phi)(u_0(x) u_1(s_1, t_1) u_2(s_2, t_2) h) \\ \times \psi(x_{21}s_1+x_{22}t_1+x_{11}s_2+x_{12}t_2)^{-1}\chi (h)^{-1} \, dx \,ds_1 \,dt_1 \,ds_2 \,dt_2 \,dh. \end{multline} For $h \in \mathrm{GSO}(Y, \mA)$, let us define \[ W_0(\theta(f: \phi))(h) := \int_{F \backslash \mA_F} \theta(f, \phi)(u_0(x)h) \, dx. \] From the definition of the theta lift, we have \begin{multline} \label{W_0 1} W_0(\theta(f, \phi))(h) \\ = \int_{F \backslash \mA_F} \int_{G^1(F) \backslash G^1(\mA_F)} \sum_{a_i \in X, b_i \in X_{+}} \left(\omega_\psi (g_1 \lambda_s(\lambda(h)), u_0(x) h) \phi\right)(a_1, a_2; b_1, b_2) \\ \times f(g_1 \lambda_s(\lambda(h)))\, dg_1 \, dx. \end{multline} Here, for $a \in \mA^\times$, we write \[ \lambda_s(a) = \begin{pmatrix} 1_2&0\\0 &a \cdot 1_2 \end{pmatrix}.
\] Since $Z_{-}\,(1, u_0(x))= Z_{-}$ and we have \[ z_{+}\,(1, u_0(x)) = z_{+} + (x \cdot a_1 \otimes y_{-2} -x \cdot a_2 \otimes y_{-1}), \] we observe that \begin{align} \label{global comp1} \left(\omega_\psi(1, u_0(x)) \phi\right)(z_+) &=\psi \left(\frac{1}{2} \langle z_+, x \cdot a_1 \otimes y_{-2} -x \cdot a_2 \otimes y_{-1} \rangle \right) \phi(z_+) \\ \notag &= \psi \left( -x \langle a_1, a_2 \rangle \right) \phi(z_+). \end{align} Thus in the summation of the right-hand side of \eqref{W_0 1}, only $a_i$ such that $\langle a_1, a_2 \rangle = 0$ contributes to the integral $W_0(\theta(f, \phi))$, and we obtain \begin{multline*} W_0(\theta(f, \phi))(h) = \int_{G^1(F) \backslash G^1(\mA_F)} \\ \sum_{\substack{a_i \in X, \langle a_1, a_2 \rangle =0,\\ b_i \in X_{+}}} \left(\omega_\psi(g_1 \lambda_s(\lambda(h)), h) \phi\right)(a_1, a_2; b_1, b_2) f(g_1 \lambda_s(\lambda(h)))\, dg_1. \end{multline*} Since the space spanned by $a_1$ and $a_2$ is isotropic, there exists $\gamma \in G^1(F)$ such that $a_1 \gamma^{-1}, a_2 \gamma^{-1} \in X_{-}$. Let us define an equivalence relation $\sim$ on $(X_{-})^{2}$ by \[ (a_1 , a_2) \sim (a_1^\prime , a_2^\prime) \underset{\text{def.}}{\Longleftrightarrow} \text{there exists $\gamma\in G^1\left(F\right)$ such that $a_i^\prime = a_i \gamma$ for $i=1,2$}. \] Let us denote by $\mathcal{X}_{-}$ the set of equivalence classes $\left(X_{-}\right)^2 \slash \sim$ and by $\overline{(a_1, a_2)}$ the equivalence class containing $\left(a_1,a_2\right)\in\left(X_{-}\right)^2$. Then we may write $W_0 (\theta (f, \phi)) (h)$ as \begin{multline*} \int_{G^1(F) \backslash G^1(\mA_F)} \\ \sum _{ \overline{(a_1 , a_2)} \in \mathcal{X}_{-}}\, \sum_{\gamma \in V(a_1 , a_2) \backslash G^1(F)}\, \sum_{b_i \in X_{+}} \left(\omega_\psi (g_1 \lambda_s(\lambda(h)) , h) \phi\right)(a_1 \gamma, a_2 \gamma; b_1, b_2) \\ \times f(g_1 \lambda_s(\lambda(h))) \, dg_1. \end{multline*} Here \[ V(a_1, a_2) = \{ g \in G^1(F) \mid \text{$a_ig =a_i$ for $i=1,2$}\}. \] \begin{lemma} For any $g \in G(\mA)^+$ and $h \in \mathrm{GSO}(Y, \mA)$ such that $\lambda(g) = \lambda(h)$, \[ \sum_{b_i \in X_{+}} \left(\omega_\psi (g, h) \phi\right)(a_1 \gamma, a_2 \gamma, b_1, b_2) =\sum_{b_i \in X_{+}} \left(\omega_\psi (\gamma g, h) \phi\right)(a_1 , a_2 , b_1, b_2). \] \end{lemma} \begin{proof} This is proved by an argument similar to the one for \cite[Lemma~2]{Fu}. \end{proof} Further, by an argument similar to the one for $W_0 (\theta (f, \phi)) (h)$, we shall prove the following lemma. \begin{lemma} \label{pull comp 2} For any $g \in G(\mA)^+$ and $h \in \mathrm{GSO}(Y, \mA)$ such that $\lambda(g) = \lambda (h)$, \begin{align*} &\int_{(F \backslash \mA_F)^{2}} \psi^{-1}(x_{21}s_1+x_{22}t_1) \left(\omega_\psi (g , u_{1}(s_1, t_1) h) \phi\right)(a_1 , a_2 , b_1, b_2) \, ds_1 \, dt_1\\ =& \left\{ \begin{array}{ll} \left(\omega_\psi (g , h)\phi\right)(a_1 , a_2 , b_1, b_2) & \text{if $ \langle a_2 , b_1 \rangle =-\frac{x_{21}}{2}$ and $\langle a_2, b_2 \rangle= \frac{x_{22}}{2d}$};\\ & \\ 0 & \text{otherwise}\\ \end{array} \right. \end{align*} and \begin{align*} &\int_{(F \backslash \mA_F)^{2}} \psi^{-1}(x_{11} s_2+x_{12}t_2) \left(\omega_\psi(g , u_{2}(s_2, t_2) h) \phi\right)(a_1 , a_2 , b_1, b_2) \, ds_2\, dt_2\\ =& \left\{ \begin{array}{ll} \left(\omega_\psi (g , h)\phi\right)(a_1 , a_2 , b_1, b_2) & \text{if $ \langle a_1 , b_1 \rangle =-\frac{x_{11}}{2}$ and $\langle a_1, b_2 \rangle =\frac{x_{12}}{2d}$};\\ & \\ 0 & \text{otherwise}.\\ \end{array} \right. 
\end{align*} \end{lemma} \begin{proof} Since $Z_-\, (1, u_1(s_1, t_1)) = Z_-$ and we have \begin{multline*} z_+\,(1, u_1(s_1, t_1)) = z_{+} +2s_1 (b_1 \otimes y_{-2}) - 2dt_1(b_2 \otimes y_{-2}) \\ + (-s_1^2+2dt_1^2)a_2 \otimes y_{-2}-s_1a_2 \otimes e_1-t_1 a_2 \otimes e_2, \end{multline*} we obtain \begin{align*} &\left(\omega_\psi(1, u_1(s_1, t_1)) \phi\right)(z_+) = \psi \left(\frac{1}{2} \left( 2s_1\langle a_2, b_1 \rangle -2dt_1 \langle a_2, b_2 \rangle \right) \right) \\ &\qquad\quad\times \psi \left(\frac{1}{2} \left( (-s_1^2+2dt_1^2) \langle a_2, a_2 \rangle -2s_1 \langle b_1, a_2 \rangle +2dt_1 \langle b_2, a_2 \rangle \right) \right) \phi(z_+) \\ =&\psi \left(2s_1\langle a_2, b_1 \rangle -2dt_1 \langle a_2, b_2 \rangle \right)\phi(z_+). \end{align*} Then the first assertion readily follows. Similarly, since $Z_- \,(1, u_2(s_2, t_2)) = Z_-$ and we have \[ z_+\,(1, u_2(s_2, t_2)) = z_+ + a_1 \otimes ((s_2^2-dt_2^2)y_{-1}-s_2 e_1-t_2e_2 ) + 2s_2 b_1 \otimes y_{-1} -2dt_2 b_2 \otimes y_{-1}, \] we obtain \begin{align*} &\omega(1, u_2(s_2, t_2)) \phi(z_+) = \psi \left(\frac{1}{2} \left( -2s_2 \langle b_1, a_1 \rangle +2dt_2 \langle b_2, a_1 \rangle \right) \right) \\ &\qquad\qquad\qquad\times \psi \left(\frac{1}{2} \left( 2s_2 \langle a_1, b_1 \rangle -2dt_2 \langle a_1, b_2 \rangle \right) \right)\phi(z_+) \\ = &\psi \left(2s_2 \langle a_1, b_1 \rangle -2dt_2 \langle a_1, b_2 \rangle \right) \phi(z_+) \end{align*} and the second assertion follows. \end{proof} Lemma~\ref{pull comp 2} implies that \begin{multline*} \mathcal{B}_{X, \chi, \psi}(\theta(f: \phi)) = \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{G^1(F) \backslash G^1(\mA_F)} \chi (h)^{-1} \\ \times \sum _{ \overline{(a_1 , a_2)} \in \mathcal{X}_{-}}\, \sum_{\gamma \in V(a_1 , a_2) \backslash G^1(F)}\, \sum_{\substack{ b_i \in X_{+}, \langle a_i, b_1 \rangle = \frac{x_{i1}}{2},\\ \langle a_i, b_2 \rangle =- \frac{x_{i2}}{2d} }} \\ \left(\omega_\psi (\gamma g_1 \lambda_s(\lambda(h)) , h) \phi\right)(a_1, a_2, b_1, b_2) \, f(g_1 \lambda_s(\lambda(h))) \, dg_1\,dh. \end{multline*} We note that $a_1$ and $a_2$ are linearly independent from the conditions on $a_i$ and $\det (X) \ne 0$. Since $a_i \in X_-$ and $\dim X_- =2$, we may take $\left(a_1,a_2\right)=\left(x_{-2},x_{-1}\right)$ as a representative. Then we should have \[ b_1 =\frac{x_{21}}{2}x_1+\frac{x_{11}}{2} x_2, \quad b_2= -\frac{x_{22}}{2d}x_1-\frac{x_{12}}{2d} x_2. \] Hence we get \begin{align} \label{MX to TS} & \mathcal{B}_{X, \chi, \psi}(\theta(f: \phi)) = \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{G^1(F) \backslash G^1(\mA_F)} \chi\left(h\right)^{-1} \\ \notag &\times \sum_{\gamma \in N(F) \backslash G^1(F)} \left(\omega_\psi (\gamma g_1 \lambda_s(\lambda(h)) , h) \phi\right)(v_X) \, f(g_1 \lambda_s(\lambda(h))) \, dg_1\,dh \\ =& \notag \int_{N(\mA) \backslash G^1(\mA_F)} \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{N(F) \backslash N(\mA)} \\ \notag &\qquad\qquad\chi(h)^{-1} \omega (v g_1 \lambda_s(\lambda(h)) , h) \phi(v_X) f(v g_1 \lambda_s(\lambda(h))) \, dv\,dg_1\,dh \end{align} where we put $v_X = (x_{-2}, x_{-1}; \frac{x_{21}}{2}x_1+\frac{x_{11}}{2} x_2, -\frac{x_{22}}{2d}x_1-\frac{x_{12}}{2d} x_2)$. 
For $u=\begin{pmatrix}1_2&A\\0&1_2\end{pmatrix}$ where $A= \begin{pmatrix}a&b\\ b&c \end{pmatrix} \in \mathrm{Sym}^2$, we have \begin{align*} &\left( x_{-2} \otimes y_1 + x_{-1} \otimes y_2+ \left(\frac{x_{21}}{2}x_1+\frac{x_{11}}{2} x_2 \right) \otimes e_1 + \left( -\frac{x_{22}}{2d}x_1-\frac{x_{12}}{2d} x_2 \right) \otimes e_2 \right) \left(u,1\right) \\ =& x_{-2} \otimes y_1 + x_{-1} \otimes y_2+ \left(\frac{x_{21}}{2}(x_1 +a x_{-1}+b x_{-2} )+\frac{x_{11}}{2} (x_2+bx_{-1} +c x_{-2}) \right) \otimes e_1 \\ &\qquad\qquad+ \left( -\frac{x_{22}}{2d}(x_1 +a x_{-1}+b x_{-2} )-\frac{x_{12}}{2d} (x_2+bx_{-1} +c x_{-2}) \right) \otimes e_2. \end{align*} Hence, when we put \begin{align*} S_X&=\frac{1}{4d}{}^{t}(J_2{}^{t}XJ_2)S_0 (J_2{}^{t}XJ_2) \\ & =\frac{1}{2d} \begin{pmatrix}x_{22}^2-dx_{21}^2&x_{22}x_{12}-dx_{21}x_{11} \\x_{22}x_{12}-dx_{21}x_{11} &x_{12}^2-dx_{11}^2 \end{pmatrix} \in \mathrm{Sym}^2(F), \end{align*} for $u\in N\left(\mA\right)$, we have \[ \left(\omega_\psi(ug\lambda_s(\lambda(h)), h) \phi\right) (v_X) = \psi_{S_X} \left( u \right)^{-1} \omega_\psi(g\lambda_s(\lambda(h)), h) \phi(v_X). \] Therefore, we get \begin{align*} &\int_{N(\mA) \backslash G^1(\mA_F)} \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{N(F) \backslash N(\mA)} \chi(h)^{-1} \\ &\times \left(\omega_\psi (g_1 \lambda_s(\lambda(h)) , h) \phi\right)(v_X) \, f(u g_1 \lambda_s(\lambda(h))) \psi_{S_X}\left(u\right)^{-1} \, du \, dh\, dg_1 \\ =& \int_{N(\mA) \backslash G^1(\mA_F)} \int_{\mA^\times M_{X}(F) \backslash M_X(\mA)} \int_{N(F) \backslash N(\mA)} \chi (h)^{-1} \\ &\times \omega_\psi (\lambda_s(\lambda(h)) g_1, h) \phi(v_X) \, |\lambda(h)|^3 f(u \lambda_s(\lambda(h)) g_1) \psi_{S_X}\left(u\right)^{-1} \, du \, dh\, dg_1. \end{align*} By a direct computation, we see that \[ \left(\omega_\psi (\lambda_s(\lambda(h)) g_1, h) \phi\right)(v_X) =|\lambda(h)|^{-3} \left(\omega_\psi (h_0 \lambda_s(\lambda(h)) g_1, 1) \phi\right)(v_X) \] when we write \[ h = \begin{pmatrix}(\det h) (h^X)^\ast &0&0\\ 0&h&0\\ 0&0&h^X\end{pmatrix}, \, h_0 = \begin{pmatrix} ({}^{t}XJ_2)^{-1} {}^{t}h ({}^{t}XJ_2)&0\\ 0&(J_2X) h^{-1} (J_2X)^{-1}\end{pmatrix}. \] For $g \in \mathrm{GSO}(S_0)$, we have ${}^{t}g = wgw$ and we may write \[ h_0 = \begin{pmatrix} (J_2{}^{t}XJ_2)^{-1} {}^{t}h (J_2{}^{t}XJ_2)&0\\ 0&{}^{t}\left( (J_2{}^{t}XJ_2)^{-1} {}^{t}h (J_2{}^{t}XJ_2)\right)^{-1} \end{pmatrix}. \] Since we have \[ \mathrm{GSO}(S_X) = (J_2{}^{t}XJ_2)^{-1} \mathrm{GSO}(S_0) (J_2{}^{t}XJ_2), \] we get \begin{align} \label{pull-back gsp complete} &\int_{N(\mA) \backslash G^1(\mA_F)} \int_{\mA^\times T_{S_X}(F) \backslash T_{S_X}(\mA)} \int_{N(F) \backslash N(\mA)} \chi(h) \\ \notag & \qquad\qquad\times \left(\omega_\psi (g_1, 1) \phi\right)(v_X) f(u h g_1) \psi_{S_X}\left(u\right)^{-1} \, du \, dh\, dg_1 \\ =& \notag \int_{N(\mA) \backslash G^1(\mA_F)} B_{S_X, \chi^{-1}}(\pi(g_1)f) \left(\omega_\psi (g_1, 1) \phi\right)(v_X) dg_1 \end{align} where we regard $\chi$ as a character of $\mathrm{GSO}(S_X)(\mA)$ by \eqref{chi as a character}. Finally the last statement concerning the equivalence of the non-vanishing conditions on the $\left(S_X,\chi^{-1},\psi\right)$-Bessel period and the $\left(X,\chi\right)$-Bessel period follows from the pull-back formula~\eqref{f: first pull-back formula} by an argument similar to the one in the proof of Proposition~ 2 in \cite{FM1}. 
\end{proof} \subsection{$\left(G_D, \mathrm{GSU}_{3,D}\right)$ case} \subsubsection{Theta correspondence for quaternionic dual pair with similitudes} \label{def theta D} Let $D$ be a quaternion division algebra over $F$. Let $X_D$ (resp. $Y_D$) be a right (resp. left) $D$-vector space of finite rank equipped with a non-degenerate hermitian bilinear form $(\,,\,)_{X_D}$ (resp. non-degenerate skew-hermitian bilinear form $\langle \, , \, \rangle_{Y_D}$). Hence $(\,,\,)_{X_D}$ and $\langle \, , \,\rangle_{Y_D}$ are $D$-valued $F$-bilinear form on $X_D$ and $Y_D$ satisfying: \begin{gather*} \overline{(x ,x^\prime)_{X_D}} = (x^\prime,x)_{X_D}, \quad (xa,x^\prime b)_{X_D} = \bar{a}(x,x^\prime)_{X_D} b, \\ \overline{\langle y,y^\prime \rangle_{Y_D}} = -\langle y^\prime,y\rangle_{Y_D}, \quad \langle ay,y^\prime b \rangle_{Y_D} = a \langle y,y^\prime \rangle_{Y_D} \bar{b}, \end{gather*} for $x, x^\prime \in X_D$, $y, y^\prime \in Y_D$ and $a,b\in D$. We denote the isometry group of $X_D$ and $Y_D$ by $\mathrm{U}(X_D)$ and $\mathrm{U}(Y_D)$, respectively. Then the space $Z_D = X_D \otimes_D Y_D$ is regarded as a symplectic space over $F$ with the non-degenerate alternating form $\langle \,, \, \rangle$ defined by \begin{equation}\label{d: quaternion dual pair} \langle x_1 \otimes y_1, x_2 \otimes y_2 \rangle = \mathrm{tr}_D ((x_1, x_2)_{X_D} \overline{\langle y_1, y_2 \rangle_{Y_D}}) \in F \end{equation} and we have a homomorphism $\mathrm{U}(X_D) \times \mathrm{U}(Y_D) \rightarrow \mathrm{Sp}(Z_D)$ defined by \begin{equation} \label{DSP times DO to DSP} (x \otimes y)(g, h) = x g \otimes h^{-1}y \quad \text{for $x \in X, y \in Y$, $h \in \mathrm{U}(Y_D)$ and $g \in \mathrm{U}(X_D)$.} \end{equation} As in the case when $D \simeq \mathrm{Mat}_{2 \times 2}$, this mapping splits in the metaplectic group $\mathrm{Mp}(Z_D)$. Hence we have the Weil representation $\omega_\psi$ of $\mathrm{U}(X_D, \mA) \times \mathrm{U}(Y_D, \mA)$ by restriction. From now on, we suppose that the rank of $X_D$ is $2k$ and $X_D$ is maximally split, in the sense that its maximal isotropic subspace has rank $k$. Let us denote by $\mathrm{GU}(X_D)$ (resp. $\mathrm{GU}(Y_D)$) the similitude unitary group of $X_D$ (resp. $Y_D$) with the similitude character $\lambda_D$ (resp. $\nu_D$). Also we write the identity component of $\mathrm{GU}(Y_D)$ by $\mathrm{GSU}(Y_D)$. Then the action \eqref{DSP times DO to DSP} extends to a homomorphism \[ i_D : \mathrm{GU}(X_D) \times \mathrm{GU}(Y_D) \rightarrow \mathrm{GSp}(Z_D) \] with the property $\lambda(i_D(g, h)) = \lambda_D(g) \nu_D(h)^{-1}$. Let \[ R_D:=\{(g,h) \in \mathrm{GU}(X_D) \times \mathrm{GU}(Y_D) \, | \, \lambda_D(g) = \nu_D(h) \} \supset \mathrm{U}(X_D) \times \mathrm{U}(Y_D). \] Since $X_D$ is maximally split, we have a Witt decomposition $X_D = X_D^+ \oplus X_D^-$ with maximal isotropic subspaces $X_{D}^{\pm}$. Then as in Section~\ref{SP times O to SP}, we may realize the Weil representation $\omega_\psi$ of $\mathrm{U}(X_D) \times \mathrm{U}(Y_D)$ on $\mathcal{S}((X_D^+ \otimes Y_D)(\mA))$. In this realization, for $h \in \mathrm{U}(Y_D)$ and $\phi \in \mathcal{S}((X_D^+ \otimes Y_D)(\mA))$, we have \[ \omega_{\psi}(1, h)\phi(z) = \phi(i_D\left(h\right)^{-1} z). 
\] Hence, as in Section~\ref{SP times O to SP}, we may extend $\omega_\psi$ to $R_D(\mA)$ by \[ \omega_{\psi}(g, h) \phi\left(z\right) = |\lambda(h)|^{ -2 \mathrm{rank}~X_D \cdot \mathrm{rank}~Y_D} \omega_\psi(g_1, 1) \phi \left(i_D\left(h\right)^{-1}z\right) \] for $\left(g,h\right)\in R_D\left(\mA\right)$, where \[ g_1 = g \begin{pmatrix}\lambda_D(g)^{-1}&0\\ 0&1 \end{pmatrix} \in \mathrm{U}(X_D). \] Then as in Section~\ref{SP times O to SP}, we may extend the Weil representation $\omega_\psi$ of $\mathrm{U}(X_D) \times \mathrm{U}(Y_D)$ on $\mathcal S\left( Z_D^+\left(\mathbb A_F\right)\right)$, where $Z_D = Z_D^+ \oplus Z_D^-$ is an arbitrary polarization, to $R_D\left(\mA\right)$, by using the $\mathrm{U}(X_D) \times \mathrm{U}(Y_D)$-isomorphism $p:\mathcal{S}((X_D^+ \otimes Y_D)(\mA))\to\mathcal S\left( Z_D^+\left(\mathbb A_F\right)\right)$. Thus for $\phi\in\mathcal S\left( Z_D^+\left(\mathbb A_F\right)\right)$, the theta kernel $\theta^\phi_\psi=\theta^\phi$ on $R_D(\mA)$ is defined by \[ \theta_\psi^{\phi}(g , h) =\theta ^{\phi}(g , h) = \sum _{z_{+} \in Z_D^{+}(F) } \, \omega_{\psi} (g , h) \phi(z_{+}) \quad \text{for $(g, h) \in R_D(\mA)$}. \] Let us define \[ \mathrm{GU}(X_D, \mA)^+ = \left\{ h \in \mathrm{GU}(X_D, \mA) : \lambda_D(h) \in \nu_D \left(\mathrm{GU}(Y_D, \mA) \right)\right\} \] and \[ \mathrm{GU}(X_D, F)^+ = \mathrm{GU}(X_D, \mA)^+ \cap \mathrm{GU}(X_D, F). \] We note that $\nu_D \left(\mathrm{GU}(Y_D, F_v) \right)$ contains $N_D(D(F_v)^\times)$ for any place $v$. Thus, if $v$ is non-archimedean or complex, we have $\mathrm{GU}(X_D, F_v)^+ = \mathrm{GU}(X_D, F_v)$, and if $v$ is real, $|\mathrm{GU}(X_D, F_v) \slash \mathrm{GU}(X_D, F_v)^+| \leq 2$. For a cusp form $f$ on $\mathrm{GU}(X_D, \mA)^+$, as in \ref{def theta}, we define the theta lift of $f$ to $\mathrm{GU}(Y_D, \mA)$ by \[ \Theta (f, \phi)(h) := \int _{\mathrm{U}(X_D, F) \backslash \mathrm{U}(X_D, \mA)} \theta ^{\phi}(g_1g , h)f(g_1 g) \, dg_1 \] where $g \in \mathrm{GU}(X_D, \mA)^+$ is chosen so that $\lambda_D(g) = \nu_D(h)$. It defines an automorphic form on $\mathrm{GU}(Y_D, \mA)$. When we regard $\Theta (f, \phi)(h)$ as an automorphic form on $\mathrm{GSU}(Y_D, \mA)$ by the restriction, we denote it as $\theta (f, \phi)(h)$. For an irreducible cuspidal automorphic representation $(\pi_+, V_{\pi_+})$ of $\mathrm{GU}(X_D, \mA)^+$, we denote by $\Theta_\psi(\pi_+)$ (resp. $\theta_\psi(\pi_+)$) the theta lift of $\pi_+$ to $\mathrm{GU}(Y_D, \mA)$ (resp. $\mathrm{GSU}(Y_D, \mA)$), namely \begin{align*} \Theta_\psi(\pi_+) &:= \left\{ \Theta (f, \phi) : f \in V_{\pi_+}, \phi \in \mathcal{S}(Z_D^+(\mA)) \right\}, \\ \theta_\psi(\pi_+) &:= \left\{ \theta (f, \phi) : f \in V_{\pi_+}, \phi \in \mathcal{S}(Z_D^+(\mA)) \right\}, \end{align*} respectively. Moreover, for an irreducible cuspidal automorphic representation $(\pi, V_\pi)$ of $\mathrm{GU}(X_D, \mA)$, we define the theta lift $\Theta_\psi(\pi)$ (resp. $\theta_\psi(\pi)$) of $\pi$ to $\mathrm{GU}(Y_D, \mA)$ (resp. $\mathrm{GSU}(Y_D, \mA)$) by $\Theta_\psi(\pi) := \Theta_\psi(\pi|_{\mathrm{GU}(X_D, \mA)^+})$ (resp. $\theta_\psi(\pi) := \theta_\psi(\pi|_{\mathrm{GU}(X_D, \mA)^+})$).
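We also note that, in the extension of $\omega_\psi$ to $R_D(\mA)$ above, the element $g_1$ indeed lies in $\mathrm{U}(X_D, \mA)$: reading the block matrix with respect to the Witt decomposition $X_D=X_D^+\oplus X_D^-$, the factor $\begin{pmatrix}\lambda_D(g)^{-1}&0\\ 0&1 \end{pmatrix}$ rescales the isotropic subspace $X_D^+$ by the central element $\lambda_D(g)^{-1}$ and hence has similitude factor $\lambda_D(g)^{-1}$, so that
\[
\lambda_D(g_1)=\lambda_D(g)\,\lambda_D(g)^{-1}=1.
\]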
As for the opposite direction, as in \ref{def theta}, for a cusp form $f^\prime$ on $\mathrm{GSU}(Y_D, \mA)$, we define the theta lift of $f^\prime$ to $\mathrm{GU}(X_D, \mA)^+$ by \[ \theta (f^\prime, \phi)(g) := \int _{\mathrm{SU}(Y_D, F) \backslash \mathrm{SU}(Y_D, \mA)} \theta ^{\phi}(g , h_1h)f^\prime(h_1 h) \, dh_1 \] where $h \in \mathrm{GSU}(Y_D, \mA)$ is chosen so that $\lambda_D(g) = \nu_D(h)$. For an irreducible cuspidal automorphic representation $(\sigma, V_\sigma)$ of $\mathrm{GSU}(Y_D, \mA)$, we denote by $\theta_\psi(\sigma)$ the theta lift of $\sigma$ to $\mathrm{GU}(X_D, \mA)^+$. Moreover, we extend $\theta (f^\prime, \phi)$ to an automorphic form on $\mathrm{GU}(X_D, \mA)$ by the natural embedding \[ \mathrm{GU}(X_D, F)^+ \backslash \mathrm{GU}(X_D, \mA)^+ \rightarrow \mathrm{GU}(X_D, F) \backslash \mathrm{GU}(X_D, \mA) \] and extension by zero. Then we define the theta lift $\Theta_\psi\left(\sigma\right)$ of $\sigma$ to $\mathrm{GU}(X_D, \mA)$ as the $\mathrm{GU}(X_D, \mA)$-representation generated by such $\theta\left(f^\prime,\phi\right)$ for $f^\prime\in V_\sigma$ and $\phi\in\mathcal S\left(Z_D^+\left(\mA\right)\right)$. \begin{Remark} \label{theta irr rem D} Suppose that $(\pi_+, V_{\pi_+})$ (resp. $(\sigma, V_\sigma)$) is an irreducible cuspidal automorphic representation of $\mathrm{GU}(X_D, \mA)^+$ (resp. $\mathrm{GSU}(Y_D, \mA)$). Suppose moreover that the theta lift $\Theta_\psi(\pi_+)$ (resp. $\theta_\psi(\sigma)$) is non-zero and cuspidal. Then by Gan~\cite[Proposition~2.12]{Gan}, $\Theta_\psi(\pi_+)$ (resp. $\theta_\psi(\sigma)$) is an irreducible cuspidal automorphic representation because of the Howe duality for quaternionic dual pairs proved by Gan and Sun~\cite{GSun} and Gan and Takeda~\cite{GT}. We shall study the case $\dim_D~X_D=2$ and $\dim_D~Y_D=3$. In this case, by the conservation relation proved by Sun and Zhu~\cite{SZ}, the irreducibility of $\Theta_\psi(\pi_+)$ implies that of $\theta_\psi(\pi_+)$. \end{Remark} \subsubsection{Pull-back of the global Bessel periods for the dual pair $\left(\mathrm{G}_D,\mathrm{GSU}_{3,D}\right)$} \label{ss: second pull-back} The set-up is as follows. Let $X_D$ be the space of $2$-dimensional row vectors over $D$ equipped with the hermitian form \[ \left(x,x^\prime\right)_{X_D}=\bar{x}\begin{pmatrix}0&1\\1&0\end{pmatrix} \,{}^tx^\prime. \] Let us take the standard basis of $X_D$ and name the basis vectors as \[ x_+=\left(1,0\right),\quad x_-=\left(0,1\right). \] Then $G_D$ defined by \eqref{e: G_D} is the matrix representation of the similitude unitary group $\mathrm{GU}\left(X_D\right)$ for $X_D$ with respect to the standard basis. Let $Y_D$ be the space of $3$-dimensional column vectors over $D$ equipped with the skew-hermitian form \[ \langle y, y^\prime \rangle_{Y_D} = {}^{t} y\, \begin{pmatrix}0&0&\eta\\ 0&\eta&0 \\ \eta&0&0 \end{pmatrix}\, \overline{y^\prime}. \] Let us take the standard basis of $Y_D$ and name the basis vectors as \[ y_-={}^t\left(1,0,0\right),\quad e={}^t\left(0,1,0\right),\quad y_+={}^t\left(0,0,1\right). \] Then $\mathrm{GSU}_{3,D}$ defined in \ref{ss: quaternionic unitary groups} is the matrix representation of the group $\mathrm{GSU}\left(Y_D\right)$ for $Y_D$ with respect to the standard basis. We take a polarization $Z_D=Z_{D,+}\oplus Z_{D,-}$ of $Z_D=X_D\otimes_D Y_D$ defined as follows. Let \[ X_{D,\pm}=x_{\pm}\cdot D \] where the double sign corresponds.
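Note that, directly from the Gram matrix above, the subspaces $X_{D,\pm}$ are isotropic and are in duality with each other: for $a, b \in D$ we have
\[
\left(x_{\pm}a, x_{\pm}b\right)_{X_D}=0, \qquad \left(x_{+}a, x_{-}b\right)_{X_D}=\bar{a}\,b .
\]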
We decompose $Y_D$ as $Y_D=Y_{D,+}\oplus Y_{D,0}\oplus Y_{D,-}$ where \[ Y_{D,+}=D\cdot y_+,\quad Y_{D,0}=D\cdot e, \quad Y_{D,-}=D\cdot y_-. \] Then let \begin{equation}\label{zD def} Z_{D,\pm}=\left(X_D\otimes Y_{D,\pm}\right) \oplus \left(X_{D,\pm}\otimes Y_{D,0}\right) \end{equation} where the double sign corresponds. To simplify the notation, we write $z_+ \in Z_{D,+}\left(\mA\right)$ as $z_+=\left(a,b\right)$ when \[ z_+=a\otimes y_+ + b\otimes e \quad\text{where $a\in X_D\left(\mA\right)$ and $b\in X_{D,+} \left(\mA\right)$} \] and $\phi\left(z_+\right)$ as $\phi\left(a,b\right)$ for $\phi\in \mathcal S\left(Z_{D,+}\left(\mA\right)\right)$. Let us compute the pull-back of the $\left(X,\chi, \psi\right)$-Bessel period on $\mathrm{GSU}_{3,D}$ defined by \eqref{Besse def gsud} with respect to the theta lift from $G_D$. \begin{proposition} \label{pullback Bessel gsp innerD} Let $\left(\pi_D, V_{\pi_D}\right)$ be an irreducible cuspidal automorphic representation of $G_D\left(\mA\right)$ whose central character is $\omega_\pi$ and $\chi$ a character of $\mathbb A_E^\times$ such that $\chi\mid_{\mathbb A^\times}=\omega_\pi^{-1}$. Let $X\in D^\times$. Then for $f \in V_{\pi_D}$ and $\phi \in \mathcal{S}(Z_{D, +}(\mA))$, we have \begin{equation}\label{2: 2nd pull-back formula} \mathcal{B}_{X, \chi, \psi}^D(\theta(f: \phi)) = \int_{N_{D}(\mA) \backslash G_D^1(\mA)} B_{\xi_X, \chi^{-1},\psi}(\pi(g)f)\left( \omega(g, 1) \phi\right)(v_{D, X}) \, dg \end{equation} where \begin{equation}\label{def of S_X} \xi_X:=X\eta \bar{X}\in D^-\left(F\right),\quad v_{D, X} := (x_-,\, x_+(-\overline{X}\eta^{-1}))\in Z_{D,+}, \end{equation} and $B_{\xi_X, \chi^{-1},\psi}$ denotes the $\left(\xi_X, \chi^{-1},\psi\right)$-Bessel period on $G_D$ defined by \eqref{e: def of bessel period}. In particular, the $\left(\xi_X, \chi^{-1},\psi\right)$-Bessel period does not vanish on $V_{\pi_D}$ if and only if the $\left(X,\chi, \psi\right)$-Bessel period does not vanish on $\theta_\psi\left(\pi_D\right)$. \end{proposition} \begin{proof} The proof of this proposition is similar to the one for Proposition~\ref{pullback Bessel gsp}. Let $N_{0,D}$ be a subgroup of $N_{3,D}$ given by \[ N_{0, D}(F) = \left\{ u_D(x):=\begin{pmatrix} 1&0&\eta x\\ 0&1&0\\ 0&0&1 \end{pmatrix} : x \in F\right\}. \] Then we note that $N_{0, D}$ is a normal subgroup of $N_{3,D}$ and $\psi_{X, D}$ is trivial on $N_{0, D}(\mA)$. Since \[ Z_{D,-}(\mA)(1, u_D(x)) = Z_{D,-}(\mA)\,\,\text{and}\,\, z_+(1, u_D(x)) = z_+ + a \otimes (-\eta x) y_-\,\,\text{for $x \in \mA$}, \] we have \[ \left(\omega(1, u_D(x)) \phi\right)(z_+) = \psi \left(- \frac{1}{2} \,\mathrm{tr}_D \left( \langle a, a \rangle \eta^2 x \right) \right) \phi(z_+). \] Thus by an argument similar to the one in the proof of Proposition~\ref{pullback Bessel gsp}, one may show that \begin{multline}\label{e: 2nd bessel step 1} \int_{N_{3,D}(F) \backslash N_{3,D}(\mA)} \theta(f; \phi)(hu) \psi^{-1}_{X, D}(u) \, du \\ = \int_{N_{3,D}(F) \backslash N_{3,D}(\mA)} \int_{G_D^1(F) \backslash G_D^1(\mA)} \,\, \sum _{ \overline{a} \in \mathcal{X}_{D,-}}\, \sum_{\gamma \in V_D(a) \backslash G_D^1(F)}\, \sum_{b \in X_{D,+}} \\ \left(\omega (\gamma g_1 \lambda_s^D(\nu(h)) , uh) \phi\right)(a, b) f(g_1 \lambda_s^D(\nu(h))) \, dg_1 \, du.
\end{multline} Here $\mathcal X_{D,-}$ is the set of equivalence classes $X_{D,-}\slash\sim$ where $a\sim a^\prime$ if and only if there exists a $\gamma\in G_D^1\left(F\right)$ such that $a^\prime=a\gamma$, $\bar{a}$ denotes the equivalence class of $\mathcal X_{D,-}$ containing $a\in X_{D,-}$, and $V_D\left(a\right)=\left\{\gamma\in G_D^1\left(F\right)\mid a\gamma=a \right\}$. Then we may rewrite \eqref{e: 2nd bessel step 1} as \begin{multline}\label{e: 2nd bessel step 2} \int_{N_{3,D}(F) \backslash N_{3,D}(\mA)} \theta(f; \phi)(hu) \psi^{-1}_{X, D}(u) \, du = \int_{N_{3,D}(F) \backslash N_{3,D}(\mA)} \int_{G_D^1(F) \backslash G_D^1(\mA)} \\ \sum_{\gamma \in N_D\left(F\right) \backslash G_D^1(F)}\, \sum_{b \in X_{D,+}} \left(\omega (\gamma g_1 \lambda_s^D(\nu(h)) , uh) \phi\right)(x_{-}, b) f(g_1 \lambda_s^D(\nu(h))) \, dg_1 \, du. \end{multline} Since, for $u = \begin{pmatrix} 1&-\eta^{-1} \bar{A} \eta&B\\ 0&1&A\\ 0&0&1 \end{pmatrix} \in N_{3,D}(\mA)$, we have $Z_{D,-}(\mA)(1, u) = Z_{D,-}(\mA)$ and, for some $B^\prime \in D\left(\mA\right)$, \begin{align*} z_+ (1, u) &= x_- \otimes (B^\prime y_- - A e +y_+) +b \otimes (\eta^{-1} \bar{A} \eta y_- + e) \\ &= z_+ + x_- \otimes (B^\prime y_- - A e) +b \otimes (\eta^{-1} \bar{A} \eta y_-) , \end{align*} we obtain \[ \left(\omega(1, u) \phi\right)(z_+) = \psi \left( \mathrm{tr}_D \left( \langle b, x_- \rangle \overline{(e, -Ae)} \right) \right) \phi\left(z_+\right) = \psi \left(\mathrm{tr}_D \left( \eta \langle b, x_- \rangle A \right) \right) \phi\left(z_+\right). \] Hence in \eqref{e: 2nd bessel step 2}, only $b\in X_{D,+}$ satisfying $\eta \langle b, x_- \rangle= X$, i.e. $b=x_+(-\overline{X}\eta^{-1} )$ contributes. Thus our integral is equal to \begin{align*} &\int_{N_{D}(F) \backslash G_D^1(\mA)} \left(\omega (g_1 \lambda_s^D(\nu(h)) , uh) \phi\right)(v_{D, X}) f(g_1 \lambda_s^D(\nu(h))) \, dg_1 \, du \\ =& \int_{N_{D}(\mA) \backslash G_D^1(\mA)} \int_{N_{D}(F) \backslash N_{D}(\mA)} \omega (u g_1 \lambda_s^D(\nu(h)) , uh) \phi(v_{D, X}) \\ &\qquad\qquad\times f( u g_1 \lambda_s^D(\nu(h))) \, dg_1 \, du \end{align*} where $v_{D, X} =( x_-, x_+(-\overline{X}\eta^{-1} ))$. Further for $u=\begin{pmatrix}1&a\\0&1\end{pmatrix} \in N_D\left(\mA\right)$, we have \[ \left( \omega \left( ug, h \right) \phi \right)(v_{D, X}) = \psi_{\xi_{X}}\left( u\right)^{-1}\left(\omega\left(g,h\right)\phi\right) \left(v_{D,X}\right) \] where $\xi_{X} = X \eta \overline{X}$ is as in \eqref{def of S_X}. Thus our integral becomes \begin{multline*} \int_{N_{D}(\mA) \backslash G_D^1(\mA)} \int_{N_{D}(F) \backslash N_{D}(\mA)} \psi_{\xi_{X}} (u)^{-1} \omega (g_1 \lambda_s^D(\nu(h)) , h) \phi(v_{D, X}) \\ \times f( u g_1 \lambda_s^D(\nu(h))) \, du \, dg_1 . \end{multline*} As for the integration over $\mathbb A^\times M_{X, D}\left(F\right) \backslash M_{X, D}\left(\mathbb A\right)$ in \eqref{Besse def gsud}, by a direct computation, we see that \[ \omega (\lambda_s^D(\nu(h)) g_1, h) \phi(v_{D, X}) =|\nu(h)|^{-3} \omega (h_0 \lambda_s^D(\nu(h)) g_1, 1) \phi(v_{D, X}) \] where \[ h = \begin{pmatrix} n_D(h) \cdot (h^X)^\ast &0&0\\ 0&h&0\\ 0&0&h^X\end{pmatrix} \quad \text{and} \quad h_0 = \begin{pmatrix} \overline{h^X}&0\\ 0&(h^X)^{-1}\end{pmatrix}. \] Therefore, as in the previous case, we obtain \[ \mathcal{B}_{X, \chi, \psi}^D(\theta(f: \phi))= \int_{N_{D}(\mA) \backslash G_D^1(\mA)} B_{\xi_X, \chi^{-1},\psi}(\pi(g_1)f) \left( \omega (g_1, 1) \phi\right)(v_{D, X}) dg_1. \] The equivalence of the non-vanishing conditions follows from the pull-back formula~\eqref{2: 2nd pull-back formula} as in the case of Proposition~\ref{pullback Bessel gsp}.
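We also note that $\xi_X$ is indeed skew as asserted in \eqref{def of S_X}: since $\langle e, e\rangle_{Y_D}=\eta$ and the form $\langle\,,\,\rangle_{Y_D}$ is skew-hermitian, we have $\overline{\eta}=-\eta$, and hence
\[
\overline{\xi_X}=\overline{X\eta\overline{X}}=X\,\overline{\eta}\,\overline{X}=-\xi_X,
\]
that is, $\xi_X\in D^-(F)$.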
\end{proof} \subsection{Theta correspondence for similitude unitary groups} In our proofs of Theorems~\ref{ggp SO} and \ref{ref ggp}, we shall use the theta correspondence for similitude unitary groups besides the theta correspondences for the dual pairs $(\mathrm{GSp}_2, \mathrm{GSO}_{4,2})$ and $(G_D, \mathrm{GSU}_{3,D})$. Let us recall the definition of the theta lifts in this case. Let $(X, (\,,\,)_X)$ be an $m$-dimensional hermitian space over $E$, and let $(Y, (\,, \,)_Y)$ be an $n$-dimensional skew-hermitian space over $E$. Then we may define the space \[ (W_{X, Y}, (\,, \,)_{X, Y}) := \left(\mathrm{Res}_{E \slash F} X \otimes Y, \mathrm{Tr}_{E \slash F}\left((\,, \,)_X \otimes \overline{(\,,\,)_Y} \right) \right). \] This is a $2mn$-dimensional symplectic space over $F$, and we denote its isometry group by $\mathrm{Sp}\left( W_{X, Y}\right)$. For each place $v$ of $F$, we denote the metaplectic extension of $\mathrm{Sp}\left(W_{X, Y}\right)(F_v)$ by $\mathrm{Mp}\left( W_{X, Y}\right)(F_v)$. Also, $\mathrm{Mp}\left( W_{X, Y}\right)(\mA)$ denotes the metaplectic extension of $\mathrm{Sp}\left(W_{X, Y}\right)(\mA)$. Let $\chi_X$ and $\chi_Y$ be characters of $\mA_E^\times \slash E^\times$ such that $\chi_{X}|_{\mA^\times} = \chi_E^m$ and $\chi_{Y}|_{\mA^\times} = \chi_E^n$. For each place $v$ of $F$, let \[ \iota_{\chi_v} : \mathrm{U}(X)(F_v) \times \mathrm{U}(Y)(F_v) \rightarrow \mathrm{Mp}(W_{X, Y})(F_v) \] be the local splitting given by Kudla~\cite{Ku} depending on the choice of a pair of characters $\chi_v =(\chi_{X,v}, \chi_{Y,v})$. Using this local splitting, we get a splitting \[ \iota_{\chi} : \mathrm{U}(X)(\mA) \times \mathrm{U}(Y)(\mA) \rightarrow \mathrm{Mp}(W_{X, Y})(\mA), \] depending on $\chi = (\chi_X, \chi_Y)$. Then by the pull-back, we obtain the Weil representation $\omega_{\psi, \chi}$ of $\mathrm{U}(X)(\mA) \times \mathrm{U}(Y)(\mA)$. When we fix a polarization $W_{X, Y} = W_{X, Y}^+ \oplus W_{X, Y}^-$, we may realize $\omega_{\psi, \chi}$ so that its space of smooth vectors is given by $\mathcal{S}(W_{X, Y}^+(\mA))$, the space of Schwartz-Bruhat functions on $W_{X, Y}^+(\mA)$. We define \[ R := \left\{ (g, h) \in \mathrm{GU}(X) \times \mathrm{GU}(Y) : \lambda(g) = \lambda(h) \right\} \supset \mathrm{U}(X)\times \mathrm{U}(Y). \] Suppose that $\dim Y$ is even and $Y$ is maximally split, in the sense that $Y$ has a maximal isotropic subspace of dimension $\frac{1}{2} \dim Y$. In this case, as in Sections~\ref{def theta} and \ref{def theta D}, we may extend $\omega_{\psi,\chi}$ to $R(\mA)$. On the other hand, in this case, we have an explicit splitting over $R(F_v)$ of the metaplectic cover of $\mathrm{Sp}(W_{X, Y})(F_v)$ given by Zhang~\cite{CZ} and we may extend $\omega_{\psi,\chi}$ to $R(\mA)$ using this splitting. These two extensions of $\omega_{\psi,\chi}$ to $R\left(\mA\right)$ coincide. Then for $\phi \in \mathcal{S}(W_{X, Y}^+(\mA))$, we define the theta function $\theta_{\psi, \chi}^\phi$ on $R(\mA)$ by \begin{equation} \label{theta fct def} \theta_{\psi, \chi}^\phi(g, h) = \sum_{w \in W_{X, Y}^+(F)} \omega_{\psi, \chi}(g, h)\phi(w). \end{equation} Let us define \begin{align*} \mathrm{GU}(X)(\mA)^{+} :&= \left\{g \in \mathrm{GU}(X)(\mA) : \lambda(g) \in \lambda(\mathrm{GU}(Y)(\mA)) \right\}, \\ \mathrm{GU}(X)(F)^{+} :&= \mathrm{GU}(X)(\mA)^{+} \cap \mathrm{GU}(X)(F). \end{align*} We define $\mathrm{GU}(Y)(\mA)^{+}$ and $\mathrm{GU}(Y)(F)^{+}$ in a similar manner. Let $(\sigma, V_\sigma)$ be an irreducible cuspidal automorphic representation of $\mathrm{GU}(X)(\mA)^{+}$.
Then for $\varphi \in V_\sigma$ and $\phi \in \mathcal{S}(W_{X, Y}^+(\mA))$, we define the theta lift of $\varphi$ by \[ \theta_{\psi, \chi}^\phi(\varphi)(h) = \int_{\mathrm{U}(X)(F) \backslash \mathrm{U}(X)(\mA)} \varphi(g_1g) \theta_{\psi, \chi}^\phi(g_1g, h) \, dg_1 \] where $g \in \mathrm{GU}(X)(\mA)^+$ is chosen so that $\lambda(g) = \lambda(h)$. Further, we define the theta lift of $\sigma$ by \[ \Theta_{\psi, \chi}^{X, Y}(\sigma) = \langle \theta^\phi_{\psi, \chi}(\varphi) ; \varphi \in \sigma, \phi \in \mathcal{S}(W_{X, Y}^+(\mA)) \rangle. \] When the space we consider is clear, we simply write $\Theta^{X, Y}_{\psi, \chi}(\sigma)$ as $\Theta_{\psi, \chi}(\sigma)$. Similarly, for an irreducible cuspidal automorphic representation $\tau$ of $\mathrm{U}(Y)(\mA)$, we define $\Theta_{\psi, \chi}^{Y, X}(\tau)$ and we simply write it as $\Theta_{\psi, \chi}(\tau)$. \section{Proof of the Gross-Prasad conjecture for $\left(\mathrm{SO}\left(5\right),\mathrm{SO}\left(2\right)\right)$} In this section we prove Theorem~\ref{ggp SO}, i.e. the Gross-Prasad conjecture for $\left(\mathrm{SO}\left(5\right),\mathrm{SO}\left(2\right)\right)$, based on the pull-back formulas obtained in the previous section. \subsection{Proof of the statement (1) in Theorem~\ref{ggp SO}} Let $(\pi, V_\pi)$ be as in Theorem~\ref{ggp SO} (1). By the uniqueness of the Bessel model due to Gan, Gross and Prasad~\cite[Corollary~15.3]{GGP} at finite places and to Jiang, Sun and Zhu~\cite[Theorem~A]{JSZ} at archimedean places, there exists a unique irreducible constituent $\pi_+^B$ of $\pi\mid_{G_D(\mA)^+}$ that has the $(\xi, \Lambda, \psi)$-Bessel period. When $D$ is split and $\pi_+^B$ is a theta lift of an irreducible cuspidal automorphic representation of $\mathrm{GSO}_{3,1}\left(\mA\right)$, our assertion has been proved by Corbett~\cite{Co}. Hence in the remainder of this subsection, we assume that: \begin{multline}\label{assumption not type I-A} \text{\emph{when $D$ is split, $\pi$ is not a theta lift of an irreducible cuspidal}} \\ \text{\emph{automorphic representation of $\mathrm{GSO}_{3,1}\left(\mA\right)$}}. \end{multline} Let us proceed under the assumption~\eqref{assumption not type I-A}. By Propositions~\ref{pullback Bessel gsp} and \ref{pullback Bessel gsp innerD}, the theta lift $\theta_\psi(\pi_+^B)$ of $\pi_+^B$ to $\mathrm{GSU}_{3,D}\left(\mA\right)$ has the $(X_\xi, \Lambda^{-1}, \psi)$-Bessel period and, in particular, $\theta_\psi(\pi_+^B) \ne 0$ where we take $X_\xi \in D^\times$ so that $\xi_{X_\xi} = \xi$. For example, when we take $\xi =\eta$, we may take $X_\xi =1$. \begin{lemma} \label{nonzero lemma (1)} $\theta_\psi(\pi_+^B)$ is an irreducible cuspidal automorphic representation of $\mathrm{GSU}_{3,D}(\mA)$. \end{lemma} \begin{proof} First we note that the irreducibility follows from the cuspidality by Remarks~\ref{theta irr rem} and \ref{theta irr rem D}. Let us show the cuspidality. Suppose on the contrary that $\theta_\psi(\pi_+^B)$ is not cuspidal. When $D$ is not split, the Rallis tower property implies that the theta lift $\theta_{D, \psi}(\pi_+^B)$ of $\pi_+^B$ to $\mathrm{GSU}_{1,D}( \mA)$ is non-zero and cuspidal. Let $w$ be a finite place of $F$ such that $D(F_w)$ is split and $\pi_{+,w}^B$ is a generic representation of $G(F_w)^+$. Since $\pi_{+,w}^B$ is generic, the theta lift of $\pi_{+, w}^B$ to $\mathrm{GSO}_2(F_w)$ vanishes by the same argument as the one for \cite[Proposition~2.4]{GRS97}.
We note that $\mathrm{GSU}_{1,D}( F_w) \simeq \mathrm{GSO}_2(F_w)$ and hence the theta lift of $\pi_+^B$ to $\mathrm{GSU}_{1,D}( \mA)$ must vanish. This is a contradiction. Suppose that $D$ is split. Then the theta lift of $\pi_+^B$ to $\mathrm{GSO}_{3,1}$ is non-zero by the Rallis tower property. Moreover, it is not cuspidal by our assumption on $\pi$. Thus the theta lift of $\pi_+^B$ to $\mathrm{GSO}_{2,0}$ is non-zero, again by the Rallis tower property. Then we reach a contradiction by the same argument as in the non-split case. \end{proof} We may regard $\theta_\psi(\pi_+^B)$ as an irreducible cuspidal automorphic representation of $\mathrm{PGU}_{2,2}$ or $\mathrm{PGU}_{3,1}$ according to whether $D$ is split or not, under the isomorphism $\Phi$ in \eqref{acc isom2} or $\Phi_D$ in \eqref{acc isom1}. Recall our assumption that $\theta_{\psi_{w}}(\pi_{+,w}^B)$ is generic at a finite place $w$. Then the non-vanishing of the $(X_\xi, \Lambda^{-1}, \psi)$-Bessel period on $\theta_\psi(\pi_+^B)$ implies the non-vanishing of the central value of the standard $L$-function of $\theta_\psi(\pi_+^B)$, regarded as a representation of $\mathrm{PGU}_4$, twisted by $\Lambda^{-1}$, namely \[ L^{S} \left(\frac{1}{2}, \theta_\psi(\pi_+^B) \times \Lambda^{-1} \right) \ne 0 \] for any finite set $S$ of places of $F$ containing all archimedean places because of the unitary group case of the Gan-Gross-Prasad conjecture for $\theta_\psi(\pi_+^B)$ proved by Proposition~A.2 and Remark~A.1 in \cite{FM3}. Moreover, from the explicit computation of the local theta correspondence in \cite{GT0} and \cite{Mo}, we see that \[ L(s, \pi_v \times \mathcal{AI} \left(\Lambda \right)_v) = L \left(s, \theta_\psi(\pi_+^B)_v \times \Lambda_v^{-1} \right) \] at a finite place $v$ where all data are unramified. Thus when we take $S_0$, a finite set of places of $F$ containing all archimedean places, so that all data are unramified at $v\notin S_0$, we have \[ L^{S} \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) = L^{S} \left(\frac{1}{2}, \theta_\psi(\pi_+^B) \times \Lambda^{-1} \right) \ne 0 \] for any finite set $S$ of places of $F$ with $S\supset S_0$. Let us show the existence of $\pi^\circ$. We denote $\theta_\psi(\pi_+^B)$ by $\sigma$. Then the theta lift $\Sigma : = \Theta_{\psi, (\Lambda^{-1}, \Lambda^{-1})}(\sigma)$ of $\sigma$ to $\mathrm{GU}_{2,2}$, which we may regard as an automorphic representation of $\mathrm{GSO}_{4,2}$ by the accidental isomorphism~\eqref{acc isom2}, is an irreducible cuspidal globally generic automorphic representation with trivial central character by the proof of \cite[Proposition~A.2]{FM3} since $\theta_\psi(\pi_+^B)$ has the $(X_\xi, \Lambda^{-1}, \psi)$-Bessel period. Here we recall that, by the conservation relation due to Sun and Zhu~\cite[Theorem~1.10, Theorem~7.6]{SZ}, for any irreducible admissible representation $\tau$ of $\mathrm{GO}_{4,2}(k)$ (resp. $\mathrm{GO}_{3,3}(k)$) over a local field $k$ of characteristic zero, the theta lift of either $\tau$ or $\tau \otimes \det$ to $\mathrm{GSp}_3(k)^+$ (resp. $\mathrm{GSp}_3(k)$) is non-zero. Thus we may extend $\Sigma$ to an automorphic representation of $\mathrm{GO}_{4,2}(\mA)$ as in Harris--Soudry--Taylor~\cite[Proposition~2]{HST} so that its local theta lift to $\mathrm{GSp}_3(F_v)^+$ is non-zero at every place $v$.
On the other hand, since $\Sigma$ is nearly equivalent to $\sigma$, we have \begin{equation}\label{e: pole at 1} L^S(s, \Sigma, \mathrm{std}) = L^S(s, \pi, \mathrm{std} \otimes \chi_E) \zeta_F^S(s) \end{equation} for a sufficiently large finite set $S$ of places of $F$ containing all archimedean places by the explicit computation of local theta correspondences in \cite{GT0} and \cite{Mo}. Here \[ \label{s=1 non-zero} L^S(1, \pi, \mathrm{std} \otimes \chi_E) \ne 0 \] by Yamana~\cite[proof of Theorem~10.2, Theorem~10.3]{Yam}, since the theta lift $\theta_\psi(\pi_+^B)$ of $\pi_+^B$ to $\mathrm{GSU}_{3,D}( \mA)$ is non-zero and cuspidal. Hence the left hand side of \eqref{e: pole at 1} has a pole at $s=1$. In particular, it is non-zero and the theta lift of $\Sigma$ to $\mathrm{GSp}_3(\mA)^+$ is non-zero by Takeda~\cite[Theorem~1.1 (1)]{Tak}. Further, again by Takeda~\cite[Theorem~1.1 (1)]{Tak}, this theta lift actually descends to $\mathrm{GSp}_2(\mA)^+ =G(\mA)^+$. Namely, the theta lift $\pi^\prime_+ := \theta_{\psi^{-1}}(\Sigma)$ of $\Sigma$ to $G(\mA)^+$ is non-zero since $L^S(s, \Sigma, \mathrm{std})$ actually has a pole at $s=1$. Suppose that $\pi^\prime_+$ is not cuspidal. Then by the Rallis tower property, the theta lift of $\Sigma$ to $\mathrm{GL}_2\left(\mA\right)^+$ is non-zero and cuspidal. Meanwhile the local theta lift of $\Sigma_v$ to $\mathrm{GL}_2\left( F_v\right)^+$ vanishes by a computation similar to the one for \cite[Proposition~3.3]{GRS97} since $\Sigma_v$ is generic. This is a contradiction and hence $\pi^\prime_+$ is cuspidal. Since $\Sigma$ is generic, so is $\pi^\prime_+$ by \cite[Proposition~3.3]{Mo}. Let us take an extension $\pi^\circ$ of $\pi_+^\prime$ to $G\left(\mathbb A\right)$. Since $\left| G(F_v) \slash G(F_v)^+\right|=2$, we have $\pi^\prime_{v} \simeq \pi_v$ or $\pi^\prime_{v} \simeq \pi_v \otimes \chi_{E_v}$ at almost all places $v$ such that $\pi_{+,v}^\prime \simeq \pi_{+, v}^B$. Hence $\pi$ is locally $G^+$-nearly equivalent to $\pi^\circ$. \qed \\ \subsection{Some consequences of the proof of Theorem~\ref{ggp SO} (1)} As preliminaries for our further considerations, we would like to discuss some consequence of the proof of Theorem~\ref{ggp SO} (1) and related results. First we note the following result concerning the functorial transfer. \begin{proposition} \label{exist gen prp} Let $(\pi, V_\pi)$ be an irreducible cuspidal automorphic representation of $G_D(\mA)$ with a trivial central character. Assume that there exists a finite place $w$ at which $\pi_w$ is generic and tempered. Then there exists a globally generic irreducible cuspidal automorphic representation $\pi^\circ$ of $G\left(\mA\right)$ and an \'{e}tale quadratic extension $E^\circ$ of $F$ such that $\pi^\circ$ is $G^{+, E^\circ}$-nearly equivalent to $\pi$. In particular we have a weak functorial lift of $\pi$ to $\mathrm{GL}_4(\mA_{E^\circ})$ with respect to $\mathrm{BC} \circ \mathrm{spin}$. Moreover, $\pi$ is tempered if and only if $\pi^\circ$ is tempered. \end{proposition} \begin{Remark} \label{exist gen rem} When $D$ is split, our assumption implies that $\pi$ has a generic Arthur parameter. Though our assertion thus follows from the global descent method by Ginzburg, Rallis and Soudry~\cite{GRS} and Arthur~\cite{Ar}, we shall present another proof which does not refer to these papers. \end{Remark} \begin{proof} Suppose that $D$ is split. When $\pi$ participates in the theta correspondence with $\mathrm{GSO}_{3,1}$, our assertion follows from \cite{Ro}. 
Thus we now assume that the theta lift of $\pi$ to $\mathrm{GSO}_{3,1}$ is zero. By \cite{JSLi}, $\pi$ has the $(S_\circ, \Lambda_\circ, \psi)$-Bessel period for some $S_\circ$ and $\Lambda_\circ$. When $\mathrm{GSO}(S_\circ)$ is not split, the existence of the desired globally generic irreducible cuspidal automorphic representation $\pi^\circ$ follows from Theorem~\ref{ggp SO} (1). Suppose that $\mathrm{GSO}(S_\circ)$ is split. Then by Proposition~\ref{pullback Bessel gsp}, the theta lift of $\pi$ to $\mathrm{GSO}_{3,3}$ is non-zero. Since $\pi_w$ is generic, the local theta lift of $\pi_w$ to $\mathrm{GSO}_{1,1}$ is zero as in the proof of Theorem~\ref{ggp SO} (1) and hence the theta lift of $\pi$ to $\mathrm{GSO}_{1,1}$ is zero. Hence by the Rallis tower property, either the theta lift of $\pi$ to $\mathrm{GSO}_{2,2}$ or the one to $\mathrm{GSO}_{3,3}$ is non-zero and cuspidal. Then $\pi$ itself is globally generic by Proposition~\ref{GRS identity} in the former case. In the latter case, the global genericity of $\pi$ readily follows from the proof of Soudry~\cite[Proposition~1.1]{So} (see also Theorem in p.264 of \cite{So}). In any case, when $D$ is split, we have a globally generic irreducible cuspidal automorphic representation $\pi^\circ$ of $G\left(\mA\right)$ which is nearly equivalent to $\pi$. Thus when we take the strong lift of $\pi^\circ$ to $\mathrm{GL}_4\left(\mA\right)$ by \cite{CKPSS}, it is a weak lift of $\pi$ to $\mathrm{GL}_4\left(\mA\right)$. Suppose that $D$ is not split. Then by Li~\cite{JSLi}, there exist an element $\eta_\circ\in D^-(F)$, for which $E_\circ:=F\left(\eta_\circ\right)$ is a quadratic extension of $F$, and a character $\Lambda_\circ$ of $\mathbb A_{E_\circ}^\times \slash E_\circ^\times \mA^\times$ such that $\pi$ has the $\left(\eta_\circ, \Lambda_\circ\right)$-Bessel period. Then a desired automorphic representation $\pi^\circ$ of $G\left(\mA\right)$ exists by Theorem~\ref{ggp SO} (1). Let us discuss the temperedness. Let $\sigma$, $\Sigma$ and $\pi_+^\prime$ be as in the proof of Theorem~\ref{ggp SO} (1). Suppose that $\pi$ is tempered. Then the temperedness of $\sigma$ follows from an argument similar to the one in Atobe-Gan~\cite[Proposition~5.5]{AG} (see also \cite[Proposition~C.1]{GI1}) at finite places, from Paul~\cite[Theorem~15, Theorem~30]{Pau}, \cite[Theorem~15, Theorem~18, Corollary~24]{Pau3} and Li-Paul-Tan-Zhu~\cite[Theorem~4.20, Theorem~5.1]{LPTZ} at real places and from Adams-Barbasch~\cite[Theorem~2.7]{AB} at complex places. Then the temperedness of $\sigma$ implies that of $\Sigma$ by Atobe-Gan~\cite[Proposition~5.5]{AG} at finite places, by Paul~\cite[Theorem~3.4]{Pau2} at non-split real places, by M\oe glin~\cite[Proposition~III.9]{Moe} at split real places and by Adams-Barbasch~\cite[Theorem~2.6]{AB} at complex places. As we obtained the temperedness of $\sigma$ from that of $\pi$, the temperedness of $\Sigma$ implies that of $\pi^\prime_+$ and hence $\pi^\circ$ is tempered. The opposite direction, i.e., the temperedness of $\pi^\circ$ implies that of $\pi$, follows by the same argument. \end{proof} \begin{lemma} \label{compo number} Let $\pi$ be as in Theorem~\ref{ggp SO} (1). Suppose that $\sigma=\theta_\psi\left(\pi_+^B\right)$ is an irreducible cuspidal automorphic representation of $\mathrm{GSU}_{3,D} \left(\mA\right)$. Here $\pi_+^B$ denotes the unique irreducible constituent of $\pi|_{G_D(\mA)^+}$ such that $\pi_+^B$ has the $(E, \Lambda)$-Bessel period.
We regard $\sigma$ as an automorphic representation of $\mathrm{GU}_{4,\varepsilon}\left(\mA\right)$ via \eqref{acc isom1} or \eqref{acc isom2} and let $\Pi_\sigma$ denote the base change lift of $\sigma |_{\mathrm{U}_{4,\varepsilon}\left(\mA\right)}$ to $\mathrm{GL}_4\left(\mA_E\right)$. Let $\pi^\circ$ be a globally generic irreducible cuspidal automorphic representation of $G\left(\mA\right)$ whose existence is proved in Theorem~\ref{ggp SO} (1). We denote the functorial lift of $\pi^\circ$ to $\mathrm{GL}_4\left(\mA\right)$ by $\Pi_{\pi^\circ}$. Suppose that \begin{equation}\label{e: decomposition 1} \Pi_{\pi^\circ} = \Pi_{1} \boxplus \cdots \boxplus \Pi_\ell \end{equation} where $\Pi_i$ are irreducible cuspidal automorphic representations of $\mathrm{GL}_{n_i}(\mA)$ and \begin{equation}\label{e: decomposition 2} \Pi_{\sigma} =\Pi_1^\prime \boxplus \cdots \boxplus \Pi_k^\prime \end{equation} where $\Pi_j^\prime$ are irreducible cuspidal automorphic representations of $\mathrm{GL}_{m_j}(\mA_E)$. Then we have $\Pi_\sigma=\mathrm{BC}\left(\Pi_{\pi^\circ}\right)$, $\Pi_{\pi^\circ}\not\simeq\Pi_{\pi^\circ}\otimes\chi_E$ and $\mathrm{BC}\left(\Pi_i\right)$ is cuspidal for each $i$. In particular, we have $\ell=k$. Here $\mathrm{BC}$ denotes the base change from $F$ to $E$. \end{lemma} \begin{proof} By the explicit computation of local theta correspondences in \cite{GT0} and \cite{Mo}, we see that $\left(\Pi_{\sigma}\right)_v\simeq \mathrm{BC}(\Pi_{\pi^\circ} )_v$ at almost all finite places $v$ of $E$. Thus, $\Pi_{\sigma}= \mathrm{BC}(\Pi_{\pi^\circ} )$ by the strong multiplicity one theorem. Also, by \cite{CKPSS}, we know that $\ell = 1$ or $2$. Suppose that $\ell=1$. We note that the cuspidality of $\mathrm{BC}(\Pi_{\pi^\circ})$ is equivalent to $\Pi_{\pi^\circ} \otimes \chi_E \not \simeq \Pi_{\pi^\circ}$. Suppose otherwise, i.e. $\Pi_{\pi^\circ} \simeq \Pi_{\pi^\circ} \otimes \chi_E$. Then $\Pi_{\pi^\circ} = \mathcal{AI}(\tau)$ for some irreducible cuspidal automorphic representation $\tau$ of $\mathrm{GL}_2(\mA_E)$. Since $\Pi_{\pi^\circ}$ is a lift from $\mathrm{PGSp}_2$, the central character of $\tau$ needs to be trivial and hence $\tau\simeq \tau^\vee$. On the other hand, we have \[ \Pi_{\sigma} = \mathrm{BC}\left( \mathcal{AI}(\tau) \right)= \tau \boxplus \tau^\sigma. \] Since this is a base change lift of $\sigma |_{\mathrm{U}_{4,\varepsilon}\left(\mA\right)}$, we have $\tau = \left( \tau^\sigma \right)^\vee$ and $\tau \not \simeq \tau^\sigma$ by \cite{AC} (see also \cite[Proposition~3.1]{PR}). In particular, $\tau\not\simeq \tau^\vee$ and we have a contradiction. Thus $\mathrm{BC}\left(\Pi_{\pi^\circ}\right)$ is cuspidal and $k=1$. Suppose that $\ell=2$. First we show that $\Pi_{\pi^\circ} \not \simeq \Pi_{\pi^\circ} \otimes \chi_E$. Suppose otherwise, i.e. $\Pi_{\pi^\circ} \simeq \Pi_{\pi^\circ} \otimes \chi_E$. Then either $\Pi_i \simeq \Pi_i \otimes \chi_E$ for $i=1,2$, or, $\Pi_2 \simeq \Pi_1 \otimes \chi_E$. In the former case, we have $\Pi_i=\mathcal{AI}\left(\chi_i\right)$ with a character $\chi_i$ of $\mA_E^\times \slash E^\times$ for $i=1,2$. Then we have $\Pi_{\pi^\circ}=\mathcal{AI}\left(\chi_1\right)\boxplus \mathcal{AI}\left(\chi_2\right)$ and $\Pi_\sigma=\chi_1 \boxplus \chi_1^\sigma \boxplus \chi_2 \boxplus \chi_2^\sigma$. Since $\Pi_{\pi^\circ}$ is a lift from $\mathrm{PGSp}_2$, the central character of $\mathcal{AI}\left(\chi_i\right)$ is trivial and hence $\chi_i\mid_{\mA^\times}=\chi_E$. 
On the other hand, since $\Pi_\sigma$ is a base change lift of $\sigma |_{\mathrm{U}_{4,\varepsilon}\left(\mA\right)}$, we see that $\chi_i\mid_{\mA^\times}$ is trivial. This is a contradiction. In the latter case, we have $\mathrm{BC}\left(\Pi_2\right)=\mathrm{BC}\left(\Pi_1 \otimes \chi_E\right) =\mathrm{BC}\left(\Pi_1\right)$ and hence $\Pi_\sigma=\mathrm{BC}\left(\Pi_1\right)\boxplus \mathrm{BC}\left(\Pi_1\right)$. This implies that $\Pi_\sigma$ is not in the image of the base change lift from the unitary group and again we have a contradiction. Thus we have $\Pi_{\pi^\circ} \not \simeq \Pi_{\pi^\circ} \otimes \chi_E$. Then $\Pi_i \not\simeq \Pi_i \otimes \chi_E$ for at least one of $i=1,2$. Suppose that this is so only for one of the two, say $i=2$. Then $\Pi_1=\mathcal{AI}\left(\chi\right)$ for some character $\chi$ of $\mA_E^\times\slash E^\times$ and $\mathrm{BC}\left(\Pi_2\right)$ is cuspidal. We have $\Pi_{\pi^\circ}=\mathcal{AI}\left(\chi\right)\boxplus \Pi_2$ and $\Pi_\sigma=\chi\boxplus\chi^\sigma\boxplus\mathrm{BC}\left(\Pi_2\right)$. Then $\chi\mid_{\mA^\times}$ is trivial from the former equality and $\chi\mid_{\mA^\times}=\chi_E$ from the latter equality as above. Hence we have a contradiction. Thus $\mathrm{BC}\left(\Pi_i\right)$ for $i=1,2$ are both cuspidal, $\Pi_\sigma=\mathrm{BC}\left(\Pi_1\right)\boxplus\mathrm{BC}\left(\Pi_2\right)$ and $k=2$. \end{proof} The following lemma gives the uniqueness of the constant $\ell(\pi)$ defined before Theorem~\ref{ref ggp}. \begin{lemma} \label{comp number not dep on S} Let $\pi$ be as in Theorem~\ref{ggp SO} (1). For $i=1,2$, let $E_i$ be a quadratic extension of $F$ and $\pi^\circ_i$ an irreducible cuspidal automorphic representation of $G\left(\mA\right)$ which is locally $G^{+,E_i}$-nearly equivalent to $\pi$. Let $\Pi_{\pi^\circ_i}$ be the functorial lift of $\pi_i^\circ$ to $\mathrm{GL}_4\left(\mA\right)$ and consider the decomposition \[ \Pi_{\pi^\circ_i}=\Pi_{i,1}\boxplus\cdots\boxplus\Pi_{i,\ell_i} \quad\text{for $i=1,2$} \] as in \eqref{e: decomposition 1}. Then we have $\ell_1=\ell_2$. \end{lemma} \begin{proof} Since the case when $E_1=E_2$ is trivial, suppose that $E_1\ne E_2$. Let $K=E_1E_2$. From the definition of the base change, we have \[ \mathrm{BC}_{K\slash E_1} \left( \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^\circ_1}) \right) = \mathrm{BC}_{K \slash E_1} \left( \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^{\circ}_2}) \right). \] Hence \[ \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^\circ_1}) = \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^{\circ}_2}) \quad \text{or} \quad \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^\circ_1}) = \mathrm{BC}_{E_1 \slash F} (\Pi_{\pi^{\circ}_2})\otimes \chi_{K\slash E_1} \] where $\chi_{K\slash E_1}$ denotes the character of $\mA_{E_1}^\times$ corresponding to $K \slash E_1$. In the former case, we have \[ \Pi_{\pi^\circ_1} = \Pi_{\pi^{\circ}_2} \quad \text{or} \quad \Pi_{\pi^\circ_1} = \Pi_{\pi^{\circ}_2} \otimes \chi_{E_1} \] and our claim follows. In the latter case, since $\chi_{K\slash E_1} = \chi_{E_2} \circ N_{E_1 \slash F}$, we have \[ \Pi_{\pi^\circ_1} = \Pi_{\pi^{\circ}_2} \otimes \chi_{E_2} \quad \text{or} \quad \Pi_{\pi^\circ_1} = \Pi_{\pi^{\circ}_2} \otimes \chi_{E_2} \chi_{E_1} \] and our claim follows. \end{proof} \begin{Definition} \label{type} Let $\pi$ be as in Theorem~\ref{ggp SO} (1). Then we say that $\pi$ is of type I if $\pi$ and $\pi \otimes \chi_E$ are nearly equivalent.
Moreover, we say that $\pi$ is of type I-A if $\pi$ participates in the theta correspondence with $\mathrm{GSO}(S_1) =\mathrm{GSO}_{3,1}$ and that $\pi$ is of type I-B if $\pi$ participates in the theta correspondence with $\mathrm{GSO}(X_\circ)$ for some four dimensional anisotropic orthogonal space $X_\circ$ over $F$ with discriminant algebra $E$. \end{Definition} \begin{Remark} From the proof of Theorem~\ref{ggp SO} (1), if $\pi$ is not of type I-A, then the theta lift of $\pi$ to $\mathrm{GSU}_{3,D}$ is cuspidal. Further, we note that $D$ is necessarily split when $\pi$ is of type I-A or I-B, by definition. \end{Remark} In order to study an explicit formula using theta lifts from $G_D(\mA)$, the following lemma will be important later. \begin{lemma} \label{irr lemma} Let $\pi$ be as in Theorem~\ref{ggp SO} (1). Then $\pi$ is either of type I-A or I-B if and only if $\pi$ is nearly equivalent to $\pi \otimes \chi_E$. In particular, when $\pi$ is neither of type I-A nor I-B, $\pi|_{\mathcal{G}_D}$ is irreducible where \begin{equation}\label{d: mathcal G_D} \mathcal G_D=Z_{G_D}\left(\mA\right)G_D\left(\mA\right)^+ G_D\left(F\right). \end{equation} \end{lemma} \begin{proof} Suppose that $\pi$ is nearly equivalent to $\pi \otimes \chi_E$. Then at almost all places $v$ of $F$, $\mathrm{Ind}_{G_D\left(F_v\right)^+}^{G_D\left(F_v\right)}\left(\pi_{+,v}\right)$ is irreducible, where $\pi_{+,v}$ is an irreducible constituent of $\pi_v\mid_{G_D\left( F_v\right)^+}$. This implies that $\pi$ and $\pi^\circ$ are nearly equivalent and hence $\pi^\circ$ is nearly equivalent to $\pi^\circ \otimes \chi_E$. Thus $\Pi_{\pi^\circ}$ is nearly equivalent to $\Pi_{\pi^\circ} \otimes \chi_E$ and hence $\Pi_{\pi^\circ} = \Pi_{\pi^\circ} \otimes \chi_E$ by the strong multiplicity one theorem. When $\pi$ is neither of type I-A nor I-B, this does not happen by Lemma~\ref{nonzero lemma (1)} and Lemma~\ref{compo number}. Suppose that $\pi$ is either of type I-A or I-B. Then $D$ is split and the functorial lift $\Pi_\pi$ of $\pi$ to $\mathrm{GL}_4\left(\mA\right)$ is of the form $\mathcal{AI}\left(\tau\right)$ for an irreducible automorphic representation $\tau$ of $\mathrm{GL}_2\left(\mA_E\right)$ by Roberts~\cite{Ro}. Then we have $\Pi_\pi=\Pi_\pi\otimes\chi_E$. Hence $\pi$ is nearly equivalent to $\pi\otimes\chi_E$. When $\pi$ is not nearly equivalent to $\pi\otimes\chi_E$, $\pi\mid_{\mathcal G_D}$ is irreducible since $\mathcal G_D$ is of index $2$ in $G_D\left(\mA\right)$. \end{proof} \begin{Remark} This lemma gives a classification of $\pi$ such that the twist $\pi \otimes \chi_E$ of $\pi$ by $\chi_E$ has the same Arthur parameter as $\pi$. When $G_D \simeq G$, a classification of $\pi$ such that $\pi$ and $\pi \otimes \chi_E$ are isomorphic is given in Chan~\cite{PSC}. \end{Remark} \subsection{Proof of the statement (2) in Theorem~\ref{ggp SO}} \label{s:Proof of the statement (2)} Suppose that $\pi$ has a generic Arthur parameter. When there exists a pair $\left(D^\prime,\pi^\prime\right)$ as described in Theorem~\ref{ggp SO} (2), $\pi$ and $\pi^\prime$ share the same generic Arthur parameter since they are nearly equivalent to each other. Hence by Theorem~\ref{ggp SO} (1), we have \[ L^S \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) = L^S \left(\frac{1}{2}, \pi^\prime \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0 \] when $S$ is a sufficiently large finite set of places of $F$.
Then by Remark~\ref{L-fct def rem}, we have \[ L\left(\frac{1}{2},\pi\times\mathcal{AI}\left(\Lambda \right)\right)\ne 0, \] i.e. \eqref{e:L non-zero} holds. Conversely suppose that $L \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0$. There exists an irreducible cuspidal globally generic automorphic representation $\pi^\circ$ of $G(\mA)$ which is nearly equivalent to $\pi$ since $\pi$ has a generic Arthur parameter. Let $U$ be a maximal unipotent subgroup of $\mathrm{GSO}_{4,2}$ and $\psi_U$ be a non-degenerate character of $U(\mA)$ defined below by \eqref{def:U} and \eqref{def:psi_U}, which are the same as \cite[(2.4)]{Mo} and \cite[(3.1)]{Mo}, respectively. Let $U_G$ be the maximal unipotent subgroup of $\mathrm{GSp}_{2}$ defined by \eqref{d: U_G} and $\psi_{U_G}$ the non-degenerate character of $U_G(\mA)$ defined by \eqref{def:psi_UG} in \ref{s: pf wh gso}. Note that in \cite{Mo}, $U_G$ is denoted by $N$ and $\psi_{U_G}$ is denoted by $\psi_{N}$ in \cite[p.34]{Mo} and \cite[(3.2)]{Mo}, respectively. Then we note that the restriction of $\pi^\circ$ to $G(\mA)^+$ contains a unique $\psi_{U_G}$-generic irreducible constituent and we denote it by $\pi_+^\circ$. Let us consider the theta lift $\Sigma:=\theta_\psi(\pi_+^\circ)$ of $\pi_+^\circ$ to $\mathrm{GSO}_{4,2}(\mA)$. Then by \cite[Proposition~3.3]{Mo}, we know that $\Sigma$ is $\psi_U$-globally generic and hence non-zero. We divide into two cases according to the cuspidality of $\Sigma$. Suppose that $\Sigma$ is not cuspidal. Then by Rallis tower property, $\pi^\circ_+$ participates in the theta correspondence with $\mathrm{GSO}_{3,1}$. As in the proof of Lemma~\ref{nonzero lemma (1)}, the theta lift of $\pi^\circ_+$ to $\mathrm{GSO}_2$ is zero since $\pi^\circ_+$ is generic. Hence the theta lift $\tau:=\theta_\psi^{X, S_1}(\pi^\circ_+)$ of $\pi^\circ_+$ to $\mathrm{GSO}_{3,1}$ is cuspidal and non-zero. By Remark~\ref{theta irr rem}, $\tau$ is also irreducible. Recall that \[ \mathrm{GSO}_{3,1}(F) \simeq \mathrm{GL}_2(E) \times F^\times \slash \{(z, \mathrm{N}_{E \slash F}(z)) : z \in E^\times \}, \,\, \mathrm{PGSO}_{3,1}(F) \simeq \mathrm{PGL}_2(E). \] Then we may regard $\tau$ as an irreducible cuspidal automorphic representation of $\mathrm{GL}_2(\mA_E)$ with a trivial central character since the central character of $\pi^\circ_+$ is trivial. Let $\Pi$ denote the strong functorial lift of $\pi^\circ$ to $\mathrm{GL}_4(\mA)$ by \cite{CKPSS}. Then at almost all finite places $v$ of $F$, we have $\Pi_v \simeq \mathcal{AI}(\tau)_v$, and thus by the strong multiplicity one theorem, $\Pi = \mathcal{AI}(\tau)$ holds. Since $\pi$ is nearly equivalent to $\pi^\circ$, Remark~\ref{L-fct def rem} and our assumption imply that for a sufficiently large finite set $S$ of places of $F$, we have \begin{align*} &L^S \left(\frac{1}{2}, \tau \times \Lambda \right) L^S \left(\frac{1}{2}, \tau \times \Lambda^{-1} \right) = L^S \left(\frac{1}{2}, \pi^\circ \times \mathcal{AI} \left(\Lambda \right) \right) \\ = &L^S \left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right) \right) \ne 0. \end{align*} Then by Waldspurger~\cite{Wal}, $\tau$ has the split torus model with respect to the character $(\Lambda, \Lambda^{-1})$. Hence, the equation in Corbett~\cite[p.78]{Co} implies that $\pi^\circ$ has the $(E, \Lambda)$-Bessel period. Hence we may take $D^\prime = \mathrm{Mat}_{2 \times 2}$ and $\pi^\prime = \pi^\circ$. Thus the case when $\Sigma$ is not cuspidal is settled. Suppose that $\Sigma$ is cuspidal. 
We may regard $\Sigma$ as an irreducible cuspidal globally generic automorphic representation of $\mathrm{GU}_{2,2}$ with trivial central character because of the accidental isomorphism \eqref{acc isom2}. As in the proof of Theorem~\ref{ggp SO} (1), our assumption implies that $L \left( \frac{1}{2}, \Sigma \times \Lambda \right) \ne 0$. Then by \cite[Proposition~A.2]{FM3}, there exists an irreducible cuspidal automorphic representation $\Sigma^\prime$ of $\mathrm{GU}(V)$ such that $\Sigma^\prime$ is locally $\mathrm{U}(V)$-nearly equivalent to $\Sigma$ and $\Sigma^\prime$ has the $(e, \Lambda, \psi)$-Bessel period where $V$ is a $4$-dimensional hermitian space over $E$ whose Witt index is at least $1$. Then we note that $\mathrm{PGU}(V) \simeq \mathrm{PGSO}_{4,2}$ or $\mathrm{PGU}_{3,D^\prime}$ for some quaternion division algebra $D^\prime$ over $F$. In the first case, we consider the theta lift $\pi^\prime_+:=\theta_{\psi^{-1}}(\Sigma^\prime)$ of $\Sigma^\prime$ to $G(\mA)^+$. Then by the same argument as the one in the proof of Theorem~\ref{ggp SO} (1), we see that $\pi_+^\prime \ne 0$ by Takeda~\cite[Theorem~1.1 (1)]{Tak} and that it is an irreducible cuspidal automorphic representation of $G\left(\mA\right)^+$. Since $\Sigma^\prime$ has the $(e, \Lambda, \psi)$-Bessel period, $\pi^\prime_+$ has the $(E, \Lambda)$-Bessel period by Proposition~\ref{pullback Bessel gsp}. From the definition, $\pi_+^\prime$ is nearly equivalent to $\pi_+^\circ$. Let us take an irreducible cuspidal automorphic representation $(\pi^\prime, V_{\pi^\prime})$ of $G(\mA)$ such that $\pi^\prime\mid_{G\left(\mA\right)^+} \supset \pi_+^\prime$. Then $\pi^\prime$ is locally $G^+$-nearly equivalent to $\pi$, and thus either $\pi^\prime$ or $\pi^\prime \otimes \chi_E$ is nearly equivalent to $\pi$ by Remark~\ref{rem loc ne}. Since both $\pi^\prime$ and $\pi^\prime \otimes \chi_E$ have the $(E, \Lambda)$-Bessel period, our claim follows. In the second case, we consider the theta lift of $\Sigma^\prime$ to $G_{D^\prime}(\mA)$. Then by an argument similar to the one in the first case, we may show that the theta lift of $\Sigma^\prime$ to $G_{D^\prime}(\mA)$ contains an irreducible constituent which is cuspidal, locally $G^+$-nearly equivalent to $\pi$ and has the $(E, \Lambda)$-Bessel period. Here we use \cite[Lemma~10.2]{Yam} and its proof in the case of ($\rm{I}_1$) with $n=3, m=2$, noting Remark~\ref{yam typo}. This completes our proof of the existence of a pair $\left(D^\prime,\pi^\prime\right)$. Let us show the uniqueness of such a pair $\left(D^\prime,\pi^\prime\right)$ under the assumption that $\pi$ is tempered. Suppose that for $i=1,2$ there exists a pair $\left(D_i,\pi_i\right)$, where $D_i$ is a quaternion algebra over $F$ and $\pi_i$ is an irreducible cuspidal automorphic representation of $G_{D_i}\left(\mA\right)$ which is nearly equivalent to $\pi$ such that $\pi_i$ has the $(E, \Lambda)$-Bessel period. Suppose that $\pi_i$ is nearly equivalent to $\pi_i \otimes \chi_E$ for $i=1,2$. Then by Lemma~\ref{irr lemma}, $\pi_1$ and $\pi_2$ are of type I-A or I-B and, in particular, $D_1 \simeq D_2 \simeq \mathrm{Mat}_{2 \times 2}$. Hence for $i=1,2$, there exist a four dimensional orthogonal space $X_i$ over $F$ with discriminant algebra $E$ and an irreducible cuspidal automorphic representation $\sigma_i$ of $\mathrm{GSO}(X_i, \mA)$ such that $\pi_i= \theta_{\psi}(\sigma_i)$.
Since $\mathrm{PGSO}(X_i, F) \simeq (D_i^\prime)^\times(E)\slash E^\times$ for some quaternion algebra $D_i^\prime$ over $F$, we may regard $\sigma_i$ as an automorphic representation of $(D_i^\prime)^\times(\mA_E)$ with trivial central character. Since $\pi_i$ has the $(E, \Lambda)$-Bessel period, $\sigma_i$ has the split torus period with respect to the character $\left(\Lambda,\Lambda^{-1}\right)$ by \cite[p.78]{Co}. Hence $D_i^\prime(E) \simeq \mathrm{Mat}_{2 \times 2}(E)$ by \cite{Wal}. Since $\sigma_1$ is nearly equivalent to $\sigma_2$, we have $\sigma_1=\sigma_2$ by the strong multiplicity one theorem. Thus $\pi_1\simeq \pi_2$. Suppose that $\pi_i$ is neither of type I-A nor I-B for $i=1,2$. For each $i$, let us take the unique irreducible constituent $\pi_{i, +}^B$ of $\pi_{i}|_{G_{D_i}(\mA)^+}$ that has the $(\xi_i, \Lambda, \psi)$-Bessel period. Note that $\pi_{1, +}^B$ and $\pi_{2, +}^B$ are nearly equivalent to each other. Now let $\sigma_i$ denote the theta lift $\theta_{\psi}(\pi_{i, +}^B)$ of $\pi_{i, +}^B$ to $\mathrm{GSU}_{3,D_i}$. Then we regard $\sigma_i$ as an automorphic representation of $\mathrm{GU}_{4,\varepsilon}$ via \eqref{acc isom1} or \eqref{acc isom2}, and let $\Sigma_i := \Theta_{\psi, (\Lambda^{-1}, \Lambda^{-1})}(\sigma_i)$ denote the theta lift of $\sigma_i$ to $\mathrm{GU}_{2,2}$. In turn, we regard $\Sigma_i$ as an automorphic representation of $\mathrm{GSO}_{4,2}$ via \eqref{acc isom2} and we denote by $\pi_{i,+}^\prime$ its theta lift to $G\left(\mA\right)^+$. Then from the proof of Theorem~\ref{ggp SO} (1), $\sigma_i$, $\Sigma_i$ and $\pi_{i,+}^\prime$ are irreducible and cuspidal. Moreover $\pi_{1,+}^\prime$ and $\pi_{2,+}^\prime$ are both globally generic and nearly equivalent to each other. Furthermore, since $\pi_i$ is tempered, $\sigma_i=\theta_{\psi}(\pi_{i, +}^B)$ is tempered at finite places by an argument similar to the one in Atobe-Gan~\cite[Proposition~5.5]{AG} (see also \cite[Proposition~C.1]{GI1}), and at real and complex places by Paul~\cite[Theorem~15, Theorem~30]{Pau} and Li-Paul-Tan-Zhu~\cite[Theorem~4.20, Theorem~5.1]{LPTZ}, and by Adams-Barbasch~\cite[Theorem~2.7]{AB}, respectively. Similarly $\Sigma_i$ and $\pi_{i,+}^\prime$ are also tempered. By Propositions~\ref{pullback Bessel gsp} and \ref{pullback Bessel gsp innerD}, we know that $\sigma_i$ has the $(X_{\xi_i}, \Lambda^{-1}, \psi)$-Bessel period. Let $\mathrm{GU}_i$ denote the similitude unitary group which modulo center is isomorphic to $\mathrm{PGSU}_{3, D_i}$ by \eqref{acc isom1}. Then $\sigma_i\mid_{\mathrm{U}_i}$ has a unique irreducible constituent $\nu_i$ which has the $(X_{\xi_i}, \Lambda^{-1}, \psi)$-Bessel period. Then by Beuzart-Plessis~\cite{BP1,BP2} (also by Xue~\cite{Xuea} at real places), we see that $\mathrm{U}_1 \simeq \mathrm{U}_2$ since $\nu_1$ and $\nu_2$ are equivalent to each other. This implies that $D_1 \simeq D_2$ and hence $G_{D_1} \simeq G_{D_2}$. Let us denote by $D^\prime$ the quaternion algebra $D_1\simeq D_2$. We take an irreducible cuspidal automorphic representation $\pi_{i}^\prime$ of $G(\mA)$ such that $\pi_i^\prime|_{G(\mA)^+}$ contains $\pi_{i,+}^\prime$. Then by Remark~\ref{rem loc ne}, we may suppose that $\pi_1^\prime$ is nearly equivalent to $\pi_2^\prime$ or $\pi_2^\prime \otimes \chi_E$. Thus replacing $\pi_2^\prime$ by $\pi_2^\prime\otimes\chi_E$ if necessary, we may assume that $\pi_1^\prime$ and $\pi_2^\prime$ are nearly equivalent to each other.
Then since $\pi_1^\prime$ and $\pi_2^\prime$ are generic and they have the same $L$-parameter because of the temperedness of $\pi_i^\prime$, we have $\pi_1^\prime \simeq \pi_2^\prime$ by the uniqueness of the generic member in the $L$-packet by Atobe~\cite{At} or Varma~\cite{Var} at finite places and by Vogan~\cite{Vo} at archimedean places. Hence in particular, $\pi_{1,+}^\prime\simeq\pi_{2,+}^\prime$. From the definition of $\pi_{+,i}^\prime$, we get $\pi_{1,+}^B \simeq \pi_{2,+}^B$. Then, we see that $\pi_1 \simeq \pi_2 \otimes \omega$ for some character $\omega$ of $G_{D^\prime}(\mA)$ such that $\omega_v$ is trivial or $\chi_{E,v}$ at each place $v$ of $F$. Since $\pi_1$ and $\pi_2$ have the same $L$-parameter, $\pi_{1,v}$ and $\pi_{1,v}\otimes \omega_v$ are in the same $L$-packet for every place $v$ of $F$. Let us take a place $v$ of $F$, and write the $L$-parameter of $\pi_{1, v}$ as $\phi_v : WD_{F_v} \rightarrow G^1(\mC)$. If $\phi_v$ is an irreducible four dimensional representation, the $L$-packet of $\phi_v$ is singleton, and thus $\pi_{1, v} \simeq \pi_{2,v}$. So let us suppose that $\phi_{v} = \phi_1 \oplus \phi_2$ with two dimensional irreducible representations $\phi_i$. Further, we may suppose that $\omega_v = \chi_{E,v}$ since there is nothing to prove when $\omega_v$ is trivial. This implies that $\phi_v \otimes \chi_{E ,v}\simeq \phi_v$. Then, by \cite[Proposition~3.1]{PR}, we have $\phi_i=\pi(\chi_i)$ for some character $\chi_i$ of $E_v^\times$ for $i=1,2$. Moreover, any member of the $L$-packet of $\pi_1$ is given by the theta lift from an irreducible representation $\mathrm{JL}(\pi(\chi_1)) \boxtimes \pi(\chi_2)$ of $D^\prime(F_v)^\times \times \mathrm{GL}_2(F_v)$ where $\mathrm{JL}$ denotes the Jacquet-Langlands transfer. Since the theta lift preserves the character twist, we see that \[ \theta(\mathrm{JL}(\pi(\chi_i)) \boxtimes \pi(\chi_j)) \otimes \chi_{E,v} \simeq \theta(\mathrm{JL}(\pi(\chi_i)) \boxtimes \pi(\chi_j)) \] by $\pi(\chi_i) \otimes \chi_{E, v} \simeq \pi(\chi_i)$. This shows that in this case, any element in the $L$-packet is invariant under the twist by $\chi_{E,v}$. Thus $\pi_{1, v} \otimes \chi_{E, v} \simeq \pi_{2,v}$ and hence $\pi_{1,v} \simeq \pi_{2,v}$. \qed \begin{Remark} As we remarked in the end of Section~\ref{s:Gross-Prasad conjecture}, the uniqueness of $(D^\prime, \pi^\prime)$ follows from the local Gan-Gross-Prasad conjecture for $(\mathrm{SO}(5), \mathrm{SO}(2))$, which is proved by Luo~\cite{Luo} at archimedean places and by Prasad--Takloo-Bighash~\cite{PT} (see also Waldspurger~\cite{Wal12} in general case) at finite places. Our proof gives another proof of the uniqueness. \end{Remark} \begin{Remark} \label{yam typo} There is a typo in the statement of \cite[Lemma~10.2]{Yam}. The first condition stated there should be the holomorphy at $s=-s_m+\frac{1}{2}$. \end{Remark} \section{Rallis inner product formula for similitude groups} In this section, as a preliminary for the proof of Theorem~\ref{ref ggp}, we recall Rallis inner product formulas for similitude dual pairs. \subsection{For the theta lift from $G$ to $\mathrm{GSO}_{4,2}$} \label{s:RI H gso} In this section, we shall recall the Rallis inner product formula for the theta lift from $G$ to $\mathrm{GSO}_{4,2}$. It is derived from the isometry case in a manner similar to the one in Gan-Ichino~\cite[Section~6]{GI0}, where the case of the theta lift from $\mathrm{GL}_2$ to $\mathrm{GSO}_{3,1}$ is treated. 
Let $(\pi, V_\pi)$ be an irreducible cuspidal automorphic representation of $G(\mA)$ with a trivial central character. Let us define a subgroup $\mathcal G$ of $G\left(\mA\right)$ by \begin{equation} \label{cal G def} \mathcal{G} := Z_G(\mA)G(\mA)^+ G(F) \end{equation} and in this section we assume that: \begin{equation}\label{assumption 1} \text{\emph{the restriction of $\pi$ to $\mathcal{G}$ is irreducible, i.e. $\pi \otimes \chi_E \not \simeq \pi$}} \end{equation} for our later use. Let us recall the notation in \ref{sp4 so42}. Thus $X$ denotes the four dimensional symplectic space on which $G$ acts on the right and $Y$ denotes the six dimensional orthogonal space on which $\mathrm{GSO}_{4,2}$ acts on the left. Then $Z=X\otimes Y$ is a symplectic space over $F$. Here we take $X_\pm\otimes Y$ as the polarization and we realize the Weil representation $\omega_\psi$ of $\mathrm{Mp}\left(Z\right)\left(\mA\right)$ on $V_\omega:=\mathcal{S}((X_+ \otimes Y)(\mA))$. Put $X^\Box = X \oplus (-X)$. Then $X^\Box$ is naturally a symplectic space. Let $\widetilde{G}:=\mathrm{GSp}\left(X^\Box\right)$ and we denote by $\mathbf{G}$ a subgroup of $G\times G$ given by \[ \mathbf{G}: = \left\{ (g_1, g_2) \in G \times G : \lambda(g_1) = \lambda(g_2) \right\}, \] which has a natural embedding $\iota:\mathbf {G}\to\widetilde{G}$. We define the canonical pairing $\mathcal{B}_\omega : V_\omega \otimes V_\omega \rightarrow \mC$ by \[ \mathcal{B}_\omega(\varphi_1, \varphi_2) := \int_{(X_+ \otimes Y)(\mA)} \varphi_1(x) \,\overline{\varphi_2(x)} \, dx \quad\text{for $\varphi_1, \varphi_2 \in V_\omega$} \] where $dx$ denotes the Tamagawa measure on $(X_+ \otimes Y)(\mA)$. Let $\widetilde{Z} = X^\Box \otimes Y$ and we take a polarization $\widetilde{Z}=\widetilde{Z}_+\oplus \widetilde{Z}_-$ with \[ \widetilde{Z}_\pm:=\left(X_\pm\oplus \left(-X_\pm\right)\right)\otimes Y \] where the double sign corresponds. Let us denote by $\widetilde{\omega}_\psi$ the Weil representation of $\mathrm{Mp}(\widetilde{Z}(\mA))$ on $\mathcal{S}( \widetilde{Z}^+(\mA))$. On the other hand, let \[ X^\nabla:=\left\{(x,-x) : x \in X \right\} \quad\text{and}\quad \widetilde{X}^\nabla := X^\nabla \otimes Y . \] Then we have a natural isomorphism \[ V_\omega \otimes V_\omega\simeq \mathcal{S}( \widetilde{X}^\nabla(\mA)) \] by which we regard $\mathcal{S}( \widetilde{X}^\nabla(\mA))$ as a representation of $\mathrm{Mp}(Z)(\mA) \times \mathrm{Mp}(Z)(\mA)$. Meanwhile we may realize $\widetilde{\omega}_\psi$ on $\mathcal{S}(\widetilde{X}^\nabla(\mA) )$ and indeed we have an isomorphism \[ \delta : \mathcal{S}(\widetilde{Z}_+(\mA)) \rightarrow \mathcal{S}( \widetilde{X}^\nabla (\mA)) \] as representations of $\mathrm{Mp}(\widetilde{Z})(\mA)$ such that \[ \delta(\varphi_1 \otimes \overline{\varphi}_2)(0) = \mathcal{B}_\omega(\varphi_1, \varphi_2) \quad\text{for $\varphi_1, \varphi_2 \in V_\omega$.} \] Let us define Petersson inner products on $G(\mA)$ and $G(\mA)^+$ as follows. For $f_1, f_2 \in V_\pi$, we define the Petersson inner product $\left(\, ,\,\right)_\pi$ on $G(\mA)$ by \[ (f_1, f_2)_\pi := \int_{\mA^\times G(F) \backslash G(\mA)} f_1(g) \,\overline{f_2(g)} \,dg \] where $dg$ denotes the Tamagawa measure. Then regarding $f_1, f_2$ as automorphic forms on $G(\mA)^+$, we define \[ (f_1, f_2)_{\pi}^+ := \int_{\mA^\times G(F)^+ \backslash G(\mA)^+} f_1(h) \,\overline{f_2(h)} \,dh \] where the measure $dh$ is normalized so that \[ \mathrm{vol}(\mA^\times G(F)^+ \backslash G(\mA)^+)=1. 
\]
Then from our assumption \eqref{assumption 1} on $\pi$, as in \cite[Lemma~6.3]{GI0}, we see that
\[
\left(f_1, f_2\right)_{\pi}^+= \frac{1}{2}\,\left(f_1, f_2\right)_{\pi}
\]
since $\mathrm{Vol}(\mA^\times G(F) \backslash G(\mA)) =2$. For each place $v$ of $F$, we take a hermitian $G(F_v)$-invariant local pairing $\left(\, ,\,\right)_{\pi_v}$ of $\pi_v$ so that
\begin{equation}
\label{e:inner decomp H}
\left(f_1,f_2\right)_\pi = \prod_v \left(f_{1,v},f_{2,v}\right)_{\pi_v}
\quad\text{for $f_i=\otimes_v \,f_{i,v}\in V_\pi\, \left(i=1,2\right)$}.
\end{equation}
We also choose a local Haar measure $dg_v$ on $G(F_v)$ for each place $v$ of $F$ so that $\mathrm{Vol}(K_{G, v}, dg_v)=1$ at almost all $v$, where $K_{G, v}$ is a maximal compact subgroup of $G(F_v)$. We define a positive constant $C_{G}$ by
\[
dg = C_{G} \cdot \prod_v dg_v.
\]
Local doubling zeta integrals are defined as follows. Let $I(s)$ denote the degenerate principal series representation of $\widetilde{G}(\mA)$ defined by
\[
I(s) := \mathrm{Ind}_{\widetilde{P}(\mA)}^{\widetilde{G}(\mA)} \left(\chi_E \,\delta_{\widetilde{P}}^{\,s\slash 9}\right)
\]
where $\widetilde{P}$ denotes the Siegel parabolic subgroup of $\widetilde{G}$. Then for each place $v$, we define a local zeta integral by
\[
Z_v(s, \Phi_v, f_{1, v}, f_{2,v}) := \int_{G^1(F_v)} \Phi_v(\iota(g_v, 1), s) \left( \pi_v(g_v)f_{1,v}, f_{2,v} \right)_{\pi_v} \, dg_v
\]
for $\Phi_v\in I_v\left(s\right)$ and $f_{1,v},f_{2,v}\in V_{\pi_v}$, where $G^1=\left\{g\in G:\lambda\left(g\right)=1\right\}$. The integral converges absolutely at $s=\frac{1}{2}$ when $\Phi_v \in I_v(s)$ is a holomorphic section by \cite[Proposition~6.4]{PSR} (see also \cite[Lemma~6.5]{GI0}). Moreover, when we define a map $\mathcal{S}( \widetilde{X}^\nabla(\mA)) \ni\varphi\mapsto \left[\varphi\right]\in I \left(\frac{1}{2} \right)$ by
\[
[\varphi] \left(g, \frac{1}{2} \right) := |\lambda(g)|^{-4} \left(\widetilde{\omega}_\psi\left(\begin{pmatrix}1_4&\\&\lambda\left(g\right)^{-1}1_4\end{pmatrix}g\right)\varphi\right)(0),
\]
we may naturally extend $\left[\varphi\right]$ to a holomorphic section in $I\left(s\right)$. By an argument similar to the one in the proof of \cite[Proposition~6.10]{GI0}, we may derive the following Rallis inner product formula in the similitude groups case from the one in \cite[Theorem~8.1]{GQT} in the isometry groups case.
\begin{proposition}
\label{RIPF H}
Keep the above notation. Then for decomposable vectors $f = \otimes f_v \in V_\pi$ and $\phi = \otimes \phi_v\in V_\omega$, we have
\begin{multline*}
\frac{\langle \Theta(f; \phi), \Theta(f; \phi) \rangle}{\left( f, f \right)_\pi}
=C_G \cdot \frac{1}{2} \cdot \frac{L\left(1, \pi, \mathrm{std} \otimes \chi_E \right)}{L(3, \chi_E) L(2, \mathbf{1}) L(4, \mathbf{1}) } \\
\times \prod_{v} \, Z_v^\sharp \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right).
\end{multline*}
Here we recall that $\Theta_\psi(f; \phi)$ is the theta lift of $f$ to $\mathrm{GO}_{4,2}$, $\langle\,\, , \,\,\rangle$ denotes the Petersson inner product with respect to the Tamagawa measure and we define
\begin{multline*}
Z_v^\sharp \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right) :=
\frac{1}{\left(f_v, f_v \right)_{\pi_v}}
\frac{L(3, \chi_{E_v \slash F_v}) L(2, \mathbf{1}_v) L(4, \mathbf{1}_v) }{L\left(1, \pi_v, \mathrm{std} \otimes \chi_{E_v \slash F_v} \right)} \\
\times Z_v \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right),
\end{multline*}
which is equal to $1$ at almost all places $v$ of $F$ by \cite{PSR}.
\end{proposition} Recall that $ \theta(f; \phi)$ denotes the restriction of $ \Theta_\psi(f; \phi)$ to $\mathrm{GSO}_{4,2}(\mA)$, namely the theta lift of $f$ to $\mathrm{GSO}_{4,2}$. Then as in \cite[Lemma~2.1]{GI0}, we see that \[ 2\langle \Theta(f; \phi), \Theta(f; \phi) \rangle = \langle \theta(f; \phi), \theta(f; \phi) \rangle \] where the right hand side denotes the Petersson inner product on $\mathrm{GSO}_{4,2}$ with respect to the Tamagawa measure. Hence, Proposition~\ref{RIPF H} yields \begin{multline} \label{RIPF H GSO} \frac{\langle \theta(f; \phi), \theta(f; \phi) \rangle}{\left(f, f \right)_{\pi}}=C_G \cdot \frac{L\left(1, \pi, \mathrm{std} \otimes \chi_E \right)}{L(3, \chi_E) L(2, \mathbf{1}) L(4, \mathbf{1}) } \\ \times \prod_{v}\, Z_v^\sharp \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right). \end{multline} \subsection{Theta lift from $G_D$ to $\mathrm{GSU}_{3,D}$} \label{s:RI HD gsu} In this subsection, we shall consider the Rallis inner product formula for the theta lift from $G_D$ to $\mathrm{GSU}_{3,D}$ as in the previous section. We recall that the formula in the case of isometry groups is proved by Yamana~\cite[Lemma~10.1]{Yam} where our case corresponds to ($\rm{I}_3$) with $m=3, n=2$. Let $(\pi, V_\pi)$ be an irreducible cuspidal automorphic representation of $G_D(\mA)$ with a trivial central character. Recall that $\mathcal G_D$ denotes the subgroup of $G_D\left(\mA\right)$ given by \eqref{d: mathcal G_D}. In this section, assume that: \begin{equation}\label{assumption 1_D} \text{\emph{the restriction of $\pi$ to $\mathcal{G}_D$ is irreducible}} \end{equation} for our later use. Let us recall the notation in \ref{ss: second pull-back}. Thus $X_D$ denotes the hermitian space of degree two over $D$ on which $G_D$ acts on the right and $Y_D$ denotes the skew-hermitian space of degree three over $D$ on which $\mathrm{GSU}_{3,D}$ acts on the left. Then $Z_D= X_D \otimes_D Y_D$ is a symplectic space over $F$ by \eqref{d: quaternion dual pair}. Here we take $X_{D,\pm}\otimes_D Y_D$ as the polarization and we realize the Weil representation $\omega_\psi$ of $\mathrm{Mp}\left(Z_D\right)\left(\mA\right)$ on $V_{\omega,D}:=\mathcal S\left(\left(X_{D,+}\otimes_D Y_D\right)\left(\mA\right)\right)$. Put $X_D^\Box = X_D \oplus \overline{X_D}$. Then $X_D^\Box$ is naturally a hermitian space over $D$. Let $\widetilde{G}_D:=\mathrm{GU}\left(X_D^\Box\right)$ and we denote by $\mathbf{G}_D$ a subgroup of $G_D\times G_D$ given by \[ \mathbf{G}_D := \left\{ (g_1, g_2) \in G_D \times G_D : \lambda(g_1) = \lambda(g_2) \right\} \] which has a natural embedding $\iota:\mathbf{G}_D\to\widetilde{G}_D$. We define the canonical pairing $\mathcal{B}_\omega : V_{\omega,D} \otimes V_{\omega,D}\rightarrow \mC$ by \[ \mathcal{B}_\omega(\varphi_1, \varphi_2) := \int_{(X_{D,+} \otimes Y_D)(\mA)} \varphi_1(x) \,\overline{\varphi_2(x)} \, dx \quad\text{for $\varphi_1, \varphi_2 \in V_{\omega,D}$} \] where $dx$ denotes the Tamagawa measure on $(X_{D,+} \otimes Y_D)(\mA)$. Let $\widetilde{Z}_D = X_D^\Box \otimes Y_D$ and we take a polarization $\widetilde{Z}_D=\widetilde{Z}_{D,+}\oplus \widetilde{Z}_{D,-}$ with \[ \widetilde{Z}_{D,\pm}= \left( X_{D,\pm}\oplus \overline{-X_{D,\pm}}\,\right)\otimes Y_D \] where the double sign corresponds. Let us denote by $\widetilde{\omega}_\psi$ the Weil representation of $\mathrm{Mp}(\widetilde{Z}_D)(\mA)$ on $\mathcal{S}( \widetilde{Z}_{D,+}(\mA))$. 
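For the reader's convenience, we record the standard explicit description of the embedding $\iota$ (a sketch under the usual doubling-method convention, in which the second summand $\overline{X_D}$ of $X_D^\Box$ is understood to be $X_D$ equipped with the negative of its hermitian form): for $(g_1, g_2) \in \mathbf{G}_D$,
\[
(x_1, x_2)\,\iota(g_1, g_2) := (x_1 g_1,\, x_2 g_2)
\quad\text{for $(x_1, x_2) \in X_D \oplus \overline{X_D}$}.
\]
Since $\lambda(g_1)=\lambda(g_2)$, this transformation scales the doubled hermitian form by the common similitude factor, so that $\iota(g_1, g_2)$ indeed lies in $\widetilde{G}_D=\mathrm{GU}\left(X_D^\Box\right)$ with $\lambda(\iota(g_1, g_2))=\lambda(g_1)$. The embedding $\iota:\mathbf{G}\to\widetilde{G}$ of \ref{s:RI H gso} admits the same description, with $X_D$ replaced by the symplectic space $X$.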
On the other hand, let \[ X_D^\nabla:=\left\{(x,\overline{x}) : x \in X_D \right\} \quad\text{and}\quad \widetilde{X}_D^\nabla := X_D^\nabla \otimes Y_D . \] Then we have a natural isomorphism \[ V_{\omega,D} \otimes V_{\omega,D}\simeq \mathcal{S}( \widetilde{X}_D^\nabla(\mA)) \] by which we regard $\mathcal{S}( \widetilde{X}_D^\nabla(\mA))$ as a representation of $\mathrm{Mp}(Z_D)(\mA) \times \mathrm{Mp}(Z_D)(\mA)$. Meanwhile we may realize $\widetilde{\omega}_\psi$ on $\mathcal{S}(\widetilde{X}_D^\nabla(\mA) )$ and indeed we have an isomorphism \[ \delta : \mathcal{S}(\widetilde{Z}_{D,+}(\mA)) \rightarrow \mathcal{S}( \widetilde{X}_D^\nabla (\mA)) \] as representations of $\mathrm{Mp}(\widetilde{Z}_D)(\mA)$ such that \[ \delta(\varphi_1 \otimes \overline{\varphi}_2)(0) = \mathcal{B}_\omega(\varphi_1, \varphi_2) \quad\text{for $\varphi_1, \varphi_2 \in V_{\omega,D}$.} \] Let us define Petersson inner products on $G_D(\mA)$ and $G_D(\mA)^+$ as follows. For $f_1, f_2 \in V_{\pi_D}$, we define the Petersson inner product $\left(\, ,\,\right)_{\pi_D}$ on $G_D(\mA)$ by \[ (f_1, f_2)_{\pi_D}: = \int_{\mA^\times G_D(F) \backslash G_D(\mA)} f_1(g) \,\overline{f_2(g)} \,dg \] where $dg$ denotes the Tamagawa measure. Then regarding $f_1, f_2$ as automorphic forms on $G_D(\mA)^+$, we define \[ (f_1, f_2)_{\pi_D}^+ := \int_{\mA^\times G_D(F)^+ \backslash G_D(\mA)^+} f_1(h) \,\overline{f_2(h)} \,dh \] where the measure $dh$ is normalized so that \[ \mathrm{vol}(\mA^\times G_D(F)^+ \backslash G_D(\mA)^+)=1. \] Then from our assumption \eqref{assumption 1_D} on $\pi_D$, as in \cite[Lemma~6.3]{GI0}, we see that \[ \left(f_1, f_2\right)_{\pi_D}^+= \frac{1}{2}\,\left(f_1, f_2\right)_{\pi_D} \] since $\mathrm{Vol}(\mA^\times G_D(F) \backslash G_D(\mA)) =2$. For each place $v$ of $F$, we take a hermitian $G_D(F_v)$-invariant local pairing $\left(\, ,\,\right)_{\pi_{D,v}}$ of $\pi_{D,v}$ so that \begin{equation} \label{e:inner decomp H_D} \left(f_1,f_2\right)_{\pi_D} = \prod_v \left(f_{1,v},f_{2,v}\right)_{\pi_{D,v}} \quad\text{for $f_i=\otimes_v \,f_{i,v}\in V_{\pi_D}\, \left(i=1,2\right)$}. \end{equation} As in the previous section, we choose local Haar measures $dg_v$ on $G_D(F_v)$ at each place $v$ of $F$ and we have \[ dg = C_{G_D} \cdot \prod_v dg_v \] for some positive constant $C_{G_D}$. Local doubling zeta integrals are defined as follows. Let $I_D(s)$ denote the degenerate principal series representation of $\widetilde{G}_D(\mA)$ defined by \[ I_D(s) := \mathrm{Ind}_{\widetilde{P}_D(\mA)}^{\widetilde{G}_D(\mA)}\left(\chi_E \,\delta_{\widetilde{P}_D}^{\,s \slash 9}\right) \] where $\widetilde{P}_D$ denotes the Siegel parabolic subgroup of $\widetilde{G}_D$. Then for each place $v$, we define a local zeta integral for $\Phi_v\in I_{D,v}\left(s\right)$, $f_{1,v},f_{2,v}\in V_{\pi_{D,v}}$ by \[ Z_v(s, \Phi_v, f_{1, v}, f_{2,v}) := \int_{G^1_{D}(F_v)} \Phi_v(\iota(g_v, 1), s) \left( \pi_{D,v}(g_v)f_{1,v}, f_{2,v} \right)_{\pi_v} \, dg_v \] where $G_D^1=\left\{g\in G_D:\lambda\left(g\right)=1\right\}$. The integral converges absolutely at $s=\frac{1}{2}$ when $\Phi_v \in I_{D,v}(s)$ is a holomorphic section by \cite[Proposition~6.4]{PSR} (see also \cite[Lemma~6.5]{GI0}). 
Moreover, when we define a map $\mathcal{S}( \widetilde{X}_D^\nabla(\mA)) \ni\varphi\mapsto \left[\varphi\right]\in I_D \left(\frac{1}{2} \right)$ by
\[
[\varphi] \left(g, \frac{1}{2} \right) := |\lambda(g)|^{-4} \left(\widetilde{\omega}_\psi\left(\begin{pmatrix}1_4&\\&\lambda\left(g\right)^{-1}1_4\end{pmatrix}g\right)\varphi\right)(0),
\]
we may naturally extend $\left[\varphi\right]$ to a holomorphic section in $I_D\left(s\right)$. By an argument similar to the one in the proof of \cite[Proposition~6.10]{GI0}, we may derive the following Rallis inner product formula in the similitude groups case from the one in \cite[Theorem~2]{Yam2} in the isometry groups case.
\begin{proposition}
\label{RIPF HD}
Keep the above notation. Then for decomposable vectors $f = \otimes f_v \in V_{\pi_D}$ and $\phi = \otimes \phi_v\in V_{\omega,D}$, we have
\[
\frac{\langle \theta(f; \phi), \theta(f; \phi) \rangle}{\left( f, f \right)_{\pi_D}}
=
\frac{L\left(1, \pi, \mathrm{std} \otimes \chi_E \right)}{L(3, \chi_E) L(2, \mathbf{1}) L(4, \mathbf{1}) }\,
\prod_{v} \,Z_v^\sharp \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right).
\]
Here recall that $\theta_\psi(f; \phi)$ is the theta lift of $f$ to $\mathrm{GSU}_{3,D}$, $\langle\,\, ,\,\,\rangle$ denotes the Petersson inner product with respect to the Tamagawa measure and we define
\begin{multline*}
Z_v^\sharp \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right) :=
\frac{1}{\left( f_v, f_v \right)_{\pi_{D, v}}}
\frac{L(3, \chi_{E_v \slash F_v}) L(2, \mathbf{1}_v) L(4, \mathbf{1}_v) }{L\left(1, \pi_v, \mathrm{std} \otimes \chi_{E_v \slash F_v} \right)} \\
\times Z_v \left(\frac{1}{2}, [\delta(\phi_v \otimes \phi_v)], f_v, f_v \right),
\end{multline*}
which is equal to $1$ at almost all places $v$ of $F$ by \cite{PSR}.
\end{proposition}
\section{Explicit formula for Bessel periods on $\mathrm{GU}(4)$}
Let $\mathrm{GU}\left(4\right)$ stand for either $\mathrm{GU}_{2,2}$ or $\mathrm{GU}_{3,1}$. In \cite{FM3}, the explicit formula for the Bessel periods on $\mathrm{GU}\left(4\right)$ is proved under the assumption that the explicit formula for the Whittaker periods on $\mathrm{GU}_{2,2}$ holds. In this section we shall show that this assumption is indeed satisfied in the cases we need, from the explicit formula for the Whittaker periods on $G=\mathrm{GSp}_2$, which in turn will be proved in Appendix~\ref{appendix A}. Thus the explicit formula for the Bessel periods on $\mathrm{GU}(4)$ holds by \cite{FM3} in the cases which we need for the proof of Theorem~\ref{ref ggp}.
\subsection{Explicit formulas}
\label{s:Explicit formulas whittaker}
Let $(\pi, V_{\pi})$ be an irreducible cuspidal tempered globally generic automorphic representation of $G(\mA)$ such that $\pi |_{\mathcal{G}}$ is irreducible. We recall that the subgroup $\mathcal G$ of $G\left(\mA\right)$ is defined by \eqref{cal G def}. Let $\pi^\circ$ denote the unique generic irreducible constituent of $\pi |_{G(\mA)^+}$. Let $\left(\Sigma, V_\Sigma\right)$ denote the theta lift of $\pi^\circ$ to $\mathrm{GSO}_{4,2}(\mA)$. Then as in \cite[Proposition~3.3]{Mo}, we know that $\Sigma$ is an irreducible globally generic cuspidal tempered automorphic representation. Here we prove the explicit formula for the Whittaker periods for $\Sigma$ assuming the explicit formula for the Whittaker periods for $\pi$. Let us recall some notation.
Let $X, Y, Y_0$ and $Z$ be as in Section~\ref{sp4 so42} and we use a polarization $Z = Z_+ \oplus Z_-$ with
\[
Z_{\pm} = (X \otimes Y_{\pm}) \oplus (X_{\pm} \otimes Y_0)
\]
where the double sign corresponds. We write $z_+=(a_1, a_2; b_1, b_2)$ when
\[
z_+=a_1 \otimes y_1 + a_2 \otimes y_2 + b_1 \otimes e_1+b_2\otimes e_2\in Z_+
\quad\text{with $a_i \in X$, $b_i \in X_+$}.
\]
Recall that the unipotent subgroups $N_0$, $N_1$ and $N_2$ of $\mathrm{GSO}_{4,2}$ are defined by \eqref{d: N_0}, \eqref{d: N_1} and \eqref{d: N_2}, respectively. Let us define a unipotent subgroup $\tilde{U}$ of $\mathrm{GSO}_{4,2}$ by
\begin{equation}\label{e: def of u_tilde}
\tilde{U} := \left\{
\tilde{u}(b) :=
\begin{pmatrix}
1&-{}^{t}\widetilde{X} S_1& 0\\
0&1_4&\widetilde{X}\\
0&0&1
\end{pmatrix}
:
\widetilde{X} = \begin{pmatrix} 0\\0\\0\\-b\end{pmatrix}
\right\}
\end{equation}
where $S_1$ is given by \eqref{d: symmetric}. Let
\begin{equation} \label{def:U}
U:=N_{4,2}\,\tilde{U}.
\end{equation}
Then $U$ is a maximal unipotent subgroup of $\mathrm{GSO}_{4,2}$ and we have
\[
N_0 \triangleleft N_0 N_1 \triangleleft N_0 N_1 N_2= N_{4,2} \triangleleft N_{4,2}\, \tilde{U} = U.
\]
Then we define a non-degenerate character $\psi_U$ of $U(\mA)$ by
\begin{equation} \label{def:psi_U}
\psi_U(u_0(x)u_1(s_1, t_1)u_2(s_2, t_2)\tilde{u}(b)) := \psi(2dt_2+b).
\end{equation}
By \cite[Proposition~3.3]{Mo}, $\Sigma$ is $\psi_U$-generic. Namely
\[
\label{W U}
W^{\psi_U}(\varphi) := \int_{U(F) \backslash U(\mA)} \varphi(u)\, \psi_U^{-1}(u) \,du
\quad\text{for $ \varphi \in V_\Sigma$}
\]
is not identically zero on $V_\Sigma$. Now we regard $\Sigma$ as an automorphic representation of $\mathrm{GU}_{2,2}$ by the accidental isomorphism \eqref{acc isom2} and let $\Pi_{\Sigma} =\Pi_{1}^\prime \boxplus \cdots \boxplus \Pi_\ell^\prime$ denote the base change lift of $\Sigma\mid_{\mathrm{U}_{2,2}}$ to $\mathrm{GL}_4(\mA_E)$ where $\Pi_i^\prime$ is an irreducible cuspidal automorphic representation of $\mathrm{GL}_{m_i}(\mA_E)$. Here the existence of $\Pi_\Sigma$ follows from \cite{KMSW}. Recall that in Section~\ref{s:RI H gso}, the Petersson inner products on $G\left(\mA\right)$ and $\mathrm{GSO}_{4,2}(\mA)$ with respect to the Tamagawa measures, denoted respectively by $\left(\,\, ,\,\,\right)$ and $\left<\,\, ,\,\,\right>$, are introduced. Moreover at each place $v$ of $F$, we choose and fix a $G(F_v)$-invariant hermitian inner product $\left(\,\, ,\,\,\right)_v$ on $V_{\pi^\circ_v}$ so that the decomposition formula \eqref{e:inner decomp H} holds. Similarly at each place $v$, we choose and fix a $\mathrm{GSO}_{4,2}(F_v)$-invariant hermitian inner product $\langle\,\,,\,\,\rangle_v$ on $V_{\Sigma_v}$ so that the decomposition formula
\begin{equation}\label{e:inner decomp GSO_4,2}
\langle \phi_1,\phi_2\rangle= \prod_v\,\langle \phi_{1,v},\phi_{2,v}\rangle_v
\quad \text{for $\phi_i=\otimes_v\,\phi_{i,v}\in V_{\Sigma}\quad(i=1,2)$}
\end{equation}
holds. Then as in Section~\ref{s:def local bessel}, at each place $v$ of $F$, we may define a local period $\mathcal{W}_v(\varphi_v)$ for $\varphi_v \in V_{\Sigma_v}$ by the stable integral
\begin{equation} \label{e: local integral whittaker}
\mathcal{W}_v(\varphi_v):=
\int_{U(F_v)}^{\mathrm{st}} \frac{\langle \Sigma_v\left(n_v\right)\varphi_v, \varphi_v \rangle_v}{{\langle \varphi_v, \varphi_v \rangle_v}}
\cdot \psi_U^{-1}\left(n_v\right)\, dn_v
\end{equation}
when $v$ is finite. When $v$ is archimedean, we use the Fourier transform to define $\mathcal{W}_v(\varphi_v)$.
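Here and in what follows, $\int^{\mathrm{st}}$ denotes the stable integral. As a reminder (a sketch of the standard convention, cf.\ Lapid and Mao~\cite{LM} and Liu~\cite{Liu2}, which we assume is the one in force here), for a finite place $v$ and a smooth function $W$ on $U(F_v)$ one sets
\[
\int_{U(F_v)}^{\mathrm{st}} W(n)\, dn := \int_{\mathcal{U}} W(n)\, dn
\]
for any sufficiently large compact open subgroup $\mathcal{U} \subset U(F_v)$, provided that the right-hand side is independent of $\mathcal{U}$ once $\mathcal{U}$ contains some fixed compact open subgroup; in particular, it coincides with the usual integral whenever the integrand is compactly supported.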
See \cite[Proposition~3.5, Proposition~3.15]{Liu2} for the details. We shall prove the following theorem, namely the explicit formula for the Whittaker periods on $V_\Sigma$, in \ref{s: pf wh gso}.
\begin{theorem}
\label{gso whittaker}
For a non-zero decomposable vector $\varphi = \otimes \varphi_v \in V_\Sigma$, we have
\begin{equation} \label{e:gso whittaker}
\frac{|W^{\psi_U}(\varphi)|^2}{ \langle \varphi, \varphi \rangle}
=
\frac{1}{2^\ell}\cdot \frac{\prod_{j=1}^4 L \left(j, \chi_{E}^j \right)}{L \left(1, \Pi_{\Sigma}, \mathrm{As}^+ \right)}\cdot \prod_v \mathcal{W}_v^\circ(\varphi_v)
\end{equation}
where
\[
\mathcal{W}_v^\circ(\varphi_v):=
\frac{L\left(1,\Pi_{\Sigma_v}, \mathrm{As}^+\right)}{ \prod_{j=1}^4 L \left(j, \chi_{E_v}^j \right)}
\cdot \mathcal{W}_v(\varphi_v).
\]
Here we note that $\mathcal{W}_v^\circ(\varphi_v) = 1$ at almost all places $v$ by Lapid and Mao~\cite{LM}.
\end{theorem}
Before proceeding to the proof of Theorem~\ref{gso whittaker}, assuming it, we prove the following theorem, namely the explicit formula for the Bessel periods on $\mathrm{GU}\left(4\right)$.
\begin{theorem}
\label{gsu Bessel}
Let $(\pi, V_\pi)$ be an irreducible cuspidal tempered automorphic representation of $G_D(\mA)$ with trivial central character. Suppose that $\pi$ has the $\left(\xi,\Lambda,\psi\right)$-Bessel period and that $\pi$ is neither of type I-A nor of type I-B. Let $\pi_+^B$ denote the unique irreducible constituent of $\pi|_{G_D(\mA)^+}$ which has the $\left(\xi,\Lambda,\psi\right)$-Bessel period. We denote by $\left(\sigma, V_\sigma\right)$ the theta lift of $\pi_+^B$ to $\mathrm{GSU}_{3,D}$, which is an irreducible cuspidal automorphic representation by Lemma~\ref{nonzero lemma (1)} and Lemma~\ref{irr lemma}. Then for a non-zero decomposable vector $\varphi = \otimes \varphi_v \in V_\sigma$, we have
\[
\frac{|\mathcal B_{X, \psi, \Lambda}(\varphi)|^2}{(\varphi, \varphi)}
=\frac{1}{2^\ell}
\left( \prod_{j=1}^4 L(1, \chi_E^j) \right)
\frac{ L \left(\frac{1}{2}, \sigma \times \Lambda^{-1} \right)}{L(1, \pi, \mathrm{std} \otimes \chi_E) L(1, \chi_E)}
\prod_v \alpha_{\Lambda_v, \psi_{X, v}}^{\natural}(\varphi_v)
\]
where
\[
\alpha_{\Lambda_v, \psi_{X, v}}^{\natural}(\varphi_v)
=
\left( \prod_{j=1}^4 L(1, \chi_{E, v}^j) \right)^{-1}
\frac{L(1, \pi_v, \mathrm{std} \otimes \chi_{E,v}) L(1, \chi_{E_v})}{ L \left(\frac{1}{2}, \sigma_v \times \Lambda_v^{-1} \right)}
\cdot
\frac{\alpha_{\Lambda_v, \psi_{X, v}}(\varphi_v)}{(\varphi_v, \varphi_v)_v}
\]
and $X\in D^\times$ is taken so that $\xi=S_X$ in \eqref{def of S_X}.
\end{theorem}
\begin{proof}
Let us regard $\sigma$ as an automorphic representation of $\mathrm{GU}(4)$ with trivial central character via the accidental isomorphism $\Phi$ \eqref{acc isom2} or $\Phi_D$ \eqref{acc isom1}, depending on whether $D$ is split or not. Let $\theta(\sigma) = \Theta_{\psi, (\Lambda^{-1},\Lambda^{-1})}(\sigma)$ denote the theta lift of $\sigma$ to $\mathrm{GU}_{2,2}$ with respect to $\psi$ and $(\Lambda^{-1}, \Lambda^{-1})$. By \cite[Proposition~3.1]{FM3}, $\theta(\sigma)$ is globally generic and, in particular, non-zero. By the same argument as in the proof of \cite[Theorem~1]{FM3}, we see that $\theta(\sigma)$ is cuspidal and hence irreducible by Remark~\ref{theta irr rem} and Remark~\ref{theta irr rem D}.
Moreover by the unramified computations in \cite{Ku2} and \cite[(3.6)]{Mo}, we see that $L^S(s, \Sigma, \wedge_t^2)$ has a pole at $s=1$ when $S$ is a sufficiently large finite set of places of $F$ containing all archimedean places, where $L^S(s, \Sigma, \wedge_t^2)$ denotes the twisted exterior square $L$-function of $\Sigma$ (see \cite[Section~2.1.1]{FM0} for the definition). Since $\theta(\sigma)$ is generic, \cite[Theorem~4.1]{FM0} implies that it has the unitary Shalika period defined in \cite[(2.5)]{FM0}. Then, by \cite[Theorem~B]{Mo}, the theta lift of $\theta(\sigma)$ to $G(\mA)^+$, which we denote by $(\pi_{+}^\prime, V_{\pi_{+}^\prime})$, is an irreducible cuspidal globally generic automorphic representation of $G\left(\mA\right)^+$. We note that $\pi_+^B$ is nearly equivalent to $\pi_+^\prime$. Let us take an irreducible cuspidal automorphic representation $(\pi^\prime, V_{\pi^\prime})$ of $G(\mA)$ such that $V_{\pi^\prime}|_{G(\mA)^+} \supset V_{\pi_+^\prime}$. Then $\pi^\prime$ is globally generic. Moreover $\pi^\prime \otimes \chi_E$ is not nearly equivalent to $\pi^\prime$ by our assumption on $\pi$. Hence $\pi^\prime|_{\mathcal{G}}$ is irreducible. Thus we may apply Theorem~\ref{gso whittaker}, taking $\pi^\circ = \pi^\prime$ and $\Sigma = \theta(\sigma)$, and we obtain the explicit formula for the Whittaker periods on $\theta(\sigma)$. Then by \cite[Theorem~A.1]{FM3}, the required explicit formula for the Bessel periods follows. \end{proof} \subsection{Proof of Theorem~\ref{gso whittaker}} \label{s: pf wh gso} We reduce Theorem~\ref{gso whittaker} to a certain local identity in \ref{Reduction to a local identity} and then prove the local identity in \ref{ss: local pull-back computation}. As we stated in the beginning of this section, what we do essentially is to deduce the explicit formula \eqref{e:gso whittaker} for the Whittaker periods on $\mathrm{GSO}_{4,2}$ from \eqref{e:gsp whittaker} below, the one for the Whittaker periods on $G$. \subsubsection{Explicit formula for the Whittaker periods on $G=\mathrm{GSp}_2$} Let $U_G$ denote the maximal unipotent subgroup of $G$. Namely \begin{equation}\label{d: U_G} U_G := \left\{ m\left(n\right)\begin{pmatrix}1_2&X\\ 0&1_2 \end{pmatrix} : X \in \mathrm{Sym_2}, n \in N_2 \right\} \end{equation} where $m\left(h\right)=\begin{pmatrix} h&0 \\0 &{}^{t}h^{-1}\end{pmatrix} $ for $h\in\mathrm{GL}_2$ and $N_2$ denotes the group of upper unipotent matrices in $\mathrm{GL}_2$. Then we define a non-degenerate character $\psi_{U_G}$ of $U_G(\mA)$ by \begin{equation} \label{def:psi_UG} \psi_{U_G}(u) := \psi(u_{1\,2}+d \,u_{2\,4}) \quad\text{for $u=\left(u_{i\,j}\right)\in U_G\left(\mA\right)$}. \end{equation} Then for an automorphic form $\phi$ on $G(\mA)$, we define the Whittaker period $W_{\psi_{U_G}}(\phi)$ of $\phi$ by \[ \label{W U_G} W_{\psi_{U_G}}(\phi) = \int_{U_G(F) \backslash U_G(\mA)} \phi(n) \,\psi_{U_G}^{-1}(n) \,dn. \] The following theorem shall be proved in Appendix~\ref{appendix A}. \begin{theorem} \label{gsp whittaker} Suppose that $\left(\pi,V_{\pi}\right)$ is an irreducible cuspidal tempered globally generic automorphic representation of $G\left(\mA\right)$. Let $\Pi_{\pi} = \Pi_1 \boxplus \cdots \boxplus \Pi_k$ denote the functorial lift of $\pi$ to $\mathrm{GL}_4(\mA)$. 
Then for any non-zero decomposable vector $\varphi = \otimes \varphi_v \in V_{\pi}$, we have
\begin{equation} \label{e:gsp whittaker}
\frac{|W_{\psi_{U_G}}(\varphi)|^2}{\left( \varphi, \varphi \right)}
=\frac{1}{2^k}\cdot \frac{\prod_{j=1}^2\xi_F(2j)}{L\left(1, \Pi_{\pi}, \mathrm{Sym}^2\right)} \cdot \prod_v \mathcal{W}_{G,v}^\circ(\varphi_v).
\end{equation}
Here $\mathcal{W}^\circ_{G, v}(\varphi_v)$ is defined by
\[
\label{mathcal W_G}
\mathcal{W}^\circ_{G, v}(\varphi_v)
=\frac{L\left(1, \Pi_{\pi, v}, \mathrm{Sym}^2\right)}{\prod_{j=1}^2 \zeta_{F_v}(2j)} \mathcal{W}_{G, v}(\varphi_v)
\]
and $\mathcal{W}_{G, v}(\varphi_v)$ is defined by
\[
\mathcal{W}_{G, v}(\varphi_v)
= \int_{U_G(F_v)}^{st} \frac{\left( \pi_v^\circ(n)\varphi_v, \varphi_v \right)}{\left( \varphi_v, \varphi_v \right)} \,\psi_{U_G}^{-1}(n) \, dn
\]
when $v$ is finite and by the Fourier transform when $v$ is archimedean.
\end{theorem}
\subsubsection{Reduction to a local identity}
\label{Reduction to a local identity}
Let us go back to the situation stated at the beginning of \ref{s:Explicit formulas whittaker}. First we note that the unramified computation in \cite{Ku2} implies the following lemma.
\begin{lemma}
\label{pi sigma identity}
There exists a finite set $S_0$ of places of $F$ containing all archimedean places such that for a place $v\notin S_0$, we have
\[
L \left(1, \Pi_{\Sigma_v}, \mathrm{As}^+ \right)
=
L\left(1, \pi_v, \mathrm{std} \otimes \chi_E\right) L\left(1, \Pi_{\pi,v}, \mathrm{Sym}^2\right)L\left(1, \chi_{E_v}\right).
\]
\end{lemma}
Let us recall the following pull-back formula for the Whittaker period on $\Sigma = \theta_\psi\left(\pi^\circ\right)$.
\begin{proposition}\cite[p.~40]{Mo}
\label{mo 40}
Let $f \in V_{\pi^\circ}$ and $\phi \in \mathcal{S}(Z_+(\mA))$. Then
\begin{multline}\label{e: global w pull-back}
W^{\psi_U}(\theta(\phi; f)) = \int_{N(\mA) \backslash G^1(\mA)} (\omega_{\psi}(g_1, 1)\phi)((x_{-2}, x_{-1}, 0, x_2)) \\
\times W_{\psi_{U_G}}(\pi^\circ(g_1) f) \, dg_1.
\end{multline}
\end{proposition}
Suppose that $f=\otimes f_v$ and $\phi=\otimes\phi_v$. Then by an argument similar to the one used to obtain \cite[(2.27)]{FM2}, when $W_{\psi_{U_G}}(f) \ne 0$, we have
\[
W^{\psi_U}(\theta(\phi; f)) = C_{G^1} \cdot W_{\psi_{U_G}}(f) \cdot \prod_v \mathcal{L}_v^\circ(\phi_v, f_v)
\]
where
\begin{multline*}
\mathcal{L}_v^\circ(\phi_v, f_v) := \int_{N(F_v) \backslash G^1(F_v)} (\omega_{\psi_v}(g_1, 1)\phi_v)((x_{-2}, x_{-1}, 0, x_2)) \\
\times \mathcal{W}_{G, v}^\circ(\pi_v^\circ(g_1)f_v) \, dg_{1,v}
\end{multline*}
when $\phi = \otimes_v \phi_v$ and $f = \otimes_v f_v$. We also define
\[
\label{mathcal L}
\mathcal{L}_v(\phi_v, f_v) :=
\frac{\prod_{j=1}^2 \zeta_{F_v}\left(2j\right)}{L\left(1,\Pi_{\pi, v}, \mathrm{Sym}^2\right)} \,
\frac{\mathcal{L}_v^\circ(\phi_v, f_v)}{ \mathcal{W}_{G, v}(f_v)}.
\]
Here the measures are taken as follows. Let $dg_v$ be the measure on $G^1\left(F_v\right)$ defined by the gauge form and $dn_v$ the measure on $N\left(F_v\right)$ defined in the manner stated in \ref{ss: measures}. Then we take the measure $dg_{1,v}$ on $N(F_v) \backslash G^1(F_v)$ so that $dg_v=dn_v \, dg_{1,v}$. Let $\Theta\left(\pi_v^\circ,\psi_v\right):=\operatorname{Hom}_{G(F_v)^+} \left(\Omega_{\psi_v},\bar{\pi}_v^\circ\right)$ where $\Omega_{\psi_v}$ is the extended local Weil representation of $G(F_v)^+ \times \mathrm{GSO}_{4,2}\left(F_v\right)$ realized on $\mathcal S\left(Z_+\left(F_v\right)\right)$, the space of Schwartz-Bruhat functions on $Z_+\left(F_v\right)$.
We recall that the action of $G(F_v)^+ \times \mathrm{GSO}_{4,2}\left(F_v\right)$ on $\mathcal S\left(Z_+\left(F_v\right)\right)$ via $\Omega_{\psi_v}$ is defined as in the global case (e.g. see \cite[2.2]{Mo}). We also recall that for $\Sigma=\theta_\psi\left(\pi^\circ\right)$, we have $\Sigma=\otimes_v\,\Sigma_v$ where $\Sigma_v=\theta_{\psi_v}\left(\pi_v^\circ\right)$ is the local theta lift of $\pi_v^\circ$. Let
\[
\theta_v:\mathcal S\left(Z_+\left(F_v\right)\right)\otimes V_{\pi_v^\circ}\to V_{\Sigma_v}
\]
be a $G(F_v)^+ \times \mathrm{GSO}_{4,2}\left(F_v\right)$-equivariant linear map, which is unique up to scalar multiplication. Since the global mapping
\[
\mathcal S\left(Z_+\left(\mA\right)\right)\otimes V_{\pi^\circ}
\ni\left(\phi^\prime,f^\prime\right)\mapsto
\theta_\psi\left(\phi^\prime; f^\prime\right)\in V_\Sigma
\]
is $G(F_v)^+ \times \mathrm{GSO}_{4,2}\left(F_v\right)$-equivariant at any place $v$, by the uniqueness of $\theta_v$, we may adjust $\left\{\theta_v\right\}_v$ so that
\[
\theta_\psi\left(\phi^\prime; f^\prime\right)=\otimes_v\, \theta_v\left(\phi_v^\prime\otimes f_v^\prime\right)
\quad\text{for $f^\prime=\otimes_v\,f_v^\prime\in V_{\pi^\circ}$, $\phi^\prime=\otimes_v\, \phi_v^\prime \in\mathcal S\left(Z_+\left(\mA\right)\right)$.}
\]
Then as in \cite[Section~2.4]{FM2}, combining Theorem~\ref{gsp whittaker}, the Rallis inner product formula \eqref{RIPF H GSO}, Lemma~\ref{pi sigma identity}, Lemma~\ref{compo number} and Proposition~\ref{mo 40}, we see that the proof of Theorem~\ref{gso whittaker} is reduced to a proof of the following local identity~\eqref{e: 1st local id}.
\begin{proposition}
\label{1st local id}
Let $v$ be an arbitrary place of $F$. For a given $f_v \in V_{\pi^\circ_v}^\infty$ satisfying $\mathcal{W}_{G,v}(f_v) \ne 0$, there exists $\phi_v \in \mathcal{S}(Z_+(F_v))$ such that the local integral $\mathcal{L}_v(\phi_v, f_v)$ converges absolutely, $\mathcal{L}_v(\phi_v, f_v) \ne 0$ and the equality
\begin{equation}\label{e: 1st local id}
\frac{Z_v(\phi_v, f_v, \pi_v) \cdot \mathcal{W}_v(\theta(\phi_v \otimes f_v))}{|\mathcal{L}_v(\phi_v, f_v)|^2}
= \mathcal{W}_{G, v}(f_v)
\end{equation}
holds with respect to the specified local measures.
\end{proposition}
\noindent
Let us define a hermitian inner product $\mathcal{B}_{\omega_v}$ on $\mathcal{S}(Z_+(F_v))$ by
\[
\mathcal{B}_{\omega_v}(\phi, \phi^\prime) = \int_{Z_+(F_v)} \phi(x) \,\overline{\phi^\prime}(x) \,dx
\quad \text{for} \quad \phi, \phi^\prime \in \mathcal{S}(Z_+(F_v)).
\]
Here on $Z_+(F_v) \simeq (F_v)^{12}$, we take the product measure of the one on $F_v$. Then we consider the integral
\begin{align}
\label{theta local PI}
Z^{\flat}(f, f^\prime; \phi, \phi^\prime) &= \int_{G^1(F_v)} \langle \pi_v^\circ(g) f, f^\prime \rangle_v \, \mathcal{B}_{\omega_v}(\omega_{\psi}(g)\phi, \phi^\prime) \, dg \\
&= \int_{G^1(F_v)} \int_{Z_+(F_v)} \langle \pi_v^\circ(g) f, f^\prime \rangle_v \, \left(\omega_{\psi_v}(g,1)\phi \right)(z) \overline{\phi^\prime(z)} \, dz \,dg
\quad\text{for $f, f^\prime \in V_{\pi_v^\circ}$}. \notag
\end{align}
The integral \eqref{theta local PI} converges absolutely by Yamana~\cite[Lemma~7.2]{Yam}. As in Gan and Ichino~\cite[16.5]{GI1}, we may define a $\mathrm{GSO}_{4,2}(F_v)$-invariant hermitian inner product $\mathcal{B}_{\Sigma_v} : V_{\Sigma_v} \times V_{\Sigma_v} \rightarrow \mC$ by
\[
\mathcal{B}_\Sigma(\theta(\phi \otimes f), \theta(\phi^\prime \otimes f^\prime)) := Z^{\flat}(f, f^\prime; \phi, \phi^\prime).
\]
Here we note that for $h \in \mathrm{SO}_{4,2}(F_v)$, we have
\[
\mathcal{B}_\Sigma(\Sigma(h)\theta(\phi \otimes f), \theta(\phi^\prime \otimes f^\prime))
= \mathcal{B}_\Sigma(\theta(\omega_{\psi}(1,h)\phi \otimes f), \theta(\phi^\prime \otimes f^\prime)).
\]
As in the definition of $\mathcal{W}_v$, we define
\[
\label{mathcal W psi U}
\mathcal{W}^{\psi_U} (\tilde{\phi}_1, \tilde{\phi}_2):=
\int_{U(F_v)}^{st} \mathcal{B}_\Sigma(\Sigma(n)\tilde{\phi}_1, \tilde{\phi}_2) \psi_{U}(n)^{-1} \, dn
\quad\text{for $\,\widetilde{\phi}_i \in V_{\Sigma_v}\,\,\left(i=1,2\right)$}.
\]
Then by an argument similar to the one in \cite[3.2--3.3]{FM2}, indeed word for word, Proposition~\ref{1st local id} is reduced to the following local identity, which may be regarded as a local pull-back computation of the Whittaker periods with respect to the theta lift.
\begin{proposition}\label{main identity whittaker}
For any $f, f^\prime\in V_{\pi_v^\circ}$ and any $\phi,\phi^\prime\in C_c^\infty\left(Z_+(F_v)\right)$, we have
\begin{multline}\label{e: weil hermitian13}
\mathcal{W}^{\psi_U}\left( \theta\left(\phi\otimes f \right), \theta\left(\phi^\prime\otimes f^\prime\right) \right)
=
\int_{N(F_v)\backslash G^1(F_v)} \int_{N(F_v)\backslash G^1(F_v)} \\
\mathcal{W}_{G,v}\left(\pi_v^\circ\left(g\right)f,\pi_v^\circ\left(g^\prime\right)f^\prime\right)
\left(\omega_{\psi_v}\left(g,1\right)\phi\right)\left(x_0\right)\,
\overline{\left(\omega_{\psi_v}\left(g^\prime,1\right)\phi^\prime\right)\left(x_0\right)}\,
dg\, dg^\prime.
\end{multline}
\end{proposition}
\begin{Remark}
Since $\{ g \cdot x_0 : g \in G^1(F_v) \}$ is locally closed in $Z_+(F_v)$, the mappings
\[
N(F_v)\backslash G^1(F_v) \ni g \mapsto \phi(g^{-1} \cdot x_0)\in \mathbb C,
\quad
N(F_v)\backslash G^1(F_v) \ni g^\prime \mapsto \phi^\prime((g^\prime)^{-1} \cdot x_0)\in \mathbb C
\]
are compactly supported, and thus the right-hand side of \eqref{e: weil hermitian13} converges absolutely for $\phi, \phi^\prime \in C_c^\infty\left(Z_+(F_v)\right)$.
\end{Remark}
\subsubsection{Local pull-back computation}
\label{ss: local pull-back computation}
Here we shall prove Proposition~\ref{main identity whittaker} and thus complete our proof of Theorem~\ref{gso whittaker}. Since we work over a fixed place $v$ of $F$, we shall suppress $v$ from the notation in this subsection, e.g. $F$ means $F_v$. Further, for any algebraic group $K$ over $F$, we denote its group of $F$-rational points $K\left(F\right)$ by $K$ for simplicity.
\subsubsection*{The case when $F$ is non-archimedean}
Suppose that $F$ is non-archimedean. From the definition, the local Whittaker period is equal to
\[
\int_{U}^{st} \int_{G^1} \int_{Z_+} (\omega_{\psi}(g, n) \phi)(x) \overline{ \phi^\prime(x)}
\langle \pi^\circ(g)f, f^\prime \rangle \psi_{U}(n)^{-1} \, dx \, dg \, dn.
\]
Recall that we have defined subgroups $N_0, N_1, N_2$ and $\widetilde{U}$ of $U$ in \eqref{d: N_0}, \eqref{d: N_1}, \eqref{d: N_2} and \eqref{e: def of u_tilde}, respectively. Then because of the absolute convergence of the integral \eqref{theta local PI}, the above local integral can be written as
\begin{multline} \label{e:1}
\int_{\widetilde{U}}^{st} \int_{N_2}^{st} \int_{N_1} \int_{N_0} \int_{Z_+} \int_{G^1}
(\omega_{\psi}(g, u_0 u_1 u_2 \tilde{u}) \phi)(x) \overline{ \phi^\prime(x)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \psi_{U}(u_2 \tilde{u})^{-1} \, dx \, dg \, du_0\, du_1 \, du_2\, d\tilde{u}.
\end{multline}
Let us define $Z_{+, \circ} := \left\{ (a_1, a_2 ; 0,0) \in Z_+ : \text{$a_1$ and $a_2$ are linearly independent}\right\}$. Then since $Z_{+, \circ} \oplus (X_+ \otimes Y_0)$ is open and dense in $Z_+$, we have
\[
\int_{Z_+} \Phi(z) \, dz = \int_{Z_{+, \circ}} \int_{X_+ \otimes Y_0} \Phi(z_1 +z_2) \, dz_2 \, dz_1
\]
for any $\Phi \in L^1(Z_+)$. We consider a map $p : Z_{+, \circ} \rightarrow F$ defined by $p((a_1, a_2; 0, 0)) = \langle a_1, a_2\rangle$. This is clearly surjective. For each $t \in F$, we fix $x_t \in Z_{+, \circ}$ such that $p(x_t) = t$ and write $x_t =(a_1^t, a_2^t; 0, 0)$. Then by Witt's theorem, the fiber $p^{-1}(t)$ over $t$ is given by
\[
p^{-1}(t) = \left\{ \gamma \cdot x_t :=(\gamma a_1^t, \gamma a_2^t; 0, 0) : \gamma \in G^1 \right\}.
\]
We may identify this space with $G^1 \slash R_t$ as a $G^1$-homogeneous space. Here $R_t$ denotes the stabilizer of $x_t$ in $G^1$. From this observation, the following lemma readily follows (cf. \cite[Lemma~3]{FM2}).
\begin{lemma} \label{meas decomp 1}
For each $x_t \in Z_{+, \circ}$, there exists a Haar measure $dr_t$ on $R_t$ such that
\[
\int_{Z_+}\Phi(z) \, dz = \int_{F} \int_{R_t \backslash G^1} \int_{X_+ \otimes Y_0} \Phi(g^{-1} \cdot x_t+z) \, dz \, dg_t \, dt.
\]
Here $dg_t$ denotes the quotient measure $dr_t \backslash dg$ on $R_t \backslash G^1$.
\end{lemma}
Further, we note the following lemma, which is proved by an argument similar to the one for \cite[Lemma~3.20]{Liu2} (cf. \cite[Lemma~3]{FM2}).
\begin{lemma} \label{lem L1}
For $\phi_1,\phi_2\in C_c^\infty\left(Z_+\right)$ and $f_1,f_2\in V_{\pi^\circ}$, let
\[
\mathcal G_{\phi_1,\phi_2,f_1,f_2} \left(t\right)=\int_{G^1} \int_{R_t \backslash G^1}
\phi_1\left((g g^\prime)^{-1} \cdot x_t\right)\, \phi_2\left(g^{-1} \cdot x_t\right)\,
\langle\pi^\circ\left(g^\prime\right)f_1,f_2\rangle\, dg\, dg^\prime
\]
for $t \in F$. Then the integral is absolutely convergent and defines a locally constant function of $t$.
\end{lemma}
\begin{Remark} \label{archi lem L1}
When $F$ is archimedean, by an argument similar to the one for \cite[Proposition 3.22]{Liu2}, we see that this integral is absolutely convergent and is a continuous function on $F$, not only for $\phi_1, \phi_2 \in C_c^\infty\left(Z_+\right)$ but also for $\phi_1, \phi_2 \in \mathcal{S}(Z_+)$.
\end{Remark}
By Lemma~\ref{meas decomp 1}, the integral \eqref{e:1} can be written as
\begin{multline*}
\int_{N_0} \int_{F} \int_{R_t \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1}
(\omega_{\psi}(g, u_0 h) \phi)(\gamma^{-1} \cdot x_t+z) \overline{ \phi^\prime(\gamma^{-1} \cdot x_t+z)} \\
\times\langle \pi^\circ(g)f, f^\prime \rangle \, dg \, dz \, d\gamma_t \, dt \, du_0.
\end{multline*}
Moreover, by the computation in \cite[Section~3.1]{Mo}, we have
\[
(\omega_{\psi}(g, u_0(x) h) \phi)(\gamma^{-1} \cdot x_t+z)
= \psi(-xt)\, (\omega_{\psi}(g, h) \phi)(\gamma^{-1} \cdot x_t+z).
\]
Then because of Lemma~\ref{lem L1}, we may apply the Fourier inversion with respect to $x$ and $t$, and thus the above integral is equal to
\begin{multline} \label{FI 1}
\int_{R_0 \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1}
(\omega_{\psi}(g, h) \phi)(\gamma^{-1} \cdot x_0+z) \overline{ \phi^\prime(\gamma^{-1} \cdot x_0+z)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, dz \, d\gamma_0 \\
= \int_{R_0 \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1}
(\omega_{\psi}(\gamma g, h) \phi)(x_0+z) \overline{(\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+z)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, dz \, d\gamma_0.
\end{multline}
The support of $\phi^\prime(\gamma^{-1} \cdot x_0+z)$ as a function of $z \in X_+ \otimes Y_0$ is compact since $\phi^\prime \in C_c^\infty(Z_+)$. Therefore this integral converges absolutely and is equal to
\begin{multline*}
\int_{X_+ \otimes Y_0} \int_{R_0 \backslash G^1} \int_{G^1}
(\omega_{\psi}(\gamma g, h)\phi)(x_0+z) \overline{(\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+z)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0 \, dz.
\end{multline*}
Now, let us take $(x_{-2}, x_{-1}; 0, 0)$ as $x_0$. Then we have
\[
R_0 = N.
\]
Let us define a map $q : X_+ \otimes Y_0 \rightarrow \mathrm{Mat}_{2 \times 2}$ by
\[
q(b_1 \otimes e_1+b_2 \otimes e_2) =
\begin{pmatrix}
\langle x_{-2}, b_1 \rangle & \langle x_{-2}, b_2 \rangle \\
\langle x_{-1}, b_1 \rangle & \langle x_{-1}, b_2 \rangle
\end{pmatrix}
\]
with $b_i \in X_+$. Clearly this map is bijective. Hence, there exists a measure $dT$ on $\mathrm{Mat}_{2 \times 2}$ such that we have
\[
\int_{X_+ \otimes Y_0} \Phi(x_{-2}, x_{-1}; z) \, dz = \int_{\mathrm{Mat}_{2 \times 2}} \Phi(x_{-2}, x_{-1}; x_T) \, dT
\]
with $x_T = q^{-1}(T)$. Here we note that the measure $dz$ on $X_+ \otimes Y_0$ is taken to be the Tamagawa measure and hence we have the Fourier inversion
\[
\int_{\mathrm{Mat}_{2 \times 2}} \int_{\mathrm{Mat}_{2 \times 2}} \Phi(T) \psi \left(\mathrm{tr} \left(T S_0 T^\prime \right) \right) \, dT \, dT^\prime = \Phi(0)
\]
with the above Haar measures $dT, dT^\prime$ on $\mathrm{Mat}_{2 \times 2}$ if the integral converges. Thus we have
\begin{multline*}
\int_{N_2}^{st} \int_{N_1} \int_{X_+ \otimes Y_0} \int_{N \backslash G^1} \int_{G^1}
(\omega_{\psi}(\gamma g, u_1 u_2 h)\phi)(x_0+z) \overline{(\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+z)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0 \, dz \, du_1 \, du_2 \\
= \int_{N_2}^{st} \int_{N_1} \int_{\mathrm{Mat}_{2 \times 2}} \int_{N \backslash G^1} \int_{G^1}
(\omega_{\psi}(\gamma g, u_1 u_2 h)\phi)(x_0+x_T) \overline{(\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+x_T)} \\
\times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0 \, dT \, du_1 \, du_2.
\end{multline*}
Moreover, similarly to the global computation in \cite[Section~3.1]{Mo}, we may write this integral as
\begin{multline} \label{e:pre 1}
\int_{N_2}^{st} \int_{N_1} \int_{\mathrm{Mat}_{2 \times 2}} \int_{N \backslash G^1} \int_{G^1}
\psi \left(\mathrm{tr} \left( \begin{pmatrix}s_1&s_2\\ t_1 &t_2 \end{pmatrix}S_0 (x_T-x_{T_0}) \right) \right) \\
\times (\omega_{\psi}(\gamma g, h)\phi)(x_0+x_T) \overline{(\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+x_T)}
\langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0 \, dT \, du_1 \, du_2
\end{multline}
where we write $u_1 = u_1(s_1, t_1)$ and $u_2 = u_2(s_2, t_2)$, and we put $ T_0 = \begin{pmatrix}0&0\\0&1 \end{pmatrix}$. By an argument similar to the one used to obtain \eqref{FI 1}, we may apply the Fourier inversion to this integral, and we see that it is equal to
\[
\int_{N \backslash G^1} \int_{G^1}
(\omega_{\psi}(\gamma g, h) \phi)(x_0+x_{T_0}) \overline{(\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+x_{T_0})}
\langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0.
\]
Now we note that from the argument to obtain \eqref{FI 1}, this integral converges absolutely.
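For clarity, we note that the two applications of the Fourier inversion above rest on the following elementary identity (a sketch of a standard fact, stated here for a locally constant, compactly supported function $\mathcal{G}$ on $F$ and the self-dual measure on $F$ with respect to $\psi$; in the actual application, Lemma~\ref{lem L1} supplies the required regularity of the inner integrand):
\[
\int_{F} \int_{F} \mathcal{G}(t)\, \psi(-xt)\, dt\, dx = \mathcal{G}(0),
\]
where both integrals converge absolutely since the inner integral again defines a locally constant, compactly supported function of $x$. The displayed inversion on $\mathrm{Mat}_{2 \times 2}$ is its analogue for the pairing $\psi\left(\mathrm{tr}\left(T S_0 T^\prime\right)\right)$.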
Then by telescoping the $G^1$-integration, we obtain \begin{multline*} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{N} (\omega_{\psi}( r g, h) \phi)(x_0+x_{T_0}) (\overline{\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+x_{T_0})} \\ \times \langle \pi^\circ(r g)f, \pi^\circ(\gamma)f^\prime \rangle \, dr \, dg \, d\gamma_0. \end{multline*} Put $z_0 = x_0+x_{T_0} = (x_{-2}, x_{-1}, 0, x_2)$. Recall that from the computation in \cite[Section~3.1]{Mo}, we have \begin{equation} \label{da22} \omega_{\psi}(v(A)g, \tilde{u}(b) h) \phi(z_0) = \psi(-d a_{22})\omega_{\psi}(g, \tilde{u}(b) h) \phi(z_0) \end{equation} when we write $A =\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22} \end{pmatrix} $, and we have \begin{equation} \label{z0 w(b)} z_0(1, \tilde{u}(b)) = z_0(w(b), 1). \end{equation} Therefore, $\mathcal{W}_{\psi_U}\left( \theta\left(\phi\otimes f \right), \theta\left(\phi^\prime\otimes f^\prime\right) \right) $ is equal to \begin{multline*} \int_{F}^{st} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{N} \psi(-b) (\omega_{\psi}(r g, \tilde{u}(b)) \phi)(z_0) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(z_0)} \\ \times \langle \pi^\circ(rg)f, \pi^\circ(\gamma)f^\prime \rangle \, dr \, dg \, d\gamma_0 \, db \\ = \int_{F}^{st} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{\mathrm{Sym}^2} \psi(-b-da_{22}) (\omega_{\psi}(w(b)g, 1)\phi)(z_0) \\ \times (\overline{\omega_{\psi}(\gamma, 1)\phi^\prime)(z_0)} \langle \pi^\circ(v(A)g)f, \pi^\circ(\gamma)f^\prime \rangle \, dA \, dg \, d\gamma_0 \, db. \end{multline*} By an argument similar to the one in \cite{FM2} showing that \cite[(3.30)]{FM2} is equal to $\alpha(\pi(g) \phi, \pi(h)\phi^\prime)$ there, indeed, by word for word, we see that this integral is equal to \begin{multline*} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{U_G}^{st} \psi_{U_{G}}^{-1}(n) (\omega_{\psi}(g, 1)\phi)(z_0) (\overline{\omega_{\psi}(\gamma, 1)\phi^\prime)(z_0)} \\ \times \langle \pi^\circ(ng)f, \pi^\circ(\gamma)f^\prime \rangle \, dn \, dg \, d\gamma_0. \end{multline*} Thus Proposition~\ref{main identity whittaker} in the non-archimedean case is proved. \subsubsection*{The case when $F$ is archimedean} Suppose that $F$ is archimedean. Recall that \[ \mathcal{W}^{\psi_U}(\tilde{\phi}_1, \tilde{\phi}_2)= \widehat{\mathcal{W}_{\tilde{\phi}_1, \tilde{\phi}_2}}\left( \psi_U\right) \quad \text{for $\tilde{\phi}_i \in \Sigma^\infty\,\,\left(i=1,2\right)$}, \] where we set \[ \mathcal{W}_{\tilde{\phi}_1, \tilde{\phi}_2}(n) = \int_{U_{-\infty}} \mathcal{B}_{\Sigma}(\Sigma(n u)\tilde{\phi}_1, \tilde{\phi}_2) \psi_{U}^{-1}(n u) \, du \quad\text{for $n\in U$}, \] which converges absolutely and gives a tempered distribution on $U \slash U_{-\infty}$ by \cite[Corollary~3.13]{Liu2}. Let us define $U^\prime = N_0 N_1 N_2$. Then $U^\prime_{-\infty} = U_{-\infty}$. Moreover, for any $\tilde{u} \in \widetilde{U}$ and $u^\prime \in U^\prime$, we have $\tilde{u} u^\prime \tilde{u}^{-1} (u^\prime)^{-1} \in U^\prime_{-\infty}$ and we obtain $\mathcal{W}_{\tilde{\phi}_1, \tilde{\phi}_2}(\tilde{u} u^\prime) = \mathcal{W}_{\tilde{\phi}_1, \tilde{\phi}_2}(u^\prime \tilde{u})$. Hence, we may regard it as a tempered distribution on $\tilde{U} \times \left(U^\prime \slash U_{-\infty}^\prime \right)$. 
Then for a tempered distribution $I$ on $\tilde{U} \times \left(U^\prime \slash U_{-\infty}^\prime \right)$, we define partial Fourier transforms $\widehat{I^j}$ of $I$ for $j=1,2$ by \[ \langle I, \widehat{f_1} \otimes f_2 \rangle = \langle \widehat{I^1}, f_1 \otimes f_2 \rangle \quad \text{and} \quad \langle I, f_1 \otimes \widehat{f_2} \rangle = \langle \widehat{I^2}, f_1 \otimes f_2 \rangle \] where $f_1 \in \mathcal{S} \left( \tilde{U} \right)$ and $f_2 \in \mathcal{S} \left( U^\prime \slash U_{-\infty}^\prime \right)$, respectively. Then we have \[ \widehat{ \widehat{I}^2}^1(\psi_U) = \widehat{ \widehat{I}^1}^2(\psi_U) = \widehat{I}(\psi_U). \] From the definition of $\mathcal{B}_\Sigma$, we have \begin{multline*} \mathcal{W}_{\theta(\phi \otimes f), \theta(\phi^\prime \otimes f^\prime)}(n) = \int_{U_{-\infty}} \int_{G^1} \int_{Z_+} (\omega_{\psi}(g, nu) \phi)(x) \overline{ \phi^\prime(x)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \psi_{U}^{-1}(n u) \, dx \, dg \, du \\ = \int_{U_{-\infty} \slash N_0} \int_{N_0} \int_{G^1} \int_{Z_+} (\omega_{\psi}(g, nu_0u) \phi)(x) \overline{ \phi^\prime(x)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \psi_{U}^{-1}(n u) \, dx \, dg \, du_0 \, du, \end{multline*} for $\phi, \phi^\prime \in \mathcal{S}(Z_+)$ and $f, f^\prime \in V_{\pi^\circ}^\infty$. Clearly, Lemma~\ref{meas decomp 1} holds in the archimedean case also. Then as in \eqref{FI 1}, because of Remark~\ref{archi lem L1} and the Fourier inversion, the above integral is equal to \begin{multline*} \int_{U_{-\infty} \slash N_0} \int_{N \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1} (\omega_{\psi}(\gamma g, nu) \phi)(x_0+z) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+z)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, dz \, d\gamma_0 \, du. \end{multline*} As \eqref{FI 1}, this integral converges absolutely. Let us denote this integral by $J_{\phi, \phi^\prime, f, f^\prime}(n)$. Then from the definition, \[ \widehat{J_{\phi, \phi^\prime, f, f^\prime}} = \widehat{\mathcal{W}_{\theta(\phi \otimes f), \theta(\phi^\prime \otimes f^\prime)}}. \] Again, from the definition, for $\varphi \in \mathcal{S}(U^\prime \slash U^\prime_{-\infty})$, we have \begin{multline*} (\widehat{J_{\phi, \phi^\prime, f, f^\prime}}^2, \psi_U \cdot \varphi) =(J_{\phi, \phi^\prime, f, f^\prime}, \widehat{\psi_U \cdot \varphi}) = \int_{U^\prime \slash U^\prime_{-\infty}} \int_{U_{-\infty}^\prime \slash N_0} \int_{N \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1} \\ \times (\omega_{\psi}(\gamma g, nu) \phi)(x_0+z) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+z)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \widehat{\varphi}(n) \psi_U^{-1}(n) \, dg \, dz \, d\gamma_0 \, du \, dn. 
\end{multline*} By a computation similar to the one to obtain \eqref{e:pre 1}, this integral is equal to \begin{multline*} \int_{N_1} \int_{N_2} \int_{N \backslash G^1} \int_{X_+ \otimes Y_0} \int_{G^1} (\omega_{\psi}(\gamma g, u_1 u_2u) \phi)(x_0+z) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+z)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \widehat{\varphi}(u_1 u_2) \psi_U^{-1}(u_1 u_2) \, dg \, dz \, d\gamma_0 \, du \, du_1 \, du_2 \\ = \int_{N_1} \int_{N_2} \int_{\mathrm{Mat}_{2 \times 2}} \int_{N \backslash G^1} \int_{G^1} \psi \left(\mathrm{tr} \left( \begin{pmatrix}s_1&s_2\\ t_1 &t_2 \end{pmatrix}S_0 (x_T-x_{T_0}) \right) \right) \\ \times (\omega_{\psi}(\gamma g, h)\phi)(x_0+x_T) (\overline{\omega_{\psi}(\gamma, 1)\phi^\prime)(x_0+x_T)} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \widehat{\varphi}(u_1 u_2) \, dg \, d\gamma_0 \, dT \, du_1 \, du_2. \end{multline*} As above, we may apply the Fourier inversion, and thus this is equal to \begin{multline*} \widehat{\varphi}(1) \cdot \int_{N \backslash G^1} \int_{G^1} (\omega_{\psi}(\gamma g,1) \phi)(x_0+x_{T_0}) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+x_{T_0})} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0. \end{multline*} Hence, \begin{multline*} \widehat{J_{\phi, \phi^\prime, f, f^\prime}}^2(\psi_U) = \int_{N \backslash G^1} \int_{G^1} (\omega_{\psi}(\gamma g,1) \phi)(x_0+x_{T_0}) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+x_{T_0})} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \, dg \, d\gamma_0. \end{multline*} Here, we note that by Remark~\ref{archi lem L1}, this integral converges absolutely. Then this identity shows that we have \begin{multline} \label{J12} \widehat{\widehat{J_{\phi, \phi^\prime, f, f^\prime}}^2}^1(\varphi) = \int_{\tilde{U}} \int_{N \backslash G^1} \int_{G^1} (\omega_{\psi}(\gamma g,b) \phi)(x_0+x_{T_0}) (\overline{\omega_{\psi}(\gamma, 1) \phi^\prime)(x_0+x_{T_0})} \\ \times \langle \pi^\circ(g)f, f^\prime \rangle \varphi(b) \, dg \, d\gamma_0 \, db \end{multline} for $\varphi \in \mathcal{S}(\tilde{U})$. As in the non-archimedean case, by \eqref{da22} and \eqref{z0 w(b)}, we may easily show that this is equal to \begin{multline*} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{N} \int_{F} \psi_{U_{G}}^{-1}(v(x) n) (\omega_{\psi}(g, 1)\phi)(z_0) (\overline{\omega_{\psi}(\gamma, 1)\phi^\prime)(z_0)} \\ \times \langle \pi^\circ(v(x)ng)f, \pi^\circ(\gamma)f^\prime \rangle \varphi(\tilde{u}(x)) \,dx \, dn \, dg \, d\gamma_0 \end{multline*} since the integral in \eqref{J12} converges absolutely. Thus Proposition~\ref{main identity whittaker} is proved in the archimedean case also. \section{Proof of Theorem~\ref{ref ggp}} In this section, we complete our proof of Theorem~\ref{ref ggp}. Let $(\pi, V_\pi)$ be an irreducible cuspidal tempered automorphic representation of $G_D(\mA)$ with a trivial central character. Throughout this section, we suppose that $\pi$ is neither of type I-A nor type I-B. When $\pi$ is one of these types, our theorem is already proved in \cite[Theorem~7.5]{Co}. The case when $B_{\xi, \Lambda,\psi} \not \equiv 0$ on $V_\pi$ is treated in \ref{s: pf main thm 1} and the case when $B_{\xi, \Lambda, \psi} \equiv 0$ on $V_\pi$ is treated in \ref{s: pf main thm 3}, respectively. \subsection{Proof of Theorem~\ref{ref ggp} when $B_{\xi, \Lambda,\psi} \not\equiv 0$} \label{s: pf main thm 1} \subsubsection{Reduction to a local identity} Suppose that $B_{\xi, \Lambda,\psi} \not \equiv 0$ on $V_\pi$. 
Let $(\sigma, V_\sigma)$ denote the theta lift of $\pi$ to $\mathrm{GSU}_{3,D}(\mA)$, which is an irreducible cuspidal automorphic representation. As in the proof of Theorem~\ref{gso whittaker}, our theorem may be reduced to a certain local identity. Let us set some notation to explain our local identity. As in Section~\ref{s:RI H gso} and Section~\ref{s:RI HD gsu}, we fix the Petersson inner product $(\,,\,)$ on $V_{\pi}$ and the local hermitian pairing $(\,,\,)_v$ on $\pi_v$. As in \eqref{zD def}, we define the maximal isotropic subspaces $Z_{D, \pm}$. Let
\[
\theta_{D, v}:\mathcal S\left(Z_{D, +}\left(F_v\right)\right)\otimes V_{\pi_v}\to V_{\sigma_v}
\]
be the $G_D(F_v)^+ \times \mathrm{GSU}_{3, D}\left(F_v\right)$-equivariant linear map, which is unique up to multiplication by a scalar. As in Section~\ref{s:Explicit formulas whittaker}, let us adjust $\left\{\theta_{D, v}\right\}_v$ so that
\[
\theta_{D, \psi}\left(\phi^\prime; f^\prime\right)=\otimes_v\, \theta_{D, v}\left(\phi_v^\prime\otimes f_v^\prime\right)
\]
for $f^\prime=\otimes_v\,f_v^\prime\in V_{\pi}$ and $\phi^\prime=\otimes_v\, \phi_v^\prime \in\mathcal S\left(Z_{D, +}\left(\mA\right)\right)$. Let us choose $X \in D^\times(F)$ so that $S_{X} =\xi$. Then by Proposition~\ref{pullback Bessel gsp innerD}, we have
\begin{equation} \label{pullback main decomp}
\mathcal{B}_{X, \Lambda^{-1}}(\theta(f; \phi)) =B_{\xi, \Lambda}(f) \cdot \prod_v \mathcal{K}_v(f_v; \phi_v)
\end{equation}
where $f = \otimes f_v \in V_{\pi}$ and $\phi = \otimes \phi_v \in \mathcal{S}(Z_{D, +}(\mA))$, and we define
\[
\mathcal{K}_v(f_v ;\phi_v) = \int_{N_D(F_v) \backslash G_D^1(F_v)} \alpha_{\Lambda_v, \psi_{\xi, v}}(\pi_v(g)f_v) \phi_v(g^{-1} \cdot v_{D, X}) \, dg.
\]
Here, we take the measure $dh_{v}$ on $G_D^1(F_v)$ defined by the gauge form, the measure $dn_v$ on $N_{D}(F_v)$ defined in \ref{ss: measures} under the identification $D(F_v) \simeq F_v^4$ and the measure $dg_{1,v}$ on $N_{D}(F_v) \backslash G_D^1(F_v)$ such that $dh_v=dn_v \, dg_{1,v}$. Then by combining the explicit formula for the Bessel periods on $\sigma$ given in Theorem~\ref{gsu Bessel}, the Rallis inner product formulas \eqref{RIPF H GSO} and Proposition~\ref{RIPF HD}, Lemma~\ref{pi sigma identity} and Lemma~\ref{compo number}, and the above pull-back formula \eqref{pullback main decomp}, we see that Theorem~\ref{ref ggp} is reduced to the following local identity.
\begin{proposition} \label{main prp bessel}
Let $v$ be an arbitrary place of $F$. For a given $f_v \in V_{\pi_v}$ satisfying $\alpha_{\Lambda_v, \psi_{\xi, v}}\left(f_v\right)\ne 0$, there exists $\phi_v \in \mathcal{S}(Z_{D, +}\left(F_v\right))$ such that the local integral $\mathcal{K}_v\left(f_v;\phi_v\right)$ converges absolutely, $\mathcal{K}_v\left(f_v;\phi_v\right) \ne 0$ and the equality
\[
\frac{Z_v(\phi_v, f_v, \pi_v) \, \alpha_{\Lambda_v^{-1}, \psi_{X, v}}\left(\theta(\phi_v \otimes f_v)\right)}{|\mathcal{K}_v(f_v;\phi_v)|^2}
=
\frac{\alpha_{\Lambda_v, \psi_{\xi, v}}(f_v)}{(f_v, f_v)_v}
\]
holds.
\end{proposition}
\begin{Remark}
In Corollary~\ref{cor nonzero bessel}, the existence of $f_v$ with $\alpha_{\Lambda_v, \psi_{\xi, v}}\left(f_v\right)\ne 0$ is shown.
\end{Remark}
Let us define a hermitian inner product on $\mathcal{S}(Z_{D, +}(F_v))$ by
\[
\mathcal{B}_{\omega_v, D}(\phi, \phi^\prime) = \int_{Z_{D, +}(F_v)} \phi(x) \overline{\phi^\prime}(x) \,dx
\quad \text{for} \quad \phi, \phi^\prime \in \mathcal{S}(Z_{D, +}(F_v)).
\] Then we consider the integral
\[ Z^{\bullet}(f, f^\prime; \phi, \phi^\prime) = \int_{G^1(F_v)} \langle \pi_v(g) f, f^\prime \rangle_v \, \mathcal{B}_{\omega_v}(\omega_{\psi}(g)\phi, \phi^\prime) \, dg \]
for $f, f^\prime \in \pi_v$ and $\phi, \phi^\prime \in \mathcal{S}(Z_{D, +}(F_v))$. As in Section~\ref{s: pf wh gso}, this converges absolutely and gives a $\mathrm{GSU}_{3,D}(F_v)$-invariant hermitian inner product
\[ \mathcal{B}_{\sigma_v} : V_{\sigma_v} \times V_{\sigma_v} \rightarrow \mC \]
by
\[ \mathcal{B}_{\sigma_v}(\theta(\phi \otimes f), \theta(\phi^\prime \otimes f^\prime)) := Z^{\bullet}(f, f^\prime; \phi, \phi^\prime). \]
By the Rallis inner product formula \eqref{RIPF H GSO} and Proposition~\ref{RIPF HD}, at any place $v$, there exist $f_v, f_v^\prime, \phi_v, \phi_v^\prime$ such that $Z^{\bullet}(f_v, f_v^\prime; \phi_v, \phi_v^\prime) \ne 0$ since $\theta_{\psi, D}(\pi) \ne 0$. Thus, $\mathcal{B}_{\sigma_v} \not\equiv 0$. For $\widetilde{\phi}_i \in \sigma_v$, we define
\[ \mathcal{A}(\widetilde{\phi}_1, \widetilde{\phi}_2):= \int_{N_{3, D}(F_v)}^{st} \int_{M_{X}(F_v)} \mathcal{B}_{\sigma_v}(\sigma_v(nt)\widetilde{\phi}_1, \widetilde{\phi}_2) \Lambda_{D,v}(t) \psi_{X, D, v}(n)^{-1} \, dt \, dn. \]
Here, at an archimedean place $v$, the stable integral is understood as the Fourier transform, as in the definition of $\alpha_{\chi, \psi_N}$. Then by an argument similar to the one in \cite[3.2--3.3]{FM2}, we may reduce Proposition~\ref{main prp bessel} to the following identity.
\begin{proposition} For any $f, f^\prime\in V_{\pi_v}$ and any $\phi,\phi^\prime\in C_c^\infty\left(Z_{D, +}(F_v)\right)$, we have
\begin{multline} \label{e:main local pullback bessel} \mathcal{A}\left( \theta\left(\phi\otimes f \right), \theta\left(\phi^\prime\otimes f^\prime\right) \right) = \\ \int_{N_D(F_v)\backslash G_D^1(F_v)} \int_{N_D(F_v)\backslash G_D^1(F_v)} \alpha_{\Lambda_v, \psi_{\xi, v}}\left(\pi_v\left(h\right)f,\pi_v\left(h^\prime\right)f^\prime\right) \\ \times \left(\omega_{\psi_v}\left(h,1\right)\phi\right)\left(x_0\right)\, \overline{\left(\omega_{\psi_v}\left(h^\prime,1\right)\phi^\prime\right)\left(x_0\right)}\, dh\, dh^\prime. \end{multline}
\end{proposition}
Before proceeding to the proof of this proposition, we give some corollaries of this identity.
\begin{corollary} \label{cor nonzero bessel} For an arbitrary place $v$ of $F$, we have $\alpha_{\Lambda_v, \psi_{\xi, v}} \not \equiv 0$ on $\pi_v$. \end{corollary}
\begin{proof} Since $\mathcal{B}_{\sigma_v} \not\equiv 0$, \eqref{e:main local pullback bessel} implies that $\alpha_{\Lambda_v, \psi_{\xi, v}} \not \equiv 0$ on $\pi_v$ if and only if $\alpha_{\Lambda_v^{-1}, \psi_{X, v}} \not \equiv 0$ on $\sigma_v$. Moreover, by \cite[Corollary~5.1]{FM2}, $\alpha_{\Lambda_v^{-1}, \psi_{X, v}} \not \equiv 0$ on $\sigma_v$ since the theta lift of $\sigma_v$ to $\mathrm{GU}_{2,2}(F_v)$ is generic. Thus our claim follows. \end{proof}
As another corollary, the non-vanishing of local theta lifts follows from the non-vanishing of local Bessel periods.
\begin{corollary} \label{supple lem RIPF} Let $k$ be a local field of characteristic zero and $\mathcal{D}$ be a quaternion algebra over $k$. Let $\tau$ be an irreducible admissible tempered representation of $G_{\mathcal{D}}$ with a trivial central character. Let $S_{\mathcal{D}} \in \mathcal{D}^1$ and $\chi$ be a character of $T_{\mathcal{D}, S_{\mathcal{D}}}$. Suppose that $\alpha_{\chi, \psi_{S_{\mathcal{D}}}} \not \equiv 0$ on $\tau$.
Then $\mathcal{A} \not \equiv 0$ on $\theta_{\psi, \mathcal{D}}(\tau) \times \theta_{\psi, \mathcal{D}}(\tau)$. In particular, $\theta_{\psi, \mathcal{D}}(\tau) \ne 0$ and $Z^\bullet\left(\phi, \phi^\prime, f, f^\prime\right) \ne 0$ for some $f, f^\prime \in \tau$ and $\phi, \phi^\prime \in \mathcal{S}(Z_{\mathcal{D}, +})$. \end{corollary}
\begin{Remark} By \cite[Lemma~8.6, Remark~8.4 (1)]{Yam}, we know that the existence of such $f, f^\prime, \phi, \phi^\prime$ is equivalent to the non-vanishing of the theta lift of $\tau$ to $\mathrm{GSU}_{3,\mathcal{D}}$ when $k \ne \mR$. Though the equivalence is not clear when $k=\mR$, we shall use Corollary~\ref{supple lem RIPF} in \ref{s: pf main thm 3} to show that the local non-vanishing of the theta lifts implies the global non-vanishing of the theta lift. \end{Remark}
\begin{proof} By our assumption, the right-hand side of \eqref{e:main local pullback bessel} is not zero for some $f, f^\prime, \phi, \phi^\prime$ when $k \ne \mR$. Hence, the left-hand side is not zero, and in particular $Z^\bullet\left(\phi, \phi^\prime, f, f^\prime\right) \ne 0$. \end{proof}
\subsubsection{Local pull-back computation}
Here we shall prove the identity \eqref{e:main local pullback bessel} and thus complete our proof of Theorem~\ref{ref ggp} when $B_{\xi, \Lambda, \psi} \not \equiv 0$. We give a proof of \eqref{e:main local pullback bessel} only in the non-archimedean case, since the archimedean case is proved similarly, as in the proof of Proposition~\ref{main identity whittaker}. Our proof is a local analogue of the proof of Proposition~\ref{pullback Bessel gsp} and Proposition~\ref{pullback Bessel gsp innerD}. Moreover, we consider only the case when $D$ is split, since the proof in the non-split case is similar and indeed easier, as in the global computation. Since the argument in this subsection is purely local, in order to simplify the notation, we omit the subscript $v$ and simply write $K$ for $K(F)$ for any algebraic group $K$ defined over $F=F_v$. From the definition, we may write the left-hand side of \eqref{e:main local pullback bessel} as
\[ \int_{N_{4,2}}^{st} \int_{M_{X}} \int_{G^1} \int_{Z_+} \langle \pi(g)f, f^\prime \rangle (\omega_{\psi}(g, nt) \phi)(x) \overline{\phi^\prime(x)} \Lambda(t) \psi_{X}(n)^{-1} \, dx \, dg \, dt \, dn \]
where $X$ is chosen so that $S_X =S$. Further, as in \eqref{bessel u0 u1 u2}, this is equal to
\begin{multline*} \int_{F}^{st} \int_{F^2}^{st} \int_{F^2}^{st} \int_{M_{X}} \int_{G^1} \int_{Z_+} (\omega_{\psi}(g, u_0(s) u_1(s_1, t_1) u_2(s_2, t_2) t) \phi)(x) \overline{\phi^\prime(x)} \\ \times \langle \pi(g)f, f^\prime \rangle \Lambda(t) \psi(x_{21}s_1+x_{22}t_1+x_{11}s_2+x_{12}t_2)^{-1} \, dx \, dg \, dt \, ds_2 \,dt_2 \,ds_1 \, dt_1 \,ds \end{multline*}
where we write $X = \begin{pmatrix} x_{11}&x_{12}\\ x_{21}&x_{22}\end{pmatrix}$. For each $r \in F$, we may take $A_r =(a_1^r, a_2^r, 0, 0) \in Z_+$ such that $a_1^r,a_2^r$ are linearly independent and $\langle a_1^r, a_2^r \rangle =r$. Let us denote by $Q_r$ the stabilizer of $A_r$ in $G^1$. Then as in the proof of Proposition~\ref{main identity whittaker}, for each $r \in F$, there is a Haar measure $dq_r$ on $Q_r$ such that
\[ \int_{Z_+} \Phi(x) \, dx = \int_{F} \int_{Q_r \backslash G^1} \int_{X_+^2} \Phi(h^{-1} \cdot A_r+b) \, db \,dh_r \,dr \]
with $dh_r=dq_r \backslash dh$, provided that both sides converge.
Then, applying Fourier inversion and using \eqref{global comp1}, our integral becomes
\begin{multline*} \int_{F^2}^{st} \int_{F^2}^{st} \int_{M_{X}}\int_{G^1}\int_{Q_0 \backslash G^1} \int_{X_+^2} \langle \pi(g)f, f^\prime \rangle \Lambda(t) \psi(x_{21}s_1+x_{22}t_1+x_{11}s_2+x_{12}t_2)^{-1} \\ \times (\omega_{\psi}(hg, u_1(s_1, t_1) u_2(s_2, t_2) t) \phi)(A_0+b) \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+b)} \\ db \, dh \, dg \, dt \, ds_2 \,dt_2 \,ds_1 \, dt_1 \end{multline*}
with $A_0=(x_{-2}, x_{-1}, 0, 0)$. This is verified by an argument similar to the one for \cite[Lemma~3.20]{Liu2}. We note that $Q_0 = N$ from the definition. Moreover, as in \cite[Lemma~3.19]{Liu2}, the inner integral $\int_{M_{X}} \int_{G^1}\int_{Q_0 \backslash G^1} \int_{X_+^2} $ converges absolutely, and thus this is equal to
\begin{multline*} \int_{F^2}^{st} \int_{F^2}^{st} \int_{Q_0 \backslash G^1} \int_{G^1} \int_{M_{X}} \int_{X_+^2} \langle \pi(g)f, f^\prime \rangle \Lambda(t) \psi(x_{21}s_1+x_{22}t_1+x_{11}s_2+x_{12}t_2)^{-1} \\ \times (\omega_{\psi}(hg, u_1(s_1, t_1) u_2(s_2, t_2) t) \phi)(A_0+b) \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+b)} \\ db \, dh \, dg \, dt \, ds_2 \,dt_2 \,ds_1 \, dt_1 . \end{multline*}
From the proof of Lemma~\ref{pull comp 2}, this integral is equal to
\begin{multline} \label{pre F4 X+} \int_{F^2}^{st} \int_{F^2}^{st} \int_{Q_0 \backslash G^1} \int_{G^1} \int_{M_{X}} \int_{X_+^2} \langle \pi(g)f, f^\prime \rangle (\omega_{\psi}(hg, t) \phi)(A_0+b) \\ \times \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+b)} \, \Lambda(t)\, \psi \left( \mathrm{tr} \begin{pmatrix}s_2&t_2\\ s_1&t_1 \end{pmatrix} \left( S_0 \begin{pmatrix} \langle x_{-2}, b_1 \rangle & \langle x_{-2}, b_2 \rangle\\ \langle x_{-1}, b_1 \rangle& \langle x_{-1}, b_2 \rangle \end{pmatrix} -X\right) \right) \\ db \, dh \, dg \, dt \, ds_2 \,dt_2 \,ds_1 \, dt_1 . \end{multline}
Now we claim that we may define the stable integral
\begin{multline*} \int_{F^2}^{st} \int_{F^2}^{st} \int_{X_+^2} \langle \pi(g)f, f^\prime \rangle (\omega_{\psi}(hg, t) \phi)(A_0+b) \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+b)} \\ \times \Lambda(t) \psi \left( \mathrm{tr} \begin{pmatrix}s_2&t_2\\ s_1&t_1 \end{pmatrix} \left( S_0 \begin{pmatrix} \langle x_{-2}, b_1 \rangle & \langle x_{-2}, b_2 \rangle\\ \langle x_{-1}, b_1 \rangle& \langle x_{-1}, b_2 \rangle \end{pmatrix} -X\right) \right) \, db \, ds_2 \,dt_2 \,ds_1 \, dt_1 \end{multline*}
and that we may choose sufficiently large compact open subgroups $F_i$ of $F$ ($1 \leq i \leq 4$), depending only on $\psi$, such that $\int_{F^2}^{st} \int_{F^2}^{st}\dots = \int_{F_1} \int_{F_2}\int_{F_3}\int_{F_4}\cdots$. This claim follows easily from the following lemma in the one-dimensional case.
\begin{lemma}\label{l: integral formula} Let $f$ be a locally constant function on $F$ which is in $L^1(F)$. Then there exists a compact open subgroup $F_0$ of $F$ such that for any compact open subgroups $F^\prime$ and $F^{\prime \prime}$ of $F$ containing $F_0$, we have
\begin{equation}\label{e: integral formula} \int_{F^\prime} \int_{F} f(x) \psi(xy) \, dx \, dy = \int_{F^{\prime \prime}} \int_{F} f(x) \psi(xy) \, dx \, dy. \end{equation}
\end{lemma}
\begin{proof} Suppose that $\psi$ is trivial on $F_0 :=\varpi^{m}\mathcal{O}_F$ and not trivial on $\varpi^{m-1}\mathcal{O}_F$. Put $F^\prime = \varpi^{m^\prime} \mathcal{O}_F$ with $m^{\prime} \leq m$.
Then we may write the left-hand side of \eqref{e: integral formula} as
\begin{equation}\label{e: integral formula2} \int_{F^\prime} \int_{F \setminus \mathcal{O}_F} f(x) \psi(xy)\, dx \, dy +\int_{F^\prime}\int_{\mathcal{O}_F} f(x) \psi(xy) \, dx \, dy . \end{equation}
The first integral of \eqref{e: integral formula2} converges absolutely. Hence, by interchanging the order of integration, it is equal to
\[ \int_{F \setminus \mathcal{O}_F} \int_{F^\prime} f(x) \psi(xy)\, dy \, dx =\int_{F \setminus \mathcal{O}_F}f\left(x\right) \left(\int_{F^\prime}\psi\left(xy\right)\,dy\right)\, dx=0 \]
since $y \mapsto \psi(xy)$ is a non-trivial character of $F^\prime$ for each $x \in F\setminus \mathcal{O}_F$. As for the second integral of \eqref{e: integral formula2}, we have
\begin{multline*} \int_{F^\prime}\int_{\mathcal{O}_F} f(x) \psi(xy) \, dx \, dy \\ =\int_{\varpi^{m} \mathcal{O}_F} \int_{\mathcal{O}_F} f(x) \psi(xy) \, dx \, dy +\int_{\varpi^{m^\prime} \mathcal{O}_F \setminus \varpi^{m} \mathcal{O}_F}f\left(x\right) \left(\int_{\mathcal{O}_F} \psi(xy) \, dy\right) \, dx \end{multline*}
where the inner integral of the second term vanishes as above. Thus the left-hand side of \eqref{e: integral formula} is equal to
\[ \int_{\varpi^{m} \mathcal{O}_F} \int_{\mathcal{O}_F} f(x) \psi(xy) \, dx \, dy. \]
Similarly, the right-hand side of \eqref{e: integral formula} reduces to the same expression, and our claim follows. \end{proof}
By Lemma~\ref{l: integral formula}, we see that \eqref{pre F4 X+} is equal to
\begin{multline*} \int_{N \backslash G^1} \int_{G^1} \int_{M_{X}} \int_{F^2}^{st} \int_{F^2}^{st} \int_{X_+^2} \langle \pi(g)f, f^\prime \rangle (\omega_{\psi}(hg, t) \phi)(A_0+b) \\ \times \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+b)} \, \Lambda(t)\, \psi \left( \mathrm{tr} \begin{pmatrix}s_2&t_2\\ s_1&t_1 \end{pmatrix} \left( S_0 \begin{pmatrix} \langle x_{-2}, b_1 \rangle & \langle x_{-2}, b_2 \rangle\\ \langle x_{-1}, b_1 \rangle& \langle x_{-1}, b_2 \rangle \end{pmatrix} -X\right) \right) \\ db \, dh \, dg \, dt \, ds_2 \,dt_2 \,ds_1 \, dt_1 . \end{multline*}
Then, applying Fourier inversion, we get
\begin{multline} \label{eq 1} \int_{N \backslash G^1} \int_{G^1} \int_{M_{X}} \langle \pi(g)f, f^\prime \rangle \\ \times (\omega_{\psi}(hg, t) \phi)(A_0+B_0) \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+B_0)} \, \Lambda(t) \, dh \, dg \, dt \end{multline}
where $B_0 = (0, 0,\frac{x_{21}}{2}x_1+\frac{x_{11}}{2} x_2, -\frac{x_{22}}{2d}x_1-\frac{x_{12}}{2d} x_2)$ and $x_0=A_0+B_0$. By \cite[Proposition~3.1]{Liu2}, for a sufficiently large compact open subgroup $N_0$ of $N$, we have
\[ \int_{M_X} \int^{st}_{N} f(nt) \chi(nt) \,dn \,dt = \int_{N_0} \int_{M_X} f(nt) \chi(nt) \,dt \,dn \]
and thus we may define
\[ \int_{N}^{st} \int_{M_X} f(nt) \chi(nt) \,dn \,dt . \]
Further, we note the simple fact that
\[ \int_{G} g(h) \,dh = \int_{N \backslash G} \int_{N}^{st} g(nh) \, dn \, dh \]
when both sides are defined. Thus \eqref{eq 1} is equal to
\begin{multline*} \int_{N \backslash G^1} \int_{N \backslash G^1} \int_{M_{X}} \int_{N}^{st} \langle \pi(g)f, f^\prime \rangle \\ \times (\omega_{\psi}(hg, t) \phi)(A_0+B_0) \overline{(\omega_{\psi}(h,1)\phi^\prime)(A_0+B_0)} \Lambda(t) \, dn \, dt \, dg \, dh .
\end{multline*} Then the same computation as the one used to obtain \eqref{pull-back gsp complete} from \eqref{MX to TS} may be applied to the above integral, and thus we see that our integral is equal to
\begin{multline*} \int_{N\backslash G^1} \int_{N\backslash G^1} \alpha_{\Lambda, \psi_{S}}\left(\pi\left(h\right)f,\pi\left(h^\prime\right)f^\prime\right) \\ \times \left(\omega_{\psi}\left(h,1\right)\phi\right)\left(x_0\right)\, \overline{\left(\omega_{\psi}\left(h^\prime,1\right)\phi^\prime\right)\left(x_0\right)}\, dh\, dh^\prime. \end{multline*}
Hence the identity \eqref{e:main local pullback bessel} holds, and thus Theorem~\ref{ref ggp} is proved when $B_{\xi, \Lambda,\psi} \not\equiv 0$.
\subsection{Proof of Theorem~\ref{ref ggp} when $B_{\xi, \Lambda,\psi} \equiv 0$} \label{s: pf main thm 3}
First we note the following proposition concerning the non-vanishing of the $L$-values.
\begin{proposition} Let $\pi$ be an irreducible cuspidal tempered automorphic representation of $G_D(\mA)$ with trivial central character. If $G_D \simeq G$ and $\pi$ is a theta lift from $\mathrm{GSO}_{3,1}$, then $L(s, \pi, \mathrm{std} \otimes \chi_E)$ has a simple pole at $s=1$. Otherwise, $L(s, \pi, \mathrm{std} \otimes \chi_E)$ is holomorphic and non-zero at $s=1$. \end{proposition}
\begin{proof} Suppose that $G_D \simeq G$, i.e. $D$ is split. Then there exists an irreducible cuspidal globally generic automorphic representation $\pi_0$ of $G(\mA)$ such that $\pi$ and $\pi_0$ are nearly equivalent. Then our claim follows from \cite[Lemma~10.2]{Yam} and \cite[Theorem~5.1]{Sha0}. Suppose that $D$ is not split. Let us take a quadratic extension $E_0$ of $F$ such that $\pi$ has $(E_0, \Lambda_0)$-Bessel period for some character $\Lambda_0$ of $\mA_{E_0}^\times \slash E_0^\times$. Then by Theorem~\ref{ggp SO} (1), we see that there exists an irreducible cuspidal tempered automorphic representation $\pi_0$ of $G(\mA)$ such that for a sufficiently large finite set $S$ of places of $F$ containing all archimedean places, $\pi_v, \pi_{0,v}$ are unramified and $\mathrm{BC}_{E_0 \slash F}(\pi_v) \simeq \mathrm{BC}_{E_0 \slash F}(\pi_{0,v})$ for $v\not\in S$. This implies that
\begin{multline*} L^S(s, \pi_0, \mathrm{std} \otimes \chi_{E_0} \chi_E) L^S(s, \pi_0, \mathrm{std} \otimes \chi_E) \\ = L^S(s, \pi, \mathrm{std} \otimes \chi_{E_0} \chi_E) L^S(s, \pi, \mathrm{std} \otimes \chi_E). \end{multline*}
From the case when $G_D \simeq G$, the left-hand side of this identity is non-zero at $s=1$, and hence so is the right-hand side, which may have a pole at $s=1$. Suppose that $L^S(s, \pi, \mathrm{std} \otimes \chi_{E_0} \chi_E)$ has a pole at $s=1$. We may take a quadratic extension $E_1 \subset D$ of $F$ such that $\chi_{E_1} = \chi_{E_0} \chi_E$. Then by Yamana~\cite[Lemma~10.2]{Yam}, $\pi$ is a theta lift from $\mathrm{GSU}_{1,D}$, which is a similitude quaternion unitary group of degree one defined by an element in $E_1$ as in \eqref{d: gu_1,d}. In this case, $\pi$ is not tempered, which contradicts our assumption on $\pi$. Thus, $L^S(s, \pi, \mathrm{std} \otimes \chi_{E_0} \chi_E)$ is holomorphic at $s=1$. Further, by an argument similar to the one for $L^S(s, \pi, \mathrm{std} \otimes \chi_{E_0} \chi_E)$, we see that $L^S(s, \pi, \mathrm{std} \otimes \chi_E)$ is holomorphic at $s=1$. Therefore, it is holomorphic and non-zero at $s=1$. \end{proof}
Suppose that $B_{\xi, \Lambda,\psi} \equiv 0$ on $V_\pi$. We shall show that the right-hand side of \eqref{e: main identity} is zero.
If $L\left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right)\right) = 0$, then there is nothing to prove. Hence, we may suppose that $L\left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right)\right) \ne 0$. Then we shall show that for some place $v$ of $F$, we have $\alpha_{\Lambda_v, \psi_{\xi, v}}\equiv 0$ on $\pi_v$. Assume the contrary, i.e. $\alpha_{\Lambda_v, \psi_{\xi, v}} \not \equiv 0$ on $\pi_v$ for every $v$. Let us denote by $\pi_+^{B, \mathrm{loc}}$ the unique irreducible constituent of $\pi|_{G_D(\mA)^+}$ such that $\alpha_{\Lambda_v, \psi_{\xi, v}} \not \equiv 0$ on $\pi_{+,v}^{B, \mathrm{loc}}$ for every $v$. From our assumption that $\alpha_{\Lambda_v, \psi_{\xi, v}} \not \equiv 0$ on $\pi_v$ and Corollary~\ref{supple lem RIPF}, we see that $\alpha_{\Lambda_v^{-1}, \psi_{X, v}} \not \equiv 0$ on the theta lift $\theta_{\psi_v, D}(\pi_v)$ of $\pi_v$ to $\mathrm{GSU}_{3,D}(F_v)$ and that $Z_v(\phi_v, f_v, \pi_v) \ne 0$ for some $f_v \in \pi_v$ and $\phi_v \in \mathcal{S}(Z_{D, +}(F_v))$. Moreover, by the proposition above, we have $L(1, \pi, \mathrm{std} \otimes \chi_E) \ne 0$. Therefore, the theta lift $\theta_{\psi, D}(\pi_+^{B, \mathrm{loc}})$ of $\pi_+^{B, \mathrm{loc}}$ to $\mathrm{GSU}_{3,D}(\mA)$ is non-zero by Yamana~\cite[Theorem~10.3]{Yam}, which states that the non-vanishing of the local theta lifts at all places together with the non-vanishing of the $L$-value implies the non-vanishing of the global theta lift. We note that in \cite[Theorem~10.3]{Yam} there is actually an assumption that $D$ is not split at real places, which was necessary to ensure that the non-vanishing of the local theta lift implies $Z_v(\phi_v, f_v, \pi_v) \ne 0$ for some $f_v \in \pi_v$ and $\phi_v \in \mathcal{S}(Z_{D, +}(F_v))$. Since the non-vanishing of $Z_v(\phi_v, f_v, \pi_v)$ for some $f_v$ and $\phi_v$ is shown in our case by the argument above, we may apply \cite[Theorem~10.3]{Yam} without this assumption. Recall that from the proof of Theorem~\ref{ggp SO} (1), $\theta_{\psi, D}(\pi_+^{B, \mathrm{loc}})$ is tempered. Let us regard $\theta_{\psi, D}(\pi_+^{B, \mathrm{loc}})$ as an automorphic representation of $\mathrm{GU}_{4, \varepsilon}$. By the uniqueness of the Bessel model for $\mathrm{GU}_{4, \varepsilon}$ proved in \cite[Proposition~A.1]{FM3}, there exists a unique irreducible constituent $\tau$ of $\theta_{\psi, D}(\pi_+^{B, \mathrm{loc}})|_{\mathrm{U}(4)}$ such that $\tau$ has the local $(X, \Lambda_v^{-1},\psi_v)$-Bessel model at any place $v$. On the other hand, we note that $L\left(1 \slash 2, \tau \times \Lambda^{-1} \right) \ne 0$ since $L\left(\frac{1}{2}, \pi \times \mathcal{AI} \left(\Lambda \right)\right) \ne 0$. Then by \cite[Theorem~1.2]{FM3}, there exists an irreducible cuspidal automorphic representation $\tau^\prime$ of $\mathrm{U}(V_0)$, for some four dimensional hermitian space $V_0$ over $E$, such that $\tau^\prime$ has $(X, \Lambda^{-1},\psi)$-Bessel period. Then we know that $\tau$ and $\tau^\prime$ have the same $L$-parameter; in particular, $\tau_v \simeq \tau_v^\prime$ when $v$ is split. At a non-split place $v$, by the uniqueness, due to Beuzart-Plessis~\cite{BP1,BP2}, of the member of a tempered $L$-packet admitting the given Bessel model, we see that $\mathrm{U}(V_0) \simeq \mathrm{U}(J_D)$ and $\tau_v \simeq \tau_v^\prime$. Moreover, by Mok~\cite{Mok}, we have $\tau = \tau^\prime$.
Therefore, $\tau = \tau^\prime$ has $(X, \Lambda^{-1},\psi)$-Bessel period, and this implies that $\theta_{\psi, D}(\pi_+^{B, \mathrm{loc}})$ also has $(X, \Lambda^{-1},\psi)$-Bessel period. Then Proposition~\ref{pullback Bessel gsp} and \ref{pullback Bessel gsp innerD} show that $\pi$ has $(E,\Lambda)$-Bessel period, and this is a contradiction. Thus, \eqref{e: main identity} holds when $B_{\xi, \Lambda,\psi} \equiv 0$ on $V_\pi$. \section{Generalized B\"{o}cherer conjecture} \label{GBC} In this section we prove the generalized B\"{o}cherer conjecture. In fact, we shall prove Theorem~\ref{t: vector valued boecherer} below, which is more general than Theorem~\ref{Boecherer:scalar} stated in the introduction. \subsection{Temperedness condition} In order to apply Theorem~\ref{ref ggp} to holomorphic Siegel cusp forms of degree two, we need to verify the temperedness for corresponding automorphic representations. \begin{proposition} \label{temp prp} Suppose that $F$ is totally real. Let $\tau$ be an irreducible cuspidal automorphic representation of $G_D(\mA)$ with a trivial central character such that $\tau_v$ is a discrete series representation for every real place $v$ of $F$. Suppose moreover that $\tau$ is not CAP. Then $\tau$ is tempered. \end{proposition} \begin{Remark} When $D$ is split, i.e. $G_D \simeq G$, Weissauer~\cite{We} proved that $\tau_v$ is tempered at a place $v$ when $\tau_v$ is unramified. Moreover, when $\tau_v$ is a holomorphic discrete series representation at each archimedean place $v$, Jorza~\cite{Jo} showed the temperedness at finite places not dividing $2$. \end{Remark} \begin{proof} First suppose that $G_D \simeq G$. Let $\Pi$ denote the functorial lift of $\tau$ to $\mathrm{GL}_4(\mA)$ established by Arthur~\cite{Ar} (see also Cai-Friedberg-Kaplan~\cite{CFK}). When $\Pi$ is not cuspidal, since $\tau$ is not CAP, $\Pi$ is of the form $\Pi=\Pi_1 \boxplus \Pi_2$ with irreducible cuspidal automorphic representations $\Pi_i$ of $\mathrm{GL}_2(\mA)$. Since $\tau_v$ is a discrete series representation for any real place $v$, $\Pi_{i,v}$ is also a discrete series representation. Then $\Pi_i$ is tempered by \cite{Bl} and thus the Langlands parameter of $\Pi_v$ is tempered at all places $v$ of $F$. Hence $\tau$ is tempered. Suppose that $\Pi$ is cuspidal. Then by Raghuram-Sarnobat~\cite[Theorem~5.6]{RS}, $\Pi_v$ is tempered and cohomological at any real place $v$. Let us take an imaginary quadratic extension $E$ of $F$ such that the base change lift $\mathrm{BC}(\Pi)$ of $\Pi$ to $\mathrm{GL}_4(\mA_E)$ is cuspidal. Note that $\mathrm{BC}(\Pi)$ is cohomological and that $\mathrm{BC}(\Pi)^\vee \simeq \mathrm{BC}(\Pi^\vee)\simeq \mathrm{BC}(\Pi) \simeq \mathrm{BC}(\Pi)^\sigma$. Then Caraiani~\cite[Theorem~1.2]{Car} shows that $\mathrm{BC}(\Pi)$ is tempered at all finite places. This implies that $\Pi_v$ is also tempered for any finite place $v$. Thus $\tau$ is tempered. Now let us consider the case when $D$ is not split. Since $\tau$ is not CAP, by Proposition~\ref{exist gen prp}, there exists an irreducible cuspidal automorphic representation $\tau^\prime$ of $G\left(\mA\right)$ and a quadratic extension $E_0$ of $F$ such that $\tau^\prime$ is $G^{+, E_0}$-locally equivalent to $\tau$. Moreover $\tau$ is tempered if and only if $\tau^\prime$ is tempered. By \cite{LPTZ, Moe, Pau, Pau2}, $\tau^\prime_v$ is a discrete series representation at any real place $v$. Then the temperedness of $\tau^\prime$ follows from the split case. Hence $\tau$ is also tempered. 
\end{proof} As an application of Proposition~\ref{temp prp}, the following corollary holds. \begin{corollary} \label{GRC} Suppose that $F$ is totally real. Let $\tau$ be an irreducible cuspidal globally generic automorphic representation of $G(\mA)$ such that $\tau_v$ is a discrete series representation at any real place $v$. Then $\tau$ is tempered and hence the explicit formula \eqref{e:gsp whittaker} for the Whittaker periods holds for any non-zero decomposable vector in $V_\tau$. \end{corollary} \begin{proof} Recall that the functorial lift $\Pi$ of $\tau$ to $\mathrm{GL}_4\left(\mA\right)$ is cuspidal or an isobaric sum of irreducible cuspidal automorphic representations of $\mathrm{GL}_2$ by \cite{CKPSS}. In particular $\tau$ is not CAP by Arthur~\cite{Ar}. Then by Proposition~\ref{temp prp}, $\tau$ is tempered and our claim follows from Theorem~\ref{gsp whittaker}. \end{proof} \subsection{Vector valued Siegel cusp forms and Bessel periods} Let $\mathfrak H_2$ be the Siegel upper half space of degree two, i.e. the set of two by two symmetric complex matrices whose imaginary parts are positive definite. Then the group $G\left(\mathbb R\right)^+=\left\{ g\in G\left(\mathbb R\right):\nu\left(g\right)>0\right\}$ acts on $\mathfrak H_2$ by \[ g\langle Z\rangle=\left(AZ+B\right)\left(CZ+D\right)^{-1} \quad \text{for $g=\begin{pmatrix}A&B\\C&D\end{pmatrix}\in G\left(\mathbb R\right)^+$ and $Z\in\mathfrak H_2$} \] and the factor of automorphy $J\left(g,Z\right)$ is defined by \[ J\left(g,Z\right)=CZ+D. \] For an integer $N\ge 1$, let \[ \Gamma_0\left(N\right)= \left\{ \gamma\in G^1\left(\mathbb Z\right) : \gamma=\begin{pmatrix}A&B\\C&D\end{pmatrix},\, C \equiv 0 \pmod{N\mathbb{Z}} \right\}. \] \subsubsection{Vector valued Siegel cusp forms} Let $\left(\varrho, V_\varrho\right)$ be an algebraic representation of $\gl_2\left(\mathbb C\right)$. Then a holomorphic mapping $\varPhi:\mathfrak H_2\to V_\varrho$ is a \emph{Siegel cusp form of weight $\varrho$ with respect to $\Gamma_0\left(N\right)$} when $\varPhi$ vanishes at the cusps and satisfies \begin{equation}\label{e: transformation} \varPhi\left(\gamma\langle Z\rangle\right)= \varrho\left(J\left(\gamma, Z\right)\right)\varPhi\left(Z\right) \quad \text{for $\gamma\in \Gamma_0\left(N\right)$ and $Z\in\mathfrak H_2$}. \end{equation} We denote by $S_\varrho\left(\Gamma_0\left(N\right)\right)$ the complex vector space of Siegel cusp forms of weight $\varrho$ with respect to $\Gamma_0\left(N\right)$. Then $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$ has a Fourier expansion \[ \varPhi\left(Z\right)=\sum_{T>0} a\left(T,\varPhi\right)\,\exp\left[2\pi\sqrt{-1}\, \mathrm{tr}\left(TZ\right)\right] \quad\text{where $Z\in\mathfrak H_2$ and $a\left(T,\Phi\right)\in V_\varrho$}. \] Here $T$ runs over positive definite two by two symmetric matrices which are semi-integral, i.e. $T$ is of the form $T=\begin{pmatrix}a&b\slash 2\\ b\slash 2&c\end{pmatrix}$, $a,b,c\in\mathbb Z$. We note that \eqref{e: transformation} implies \begin{equation}\label{e: transformation2} a\left(\varepsilon \,T\,{}^t\varepsilon,\varPhi\right)= \varrho\left(\varepsilon\right) a\left(T,\varPhi\right) \quad\text{for $\varepsilon\in\gl_2\left(\mathbb Z\right)$}. \end{equation} From now on till the end of this paper, we assume $\varrho$ to be \emph{irreducible}. 
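As an illustration of \eqref{e: transformation2}, taking $\varepsilon=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}$ gives
\[
a\left(\begin{pmatrix}c&b\slash 2\\ b\slash 2&a\end{pmatrix},\varPhi\right)
=\varrho\left(\varepsilon\right) a\left(T,\varPhi\right)
\quad\text{for $T=\begin{pmatrix}a&b\slash 2\\ b\slash 2&c\end{pmatrix}$},
\]
so that the Fourier coefficients of $\varPhi$ are determined, up to the action of $\varrho$, by the $\gl_2\left(\mathbb Z\right)$-equivalence classes of such $T$.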
It is well known that the irreducible algebraic representations of $\gl_2\left(\mathbb C\right)$ are parametrized by \begin{equation}\label{e: L} \mathbb L=\left\{\left(n_1,n_2\right)\in\mathbb Z^2: n_1\ge n_2\right\}. \end{equation} Namely the parametrization is given by assigning \[ \varrho_\kappa:=\mathrm{Sym}^{n_1-n_2}\otimes {\det}^{n_2} \quad\text{to $\kappa=\left(n_1,n_2\right)\in\mathbb L$}. \] Suppose that $\varrho=\varrho_\kappa$ with $\kappa=\left(n+k,k\right)\in\mathbb L$. Then we realize $\varrho$ concretely by taking its space of representation $V_{\varrho}$ to be $\mathbb C\left[X,Y\right]_{n}$, the space of degree $n$ homogeneous polynomials of $X$ and $Y$, where the action of $\gl_2\left(\mathbb C\right)$ is given by \[ \varrho\left(g\right)P\left(X,Y\right)=\left(\det g\right)^k\cdot P\left(\left(X,Y\right)g\right) \quad \text{for $g\in\gl_2\left(\mathbb C\right)$ and $P\in\mathbb C\left[X,Y\right]_n$}. \] Let us define a bilinear form \[ \mathbb C\left[X,Y\right]_n\times \mathbb C\left[X,Y\right]_n\ni\left(P,Q\right)\mapsto \left(P,Q\right)_n\in \mathbb C \] by \begin{equation}\label{e: bilinear form} \left(X^iY^{n-i}, X^j Y^{n-j}\right)_n= \begin{cases} \displaystyle{\left(-1\right)^i \begin{pmatrix}n\\ i\end{pmatrix}} &\text{if $i+j=n$}; \\ 0&\text{otherwise.} \end{cases} \end{equation} Then we have \begin{equation}\label{e: equivariance} \left(\varrho\left(g\right)P,\varrho\left(g\right)Q\right)_n= \left(\det g\right)^{n+2k}\left(P,Q\right)_n \quad\text{for $g\in\gl_2\left(\mathbb C\right)$.} \end{equation} We define a positive definite hermitian inner product $\langle\,,\,\rangle_\varrho$ on $V_\varrho$ by \begin{equation}\label{e: def of hermitian} \langle P,Q\rangle_\varrho:= \left(P,\varrho\left(w_0\right)\overline{Q}\,\right)_n \quad\text{where $w_0=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$.} \end{equation} Here $\overline{Q}$ denotes the polynomial obtained from $Q$ by taking the complex conjugates of its coefficients. Then \eqref{e: equivariance} implies that we have \begin{equation}\label{e: invariant inner product} \langle\varrho\left(g\right)v,w\rangle_{\varrho}= \langle v,\varrho\left({}^t\bar{g}\right)w\rangle_{\varrho} \quad \text{for $g\in\gl_2\left(\mathbb C\right)$ and $v,w\in V_\varrho$}. \end{equation} In particular the hermitian inner product $\langle\, ,\,\rangle_\varrho$ is $\mathrm{U}_2\left(\mathbb R\right)$-invariant. Then for $\varPhi,\varPhi^\prime\in S_{\varrho}\left(\Gamma_0\left(N\right)\right)$, we define the Petersson inner product $\langle\varPhi,\varPhi^\prime\rangle_{\varrho}$ by \begin{equation}\label{e: Petersson} \langle\varPhi,\varPhi^\prime\rangle_{\varrho}= \frac{1}{\left[\mathrm{Sp}_2\left(\mathbb Z\right): \Gamma_0\left(N\right)\right]} \int_{\Gamma_0\left(N\right)\backslash \mathfrak H_2} \langle\varPhi\left(Z\right),\varPhi^\prime\left(Z\right)\rangle_{\varrho}\, \left(\det Y\right)^{k-3}\, dX\, dY \end{equation} where $X=\mathrm{Re}\left(Z\right)$ and $Y=\mathrm{Im}\left(Z\right)$. The space $S_{\varrho}\left(\Gamma_0\left(N\right)\right)$ has a natural orthogonal decomposition with respect to the Petersson inner product \[ S_{\varrho}\left(\Gamma_0\left(N\right)\right)= S_{\varrho}\left(\Gamma_0\left(N\right)\right)^{\mathrm{old}}\oplus S_{\varrho}\left(\Gamma_0\left(N\right)\right)^{\mathrm{new}} \] into the oldspace and the newspace in the sense of Schmidt~\cite[3.3]{Sch1}. 
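To illustrate the pairing \eqref{e: bilinear form} and the equivariance \eqref{e: equivariance} in the simplest case, take $n=1$ and $k=0$, so that $V_\varrho=\mathbb C X\oplus\mathbb C Y$ and, by \eqref{e: bilinear form}, $\left(X,Y\right)_1=-1$, $\left(Y,X\right)_1=1$ and $\left(X,X\right)_1=\left(Y,Y\right)_1=0$. For $g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\gl_2\left(\mathbb C\right)$ we have $\varrho\left(g\right)X=aX+cY$ and $\varrho\left(g\right)Y=bX+dY$, whence
\[
\left(\varrho\left(g\right)X,\varrho\left(g\right)Y\right)_1
=ad\left(X,Y\right)_1+bc\left(Y,X\right)_1
=-\left(ad-bc\right)
=\det g\cdot\left(X,Y\right)_1,
\]
in accordance with \eqref{e: equivariance}.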
We note that when $n$ is odd, we have $S_\varrho\left(\Gamma_0\left(N\right)\right)=\left\{0\right\}$ for $\varrho$ with $\kappa=\left(n+k,k\right)$ by \eqref{e: transformation} since $-1_4\in\Gamma_0\left(N\right)$.
\subsubsection{Adelization}\label{sss: adelization}
Given $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$, its adelization $\varphi_\varPhi:G\left(\mathbb A\right)\to V_\varrho$ is defined as follows (cf. \cite[3.1]{Sa}, \cite[3.2]{Sch1}). For each prime number $p$, let us define a compact open subgroup $P_{1,p}\left(N\right)$ of $G\left(\mathbb Q_p\right)$ by
\[ P_{1,p}\left(N\right):= \left\{ g\in G\left(\mathbb Z_p\right): g=\begin{pmatrix}A&B\\C&D\end{pmatrix},\, C\equiv 0\pmod{N\,\mathbb Z_p} \right\}. \]
Then we define a mapping $\varphi_{\varPhi}:G\left(\mathbb A\right) \to V_{\varrho}$ by
\begin{equation}\label{e: vector valued} \varphi_{\varPhi}\left(g\right)= \nu\left(g_\infty\right)^{k+r} \varrho\left(J\left(g_\infty, \sqrt{-1}\, 1_2\right)\right)^{-1} \varPhi\left(g_\infty\langle\sqrt{-1}\, 1_2\rangle\right) \end{equation}
when
\[ g=\gamma\, g_\infty\,k_0\quad \text{with $\gamma\in G\left(\mathbb Q\right)$, $g_\infty\in G\left(\mathbb R\right)^+$ and $k_0\in\prod_{p<\infty}P_{1,p}\left(N\right)$}. \]
Let $L$ be any non-zero linear form on $V_\varrho$. Then $L\left(\varphi_\varPhi\right):G\left(\mA\right)\to\mathbb C$ defined by $L\left(\varphi_\varPhi\right)\left(g\right)=L\left(\varphi_\varPhi\left(g\right)\right)$ is a scalar valued automorphic form on $G\left(\mA\right)$. Let $V\left(\varPhi\right)$ denote the space generated by the right $G\left(\mathbb A\right)$-translates of $L\left(\varphi_\varPhi\right)$. Then $V\left(\varPhi\right)$ does not depend on the choice of $L$, and we denote by $\pi\left(\varPhi\right)$ the right regular representation of $G\left(\mA\right)$ on $V\left(\varPhi\right)$. Note that the central character of $\pi\left(\varPhi\right)$ is trivial. We recall that for scalar valued automorphic forms $\phi$, $\phi^\prime$ on $G\left(\mathbb A\right)$ with a trivial central character, their Petersson inner product $\langle \phi,\phi^\prime\rangle$ is defined by
\[ \langle \phi,\phi^\prime\rangle =\int_{Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)} \phi\left(g\right)\overline{\phi^\prime\left(g\right)}\,dg \]
where $Z_G$ denotes the center of $G$ and $dg$ is the Tamagawa measure.
\begin{lemma}\label{l: norm constant} Let $L$ be a non-zero linear form on $V_\varrho$. Take $v^\prime\in V_\varrho$ such that $L\left(v\right)=\langle v,v^\prime\rangle_\varrho$ for any $v\in V_\varrho$. Then we have
\[ \langle L\left(\varphi_\varPhi\right),L\left(\varphi_\varPhi\right)\rangle =C\left(v^\prime\right)\cdot \langle\varPhi,\varPhi\rangle_\varrho \quad\text{for any $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$} \]
where
\begin{equation}\label{e: norm constant} C\left(v^\prime\right)= \frac{\mathrm{Vol}\left(Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)\right)}{ \mathrm{Vol}\left(\mathrm{Sp}_2\left(\mathbb Z\right)\backslash \mathfrak H_2\right)}\cdot \frac{\langle v^\prime,v^\prime\rangle_\varrho}{ \dim V_\varrho}. \end{equation}
\end{lemma}
\begin{proof} Let $K_\infty=\mathrm{U}_2\left(\mathbb R\right)$. We regard $K_\infty$ as a subgroup of $\mathrm{Sp}_2\left(\mathbb R\right)$ via
\[ K_\infty\ni A+\sqrt{-1}\, B\mapsto \begin{pmatrix}A&-B\\B&A\end{pmatrix} \in \mathrm{Sp}_2\left(\mathbb R\right).
\] Let $dk$ be the Haar measure on $K_\infty$ such that $\mathrm{Vol}\left(K_\infty, dk\right)=1$. Then by the Schur orthogonality relations, we have
\[ \int_{K_\infty}L\left(\varrho\left(k\right)^{-1}v\right) \cdot\overline{L\left(\varrho\left(k\right)^{-1}w\right)}\,dk = \frac{\langle v,w\rangle_\varrho\cdot\langle v^\prime,v^\prime\rangle_\varrho}{ \dim V_\varrho}. \]
On the other hand, it is easily seen that for $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$, we have
\[ \frac{\langle\varPhi,\varPhi\rangle_\varrho}{ \mathrm{Vol}\left(\mathrm{Sp}_2\left(\mathbb Z\right)\backslash \mathfrak H_2\right)} =\frac{\langle\varphi_\varPhi,\varphi_\varPhi\rangle_\varrho}{ \mathrm{Vol}\left(Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)\right) } \]
where
\[ \langle\varphi_\varPhi,\varphi_\varPhi\rangle_\varrho:= \int_{Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)} \langle\varphi_\varPhi\left(g\right),\varphi_\varPhi\left(g\right)\rangle_\varrho dg. \]
Hence
\begin{align*} \langle\varPhi,\varPhi\rangle_\varrho= &C\left(v^\prime\right)^{-1} \int_{Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)} \int_{K_\infty} \left| L\left(\varrho\left(k\right)^{-1}\varphi_\varPhi\left(g\right)\right) \right|^2 \, dk\,dg \\ =& C\left(v^\prime\right)^{-1}\int_{K_\infty} \int_{Z_G\left(\mathbb A\right) G\left(\mathbb Q\right) \backslash G\left(\mathbb A\right)} \left| L\left(\varphi_\varPhi\left(gk\right)\right) \right|^2\,dg\,dk \\ =&C\left(v^\prime\right)^{-1}\cdot\langle L\left(\varphi_\varPhi\right),L\left(\varphi_\varPhi\right) \rangle. \end{align*}
\end{proof}
\subsubsection{Bessel periods of vector valued Siegel cusp forms}
Let $E$ be an imaginary quadratic extension of $\mQ$ and let $-D_E$ denote its discriminant. We put
\begin{equation}\label{e: matrix S} S_E:=\begin{cases} \,\begin{pmatrix}1&0\\0&D_E\slash 4\end{pmatrix} &\text{when $D_E\equiv 0\pmod{4}$}; \\ \begin{pmatrix}1&1\slash 2\\ 1\slash 2&\left(1+D_E\right)\slash 4\end{pmatrix} &\text{when $D_E\equiv -1\pmod{4}$}. \end{cases} \end{equation}
Given $S=S_E$ as above, we define $T_S$, $N$ and $\psi_{S}$ as in \ref{s:def bessel G}. Then $T_S\left(\mathbb Q\right)\simeq E^\times$. Let $\Lambda$ be a character of $T_S\left(\mathbb A\right)$ which is trivial on $\mathbb A^\times T_S\left(\mathbb Q\right)$. Let $\psi$ be the unique character of $\mathbb A\slash \mathbb Q$ such that $\psi_\infty\left(x\right)=e^{-2\pi\sqrt{-1}\, x}$ and the conductor of $\psi_\ell$ is $\mathbb Z_\ell$ for any prime number $\ell$. Then for a scalar valued automorphic form $\phi$ on $G\left(\mathbb A\right)$ with a trivial central character, we define its $(S, \Lambda, \psi)$-Bessel period $B_{S,\Lambda, \psi}\left(\phi\right)$ by \eqref{Beesel def gsp}, where the Haar measures $du$ on $N\left(\mathbb A\right)$ and $dt=dt_\infty\, dt_f$ on $T_S\left(\mathbb A\right)=T_S\left(\mathbb R\right) \times T_S\left(\mathbb A_f\right)$ are taken so that $\mathrm{Vol}\left(N\left(\mathbb Q\right)\backslash N\left(\mathbb A\right), du\right)=1$ and
\[ \mathrm{Vol}\left(\mathbb R^\times \backslash T_S\left(\mathbb R\right), dt_\infty\right)= \mathrm{Vol}\left(T_S\left(\hat{\mathbb Z} \right), dt_f\right)=1. \]
Then we note that
\[ \mathrm{Vol}(\mathbb A^\times T_S\left(\mathbb Q\right) \backslash T_S\left(\mathbb A\right),dt) = \frac{2h_E}{w(E)} = D_E^{1 \slash 2} \cdot L(1, \chi_E).
\] For a $V_\varrho$-valued automorphic form $\varphi$ on $G\left(\mathbb A\right)$ with a trivial central character, it is clear that for a linear form $L:V_\varrho\to\mathbb C$ we have
\begin{equation}\label{e: vector into scalar} B_{S,\Lambda, \psi}\left(L\left(\varphi\right)\right) =L\left[ \int_{\mathbb A^\times T_S\left(\mathbb Q\right) \backslash T_S\left(\mathbb A\right)} \int_{N\left(\mathbb Q\right)\backslash N\left(\mathbb A\right)} \Lambda \left(t\right)^{-1}\psi_S \left(u\right)^{-1} \varphi\left(tu\right)\, dt\, du \right]. \end{equation}
Recall that we may identify the ideal class group $\mathrm{Cl}_E$ of $E$ with the quotient group
\[ T_S\left(\mathbb A\right) \slash T_S\left(\mathbb Q\right)T_S\left(\mathbb R\right) T_S\left(\hat{\mathbb Z}\right) . \]
Let $\left\{t_c: c\in\mathrm{Cl}_E\right\}$ be a set of representatives of $\mathrm{Cl}_E$ such that $t_c\in \prod_{p<\infty}T_S\left(\mathbb Q_p\right)$. We write $t_c$ as $t_c=\gamma_c\,m_c\,\kappa_c$ with $\gamma_c\in \gl_2\left(\mathbb Q\right)$, $m_c\in\left\{g\in\gl_2\left(\mathbb R\right):\det g>0\right\}$, $\kappa_c\in\prod_{p<\infty}\gl_2\left(\mathbb Z_p\right)$. Let $S_c=\left(\det \gamma_c\right)^{-1}\cdot {}^t\gamma_c S\gamma_c$. Then the set $\left\{S_c: c\in\mathrm{Cl}_E\right\}$ is a set of representatives for the $\mathrm{SL}_2\left(\mathbb Z\right)$-equivalence classes of primitive semi-integral positive definite two by two symmetric matrices of discriminant $D_E$. Thus when $\varphi=\varphi_\varPhi$ for $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$ and $\Lambda$ is a character of $\mathrm{Cl}_E$, we may write \eqref{e: vector into scalar} as
\begin{equation}\label{e: scalar vs vector} B_{S,\Lambda, \psi}\left(L\left(\varphi_\varPhi\right)\right)= 2\cdot e^{-2\pi\mathrm{tr}\left(S\right)}\cdot L\left(B_\Lambda\left(\varPhi;E\right)\right) \end{equation}
where
\begin{equation}\label{e: sbp for vector valued} B_\Lambda \left(\varPhi; E\right):= w\left(E\right)^{-1}\cdot \pi_\varrho\left(\sum_{c\in\mathrm{Cl}_E} \Lambda(c)^{-1} \cdot a\left(S_c,\varPhi\right)\right) \end{equation}
is the vector valued $(S, \Lambda, \psi)$-Bessel period, where
\begin{equation}\label{e: representation part} \pi_\varrho=\int_{T_S^1\left(\mathbb R\right)} \varrho\left(t\right)\, dt \quad\text{with $T_S^1=\mathrm{SL}_2\cap T_S$, $\mathrm{Vol}\left(T_S^1\left(\mathbb R\right), dt\right)=1$} \end{equation}
(cf. Dickson et al.~\cite[Proposition 3.5]{DPSS} and Sugano~\cite[(1-26)]{Su}).
\begin{Remark}[An erratum to \cite{FM1}] The definition of $B\left(\varPhi;E\right)$ in the vector valued case in \cite[Theorem~5]{FM1} should be replaced by \eqref{e: sbp for vector valued}. The statement and the proof of \cite[Theorem~5]{FM1} remain valid. \end{Remark}
Suppose that $\varrho=\varrho_\kappa$ where $\kappa=\left(2r+k,k\right) \in\mathbb L$. We define $Q_{S,\varrho}\in \mathbb C\left[X,Y\right]_{2r}$ by
\begin{equation}\label{e: def of Q} Q_{S,\varrho}\left(X,Y\right):= \left(\left(X,Y\right) S\begin{pmatrix}X\\ Y\end{pmatrix}\right)^r \cdot \left(\det S\right)^{-\frac{2r+k}{2}} \quad\text{where $S=S_E$ in \eqref{e: matrix S}}. \end{equation}
Then for $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$, the scalar valued $(S, \Lambda, \psi)$-Bessel period ${\mathcal B}_\Lambda\left(\varPhi; E\right)$ of $\varPhi$ is defined by
\begin{equation}\label{e: scalar bs1} {\mathcal B}_\Lambda\left(\varPhi; E\right):= \left(B_\Lambda\left(\varPhi;E\right), Q_{S,\varrho}\right)_{2r}.
\end{equation}
\subsection{Explicit $L$-value formula in the vector valued case} \label{generalized boecherer statement}
Let us state our explicit formula for holomorphic Siegel modular forms. In what follows, whenever we refer to a type of an admissible representation of $G$ over a non-archimedean local field, we use the standard classification due to Roberts and Schmidt~\cite{RS}. Let $N$ be a squarefree positive integer. We say that a non-zero $\varPhi\in S_\varrho\left(\Gamma_0\left( N\right)\right)$ is a \emph{newform} if
\begin{enumerate} \item $\varPhi\in S_\varrho\left(\Gamma_0\left( N\right)\right)^{\text{new}}$. \item $\varPhi$ is an eigenform for the local Hecke algebras for all primes $p$ not dividing $N$ and an eigenfunction of the local $U\left(p\right)$ operator (see Saha and Schmidt~\cite[2.3]{SS}) for all primes dividing $N$. \item The representation $\pi\left(\varPhi\right)$ of $G\left(\mA\right)$ is irreducible. \end{enumerate}
Then the following theorem is derived from Theorem~\ref{ref ggp} exactly as in Dickson, Pitale, Saha and Schmidt~\cite[Theorem~1.13]{DPSS}, except that we need to compute the local Bessel periods at the real place, adapted to the vector valued case. We carry out this computation in Appendix~\ref{s:e comp}.
\begin{theorem}\label{t: vector valued boecherer} Let $N\ge 1$ be an odd squarefree integer. Let $\varrho=\varrho_\kappa$ where $\kappa=\left(2r+k,k\right)$ with $k\ge 2$. Let $\varPhi$ be a non-CAP newform in $S_{\varrho}\left(\Gamma_0\left(N\right)\right)$. Suppose that $\displaystyle{\left(\frac{D_E}{p}\right)=-1}$ for all primes $p$ dividing $N$. When $k=2$, suppose moreover that $\pi\left(\varPhi\right)$ is tempered. Then we have
\begin{equation}\label{e: vector valued boecherer} \frac{\left| {\mathcal B}_\Lambda\left(\varPhi;E\right)\right|^2}{ \langle\varPhi,\varPhi\rangle_{\varrho}} =\frac{2^{4k+6r-c}}{D_E}\cdot \frac{L\left(1\slash 2,\pi\left(\varPhi\right) \times \mathcal{AI} \left(\Lambda \right)\right)}{ L\left(1,\pi\left(\varPhi\right),\mathrm{Ad}\right)} \cdot \prod_{p | N}J_p \end{equation}
where $c=5$ if $\varPhi$ is a Yoshida lift in the sense of Saha~\cite[Section~4]{Sa} and $c=4$ otherwise. The quantities $J_p$ for $p$ dividing $N$ are given by
\[ J_p=\left(1+p^{-2}\right)\left(1+p^{-1}\right) \times \begin{cases} 1&\text{if $\pi\left(\varPhi\right)_p$ is of type $\mathrm{IIIa}$}; \\ 2&\text{if $\pi\left(\varPhi\right)_p$ is of type $\mathrm{VIb}$}; \\ 0&\text{otherwise.} \end{cases} \]
\end{theorem}
\begin{Remark}\label{temperedness condition} When $k\ge 3$, $\pi\left(\varPhi\right)$ is tempered by Proposition~\ref{temp prp}. \end{Remark}
\begin{Remark}\label{e: scalar valued case} Since $\mathcal B\left(\varPhi;E\right)=2^{k}D_E^{-\frac{k}{2}} \cdot B\left(\varPhi;E\right)$ when $r=0$, \eqref{intro B conj} follows from \eqref{e: vector valued boecherer} by putting $N=1$ and $r=0$. \end{Remark}
\begin{Remark}\label{e: Yoshida space} In the statement of the theorem, we used the notion of Yoshida lifts in the sense of Saha~\cite{Sa}. Strictly speaking, it is necessary to extend the arguments concerning Yoshida lifts in \cite[Section~4]{Sa} from the scalar valued case to the vector valued case; we omit this extension here since it is straightforward. We also mention that the arguments in \cite[4.4]{Sa} now work unconditionally since the classification theory in Arthur~\cite{Ar} is complete for $\mathbb G=\mathrm{PGSp}_2\simeq\mathrm{SO}\left(3,2\right)$.
\end{Remark} \begin{Remark}\label{r: finite part} Recall that the $L$-functions in \eqref{e: vector valued boecherer} are complete $L$-functions. We may rewrite the explicit formula in terms of the finite parts of the $L$-functions by observing that the relevant archimedean $L$-factors are given by \[ L\left(1\slash 2,\pi\left(\varPhi\right)_\infty \times \mathcal{AI} \left(\Lambda \right)_\infty \right) =2^4\left(2\pi\right)^{-2\left(k+r\right)} \Gamma\left(k+r-1\right)^2\Gamma\left(r+1\right)^2 \] and \begin{multline*} L\left(1,\pi\left(\varPhi\right)_\infty,\mathrm{Ad}\right) =2^6\left(2\pi\right)^{-\left(4k+6r+1\right)} \\ \times \Gamma\left(k+2r\right)\Gamma\left(k-1\right)\Gamma\left(2r+2\right) \Gamma\left(2k+2r-2\right) \end{multline*} respectively. \end{Remark} \begin{Remark} Let us consider the case when $D$ is a quaternion algebra over $\mQ$ which is split at the real place, i.e. $D(\mR) \simeq \mathrm{Mat}_{2 \times 2}(\mR)$. Assuming that the endoscopic classification holds for $\mathbb G_D=G_D\slash Z_D$, we may apply Theorem~\ref{ref ggp} to holomorphic modular forms on $\mathbb G_D\left(\mA\right)$. In this case, Hsieh-Yamana~\cite{HY} compute local Bessel periods and show an explicit formula for Bessel periods such as \eqref{e: vector valued boecherer} for scalar valued holomorphic modular forms, including the case when $G_D =G$ and $N$ is an even squarefree integer. Meanwhile we shall maintain $N$ to be odd in Theorem~\ref{t: vector valued boecherer}, since our computation of the local Bessel period at the real place in the vector valued case in Appendix~\ref{s:e comp} is performed under the assumption that $N$ is odd. As we noted in Remark~\ref{rem Arthur + ishimoto}, after the submission of this paper, Ishimoto~\cite{Ishimoto} showed the endoscopic classification of $\mathrm{SO}(4,1)$ for generic Arthur parameters. Therefore, we may apply our theorem to the case of $\Bbb G_D \simeq \mathrm{SO}(4,1)$. \end{Remark} \begin{Remark} A global explicit formula such as \eqref{e: vector valued boecherer} is obtained in a certain non-squarefree level case by Pitale, Saha and Schmidt~\cite[Theorem~4.8]{PSS2}. \end{Remark} \appendix \section{Explicit formula for the Whittaker periods on $G=\mathrm{GSp}_2$} \label{appendix A} Here we shall prove Theorem~\ref{gsp whittaker}. Let $(\pi, V_{\pi})$ be an irreducible cuspidal globally generic automorphic representation of $G\left(\mA\right)$. Then Soudry~\cite{So} has shown that the theta lift of $\pi$ to $\mathrm{GSO}_{3,3}$ is non-zero and globally generic. We may divide into two cases according to whether the theta lift of $\pi$ to $\mathrm{GSO}_{3,3}$ is cuspidal or not. Suppose that the theta lift of $\pi$ to $\mathrm{GSO}_{3,3}$ is cuspidal. Since $\mathrm{PGSO}_{3,3}\simeq \mathrm{PGL}_4$ and the explicit formula for the Whittaker periods on $\mathrm{GL}_n$ is known by Lapid and Mao~\cite{LM}, the arguments in \ref{s: pf wh gso} and \ref{ss: local pull-back computation}, which are used to obtain \eqref{e:gso whittaker} in Theorem~\ref{gso whittaker} from \eqref{e:gsp whittaker}, work mutatis mutandis to obtain \eqref{e:gsp whittaker} from the Lapid--Mao formula in the case of $\mathrm{GL}_4$. Suppose that the theta lift of $\pi$ to $\mathrm{GSO}_{3,3}$ is not cuspidal. Then the theta lift of $\pi$ to $\mathrm{GSO}_{2,2}$ is non-zero and cuspidal. Thus here we give a proof of Theorem~\ref{gsp whittaker} only in the case when $\pi$ is a theta lift from $\mathrm{GSO}_{2,2}$. 
Recall that $\mathrm{PGSO}_{2,2}\simeq \mathrm{PGL}_2\times \mathrm{PGL}_2$. Our argument is similar to the one for \cite[Theorem~4.3]{Liu2}. Indeed we shall prove \eqref{e:gsp whittaker} by pushing forward the Lapid--Mao formula for $\mathrm{GSO}_{2,2}$ to $G$. \subsection{Global pull-back computation} Let $\left(X,\left<\,,\,\right>\right)$ be the $4$ dimensional symplectic space as in \ref{sp4 so42} and let $\left\{x_1,x_2,x_{-1},x_{-2}\right\}$ be the standard basis of $X$ given by \eqref{sb for X}. Let $Y=F^4$ be an orthogonal space with a non-degenerate symmetric bilinear form defined by \[ (v_1, v_2) = {}^{t}v_1 J_4 v_2 \quad\text{for $v_1,v_2 \in Y$} \] where $J_4$ is given by \eqref{d: J_m}. We take a standard basis $\left\{y_{-2},y_{-1},y_1,y_2\right\}$ of $Y = F^{4}$ given by \[ y_{-2} = {}^{t}(1, 0, 0, 0), \quad y_{-1} = {}^{t}(0, 1, 0, 0),\quad y_{1} = {}^{t}(0, 0, 1, 0), \quad y_{2} = {}^{t}(0, 0, 0, 1). \] We note that $\left( y_i , y_{-j}\right)=\delta_{ij}$ for $1\le i,j\le 2$. Put $Z=X \otimes Y$. Then $Z$ is naturally a symplectic space over $F$. We take a polarization $Z = Z_+ \oplus Z_-$ where \[ Z_\pm=X_\pm\otimes Y \] and $X_\pm=F\cdot x_{\pm 1}+F\cdot x_{\pm 2}$. Here all the double signs correspond. When $z_+=x_1\otimes a_1+x_2\otimes a_2\in Z_+\left(\mA\right)$ where $a_1,a_2\in Y$, we write $z_+=\left(a_1,a_2\right)$ and $\phi\left(z_+\right)=\phi\left(a_1,a_2\right)$ for $\phi\in\mathcal S\left(Z_+\left(\mA\right)\right)$. Let $N_{2,2}$ denote the group of upper triangular unipotent matrices of $\mathrm{GO}_{2,2}$, i.e. \[ N_{2,2}\left(F\right) = \left\{ \begin{pmatrix} 1&x&y&-xy\\ 0&1&0&-y\\ 0&0&1&-x\\0&0&0&1\end{pmatrix} | \, x, y \in F \right\}. \] We define a non-degenerate character $\psi_{2,2}$ of $N_{2,2}(\mA)$ by \[ \psi_{2,2} \begin{pmatrix} 1&x&y&-xy\\ 0&1&0&-y\\ 0&0&1&-x\\0&0&0&1\end{pmatrix} = \psi(x+y). \] Then for a cusp form $f$ on $\mathrm{GSO}_{2,2}\left(\mA\right)$, we define its Whittaker period $W_{2,2}(f)$ by \[ W_{2,2}(f) = \int_{N_{2,2}(F) \backslash N_{2,2}(\mA)} f\left(n\right) \,\psi_{2,2}\left(n\right)^{-1} \, dn. \] The following identity is stated in \cite[p.113]{GRS97} but without a proof. Though it is shown by an argument similar to the one for \cite[Proposition~2.6]{GRS97}, here we give a proof for the convenience of the reader. \begin{proposition} \label{GRS identity} Let $\varphi$ be a cusp form on $\mathrm{GO}_{2,2}\left(\mA\right)$. For $\phi \in \mathcal{S}(Z(\mA)_+)$, let $\Theta_\psi\left(\varphi,\phi\right)$ (resp. $\theta_\psi\left(\varphi,\phi\right)$) be the theta lift of $\sigma$ (resp. the restriction of $\varphi$ to $\mathrm{GSO}_{2,2}\left(\mA\right)$) to $G\left(\mA\right)$. Then we have \begin{equation} \label{GO22 to GSp4} W_{\psi_{U_G}}(\Theta_\psi(\varphi, \phi)) = \int_{N_0(\mA) \backslash \mathrm{O}_{2,2}(\mA)} \phi(g^{-1}(y_{-2}, y_{-1}+y_1)) W_{\psi_{2,2}}(\sigma(g) \varphi) \, dg \end{equation} where $N_0$ denotes the unipotent subgroup \[ N_0 = \left\{ \begin{pmatrix}1&x&-x&x^2\\ 0&1&0&-x\\ 0&0&1&x\\ 0&0&0&1 \end{pmatrix}\right\}, \] which is the stabilizer of $y_{-2}$ and $y_{-1}+y_1$. Similarly we have \begin{equation} \label{4.2 Liu2} W_{\psi_{U_G}}(\theta_\psi(\varphi, \phi)) = \int_{N_0(\mA) \backslash \mathrm{SO}_{2,2}(\mA)} \phi(g^{-1}(y_{-2}, y_{-1}+y_1)) W_{\psi_{2,2}}(\sigma(g) \varphi) \, dg. \end{equation} \end{proposition} \begin{proof} Since the proofs are similar, we prove only \eqref{GO22 to GSp4}. 
From the definition of the theta lift, we may write \begin{multline*} \int_{N(F) \backslash N(\mA)} \Theta_\psi(\varphi, \phi) \left(ug\right) \psi_{U_G}(u)^{-1} \,du \\ = \int_{\mathrm{O}_{2,2}(F) \backslash \mathrm{O}_{2,2}(\mA)} \sum_{(a_1, a_2) \in \mathcal{X}} \omega_{\psi}(g, h) \,\phi(a_1, a_2) \varphi(h) \, dh \end{multline*} where \[ \mathcal{X} = \left\{ (a_1, a_2) \in Y(F)^2 : \begin{pmatrix} ( a_1, a_1) & ( a_1, a_2 ) \\ ( a_2, a_1)&( a_2, a_2 ) \end{pmatrix} = \begin{pmatrix} 0&0\\ 0&1\end{pmatrix}\right\}. \] Then as in \cite[Lemma~1]{Fu}, only $\left(a_1,a_2\right)\in \mathcal X$ such that $a_1$ and $a_2$ are linearly independent contributes in the above sum. Thus, by Witt's theorem, we may rewrite the above integral as \begin{align*} &\int_{\mathrm{O}_{2,2}(F) \backslash \mathrm{O}_{2,2}(\mA)} \sum_{ \gamma \in N_0(F) \backslash \mathrm{O}_{2,2}(F)} \omega_{\psi}(g, h) \phi( \gamma^{-1} y_{-2}, \gamma^{-1} (y_{-1}+y_1)) \, \varphi(h) \, dh \\ =& \int_{\mathrm{O}_{2,2}(F) \backslash \mathrm{O}_{2,2}(\mA)} \sum_{ \gamma \in N_0(F) \backslash \mathrm{O}_{2,2}(F)} \omega_{\psi}(g, \gamma h) \phi(y_{-2}, y_{-1}+y_1) \,\varphi(h) \, dh \\ = &\int_{N_0(F) \backslash \mathrm{O}_{2,2}(\mA)} \omega_{\psi}(g, h) \phi( y_{-2}, y_{-1}+y_1) \, \varphi(h) \, dh \\ = &\int_{N_0(\mA) \backslash \mathrm{O}_{2,2}(\mA)} \int_{N_0(F) \backslash N_0(\mA)} \omega_{\psi}(g, h) \phi( y_{-2}, y_{-1}+y_1) \, \varphi(n h) \, dn \, dh. \end{align*} Thus by \eqref{d: U_G} we have \begin{multline}\label{e: whittaker 1} W_{\psi_{U_G}}(\Theta_\psi\left(\varphi,\phi\right)) = \int_{N_0(\mA) \backslash \mathrm{O}_{2,2}(\mA)} \int_{N_2(F) \backslash N_2(\mA)} \int_{N_0(F) \backslash N_0(\mA)} \\ \omega_{\psi}(m(u)g, h) \phi( y_{-2}, y_{-1}+y_1) \varphi(nh) \psi_{U_G}\left(m(u)\right)^{-1} \, dh \, du. \end{multline} Here we have \[ \omega_{\psi}(m(u)g, h) \phi( y_{-2}, y_{-1}+y_1) =\omega_{\psi}(g, m_0(u)h) \phi( y_{-2}, y_{-1}+y_1) \] where $m_0(u) = \begin{pmatrix}1&\frac{a}{2}&\frac{a}{2}&\frac{a^2}{4}\\ 0&1&0&-\frac{a}{2}\\ 0&0&1&-\frac{a}{2}\\ 0&0&0&1 \end{pmatrix}$ for $u=\begin{pmatrix}1&a\\0&1\end{pmatrix}$, since $\psi_{U_G}(m(u))^{-1} = \psi(-a)$. By noting the decomposition \[ \begin{pmatrix} 1&x&y&-xy\\ 0&1&0&-y\\ 0&0&1&-x\\0&0&0&1\end{pmatrix} = \begin{pmatrix}1&\frac{x+y}{2}&\frac{x+y}{2}&\frac{(x+y)^2}{4}\\ 0&1&0&-\frac{x+y}{2}\\ 0&0&1&-\frac{x+y}{2}\\ 0&0&0&1 \end{pmatrix} \begin{pmatrix}1&\frac{x-y}{2}&-\frac{x-y}{2}&\frac{(x-y)^2}{4}\\ 0&1&0&-\frac{x-y}{2}\\ 0&0&1&-\frac{x-y}{2}\\ 0&0&0&1 \end{pmatrix}, \] the required identity \eqref{GO22 to GSp4} follows from \eqref{e: whittaker 1}. \end{proof} Recall the exact sequence \[ 1 \rightarrow \mathrm{GSO}_{2,2} \rightarrow \mathrm{GO}_{2,2} \rightarrow \mu_2 \rightarrow 1. \] Hence we have \[ \Theta_\psi(\varphi, \phi)(g) = \int_{\mu_2(F) \backslash \mu_2(\mA)} \theta_\psi(\varphi^\varepsilon: \phi^\varepsilon)(g) \, d\varepsilon \] where $\varphi^\varepsilon = \sigma(\varepsilon)\varphi$ and $\phi^\varepsilon =\omega_\psi(\varepsilon) \phi$. Thus we have \[ \left|W_{\psi_{U_G}}(\Theta_\psi(\varphi, \phi))\right|^2 = \int_{\mu_2(F) \backslash \mu_2(\mA)} \mathbb{W}_{\psi_{U_G}}(\theta_\psi(\varphi^\varepsilon, \phi^\varepsilon)) \, d\varepsilon \] where \[ \mathbb{W}_{\psi_{U_G}}(\theta_\psi(\varphi^\varepsilon, \phi^\varepsilon)) = \int_{\mu_2(F) \backslash \mu_2(\mA)} W_{\psi_{U_G}}(\theta_\psi(\varphi^\varepsilon, \phi^\varepsilon))\, \overline{W_{\psi_{U_G}}(\theta_\psi(\varphi, \phi))} \, d\varepsilon. 
\] \subsection{Lapid-Mao formula} Let us recall the Lapid-Mao formula in the $\mathrm{GL}_2$ case. Let $(\tau, V_\tau)$ denote an irreducible cuspidal unitary automorphic representation of $\mathrm{GL}_2(\mA)$. Then for $f \in V_\tau$, its Whittaker period is defined by \[ W_2(f) = \int_{F \backslash \mA} f \begin{pmatrix}1&x\\ 0&1 \end{pmatrix} \psi(-x) \, dx \] with the Tamagawa measure $dx = \prod dx_v$. Let $v$ be a place of $F$. For $f_v \in \tau_v$ and $\widetilde{f}_v \in \overline{\tau}_v$, by \cite{Liu2} (see also \cite[Section~2]{LM} ), we may define \[ \mathcal{W}_2(f_v, \widetilde{f}_v) = \int_{F}^{st} \mathcal{B}_{\tau_v} (\tau_v(x_v)f_v, \widetilde{f}_v) \psi_v(-x_v) \, dx_v. \] Put \[ \mathcal{W}_2^\natural(f_v, \widetilde{f}_v) =\frac{L(1, \tau_v, \mathrm{Ad})}{\zeta_{F_v}(2)}\mathcal{W}_2(f_v, \widetilde{f}_v) \] which is equal to $1$ at almost all places $v$ by \cite[Proposition~2.14]{LM}. Let us define \[ \langle f, f \rangle = \int_{\mA^\times \mathrm{GL}_2(F) \backslash \mathrm{GL}_2(\mA)} |f(g)|^2 \,dg \] where $dg$ is the Tamagawa measure. We note that $\mathrm{Vol}\left(\mA^\times \mathrm{GL}_2(F)\backslash \mathrm{GL}_2(\mA), dg\right)=2$. Further, let us take a local $\mathrm{GL}_2(F_v)$-invariant pairing $\langle \,,\, \rangle_v$ on $\tau_v \times \tau_v$ such that $\langle f, f \rangle =\prod \langle f_v, f_v \rangle_v$. Then by \cite[Theorem~4.1]{LM}, we have \begin{equation} \label{LM GL2} |W_2(f)|^2 =\frac{1}{2} \cdot \frac{\zeta_F(2)}{L(1, \tau, \mathrm{Ad})}\, \prod \mathcal{W}_2^\natural(f_v, \overline{f}_v). \end{equation} for a factorizable vector $f = \otimes f_v \in V_\tau$. \subsection{Local pull-back computation} We fix a place $v$ of $F$ which will be suppressed from the notation in this section. Further, we simply write $X(F)$ by $X$ for any object $X$ defined over $F$. Let $\sigma$ be an irreducible tempered representation of $\mathrm{GO}_{2,2}$ such that its big theta lift $\Theta(\sigma)$ to $H$ is non-zero. Because of the Howe duality proved by Howe~\cite{Ho1}, Waldspurger~\cite{Wa} and Gan-Takeda~\cite{GT}, combined with Roberts~\cite{Rob}, $\Theta(\sigma)$ has a unique irreducible quotient, which we denote by $\pi$. Put $R= \{(g, h) \in G \times \mathrm{GO}_{2,2} : \lambda(g)=\nu(h)\}.$ Then we have a unique $R$-equivariant map \[ \theta : \omega_{\psi} \otimes \sigma \rightarrow \pi. \] Let $\mathcal{B}_\omega : \omega_\psi \otimes \overline{\omega_\psi} \rightarrow \mC$ be the canonical bilinear pairing defined by \[ \mathcal{B}_\omega(\phi, \tilde{\phi})=\int_{V^2} \phi(x) \widetilde{\phi}(x) \, dx. \] By \cite[Lemma~5.6]{GI0}, the pairing $\mathcal{Z} : (\sigma \otimes \overline{\sigma}) \otimes (\omega_\psi \otimes \overline{\omega_\psi}) \rightarrow \mC$, defined as \[ \mathcal{Z}(\varphi, \widetilde{\varphi}, \phi, \widetilde{\phi}) =\frac{\zeta_F(2) \zeta_F(4)}{L(1, \sigma, \mathrm{std})} \int_{\mathrm{O}_{2,2}} \mathcal{B}_\omega(\omega_\psi(h)\phi, \widetilde{\phi}) \langle \sigma(h)\varphi, \widetilde{\varphi} \rangle \, dh, \] which converges absolutely by \cite[Lemma~3.19]{Liu2}, gives a pairing $\mathcal{B}_\pi : \pi \otimes \overline{\pi} \rightarrow \mC$ by \[ \mathcal{B}_\pi(\theta(\varphi, \phi), \theta(\widetilde{\varphi}, \widetilde{\phi}))=\mathcal{Z}(\varphi, \widetilde{\varphi}, \phi, \widetilde{\phi}). \] \begin{proposition} We write $y_0=(y_{-2}, y_{-1}+y_1)$. 
For any $u \in N_{2}$,
\begin{multline*}
\left(\frac{\zeta_F(2) \zeta_F(4)}{L(1, \sigma, \mathrm{std})} \right)^{-1} \int_{N_H}^{st} \mathcal{B}_\pi (\pi(n m(u)) \theta(\varphi, \phi),\theta(\widetilde{\varphi}, \widetilde{\phi}) ) \psi_{U_H}(n)^{-1} \, dn \\
= \int_{\mathrm{O}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \left( \omega_{\psi}(g, m(u))\phi \right)(y_0) \overline{\widetilde{\phi}(h^{-1} \cdot y_0)} \langle \sigma(g) \varphi, \sigma(h) \widetilde{\varphi} \rangle \, dg \, dh.
\end{multline*}
\end{proposition}
Let us define
\[
\mathcal{W}_{\psi_{U_H}}(f_1, f_2) = \int_{U_H}^{st} \mathcal{B}_\pi(\pi(u)f_1, f_2) \psi_{U_H}^{-1}(u) \, du.
\]
Take the measure $dh_0 =2 dh|_{\mathrm{SO}_{2,2}}$. Then
\begin{multline*}
\left(\frac{\zeta_F(2) \zeta_F(4)}{L(1, \sigma, \mathrm{std})} \right)^{-1} \mathcal{W}_{\psi_{U_H}}(\theta(\varphi, \phi), \theta(\widetilde{\varphi}, \widetilde{\phi})) \\
=\int_{N_2}^{st} \int_{\mathrm{O}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \left( \omega_{\psi}(g, m(u))\phi \right)(y_0) \overline{\widetilde{\phi}(h^{-1} \cdot y_0)} \\
\times\langle \sigma(g) \varphi, \sigma(h) \widetilde{\varphi} \rangle dg \, dh \, du.
\end{multline*}
By arguments similar to those in \cite[Section~3.4.2]{FM2} and \cite[Section~5.4]{FM3}, we see that this is equal to
\[
\int_{N_0 \backslash \mathrm{O}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \int_{N_{2,2}}^{st} \left( \omega_{\psi}(g, m(u))\phi \right)(y_0) \overline{\widetilde{\phi}(h^{-1} \cdot y_0)} \langle \sigma(g) \varphi, \sigma(h) \widetilde{\varphi} \rangle dg \, dh \, du.
\]
Further, it is equal to
\begin{multline} \label{4.6 Liu2}
\sum_{\varepsilon=\pm1} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \int_{N_{2,2}}^{st} \left( \omega_{\psi}(g, m(u))\phi^\varepsilon \right)(y_0) \overline{\widetilde{\phi}(h^{-1} \cdot y_0)} \\
\times \langle \sigma(g) \varphi^\varepsilon, \sigma(h) \widetilde{\varphi} \rangle \, dg \, dh \, du\\
=\sum_{\varepsilon=\pm1} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \phi^\varepsilon(g^{-1} \cdot y_0) \widetilde{\phi}(h^{-1} \cdot y_0) \mathcal{W}_{2,2}(\sigma(g)\varphi^\varepsilon, \sigma(h)\widetilde{\varphi}) \, dg \, dh
\end{multline}
where we define
\[
\mathcal{W}_{2,2}(\varphi_1, \varphi_2):= \int_{N_{2,2}}^{st} \langle \sigma(u)\varphi_1, \varphi_2 \rangle \psi_{2,2}^{-1}(u) \,du \quad\text{for}\quad \varphi_i \in V_\sigma.
\]
Let us introduce a measure $ d^\prime h = \zeta_F(2)^2 dh $. Then we get
\begin{multline*}
\mathcal{W}_{\psi_{U_H}}^\natural(\theta(\varphi, \phi), \theta(\widetilde{\varphi}, \widetilde{\phi}))= \sum_{\varepsilon=\pm1} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \int_{N_0 \backslash \mathrm{SO}_{2,2}} \phi^\varepsilon(g^{-1} \cdot y_0) \widetilde{\phi}(h^{-1} \cdot y_0) \\
\times \mathcal{W}_{2,2}^\natural(\sigma(g)\varphi^\varepsilon, \sigma(h)\widetilde{\varphi}) \, dg \, d^\prime h.
\end{multline*}
Here
\[
\mathcal{W}_{2,2}^\natural(\sigma(g)\varphi^\varepsilon, \sigma(h)\widetilde{\varphi}) = \frac{L(1, \sigma_1, \mathrm{Ad}) L(1, \sigma_2, \mathrm{Ad})}{\zeta_F(2)^2} \mathcal{W}_{2,2}(\sigma(g)\varphi^\varepsilon, \sigma(h)\widetilde{\varphi}).
\]
\subsection{Proof of Theorem~\ref{gsp whittaker}}
Let $(\sigma, V_{\sigma})$ be an irreducible cuspidal automorphic representation of the group $\mathrm{GO}_{2,2}(\mA)$. Suppose that $\sigma$ is induced by the representation $\sigma_1 \boxtimes \sigma_2$ of $\mathrm{GL}_2(\mA) \times \mathrm{GL}_2(\mA)$.
For $f = f_1 \otimes f_2 \in V_{\sigma_1} \otimes V_{\sigma_2}$, we have \[ W_{U_H}(f) = \int_{F \backslash \mA} f_1 \left( \begin{pmatrix}1&x\\ &1 \end{pmatrix}h_1\right) \psi(-x) \, dx \int_{F \backslash \mA} f_2 \left( \begin{pmatrix}1&x\\ &1 \end{pmatrix} h_2\right) \psi(-x) \, dx \] for $h=(h_1, h_2) \in \mathrm{SO}_{2,2}(\mA)$. Moreover, for any place $v$ of $F$, we have \[ \mathcal{W}_{2,2}^\natural(\varphi_v, \widetilde{\varphi}_v) = \mathcal{W}_2^\natural(\varphi_{1,v}, \widetilde{\varphi}_{1,v})\mathcal{W}_2^\natural(\varphi_{2,v}, \widetilde{\varphi}_{2,v}) \] with $\varphi_v = (\varphi_{1,v}, \varphi_{2,v})$ and $\widetilde{\varphi}_v = (\widetilde{\varphi}_{1,v}, \widetilde{\varphi}_{2,v})$. Then by \eqref{4.2 Liu2} and the Lapid-Mao formula \eqref{LM GL2}, we obtain \begin{multline*} \mathbb{W}_{\psi_{U_H}}(\theta_\psi(\varphi^\varepsilon, \phi^\varepsilon)) = \frac{1}{4} \frac{\zeta_F(2)^2}{L(1, \sigma_1, \mathrm{Ad})L(1, \sigma_2, \mathrm{Ad})} \\ \times \int_{\mu_2(F) \backslash \mu_2(\mA)} \prod_v \int \int_{(N_{0}(F_v) \backslash \mathrm{SO}_{2,2})^2} \left(\prod_{\alpha=1, 2} \mathcal{W}_2^\natural((\sigma(g_v)\varphi_{v}^\varepsilon)_\alpha, (\overline{\sigma}(h_v)\overline{\varphi}_{v})_\alpha) \right) \\ \times \phi_{v}^\varepsilon (g_v^{-1} \cdot y_0)\overline{\phi}_{v} (h_v^{-1} \cdot y_0) \, dg \, dh \\ =\frac{1}{4} \frac{\zeta_F(2)^2}{L(1, \sigma_1, \mathrm{Ad})L(1, \sigma_2, \mathrm{Ad})} \int_{\mu_2(F) \backslash \mu_2(\mA)} \prod_v \int \int_{(N_{0}(F_v) \backslash \mathrm{SO}_{2,2})^2} \\ \mathcal{W}_{2,2}^\natural(\sigma(g_v)\varphi_v, \overline{\sigma}_v(h_v)\overline{\varphi}_v) \phi_{v}^\varepsilon (g_v^{-1} \cdot y_0)\overline{\phi}_{v} (h_v^{-1} \cdot y_0) \, dg \, dh. \end{multline*} By \eqref{4.6 Liu2}, this is equal to \[ \frac{1}{4} \frac{\zeta_F(2) \zeta_F(4)}{L(1, \sigma_1, \mathrm{Ad})L(1, \sigma_2, \mathrm{Ad})}\prod \mathcal{W}_{\psi_{U_H}}^\natural(\theta(\varphi_v, \phi_v), \theta(\overline{\varphi}_v, \overline{\phi}_v)), \] and thus this completes our proof of Theorem~\ref{gsp whittaker}. \section{Explicit computation of local Bessel periods at the real place} \label{s:e comp} The goal of this appendix is to compute explicitly the local Bessel periods at the real place and to complete our proof of Theorem~\ref{t: vector valued boecherer}. In this section, we use the same notation as in Section~\ref{GBC}. For a newform $\varPhi\in S_\varrho\left(\Gamma_0\left(N\right)\right)$ in Theorem~\ref{t: vector valued boecherer}, we define a scalar valued automorphic form $\phi_{\varPhi,S}$ on $ G\left(\mathbb A\right)$ by \begin{equation} \label{def vector adelize} \phi_{\varPhi,S}\left(g\right)= \left(\varphi_\varPhi\left(g\right), Q_{S,\varrho}\right)_{2r} \quad\text{for $g\in G\left(\mathbb A\right)$}, \end{equation} where $\varphi_\varPhi$ is the adelization of $\varPhi$ given by \eqref{e: vector valued} and $Q_{S,\varrho}$ by \eqref{e: def of Q}. We note that by the argument in \cite[3.2]{DPSS}, $\phi_{\varPhi,S}$ is a factorizable vector $\phi_{\varPhi, S}=\otimes_v\,\phi_{\varPhi, S, v}$. For a place $v$ of $\mathbb Q$, we define $J_v$ by \begin{equation}\label{e: j infty} J_v= \frac{\alpha_v^{\natural} \left(\phi_{\varPhi,S,v}, \phi_{\varPhi,S,v}\right)}{ \langle \phi_{\varPhi, S,v}, \phi_{\varPhi, S,v} \rangle_v}. \end{equation} It is clear that $J_v$ remains invariant under replacing $\phi_{\varPhi, S,v}$ by its non-zero scalar multiple. 
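Indeed, both $\alpha_v^{\natural}$ and $\langle\,,\,\rangle_v$ are sesquilinear, so replacing $\phi_{\varPhi, S,v}$ by $c\,\phi_{\varPhi, S,v}$ with $c\in\mathbb C^\times$ multiplies the numerator and the denominator of \eqref{e: j infty} by the same factor $\left|c\right|^{2}$.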
Further, we put
\begin{equation}\label{e: def of c}
\mathcal C= C_\xi\cdot \frac{\zeta_\mQ\left(2\right)\, \zeta_\mQ\left(4\right)}{L(1, \chi_{E})}
\end{equation}
with the Haar measure constant $C_\xi$ defined by \eqref{C_{S_D}}. Then the following identity holds.
\begin{theorem} \label{arch comp}
\begin{equation} \label{e:arch comp}
C\left(Q_{S,\varrho}\right) \mathcal C J_\infty=\frac{2^{4k+6r-1}e^{-4\pi\,\mathrm{tr}\left(S\right)}}{ D_E}.
\end{equation}
Recall that $C\left(Q_{S,\varrho}\right)$ is defined by \eqref{e: norm constant} for $v^\prime=Q_{S,\varrho}$.
\end{theorem}
\begin{Remark} \label{scalar rem}
In the scalar valued case, i.e. $r=0$, the explicit computation of $J_\infty$ is done in Dickson~et al.~\cite[3.5]{DPSS} using the explicit formula for matrix coefficients when $k \geq 3$. Meanwhile Hsieh and Yamana~\cite[Proposition~5.7]{HY} compute $J_\infty$ in a different way when $k \geq 2$, based on Shimura's work on confluent hypergeometric functions.
\end{Remark}
We note that the left hand side of \eqref{e:arch comp} depends only on the archimedean representation $\pi\left(\varPhi\right)_\infty$ and the vector $\phi_{\varPhi, S,\infty}$. Thus our strategy is to first obtain an explicit formula \eqref{e: formula 1} for the Bessel periods of vector valued Yoshida lifts by combining the results in Hsieh and Namikawa~\cite{HN1,HN2}, Chida and Hsieh~\cite{CH}, Martin and Whitehouse~\cite{MW}, and then to evaluate $C\left(Q_{S,\varrho}\right) \mathcal C J_\infty$ by singling out the real place contribution, comparing \eqref{e: formula 1} with \eqref{e: main identity}.
\subsection{Explicit formula for Bessel periods of Yoshida lifts}
For a prime number $p$, let
\[
\Gamma_0^{(1)}\left(p\right)= \left\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in \mathrm{SL}_2\left(\mathbb Z\right): c\equiv 0\pmod{p}\right\}
\]
and $S_k\left(\Gamma_0^{(1)}\left(p\right)\right)$ the space of cusp forms of weight $k$ with respect to $\Gamma_0^{(1)}\left(p\right)$. In order to ensure that what follows is non-vacuous, we first prove the following technical lemma.
\begin{lemma}\label{l: simultaneous non-vanishing}
Let $k_1$ and $k_2$ be integers with $k_1 \geq k_2\ge0$. Then there is a constant $N=N(k_1, k_2, E) \in \mR$ such that for any prime $p > N$, there exist distinct normalized newforms $f_i\in S_{2k_i+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ for $i=1,2$ satisfying the condition:
\begin{equation}\label{e: eigenvalue condition}
\text{the Atkin-Lehner eigenvalues of $f_i$ at $p$ for $i=1,2$ coincide.}
\end{equation}
\end{lemma}
\begin{proof}
We divide the proof into the following two cases:
\begin{subequations}\label{e: parities}
\begin{equation}\label{e: case 1}
k_1\equiv k_2\pmod{2};
\end{equation}
\begin{equation}\label{e: case 3}
k_1+1\equiv k_2\equiv 0\pmod{2}.
\end{equation}
\end{subequations}
Suppose that \eqref{e: case 1} holds. Then by Iwaniec, Luo and Sarnak~\cite[Corollary 2.14]{ILS}, there is a constant $N(k_1, k_2)$ such that, for any prime $p > N(k_1, k_2)$, there exist distinct normalized newforms $f_i\in S_{2k_i+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ for $i=1,2$ such that
\[
\varepsilon\left(1\slash 2,\pi_1\right)=\varepsilon\left(1\slash 2,\pi_2\right)
\]
where $\pi_i$ denotes the automorphic representation of $\mathrm{GL}_2\left(\mA\right)$ corresponding to $f_i$ for $i=1,2$.
Since $\pi_i$ is unramified at all prime numbers different from $p$ and the archimedean $\varepsilon$-factor of the weight $2k_i+2$ discrete series representation $\pi_{i,\infty}$ equals $\left(-1\right)^{k_i+1}$, we have
\[
\left(-1\right)^{k_1+1}\cdot\varepsilon_p\left(1\slash 2,\pi_1\right)= \left(-1\right)^{k_2+1}\cdot\varepsilon_p\left(1\slash 2,\pi_2\right).
\]
Hence $\varepsilon_p\left(1\slash 2,\pi_1\right)=\varepsilon_p\left(1\slash 2,\pi_2\right)$ by \eqref{e: case 1}. Then by the relationship between the local $\varepsilon$-factor at $p$ and the Atkin-Lehner eigenvalue at $p$ (e.g. \cite[4.4]{HN2}), we see that \eqref{e: eigenvalue condition} holds.
Suppose that \eqref{e: case 3} holds. Then by Michel and Ramakrishnan~\cite[Theorem~3]{MR} or Ramakrishnan and Rogawski~\cite[Corollary~B]{RR}, there exists a constant $N_1=N_1(k_1, E)$ such that for any prime $p > N_1$, there exists a normalized newform $f_1\in S_{2k_1+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ such that
\[
L\left(1\slash 2,\pi_1\right)L\left(1\slash 2,\pi_1\times\chi_E\right)\ne 0.
\]
In particular, $\varepsilon\left(1\slash 2,\pi_1\right) =1$, and thus as in the previous case, we have
\[
\left(-1\right)^{k_1+1}\cdot\varepsilon_p\left(1\slash 2,\pi_1\right) = 1.
\]
Moreover, by \cite[Corollary 2.14]{ILS}, there exists a constant $N_2 = N_2(k_2)$ such that for any prime $p > N_2$, there exists a normalized newform $f_2\in S_{2k_2+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ such that
\[
\varepsilon\left(1\slash 2,\pi_2\right) =-1.
\]
Then, taking the constant $N$ to be $\max(N_1, N_2)$, the condition \eqref{e: eigenvalue condition} holds by the same argument as above.
\end{proof}
\subsubsection{Vector valued Yoshida lift}
As for the Yoshida lift, we refer to our main references Hsieh and Namikawa~\cite{HN1,HN2} for the details. Let $k_1$ and $k_2$ be integers with $k_1\ge k_2\ge 0$. Then by Lemma~\ref{l: simultaneous non-vanishing}, we may take a prime number $p$ satisfying the condition:
\begin{equation}\label{e: inert condition}
\text{$p$ is odd, and inert and unramified in $E$}
\end{equation}
and may take distinct normalized newforms $f_i\in S_{2k_i+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ ($i=1,2$) satisfying the condition \eqref{e: eigenvalue condition}. For a non-negative integer $r$, we denote by $\left(\tau_r,\mathcal W_r\right)$ the representation $\left(\varrho,V_\varrho\right)$ of $\gl_2\left(\mathbb C\right)$ where $\varrho=\varrho_{\left(r,-r\right)}$, i.e. $\tau_r=\mathrm{Sym}^{2r}\otimes \det^{-r}$. We note that the action of the center of $\gl_2\left(\mathbb C\right)$ on $\mathcal W_r$ by $\tau_r$ is trivial and the pairing $\left(\, ,\,\right)_{2r}$ is $\gl_2\left(\mathbb C\right)$-invariant by \eqref{e: equivariance}. For the prime number $p$ chosen above, let $D=D_{p,\infty}$ be the unique division quaternion algebra over $\mathbb Q$ which ramifies precisely at $p$ and $\infty$. Let $\mathcal O_D$ be the maximal order of $D$ specified as in \cite[3.2]{HN1}, and put $\hat{\mathcal O}_D=\mathcal O_D\otimes_{\mathbb Z} \hat{\mathbb Z}$.
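As is implicit in the appearance of the vectors $\left(XY\right)^{k_i}$ in Lemma~\ref{l: bessel for Yoshida} below, $\mathcal W_r$ may be realized as the space of homogeneous polynomials of degree $2r$ in the two variables $X$ and $Y$, so that $\left(XY\right)^{r}\in\mathcal W_r$. We also note that the triviality of the central action recorded above is immediate from the definition of $\tau_r$: for $z\in\mathbb C^\times$,
\[
\tau_r\left(z\cdot 1_2\right)=\mathrm{Sym}^{2r}\left(z\cdot 1_2\right)\otimes{\det}^{-r}\left(z\cdot 1_2\right)=z^{2r}\cdot z^{-2r}=\mathrm{id}_{\mathcal W_r}.
\]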
\begin{Definition} $\mathcal A_r\left(D^\times\left(\mathbb A\right), \hat{\mathcal O}_D\right)$, the space of automorphic forms of weight $r$ and level $\hat{\mathcal O}_D$ on $D^\times\left(\mathbb A\right)$ is a space of functions $\mathbf{g}:D^\times\left(\mathbb A\right) \to \mathcal W_r$ satisfying \[ \mathbf{g}\left(z\gamma h u\right)=\tau_r\left(h_\infty\right)^{-1} \mathbf{g}\left(h_f\right) \] for $z\in \mathbb A^\times$, $\gamma\in D^\times\left(\mathbb Q\right)$, $u\in \hat{\mathcal O}_D^\times$ and $h=\left(h_\infty, h_f\right) \in D^\times\left(\mathbb R\right)\times D^\times\left(\mathbb A_f\right)$. \end{Definition} For $i=1,2$, let $\pi_i$ be the irreducible cuspidal automorphic representation of $\gl_2\left(\mathbb A\right)$ corresponding to $f_i$. Let $\pi^D_i$ be the Jacquet-Langlands transfer of $\pi_i$ to $D^\times\left(\mathbb A\right)$. We denote by $\mathcal A_{k_i}\left(D^\times\left(\mathbb A\right), \hat{\mathcal O}_D\right)\left[\pi_i^D\right]$ the $\pi_i^D$-isotypic subspace of $\mathcal A_{k_i}\left(D^\times\left(\mathbb A\right), \hat{\mathcal O}_D\right)$. Then $\mathcal A_{k_i}\left(D^\times\left(\mathbb A\right), \hat{\mathcal O}_D\right)\left[\pi_i^D\right]$ has a subspace of newforms, which is one dimensional. Let us take newforms $\mathbf{f}_i\in \mathcal A_{k_i}\left(D^\times\left(\mathbb A\right), \hat{\mathcal O}_D\right)\left[\pi_i^D\right]$ for $i=1,2$ and fix. Then to the pair $\mathbf{f}=\left(\mathbf{f}_1,\mathbf{f}_2\right)$, Hsieh and Namikawa~\cite[3.7]{HN1} associate the \emph{Yoshida lift} $\theta_{\mathbf{f}}$, a $V_\varrho$-valued cuspidal automorphic form on $ G\left(\mathbb A\right)$ where $\varrho=\varrho_\kappa$ with \[ \kappa=\left(k_1+k_2+2, k_1-k_2+2\right)\in\mathbb L. \] The \emph{classical Yoshida lift} $\theta^\ast_{\mathbf{f}}\in S_\varrho\left(\Gamma_0\left(p\right)\right)$ is also attached to $\mathbf{f}$ in \cite[3.7]{HN1} so that $\theta_{\mathbf{f}}$ is obtained from $\theta^\ast_{\mathbf{f}}$ by the adelization procedure in \eqref{e: vector valued}. \subsubsection{Bessel periods of Yoshida lifts} Let $\phi_{\mathbf{f},S}$ denote a scalar valued automorphic form attached to $\theta^\ast_{\mathbf{f}}$ as in \eqref{def vector adelize}. Hsieh and Namikawa evaluated the Bessel periods of $\phi_{\mathbf{f},S}$ in \cite{HN1}. First we remark that by \cite[Theorem~5.3]{HN1}, for any sufficiently large prime number $q$ which is different from $p$, we may take a character $\Lambda_0$ of $\mathbb A_E^\times$ satisfying: \begin{subequations}\label{e: Lambda} \begin{equation}\label{e: Lambda 1} L\left(1\slash 2,\pi_1\otimes\mathcal{AI} \left(\Lambda_0 \right)\right) L\left(1\slash 2,\pi_2\otimes\mathcal{AI} \left(\Lambda_0^{-1} \right)\right)\ne 0; \end{equation} \begin{equation}\label{e: Lambda 2} \text{the conductor of $\Lambda_0$ is $q^m\mathcal O_E$ where $m>0$}; \end{equation} \begin{equation}\label{e: Lambda 3} \text{$\Lambda_0\mid_{\mathbb A^\times}$ is trivial}; \end{equation} \begin{equation}\label{e: Lambda 4} \text{$\Lambda_{0,\infty}$ is trivial.} \end{equation} \end{subequations} Then \cite[Proposition~4.7]{HN1} yields the following formula. 
\begin{lemma}\label{l: bessel for Yoshida}
We have
\begin{equation}\label{e: Bessel 1}
B_{S, \Lambda_0, \psi}\left(\phi_{\mathbf{f},S}\right)= q^{2m}\cdot\left(-2\sqrt{-1}\,\right)^{k_1+k_2}\cdot e^{-2\pi\,\mathrm{tr}\left(S\right)}\cdot \prod_{i=1}^2\, P\left(\mathbf{f}_i, \Lambda_0^{\alpha_i},1_2\right)
\end{equation}
where $\alpha_i=\left(-1\right)^{i+1}$ and
\[
P\left(\mathbf{f}_i, \Lambda_0^{\alpha_i},1_2\right)= \int_{E^\times \mathbb A^\times\backslash \mathbb A_E^\times} \left(\left(XY\right)^{k_i}, \mathbf{f}_i\left(t\right)\right)_{2k_i} \cdot \Lambda_0^{\alpha_i}\left(t\right)\, dt.
\]
\end{lemma}
From \eqref{e: Bessel 1}, we have
\begin{equation}\label{e: Bessel 2}
\left| B_{S, \Lambda_0, \psi}\left(\phi_{\mathbf{f},S}\right)\right|^2= q^{4m}\cdot 2^{2(k_1+k_2)}\cdot e^{-4\pi\,\mathrm{tr}\left(S\right)}\cdot \prod_{i=1}^2 \, \left|P\left(\mathbf{f}_i, \Lambda_0^{\alpha_i},1_2\right)\right|^2.
\end{equation}
Since $p$ is odd and inert in $E$, we may evaluate the right hand side of \eqref{e: Bessel 2} by Martin and Whitehouse~\cite{MW}. Namely, the following formula holds by \cite[Theorem~4.1]{MW}.
\begin{lemma}\label{l: MW}
We have
\begin{multline}\label{e: Bessel 3}
\frac{\left| P(\mathbf{f}_i, \Lambda_0^{\alpha_i}, 1_2) \right|^2}{ \left\|\phi_{\mathbf{f}_i} \right\|^2} =\frac{1}{4}\cdot \frac{\xi(2)}{\zeta_{\mQ_p}(2)}\cdot \frac{L\left(1 \slash 2, \pi_i \otimes \mathcal{AI}\left(\Lambda_0^{\alpha_i}\right)\right)}{L\left(1, \pi_i, \mathrm{Ad}\right)} \cdot \left(1+p^{-1}\right)^{-1} \\
\times \frac{\Gamma\left(2k_i+2\right)}{ 2\,q^m\, \pi \,D_E^{1\slash 2} \,\, \Gamma\left(k_i+1\right)^2}
\end{multline}
where $\xi(s)$ denotes the completed Riemann zeta function, $\phi_{\mathbf{f}_i}$ the scalar valued automorphic form on $D^\times\left(\mathbb A \right)$ defined by
\[
\phi_{\mathbf{f}_i}\left(h\right)= \left(\left(XY\right)^{k_i},\mathbf{f}_i\left(h\right)\right)_{2k_i} \quad\text{for $h\in D^\times\left(\mathbb A\right)$}
\]
and
\[
\left\|\phi_{\mathbf{f}_i} \right\|^2= \int_{\mathbb A^\times D^\times\left(\mathbb Q\right)\backslash D^\times\left(\mathbb A\right)} \left|\phi_{\mathbf{f}_i}\left(h\right) \right|^2\, dh.
\]
Here $dh$ is the Tamagawa measure on $\mA^\times \backslash D^\times\left(\mathbb A\right)$, and thus
\[
\mathrm{Vol}(\mathbb A^\times D^\times\left(\mathbb Q\right)\backslash D^\times\left(\mathbb A\right),dh)=2.
\]
\end{lemma}
\begin{Remark}
The factor $\frac{1}{4}$ in \eqref{e: Bessel 3} originates from the difference between the measure used here and the one used in \cite{MW}.
\end{Remark}
In order to utilize the explicit inner product formula for vector valued Yoshida lifts in Hsieh and Namikawa~\cite{HN2}, we need the following lemma.
\begin{lemma}\label{l: inner product change}
Let us define an inner product $\langle\mathbf{f}_i,\mathbf{f}_i\rangle$ for $i=1,2$ by
\begin{equation} \label{finite sum PI}
\langle\mathbf{f}_i,\mathbf{f}_i\rangle= \sum_{a}\, \langle\mathbf{f}_i\left(a\right), \mathbf{f}_i\left(a\right)\rangle_{\tau_{k_i}} \cdot \frac{1}{\# \,\Gamma_a}
\end{equation}
where $\left<\,,\,\right>_{\tau_{k_i}}$ is defined by \eqref{e: def of hermitian}, $a$ runs over double coset representatives of $D^\times\left(\mathbb Q\right)\backslash D^\times\left(\mathbb A_f\right)\slash \hat{\mathcal O}_D^\times$ and $\Gamma_a=\left(a\,\hat{\mathcal O}_D^\times \, a^{-1}\cap D^\times\left(\mathbb Q \right)\right)\slash \left\{\pm 1\right\}$.
Then for $i=1,2$, we have \begin{equation}\label{e: little inner product} \left\|\phi_{\mathbf{f}_i}\right\|^2= 2^3\cdot 3\cdot p^{-1}\left(1-p^{-1}\right)^{-1}\cdot \frac{\Gamma\left(k_i+1\right)^2}{\Gamma\left(2k_i+1\right)} \cdot\frac{1}{\left(2k_i+1\right)^2}\cdot \langle\mathbf{f}_i,\mathbf{f}_i\rangle. \end{equation} \end{lemma} \begin{proof} Since $\left\|\phi_{\mathbf{f}_i}\right\|^2=\left\|\pi_i^D\left(h_\infty\right) \phi_{\mathbf{f}_i}\right\|^2$ for $h_\infty\in D^\times\left(\mathbb R\right)$, we have \begin{multline*} \left\|\phi_{\mathbf{f}_i}\right\|^2 =\frac{1}{ \mathrm{Vol}\left(\mathbb R^\times\backslash D^\times\left(\mathbb R\right), dh_\infty\right)} \\ \times \int_{\mathbb R^\times\backslash D^\times\left(\mathbb R\right)} \int_{\mathbb A^\times D^\times\left(\mathbb Q\right) \backslash D^\times\left(\mathbb A\right)} \left|\phi_{\mathbf{f}_i}\left(hh_\infty\right)\right|^2 dh\,dh_\infty. \end{multline*} By interchanging the order of integration, we have \begin{multline*} \left\|\phi_{\mathbf{f}_i}\right\|^2 = \frac{1}{ \mathrm{Vol}\left(\mathbb R^\times\backslash D^\times\left(\mathbb R\right), dh_\infty\right)} \\ \times \int_{\mathbb A^\times D^\times\left(\mathbb Q\right) \backslash D^\times\left(\mathbb A\right)} \int_{\mathbb R^\times\backslash D^\times\left(\mathbb R\right)} \left|\phi_{\mathbf{f}_i}\left(hh_\infty\right)\right|^2 dh_\infty\,dh. \end{multline*} Here the Schur orthogonality implies \begin{multline*} \frac{1}{ \mathrm{Vol}\left(\mathbb R^\times\backslash D^\times\left(\mathbb R\right), dh_\infty\right)} \int_{\mathbb R^\times\backslash D^\times\left(\mathbb R\right)} \left| \left(\left(XY\right)^{k_i},\mathbf{f}_i\left(hh_\infty\right)\right)_{2k_i} \right|^2 \,dh_\infty \\ =d_i^{-1} \cdot \left(\left(XY\right)^{k_i},\left(XY\right)^{k_i}\right)_{2k_i} \cdot \left(\mathbf{f}_i\left(h\right),\overline{\mathbf{f}_i\left(h\right)}\right)_{2k_i} \end{multline*} where $d_i=\dim \mathrm{Sym}^{2k_i}=2k_i+1$ and $ \left(\left(XY\right)^{k_i},\left(XY\right)^{k_i}\right)_{2k_i} =\left(-1\right)^{k_i} \begin{pmatrix}2k_i\\k_i\end{pmatrix}^{-1}$. Hence \[ \left\| \phi_{\mathbf{f}_i} \right\|^2 =\begin{pmatrix}2k_i\\ k_i \end{pmatrix}^{-1} (2k_i+1)^{-1} \int_{\mA^\times D^\times(\mQ) \backslash D^\times(\mA)} \left(\mathbf{f}_i\left(h\right), \overline{\mathbf{f}_i\left(h\right)} \right)_{2k_i} \, dh. \] By \cite[Lemma~6]{HN1}, we have \begin{multline} \label{PI difference} \int_{\mA^\times D^\times(\mQ) \backslash D^\times(\mA)} \left( \mathbf{f}_i\left(h\right), \overline{\mathbf{f}_i\left(h\right)} \right)_{2k_i} \, dh \\ =\frac{(-1)^{k_i}}{2k_i+1} \int_{\mA^\times D^\times(\mQ) \backslash D^\times(\mA)} \langle\mathbf{f}_i\left(h\right), \mathbf{f}_i\left(h\right)\rangle_{\tau_{k_i}} \, dh. \end{multline} Finally by Chida and Hsieh~\cite[(3.10)]{CH} with the following Remark~\ref{2 power miss CH}, we obtain \eqref{e: little inner product}. \end{proof} \begin{Remark} \label{2 power miss CH} In \cite{CH}, the Eichler mass formula is used to express the right hand side of \eqref{PI difference} in terms of the inner product defined by \eqref{finite sum PI}. There is a typo in the Eichler mass formula in \cite[p.103]{CH}. The right hand side of the formula quoted there should be multiplied by $2$. \end{Remark} Let us recall the inner product formula for $\theta^\ast_{\mathbf{f}}$ by Hsieh and Namikawa~\cite[Theorem~A]{HN2}. 
\begin{proposition}\label{p: inner product classical}
We have
\begin{multline}\label{e: inner product classical}
\frac{ \langle\theta_{\mathbf{f}}^\ast, \theta_{\mathbf{f}}^\ast\rangle_{ \varrho }}{ \langle\mathbf{f}_1,\mathbf{f}_1\rangle\langle\mathbf{f}_2,\mathbf{f}_2\rangle} \\
=L\left(1,\pi_1\times\pi_2\right)\cdot \frac{2^{-\left(2k_1+6\right)}}{\left(2k_1+1\right)\left(2k_2+1\right)} \cdot \frac{1}{p^2\left(1+p^{-1}\right)\left(1+p^{-2}\right)}.
\end{multline}
Here $\langle\theta_{\mathbf{f}}^\ast, \theta_{\mathbf{f}}^\ast\rangle_{ \varrho }$ is given by
\[
\langle \theta_{\mathbf{f}}^\ast, \theta_{\mathbf{f}}^\ast \rangle_{ \varrho } = \frac{1}{[\mathrm{Sp}_2(\mZ) : \Gamma_0\left(p\right)]} \int_{\Gamma_0\left(p\right) \backslash \mathfrak{H}_2} \langle\theta_{\mathbf{f}}^\ast(Z), \theta_{\mathbf{f}}^\ast(Z) \rangle_{\varrho} \left(\det Y\right)^{k_1-k_2-1} \, dX\, dY
\]
with $\varrho=\varrho_\kappa$ where $\kappa=\left(k_1+k_2+2,k_1-k_2+2\right)$.
\end{proposition}
Thus by combining \eqref{e: Bessel 2}, \eqref{e: Bessel 3}, \eqref{e: little inner product} and \eqref{e: inner product classical}, we have
\begin{multline} \label{e: formula 1}
\frac{\left|B_{S, \Lambda_0, \psi}\left(\phi_{\mathbf{f},S}\right)\right|^2}{ \langle\theta_{\mathbf{f}}^\ast, \theta_{\mathbf{f}}^\ast \rangle_{ \varrho }} = \frac{2^{4k_1 + 2k_2+ 5} e^{-4\pi \,\mathrm{tr}\left(S\right) } }{D_E} \cdot 2\left(1+p^{-1}\right)\left(1+p^{-2}\right) \cdot q^{2m} \\
\times \frac{L\left(1 \slash 2, \pi_1 \otimes \mathcal{AI}\left(\Lambda_0\right)\right) L\left(1 \slash 2, \pi_2 \otimes \mathcal{AI}\left(\Lambda_0^{-1}\right)\right) }{ L\left(1, \pi_1, \mathrm{Ad}\right) L\left(1, \pi_2, \mathrm{Ad}\right) L\left(1, \pi_1 \times \pi_2\right)}.
\end{multline}
Here we note that both sides of \eqref{e: formula 1} are non-zero due to the conditions \eqref{e: eigenvalue condition} and \eqref{e: Lambda}.
\subsection{Proof of Theorem~\ref{arch comp}}
Since the Ichino-Ikeda type formula has been proved for Yoshida lifts by Liu \cite[Theorem~4.3]{Liu2}, the computations in Dickson et al.~\cite{DPSS} imply
\begin{multline} \label{e: formula 2}
\frac{\left|B_{S,\Lambda_0, \psi}\left(\phi_{\mathbf{f},S}\right)\right|^2}{ \langle\phi_{\mathbf{f},S},\phi_{\mathbf{f},S} \rangle } = \frac{\mathcal CJ_\infty}{2^2} \cdot 2\left(1+p^{-1}\right)\left(1+p^{-2}\right) \cdot J_q \\
\times \frac{L\left(1 \slash 2, \pi_1 \otimes \mathcal{AI}\left(\Lambda_0\right)\right) L\left(1 \slash 2, \pi_2 \otimes \mathcal{AI}\left(\Lambda_0^{-1}\right)\right) }{ L\left(1, \pi_1, \mathrm{Ad}\right) L\left(1, \pi_2, \mathrm{Ad}\right) L\left(1, \pi_1 \times \pi_2\right)}.
\end{multline}
Thus in order to evaluate $J_\infty$, we need to determine $J_q$. Here we use a scalar valued Yoshida lift to evaluate $J_q$. First we recall that \eqref{e:arch comp} holds in the scalar valued case, i.e. when $k_2=0$, as we noted in Remark~\ref{scalar rem}. By Lemma~\ref{l: simultaneous non-vanishing}, when $q$ is large enough, there also exist distinct normalized newforms $f_1^\prime\in S_{2k_1+2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ and $f_2^\prime\in S_{2}\left(\Gamma_0^{(1)}\left(p\right)\right)$ satisfying the condition \eqref{e: eigenvalue condition}, and a character $\Lambda_0^\prime$ of $\mathbb A_E^\times$ satisfying the conditions \eqref{e: Lambda} for $\pi_i^\prime$ $\left(i=1,2\right)$, where $\pi_i^\prime$ is the automorphic representation of $\mathrm{GL}_2\left(\mA\right)$ corresponding to $f_i^\prime$. Define $\mathbf{f}^\prime$ similarly for $\pi_1^\prime$ and $\pi_2^\prime$.
Since \eqref{e:arch comp} is valid in the scalar valued case, we have \begin{multline*} \frac{\left|B_{S, \Lambda_0, \psi}\left(\phi_{\mathbf{f}^\prime,S}\right)\right|^2}{ \langle\phi_{\mathbf{f}^\prime,S},\phi_{\mathbf{f}^\prime,S} \rangle } = \frac{2^{4k_1+5}e^{-4\pi\,\mathrm{tr}\left(S\right)}}{ D_E} \cdot C\left(Q_{S,\varrho_{\left(k_1, k_1\right)}}\right)^{-1} \\ \cdot 2\left(1+p^{-1}\right)\left(1+p^{-2}\right) \cdot J_q \cdot \frac{L\left(1 \slash 2, \pi_1^\prime \otimes \mathcal{AI}\left(\Lambda_0^\prime\right)\right) L\left(1 \slash 2, \pi_2^\prime \otimes \mathcal{AI}\left(\Lambda_0^{\prime\, -1}\right)\right) }{ L\left(1, \pi_1^\prime, \mathrm{Ad}\right) L\left(1, \pi_2^\prime, \mathrm{Ad}\right) L\left(1, \pi_1^\prime \times \pi_2^\prime\right)}. \end{multline*} We note that $J_q$ here is the same as the one in \eqref{e: formula 2}. Then by comparing the formula above with \eqref{e: formula 1} for $\mathbf{f}^\prime$ and $\Lambda_0^\prime$, we have $J_q=q^{2m}$. Finally by comparing \eqref{e: formula 1} with \eqref{e: formula 2} substituting $J_q=q^{2m}$, we have \begin{equation}\label{e: j_infty} C\left(Q_{S,\varrho}\right)\mathcal CJ_\infty=\frac{2^{4k_1+2k_2+7}e^{-4\pi\, \mathrm{tr}\left(S\right)}}{D_E} \end{equation} in the general case. For $\varPhi$ in Theorem~\ref{t: vector valued boecherer}, a scalar valued automorphic form $\phi_{\varPhi,S}$ defined by \[ \label{phi_Phi S} \phi_{\varPhi,S}\left(g\right)= \left(\varphi_\varPhi\left(g\right), Q_{S,\varrho}\right)_{2r} \quad\text{for $g\in G\left(\mathbb A\right)$} \] is factorizable, i.e. $\phi_{\varPhi, S}=\otimes_v\,\phi_{\varPhi, S,v}$. Let us choose $k_1$ and $k_2$ so that \[ \left(2r+k,k\right)=\left(k_1+k_2+2,k_1-k_2+2\right), \quad \text{i.e.}\quad k_1 =r+k-2, \,\, k_2=r. \] Then for $\phi_{\mathbf{f},S}=\otimes_v \,\phi_{\mathbf{f},S,v}$ in \eqref{e: formula 2}, the archimedean factor $\phi_{\mathbf{f},S,\infty}$ is a non-zero scalar multiple of $\phi_{\varPhi, S,\infty}$. Thus \eqref{e:arch comp} follows from \eqref{e: j_infty}. \qed \subsection{Proof of Theorem~\ref{t: vector valued boecherer}} Let us complete our proof of Theorem~\ref{t: vector valued boecherer}. By Theorem~\ref{ref ggp}, we have \begin{equation}\label{e: final1} \frac{\left| B_{S,\Lambda, \psi}\left(\phi_{\varPhi,S}\right)\right|^2}{ \langle \phi_{\varPhi,S},\phi_{\varPhi,S}\rangle} =\frac{\mathcal CJ_\infty}{2^{c-3}} \cdot \frac{L\left(1\slash 2,\pi\left(\varPhi\right) \times \mathcal{AI} \left(\Lambda \right)\right)}{ L\left(1,\pi\left(\varPhi\right),\mathrm{Ad}\right)} \cdot \prod_{p | N} J_p \end{equation} where $c$ is as stated in Theorem~\ref{t: vector valued boecherer}. By \eqref{e: scalar vs vector} and \eqref{e: scalar bs1}, we have \[ B_{S,\psi, \Lambda}\left(\phi_{\varPhi,S}\right)=2 \cdot e^{-2\pi\,\mathrm{tr}\left(S\right)} \cdot \mathcal B_\Lambda \left(\varPhi;E\right). \] Since $\langle \phi_{\varPhi,S},\phi_{\varPhi,S}\rangle =C\left(Q_{S,\varrho}\right)\cdot \langle \varPhi,\varPhi\rangle_\varrho$ by Lemma~\ref{l: norm constant}, we have \begin{equation}\label{e: final2} \frac{\left| \mathcal B_\Lambda \left(\varPhi;E\right)\right|^2}{ \langle \varPhi,\varPhi\rangle_\varrho}= \frac{\left| B_{S,\Lambda, \psi}\left(\phi_{\varPhi,S}\right)\right|^2}{ \langle \phi_{\varPhi,S},\phi_{\varPhi,S}\rangle} \cdot 2^{-2}e^{4\pi\,\mathrm{tr}\left(S\right)} C\left(Q_{S,\varrho}\right). \end{equation} Thus by combining \eqref{e: final1}, \eqref{e: final2} and \eqref{e:arch comp}, the identity \eqref{e: vector valued boecherer} holds. 
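\begin{Remark}
In the proof of Theorem~\ref{arch comp} above, the exponent of $2$ in \eqref{e:arch comp} is recovered from \eqref{e: j_infty} by the elementary computation
\[
4k_1+2k_2+7=4\left(r+k-2\right)+2r+7=4k+6r-1
\]
for the choice $k_1=r+k-2$ and $k_2=r$, while the remaining factor $e^{-4\pi\,\mathrm{tr}\left(S\right)}\slash D_E$ is the same on both sides.
\end{Remark}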
\section{Meromorphic continuation of $L$-functions for $\mathrm{SO}(5) \times \mathrm{SO}(2)$}\label{appendix c}
As we remarked in Remark~\ref{L-fct def rem}, here we show the meromorphic continuation of $L^S(s, \pi \times \mathcal{AI}(\Lambda))$ in Theorem~\ref{ggp SO}, when $\mathcal{AI} \left(\Lambda \right)$ is cuspidal and $S$ is a sufficiently large finite set of places of $F$ containing all archimedean places. The following theorem clearly suffices.
\begin{theorem} \label{thm mero}
Let $\pi$ (resp. $\tau$) be an irreducible unitary cuspidal automorphic representation of $G_D(\mA)$ (resp. $\mathrm{GL}_2(\mA)$) with a trivial central character. Then $L^S(s, \pi \times \tau)$ has a meromorphic continuation to $\mC$ and it is holomorphic at $s= \frac{1}{2}$ for a sufficiently large finite set $S$ of places of $F$ containing all archimedean places.
\end{theorem}
When $D$ is split, $G_D\simeq G$ and the theorem follows from Arthur~\cite{Ar}. Hence from now on we assume that $D$ is non-split. By \cite{JSLi}, for some $\xi$ and $\Lambda$, $\pi$ has the $\left(\xi, \Lambda, \psi\right)$-Bessel period. Thus we may use the integral representation of the $L$-function for $G_D\times \mathrm{GL}_2$ introduced in \cite{Mo3}. Then the meromorphic continuation of the Siegel Eisenstein series on $\mathrm{GU}_{3,3}$, which is used in the integral representation, is known by the main theorem of Tan~\cite{Tan} (see also \cite[Proposition~3.6.2]{PSS}). Hence by the standard argument, our theorem is reduced to the analysis of the local zeta integrals. Meanwhile the non-archimedean local integrals are already studied in \cite[Lemma~5.1]{Mo3}. Hence it suffices for us to investigate the archimedean ones. Since the case when $E_v$ is a quadratic extension field of $F_v$ is similar to, and indeed simpler than, the split case, here we only consider the split case.
Let us briefly recall our local zeta integral (see \cite[(28)]{Mo3}). Let $v$ be an archimedean place of $F$. Since we consider the split case, $D_v$ is split and we may assume that $G_{D}\left(F_v\right)=G\left(F_v\right) =\mathrm{GSp}_2\left(F_v\right)$ and $\xi=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$. Then we have
\[
T_\xi\left(F_v\right) = \left\{g\in\mathrm{GL}_2\left(F_v\right)\mid {}^tg\xi g=\det\left(g\right)\xi \right\} = \left\{ \begin{pmatrix} x&y\\ y&x\end{pmatrix} \in \mathrm{GL}_2(F_v) \right\}.
\]
In what follows, we omit the subscript $v$ from any object in order to simplify the notation. Let $\Lambda$ be a unitary character of $F^\times$. Then we regard $\Lambda$ as a character of $T_\xi(F)$ by
\[
\Lambda \begin{pmatrix} x&y\\ y&x\end{pmatrix} = \Lambda \left(\frac{x+y}{x-y} \right) \quad\text{for $\begin{pmatrix} x&y\\ y&x\end{pmatrix} \in T_\xi\left(F\right)$}.
\]
For a non-trivial character $\psi$ of $F$, let $\mathcal{B}_{\xi, \Lambda, \psi}(\pi)$ denote the $(\xi, \Lambda, \psi)$-Bessel model of $\pi$, i.e. the space of functions $B:G(F)\to \mathbb C$ such that
\[
B(tug) = \Lambda(t) \psi_\xi(u) B(g) \quad \text{for $t \in T_\xi(F)$, $u \in N\left(F\right)$ and $g\in G\left(F\right)$},
\]
which affords $\pi$ by the right regular representation. Let $\mathcal{W}(\tau)$ denote the Whittaker model of $\tau$, i.e. the space of functions $W:\mathrm{GL}_2(F)\to\mathbb C$ such that
\[
W\left( \begin{pmatrix} 1&x\\ 0&1\end{pmatrix}g\right) = \psi(- x) W(g)\quad \text{for $x\in F$ and $g\in \mathrm{GL}_2\left(F\right)$},
\]
which affords $\tau$ by the right translation.
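We note, for the reader's convenience, that the above recipe indeed defines a character of $T_\xi\left(F\right)$. The map $\begin{pmatrix} x&y\\ y&x\end{pmatrix}\mapsto\left(x+y,\, x-y\right)$ gives a group isomorphism $T_\xi\left(F\right)\simeq F^\times\times F^\times$, since for $h, h^\prime\in T_\xi\left(F\right)$ we have
\[
hh^\prime=\begin{pmatrix} xx^\prime+yy^\prime & xy^\prime+yx^\prime\\ xy^\prime+yx^\prime & xx^\prime+yy^\prime\end{pmatrix} \quad\text{and}\quad \left(xx^\prime+yy^\prime\right)\pm\left(xy^\prime+yx^\prime\right)=\left(x\pm y\right)\left(x^\prime\pm y^\prime\right).
\]
Under this isomorphism, $\Lambda$ above corresponds to the character $\left(a,b\right)\mapsto\Lambda\left(a\slash b\right)$ of $F^\times\times F^\times$, which is trivial on the scalar matrices in $T_\xi\left(F\right)$.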
Let $G_0\left(F\right)= \mathrm{GL}_2(F) \times G(F)$ and we regard $G_0$ as a subgroup of $\mathrm{GL}_6(F)$ by the embedding
\[
\iota : G_0 \ni \left(\begin{pmatrix}a&b\\ c&d\end{pmatrix}, \begin{pmatrix}A&B\\ C&D\end{pmatrix} \right) \hookrightarrow \begin{pmatrix}a&0&b&0\\ 0&A&0&B\\ c&0&d&0\\ 0&C&0&D\end{pmatrix} \in \mathrm{GL}_6(F).
\]
Let us define a subgroup $H_0$ of $G_0$ by
\[
H_0(F) = \left\{ \nu(h)\left(\begin{pmatrix}1&\mathrm{tr}(\xi X)\\0 &1 \end{pmatrix}, \begin{pmatrix}h&0\\ 0&\det h \cdot {}^{t}h^{-1} \end{pmatrix} \begin{pmatrix}1_2&X\\ 0&1_2 \end{pmatrix} \right) \mid X={}^{t}X, h \in T_\xi\left(F\right)\right\}
\]
where
\[
\nu(h) = x-y \quad \text{for $ h= \begin{pmatrix} x&y\\y&x\end{pmatrix}\in T_\xi\left(F\right)$}.
\]
Let $P_3$ be the maximal parabolic subgroup of $\mathrm{GL}_6$ defined by
\[
P_3= \left\{ \begin{pmatrix} h_1&X\\0&h_2\end{pmatrix} : h_1,h_2\in\mathrm{GL}_3 \right\}.
\]
Then we consider a principal series representation
\[
I(\Lambda, s) =\left\{ f_s : \mathrm{GL}_6(F) \rightarrow \mC \mid f_s \left( \left(\begin{smallmatrix} h_1&X\\0&h_2\end{smallmatrix} \right) h \right) = \Lambda\left(\frac{\det h_1}{\det h_2}\right) \left|\frac{\det h_1}{\det h_2} \right|^{3s+\frac{3}{2}}f_s(h) \right\}.
\]
For $f_s \in I\left(\Lambda, s\right) $, $B \in \mathcal{B}_{\xi, \Lambda, \psi}\left(\pi\right)$ and $W \in \mathcal{W}\left(\tau\right)$, our local zeta integral $Z(f_s, B, W)$ is given by
\[
Z(f_s, B, W) = \int_{Z_0\left(F\right)H_0\left(F\right) \backslash G_0\left(F\right)} f_s\left(\theta_0 \, \iota\left(g_1, g_2\right)\right) B(g_2)W(g_1) \, dg_1\,dg_2
\]
where $Z_0$ denotes the center of $G_0$ and
\[
\theta_{0} = \begin{pmatrix} 0&0&0&0&0&-1\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 1&-1&1&0&0&0\\ 0&0&0&0&1&-1\\ 0&0&0&1&0&-1 \end{pmatrix}.
\]
As explained above, Theorem~\ref{thm mero} follows by the standard argument if we prove the following lemma.
\begin{lemma}\label{meromorphy lemma}
Let $s_0$ be an arbitrary point in $\mC$. Then we may choose $f_s, B$ and $W$ so that $Z(f_s, B, W)$ has a meromorphic continuation to $\mC$ and is holomorphic and non-zero at $s=s_0$.
\end{lemma}
\begin{proof}
For $\varphi \in C_c^\infty(\mathrm{GL}_6(F))$, we may define $P_s[\varphi] \in I(\Lambda, s)$ by
\begin{multline*}
P_s[\varphi](h) = \int_{\mathrm{GL}_3(F)} \int_{\mathrm{GL}_3(F)} \int_{\mathrm{Mat}_{3 \times 3}(F)} \varphi \left(\begin{pmatrix}h_1 &0\\ 0&h_2 \end{pmatrix} \begin{pmatrix}1_3 &X\\ &1_3 \end{pmatrix} h \right) \\
\times \left|\frac{\det h_1}{\det h_2} \right|^{-3s+\frac{3}{2}} \Lambda\left(\frac{\det h_1}{\det h_2}\right)^{-1} \, dh_1 \, dh_2 \, dX.
\end{multline*}
In what follows we construct a function $\varphi^\prime$ of a special form, whose support is contained in the open double coset $P_3\left(F\right) \theta_0 G_0\left(F\right)$ in $\mathrm{GL}_6\left(F\right)$. Let $B_0$ be the group of upper triangular matrices in $\mathrm{GL}_2$ and $P_0$ the mirabolic subgroup of $\mathrm{GL}_2$, i.e.
\[
P_0\left(F\right) = \left\{ \begin{pmatrix} a&b\\ 0&1\end{pmatrix} \mid a \in F^\times, \, b \in F\right\}.
\]
We define a subgroup $M_0$ of $G$ by
\[
M_0\left(F\right)=\left\{ \begin{pmatrix}h&0\\0& \lambda \cdot {}^{t}h^{-1} \end{pmatrix} \mid \lambda \in F^\times, h \in B_0\left(F\right) \right\}
\]
and $M = \iota\left(P_0, M_0\right)$.
Then by the Iwasawa decomposition for $G_0\left(F\right)$ and the inclusion \begin{equation} \label{intersection} H_0\left(F\right) \subset G_0\left(F\right) \cap \theta_0^{-1}P_3\left(F\right)\theta_0 , \end{equation} we have \[ P_3\left(F\right) \theta_0 G_0\left(F\right) = P_3\left(F\right) \theta_0 M\left(F\right) K_0 \] where $K_0$ is a maximal compact subgroup of $G_0\left(F\right)$. We take $K_0=\iota\left(K_1,K_2\right)$ where $K_1$ (resp. $K_2$) is a maximal compact subgroup of $\mathrm{GL}_2\left(F\right)$ (resp. $G\left(F\right)$). By direct computations, we see that \[ \begin{cases} \theta_0 \,N\left(F\right)\, \theta_0^{-1} \cap P_3\left(F\right) =\{ 1_6 \}; \\ \theta_0 \,M\left(F\right)\, \theta_0^{-1} \cap P_3\left(F\right) =\theta_0 \,A\left(F\right)\, \theta_0^{-1}; \\ \theta_0 \,K_0\,\theta_0^{-1} \cap P_3\left(F\right) =\{ 1_6 \}, \end{cases} \] where \[ A\left(F\right) =\left\{ \begin{pmatrix}a\cdot 1_3 &\\&1_3 \end{pmatrix} : a \in F^\times \right\}. \] Let us define subgroups $T_0$, $N_0$ of $G_0$ by \begin{align*} T_0\left(F\right)& = \left\{ \iota\left( \begin{pmatrix}a&\\ &1 \end{pmatrix}, \begin{pmatrix}x&&&\\ &y&&\\&&\lambda x^{-1}&\\ &&&\lambda y^{-1} \end{pmatrix} \right) : x, y, \lambda \in F^\times \right\}; \\ N_0\left(F\right) &= \left\{ \iota \left( \begin{pmatrix}1&x\\ &1 \end{pmatrix}, \begin{pmatrix}1&y&&\\ &1&&\\&&1&\\ &&-y&1 \end{pmatrix}\right) : x, y \in F\right\}. \end{align*} Then for $\varphi_1 \in C_c^\infty\left(N_0\left(F\right)\right)$, $\varphi_2 \in C_c^\infty\left(T_0\left(F\right)\right)$, $\varphi_3, \varphi_4 \in C_c^\infty\left(\mathrm{GL}_3\left(F\right)\right)$, $\varphi_5 \in C_c^\infty\left(\mathrm{Mat}_{3 \times 3}\left(F\right) \right)$ and $\varphi_6 \in C_c^\infty(K_0)$, we may construct $\varphi^\prime\in C_c^\infty(\mathrm{GL}_6(F))$, whose support is contained in $P_3\left(F\right) \theta_0 G_0\left(F\right)$, by \begin{multline*} \varphi^\prime \left(\begin{pmatrix}h_1&0\\ 0&h_2 \end{pmatrix}\begin{pmatrix}1_3&X\\ 0&1_3 \end{pmatrix} \theta_0\, n_0\, t_0\, k \right) \\ =\varphi_6(k) \varphi_3(h_1) \varphi_4(h_2) \varphi_5(X) \varphi_1(n_0 ) \int_{A\left(F\right)} \varphi_2(t_0\, a) \, d^\times a \end{multline*} where $n_0 \in N_0\left(F\right)$, $t_0 \in T_0\left(F\right)$ and $k \in K_0$. 
Then the local zeta integral $Z(P_s[\varphi^\prime], B, W)$ is written as \begin{multline*} Z(P_s[\varphi^\prime], B, W)=\int \varphi^\prime \left(\begin{pmatrix}h_1 &0\\ 0&h_2 \end{pmatrix} \begin{pmatrix}1_3 &X\\ &1_3 \end{pmatrix} \iota(n_{0,1}, n_{0,2})\iota(t_{0,1}, t_{0,2}) \iota(k_1, k_2) \right) \\ \times \left|\frac{\det h_1}{\det h_2} \right|^{-3s+\frac{3}{2}} \Lambda\left(\frac{\det h_1}{\det h_2}\right) ^{-1} W(n_{0,1} t_{0,1} k_1) B(n_{0,2} t_{0, 2}k_2) \, dh_1 \, dh_2 \, dX \, dn_0 \, dt_0\, dk \\ = \int \varphi_6(\iota(k_1, k_2)) \varphi_3(h_1) \varphi_4(h_2) \varphi_5(X) \varphi_1(n_0 ) \varphi_2(t_0a) \left|\frac{\det h_1}{\det h_2} \right|^{-3s+\frac{3}{2}} \Lambda\left(\frac{\det h_1}{\det h_2}\right) ^{-1} \\ \qquad\qquad\times W(n_{0,1} t_{0,1} k_1) B(n_{0,2} t_{0, 2}k_2) \, d^\times a \, dh_1 \, dh_2 \, dX \, dn_0 \, dt_0\, dk \\ = \int \varphi_6(\iota(k_1, k_2)) \varphi_3(h_1) \varphi_4(h_2) \varphi_5(X) \varphi_1(n_0 ) \varphi_2(t_0) \left|\frac{\det h_1}{\det h_2} \right|^{-3s+\frac{3}{2}} \Lambda\left(\frac{\det h_1}{\det h_2}\right) ^{-1} \\ \times \Lambda(\lambda) |\lambda|^{3s-\frac{9}{2}}\,W \left(n_{0,1} \begin{pmatrix}\lambda&0\\ 0&1 \end{pmatrix} t_{0,1} k_1 \right) B \left(n_{0,2} \begin{pmatrix} \lambda \cdot 1_2&0\\ 0&1_2 \end{pmatrix} t_{0, 2}k_2 \right) \\ d^\times \lambda \, dh_1 \, dh_2 \, dX \, dn_0 \, dt_0\, dk \end{multline*} where we write $n_0 = \iota(n_{0,1}, n_{0,2}) \in N_0\left(F\right)$, $t_0 = \iota(t_{0,1}, t_{0,2}) \in T_0\left(F\right)$ and $k = \iota(k_1, k_2) \in K_0$. Since we may vary $\varphi_i$ ($1\le i\le 6$), our assertion in Lemma~\ref{meromorphy lemma} follows from the same assertion for the integral \begin{equation} \label{mero proof e:last} \int_{F^\times} \Lambda(\lambda)|\lambda|^{3s-\frac{9}{2}}\,B \begin{pmatrix}\lambda \cdot 1_2&0\\0 &1_2 \end{pmatrix} \, W \begin{pmatrix}\lambda&0\\ 0&1 \end{pmatrix} d^\times \lambda. \end{equation} For any $\phi \in C_c^\infty(F^\times)$, there exists $W_\phi\in W\left(\tau\right)$ such that $W_\phi\begin{pmatrix}a&0\\0&1\end{pmatrix}= \phi\left(a\right)$ by the theory of Kirillov model for $\mathrm{GL}_2(\mR)$ by Jacquet~\cite[Proposition~5]{Jac} and for $\mathrm{GL}_2(\mC)$ by Kemarsky~\cite[Theorem~1]{Kem}. Thus our assertion clearly holds for the integral \eqref{mero proof e:last}. \end{proof} \begin{thebibliography}{99} \bibitem{AB} J. Adams and D. Barbasch, \emph{Reductive dual pair correspondence for complex groups.} J. Funct. Anal. \textbf{132} (1995), no. 1, 1--42. \bibitem{AC} J. Arthur and L. Clozel, \emph{Simple Algebras, Base Change and the Advanced Theory of the Trace Formula.} Ann. Math. Studies \textbf{120} (1989), Princeton, NJ. \bibitem{Ar} J. Arthur, \emph{The endoscopic classification of representations. Orthogonal and symplectic groups.} Amer. Math. Soc. Colloq. Publ. \textbf{61}, xviii+590 pp. Amer. Math. Soc., Providence, RI, 2013. \bibitem{At} H. Atobe, \emph{On the uniqueness of generic representations in an $L$-packet.} Int. Math. Res. Not. IMRN 2017, no. 23, 7051--7068. \bibitem{AG} H. Atobe and W. T. Gan, \emph{Local theta correspondence of tempered representations and Langlands parameters.} Invent. Math. \textbf{210} (2017), no. 2, 341--415. \bibitem{BP1} R. Beuzart-Plessis, \emph{La conjecture locale de Gross-Prasad pour les repr\'{e}sentations temp\'{e}r\'{e}es des groupes unitaires.} M\'{e}m. Soc. Math. Fr. (N.S.) 2016, no. 149, vii+191 pp. \bibitem{BP2} R. 
Beuzart-Plessis, \emph{A local trace formula for the Gan-Gross-Prasad conjecture for unitary groups: the Archimedean case. } Ast\'{e}risque No. \textbf{418} (2020), ix + 305 pp. \bibitem{BPC} R. Beuzart-Plessis and P.-H. Chaudouard, \emph{The global Gan-Gross-Prasad conjecture for unitary groups. II. From Eisenstein series to Bessel periods.} Preprint, arXiv:2302.12331 \bibitem{BPCZ} R. Beuzart-Plessis, P.-H. Chaudouard and M. Zydor, \emph{The global Gan-Gross-Prasad conjecture for unitary groups: the endoscopic case.} Publ. Math. Inst. Hautes \'{E}tudes Sci. \textbf{135} (2022), 183--336. \bibitem{BPLZZ} R. Beuzart-Plessis, Y. Liu, W. Zhang, X. Zhu, \emph{Isolation of cuspidal spectrum, with application to the Gan-Gross-Prasad conjecture.} Ann. of Math. (2) \textbf{194} (2021), no. 2, 519--584. \bibitem{Bl} D. Blasius, \emph{Hilbert modular forms and the Ramanujan conjecture.} Noncommutative Geometry and Number Theory, Aspects Math. E37, Vieweg, Wiesbaden 2006, 35--56. \bibitem{Blo} V. Blomer, \emph{Spectral summation formula for $\mathrm{GSp}(4)$ and moments of spinor $L$-functions.} J. Eur. Math. Soc. (JEMS) \textbf{21} (2019), no. 6, 1751--1774. \bibitem{Bo} S. B\"{o}cherer, \emph{Bemerkungen \"uber die Dirichletreihen von Koecher und Maa\ss.} Mathematica Gottingensis, G\"ottingen, vol. \textbf{68}, p. 36 (1986). \bibitem{CFK} Y. Cai, S. Friedberg and E. Kaplan, \emph{Doubling constructions: local and global theory, with an application to global functoriality for non-generic cuspidal representations.} Preprint, arXiv:1802.02637 \bibitem{Car} A. Caraiani, \emph{Local-global compatibility and the action of monodromy on nearby cycles.} Duke Math. J. \textbf{161} (2012), no. 12, 2311--2413. \bibitem{PSC} P.-S. Chan, \emph{Invariant representations of GSp(2) under tensor product with a quadratic character.} Mem. Amer. Math. Soc. \textbf{204} (2010), no.957, vi+172 pp. \bibitem{CI} S.-Y. Chen and A. Ichino, \emph{On Petersson norms of generic cusp forms and special values of adjoint $L$-functions for $\mathrm{GSp}_4$.} Amer. J. Math. \textbf{145} (2023), 899--993. \bibitem{CH} M. Chida, and M.-L.. Hsieh, \emph{Special values of anticyclotomic $L$-functions for modular forms.} J. reine angew. Math., \textbf{741} (2018), 87--131. \bibitem{CKPSS} J. W. Cogdell, H. H. Kim, I. I. Piatetski-Shapiro, and F. Shahidi, \emph{Functoriality for the classical groups.} Publ. Math. Inst. Hautes \'{E}tudes Sci. \textbf{99} (2004), 163--233. \bibitem{Co} A. Corbett, \emph{A proof of the refined Gan-Gross-Prasad conjecture for non-endoscopic Yoshida lifts.} Forum Math. \textbf{29} (2017), no. 1, 59--90. \bibitem{DPSS} M. Dickson, A. Pitale, A. Saha and R. Schmidt, \emph{Explicit refinements of B\"{o}cherer's conjecture for Siegel modular forms of squarefree level.} J. Math. Soc. Japan \textbf{72} (2020), no. 1, 251--301. \bibitem{Dummigan} N. Dummigan, \emph{Congruences of Saito-Kurokwa lifts and denominators of central special $L$-values.} Glasg. Math. J. \textbf{64} (2022), no. 2, 504--525. \bibitem{Fu} M. Furusawa, \emph{On the theta lift from $\mathrm{SO}_{2n+1}$ to $\widetilde{\mathrm{Sp}}_n$.} J. Reine Angew. Math. \textbf{466} (1995), 87--110. \bibitem{FuMa} M. Furusawa and K. Martin, \emph{On central critical values of the degree four L-functions for $\mathrm{GSp}(4)$: the fundamental lemma.II.} Amer. J. Math. \textbf{133} (2011), 197--233. \bibitem{FuMaS} M. Furusawa and K. 
Martin, \emph{On central critical values of the degree four L-functions for $\mathrm{GSp}(4)$: the fundamental lemma.III.} Memoirs of the AMS, Vol. 225, No. 1057 (2013), x+134pp. \bibitem{FM0} M. Furusawa and K. Morimoto, \emph{Shalika periods on $\mathrm{GU}(2, 2)$.} Proc. Amer. Math. Soc. \textbf{141} (2013), no. 12, 4125--4137. \bibitem{FM1} M. Furusawa and K. Morimoto, \emph{On special Bessel periods and the Gross--Prasad conjecture for $\mathrm{SO}(2n+1) \times \mathrm{SO}(2)$.} Math. Ann. \textbf{368} (2017), no. 1-2, 561--586. \bibitem{FM2} M. Furusawa and K. Morimoto, \emph{Refined global Gross-Prasad conjecture on special Bessel periods and B\"ocherer's conjecture.} J. Eur. Math. Soc. (JEMS) \textbf{23}, 1295--1331 (2021). \bibitem{FM3} M. Furusawa and K. Morimoto, \emph{On the Gan-Gross-Prasad conjecture and its refinement for $\left(\mathrm{U}\left(2n\right), \mathrm{U}\left(1\right)\right)$.} Preprint, arXiv:2205.09471 \bibitem{FS} M. Furusawa and J. Shalika. \emph{On central critical values of the degree four L-functions for $\mathrm{GSp}(4)$: the fundamental lemma.} Mem. Amer. Math. Soc. \textbf{164} (2003), no. 782, x+139 pp. \bibitem{Gan} W. T. Gan, \emph{The Saito-Kurokawa space of $\mathrm{PGSp}_4$ and its transfer to inner forms.} In: Eisenstein series and applications, Progr. Math. \textbf{258}, pp. 87--123. Birkh\"auser Boston, Boston, MA (2008). \bibitem{GGP} W. T. Gan, B. Gross and D. Prasad, \emph{Symplectic local root numbers, central critical L values, and restriction problems in the representation theory of classical groups.} Sur les conjectures de Gross et Prasad. I. Ast\'{e}risque No. 346 (2012), 1--109. \bibitem{GSun} W. T. Gan and B. Sun, \emph{The Howe duality conjecture: quaternionic case.} Representation theory, number theory, and invariant theory, 175--192, Progr. Math., 323, Birkh\"{a}user/Springer, Cham, 2017. \bibitem{GT10} W. T. Gan and S. Takeda, \emph{On Shalika periods and a theorem of Jacquet-Martin.} Amer. J. Math. \textbf{132} (2010), no.2, 475--528. \bibitem{GT11} W. T. Gan and S. Takeda, \emph{The local Langlands conjecture for $\mathrm{GSp}(4)$.} Ann. of Math. (2) \textbf{173} (2011), no. 3, 1841--1882. \bibitem{GT0} W. T. Gan and S. Takeda, \emph{Theta correspondences for $\mathrm{GSp}(4)$.} Represent. Theory \textbf{15} (2011), 670--718. \bibitem{GT} W. T. Gan and S. Takeda, \emph{A proof of the Howe duality conjecture.} J. Amer. Math. Soc. \textbf{29} (2016), no. 2, 473--493. \bibitem{GaTan} W. T. Gan and W. Tantono, \emph{The local Langlands conjecture for $\mathrm{GSp}(4)$, II: The case of inner forms.} Amer. J. Math. \textbf{136} (2014), no. 3, 761--805. \bibitem{GI0} W. T. Gan and A. Ichino, \emph{On endoscopy and the refined Gross-Prasad conjecture for $(\mathrm{SO}_5,\mathrm{SO}_4)$.} J. Inst. Math. Jussieu \textbf{10} (2011), no. 2, 235--324. \bibitem{GI1} W. T. Gan and A. Ichino, \emph{Formal degrees and local theta correspondence.} Invent. Math. \textbf{195} (2014), no. 3, 509--672. \bibitem{GQT} W. T. Gan, Y. Qiu and S. Takeda, \emph{The regularized Siegel-Weil formula (the second term identity) and the Rallis inner product formula.} Invent. Math. \textbf{198} (2014), no. 3, 739--831. \bibitem{GRS97} D. Ginzburg, S. Rallis and D. Soudry, \emph{Periods, poles of L-functions and symplectic-orthogonal theta lifts.} J. Reine Angew. Math. \textbf{487} (1997), 85--114. \bibitem{GRS} D. Ginzburg, S. Rallis and D. Soudry, \emph{The descent map from automorphic representations of GL(n) to classical groups.} World Scientific Publishing Co. 
\end{thebibliography} \begin{theindex} \item $M_D, N_D$ \pageref{d:N_D} \item $G_D$ \pageref{e: G_D} \item $G$ \pageref{gsp} \item $G_D^1$ \pageref{G_D^1} \item $G^1$ \pageref{G^1} \item $T_{\xi}$ \pageref{e: T_xi} \item $G_D^+$ \pageref{e: G^+_D} \item $B_{\xi, \Lambda, \psi}$ \pageref{e: dependency} \item $\mathcal{AI}(\Lambda)$ \pageref{e: partial non-vanishing} \item $\mathcal{B}_\Lambda\left(\varPhi , E\right) $ \pageref{mathcal B Phi E} \item $J_m$ \pageref{d: J_m} \item $\mathrm{GO}_{n+2, n}, \mathrm{GSO}_{n+2, n}$ \pageref{d: GO_n+2,n} \item $\mathrm{GU}_{3, D}, \mathrm{GSU}_{3, D}$ \pageref{def of gu_3,D} \item $\mathrm{GU}_{4, \varepsilon}$ \pageref{e: gu(2,2) or gu(3,1)} \item $\Phi_D$ \pageref{acc isom1} \item $\Phi$ \pageref{acc isom2} \item $M$, $N$ \pageref{d:N} \item $T_S$ \pageref{T_S} \item $B_{S, \Lambda, \psi}$ \pageref{Beesel def gsp} \item $M_{3, D}$ $N_{3, D}$ \pageref{d:N3,D} \item $M_{X, D}$ \pageref{M_X D} \item $\mathcal{B}_{X, \chi, \psi}$ \pageref{Besse def gsud} \item $\mathcal{B}_{X, \chi, \psi}^D$ \pageref{Besse def gsud} \item $M_{4,2}$ $N_{4,2}$ \pageref{d:N4,2} \item $M_{X}$ \pageref{M_X} \item $\alpha_{\chi, \psi_N}(\phi, \phi^\prime)$ \pageref{e: local integral 1} \item Type I-A, Type I-B \pageref{type} \item $W^{\psi_U}$ \pageref{W U} \item $\mathcal{W}^{\psi_U}$ \pageref{mathcal W psi U} \item $\mathcal{W}^\circ_{G, v}, \mathcal{W}_{G, v}$ \pageref{mathcal W_G} \item $\mathcal{L}_v^\circ(\phi_v, f_v), \mathcal{L}_v(\phi_v, f_v)$ \pageref{mathcal L} \item $W_{\psi_{U_G}}$ \pageref{W U_G} \item $\phi_{\Phi, S}$ \pageref{phi_Phi S} \end{theindex} \end{document}
2205.09368v5
http://arxiv.org/abs/2205.09368v5
Universality of the cokernels of random $p$-adic Hermitian matrices
\documentclass[oneside, a4paper, 10pt]{article} \usepackage[colorlinks = true, linkcolor = black, urlcolor = blue, citecolor = black]{hyperref} \usepackage{amsmath, amsthm, amssymb, amsfonts, amscd} \usepackage[left=3cm,right=3cm,top=3cm,bottom=3cm,a4paper]{geometry} \usepackage{mathtools} \usepackage{thmtools} \usepackage{graphicx} \usepackage{cleveref} \usepackage{mathrsfs} \usepackage{titling} \usepackage{url} \usepackage[nosort, noadjust]{cite} \usepackage{enumitem} \usepackage{tikz-cd} \usepackage{multirow} \usepackage{fancyhdr} \usepackage{sectsty} \usepackage{longtable} \usepackage{authblk} \DeclareGraphicsExtensions{.pdf,.png,.jpg} \newcommand{\im}{{\operatorname{im}}} \newcommand{\cok}{{\operatorname{cok}}} \newcommand{\id}{{\operatorname{id}}} \newcommand{\Cl}{{\operatorname{Cl}}} \newcommand{\Gal}{\operatorname{Gal}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\End}{\operatorname{End}} \newcommand{\Aut}{\operatorname{Aut}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\re}{\operatorname{Re}} \newcommand{\imm}{\operatorname{Im}} \newcommand{\Frob}{\operatorname{Frob}} \newcommand{\rank}{\operatorname{rank}} \newcommand{\tr}{\operatorname{Tr}} \newcommand{\nm}{\operatorname{N}} \newcommand{\tor}{\operatorname{tor}} \newcommand{\lcm}{\operatorname{lcm}} \newcommand{\Avg}{\operatorname{Avg}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\Sur}{\operatorname{Sur}} \newcommand{\Sym}{\operatorname{Sym}} \newcommand{\Alt}{\operatorname{Alt}} \newcommand{\M}{\operatorname{M}} \newcommand{\HH}{\operatorname{H}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} \newcommand{\A}{\mathbb{A}} \newcommand{\G}{\mathbb{G}} \newcommand{\PP}{\mathbb{P}} \newcommand{\EE}{\mathbb{E}} \newcommand{\OO}{\mathcal{O}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{conjecture*}{Conjecture} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{assumption}[theorem]{Assumption} \linespread{1.25} \renewcommand\labelenumi{(\theenumi)} \setlength{\droptitle}{-14mm} \newcommand{\tlap}[1]{\vbox to 0pt{\vss\hbox{#1}}} \newcommand{\blap}[1]{\vbox to 0pt{\hbox{#1}\vss}} \newcommand{\changeurlcolor}[1]{\hypersetup{urlcolor=#1}} \setlength{\skip\footins}{1cm} \makeatletter \newcommand\semilarge{\@setfontsize\semilarge{11}{13.2}} \makeatother \title{\semilarge{\textbf{UNIVERSALITY OF THE COKERNELS OF RANDOM $p$-ADIC HERMITIAN MATRICES}}} \author{\normalsize{JUNGIN LEE} } \date{} \sectionfont{\centering \large} \subsectionfont{\normalsize} \makeatletter \renewcommand{\@seccntformat}[1]{\csname the#1\endcsname.\quad} \makeatother \renewenvironment{abstract} {\quotation\small\noindent\rule{\linewidth}{.5pt}\par\smallskip {\centering\bfseries\abstractname\par}\medskip} {\par\noindent\rule{\linewidth}{.5pt}\endquotation} \AtEndDocument{ \bigskip \footnotesize \textsc{Jungin Lee, Department of Mathematics, Ajou University, Suwon 16499, Republic of Korea}\par\nopagebreak \textit{E-mail address:} \texttt{[email protected]} } \begin{document} \maketitle \vspace{-18mm} \begin{abstract} In this paper, we study the distribution of the cokernel of a general random Hermitian matrix over the ring of 
integers $\mathcal{O}$ of a quadratic extension $K$ of $\mathbb{Q}_p$. For each positive integer $n$, let $X_n$ be a random $n \times n$ Hermitian matrix over $\mathcal{O}$ whose upper triangular entries are independent and their reductions are not too concentrated on certain values. We show that the distribution of the cokernel of $X_n$ always converges to the same distribution which does not depend on the choices of $X_n$ as $n \rightarrow \infty$ and provide an explicit formula for the limiting distribution. This answers Open Problem 3.16 from the ICM 2022 lecture note of Wood in the case of the ring of integers of a quadratic extension of $\mathbb{Q}_p$. \end{abstract} \section{Introduction} \label{Sec1} \subsection{Distribution of the cokernel of a random $p$-adic matrix} \label{Sub11} Let $p$ be a prime. The Cohen-Lenstra heuristics \cite{CL84} predict the distribution of the $p$-Sylow subgroup of the class group $\Cl(K)$ of a random imaginary quadratic field $K$ ordered by the absolute value of the discriminant. (When $p=2$, one needs to modify the conjecture by replacing $\Cl(K)$ with $2 \Cl(K)$.) Friedman and Washington \cite{FW89} computed the limiting distribution of the cokernel of a Haar random $n \times n$ matrix over $\Z_p$ as $n \rightarrow \infty$. They proved that for every finite abelian $p$-group $G$ and a Haar random matrix $A_n \in \M_n(\Z_p)$ for each positive integer $n$, \begin{equation*} \lim_{n \rightarrow \infty} \PP (\cok (A_n) \cong G) = \frac{1}{\left | \Aut(G) \right |} \prod_{i=1}^{\infty}(1-p^{-i}) \end{equation*} where $\M_n(R)$ denotes the set of $n \times n$ matrices over a commutative ring $R$. The right-hand side of the above formula is equal to the conjectural distribution of the $p$-parts of the class groups of imaginary quadratic fields predicted by Cohen and Lenstra \cite{CL84}. There are two possible ways to generalize the work of Friedman and Washington. One way is to consider the distribution of the cokernels of various types of random matrices over $\Z_p$. Bhargava, Kane, Lenstra, Poonen and Rains \cite{BKLPR15} computed the distribution of the cokernel of a random alternating matrix over $\Z_p$. They suggested a model for the $p$-Sylow subgroup of the Tate-Shafarevich group of a random elliptic curve over $\Q$ of given rank $r \geq 0$, in terms of a random alternating matrix over $\Z_p$. They also proved that the distribution of their random matrix model coincides with the prediction of Delaunay \cite{Del01, Del07, DJ14} on the distribution of the $p$-Sylow subgroup of the Tate-Shafarevich group of a random elliptic curve over $\Q$. Clancy, Kaplan, Leake, Payne and Wood \cite{CKLPW15} computed the distribution of the cokernel of a random symmetric matrix over $\Z_p$. In the above results, random matrices are assumed to be equidistributed with respect to Haar measure. The distributions of the cokernels for much larger classes of random matrices were established by Wood \cite{Woo17}, Wood \cite{Woo19} and Nguyen-Wood \cite{NW22}. \begin{definition} \label{def1a} Let $0 < \varepsilon < 1$ be a real number. A random variable $x$ in $\Z_p$ is $\varepsilon$\textit{-balanced} if $\PP (x \equiv r \,\, (\text{mod } p)) \leq 1 - \varepsilon$ for every $r \in \Z / p\Z$. A random matrix $A$ in $\M_n(\Z_p)$ is $\varepsilon$\textit{-balanced} if its entries are independent and $\varepsilon$-balanced. A random symmetric matrix $A$ in $\M_n(\Z_p)$ is $\varepsilon$\textit{-balanced} if its upper triangular entries are independent and $\varepsilon$-balanced. 
\end{definition} \begin{theorem} \label{thm1b} Let $G$ be a finite abelian $p$-group. \begin{enumerate} \item (\cite[Theorem 1.2]{Woo19}, \cite[Theorem 4.1]{NW22}) Let $(\alpha_n)_{n \geq 1}$ be a sequence of positive real numbers such that $0 < \alpha_n < 1$ for each $n$ and for any constant $\Delta > 0$, we have $\alpha_n \geq \frac{\Delta \log n}{n}$ for sufficiently large $n$. Let $A_n$ be an $\alpha_n$-balanced random matrix in $\M_n(\Z_p)$ for each $n$. Then, $$ \lim_{n \rightarrow \infty} \PP(\cok(A_n) \cong G) = \frac{1}{\left | \Aut(G) \right |} \prod_{i=1}^{\infty}(1-p^{-i}). $$ \item (\cite[Theorem 1.3]{Woo17}) Let $0 < \varepsilon < 1$ be a real number and $B_n$ be an $\varepsilon$-balanced random symmetric matrix in $\M_n(\Z_p)$ for each $n$. Then, $$ \lim_{n \rightarrow \infty} \PP(\cok(B_n) \cong G) = \frac{\# \left\{ \text{symmetric, bilinear, perfect } \phi : G \times G \rightarrow \C^* \right\}}{\left | G \right | \left | \Aut(G) \right |} \prod_{i=1}^{\infty}(1-p^{1-2i}). $$ \end{enumerate} \end{theorem} Another way is to generalize the cokernel condition. We refer to the introduction of \cite{Lee22} for the recent progress in this direction. The following theorem provides the joint distribution of the cokernels $\cok(P_j(A_n))$ ($1 \leq j \leq l$), where $P_1(t), \cdots, P_l(t) \in \Z_p[t]$ are monic polynomials with some mild assumptions and $A_n$ is a Haar random matrix in $\M_n(\Z_p)$. It is a modified version of the conjecture by Cheong and Huang \cite[Conjecture 2.3]{CH21}. \begin{theorem} \label{thm1c} (\cite[Theorem 2.1]{Lee22}) Let $P_1(t), \cdots, P_l(t) \in \Z_p[t]$ be monic polynomials whose mod $p$ reductions in $\F_p[t]$ are distinct and irreducible, and let $G_j$ be a finite module over $R_j := \Z_p[t]/(P_j(t))$ for each $1 \leq j \leq l$. Also let $A_n$ be a Haar random matrix in $\M_n(\Z_p)$ for each positive integer $n$. Then we have \begin{equation*} \lim_{n \rightarrow \infty} \PP \begin{pmatrix} \cok(P_j(A_n)) \cong G_j \\ \text{ for } 1 \leq j \leq l \end{pmatrix} = \prod_{j=1}^{l} \left ( \frac{1}{\left | \Aut_{R_j}(G_j) \right |} \prod_{i=1}^{\infty}(1-p^{-i \deg (P_j)}) \right ). \end{equation*} \end{theorem} \subsection{Hermitian matrices over $p$-adic rings} \label{Sub12} Before stating the main theorem of this paper, we summarize the basic results on quadratic extensions of $\Q_p$ and Hermitian matrices over $p$-adic rings. Every quadratic extension of $\Q_p$ is of the form $\Q_p(\sqrt{a})$ for some non-trivial element $a \in \Q_p^{\times} / (\Q_p^{\times})^2$. For $a, b \in \Q_p^{\times}$, we have $\Q_p(\sqrt{a}) = \Q_p(\sqrt{b})$ if and only if $\frac{b}{a} \in (\Q_p^{\times})^2$. Therefore the number of quadratic extensions of $\Q_p$ is given by $\left | \Q_p^{\times} / (\Q_p^{\times})^2 \right | - 1$. Since $\Q_p^{\times} \cong \Z \times \Z_p \times \Z/(p-1)\Z$ for odd $p$ and $\Q_2^{\times} \cong \Z \times \Z_2 \times \Z/2\Z$, there are $3$ quadratic extensions of $\Q_p$ for odd $p$ and $7$ quadratic extensions of $\Q_2$. Let $K$ be a quadratic extension of $\Q_p$ with the ring of integers $\OO := \OO_K$, the residue field $\kappa$ and the uniformizer $\pi$ that will be specified. Denote the generator of the Galois group $\Gal(K/\Q_p)$ by $\sigma$. Fix a primitive $(p^2-1)$-th root of unity $w$ in $\overline{\Q_p}$. Then $K = \Q_p(w)$ is the unique unramified quadratic extension of $\Q_p$ and satisfies $\OO = \Z_p[w]$ and $\kappa = \OO / p \OO \cong \F_{p^2}$ (\cite[Proposition II.7.12]{Neu99}). 
In this case, fix the uniformizer to be $\pi = p$. The element $\sigma \in \Gal(K/\Q_p)$ maps to the Frobenius automorphism $\Frob_p \in \Gal(\F_{p^2}/\F_p)$ ($x \mapsto x^p$), so it satisfies $\sigma(w) = w^p$. If $K/\Q_p$ is ramified, then we always have $\OO = \Z_p[\pi]$ and $\kappa = \F_p$. When $K/\Q_p$ is ramified and $p>2$, there exists a uniformizer $\pi \in \OO$ such that $\sigma(\pi) = -\pi$. There are two types of ramified quadratic extensions of $\Q_2$ (\cite[p. 456]{Cho16}): \begin{enumerate} \item $K=\Q_2(\sqrt{1+2u})$ for some $u \in \Z_2^{\times}$, $\pi := 1 + \sqrt{1+2u}$ is a uniformizer and $\sigma (\pi) = 2 - \pi$. \item $K=\Q_2(\sqrt{2u})$ for some $u \in \Z_2^{\times}$, $\pi := \sqrt{2u}$ is a uniformizer and $\sigma (\pi) = - \pi$. \end{enumerate} A matrix $A \in \M_n(\OO)$ is called \textit{Hermitian} if $A = \sigma(A^t)$, where $A^t$ denotes the transpose of $A$. This is equivalent to the condition that $A_{ij} = \sigma(A_{ji})$ for every $1 \leq i \leq j \leq n$, where $A_{ij}$ denotes the $(i,j)$-th entry of the matrix $A$. Denote the set of $n \times n$ Hermitian matrices over $\OO$ by $\HH_n(\OO)$. For an extension of finite fields $\F_{p^2}/\F_p$, the set $\HH_n(\F_{p^2})$ is defined in the same way. A \textit{Hermitian lattice} over $\OO$ of \textit{rank} $n$ is a free $\OO$-module $L$ of rank $n$ equipped with a bi-additive map $h : L \times L \rightarrow \OO$ such that $h(y, x) = \sigma(h(x,y))$ and $h(ax, by) = a \sigma(b) h(x,y)$ for every $a, b \in \OO$ and $x, y \in L$. When $(L, h)$ is a Hermitian lattice with an $\OO$-basis $v_1, \cdots, v_n$, the matrix $H = (H_{i, j})_{1 \leq i,j \leq n} \in \M_n(\OO)$ given by $H_{i, j} = h(v_j, v_i)$ is Hermitian. Conversely, if $H \in \HH_n(\OO)$, then $(L, h)$ given by $L = \OO^n$ and $h(x, y) = \sigma(y^t) H x$ is a Hermitian lattice and we have $h(e_j, e_i) = e_i^t H e_j = H_{i, j}$, where $e_1, \cdots, e_n$ is the standard basis of $L$. We say two Hermitian matrices $A, B \in \HH_n(\OO)$ are \textit{equivalent} if $B = YA \sigma(Y^t)$ for some $Y \in \GL_n(\OO)$. The correspondence between Hermitian lattices and Hermitian matrices gives a bijection between the set of equivalence classes of Hermitian matrices in $\HH_n(\OO)$ and the set of isomorphism classes of Hermitian lattices over $\OO$ of rank $n$.
Therefore the additive measure on $\HH_n(\OO)$ defined by the product of the Haar probability measures on $Y_{ij}$ ($i \leq j$), $Z_{ij}$ ($i < j$) is the same as the Haar probability measure on $\HH_n(\OO)$, by the uniqueness of the Haar probability measure. \begin{definition} \label{def1f} Let $0 < \varepsilon < 1$. A random matrix $X$ in $\HH_n(\OO)$ is \textit{$\varepsilon$-balanced} if the $n^2$ elements $Y_{ij}$ ($i \leq j$), $Z_{ij}$ ($i < j$) in $\Z_p$ are independent and $\varepsilon$-balanced. \end{definition} The following remark shows that our definition of an $\varepsilon$-balanced random matrix in $\HH_n(\OO)$ is independent of the choice of the primitive $(p^2-1)$-th root of unity $w$ (unramified case) and the uniformizer $\pi$ (ramified case). \begin{remark} \label{rmk1g} \begin{enumerate} \item Assume that $K/\Q_p$ is unramified. By the relation $$ (p+1) + (w^{p-1}-1) \sum_{i=1}^{p+1}i w^{(p+1-i)(p-1)} = \frac{w^{p^2-1}-1}{w^{p-1}-1} = 0, $$ we have $w - \sigma(w) = w(1-w^{p-1}) \in \OO^{\times}$. Now let $w'$ be any primitive $(p^2-1)$-th root of unity in $\overline{\Q_p}$. For random elements $x, y \in \Z_p$, we have $x + y w = x' + y' w'$ for $$ x' = x + \frac{w' \sigma(w) - w \sigma(w')}{w' - \sigma(w')}y, \, y' = \frac{w - \sigma(w)}{w' - \sigma(w')} y. $$ Then we have $x', y' \in \Z_p$, and $x$ and $y$ are independent and $\varepsilon$-balanced if and only if $x'$ and $y'$ are independent and $\varepsilon$-balanced. \item Assume that $K/\Q_p$ is ramified and let $\pi'$ be any uniformizer of $K$. Then $\pi' \equiv u \pi \,\, (\text{mod } \pi^2)$ for some $u \in \Z_p^{\times}$. For random elements $x, y \in \Z_p$, we have $x + y \pi' = x' + y' \pi$ for some $x', y' \in \Z_p$ such that $x \equiv x' \,\, (\text{mod } p)$ and $uy \equiv y' \,\, (\text{mod } p)$, so $x$ and $y$ are independent and $\varepsilon$-balanced if and only if $x'$ and $y'$ are independent and $\varepsilon$-balanced. \end{enumerate} \end{remark} Let $\Gamma$ be an $\OO$-module and ${}^{\sigma} \Gamma$ be its conjugate, which is the same as $\Gamma$ as an abelian group, with the scalar multiplication $r \cdot g := \sigma(r) g$. A \textit{Hermitian pairing} on $\Gamma$ is a bi-additive map $\delta : \Gamma \times \Gamma \rightarrow K/\OO$ such that $\delta(y, x) = \sigma(\delta(x,y))$ and $\delta(ax, by) = a \sigma(b)\delta(x,y)$ for every $a, b \in \OO$ and $x, y \in \Gamma$. We say a Hermitian pairing $\delta : \Gamma \times \Gamma \rightarrow K/\OO$ is \textit{perfect} if ${}^{\sigma} \Gamma \rightarrow \Hom_\OO(\Gamma, K/\OO)$ ($g \mapsto \delta(\cdot, g)$) is an $\OO$-module isomorphism. The following theorem is the main result of this paper, which settles a problem suggested by Wood \cite[Open Problem 3.16]{Woo22} in the case that $\mathfrak{o}$ is the ring of integers of a quadratic extension of $\Q_p$. \begin{theorem} \label{thm1h} Let $0 < \varepsilon < 1$ be a real number, $X_n \in \HH_n(\OO)$ be an $\varepsilon$-balanced random matrix for each $n$ and $\Gamma$ be a finite $\OO$-module. \begin{enumerate} \item (Theorem \ref{thm4m}) If $K/\Q_p$ is unramified, then \begin{equation} \label{eq1c} \lim_{n \rightarrow \infty} \PP(\cok(X_n) \cong \Gamma) = \frac{ \# \left\{ \text{Hermitian, perfect } \delta : \Gamma \times \Gamma \rightarrow K/\OO \right\}}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 + \frac{(-1)^i}{p^i}).
\end{equation} \item (Theorem \ref{thm5i}) If $K/\Q_p$ is ramified, then \begin{equation} \label{eq1d} \lim_{n \rightarrow \infty} \PP(\cok(X_n) \cong \Gamma) = \frac{ \# \left\{ \text{Hermitian, perfect } \delta : \Gamma \times \Gamma \rightarrow K/\OO \right\}}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 - \frac{1}{p^{2i-1}}). \end{equation} \end{enumerate} \end{theorem} In Section \ref{Sec2}, we provide a proof of Theorem \ref{thm1h} under the assumption that each $X_n$ is equidistributed with respect to Haar measure (Theorem \ref{thm2c}). Our proof follows the strategy of \cite[Theorem 2]{CKLPW15}, which consists of four steps (see the paragraph after Lemma \ref{lem2f}). The most technical part of the proof is the second step, i.e., the computation of the probability that $\left< \; , \; \right>_{X_n} = \left< \; , \; \right>_M$ for a given $M \in \HH_n(\OO)$. When $K/\Q_p$ is ramified, this computation is even more complicated than the proof of \cite[Theorem 2]{CKLPW15} for $p=2$. To extend this result to $\varepsilon$-balanced Hermitian matrices, we use the \textit{moments} as in the symmetric case. For a random $\OO$-module $M$ and a given $\OO$-module $G$, the $G$\textit{-moment} of $M$ is defined by the expected value $\mathbb{E}(\# \Sur_{\OO}(M, G))$ of the number of surjective $\OO$-module homomorphisms from $M$ to $G$. The key point is that if we know the $G$-moment of $M$ for every $G$ and if the moments are not too large, then we can recover the distribution of a random $\OO$-module $M$. In Section \ref{Sec3}, we show that the limiting distribution of the cokernels of Haar random Hermitian matrices over $\OO$ is determined by their moments (Theorem \ref{thm3c}). We conclude the proof of Theorem \ref{thm1h} in the general case by combining the following result with Theorems \ref{thm2c} and \ref{thm3c}. \begin{theorem} \label{thm1i} Let $X_n$ be as in Theorem \ref{thm1h} and $G = \prod_{i=1}^{r} \OO / \pi^{\lambda_i} \OO$ for $\lambda_1 \geq \cdots \geq \lambda_r \geq 1$. \begin{enumerate} \item (Theorem \ref{thm4l}) If $K/\Q_p$ is unramified, then \begin{equation} \label{eq1e} \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(\cok(X_n), G)) = p^{\sum_{i=1}^{r} (2i-1)\lambda_i}. \end{equation} \item (Theorem \ref{thm5h}) If $K/\Q_p$ is ramified, then \begin{equation} \label{eq1f} \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(\cok(X_n), G)) = p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )}. \end{equation} \end{enumerate} In both cases, the error term is exponentially small in $n$. \end{theorem} The unramified and ramified cases should be considered separately in the proof of the above theorem. Moreover, the ramified extensions $K/\Q_p$ are classified into two types (see the first paragraph of Section \ref{Sec5}) and the proofs for these two types are slightly different for technical reasons. We prove the unramified case in Section \ref{Sec4} and the ramified case in Section \ref{Sec5}. Our proof of Theorem \ref{thm1i} is based on the innovative work of Wood \cite{Woo17}, but it cannot be directly adapted to our case. In particular, we need some effort to deal with the linear and conjugate-linear maps simultaneously, which is different from the symmetric case where every map is linear. For example, a lot of conjugations appear during the computations on the maps $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ in $\Hom_R(V, ({}^{\sigma}G)^*)$, which makes the proof more involved than in the symmetric case.
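Before turning to the proofs, we record a small numerical sanity check; it is only an illustration and is not used anywhere below. Since $\cok(X_n) = 0$ exactly when $X_n \in \GL_n(\OO)$, and an $n \times n$ matrix over $\OO$ is invertible if and only if its reduction modulo $\pi$ is invertible, the case $\Gamma = 0$ of Theorem \ref{thm1h}(1) (where both the number of pairings and $\left| \Aut_{\OO}(\Gamma) \right|$ equal $1$) predicts that for $p = 2$ the proportion of invertible matrices among uniform random Hermitian matrices over the residue field $\F_4$ should be close to $\prod_{i=1}^{\infty}(1 + \frac{(-1)^i}{2^i}) \approx 0.5686$ for large $n$. The following minimal Python sketch (the encoding of $\F_4$ and all names in it are ad hoc choices, not notation used elsewhere in the paper) estimates this proportion by sampling and compares it with the corresponding finite product (cf. Lemma \ref{lem2e} below).
\begin{verbatim}
import random

# Arithmetic in F_4 = {0, 1, w, w+1}, encoded as 0, 1, 2, 3 with w^2 = w + 1.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
SIGMA = [0, 1, 3, 2]   # the nontrivial automorphism x -> x^2; it swaps w and w+1
INV = [None, 1, 3, 2]  # multiplicative inverses of the nonzero elements

def random_hermitian(n):
    # Uniform Hermitian matrix over F_4: diagonal entries lie in F_2 = {0, 1},
    # entries above the diagonal are uniform in F_4, and A[j][i] = SIGMA[A[i][j]].
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = random.randrange(2)
        for j in range(i + 1, n):
            A[i][j] = random.randrange(4)
            A[j][i] = SIGMA[A[i][j]]
    return A

def is_invertible(A):
    # Gaussian elimination over F_4; addition is XOR and -x = x in characteristic 2.
    n = len(A)
    A = [row[:] for row in A]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col]), None)
        if piv is None:
            return False     # no pivot in this column, so the matrix is singular
        A[col], A[piv] = A[piv], A[col]
        inv = INV[A[col][col]]
        for r in range(col + 1, n):
            f = MUL[A[r][col]][inv]
            A[r] = [A[r][c] ^ MUL[f][A[col][c]] for c in range(n)]
    return True

n, trials = 12, 20000
hits = sum(is_invertible(random_hermitian(n)) for _ in range(trials))
finite_product = 1.0
for i in range(1, n + 1):
    finite_product *= 1 + (-1) ** i / 2.0 ** i
print("empirical:", hits / trials, "  predicted:", finite_product)
\end{verbatim}
For $n = 12$ the finite product already agrees with the limiting value to within about $10^{-4}$, so the empirical frequency should match the prediction up to Monte Carlo error. Replacing the uniform choices by biased (but $\varepsilon$-balanced modulo $2$) ones illustrates the universality aspect of Theorem \ref{thm1h}, although the agreement is then only asymptotic in $n$.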
\section{Haar random $p$-adic Hermitian matrices} \label{Sec2} Throughout this section, we assume that random matrices are equidistributed with respect to Haar measure. More general (i.e., $\varepsilon$-balanced) random matrices will be considered in Sections \ref{Sec4} and \ref{Sec5}. This section is based on \cite[Section 2]{CKLPW15}, where the distribution of the cokernel of a Haar random symmetric matrix over $\Z_p$ was computed. Some notation is also borrowed from \cite{CKLPW15}. Jacobowitz \cite{Jac62} classified Hermitian lattices over $p$-adic rings. We follow the exposition of Yu \cite[Section 2]{Yu12}, whose original reference is also \cite{Jac62}. Let $(L, h)$ be a Hermitian lattice over $\OO$. (We refer to Section \ref{Sub12} for the definition of a Hermitian lattice.) A vector $x \in L$ is \textit{maximal} if $x \notin \pi L$. (Recall that $\pi$ is a uniformizer of $\OO$.) For each $i \in \Z$, $(L, h)$ is called $\pi^i$-\textit{modular} if $h(x, L) = \pi^i \OO$ for every maximal $x \in L$ (\cite[p.447]{Jac62}). We say $(L, h)$ is \textit{modular} if it is $\pi^i$-modular for some $i \in \Z$. Any Hermitian lattice can be written as an orthogonal sum of modular lattices \cite[Proposition 4.3]{Jac62}. When $K/\Q_p$ is unramified, any $\pi^i$-modular lattice $(L, h)$ is isomorphic to an orthogonal sum of copies of the $\pi^i$-modular lattice $(\pi^i)$ of rank $1$ \cite[Theorem 7.1]{Jac62}. Using the correspondence between Hermitian lattices and Hermitian matrices, we obtain the following proposition. \begin{proposition} \label{prop2a} Assume that $K/\Q_p$ is unramified. For every $A \in \HH_n(\OO)$, there exists $Y \in \GL_n(\OO)$ such that $YA \sigma(Y^t)$ is diagonal. \end{proposition} The classification of Hermitian lattices over $\OO$ for the ramified case can be found in \cite[Proposition 8.1]{Jac62} (the case $p>2$) and \cite[Theorem 2.2]{Cho16} (the case $p=2$). Using the correspondence between Hermitian lattices and Hermitian matrices, we can summarize the results of \cite[Proposition 8.1]{Jac62} and \cite[Theorem 2.2]{Cho16} as follows. For $a, b \in \Z_p$ and $c \in \OO$, denote $$ A(a,b,c) := \begin{pmatrix} a & c \\ \sigma (c) & b \\ \end{pmatrix} \in \HH_2(\OO). $$ \begin{proposition} \label{prop2b} Assume that $K/\Q_p$ is ramified. For every $A \in \HH_n(\OO)$, there exists $Y \in \GL_n(\OO)$ such that $YA \sigma(Y^t)$ is a block diagonal matrix consisting of \begin{enumerate} \item zero diagonal blocks, \item diagonal blocks $\begin{pmatrix} u_ip^{d_i} \end{pmatrix}$ for some $d_i \geq 0$ and $u_i \in \Z_p^{\times}$, \item $2 \times 2$ blocks of the form $B_j = A(a_j, b_j, c_j)$ for some $a_j, b_j \in \Z_p$ and $c_j \in \OO$ such that $c_j \neq 0$, $\displaystyle \frac{a_j}{c_j} \in \OO$ and $\displaystyle \frac{b_j}{\pi c_j} \in \OO$. \end{enumerate} \end{proposition} Let $\Gamma$ be a finite $\OO$-module. Recall that we have defined the conjugate ${}^{\sigma} \Gamma$ of $\Gamma$ and a perfect Hermitian pairing on $\Gamma$ in Section \ref{Sub13}. When $\Gamma_1$, $\Gamma_2$ are finite $\OO$-modules and $\delta_i$ ($i = 1, 2$) is a perfect Hermitian pairing on $\Gamma_i$, we say $(\Gamma_1, \delta_1)$ and $(\Gamma_2, \delta_2)$ are \textit{isomorphic} if there is an $\OO$-module isomorphism $f : \Gamma_1 \rightarrow \Gamma_2$ such that $\delta_1 (x, y) = \delta_2 ( f(x), f(y))$ for every $x, y \in \Gamma_1$.
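To illustrate Proposition \ref{prop2a} and the resulting cokernels, we include a small worked example; it is only an illustration (the matrices are chosen ad hoc) and it is not used later. Take $p = 2$ and $K/\Q_2$ unramified, so that $w$ is a primitive cube root of unity, $w^2 + w + 1 = 0$, $\sigma(w) = w^2$, and hence $w + \sigma(w) = -1$ and $w \sigma(w) = 1$. Consider $$ A = \begin{pmatrix} 4 & 2 \\ 2 & 4 \end{pmatrix} \in \HH_2(\OO), \qquad Y = \begin{pmatrix} 1 & w \\ 0 & 1 \end{pmatrix} \in \GL_2(\OO). $$ The ``symmetric'' choice $Y = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ would only produce the $(1,1)$ entry $4 + 2 + 2 + 4 = 12$ of $2$-adic valuation $2$, whereas the Hermitian operation above gives $$ YA\sigma(Y^t) = \begin{pmatrix} 6 & 2 + 4w \\ 2 + 4\sigma(w) & 4 \end{pmatrix}, \qquad (YA\sigma(Y^t))_{11} = 4 + 2w + 2\sigma(w) + 4w\sigma(w) = 6 $$ of valuation $1$. Clearing the off-diagonal entries with $Y' = \begin{pmatrix} 1 & 0 \\ (1+2w)/3 & 1 \end{pmatrix} \in \GL_2(\OO)$ yields $$ (Y'Y)A\sigma((Y'Y)^t) = \diag\Bigl(6, \; 4 - \frac{(2+4\sigma(w))(2+4w)}{6}\Bigr) = \diag(6, 2), $$ so $A$ is equivalent to $\diag(6,2)$ and $\cok(A) \cong \OO/6\OO \times \OO/2\OO \cong \OO/2\OO \times \OO/2\OO$ because $6 = 2 \cdot 3$ with $3 \in \Z_2^{\times}$. The key point is that $w + \sigma(w) = -1$ is a unit in $\Z_2$, while $1 + \sigma(1) = 2$ is not; this is a feature of the unramified Hermitian case that has no analogue for symmetric matrices over $\Z_2$.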
For $X \in \HH_n(\OO)$ with $\det X \neq 0$, a Hermitian pairing $\left< \; , \; \right>_X : \OO^n \times \OO^n \rightarrow K/\OO$ is defined by $$ \left< x, \, y \right>_X := \sigma(y^t) X^{-1} x $$ and this induces a perfect Hermitian pairing $\delta_X$ on $\cok(X)$. The following theorem provides the limiting distribution of the cokernel of a Haar random Hermitian matrix over $\OO$. \begin{theorem} \label{thm2c} Let $X_n \in \HH_n(\OO)$ be a Haar random matrix for each $n$, $\Gamma$ be a finite $\OO$-module, $r := \dim_{\kappa}(\Gamma / \pi \Gamma)$ and $\delta$ be a perfect Hermitian pairing on $\Gamma$. \begin{enumerate} \item If $K/\Q_p$ is unramified, then the probability that $(\cok(X_n), \delta_{X_n})$ is isomorphic to $(\Gamma, \delta)$ is \begin{equation*} \mu_n (\Gamma, \delta) = \frac{1}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n}(1-\frac{1}{p^{2j}}) \prod_{i=1}^{n-r}(1 + \frac{(-1)^i}{p^i}), \end{equation*} which implies that \begin{equation} \label{eq2c} \lim_{n \rightarrow \infty} \PP(\cok(X_n) \cong \Gamma) = \frac{ \# \left\{ \text{Hermitian, perfect } \delta : \Gamma \times \Gamma \rightarrow K/\OO \right\}}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 + \frac{(-1)^i}{p^i}). \end{equation} \item If $K/\Q_p$ is ramified, then the probability that $(\cok(X_n), \delta_{X_n})$ is isomorphic to $(\Gamma, \delta)$ is \begin{equation*} \mu_n (\Gamma, \delta) = \frac{1}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n}(1-\frac{1}{p^j}) \prod_{i=1}^{\left \lceil \frac{n-r}{2} \right \rceil}(1 - \frac{1}{p^{2i-1}}), \end{equation*} which implies that \begin{equation} \label{eq2d} \lim_{n \rightarrow \infty} \PP(\cok(X_n) \cong \Gamma) = \frac{ \# \left\{ \text{Hermitian, perfect } \delta : \Gamma \times \Gamma \rightarrow K/\OO \right\}}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 - \frac{1}{p^{2i-1}}). \end{equation} \end{enumerate} \end{theorem} Before starting the proof, we provide three lemmas that will be used in it. The first one is an analogue of \cite[Lemma 4]{CKLPW15}, whose proof is also similar. For an arbitrary Hermitian pairing $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$, define its cokernel by $$ \cok \left [ \; , \; \right ] := \OO^n / \left\{ x \in \OO^n : \left [ x, y \right ]=0 \text{ for all } y \in \OO^n \right\} $$ and denote the canonical perfect Hermitian pairing on $\cok \left [ \; , \; \right ]$ by $\delta_{\left [ \; , \; \right ]}$. Note that $\cok \left [ \; , \; \right ]$ is always a finite $\OO$-module. \begin{lemma} \label{lem2d} The number of Hermitian pairings $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$ such that $(\cok \left [ \; , \; \right ], \delta_{\left [ \; , \; \right ]})$ is isomorphic to a given $(\Gamma, \delta)$ is $$ \frac{\left| \Gamma \right|^n \prod_{j=n-r+1}^{n} (1 - \left| \kappa \right|^{-j})}{\left| \Aut_{\OO}(\Gamma, \delta) \right|}, $$ where $r := \dim_{\kappa}(\Gamma / \pi \Gamma)$. \end{lemma} Denote the set of $n \times n$ symmetric matrices over a commutative ring $R$ by $\Sym_n(R)$. For a matrix $A \in \HH_n(\OO)$, $\overline{A} := A \text{ mod } (\pi)$ is an element of $\HH_n(\F_{p^2})$ (resp. $\Sym_n(\F_{p})$) if $K/\Q_p$ is unramified (resp. ramified). \begin{lemma} \label{lem2e} \begin{enumerate} \item (\cite[equation (4)]{GLSV14}) The number of invertible matrices in $\HH_n(\F_{p^2})$ is \begin{equation} \label{eq2b} \left| \GL_n(\F_{p^2}) \cap \HH_n(\F_{p^2}) \right| = p^{n^2} \prod_{i=1}^{n} (1 + \frac{(-1)^i}{p^i}).
\end{equation} \item (\cite[Theorem 2]{Mac69}) The number of invertible matrices in $\Sym_n(\F_p)$ is \begin{equation} \label{eq2a} \left| \GL_n(\F_p) \cap \Sym_n(\F_p) \right| = p^{\frac{n(n+1)}{2}} \prod_{i=1}^{\left \lceil \frac{n}{2} \right \rceil} (1 - \frac{1}{p^{2i-1}}). \end{equation} \end{enumerate} \end{lemma} The following lemma is a variant of \cite[Lemma 3.2]{BKLPR15}. Since an $n \times n$ matrix over $\OO$ is invertible if and only if its reduction modulo $\pi$ (which is a matrix over $\kappa$) is invertible, the proof is exactly the same as that of the original lemma. \begin{lemma} \label{lem2f} Suppose that $A, M \in \HH_n(\OO)$ and $\det M \neq 0$. Then we have $\left< \; , \; \right>_A = \left< \; , \; \right>_M$ if and only if $A \in M + M \HH_n(\OO) M$ and $\rank_{\kappa}(\overline{A}) = \rank_{\kappa}(\overline{M})$. \end{lemma} Now we are ready to prove Theorem \ref{thm2c}, which is the main result of this section. Our proof follows the strategy of \cite[Theorem 2]{CKLPW15}. \begin{proof}[Proof of Theorem \ref{thm2c}] Throughout the proof, we denote a Haar random matrix $X_n \in \HH_n(\OO)$ by $A$ for simplicity. First we prove the case that $K/\Q_p$ is unramified. \begin{enumerate} \item By Lemma \ref{lem2d}, the number of Hermitian pairings $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$ such that $(\cok \left [ \; , \; \right ], \delta_{\left [ \; , \; \right ]})$ is isomorphic to $(\Gamma, \delta)$ is $$ \frac{\left| \Gamma \right|^n}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n} (1 - \frac{1}{p^{2j}}). $$ \item Let $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$ be a Hermitian pairing. Choose a matrix $N \in \HH_n(K)$ such that $N_{ij} \in K$ is a lift of $\left [ e_i, e_j \right ] \in K/\OO$. There exists $m \in \Z_{\geq 0}$ such that $p^m N \in \HH_n(\OO)$. By Proposition \ref{prop2a}, there exists $Y \in \GL_n(\OO)$ such that $$ YN\sigma(Y^t) = \diag(u_1'p^{d_1'-m}, \cdots, u_n'p^{d_n'-m}), $$ where $u_i' \in \OO^{\times}$ and $d_i' \in \Z_{\geq 0}$ for each $i$. By possibly changing the lift $N_{ij}$ of $\left [ e_i, e_j \right ] \in K/\OO$ for each $1 \leq i \leq j \leq n$, one may assume that $d_i' - m \leq 0$ for each $i$. In this case, $M := (YN\sigma(Y^t))^{-1} \in \HH_n(\OO)$ satisfies $\left [ \; , \; \right ] = \left< \; , \; \right>_M$ and is of the form $$ M = \diag (u_1 p^{d_1}, \cdots, u_n p^{d_n}), $$ where $u_i \in \OO^{\times}$ and $d_i \in \Z_{\geq 0}$ for each $i$. \item Let $M \in \HH_n(\OO)$ be as above. Then we have $\Gamma \cong \cok(M) \cong \prod_{i=1}^{n} \OO / p^{d_i} \OO$. By Lemma \ref{lem2f}, we have $\left< \; , \; \right>_A = \left< \; , \; \right>_M$ if and only if $X := M^{-1}(A-M)M^{-1} \in \HH_n(\OO)$ and $\rank_{\F_{p^2}}(\overline{A}) = \rank_{\F_{p^2}}(\overline{M}) = n-r$. First we compute the probability that $X_{ij} \in \OO$ for every $i \leq j$. \begin{itemize} \item For $1 \leq i < j \leq n$, we have $X_{ij} = (u_i p^{d_i})^{-1} A_{ij} (u_{j} p^{d_{j}})^{-1}\in \OO$ if and only if $p^{d_i+d_j} \mid A_{ij}$ for a random $A_{ij} \in \OO$. The probability is given by $p^{-2(d_i+d_j)}$. \item For $1 \leq i \leq n$, we have $X_{ii} = (u_i p^{d_i})^{-1} (A_{ii} - u_i p^{d_i}) (u_{i} p^{d_{i}})^{-1}\in \OO$ if and only if $p^{2d_i} \mid A_{ii} - u_i p^{d_i}$ for a random $A_{ii} \in \Z_p$. The probability is given by $p^{-2d_i}$.
\end{itemize} By the above computations, the probability that $X \in \HH_n(\OO)$ is given by \begin{equation} \label{eq2e1} \prod_{i<j}p^{-2(d_i+d_j)} \prod_{i} p^{-2d_i} = \left| \Gamma \right|^{-n}. \end{equation} Given the condition $X \in \HH_n(\OO)$, we may assume that $\overline{M}$ is zero outside of its upper left $(n-r) \times (n-r)$ submatrix by permuting the rows and columns of $M$. In this case, $\rank_{\F_{p^2}}(\overline{A})= n-r$ if and only if the upper left $(n-r) \times (n-r)$ submatrix of $\overline{A}$, which is random in $\HH_{n-r}(\F_{p^2})$, has rank $n-r$. Therefore the probability that $\left< \; , \; \right>_A = \left< \; , \; \right>_M$ is given by \begin{equation*} \frac{\left| \GL_{n-r}(\F_{p^2}) \cap \HH_{n-r}(\F_{p^2}) \right|}{\left| \HH_{n-r}(\F_{p^2}) \right|} \cdot \left| \Gamma \right|^{-n} = \prod_{i=1}^{n-r} (1 + \frac{(-1)^i}{p^i})\left| \Gamma \right|^{-n}. \end{equation*} (The equality holds due to Lemma \ref{lem2e}.) \item Using the above results, we conclude that \begin{equation*} \begin{split} \mu_n(\Gamma, \delta) & = \prod_{i=1}^{n-r} (1 + \frac{(-1)^i}{p^i})\left| \Gamma \right|^{-n} \cdot \frac{\left| \Gamma \right|^n}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n} (1 - \frac{1}{p^{2j}}) \\ & = \frac{1}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n}(1-\frac{1}{p^{2j}}) \prod_{i=1}^{n-r} (1 + \frac{(-1)^i}{p^i}). \end{split} \end{equation*} Let $\Phi_{\Gamma}$ denote the set of Hermitian perfect pairings on $\Gamma$ and $\overline{\Phi}_{\Gamma}$ be the set of isomorphism classes of elements of $\Phi_{\Gamma}$. The orbit-stabilizer theorem implies that \begin{equation} \label{eq2x1} \sum_{[\delta] \in \overline{\Phi}_{\Gamma}} \frac{1}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} = \sum_{\delta \in \Phi_{\Gamma}} \frac{1}{\left| \Aut_{\OO}(\Gamma) \right|} \end{equation} (\cite[p. 952]{Woo17}) and this implies that \begin{equation*} \lim_{n \rightarrow \infty} \PP(\cok(A) \cong \Gamma) = \lim_{n \rightarrow \infty} \sum_{[\delta] \in \overline{\Phi}_{\Gamma}} \mu_n(\Gamma, \delta) = \frac{\left| \Phi_{\Gamma} \right|}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 + \frac{(-1)^i}{p^i}). \end{equation*} \end{enumerate} \noindent Now assume that $K/\Q_p$ is ramified. \begin{enumerate} \item By Lemma \ref{lem2d}, the number of Hermitian pairings $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$ such that $(\cok \left [ \; , \; \right ], \delta_{\left [ \; , \; \right ]})$ is isomorphic to $(\Gamma, \delta)$ is $$ \frac{\left| \Gamma \right|^n}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n} (1 - \frac{1}{p^j}). $$ \item Let $\left [ \; , \; \right ] : \OO^n \times \OO^n \rightarrow K/\OO$ be a Hermitian pairing. Choose a matrix $N \in \HH_n(K)$ such that $N_{ij} \in K$ is a lift of $\left [ e_i, e_j \right ] \in K/\OO$. There exists $m \in \Z_{\geq 0}$ such that $p^m N \in \HH_n(\OO)$. By Proposition \ref{prop2b}, there exists $Y \in \GL_n(\OO)$ such that $$ YN\sigma(Y^t) = \diag (u_1' p^{d_1'-m}, \cdots, u_k' p^{d_k'-m}, p^{-m}B_1', \cdots, p^{-m}B_s') \;\; (k+2s=n), $$ where $u_i'p^{d_i'}$ ($1 \leq i \leq k$) and $B_j'=A(a_j', b_j', c_j')$ ($1 \leq j \leq s$) satisfy the conditions appearing in Proposition \ref{prop2b}. By possibly changing the lift $N_{ij}$ of $\left [ e_i, e_j \right ] \in K/\OO$ for each $1 \leq i \leq j \leq n$, one may assume that $d_i' - m \leq 0$ for each $i$ and $(p^{-m}B_j')^{-1} = \frac{p^m}{\det(B_j')} A(b_j', a_j', -c_j') \in \HH_2(\OO)$ for each $j$.
In this case, $M := (YN\sigma(Y^t))^{-1} \in \HH_n(\OO)$ satisfies $\left [ \; , \; \right ] = \left< \; , \; \right>_M$ and is of the form $$ M = \diag (u_1 p^{d_1}, \cdots, u_k p^{d_k}, B_1, \cdots, B_s) \;\; (k+2s=n), $$ where $u_ip^{d_i} \in \Z_p$ ($1 \leq i \leq k$) and $B_j = A(a_j, b_j, c_j) \in \HH_2(\OO)$ ($1 \leq j \leq s$) satisfy the conditions appearing in Proposition \ref{prop2b}. \item Let $M \in \HH_n(\OO)$ be as above. The conditions $c_j \neq 0$, $\displaystyle \frac{a_j}{c_j} \in \OO$ and $\displaystyle \frac{b_j}{\pi c_j} \in \OO$ imply that $$ \cok (B_j) \cong \cok \begin{pmatrix} a_j & c_j \\ \sigma(c_j)-a_jb_jc_j^{-1} & 0 \\ \end{pmatrix} \cong \cok \begin{pmatrix} 0 & c_j \\ \sigma(c_j)-a_jb_jc_j^{-1} & 0 \\ \end{pmatrix} \cong (\OO/c_j\OO)^2 $$ so we have $$ \Gamma \cong \cok(M) \cong \prod_{i=1}^{k} \OO / \pi^{2d_i} \OO \times \prod_{j=1}^{s} (\OO / \pi^{e_j} \OO)^2 $$ for $\pi^{e_j} \parallel c_j$. By Lemma \ref{lem2f}, we have $\left< \; , \; \right>_A = \left< \; , \; \right>_M$ if and only if $X := M^{-1}(A-M)M^{-1} \in \HH_n(\OO)$ and $\rank_{\F_p}(\overline{A}) = \rank_{\F_p}(\overline{M}) = n-r$. First we compute the probability that $X_{ij} \in \OO$ for every $i \leq j$. To simplify the proof, we will introduce some notation. For a positive integer $x$, denote $\underline{x} := x+k$. For $1 \leq i \leq k$ and $1 \leq j \leq s$, denote $\YY_{ij} := \begin{pmatrix} X_{i, \, \underline{2j-1}} & X_{i, \, \underline{2j}} \\ \end{pmatrix}$. For $1 \leq j \leq j' \leq s$, denote $\ZZ_{j j'} := \begin{pmatrix} X_{\underline{2j-1}, \, \underline{2j'-1}} & X_{\underline{2j-1}, \, \underline{2j'}} \\ X_{\underline{2j}, \, \underline{2j'-1}} & X_{\underline{2j}, \, \underline{2j'}} \\ \end{pmatrix}$. \begin{itemize} \item For $1 \leq i < i' \leq k$, we have $X_{ii'} = (u_i p^{d_i})^{-1} A_{ii'} (u_{i'} p^{d_{i'}})^{-1}\in \OO$ if and only if $\pi^{2(d_i+d_{i'})} \mid A_{ii'}$ for a random $A_{ii'} \in \OO$. The probability is given by $p^{-2(d_i+d_{i'})}$. \item For $1 \leq i \leq k$, we have $X_{ii} = (u_i p^{d_i})^{-1}(A_{ii} - u_i p^{d_i})(u_i p^{d_i})^{-1} \in \OO$ if and only if $p^{2d_i} \mid A_{ii} - u_ip^{d_i}$ for a random $A_{ii} \in \Z_p$. The probability is given by $p^{-2d_i}$. \item For $1 \leq i \leq k$ and $1 \leq j \leq s$, denote $(a_j, b_j, c_j)=(a,b,c)$ for simplicity. Then we have \begin{equation*} \begin{split} & \YY_{ij} = (u_i p^{d_i})^{-1} \begin{pmatrix} A_{i, \, \underline{2j-1}} = x & A_{i, \, \underline{2j}} = y \\ \end{pmatrix} B_j^{-1} \in \M_{1 \times 2}(\OO) \\ \Leftrightarrow \; & -\frac{b}{\sigma(c)} x + y, \, x - \frac{a}{c} y \in \pi^{2d_i+e_j} \OO \\ \Leftrightarrow \; & x, y \in \pi^{2d_i+e_j} \OO. \end{split} \end{equation*} The probability is given by $p^{-2(2d_i+e_j)}$. \item For $1 \leq j < j' \leq s$, denote $(a_j, b_j, c_j)=(a,b,c)$ and $(a_{j'}, b_{j'}, c_{j'})=(a',b',c')$ for simplicity.
Then we have \begin{equation*} \begin{split} & \ZZ_{jj'} = B_j^{-1} \begin{pmatrix} A_{\underline{2j-1}, \, \underline{2j'-1}} = x & A_{\underline{2j-1}, \, \underline{2j'}} = y \\ A_{\underline{2j}, \, \underline{2j'-1}} = z & A_{\underline{2j}, \, \underline{2j'}} = w \\ \end{pmatrix} B_{j'}^{-1} \in \M_{2}(\OO) \\ \Leftrightarrow \; & \begin{pmatrix} bb' & -b \sigma(c') & -cb' & c \sigma(c') \\ -bc' & ba' & cc' & -ca' \\ - \sigma(c) b' & \sigma(c) \sigma(c') & ab' & -a \sigma(c') \\ \sigma(c) c' & -\sigma(c) a' & -ac' & aa' \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ w\end{pmatrix} \in \pi^{2(e_j+e_{j'})} \M_{4 \times 1}(\OO) \\ \Leftrightarrow \;\, & T \begin{pmatrix} x \\ y \\ z \\ w\end{pmatrix} := \begin{pmatrix} \frac{bb'}{c \sigma(c')} & -\frac{b}{c} & -\frac{b'}{\sigma(c')} & 1 \\ -\frac{b}{c} & \frac{ba'}{cc'} & 1 & -\frac{a'}{c'} \\ -\frac{b'}{\sigma(c')} & 1 & \frac{ab'}{\sigma(c) \sigma(c')} & -\frac{a}{\sigma(c)} \\ 1 & -\frac{a'}{c'} & -\frac{a}{\sigma(c)} & \frac{aa'}{\sigma(c) c'} \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ w\end{pmatrix} \in \pi^{e_j+e_{j'}} \M_{4 \times 1}(\OO). \end{split} \end{equation*} Since the matrix $T \in \M_4(\OO)$ is invertible, the probability is given by $p^{-4(e_j+e_{j'})}$. \item For $1 \leq j \leq s$, denote $(a_j, b_j, c_j)=(a,b,c)$ for simplicity. Then we have \begin{equation*} \begin{split} & \ZZ_{jj} = B_j^{-1} \begin{pmatrix} A_{\underline{2j-1}, \, \underline{2j-1}} = x & A_{\underline{2j-1}, \, \underline{2j}} = y \\ A_{\underline{2j}, \, \underline{2j-1}} = \sigma(y) & A_{\underline{2j}, \, \underline{2j}} = z \\ \end{pmatrix} B_{j}^{-1} \in \M_{2}(\OO) \\ \Leftrightarrow \; & \begin{pmatrix} \frac{b^2}{c \sigma(c)} & -\frac{b}{c} & -\frac{b}{\sigma(c)} & 1 \\ -\frac{b}{c} & \frac{ba}{c^2} & 1 & -\frac{a}{c} \\ -\frac{b}{\sigma(c)} & 1 & \frac{ab}{\sigma(c)^2} & -\frac{a}{\sigma(c)} \\ 1 & -\frac{a}{c} & -\frac{a}{\sigma(c)} & \frac{a^2}{\sigma(c) c} \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \sigma(y) \\ z \end{pmatrix} \in \pi^{2e_j} \M_{4 \times 1}(\OO) \\ \Leftrightarrow \;\, & x, z \in p^{e_j} \Z_p, \, y \in \pi^{2e_j} \OO \end{split} \end{equation*} for a random $x, z \in \Z_p$ and $y \in \OO$. The probability is given by $p^{-4e_j}$. \end{itemize} By the above computations, the probability that $X \in \HH_n(\OO)$ is given by \begin{equation} \label{eq2e} \prod_{i<i'}p^{-2(d_i+d_{i'})} \prod_{i} p^{-2d_i} \prod_{i, j} p^{-2(2d_i+e_j)} \prod_{j<j'} p^{-4(e_j+e_{j'})} \prod_{j} p^{-4e_j} = \left| \Gamma \right|^{-n}. \end{equation} Given the condition $X \in \HH_n(\OO)$, we may assume that $\overline{M}$ is zero outside of its upper left $(n-r) \times (n-r)$ submatrix by permuting the rows and columns of $M$. In this case, $\rank_{\F_p}(\overline{A})= n-r$ if and only if the upper left $(n-r) \times (n-r)$ submatrix of $\overline{A}$, which is random in $\Sym_{n-r}(\F_p)$, has rank $n-r$. Therefore the probability that $\left< \; , \; \right>_A = \left< \; , \; \right>_M$ is given by \begin{equation*} \frac{\left| \GL_{n-r}(\F_p) \cap \Sym_{n-r}(\F_p) \right|}{\left| \Sym_{n-r}(\F_p) \right|} \cdot \left| \Gamma \right|^{-n} = \prod_{i=1}^{\left \lceil \frac{n-r}{2} \right \rceil} (1 - \frac{1}{p^{2i-1}}) \left| \Gamma \right|^{-n}. \end{equation*} (The equality holds due to Lemma \ref{lem2e}.)
\item Using the above results, we conclude that \begin{equation*} \begin{split} \mu_n(\Gamma, \delta) & = \prod_{i=1}^{\left \lceil \frac{n-r}{2} \right \rceil} (1 - \frac{1}{p^{2i-1}}) \left| \Gamma \right|^{-n} \cdot \frac{\left| \Gamma \right|^n}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n} (1 - \frac{1}{p^j}) \\ & = \frac{1}{\left| \Aut_{\OO}(\Gamma, \delta) \right|} \prod_{j=n-r+1}^{n}(1-\frac{1}{p^j}) \prod_{i=1}^{\left \lceil \frac{n-r}{2} \right \rceil}(1 - \frac{1}{p^{2i-1}}) \end{split} \end{equation*} and the equation (\ref{eq2x1}) implies that \begin{equation*} \lim_{n \rightarrow \infty} \PP(\cok(A) \cong \Gamma) = \lim_{n \rightarrow \infty} \sum_{[\delta] \in \overline{\Phi}_{\Gamma}} \mu_n(\Gamma, \delta) = \frac{\left| \Phi_{\Gamma} \right|}{\left| \Aut_{\OO}(\Gamma) \right|} \prod_{i=1}^{\infty}(1 - \frac{1}{p^{2i-1}}). \qedhere \end{equation*} \end{enumerate} \end{proof} For a partition $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$ ($\lambda_r \geq 1$), an $\OO$-module of \textit{type} $\lambda$ is defined to be $\prod_{i=1}^{r} \OO / \pi^{\lambda_i} \OO$. The moments of the cokernel of a Haar random matrix $X_n \in \HH_n(\OO)$ are given as follows. The next theorem is a special case of Theorem \ref{thm1i}. \begin{theorem} \label{thm2g} Let $G = G_{\lambda}$ be a finite $\OO$-module of type $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$. \begin{enumerate} \item If $K/\Q_p$ is unramified, then \begin{equation} \label{eq2f} \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(\cok(X_n), G)) = p^{\sum_{i=1}^{r} (2i-1)\lambda_i}. \end{equation} \item If $K/\Q_p$ is ramified, then \begin{equation} \label{eq2g} \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(\cok(X_n), G)) = p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )}. \end{equation} \end{enumerate} \end{theorem} \begin{proof} For $n \geq r$, denote $A = X_n$ for simplicity. Assume that $\det (A) \neq 0$ (so $\cok(A)$ is finite), which holds with probability $1$. Following the proof of \cite[Theorem 11]{CKLPW15}, the problem reduces to the computation of the probability that $A\OO^n \subseteq D\OO^n$ where $$ D := \diag(\pi^{e_1}, \cdots, \pi^{e_n}) \;\; (e_1 = \lambda_1, \cdots, e_r = \lambda_r, \, e_{r+1} = \cdots = e_n = 0). $$ The condition $A\OO^n \subseteq D\OO^n$ holds if and only if $A_{ij} \in \pi^{e_i}\OO$ for every $i \leq j$. \begin{itemize} \item For $i<j$, $A_{ij}$ is a random element of $\OO$ so $$ \PP(A_{ij} \in \pi^{e_i}\OO) = \left\{\begin{matrix} p^{-2e_i} & (K/\Q_p \text{ is unramified}) \\ p^{-e_i} & (K/\Q_p \text{ is ramified}) \end{matrix}\right. . $$ \item For $i=j$, $A_{ii}$ is a random element of $\Z_p$. If $K/\Q_p$ is unramified, then $A_{ii} \in \pi^{e_i}\OO$ if and only if $A_{ii} \in p^{e_i}\Z_p$. If $K/\Q_p$ is ramified, then $A_{ii} \in \pi^{e_i}\OO$ if and only if $A_{ii} \in p^{\left \lceil \frac{e_i}{2} \right \rceil}\Z_p$. Thus $$ \PP(A_{ii} \in \pi^{e_i}\OO) = \left\{\begin{matrix} p^{-e_i} & (K/\Q_p \text{ is unramified}) \\ p^{-\left \lceil \frac{e_i}{2} \right \rceil} & (K/\Q_p \text{ is ramified}) \end{matrix}\right. .
$$ \end{itemize} Since the probability that a random uniform element of $\Hom_{\OO}(\OO^n, G)$ is a surjection goes to $1$ as $n \rightarrow \infty$, we have \begin{equation*} \begin{split} & \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(\cok(X_n), G)) \\ = & \lim_{n \rightarrow \infty} \left| \Sur_{\OO}(\OO^n, G) \right| \prod_{1 \leq i < j \leq n} \PP(A_{ij} \in \pi^{e_i}\OO) \prod_{1 \leq i \leq n} \PP(A_{ii} \in \pi^{e_i}\OO) \\ = & \left\{\begin{matrix} p^{\sum_{i=1}^{r} (2i-1)\lambda_i} & (K/\Q_p \text{ is unramified}) \\ p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )} & (K/\Q_p \text{ is ramified}) \end{matrix}\right. . \qedhere \end{split} \end{equation*} \end{proof} Let $\lambda$ be a partition and $\lambda'$ be its transpose partition. Then we have $\sum_{i} (2i-1) \lambda_i = \sum_{j} \lambda_j'^2$. (This follows from the formula $\sum_{i} (i-1) \lambda_i = \sum_{j} \frac{\lambda_j'^2-\lambda_j'}{2}$ which appears in \cite[p. 717]{CKLPW15}.) By Theorem \ref{thm2g}, the $n \rightarrow \infty$ limit of the moment $\EE(\# \Sur_{\OO}(\cok(X_n), G))$ for a finite $\OO$-module $G$ of type $\lambda$ is bounded above by $\left| \kappa \right|^{\sum_{j=1}^{\lambda_1}\frac{\lambda_j'^2}{2}}$ in both unramified and ramified cases. In the next section, we will prove that these moments determine the distribution. \section{Moments} \label{Sec3} Unlike the equidistributed case, it seems almost impossible to compute the distribution of the cokernel of an arbitrary $\varepsilon$-balanced matrix directly. The use of moments enables us to compute the distributions of random groups (in our case, random $\OO$-modules) without direct computation. Wood \cite{Woo17, Woo19} and Nguyen-Wood \cite{NW22} proved Theorem \ref{thm1b} by computing the moments of the cokernels of $\varepsilon$-balanced matrices and showing that these moments determine the distribution. In the remaining part of the paper, we follow the same strategy. In this section, we prove that the distribution of the cokernel of a Haar random Hermitian matrix over $\OO$ is uniquely determined by its moments that appear in Theorem \ref{thm2g}. Since the ring $\OO$ is a PID, the structure theorem for finitely generated modules over a PID implies that the partitions classify finite $\OO$-modules. Our arguments are largely based on \cite[Section 7--8]{Woo17}, so some details will be omitted. For a partition $\lambda$, denote the finite $\OO$-module of type $\lambda$ by $G_{\lambda}$ and let $m(G_{\lambda}) := \left| \kappa \right|^{\sum_{j=1}^{\lambda_1} \frac{\lambda_j'^2}{2}}$. For partitions $\mu \leq \lambda$, let $G_{\mu, \lambda}$ be the set of $\OO$-submodules of $G_{\lambda}$ of type $\mu$. The following lemma is an analogue of \cite[Lemma 7.5]{Woo17}. \begin{lemma} \label{lem3a} We have $\sum_{G \leq G_{\lambda}} m(G) \leq F^{\lambda_1} m(G_{\lambda})$ for $F := \prod_{i=1}^{\infty} (1 - 2^{-i})^{-1} \cdot \sum_{d=0}^{\infty} 2^{-\frac{d^2}{2}} > 0$. \end{lemma} \begin{proof} Let $C = \prod_{i=1}^{\infty} (1 - 2^{-i})$ and $D = \sum_{d=0}^{\infty} 2^{-\frac{d^2}{2}}$, so that $F = C^{-1}D$. 
Then we have \begin{equation} \label{eq3a} \begin{split} \left| G_{\mu, \lambda} \right| & = \prod_{j \geq 1} \left ( \left| \kappa \right|^{\mu_j'\lambda_j' - \mu_j'^2} \prod_{k=1}^{\mu_j'-\mu_{j+1}'} \frac{1-\left| \kappa \right|^{-\lambda_j'+\mu_j'-k}}{1-\left| \kappa \right|^{-k}}\right ) \\ & \leq \left| \kappa \right|^{\sum_{j=1}^{\lambda_1} (\mu_j'\lambda_j' - \mu_j'^2)} \prod_{j=1}^{\lambda_1} \left ( \prod_{k=1}^{\infty} \frac{1}{1-\left| \kappa \right|^{-k}} \right ) \\ & \leq \frac{1}{C^{\lambda_1}} \left| \kappa \right|^{\sum_{j=1}^{\lambda_1} (\mu_j'\lambda_j' - \mu_j'^2)} \end{split} \end{equation} by \cite[Theorem 2.4]{HL00}. Now the equation (\ref{eq3a}) implies that \begin{equation*} \begin{split} \sum_{G \leq G_{\lambda}} m(G) & = \sum_{\mu \leq \lambda} \left| G_{\mu, \lambda} \right| m(G_{\mu}) \\ & \leq \frac{1}{C^{\lambda_1}} \sum_{\mu \leq \lambda} \left| \kappa \right|^{\sum_{j=1}^{\lambda_1} (\mu_j'\lambda_j' - \mu_j'^2 + \frac{\mu_j'^2}{2})} \\ & = \frac{\left| \kappa \right|^{\sum_{j=1}^{\lambda_1} \frac{\lambda_j'^2}{2}}}{C^{\lambda_1}} \sum_{\mu \leq \lambda} \left| \kappa \right|^{\sum_{j=1}^{\lambda_1} - \frac{(\lambda_j' - \mu_j')^2}{2}} \\ & \leq \frac{\left| \kappa \right|^{\sum_{j=1}^{\lambda_1} \frac{\lambda_j'^2}{2}}}{C^{\lambda_1}} \sum_{d_1, \cdots, d_{\lambda_1} \geq 0} \left| \kappa \right|^{-\sum_{j=1}^{\lambda_1} \frac{d_j^2}{2}} \\ & \leq F^{\lambda_1} m(G_{\lambda}). \qedhere \end{split} \end{equation*} \end{proof} \begin{lemma} \label{lem3b} Let $m$ be a positive integer, $M$ be the set of partitions with at most $m$ parts and $t>1$ and $b<1$ be real numbers. For each $\mu \in M$, let $x_{\mu}$ and $y_{\mu}$ be non-negative real numbers. Suppose that for all $\lambda \in M$, $$ \sum_{\mu \in M} x_{\mu} t^{\sum_{i} \lambda_i \mu_i} = \sum_{\mu \in M} y_{\mu} t^{\sum_{i} \lambda_i \mu_i} = O(F^m t^{\sum_{i} \frac{\lambda_i^2 + b \lambda_i}{2}}) $$ for an absolute constant $F>0$. Then we have $x_{\mu} = y_{\mu}$ for all $\mu \in M$. \end{lemma} \begin{proof} See \cite[Theorem 8.2]{Woo17}. The upper bound given there is of the form $O(F^m t^{\sum_{i} \frac{\lambda_i^2 - \lambda_i}{2}})$, but the same proof works for the bound $O(F^m t^{\sum_{i} \frac{\lambda_i^2 + b \lambda_i}{2}})$ for every $b<1$. \end{proof} The following theorem can be proved exactly as in \cite[Theorem 8.3]{Woo17} using Lemmas \ref{lem3a} and \ref{lem3b} (for $t = \left| \kappa \right|$ and $b=0$). Since the limits of the moments appearing in Theorem \ref{thm2g} are bounded above by $m(G_{\lambda})$, this theorem implies that the limiting distribution of the cokernels of Haar random matrices in $\HH_n(\OO)$ is uniquely determined by their moments. \begin{theorem} \label{thm3c} Let $(A_n)_{n \geq 1}$ and $(B_n)_{n \geq 1}$ be sequences of random finitely generated $\OO$-modules. Let $a$ be a non-negative integer and $\mathcal{M}_a$ be the set of finite $\OO$-modules $M$ such that $\pi^a M = 0$. Suppose that for every $G \in \mathcal{M}_a$, the limit $\displaystyle \lim_{n \rightarrow \infty} \PP(A_n \otimes \OO/\pi^a \OO \cong G)$ exists and we have $$ \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(A_n , G)) = \lim_{n \rightarrow \infty} \EE(\# \Sur_{\OO}(B_n , G)) = O(m(G)). $$ Then for every $G \in \mathcal{M}_a$, we have $\displaystyle \lim_{n \rightarrow \infty} \PP(A_n \otimes \OO/\pi^a \OO \cong G) = \lim_{n \rightarrow \infty} \PP(B_n \otimes \OO/\pi^a \OO \cong G)$.
\end{theorem} \section{Universality of the cokernel: the unramified case} \label{Sec4} In this section, we assume that $K/\Q_p$ is unramified. Let $m$ be a positive integer and $G$ be a finite $\OO$-module of type $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$ such that $p^m G = 0$ (equivalently, $m \geq \lambda_1$). Let $R$ and $R_1$ be the rings $\OO / p^m \OO$ and $\Z / p^m \Z$, respectively. Denote the image of $w$ in $R$ also by $w$. (Recall that $w$ is a fixed primitive $(p^2-1)$-th root of unity in $\overline{\Q_p}$.) For $X \in \HH_n(\OO)$ (resp. $X \in \HH_n(R)$), denote $X_{ij} = Y_{ij} + w Z_{ij}$ for $Y_{ij}, Z_{ij} \in \Z_p$ (resp. $Y_{ij}, \, Z_{ij} \in R_1$). Then $Z_{ii}=0$ and $X$ is determined by $n^2$ elements $Y_{ij}$ ($i \leq j$), $Z_{ij}$ ($i < j$). \begin{definition} \label{def4a} Let $0 < \varepsilon < 1$. A random variable $x$ in $\Z_p$ (or $R_1$) is \textit{$\varepsilon$-balanced} if $\PP (x \equiv r \,\, (\text{mod } p)) \leq 1 - \varepsilon$ for every $r \in \Z/p\Z$. A random matrix $X$ in $\HH_n(\OO)$ (or $\HH_n(R)$) is \textit{$\varepsilon$-balanced} if the $n^2$ elements $Y_{ij}$ ($i \leq j$), $Z_{ij}$ ($i < j$) are independent and $\varepsilon$-balanced. \end{definition} Let $V=R^n$ (resp. $W=R^n$) with a standard basis $v_1, \cdots, v_n$ (resp. $w_1, \cdots, w_n$) and its dual basis $v_1^*, \cdots, v_n^*$ (resp. $w_1^*, \cdots, w_n^*$). For an $\varepsilon$-balanced matrix $X_0 \in \HH_n(\OO)$, its modulo $p^m$ reduction $X \in \HH_n(R)$ is also $\varepsilon$-balanced and we have \begin{equation} \label{eq4a} \EE(\# \Sur_{\OO}(\cok(X_0), G)) = \EE(\# \Sur_{R}(\cok(X), G)) = \sum_{F \in \Sur_R(V, G)} \PP (FX = 0). \end{equation} (Here we understand $X \in \HH_n(R)$ as an element of $\Hom_R(W, V)$, so $FX \in \Hom_R(W, G)$.) Therefore, to compute the moments of $\cok(X_0)$, it is enough to compute the probability $\PP (FX = 0)$ for each $F \in \Sur_R(V, G)$. \begin{lemma} \label{lem4b} Let $\zeta := \zeta_{p^m} \in \C$ be a primitive $p^m$-th root of unity and $\tr : R \rightarrow R_1$ be the trace map given by $\tr (x) := x + \sigma(x)$. Then we have \begin{equation} \label{eq4new1} 1_{FX=0} = \frac{1}{\left| G \right|^n}\sum_{C \in \Hom_R(\Hom_R(W, G), R)} \zeta^{\tr(C(FX))}. \end{equation} \end{lemma} \begin{proof} Write $J := \Hom_R(\Hom_R(W, G), R)$ for simplicity. When $FX = 0$, the right-hand side of the equation (\ref{eq4new1}) is given by $$ \frac{1}{\left| G \right|^n}\sum_{C \in J} \zeta^{\tr(C(0))} = \frac{\left| J \right|}{\left| G \right|^n} = 1. $$ Therefore it is enough to show that if $\alpha \in \Hom_R(W, G)$ is non-zero, then $$ \frac{1}{\left| G \right|^n}\sum_{C \in J} \zeta^{\tr (C \alpha)} = 0. $$ Let $t>0$ be an integer such that $p^{t-1} \alpha \neq 0$ and $p^t \alpha = 0$ in $\Hom_R(W, G)$. Without loss of generality, we may assume that $p^{t-1} \alpha(w_1) \neq 0$ (and $p^t \alpha(w_1)=0$). For each $x \in R$ with $p^t x = 0$, there exists $C_1 \in \Hom_R(G, R)$ such that $C_1(\alpha(w_1)) = x$. Since we have $G \cong \prod_{j=1}^{r} R/p^{\lambda_j}R$, $\alpha(w_1) \in G$ corresponds to $(\alpha(w_1)_1, \cdots, \alpha(w_1)_r) \in \prod_{j=1}^{r} R/p^{\lambda_j}R$ and $p^{t-1}\alpha(w_1)_k \neq 0$ for some $k$. Then $\alpha(w_1)_k = p^{\lambda_k-t}u$ for some $u \in R^{\times}$ so the map $C_1 \in \Hom_R(G, R)$ defined by $\displaystyle C_1(y_1, \cdots, y_r)=\frac{x}{p^{\lambda_k-t}u}y_k$ is well-defined and satisfies $C_1(\alpha(w_1)) = x$. 
Using an isomorphism $\Hom_R(W, G) \cong G^n$ ($f \mapsto (f(w_1), \cdots, f(w_n))$), let $C \in J$ be the element which corresponds to $(C_1, 0, \cdots, 0) \in \Hom_R(G, R)^n$. Then $C \alpha = C_1(\alpha(w_1)) = x$. Conversely, if there exists $C \in J$ such that $C \alpha = x$, then $p^t x = C(p^t \alpha) = 0$. Therefore $J_x := \left\{ C \in J : C \alpha = x \right\}$ is nonempty if and only if $x \in R[p^t] := \left\{ x \in R : p^tx = 0 \right\}$. For any $x \in R[p^t]$ and $C_x \in J_x$, we have $J_x = \left\{ C_x + C_0 : C_0 \in J_0 \right\}$ so each of the sets $J_x$ has the same cardinality $\displaystyle \frac{\left| G \right|^n}{\left| R[p^t] \right|} = \frac{\left| G \right|^n}{p^{2t}}$. Now we have \begin{equation} \label{eq4new2} \frac{1}{\left| G \right|^n}\sum_{C \in J} \zeta^{\tr(C \alpha)} = \frac{1}{p^{2t}} \sum_{x \in R[p^t]} \zeta^{\tr(x)} = \frac{1}{p^{2t}} \sum_{y, z \in R_1[p^t]} \zeta^{2y+(w+w^p)z}. \end{equation} (Recall that we have $\sigma(w) = w^p$ in the unramified case.) When $p$ is odd, we have $$ \sum_{y \in R_1[p^t]} \zeta^{2y+(w+w^p)z} = \zeta^{(w+w^p)z} \sum_{y \in R_1[p^t]} \zeta^{2y} = 0 $$ for each $z \in R_1[p^t]$. When $p=2$, we have $w + w^2 = -1$ so $$ \sum_{z \in R_1[p^t]} \zeta^{2y+(w+w^p)z} = \zeta^{2y} \sum_{z \in R_1[p^t]} \zeta^{-z} = 0 $$ for each $y \in R_1[p^t]$. Thus the right-hand side of the equation (\ref{eq4new2}) is always zero. \end{proof} By the above lemma, we have \begin{equation} \label{eq4b} \PP (FX = 0) = \EE (1_{FX=0}) = \frac{1}{\left| G \right|^n}\sum_{C \in \Hom_R(\Hom_R(W, G), R)} \EE (\zeta^{\tr(C(FX))}). \end{equation} For a finite $R$-module $G$, its conjugate ${}^{\sigma}G$ is an $R$-module which is the same as $G$ as an abelian group, but the scalar multiplication is given by $r \cdot x := \sigma(r)x$. For every $f \in G^* := \Hom_{R}(G, R)$, there is a conjugate ${}^{\sigma}f \in ({}^{\sigma}G)^*$ defined by $({}^{\sigma}f)(x) := \sigma(f(x))$. Define an $R$-module structure on $G^*$ by $(rf)(x) := r f(x)$. \begin{lemma} \label{lem4new1} Let $G$ and $G'$ be finite $R$-modules. \begin{enumerate} \item There is a canonical $R$-module isomorphism ${}^{\sigma} (G^*) \cong ({}^{\sigma}G)^*$. \item There is a canonical $R$-module isomorphism $\Hom_R(G, G') \cong \Hom_R({}^{\sigma}G, {}^{\sigma}G')$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Let $\phi : {}^{\sigma} (G^*) \rightarrow ({}^{\sigma}G)^*$ be the map defined by $f \mapsto {}^{\sigma}f$. It is clear that $\phi$ is a bijection. For $f, g \in {}^{\sigma} (G^*)$, we have $({}^{\sigma}(f+g))(x) = \sigma((f+g)(x)) = \sigma(f(x)) + \sigma(g(x)) = ({}^{\sigma}f)(x) + ({}^{\sigma}g)(x)$ so ${}^{\sigma}(f+g) = {}^{\sigma}f + {}^{\sigma}g$. For $f \in {}^{\sigma} (G^*)$ and $r \in R$, we have $({}^{\sigma}(r \cdot f))(x) = ({}^{\sigma}(\sigma(r) f))(x) = \sigma((\sigma(r) f)(x)) = \sigma(\sigma(r) f(x))$ and $(r ({}^{\sigma}f))(x) = ({}^{\sigma}f)(r \cdot x) = \sigma(f(\sigma(r)x)) = \sigma(\sigma(r) f(x))$ so ${}^{\sigma}(r \cdot f) = r ({}^{\sigma}f)$. Therefore $\phi$ is an $R$-module isomorphism. \item Let $\phi : \Hom_R(G, G') \rightarrow \Hom_R({}^{\sigma}G, {}^{\sigma}G')$ be the map defined by $\phi(f)(x)=f(x)$. It is clear that $\phi$ is an isomorphism of abelian groups. For $f \in \Hom_R(G, G')$ and $r \in R$, we have $\phi(rf)(x) = f(r \cdot x) = \phi(f)(r \cdot x) = (r\phi(f))(x)$ for every $x \in {}^{\sigma}G$ so $\phi(rf) = r\phi(f)$.
\qedhere \end{enumerate} \end{proof} Now we identify $V = (^{\sigma}W)^*$ (so $W \cong {}^{\sigma} (V^*) \cong ({}^{\sigma}V)^*$), $v_i = {}^{\sigma}(w_i^*)$ and $w_i = {}^{\sigma}(v_i^*)$. For $F \in \Hom_R(V, G)$ and $$ C \in \Hom_R(\Hom_R(W, G), R) \cong \Hom_R(W^*, G^*) \cong \Hom_R({}^{\sigma} (W^*), {}^{\sigma} (G^*)) \cong \Hom_R(V, ({}^{\sigma}G)^*), $$ denote $e_{ij} := C(v_j)(F(v_i)) \in R$. (By abuse of notation, we denote the image of $C$ in $\Hom_R(V, ({}^{\sigma}G)^*)$ also by $C$.) Since $X$ is Hermitian, $X_{ji} = \sigma(X_{ij}) = Y_{ij} + w^p Z_{ij}$. Therefore \begin{equation} \label{eq4c} \begin{split} \tr (C(FX)) & = \tr (\sum_{i, j} e_{ij}X_{ij} ) \\ & = \sum_{i=1}^{n} \tr(e_{ii}X_{ii}) + \sum_{i < j} \tr (e_{ij}(Y_{ij}+ w Z_{ij}) + e_{ji}(Y_{ij}+ w^p Z_{ij})) \\ & = \sum_{i=1}^{n} \tr(e_{ii}) Y_{ii} + \sum_{i < j} \tr(e_{ij}+e_{ji})Y_{ij} + \sum_{i < j} \tr(e_{ij} w +e_{ji} w^p)Z_{ij}. \end{split} \end{equation} By the equations (\ref{eq4b}) and (\ref{eq4c}), we have \begin{equation} \label{eq4d} \begin{split} & \PP (FX=0) \\ = \, & \frac{1}{\left| G \right|^n} \sum_{C} \left ( \prod_{i} \EE(\zeta^{\tr(e_{ii}) Y_{ii}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{\tr(e_{ij}+e_{ji})Y_{ij}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{\tr(e_{ij}w +e_{ji} w^p)Z_{ij}}) \right ) \\ = & \frac{1}{\left| G \right|^n} \sum_{C} p_F(C) \end{split} \end{equation} for $$ p_F(C) := \left ( \prod_{i} \EE(\zeta^{\tr(e_{ii}) Y_{ii}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{\tr(e_{ij}+e_{ji})Y_{ij}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{\tr(e_{ij}w +e_{ji} w^p)Z_{ij}}) \right ). $$ For every $i \leq j$, define $E(C,F,i,j) := e_{ij} + \sigma(e_{ji})$. \begin{remark} \label{rmk4c} Let $e_{ij} = a+w^p b$ and $e_{ji} = c+wd$ for $a,b,c,d \in R_1$. Then we have \begin{equation*} \begin{split} \tr(e_{ij}+e_{ji}) & = 2(a+c)+(w+w^p)(b+d), \\ \tr(e_{ij} w +e_{ji} w^p) & = (w+w^p)(a+c)+2w^{p+1}(b+d). \end{split} \end{equation*} One can check that both are zero if and only if $a+c = b+d = 0$. (Assume that $\tr(e_{ij}+e_{ji}) = \tr(e_{ij} w +e_{ji} w^p) = 0$. If one of $a+c$ and $b+d$ is zero, then the other one should also be zero. If both $a+c$ and $b+d$ are non-zero, then we have $2(a+c) \cdot 2w^{p+1}(b+d) = (w+w^p)(b+d) \cdot (w+w^p)(a+c)$ so $(w+w^p)^2 - 2 \cdot 2w^{p+1} = (w-w^p)^2=0$, which is not true.) Since $$ E(C, F, i, j) = (a+c) + w^p (b+d), $$ this is equivalent to the condition $E(C, F, i, j)=0$. By \cite[Lemma 4.2]{Woo17}, we have $\displaystyle \left| p_F(C) \right| \leq \exp(-\frac{\varepsilon N}{p^{2m}})$ where $N$ is the number of the non-zero coefficients $E(C, F, i, j)$. \end{remark} For $F \in \Hom_R(V, G)$ and $C \in \Hom_R(V, ({}^{\sigma}G)^*)$, define the maps $\phi_{F, C} \in \Hom_R(V, G \oplus ({}^{\sigma}G)^*)$ and $\phi_{C, F} \in \Hom_R(V, ({}^{\sigma}G)^* \oplus G)$ by $\phi_{F,C}(v) = (F(v), C(v))$ and $\phi_{C, F}(v) = (C(v), F(v))$. Then, for a map \begin{equation*} \begin{split} t : (({}^{\sigma}G)^* \oplus G) \times (G \oplus ({}^{\sigma}G)^*) & \rightarrow R \\ ((\phi_1, g_1), (g_2, \phi_2)) & \mapsto \phi_1(g_2) + \sigma(\phi_2(g_1))), \end{split} \end{equation*} we have $t(\phi_{C, F}(v_j), \phi_{F, C}(v_i)) = E(C, F, i, j)$. For $\nu \subset [n]$, let $V_{\nu}$ (resp. $V_{\setminus \nu}$) be an $R$-submodule of $V$ generated by $v_i$ with $i \in \nu$ (resp. $i \in [n] \setminus \nu$). The following definitions are from \cite[p.928--929]{Woo17}. \begin{definition} \label{def4d} Let $0 < \gamma < 1$ be a real number. 
For a given $F$, we say $C$ is $\gamma$-\textit{robust} for $F$ if for every $\nu \subset [n]$ with $\left| \nu \right| < \gamma n$, we have $\ker(\phi_{C, F} \mid_{V \setminus \nu}) \neq \ker(F \mid_{V \setminus \nu})$. Otherwise, we say $C$ is $\gamma$-\textit{weak} for $F$. \end{definition} \begin{definition} \label{def4e} Let $d_0 > 0$ be a real number. An element $F \in \Hom_R(V, G)$ is called a \textit{code} of distance $d_0$ if for every $\nu \subset [n]$ with $\left| \nu \right| < d_0$, we have $F V_{\setminus \nu} = G$. \end{definition} The following lemmas are analogues of \cite[Lemma 3.1 and 3.5]{Woo17}, whose proofs are also identical. Since the classification of finitely generated modules over $\OO/p^m \OO$ and finitely generated modules over $\Z/p^m\Z$ are the same, we can imitate the proof given in \cite{Woo17}. The only difference is that the equation $t(\phi_{C, F}(v_i), \phi_{F, C}(v_i)) = 2E(C, F, i, i)$ in \cite{Woo17} has changed to $t(\phi_{C, F}(v_i), \phi_{F, C}(v_i)) = E(C, F, i, i)$ in our case, which does not affect the proof at all. \begin{lemma} \label{lem4f1} There is a constant $C_G > 0$ such that for every $n$ and $F \in \Hom_R(V, G)$, the number of $\gamma$-weak $C$ is at most $$ C_G \binom{n}{\left \lceil \gamma n \right \rceil - 1} \left| G \right|^{\gamma n}. $$ \end{lemma} \begin{lemma} \label{lem4f2} If $F \in \Hom_R(V, G)$ is a code of distance $\delta n$ and $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is $\gamma$-robust for $F$, then $$ \# \left\{ (i, j) : i \leq j \text{ and } E(C, F, i, j) \neq 0 \right\} \geq \frac{\gamma \delta n^2}{2 \left| G \right|^2}. $$ \end{lemma} Let $\HH(V)$ be the set of Hermitian pairings on $V=R^n$, i.e. the set of the maps $h : V \times V \rightarrow R$ which are bi-additive, $h(y,x) = \sigma(h(x,y))$ and $h(rx,sy)=r \sigma(s) h(x,y)$ for every $r,s \in R$ and $x, y \in V$. Define a map $$ m_F : \Hom_R(V, ({}^{\sigma}G)^*) \rightarrow \HH(V) $$ by $$ m_F(C)(x,y) := C(x)(F(y)) + \sigma(C(y)(F(x))). $$ It is clear that $m_F(C)$ is bi-additive and $m_F(C)(y,x) = \sigma(m_F(C)(x,y))$. For every $r, s \in R$, we have \begin{equation*} \begin{split} m_F(C)(rx, sy) & = C(rx)(F(sy)) + \sigma(C(sy)(F(rx))) \\ & = rC(x)(\sigma(s) \cdot F(y)) + \sigma(sC(y)(\sigma(r) \cdot F(x))) \\ & = r \sigma(s) C(x)(F(y)) + \sigma(s \sigma(r) C(y)(F(x))) \\ & = r \sigma(s) m_F(C)(x,y) \end{split} \end{equation*} so $m_F(C)$ is an element of $\HH(V)$. Let $e_1, \cdots, e_r$ be the canonical generators of an $R$-module $G \cong \prod_{i=1}^{r} R/p^{\lambda_i}R$ and $e_1^*, \cdots, e_r^*$ be generators of $G^*$ given by $e_i^*(\sum_{j}a_je_j) = p^{m - \lambda_i}a_i$ for $a_1, \cdots, a_r \in R$. For an element $F \in \Hom_R(V, G)$, a map $e_i^*(F) \in V^*$ is defined by $v \mapsto e_i^*(Fv)$. Now consider the following elements in $\Hom_R(V, ({}^{\sigma}G)^*)$ for a given $F$. For $\theta := w - \sigma(w)$, we have $\theta \in R^{\times}$ (see Remark \ref{rmk1g}) and $\sigma(\theta) = - \theta$. \begin{itemize} \item For every $i<j$ and $c \in R/p^{\lambda_j}R$, $\displaystyle \alpha_{ij}^{c} := \frac{ce_i^*(F) ({}^{\sigma} e_j^*) - \sigma(c)e_j^*(F) ({}^{\sigma} e_i^*)}{p^{m - \lambda_i}} \in \Hom_R(V, ({}^{\sigma}G)^*)$. \item For every $i$ and $d \in R_1/p^{\lambda_i}R_1$, $\displaystyle \alpha_i^{d} := \frac{d \theta e_i^*(F) ({}^{\sigma} e_i^*)}{p^{m - \lambda_i}} \in \Hom_R(V, ({}^{\sigma}G)^*)$. \end{itemize} The basic properties of the elements $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ are provided in the following lemmas. 
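For concreteness (this example is not needed in the arguments below), take $G = \OO/p\OO$, so that $r=1$, $\lambda = (1)$ and we may take $m=1$, $R = \OO/p\OO$ and $R_1 = \Z/p\Z$. There are no pairs $i<j$, so the elements constructed above are exactly $$ \alpha_1^{d} = d \theta \, e_1^*(F) ({}^{\sigma} e_1^*), \qquad d \in \Z/p\Z, $$ and there are $p = p^{\sum_{i=1}^{r} (2i-1)\lambda_i}$ of them, in agreement with Corollary \ref{cor4new2} below.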
\begin{lemma} \label{lem4g} The elements $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ are contained in $\Hom_R(V, ({}^{\sigma}G)^*)$. \end{lemma} \begin{proof} Since $\displaystyle \frac{e_i^*(Fv)}{p^{m - \lambda_i}} \in R$ for every $v \in V$, $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1$. Also, ${}^{\sigma}e_i^* \in ({}^{\sigma}G)^*$ is a $p^{\lambda_i}$-torsion element so $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1 / p^{\lambda_i} R_1$ is well-defined. Now we prove that the map $\alpha_{i}^{d}$ is $R$-linear. For every $r \in R$, $v \in V$ and $w \in {}^{\sigma}G$, $$ \alpha_i^{d}(rv)(w) = \frac{d \theta e_i^*(F(rv)) \sigma(e_i^*(w))}{p^{m - \lambda_i}} = r \frac{d \theta e_i^*(Fv) \sigma(e_i^*(w))}{p^{m - \lambda_i}} $$ and $$ (r \cdot \alpha_i^{d}(v))(w) = (\alpha_i^{d}(v))(\sigma(r) w) = \frac{d \theta e_i^*(Fv) \sigma(e_i^*(\sigma(r)w))}{p^{m - \lambda_i}} = r \frac{d \theta e_i^*(Fv) \sigma(e_i^*(w))}{p^{m - \lambda_i}} $$ so $\alpha_i^{d}$ is $R$-linear. The $R$-linearity of $\alpha_{ij}^{c}$ can be proved by the same way. \end{proof} \begin{lemma} \label{lem4h} The elements $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ are contained in $\ker(m_F)$. \end{lemma} \begin{proof} For every $x, y \in V$, we have \begin{equation*} \begin{split} m_F(\alpha_{ij}^{c})(x, y) & = \alpha_{ij}^{c}(x)(F(y)) + \sigma(\alpha_{ij}^{c}(y)(F(x))) \\ & = \frac{ce_i^*(Fx) \sigma(e_j^*(Fy)) - \sigma(c)e_j^*(Fx) \sigma(e_i^*(Fy))}{p^{m - \lambda_i}} \\ & + \sigma \left ( \frac{ce_i^*(Fy) \sigma(e_j^*(Fx)) - \sigma(c)e_j^*(Fy) \sigma(e_i^*(Fx))}{p^{m - \lambda_i}} \right ) \\ & = 0 \end{split} \end{equation*} and \begin{equation*} \begin{split} m_F(\alpha_i^{d})(x, y) & = \alpha_i^{d}(x)(F(y)) + \sigma(\alpha_i^{d}(y)(F(x))) \\ & = \frac{d \theta e_i^*(Fx) \sigma(e_i^*(Fy))}{p^{m - \lambda_i}} + \sigma \left ( \frac{d \theta e_i^*(Fy) \sigma(e_i^*(Fx))}{p^{m - \lambda_i}} \right ) \\ & = 0, \end{split} \end{equation*} where the last equality follows from the facts that $\sigma(d) = d$ and $\sigma(\theta) = - \theta$. \end{proof} \begin{lemma} \label{lem4i} If $FV = G$, then $\sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i} = 0$ if and only if each $c_{ij}$ and $d_i$ is zero. \end{lemma} \begin{proof} Assume that $\alpha := \sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i}$ is zero. Choose $x_1, \cdots, x_r \in V$ such that $Fx_i = e_i$ for each $i$. For every $1 \leq t \leq r$, we have \begin{equation*} \begin{split} \alpha (x_t) & = \sum_{i < t} \alpha_{it}^{c_{it}}(x_t) + \sum_{t < j} \alpha_{tj}^{c_{tj}}(x_t) + \alpha_t^{d_t}(x_t) \\ & = - \sum_{i < t} \sigma(c_{it})p^{\lambda_i - \lambda_t} ({}^{\sigma}e_i^*) + d_t \theta ({}^{\sigma}e_t^*) + \sum_{t < j} c_{tj} ({}^{\sigma}e_j^*) \\ & = 0. \end{split} \end{equation*} For all $t < j$, $\alpha(x_t)(e_j) = p^{m-\lambda_j} c_{tj} = 0$ (in $R$) so $c_{tj}=0$ (in $R/p^{\lambda_j}R$). Since $\theta$ is a unit in $R$, the relation $\alpha(x_t)(e_t) = p^{m-\lambda_t} d_t \theta =0$ (in $R$) implies that $d_t=0$ (in $R_1/p^{\lambda_t}R_1$) for all $t$. \end{proof} \begin{definition} \label{def4j} An element $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is called \textit{special} if $C = \sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i}$ for some $c_{ij} \in R/p^{\lambda_j}R$ and $d_i \in R_1/p^{\lambda_i}R_1$. \end{definition} For a special $C$, we have $E(C, F, i, j) = m_F(C)(v_j, v_i) = 0$ for every $i \leq j$ by Lemma \ref{lem4h} so $p_F(C)=1$. 
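As a quick illustration of the count below (it is not used elsewhere), take $\lambda = (2,1)$, so $r=2$ and $\lambda' = (2,1)$. Then $$ \prod_{i<j} \left| R/p^{\lambda_j}R \right| \times \prod_{i} \left| R_1/p^{\lambda_i}R_1 \right| = \left| R/pR \right| \cdot \left| R_1/p^{2}R_1 \right| \cdot \left| R_1/pR_1 \right| = p^{2} \cdot p^{2} \cdot p = p^{5} = p^{1 \cdot \lambda_1 + 3 \cdot \lambda_2}, $$ which also equals $\left| \kappa \right|^{\sum_{j} \frac{\lambda_j'^2}{2}} = p^{\lambda_1'^2 + \lambda_2'^2} = p^{5}$, the upper bound $m(G_{\lambda})$ from Section \ref{Sec3}.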
The following corollary (of Lemma \ref{lem4i}) tells us that the number of special $C$ coincides with the moment appearing in Theorem \ref{thm2g}(1). \begin{corollary} \label{cor4new2} If $FV=G$, then the number of special $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is $$ \prod_{i<j} \left| R/p^{\lambda_j}R \right| \times \prod_{i} \left| R_1/p^{\lambda_i}R_1 \right| = p^{\sum_{i=1}^{r} (2i-1) \lambda_i}. $$ \end{corollary} The next proposition, which is an analogue of \cite[Lemma 3.7]{Woo17}, tells us that if $C$ is not special, it is not even close to $\ker(m_F)$. \begin{proposition} \label{prop4k} If $F \in \Hom_R(V, G)$ is a code of distance $\delta n$ and $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is not special, then $$ \# \left\{ (i, j) : i \leq j \text{ and } E(C, F, i, j) \neq 0 \right\} \geq \frac{\delta n}{2}. $$ \end{proposition} \begin{proof} Let $\eta := \left\{ (i, j) : i\leq j \text{ and } E(C, F, i, j) \neq 0 \right\}$ and $\nu$ be the set of all $i$ and $j$ that appear in an $(i, j) \in \eta$. Suppose that there exists a non-special $C$ such that $\left| \eta \right| < \frac{\delta n}{2}$ (so $\left| \nu \right| < \delta n$). Since $F$ is a code of distance $\delta n$, we can find $\tau \subset [n] \setminus \nu$ such that $\left| \tau \right| = r$ and $FV_{\tau} = G$. For $w_i \in V_{\tau}$ such that $Fw_i = e_i$, one can show that $w_1, \cdots, w_r$ form an $R$-basis of $V_{\tau}$ as in the proof of \cite[Lemma 3.6]{Woo17}. Let $\tau = \left\{ \tau_1, \cdots, \tau_r \right\}$ ($\tau_1 < \cdots < \tau_r$) and define $$ z_i := \left\{\begin{matrix} v_i & (i \notin \tau) \\ w_j & (i = \tau_j \in \tau) \end{matrix}\right. . $$ Then $z_1, \cdots, z_n$ is a basis of $V$. Denote its dual basis by $z_1^*, \cdots, z_n^*$. Let $Z_1(V)$ and $Z_2(V)$ be (additive) subgroups of $\HH(V)$ defined by \begin{equation*} \begin{split} Z_1(V) & := \left\{ f \in \HH(V) : f(x,y)=0 \text{ for all } x \in V, y \in V_{\tau} \right\}, \\ Z_2(V) & := \left\{ f \in \HH(V) : f(x,y)=0 \text{ for all } x, y \in V_{\tau} \right\} \supset Z_1(V) \end{split} \end{equation*} and let $\HH_i(V) := \HH(V) / Z_i(V)$ for $i= 1, 2$. Also the map $m_F^i : \Hom_R(V, ({}^{\sigma}G)^*) \rightarrow \HH_i(V)$ is defined by the composition of $m_F$ and the projection $\HH(V) \rightarrow \HH_i(V)$. If $i \in \tau$ or $j \in \tau$, then $m_F(C)(v_j, v_i)=E(C, F, i, j)=0$. This shows that $m_F(C) \in Z_1(V)$ and $m_F^{1}(C)=0$. To obtain a contradiction, it is enough to show that every $C \in \ker(m_F^1)$ is special, or equivalently the inequality \begin{equation} \label{eq4e} \left| \im(m_F^1) \right| = \frac{\left| \Hom_R(V, ({}^{\sigma}G)^*) \right|}{\left| \ker(m_F^1) \right|} \leq \frac{\left| \Hom_R(V, ({}^{\sigma}G)^*) \right|}{\left| \left\{ \text{special } C \right\} \right|} = p^{\sum_{i=1}^{r} (2n-2i+1) \lambda_i} \end{equation} is actually an equality. \begin{itemize} \item For each $1 \leq i < j \leq n$ and $c \in R$, there exists $f_{ij}^{c} = f_{ji}^{\sigma(c)} \in \HH(V)$ such that $$ f_{ij}^{c}(z_{i'}, z_{j'}) = \left\{\begin{matrix} c & ((i', j')=(i,j)) \\ \sigma(c) & ((i', j')=(j, i)) \\ 0 & (\text{otherwise}) \end{matrix}\right. $$ for every $i'$ and $j'$. Similarly, for each $1 \leq i \leq n$ and $d \in R_1$, there exists $g_i^{d} \in \HH(V)$ such that $$ g_i^d (z_{i'}, z_{j'}) = \left\{\begin{matrix} d & ((i', j')=(i,i)) \\ 0 & (\text{otherwise}) \end{matrix}\right. .
$$ \item Since $V$ is a free $R$-module, the natural map $\Hom_R(V, R) \otimes ({}^{\sigma}G)^* \rightarrow \Hom_R(V, ({}^{\sigma}G)^*)$ is an isomorphism. Thus for $c \in R$ and $1 \leq b \leq a \leq r$, $c z_{\tau_a}^* \otimes {}^{\sigma}e_b^*$ is an element of $\Hom_R(V, ({}^{\sigma}G)^*)$ and \begin{equation*} \begin{split} m_F(c z_{\tau_a}^* \otimes {}^{\sigma}e_b^* )(z_{\tau_i}, z_{\tau_j}) & = (c \delta_{ai} {}^{\sigma}e_b^*)({}^{\sigma} e_j) + \sigma((c \delta_{aj} {}^{\sigma}e_b^*)({}^{\sigma} e_i)) \\ & = p^{m - \lambda_b} (c \delta_{ai} \delta_{b j} + \sigma(c) \delta_{aj} \delta_{bi}) \end{split} \end{equation*} so $$ m_F^2(c z_{\tau_a}^* \otimes {}^{\sigma}e_b^* ) = \left\{\begin{matrix} p^{m - \lambda_b}f_{\tau_b \tau_a}^{\sigma(c)} & (b<a) \\ p^{m - \lambda_b} g_{\tau_b}^{c + \sigma(c)} & (b=a) \end{matrix}\right. $$ as elements in $\HH_2(V)$. We also have that each element of $R_1$ can be expressed by $c + \sigma(c)$ for some $c \in R$. (For $c = x+wy$ ($x, y \in R_1$), we have $c + \sigma(c) = 2x+(w+w^p)y$. When $p$ is odd, each element of $R_1$ is of the form $2x$ for some $x \in R_1$. When $p=2$, we have $w+w^2=-1$ so $wy+\sigma(wy) = -y$ for every $y \in R_1$.) The image of $m_F^2$ contains every element of $\HH_2(V)$ of the form $$ f = \sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i} $$ where $c_{ij} \in p^{m - \lambda_i} R$ and $d_i \in p^{m - \lambda_i} R_1$. As in the proof of Lemma \ref{lem4i}, one can deduce that $$ \sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i} = 0 $$ if and only if each $c_{ij}$ and $d_i$ is zero. This implies that the number of $f$ in $\HH_2(V)$ of the form $\sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i}$ is $$ \prod_{i<j} p^{2 \lambda_i} \cdot \prod_{i} p^{\lambda_i} = p^{\sum_{i=1}^{r} (2r-2i+1) \lambda_i}. $$ Thus we have \begin{equation} \label{eq4f} \left| \im(m_F^2) \right| \geq p^{\sum_{i=1}^{r} (2r-2i+1) \lambda_i}. \end{equation} \item For $\ell \notin \tau$, $c \in R$ and $1 \leq b \leq r$, \begin{equation*} m_F(c z_{\ell}^* \otimes {}^{\sigma}e_b^*)(z_i, z_{\tau_j}) = cp^{m - \lambda_b} \delta_{\ell i} \delta_{bj} \end{equation*} so $$ m_F^1(c z_{\ell}^* \otimes {}^{\sigma}e_b^*) = f_{\ell \tau_b}^{p^{m - \lambda_b} c} $$ as elements in $\HH_1(V)$. Let $S$ be the set of the elements of $\HH_1(V)$ of the form $$ f = \sum_{\ell \notin \tau} \sum_{b} f_{\ell \tau_b}^{c_{\ell b}} $$ for some $c_{\ell b} \in p^{m-\lambda_b} R$. Then $\left| S \right| = p^{\sum_{i=1}^{r} 2(n-r) \lambda_i}$ (since $\sum_{\ell \notin \tau} \sum_{b} f_{\ell \tau_b}^{c_{\ell b}} = 0$ if and only if each $c_{\ell b}$ is zero) and $S$ is contained in the kernel of the surjective homomorphism $\im(m_F^1) \rightarrow \im(m_F^2)$, which implies that \begin{equation} \label{eq4g} \left| \im(m_F^1) \right| \geq p^{\sum_{i=1}^{r} 2(n-r) \lambda_i} \left| \im(m_F^2) \right|. \end{equation} \end{itemize} By the equations (\ref{eq4f}) and (\ref{eq4g}), the inequality (\ref{eq4e}) should be an equality. \end{proof} Now we compute the moments of the cokernel of an $\varepsilon$-balanced matrix $X \in \HH_n(\OO)$. Although the proof follows the strategy of the proof of \cite[Theorem 6.1]{Woo17}, we provide some details of the proof for the convenience of the readers. For an $\OO$-module $G$ of type $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$, let $M_G$ be the number of special $C$ for a surjective $F \in \Hom_R(V, G)$, i.e. $M_G := p^{\sum_{i=1}^{r} (2i-1) \lambda_i}$. 
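Before stating the key estimate, we record, as an illustration only, what it asserts in the simplest case. For $G = \OO/p\OO$ we have $M_G = p$ and $\left| G \right| = p^2$, and the equation (\ref{eq4d}) suggests that for a code $F$ one should have $$ \PP(FX=0) \approx M_G \left| G \right|^{-n} = p^{1-2n}, $$ since the special $C$ contribute exactly $M_G$ to the sum $\sum_{C} p_F(C)$ and the remaining terms turn out to be exponentially small. Lemma \ref{lem4x1} below makes this precise.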
\begin{lemma} \label{lem4x1} For given $0 < \varepsilon < 1$, $\delta >0$ and $G$, there are $c, K_0 > 0$ such that the following holds: Let $X \in \HH_n(R)$ be an $\varepsilon$-balanced matrix, $F \in \Hom_R(V, G)$ be a code of distance $\delta n$ and $A \in \Hom_R((^{\sigma}V)^*, G)$. Then for each $n$, $$ \left| \PP(FX=0) - M_G \left| G \right|^{-n} \right| \leq \displaystyle \frac{K_0 e^{-cn}}{\left| G \right|^n} $$ and $$ \PP(FX=A) \leq K_0 \left| G \right|^{-n}. $$ \end{lemma} \begin{proof} By the equations (\ref{eq4b}) and (\ref{eq4c}) (replace $FX$ by $FX-A$), we have \begin{equation} \label{eq4h} \begin{split} \PP (FX = A) & = \frac{1}{\left| G \right|^n}\sum_{C} \EE (\zeta^{\tr(C(FX-A))}) \\ & = \frac{1}{\left| G \right|^n}\sum_{C} \EE (\zeta^{\tr(C(-A))}) \EE (\zeta^{\tr(C(FX))}) \\ & = \frac{1}{\left| G \right|^n} \sum_{C} \EE (\zeta^{\tr(C(-A))}) p_F(C). \end{split} \end{equation} For $\gamma \in (0, \delta)$, we break the sum into $3$ pieces: \begin{equation*} \begin{split} S_1 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is special for } F \right\}, \\ S_2 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is not special for } F \text{ and } \gamma \text{-weak for } F \right\}, \\ S_3 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is } \gamma \text{-robust for } F \right\}. \end{split} \end{equation*} \begin{enumerate}[label=(\alph*)] \item $C \in S_1$: By Lemma \ref{lem4i}, $\left| S_1 \right| = M_G$. Since $p_F(C)=1$ for $C \in S_1$ by Lemma \ref{lem4h}, we have $\sum_{C \in S_1} \EE (\zeta^{C(-A)}) p_F(C) = M_G$ for $A=0$ and $\left| \sum_{C \in S_1} \EE (\zeta^{C(-A)}) p_F(C) \right| \leq M_G$ for any $A$. \item $C \in S_2$: By Lemma \ref{lem4f1}, $\left| S_2 \right| \leq C_G \binom{n}{\left \lceil \gamma n \right \rceil - 1} \left| G \right|^{\gamma n}$ for some constant $C_G > 0$. By Remark \ref{rmk4c} and Proposition \ref{prop4k}, we have $\left| p_F(C) \right| \leq \exp ( -\varepsilon \delta n / 2 p^{2m} )$ for every $C \in S_2$. \item $C \in S_3$: By Remark \ref{rmk4c} and Lemma \ref{lem4f2}, we have $$ \left| \sum_{C \in S_3} \EE (\zeta^{C(-A)}) p_F(C) \right| \leq \left| G \right|^n \exp (- \varepsilon \gamma \delta n^2 / 2 p^{2m} \left| G \right|^2). $$ \end{enumerate} Now the proof can be completed as in \cite[Lemma 4.1]{Woo17} by applying the above computations (for a sufficiently small $\gamma$) to the equation (\ref{eq4h}). \end{proof} For an integer $D = \prod_{i} p_i^{e_i}$, let $\ell (D) := \sum_{i} e_i$. \begin{definition} \label{def4x2} Assume that $\delta < \ell (\left| G \right|)^{-1}$. The \textit{depth} of an $F \in \Hom_R(V, G)$ is the maximal positive integer $D$ such that there is a $\nu \subset [n]$ with $\left| \nu \right| < \ell(D) \delta n$ such that $D = [G : FV_{\setminus \nu}]$, or is $1$ if there is no such $D$. \end{definition} The following lemmas are analogues of \cite[Lemma 5.2 and 5.4]{Woo17}. \begin{lemma} \label{lem4x3} There is a constant $K_0$ depending on $G$ such that for every $D>1$, the number of $F \in \Hom_R(V, G)$ of depth $D$ is at most $$ K_0 \binom{n}{\left \lceil \ell(D)\delta n \right \rceil -1} \left| G \right|^n D^{-n+\ell(D)\delta n}. $$ \end{lemma} \begin{lemma} \label{lem4x4} Let $\varepsilon, \delta, G$ be as in Lemma \ref{lem4x1}.
Then there exists $K_0>0$ such that if $F \in \Hom_R(V, G)$ has depth $D>1$ and $[G : FV] < D$, then for all $\varepsilon$-balanced matrix $X \in \HH_n(R)$, $$ \PP(FX=0) \leq K_0e^{-\varepsilon (1- \ell(D) \delta) n}(\left| G \right|/D)^{-(1-\ell(D) \delta)n}. $$ \end{lemma} \begin{proof} We follow the proof of \cite[Lemma 5.4]{Woo17}. It is enough to show that $$ \PP(x_1f_1 \equiv g \text{ in } G/H) \leq 1 - \varepsilon \leq e^{-\varepsilon}, $$ where $H$ is an $\OO$-submodule of $G$ of index $D$, $f_1 \in G \setminus H$ and $x_1$ is a non-diagonal entry of $X$. Write $x_1 = x+wy$ for $\varepsilon$-balanced $x, y \in R_1$. Since $\text{Ann}_{G/H}(f_1) := \left\{ r \in R : rf_1 \in H \right\}$ is a proper ideal of $R$, it is of the form $\pi^k R$ for some $k \geq 1$. Thus the elements of the set $$ \left \{ x \in R_1 : x_1f_1 \equiv g \text{ in } G/H \text{ for some } y \in R_1 \right \} $$ are contained in a single equivalence class modulo $p$. Since $x$ is $\varepsilon$-balanced, we conclude that $\PP(x_1f_1 \equiv g \text{ in } G/H) \leq 1 - \varepsilon$. \end{proof} \begin{theorem} \label{thm4l} Let $0 < \varepsilon < 1$ and $G$ be given. Then for any sufficiently small $c > 0$, there is a $K_0=K_{\varepsilon, G, c}>0$ such that for every positive integer $n$ and an $\varepsilon$-balanced matrix $X_0 \in \HH_n(\OO)$, $$ \left| \EE(\# \Sur_{\OO}(\cok(X_0), G)) - p^{\sum_{i=1}^{r} (2i-1) \lambda_i} \right| \leq K_0 e^{-cn}. $$ In particular, the equation (\ref{eq2f}) holds for every sequence of $\varepsilon$-balanced matrices $(X_n)_{n \geq 1}$. \end{theorem} \begin{proof} Throughout the proof, $K_0$ denotes a positive constant which may vary from line to line. Let $X \in \HH_n(R)$ be the reduction of $X_0 \in \HH_n(\OO)$. By the equation (\ref{eq4a}), we have \begin{equation*} \begin{split} & \left| \EE(\# \Sur_{\OO}(\cok(X_0), G)) - M_G \right| \\ = & \left| \sum_{F \in \Sur_R(V, G)} \PP(FX=0) - \sum_{F \in \Hom_R(V, G)} M_G \left| G \right|^{-n} \right| \\ \leq & \sum_{F \in \Sur_R(V, G)} \left| \PP(FX=0) - M_G \left| G \right|^{-n} \right| + \sum_{F \in \Hom_R(V, G) \setminus \Sur_R(V, G)} M_G \left| G \right|^{-n}. \end{split} \end{equation*} \begin{enumerate}[label=(\alph*)] \item By Lemma \ref{lem4x1}, we have $$ \sum_{\substack{F \in \Sur_R(V, G) \\ F \text{ code of distance } \delta n}} \left| \PP(FX=0) - M_G \left| G \right|^{-n} \right| \leq K_0e^{-cn}. $$ \item By Lemma \ref{lem4x3} and \ref{lem4x4}, for a sufficiently small $\delta$ we have \begin{equation*} \begin{split} & \sum_{\substack{F \in \Sur_R(V, G) \\ F \text{ not code of distance } \delta n}} \PP(FX=0) \\ \leq & \sum_{\substack{D>1 \\ D \mid \# G}} K_0 \binom{n}{\left \lceil \ell(D)\delta n \right \rceil -1} \left| G \right|^n D^{-n+\ell(D)\delta n} e^{-\varepsilon (1- \ell(D) \delta) n}(\left| G \right|/D)^{-(1-\ell(D) \delta)n} \\ = & \sum_{\substack{D>1 \\ D \mid \# G}} K_0 \binom{n}{\left \lceil \ell(D)\delta n \right \rceil -1} \left| G \right|^{\ell(D) \delta n} e^{-\varepsilon (1- \ell(D) \delta) n} \\ \leq & \, K_0e^{-cn}. 
\end{split} \end{equation*} \item By Lemma \ref{lem4x3}, for a sufficiently small $\delta$ we have \begin{equation*} \begin{split} & \sum_{\substack{F \in \Sur_R(V, G) \\ F \text{ not code of distance } \delta n}} M_G \left| G \right|^{-n} \\ \leq & \sum_{\substack{D>1 \\ D \mid \# G}} K_0 \binom{n}{\left \lceil \ell(D)\delta n \right \rceil -1} \left| G \right|^n D^{-n+\ell(D)\delta n} M_G \left| G \right|^{-n} \\ \leq & K_0 \binom{n}{\left \lceil \ell(\left| G \right|)\delta n \right \rceil -1} 2^{-n+\ell(\left| G \right|)\delta n} \\ \leq & \, K_0e^{-cn}. \end{split} \end{equation*} \item We also have \begin{equation*} \begin{split} \sum_{F \in \Hom_R(V, G) \setminus \Sur_R(V, G)} M_G \left| G \right|^{-n} & \leq \sum_{H \lneq G} \sum_{F \in \Sur_R(V, H)} K_0 \left| G \right|^{-n} \\ & \leq \sum_{H \lneq G} K_0 \left| H \right|^n \left| G \right|^{-n} \\ & \leq K_0 2^{-n}. \qedhere \end{split} \end{equation*} \end{enumerate} \end{proof} Combining the above theorem with Theorems \ref{thm2c} and \ref{thm3c}, we obtain the universality result for the distribution of the cokernels of random $p$-adic unramified Hermitian matrices. \begin{theorem} \label{thm4m} For every sequence of $\varepsilon$-balanced matrices $(X_n)_{n \geq 1}$ ($X_n \in \HH_n(\OO)$), the limiting distribution of $\cok(X_n)$ is given by the equation (\ref{eq2c}). \end{theorem} \begin{proof} We follow the proof of \cite[Corollary 9.2]{Woo17}. Choose a positive integer $a$ such that $p^{a-1} \Gamma = 0$. Then for any finitely generated $\OO$-module $H$, we have $H \otimes \OO/p^{a} \OO \cong \Gamma$ if and only if $H \cong \Gamma$. Let $A_n$ be the cokernel of a Haar random matrix in $\HH_n(\OO)$ and $B_n = \cok(X_n)$. Then Theorems \ref{thm2c}, \ref{thm3c} and \ref{thm4l} conclude the proof. \end{proof} \section{Universality of the cokernel: the ramified case} \label{Sec5} In this section, we assume that $K/\Q_p$ is ramified. We say $K/\Q_p$ is of \textit{type II} if $p=2$ and $K = \Q_2(\sqrt{1+2u})$ for some $u \in \Z_2^{\times}$. Otherwise we say $K/\Q_p$ is of \textit{type I}. By the choice of the uniformizer $\pi$ in Section \ref{Sub12}, we have $\sigma(\pi) = - \pi$ if $K/\Q_p$ is of type I and $\sigma(\pi) = 2 - \pi$ if $K/\Q_p$ is of type II. Let $G$ be a finite $\OO$-module of type $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$. Define the rings $R$, $R_1$, $R_2$ and the map $T \in \Hom_{\Z}(R, R_1)$ as follows. Recall that we have $\OO=\Z_p[\pi]$. \begin{itemize} \item If $K/\Q_p$ is of type I, choose an integer $m>1$ such that $\pi^{2m-1} G = 0$. Define $R = \OO / \pi^{2m-1} \OO$, $R_1 = \Z_p/p^m \Z_p \cong \Z / p^{m}\Z$ and $R_2 = \Z_p/p^{m-1} \Z_p \cong \Z / p^{m-1}\Z$. Every element of $\OO$ is of the form $x + \pi y$ for $x, y \in \Z_p$, and the images of $x+\pi y$ and $x'+\pi y'$ in $R$ are the same if and only if $(x-x')+\pi(y-y') \in \pi^{2m-1} \OO$, which is equivalent to $x-x' \in p^m\Z_p$ and $y-y' \in p^{m-1}\Z_p$. Therefore every element of $R$ is uniquely expressed as $x + \pi y$ for some $x \in R_1$ and $y \in R_2$. Define the map $T \in \Hom_{\Z}(R, R_1)$ by $T(x + \pi y) = x$. \item If $K/\Q_p$ is of type II, choose an integer $m>0$ such that $\pi^{2m} G = 0$. Define $R = \OO / \pi^{2m} \OO$ and $R_1 = R_2 = \Z_p/p^m \Z_p \cong \Z / p^{m}\Z$. Every element of $R$ is uniquely expressed as $x + \pi y$ for some $x \in R_1$ and $y \in R_2$. Define the map $T \in \Hom_{\Z}(R, R_1)$ by $T(x + \pi y) = x+y$.
\end{itemize} For both cases, we have $x + \sigma(x) = 2T(x)$ for every $x \in R$. This shows that it is natural to replace the trace map in Section \ref{Sec4} by the map $T$. For $X \in \HH_n(\OO)$, denote $X_{ij} = Y_{ij} + \pi Z_{ij}$ for $Y_{ij}, Z_{ij} \in \Z_p$. Similarly, for $X \in \HH_n(R)$, denote $X_{ij} = Y_{ij} + \pi Z_{ij}$ for $Y_{ij} \in R_1$ and $Z_{ij} \in R_2$. Then $Z_{ii}=0$ and $X$ is determined by $n^2$ elements $Y_{ij}$ ($i \leq j$), $Z_{ij}$ ($i<j$). Define $V$, $W$, $v_i$, $w_i$, $v_i^*$ and $w_i^*$ and identify them as in Section \ref{Sec4}. For an $\varepsilon$-balanced matrix $X_0 \in \HH_n(\OO)$, its reduction $X \in \HH_n(R)$ is also $\varepsilon$-balanced and we have \begin{equation} \label{eq5a} \EE(\# \Sur_{\OO}(\cok(X_0), G)) = \EE(\# \Sur_{R}(\cok(X), G)) = \sum_{F \in \Sur_R(V, G)} \PP (FX = 0). \end{equation} \begin{lemma} \label{lem5a} Let $\zeta := \zeta_{p^m} \in \C$ be a primitive $p^m$-th root of unity. Then we have \begin{equation*} 1_{FX=0} = \frac{1}{\left| G \right|^n}\sum_{C \in \Hom_R(\Hom_R(W, G), R)} \zeta^{T(C(FX))}. \end{equation*} \end{lemma} \begin{proof} Following the proof of Lemma \ref{lem4b}, it is enough to show that $$ \sum_{r \in R[\pi^t]} \zeta^{T(r)} = 0 $$ for $t > 0$. Let $\displaystyle s := \left \lceil \frac{t}{2} \right \rceil \geq 1$. If $K/\Q_p$ is of type I, then we have \begin{equation*} \sum_{r \in R[\pi^t]} \zeta^{T(r)} = \sum_{x \in R_1[p^s], \, y \in R_2[p^{t-s}]} \zeta^{x} = p^{t-s} \sum_{x \in R_1[p^s]} \zeta^{x} = 0. \end{equation*} If $K/\Q_p$ is of type II, then we have \begin{equation*} \sum_{r \in R[\pi^t]} \zeta^{T(r)} = \sum_{x \in R_1[p^s], \, y \in R_2[p^{t-s}]} \zeta^{x+y} = \left ( \sum_{x \in R_1[p^s]} \zeta^{x} \right ) \left ( \sum_{y \in R_2[p^{t-s}]} \zeta^{y} \right ) = 0. \qedhere \end{equation*} \end{proof} By the above lemma, we have \begin{equation} \label{eq5b} \PP (FX = 0) = \EE (1_{FX=0}) = \frac{1}{\left| G \right|^n}\sum_{C \in \Hom_R(\Hom_R(W, G), R)} \EE (\zeta^{T(C(FX))}). \end{equation} Define the maps $F$ and $C$ as in Section \ref{Sec4}. If $X \in \HH_n(R)$, then $X_{ji} = \sigma(X_{ij}) = Y_{ij} + \sigma(\pi) Z_{ij}$. Therefore \begin{equation} \label{eq5c} \begin{split} T (C(FX)) & = T (\sum_{i, j} e_{ij}X_{ij} ) \\ & = \sum_{i=1}^{n} T(e_{ii}X_{ii}) + \sum_{i < j} T (e_{ij}(Y_{ij}+ \pi Z_{ij}) + e_{ji}(Y_{ij}+ \sigma(\pi) Z_{ij})) \\ & = \sum_{i=1}^{n} T(e_{ii}) Y_{ii} + \sum_{i < j} T(e_{ij}+e_{ji})Y_{ij} + \sum_{i < j} T(e_{ij} \pi +e_{ji} \sigma(\pi))Z_{ij} \end{split} \end{equation} for $e_{ij} := C(v_j)(F(v_i)) \in R$. Note that if $K/\Q_p$ is of type I, then $T(e_{ij} \pi +e_{ji} \sigma(\pi)) \in pR_1$ so $T(e_{ij} \pi +e_{ji} \sigma(\pi))Z_{ij}$ is well-defined as an element of $R_1$ for $Z_{ij} \in R_2$. By the equations (\ref{eq5b}) and (\ref{eq5c}), we have \begin{equation} \label{eq5d} \begin{split} & \PP (FX=0) \\ = \, & \frac{1}{\left| G \right|^n} \sum_{C} \left ( \prod_{i} \EE(\zeta^{T(e_{ii})Y_{ii}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{T(e_{ij}+e_{ji})Y_{ij}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{T(e_{ij}\pi +e_{ji} \sigma(\pi))Z_{ij}}) \right ) \\ = & \frac{1}{\left| G \right|^n} \sum_{C} p_F(C). \end{split} \end{equation} for $$ p_F(C) := \left ( \prod_{i} \EE(\zeta^{T(e_{ii})Y_{ii}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{T(e_{ij}+e_{ji})Y_{ij}}) \right ) \left ( \prod_{i < j} \EE(\zeta^{T(e_{ij}\pi +e_{ji} \sigma(\pi))Z_{ij}}) \right ). $$ Define $E(C, F, i, i) := T(e_{ii})$ and $E(C,F,i,j) := e_{ij} + \sigma(e_{ji})$ for every $i<j$. 
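For the reader's convenience, we record the short computation behind the identity $x + \sigma(x) = 2T(x)$ stated above. Write $x = a + \pi b$ with $a \in R_1$ and $b \in R_2$. If $K/\Q_p$ is of type I, then $\sigma(\pi) = -\pi$ and $$ x + \sigma(x) = (a + \pi b) + (a - \pi b) = 2a = 2T(x), $$ while if $K/\Q_p$ is of type II, then $\sigma(\pi) = 2 - \pi$ and $$ x + \sigma(x) = (a + \pi b) + (a + (2 - \pi) b) = 2a + 2b = 2T(x). $$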
\begin{remark} \label{rmk5b} For $i<j$, write $e_{ij} = a+\pi b$ and $e_{ji} = c+ \pi d$ for $a, c \in R_1$ and $b, d \in R_2$. \begin{itemize} \item Type I: \begin{equation*} \begin{split} E(C,F,i,j) & = (a+c) + \pi(b-d), \\ T(e_{ij}+e_{ji}) & = a+c, \\ T(e_{ij} \pi +e_{ji} \sigma(\pi)) & = \pi^2(b-d). \end{split} \end{equation*} Note that for $b-d \in R_2$, $\pi^2(b-d)$ is well-defined as an element of $R_1$. \item Type II: $\pi(\pi-2) = 2u \in R_1$ so $T(\pi^2) = T(\pi(\pi-2) + 2 \pi) = \pi^2 - 2 \pi + 2$. This implies that \begin{equation*} \begin{split} E(C,F,i,j) & = (a+c+2d) + \pi(b-d), \\ T(e_{ij}+e_{ji}) & = (a+c+2d)+(b-d), \\ T(e_{ij} \pi +e_{ji} \sigma(\pi)) & = (a+c+2d)+(\pi^2 - 2 \pi + 2)(b-d). \end{split} \end{equation*} \end{itemize} One can check that for both types, $$ E(C, F, i, j)=0 \,\, \Leftrightarrow \,\, T(e_{ij}+e_{ji})=0 \text{ and } T(e_{ij} \pi +e_{ji} \sigma(\pi))=0. $$ By \cite[Lemma 4.2]{Woo17}, we have $\displaystyle \left| p_F(C) \right| \leq \exp(-\frac{\varepsilon N}{p^{2m}})$ where $N$ is the number of the non-zero coefficients $E(C, F, i, j)$. \end{remark} Define $\phi_{F, C}$, $\phi_{C, F}$ and $t$ as before. Then $$ t(\phi_{C, F}(v_j), \phi_{F, C}(v_i)) = E(C, F, i, j) $$ for $i<j$ and $$ t(\phi_{C, F}(v_i), \phi_{F, C}(v_i)) = e_{ii} + \sigma(e_{ii}) = 2E(C, F, i, i). $$ The following lemmas are analogues of \cite[Lemma 3.1 and 3.5]{Woo17}, whose proofs are also identical. Since the classification of finitely generated modules over $\OO/\pi^{2m-1} \OO$ (resp. $\OO/\pi^{2m} \OO$) and finitely generated modules over $\Z/p^{2m-1}\Z$ (resp. $\Z/p^{2m}\Z$) are the same, we can imitate the proof given in \cite{Woo17}. The $\gamma$-robustness, $\gamma$-weakness and the code of distance $d_0$ are defined as in Definition \ref{def4d} and \ref{def4e}. \begin{lemma} \label{lem5c1} There is a constant $C_G > 0$ such that for every $n$ and $F \in \Hom_R(V, G)$, the number of $\gamma$-weak $C$ is at most $$ C_G \binom{n}{\left \lceil \gamma n \right \rceil - 1} \left| G \right|^{\gamma n}. $$ \end{lemma} \begin{lemma} \label{lem5c2} If $F \in \Hom_R(V, G)$ is a code of distance $\delta n$ and $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is $\gamma$-robust for $F$, then $$ \# \left\{ (i, j) : i \leq j \text{ and } E(C, F, i, j) \neq 0 \right\} \geq \frac{\gamma \delta n^2}{2 \left| G \right|^2}. $$ \end{lemma} Recall that $\HH(V)$ denotes the set of Hermitian pairings on $V=R^n$. An element $f \in \HH(V)$ is uniquely determined by the values $f(v_j, v_i)$ for $1 \leq i \leq j \leq n$. Define a map $$ m_F : \Hom_R(V, ({}^{\sigma}G)^*) \rightarrow \HH(V) $$ by $$ m_F(C)(v_j, v_i) := E(C, F, i, j) $$ for every $i \leq j$. Let $m' :=2m-1$ if $K/\Q_p$ is of type I and $m' := 2m$ if $K/\Q_p$ is of type II. Let $e_1, \cdots, e_r$ be the canonical generators of an $R$-module $G \cong \prod_{i=1}^{r} R/\pi^{\lambda_i}R$ and $e_1^*, \cdots, e_r^*$ be generators of $G^*$ given by $e_i^*(\sum_{j}a_je_j) = \pi^{m' - \lambda_i}a_i$ for $a_1, \cdots, a_r \in R$. Consider the following elements in $\Hom_R(V, ({}^{\sigma}G)^*)$ for a given $F$. \begin{itemize} \item For every $i<j$ and $c \in R/\pi^{\lambda_j}R$, $\displaystyle \alpha_{ij}^{c} := \frac{ce_i^*(F) ({}^{\sigma}e_j^*)}{\pi^{m' - \lambda_i}} - \frac{\sigma(c)e_j^*(F) ({}^{\sigma}e_i^*)}{\sigma(\pi)^{m' - \lambda_i}}$. \item The definition of $\alpha_i^{d}$ for $d \in R_1 / p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor}R_1$ depends on the type of $K/\Q_p$. 
\begin{itemize} \item (Type I) For every $i$ and $d \in R_1 / p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor} R_1$, $\displaystyle \alpha_i^{d} := \frac{d \pi e_i^*(F) ({}^{\sigma}e_i^*)}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}}$. \item (Type II) For every $i$ and $d \in R_1 / p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor} R_1$, $\displaystyle \alpha_i^{d} := \frac{d (1 - \pi) e_i^*(F) ({}^{\sigma}e_i^*)}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}}$. \end{itemize} \end{itemize} \begin{lemma} \label{lem5d1} The elements $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ are contained in $\Hom_R(V, ({}^{\sigma}G)^*)$. \end{lemma} \begin{proof} We will prove the lemma for $\alpha_i^{d}$. The proof for $\alpha_{ij}^c$ can be done in the same way. \begin{itemize} \item ($K/\Q_p$ is of type I) Since $\displaystyle \frac{\pi e_i^*(Fv)}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} \in \pi^{1 + (m' - \lambda_i) - 2 \left \lceil \frac{m'-\lambda_i}{2} \right \rceil} R \subseteq R$ for every $v \in V$, we have $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1$. Also, ${}^{\sigma}e_i^* \in ({}^{\sigma}G)^*$ is a $\pi^{\lambda_i}$-torsion element and $$ 1 + (m' - \lambda_i) - 2 \left \lceil \frac{m'-\lambda_i}{2} \right \rceil + 2 \left \lfloor \frac{\lambda_i}{2} \right \rfloor = - \lambda_i - 2 \left \lceil \frac{-1-\lambda_i}{2} \right \rceil + 2 \left \lfloor \frac{\lambda_i}{2} \right \rfloor \geq \lambda_i $$ so $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1/p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor}R_1$ is well-defined. For every $r \in R$, $v \in V$ and $w \in {}^{\sigma}G$, we have $$ \alpha_i^{d}(rv)(w) = \frac{d \pi e_i^*(F(rv)) \sigma(e_i^*(w))}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} = r \frac{d \pi e_i^*(Fv) \sigma(e_i^*(w))}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} $$ and $$ (r \cdot \alpha_i^{d}(v))(w) = (\alpha_i^{d}(v))(\sigma(r) w) = \frac{d \pi e_i^*(Fv) \sigma(e_i^*(\sigma(r)w))}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} = r \frac{d \pi e_i^*(Fv) \sigma(e_i^*(w))}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} $$ so $\alpha_i^{d}$ is $R$-linear. \item ($K/\Q_p$ is of type II) Since $\displaystyle \frac{d (1 - \pi) e_i^*(Fv)}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} \in \pi^{(m' - \lambda_i) - 2 \left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor} R \subseteq R$ for every $v \in V$, we have $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1$. Also, ${}^{\sigma}e_i^* \in ({}^{\sigma}G)^*$ is a $\pi^{\lambda_i}$-torsion element and $$ (m' - \lambda_i) - 2 \left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor + 2 \left \lfloor \frac{\lambda_i}{2} \right \rfloor = - \lambda_i - 2 \left \lfloor \frac{-\lambda_i}{2} \right \rfloor + 2 \left \lfloor \frac{\lambda_i}{2} \right \rfloor \geq \lambda_i $$ so $\alpha_{i}^{d} \in \Hom_{\Z}(V, ({}^{\sigma}G)^*)$ for $d \in R_1 / p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor} R_1$ is well-defined.
For every $r \in R$, $v \in V$ and $w \in {}^{\sigma}G$, we have $$ \alpha_i^{d}(rv)(w) = \frac{d (1 - \pi) e_i^*(F(rv)) \sigma(e_i^*(w))}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} = r \frac{d (1 - \pi) e_i^*(Fv) \sigma(e_i^*(w))}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} $$ and $$ (r \cdot \alpha_i^{d}(v))(w) = (\alpha_i^{d}(v))(\sigma(r) w) = \frac{d (1 - \pi) e_i^*(Fv) \sigma(e_i^*(\sigma(r)w))}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} = r \frac{d (1 - \pi) e_i^*(Fv) \sigma(e_i^*(w))}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} $$ so $\alpha_i^{d}$ is $R$-linear. \qedhere \end{itemize} \end{proof} \begin{lemma} \label{lem5d} The elements $\alpha_{ij}^{c}$ and $\alpha_i^{d}$ are contained in $\ker(m_F)$. \end{lemma} \begin{proof} For every $1 \leq k<l \leq n$, \begin{equation*} \begin{split} m_F(\alpha_{ij}^{c})(v_l, v_k) & = \alpha_{ij}^{c}(v_l)(Fv_k) + \sigma(\alpha_{ij}^{c}(v_k)(Fv_l)) \\ & = \frac{ce_i^*(Fv_l) \sigma(e_j^*(Fv_k))}{\pi^{m' - \lambda_i}} - \frac{\sigma(c)e_j^*(Fv_l) \sigma(e_i^*(Fv_k))}{\sigma(\pi)^{m' - \lambda_i}} \\ & + \sigma \left ( \frac{ce_i^*(Fv_k) \sigma(e_j^*(Fv_l))}{\pi^{m' - \lambda_i}} - \frac{\sigma(c)e_j^*(Fv_k) \sigma(e_i^*(Fv_l))}{\sigma(\pi)^{m' - \lambda_i}} \right ) \\ & = 0. \end{split} \end{equation*} Since $\pi + \sigma(\pi)=0$ for type I and $(1- \pi) + \sigma(1-\pi)=0$ for type II, $m_F(\alpha_{i}^{d})(v_l, v_k)=0$ for both types. For every $1 \leq k \leq n$, \begin{equation*} \begin{split} m_F(\alpha_{ij}^{c})(v_k, v_k) & = T \left ( \frac{ce_i^*(Fv_k) \sigma(e_j^*(Fv_k))}{\pi^{m' - \lambda_i}} - \frac{\sigma(c)e_j^*(Fv_k) \sigma(e_i^*(Fv_k))}{\sigma(\pi)^{m' - \lambda_i}} \right ) \\ & = T(u - \sigma(u)) \;\; (u := \frac{ce_i^*(Fv_k) \sigma(e_j^*(Fv_k))}{\pi^{m' - \lambda_i}}) \\ & = 0, \\ m_F(\alpha_{i}^{d})(v_k, v_k) & = \frac{d e_i^*(Fv_k) \sigma(e_i^*(Fv_k))}{p^{\left \lceil \frac{m'-\lambda_i}{2} \right \rceil}} T(\pi) = 0 \;\; (\text{type I}), \\ m_F(\alpha_{i}^{d})(v_k, v_k) & = \frac{d e_i^*(Fv_k) \sigma(e_i^*(Fv_k))}{p^{\left \lfloor \frac{m'-\lambda_i}{2} \right \rfloor}} T(1-\pi) = 0 \;\; (\text{type II}). \qedhere \end{split} \end{equation*} \end{proof} \begin{lemma} \label{lem5e} If $FV = G$, then $\sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i} = 0$ if and only if each $c_{ij}$ and $d_i$ is zero. \end{lemma} \begin{proof} Assume that $\alpha := \sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i}$ is zero. Choose $x_1, \cdots, x_r \in V$ such that $Fx_i = e_i$ for each $i$. For each $1 \leq t \leq r$, we have \begin{equation*} \begin{split} \alpha (x_t) & = \sum_{i < t} \alpha_{it}^{c_{it}}(x_t) + \sum_{t < j} \alpha_{tj}^{c_{tj}}(x_t) + \alpha_t^{d_t}(x_t) \\ & = \left\{\begin{matrix} - \sum_{i < t} \frac{\sigma(c_{it}) \pi^{m'-\lambda_t}}{\sigma(\pi)^{m'-\lambda_i}} ({}^{\sigma}e_i^*) + \frac{d_t \pi^{m'-\lambda_t+1}}{p^{\left \lceil \frac{m'-\lambda_t}{2} \right \rceil}} ({}^{\sigma}e_t^*) + \sum_{t < j} c_{tj} ({}^{\sigma}e_j^*) & (\text{type I}) \\ - \sum_{i < t} \frac{\sigma(c_{it}) \pi^{m'-\lambda_t}}{\sigma(\pi)^{m'-\lambda_i}} ({}^{\sigma}e_i^*) + \frac{d_t (1 - \pi)\pi^{m'-\lambda_t}}{p^{\left \lfloor \frac{m'-\lambda_t}{2} \right \rfloor}} ({}^{\sigma}e_t^*) + \sum_{t < j} c_{tj} ({}^{\sigma}e_j^*) & (\text{type II}) \end{matrix}\right. \\ & = 0. \end{split} \end{equation*} For all $t<j$, $\alpha(x_t)(e_j) = \pi^{m'-\lambda_j} c_{tj} =0$ (in $R=\OO/\pi^{m'}\OO$) so $c_{tj}=0$ (in $R/\pi^{\lambda_j}R$). 
If $K/\Q_p$ is of type I, then \begin{equation*} \begin{split} & \alpha(x_t)(e_t)= \pi^{m'-\lambda_t} \frac{d_t \pi^{m'-\lambda_t+1}}{p^{\left \lceil \frac{m'-\lambda_t}{2} \right \rceil}} = 0 \text{ in } R \\ \Leftrightarrow \;\; & 2v_p(d_t) + ((2m-1)-\lambda_t+1) - 2 \left \lceil \frac{2m-1-\lambda_t}{2} \right \rceil \geq \lambda_t \\ \Leftrightarrow \;\; & v_p(d_t) \geq \lambda_t + \left \lceil -\frac{\lambda_t+1}{2} \right \rceil = \left \lfloor \frac{\lambda_t}{2} \right \rfloor \end{split} \end{equation*} ($v_p(d_t)$ denotes the exponent of $p$ in $d_t$) so $d_t=0$. Similarly, if $K/\Q_p$ is of type II, then \begin{equation*} \begin{split} & \alpha(x_t)(e_t)= \pi^{m'-\lambda_t} \frac{d_t (1 - \pi)\pi^{m'-\lambda_t}}{p^{\left \lfloor \frac{m'-\lambda_t}{2} \right \rfloor}}= 0 \text{ in } R \\ \Leftrightarrow \;\; & 2v_p(d_t) + (2m-\lambda_t) - 2 \left \lfloor \frac{2m-\lambda_t}{2} \right \rfloor \geq \lambda_t \\ \Leftrightarrow \;\; & v_p(d_t) \geq \lambda_t + \left \lfloor -\frac{\lambda_t}{2} \right \rfloor = \left \lfloor \frac{\lambda_t}{2} \right \rfloor \end{split} \end{equation*} so $d_t=0$. \end{proof} \begin{definition} \label{def5f} An element $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is called \textit{special} if $C$ is of the form $\sum_{i < j} \alpha_{ij}^{c_{ij}} + \sum_{i} \alpha_i^{d_i}$ for some $c_{ij} \in R/\pi^{\lambda_j}R$ and $d_i \in R_1/p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor}R_1$. \end{definition} For a special $C$, we have $E(C, F, i, j) = m_F(C)(v_j, v_i) = 0$ for every $i \leq j$ by Lemma \ref{lem5d} so $p_F(C)=1$. Moreover, Lemma \ref{lem5e} implies that the number of special $C$ is $$ \prod_{i<j} \left| R/\pi^{\lambda_j}R \right| \times \prod_{i} \left| R_1/p^{\left \lfloor \frac{\lambda_i}{2} \right \rfloor}R_1 \right| = p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )} $$ when $FV=G$. This number coincides with the moment appearing in Theorem \ref{thm2g}(2). Now we prove that for a non-special $C$, there are linearly many non-zero coefficients $E(C, F, i, j)$. \begin{proposition} \label{prop5g} If $F \in \Hom_R(V, G)$ is a code of distance $\delta n$ and $C \in \Hom_R(V, ({}^{\sigma}G)^*)$ is not special, then $$ \# \left\{ (i, j) : i \leq j \text{ and } E(C, F, i, j) \neq 0 \right\} \geq \frac{\delta n}{2}. $$ \end{proposition} \begin{proof} Define $f_{ij}^{c} = f_{ji}^{\sigma(c)}$ ($c \in R$), $g_i^{d}$ ($d \in R_1$), $\HH_t(V)$ and $m_F^t : \Hom_R(V, (^{\sigma}G)^*) \rightarrow \HH_t(V)$ ($t=1, 2$) as in the proof of Proposition \ref{prop4k}. Following the argument of the proof of Proposition \ref{prop4k}, the proof reduces to showing that the inequality \begin{equation} \label{eq5e} \left| \im(m_F^1) \right| \leq \frac{\left| \Hom_R(V, ({}^{\sigma}G)^*) \right|}{\left| \left\{ \text{special } C \right\} \right|} = p^{\sum_{i=1}^{r} \left ( (n-i) \lambda_i + \left \lceil \frac{\lambda_i}{2} \right \rceil \right ) } \end{equation} is actually an equality. \begin{itemize} \item Since $V$ is a free $R$-module, the natural map $\Hom_R(V, R) \otimes ({}^{\sigma}G)^* \rightarrow \Hom_R(V, ({}^{\sigma}G)^*)$ is an isomorphism.
Thus for $c \in R$ and $1 \leq b \leq a \leq r$, $c z_{\tau_a}^* \otimes {}^{\sigma}e_b^*$ is an element of $\Hom_R(V, ({}^{\sigma}G)^*)$ and \begin{equation*} m_F(c z_{\tau_a}^* \otimes {}^{\sigma}e_b^* )(z_{\tau_i}, z_{\tau_j}) = \left\{\begin{matrix} c \delta_{ai} \delta_{b j} \sigma(\pi)^{m'-\lambda_b} + \sigma(c)\delta_{aj} \delta_{bi} \pi^{m'-\lambda_b} & (i<j) \\ T(c \delta_{ai} \delta_{bi} \sigma(\pi)^{m'-\lambda_b}) & (i=j) \end{matrix}\right. \end{equation*} so $$ m_F^2(c z_{\tau_a}^* \otimes {}^{\sigma}e_b^* ) = \left\{\begin{matrix} f_{\tau_b \tau_a}^{\pi^{m'-\lambda_b} \sigma(c)} & (b<a) \\ g_{\tau_b}^{T(\pi^{m'-\lambda_b} \sigma(c))} & (b=a) \end{matrix}\right. $$ as elements in $\HH_2(V)$. Since we have $$ \left\{ T(\pi^{m'-\lambda_b} \sigma(c)) : c \in R \right\} = \left\{\begin{matrix} p^{\left \lceil \frac{m'-\lambda_b}{2} \right \rceil} R_1 = p^{m - \left \lceil \frac{\lambda_b}{2} \right \rceil}R_1 & (\text{type I}) \\ p^{\left \lfloor \frac{m'-\lambda_b}{2} \right \rfloor} R_1 = p^{m - \left \lceil \frac{\lambda_b}{2} \right \rceil}R_1 & (\text{type II}) \end{matrix}\right. , $$ the image of $m_F^2$ contains every element of $\HH_2(V)$ of the form $$ f = \sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i} $$ where $c_{ij} \in \pi^{m'-\lambda_i} R$ and $d_i \in p^{m - \left \lceil \frac{\lambda_i}{2} \right \rceil} R_1$. As in the proof of Lemma \ref{lem5e}, one can deduce that $$ \sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i} = 0 $$ if and only if each $c_{ij}$ and $d_i$ is zero. This implies that the number of $f$ in $\HH_2(V)$ of the form $\sum_{i<j} f_{\tau_i \tau_j}^{c_{ij}} + \sum_{i} g_{\tau_i}^{d_i}$ is $$ \prod_{i<j} p^{\lambda_i} \cdot \prod_{i} p^{\left \lceil \frac{\lambda_i}{2} \right \rceil} = p^{\sum_{i=1}^{r}\left ( (r-i)\lambda_i + \left \lceil \frac{\lambda_i}{2} \right \rceil \right ) }. $$ Thus we have \begin{equation} \label{eq5f} \left| \im(m_F^2) \right| \geq p^{\sum_{i=1}^{r} \left ( (r-i)\lambda_i + \left \lceil \frac{\lambda_i}{2} \right \rceil \right )}. \end{equation} \item For $\ell \notin \tau$, $c \in R$ and $1 \leq b \leq r$, \begin{equation*} m_F(c z_{\ell}^* \otimes {}^{\sigma}e_b^*)(z_i, z_{\tau_j}) = c \sigma(\pi^{m' - \lambda_b}) \delta_{\ell i} \delta_{bj} \end{equation*} so $$ m_F^1(c z_{\ell}^* \otimes {}^{\sigma}e_b^*) = f_{\tau_b \ell}^{\pi^{m'-\lambda_b} \sigma(c)} $$ as elements in $\HH_1(V)$. Let $S$ be the set of the elements of $\HH_1(V)$ of the form $$ f = \sum_{\ell \notin \tau} \sum_{b} f_{\tau_b \ell}^{c_{b \ell}} $$ for some $c_{b \ell} \in \pi^{m'-\lambda_b} R$. Then $\left| S \right| = p^{\sum_{i=1}^{r} (n-r) \lambda_i}$ (since $\sum_{\ell \notin \tau} \sum_{b} f_{\tau_b \ell}^{c_{b \ell}} = 0$ if and only if each $c_{b \ell}$ is zero) and $S$ is contained in the kernel of the surjective homomorphism $\im(m_F^1) \rightarrow \im(m_F^2)$, which implies that \begin{equation} \label{eq5g} \left| \im(m_F^1) \right| \geq p^{\sum_{i=1}^{r} (n-r) \lambda_i} \left| \im(m_F^2) \right|. \end{equation} \end{itemize} By the equations (\ref{eq5f}) and (\ref{eq5g}), the inequality (\ref{eq5e}) should be an equality. \end{proof} Now we compute the moments of the cokernel of an $\varepsilon$-balanced matrix $X \in \HH_n(\OO)$. As in the unramified case, we provide some details of the proof for the convenience of the readers. For an $\OO$-module $G$ of type $\lambda = (\lambda_1 \geq \cdots \geq \lambda_r)$, let $M_G$ be the number of special $C$ for a surjective $F \in \Hom_R(V, G)$, i.e. 
$M_G := p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )}$. \begin{lemma} \label{lem5x1} For given $0 < \varepsilon < 1$, $\delta >0$ and $G$, there are $c, K_0 > 0$ such that the following holds: Let $X \in \HH_n(R)$ be an $\varepsilon$-balanced matrix, $F \in \Hom_R(V, G)$ be a code of distance $\delta n$ and $A \in \Hom_R((^{\sigma}V)^*, G)$. Then for each $n$, $$ \left| \PP(FX=0) - M_G \left| G \right|^{-n} \right| \leq \displaystyle \frac{K_0 e^{-cn}}{\left| G \right|^n} $$ and $$ \PP(FX=A) \leq K_0 \left| G \right|^{-n}. $$ \end{lemma} \begin{proof} By the equations (\ref{eq5b}) and (\ref{eq5c}) (replace $FX$ by $FX-A$), we have \begin{equation} \label{eq5h} \begin{split} \PP (FX = A) & = \frac{1}{\left| G \right|^n}\sum_{C} \EE (\zeta^{T(C(FX-A))}) \\ & = \frac{1}{\left| G \right|^n}\sum_{C} \EE (\zeta^{T(C(-A))}) \EE (\zeta^{T(C(FX))}) \\ & = \frac{1}{\left| G \right|^n} \sum_{C} \EE (\zeta^{T(C(-A))}) p_F(C). \end{split} \end{equation} For $\gamma \in (0, \delta)$, we break the sum into $3$ pieces: \begin{equation*} \begin{split} S_1 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is special for } F \right\}, \\ S_2 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is not special for } F \text{ and } \gamma \text{-weak for } F \right\}, \\ S_3 & := \left\{ C \in \Hom_R(V, ({}^{\sigma}G)^*) : C \text{ is } \gamma \text{-robust for } F \right\}. \end{split} \end{equation*} \begin{enumerate}[label=(\alph*)] \item $C \in S_1$: By Lemma \ref{lem5e}, $\left| S_1 \right| = M_G$. Since $p_F(C)=1$ for $C \in S_1$ by Lemma \ref{lem5d}, we have $\sum_{C \in S_1} \EE (\zeta^{C(-A)}) p_F(C) = M_G$ for $A=0$ and $ \left| \sum_{C \in S_1} \EE (\zeta^{C(-A)}) p_F(C) \right| \leq M_G$ for any $A$. \item $C \in S_2$: By Lemma \ref{lem5c1}, $\left| S_2 \right| \leq C_G \binom{n}{\left \lceil \gamma n \right \rceil - 1} \left| G \right|^{\gamma n}$ for some constant $C_G > 0$. By Remark \ref{rmk5b} and Proposition \ref{prop5g}, we have $\left| p_F(C) \right| \leq \exp ( -\varepsilon \delta n / 2 p^{2m} )$ for every $C \in S_2$. \item $C \in S_3$: By Remark \ref{rmk5b} and Lemma \ref{lem5c2}, $$ \left| \sum_{C \in S_3} \EE (\zeta^{C(-A)}) p_F(C) \right| \leq \left| G \right|^n \exp (- \varepsilon \gamma \delta n^2 / 2 p^{2m} \left| G \right|^2). $$ \end{enumerate} Now the proof can be completed as in \cite[Lemma 4.1]{Woo17} by applying the above computations (for a sufficiently small $\gamma$) to the equation (\ref{eq5h}). \end{proof} Recall that for an integer $D = \prod_{i} p_i^{e_i}$, we have defined $\ell (D) := \sum_{i} e_i$ in Section \ref{Sec4}. The \textit{depth} of $F \in \Hom_R(V, G)$ is defined exactly as in Definition \ref{def4x2}. The next lemmas are analogues of \cite[Lemma 5.2 and 5.4]{Woo17}. \begin{lemma} \label{lem5x3} There is a constant $K_0$ depending on $G$ such that for every $D>1$, the number of $F \in \Hom_R(V, G)$ of depth $D$ is at most $$ K_0 \binom{n}{\left \lceil \ell(D)\delta n \right \rceil -1} \left| G \right|^n D^{-n+\ell(D)\delta n}. $$ \end{lemma} \begin{lemma} \label{lem5x4} Let $\varepsilon, \delta, G$ be as in Lemma \ref{lem5x1}. Then there exists $K_0>0$ such that if $F \in \Hom_R(V, G)$ has depth $D>1$ and $[G : FV] < D$, then for every $\varepsilon$-balanced matrix $X \in \HH_n(R)$, $$ \PP(FX=0) \leq K_0e^{-\varepsilon (1- \ell(D) \delta) n}(\left| G \right|/D)^{-(1-\ell(D) \delta)n}. $$ \end{lemma} \begin{proof} The proof is the same as that of Lemma \ref{lem4x4}.
In the ramified case, one can write $x_1 = x+\pi y$ for $\varepsilon$-balanced $x \in R_1$ and $y \in R_2$. For an $\OO$-submodule $H$ in $G$ of index $D$, $f_1 \in G \setminus H$ and a non-diagonal entry $x_1$ of $X$, the elements of the set $$ \left \{ x \in R_1 : x_1f_1 \equiv g \text{ in } G/H \text{ for some } y \in R_2 \right \} $$ are contained in a single equivalence class modulo $p$. Since $x$ is $\varepsilon$-balanced, we conclude that $\PP(x_1f_1 \equiv g \text{ in } G/H) \leq 1 - \varepsilon$. \end{proof} The following theorem can be proved exactly as in Theorem \ref{thm4l}. (Replace the Lemma \ref{lem4x1}, \ref{lem4x3} and \ref{lem4x4} to the Lemma \ref{lem5x1}, \ref{lem5x3} and \ref{lem5x4}, respectively.) \begin{theorem} \label{thm5h} Let $0 < \varepsilon < 1$ and $G$ be given. Then for any sufficiently small $c > 0$, there is a $K_0=K_{\varepsilon, G, c}>0$ such that for every positive integer $n$ and an $\varepsilon$-balanced matrix $X_0 \in \HH_n(\OO)$, $$ \left| \EE(\# \Sur_{\OO}(\cok(X_0), G)) - p^{\sum_{i=1}^{r}\left ( (i-1)\lambda_i + \left \lfloor \frac{\lambda_i}{2}\right \rfloor \right )} \right| \leq K_0 e^{-cn}. $$ In particular, the equation (\ref{eq2g}) holds for every sequence of $\varepsilon$-balanced matrices $(X_n)_{n \geq 1}$. \end{theorem} Combining the above theorem with Theorem \ref{thm2c} and \ref{thm3c}, we obtain the universality result for the distribution of the cokernels of random $p$-adic ramified Hermitian matrices. \begin{theorem} \label{thm5i} For every sequence of $\varepsilon$-balanced matrices $(X_n)_{n \geq 1}$ ($X_n \in \HH_n(\OO)$), the limiting distribution of $\cok(X_n)$ is given by the equation (\ref{eq2d}). \end{theorem} \begin{proof} Choose a positive integer $a$ such that $\pi^{a-1} \Gamma = 0$. Then for any finitely generated $\OO$-module $H$, we have $H \otimes \OO/\pi^{a} \OO \cong \Gamma$ if and only if $H \cong \Gamma$. Let $A_n$ be the cokernel of a Haar random matrix in $\HH_n(\OO)$ and $B_n = \cok(X_n)$. Then Theorem \ref{thm2c}, \ref{thm3c} and \ref{thm5h} conclude the proof. \end{proof} \section*{Acknowledgments} The author is supported by a KIAS Individual Grant (SP079601) via the Center for Mathematical Challenges at Korea Institute for Advanced Study. We thank Jacob Tsimerman and Myungjun Yu for their helpful comments. {\small \begin{thebibliography}{99} \bibitem{BKLPR15} M. Bhargava, D. M. Kane, H. W. Lenstra, B. Poonen and E. Rains, Modeling the distribution of ranks, Selmer groups, and Shafarevich-Tate groups of elliptic curves, Camb. J. Math. 3 (2015), no. 3, 275--321. \bibitem{CH21} G. Cheong and Y. Huang, Cohen-Lenstra distributions via random matrices over complete discrete valuation rings with finite residue fields, Illinois J. Math. 65 (2021), 385--415. \bibitem{Cho16} S. Cho, Group schemes and local densities of ramified hermitian lattices in residue characteristic 2 Part I, Algebra Number Theory 10 (2016), no. 3, 451--532. \bibitem{CKLPW15} J. Clancy, N. Kaplan, T. Leake, S. Payne and M. M. Wood, On a Cohen-Lenstra heuristic for Jacobians of random graphs, J. Algebraic Combin. 42 (2015), no. 3, 701--723. \bibitem{CL84} H. Cohen and H. W. Lenstra Jr., Heuristics on class groups of number fields, Number Theory, Noordwijkerhout 1983, Lecture Notes in Math. 1068, Springer, Berlin, 1984, 33--62. \bibitem{Del01} C. Delaunay, Heuristics on Tate-Shafarevitch groups of elliptic curves defined over $\Q$, Exp. Math. 10 (2001), no. 2, 191--196. \bibitem{Del07} C. 
Delaunay, Heuristics on class groups and on Tate-Shafarevich groups: the magic of the Cohen-Lenstra heuristics, in Ranks of Elliptic Curves and Random Matrix Theory, London Math. Soc. Lecture Note Ser. 341, Cambridge Univ. Press, Cambridge, 2007, 323--340. \bibitem{DJ14} C. Delaunay and F. Jouhet, $p^{\ell}$-torsion points in finite abelian groups and combinatorial identities, Adv. Math. 258 (2014), 13--45. \bibitem{FW89} E. Friedman and L. C. Washington, On the distribution of divisor class groups of curves over a finite field, in Théorie des Nombres (Quebec, PQ, 1987), de Gruyter, Berlin, 1989, 227--239. \bibitem{GLSV14} R. Gow, M. Lavrauw, J. Sheekey and F. Vanhove, Constant rank-distance sets of hermitian matrices and partial spreads in hermitian polar spaces, Electron. J. Combin. 21 (2014), P1.26. \bibitem{HL00} T. Honold and I. Landjev, Linear codes over finite chain rings, Electron. J. Combin. 7 (2000), R11. \bibitem{Jac62} R. Jacobowitz, Hermitian forms over local fields, Amer. J. Math. 84 (1962), no. 3, 441--465. \bibitem{Lee22} J. Lee, Joint distribution of the cokernels of random $p$-adic matrices, Forum Math. 35 (2023), no. 4, 1005--1020. \bibitem{Mac69} J. MacWilliams, Orthogonal matrices over finite fields, Amer. Math. Monthly 76 (1969), 152--164. \bibitem{Neu99} J. Neukirch, Algebraic number theory, Grundlehren der Mathematischen Wissenschaften 322, Springer, Berlin, 1999. \bibitem{NW22} H. H. Nguyen and M. M. Wood, Random integral matrices: universality of surjectivity and the cokernel, Invent. Math. 228 (2022), 1--76. \bibitem{Woo17} M. M. Wood, The distribution of sandpile groups of random graphs, J. Amer. Math. Soc. 30 (2017), no. 4, 915--958. \bibitem{Woo19} M. M. Wood, Random integral matrices and the Cohen-Lenstra heuristics, Amer. J. Math. 141 (2019), no. 2, 383--398. \bibitem{Woo22} M. M. Wood, Probability theory for random groups arising in number theory, arXiv:2301.09687, to appear in Proceedings of the International Congress of Mathematicians (2022). \bibitem{Yu12} C. F. Yu, On Hermitian forms over dyadic non-maximal local orders, Pure Appl. Math. Q. 8 (2012), no. 4, 1117--1146. \end{thebibliography} } \end{document}
2205.09313v3
http://arxiv.org/abs/2205.09313v3
Large deviation principle and thermodynamic limit of chemical master equation via nonlinear semigroup
\documentclass[11pt]{amsart} \renewcommand\baselinestretch{1.01} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsmath} \usepackage{graphicx} \usepackage{amsaddr} \usepackage{comment} \usepackage[usenames,dvipsnames]{xcolor} \usepackage[margin = 1in]{geometry} \usepackage{enumerate} \usepackage[toc,page]{appendix} \usepackage{lineno} \usepackage{color} \usepackage{bbm} \usepackage{hyperref} \usepackage{mhchem} \usepackage{siunitx} \newcommand{\red}{} \newcommand{\blue}{} \definecolor{mygreen}{rgb}{0.1,0.75,0.2} \newcommand{\grn}{\color{mygreen}} \providecommand{\bbs}[1]{\left(#1\right)} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \DeclareMathOperator{\PV}{PV} \DeclareMathOperator{\KL}{KL} \DeclareMathOperator{\ran}{Ran} \DeclareMathOperator{\kk}{Ker} \DeclareMathOperator{\argmin}{argmin} \DeclareMathOperator{\argmax}{argmax} \DeclareMathOperator{\Bo}{Bo} \newcommand{\lra}{\longrightarrow} \newcommand{\Lra}{\Longrightarrow} \newcommand{\da}{\downarrow} \newcommand{\llra}{\Longleftrightarrow} \newcommand{\Lla}{\Longleftarrow} \newcommand{\wra}{\rightharpoonup} \newcommand{\per}{\text{per}} \newcommand{\loc}{\text{loc}} \newcommand{\peq}{\vec{x}^{\text{s}}} \newcommand{\xss}{x^{\xs}} \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \newcommand{\pt}{\partial} \newcommand{\pcon}{\eta} \newcommand{\vcon}{p} \newcommand{\eps}{\varepsilon} \newcommand{\sige}{\sigma^e_{ij}} \newcommand{\sij}{{ij}} \newcommand{\ud}{\,\mathrm{d}} \newcommand{\8}{\infty} \newcommand{\F}{\mathcal{F}} \newcommand{\cL}{\mathcal{L}} \newcommand{\mm}{\mathcal{M}} \newcommand{\bR}{\mathbb{R}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\bE}{\mathbb{E}} \newcommand{\bP}{\mathbb{P}} \newcommand{\vs}{\vec{s}} \newcommand{\vx}{\vec{x}} \newcommand{\psih}{u_{\scriptscriptstyle{\text{h}}}} \newcommand{\ph}{\psi_{\scriptscriptstyle{\text{h}}}} \newcommand{\uh}{u_{\scriptscriptstyle{\text{h}}}} \newcommand{\vh}{v_{\scriptscriptstyle{\text{h}}}} \newcommand{\psihs}{\psi_{\scriptscriptstyle{\text{h}}}^{\scriptscriptstyle{\text{ss}}}} \newcommand{\V}{\scriptscriptstyle{\text{h}}} \newcommand{\A}{\scriptscriptstyle{\text{A}}} \newcommand{\B}{\scriptscriptstyle{\text{B}}} \newcommand{\tot}{\scriptscriptstyle{\text{tot}}} \newcommand{\C}{\scriptscriptstyle{\text{C}}} \newcommand{\cv}{X^{\scriptscriptstyle{\text{h}}}} \newcommand{\R}{\scriptscriptstyle{\text{R}}} \newcommand{\xs}{\scriptscriptstyle{\text{S}}} \newcommand{\nes}{\scriptscriptstyle{\text{NESS}}} \newcommand{\mr}{\scriptscriptstyle{\text{MR}}} \newcommand{\vxv}{\vec{x}_{i}} \newcommand{\vxi}{\vec{x}_i} \newcommand{\vyi}{\vec{y}_i} \newcommand{\vyv}{\vec{y}_{\V}} \newcommand{\pe}{\vec{x}^{\text{e}}} \newcommand{\vxr}{\vec{x}^{\R}} \newcommand{\vpr}{\vec{p}^{\R}} \newcommand{\vprr}{\vec{p}^{\mr}} \newcommand{\vp}{\vec{p}} \newcommand{\vy}{\vec{y}} \newcommand{\xa}{\vec{x}^{\A}} \newcommand{\xaa}{\vec{x}^{\A'}} \newcommand{\xcc}{\vec{x}^{\C'}} \newcommand{\xb}{\vec{x}^{\B}} \newcommand{\xc}{\vec{x}^{\C}} \newcommand{\va}{\vec{\alpha}} \newcommand{\vb}{\vec{\beta}} \newcommand{\vr}{\vec{r}} \newcommand{\vm}{\vec{m}} \newcommand{\vR}{\vec{R}} \newcommand{\vq}{\vec{q}} \newcommand{\ac}{\ce{I}} \newcommand{\usc}{\text{USC}(\bR^N)} \newcommand{\lsc}{\text{LSC}(\bR^N)} \newcommand{\buc}{C(\bR^{N*})} \newcommand{\ccc}{C_{c}(\mathbb{R}^{N*})} 
\newcommand{\ellb}{\ell^\8(\Omega_{\V}^*)} \newcommand{\ellc}{\ell^\8_{c}(\Omega_{\V}^*)} \newcommand{\omp}{\Omega_{\V}^+} \newcommand{\omn}{\Omega_{\V}\backslash \Omega_{\V}^+} \newcommand{\jl}[1]{\textcolor{red}{\textbf{JL: #1}}} \begin{document} \title[Large deviation for chemical reactions]{Large deviation principle and thermodynamic limit of chemical master equation via nonlinear semigroup} \author[Y. Gao]{Yuan Gao} \address{Department of Mathematics, Purdue University, West Lafayette, IN} \email{[email protected]} \author[J.-G. Liu]{Jian-Guo Liu} \address{Department of Mathematics and Department of Physics, Duke University, Durham, NC} \email{[email protected]} \keywords{Large deviation principle, Varadhan's inverse lemma, Lax-Oleinik semigroup, monotone schemes, non-equilibrium chemical reactions} \subjclass[2010]{49L25, 37L05, 60F10, 60J74, 92E20} \date{\today} \begin{abstract} Chemical reactions can be modeled by a random time-changed Poisson process on countable states. The macroscopic behaviors, such as large fluctuations, can be studied via the WKB reformulation. The WKB reformulation for the backward equation is Varadhan's discrete nonlinear semigroup and is also a monotone scheme that approximates the limiting first-order Hamilton-Jacobi equations (HJE). The discrete Hamiltonian is an m-accretive operator, which generates a nonlinear semigroup on countable grids and justifies the well-posedness of the chemical master equation (CME) and the backward equation with 'no reaction' boundary conditions. The convergence from the monotone schemes to the viscosity solution of HJE is proved by constructing barriers to overcome the polynomial growth coefficients in the Hamiltonian. This implies the convergence of Varadhan's discrete nonlinear semigroup to the continuous Lax-Oleinik semigroup and leads to the large deviation principle for the chemical reaction process at any single time. Consequently, the macroscopic mean-field limit reaction rate equation is recovered with a concentration rate estimate. Furthermore, we establish the convergence from a reversible invariant measure to an upper semicontinuous viscosity solution of the stationary HJE. \end{abstract} \maketitle \section{Introduction} Chemical or biochemical reactions, such as the production of useful materials in industry and the maintenance of metabolic processes with enzymes in living cells, are among the most important events in the world. At a microscopic scale, these reactions can be understood from a probabilistic viewpoint. {\blue In this paper, we focus on the large deviation principle (fluctuation estimate) for chemical reaction processes in the thermodynamic limit regime. As a byproduct, the reaction rate equation will be recovered as a mean-field limit equation with an exponential concentration rate. To estimate the small fluctuations away from the typical chemical reaction trajectory, we will utilize Varadhan's exponential nonlinear semigroup method with novel techniques, which we will explain in detail below.} A convenient way to stochastically describe chemical reactions is via random time-changed Poisson processes $X(t)$ \eqref{Csde}, c.f. \cite{Kurtz15}. This continuous time Markov process on countable states counts the number of molecular species $X_i(t), i=1, \cdots, N$ for chemical reactions happening in a large container characterized by a size $\frac{1}{h}\gg 1$. 
We assume chemical reactions in a container are independent of molecule position, and the molecular number is proportional to the container size. Therefore, we will refer to the large size limit $h\to 0$ as the thermodynamic limit or macroscopic limit. After counting the net change of the molecular numbers $\vec{\nu}_j$ for a $j$-th reaction, the rescaled process $\cv(t)=hX(t)$ is \begin{equation} \label{Csde} \begin{aligned} \cv(t) = \cv(0) + \sum_{j=1}^M \vec{\nu}_{j} h \Bigg(\mathbbm{1}_{\{\cv(t^-)+\vec{\nu}_j h \geq 0\}} Y^+_j \bbs{\frac{1}{h}\int_0^t \tilde{\Phi}^+_j(\cv(s))\ud s}\\ -\mathbbm{1}_{\{\cv(t^-)-\vec{\nu}_j h \geq 0\}}Y^-_j \bbs{\frac{1}{h}\int_0^t \tilde{\Phi}^-_j(\cv(s))\ud s}\Bigg), \end{aligned} \end{equation} where $Y^\pm_{j}(t)$ are i.i.d. unit rate Poisson processes, and $\mathbbm{1}$ is the indicating function to show that there is no reaction if the next state $\cv(t^-)\pm\vec{\nu}_j h$ is negative. This `no reaction' constraint will also be reflected in the chemical master equation \eqref{odex} below. The intensity $\tilde{\Phi}_j^{\pm}(\vx)$ of this Poisson process is given by the law of mass action (LMA) \eqref{newR}, indicating the encounter of species in one reaction. The chemical master equation (CME) \eqref{rp_eq} for the rescaled process $\cv(t)$ is a linear ordinary differential equation (ODE) system on countable discrete grids with a `no reaction' boundary condition to maintain nonnegative counting states and the conservation of total probability. The key observation is that CME \eqref{rp_eq} has a monotonicity property, which is also known as a monotone scheme approximation for hyperbolic differential equations. The generator and the associated backward equation \eqref{backward} of the process $\cv$ can be regarded as a dual equation for CME. The backward equation \eqref{backward} is also a linear ODE system on the same countable discrete grids, adapting the `no reaction' boundary condition and monotonicity. The macroscopic behaviors of chemical reactions with the above stochastic modeling are deterministic "statistical properties" that can be studied by taking the large size limit $h\to 0$. At the first level, the law of large numbers characterizes the mean-field limit nonlinear ODE, known as the reaction rate equation (RRE), with polynomial nonlinearity due to the law of mass action (LMA): \begin{equation}\label{odex} \frac{\ud}{\ud t} \vec{x} = \sum_{j=1}^M \vec{\nu}_j \bbs{\Phi^+_j(\vec{x}) - \Phi^-_j(\vec{x})}, \qquad \Phi^\pm_j(\vxv)= k^\pm_j \prod_{\ell=1}^{N} \bbs{x_\ell}^{\nu^\pm_{j\ell}}. \end{equation} At a more detailed level, the large deviation principle estimates the fluctuations away from the mean-field limit RRE \eqref{odex}. To capture the small probabilities in the large deviation regime, the WKB reformulation $p_{\V}(\vxv,t) = e^{-\frac{\psi_{\V}(\vxv,t)}{h}}$ of the master equation \cite{Kubo73} gives a discrete Hamiltonian and an exponentially nonlinear ODE system on the same countable discrete grids; see \eqref{upwind00}. This nonlinear ODE system inherits the monotonicity and the "no reaction" boundary condition from the CME, so we refer to \eqref{upwind00} as the CME in the Hamilton-Jacobi equation (HJE) form or the monotone scheme for HJE \eqref{HJEpsi}. The backward equation can proceed with the same WKB reformulation and inherit the "no reaction" boundary condition as the restriction $\vxi\pm\vec{\nu_{j}} h\geq 0$. 
The resulting exponentially nonlinear ODE system is also a nonlinear semigroup \eqref{wkb_b} on discrete countable grids: \begin{equation}\label{upwind} \begin{aligned} \pt_t \uh(\vxi,t)= & H_{\V}(\uh(\vxi),\vxi)\\ := & \sum_{j=1, \vxi+\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}^+_j(\vxv)\bbs{ e^{\frac{\uh(\vxv+ \vec{\nu_{j}} h)-\uh(\vxv)}{h}} - 1} + \sum_{j=1, \vxi-\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}_j^-(\vxv)\bbs{ e^{ \frac{\uh(\vxv- \vec{\nu_{j}}h)-\uh(\vxv)}{h} } - 1}. \end{aligned} \end{equation} A zero extension to negative grids, which is consistent with the "no reaction" boundary condition, is taken on $\tilde{\Phi}^\pm_j$; see \eqref{exPHI} and Lemma \ref{lem:generator}. Thus, after proper extension, the domain for \eqref{upwind} and the corresponding HJE \eqref{HJE2psi} becomes the whole space. We observe that \eqref{upwind} is a monotone scheme because $H_{\V}$ is decreasing w.r.t. $\uh(\vxi)$ while increasing w.r.t. $\uh(\vxi\pm \vec{\nu}_j h)$. The probability representation for this nonlinear semigroup is given by \begin{equation}\label{semigroup} \bbs{S_t u_0}(\vxv)=h \log \bE^{\vxv}\bbs{e^{ \frac{u_0(\cv_t)}{h}}}, \end{equation} dated back to \textsc{Varadhan} \cite{Varadhan_1966}. The justification of the existence of the nonlinear semigroup on discrete countable grids relies on the resolvent approximation $u_{\V} - \Delta t H_{\V}(u_{\V})=f_{\V}$ (i.e., backward Euler scheme \eqref{bEuler}) and \textsc{Crandall-Liggett}'s nonlinear semigroup theory \cite{CrandallLiggett}; see Theorem \ref{thm:backwardEC}. {\blue We will prove the large deviation principle for process $\cv_t$ at a single time through Varadhan's inverse lemma \cite{Bryc_1990}. In detail, the rigorous justification for the WKB expansion of the backward equation can be obtained by proving the convergence from the discrete Varadhan's nonlinear semigroup of \eqref{upwind} to the Lax-Oleinik semigroup solution of the limiting HJE: \begin{equation}\label{HJE2psi} \pt_t u(\vx,t) - H(\nabla u(\vx),\vx)=0, \quad H(\vp,\vx):=\sum_{j=1}^M \bbs{ \Phi^+_j(\vec{x})\bbs{e^{\vec{\nu}_j\cdot \vp}-1 } + \Phi_j^-(\vec{x})\bbs{ e^{-\vec{\nu}_j\cdot \vp}-1 } }. \end{equation} Notice the Lax-Oleinik semigroup representation for \eqref{HJE2psi} \begin{equation}\label{LO} u(\vx,t) = \sup_{\vy } \bbs{u_0(\vy) - I(\vy; \vx,t)}, \quad I(\vy; \vx,t) := \inf_{\gamma(0)=\vx, \gamma(t)=\vy} \int_0^t L(\dot{\gamma}(s),\gamma(s)) \ud s, \end{equation} where $L(\vs,\vx)$ is the convex conjugate of Hamiltonian $H(\vp,\vx)$. This connects the viscosity solution to the HJE with the pathwise deterministic optimal control problem with terminal profit $u_0(y)$ and running cost $L(\dot{\gamma}(s),\gamma(s))$; cf. \cite{evans2008weak}. Then, the large deviation principle can be obtained via the inverse Varadhan's lemma (proved by \textsc{Bryc} \cite{Bryc_1990}). In other words, the sufficient conditions for the large deviation principle are to justify the convergence of the above nonlinear semigroup and the exponential tightness of the process. These are also necessary conditions, known as Varadhan's lemma \cite{Varadhan_1966}. To obtain the above convergence, the viscosity method is a well-developed tool pioneered by \textsc{Crandall} and \textsc{Lions} in \cite{crandall1983viscosity, crandall1984two} using two methods: the vanishing viscosity method and the monotone scheme approximation. 
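To make the convex conjugacy between $H$ and the running cost $L$ in \eqref{LO} concrete, the following sketch (in Python with NumPy; the single reversible reaction $X_1 \rightleftharpoons X_2$ and its rates are invented for illustration and are not part of the analysis) evaluates the Hamiltonian of \eqref{HJE2psi} and approximates $L(\vs,\vx)=\sup_{\vp}\,(\vs\cdot\vp-H(\vp,\vx))$. Since $H$ depends on $\vp$ only through $q=\vec{\nu}\cdot\vp$, the supremum over velocities of the form $\vs=\sigma\vec{\nu}$ reduces to a one-dimensional search over $q$; for velocities outside $\mathrm{span}\{\vec{\nu}\}$ the cost is $+\infty$.
\begin{verbatim}
import numpy as np

# Hypothetical rates and reaction vector for X1 <=> X2 (illustration only).
k_plus, k_minus = 2.0, 1.0
nu = np.array([-1.0, 1.0])           # reaction vector nu = nu^- - nu^+

def Phi_plus(x):                     # macroscopic LMA, forward flux
    return k_plus * x[0]

def Phi_minus(x):                    # macroscopic LMA, backward flux
    return k_minus * x[1]

def H(p, x):
    """Hamiltonian of the limiting HJE; depends on p only through q = nu . p."""
    q = nu @ p
    return Phi_plus(x) * (np.exp(q) - 1.0) + Phi_minus(x) * (np.exp(-q) - 1.0)

def L(sigma, x, q_grid=np.linspace(-20.0, 20.0, 40001)):
    """Running cost for velocities s = sigma * nu, via a 1-D Legendre transform."""
    vals = sigma * q_grid - (Phi_plus(x) * (np.exp(q_grid) - 1.0)
                             + Phi_minus(x) * (np.exp(-q_grid) - 1.0))
    return vals.max()

x = np.array([0.6, 0.4])
print("H(0, x) =", H(np.zeros(2), x))                    # = 0: p = 0 is always a root
print("L(0, x) =", L(0.0, x))                            # >= 0 in general
print("L at RRE velocity:", L(Phi_plus(x) - Phi_minus(x), x))   # ~ 0 up to grid error
\end{verbatim}
The printed values illustrate two structural facts used repeatedly below: $H(\vec{0},\vx)=0$, so that $L(\vs,\vx)\geq 0$, and the cost vanishes at the mean-field velocity of RRE \eqref{odex}.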
Our first contribution is to observe that the WKB reformulation for the backward equation is exactly a monotone scheme for the limiting first-order HJE \eqref{HJE2psi}. Then, one can prove the convergence from the monotone scheme \eqref{upwind} to the viscosity solution of HJE \eqref{HJE2psi} using the upper/lower semicontinuous envelopes of the discrete solution to the resolvent problem and \textsc{Crandall-Liggett}'s nonlinear semigroup theory. However, two difficulties arise: the "no-reaction" boundary condition and the coefficients $\Phi_j^\pm(\vx)$ in Hamiltonian $H$ have polynomial growth at far fields. The abstract theorems established by \textsc{Souganidis} \cite{souganidis1985approximation} and \textsc{Barles, Perthame} \cite{Barles_Perthame_1987} cannot be directly applied. Second, for compactness of $u_{\V}$ at the far field, we need a one-point compactification at the far field. Hence, we choose the ambient space as $\buc$, i.e., the far field limit is a constant value (see \eqref{space}). The key observation is that each chemical reaction satisfies the conservation of mass, i.e., there exists a positive mass vector $\vm$ such that $\vec{\nu}_j \cdot \vm=0, \, j=1,\cdots,M$. Thanks to this, any mass function $f(\vm\cdot \vx)$ is a stationary solution to the Hamilton-Jacobi equations (HJEs) \eqref{upwind} and \eqref{HJE2psi}, and hence can be used to construct barriers to overcome the polynomial growth of $\Phi^\pm_j(\vx)$. Thus we first prove the viscosity solution $u(\vx,t)\in \ccc$ and then extend it to $\buc$ by using the non-expansive property of the resolvent problem; see Corollary \ref{cor:semi}. Third, to overcome the dynamic boundary condition due to the "no reaction" constraint for negative states, we perform a zero extension for ${\Phi}^\pm_j(\vx)$ for $\vx\in \mathbb{R}^N$ and impose a local Lipschitz continuous condition. This condition excludes some reactions, but it is convenient to use to guarantee the comparison principle for the HJE. Removing this technical condition is possible by considering an optimal control formulation with a boundary cost, but we leave it for future study. Our convergence result (see Theorem \ref{thm_vis}) from the discrete nonlinear semigroup solution of \eqref{upwind} to the viscosity solution of HJE \eqref{HJE2psi} provides the convergence part of the large deviation principle at a single time. The exponential tightness is trivial if the starting point of $\cv_t$ is deterministic and positive due to the mass conservation law for the chemical reaction. In general, to verify the exponential tightness of $\cv$ at any time $t$, we impose one of the following two assumptions: one is the existence of a positive reversible invariant measure $\pi_{\V}$ (see \eqref{master_db_n}) that is exponentially tight; the other is that there is compact support for the initial density $\rho^0_{\V}$; see Theorem \ref{thm_dp}. As a consequence of the large deviation principle at a single time, the mean-field limit RRE \eqref{odex} is recovered using the concentration of measures with an explicit concentration rate (see Corollary \ref{cor:ode}). Our other contribution is in constructing a stationary solution to the HJE. The construction of a stationary solution to the HJE is usually difficult and non-unique. 
However, under the assumption of the existence of a positive reversible invariant measure $\pi_{\V}$ satisfying \eqref{master_db_n}, we can construct an upper semicontinuous (USC) viscosity solution in the sense of Barron-Jensen \cite{Barron_Jensen_1990}; see Proposition \ref{prop:USC}. However, since the stationary HJE does not have uniqueness, whether our construction selects a meaningful weak KAM solution is still unknown. This selection principle for the drift-diffusion process is proved in \cite{GL23}. } While a comprehensive review of the vast literature on chemical reactions and the large deviation principle is beyond the scope of this work, let us review here some closely related works. The use of viscosity solutions to the Hamilton-Jacobi equation (HJE) as the limit of the nonlinear semigroup of Markov processes was developed by \textsc{Feng} and \textsc{Kurtz} in \cite{feng2006large} as a general framework to study the large deviation principle. See also some recent developments in \cite{Kraaij_2016, Kraaij_2020}. The idea of using the nonlinear semigroup and the variational principle of Markov processes to study the large deviation principle can be traced back to \cite{Varadhan_1966, fleming1983optimal}, while the idea of using viscosity solutions to HJE can be traced back to \cite{Ishii_Evans_1985, fleming1986pde, Barles_Perthame_1987}. Another general approach that uses HJE to study the large deviation principle in the physical literature is the macroscopic fluctuation theory developed by \textsc{Bertini} et al. \cite{bertini2002macroscopic, Bertini15}; see further mathematical analysis and the variational structure in \cite{Renger_2018, Patterson_Renger_Sharma_2021}. {\blue For chemical reactions, the sample path large deviation principle was first proved in \cite{Dembo18} by constructing a process with linear interpolation in time and applying the inverse contraction principle; recent developments have addressed the boundary issue with a uniform vanishing rate at the boundary \cite{Agazzi_Andreis_Patterson_Renger_2021}. The LDP in path space covers the single-time LDP result in this paper, but the methodologies are completely different; see the remark after Corollary \ref{cor:ode}.} A dynamic large deviation principle for the pair of concentrations and fluxes of chemical reaction jump processes was proved in \cite{Patterson_Renger_2019}. The connection between the generalized gradient flow and the good rate function in the large deviation principle was rigorously established by \textsc{Mielke, Renger, Peletier} \cite{Mielke_Renger_Peletier_2014}. The mean-field limit of the chemical reaction stochastic model was proved by Kurtz in \cite{kurtz1970solutions, Kurtz71}. Recently, \textsc{Maas} and \textsc{Mielke} \cite{maas2020modeling} provided another proof via the evolutionary $\Gamma$-convergence approach under the detailed balance assumption. The remaining part of this paper is organized as follows. In Section \ref{sec2}, we revisit some terminology for stochastic/deterministic chemical reaction equations and introduce the WKB reformulations for CME and the backward equation. In Section \ref{sec3}, we study the monotonicity, construct barriers to control the polynomial growth at far fields, and investigate the solvability of the associated monotone schemes (i.e., WKB reformulations). Then, in Section \ref{sec:ext_dd}, we prove the well-posedness of the backward equation, and in Section \ref{sec:db_d}, we recover the well-posedness of CME for reversible processes.
In Section \ref{sec4}, we prove the convergence from the monotone schemes to the viscosity solution of the HJE. We discuss the construction of USC viscosity solutions for the stationary HJE in Section \ref{sec:sHJE} and for the dynamic HJE in Section \ref{sec:vis}, respectively. The short-time classical solution to the HJE and error estimates are discussed in Section \ref{sec5}. Finally, in Section \ref{sec6}, we use the convergence results, together with the exponential tightness, to prove the large deviation principle at a single time, which also recovers the mean-field limit RRE for chemical reactions. \section{Preliminaries: stochastic and deterministic models for chemical reactions, and WKB reformulations for forward/backward equations}\label{sec2} In this section, we provide the necessary background for understanding the stochastic modeling of chemical reactions using the random time-changed Poisson process (see Section \ref{sec2.1}). We then derive CME \eqref{rp_eq} with the corresponding 'no reaction' boundary condition (see Section \ref{sec2.2}). Additionally, we revisit the macroscopic RRE \eqref{odex} in Section \ref{sec2.3}. In Sections \ref{sec2.4} and \ref{sec2.5}, we employ the WKB reformulation to transform the CME and the backward equation into nonlinear ODE systems, specifically nonlinear discrete semigroups on countable grids. \subsection{Random time-changed Poisson process and the law of mass action for chemical reactions}\label{sec2.1} Chemical reactions involving species $X_\ell$, $\ell=1,\cdots,N$, and reactions $j=1,\cdots,M$ can be kinematically described as \begin{equation}\label{CRCR} \text{Reaction }j: \quad \sum_{\ell} \nu_{j\ell}^+ X_{\ell} \quad \ce{<=>[k_j^+][k_j^-]} \quad \sum_{\ell} \nu_{j\ell}^- X_\ell, \end{equation} where the nonnegative integers $\nu_{j\ell}^\pm \geq 0$ are stoichiometric coefficients and $k_j^\pm\geq 0$ are the reaction rates for the $j$-th forward and backward reactions. The column vector $\vec{\nu}_j:= \vec{\nu}_j^- - \vec{\nu}_j^+ := \bbs{\nu_{j\ell}^- - \nu_{j\ell}^+}_{\ell=1:N}\in \bZ^N$ is called the reaction vector for the $j$-th reaction, counting the net change in molecular numbers for species $X_\ell$. Let $\mathbb{N}$ be the set of natural numbers including zero. In this paper, all vectors $\vec{X}= \bbs{X_i}_{i=1:N} \in \mathbb{N}^N$ and $\bbs{\varphi_j}_{j=1:M},\, \bbs{k_j}_{j=1:M}\in \bR^M$ are column vectors. Denote $\vec{m}=(m_\ell)_{\ell=1:N}$, where $m_\ell$ represents the molecular weight for the $\ell$-th species. Then the conservation of mass for the $j$-th reaction in \eqref{CRCR} implies \begin{equation}\label{mb_j} \vec{\nu}_j \cdot \vec{m} = 0, \quad j=1,\cdots, M. \end{equation} In an open system, where materials are exchanged with the environment (denoted by $\emptyset$), the exchange reaction $X_{\ell}\ce{<=>} \emptyset$ does not conserve mass. The set $\mathbb{N}$ serves as the state space for each counting process $X_\ell(t)$, which records the number of molecules of species $\ell=1,\cdots,N$ in the biochemical reactions.
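To fix these notations in a concrete instance, the short sketch below encodes the stoichiometric coefficients $\nu^\pm_{j\ell}$ of a hypothetical two-reaction network, forms the reaction vectors $\vec{\nu}_j=\vec{\nu}_j^--\vec{\nu}_j^+$, and checks the mass-conservation relation \eqref{mb_j}; the network, the molecular weights, and the NumPy implementation are our own choices for exposition only.
\begin{verbatim}
import numpy as np

# Hypothetical network (illustration only): species (A, B, C) with reactions
#   R1:  A + B <=> C
#   R2:  2A    <=> B          (assumes B has twice the mass of A)
nu_plus  = np.array([[1, 1, 0],      # nu^+_{j,l}: reactant coefficients
                     [2, 0, 0]])
nu_minus = np.array([[0, 0, 1],      # nu^-_{j,l}: product coefficients
                     [0, 1, 0]])
nu = nu_minus - nu_plus              # reaction vectors nu_j, one per row

m = np.array([1.0, 2.0, 3.0])        # invented molecular weights m_l

# Conservation of mass (eq. (mb_j)): nu_j . m = 0 for every reaction j.
print("reaction vectors:\n", nu)
print("nu_j . m =", nu @ m)          # -> [0. 0.]
assert np.allclose(nu @ m, 0.0)
\end{verbatim}
An exchange reaction with the environment would of course violate the assertion checked in the last line.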
With the reaction container size $1/h\gg 1$, the process $\cv_\ell(t)=hX_\ell(t)$ satisfies the random time-changed Poisson representation for chemical reactions \eqref{Csde} (see \cite{kurtz1980representations, Kurtz15}): \begin{align*} \cv(t) = \cv(0) + \sum_{j=1}^M \vec{\nu}_{j} h \Bigg(\mathbbm{1}_{\{\cv(t^-)+\vec{\nu}_j h \geq 0\}} Y^+_j \bbs{\frac{1}{h}\int_0^t \tilde{\Phi}^+_j(\cv(s))\ud s} \\ -\mathbbm{1}_{\{\cv(t^-)-\vec{\nu}_j h \geq 0\}}Y^-_j \bbs{\frac{1}{h}\int_0^t \tilde{\Phi}^-_j(\cv(s))\ud s}\Bigg). \end{align*} Here, for the $j$-th reaction channel, $Y^\pm_{j}(t)$ are i.i.d. unit-rate Poisson processes, and the intensity function is given by the mesoscopic law of mass action (LMA): \begin{equation}\label{newR} \tilde{\Phi}_j^\pm(\cv) = {k_j^\pm} \prod_{\ell=1}^{N} \frac{\bbs{\frac{\cv_\ell}{h}} ! \ h^{\nu_{j\ell}^\pm}}{ \bbs{\frac{\cv_\ell}{h} - \nu_{j\ell}^\pm}!}. \end{equation} The `no reaction' constraints $\cv(t^-)\pm \vec{\nu}_{j} h\geq 0$ ensure that no jump occurs if the number of some species would become negative in the container. This `no reaction' correction to the process \eqref{Csde} was also noticed in \cite[eq(28)]{anderson2019constrained}, where a very similar `no reaction' constraint was imposed near the relative boundary of the positive orthant. Here, we use $\tilde{\Phi}^\pm_j$ to distinguish the mesoscopic LMA from the limit macroscopic LMA $\Phi^\pm_j$ in \eqref{lma}. In the literature, this process \eqref{Csde} is also known as the Marcus-Lushnikov process \cite{marcus1968stochastic, lushnikov1978coagulation} or Gillespie's process \cite{gillespie1972stochastic}. The existence and uniqueness of the stochastic equation \eqref{Csde} were proved by \cite{kurtz1980representations, Kurtz15} in terms of the corresponding martingale problem. For the purpose of effectively handling the boundary condition when some species become zero, we extend the process $\cv(t)$ to include negative counting numbers for species such that those negative numbers remain unchanged. Clearly, if $\cv_\ell(0)<0$ for some $\ell$, then $\cv(t) \equiv \cv(0)$. To realize this, we also set the LMA as: \begin{equation}\label{exPHI} \tilde{\Phi}_j^\pm(\cv) = 0 \quad \text{if for some $\ell$, $\cv_\ell<0$}. \end{equation} In summary, the original process is given by: \begin{equation} \cv \in \Omega^+_{\V} := \{ \vxi = \vec{i} h; \vec{i} \in \mathbb{N}^N \}, \end{equation} while after including the above extended notions, the process \begin{equation} \cv\in \Omega_{\V}= \{ \vxi=\vec{i} h; \vec{i}\in \mathbb{Z}^N \} \end{equation} can still be described by \eqref{Csde}. In other words, if $\cv(0) \in \Omega_{\V} \backslash \Omega_{\V}^+$, then $\cv(t) \equiv \cv(0)$. \subsection{Chemical master equation and `no reaction' boundary condition}\label{sec2.2} For a chemical reaction modeled by \eqref{Csde}, we denote the counting probability of $\cv(t)$ as $p_{\V}(\vxv,t) = \bE(\mathbbm{1}_{\vxv}(\cv(t)))$, where $\mathbbm{1}_{\vxv}$ is the indicator function. 
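Before deriving the equation satisfied by $p_{\V}$, we note that $p_{\V}(\vxv,t)$ can be approximated by direct Monte Carlo simulation of \eqref{Csde}; the Gillespie-type sketch below (for an invented reversible reaction $X_1\rightleftharpoons X_2$ with invented rates, written in Python) uses the jump rates $\tilde{\Phi}^\pm_j/h$ from the mesoscopic LMA \eqref{newR} and enforces the `no reaction' constraint explicitly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
h = 0.05                              # inverse container size
k_plus, k_minus = 2.0, 1.0            # hypothetical rates for X1 <=> X2
nu = np.array([-1.0, 1.0]) * h        # rescaled jump: +/- nu_j * h

def rates(x):
    """Jump rates (1/h) * Phi^{+/-}(x) under the mesoscopic LMA; zero if the
    jump would leave the nonnegative orthant ('no reaction' constraint)."""
    r_fwd = k_plus * x[0] / h if np.all(x + nu >= 0) else 0.0
    r_bwd = k_minus * x[1] / h if np.all(x - nu >= 0) else 0.0
    return r_fwd, r_bwd

def sample_path(x0, T):
    """One trajectory of the rescaled process X^h on [0, T] (Gillespie's method)."""
    x, t = np.array(x0, dtype=float), 0.0
    while True:
        r_fwd, r_bwd = rates(x)
        R = r_fwd + r_bwd
        if R == 0.0:
            return x                          # no admissible reaction remains
        t += rng.exponential(1.0 / R)
        if t > T:
            return x
        x = x + nu if rng.random() < r_fwd / R else x - nu

# Monte Carlo estimate of p_h(x, t), started from a deterministic state.
x0, T, n_samples = (1.0, 0.0), 1.0, 5000
ends = np.array([sample_path(x0, T) for _ in range(n_samples)])
values, counts = np.unique(np.round(ends[:, 0] / h).astype(int), return_counts=True)
for n1, c in zip(values, counts):
    print(f"p_h(x1 = {n1 * h:.2f}, t = {T}) ~ {c / n_samples:.4f}")
\end{verbatim}
Each trajectory is piecewise constant with jumps of size $\pm\vec{\nu}_j h$, and the empirical frequencies of the terminal states approximate $p_{\V}(\cdot,T)$.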
Then $p_{\V}(\vxv,t)$, $\vxi\in \Omega_h$, satisfies CME \cite{Kurtz15}: \begin{equation}\label{rp_eq} \begin{aligned} \frac{\ud}{\ud t} p_{\V}(\vxi, t) =& \frac{1}{h}\sum_{j=1, \vxi- \vec{\nu}_{j}h\geq 0}^M \left( \tilde{\Phi}^+_j(\vxi- \vec{\nu}_{j} h) p_{\V}(\vxi- \vec{\nu}_{j} h,t) - \tilde{\Phi}^-_j(\vxi) p_{\V}(\vxi,t) \right) \\ & + \frac{1}{h}\sum_{j=1, \vxi+\vec{\nu}_{j} h\geq 0}^M \left( \tilde{\Phi}^-_j(\vxi+\vec{\nu}_{j}h) p_{\V}(\vxi+\vec{\nu}_{j}h,t) - \tilde{\Phi}_j^+(\vxi) p_{\V}(\vxi,t) \right), \quad \text{for } \vxi\in \omp;\\ \frac{\ud}{\ud t} p_{\V}(\vxi, t) =& 0, \quad \text{for } \vxi \in \omn. \end{aligned} \end{equation} It is natural to take $p_{\V}(\vxi, 0) = 0$ for $\vxi\in\omn$, which remains $0$ for all times for $\vxi\in\omn$. In the $Q$-matrix form, we denote $\frac{\ud}{\ud t} p_{\V} = Q^*_{\V} p_{\V}$, where $Q^*_{\V}$ is the transpose of the generator $Q_{\V}$ of process $\cv(t)$. For the derivation of the generator $Q_{\V}$ with the `no reaction' constraints, we refer to \cite[Appendix]{GL22}. Here, the constraints $\vxi\pm \vec{\nu}_{j} h\geq 0$ are understood componentwisely, and we refer to this as the `no reaction' boundary condition for CME. The `no reaction' boundary condition implies that if the number of some species is negative in the container, then no chemical reaction can occur. In Lemma \ref{lem:mass1}, we will show that under this boundary constraint, the total probability is conserved. In Lemma \ref{lem:generator}, we will derive the generator $Q_{\V}$ and the backward equation associated with the boundary constraint. In the literature, a commonly used simple Dirichlet boundary condition is $p_{\V}(\vxi) = 0$ for $\vxi\in \omn$ (or equivalently $\tilde{\Phi}^\pm(\vxi) = 0$ for $\vxi\in\omn$), as seen in \cite{Kurtz15, Gauckler14, patterson2019large, maas2020modeling}. However, the `no reaction' boundary condition is directly derived from \eqref{Csde} and imposes the condition that any reaction jump resulting in a negative species count cannot occur. This is in the spirit of the Feller-Wentzell type dynamic boundary conditions proposed by Feller and Wentzell \cite{Feller_1952, Feller_1954, venttsel1959boundary} to preserve the total probability. A similar boundary condition was used in \cite{anderson2019constrained}, where a constrained Langevin approximation was established to incorporate the boundary condition. An immediate lemma below shows the conservation of total probability. \begin{lem}\label{lem:mass1} For CME \eqref{rp_eq}, we have the conservation of total probability: \begin{equation} \frac{\ud }{\ud t} \sum_{\vxi \in \Omega_{\V}}p_{\V}(\vxi, t) = \frac{\ud }{\ud t} \sum_{\vxi \in\omp}p_{\V}(\vxi, t) = 0. \end{equation} \end{lem} \begin{proof} From \eqref{rp_eq}, we have \begin{align}\label{mass1} \frac{\ud }{\ud t} \sum_{\vxi \geq 0}p_{\V}(\vxi, t) =& \frac{1}{h}\sum_{\vxi \geq 0}\sum_{j=1, \vxi-\vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^+_j(\vxi-\vec{\nu}_{j}h) p_{\V}(\vxi-\vec{\nu}_{j}h,t) - \tilde{\Phi}^-_j(\vxi) p_{\V}(\vxi,t) \nonumber \\ & + \frac{1}{h}\sum_{\vxi \geq 0}\sum_{j=1, \vxi+ \vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^-_j(\vxi+ \vec{\nu}_{j}h) p_{\V}(\vxi+ \vec{\nu}_{j}h,t) - \tilde{\Phi}_j^+(\vxi) p_{\V}(\vxi,t). \end{align} Then changing the variable $\vyi = \vxi+ \vec{\nu}_{j}h$, the second line above becomes \begin{equation} \frac{1}{h}\sum_{\vyi\geq 0}\sum_{\vyi - \vec{\nu}_{j} h \geq 0} \tilde{\Phi}^-_j(\vyi) p_{\V}(\vyi,t) - \tilde{\Phi}_j^+(\vyi - \vec{\nu}_{j}h) p_{\V}(\vyi - \vec{\nu}_{j}h,t). 
\end{equation} And thus $\frac{\ud }{\ud t} \sum_{\vxi \geq 0}p_{\V}(\vxi, t) = 0$. \end{proof} The rigorous justification of non-explosion $\sum_{\vxi} p_{\V}(\vxi) = 1$ for CME was proved in \cite{maas2020modeling} under the detailed balance assumption \eqref{master_db_n}. We refer to \cite{Gauckler14} for the existence and regularity of solutions to CME. \subsection{Mean-field limit is the macroscopic reaction rate equation}\label{sec2.3} The mesoscopic jumping process $\cv$ in \eqref{Csde} can be regarded as a large-size interacting particle system. In the large-size limit (thermodynamic limit), this interacting particle system can be approximately described by a mean field equation, i.e., a macroscopic nonlinear chemical reaction-rate equation. If the law of large numbers in the mean field limit holds, i.e., $p_{\V}(\vxv, t)\to \delta_{\vx(t)}$ for some $\vx(t)$, then the limit $\vx(t)$ describes the dynamics of the concentration of $N$ species in the continuous state space $\mathbb{R}_+^N := \{\vx \in \bR^N; x_\ell > 0\}$ and is given by RRE \eqref{odex}, also known as the chemical kinetic rate equation. The macroscopic fluxes $\Phi_j^\pm$ satisfy the macroscopic LMA: \begin{equation}\label{lma} \Phi^\pm_j(\vx) = k^\pm_j \prod_{\ell=1}^{N} \bbs{x_\ell}^{\nu^\pm_{j\ell}}. \end{equation} For simplicity of analysis, we also extend this LMA as: \begin{equation}\label{exLMA} \Phi^{\pm}_j(\vx) = 0 \quad \text{if } \vx \in \bR^N \backslash \bR^N_+. \end{equation} This is a macroscopic approximation for the mesoscopic LMA \eqref{newR} since $\tilde{\Phi}_j^\pm(\vxi) \to \Phi_j^\pm(\vx)$ as $\vxi \to \vx$. This RRE with LMA was first proposed by Guldberg and Waage in 1864. The detailed balance condition for RRE \eqref{odex} is defined by Wegscheider (1901) and Lewis (1925): there exists $\peq > 0$ (componentwise) such that \begin{equation}\label{DB} \Phi^+_j(\peq) - \Phi^-_j(\peq) = 0, \quad \forall j. \end{equation} Kurtz \cite{kurtz1970solutions, Kurtz71} proved the law of large numbers for the large-size process $\cv(t)$; cf. \cite[Theorem 4.1]{Kurtz15}. That is, if $\cv(0)\to \vec{x}(0)$ as $h\to 0$, then for any $\epsilon > 0$ and $t > 0$, \begin{equation}\label{lln} \lim_{h \to 0} \bP\{ \sup_{0\leq s\leq t}|\cv(s)-\vec{x}(s)|\geq \epsilon \}=0. \end{equation} Thus, we will also refer to the large-size limiting ODE \eqref{odex} as the macroscopic RRE. This provides a passage from the mesoscopic LMA \eqref{newR} to the macroscopic one \eqref{lma}. {\blue Recent results in \cite{maas2020modeling} establish the evolutionary $\Gamma$-convergence from CME to the Liouville equation for the case that both CME and the limiting equation has a generalized gradient flow structure. Starting from a deterministic state $\vx_0$, \cite[Theorem 4.7]{maas2020modeling} recovers Kurtz's results on the mean field limit of CME. However, the approach in \cite{{maas2020modeling}} requires the detailed balance assumption.} For the derivation of the mean field limit of CME with 'no reaction' constraints, please refer to \cite[Appendix]{GL22}. \subsection{WKB reformulation for CME and discrete HJE}\label{sec2.4} Besides the macroscopic trajectory $\vx(t)$ given by the law of large numbers, the WKB expansion for $p_{\V}(\vxv, t)$ in CME \eqref{rp_eq} is another standard method \cite{Kubo73, Hu87, Dykman_Mori_Ross_Hunt_1994, SWbook95, QianGe17}, which builds a more informative bridge between mesoscopic dynamics and macroscopic behaviors. 
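As a purely numerical illustration of this bridge (not used in the analysis; the reaction, rates, and SciPy-based implementation are invented for exposition), one can solve a truncated CME for a reversible reaction $X_1\rightleftharpoons X_2$, for which the conserved total $x_1+x_2$ reduces \eqref{rp_eq} to a finite birth-death chain, and inspect $-h\log p_{\V}$: its minimizer stays close to the mean-field trajectory of RRE \eqref{odex}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

h = 0.02                              # inverse container size
k_plus, k_minus = 2.0, 1.0            # hypothetical rates for X1 <=> X2
N_tot = round(1.0 / h)                # total molecule count (x1 + x2 = 1 conserved)

# Transposed generator Q^* of the CME on states n = #X1 in {0, ..., N_tot}:
# forward jump n -> n-1 at rate k_plus*n, backward jump n -> n+1 at rate k_minus*(N_tot-n).
n = np.arange(N_tot + 1)
Qstar = np.zeros((N_tot + 1, N_tot + 1))
Qstar[n[:-1], n[1:]] = k_plus * n[1:]              # gain of state n from n+1
Qstar[n[1:], n[:-1]] = k_minus * (N_tot - n[:-1])  # gain of state n from n-1
Qstar[n, n] = -(k_plus * n + k_minus * (N_tot - n))

t = 0.5
p0 = np.zeros(N_tot + 1); p0[N_tot] = 1.0          # start with all molecules in X1
p_t = expm(Qstar * t) @ p0                         # p_h(., t), total probability 1

psi_t = -h * np.log(np.maximum(p_t, 1e-300))       # WKB variable psi_h = -h log p_h
x_eq = k_minus / (k_plus + k_minus)
x_rre = x_eq + (1.0 - x_eq) * np.exp(-(k_plus + k_minus) * t)   # RRE solution x1(t)
print("argmin of psi_h :", h * n[np.argmin(psi_t)])
print("RRE value x1(t) :", x_rre)                  # the two agree up to O(h)
\end{verbatim}
This change of variable is made precise next.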
To characterize the exponential asymptotic behavior, we make a change of variable \begin{equation}\label{1.3} p_{\V}(\vxv,t) = e^{-\frac{\psi_{\V}(\vxv,t)}{h}}, \quad p_{\V}(\vxv,0)=p_0(\vxv). \end{equation} Then CME \eqref{rp_eq} can be recast as the following nonlinear ODE system for $\vxi \in \omp$: \begin{equation}\label{upwind00} \begin{aligned} \partial_t \psi_{\V}(\vxi) + \sum_{j=1, \,\vxi- \vec{\nu}_{j}h\geq 0}^M &\tilde{\Phi}^+_j(\vxi-\vec{\nu}_{j}h ) e^{\left(\frac{\psi_{\V}(\vxi) - \psi_{\V}(\vxi-\vec{\nu}_jh)}{h}\right)} - \tilde{\Phi}_j^-(\vxi) \\ &+ \sum_{j=1, \,\vxi+\vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^-_j(\vxi+ \vec{\nu}_{j}h) e^{\left(\frac{\psi_{\V}(\vxi)-\psi_{\V}(\vxi+\vec{\nu}_{j}h)}{h}\right)}- \tilde{\Phi}^+_j(\vxi)=0, \end{aligned} \end{equation} while for $\vxi\in\omn$, $\partial_t \psi_{\V}(\vxi)=0$. We point out that, in general, a constant is not a solution to \eqref{upwind00}. However, if $\psi_{\V}$ is a solution, then $\psi_{\V}+c$ is also a solution to \eqref{upwind00}. Formally, as $h\to 0$, a Taylor expansion of $\psi_{\V}$ with respect to $h$ leads to the following HJE for the rescaled master equation \eqref{rp_eq} for $\psi(\vx,t)$: \begin{equation}\label{HJEpsi} \pt_t \psi(\vec{x}, t) = -\sum_{j=1}^M \bbs{ \Phi^+_j(\vec{x})\bbs{e^{\vec{\nu_j} \cdot \nabla \psi(\vec{x},t)} - 1} + \Phi^-_j(\vec{x})\bbs{e^{-\vec{\nu_j} \cdot \nabla \psi(\vec{x},t)} - 1 }}. \end{equation} Define the Hamiltonian $H(\vp,\vx)$ on $\mathbb{R}^N\times \mathbb{R}^N$ as follows. For $\vx\in \mathbb{R}^N_+$, \begin{equation}\label{H} H(\vec{p},\vec{x}) := \sum_{j=1}^M \left[ \Phi^+_j(\vec{x})e^{\vec{\nu_j} \cdot \vec{p}} - \Phi^+_j(\vec{x}) + \Phi^-_j(\vec{x})e^{-\vec{\nu_j} \cdot\vec{p}} - \Phi^-_j(\vec{x}) \right]. \end{equation} Recall that in \eqref{exLMA}, $\Phi^{\pm}_j(\vx)=0$ for $\vx\in \mathbb{R}^N\setminus \mathbb{R}^N_+$. We naturally extend $H(\vp,\vx)$ to $\mathbb{R}^N\times \mathbb{R}^N$ such that $H(\vp,\vx)=0$ for $\vx\notin \mathbb{R}^N_+$. Then the HJE for $\psi(\vx,t)$ can be recast as \begin{equation}\label{HJE2psi00} \partial_t \psi + H(\nabla \psi, \vx) =0, \quad \vx\in\mathbb{R}^N. \end{equation} The WKB analysis above defines a Hamiltonian $H(\vp,\vx)$, which contains almost all the information for the macroscopic dynamics \cite{Dykman_Mori_Ross_Hunt_1994, GL22}. For this reason, we also call \eqref{upwind00} the CME in the HJE form. We remark that this kind of WKB expansion was first used by Kubo et al. \cite{Kubo73} for master equations of general Markov processes and later applied to CME in \cite{Hu87}. In \cite{Dykman_Mori_Ross_Hunt_1994}, the HJE \eqref{HJE2psi00} with the associated Hamiltonian $H$ in \eqref{H} was formally derived. {\blue In the mathematical analysis below, we will impose a technical assumption that $\Phi^\pm_j(\vx)$ is locally Lipschitz continuous after the zero extension. We point out that this assumption might be removed by directly considering the optimal control problem in a domain with a boundary and imposing an appropriate boundary cost, which would be an interesting direction for future study. } \subsection{WKB reformulation for the backward equation is Varadhan's nonlinear semigroup}\label{sec2.5} The fluctuation on path space, i.e., the large deviation principle, can be computed through the WKB expansion for the backward equation. Recall the rescaled process $\cv(t)$ in \eqref{Csde}, which is also denoted as $\cv_t$ for simplicity.
For any $f\in C_b(\mathbb{R}^N_+)$, denote \begin{equation} w_{\V}(\vxv, t) := \mathbb{E}^{\vxv}[f(\cv_t)], \end{equation} then $w_{\V}(\vxv,t)$ satisfies the backward equation \begin{equation}\label{backward} \partial_t w_{\V} = Q_{\V} w_{\V}, \quad w_{\V}(\vxv, 0)=f(\vxv). \end{equation} Notice the 'no reaction' boundary condition we derived in CME \eqref{rp_eq}. Below we give a lemma to explicitly derive the associated boundary condition in the generator $Q_{\V}$ as the duality of $Q^*_{\V}$. \begin{lem}\label{lem:generator} The backward equation with explicit generator $Q_{\V}$ can be expressed as the duality of the forward equation \begin{equation} \begin{aligned} \partial_t w_{\V}(\vxi,t) =& \frac{1}{h}\sum_{j=1, \vxi+\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}^+_j(\vxv)\left[ w_{\V}(\vxv+ \vec{\nu_{j}}h) - w_{\V}(\vxv) \right] \\ &+ \frac{1}{h}\sum_{j=1, \vxi-\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}_j^-(\vxv)\left[ w_{\V}(\vxv- \vec{\nu_{j}}h )-w_{\V}(\vxv) \right], \quad \vxi\in\omp; \\ \partial_t w_{\V}(\vxi,t) =& 0, \quad \vxi\in \omn. \end{aligned} \end{equation} \end{lem} \begin{proof} Multiplying \eqref{rp_eq} by $w_{\V}$ and summation yields \begin{equation} \begin{aligned} \langle w_{\V}, Q_{\V}^* p \rangle = & \frac{1}{h}\sum_{\vxi\geq 0}\sum_{j=1, \vxi- \vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^+_j(\vxi- \vec{\nu}_{j} h) p_{\V}(\vxi- \vec{\nu}_{j} h,t)w_{\V}(\vxi) - \tilde{\Phi}^-_j(\vxi) p_{\V}(\vxi,t)w_{\V}(\vxi) \\ & + \frac{1}{h}\sum_{\vxi\geq 0}\sum_{j=1, \vxi+\vec{\nu}_{j} h\geq 0}^M \tilde{\Phi}^-_j(\vxi+\vec{\nu}_{j}h) p_{\V}(\vxi+\vec{\nu}_{j}h,t)w_{\V}(\vxi) - \tilde{\Phi}_j^+(\vxi) p_{\V}(\vxi,t)w_{\V}(\vxi). \end{aligned} \end{equation} Here from \eqref{rp_eq}, we used \begin{equation} \langle w_{\V}, Q_{\V}^* p \rangle = \langle w_{\V}, Q_{\V}^* p \rangle_{\omp}. \end{equation} Changing variable $\vyi=\vxi- \vec{\nu}_{j}h$ in the above $\tilde{\Phi}^+(\vxi- \vec{\nu}_{j}h)$ term while changing variable $\vyi=\vxi+ \vec{\nu}_{j}h$ in the above $\tilde{\Phi}^-(\vxi+ \vec{\nu}_{j}h)$, we have \begin{equation} \begin{aligned} \langle Q_{\V} w_{\V}, p \rangle_{\omp} = & \frac{1}{h}\sum_{\vyi\geq 0}\sum_{j=1, \vyi+ \vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^+_j(\vyi) p_{\V}(\vyi ,t)[w_{\V}(\vyi+ \vec{\nu}_{j} h) - w_{\V}(\vyi)] \\ & + \frac{1}{h}\sum_{\vyi\geq 0}\sum_{j=1, \vyi-\vec{\nu}_{j} h\geq 0}^M \tilde{\Phi}^-_j(\vyi ) p_{\V}(\vyi,t)[w_{\V}(\vyi- \vec{\nu}_{j}h) - w_{\V}(\vyi)]. \end{aligned} \end{equation} Meanwhile, we set \begin{equation} Q_{\V} w_{\V} = 0, \quad \vxi\in \omn. \end{equation} Regarding $p$ as arbitrary test functions, this gives the backward equation \eqref{backward}. \end{proof} Denote $\uh(\vxi,t)= h\log w_{\V}(\vxi,t)$ for $\vxi\in\Omega_{\V}$. We obtain a nonlinear ODE system for the backward equation in HJE form \begin{equation}\label{upwind0} \pt_t \uh(\vxv,t) = h e^{- \frac{\uh(\vxv,t)}{h}} Q_{\V} e^{ \frac{\uh(\vxv,t)}{h}} =: H_{h}( \uh), \quad \uh(\vxv,0) = h\log f(\vxv). \end{equation} The above WKB reformulation for the backward equation is equivalent to \begin{equation}\label{wkb_b} \uh(\vxv,t) =h \log w_{\V}(\vxv,t) = h \log \bE^{\vxv}\bbs{f(\cv_t)} = h \log \bE^{\vxv}\bbs{e^{ \frac{u_0(\cv_t)}{h}}} =:\bbs{S_t u_0}(\vxv), \end{equation} which is the so-called nonlinear semigroup \cite{Varadhan_1966, feng2006large} for process $\cv(t)$. Notice here the process $\cv$ and the nonlinear semigroup are all extended to $\vxi\in\Omega_{\V}$. We first write $H_{\V}$ explicitly. 
For $\vxi\geq 0$, \begin{align}\label{tm24} H_{\V}(u) =& \sum_{j=1, \vxi+\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}^+_j(\vxv)\bbs{ e^{\frac{\uh(\vxv+ \vec{\nu_{j}} h)-\uh(\vxv)}{h}} - 1} + \sum_{j=1, \vxi-\vec{\nu_{j}} h\geq 0}^M \tilde{\Phi}_j^-(\vxv)\bbs{ e^{ \frac{\uh(\vxv- \vec{\nu_{j}}h)-\uh(\vxv)}{h} } - 1}. \end{align} With the extended $\tilde{\Phi}^{\pm}_j(\vxi)$ in \eqref{exPHI}, \eqref{tm24} is also naturally extended to $\vxi\in\Omega_{\V}$. We see that $\pt_t \uh = H_{\V}(\uh)$ is a nonlinear ODE system on the countable grids. We point out that a constant is always a solution to \eqref{upwind0}. Meanwhile, if $\uh$ is a solution then $\uh+c$ is also a solution to \eqref{upwind0}. We will use these two important invariance facts to prove the existence and contraction principle later; see Lemma \ref{lem:perron} and Proposition \ref{prop:cp}. Taking a formal limit in \eqref{upwind0} with the discrete Hamiltonian $H_{\V}$ in \eqref{tm24}, we obtain the HJE in terms of $u(\vx,t), \,\vx\in \mathbb{R}^N$, i.e., \eqref{HJE2psi} \begin{equation} \partial_t u(\vx,t) = \sum_{j=1}^M \left[ \Phi^+_j(\vec{x})\left(e^{\vec{\nu}_j\cdot \nabla u(\vx, t)}-1\right) + \Phi_j^-(\vec{x})\left( e^{-\vec{\nu}_j\cdot \nabla u(\vx, t)}-1\right) \right] = H(\nabla u(\vx), \vx). \end{equation} We remark that the duality between the forward and backward equations, after the WKB limit, leads to the same HJE with only a sign difference in the time derivative. The boundary condition for \eqref{upwind0} is incorporated into the constraint $\vxi \pm \vec{\nu_{j}} h\geq 0$ in the equation. Imposing a physically meaningful boundary condition for the limit HJE \eqref{HJE2psi} at $\vx=\vec{0}$ is complicated. However, in our construction of the viscosity solution to the HJE, as a limit of \eqref{upwind0}, it automatically inherits the boundary condition of \eqref{upwind0}, and at the same time, the HJE \eqref{HJE2psi} is defined on the whole space $\mathbb{R}^N$ by using an extension; see Section \ref{sec:vis}. This is in the same spirit as the Feller-Wentzell type dynamic boundary conditions, which were proposed by Feller and Wentzell to impose proper boundary conditions preserving total mass \cite{Feller_1952, Feller_1954, venttsel1959boundary}. From now on, for both the discrete monotone scheme \eqref{upwind} and the HJE \eqref{HJEpsi}, the domains refer to the extended domains $\Omega_{\V}$ and $\mathbb{R}^N$, respectively. \section{Well-posedness of CME and backward equation via nonlinear semigroup}\label{sec3} In this section, we study the well-posedness of CME \eqref{rp_eq} and the backward equation \eqref{backward} via the nonlinear semigroup method. First, the WKB reformulation for both CME \eqref{rp_eq} and the backward equation \eqref{backward} leads to two discrete HJEs. The main difference between them is the sign of the time derivative. Next, we observe that both CME and the backward equation can be regarded as monotone schemes for linear hyperbolic equations. These monotone schemes, after the WKB reformulation, are still monotone schemes for the corresponding HJEs. The essential features of these monotone schemes are the monotonicity and the nonexpansive estimates, which naturally generate a nonlinear semigroup thanks to Crandall-Liggett's nonlinear semigroup theory. Using this observation, this section obtains a unique discrete solution to the corresponding HJEs.
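Before turning to the scheme analysis, a quick numerical consistency check (an illustration only, with an invented bimolecular reaction $2X_1\rightleftharpoons X_2$ and invented rates, written in Python) confirms that, for a smooth test function $u$, the discrete Hamiltonian \eqref{tm24} built from the mesoscopic LMA approaches $H(\nabla u(\vx),\vx)$ at rate $O(h)$, as expected for a consistent monotone scheme.
\begin{verbatim}
import numpy as np

k_plus, k_minus = 1.5, 0.7                 # invented rates for 2 X1 <=> X2
nu = np.array([-2.0, 1.0])                 # reaction vector

def u(x):                                  # smooth test function
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def grad_u(x):
    return np.array([np.cos(x[0]), x[1]])

def Phi_tilde(x, h):
    """Mesoscopic LMA (eq. (newR)): k+ (x1/h)(x1/h - 1) h^2 for 2X1, k- x2 for X2."""
    return k_plus * x[0] * (x[0] - h), k_minus * x[1]

def H_h(x, h):
    """Discrete Hamiltonian (eq. (tm24)) acting on the test function u."""
    fp, fm = Phi_tilde(x, h)
    return (fp * (np.exp((u(x + nu * h) - u(x)) / h) - 1.0)
            + fm * (np.exp((u(x - nu * h) - u(x)) / h) - 1.0))

def H(p, x):
    """Limiting Hamiltonian (eq. (HJE2psi)) with the macroscopic LMA."""
    return (k_plus * x[0] ** 2 * (np.exp(nu @ p) - 1.0)
            + k_minus * x[1] * (np.exp(-nu @ p) - 1.0))

x = np.array([0.8, 0.5])
exact = H(grad_u(x), x)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"h = {h:.0e}:  |H_h - H| = {abs(H_h(x, h) - exact):.3e}")   # decreases ~ O(h)
\end{verbatim}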
In Section \ref{sec:upwind}, we study the monotone and nonexpansive properties of the monotone scheme with backward Euler time discretization and then prove the existence and uniqueness of the backward Euler scheme \eqref{bEuler} by the Perron method. We point out that we cannot directly use the explicit scheme in \cite{crandall1984two} because LMA \eqref{lma} leads to a polynomial growth in the coefficients $\Phi^\pm_j(\vx)$. Thus, we will prove the existence and uniqueness of this backward Euler scheme \eqref{bEuler}. In Section \ref{sec:ext_dd}, by taking $\Delta t\to 0$ and using the nonexpansive nonlinear semigroup constructed by Crandall-Liggett \cite{CrandallLiggett}, we prove the global existence of the solution to the backward equation \eqref{backward}. In order to recover the existence and exponential tightness of CME from the results of the backward equation, Section \ref{sec:db_d} assumes the existence of a positive reversible invariant measure $\pi_{\V}$ satisfying \eqref{master_db_n} and then proves the existence, comparison principle, and exponential tightness of CME; see Corollary \ref{cor:tight}. This reversibility assumption includes some non-equilibrium enzyme reactions \cite{GL22}, where the energy landscape (i.e., the limiting solution to the stationary HJE in Proposition \ref{prop:USC}) is nonconvex with multiple steady states indicating three key features of non-equilibrium reactions: multiple steady states, non-zero flux, and positive entropy production rate at the non-equilibrium steady states (NESS). \subsection{A monotone scheme for HJE inherited from CME}\label{sec:upwind} Recall the backward equation \eqref{backward} and the WKB reformulation \eqref{upwind0}. Recall the countable discrete domain as \begin{equation} \Omega_{\V}:= \{ \vxi=\vec{i} h; \vec{i}\in \mathbb{Z}^N \}. \end{equation} \eqref{upwind} is exactly a monotone scheme for the corresponding HJE \eqref{HJE2psi}. We remark that the solution to \eqref{upwind} shifted by a constant is still a solution. After proving the well-posedness, we also refer to the solution to \eqref{upwind} as Varadhan's nonlinear semigroup. \subsubsection{Nonexpansive property for backward Euler monotone scheme} We now rewrite the monotone scheme \eqref{upwind} following the \textsc{Barles-Souganidis} framework \cite{barles1991convergence}. Denote the monotone scheme operator as \begin{equation}\label{def_H} H_{\V}: D(H_{\V}) \subset \ell^{\infty}(\Omega_{\V}) \to \ell^{\infty}(\Omega_{\V}); \qquad ( u(\vxi) )_{\vxi\in\Omega_{\V}} \mapsto ( H_{\V}(\vxi, u(\vxi), u(\cdot)) )_{\vxi\in\Omega_{\V}}, \end{equation} where $D(H_{\V})$ is the domain of $H_{\V}$ defined as $D(H_{\V}):=\{(u(\vxi))\in\ell^{\infty}(\Omega_{\V}); (H_{\V}(\vxi, u(\vxi), u))\in \ell^{\infty}(\Omega_{\V})\}$. The discrete Hamiltonian $H_{h}$ is defined as \begin{equation}\label{def_Hh} H_{h}(\vxi, \psih(\vxi), \varphi):= \sum_{j=1, \,\vxi+ \vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^+_j(\vxv)\left( e^{\frac{\varphi(\vxv+ \vec{\nu_{j}} h)-\uh(\vxv)}{h}} - 1 \right) + \sum_{j=1, \,\vxi-\vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}_j^-(\vxv)\left( e^{ \frac{\varphi(\vxv- \vec{\nu_{j}}h)-\uh(\vxv)}{h} } - 1 \right). \end{equation} This notation of the discrete Hamiltonian follows \cite{barles1991convergence}. It is straightforward to verify that $H_{h}$ satisfies the monotone scheme condition, i.e., \begin{equation}\label{mono} \text{if } \varphi_1 \leq \varphi_2 \text{, then } H_{h}(\vx, \uh(\vx), \varphi_1) \leq H_{h}(\vx,\uh(\vx), \varphi_2). 
\end{equation} Denote the time step as $\Delta t$. Then the backward Euler time discretization gives \begin{equation}\label{bEuler} \frac{\psih^{n}-\psih^{n-1}}{\Delta t} - H_{\V}(\vxi, \psih^{n}(\vxi), \psih^{n})=0. \end{equation} Here, the second variable in $H$ is the value $\psih^n(\vxi)$ at $\vxi$, and the third variable $\psih$ replaces $\varphi$ in \eqref{def_H}, indicating the dependence of the values $\psih$ at the surrounding jumping points $\psih(\vxi\pm \vec{\nu}_j h)$. This notation clearly shows that $H_{\V}$ is decreasing with respect to the second variable while increasing with respect to the third variable. Then $\psih^{n}=(I-\Delta t H_{\V})^{-1} \psih^{n-1}:= J_{\Delta t, h} \psih^{n-1}$ can be solved as the solution $u_{\V}$ to the discrete resolvent problem with $f_{\V}=\psih^{n-1}$ \begin{equation}\label{dis_resolvent} u_{\V}(\vxi) - \Delta t H_{\V}(\vxi, u_{\V}(\vxi), u_{\V})=f_{\V}(\vxi). \end{equation} We first prove the nonexpansive property for the discrete solutions, which implies the uniqueness of the solution to \eqref{bEuler}. \begin{lem}\label{lem_nonexp_d} Let $u_{\V}$ and $v_{\V}$ be two solutions satisfying $u_{\V}= J_{\Delta t, h} f_{\V}$ and $v_{\V}=J_{\Delta t, h} g_{\V}$. Then we have \begin{enumerate}[(i)] \item Monotonicity: if $f_{\V}\leq g_{\V}$, then $u_{\V}\leq v_{\V}$; \item Nonexpansive: \begin{equation}\label{nonexp_d} \|u_{\V}-v_{\V}\|_{\infty} \leq \|f_{\V}-g_{\V}\|_{\infty}. \end{equation} \end{enumerate} \end{lem} \begin{proof} First, denote $\vxi^*$ as a maximum point such that $\max(u_{\V}-v_{\V})=(u_{\V}-v_{\V})(\vxi^*)$. Here, without loss of generality, we assume the maximum is attained; otherwise, the proof is still true by passing to a limit. Then \begin{equation}\label{mp1} \begin{aligned} (u_{\V}-v_{\V})(\vxi^*) = & \frac{1}{h}\sum_{j=1, \,\vxi+ \vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^+_j(\vxi^* ) e^{\xi_i} [(u_{\V}-v_{\V})(\vxi^*+ {\vec{\nu}_j}h) - (u_{\V}-v_{\V})(\vxi^* )] \\ & + \frac{1}{h}\sum_{j=1, \,\vxi-\vec{\nu}_{j}h\geq 0}^M \tilde{\Phi}^-_j(\vxi^* ) e^{\xi_i'} [(u_{\V}-v_{\V})(\vxi^*-{\vec{\nu}_j}h)-(u_{\V}-v_{\V})(\vxi^*)] + (f_{\V}-g_{\V})(\vxi^*)\\ \leq& (f_{\V}-g_{\V})(\vxi^*), \end{aligned} \end{equation} where $\xi_i, \xi_i'$ are some constants from the mean value theorem. This immediately gives \begin{equation} \max (u_{\V}-v_{\V}) \leq \max(f_{\V}-g_{\V}). \end{equation} Then if $f_{\V}\leq g_{\V}$, we have $u_{\V}\leq v_{\V}$, which concludes (i). Notice also the identity \begin{equation} \|u_{\V}-v_{\V}\|_{\infty} = \max\{ \max(u_{\V}-v_{\V}), \max(v_{\V}-u_{\V})\} \leq \max\{ \max(f_{\V}-g_{\V}), \max(f_{\V}-g_{\V})\}=\|f_{\V}-g_{\V}\|_{\infty}, \end{equation} which concludes (ii). \end{proof} The property (ii) shows that the resolvent $J_{\Delta t, h} = (I - \Delta t H_{\V})^{-1}$ is nonexpansive. Thus, we also obtain that $-H_{\V}$ is an accretive operator on $\ell^\infty(\Omega_{\V}).$ \subsubsection{Existence and uniqueness of the backward Euler scheme via the Perron method} We next prove the following lemma to ensure the existence and uniqueness of a solution $\psih^{n}$ to \eqref{bEuler}, i.e., solving the resolvent problem \eqref{dis_resolvent}. The proof of this lemma follows the classical Perron method and the observation that a constant is always a solution to \eqref{upwind}. \begin{lem}\label{lem:perron} Let $\lambda>0$. Assume there exist constants $K_M:=\sup_i f(\vxi)$ and $K_m:= \inf_i f(\vxi)$. 
Then there exists a unique solution $\uh$ to \begin{equation}\label{dis_exist} \uh(\vxi) - \lambda H_{\V}(\vxi, \uh(\vxi), \uh) = f(\vxi), \quad \vxi\in\Omega_{\V}. \end{equation} We also have $K_m\leq \psih \leq K_M$. \end{lem} \begin{proof} Step 1. We define a discrete subsolution $\uh$ to \eqref{dis_exist} as \begin{equation} \uh(\vxi) - \lambda H_{\V}(\vxi, \uh(\vxi), \uh) \leq f(\vxi), \quad \vxi\in\Omega_{\V}. \end{equation} Since $H_{\V}$ satisfies the monotone scheme condition \eqref{mono}, we can directly verify that the above discrete subsolution definition is consistent with the viscosity subsolution; see the definition in Appendix \ref{app3}. Supersolution is defined by reversing the inequality. Based on this definition, we can directly verify that $K_M$ is a supersolution while $K_m$ is a subsolution. Denote \begin{equation} \mathcal{F}:=\{v; \quad v \text{ is a subsolution to \eqref{dis_exist}}, \, v\leq K_M\}. \end{equation} Since $K_m\in\mathcal{F}$, $\mathcal{F}\neq \emptyset$. Step 2. Define \begin{equation}\label{def_u} \uh(\vxi) := \sup \{ \vh(\vxi); \,\, v\in \mathcal{F}\}. \end{equation} We will prove that $\uh(\vxi)$ is a subsolution. It is obvious that $\uh\geq K_m$. Since $\Omega_{\V}$ is countable, by the diagonal argument, there exists a subsequence $v_k\in\mathcal{F}$ such that for any $\vxi$, $v_k(\vxi) \to \uh(\vxi)$ as $k\to+\infty$. Then, from the continuity of $H_{\V}$, for each $\vxi\in \Omega_{\V}$, taking $k\to +\infty$, we have \begin{equation}\label{217} \uh(\vxi)- \lambda H_{\V}(\vxi, \uh(\vxi), \uh) \leq f(\vxi). \end{equation} Step 3. We now prove that $\uh$ is also a supersolution. If not, there exists a point $x^*$ such that \begin{equation}\label{218} \uh(x^*)- \lambda H_{\V}(x^*, \uh(x^*), \uh) < f(x^*). \end{equation} First, if $\uh(x^*)=K_M$, then from the definition of $\uh$, $x^*$ is a maximal point for $\uh$. Then $H_{\V}(x^*, \uh(x^*), \uh)\leq 0$, which contradicts with $f\leq K_M$. Therefore, we know that $\uh(x^*)<K_M$. There exists $\epsilon>0$ such that $\uh(x^*)+\epsilon < K_M$. Define \begin{equation} \vh(\vxi):= \left\{ \begin{array}{ll} \uh(x^*)+\epsilon, & \quad \vxi=x^*;\\ \uh(\vxi), & \quad \text{otherwise}. \end{array} \right. \end{equation} For $\vxi=x^*$, from \eqref{218}, by the continuity of $H_{\V}$, taking $\epsilon$ small enough, we have \begin{equation} \uh(x^*)+\epsilon- \lambda H_{\V}(x^*, \uh(x^*)+\epsilon, \uh) < f(x^*). \end{equation} For $\vxi\neq x^*$, since $\uh\leq \vh$, from \eqref{217} and the monotone condition \eqref{mono}, we have \begin{equation} \vh(\vxi)- \lambda H_{\V}(\vxi, \vh(\vxi), \vh) \leq \uh(\vxi)- \lambda H_{\V}(\vxi, \uh(\vxi), \uh) \leq f(\vxi). \end{equation} Therefore, $\vh\in \mathcal{F}$ is a subsolution that is strictly larger than $\uh$. This is a contradiction. In summary, $\uh$ constructed in \eqref{def_u} is a solution. Uniqueness is directly from the construction in \eqref{def_u}. \end{proof} \subsubsection{Construct barriers to control the polynomial growth of intensity $\tilde{\Phi}_j^\pm(\vxi)$} Below, under a slight assumption of the conservation of total mass, we provide more contraction estimates on the solution to \eqref{dis_exist}. Assume there exists a componentwisely positive mass vector $\vm$ such that \eqref{mb_j} holds. 
Under the mass balance assumption \eqref{mb_j}, for $m_1=\min_{i=1}^N m_i$, we have \begin{equation}\label{322mm} m_1 \|\vx\| \leq m_1 \|\vx\|_1 \leq \vm \cdot \vx \leq \|\vm\| \|\vx\|, \quad \vx\in\mathbb{R}^N_+, \end{equation} where $\|\vx\|=\sqrt{x_1^2+\cdots + x_N^2}$ and $\|\vx\|_1 = \sum_i x_i$. In fact, since $x_i\geq 0$ and $m_i>0$, we have $\|\vx\| \leq \|\vx\|_1$ and $m_1\|\vx\|_1 \leq \vm \cdot \vx.$ \begin{prop}\label{prop:cp} Let $f\in \ell^\8(\Omega_{\V})$ be the right-hand side of the resolvent problem \eqref{dis_exist}. Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j}. Define \begin{equation} f_m(r):= \inf_{|\vxi|\geq r, \vxi\in\omp} f(\vxi),\quad f_M(r):= \sup_{|\vxi|\geq r, \vxi\in\omp} f(\vxi). \end{equation} Define \begin{equation} u_m(\vxi):= \left\{ \begin{array}{cc} f_m\bbs{\frac{ \vm \cdot \vxi }{\|\vm\|}}, \, & \vxi\in\omp,\\ f(\vxi),\, & \vxi \in \omn, \end{array} \right. \quad u_M(\vxi):= \left\{ \begin{array}{cc} f_M\bbs{\frac{ \vm \cdot \vxi }{\|\vm\|}}, \, & \vxi\in\omp,\\ f(\vxi),\, & \vxi \in \omn. \end{array} \right. \end{equation} Then \begin{enumerate}[(i)] \item we have the estimate \begin{equation}\label{fmM} u_m(\vxi) \leq f(\vxi) \leq u_M(\vxi), \quad \forall \vxi\in \Omega_{\V}; \end{equation} \item $u_m(\vxi)$ and $u_M(\vxi)$ are two stationary solutions to the discrete HJE \begin{equation}\label{ssHJE} H_{\V}(\vxi, u_m(\vxi), u_m) = 0, \quad H_{\V}(\vxi, u_M(\vxi), u_M) = 0; \end{equation} \item we have the barrier estimates for the solution $\uh$ to \eqref{dis_exist} \begin{equation} u_m(\vxi) \leq \uh(\vxi) \leq u_M(\vxi); \end{equation} \item if $f$ satisfies $f\equiv \text{const}$ for $|\vxi|>R$ and $\vxi\in\omp$, then $\uh(\vxi)$ satisfies $\uh\equiv \text{const}$ for $|\vxi|> \frac{\|\vm\|R}{m_1}$ and $\vxi\in \omp$. \end{enumerate} \end{prop} \begin{proof} First, $f_{M}(r)$ is decreasing in $r$ while $f_{m}(r)$ is increasing in $r$. Thus, together with $\frac{\vm\cdot\vxi}{\|\vm\|}\leq |\vxi|$ from \eqref{322mm}, we know for $\vxi\in\omp$ \begin{align} f(\vxi) \leq f_{M}( |\vxi| ) \leq f_{M}\bbs{\frac{ \vm \cdot \vxi }{\|\vm\|}},\\ f(\vxi) \geq f_{m}( |\vxi| ) \geq f_{m}\bbs{\frac{ \vm \cdot \vxi }{\|\vm\|}}, \end{align} which gives (i). Second, since $\vm$ satisfies \eqref{mb_j}, we have \begin{equation} u_m(\vxi+\vec{\nu}_j h) = f_m\bbs{\frac{\vm \cdot \vxi + \vec{\nu}_j \cdot \vm h}{\|\vm\|}} = u_m(\vxi), \quad \vxi\in\omp \text{ and } \vxi + \vec{\nu}_j h \in\omp, \end{equation} and similarly for $u_M$. Thus from the definition of $H_{\V}$ in \eqref{def_Hh}, we conclude (ii). Third, from the monotonicity (i) in Lemma \ref{lem_nonexp_d} and \eqref{ssHJE}, we know the comparison $u_m(\vxi)\leq f(\vxi)\leq u_M(\vxi)$ in \eqref{fmM} implies $u_m(\vxi) \leq \uh(\vxi)\leq u_M(\vxi)$. Thus we have conclusion (iii). Finally, if $f\equiv \text{const}$ for $|\vxi|>R$, then $f_m(r)=f_M(r)=\text{const}$ for $r>R$; since $\frac{\vm\cdot\vxi}{\|\vm\|}\geq \frac{m_1|\vxi|}{\|\vm\|}>R$ whenever $|\vxi|>\frac{\|\vm\|R}{m_1}$, the two barriers coincide with the same constant there, so (iv) follows from (iii). \end{proof} \subsection{Existence and uniqueness of semigroup solution to the backward equation}\label{sec:ext_dd} Combining the existence results and barrier estimates in Section \ref{sec:upwind}, we obtain the existence and uniqueness of the solution to the backward Euler scheme \eqref{bEuler}. This, together with the nonexpansive property of the resolvent $J_{\Delta t, h} = (I - \Delta t H_{\V})^{-1}$ in Lemma \ref{lem_nonexp_d}, implies that $-H_{\V}$ is a maximal accretive operator.
Precisely, we first use the one-point Alexandroff compactification $\Omega_{\V}^*=\Omega_{\V} \cup \{\infty\}$ of $\Omega_{\V}$ \cite{Kelley} to define the Banach space \begin{equation} \ellb:= \{(\uh(\vxi))\in \ell^\8(\Omega_{\V});\,\, \uh(\vxi) \to \text{const as }|\vxi|\to \infty \} \end{equation} and a subspace \begin{equation} \ellc:= \{(\uh(\vxi))\in \ell^\8(\Omega_{\V});\,\, \uh(\vxi) = \text{const for } |\vxi|>R \text{ for some }R>0\} . \end{equation} Define \begin{equation}\label{HHn} H_{\V}: D(H_{\V}) \subset \ellb \to \ellb; \qquad ( u(\vxi) )_{\vxi\in\Omega_{\V}} \mapsto ( H_{\V}(\vxi, u(\vxi), u(\cdot)) )_{\vxi\in\Omega_{\V}} \end{equation} The abstract domain of $H_{\V}$ is not straightforward to characterize due to the polynomial growth of $\tilde{\Phi}^{\pm}_j(\vxi)$ in $H_{\V}$. However, for $\uh\in \ellc$, the polynomial growth at far field does not turn on because $\uh=\text{const}$ for large $|\vxi|$. Hence, $\ellc\subset D(H_{\V})$. Since $\overline{\ellc} = \ellb$, we conclude that $H_{\V}$ is densely defined. Thus, we obtain the following theorem: \begin{thm}\label{thm:backwardEC} Let $H_{\V}$ be the discrete Hamiltonian operator defined in \eqref{HHn}. Assume there exists a (componentwise) positive $\vm$ satisfying \eqref{mb_j}. Let $u^0\in \ellb$, then there exists a unique global solution $\uh(t,\vxi)\in C([0,\infty); \ellb)$ to \eqref{upwind}, and $\uh(t,\vxi)$ satisfies \begin{enumerate}[(i)] \item contraction: \begin{equation}\label{contraction} \inf_{\vxi\in\Omega_{\V}} u^0(\vxi) \leq \inf_{\vxi\in\Omega_{\V}} \uh(t,\vxi) \leq \sup_{\vxi\in\Omega_{\V}} \uh(t,\vxi) \leq \sup_{\vxi\in\Omega_{\V}} u^0(\vxi); \end{equation} \item for all $\vxi\in \Omega_{\V}$, $\uh(\cdot,\vxi)\in C^1(0,+\8);$ \item Moreover, if $u^0\in \ellc$ then $\uh(\cdot,\vxi)\in C^1[0,+\8)$ for all $\vxi\in \Omega_h.$ \end{enumerate} \end{thm} \begin{proof} Assume $u^0\in \ellb = \overline{ D(H_{\V}) } $. Then fix $\Delta t$, from the existence and uniqueness of the resolvent problem in Lemma \ref{lem:perron} and the barrier estimate in Proposition \ref{prop:cp}, we know for any $f\in \ellb$, there exists a unique solution $\uh\in \ellb$ to the resolvent problem $$\uh(\vxi) - \Delta t H_{\V}(\vxi, \uh(\vxi), \uh) = f(\vxi).$$ Hence we know $H_{\V}(\uh)\in \ellb$ and thus $\uh \in D(H_h)$. This gives the maximality of $-H_h$. Besides, the accretivity of $-H_h$ is given by the non-expansive property of the resolvent in Lemma \ref{lem_nonexp_d}. Thus $-H_{\V}$ is a maximal accretive operator on Banach space $\ellb$, then by \textsc{Crandall and Liggett} \cite{CrandallLiggett}, there exists a unique global contraction $C^0$-semigroup solution $\uh(t,\vxi)\in C([0,+\8); \ellb)$ to \eqref{upwind} \begin{equation}\label{semi27} \uh(t,\cdot):=\lim_{\Delta t \to 0} (I - \Delta t H_{\V})^{-[\frac{t}{\Delta t}]} u^0=: S_{h,t}u^0. \end{equation} Here $[\frac{t}{\Delta t}]$ is the integer part of $\frac{t}{\Delta t}$. The contraction \eqref{contraction} follows from the nonexpansive property of $J_{\Delta t, h}$ in Lemma \ref{lem_nonexp_d}. For any $\vxi\in\Omega_{\V}$, we have \begin{equation} \uh(t,\vxi) - u (0,\vxi) = \int_0^t H_{\V}(\vxi, \uh(s, \vxi), \uh(s)) \ud s. \end{equation} Therefore we conclude $\uh(\cdot, \vxi)\in C^1(0,+\8)$ is the solution to \eqref{upwind}. 
For $u^0\in \ellc$, $u^0\in D(H_{\V})$, thus the $C^1$ regularity includes $t=0.$ \end{proof} \begin{rem} We remark that since the solution to \eqref{upwind} translated by a constant is still a solution, the nonexpansive property for time-continuous monotone schemes is equivalent to the monotonicity of the solution, as shown by Crandall and Tartar's lemma \cite{Crandall_Tartar_1980}. Precisely, given an initial data $\psih^0$, denote the numerical solution computed from the monotone scheme \eqref{upwind} as \begin{equation} \psih(t, \cdot) = S_t \psih^0. \end{equation} Then we know that $S_t$ is invariant under a translation by a constant, i.e., for any constant $c$ and any function $\uh\in \ell^\8(\Omega_{\V})$, \begin{equation}\label{invariant} S_t(\uh + c) = S_t \uh + c. \end{equation} Moreover, this semigroup satisfies the following properties: \begin{enumerate}[(i)] \item Monotonicity: if $\uh \geq \vh$, then $S_t \uh \geq S_t \vh$; \item $\ell^\8$ nonexpansive: \begin{equation} \|S_t \uh - S_t \vh\|_{\8} \leq \|\uh-\vh\|_{\8}. \end{equation} \end{enumerate} Indeed, first, from the definition of $S_t$, if $\psih$ is a solution to \eqref{upwind}, then $\psih+c$ is also a solution. Thus, \eqref{invariant} holds. Second, similar to \eqref{mp1}, we have (ii). Then, by Crandall and Tartar's lemma \cite{Crandall_Tartar_1980}, under \eqref{invariant}, (i) is equivalent to (ii). \end{rem} \subsection{Existence, comparison principle and exponential tightness of CME for reversible case}\label{sec:db_d} In this section, under the assumption of a positive reversible invariant measure $\pi_{\V}$ satisfying \eqref{master_db_n}, we focus on recovering the existence and exponential tightness of the CME from the backward equation. \subsubsection{CME Reversibility Condition} We first review the CME reversibility condition. Since the jumping process with $Q_{\V}$ only distinguishes the same reaction vector $\vec{\xi}$, we rearrange all $j$ such that $\vec{\nu}_j=\pm\vec{\xi}$ and define the grouped fluxes as \begin{equation}\label{group_flux} \tilde{\Phi}^+_{\xi}(\vx) := \sum_{j: \vec{\nu}_j = \vec{\xi}} \tilde{\Phi}^+_j(\vx) + \sum_{j: \vec{\nu}_j = -\vec{\xi}} \tilde{\Phi}^-_j(\vx), \quad \tilde{\Phi}^-_{\xi}(\vx) := \sum_{j: \vec{\nu}_j = \vec{\xi}} \tilde{\Phi}^-_j(\vx) + \sum_{j: \vec{\nu}_j = -\vec{\xi}} \tilde{\Phi}^+_j(\vx). \end{equation} Then the CME \eqref{rp_eq} can be recast as \begin{equation}\label{rp_eq_nn} \begin{aligned} \frac{\partial}{\partial t} p_{\V}(\vxi, t) = &\frac{1}{h}\sum_{\vec{\xi}, \vxi-\vec{\xi} h\geq 0} \left(\tilde{\Phi}^+_\xi(\vxi-\vec{\xi} h) p_{\V}(\vxi-\vec{\xi} h, t) - \tilde{\Phi}^-_\xi(\vxi) p_{\V}(\vxi, t)\right) \\ &+ \frac{1}{h}\sum_{\vec{\xi}, \vxi+h\vec{\xi} \geq 0} \left(\tilde{\Phi}^-_\xi(\vxi+\vec{\xi} h) p_{\V}(\vxi+\vec{\xi} h, t) - \tilde{\Phi}_\xi^+(\vxi) p_{\V}(\vxi, t)\right), \quad \vxi\in\omp. \end{aligned} \end{equation} Again, $\frac{\partial}{\partial t} p_{\V}(\vxi, t) = 0$ for $\vxi\in\omn$. Therefore, for any $\vxi$ and $\vec{\xi}$, the reversibility condition for the "effective stochastic process" means that the total forward probability steady flux from $\vxi$ to $\vxi+\vec{\xi}h$ equals the total backward one. In other words, for $\vxi\in\omp$ and $\vxi+ \vec{\xi}_{j}h\in\omp$, \begin{equation}\label{master_db_n} \begin{aligned} \tilde{\Phi}^-_{\xi}(\vxi+ \vec{\xi}_{j}h) \pi_{\V}(\vxi+ {\vec{\xi}_{j}}h) = \tilde{\Phi}^+_{\xi}(\vxi)\pi_{\V}(\vxi), \quad \forall \vec{\xi}. 
\end{aligned} \end{equation} We call this as a reversible condition for CME; a.k.a Markov chain detailed balance, c.f. \cite{Joshi}. It is well known that under a chemical version of constrained detail balance condition, the invariant measure $\pi_{\V}$ is given by the product Poisson and the limit of $\psihs(\vxi)=- h \log \pi_{\V}(\vx_{\V})$ gives a convex viscosity solution to stationary HJE; see for example, \cite[Ch7, Thm 3.1 and eq. (1) in Sec 7.4]{Whittle}, \cite[Theorem 4.5]{Anderson_Craciun_Kurtz_2010}, \cite[(7.30)]{QianBook}, \cite[Section 3]{GL22}. Under the reversible condition for CME \eqref{master_db_n}, due to the flux grouping degeneracy brought by the same reaction vector, we no longer have an explicit invariant distribution. In fact, in some non-equilibrium enzyme reactions that satisfy the reversible condition \eqref{master_db_n}, the macroscopic energy landscape $\psi^{ss}$ becomes nonconvex with multiple minima \cite[Section 4]{GL22}. These multiple stable states are known as non-equilibrium steady states (NESS), which were pioneered by Prigogine \cite{prigogine1967introduction}. Multiple steady states, nonzero steady state fluxes, and positive entropy production rates at NESS are three distinguished features of non-equilibrium reactions. In the following, we provide a rigorous proof for the existence and uniqueness of the limiting energy landscape $\psi^{ss}$ under the reversible condition for CME \eqref{master_db_n}. \subsubsection{Existence and Comparison Principle for Reversible CME} In this section, under the assumption that the CME satisfies the reversible condition \eqref{master_db_n}, we prove the existence and comparison principle for the solution to the CME \eqref{rp_eq}. \begin{lem} Assume there exists a positive invariant measure $\pi_{\V}>0$ in $\omp$ for the CME \eqref{rp_eq}, and $\pi_{\V}$ satisfies the reversible condition \eqref{master_db_n}. Then, we have \begin{equation} Q_{\V}^* p_{\V} = \left\{ \begin{array}{cc} \pi_{\V} Q_{\V} \left(\frac{p_{\V}}{\pi_{\V}}\right), & \vxi\in\omp, \\ 0, & \vxi\in \omn. \end{array} \right. \end{equation} \end{lem} \begin{proof} Using \eqref{master_db_n}, the right-hand side of CME \eqref{rp_eq} reads that for $\vxi\in\omp$, \begin{equation} \begin{aligned} \frac{1}{\pi_{\V}}Q^*_{\V} p_{\V} = &\frac{1}{h}\sum_{\vec{\xi}, \vxi- \vec{\xi} h\geq 0} \tilde{\Phi}^+_\xi(\vxi-\vec{\xi} h ) \frac{p_{\V}(\vxi- \vec{\xi} h,t)}{\pi_{\V}(\vxi)} - \tilde{\Phi}^-_\xi(\vxi) \frac{p_{\V}(\vxi,t)}{\pi_{\V}(\vxi)} \\ & + \frac1h\sum_{\vec{\xi}, \vxi+h\vec{\xi} \geq 0} \tilde{\Phi}^-_\xi(\vxi+ \vec{\xi} h ) \frac{p_{\V}(\vxi+ \vec{\xi} h,t)}{\pi_{\V}(\vxi)} - \tilde{\Phi}_\xi^+(\vxi) \frac{p_{\V}(\vxi,t)}{\pi_{\V}(\vxi)}\\ =& \frac{1}{h}\sum_{\vec{\xi}, \vxi- \vec{\xi} h\geq 0} \tilde{\Phi}^-_\xi(\vxi) \bbs{ \frac{p_{\V}(\vxi- \vec{\xi} h,t)}{\pi_{\V}(\vxi- \vec{\xi} h)} - \frac{p_{\V}(\vxi,t)}{\pi_{\V}(\vxi)}} \\ & + \frac1h\sum_{\vec{\xi}, \vxi+h\vec{\xi} \geq 0} \tilde{\Phi}^+_\xi(\vxi)\bbs{ \frac{p_{\V}(\vxi+ \vec{\xi} h,t)}{\pi_{\V}(\vxi+ \vec{\xi} h)} - \frac{p_{\V}(\vxi,t)}{\pi_{\V}(\vxi)}} = Q_{\V} \bbs{\frac{p_{\V}}{\pi_{\V}}}. 
\end{aligned} \end{equation} \end{proof} \begin{defn} Given any $0<\ell<\8$, if there exist $R_\ell$ and $h_0$ such that \begin{equation} \sup_{t\in[0,T]}\sum_{|\vxi|\geq R_{\ell}, \vxi\in\omp} p_{\V}(\vxi,t) \leq e^{-\frac{\ell}{h}}, \quad \forall h\leq h_0, \end{equation} then we say the sequence of processes $(\cv(t))$ is exponentially tight for each $t\in[0,T].$ \end{defn} Here we only consider $\vxi\in\omp$ because, from \eqref{rp_eq}, it is natural to take $p_{\V}(\vxi, 0) = 0$ for $\vxi\in\omn$, which then remains $0$ for all times on $\omn.$ Notice that \begin{equation} \sup_{t\in[0,T]} \bP\bbs{|\cv(t)|\geq R_{\ell}} \leq \bP\bbs{\sup_{t\in[0,T]}|\cv(t)|\geq R_{\ell}}. \end{equation} Hence the exponential tightness of $(\cv(t))$ for each $t\in[0,T]$ defined above is weaker than the usual exponential compact containment condition for $\cv$, i.e., for any $\ell<\8$, there exist $R_\ell$ and $h_0$ such that \begin{equation} \bP(\sup_{t\in[0,T]}|\cv(t)|\geq R_{\ell}) \leq e^{-\frac{\ell}{h}}, \quad \forall h\leq h_0; \end{equation} c.f. \cite[Theorem 4.4]{feng2006large}. Then using Theorem \ref{thm:backwardEC}, we obtain the existence and comparison principle for $\uh = h\log w_{\V}$ and thus for the solution $w_{\V}$ to the backward equation \eqref{backward}. We remark that one can also directly prove the existence and comparison principle for \eqref{backward} using the linear semigroup theory developed by \textsc{Lumer–Phillips} in 1961. Therefore, under the detailed balance assumption on the invariant measure $\pi_{\V}$, we have the following corollary. \begin{cor}\label{cor:tight} Assume there exists a positive reversible invariant measure $\pi_{\V}>0$ in $\omp$ for the CME \eqref{rp_eq} such that \eqref{master_db_n} holds. We have \begin{enumerate}[(i)] \item there exists a unique global solution $p_{\V}(\vxi,t)=\pi_{\V}(\vxi)\uh(\vxi,t)$ in $\omp$, where $\uh$ is the solution to \eqref{upwind} with initial data $\frac{p_{{\V}}^0}{\pi_{\V}}$ in $\omp$; \item if the initial density satisfies $c_1 \pi_{\V} \leq p^0_{\V} \leq c_2 \pi_{\V}$ in $\omp$, then \begin{equation} c_1 \pi_{\V} \leq p_{\V} \leq c_2 \pi_{\V} \quad \text{ in } \omp; \end{equation} \item if $\pi_{\V}$ is exponentially tight, then $p_{\V}$ is also exponentially tight for any $t$. \end{enumerate} \end{cor} Now we state a lemma which also ensures the exponential tightness as long as the initial density has compact support. \begin{lem}\label{tight2} Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j} and the initial distribution $p^0_{\V}$ has compact support. Then $\cv$ is exponentially tight for any $t$. \end{lem} \begin{proof} Under the assumption on $\vm$, \eqref{322mm} holds. Using the mass balance vector in \eqref{mb_j} and multiplying the SDE \eqref{Csde} by $\vm$, we have \begin{equation} \cv(t) \cdot \vm = \cv(0) \cdot \vm. \end{equation} For $\cv(0) \in \omp$, we know $\cv(t) \in \omp$ and obtain \begin{equation} m_1 \|\cv(t)\| \leq \|\vm\|\|\cv(0)\|, \quad \forall t\geq 0, \end{equation} and thus the associated distribution $\bP$ of $\cv$ has compact support for any time $t$. We obtain the exponential tightness of $\cv(t)$ at each time $t$. \end{proof} \begin{rem}\label{rem38} For the chemical reaction detailed balance case \eqref{DB}, we know $\psi_{\V} = - h \log \pi_{\V} \to \KL(\vx||\peq)$. Since $\KL(\vx||\peq)$ is superlinear, we know for $|\vxi|\geq R\gg 1$, $\vxi\in\omp$, \begin{equation} \psi_{\V} \geq \frac{1}{2} \KL(\vx||\peq) \geq c|\vx|.
\end{equation} Thus \begin{equation} \sum_{|\vxi| \geq R} e^{-\frac{\psi_{\V}}{h}} \leq \int_{|\vx|\geq R} e^{-\frac{c|\vx|}{h}} \leq e^{-\frac{c R}{2h}}. \end{equation} For any $\ell<\8$, taking $R=\frac{2\ell}{c}$, we obtain $\pi_{\V}$ has exponential tightness and hence by Corollary \ref{cor:tight}, the chemical process satisfies the exponential tightness for any $t$. \end{rem} \section{Thermodynamic limit of CME and backward equation in the HJE forms}\label{sec4} The comparison principle and nonexpansive properties for the monotone schemes naturally bring up the concept of viscosity solutions to the corresponding continuous HJE as $h\to 0$; see Appendix \ref{app3} for the definition of viscosity solutions. Indeed, the monotone scheme approximation and the vanishing viscosity method are two ways to construct a viscosity solution to HJE proposed by Crandall and Lions in \cite{crandall1983viscosity, crandall1984two}. In this section, we first study the existence of a upper semicontinuous (USC) viscosity solution to the stationary HJE in the Barron-Jensen sense; see Proposition \ref{prop:USC}. This relies on the assumption of the existence of a positive reversible invariant measure $\pi_{\V}$ for CME. We point out that our reversible assumption \eqref{master_db_n} is slightly more general than the commonly used chemical reaction detailed balance condition. Therefore, our invariant measure includes some non-equilibrium enzyme reactions, and the stationary HJE solution can be nonconvex, indicating a nonconvex energy landscape for non-equilibrium reactions \cite{GL22}. Second, we focus on the viscosity solution to the dynamic HJE \eqref{HJE2psi}. Fixing $\Delta t$ in the backward Euler scheme \eqref{bEuler} and taking the limit $h\to 0$, following Barles and Perthame's procedure, the USC envelope of the discrete HJE solution gives a USC subsolution to HJE, which automatically inherits the 'no reaction' boundary condition from \eqref{bEuler}. The proof still relies on the monotonicity and nonexpansive property of the monotone schemes. Thanks to the conservation of mass, we use some mass functions $f(\vm \cdot \vx)$ to construct barriers to control the polynomial growth of coefficients in the Hamiltonian. Then, by the comparison principle, we obtain the viscosity solution to the continuous resolvent problem. Next, taking $\Delta t \to 0$, thanks to Crandall and Liggett's construction of a nonlinear contraction semigroup solution \cite{CrandallLiggett}, we finally obtain a unique viscosity solution to the HJE as a large size limit of the backward equation; see Theorem \ref{thm_vis}. \subsection{Stationary HJE and convergence from the mesoscopic reversible invariant measure}\label{sec:sHJE} The stationary solution to the HJE, $H(\nabla \psi^{ss}(\vx), \vx) = 0$, can be used to compute the energy landscape guiding the chemical reaction, to decompose the RRE to dissipative and conservative parts and to compute the associated energy barrier of transition paths; see \cite{GL22}. However, $\psi^{ss}$ is usually challenging to compute and is non-unique. Indeed, the structure of chemical reaction network can only be revealed through the global energy landscape obtained from the large deviation rate function for the invariant measure. In this section, we assume the existence of an invariant measure for the mesoscopic CME \eqref{rp_eq} satisfying the reversible condition \eqref{master_db_n} and then use it to obtain convergence to the stationary solution of the HJE. 
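To fix ideas, consider the simplest reversible reaction $\emptyset \rightleftharpoons X$ with mass-action fluxes $\Phi^+(x)=k^+$ and $\Phi^-(x)=k^- x$; the following explicit computation is only for illustration. The Hamiltonian \eqref{H} reduces to
\begin{equation*}
H(p,x)=k^+\bbs{e^{p}-1}+k^- x\bbs{e^{-p}-1},
\end{equation*}
and the stationary HJE $H(\partial_x\psi^{ss}(x),x)=0$ admits, besides constants, the solution determined by $e^{\partial_x \psi^{ss}(x)}=\frac{k^- x}{k^+}$, i.e.,
\begin{equation*}
\psi^{ss}(x)=x\log\frac{x}{x^{\mathrm{eq}}}-x+x^{\mathrm{eq}}, \qquad x^{\mathrm{eq}}:=\frac{k^+}{k^-},
\end{equation*}
which is exactly the relative entropy $\KL(x||x^{\mathrm{eq}})$ and coincides with the limit of $-h\log \pi_{\V}$ for the Poisson invariant measure of the corresponding birth-death CME; c.f. Remark \ref{rem38}.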
Under the assumption of the existence of a positive reversible invariant measure, we now prove that this invariant measure converges to a Barron-Jensen's USC viscosity solution. As in \eqref{1.3}, we change the variable in the CME to $\psihs(\vxi)$ such that $$\pi_{\V}(\vxi) = e^{-\psihs(\vxi)/h}.$$ Then the reversible condition for CME \eqref{master_db_n} becomes, for $\vxi\in\omp$ and $\vxi+ \vec{\xi}_{j}h\in\omp$, \begin{equation}\label{db_psi} \tilde{\Phi}^-_{\xi}(\vxi+ \vec{\xi}_{j}h) e^{\frac{\psihs(\vxi) - \psihs(\vxi+\vec{\nu}_j h)}{h}} = \tilde{\Phi}^+_{\xi}(\vxi), \quad \forall \vec{\xi}. \end{equation} The solvability follows from the assumption that $\pi$ exists and satisfies the reversible condition for CME \eqref{master_db_n}. Let $\ph(\vxi), \, \vxi\in \omp$, be the solution to \eqref{db_psi}. Take some continuous function $\psi(\vx)$ in the domain $\bR^N\backslash \bR^N_+$. Define the extension of $\ph$ in $\omn$ as \begin{equation}\label{exC} \ph(\vxi)=\psi(\vxi), \quad \vxi\in\omn. \end{equation} Denote its upper semicontinuous (USC) envelope as \begin{equation}\label{USC} \bar{\psi}(\vx):=\limsup_{h\to 0^+, \vxi \to \vx} \ph(\vxi). \end{equation} Similarly, we use the lower semicontinuous (LSC) envelope to construct a supersolution \begin{equation}\label{LSC} \underline{\psi}(\vx):=\liminf_{h\to 0^+, \vxi \to \vx} \ph(\vxi). \end{equation} The next proposition states that, after taking the large size limit $h\to 0$ from a reversible invariant measure, we can obtain a USC viscosity solution to the stationary HJE in the sense of \textsc{Barron-Jensen} \cite[Definition 3.1]{Barron_Jensen_1990}. \begin{prop}\label{prop:USC} Let $\ph(\vxi)$ be a solution to \eqref{db_psi} with a continuous extension \eqref{exC}. Then, as $h\to 0$, the upper semicontinuous envelope $\bar{\psi}(\vx)$ is a USC viscosity solution to the stationary HJE in the Barron-Jensen's sense. That is, for any smooth test function $\varphi$, if $\bar{\psi}-\varphi$ has a local maximum at $\vx_0$, then \begin{equation} H(\nabla \varphi(\vx_0), \vx_0)=0. \end{equation} \end{prop} \begin{proof} Step 1. For any smooth test function $\varphi$, let $x^*$ be a strict local maximal of $\bar{\psi}-\varphi$. Denote $c_0:=\max_j |\vec{\nu}_j|$. Then for some $r,$ there exists $c>0$ such that \begin{equation} \bar{\psi}(x_0) - \varphi(x_0) \geq \bar{\psi}(x) - \varphi(x) + c|x-x_0|^2, \quad x\in B(x_0, r). \end{equation} Then taking $c$ large enough, as proved by \textsc{Barles, Souganidis} in \cite[Theorem 2.1]{barles1991convergence}, there exists a sequence $\{\ph\}$ with $ x_{*}^{h}$ being the maximum point of $\ph -\varphi$ in $B(x_*^h, c_0 h)$ and satisfies \begin{equation} x_{*}^{h} \to x_0, \quad \ph( x_{*}^{h}) \to \bar{\psi}(x_0). \end{equation} Step 2. Since $\ph$ is the discrete solution to \eqref{db_psi}, \begin{equation}\label{tm_db_1} \tilde{\Phi}^-_{\xi}(\vxi+ \vec{\xi}_{j}h) e^{\frac{\ph(\vxi) - \ph(\vxi+\vec{\xi}_j h)}{h}} = \tilde{\Phi}^+_{\xi}(\vxi), \quad \forall \vec{\xi}. \end{equation} Since for $x\in B(x_*^h, c_0 h)$, we have \begin{equation}\label{tm_test1} \ph(x_{*}^{h}) - \varphi(x_{*}^{h}) \geq \ph(x) - \varphi(x), \end{equation} thus $\ph(x_{*}^{h}) - \ph(x) \geq \varphi(x_{*}^{h}) - \varphi(x).$ Thus replacing $\ph$ by $\varphi$ in \eqref{tm_db_1} yields \begin{equation}\label{l1} \tilde{\Phi}^-_{\xi}(x_*^h+ \vec{\xi}_{j}h) e^{\frac{\varphi(x_*^h) - \varphi(x_*^h+\vec{\xi}_j h)}{h}} \leq \tilde{\Phi}^+_{\xi}(x_*^h), \quad \forall \vec{\xi}. 
\end{equation} On the other hand, $\ph$ also satisfies \begin{equation}\label{tm_db_2} \tilde{\Phi}^+_{\xi}(\vxi- \vec{\xi}_{j}h) e^{\frac{\ph(\vxi) - \ph(\vxi-\vec{\xi}_j h)}{h}} = \tilde{\Phi}^-_{\xi}(\vxi), \quad \forall \vec{\xi} \end{equation} due to \eqref{db_psi}. Hence same as \eqref{tm_test1}, using $\ph(x_{*}^{h}) - \ph(x) \geq \varphi(x_{*}^{h}) - \varphi(x)$, replacing $\ph$ by $\varphi$ in \eqref{tm_db_2} yields \begin{equation}\label{l2} \tilde{\Phi}^+_{\xi}(x_*^h- \vec{\xi}_{j}h) e^{\frac{\varphi(x_*^h) - \varphi(x_*^h-\vec{\xi}_j h)}{h}} \leq \tilde{\Phi}^-_{\xi}(x_*^h), \quad \forall \vec{\xi}. \end{equation} Then talking limit $h\to 0$ in \eqref{l1} and \eqref{l2} implies \begin{equation} \tilde{\Phi}^-_{\xi}(x_0) e^{-\nabla\varphi(x_0)} = \tilde{\Phi}^+_{\xi}(x_0). \end{equation} \end{proof} \begin{rem} This notion of USC viscosity solution was first proposed by \textsc{Barron and Jensen} \cite{Barron_Jensen_1990}. They proved the uniqueness of the solution in this new notion for evolutionary problems. However, the uniqueness of the solution in this new notion for the stationary problem was left as an open problem. On the other hand, if this USC viscosity solution can be proved to be continuous, then by \cite[Theorem 2.3]{Barron_Jensen_1990}, this USC viscosity solution is indeed the viscosity solution in the sense of \textsc{Crandall, Lions} viscosity solution \cite{crandall1983viscosity}. \end{rem} \begin{rem} We point out that the assumption of the reversibility of $\pi_{\V}$ includes both the chemical reaction detailed balance case and some non-equilibrium enzyme reactions \cite{GL22}, where the energy landscape is proved to be the USC viscosity solution to the stationary HJE in the Barron-Jensen's sense (see Proposition \ref{prop:USC}). The resulting energy landscape is nonconvex with multiple steady states, indicating three key features of non-equilibrium reactions: multiple steady states, non-zero flux, and a positive entropy production rate at the non-equilibrium steady states (NESS), as advocated by \textsc{Prigogine} \cite{prigogine1967introduction}. \end{rem} \begin{rem} In the chemical reaction detailed balance case \eqref{DB}, one has a closed formula for the unique invariant measure $\pi_{\V}$ given by the product Poisson distribution \eqref{pos}. However, for the corresponding macroscopic stationary HJE $H(\nabla \psi^{ss}(\vx),\vx)=0$, we still do not have the uniqueness of the viscosity solution since a constant is always a solution. This means a selection principle such as the weak KAM solution needs to be used to identify a unique solution; see \cite{GL23} for the drift-diffusion case. We believe this selection principle shall be consistent with the physically meaningful stationary solution, which is constructed by the limit of the invariant measure $\pi_{\V}$ for the CME as shown in Proposition \ref{prop:USC}. We leave this deep question for future study. \end{rem} \subsection{Convergence of backward equation to viscosity solution of HJE}\label{sec:vis} Define the Banach space \begin{equation}\label{space} \buc:= \{u\in C(\mathbb{R}^N);\,\, u(\vx) \to \text{const} \text{ as } |\vx|\to +\infty\}, \end{equation} where $\mathbb{R}^{N*}=\mathbb{R}^N \cup \{\infty\}$ is the one-point Alexandroff compactification of $\mathbb{R}^N$ \cite{Kelley}. 
In this section, we will follow the framework of \textsc{Crandall, Lions} \cite{crandall1983viscosity} and \textsc{Feng, Kurtz} \cite{feng2006large} to first prove that the Hamiltonian, regarded as an operator on the Banach space $\buc$, is a $m$-accretive operator and then obtain the dynamic solution of HJE as a nonlinear semigroup contraction generated by $H$. To overcome the polynomial growth of the coefficients in the Hamiltonian $H$, we further define a subspace of $\buc$ as \begin{equation} \ccc:= \{u\in \buc; \,\, u\equiv\text{const} \text{ for } |\vx|\geq R \text{ for some } R\}. \end{equation} Recalling the Hamiltonian $H(\nabla u(\vx), \vx)$ in \eqref{H}, we define the operator \begin{equation} H: D(H)\subset \buc \to \buc \quad \text{such that } u(\vx) \mapsto H(\nabla u(\vx), \vx). \end{equation} The maximality of $H$ such that $\text{ran}(I-\Delta t H)=\buc$ requires finding a classical solution to the HJE, which is generally difficult. This is because (i) the solution to the HJE does not have enough regularity; (ii) due to the polynomial growth of the coefficients in the Hamiltonian, we cannot find a solution in commonly used spaces, such as $C_b(\mathbb{R}^N)$. Instead, we will use the notion of viscosity solution to extend the domain of $H$ so that the maximality of $H$ in the sense of viscosity solution is easy to obtain as the limit of the backward Euler approximation when $h\to 0$. We refer to Appendix \ref{app3} for the definition of viscosity solutions. Additionally, we need to construct a $\buc$ solution as a limit of a $\ccc$ viscosity solution obtained above. Precisely, in the first step, following \cite{crandall1983viscosity} and \cite{feng2006large}, we define the viscosity extension of $H$ as \begin{equation}\label{operatorH} \hat{H}_1: D(\hat{H}_1)\subset \ccc \to \ccc \end{equation} satisfying \begin{enumerate}[(i)] \item $u\in D(\hat{H}_1)\subset \ccc$ if and only if there exists $f\in \ccc$ such that the resolvent problem $$u(\vx)-\Delta t H(\nabla u(\vx), \vx) = f(\vx)$$ has a unique viscosity solution $u\in \ccc$. \item For $u\in D(\hat{H}_1)$, $\hat{H}_1(u):= \frac{u-f}{\Delta t}$ for such an $f(\vx)$ in (i). \end{enumerate} In other words, the domain $D(\hat{H}_1)$ is defined as the range of the resolvent operator \begin{equation}\label{con_J} J_{\Delta t}:= (I-\Delta t \hat{H}_1)^{-1}. \end{equation} Since the resolvent problem holding in the classical sense implies that the resolvent problem has a viscosity solution, we can regard $\hat{H}_1$ as an extension of $H$. We will prove later in Proposition \ref{prop_h_limit} the existence and comparison principle for the resolvent problem in $\ccc$ above. We remark that the domain $D(\hat{H}_1)$ characterized in (i) does not depend on the value of $\Delta t<\infty$. Indeed, if $u\in D(\hat{H}_1)$ such that $u_1-H(\nabla u_1(x), x)=f_1$ has a viscosity solution $u_1\in \ccc$ for some $f_1\in \ccc$, then $u_2-\Delta t H(\nabla u_2(x),x) = f_2$ also has a viscosity solution with $f_2= \Delta t f_1+ (1-\Delta t)u_1\in \ccc$. Next, we extend $\ccc$ to the Banach space $\buc$. Notice that $\ccc$ is dense in $\buc$. For any $f\in \buc$, there exists a Cauchy sequence $f_k\in \ccc$ such that $\|f_k - f\|_{\infty} \to 0$. Then, from the comparison principle of the resolvent problem in $\ccc$ \cite[Page 293, Theorem 5.1.3]{bardi1997optimal}, we know that the corresponding viscosity solutions $u_k\in \ccc$ of the resolvent problem with $f_k$ form a Cauchy sequence in $\buc$. Let $u$ be the limit of $u_k$ in $\buc$. 
This limit is independent of the choice of the Cauchy sequence. Thus, we further define the viscosity extension of $H$ as \begin{equation}\label{operatorH_n} \hat{H}: D(\hat{H})\subset \buc \to \buc \end{equation} satisfying \begin{enumerate}[(i)] \item $u\in D(\hat{H})\subset \buc$ if and only if there exists $f\in \buc$ such that the resolvent problem $$u(\vx)-\Delta t H(\nabla u(\vx), \vx) = f(\vx)$$ has a unique viscosity solution $u\in \buc$. \item For $u\in D(\hat{H})$, $\hat{H}(u):= \frac{u-f}{\Delta t}$ for such an $f(\vx)$ in (i). \end{enumerate} Below, we show that the abstract domain $D(\hat{H})$ above indeed includes a large subspace. \begin{lem}\label{lem_inc} We have the following inclusions: \begin{align} C^1_{c}(\mathbb{R}^{N*}) \subset D(H) \subset D(\hat{H}_1) \subset D(\hat{H});\label{inc1}\\ \buc = \overline{D(H)}^{\|\cdot\|_{\infty}} = \overline{D(\hat{H}_1)}^{\|\cdot\|_{\infty}} = \overline{D(\hat{H})}^{\|\cdot\|_{\infty}}\label{inc2}. \end{align} \end{lem} \begin{proof} First, based on the definition of $D(\hat{H})$, it is obvious that $D(H)\subset D(\hat{H})$, because for $u$ such that $H(\nabla u(\vx),\vx)\in \buc$ holds in the classical sense, it implies that there exists $f\in \buc$ such that $(I-\Delta t H)u = f$ holds in the viscosity solution sense. Also, $D(\hat{H}_1)\subset D(\hat{H})$ is obvious because if there exists $f\in \ccc \subset \buc$, then we have a viscosity solution $u\in \ccc\subset \buc$. Second, for $u\in C^1_{c}(\mathbb{R}^{N*})$, $\nabla u=0$ outside a ball $B_r$. Thus, $H(\nabla u(\vx), \vx)$ is bounded for $\vx\in B_r$, while $H(\nabla u(\vx),\vx)=0$ outside $B_r$. Hence, $H(\nabla u(\vx),\vx)\in \ccc$ and $u\in D(H)$. So we conclude \eqref{inc1}. Third, since $\ccc\subset \buc$ and $\buc$ is a closed space, we have $\buc = \overline{C^1_{c}(\mathbb{R}^{N*}) }^{\|\cdot\|_{\infty}} \subset \overline{D(\hat{H})}^{\|\cdot\|_{\infty}} \subset \buc.$ \end{proof} In the next subsection, we construct a viscosity solution to the resolvent problem \begin{equation}\label{con_J} u = J_{\Delta t} f \quad \text{with} \quad J_{\Delta t}= (I-\Delta t \hat{H})^{-1}. \end{equation} This solvability in the viscosity solution sense gives the maximality of $\hat{H}$. Then the nonexpansive property of $J_{\Delta t}$ can be shown after taking limit from the nonexpansive property of the discrete resolvent $J_{\Delta t, h}$ proved in Lemma \ref{lem_nonexp_d}. \subsubsection{Barles-Perthame's procedure of convergence to viscosity solution as $h\to 0$} In this section, we first fix $\Delta t$ and take $h\to 0$ to construct a viscosity solution to the backward Euler problem \begin{equation}\label{resolventP} (I-\Delta t \hat{H}) u^n(\vx) = u^{n-1}(\vx), \quad \vx\in \bR^N. \end{equation} For easy presentation, this reduces to solving the following resolvent equation \begin{equation}\label{resolventP_n} (I-\Delta t \hat{H}) u(\vx) = f(\vx), \quad \vx\in \bR^N. \end{equation} The following proposition follows Barles-Perthame's procedure \cite{Barles_Perthame_1987} to use the upper semicontinuous (USC) envelope of the numerical approximation to construct a subsolution to \eqref{resolventP}. Let $u_{\V}(\vxi), \, \vxi\in \Omega_{\V}$ be the solution to \eqref{dis_exist}, and define \begin{equation}\label{USC} \bar{u}(\vx):=\limsup_{h\to 0^+, \vxi \to \vx} u_{\V}(\vxi). \end{equation} Similarly, we use the lower semicontinuous (LSC) envelope to construct a supersolution \begin{equation}\label{LSC} \underline{u}(\vx):=\liminf_{h\to 0^+, \vxi \to \vx} u_{\V}(\vxi). 
\end{equation} We denote the set of upper semicontinuous functions on $\bR^N$ as $\usc.$ Below, we impose the local Lipschitz continuity condition for ${\Phi}^\pm_j(\vx), \, \vx\in \mathbb{R}^N$ after zero extension. This condition is to guarantee the comparison principle for the viscosity sub/super solution constructed in the following proposition. We remark that this local Lipschitz continuity condition can be weakened, but the comparison principle requires more estimates; see \cite{Deng_Feng_Liu_2011, Kraaij_Mahe_2020}. Similar assumptions on the vanishing rate of fluxes near the boundary are studied for the sample path large deviation principle \cite{Agazzi_Andreis_Patterson_Renger_2021}. \begin{prop}\label{prop_h_limit} Let $u_{\V}(\vxi)$ be a solution to \eqref{dis_exist} with $f\in \ell^{\8}_{c*}$. Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is local Lipschitz continuous. Assume there exists (componently) positive $\vm$ satisfying \eqref{mb_j}. Then as $h\to 0$, the upper semicontinuous envelope $\bar{u}(\vx)$ is a subsolution to \eqref{resolventP}. The lower semicontinuous envelope $\underline{u}(\vx)$ is a supersolution to \eqref{resolventP}. Furthermore, from the comparison principle, \begin{equation}\label{dis_h_limit} u(\vx) = \bar{u} = \underline{u} = \lim_{h\to 0, \vxi \to \vx} u_{\V}(\vxi)\in \ccc. \end{equation} \end{prop} \begin{proof} Step 1. From the barrier estimates in Proposition \ref{prop:cp}, we know $\uh(\vxi)\equiv c_0$ for $|\vxi|> R$ for some $R>0$ and for some constant $c_0$. Thus \begin{equation} \bar{u}(\vx)=\underline{u}(\vx)=c_0 \,\, \text{ for } |\vxi|>R. \end{equation} Step 2. For any test function $\varphi$, let $x_0$ be a strict local maximal of $\bar{u}-\varphi$. Denote $c_0:=\max_j |\vec{\nu}_j|$. Then for some $r,$ there exists $c>0$ such that \begin{equation} \bar{u}(x_0) - \varphi(x_0) \geq \bar{u}(x) - \varphi(x) + c|x-x_0|^2, \quad x\in B(x_0, r). \end{equation} Then taking $c$ large enough, as proved by \textsc{Barles, Souganidis} in \cite[Theorem 2.1]{barles1991convergence}, there exists a sequence $\{u_{\V}\}$ with $ x_{*}^{h}$ being the maximum point of $u_{\V} -\varphi$ in $B(x_*^h, c_0 h)$ and satisfies \begin{equation} x_{*}^{h} \to x_0, \quad u_{\V}( x_{*}^{h}) \to \bar{u}(x_0). \end{equation} Step 3. Since $u_{\V}$ is the discrete solution to \eqref{dis_exist}, \begin{equation} \label{tm_p1} u_{\V}(x_{*}^{h}) - \lambda H_{\V}(x_{*}^{h}, u_{\V}(x_{*}^{h}), u_{\V}) = f(x_{*}^{h}). \end{equation} Since for $x\in B(x_*^h, c_0 h)$, we have \begin{equation} u_{\V}(x_{*}^{h}) - \varphi(x_{*}^{h}) \geq u_{\V}(x) - \varphi(x), \end{equation} so $u_{\V}(x)- u_{\V}(x_{*}^{h}) \leq \varphi(x)- \varphi(x_{*}^{h}) .$ Thus replacing $u_{\V}$ by $\varphi$ in \eqref{tm_p1}, we have \begin{equation} H_{\V}(x_{*}^{h}, u_{\V}(x_{*}^{h}), u_{\V}) \leq H_{\V}(x_{*}^{h}, \varphi(x_{*}^{h}), \varphi). \end{equation} Thus taking limit $h\to 0$, \begin{equation} \bar{u}(x_0) - \lambda H(x_0, \nabla\varphi(x_0)) \leq f(x_0). \end{equation} Step 4, similarly, we can prove the LSC envelope $\underline{u}$ is a supersolution. Then by the comparison principle of \eqref{resolventP} in a ball $B_R$ \cite[Page 293, Theorem 5.1.3]{bardi1997optimal}, we have $\bar{u}\leq \underline{u}$ and thus \begin{equation} u(\vx) = \lim_{h\to 0, \vxi \to \vx} u_{\V}(\vxi) \end{equation} is the unique viscosity solution to \eqref{resolventP}. 
\end{proof} \begin{cor}\label{cor:semi} Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j}. \begin{enumerate}[(i)] \item Let $f\in \ccc$. Then the limit solution $u(\vx)\in \ccc$ obtained in Proposition \ref{prop_h_limit} is the unique viscosity solution to \eqref{resolventP}. \item Let $u_1$ and $u_2$ be viscosity solutions to \eqref{resolventP} in (i) corresponding to $f_1$ and $f_2$. Then $\|u_1- u_2\|_{\8}\leq \|f_1 - f_2\|_{ \8}$. \item Let $f\in \buc$. Then there exists a unique viscosity solution $u\in \buc$ to \eqref{resolventP} which satisfies the nonexpansive property. In other words, $\ran(I-\Delta t \hat{H})=\buc.$ \end{enumerate} \end{cor} \begin{proof} First, (i) is directly from Proposition \ref{prop_h_limit}. Second, taking the limit $h\to 0$ preserves the nonexpansive property of $J_{\Delta t}$. Indeed, assume $u_1$ and $u_2$ are two solutions to the resolvent problem \eqref{resolventP} with right-hand sides $f_1(\vx)$ and $f_2(\vx)$, respectively. Denote $f_1^h(\vxi) := f_1(\vxi)$ and $f_2^h(\vxi) := f_2(\vxi)$. Then the associated discrete solutions $u^h_1(\vxi)$ and $u^h_2(\vxi)$, by Lemma \ref{lem_nonexp_d}, satisfy \begin{equation} \|u^h_1(\cdot)-u^h_2(\cdot)\|_{\ell^\8} \leq \|f_1^h(\cdot)-f_2^h(\cdot)\|_{\ell^\8}. \end{equation} Then taking the limit $h\to 0$ implies \begin{equation} \|u_1-u_2\|_{L^\8} \leq \lim_{h\to 0} \|u^h_1-u^h_2\|_{\ell^\8} \leq \lim_{h\to 0} \|f_1^h - f_2^h\|_{\ell^\8} = \|f_1-f_2\|_{L^\8}. \end{equation} The same limiting argument also preserves the monotonicity property (i) in Lemma \ref{lem_nonexp_d}. Third, notice $\ccc$ is dense in $\buc$. For any $f\in \buc$, there exists a Cauchy sequence $f_k\in \ccc$ such that $\|f_k - f\|_{\8} \to 0$. Then from the last step, we know the corresponding viscosity solutions $u_k\in \ccc$ of the resolvent problem with $f_k$ also form a Cauchy sequence in $\buc$. Take $u=\lim_{k\to +\8}u_k$ as the limit of $u_k$ in $\buc$. The nonexpansive property is preserved in the limit as in the last step. \end{proof} \subsubsection{Convergence to the semigroup solution as $\Delta t \to 0$} Now we follow the framework in \cite{crandall1983viscosity, feng2006large} to obtain a strongly continuous nonlinear semigroup solution $u(\vx,t)$ to the HJE. It is indeed a viscosity solution to the original dynamic HJE \cite{crandall1983viscosity, feng2006large}. Given any $t>0$, denote $[\frac{t}{\Delta t}]$ as the integer part of $\frac{t}{\Delta t}$. We next prove the convergence from the discrete solution of the backward Euler scheme \eqref{bEuler} to the viscosity solution of HJE \eqref{HJE2psi}. \begin{thm}\label{thm_vis} Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j}. Given any $t>0$ and $\Delta t>0$, let $\psih^n(\vxi), \vxi\in\Omega_{\V}, n=1,\cdots,[\frac{t}{\Delta t}]$, be a solution to \eqref{bEuler} with initial data $\psih^0(\vxi)\in \ellc$. Assume for any $\vx\in \mathbb{R}^N$, $u^0(\vx) = \lim_{\vxi\to \vx} \psih^0(\vxi)$ and $u^0\in \ccc$. Let $\hat{H}: D(\hat{H})\subset \buc \to \buc$ be the viscosity extension of $H$ defined in \eqref{operatorH_n}. Then, as $h\to 0$ and $\Delta t \to 0$, we have \begin{equation} u(\vx,t) = \lim_{\Delta t\to 0} \bbs{(I - \Delta t \hat{H})^{-[t/\Delta t]} u^0 } = \lim_{\Delta t\to 0}\bbs{\lim_{h\to 0} (I - \Delta t H_{\V})^{-[t/\Delta t]} \psih^0 }\in \buc.
\end{equation} This is the unique viscosity solution to HJE \eqref{HJE2psi}. \end{thm} \begin{proof} First, take $u^0\in \ccc$, fix $\Delta t$ and denote the solution to resolvent problem \eqref{resolventP_n} as $\psih^1(\vxi)$ satisfying \begin{equation} (I-\Delta t H_{\V}) \psih^1(\vxi) = \psih^0(\vxi)\in \ellc. \end{equation} Then by Proposition \ref{prop_h_limit}, since $u^0(\vx) = \lim_{\vxi\to \vx} \psih^0(\vxi)\in \ccc$, we know as $h\to 0$, $u^1\in \ccc$ is a viscosity solution to the continuous resolvent problem \begin{equation} (I-\Delta t H) u^1(\vxi) = u^0(\vxi). \end{equation} In the resolvent form, we have \begin{equation}\label{maximal} u^1 = (I-\Delta t \hat{H})^{-1}u^0 = \lim_{h\to 0} (I-\Delta H_{\V})^{-1}\psih^0\in \ccc . \end{equation} Repeating $[\frac{t}{\Delta t}]=:n$ times, we conclude that for any $\vx$ and $\vxi\to \vx$ as $h\to 0$, \begin{equation}\label{conver_be} u^n(\vx)=\bbs{(I - \Delta t \hat{H})^{-[t/\Delta t]} u^0}(\vx) = \lim_{h\to 0} \bbs{(I - \Delta t H_{\V})^{-[t/\Delta t]} \psih^0}(\vxi) = \lim_{h\to 0} u^n_{\V}(\vxi)\in \ccc. \end{equation} Second, the non-expansive property and the maximality of $-\hat{H}$ in $\buc$ are proved in Corollary \ref{cor:semi}. Thus we conclude $-\hat{H}$ is m-accretive operator on $\buc$. From inclusion \eqref{inc2} in Lemma \ref{lem_inc}, we know $u^0 \in \buc\subset \overline{D(\hat{H})}^{\|\cdot\|_\8}$, then by Crandall-Liggett's nonlinear semigroup theory \cite{CrandallLiggett}, $\hat{H}$ generates a strongly continuous nonexpansive semigroup \begin{equation} u(\vx,t) = \lim_{\Delta t\to 0} (I - \Delta t \hat{H})^{-[t/\Delta t]} u^0 =:S(t)u^0, \end{equation} which is the solution to HJE \eqref{HJE2psi}. \end{proof} \section{Short time classical solution and error estimate}\label{sec5} In this section, we give error estimates between the discrete solution of monotone scheme \eqref{upwind} (i.e., discrete nonlinear semigroup) and the classical solution of HJE for a short time. Notice the chemical flux has a polynomial growth at the far field so usual methods for error estimate do not work. However, we observe that for special initial data $u^0\in C^1_{c}(\bR^{N*})$, the dynamic classical solution $u$ still belongs to $C^1_{c}(\bR^{N*})$ for a short time. This can be seen from the characteristic method for a short time classical solution. Starting from any initial data $(\vx(0), \vp(0))$, construct the bi-characteristics $\vx(t), \vp(t)$ for HJE \begin{equation}\label{cc} \begin{aligned} \dot{\vec{x}} = \nabla_p H(\vec{p}, \vec{x}), \quad \vec{x}(0) = \vec{x}_0,\\ \dot{\vec{p}} = -\nabla_x H(\vec{p}, \vec{x}), \quad \vec{p}(0) = \nabla \psi_0(\vec{x}_0) \end{aligned} \end{equation} upto some $T>0$ such that the characteristics exist uniquely. Then along characteristics, with $\vec{p}(t) = \nabla_xu( \vec{x}(t), t )$, we know $z(t)=u(\vx(t),t)$ satisfies \begin{equation}\label{tm_zz} \begin{aligned} \dot{z} =\nabla_x u(\vx(t),t) \cdot \dot{\vx} + \pt_t u(\vx(t),t) =\vec{p} \cdot \nabla_p H(\vec{p},\vec{x}) +H(\vec{p}, \vec{x}), \quad z(0) = u_0(\vec{x}_0). \end{aligned} \end{equation} Hence after solving \eqref{cc}, we can solve for $z(t)=u(\vx(t),t).$ Since $u^0\in C^1_{c}(\bR^{N*})$, we know for $|\vx_0|\gg 1$, $\vp_0 = 0$. Then as long as bi-characteristic is unique up to $T$, $\vp \equiv 0$ due to $\nabla_x H(\vec{0}, \vx)=0$. This, together with \eqref{tm_zz} and $H(\vec{0}, \vx)=0$, implies $\dot{z} = 0 $ up to $T$. Thus we conclude \begin{equation}\label{compact} u(\vx,t)= \text{ const \,\, for } |\vx|\gg 1. 
\end{equation} Recall $\Delta t$ is the time step and the size of a container for chemical reactions is $\frac{1}{h}$. Next, based on the above observation, we give the error estimate for initial data $u^0\in C^1_{c}(\bR^{N*}).$ \begin{prop} Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. Let $T>0$ be the maximal time such that bi-characteristic $\vx(t), \vp(t)$ for HJE exists uniquely. Given any initial data $\psih^0$ for the backward equation \eqref{upwind}, assume there exists $u^0$ such that $\|\psih^0 -u^0\|_{\8} \leq Ch$. Then we have \begin{enumerate}[(i)] \item the error estimate between backward Euler scheme solution $\psih^n(\vxi)$ to \eqref{bEuler} and the classical solution $u(\vx, t^n)$ to HJE \eqref{HJE2psi} \begin{equation} \begin{aligned} \|\psih^n - u(\cdot, t^n)\|_{\8} \leq C (T+1) (\Delta t +h). \end{aligned} \end{equation} \item the error estimate between monotone scheme solution $\psih(\vxi, t)$ to \eqref{upwind} and the classical solution $u(\vx, t)$ to HJE \eqref{HJE2psi} \begin{equation} \|\psih(\cdot,t) - u(\cdot, t)\|_{\8} \leq C h, \quad \forall 0<t\leq T. \end{equation} \end{enumerate} \end{prop} \begin{proof} Let $T$ be the maximal time such that bi-characteristic $\vx(t), \vp(t)$ for HJE exists uniquely and thus from the uniqueness of solution to HJE, the viscosity solution obtained in Theorem \ref{thm_vis} is the unique classical solution. Then from analysis for \eqref{compact}, $u$ is constant outside $B_R$. (i) We perform the truncation error estimate for $u(\vx, t)\in C^2$. By Taylor expansion, plugging $u(\vx, t^n)$ to the resolvent problem, we have \begin{equation} u(\vx, t^n) - \Delta t H(\vx, \nabla u(\vx, t^n)) = u(\vx,t^{n-1}) + O(\Delta t^2). \end{equation} Then we further plug $u(\vxi, t^n)$ into the discrete resolvent problem. Since $H=0$ for $\vx\in B_R^c$, the polynomial growth outside $B_R$ does not affect the truncation error and thus \begin{equation} u(\vxi, t^n) - \Delta t H_{\V}(\vxi, \nabla u(\vxi, t^n)) = u(\vxi,t^{n-1}) + O(\Delta t^2+ h\Delta t ). \end{equation} (ii) We perform error estimate using the nonexpansive property for the discrete resolvent in Lemma \ref{lem_nonexp_d}. Recall the numerical solution obtained in backward Euler scheme \begin{equation}\label{bEuler_n} \psih^{n}(\vxi) -\Delta t H_{\V}(\vxi, \psih^{n}(\vxi), \psih^n) = \psih^{n-1}(\vxi). \end{equation} Then by Lemma \ref{lem_nonexp_d}, we have \begin{equation} \begin{aligned} \|\psih^n - u(\cdot, t^n)\|_{\8} &\leq \|\psih^{n-1} - u(\cdot, t^{n-1})\|_{\8} + C\Delta t(\Delta t + h) \\ &\leq \cdots \leq \|\psih^{0} - u(\cdot, 0)\|_{\8} + C n\Delta t(\Delta t +h) \leq C (T+1) (\Delta t +h). \end{aligned} \end{equation} Notice this is linear growth in time. Changing the above resolvent problem to the original CME in the HJE form \eqref{upwind}, we have the first order convergence \begin{equation} \|\psih(\cdot, t) - u(\cdot, t)\|_{\8} \leq C h, \quad \forall 0<t\leq T. \end{equation} \end{proof} Under the observation of the finite time propagation of support \eqref{compact}, the error estimates for monotone schemes to classical solutions of HJE are standard; see abstract theorem in \cite{souganidis1985approximation}. We remark that if the solution to HJE \eqref{HJE2psi} lose the regularity and becomes only locally Lipschitz, then the convergence rate in terms of $h$ can be at most $h^{\frac12}$; c.f. \cite{crandall1984two, waagan2008convergence, souganidis1985approximation}. 
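For readers who wish to experiment numerically, we include a short Python sketch below; it is purely illustrative and not part of the analysis. It treats the one-dimensional birth-death example $\Phi^+(x)=k^+$, $\Phi^-(x)=k^- x$ with reaction vector $\nu=1$, assuming the discrete Hamiltonian takes the upwind form $H_{\V}(\vxi,\psih(\vxi),\psih)=\Phi^+(\vxi)\bbs{e^{(\psih(\vxi+h)-\psih(\vxi))/h}-1}+\Phi^-(\vxi)\bbs{e^{(\psih(\vxi-h)-\psih(\vxi))/h}-1}$, consistent with \eqref{mp1}. The rate constants, truncated grid, and time step are arbitrary illustrative choices, and an explicit time march is used in place of the backward Euler resolvent solve; the run checks the contraction estimate \eqref{contraction} numerically.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: birth-death reaction (emptyset <-> X) with
# mass-action fluxes Phi^+(x) = kp and Phi^-(x) = km*x, reaction vector nu = 1.
kp, km = 2.0, 1.0
h = 0.05                                   # lattice spacing 1/V
x = np.arange(0.0, 8.0 + 1e-12, h)         # truncated grid

def H_h(psi):
    # Upwind discrete Hamiltonian; constant extension at the right boundary,
    # and the backward flux km*x vanishes automatically at x = 0.
    fwd = np.append(psi[1:], psi[-1])      # psi(x + h)
    bwd = np.insert(psi[:-1], 0, psi[0])   # psi(x - h)
    return (kp * (np.exp((fwd - psi) / h) - 1.0)
            + km * x * (np.exp((bwd - psi) / h) - 1.0))

# Explicit time march of d(psi)/dt = H_h(psi), used here instead of the
# backward Euler resolvent solve; dt obeys a CFL-type restriction.
psi = -0.25 * np.minimum((x - 3.0) ** 2, 1.0)  # initial data, constant for |x-3| >= 1
lo, hi = psi.min(), psi.max()
dt, T = 2.0e-4, 0.2
for _ in range(int(round(T / dt))):
    psi = psi + dt * H_h(psi)

# Contraction check: the min/max of the initial data bound the solution.
assert lo - 1e-8 <= psi.min() and psi.max() <= hi + 1e-8
print("psi(T) range:", psi.min(), psi.max())
\end{verbatim}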
\section{Application to the large deviation principle for chemical reactions at a single time} \label{sec6} In previous sections, we proved the convergence from the solution to the monotone scheme \eqref{upwind}, i.e., Varadhan's nonlinear semigroup \eqref{semigroup}, to the global viscosity solution to HJE \eqref{HJE2psi}. In this section, we discuss some straightforward applications of this convergence result, particularly the large deviation principle and the law of large numbers at a single time. First, we will discuss the exponential tightness of the large size process $\cv$ at a single time. Then, using the Lax–Oleinik semigroup representation for the first order HJE, the convergence result in Theorem \ref{thm_vis} implies the convergence from Varadhan's discrete nonlinear semigroup to the continuous Lax–Oleinik semigroup. Therefore, together with the exponential tightness, we obtain the large deviation principle at a single time point in Theorem \ref{thm_dp}. As long as one can prove the exponential tightness in the Skorokhod space $D([0,T];\Omega_{\V})$, by using \cite[Theorem 4.28]{feng2006large}, the large deviation principle for finitely many time points will lead to the sample path large deviation principle for the chemical reaction process $\cv.$ We leave the exponential tightness in the Skorokhod space $D([0,T];\Omega_{\V})$ to future study. The exponential tightness for $(\cv(\cdot))$ in path space was proved in \cite[Lemma 2.1]{Dembo18} under the assumption that there exists a growth estimate for a Lyapunov function. For finite-state continuous-time Markov chains, the exponential tightness for $(\cv(\cdot))$ in path space was proved in \cite{Mielke_Renger_Peletier_2014}. Second, the large deviation principle also implies that the mean-field limit of the CME \eqref{rp_eq} is the RRE \eqref{odex}, but with a more explicit rate for the concentration of measures. The mean-field limit of the CME for chemical reactions was first proved by Kurtz \cite{kurtz1970solutions, Kurtz71}. Recent results in \cite{maas2020modeling} prove the evolutionary $\Gamma$-convergence from the CME to the Liouville equation corresponding to the RRE under the detailed balance assumption. \subsection{Large deviation principle for chemical reactions at a single time} In this section, we prove the large deviation principle for the random variable $\cv(t)$ at each time $t$. Precisely, \begin{defn}\label{def_ld} Let $\cv $ be the large size process defined in \eqref{Csde} and $\cv (0)=\vx_0^{\V}\in\bR^N_+$. Assume $\vx_0^{\V} \to \vx_0 \in \bR^N_+$. Then at each time $t$, we say the random variable $\cv (t)$ satisfies the large deviation principle in $\bR^N_+$ with a good rate function $I(\vy; \,\vx_0,t)$ defined in \eqref{LO} if for any open set $\mathcal{O}\subset \bR^N_+$, it holds \begin{equation} \liminf_{h\to 0} h \log \bP_{\vx_0^{\V}} \{\cv (t) \in \mathcal{O}\} \geq - \inf_{\vy\in \mathcal{O}} I(\vy;\,\vx_0, t) \end{equation} and for any closed set $\mathcal{C}\subset \bR^N_+$, it holds \begin{align}\label{LD11} \limsup_{h \to 0} h \log \bP_{\vx_0^{\V}} \{\cv (t) \in\mathcal{C}\} \leq - \inf_{\vy\in \mathcal{C}} I(\vy;\,\vx_0, t). \end{align} Here, $I(\vy; \vx_0,t)$ being a good rate function means that the sublevel set $\{\vy\in \bR^N_+; \, I(\vy;\vx_0, t)\leq \ell\}$ is compact for any $\ell$. This sublevel-set compactness automatically implies that $I$ is lower semicontinuous.
\end{defn} From the theory of Hamilton-Jacobi equations (or the deterministic optimal control formulation), it is well-known that the viscosity solution to a first order HJE can be expressed via the Lax-Oleinik semigroup, cf. \cite[Theorem 2.22]{Tran21}, \cite{evans2008weak}. \begin{lem}[Lax-Oleinik semigroup] Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. The viscosity solution $u(\vx,t)$ to HJE \eqref{HJE2psi} can be represented as the Lax–Oleinik semigroup, i.e., \begin{equation}\label{LO} u(\vx,t) = \sup_{\vy } \bbs{u_0(\vy) - I(\vy; \vx,t)}, \quad I(\vy; \vx,t) := \inf_{\gamma(0)=\vx, \gamma(t)=\vy} \int_0^t L(\dot{\gamma}(s),\gamma(s)) \ud s, \end{equation} where $L(\vs,\vx)$ is the convex conjugate of $H(\vp,\vx)$. \end{lem} We can directly verify the following semigroup property for the Lax-Oleinik representation \eqref{LO} of the viscosity solution \begin{equation} u(\vx,t) = \sup_{\vy} \bbs{ u(\vy, t-\tau) - I(\vy; \, \vx, \tau) }, \quad 0\leq \tau \leq t. \end{equation} This is also known as the dynamic programming principle. \begin{thm}\label{thm_dp} Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j}. Let $\cv (0)=\vx_0^{\V} \to \vx_0$ in $\bR^N_+$. Then the chemical reaction process $\cv(t)$ at each time $t$ satisfies the large deviation principle with the good rate function $I(\vy;\vx_0,t)$ as in Definition \ref{def_ld}. \end{thm} \begin{proof} First, the exponential tightness of the process $\cv$ is essential for the large deviation analysis. Using Lemma \ref{tight2}, the existence of a positive $\vm$ and the fixed initial position $\cv (0)$ ensure that $\cv$ is exponentially tight for any $t$. Second, from Varadhan's lemma \cite{Varadhan_1966} on the necessary condition for the large deviation principle and Bryc's theorem \cite{Bryc_1990} on the sufficient condition (given the exponential tightness above), we know $\cv(t)$ satisfies the large deviation principle with a good rate function $I(\vy;\,\vx_0,t)$ if and only if for any bounded continuous function $u_0(\vx)$, \begin{equation}\label{var} \lim_{h\to 0} h \log \bE^{\vx}\bbs{ e^{\frac{u_0(\cv(t))}{h}} } =u(\vx,t) = \sup_{\vy} \bbs{u_0(\vy) - I(\vy; \vx,t)}. \end{equation} Then we show \eqref{var} holds. From the WKB reformulation for the backward equation \eqref{wkb_b} $$\uh(\vxv,t) =h \log w_{\V}(\vxv,t) = h \log \bE^{\vxv}\bbs{e^{ \frac{u_0(\cv_t)}{h}}},$$ we show below the pointwise convergence for any $t\in [0,T]$, \begin{equation}\label{con1} \lim_{h\to 0} h \log \bE^{\vx}\bbs{ e^{\frac{u_0(\cv_t)}{h}} } =u(\vx,t). \end{equation} For any $t\in[0,T]$ and $\eps>0$, from Theorem \ref{thm_vis}, we know there exists $\Delta t_0$ such that for any $\Delta t\leq \Delta t_0$ and $n=[t/\Delta t]$, the estimate between the solution $u(\vx,t)$ to HJE and the solution $u^n(\vx)$ to the backward Euler approximation \eqref{resolventP} satisfies $ \|u(\cdot,t) - u^n(\cdot)\|_{L^\8} \leq \frac13\eps. $ For this $u^n(\vx)$, at any fixed $\vx\in \mathbb{R}^N$, from \eqref{conver_be}, the solution $u_{\V}^n(\vxi)$ to the discrete backward Euler scheme \eqref{bEuler} converges to $u^n(\vx)$ pointwise. That is to say, there exists $h_0$ such that for $h\leq h_0$, $ |u^n(\vx)-u^n_{\V}(\vxi)| \leq \frac{1}{3}\eps. $ From the convergence of the backward Euler scheme \eqref{bEuler} to the semigroup solution of the ODE system \eqref{upwind} in \eqref{semi27}, we obtain another $\frac13 \eps$ and hence \eqref{con1} holds.
Meanwhile, $u(\vx,t)$ has the Lax–Oleinik semigroup representation \eqref{LO}. Hence we conclude \eqref{var}. \end{proof} As a consequence of the above large deviation principle, we state below the law of large numbers, which gives the mean-field limit ODE \eqref{odex}. \begin{cor}\label{cor:ode} Assume ${\Phi}^\pm_j(\vx), \, \vx\in \bR^N$, after zero extension, is locally Lipschitz continuous. Assume there exists a componentwise positive $\vm$ satisfying \eqref{mb_j}. Let $\cv (0)=\vx_0^{\V} \to \vx_0$ in $\bR^N_+$. Let $\vx^*(t)$ be the solution to the RRE \eqref{odex}. Then $\vx^*(t)$ is the mean-field limit path in the sense that the weak law of large numbers for the process $\cv(t)$ holds at any time $t$. Moreover, for any $t>0$ and any $\eps>0$, there exists $h_0>0$ such that if $h\leq h_0$ then $$ \bP \{ |\cv(t) - \vx^*(t) | \geq \eps \} \leq e^{ -\frac{\alpha(\eps,t)}{2h} }, $$ where $\alpha(\eps,t) = \inf_{|\vy - \vx^*(t)|\geq \eps} I(\vy; \vx_0,t) >0.$ \end{cor} The proof of the mean-field limit of $\cv(t)$ was given by Kurtz \cite{Kurtz71}. Here we use the large deviation principle at a single time $t$ to illustrate that the large deviation principle implies the law of large numbers via concentration of measure and, moreover, gives the exponential rate of that concentration. This can be seen via a verification argument. First, we know $\vy=\vx^*(t)$, with $\vx^*$ the solution to the RRE \eqref{odex}, if and only if the action cost vanishes, $I(\vy; \vx_0, t)=0$; see Appendix \ref{app2}. Second, we choose any bounded continuous function $u_0$ in \eqref{var} that attains its maximum only at $\vx^*(t)$, i.e., $u_0(\vx^*(t))=\max_{\vx} u_0(\vx).$ Then \eqref{var} becomes $\lim_{h\to 0} h\log \bE(e^{u_0(\cv(t))/h})=u_0(\vx^*(t))$. This is to say, $\cv(t)$ concentrates at $\vx^*(t)$. Thus the concentration rate near $\vx^*(t)$ is directly given by the obtained rate function $I(\vy;\vx_0,t)$. We remark that the sample path large deviation principle for $\{\cv(t)\}$ is more significant for studying transition path theory and its proof is more involved; see \textsc{Agazzi} et al. \cite{Dembo18}. We state the definition of the sample path large deviation principle in Appendix \ref{app_ld} and note that it covers the single-time large deviation principle in Theorem \ref{thm_dp}. \section*{Acknowledgment} The authors would like to thank Jin Feng for valuable discussions. The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences at Cambridge for their support and hospitality during the program ``Frontiers in kinetic theory: connecting microscopic to macroscopic scales - KineCon 2022,'' where work on this paper was partially undertaken. Yuan Gao was supported by NSF under Award DMS-2204288. Jian-Guo Liu was supported by NSF under Award DMS-2106988. \bibliographystyle{alpha} \bibliography{LD_vis_bib} \appendix \section{Terminologies for macroscopic RRE, LDP and viscosity solutions} In this appendix, we review some known terminologies for the large size limiting RRE \eqref{odex}, collect some preliminary results and concepts for the detailed balanced RRE system and the associated Hamiltonian, and give the definitions of viscosity solutions and of the LDP for completeness. \subsection{The macroscopic RRE and equilibrium}\label{app1} The $M\times N$ matrix $\nu:= \bbs{\nu_{ji}}=(\nu_{ji}^- - \nu_{ji}^+), j=1,\cdots,M, \, i=1,\cdots, N$, is called the Wegscheider matrix and $\nu^T$ is referred to as the stoichiometric matrix \cite{mcquarrie1997physical}.
Then mass balance \eqref{mb_j} reads $\nu \vec{m}=\vec{0}$ and the Wegscheider matrix $\nu$ has a nonzero kernel, i.e., $ \dim \bbs{\kk(\nu)} \geq 1. $ Thus we have a direct decomposition for the species space \begin{equation}\label{direct} \bR^N = \ran(\nu^T)\oplus \kk(\nu), \end{equation} where $\ran(\nu^T)$ is the span of the column vectors $\{\vec{\nu}_j\}$ of $\nu^T$. Denote the stoichiometric space $G:=\ran(\nu^T)$. Given an initial state $\vec{X}_0 \in \vq+G$, $\vq\in\kk(\nu)$, the dynamics of both mesoscopic \eqref{Csde} and macroscopic \eqref{odex} states stay in the same space $G_q:=\vq+G$, called a stoichiometric compatibility class. \subsubsection{Detailed balance and characterization of steady states for the macroscopic RRE} Denote a steady state of RRE \eqref{odex} by $\peq$, which satisfies \begin{equation} \sum_{j=1}^M \vec{\nu}_j \bbs{\Phi^+_j(\peq) - \Phi^-_j(\peq)} =0. \end{equation} Under the assumption of detailed balance \eqref{DB} and \eqref{lma}, we have \begin{equation} \log k_j^+ - \log k_j^- = \vec{\nu}_{j} \cdot \log \peq. \end{equation} Then $\vec{\nu}_{j} \cdot \bbs{\log \peq_1-\log \peq_2}=0$ for any two such steady states $\peq_1, \peq_2$, and all the steady states of RRE \eqref{odex} can be characterized as follows. For any $\vq\in \kk(\nu)$, there exists a unique steady state $\peq_*$ in the space $ \{\vx \in \vq+G;\, \vx>0\}$. This uniqueness of steady states in one stoichiometric compatibility class is a well-known result; cf. \cite[Theorem 6A]{Horn72}, \cite[Theorem 3.5]{Kurtz15}. \subsection{Properties of Hamiltonian $H$ and the convex conjugate}\label{app2} Recall the mass conservation law of chemical reactions \eqref{mb_j} and direct decomposition \eqref{direct}. We further observe that, for $H$, we have $ H(\vec{0},\vec{x})\equiv 0$ and hence $\nabla_x H(\vec{0}, \vec{x}) \equiv 0. $ We summarize useful lemmas on the properties of $H$ and $L$. \begin{lem}\label{lem_Hdege} Hamiltonian $H(\vp,\vx)$ on $\bR^N\times \bR^N$ in \eqref{H} is degenerate in the sense that \begin{equation}\label{Hdege} H(\vp, \vx) = H(\vp_1, \vx), \end{equation} where $\vp_1\in \ran(\nu^T)$ is the direct decomposition of $\vp$ such that \begin{equation}\label{decom} \vp = \vp_1 + \vp_2, \quad \vp_1\in \ran(\nu^T),\,\, \vp_2 \in \kk(\nu). \end{equation} \end{lem} \begin{lem}\label{Hconvex} For any $\vx>0$, $H(\vp,\vx)$ defined in \eqref{H} is strictly convex and exponentially coercive w.r.t. $\vp \in G$, i.e., for $\vp\in G$, there exists $A>0$ such that ${H(\vp,\vx)} \geq A e^{\alpha |\vp|}$ for $|\vp|\gg 1$. \end{lem} Indeed, fix any $\vx>0$, $ {H(\vp,\vx)} = \sum_{j=1}^M \Phi_j^+(\vx) \bbs{e^{\vec{\nu}_j \cdot \vp}-1} + \Phi^-_j(\vx) \bbs{e^{-\vec{\nu}_j \cdot \vp}-1}. $ Since for nonzero $\vp\in G$ we have $\vec{\nu}_j \cdot \vp \neq 0$ for some $j$, it follows that $|\vec{\nu}_j \cdot \vp | \geq \alpha |\vp| >0$ for some $j$ and some constant $\alpha>0$. Hence for $|\vp|$ sufficiently large, \begin{equation} {H(\vp,\vx)} \geq \frac12 \min\{\Phi_j^+(\vx), \Phi^-_j(\vx) \} e^{\alpha |\vp|}. \end{equation} Since for $\vx>0$, $H$ defined in \eqref{H} is convex and superlinear w.r.t.\ $\vp$, we compute the convex conjugate of $H$ via the Legendre transform. For any $\vs\in \bR^N$, define \begin{equation}\label{L} L(\vs,\vec{x}) := \sup_{\vec{p}\in \bR^N} \bbs{ \la \vp , \vs \ra - H(\vp, \vx)}. \end{equation} Then we have the following lemma.
\begin{lem}\label{lem:least} For the function $L$ defined in \eqref{L}, we know $L(\vs,\vx)\geq 0$ and \begin{equation}\label{LL} L(\vs, \vx) = \left\{ \begin{array}{cc} \max_{\vp \in G} \{ \vs \cdot \vp - H(\vp, \vx) \}<+\8, & \vs\in G \text{ and } \vx>0,\\ - \min_{\vp\in \bR^N} H(\vp,\vx)<+\8, & \vs = \vec{0},\\ +\8, & \text{ otherwise }; \end{array} \right. \end{equation} moreover, $L$ is strictly convex in $G$ for $\vx>0$. \end{lem} \begin{proof} First, for $\vs= \vec{0}$, $L(\vec{0},\vx)= - \min_{\vp\in \bR^N} H(\vp,\vx)$ due to the coercivity of $H$. For $\vs\neq \vec{0}$, case (i), if some component $x_i\leq 0$, then by the zero extension of $H$ we have $L(\vs,\vx)=+\8$; case (ii), if $\vs \in \ker(\nu)$, then taking the sup over $\vp\in \ker(\nu)$ gives $L(\vs,\vx)=+\8$. For $\vx>0$ and $\vs\in G$, since $H$ is superlinear, the maximum over $\vp\in G$ is attained at $\vp^*(\vs, \vx)$ satisfying \begin{equation}\label{ts1} \vs = \nabla_p H(\vp^*, \vx) = \sum_j \vec{\nu}_j \bbs{\Phi_j^+ e^{\vec{\nu}_j \cdot \vp^*} - \Phi_j^- e^{-\vec{\nu}_j \cdot \vp^*} }. \end{equation} Thus for $\vx>0,$ $ L(\vs, \vx) = \vs \cdot \vp^*(\vs,\vx) - H(\vp^*(\vs,\vx),\vx). $ \end{proof} Assume there is a $C^1$ least action curve $\gamma(t)$ connecting $\gamma(0)=\vx> 0$ and $\gamma(t)=\vy> 0$ and $\gamma(\tau) > 0$ for all $\tau\in[0,t]$. Thus $0\leq I(\vy;\vx,t)<+\8$. Then $\gamma(\tau)$ satisfies the Euler--Lagrange equation \begin{equation}\label{EL} \frac{\ud}{\ud t} \bbs{\frac{\pt L}{\pt \dot{\vx}}(\dot{\vx}(t), \vx(t)) }= \frac{\pt L}{\pt \vx}(\dot{\vx}(t), \vx(t)). \end{equation} \subsection{Concepts of viscosity solution}\label{app3} Here, for the convenience of readers, we adapt the notion of viscosity solutions used in \cite{Barles_Perthame_1987}. \begin{defn} Let $\Omega\subset \mathbb{R}^N$ be an open set. We say $u\in \usc(\Omega)$ (resp. $\lsc(\Omega)$) is a viscosity subsolution (resp. supersolution) to \eqref{resolventP}, if for any $x\in \Omega$ and any test function $\varphi\in C^\8(\mathbb{R}^N)$ such that $u-\varphi$ has a local maximum (resp. minimum) at $x$, it holds \begin{equation} u(x) - H(\nabla\varphi(x), x) \leq 0 \,\, (\text{resp. }\geq 0). \end{equation} We say $u$ is a viscosity solution if it is both a viscosity subsolution and a viscosity supersolution. \end{defn} \subsection{Definition of the sample path large deviation principle}\label{app_ld} Let $D([0,T];\bR^N_+)$ be the Skorokhod space and $AC([0,T];\bR^N)$ be the space of absolutely continuous curves. We state the definition of the sample path large deviation principle in the path space $D([0,T];\bR^N_+).$ \begin{defn} Let $\cv $ be the large number process defined in \eqref{Csde} with generator $Q_{\V}$. Assume $\cv (0)=\vx_0^{\V}\in \bR^N_+$. Then we say the sample path $\cv (t)$, $t\in [0,T]$, satisfies the large deviation principle in $D([0,T];\bR^N_+)$ with the good rate function \begin{equation} A_{\vx_0, T}(\vx(\cdot)) := \left\{ \begin{array}{cc} \int_0^T L(\dot{\vx}(t),\vx(t)) \ud t & \text{ if } \vx(0)=\vx_0,\,\, \vx(\cdot)\in AC([0,T];\bR^N),\\ +\8 & \text{ otherwise} \end{array} \right. \end{equation} if for any open set $\mathcal{E}\subset D([0,T];\bR^N_+)$, it holds \begin{equation} \liminf_{h\to 0} h \log \bP_{\vx_0^{\V}} \{\cv (t) \in \mathcal{E}\} \geq - \inf_{\vx\in \mathcal{E}} A_{\vx_0, T}(\vx(\cdot)), \end{equation} while for any closed set $\mathcal{G}\subset D([0,T];\bR^N_+)$, it holds \begin{align} \limsup_{h\to 0} h \log \bP_{\vx_0^{\V}} \{\cv (t) \in \mathcal{G}\} \leq - \inf_{\vx\in \mathcal{G}} A_{\vx_0, T}(\vx(\cdot)).
\end{align} \label{LD22} \end{defn} Under an additional mild assumption for exponential tightness in $D([0,T];\bR^N_+)$, the sample path large deviation principle for $\cv(t)$ was proved in \cite[Theorem 1.6]{Dembo18}. It covers the single time large deviation principle in Theorem \ref{thm_dp}. Indeed, for any fixed open set $\mathcal{O}\subset \bR^N_+$, we take a special open set $\mathcal{E}\subset D([0,T];\bR^N_+) $ as $\mathcal{E} = \{\vx(\cdot)\in D([0,T];\bR^N_+); \, \vx(t) \in \mathcal{O}\}$. Then $$\inf_{\vx\in \mathcal{E}} A_{\vx_0, T}(\vx(\cdot)) = \inf_{\vy \in \mathcal{O}} \bbs{\inf_{\vx(\cdot)\in D([0,T];\bR^N_+),\, \vx(0)=\vx_0,\, \vx(t) = \vy } \int_0^T L(\dot{\vx}(s),\vx(s)) \ud s} = \inf_{\vy \in \mathcal{O}} I(\vy;\,\vx_0, t). $$ Here in the last equality, the least action from $0$ to $T$ is the combination of the least action from $0$ to $t$ and a zero-cost action from $t$ to $T$. \end{document}
2205.09222v2
http://arxiv.org/abs/2205.09222v2
On the Walsh and Fourier-Hadamard Supports of Boolean Functions From a Quantum Viewpoint
\documentclass[runningheads]{llncs} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{inputenc} \usepackage{amssymb} \usepackage{verbatim} \usepackage{blkarray} \usepackage{multirow} \usepackage{afterpage} \usepackage[hidelinks]{hyperref} \usepackage{framed} \usepackage{amsmath} \usepackage{systeme} \usepackage{xcolor} \usepackage[margin=1.3in]{geometry} \newcommand*\rfrac[2]{{}^{#1}\!/_{#2}} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\bra}[1]{\langle #1|} \newcommand{\braket}[2]{\langle#1|#2\rangle} \newcommand{\vp}{\varphi} \newcommand{\str}{\ket{\psi} = \alpha\ket{0} + \beta\ket{1}} \newcommand{\stk}[2]{#1\ket{0} + #2\ket{1}} \newcommand{\had}{\mathbf{H}} \newcommand{\rk}{\operatorname{rk}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\wt}{\operatorname{wt}} \newcommand{\bs}[1]{\{0,1\}^{#1}} \newcommand{\bsp}{\{0,1\}} \newcommand{\bff}[2]{f:\bs{n}\to\bs{m}} \newcommand{\f}[2]{\mathbb{F}_{#1}^{#2}} \newcommand{\w}[1]{\mathbf{#1}} \DeclareMathOperator{\GPK}{GPK} \begin{document} \title{On the Walsh and Fourier-Hadamard Supports of Boolean Functions From a Quantum Viewpoint} \titlerunning{Boolean Functions from a Quantum Viewpoint} \author{Claude Carlet\inst{1,2} \and Ulises Pastor-Díaz\inst{3} \and José M. Tornero\inst{3}} \authorrunning{C. Carlet et al.} \institute{University of Bergen, Department of Informatics, 5005 Bergen, Norway \and University of Paris 8, Department of Mathematics, 93526 Saint-Denis, France \\\email{[email protected]} \and University of Sevilla, Department of Algebra, 41012 Sevilla, Spain \\ \email{[email protected]}, \email{[email protected]}} \maketitle \begin{abstract} In this paper, we focus on the links between Boolean function theory and quantum computing. In particular, we study the notion of what we call fully-balanced functions and analyse the Fourier--Hadamard and Walsh supports of those functions having this property. We study the Walsh and Fourier supports of other relevant classes of functions, using what we call balancing sets. This leads us to revisit and complete certain classic results and to propose new ones. We complete our study by extending the previous results to pseudo-Boolean functions (in relation to vectorial functions) and giving insight into their applications in the analysis of the possibilities that a certain family of quantum algorithms can offer. \keywords{Boolean functions \and Quantum computing \and Walsh supports.} \end{abstract} \begin{section}{Introduction} The main results of this paper deal with Boolean functions and do not require knowledge of quantum computing and quantum algorithms, but they have been highly motivated by and have important applications in the analysis of the Generalised Phase Kick-Back quantum algorithm (a quantum algorithm inspired by the phase kick-back technique that the second and third authors introduced in \cite{gpk} and which is used to distinguish certain classes of functions). For this reason, we will begin this introduction by giving some notions about this model of computation and its relation to Boolean functions, especially for those readers coming from a Boolean function background. However, for a more general and in-depth explanation, \cite{oyt,kaye} can be consulted. A quantum computer is made up of qubits, which are Hilbert spaces of dimension two.
A state of a qubit is a vector $\ket{\psi} = \alpha \ket{0} + \beta \ket{1},$ where $\alpha,\beta\in\mathbb{C}$, and which satisfies the normalisation condition $|\alpha|^2+|\beta|^2 = 1.$ Here, $\ket{0} = \begin{pmatrix} 1 & 0\end{pmatrix}^t$ and $\ket{1} = \begin{pmatrix} 0 & 1\end{pmatrix}^t$ are the column vectors of the canonical basis, and $\alpha$ and $\beta$ are called the amplitudes of the state. We are using the Dirac or so-called \emph{bra-ket} notation. In this notation, vectors are represented by kets, $\ket{\cdot}$, their duals (in the linear algebra sense) are denoted by bras, $\ket{\cdot}^* = \bra{\cdot}$, and the inner product is denoted by a bracket, $\braket{\cdot}{\cdot}$. Systems of multiple qubits are constructed using the tensor product (more particularly, the Kronecker specialisation) of the individual qubit systems, and a state is said to be entangled if it does not correspond to a pure tensor in said product (that is, it cannot be written as the Kronecker product of vectors in the individual qubit systems). In particular, elements of the canonical basis (called computational basis in this context) are represented using elements of $\f{2}{n}$ inside the ket: $\ket{\w{x}}_n = \bigotimes_{i=1}^n \ket{x_i},$ where $\w{x} = (x_1, x_2,\ldots, x_n) \in \f{2}{n}$. A general state of a system of $n$ qubits can then be written as $\ket{\psi}_n = \sum_{\w{x}\in\f{2}{n}} \alpha_{\w{x}}\ket{\w{x}}, \text{ where } \alpha_{\w{x}}\in\mathbb{C} \text{ for all } \w{x}\in\f{2}{n}, $ satisfying the normalisation condition $\sum_{\w{x}\in\f{2}{n}} |\alpha_{\w{x}}|^2 = 1.$ These qubit systems evolve by means of unitary matrices, and a quantum algorithm consists of the application of a unitary transformation to the first element of the computational basis (that is, the $\ket{\w{0}}_n$ vector) in a system of $n$ qubits and measuring the resulting state. We should recall that $\ket{\w{0}}_n = \otimes_{i=1}^n \ket{0}$ is not the zero vector. Indeed, vectors in the computational basis of an $n$-qubit system will be labeled using the elements of $\f{2}{n}$, and $\ket{\w{0}}_n$ will be just the first of them. The process of measuring a state, $\ket{\psi}_n = \sum_{\w{x}\in\f{2}{n}} \alpha_{\w{x}}\ket{\w{x}}$, makes it collapse into one of the elements of the computational basis, say $\ket{\w{y}}$, with probability $|\alpha_{\w{y}}|^2$, which in turn would give us $\w{y}\in\f{2}{n}$ as a result. The Walsh transform has a deep relation to some quantum algorithms, like the Deutsch--Jozsa algorithm \cite{dyj} or the Bernstein--Vazirani algorithm \cite{byv2}. In fact, in the final superposition of these algorithms before measuring, the amplitude of a given state of the computational basis, $\ket{\w{z}}_n$, is $$ \alpha_{\w{z}} = \frac{1}{2^n}\sum_{\w{x}} (-1)^{f(\w{x})\oplus\w{x}\cdot\w{z}}, $$ where $f:\f{2}{n}\to\f{2}{}$ is the function used as an input in the algorithms. This, in particular, allows us to distinguish balanced functions from constant ones, as the Walsh transform of a balanced function---which coincides with the unnormalised amplitudes of the final superposition---takes value zero when evaluated at the zero point ($\alpha_{\w{0}} = 0$), while for a constant function it takes value $\pm 2^n$ ($|\alpha_{\w{0}}| = 1$). This implies that, in the constant situation, we will always obtain zero as the result of our measurement, while in the balanced situation we will always obtain a value different from zero.
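As a small worked check of this formula, take $n=2$ and the balanced function $f(x_1,x_2)=x_1$ (an illustrative choice, not one singled out later in the paper). Then $$ \alpha_{\w{z}} = \frac{1}{4}\sum_{\w{x}\in\f{2}{2}} (-1)^{x_1\oplus\w{x}\cdot\w{z}}, $$ which gives $\alpha_{(0,0)} = \alpha_{(0,1)} = \alpha_{(1,1)} = 0$ and $\alpha_{(1,0)} = 1$, so the measurement returns the nonzero vector $(1,0)$ with certainty. For the constant function $f=0$ one gets instead $\alpha_{(0,0)}=1$ and all the other amplitudes equal to zero, so the measurement always returns $\w{0}$.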
The relevance of studying these specific classes of functions springs from the fact that they are completely distinguishable using this technique, but also from the implications this has in quantum complexity theory \cite{bb1}. However, different classes of functions can be considered. The technique used in these algorithms, called the phase kick-back, and its generalised version, the Generalised Phase Kick-Back or $\GPK$ \cite{gpk}, make this relation even more relevant. Indeed, if we use a vectorial function $F:\f{2}{n}\to\f{2}{m}$ as an input, then, after choosing $\w{y}\in\f{2}{m}$, which in this context is called a marker, the amplitudes of the states of the canonical basis in the final superposition of the $\GPK$ algorithm are: $$ \alpha_{\w{z}} = \frac{1}{2^n}\sum_{\w{x}} (-1)^{F(\w{x})\cdot\w{y}\oplus\w{x}\cdot\w{z}}. $$ This is, once again, a normalised version of the Walsh transform (in this case of a vectorial function) and thus it seems clear that we can use the properties of the Walsh transform in distinguishing classes of functions using the $\GPK$. What is more, we will see in Section \ref{quant} that, for a particular class of functions, there is a relation between the Walsh transform of a vectorial function, $F$, and the Fourier--Hadamard support of the pseudo-Boolean function determined by the image of $F$. The aforementioned class of functions, which will be defined in Section \ref{FBsect}, will be referred to as the class of fully balanced functions, and the relation that we have pointed out can be used to solve the problem of determining the image of a fully balanced function when it is given as a black box using the $\GPK$. However, we do so in a different article \cite{gpk2}. We will proceed now to study the Fourier--Hadamard and Walsh transforms of vectorial functions. \end{section} \begin{section}{Preliminaries} \begin{subsection}{Notation} Throughout the whole paper we will work with vectors in $\f{2}{n}$, which we will write in bold. In particular, $\w{0}$ will denote the zero vector. A subset of $\f{2}{n}$ will be called a Boolean set. Regarding the different operations, we will use $\oplus$ when dealing with additions modulo $2$, but for additions either in $\mathbb{Z}$ or in $\f{2}{n}$ we will make use of $+$. Furthermore, we denote by $\cdot$ the usual inner product in $\f{2}{n}$, $\w{x}\cdot\w{y} = \bigoplus_{i=1}^{n} x_iy_i, \mbox{ for } \w{x},\w{y}\in\f{2}{n}.$ We will refer to mappings $f:\f{2}{n}\to \f{2}{}$ as Boolean functions, mappings $f:\f{2}{n}\to\mathbb{R}$ as pseudo-Boolean functions (in particular, a Boolean function can be seen as a pseudo-Boolean function) and mappings $F:\f{2}{n}\to\f{2}{m}$ as $(n,m)$-functions. Some important concepts regarding a Boolean function $f:\f{2}{n}\to \f{2}{}$ are the following. Its support is $\supp(f) = \{\w{x}\in\f{2}{n}\mid f(\w{x}) = 1\}$. Its Hamming weight, denoted by $\wt(f)$, will be the number of vectors $\w{x}\in\f{2}{n}$ such that $f(\w{x})= 1$. In other words, $\wt(f) = |\supp(f)|$. Its sign function is the integer-valued function $\chi_f(\w{x}) = (-1)^{f(\w{x})} = 1-2f(\w{x})$. Note also that $f$ can always be expressed uniquely as follows: $$ f(\w{x}) = \bigoplus_{\w{u}\in\f{2}{n}} a_{\w{u}} \w{x}^{\w{u}}, \mbox{ where } \w{x}^{\w{u}} = \displaystyle\prod_{i=1}^{n}x_i^{u_i}. $$ This expression is called the algebraic normal form, or ANF$(f)$. The degree of this polynomial is called the algebraic degree of $f$.
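For a small illustration of these notions, consider the Boolean function $f(x_1,x_2) = x_1x_2\oplus x_1$ on $\f{2}{2}$ (a toy example, not used elsewhere in the paper): its ANF has coefficients $a_{(1,0)} = a_{(1,1)} = 1$ and $a_{\w{u}} = 0$ otherwise, so its algebraic degree is $2$, while $\supp(f) = \{(1,0)\}$, $\wt(f) = 1$ and $\chi_f(1,0)=-1$.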
The derivative of $f$ in the direction of $\w{a}\in\f{2}{n}$ is defined as $D_{\w{a}}(f)(\w{x}) = f(\w{x})\oplus f(\w{x}+\w{a})$. Finally, we will say that a Boolean multiset is a pair $M = (\f{2}{n},m)$ where $m:\f{2}{n} \to \mathbb{Z}_{\geq 0}$ is a pseudo-Boolean function that can take the value $0$. For each $\w{x} \in \f{2}{n}$ we will call $m(\w{x})$ its multiplicity and we will denote by $S_M = \{\w{x} \in \f{2}{n} \mid m(\w{x}) > 0\}$ the support of $M$. However, we will also represent multisets by using set notation but repeating every element of a given multiset as many times as the multiplicity indicates. For a more general overview on multisets \cite{syro} can be consulted. \end{subsection} \begin{subsection}{Fourier--Hadamard and Walsh transforms} We will now give a quick summary on some results for Boolean and pseudo-Boolean functions, but for a more general reference, \cite{bfc} can be consulted. The Fourier--Hadamard transform of a pseudo-Boolean function $f:\f{2}{n}\to\mathbb{R}$ is the function: $$ \widehat{f}(\w{u}) = \sum_{\w{x}\in\f{2}{n}} f(\w{x})(-1)^{\w{x}\cdot\w{u}}. $$ We will call the Fourier--Hadamard support of $f$ the set of $\w{u}\in\f{2}{n}$ such that $\widehat{f}(\w{u}) \neq 0$ and its Fourier--Hadamard spectrum the multiset of all values $\widehat{f}(\w{u})$. It is important to underline the relation between the Fourier--Hadamard transform and linear functions. If we denote $l_{\w{u}}(\w{x}) = \w{u}\cdot\w{x}$ for $\w{u}\neq \w{0}$, we have: \begin{align*} \widehat{f}(\w{u}) & = \sum_{\w{x}\in\f{2}{n}} f(\w{x})(1-2\w{x}\cdot\w{u}) = \wt(f) - 2\wt(f\cdot l_{\w{u}}) \\ & = \wt(f\oplus l_{\w{u}}) - \wt(l_{\w{u}}) = \wt(f\oplus l_{\w{u}}) - 2^{n-1}, \end{align*} while $\widehat{f}(\w{0}) = \wt(f)$. Given a Boolean function $f$, we can also calculate its Walsh transform: $$ W_f(\w{u}) = \sum_{\w{x}\in\f{2}{n}} (-1)^{f(\w{x})\oplus\w{x}\cdot\w{u}}. $$ We will analogously call the Walsh support of $f$ the set of $\w{u}\in\f{2}{n}$ such that $W_f(\w{u})\neq 0$ and its Walsh spectrum the multiset of all values $W_f(\w{u})$. It is clear that the Walsh transform of a Boolean function $f$ is the Fourier--Hadamard transform of its sign function, which implies by the linearity of the Fourier--Hadamard transform: $W_f = 2^n\delta_{\w{0}} - 2\widehat{f},$ where $\delta_{\w{0}}$ is the indicator of $\{\w{0}\}$ and the Boolean function $f$ is viewed here as a pseudo-Boolean function. In particular, if $\w{u}\neq \w{0}$, then we have $W_f(\w{u}) = -2\widehat{f}(\w{u})$, and thus any $\w{u}\neq \w{0}$ is in the Fourier--Hadamard support if and only if it is in the Walsh support. Regarding the zero vector we need to analyse two particular situations. When $f$ is the zero function, i.e., $f(\w{x}) = \w{0}$ for all $\w{x}\in\f{2}{n}$, then $\widehat{f}(\w{0}) = 0$ but $W_f(\w{0}) = 2^n$. On the other hand, if $f$ is a balanced function, that is, $\wt(f) = 2^{n-1}$, then $\widehat{f}(\w{0}) = 2^{n-1}$ but $W_f(\w{0}) = 0$. In any other situation the Fourier--Hadamard and Walsh supports will be the same. Some important properties of the Fourier--Hadamard transform---which result in similar properties for the Walsh transform---are the inverse Fourier--Hadamard transform formula: $\widehat{\widehat{f\,}}\!\! 
= 2^n f,$ and Parseval's relation: $$ \sum_{\w{u}\in\f{2}{n}} \widehat{f}^{\,\,2}(\w{u}) = 2^{n}\sum_{\w{x}\in\f{2}{n}} f^2(\w{x}), $$ which for Boolean functions turns into $$ \sum_{\w{u}\in\f{2}{n}} \widehat{f}^{\,\,2}(\w{u}) = 2^{n} |\supp(f)|, $$ and for the Walsh transform becomes $$ \sum_{\w{u}\in\f{2}{n}} W_f^2(\w{u}) = 2^{2n}. $$ The Walsh transform of a vectorial function $F:\f{2}{n}\to\f{2}{m}$ is the function $W_F:\f{2}{n}\times \f{2}{m}\to \mathbb{Z}$ defined as follows: $$ W_F(\w{u},\w{v}) = \sum_{\w{x}\in\f{2}{n}} (-1)^{\w{v}\cdot F(\w{x})\oplus\w{u}\cdot\w{x}}, $$ where $\w{u}\in\f{2}{n}$ and $\w{v}\in\f{2}{m}.$ \end{subsection} \begin{subsection}{Reed--Muller codes} We will devote Section \ref{FBsect} to analysing the concept of fully balanced sets and its relation with minimum weight codewords in Reed--Muller codes. For a deeper analysis on these codes \cite{mac} or \cite{bfc} can be consulted. Given a Boolean function $f:\f{2}{m}\to \f{2}{}$, we can identify it with a vector of length $2^m$ by fixing an ordering---we will use the lexicographical ordering---in $\f{2}{m}$. Said vector, $\w{f}$, is then the vector of evaluations of $f$ for the chosen order. The Reed--Muller code of order $r$ and length $n = 2^m$, noted $\mathcal{R}(r,m)$, is the set of vectors $\w{f}$ where $f:\f{2}{m}\to\f{2}{}$ is a Boolean function of degree at most $r$. Reed--Muller codes are linear codes with minimum distance---i.e., minimum weight among its non-zero vectors--- $2^{m-r}$ and dimension $1+m+\binom{m}{2}+\ldots+\binom{m}{r}$. \end{subsection} \end{section} \begin{section}{On balanced sets and Fourier--Hadamard supports}\label{basicresults} In this section, we introduce very simple concepts, on which we will build more complex and interesting ones. The fact that a hyperplane $H_{\w{x}} = \{\w{y}\in \f{2}{n} \mid \w{x}\cdot\w{y} = 0\}$ for $\w{x}$ nonzero has $2^{n-1}$ elements can be stated in a way reminiscent of the Fourier--Hadamard transform. Given a vector space $E$ in $\f{2}{n}$, and $l_{\w{y}}$ a nontrivial linear form in $E$, then $$ \displaystyle\sum_{\w{x}\in E} (-1)^{l_{\w{y}}(\w{x})} = 0. $$ We will say that $\w{y}$---the vector which determines $l_{\w{y}}$---balances $\f{2}{n}$. \begin{definition}{($\mathbf{y}$-Balanced sets.)}\label{balset} Let $\mathbf{y}\in\mathbb{F}_2^n$ be a nonzero binary vector, we say that a nonempty set $S\subset\mathbb{F}_2^n$ is balanced with respect to $\mathbf{y}$ or $\mathbf{y}$-balanced if: $$ \big|( S\cap H_{\mathbf{y}} )\big| = \frac{|S|}{2}. $$ That is, $S$ is halved by $\mathbf{y}$ with respect to the inner product. \end{definition} We will also say that $\w{y}$ balances $S$. If the size of $S$ is odd, then it is clear that no vector balances $S$. \begin{remark} There are a few equivalent ways to define this notion. Let $\w{1}_S$ be the indicator vector of the set $S$ and $\w{l}_{\w{y}}$ the vector associated with the linear form $l_{\w{y}}$, then stating that $\w{y}$ balances $S$ is equivalent to saying that $\wt(\w{1}_S \w{l}_{\w{y}}) = |S|/2,$ as $\w{1}_S \w{l}_{\w{y}}$---the component-wise product of the two vectors, also known as the Hadamard product---is the indicator vector of $S\cap (\f{2}{n}\setminus H_{\w{y}})$. 
In another equivalent way, $\w{y} \neq \w{0}$ balances $S$ if (and only if) $1_S\oplus l_{\w{y}}$ is a balanced function---i.e., a function of weight $2^{n-1}$---where $1_S$ is the indicator function of $S$ and $l_{\w{y}}$ the linear function determined by $\w{y}$, since $\wt(1_{S}\oplus l_{\w{y}}) = \wt(1_S) + \wt(l_{\w{y}})-2\wt(1_S\ l_{\w{y}})$ and $\wt(l_{\w{y}}) = 2^{n-1}$. A third way of defining this notion, and the one we will mostly focus on, is by means of the Fourier--Hadamard transform. The nonzero vector $\w{y}\in\f{2}{n}$ balances a nonempty set $S$ if and only if: $$ \widehat{1}_S(\w{y}) = \sum_{\w{x}\in \f{2}{n}} 1_S(\w{x})(-1)^{\w{x}\cdot\w{y}} = \sum_{\mathbf{x}\in S} (-1)^{\mathbf{x}\cdot\mathbf{y}} = 0, $$ or equivalently $W_{1_S}(\w{y}) = 0.$ \end{remark} In the same manner, we can define the idea of being $\mathbf{y}$-constant. \begin{definition}{($\mathbf{y}$-Constant sets.)}\label{constset} Let $\mathbf{y}\in\mathbb{F}_2^n\setminus \{\w{0}\}$ be a nonzero binary vector, we say that a nonempty set $S\subset\mathbb{F}_2^n$ is constant with respect to $\mathbf{y}$ or $\mathbf{y}$-constant if either $$ S \subset H_\mathbf{y} \quad \text{ or }\quad S \cap H_\mathbf{y} = \varnothing, $$ that is, the product $\w{x}\cdot\w{y}$ is constant for all $\w{x}\in S$. We will say that every nonempty set $S$ is $\w{0}$-constant. \end{definition} \begin{remark} We can also express this idea by means of the Fourier--Hadamard transform: given a nonempty set $S$ and its indicator function $1_S$, it is $\mathbf{y}$-constant if and only if $$ \left|\widehat{1}_S(\w{y}) \right| = \left|\sum_{\mathbf{x}\in S} (-1)^{\mathbf{x}\cdot\mathbf{y}}\right| = |S|. $$ \end{remark} Analogously, given a set $B\subset\f{2}{n}$, we will say that $S$ is $B$-balanced ($B$-constant) if it is $\w{y}$-balanced ($\w{y}$-constant) for every $\w{y}\in B$. It is important to note that the definition of $B$-constant does not require that the product $\mathbf{x} \cdot \mathbf{y}$ be the same for every pair $\mathbf{x} \in S$, $\mathbf{y} \in B$, but rather constant for $\w{x}\in S$ once a $\w{y}\in B$ is fixed. \begin{definition}{(Balancing set and constant set.)} Let $S\subset\mathbb{F}_2^n$ be a nonempty set, then we call its balancing set, denoted by $B(S)$, the set of all binary vectors $\mathbf{y}\in\mathbb{F}_2^n$ such that $S$ is $\mathbf{y}$-balanced and its constant set, denoted by $C(S)$, the set of all binary vectors $\mathbf{y}\in\mathbb{F}_2^n$ such that $S$ is $\mathbf{y}$-constant. \end{definition} In the next remark we will explain the interest of defining the balancing and constant sets in this manner. \begin{remark} Given a nonzero Boolean function $f:\f{2}{n}\to\f{2}{}$ with support $S = \supp(f)$, we have that its Fourier--Hadamard support is $\supp(\widehat{f}) = \f{2}{n}\setminus B(S)$. Following the relation $W_f = 2^n\delta_{\w{0}}-2\widehat{f}$, if $f$ is not a balanced function, we have that its Walsh support is $\supp(W_f) = \supp(\widehat{f}) = \f{2}{n}\setminus B(S)$. However, if $f$ is balanced, we have $\supp(W_f) = \supp(\widehat{f})\setminus \{\w{0}\} = \f{2}{n}\setminus (B(S)\cup \{\w{0}\})$. \end{remark} We will begin by considering the constant set problem. The following result follows from the Fourier--Hadamard transform formula. \begin{lemma} Let $S\subset \mathbb{F}_2^n$ and $\mathbf{s} + S = \{\mathbf{s} + \mathbf{x} \mid \mathbf{x} \in S\}$ be the translation of $S$ by $\mathbf{s}\in\mathbb{F}_2^n$, then $C(S) = C(\mathbf{s}+S)$. 
\end{lemma} Indeed, it is clear that $\widehat{1_{\w{s}+S}}(\w{y}) = (-1)^{\w{s}\cdot\w{y}} \widehat{1_S}(\w{y}).$ Using this result we can simply consider that $\mathbf{0} \in S$ without loss of generality, which simplifies things, as then, if $\mathbf{y}\in\mathbb{F}_2^n$ makes $S$ constant, it actually makes it $0$ and we can find $C(S)$ by solving a system of linear equations. In this situation, since $\w{y}$ belongs to $C(S)$ if and only if it is orthogonal to every element $\w{x}\in S$, we have: \begin{lemma}{(Constant set.)} Let $S$ be a set such that $\mathbf{0}\in S$, then $$ C(S) = \bigcap_{\mathbf{x} \in S} H_\mathbf{x}, $$ which is the linear subspace $\langle S \rangle^{\perp}$ of dimension $n-\rk (S)$, where $\rk(S)$ (the rank of $S$) is the dimension of $\langle S \rangle$---the linear space spanned by $S$---and $\langle S \rangle^{\perp}$ is the orthogonal space of $S$ with respect to the $\cdot$ product. \end{lemma} Note that, still assuming that $\w{0}\in S$, we have then $C(C(S)) = \langle S \rangle$. If $\w{0}\notin S$, then it suffices to consider $S' = \w{s} + S$ for some $\w{s}\in S$. Taking now a look into the balancing set, the following properties are straightforward. \begin{lemma}{(Properties.)} Let $S$ be a nonempty Boolean set, $B(S)$ be as in the previous definition and let $S_1,S_2 \subset \mathbb{F}_2^n$ both $B$-balanced and nonempty, then: \begin{itemize} \item[(i)] For all $A\subset B(S)$, $S$ is $A$-balanced. \item[(ii)] If $S_1$ and $S_2$ are such that $S_1\cap S_2 = \varnothing$, then $S_1\cup S_2$ is $B$-balanced. \item[(iii)] If $S_1\subset S_2$, then $S_2 \setminus S_1$ is $B$-balanced. \item[(iv)] $S_1\cap S_2$ is $B$-balanced if and only if $S_1\cup S_2$ is $B$-balanced. \item[(v)] $B(S) = B(\mathbb{F}_2^n \setminus S)$ for all $S\subset\mathbb{F}_2^n$. \item[(vi)] $B(\mathbb{F}_2^n) = \mathbb{F}_2^n\setminus \{\mathbf{0}\}.$ \item[(vii)] $B(S) = B(\mathbf{s} + S)$ for all $\mathbf{s}\in\mathbb{F}_2^n$. (Invariance by translation). \end{itemize} \end{lemma} We also have the following. \begin{lemma}\label{lemaprop} Let $S$ be a nonempty Boolean set: \begin{itemize} \item[(viii)] Let $\mathbf{s} \in\f{2}{n}\setminus \{\w{0}\}$ such that $S = \mathbf{s} + S$ and $\mathbf{y} \in \mathbb{F}_2^n$ with $\mathbf{y} \cdot \mathbf{s} = 1$, then $\mathbf{y} \in B(S)$. \item[(ix)] Let $S$ be $B$-balanced, then $\langle S\rangle$ is $B$-balanced. \item[(x)] Let $S_1$ be $B_1$-balanced and $S_2$ be $B_2$-balanced, then $\langle S_1, S_2\rangle$---the vector space generated by $S_1\cup S_2$---is $(B_1 \cup B_2)$-balanced. \end{itemize} \end{lemma} \begin{proof} Property $(viii)$ follows from the fact that $\widehat{1_{\w{s}+S}}(\w{y}) = (-1)^{\w{s}\cdot\w{y}}\widehat{1_{S}}.$ For property $(ix)$, given $\w{y}\in B$, we know that there is an $\w{x}\in S$ such that $\w{y}\cdot\w{x}=1$, and by property $(viii)$ we get the result, as $\w{x} + \langle S\rangle = \langle S\rangle.$ Property $(x)$ follows from the same idea, as for any $i=1,2$; $\w{y}\in B_i$ implies that there is an $\w{x}\in S_i\subset\langle S_1\cup S_2\rangle$ such that $\w{y}\cdot\w{x}=1$.\hfill $\square$ \end{proof} \begin{remark}\label{structure} Given a nonzero Boolean function $f:\f{2}{n}\to\f{2}{}$ and let $S= \supp(f)$. Then, the constant set and the balancing set of $S$, and incidentally also the Fourier-Hadamard support of $f$, have the following structure. 
\begin{enumerate} \item Both $C(S)$ and $\widehat{f}^{-1}(|S|)$ are vector spaces, with $C(S) = \widehat{f}^{-1}(|S|)$ if $\w{0}\in S$. \item If $\w{0}\in S$, then each of the sets $\widehat{f}^{-1}(z)$ with $z\in\mathbb{Z}$ is either empty or a union of disjoint cosets of $C(S)$; this applies in particular to $B(S) = \widehat{f}^{-1}(0)$. If $r$ is the rank of $S$, then the dimension of $C(S)$ will be $n-r$ and thus we will have $2^r$ of these cosets. \item If $\w{0}\notin S$, taking $g(\w{x}) = f(\w{x}+\w{s})$ for some $\w{s}\in S$ we have that $\widehat{g}(\w{u}) = (-1)^{\w{s}\cdot\w{u}}\widehat{f}(\w{u}).$ This implies that now the sets $\widehat{f}^{-1}(z)\cup\widehat{f}^{-1}(-z)$ are the ones which are either empty or a union of cosets of $C(S)$, but the situation of $B(S)$ does not change. \end{enumerate} \end{remark} Taking into consideration this remark, it makes sense to define the following concept. \begin{definition}{(Balancing index.)}\label{balindex} Let $\varnothing \neq S\subset\mathbb{F}_2^n$, then we define its balancing index to be: $$ b(S) = \frac{|B(S)|}{|C(S)|}. $$ This index is always an integer, and it corresponds to the number of disjoint cosets of $C(S)$ that make up $B(S)$, as we have seen in Remark \ref{structure}. \end{definition} The balancing index is clearly invariant by isomorphism, but this can be taken even further. \begin{proposition}\label{isonoiso} Let $S\subset \f{2}{n}$ be a Boolean set and $\vp: \langle S\rangle\to \mathbb{F}_2^m$ a monomorphism (i.e., an injective linear function). Then $b(S) = b(\vp(S))$. \end{proposition} \begin{proof} We have seen that both the constant and the balancing set are invariant by translation, so we will suppose that $S$ and $\vp(S)$ include the $\w{0}$ vector and that $\vp(\w{0}) = \w{0}$ without loss of generality. Let $r$ be the rank of $S$---and also of $\vp(S)$---and let $\w{s}_1,\ldots,\w{s}_r$ be independent elements of $S$, then it is clear that they form a basis of $\langle S \rangle$, and that the vectors $\vp(\w{s}_1),\ldots,\vp(\w{s}_r)$ form a basis of $\langle \vp(S)\rangle$. We also know that both the constant set and the balancing set can be computed as the sets of solutions to certain families of systems of equations of the form $\{\w{s}_i\cdot\w{x} = b_i\mid i = 1,\ldots,r\}$, where $\w{x}$ is the vector of unknowns. The vector whose $i$-th component is $b_i$ will be denoted by $\w{b}\in\f{2}{r}$. If, for a certain $\w{b}\in\f{2}{r}$, the solutions to the previous system balance $S$, then the solutions to the system $\{\vp(\w{s}_i)\cdot\w{x} = b_i\mid i = 1,\ldots,r\}$ will also balance $\vp(S)$, and the same will happen in the opposite direction. Indeed, if we take any $\w{s}\in S$ such that $\w{s} = \sum_{i=1}^r \alpha_i\w{s}_i$ for some $\alpha_i \in\f{2}{}$, we have that $\vp(\w{s}) = \sum_{i=1}^r \alpha_i\vp(\w{s}_i)$. Let $\w{z}\in\f{2}{n}$ be a solution to the system $\{\w{s}_i\cdot\w{x} = b_i\mid i = 1,\ldots,r\}$ and $\w{z}'\in\f{2}{m}$ a solution to $\{\vp(\w{s}_i)\cdot\w{x} = b_i\mid i = 1,\ldots,r\}$, then $$ \w{s}\cdot\w{z} = \left(\sum_{i=1}^r \alpha_i\w{s}_i\right)\cdot\w{z} = \bigoplus_{i=1}^r \alpha_i b_i = \left(\sum_{i=1}^r \alpha_i\vp(\w{s}_i)\right)\cdot\w{z}' = \vp(\w{s})\cdot\w{z}'. $$ We saw in Remark \ref{structure} that the balancing sets of $S$ and $\vp(S)$ were composed of cosets of $C(S)$ and of $C(\vp(S))$, respectively. Although the dimension of $C(\vp(S))$ does not have to be the same as that of $C(S)$, the number of cosets that balance each of these sets is the same.
To see this we just need to explicitly construct the bijection between the cosets of $C(S)$ and those of $C(\vp(S))$ that we have hinted at before. Each of these cosets can be assigned to the vector $\w{b}\in\f{2}{r}$ of independent terms in the system of equations whose solution is said coset. We just need to identify cosets of $C(S)$ and of $C(\vp(S))$ which are assigned to the same $\w{b}$. \hfill $\square$ \end{proof} This allows us, by taking $r = m$, to consider only the cases where $S$ consists of the zero vector, the $n$ vectors of the canonical basis of $\f{2}{n}$ and $|S|-n-1$ other linear combinations of these vectors when studying certain properties. Indeed, let $S\subset\f{2}{n}$ be a Boolean set with rank $r\leq n$ and such that $\w{0}\in S$, and let $\w{s}_1,\ldots,\w{s}_r$ be independent elements of $S$. Then, we can consider the linear map $\vp:\langle S\rangle\to\f{2}{r}$ determined by $\vp(\w{s}_i) = \w{e}_i$ for $i=1,\ldots,r$, where $\w{e}_i$ are the elements of the standard basis in $\f{2}{r}$. As this map satisfies the conditions presented above, we know that $b(S) = b(\vp(S))$. \end{section} \begin{section}{Fully balanced sets}\label{FBsect} In this section, we will define the notion of fully balanced sets and take a look into its relation with minimum weight codewords in a Reed--Muller code. We will begin by recalling the following result that we can find, for instance, in \cite{bfc}. \begin{proposition}\label{hadvecprop} Let $E$ be a vector subspace of $\mathbb{F}_2^n$ and $1_E$ its indicator function, then: $$ \widehat{1}_E = |E|1_{E^{\perp}}. $$ \end{proposition} In particular, this implies that $C(E) = E^{\perp}$ and $B(E) = \f{2}{n} \setminus E^{\perp}$. Another interesting remark is that if $r = \dim(E)$, then $b(E) = 2^r-1$. Moreover, we will always have $B(E)\cup C(E) = \mathbb{F}_2^n$, which is the property we will use for our following definition. \begin{definition}{(Fully balanced.)} We say that a nonempty set $S\subset\mathbb{F}_2^n$ is fully balanced if $B(S)\cup C(S) = \mathbb{F}_2^n$. \end{definition} \begin{remark} Of course, this property is equivalent to saying that $\widehat{1_S}$ is valued in $\{-|S|,0,|S|\}$, but there are actually many other ways to define it. For instance, if $r= \rk(S)$, then $S$ is fully balanced if $b(S) = 2^r-1$, as $|C(S)| = 2^{n-r}$ and $|B(S)| = 2^n -2^{n-r}$ due to Definition \ref{balindex}. However, the more intuitive one is that a nonempty $S$ is fully balanced if for every $\w{y}\in\f{2}{n}\setminus\{\w{0}\}$ we have that $|S\cap H_{\w{y}}|$ is either $|S|$, $0$ (these two cases corresponding to $\w{y}\in C(S)$ by Definition \ref{constset}) or $|S|/2$ (the case where $\w{y}$ balances $S$ using Definition \ref{balset}). \end{remark} To answer the question of whether there are any other fully balanced sets apart from affine spaces, we turn to \cite{mac}, and more particularly to Lemma 6 in its Chapter 13. The result we present here is just the aforementioned one, but we have rewritten it so we do not explicitly assume that the size of $S$ is a power of $2$, which is one of the premises set in \cite{mac}. This assumption is not actually used in their proof, so our result is not fundamentally new, but it seems important to know that the result is more general than as stated in \cite{mac} (and we give an original and simpler proof). \begin{theorem}\label{FBsetsTh} Let $S\subset\f{2}{n}$ be a nonempty Boolean set and $\w{1}_S$ its indicator vector, then the following statements are equivalent.
\begin{itemize} \item[(i)] $S$ is fully balanced. \item[(ii)] $S$ is an affine space. \item[(iii)] $\w{1}_S$ is a minimum weight codeword in $\mathcal{R}(r,n)$ for some $r$. \end{itemize} \end{theorem} \begin{proof} The equivalence $(ii)\iff(iii)$ can be found in \cite{mac}, and we have already taken a look into $(ii)\implies(i)$. \noindent The implication $(i)\implies(ii)$ is also implicitly in \cite[Chapter~13]{mac}, when we read the proof of its Lemma $6$ due to Rothschild and Van Lint; indeed, the proof given in \cite{mac} does not use in fact that $|S|$ is a power of two. We will instead present another proof of this implication using the Fourier--Hadamard transform properties. \noindent Let $1_S$ be the indicator function of $S$. As $S$ is fully balanced, we know that $\widehat{1_S}(\w{u})$ is either $0$, $|S|$ or $-|S|$. We will suppose without loss of generality that $\w{0}\in S$, as both properties (being fully balanced and being an affine space) are preserved by translation. As we have seen, in this situation the value $-|S|$ is not possible, and $C(S)$ is the vector space of those $\w{u}$ such that $\widehat{1_S}(\w{u}) = |S|$, so: $$ \widehat{1_S} = |S|\, 1_{C(S)}, $$ where $1_{C(S)}$ is the indicator function of $C(S)$. Using now the inverse Fourier--Hadamard transform formula, we have: $$ \widehat{\widehat{1_S}} = 2^n 1_S = |S|\, \widehat{1}_{C(S)} = |S| |C(S)|\, 1_{C(S)^{\perp}} $$ making also use of Proposition \ref{hadvecprop}. As $1_S$ is a Boolean function, then $|S| |C(S)| = 2^n$ and $S = C(S)^{\perp}$, so $S$ is a vector space. \hfill $\square$ \end{proof} \begin{remark} In the proof by Rothschild and Van Lint, they suppose that $|S|$ is a power of two and proceed by induction on $n$. Their proof can also be seen in terms of the Fourier--Hadamard transform, so we will briefly present it this way to show the differences between both approaches. For $n = 2$, the result is trivial, so we will suppose the result to be true for $n\leq k-1$ and prove it for $n=k$. \noindent Let $1_S$ be the indicator function of $S$, we know that $1_S$ is fully balanced if and only if $\widehat{1_S}(\w{u})\in\{0,\pm|S|\}$ for all $\w{u}$. If there is $\w{s}\in C(S)$ which is not zero, then there is a hyperplane $H\subset\f{2}{k}$ such that $S\subseteq H$ (this $H$ is $\{\w{0},\w{s}\}^\perp$ if $\widehat{1_S}(\w{s}) = |S|$ and its complement if $\widehat{1_S}(\w{s}) = -|S|$). Taking now any hyperplane $X$ of $H$, we know that $X = H \cap H'$ for some hyperplane $H'$ in $\f{2}{n}$, and thus $S\cap X = S\cap H'$. This implies that $|S\cap X|$ verifies the induction hypothesis and we have our result. If $C(S) = \{\w{0}\}$, then Rothschild and Van Lint show that $|S| = 2^n$, and thus $S = \f{2}{n}$ by a geometrical argument counting hyperplanes, but it is simpler to do it via an equivalent argument using Parseval's relation. Using $C(S) = \{\w{0}\}$ we know that $$ \sum_{\w{u}\in\f{2}{n}} \left(\widehat{1_S}(\w{u})\right)^2 = \left(\widehat{1_S}(\w{0})\right)^2 = |S|^2, $$ but it is also equal to $2^n|S|$ due to Parseval's relation, so $|S|$ must be either $0$ or $2^n$. \end{remark} \end{section} \begin{section}{Some Fourier--Hadamard and Walsh supports} We now move on to analysing the Fourier--Hadamard and Walsh supports of functions that have not yet been studied from the viewpoint of the Fourier support (recall that, for any nonzero function, $B(S)$ is the complement of the Fourier--Hadamard support). 
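For a small illustration (with a toy set not considered elsewhere in the paper), take $S = \{(0,0),(1,0)\}\subset\f{2}{2}$, the linear space spanned by $(1,0)$. A direct computation gives $\widehat{1}_S(0,0) = \widehat{1}_S(0,1) = 2$ and $\widehat{1}_S(1,0) = \widehat{1}_S(1,1) = 0$, so that $C(S) = \{(0,0),(0,1)\}$, $B(S) = \{(1,0),(1,1)\}$ and $\supp(\widehat{1}_S) = \f{2}{2}\setminus B(S) = C(S)$, in agreement with Proposition \ref{hadvecprop}; moreover, the indicator vector $\w{1}_S = (1,0,1,0)$ (in lexicographical order) is a minimum weight codeword of $\mathcal{R}(1,2)$, as predicted by Theorem \ref{FBsetsTh}.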
A common problem that we deal with in quantum computing and that has implications in quantum complexity theory is that of distinguishing classes of functions. Indeed, if we can isolate two classes of functions (for instance, balanced and constant functions) that cannot be distinguished efficiently in one of the classical models of computation and we can distinguish them efficiently in the quantum model, the result would have interesting implications. We will show in Section \ref{quant} that the Walsh support of vectorial functions is tied to the Fourier support of the Boolean or pseudo-Boolean functions determined by their images. For that reason, knowing the Fourier support of Boolean functions is paramount in applying the $\GPK$ algorithm to distinguish classes of vectorial functions. Few results are known regarding possible Walsh supports: we know that $\f{2}{n}$ can be a Walsh support (for instance of any function having odd Hamming weight), as well as any singleton $\{\w{a}\}$ (in this latter case, this is equivalent to the fact that the function is affine). A set of the form $\f{2}{n}\setminus \{\w{a}\}$ can also be a Walsh support, a study of which can be found in \cite{car1} together with a general review in what is already known on the subject. We study now the Fourier support of a class of functions that has never been studied: \begin{theorem}{(Balancing independent sets.)}\label{indepth} Let $S\subset\f{2}{n}$ with $|S|$ even, $\mathbf{0}\in S$ and $r = \rk(S) = |S|-1$. Then $$ b(S) = \binom{r}{(r+1)/2}, $$ and $B(S)$ is the disjoint union of $\binom{r}{(r+1)/2}$ affine spaces of dimension $n-r$ whose underlying vector space is $C(S) = \langle S \rangle^{\perp}$. \end{theorem} \begin{proof} Let $\w{s}_i\in S$; $i=1,\ldots,r$, be the nonzero elements of $S$. For a certain $\w{x}\in\f{2}{n}$ to balance $S$, it must satisfy that $\w{x}\cdot\w{s}_i = 0$ for $(r-1)/2$ of the nonzero elements of $S$ and $\w{x}\cdot\w{s}_i = 1$ for the remaining $(r+1)/2$. Let $B_{r,t}$ be the set of vectors of weight $t$ in $\f{2}{r}$, and take $t = (r+1)/2$. We know that, for every $\w{x}\in\f{2}{n}$ that balances $S$, the vector obtained after computing the $r$ possible $\w{x}\cdot\w{s}_i$ values must be in $B_{r,t}$. Next, for each element $\mathbf{b}\in B_{r,t}$, we will consider the system of equations given by $\{\mathbf{s}_i\cdot\w{x} = b_i\mid i = 1,\ldots, r\}$, where $b_i$ is the $i$-th component of $\w{b}$ and $\w{x}$ is the vector of unknowns. It is obvious that the solution to any of these systems balances $S$, as $w(\w{b}) = t$. Furthermore, any vector that balances $S$ must be a solution to one of these systems. The next step will be to analyse the solutions to these systems of equations. As the $r$ non-trivial vectors of $S$ are independent, we know that $n \geq r$, and in particular, the rank of any of the systems will be $r$ and they will always have a subspace of dimension $n-r$ as solution. The final step is simply to remark that the pairwise intersections of such subspaces are empty and that there are precisely $$ \displaystyle\binom{r}{(r+1)/2} $$ possible such systems of equations. \hfill $\square$ \end{proof} In particular, we have that $b(S) = \displaystyle\binom{r}{(r+1)/2}.$ Let us see what this implies in terms of the Fourier--Hadamard and Walsh supports. \begin{remark} Let $f:\f{2}{n}\to\f{2}{}$ be a Boolean function whose support $S = \supp(f)$ satisfies the conditions of Theorem \ref{indepth}. 
Then, if $f$ is not balanced---which is always the case if $n>3$---the Fourier--Hadamard and Walsh supports are $\supp(\widehat{f}) = \supp(W_f) = \f{2}{n}\setminus B(S)$, and thus we have supports of size: $$ 2^n-\displaystyle\binom{r}{(r+1)/2}2^{n-r}. $$ If $f$ is balanced then the Walsh support is of size $$ 2^n-\displaystyle\binom{r}{(r+1)/2}2^{n-r}-1. $$ \end{remark} The next step in our quest could be to consider the next most general situation. Let us study the case where all nontrivial elements of $S$ are independent except for one. \begin{proposition}\label{indepmas1pr} Let $S\subset \mathbb{F}_2^n$ with $|S|$ even, $\mathbf{0}\in S$ and $r = \rk(S) = |S| - 2$. Denoting by $\mathbf{s}_1,\ldots, \mathbf{s}_{r}$ $r$ independent elements of $S$ and by $\mathbf{s}$ the remaining nonzero element of $S$ such that $$ \mathbf{s} = \sum_{i=1}^{r} \alpha_i \mathbf{s}_i, \text{ where } \alpha_i\in\mathbb{F}_2 \text{ for all } i\in\{1\ldots,r\}, $$ let $k = \displaystyle\sum_{i=1}^{r} \alpha_i$, where the sum is calculated in $\mathbb{Z}$, then: $$ b(S) = \begin{cases} 0 & \text{ if } \varphi_1(k) > \varphi_2(k) \\ \displaystyle\sum_{i=\vp_1(k)}^{\vp_2(k)} \binom{k}{i} \ \binom{r-k}{(r/2)+e(i)-i} & \text{ otherwise,} \end{cases} $$ where $$ \vp(k) = (\vp_1(k),\vp_2(k)) = \begin{cases} (0, k) & \text{ if } k < r/2\\ (1,k) & \text{ if } k = r/2\\ \displaystyle \Big(k - \frac{r}{2} + e \left( k-\frac{r}{2} \right), \frac{r}{2}+ e \left( \frac{r}{2}+1 \right)\Big) & \text{ if } k > r/2, \end{cases} $$ and $$ e(x) = \frac{1+(-1)^x}{2}. $$ Once again, $B(S)$ will be a disjoint union of $b(S)$ affine spaces with $C(S)$ as their underlying vector space. \end{proposition} \begin{proof} As announced after Proposition \ref{isonoiso}, we will consider $n = r$, $\mathbf{e}_j\in S$ for each $j=1,\ldots,r$ and $\mathbf{s} = 1^k \ 0^{r-k}$ without loss of generality. In this situation, $k$ is the weight of $\mathbf{s}$. We know that $b(S)$ corresponds to the number of affine spaces with $C(S)$ as their underlying vector space that balance $S$, but since we have $C(S) = \langle S\rangle^{\perp} = \{\w{0}\}$, each of these affine spaces consists of a single vector and $|B(S)| = b(S)$. Thus, we only need to count the vectors $\w{b}\in\f{2}{r}$ that balance $S$. Let $\w{b}\in\f{2}{r}$ be any such vector, then $\w{b}\cdot\w{s}_j = \w{b}\cdot\w{e}_j = b_j$ for every $j=1,\ldots,r$ and $$\w{b}\cdot \w{s} = \sum_{j=1}^k b_j,$$ where $b_j$ is the $j$-th component of $\w{b}$. It is clear that for $\mathbf{b}$ to balance $S$ it must either have weight $r/2$ and have an odd number of ones among the first $k$ positions, or have weight $(r/2)+1$ and have an even number of ones among the first $k$ positions. At this point, the only thing that remains is to count such $\mathbf{b}$ vectors. Let $i$ be the number of ones among the first $k$ positions, then there are exactly $\binom{k}{i}$ ways for them to be distributed. If $i$ is odd, then we want $\wt(\w{b}) = r/2$, so for each choice of $b_1,\ldots,b_k$ there are $$ \binom{r-k}{(r/2)-i} $$ ways of choosing the remaining ones. If $i$ is even, then we want $\wt(\mathbf{b}) = (r/2) +1$, and then the remaining combinations will be $$ \binom{r-k}{(r/2)+1-i}. $$ A compact way to consider both possibilities at the same time is $$ \binom{r-k}{(r/2)+e(i)-i}, $$ so the total number of combinations will be: $$ \sum_i \binom{k}{i} \binom{r-k}{(r/2)+e(i)-i}. $$ The only thing left to do is to analyse which are the possible values for $i$, which is the task of the $\vp$ function.
Given $i\leq k \leq r$, the previous expression is nonzero if and only if $i\leq k$ and $r/2+e(i)-i\leq r-k$, that is, $i\geq k-r/2+e(i)$. If $k < r/2$, then it is trivial that $i$ can range between $0$ and $k$. If $k = r/2$, then it is almost the same situation, with the slight difference that $i$ cannot be $0$, as then there would have to be $(r/2) + 1$ ones in the last $r/2$ positions. However, if $k > r/2$, then the maximum range we can achieve is from $k-(r/2)$ to $r/2$. This is not always possible, as if $r/2$ is odd, then $i = (r/2)+1$ is even and it would not balance our set. The same happens with our lower bound and the parity of $k-(r/2)$. If it were even, then it would be impossible to balance our set with $i = k - (r/2)$, so we must adjust the limits of our range in the $\vp$ function accordingly. \hfill $\square$ \end{proof} As we can see, the formulae get incredibly complicated when the set lacks structure. \begin{remark} Let $f:\f{2}{n}\to\f{2}{}$ be a Boolean function such that its support $S = \supp(f)$ satisfies the conditions of Proposition \ref{indepmas1pr}. Then, if $f$ is not balanced---which again is always the case if $n>3$--- we have that both the Fourier--Hadamard and Walsh supports are a disjoint union of $b(S)$ affine spaces with $C(S)$ as their underlying vector space. Therefore, their total size will be $2^n - b(S) 2^{n-r}$, where $b(S)$ is as shown in Proposition \ref{indepmas1pr} and $r$ is the rank of $S$. \end{remark} We will focus now on functions whose support has a certain structure. There are many known ways of determining the Fourier--Hadamard and Walsh supports of Boolean functions by decomposing them. A summary of these situations can be found in \cite{bfc}, where, in particular, we can find that, given two pseudo-Boolean functions, $\psi$ and $\vp$, in $\f{2}{n}$, then, $\widehat{\vp\otimes\psi} = \widehat{\vp}\times\widehat{\psi}.$ Also, we know that $\widehat{\vp\times\psi} = \widehat{\vp}\otimes\widehat{\psi}/2^n,$ where $\otimes$ stands here for the Kronecker product. We will proceed to do a similar thing, but using a different construction based on the $0$-kernel of the function. We will first highlight the idea with an example. \begin{remark} Let us consider a nonempty $S\subset \mathbb{F}_2^n$ for which there is an $\mathbf{s}\in \mathbb{F}_2^n$ such that $\mathbf{s} + S = S$ and $\rk(S) = r$. Then, any $\w{y}\in \mathbb{F}_2^n \setminus H_{\mathbf{s}}$ balances $S$, but for any $\w{y}\in H_{\w{s}}$ to balance $S$ we would need it to balance $S/\langle \w{s}\rangle$. In particular, if $|S|$ is not a multiple of $4$, then $B(S) = \mathbb{F}_2^n \setminus H_{\mathbf{s}}$ and $b(S) = 2^{r-1}.$ \end{remark} If we note as $f$ the indicator function of $S$, then what we have here is an $\w{s}$ such that $D_{\w{s}}(f) = 0$, the set of all the elements that fulfill this property is known as the $0$-kernel, $\mathcal{E}_{0}(f) = \{\w{a}\in\f{2}{n}\mid D_{\w{a}}(f) = 0\}$, and we will use it to generalise the previous result. Let us recall some of its properties first: \begin{proposition}{(Properties.)} Let $f:\f{2}{n}\to\f{2}{}$ be a Boolean function and $S=\supp(f)$: \begin{itemize} \item[(i)] $\mathcal{E}_{0}(f)$ is a vector subspace. \item[(ii)] If $\mathbf{0}\in S$, then $\mathcal{E}_0(f)\subset S$. \item[(iii)] $\mathcal{E}_0(f(\w{x})) = \mathcal{E}_0\big(f(\w{x}+\w{a})\big)$ for all $\mathbf{a}\in\f{2}{n}$. 
\item[(iv)] $\mathcal{E}_0(f) = \displaystyle\bigcap_{\mathbf{x}\in S} (\mathbf{x}+S).$ \end{itemize} \end{proposition} What we are trying to do here is to construct $B(S)$ from both $E$ and the balancing set of the set of classes $\mathbf{x} + E$. So let us delve a little deeper into this idea. \begin{definition}{(Classes modulo $\mathcal{E}_0$.)} Let $S\subset\mathbb{F}_2^n$ be nonempty, $f$ its indicator function and $E= \mathcal{E}_0(f)$. We will consider the set of classes modulo $E$, $S / E = \{\mathbf{x} + E \mid \mathbf{x}\in S\}$ and define the balancing set of $S/E$ as: $$ B(S/E) = B(S) \cap H, \text{ where } H = \displaystyle\bigcap_{\mathbf{s}\in E} H_{\mathbf{s}}. $$ \end{definition} Annoyingly, this does not help us much, since we use $B(S)$ as part of the definition, but the next result will give us an alternative way to compute $B(S/E)$. \begin{lemma}\label{balcll} Let $S\subset\mathbb{F}_2^n$ be nonempty, $f$ its indicator function and $E= \mathcal{E}_0(f)$, and let $S_E$ be a complete set of representatives of $S/E$. Then: $$ B(S/E) = B(S_E)\cap H, \text{ where } H = \displaystyle\bigcap_{\mathbf{s}\in E} H_{\mathbf{s}}. $$ \end{lemma} \begin{proof} Let $\mathbf{y}\in H$. Stating that $\mathbf{y}$ balances $S$ is equivalent to stating that it balances the classes, as $\mathbf{y}\cdot \mathbf{s} = 0$ for every $\mathbf{s}\in E$ and thus $\mathbf{y}\cdot \mathbf{x}_1 = \mathbf{y}\cdot \mathbf{x}_2$ for $\mathbf{x}_1,\mathbf{x}_2$ in the same class. The result follows from the fact that, in this situation, balancing a complete set of representatives is the same as balancing the classes. \hfill $\square$ \end{proof} Now the next result becomes obvious. \begin{theorem}\label{cadena3th} Let $S\subset \mathbb{F}_2^n$ be nonempty, $f$ its indicator function and $E= \mathcal{E}_0(f)$, with $ k= \dim(E)$ and $r = \rk(S)$. Then $$ B(S) = \big( \mathbb{F}_2^n\setminus H \big) \sqcup B(S/E), $$ where $H =\bigcap_{\mathbf{s}\in E} H_{\mathbf{s}}$ and $\sqcup$ stands for the disjoint union. Furthermore, $b(S) = 2^{r-k}(2^k-1) + b(S_E)$, where $S_E$ is a complete set of representatives of $S/E$. \end{theorem} \begin{proof} As every element of $\mathbb{F}_2^n\setminus H$ balances $S$, the result just follows from: $$ B(S) = \big[ B(S)\cap \left(\mathbb{F}_2^n\setminus H \right) \big] \sqcup \big[ B(S)\cap H \big]. $$ For the first part, $\left(B(S)\cap \left(\mathbb{F}_2^n\setminus H\right)\right) = \mathbb{F}_2^n\setminus H$, while the second is just the definition of $B(S/E)$. Regarding $b(S)$, the $2^{r-k}(2^k-1)$ part comes from $\mathbb{F}_2^n\setminus H$. For the $b(S_E)$ part, if we consider without loss of generality $\mathbf{0}\in S$, $n=r$, $\mathbf{e}_1,\ldots,\mathbf{e}_k \in E$ and $\mathbf{e}_{k+1},\ldots,\mathbf{e}_r \in S$, where the $\w{e}_i$ are the vectors of the canonical basis, then each of the vectors in $B(S_E)$ comes from a compatible system of equations of rank $r$, and thus there will be $b(S_E)$ affine spaces in $B(S/E)$. \hfill $\square$ \end{proof} \begin{corollary} Let $f:\f{2}{n}\to\f{2}{}$ be such that $\supp (f) = S$ is as in Theorem \ref{cadena3th}, then $\supp(\widehat{f}) = H \cap \supp(\widehat{f}_{S/H})$, where $f_{S/H}$ is the indicator function of $S/H$.
\end{corollary} \end{section} \begin{section}{The Fourier--Hadamard Transform for multisets} Let us devote some time to consider the Fourier--Hadamard transform for multisets, as it will become useful for analysing the Walsh transform of vectorial functions $F:\f{2}{n}\to\f{2}{m}$ through their image multisets. The main motivation for doing so is to define the concept of fully balanced function and apply the already presented results to analyse what the $\GPK$ has to offer. For a general review on multisets we refer the reader to \cite{syro}. As we will see, most of the results we have presented remain unchanged, so we will mostly just give a small sketch of the proofs. However, we do present some new results. In particular, we answer the question of determining the possible Fourier--Hadamard supports that a pseudo-Boolean function can have using the constant and balancing sets of multisets. \begin{definition}{(Balanced multiset.)} Let $M = (\f{2}{n},m)$ be a nonempty Boolean multiset. We say that $\mathbf{y}\in \mathbb{F}_2^n$ balances $M$ if: $$ \widehat{m}(\w{y}) = \sum_{\mathbf{x} \in S_M} m(\mathbf{x}) \cdot (-1)^{\mathbf{x}\cdot \mathbf{y}} = 0, $$ and we say that it makes $M$ constant if: $$ \left|\widehat{m}(\w{y})\right| = \left|\sum_{\mathbf{x}\in S_M} m(\mathbf{x}) \cdot (-1)^{\mathbf{x}\cdot \mathbf{y}}\right| = \displaystyle\sum_{\mathbf{x} \in S_M} m(x) = |M|, $$ where $|M| = \displaystyle\sum_{\mathbf{x} \in S_M} m(x)$ is the cardinality of $M$. We define $C(M)$, $B(M)$ and $b(M)$ analogously. \end{definition} In order to solve the problem of the constant set, we only need the following result. \begin{lemma} Let $M$ be a Boolean multiset, then $C(M) = C(S_M).$ \end{lemma} We will now present results analogous to those of Section \ref{basicresults} for the balancing problem. \begin{proposition} Let $M=(\f{2}{n},m)$ be a nonempty Boolean multiset and $\mathbf{s}\in\mathbb{F}_2^n$. Let $\mathbf{s} + M = (\f{2}{n},m')$ where $m'(\mathbf{x}) = m(\mathbf{s}+\mathbf{x})$, then $B(M) = B(\mathbf{s} + M).$ \end{proposition} \begin{proof} Let $\mathbf{y}\in B(M)$, then $$ \sum_{\mathbf{x}\in S_M} m'(\mathbf{s}+\mathbf{x}) \cdot (-1)^{(\mathbf{s}+\mathbf{x})\cdot \mathbf{y}} = (-1)^{\mathbf{s}\cdot\mathbf{y}} \sum_{\mathbf{x}\in S_M} m(\mathbf{x}) \cdot (-1)^{\mathbf{x}\cdot \mathbf{y}} = 0. $$ And thus $\mathbf{y}\in B(\mathbf{s}+M)$. As $\mathbf{s}+(\mathbf{s}+M) = M$ we have the result. \hfill $\square$ \end{proof} \begin{proposition} Let $M$ be a nonempty Boolean multiset and let $C(M)$ be its constant set. Then $B(M)$ is a disjoint union of affine spaces all of them with $C(M)$ as their underlying vector space. \end{proposition} The reasoning is the same as the one for the set situation, and we also still have that the balancing index of a Boolean multiset is invariant by isomorphisms of $\mathbb{F}_2^n$. We will now extend the definition of fully balanced and the main results regarding this property. \begin{definition}{(Fully balanced multiset.)} Let $M$ be a nonempty Boolean multiset. We say that it is a fully balanced multiset if $B(M)\cup C(M) = \mathbb{F}_2^n.$ \end{definition} \begin{theorem} Let $M = (\f{2}{n},m)$ be a nonempty Boolean multiset; then it is fully balanced if and only if $S_M$ is an affine space and $m$ is constant. \end{theorem} \begin{proof} It is clear that any such $M$ is fully balanced. 
The converse implication can be proven either by induction or by following the sketch in the proof of Theorem \ref{FBsetsTh} but replacing $|S|$ by $|M|$ and $1_S$ by $m$. If we did so, we would still have that if $m(\w{0})\neq 0$, then $\widehat{m} = |M|1_{C(M)},$ just as before. As the inverse Fourier--Hadamard formula still applies, we would end up with: $$ m = \frac{|M||C(M)|}{2^n}1_{C(M)^{\perp}}, $$ having thus our result. \hfill $\square$ \end{proof} The main problem that we have been trying to solve has been that of determining the possible Fourier--Hadamard supports---or, equivalently, balancing sets---that Boolean functions can have, but what happens when we consider multisets? The final result we will reach is the following \begin{theorem}{(Multiset balancing sizes.)}\label{multisize} For any set $S$ such that $\w{0}\in S$ there is a multiset whose multiplicity function has $S$ as its Fourier--Hadamard support. \end{theorem} However, we will not prove this right away, and instead, we will first focus on a constructive method to create multisets that have increasingly smaller balancing sets. \begin{lemma}\label{multisize2} Let $M_1 = (\f{2}{n},m_1)$ be a nonempty multiset such that there is an $\w{y}\in B(M_1)$, then there is another multiset $M_2 = (\f{2}{n},m_2)$ satisfying $B(M_2) = B(M_1)\setminus \{\w{y}\}$. \end{lemma} \begin{proof} Thanks to the inverse Fourier--Hadamard transform we know that for all $\w{x}\in\f{2}{n}$ $$ m_1(\w{x}) = \frac{1}{2^n}\widehat{\widehat{m_1}}(\w{x}). $$ If we want to change some values in $\widehat{m_1}$, the only two conditions that $\widehat{\widehat{m_1}}(\w{x})$ must satisfy in order to determine a multiset are the following: first, it is clear that $\widehat{\widehat{m_1}}(\w{x})$ must be nonnegative for all $\w{x}\in\f{2}{n}$, and secondly, it must be a multiple of $2^n$. If we set then $$ \widehat{m_2}(\w{x}) = \begin{cases} \widehat{m_1} +2^{n-1} & \text{if } \w{x} = \w{0} \text{ or } \w{x} = \w{y} \\ \widehat{m_1}(\w{x}) & \text{otherwise,} \end{cases} $$ it is clear that $$ \widehat{\widehat{m_2}}(\w{x}) = \begin{cases} \widehat{\widehat{m_1}}(\w{x}) & \text{if } \w{x}\cdot\w{y} = 1 \\ \widehat{\widehat{m_1}}(\w{x}) + 2^n & \text{if } \w{x}\cdot\w{y} = 0 \end{cases} $$ and thus both conditions are satisfied. As $M_1$ was nonempty, $\widehat{m_1}(\w{0})$ was not $0$, so $B(M_2) = B(M_1)\setminus\{\w{y}\}$. \hfill $\square$ \end{proof} We will see this with an example as soon as we finish the proof of the general result. \begin{proof}{(Theorem \ref{multisize}.)} We know multisets in $\f{2}{n}$ whose Fourier--Hadamard support is just $\{\w{0}\}$---for instance $\f{2}{n}$---so we only need to iterate the procedure of Lemma \ref{multisize2} to obtain the desired multiset, but we can actually just begin with the empty multiset. \hfill $\square$ \end{proof} \end{section} \begin{section}{Application in Quantum Computing}\label{quant} We will finally point out why it seems relevant to study the Fourier--Hadamard and Walsh transforms for multisets and in the fully balanced situation. This will be done by tying down the Walsh transform of vectorial functions and some results in quantum computing. As we already mentioned in the introduction, the Walsh transform appears in quantum algorithms in which the phase kick-back technique \cite{kaye} is applied. 
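As a reminder of notation (using the standard convention, which matches the formulas below), for a vectorial function $F\colon\f{2}{n}\to\f{2}{m}$ and $(\w{x},\w{y})\in\f{2}{n}\times\f{2}{m}$ the Walsh transform is $$ W_F(\w{x},\w{y}) = \sum_{\w{z}\in\f{2}{n}} (-1)^{\w{y}\cdot F(\w{z}) + \w{x}\cdot\w{z}}, $$ while for a Boolean function $f\colon\f{2}{n}\to\f{2}{}$ we simply write $W_f(\w{x}) = \sum_{\w{z}\in\f{2}{n}} (-1)^{f(\w{z}) + \w{x}\cdot\w{z}}$; in particular, $W_f(\w{0}) = \pm 2^n$ when $f$ is constant and $W_f(\w{0}) = 0$ when $f$ is balanced.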
In particular, it first appeared in the Deutsch--Jozsa algorithm \cite{dyj}, where the final amplitude associated to a certain state $\ket{\w{x}}_n$ when the algorithm is applied to a Boolean function $f:\f{2}{n}\to\f{2}{}$ is shown to be $W_f(\w{x})/2^n$. The Walsh transform of vectorial functions also shows up in the generalised version of this algorithm \cite{gpk}. Indeed, once again, the final amplitude associated to a certain state $\ket{\w{x}}_n$ when the algorithm is applied to a vectorial function $F:\f{2}{n}\to\f{2}{m}$ using a marker $\w{y}\in\f{2}{m}$ is shown to be $W_F(\w{x},\w{y})/2^n$. We will show now the relation between fully balanced multisets and the Walsh transform of vectorial functions. To do so, we will first define some concepts. \begin{definition}{($\mathbf{y}$-Balanced function.)} Let $F:\f{2}{n}\to \f{2}{m}$ be a vectorial function and let $\mathbf{y}\in \f{2}{m}$. We say that $F$ is $\mathbf{y}$-balanced if we have $F(\mathbf{x})\cdot \mathbf{y} = 0$ for half of the vectors $\mathbf{x}\in\f{2}{n}$ and $F(\mathbf{x}) \cdot \mathbf{y} = 1$ for the other half. \end{definition} In the same way, we can define the idea of $\mathbf{y}$-constant functions. \begin{definition}{($\mathbf{y}$-Constant function.)} Let $F:\f{2}{n}\to \f{2}{m}$ be a vectorial function and let $\mathbf{y}\in \f{2}{m}$. We say that $F$ is $\mathbf{y}$-constant if for every vector $\mathbf{x}\in\f{2}{n}$ the result of $F(\mathbf{x}) \cdot \mathbf{y}$ is the same. \end{definition} As we have seen, these definitions can be translated in terms of the Fourier--Hadamard transform. \begin{proposition} Let $F:\f{2}{n}\to\f{2}{m}$ be a vectorial function. Then, $F$ is $\mathbf{y}$-balanced if and only if: $$ \sum_{\mathbf{x}\in\f{2}{n}} (-1)^{F(\mathbf{x})\cdot\mathbf{y}} = 0. $$ Similarly, $F$ is $\mathbf{y}$-constant if and only if: $$ \left|\sum_{\mathbf{x}\in\f{2}{n}} (-1)^{F(\mathbf{x})\cdot\mathbf{y}} \right| = 2^n. $$ \end{proposition} The next result now becomes trivial. \begin{theorem} Let $F:\f{2}{n}\to\f{2}{m}$ be a vectorial function and $\mathbf{y}\in\f{2}{m}$. Then, $F$ is $\mathbf{y}$-constant if and only if $\big|W_F(\w{0},\w{y})\big| = 2^n$. Besides, $F$ is $\w{y}$-balanced if and only if $W_F(\w{0},\w{y}) = 0$. \end{theorem} Following things up, we will now focus on the fully balanced situation. \begin{definition}{(Fully balanced functions.)} Let $F:\f{2}{n}\to\f{2}{m}$ be a vectorial function, we say that it is fully balanced if for every $\mathbf{y}\in\f{2}{m}$, $F$ is either $\mathbf{y}$-balanced or $\mathbf{y}$-constant. \end{definition} So we say that a function is fully balanced if the multiset of its image is fully balanced. We can likewise define the concepts of $C(F)$, $B(F)$ and $b(F)$ for a given vectorial function $F$. We will finish things up by highlighting a trivial consequence of this result that ties down the Walsh transform of a vectorial function and the balancing set of its image multiset. As a previous notation remark, we will refer to the image of $F$ multiset as $I_F = (\f{2}{m},m(\w{x}))$ with $m(\w{x}) = |F^{-1}(\w{x})|.$ \begin{corollary} Let $F:\f{2}{n}\to\f{2}{m}$ be a fully balanced vectorial function. Then, $$ \Big|W_F(\w{0},\w{v})\Big| = 2^n\, 1_{C(I_F)}(\w{v}), $$ where $\w{v}\in\f{2}{m}$ and $1_{C(I_F)}$ is the indicator function of $C(I_F)$. 
\end{corollary} In other words, when we are dealing with fully balanced functions, we can determine whether a marker $\w{y}$ is in the balancing set or in the constant set of its image multiset by looking at $\GPK(\w{y})$. If the result is the zero vector $\w{0}$, then $\w{y}$ is in fact in the constant set, while if we get any other result, then $\w{y}$ is in the balancing set. The idea now is that we can use the $\GPK$ algorithm to determine the dimension of the image of a given fully balanced function, and even compute said image, but this will be done in a separate publication. \end{section} \begin{credits} \subsubsection{\ackname} The research of the first author is partly supported by the \emph{Norwegian Research Council}, ID 314395. The research of the second and third authors is supported by the \emph{Ministerio de Ciencia e Innovación} under Project PID2020-114613GB-I00 (MCIN/AEI/10.13039/501100011033). \subsubsection{\discintname} The authors report that there are no competing interests to declare. \end{credits} \bibliographystyle{splncs04} \bibliography{mybibliography} \end{document}
2205.09040v1
http://arxiv.org/abs/2205.09040v1
Strongly nonexpansive mappings revisited: uniform monotonicity and operator splitting
\documentclass[11pt]{article} \usepackage{tikz} \usepackage{empheq} \usepackage[shortlabels]{enumitem} \setlist[enumerate]{leftmargin=15mm,nosep} \newcommand*\rectangled[1]{\tikz[baseline=(char.base)]{ \node[rectangle,fill=blue!20,draw,inner sep=2pt,opacity=0.5,text opacity=1] (char) {#1};}} \usepackage[title]{appendix} \usepackage{fancybox} \usepackage[doc, wmcom,jvcom]{optional} \usepackage{xcolor} \usepackage[colorlinks=true, linkcolor=refkey, urlcolor=lblue, citecolor=red]{hyperref} \usepackage{float} \usepackage{soul} \usepackage{graphicx} \definecolor{labelkey}{rgb}{0,0.08,0.45} \definecolor{refkey}{rgb}{0,0.6,0.0} \definecolor{Brown}{rgb}{0.45,0.0,0.05} \definecolor{lime}{rgb}{0.00,0.8,0.0} \definecolor{lblue}{rgb}{0.5,0.5,0.99} \definecolor{OliveGreen}{rgb}{0,0.6,0} \usepackage{mathpazo} \colorlet{hlcyan}{cyan!30} \DeclareRobustCommand{\hlcyan}[1]{{\sethlcolor{hlcyan}\hl{#1}}} \usepackage{stmaryrd} \usepackage{amssymb} \oddsidemargin -0.1cm \textwidth 16.5cm \topmargin -0.1cm \headheight 0.0cm \textheight 21.2cm \parindent 4mm \parskip 10pt \tolerance 3000 \hyphenation{non-empty} \makeatletter \def\namedlabel#1#2{\begingroup \def\@currentlabel{#2} \label{#1}\endgroup } \makeatother \newcommand{\sepp}{\setlength{\itemsep}{-2pt}} \newcommand{\seppthree}{\setlength{\itemsep}{-3pt}} \newcommand{\seppone}{\setlength{\itemsep}{-1pt}} \newcommand{\seppfour}{\setlength{\itemsep}{-4pt}} \newcommand{\seppfive}{\setlength{\itemsep}{-5pt}} \newcommand{\reckti}{$3^*$ monotone} \newcommand{\reckticity}{$3^*$ monotonicity} \newcommand{\Reckticity}{$3^*$ Monotonicity} \newcommand{\ssnonex}{super strongly nonexpansive} \oddsidemargin -0.1cm \textwidth 16.5cm \topmargin -0.1cm \headheight 0.0cm \textheight 21.2cm \parindent 4mm \parskip 10pt \tolerance 3000 \newcommand{\SE}{\ensuremath{{\mathcal S}}} \newcommand{\J}[1]{\ensuremath{{\operatorname{J}}_{#1}}} \newcommand{\Pj}[1]{\ensuremath{{\operatorname{P}}_ {#1}}} \newcommand{\Px}[1]{\ensuremath{{\operatorname{P}}_ {#1}}} \newcommand{\Nc}[1]{\ensuremath{{\operatorname{N}}_ {#1}}} \newcommand{\dist}[1]{\ensuremath{{\operatorname{dist}}_ {#1}}} \newcommand{\Pxg}{\ensuremath{\Px{g}}} \newcommand{\bId}{\ensuremath{\operatorname{\mathbf{Id}}}} \newcommand{\bM}{\ensuremath{{\mathbf{M}}}} \newcommand{\bb}{\ensuremath{\mathbf{b}}} \newcommand{\bX}{\ensuremath{\mathbf{X}}} \newcommand{\bx}{\ensuremath{\mathbf{x}}} \newcommand{\bZ}{\ensuremath{\mathbf{Z}}} \newcommand{\bu}{\ensuremath{\mathbf{u}}} \newcommand{\bj}{\ensuremath{\mathbf{j}}} \newcommand{\be}{\ensuremath{e}} \newcommand{\by}{\ensuremath{\mathbf{y}}} \newcommand{\bzero}{\ensuremath{{\boldsymbol{0}}}} \providecommand{\siff}{\Leftrightarrow} \newcommand{\weakly}{\ensuremath{\:{\rightharpoonup}\:}} \newcommand{\bDK}{$\bD{}\,$-Klee} \newcommand{\fDK}{$\fD{}\,$-Klee} \newcommand{\nnn}{\ensuremath{{n\in{\mathbb N}}}} \newcommand{\thalb}{\ensuremath{\tfrac{1}{2}}} \newcommand{\menge}[2]{\big\{{#1}~\big |~{#2}\big\}} \newcommand{\mmenge}[2]{\bigg\{{#1}~\bigg |~{#2}\bigg\}} \newcommand{\Menge}[2]{\left\{{#1}~\Big|~{#2}\right\}} \newcommand{\todo}{\hookrightarrow\textsf{TO DO:}} \newcommand{\To}{\ensuremath{\rightrightarrows}} \newcommand{\lev}[1]{\ensuremath{\mathrm{lev}_{\leq #1}\:}} \newcommand{\fenv}[1]{\ensuremath{\,\overrightarrow{\operatorname{env}}_{#1}}} \newcommand{\benv}[1]{\ensuremath{\,\overleftarrow{\operatorname{env}}_{#1}}} \newcommand{\emp}{\ensuremath{\varnothing}} \newcommand{\sign}{\ensuremath{\operatorname{sign}}} \newcommand{\infconv}{\ensuremath{\mbox{\small$\,\square\,$}}} 
\newcommand{\pair}[2]{\left\langle{{#1},{#2}}\right\rangle} \newcommand{\scal}[2]{\left\langle{#1},{#2} \right\rangle} \newcommand{\bscal}[2]{\big\langle{#1},{#2} \big\rangle} \newcommand{\tscal}[2]{\textstyle\langle{#1},{#2} \rangle} \newcommand{\Tt}{\ensuremath{\mathfrak{T}}} \newcommand{\YY}{\ensuremath{\mathcal Y}} \newcommand{\prim}{\ensuremath{z}} \newcommand{\dl}{\ensuremath{k}} \newcommand{\exi}{\ensuremath{\exists\,}} \newcommand{\zeroun}{\ensuremath{\left]0,1\right[}} \newcommand{\RR}{\ensuremath{\mathbb R}} \newcommand{\RP}{\ensuremath{\mathbb{R}_+}} \newcommand{\RPP}{\ensuremath{\mathbb{R}_{++}}} \newcommand{\RM}{\ensuremath{\mathbb{R}_-}} \newcommand{\RPX}{\ensuremath{\left[0,+\infty\right]}} \newcommand{\RX}{\ensuremath{\,\left]-\infty,+\infty\right]}} \newcommand{\RRX}{\ensuremath{\,\left[-\infty,+\infty\right]}} \newcommand{\KK}{\ensuremath{\mathbf K}} \newcommand{\ZZ}{\ensuremath{\mathbf Z}} \newcommand{\NN}{\ensuremath{\mathbb N}} \newcommand{\oldIDD}{\ensuremath{\operatorname{int}\operatorname{dom}f}} \newcommand{\IdD}{\ensuremath{U}} \newcommand{\BIDD}{\ensuremath{\mathbf U}} \newcommand{\CDD}{\ensuremath{\overline{\operatorname{dom}}\,f}} \newcommand{\dom}{\ensuremath{\operatorname{dom}}} \newcommand{\codim}{\ensuremath{\operatorname{codim}}\,} \newcommand{\argmin}{\ensuremath{\operatorname{argmin}}} \newcommand{\arginf}{\ensuremath{\operatorname{arginf}}} \newcommand{\Arginf}{\ensuremath{\operatorname{Arginf}}} \newcommand{\Argmin}{\ensuremath{\operatorname{Argmin}}} \newcommand{\cont}{\ensuremath{\operatorname{cont}}} \newcommand{\prox}{\ensuremath{\operatorname{Prox}}} \newcommand{\T}{\ensuremath{{\operatorname{T}}}} \newcommand{\reli}{\ensuremath{\operatorname{ri}}} \newcommand{\inte}{\ensuremath{\operatorname{int}}} \newcommand{\sri}{\ensuremath{\operatorname{sri}}} \newcommand{\deriv}{\ensuremath{\operatorname{\; d}}} \newcommand{\rockderiv}{\ensuremath{\operatorname{\; \hat{d}}}} \newcommand{\closu}{\ensuremath{\operatorname{cl}}} \newcommand{\cart}{\ensuremath{\mbox{\LARGE{$\times$}}}} \newcommand{\SC}{\ensuremath{{\mathfrak S}}} \newcommand{\card}{\ensuremath{\operatorname{card}}} \newcommand{\bd}{\ensuremath{\operatorname{bdry}}} \newcommand{\ran}{\ensuremath{{\operatorname{ran}}\,}} \newcommand{\zer}{\ensuremath{\operatorname{zer}}} \newcommand{\conv}{\ensuremath{\operatorname{conv}\,}} \newcommand{\aff}{\ensuremath{\operatorname{aff}\,}} \newcommand{\cconv}{\ensuremath{\overline{\operatorname{conv}}\,}} \newcommand{\cspan}{\ensuremath{\overline{\operatorname{span}}\,}} \newcommand{\cdom}{\ensuremath{\overline{\operatorname{dom}}\,}} \newcommand{\Fix}{\ensuremath{\operatorname{Fix}}} \newcommand{\Id}{\ensuremath{\operatorname{Id}}} \newcommand{\bDelta}{{\begin{proof} \Delta}} \newcommand{\bv}{{\begin{proof} v}} \newcommand{\bT}{{\begin{\\ } T}} \newcommand{\bD}{{\begin{\\ } D}} \newcommand{\bg}{{\begin{proof} g}} \newcommand{\Lss}{{\ensuremath{L}}} \newcommand{\Icons}{\ensuremath{J}} \newcommand{\Iobj}{\ensuremath{I \smallsetminus J}} \newcommand{\JAinv}{J_{A^{-1}}} \newcommand{\JBinv}{J_{B^{-1}}} \newcommand{\TAB}{\operatorname{T}_{A,B}} \newcommand{\TBA}{\T{B,A}} \newcommand{\fprox}[1]{\overrightarrow{\operatorname{prox}}_{\thinspace#1}} \newcommand{\bprox}[1]{\overleftarrow{\operatorname{prox}}_{\thinspace#1}} \newcommand{\Hess}{\ensuremath{\nabla^2\!}} \newcommand{\fproj}[1]{\overrightarrow{P\thinspace}_ {\negthinspace\negthinspace #1}} \newcommand{\ffproj}[1]{\overrightarrow{Q\thinspace}_ {\negthinspace\negthinspace #1}} 
\newcommand{\bproj}[1]{\overleftarrow{\thinspace P\thinspace}_ {\negthinspace\negthinspace #1}} \newcommand{\proj}[1]{{\thinspace P\thinspace}_ {\negthinspace\negthinspace #1}} \newcommand{\fD}[1]{\overrightarrow{D\thinspace}_ {\negthinspace\negthinspace #1}} \newcommand{\ffD}[1]{\overrightarrow{F\thinspace}_ {\negthinspace\negthinspace #1}} \newcommand{\ffDbz}{\protect\overrightarrow{F\thinspace}_ {\negthinspace\negthinspace C}(\bz)} \newcommand{\minf}{\ensuremath{-\infty}} \newcommand{\pinf}{\ensuremath{+\infty}} \newcommand{\minimize}[2]{\ensuremath{\underset{\substack{{#1}}}{\mathrm{minimize}}\;\;#2 }} \newcommand{\veet}{\ensuremath{{\scriptscriptstyle\vee}}} \newcommand{\vD}{v_D} \newcommand{\vR}{v_R} \newcommand{\vI}{v} \newcommand{\jvcom}[1]{{\opt{jvcom}{\marginpar{\color{blue} {\sffamily J.V.}}{\color{blue}{{\rm\bfseries [}{\sffamily{#1}} {\rm\bfseries ]}}}}}} \newcommand{\wmcom}[1]{{\opt{wmcom}{\marginpar{\color{red} {\sffamily wmm $\clubsuit$}}{\color{red}{{\rm\bfseries [}{\sffamily{#1}} {\rm\bfseries ]}}}}}} \providecommand{\fejer}{Fej\'{e}r} \newenvironment{deflist}[1][\quad] {\begin{list}{}{\renewcommand{\makelabel}[1]{\textrm{##1~}\hfil} \settowidth{\labelwidth}{\textrm{#1~}} \setlength{\leftmargin}{\labelwidth+\labelsep}}} {\end{list}} \usepackage{amsthm} \usepackage[capitalize,nameinlink]{cleveref} \crefname{equation}{}{equations} \crefname{chapter}{Appendix}{chapters} \crefname{item}{}{items} \crefname{enumi}{}{} \crefname{appsec}{Appendix}{Appendices} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{lem}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{cor}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{defn}[theorem]{Definition} \newtheorem{thm}[theorem]{Theorem} \newtheorem{example}[theorem]{Example} \newtheorem{ex}[theorem]{Example} \newtheorem{fact}[theorem]{Fact} \newtheorem{remark}[theorem]{Remark} \newtheorem{rem}[theorem]{Remark} \def\proof{\noindent{\it Proof}. 
\ignorespaces} \def\endproof{\ensuremath{\hfill \quad \blacksquare}} \renewcommand\theenumi{(\roman{enumi})} \renewcommand\theenumii{(\alph{enumii})} \renewcommand{\labelenumi}{\rm (\roman{enumi})} \renewcommand{\labelenumii}{\rm (\alph{enumii})} \newenvironment{noteJV}{\begin{quote}\color{red} \small\sf JonV $\diamondsuit$~}{\end{quote}} \newenvironment{noteWM}{\begin{quote}\color{OliveGreen}\small\sf Walaa $\clubsuit$~}{\end{quote}} \newcommand{\boxedeqn}[1]{ \[\fbox{ \addtolength{\linewidth}{-2\fboxsep} \addtolength{\linewidth}{-2\fboxrule} \begin{minipage}{\linewidth} \begin{equation}#1\\[+5mm]\end{equation} \end{minipage} }\] } \providecommand{\dss}{\small\displaystyle\small\sum} \providecommand{\dsf}{\displaystyle \frac} \providecommand{\dsint}{\displaystyle \int} \providecommand{\dsm}{\displaystyle \min} \providecommand{\abs}[1]{\lvert#1\rvert} \providecommand{\norm}[1]{\lVert#1\rVert} \providecommand{\Norm}[1]{{\Big\lVert}#1{\Big\rVert}} \providecommand{\normsq}[1]{\lVert#1\rVert^2} \providecommand{\bk}[1]{\left(#1\right)} \providecommand{\stb}[1]{\left\{#1\right\}} \providecommand{\innp}[1]{\langle#1\rangle} \providecommand{\iNNp}[1]{\Big\langle#1\Big\rangle} \providecommand{\lf}{\eft \lfloor} \providecommand{\rf}{\right \rfloor} \providecommand{\lra}{\longrightarrow} \providecommand{\ra}{\rightarrow} \providecommand{\LA}{\Leftarrow} \providecommand{\RA}{\Rightarrow} \providecommand{\lla}{\longleftarrow} \providecommand{\la}{\leftarrow} \providecommand{\grad}{\nabla} \providecommand{\ex}{\exist} \providecommand{\eps}{\varepsilon} \providecommand{\om}{\omega} \providecommand{\til}{\tilde} \providecommand{\e}{\eta} \providecommand{\p}{\phi} \providecommand{\pat}{\partial} \providecommand{\RRR}{\left]-\infty,+\infty\right]} \providecommand{\weak}{\rightharpoonup} \providecommand{\sse}{\subseteq} \providecommand{\sbs}{\subset} \providecommand{\lam}{\lambda} \providecommand{\RR}{\mathbb{R}} \providecommand{\clint}[1]{\left[#1\right]} \providecommand{\opint}[1]{\left]#1\right[} \providecommand{\ocint}[1]{\left]#1\right]} \providecommand{\aff}{\operatorname{aff}} \providecommand{\conv}{\operatorname{conv}} \providecommand{\ran}{\operatorname{ran}} \providecommand{\cone}{\operatorname{cone}} \providecommand{\rint}{\operatorname{rint}} \providecommand{\intr}{\operatorname{int}} \providecommand{\CL}{\operatorname{CL}} \providecommand{\dom}{\operatorname{dom}} \providecommand{\parl}{\operatorname{par}} \providecommand{\JA}{{{ J}_A}} x}{\ensuremath{\operatorname{Fix}}} \providecommand{\proj}{\operatorname{Proj}} \providecommand{\epi}{\operatorname{epi}} \providecommand{\gr}{\operatorname{gra}} \providecommand{\gra}{\operatorname{gra}} \providecommand{\Id}{\operatorname{{ Id}}} \providecommand{\cont}{\operatorname{cont}} \providecommand{\mmin}[2]{\ensuremath{\underset{\substack{{#1}}}{\mathrm{minimize}}\;\;#2 }} \providecommand{\kk}{{\begin{proof} K}} \providecommand{\bK}{{\bf K}} \providecommand{\zz}{{\bf Z}} \providecommand{\ZZ}{{\bf Z}} \providecommand{\KK}{{\bf K}} \providecommand{\fady}{\varnothing} \providecommand{\emp}{\varnothing} \providecommand{\argmin}{\mathrm{arg}\!\min} \providecommand{\argmax}{\mathrm{arg}\!\max} \providecommand{\rras}{\rightrightarrows} \providecommand{\To}{\rightrightarrows} \providecommand{\ccup}{\bigcup} \providecommand{\ind}{\iota} \providecommand{\DD}{{D}} \providecommand{\CC}{\mathcal{C}} \providecommand{\NN}{\mathbb{N}} \providecommand{\ball}[2]{\operatorname{ball}(#1;#2)} \providecommand{\mil}{ \operatorname{the~minimal~ element~of}} 
\providecommand{\gr}{\operatorname{gra}} x}{\operatorname{Fix}} \providecommand{\ran}{\operatorname{ran}} \providecommand{\rec}{\operatorname{rec}} \providecommand{\Id}{\operatorname{Id}} \providecommand{\PR}{\operatorname{P}} \providecommand{\pt}{{\partial}} \providecommand{\pr}{\operatorname{P}} \providecommand{\spn}{\operatorname{span}} \providecommand{\zer}{\operatorname{zer}} \providecommand{\inns}[2][w]{#2_{#1}} \newcommand{\outs}[2][w]{{_#1}#2} \providecommand{\R}{{ R}} \providecommand{\T}{{ T}} \providecommand{\D}{ {D}} \providecommand{\E}{ {V}} \providecommand{\U}{ {U}} \providecommand{\uu}{ {u}} \providecommand{\rra}{\rightrightarrows} \providecommand{\pran}{\PR_{\overline{\ran(\Id-\T)}}(0)} \providecommand{\fady}{\varnothing} \providecommand{\Aw}{\inns[w]{A}} \providecommand{\Bw}{\outs[w]{B}} \providecommand{\wA}{\outs[w]{A}} \providecommand{\wB}{\inns[w]{B}} \providecommand{\wK}{K_w} \providecommand{\wZ}{Z_w} \providecommand{\Tw}{\inns[-w]{\bT}} \providecommand{\wT}{\outs[-w]{\bT}} \providecommand{\Fw}{{\bf F}_{w}} \providecommand{\wF}{{_w}{{\bf F}}} \providecommand{\clran}{\overline{\ran(\Id-\T)}} \newcommand{\cran}{\ensuremath{\overline{\operatorname{ran}}\,}} \providecommand{\conran}{\overline{\conv(\ran(\Id-\T))}} \providecommand{\bhmon}{{$\beta$-cohypomonotone}} \providecommand{\bhmaxmon}{{$\beta$-maximally cohypomonotone}} \providecommand{\rhmon}{{$\rho$-cohypomonotone}} \providecommand{\rhmaxmon}{{$\rho$-maximally cohypomonotone}} \providecommand{\bmon}{{$\beta$-monotone}} \providecommand{\bmaxmon}{{$\beta$-maximally monotone}} \providecommand{\DR}{\operatorname{DR}} \providecommand{\ri}{\operatorname{ri}} \providecommand{\amin}{\operatorname{argmin}} \providecommand{\amax}{\operatorname{argmax}} \providecommand{\CC}{\mathrm{C}} \providecommand{\RR}{\mathbb{R}} \providecommand{\NN}{\mathbb{N}} \providecommand{\C}{\mathcal{C}} \newcommand{\Cr}[2][S]{\ensuremath{{#1}_{#2}}} \providecommand{\xt}{x} \providecommand{\yt}{y} \providecommand{\ut}{a^*} \providecommand{\vt}{b^*} \providecommand{\at}{a} \providecommand{\bt}{b} \providecommand{\aat}{a^*} \providecommand{\bbt}{b^*} \newcommand{\vsne}{super strongly nonexpansive} \newcommand\Wtilde{\stackrel{\sim}{\smash{\mathcal{W}}\rule{0pt}{1.1ex}}} \newenvironment{myproof}[1][\proofname]{ {\emph {#1} } }{\endproof} eld}[1]{\mathbb{#1}} eld{R}} eld{N}} \newcommand{\ds}{\displaystyle} \newcommand{\tn}{|\hskip -1.4pt|\hskip -1.4pt|} \newcommand{\tnorm}{\tn\cdot\tn} \DeclareMathOperator{\domai}{dom} \DeclareMathOperator{\graph}{gr} \definecolor{myblue}{rgb}{0.9,0.9,0.98} \newcommand*\mybluebox[1]{ \colorbox{myblue}{\hspace{1em}#1\hspace{1em}}} \newcommand{\algJA}{\text{\texttt{alg}}(J_A)} \newcommand{\algJR}{{\text{\texttt{alg}}(\bJ\circ\bR)}} \newcommand{\algT}{\text{\texttt{alg}}(\bT)} \allowdisplaybreaks \usepackage[most]{tcolorbox} \newtcbox{\mymath}[1][]{ nobeforeafter, math upper, tcbox raise base, enhanced, colframe=blue!20!black, colback=brown!10, boxrule=0.7pt, #1} \begin{document} \author{ Leon Liu\thanks{Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L~3G1, Canada. . E-mail: \texttt{[email protected]}.}, ~ Walaa M. Moursi\thanks{Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L~3G1, Canada. E-mail: \texttt{[email protected]}.} ~and~ Jon Vanderwerff\thanks{ Department of Mathematics, La Sierra University, Riverside, CA 92515, USA. 
E-mail: \texttt{[email protected]}.} } \title{\textsf{ Strongly nonexpansive mappings revisited: \\ uniform monotonicity and operator splitting } } \date{May 18, 2022} \maketitle \begin{abstract} The correspondence between the class of nonexpansive mappings and the class of maximally monotone operators via the reflected resolvents of the latter has played an instrumental role in the convergence analysis of the splitting methods. Indeed, the performance of some of these methods, e.g., Douglas--Rachford and Peaceman--Rachford methods hinges on iterating the so-called splitting operator associated with the individual operators. This splitting operator is a function of the composition of the reflected resolvents of the underlying operators. In this paper, we provide a comprehensive study of the class of uniformly monotone operators and their corresponding reflected resolvents. We show that the latter is closely related to the class of the strongly nonexpansive operators introduced by Bruck and Reich. Connections to duality via inverse operators are systematically studied. We provide applications to Douglas--Rachford and Peaceman--Rachford methods. Examples that illustrate and tighten our results are presented. \end{abstract} { \noindent {\bfseries 2010 Mathematics Subject Classification:} {49M27, 65K10, 90C25; Secondary 47H14, 49M29. } \noindent {\bfseries Keywords:} contraction mappings, Douglas--Rachford splitting, Peaceman--Rachford splitting, resolvent, reflected resolvent, strongly nonexpansive mapping, uniformly convex function, uniformly monotone operator. \section{Introduction} Throughout, we assume that \begin{empheq}[box=\mybluebox]{equation} \text{$X$ is a real Hilbert space, } \end{empheq} with inner product $\scal{\cdot}{\cdot}\colon X\times X\to\RR$ and induced norm $\|\cdot\|$. Let $A\colon X\rras X$ be a set-valued operator. The \emph{graph} of $A$ is $\gra A=\menge{(x,x^*)\in X\times X}{x^*\in Ax}$. Recall that $A$ is \emph{monotone} if $\{(x,x^*),(y,y^*)\}\subseteq \gra A$ implies that $\scal{x-y}{x^*-y^*}\ge 0$. A monotone operator $A$ is \emph{maximally monotone} if $\gra A$ does not admit a proper extension (in terms of set inclusion) to a graph of a monotone operator. The \emph{resolvent} of $A$ is $J_A=(\Id+A)^{-1} $ and the \emph{reflected resolvent} of $A$ is $R_A=2J_A-\Id $, where $\Id\colon X\to X\colon x\mapsto x$. The theory of monotone operators has been of significant interest in optimization: indeed a typical problem in convex optimization seeks finding a minimizer of the sum $f+g$, where both $f$ and $g$ are proper lower semicontinuous convex functions on $X$. Thanks to Rockafellar's fundamental work (see \cite[Theorem~A]{Rock1970}) the (possibly set-valued) \emph{subdifferential} operators $\partial f$ and $\partial g$ of $f$ and $g$ respectively are maximally monotone. Assuming appropriate constraint qualifications the problem of minimizing $f+g$ amounts to solving the monotone inclusion problem: \begin{equation} \tag{P} \label{P} \text{Find $x\in X$ such that $x\in \zer(A+B) = \menge{x\in X}{0\in Ax+Bx}$.} \end{equation} For a comprehensive discussion on \cref{P} and its connection to optimization problems we refer the reader to \cite{BC2017}, \cite{Borwein50}, \cite{Brezis}, \cite{BurIus}, \cite{Rock98}, \cite{Simons1}, \cite{Simons2}, \cite{Zeidler2a}, \cite{Zeidler2b} and the references therein. Splitting algorithms are potential candidates to solve \cref{P}. 
Many of these algorithms employ the resolvent and/or the reflected resolvent of the underlying operators $A$ and $B$. The monotonicity of an operator $A$ is reflected in the firm nonexpansiveness of its resolvent or, equivalently, in the nonexpansiveness of its reflected resolvent. When a monotone operator $A$ possesses supplementary properties, e.g., strong monotonicity, Lipschitz continuity or cocoercivity, its reflected resolvent enjoys refined notions of nonexpansiveness, see, e.g., \cite{BMW2021}, \cite{BMW12}, \cite{Gis2017}, and \cite{MVan2019}. However, none of these works studies the notion of \emph{uniform monotonicity} and what the corresponding property in the reflected resolvent (if any) could be. \emph{The goal of this paper is to provide a systematic study of the class of uniformly monotone operators and their corresponding reflected resolvents. We show that the latter is closely related to the class of the strongly nonexpansive operators introduced by Bruck and Reich \cite{BR77}. Connection to duality via the inverse operators is systematically studied. When the underlying operators are subdifferentials of proper lower semicontinuous convex functions better duality results hold. We provide applications to Douglas--Rachford, Peaceman--Rachford and forward-backward algorithms. Examples that illustrate and tighten our results are presented. } \subsection*{Organization and notation} The organization of this paper is as follows: \cref{sec:2} contains a collection of auxiliary results and facts. Our main results appear in \cref{sec:3}--\cref{sec:7}. In \cref{sec:3} we provide key results concerning the correspondence between the class of uniformly monotone operators and the new class of super strongly nonexpansive mappings. In \cref{sec:4} we prove the surjectivity of uniformly monotone operators. In \cref{sec:5} and \cref{sec:6} we demonstrate the power of self-dual properties and the connection to the class of contractions for large distances. \cref{sec:7} is dedicated to the study of the compositions of the classes of nonexpansive mappings studied in the paper. Finally, \cref{sec:8} presents applications of our results to refine and strengthen known results in operator splitting methods. The notation we adopt is standard and follows largely, e.g., \cite{BC2017} and \cite{Rock1970}. \section{Facts and auxiliary results} \label{sec:2} We start by recalling the following instrumental fact by Minty. \begin{fact}[{\bf Minty's Theorem}]{\rm\cite{Minty} (see also \cite[Theorem~21.1]{BC2017})} \label{thm:minty} Let $A\colon X\rras X$ be monotone. Then \begin{equation} \label{eq:Minty} \gra A=\menge{(J_A x, (\Id-J_A)x)}{x\in \ran (\Id+A)}. \end{equation} Moreover, \begin{equation} \label{eq:Minty:2} \text{$A$ is maximally monotone $\siff$ $\ran (\Id+A)=X$.} \end{equation} \end{fact} Let $A\colon X\rras X$ be monotone and let $(x,u)\in X\times X$. In view of \cref{thm:minty}, it is easy to check that \begin{equation} \label{lem:gr:RA:i} (x,u)\in \gra J_A\siff (u, x-u) \in \gra A, \end{equation} and that \begin{equation} \label{eq:gr:A:RA} (x,u)\in \gra R_A\siff \bigl(\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u) \bigr) \in \gra A. \end{equation} It is straightforward to verify that (see, e.g., \cite[Proposition~23.38]{BC2017}) \begin{equation} \label{eq:fix:JA:RA} \Fix J_A=\zer A=\Fix R_A, \end{equation} and that \begin{equation} \label{eq:RA:-RA} J_{A^{-1}}=\Id -J_A \text{ and consequently } R_{A^{-1}}=-R_A. 
\end{equation} \begin{example} Suppose that $f\colon X\to \left]-\infty,+\infty\right]$ is convex lower semicontinuous and proper. Then $\partial f$ is maximally monotone. The resolvent $J_A=J_{\partial f}=\prox f$ and the reflected resolvent is $R_f= R_A=2\prox f-\Id$ and hence, by \cref{eq:RA:-RA}, $R_{f^*}=-R_A=\Id-2\prox_f$. \end{example} \begin{proof} See \cite{Rock1970} and \cite[Example~23.3]{BC2017}. \end{proof} Let $\phi\colon \RR_{+}\to\left[0,+\infty\right]$ be an increasing function that vanishes only at $0$ (such a function is called a modulus). Recall that $f\colon X\to\left]-\infty,+\infty\right] $ is \emph{uniformly convex} with modulus $\phi$ if $(\forall x\in \dom f)$ $(\forall y\in \dom f)$ $(\forall \alpha\in \left]0,1\right[)$ \begin{equation} f(\alpha x+(1-\alpha )y)+\alpha(1-\alpha)\phi(\norm{x-y})\le \alpha f(x)+(1-\alpha)f(y). \end{equation} Recall also that $A\colon X\rras X$ is \emph{uniformly monotone} with modulus $\phi$ if $\{(x,x^*),(y,y^*)\}\subseteq \gra A$ implies that \begin{equation} \label{eq:def:um} \scal{x-y}{x^*-y^*}\ge \phi(\norm{x-y}). \end{equation} The notion of uniform monotonicity is naturally motivated by properties of subdifferentials of uniformly convex functions. A comprehensive overview of uniformly convex functions is found in \cite{Za02}. Some other results regarding uniformly convex functions can be found in \cite{BV10,BV12,Za83}. \begin{fact} Suppose that $f\colon X\to \left]-\infty,+\infty\right]$ is uniformly convex with a modulus $\phi$. Then $\partial f$ is uniformly monotone with a modulus $2\phi$. \end{fact} \begin{proof} See \cite[Theorem~3.5.10]{Za02} and also \cite[Example~22.4(iii)]{BC2017}. \end{proof} We now turn to definitions of certain classes of mappings related to the notions of nonexpansiveness and Lipschitz continuity. \begin{definition} \label{d:buttout} Let $T\colon X\to X$, let $(x,y)\in X\times X$ and let $\alpha \in\left]0,1\right[$. \begin{enumerate} \item \label{d:nonex} $T$ is \emph{nonexpansive} if $\norm{Tx-Ty}\le \norm{x-y}$. \item \label{d:snonex} $T $ is \emph{strongly nonexpansive} if $T$ is nonexpansive and we have the implication \begin{align} \label{eq:def:sn} \left.\begin{array}{r@{\mskip\thickmuskip}l} (x_n-y_n)_\nnn \text{ is bounded} \\ \norm{x_n-y_n}-\norm{Tx_n-Ty_n}\to 0 \end{array} \right\} \quad \implies \quad (x_n-y_n)-(Tx_n-Ty_n)\to 0. \end{align} \item \label{d:av} $T $ is \emph{$\alpha$-averaged} if $\alpha\in\left[0,1\right[$ and there exists a nonexpansive operator $N\colon X\to X$ such that $T=(1-\alpha)\Id+\alpha N$; equivalently, we have (see \cite[Proposition~4.35]{BC2017}) \begin{equation} (1-\alpha)\norm{(\Id-T)x-(\Id-T)y}^2 \le \alpha(\norm{x-y}^2-\norm{Tx-Ty}^2). \end{equation} \item \label{d:fne} $T $ is \emph{firmly nonexpansive} if $T$ is $\tfrac{1}{2}$-averaged; equivalently, \begin{equation} \normsq{Tx-Ty}+\normsq{(\Id-T)x-(\Id-T)y}\le \normsq{x-y}. \end{equation} \item $T$ is {\em Lipschitz for large distances} (see \cite[Proposition~1.11]{BL2000}) if for each $\epsilon > 0$, there exists $K_{\epsilon} > 0$ so that $\|Tx - Ty\| \le K_\epsilon \|x - y\|$ whenever $\|x - y\| \ge \epsilon$. \end{enumerate} \end{definition} The following well-known fact summarizes the correspondences between the class of maximally monotone operators and the classes of firmly nonexpansive and nonexpansive mappings. \begin{fact} \label{fact:corres} Let $T\colon X\to X$, and set $A=T^{-1}-\Id$. Then $T=J_A$ and $2T-\Id=R_A$. 
Moreover, \begin{equation} \label{eq:corresp} \text{$A$ is maximally monotone $\siff$ $T$ is firmly nonexpansive $\siff$ $2T-\Id$ is nonexpansive.} \end{equation} \end{fact} \begin{proof} See, e.g., \cite[Theorem~2]{EckBer} or \cite[Corollary~23.11]{BC2017}. \end{proof} We conclude this section with the following fact concerning the asymptotic behaviour of iterates of strongly nonexpansive mappings. \begin{fact} \label{fact:T:sne:converges} Let $T\colon X\to X$ be strongly nonexpansive and suppose that $\Fix T\neq \fady$. Then, for every $x_0\in X$, there exists $\overline{x}\in \Fix T$ such that $(T^nx_0)_\nnn$ converges weakly to $\overline{x}$. \end{fact} \begin{proof} See \cite[Corollary~1.1]{BR77}. \end{proof} \section{Strongly nonexpansive and \ssnonex\ mappings} \label{sec:3} The main goal of this section is to show that the notion of uniform monotonicity of an operator corresponds to the notion of \emph{super strong} nonexpansiveness (see \cref{def:ssexp} below) of its reflected resolvent. Throughout we assume that \begin{empheq}[box=\mybluebox]{equation} \text{$A\colon X\rras X$ and $B\colon X\rras X$ are maximally monotone. } \end{empheq} We start by defining a new subclass of nonexpansive mappings. \begin{definition} \label{def:ssexp} Let $T\colon X\to X$. We say that $T$ is \emph{\ssnonex}\ if $T$ is nonexpansive and we have the implication \begin{equation} \norm{x_n-y_n}^2- \norm{Tx_n-Ty_n}^2\to 0\RA (x_n-y_n)-(Tx_n-T y_n) \to 0. \end{equation} \end{definition} \begin{proposition} \label{lem:une:sne} Let $T\colon X\to X$. Suppose $T$ is \ssnonex. Then $T$ is strongly nonexpansive. \end{proposition} \begin{proof} Let $(x_n)_\nnn$ and $(y_n)_\nnn$ be sequences in $X$ such that $(x_n-y_n)_\nnn$ is bounded and suppose that $\norm{x_n-y_n}-\norm{Tx_n-Ty_n}\to 0$. We claim that \begin{equation} \norm{x_n-y_n}^2-\norm{Tx_n-Ty_n}^2\to 0. \end{equation} Indeed, the nonexpansiveness of $T$ implies that $(Tx_n-Ty_n)_\nnn$ is bounded. Now $\norm{x_n-y_n}^2-\norm{Tx_n-Ty_n}^2 =(\norm{x_n-y_n}-\norm{Tx_n-Ty_n})(\norm{x_n-y_n}+\norm{Tx_n-Ty_n}) \to 0$. Since $T $ is \ssnonex\ we conclude that $(x_n-y_n)-(Tx_n-T y_n) \to 0$. Hence, $T$ is strongly nonexpansive as claimed. \end{proof} The converse of \cref{lem:une:sne} is not true in general as we illustrate in \cref{ex:umoce} below. Nonetheless, when $X=\RR$ the converse of \cref{lem:une:sne} holds, as we show next in \cref{prop-sne-r}. We will use the following simple observation. Let $(a,b)\in X\times X$. Then \begin{equation} \label{e:observ} (\norm{a}-\norm{b})^2\le \abs{\norm{a}^2-\norm{b}^2}. \end{equation} \begin{proposition}[{\bf the case of the real line}] \label{prop-sne-r} Suppose that $T\colon\RR \to \RR$ is strongly nonexpansive. Then $T$ is \ssnonex. \end{proposition} \begin{proof} Suppose for eventual contradiction that $T$ is strongly nonexpansive, but $T$ is not \ssnonex. Then we have sequences $(x_n)_\nnn$ and $(y_n)_\nnn$ in $\RR$ so that \begin{equation} \label{prop-sne-r-a} |x_n - y_n|^2 - |Tx_n - Ty_n|^2 \to 0 \end{equation} but \begin{equation} \label{prop-sne-r-b} (x_n - y_n) - (Tx_n - Ty_n) \not\to 0. \end{equation} It follows from the nonexpansiveness of $T$ and \cref{e:observ} applied with $(a,b,X)$ replaced by $(x_n-y_n, Tx_n-Ty_n,\RR)$ that $(\forall \nnn )$ $0\le (|x_n - y_n| - |Tx_n - Ty_n| )^2\le |x_n - y_n|^2 - |Tx_n - Ty_n|^2 $. Therefore \cref{prop-sne-r-a} implies that \begin{equation} \label{prop-sne-r-c} |x_n - y_n| - |Tx_n - Ty_n| \to 0, \end{equation} and because $T$ is strongly nonexpansive, \cref{prop-sne-r-b} and \cref{prop-sne-r-c} imply $(x_n - y_n)$ is not bounded. 
Thus, without lost of generality, and swapping $x_n$ and $y_n$ as necessary, we may and do assume $(\forall \nnn )$ $y_n - x_n > 1$. If $(Ty_n - Tx_n)_\nnn$ is eventually positive, that is, the same sign as $(y_n - x_n)$ for all large $n$, then \cref{prop-sne-r-c} would imply that $(y_n - x_n) - (Ty_n - Tx_n) \to 0$ in contradiction with \cref{prop-sne-r-b}. Therefore, after passing to a subsequence and relabelling if necessary, we may and do assume that \begin{equation} \label{e:nov23:iv} (\forall \nnn)\;\;Ty_n - Tx_n < 0. \end{equation} Now consider $z_n = x_n + 1$, so $x_n < z_n < y_n$. Observe that \begin{equation} \label{e:nov23:ii} |x_n - y_n| = |x_n - z_n| + |z_n - y_n|=1+|z_n - y_n|. \end{equation} Hence, because $T$ is nonexpansive we learn, in view of the triangle inequality, \cref{e:nov23:ii} and \cref{prop-sne-r-c}, that \begin{subequations} \begin{align} 0&\le |x_n - z_n| -| Tx_n - Tz_n| + |z_n - y_n| -|Tz_n - Ty_n| \\ &= |x_n - y_n|-| Tx_n - Tz_n|-|Tz_n - Ty_n|\le |x_n - y_n|- |Tx_n - Ty_n| \to 0. \end{align} \label{e:nov23:i} \end{subequations} It follows from the nonexpansiveness of $T$ and \cref{e:nov23:i} that $ |Tz_n - Tx_n| - |z_n - x_n| \to 0$. Consequently, because $T$ is strongly nonexpansive, we have $(Tz_n - Tx_n) - 1 =(Tz_n - Tx_n) - (z_n - x_n)\to 0$. After passing to a subsequence and relabelling if necessary, we may and do assume that \begin{equation} \label{e:nov23:iii} (\forall \nnn) \; \;Tz_n \ge Tx_n. \end{equation} Because $T$ is nonexpansive, in view of \cref{e:nov23:ii} we have $(\forall \nnn)$ $|Ty_n - Tz_n| \le |y_n - z_n| = |y_n - x_n| - 1$. This and \cref{e:nov23:iii} imply $ (\forall \nnn) \; \;Ty_n \ge Tz_n - (|y_n - x_n| - 1) \ge Tx_n - |y_n - x_n| + 1. $ Therefore, by \cref{prop-sne-r-c} we have $(\forall \nnn)$ $ -|y_n - x_n| + 1 \le Ty_n - Tx_n < 0 $. That is $(\forall \nnn)$ $|x_n - y_n| - |Tx_n - Ty_n| \ge 1$, and this contradicts \cref{prop-sne-r-c}. This completes the proof. \end{proof} We now turn to the correspondence between uniformly monotone operators and \ssnonex\ operators. We start with the following lemma that provides a characterization of uniformly monotone operators via their reflected resolvents. \begin{lemma} \label{prop:um:sn:i} The following hold: \begin{enumerate} \item \label{prop:um:sn:i:i} Suppose that $A$ is uniformly monotone with modulus $\phi$. Then $(\forall (x,y)\in X\times X)$ \begin{equation} \label{eq:ref:res:um} \norm{x-y}^2-\norm{R_Ax-R_Ay}^2\ge 4\phi\big(\tfrac{1}{2}\norm{(x-y)+(R_Ax-R_Ay)}\big) =4\phi\big( \norm{J_Ax-J_Ay}\big). \end{equation} \item \label{prop:um:sn:i:ii} Suppose that there exists a modulus function $\phi$ such that $(\forall (x,y)\in X\times X)$ \begin{equation} \label{eq:assmp:RA:mod} \norm{x-y}^2-\norm{R_Ax-R_Ay}^2\ge \phi\big(\norm{(x-y)+(R_Ax-R_Ay)}\big). \end{equation} Then $A$ is uniformly monotone with a modulus $\tfrac{1}{4}\phi\circ (2(\cdot))$. \end{enumerate} \end{lemma} \begin{proof} \cref{prop:um:sn:i:i}: It follows from \cref{eq:gr:A:RA} that $ \gra A=\menge{\tfrac{1}{2}(x+R_Ax,x-R_Ax)}{x\in X}$. Now combine this with \cref{eq:def:um}. \cref{prop:um:sn:i:ii}: Let $\{(x,x^*),(y,y^*)\}\subseteq \gra A$ and observe that \cref{eq:gr:A:RA} implies that \begin{equation} \label{eq:gra:RA:opp} \{(x+x^*,x-x^*),(y+y^*,y-y^*)\}\subseteq \gra R_A. 
\end{equation} Now \begin{subequations} \label{eq:un:RA} \begin{align} &\quad\scal{x-y}{x^*-y^*} \\ &=\tfrac{1}{4}\scal{x+x^*-(y+y^*)+x-x^*-(y-y^*)}{ x+x^*-(y+y^*)-(x-x^*-(y-y^*))} \\ &=\tfrac{1}{4}(\norm{x+x^*-(y+y^*)}^2-\norm{x-x^*- ( y-y^*)}^2) \ge \tfrac{1}{4}\phi (2\norm{x-y}), \label{eq:un:RA:c} \end{align} \end{subequations} where the inequality in \cref{eq:un:RA:c} follows from combining \cref{eq:assmp:RA:mod} applied with $(x,y)$ replaced with $(x+x^*,y+y^*)$ and \cref{eq:gra:RA:opp}. The proof is complete. \end{proof} \begin{proposition} \label{prop:um:sne} Consider the following statements: \begin{enumerate} \item \label{prop:um:sne:i} $A$ is uniformly monotone. \item \label{prop:um:sne:ii} $-R_A$ is \ssnonex. \item \label{prop:um:sne:iii} $-R_A$ is strongly nonexpansive. \end{enumerate} Then \cref{prop:um:sne:i}$\siff$\cref{prop:um:sne:ii}$\RA$\cref{prop:um:sne:iii}. \end{proposition} \begin{proof} \cref{prop:um:sne:i}$\RA$\cref{prop:um:sne:ii}: Let $\phi$ be a modulus function for $A$. Recalling \cref{eq:corresp} we have $-R_A$, as is $R_A$, is nonexpansive. Let $(x_n-y_n)_\nnn$ be a sequence in $X$ and suppose that $\norm{x_n-y_n}^2-\norm{R_Ax_n-R_Ay_n}^2\to 0$. Combining this with \cref{prop:um:sn:i}, we learn that $0\le \phi(\norm{(x_n+R_Ax_n)-(y_n+R_Ay_n)})\to 0$. Since $\phi$ is increasing and vanishes only at $0$ we must have $(x_n+R_Ax_n)-(y_n+R_Ay_n)\to 0$, hence $-R_A$ is \vsne. \cref{prop:um:sne:ii}$\RA$\cref{prop:um:sne:i}: Suppose $A$ is not uniformly monotone but $-R_A$ is \vsne. Because $A$ is not uniformly monotone, there exist sequences $(a_n, a^*_n)_\nnn$ and $ (b_n, b^*_n)_\nnn$ in $ \gra A$ and $\epsilon > 0$ such that \begin{equation} \label{prop:vsne:um:i} \langle a_n - b_n, a^*_n -b^*_n \rangle \to 0 \ \ \mbox{but}\ \ (\forall \nnn) \|a_n - b_n\| \ge \epsilon. \end{equation} Set $(\forall \nnn)$ $x_n=a_n+a^*_n$ and $y_n=b_n+b^*_n$ and observe that Minty's parametrization of $\gra A$ implies that $(\forall \nnn)$ $(a_n,a^*_n,b_n,b^*_n) =\tfrac{1}{2}(x_n+R_Ax_n,x_n-R_Ax_n,y_n+R_Ay_n,y_n-R_Ay_n)$. Therefore \begin{subequations} \begin{align} &\quad\|x_n - y_n\|^2 - \|-R_A x_n -(- R_A y_n)\|^2 \nonumber \\ &=\|x_n - y_n\|^2 - \|R_A x_n - R_A y_n\|^2 \\ &=\scal{x_n - y_n+(R_A x_n - R_A y_n)}{x_n - y_n-(R_A x_n - R_A y_n)} \\ &=\scal{x_n + R_A x_n -(y_n+ R_A y_n)}{x_n - R_A x_n -(y_n- R_A y_n)} \\ &=4\scal{a_n - b_n}{ a^*_n -b^*_n}\to 0, \label{prop:vsne:um:e} \end{align} \label{prop:vsne:um:ii} \end{subequations} where the limit in \cref{prop:vsne:um:e} follows from \cref{prop:vsne:um:i}. Because $-R_A$ is \vsne\ \cref{prop:vsne:um:ii} implies \begin{equation} \label{prop:vsne:um:iii} a_n-b_n=\tfrac{1}{2}((x_n + R_A x_n) - (y_n + R_A y_n)) =\tfrac{1}{2}((x_n - y_n) - (-R_A x_n -(-R_A y_n)) )\to 0. \end{equation} This contradicts (\ref{prop:vsne:um:i}), hence $A$ is uniformly monotone. \cref{prop:um:sne:ii}$\RA$\cref{prop:um:sne:iii}: Apply \cref{lem:une:sne} with $T$ replaced by $-R_A$. \end{proof} \begin{corollary} \label{cor-real-line-eq} Suppose that $X=\RR$. The following are equivalent. \begin{enumerate} \item $A$ is uniformly monotone. \item $-R_A$ is strongly nonexpansive. \item $-R_A$ is \vsne. \end{enumerate} \end{corollary} \begin{proof} Combine \cref{prop:um:sne} and \cref{prop-sne-r}. \end{proof} \begin{example} \label{ex:umoce} Suppose that $X=\mathbb{R}^2$. 
Set $a_0=0$ and set $(\forall m \ge 1)$ \begin{subequations} \begin{align} a_m&=2^{m+1}-2\\ w_m&=\tfrac{1}{\sqrt{4^m+1}}(2^m,1)\\ K_m&=({4^m-4^{-m}})^{1/2}\\ \beta_m&=\tfrac{1}{2^m}K_m\\ D_m&=\menge{(x,y)}{x\le a_m}. \end{align} \end{subequations} Let \begin{equation} T(x,y)\colon \RR^2\to \RR^2\colon (x,y)\mapsto \begin{cases} (0,0),&x\le 0; \\ \sum_{j=0}^{m-1}K_jw_j+(\tfrac{x-a_{m-1}}{2^m})K_mw_m,& x\in [a_{m-1},a_m], m\ge 1. \end{cases} \end{equation} Set $\widetilde{A}=\big(\tfrac{\Id-T}{2}\big)^{-1}-\Id$. Then the following hold: \begin{enumerate} \item \label{ex:umoce:0} $(\forall m \in \mathbb{N})$ $T_{|D_m}$ is a contraction with a constant $\beta_m$. \item \label{ex:umoce:i} $T$ is strongly nonexpansive, hence nonexpansive. \item \label{ex:umoce:ii} There exist sequences $(x_n)_\nnn, (y_n)_\nnn $ in $ \RR^2$ satisfying \begin{equation} \label{e:umoce} \|x_n - y_n\|^2 - \|T x_n - Ty_n\|^2 \to 0, \ \ \mbox{nevertheless}\ \ (x_n - y_n) - (Tx_n - Ty_n) \not\to 0. \end{equation} Consequently, $T$ is not \ssnonex. \item \label{ex:umoce:iii:0} $T=-R_{\widetilde{A}}$. \item \label{ex:umoce:iii} $\widetilde{A}$ is maximally monotone. \item \label{ex:umoce:iv} $\widetilde{A}$ is \emph{not} uniformly monotone. \end{enumerate} \end{example} \begin{proof} Observe that $(w_m)_{m\in \NN}$ is a sequence of unit vectors whose positive slopes are strictly decreasing to $0$, that $(\beta_m)_{m\in \NN}$ is a sequence of strictly increasing real numbers in $\left[0,1\right[$ and that $(K_m)_{m\in \NN}$ is a sequence of strictly increasing real numbers with $K_m\to +\infty$. \cref{ex:umoce:0}: Let $m\in \NN$. Observe that if $m=0 $ the conclusion is obvious. Therefore, we assume $m\ge 1$. Let $\{u,v\}\subseteq D_m$, with $u=(u_1,u_2)$ and $v=(v_1,v_2)$ and let $\{r,s\}\subseteq \{1, \ldots,m\}$ be such that $u_1\in [a_{r-1}, a_r]$ and $v_1\in [a_{s-1}, a_s]$. Without loss of generality we may and do assume that $u_1\le v_1$ and hence $r\le s\le m$. If $r=s$ then the definition of $T$ implies that \begin{equation} \label{e:umoce:i} Tu - Tv=\beta_r(u_1-v_1)w_r. \end{equation} Consequently, we have \begin{equation} \label{e:cont:oneint} \norm{Tu-Tv} =\beta_r\abs{u_1-v_1}\le \beta_r \norm{u-v}. \end{equation} Observe that $(\forall (u_1,u_2)\in \RR^2)$ we have $T(u_1,u_2)=T(u_1,0)$. Moreover, let $ k\ge 1$. Applying \cref{e:umoce:i} with $(u,v)$ replaced by $((a_{k-1},0),(a_{k},0))$ yields: \begin{equation} \label{e:am:am1} {T(a_k,0)-T(a_{k-1},0)} =\beta_k(a_{k}-a_{k-1})w_k=K_kw_k. \end{equation} Using the triangle inequality, \cref{e:umoce:i} and \cref{e:am:am1} we have \begin{subequations} \begin{align} \norm{Tu-Tv} &=\norm{T(v_1,v_2)-T(u_1,u_2)}=\norm{T(v_1,0)-T(u_1,0)} \\ &= \norm{T(v_1,0) - T(a_{s-1},0) +T(a_{s-1},0)-T(a_{s-2},0)+\ldots \nonumber \\ & \quad -T(a_{r},0)+T(a_{r},0) -T(u_1,0)} \\ &\le \norm{T(v_1,0) - T(a_{s-1},0)}+\norm{T(a_{s-1},0)-T(a_{s-2},0)}+\ldots \nonumber \\ &\quad +\norm{T(a_{r},0) -T(u_1,0)} \\ &=\beta_s(v_1-a_{s-1})+\beta_{s-1}(a_{s-1}-a_{s-2})+\ldots+\beta_{r+1}(a_{r+1}-a_{r})+\beta_r(a_r-u_1) \\ &\le\beta_m(v_1-a_{s-1})+\beta_{m}(a_{s-1}-a_{s-2})+\ldots+\beta_{m}(a_{r+1}-a_{r})+\beta_m(a_r-u_1) \\ &=\beta_m(v_1-u_1) \le \beta_m\norm{u-v}. \end{align} \end{subequations} \cref{ex:umoce:i}: Suppose $(x_n)_\nnn$, and $ (y_n)_\nnn$ are sequences in $ \RR^2$ such that \begin{equation} \label{e:umoce:iii} \|x_n - y_n\| - \|Tx_n - Ty_n\| \to 0 \ \ \mbox{and} \ \ (x_n - y_n)_\nnn \ \mbox{is bounded}. \end{equation} Let us denote $x_n = (x_{n,1},x_{n,2})$, $y_n = (y_{n,1},y_{n,2})$. 
We proceed by proving the following claims: \textsc{Claim~1}: The sequences $(x_{n,1})_\nnn$, and $ (y_{n,1})_\nnn$ are unbounded. \\ Indeed, suppose for eventual contradiction that one of the sequences $(x_{n,1})_\nnn$, and $ (y_{n,1})_\nnn$ is bounded. The boundedness of $(x_n - y_n)_\nnn$ implies that both sequences $(x_{n,1})_\nnn$ and $ (y_{n,1})_\nnn$ must be bounded. Indeed, without loss of generality we may and do assume that $(x_n-y_n)\not\to 0$. Let $m\ge 1$ be such that $(\forall \nnn)$ $\max \{x_{n,1},y_{n,1}\}\le a_m$. Observe that \cref{ex:umoce:0} implies that $\norm{Tx_n-Ty_n}\le \beta_m\norm{x_n-y_n}$. Hence, \begin{equation} 0<(1-\beta_m)\norm{x_n-y_n}\le \norm{x_n-y_n}-\norm{Tx_n-Ty_n}\to 0. \end{equation} That is, $\norm{x_n-y_n}\to 0$, which is absurd. Therefore, the sequences $(x_{n,1})_\nnn$ and $ (y_{n,1})_\nnn$ are unbounded as claimed. After passing to a subsequence and relabelling if necessary we conclude that \begin{equation} x_{n,1}\to \infty \quad \text{and} \quad y_{n,1}\to \infty. \end{equation} Because $(x_n - y_n)_\nnn$ is bounded and the slopes of the vectors $(w_n)_\nnn$ go to $0$ as $n \to \infty$. \textsc{Claim~2:} \begin{equation} \text{$(\forall \nnn)$ $Tx_n - Ty_n = (c_n(x_{n,1} - y_{n,1}), d_n)$ where $c_n \to 1^{-}$ and $d_n \to 0$. } \end{equation} To verify \textsc{Claim~2}, let $M > 0$ be such that $(\forall \nnn)$ $\|x_n - y_n\| \le M$. it follows from \cref{e:umoce:i} that \begin{equation} \label{e:umoce:ii:i} Tu - Tv=\tfrac{K_r}{{\sqrt{4^r+1}}}({u_1-v_1})\big(1, \tfrac{1}{2^r}\big). \end{equation} In view of \cref{e:umoce:ii:i} and \cref{e:cont:oneint} we learn that for $n$ and $j$ such that $\min\{x_{n,1},y_{n,1}\} \ge a_{j-1}$, it follows that \begin{equation} \label{e:umoce:iv} |d_n| \le 2^{-j}M \qquad \mbox{while} \qquad c_n \ge \frac{K_j }{{\sqrt{4^j+1}}}= \left(\frac{4^j - 4^{-j}}{4^j + 1}\right)^{1/2}. \end{equation} Note it is possible that the interval with endpoints $x_{n,1}$ and $y_{n,1}$ may intersect more than one interval $[a_{r-1},a_r]$. However, $2^{-n}$ decreases as $n$ increases and $K_n/{\sqrt{4^n+1}}$ increases as $n$ increases in \cref{e:umoce:ii:i} and so \cref{e:umoce:iv} remains valid in this case as well since $a_{j - 1} \le \min\{x_{n,1},y_{n,1}\}$. This verifies \textsc{Claim~2}. \textsc{Claim~3:} \begin{equation} |x_{n,2} - y_{n,2}| \to 0. \end{equation} Indeed, set $(\forall \nnn) $ $\overline{x}_n=(x_{n,1},0)$ and $\overline{y}_n=(y_{n,1},0)$ and note that $\norm{\overline{x}_n-\overline{y}_n}\le \norm{x_n-y_n}$ and that $(T\overline{x}_n,T\overline{y}_n)=(Tx_n,Ty_n)$. Therefore, the nonexpansiveness of $T$ and \cref {e:umoce:iii} imply \begin{equation} 0\le \norm{\overline{x}_n-\overline{y}_n}-\norm{T\overline{x}_n-T\overline{y}_n} = \norm{\overline{x}_n-\overline{y}_n}- \|Tx_n - Ty_n\| \le \|x_n - y_n\| - \|Tx_n - Ty_n\| \to 0. \end{equation} That is \begin{equation} \label{e:aux:seq} \norm{\overline{x}_n-\overline{y}_n}- \|Tx_n - Ty_n\|\to 0. \end{equation} Subtracting \cref{e:umoce:iii} and \cref{e:aux:seq} yields \begin{equation} \norm{ {x}_n- {y}_n} - \norm{\overline{x}_n-\overline{y}_n} \to 0. \end{equation} Consequently we have \begin{subequations} \begin{align} |x_{n,2} - y_{n,2}| ^2&= \norm{ {x}_n- {y}_n}^2 - \norm{\overline{x}_n-\overline{y}_n}^2 \\ &=( \norm{ {x}_n- {y}_n}- \norm{\overline{x}_n-\overline{y}_n})( \norm{ {x}_n- {y}_n}+ \norm{\overline{x}_n-\overline{y}_n}) \to 0. 
\end{align} \end{subequations} Combining \textsc{Claim~2} and \textsc{Claim~3} we learn that \begin{equation} (x_n-y_n)-(Tx_n-Ty_n)\to (0,0), \end{equation} hence $T$ is strongly nonexpansive. \cref{ex:umoce:ii}: Set $(\forall \nnn)$ $x_n = (a_{n+1},0)$ and $y_n = (a_{n}, 0)$. Observe that $x_n - y_n = (2^n,0)$ while $Tx_n - Ty_n = K_n w_n = \tfrac{K_n}{\sqrt{4^n+1}}( 2^n,1)$. Therefore, $\|x_n - y_n\|^2 = 4^n$ and $\|Tx_n - Ty_n\|^2 = \| K_n w_n\|^2 = K_n^2=4^n - 4^{-n}$. Hence \begin{equation} \|x_n - y_n\|^2 - \|T x_n - Ty_n\|^2 \to 0 \end{equation} However, \begin{equation} (x_n - y_n) - (Tx_n - Ty_n) =\Big(\big(1-\tfrac{K_n}{\sqrt{4^n+1}}\big)2^n,-\tfrac{K_n}{\sqrt{4^n+1}}\Big) \to (0,-1)\neq (0,0). \end{equation} and so (\ref{e:umoce}) holds. \cref{ex:umoce:iii:0}: This is clear. \cref{ex:umoce:iii}: Combine \cref{ex:umoce:i} and \cite[Corollary~23.9~and~Proposition~4.4]{BC2017}. \cref{ex:umoce:iv}: Combine \cref{ex:umoce:ii}, \cref{ex:umoce:iii:0} and \cref{prop:um:sne}. \end{proof} \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{T_proj_plot} \end{center} \caption{A \texttt{GeoGebra} snapshopt illustrating the operator $T$ in \cref{ex:umoce}. Here $x(0)\coloneqq T(x,y), x\le 0, y\in \RR$ and $x(a_i)\coloneqq T(a_i,y)$ where $y\in \RR$, $i\in\{1,2,\ldots, 7\} $.} \end{figure} \section{Properties of Uniformly Monotone Operators} \label{sec:4} The main goals of this section are to establish that uniformly monotone operators are surjective, have unique zeros and have uniformly continuous inverse operators. The following result is motivated by Z\u{a}linescu's important result \cite[Proposition 3.5.1]{Za02} concerning the growth rate of the modulus of a uniformly convex function. \begin{proposition} \label{prop-uni-mon-mod} Let $Y$ be a Banach space and suppose $B\colon Y \rras {Y^*}$ is uniformly monotone\footnote{Let $Y$ be a Banach space. An operator $B \colon Y \rras {Y^*}$ is {\em uniformly monotone} with a modulus $\phi\colon \RR_+ \to [0,+\infty]$ if $\phi$ is increasing, vanishes only at $0$ and $(\forall (x,x^*)\in \gra B)$ $(\forall (y,y^*)\in \gra B)$ $ \langle x - y, x^*-y^*\rangle\ge \phi(\|x- y\|) . $} with convex domain. Then $B$ has a supercoercive modulus $\phi$ that satisfies the following property. \begin{equation} \label{eq-ums} \mbox{For each} \ \epsilon > 0 \ \ (\exists\beta_\epsilon > 0) \ \ \mbox{so that}\ \ \phi(t) \ge \beta_\epsilon t^2 \ \ \mbox{whenever} \ t \ge \epsilon, \ \mbox{in particular}\ \liminf_{t \to \infty} \frac{\phi(t)}{t^2} > 0. \end{equation} \end{proposition} \begin{proof} Because $B$ is uniformly monotone we fix $\alpha > 0$ so that $(\forall (x,x^*)\in \gra B)$ $(\forall (y,y^*)\in \gra B)$ \begin{equation} \label{eq-uma} \langle y - x, y^* - x^* \rangle \ge \alpha \ \ \mbox{whenever} \ \|y - x\| \ge 1. \end{equation} We claim that: \begin{equation} \label{eq-umc} \langle y - x, y^* - x^* \rangle \ge \tfrac{ \alpha}{4}\|x-y\|^2 \ \mbox{whenever} \ \|y - x\| \ge 1, (x,x^*)\in \gra B, \ (y,y^*)\in \gra B. \end{equation} To verify \cref{eq-umc} it is sufficient to show that $(\forall k\in \NN) $ we have \begin{equation} \label{eq-umb} \langle y - x, y^* - x^* \rangle \ge 2^{2k} \alpha \ \mbox{whenever} \ \|y - x\| \ge 2^k, \ (x,x^*)\in \gra B, \ (y,y^*)\in \gra B. \end{equation} We proceed by induction on $k$. Indeed, \cref{eq-uma} verifies the base case at $k=0$. Now suppose that \cref{eq-umb} is true for some $k = n$. Suppose that $\ (x,x^*)\in \gra B, \ (z,z^*)\in \gra B$ satisfy $\|z - x\| \ge 2^{n+1}$. 
Let $y = (x+z)/2$ and observe that $y- x= z- y = (z - x)/2 $, hence $\norm{y- x}= \norm{z- y}\ge 2^n$. Because $\dom B$ is convex we have $y \in \domai B$ and so we can pick $y^* \in By$. It follows from the inductive hypothesis that \begin{subequations} \begin{align} 2^{2n} \alpha &\le \langle y - x, y^* - x^* \rangle = \left\langle \tfrac{z-x}{2} , y^* - x^* \right\rangle \label{se:ine:1} \\ 2^{2n} \alpha &\le \langle z -y, z^* - y^*\rangle = \left\langle \tfrac{z-x}{2} , z^* - y^* \right\rangle \label{se:ine:2} \end{align} \end{subequations} Adding \cref{se:ine:1} and \cref{se:ine:2} yields $\ds \left\langle \tfrac{z-x}{2}, z^* - x^*\right\rangle \ge 2 (2^{2n})\alpha$ and consequently \begin{equation} \langle z - x, z^* - x^*\rangle \ge 4( 2^{2n})\alpha = 2^{2(n+1)} \alpha, \end{equation} which proves (\ref{eq-umc}). We now establish (\ref{eq-ums}). Let $\epsilon >0$. In view of \cref{eq-umc} if $\epsilon \ge 1$ we set $\beta_\epsilon=\tfrac{\alpha}{4}$. Now suppose that $0 < \epsilon < 1$. The uniform monotonicity of $B$ implies that $(\exists \alpha_\epsilon > 0)$ such that \begin{equation} \label{eq-umd} \langle y - x, y^* - x^* \rangle \ge \alpha_\epsilon \ \ \mbox{whenever} \ \|y - x\| \ge \epsilon, (x,x^*)\in \gra B, \ (y,y^*)\in \gra B. \end{equation} In view of \cref{eq-umc} the conclusion follows by setting $\beta_\epsilon = \min\{\alpha/4,\alpha_\epsilon\}$ for $0 < \epsilon < 1$, and $\beta_\epsilon = \alpha/4$ for $\epsilon \ge 1$. \end{proof} Analogous to the concept of Lipschitz for large distances in \cite{BL2000}, we introduce the following definition. \begin{definition} \label{defn:in-the-large-cont} We say that $T\colon X \to X$ is a {\em contraction for large distances} if for each $\epsilon > 0$, $(\exists K_\epsilon < 1)$ so that $\|Tx - Ty\| \le K_\epsilon\|x - y\|$ whenever $(x,y) \in X\times X$ satisfy $\|x - y\| \ge \epsilon$. \end{definition} \begin{remark} Contractions for large distances are not a new concept. In fact, they coincide with an unnamed class of nonexpansive maps introduced by Rakotch in \cite[Definition~2]{Rak62}, and have been referred to as \emph{contractive} by others, including, for example, Reich \& Zaslavski in \cite{ReiZas00}. However, in \cite{Rak62} the term contractive referred to the more general class of strictly nonexpansive mappings. \end{remark} \begin{lemma} \label{lem-inv-res} Suppose $A$ is uniformly monotone, then the following hold: \begin{enumerate} \item \label{lem-inv-res:a} $J_{A^{-1}}$ is uniformly monotone with a supercoercive modulus. \item \label{lem-inv-res:b} For each $\epsilon > 0$ $(\exists\beta_\epsilon \in \left ]0,1\right])$ such that $(\forall (x,y)\in X\times X)$ satisfying $\|x - y \| \ge \epsilon$ we have \begin{equation} \label{eq-ir-s} \scal{J_Ax - J_Ay}{J_{A^{-1}}x - J_{A^{-1}}y} + \|J_{A^{-1}}x - J_{A^{-1}}y\|^2 \ge \beta_\epsilon \norm{x-y}^2. \end{equation} \item \label{lem-inv-res:ab} $J_{A^{-1}}$ is surjective. \item \label{lem-inv-res:c} $J_{A}$ is a contraction for large distances. \end{enumerate} \end{lemma} \begin{proof} Let $\epsilon > 0$ and let $\phi$ be a modulus function for $A$. Set \begin{equation} \label{eq-ir-alp} \alpha=\alpha(\epsilon) = \min\{\phi(\epsilon/2), \epsilon^2/4\}. \end{equation} Then $\alpha > 0$. \cref{lem-inv-res:a}\&\cref{lem-inv-res:b}: Let $(x,y)\in X\times X$ be such that $\|x -y\| \ge \epsilon$. The triangle inequality and \cref{thm:minty} imply that \begin{equation} \label{eq-ir-c} \max \{\|J_{A}x - J_{A}y\|,\|J_{A^{-1}}x - J_{A^{-1}}y\|\} \ge \epsilon/2. 
\end{equation} Therefore we obtain \begin{subequations} \begin{align} \scal{x - y}{ J_{A^{-1}}x - J_{A^{-1}}y} &= \scal{J_{A}x - J_{A}y+J_{A^{-1}}x - J_{A^{-1}}y}{ J_{A^{-1}}x - J_{A^{-1}}y} \\ &=\scal{J_{A}x - J_{A}y}{J_{A^{-1}}x - J_{A^{-1}}y} + \|J_{A^{-1}}x - J_{A^{-1}}y\|^2 \\ &\ge \phi(\|J_{A}x - J_{A}y\|) + \|J_{A^{-1}}x - J_{A^{-1}}y\|^2 \ge \alpha. \end{align} \label{eq-ir-b} \end{subequations} It follows from (\ref{eq-ir-b}) that $J_{A^{-1}}$ is uniformly monotone. Combining this with \cref{prop-uni-mon-mod} applied with $(Y, A)$ replaced by $(X, J_{A^{-1}})$ and the fact that $\dom J_{A^{-1}}=X$ we learn that $J_{A^{-1}}$ is uniformly monotone with a supercoercive modulus that satisfies \cref{eq-ums}. The claim that $\beta_\epsilon \le1$ is a direct consequence of the nonexpansiveness of $J_{A^{-1}}$ and the monotonicity of $A$ in view of \cref{eq-ir-s}. \cref{lem-inv-res:ab}: Combine \cref{lem-inv-res:a} and \cite[Proposition~22.11(ii)]{BC2017}. \cref{lem-inv-res:c}: Indeed, using \cref{lem-inv-res:b} and the firm nonexpansiveness of $J_A$ there exists $(\exists\beta_\epsilon \in \left ]0,1\right])$ such that \begin{subequations} \begin{align} 0 \le \norm{J_{A}x - J_{A}y}^2&\le \scal{x - y}{ J_{A}x - J_{A}y} = \norm{x-y}^2-\scal{x-y}{ J_{A^{-1}}x - J_{A^{-1}}y} \\ &\le \norm{x-y}^2-\phi(\norm{x-y}) \le (1-\beta_\epsilon) \norm{x-y}^2, \end{align} \label{eq-ir-e} \end{subequations} and the conclusion follows. \end{proof} The previous results lead to nice consequences for uniformly monotone operators which we next state as the main result of this section. \begin{theorem} \label{thm-umg} Suppose $A\colon X \rras X$ is uniformly monotone. Then the following hold. \begin{enumerate} \item \label{thm-umg:i} $A$ satisfies the growth condition \begin{equation} \label{e:growth} \lim_{\|x-y\| \to \infty} \inf_{\substack{(x,x^*)\in \gra A\\(y,y^*)\in \gra A}} \frac{\|x^* - y^*\|}{\|x - y\|} > 0. \end{equation} \item \label{thm-umg:ii} $A$ is surjective. \item \label{thm-umg:iii} $A$ has a unique zero. \item \label{thm-umg:iv} $A^{-1}$ is uniformly continuous. \end{enumerate} \end{theorem} \begin{proof} \cref{thm-umg:i}: Suppose for eventual contradiction that there exist sequences $(x_n,x_n^*)_\nnn$, $(y_n,y_n^*)_\nnn$ in $\gra A$ such that $\|x_n - y_n\| \to \infty$ but \begin{equation} \lim_{n \to \infty} \frac{\|x_n^* - y_n^*\|}{\|x_n - y_n\|} = 0. \end{equation} Indeed, write $\|x_n^* - y_n^*\| = a_n\|x_n - y_n\|$ where $a_n \to 0^+$. Then using \cref{eq-ir-s}, we have $\beta > 0$ such that \begin{equation} (\forall \nnn)\quad a_n\|x_n - y_n\|^2 + a_n^2\|x_n - y_n\|^2 \ge \beta(\|x_n -y_n\|^2 + a_n^2\|x_n - y_n\|^2), \end{equation} where on the right side, we drop the inner product from \cref{eq-ir-s} because it is nonnegative by monotonicity of A. But the above inequality is impossible since $a_n \to 0^+$ while $\beta>0$ is fixed. Hence, \cref{e:growth} holds. \cref{thm-umg:ii}: Combine \cref{lem-inv-res}\cref{lem-inv-res:ab} and the fact that $\ran A=\dom A^{-1}=\dom (\Id+A^{-1})=\ran (\Id+A^{-1})^{-1}=\ran J_{A^{-1}}$. \cref{thm-umg:iii}: It follows from \cref{thm-umg:ii} that $\zer A\neq \fady$. Moreover, $A$ is strictly monotone, hence $\zer A$ is at most a singleton by, e.g., \cite[Proposition~23.35]{BC2017}. Altogether, $A$ possess a unique zero. \cref{thm-umg:iv}: Let $\epsilon > 0$. In view of \cref{thm-umg:i} choose $K > 0$ such that $(\forall (x,x^*)\in \gra A)$ $(\forall (y,y^*)\in \gra A)$ \begin{equation} \label{eq-umg-b} \|x - y\| \ge K\RA \|x^* - y^*\| > \epsilon. 
\end{equation} Let $\alpha = \phi(\epsilon)$ where $\phi$ is a modulus function for $A$. Then $\alpha > 0$ and $(\forall (x,x^*)\in \gra A)$ $(\forall (y,y^*)\in \gra A)$ \begin{equation} \label{eq-umg-c} \|x - y\| \ge \epsilon \RA \scal {x - y}{ x^* - y^*}\ge \alpha . \end{equation} Now choose $\delta = \min\{\alpha/K, \epsilon\}$. Recalling \cref{thm-umg:ii}, let $\{x^* ,y^*\} \subseteq X=\dom A^{-1}$. Suppose $\|x^* - y^*\| < \delta$ and let $(x^*,x) ,(y^*,y) $ be points in $\gra A^{-1}$, equivalently; $(x,x^*) ,(y,y^*) $ are points in $\gra A$. Because $\|x^* - y^*\| < \delta \le \epsilon$, (\ref{eq-umg-b}) implies $\|x - y\| < K$. Because $\|x - y\| < K$, using Cauchy--Schwarz we obtain \begin{equation} \label{eq-umg-d} \scal{x - y}{x^* - y^*} \le \|x-y\| \|x^* - y^*\| < K\delta \le \alpha. \end{equation} Combining \cref{eq-umg-d} and \cref{eq-umg-c} we learn that $\|x - y\| < \epsilon$. Therefore $A^{-1}$ is uniformly continuous as desired. \end{proof} \begin{remark} In passing we point out that \cref{thm-umg}\cref{thm-umg:ii}\&\cref{thm-umg:iii} relax the assumptions of \cite[Proposition 22.11~and~Corollary 23.37]{BC2017}. Indeed, \cref{thm-umg}\cref{thm-umg:ii}\&\cref{thm-umg:iii} assume only the uniform monotonicity of $A$ and do not require supercoercivity of the modulus. \end{remark} Following \cite[p. 160]{Simons2}, we will say that $S\colon X \rras {X}$ is {\em coercive} provided that $\inf \langle x,Sx\rangle/\|x\| \to \infty$ as $\|x\| \to \infty$, where we use the standard convention that the infimum of the empty set is $+\infty$. Or in other words, given any $K > 0$ $(\exists M > 0)$ so that \begin{equation} \langle x,x^* \rangle \ge K\|x\| \ \mbox{whenever} \ \ \|x\| \ge M, (x,x^*)\in \gra S. \end{equation} Neither the growth condition in \cref{thm-umg}\cref{thm-umg:i} and nor coercivity implies the other for monotone operators as we illustrate in \cref{ex:sc} and \cref{ex-uc-dual-fail}\cref{ex-uc-dual-fail:ii:ii}\&\cref{ex-uc-dual-fail:ii}. \begin{example} \label{ex:sc} Let $f\colon \RR^2 \to \RR$ be defined by \begin{equation} f(\xi_1,\xi_2) = \begin{cases} \xi_1^2, &\mbox{if}\;\; 0 \le \xi_1, \ 0 \le \xi_2 \le\xi_1; \\ +\infty, &\mbox{otherwise.} \end{cases} \end{equation} Set $A = \partial f$. Then \begin{equation} \label{e:sc:i} \lim_{\|x\| \to \infty} \inf \left\{\tfrac{\langle x^*, x\rangle}{\|x\|^2} \,\Big|\, x^* \in Ax \right\} > 0. \end{equation} Hence, $A$ is coercive. However, $A$ does not satisfy the growth condition in \cref{thm-umg}\cref{thm-umg:i}. \end{example} \begin{proof} Because $f(x) \ge \frac{1}{2}{\|x\|^2}$ $(\forall x \in \RR^2)$ it follows that (\ref{e:sc:i}) holds. To verify the second claim, set $(\forall \nnn)$ $x_n = (n,0)$, $y_n = (n,n)$ and $x_n^* =y_n^* = (2n,0)$. Observe that $\{(x_n,x_n^*),(y_n,y_n^*)\}\subseteq \gra A$. Consequently, $(\forall \nnn)$ $\tfrac{\norm{x_n^*-y_n^*}}{\norm{x_n-y_n}}=\tfrac{0}{n}=0$. The proof is complete. \end{proof} \begin{example} \label{ex-uc-dual-fail} Suppose that $S\colon \RR^2 \to \RR^2\colon (x_1,x_2) \to (-x_2,x_1)$ is the rotator by $\pi/2$. Then the following properties hold. \begin{enumerate} \item \label{ex-uc-dual-fail:i} $S$ and $S^{-1}=-S$ are maximally monotone. \item \label{ex-uc-dual-fail:i:ii} Both $S$ and $S^{-1}$ are isometries, hence are Lipschitz continuous. \item \label{ex-uc-dual-fail:ii:ii} $ \lim_{\|x-y\| \to \infty} \inf \tfrac{\|Sx - Sy\|}{\|x - y\|}=1 > 0$. \item \label{ex-uc-dual-fail:i:iii} Both $S$ and $S^{-1}$ are uniformly continuous. 
\item \label{ex-uc-dual-fail:ii} Neither $S$ nor $S^{-1}$ are uniformly monotone, nor are they coercive. \item \label{ex-uc-dual-fail:iii} Both $J_S$ and $J_{S^{-1}}$ are strongly monotone. \end{enumerate} \end{example} \begin{proof} \cref{ex-uc-dual-fail:i}: This is \cite[Example~22.15]{BC2017}. \cref{ex-uc-dual-fail:i:ii}: This is clear. \cref{ex-uc-dual-fail:ii:ii}\&\cref{ex-uc-dual-fail:i:iii}: This is a direct consequence of \cref{ex-uc-dual-fail:i:ii}. \cref{ex-uc-dual-fail:ii}: Indeed, $S$ is neither uniformly monotone nor coercive because $(\forall x\in \RR^2)$ $\langle Sx,x \rangle = 0$. The same properties hold for $S^{-1}=-S$. \cref{ex-uc-dual-fail:iii}: Observe that $J_S= (\Id+ S)^{-1}$ and so $\ds J_S = \tfrac{1}{2}(\Id-S)$ and $(\forall x \in \RR^2)$ $\scal{J_S x}{ x} = \frac{1}{2}\|x\|^2$. Hence $J_S$ is strongly monotone. Similarly, one verifies that $(\forall x \in \RR^2)$ $\scal{J_{S^{-1}} x}{ x} = \frac{1}{2}\|x\|^2$. \end{proof} \begin{remark}\ \begin{enumerate} \item In the convex function case, $f$ is uniformly convex if and only if $f^*$ has uniformly continuous derivative (see \cite[Theorem 3.5.5 and Theorem 3.5.6]{Za02}. \cref{ex-uc-dual-fail}\cref{ex-uc-dual-fail:ii} shows that this correspondance does not hold for general maximally monotone operators. \item \cref{ex-uc-dual-fail}\cref{ex-uc-dual-fail:ii}\&\cref{ex-uc-dual-fail:iii} also shows in a strong way that if $J_{A^{-1}}$ is uniformly monotone (resp. coercive), it does not automatically follow that $A$ is uniformly monotone (resp. coercive). This is in contrast to the situation for convex functions where the infimal convolution of $\|\cdot\|^2$ and $f$ is uniformly convex (resp. supercoercive) if and only if $f$ is uniformly convex (resp. supercoercive). This follows by checking the conjugate of that infimal convolution is uniformly smooth if and only if $f^*$ is. See \cite{BC2017,BV10,Za02} for more information on this. \end{enumerate} \end{remark} \section{Contractions for large distances} \label{sec:5} We start with the following lemma which provides a characterization of Banach contractions using averaged mappings. \begin{lemma} \label{lem:Bcont:duality} Let $T\colon X\to X$. Then $T $ is a Banach contraction if and only if [$T$ is averaged and $-T$ is averaged]. \end{lemma} \begin{proof} $(\RA)$: Suppose that $T $ is a Banach contraction. Observe that $-T $ is also a Banach contraction. The conclusion follows from \cite[Proposition~4.38]{BC2017}. $(\LA)$: Suppose that both $T$ and $-T$ are averaged. It follows from \cite[Proposition~4.3(ii)]{BMW2021} that $T=R_A$ and both $A$ and $A^{-1}$ are strongly monotone operators. Therefore, by \cite[Corollary~4.7]{BMW12} $T$ is a Banach contraction. \end{proof} Before we proceed, we present the following useful result. \begin{lemma} \label{prop:primal:dual} Let $T\colon X\to X$. Suppose that $T$ and $-T$ are strongly nonexpansive. Suppose that $(x_n-y_n)_\nnn$ is bounded and that $\norm{x_n-y_n}-\norm{Tx_n-Ty_n}\to 0$. Then $(x_n-y_n)\to 0$. \end{lemma} \begin{proof} By assumption we have \begin{subequations} \begin{align} (x_n-y_n)-(Tx_n-Ty_n)&\to 0 \\ (x_n-y_n)+(Tx_n-Ty_n)&\to 0. \end{align} \end{subequations} Adding the above limits yields the desired result. \end{proof} In an analogy to \cref{lem:Bcont:duality} we present the following result that characterizes contractions for large distances using either strongly nonexpansive mappings or \ssnonex\ mappings. \begin{proposition} \label{new:pi} Suppose $T\colon X \to X$ is a nonexpansive mapping. 
Then the following are equivalent: \begin{enumerate} \item \label{new:pi:i} Both $T$ and $-T$ are \ssnonex. \item \label{new:pi:ii} Both $T$ and $-T$ are strongly nonexpansive. \item \label{new:pi:iii} $T$ is a contraction for large distances. \end{enumerate} \end{proposition} \begin{proof} \cref{new:pi:i} $\Rightarrow$ \cref{new:pi:ii}: Apply \cref{lem:une:sne} to both $T$ and $-T$. \cref{new:pi:ii} $\Rightarrow$ \cref{new:pi:iii}: Fix $\epsilon > 0$, and define $$ \beta = \sup \left\{ \frac{\|Tx - Ty\|}{\|x-y\|}\, \Big{|} \, \ \epsilon \le \|x-y\| \le 2\epsilon\right\}. $$ We claim that $\beta < 1$. Indeed, suppose for eventual contradiction that $\beta = 1$. Then there exist sequences $(x_n )_\nnn$ and $( y_n)_\nnn$ in $ X$ such that \begin{equation} \text{ $(\forall \nnn)$ $\epsilon \le \|x_n - y_n\| \le 2\epsilon$ \;\; and\;\; $\frac{\|Tx_n - Ty_n\|}{\|x_n-y_n\|} \to 1.$} \end{equation} Because $\epsilon \le \|x_n - y_n\| \le 2\epsilon$, we have $\|Tx_n - Ty_n\| - \|x_n - y_n\| \to 0$. Therefore, it follows from \cref{prop:primal:dual} that $(x_n - y_n) \to 0$, which is absurd since $(\forall \nnn)$ $\|x_n - y_n\| \ge \epsilon$. Therefore, $0 \le \beta < 1$, and \begin{equation} \label{new:ei} \|Tx - Ty\| \le \beta \|x - y\| \ \ \mbox{whenever} \ \epsilon \le \|x - y\| \le 2\epsilon. \end{equation} We next show $(\forall k \in \{0, 1, 2, \ldots\})$ \begin{equation} \label{new:eii} \|Tx - Ty\| \le \beta \|x - y\| \ \ \mbox{whenever} \ 2^k \epsilon \le \|x - y\| \le 2^{k+1} \epsilon. \end{equation} We proceed by induction. Indeed, \cref{new:eii} holds for $k = 0$ by \cref{new:ei}. Now suppose \cref{new:eii} holds for $k = n$ where $n \ge 0$. Thus suppose $\{x,y \}\subseteq X$ satisfies that $2^{n+1}\epsilon \le \|x-y\| \le 2^{n+2}\epsilon$. Now let $z = (x + y)/2$, then \begin{equation} x - z = \tfrac{x-y}{2} = z - y \ \ \mbox{and so}\ \ 2^n \epsilon \le \|x - z\| \le 2^{n+1} \epsilon, \ 2^n \epsilon \le \|z - y\| \le 2^{n+1} \epsilon. \end{equation} Because $\|x - z\| = \|z - y\| = \frac{1}{2} \|x- y\|$, the triangle inequality yields \begin{subequations} \begin{align} \| Tx - Ty\| &= \| Tx - Tz + Tz - Ty\| \le \|Tx - Tz\| + \|Tz - Ty\| \\ &\le \beta\|x - z\| + \beta\|z - y\| = \beta\|x - y\|. \end{align} \end{subequations} It follows by induction that (\ref{new:eii}) is true $(\forall k \in \{0, 1, 2, \ldots\})$, and so $T$ is a contraction for large distances. \cref{new:pi:iii} $\Rightarrow$ \cref{new:pi:i}: Clearly $T$ is a contraction for large distances if and only if $-T$ is a contraction for large distances. Therefore it is sufficient to show the implication [$T$ is a contraction for large distances $\RA$ $T$ is \ssnonex]. Suppose $(x_n)_\nnn$, $( y_n)_\nnn$ are sequences in $X$ such that \begin{equation} \label{e:lim:cont} \|x_n - y_n\|^2 - \|T x_n - Ty_n\|^2 \to 0. \end{equation} We claim that $(x_n-y_n)\to 0$. Indeed, suppose for eventual contradiction that $\limsup \|x_n - y_n\| > 0$. After passing to a subsequence and relabelling if necessary $(\exists \epsilon > 0)$ such that $(\forall \nnn)$ $\|x_{n} - y_{n}\| \ge \epsilon$. This means $(\exists\beta < 1)$ so that $\|T x_{n} - T y_{n}\| \le \beta \|x_{n} - y_{n}\|$. Therefore, \begin{equation} (\forall \nnn)\quad \|x_{n} - y_{n}\|^2 - \|T x_{n} - Ty_{n}\|^2 \ge (1-\beta^2) \|x_{n} - y_{n}\|^2 \ge (1 - \beta^2) \epsilon^2 \end{equation} and this contradicts \cref{e:lim:cont}. Therefore $\|x_n - y_n\| \to 0$ and consequently $\|T x_n - Ty_n\| \to 0$ which implies $(x_n - y_n) - (Tx_n - Ty_n) \to 0$. Thus $T$ is \ssnonex. 
\end{proof} It is clear that every Banach contraction is a contraction for large distances. However, the opposite is not true as we illustrate in \cref{ex-cont-in-large} below. \begin{example} \label{ex-cont-in-large} Let \begin{equation} T\colon \RR\to \RR\colon x\mapsto \begin{cases} 1,&x\ge \tfrac{\pi}{2}; \\ \sin x,&\abs{x}<\tfrac{\pi}{2}; \\ -1, &\text{otherwise}. \end{cases} \end{equation} Then the following hold: \begin{enumerate} \item \label{ex-cont-in-large:i} $T$ is nonexpansive. \item \label{ex-cont-in-large:ii} $T$ is \emph{not} a Banach contraction. \item \label{ex-cont-in-large:iii} $T$ is a contraction for large distances. \item \label{ex-cont-in-large:iv} Both $T$ and $-T$ are \ssnonex. \item \label{ex-cont-in-large:v} Both $T$ and $-T$ are strongly nonexpansive. \end{enumerate} \begin{figure} \begin{center} \includegraphics[scale=0.7]{plot_T} \end{center} \caption{A \texttt{GeoGebra} snapshot illustrating the mapping $T$ in \cref{ex-cont-in-large}.} \end{figure} \end{example} \begin{proof} Recall that (see, e.g., \cite[Theorem~5.12]{Beck2017}) if $T\colon \RR\to \RR$ is differentiable then \begin{equation} \label{e:fact} \text{ $T$ is Lipschitz continuous with a constant $K\ge 0$ if and only if $(\forall x\in \RR)$ $\abs{T'(x)}\le K$.} \end{equation} \cref{ex-cont-in-large:i}: One can directly verify that $T$ is differentiable and that \begin{equation} T'\colon \RR\to \RR\colon x\mapsto \begin{cases} \cos x,&\abs{x}<\tfrac{\pi}{2}; \\ 0, &\text{otherwise}. \end{cases} \end{equation} Hence $(\forall x\in \RR)$ $\abs{T'(x)}\le 1$. Consequently, by \cref{e:fact}, $T$ is Lipschitz continuous with constant $1$ and the conclusion follows. \cref{ex-cont-in-large:ii}: Suppose for eventual contradiction that $T$ is a Banach contraction. Then \cref{e:fact} implies that there exists $K\in \left[0,1\right[$ such that $(\forall x\in \RR)$ $\abs{T'(x)}\le K<1$. However, $\abs{T'(0)}=\cos 0=1>K$, which is absurd. \cref{ex-cont-in-large:iii}: Let $\epsilon > 0$. Observe that if $|t| \ge \epsilon/4$ then $|T^\prime(t)| \le \alpha < 1$ where $\alpha = |T^\prime(\epsilon/4)|$. We choose $\beta = (\alpha + 3)/4$. Now suppose $|x - y| \ge \epsilon$, where $x < y$. In the case $|y| \ge |x|$, we have $y \ge |x -y|/2 \ge \epsilon/2$. Therefore, by the Fundamental Theorem of Calculus, we write \begin{equation} |Tx - Ty| = \left| \int_x^y T^\prime(t) \, dt \right| \le \int_x^{|x-y|/4} 1 \, dt + \int_{|x-y|/4}^y \alpha \, dt \le \beta |x -y|. \end{equation} Similarly, if $|x| \ge |y|$, one has $x \le -|x-y|/2$, and again, $|Tx - Ty| \le \beta |x -y|$. Therefore, $T$ is a contraction for large distances. \cref{ex-cont-in-large:iv}--\cref{ex-cont-in-large:v}: Combine \cref{ex-cont-in-large:iii} with \cref{new:pi}. \end{proof} The next result, and more general variations of it, are well known in fixed point theory; see, for example, \cite[Corollary, p. 463]{Rak62} and \cite[Theorem 2.1]{AlbGue97}. Nevertheless, we include a simple proof based on \cref{thm-umg}\cref{thm-umg:ii} for completeness. \begin{proposition} \label{cor-cil-fp} Let $T\colon X \to X$ be a contraction for large distances. Let $x_0\in X$ and set $(\forall \nnn)$ $x_n=T^n x_0$. Then $(\exists\bar x\in X)$ such that the following hold: \begin{enumerate} \item \label{cor-cil-fp:i} $\Fix T=\{\overline{x}\}$. \item \label{cor-cil-fp:ii} $x_n\to \bar x$.
\end{enumerate} \end{proposition} \begin{proof} \cref{cor-cil-fp:i}: On the one hand because $T$ is nonexpansive, $T = R_A$ for some maximally monotone operator $A\colon X \rras X$ (see \cite[Corollary 23.11, Proposition 4.4]{BC2017}). On the other hand, because $-T=-R_A$ is a contraction for large distances it is \ssnonex\ by \cref{new:pi}. Therefore, $A$ is uniformly monotone by \cref{prop:um:sne}. Consequently, by \cref{thm-umg}\cref{thm-umg:iii}, $A$ has a unique zero. Now combine this with \cref{eq:fix:JA:RA}. \cref{cor-cil-fp:ii}: Note that $(\norm{x_n-\overline{x}})_\nnn$ converges by, e.g., \cite[Proposition~5.4(ii)]{BC2017}. Now, suppose by way of contradiction that $(x_n)_\nnn $ does not converge in norm to $\bar x$. Then $\lim_\nnn \|x_n - \bar x\| = \epsilon$ where $\epsilon > 0$. Thus we choose $0 < \beta < 1$ so that \begin{equation} (\forall \nnn)\quad \|x_{n+1} - \bar x\| = \|Tx_n - T\bar x\| \le \beta \|x_n -\bar x\|. \end{equation} Now for $n$ sufficiently large, we have $\|x_n - \bar x\| < \epsilon/\beta$. Then \begin{equation} \|x_{n+1} - \bar x\| \le \beta \|x_n - \bar x\| < \epsilon. \end{equation} This contradiction completes the proof. Alternatively, use \cref{prop:primal:dual} with $(x_n,y_n)_\nnn$ replaced by $(T^n x_0, \overline{x})_\nnn$ to conclude that $T^n x_0-\overline{x}\to 0$. \end{proof} \section{Self-Dual Properties on Hilbert Spaces} \label{sec:6} \begin{lemma} \label{lem-res-a} Suppose $C\colon X \to X$ is uniformly continuous. Then for each $\epsilon > 0$ $(\exists M > 0)$ depending on $\epsilon$ so that \begin{equation} \|x - y \| \le M\|u-v\| \ \ \ \mbox{whenever}\ \ \|x - y\| \ge \epsilon, \;\;(u,v)\in J_C x \times J_C y. \end{equation} \end{lemma} \begin{proof} Let $\epsilon > 0$. By the uniform continuity of $C$ we choose $0 < \delta < \epsilon/2$ so that \begin{equation} \label{eq-eps} \|Cu - Cv\| < \frac{\epsilon}{2} \ \ \ \mbox{whenever}\ \ \|u - v\| < \delta. \end{equation} Because $C$ is uniformly continuous, it is Lipschitz for large distances (see \cite[Proposition~1.11]{BL2000}). Thus we choose $K > 0$ so that \begin{equation} \label{eq-A-lip-large} \|Cu - Cv\| \le K \|u - v\| \ \ \ \mbox{whenever}\ \ \|u - v\| \ge \delta. \end{equation} Now let us suppose \begin{equation} \label{eq-lem-res-c} \|x - y\| \ge \epsilon, \ \ \ u \in J_C x,\ v \in J_Cy. \end{equation} We will show that $\|x - y\| \le M\|u - v\|$ where $M = K+1$. First, we verify that $\|u - v\| \ge \delta$ where $\delta$ is from (\ref{eq-eps}) by way of a contradiction. So let us assume to the contrary that $\|u - v\| < \delta$. Then by (\ref{eq-eps}) we have \begin{equation} \|Cu - Cv\| < \frac{\epsilon}{2} \end{equation} Because $u \in J_Cx$ and $v \in J_C y$, this implies $Cu = x - u$ and $Cv = y -v$. Then \begin{equation} \|x - u - (y - v)\| < \tfrac{\epsilon}{2} \ \Rightarrow \ \|x- y\| - \|u-v\| < \tfrac{\epsilon}{2}\ \Rightarrow \ \|x-y\| < \tfrac{\epsilon}{2} + \delta < \epsilon \end{equation} This contradicts (\ref{eq-lem-res-c}), and so $\|u - v\| \ge \delta$. Therefore, using (\ref{eq-A-lip-large}), one has \begin{eqnarray*} \|x - y\| &=& \|u + Cu - (v + Cv)\| \le \|u - v\| + \|Cu - Cv\| \\ &\le& \|u - v\| + K\|u - v\| = M\|u - v\|, \end{eqnarray*} as desired. \end{proof} \begin{theorem} \label{thm-RA-cont-inl} The following hold: \begin{enumerate} \item \label{thm-RA-cont-inl:i} Suppose $A$ is uniformly monotone and uniformly continuous. Then $R_A$ is a contraction for large distances. 
\item \label{thm-RA-cont-inl:ii} Suppose $R_A$ is a contraction for large distances. Then $A$ is uniformly monotone with a supercoercive modulus. \end{enumerate} \end{theorem} \begin{proof} \cref{thm-RA-cont-inl:i}: Let $\phi$ be a modulus function for $A$, let $\epsilon > 0$ and suppose $\|x - y\| \ge \epsilon$. On the one hand, it follows from \cref{lem-res-a} that $(\exists K>0)$ such that $\|J_A x - J_A y \| \ge K\|x-y\|\ge K\epsilon $. On the other hand, because $\dom A=X$, \cref{prop-uni-mon-mod} implies that $(\exists \alpha > 0)$ such that $(\forall t \ge K\epsilon)$ $\phi(t) \ge \alpha t^2$. Altogether, we learn that \begin{equation} \label{eq-thm-RA-cont-inl-c} \phi(\tfrac{1}{2}\|J_A x - J_Ay\|) \ge \tfrac{\alpha K^2}{4}\|x-y\|^2. \end{equation} Set $\beta=\sqrt{\alpha}K$ and let $(x,y)\in X\times X$. Combining \cref{eq-thm-RA-cont-inl-c} and \cref{prop:um:sn:i} we learn that $\|x - y\| \ge \epsilon\RA \|R_A x - R_A y\|^2 + \beta^2\|x - y\|^2 \le \|x - y\|^2 $; equivalently, $ \|x - y\| \ge \epsilon \RA \|R_A x - R_A y\| \le(1-\beta) \|x - y\|$. That is, $R_A$ is a contraction for large distances. \cref{thm-RA-cont-inl:ii}: Let $\epsilon>0$ and let $(x,x^*)\in \gra A$, $(y,y^*)\in \gra A$. Suppose that $\norm{x-y}\ge \epsilon$. Set $(u,v)=(x+x^*, y+y^*)$ and observe that \cref{eq:gr:A:RA} implies \begin{equation} \label{eq-Minty-rep} (x,x^*)=\tfrac{1}{2}(u+R_Au,u-R_Au) \text{\;\;and\;\;} (y,y^*)=\tfrac{1}{2}(v+R_Av,v-R_Av). \end{equation} It follows from \cref{eq-Minty-rep}, the nonexpansiveness of $R_A$, and the triangle inequality that \begin{equation} \label{eq-bigger} \|x - y\| =\tfrac{1}{2}\norm{u-v-(R_Au-R_Av)}\le \tfrac{1}{2}(\|u - v\| + \|R_A u - R_A v\|) \le\|u - v\|. \end{equation} Hence $\norm{u-v}\ge \epsilon$. Consequently, because $R_A$ is a contraction for large distances, $(\exists \beta \in \left]0,1\right[)$ such that \begin{equation} \label{eq:opp:i} \norm{R_Au-R_Av}\le \beta \norm{u-v}. \end{equation} Using \cref{eq-Minty-rep} and \cref{eq:opp:i} we learn that \begin{subequations} \begin{align} \langle x - y, x^* - y^*\rangle &= \tfrac{1}{4} \langle u - v + R_A u - R_A v, u - v -( R_Au - R_A v) \rangle \\ &= \tfrac{1}{4} \left(\|u-v\|^2 - \|R_Au - R_Av\|^2\right) \\ &\ge \tfrac{1}{4} (1 - \beta^2) \|u - v\|^2 \\ &\ge \tfrac{1}{4} (1 - \beta^2)\|x - y\|^2. \end{align} \end{subequations} Therefore, for $t \ge \epsilon$, we have \begin{equation} \inf\left\{ \langle x - y, x^* - y^* \rangle \, | \, (x,x^*) \in \graph(A), (y,y^*) \in \graph(A), \|x - y\| \ge t\right\} \ge \tfrac{1}{4}(1 - \beta)^2 t^2. \end{equation} That is, $A$ has a modulus $\phi$ satisfying $\phi(t) \ge \frac{1}{4}(1-\beta^2) t^2$ for $t \ge \epsilon$. \end{proof} This brings us to our main duality result of this section. \begin{theorem} \label{thm-um-self-dual} The following are equivalent. \begin{enumerate} \item \label{thm-um-self-dual:i} $A$ is uniformly monotone and uniformly continuous. \item \label{thm-um-self-dual:ii} $R_A$ is a contraction for large distances. \item \label{thm-um-self-dual:iii} Both $A$ and $A^{-1}$ are uniformly monotone with supercoercive moduli. \item \label{thm-um-self-dual:iv} Both $A$ and $A^{-1}$ are uniformly monotone. \item \label{thm-um-self-dual:v} $A^{-1}$ is uniformly monotone and uniformly continuous. \item \label{thm-um-self-dual:vi} $R_A$ and $R_{A^{-1}}$ are strongly nonexpansive. \end{enumerate} \end{theorem} \begin{proof} \cref{thm-um-self-dual:i} $\Rightarrow$ \cref{thm-um-self-dual:ii}: \cref{thm-RA-cont-inl}\cref{thm-RA-cont-inl:i} . 
\cref{thm-um-self-dual:ii} $\Rightarrow$ \cref{thm-um-self-dual:iii}: Since $R_A$ is a contraction for large distances, so is $R_{A^{-1}} = -R_A$. Thus this follows by applying \cref{thm-RA-cont-inl}\cref{thm-RA-cont-inl:ii} on $R_A$ and on $R_{A^{-1}}$. \cref{thm-um-self-dual:iii} $\Rightarrow$ \cref{thm-um-self-dual:iv}: is immediate, and \cref{thm-um-self-dual:iv} $\Rightarrow$ \cref{thm-um-self-dual:v}: follows from \cref{thm-umg} applied to $A$ to deduce $A^{-1}$ is uniformly continuous. \cref{thm-um-self-dual:v} $\Rightarrow$ \cref{thm-um-self-dual:i}: The above implications show \cref{thm-um-self-dual:i} $\Rightarrow$ \cref{thm-um-self-dual:v}, so the reverse implication follows by applying \cref{thm-um-self-dual:i} $\Rightarrow$ \cref{thm-um-self-dual:v} to the operator $A^{-1}$. \cref{thm-um-self-dual:ii} $\siff$ \cref{thm-um-self-dual:vi}: This is a direct consequence of \cref{new:pi} applied with $T$ replaced by $R_A$. \end{proof} Let $f\colon X\to \left]-\infty, +\infty\right]$ be convex, lower semicontinuous and proper. Recall that (see, e.g., \cite[Corollary~16.30]{BC2017}) \begin{equation} \label{eq:f:f^*:subgrad:inv} \partial f^*=(\partial f)^{-1}, \end{equation} that\footnote{Let $f\colon X\to \RR$ be convex. Then $f$ is uniformly smooth if $f$ is smooth and $\grad f$ is uniformly continuous.} (see \cite[Theorem~3.5.12]{Za02} ) \begin{subequations} \label{eq:equiv:f:f*:uc:us} \begin{align} \text{$f$ is uniformly convex}& \text{ $\siff$ $f^*$ is uniformly smooth} \\& \text{ $\siff $ $\partial f$ is uniformly monotone,} \end{align} \end{subequations} and that (see, e.g., \cite[Theorem~18.15]{BC2017} ) \begin{subequations} \label{eq:equiv:f:f*:sc:ls} \begin{align} \text{$f$ is strongly convex}& \text{ $\siff$ $f^*$ is differentiable and $\grad f^*$ is Lipschitz continuous} \\& \text{ $\siff $ $\partial f$ is monotone}. \end{align} \end{subequations} \begin{example} \label{ex-f-f*-uc} Let \begin{equation} f\colon \RR\to \RR\colon x\mapsto \begin{cases} 4x^2-2,&x\le -1; \\ 2x^4,&-1<x<0; \\ x^{3/2},&0\le x<1; \\ \tfrac{3}{4}x^2+\tfrac{1}{4}, &\text{otherwise}. \end{cases} \end{equation} Then $f$ is differentiable and $f^\prime$ is continuous and increasing. Moreover the following hold: \begin{enumerate} \item \label{ex-f-f*-uc:i} Both $f$ and $f^*$ are uniformly convex. \item \label{ex-f-f*-uc:ii} Both $f$ and $f^*$ are uniformly smooth. \item \label{ex-f-f*-uc:iii} Both $f^{\prime}$ and $(f^*)^{\prime}$ are uniformly monotone. \item \label{ex-f-f*-uc:iv} Both $f^{\prime}$ and $(f^*)^{\prime}$ are uniformly continuous. \item \label{ex-f-f*-uc:v} Neither $f$ nor $f^*$ is strongly convex. \end{enumerate} \end{example} \begin{proof} It is straightforward to verify that $f$ is differentiable and that \begin{equation} \label{eq:def:f'} f^\prime\colon \RR\to \RR\colon x\mapsto \begin{cases} 8x,&x\le -1; \\ 8x^3,&-1<x<0; \\ \tfrac{3}{2}x^{1/2},&0\le x<1; \\ \tfrac{3}{2}x, &\text{otherwise}. \end{cases} \end{equation} Hence, $f^\prime$ is continuous and strictly increasing as claimed. \cref{ex-f-f*-uc:i}: It follows from \cite[Theorem~3.1]{Za83} that $f$ is uniformly convex on bounded sets. It follows from the strong (hence uniform) convexity of $x^2$ that $f$ is uniformly convex. We now prove the uniform convexity of $f^*$. In view of \cref{eq:equiv:f:f*:uc:us} it suffices to verify that $f$ is uniformly smooth which can be easily deduced from \cref{eq:def:f'}. \cref{ex-f-f*-uc:ii}\& \cref{ex-f-f*-uc:iii}: Combine \cref{ex-f-f*-uc:i} and \cref{eq:equiv:f:f*:uc:us}. 
\cref{ex-f-f*-uc:iv}: This follows from \cref{ex-f-f*-uc:ii}. \cref{ex-f-f*-uc:v}: On the one hand $2x^4$ is \emph{not} strongly convex, hence $f$ is \emph{not} strongly convex. On the other hand, $(x^{{3}/{2}})^\prime=\tfrac{3}{2}\sqrt{x}$ is \emph{not} Lipschitz continuous on $\opint{0,1}$. Therefore, in view of \cref{eq:equiv:f:f*:sc:ls} applied with $f$ replaced by $f^*$ we conclude that $f^*$ is \emph{not} strongly convex. \end{proof} \begin{example} \label{ex-cont-in-large-b} Let \begin{equation} T\colon \RR\to \RR\colon x\mapsto \begin{cases} 1,&x\ge \tfrac{\pi}{2}; \\ \sin x,&\abs{x}<\tfrac{\pi}{2}; \\ -1, &\text{otherwise}. \end{cases} \end{equation} Set $A=\Big(\tfrac{\Id-T}{2}\Big)^{-1}-\Id$. Then there exists a proper lower semicontinuous convex function $ f\colon \RR\to \ocint{-\infty,+\infty}$ such that \begin{equation} \label{ex-cont-in-large-b:ii:b} A=f^\prime. \end{equation} Moreover, the following hold: \begin{enumerate} \item \label{ex-cont-in-large-b:iii} Both $A$ and $A^{-1}$ are maximally monotone and uniformly monotone with supercoercive modulus. \item \label{ex-cont-in-large-b:iii:i} Both $A$ and $A^{-1}$ are uniformly continuous. \item \label{ex-cont-in-large-b:iv} Both $A$ and $A^{-1}$ are \emph{not} strongly monotone. \item \label{ex-cont-in-large-b:v} Both $f$ and $f^*$ are uniformly convex and uniformly smooth. \item \label{ex-cont-in-large-b:vi} Both $f$ and $f^*$ are \emph{not} strongly convex. \end{enumerate} \end{example} \begin{proof} It is clear that $T=-R_A$. Because $T$ is nonexpansive, from \cref{ex-cont-in-large}\cref{ex-cont-in-large:i} we learn that $A$ is maximally monotone. Therefore, by e.g., \cite[Corollary~22.23]{BC2017} there exists a proper lower semicontinuous convex function $ f\colon \RR\to \ocint{-\infty,+\infty}$ such that $A=\partial f$. Finally, we show in \cref{ex-cont-in-large-b:v} below that $f$ is uniformly smooth, hence $A=f^\prime$. \cref{ex-cont-in-large-b:iii}\&\cref{ex-cont-in-large-b:iii:i}: Combine \cref{ex-cont-in-large}\cref{ex-cont-in-large:iii}, and \cref{thm-um-self-dual} . \cref{ex-cont-in-large-b:iv}: Suppose for eventual contradiction that $A$ is strongly monotone. Then by \cite[Proposition~4.3(ii)]{BMW2021} $(\exists \alpha\in\left]0,1\right[)$ such that $-R_A$ is $\alpha$-averaged. It follows from \cite[Proposition~4.35]{BC2017} that $(\forall (x,y)\in \RR\times \RR)$ $\tfrac{1-\alpha}{\alpha}({(\Id+T)x-(\Id+T)y})^2\le ({x-y})^2-({Tx-Ty})^2$. In particular, for $y=0$ the above inequality yields $(\forall x\in \left]0,\tfrac{\pi}{2}\right[)$ \begin{equation} \frac{1-\alpha}{\alpha}(x+\sin x)^2\le x^2-\sin^2x. \end{equation} Simplifying and multiplying both sides of the above inequality by $\alpha$ we obtain $(1-2\alpha)x^2+2(1-\alpha)x\sin x+\sin^2 x\le 0$. Rearranging yields $(x+\sin x)^2\le 2\alpha x(x+\sin x)$. Because $x\in \left]0,\tfrac{\pi}{2}\right[$, it follows that $x+\sin x>0$ and therefore the last inequality is equivalent to $1+\tfrac{\sin x}{x}\le 2\alpha$. Taking the limit as $x\to 0^+$ we learn that $2\leftarrow 1+\tfrac{\sin x}{x}\le 2\alpha$ which is absurd. Hence, $-R_A$ is not averaged; equivalently, $A$ is not strongly monotone as claimed. Using similar argument, one can show that $R_A=-R_{A^{-1}}$ is not averaged; equivalently, $A^{-1}$ is not strongly nonexpansive as claimed. \cref{ex-cont-in-large-b:v}: Combine \cref{ex-cont-in-large-b:iii:i}, \cref{ex-cont-in-large-b:ii:b} and \cite[Theorem~3.5.10]{Za83} in view of \cref{eq:f:f^*:subgrad:inv}. 
\cref{ex-cont-in-large-b:vi}: Combine \cref{ex-cont-in-large-b:iv}, \cref{ex-cont-in-large-b:ii:b} and \cite[Example~22.4(iv)]{BC2017}. \end{proof} \begin{remark} In \cref{App:B} we provide finer conclusions about the operator $A$ and the function $f$ introduced in \cref{ex-cont-in-large-b}. \end{remark} \begin{example} \label{ex:only:primal} Let $f\colon \RR \to \RR\colon x\mapsto \tfrac{1}{4}x^4$ and let $A = f^\prime$. Let $a\colon \RR \to \RR\colon x\mapsto \sqrt[3]{108 x+12\sqrt{81x^2+12}}$ and set \begin{equation} T\colon \RR \to \RR\colon x\mapsto x-2\Big(\tfrac{a(x)}{6}-\tfrac{2}{a(x)}\Big). \end{equation} Then the following hold: \begin{enumerate} \item \label{ex:only:primal:i} $T=-R_A$. \item \label{ex:only:primal:ii} $A=x^3$ is maximally monotone but \emph{not} uniformly continuous. \item \label{ex:only:primal:iii} $A^{-1}=\sqrt[3]{x}$ is maximally monotone and uniformly continuous. \item \label{ex:only:primal:ii:i} $A$ is uniformly monotone. \item \label{ex:only:primal:iii:i} $A^{-1}$ is \emph{not} uniformly monotone. \item \label{ex:only:primal:iv} $f$ is uniformly convex. \item \label{ex:only:primal:v} $f^*$ is \emph{not} uniformly convex. \item \label{ex:only:primal:vi} $T$ is \ssnonex\ and strongly nonexpansive. \item \label{ex:only:primal:vii} $-T$ is \emph{neither} \ssnonex\ \emph{nor} strongly nonexpansive. \end{enumerate} \end{example} \begin{proof} \cref{ex:only:primal:i}: It is enough to show that $(\forall x\in \RR)$ $J_A(x)=\tfrac{a(x)}{6}-\tfrac{2}{a(x)}$, equivalently, to show that $(\Id+A)\big(\tfrac{a(x)}{6}-\tfrac{2}{a(x)}\big)=x$ . Indeed, \begin{subequations} \begin{align} (\Id+A)\big(\tfrac{a(x)}{6}-\tfrac{2}{a(x)}\big) &= \tfrac{a(x)}{6}-\tfrac{2}{a(x)}+\tfrac{(a(x))^3}{6^3}-\tfrac{a(x)}{6}+\tfrac{2}{a(x)}-\tfrac{2^3}{(a(x))^3} \\ &=\tfrac{(a(x))^3}{6^3}-\tfrac{2^3}{(a(x))^3}=\tfrac{(a(x))^6-12^3}{6^3(a(x))^3}=x. \end{align} \end{subequations} \cref{ex:only:primal:ii}\&\cref{ex:only:primal:iii}: This is clear. \cref{ex:only:primal:ii:i}--\cref{ex:only:primal:v}: Combine \cref{ex:only:primal:ii}\&\cref{ex:only:primal:iii} with \cite[Theorem~3.5.10]{Za83}. \cref{ex:only:primal:vi}: Combine \cref{ex:only:primal:i}, \cref{ex:only:primal:ii:i} and \cref{cor-real-line-eq}. \cref{ex:only:primal:vii}: Combine \cref{ex:only:primal:i}, \cref{ex:only:primal:iii:i} and \cref{cor-real-line-eq}. \end{proof} \section{Compositions} \label{sec:7} In this section we examine the behaviour of strongly nonexpansive mappings, \ssnonex\ mappings and contractions for large distances under structured compositions. The proof of the next result follows along the lines of the proof of \cite[Proposition~1.1]{BR77}. \begin{proposition} \label{prop:comp:sne} Let $T_1\colon X\to X$ and $T_2\colon X\to X$ be nonexpansive. Set $T=T_2T_1$. Then the following hold: \begin{enumerate} \item \label{prop:comp:0} Suppose that $T_1$ is strongly nonexpansive and $T_2$ is strongly nonexpansive. Then $T $ is strongly nonexpansive. \item \label{prop:comp:i} Suppose that $-T_1$ is strongly nonexpansive and $-T_2$ is strongly nonexpansive. Then $T $ is strongly nonexpansive. \item \label{prop:comp:ii} Suppose that $-T_1$ is strongly nonexpansive and $T_2$ is strongly nonexpansive. Then $-T $ is strongly nonexpansive. \item \label{prop:comp:iii} Suppose that $T_1$ is strongly nonexpansive and $-T_2$ is strongly nonexpansive. Then $-T $ is strongly nonexpansive. \end{enumerate} \end{proposition} \begin{proof} \cref{prop:comp:0}: This is \cite[Proposition~1.1]{BR77}. 
\cref{prop:comp:i}: Clearly $T$ is nonexpansive. Now suppose that $(x_n-y_n)_\nnn$ is bounded and that $\norm{x_n-y_n}-\norm{Tx_n-Ty_n}\to 0$. Observe that the nonexpansiveness of $T_1$, $T_2$ and, consequently, $T$ implies that \begin{equation} 0\le \norm{x_n-y_n}-\norm{Tx_n-Ty_n}=\underbrace{\norm{x_n-y_n} -\norm{T_1x_n-T_1y_n}}_{\ge 0} +\underbrace{\norm{T_1x_n-T_1y_n}-\norm{Tx_n-Ty_n}}_{\ge 0}\to 0. \end{equation} Consequently, we learn that \begin{subequations} \begin{align} \norm{x_n-y_n}-\norm{(-T_1)x_n-(-T_1)y_n}=\norm{x_n-y_n}-\norm{T_1x_n-T_1y_n}\to 0,\\ \norm{T_1x_n-T_1y_n}-\norm{(-T_2)(T_1x_n)-(-T_2)(T_1y_n)}=\norm{T_1x_n-T_1y_n}-\norm{Tx_n-Ty_n}\to 0 \end{align} \end{subequations} The nonexpansiveness of $T_1$ implies that $\norm{T_1x_n-T_1y_n}$ is bounded. Recalling that $-T_1$ is strongly nonexpansive and $-T_2$ is strongly nonexpansive we obtain \begin{subequations} \begin{align} (x_n-y_n)+(T_1x_n-T_1y_n)&\to 0 \\ (T_1x_n-T_1y_n)+(T_2T_1x_n-T_2T_1y_n)&\to 0. \end{align} \end{subequations} Hence, \begin{equation} (x_n-y_n)-(Tx_n-Ty_n)=(x_n-y_n)+(T_1x_n-T_1y_n)-((T_1x_n-T_1y_n)+(T_2T_1x_n-T_2T_1y_n))\to 0. \end{equation} \cref{prop:comp:ii}: Proceeding similar to \cref{prop:comp:i} we learn that \begin{subequations} \begin{align} (x_n-y_n)+(T_1x_n-T_1y_n)&\to 0 \\ (T_1x_n-T_1y_n)-(T_2T_1x_n-T_2T_1y_n)&\to 0. \end{align} \end{subequations} Hence, \begin{equation} (x_n-y_n)+(Tx_n-Ty_n) =(x_n-y_n)+(T_1x_n-T_1y_n)-((T_1x_n-T_1y_n)-(T_2T_1x_n-T_2T_1y_n))\to 0. \end{equation} \cref{prop:comp:iii}: Proceed similar to \cref{prop:comp:ii}. \end{proof} The following analogous result holds for \vsne\ mappings. \begin{proposition} \label{prop:comp:vne} Let $T_1\colon X\to X$ and $T_2\colon X\to X$ be nonexpansive. Set $T=T_2T_1$. Then the following hold: \begin{enumerate} \item \label{prop:comp:vne:0} Suppose that $T_1$ is \vsne\ and $T_2$ is \vsne. Then $T $ is \vsne. \item \label{prop:comp:vne:i} Suppose that $-T_1$ is \vsne\ and $-T_2$ is \vsne. Then $T $ is \vsne. \item \label{prop:comp:vne:ii} Suppose that $-T_1$ is \vsne\ and $T_2$ is \vsne. Then $-T $ is \vsne. \item \label{prop:comp:vne:iii} Suppose that $T_1$ is \vsne\ and $-T_2$ is \vsne. Then $-T $ is \vsne. \end{enumerate} \end{proposition} \begin{proof} \cref{prop:comp:vne:0}: Let $(x,y)\in X\times X$ and suppose that $(x_n)_\nnn$ and $(y_n)_\nnn$ are sequences in $X$ such that $0\le \norm{x_n-y_n}^2-\norm{Tx_n-Ty_n}^2\to 0$. Rewrite the above limit as \begin{equation} 0\le \norm{x_n-y_n}^2-\norm{T_1x_n-T_1y_n}^2 +\norm{T_1x_n-T_1y_n}^2-\norm{Tx_n-Ty_n}^2\to 0, \end{equation} and observe that the nonexpansiveness of $T_1$ and $T_2$ implies \begin{equation} \norm{x_n-y_n}^2-\norm{T_1x_n-T_1y_n}^2\to 0 \quad\text{and}\quad \norm{T_1x_n-T_1y_n}^2-\norm{Tx_n-Ty_n}^2\to 0. \end{equation} Because $T_1$ and $T_2$ are \vsne\ we learn that \begin{subequations} \begin{align} (x_n-y_n)-(T_1x_n-T_1y_n)&\to 0 \label{eq:vsne:comp:i} \\ (T_1x_n-T_1y_n)-(Tx_n-Ty_n)&\to 0. \label{eq:vsne:comp:ii} \end{align} \end{subequations} Adding \cref{eq:vsne:comp:i} and \cref{eq:vsne:comp:ii} yields $(x_n-y_n)-(Tx_n-Ty_n)\to 0$, hence $T$ is \vsne\ as claimed. \cref{prop:comp:vne:i}--\cref{prop:comp:vne:iii}: Proceed similar to the proof of \cref{prop:comp:sne}\cref{prop:comp:i}--\cref{prop:comp:iii}. \end{proof} We now turn to compositions of finitely many mappings each of which is either strongly nonexpansive or its negative is strongly nonexpansive. 
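Before stating the general result, it may be instructive to record a minimal numerical sanity check; it is not part of the formal development, and the grid and threshold used below are ad hoc choices. For the mapping $T$ of \cref{ex-cont-in-large}, both $T$ and $-T$ are strongly nonexpansive, so \cref{prop:comp:sne} combined with \cref{new:pi} predicts that compositions such as $T\circ T$ and $(-T)\circ T$ are again contractions for large distances. The following \texttt{Python} snippet estimates the corresponding contraction factors over a grid.
\begin{verbatim}
# Illustrative sanity check only (ad hoc grid and threshold, not a proof).
# T is the mapping of Example "ex-cont-in-large": T(x) = sin(x) with the
# argument clipped to [-pi/2, pi/2].  For S in {T, T o T, (-T) o T} we
# estimate  sup { |Sx - Sy| / |x - y| : |x - y| >= eps }  over a grid;
# each estimate is strictly below 1, consistent with S being a
# contraction for large distances.
import numpy as np

def T(x):
    return np.sin(np.clip(x, -np.pi / 2, np.pi / 2))

def contraction_factor(S, eps, grid):
    x, y = np.meshgrid(grid, grid)      # all pairs (x, y) on the grid
    mask = np.abs(x - y) >= eps         # keep only "large distance" pairs
    return np.max(np.abs(S(x) - S(y))[mask] / np.abs(x - y)[mask])

grid = np.linspace(-10.0, 10.0, 801)
eps = 0.5
for name, S in [("T", T),
                ("T o T", lambda z: T(T(z))),
                ("(-T) o T", lambda z: -T(T(z)))]:
    print(name, contraction_factor(S, eps, grid))
\end{verbatim}
With these choices the script reports factors of approximately $0.99$ for $T$ and $0.98$ for the two compositions; the only point of the experiment is that each factor stays bounded away from $1$, as required by \cref{defn:in-the-large-cont} for this particular $\epsilon$.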
\begin{theorem} \label{thm:comp:m:sne} Let $m\ge 2$, let $I=\{1,\ldots , m\}$, let $J\subseteq I$, and let $(T_i)_{i\in I}$ be a family of nonexpansive mappings from $X$ to $X$. Suppose that $(\forall i\in I\smallsetminus J)$ $T_i$ is strongly nonexpansive and that $(\forall j\in J)$ $-T_j$ is strongly nonexpansive. Set \begin{equation} T=T_m\ldots T_1. \end{equation} Then $(-1)^{|J|}T$ is strongly nonexpansive. \end{theorem} \begin{proof} We proceed by induction on $k\in \{2,\ldots,m\}$. To this end, let us set $(\forall k\in \{2,\ldots,m\})$ $J_k=\menge{j\in J}{j\le k}$. By \cref{prop:comp:sne} the claim is true for $k=2$. Now assume that, for some $k\in \{2,\ldots,m\}$, we have $(-1)^{|J_k|}T_k\ldots T_1$ is strongly nonexpansive. If $T_{k+1}$ is strongly nonexpansive then $|J_{k+1}|=|J_k|$ and the conclusion follows by applying \cref{prop:comp:sne}\cref{prop:comp:0} (respectively \cref{prop:comp:sne}\cref{prop:comp:ii}) with $(T_1,T_2)$ replaced by $(T_m\ldots T_1,T_{m+1})$ in the case $|J_k|$ is even (respectively $|J_k|$ is odd). If $-T_{k+1}$ is strongly nonexpansive then $|J_{k+1}|=|J_k|+1$ and the conclusion follows by applying \cref{prop:comp:sne}\cref{prop:comp:i} (respectively \cref{prop:comp:sne}\cref{prop:comp:iii}) with $(T_1,T_2)$ replaced by $(T_m\ldots T_1,T_{m+1})$ in the case $|J_k|$ is even (respectively $|J_k|$ is odd). \end{proof} \begin{theorem} \label{thm:comp:m:vsne} Let $m\ge 2$, let $I=\{1,\ldots , m\}$, let $J\subseteq I$, and let $(T_i)_{i\in I}$ be a family of nonexpansive mappings from $X$ to $X$. Suppose that $(\forall i\in I\smallsetminus J)$ $T_i$ is \vsne\ and that $(\forall j\in J)$ $-T_j$ is \vsne. Set \begin{equation} T=T_m\ldots T_1. \end{equation} Then $(-1)^{|J|}T$ is \vsne. \end{theorem} \begin{proof} Proceed similar to the proof of \cref{thm:comp:m:sne} but use \cref{prop:comp:vne}\cref{prop:comp:vne:0}--\cref{prop:comp:vne:iii} instead of \cref{prop:comp:sne}\cref{prop:comp:0}--\cref{prop:comp:iii}. \end{proof} We conclude this section with the following result concerning compositions that involve contractions for large distances. \begin{proposition} \label{lem:comp:cont:large} Let $m\ge 2$, let $I=\{1,\ldots , m\}$, let $\overline{m}\in I$, and let $(T_i)_{i\in I}$ be a family of nonexpansive mappings from $X$ to $X$ and suppose that $T_{\overline{m}}$ is a contraction for large distances. Set \begin{equation} T=T_m\ldots T_1. \end{equation} Then $T$ is a contraction for large distances. \end{proposition} \begin{proof} Let $(x,y)\in X\times X$ and let $\epsilon >0$. Suppose that $\norm{x-y}\ge \epsilon$. We proceed by induction on $k\in \{2,\ldots,m\}$. For $m=2$ we examine two cases: \textsc{Case~1:} $T_1$ is a contraction for large distances. Then $(\exists \beta_\epsilon\in \left]0,1\right[)$ such that $\norm{T_1x-T_1y}\le \beta_\epsilon \norm{x-y}$. Therefore, because $T_2 $ is nonexpansive we learn that $\norm{T x-T y}\le \norm{T_1x-T_1y}\le \beta_\epsilon \norm{x-y}$. \textsc{Case~2:} $T_2$ is a contraction for large distances. If $\norm{T_1x-T_1y}\ge \tfrac{\epsilon}{2}$. Then$(\exists \alpha_\epsilon\in \left]0,1\right[)$ such that $\norm{Tx-Ty}\le \alpha_{\epsilon/2} \norm{x-y}$. Now suppose that $\norm{T_1x-T_1y}< \tfrac{\epsilon}{2}$. Then $\norm{Tx-Ty}\le\norm{T_1x-T_1y}< \tfrac{\epsilon}{2}\le \tfrac{1}{2}\norm{x-y} $. Setting $\beta_\epsilon=\max\{\alpha_{\epsilon/2},\tfrac{1}{2} \}$ proves the claim for $m=2$. 
Now assume that, for some $k\in \{2,\ldots,m\}$, we have that $T_k\ldots T_1$ is a contraction for large distances whenever $T_{\overline{k}}$ is a contraction for large distances for some $\overline{k}\in \{1,\ldots,k\}$. Consider the composition $T_{k+1}T_k\ldots T_1$. If $(\exists \overline{k}\in \{1,\ldots, k\})$ such that $T_{\overline{k}}$ is a contraction for large distances, then the inductive hypothesis implies that $T_k\ldots T_1$ is a contraction for large distances. Otherwise, $T_{k+1}$ must be a contraction for large distances. In both cases the conclusion follows from applying the base case with $(T_1,T_2)$ replaced by $(T_k\ldots T_1,T_{k+1})$. The proof is complete. \end{proof} \section{Application to splitting algorithms} \label{sec:8} In this section we use our earlier conclusions to obtain stronger and more refined convergence results for some important splitting methods (see, e.g., \cite[Chapter~26]{BC2017}); namely, the Peaceman--Rachford algorithm (see \cref{thm:PR}\cref{thm:PR:iii:a}--\cref{thm:PR:iii:b}\&\cref{thm:PR:ii} below), the Douglas--Rachford algorithm (see \cref{thm:DR}\cref{thm:DR:ii} below) and the forward-backward algorithm (see \cref{thm:FB}\cref{thm:FB:ii} below). Let $C\colon X\rras X$ be uniformly monotone and suppose that $\zer C\neq \fady$. Then $C$ is strictly monotone and it follows from, e.g., \cite[Proposition~23.35]{BC2017} that \begin{equation} \label{eq:zer:uniq} \text{$\zer C$ is a singleton}. \end{equation} \begin{theorem}[{\bf Peaceman--Rachford algorithm}] \label{thm:PR} Suppose that $A$ is uniformly monotone. Set $T=R_BR_A$. Let $x_0\in X$ and set $(\forall \nnn)$: \begin{subequations} \begin{align} x_{n+1}&=Tx_n, \\ y_{n}&=J_A x_n. \end{align} \end{subequations} Then the following hold. \begin{enumerate} \item Suppose that $\zer(A+B)\neq \fady$. Then we have: \label{thm:PR:i} \begin{enumerate} \item \label{thm:PR:i:a} $\Fix T\neq \fady$. \item \label{thm:PR:i:b} $(\forall \overline{x}\in \Fix T)$ $(y_n)_\nnn$ converges strongly to $J_A\overline{x}$ and $\zer(A+B)=\{J_A\overline{x}\}$. \end{enumerate} If, in addition, $B$ is uniformly monotone then we also have: \begin{enumerate} \setcounter{enumii}{2} \item \label{thm:PR:iii:a} $T$ is strongly nonexpansive. \item \label{thm:PR:iii:b} $(x_n)_\nnn$ converges weakly to $\overline{x}$. \end{enumerate} \item \label{thm:PR:ii} Suppose that $A^{-1}$ is uniformly monotone. Then we have: \begin{enumerate} \item \label{thm:PR:ii:a} $T$ is a contraction for large distances and $(\exists \overline{x}\in X)$ such that $\Fix T=\{\overline{x}\}$. \item \label{thm:PR:ii:b} $(x_n)_\nnn$ converges strongly to $\overline{x}$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} \cref{thm:PR:i:a}: Applying \cref{eq:zer:uniq} with $C$ replaced by $A+B$, we conclude that $\zer(A+B)$ is a singleton; consequently, $\Fix T\neq \fady$. \cref{thm:PR:i:b}: This is \cite[Proposition~26.13]{BC2017}. \cref{thm:PR:iii:a}: It follows from \cref{prop:um:sne} applied to $A$ (respectively $B$) that $-R_A$ (respectively $-R_B$) is strongly nonexpansive. Consequently, $R_BR_A$ is strongly nonexpansive by \cref{prop:comp:sne}\cref{prop:comp:i} applied with $(T_1,T_2)$ replaced by $(R_A,R_B)$. \cref{thm:PR:iii:b}: Combine \cref{thm:PR:iii:a} with \cref{fact:T:sne:converges}. \cref{thm:PR:ii:a}\&\cref{thm:PR:ii:b}: It follows from \cref{thm-um-self-dual} that $R_A$ is a contraction for large distances. Combining this with \cref{lem:comp:cont:large} applied with $(m,T_1,T_2)$ replaced by $(2, R_A,R_B)$ we learn that $T$ is a contraction for large distances.
Now combine this with \cref{cor-cil-fp}\cref{cor-cil-fp:i}\&\cref{cor-cil-fp:ii}. \end{proof} The assumption that $B$ is uniformly monotone is critical in the conclusion of \cref{thm:PR}\cref{thm:PR:iii:b} as we illustrate below. \begin{example} \label{ex:no:conv} Suppose that $X\neq \{0\}$. Let $A=N_{\{0\}}$ and let $B\equiv 0$. Then $A$ is strongly monotone, hence uniformly monotone, and $B$ is \emph{not} uniformly monotone. Moreover, $R_A=-\Id$ and $R_B=\Id$. Consequently, $T=R_BR_A=-\Id$. Let $x_0\in X\smallsetminus \{0\}$. Then $(\forall \nnn)$ $T^n x_0=(-1)^nx_0$ and $(T^n x_0)_\nnn$ does \emph{not} converge. \end{example} The assumption that $A^{-1}$ is uniformly monotone is critical in the conclusion of \cref{thm:PR}\cref{thm:PR:ii:b} as we illustrate below. \begin{example} \label{ex:weak-not-strong:conv} Suppose that $X=\ell^2(\{1,2,3,\dots\})$ with the standard Schauder basis $e_1=(1,0,\ldots)$, $e_2=(0,1,0,\ldots)$, and so on. Let $A=N_{\{0\}}$, let $R\colon X\to X\colon (x_1,x_2,\ldots)\mapsto (0,x_1,x_2,\ldots)$ (the right shift operator) and set $B=\Big(\tfrac{\Id-R}{2}\Big)^{-1}-\Id$. Then $A$ is strongly monotone, hence uniformly monotone. Moreover, because $R_B=-R$ is nonexpansive, we conclude that $B$ is maximally monotone by \cref{fact:corres}. Observe that $A^{-1}\equiv 0$ is \emph{not} uniformly monotone. Moreover, $R_A=-\Id$ and $R_B=-R$. Consequently, $T=R_BR_A=R$. Let $x_0=e_1$. Then $(\forall \nnn)$ $T^n x_0=e_{n+1}$ and $(T^n x_0)_\nnn=(e_1,e_2,\ldots)$ converges weakly but \emph{not} strongly to $0$. \end{example} We now turn to the Douglas--Rachford algorithm. We recall the following fact. \begin{fact} \label{fact:strong:comb} Let $T_1\colon X\to X$, let $T_2\colon X\to X$ and let $\lambda\in \left]0,1\right[$. Set $T=(1-\lambda) T_1+\lambda T_2$. Suppose that $T_1$ is strongly nonexpansive and that $T_2$ is nonexpansive. Then $T$ is strongly nonexpansive. \end{fact} \begin{proof} See \cite[Proposition~1.3]{BR77}. \end{proof} \begin{proposition} \label{prop:DR:cont} Let $R\colon X\to X$ and let $\lambda\in \left]0,1\right[$. Set $T=(1-\lambda) \Id+\lambda R$. Suppose that $-R$ is strongly nonexpansive. Then $T$ is a contraction for large distances. \end{proposition} \begin{proof} Clearly $R$ is nonexpansive. Observe that \cref{fact:strong:comb} applied with $(T_1,T_2)$ replaced by $(\Id, R)$ implies that $T$ is strongly nonexpansive. We claim that $-T$ is strongly nonexpansive. Indeed, applying \cref{fact:strong:comb} with $(T_1,T_2,\lambda)$ replaced by $(-R,-\Id,1-\lambda)$ implies that $-T=\lambda(-R)+(1-\lambda)(-\Id)$ is strongly nonexpansive. Altogether, we conclude that $T$ is a contraction for large distances in view of \cref{new:pi}. \end{proof} \begin{theorem}[{\bf Douglas--Rachford algorithm}] \label{thm:DR} Suppose that $\zer(A+B)\neq \fady$. Suppose that $A$ is uniformly monotone. Set $T=\tfrac{1}{2}(\Id+R_BR_A)$. Let $x_0\in X$ and set $(\forall \nnn)$: \begin{subequations} \begin{align} x_{n+1}&=Tx_n, \\ y_{n}&=J_A x_n. \end{align} \end{subequations} Then $(\exists \overline{x}\in \Fix T)$ such that the following hold: \begin{enumerate} \item \label{thm:DR:i} $(y_n)_\nnn$ converges strongly to $\overline{y}\coloneqq J_A\overline{x}$. \item \label{thm:DR:ii} Suppose that $(\exists C\in \{A^{-1},B^{-1}\})$ such that $C$ is uniformly monotone. Then we additionally have: \begin{enumerate} \item \label{thm:DR:iii:a} $T$ is a contraction for large distances and $(\exists \overline{x}\in X)$ such that $\Fix T=\{\overline{x}\}$.
\item \label{thm:DR:iii:b} $(x_n)_\nnn$ converges strongly to $\overline{x}$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Applying \cref{eq:zer:uniq} with $C$ replaced by $A+B$, we conclude that $\zer(A+B)$ is a singleton; consequently, $\Fix T\neq \fady$. \cref{thm:DR:i}: This is \cite[Proposition~26.11(vi)(b)]{BC2017}. \cref{thm:DR:iii:a}\&\cref{thm:DR:iii:b}: Suppose that $C=A^{-1}$. It follows from \cref{thm-um-self-dual} that $R_A$ is a contraction for large distances. Consequently, $T$ is a contraction for large distances. Now combine with \cref{cor-cil-fp}\cref{cor-cil-fp:i}\&\cref{cor-cil-fp:ii}. Now suppose that $C=B^{-1}$. It follows from \cref{prop:um:sne} applied to $A$ (respectively $B^{-1}$) that $-R_A$ (respectively $R_B=-R_{B^{-1}}$) is strongly nonexpansive. Consequently, $-R_BR_A$ is strongly nonexpansive by \cref{prop:comp:sne}\cref{prop:comp:i} applied with $(T_1,T_2)$ replaced by $(R_A,R_B)$. Now combine with \cref{prop:DR:cont} applied with $(\lambda,R)$ replaced by $(\tfrac{1}{2}, R_BR_A)$. This proves \cref{thm:DR:iii:a}. To show \cref{thm:DR:iii:b}, combine \cref{thm:DR:iii:a} with \cref{cor-cil-fp}\cref{cor-cil-fp:ii}. \end{proof} We conclude this section with an application to the forward-backward algorithm. \begin{theorem}[{\bf Forward-backward algorithm}] \label{thm:FB} Let $\beta >0$. Suppose that $A$ is $\beta$-cocoercive and that $B$ is uniformly monotone. Let $\gamma\in \left]0,2\beta\right[$. Set $T=J_{\gamma B}(\Id-\gamma A)$. Let $x_0\in X$ and set $(\forall \nnn)$: \begin{equation} x_{n+1}=Tx_n. \end{equation} Then the following hold: \begin{enumerate} \item \label{thm:FB:i} $\zer(A+B)$ is a singleton. \item \label{thm:FB:ii} $T$ is a contraction for large distances. \item \label{thm:FB:iii} $(x_n)_\nnn$ converges strongly to the unique point in $\zer(A+B)$. \end{enumerate} \end{theorem} \begin{proof} \cref{thm:FB:i}: Observe that $A+B$ is maximally monotone by, e.g., \cite[Corollary~25.5(i)]{BC2017}, and uniformly monotone. Now combine this with \cref{thm-umg}\cref{thm-umg:iii} applied with $A$ replaced by $A+B$. \cref{thm:FB:ii}: Observe that $\Id-\gamma A$ is $\gamma/(2\beta)$-averaged, hence nonexpansive. Now combine this with \cref{lem-inv-res}\cref{lem-inv-res:c} (applied with $A$ replaced by $\gamma B$) and \cref{lem:comp:cont:large} applied with $(m,T_1,T_2)$ replaced by $(2, \Id-\gamma A,J_{\gamma B})$. \cref{thm:FB:iii}: Combine \cref{thm:FB:ii} and \cref{cor-cil-fp}\cref{cor-cil-fp:ii}. \end{proof} \small \section*{Acknowledgements} The research of WMM was partially supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant. \begin{thebibliography}{999} \small \seppthree \bibitem{AlbGue97} Ya. I.\ Alber and S.\ Guerre-Delabriere, Principle of weakly contractive maps in Hilbert spaces, New Results in Operator Theory and its Applications, \emph{Operator Theory: Advances and Applications}~98 (1997), 7--22. \bibitem{BC2017} H.H.\ Bauschke and P.L.\ Combettes, \emph{Convex Analysis and Monotone Operator Theory in Hilbert Spaces}, 2nd edition, Springer, 2017. \bibitem{BMW12} H.H.\ Bauschke, S.\ Moffat, and X.\ Wang, Firmly nonexpansive mappings and maximally monotone operators: correspondence and duality, {\em Set-valued and Variational Analysis}~{20} (2012), 131--153. \bibitem{BMW2021} H.H.\ Bauschke, W.M.\ Moursi, and X.\ Wang, Generalized monotone operators and their averaged resolvents, {\em Mathematical Programming (Series B)}~{189} (2021), 55--74.
\bibitem{BMW2012} H.H.\ Bauschke, S.M.\ Moffat, and X.\ Wang, Firmly nonexpansive mappings and maximally monotone operators: correspondence and duality, {\em Set-valued and Variational Analysis}~{20} (2012), 131--153. \bibitem{Beck2017} A.\ Beck, \emph{First-Order Methods in Optimization}, SIAM 2017. \url{https://doi.org/10.1137/1.9781611974997} \bibitem{BL2000} Y.\ Benyamini, J.\ Lindenstrauss, \emph{Geometric Nonlinear Functional Analysis~1}, {American Mathematical Society Colloquium Publications~48}, {American Mathematical Society, Providence, RI}, {2000} \bibitem{BV10} J.M.\ Borwein and J.\ Vanderwerff, Constructions of uniformly convex functions, {\em Canadian Mathematical Bulletin}~{55} (2012), 697--707. \bibitem{BV12} J.M.\ Borwein and J.\ Vanderwerff, {\em Convex Functions: Constructions, Characterizations and Counterexamples}, Encyclopedia of Mathematica and Its Applications~ 109, Cambridge University Press, 2010. \bibitem{Borwein50} J.M.\ Borwein, Fifty years of maximal monotonicity, \emph{Optimization Letters}~4 (2010), 473--490. \bibitem{Brezis} H.\ Brezis, \emph{Op\'erateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert}, North-Holland/Elsevier, 1973. \bibitem{BR77} R.E.\ Bruck and S.\ Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, \emph{Houston Journal of Mathematics}~3 (1977), 459--470. \bibitem{BurIus} R.S.\ Burachik and A.N.\ Iusem, \emph{Set-Valued Mappings and Enlargements of Monotone Operators}, Springer-Verlag, 2008. \bibitem{EckBer} J.\ Eckstein and D.P.\ Bertsekas, On the Douglas--Rachford splitting method and the proximal point algorithm for maximal monotone operators, \emph{Mathematical Programming (Series A)}~55 (1992), 293--318. \bibitem{Gis2017} P.\ Giselsson, Tight global linear convergence rate bounds for Douglas--Rachford splitting \emph{Journal of Fixed Point Theory and Applications}~19 (2017), 2241--2270. \bibitem{GJ08} M.I.\ Garrido and J.\ Jaramillo, Lipschitz-type functions on metric spaces. \emph{Journal of Mathematical Analysis and Applications}~340 (2008), 282--290. \bibitem{Minty} G.J.\ Minty, Monotone (nonlinear) operators in Hilbert spaces, \emph{Duke Mathematical Journal}~29 (1962), 341--346. \bibitem{MVan2019} W.M.\ Moursi and L. Vandenberghe, Douglas--Rachford splitting for the sum of a Lipschitz continuous and a strongly monotone operator, \emph{Journal of Optimization Theory and Applications}~183 (2019), 179--198 \bibitem{Parker55} F. D.\ Parker, Integrals of Inverse Functions, \emph{American Mathematical Monthly}~62, (1955) 439--440. \bibitem{Rak62} E.\ Rakotch, A note on contractive mappings, \emph{Proceedings of the American Mathematical Society}~13 (1962), 459--465. \bibitem{ReiZas00} S.\ Reich and A.J.\ Zaslavski, Almost all nonexpansive mappings are contractive, \emph{La Soc\'et\'e Royale du Canada. L'Acad\'emie des Sciences. Comptes Rendus Math\'ematiques}~22 (2000), 118--124. \bibitem{Rock70} R.T.\ Rockafellar, \emph{Convex Analysis}, Princeton University Press, Princeton, 1970. \bibitem{Rock1970} R.T.\ Rockafellar, On the maximal monotonicity of subdifferential mappings, \emph{Pacific Journal of Mathematics}~33, 209--216 (1970). \bibitem{Rock98} R.T.\ Rockafellar and R. J-B\ Wets, \emph{Variational Analysis}, Springer-Verlag, corrected 3rd printing, 2009. \bibitem{Simons1} S.\ Simons, \emph{Minimax and Monotonicity}, Springer-Verlag, 1998. \bibitem{Simons2} S.\ Simons, {\em From Hahn--Banach to Monotonicity}, Springer--Verlag, 2007. 
\bibitem{Za83} C.\ Z\u{a}linescu, On uniformly convex functions, {\em Journal Mathematical Analysis and Appllications}~{95} (1983), 344--374. \bibitem{Za02} C.\ Z\u{a}linescu, {\em Convex Analysis in General Vector Spaces}, World Scientific, 2002. \bibitem{Zeidler2a} E.\ Zeidler, \emph{Nonlinear Functional Analysis and Its Applications II/A: Linear Monotone Operators}, Springer-Verlag, 1990. \bibitem{Zeidler2b} E.\ Zeidler, \emph{Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators}, Springer-Verlag, 1990. \end{thebibliography} \begin{appendices} \crefalias{section}{appsec} \section{} \label{App:B} \begin{myproof}[Finer conclusions for \cref{ex-cont-in-large-b}.] Let $g$ be the inverse function of the function $x\mapsto x+\sin x$ over the interval $\left]\tfrac{-\pi-2}{2},\tfrac{\pi+2}{2}\right[$ and let $h$ be the inverse function of the function $x\mapsto x-\sin x$ over the interval $\left]\tfrac{-\pi-2}{2},\tfrac{\pi+2}{2}\right[$. Set \begin{equation} A\colon \RR\to \RR\colon x\mapsto \begin{cases} x+1,&x\le \tfrac{-\pi-2}{4}; \\ g(2x)-x,&\abs{x}<\tfrac{\pi+2}{4}; \\ x-1, &\text{otherwise}, \end{cases} \end{equation} and set \begin{equation} f\colon \RR\to \RR\colon x\mapsto \frac{1}{2}\begin{cases} x^2+2x,&x\le \tfrac{-\pi-2}{4}; \\ 2xg(2x)-\tfrac{1}{2}(g(2x))^2+\cos(g(2x))-x^2-\tfrac{\pi+1}{2},&\abs{x}<\tfrac{\pi+2}{4}; \\ x^2-2x, &\text{otherwise}. \end{cases} \end{equation} Then \begin{equation} \label{ex-cont-in-large-b:i} T=R_A, \end{equation} \begin{equation} \label{ex-cont-in-large-b:ii} A=f' , \end{equation} \begin{equation} \label{ex-cont-in-large-b:e1} A^{-1}\colon \RR\to \RR\colon x\mapsto \begin{cases} x-1,&x\le \tfrac{-\pi+2}{4}; \\ h(2x)-x,&\abs{x}<\tfrac{\pi-2}{4}; \\ x+1, &\text{otherwise}, \end{cases} \end{equation} and \begin{equation} \label{ex-cont-in-large-b:e2} f^*\colon \RR\to \RR\colon x\mapsto \frac{1}{2}\begin{cases} x^2-2x,&x\le \tfrac{-\pi+2}{4}; \\ 2xh(2x)-\tfrac{1}{2}(h(2x))^2-\cos(g(2x))-x^2+\tfrac{\pi-1}{2},&\abs{x}<\tfrac{\pi-2}{4}; \\ x^2+2x, &\text{otherwise}. \end{cases} \end{equation} Observe that $T=R_A$ if and only if $A=((\Id+T)/2)^{-1}-\Id$. Now \begin{equation} \frac{1}{2}(\Id+T)\colon \RR\to \RR\colon x\mapsto \frac{1}{2}\begin{cases} x-1,&x\le- \tfrac{\pi}{2}; \\ x+\sin x,&\abs{x}<\tfrac{\pi}{2}; \\ x+1, &\text{otherwise}. \end{cases} \end{equation} Note that $ \tfrac{1}{2}(\Id+T)\colon \RR\to \RR$ is strictly increasing and differentiable, hence continuous. Therefore, $(\tfrac{1}{2}(\Id+T))^{-1}\colon \RR\to \RR$ is strictly increasing and continuous. To this end let $(x,y)\in \RR\times \RR$. Then $y=( \tfrac{1}{2}(\Id+T))^{-1}(x)$ if and only if $x=\tfrac{1}{2}(y+Ty)$. If $y\le -\tfrac{\pi}{2}$ then $x=\tfrac{1}{2}(y-1)$. Hence, $y=2x+1$ and $x\le \tfrac{-\pi-2}{4}$. Similarly, if $y\ge \tfrac{\pi}{2}$ then $x=\tfrac{1}{2}(y+1)$. Hence, $y=2x-1$ and $x\ge \tfrac{\pi+2}{4}$. If $y<\abs{\tfrac{\pi}{2}}$ then $x=\tfrac{1}{2}(y+\sin y)$; equivalently, $y=g(2x)$ and $\abs{x}\le \tfrac{\pi+2}{4}$. This proves \cref{ex-cont-in-large-b:i}. We now show that $f$ is an antiderivative of $A$. The formula for $f$ over the intervals $\left[\tfrac{\pi+2}{4},+\infty\right[$ and $\left]-\infty, -\tfrac{\pi+2}{4}\right]$ is straightforward. To compute an antidrivative of $g(2x)-x$ we use \cite{Parker55} to learn that \begin{equation} \int (g(2x)-x)dx=\tfrac{1}{2}\Big(2xg(2x)-\tfrac{(g(x))^2}{2}+\cos(g(2x))-x^2\Big)+C. 
\end{equation} The continuity of $A$ implies that $g(\tfrac{\pi+2}{2})=\tfrac{\pi}{2}$ and $g(-\tfrac{\pi+2}{2})=-\tfrac{\pi}{2}$. This, together with the continuity of $f$, imply that $C=-\tfrac{\pi+1}{4}$. This proves \cref{ex-cont-in-large-b:ii}. To prove \cref{ex-cont-in-large-b:e1} proceed similar to the proof of \cref{ex-cont-in-large-b:i} and observe that $R_{A^{-1}}=-T$. Analogously, the proof of \cref{ex-cont-in-large-b:e2} is similar to that of \cref{ex-cont-in-large-b:ii} by observing that $A^{-1}=(f^*)'$. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.45]{plot_A}& \includegraphics[scale=0.45]{plot_Ainv} \end{tabular} \end{center} \caption{A \texttt{GeoGebra} snapshot illustrating \cref{ex-cont-in-large-b}. Left: plot of $A(x)$. Right: plot of $A^{-1}(x)$.} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.45]{plot_f}& \includegraphics[scale=0.45]{plot_fconj} \end{tabular} \end{center} \caption{A \texttt{GeoGebra} snapshot illustrating \cref{ex-cont-in-large-b}. Left: plot of $f$. Right: plot of $f^*$.} \end{figure} \end{myproof} \end{appendices} \end{document}
2205.09033v3
http://arxiv.org/abs/2205.09033v3
A visual tour via the Definite Integration $\int_{a}^{b}\frac{1}{x}dx$
\documentclass[11pt,leqno]{amsart} \topmargin= .5cm \textheight= 22.5cm \textwidth= 32cc \baselineskip=16pt \usepackage{indentfirst, amssymb,amsmath,amsthm} \usepackage{hyperref} \usepackage{csquotes} \usepackage{graphicx} \usepackage{epstopdf} \usepackage{multicol} \usepackage[margin=.75in]{geometry} \newtheorem{theo}{Theorem} \newtheorem{cor}{Corollary} \newtheorem{rem}{Remark} \newcommand{\beas}{\begin{eqnarray*}} \newcommand{\eeas}{\end{eqnarray*}} \begin{document} \title[A visual tour via Definite Integration]{A visual tour via the Definite Integration $\int_{a}^{b}\frac{1}{x}dx$} \date{} \author[B. Chakraborty]{Bikash Chakraborty} \date{} \footnotetext{MSC 2020: Primary: 11A99, 00A05, 11B33, 00A66; Keywords: Visual tour, $e$, $\pi$, Euler's constant, Euler's limit, Geometric progression.} \address{Department of Mathematics, Ramakrishna Mission Vivekananda Centenary College, Rahara, West Bengal 700 118, India.} \email{[email protected], [email protected]} \maketitle \footnotetext{2010 Mathematics Subject Classification: Primary 00A05, Secondary 00A66.} \begin{abstract} Geometrically, $\int_{a}^{b}\frac{1}{x}dx$ means the area under the curve $\frac{1}{x}$ from $a$ to $b$, where $0<a<b$, and this area gives a positive number. Using this area argument, in this expository note, we present some visual representations of some classical results. For examples, we demonstrate an area argument on a generalization of Euler's limit $\left(\lim\limits_{n\to\infty}\left(\frac{(n+1)}{n}\right)^{n}=e\right)$. Also, in this note, we provide an area argument of the inequality $b^a < a^b$, where $e \leq a< b$, as well as we provide a visual representation of an infinite geometric progression. Moreover, we prove that the Euler's constant $\gamma\in [\frac{1}{2}, 1)$ and the value of $e$ is near to $2.7$.\par Some parts of this expository article has been accepted for publication in Resonance – Journal of Science Education, The Mathematical Gazette, and International Journal of Mathematical Education in Science and Technology. \end{abstract} \section{Introduction} It is well known that the function $\phi:(0,+\infty)\to \mathbb{R}$, defined by $\phi(x)=\frac{1}{x}$, is a monotone decreasing and continuous. Thus $\phi(x)$ is Riemann integrable on $[a,b]$ where $0<a<b$. Geometrically, $\int_{a}^{b}\frac{1}{x}dx$ means the area under the curve $y=\frac{1}{x}$ from $a$ to $b$. Moreover, it is useful to observe that the funtion $f(t)=\int_{1}^{t} \frac{1}{x}dx$ is strictly increasing for $t\geq 1$.\par \begin{figure}[h] \includegraphics[scale=.6]{t1.png} \caption{$\ln b-\ln a< \frac{1}{2}\cdot(\frac{1}{a}+\frac{1}{b})\cdot(b-a).$} \centering \end{figure} Let $a$ and $b$ be two positive real numbers. Then the fact $\frac{1}{x}+\frac{x}{ab}\leq \frac{1}{a}+\frac{1}{b}$ for $a\leq x \leq b$ (as $(x-b)(x-a)\leq 0$)is equivalent to saying that the line $y=\frac{1}{a}+\frac{1}{b}-\frac{x}{ab}$ lies above the curve $y=\frac{1}{x}$ for $a\leq x \leq b$. 
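For completeness, the parenthetical claim can be checked by a short computation (an elementary verification of the stated equivalence): multiplying by $abx>0$ shows that, for $a\leq x\leq b$, $$\frac{1}{x}+\frac{x}{ab}\leq \frac{1}{a}+\frac{1}{b} \Longleftrightarrow ab+x^{2}\leq (a+b)x \Longleftrightarrow (x-a)(x-b)\leq 0,$$ and the last inequality clearly holds for $a\leq x\leq b$.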
Thus, Figure 1 shows that the area under the curve $y =\frac{1}{x}$ from $a$ to $b$ is less than the area of the trapeziums covering it, i.e., $$\ln b-\ln a< \frac{1}{2}\cdot(\frac{1}{a}+\frac{1}{b})\cdot(b-a).$$ \begin{figure}[h] \includegraphics[scale=.5]{t2.png} \caption{$\ln b-\ln a>\frac{2}{a+b}\cdot(b-a).$} \centering \end{figure} Again, the fact $\frac{1}{x}+\frac{4x}{(a+b)^{2}}\geq \frac{4}{a+b}$ for $x>0$ (follows from AM-GM inequality) is equivalent to saying that the curve $y=\frac{1}{x}$ lies above its tangent line $y=\frac{4}{a+b}-\frac{4x}{(a+b)^{2}}$ at the point $(\frac{a+b}{2},\frac{2}{a+b})$.\par Thus, figure 2 gives the visualization that the area under the curve $y =\frac{1}{x}$ from $a$ to $b$ is greater than the area of the trapezium below it, i.e., $$\ln b-\ln a>\frac{2}{a+b}\cdot(b-a).$$ \section{Tour-1} In a recent note of the American Mathematical Monthly, R. Farhadian (\cite{i}) made a beautiful generalization of Euler's limit $\left(\lim\limits_{n\to\infty}\left(\frac{(n+1)}{n}\right)^{n}=e\right)$ as follows: \begin{theo} (\cite{i}) Let $A_{n}$ be a strictly increasing sequence of positive numbers satisfying the asymptotic formula $A_{(n+1)}\sim A_{n}$, and let $d_{n}=A_{(n+1)}-A_{n}$. Then $$\lim\limits_{n\to\infty}\left(\frac{A_{(n+1)}}{A_{n}}\right)^\frac{A_{n}}{d_{n}}=e.$$ \end{theo} Now, we will provide a second proof of it, which is purely pictorial. \begin{figure}[h] \includegraphics[scale=.4]{t3.png} \caption{} \centering \end{figure} From Figure 3, it is clear that \beas \frac{2}{A_{n}+A_{n+1}}\cdot(A_{n+1}-A_{n})<\ln(A_{n+1})-\ln(A_{n})<\frac{1}{2}\cdot\left(\frac{1}{A_{n}}+\frac{1}{A_{n+1}}\right)\cdot(A_{n+1}-A_{n}),\eeas i.e., $$\frac{2}{1+\frac{A_{n+1}}{A_{n}}}<\ln\left(\frac{A_{n+1}}{A_{n}}\right)^{\frac{A_n}{d_n}}<\frac{1}{2}\cdot\left(1+\frac{A_{n}}{A_{n+1}}\right).$$ Since $A_{(n+1)}\sim A_{n}$, thus $\lim\limits_{n\to\infty}\left(\frac{A_{(n+1)}}{A_{n}}\right)^\frac{A_{n}}{d_{n}}=e.$ \begin{rem} It is well-known that if $a_{n}$ is a sequence of positive numbers satisfying $\lim\limits_{n\to +\infty} a_{n}=0$, then $$\lim\limits_{n\to +\infty} \left(1+a_{n}\right)^{\frac{1}{a_n}}=e$$.\\ Here, we will provide a visual proof of it. \begin{figure}[h] \includegraphics[scale=.4]{t3.1.png} \caption{} \centering \end{figure} From the figure, it is clear that \beas \frac{2}{2+a_{n}}\cdot a_{n}<\ln(1+a_{n})-\ln 1<\frac{1}{2}\cdot\left(1+\frac{1}{1+a_{n}}\right)\cdot a_{n},\eeas i.e., $$\lim\limits_{n\to +\infty} \left(1+a_{n}\right)^{\frac{1}{a_n}}=e$$. \end{rem} \section{Tour-2} Next, we provide a pictorial description of a geometric series $$1+\frac{1}{r}+\frac{1}{r^2}+\frac{1}{r^3}+\ldots=\frac{r}{r-1}$$ when $r>1$. \begin{figure}[h] \includegraphics[scale=.2]{t4.png} \caption{$\int_{1}^{r} \frac{dt}{t}=\int_{\frac{1}{r}}^{1} \frac{dt}{t}$} \centering \end{figure} Since $\int_{1}^{r} \frac{dx}{x}=\int_{\frac{1}{r}}^{1} \frac{dy}{y}$, thus the area covered by the rectangle $\{(0,1);(1,1);(1,\frac{1}{r});(0,\frac{1}{r})\}$ is same as the area covered by the rectangle $\{(1,0);(r,0);(r,\frac{1}{r});(1,\frac{1}{r})\}$. Thus \beas \left(1-\frac{1}{r}\right)\cdot1&=&\left(\frac{1}{r}-\frac{1}{r^2}\right)\cdot(r-1)+\left(\frac{1}{r^2}-\frac{1}{r^3}\right)\cdot(r-1)+\ldots,\\ &=&\frac{(r-1)^2}{r^2}\cdot\left(1+\frac{1}{r}+\frac{1}{r^2}+\ldots\right),\eeas i.e., $$\left(1+\frac{1}{r}+\frac{1}{r^2}+\ldots\right)=\frac{r}{r-1},$$ which gives the required equality. 
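As a quick numerical illustration of this identity (approximate partial sums only, not needed for the argument), take $r=3$: $$1=1,\quad 1+\frac{1}{3}=\frac{4}{3}\approx 1.333,\quad 1+\frac{1}{3}+\frac{1}{9}=\frac{13}{9}\approx 1.444,\quad 1+\frac{1}{3}+\frac{1}{9}+\frac{1}{27}=\frac{40}{27}\approx 1.481,$$ and the partial sums indeed approach $\frac{r}{r-1}=\frac{3}{2}$.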
\section{Tour-3} The two constants $e$ and $\pi$ have inspired many visual proofs of the inequality ${\pi}^e < e^{\pi}$. In a recent Mathematical Intelligencer note (\cite{C}), the author provided a visual proof of the inequality $\pi^{e}< e^{\pi}$. However, that proof can be adapted to show the more general inequality $b^a < a^b$, where $e \leq a< b$. \medbreak \textbf{Visual Proof-1} \\ Since $\ln a\geq 1$, we have $\frac{1}{x\ln a}\leq \frac{1}{x}$ for $x>0$. Thus Figure 6 shows that the area under the curve $y = \frac{1}{x\ln a}$ from $a$ to $b$ is less than the area of the rectangle PQRS, i.e., \begin{figure}[h] \includegraphics[scale=.6]{figure.png} \caption{$\int\limits_{a}^{b}\frac{dx}{x\ln a} < \frac{1}{a}\cdot(b-a).$} \centering \end{figure} \beas \frac{\ln b }{\ln a}-1=\int_{a}^{b}\frac{dx}{x\ln a} < \frac{1}{a}(b-a)=\frac{b}{a}-1, \eeas i.e., \beas b^{a}< a^{b}.\eeas \newpage \textbf{Visual Proof-2} \\ Also, Figure 7 shows that the area under the curve $y = \frac{1}{x}$ from $a\ln a$ to $b\ln a$ is less than the area of the rectangle covering it. Since $e\leq a$, we have $1\leq \ln a$, i.e., $a\leq a\ln a<b\ln a$. \begin{figure}[h] \includegraphics[scale=.6]{t6.png} \caption{$\int\limits_{a\ln a}^{b\ln a}\frac{dx}{x} < \frac{1}{a}\cdot(b-a)\ln a.$} \centering \end{figure} \beas \ln b -\ln a < \ln a\cdot\left(\frac{b}{a}-1\right),\eeas i.e., \beas \frac{\ln b}{\ln a}-1 < \frac{b}{a}-1. \eeas Thus \beas a\ln b<b\ln a, \text{~~i.e.,~~} b^{a}< a^{b}.\eeas \begin{cor} (\cite{C2}) If we take $a=e$, then $(a,0)$ and $(a\ln a, 0)$ coincide with $(e, 0)$. Also, $(a,\frac{1}{a})$ and $(a\ln a, \frac{1}{a})$ coincide with $(e, \frac{1}{e})$. Thus Figure 7 becomes: \begin{figure}[h] \includegraphics[scale=.6]{t5.png} \caption{$\int_{e}^{b}\frac{dx}{x} < \frac{1}{e}(b-e).$} \centering \end{figure} \end{cor} Thus we get $\ln b-1<\frac{b}{e}-1$, i.e., $b^{e}<e^{b}$. \begin{cor} By taking $a=e$ and $b=\pi$, we get $\pi^{e}<e^{\pi}$ (\cite{C}). \end{cor} \section{Tour-4} Considering the definition of the number $e$ by the equation $$1=\int_{1}^{e}\frac{1}{x} dx,$$ we explain why the value of $e$ is close to $2.7$. More precisely, we will show that $2.7<e<2.75$.
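Before turning to the area argument, it may help to record a rough numerical cross-check (approximate values, for orientation only; the proof below does not use them): since $\int_{1}^{t}\frac{1}{x}dx=\ln t$ for $t>1$, the two claims amount to $$\ln\frac{11}{4}\approx 1.0116>1 \qquad \text{and} \qquad \ln\frac{27}{10}\approx 0.9933<1.$$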
\begin{figure}[h] \includegraphics[scale=.5]{t7.png} \caption{$\int_{a}^{b} \frac{1}{x}dx > \frac{2(b-a)}{b+a}.$} \centering \end{figure} Applying the lower bound of the integral $\int_{a}^{b} \frac{1}{x}dx$ (see Figure 9), we have \beas \int_{4}^{11} \frac{1}{x}dx &=& \int_{4}^{6} \frac{1}{x}dx +\int_{6}^{9} \frac{1}{x}dx +\int_{9}^{11} \frac{1}{x}dx\\ &>&\frac{2}{5}+\frac{2}{5}+\frac{1}{5}\\ &=&1, \eeas i.e., $$ \int_{1}^{\frac{11}{4}} \frac{1}{x}dx > \int_{1}^{e}\frac{1}{x} dx~~~~~\Rightarrow~~e<2.75.$$ Again, applying the upper bound of the integral $\int_{a}^{b} \frac{1}{x}dx$ (see Figure 10), we have \begin{figure}[h] \includegraphics[scale=.5]{t8.png} \caption{$\int_{a}^{b} \frac{1}{x}dx < \frac{1}{2}\cdot(\frac{1}{a}+\frac{1}{b})\cdot(b-a).$} \centering \end{figure} \beas &&\int_{10}^{27} \frac{1}{x}dx\\ &=& \int_{10}^{12} \frac{1}{x}dx +\int_{12}^{15} \frac{1}{x}dx +\int_{15}^{18} \frac{1}{x}dx+\int_{18}^{21}\frac{1}{x}dx+\int_{21}^{24} \frac{1}{x}dx +\int_{24}^{27} \frac{1}{x}dx\\ &<&\frac{1}{2}\cdot(\frac{2}{10}+\frac{2}{12})+\frac{1}{2}\cdot(\frac{3}{12}+\frac{3}{15})+\frac{1}{2}\cdot(\frac{3}{15}+\frac{3}{18})+\frac{1}{2}\cdot(\frac{3}{18}+\frac{3}{21})\\&&+\frac{1}{2}\cdot(\frac{3}{21}+\frac{3}{24})+\frac{1}{2}\cdot(\frac{3}{24}+\frac{3}{27})\\ &<& 1, \eeas i.e., $$\int_{1}^{2.7} \frac{1}{x}dx <\int_{1}^{e} \frac{1}{x}dx~~~~~~\Rightarrow~~ e>2.70.$$ \section{Tour-5} Euler's constant is defined as $$\gamma=\lim\limits_{n\to\infty}\gamma_{n},$$ where $$\gamma_{n}=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n}-\ln n.$$ In this visual tour, we will show the existence of Euler's constant $\gamma$ and that $\gamma\in[\frac{1}{2},1)$. Since $$1-\gamma_{n}=\ln n-\frac{1}{2}-\frac{1}{3}-\ldots-\frac{1}{n},$$ the quantity $1-\gamma_{n}$ can be described as the shaded area in the following figure. \begin{figure}[h] \includegraphics[scale=.5]{t9.png} \caption{$\ln n-\frac{1}{2}-\frac{1}{3}-\ldots-\frac{1}{n}=1-\gamma_{n}$} \centering \end{figure} It is seen from the figure that $\{1-\gamma_{n}\}$ is strictly monotone increasing, and $1-\gamma_{n}>0.$ That is, $\{\gamma_{n}\}$ is a strictly monotone decreasing sequence and $\gamma_{n}$ is bounded above by $1$. \newpage Next, we define a sequence $\{A_{n}\}$, where \begin{figure}[h] \includegraphics[scale=.5]{t10.png} \caption{$A_{n}=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n}-\ln (n+1)$} \centering \end{figure} $A_{n}$ is described by the shaded area in the following figure. Then it is seen that $\{A_{n}\}$ is strictly monotone increasing and $A_{n}>0.$ Since $$A_{n}=\gamma_{n}-\ln(n+1)+\ln n$$ and $A_{n}>0,$ we get $$\gamma_{n}>\ln(n+1)-\ln n,$$ which means, by Figure 13, that \begin{figure}[h] \includegraphics[scale=.5]{t11.png} \caption{$\ln b-\ln a >\frac{b-a}{b}$, \text{where} $b>a>0$.} \centering \end{figure} $$\gamma_{n}>\ln(n+1)-\ln n>\frac{1}{n+1}>0,$$ i.e., $\gamma_{n}$ is bounded below by $0$. Thus Euler's constant $$\gamma=\lim\limits_{n\to\infty} \gamma_{n}$$ exists, and $\gamma\in[0,1)$.
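For orientation (approximate values only, which play no role in the argument), the first few terms of the sequence are $$\gamma_{1}=1,\quad \gamma_{2}=\frac{3}{2}-\ln 2\approx 0.8069,\quad \gamma_{3}=\frac{11}{6}-\ln 3\approx 0.7347,\quad \gamma_{4}=\frac{25}{12}-\ln 4\approx 0.6970,$$ which is consistent with a decreasing sequence whose limit is $\gamma\approx 0.5772$.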
\newpage Next, we assume that $$\Gamma_{n}=1+\frac{1}{2}+\frac{1}{3}+\ldots\frac{1}{n}-\ln (n+\frac{1}{2}).$$ Thus, $$\Gamma_{n+1}-\Gamma_{n}=\frac{1}{n+1}+\ln(n+\frac{1}{2})-\ln(n+\frac{3}{2}).$$ Thus by the figure 14, \begin{figure}[h] \includegraphics[scale=.4]{t12.png} \caption{$\ln b-\ln a <\frac{b-a}{a}$, \text{where} $b>a>0$.} \centering \end{figure} $$\Gamma_{n+1}-\Gamma_{n}<\frac{1}{n+1}-\frac{1}{n+\frac{1}{2}}<0,$$ i.e., $\{\Gamma_{n}\}$ is strictly monotone decreasing sequence.\medbreak Again, Figure 14 shows that \beas \int_{n}^{n+\frac{1}{2}} \frac{1}{x}dx<\frac{1}{2n},\eeas and, Figure 15 shows that \begin{figure}[h] \includegraphics[scale=.5]{t13.png} \caption{$\ln b -\ln a< \frac{1}{2}(\frac{1}{a}+\frac{1}{b})\cdot(b-a).$} \centering \end{figure} \beas \int_{1}^{n} \frac{1}{x}dx &=&\sum_{i=1}^{n-1} \int_{i}^{i+1} \frac{1}{x}dx\\ &<& \sum_{i=1}^{n-1} \frac{1}{2}(\frac{1}{i}+\frac{1}{i+1})\\ &=&1+\frac{1}{3}+\frac{1}{4}+\ldots+\frac{1}{n-1}+\frac{1}{2n}. \eeas Thus \beas \ln\left(n+\frac{1}{2}\right)&=&\int_{1}^{n+\frac{1}{2}} \frac{1}{x}dx\\ &=& \int_{1}^{n} \frac{1}{x}dx+\int_{n}^{n+\frac{1}{2}} \frac{1}{x}dx\\ &<&1+\frac{1}{3}+\frac{1}{4}+\ldots+\frac{1}{n-1}+\frac{1}{n}, \eeas Thus $\Gamma_{n}=1+\frac{1}{2}+\frac{1}{3}+\ldots\frac{1}{n}-\ln (n+\frac{1}{2})>\frac{1}{2}.$ Hence $\lim\limits_{n\to\infty} \Gamma_{n}$ exist and $\lim\limits_{n\to\infty} \Gamma_{n}\in [\frac{1}{2}, 1)$.\\ As $\gamma_{n}-\Gamma_n=\ln (n+\frac{1}{2})-\ln n$, thus, applying Figures 13 and 14, we get $$\frac{1}{2n+1}<\gamma_{n}-\Gamma_n<\frac{1}{2n},$$ i.e., $$\gamma=\lim\limits_{n\to\infty} \gamma_{n}=\lim\limits_{n\to\infty} \Gamma_{n}~~~\text{and}~~~~\gamma\in[\frac{1}{2},1).$$ \begin{thebibliography}{99} \bibitem{BS} R. G. Bartle, and D. R. Sherbert, Introduction to Real Analysis (Fourth Edition), Wiley, (2014). \bibitem{C} B. Chakraborty, A Visual Proof that $\pi ^{e}<e^{\pi }$, Math. Intelligencer, 41(1) (2019), pp. 56. \bibitem{C2} B. Chakraborty, A Visual Proof that $b^{e}<e^{b}$ when $b>e$, The Mathematical Gazette, To apper in 2024. \bibitem{C3} B. Chakraborty, A Visual Proof that $b^{a}<a^{b}$ when $e\leq a<b$, International Journal of Mathematical Education in Science and Technology, doi 10.1080/0020739X.2022.2102547. \bibitem{cc} B. Chakraborty and S. Chakraborty, A Visual Proof that $2.5 < e < 2.75$, Resonance – Journal of Science Education (Accepted). \bibitem{i} R. Farhadian, A Generalization of Euler’s Limit, The American Mathematical Monthly, Volume 129, 2022, No. 4, pp. 384. \bibitem{g} C. Gallant, $A^B>B^A$ for $e\leq A<B$, Math. Mag., 64(1) (1991), pp. 31. \bibitem{G} H. N. Gupta, A simple proof that $e<3$, Math. Gaz., 62 (1978), pp. 124. \bibitem{L} Nick Lord, Proof without words that $2<e<3$, Math. Gaz. 95 (2011), pp. 115-117. \end{thebibliography} \end{document}
2205.08964v2
http://arxiv.org/abs/2205.08964v2
Skew constacyclic codes over a class of finite commutative semisimple rings
\documentclass{elsarticle} \newcommand{\trans}{^{\mathrm{T}}} \newcommand{\fring}{\mathcal{R}} \newcommand{\tlcycliccode}{$\theta$-$\lambda$-cyclic code} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \newcommand{\Aut}{\mathrm{Aut\,}} \newcommand{\id}{\mathrm{id}} \newcommand{\wt}{\mathrm{wt}} \newcommand{\ord}{\mathrm{ord\,}} \newcommand{\cchar}{\mathrm{char\,}} \newcommand{\dhamming}{\mathrm{d}_H} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{mathtools} \usepackage{hyperref} \usepackage{geometry} \geometry{ textwidth=138mm, textheight=215mm, left=27mm, right=27mm, top=25.4mm, bottom=25.4mm, headheight=2.17cm, headsep=4mm, footskip=12mm, heightrounded, } \journal{arXiv} \usepackage[amsmath,thmmarks]{ntheorem} { \theoremstyle{nonumberplain} \theoremheaderfont{\bfseries} \theorembodyfont{\normalfont} \theoremsymbol{\mbox{$\Box$}} \newtheorem{proof}{Proof.} } \qedsymbol={\mbox{$\Box$}} \newtheorem{theorem}{Theorem}[section] \newtheorem{axm}[theorem]{Axiom} \newtheorem{alg}[theorem]{Algorithm} \newtheorem{asm}[theorem]{Assumption} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{rul}[theorem]{Rule} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}{Example}[section] \newtheorem{remark}[example]{Remark} \newtheorem{exc}[example]{Exercise} \newtheorem{frm}[theorem]{Formula} \newtheorem{ntn}{Notation} \usepackage{verbatim} \bibliographystyle{elsarticle-num} \begin{document} \begin{frontmatter} \title{Skew constacyclic codes over a class of finite commutative semisimple rings} \author{Ying Zhao} \address{27 Shanda Nanlu, Jinan, P.R.China 250100} \begin{abstract} Let $F_q$ be a finite field of $q=p^r$ elements, where $p$ is a prime, $r$ is a positive integer, we determine automorphism $\theta$ of a class of finite commutative semisimple ring $\fring =\prod_{i=1}^t F_q$ and the structure of its automorphism group $\Aut(\fring)$. We find that $\theta$ is totally determined by its action on the set of primitive idempotent $e_1, e_2,\dots,e_t$ of $\fring$ and its action on $F_q1_{\fring}=\{a1_{\fring} \colon a\in F_q\},$ where $1_{\fring}$ is the multiplicative identity of $\fring.$ We show $\Aut(\fring) = G_1G_2,$ where $G_1$ is a normal subgroup of $\Aut(\fring)$ isomorphic to the direct product of $t$ cyclic groups of order $r,$ and $G_2$ is a subgroup of $\Aut(\fring)$ isomorphic to the symmetric group $S_t$ of $t$ elements. For any linear code $C$ over $\fring,$ we establish a one-to-one correspondence between $C$ and $t$ linear codes $C_1,C_2,\dots,C_t$ over $F_q$ by defining an isomorphism $\varphi.$ For any $\theta$ in $\Aut(\fring)$ and any invertible element $\lambda$ in $\fring,$ we give a necessary and sufficient condition that a linear code over $\fring$ is a $\theta$-$\lambda$-cyclic code in a unified way. When $ \theta\in G_1,$ the $C_1,C_2,\dots,C_t$ corresponding to the $\theta$-$\lambda$-cyclic code $C$ over $\fring$ are skew constacyclic codes over $F_q.$ When $\theta\in G_2,$ the $C_1,C_2,\dots,C_t$ corresponding to the skew cyclic code $C$ over $\fring$ are quasi cyclic codes over $F_q.$ For general case, we give conditions that $C_1,C_2,\dots,C_t$ should satisfy when the corresponding linear code $C$ over $\fring$ is a skew constacyclic code. 
Linear codes over $\fring$ are closely related to linear codes over $F_q.$ We define homomorphisms which map linear codes over $\fring$ to matrix product codes over $F_q.$ One of the homomorphisms is a generalization of the $\varphi$ used to decompose linear code over $\fring$ into linear codes over $F_q,$ another homomorphism is surjective. Both of them can be written in the form of $\eta_M$ defined by us, but the matrix $M$ used is different. As an application of the theory constructed above, we construct some optimal linear codes over $F_q.$ \end{abstract} \begin{keyword} Finite commutative semisimple rings\sep Skew constacyclic codes\sep Matrix product codes \MSC[2020] Primary 94B15 \sep 94B05; Secondary 11T71 \end{keyword} \end{frontmatter} \input{Intro.tex} \input{CodeoverF.tex} \input{CodeoverR.tex} \input{Image.tex} \section{Conclusions} In this article, we study skew constacyclic codes over a class of finite commutative semisimple rings. The automorphism group of $\fring$ is determined, and we characterize skew constacyclic codes over ring by linear codes over finite field. We also define homomorphisms which map linear codes over $\fring$ to matrix product codes over $F_q,$ some optimal linear codes over finite fields are obtained. \section{Acknowledgements} This article contains the main results of the author's thesis of master degree. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. \bibliography{mybibfile} \end{document} \section{Introduction} In modern society, the efficient and reliable transmission of information is inseparable from coding theory. Cyclic codes have been widely studied and applied due to their excellent properties. In recent years, more and more people have studied cyclic codes. Early studies mainly considered codes over finite fields, but when Hammons et al. \cite{Z4linear} found that some good nonlinear codes over binary fields can be regarded as Gray images of linear codes over $\mathbb{Z}_4,$ more and more people began to study linear codes over finite rings, and by defining appropriate mappings, codes over finite rings were related to codes over finite fields. Boucher et al. studied skew cyclic codes over finite fields \cite{Boucher2007Skew, 2009Coding}, and gave some self dual linear codes over $F_4$ with better parameters than known results, which attracted many people to study skew cyclic codes. Skew cyclic codes is a generalization of the cyclic codes, and it can be characterized by skew polynomial ring $F_q\left[x;\theta\right].$ The elements in skew polynomial ring are still polynomials, but multiplication is non-commutative, and factorization is not unique. It is because of these differences that Boucher et al. obtained self dual linear codes with batter parameters over finite fields. But they required $\ord (\theta) $ divides the length of codes. Then, Siap et al. \cite{siap2011skew} removed this condition and studied skew cyclic codes of arbitrary length. Boucher et al. also studied skew constacyclic codes over $F_q$ \cite{Boucher2009codesMod, boucher2011note} and skew constacyclic codes over Galois rings \cite{boucher2008skew}. Influenced by the work of Boucher et al., people began to research skew cyclic codes and skew constacyclic codes over finite rings. But general finite rings no longer have many good properties like finite fields, so it is not easy to study linear codes over general finite rings. 
Therefore, people always consider a very specific ring or a class of rings, and then study linear codes over the ring. Abualrub et al. \cite{2012Onskewcyclic} studied skew cyclic codes and its dual codes over the ring $F_2+vF_2, v^2=v,$ they used skew polynomial rings. Dougherty et al. \cite{dougherty1999self} used the Chinese remainder theorem to study self dual codes over $\mathbb{Z}_{2k},$ which brought new ideas for the study of linear codes over finite commutative rings. Similar to the method used by Zhu et al. \cite{zhu2010some} to study cyclic codes over $F_2+vF_2,$ when people study linear codes over $F_q+vF_q, v^2=v,$ they mainly use $F_q +vF_q$ is isomorphic to the finite commutative semisimple ring $F_q\times F_q,$ and then they show that to study a linear code $C$ over the ring, it is only necessary to study the two linear codes $C_1,C_2$ over $F_q.$ Gao \cite{gao2013skew} studied skew cyclic codes over $F_p +vF_p,$ and the main result he obtained can be regarded as a special case of the result in special case two we will deal with later. Gursoy et al. \cite{gursoy2014construction} studied skew cyclic codes over $F_q +vF_q,$ their main result can be seen as a special case of the result in special case one we deal with later. Gao et al. \cite{gao2017skew} later studied skew constacyclic codes over $F_q+vF_q,v^2=v,$ what they considered can be seen as special case one in this paper. A more general ring than the one above is $F_q\left[v\right]/\left(v^{k+1}-v\right),\ k\mid (q-1),$ which is isomorphic to finite commutative semisimple ring $\prod_{j=1}^{k+1} F_q.$ Shi et al. \cite{shi2015skew} studied skew cyclic codes over the ring $F_q+vF_q+v^2F_q,$ $v^3=v,$ $q=p^m, $ where $p$ is an odd prime number, they also used the structure of the ring, and then turned the problem into the study of linear codes over $F_q,$ the skew cyclic codes they studied can be included in the special case one of this paper. The ring $F_q[u,v]/(u^2-u,v^2-v)$ is isomorphic to the finite commutative semisimple ring $F_q\times F_q\times F_q\times F_q,$ the study of linear codes over this ring is similar, Yao et al. \cite{ting2015skew} studied skew cyclic codes over this ring, although they chose a different automorphism of this ring, they limited linear codes $C$ over the ring satisfy specific conditions. Islam et al. \cite{islam2018skew} also studied skew cyclic codes and skew constacyclic codes over this ring, the automorphism they chose can be classified into our special case one. Islam et al. \cite{islam2019note} also studied skew constacyclic codes over the ring $F_q+uF_q+vF_q$ ($u^2=u,v^2=v,uv=vu=0$), they mainly used the ring is isomorphic to the finite commutative semisimple ring $F_q\times F_q\times F_q,$ the skew cyclic codes they studied can also be classified as special case one in this paper. Bag et al. \cite{bag2019skew} studied skew constacyclic codes over $F_p+u_1F_p + \dots + u_{2m}F_p,$ this ring is isomorphic to $\prod_{j=1}^{2m+1} F_p,$ their results are similar to those in our special case two. The main work they do is to define two homomorphisms, so that the skew constacyclic codes over the ring will be mapped to special linear codes over $F_p.$ There are also many people who study linear codes over rings with similar structures and then construct quantum codes. For example, Ashraf et al. 
\cite{Ashraf2019Quantum} studied cyclic codes over $F_p\left[u,v\right]/\left(u^2 -1,v^3-v\right),$ and then using the obtained result to construct quantum codes, this ring is isomorphic to $\prod_{j=1}^{6} F_p.$ Bag et al. \cite{bag2020quantum} studied skew constacyclic codes over $F_q [u, v]/\langle u^2- 1, v^2- 1, uv- vu\rangle$ to construct quantum codes, this ring is isomorphic to $F_q\times F_q \times F_q \times F_q,$ the part of their paper on skew constacyclic codes can be classified into our special case one. It is noted that the above rings are all special cases of a class of finite commutative semisimple rings $\fring=\prod_{i=1}^tF_q,$ and the methods used in the related research and the results obtained are similar, so we are inspired to study skew constacyclic codes over $\fring.$ In addition, we noticed that when the predecessors studied skew constacyclic codes over rings, they often considered not all but a given automorphism, the results are not very complete, so we intend to characterize skew constacyclic codes over $\fring$ corresponding to all automorphisms. However, in order to make the results easier to understand, we also discuss two special cases. Dinh et al. have studied constacyclic codes and skew constacyclic codes over finite commutative semisimple rings\cite{dinh2017constacyclic,Dinh2019Skew}, but what they discussed were similar to our special case one. Our results can be further generalized to skew constacyclic codes over general finite commutative semisimple rings. But to clarify the automorphism of general finite commutative semisimple rings, the symbol will be a bit complicated, and when we want to establish relations from linear codes over rings to codes over fields, what we use is essentially this kind of special finite commutative semisimple rings we considered, so we do not consider skew constacyclic codes over general finite commutative semisimple rings here. In the rest of this paper, we first introduce linear codes over $F_q,$ and review the characterization of skew constacyclic codes and their dual codes over $F_q.$ In order to clearly describe skew constacyclic codes over $\fring,$ We determine the automorphism of $\fring$ and the structure of its automorphism group, we define an isomorphism to decompose the linear code over $\fring,$ and give the characterizations of skew constacyclic codes in two special cases, and finally the characterization of the general skew constacyclic codes over $\fring$ is given. For the last part of this paper, we define homomorphisms to relate linear codes over $\fring$ to matrix product codes over $F_q,$ and give some optimal linear codes over $F_q.$ \section{Skew constacyclic codes over \texorpdfstring{$F_q$}{Fq}} Let $F_q$ be a $q$ elements finite field, and if $C$ is a nonempty subset of the $n$ dimensional vector space $F_q^n$ over $F_q,$ then $C$ is a code of length $n$ over $F_q,$ and here we write the elements in $F_q^n$ in the form of row vectors. The element in a code $C$ is called codeword, and for $x\in C,$ denote by $\wt_H(x)$ the number of non-zero components in $x,$ which is called the Hamming weight of $x.$ For two elements $x,y\in C$ define the distance between them as $\dhamming(x,y) = \wt_H(x-y). $ If $C$ has at least two elements, define the minimum distance $\dhamming(C) = \min\{\dhamming(x,y) \colon x, y\in C,\, x\neq y\}$ of $C$. 
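As a small illustration of these notions (a toy example, with the vectors chosen arbitrarily), take $q=2,$ $n=4,$ and $x=(1,0,1,1),$ $y=(1,1,1,0)$ in $F_2^4.$ Then \begin{equation*} \wt_H(x)=3,\qquad \wt_H(y)=3,\qquad \dhamming(x,y)=\wt_H(x-y)=\wt_H\big((0,1,0,1)\big)=2. \end{equation*}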
Define the inner product of any two elements $x=(x_0,x_1,\dots,x_{n-1}), y=(y_0,y_1,\dots,y_{n-1}) \in F^n_q$ as $x\cdot y = \sum_{i=0}^{n-1} x_iy_i.$ With the inner product we can define the dual of the code $C$ as $C^\perp = \{x\in F_q^n \colon x\cdot y=0, \, \forall\, y \in C\},$ the dual code must be linear code we define below. \begin{definition} If $C$ is a vector subspace of $F_q^n,$ then $C$ is a linear code of length $n$ over $F_q.$ \end{definition} If $C$ is a nonzero linear code, then there exists a basis of $C$ as a vector subspace, for any basis $\alpha_1,\alpha_2,\dots,\alpha_k,$ where $k$ is the dimension of $C$ as a linear space over $F_q,$ then we say that the $k\times n$ matrix $\left(\alpha_1,\alpha_2,\dots,\alpha_k\right)\trans$ is the generator matrix of $C.$ Suppose $H$ is a generator matrix of $C^\perp,$ then $x\in C$ if and only if $Hx\trans = 0,$ that is, we can use $H$ to check whether the element $x$ in $F_q^n$ is in $C,$ and call $H$ a parity-check matrix of $C.$ Let $C$ be a linear code, if $C \subset C^\perp,$ then $C$ is a self orthogonal code; if $C = C^\perp ,$ then $C$ is a self dual code. If $C$ is a linear code with length $n,$ dimension $k,$ and minimum distance $d,$ then we say it has parameters $[n,k,d].$ Sometimes, without indicating the minimum distance of $C,$ we say the parameters of $C$ is $[n,k]. $ For given positive integer $n,k,$ where $1\leq k \leq n,$ define the maximum value of the minimum distance of all linear codes over $F_q$ with parameters $[n,k]$ as $\dhamming(n,k,q) = \max\left\{ \dhamming(C) \colon C\subset F_q^n,\, \dim C=k \right\}. $ In general, it is difficult to determine the exact value of $\dhamming(n,k,q)$, often only the upper and lower bounds are given, the famous Singleton bound is that $\dhamming(n,k,q) \leq n-k+1.$ A linear code $C$ with parameter $[n,k]$ is said to be an optimal linear code over $F_q$ if the minimum distance $\dhamming(C)$ is exactly equal to $\dhamming(n,k,q).$ The most studied linear codes are cyclic codes, and the definitions of cyclic codes and quasi cyclic codes over $F_q$ are given below. \begin{definition} Let $\rho$ be a map from $F_q^n$ to $F_q^n$, which maps $ (c_0,c_1,\dots,c_{n-1})$ to $\left(c_{n-1},c_0,\dots,c_{n-2}\right). $ Let $C$ be a linear code of length $n$ over $F_q$ and $\ell$ be a positive integer, if $\rho^{\ell}(C) = C,$ then $C$ is a quasi cyclic code of length $n$ with index $\ell$ over $F_q.$ In particular, $C$ is said to be a cyclic code when $\ell =1.$ \end{definition} From the definition, it is clear that quasi cyclic code is a generalization of cyclic code, and another generalization of cyclic code is skew constacyclic code. \subsection{Characterization of skew constacyclic codes over \texorpdfstring{$F_q$}{Fq}}\label{sec:skewF_q} \begin{definition} Let $\theta\in \Aut(F_q),$ $\lambda\in F_q^\ast,$ and use $\rho_{\theta,\lambda}$ to denote the mapping from $F_q^n$ to $F_q^n$, which maps $ (c_0,c_1,\dots,c_{n-1})$ to $\left(\lambda \theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})\right). $ If $C$ is a linear code of length $n$ over $F_q,$ and for any $c\in C,$ with $\rho_{\theta,\lambda}(c)\in C,$ then $C$ is a $\theta$-$\lambda$-cyclic code of length $n$ over $F_q$, and if $\theta$ and $\lambda$ are not emphasized, $C$ is also called a skew constacyclic code. In particular, if $\theta = \id,$ $C$ is a cyclic code when $\lambda=1$, a negative cyclic code when $\lambda=-1,$ and a $\lambda$-cyclic code or constacyclic code when $\lambda$ is some other invertible element. 
If $\lambda=1,\theta\neq \id ,$ then it is called a $\theta$-cyclic code, also called a skew cyclic code. \end{definition} Skew polynomials were first studied by Ore \cite{Ore1933poly}, and a detailed description of skew polynomial ring $F_q[x;\theta]$ over a finite field $F_q$ can be found in McDonald's monograph \citep[\uppercase\expandafter{\romannumeral2}. (C),][]{McDonald1974FiniteRW}. \begin{definition} Let $\theta\in \Aut(F_q),$ define the ring of skew polynomials over the finite field $F_q$ \begin{equation*} F_q[x;\theta] = \left\{a_0 +a_1x+\dots+a_kx^k\mid a_i \in F_q, 0\leq i \leq k\right\}. \end{equation*} That is, the element in $F_q[x;\theta]$ is element in $F_q[x]$, except that the coefficients are written on the left, which is called a skew polynomial, and the degree of skew polynomial is defined by the degree of polynomial. Addition is defined in the usual way, while $ax^n \cdot (bx^m) = a\theta^n(b)x^{n+m}, $ and then the general multiplication is defined by the law of associativity and the law of distribution. \end{definition} If $\theta = \id ,$ then $F_q[x;\theta]$ is $F_q[x].$ If $\theta \neq \id,$ then $F_q[x;\theta]$ is a non-commutative ring, with properties different from $F_q[x]$, such as right division algorithm. \begin{theorem} \citep[Theorem \uppercase\expandafter{\romannumeral2}.11,][]{McDonald1974FiniteRW} For any $ f(x) \in F_q[x;\theta], 0\neq g(x) \in F_q[x;\theta],$ there exists unique $q(x),r(x) \in F_q[x;\theta]$ such that $f(x) = q(x)g(x) + r(x),$ where $r(x) = 0$ or $0\leq \deg r(x) < \deg g(x). $ \end{theorem} \begin{proof} Similar to the proof in the polynomial ring, which is obtained by induction. \end{proof} For convenience, use $\left\langle x^n-\lambda\right\rangle$ to abbreviate the left $F_q[x;\theta]$-module $F_q[x;\theta]\left(x^n-\lambda\right)$ generated by $x^n-\lambda$, then left $F_q[x;\theta]$-module $R_n = F_q[x;\theta]/\left\langle x^n-\lambda\right\rangle$ is the set of equivalence classes obtained by dividing the elements in $F_q[x;\theta]$ by $x^n-\lambda$ using right division algorithm. Define a map $\Phi:$ \begin{align*} F_q^{n} & \rightarrow F_q[x;\theta]/\left\langle x^n-\lambda\right\rangle \\ (c_0,c_1,\dots,c_{n-1}) & \mapsto c_0+c_1x+\dots+c_{n-1}x^{n-1}+\left\langle x^n-\lambda\right\rangle. \end{align*} Clearly $\Phi$ is an isomorphism of vector spaces over $F_q.$ The following theorem is a general form of \citep[Theorem 10,][]{siap2011skew} by Siap et al.. They consider the case $\lambda =1$. It is worth noting that Boucher et al. should also have noticed the fact that they used submodules to define the module $\theta$-constacyclic codes \citep[Definition 3,][]{Boucher2009codesMod}. \begin{theorem} \label{thm:skewcodesoverFbymod} Let $\theta \in \Aut(F_q),$ $\lambda \in F_q^\ast,$ $C$ be a vector subspace of $F_q^{n},$ then $C$ is a \tlcycliccode\ if and only if $ \Phi(C)$ is a left $F_q[x;\theta]$-submodule of $R_n.$ \end{theorem} \begin{proof} Necessity. 
Note that \begin{align*} & \mathrel{\phantom{=}} x\cdot\left(c_0+c_1x+\dots+c_{n-1}x^{n-1}+\left\langle x^n-\lambda\right\rangle\right) \\ & = \theta(c_0) x + \theta(c_1)x^2+\dots+\theta(c_{n-1})x^n + \left\langle x^n-\lambda\right\rangle \\ & = \lambda\theta(c_{n-1}) + \theta(c_0)x+\dots+\theta(c_{n-2})x^{n-1} + \left\langle x^n-\lambda\right\rangle \\ & = \Phi\left(\lambda\theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})\right) \in \Phi(C), \end{align*} Thus for any $a(x) \in F_q[x;\theta],c(x) + \left\langle x^n-\lambda\right\rangle \in \Phi(C),$ we have $a(x)\left(c(x) + \left\langle x^n-\lambda\right\rangle \right) \in \Phi(C).$ Sufficiency. For any $(c_0,\dots,c_{n-1}) \in C,$ we need to prove that $$\left(\lambda\theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})\right) \in C.$$ Notice that $c_0+c_1x+\dots+c_{n-1}x^{n-1}+\left\langle x^n-\lambda\right\rangle \in \Phi(C),$ and $ \Phi(C)$ is a left $F_q[x;\theta]$-submodule of $R_n,$ so $x \cdot \left(c_0+c_1x+\dots+c_{n-1}x^{n-1}+\left\langle x^n-\lambda\right\rangle\right) \in \Phi(C),$ which gives the proof. \end{proof} Each nonzero element in the left $F_q[x;\theta]$-module $R_n$ corresponds to a skew polynomial in $F_q\left[x;\theta \right]$ with degree no more than $n-1$. If $C$ is a \tlcycliccode\ of length $n$ over $F_q$ and $C\neq \{0\},$ then there exists a monic skew polynomial $g(x)$ with minimal degree such that $g(x) + \left\langle x^n-\lambda\right\rangle \in \Phi(C),$ then $F_q[x;\theta]\left(g(x)+\left\langle x^n-\lambda\right\rangle\right) = \Phi(C).$ This is because for each $ f(x) + \left\langle x^n-\lambda\right\rangle \in \Phi(C),$ by right division algorithm we have $f(x) = q(x)g(x) + r(x),$ where $r(x) = 0$ or $0\leq \deg r(x) < \deg g(x). $ The former case corresponds to $f(x) + \left\langle x^n-\lambda\right\rangle = q(x)\left(g(x)+\left\langle x^n-\lambda\right\rangle\right). $ In the latter case $r(x)+\left\langle x^n-\lambda\right\rangle = f(x)+\left\langle x^n-\lambda\right\rangle - q(x)\left(g(x)+\left\langle x^n-\lambda\right\rangle\right) \in \Phi(C),$ so that a monic skew polynomial of lower degree can be found, it is a contradiction. Similarly, it can be shown that the monic skew polynomial $g(x)$ with the minimal degree is unique. \begin{definition} Let $C$ be a \tlcycliccode\ of length $n$ over $F_q$ and $C\neq \{0\},$ define $g(x)$ as described above to be the generator skew polynomial of $C,$ we also call it generator polynomial when $\theta = \id.$ \end{definition} The generator skew polynomial $g(x)$ should be a right factor of $x^n-\lambda.$ In fact, according to the right division algorithm $x^n-\lambda = q(x)g(x) + r(x),$ if $r(x)\neq 0,$ then $r(x)+\left\langle x^n-\lambda\right\rangle = -q(x)\left(g(x) + \left\langle x^n-\lambda\right\rangle\right) \in \Phi(C),$ contradicts with $g(x)$ is the monic skew polynomial of minimal degree. Let $g(x) = a_0+a_1x+\dots + a_{n-k}x^{n-k},$ then one of the generator matrices of $C$ is \begin{equation}\label{eq:genmat} G = \begin{pmatrix} a_0 & \dots & a_{n-k} & & & \\ & \theta(a_0) & \dots & \theta(a_{n-k}) & & \\ & & \ddots & \ddots & \ddots & \\ & & & \theta^{k-1}(a_0) & \dots & \theta^{k-1}(a_{n-k}) \end{pmatrix}. 
\end{equation} If $C$ is a nontrivial (i.e., $C\neq 0$ and $C\neq F_q^n$) \tlcycliccode\ of length $n,$ there is a monic skew polynomial $g(x) = a_0+a_1x+\dots + a_{n-k}x^{n-k}, (a_{n-k}= 1),$ and $g(x)$ right divides $x^n-\lambda.$ Conversely, for a monic right factor $g(x)$ with degree less than $n$ of $x^n-\lambda,$ the left $F_q[x;\theta]$-module generated by $ g(x) + \langle x^n-\lambda \rangle$ corresponds to a \tlcycliccode\ of length $n$ over $F_q$. Thus, we establish a one-to-one correspondence between the nontrivial \tlcycliccode{}s of length $n$ over $F_q$ and the nontrivial monic right factors of $x^n-\lambda$ in $F_q[x;\theta].$ Unfortunately, however, since factorization in the skew polynomial ring is different from factorization in the polynomial ring, we cannot use this correspondence to give an explicit formula counting the number of \tlcycliccode{}s over $F_q.$ We can concretely perceive the difference in factorization from the following example. \begin{example} Consider $F_4=\{0,1,\alpha, 1+\alpha\},$ where $\alpha^2 = 1+\alpha.$ Let $\theta$ be the nontrivial automorphism of $F_4.$ It is straightforward to verify that $x^3-\alpha$ is irreducible in $F_4[x]$, while it can be decomposed in $F_4[x;\theta]$ as $x^3-\alpha = (x+\alpha)(x^2+\alpha^2 x + 1) = (x^2+\alpha x + 1)(x+\alpha). $ Also, $x^3-1$ can be decomposed in $F_4[x]$ as $x^3-1 = (x-\alpha)(x-\alpha^2)(x-1),$ but in $F_4[x;\theta]$ it cannot be decomposed as a product of linear factors, only as $x^3-1 = (x^2 + x+1)(x-1) = (x-1)(x^2+x+1).$ \end{example} \begin{definition} Let $\lambda \in F_q^\ast,$ and use $\rho_{\lambda} $ to denote the mapping from $F_q^n$ to $F_q^n$, which maps $ (c_0,c_1,\dots,c_{n-1})$ to $\left(\lambda c_{n-1},c_0,\dots,c_{n-2} \right). $ Let $C$ be a linear code of length $n$ over $F_q$ and $\ell$ be a positive integer; if $\rho_{\lambda}^{\ell}(C) = C,$ then $C$ is a quasi $\lambda$-cyclic code of length $n$ with index $\ell$ over $F_q$. \end{definition} Siap et al. studied skew cyclic codes of arbitrary length over finite fields; their main results \citep[Theorem 16 and Theorem 18,][]{siap2011skew} are that skew cyclic codes over $F_q$ are either cyclic codes or quasi cyclic codes. Their results can be generalized as follows. \begin{theorem} Let $\theta\in \Aut(F_q),$ $\lambda \in F_q^\ast,$ $\ord (\theta) = m,$ $\theta(\lambda) = \lambda,$ and let $C$ be a $\theta$-$\lambda$-cyclic code of length $n$ over $F_q.$ If $(m,n)=1,$ then $C$ is a $\lambda$-cyclic code of length $n$ over $F_q$. If $(m,n)=\ell,$ then $C$ is a quasi $\lambda$-cyclic code over $F_q$ of length $n$ with index $\ell.$ \end{theorem} \begin{proof} We only need to prove the case $(m,n)=\ell,$ since the case $(m,n)=1$ is the special case $\ell=1.$ In this case, define a mapping $\tilde{\theta}$ from $F_q^n$ to $F_q^n$ which maps $(x_0,x_1,\dots,x_{n-1})$ to $(\theta(x_0),\theta(x_1),\dots,\theta(x_{n-1})). $ One can directly verify that $\rho_{\lambda} \circ \tilde{\theta} = \tilde{\theta} \circ \rho_{\lambda} = \rho_{\theta,\lambda}. $ Since $(m,n) = \ell, $ there exist $a,b \in \mathbb{N}$ such that $am = \ell + bn.$ For any $x = (x_0,x_1,\dots,x_{n-1}) \in C,$ since $C$ is a \tlcycliccode, we have \begin{equation*} \rho_{\theta,\lambda}^{am} (x) = \rho_{\lambda}^{\ell + bn} \tilde{\theta}^{am} (x) = \rho_{\lambda}^{\ell + bn}(x) = \lambda^b \rho_{\lambda}^{ \ell} (x) \in C, \end{equation*} so $\rho_{\lambda}^{ \ell} (x) \in C.$ \end{proof} \begin{remark} The converse of the above theorem does not hold, i.e., a quasi cyclic code over $F_q$ is not necessarily a skew cyclic code.
For example, let $F_4=\{0,1,\alpha, 1+\alpha\},$ where $\alpha^2 = 1+\alpha.$ Let $\theta$ be the nontrivial automorphism of $F_4.$ Consider the code $C=\{(0,0,0),(\alpha,\alpha^2,1),(\alpha^2,1,\alpha),(1,\alpha,\alpha^2)\}$ of length $3$ over $F_4,$ and it is straightforward to verify that $C$ is a cyclic code over $F_4,$ but it is not a $\theta$-cyclic code over $F_4.$ \end{remark} \subsection{Dual codes of skew constacyclic codes over \texorpdfstring{$F_q$}{Fq}} The following theorem is well known, and we give a simple proof similar to the proof of \citep[Theorem 2.4,][]{valdebenito2018dual} by Valdebenito et al. \begin{theorem} \label{dualcodesoverF} If $C$ is a \tlcycliccode\ of length $n$ over $F_q$, where $\theta\in \Aut(F_q),\lambda \in F_q^\ast, $ then $C^\perp$ is a $\theta$-$\lambda^{-1}$-cyclic code of length $n$ over $F_q.$ \end{theorem} \begin{proof} For any $z=(z_0,z_1,\dots,z_{n-1}) \in C^\perp,$ we need to prove that $$\tilde{z}=\left(\lambda^{-1}\theta(z_{n-1}),\theta(z_0),\dots,\theta(z_{n-2})\right) \in C^\perp.$$ Since $C$ is a \tlcycliccode, for any $y\in C,$ it can be written as $$y= \left(\lambda\theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})\right),$$ where $(c_0,c_1,\dots,c_{n-1}) \in C.$ Thus $ \tilde{z} \cdot y= 0,$ and by the arbitrariness of $y$ we get $\tilde{z}\in C^\perp.$ \end{proof} If $C$ is a nontrivial \tlcycliccode\ of length $n$ over $F_q$, then $C^\perp$ is a $\theta$-$\lambda^{-1}$-cyclic code of length $n$ over $F_q$ by Theorem \ref{dualcodesoverF}. Naturally, $C^\perp$ also has generator skew polynomial $g^\perp(x),$ and since $C$ uniquely determines $C^\perp,$ it follows that $g^\perp(x)$ is uniquely determined by the generator skew polynomial $g(x)= a_0+a_1x+\dots + a_{n-k}x^{n-k}$ of $C.$ In fact, if $g^\perp(x)$ corresponds to the element $c=(c_0,c_1,\dots,c_{n-1}),$ then since $g^\perp(x)$ is degree $k$ and monic we get: $c_k = 1, c_i=0, k<i<n,$ only $c_0,c_1,\dots ,c_{k-1}$ is still need to be determined, and then the system of linear equations $Gc\trans =0$ is uniquely solved for $c_i, 0\leq i\leq k-1,$ where $G$ is given by the Eq. \eqref{eq:genmat}. Thus, the coefficients of $g^\perp(x)$ can be expressed by the coefficients of $g(x)$, but the formula is a bit complicated and is omitted here. Although it is complicated to give a specific expression for $g^\perp(x)$ in terms of the coefficients of $g(x)$ directly, there are simple formulas to write down the relationship between $g(x)$ and $g^\perp(x)$ indirectly. Let us first prove a technical lemma, originally given by \citep[Lemma 2,][]{boucher2011note} of Boucher et al. \begin{lemma} \label{lem:dualpoly} Let $\lambda \in F_q^\ast,$ $\theta \in \Aut(F_q),$ if $g(x)$ is monic with degree $n-k,$ and $x^n- \lambda = h(x)g(x)$ in $F_q[x;\theta],$ then $x^n - \theta^{ -k}(\lambda) = g(x)\lambda^{-1}h(x)\theta^{-k}(\lambda). $ \end{lemma} \begin{proof} Let $g(x) = a_0 + a_1 x + \dots + a_{n-k}x^{n-k},$ denote $g_\theta(x) = \theta^{n}(a_0) + \theta^{n}(a_1) x + \dots + \theta^{n}(a_{n-k}) x^{n-k},$ then $x^n g(x) = g _\theta(x) x^n,$ and thus \begin{align*} \left(x^n - g_\theta(x) h(x) \right) g(x) & = x^n g(x) - g_\theta(x) h(x)g(x) \\\ & = g_\theta(x) \left(x^n - h(x)g(x) \right) \\\ & = g_\theta(x)\lambda. 
\end{align*} From the fact that both sides of the equation have the same degree and the same coefficient of the highest term we get $x^n - g_\theta(x) h(x) = \theta^{n-k}(\lambda)$ and $\theta^{n-k}(\lambda) g(x) = g_\theta(x)\lambda,$ so $x^n - \theta^{n-k}(\lambda)$ $\theta^{n-k}(\lambda) = g_\theta(x)h(x) = \theta^{n-k}(\lambda) g(x) \lambda^{-1} h(x), $ multiple the left side of the equation by $\theta^{n-k}(\lambda^{-1})$ the right side by $\theta^{-k} (\lambda)$ to get $x^n - \theta^{-k}(\lambda) = g(x) \lambda^{-1}h(x) \theta^{-k}(\lambda). $ \end{proof} The following theorem is derived from \citep[Theorem 1,][]{boucher2011note} by Boucher et al. and can also be found in \citep[Proposition 1,][]{valdebenito2018dual} by Valdebenito et al. \begin{theorem} \label{polynomialofdualcodes} Let $\theta \in \Aut(F_q),$ $\lambda \in F_q^\ast,$ $C$ be a \tlcycliccode\ of length $n$ over $F_q$ and $C$ is not $\{0\}$ and $F_q^{n},$ $g(x)= a_0+a_1x + \dots + a_{n-k}x^{ n-k}$ is the generator skew polynomial of $C,$ $x^n - \lambda=h(x) g(x),$ denote $\hbar(x) = \lambda^{-1}h(x) \theta^{-k}(\lambda) = b_0 + b_1x + \dots + b_kx^k,$ $\hbar^{\ast} (x) = b_ k + \theta(b_{k-1}) x + \dots + \theta^k(b_0) x^k,$ then the generator skew polynomial of $C^\perp$ is $ g^\perp(x) = \theta^k(b_0^{-1}) \hbar^{\ast}(x). $ \end{theorem} \begin{proof} For $i>n-k,$ let $a_i =0.$ For $j<0,$ let $b_j =0.$ By Lemma \ref{lem:dualpoly} we know that $x^n - \theta^{-k}(\lambda) = g(x) \hbar(x),$ comparing the coefficients of $x^k$ on both sides of the equation gives \begin{equation*} a_0b_k + a_1\theta(b_{k-1}) + \dots + a_k\theta^k (b_0) = 0. \end{equation*} Comparing the coefficients of $x^{k-1}$ yields \begin{equation*} a_0b_{k-1} + a_1\theta(b_{k-2})+ \dots + a_{k-1}\theta^{k-1}(b_0) = 0, \end{equation*} act with $\theta$ to get \begin{equation*} \theta(a_0)\theta(b_{k-1}) + \theta(a_1)\theta^2(b_{k-2}) + \dots + \theta(a_{k-1})\theta^{k}(b_0) = 0. \end{equation*} And so on, comparing the coefficients, and then using the $\theta$ action, we get $k$ equations. The last one is to compare the coefficients of $x$ to get $a_0b_1 + a_1 \theta(b_0) = 0,$ so $\theta^{k-1}(a_0) \theta^{k-1}(b_1) + \theta^{k-1}(a_1)\theta^{k}(b_0) = 0.$ Observe that a generator matrix of $C,$ i.e., a parity-check matrix $G$ of $C^\perp$ yields $$\left(b_k,\theta(b_{k-1}),\theta^2(b_{k-2}),\dots,\theta^{k}(b_0),0,\dots,0\right)$$ belonging to $C^\perp,$ where $ G$ is specified in Eq. \eqref{eq:genmat}. Notice that $\dim C^\perp = n-k,$ so the degree of the generator skew polynomial of $C^\perp$ is $k,$ and the degree of $\hbar^{\ast}(x) = b_k + \theta(b_{k-1}) x + \dots + \theta^k(b_0) x^k$ is $k,$ thus $\theta^k(b_0^{-1}) \hbar^{\ast}(x)$ is the generator skew polynomial of $C^\perp.$ \end{proof} Assume the same as the above theorem, let $\tilde{g}(x) = \theta^{-k}(a_k) + \theta^{-(k-1)}(a_{k-1})x + \dots + a_0x^k,$ then it is straightforward to verify that $1-\lambda x^n = \hbar^\ast (x) \tilde{g}(x). $ One can use skew polynomials to determine whether a linear code has an inclusion relation with its dual. \begin{theorem}\label{containingcondition} Let $\theta \in \Aut(F_q),$ $\lambda \in F_q^\ast,$ if $C$ is a \tlcycliccode\ of length $n$ over $F_q$, $g(x)$ is the generator skew polynomial of $C,$ $ x^n -\lambda=h(x)g(x) ,$ $x^n - \theta^{ -k}(\lambda) = g(x) \hbar(x),$ $1-\lambda x^n = \hbar^\ast (x) \tilde{g}(x). 
$ Suppose $C$ is not $\{0\}$ and $F_q^{n},$ then $C^\perp \subset C$ if and only if $\hbar^\ast (x) \hbar(x) $ is right divisible by $x^n - \theta^{-k}(\lambda) $ and $\lambda^{-1} = \lambda.$ Similarly, $C \subset C^\perp$ if and only if $g (x) \tilde{g}(x) $ is right divisible by $x^n - \lambda^{-1}$ and $\lambda^{-1} = \lambda.$ \end{theorem} \begin{proof} Only the case $C^\perp \subset C$ will be proved, and the proof for $C \subset C^\perp$ is similar. Sufficiency. If $\hbar^\ast(x) \hbar(x) = \ell(x)(x^n-\theta^{-k}(\lambda)),$ then $\hbar^\ast(x) = \ell(x)g(x),$ so $$F_q[x;\theta]\left(\hbar^\ast(x) + \left\langle x^n-\lambda^{-1} \right\rangle \right) = F_q[x;\theta]\left(\hbar^\ast(x) + \left\langle x^n-\lambda\right\rangle \right) \subset F_q[x;\theta] \left(g(x) + \left\langle x^n-\lambda\right\rangle\right),$$ Thus $\Phi \left(C^\perp\right) \subset \Phi(C),$ Therefore $C^\perp \subset C.$ Necessity. If $C^\perp \subset C,$ then $\hbar^\ast ( x) + \left\langle x^n-\lambda\right\rangle = \ell(x)\left(g(x) + \left\langle x^n-\lambda\right\rangle\right),$ therefore \begin{gather*} \hbar^\ast(x) = \ell(x)g(x) + u(x)(x^n - \lambda) = \left(\ell(x) + u(x)h(x)\right)g(x), \end{gather*} thus $g(x)$ right divides $\hbar^\ast(x),$ and \begin{gather*} \hbar^\ast (x) \hbar(x) =\left(\ell(x) + u(x)h(x)\right) g(x)\hbar(x) = \left(\ell(x) + u(x)h(x)\right) \left(x^n - \theta^{-k}(\lambda)\right), \end{gather*} Thus $\hbar^\ast(x)\hbar(x) $ is right divisible by $x^n - \theta^{-k}(\lambda)$. We have already obtained that $g(x)$ right divides $\hbar^\ast(x),$ and that $\hbar^\ast(x)$ right divides $x^n - \lambda^{-1},$ by the property of the generator skew polynomial of a skew constacyclic code. Combine with $g(x)$ divides $x^n - \lambda$ to get $g(x)$ right divides $\lambda - \lambda^{-1},$ so $\lambda = \lambda^{-1}. $ \end{proof} \section{Skew constacyclic codes over \texorpdfstring{$\fring$}{R}} \subsection{Structure of \texorpdfstring{$\fring$}{R} and its automorphism} Suppose $R$ is a finite commutative semisimple ring, then according to the Wedderburn-Artin Theorem, $R$ is isomorphic to the direct product of finite fields, that is, $R \cong \prod_{i=1}^s F_{q_i },$ where $F_{q_i}$ is a finite field. If we put the same direct product terms together, the above isomorphism can be written as $$R \cong \prod_{j=1}^\ell \left(\prod_{k=1}^{t_j} F_{q_j} \right),$$ So we can see that the finite commutative semisimple ring $\prod_{i=1}^t F_q$ is the basic structural component of a general finite commutative semisimple ring, and the ring to be considered in this paper is exactly this kind of finite commutative semisimple ring. Let $p$ be a prime number, $r$ be a positive integer, $q=p^r,$ denote a finite field with $q$ elements by $F_q$. In this paper, unless otherwise specified, let \begin{equation*} \fring = \prod_{i=1}^t F_q = \left\{(x_1,x_2,\dots,x_t)\mid x_i \in F_q,1\leq i\leq t \right\}, \end{equation*} the addition of two elements in the ring $\fring$ is the component addition, and the multiplication is the component multiplication. 
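As a concrete instance (only an illustration, essentially the ring $F_2+vF_2,$ $v^2=v,$ mentioned in the introduction), take $q=2$ and $t=2.$ Then $\fring = F_2\times F_2$ is isomorphic to $F_2[v]/(v^2-v),$ for example via $v\mapsto (1,0),$ and the componentwise operations give \begin{equation*} (1,0)+(1,1)=(0,1),\qquad (1,0)\cdot(1,1)=(1,0),\qquad (1,0)\cdot(0,1)=(0,0), \end{equation*} so $\fring$ has zero divisors and is not a field.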
Let $e_1=(1,0,\dots,0),e_2=(0,1,0,\dots,0),\dots,e_t=(0,\dots,0,1)$, then the elements in $\fring$ are of the form $x=x_1e_1 + \dots + x_te_t,$ where $x_i\in F_q.$ In the following, we always use $F_q$ linear sum of $e_1,e_2,\dots,e_t$ to represent elements in $\fring,$ instead of being written as vectors, and by convention $1e_i = e_i, 1\leq i\leq t.$ Under the above convention, for any $x=\sum_{i=1}^t x_ie_i,y=\sum_{i=1}^t y_ie_i\in \fring ,$ the addition and multiplication formulas are: \begin{equation*} x+y = \sum_{i=1}^t (x_i+y_i)e_i, \qquad x\cdot y = \sum_{i=1}^t (x_iy_i)e_i. \end{equation*} The additive zero element of the ring $\fring$ is $0_{\fring} = 0e_1 + 0e_2 + \dots + 0e_t,$ the multiplication identity is $1_{\fring} = 1e_1 + 1e_2 + \dots + 1e_t.$ Write $0_{\fring}$ as $0.$ Obviously, the number of elements in $\fring$ is $|\fring| = q^t.$ The characteristic of $\fring$ is $\cchar(\fring) = \cchar(F_q) = p.$ Thus, for any $ x=\sum_{j=1}^t x_j e_j \in \fring, x^q=x_1^q e_1 + \dots + x_t^q e_t = x_1e_1 + \dots + x_te_t = x.$ According to $x\in U(\fring) $ if and only if $ x_j \in F_q^\ast, 1\leq j \leq t.$ We get \begin{equation*} U(\fring) \cong \underbrace{F_q^\ast \times \dots \times F_q^\ast}_{t\text{ terms}} \cong \underbrace{\mathbb{Z}_{q-1} \times \dots \times \mathbb{Z}_{q-1}}_{t\text{ terms}}. \end{equation*} and $|U(\fring)| = (q-1)^t.$ \begin{proposition} The ring $\fring$ is a principal ideal ring, with $2^t$ ideals and $t$ maximal ideals. The maximal ideal is $(1_{\fring}-e_j), 1\leq j\leq t.$ \end{proposition} \begin{proof} Assuming that $I$ is an ideal of $\fring$, then $I = Ie_1 + \dots + Ie_t.$ For any $ x=\sum_{i=1}^t x_ie_i \in \fring,$ where $x_i\in F_q,$ we have $xe_j = x_je_j \in F_qe_j,$ so $Ie_j \subset F_qe_j.$ If $Ie_j \neq \left\{0\right\},$ then there are non-zero elements $x_j \in F_q$ such that $x_je_j \in Ie_j,$ so $e_j \in x_j^{-1}Ie_j = Ie_j,$ So $F_qe_j = Ie_j.$ Each $Ie_j, 1\leq j \leq t$ has two ways of choices, and different ways of choices correspond to different ideals, therefore there are $2^t$ ideals. If $I\neq \left\{0\right\} ,$ then the sum of $e_j$ such that $Ie_j\neq \left\{0\right\}$ is a generator of $I,$ which shows that $\fring$ is a principal ideal ring. The conclusion about the maximal ideal is obvious. \end{proof} \begin{definition} Suppose $R$ is a ring with identity $1_R,$ and $f$ is a mapping from $R$ to $R.$ If for any $x,y\in R,$ there is $f(x+y )=f(x)+f(y),\ f(xy) = f(x)f(y),\ f(1_R)=1_R,$ then $f$ is a ring homomorphism of $R.$ Moreover, if $f$ is a bijective homomorphism, then $f$ is said to be an automorphism of $R.$ Let $\Aut(R)$ be the group of automorphism of $R.$ \end{definition} Before discussing $\Aut(\fring),$ some preparations need to be done. \begin{definition} Let $e$ be an element in a commutative ring $R,$ if $e^2=e,$ then $e$ is an idempotent of $R.$ Let $e_1,e_2 \in R,$ if $e_1e_2=0,$ then $e_1$ is said to be orthogonal to $e_2.$ For an idempotent $e \in R,$ if $e=e_1+e_2,\ e_1^2=e_1 ,\ e_2^2=e_2,\ e_1e_2=0$ implies $e_1=0$ or $e_2=0,$ then $e$ is a primitive idempotent of $R.$ \end{definition} \begin{lemma} The element $e_j,1\leq j\leq t,$ are pairwise orthogonal primitive idempotents of $\fring.$ \end{lemma} \begin{proof} From the definition of the multiplication of elements in $\fring,$ $e_1,e_2,\dots,e_t$ are idempotents and pairwise orthogonal to each other, and then it is only necessary to show that these idempotents are primitive. 
The following gives the proof of $e_1$ is a primitive idempotent, and the proof that other idempotents are also primitive idempotents is similar. If $e_1 = x+y,\ x^2=x,\ y^2 = y,\ xy=0,$ we need to show that $x=0$ or $y=0.$ Let $x=\sum_{j=1}^t x_je_j,\ y=\sum_{j=1}^t y_je_j,$ Then $e_1 = \sum_{j=1}^t (x_j+y_j)e_j,$ So $1 = x_1 + y_1,\ 0=x_j + y_j,\ 2\leq j\leq t.$ From $x^2 = x,\ y^2=y,\ xy=0$ we get $x_j^2=x_j,\ y_j^2 = y_j,\ x_jy_j=0,\ 1\leq j\leq t.$ From the previous relations, we can get $x=e_1,\ y=0$ or $x=0,\ y=e_1.$ \end{proof} \begin{lemma} \label{lem:morphismonbasis} The set $\{e_1,e_2,\dots,e_t\}$ is stable under the action of any automorphism of $\fring.$ \end{lemma} \begin{proof} For any $\sigma \in Aut(\fring),$ we have \begin{gather*} \sigma(e_1) + \dots + \sigma(e_t) = \sigma(e_1+\dots + e_t) =\sigma(1_{\fring}) = 1_{\fring}.\\ \sigma(e_i)\sigma(e_j) = \sigma(e_ie_j) = \sigma(0) = 0, \quad 1\leq i,j\leq t.\\ \sigma(e_i)\sigma(e_i) = \sigma(e_ie_i)=\sigma(e_i),\quad 1\leq i \leq t. \end{gather*} From the above equation it can be seen that $\sigma(e_1),\dots,\sigma(e_t)$ is also $t$ pairwise orthogonal idempotents of $\fring$ and sums to $1_{\fring}.$ Because $\sigma(e_1) \in \fring,$ $\sigma(e_1) = x_1e_1+\dots + x_te_t,$ where $x_j\in F_q, 1\leq j \leq t.$ From $e_1^2 = e_1,$ then $\sigma(e_1)\sigma(e_1) = \sigma(e_1),$ so $x_j^2 = x_j, 1\leq j\leq t,$ so $x_j=0$ or $x_j=1,$ $ 1\leq j\leq t.$ These $x_1,\dots,x_t$ cannot be all $0,$ otherwise $e_1 = 0.$ and there is only one non-zero element in $x_1,\dots,x_t$, otherwise $e_1=\sigma^{-1}(x_1e_1+\dots+x_te_t)$ decomposes into a sum of nontrivial orthogonal idempotents, which contradicts with $e_1$ is primitive. Similarly, $\sigma(e_j ) \in \{e_1,e_2,\dots,e_t\}, 1\leq j\leq t.$ Note again that $\sigma$ is bijective, so the action of $\sigma$ on $e_1,\dots,e_t$ is equivalent to a permutation on $e_1,\dots,e_t.$ \end{proof} \begin{remark} Denote $\theta(e_i) = e_{\bar{\theta}(i)},$ then $\bar{\theta}$ is naturally a permutation of $1,2,\dots,t,$ namely $\bar {\theta}\in S_t,$ where $S_t$ represents the symmetric group of $t$ elements, and $\bar{\theta}$ is uniquely determined by $\theta.$ \end{remark} Use $F_q1_{\fring}$ to represent the set $\{a1_{\fring} \colon a\in F_q \}, $ then $F_q1_{\fring}$ is a subring of $\fring$ isomorphic to finite fields $F_q,$ but $F_q$ is not included in $\fring $ if $t>1.$ For any $\sigma \in \Aut(\fring),$ since $\sigma(1_{\fring} )=1_{\fring},$ we have $\sigma|_{F_p1_{\fring}} = \id.$ For any $a\in F_p,$ $ \sigma(ae_1) = \sigma(a1_{\fring}e_1)=\sigma(a1_{\fring})\sigma(e_1) = a1_{\fring} \sigma(e_1) = a\sigma(e_1). $ So when $r=1,$ i.e., $q=p^r=p,$ once you know how $\sigma$ acts on $\{e_1,e_2,\dots,e_t\},$ the way $\sigma$ acts on $\fring$ is determined, and $\Aut(\fring) \cong S_t.$ For $r>1,$ i.e., $q\neq p$, you also need to know the action of $\sigma$ on $F_q1_{\fring}$ to completely determine $\sigma.$ In fact, if $\sigma $ is a injective ring homomorphism from $F_{q}1_{\fring}$ to $\fring,$ and the action of $\sigma$ on $e_1,\dots,e_t$ is permutation on $e_1,\dots,e_t,$ then an automorphism of $\fring$ can be naturally obtained $\tilde{\sigma}: \alpha_1e_1 + \dots + \alpha_te_t \mapsto \sigma(\alpha_11_ {\fring})\sigma(e_1) + \dots + \sigma(\alpha_t1_{\fring})\sigma(e_t), \alpha_j\in F_q, 1\leq j \leq t.$ We can actually consider it a little simpler. In fact, in order to determine $\sigma,$ we only need to determine how $\sigma$ acts on each $F_qe_i,$ where $ 1\leq i\leq t. 
$ The general form of an automorphism of $\fring$ is given below. \begin{theorem} \label{thm:autofR} Let $\theta \in \Aut(\fring);$ then $\theta \left(\sum_{j=1}^t \alpha_j e_j \right) = \sum_{j=1}^t \psi_j(\alpha_j)\theta(e_j),$ where $\alpha_j \in F_q,$ $\psi_j\in \Aut(F_q),$ $1\leq j\leq t.$ \end{theorem} \begin{proof} Because $$\theta(F_qe_j ) =\theta(F_qe_j e_j) \subset \theta(F_qe_j)\theta(e_j) \subset \fring \theta(e_j) = F_q\theta(e_j),$$ and $\theta$ is bijective, we have $\theta(F_qe_j) = F_q\theta(e_j).$ So for any $a\in F_q,$ there is a unique $a^\prime \in F_q$ such that $\theta(ae_j) = a^\prime \theta(e_j),$ and we can therefore define a mapping from $F_q$ to $F_q,$ namely $\psi_j: a\mapsto a^\prime.$ From $\theta(ae_j) = \psi_j(a)\theta(e_j)$ and $\theta(be_j) = \psi_j(b)\theta(e_j),$ and since $\theta$ is a ring isomorphism of $\fring,$ we get $\psi_j(a+b)=\psi_j(a) + \psi_j(b),\ \psi_j(ab) = \psi_j(a)\psi_j(b),\ \psi_j(1)=1,$ and $\psi_j$ is bijective, so $\psi_j \in \Aut(F_q).$ \end{proof} \begin{remark} By imitating the above proof, we can determine the automorphisms of a general finite commutative semisimple ring, and the description of the automorphism group of $\fring$ given below also extends to such rings. \end{remark} Let $G_1,G_2$ be the subsets of $\Aut(\fring)$ defined as follows. An element $\theta\in G_1$ satisfies $\theta(e_i) = e_i,\ 1\leq i \leq t,$ that is to say, $\theta$ acts on $e_1,\dots,e_t$ as the identity map, i.e., $\bar{\theta} = \id.$ An element $\theta\in G_2$ satisfies $\theta\mid_{F_{q}1_{\fring}}= \id,$ that is to say, the elements of $G_2$ are the automorphisms of $\fring$ that fix every element of $F_{q}1_{\fring}.$ \begin{theorem} The $G_1,G_2$ defined above are subgroups of $\Aut(\fring),$ and \begin{equation*} G_1 \cong \underbrace{\mathbb{Z}_r\times \dots \times \mathbb{Z}_r }_{t \text{ terms}},\qquad G_2 \cong S_t, \qquad \Aut(\fring) /G_1 \cong G_2. \end{equation*} \end{theorem} \begin{proof} By Theorem \ref{thm:autofR} we can define a map from $G_1$ to $\prod_{i=1}^t \Aut(F_q)$: $\theta \mapsto (\psi_1,\dots,\psi_t);$ one can verify directly that this is a group isomorphism, and combining it with $\Aut(F_q)\cong \mathbb{Z}_r$ gives the first isomorphism. For $\theta\in \Aut(\fring),$ from Lemma \ref{lem:morphismonbasis} we know that the action of $\theta$ on $e_1,\dots,e_t$ is a permutation of $e_1,\dots,e_t;$ recall $\theta(e_i) = e_{\bar{\theta}(i)},$ so $\bar{\theta}$ is naturally a permutation of $1,2,\dots, t$. Define a map from $G_2$ to $S_t:$ $\theta \mapsto \bar{\theta};$ again one can verify directly that this is a group isomorphism. Consider the surjective homomorphism $f$ from $\Aut(\fring)$ to $S_t\colon \theta \mapsto \bar{\theta};$ then $\ker f = G_1,$ so $\Aut(\fring)/G_1 \cong S_t \cong G_2.$ \end{proof} \begin{corollary} The number of elements in $\Aut(\fring)$ is $|\Aut(\fring)| = r^t \times t!.$ \qed \end{corollary} Now we can give the structure of $\Aut(\fring).$ \begin{theorem} The automorphism group of $\fring$ is $\Aut(\fring) = G_1G_2.$ \end{theorem} \begin{proof} First, $G_1G_2 \subset \Aut(\fring).$ By definition, $G_1 \cap G_2 = \{\id\},$ so $|G_1G_2| = |G_1||G_2|/|G_1\cap G_2| = |\Aut(\fring)|,$ so $\Aut(\fring) = G_1G_2.
$ \end{proof} \subsection{Decomposition of linear codes over \texorpdfstring{$\fring$}{R}} In this section, we will establish a one-to-one correspondence between a linear code $C$ over $\fring$ and $t$ linear codes $C_1,C_2,\dots,C_t$ over $F_q;$ the initial idea can be traced back to Dougherty et al. \cite{dougherty1999self} on self-dual codes over $\mathbb{Z}_{2k}$. Indeed, for the class of finite commutative semisimple rings we deal with here, this method is widely used. For example, Zhu et al. \cite{zhu2010some} showed how a linear code over $F_2[v]/(v^2-v)$ can be decomposed into linear codes over $F_2.$ First, the definition of linear codes over a finite ring with identity is given. \begin{definition} Let $R$ be a finite ring with identity. If $C$ is a nonempty subset of the free module $R^{n}$ of rank $n$ over $R,$ then $C$ is said to be a code over $R$ of length $n,$ and the elements of $C$ are called codewords. For $x\in C,$ denote the number of nonzero components of $x$ by $\wt_H(x),$ called the Hamming weight of $x.$ More specifically, if $C$ is a left $R$-submodule of $R^{n},$ then $C$ is said to be a linear code of length $n$ over $R.$ \end{definition} We can also define the inner product and the dual code. \begin{definition} Let $R$ be a finite ring with identity. For $u=(u_1,u_2,\dots,u_n),$ $v=(v_1,v_2,\dots,v_n) \in R^{n},$ define the inner product of $u$ and $v$ by $u\cdot v = u_1v_1 + u_2v_2 + \dots + u_nv_n.$ If $C$ is a code of length $n$ over $R,$ define the dual code of $C$ as \begin{equation*} C^{\perp} = \left\{z\in R^{n} : z\cdot x =0,\ \forall\ x\in C \right\}. \end{equation*} \end{definition} The ring $\fring$ can be viewed as a $t$-dimensional vector space over $F_q.$ In fact, there is an isomorphism of vector spaces $\varphi: \fring \rightarrow F_q^{t},$ $\sum_{j=1}^t x_je_j \mapsto (x_1,x_2,\dots,x_t).$ Then $\varphi$ induces a map from $\fring^{n}$ to $F_q^{tn},$ which we still denote by $\varphi;$ it maps \begin{equation*} \left(\sum_{j=1}^t x_{0j}e_j,\dots,\sum_{j=1}^tx_{n-1,j}e_j\right) \in \fring^{n} \end{equation*} to \begin{equation*} \left(x_{01},x_{11},\dots,x_{n-1,1},x_{02},x_{12},\dots,x_{n-1,2},\dots, x_{0t},x_{1t},\dots,x_{n-1,t}\right) \in F_q^{tn}. \end{equation*} \begin{remark} \label{expresscodesbymatrix} Consider the matrix \begin{equation*} \begin{pmatrix} x_{01} & x_{02} & \dots & x_{0t} \\ x_{11} & x_{12} & \dots & x_{1t} \\ \vdots & \vdots & & \vdots \\ x_{n-1,1} & x_{n-1,2} & \dots & x_{n-1,t} \end{pmatrix}; \end{equation*} the preimage under $\varphi$ is the element of $\fring^{n}$ read off from the matrix row by row, and the image is the vector obtained by reading the entries column by column. \end{remark} \begin{definition} Define the weight of a codeword $x$ in $\fring^{n}$ as $\wt_{G}(x) = \wt_{H}(\varphi(x)),$ and define the distance between $x,y \in \fring^{n}$ as $\mathrm{d}_{G}(x,y) = \wt_{G}(x-y).$ If $C$ is a code over $\fring$ with at least two codewords, define the minimum distance of $C$ as $\mathrm{d}_G(C) = \min\{\mathrm{d}_G(x,y) \colon x,y\in C,\ x\neq y \}.$ \end{definition} \begin{proposition} Consider $\fring^{n}$ and $F_{q}^{tn}$ as vector spaces over $F_q;$ then $\varphi$ is a distance-preserving isomorphism of vector spaces.
\end{proposition} \begin{proof} For any element in $r\in F_q,$ and for any \begin{align*} x & =\left(\sum_{j=1}^t x_{0j}e_j,\sum_{j=1}^tx_{1j}e_j,\dots,\sum_{j=1 }^tx_{n-1,j}e_j\right) \in \fring^{n}, \\ y & =\left(\sum_{j=1}^t y_{0j}e_j,\sum_{j=1}^t y_{1j}e_j,\dots,\sum_{j=1}^t y_{ n-1,j}e_j\right)\in \fring^{n},\end{align*} by definition we have \begin{align*} \varphi(x+y) & = \left(x_{01}+y_{01},\dots,x_{n-1,1}+y_{n-1,1},\dots,x_{0t} +y_{0t},\dots,x_{n-1,t}+y_{n-1,t}\right) \\ & =\varphi(x) + \varphi(y).\\ \varphi(rx) & = \left(rx_{01},\dots,rx_{n-1,1},\dots,rx_{0,t},\dots,rx_{n-1,t}\right)=r\varphi(x). \end{align*} So $\varphi$ is a $F_q$ linear map. Obviously, $\varphi$ is bijective, thus $\varphi$ is an isomorphic map. Because $$\mathrm{d}_{G}(x,y) = \wt_{G}(x-y) = \wt_{H}\left(\varphi(x-y)\right)= \wt_H\left(\varphi(x) - \varphi(y)\right) = \dhamming \left(\varphi(x),\varphi(y)\right),$$ $\varphi$ is distance preserving. \end{proof} From the above proposition it follows: \begin{corollary} If $C$ is a linear code of length $n$ over $\fring$ and its minimum distance is $\mathrm{d}_G,$ then $\varphi(C)$ is a linear code over $F_q$ with length $tn$ and minimum distance $\mathrm{d}_G.$ \qed \end{corollary} \begin{definition} Let $A_1, A_2,\dots, A_t$ be non-empty sets, denote \begin{align*} A_1+A_2+\dots + A_t &= \left\{a_1 +a_2+ \dots + a_t \colon a_i\in A_i,\ 1\leq i \leq t\right\}, \\ A_1\times A_2\times \dots \times A_t &= \left\{(a_1,a_2,\dots,a_t)\colon a_i\in A_i,\ 1\leq i\leq t\right\}. \end{align*} For any subset $C$ of $\fring^{n}$ and $1\leq j \leq t$ define \begin{equation*} C_j = \left\{(x_{0j},x_{1j},\dots,x_{n-1,j}) \in F_q^n : \exists\,\left(\sum_{j=1}^ tx_{0j}e_j,\dots,\sum_{j=1}^t x_{n-1,j}e_j\right) \in C\right\}. \end{equation*} \end{definition} \begin{definition} Let $G$ be a matrix over a finite ring $R,$ and every row vector of $G$ is a codeword in a linear code $C$ over $R.$ If the left $R $-module generated by the row vector of $G$ is $C,$ then the matrix $G$ is said to be a generator matrix of the linear code $C.$ \end{definition} \begin{theorem} \label{decomposingcodes} Let $C$ be a linear code of length $n$ over $\fring,$ then $C=C_1e_1 + \dots + C_te_t,$ where $C_j$ is a linear code of length $n$ over $F_q,$ $1\leq j \leq t.$ Conversely, if $C_1,C_2,\dots,C_t$ are linear codes of length $n$ over $F_q,$ then $C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring.$ If $G_j$ is a generator matrix of $C_j,$ $1\leq j\leq t,$ then \begin{equation*} \begin{pmatrix} G_1e_1 \\ G_2e_2 \\ \vdots \\ G_te_t \end{pmatrix}, \quad \begin{pmatrix} G_1 & & & \\ &G_2 && \\ & & \ddots & \\ & & & G_t \end{pmatrix} \end{equation*} are generator matrices of $C=C_1e_1 + \dots + C_te_t$ and $\varphi(C)= C_1\times \dots \times C_t$ respectively, $$\mathrm{d}_{G}(C) = \min\left\{\dhamming(C_j): 1\leq j\leq t\right\}.$$ \end{theorem} \begin{proof} It can be verified according to the definition. 
\end{proof} \begin{theorem} \label{dualcodesoverR} Assume that $C=C_1e_1 + \dots + C_te_t$ is a linear code over $\fring;$ then $C^{\perp} = C_1^{\perp} e_1 + \dots + C_t^{\perp}e_t.$ Moreover, $C\subset C^{\perp}$ if and only if $C_j \subset C_j^{\perp},\ 1\leq j \leq t,$ and $C^{\perp}\subset C$ if and only if $C_j^{\perp} \subset C_j,\ 1\leq j \leq t.$ \end{theorem} \begin{proof} Let $x=x_1e_1 + \dots + x_te_t \in C^{\perp},$ where $x_j\in F_q^{n},\ 1\leq j \leq t.$ For any $y_j \in C_j,$ by definition there exists $y=y_1e_1 + \dots + y_te_t \in C,$ so $x\cdot y = x_1\cdot y_1 e_1 + \dots + x_t\cdot y_t e_t = 0,$ then $x_j\cdot y_j=0,$ so $x_j \in C_j^{\perp},\ 1\leq j \leq t.$ Thus, $C^{\perp} \subset C_1^{\perp}e_1 + \dots + C_t^{\perp}e_t.$ Conversely, for any $x=x_1e_1 + \dots + x_te_t\in C_1^{\perp}e_1 + \dots + C_t^{\perp}e_t,$ where $x_j \in C_j^{\perp},\ 1\leq j\leq t,$ and for any $y_j \in C_j,$ we have $x_j\cdot y_j=0,\ 1\leq j \leq t.$ So for any $y=y_1e_1 + \dots + y_te_t \in C,$ we have $x\cdot y = 0,$ and therefore $x\in C^{\perp}.$ This shows that $C_1^{\perp}e_1 + \dots + C_t^{\perp}e_t \subset C^\perp.$ In summary, $C^\perp = \left(C_1e_1 + \dots + C_te_t\right)^\perp=C_1^{\perp}e_1 + \dots + C_t^{\perp}e_t.$ From this equation the latter statements follow immediately. \end{proof} \section{Characterization of skew constacyclic codes over \texorpdfstring{$\fring$}{R}} \begin{definition} Let $\lambda\in U(\fring)$ and $\theta\in \Aut(\fring),$ and define a map from $\fring^n$ to $\fring^n$ by $$\sigma_{\theta, \lambda}: (c_0,c_1,\dots,c_{n-1}) \mapsto (\lambda \theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})).$$ Let $C$ be a linear code of length $n$ over $\fring.$ If for any $c \in C$ we have $\sigma_{\theta,\lambda}(c) \in C,$ then $C$ is said to be a $\theta$-$\lambda$-cyclic code of length $n$ over $\fring;$ if $\theta$ and $\lambda$ are not specified, $C$ is also called a skew constacyclic code. When $\lambda =1,$ it is called a $\theta$-cyclic code or skew cyclic code. \end{definition} Similarly to the skew constacyclic codes over $F_q$ considered earlier, for $\theta\in \Aut(\fring)$ define the skew polynomial ring over $\fring$ \begin{equation*} \fring[x;\theta] = \left\{a_0 +a_1x+\dots+a_kx^k\mid a_i \in \fring, 0\leq i \leq k\right\}, \end{equation*} that is, the elements of $\fring[x;\theta]$ are elements of $\fring[x],$ but the coefficients are written on the left. Addition is defined in the usual way, and multiplication is determined by $ax^n \cdot bx^m = a\theta^n(b)x^{n+m}$ together with the associative and distributive laws. Similarly, it can be proved that in $\fring[x;\theta]$ we can also do division with remainder on the right, as long as the divisor is monic. Use $\left\langle x^n-\lambda \right\rangle$ to denote the left $\fring[x;\theta]$-module $\fring[x;\theta](x^n-\lambda);$ then $\fring[x;\theta]/\left\langle x^n-\lambda \right\rangle$ is a left $\fring[x;\theta]$-module. Define a map $\Phi:$ \begin{align*} \fring^{n} & \rightarrow \fring[x;\theta]/\left\langle x^n-\lambda \right\rangle \\ (c_0,c_1,\dots,c_{n-1}) & \mapsto c_0+c_1x+\dots+c_{n-1}x^{n-1}+\left\langle x^n-\lambda \right\rangle. \end{align*} For special cases of the following theorems, see \citep[Theorem 2,][]{2012Onskewcyclic} by Abualrub et al., \citep[Theorem 3.5,][]{gao2013skew} by Gao, \citep[Theorem 4.1,][]{islam2019note} by Islam et al., \citep[Theorem 3.1,][]{bag2019skew} and \citep[Theorem 2.4,][]{bag2020quantum} by Bag et al.
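As an informal illustration of the shift map $\sigma_{\theta,\lambda}$ just defined (and not part of the formal development), the following Python sketch models the special case $\fring = F_p\times \dots \times F_p$ with $p$ prime, so that every component automorphism is trivial and $\theta$ reduces to a permutation of the idempotents; the helper names \texttt{r\_mul}, \texttt{theta} and \texttt{sigma} are ours.
\begin{verbatim}
# Minimal sketch of sigma_{theta,lambda} on R^n, where R = F_p x ... x F_p
# (p prime).  Elements of R are tuples of length t with entries in
# {0,...,p-1}; theta is modelled by a permutation perm of the idempotents
# (theta(e_j) = e_{perm[j]}), since over a prime field every component
# automorphism psi_j is the identity.

p, t = 3, 2                      # illustrative parameters

def r_mul(x, y):
    """Componentwise product in R = F_p^t."""
    return tuple((a * b) % p for a, b in zip(x, y))

def theta(x, perm):
    """Automorphism of R determined by the permutation of idempotents."""
    y = [0] * t
    for j in range(t):
        y[perm[j]] = x[j]
    return tuple(y)

def sigma(codeword, lam, perm):
    """sigma_{theta,lambda}(c_0,...,c_{n-1})
       = (lam*theta(c_{n-1}), theta(c_0), ..., theta(c_{n-2}))."""
    shifted = [r_mul(lam, theta(codeword[-1], perm))]
    shifted += [theta(c, perm) for c in codeword[:-1]]
    return tuple(shifted)

# Example: n = 4, theta swaps the two components, lambda = e_1 + 2e_2.
perm = [1, 0]
lam = (1, 2)
c = ((1, 0), (2, 1), (0, 0), (1, 1))
print(sigma(c, lam, perm))
\end{verbatim}
For $q=p^r$ with $r>1$ one would additionally have to model the Frobenius part $\psi_j$ of each component automorphism.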
\begin{theorem} Let $\theta \in \Aut(\fring),$ $\lambda \in U(\fring),$ and let $C$ be a linear code of length $n$ over $\fring;$ then $C$ is a \tlcycliccode\ over $\fring$ if and only if $\Phi(C)$ is a left $\fring[x;\theta]$-submodule of $\fring[x;\theta]/\left\langle x^n-\lambda \right\rangle$. \end{theorem} \begin{proof} Almost identical to the proof of Theorem \ref{thm:skewcodesoverFbymod}. \end{proof} Similarly to Theorem \ref{dualcodesoverF}, the dual code of a skew constacyclic code over $\fring$ is still a skew constacyclic code. \begin{theorem} Let $\theta \in \Aut(\fring),$ $\lambda \in U(\fring).$ If $C$ is a \tlcycliccode\ of length $n$ over $\fring,$ then $C^\perp$ is a $\theta$-$\lambda^{-1}$-cyclic code of length $n$ over $\fring.$ \end{theorem} \begin{proof} First, it can be verified directly that $C^\perp$ is a left $\fring$-module, that is, $C^\perp$ is a linear code of length $n$ over $\fring.$ Then we only need to prove that for any $y\in C^\perp$ we have $\sigma_{\theta,\lambda^{-1}}(y) \in C^\perp.$ For any $x\in C,$ since $C$ is a \tlcycliccode\ and $\sigma_{\theta,\lambda}$ maps the finite set $C$ injectively into itself, hence bijectively onto itself, there exists $\tilde{x} \in C$ such that $x = \sigma_{\theta,\lambda}(\tilde{x}).$ Note that $\sigma_{\theta,\lambda^{-1}}(y) \cdot \sigma_{\theta,\lambda}(\tilde{x}) = \theta(y \cdot \tilde{x}) =0,$ thus $\sigma_{\theta,\lambda^{-1}}(y) \in C^\perp.$ \end{proof} \begin{remark} It is not difficult to see from the above proof that the conclusion holds for skew constacyclic codes over any finite commutative ring with identity. \end{remark} From the definition, a skew constacyclic code depends on the parameters $\theta$ and $\lambda,$ and the following discussion shows that these two parameters are indeed important to the structure of the skew constacyclic code. From the previous discussion, we already know the invertible elements of $\fring$ and the form of the automorphisms of $\fring,$ and we know that a linear code $C=C_1e_1+\dots+C_te_t$ over $\fring$ is completely determined by $C_1,\dots,C_t.$ With these preparations, we can give the characterization of \tlcycliccode\ over $\fring.$ Because the conclusions in some special cases can be stated more transparently, and understanding them helps with the general case, we first give the characterization of \tlcycliccode\ in two special cases, and finally give the most general conclusion. \subsection{Special case one} \label{sec:specialone} Let $\theta \in G_1,$ i.e., let $\theta$ be an automorphism of $\fring$ that fixes $e_1,e_2,\dots,e_t;$ specifically, $\theta(\alpha_1e_1 + \dots + \alpha_te_t) = \psi_1(\alpha_1)e_1 + \dots + \psi_t(\alpha_t)e_t,$ where $\psi_j \in \Aut(F_q),$ $\alpha_j\in F_q,\ 1\leq j\leq t.$ Throughout this subsection, $\theta$ is assumed to be of this form. If $\lambda = \lambda_1e_1 + \dots + \lambda_te_t$ with $\lambda_j \in F_q,\ 1\leq j\leq t,$ then $\lambda$ is invertible in $\fring$ if and only if $\lambda_j \neq 0$ for all $1\leq j\leq t.$ The \tlcycliccode\ over $\fring$ will be considered below.
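Before stating the main result of this subsection, here is a small numerical sketch (again restricted to a prime field, so every $\psi_j$ is the identity) of the componentwise behaviour that drives it: when $\theta$ fixes each $e_j,$ applying $\sigma_{\theta,\lambda}$ to a codeword amounts to applying the constacyclic shift determined by $\lambda_j$ to each component word separately. The function names are illustrative only.
\begin{verbatim}
# Sketch (prime field F_p, theta in G_1 acting trivially on e_1,...,e_t):
# the shift sigma_{theta,lambda} on R^n decomposes into t independent
# constacyclic shifts on F_p^n, one per component.

p = 3

def constacyclic_shift(word, lam_j):
    """(y_0,...,y_{n-1}) -> (lam_j*y_{n-1}, y_0, ..., y_{n-2}) over F_p."""
    return ((lam_j * word[-1]) % p,) + word[:-1]

def component(codeword, j):
    """j-th component word of a codeword over R = F_p^t."""
    return tuple(c[j] for c in codeword)

def sigma_diagonal(codeword, lam):
    """sigma_{theta,lambda} when theta fixes every e_j: shift each component."""
    t, n = len(lam), len(codeword)
    cols = [constacyclic_shift(component(codeword, j), lam[j]) for j in range(t)]
    return tuple(tuple(cols[j][i] for j in range(t)) for i in range(n))

lam = (1, 2)                        # lambda = e_1 + 2e_2; a unit, since both entries are nonzero
c = ((1, 0), (2, 1), (0, 0), (1, 1))
print(sigma_diagonal(c, lam))       # componentwise constacyclic shift
\end{verbatim}
On this small example the output agrees with applying $\sigma_{\theta,\lambda}$ (with trivial permutation part) directly to the same codeword, which is the componentwise description formalized in the next theorem.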
The main idea is to transform the problem into one about $\psi_j$-$\lambda_j$-cyclic codes over $F_q.$ For special cases of the following theorems, see \citep[Theorem 3,][]{gursoy2014construction} by Gursoy et al., \citep[Theorem 5,][]{gao2017skew} by Gao et al., \citep[Theorem 4.1,][]{shi2015skew} by Shi et al., \citep[Theorem 4.2 and Theorem 8.1,][]{islam2018skew}, \citep[Theorem 5.2,][]{islam2019note} by Islam et al., \citep[Theorem 3.3,][]{Ashraf2019Quantum} by Ashraf et al., and \citep[Theorem 4.3,][]{bag2020quantum} by Bag et al. \begin{theorem} \label{thm:mainofcase1} Let $\lambda = \lambda_1e_1 + \dots + \lambda_te_t \in U(\fring),$ and let $C=C_1e_1 + \dots+ C_te_t$ be a linear code of length $n$ over $\fring.$ Then $C$ is a \tlcycliccode\ over $\fring$ if and only if $C_j$ is a $\psi_j$-$\lambda_j$-cyclic code over $F_q,$ $1\leq j\leq t.$ \end{theorem} \begin{proof} Necessity. For any $x_j=(x_{0j},x_{1j},\dots,x_{n-1,j}) \in C_j,\ 1\leq j\leq t,$ we need to prove \begin{equation*} \left(\lambda_j\psi_j(x_{n-1,j}),\psi_j(x_{0j}),\dots,\psi_j(x_{n-2,j})\right) \in C_j. \end{equation*} From $x_j \in C_j$ we get $x=x_1e_1 + \dots + x_te_t\in C,$ and then \begin{gather*} \left(\lambda\theta\left(\sum_{j=1}^t x_{n-1,j}e_j\right),\theta\left(\sum_{j=1}^tx_{0j}e_j\right),\dots,\theta\left(\sum_{j=1}^tx_{n-2,j}e_j\right)\right) \in C,\\ \left(\sum_{j=1}^t \lambda_j \psi_j(x_{n-1,j})e_j,\sum_{j=1}^t\psi_j(x_{0j})e_j,\dots, \sum_{j=1}^t\psi_j(x_{n-2,j})e_j\right)\in C, \end{gather*} so $\left(\lambda_j\psi_j(x_{n-1,j}),\psi_j(x_{0j}),\dots,\psi_j(x_{n-2,j})\right) \in C_j.$ Sufficiency. For any \begin{gather*} \left(\sum_{j=1}^t x_{0j}e_j,\sum_{j=1}^tx_{1j}e_j,\dots,\sum_{j=1}^tx_{n-1,j}e_j\right)\in C, \shortintertext{we need to prove that} \left(\lambda\theta\left(\sum_{j=1}^tx_{n-1,j}e_j\right),\theta\left(\sum_{j=1}^tx_{0j}e_j\right),\dots,\theta\left(\sum_{j=1}^tx_{n-2,j}e_j\right)\right) \in C. \end{gather*} Since $(x_{0j},x_{1j},\dots,x_{n-1,j}) \in C_j,$ we have $\left(\lambda_j\psi_j(x_{n-1,j}),\psi_j(x_{0j}),\dots,\psi_j(x_{n-2,j})\right) \in C_j,$ and then \begin{equation*} \left(\sum_{j=1}^t \lambda_j \psi_j(x_{n-1,j})e_j,\sum_{j=1}^t\psi_j(x_{0j})e_j,\dots, \sum_{j=1}^t\psi_j(x_{n-2,j})e_j\right) \in C, \end{equation*} and this element is exactly the one required above. \end{proof} Skew polynomials can be used to describe skew constacyclic codes over $\fring,$ mainly by means of the results on skew constacyclic codes over $F_q$ obtained before. The special cases of the following theorem can be found in \citep[Theorem 4 and Theorem 5,][]{gursoy2014construction}, \citep[Theorem 6,][]{gao2017skew} by Gao et al., \citep[Theorem 4.2 and Theorem 4.3,][]{shi2015skew} by Shi et al., \citep[Theorem 4.3 and Theorem 8.2,][]{islam2018skew}, \citep[Theorem 5.5,][]{islam2019note} by Islam et al., \citep[Theorem 3.4,][]{Ashraf2019Quantum} by Ashraf et al., and \citep[Theorem 4.6,][]{bag2020quantum} by Bag et al. \begin{theorem}\label{characterizedbypolynomial} Let $C=C_1e_1 + \dots + C_te_t$ be a \tlcycliccode\ of length $n$ over $\fring,$ and let the generator skew polynomial of $C_j$ be $g_j(x),\ 1\leq j \leq t;$ then \begin{align*} \Phi(C) & = \fring[x;\theta]\left(g_1(x)e_1+\left\langle x^n-\lambda \right\rangle,\dots,g_t(x)e_t + \left\langle x^n-\lambda \right\rangle \right) \\ & = \fring[x;\theta] \left(g_1(x)e_1 + \dots + g_t(x)e_t + \left\langle x^n-\lambda \right\rangle \right).
\end{align*} Where $g_1(x)e_1 + \dots + g_t(x)e_t$ is a right factor of $x^n - \lambda.$ \begin{equation*} \left|C\right| = \prod_{j=1}^t \left|C_j\right| = q^{\sum_{j=1}^t \left(n-\deg g_j(x)\right)}. \end{equation*} \end{theorem} \begin{proof} Since $ e_j \left(g_1(x)e_1 + \dots + g_t(x)e_t\right) = g_j(x)e_j,$ the second equality is naturally true. Because $g_j(x)e_j + \left\langle x^n-\lambda \right\rangle$ corresponds to an element in $C_je_j,$ an element in $C_je_j$ is naturally an element in $C,$ so $g_j(x)e_j +\left\langle x^n-\lambda \right\rangle \in \Phi(C), 1\leq j\leq t,$ thus \begin{equation*} \Phi(C) \supset \fring[x;\theta]\left(g_1(x)e_1+\left\langle x^n-\lambda \right\rangle,\dots,g_t(x)e_t + \left\langle x ^n-\lambda \right\rangle\right). \end{equation*} For any $ c(x) + \left\langle x^n-\lambda \right\rangle \in \Phi(C),$ the coefficients of $c(x) $ are in $\fring,$ and it can be written as the $F_q$ linear sum of $e_j .$ Then $c(x) = c_1(x)e_1 + \dots + c_t(x)e_t,$ where $c_j(x) \in F_q[x;\theta],$ $c_1(x)$ corresponds to an element in $C_1,$ then $c_1(x) = q_1(x)g_1(x) + k(x)(x^n-\lambda_1),$ so $c_1(x )e_1 = \left(q_1(x)g_1(x) + k(x)(x^n-\lambda)\right)e_1.$ Therefore, \begin{equation*} c(x) +\left\langle x^n-\lambda \right\rangle = q_1(x)g_1(x)e_1 + \dots + q_t(x)g_t(x)e_t + \left\langle x^n -\lambda \right\rangle , \end{equation*} and $c(x)+\left\langle x^n-\lambda \right\rangle \in \fring[x;\theta]\left(g_1(x)e_1+\left\langle x^n-\lambda \right\rangle,\dots,g_t(x)e_t + \left\langle x^n-\lambda \right\rangle\right).$ Since $h_j(x)g_j(x) = x^n - \lambda_j,$ then $h_j(x)g_j(x)e_j = (x^n-\lambda)e_j,$ $ 1\leq j\leq t,$ thus \begin{equation*} \left(\sum_{j=1}^th_j(x)e_j\right)\left(\sum_{j=1}^t g_j(x)e_j\right) = \sum_{j=1}^t h_j (x)g_j(x)e_j = x^n-\lambda. \end{equation*} \end{proof} The above conclusion tells us that if $C$ is a \tlcycliccode\ of length $n$ over $\fring,$ then $\Phi(C)$ can be generated by only one element. From the skew polynomial characterization of $C,$ the skew polynomial characterization of $C^\perp$ can be obtained. The special case of the following corollary can be found in \citep[Corollary 7,][]{gursoy2014construction} by Gursoy et al., \citep[Corollary 4.4,][]{shi2015skew} by Shi et al., \citep[Corollary 4.4 and Corollary 8.4,][]{islam2018skew} by Islam et al., \citep[Corollary 3.7,][]{Ashraf2019Quantum} by Ashraf et al., \citep[Corollary 4.8,][]{bag2020quantum} by Bag et al.. They assumed $\ord(\theta) \mid n$ and $\theta(\lambda) = \lambda$. 
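The divisibility condition and the cardinality formulas around Theorem \ref{characterizedbypolynomial} are easy to check mechanically in the commutative special case. The following Python sketch assumes a prime field, so that each $F_q[x;\psi_j]$ is the ordinary polynomial ring $F_p[x];$ the generator polynomials below are illustrative choices that are assumed to divide the corresponding $x^n-\lambda_j$ over $F_3,$ and the helper names are ours.
\begin{verbatim}
# Sketch over a prime field F_p with trivial component automorphisms:
# g(x) = g_1(x)e_1 + ... + g_t(x)e_t is a right divisor of x^n - lambda
# exactly when g_j(x) | x^n - lambda_j in F_p[x] for every j, and then
# |C| = p^{sum_j (n - deg g_j)},  |C^perp| = p^{sum_j deg g_j}.

p = 3

def poly_mod(a, b):
    """Remainder of a(x) modulo b(x) over F_p; coefficient lists are
    lowest-degree first, and b is assumed monic."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        shift, c = len(a) - len(b), a[-1]
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bi) % p
        a.pop()
    return a

def divides(g, f):
    """True if g(x) divides f(x) over F_p."""
    return not any(poly_mod(f, g))

n = 10
lam = (1, 2)                  # lambda = e_1 + 2e_2, a unit of F_3 x F_3
g = ([1, 1],                  # g_1(x) = 1 + x, a factor of x^10 - 1
     [1, 2, 0, 1, 1])         # g_2(x) = 1 + 2x + x^3 + x^4, assumed factor of x^10 + 1
for j, (gj, lj) in enumerate(zip(g, lam), start=1):
    f = [(-lj) % p] + [0] * (n - 1) + [1]    # x^n - lambda_j
    print(f"g_{j}(x) | x^{n} - {lj}:", divides(gj, f))
print("|C|      =", p ** sum(n - (len(gj) - 1) for gj in g))
print("|C^perp| =", p ** sum(len(gj) - 1 for gj in g))
\end{verbatim}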
\begin{corollary} If $C$ is a \tlcycliccode\ of length $n$ over $\fring,$ $\theta \in \Aut(\fring),$ $\lambda = \sum_{j=1}^t \lambda_je_j \in U(\fring),$ $C=C_1e_1 + \dots + C_te_t,$ then $C^\perp$ is a $\theta$-$\lambda^{-1}$-cyclic code of length $n$ over $\fring.$ For $1\leq j\leq t,$ if $C_j$ is a $\psi_j$-$\lambda_j$-cyclic code of length $n$ over $F_q,$ $g_j(x)$ is the generator skew polynomial of $C_j$ and $h_j(x)g_j(x) = x^n - \lambda_j,$ then $C_j^\perp$ is a $\psi_j$-$\lambda_j^{-1}$-cyclic code of length $n$ over $F_q,$ the generator skew polynomial of $C_j^{\perp}$ is $\hbar_j^\ast(x)$ multiplied on the left by the inverse of its leading coefficient, and $\left|C^\perp\right| = q^{\sum_{j=1}^t \deg g_j(x)},$ \begin{align*} \Phi \left(C^\perp\right) & = \fring [x;\theta]\left(\hbar_1^\ast(x) e_1 + \left\langle x^n-\lambda^{-1} \right\rangle ,\dots,\hbar_t^\ast(x)e_t + \left\langle x^n-\lambda^{-1} \right\rangle \right) \\ & = \fring[x;\theta]\left(\hbar_1^\ast(x)e_1 + \dots + \hbar_t^\ast(x)e_t+ \left\langle x^n-\lambda^{-1} \right\rangle \right). \end{align*} \end{corollary} \begin{proof} It follows from Theorems \ref{dualcodesoverR}, \ref{dualcodesoverF}, \ref{polynomialofdualcodes} and \ref{characterizedbypolynomial}. \end{proof} Using Theorem \ref{containingcondition}, a necessary and sufficient condition for a \tlcycliccode\ over $\fring$ and its dual code to contain one another can also be obtained; we omit it here. \subsection{Special case two} For $\theta \in \Aut(\fring),$ we have shown that $\theta$ is determined by the way it acts on $F_q1_{\fring}$ and on the set of primitive idempotents $\{e_1,e_2,\dots,e_t\}.$ The action of $\theta$ on the set $\{e_1,e_2,\dots,e_t\}$ in the case considered above is trivial. In the case now to be considered, the action of $\theta$ on $F_q1_{\fring}$ is trivial; more specifically, in this section, unless otherwise specified, it is always assumed that $$ \theta: \fring \rightarrow \fring,\quad \alpha_1e_1 + \dots +\alpha_t e_t \mapsto \alpha_1e_2 + \dots + \alpha_{t-1}e_t + \alpha_te_1, $$ that is to say, $\theta\in G_2$ and $\theta$ corresponds to $\bar{\theta}=(1,2,\dots,t)$ in the symmetric group $S_t.$ In addition, we only consider $\theta$-cyclic codes $C$ of length $n$ over $\fring.$ By Theorem \ref{decomposingcodes}, determining $C =C_1e_1 + \dots + C_te_t$ is equivalent to determining $C_i,\ 1\leq i\leq t.$ For convenience of notation, we first define a map: \begin{equation*} \sigma_{\theta}: \fring^{n} \rightarrow \fring^{n},\quad (c_0,c_1,\dots,c_{n-1}) \mapsto (\theta(c_{n-1}),\theta(c_0),\dots,\theta(c_{n-2})). \end{equation*} The following theorem generalizes Gao's \cite[Theorem 3.7,][]{gao2013skew}. \begin{theorem} \label{thm:specialcase} Suppose $(t,n) =1$ and $C = C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring.$ Then $C$ is a $\theta$-cyclic code of length $n$ over $\fring$ if and only if $C_1 = C_2 = \dots = C_t$ and $C_1$ is a cyclic code of length $n$ over $F_q.$ \end{theorem} \begin{proof} Sufficiency. For any $x = x_1e_1 + \dots + x_te_t \in C,$ where $x_i \in C_i,\ 1\leq i \leq t,$ we have $\sigma_{\theta}(x) = \rho(x_1)e_{\bar{\theta}(1)}+ \dots + \rho(x_t)e_{\bar{\theta}(t)} = \rho(x_t)e_1 + \rho(x_1)e_2 + \dots + \rho(x_{t-1})e_t \in C.$ Necessity.
Because $(t,n)=1,$ there exist $u,v \in \mathbb{Z}$ such that $ut + vn =1;$ then for any $k\in \mathbb{Z},$ $(u+kn)t = 1+ (kt-v)n,$ so there are positive integers $a,b$ such that $at = 1+bn.$ Hence for any $x = x_1e_1 + \dots + x_te_t \in C,$ where $x_i \in C_i,\ 1\leq i \leq t,$ we have \begin{align*} \sigma_{\theta}^{at}(x) & = \rho^{at}(x_1)e_1 + \dots + \rho^{at}(x_t)e_t \\ & = \rho^{1+bn}(x_1)e_1 + \dots + \rho^{1+bn}(x_t)e_t \\ & = \rho(x_1)e_1 + \dots + \rho(x_t)e_t \in C. \end{align*} Thus $C_i$ is a cyclic code of length $n$ over $F_q,$ $1\leq i\leq t.$ There are also positive integers $a^\prime,b^\prime$ such that $a^\prime n = 1+b^\prime t,$ so \begin{equation*} \sigma_{\theta}^{a^\prime n} (x) = x_te_1 + x_1e_2 + x_2e_3 + \dots + x_{t-1}e_t \in C, \end{equation*} thus for any $x_1 \in C_1$ we have $x_1 \in C_2,$ so $C_1 \subset C_2.$ Similarly, $C_1\subset C_2 \subset C_3 \subset \dots \subset C_t \subset C_1,$ so $C_1 = C_2 = \dots = C_t.$ \end{proof} \begin{remark} This theorem can be regarded as a special case of Theorem \ref{thm:generalcase} to be proved below, but to help the reader understand the following results, it is presented here first. Using Theorem \ref{thm:mainofcase1} we obtain that, under the conditions of the above theorem, a $\theta$-cyclic code over $\fring$ must be a cyclic code over $\fring.$ \end{remark} \begin{corollary} \label{cor:specialcase_coprime} Let $(t,n) =1$ and let $C = C_1e_1 + \dots + C_te_t$ be a linear code of length $n$ over $\fring.$ If $C$ is a $\theta$-cyclic code of length $n$ over $\fring,$ then $C^{\perp} = C_1^{\perp} e_1 + \dots + C_t^{\perp}e_t$ is also a $\theta$-cyclic code of length $n$ over $\fring.$ Moreover, $C^\perp = C$ if and only if $C_1^\perp = C_1.$ Let $x^n -1 = p_1^{k_1}(x) \dots p_s^{k_s}(x)$ be the decomposition of $x^n -1$ in $F_q[x],$ where $k_i$ is a positive integer and $p_i(x)$ is a monic irreducible polynomial in $F_q[x],$ $1\leq i \leq s;$ then the number of $\theta$-cyclic codes of length $n$ over $\fring$ is $(1+k_1)\dots (1+k_s).$ \end{corollary} \begin{proof} The first two assertions follow from Theorem \ref{dualcodesoverR} and Theorem \ref{thm:specialcase}. Since cyclic codes of length $n$ over $F_q$ correspond one-to-one to ideals of $F_q[x]/(x^n-1),$ we get the conclusion about the number of $\theta$-cyclic codes. \end{proof} The following theorem generalizes Gao's \cite[Theorem 3.3,][]{gao2013skew}. \begin{theorem} \label{thm:generalcase} Suppose $(t,n) =\ell$ and $C = C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring.$ Then $C$ is a $\theta$-cyclic code of length $n$ over $\fring$ if and only if $C_i = C_{i+\ell} = \dots = C_{i+t-\ell},\ 1\leq i \leq \ell,$ $\rho^\ell(C_1) = C_1,$ and $C_2 = \rho(C_1), C_3 = \rho^2(C_1),\dots,C_\ell = \rho^{\ell-1}(C_1).$ The number of $\theta$-cyclic codes of length $n$ over $\fring$ is equal to the number of quasi-cyclic codes of length $n$ with index $\ell$ over $F_q.$ \end{theorem} \begin{proof} Sufficiency. For any $x = x_1e_1 + \dots + x_te_t \in C,$ where $x_i \in C_i,\ 1\leq i \leq t,$ we have $\sigma_{\theta}(x) = \rho(x_t)e_1 + \rho(x_1)e_2 + \dots + \rho(x_{t-1})e_t \in C.$ Necessity.
Because $(t,n)=\ell,$ there are positive integers $a^\prime,b^\prime$ such that $a^\prime n = \ell+b^\prime t;$ then for any $x = x_1e_1 + \dots + x_te_t \in C,$ where $x_i \in C_i,\ 1\leq i \leq t,$ we have \begin{equation*} \sigma_{\theta}^{a^\prime n} (x) = x_1e_{1+\ell} + x_2e_{2+\ell} + \dots +x_{t-\ell}e_t+ x_{t-\ell+1}e_1+\dots+ x_{t}e_{\ell} \in C, \end{equation*} so $C_i\subset C_{i+\ell} \subset C_{i+2\ell} \subset \dots \subset C_{i+t-\ell} \subset C_i,\ 1\leq i \leq \ell.$ Thus $C_i = C_{i+\ell} = \dots = C_{i+t-\ell},\ 1\leq i \leq \ell.$ There are positive integers $a,b$ such that $at = \ell+bn,$ thus \begin{align*} \sigma_{\theta}^{at} (x) & = \rho^{at}(x_1)e_1 + \dots + \rho^{at}(x_t)e_t \\ & = \rho^{\ell+bn}(x_1)e_1 + \dots + \rho^{\ell+bn}(x_t)e_t \\ & = \rho^\ell (x_1)e_1 + \dots + \rho^\ell (x_t)e_t \in C, \end{align*} so for any $x_i \in C_i$ we have $\rho^\ell (x_i) \in C_i,$ and hence $\rho^\ell (C_i) =C_i,\ 1\leq i \leq t.$ From $\sigma_{\theta}(x) = \rho(x_t)e_1 + \rho(x_1)e_2 + \dots + \rho(x_{t-1})e_t \in C,$ we know $\rho(C_1) \subset C_2,$ $\rho(C_2) \subset C_3,$ $\dots,$ $\rho(C_t) \subset C_1.$ Thus $C_1 = \rho^\ell(C_1) \subset \rho^{\ell-1}(C_2) \subset \rho^{\ell-2}(C_3) \subset \dots \subset \rho^{2}(C_{\ell-1}) \subset \rho(C_\ell) \subset C_{\ell+1} = C_1,$ so $C_2 = \rho(C_1), C_3 = \rho^2(C_1),\dots,C_\ell = \rho^{\ell-1}(C_1).$ According to the above analysis, if $C = C_1e_1 + \dots + C_te_t$ is a $\theta$-cyclic code of length $n$ over $\fring,$ then $\rho^\ell(C_1) = C_1,$ and $C_j$ is determined by $C_1$ for $2\leq j \leq t.$ Conversely, if $\rho^\ell(C_1) = C_1$ and $C_j,\ 2\leq j \leq t,$ are determined according to the above relations, then $C = C_1e_1 + \dots + C_te_t$ is a $\theta$-cyclic code of length $n$ over $\fring.$ Therefore, $C$ is determined by a code $C_1$ satisfying $\rho^\ell(C_1) = C_1,$ from which the final conclusion about the number of $\theta$-cyclic codes can be obtained. \end{proof} For a general $\theta \in G_2,$ let its corresponding element in $S_t$ be $\bar{\theta};$ then $\bar{\theta}$ can be expressed as a product of disjoint cycles. If $C$ is a $\theta$-cyclic code over $\fring,$ then $C_1,C_2,\dots,C_t$ are partitioned according to these cycles, each part is essentially determined by one code, and it can also be proved that each $C_i$ is a quasi-cyclic code over $F_q.$ Since this result can be obtained from the following general result, we do not write a separate proof. \subsection{General case} Let $\theta\in \Aut(\fring)$ act on $F_q1_{\fring}$ as $\theta(a1_{\fring}) = \psi_1(a)\theta(e_1)+\dots+\psi_t(a)\theta(e_t),$ where $\psi_j \in \Aut(F_q),\ 1\leq j \leq t,$ and let $\theta$ correspond to $\bar{\theta}$ in the symmetric group $S_t.$ That is to say, for any $\sum_{j=1}^ta_je_j\in \fring,$ $\theta\left(\sum_{j=1}^t a_je_j\right) = \sum_{j=1}^t \psi_j(a_j) e_{\bar{\theta}(j)}.$ A general characterization of \tlcycliccode\ over $\fring$ is given below.
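Before the general characterization, we record (as an informal aside) the elementary arithmetic step used repeatedly in the two proofs above: for $\ell=(t,n)$ there is a positive integer $a$ and a nonnegative integer $b$ with $at=\ell+bn,$ so that $\sigma_\theta^{at}$ fixes the component positions and acts on each component word as $\rho^{\ell},$ because $\rho^{n}$ is the identity. A brute-force Python sketch of this step (names ours):
\begin{verbatim}
from math import gcd

def shift_exponents(t, n):
    """Smallest positive a with a*t = gcd(t, n) + b*n for some b >= 0."""
    l = gcd(t, n)
    a = next(a for a in range(1, n + 1) if (a * t - l) % n == 0)
    return a, (a * t - l) // n

for t, n in [(2, 3), (4, 10), (3, 12)]:
    a, b = shift_exponents(t, n)
    l = gcd(t, n)
    assert a * t == l + b * n and (a * t) % n == l % n
    print(f"t={t}, n={n}: {a}*{t} = {l} + {b}*{n}")
\end{verbatim}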
\begin{theorem}[Characterization of skew constacyclic codes] \label{thm:Generalcase} Let $\theta\in \Aut(\fring),$ $\lambda=\lambda_1e_1+\dots+\lambda_te_t,$ where $\lambda_j\in F_q^\ast,\ 1\leq j \leq t.$ If $C= C_1e_1+\dots+C_te_t$ is a linear code of length $n$ over $\fring,$ then $C$ is a \tlcycliccode\ over $\fring$ if and only if $\rho_{\psi_{j}, \lambda_{\bar{\theta}(j)}}(C_j)= C_{\bar{\theta}(j)},\ 1\leq j \leq t.$ \end{theorem} \begin{proof} For any $x=x_1e_1 + \dots + x_te_t \in C,$ a direct calculation gives \begin{align*} \sigma_{\theta,\lambda}(x) & = \left( \lambda\theta\left( \sum_{j=1}^t x_{n-1,j}e_j \right),\theta\left( \sum_{j=1}^t x_{0j}e_j \right),\dots,\theta\left( \sum_{j=1}^t x_{n-2,j}e_j \right) \right) \\ & =\left( \lambda \sum_{j=1}^t \psi_{j}\left(x_{n-1,j}\right)e_{\bar{\theta}(j)}, \sum_{j=1}^t \psi_{j}\left(x_{0j}\right)e_{\bar{\theta}(j)}, \dots,\sum_{j=1}^t \psi_{j}\left(x_{n-2,j}\right)e_{\bar{\theta}(j)} \right) \\ & = \left( \sum_{j=1}^t \lambda_{\bar{\theta}(j)} \psi_{j}\left(x_{n-1,j}\right)e_{\bar{\theta}(j)}, \sum_{j=1}^t \psi_{j}\left(x_{0j}\right)e_{\bar{\theta}(j)}, \dots,\sum_{j=1}^t \psi_{j}\left(x_{n-2,j}\right)e_{\bar{\theta}(j)} \right), \\ \sigma_{\theta,\lambda}(x_je_j) & = \left( \lambda\theta\left(x_{n-1,j}e_j\right),\theta\left(x_{0j}e_j\right),\dots,\theta\left(x_{n-2,j}e_j\right) \right) \\ & = \left( \lambda_{\bar{\theta}(j)} \psi_{j}\left(x_{n-1,j}\right)e_{\bar{\theta}(j)}, \psi_{j}\left(x_{0j}\right)e_{\bar{\theta}(j)}, \dots, \psi_{j}\left(x_{n-2,j}\right) e_{\bar{\theta}(j)} \right) \\ & = \rho_{\psi_{j},\lambda_{\bar{\theta}(j)}}(x_j)e_{\bar{\theta}(j)}. \end{align*} Necessity. If $C$ is a \tlcycliccode, then for any $x_j=(x_{0j},x_{1j},\dots,x_{n-1,j}) \in C_j,$ naturally $x_je_j \in C,$ thus $\sigma_{\theta,\lambda}(x_je_j) \in C,\ 1\leq j \leq t.$ So for any $x_j \in C_j,$ $\rho_{\psi_{j},\lambda_{\bar{\theta}(j)}}(x_j)\in C_{\bar{\theta}(j)},\ 1\leq j \leq t,$ and thus $\rho_{\psi_{j},\lambda_{\bar{\theta}(j)}}(C_j)\subset C_{\bar{\theta}(j)},\ 1\leq j \leq t.$ Since $\rho_{\psi_j,\lambda_{\bar{\theta}(j)}}$ is injective, we have $\left|C_j\right| \leq \left|C_{\bar{\theta}(j)}\right| \leq \left|C_{\bar{\theta}^2(j)}\right|\leq \dots \leq \left|C_j\right|,$ so $\rho_{\psi_{j},\lambda_{\bar{\theta}(j)}}(C_j)= C_{\bar{\theta}(j)},\ 1\leq j \leq t.$ Sufficiency. It can be verified directly that for any $x=x_1e_1 + \dots + x_te_t \in C$ we have $\sigma_{\theta,\lambda}(x) \in C.$ \end{proof} As an application of the above theorem, we can obtain a general characterization of skew cyclic codes over $\fring.$ \begin{theorem} Let $\theta\in \Aut(\fring)$ and let $C=C_1e_1+\dots+C_te_t$ be a linear code of length $n$ over $\fring;$ then $C$ is a $\theta$-cyclic code over $\fring$ if and only if $\rho_{\psi_{j}}(C_j)= C_{\bar{\theta}(j)},\ 1\leq j \leq t.$ \qed \end{theorem} \begin{remark} This is more general than Theorem \ref{thm:specialcase} and Theorem \ref{thm:generalcase}, but since $\theta$ is more general, the resulting necessary and sufficient conditions are not as explicit as the previous ones. \end{remark} \section{Images of homomorphisms} One reason for studying linear codes over finite commutative rings is that some nonlinear codes with good parameters over the binary field can be regarded as Gray images of linear codes over $\mathbb{Z}_4$ \cite{Z4linear}, which improves the understanding of the original nonlinear codes.
Similar to the previous results, we will establish a relationship between linear codes over $\fring$ and linear codes over $F_q$ by defining homomorphisms. The main results are: the homomorphic image of a linear code over $\fring$ is a matrix product code over $F_q,$ and some optimal linear codes over $F_q$ can be viewed as images of the homomorphisms we define. \subsection{Isomorphism} We previously defined an isomorphism $\varphi$ from $\fring^n$ to $F_q^{tn}$ and found that a linear code $C$ over $\fring$ is completely determined by the linear codes $C_1,C_2,\dots,C_t$ over $F_q.$ Concerning $\varphi(C),$ we have the following conclusion. \begin{proposition} If $C=C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring,$ where the parameters of $C_i$ are $[n,k_i,d_i],$ then $\varphi(C) = C_1\times \dots \times C_t =\{(x_1,\dots,x_t) \in F_q^{tn}: x_i \in C_i, 1\leq i\leq t\}$ is a linear code over $F_q$ with parameters $[tn,k_1+\dots + k_t,\min\{d_1,\dots,d_t\}].$ \qed \end{proposition} \begin{example} Let $C_1=C_2=C_3=C_4$ be the cyclic code of length $3$ generated by $x-1$ over $F_2;$ then their parameters are $[3,2,2].$ Now $C=C_1e_1 + C_2e_2$ is a cyclic code of length $3$ over $F_2\times F_2,$ and $\varphi(C)$ is a linear code over $F_2$ with parameters $[6,4,2].$ Looking up the table \cite{Grassl:codetables}, we see that it is an optimal linear code. Similarly, $\varphi(C_1e_1 + C_2e_2+C_3e_3)$ is a linear code over $F_2$ with parameters $[9,6,2],$ which is also an optimal linear code. However, $\varphi(C_1e_1 + C_2e_2+C_3e_3+C_4e_4)$ is a linear code with parameters $[12,8,2]$ over $F_2$, which is not an optimal linear code. \end{example} Because the length and dimension of $\varphi(C)$ are the sums of those of $C_1, C_2, \dots, C_t$ while the minimum distance does not increase, in general $\varphi(C)$ will not have good parameters. But we can obtain linear codes with good parameters over $F_q$ by constructing other isomorphisms. Let $M=(m_{ij})_{t\times t}$ be an invertible matrix over $F_q,$ and consider the mapping $\eta_M$ from $\fring$ to $F_q^t:$ \begin{equation*} \sum_{k=1}^t a_ke_k \mapsto (a_1,a_2,\dots,a_t)M=\left( \sum_{k=1}^t a_km_{k1}, \sum_{k=1}^t a_km_{k2},\dots,\sum_{k=1}^t a_km_{kt} \right).
\end{equation*} Obviously, $\eta_M$ is an isomorphism of vector spaces over $F_q,$ and induces a mapping from $\fring^n$ to $F_q^{tn},$ still denoted by $\eta_M$, it maps \begin{equation*} x = \left(x_0,x_1,\dots,x_{n-1}\right)=\left(\sum_{k=1}^t x_{0k}e_k,\sum_{k=1}^t x_ {1k}e_k,\dots,\sum_{k=1}^t x_{n-1,k}e_k\right) \end{equation*} to \begin{equation*} \begin{gathered} \biggl( \sum_{k=1}^t x_{0k}m_{k1},\sum_{k=1}^t x_{1k}m_{k1},\dots,\sum_{k=1}^ t x_{n-1,k}m_{k1},\phantom{\biggr)}\\ \phantom{\biggl(} \sum_{k=1}^t x_{0k}m_{k2},\sum_{k=1}^t x_{1k}m_{k2},\dots,\sum_{k =1}^t x_{n-1,k}m_{k2} ,\phantom{\biggr)} \\ \dots, \\ \phantom{\biggl(}\sum_{k=1}^t x_{0k}m_{kt},\sum_{k=1}^t x_{1k}m_{kt},\dots,\sum_{k =1}^t x_{n-1,k}m_{kt} \biggr), \end{gathered} \end{equation*} which is \begin{equation*} \eta_M(x) =\varphi(x) \left(M\otimes E_n \right), \end{equation*} where $E_n$ represents the identity matrix of order $n$ over $F_q,$ and $M\otimes E_n$ represents the tensor product of $M$ and $E_n.$ It is not difficult to find that $\eta_M$ is an isomorphism from the vector space $\fring^n$ over $F_q$ to $F_q^{tn},$ and $\varphi$ defined previously is actually $\eta_{E_t}.$ Let $x_k = \left(x_{0k},x_{1k},\dots,x_{n-1,k}\right), 1\leq k \leq t,$ then $\varphi( x) = (x_1,x_2,\dots,x_t),$ \begin{equation*} \begin{aligned} \eta_M (x) & = \eta_M (x_1e_1 + x_2e_2 + \dots + x_te_t) \\ & = \left( \sum_{k=1}^t m_{k1}x_k, \sum_{k=1}^t m_{k2}x_k,\dots,\sum_{k=1}^t m_{kt }x_k \right). \end{aligned} \end{equation*} \begin{lemma} \label{lem:dualofimage} Let $C$ be a linear code of length $n$ over $\fring,$ then $\varphi\left(C^\perp\right)=\varphi(C)^\perp,$ and $C= C^\perp$ if and only if $\varphi(C) = \varphi(C)^\perp.$ \end{lemma} \begin{proof} For any $x=x_1e_1 + \dots + x_te_t\in C^\perp,$ $y=y_1e_1 + \dots + y_te_t \in C,$ where $x_j\in C_j^\perp, y_j \in C_j, 1 \leq j\leq t.$ Then $\varphi(x) = (x_1,\dots,x_t),\varphi(y)=(y_1,\dots,y_t),$ $\varphi(x)\cdot \varphi(y) = x_1\cdot y_1 + \dots + x_t\cdot y_t = 0,$ so $\varphi(C^\perp) \subset \varphi(C)^\perp .$ Since $\varphi$ is bijective, \begin{equation*} \left|\varphi(C^\perp)\right| = \left|C^\perp\right| = \left|C_1^\perp\right|\dots \left|C_t^\perp\right|=\frac{\left|F_q^{n}\right|}{|C_1|} \dots \frac{\left|F_q^{n}\right|}{|C_t|} =\frac{q^{tn} }{|C|}= \frac{\left|F_q^{tn}\right|}{|\varphi(C)|} = \left|\varphi(C)^\perp\right|. \end{equation*} Thus, $\varphi(C^\perp) = \varphi(C)^\perp.$ The remained statement follows from this equation. 
\end{proof} \begin{theorem} Let $C$ be a linear code of length $n$ over $\fring.$ If $MM\trans = kE_t,$ where $k\in F_q^\ast,$ then $\eta_M \left(C^\perp\right)=\eta_M(C)^\perp,$ and $C=C^\perp$ if and only if $\eta_M (C) = \eta_M (C)^\perp.$ \end{theorem} \begin{proof} For any $x=x_1e_1 + \dots + x_te_t\in C^\perp,$ $y=y_1e_1 + \dots + y_te_t \in C,$ where $x_j\in C_j^\perp, y_j \in C_j, 1 \leq j\leq t.$ Then $\varphi(x) = (x_1,\dots,x_t),\varphi(y)=(y_1,\dots,y_t),$ by Lemma \ref{lem:dualofimage} we know $\varphi(x)\cdot \varphi(y) = x_1\cdot y_1 + \dots + x_t\cdot y_t = 0,$ note that $\eta_M(x) = \varphi(x) (M\otimes E_n) , \eta_M(y) = \varphi(y) (M\otimes E_n),$ so \begin{equation*} \begin{aligned} \eta_M(x) \cdot \eta_M(y) & = \varphi(x)(M\otimes E_n) (M\otimes E_n)\trans \varphi(y)\trans \\ & = k\varphi(x)\varphi(y)\trans = k\varphi(x)\cdot \varphi(y) = 0, \end{aligned} \end{equation*} thus $\eta_M(C^\perp) \subset \eta_M (C)^\perp.$ Since $\eta_M$ is bijective, \begin{equation*} \left|\eta_M(C^\perp)\right| =\left|\varphi(C^\perp)\right| = \frac{\left|F_q^{tn}\right|}{|\varphi( C)|} = \frac{\left|F_q^{tn}\right|}{|\eta_M(C)|} = \left|\eta_M(C)^\perp\right|. \end{equation*} Thus, $\eta_M(C^\perp) = \eta_M(C)^\perp.$ From this equation the remained statement follows. \end{proof} Matrix product codes were first defined by Blackmore et al. \cite{blackmore2001matrix}. \begin{definition} Let $C_1,C_2,\dots,C_u$ be linear codes of length $n$ over $F_q,$ $A=(a_{ij})$ be a $u\times v$ matrix over $F_q,$ \begin{align*} C & =[C_1,\dots ,C_u]\cdot A \\ & = \left\{ \left(\sum_{k=1}^u a_{k1}c_k,\sum_{k=1}^u a_{k2}c_k,\dots,\sum_{k=1}^ u a_{kv}c_k\right) \in F_q^{nv}: c_j\in C_j, 1\leq j \leq u\right\} \end{align*} is a matrix product code over $F_q.$ \end{definition} \begin{proposition} If $C=C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring,$ $M$ is a $t\times t$ invertible matrix over $F_q,$ then $\eta_M( C)$ is a matrix product code over $F_q$ of length $tn,$ and its dimension is $\dim C_1 + \dots + \dim C_t.$ \end{proposition} \begin{proof} Just note that $\eta_M(C) = [C_1,\dots ,C_t] \cdot M$ and $\eta_M$ is an isomorphism of vector spaces. \end{proof} \begin{remark} Since $\eta_M(C)$ is a matrix product code over $F_q,$ we establish a connection between linear codes over $\fring$ and matrix product codes over $F_q$ through $\eta_M.$ The matrix product codes that can be obtained by choosing different $M$ are very different. \end{remark} \begin{example} \label{ex:PlotkinSum} Let $C_1,C_2$ be a linear code of length $n$ over $F_q,$ we can construct a linear code of length $2n$ over $F_q.$ Specifically, $C=\{(u,u+ v)\in F_q^{2n}: u\in C_1,v\in C_2\},$ this method of constructing new code is called $(u|u+v)$ construction. If we set $t=2,M=\left(\begin{smallmatrix} 1&1\\0&1 \end{smallmatrix}\right),$ then the image of the linear code $C_1e_1 +C_2e_2$ over $F_q \times F_q$ under the isomorphism $\eta_M$ is the linear code constructed from $C_1,C_2$ by $(u|u+v)$ construction. \end{example} Using Magma online calculator and $(u|u+v)$ construction in Example \ref{ex:PlotkinSum}, we can construct optimal linear codes over finite fields. 
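As an informal illustration (independent of the Magma computations reported below), the following Python sketch builds a generator matrix of a matrix product code $[C_1,\dots,C_u]\cdot A$ from generator matrices of the $C_i$ over a prime field and determines the parameters by exhaustive search; with $A=\left(\begin{smallmatrix} 1&1\\0&1 \end{smallmatrix}\right)$ this is exactly the $(u|u+v)$ construction of Example \ref{ex:PlotkinSum}. The helper names are ours, and the exhaustive search is only feasible for very small codes.
\begin{verbatim}
from itertools import product
from math import log

def matrix_product_gen(gens, A, p):
    """Generator matrix of the matrix product code [C_1,...,C_u].A over F_p,
    where gens[i] is a generator matrix (list of rows) of C_i."""
    rows = []
    for i, G in enumerate(gens):
        for g in G:
            # a row g of C_i contributes the codeword (a_{i1}g, a_{i2}g, ..., a_{iv}g)
            rows.append([(A[i][j] * x) % p for j in range(len(A[0])) for x in g])
    return rows

def brute_force_params(G, p):
    """[n, k, d] of the code spanned by the rows of G; exhaustive, tiny codes only."""
    n = len(G[0])
    words = {tuple(sum(c * g[i] for c, g in zip(coeffs, G)) % p for i in range(n))
             for coeffs in product(range(p), repeat=len(G))}
    k = round(log(len(words), p))
    d = min(sum(1 for x in w if x) for w in words if any(w))
    return [n, k, d]

# (u|u+v) construction, i.e. A = [[1,1],[0,1]], applied to two copies of the
# [3,2,2] binary even-weight code (the length-3 cyclic code generated by x+1):
G1 = [[1, 1, 0], [0, 1, 1]]
A = [[1, 1], [0, 1]]
print(brute_force_params(matrix_product_gen([G1, G1], A, 2), 2))   # -> [6, 4, 2]
\end{verbatim}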
\begin{example} In $F_3[x]$, $x^{10}-1=(x+1)(x+2)(x^4 + x^3 + x^2 + x + 1)(x^4 + 2x^3 + x^2 + 2x + 1) ,\ x^{10}+1 =(x^2+1)(x^4 + x^3 + 2x + 1)(x^4 + 2x^3 + x + 1).$ Let $C_1$ be the cyclic code generated by $g_1(x) = x+1,$ and $C_2$ be the negative cyclic code generated by $g_2(x) = x^4 + x^3 + 2x + 1 ,$ then the parameters of $C_1, C_2$ are $[10,9,2],[10,6,4],$ and the parameters of $\eta_M(C_1e_1 + C_2e_2)$ are $[20 ,15,4].$ Look up the table \cite{Grassl:codetables} we know that this is an optimal linear code, and one of its generator matrices is \setcounter{MaxMatrixCols}{30} \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 2 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & 2 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 1 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 1 & 1 \end{pmatrix}. \end{equation*} In fact, the optimal linear code with parameters $[20,15,4]$ over $F_3$ in the table \cite{Grassl:codetables} is also constructed by $(u|u+v)$ construction. 
\end{example} \begin{example} In $F_3[x]$, $x^{11}-1=(x+2)(x^5 + 2x^3 + x^2 + 2x + 2)(x^5 + x^4 + 2x ^3 + x^2 + 2) ,\ x^{11}+1 =(x+1)(x^5 + 2x^3 + 2x^2 + 2x + 1)(x^5 + 2x^4 + 2x^3 + 2x^2 + 1).$ Let $C_1$ be the cyclic code generated by $g_1(x) = x-1$, $C_2$ be the negative cyclic code generated by $g_2(x) = x^5 + 2x^3 + 2x^2 + 2x + 1 ,$ then the parameters of $C_1, C_2$ are $[11,10,2],[11,6,5],$ and the parameters of $\eta_M(C_1e_1 + C_2e_2)$ are $[22,16,4].$ Look up the table \cite{Grassl:codetables}, we know that this is an optimal linear code, and one of its generator matrices is \begin{equation*}\label{eq:genmatof22164} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 2 & 2 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & 1 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 1 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 & 0 & 2 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 2 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 2 & 0 & 1 \end{pmatrix} \end{equation*} \end{example} Zhu et al. studied constacyclic codes over $F_p + vF_p $ ($v^2=v, $ where $p$ is an odd prime number), and the main result\cite[Theorem 3.4,][]{zhu2011class} they obtained is that the $(1-2v)$-cyclic codes of length $n$ over $F_p+vF_p$ under their Gray map are cyclic codes of length $2n$ over $F_p.$ Their results can be generalized. Let \begin{equation} \label{eq:specialmat} M = \begin{pmatrix} m_1 & & & \\ &m_2 && \\ & & \ddots & \\ & & & m_t \end{pmatrix} \begin{pmatrix} 1 & \lambda_1 & \dots & \lambda_1^{t-1} \\ 1 & \lambda_2 & \dots & \lambda_2^{t-1} \\ \vdots & \vdots & & \vdots \\ 1 & \lambda_t & \dots & \lambda_t^{t-1} \end{pmatrix}, \end{equation} where $m_j \in F_q^\ast, $ and $t\mid (q-1),$ while $\lambda_j$ are $t$ pairwise different roots of $x^t =1$ in $F_q,$ $1\leq j \leq t.$ \begin{theorem} \label{thm:skewtocyclic} Let the matrix $M$ as the Eq. 
\eqref{eq:specialmat}, $\lambda = \lambda_1e_1 + \dots + \lambda_te_t.$ If $C=C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring,$ then $C$ is a $\lambda^{-1}$-cyclic code over $\fring$ if and only if $\eta_M(C)$ is a cyclic code of length $tn$ over $F_q.$ \end{theorem} \begin{proof} Denote $\rho$ the map from $F_q^{tn}$ to $F_q^{tn},$ which maps $(x_0,x_1,\dots,x_{tn-1})$ to $(x_{tn-1},x_0,\dots,x_{tn-2}).$ Use $\rho_{\lambda^{-1}_j}$ to represent the map from $F_q^{n}$ to $ F_q^{n},$ which maps $(y_0,y_1,\dots,y_{n-1})$ to $(\lambda^{-1}_j y_{n-1},y_0,\dots,y_{n-2}).$ According to Theorem \ref{thm:mainofcase1} we know that $C$ is a $\lambda^{-1}$-cyclic code over $\fring$ if and only if $C_j$ is a $\lambda^{-1}_j$-cyclic code over $F_q,$ $1\leq j\leq t.$ Necessity. For any $x=\sum_{j=1}^t x_je_j \in C,$ we have $ \eta_M(x) = \eta_M(x_1e_1) + \dots + \eta_M(x_te_t). $ Therefore, to prove that $\eta_M (C)$ is a cyclic code, it is only necessary to prove that for all $x_j =(x_{0j},x_{1j},\dots,x_{n-1,j}) \in C_j,$ $\eta_M(x_je_j) \in F_q^{tn}$ is still in $\eta_M(C)$ after a cyclic shift $\rho$. Direct calculation shows \begin{align*} \eta_M(x_je_j) & = m_j\left(x_j,\lambda_jx_j,\dots,\lambda_j^{t-1}x_j \right), \\ \rho(\eta_M(x_je_j)) & = m_j\left( \rho_{\lambda_j^{-1}}(x_j),\lambda_j \rho_{\lambda_j^{-1}} (x_j),\dots, \lambda_j^{t-1}\rho_{\lambda_j^{-1}} (x_j) \right) \\ & = \eta_M\left(\rho_{\lambda_j^{-1}} (x_j)e_j\right) \in \eta_M(C). \end{align*} Sufficiency. For any $x_j \in C_j,$ we need to prove that $\rho_{\lambda_j^{-1}}(x_j) \in C_j.$ Since \begin{equation*} \rho(\eta_M(x_je_j)) = \eta_M\left(\rho_{\lambda_j^{-1}} (x_j)e_j\right) \in \eta_M(C), \end{equation*} then $\rho_{\lambda_j^{-1}} (x_j)e_j \in C,$ and $\rho_{\lambda_j^{-1}} (x_j) \in C_j.$ \end{proof} The results of Zhu et al. can be stated in our language here: \begin{example}\cite[Theorem 3.4,][]{zhu2011class} Let $C=C_1e_1 + C_2e_2$ be a linear code over $\fring= F_p\times F_p,$ where $p$ is an odd prime, $M = \left(\begin{smallmatrix} 1&1\\ -1&1 \end{smallmatrix}\right),$ then $C$ is a $(e_1 - e_2)$-cyclic code over $\fring$ if and only if $\eta_M(C)$ is a cyclic code of length $2n$ over $F_p.$ \end{example} \begin{remark} In Theorem \ref{thm:skewtocyclic}, we require $\lambda_j$ to be $t$ distinct roots of $x^t =1$ in $F_q$ to make $\eta_M$ be bijective, but in the proof of necessity we do not use the fact that $\eta_M$ is bijective. That is, we can define other mappings that map skew constacyclic codes over $\fring$ to cyclic codes over $F_q.$ For example, Bag et al. defined a surjective homomorphism that maps skew constacyclic codes over $F_p+uF_p + vF_p + wF_p$ to cyclic codes over $F_p,$ see \citep[Theorem 4.2][]{bag2019skew}. \end{remark} \subsection{Surjective homomorphism} Consider the surjective homomorphism $\Psi: \fring \rightarrow F_q,\ \sum_{j=1}^t a_je_j \mapsto \sum_{j=1}^t a_j.$ A mapping from $\fring^{n}$ to $F_q^{n}$ is induced by $\Psi$, and we still denote it by $\Psi,$ i.e., $\Psi: $ \begin{align*} \fring^{n} & \rightarrow F_q^{n}, \\ \left(\sum_{j=1}^t x_{0j}e_j,\dots,\sum_{j=1}^tx_{n-1,j}e_j\right) & \mapsto \left(\sum_{ j=1}^t x_{0j},\dots,\sum_{j=1}^tx_{n-1,j}\right). 
\end{align*} \begin{proposition} If $C=C_1e_1 + \dots + C_te_t$ is a linear code of length $n$ over $\fring,$ then $\Psi(C) = C_1+ \dots + C_t$ is a linear code of length $n$ over $F_q.$ Moreover, if the parameters of $C_i$ are $[n,k_i,d_i],$ then the parameters of $\Psi(C)$ are $[n,\dim\left( C_1+ \dots + C_t \right),d],$ where $d\leq \min\{d_1,\dots,d_t\}.$ \qed \end{proposition} Let $A=(1,1,\dots,1)\trans,$ then $\Psi(C) = [C_1,C_2,\dots,C_t]\cdot A,$ that is to say, $\Psi(C )$ is also a matrix product code over $F_q.$ Using the mapping $\Psi,$ we may get linear codes with good parameters. After the following proposition, we will give the specific examples. \begin{proposition} Suppose that the characteristic of $F_q$ is an odd prime. If $C_1$ is a cyclic code with parameters $[n,k_1,d_1]$ over $F_q,$ $C_2$ is a negative cyclic code with parameters $[n,k_2,d_2]$ over $F_q,$ and $k_1 + k_2 < n,$ then $C_1\cap C_2 =\{0\},$ thus, the parameters of $\Psi\left(C_1e_1 +C_2e_2\right) = C_1+C_2$ are $[n,k_1+k_2,d],$ where $d\leq \min\{d_1,d_2\}.$ \end{proposition} \begin{proof} Let $a\in C_1\cap C_2,$ and $a(x)$ be the polynomial corresponding to $a$ with degree less than $n,$ then the generator polynomial $g_1(x)$ of $C_1$ divides $ a( x),$ and $C_2$'s generator polynomial $g_2(x) \mid a(x).$ Because $g_1(x) \mid x^n -1,$ $g_2(x) \mid x^n + 1,$ then $g_1(x),g_2(x)$ are coprime, so $g_1(x)g_2(x) \mid a(x),$ consider the degree of polynomials, we get $a(x) =0.$ \end{proof} The $C_1e_1+C_2e_2$ in the above proposition is a $(e_1 -e_2)$-cyclic code over $\fring=F_q \times F_q,$ and $C_1+C_2$ is the image of $\Psi.$ Using Magma online calculator, we get some optimal linear codes over $F_3.$ \begin{example} In $F_3[x],$ $x^8-1=(x+1)(x+2)(x^2 + 1)(x^2 + x + 2)(x^2 + 2x + 2 ),\ x^8+1 =(x^4 + x^2 + 2)(x^4 + 2x^2 + 2).$ Let $C_1$ is the cyclic code generated by $g_1(x) = (x+1)(x +2)(x^2 + 1)(x^2 + 2x + 2),$ $C_2$ is the negative cyclic code generated by $g_2(x) = x^4 + 2x^2 + 2,$ then the parameters of $C_1$ and $C_2$ are $[8,2,6],[8,4,3]$ respectively, and the parameters of $C_1 + C_2$ are $[8,6,2].$ Look up the table \cite{Grassl:codetables}, we know that this is an optimal linear code, and one of its generator matrices is \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 2 & 2 \\ 0 & 1 & 0 & 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 & 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 0 & 1 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \end{pmatrix}. \end{equation*} \end{example} \begin{example} In $F_3[x]$, $x^{10}-1=(x+1)(x+2)(x^4 + x^3 + x^2 + x + 1)(x^4 + 2x^3 + x^2 + 2x + 1),\ x^{10}+1 =(x^2+1)(x^4 + x^3 + 2x + 1)(x^4 + 2x^3 + x + 1).$ Let $C_1$ is the cyclic code generated by $g_1(x) = (x+1)(x^4 + x^3 + x^2 + x + 1)(x^4 + 2x^3 + x^ 2 + 2x + 1),$ $C_2$ is the negative cyclic code generated by $g_2(x) = x^2+1,$ then the parameters of $C_1$ and $C_2$ are $[10,1, 10],[10,8,2]$ respectively. And the parameters of $C_1 + C_2$ are $[10,9,2],$ Look up the table \cite{Grassl:codetables}, we know that this is an optimal linear code, one of its generator matrices is \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ \end{pmatrix}. 
\end{equation*} \end{example} \begin{example} In $F_3[x]$, $x^{12}-1=(x+1)^3(x+2)^3(x^2 + 1)^3,\ x^{12}+1 = (x^2 +x+ 2)^3(x^2 + 2x + 2)^3.$ Let $C_1$ be the cyclic code generated by $g_1(x) = (x+1)(x+2)(x^2 + 1)^3,$ and let $C_2$ be the negative cyclic code generated by $g_2(x) = (x^2 +x+ 2)(x^2 + 2x + 2)^3;$ then the parameters of $C_1,C_2$ are $[12,4,4],[12,4,6],$ and the parameters of $C_1 + C_2$ are $[12,8,3].$ Looking up the table \cite{Grassl:codetables}, we see that this is an optimal linear code, and one of its generator matrices is \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 1 & 2 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 2 & 0 \\ \end{pmatrix}. \end{equation*} \end{example}
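To close this section, here is an informal Python sketch (not a substitute for the Magma computations above) of the image $\Psi(C)=C_1+C_2$ for the first example of this subsection, with parameters $[8,6,2]$ over $F_3$: the generator matrices of the cyclic code $C_1$ and the negative cyclic code $C_2$ are stacked, and the parameters are confirmed by exhaustive search. The helper names are ours.
\begin{verbatim}
from itertools import product
from math import log

p, n = 3, 8

def polymul(a, b):
    """Product of polynomials over F_p, coefficients lowest-degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def gen_matrix(g, n):
    """Generator matrix of the (consta)cyclic code of length n with generator
    polynomial g: rows are the coefficient vectors of x^i * g(x), 0 <= i < n - deg g."""
    r = len(g) - 1
    return [[0] * i + g + [0] * (n - r - i - 1) for i in range(n - r)]

def params(G):
    """[n, k, d] by exhaustive search over the row span of G (tiny codes only)."""
    words = {tuple(sum(c * row[i] for c, row in zip(coeffs, G)) % p
                   for i in range(len(G[0])))
             for coeffs in product(range(p), repeat=len(G))}
    k = round(log(len(words), p))
    d = min(sum(1 for x in w if x) for w in words if any(w))
    return [len(G[0]), k, d]

g1 = [1, 1]                                # x + 1
for f in ([2, 1], [1, 0, 1], [2, 2, 1]):   # (x+2)(x^2+1)(x^2+2x+2)
    g1 = polymul(g1, f)
g2 = [2, 0, 2, 0, 1]                       # x^4 + 2x^2 + 2
G = gen_matrix(g1, n) + gen_matrix(g2, n)  # stacked rows generate C_1 + C_2
print(params(gen_matrix(g1, n)), params(gen_matrix(g2, n)), params(G))
# expected (per the example above): [8, 2, 6] [8, 4, 3] [8, 6, 2]
\end{verbatim}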
2205.08802v3
http://arxiv.org/abs/2205.08802v3
Recognising elliptic manifolds
\documentclass{amsart} \usepackage{amsthm} \usepackage{amsmath} \usepackage{amssymb} \usepackage{enumerate} \usepackage{microtype} \usepackage{graphicx} \usepackage[hidelinks,pagebackref,pdftex]{hyperref} \usepackage{booktabs} \usepackage{color} \usepackage[dvipsnames]{xcolor} \usepackage{import} \usepackage{tikz-cd} \newcommand{\sqdiamond}{\tikz [x=1.2ex,y=1.85ex,line width=.1ex,line join=round, yshift=-0.285ex] \draw (0,.5) -- (0.75,1) -- (1.5,.5) -- (0.75,0) -- (0,.5) -- cycle;}\renewcommand{\Diamond}{$\sqdiamond$} \usepackage[scaled=0.9]{sourcecodepro} \AtBeginDocument{ \def\MR#1{} } \renewcommand{\backreftwosep}{\backrefsep} \renewcommand{\backreflastsep}{\backrefsep} \renewcommand*{\backref}[1]{} \renewcommand*{\backrefalt}[4]{ \ifcase #1 [No citations.] \or [#2] \else [#2] } \makeatletter \let\c@equation\c@subsection \makeatother \numberwithin{equation}{section} \makeatletter \let\c@figure\c@equation \makeatother \numberwithin{figure}{section} \theoremstyle{plain} \newtheorem{theorem}[equation]{Theorem} \newtheorem{corollary}[equation]{Corollary} \newtheorem{lemma}[equation]{Lemma} \newtheorem{conjecture}[equation]{Conjecture} \newtheorem{proposition}[equation]{Proposition} \newtheorem{principle}[equation]{Principle} \newtheorem{axiom}[equation]{Axiom} \theoremstyle{definition} \newtheorem{definition/}[equation]{Definition} \newtheorem{problem}[equation]{Problem} \newtheorem{solution}[equation]{Solution} \newtheorem{exercise}[equation]{Exercise} \newtheorem{example/}[equation]{Example} \newtheorem{fact/}[equation]{Fact} \newtheorem{question/}[equation]{Question} \newtheorem*{question*}{Question} \newtheorem*{answer*}{Answer} \newtheorem*{application*}{Application} \newtheorem{algorithm}[equation]{Algorithm} \newtheorem{objective}{Objective}[section] \newenvironment{definition} {\renewcommand{\qedsymbol}{$\Diamond$} \pushQED{\qed}\begin{definition/}} {\popQED\end{definition/}} \newenvironment{example} {\renewcommand{\qedsymbol}{$\Diamond$} \pushQED{\qed}\begin{example/}} {\popQED\end{example/}} \newenvironment{question} {\renewcommand{\qedsymbol}{$\Diamond$} \pushQED{\qed}\begin{question/}} {\popQED\end{question/}} \newenvironment{fact} {\renewcommand{\qedsymbol}{$\Diamond$} \pushQED{\qed}\begin{fact/}} {\popQED\end{fact/}} \theoremstyle{remark} \newtheorem{remark/}[equation]{Remark} \newtheorem*{remark*}{Remark} \newtheorem{remarks}[equation]{Remarks} \newtheorem{case}{Case} \newtheorem*{case*}{Case} \renewcommand{\thecase}{\Roman{case}} \newtheorem{claim}[equation]{Claim} \newtheorem*{claim*}{Claim} \newtheorem*{proofclaim}{Claim} \newtheorem*{proofsubclaim}{Subclaim} \newenvironment{remark} {\renewcommand{\qedsymbol}{$\Diamond$} \pushQED{\qed}\begin{remark/}} {\popQED\end{remark/}} \newcommand{\refsec}[1]{Section~\ref{Sec:#1}} \newcommand{\refapp}[1]{Appendix~\ref{App:#1}} \newcommand{\refthm}[1]{Theorem~\ref{Thm:#1}} \newcommand{\refcor}[1]{Corollary~\ref{Cor:#1}} \newcommand{\reflem}[1]{Lemma~\ref{Lem:#1}} \newcommand{\refprop}[1]{Proposition~\ref{Prop:#1}} \newcommand{\refclm}[1]{Claim~\ref{Clm:#1}} \newcommand{\refrem}[1]{Remark~\ref{Rem:#1}} \newcommand{\refexa}[1]{Example~\ref{Exa:#1}} \newcommand{\reffac}[1]{Fact~\ref{Fac:#1}} \newcommand{\refexe}[1]{Exercise~\ref{Exe:#1}} \newcommand{\reffig}[1]{Figure~\ref{Fig:#1}} \newcommand{\reftab}[1]{Table~\ref{Tab:#1}} \newcommand{\refdef}[1]{Definition~\ref{Def:#1}} \newcommand{\refprob}[1]{Problem~\ref{Prob:#1}} \newcommand{\refalg}[1]{Algorithm~\ref{Alg:#1}} \newcommand{\refax}[1]{Axiom~\ref{Ax:#1}} 
\newcommand{\refrul}[1]{Rule~\ref{Rul:#1}} \newcommand{\refeqn}[1]{Equation~\ref{Eqn:#1}} \newcommand{\refconj}[1]{Conjecture~\ref{Conj:#1}} \newcommand{\refobj}[1]{Objective~\ref{Obj:#1}} \newcommand{\refitm}[1]{(\ref{Itm:#1})} \newcommand{\refcon}[1]{(\ref{Con:#1})} \newcommand{\refcase}[1]{Case~\ref{Case:#1}} \newcommand{\fakeenv}{} \newenvironment{restate}[2] { \renewcommand{\fakeenv}{#2} \theoremstyle{plain} \newtheorem*{\fakeenv}{#1~\ref{#2}} \begin{\fakeenv} } { \end{\fakeenv} } \newcommand{\from}{\colon} \newcommand{\cross}{\times} \newcommand{\thsup}{\textrm{th}} \newcommand{\HH}{{\mathbb{H}}} \newcommand{\RR}{{\mathbb{R}}} \newcommand{\ZZ}{{\mathbb{Z}}} \newcommand{\NN}{{\mathbb{N}}} \newcommand{\CC}{{\mathbb{C}}} \newcommand{\QQ}{{\mathbb{Q}}} \newcommand{\calB}{{\mathcal{B}}} \newcommand{\calC}{{\mathcal{C}}} \newcommand{\calF}{{\mathcal{F}}} \newcommand{\calH}{{\mathcal{H}}} \newcommand{\calI}{{\mathcal{I}}} \newcommand{\calM}{{\mathcal{M}}} \newcommand{\calO}{{\mathcal{O}}} \newcommand{\calP}{{\mathcal{P}}} \newcommand{\calS}{{\mathcal{S}}} \newcommand{\calT}{{\mathcal{T}}} \newcommand{\calU}{{\mathcal{U}}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\RP}{\mathbb{RP}} \newcommand{\orb}{{\operatorname{orb}}} \newcommand{\bdy}{\partial} \newcommand{\isom}{\cong} \newcommand{\homeo}{\mathrel{\cong}} \newcommand{\group}[2]{{\langle #1 \st #2 \rangle}} \newcommand{\cover}[1]{\widetilde{#1}} \newcommand{\quotient}[2]{{\raisebox{0.2em}{$#1$} \!\!\left/\raisebox{-0.2em}{\!$#2$}\right.}} \newcommand{\NP}{\textsc{NP}} \newcommand{\FNP}{\textsc{FNP}} \newcommand{\vol}{\operatorname{vol}} \newcommand{\PSL}{\operatorname{PSL}} \newcommand{\MCG}{\operatorname{MCG}} \newcommand{\area}{\operatorname{area}} \newcommand{\len}{\operatorname{len}} \newcommand{\interior}{{\mathrm{int}\:}} \newcommand{\closure}{\operatorname{closure}} \newcommand{\id}{{\mathop{id}}} \newcommand{\abs}[1]{\left\vert #1 \right\vert} \newcommand{\half}{\frac{1}{2}} \newcommand{\cut}{\backslash\backslash} \newcommand{\subgp}[1]{{\langle #1 \rangle}} \newcommand{\twist}{\mathrel{ \stackrel{\sim}{\smash{\times}\rule{0pt}{0.6ex}} }} \newcommand{\st}{\mathbin{\mid}} \newcommand{\connect}{\#} \DeclareMathOperator{\arccosh}{arccosh} \DeclareMathOperator{\arcsinh}{arcsinh} \DeclareMathOperator{\arctanh}{arctanh} \begin{document} \title{Recognising elliptic manifolds} \author[Lackenby]{Marc Lackenby} \address{\hskip-\parindent Mathematical Institute\\ University of Oxford\\ Oxford OX2 6GG, United Kingdom} \email{[email protected]} \author[Schleimer]{Saul Schleimer} \address{\hskip-\parindent Mathematics Institute\\ University of Warwick\\ Coventry CV4 7AL, United Kingdom} \email{[email protected]} \thanks{This work is in the public domain.} \date{\today} \begin{abstract} We show that the problem of deciding whether a closed three-manifold admits an elliptic structure lies in \NP. Furthermore, determining the homeomorphism type of an elliptic manifold lies in the complexity class \FNP. These are both consequences of the following result. Suppose that $M$ is a lens space which is neither $\RP^3$ nor a prism manifold. Suppose that $\calT$ is a triangulation of $M$. Then there is a loop, in the one-skeleton of the $86^\thsup$ iterated barycentric subdivision of $\calT$, whose simplicial neighbourhood is a Heegaard solid torus for $M$. 
\end{abstract} \maketitle \section{Introduction} \label{Sec:Intro} Compact orientable three-manifolds have been classified in the following sense: there are algorithms that, given two such manifolds, determine if they are homeomorphic~\cite{Kuperberg19, ScottShort, AschenbrennerFriedlWilton}. Kuperberg~\cite[Theorem~1.2]{Kuperberg19} has further shown that this problem is no worse than elementary recursive. Beyond this, very little is known about the computational complexity of the homeomorphism problem. All known solutions rely on the geometrisation theorem, due to Perelman~\cite{Perelman1, Perelman2, Perelman3}. This motivates the following closely related problem: given a compact orientable three-manifold, determine if it admits one of the eight Thurston geometries. This problem has an exponential-time solution using normal surface theory and, again, geometrisation. See~\cite[Section~8.2]{Kuperberg19} for a closely related discussion. Here we give a much better upper bound in an important special case. Recall that the \emph{elliptic} manifolds are those admitting spherical geometry. The decision problem \textsc{Elliptic manifold} takes as input a triangulation $\calT$, of a compact connected three-manifold $M$, and asks if $M$ is elliptic. \begin{theorem} \label{Thm:Elliptic} The problem \textsc{Elliptic manifold} lies in \NP. \end{theorem} If a three-manifold is elliptic, then it is reasonable to ask which elliptic manifold it is. This is a \emph{function problem} as the desired output is more complicated than simply ``yes'' or ``no''. The problem \textsc{Naming elliptic} takes as input a triangulation of a compact connected three-manifold, which is promised to admit an elliptic structure, and requires as output the manifold's Seifert data. Some elliptic three-manifolds admit more than one Seifert fibration; in this case, the output is permitted to be the data for any of these. \begin{theorem} \label{Thm:NamingElliptic} The problem \textsc{Naming elliptic} lies in \FNP. \end{theorem} One precursor to \refthm{Elliptic} is that \textsc{Three-sphere recognition} lies in \NP. The first proof of this~\cite[Theorem~15.1]{Schleimer11} uses Casson's technique of \emph{crushing} normal two-spheres as well as Rubinstein's \emph{sweep-outs}, derived from almost normal two-spheres. There is another proof, due to Ivanov~\cite[Theorem~2]{Ivanov08}, that again uses crushing but avoids the machinery of sweep-outs. Ivanov also shows that the problem of recognising the solid torus lies in \NP. Our results rely on this prior work in a crucial but non-obvious fashion. By geometrisation, a three-manifold $M$ is elliptic if and only if $M$ is finitely covered by the three-sphere. Thus, one might hope to prove Theorems~\ref{Thm:Elliptic} and~\ref{Thm:NamingElliptic} by exhibiting such a finite cover together with a certificate that the cover is $S^3$. However, consider the following examples. Let $F_n$ denote the $n^\thsup$ Fibonacci number. There is a triangulation of the lens space $L(F_n, F_{n-1})$ with $n$ tetrahedra. The degree of the universal covering is $F_n$; since this grows exponentially in $n$ it cannot be used directly in an \NP~certificate. See~\cite[Section~2]{Kuperberg18} for many more examples of this phenomenon. Instead we use the following: any elliptic three-manifold has a cover, of degree at most sixty, which is a lens space. 
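To make the growth in the preceding example concrete, here is a short \texttt{Python} sketch (purely illustrative; it plays no role in our arguments) computing the covering degree $F_n$ for a few values of $n$, to be compared with the fixed bound of sixty for the covers that we use instead.
\begin{verbatim}
# Illustrative only: the universal cover of L(F_n, F_{n-1}) has degree
# F_n, which grows exponentially in the number n of tetrahedra, whereas
# the covers used in this paper have degree at most sixty.
def fibonacci(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in (10, 20, 40, 80):
    print(n, fibonacci(n))
# For n = 80 the degree is already about 2.3 * 10^16; this exponential
# growth rules out writing the cover down directly in an NP certificate.
\end{verbatim}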
Thus Theorems~\ref{Thm:Elliptic} and~\ref{Thm:NamingElliptic} reduce, respectively, to the problem of deciding whether a three-manifold is a lens space and, if so, naming it. Our approach to certifying lens spaces is conceptually simple. Suppose that $U \homeo V \homeo S^1 \cross D^2$ are solid tori. A lens space $M$ can be obtained by gluing $U$ and $V$ along their boundaries. This gives a \emph{Heegaard splitting} for $M$. This decomposition of $M$ is unique up to ambient isotopy~\cite[Th\'eor\`eme~1]{BonahonOtal83}. We call $S^1 \cross \{0\} \subset S^1 \cross D^2$ a \emph{core curve} for the solid torus. We say that a simple closed curve $\gamma \subset M$ is a \emph{core curve} for $M$ if it is isotopic to a core curve for $U$ or for $V$. We certify that $M$ is a lens space by exhibiting such a core curve. This approach is inspired by the results of~\cite{Lackenby:CoreCurves}. There Lackenby shows that, for any handle structure of a solid torus satisfying some natural conditions, there is a core curve that lies nicely with respect to the handles. Specifically, the curve lies within the union of the zero- and one-handles and its intersection with each such handle is one of finitely many types. This list of types is universal, in the sense that it does not depend on the handle structure. The handle structure must satisfy some hypotheses, but these hold for any handle structure that is dual to a triangulation. Using~\cite[Theorem~4.2]{Lackenby:CoreCurves} we give an explicit bound on the ``combinatorial length'' of the core curve. For a triangulation $\calT$ and positive integer $n$, we let $\calT^{(n)}$ denote the triangulation obtained from $\calT$ by performing barycentric subdivision $n$ times. \begin{theorem} \label{Thm:DerivedSolidTorus} Suppose that $\calT$ is a triangulation of the solid torus $M$. Then $M$ contains a core curve that is a subcomplex of $\calT^{(51)}$. \end{theorem} Using this we prove the following technical result. \begin{theorem} \label{Thm:LensSpaceCurve} Suppose that $M$ is a lens space other than $\RP^3$. Suppose that $\calT$ is a triangulation of $M$. Then there is a simple closed curve $C$ that is a subcomplex of $\calT^{(86)}$, such that the exterior of $C$ is either a solid torus or a twisted $I$--bundle over a Klein bottle. \end{theorem} This is proved by placing a Heegaard torus $S$ into almost normal form in the triangulation $\calT$, using Stocking's work~\cite[Theorem~1]{Stocking}. We then cut along this torus to get two solid tori, which inherit handle structures $\calH_1$ and $\calH_2$. We could apply \refthm{DerivedSolidTorus} to each of these, and we would then obtain core curves of the solid tori. However, their intersection with each tetrahedron of $\calT$ would not necessarily be of the required form. In particular, the intersection between each of these curves and any tetrahedron of $\calT$ would not be in one of finitely many configurations; thus there would be no way of showing that it lay in $\calT^{(86)}$. The reason for this is that each tetrahedron of $\calT$ may contain many handles of $\calH_1$ and $\calH_2$. However, in this situation, all but a bounded number of these handles would lie between normally parallel triangles and squares of $S$. Hence, they lie in the \emph{parallelity bundle} for $\calH_1$ or $\calH_2$. (The parallelity bundle was first introduced by Kneser~\cite{Kneser28}; we follow the treatment given in~\cite{Lackenby:Composite}.) 
The strategy behind \refthm{LensSpaceCurve} is to ensure that one of the core curves does not intersect these handles by using the results from~\cite{Lackenby:CoreCurves}. It then intersects each tetrahedron of $\calT$ in one of finitely many types, and with some work, we show that it is in fact simplicial in $\calT^{(86)}$. Using Theorems~\ref{Thm:DerivedSolidTorus} and~\ref{Thm:LensSpaceCurve} we prove our main technical result. \begin{theorem} \label{Thm:LensSpaceCore} Suppose that $M$ is a lens space which is neither a prism manifold nor a copy of $\RP^3$. Suppose that $\calT$ is a triangulation of $M$. Then the iterated barycentric subdivision $\calT^{(86)}$ contains a core curve of $M$ in its one-skeleton. Furthermore, $\calT^{(139)}$ contains in its one-skeleton the union of the two core curves. \end{theorem} \subsection{Other work} We announced Theorems~\ref{Thm:Elliptic} and~\ref{Thm:NamingElliptic} in 2012, at Oberwolfach~\cite{LackenbySchleimer12}. Motivated by this, Kuperberg~\cite[Theorem~1.1]{Kuperberg18} showed that the \emph{function promise problem} \textsc{Naming lens space} has a polynomial-time solution. That is, there is a polynomial-time algorithm that, given a triangulated lens space $M$, determines its lens space coefficients. His work, together with \refthm{Elliptic} and parts of \refsec{CertificateElliptic}, can be used to give another proof of \refthm{NamingElliptic}. The work of Haraway and Hoffman~\cite{HarawayHoffman19} is also relevant here. In particular, in \refsec{IBundles}, we rely upon their result~\cite[Theorem~3.6]{HarawayHoffman19} that the decision problems \textsc{Recognising $T^2 \times I$} and \textsc{Recognising $K^2 \twist I$} lie in \NP. \subsection{Outline of paper} For us, all manifolds are given via a triangulation; we work in the PL category throughout. In \refsec{LensPrism}, we remind the reader of some elementary facts about lens spaces and \emph{prism manifolds}. The lens spaces which are also prism manifolds are exceptional cases in our analysis (as can be seen in the statement of \refthm{LensSpaceCore}, for example). In \refsec{NormalAlmostNormal}, we recall the definition of an (almost) normal surface in a triangulated three-manifold. In \refsec{HandleStructures}, we discuss handle structures for three-manifolds. In \refsec{SolidTori}, we give the background needed to state~\cite[Theorem~4.2]{Lackenby:CoreCurves}. This result places the core curve of a solid torus into a controlled position with respect to a handle structure for the solid torus. In \refsec{Barycentric}, we translate this back to triangulations. In particular, we prove \refthm{DerivedSolidTorus}. In \refsec{NicelyEmbedded}, we say what it means for one three-manifold to be \emph{nicely embedded} into another. Our eventual goal is to show that one of the Heegaard solid tori of the lens space $M$ is nicely embedded within the triangulation $\calT$ of $M$. We show that, in this situation, the core curve of this solid torus can be arranged to be simplicial in an iterated barycentric subdivision of $\calT$. In \refsec{Parallelity}, we recall the notion of parallelity bundles in a handle structure. In addition, we also discuss \emph{generalised} parallelity bundles, also from \cite{Lackenby:Composite}, where it was shown that such bundles may be chosen to have incompressible horizontal boundary~\cite[Proposition~5.6]{Lackenby:Composite}. 
In our situation, this implies that the generalised parallelity bundle has horizontal boundary consisting of a collection of annuli and discs lying in the Heegaard torus. In \refsec{ProofMain}, we bring these results together and prove Theorems~\ref{Thm:LensSpaceCore} and~\ref{Thm:LensSpaceCurve}. This section may be viewed as the heart of the paper. In \refsec{Elliptic}, we recall the classification of elliptic three-manifolds into lens spaces, prism manifolds, and \emph{platonic} manifolds. We show that any platonic manifold has a cover, of degree at most sixty, which is a lens space. We also show how its Seifert data can be extracted from the cover and from the homology of the manifold. In \refsec{CertificateElliptic}, we certify elliptic manifolds, thereby completing the proof of Theorems~\ref{Thm:Elliptic} and~\ref{Thm:NamingElliptic}. \subsection{The role of geometrisation in this paper} \label{Sec:Geometrisation} Very little of this paper relies on the proof of Thurston's geometrisation conjecture by Perelman \cite{Perelman1, Perelman2, Perelman3}. The proofs of Theorems~\ref{Thm:DerivedSolidTorus},~\ref{Thm:LensSpaceCurve} and~\ref{Thm:LensSpaceCore} make no appeal to geometrisation. As a result, our proof that lens space recognition is in \NP{} also does not rely on geometrisation. As explained above, we certify other elliptic manifolds by exhibiting a cover that is a lens space. The fact that any manifold covered by a lens space is elliptic seems to rely on the resolution of the spherical space form conjecture, which was a consequence of geometrisation. However, the certification of prism manifolds can avoid the use of geometrisation by appealing to earlier results on the spherical space form conjecture by Livesay~\cite{Livesay60} and Myers~\cite{Myers81}. On the other hand, the certification of platonic manifolds \emph{does} seem to rely on geometrisation; see the proof of \refprop{PlatonicCriterion}. \subsection{Acknowledgements} We thank the referee for their detailed and insightful comments. We also thank the organisers of the conference on Geometric Topology of Knots at the University of Pisa, where this work was initiated. ML was partially supported by the Engineering and Physical Sciences Research Council (grant number EP/Y004256/1). \section{Lens spaces and prism manifolds} \label{Sec:LensPrism} In this section, we gather a few facts about lens spaces and prism manifolds. Fix an orientation of the two-torus $T$. Suppose that $\lambda$ and $\mu$ are simple closed oriented curves in $T$. We write $\lambda \cdot \mu$ for the algebraic intersection number of $\lambda$ and $\mu$. If $\lambda \cdot \mu = 1$ then we call the ordered pair $\subgp{\lambda, \mu}$ a \emph{framing} of $T$. In this case we may isotope $\lambda$ and $\mu$ so that $x = \lambda \cap \mu$ is a single point. This done, $\lambda$ and $\mu$ generate $\pi_1(T, x) \isom \ZZ^2$. Thus, for any simple closed oriented essential curve $\alpha$ in $T$ we may write $\alpha = p\lambda + q\mu$, with $p$ and $q$ coprime. Note that $\lambda \cdot \alpha = q$ and $\alpha \cdot \mu = p$. We say that $\alpha$ has \emph{slope} $q/p$. Note that if $\beta$ has slope $s/r$ then $\alpha \cdot \beta = \pm (ps - qr)$. We say that $\alpha$ and $\beta$ are \emph{Farey neighbours} if $\alpha \cdot \beta = \pm 1$. Suppose $U = D^2 \cross S^1$ is a solid torus. Fix $x \in \bdy D^2$ and $y \in S^1$. The boundary $T = \bdy U$ has a framing coming from taking $\lambda = \{x\} \cross S^1$ and $\mu = \bdy D^2 \cross \{y\}$. 
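As a small illustration of these conventions (the particular curves below are chosen only for concreteness), consider the following example.
\begin{example}
Fix a framing $\subgp{\lambda, \mu}$ of $T$. The curves $\alpha = 2\lambda + \mu$ and $\beta = 3\lambda + \mu$ have slopes $1/2$ and $1/3$ respectively, and $\alpha \cdot \beta = \pm(2 \cdot 1 - 1 \cdot 3) = \pm 1$, so $\alpha$ and $\beta$ are Farey neighbours. On the other hand, the curve $\gamma = 5\lambda + 3\mu$ has slope $3/5$ and $\beta \cdot \gamma = \pm(3 \cdot 3 - 1 \cdot 5) = \pm 4$, so $\beta$ and $\gamma$ are not Farey neighbours.
\end{example}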
\begin{definition} \label{Def:Lens} A three-manifold $M$, obtained by gluing a pair of solid tori $U$ and $V$ via a homeomorphism of their boundaries, is called a \emph{lens space} if it has finite fundamental group. \end{definition} Under this definition $S^2 \cross S^1$ is not a lens space; we use this convention because we are here interested in elliptic manifolds. We use $L(p, q)$ to denote the lens space obtained by attaching the meridian slope in $\bdy V$ to the slope $q/p$ in $\bdy U$. We call $p$ and $q$ \emph{coefficients} for the lens space. We now record several facts. \begin{fact} \label{Fac:Fund} $\pi_1(L(p, q)) \isom \ZZ/p\ZZ$. \end{fact} Note for any $q$ we have $L(1, q) \homeo S^3$. \begin{fact} \label{Fac:Homeo} \cite{Reidemeister35} $L(p', q')$ is homeomorphic to $L(p, q)$ if and only if $|p'| = |p|$ and $q' = \pm q^{\pm 1} \mod p$. \end{fact} \begin{fact} \label{Fac:Cover} The double cover of $L(2p, q)$ is $L(p, q)$. \end{fact} Notice $L(p, q)$ double covers both $L(2p, q)$ and $L(2p, p-q)$. For example, $L(8, 1)$ and $L(8, 3)$ are both covered by $L(4, 1)$. However, $L(8, 1)$ and $L(8, 3)$ are not homeomorphic according to \reffac{Homeo}. Thus one cannot recover the coefficients of a lens space just by knowing a double cover. \begin{fact} \label{Fac:Glue} Suppose that $\alpha$ and $\beta$ are Farey neighbours in $T$, with slopes $q/p$ and $s/r$. Suppose that $\gamma$ has slope $q'/p'$ in $T$. The three-manifold $M = U \cup (T \cross I) \cup V$, formed by attaching the meridian of $U$ along $\alpha \cross 0$ and attaching the meridian of $V$ along $\gamma \cross 1$, is homeomorphic to the lens space $L(-p'q + q'p, p's - q'r)$. \end{fact} We use $\twist$ to denote a twisted product. We write $K = K^2 = S^1 \twist S^1$ for the Klein bottle. We write $K \twist I$ for the orientation $I$--bundle over the Klein bottle. Recall that $K$ contains exactly four essential simple closed curves, up to isotopy. These are the cores $\alpha$ and $\alpha'$ of the two M\"obius bands, their common boundary $\delta$, and the fibre $\beta$ of the bundle structure $K = S^1 \twist S^1$. Thus $\pi_1(K) \isom \pi_1(K \twist I)$ has a presentation \[ \group{a,b}{aba^{-1} = b^{-1}} \] where $a = [\alpha]$ and $b = [\beta]$. This presentation is not canonical, as we could have chosen $\alpha'$ instead of $\alpha$. Let $\rho \from T \to K$ be the orientation double cover. Thus we have \[ \rho_*(\pi_1(T)) \isom \subgp{a^2, b} < \pi_1(K). \] Since $a^2 = [\delta]$ this generating set for $\pi_1(T)$ gives a canonical framing of $T$ (up to the choice of orientations). The identification of $T$ and $\bdy (K \twist I)$ now gives us a canonical framing of the latter. \begin{definition} \label{Def:Prism} A three-manifold $M$, obtained by gluing an $I$--bundle $K \twist I$ to a solid torus $W$ via a homeomorphism of their boundaries, is called a \emph{prism manifold} if it has finite fundamental group. \end{definition} We use the notation $P(p, q)$ to denote the three-manifold obtained by gluing the meridian slope of a solid torus $W$ to the slope $a^{2p}b^q$ in $\bdy (K \twist I)$. We call $p$ and $q$ \emph{coefficients} of $P(p, q)$. \begin{fact} \label{Fac:FundPrism} $\pi_1(P(p, q)) \isom \group{a, b}{aba^{-1} = b^{-1}, a^{2p} = b^{-q}}$. \end{fact} For example, we have $P(1, 1) \homeo L(4, 3)$; see \reflem{PrismIsLens}. On the other hand we have $P(1, 0) \homeo \RP^3 \connect \RP^3$ and $P(0, 1) \homeo S^2 \cross S^1$. 
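As a quick consistency check of \reffac{FundPrism} against these two homeomorphisms (the computation is only an illustration), note the following. Setting $(p, q) = (1, 0)$, the second relation becomes $a^2 = 1$, so
\[
\pi_1(P(1, 0)) \isom \group{a, b}{aba^{-1} = b^{-1}, a^2 = 1} \isom (\ZZ/2\ZZ) * (\ZZ/2\ZZ) \isom \pi_1(\RP^3 \connect \RP^3).
\]
Setting $(p, q) = (0, 1)$, the second relation becomes $b = 1$, so $\pi_1(P(0, 1)) \isom \ZZ \isom \pi_1(S^2 \cross S^1)$.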
Since the fundamental groups are infinite, we do not admit $P(1, 0)$ or $P(0, 1)$ as prism manifolds. \begin{lemma} \label{Lem:EmbeddedKleinBottle} Let $M$ be an irreducible oriented closed non-Haken three-manifold. Then $M$ contains an embedded Klein bottle if and only if it is a prism manifold. In particular, a lens space contains an embedded Klein bottle if and only if it is a prism manifold. \end{lemma} \begin{proof} By construction, prism manifolds contain an embedded Klein bottle. We need to establish the converse. Suppose that $M$ contains an embedded Klein bottle $K$. By the orientability of $M$, the regular neighbourhood $N(K)$ is the orientable $I$--bundle over the Klein bottle. Let $T = \bdy N(K)$ be the boundary torus. Since $M$ is non-Haken, $T$ is compressible along a disc $D$. Note that $D$ has interior disjoint from $N(K)$, since $T$ is incompressible in $N(K)$. Compressing $T$ along $D$ gives a sphere, which bounds a ball $B$ by the irreducibility of $M$. Note that $B$ is disjoint from $K$. Thus $T$ bounds a solid torus with interior disjoint from $N(K)$. Therefore, $M$ is a prism manifold. \end{proof} We now give a proof of~\cite[Corollary~6.4]{BredonWood69}. For another account, with further references, see~\cite[Theorem~1.2]{GeigesThies23}. This proof is included for the convenience of the reader. \begin{lemma} \label{Lem:PrismIsLens} $P(p, q)$ is a lens space if and only if $q = 1$ and $p \neq 0$. In this case, $P(p, 1) \homeo L(4p, 2p + 1)$. \end{lemma} \begin{proof} Recall $\gcd(p, q) = 1$ for any slope $q/p$. So if $q = 0$ then $p = 1$; likewise, if $p = 0$, then $q = 1$. In these cases, as noted above, $P(1, 0)$ and $P(0, 1)$ are not lens spaces. Suppose that $q \geq 2$. Taking $P = P(p, q)$ we note that $\pi_1(P)$ has as a quotient group \[ D_{2q} = \group{a,b}{aba^{-1} = b^{-1}, a^2 = b^q = 1} \] the dihedral group of order $2q$. This is not cyclic, so $P$ is not a lens space. Thus we may assume that $q = 1$ and $p \neq 0$. Set $P = P(p, 1)$. We note that \[ \pi_1(P) = \group{a,b}{aba^{-1} = b^{-1}, a^{2p} = b^{-1}} \isom \group{a}{a^{4p} = 1}. \] This implies that $P(p,1)$ is the unique manifold, up to homeomorphism, that is both a lens space and a prism manifold with fundamental group of order $4p$. We now give a direct proof that $P = P(p,1)$ is a lens space. Recall that if we glue a pair of solid tori along an annulus, primitive in at least one of them, the result is a solid torus. Write $K \twist I = Q \cup R$ as a union of solid tori, each the orientation $I$--bundle over a M\"obius band. Note $Q \cap R$ is a vertical annulus in $K \twist I$. Set $A = Q \cap \bdy (K \twist I)$. Note that $A$ is not primitive in $Q$, as it crosses the meridian disc of $Q$ twice. Recall that $P = (Q \cup R) \cup W$, all solid tori. When $q = 1$ we find that the annulus $A$ is primitive in $W$. Thus $Q \cup W$ is a solid torus and so $P = (Q \cup W) \cup R$ is a lens space. This proves the first half of the lemma. To finish we must identify the lens space coefficients of $P = P(p, 1)$. Let $\gamma$ be the $1/2$ slope on the boundary of a solid torus $V$. Since $\gamma$ crosses the meridian exactly twice, it bounds a M\"obius band $M$ in $V$. If we double $V$ across its boundary, we obtain $S^2 \cross S^1$. Note $M$ doubles to give a Klein bottle. We now alter the double by opening it along an annulus neighbourhood of $\gamma$ inside of $\bdy V$ and regluing with a twist to obtain the lens space $L$. Note the Klein bottle persists, so $L$ is again a prism manifold. 
The image of the meridian of $V$ under the $p$--fold twist about $\gamma$ is the gluing slope, $(2p + 1)/4p$. Thus $L(4p, 2p + 1)$ is a lens space, is a prism manifold, and has fundamental group of order $4p$. Hence $L(4p, 2p + 1) \homeo P(p, 1)$ and we are done. \end{proof} \begin{lemma} \label{Lem:PrismCover} Suppose that $q/p$ is a slope with $s/r$ as a Farey neighbour. Then $P(p, q)$ is double covered by the lens space $L \homeo L(2pq, ps + qr)$. The subgroup of $\pi_1(L)$ that is fixed by the deck group has order $2p$. \end{lemma} \begin{proof} Recall that $T \cross I$ is a double cover of $K \twist I$. Let $U$ and $V$ be solid tori, whose disjoint union double covers the solid torus $W$. Thus the lens space $L = U \cup (T \cross I) \cup V$ double covers $P = W \cup (K \twist I)$. Let $\alpha$ and $\beta$ be simple closed curves in $T$, lifting $a^2$ and $b$ in $K$. Thus $\subgp{\alpha,\beta}$ gives a framing of $T$. The meridians of $U$ and $V$ have slopes $q/p$ and $-q/p$ with respect to this framing. \reffac{Glue} now implies $L \homeo L(2pq, ps+qr)$. The above decomposition of $L$ gives a presentation of $\pi_1(L)$. Abelianising, we obtain the following. \[ \pi_1(L) \isom \group{\alpha, \beta}{p\alpha + q\beta = 0, p\alpha - q\beta = 0} \] The elements correspond to the integer lattice points (up to translation) in the parallelogram with vertices at $(0, 0)$, $(p, -q)$, $(2p, 0)$, and $(p, q)$. The deck group fixes $\alpha$ while sending $\beta$ to $-\beta$. The fixed points under this action are the lattice points with second coordinate zero. So the fixed subgroup has order $2p$. \end{proof} We use \reflem{PrismCover} to decide if a given manifold is a prism manifold $P(p, q)$, assuming that a double cover of the manifold is a lens space. \section{Normal and almost normal surfaces} \label{Sec:NormalAlmostNormal} \begin{definition} \label{Def:NormalArc} Suppose that $f$ is a two-simplex. An arc, properly embedded in $f$, is \emph{normal} if it misses the vertices of $f$ and has endpoints on distinct edges. \end{definition} \begin{definition} \label{Def:TriangleSquare} Suppose that $\Delta$ is a tetrahedron. A disc $D$, properly embedded in $\Delta$ and transverse to the edges of $\Delta$, is \begin{itemize} \item a \emph{triangle} if $\bdy D$ consists of three normal arcs; \item a \emph{square} if $\bdy D$ consists of four normal arcs. \end{itemize} In either case $D$ is a \emph{normal disc}. \end{definition} \begin{definition} \label{Def:NormalSurface} Suppose that $(M, \calT)$ is a triangulated three-manifold. A surface $S$, properly embedded in $M$, is \emph{normal} if, for each tetrahedron $\Delta \in \calT$, the intersection $S \cap \Delta$ is a disjoint collection of triangles and squares. \end{definition} \begin{definition} Suppose that $\Delta$ is a tetrahedron. A surface $E$, properly embedded in $\Delta$ and transverse to the edges of $\Delta$, is \begin{itemize} \item an \emph{octagon} if $E$ is a disc with $\bdy E$ consisting of eight normal arcs; \item a \emph{tubed piece} if $E$ is an annulus obtained from two disjoint normal discs by attaching a tube that runs parallel to an arc of an edge of $\Delta$. \end{itemize} In either case $E$ is an \emph{almost normal piece}. 
\end{definition} \begin{figure} \includegraphics{Figures/NormalAlmostNormal_small.pdf} \caption{Left to right: triangle, square, octagon, tubed piece.} \label{Fig:NormalAlmostNormal} \end{figure} \begin{definition} \label{Def:AlmostNormalSurface} Suppose that $(M, \calT)$ is a triangulated three-manifold. A surface $S$, properly embedded in $M$, is \emph{almost normal} if, for each tetrahedron $\Delta \in \calT$, the intersection $S \cap \Delta$ is a disjoint collection of triangles and squares except for precisely one tetrahedron, where the collection additionally contains exactly one almost normal piece. \end{definition} \begin{definition} A Heegaard surface $S$ for a three-manifold $M$ is \emph{strongly irreducible} if it does not have disjoint compressing discs emanating from opposite sides of $S$. \end{definition} We note that the genus one Heegaard surface for a lens space is strongly irreducible. We now have a result due to Stocking~\cite[Theorem~1]{Stocking}, following work of Rubinstein. \begin{theorem} \label{Thm:AlmostNormalHeegaard} Suppose that $M$ is a closed, connected, oriented three-manifold equipped with a triangulation $\calT$. Suppose that $H$ is a strongly irreducible Heegaard surface for $M$. Then $H$ is (ambiently) isotopic to a surface which is almost normal with respect to $\calT$. \qed \end{theorem} Although almost normal surfaces are useful, they can sometimes be technically challenging to work with. We therefore apply the following result. \begin{proposition} \label{Prop:FromAlmostNormalToNormal} Suppose that $S$ is almost normal with respect to a triangulation $\calT$. Then $S$ is isotopic to a surface which is normal with respect to the first barycentric subdivision $\calT^{(1)}$. \end{proposition} To prove this, we first require some lemmas. \begin{lemma} \label{Lem:NormaliseSubsurfaceOfBoundary} Suppose that $\calT$ is a triangulation of a three-ball $B$. Suppose that for each tetrahedron $\Delta$ of $\calT$ the preimage of $\bdy B$ in $\Delta$ is either empty or is a single vertex, edge, or face. Suppose that $F$ is a subsurface of $\bdy B$ so that $\bdy F$ is normal with respect to $\bdy B$ and intersects each edge of $\calT$ at most once. Then, after pushing the interior of $F$ slightly into the interior of $B$, the resulting properly embedded surface $F'$ is normal. \end{lemma} \begin{proof} Consider any tetrahedron $\Delta$ of $\calT$. We divide into cases depending on the number of vertices of $\Delta$ contained in $F$. If there are none, then $F' \cap \Delta$ is empty. Suppose that there is exactly one vertex $v$ of $\Delta$ contained in $F$. Then $F' \cap \Delta$ is a normal triangle separating $v$ from the remaining three vertices of $\Delta$. This triangle meets $\bdy B$ in the set $(\bdy F) \cap \Delta$. Suppose instead that there are exactly two vertices of $\Delta$ contained in $F$. Then there is an edge $e$ of $\Delta$ contained in $F$. In this case $F' \cap \Delta$ is a normal square separating $e$ from the remaining two vertices of $\Delta$. Again, this square meets $\bdy B$ in the set $(\bdy F) \cap \Delta$. Suppose instead that there are exactly three vertices of $\Delta$ contained in $F$. Then there is a face $f$ of $\Delta$ contained in $F$. In this case $F' \cap \Delta$ is again a normal triangle, separating $f$ from the final vertex of $\Delta$. Also, this triangle is disjoint from $\bdy B$. Note that, by assumption, we cannot have all four vertices of $\Delta$ contained in $F$. 
\end{proof} \begin{lemma} \label{Lem:NormaliseMultipleSubsurfacesOfBoundary} Suppose that $\calT$ is a triangulation of a three-ball $B$. Suppose that for each tetrahedron $\Delta$ of $\calT$ the preimage of $\bdy B$ in $\Delta$ is either empty or is a single vertex, edge, or face. Let $F_1, \dots, F_n$ be a collection of subsurfaces of $\bdy B$. Suppose that each $\bdy F_i$ is normal and intersects each edge of $\calT$ at most once. Suppose also that for each $i \not= j$, $\bdy F_i$ and $\bdy F_j$ are disjoint and either $F_i \subset F_j$ or $F_j \subset F_i$. Then we may push the interiors of $F_1, \dots, F_n$ slightly into the interior of $B$ so that $F'$, the resulting union of surfaces, is properly embedded and normal. \end{lemma} \begin{proof} We apply the construction in the proof of \reflem{NormaliseSubsurfaceOfBoundary} to each $F_i$, to form a normal surface $F'_i$. We note that when $i \not=j$, we can arrange for $F'_i$ and $F'_j$ to be disjoint, as follows. There is some $F_i$ with the property that no other $F_j$ lies inside it. For this $F_i$, form the resulting surface $F'_i$, which we can view as lying extremely close to $\bdy B$. When we form the remaining surfaces $F'_j$, we can ensure that they are disjoint from this $F'_i$. In this way, the required surface is constructed recursively. \end{proof} \begin{proof} [Proof of \refprop{FromAlmostNormalToNormal}] We first specify the intersection between $S$ and the edges of $\calT^{(1)}$ lying in the one-skeleton of $\calT$. By assumption, $S$ has an almost normal piece $P$ in some tetrahedron $\Delta$ of $\calT$. The intersection between $P$ and each one-simplex of $\Delta$ is at most two points. If it is exactly two points, then we arrange for these to lie in distinct edges of $\calT^{(1)}$. We then arrange that the remaining points of intersection between $S$ and the one-skeleton of $\calT$ are disjoint from the vertices of $\calT^{(1)}$. The intersection between $S$ and each two-simplex $F$ of $\calT$ consists of a collection of arcs that are normal with respect to $\calT$. We may realise these arcs $F \cap S$ as a concatenation of normal arcs in the two-skeleton of $\calT^{(1)}$, in such a way that each arc of $F \cap S$ intersects each edge of $\calT^{(1)}$ at most once. Thus, we have specified the intersection between $S$ and the simplices of $\calT^{(1)}$ lying in the two-skeleton of $\calT$. The remainder of the three-manifold is a collection of tetrahedra of $\calT$. Consider any such tetrahedron $\Delta$. This inherits a triangulation from $\calT^{(1)}$. We have already specified $S \cap \bdy \Delta$. This is a collection of normal curves in $\bdy \Delta$. Suppose first that $\Delta$ does not contain the almost normal piece of $S$. Then $S \cap \Delta$ is a collection of triangles and squares in $\Delta$. Their boundary is a collection $C_1, \dots, C_n$ of normal curves in $\bdy \Delta$. We need to specify, for each $C_i$, a subsurface $F_i$ of $\bdy \Delta$ that it bounds. For each $C_i$ bounding a triangle in $\Delta$, we pick $F_i$ so that it contains a single vertex of $\Delta$. The remaining $C_i$ bound normal squares in $\Delta$. We pick the $F_i$ that these curves $C_i$ bound so that they are nested. Then applying \reflem{NormaliseMultipleSubsurfacesOfBoundary}, we realise the discs bounded by these curves in $\Delta$ as normal surfaces with respect to $\calT^{(1)}$. Suppose now that $\Delta$ does contain the almost normal piece $P$ of $S$. 
Then $S \cap \Delta$ is equal to $P$ plus possibly some triangles and squares. In the case where $P$ is an octagon, its boundary divides $\bdy \Delta$ into two discs, and we pick one of these discs to be the relevant $F_i$. In the case where $P$ is obtained by tubing together two normal discs in $\Delta$, the boundary curves of these discs cobound an annulus in $\bdy \Delta$, and we set $F_i$ to be this annulus. The remaining components of $S \cap \Delta$ are triangles and squares, and squares can only arise in the case where $P$ is tubed. For each triangle with boundary $C_j$, we pick $F_j \subset \bdy \Delta$ so that it contains a single vertex of $\Delta$. For each square with boundary $C_j$, we pick $F_j \subset \bdy \Delta$ so that it is disjoint from the annulus $F_i$ considered above. Applying \reflem{NormaliseMultipleSubsurfacesOfBoundary} again, we arrange for $S \cap \Delta$ to be normal with respect to $\calT^{(1)}$. \end{proof} \section{Handle structures} \label{Sec:HandleStructures} Suppose that $M$ is a compact, connected, oriented three-manifold. Suppose that $\calH$ is a handle decomposition of $M$. For example, we may obtain $\calH$ by taking the dual of a triangulation. \begin{definition} \label{Def:Standard} Suppose that $S$ is a surface, properly embedded in $M$. We say that $S$ is \emph{standard} with respect to $\calH$ if the following properties hold. \begin{itemize} \item The intersection of $S$ and any zero-handle $D^0 \cross D^3$ is a disjoint union of properly embedded discs. \item The intersection of $S$ and any one-handle $D^1 \cross D^2$ is of the form $D^1 \cross A$, where $A$ is a disjoint union of arcs properly embedded in $D^2$. \item The intersection of $S$ and any two-handle $D^2 \cross D^1$ is of the form $D^2 \cross P$, where $P$ is a finite collection of points in the interior of $D^1$. \item The intersection of $S$ and any three-handle $D^3 \cross D^0$ is empty. \qedhere \end{itemize} \end{definition} \begin{definition} Let $M$ be a closed three-manifold with a handle structure $\calH$. A disc properly embedded in a zero-handle $H_0$ of $\calH$ is \emph{normal} if \begin{itemize} \item its boundary lies within the union of the one-handles and two-handles; \item its intersection with each one-handle is a collection of arcs; \item it runs over each component of intersection between $H_0$ and the two-handles in at most one arc, and this arc respects the product structure on the two-handles. \end{itemize} A disc properly embedded in a one-handle $H_1$ of $\calH$ is \emph{normal} if \begin{itemize} \item it respects the product structure on $H_1$; \item its boundary lies within the union of the zero-handles and two-handles; \item it runs over each component of intersection between $H_1$ and the two-handles in at most one arc. \qedhere \end{itemize} \end{definition} \begin{definition} Let $M$ be a closed three-manifold with a handle structure $\calH$. A surface properly embedded within $M$ is \emph{normal} if it is standard and its intersection with each zero-handle is normal. This implies that its intersection with each one-handle is also normal. \end{definition} When $S$ is a normal surface embedded in a closed triangulated three-manifold $M$, it naturally becomes a standard surface in the dual handle structure. When a three-manifold with a handle structure is decomposed along a standard surface $S$, the resulting three-manifold $M \cut S$ inherits a handle structure. 
Thus, we deduce that when a closed triangulated three-manifold is cut along a normal surface, then $M \cut S$ inherits a handle structure. Our handle structures satisfy the following condition. \begin{definition} A handle structure of a three-manifold is \emph{locally small} if the following conditions hold: \begin{itemize} \item the intersection between any zero-handle and the union of the one-handles consists of at most $4$ discs and \item the intersection between any one-handle and the union of the two-handles consists of at most $3$ discs. \qedhere \end{itemize} \end{definition} The handle structure that is dual to a triangulation is locally small. Moreover, when a three-manifold $M$ with a locally small handle structure is cut along a normal surface $S$, the resulting handle structure is again locally small. \section{Core curves of solid tori} \label{Sec:SolidTori} Over the next two sections, we prove \refthm{DerivedSolidTorus}. This is achieved using affine handle structures, first introduced in \cite{Lackenby:CoreCurves}. Here is the definition. \begin{definition} \label{Def:AffineHandle} An \emph{affine handle structure} on a three-manifold $M$ is a handle structure where each zero-handle and one-handle is identified with a compact polyhedron in $\RR^3$, so that \begin{itemize} \item each face of each polyhedron is convex (but the polyhedron identified with a zero-handle need not be convex); \item whenever a zero-handle and one-handle intersect, each component of intersection is identified with a convex polygon in $\RR^2$, in such a way that the inclusion of this intersection into each handle is an affine map onto a face of the relevant polyhedron; \item for each zero-handle $H_0$, each component of intersection with a two-handle, three-handle or $\bdy M$ is a union of faces of the polyhedron associated with $H_0$; \item the polyhedral structure on each one-handle is the product of a convex two-dimensional polygon and an interval. \qedhere \end{itemize} \end{definition} \begin{definition} \label{Def:CanonicalAffine} Let $\calT$ be a triangulation of a compact three-manifold $M$ and let $\calH$ be the dual handle structure. Then the \emph{canonical affine structure} on $\calH$ realises each zero-handle as a truncated octahedron. Specifically, one realises each tetrahedron of $\calT$ as regular and euclidean with side length $1$, and then one slices off the edges and the vertices to form a truncated octahedron. Each one-handle of $\calH$ corresponds to a face of $\calT$ that does not lie wholly in $\bdy M$, and hence corresponds to a pair of hexagonal faces of the truncated octahedra that are identified. Thus, the one-handle is identified with the product of a hexagon and an interval. (See \reffig{TruncatedOctahedron}.) \end{definition} \begin{figure} \includegraphics[width=2.5in]{Figures/cuboctahedron.pdf} \caption{Each zero-handle is realised as a truncated octahedron. This is obtained from a tetrahedron by slicing off its vertices and edges.} \label{Fig:TruncatedOctahedron} \end{figure} The following theorem~\cite[Theorem~4.2]{Lackenby:CoreCurves} is the key result that goes into the proof of \refthm{DerivedSolidTorus}. \begin{theorem} \label{Thm:CoreSolidTorusAffine} Let $\calH$ be a locally small, affine handle structure of the solid torus $M$. 
Then $M$ has a core curve that intersects only the zero-handles and one-handles, that respects the product structure on the one-handles, that intersects each one-handle in at most $24$ straight arcs, and that intersects each zero-handle in at most $48$ arcs. Moreover, this collection of arcs in each zero-handle is parallel (as a collection) to a collection of arcs $A$ in the boundary of the corresponding polyhedron. Finally, each component of $A$ intersects each face of the polyhedron in at most $6$ straight arcs. \qed \end{theorem} \section{From affine handle structures to barycentric subdivisions} \label{Sec:Barycentric} In this section, the goal is to prove \refthm{DerivedSolidTorus}, which states that a core curve in a triangulated solid torus can be realised as a subcomplex in the triangulation's $51^{\mathrm{st}}$ barycentric subdivision. Many of the technicalities of this section could have been avoided if we had aimed just for a subcomplex of the $k^{\mathrm{th}}$ barycentric subdivision, for some universal but unspecified $k$ independent of the triangulation. \begin{definition} A triangulation of a subset $X$ of euclidean space is \emph{straight} if the inclusion of each simplex into $X$ is an affine map. \end{definition} \begin{definition} \label{Def:ArcType} Two arcs properly embedded in a polygon and disjoint from its vertices are of the same \emph{type} if there is an ambient isotopy taking one to the other, and which keeps the arcs disjoint from the vertices. \end{definition} Note that a polygon with $k$ sides can support at most $2k-3$ disjoint straight arcs that are of distinct types. \begin{lemma} \label{Lem:ArcBecomeSimplicial} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. Let $\alpha$ be a properly embedded straight arc in $D$. Then there is a realisation of the barycentric subdivision $\calT^{(1)}$ as a straight triangulation of $D$ that contains $\alpha$ as a subcomplex. \end{lemma} \begin{proof} Since $\alpha$ is straight and $\calT$ is straight, the intersection between $\alpha$ and the interior of each one-simplex of $\calT$ is either all the interior of the one-simplex or at most one point. If $\alpha$ does intersect the interior of a one-simplex in a point, place a vertex of $\calT^{(1)}$ at the point of intersection. Similarly, if $\alpha$ intersects the interior of a two-simplex, it does so in a single arc, and we place a vertex of $\calT^{(1)}$ in the interior of this arc. Hence, $\alpha$ becomes simplicial in $\calT^{(1)}$. (See \reffig{SimplicialCurveDisc}.) \end{proof} \begin{figure} \includegraphics[width=3.5in]{Figures/simplicial-curve-disc2.pdf} \caption{Making an arc simplicial} \label{Fig:SimplicialCurveDisc} \end{figure} Induction then gives the following. \begin{lemma} \label{Lem:ArcsBecomeSimplicial} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. Let $A$ be a union of $k$ disjoint properly embedded straight arcs in $D$. Then there is a realisation of $\calT^{(k)}$ as a straight triangulation of $D$ that contains $A$ as a subcomplex. \qed \end{lemma} However, we can improve this in certain circumstances, as follows. \begin{lemma} \label{Lem:TwoArcTypes} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. Let $A$ be a union of at most $2^k$ disjoint properly embedded straight arcs in $D$, each with endpoints that are disjoint from the vertices of $D$, and that form at most two arc types. 
Then there is a realisation of $\calT^{(k+1)}$ as a straight triangulation of $D$ that contains $A$ as a subcomplex. \end{lemma} \begin{proof} We prove this by induction on $k$. The case $k = 0$ is the statement of \reflem{ArcBecomeSimplicial}. Let us prove the inductive step. Since the arcs fall into at most two types, there is one component $\alpha$ of $A$ such that at most $|A|/2 \leq 2^{k-1}$ arcs lie on each side of it. By \reflem{ArcBecomeSimplicial}, there is a realisation of $\calT^{(1)}$ as a straight triangulation of $D$ that contains $\alpha$ as a subcomplex. Cutting $D$ along $\alpha$ gives two euclidean polygons, each of which contains at most two arc types. Inductively, the intersection between $A$ and these polygons may be made simplicial in $\calT^{(k+1)}$. \end{proof} \begin{lemma} \label{Lem:SeveralArcTypes} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. Let $A$ be a union of at most $2^k$ disjoint properly embedded straight arcs in $D$, each with endpoints that are disjoint from the vertices of $D$, and that form at most $n \geq 2$ arc types. Then there is a realisation of $\calT^{(n+k-1)}$ as a straight triangulation of $D$ that contains $A$ as a subcomplex. \end{lemma} \begin{proof} We prove this by induction on $n$. The induction starts with $n=2$, which is the content of \reflem{TwoArcTypes}. To prove the inductive step, suppose that $n > 2$. Pick an arc type of $A$ and let $\alpha$ be a component of $A$ of this type that is closest to other types of arcs. Make $\alpha$ simplicial in $\calT^{(1)}$ using \reflem{ArcBecomeSimplicial}. Then cut along it, to give two euclidean polygons. In each, the arcs come in at most $n-1$ types. So, by induction, a further $n+k-2$ barycentric subdivisions suffice to make $A$ simplicial. \end{proof} \begin{lemma} \label{Lem:PointsBecomeVertices} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. Let $P$ be a set of at most $2^k$ points in $D$. Then there is a realisation of $\calT^{(k+1)}$ as a straight triangulation of $D$ that contains $P$ in its vertex set. \end{lemma} \begin{proof} Pick a point $x$ in $D$ that is disjoint from $P$ and that also does not lie on any line containing at least two points from $P$. We can pick a straight arc $\alpha$ through $x$, so that at most half the points of $P$ lie on each side of $\alpha$. This can be done as follows. Pick any straight arc $\alpha$ through $x$ that misses $P$. If this has $|P|/2$ points of $P$ on each side, then we have our desired arc. If not, then pick a transverse orientation on $\alpha$ that points to the side with more than $|P|/2$ points. Start to rotate $\alpha$ around $x$. By our general position hypothesis on $x$, at any given moment in time, the number of points on each side of $\alpha$ can jump by at most $1$. By the time that $\alpha$ has rotated through angle $\pi$, the number of points on the side into which it points is less than $|P|/2$. So, at some stage, the arc contains a point of $P$ and has fewer than $|P|/2$ points of $P$ on either side of it. At this stage, we have our required arc $\alpha$. By \reflem{ArcBecomeSimplicial}, we can make $\alpha$ simplicial in $\calT^{(1)}$, and if $\alpha \cap P$ is non-empty, we can also make it a vertex. Cut along $\alpha$ and apply induction. \end{proof} \begin{lemma} \label{Lem:ImproperArcsSimplicial} Let $D$ be a euclidean polygon with a straight triangulation $\calT$. 
Let $A$ be a union of $k$ disjoint straight arcs, with each endpoint being a vertex of $\calT$ or a point on $\bdy D$, and with interior in the interior of $D$. Then there is a realisation of $\calT^{(k)}$ as a straight triangulation of $D$ that contains $A$ as a subcomplex. \end{lemma} \begin{proof} We prove this by induction on $k$. Pick an arc $\alpha$ of $A$. Since $\alpha$ is straight, the intersection between $\alpha$ and the interior of each one-simplex of $\calT$ is either all the interior of the one-simplex or at most one point. If $\alpha$ does intersect the interior of a one-simplex in a point, place a vertex of $\calT^{(1)}$ at the point of intersection. Similarly, if $\alpha$ intersects the interior of a two-simplex, it does so in a single arc, and we place a vertex of $\calT^{(1)}$ in the interior of this arc. Hence, $\alpha$ becomes simplicial in $\calT^{(1)}$. We then inductively deal with the remaining $k-1$ arcs. \end{proof} \begin{lemma} \label{Lem:SimplicialDiscsInPolyhedronAllParallel} Let $\calT$ be a straight triangulation of a euclidean polyhedron $P$. Let $C$ be a union of at most $2^k$ disjoint simple closed curves in $\bdy P$ that are simplicial in $\calT$ and that are topologically parallel in $\bdy P$. Then there is a realisation of $\calT^{(2k)}$ as a straight triangulation of $P$ such that $C$ bounds a union of disjoint properly embedded discs in $P$ that are simplicial in the triangulation. \end{lemma} \begin{proof} We prove this by induction on $k$. Since the components of $C$ are all parallel in $\bdy P$, there is some component $C'$ of $C$ such that each component of $\bdy P - C'$ contains at most half the components of $C$. We may realise a regular neighbourhood $N$ of $\bdy P$ as a simplicial subset of $\calT^{(2)}$. This is homeomorphic to $\bdy P \times [0,1]$, where $\bdy P \times \{ 0 \} = \bdy P$. The annulus $C' \times [0,1]$ may be realised as simplicial in $\calT^{(2)}$. The curve $C' \times \{ 1 \}$ bounds a disc in $\bdy N - \bdy P = \bdy P \times \{ 1 \}$. The union of this disc with the annulus $C' \times [0,1]$ is one of the required discs. If we cut $P$ along this disc, the result is two polyhedra $P_1$ and $P_2$, with straight triangulations $\calT_1$ and $\calT_2$. The intersection between $C - C'$ and each $P_i$ consists of at most $2^{k-1}$ curves. By induction, these curves bound simplicial discs in $\calT_i^{(2k-2)}$. Thus, $C$ bounds simplicial discs in $\calT^{(2k)}$. \end{proof} \begin{lemma}\label{Lem:SimplicialDiscsInPolyhedron} Let $\calT$ be a straight triangulation of a euclidean polyhedron $P$. Let $C$ be a union of at most $2^k$ disjoint simple closed curves in $\bdy P$ that are simplicial in $\calT$. Suppose that the maximal number of pairwise non-parallel components of $C$ is $n$. Then there is a realisation of $\calT^{(2k+2n-2)}$ as a straight triangulation of $P$ such that $C$ bounds a union of disjoint properly embedded discs in $P$ that are simplicial in the triangulation. \end{lemma} \begin{proof} We prove this by induction on $n$. The induction starts with $n = 1$, which is the content of \reflem{SimplicialDiscsInPolyhedronAllParallel}. To prove the inductive step, suppose that $n \geq 2$. Pick a curve type of $C$ and let $C'$ be a component of $C$ of this type that is closest to other types of curves. As in the proof of \reflem{SimplicialDiscsInPolyhedronAllParallel}, we may find a properly embedded disc bounded by $C'$ that is simplicial in $\calT^{(2)}$. 
Cutting $P$ along this disc gives two polyhedra, each of which inherits a triangulation. In each of these polyhedra, the maximal number of non-parallel components of $C - C'$ is at most $n-1$. Thus, by induction, after barycentrically subdividing the triangulations of these polyhedra $(2k+2n-4)$ times, we obtain simplicial discs bounded by these curves. Hence, $\calT^{(2k+2n-2)}$ contains simplicial discs bounded by $C$. \end{proof} \begin{lemma} \label{Lem:PushArcsIntoInterior} Let $\calT$ be a triangulation of a polyhedron $P$. Let $A$ be a collection of disjoint simplicial arcs in $\bdy P$. Let $A'$ be the properly embedded arcs obtained by pushing the interior of $A$ into the interior of $P$. Then, after an ambient isotopy supported in the interior of $P$, $A'$ can be realised as simplicial in $\calT^{(2)}$. \end{lemma} \begin{proof} \reffig{PushArcsIntoInterior} gives a construction of $A'$. A regular neighbourhood $N$ of $\bdy P$ is simplicial in $\calT^{(2)}$. This is homeomorphic to $\bdy P \times I$. Incident to the arcs $A$ are simplicial discs of the form $A \times I$ in $N$. We then set $A' = \bdy(A \times I) \cut A$. \end{proof} \begin{figure} \includegraphics[width=4in]{Figures/push-arcs-into-interior.pdf} \caption{Pushing an arc into the interior} \label{Fig:PushArcsIntoInterior} \end{figure} We now turn to the proof. \begin{restate}{Theorem}{Thm:DerivedSolidTorus} Let $\calT$ be a triangulation of the solid torus $M$. Then $M$ contains a core curve that is a subcomplex of $\calT^{(51)}$. \end{restate} \begin{proof} We start with the triangulation $\calT$ of the solid torus $M$. Let $\calH$ be its dual handle structure. We give it its canonical affine structure, as in \refdef{CanonicalAffine}. Each zero-handle is a truncated octahedron, which may be realised as a simplicial subset of the second derived subdivision of the tetrahedron of $\calT$ that contains it. Each one-handle of $\calH$ is realised as a product of a hexagon and an interval. We collapse this vertically onto its co-core, which is a hexagonal face of the two incident truncated octahedra. We now apply \refthm{CoreSolidTorusAffine}, which provides a core curve $C$. The intersection between $C$ and each hexagonal face is a collection of at most $24$ points. These may be made into vertices after $6$ barycentric subdivisions, by \reflem{PointsBecomeVertices}. Within each zero-handle, $C$ is a union of at most $48$ arcs, and these are simultaneously parallel to a collection of arcs $A$ in the boundary of the truncated octahedron. We now make $A$ simplicial. The intersection between $A$ and each face of the truncated octahedron is at most $6 \times 48 = 288$ straight arcs. The ones that start and end on the boundary of the face come in at most $9$ arc types and these can be made simplicial using at most $17$ barycentric subdivisions, by \reflem{SeveralArcTypes}. There are at most $24$ arcs that have at least one endpoint not on the boundary of the face. We make these simplicial using at most $24$ barycentric subdivisions, by \reflem{ImproperArcsSimplicial}. Finally, we can push these arcs in the boundary of the truncated octahedron into the interior and make them simplicial, using at most $2$ barycentric subdivisions, by \reflem{PushArcsIntoInterior}. In total, we have used at most $2 + 6 + 17 + 24 + 2 = 51$ subdivisions. 
\end{proof} \section{Nicely embedded handle structures} \label{Sec:NicelyEmbedded} \begin{definition} Let $\calT$ be a triangulation of a compact three-manifold $M$, and let $\calH$ be the dual handle structure. A subset of a zero-handle or one-handle $H$ of $\calH$ is \emph{subnormal} if it is obtained from $H$ by cutting along a collection of disjoint normal discs and then taking some of the resulting components. \end{definition} The proof of \refthm{LensSpaceCurve} (in the case where the lens space $M$ is not a prism manifold) proceeds by finding, within $M$, one of the solid tori $V$ in its Heegaard splitting embedded in a nice way. More specifically, it has a handle structure where the union of the zero-handles and the one-handles is embedded in $M$ in the following way. \begin{definition} \label{Def:NicelyEmbeddedHandlebody} Let $M'$ be a handlebody embedded in a three-manifold $M$. Let $A$ be a union of disjoint annuli in $\bdy M'$. Let $\calH'$ and $\calH$ be handle structures for $M'$ and $M$. We say that $(\calH', A)$ is \emph{nicely embedded} in $\calH$ if the following hold: \begin{itemize} \item $\calH'$ has only zero-handles and one-handles; \item each zero-handle of $\calH'$ is a subnormal subset of a zero-handle of $\calH$; \item each one-handle of $\calH'$ is a subnormal subset of a one-handle of $\calH$ and has the same product structure; \item the intersection between the annuli $A$ and any handle $H'$ of $\calH'$ is a union of components of intersection between $H'$ and handles of $\calH$. \qedhere \end{itemize} \end{definition} \begin{definition} \label{Def:KLNicelyEmbeddedHandlebody} With notation as in the previous definition, we say that $(\calH',A)$ is $(k, \ell)$--\emph{nicely embedded} in $\calH$, for non-negative integers $k$ and $\ell$, if it is nicely embedded and, in addition, the following hold: \begin{itemize} \item in any zero-handle of $\calH$, at most $k$ zero-handles of $\calH'$ lie between parallel normal discs in $\calH$; \item in any one-handle of $\calH$, at most $\ell$ one-handles of $\calH'$ lie between parallel normal discs in $\calH$. \qedhere \end{itemize} \end{definition} \begin{definition} \label{Def:NicelyEmbeddedManifold} Let $M''$ be a three-manifold embedded in another three-manifold $M$. Let $\calH''$ and $\calH$ be handle structures for $M''$ and $M$. Let $\calH'$ be the handle structure just consisting of the zero-handles and one-handles of $\calH''$, and let $A$ be the attaching annuli of the two-handles. We say that $\calH''$ is $(k,\ell)$--\emph{nicely embedded} in $\calH$, for non-negative integers $k$ and $\ell$, if $(\calH',A)$ is $(k,\ell)$--nicely embedded in $\calH$. \end{definition} These definitions are designed to capture the essential properties of the handle structure that $M \cut S$ inherits when $S$ is a normal surface. More specifically, we have the following. \begin{lemma} \label{Lem:NicelyEmbedded} Suppose that $M$ is a compact three-manifold with triangulation $\calT$. Suppose that $S$ is a normal surface, properly embedded in $M$. Let $\calH$ be the handle structure dual to $\calT$ and $\calH'$ be the handle structure that $M \cut S$ inherits, but with the two-handles removed. Let $A$ be their attaching annuli. Then $(\calH', A)$ is nicely embedded in $\calH$. \qed \end{lemma} We typically collapse the one-handles of $\calH'$ vertically onto their co-cores. Thus, the underlying manifold of $\calH'$ becomes a collection of balls, which are just its zero-handles, glued along discs in their boundary. 
Once we have such a $(k, \ell)$--nice embedding, we get the following results. \begin{theorem} \label{Thm:NiceEmbeddedManifold} Let $M$ be a compact three-manifold with a triangulation $\calT$. Let $M'$ be a handlebody with a handle structure $\calH'$, and let $A$ be a union of disjoint annuli in $\bdy M'$. Suppose that $M'$ is embedded in $M$ in such a way that $(\calH', A)$ is $(k, \ell)$--nicely embedded in the dual of $\calT$. Then we can arrange that the following are all simplicial subsets of $\calT^{(m)}$: \begin{itemize} \item each zero-handle of $M'$; \item each one-handle of $M'$, vertically collapsed onto its co-core; \item the annuli $A$; \end{itemize} where $m = 17 + 2 \lceil \log_2 (2k+10) \rceil + \lceil \log_2(6+2\ell) \rceil + \lceil \log_2(4 + 2\ell) \rceil$. \end{theorem} \begin{theorem} \label{Thm:NiceEmbeddedSolidTorus} Let $M$ be a compact three-manifold with a triangulation $\calT$. Let $V$ be a solid torus with a handle structure $\calH'$. Suppose that $V$ is embedded in $M$ in such a way that $\calH'$ is $(k, \ell)$--nicely embedded in the dual of $\calT$. Then there is a core curve of $V$ that is a subcomplex of $\calT^{(m+49)}$, where $m$ is as in \refthm{NiceEmbeddedManifold}. \end{theorem} \begin{proof}[Proof of Theorems~\ref{Thm:NiceEmbeddedManifold} and~\ref{Thm:NiceEmbeddedSolidTorus}] We start with the triangulation $\calT$ of $M$. Let $\calH$ be its dual handle structure. We give it its canonical affine structure, as in the proof of \refthm{DerivedSolidTorus}. Each zero-handle is a truncated octahedron, which may be realised as a simplicial subset of the second derived subdivision of the tetrahedron of $\calT$ that contains it. Each one-handle is the product of a hexagon and an interval, but it is collapsed onto its hexagonal co-core. Our goal is to construct an affine handle structure on $\calH'$. Thus, each zero-handle of $\calH'$ is given the structure of a euclidean polyhedron. We realise this as a polyhedron in the truncated octahedron of $\calH$ that contains it and as a simplicial subset of a suitable iterated barycentric subdivision of $\calT$. Now, within each zero-handle $H_0$ of $\calH$, every zero-handle of $\calH'$ is subnormal. These subnormal zero-handles are obtained from the truncated octahedron $H_0$ by cutting along normal triangles and squares. These are arranged into at most $4$ triangle types and at most one square type. Our approach, in overview, is to arrange for the intersections between these normal discs and the one-handles of $\calH$ to be simplicial, then for the remainder of the boundary of these discs to be simplicial, and then for the normal discs themselves to be simplicial. Let us first focus on a one-handle of $\calH$, which is a product of a hexagon $X$ and an interval $[-1,1]$. Within this one-handle, there are various one-handles of $\calH'$. At most four of the one-handles of $\calH'$ in $X \times [-1,1]$ do not lie between parallel normal discs. By assumption, at most $\ell$ one-handles do lie between parallel normal discs. These one-handles are therefore obtained from $X \times [-1,1]$ by cutting along at most $6 + 2\ell$ normal discs and then possibly throwing away some components. The union of these discs is of the form $\beta \times [-1,1]$ for normal arcs $\beta$ in $X$. These arcs come in at most $3$ types. Hence, by \reflem{SeveralArcTypes}, after at most $2 + \lceil \log_2(6+2\ell) \rceil$ barycentric subdivisions, we make $\beta$ simplicial in the triangulation of $X$.
Now consider each zero-handle $H_0$ of $\calH$. We cut this handle along normal discs and then take some of the resulting components to get the subnormal zero-handles of $\calH'$ in $H_0$. We are going to realise the boundary of these normal discs as simplicial in a suitable subdivision of the triangulation. We have already arranged for their intersection with the one-handles to be simplicial. Their intersection with each two-handle consists of at most $4 + 2\ell$ arcs, all of the same arc type, and so by using $1 + \lceil \log_2(4 + 2\ell) \rceil$ further barycentric subdivisions, we can also arrange for these arcs to be simplicial, by \reflem{TwoArcTypes}. The number of normal discs that we need to consider within $H_0$ is at most $2k + 10$. We now make these simplicial, using \reflem{SimplicialDiscsInPolyhedron}. This requires at most $2 \lceil \log_2 (2k + 10)\rceil + 8$ barycentric subdivisions. Thus, each zero-handle of $\calH'$ is now a simplicial subset of $\calT^{(m)}$, where $m$ is as given in the statement of the theorem. Furthermore, when two zero-handles of $\calH'$ are joined by a one-handle, then we glue the simplicial subsets of $\calT^{(m)}$ corresponding to these zero-handles along some faces. These can be arranged to be flat euclidean convex polygons with at most $6$ sides. Thus, we have realised each one-handle, when vertically collapsed onto its co-core, as a simplicial subset of $\calT^{(m)}$. Also, the components of intersection between the two-handles and the zero-handles and between the two-handles and the collapsed one-handles are simplicial. Thus, we have proved \refthm{NiceEmbeddedManifold}. Let us now prove \refthm{NiceEmbeddedSolidTorus}. So $V$ is now a solid torus. Each zero-handle and one-handle of $\calH'$ has the structure of a euclidean polyhedron, as described above. This gives $\calH'$ an affine handle structure. We can therefore apply \refthm{CoreSolidTorusAffine}, which gives a core curve $C$ of the solid torus $V$. The intersection between $C$ and any one-handle of $\calH'$ is at most 24 arcs, which respect the product structure on the handle. When the one-handle is vertically collapsed onto its co-core, these arcs become points. Using \reflem{PointsBecomeVertices}, we can make these vertices in the triangulation of the co-core of the one-handle after $6$ barycentric subdivisions. The intersection between $C$ and each zero-handle of $\calH'$ is a trivial tangle. Moreover, we have control over the arcs in the boundary of the handle to which it is parallel. Specifically, these arcs are a union of straight arcs in each face of the handle. These arcs come in two types: those that start and end on the boundary of the face, and those that have at least one endpoint in $C$. In each face (that arises as a component of intersection with the one-handles), there are at most $24$ of the latter type of arc. Hence, we need at most $24$ barycentric subdivisions to make these simplicial, by \reflem{ImproperArcsSimplicial}. There are at most $48 \times 6 = 288$ arcs in each face that start and end on the boundary of the face and these come in at most $9$ types. At most $17$ subdivisions are required to make these simplicial, by \reflem{SeveralArcTypes}. Now in each zero-handle, $C$ runs parallel to these arcs, which have been made simplicial. Hence, using \reflem{PushArcsIntoInterior}, two further subdivisions are required to make $C$ simplicial. The total number of barycentric subdivisions we have performed is at most $m + 49$. 
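For instance, when $(k, \ell) = (0,0)$ the bound in \refthm{NiceEmbeddedManifold} evaluates to
\[
m = 17 + 2 \lceil \log_2 10 \rceil + \lceil \log_2 6 \rceil + \lceil \log_2 4 \rceil = 17 + 8 + 3 + 2 = 30,
\]
so that $m + 49 = 79$, while $(k, \ell) = (10,6)$ gives $m = 17 + 10 + 5 + 4 = 36$ and $m + 49 = 85$.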
\end{proof} \section{Parallelity bundles} \label{Sec:Parallelity} In this section, we recall some material from~\cite[Section~5]{Lackenby:Composite} about parallelity bundles for handle structures. Here we assume that $M$ is a compact orientable three-manifold. We further assume that $\calH$ is a handle structure for $M$. \begin{definition} Suppose that $\gamma$ is a simple closed curve, properly embedded in $\bdy M$. We say that $\gamma$ is \emph{standard} with respect to $\calH$ if it satisfies the following properties. \begin{itemize} \item The curve $\gamma$ is disjoint from the two-handles of $\calH$, \item for each one-handle $H = D^1 \times D^2$ there is a finite set $P \subset \bdy D^2$ so that $\gamma \cap H = D^1 \times P$, and \item $\gamma$ meets at least one one-handle. \qedhere \end{itemize} \end{definition} \begin{definition} Suppose that $S \subset \bdy M$ is a subsurface. We say that $\calH$ is a \emph{handle structure for the pair} $(M,S)$ if \begin{itemize} \item $\calH$ is a handle structure for $M$ and \item the boundary of $S$ in $\bdy M$ is a union of standard curves for $\calH$. \qedhere \end{itemize} \end{definition} \begin{definition} Suppose that $\calH$ is a handle structure for the pair $(M, S)$. Suppose that $H$ is a zero-, one-, or two-handle of $\calH$. We say that $H$ is a \emph{parallelity handle} if it admits a product structure $D^2 \times I$ so that \begin{itemize} \item $H \cap S = D^2 \times \bdy I$ and \item for any other handle $H'$ of $\calH$, every component of $H \cap H'$ is \emph{vertical} in $H$: that is, of the form $\beta \times I$ where $\beta$ is an arc in $\bdy D^2$. \qedhere \end{itemize} \end{definition} Following the above definition, we typically regard parallelity handles as $I$--bundles over $D^2$, their first coordinate. \begin{definition} The union of the parallelity handles in $\calH$ is the \emph{parallelity bundle} for $\calH$. \end{definition} By \cite[Lemma~5.3]{Lackenby:Composite}, the $I$--bundle structures on the parallelity handles agree where they intersect and so give an $I$--bundle structure on the parallelity bundle. \begin{definition} Suppose that $\calB$ is an $I$--bundle over a surface $F$. The resulting $(\bdy I)$--bundle over $F$ is $\bdy_h \calB$, the \emph{horizontal boundary} of $\calB$. The resulting $I$--bundle over $\bdy F$ is $\bdy_v \calB$, the \emph{vertical boundary} of $\calB$. The components of the boundary of $\bdy_h \calB$ (which equals the boundary of $\bdy_v \calB$) are called the \emph{corner curves} of $\calB$. \end{definition} \begin{definition} \label{Def:GeneralisedParallelityBundle} Suppose that $\calH$ is a handle structure for the pair $(M,S)$. Suppose that $\calB^+$ is a three-dimensional submanifold of $M$. We say that $\calB^+$ is a \emph{generalised parallelity bundle} if \begin{itemize} \item $\calB^+$ is an $I$--bundle over a compact surface; \item the horizontal boundary of $\calB^+$ is $\calB^+ \cap S$; \item $\calB^+$ is a union of handles of $\calH$; \item any handle in $\calB^+$ that intersects the vertical boundary of $\calB^+$ is a parallelity handle, where the $I$--bundle structure on the parallelity handle agrees with the $I$--bundle structure of $\calB^+$; \item for any $i$--handle lying in $\calB^+$ that is incident to a $j$--handle, where $j > i$, the $j$--handle must also lie in $\calB^+$. \qedhere \end{itemize} \end{definition} Note that the parallelity bundle $\calB$ is itself a generalised parallelity bundle.
\begin{definition} We say that a generalised parallelity bundle $\calB^+$ is \emph{maximal} if $\calB^+$ is not properly contained in another generalised parallelity bundle. \end{definition} Note that there is always a maximal generalised parallelity bundle $\calB^+$ that contains all of $\calB$. However, the inclusion of $\calB$ into $\calB^+$ need not respect the $I$--bundle structure on all components of $\calB$. \begin{lemma} \label{Lem:BundleBoundaryCurves} Let $\calB$ be the parallelity bundle and let $\calB^+$ be any maximal generalised parallelity bundle that contains $\calB$. Then every corner curve of $\calB^+$ is a corner curve of $\calB$. \end{lemma} \begin{proof} By hypothesis, $\calB^+$ contains $\calB$. Hence, $\bdy_h \calB^+ = \calB^+ \cap S$ contains $\bdy_h \calB = \calB \cap S$. Suppose that $\gamma$ is a corner curve of $\calB^+$. So $\gamma$ is a component of $\bdy A$ for some component $A$ of $\bdy_v \calB^+$. By definition, the handles of $\calB^+$ incident to $A$ are parallelity handles. The $I$--bundle structures on $A$ and $\calB$ agree; also any parallelity handle incident to $A$ lies in $\calB$. Hence, $A$ is a component of $\bdy_v \calB$. We deduce that $\gamma$ is a corner curve of $\calB$. \end{proof} \begin{definition} Suppose that $G$ is an annulus, properly embedded in $M$, with boundary in $S$. Suppose that $G'$ is an annulus in $\bdy M$ with $\bdy G = \bdy G'$. Suppose also that $G \cup G'$ bounds a three-manifold $P$ such that \begin{itemize} \item either $P$ is a parallelity region between $G$ and $G'$ or $P$ lies in a three-ball; \item $P$ is a non-empty union of handles; \item $\closure(M - P)$ inherits a handle structure from $\calH$; \item any parallelity handle of $\calH$ that intersects $P$ lies in $P$; \item $G$ is a vertical boundary component of a generalised parallelity bundle lying in $P$; \item $G' \cap (\bdy M - S)$ is either empty or a regular neighbourhood of a core curve of the annulus $G'$. \end{itemize} Removing the interiors of $P$ and $G'$ from $M$ is called an \emph{annular simplification}. \end{definition} The resulting three-manifold $M'$, obtained from an annular simplification, is homeomorphic to the original manifold $M$. This holds even in the case where $P$ is homeomorphic to the exterior of a non-trivial knot; in this case $P$ lies in a three-ball in $M$. We now restate~\cite[Proposition~5.6]{Lackenby:Composite}. \begin{theorem} \label{Thm:IncompressibleHorizontalBoundary} Suppose that $M$ is a compact orientable irreducible three-manifold. Suppose that $F$ is an incompressible subsurface of $\bdy M$. Let $\calH$ be a handle structure for $(M, F)$. Suppose that $\calH$ admits no annular simplification. Let $\calB^+$ be any maximal generalised parallelity bundle in $\calH$. Then the horizontal boundary of $\calB^+$ is incompressible. \qed \end{theorem} \section{Finding a simplicial core curve of a lens space} \label{Sec:ProofMain} This section is devoted to the proof of \refthm{LensSpaceCurve}: that is, for any triangulation $\calT$ of any lens space $M$ (except $\RP^3$) there is a relatively short curve with complement a solid torus or the orientation $I$--bundle over the Klein bottle. Let $\calH$ be the handle structure of $M$ that is dual to $\calT$. Let $S$ be an almost normal Heegaard torus in $\calT$, which exists by \refthm{AlmostNormalHeegaard}. Cutting $M$ along $S$ gives two solid tori $X_1$ and $X_2$. As explained in \refsec{HandleStructures}, these inherit handle structures $\calH_1$ and $\calH_2$.
We now consider some particular situations where the proof of \refthm{LensSpaceCurve} is fairly straightforward. They highlight the approach that needs to be taken in the general case. Suppose, as a special case, that one of $\calH_1$ or $\calH_2$ is $(0,0)$--nicely embedded within $\calH$. We could then use \refthm{NiceEmbeddedSolidTorus} to find a core curve of one of the solid tori $X_1$ or $X_2$ that is simplicial in a suitable iterated barycentric subdivision of $\calT$. But in general, the embeddings of $\calH_1$ and $\calH_2$ in $\calH$ are not $(0,0)$--nicely embedded, because when $S$ contains two normal discs of the same type, the space between them becomes a zero-handle of $\calH_1$ or $\calH_2$ that violates the definition of a $(0,0)$--nice embedding. For $i = 1$ and $2$, let $\calB_i$ be the parallelity bundle for $\calH_i$. Suppose, as a different special case, that this is an $I$--bundle over a collection of discs, for $i=1$ or $2$. Then we could create a new handle structure from $\calH_i$ by removing $\calB_i$ and replacing it by two-handles. This new handle structure is then $(0,0)$--nicely embedded in $\calH$, and we could then apply \refthm{NiceEmbeddedSolidTorus}. Of course, though, there is no particular reason for $\calB_i$ to be $I$--bundles over discs. But according to \refthm{IncompressibleHorizontalBoundary}, we can apply annular simplifications and then extend the parallelity bundle to a generalised parallelity bundle $\calB_i^+$ with horizontal incompressible boundary. In fact, \refthm{IncompressibleHorizontalBoundary} is not immediately applicable, because if one sets $F$ in that theorem to be all of the Heegaard surface $S$, then it is not incompressible in the two solid tori $X_1$ and $X_2$. But setting this aside for the moment, suppose that we could ensure that the generalised parallelity bundle $\calB_i^+$ has horizontal incompressible boundary. It cannot be all of $S$, and so it is a union of disjoint discs and annuli. Hence, $\calB_i^+$ consists of $I$--bundles over discs, annuli and M\"obius bands. We can replace any $I$--bundles over discs by two-handles and remove any $I$--bundles over annuli using annular simplifications. Thus, if $\calB_i^+$ contains no $I$--bundles over M\"obius bands, then we end with a handle structure for one of the solid tori that is $(0,0)$--nicely embedded in $\calH$. We could then apply \refthm{NiceEmbeddedSolidTorus}. To fix the problem that $S$ is not incompressible in $X_1$ and $X_2$, we work with the pair $(X_i, F_i)$, where $F_i$ is a suitable subsurface of $S$. This is obtained from $S$ by cutting along a curve or curves. One way of producing the required curves is via the following lemma. \begin{lemma} \label{Lem:ShortEssentialCurveInTorus} Let $V$ be a solid torus with a handle structure $\calH$. Then there is a simple closed curve $C$ in $\bdy V$ satisfying the following conditions: \begin{itemize} \item it is standard in $\calH$; \item it runs over each component of intersection between $\bdy V$ and the one-handles at most once; \item it is essential in $\bdy V$ and non-meridional. \end{itemize} Suppose also that $D$ is a union of disjoint discs in $\bdy V$ such that \begin{itemize} \item it is a union of components of intersection between $\bdy V$ and the handles; \item if $H$ and $H'$ are handles of $\calH$ where $H'$ has higher index than that of $H$, then whenever a component of $\bdy V \cap H$ lies in $D$, so do all incident components of $\bdy V \cap H'$. 
\end{itemize} Then we may also ensure that $C$ is disjoint from $D$. \end{lemma} \begin{proof} Any essential curve in $\bdy V$ may be isotoped to a standard one, by first pushing it off the two-handles and then making it vertical in the one-handles. We consider all standard simple closed curves that are non-zero and non-meridional in $H_1(\bdy V; \ZZ/2\ZZ)$. We let $C$ be such a curve that runs over the one-handles the fewest number of times. Then, if it runs over a component of intersection between a one-handle and $\bdy V$ more than once, we can modify it to reduce this number by $2$. This might create a disconnected one-manifold. But if so, then we just focus on one component. We can choose this component to be non-zero and non-meridional in $H_1(\bdy V; \ZZ/2\ZZ)$. Thus, under the assumption that $C$ runs over the one-handles the fewest number of times, we deduce that $C$ in fact runs over each component of intersection between the one-handles and $\bdy V$ at most once. Consider now the case where there are also the discs $D$. Then we isotope any essential curve off $D$ and make it standard. We consider such a curve that is non-zero and non-meridional in $H_1(\bdy V; \ZZ/2\ZZ)$. Among all such curves disjoint from $D$, we let $C$ be one that runs over the one-handles the fewest number of times. The same argument as above then applies. \end{proof} \begin{lemma} \label{Lem:RemoveThreeCurves} Let $M$ be a compact three-manifold with a handle structure $\calH$. Let $\calB$ be the parallelity bundle for $(M, \bdy M)$. Let $C$ be a standard curve in $\bdy M$ that is disjoint from $\bdy_h \calB$. Let $C'$ be three parallel copies of $C$, and let $F$ be $\bdy M \cut N(C')$. Then the parallelity bundle for $(M,F)$ is equal to $\calB$. \end{lemma} \begin{proof} Every handle $H$ of $\calB$ is, by definition, a parallelity handle for $(M, \bdy M)$ and so satisfies $H \cap \bdy M = H \cap \bdy_h \calB$. Since $C$ is disjoint from $\bdy_h \calB$, it misses $H$, and therefore $H \cap F = H \cap \bdy M$. So, $H$ is a parallelity handle for $(M, F)$. Now consider a parallelity handle $H$ for $(M, F)$. If $H$ is a two-handle, it is disjoint from the standard curves $C'$, and therefore $H \cap \bdy M = H \cap F$. Hence, in this case, $H$ is a parallelity handle for $(M, \bdy M)$. Suppose now that $H$ is a one-handle. It has the form $D^2 \times I$, where $H \cap F = D^2 \times \bdy I$. Each component of intersection between $H$ and any other handle has the form $\beta \times I$ for an arc $\beta$ in $\bdy D^2$. There are two such components arising from the intersection between $H$ and the incident zero-handles. There may be a further one or two components, arising from the intersection with the two-handles. If $H$ does have two components of intersection with the two-handles, then $H \cap \bdy M = H \cap F$, and therefore $H$ is a parallelity handle for $(M, \bdy M)$. On the other hand, if $H$ has fewer than two components of intersection with the two-handles, then it intersects $\bdy M \cut F$ once or twice. Therefore, $C'$ runs along the one-handle once or twice. But $C'$ consists of three parallel copies of $C$, and therefore the number of times that it runs along this one-handle is a multiple of three. This is a contradiction. This completes the proof when $H$ is a one-handle. Now suppose that $H$ is a zero-handle. If it is disjoint from the one-handles, then $C'$ misses it, since $C'$ is standard. Hence, in this case, $H$ is a parallelity handle for $(M, \bdy M)$.
On the other hand, if $H$ intersects a one-handle, then this is also a parallelity handle for $(M,F)$ and hence, as argued above, this one-handle has two components of intersection with the two-handles. Thus, $\bdy D^2 \times I$ consists of intersections with one-handles and two-handles in an alternating fashion around the annulus $\bdy D^2 \times I$. In particular, $H \cap \bdy M = D^2 \times \bdy I$, and therefore $H$ is again a parallelity handle for $(M, \bdy M)$. \end{proof} We are now equipped to prove the theorem. \begin{restate}{Theorem}{Thm:LensSpaceCurve} Let $M$ be a lens space other than $\RP^3$. Let $\calT$ be any triangulation of $M$. Then there is a simple closed curve $C$ that is a subcomplex of $\calT^{(86)}$, such that the exterior of $C$ is either a solid torus or a twisted $I$--bundle over a Klein bottle. \end{restate} \begin{proof} Let $S$ be an almost normal Heegaard torus in $\calT$, which exists by \refthm{AlmostNormalHeegaard}. By \refprop{FromAlmostNormalToNormal}, $S$ can be arranged to be normal in the barycentric subdivision $\calT^{(1)}$. Let $\calH$ be the handle structure of $M$ that is dual to $\calT^{(1)}$. Cutting $M$ along $S$ gives two solid tori $X_1$ and $X_2$. These then inherit handle structures $\calH_1$ and $\calH_2$. Let $\calB_i$ be the parallelity bundle for $(X_i, \bdy X_i)$ with handle structure $\calH_i$. Then $\bdy_v \calB_i$ is a collection of properly embedded annuli in $X_i$. Their boundary curves are a (possibly empty) collection of essential curves on $\bdy X_i$, each with the same slope $\alpha_i$, together with some inessential curves on $\bdy X_i$. We let $\alpha_i = \emptyset$ if there are no essential curves. We consider three cases: \begin{enumerate} \item $\alpha_1$ or $\alpha_2$ is empty; \item $\alpha_1$ and $\alpha_2$ are equal and non-empty; \item $\alpha_1$ and $\alpha_2$ are distinct and non-empty. \end{enumerate} \begin{case} \label{Case:Empty} $\alpha_1$ or $\alpha_2$ is empty. \end{case} Say that $\alpha_i$ is empty. We claim that $\bdy_h \calB_i$ is a subsurface of $S$ that lies in a collection of discs. Otherwise, $\bdy_h \calB_i$ contains a component $F$ that is $S$ minus some open discs. There cannot be another copy of $F$ in $S$ that is disjoint from $F$, and so $F$ must be the horizontal boundary of a twisted $I$--bundle component of $\calB_i$. The zero section of this $I$--bundle is therefore a non-orientable surface. By attaching annuli to its boundary that lie in $\bdy_v \calB_i$ and then capping off with discs, we obtain a closed non-orientable surface in the solid torus. No such surface exists, which proves the claim. We may take the discs $D$ that contain $\bdy_h \calB_i$ to be as in \reflem{ShortEssentialCurveInTorus}. Specifically, each boundary component of $\bdy_h \calB_i$ bounds a disc in $S$, and we set $D$ to be the union of these discs (which may be nested). Hence, there is a standard curve $C_i$ in $S$ that misses these discs, runs over each component of intersection between $\bdy X_i$ and the one-handles of $\calH_i$ at most once, and is essential in $\bdy X_i$ and non-meridional. Let $C'_i$ be three parallel copies of $C_i$ and let $F_i = S \cut N(C_i')$. By \reflem{RemoveThreeCurves}, the parallelity bundle for $(X_i, F_i)$ is exactly $\calB_i$. We now apply as many annular simplifications to $\calH_i$ as possible, forming a handle structure $\calH_i'$ for a solid torus $X'_i$, isotopic to $X_i$, with subsurface $F'_i$ of $\bdy X'_i$. This process does not introduce parallelity handles.
Thus, the horizontal boundary of the parallelity bundle $\calB'_i$ for $\calH'_i$ also lies in a union of disjoint discs in $\bdy X'_i$. We now extend this parallelity bundle to a maximal generalised parallelity bundle $\calB_i^+$. By \reflem{BundleBoundaryCurves}, the boundary curves of $\bdy_h \calB_i^+$ form a subset of the boundary curves of $\bdy_h \calB_i'$. They are therefore also inessential in $\bdy X'_i$. By \refthm{IncompressibleHorizontalBoundary}, $\bdy_h \calB_i^+$ is incompressible in $X'_i$. Hence it is a union of discs. So, $\calB_i^+$ consists of $I$--bundles over discs. Remove these $I$--bundles over discs and replace them by two-handles. The resulting handle structure is $(0,0)$--nicely embedded in $\calH$. So, by \refthm{NiceEmbeddedSolidTorus}, a core curve of $X'_i$ is simplicial in $\calT^{(79)}$. This proves the theorem in this case. \begin{case} \label{Case:EqualNonEmpty} $\alpha_1$ and $\alpha_2$ are equal and non-empty. \end{case} For some $i \in \{ 1,2 \}$, $\alpha_i$ is non-meridional in $X_i$, since $M$ is not $S^2 \times S^1$. Fix some such $i$. Let $C_i$ be a boundary component of $\bdy_v \calB_i$ with slope $\alpha_i$. Then isotope $C_i$ a little away from $\bdy_h \calB_i$ so that it becomes a standard curve in $\calH_i$. Let $C'_i$ be three parallel copies of $C_i$ and let $F_i$ be $S \cut N(C_i')$. We view $\calH_i$ as a handle structure for $(X_i, F_i)$. Again, the parallelity bundle for $(X_i, F_i)$ is $\calB_i$, by \reflem{RemoveThreeCurves}. Perform as many annular simplifications to $\calH_i$ as possible, forming the three-manifold $X'_i$ with subsurface $F'_i$ of $\bdy X'_i$. Let $\calB_i'$ be its parallelity bundle. The boundary curves of its horizontal boundary are therefore inessential in $\bdy X'_i$ or have slope $\alpha_i$. Then, since $\alpha_i$ is non-meridional in $X_i$, \refthm{IncompressibleHorizontalBoundary} gives that we may extend $\calB'_i$ to a generalised parallelity bundle $\calB_i^+$ with horizontal boundary that is incompressible in $X_i'$. The boundary curves of $\bdy_h \calB_i^+$ are inessential or have slope $\alpha_i$, by \reflem{BundleBoundaryCurves}. Each component of $\calB_i^+$ is an $I$--bundle over a disc, annulus or M\"obius band. In fact, no component is an $I$--bundle over an annulus, for the following reason. A vertical boundary component of such an $I$--bundle is an incompressible annulus properly embedded in the solid torus $X'_i$. It is therefore boundary parallel in $X'_i$. By picking the vertical boundary component of the $I$--bundle appropriately, we may assume that the product region between it and an annulus in $\bdy X'_i$ contains the $I$--bundle. Hence, we could have performed an annular simplification, contradicting our assumption that as many of these as possible were performed. Suppose that no component of $\calB_i^+$ is an $I$--bundle over a M\"obius band. Then we can replace $\calB_i^+$ by two-handles as in \refcase{Empty}. The resulting handle structure is $(0,0)$--nicely embedded in $\calH$. So, by \refthm{NiceEmbeddedSolidTorus}, a core curve of $X'_i$ is simplicial in $\calT^{(79)}$. So we are left with the case where some component of $\calB_i^+$ is an $I$--bundle over a M\"obius band. Let $j = 3 - i$. Then $\alpha_j$ cannot be meridional in $X_j$, because we could patch a meridian disc for $X_j$ onto a M\"obius band in $X_i$ to get an embedded projective plane. This would imply that $M$ is $\RP^3$, contrary to assumption. Note that here we are using the fact that $\alpha_1$ and $\alpha_2$ are equal.
Thus, the above argument gives a maximal generalised parallelity bundle $\calB_j^+$ for $(X'_j, F'_j)$ with $\bdy_h \calB_j^+$ incompressible in $X_j'$. The only situation where the theorem is not proved in this case is when the generalised parallelity bundles $\calB_i^+$ and $\calB_j^+$ both contain components $B_i$ and $B_j$ that are $I$--bundles over M\"obius bands. Then, $\alpha_i$ and $\alpha_j$ are the boundaries of M\"obius bands in $X_i$ and $X_j$ and, after an isotopy, these can be glued to form an embedded Klein bottle in $M$. The exterior of this Klein bottle is a solid torus $V$. This is a general fact about Klein bottles in lens spaces (see the proof of \reflem{EmbeddedKleinBottle}). But it can be seen directly as follows. The exterior of $B_i$ in $X'_i$ is a solid torus which winds twice along $X'_i$. Moreover, $\bdy_v B_i$ is longitudinal in this solid torus. Similarly, the exterior of $B_j$ in $X'_j$ is a solid torus. If the two annuli $\bdy X'_i \cut \bdy_h B_i$ and $\bdy X'_j \cut \bdy_h B_j$ are isotoped to be equal, then when $X'_i \cut B_i$ and $X'_j \cut B_j$ are glued along this annulus, the result is a solid torus that is isotopic to $V$. Each boundary component of $\bdy_v B_i$ is isotopic to a core curve of $V$. In this case, we take the required curve $C$ to be one of these boundary components. Let $A$ be $\bdy_v \calB_i^+$ lying in the boundary of the manifold $M' = X'_i \cut \calB_i^+$. Then $(M', A)$, with its inherited handle structure, is $(0,0)$--nicely embedded in $\calH$. By \refthm{NiceEmbeddedManifold}, $A$ is simplicial in $\calT^{(30)}$. In particular, $C$, which is a boundary component of $A$, is simplicial in $\calT^{(30)}$, as required. \begin{case} \label{Case:UnequalNonEmpty} $\alpha_1$ and $\alpha_2$ are distinct and non-empty. \end{case} For $i \in \{ 1, 2 \}$, let $C_i$ be some boundary curve of $\bdy_v \calB_i$ with slope $\alpha_i$, isotoped a little so that it becomes a standard curve. Note that this isotopy pushes $C_i$ into handles of $\calH_i$ that are not parallelity handles. Now let $F$ be $S \cut N(C_1 \cup C_2)$. Since this lies in the complement of $C_1$ and $C_2$, which are essential curves with distinct slopes, $F$ is a union of discs. In particular, it is incompressible in $X_1$ and $X_2$. Pick some $i$ and perform as many annular simplifications to $(X_i, F)$ as possible, giving a handle structure $\calH'_i$ for a pair $(X'_i, F')$ that is isotopic to $(X_i,F)$. Let $\calB_i^+$ be a maximal generalised parallelity bundle for $(X'_i, F')$ that contains its parallelity bundle. By \refthm{IncompressibleHorizontalBoundary}, the horizontal boundary of $\calB^+_i$ is incompressible. Since it is a subsurface of $F$, it is a union of discs. Thus, $\calB^+_i$ consists of $I$--bundles over discs. Replace each of these with a two-handle, giving a handle structure $\calH'$. We claim that $\calH'$ is $(10,6)$--nicely embedded in $\calH$. Because $\calH'$ is obtained from $\calH$ by cutting along the normal surface $S$, it is $(k, \ell)$--nicely embedded in $\calH$ for some $k$ and $\ell$, but we need to show why we can take $k = 10$ and $\ell = 6$. Consider a zero-handle $H'_0$ of $\calH'$ that lies between normally parallel discs. Let $H_0$ be the zero-handle of $\calH$ that contains it. Since $H'_0$ does not lie in $\calB_i^+$, it is not a parallelity handle for $(X'_i, F')$. Hence, it intersects $C_1$ or $C_2$ in $S$.
Hence, in a component of $H_0 \cut S$ incident to $H'_0$, there must be a zero-handle of $\calH'$ that does not lie between parallel normal discs of $S$. Thus, at most $10$ zero-handles of $\calH'$ in $H_0$ do not lie between parallel normal discs. This number $10$ is twice the number of normal disc types of $S$ that can simultaneously exist within $H_0$. Therefore, we may set $k = 10$. A similar argument gives that $\ell = 6$ also suffices. Since $\calH'$ is $(10,6)$--nicely embedded in $\calH$, \refthm{NiceEmbeddedSolidTorus} gives that there is a core curve of $X'_i$ that is simplicial in $\calT^{(86)}$. This proves the theorem in this case. \end{proof} We now use \refthm{DerivedSolidTorus} and \refthm{LensSpaceCurve} to prove our main technical result. \begin{restate}{Theorem}{Thm:LensSpaceCore} Let $M$ be a lens space, which is neither a prism manifold nor a copy of $\RP^3$. Let $\calT$ be any triangulation of $M$. Then the iterated barycentric subdivision $\calT^{(86)}$ contains a core curve of $M$ in its one-skeleton. Furthermore, $\calT^{(139)}$ contains in its one-skeleton the union of the two core curves. \end{restate} \begin{proof} Let $\calT$ be a triangulation of a lens space $M$, other than a prism manifold or $\RP^3$. Note that the lens spaces that contain an embedded Klein bottle are exactly the prism manifolds by \reflem{EmbeddedKleinBottle}. So, by \refthm{LensSpaceCurve}, there is a core curve $C$ of $M$ that is simplicial in $\calT^{(86)}$. A regular neighbourhood of $C$ is simplicial in $\calT^{(88)}$. Removing the interior of this regular neighbourhood gives a triangulated solid torus. By \refthm{DerivedSolidTorus}, this contains a core curve that is simplicial in its 51st barycentric subdivision. Hence, $M$ contains the union of its two core curves as a simplicial subset of $\calT^{(139)}$. \end{proof} \section{Elliptic manifolds} \label{Sec:Elliptic} A three-manifold $M$ is \emph{elliptic} if there is a subgroup $\Gamma \subset \SO(4)$, acting freely on the three-sphere, with $M \homeo S^3/\Gamma$. Since the universal cover of $M$ is the three-sphere, the fundamental group $\pi_1(M)$ is necessarily finite. Since all elements of $\SO(4)$ are orientation preserving, all elliptic manifolds are orientable. To give names to all of the elliptic manifolds we invoke a beautiful result of Seifert and Threlfall~\cite[page~568]{SeifertThrelfall33}. See also~\cite[Corollary 4.4.11]{Thurston97}. \begin{theorem} \label{Thm:Seifert} Every elliptic manifold admits a Seifert fibred structure. \end{theorem} Recall that a Seifert fibred structure $\calF$ on a three-manifold $M$ is a foliation by circles where each circle $C$ has a neighbourhood $U$ that is a fibred solid torus. The fibre $C$ has \emph{Seifert invariant} $q/p$ if every fibre in $\calF|\bdy U$ has slope $q/p$ with respect to a framing $\subgp{\lambda, \mu}$. Here $\mu$ is the meridian of $U$ and $p$ is necessarily non-zero. Note that replacing $\lambda$ by $\lambda + \mu$ changes the slope of the fibre to be $(q - p)/p$. We simplify the notation $(M, \calF)$ to just $M$ when the foliation is understood. We say that a fibre $C \subset M$ is \emph{generic} if it has integral Seifert invariant (in other words, $|p| = 1$) and \emph{critical} otherwise. If we quotient all fibres of $\calF$ to points we get the \emph{base orbifold} $B = M/S^1$, where critical fibres project to orbifold points. 
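For definiteness, a fibre of slope $q/p$ represents the class $p\lambda + q\mu$ in $H_1(\bdy U)$; the effect of replacing $\lambda$ by $\lambda + \mu$, noted above, is then visible in the identity
\[
p\lambda + q\mu = p(\lambda + \mu) + (q - p)\mu.
\]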
The quotient induces a surjection $\pi_1(M) \to \pi_1^\orb(B)$ where the kernel is cyclic (possibly trivial) and generated by the generic fibre. Since $\pi_1(M) \isom \Gamma$ is finite, we deduce that $\pi_1^\orb(B)$ is also finite. Here, then, are the possibilities for $B$. \begin{lemma} \label{Lem:AllYourBaseAreBelongToUs} Suppose $B$ is a closed two-dimensional orbifold (without mirrors). Then $\pi_1^\orb(B)$ is finite if and only if $\chi^\orb(B) > 0$. Thus $B$ is one of the following. \begin{itemize} \item Cyclic -- $S^2$, $S^2(p)$, $P^2$, $S^2(p,q)$. \item Dihedral -- $P^2(r)$, $S^2(2,2,r)$, where $r \geq 2$. \item Tetrahedral -- $S^2(2,3,3)$. \item Octahedral -- $S^2(2,3,4)$. \item Icosahedral -- $S^2(2,3,5)$. \end{itemize} The names indicate the orbifold fundamental group. \qed \end{lemma} We call the last three the \emph{platonic} orbifolds. In these cases we say the \emph{type} of $B$ is the orbifold fundamental group $\calP = \pi_1^\orb(B)$. Here $\calP$ is one of the three platonic groups $A_4$, $S_4$ or $A_5$, respectively. \begin{lemma} \label{Lem:LensPrismBase} Suppose $(M, \calF)$ is Seifert fibred and $B = M/S^1$ is the base orbifold. If $B$ is cyclic or dihedral then $M$ is $S^2 \times S^1$, a lens space or prism manifold. \end{lemma} \begin{proof} Suppose that $B = M/S^1$ is a two-sphere with at most two orbifold points. Let $\alpha$ be a loop in $B$ cutting $B$ into a pair of discs each with at most one marked point. Since $M$ is orientable and since $\alpha$ is two-sided, the preimage of $\alpha$ in $M$ is a torus $T$. The preimage of either component of $B - \interior(N(\alpha))$ is a fibred solid torus and thus $M$ is a lens space or $S^2 \times S^1$. Suppose that $B$ is a projective plane with at most one orbifold point. Let $\alpha$ be a loop in $B$ reversing orientation. So $B - \interior(N(\alpha))$ is a disc with at most one orbifold point. Since $M$ is orientable, and since $\alpha$ is one-sided, the preimage of $\alpha$ in $M$ is a Klein bottle. It follows that $M$ is a prism manifold. Suppose that $B = S^2(2,2,r)$. Let $\alpha$ be an arc connecting the orbifold points of order $2$. Again, the preimage of $\alpha$ is a Klein bottle and $M$ is a prism manifold. \end{proof} We say that a Seifert fibred space $(M, \calF)$ is \emph{platonic} if $B = M/S^1$ is platonic. In this case, the \emph{type} of $M$ is the same as the type of $B$. Now Lemmas~\ref{Lem:AllYourBaseAreBelongToUs} and~\ref{Lem:LensPrismBase} allow us to coarsely name the elliptic manifolds. \begin{proposition} \label{Prop:Names} Every elliptic manifold is either a lens space, a prism manifold, or a platonic manifold. \qed \end{proposition} To pin down an elliptic manifold precisely, we must discuss how the Seifert invariants of the critical fibres interact with the \emph{Euler number}. Suppose, for this paragraph, that $M$ is a platonic manifold. Suppose the Seifert invariants of the critical fibres $C_i$ are $q_i/p_i$, for $i = 1, 2, 3$. Following Orlik~\cite[page~91]{Orlik72}, there is an integer $q$ so that $\pi_1(M)$ has the following presentation. \[ \pi_1(M) = \group{a_1, a_2, a_3, b}{[a_i,b] = 1, a_i^{p_i} = b^{-q_i}, a_1 a_2 a_3 = b^q} \] Here $\subgp{b}$ is the central subgroup and is generated by a generic fibre in $M$. Killing $b$ gives $\pi_1^\orb(B)$. The integer $q$ is determined by the \emph{Euler number} of $(M, \calF)$: \[ e(M, \calF) = - q - \sum \frac{q_i}{p_i}. 
\] In a small abuse of notation, we call the data $(q, q_1/p_1, q_2/p_2, q_3/p_3)$ the \emph{Seifert invariants} of $M$. In the terminology of Orlik~\cite[page~88]{Orlik72}, this manifold would be denoted by \[ \{ q, (o_1, 0); (q_1,p_1), (q_2,p_2), (q_3,p_3) \}. \] Suppose $U_i$ is a fibred neighbourhood of the critical fibre $C_i \subset M$. If we change the framing of $\bdy U_i$, say by replacing $q_i/p_i$ with $(q_i + p_i)/p_i$, then in order to keep the Euler number the same we must also change the framing about a generic fibre, by replacing $q$ with $q - 1$. We call this process \emph{reframing}. Note also that there is an orientation-reversing homeomorphism between the manifolds having Seifert invariants \[ (q,q_1/p_1, \dots, q_n/p_n) \quad \mbox{and} \quad (-q,-q_1/p_1, \dots, -q_n/p_n). \] Since we are not considering our manifolds as oriented in this paper, we view these two Seifert fibred spaces as equivalent. We now record a very useful observation, essentially due to Seifert and Threlfall~\cite[page~573]{SeifertThrelfall33}. \begin{proposition} \label{Prop:PlatonicHomeo} Suppose $M$ is a platonic manifold with $B = M/S^1 = S^2(p_1, p_2, p_3)$. Then the Seifert invariants of $M$ can be recovered, up to reframing and reversing orientation, from $B$ and the order of $H_1(M)$. \end{proposition} \begin{proof} Abelianizing the fundamental group shows $H_1(M)$ has order \[ \big| q p_1 p_2 p_3 + q_1 p_2 p_3 + p_1 q_2 p_3 + p_1 p_2 q_3 \big|. \] A bit of modular arithmetic shows the data $q, q_1, q_2, q_3$ can be recovered, up to reframing and changing all their signs, from $B = M/S^1$ and the order of $H_1(M)$. For example, suppose that $B$ is $S^2(2,3,4)$. By applying an orientation-reversing homeomorphism if necessary, we may assume that $e(M, \calF) \leq 0$. Its Seifert invariants are $(q,1/2,q_2/3,q_3/4)$ where $q_2 \in \{ 1,2 \}$ and $q_3 \in \{1,3 \}$. Note that \[ |H_1(M)| = |24 q + 12 + 8q_2 + 6q_3| = 24 q + 12 + 8q_2 + 6q_3 \] since $e(M, \calF) \leq 0$. We may compute $q_2$ using the fact that $|H_1(M)| \equiv 8 q_2$ mod $3$, and we may compute $q_3$ using the fact that $|H_1(M)| \equiv 6 q_3 + 12$ mod $8$. Then finally, $q$ can be determined using the above equality for $|H_1(M)|$. The cases of the other platonic manifolds are similar. \end{proof} \begin{remark} It is evident that the procedure given in the above proof runs in polynomial time as a function of the number of digits of $|H_1(M)|$. \end{remark} \begin{lemma} \label{Lem:Platonic} Suppose $M$ is a platonic manifold with base orbifold $B = M/S^1$. The map $\pi_1(M) \to \pi_1^\orb(B)$ is the only surjection of the fundamental group of $M$ onto a platonic group, up to post-composing by an automorphism of $\pi_1^\orb(B)$. \end{lemma} \begin{proof} Let $G = \pi_1(M)$ and $\calP = \pi_1^\orb(B)$. Let $\rho \from G \to \calP$ be the associated surjection. Now suppose $\rho' \from G \to \calP'$ is any surjection to a platonic group. Since $\calP'$ has trivial centre, the map $\rho'$ kills the fibre of $M$. Thus $\rho'$ factors through $\rho$. However, there is no nontrivial surjective map between distinct platonic groups. Thus $\rho = \rho'$, perhaps after applying an automorphism of $\calP$. \end{proof} \begin{proposition} \label{Prop:PrismCriterion} Suppose $q/p$ and $s/r$ are Farey neighbours, with $q \geq 2$.
An orientable three-manifold $M$ is homeomorphic to the prism manifold $P(p,q)$ if and only if $M$ is double covered by the lens space $L = L(2pq, ps+qr)$ and the subgroup of $\pi_1(L)$ fixed by the deck group action has order $2p$. \end{proposition} \begin{proof} The forward direction is the statement of \reflem{PrismCover}. For the backwards direction let $G = \pi_1(L) \isom H_1(L)$. Since $M$ is double covered by a lens space, the resolution of the spherical space form conjecture~\cite{Perelman1, Perelman2, Perelman3} implies $M$ is elliptic. Since $\pi_1(M)$ has an index two cyclic subgroup, namely $G$, we deduce that $\pi_1(M)$ cannot surject a platonic group. \refprop{Names} implies $M$ is either a lens space or a prism manifold. However, in any double cover of a lens space, the deck group acts trivially on homology. We deduce that $M$ is a prism manifold. According to \reflem{PrismCover}, the coefficients $p$ and $q$ can be recovered from the order and index, respectively, of the subgroup of $G$ that is fixed by the action of the deck group. \end{proof} \begin{proposition} \label{Prop:PlatonicCriterion} Suppose $\calP$ is one of the three platonic groups $A_4$, $S_4$ or $A_5$. Let $d$ be the order of $\calP$. An orientable three-manifold $M$ is homeomorphic to a platonic manifold of type $\calP$ if and only if it is $d$--fold covered by a lens space $L$ with deck group isomorphic to $\calP$. Moreover, the Seifert invariants of $M$ can be recovered from the order of $H_1(M)$. \end{proposition} \begin{proof} For the forward direction, recall the fundamental group of $M$ has the form \[ G = \pi_1(M) = \group{a_1, a_2, a_3, b}{[a_i,b] = 1, a_i^{p_i} = b^{-q_i}, a_1 a_2 a_3 = b^q}. \] The subgroup $\subgp{b}$ is central. The corresponding cover is an elliptic manifold with cyclic fundamental group, and so is a lens space. The deck group is isomorphic to $G/\subgp{b} \isom \calP$. For the backwards direction, since $M$ is covered by a lens space, the spherical space form conjecture (which is a consequence of the geometrisation conjecture, proved by Perelman \cite{Perelman1, Perelman2, Perelman3}) implies that $M$ is elliptic. Since the degree of the cover equals the order of the deck group the covering is normal. Thus $\pi_1(M)$ is a cyclic extension of $\calP$. We deduce $M$ is not a lens space or prism manifold. \refprop{Names} implies $M$ is a platonic manifold. \reflem{Platonic} implies $M$ has the type of $\calP$. Finally, \refprop{PlatonicHomeo} states that the Seifert invariants can be recovered from the order of $H_1(M)$. \end{proof} \section{Certifying \texorpdfstring{$T^2 \times I$}{T2 x I} and \texorpdfstring{$K^2 \twist I$}{K2 x I}} \label{Sec:IBundles} As usual, we assume that any three-manifold $M$ is given via a finite triangulation $\calT$. The decision problem \textsc{Recognising $T^2 \times I$} takes $\calT$ as its input and it asks whether $M$ is homeomorphic to $T^2 \times I$. The decision problem \textsc{Recognising $K^2 \twist I$} is defined similarly. Both are dealt with by Haraway and Hoffman~\cite[Theorem~3.6]{HarawayHoffman19}. \begin{theorem} \label{Thm:RecognisingBundles} The problems \textsc{Recognising $T^2 \times I$} and \textsc{Recognising $K^2 \twist I$} are in \NP. \qed \end{theorem} There is a subtle point to note here. Haraway and Hoffman, for their certificate for \textsc{Recognising $T^2 \times I$}, rely on~\cite[Theorem~12.1]{Lackenby:Knottedness}. 
There, given a handle structure $\calH$ of a sutured manifold $(M, \gamma)$, the theorem provides a certificate that $(M, \gamma)$ is a product sutured manifold. In our setting, we would simply check that $M$ had two toral boundary components $T_0$ and $T_1$, say, and would assign the sutured manifold structure $R_-(M) = T_0$ and $R_+(M) = T_1$. Then~\cite[Theorem 12.1]{Lackenby:Knottedness} provides a certificate that $(M, \emptyset)$ is a product sutured manifold, which is equivalent to the statement that $M$ is homeomorphic to $T^2 \times I$. However, there is a gap in the published proof of~\cite[Theorem~12.1]{Lackenby:Knottedness}. There one tacitly assumes that $\gamma$ is non-empty. Nevertheless, there is a straightforward fix. We take as the certificate a non-separating annulus $A$ in normal form with respect to the triangulation $\calT$ and with weight at most an exponential function of the number of tetrahedra of $\calT$. Such an annulus exists by~\cite[Lemma~8.5, Theorem~8.3]{Lackenby:Knottedness}. Then we form a handle structure on $M \cut A$ where the number of handles is bounded above by a linear function of the number of tetrahedra of $\calT$, using~\cite[Theorem~9.3]{Lackenby:Knottedness}. This is a product sutured manifold if and only if $M$ was a product; however now $M \cut A$ has a non-empty collection of sutures and so the proof of~\cite[Theorem~12.1]{Lackenby:Knottedness} applies. We also note that there are alternative methods of certifying $T^2 \times I$. One is to use the following result. \begin{theorem} Let $M$ be a compact orientable three-manifold with two boundary components, both of which are tori. Let $s_1, s_2, s_3$ be slopes on one of the boundary components $T$ of $M$ that represent the three non-trivial elements of $H_1(T; \ZZ/2\ZZ)$. Then $M$ is homeomorphic to $T^2 \times I$ if and only if the three manifolds obtained by Dehn filling along $s_1$, $s_2$, and $s_3$ are all homeomorphic to a solid torus. \qed \end{theorem} The proof is given inside the proof of~\cite[Theorem~11]{Haraway20}. Thus we can certify that $M$ is homeomorphic to $T^2 \times I$ as follows. We check that $M$ has two boundary components, both of which are tori. Let $T$ be one of these. We pick embedded normal one-manifolds $C_1$, $C_2$ and $C_3$ that represent the three non-trivial elements of $H_1(T; \ZZ/2\ZZ)$, with the property that each $C_i$ intersects each triangle of $T$ in at most one normal arc. Each $C_i$ consists of some parallel essential simple closed curves in $T$, plus possibly some curves that bound discs in $T$. Let $C_i'$ be one of these essential curves. The curves $C_1', C_2', C_3'$ again represent the three non-trivial elements of $H_1(T; \ZZ/2\ZZ)$. We form triangulations for the three-manifolds obtained by Dehn filling along the slopes of $C_1', C_2', C_3'$ as follows. We barycentrically subdivide the triangulation of $M$ once, so that each $C'_i$ is simplicial. Then we attach (simplicially) a triangulated disc to $M$ along $C'_i$. Finally we attach a triangulated three-cell to complete the Dehn filling. Each of the resulting triangulations has at most $56$ times as many tetrahedra as the original, given, triangulation of $M$. The final piece of our certificate is that these three triangulated three-manifolds are solid tori; for this we use~\cite[Corollary~2]{Ivanov08}. There is even a third method of providing a certificate for \textsc{Recognising $T^2 \times I$}, going back to Schleimer's thesis~\cite[Chapter~6]{Schleimer01}. 
Let $M$ be the given three-manifold equipped with the triangulation $\calT$. We first check that $M$ has the homology of $T^2$. As in~\cite[Theorem~15.1]{Schleimer11}, the first third of the certificate is a sequence $(\calT_i, v(S_i))_{i = 0}^n$ where: \begin{itemize} \item $\calT_0 = \calT$, \item $v(S_i)$ is the normal vector of $S_i$, a fundamental non-vertex linking normal two-sphere in $\calT_i$ (for $i < n$), \item $v(S_n)$ is the zero-vector, \item $\calT_{i+1}$ is the result of \emph{crushing} $\calT_i$ along $S_i$~\cite[Section~13]{Schleimer11}, and \item $\calT_n$ is \emph{zero-efficient}~\cite[Definition~4.10]{Schleimer11}. \end{itemize} We next certify three-sphere components of $\calT_n$ and discard them to obtain $\calT'$. The final third of the certificate follows the plan of~\cite[Theorem~6.2.1]{Schleimer01}. We find a list $(T_i)_{i = 0}^{2n}$ of disjoint tori in $\calT'$. Each even torus $T_{2k}$ (except for the first and last) is almost normal and \emph{normalises via isotopy} to the normal tori $T_{2k \pm 1}$. The union $T_0 \cup T_{2n}$ is the frontier of a small regular neighbourhood of $\bdy M$, taken in $\calT$. We require that $T_0$ and $T_{2n}$ normalise via isotopy to $T_1$ and $T_{2n - 1}$, respectively. That the $T_i$ exist and have controlled weight is proved in~\cite[Chapter~6]{Schleimer01}. The normalisations are produced in polynomial time using the algorithm of~\cite[Theorem~12.1]{Schleimer11}. There are several possible certificates for \textsc{Recognising $K^2 \twist I$}. One is to exhibit a double cover $\cover{M}$ of the manifold $M$, to certify that $\cover{M}$ is $T^2 \times I$ and to check that $M$ has a single torus boundary component. Specifically, the certificate is as follows: \begin{enumerate} \item a triangulation $\cover{\calT}$ of a three-manifold $\cover{M}$; \item a simplicial involution $\phi$ of $\cover{\calT}$ with no fixed points; \item a combinatorial isomorphism between $\cover{\calT} / \phi$ and $\calT$; \item a certificate that $\cover{M}$ is homeomorphic to $T^2 \times I$. \end{enumerate} We now check that $\bdy M$ is a single torus and verify the given certificate. This suffices because $K^2 \twist I$ is the unique orientable three-manifold that has a single toral boundary component and is double covered by $T^2 \times I$~\cite[Theorem~10.5]{Hempel}. \section{Certificates for elliptic manifolds} \label{Sec:CertificateElliptic} In this section, we give our method for certifying whether a closed three-manifold is an elliptic manifold and, if it is, which elliptic manifold it is. That is, we prove the following. \begin{restate}{Theorem}{Thm:Elliptic} The problem \textsc{Elliptic manifold} lies in \NP. \end{restate} \begin{restate}{Theorem}{Thm:NamingElliptic} The problem \textsc{Naming elliptic} lies in \FNP. \end{restate} In what follows we assume that the three-manifold $M$ is given via a finite triangulation $\calT$. \subsection{Some problems in \NP} In our proofs of Theorems~\ref{Thm:Elliptic} and~\ref{Thm:NamingElliptic}, we rely on the fact that the following decision problems lie in \NP. \begin{enumerate} \item \textsc{Recognising $S^3$} -- \cite[Theorem~15.1]{Schleimer11} and~\cite[Theorem~2]{Ivanov08}. \item \textsc{Recognising $D^2 \cross S^1$} -- \cite[Corollary~2]{Ivanov08}. \item \textsc{Recognising $T^2 \times I$} -- \cite[Theorem~3.6]{HarawayHoffman19}. \item \textsc{Recognising $K^2 \twist I$} -- \cite[Theorem~3.6]{HarawayHoffman19}.
\end{enumerate} \subsection{Some problems in {\textsc{P}}} In what follows we rely on the algorithm of Kannan and Bachem~\cite{KannanBachem79}, which places a given integer matrix into Smith normal form. The running time and also the bit-size of the output are bounded above by polynomial functions of the bit-size of the input. We also use the fact, due to Burton~\cite[Corollary~8]{Burton11}, that it is decidable in polynomial time whether two triangulations of compact three-manifolds are combinatorially isomorphic. Burton proved this result by creating from a triangulation of a compact three-manifold an \emph{isomorphism signature}, which gives a labelling of the simplices of the triangulation and has the property that two triangulations are combinatorially isomorphic if and only if their isomorphism signatures are equal \cite[Theorem 7]{Burton11}. Furthermore, his algorithm also computes the automorphism group of a single triangulation in polynomial time. We now provide various types of certificates for various types of elliptic manifolds. In each case, we give the certificate and explain how it is verified. The time to complete the verification is bounded above by a polynomial function of $|\calT|$, the number of tetrahedra in $\calT$. Finally, we prove that such a certificate exists if and only if $M$ is the corresponding type of elliptic manifold. In our certificates, we give more information than is strictly necessary. We do this in order to make the certificates easier for the reader of this paper to interpret. Some parts of the certificates could be removed, at the cost of requiring the checker of the certificate to do more work. We place an asterisk against those parts of the certificate that seem to be crucial given the current state of knowledge. \subsection{Three-sphere} Recognising the three-sphere is discussed immediately above. \subsection{Real projective space} \label{Sec:RP3} The certificate here is: \begin{enumerate} \item[(1)] a triangulation $\cover{\calT}$ of a three-manifold $\cover{M}$; \item[$\ast$(2)] a certificate that $\cover{M}$ is the three-sphere; \item [(3)] a simplicial involution $\phi$ of $\cover{\calT}$ that has no fixed points; and \item [(4)] a simplicial isomorphism between $\cover{\calT} / \phi$ and $\calT$. \end{enumerate} The first, third, and fourth parts may be verified in polynomial time; we omit the details. By the spherical space form conjecture~\cite{Perelman1, Perelman2, Perelman3} or \cite{Livesay60}, the manifold $M$ has such a certificate if and only if it is $\RP^3$. The output of the algorithm records whether the certificate has been verified, and in the function case, it also gives the Seifert data $(0, 1/2)$. Only (2) appears to be necessary. Instead of (1), (3) and (4) being part of the certificate, one can proceed as follows. The verifier is given a triangulation $\calT$ of a three-manifold $M$. Using the algorithm of Kannan and Bachem, the verifier can compute $H_1(M)$ in polynomial time and check that it is $\mathbb{Z}/2\mathbb{Z}$. The verifier can also construct the unique non-trivial homomorphism $\pi_1(M) \rightarrow \mathbb{Z}/2\mathbb{Z}$, and hence can build the unique double cover $\cover{M}$ and its lifted triangulation $\cover{\calT}$. Using Burton's isomorphism signature, the verifier can ensure that the simplices of this triangulation are labelled in a canonical fashion. The certificate in (2) applies to this labelled triangulation. Once the certificate in (2) is verified, the verifier can deduce that $M$ is $\RP^3$.
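To illustrate the homology check in the preceding paragraph, the following is a minimal sketch (not part of the certificate, and with function names that are ours) of how a verifier might confirm that $H_1(M)$ is $\mathbb{Z}/2\mathbb{Z}$, assuming that an integer relation matrix for $H_1(M)$ has already been extracted from the chain complex of $\calT$; here SymPy's Smith normal form routine stands in for the Kannan--Bachem algorithm.
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def first_homology_is_Z2(relations):
    # `relations` is an integer relation matrix for H_1(M): columns index
    # generators and rows index relations.  (Extracting such a matrix from
    # the chain complex of the triangulation is assumed to have been done.)
    A = Matrix(relations)
    D = smith_normal_form(A, domain=ZZ)
    diag = [abs(D[i, i]) for i in range(min(D.rows, D.cols))]
    torsion = [d for d in diag if d not in (0, 1)]
    free_rank = A.cols - sum(1 for d in diag if d != 0)
    return free_rank == 0 and torsion == [2]

# Example: a relation matrix presenting Z/2Z on a single generator.
assert first_homology_is_Z2([[2]])
\end{verbatim}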
\begin{remark} There is an interesting generalisation of the above. Suppose that $M = L(p, q)$ is given via a triangulation $\calT$; suppose that $p$ is bounded by a uniform polynomial in $|\calT|$. In this case, the verifier can construct the $p$--fold homology cover in polynomial time and also check that the deck group is cyclic. They then check a (provided) certificate that the cover is the three-sphere. This proves that $M$ is a lens space. Finally, and more delicately, they verify the coefficient $q$ in polynomial time using~\cite[Theorem~1.1]{Kuperberg18}. \end{remark} \subsection{Non-prism non-\texorpdfstring{$\RP^3$}{RP3} lens spaces} \label{Sec:NonPrismNonRP3Lens} The certificate is as follows: \begin{enumerate} \item [$\ast$(1)] a simplicial subset $C$ of the one-skeleton of $\calT^{(86)}$; \item [(2)] a triangulation $\calT'$ of a three-manifold $X$; this will in fact be the exterior of $C$; \item [(3)] a simplicial isomorphism between $\calT'$ and the triangulation that results from $\calT^{(88)}$ by keeping only those simplices that are disjoint from $C$; \item [$\ast$(4)] a certificate establishing that $X$ is a solid torus; \item [(5)] simplicial one-chains $\lambda$ and $\mu$ in $\calT^{(88)}$; these will in fact be meridional and longitudinal curves on $\bdy N(C)$; \item [(6)] positive integers $p$ and $q$ with $1 \leq q < p$; $M$ will in fact be homeomorphic to $L(p,q)$. \end{enumerate} To verify this certificate we must check the following: \begin{enumerate} \item that $C$ is a circle; \item that the mapping given in part (3) is a simplicial isomorphism; \item that the certificate in part (4) shows that $X$ is a solid torus; \item that $\mu$ and $\lambda$ are one-cycles that generate $H_1(\bdy N(C))$; \item that $\mu$ is trivial in $H_1(N(C))$ and that $\lambda$ is a generator; \item that the kernel of the homomorphism $H_1(\bdy N(C)) \to H_1(X)$ (induced by inclusion) is generated by $p \lambda + q \mu$; and \item that $(p,q) \neq (4p', 2p' \pm 1)$ for any integer $p'$. \end{enumerate} Once the certificate is verified, the output also, in the function case, records $p$ and $q$. More specifically, since we are requiring that the output is the Seifert data for $M$, this is $(0,q/p)$. If we can verify the certificate, then $M$ is the manifold $L(p,q)$. Conversely, suppose $M$ is the lens space $L(p, q)$ and is neither a prism manifold nor $\RP^3$. Then $(p,q) \neq (4p', 2p' \pm 1)$ for any integer $p'$, by \reflem{PrismIsLens}. By \refthm{LensSpaceCore}, the subdivision $\calT^{(86)}$ contains a simplicial core curve $C$. Its regular neighbourhood $N(C)$ is simplicial in $\calT^{(88)}$. An essential simple closed curve in $\bdy N(C)$ that lies in the link of some vertex of $C$ gives a simplicial meridian $\mu$. An embedded simplicial arc in $\bdy N(C)$ joining opposite sides of $\mu$ can then be closed up to form a simplicial longitude $\lambda$. Thus, the required certificate exists. It can be verified in polynomial time, as a function of the number of tetrahedra in $\calT$, for the following reasons. The subdivision $\calT^{(88)}$ contains $24^{88}$ times as many tetrahedra as $\calT$. Thus, steps (1)--(3) in the verification can be achieved in polynomial time. For (4)--(6), we note that the boundary maps in the chain groups for $N(C)$ and $\bdy N(C)$ can be expressed as integer matrices, where the number of rows and columns is bounded by a constant multiple of the number of tetrahedra in $\calT^{(88)}$ and where each column has $L^1$ norm at most $3$.
Thus, (4)--(6) can be verified in polynomial time using linear algebra, as in the Kannan-Bachem algorithm. One can alternatively just use (1) and (4) in the certificate. Instead of (2) and (3), the verifier can compute the triangulation for the exterior $X$ of $C$ obtained from $\calT^{(88)}$ by keeping only those simplices that are disjoint from $C$. Furthermore, by using Burton's isomorphism signature, the verifier can give the simplices of this triangulation a canonical labelling. Furthermore, the verifier can build the simplicial one-chains $\lambda$ and $\mu$ and compute the integers $p$ and $q$, using linear algebra. Note that, since they are computed in polynomial time, $p$ and $q$ necessarily have polynomial bit-size. \subsection{Prism lens spaces} \label{Sec:PrismLens} Here, the certificate is either as in the case of non-prism non-$\RP^3$ lens spaces (but where the integers $p$ and $q$ do satisfy $(p, q) = (4p', 2p' \pm 1)$ for some positive integer $p'$) or the following: \begin{enumerate} \item [$\ast$(1)] a simplicial subset $C$ of the one-skeleton of $\calT^{(86)}$; \item [(2)] a triangulation $\calT'$ of a three-manifold $X$; this will in fact be the exterior of $C$; \item [$\ast$(3)] a certificate that $X$ is homeomorphic to $K^2 \twist I$; \item [(4)] a triangulation $\cover{\calT}$ of a three-manifold $\cover{M}$; this will in fact be the double cover of $M$ for which the inverse image of $K^2\twist I$ is a copy of $T^2 \times I$; \item [(5)] a simplicial involution $\phi$ of $\cover{\calT}$ that has no fixed points; \item [(6)] a simplicial isomorphism between $\calT$ and $\cover{\calT} / \langle \phi \rangle$; \item [(7)] a simplicial subset $\tilde C$ of $\cover{\calT}^{(86)}$, partitioned into two subsets $\tilde C_1$ and $\tilde C_2$; this will in fact be the inverse image of $C$ partitioned into its two components; \item [(8)] a triangulation $\cover{\calT}'$ of a three-manifold $\tilde X'$; this will be the exterior of $\tilde C_1$; \item [$\ast$(9)] a certificate that $\tilde X'$ is a solid torus; \item [(10)] simplicial one-chains $\lambda$ and $\mu$ in $(\cover{\calT}')^{(88)}$; these will in fact be meridional and longitudinal curves on $\bdy N(\tilde C_1)$; and \item [(11)] a positive integer $p$; $\cover{M}$ will in fact be the lens space $L(2p, 1)$. \end{enumerate} We check that $C$ is a simple closed curve. We check that $\calT'$ is the triangulation obtained from $\calT^{(88)}$ by removing those simplices incident to $C$. We verify the certificate that this is a copy of $K^2 \twist I$, and so on. These imply that $M$ is a prism manifold. Once we have checked that $\cover{M}$ is the lens space $L(2p, 1)$, we then check that $\phi_\ast$ acts trivially on $H_1(\cover{M})$. By \reflem{PrismCover}, this implies that $M$ was the manifold $P(p,1)$, which is indeed both a prism manifold and a lens space. The existence of this certificate is a consequence of \refthm{LensSpaceCurve}. As in previous cases, some parts of this certificate can be dispensed with. One can alternatively compute $H_1(X)$ using the triangulation $\calT'$ provided in (2) and check that it is isomorphic to $\mathbb{Z} \times (\mathbb{Z}/2\mathbb{Z})$. One can then construct all double covers of $M$. One of these will be the cover $\cover{M}$ with triangulation $\cover{\calT}$. The inverse image of $C$ in $\cover{\calT}^{(86)}$ is readily computed. There are two possible choices for the component $C_1$. 
A triangulation of its exterior is readily computed, as are the one-chains $\lambda$ and $\mu$ and the positive integer $p$. \begin{remark} Note that the above three cases, plus the case of $S^3$, provide certificates for all lens spaces. \end{remark} \subsection{Prism non-lens spaces} The certificate: \begin{enumerate} \item [(1)] a triangulation $\cover{\calT}$ of a three-manifold $\cover{M}$; \item [(2)] integers $p, q, r, s$ where $p \geq 1$ and $q > 1$; \item [$\ast$(3)] a certificate that $\cover{M}$ is the lens space $L(2pq, ps+qr)$; \item [(4)] a simplicial involution $\phi$ of $\cover{\calT}$ that has no fixed points; \item [(5)] a simplicial isomorphism between $\calT$ and $\cover{\calT} / \langle \phi \rangle$. \end{enumerate} To verify this certificate we must check the following: \begin{enumerate} \item that $ps-qr = 1$; \item that the given certificate establishes that $\cover{M}$ is the lens space $L(2pq, ps+qr)$; \item that the subgroup of $H_1(\cover{M})$ that is fixed by $\phi$ has order $2p$; \item that $\phi$ has no fixed points; \item that the given map between $\calT$ and $\cover{\calT} / \langle \phi \rangle$ is a simplicial isomorphism. \end{enumerate} \refprop{PrismCriterion} then implies that $M$ is the prism manifold $P(p,q)$. This has two possible Seifert fibrations. The algorithm outputs the one with spherical base space, which has Seifert data $(0, 1/2, -1/2, q/p)$. We need to show that if $M$ is the manifold $P(p,q)$, then there is a certificate as above that can be verified in polynomial time. \refprop{PrismCriterion} gives that the prism manifold $P(p,q)$ has a double cover $\cover{M}$ that is the lens space $L(2pq, ps + qr)$. The triangulation $\calT$ lifts to a triangulation $\cover{\calT}$ for $\cover{M}$. As explained in the previous subsections, there is a certificate that establishes that $\cover{M}$ is $L(2pq, ps+qr)$. As usual, $2pq$ has at most polynomially many digits (as a function of $|\cover{\calT}|$) as do $p$ and $q$. Now, $s/r$ can be any rational number so that $ps - qr = 1$. However, we may rechoose $r$ and $s$ so that $0 \leq r \leq p$ and hence $|s| \leq |ps| = |1 + qr|$. Thus, it can be verified in polynomial time that $ps - qr = 1$. The remaining parts of the certificate may be checked in polynomial time. In particular, one uses linear algebra to verify that the subgroup of $H_1(\cover{M})$ that is fixed by the covering involution $\phi$ has order $2p$. Again certain parts of this certificate are not strictly necessary. Since the first homology of a prism manifold is generated by at most two elements, it has at most three double covers. For each such cover, one can compute its lifted triangulation and its isomorphism signature. One of these triangulations will be $\cover{\calT}$. If we proceed in this way, we do not need to be provided with (4) and (5) in the certificate. The order of the subgroup of $H_1(\cover{M})$, fixed by the covering transformation, can be computed in polynomial time; this gives $p$. Since we have established that lens space recognition is in \FNP, when this is applied to $\cover{M}$, the integers $2pq$ and $ps+qr$ are given as the lens space coefficients; hence these do not need to be provided as part of a certificate. Once we have $p$ and $2pq$, we can compute $q$. As observed above, $s$ and $r$ can be any integers satisfying $ps - qr = 1$ and hence can be computed using the Euclidean algorithm, and can be arranged so that $0 \leq r \leq p$ and $|s| \leq |1 + qr|$. 
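As a small illustration of this arithmetic step, the following Python sketch (ours; the function name is hypothetical) computes integers $r$ and $s$ with $ps - qr = 1$ via the extended Euclidean algorithm, shifts them along the solution family so that $0 \leq r \leq p$, and returns $ps + qr$ for comparison with the second lens space coefficient.
\begin{verbatim}
def extended_gcd(a, b):
    # return (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def prism_coefficients(p, q):
    # for coprime p >= 1, q > 1: integers r, s with p*s - q*r == 1,
    # normalised so that 0 <= r <= p, together with p*s + q*r
    g, s, t = extended_gcd(p, q)
    assert g == 1, "ps - qr = 1 is solvable only for coprime p, q"
    r = -t                          # now p*s - q*r == 1
    k = (r % p - r) // p            # (r + k*p, s + k*q) is again a solution
    r, s = r + k * p, s + k * q
    assert p * s - q * r == 1 and 0 <= r <= p
    return r, s, p * s + q * r

print(prism_coefficients(3, 5))     # toy values: (1, 2, 11)
\end{verbatim}
Since $p \geq 1$, the bound $|s| \leq |1 + qr|$ follows automatically from $ps = 1 + qr$.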
Then $ps + qr$ can be computed and we can check whether it equals the second integer that is output from the recognition of $\cover{M}$ as a lens space. \subsection{Platonic manifolds} The certificate: \begin{enumerate} \item [$\ast$(1)] a triangulation $\cover{\calT}$ of a three-manifold $\cover{M}$; \item [$\ast$(2)] a group $\calP$ of simplicial isomorphisms of $\cover{\calT}$ that acts freely; \item [(3)] a simplicial isomorphism between $\cover{\calT}/\calP$ and $\calT$; \item [(4)] an isomorphism between $\calP$ and one of the platonic groups $A_4$, $S_4$ or $A_5$; \item [$\ast$(5)] a certificate that certifies that $\cover{M}$ is a lens space using one of the certificates described above in Sections \ref{Sec:RP3}, \ref{Sec:NonPrismNonRP3Lens} or \ref{Sec:PrismLens}. \end{enumerate} To verify this certificate we must check the following: \begin{enumerate} \item that the given isomorphism between $\calP$ and one of the platonic groups is indeed an isomorphism; \item that $\calP$ acts freely on $\cover{\calT}$; \item that the given simplicial isomorphism between $\cover{\calT}/\calP$ and $\calT$ is indeed a simplicial isomorphism; \item that the given certificate establishes that $\cover{M}$ is homeomorphic to a lens space; \item the resulting Seifert invariants of $M$, using \refprop{PlatonicHomeo}. \end{enumerate} It is clear that if there is such a certificate, then $M$ is regularly covered by a lens space, with deck group given by the platonic group $\calP$. Hence, by \refprop{PlatonicCriterion}, $M$ is a platonic manifold with type $\calP$. Conversely, if $M$ is such a manifold, then by \refprop{PlatonicCriterion}, there is such a finite cover, and hence the certificate exists. It can be verified in polynomial time. It is clear that (3) and (4) are not actually needed as part of the certificate. The simplicial isomorphism in (3) can be computed using Burton's algorithm and it can readily be checked that the given group $\calP$ of covering transformations is isomorphic to one of $A_4$, $S_4$ or $A_5$. An alternative certificate, instead of giving (1) and (2), could give a surjective homomorphism from $\pi_1(M)$ to $\calP$, the desired platonic group. The verifier could then build the cover $\cover{\calT}$ and its deck group directly, in polynomial time. \bibliographystyle{plainurl} \bibliography{lens_spaces} \end{document}
2205.08796v1
http://arxiv.org/abs/2205.08796v1
Absolute exponential stability criteria of delay time-varying systems with sector-bounded nonlinearity: a comparison approach
\documentclass[review]{elsarticle} \usepackage{lineno,hyperref} \modulolinenumbers[5] \journal{Journal of \LaTeX\ Templates} \bibliographystyle{elsarticle-num} \usepackage{amsfonts,amssymb,amsbsy} \usepackage{latexsym} \usepackage{amsmath} \usepackage{xcolor} \setlength{\parskip}{0.2cm} \setlength{\topmargin}{-2.cm} \setlength{\oddsidemargin}{-0.6cm} \setlength\evensidemargin{-0.6cm} \setlength{\textwidth}{17.3cm} \setlength{\textheight}{25cm} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{example}{Example} \newtheorem{remark}{Remark} \newtheorem{definition}{Definition} \newtheorem{assumption}{Assumption} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\K}{\mathbb{K}} \newcommand{\Z}{\mathbb{Z}} \begin{document} \begin{frontmatter} \title{Absolute exponential stability criteria of delay time-varying systems with sector-bounded nonlinearity: a comparison approach } \author[mymainaddress]{Nguyen Khoa Son\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \address[mymainaddress]{Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Rd., Hanoi, Vietnamm}\ead{[email protected]} \author[mysecondaryaddress]{Nguyen Thi Hong} \address[mysecondaryaddress]{Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Rd., Hanoi, Vietnamm} \ead{[email protected]} \begin{keyword} absolute stability, time-varying systems, comparison principle, sector nonlinearities. \end{keyword} \begin{abstract} Absolute exponential stability problem of delay time-varying systems (DTVS) with sector-bounded nonlinearity is presented in this paper. By using the comparison principle and properties of positive systems we derive several novel criteria of absolute exponential stability, for both continuous-time and discrete-time nonlinear DTVS. When applied to the time-invariant case, the obtained stability criteria are shown to cover and extend some previously known results, including, in particular, the result due to S.K. Persidskii in Ukrainian Mathematical Journal, vol. 57(2005). The theoretical results are illustrated by examples that can not be treated by the existing ones. \end{abstract} \end{frontmatter} \section{Introduction and Preliminaries} \label{sec:introduction} The absolute stability problem, first formulated in \cite{lurie1944}, is one of the main problems in the systems and control theory. Roughly speaking, a dynamic nonlinear system containing nonlinearities in its mathematical description is said to be absolute stable if its equilibrium is asymptotically stable for any nonlinearity in a given nonlinearities class. In this nonlinear framework, the most widely used approach for characterizing and checking the absolute stability is the Lyapunov function method. The reader is referred to the survey article \cite{liberzon_automat2008} and the monograph \cite{liao_book} for the study on absolute stability of the so-called Luri'e control systems and the monograph \cite{KaszBhaya} which is dedicated to absolute stability analysis of several classes of nonlinear systems of practical interest, including the so-called Persidskii-type systems, whose stability can be characterized by diagonal Lyapunov functions. 
As is well-known, the most simple Persidskii-type system can be represented by the differential equation \begin{equation}\label{persid1} \dot x= Af(x), \ f(0)=0 \end{equation} where $A:=(a_{ij})$ is $n\times n$-matrix and $f$ is supposed to be a continuous diagonal function $f(x):=(f_1(x_1),\ldots, f_n(x_n))^{\top}$ belonging to the {\it infinite sector} \begin{equation}\label{infsector} K(0,\infty):= \{f: \; x_if_i(x_i) > 0, \forall x_i\not=0,\ i=1,\ldots, n\}. \end{equation} This class of models was first introduced for stability analysis in \cite{barbashin}, where a linear combination of the integrals of the nonlinearities was used as a Lyapunov function. Next, that result was improved and extended by Persidskii \cite{persidskii69} and in many subsequent papers (see, e.g. \cite{K_B-SIAM1993, Oliveira2002, persidskii2005, Efimov2021}), where different types of Lyapunov functions have been proposed for checking absolute stability for continuous-time systems as well as their discrete-time counterparts, under different classes of sector nonlinearities. The study of absolute stability of Persidskii-type systems is an important tool in many application fields such as automatic control, Lotka-Volterra ecosystems, Hopfield neural networks, and decentralized power-frequency control of power systems, among others (see, e.g. \cite{K_B-SIAM1993}). Recently, the absolute stability problems have been investigated also for the classes of switched nonlinear systems, including those with time-delay in the state variables, see, e.g. \cite{ sun_wang2013, alex_mason2014}, \cite{zhang_zhao2016} \cite{alex2021} and the references therein. It is important to note that in most of the aforementioned works only the classes of time-invariant nonlinear systems have been considered where some criteria of absolute asymptotical stability have been derived. So far little attention has been devoted to time-varying systems and absolute exponential stability analysis. In this paper we will study the absolute exponential stability problem for the Persidskii class of time-varying delay nonlinear systems of the form \begin{equation} \label{TVS} \dot x=A(t) f(x(t)) + B(t)f(x(t-h)), \ t\geq 0 \end{equation} where $A(\cdot), B(\cdot)$ are $n\times n$-matrix continuous functions and the diagonal function $f$ belongs to the {\it bounded sector} $K[\delta,\beta]$ defined as \begin{equation}\label{sector2} K[\delta,\beta]\!:=\! \{f: \delta_ix_i^2\leq x_if_i(x_i) \leq \beta_i x_i^2,\forall x_i\not=0, \ i=1,\ldots, n \} \end{equation} with $0<\delta_i\leq \beta_i, i\in \underline n$ being given numbers. Our primary purpose is to derive some verifiable criteria of absolute exponential stability for this class of time-varying nonlinear systems. Similar results will be established also for time-varying difference systems. What is more, it is remarkable that when applied to the particular cases of time-invariant systems, the obtained results yield novel criteria of absolute exponential stability which improve the existing ones. Differently from the previous works which are mainly based on the Lyapunov functions (or Lyapunov-Krasovskii functions) method, we will employ the comparison principle in deriving the main results. It is worthy to mention that the comparison approach was proved to be a quite effective tool in stability analysis, particularly for time-varying and switched systems. The readers are referred to \cite{zhang2012, Liu2018,Tian_Sun,son_ngoc_2022} and the literature given therein on this topic. 
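To make the sector condition \eqref{sector2} concrete, the short Python sketch below checks numerically that a toy diagonal nonlinearity lies in $K[\delta,\beta]$; the particular bounds and the functions $f_i$ are our own illustrative choices, not data taken from the results below.
\begin{verbatim}
import numpy as np

delta = np.array([0.5, 1.0])     # toy sector bounds, delta_i <= beta_i
beta  = np.array([2.0, 3.0])

def f(x):
    # componentwise f_i(x_i) = x_i*(delta_i + (beta_i - delta_i)*sin(x_i)^2),
    # so delta_i*x_i^2 <= x_i*f_i(x_i) <= beta_i*x_i^2 for every x_i
    return x * (delta + (beta - delta) * np.sin(x) ** 2)

# check the inequalities defining K[delta, beta] on a grid of nonzero points
g = np.linspace(-10.0, 10.0, 4001)
g = g[g != 0.0]
X = np.stack([g, -0.7 * g], axis=1)           # arbitrary nonzero test vectors
P = X * f(X)                                  # entries x_i * f_i(x_i)
print(bool(np.all(delta * X**2 - 1e-12 <= P) and
           np.all(P <= beta * X**2 + 1e-12)))   # True
\end{verbatim}
Any such smooth $f$ satisfies the sector bounds, while the linear choice $f_i(x_i)=\delta_i x_i$ shows that the sector $K[\delta,\beta]$ is never empty.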
Basically, this approach is based on comparing the original system with a positive comparison system of the same dimension and using the stability characterization of the latter (which is supported by the powerful Perron-Frobenius theory, see e.g. \cite{Chelaboina2004, Ngoc_IEEE, blanc_valcher}) to conclude on the stability of the original system. \noindent {\it Notation.} Throughout the paper, $\R, \R_+,\Z, \Z_+$ will stand for the sets of real numbers, nonnegative real numbers, integers and nonnegative integers, respectively. For $t_1,t_2 \in \Z$ with $ t_1<t_2, \Z_{[t_1,t_2]}$ denotes the interval in $\Z: \Z_{[t_1,t_2]}= \{t_1,t_1+1,\ldots, t_2-1,t_2\}$. For an integer $m, \underline m$ denotes the set of numbers $ \{1,2,\ldots, m \}$. Inequalities between vectors and matrices are understood componentwise: for vectors $x=(x_i), y=(y_i) \in \R^n$ we write $x\geq y$ and $x\gg y$ iff $x_i \geq y_i$ and $x_i> y_i$, for all $i\in \underline n,$ respectively. We denote $|x|=(|x_i|)$, and $x^{\top}$ is the transpose of $x$. Without loss of generality, the norm of vectors $x\in \R^n$ is assumed to be the $1$-norm: $\|x\|=\sum_{i=1}^n|x_i|$. A matrix $A\in \R^{n\times n}$ is said to be {\it Hurwitz stable} if $\text{Re}\lambda <0$ and {\it Schur stable} if $|\lambda|<1$, for any root $\lambda $ of the characteristic polynomial, i.e. $ \det(\lambda I-A)=0$. A matrix $A=(a_{ij})\in \R^{n\times n}$ is said to be a Metzler matrix if $a_{ij}\geq 0$ for all $i\not=j$. The following stability characterization of Metzler and nonnegative matrices is useful in the stability analysis of positive systems, see e.g. \cite{Berman}. \begin{lemma}\label{metzmatrix} Assume that $A=(a_{ij})\in \R^{n\times n}$ is a Metzler matrix (resp., a nonnegative matrix). Then $A$ is Hurwitz stable (resp., Schur stable) if and only if there exists a positive vector $\zeta \gg 0$ such that $A\zeta \ll 0 $ (resp., $A\zeta \ll \zeta$). \end{lemma} Finally, for a continuous function $\psi:\R \rightarrow \R$ the upper right Dini derivative is defined as \begin{equation}\label{dini} D^+\psi(t)= \limsup_{\delta\rightarrow 0^+}\frac{\psi(t+\delta)-\psi(t)}{\delta}. \end{equation} Useful properties of the Dini derivative can be found in (\cite{Rouche}, Appendix 1). \section{Main Results} Consider the time-varying nonlinear system \eqref{TVS}-\eqref{sector2}. By continuity of $f$, it is obvious that $f(0)=0$ and therefore the system \eqref{TVS} has the zero solution $x(t)\equiv 0, t\geq 0$, for any $f\in K[\delta,\beta]$. Moreover, as a convention, the function $f\in K[\delta,\beta]$ is assumed to satisfy some Lipschitz or differentiability conditions that assure the global existence and uniqueness of the solution of \eqref{TVS}, for any initial condition. In what follows, such a function $f$ is called an admissible nonlinearity. \begin{definition}\label{AES} The zero solution $x(t)\equiv 0$ of the time-varying nonlinear system \eqref{TVS} is said to be absolutely exponentially stable (shortly, AES) if there exist positive numbers $M, \lambda $ such that for any admissible nonlinearity $f\in K[\delta,\beta] $ and any non-zero continuous function $ \varphi\in C([-h,0],\R^n)$ the solution $x(t)= x(t,\varphi) $ of \eqref{TVS} with the initial condition $ x(\theta)= \varphi(\theta), \ \theta\in [-h,0], $ satisfies \begin{equation} \label{conditiondef1} \|x(t)\|= \|x(t,\varphi)\|\leq M e^{-\lambda t}\|\varphi\|, \quad \forall t\geq 0.
\end{equation} Such a number $\lambda $ is called the exponential decay rate of $x(t)$. \end{definition} \noindent Define the Metzler matrix function $ \widehat A(t) =(\widehat a_{ij}(t)),\ t\geq 0,$ by setting \begin{equation}\label{metzler} \widehat a_{ii}(t)\!=\! a_{ii}(t),\ \widehat a_{ij}(t)\!=\! |a_{ij}(t)|, j\not=i,\ i,j\in \underline n. \end{equation} The main contribution of this paper is the following \begin{theorem}\label{main1} Consider the time-varying system \eqref{TVS} with admissible nonlinearities $f\in K[\delta,\beta]$ where $ \beta = (\beta_1,\ldots,\beta_n)^{\top}$, $\delta:=(\delta_1,\ldots,\delta_n)^{\top} \in \R^n$ are given positive vectors such that $\beta_i\geq \delta_i>0,\ \forall i\in \underline n$. Assume that there exists a nonnegative $n\times n$-matrix $\bar B =( \bar b_{ij}) \geq 0$ such that \begin{equation}\label{barB} |B(t)|\leq \bar B, \ \forall t\geq 0. \end{equation} Then the zero solution of the system \eqref{TVS} is AES if there exist an $n$-dimensional vector $\xi :=(\xi_{1},\xi_{2},\ldots,\xi_{n})^{\top}\gg 0$ and a real number $\alpha >0$ such that \begin{equation}\label{cond1} \bigg( D_{\delta} \widehat A^{\top}(t)+ e^{\alpha h}D_{\beta}\bar B^{\top}\bigg)\xi \leq-\alpha \xi, \forall t \geq 0, \end{equation} where $D_{\delta}$ and $D_{\beta}$ are diagonal matrices defined, respectively, as $ D_{\delta}= \text{\rm diag}(\delta_1, \delta_2, \ldots, \delta_n), \ D_{\beta}= \text{\rm diag}(\beta_1, \beta_2, \ldots, \beta_n)$. Moreover, in this case, the exponential decay rate is $\alpha$. \end{theorem} \textit{Proof.} Let $x(t)$ be the solution of \eqref{TVS}, with a nonlinearity $f\in K[\delta,\beta]$ and a non-zero initial function $\varphi\in C([-h,0],\R^n). $ Then, for each $i\in \underline n$, \begin{equation}\label{mean} \dot x_i(t)\! =\! \sum_{j=1}^na_{ij}(t)f_j(x_j(t))\!+\! \sum_{j=1}^nb_{ij}(t)f_j(x_j(t\!-\!h)), \ t\geq 0. \end{equation} Assume that \eqref{cond1} holds for some $\alpha>0, \xi\gg 0 $. Then it readily implies \begin{equation}\label{aii} \sum_{i=1}^n \widehat a_{ij}(t)\xi_i < 0,\ \forall t\geq 0, \ \forall j\in \underline n. \end{equation} In order to make use of the comparison principle, let us define the numbers \begin{equation}\label{d1d2} d_1 := \min_{i\in \underline n}\xi_{i},\ \ \ d_2:= \max_{ i\in \underline n}\xi_i, \end{equation} and the continuous nonnegative functions $v_0(t), v_1(t),$ by setting \begin{equation*} v_0(t)= \sum_{i=1}^n\xi_{i}|x_i(t)|= \xi^{\top}|x(t)|, \ \text{for}\ t\geq -h, \end{equation*} \begin{equation*} v_1(t) = \sum_{i=1}^n\xi_{i}|\varphi_{i}(t)| \!+\!e^{\alpha h} \sum_{i=1}^n \sum_{j=1}^n \int_{-h}^0e^{\alpha \theta}\bar b_{ij} \xi_{i}\beta_j|\varphi_j(\theta)|d\theta, \end{equation*} for $ t\in [-h,0]$, and \begin{equation*} v_1(t) = v_0(t)+ e^{-\alpha(t-h)} \sum_{i=1}^n \sum_{j=1}^n \int_{t-h}^te^{\alpha s}\bar b_{ij}\xi_i\beta_j|x_j(s)|ds, \end{equation*} for $t\ge 0$. Define, moreover, the following comparison function \begin{equation}\label{v1} y_{\alpha}(t) = L_1e^{-\alpha t}\|\varphi\|, \ \text{for}\ t\geq -h, \end{equation} where $L_1$ is an arbitrary positive number satisfying \begin{equation}\label{Lrho} d_2 +he^{\alpha h}\sum_{i=1}^ n{\sum_{j=1}^n \bar{b}_{ij}\;\xi_i\;\beta_j} < L_1. \end{equation} Then we obviously have \begin{equation}\label{ineq} d_1\sum_{i=1}^n|x_i(t)|\leq v_0(t)\leq v_1(t), \ \forall t\geq 0.
\end{equation} Moreover, it follows immediately from the definition of $v_0,v_1,y_{\alpha}$ and \eqref{d1d2}, \eqref{Lrho} that \begin{equation} \label{defk} v_1(t) < L_1\|\varphi\| \leq y_{\alpha}(t), \ \text{for} \ t\in [-h,0]. \end{equation} Our goal is to prove that the above inequality holds true for all $t\geq 0$, that is \begin{equation}\label{goal_0} v_1(t)\leq y_{\alpha}(t) = L_1e^{-\alpha t} \|\varphi\|, \ \forall t\geq 0. \end{equation} To this end, taking an arbitrary positive number $\alpha_{\epsilon}\in (0,\alpha)$, we will prove that \begin{equation}\label{goal} v_1(t)\leq y_{\alpha_{\epsilon}}(t) = L_1e^{-\alpha_{\epsilon} t} \|\varphi\|, \ \forall t\geq 0. \end{equation} Assume to the contrary that \eqref{goal} does not hold. This implies that the set $T_{0}:=\{t\in (0,+\infty): v_{1}(t)> y_{\alpha_{\epsilon}}(t)\}$ is nonempty. Then, denoting $\bar t_0= \inf \{ t\in T_{0} \}$, we have, by the continuity and \eqref{defk}, that $\bar t_0>0 $ and \vspace{-0.2cm} \begin{equation}\label{equal} v_1(t)\leq y_{\alpha_{\epsilon}}(t),\ \forall t\in [-h,\bar t_0),\ v_1(\bar t_0)= y_{\alpha_{\epsilon}}(\bar t_0), \end{equation} and there exist a sequence $t_k\downarrow \bar t_0$ such that \begin{equation}\label{io} v_1(t_k) >y_{\alpha_{\epsilon}}(t_k), k=1,2, \ldots \end{equation} Since $f\in K[\delta,\beta] $, it follows immediately from \eqref{sector2} that, for each $i,j\in \underline n$ and $j\not= i$ we have \begin{equation}\label{deltabeta} \begin{split} \delta_i|x_i|\leq f_i(x_i)\ \text{sign} x_i = |f_i(x_i)| \ \ \text{and} \\ f_j(x_j)\ \text{sign} x_i\leq |f_j(x_j) |\leq \beta_j|x_j|. \end{split} \end{equation} Therefore, by using \eqref{metzler}, \eqref{barB}, \eqref{mean}, \eqref{deltabeta}, we get, for each $ t\in [0,\bar t_0], $ \begin{align*} & D^+v_0(t)= \sum_{i=1}^n \xi_{i}D^+ |x_i(t)| \leq \sum_{i=1}^n \xi_{i} \text{sign}(x_i(t))\dot x_i(t) \notag \\ & \leq \sum_{i=1}^n\xi_{i}\bigg(a_{ii}(t)|f_i(x_i(t))|+ \sum_{j\not=i}^n |a_{ij}(t)|\;|f_j(x_j(t))|\bigg) \\ & + \sum_{i=1}^n\xi_{i}\sum_{j=1}^n|b_{ij}(t)|\;|f_j(x_j(t-h))|\notag\\ &=\sum_{j=1}^n \sum_{i=1}^n\bigg( \xi_{i}\widehat a_{ij}(t)|f_j(x_j(t))| + \xi_i|b_{ij}(t)||f_j(x_j(t-h))|\bigg)\\ &\stackrel{\eqref{aii} }\leq \sum_{j=1}^n \sum_{i=1}^n\bigg( \xi_{i}\widehat a_{ij}(t)\delta_j|x_j(t)| + \xi_i \bar{b}_{ij} \beta_j|x_j(t-h)| \bigg). \end{align*}Consequently, by the definition of $v_1$ and the properties of the Dini derivative $D^+$ (see, e.g. \cite{Rouche}, Appendix 1) we can deduce, for each $ t\in [0,\bar t_0], $ \vspace{-0.2cm} \begin{align*} &D^+v_1( t)\!\leq\!\sum_{j=1}^n \sum_{i=1}^n\bigg( \xi_{i}\widehat a_{ij}(t)\delta_j|x_j(t)| \!+\! \xi_i \bar{b}_{ij} \beta_j|x_j(t-h)| \bigg) \\ &+(-\alpha)\left(v_1(t)-v_0(t)\right) +\\ & +e^{\alpha h} \sum_{i=1}^n \sum_{j=1}^n \xi_{i} \bar b_{ij}\beta_j|x_j(t)| - \sum_{i=1}^n \sum_{j=1}^n \xi_{i}\bar b_{ij} \beta_j|x_j(t\!-\!h)| \notag\\ &=-\alpha v_1(t)+\alpha v_0(t)\\ &+\sum_{j=1}^n \left[\left(D_{\delta}\widehat{A}^{\top}(t)+e^{\alpha h}D_{\beta}\bar{B}^{\top}\right)\xi\right]_j|x_j(t)|\\ &\stackrel{\eqref{cond1}}\leq -\alpha v_1(t)+\alpha v_0(t)+\sum_{j=1}^n (-\alpha)\xi_j|x_j(t)| =-\alpha v_1(t). \end{align*} By virtue of \eqref{equal}, the last inequality implies that\begin{align*} D^+v_1(\bar{t}_0)&\leq -\alpha v_1(\bar t_0)= -\alpha y_{\alpha_{\epsilon}}(\bar{t}_0) \notag\\ &<-\alpha_{\epsilon} y_{\alpha_{\epsilon}}(\bar{t}_0)= \frac{d}{dt}y_{\alpha_{\epsilon}}(\bar t_0). 
\end{align*} On the other hand, by the definition of $D^+$ and \eqref{equal}, \eqref{io}, we have \begin{align*} D^+v_1(\bar t_0)&\geq \limsup_{t_k\downarrow \bar t_0}\dfrac{v_1(t_k)- v_1(\bar t_0)}{t_k-\bar t_0}\notag\\ &\geq \limsup_{t_k\downarrow \bar t_0}\dfrac{y_{\alpha_{\epsilon}}(t_k)-y_{\alpha_{\epsilon}}(\bar t_0)}{t_k-\bar t_0}=\dfrac{d}{dt}y_{\alpha_{\epsilon}}(\bar t_0), \end{align*} which contradicts the above strict inequality. Thus, \eqref{goal} is proved. Now, letting $\alpha_{\epsilon} \uparrow \alpha$ in \eqref{goal}, we obtain \eqref{goal_0}, which together with \eqref{ineq} implies that \begin{equation}\label{tau_0} \|x(t)\| \leq \frac{v_0(t)}{d_1} \leq Me^{-\alpha t}\|\varphi\|, \ \forall t \geq 0, \end{equation} where $ M:= L_1/d_1$. Therefore, \eqref{conditiondef1} holds, for any $ \varphi \in C([-h,0],\R^n)$ and any admissible nonlinearity $f\in K[\delta,\beta]$, completing the proof. As an immediate consequence of Theorem \ref{main1} we obtain the following criterion of AES for time-varying nondelay systems. \begin{corollary}\label{cor-Persid} Consider the time-varying nonlinear system \begin{equation}\label{nondelay} \dot x =A(t)f(x(t)), \ t\geq 0, \end{equation} where $f\in K[\delta,\infty)$. Assume that there exists a positive vector $\xi \gg 0$ such that \vspace{-0.2cm} \begin{equation}\label{cond5} \gamma := \max_{j\in \underline n} \sup_{t\geq 0}\sum_{i=1}^n\widehat a_{ij}(t)\xi_i \ < \ 0, \end{equation} where $\widehat A(t)$ is defined by \eqref{metzler}. Then the zero solution of \eqref{nondelay} is AES, with the exponential decay rate $\alpha, $ for any $\alpha \in (0, \frac{-\gamma\delta_0}{d_2})$, where $ \delta_0= \min_{j\in \underline n} \delta_j $ and $d_2$ is defined by \eqref{d1d2}. \end{corollary} In the case when both matrix functions $A(t), B(t)$ in \eqref{TVS} can be upper bounded by some time-invariant matrices, Theorem \ref{main1} implies the following verifiable criterion of absolute exponential stability. \begin{corollary}\label{main2} Consider the time-varying system \eqref{TVS} with admissible nonlinearities $f\in K[\delta,\beta]$. Assume that there exist a Metzler matrix $\widehat A $, a nonnegative matrix $\bar B \geq 0$ and a positive vector $\xi :=(\xi_{1},\xi_{2},\ldots,\xi_{n})^{\top}\gg 0$ satisfying \begin{equation}\label{barAB} \widehat A(t) \leq \widehat A,\ |B(t)|\leq \bar B, \ \ \forall t\geq 0, \end{equation} and \begin{equation}\label{cond2} \big( \widehat A D_{\delta} + \bar B D_{\beta} \big)^{\top}\xi \ll 0, \end{equation} where $D_{\delta}$ and $D_{\beta}$ are diagonal matrices defined as in Theorem \ref{main1}. Then the zero solution of system \eqref{TVS} is AES, with the exponential decay rate $\alpha=\alpha_{\max} >0$, which can be calculated, corresponding to each $\xi \gg 0$ satisfying \eqref{cond2}, as \begin{equation}\label{almax} \alpha_{\max}= \min_{i\in \underline n}\{\alpha_i: g_i(\alpha_i)=0\}, \end{equation} where the $ g_i(\cdot)$ are continuous functions defined by \begin{equation}\label{gi} g_i(\alpha):= \sum_{j=1}^n\big(\widehat a_{ji}\delta_i\xi_j + e^{\alpha h}\bar b_{ji}\beta_i \xi_j\big)+ \alpha \xi_i, \ i\in \underline n. \end{equation} \end{corollary} \noindent {\it Proof.} Obviously, for each $ i\in \underline n, \ g_i(\alpha)$ is continuous and monotonically increasing to $+\infty$ as $\alpha \rightarrow +\infty$ (because $g_i'(\alpha)> 0, \forall \alpha >0$).
Since $ g_i(0)<0$, due to \eqref{cond2}, it follows that the equation $g_i(\alpha)=0$ has a unique solution $\alpha_i >0 $ and the inequality $g_i(\alpha)\leq 0$ is valid for all $ \alpha \in [0,\alpha_i]$ but violated for $\alpha>\alpha_i$. This implies that \eqref{cond1} is valid for all $\alpha \in (0,\alpha_{\max}] $ but violated for $\alpha>\alpha_{\max}$. Therefore, by Theorem \ref{main1} the zero solution of the system \eqref{TVS} is AES, with the 'maximal' (for the given $\xi $) exponential decay rate $\alpha_{\max}$. \begin{corollary}\label{TIDS} Consider the time-invariant nonlinear system \begin{equation}\label{persiddelay} \dot x(t) = Af(x(t)) + Bf(x(t-h)), \ t\geq 0, \end{equation} where $A$ is a Metzler matrix, $B\geq 0$ and $f$ is any nonlinear function belonging to the sector $ K[\delta,\beta]$, defined by \eqref{sector2}. Then the zero solution of \eqref{persiddelay} is AES if there exists a positive vector $\xi \gg 0$ satisfying \begin{equation}\label{S_H} (AD_{\delta}+BD_{\beta})^{\top}\xi \ll 0. \end{equation} \end{corollary} \vspace{0.2cm} \begin{remark}\label{persidski} {\rm By Lemma \ref{metzmatrix}, \eqref{S_H} is equivalent to the Metzler matrix $AD_{\delta}+BD_{\beta}$ being Hurwitz stable. If, conversely, the time-invariant nonlinear system \eqref{persiddelay} is AES then, since obviously $f(x):= D_{\delta} x \in K[\delta,\beta]$, it follows that the linear positive system $\dot x(t)= AD_{\delta}x(t)+BD_{\delta}x(t-h), \ t\geq 0$ is exponentially stable. This, in turn, is equivalent (see, e.g. \cite{Chelaboina2004}) to the existence of a positive vector $\xi \gg 0$ such that $ D_{\delta}(A+B)^{\top}\xi = (AD_{\delta}+BD_{\delta})^{\top}\xi \ll 0$, which readily implies \begin{equation}\label{A_S} (A+B)^{\top}\xi \ll 0, \end{equation} or, equivalently, that the Metzler matrix $A+B$ is Hurwitz stable (by Lemma \ref{metzmatrix}). Thus, in the nondelay case (i.e. $B=0$), Corollary \ref{TIDS} is an extension of the main result of \cite{persidskii2005}. } \end{remark} \begin{remark}\label{positive} {\rm Note that, under the assumption of Corollary \ref{TIDS}, the nonlinear system \eqref{persiddelay} is \textit{positive}, which means that $x(t)=x(t, \varphi)\geq 0, \forall t\geq 0$ for any nonnegative initial function $\varphi \in C([-h,0],\R^n_+)$ (see, e.g. \cite{zhang_zhao2016}, Lemma 4). Therefore, in view of Corollary \ref{TIDS}, Corollary \ref{main2} amounts to saying that the zero solution of the time-varying system \eqref{TVS}-\eqref{sector2} is AES if its associated upper-bounding (in the sense of \eqref{barAB}) time-invariant positive nonlinear system \begin{equation*} \dot x= \widehat A f(x)+ \bar {B}f(x(t-h)), \ t\geq 0, \end{equation*} is AES, under the same sector constraints. } \end{remark} \begin{remark}\label{alex} {\rm It is important to mention that problems of absolute asymptotic stability have been considered for time-invariant delay systems of the form \eqref{persiddelay} in a number of previous works, including those with switchings. In particular, it has been established (see, e.g. \cite{alex_mason2014,sun_wang2013}) that, for any nonlinearity $f\in K(0,\infty)$, the solution $x(t)$ of the positive system \eqref{persiddelay} satisfies $\|x(t)\|\rightarrow 0 $ as $t\rightarrow +\infty$ if there exists $\xi \gg 0$ such that \eqref{A_S} holds.
This condition, however, is not sufficient to assure the absolute exponential stability of the zero solution (as asserted in Corollary \ref{TIDS}, where, however, a more restrictive nonlinearities class $K[\delta,\beta]\subset K(0,\infty)$ is assumed). Indeed, let us consider the scalar nondelay positive system $\dot x=-f_0(x)$, where $f_0(x)=x^3 $ for $|x|\leq 1 $ and $f_0(x)=x$ for $ |x|>1.$ Then, clearly, $f_0\in K(0,\infty)$ and \eqref{A_S} holds for $\xi=1$. However, the zero solution $x=0$, while being a globally asymptotically stable equilibrium, is not exponentially stable (e.g., by Theorem 4.6 in \cite{khalil}, p. 184). Note that $f_0$ does not belong to $K[\delta,\beta]$ for any $\delta >0$ and any $\beta\geq 1> \delta$. In \cite{zhang_zhao2016} several criteria of absolute exponential stability for switched time-invariant systems were obtained, also by using the Lyapunov function method, but the conditions look much more complicated and are not easy to check. Thus, even for time-invariant systems, our results are novel, while those for the time-varying case as presented in this paper have not yet appeared in the existing literature, to the best of our knowledge.} \end{remark} \section{Some extensions of the main results} The approach developed in the previous section can be extended to obtain criteria of AES when the system's equation contains time-varying nonlinearities or multiple discrete delays. We just formulate the results, omitting the proofs, because they are largely similar to that of Theorem \ref{main1}. First, consider a more general model related to the Persidskii-type system \eqref{TVS} that has the form: \vspace{-0.2cm} \begin{equation} \label{system_gen} \dot{x}_i(t)=\sum_{j=1}^{n}a_{ij}(t)f_{ij}(x_j(t),t)+\sum_{j=1}^{n}b_{ij}(t)f_{ij}(x_j(t-h),t), \end{equation} for $t\geq 0, i=1,\ldots, n$, where the functions $f_{ij}(\cdot, \cdot): \R\times \R_+\rightarrow \R$ are assumed to satisfy, for all $i,j \in \underline n$, \begin{equation} \label{sector_2} f_{ij}(x_j, t)x_j>0, \ \forall x_j\ne 0, \text{ and } f_{ij}(0, t)=0, \forall t\geq 0. \end{equation} Then, similarly to Theorem 3.2.10 of \cite{KaszBhaya}, we have the following extension of Theorem \ref{main1}. \begin{theorem}\label{main_2} Consider the time-varying system \eqref{system_gen}-\eqref{sector_2} and assume that there exists an admissible nonlinearity $f=(f_1, f_2, ..., f_n)\in K[\delta,\beta]$, defined as in \eqref{sector2}, such that, for all $i,j\in \underline n$, the following diagonal dominance type conditions are satisfied \begin{equation*} \left|f_{ij}(x_j,t)\right|\leq \left|f_{j}(x_j)\right | \leq \left|f_{jj}(x_j,t)\right|,\ \forall \ i\ne j, \ \forall x_j\in \R,\ \forall t\geq 0. \end{equation*} Then the zero solution of the system \eqref{system_gen}-\eqref{sector_2} is AES if there exist an $n$-dimensional vector $\xi :=(\xi_{1},\xi_{2},\ldots,\xi_{n})^{\top}\gg 0$ and a real number $\alpha >0$ such that \eqref{cond1} holds. \end{theorem} Next, consider a time-varying nonlinear system with multiple delays of the form \vspace{-0.2cm} \begin{equation}\label{multiple} \dot x= A(t)f(x(t))+ \sum_{l=1}^m B_l(t)f(x(t-h_l)),\ t\geq 0, \end{equation} \noindent where $A(\cdot), B_l(\cdot)$ are continuous matrix functions and $f\in K[\delta,\beta]$ is an admissible sector-bounded nonlinearity defined by \eqref{sector2}. It is assumed, without loss of generality, that $0<h_1<h_2<\ldots < h_m=h$.
Furthermore, assume that there exist constant matrices $ \widetilde B_l =(\widetilde b_{l,ij})\in \R^{n\times n}$ such that \begin{equation}\label{upper} | B_l(t)| \leq \widetilde{B}_l, \ \forall t \geq 0,\ \forall l \in \underline m. \end{equation} Then, the following criterion of AES holds for the system \eqref{multiple}. \begin{theorem}\label{theorem_multiple} Assume that there exist $n$-dimensional vector $\xi=(\xi_1,\xi_2,\ldots,\xi_n)^{\top}\gg 0$ and a real number $\alpha >0$ such that \begin{equation}\label{con-multi} \bigg( D_{\delta} \widehat A^{\top}(t)+ \sum_{l=1}^m e^{\alpha h_l}D_{\beta}\widetilde{B}_l^{\top}\bigg)\xi \leq -\alpha \xi,\ \forall t \geq 0, \end{equation} \noindent where the matrix function $\widehat A(t)$ and the diagonal matrices $D_{\delta}, D_{\beta}$ are defined as in Theorem \ref{main1}. Then the zero solution of the delay nonlinear system \eqref{multiple} with sector nonlinearity $f\in K[\delta,\beta]$ is AES. Moreover, in this case, the exponential decay rate is $\alpha$. \end{theorem} \section{Time-Varying Nonlinear Difference Systems with delays} Consider time-varying nonlinear difference system with delays of the form \begin{equation} \label{discrete_system} x(k\!+\!1)\!=\! A(k)f(x(k))+B(k)f(x(k-h)), \; k\in \mathbb{Z}_+, \end{equation} where $h\in \Z_+$ is a given number, $A(k), B(k) : \Z_+ \rightarrow \R^{n\times n}$ are given matrix functions and $f: \R^n\rightarrow \R^n$ is a nonlinear diagonal function belonging to the bounded sector of the form \begin{equation}\label{sect_disc} K(0,\beta]:= \{f: 0<x_if_i(x_i) \leq \beta_ix_i^2, \forall x_i\not=0, i\in \underline n\}, \end{equation} where $\beta_i >0, i=1,\ldots, n$ are given positive numbers. Such a nonlinear function $f$ is called admissible sector nonlinearity. It is easy to verify that $f\in K(0,\beta]$ if and only if \begin{equation}\label{sector_1} 0<x_if_i(x_i), \ 0< |f_i(x_i)|\leq \beta_i|x_i|\ \ \text{for}\ \ x_i\not=0, \ i\in \underline n. \end{equation} Let $S[-h,0]$ be the Banach space of functions $\varphi: \mathbb{Z}_{[-h, 0]}\rightarrow \mathbb{R}^n$ equipped with norm $ \|\varphi\|=\max_{k\in \mathbb{Z}_{[-h, 0]}}\|\varphi(k)\| $ and $x(k):=x(k, \varphi), k\in \Z_+$ be the solution of \eqref{discrete_system} satisfying the initial condition $x(k)= \varphi (k), \ \ k\in \Z_{[-h,0]}.$ We will say that the system \eqref{discrete_system} is absolutely exponentially stable (AES), with the convergence rate $\lambda \in (0, 1) $, if for a number $L>0$, \begin{equation}\label{eqdf1} \|x(k)\|=\left\| x(k, \varphi) \right\| \leq L \;\lambda^k \; \|\varphi\|, \ \forall k\in \mathbb{Z}_+, \end{equation} for any $\varphi \in S[-h,0]$ and any admissible sector nonlinearity $f\in K(0,\beta]$. \begin{theorem} \label{main_discrete} Assume that there exists a vector $ \xi\in \R^n, \xi \gg 0$ and $\lambda\in (0,1)$ satisfying \vspace{-0.2cm} \begin{equation}\label{pk} \bigg(|A(k)|+\lambda^{-h}|B(k)| \bigg) D_{\beta}\xi\leq \lambda \xi,\; \forall k\in \mathbb{Z}_+, \end{equation} where $D_{\beta}$ is the diagonal matrix $D_{\beta}= \text{diag}(\beta_1,\ldots, \beta_n)$. Then the system \eqref{discrete_system} is AES with the convergence rate $\lambda$. \end{theorem} {\it Proof.} Let $\xi \gg 0$ and $\lambda \in (0,1)$ satisfy \eqref{pk} and $ f \in K(0,\beta]$ be an arbitrary admissible nonlinearity. 
Let $x(\cdot)$ be the solution of \eqref{discrete_system} satisfying the initial condition $x(k)= \varphi (k), \ \ k\in \Z_{[-h,0]}.$ Setting $L_0= (\min_{j\in \underline n}\xi_j)^{-1}$ and $u_i(k)=L_0\xi_i\lambda^k \|\varphi\|,\ k\in \mathbb{Z}_{[-h, +\infty)}, \ i\in \underline{n},$ then we have immediately \begin{equation}\label{xkpast1} |x_i(k)|\leq u_i(k), \ \ \forall k\in \mathbb{Z}_{[-h, 0]}, \forall i\in \underline n. \end{equation} Clearly, to prove the theorem, it suffices to show that \begin{equation} \label{in_discrete1} |x_i(k)|\leq u_i(k), \forall k\in \mathbb{Z}_+, \forall i \in \underline n \end{equation} (because then \eqref{eqdf1} holds with $L=L_0\|\xi\|$). Assume to the contrary that \eqref{in_discrete1} does not hold. Then, in view of \eqref{xkpast1}, it follows that there exists $k_0>0$, $i_0\in \underline{n}$ such that \begin{equation} \label{in_discrete} |x_i(k)|\leq u_i(k), \forall k\in \mathbb{Z}_{[-h, k_0)}, \forall i\in \underline{n}, \end{equation} and \begin{equation} \label{assume_proof} |x_{i_0}(k_0)|> u_{i_0}(k_0).\end{equation} Then, we can deduce \begin{align*} |x_{i_0}(k_0)| &\stackrel{\eqref{discrete_system},\eqref{sector_1}}\leq\sum_{j=1}^n|a_{i_0j}(k_0-1)| \; |x_j(k_0-1)|\beta_j +\sum_{j=1}^n|b_{i_0j}(k_0-1)| \; |x_j(k_0-1-h)|\beta_j\\ &\stackrel{\eqref{xkpast1}, \eqref{in_discrete}}\leq\sum_{j=1}^n |a_{i_0j}(k_0-1)|\xi_jL_0\lambda^{k_0-1}\|\varphi\|\beta_j +\sum_{j=1}^n\lambda^{-h}|b_{i_0j}(k_0-1)| \xi_jL_0\lambda^{k_0-1}\|\varphi\|\beta_j \stackrel{\eqref{pk}} \leq u_{i_0}(k_0). \end{align*} This, however, conflicts with \eqref{assume_proof} and completes the proof. Theorem \ref{main_discrete} can be extended to the case of several delays as follows. \begin{theorem}\label{th5} Consider time-varying nonlinear difference systems with delays of the form \begin{equation} \label{multdelay_disc} x(k\!+\!1)\!=\! A(k)f(x(k)) \!+\!\sum_{l=1}^mB_l(k)f(x(\!k-h_l)), k\in \mathbb{Z}_+, \end{equation} and $A(\cdot), B_l(\cdot), l\in\underline{ m} $ are given matrix functions on $\Z_+, 0<h_1< ...<h_m$ are given positive numbers and the nonlinearities $f$ belong to the bounded sector $ K(0,\beta]$ defined by \eqref{sect_disc}. Then the system \eqref{multdelay_disc} is AES if there exist vector $ \xi \gg 0$ and $\lambda\in (0,1)$ such that, \begin{equation}\label{pk0} \bigg(|A(k)|+ \sum_{l=1}^m\lambda^{-h_l}|B_l(k)|\bigg) D_{\beta}\xi \leq \lambda \xi,\ \forall k\in \Z_+. \end{equation} \end{theorem} \vspace{0.2cm} Similarly to the continuous-time case, it is easy to show that for the system \eqref{multdelay_disc} to be positive, it is necessary and sufficient that all matrices $A(k), B_l(k)=(b_{l,ij}(k)), k\in \Z_+, l\in \underline m$ are nonnegative. The following consequence of Theorem \ref{th5} gives a delay-independent criterion of AES for positive difference systems. The proof is similar to that of Corollary \ref{main2}. \begin{corollary} \label{alex_mason} The delay positive nonlinear difference system \begin{equation} \label{discrete_system2} x(k+1)= Af(x(k)) +\sum_{l=1}^mB_lf(x(k-h_l)), \; k\in \mathbb{Z}_+, \end{equation} with sector-bounded nonlinearity $ f\in K(0,\beta]$ is AES if there exists a vector $ \xi \gg 0$ satisfying \begin{equation}\label{pk1} \big(A+B_1+... 
+ B_m\big) D_{\beta}\xi-\xi\ll 0. \end{equation} Moreover, in this case, the maximal convergence rate $\lambda_{\max} \in (0,1) $ can be calculated as $\lambda_{\max}:=\max_{i\in \underline{n}}\lambda_i$ where $\lambda_i\in (0,1) $ is the unique solution of the equation $g_i(\lambda)= \sum_{j=1}^n a_{ij}\beta_j\xi_j+ \sum_{l=1}^{m}\lambda^{-h_l}\sum_{j=1}^n b_{l,ij}\beta_j\xi_j-\lambda\xi_i=0.$ Moreover, the above AES property holds true for {\it any time-varying nonlinear difference system} of the form \eqref{multdelay_disc}, whenever the system's matrix functions $A(\cdot), B_l(\cdot), l\in \underline m$ satisfy \begin{equation}\label{bound} |A(k)|\leq A,\ |B_l(k)|\leq B_l,\ \forall k\in \Z_+, \ \forall l\in\underline m. \end{equation} \end{corollary} \vspace{0.5cm} Corollary \ref{alex_mason} improves considerably the result of \cite{alex_mason2014} (Theorem 6.2), which only proved, equivalently, that the time-invariant delay positive system \eqref{discrete_system2}, with $\beta=(1, 1, ...,1)^{\top}$, is absolutely {\it asymptotically stable } if \eqref{pk1} holds. \vspace{-0.2cm} \section{Illustrative examples} \begin{example}\label{Theorem2} {\rm We consider the time-varying system of the form \eqref{TVS}, where $n=2$, $h=1$, $f\in K[\delta, \beta]$, with $\delta=(\frac{1}{3}, \frac{1}{2})^{\top}$, $\beta=(\frac{3}{2}, 2)^{\top}$ and, for $t\geq 0$, \begin{equation*} \begin{split} &A(t)= \widehat{A}(t)= \begin{bmatrix}-4t-12&0\\t &-2t-5 \end{bmatrix}, \\ &B(t)=\begin{bmatrix}\frac{1}{3} \sin t&\frac{1}{8} \cos t\\\frac{1}{3} e^{-t}\cos t&\frac{1}{8} e^{-t}\sin t \end{bmatrix}. \end{split} \end{equation*} Clearly, $ |B(t)|\leq \bar{B}= \begin{bmatrix}\frac{1}{3} &\frac{1}{8} \\\frac{1}{3} &\frac{1}{8} \end{bmatrix}, \forall t\geq 0. $ Then, taking $\alpha=1$, $\xi=\begin{bmatrix} 1&1\\ \end{bmatrix}^{\top}$, we can check that, for all $t\geq 0$, \begin{equation*} \left[D_{\delta}\widehat{A}^{\top}(t)+e^{\alpha h}D_{\beta}\bar{B}^{\top}\right]\xi=\begin{bmatrix}-t-4+e\\-t-\frac{5}{2}+\frac{e}{2} \end{bmatrix}\ll \begin{bmatrix}-1\\-1 \end{bmatrix}=-\alpha \xi. \end{equation*} Then, by Theorem \ref{main1}, we conclude that the zero solution of the system under consideration is AES, with the exponential decay rate $\alpha=1$. Note that the above matrix function $A(t)$, $t\geq 0$, cannot be upper bounded by any constant matrix, so that the results of \cite{sun_wang2013, alex_mason2014} cannot be applied in this case.} \end{example} \begin{example} {\rm Consider the time-varying nonlinear difference system \eqref{multdelay_disc} where $m=1, h=1, f\in K (0, \beta]$ with $\beta=(\frac{1}{8}, \frac{1}{14})^{\top}$ and, for all $k\in \mathbb{Z}_+$, \begin{equation*} A(k)=\begin{bmatrix}-\sin k&2e^{-3k}\\ 3\cos k&-\sin k \end{bmatrix}, \ \ B_1(k)=\begin{bmatrix}\frac{1}{2}e^{-k} &\frac{1}{3}\sin k \\ \frac{1}{2} e^{-2k}&\frac{1}{4} \cos k \end{bmatrix}. \end{equation*} It is easy to see that, for all $k\in \mathbb{Z}_+$, \begin{equation*} |A(k)|\leq A=\begin{bmatrix}1&2\\ 3&1 \end{bmatrix},\ \ |B_1(k)|\leq B_1=\begin{bmatrix}\frac{1}{2}& \frac{1}{3} \\ \frac{1}{2}&\frac{1}{4} \end{bmatrix}. \end{equation*} Taking $\xi=\begin{bmatrix} 1&1\\ \end{bmatrix}^{\top}$, it can be verified that \begin{equation} \begin{split}\bigg(A+B_1\bigg) D_{\beta}\xi=\begin{bmatrix} \frac{3}{16}+\frac{7}{42}\\ \frac{7}{16}+\frac{5}{56}\\ \end{bmatrix}\ll \begin{bmatrix} 1\\ 1\\ \end{bmatrix}= \xi.
\end{split} \end{equation} Therefore, by Corollary \ref{alex_mason}, the system \eqref{multdelay_disc} is AES and, moreover, the 'maximal' convergence rate is $\lambda_{\max}=0.5840213813$. } \end{example} \vspace{-0.3cm} \section{Conclusion} We have presented a number of verifiable sufficient conditions of absolute exponential stability for different classes of delay time-varying nonlinear systems with sector-bounded nonlinearity. Differently from the traditional approach which is based on using Lyapunov-Krasovskii functionals, our analysis makes use of comparison principle and the stability characterization of positive upper bounding systems. The results have been obtained for both the continuous-time and discrete-time cases. When applied to the time-invariant systems, the obtained results have been shown to cover and improve the stability criteria in existing literature. There are several possibilities for developing the work described here. In particular, it would be of interest to investigate whether our comparison analysis can be adapted to deal with systems with time-varying delays or to address the absolute exponential stability of time-varying Luri'e systems. The application of the results obtained in this paper for studying absolute exponential stability of time-varying nonlinear switched systems would be also an interesting and promising topic which is currently under our consideration. \section*{Acknowledgment} This work is supported partly by Vietnam Academy of Science and Technology, via the research project DLTE00.01/22-23. \vspace{-0.5cm} \begin{thebibliography}{00} \bibitem{lurie1944}A.I. Luri'e, V.N. Postnikov, ''On stability theory for controllable systems,'' \emph{ Prikl. Mat. Mekh.}, vol. 8, no. 3, pp. 246-248, 1944. \bibitem{liberzon_automat2008} M.R. Liberzon, "Essays on the absolute stability theory," \emph{Autom. Remote Control}, vol. 67, no. 10, pp. 1610-1644, 2006. \bibitem{liao_book} X. Liao, P. Yu, \emph{ Absolute Stability of Nonlinear Control Systems}. Springer Science \& Business Media, London, 2008. \bibitem{KaszBhaya} E. Kaszkurewicz, A. Bhaya,\emph{ Matrix Diagonal Stability in Systems and Computation}. Birkhauser, London, 2000. \bibitem{barbashin} E. Barbashin, "On construction of Lyapunov functions for nonlinear systems," in \emph{Proc. 1st IFAC World Congr.,} 1961, pp. 742-751. \bibitem{persidskii69} S.K. Persidskii, "Problem of absolute stability," \emph{ Autom. Remote Control}, vol.12, pp. 1889-1895, 1969. \bibitem{K_B-SIAM1993} E. Kaszkurewicz, A. Bhaya, "Robust stability and diagonal Lyapunov functions," \emph{ SIAM J. Matrix Anal. Appl.} vol. 14, no.2, pp. 508-520, 1993. \bibitem{Oliveira2002} M.C. De Oliveira, J.C. Geromel, L. Hsu, "A new absolute stability test for systems with state-dependent perturbations," \emph{ Inter. J. Robust Nonlinear Control}, vol. 14, no. 12, pp. 1209-1226, 2002. \bibitem{persidskii2005} S.K. Persidskii, "On the exponential stability of some nonlinear systems," \emph{ Ukrainian Math. J.}, vol.57, pp. 157-164, 2005. \bibitem{Efimov2021} D. Efimov, A. Aleksandrov, "On analysis of Persidskii systems and their implementations using LMIs," \emph{ Automatica}, vol. 134, 109905, 2021. \bibitem{sun_wang2013} Y. Sun, L. Wang, "On stability of a class of switched nonlinear systems," \emph{Automatica}, vol. 49, no. 1, pp. 305-307, Jan. 2013. \bibitem{alex_mason2014} A. Aleksandrov, O. Mason, " Absolute stability and Lyapunov-Krasovskii functionals for switched nonlinear systems with time-delay," \emph{ J. Franklin Inst.}, vol. 
351, no. 8, pp. 4381-4394, Aug. 2014. \bibitem{zhang_zhao2016} J. Zhang, X. Zhao, J. Huang, "Absolute exponential stability of switched nonlinear time-delay systems," \emph{ J. Franklin Inst. }, vol. 353, no. 6, pp. 1249-1267, April 2016. \bibitem{alex2021} A. Aleksandrov, "On the existence of a common Lyapunov function for a family of nonlinear positive systems," \emph{ Syst. Control Lett.}, vol. 147, 104832, Jan. 2021. \bibitem{zhang2012} X. Zhao, L. Zhang, P. Shi, M. Liu, "Stability of switched positive linear systems with average dwell time switching," \emph{ Automatica}, vol. 48, pp. 1132-1137, 2012. \bibitem{Liu2018} X. Liu, Q. Zhao, S. Zhong, "Stability analysis of a class of switched nonlinear systems with delays: a trajectory-based comparison method," \emph{Automatica }, vol. 91, pp. 36-42, May 2018. \bibitem{Tian_Sun} Y. Tian, Y. Sun, "Exponential stability of switched nonlinear time-varying systems with mixed delays: Comparison principle," \emph{ J. Franklin Inst. }, vol. 357, no.11, pp. 6918-6931, July 2020. \bibitem{son_ngoc_2022} S. Nguyen Khoa, V.N. Le, "Exponential stability analysis for a class of switched nonlinear time-varying functional differential systems", \emph{ Nonlinear Analysis: Hybrid Systems}, vol.44, 101177, 2022. \bibitem{Chelaboina2004} W.M. Haddad, V. Chellaboina, "Stability theory for nonnegative and compartmental dynamical systems with time delay," \emph{Syst. Control Lett.}, vol. 51, pp. 355-361, 2004. \bibitem{Ngoc_IEEE} P. H. A. Ngoc, "Stability of positive differential systems with delay," \emph{ IEEE Trans. Autom. Control, }, vol. 58, no. 1, pp. 203-209, Jan. 2013. \bibitem{blanc_valcher} F. Blanchini, P. Colaneri, E. Valcher, "Switched positive linear systems," \emph{Foundations and Trends in Systems and Control}, vol. 2, pp.101-273, 2015. \bibitem{Berman} A. Berman, R. J. Plemmons, \emph{ Nonnegative Matrices in Mathematical Sciences}. Academic Press, New York, 1979. \bibitem{Rouche} N. Rouche, P. Habets, M. Laloy, \emph{Stability Theory by Lyapunov Direct Method}. Springer Verlag, Berlin, 1977. \bibitem{khalil} H. Khalil, \emph{ Nonlinear Systems.} Second Ed., Prentice-Hall, Inc., Englewood-Cliffs, NJ, 1996. \end{thebibliography} \end{document}
2205.08795v2
http://arxiv.org/abs/2205.08795v2
Degenerations and order of graphs realized by finite abelian groups
\documentclass[12pt,twoside]{article} \usepackage{graphicx} \usepackage{tikz} \usepackage{tikz-cd} \usepackage{amsfonts} \usepackage{amsbsy} \usepackage{amssymb} \usepackage{float} \usepackage{amsmath,amsthm} \usepackage{verbatim} \usepackage{amsfonts} \usepackage{pst-all} \usepackage{pstricks} \usepackage[T1]{fontenc} \usepackage{multicol} \usepackage{color} \usepackage[english]{babel} \usepackage[autostyle, english = american]{csquotes} \MakeOuterQuote{"} \usepackage{lipsum} \usepackage[document]{ragged2e} \frenchspacing \usepackage{pgf} \usepackage{mathtools} \usepackage{genyoungtabtikz} \usepackage[onehalfspacing]{setspace} \usepackage[parfill]{parskip} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=blue, urlcolor=blue, citecolor=blue, pdftitle={Overleaf Example}, pdfpagemode=FullScreen, } \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \DeclareFontFamily{U}{matha}{\hyphenchar\font45} \DeclareFontShape{U}{matha}{m}{n}{ <5> <6> <7> <8> <9> <10> gen * matha <10.95> matha10 <12> <14.4> <17.28> <20.74> <24.88> matha12 }{} \DeclareSymbolFont{matha}{U}{matha}{m}{n} \DeclareMathSymbol{\wedge} {2}{matha}{"5E} \DeclareMathSymbol{\vee} {2}{matha}{"5F} \newenvironment{ytableau}[1][] \textwidth = 6.5 in \textheight = 9 in \oddsidemargin = 0.0 in \evensidemargin = 0.0 in \topmargin = 0.0 in \headheight = 0.0 in \headsep = 0.0 in \parskip = 0.2in \parindent = 0.3in \newtheorem{thm}{Theorem} \newtheorem{prop}{Proposition}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{defn}[thm]{Definition} \newtheorem{obs}[prop]{Observation} \newtheorem{lem}[thm]{Lemma} \newtheorem{pbm}[thm]{Problem} \newtheorem{conj}[thm]{Conjecture} \newtheorem{claim}[prop]{Claim} \newtheorem{note}[prop]{Note} \newtheorem{notation}[prop]{Notation} \newtheorem{rem}[prop]{Remark} \newtheorem{exm}[thm]{Example} \newtheorem{prob}[thm]{Problem} \newcommand{\A}{\mathcal{A}} \newcommand{\B}{\mathcal{B}} \newcommand{\C}{\mathcal{C}} \newcommand{\D}{\mathcal{D}} \newcommand{\E}{\mathcal{E}} \newcommand{\F}{\mathcal{F}} \newcommand{\G}{\mathcal{G}} \newcommand{\Hy}{\mathcal{H}} \newcommand{\I}{\mathcal{I}} \newcommand{\List}{\mathcal{L}} \newcommand{\M}{\mathcal{M}} \newcommand{\Mf}{\mathfrak{M}} \newcommand{\Nf}{\mathfrak{N}} \newcommand{\N}{\mathcal{N}} \newcommand{\R}{\mathcal{R}} \newcommand{\Pa}{\mathcal{P}} \newcommand{\La}{\mathcal{L}} \newcommand{\Ta}{\mathcal{T}} \newcommand{\bC}{\mathbb{C}} \newcommand{\bG}{\mathbb{G}} \newcommand{\bP}{\mathbb{P}} \newcommand{\bE}{\mathbb{E}} \newcommand{\bH}{\mathbb{H}} \newcommand{\bF}{\mathbb{F}} \newcommand{\bN}{\mathbb{N}} \newcommand{\bfa}{\mathbf{a}} \newcommand{\bfb}{\mathbf{b}} \newcommand{\bfg}{\mathbf{g}} \newcommand{\mfa}{\mathfrak{A}} \newcommand{\mfi}{\mathfrak{i}} \newcommand{\mfj}{\mathfrak{j}} \newcommand{\mcS}{\mathcal{S}} \newcommand{\bft}{\mathbf{t}} \newcommand{\bfx}{\mathbf{x}} \newcommand{\bfy}{\mathbf{y}} \newcommand{\bfzero}{\mathbf{0}} \newcommand{\V}{\mathcal{V}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\ep}{\varepsilon} \newcommand{\fD}{f^{(D)}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\no}{\noindent} \makeatletter \def\blfootnote{\xdef\@thefnmark{}\@footnotetext} \makeatother \begin{document} \title{Degenerations and order of graphs realized by finite abelian groups} \author{{Rameez Raja \footnote{Department of Mathematics, National Institute of Technology Srinagar-190006, Jammu and Kashmir, India. 
Email: [email protected]}}} \date{} \maketitle \vskip 5mm \noindent{\footnotesize \bf Abstract.} Let $G_1$ and $G_2$ be two groups. If a group homomorphism $\varphi:G_1 \rightarrow G_2$ maps $a\in G_1$ into $b\in G_2$ such that $\varphi(a) = b$, then we say $a$ \textit{degenerates} to $b$, and if every element of $G_1$ degenerates to elements in $G_2$, then we say $G_1$ degenerates to $G_2$. We discuss degeneration in graphs and show that degeneration in groups is a particular case of degeneration in graphs. We exhibit some interesting properties of degeneration in graphs. We use this concept to present a pictorial representation of graphs realized by finite abelian groups. We discuss some partial orders on the set $\mathcal{T}_{p_1 \cdots p_n}$ of all graphs realized by finite abelian $p_r$-groups, where each $p_r$, $1\leq r\leq n$, is a prime number. We show that each finite abelian $p_r$-group of rank $n$ can be identified with \textit{saturated chains} of \textit{Young diagrams} in the poset $\mathcal{T}_{p_1 \cdots p_n}$. We present a combinatorial formula which represents the degree of a projective representation of a symmetric group. This formula determines the number of different \textit{saturated chains} in $\mathcal{T}_{p_1 \cdots p_n}$ and the number of finite abelian groups of different orders. \vskip 3mm \noindent{\footnotesize Keywords: Degenerations, Finite abelian groups, Threshold graph, Partial order.} \vskip 3mm \noindent {\footnotesize AMS subject classification: Primary: 13C70, 05C25.} \section{\bf Introduction} A notion of degeneration in groups was introduced in \cite{KA} to parametrize the orbits in a finite abelian group under its full automorphism group by a finite distributive lattice. The authors in \cite{KA} were motivated by attempts to understand the decomposition of the Weil representation associated to a finite abelian group $G$. Note that the sum of squares of the multiplicities in the Weil representation is the number of orbits in $G \times \hat{G}$ under automorphisms of a symplectic bicharacter, where $\hat{G}$ denotes the Pontryagin dual of $G$.\\ The above combinatorial description is one of the explorations between groups and combinatorial structures (posets and lattices). There is an intimate relationship between groups and other combinatorial structures (graphs). For example, any graph $\Gamma$ gives rise to its automorphism group, whereas any group together with a generating set gives rise to a realization of the group as a graph (Cayley graph). Recently, the authors in \cite{ER} studied the \textit{group-annihilator} graph $\Gamma(G)$ realized by a finite abelian group $G$ (viewed as a $\mathbb{Z}$-module) of different ranks. The vertices of $\Gamma(G)$ are all elements of $G$ and two vertices $x ,y \in G$ are adjacent in $\Gamma(G)$ if and only if $[x : G][y : G]G = \{0\}$, where $[x : G] = \{r\in\mathbb{Z} : rG \subseteq \mathbb{Z}x\}$ is an ideal of the ring $\mathbb{Z}$. They investigated the concept of creation sequences in $\Gamma(G)$ and determined the multiplicities of the eigenvalues $0$ and $-1$ of $\Gamma(G)$. Interestingly, they considered orbits of the symmetric group action: $Aut(\Gamma(G)) \times G \longrightarrow G$ and proved that the representatives of orbits are the Laplacian eigenvalues of $\Gamma(G)$. There are a number of realizations of groups as graphs. The generating graph \cite{LS} realized by a simple group was introduced to get an insight that might ultimately guide us to a new proof of the classification of simple groups.
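Since the adjacency rule of the group-annihilator graph is completely explicit, it can be explored by brute force for small groups. The following Python sketch (our own illustration; the function names are ours and nothing here is meant to be efficient) builds $\Gamma(G)$ for $G = \mathbb{Z}/p^{\lambda_1}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p^{\lambda_r}\mathbb{Z}$, using that $[x:G]$ is the ideal generated by the least positive integer $d$ with $dG \subseteq \mathbb{Z}x$, so that $[x:G][y:G]G=\{0\}$ exactly when the exponent of $G$ divides the product of the two generators.
\begin{verbatim}
from itertools import product

def group_annihilator_graph(lams, p):
    # vertices: all elements of G = Z/p^{l_1} x ... x Z/p^{l_r};
    # x ~ y  iff  [x:G][y:G]G = {0}
    mods = [p ** l for l in lams]
    G = list(product(*[range(m) for m in mods]))
    exponent = max(mods)
    gens = [tuple(1 if i == j else 0 for j in range(len(mods)))
            for i in range(len(mods))]

    def in_Zx(g, x):
        # membership of g in the cyclic submodule Zx
        return any(all((k * xi - gi) % m == 0 for xi, gi, m in zip(x, g, mods))
                   for k in range(exponent))

    def gen_of_ideal(x):
        # least cand >= 1 with cand*G contained in Zx; it generates [x:G]
        for cand in range(1, exponent + 1):
            if all(in_Zx(tuple(cand * ei for ei in e), x) for e in gens):
                return cand

    d = {x: gen_of_ideal(x) for x in G}
    edges = [(x, y) for i, x in enumerate(G) for y in G[i + 1:]
             if (d[x] * d[y]) % exponent == 0]
    return G, edges

vertices, edges = group_annihilator_graph([2, 1], p=2)   # G = Z/4 + Z/2
print(len(vertices), len(edges))                          # prints: 8 28
\end{verbatim}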
Graphs such as the power graph \cite{CS}, the intersection graph \cite{ZB} and the commuting graph \cite{BF} were introduced to study the information contained in the graph about the group. Moreover, the realizations of rings as graphs were introduced in \cite{AL, Bk}. The aim of considering these realizations of rings as graphs is to study the interplay between combinatorial and ring theoretic properties of a ring $R$. This concept was further studied in \cite{FL, M, SR, RR} and was extended to modules over commutative rings in \cite{SR1}. The main objective of this work is to investigate some deeper interconnections between partitions of a number, Young diagrams, finite abelian groups, group homomorphisms, graph homomorphisms, posets and lattices. This investigation leads us to develop a theory which simplifies the concept of degeneration of elements in groups and also provides a lattice of finite abelian groups in which each \textit{saturated chain} of length $n$ can be identified with a finite abelian $p_r$-group of rank $n$. This research article is organized as follows. In Section 2, we discuss some results related to degeneration in groups and group-annihilator graphs realized by finite abelian groups. Section 3 is dedicated to the study of degenerations in graphs realized by finite abelian groups. We present a pictorial sketch which illustrates degeneration in graphs. Finally, in Section 4, we investigate multiple relations on the set $\mathcal{T}_{p_1 \cdots p_n}$ and furnish the information contained in a locally finite distributive lattice about finite abelian groups. We provide a combinatorial formula which represents the degree of a projective representation of a symmetric group and determines the number of \textit{saturated chains} from the empty set to some non-trivial member of $\mathcal{T}_{p_1 \cdots p_n}$. \section{\bf Preliminaries} Let $\lambda = (\lambda_1, \lambda_2, \cdots, \lambda_r)$ be a partition of $n$, denoted by $\lambda \vdash n$, where $n\in \Z_{>0}$. For any $\mu \vdash n$, we have an abelian $p$-group of order $p^n$ and, conversely, every abelian group of order $p^n$ corresponds to some partition of $n$. In fact, if $H_{\mu, p} = \mathbb{Z}/p^{{\mu}_1}\mathbb{Z} ~\oplus~ \mathbb{Z}/p^{{\mu}_2}\mathbb{Z} ~\oplus~ \cdots ~\oplus~ \mathbb{Z}/p^{{\mu}_r}\mathbb{Z}$ is a subgroup of $G_{\lambda, p}$ ($G_{\lambda, p} = \mathbb{Z}/p^{\lambda_1}\mathbb{Z} \oplus \mathbb{Z}/p^{\lambda_2}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p^{\lambda_r}\mathbb{Z}$ is a finite abelian $p$-group), then $\mu_1 \leq \lambda_1, \mu_2 \leq \lambda_2, \cdots, \mu_r \leq \lambda_r$. If these inequalities hold, we write $\mu \subset \lambda$; this is the \textquotedblleft containment order\textquotedblright\ on partitions. For example, a $p$-group $\mathbb{Z}/p^{7}\mathbb{Z} ~\oplus~ \mathbb{Z}/p\mathbb{Z} ~\oplus~ \mathbb{Z}/p\mathbb{Z}$ is of type $\lambda = (7, 1, 1)$. The possible types for its subgroups are: $(7, 1, 1), (6, 1, 1), (5, 1, 1), (4, 1, 1), (3, 1, 1), (2, 1, 1), (1, 1, 1)$, $2(7, 1), 2(6, 1), 2(5, 1), 2(4, 1), 2(3,1), 2(2, 1), 2(1, 1)$, $(7), (6), (5), (4), (3), (2), 2(1)$. Note that the types $(7, 1), (6, 1), (5, 1), (4, 1), (3,1), (2, 1), (1, 1)$ appear twice in the sequence of partitions for subgroups. The authors in \cite{KA} have considered the group action: $Aut(G) \times G \rightarrow G$, where $Aut(G)$ is the automorphism group of $G$, and studied $Aut(G)\setminus G$, the set of all disjoint $Aut(G)$-orbits in $G$.
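To make the containment order concrete, the distinct subgroup types of $G_{\lambda, p}$ can be enumerated directly from the inequalities $\mu_i \leq \lambda_i$. The following Python sketch is an added illustration (it is not part of the original argument, and it lists each type once without tracking the multiplicities noted above); for $\lambda = (7, 1, 1)$ it produces the $21$ distinct types appearing in the example.
\begin{verbatim}
from itertools import product

def subgroup_types(lam):
    """Distinct partitions mu with mu_i <= lam_i componentwise (the
    containment order mu subset lam); lam is given in non-increasing order."""
    types = set()
    for r in range(1, len(lam) + 1):
        for mu in product(*(range(1, lam[i] + 1) for i in range(r))):
            types.add(tuple(sorted(mu, reverse=True)))
    return sorted(types, reverse=True)

lam = (7, 1, 1)
for mu in subgroup_types(lam):
    print(mu)
print(len(subgroup_types(lam)), "distinct subgroup types")  # 21 for (7, 1, 1)
\end{verbatim}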
The group $\mathbb{Z}/p^{k}\mathbb{Z}$ has $k$ orbits of non-zero elements under the action of its automorphism group, represented by elements $1, p, \cdots, p^{k-1}$. We denote orbits of the group action: $Aut(\mathbb{Z}/p^{k}\mathbb{Z}) \times \mathbb{Z}/p^{k}\mathbb{Z}\longrightarrow \mathbb{Z}/p^{k}\mathbb{Z}$ by $\mathcal{O}_{k, p^{m}}$, where $0\leq m \leq k-1$. Miller \cite{ML}, Schwachh\"ofer and Stroppel \cite{SS} provided some well known formulae for the cardinality of the set $Aut(G_{\lambda, p})$ $\setminus$ $G_{\lambda, p}$ . \begin{defn}\label{df1} (Degeneration in groups) \cite{KA}. Let $G_1$ and $G_2$ be two groups, then $a \in G_1$ degenerates to $b \in G_2$, if a homomorphism $\varphi : G_1 \longrightarrow G_2$ maps $a$ into $b$ such that $\varphi(a) = b$. \end{defn} The following result provide a characterization for degenerations of elements of the group $\mathbb{Z}/p^{k}\mathbb{Z}$ to elements of the group $\mathbb{Z}/p^{l}\mathbb{Z}$, where $k \leq l$. \begin{lem}\label{lm1}\cite{KA}. $p^{r}u \in \mathcal{O}_{k, p^{r}}$ in $\mathbb{Z}/p^{k}\mathbb{Z}$ degenerates to $p^{s}v \in \mathcal{O}_{l, p^{s}}$ in $\mathbb{Z}/p^{l}\mathbb{Z}$ if and only if $r \leq s$ and $k-r \geq l-s$, where $u, v$ are relatively prime to $p$, $r<k$ and $s<l$. If in addition $p^{s}v \in \mathcal{O}_{l, p^{s}}$ degenerates to $p^{r}u \in \mathcal{O}_{k, p^{r}}$, then $k = l$ and $r = s$. \end{lem} By Lemma \ref{lm1}, it is easy to verify that degeneracy is a partial order relation on the set of all orbits of non-zero elements in $\mathbb{Z}/p^{k}\mathbb{Z}$. The diagrammatic representation (Hasse diagram) of the set $Aut(\mathbb{Z}/p^{k}\mathbb{Z})\setminus \mathbb{Z}/p^{k}\mathbb{Z}$ with respect to degeneracy, which is called a fundamental poset is presented in [Figure 1 \cite{KA}]. Let $a = (a_1, a_2, \cdots, a_r) \in G_{\lambda, p}$, the \textit{ideal of a} in $Aut(G_{\lambda, p})$ $\setminus$ $G_{\lambda, p}$ denoted by $I(a)$ is the ideal generated by orbits of non-zero coordinates $a_i\in \mathbb{Z}/p^{\lambda_{i}}\mathbb{Z}$. One of the explorations between ideals of posets, partitions and orbits of finite abelian groups is the following interesting result. \begin{thm}\label{thm1}\cite{KA}. Let $\lambda$ and $\mu$ be any two given partitions and $a\in G_{\lambda, p}$, $b\in G_{\mu, p}$. Then $a$ degenerates to $b$ if and only if $I(b) \subset I(a)$. \end{thm} The enumeration of orbits as ideals, first as counting ideals in terms of their boundaries, and the second as counting them in terms of anti chains of maximal elements is presented in [Example 6.1, 6.2 \cite{KA}]. Please see sections 7 and 8 of \cite{KA} for results related to embedding of the lattice of orbits of $G_{\lambda, p}$ into the lattice of characteristic subgroups of $G_{\lambda, p}$, formula for the order of the characteristic subgroup associated to an orbit, computation of a monic polynomial in $p$ (with integer coefficients) using mobius inversion formula representing cardinality of the orbit in $G_{\lambda, p}$. Let $\Gamma = (V, E)$ be a simple connected graph and let $\Gamma_1$ and $\Gamma_2$ be two simple connected graphs, recall a mapping $\phi: V(\Gamma_1) \rightarrow V(\Gamma_2)$ is a \textit{homomorphism} if it preserves edges, that is, for any edge $(u, v)$ of $\Gamma_1$, $(\phi(u), \phi(v))$ is an edge of $\Gamma_2$, where $u, v\in V(\Gamma_1)$. 
A homomorphism $\phi: V(\Gamma_1) \rightarrow V(\Gamma_2)$ is faithful when there is an edge between the two preimages $\phi^{-1}(u)$ and $\phi^{-1}(v)$ whenever $(u, v)$ is an edge of $\Gamma_2$; a faithful bijective homomorphism is an \textit{isomorphism}, and in this case we write $\Gamma_1 \cong \Gamma_2$. An isomorphism from $\Gamma$ to itself is an \textit{automorphism} of $\Gamma$. It is well known that the set of automorphisms of $\Gamma$ forms a group under composition; we denote the group of automorphisms of $\Gamma$ by $Aut(\Gamma)$. Understanding the automorphism group of a graph is a guiding principle for understanding objects by their symmetries. Consider the action of $Aut(\Gamma)$ on $V(\Gamma)$ by permutations, that is, \begin{center} $Aut(\Gamma) \times V(\Gamma) \rightarrow V(\Gamma)$, \qquad $\sigma(v) = u$, \end{center} where $\sigma \in Aut(\Gamma)$ and $v, u\in V(\Gamma)$ are any two vertices of $\Gamma$. This group action is called a \textit{symmetric action} \cite{ER}. Consider a non-trivial finite abelian group $G$ with identity element $0$ and view $G$ as a $\mathbb{Z}$-module. For $a\in G$, set $[a : G] =\{x\in \mathbb{Z} ~|~ xG\subseteq \mathbb{Z}a\}$, which is clearly an ideal of $\mathbb{Z}.$ For $a \in G$, $G/\mathbb{Z}a$ is a $\mathbb{Z}$-module, and $[a : G]$ is the annihilator of $G/\mathbb{Z}a$; accordingly, $[a:G]$ is called the $a$-annihilator of $G.$ Also, an element $a$ is called an \textit{ideal-annihilator} of $G$ if there exists a non-zero element $b$ of $G$ such that $[a : G][b : G]G = \{0\}$, where $[a : G][b : G]$ denotes the product of ideals of $\mathbb{Z}$. The element $0$ is a trivial ideal-annihilator of $G$, since $[0 : G][b : G]G = ann(G)[b : G]G = \{0\}$, where $ann(G)$ is the annihilator of $G$ in $\mathbb{Z}$. Given an abelian group $G$, the \textit{group-annihilator} graph is defined to be the graph $\Gamma(G) = (V(\Gamma(G))$, $E(\Gamma(G)))$ with vertex set $V(\Gamma(G))= G$ and, for two distinct $a, b\in V(\Gamma(G))$, the vertices $a$ and $b$ are adjacent in $\Gamma(G)$ if and only if $[a : G][b : G]G = \{0\}$, that is, $E(\Gamma(G)) = \{(a, b)\in G \times G : [a : G][b : G]G = \{0\}\}$. For a cyclic group $G = \mathbb{Z}/p^{n}\Z$ ($n\geq1$), it is easy to verify that the orbits of the action: $Aut(G) \times G \longrightarrow G$ are the same as the orbits of the symmetric action: $Aut(\Gamma(G)) \times G \longrightarrow G$, which are given as follows, \begin{center} $\mathcal{O}_{n, p^i} = \{p^i\alpha (mod~p^{n})\mid \alpha \in \Z, (\alpha, p) =1\}$,\end{center} where $i \in [0, n]$. Furthermore, for $0\leq i< j \leq n$, $p^i\alpha \not\equiv p^j \alpha' (mod~p^{n})$ whenever $(\alpha, p) = 1$ and $(\alpha', p) = 1$. Consequently, for $i \neq j$ we have $\mathcal{O}_{n, p^i} \cap \mathcal{O}_{n, p^j} = \emptyset$. Any element $a \in \Z/p^{n}\Z$ can be expressed as, \begin{center} $a \equiv p^{n-1}b_1 + p^{n-2} b_2 +\cdots + p b_{n-1} +b_{n} (mod~p^{n})$,\end{center} where $b_i \in [0, p-1]$. If $a \in \mathcal{O}_{n, 1}$, then $b_{n} \neq 0.$ So, $|\mathcal{O}_{n, 1}| = p^{n-1} (p-1) = \phi (p^{n})$. If $a' \in \mathcal{O}_{n, p}$, then $a' = pa$ for some $a \in \mathcal{O}_{n, 1}$, so $|\mathcal{O}_{n, p}| = \frac{\phi (p^{n})}{p}$. Similarly, for $i \in [0, n-1]$, we have $|\mathcal{O}_{n ,p^i}| = \frac{\phi(p^{n})}{p^i}$. \begin{prop}\cite{ER}.
Let $G =\Z/p^{n}\Z$ be a cyclic group of order $p^{n}$, where $n \ge 2$. Then for each $a\in \mathcal{O}_{n, p^i}$ with $i\in [1, n]$, the $a-$annihilator of $G$ is $[a: G] = p^i \Z.$ \end{prop} Thus if we consider the symmetric group action: $Aut(\Gamma(G)) \times G \longrightarrow G$, then for $G =\Z/p^{n}\Z$, the group-annihilator graph realized by $G$ is defined as $\Gamma (G) = (V(\Gamma(G)), E(\Gamma(G)))$, where $V(\Gamma(G)) = \Z/p^{n}\Z$ and two vertices $u \in \mathcal{O}_{n, p^i}$, $v \in \mathcal{O}_{n, p^j}$ are adjacent in $\Gamma(G)$ if and only if $ i + j \geq n$. Therefore, from the above observation it follows that the vertices of the graph $\Gamma(G)$ are parametrized by representatives of orbits of the group action: $Aut(\Gamma(G)) \times G \longrightarrow G$. Thus the element $0\in \mathcal{O}_{n, p^{n}}$ of $G$ is adjacent to all vertices in $\Gamma (G)$, while the elements $a \in \mathcal{O}_{n, 1}$, which are prime to the order of $G$, are adjacent only to $0$ in $\Gamma (G)$. Furthermore, elements of the orbit $\mathcal{O}_{n, p}$ are adjacent to $0$ and to elements of the orbit $\mathcal{O}_{n, p^{n-1}}$; elements of the orbit $\mathcal{O}_{n, p^2}$ are adjacent to $0$ and to elements of the orbits $\mathcal{O}_{n, p^{n-1}}$, $\mathcal{O}_{n, p^{n-2}}$. Thus, for $k \geq 1$, elements of the orbit $\mathcal{O}_{n, p^k}$ are adjacent to elements of the orbits $\mathcal{O}_{n, p^{n-k}}$, $\mathcal{O}_{n, p^{n-k + 1}}, \cdots, \mathcal{O}_{n, p^{n -1}}$, $\mathcal{O}_{n, p^{n}}$. \begin{thm} \cite{ER}. Let $n$ be a positive integer. Then for the $p$-group $G = (\Z/p^{n}\Z)^{\ell} $ of rank $\ell\geq2,$ and $(a_1,\ldots,a_\ell)\in G$, the $(a_1,\ldots,a_\ell)$-annihilator of $G$ is $p^n\Z.$ In particular, the corresponding group-annihilator graph realized by $G$ is a complete graph. \end{thm} Note that the action of $Aut(\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell}))$ on $(\mathbb{Z}/p\mathbb{Z})^{\ell}$ is transitive, since an automorphism of $\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell})$ can map any vertex to any other vertex, and this does not place any restriction on where any of the other $p^{\ell} - 1$ vertices are mapped, as they are all mutually connected in $\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell})$. This implies $Aut(\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell})) \setminus(\mathbb{Z}/p\mathbb{Z})^{\ell}$ is a single orbit of order $p^{\ell}$. For more information regarding $a-$annihilators, $(a, b)-$annihilators and $(a_1, a_2, \cdots, a_\ell)-$annihilators of finite abelian $p$-groups, please see section 3 of \cite{ER}. We conclude this section with an example which illustrates the parametrization of vertices of the group-annihilator graph $\Gamma(G)$ by representatives of orbits of the symmetric action on $G$. \begin{exm} Let $G = \mathbb{Z}/2^4\mathbb{Z}$ be a finite abelian group. Consider the group action: $Aut(\Gamma(G)) \times G \longrightarrow G$. The orbits of this action are: $\mathcal{O}_{4, 2^4} = \{0\}$, $\mathcal{O}_{4, 1} = \{1, 3, 5, 7, 9, 11, 13, 15\}$, $\mathcal{O}_{4, 2} = \{2, 6, 10, 14\} = \{2a ~|~ (a, 2) = 1\}$, $\mathcal{O}_{4, 2^2} = \{4, 12\} = \{2^2a ~|~ (a, 2) = 1\} $ and $\mathcal{O}_{4, 2^3} = \{8\} = \{2^3a ~|~ (a, 2) = 1\}$. Note that the orbits of the elements $3, 5, \cdots, 15$ are the same as the orbit of $1$, the orbits of $6, 10, 14$ are the same as the orbit of $2$, and the orbit of $12$ is the same as the orbit of $4$. Therefore, the group $G$ has $4$ orbits of nonzero elements under the action of $Aut(\Gamma(G))$ represented by $1, 2, 2^2, 2^3$.
The group-annihilator graph realized by $G$ with its orbits is shown in Figure \eqref{1}.\end{exm} \begin{figure}[H] \begin{center} \includegraphics[scale=.415]{1} \end{center} \caption{$\Gamma(\Z/2^{4}\Z)$ with its orbits} \label{1} \end{figure} \section{Degeneration in graphs} This section is devoted to the study of degeneration in graphs. We show that every group homomorphism is a graph homomorphism. We employ the methods of degeneration in graphs to simplify the techniques used to establish degenerations of elements in finite abelian groups \cite{KA}. As far as groups are concerned, there are always homomorphisms (trivial homomorphisms) from one group to another. Any \textit{source group} (the group from which the map is defined) can be mapped by a homomorphism into a \textit{target group} (the group into which the elements are mapped) by simply sending all of its elements to the identity of the target group. In fact, the study of kernels is very important in algebraic structures. In the context of simple graphs, the notion of a homomorphism is far more restrictive. Indeed, there need not be a homomorphism between two graphs, and these cases are as much a part of the theory as those where homomorphisms do exist. There are other categories where homomorphisms do not always exist between two objects, for example, the category of bounded lattices or that of semi-groups. The answer to the question of whether \enquote{every group homomorphism is a graph homomorphism} is affirmative, and this is discussed in the following result. Note that the orbits of elements under the two actions (of the automorphism group and the symmetric action) on a finite abelian $p$-group of rank one coincide, and this can be explored further for abelian $p$-groups of different ranks. \begin{prop}\label{prp1} Every group homomorphism which maps elements from orbits $\mathcal{O}_{k,p^i}$ to orbits $\mathcal{O}_{l,p^j}$ is a graph homomorphism, where $1\leq i\leq k$, $1\leq j\leq l$ and $k \leq l$. \end{prop} \begin{proof} A group homomorphism is uniquely determined by the image of the unity element in the target group, and the order of this image divides the order of unity in the source group. Let $\tau(a)$ be the image of unity in the target group. Therefore, we have $\tau(a) = a_1, a_2, \cdots, a_{p^k}$, where $a_1, a_p, \cdots, a_{p^{k}}$ are elements of the orbits $\mathcal{O}_{l,1}$, $\mathcal{O}_{l,p}$, $\cdots$, $\mathcal{O}_{l,p^k}$. Note that $k\leq l$; therefore we have the following inequalities concerning the cardinalities of orbits, \begin{center} $|\mathcal{O}_{k,1}| \leq |\mathcal{O}_{l,1}|$, $|\mathcal{O}_{k,p}| \leq |\mathcal{O}_{l,p}|$, \hskip .1cm \vdots $|\mathcal{O}_{k,p^k}| \leq |\mathcal{O}_{l,p^k}|$. \end{center} If $\tau(a)\in \mathcal{O}_{l, 1}$, then under the monomorphism the elements of orbits are mapped as, \begin{center} $ \mathcal{O}_{k,1} \xhookrightarrow{~1-1~} \mathcal{O}_{l,1}$, $ \mathcal{O}_{k,p} \xhookrightarrow{~1-1~} \mathcal{O}_{l,p}$, \hskip .1cm \vdots $ \mathcal{O}_{k,p^{k-1}} \xhookrightarrow{~1-1~} \mathcal{O}_{l,p^{k-1}}$, $\mathcal{O}_{k,p^{k}} \xhookrightarrow{~1-1~} \mathcal{O}_{l,p^{l}}$. \end{center} If $\tau(a)\in \mathcal{O}_{l,p}$, then elements of orbits are mapped as, \begin{center} $ \mathcal{O}_{k,1} \twoheadrightarrow \mathcal{O}_{l,p}$, $ \mathcal{O}_{k,p} \twoheadrightarrow \mathcal{O}_{l,p^2}$, \hskip .1cm \vdots $ \mathcal{O}_{k,p^{k-1}} \twoheadrightarrow \mathcal{O}_{l,p^{k}}$, $\mathcal{O}_{k,p^{k}} \twoheadrightarrow \mathcal{O}_{l,p^{l}}$.
\end{center} Thus it follows that if $\tau(a)\in \mathcal{O}_{l,p}$, then every element of the orbit $\mathcal{O}_{k,p^t}$, $0\leq t\leq k-1$, is mapped to an element of the orbit $\mathcal{O}_{l,p^{t+1}}$. Under the symmetric action the orbits of vertices are the same as the orbits listed above. Note that the vertices of the orbit $\mathcal{O}_{k,1}$ are adjacent only to the vertex in $\mathcal{O}_{k,p^k}$, vertices of the orbit $\mathcal{O}_{k,p}$ are adjacent to vertices in $\mathcal{O}_{k,p^k}$ and $\mathcal{O}_{k,p^{k-1}}$, and so on. Thus if $\tau(a)\in \mathcal{O}_{l,1}$, then for $0\leq i\leq j\leq k$, every edge $(u, v) \in \mathcal{O}_{k,p^i} \times \mathcal{O}_{k,p^j}$ is mapped to an edge $(\tau(u), \tau(v)) \in \mathcal{O}_{l,p^r} \times \mathcal{O}_{l,p^s}$, where $0\leq r\leq s\leq l$. Therefore $\tau$ is a graph homomorphism. Similarly it can be verified that all other group homomorphisms are graph homomorphisms, since the adjacencies are preserved under all group homomorphisms. \end{proof} \begin{rem} The converse of the preceding result is not true, that is, a graph homomorphism between two graphs realised by some groups need not be a group homomorphism. To illustrate this we consider the \enquote{distribution of edges in orbits}. The distribution of edges is carried out in such a way that, for sufficiently large $l$, a graph homomorphism acts on vertices in the orbits $\mathcal{O}_{k,p^k}$, $\mathcal{O}_{k,1}$ such that $\mathcal{O}_{k,p^{k}} \xhookrightarrow{identity} \mathcal{O}_{l,p^{l}}$, $\mathcal{O}_{k,p^{k-1}} \xhookrightarrow{identity} \mathcal{O}_{l,p^{k-1}}$, $\cdots$, $\mathcal{O}_{k,p} \xhookrightarrow{identity} \mathcal{O}_{l,p}$. Some vertices of $\mathcal{O}_{k,1}$ are mapped to themselves in $\mathcal{O}_{l,1}$, whereas the remaining ones are mapped to vertices in $\mathcal{O}_{l,p}$. So, under the above distribution some edges in $\mathcal{O}_{k,p^{k}} \times \mathcal{O}_{k,1}$ are mapped to edges in $\mathcal{O}_{l,p^{l}} \times \mathcal{O}_{l,1}$, whereas the remaining edges in $\mathcal{O}_{k,p^{k}} \times \mathcal{O}_{k,1}$ are mapped to edges in $\mathcal{O}_{l,p^{l}} \times \mathcal{O}_{l,p}$. Thus if $x \neq y$ are two elements of $\mathcal{O}_{k,1}$ such that $x$ is mapped to $x' \in \mathcal{O}_{l,1}$ and $y$ is mapped to $y' \in \mathcal{O}_{l,p}$, then the following equality may fail, \begin{center} $x + y(mod~p^k) = x' + y'(mod~p^l)$. \end{center} \end{rem} \begin{defn} \label{df2} Let $\Gamma_1$ and $\Gamma_2$ be two simple graphs. Then $(a, b) \in E(\Gamma_1)$ degenerates to $(u, v) \in E(\Gamma_2)$ if there exists a homomorphism $\varphi : V(\Gamma_1) \longrightarrow V(\Gamma_2)$ such that $\varphi(a, b) = (u, v)$. If every edge of $\Gamma_1$ degenerates to edges in $\Gamma_2$, then we say that $\Gamma_1$ degenerates to $\Gamma_2$. \end{defn} Recall that an \textit{independent part} (independent set) in a graph $\Gamma$ is a set of vertices of $\Gamma$ such that for every two vertices in the set, there is no edge in $\Gamma$ connecting the two. Also, a \textit{complete part} (complete subgraph) in a graph $\Gamma$ is a set of vertices of $\Gamma$ such that there is an edge between every pair of vertices in the set. A simplified form of Lemma \eqref{lm1} is presented in the following result. We adapt the definition of degeneration in groups and make it work for graphs realized by finite abelian groups.
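As an added illustration (not part of the original development), the degeneration of edges in Definition \eqref{df2} can be checked computationally for the graphs realized by $\Z/2^{3}\Z$ and $\Z/2^{5}\Z$. The Python sketch below builds both group-annihilator graphs from the orbit adjacency rule recalled above (distinct $a, b$ are adjacent exactly when the sum of their $2$-adic valuations is at least the exponent of the group) and verifies that the group homomorphism $x \mapsto 2^{2}x$ sends every edge of $\Gamma(\Z/2^{3}\Z)$ to an edge of $\Gamma(\Z/2^{5}\Z)$, so that $\Gamma(\Z/2^{3}\Z)$ degenerates to $\Gamma(\Z/2^{5}\Z)$.
\begin{verbatim}
from itertools import combinations

def valuation(x, p, k):
    """Exponent i with x in the orbit O_{k,p^i}; the zero class gets i = k."""
    if x % p**k == 0:
        return k
    i = 0
    while x % p == 0:
        x //= p
        i += 1
    return i

def edges(p, k):
    """Edge set of Gamma(Z/p^k Z): distinct a, b are adjacent exactly when
    valuation(a) + valuation(b) >= k, i.e. [a:G][b:G]G = {0}."""
    return {(a, b) for a, b in combinations(range(p**k), 2)
            if valuation(a, p, k) + valuation(b, p, k) >= k}

p, k, l = 2, 3, 5
tau = lambda x: (p**(l - k) * x) % p**l   # a group homomorphism Z/8Z -> Z/32Z
E_src, E_tgt = edges(p, k), edges(p, l)
# every edge of Gamma(Z/8Z) is mapped to an edge of Gamma(Z/32Z), i.e. degenerates
assert all(tuple(sorted((tau(a), tau(b)))) in E_tgt for a, b in E_src)
print(len(E_src), "edges, all of them degenerate")
\end{verbatim}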
\begin{thm}\label{thm1} If under any graph homomorphism $\mathcal{O}_{k,p^{k}}$ is the only vertex mapped to $\mathcal{O}_{l,p^{l}}$, then the pair $(p^ru, p^su) \in \mathcal{O}_{k,p^{r}} \times \mathcal{O}_{k,p^s}$ degenerates to $(p^{r'}u, p^{s'}u) \in \mathcal{O}_{l,p^{r'}} \times \mathcal{O}_{l,p^{s'}}$ if and only if $r\leq r'$ and $s\leq s'$, where $u$ is relatively prime to $p$ and $k\leq l$. \end{thm} \begin{proof} In setting of the symmetric group action on finite abelain $p$-groups of rank one, let $\mathcal{O}_{k,p^{r}}$, $\mathcal{O}_{k,p^s}$ be orbits represented by elements $p^r$ and $p^s$ of the source group and $\mathcal{O}_{l,p^{r'}}$, $\mathcal{O}_{l,p^{s'}}$ be orbits represented by elements $p^{r'}$ and $p^{s'}$ of the target group, where $0\leq r, ~s\leq k-1$ and $0\leq r',~ s'\leq l-1$. We consider the cases hereunder. \textbf{Case I:} $k = l = 2t$, $t\in\mathbb{Z}_{>0}$. Then the independent and complete parts of the graph realised by a source group is $X = \dot\bigcup_{i=0}^{t-1}\mathcal{O}_{k, p^i}$ and $Y = \dot\bigcup_{i=0}^{t-1}\mathcal{O}_{k, p^{t+j}}$, where each element of both $X$ and $Y$ are connected to $\mathcal{O}_{k,p^{k}} =\{0\}$. Similarly, $X' = \dot\bigcup_{i=0}^{t-1}\mathcal{O}_{l, p^i}$ and $Y' = \dot\bigcup_{j=0}^{t-1}\mathcal{O}_{l, p^{t+j}}$ represents the independent and complete parts of the graph realized by a target group, where each element of both $X'$ and $Y'$ are connected to $\mathcal{O}_{l,p^{l}} =\{0\}$. Let $x\in X$. If $x\in \mathcal{O}_{k, 1}$, then as discussed above, $x$ is adjacent to $\mathcal{O}_{k, p^k}$ only. On the other hand, if $x\in \mathcal{O}_{k, p^i}$ for $1\leq i \leq t-1$, then $x$ is adjacent to all elements of the set $\dot\bigcup_{n=i}^{0}\mathcal{O}_{k, p^{k-n}} \subset Y$. Moreover, if $x'\in \mathcal{O}_{l, 1}$, then $x'$ is adjacent to $\mathcal{O}_{l, p^l}$ whereas if $x'\in \mathcal{O}_{l, p^j}$ for $1\leq j \leq t-1$, then $x'$ is adjacent to all elements of the set $\dot\bigcup_{m=j}^{0}\mathcal{O}_{l, p^{l-m}} \subset Y'$. Under any given graph homomorphism $\tau$, the images of relations in $X \times \mathcal{O}_{k, p^k}$, $X \times Y$ and $Y \times \mathcal{O}_{k, p^k}$ are in $X' \times \mathcal{O}_{l, p^l}$, $X' \times Y'$ and $Y' \times \mathcal{O}_{l, p^l}$. Let $(a, b) \in X \times \mathcal{O}_{k, p^k} \bigcup X \times Y \bigcup Y \times \mathcal{O}_{k, p^k}$. Suppose $(a, b)$ degenerates to some $(a', b') \in X' \times \mathcal{O}_{l, p^l} \bigcup X' \times Y' \bigcup Y' \times \mathcal{O}_{l, p^l}$ . 
If $\tau$ is group homomorphism such that $\tau(1) \in \mathcal{O}_{l, 1}$, then \begin{center} $\mathcal{O}_{k,1} \times \mathcal{O}_{k,p^{k}} \xhookrightarrow{1-1} \mathcal{O}_{l,1}\times\mathcal{O}_{l,p^{l}}$, $\mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k}} \xhookrightarrow{1-1} \mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l}}$, $\mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k-1}} \xhookrightarrow{1-1} \mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l-1}}$, \hskip .1cm \vdots \end{center} If $\tau(1)\in \mathcal{O}_{l, p}$, then \begin{center} $\mathcal{O}_{k,1} \times \mathcal{O}_{k,p^{k}} \twoheadrightarrow \mathcal{O}_{l,p} \times \mathcal{O}_{l,p^{l}} $, $\mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k}} \twoheadrightarrow \mathcal{O}_{l,p^{2}} \times \mathcal{O}_{l,p^{l}} $, $\mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k-1}} \twoheadrightarrow \mathcal{O}_{l,p^{2}} \times \mathcal{O}_{l,p^{l-1}}$, \hskip .1cm \vdots \end{center} If $\tau(1)$ lies in any other orbit of $X'\bigcup Y'$, then as above we have the mapping of edges to edges. Thus for any group homomorphism which maps $(p^ru, p^su) \in \mathcal{O}_{k,p^{r}} \times \mathcal{O}_{k,p^s}$ to $(p^{r'}u, p^{s'}u) \in \mathcal{O}_{l,p^{r'}} \times \mathcal{O}_{l,p^{s'}}$, the relations $r\leq r'$ and $s\leq s'$ are verified. Now, suppose $\tau$ is not a group homomorphism but a graph homomorphism. Assume without loss of generality that under $\tau$, $A \times \mathcal{O}_{k,p^{k}} \xhookrightarrow{1-1} A' \times \mathcal{O}_{l,p^{l}}$, where $A \subset \mathcal{O}_{k,1} \subset X$ and $A' \subset \mathcal{O}_{l,1} \subset X'$ are proper subsets of $X$ and $X'$. Moreover, \begin{center} $\mathcal{O}_{k,1}\setminus A \times \mathcal{O}_{k,p^{k}} \bigcup \mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k}} \bigcup \mathcal{O}_{k,p} \times \mathcal{O}_{k,p^{k-1}} \twoheadrightarrow \mathcal{O}_{l,p} \times \mathcal{O}_{l,p^{l-1}} \bigcup \mathcal{O}_{l,p} \times \mathcal{O}_{l,p^{l}}$, $\mathcal{O}_{k,p^2} \times \mathcal{O}_{k,p^{k}} \xhookrightarrow{1-1} \mathcal{O}_{l,p^2} \times \mathcal{O}_{l,p^{l}}$, $\mathcal{O}_{k,p^2} \times \mathcal{O}_{k,p^{k-1}} \xhookrightarrow{1-1} \mathcal{O}_{l,p^2} \times \mathcal{O}_{l,p^{l-1}}$, $\mathcal{O}_{k,p^2} \times \mathcal{O}_{k,p^{k-2}} \xhookrightarrow{1-1} \mathcal{O}_{l,p^2} \times \mathcal{O}_{l,p^{l-2}}$, \hskip .1cm \vdots \end{center} Thus, for $\tau$, we observe that the relations $r\leq r'$ and $s\leq s'$ hold. Similarly these relations can be verified for other graph homomorphisms. Suppose to the contrary that $r > r'$ and $s > s'$. Then $(a, b)$ does not degenerates to $(a', b')$, since by Lemma \eqref{lm1}, $a$ and $b$ degenerates to $a'$ and $b'$ if and only if $r\leq r'$ and $s\leq s'$, therefore, a contradiction. Further, if under any graph homomorphism the elements of orbits $\mathcal{O}_{k,p^r} \times \mathcal{O}_{k,p^{s}}$ are mapped to elements of $\mathcal{O}_{l,p^{r'}} \times \mathcal{O}_{l,p^{s'}}$, then it follows that for some $1 \leq s \leq k-1$, $\mathcal{O}_{k,p^{s}}$ is mapped to $\mathcal{O}_{l,p^{l}}$, again a contradiction. \textbf{Case II:} $k = l = 2t+1$, $t\in\mathbb{Z}_{>0}$. The independent and complete parts of the graph realised by source and target groups are $X = \dot\bigcup_{i=0}^{t}\mathcal{O}_{k, p^i}$, $Y = \dot\bigcup_{j=1}^{t+1}\mathcal{O}_{l, p^{t+j}}$ and $X' = \dot\bigcup_{i=0}^{t}\mathcal{O}_{l, p^i}$, $Y' = \dot\bigcup_{j=0}^{t+1}\mathcal{O}_{l, p^{t+j}}$. Rest of the proof for this case follows by the same argument which we discussed above for the even case. 
Finally, if we consider the cases $(k, l) = (2t, 2t+1)$ or $(k, l) = (2t+1, 2t)$, then these cases can be handled in the same manner as above. \end{proof} \begin{figure}[H] \begin{center} \includegraphics[scale=.360]{2} \end{center} \caption{Pictorial sketch of degeneration} \label{2} \end{figure} Note that in Figure \eqref{2}, the graph on the left hand side is the graph realized by $\Z/2^3\Z$ and the graph on the right hand side is realized by $\Z/2^5\Z$. \section{\bf{Partial orders on $\mathcal{T}_{p_1 \cdots p_n}$}} In this section, we study some relations on the set $\mathcal{T}_{p_1 \cdots p_n}$ of all graphs realized by finite abelian $p_r$-groups of rank $1$, where each $p_r$, $1\leq r\leq n$, is a prime number. We discuss equivalent forms of the partial order "degeneration" on $\mathcal{T}_{p_1 \cdots p_n}$ and obtain a locally finite distributive lattice of finite abelian groups. Threshold graphs play an essential role in graph theory as well as in several applied areas which include psychology and computer science \cite{MP}. These graphs were introduced by Chv\'{a}tal and Hammer \cite{CH} and Henderson and Zalcstein \cite{HZ}. A vertex in a graph $\Gamma$ is called \textit{dominating} if it is adjacent to every other vertex of $\Gamma$. A graph $\Gamma$ is called a \textit{threshold graph} if it is obtained by the following procedure. Start with $K_1$, a single vertex, and use any of the following steps, in any order, an arbitrary number of times. (i) Add an isolated vertex. (ii) Add a dominating vertex, that is, add a new vertex and make it adjacent to each existing vertex. It is always interesting to determine the classes of threshold graphs, since we may represent a threshold graph on $n$ vertices using a binary code $(b_1, b_2, \cdots, b_n)$, where $b_i = 0$ if vertex $v_i$ is being added as an isolated vertex and $b_i = 1$ if $v_i$ is being added as a dominating vertex. Furthermore, using the concept of creation sequences we establish the nullity, multiplicity of some non-zero eigenvalues and the Laplacian eigenvalues of a threshold graph. The Laplacian eigenvalues of $\Gamma$ are the eigenvalues of a matrix $D(\Gamma) - A(\Gamma)$, where $D(\Gamma)$ is the diagonal matrix of vertex degrees and $A(\Gamma)$ is the familiar $(0, 1)$ adjacency matrix of $\Gamma$. The authors in \cite{ER} confirmed that the graph realised by a finite abelian $p$-group of rank $1$ is a threshold graph. In fact, they proved the following intriguing result for a finite abelain $p$-groups of rank $1$. \begin{thm}\cite{ER}.\label{thm2} If $G$ is a finite abelian $p$-group of rank $1$, then $\Gamma(G)$ is a threshold graph. \end{thm} Let $p_1 < p_2 < \cdots < p_n$ be a sequence of primes and let $\lambda_{i} = (\lambda_{i,1}, \lambda_{i, 2}, \cdots, \lambda_{i, n})$ be sequence of partitions of positive integers, where $1 \leq i \leq n$. For each prime $p_t$, where $1 \leq t \leq n$, the sequences of finite abelian $p_t$-groups with respect to partitions $\lambda_{i,1}, \lambda_{i, 2}, \cdots, \lambda_{i, n}$ are listed as follows, \begin{center} $G_{\lambda_1, p_1} = \mathbb{Z}/p_{1}^{\lambda_{1, 1}}\mathbb{Z} \oplus \mathbb{Z}/p_{1}^{\lambda_{1, 2}}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p_{1}^{\lambda_{1,n}}\mathbb{Z}$, $G_{\lambda_2, p_2} = \mathbb{Z}/p_{2}^{\lambda_{2, 1}}\mathbb{Z} \oplus \mathbb{Z}/p_{2}^{\lambda_{2, 2}}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p_{2}^{\lambda_{2,n}}\mathbb{Z}$, \hskip .1cm \vdots \end{center} Fix a prime $p_r$, where $1\leq r \leq n$. 
Then for each distinct power $\lambda_{i, j}$, $1 \leq i, j \leq n$, it follows from Theorem \eqref{thm2} that the members of the sequence of graphs realised by a sequence of finite abelian $p_r$-groups of rank $1$ are threshold graphs. The sets of orbits of the symmetric group action on the sequence of finite abelian $p_r$-groups $\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}, \mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z}, \cdots, \mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z}$ of rank $1$ are: \begin{center} $\{\mathcal{O}_{r, 1}, \mathcal{O}_{r, p_r^{1}}\}$, $\{\mathcal{O}_{r, 1}, \mathcal{O}_{r, p_r^{1}}, \mathcal{O}_{r, p_r^{2}}\}$, $\{\mathcal{O}_{r, 1}, \mathcal{O}_{r, p_{r}^{1}}, \mathcal{O}_{r, p_r^{2}}, \mathcal{O}_{r, p_r^{3}}\}$, \hskip .1cm \vdots \end{center} Note that $\lambda_{r, 1} = 1, \lambda_{r, 2} = 2, \lambda_{r, 3} = 3, \cdots$ in the above sequence of finite abelian $p_r$-groups. Thus for each prime $p_r$ and positive integer $\lambda_{i, j}$, we have sequences of threshold graphs realised by sequences of abelian $p_r$-groups. The \textit{degree sequence} of a graph $\Gamma$ is given by $\pi(\Gamma) = (d_1, d_2, \cdots, d_n)$, which is the non-increasing sequence of non-zero degrees of vertices of $\Gamma$. For a graph $\Gamma$ of order $n$ and size $m$, let $d = [d_1, d_2, \cdots, d_n]$ be a sequence of non-negative integers arranged in non-increasing order, which we refer to as a partition of $2m$. Define the transpose of the partition as $d^* = [d_1^{*}, d_2^{*}, \cdots, d_r^{*}]$, where $d_j^{*} = |\{d_i : d_i \geq j\}|$, $j = 1, 2,\cdots, r$. Therefore $d_j^{*}$ is the number of $d_i$'s that are greater than or equal to $j$. Recall from \cite{RB} that the sequence $d^*$ is called the conjugate sequence of $d$. Another interpretation of the conjugate sequence is via the \textit{Ferrers diagram (or Young diagram)} $Y(d)$ corresponding to $d_1, d_2, \cdots, d_n$, which consists of $n$ left justified rows of boxes, where the $i^{th}$ row consists of $d_i$ boxes (blocks), $i = 1, 2, \cdots, n$. Note that $d_i^{*}$ is the number of boxes in the $i^{th}$ column of the Young diagram, with $i = 1, 2, \cdots, r$. An immediate consequence of this observation is that if $d^*$ is the conjugate sequence of $d$, then, \begin{equation*} \sum\limits_{i = 1}^{n} d_i = \sum\limits_{i = 1}^{r} d_i^{*}. \end{equation*} If $d$ represents the degree sequence of a graph, then the number of boxes in the $i^{th}$ row of the Young diagram is the degree of vertex $i$, while the number of boxes in the $i^{th}$ row of the Young diagram of the transpose is the number of vertices with degree at least $i$. The trace of a Young diagram, $tr(Y(d))$, is $tr(Y(d)) = |\{i : d_i\geq i\}| = tr(Y(d^{*}))$, which is the length of the "diagonal" of the Young diagram for $d$ (or $d^{*}$). The degree sequence is a graph invariant, so two isomorphic graphs have the same degree sequence. In general, the degree sequence does not uniquely determine a graph, that is, two non-isomorphic graphs can have the same degree sequence. However, for threshold graphs, we have the following result. \begin{prop}[\cite{RMB}]\label{prp2} Let $\Gamma_1$ and $\Gamma_2$ be two threshold graphs and let $\pi_1(\Gamma_1)$ and $\pi_{2}(\Gamma_2)$ be the degree sequences of $\Gamma_1$ and $\Gamma_2$ respectively. If $\pi_1(\Gamma_1) = \pi_{2}(\Gamma_2)$, then $\Gamma_1 \cong \Gamma_2$. \end{prop} The Laplacian spectrum of a threshold graph $\Gamma$, which we denote by $\ell-spec(\Gamma)$, has been studied in \cite{HK, RM}.
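The conjugate sequence and the trace of a Young diagram defined above are straightforward to compute. The following Python sketch is an added illustration (not part of the original text): it conjugates a non-increasing sequence and verifies the box-counting identity $\sum_i d_i = \sum_j d_j^{*}$ together with $tr(Y(d)) = tr(Y(d^{*}))$; the sample sequence is the degree sequence of $\Gamma(\mathbb{Z}/2^{3}\mathbb{Z})$ that reappears in the example further below.
\begin{verbatim}
def conjugate(d):
    """Conjugate (transpose) d* of a non-increasing sequence d:
    d*_j = #{i : d_i >= j}, for j = 1, ..., d_1."""
    return [sum(1 for x in d if x >= j) for j in range(1, d[0] + 1)]

def trace(d):
    """Trace of the Young diagram Y(d): #{i : d_i >= i} (1-indexed)."""
    return sum(1 for i, x in enumerate(d, start=1) if x >= i)

d = [7, 3, 2, 2, 1, 1, 1, 1]        # degree sequence of Gamma(Z/2^3 Z)
d_star = conjugate(d)               # [8, 4, 2, 1, 1, 1, 1]
assert sum(d) == sum(d_star)        # both sides count the boxes of Y(d)
assert trace(d) == trace(d_star)    # the diagonal has length 2 here
print(d_star, trace(d))
\end{verbatim}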
In \cite{HK}, the formulas for the Laplacian spectrum, the Laplacian polynomial, and the number of spanning trees of a threshold graph are given. It is shown that the degree sequence of a threshold graph and the sequence of eigenvalues of its Laplacian matrix are \enquote{almost the same} and, on this basis, formulas are given to express the Laplacian polynomial and the number of spanning trees of a threshold graph in terms of its degree sequence. The following is a fascinating result regarding the Laplacian eigenvalues of the graph realized by a finite abelian $p$-group of rank $1$. \begin{thm}\cite{ER}. \label{thm3} Let $\Gamma(G)$ be the graph realized by a finite abelian $p$-group of the type $G = \mathbb{Z}/p^{k}\mathbb{Z}$. Then the representatives $0, 1, p, p^2, \cdots, p^{k - 1}$ (with multiplicities) of orbits $\{\mathcal{O}_{k, p^{k}}\} \cup \{\mathcal{O}_{k, p^{i}} : 0\leq i \leq k - 1\}$ of the symmetric action on $G$ are the Laplacian eigenvalues of $\Gamma(G)$, that is, $\ell-spec(\Gamma(G)) = \{0, 1, p, p^2, \cdots, p^{k - 1}, p^k \}$. \end{thm} \begin{defn}\label{df3} Let $\pi_1, \pi_2, \cdots, \pi_n \in \mathbb{Z}_{>0}$ and $\pi_1^{\bullet}, \pi_2^{\bullet}, \cdots, \pi_n^{\bullet} \in \mathbb{Z}_{>0}$ be some partitions of $n \in \mathbb{Z}_{>0}$. A sequence (partition) of eigenvalues $\pi = (\pi_1, \pi_2, \cdots, \pi_n)$ of a graph $\Gamma$ is said to be a threshold eigenvalues sequence (partition) if $\pi_{i} = \pi_{i}^{\bullet} + 1$ for all $i$ with $1 \leq i \leq tr(Y(\pi))$. \end{defn} For convenience, we refer to the Laplacian eigenvalues simply as eigenvalues. The sequence of representatives of orbits (or eigenvalues of $\Gamma(\mathbb{Z}/p^{k}\mathbb{Z})$) of the symmetric action on the group $\mathbb{Z}/p^{k}\mathbb{Z}$ obtained in Theorem \eqref{thm3} represents the transpose of the Young diagram $Y(d)$, where $d$ is the degree sequence of the graph realized by $\mathbb{Z}/p^{k}\mathbb{Z}$. For the group $G = \mathbb{Z}/2^4\mathbb{Z}$, the degree sequence $\sigma$ of $\Gamma(G)$ is, \begin{equation*} \sigma = \pi^{\bullet} = (15, 7, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1). \end{equation*} The conjugate sequence of $\sigma$ is, \begin{equation*} \sigma^{*} = \pi = (2^4, 2^3, 2^2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1). \end{equation*} The partition $\pi$ of eigenvalues of $\Gamma(G)$ is a threshold eigenvalues partition, since $\pi_{i} = \pi_{i}^{\bullet} + 1$ for $1 \leq i \leq 3 = tr(Y(\pi))$. Note that $tr(Y(\pi)) = 3$; the three diagonal blocks of $Y(\sigma^{*}) = Y(\pi)$ are shown as $t_{11}, t_{22}, t_{33}$ before the darkened column in Figure \eqref{3} below. \begin{figure}[H] \begin{center} \includegraphics[scale=.400]{3} \end{center} \caption{\bf{$Y(\pi)$}} \label{3} \end{figure} Thus, from the above discussion, we assert that a partition $\pi$ of eigenvalues is a threshold eigenvalues partition if and only if $Y(\pi)$ can be decomposed into a $tr(Y(\pi)) \times tr(Y(\pi))$ array of blocks in the upper left-hand corner, called the \textit{trace square} of $Y(\pi)$, a column of $tr(Y(\pi))$ blocks placed immediately to the right of the trace square (darkened in Figure \eqref{3}), and a piece of blocks to the right of column $tr(Y(\pi)) + 1$ which is the transpose of the piece below the trace square. Let $a = (a_1, a_2, \cdots, a_r)$ and $b = (b_1, b_2, \cdots, b_s)$ be non-increasing sequences of real numbers.
Then $b$ \textit{weakly majorizes} $a$, written as $b \succeq a$, if $r \geq s$, \begin{equation} \sum\limits_{i = 1}^{k}b_i \geq \sum\limits_{i = 1}^{k}a_i, \label{eq1} \end{equation} where $1 \leq k \leq s$, and \begin{equation} \sum\limits_{i = 1}^{s}b_i \geq \sum\limits_{i = 1}^{r}a_i. \label{eq2} \end{equation} If $b$ weakly majorizes $a$ and equality holds in \eqref{eq2}, then $b$ \textit{majorizes} $a$, written as $b \succ a$. We present an example which illustrates that the threshold eigenvalues partition of some graph realized by a finite abelian $p$-group $G_1$ majorizes the degree partition of the graph realized by some other finite abelian $p$-group $G_2$. Let $G_1 = \mathbb{Z}/2^3\mathbb{Z}$ and $G_2 = \mathbb{Z}/3^2\mathbb{Z}$ be two groups. The degree partitions $\pi_1^{\bullet}$ and $\pi_2$ of the graphs $\Gamma(G_1)$ and $\Gamma(G_2)$ are listed below as, \begin{center} $\pi_1^{\bullet} = (7, 3, 2, 2, 1, 1, 1, 1),$ $\pi_2 = (8, 2, 2, 1, 1, 1, 1, 1, 1).$ \end{center} The partitions $\pi_1^{\bullet}, \pi_2 \in \mathcal{P}(18)$, where $\mathcal{P}(18)$ is the set of all partitions of $18$. The partition $\pi_1 = (8, 4, 2, 1, 1, 1, 1)$ is the threshold eigenvalues partition of $\Gamma(G_1)$. The Young diagrams of the partitions $\pi_1$ and $\pi_{2}$ are shown in Figure \eqref{4}. \begin{figure}[H] \begin{center} \includegraphics[scale=.310]{4} \end{center} \caption{Young diagrams of $\pi_1$ and $\pi_{2}$} \label{4} \end{figure} Let $\pi^{\bullet}$ and $\sigma$ be two degree sequences of graphs realized by finite abelian $p$-groups of rank $1$ such that $\pi^{\bullet}, \sigma \vdash m$, where $m\in \mathbb{Z}_{> 0}$. Then $\pi \succ \sigma$ if and only if $Y(\pi)$ can be obtained from $Y(\sigma)$ by moving blocks of the highest row in $Y(\sigma)$ to lower numbered rows. Thus majorization induces a partial order on the sets $\{Y(\pi^{\bullet}) : \pi^{\bullet} \text{ is a degree sequence of some graph realized by a } p\text{-group of rank } 1\}$ and $\{Y(\pi^{\bullet}) : \pi^{\bullet} \vdash n, n \in \mathbb{Z}_{>0}\}$. \begin{cor} If $\pi, \sigma \in \mathcal{P}(n)$, $n\in \mathbb{Z}_{>0}$, then $\pi \succ \sigma$ if and only if $Y(\pi)$ can be obtained from $Y(\sigma)$ by moving blocks of the highest row in $Y(\sigma)$ to lower numbered rows. \end{cor} \begin{thm}\label{thm4} Let $\mathcal{T}_{p_1 \cdots p_n}$ be the collection of all graphs realised by all sequences of finite abelian $p_r$-groups, where $1 \leq r \leq n$. If $\pi$ is a threshold eigenvalues partition, then up to isomorphism, there is exactly one finite abelian $p_r$-group $G$ of rank $1$ such that $\ell-spec(\Gamma(G))\setminus \{0\} = \pi$. \end{thm} \begin{proof} Let $\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}), \Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z}), \cdots, \Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z})\right) \in \mathcal{T}_{p_1 \cdots p_n}$ be a sequence of graphs realized by a sequence of finite abelian $p_r$-groups $\left(\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}, \mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z}, \cdots, \mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z}\right)$. Let $\pi$ be a threshold eigenvalues partition of some graph of the sequence. Without loss of generality let it be the graph realised by a finite abelian $p_r$-group $\mathbb{Z}/p_{r}^{\lambda_{r, r}}\mathbb{Z}$.
The partition $\pi$ is represented by the Young diagram $Y(\pi)$, and the Young diagram for the abelian $p_r$-group of type $\mathbb{Z}/p_{r}^{\lambda_{r, r-1}}\mathbb{Z}$ can be obtained from $Y(\pi)$ by removing some blocks from the rows and columns of $Y(\pi)$. The proof now follows by induction on the terms of the sequence of graphs. \end{proof} For $1\leq i \leq j \leq n$, let $G$ be a finite abelian $p_i$-group of rank $1$ and $H$ be a finite abelian $p_j$-group of the same rank. Moreover, let $\Gamma(G)$ and $\Gamma(H)$ be two graphs realized by $G$ and $H$. We define a partial order "$\leq$" on $\mathcal{T}_{p_1 \cdots p_n}$. Graphs $\Gamma(G), \Gamma(H) \in \mathcal{T}_{p_1 \cdots p_n}$ are related as $\Gamma(G) \leq \Gamma (H)$ if and only if $\Gamma(H)$ contains a subgraph isomorphic to $\Gamma(G)$, that is, if and only if $\Gamma(G)$ can be obtained from $\Gamma(H)$ by "deletion of vertices". The relation "degeneration" on the set $\mathcal{T}_{p_1 \cdots p_n}$ descends to a partial order on $\mathcal{T}_{p_1 \cdots p_n}$, and two graphs $\Gamma(G)$, $\Gamma(H)$ are related if $\Gamma(G)$ degenerates to $\Gamma(H)$. It is not hard to verify that the partial orders "$\leq$" and "degeneration" are equivalent on $\mathcal{T}_{p_1 \cdots p_n}$, since by "deletion of vertices" in $\Gamma(H)$ we get a homomorphic image of $\Gamma(G)$ in $\Gamma(H)$, and if $\Gamma(G)$ degenerates to $\Gamma(H)$, then $\Gamma(G)$ can be obtained from $\Gamma(H)$ by "deletion of vertices". Recall that a poset $P$ is locally finite if the interval $[x, z] = \{y \in P : x \leq y \leq z\}$ is finite for all $x, z \in P$. If $x, z \in P$ and $[x, z] = \{x, z\}$, then $z$ \textit{covers} $x$. A Hasse diagram of $P$ is a graph whose vertices are the elements of $P$, whose edges are the cover relations, and such that $z$ is drawn "above" $x$ whenever $x < z$. A lattice is a poset $P$ in which every pair of elements $x, y \in P$ has a least upper bound (or join), $x \vee y \in P$, and a greatest lower bound (or meet), $x \wedge y \in P$. A lattice $P$ is distributive if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ and $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$ for all $x, y, z \in P$. Let $\mathcal{Y}$ be the set of all threshold eigenvalues partitions of members of $\mathcal{T}_{p_1 \cdots p_n}$. If $\mu, \eta \in \mathcal{Y}$, define $\mu \leq \eta$ if $Y(\mu)$ "fits in" $Y(\eta)$, that is, if $\mu \leq \eta$, then $Y(\mu)$ fits inside $Y(\eta)$ when the two diagrams are overlapped. The set $\mathcal{Y}$ with respect to this partial ordering is a locally finite distributive lattice. The unique smallest element of $\mathcal{Y}$ is $\hat{0} = \emptyset$, the empty set. Recall that the dual of a poset $P$ is the poset $P^{*}$ on the same set as $P$, such that $x \leq y$ in $P^{*}$ if and only if $y \leq x$ in $P$. If $P$ is isomorphic to $P^{*}$, then $P$ is self-dual. \begin{thm}\label{thm5} If $\Gamma(G), \Gamma(H) \in \mathcal{T}_{p_1 \cdots p_n}$, then $\Gamma(G)\leq \Gamma(H)$ if and only if $Y(\mu)$ "fits in" $Y(\eta)$, where $\mu$ and $\eta$ are the threshold eigenvalues partitions of the graphs $\Gamma(G)$ and $\Gamma(H)$. \end{thm} \begin{proof} If $\Gamma(G)$ is obtained from $\Gamma(H)$ by deletion of one or more vertices, then the terms in the threshold eigenvalues partition $\mu$ are fewer in number than the terms in the threshold eigenvalues partition $\eta$ of $\Gamma(H)$. It follows that $Y(\mu)$ "fits in" $Y(\eta)$. Conversely, suppose $Y(\mu)$ "fits in" $Y(\eta)$.
The threshold eigenvalues partitions $\mu$ and $\eta$ are obtained from the degree sequences of $\Gamma(G)$ and $\Gamma(H)$. If $\Gamma(G)$ and $\Gamma(H)$ have the same degree sequence, then $\mu = \eta$. Therefore, by Proposition \eqref{prp2}, $\Gamma(G) \cong \Gamma(H)$. Otherwise, $\mu \neq \eta$. Let $\Gamma(K)$ be a subgraph of $\Gamma(H)$ obtained by removing a pendant vertex from $\Gamma(H)$, and let $\eta'$ be its threshold eigenvalues partition. Then $Y(\eta')$ is obtained from $Y(\eta)$ by removing a single block from the string whose number of blocks equals the largest eigenvalue in $\eta$. It is clear that $Y(\eta')$ "fits in" $Y(\eta)$. We continue the process of deletion of vertices until the resulting graph has the same threshold eigenvalues partition as $\Gamma(G)$. Thus, it follows that $\Gamma(H)$ contains a subgraph isomorphic to $\Gamma(G)$, that is, $\Gamma(G) \leq \Gamma(H)$. \end{proof} \begin{cor}\label{cr1} The sets $\mathcal{T}_{p_1 \cdots p_n}$ and $\mathcal{Y}$ are isomorphic to each other (as posets). \end{cor} \begin{proof} The bijection $\Gamma(G) \longrightarrow Y(\mu)$ is a poset isomorphism from $\mathcal{T}_{p_1 \cdots p_n}$ onto $\mathcal{Y}$, where $\mu$ is the threshold eigenvalues partition of the graph $\Gamma(G) \in \mathcal{T}_{p_1 \cdots p_n} $ realised by a finite abelian $p_r$-group of rank $1$. \end{proof} For $n \geq 1$, let $\mathcal{F}_n$ be the collection of all connected threshold graphs on $n$ vertices. We extend the partial order "$\leq$" to $\mathcal{F}_n$. Two graphs $G_1, G_2 \in\mathcal{F}_n$ are related as $G_1 \leq G_2$ if and only if $G_1$ is isomorphic to a subgraph of $G_2$. It is not difficult to verify that the poset $\mathcal{T}_{p_1 \cdots p_n}$ is an induced subposet of $\mathcal{F}_n$ and that $\mathcal{F}_n$ is a self-dual distributive lattice. Moreover, if $\mathcal{H}_n$ is the collection of threshold eigenvalues partitions of members of $\mathcal{F}_n$, then again it is easy to verify that $\mathcal{H}_n$ is a poset with respect to the partial order "fits in", and we have the following observation related to the posets $\mathcal{F}_n$ and $\mathcal{H}_n$. \begin{cor} The bijection $G \longrightarrow Y(\mu)$ is a poset isomorphism from $\mathcal{F}_n$ to $\mathcal{H}_n$, where $\mu$ is the threshold eigenvalues partition of $G \in \mathcal{F}_n$. In particular, $\mathcal{H}_n$ is a self-dual distributive lattice. \end{cor} Now, we focus on sub-sequences (sub-partitions) of a threshold eigenvalues partition. We begin by dividing $Y(\pi)$ into two disjoint pieces of blocks, where $\pi$ is a threshold eigenvalues partition of a graph $\Gamma(G)\in \mathcal{T}_{p_1 \cdots p_n}$. We denote by $R(Y(\pi))$ those blocks of $Y(\pi)$ which lie on the diagonal of the trace square of $Y(\pi)$ or to its right. By the notation $C(Y(\pi))$, we denote those blocks of $Y(\pi)$ that lie strictly below the diagonal of the trace square, that is, $R(Y(\pi))$ is the piece of blocks of $Y(\pi)$ on or above the diagonal and $C(Y(\pi))$ is the piece of $Y(\pi)$ which lies strictly below the diagonal. This process of division is illustrated as follows (Figure \eqref{5}). \begin{figure}[H] \begin{center} \includegraphics[scale=.350]{5} \end{center} \caption{Division of $Y(\pi)$} \label{5} \end{figure} If we look more closely at these shifted divisions of $Y(\pi)$, we see that each successive row of $R(Y(\pi))$ is shifted one block to the right.
Furthermore, the sub-partition of $\pi$ corresponding to $R(Y(\pi))$ forms a strictly decreasing sequence, that is, the terms of the sub-partition are distinct; such sub-partitions with distinct terms are called \textit{strict threshold eigenvalues partitions}. Thus, if $\pi' = (a_1, a_2, \cdots, a_n)$ is a strict threshold eigenvalues partition of a threshold eigenvalues partition $\pi$, then there is a unique shifted division whose $i^{th}$ row contains $a_i$ blocks, where $1 \leq i \leq n$. It follows that there is a one-to-one correspondence between the set of all threshold eigenvalues partitions of members of $\mathcal{T}_{p_1 \cdots p_n}$ and the set of all strict threshold eigenvalues partitions. As a result, $\mathcal{Y}$ is identical to a lattice which we call the \textit{lattice of shifted divisions}. Recall that a subset $A$ of a poset $P$ is a \textit{chain} if any two elements of $A$ are comparable in $P$. A chain is called \textit{saturated} if there do not exist $x, z \in A$ and $y \in P\setminus A$ such that $y$ lies in between $x$ and $z$. In a locally finite lattice, a chain $\{x_0, x_1, \cdots, x_n\}$ of length $n$ is saturated if and only if $x_i$ covers $x_{i-1}$, where $1 \leq i \leq n$. Since $\mathcal{T}_{p_1 \cdots p_n}$ is a locally finite distributive lattice, it has a unique \textit{rank function} $\Psi : \mathcal{T}_{p_1 \cdots p_n} \longrightarrow \mathbb{Z}_{>0}$, where $\Psi\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}), \cdots, \Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z})\right)$ is the length of any saturated chain from $\hat{0}$ to the graph realized by a finite abelian $p_r$-group $\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$. Note that a finite abelian $p_r$-group of rank $n$, $G_{\lambda_r, p_r} = \mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z} \oplus \mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$, is identified with a sequence of abelian $p_r$-groups of rank $1$, $\left(\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}, \mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z}, \cdots, \mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z}\right)$, which in turn is identified with a sequence of graphs $\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z}), \Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z}), \cdots, \Gamma(\mathbb{Z}/p_{r}^{\lambda_{r, n}}\mathbb{Z})\right)$ or a sequence of threshold partitions $(\mu_1, \mu_2, \cdots, \mu_n)\in \mathcal{Y}$. Therefore, the correspondence of $G_{\lambda_r, p_r} = \mathbb{Z}/p_{r}^{\lambda_{r, 1}}\mathbb{Z} \oplus \mathbb{Z}/p_{r}^{\lambda_{r, 2}}\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$ to $(\mu_1, \mu_2, \cdots, \mu_n)$ establishes that every finite abelian $p_r$-group of rank $n$ can be identified with a saturated chain in $\mathcal{T}_{p_1 \cdots p_n}$ or $\mathcal{Y}$, and the rank function of each abelian $p_r$-group of rank $n$ is $\Psi(\mu_1, \mu_2, \cdots, \mu_n) = {\lambda_{r,n}} = \max \{\lambda_{r,i} : 1\leq i \leq n \}$. \begin{rem} Let $\Lambda_{q}$ be the set of all non-isomorphic graphs of $\mathcal{T}_{p_1 \cdots p_n}$ with an equal number of edges, say $q$ (for instance, the graphs realized by the groups $\mathbb{Z}/2^{3}\mathbb{Z}$ and $\mathbb{Z}/3^{2}\mathbb{Z}$ are non-isomorphic graphs with the same number of edges). There is a one-to-one correspondence between threshold eigenvalues partitions and strict threshold eigenvalues partitions.
The \textit{rank generating function} of the poset is presented in the following equation, \begin{equation*} \sum\limits_{q \geq 0}\kappa_{q}z^{q} = \prod\limits_{t \geq 1}(1 + z^{t}) = 1 + z + z^2 + 2z^3 + 2z^4 + \cdots, \end{equation*} where $\kappa_q$ is the cardinality of $\Lambda_{q}$. \end{rem} The representation of a locally finite distributive lattice $\mathcal{T}_{235}$ is illustrated in Figure \eqref{6}. \begin{figure}[H] \begin{center} \includegraphics[scale=.415]{6} \end{center} \caption{$\mathcal{T}_{235}$} \label{6} \end{figure} Fix a finite abelian $p_r$-group $G$ and $\Gamma(G) \in \mathcal{T}_{p_1 \cdots p_n}$. Let $\ell(\Gamma(G))$ be the number of saturated chains in $\mathcal{T}_{p_1 \cdots p_n}$ from $\hat{0}$ to $\Gamma(G)$. The following result relates the number of saturated chains in $\mathcal{T}_{p_1 \cdots p_n}$ with the degree of a projective representation of a symmetric group $\mathcal{S}_t$ on $t$ number of symbols. \begin{cor} Let $\pi = (\pi_1, \pi_2, \cdots, \pi_k)$ be a strict threshold eigenvalues partition of some $\Gamma(G) \in \mathcal{T}_{p_1 \cdots p_n}$. Then the following hold, \begin{equation}\label{eq3} \ell(\Gamma(G)) = \frac{t!}{\prod\limits_{i =1}^{tr(Y(\pi))}\lambda_i!}\prod\limits_{r<s}\frac{\lambda_r - \lambda_s}{\lambda_r + \lambda_s}, \end{equation} where $\lambda_i = \pi_{i} - i$, $1 \leq i \leq tr(Y(\pi))$ and $\lambda = (\lambda_1, \lambda_2, \cdots, \lambda_{tr(Y(\pi))})$ is a partition of some $t \in \Z_{>0}$. \end{cor} \begin{proof} The right side of \eqref{eq3} represents the count of number of saturated chains from $\hat{0}$ to $\Gamma(G)$. \end{proof} Note that the number of saturated chains from $\hat{0}$ to $\Gamma(G)$ in \eqref{eq3} also provide a combinatorial formula for the number of finite abelian groups of different orders. {\bf Acknowledgement:} This research project was initiated when the author visited School of Mathematics, TIFR Mumbai, India. So, I am immensely grateful to TIFR Mumbai for all the facilities. Moreover, I would like to thank Amitava Bhattacharya of TIFR Mumbai for some useful discussions on this research work. \textbf{Declaration of competing interest.}\\ There is no conflict of interest to declare. \textbf{Data Availability.}\\ Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \begin{thebibliography}{20} \bibitem{AL} D. F. Anderson and P. S. Livingston, The zero-divisor graph of a commutative ring, J. Algebra 217 (1999) 434 - 447. \bibitem{RB} R. B. Bapat, Graphs and Matrices, Springer/Hindustan Book Agency, London/New Delhi, 2010. \bibitem{Bk} I. Beck, Coloring of commutative rings, J. Algebra 116 (1988) 208 - 226. \bibitem{ZB} Z. Bohdan, Intersection graphs of finite abelian groups, Czech. Math. Journal 25 (2) (1975) 171 - 174. \bibitem{BF} R. Brauer, K. A. Fowler, On groups of even order, Annals of Math. 62 (2) (1955) 565 - 583. \bibitem{CS} P. Cameron, S. Ghosh, The power graph of a finite group, Disc. Math. 311 (13) (2011) 1220 - 1222. \bibitem{CH} V. Chv\'{a}tal, P. L. Hammer, Aggregation of Inequalities in Integer Programming, Ann. Disc. Math. 1 (1977) 145 - 162. \bibitem{KA} K. Dutta, A. Prasad, Degenerations and orbits in finite abelian groups, J. Comb. Theory, Series A 118 (2011) 1685 - 1694. \bibitem{HK} P. L. Hammer and A. K. Kelmans, Laplacian spectra and spanning trees of threshold graphs, 65 1-3 (1996) 255 - 273. \bibitem{HZ} P. B. Henderson and Y. 
Zalcstein, A Graph-Theoretic Characterization of the PV class of synchronizing primitives, SIAM J. Comput. 6 (1) (1977) 88 - 108. \bibitem{LS} M. W. Liebeck and A. Shalev, Simple Groups, Probabilistic Methods, and a Conjecture of Kantor and Lubotzky, J. Algebra 184 (1996) 31 - 57. \bibitem{MP} N.V.R. Mahadev, U.N. Peled, Threshold Graphs and Related Topics, Ann. Disc. Math. 56 (1995). \bibitem{ER} E. Mazumdar, Rameez Raja, Group-annihilator graphs realised by finite abelian and its properties, Graphs and Combinatorics 38 25 (2022) 25pp. \bibitem{RM} R. Merris. Laplacian matrices of graphs: A survey, L. Algebra Appl. 197 (1994) 143 - 176. \bibitem{RMB} R. Merris. Graph Theory, John Wiley and Sons, (2011). \bibitem{FL} F. D. Meyer and L. D. Meyer, Zero-divisor graphs of semigroups, J. Algebra 283 (2005) 190 - 198. \bibitem{ML} G. A. Miller, Determination of all the characteristic subgroups of any abelian group, Amer. J. Math. 27 (1) (1905) 15 - 24. \bibitem{M} K. M\"{o}nius, Eigenvalues of zero-divisor graphs of finite commutative rings, J. Algebr. Comb. 54 (2021) 787 – 802. \bibitem{SR} S. Pirzada, Rameez Raja, On the metric dimension of a zero-divisor graph, Commun. Algebra 45 (4) (2017) 1399 - 1408. \bibitem{RR} Rameez Raja, Total perfect codes in graphs realized by commutative rings, Transactions of Comb. 11 (4) (2022) 295-307. \bibitem{SR1} Rameez Raja, S. Pirzada, On annihilating graphs associated with modules over commutative rings, Algebra Colloquium 29 (2) (2022) 281-296. \bibitem{SS} M. Schwachh\"{o}fer, M. Stroppel, Finding representatives for the orbits under the automorphism group of a bounded abelian group, J. Algebra 211 (1) (1999) 225 - 239. \end{thebibliography} \end{document}
2206.07657v1
http://arxiv.org/abs/2206.07657v1
Impact of End Point Conditions on the Representation and Integration of Fractal Interpolation Functions and Well Definiteness of the Related Operator
\documentclass[a4paper,fleqn]{cas-sc} \usepackage[numbers]{natbib} \usepackage{lineno} \usepackage{hyperref} \usepackage{amsthm, amsmath, amssymb, amsfonts} \newtheorem{theorem}{Theorem}[section] \newdefinition{definition}{Definition}[section] \def\tsc#1{\csdef{#1}{\textsc{\lowercase{#1}}\xspace}} \tsc{WGM} \tsc{QE} \begin{document} \title [mode = title]{Impact of End Point Conditions on the Representation and Integration of Fractal Interpolation Functions and Well Definiteness of the Related Operator} \author[1]{Aparna M.P} \fnmark[1] \author[2]{P. Paramanathan}[orcid=0000-0003-0688-4858] \cormark[1] \fnmark[2] \address{Department of Mathematics, Amrita School of Engineering, Coimbatore,Amrita Vishwa Vidyapeetham,India.} \cortext[cor1]{Corresponding author} \fntext[fn1]{mp$\[email protected] (Aparna M.P); p$\[email protected] (P. Paramanathan)} \begin{abstract} Fractal interpolation technique is an alternative to the classical interpolation methods especially when a chaotic signal is involved. The logic behind the formulation of an iterated function system for the construction of fractal interpolation functions is to divide the entire interpolating domain into subdomains and define functions on each subdomain piecewisely. The objective of this paper is to explore the significance of the end point conditions on the graphical representation of the resultant functions and their numerical integration. The central problem in the formulation of the IFS, the continuity of the fractal interpolation functions, is addressed with an explanation on the techniques implemented to resolve the problem. Instead of an analytical expression, the fractal interpolation functions are always represented in terms of recursive relations. This paper further presents the derivation of these recursive relations and proposes a straightforward method to find the approximating function, involved in these relations. \end{abstract} \begin{keywords} Fractal interpolation function (FIF) \sep Iterated function system (IFS) \sep Read-Bajraktarevic operator \sep Attractor \end{keywords} \maketitle \section{Introduction} \indent Fractal analysis deals with signals of chaotic behaviour. The chaotic nature of the signals can easily be identified by measuring the dimension of their graphs. The dimension must be non integer for such signals. Another indicator of the chaotic nature is that of self similarity. \\ \indent In numerical approximation, when a signal is provided with only the sample values, there are numerous techniques available to find the interpolating function that passing through these values. But, when the signal is of self similar nature and its graph is of non integer dimension, different methods are to be applied. Thus, fractal interpolation functions (FIF) have been invented by Barnsley to approximate signals with such characteristics. The functions obtained via fractal interpolation are of non-differentiable nature. These characteristics enable them to represent most of the real world signals, whereas this representation is not possible with the usual interpolation functions (IF). The dimension of fractal interpolation function is a measure of the complexity of the signals, facilitating the comparison of recordings. Thus, fractal interpolation is considered as a generalisation of all the existing interpolation techniques. \indent The fractal interpolation functions are deeply connected with iterated function systems (IFS). 
Once a proper IFS has been defined, it is easier to derive the recursive formula for the fractal interpolation function. Fractal interpolation functions have been constructed for single variable functions, bivariable functions and multivariable functions. Although the fundamental aim is to create an IFS, there are some slight variations in the IFS according to the interpolating domain. \\ \indent The theory of fractal interpolation functions was put forward by Barnsley in \cite{ff}. He introduced IFS and arrived at fractal interpolation functions as attractors of the IFS. Some of the properties of such interpolation functions have been explained in the paper and certain applications have been explored. The technique for the numerical integration using FIF was introduced by Navascues and Sebastian \cite{ni}. The extension of the concept of fractal interpolation to two dimensions was first carried out by Massopust \cite{fs}. The construction was based on a triangular region, with the interpolation points on the boundary required to be coplanar. The coplanarity condition was later removed by Geronimo and Hardin in their work \cite{fi}. They constructed fractal interpolation surfaces on triangles with arbitrary boundary data. The paper also considers the construction over polygons with arbitrary interpolation points. In \cite{bf}, L. Dalla considered a rectangular interpolating domain where the interpolation points on the boundary were collinear. Using a fold-out technique, Małysz introduced fractal interpolation surfaces (FIS) over rectangles \cite{md}. The construction requires all the vertical scaling factors to be constant. Metzler and Yun generalised this method by taking the vertical scaling factor as a function \cite{cf}. Hardin and Massopust constructed multivariable fractal interpolation functions \cite{fr}. A different construction with discontinuous FIS was considered in \cite{ifs}. In \cite{fsw}, Massopust constructed bivariable fractal interpolation functions over rectangles by considering the tensor product of two univariable fractal interpolation functions. Xie and Sun considered another construction wherein the attractor was not the graph of a continuous function \cite{bfi}. The lack of continuity was later resolved by Vasileios Drakopoulos and Polychronis Manousopoulos \cite{nt}. In \cite{nl}, Kobes and Penner considered a nonlinear generalisation of fractal interpolation functions with an aim to apply them in image processing. A three dimensional IFS was considered in \cite{sf}. The construction of a nonlinear fractal interpolation function is proposed in \cite{an}. In \cite{nn}, Ri introduced the construction of a bivariable FIF using the Matkowski fixed point theorem and Rakotch contractions.\\ \indent This paper is an attempt to investigate the impact of the end point conditions on the IFS for the construction and integration of fractal interpolation functions. It also explains the different techniques used to prove the well definiteness of the Read-Bajraktarevic operator whose fixed point is the required fractal interpolation function. This study begins with some basic definitions as a prerequisite to understand the rest of the sequel. A quick recap of the important steps in the construction is then provided. The recursive relation that is to be satisfied by the fractal function is proved both logically and analytically. The next section presents the difference between an interpolation function and a fractal interpolation function for the same data set. 
The relation between the functions in the IFS and the approximating function involved in the recursive relation is established then. An easier method to find the approximating function is also provided. Following which, the significance of the end point conditions are explored and the consequences of violating them have been shown. Referring the construction procedure, it is observed that the fundamental problem in the construction of a bivariable fractal interpolation function is that of continuity. Since the Read Bejrakarevic operator is not well defined at the common boundaries of each subdomain, the resulting fixed point, i.e, the fractal interpolation function is not continuous. Finally, this paper concludes with an explanation on the different techniques implemented to solve the problem of well definiteness for a rectangular interpolating domain. \section{Prerequisites} The basic terminologies, definitions and results related to the topic are given below: \begin{definition} Let $(X,d)$ be a metric space. Then, $(X,d)$ is a complete metric space if every cauchy sequences in $X$ converges in the metric space $(X,d).$ \end{definition} \begin{definition} Let $(X,d)$ be a metric space. A function $f:X \rightarrow X$ is said to be a contraction map if, for any $x,y \in X,$ $$d(f(x),f(y)) \leq r. d(x,y)$$ for some $r$ such that $0 \leq r <1.$ \end{definition} \begin{theorem}[The Contraction Mapping Theorem] \cite{fe} Let $f:X \rightarrow X$ be a contraction mapping on a complete metric space $(X,d)$. Then, $f$ possesses exactly one fixed point $x_{f} \in X$ and moreover for any point $x\in X$, the sequence $\{f^{on}(x): n=0,1,2,...\}$ converges to $x_{f}.$ That is, $$\lim\limits_{n \rightarrow \infty} f^{on}(x)=x_{f,}$$ for each $x \in X.$ \end{theorem} \begin{definition} A hyperbolic iterated function system (IFS) consists of a complete metric space $(X,d)$ together with a finite set of contraction mappings $f_{n}:X \rightarrow X, n=1,2,...N$ for some $N \in \mathbb{N}.$ It is generally denoted as $\{X; f_{n}, n=1,2,...N\}.$ Its contractivity factor is $s=max\{s_{n}:n=1,2,..N\}$ where $s_{n}$ is the contractivity factor of $f_{n}.$ \end{definition} \begin{definition} Consider a hyperbolic iterated function system $\{X; f_{n}, n=1,2,...N\}.$ An operator $F:H(X)\rightarrow H(X)$ such that $$F(B)=\cup_{n=1}^{N}f_{n}(B)$$ is known as Hutchinson operator. \end{definition} \begin{theorem} Let $\{X; f_{n}, n=1,2,...,N\}$ be a hyperbolic IFS for the given set of data. Let $s=max\{s_{n}:n=1,2,..N\}$ be the contractivity factor of the hyperbolic IFS. The map $F:H(X) \rightarrow H(X)$ defined by $$F(B)=\cup_{n=1}^{N}f_{n}(B),$$ for all $B \in H(X)$ is a contraction mapping on the complete metric space $(H(X),h(d))$ with contractivity factor $s.$ $i.e,$ $$h(F(B),F(C)) \leq s h(B,C)$$ for all $B, C \in H(X).$ Its unique fixed point $A \in H(X)$ is such that $$A=F(A)=\cup_{n=1}^{N}f_{n}(A)$$ and is given by $$A=\lim\limits_{n \rightarrow \infty}F^{on}(B)$$ for any $B \in H(X).$ The unique, fixed point of the operator $F$ is known as the attractor of the IFS. \end{theorem} \section{Quick Recap on the Steps Involved in the Construction of Single Variable Fractal Interpolation Functions} The construction of a fractal interpolation function involves mainly three steps. \\ \noindent The first and foremost step is the construction of an iterated function system (IFS) for the data set. 
\\ \noindent The given data set is of the form $\{(t_{n},x_{n}):n=0,1,...,N\},$ for some $N \in Z, N \geq 2,$ where $t_{n}'s \in R$ are the input arguments and the output arguments $x_{n}'s$ are such that $x_{n}=f(t_{n})$ for every $n.$ Moreover, the input arguments are ordered by the usual ordering, $t_{0} <t_{1} < ...<t_{N}.$ \\ \noindent Given a data set, the following are the procedures to arrive at an IFS: \begin{enumerate} \item Specify the interpolation domain, $D,$ consisting of the data points. It is the set on which the fractal interpolation function is defined. For a single variable function, $D$ will be a closed interval. $D$ can be a triangle, rectangle, circle, polygon or any two dimensional shape for a two variable function. Likewise, $D$ will be an n-dimensional shape when an n-variable function is considered. The data set will be a subset of $D\times R.$ \item Create contractions $L_{n}$ from $D$ into each of its subparts $D_{n},$ such that each $L_{n}$ is a homeomorphism and satisfy the end point conditions. The end point conditions depend upon the domain $D.$ \item Choose a vertical scaling factor between -1 and 1. \item Define continuous functions $F_{n}:D \times R \rightarrow R$ such that it satisfies the end point conditions and contractive in the output argument. Here also, the end point conditions depend upon the domain $D.$ Usually, $F_{n}$ is defined such that $F_{n}(t,x)=\alpha_{n}x+q_{n}(t), $ where $q_{n}$ is a continuous function on $D.$ If $g_{0}$ is an approximating function to the data set, there exists a connection between $g_{0}$ and $q_{n}.$ This relation is specifically explained later for a single variable function and an alternative method for finding the approximation function $g_{0}$ is provided. \end{enumerate} \noindent Once an IFS has been defined, the next step is to make it hyperbolic. Since the IFS may not be hyperbolic in Euclidean metric, a new metric has been defined based on a real number $\theta$ on $I\times R$, equivalent to Euclidean metric. The IFS turns out to be hyperbolic once a proper $\theta$ has been chosen. The next and the final step into the fractal interpolation function is to establish the existence of a continuous function defined on interpolating domain such that the function interpolates the data and that the graph of the function is the attractor of the IFS. In order to prove the existence of such a function, the following points have to be verified. \begin{itemize} \item Define a complete metric space that consists of the set of all continuous functions defined on the interpolating domain, satisfying some properties, depending on the domain. \item Define an operator, called Read-Bajraktarevic operator, on the complete metric space and make it well defined. \item Prove the contractivity of the operator. \item Verify the unique, fixed point of the operator interpolates the data. \item Verify graph of unique, fixed point of the operator is the attractor of the IFS. \end{itemize} Then, the unique, fixed point of the operator is the required fractal interpolation function to the given data set. 
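For illustration only (this sketch is not part of the construction above), the single variable case can be prototyped in a few lines of Python, using the usual affine choices $L_{n}(t)=a_{n}t+e_{n}$ and $q_{n}(t)=c_{n}t+d_{n}$ described above; the coefficients are obtained from the end point conditions, and the Read-Bajraktarevic operator is iterated on a sampled function to approach its fixed point. The function names (\texttt{build\_ifs}, \texttt{apply\_T}) and the example data are illustrative, not part of the original construction.
\begin{verbatim}
import numpy as np

def build_ifs(t, x, alpha):
    # Affine IFS {(L_n, F_n)} for the data (t_n, x_n), n = 0,...,N.
    # L_n(s) = a s + e maps [t_0, t_N] onto [t_{n-1}, t_n];
    # F_n(s, y) = alpha_n y + c s + d satisfies the end point conditions
    # F_n(t_0, x_0) = x_{n-1} and F_n(t_N, x_N) = x_n.
    t0, tN, x0, xN = t[0], t[-1], x[0], x[-1]
    maps = []
    for n in range(1, len(t)):
        a = (t[n] - t[n-1]) / (tN - t0)
        e = (tN * t[n-1] - t0 * t[n]) / (tN - t0)
        c = (x[n] - x[n-1] - alpha[n-1] * (xN - x0)) / (tN - t0)
        d = x[n-1] - alpha[n-1] * x0 - c * t0
        maps.append((a, e, alpha[n-1], c, d))
    return maps

def apply_T(maps, t, grid, f):
    # One application of the Read-Bajraktarevic operator to a sampled f.
    Tf = np.empty_like(f)
    for i, s in enumerate(grid):
        n = min(max(np.searchsorted(t, s, side='right') - 1, 0), len(maps) - 1)
        a, e, al, c, d = maps[n]
        u = (s - e) / a                 # u = L_n^{-1}(s) lies in [t_0, t_N]
        fu = np.interp(u, grid, f)      # approximate f(L_n^{-1}(s))
        Tf[i] = al * fu + c * u + d     # F_n(u, f(u))
    return Tf

# Example data and vertical scaling factors with |alpha_n| < 1.
t = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([0.0, 1.0, 0.5, 2.0])
maps = build_ifs(t, x, alpha=[0.3, -0.4, 0.2])
grid = np.linspace(t[0], t[-1], 1001)
f = np.interp(grid, t, x)               # start from the linear interpolant
for _ in range(25):                     # iterate towards the fixed point
    f = apply_T(maps, t, grid, f)
\end{verbatim}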
\begin{theorem} The fractal interpolation function satisfies the recursive relation $$f(t)=F_{n}({L_{n}^{-1}(t), f(L_{n}^{-1}(t))}), \,\,\,\, n=1,2,...,N, \,\, t \in D_{n}.$$ \end{theorem} \begin{proof} Let $\mathbb{F}$ be the set of all continuous functions $f:I \rightarrow R$ where $I=[t_{0},t_{N}]$ such that $f(t_{0})=x_{0}$ and $f(t_{N})=x_{N}.$ Trivially, $\mathbb{F}$ is a complete metric space with respect to the supremum metric. Now, define an operator $T:\mathbb{F} \rightarrow \mathbb{F}$ by $(Tf)(t)=F_{n}(L_{n}^{-1}(t), f(L_{n}^{-1}(t)) ).$ $T$ is continuous on each open interval $(t_{n-1}, t_{n}).$ To prove the continuity of $T$ at the end points of each subinterval, consider an arbitrary point $t_{n} \in I_{n}.$ Then, \begin{align*} (Tf)(t_{n}) &=F_{n}(L_{n}^{-1}(t_{n}), f(L_{n}^{-1}(t_{n})) )\\ &=F_{n}(t_{N}, f(t_{N}))\\ &=F_{n}(t_{N},x_{N})\\ &=x_{n} \end{align*} Since $t_{n}$ is also a point in $I_{n+1},$ \begin{align*} (Tf)(t_{n}) &=F_{n+1}(L_{n+1}^{-1}(t_{n}), f(L_{n+1}^{-1}(t_{n})) )\\ &=F_{n+1}(t_{0}, f(t_{0}))\\ &=F_{n+1}(t_{0},x_{0})\\ &=x_{n} \end{align*} Therefore, $T$ is continuous everywhere. To complete the proof of well definiteness, consider \begin{align*} (Tf)(t_{0}) &=F_{1}(L_{1}^{-1}(t_{0}), f(L_{1}^{-1}(t_{0})) )\\ &=F_{1}(t_{0}, f(t_{0}))\\ &=F_{1}(t_{0},x_{0})\\ &=x_{0} \end{align*} and \begin{align*} (Tf)(t_{N}) &=F_{N}(L_{N}^{-1}(t_{N}), f(L_{N}^{-1}(t_{N})) )\\ &=F_{N}(t_{N}, f(t_{N}))\\ &=F_{N}(t_{N},x_{N})\\ &=x_{N}. \end{align*} Now, it remains to show that $T$ is contractive. For that, let $f,g \in \mathbb{F}.$ Then, \begin{align*} d(Tf(t), Tg(t)) &=|Tf(t)-Tg(t)|\\ &=|\alpha_{n}||f(L_{n}^{-1}(t))- g(L_{n}^{-1}(t))|\\ &\leq |\alpha_{n}| d(f,g). \end{align*} Hence, $d(Tf,Tg)\leq \delta d(f,g),$ where $\delta=max_{n}\{|\alpha_{n}|\}<1.$ Now, by contraction mapping theorem, $T$ has a unique, fixed point $f$ in $\mathbb{F}.$\\ $i.e,$ $(Tf)(t)=f(t),$ which implies $f(t)=F_{n}({L_{n}^{-1}(t), f(L_{n}^{-1}(t))}), \,\,\,\, n=1,2,...,N, \,\, t \in D_{n}.$ From the definition of $F_{n}$ used for a single variable function, it is evident that $f(t)=\alpha_{n}f(L_{n}^{-1}(t))+q_{n1}L_{n}^{-1}(t)+q_{n0}.$ \vspace{0.2cm}\\ \indent This relation can also be achieved logically by going through the IFS theory as follows: \vspace{0.2cm} \\ \indent The given initial data set was $\{(t_{n},x_{n}):n=0,1,...,N\},$ such that $x_{n}=f(t_{n})$ for every $n.$ $i.e,$ $f$ is an interpolation function to this data set. Now, by applying the functions in the IFS, the data set is transformed into $\{ (L_{n}(t), F_{n}(t,x)): n=1,2,..,N\},$ for which also $f$ is an interpolation function. $i.e,$ $f(L_{n}(t))=F_{n}(t,x).$ Applying $L_{n}^{-1},$ this becomes $f(t)=F_{n}(L_{n}^{-1}(t), f(L_{n}^{-1}(t))).$ \end{proof} \section{Comparison of IF and FIF for the same data set} \begin{theorem} Consider a set of data points $\{(t_{n}, x_{n}):n=0,1,..,N\}.$ Let $f$ be the classical interpolation function to this data set. Let the set be assigned with an IFS $\{(L_{n}(t),F_{n}(t,x)):n=1,2,...,N\}.$ The corresponding affine fractal interpolation function be $g.$ Then, the function $f$ is approximately equal to $g$ if there exists a scale vector $\alpha^{'}$ such that $$||f-g||_{\infty}\leq w_{f}(h)+\frac{2|\alpha^{'}|_{\infty}}{1-|\alpha^{'}|_{\infty}}||f||_{\infty}$$ where $h$ is the constant step length $h=t_{n}-t_{n-1}$ and $w_{f}(h)$ is the modulus of continuity of $f.$ On the converse, the function $g$ will always be an interpolation function to the data set. 
\end{theorem} \begin{proof} The first part of the theorem is proved in Lemma 3.2 of \cite{ni}. For the converse part, consider an arbitrary point $t_{n}.$ Since the fractal interpolation function is the unique fixed point of the operator $T$ defined as $$(Tf)(t)=F_{n}(L_{n}^{-1}(t), f(L_{n}^{-1}(t))),$$ \begin{align*} f(t_{n})&=(Tf)(t_{n}) \\ &=F_{n}(L_{n}^{-1}(t_{n}), f(L_{n}^{-1}(t_{n}))) \\ &= F_{n}(t_{N},f(t_{N})) \\ &= F_{n}(t_{N},x_{N}) \\ &=x_{n}. \end{align*} Therefore, $f$ is an interpolation function to the data set. Hence the proof. \end{proof} \section{Relation between $g_{0}$ and $q_{n}$} \begin{theorem} Let $\{(t_{n},x_{n}): n=0,1,...,N\}$ be the given data set with IFS $\{(L_{n}(t),F_{n}(t,x)):n=1,2,...,N\}.$ Let $f$ be the fractal interpolation function corresponding to this data set. Then, $f$ satisfies another recursive relation $$f(t)=g_{0}(t)+\alpha_{n}(f-r)\circ L_{n}^{-1}(t)$$ for $t\in I_{n},$ where $g_{0}$ is the piecewise linear function through the points $\{(t_{n},x_{n}): n=0,1,...,N\}$ and $r$ is the line passing through $(t_{0},x_{0})$ and $(t_{N},x_{N}).$ Moreover, the functions $g_{0}$ and $q_{n}$ are related by the equation $q_{n}\circ L_{n}^{-1}(t)=g_{0}(t)-\alpha_{n}\, r\circ L_{n}^{-1}(t)$ for $t\in I_{n}.$ \end{theorem} \begin{proof} By Lemma 3.2 in \cite{ca}, $(Tf)(t)=g_{0}(t)+\alpha_{n}(f-r)\circ L_{n}^{-1}(t)$ for $t\in I_{n},$ where $g_{0}$ is the piecewise linear function through the points $\{(t_{n},x_{n}): n=0,1,...,N\}$ and $r$ is the line passing through $(t_{0},x_{0})$ and $(t_{N},x_{N}).$ Since the fractal interpolation function is the unique fixed point of $T,$ it follows immediately that $$f(t)=g_{0}(t)+\alpha_{n}(f-r)\circ L_{n}^{-1}(t)$$ for $t\in I_{n}$ with $g_{0}$ and $r$ as specified above. The relation between $g_{0}$ and $q_{n}$ is also established in the same Lemma 3.2. \end{proof} \begin{theorem} On each subinterval $I_{n}=[t_{n-1},t_{n}],$ the function $g_{0}$ can also be calculated directly by writing $g_{0}(t)=a_{n}t+b_{n}$ and solving the system of two equations given below. \begin{align*} a_{n}t_{n}+b_{n} &=x_{n} \\ a_{n}t_{n-1}+b_{n} &=x_{n-1} \end{align*} \end{theorem} \begin{proof} Since $q_{n}$ is a linear function in $t,$ the functions $g_{0}$ and $r$ are also linear in $t$ on each subinterval. Writing $g_{0}(t)=a_{n}t+b_{n}$ on $I_{n},$ for some constants $a_{n}, b_{n},$ the interpolation conditions at the end points of $I_{n}$ give the system \begin{align} a_{n}t_{n}+b_{n} &=x_{n} \\ a_{n}t_{n-1}+b_{n} &=x_{n-1}. \end{align} Subtracting (2) from (1) gives $a_{n}=\frac{x_{n}-x_{n-1}}{t_{n}-t_{n-1}}.$ \vspace{0.2cm} \\ Substituting this in (1) gives $b_{n}=\frac{x_{n-1}t_{n}-x_{n}t_{n-1}}{t_{n}-t_{n-1}}.$ \vspace{0.2cm} \\ Then \begin{align*} g_{0}(t) &=a_{n}t+b_{n} \\ &=\Big(\frac{x_{n}-x_{n-1}}{t_{n}-t_{n-1}}\Big)t+\Big(\frac{x_{n-1}t_{n}-x_{n}t_{n-1}}{t_{n}-t_{n-1}}\Big), \end{align*} which, taken over all $n,$ is the piecewise linear function with vertices $(t_{n},x_{n})$ for $n\in \{0,1,...,N\}.$ \end{proof} \section{Significance of the end point conditions on $F_{n}$} The iterated function systems for the single variable and bivariable fractal interpolation functions are given below:\\ \noindent Consider a single variable fractal interpolation function. The IFS for the function is $\{(L_{n}(t), F_{n}(t,x)): n=1,2,..,N\}.$ The role of $L_{n}$ is to contract the entire interpolating domain $D$. The interpolating domain is a closed interval when a single variable function is considered. 
It contracts the interval into its subintervals by assigning the end points of $I$ into the respective end points of $I_{n}.$ The second component function $F_{n}$ in the IFS, responsible for vertical scaling, normally consists of another function $q_{n}$ of the input arguments $q_{n}(t)=q_{n1}t+q_{n0}.$ The constants in $q_{n}$ are obtained by solving the end point conditions on $F_{n}, i.e,$ the conditions, $$ F_{n}(t_{0}, x_{0})=x_{n-1} \,\,\,\, F_{n}(t_{N}, x_{N})=x_{n} $$ for $n=1,2,...,N.$ When a two variable function with triangular region is considered, the IFS is $\{(L_{n}(x,y), F_{n}(x,y,z)): n=1,2,..,N\}$ where $L_{n}$ contracts the entire triangle into its subtriangles. The function $F_{n}$ is such that $F_{n}(x,y,z)=\alpha_{n}z+q_{n}(x,y)$ with $q_{n}(x,y)=a_{n}x+b_{n}y+c_{n}.$ The respective end point conditions are $$ F_{n}(x_{1}, y_{1}, z_{1})=z_{n1} \,\,\,\, F_{n}(x_{2}, y_{2},z_{2})=z_{n2}, \,\,\,\,F_{n}(x_{3}, y_{3},z_{3})=z_{n3} $$ for $n=1,2,...,N$ where $(x_{1}, y_{1}, z_{1}), \,\,\, (x_{2}, y_{2}, z_{2}), \,\,\, (x_{3}, y_{3}, z_{3}) $ are the vertices of the triangle $D$ with $z_{n1}, \,\,\, z_{n2}, \,\,\, z_{n3}$ the value of the function at the vertices of the $n-th$ subtriangle. \\ \indent For rectangular region, the IFS is $\{(\phi_{n}(x), \psi_{m}(y), F_{n,m}(x,y)): n=1,2,..,N, m=1,2,...,M\}$ where $\phi_{n}$ contracts along the $X$ axis and $\psi_{m}$ contracts along the $Y$ axis. The function $F_{n,m}$ is such that $F_{n,m}(x,y,z)=\alpha_{n,m}z + q_{n,m}(x,y)$ where $ q_{n,m}(x,y)=e_{n,m}x+f_{n,m}y+g_{n,m}xy+k_{n,m}.$ Since it is a rectangle, four conditions are there for $F_{n,m}. i.e, $ $$ F_{n,m}(x_{0}, y_{0}, z_{0,0})=z_{n-1,m-1} \,\,\,\, F_{n,m}(x_{N}, y_{M},z_{N,M})=z_{n,m} $$ $$ F_{n,m}(x_{0}, y_{M} z_{0,M})=z_{n-1,m} \,\,\,\, F_{n,m}(x_{N}, y_{0},z_{N,0})=z_{n,m-1} $$ for $n=1,2,...,N \,\,\, m=1,2,...,M.$ In short, for all the shapes, the first component function in the IFS is responsible for the contraction of the respective interpolation domains. These functions carry out contractions by assigning the corner points of the domain $D$ into the respective corners of the subdomain $D_{n}.$ The number of end point conditions will depend on the number of corners of $D.$ The second component function involves vertical scaling. Here also, the end point conditions varies according to the interpolating domain $D.$ This component function consists of another function of the input arguments. The coefficients of the input arguments are obtained by solving the end point conditions.\\ \subsection{Consequences of removing any of the end point conditions on the second component function} \begin{itemize} \item If any one of the end point condition is removed, the expression for $q_{n}$ changes which affects the shape of the attractor. \item In the formula for the numerical integration, numerator varies according to $q_{n},$ thereby making a difference in the integral value. \item With the change in $q_{n},$ the approximating function $g_{0}$ will be changed, affecting the error analysis techniques. \end{itemize} These points are specific to single variable functions, but also, relevant for two variable functions. \section{Implementation of the Different Techniques used to Make $'T'$ Well Defined} A set of data points is given with an IFS to the data and a freely chosen vertical scaling factor between -1 and 1. Before finding the fractal interpolation function to this data set, the existence of such a function has to be verified. 
The obtained function, defined from the interpolating domain to the set of real numbers, must be a continuous, interpolating function whose graph coincides with the attractor of the IFS. \\ \indent In order to establish the existence of such a function, a complete metric space consisting of the set of all continuous functions satisfying certain properties is created. The properties, used to verify the interpolating nature of the function, vary according to the interpolating domain. Then an operator $T$ is defined on this space, which has to be shown to be contractive; its unique fixed point is then the required FIF. Usually, $T$ is defined in terms of $F_{n,m}$, which is defined piecewise on each subdomain. The boundary of each subdomain is shared with the neighbouring subdomains. Hence, the operator $T$ is well defined only if it provides the same output along these shared boundaries. Usually, however, the piecewise definitions give different outputs along the boundaries, so that $Tf$ fails to be continuous; $T$ is then not a well defined self-map of the chosen function space, and the contraction argument cannot be applied. There are, however, various techniques available to transform $T$ into a well defined map. Two such methods are discussed below. \subsection{Formulation of $G_{n,m}$ from the existing function $F_{n,m}$} In \cite{nt}, the authors introduced a function $G_{n,m}$ and formulated a new IFS with $G_{n,m}.$ An explanation for the functional representation of $G_{n,m}$ is provided below:\\ Consider the data set $\{(x_{n},y_{m},z_{n,m}): n=0,1,...,N, m=0,1,..,M\}$ where $z_{n,m}=f(x_{n},y_{m}).$ Let the usual IFS be $w_{n,m}(x,y,z)=(\phi_{n}(x),\psi_{m}(y), F_{n,m}(x,y,z)), $ where the functions $\phi_{n}, \,\, \psi_{m}, \,\, F_{n,m}$ are defined as in \cite{bf}. Defining the operator $T$ by $(Tf)(x,y)=F_{n,m}(\phi_{n}^{-1}(x),\psi_{m}^{-1}(y),f(\phi_{n}^{-1}(x),\psi_{m}^{-1}(y))),$ consider the right vertical side of the subrectangle $I_{n} \times J_{m},$ i.e., the line $x=x_{n}, y_{m-1}\leq y \leq y_{m}.$ Since this side is shared with the subrectangle $I_{n+1}\times J_{m},$ the operator $T$ admits two candidate definitions there. Considering this line as a side of $I_{n} \times J_{m},$ \begin{align*} \nonumber (Tf)(x_{n},y) &=F_{n,m}(\phi_{n}^{-1}(x_{n}), \psi_{m}^{-1}(y), f(\phi_{n}^{-1}(x_{n}), \psi_{m}^{-1}(y))) \\ \nonumber &= F_{n,m}(x_{N}, \psi_{m}^{-1}(y), f(x_{N}, \psi_{m}^{-1}(y))) \end{align*} If it is viewed as a side of $I_{n+1}\times J_{m},$ \begin{align*} (Tf)(x_{n},y) &=F_{n+1,m}(\phi_{n+1}^{-1}(x_{n}), \psi_{m}^{-1}(y), f(\phi_{n+1}^{-1}(x_{n}), \psi_{m}^{-1}(y))) \\ &= F_{n+1,m}(x_{0}, \psi_{m}^{-1}(y), f(x_{0}, \psi_{m}^{-1}(y))) \end{align*} Hence, the operator $T$ takes two different values along this line in general. 
Therefore, define the operator $T$ along this line as \begin{align*} \nonumber (Tf)(x_{n},y)&=\frac{\Big[ F_{n,m}(x_{N}, \psi_{m}^{-1}(y), f(x_{N}, \psi_{m}^{-1}(y))) + F_{n+1,m}(x_{0}, \psi_{m}^{-1}(y), f(x_{0}, \psi_{m}^{-1}(y))) \Big]}{2} \end{align*} Since $T$ is defined piecewise on each subrectangle $I_{n}\times J_{m},$ it must be of the form $$(Tf)(x,y)=G_{n,m}(\phi_{n}^{-1}(x), \psi_{m}^{-1}(y), f(\phi_{n}^{-1}(x), \psi_{m}^{-1}(y)))$$ where, along the line $x=x_{n},$ \begin{align*} \nonumber G_{n,m}(\phi_{n}^{-1}(x_{n}), \psi_{m}^{-1}(y), f(\phi_{n}^{-1}(x_{n}), \psi_{m}^{-1}(y))) &=G_{n,m}(x_{N}, \psi_{m}^{-1}(y), f(x_{N}, \psi_{m}^{-1}(y))). \end{align*} Comparing this with the averaged expression above, one obtains $$G_{n,m}(x_{N},y,z)=\frac{\Big[F_{n,m}(x_{N}, y, z) + F_{n+1,m}(x_{0}, y, z)\Big]}{2}.$$ Treating each side of the subrectangle similarly, the expression for $G_{n,m}$ given in \cite{nt} is obtained. The well definiteness of the operator $T$ is clearly established in \cite{nt} with this new IFS. \subsection{Making the data points on the boundary collinear} The author in \cite{bf} used the collinearity of the boundary data to prove the well definiteness of $T.$ That work is based on a rectangular domain where the interpolation points on the boundary are assumed to be collinear. Consider the collinear data sets $\{ (x_{0},y_{m}, z_{0,m}): m=0,1,..,M\},$ $\{ (x_{N},y_{m}, z_{N,m}): m=0,1,..,M\},$ $\{ (x_{n},y_{0}, z_{n,0}): n=0,1,..,N\},$ $\{ (x_{n},y_{M}, z_{n,M}): n=0,1,..,N\}.$ The collinearity implies that, for example, the set $\{ (x_{0},y_{m}, z_{0,m}): m=0,1,..,M\}$ lies on the line $x=x_{0}, \,\, y=(1-\lambda)y_{0}+\lambda y_{M}, \,\, z=(1-\lambda) z_{0,0} + \lambda z_{0,M},$ for $\lambda \in [0,1].$ \\ The data is assumed to be collinear in this sense, and the set $\mathcal{F}$ is taken to be the set of all continuous functions satisfying \begin{align*} \nonumber f(x_{0},(1-\lambda)y_{0}+\lambda y_{M})&=(1-\lambda)z_{0,0}+\lambda z_{0,M} \\ \nonumber f(x_{N},(1-\lambda)y_{0}+\lambda y_{M}) &=(1-\lambda)z_{N,0}+\lambda z_{N,M} \\ \nonumber f((1-\lambda)x_{0}+\lambda x_{N},y_{0}) &=(1-\lambda)z_{0,0}+\lambda z_{N,0} \\ \nonumber f((1-\lambda)x_{0}+\lambda x_{N},y_{M}) &=(1-\lambda)z_{0,M}+\lambda z_{N,M} \end{align*} \noindent Applying $T$ on the side of the subrectangle $I_{n}\times J_{m},$ i.e., along $x=x_{n}, \,\, y_{m-1}\leq y \leq y_{m},$ gives \begin{align*} \nonumber (Tf)(x_{n},y)&=(Tf)(x_{n}, (1-\lambda)y_{m-1}+\lambda y_{m}) \\ \nonumber &=F_{n,m}(x_{N}, (1-\lambda)y_{0}+\lambda y_{M}, f(x_{N}, (1-\lambda)y_{0}+\lambda y_{M})) \end{align*} \noindent Expanding $F_{n,m}$ and using the conditions above, this reduces to $$(Tf)(x_{n},y)= (1-\lambda) z_{n,m-1}+\lambda z_{n,m}.$$ \\ Considering the line as a side of $I_{n+1}\times J_{m},$ the same expression is obtained. Hence, the operator is well defined along the boundaries of each subrectangle. \section{Conclusion} This paper analyses the steps involved in the construction of a fractal interpolation function and compares interpolation functions with fractal interpolation functions. From this study, it is observed that the continuity of the fractal interpolation function is achieved whenever the Read-Bajraktarevic operator, used to formulate the fractal interpolation function, is well defined. Two techniques for ensuring the well definiteness of the operator have been described in this work. 
The recursive relation satisfied by the fractal interpolation function is established using the Read-Bajraktarevic operator. The relation is also proven logically. Analysing the end point conditions on the IFS, it is observed that the removal of any of these conditions affects the graphical as well as the integral results of fractal interpolation functions. It also affects the approximating function involved in the recursive relation, since this function depends on the IFS. While establishing this dependency, this paper also provides an alternative method to find the approximating function. \begin{thebibliography}{99} \bibitem{ni} M.A. Navascu{\'e}s, M.V. Sebasti{\'a}n, Numerical integration of affine fractal functions, Journal of Computational and Applied Mathematics, 252 (2013) 169-176. \bibitem{bf} L. Dalla, Bivariate fractal interpolation functions on grids, Fractals, 10 (2002) 53-58. \bibitem{nt} V. Drakopoulos, P. Manousopoulos, On non-tensor product bivariate fractal interpolation surfaces on rectangular grids, Mathematics, 8 (2020) 525. \bibitem{ff} M.F. Barnsley, Fractal functions and interpolation, Constr. Approx., 2 (1986) 303-329. \bibitem{fi} J.S. Geronimo, D. Hardin, Fractal interpolation surfaces and a related 2-D multiresolution analysis, J. Math. Anal. Appl., 176 (1993) 561-586. \bibitem{fs} P.R. Massopust, Fractal surfaces, J. Math. Anal. Appl., 151 (1990) 275-290. \bibitem{fe} M.F. Barnsley, Fractals Everywhere, second ed., Academic Press Professional, New York, 1988. \bibitem{nl} R. Kobes, A.J. Penner, Non-linear fractal interpolating functions of one and two variables, Fractals, 13 (2005) 179-186. \bibitem{ca} M.A. Navascu{\'e}s, M.V. Sebasti{\'a}n, Construction of affine fractal functions close to classical interpolants, J. Comput. Anal. Appl., 9 (2007) 271-285. \bibitem{md} R. Małysz, The Minkowski dimension of the bivariate fractal interpolation surfaces, Chaos Solitons Fractals, 27 (2006) 1147–1156. \bibitem{cf} W. Metzler, C. Yun, Construction of fractal interpolation surfaces on rectangular grids, Internat. J. Bifur. Chaos, 20 (2010) 4079–4086. \bibitem{fr} D. Hardin, P.R. Massopust, Fractal interpolation functions from $R^{n} \rightarrow R^{m}$ and their projections, Z. Anal. Anwend., 12 (1993) 535-548. \bibitem{ifs} C.M. Wittenbrink, IFS fractal interpolation for 2D and 3D visualization, in Proceedings of the 6th IEEE Visualization Conference, Atlanta, GA, USA, 29 October - 3 November (1995) 77-84. \bibitem{fsw} P.R. Massopust, Fractal functions, fractal surfaces and wavelets, Academic Press, San Diego, CA, USA, 1994. \bibitem{bfi} H. Xie, H. Sun, The study on bivariate fractal interpolation functions and creation of fractal interpolated surfaces, Fractals, 5 (1997) 625-634. \bibitem{sf} H.Y. Wang, On smoothness for a class of fractal interpolation surfaces, Fractals, 14 (2006) 223-230. \bibitem{nn} S.I. Ri, A new nonlinear bivariate fractal interpolation function, Fractals, 26 (2018) 1850054. \bibitem{an} S. Ri, A new idea to construct fractal interpolation function, Indagationes Mathematicae, 29 (2018) 962-971. \end{thebibliography} \end{document}
2205.08466v2
http://arxiv.org/abs/2205.08466v2
On a Ramanujan type expansion of arithmetical functions
\documentclass[12pt]{amsart} \synctex = 1 \renewcommand{\baselinestretch}{1.5} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{xcolor} \usepackage{enumerate} \newtheorem{theo}{Theorem}[section] \newtheorem*{theo*}{Theorem} \newtheorem{coro}[theo]{Corollary} \newtheorem{lemm}[theo]{Lemma} \newtheorem{prop}[theo]{Proposition} \newtheorem{prob}[theo]{Problem} \newtheorem{defi}[theo]{Definition} \newtheorem{rema}[theo]{Remark} \newtheorem{exam}[theo]{Example} \newtheorem{note}[theo]{Note} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \title{ On a Ramanujan type expansion of arithmetical functions } \begin{document} \keywords{Ramanujan sum, Ramanujan expansions; arithmetical functions; Cohen-Ramanujan sum; additive functions} \subjclass[2010]{11A25, 11L03} \author[A Chandran]{Arya Chandran} \address{Department of Mathematics, University College, Thiruvananthapuram, Kerala - 695034, India} \email{[email protected]} \author[K V Namboothiri]{K Vishnu Namboothiri} \address{Department of Mathematics, Government College, Chittur, Palakkad - 678104, INDIA\\Department of Collegiate Education, Government of Kerala, India} \email{[email protected]} \begin{abstract} Srinivasa Ramanujan provided series expansions of certain arithmetical functions in terms of the exponential sum defined by $c_r(n)=\sum\limits_{\substack{{m=1}\\(m,r)=1}}^{r}e^{\frac{2 \pi imn}{r}}$ in [\emph{Trans. Cambridge Philos. Soc, 22(13):259–276, 1918}]. Here we give similar type of expansions in terms of the Cohen-Ramanujan sum defined by E. Cohen in [\emph{Duke Mathematical Journal, 16(85-90):2, 1949}] by $c_r^s(n)=\sum\limits_{\substack{h=1\\(h,r^s)_s=1}}^{r^s}e^{\frac{2\pi i n h}{r^s}}$. We also provide some necessary and sufficient conditions for such expansions to exist. \end{abstract} \maketitle \section{Introduction} The Ramanujan sum denoted by $c_r(n)$ is defined to be the sum of certain powers of a primitive $r$th root of unity. That is, \begin{align} c_r(n)&=\sum\limits_{\substack{{m=1}\\(m,r)=1}}^{r}e^{\frac{2 \pi imn}{r}} \end{align} where $r\in\mathbb{N}$ and $n \in \mathbb{Z}$. This sum appeared for the first time in a paper of Ramanujan \cite{ramanujan1918certain} where he discussed series expansions of certain arithmetical functions in terms of these sums. These expansions were pointwise convergent. The series expansions he gave there were of the form \begin{align}\label{ramsum} g(a) = \sum\limits_{\substack{{r=1}}}^{\infty}\widehat{g}(r)c_r(a), \end{align} with suitable coefficients $\widehat{g}(r)$. In particular, he gave expansions like \begin{align*} d(n) = \sum\limits_{\substack{{r=1}}}^{\infty} \frac{\log r}{r}c_r(n) \end{align*} and \begin{align*} \sigma(n) = \frac{\pi^2n}{6}\sum\limits_{\substack{{r=1}}}^{\infty} \frac{c_r(n)}{r^2} \end{align*} where $d(n)$ and $\sigma(n)$ are respectively the number of divisors and the sum of divisors of $n$. Though Ramanujan gave series expansions of some functions, no necessary or sufficient conditions were given by him to understand for what type of functions such an expansion may exist. Attempts on this direction were made by others later. For an arithmetical function $g$, its mean value is defined by $M(g) = \lim\limits_{\substack{x\rightarrow \infty}}\frac{1}{x} \sum\limits_{\substack{{n \leq x}}} g(n)$, when the limit exists. 
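As a purely numerical illustration of the expansion of $\sigma(n)$ quoted above (this sketch is not part of the original argument), one may compute $c_{r}(n)$ directly from its definition and truncate the series. The function name \texttt{ramanujan\_sum}, the test value $n$ and the truncation point $R$ below are arbitrary choices.
\begin{verbatim}
from math import gcd, pi
import cmath

def ramanujan_sum(r, n):
    # c_r(n) = sum of exp(2*pi*i*m*n/r) over 1 <= m <= r with gcd(m, r) = 1.
    s = sum(cmath.exp(2j * pi * m * n / r)
            for m in range(1, r + 1) if gcd(m, r) == 1)
    return round(s.real)                # c_r(n) is an integer

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

# sigma(n) ~ (pi^2 n / 6) * sum_{r <= R} c_r(n) / r^2  (truncated expansion)
n, R = 12, 2000
approx = (pi ** 2 * n / 6) * sum(ramanujan_sum(r, n) / r ** 2
                                 for r in range(1, R + 1))
print(sigma(n), approx)                 # 28 and a value close to 28
\end{verbatim}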
Carmichael \cite{carmichael1932expansions} proved the following identity for the Ramanujan sums which helps us to write down possible candidates for the Ramanujan coefficients of any given arithmetical function. \begin{theo}[Orthogonality Relation] $$\lim\limits_{\substack{x\rightarrow \infty}}\frac{1}{x} \sum\limits_{\substack{{n \leq x}}}c_r(n)c_s(n) = \begin{cases}\phi(r) , \quad\text{ if } r=s\\ 0, \quad\text{ otherwise.} \end{cases}$$ \end{theo} By applying the orthogonality relation to ($\ref{ramsum}$), we get $\widehat{g}(r) = \frac{M(gc_r)}{\phi(r)}$ provided the mean value of $gc_r$ exists. Thus Ramanujan expansions exist for those arithmetical functions for which the mean values $M(gc_r)$ exist. Wintner proved \cite{hardy1943eratosthenian} later the following sufficient condition for the existence of the mean values. \begin{theo} Suppose that $g(n)=\sum\limits_{\substack{{d \mid n}}} f(d) $, and that $\sum\limits_{\substack{{n=1}}}^{\infty} \frac{|f(n)|}{n} < \infty$. Then $M(g)= \sum\limits_{\substack{{n=1}}}^{\infty} \frac{f(n)}{n} $. \end{theo} Delange \cite{delange1976ramanujan} improved the above result and proved the following giving another sufficient condition for the Ramanujan expansions to exist. \begin{theo} Suppose that $g(n) = \sum\limits_{\substack{{d \mid n}}} f(d)$, and that $\sum\limits_{\substack{{n=1}}}^{\infty} 2^{w(n)}\frac{|f(n)|}{n} < \infty$, where $w(n)$ is the number of distinct prime divisors of $n$. Then $g$ admits a Ramanujan expansion with $\widehat{g}(q)=\sum\limits_{\substack{{n=1}}}^{\infty}\frac{f(qm)}{qm} $. \end{theo} For a detailed discussion on the above results, please see \cite[Chapter VIII]{schwarz1994arithmetical}. Later, Lucht \cite{lucht1995ramanujan} gave an alternate method to compute the Ramanujan coefficients. \begin{theo}{\cite[Theorem 1]{lucht1995ramanujan} }\label{lucht1} Let $\widehat{g}: \mathbb{N}\rightarrow \mathbb{C}$ be an arbitrary arithmetical function and $\mu$ the usual M{\"o}bius function. The following are equivalent. \begin{enumerate} \item $g(a)= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r(a)$ converges (absolutely) for every $a \in \mathbb{N}$. \item $\gamma(a)= a \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(ar) \mu(r)$ converges (absolutely) for every $a \in \mathbb{N}$. \end{enumerate} \end{theo} By this theorem, the Ramanujan coefficients can be computed to be $\widehat{g}(n)=\sum\limits_{\substack{a=1\\n|a}}^{\infty}\frac{(\mu*g)(a)}{a}$ where $\mu$ is the usual M\"{o}bius function. In the same paper, he proved the following theorem to provide Ramanujan expansion to a class of additive functions. \begin{theo}\label{lucht2} Let $g \in \mathcal{A}$, the set of all additive arithmetical functions. If the series $\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^v}$\text{ and } $\sum\limits_{\substack{{p}}}\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^v}$ converge then $g$ has a pointwise convergent Ramanujan expansion (\ref{ramsum}) with coefficients \begin{align*} \widehat{g}(p^\alpha) &= \frac{-g(p^{\alpha-1})}{p^{\alpha }}+ (1-\frac{1}{p})\sum\limits_{\substack{{v\geq \alpha}}}\frac{g(p^v)}{p^{v}}\\ \widehat{g}(1) &= \sum\limits_{\substack{{p}}}\widehat{g}(p)\\ \widehat{g}(n) &= 0, \text{ otherwise.} \end{align*} \end{theo} The key component of Ramanujan expansions is the Ramanujan Sum and it has been generalized in many ways. 
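The orthogonality relation above can also be checked numerically for small moduli. The following sketch is illustrative only; the names \texttt{c}, \texttt{phi} and \texttt{average} are hypothetical, and the finite average merely approximates the limit in Carmichael's identity.
\begin{verbatim}
from math import gcd, pi
import cmath

def c(r, n):
    # Ramanujan sum c_r(n), computed from the definition.
    return round(sum(cmath.exp(2j * pi * m * n / r)
                     for m in range(1, r + 1) if gcd(m, r) == 1).real)

def phi(r):
    # Euler totient, by direct count.
    return sum(1 for m in range(1, r + 1) if gcd(m, r) == 1)

def average(r, s, x):
    # (1/x) * sum_{n <= x} c_r(n) c_s(n); by the orthogonality relation this
    # approaches phi(r) when r = s and 0 otherwise as x grows.
    return sum(c(r, n) * c(s, n) for n in range(1, x + 1)) / x

print(average(6, 6, 5000), phi(6))   # both close to 2
print(average(6, 10, 5000))          # close to 0
\end{verbatim}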
The aim of this paper is to study the Ramanujan type expansions using a generalization of the Ramanujan sum given by E.\ Cohen in \cite{cohen1949extension}. He defined the sum \begin{align}\label{Coh1} c_r^s(n)&=\sum\limits_{\substack{{(h,r^s)_s=1}\\h=1}}^{r^s}e^{\frac{2\pi i n h}{r^s}}, \end{align} where $(a,b)_s$ is the generalized gcd of $a$ and $b$ (see the definition in the next section). We call this henceforth as the Cohen-ramanujan sum. When $s=1$, this reduces to the Ramanujan sum. We will call such expansions by the name Cohen-Ramanujan expansion. We provide expansions for two well known arithmetical functions using this generalization. We also provide some conditions for such expansions to exist following the method of arguments given by Lucht in \cite{lucht1995ramanujan} and \cite{lucht2010survey}. Infact, we will be proving two theorems analogous to theorem \ref{lucht1} and theorem \ref{lucht2} appearing in \cite{lucht1995ramanujan}. Our expansions and results, as in the case of most of the existing results related to the usual Ramanujan sum, deal only with pointwise convergence of such expansions. We would like to remark that some other generalizations also exist for the Ramanujan sum. A few such generalizations were given by Cohen himself \cite{cohen1959trigonometric}, M. Sugunamma \cite{sugunamma1960eckford}, C. S. Venkataraman and Sivaramakrishnan \cite{venkataraman1972extension} and Chidambaraswamy \cite{chidambaraswamy1979generalized}. \section{Notations and basic results} Most of the notations, functions, and identities we mention in this paper are standard and can be found in \cite{tom1976introduction} or \cite{mccarthy2012introduction}. However for the sake of completeness, we restate some of them below. For two arithmetical functions $f$ and $g$, $f*g$ denotes their Dirichlet convolution (Dirichlet product). Then the M{\"o}bius inversion formula states that $f(n)=\sum\limits_{d|n}g(d) \Longleftrightarrow g(n)=\sum\limits_{d|n}f(d)\mu\left(\frac{n}{d}\right)=f*\mu$. An arithmetical function $g$ is said to be additive if $g(mn)= g(m)+g(n)$ for coprime positive integers $m$ and $n$. $ \mathcal{A}$ denotes the set of all additive arithmetical functions. $\mathcal{P^*}$ denotes the set of all prime powers $p^\alpha$ with $\alpha \in \mathbb{N}$. By $\xi_q^s(n)$, we mean the function $$\xi_q^s(n)= \begin{cases} q^s , \quad\text{ if } q^s\mid n\\ 0, \quad\text{ otherwise.} \end{cases}$$ It was proved by Cohen in \cite{cohen1949extension} that \begin{align}\label{ram-ident} \sum\limits_{\substack{r|q}}c_r^{s}(n)=\xi_q^{(s)}(n). \end{align} For $s\in\N$, the generalized GCD function $(a,b)_s$ gives the largest $d^s$ where $d\in\N$ such that $d^s|a$ and $d^s|b$. For $s>1$, a positive integer $m$ is $s-$power free if no $p^s$ divides $m$, where $p $ is prime. \begin{defi} For $s \in \N$, $\tau_s(n)$ gives the number of $d^s\mid n$ where $d^s \in \N$. That is $\tau_s(n)= \sum\limits_{\substack{d^s \mid n\\d \in \N}}1$. \end{defi} For $k,n \in \mathbb{N} $, $\sigma_k(n) = \sum\limits_{\substack{{d \mid n}}}d^k$, the sum of $k$th powers of the divisors of $n$. \begin{defi} Let $k,s \in \N$. The generalized sum of divisors function $\sigma_{k,s}(n)$ is given by $\sigma_{k,s}(n)=\sum\limits_{\substack{d^s \mid n\\d \in \N}}(d^s)^k$. \end{defi} Note that this function is different from $\sigma_{ks}$. But $\sigma_{k,1}=\sigma_{k}$. 
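To make the preceding definitions concrete, the following sketch (illustrative only; the names \texttt{ggcd} and \texttt{cohen\_ramanujan} are not standard) computes the generalized gcd $(a,b)_{s}$ and the Cohen-Ramanujan sum $c_{r}^{s}(n)$ by brute force from their definitions, and checks the identity (\ref{ram-ident}) for a few small values.
\begin{verbatim}
from math import pi
import cmath

def ggcd(a, b, s):
    # Generalized gcd (a, b)_s: the largest d**s with d**s | a and d**s | b.
    d, k = 1, 2
    while k ** s <= min(a, b):
        if a % k ** s == 0 and b % k ** s == 0:
            d = k
        k += 1
    return d ** s

def cohen_ramanujan(r, n, s):
    # c_r^s(n) = sum of exp(2*pi*i*n*h/r^s) over 1 <= h <= r^s, (h, r^s)_s = 1.
    rs = r ** s
    total = sum(cmath.exp(2j * pi * n * h / rs)
                for h in range(1, rs + 1) if ggcd(h, rs, s) == 1)
    return round(total.real)

def xi(q, n, s):
    # xi_q^(s)(n) = q^s if q^s | n, and 0 otherwise.
    return q ** s if n % q ** s == 0 else 0

# check  sum_{r | q} c_r^s(n) = xi_q^(s)(n)  for a few small values
s, q = 2, 6
for n in (64, 180):
    lhs = sum(cohen_ramanujan(r, n, s) for r in range(1, q + 1) if q % r == 0)
    print(n, lhs, xi(q, n, s))      # the two values agree in both cases
\end{verbatim}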
Using the identity \begin{align} c_r^{s}(n)=\sum\limits_{\substack{d|r\\d^s|n}}\mu(r/d)d^s\label{eq:mu_crs} \end{align} given by Cohen in \cite{cohen1949extension}, we see that for fixed $s, n\in\mathbb{N}$, $c_r^{s}(n)$ is bounded since $\vert c_r^{s}(n)\vert \leq \sum\limits_{\substack{d\mid r\\d^s\mid n}}d^s \leq \sigma_{1,s}(n).$ The unit function $u$ is an arithmetical function such that $u(n)=1$ for all $n$. By $\zeta(s)$, we mean the Riemann Zeta function. By \cite[Example 1,Theorem 11.5]{tom1976introduction}, we have \begin{align} \sum\limits_{\substack{n=1}}^{\infty} \frac{\mu(n)}{n^s}=\frac{1}{\zeta(s)}\text { if }Re(s) > 1.\label{eq:mu_zeta} \end{align} \section{Main Results} We begin with giving the Cohen-Ramanujan expansion of $\tau_s$. Here we use elementary number theoretic techniques to establish the result. \begin{theo}\label{div-exp} For $s>1$, we have $\tau_s(n) = \zeta(s)$$ \sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n)}{r^s}$. \end{theo} \begin{proof} Using (\ref{eq:mu_crs}), we have $c_r^{s}(n) = \sum\limits_{\substack{d\mid r\\d^s\mid n}}\mu(\frac{r}{d})d^s=\sum\limits_{\substack{r=dq\\d^s\mid n}}\mu(q)d^s$.\\ Now consider the sum \begin{align*} \sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n)}{r^s}&=\sum\limits_{\substack{{r=1}}}^{\infty}\sum\limits_{\substack{r=dq\\d^s\mid n}}\frac{\mu(q)d^s}{d^s q^s} = \sum\limits_{\substack{{q=1}}}^{\infty}\sum\limits_{\substack{d^s\mid n}}\frac{\mu(q)}{ q^s} =\sum\limits_{\substack{{q=1}}}^{\infty}\frac{\mu(q)}{ q^s}\sum\limits_{\substack{d^s\mid n}}1 \end{align*} and it is nothing but $\frac{1}{\zeta(s)} \tau_s(n)$ by identity (\ref{eq:mu_zeta}). Thus $\tau_s(n) = \zeta(s) \sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n)}{r^s}$. Since $s>1$ and $\vert c_r^{s}(n)\vert \leq \sigma_{1,s}(n)$, the sum $ \sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n)}{r^s}$ converges absolutely. \end{proof} \begin{note} In \cite{ramanujan1918certain}, Ramanujan showed that, $d(n)=\sum\limits_{\substack{{r=1}}}^{\infty} \frac{\log r}{r}c_r(n)$. Though $\tau_s$ becomes $d$ when $s=1$, the above result cannot be reduced to the Ramanujan's result because in our case above, we require $s$ to be greater than 1. \end{note} We derive the following Cohen-Ramanujan expansion for $\sigma_{ks}$. \begin{theo} For $k, s\geq 1$,$\frac{\sigma_{ks}(n)}{n^{ks}} =\zeta((k+1)s) \sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n^s)}{r^{(k+1)s}}$. \end{theo} \begin{proof} \begin{align*} \text{We have } \frac{\sigma_{ks}(n)}{n^{ks}} = \frac{\sum\limits_{\substack{{d\mid n}}}d^{ks}}{n^{ks}} = \sum\limits_{\substack{{n=dq}}}\frac{(\frac{n}{q})^{ks}}{n^{ks}}&= \sum\limits_{\substack{{q^s\mid n^s}}}\frac{1}{q^{ks}} = \sum\limits_{\substack{{q=1}}}^{\infty}\frac{1}{q^{ks}}\frac{1}{q^{s}}\xi_q^{(s)}(n^s). \end{align*} Now by equation (\ref{ram-ident}), \begin{align*} \frac{\sigma_{ks}(n)}{n^{ks}}=\sum\limits_{\substack{{q=1}}}^{\infty}\frac{1}{q^{(k+1)s}}\sum\limits_{\substack{r \mid q}}c_r^{s}(n^s) &= \sum\limits_{\substack{{q=1}}}^{\infty}\frac{1}{q^{(k+1)s}}\sum\limits_{\substack{ q=rm}}c_r^{s}(n^s)\\&= \sum\limits_{\substack{{r=1}}}^{\infty}\sum\limits_{\substack{{m=1}}}^{\infty}\frac{1}{r^{(k+1)s}m^{(k+1)s}}c_r^{s}(n^s)\\&= \sum\limits_{\substack{{m=1}}}^{\infty}\frac{1}{m^{(k+1)s}}\sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n^s)}{r^{(k+1)s}} \\&= \zeta((k+1)s)\sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{s}(n^s)}{r^{(k+1)s}}. 
\end{align*} The above sum converges absolutely since $s>1$ and $\vert c_r^{s}(n)\vert \leq \sigma_{1,s}(n)$. \end{proof} \begin{note} Since $\sigma_{ks}(n)=\sum\limits_{\substack{{d\mid n}}}d^{ks} = \sum\limits_{\substack{{d^s\mid n^s}}}(d^s)^k=\sigma_{k,s}(n^s) ,$ the above gives an expansion for $\frac{\sigma_{k,s}(n^s)}{n^{ks}}$ also. \end{note} \begin{note} If $n=m^sn_1$ where $n_1$ is an $s$-power free positive integer, then $\sigma_{k,s}(n)=\sigma_{k,s}(m^s)$. Hence $\sigma_{k,s}$ depends only on the $s$-power part in its argument. \end{note} \begin{note} When $s=1$, the above reduces to the expansion \\$\frac{\sigma_k(n)}{n^k}=\zeta(k+1)\sum\limits_{\substack{{r=1}}}^{\infty}\frac{c_r^{}(n)}{r^{k+1}}$ given by Ramanujan in \cite{ramanujan1918certain}. \end{note} Our next result is crucial in establishing the existence of the Cohen-Ramanujan expansions for certain class of additive functions. \begin{theo}\label{Equiv_conditions} Let $g : \mathbb{N}\rightarrow \mathbb{C}$ be an arbitrary arithmetical function. Then the following are equivalent.\\ $(i) g(a) = \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)$ converges absolutely for every $a \in \mathbb{N}$. \\ $(ii) \gamma(a) = a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)$ converges absolutely for every $a \in \mathbb{N}$.\\ In case of convergence, $\gamma= \mu * g$. \end{theo} \begin{proof} We have \\$ c_r^{s}(a^s)= \sum\limits_{\substack{d\mid r\\d^s\mid a^s}}\mu(\frac{r}{d})d^s= \sum\limits_{\substack{d^s\mid r^s\\d\mid a}}\mu(\frac{r}{d})d^s= \sum\limits_{\substack{d\mid a}}\mu(\frac{r}{d})\xi_d^{(s)}(r^s)= \sum\limits_{\substack{d\mid a}} f(d), $ where $f(d)= \mu(\frac{r}{d})\xi_d^{(s)}(r^s).$ By M{\"o}bius inversion, $\sum\limits_{\substack{d\mid a}} c_r^{s}(d^s) \mu(\frac{a}{d}) = f(a)= \mu(\frac{r}{a})\xi_a^{(s)}(r^s)= \begin{cases} a^s \mu(\frac{r}{a}), \quad\text{ if } a^s\mid r^s\\ 0, \quad\text{ otherwise.} \end{cases}\\$ Now we prove that $(i)\Rightarrow (ii)$. Suppose $g(a) = \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)$ converges absolutely for every $a \in \mathbb{N}$. Then \begin{align*} \mu*g(a) = \sum\limits_{\substack{d\mid a}} \mu(\frac{a}{d}) g(d) &= \sum\limits_{\substack{d\mid a}} \mu(\frac{a}{d}) \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(d^s)\\ &= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) \sum\limits_{\substack{d\mid a}} c_r^{s}(d^s) \mu(\frac{a}{d})\\ & = \sum\limits_{\substack{{r=1}\\{a^s\mid r^s}}}^{\infty} \widehat{g}(r) a^s \mu(\frac{r}{a})\\ &= \gamma(a) \text{ since }a^s|r^s \Longleftrightarrow a|r. \end{align*} From this, we get $\gamma(a) = a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)$ and \begin{align*} a^s \sum\limits_{\substack{{m=1}}}^{\infty} |\widehat{g}(am) \mu(m)| &\leq \sum\limits_{\substack{{r=1}}}^{\infty} \sum\limits_{\substack{d\mid a}}| \widehat{g}(r) \mu(\frac{a}{d}) c_r^{s}(d^s)| \leq \sum\limits_{\substack{d\mid a}} \sum\limits_{\substack{{r=1}}}^{\infty} | \widehat{g}(r) c_r^{s}(d^s)| \end{align*} which converges by the assumption. Thus $\gamma(a) = a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)$ converges absolutely. To prove that $(ii)\Rightarrow (i)$, suppose that $\gamma(a) = a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)$ converges absolutely for every $a \in \mathbb{N}$. 
\begin{align*} \text{Now } u*\gamma(a) = \sum\limits_{\substack{{d\mid a}}}\gamma(d) u(\frac{a}{d}) = \sum\limits_{\substack{{d\mid a}}}\gamma(d) &= \sum\limits_{\substack{{d\mid a}}} d^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(dm) \mu(m)\\ &= \sum\limits_{\substack{d\mid a\\d\mid r}} d^s \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) \mu(\frac{r}{d})\\ &= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) \sum\limits_{\substack{d\mid a\\d\mid r}} d^s \mu(\frac{r}{d})\\ &= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) \sum\limits_{\substack{d^s\mid a^s\\d\mid r}} d^s \mu(\frac{r}{d})\\ &= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s) = g(a). \end{align*} That is \begin{align*} g(a)&= \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)= \sum\limits_{\substack{{d\mid a}}} d^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(dm) \mu(m) \end{align*} and it converges absolutely. \end{proof} Let us see how to use the above theorem to deal with $\tau_s$. \begin{exam} Let $g(a) = \frac{\tau_s(a^s)}{\zeta(s)} $. By abuse of notation, let $\widehat{g}(r)= \frac{1}{r^s}$. Then \begin{align*} \gamma(a) &= a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)\\ &= a^s \sum\limits_{\substack{{m=1}}}^{\infty} \frac{1}{(am)^{s}} \mu(m)\\ &= \sum\limits_{\substack{{m=1}}}^{\infty} \frac{\mu(m)}{m^{s}}\\ &= \frac{1}{\zeta(s)} \text{ (by identity }(\ref{eq:mu_zeta})) \end{align*} and so $\gamma(a)$ exists. By Theorem \ref{Equiv_conditions}, $ g(a) = \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)$ converges absolutely. So $\sum\limits_{\substack{{r=1}}}^{\infty}\frac{1}{r^s}c_r^{s}(a^s)=\frac{\tau_s(a^s)}{\zeta(s)}$, giving an expansion for $\tau_s$. This expansion is in agreement with what we proved in Theorem \ref{div-exp}. \end{exam} The same way we can get an expansion for $\sigma_{ks}$. \begin{exam} Let $g(a) = \frac{\sigma_{ks}(a)}{a^{ks}} $ and (once again, by abuse of notation) let $\widehat{g}(r)=\frac{\zeta(k+1)}{r^{(k+1)s}}$. Then \begin{align*} \gamma(a) &= a^s \sum\limits_{\substack{{m=1}}}^{\infty} \widehat{g}(am) \mu(m)\\ &= a^s \sum\limits_{\substack{{m=1}}}^{\infty} \frac{\zeta(k+1)}{(am)^{(k+1)s}} \mu(m)\\ &= \frac{\zeta(k+1)}{(a)^{ks}} \sum\limits_{\substack{{m=1}}}^{\infty} \frac{\mu(m)}{m^{(k+1)s}}\\ &= \frac{\zeta(k+1)}{(a)^{ks}} \frac{1}{\zeta((k+1)s)} \text{ (by identity }(\ref{eq:mu_zeta})). \end{align*} Thus $\gamma(a)$ exists. By Theorem $\ref{Equiv_conditions}$, $g(a) = \sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)$ converges absolutely. Hence $\sum\limits_{\substack{{r=1}}}^{\infty} \widehat{g}(r) c_r^{s}(a^s)= \sum\limits_{\substack{{r=1}}}^{\infty}\frac{\zeta(k+1)}{r^{(k+1)s}}c_r^{s}(a^s)=\frac{\sigma_{ks}(a)}{a^{ks}} $ has an absolutely convergent expansion. \end{exam} Now we give a sufficient condition for the existence of Cohen-Ramanujan expansions of certain class of additive arithmetical functions. \begin{theo} Let $g \in \mathcal{A}$. 
If the series $\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^{vs}}$ and $\sum\limits_{\substack{{p}}} \sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^{vs}}$ converge then $g$ has a pointwise convergent Cohen-Ramanujan expansion with coefficients \begin{align*} \widehat{g}(p^\alpha) &= \frac{-g(p^{\alpha-1})}{p^{\alpha s}}+ (1-\frac{1}{p^s})\sum\limits_{\substack{{v\geq \alpha}}}\frac{g(p^v)}{p^{vs}}\\ \widehat{g}(1) &= \sum\limits_{\substack{{p}}}\widehat{g}(p)\\ \widehat{g}(n) &= 0, \text{ otherwise.} \end{align*} \end{theo} \begin{proof} First we prove the existence of the above Cohen-Ramanujan coefficients. Since $\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^{vs}}$ converges, $\widehat{g}(p^\alpha) = \frac{-g(p^{\alpha-1})}{p^{\alpha s}}+ (1-\frac{1}{p^s})\sum\limits_{\substack{{v\geq \alpha}}}\frac{g(p^v)}{p^{vs}}$ exists. Now, since $g$ is additive, $g(1)=0$ and so \begin{align*} \widehat{g}(1) = \sum\limits_{\substack{{p}}}\widehat{g}(p) &= \sum\limits_{\substack{{p}}}(\frac{-g(1)}{p^s}+ (1-\frac{1}{p^s})\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^{vs}})\text{, where $s\geq 1$ }\\ &= \sum\limits_{\substack{{p}}}(1-\frac{1}{p^s})\sum\limits_{\substack{{v=1}}}^{\infty}\frac{g(p^v)}{p^{vs}} \text{ exists.} \end{align*} Also $\widehat{g}(n)=0$ if $n \notin \mathbb{P}^*\cup \{1\}$ exists.\\ Consider the action of $\mu *g$ on prime powers. \begin{align*} \mu*g(p^\alpha) = \sum\limits_{\substack{{p^\alpha}}} \mu(d) g(\frac{p^\alpha}{d}) &= \mu(1) g(p^\alpha)+\mu(p)g(p^{\alpha-1})\\ &= g(p^\alpha)-g(p^{\alpha-1}).\\ \text{Since } g \text{ is an additive function, }\mu*g(1)&= \mu(1) g(1)\\ &= 0. \end{align*} Let $n = p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}$, where $p_i$ are distinct primes. Then, \begin{align*} \mu*g(n) &= \sum\limits_{\substack{{d\mid n}}} \mu(d)g(\frac{n}{d})\\ &=\mu(1)g(p_1^{r_1}p_2^{r_2} \cdots p_k^{r_k})+\mu(p_1)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_1})+\\& +\mu(p_2)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_2})+\cdots +\mu(p_k)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_k})\\&+\mu(p_1p_2)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_1p_2})+\mu(p_1p_3)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_1p_3})+\\&+\cdots +\mu(p_{k-1}p_k)g(\frac{p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}}{p_{k-1}p_k})+\cdots +\mu(p_1p_2\cdots p_k)g(\frac{p_1^{r_1}p_2^{r_2}p_3^{r_3}}{p_1p_2\cdots p_k})\\ &= \sum\limits_{\substack{{i = 1}}}^{k}\left(\binom{k-1}{0}g(p_i^{r^i})-\binom{k-1}{1}g(p_i^{r^i})+\cdots +(-1)^{k-1}\binom{k-1}{k-1}g(p_i^{r^i}) \right)-\\& \left(\sum\limits_{\substack{{i = 1}}}^{k}\binom{k-1}{0}g(p_i^{r_i-1})-\binom{k-1}{1}g(p_i^{r_i-1})+\cdots +(-1)^{k-1}\binom{k-1}{k-1}g(p_i^{r_i-1})\right) \\&= \sum\limits_{\substack{{i = 1}}}^{k}(1+-1)^{k-1}g(p_i)-\sum\limits_{\substack{{i = 1}}}^{k}(1+-1)^{k-1}g(p_i^{r_i-1}) \\&=0. \end{align*} Thus $\mu*g(a) = \begin{cases} g(p^\alpha)-g(p^{\alpha-1}), \quad\text{ if } a = p^\alpha \in \mathbb{P}^*\\ 0, \quad\text{ otherwise.} \end{cases}$\\ Now consider the sum $\gamma(a) = a^s \sum\limits_{\substack{{n=1}}}^{\infty} \widehat{g}(an) \mu(n)$. \begin{align*} \text{ Then, }\gamma(1) & = \sum\limits_{\substack{{n=1}}}^{\infty}\widehat{g}(n)\mu(n)\\ &= \widehat{g}(1)\mu(1)+\sum\limits_{\substack{{p}}}\widehat{g}(p)\mu(p)\\ &= \widehat{g}(1)-\sum\limits_{\substack{{p}}}\widehat{g}(p)\\ &=0, \text{ by assumption}. 
\end{align*} \begin{align*} \gamma(p^{\alpha}) & = p^{\alpha s}\sum\limits_{\substack{{n=1}}}^{\infty}\widehat{g}(p^\alpha n)\mu(n)\\ &= p^{\alpha s}\left(\widehat{g}(p^\alpha)\mu(1)+\widehat{g}(p^\alpha p)\mu(p)\right)\\ &= p^{\alpha s}\left(\widehat{g}(p^\alpha)-\widehat{g}(p^{\alpha+1})\right)\\ &=p^{\alpha s}\left((\frac{-g(p^{\alpha-1})}{p^{\alpha s}}+ (1-\frac{1}{p^s})\sum\limits_{\substack{{v\geq \alpha}}}\frac{g(p^v)}{p^{vs}})-(\frac{-g(p^{\alpha})}{p^{(\alpha+1) s}}+ (1-\frac{1}{p^s})\sum\limits_{\substack{{v\geq \alpha+1}}}\frac{g(p^v)}{p^{vs}})\right)\\ &= p^{\alpha s}\left((\frac{-g(p^{\alpha-1})}{p^{\alpha s}}+ \frac{g(p^\alpha)}{p^{(\alpha+1)s}}+ (1-\frac{1}{p^s})\frac{g(p^\alpha)}{p^{\alpha s}})\right)\\ &= g(p^{\alpha})-g(p^{\alpha-1}). \end{align*} If $a\notin \mathbb{P}^*\cup\{1\}$, then $\widehat{g}(a) = 0$. Therefore $\gamma(a) = a^s \sum\limits_{\substack{{n=1}}}^{\infty} \widehat{g}(an) \mu(n) = 0$.\\ Hence $\gamma(a)=\mu*g(a)$ converges absolutely. By Theorem \ref{Equiv_conditions}, $g(a)$ converges absolutely with Cohen-Ramanujan coefficients $\widehat{g}(a)$. \end{proof} \section{Further directions} In addition to the expansions mentioned above, in \cite{ramanujan1918certain}, Ramanujan derived an expansion for the Euler totient function $\phi$. It is possible that, using our techniques given above, we may get expansions for the Jordan totient function defined as $J_s(n) = n^s \prod\limits_{\substack{{p \mid n}}}(1-\frac{1}{p^s})$ and Klee's function defined as $\Phi_s(n) = n\prod\limits_{\substack{p^s\mid n\\p \text{ prime}}}(1-\frac{1}{p^s})$, which behave very much like the Euler totient function but have an $s$th power in their closed form formulae to deal with. We further feel that our techniques can be used to find such expansions in terms of some other generalizations of the Ramanujan sum and various other sums of this type. \section{Acknowledgements} The first author thanks the University Grants Commission of India for providing financial support for carrying out research work through their Senior Research Fellowship (SRF) scheme. The authors thank the reviewer for offering some insightful comments which made this paper more compact than it was before. \section{Data availability statement} We hereby declare that data sharing is not applicable to this article as no datasets were generated or analysed during the current study. \begin{thebibliography}{10} \bibitem{tom1976introduction} Tom Apostol. \newblock {\em Introduction to analytic number theory}. \newblock Springer, 1976. \bibitem{carmichael1932expansions} RD~Carmichael. \newblock Expansions of arithmetical functions in infinite series. \newblock {\em Proceedings of the London Mathematical Society}, 2(1):1--26, 1932. \bibitem{chidambaraswamy1979generalized} J~Chidambaraswamy. \newblock Generalized Ramanujan's sum. \newblock {\em Periodica Mathematica Hungarica}, 10(1):71--87, 1979. \bibitem{cohen1949extension} Eckford Cohen. \newblock An extension of {Ramanujan's} sum. \newblock {\em Duke Mathematical Journal}, 16(85-90):2, 1949. \bibitem{cohen1959trigonometric} Eckford Cohen. \newblock Trigonometric sums in elementary number theory. \newblock {\em The American Mathematical Monthly}, 66(2):105--117, 1959. \bibitem{delange1976ramanujan} Hubert Delange. \newblock On Ramanujan expansions of certain arithmetical functions. \newblock {\em Acta Arith}, 31(3):259--270, 1976. \bibitem{hardy1943eratosthenian} GH~Hardy. \newblock Eratosthenian averages. \newblock {\em Nature}, 152(3868):708--708, 1943. 
\bibitem{lucht1995ramanujan} Lutz Lucht. \newblock Ramanujan expansions revisited. \newblock {\em Archiv der Mathematik}, 64(2):121--128, 1995. \bibitem{lucht2010survey} Lutz~G. Lucht. \newblock A survey of Ramanujan expansions. \newblock {\em International Journal of Number Theory}, 6(08):1785--1799, 2010. \bibitem{mccarthy2012introduction} Paul~J. McCarthy. \newblock {\em Introduction to arithmetical functions}. \newblock Springer Science \& Business Media, 2012. \bibitem{ramanujan1918certain} Srinivasa Ramanujan. \newblock On certain trigonometrical sums and their applications in the theory of numbers. \newblock {\em Trans. Cambridge Philos. Soc.}, 22(13):259--276, 1918. \bibitem{schwarz1994arithmetical} Wolfgang Schwarz and J{\"u}rgen Spilker. \newblock {\em Arithmetical functions}, volume 184. \newblock Cambridge University Press, 1994. \bibitem{sugunamma1960eckford} M~Sugunamma. \newblock Eckford Cohen's generalizations of Ramanujan's trigonometrical sum $c(n, r)$. \newblock {\em Duke Mathematical Journal}, 27(3):323--330, 1960. \bibitem{venkataraman1972extension} CS~Venkataraman and R~Sivaramakrishnan. \newblock An extension of Ramanujan's sum. \newblock {\em Math. Student A}, 40:211--216, 1972. \end{thebibliography} \end{document}
2205.08453v3
http://arxiv.org/abs/2205.08453v3
Sequential Parametrized Motion Planning and its Complexity
\documentclass[a4paper,leqno,12pt, reqno]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{graphicx} \usepackage{tikz-cd} \usepackage{hyperref} \usepackage[all]{xy} \usepackage{tikz} \usepackage{xspace} \usepackage{bm} \usepackage{amsmath} \usepackage{amstext} \usepackage{amsfonts} \usepackage[mathscr]{euscript} \usepackage{amscd} \usepackage{latexsym} \usepackage{amssymb} \usepackage{enumerate} \usepackage{xcolor} \usepackage{mathtools} \setlength{\topmargin}{-10mm} \setlength{\textheight}{9.0in} \setlength{\oddsidemargin}{ .1 in} \setlength{\evensidemargin}{.1 in} \setlength{\textwidth}{6.0 in} \theoremstyle{plain} \swapnumbers \newcommand{\cat}{{\sf{cat}}} \newcommand{\scat}{{\sf{scat}}} \newcommand{\secat}{{\sf{secat}}} \newcommand{\genus}{{\sf{genus}}} \newcommand{\TC}{{\sf{TC}}} \newcommand{\SC}{{\sf{SC}}} \newcommand{\CC}{{\sf{CC}}} \newcommand{\STC}{{\sf{STC}}} \newcommand{\sd}{{\sf{sd}}} \newcommand{\st}{{\sf{st}}} \newcommand{\cd}{{\sf{cd}}} \newcommand{\map}{{\sf{map}}} \newcommand{\hdim}{{\sf{hdim}}} \newcommand{\Id}{{\sf{Id}}} \newcommand{\inc}{{\sf{inc}}} \newcommand{\proj}{{\sf{proj}}} \newcommand{\tcn}{\TC_{n}} \newcommand{\tcng}{\TC_{n, G}} \newcommand{\scn}{\SC_{n}} \newcommand{\scngr}{\SC^{r}_{n, G}} \newcommand{\scng}{\SC_{n, G}} \newcommand{\ccn}{\CC_{n}} \newcommand{\KK}{\mathcal{K}} \newcommand{\XX}{\mathcal{X}} \newcommand{\OO}{\mathcal{O}} \newcommand{\zz}{\mathbb{Z}} \newcommand{\rr}{\mathbb{R}} \newcommand{\cc}{\mathbb{C}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bX}{\mathbf{X}} \renewcommand{\dim}{{\sf {dim}}} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newtheorem{theorem}{Theorem}[section] \newtheorem{prop}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{fact}[theorem]{Fact} \newtheorem*{thma}{Theorem A} \newtheorem*{thmb}{Theorem B} \newtheorem*{thmc}{Theorem C} \newtheorem*{thmd}{Theorem D} \newenvironment{mysubsection}[2][] {\begin{subsec}\begin{upshape}\begin{bfseries}{#2.} \end{bfseries}{#1}} {\end{upshape}\end{subsec}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{notation}[theorem]{Notation} \newtheorem{construct}[theorem]{Construction} \newtheorem{ack}[theorem]{Acknowledgements} \newtheorem{subsec}[theorem]{} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{remarks}[theorem]{Remarks} \newtheorem{warning}[theorem]{Warning} \newtheorem{assume}[theorem]{Assumption} \setcounter{secnumdepth}{3} \begin{document} \title{Sequential Parametrized Motion Planning and its complexity} \author{Michael Farber} \address{School of Mathematical Sciences\\ Queen Mary University of London\\ E1 4NS London\\UK.} \email{[email protected]} \address{School of Mathematical Sciences\\ Queen Mary University of London\\ E1 4NS London\\UK.} \email{[email protected]/[email protected]} \subjclass{55M30} \keywords{Topological complexity, Parametrized topological complexity, Sequential topological complexity, Fadell - Neuwirth bundle} \thanks{Both authors were partially supported by an EPSRC research grant} \author{Amit Kumar Paul} \begin{abstract} In this paper we develop a theory of {\it sequential parametrized} motion planning generalising the approach of {\it parametrized} motion planning, which was introduced recently in \cite{CohFW21}. 
A sequential parametrized motion planning algorithm produces a motion of the system which is required to visit a prescribed sequence of states, in a certain order, at specified moments of time. The sequential parametrized algorithms are universal as the external conditions are not fixed in advance but rather constitute part of the input of the algorithm. In this article we give a detailed analysis of the sequential parametrized topological complexity of the Fadell - Neuwirth fibration. In the language of robotics, sections of the Fadell - Neuwirth fibration are algorithms for moving multiple robots avoiding collisions with other robots and with obstacles in the Euclidean space. In the last section of the paper we introduce the new notion of {\it $\TC$-generating function of a fibration}, examine examples and raise some exciting general questions about its analytic properties. \end{abstract} \maketitle \begin{section}{Introduction} Autonomously functioning systems in robotics are controlled by motion planning algorithms. Such an algorithm takes as input the initial and the final states of the system and produces a motion of the system from the initial to the final state, as output. The theory of algorithms for robot motion planning is a very active field of contemporary robotics and we refer the reader to the monographs \cite{Lat}, \cite{LaV} for further references. A topological approach to the robot motion planning problem was developed in \cite{Far03}, \cite{Far04}; the topological techniques explained relationships between instabilities occurring in robot motion planning algorithms and topological features of robots' configuration spaces. A new {\it parametrized} approach to the theory of motion planning algorithms was suggested recently in \cite{CohFW21}. The parametrized algorithms are more universal and flexible; they can function in a variety of situations involving external conditions which are viewed as parameters and are part of the input of the algorithm. A typical situation of this kind arises when we are dealing with collision-free motion of many objects (robots) moving in 3-space avoiding a set of obstacles, and the positions of the obstacles are a priori unknown. This specific problem was analysed in full detail in \cite{CohFW21}, \cite{CohFW}. In this paper we develop a more general theory of {\it sequential parametrized} motion planning algorithms. In this approach the algorithm produces a motion of the system which is required to visit a prescribed sequence of states in a certain order. The sequential parametrized algorithms are also universal as the external conditions are not a priori fixed but constitute a part of the input of the algorithm. In the first part of this article we develop the theory of sequential parametrized motion planning algorithms while the second part consists of a detailed analysis of the sequential parametrized topological complexity of the Fadell - Neuwirth fibration. In the language of robotics, the sections of the Fadell - Neuwirth bundle are exactly the algorithms for moving multiple robots avoiding collisions with each other and with multiple obstacles in the Euclidean space. Our results depend on the explicit computations of the cohomology algebras of certain configuration spaces. We describe these computations in full detail; they employ the classical Leray - Hirsch theorem from algebraic topology of fibre bundles.
In the last section of the paper we introduce the new notion of a {\it $\TC$-generating function of a fibration}, discuss a few examples, and raise interesting questions about analytic properties of this function. In a forthcoming publication (which is now in preparation) we shall describe the explicit sequential parametrized motion planning algorithm for collision-free motion of multiple robots in the presence of multiple obstacles in $\rr^d$, generalising the ones presented in \cite{FarW}. These algorithms are optimal as they have minimal possible topological complexity. \end{section} \begin{section}{Preliminaries} In this section we recall the notions of sectional category and topological complexity; we refer to \cite{BasGRT14, CohFW21, CohFW, Far03, Gar19, Rud10, Sva66} for more information. \subsection*{Sectional category} Let $p: E \to B$ be a Hurewicz fibration. The \emph{sectional category} of $p$, denoted $\secat[p: E \to B]$ or $\secat(p)$, is defined as the least non-negative integer $k$ such that there exists an open cover $\{U_0, U_1, \dots, U_k\}$ of $B$ with the property that each open set $U_i$ admits a continuous section $s_i: U_i \to E$ of $p$. We set $\secat(p)=\infty$ if no finite $k$ with this property exists. The \emph{generalized sectional category} of a Hurewicz fibration $p: E \to B$, denoted $\secat_g[p: E \to B]$ or $\secat_g(p)$, is defined as the least non-negative integer $k$ such that $B$ admits a partition $$B=F_0 \sqcup F_1 \sqcup ... \sqcup F_k, \quad F_i\cap F_j = \emptyset \text{ \ for \ } i\neq j$$ with each set $F_i$ admitting a continuous section $s_i: F_i \to E$ of $p$. We set $\secat_g(p)=\infty$ if no such finite $k$ exists. It is obvious that $\secat(p) \geq \secat_g(p)$ in general. However, as was established in \cite{Gar19}, in many interesting situations there is an equality: \begin{theorem} \label{lemma secat betweeen ANR spaces} Let $p : E \to B$ be a Hurewicz fibration with $E$ and $B$ metrizable absolute neighborhood retracts (ANRs). Then $\secat(p) = \secat_g(p).$ \end{theorem} In the sequel the term \lq\lq fibration\rq\rq \ will always mean \lq\lq Hurewicz fibration\rq\rq, unless otherwise stated explicitly. The following Lemma will be used later in the proofs. \begin{lemma}\label{lm:secat} (A) If for two fibrations $p: E\to B$ and $p':E'\to B$ over the same base $B$ there exists a continuous map $f$ shown in the following diagram \begin{center} $\xymatrix{ E \ar[dr]_{ p} \ar[rr]^{f} && E' \ar[dl]^{p'} \\ & B }$ \end{center} then $\secat(p)\ge \secat(p')$. (B) If a fibration $p: E\to B$ can be obtained as a pull-back from another fibration $p': E'\to B'$ then $\secat(p)\le \secat(p')$. (C) Suppose that for two fibrations $p: E\to B$ and $p': E'\to B'$ there exist continuous maps $f, g, F, G$ shown in the commutative diagram \begin{center} $\xymatrix{ E\ar[r]^F\ar[d]_p & E'\ar[d]^{p'} \ar[r]^G & E\ar[d]^p\\ B \ar[r]_f & B'\ar[r]_{g}& B, }$ \end{center} such that $g\circ f: B\to B$ is homotopic to the identity map $\Id_B: B\to B$. Then $\secat(p)\le \secat(p')$. \end{lemma} \begin{proof} Statements (A) and (B) are well-known and follow directly from the definition of sectional category. Below we give the proof of (C) which uses (A) and (B). Consider the fibration $q: \bar E \to B$ induced by $f: B\to B'$ from $p': E'\to B'$. Here $\bar E=\{(b, e')\in B\times E'; f(b)=p'(e')\}$ and $q(b, e')=b$. Then \begin{eqnarray}\label{sec11} \secat(q)\le \secat(p') \end{eqnarray} by statement (B).
Consider the map $\bar G: \bar E\to E$ given by $\bar G(b, e')= G(e')$ for $(b, e')\in \bar E$. Then $$(p\circ \bar G)\ (b, e') = p(G(e')) = g(p'(e')) = g(f(b)) = ((g\circ f)\circ q)\ (b, e')$$ and thus $p\circ \bar G = (g\circ f)\circ q $ and using the assumption $g\circ f\simeq \Id_B$ we obtain $p\circ \bar G \simeq q$. Let $h_t: B\to B$ be a homotopy with $h_0=g\circ f$ and $h_1=\Id_B$, $t\in I$. Using the homotopy lifting property, we obtain a homotopy $\bar G_t: \bar E\to E$, such that $\bar G_0=\bar G$ and $p\circ \bar G_t = h_t\circ q$. The map $\bar G_1: \bar E \to E$ satisfies $p\circ \bar G_1= q$; in other words, $\bar G_1$ appears in the commutative diagram \begin{center} $\xymatrix{ \bar E \ar[dr]_{ q} \ar[rr]^{\bar G_1} && E \ar[dl]^{p} \\ & B }$ \end{center} Applying statement (A) to this diagram we obtain the inequality $\secat(p) \le \secat(q)$ which together with inequality (\ref{sec11}) implies $\secat(p) \le \secat(p')$, as claimed. \end{proof} \subsection*{Topological complexity} Let $X$ be a path-connected topological space. Consider the path space $X^I$ (i.e. the space of all continuous maps $I=[0,1] \to X$ equipped with the compact-open topology) and the fibration $$\pi : X^I \to X \times X, \quad \alpha \mapsto (\alpha(0), \alpha(1)).$$ The {\it topological complexity} $\TC(X)$ of $X$ is defined as $\TC(X):=\secat(\pi)$, cf. \cite{Far03}. For information on recent developments related to the notion of $\TC(X)$ we refer the reader to \cite{GraLV}, \cite{Dra}. For any $r\ge 2$, fix $r$ points $0\le t_1<t_2<\dots <t_r\le 1$ (which we shall call the {\it \lq\lq time schedule\rq\rq}) and consider the evaluation map \begin{eqnarray}\label{pir} \pi_r : X^I \to X^r, \quad \alpha \mapsto \left(\alpha(t_1), \alpha(t_2), \dots, \alpha(t_r)\right), \quad \alpha \in X^I. \end{eqnarray} Typically, one takes $t_i=(i-1)(r-1)^{-1}$. The {\it $r$-th sequential topological complexity} is defined as $\TC_r(X):=\secat(\pi_r)$; this invariant was originally introduced by Rudyak \cite{Rud10}. It is known that $\TC_r(X)$ is a homotopy invariant; it vanishes if and only if the space $X$ is contractible. Moreover, $\TC_{r+1}(X)\ge \TC_r(X)$. Besides, $\TC(X)=\TC_2(X)$. \subsection*{Parametrized topological complexity} For a Hurewicz fibration $p : E \to B$ denote by $E^I_B\subset E^I$ the space of all paths $\alpha: I\to E$ such that $p\circ \alpha: I\to B$ is a constant path. Let $E^2_B\subset E\times E$ denote the space of pairs $(e_1, e_2)\in E^2$ satisfying $p(e_1)=p(e_2)$. Consider the fibration $$\Pi: E^I_B \to E^2_B = E\times_B E, \quad \alpha \mapsto (\alpha(0), \alpha(1)).$$ The fibre of $\Pi$ is the loop space $\Omega X$ where $X$ is the fibre of the original fibration $p:E\to B$. The following notion was introduced in a recent paper \cite{CohFW21}: \begin{definition} The {\it parametrized topological complexity} $\TC[p : E \to B]$ of the fibration $p : E \to B$ is defined as $$\TC[p : E \to B]=\secat[\Pi: E^I_B \to E^2_B].$$ \end{definition} Parametrized motion planning algorithms are universal and flexible; they are capable of functioning under a variety of external conditions which are parametrized by the points of the base $B$. We refer to \cite{CohFW21} for more detail and examples. If $B'\subset B$ and $E'=p^{-1}(B')$ then obviously $\TC[p : E \to B] \geq \TC[p' : E' \to B']$ where $p'=p|_{E'}$.
In particular, restricting to a single fibre we obtain $$\TC[p : E \to B] \geq \TC(X).$$ \end{section} \section{The concept of sequential parametrized topological complexity} In this section we define a new notion of sequential parametrized topological complexity and establish its basic properties. Let $p : E \to B$ be a Hurewicz fibration with fibre $X$. Fix an integer $r\ge 2$ and denote $$E^r_B= \{(e_1, \cdots, e_r)\in E^r; \, p(e_1)=\cdots = p(e_r)\}.$$ Let $E^I_B\subset E^I$ be as above the space of all paths $\alpha: I\to E$ such that $p\circ \alpha: I\to B$ is constant. Fix $r$ points $$0\le t_1<t_2<\dots <t_r\le 1$$ in $I$ (for example, one may take $t_i=(i-1)(r-1)^{-1}$ for $i=1, 2, \dots, r$), which will be called the {\it time schedule}. Consider the evaluation map \begin{eqnarray}\label{Pir} \Pi_r : E^I_B \to E^r_B, \quad \Pi_r(\alpha) = (\alpha(t_1), \alpha(t_2), \dots, \alpha(t_r)).\end{eqnarray} The map $\Pi_r$ is a Hurewicz fibration, see \cite[Appendix]{CohFW}; the fibre of $\Pi_r$ is $(\Omega X)^{r-1}$. A section $s: E^r_B \to E^I_B$ of the fibration $\Pi_r$ can be interpreted as a parametrized motion planning algorithm, i.e. as a function which assigns to every sequence of points $(e_1, e_2, \dots, e_r)\in E^r_B$ a continuous path $\alpha: I\to E$ (motion of the system) satisfying $\alpha(t_i)=e_i$ for every $i=1, 2, \dots, r$ and such that the path $p\circ \alpha: I \to B$ is constant. The latter condition means that the system moves under constant external conditions (such as positions of the obstacles). Typically $\Pi_r$ does not admit continuous sections; then the motion planning algorithms are necessarily discontinuous. The following definition gives a measure of complexity of sequential parametrized motion planning algorithms. This concept is the main character of this paper. \begin{definition}\label{def:main} The {\it $r$-th sequential parametrized topological complexity} of the fibration $p : E \to B$, denoted $\TC_r[p : E \to B]$, is defined as the sectional category of the fibration $\Pi_r$, i.e. \begin{eqnarray} \TC_r[p : E \to B]:=\secat(\Pi_r). \end{eqnarray} \end{definition} In more detail, $\TC_r[p : E \to B]$ is the minimal integer $k$ such that there is an open cover $\{U_0, U_1, \dots, U_k\}$ of $E^r_B$ with the property that each open set $U_i$ admits a continuous section $s_i : U_i \to E^I_B$ of $\Pi_r$. Let $B'\subset B$ be a subset and let $E'=p^{-1}(B')$ be its preimage; then obviously $$\TC_r[p: E \to B]\geq \TC_r[p': E' \to B']$$ where $p'=p|_{E'}$. In particular, taking $B'$ to be a single point, we obtain $$\TC_r[p: E \to B] \geq \TC_r(X),$$ where $X$ is the fibre of $p$. \begin{example}\label{example para tc trivial fibration} Let $p: E \to B$ be a trivial fibration with fibre $X$, i.e. $E=B\times X$. In this case we have $E^r_B=B\times X^r$, $E^I_B= B\times X^I$ and the map $\Pi_r: E_B^I\to E^r_B$ becomes $$\Pi_r: B\times X^I\to B\times X^r, \quad \Pi_r= \Id_B\times \pi_r,$$ where $\Id_B: B\to B$ is the identity map and $\pi_r$ is the fibration (\ref{pir}). Thus we obtain in this example $$\TC_{r}[p: E \to B]= \TC_r(X),$$ i.e. for the trivial fibration the sequential parametrized topological complexity equals the sequential topological complexity of the fibre. \end{example} \begin{prop}\label{princ} Let $p: E \to B$ be a principal bundle with a connected topological group $G$ as fibre. Then $$\TC_{r}[p: E \to B] = \cat(G^{r-1})=\TC_r(G).$$ \end{prop} \begin{proof} Let $0\le t_1<t_2<\dots < t_r\le 1$ be the fixed time schedule.
Denote by $P_0G\subset G^I$ the space of paths $\alpha$ satisfying $\alpha(t_1)=e$ where $e\in G$ denotes the unit element. Consider the evaluation map $\pi_r': P_0G\to G^{r-1}$ where $\pi'_r(\alpha)=(\alpha(t_2), \alpha(t_3), \dots, \alpha(t_r))$. We obtain the commutative diagram \begin{center} $\xymatrix{ P_0G \times E \ar[d]_{ \pi'_r \times \Id } \ar[r]^{F} & E^I_B \ar[d]^{\Pi_r} \\ G^{r-1} \times E \ar[r]_{F'} & E^r_B }$ \end{center} where $F: P_0G \times E \to E^I_B$ and $F': G^{r-1}\times E\to E^r_B$ are homeomorphisms given by $$F(\alpha, x)(t)= \alpha(t)x, \quad F'(g_2, g_3, \dots, g_r, x)= (x, g_2 x, g_3 x, \dots, g_r x),$$ where $\alpha \in P_0G$, \, $x\in E$, \, $t\in I$ and $g_i\in G$. Thus we have $$ \TC_r[p: E\to B]= \secat(\Pi_r)=\secat(\pi'_r \times \Id )=\secat(\pi'_r). $$ Clearly, $\secat(\pi'_r)=\cat(G^{r-1})$ since $P_0G$ is contractible. And finally $\cat(G^{r-1})=\TC_r(G)$, see \cite[Theorem 3.5]{BasGRT14}. \end{proof} \begin{example} As a specific example consider the Hopf fibration $p: S^3 \to S^2$ with fibre $S^1$. Applying the result of the previous Proposition we obtain $$\TC_r[p: S^3 \to S^2]=\TC_r(S^1)=r-1$$ for any $r\ge 2$. \end{example} \subsection*{Alternative descriptions of sequential parametrized topological complexity} Let $K$ be a path-connected finite CW-complex and let $k_1, k_2, \cdots, k_r\in K$ be a collection of $r$ pairwise distinct points of $K$, where $r\ge 2$. For a Hurewicz fibration $p: E \to B$, consider the space $E^{K}_B$ of all continuous maps $\alpha: K \to E$ such that the composition $p\circ \alpha: K\to B$ is a constant map. We equip $E^K_B$ with the compact-open topology induced from the function space $E^K$. Consider the evaluation map $$\Pi_r^K : E^{K}_B \to E^r_B, \quad \Pi^K_r(\alpha) = (\alpha(k_1), \alpha(k_2), \cdots, \alpha(k_r)) \quad \mbox{for} \quad\alpha\in E^K_B.$$ It is known that $\Pi^K_r$ is a Hurewicz fibration, see Appendix to \cite{CohFW}. \begin{lemma}\label{lemma para tc by secat} For any path-connected finite CW-complex $K$ and a set of pairwise distinct points $k_1, \dots, k_r\in K$ one has $$\secat(\Pi^K_r) = \TC_r[p:E\to B].$$ \end{lemma} \begin{proof} Let $0\le t_1<t_2<\dots<t_r\le 1$ be a given time schedule used in the definition of the map $\Pi_r=\Pi_r^I$ given by (\ref{Pir}). Since $K$ is path-connected we may find a continuous map $\gamma: I\to K$ with $\gamma(t_i) =k_i$ for all $i=1, 2, \dots, r$. We obtain a continuous map $F_\gamma: E^K_B \to E^I_B$ acting by the formula $F_\gamma(\alpha) = \alpha \circ \gamma$. It is easy to see that the following diagram commutes $$ \xymatrix{ E_B^{K} \ar[rr]^{F} \ar[dr]_{\Pi_{r}^K}& &E_B^{I} \ar[dl]^{\Pi_{r}^I} \\ & E_B^r }$$ Using statement (A) of Lemma \ref{lm:secat} we obtain $$\TC_r[p:E\to B]= \secat(\Pi^I_r)\le \secat(\Pi_r^K).$$ To obtain the inverse inequality note that any locally finite CW-complex is metrisable. Applying Tietze extension theorem we can find continuous functions $\psi_1, \dots, \psi_r: K\to [0,1]$ such that $\psi_i(k_j)=\delta_{ij}$, i.e. $\psi_i(k_j)$ equals 1 for $j=i$ and it equals $0$ for $j\not=i$. The function $f=\min\{1, \sum_{i=1}^r t_i\cdot \psi_i\}: K\to [0,1]$ has the property that $f(k_i)=t_i$ for every $i=1, 2, \dots, r$. 
We obtain a continuous map $F': E^I_B \to E^K_B$, where $F'(\beta) = \beta\circ f$, \, $\beta\in E^I_B$, which appears in the commutative diagram $$ \xymatrix{ E_B^{I} \ar[rr]^{F'} \ar[dr]_{\Pi_{r}^I}& &E_B^{K} \ar[dl]^{\Pi_{r}^K} \\ & E_B^r }$$ By Lemma \ref{lm:secat} this implies the opposite inequality $\secat(\Pi_r^K) \le \secat(\Pi^I_r)$ and completes the proof. \end{proof} The following proposition is an analogue of \cite[Proposition 4.7]{CohFW21}. \begin{prop} Let $E$ and $B$ be metrisable separable ANRs and let $p: E \to B$ be a locally trivial fibration. Then the sequential parametrized topological complexity $\TC_r[p: E \to B]$ equals the smallest integer $n$ such that $E_B^r$ admits a partition $$E_B^r=F_0 \sqcup F_1 \sqcup ... \sqcup F_n, \quad F_i\cap F_j= \emptyset \text{ \ for } i\neq j,$$ with the property that on each set $F_i$ there exists a continuous section $s_i : F_i \to E_B^I$ of $\Pi_r$. In other words, $$\TC_r[p : E \to B] = \secat_g[\Pi_r: E_B^I \to E^r_B].$$ \end{prop} \begin{proof} From the results of \cite[Chapter IV]{Bor67} it follows that the fibre $X$ of $p: E \to B$ is an ANR and hence $X^r$ is also an ANR. Now, $E^r_B$ is the total space of the locally trivial fibration $E_B^r \to B$ with fibre $X^r$. Thus, applying \cite[Chapter IV, Theorem 10.5]{Bor67}, we obtain that the space $E^r_B$ is an ANR. Using \cite[Proposition 4.7]{CohFW21} we see that $E_B^I$ is an ANR. Finally, using Theorem \ref{lemma secat betweeen ANR spaces}, we conclude that $\TC_r[p : E \to B] = \secat_g[\Pi_r: E_B^I \to E^r_B].$ \end{proof} \section{Fibrewise homotopy invariance} \begin{prop}\label{prop homotopy invariant sptc} Let $p : E \to B$ and $p': E' \to B$ be two fibrations and let $f: E\to E'$ and $g: E'\to E$ be two continuous maps such that the following diagram commutes $$ \xymatrix{ E \ar@<-1.4pt>[rr]^{f} \ar[dr]_{p}& &E' \ar@<-1.4pt>[ll]^{g} \ar[dl]^{p'} \\ & B }$$ i.e. $p=p'\circ f$ and $p'=p\circ g$. If the map $g \circ f : E \to E$ is fibrewise homotopic to the identity map $\Id_{E}: E \to E$ then $$\TC_r[p: E \to B] \leq \TC_r[p': E' \to B].$$ \end{prop} \begin{proof} Denote by $f^r: E^r_B\to E'^r_B$ the map given by $f^r(e_1, \dots, e_r) =(f(e_1), \dots, f(e_r))$ and by $f^I: E^I_B \to E'^I_B$ the map given by $f^I(\gamma)(t)= f(\gamma(t))$ for $\gamma\in E^I_B$ and $t\in I$. One defines similarly the maps $g^r: E'^r_B\to E^r_B$ and $g^I: E'^I_B \to E^I_B$. This gives the commutative diagram \begin{center} $\xymatrix{ E^I_B\ar[r]^{f^I}\ar[d]_{\Pi_r} & E'^I_B\ar[d]^{\Pi'_r} \ar[r]^{g^I} & E^I_B\ar[d]^{\Pi_r}\\ E^r_B \ar[r]_{f^r} & E'^r_B\ar[r]_{g^r}& E^r_B, }$ \end{center} in which $g^r\circ f^r \simeq \Id_{E^r_B}$. Applying statement (C) of Lemma \ref{lm:secat} we obtain \begin{eqnarray*} \TC_r[p:E\to B]&=&\secat[\Pi_r: E^I_B \to E^r_B]\\ &\le& \secat[\Pi'_r: E'^I_B \to E'^r_B] \\ &=& \TC_r[p':E'\to B]. \end{eqnarray*} \end{proof} Proposition \ref{prop homotopy invariant sptc} obviously implies the following property of $\TC_r[p:E\to B]$: \begin{corollary}\label{fwhom} If fibrations $p : E \to B$ and $p' : E' \to B$ are fibrewise homotopy equivalent then $$\TC_r[p: E \to B] = \TC_r[p': E' \to B].$$ \end{corollary} \section{Further properties of $\TC_r[p: E\to B]$} Next we consider products of fibrations: \begin{prop}\label{prop product inequality} Let $p_1: E_1 \to B_1$ and $p_2: E_2 \to B_2$ be two fibrations where the spaces $E_1, E_2, B_1, B_2$ are metrisable.
Then for any $r\geq 2$ we have $$\TC_r[p_1\times p_2 : E_1 \times E_2 \to B_1\times B_2]\leq \TC_r[p_1 : E_1 \to B_1] + \TC_r[p_2 : E_2 \to B_2].$$ \end{prop} \begin{proof} The proof is essentially identical to the proof of \cite[Proposition 6.1]{CohFW21} where it is done for the case $r=2.$ \end{proof} \begin{prop} Let $p_1: E_1 \to B$ and $p_2: E_2 \to B$ be two fibrations where the spaces $E_1, E_2, B$ are metrisable. Consider the fibration $p: E\to B$ where $E=E_1\times_B E_2=\{(e_1, e_2)\in E_1 \times E_2 \ | \ p_1(e_1) = p_2(e_2)\}$ and $p(e_1, e_2)=p_1(e_1)=p_2(e_2)$. Then $$\TC_r[p: E \to B]\, \leq \, \TC_r[p_1 : E_1 \to B] + \TC_r[p_2 : E_2 \to B].$$ \end{prop} \begin{proof} Viewing $B$ as the diagonal of $B\times B$ gives $$\TC_r[p: E \to B]\leq \TC_r[p_1\times p_2 : E_1 \times E_2 \to B\times B].$$ Combining this inequality with the result of Proposition \ref{prop product inequality} completes the proof. \end{proof} \begin{lemma} For any fibration $p: E \to B$ one has $$\TC_{r+1}[p: E \to B]\geq \TC_{r}[p: E \to B].$$ \end{lemma} \begin{proof} We shall apply Lemma \ref{lemma para tc by secat} and consider the interval $K=[0,2]$ and the time schedule $0\le t_1< t_2< \dots< t_r\le 1$ and the additional point $t_{r+1}=2.$ We have the following diagram \begin{center} $\xymatrix{ E^I_B\ar[r]^{F}\ar[d]_{\Pi_r} & E^K_B\ar[d]^{\Pi^K_{r+1}} \ar[r]^{G} & E^I_B\ar[d]^{\Pi_r}\\ E^r_B \ar[r]_{f} & E^{r+1}_B\ar[r]_{g}& E^r_B, }$ \end{center} where $f$ acts by the formula $f(e_1, e_2, \dots, e_r) = (e_1, e_2, \dots, e_r, e_r)$ and $F$ sends a path $\gamma: I\to E$, $\gamma\in E^I_B$, to the path $\bar \gamma: K=[0,2]\to E$ where $\bar\gamma|_{[0,1]}=\gamma$ and $\bar \gamma(t) = \gamma(1)$ for any $t\in [1, 2]$. The vertical maps are evaluations at the points $t_1, \dots, t_r$ and at the points $t_1, \dots, t_r, t_{r+1}$, for $\Pi_r$ and $\Pi^K_{r+1}$ respectively. The map $G$ is the restriction: it maps $\alpha: K\to E$ to $\alpha|_I: I\to E$. Similarly, the map $g: E^{r+1}_B \to E^r_B$ is given by $(e_1, \dots, e_r, e_{r+1}) \mapsto (e_1, \dots, e_r)$. The diagram commutes and, moreover, the composition $g\circ f: E^r_B\to E^r_B$ is the identity map. Applying statement (C) of Lemma \ref{lm:secat} we obtain \begin{eqnarray*} \TC_r[p:E\to B]&=&\secat[\Pi_r: E^I_B\to E^r_B] \\ &\le& \secat[\Pi^K_{r+1}: E^K_B\to E^{r+1}_B]\\ &=& \TC_{r+1}[p:E\to B]. \end{eqnarray*}\end{proof} \section{Upper and lower bounds for $\TC_r[p:E\to B]$} In this section we give upper and lower bounds for the sequential parametrized topological complexity. \begin{prop}\label{prop upper bound} Let $p: E \to B$ be a locally trivial fibration with fibre $X$, where $E, B, X$ are CW-complexes. Assume that the fibre $X$ is $k$-connected, where $k\ge 0$. Then \begin{eqnarray}\label{upper} \TC_{r}[p: E \to B]<\frac{\hdim (E_B^r) + 1}{k+1}\leq \frac{r\cdot \dim X+\dim B + 1}{k+1}. \end{eqnarray} \end{prop} \begin{proof} Since $X$ is $k$-connected, the loop space $\Omega X$ is $(k-1)$-connected and hence the space $(\Omega X)^{r-1}$ is also $(k-1)$-connected. Thus, the fibre of the fibration $\Pi_r : E_B^I \to E_B^r$ is $(k-1)$-connected and applying Theorem 5 from \cite{Sva66} we obtain: \begin{equation}\label{equation upper bound} \TC_{r}[p: E \to B] \, =\, \secat(\Pi_r)\, < \, \frac{\hdim (E_B^r) + 1}{k+1}, \end{equation} where $\hdim(E_B^r)$ denotes the homotopical dimension of $E_B^r$, i.e.
the minimal dimension of a CW-complex homotopy equivalent to $E^r_B$, $$\hdim(E_B^r ) := \min\{\dim \, Z | \, Z \, \mbox{is a CW-complex homotopy equivalent to} \, E_B^r \}.$$ Clearly, $\hdim(E_B^r)\leq \dim(E_B^r)$. The space $E^r_B$ is the total space of a locally trivial fibration with base $B$ and fibre $X^r$. Hence, $\dim(E_B^r)\, \leq \, \dim(X^r)+ \dim B= r\cdot \dim X+ \dim B$. Combining this with (\ref{equation upper bound}), we obtain (\ref{upper}). \end{proof} Below we shall use the following classical result of A.S. Schwarz \cite{Sva66}: \begin{lemma}\label{lemma lower bound of secat} For any fibration $p: E \to B$ and coefficient ring $R$, if there exist cohomology classes $u_1, \cdots, u_k\in \ker[p^*: H^*(B; R)\to H^*(E; R)]$ such that their cup-product is nonzero, $u_1 \cup \cdots \cup u_k \neq 0 \in H^*(B; R)$, then $\secat(p) \geq k$. \end{lemma} The following Proposition gives a simple and powerful lower bound for sequential parametrized topological complexity. \begin{prop}\label{lemma lower bound for para tc} For a fibration $p: E \to B$, consider the diagonal map $\Delta : E \to E^r_B$ where $\Delta(e)= (e, e, \cdots, e)$, and the induced by $\Delta$ homomorphism in cohomology $\Delta^\ast: H^\ast(E^r_B;R) \to H^\ast(E;R)$ with coefficients in a ring $R$. If there exist cohomology classes $$u_1, \cdots, u_k \in \ker[\Delta^* : H^*(E_B^r; R) \to H^*(E; R)]$$ such that $$u_1 \cup \cdots \cup u_k \neq 0 \in H^*(E_B^r; R)$$ then $\TC_{r}[p: E \to B]\geq k$. \end{prop} \begin{proof} Define the map $c : E \to E_B^I$ where $c(e)(t) = e$ is the constant path. Note that the map $c : E \to E_B^I$ is a homotopy equivalence. Besides, the following diagram commutes $$ \xymatrix{ E \ar[rr]^{c} \ar[dr]_{\Delta}&&E_B^I \ar[dl]^{\Pi_r}\\ & E_B^r & & } $$ and thus, $\ker[\Pi_r^*: H^*(E_B^r; R) \to H^*(E_B^I; R)] = \ker[\Delta^*: H^*(E_B^r; R) \to H^*(E; R)]$. The result now follows from Lemma \ref{lemma lower bound of secat} and from the definition $\TC_r[p:E\to B]=\secat(\Pi_r)$. \end{proof} \section{Cohomology algebras of certain configuration spaces} In this section we present auxiliary results about cohomology algebras of relevant configuration spaces which will be used later in this paper for computing the sequential parametrized topological complexity of the Fadell - Neuwirth fibration. All cohomology groups will be understood as having the integers as coefficients although the symbol $\Bbb Z$ will be skipped from the notations. We start with the following well-known fact, see \cite[Chapter V, Theorem 4.2]{FadH01}: \begin{lemma}\label{lemma cohomology of configuration space} The integral cohomology ring $H^\ast(F(\rr^d, m+n))$ contains $(d-1)$-dimensional cohomology classes $\omega_{ij}$, where $1\leq i< j\leq m+n,$ which multiplicatively generate $H^*(F(\rr^d, m+n))$ and satisfy the following defining relations $$(\omega_{ij})^2=0 \quad \mbox{and}\quad \omega_{ip}\omega_{jp}= \omega_{ij}(\omega_{jp}-\omega_{ip})\quad \text{for all }\, i<j<p.$$ \end{lemma} The cohomology class $\omega_{ij}$ arises as follows. For $1\le i<j\le m+n$, mapping a configuration $(u_1, \dots, u_{m+n})\in F(\rr^d, m+n)$ to the unit vector $$\frac{u_i-u_j}{||u_i-u_j||}\, \in\, S^{d-1},$$ defines a continuous map $\phi_{ij}: F(\rr^d, m+n)\to S^{d-1}$, and the class $$\omega_{ij}\in H^{d-1}(F(\rr^d, m+n))$$ is defined by $\omega_{ij}=\phi^\ast_{ij}(v)$ where $v\in H^{d-1}(S^{d-1})$ is the fundamental class. Below we shall denote by $E$ the configuration space $E=F(\rr^d, n+m)$. 
A point of $E$ will be understood as a configuration $$(o_1, o_2, \dots, o_m, z_1, z_2, \dots, z_n)$$ where the first $m$ points $o_1, o_2, \dots, o_m$ represent \lq\lq obstacles\rq\rq\, while the last $n$ points $z_1, z_2, \dots, z_n$ represent \lq\lq robots\rq\rq. The map \begin{eqnarray}\label{FN} p: F(\rr^d, m+n) \to F(\rr^d, m), \end{eqnarray} where $$ p(o_1, o_2, \dots, o_m, z_1, z_2, \dots, z_n)= (o_1, o_2, \dots, o_m),$$ is known as the Fadell - Neuwirth fibration. This map was introduced in \cite{FadN} where the authors showed that $p$ is a locally trivial fibration. The fibre of $p$ over a configuration $\mathcal O_m=\{o_1, \dots, o_m\}\in F(\rr^d, m)$ is the space $X=F(\rr^d- \mathcal O_m, n)$, the configuration space of $n$ pairwise distinct points lying in the complement of the set $\mathcal O_m=\{o_1, \dots, o_m\}$ of $m$ fixed obstacles. We plan to use Lemma \ref{lemma lower bound for para tc} to obtain lower bounds for the topological complexity, and for this reason our first task will be to calculate the integral cohomology ring of the space $E_B^r$. Here $E$ denotes the space $E=F(\rr^d, m+n)$ and $B$ denotes the space $B=F(\rr^d, m)$ and $p: E\to B$ is the Fadell - Neuwirth fibration (\ref{FN}); hence $E^r_B$ is the space of $r$-tuples $(e_1, e_2, \dots, e_r)\in E^r$ satisfying $p(e_1)=p(e_2)=\dots= p(e_r)$. Explicitly, a point of the space $E_B^r$ can be viewed as a configuration \begin{eqnarray}\label{confr} (o_1, o_2, \cdots o_m, z^1_1, z^1_2, \cdots z^1_n, z^2_1, z^2_2, \cdots, z^2_n, \cdots, z^r_1, z^r_2, \cdots, z^r_n) \end{eqnarray} of $m+rn$ points $o_i, \, z^l_j\in \rr^d$ (for $i=1, 2, \dots, m$, $j=1, 2, \dots, n$ and $l=1, 2, \dots, r$), such that \begin{enumerate} \item $o_i \neq o_{i'}$ for $i\neq i'$, \item $o_i \neq z^l_j$ for $1\leq i \leq m, \, 1\leq j \leq n$ and $1\leq l \leq r$, \item $z^l_j \neq z^l_{j'}$ for $j \neq j'$. \end{enumerate} The following statement is a generalisation of Proposition 9.2 from \cite{CohFW21}. \begin{prop}\label{prop relation of cohomology classes} The integral cohomology ring $H^*(E_B^r)$ contains cohomology classes $\omega^l_{ij}$ of degree $d-1$, where $1\leq i < j \leq m+n$ and $1\leq l \leq r$, satisfying the relations \begin{enumerate} \item[{\rm (a)}] $\omega_{ij}^l = \omega_{ij}^{l'} \text{ for } 1\leq i < j \leq m$ and $1\leq l \leq l' \leq r$, \item[{\rm (b)}] $(\omega_{ij}^l)^2=0\text{ for } i < j \text{ and } 1\leq l \leq r$, \item[{\rm (c)}] $\omega_{ip}^l\omega_{jp}^l= \omega_{ij}^l(\omega_{jp}^l-\omega_{ip}^l) \text{ for } i<j<p \text { and }\, 1\leq l \leq r.$ \end{enumerate} \end{prop} \begin{proof} For $1\le l\le r$, consider the projection map $q_l : E_B^r \to E$ which acts as follows: the configuration (\ref{confr}) is mapped into $$(u_1, u_2, \dots, u_{m+n})\in E=F(\rr^d, m+n)$$ where $$ u_i=\left\{ \begin{array}{lll} o_i& \mbox{for} & i\le m,\\ z^l_{i-m}& \mbox{for} & i> m. \end{array} \right. $$ Using Lemma \ref{lemma cohomology of configuration space} and the cohomology classes $\omega_{ij}\in H^{d-1}(E)$, we define $$(q_{l})^*(\omega_{ij})=\omega^l_{ij}\, \in \, H^*(E_B^r).$$ Relations {\rm {(a), \, (b), \, (c)}} are obviously satisfied. This completes the proof. \end{proof} For $1\leq i < j \leq m$ we shall denote the class $\omega_{ij}^l\in H^{d-1}(E^r_B)$ simply by $\omega_{ij}$; this is justified because of the relation {\rm {(a)}} above. We shall introduce notations for the classes which arise as the cup-products of the classes $\omega^l_{ij}$. 
For $p\ge 1$ consider two sequences of integers $I=(i_1, i_2, \cdots, i_p)$ and $J=(j_1, j_2, \cdots, j_p)$ where $i_s, j_s\in \{1, 2, \dots, m+n\}$ for $s=1, 2, \dots, p$. We shall say that the sequence $J$ is {\it increasing} if either $p=1$ or $j_1<j_2<\dots <j_p$. Besides, we shall write $I<J$ if $i_s<j_s$ for all $s=1, 2, \dots, p$. A pair of sequences $I<J$ of length $p$ as above determines the cohomology class $$\omega_{IJ}^l=\omega^l_{i_1j_1}\omega^l_{i_2j_2}\cdots \omega^l_{i_pj_p}\in H^{(d-1)p}(E_B^r)$$ for any $l=1, 2, \dots, r$. Note that the order of the factors is important in the case when the dimension $d$ is even. Because of the property {\rm {(a)}} of Proposition \ref{prop relation of cohomology classes} this class is independent of $l$ assuming that $j_p\le m$; for this reason, if $j_p\le m$, we shall denote $\omega^l_{IJ}$ simply by $\omega_{IJ}$. For formal reasons we shall allow $p=0$. In this case the symbol $\omega^l_{IJ}$ will denote the unit $1\in H^0(E^r_B)$. The next result is a generalisation of \cite[Proposition 9.3]{CohFW21} where the case $r=2$ was studied. \begin{prop}\label{prop basic cohomology classes} An additive basis of $ H^*(E_B^r)$ is formed by the following set of cohomology classes \begin{eqnarray}\label{basis} \omega_{IJ}\omega^1_{I_1J_1}\omega^2_{I_2J_2}\cdots \omega^r_{I_rJ_r}\, \in\, H^\ast(E^r_B)\quad \mbox{with}\quad I<J\quad \mbox{and}\quad I_i<J_i,\end{eqnarray} where: \begin{enumerate} \item[{\rm (i)}] the sequences $J, J_1, J_2, \cdots J_r$ are increasing, \item[{\rm (ii)}] the sequence $J$ takes values in $\{2, 3, \cdots, m\}$, \item[{\rm (iii)}] the sequences $J_1, J_2, \dots, J_r$ take values in $\{m+1, \cdots m+n\}$. \end{enumerate} \end{prop} \begin{proof} Recall our notations: $E=F(\rr^d, m+n)$, \, $B=F(\rr^d, m)$ and $p: E\to B$ is the Fadell - Neuwirth fibration (\ref{FN}). Consider the fibration $$p_r: E^r_B \to B \quad \mbox{where}\quad p_r(e_1, \dots, e_r) =p(e_1)=\dots=p(e_r).$$ Its fibre over a configuration $\mathcal O_m=(o_1, \dots, o_m)\in B$ is $X^r$, the Cartesian product of $r$ copies of the space $X$, where $X=F(\rr^d-\mathcal O_m, n)$. We shall apply the Leray-Hirsch theorem to the fibration $p_r: E^r_B \to B$. The classes $\omega_{ij}$ with $i < j \leq m$ originate from the base of this fibration. Moreover, from Lemma \ref{lemma cohomology of configuration space} it follows that a free additive basis of $H^\ast(B)$ is formed by the classes $\omega_{IJ}$, where $I<J$ run over all sequences of elements of the set $\{ 1,2,...,m\} $ such that the sequence $J = (j_1,j_2,...,j_p)$ is increasing. Next consider the classes of the form $$\omega^1_{I_1J_1}\omega^2_{I_2J_2}\cdots \omega^r_{I_rJ_r}\, \in\, H^\ast(E^r_B),$$ with increasing sequences $J_1, J_2, \dots, J_r$ satisfying {\rm {(iii)}} above. Using the known results about the cohomology algebras of configuration spaces (see \cite{FadH01}, Chapter V, Theorems 4.2 and 4.3) as well as the K\"unneth theorem, we see that the restrictions of the family of these classes onto the fibre $X^r$ form a free basis in the cohomology of the fibre $H^\ast (X^r).$ Hence, the Leray-Hirsch theorem \cite{Hat} is applicable and we obtain that a free basis of the cohomology $H^\ast (E^r_B)$ is given by the set of classes described in the statement of Proposition \ref{prop basic cohomology classes}. This completes the proof.
\end{proof} Proposition \ref{prop basic cohomology classes} implies: \begin{corollary}\label{new} Consider two basis elements $\alpha, \beta \in H^*(E_B^r)$ $$ \alpha = \omega_{IJ}\omega^1_{I_1J_1}\omega^2_{I_2J_2}\cdots \omega^r_{I_rJ_r}\,\quad \mbox{and}\quad \beta = \omega_{I'J'}\omega^1_{I'_1J'_1}\omega^2_{I'_2J'_2}\cdots \omega^r_{I'_rJ'_r}, $$ satisfying the properties {\rm (i), (ii), (iii)} of Proposition \ref{prop basic cohomology classes}. The product $$\alpha\cdot \beta\in H^\ast(E^r_B)$$ is another basis element up to sign (and hence is nonzero) if the sequences $J$ and $J'$ are disjoint and for every $k=1, 2, \dots,r$ the sequences $J_k$ and $J'_k$ are disjoint. \end{corollary} There is a one-to-one correspondence between increasing sequences and subsets; this explains the meaning of the term "disjoint" applied to two increasing sequences. Next we consider the situation when the product of basis elements is not a basis element but rather a linear combination of basis elements. Let $J=(j_1, j_2, \dots, j_p)$ be an increasing sequence of positive integers, where $p\ge 2$, and let $j$ be an integer satisfying $j_p<j$. Our goal is to represent the product $$ \omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j} \in H^{p(d-1)}(E^r_B), \quad l=1, \dots, r, $$ as a linear combination of the basis elements of Proposition \ref{prop basic cohomology classes}. We say that a sequence $I=(i_1, i_2, \dots, i_p)$ with $p\ge 2$ is a {\it $J$-modification} if $i_1=j_1$ and for $s=2, 3, \dots, p$ each number $i_s$ equals either $i_{s-1}$ or $j_s$. An increasing sequence of length $p$ has $2^{p-1}$ modifications. For example, for $p=3$ the sequence $J= (j_1, j_2, j_3)$ has the following $4$ modifications \begin{eqnarray}\label{mod} (j_1, j_2, j_3), \, \, (j_1, j_1, j_3), \, \, (j_1, j_2, j_2), \, \, (j_1, j_1, j_1). \end{eqnarray} For a $J$-modification $I$ we shall denote by $r(I)$ the number of repetitions in $I$. For instance, the numbers of repetitions of the modifications (\ref{mod}) are $0, \, 1, \, 1, \, 2$ correspondingly. The following statement is equivalent to Proposition 3.5 from \cite{CohFW}. Lemma 9.5 from \cite{CohFW21} gives the answer in a recurrent form. \begin{lemma}\label{lm:mod} For a sequence $j_1<j_2<\dots < j_p<j$ of positive integers, where $p\ge 2$, denote $J=(j_1, j_2, \dots, j_p)$ and $J'=(j_2, j_3, \dots, j_p, j)$. In the cohomology algebra $H^\ast(E^r_B)$ associated to the Fadell - Neuwirth fibration, one has the following relation \begin{eqnarray}\label{dec} \omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j} = \sum_I (-1)^{r(I)} \omega^l_{I J'}, \end{eqnarray} where $I$ runs over $2^{p-1}$ $J$-modifications and $l=1, 2, \dots, r.$ \end{lemma} \begin{proof} First note that for any $J$-modification $I$ one has $I< J'$ and hence the terms in the RHS of (\ref{dec}) make sense. We shall use induction in $p$. For $p=2$ the statement of Lemma \ref{lm:mod} is $$ \omega^l_{j_1j}\omega^l_{j_2j}= \omega^l_{j_1j_2}\omega^l_{j_2j}-\omega^l_{j_1j_2}\omega^l_{j_1 j}, $$ which is the familiar $3$-term relation, see Proposition \ref{prop relation of cohomology classes}, statement (c). The first term on the right corresponds to the sequence $I=(j_1,j_2)$ and the second term corresponds to $I=(j_1, j_1)$; the latter has one repetition and appears with the minus sign. Suppose now that Lemma \ref{lm:mod} is true for all sequences $J$ of length $p$. Consider an increasing sequence $J=(j_1, j_2, \dots, j_{p+1})$ of length $p+1$ and an integer $j$ satisfying $j>j_{p+1}$. 
Denote by $K=(j_1, j_2, \dots, j_p)$ the shortened sequence and let $I=(i_1, i_2, \dots, i_p)$ be a modification of $K$. As in (\ref{dec}), denote $K'=(j_2, j_3, \dots, j_p, j)$. Consider the product \begin{eqnarray*} \omega^l_{I K'}\omega^l_{j_{p+1}j}&=& \omega^l_{i_1j_2}\omega^l_{i_2j_3}\dots \omega^l_{i_{p-1}j_p} \omega^l_{i_p j}\cdot \omega^l_{j_{p+1}j}\\ &=& \left[\omega^l_{i_1j_2}\omega^l_{i_2j_3}\dots \omega^l_{i_{p-1}j_p} \right]\cdot \omega^l_{i_pj_{p+1}}\cdot \left[\omega^l_{j_{p+1}j} -\omega^l_{i_pj}\right]\\ &=& \omega^l_{I_1J'}-\omega^l_{I_2J'} \end{eqnarray*} where $I_1=(i_1, \dots, i_p, j_{p+1})$ and $I_2= (i_1, \dots, i_p, i_p)$ are the only two modifications of $J$ extending $I$. The equality of the second line is obtained by applying the relation (c) of Proposition \ref{prop relation of cohomology classes}. Note that $r(I_1)=r(I)$ and $r(I_2)=r(I_1)+1$ which is consistent with the minus sign. Thus, we see that the Lemma follows by induction. \end{proof} Since each term in the RHS of (\ref{dec}) is a $\pm$ multiple of a basis element we obtain: \begin{corollary}\label{cor:form} Any basis element (\ref{basis}) which appears with nonzero coefficient in the decomposition of the monomial \begin{eqnarray}\label{prodd} \omega^l_{j_1 j}\omega^l_{j_2 j}\dots \omega^l_{j_p j}, \quad \mbox{where}\quad j_1<j_2<\dots < j_p< j, \end{eqnarray} contains a factor of the form $\omega^l_{j_s j}$, where $s\in \{1, 2, \dots, p\}$. Moreover, $$\omega^l_{j_1 j_2}\omega^l_{j_1 j_3}\dots \omega^l_{j_1 j_p}\omega^l_{j_1 j}$$ is the only basis element in the decomposition of (\ref{prodd}) which contains the factor $\omega^l_{j_1 j}$. \end{corollary} Consider the diagonal map $$\Delta : E \to E_B^r, \quad \Delta(e) = (e, e, \dots, e), \quad e\in E.$$ \begin{lemma} \label{lm:ker} The kernel of the homomorphism $\Delta^* : H^*(E_B^r) \to H^*(E)$ contains the cohomology classes of the form $$\omega^l_{ij}-\omega^{l'}_{ij}.$$ \end{lemma} \begin{proof} This follows directly from the definition of the classes $\omega^l_{ij}$; compare the proof of Proposition 9.4 from \cite{CohFW21}. \end{proof} \section{Sequential parametrized topological complexity of the Fadell-Neuwirth bundle; the odd-dimensional case}\label{sec:odd} Our goal is to compute the sequential parametrized topological complexity of the Fadell - Neuwirth bundle. As we shall see, the answers in the cases of odd and even dimension $d$ are slightly different. When $d$ is odd the cohomology algebra has only classes of even degree and is therefore commutative; when $d$ is even the cohomology algebra is skew-commutative, which leads to a major distinction in the treatment of these two cases. The main result of this section is: \begin{theorem}\label{thm:odd} For any odd $d\ge 3$, and for any $n\ge 1$, $m\ge 2$ and $r\ge 2$, the sequential parametrized topological complexity of the Fadell - Neuwirth bundle (\ref{FN}) equals $rn+m -1$. \end{theorem} This result was obtained in \cite{CohFW21} for $r=2$. Note that the special case $d=3$ is the most important for robotics applications, as illustrated below. As in the previous section, we shall denote the Fadell - Neuwirth bundle (\ref{FN}) by $p: E\to B$ for short; this convention will be in force in this and in the following sections. We start with the following statement, which is valid without imposing any restriction on the parity of the dimension $d\ge 3$. Note that for $d= 2$ we shall have a stronger upper bound in \S \ref{sec:9}.
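For orientation, in the case $d=3$ of robots and obstacles moving in space, Theorem \ref{thm:odd} gives $$\TC_r[p: E\to B]\, =\, rn+m-1\, =\, (2n+m-1)+(r-2)n,$$ so that for $r=2$ one recovers the value $2n+m-1$ obtained in \cite{CohFW21}, and every additional prescribed intermediate state increases the complexity by $n$, the number of robots. In the planar case $d=2$ the answer turns out to be smaller, namely $rn+m-2$, see Theorem \ref{thm:even} below.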
\begin{prop}\label{prop upper bound for Fadell-Neuwirth bundle} For any $d\geq 3$ and $m\ge 2$ one has $$\TC_r[p: E\to B] \leq rn+m-1.$$ \end{prop} \begin{proof} The space $E^r_B$ is $(d-2)$-connected and in particular it is simply connected (since $d\ge 3)$. By Proposition \ref{prop basic cohomology classes} the top dimension with nonzero cohomology is $(rn + m-1)(d-1)$. Hence the homotopical dimension of the configuration space $\hdim (E^r_B)$ equals $(rn+m-1)(d-1)$. Here we use the well-known fact that the homotopical dimension of a simply connected space with torsion free integral cohomology equals its cohomological dimension. The fibre $X=F(\rr^d - \mathcal O_m, n)$ of the Fadell - Neuwirth bundle $p: E\to B$ is $(d-2)$-connected. Applying Proposition \ref{prop upper bound} we obtain $$\TC_r[p : E\to B]\, <\, rn+m-1+\frac{1}{d-1},$$ which is equivalent to our statement. \end{proof} To complete the proof of Theorem \ref{thm:odd} we only need to establish the lower bound: \begin{prop}\label{odd lower bound for Fadell-Neuwirth bundle} For any odd $d\geq 3$ and $m\ge 2$ one has $$\TC_r[p: E\to B] \geq rn+m-1.$$ \end{prop} Note that the assumption of this Proposition that the dimension $d$ is odd is essential as Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} is false for $d$ even, see below. \begin{proof} We shall use Lemma \ref{lemma lower bound for para tc} and Propositions \ref{prop relation of cohomology classes} and \ref{prop basic cohomology classes} and Lemma \ref{lm:ker}. Consider the cohomology classes \begin{eqnarray*} x_1& =& \prod_{i=2}^m(\omega^1_{i(m+1)} - \omega^2_{i(m+1)})\, \in \, H^{(m-1)(d-1)}(E^r_B),\\ x_2 &=& \prod_{j=m+1}^{m+n}(\omega_{1j}^{2} - \omega_{1j}^{1})^2 \, =\, -2 \prod_{j=m+1}^{m+n} \omega^1_{1j}\omega^2_{1j}\, \in \, H^{2n(d-1)}(E^r_B),\\ x_3&=& \prod_{l=3}^{r}\prod_{j=m+1}^{m+n}(\omega_{1j}^{l} - \omega_{1j}^{1})\, \in H^{n(r-2)(d-1)}(E^r_B). \end{eqnarray*} Each of these classes is a product of elements of the kernel of the homomorphism $\Delta^\ast: H^\ast(E^r_B)\to H^\ast(E)$, by Lemma \ref{lm:ker}. Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} would follow once we show that the product \begin{eqnarray*} x_1 x_2 x_3 \not= 0\, \in \, H^\ast(E^r_B) \end{eqnarray*} is nonzero. By Proposition \ref{prop basic cohomology classes}, the product $x_1 x_2 x_3$ is a linear combination of the basis cohomology classes and it is nonzero if at least one coefficient in this decomposition does not vanish. According to \cite{CohFW21}, cf. page 248, the product $x_1 x_2$ contains the basis element \begin{eqnarray} \omega_{I_0J_0} \omega^1_{IJ} \omega^2_{I'J}\, \in \, H^{(2n+m-1)(d-1)}(E^r_B) \end{eqnarray} with a nonzero coefficient; here \begin{eqnarray*} I_0&=& (1, 2, 2, \dots, 2), \quad J_0= (2, 3, \dots, m), \end{eqnarray*} and \begin{eqnarray*} I= (1, 1, \dots, 1), \quad I'= (2, 1, 1, \dots, 1),\quad J= (m+1, m+2, \dots, m+n), \end{eqnarray*} with $|I_0|= |J_0|=m-1$ and $ |I|=|I'|= |J|=n.$ The product representing $x_3$ can be expanded into a sum. This sum contains the class $\prod_{l=3}^{r}\omega^l_{IJ}$ and each of the other terms contains a factor of type $\omega^1_{1j}$. Since obviously $x_1x_2\omega^1_{1j}=0,$ we obtain that the product $x_1x_2x_3$ contains the basis element $$ \omega_{I_0J_0}\cdot \omega^1_{IJ}\cdot \omega^2_{I'J}\cdot \prod_{l=3}^{r}\omega^l_{IJ} $$ with a nonzero coefficient. Hence $x_1 x_2x_3 \not=0$ is nonzero. This completes the proof of Proposition \ref{odd lower bound for Fadell-Neuwirth bundle}. 
\end{proof} \begin{remark} The lower bound estimate of Proposition \ref{odd lower bound for Fadell-Neuwirth bundle} fails to work in the case when the dimension $d$ of the ambient Euclidean space is even. Indeed, then the classes $\omega^l_{ij}$ have odd degree (which equals $d-1$) and the square of any class of odd degree vanishes (since the cohomology algebra $H^\ast(E^r_B)$ with integral coefficients is torsion free). Thus, in the case of even dimension $d$ the product $x_2$ vanishes. In the following section we shall suggest a different estimate for $d$ even . \end{remark} \section{Sequential parametrized topological complexity of the Fadell-Neuwirth bundle; the even-dimensional case}\label{sec:even}\label{sec:9} In this section we give a lower bound for $\TC_r[p:E\to B]$ for the Fadell - Neuwirth bundle (\ref{FN}) in the case when the dimension $d$ of the Euclidean space $\rr^d$ is even. We also prove a matching upper bound for the planar case $d=2$. Such an upper bound can be obtained for any even $d$ by a totally different method; this material will be presented in another publication. First we establish the following lower bound which is valid for any $d$ regardless of its parity. \begin{prop}\label{prop lower bound for Fadell-Neuwirth bundle even d} For any $d\ge 2$, $r\ge 2$ and $m\ge 2$, the sequential parametrized topological complexity of the Fadell - Neuwirth bundle satisfies \begin{eqnarray}\label{lowereven} \TC_r[p:E\to B]\ge rn+m-2. \end{eqnarray} \end{prop} \begin{proof} As an illustration, consider first the special case $m=2$ and $n=1$, i.e. the situation when we have one robot and two obstacles. Then the product of $r$ classes \begin{eqnarray}\label{15} (\omega^1_{23}-\omega^2_{23}) \cdot \prod_{l=2}^r (\omega^l_{13}-\omega^1_{13}) \end{eqnarray} lying in the kernel of $\Delta^\ast$ contains the basis element \begin{eqnarray}\label{targetm2} \omega^1_{23}\cdot \prod_{l=2}^r \omega^l_{13}\end{eqnarray} with a nonzero coefficient. Indeed, (\ref{15}) equals $ (\omega^1_{23}-\omega^2_{23}) \cdot \left[\prod_{l=2}^r \omega^l_{13}-\omega^1_{13}\cdot \alpha\right] $ where $\alpha$ is a polynomial in the classes $\omega^l_{13}$ with $l\in \{2, \dots, r\}$. Opening the brackets gives $$ \omega^1_{23}\cdot \prod_{l=2}^r \omega^l_{13} - \omega^2_{23} \cdot \prod_{l=2}^r \omega_{13}^l - \omega^1_{23}\omega^1_{13}\alpha + \omega^2_{23}\omega^1_{13}\alpha. $$ Here the second and the third terms are the sums of basis elements each containing the factor $\omega_{12}$ and hence distinct from (\ref{targetm2}). The basis elements of the fourth term all contain the factor $\omega^1_{13}$ and therefore are also distinct from (\ref{targetm2}). Thus, (\ref{15}) is nonzero and Lemma \ref{lemma lower bound for para tc} gives the desired lower bound in the case $m=2$, $n=1$. Returning to the general case, consider the following three cohomology classes $x_1, x_2, x_3\in H^\ast(E^r_B)$, where \begin{eqnarray*} x_1&=& \prod_{i=2}^m(\omega^1_{i(m+1)} - \omega^2_{i(m+1)})\, \in H^{(m-1)(d-1)}(E^r_B),\\ x_2 &=& \prod_{j=m+2}^{m+n}(\omega_{(j-1)j}^{1} - \omega_{(j-1)j}^{2})\, \in H^{(n-1)(d-1)}(E^r_B), \\ x_3 &=& \prod_{l=2}^{r}\prod_{j=m+1}^{m+n}(\omega_{1j}^{l} - \omega_{1j}^{1}) \, \in H^{n(r-1)(d-1)}(E^r_B). \end{eqnarray*} Note that in the case when $n=1$ the class $x_2$ is not defined; however, the arguments below show that in the case $n=1$ the class $x_1 x_3$ (which is the product of $r+m-2$ classes lying in the kernel of $\Delta^\ast$) is nonzero. 
Each of the classes $x_1, x_2, x_3$ is the product of elements of the kernel of $\Delta^\ast$, see Lemma \ref{lm:ker}, and the total number of the factors is $rn+m-2$. Hence, by Lemma \ref{lemma lower bound for para tc}, our statement (\ref{lowereven}) will follow once we know that the product $x_1x_2x_3\in H^\ast(E^r_B)$ is nonzero. Consider the following sequences \begin{eqnarray*} \begin{array}{lll} I_0=(2, 2, \dots, 2), &\mbox{where} &|I_0|=m-2,\\ J_0=(3, 4, \dots, m), &\mbox{where} &|J_0|=m-2,\\ I=(1, 1, \dots, 1), &\mbox{where} &|I|=n,\\ K=(2, m+1, m+2, \dots, m+n-1), &\mbox{where} &|K|=n,\\ J=(m+1, m+2, \dots, m+n), &\mbox{where}&|J|=n. \end{array} \end{eqnarray*} We claim that the basis element \begin{eqnarray}\label{target} \omega_{I_0J_0}\omega^1_{KJ} \omega_{IJ}^2 \omega_{IJ}^3\dots \omega_{IJ}^r \end{eqnarray} appears in the decomposition of the product $x_1x_2x_3$ with a nonzero coefficient. In the special case $n=1$ the class (\ref{target}) has the form \begin{eqnarray}\label{target1} \omega_{23}\omega_{24}\dots\omega_{2 m}\cdot \omega^1_{2 (m+1)}\cdot \prod_{l=2}^r \omega^l_{1 (m+1)}. \end{eqnarray} Consider the basis elements which appear in the decomposition of the class $x_1$. For $m=2$ the class $x_1$ equals $\omega^1_{23} - \omega^2_{23}$ and for $m>2$ we can write \begin{eqnarray}\label{x1} x_1= \sum_{R\subset [m]}\pm \, \left(\prod_{i\in R} \omega^1_{i (m+1)}\cdot \prod_{i\in R^c} \omega^2_{i (m+1)}\right), \end{eqnarray} where $R$ runs over all subsets (including $R= \emptyset$) of the set $[m]=\{2, 3, \dots, m\}$ and $R^c$ denotes the complement $[m]-R$. The terms of (\ref{x1}) are basis elements for $m=2$; for $m>2$ they can be decomposed into basis elements using Lemma \ref{lm:mod}. For example, taking $R=[m]$ and applying Lemma \ref{lm:mod} we find that one of the $2^{m-2}$ basis elements which appear in the decomposition of the product $\prod_{i=2}^m\omega^1_{i (m+1)}$ is the class \begin{eqnarray}\label{class16} \omega_{23}\omega_{24}\dots\omega_{2m}\omega^1_{2(m+1)}= \omega_{I_0J_0} \omega^1_{2(m+1)}. \end{eqnarray} This class is clearly a factor of (\ref{target}). For $m>2$ each of the other basis elements in the decomposition of $\prod_{i=2}^m\omega^1_{i (m+1)}$ has a factor of type $\omega^1_{i (m+1)}$ with $2<i\le m$, see Corollary \ref{cor:form}. Note also the basis elements of the form \begin{eqnarray}\label{16b} \omega_{23}\omega_{24}\dots \omega_{2 (m-1)}\omega_{2 m}\omega^2_{k (m+1)}= \omega_{I_0J_0} \omega^2_{k (m+1)}, \quad \mbox{where}\quad 2\le k\le m, \end{eqnarray} which arise in the basis element decomposition of the summand of (\ref{x1}) with $R=\emptyset$. The basis element decomposition of $x_2$ is given by \begin{eqnarray}\label{x2} \sum_S \pm \left(\prod_{j\in S} \omega^1_{(j-1) j} \cdot \prod_{j\in S^c} \omega^2_{(j-1) j}\right), \end{eqnarray} where $S$ runs over all subsets $S\subset \{m+2, m+3, \dots, m+n\}$, including $S=\emptyset$. The symbol $S^c$ denotes the complement $\{m+2, m+3, \dots, m+n\}- S$. Taking $S= \{m+2, m+3, \dots, m+n\}$ in (\ref{x2}) gives the class $\omega^1_{KJ}$ with the factor $\omega^1_{2 (m+1)}$ omitted; the full class $\omega^1_{KJ}$ is a factor of (\ref{target}). Note that the missing factor $\omega^1_{2 (m+1)}$ appears in (\ref{class16}).
The basis element decomposition of the class $x_3$ is given by \begin{eqnarray}\label{16}\label{x3} \sum_{T_2, \dots, T_r} \pm \, \, \omega^1_{I_1 T_1}\omega^2_{I_2 T_2}\omega^3_{I_3 T_3} \dots \omega^r_{I_r T_r}, \end{eqnarray} where $T_2, T_3, \dots, T_r$ run over subsets of the set $\{m+1, m+2, \dots, m+n\}$ such that every two of these sets cover $\{m+1, m+2, \dots, m+n\}$ and $T_1=\cup_{j=2}^r T_j^c$ where $T_j^c$ stands for the complement $\{m+1, m+2, \dots, m+n\} -T_j$. We identify the subsets of $\{m+1, m+2, \dots, m+n\}$ with increasing sequences in the obvious way. The sequences $I_1, I_2, \dots, I_r$ in (\ref{16}) all have the form $(1, 1, \dots, 1)$. Taking $T_2=T_3=\dots=T_r=J$ in (\ref{16}) gives the class $\omega^2_{IJ} \omega^3_{IJ} \dots \omega^r_{IJ}$ which is a factor of (\ref{target}). We have seen that the class (\ref{target}) appears as a product of specific basis elements in the decomposition of $x_1$, $x_2$ and $x_3$. We show below that the class (\ref{target}) appears {\it only} with the set of choices indicated above and hence it cannot be cancelled. Firstly, we note that only $x_3$ involves terms $\omega^l_{ij}$ with $l\ge 3$ and $j\ge m+1$. Therefore only the choice $T_3=T_4=\dots=T_r=J$ in (\ref{x3}) may possibly lead to (\ref{target}). Secondly, the basis elements in the decompositions of $x_2$ and $x_3$ have no factors $\omega^l_{ij}$ with $j\le m$. Hence the factor $\omega_{I_0J_0}$ of (\ref{target}) may only arise from the basis elements of the decomposition of $x_1$. It is clear that this may happen either when $R=[m]$ with (\ref{class16}) corresponding to the modification $(2, 2, \dots, 2)$ of the sequence $(2, 3, \dots, m)$, or with $R=\emptyset$, see above. Any basis element of $x_1$ distinct from (\ref{class16}) has either a factor of type $\omega^1_{i (m+1)}$ with $3\le i\le m$ or a factor of type $\omega^2_{k (m+1)}$ with $2\le k\le m$. Such factors do not appear in (\ref{target}). If the set $T_1$ in (\ref{x3}) contains $m+1$ then we could have the factor $$\omega^1_{i (m+1)}\omega^1_{1 (m+1)}=\pm \omega_{1 i}(\omega^1_{i (m+1)} - \omega^1_{1 (m+1)})$$ with the factor $\omega_{1 i}$ missing in (\ref{target}). Similarly, the set $T_2$ might contain $m+1$ leading to the product $$\omega^2_{k (m+1)}\omega^2_{1 (m+1)}= \pm \omega_{1 k}(\omega^2_{k (m+1)} - \omega^2_{1 (m+1)})$$ with the factor $\omega_{1 k}$ being absent in (\ref{target}). Thus, we see that (\ref{class16}) is the only basis element of the decomposition of $x_1$ which can contribute to (\ref{target}). Comparing (\ref{x2}) and (\ref{target}) and using Corollary \ref{new} we see that only the basis element of the sum (\ref{x2}) with $S= \{m+2, m+3, \dots, m+n\}$ can contribute to (\ref{target}). This basis element, together with the factor $\omega^1_{2 (m+1)}$, gives $\omega^1_{KJ}$. Finally, examining (\ref{x3}), we see that the only way of obtaining (\ref{target}) is by taking $T_2=J$ and hence $T_1=\emptyset$, since, as we established earlier, one must have $T_3=\dots=T_r=J$ and $T_1=\cup_{j=2}^r T_j^c$. Thus, the basis element (\ref{target}) appears in the decomposition of the product $x_1x_2x_3$ with a nonzero coefficient and hence $x_1x_2x_3\not=0.$ This completes the proof.
\end{proof} Next we state the main result of this section: \begin{theorem}\label{thm:even} For any $m\geq 2$, $n\ge 1$ and $r\ge 2$, the $r$-th sequential parametrized topological complexity of the Fadell-Neuwirth bundle in the plane is given by$$\TC_r[p : F(\rr^2, n+m) \to F(\rr^2, m)]=rn + m - 2.$$ \end{theorem} \begin{proof} Proposition \ref{prop lower bound for Fadell-Neuwirth bundle even d} gives the lower bound. In the proof below we establish the upper bound. We shall adopt the method developed in \cite{CohFW}. As in \cite{CohFW}, we identify $\rr^2$ with the set of complex numbers $\cc$ and for any $s\geq 3$ consider the homeomorphism $$h_s: F(\cc, s) \to F(\cc \smallsetminus \{0, 1\}, s-2)\times F(\cc, 2)$$ given by $$h_s(u_1, u_2, ..., u_s)=\left(\left(\frac{u_3-u_1}{u_2-u_1}, \frac{u_4-u_1}{u_2-u_1}, ..., \frac{u_s-u_1}{u_2-u_1} \right), (u_1, u_2)\right),$$ where $u_i\in \cc$, $u_i\not= u_j$ for $i\not=j$. Thus, using the algebraic structure of complex numbers we may split the configuration space into a product. We have the following commutative diagram $$\xymatrix{ F(\cc, n+m) \ar[d]_{p} \ar[r]^{\hskip -2 cm {h_{n+m}}} & F(\cc \smallsetminus \{0, 1\}, n+m-2)\times F(\cc, 2) \ar[d]^{q\times \Id} \\ F(\cc, m) \ar[r]_{\hskip -2 cm {h_m}} & F(\cc \smallsetminus \{0, 1\}, m-2)\times F(\cc, 2) }$$ where $p$ is the Fadell - Neuwirth fibration, $q$ is analogue of the Fadell - Neuwirth bundle for the plane with points $0, 1$ removed and with $m-2$ obstacles, and $\Id$ is the identity map. In the case when $m=2$ we shall consider the space $F(\cc \smallsetminus \{0, 1\}, m-2)$ as consisting of a single point; then the diagram above will make sense for $m=2$ (two obstacles only) as well. Noting that $\TC_r[\Id : F(\cc, 2) \to F(\cc, 2)]=0$ and applying Proposition \ref{prop product inequality} we obtain \begin{equation*} \label{equation para tc r=2} \TC_r[p: F(\cc, n+m) \to F(\cc, m)] \leq \TC_r[q: E' \to B'], \end{equation*} where $E'=F(\cc \smallsetminus \{0, 1\}, n+m-2)$ and $B'=F(\cc \smallsetminus \{0, 1\}, m-2) $. The fibre of the fibration $q: E' \to B'$ is the configuration space $F(\cc \smallsetminus \mathcal{O}_m, n)$, which is connected and has homotopical dimension $n$. The homotopical dimension of the base $F(\cc \smallsetminus \{0, 1\}, m-2)$ is $m-2$. Proposition \ref{prop upper bound} gives $\TC_r[q: E'\to B'] \le rn+ m-2.$ Hence, $$\TC_r[p: F(\cc, n+m) \to F(\cc, m)]\leq rn+ m-2.$$ This completes the proof. \end{proof} \begin{remark} Theorems \ref{thm:odd} and \ref{thm:even} leave unanswered the question about the sequential parametrized topological complexity for the Fadell - Neuwirth bundle for $d\ge 4$ even. The upper bound of Proposition \ref{prop upper bound for Fadell-Neuwirth bundle} and the lower bound of Proposition \ref{prop lower bound for Fadell-Neuwirth bundle even d} specify the answer with indeterminacy one. In a forthcoming publication we shall extend the upper bound $rn +m -2$ for any $d\ge 2$ even. We shall employ the method which was briefly described in \cite{FarW}, \S 7 for the case $r=2$. 
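In other words, for any even $d\ge 4$ these two bounds already give $$rn+m-2\,\le\, \TC_r[p : F(\rr^d, n+m) \to F(\rr^d, m)]\,\le\, rn+m-1.$$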
\end{remark} \section{$\TC$-generating function and rationality} \subsec{} Definition \ref{def:main} associates with each fibration $p: E\to B$ an infinite sequence of integer numerical invariants, \begin{eqnarray}\label{seq} \TC_2[p: E\to B], \quad \TC_3[p: E\to B], \quad \dots,\quad \TC_r[p: E\to B],\quad \dots \end{eqnarray} In order to understand the global behaviour of the sequence (\ref{seq}), it can be organised into a generating function \begin{eqnarray}\label{gen} \mathcal F(t) = \sum_{r\ge 1} \TC_{r+1}[p: E\to B]\cdot t^r, \end{eqnarray} which we shall call {\it the $\TC$-generating function of the fibration $p: E\to B$}. Various analytic properties of the generating function $\mathcal F(t)$ reflect asymptotic behaviour of the sequence (\ref{seq}) and topological structure of the fibration $p: E\to B$. Rationality of the generating function (\ref{gen}) would mean existence of a linear recurrence relation between the integers (\ref{seq}) representing sequential parametrized topological complexities for various values of $r$. \begin{lemma} The $\TC$-generating function (\ref{gen}) depends only on the fiberwise homotopy type of the fibration $p: E\to B$. \end{lemma} \begin{proof} This is equivalent to Corollary \ref{fwhom}. \end{proof} \subsec{} In paper \cite{FarO19} the authors introduced the $\TC$-generating function \begin{eqnarray}\label{gen:x} \mathcal F_X(t) = \sum_{r\ge 1} \TC_{r+1}(X)\cdot t^r \end{eqnarray} associating with a finite path-connected CW-complex $X$ a formal power series (\ref{gen:x}). The paper \cite{FarO19} contains many examples when this power series can be explicitly computed and in all these examples $\mathcal F_X(t)$ is representable by a rational function of the form \begin{eqnarray}\label{form} \mathcal F_X(t) = \frac{A}{(1-t)^2} + \frac{B}{1-t}+ p(t), \quad \mbox{where}\quad p(t)\quad \mbox{is a polynomial}. \end{eqnarray} This property of $\mathcal F_X(t)$ is equivalent to the recurrence relation $$\TC_{r+1}(X) = \TC_r(X) +A$$ valid for all sufficiently large $r$; we refer to \cite{FarKS20} for more detail. In many examples the principal residue $A$ in (\ref{form}) equals the Lusternik - Schnirelmann category, \begin{eqnarray}\label{cat} A=\cat(X). \end{eqnarray} These examples lead to the Rationality Question of \cite{FarO19}: {\it for which finite CW-complexes the formal power series (\ref{gen:x}) represents a rational function of the form (\ref{form}) with the top residue equal the Lusternik - Schnirelmann category (\ref{cat})?} \subsec{} In the subsequent paper \cite{FarKS20} the authors analysed a class of CW-complexes violating the Ganea conjecture and found examples $X$ such that the $\TC$-generating function (\ref{gen:x}) is a rational function of the form (\ref{form}) although the top residue $A$ is distinct from $\cat(X)$. \subsec{} Next we mention a few examples when the generating function (\ref{gen}) can be computed. Firstly, suppose that $p: E\to B$ is the trivial fibration with path-connected fibre $X$. Then the generating function (\ref{gen}) equals $\mathcal F_X(t)$. Secondly, consider the Hopf bundle $p: S^3\to S^2$. Then, according to Proposition \ref{princ}, we have $$\TC_{r+1}[p:S^3\to S^2]=\TC_{r+1}(S^1)= \cat((S^1)^r)=r \quad \mbox{for any}\quad r\ge 1.$$ Therefore, the $\TC$-generating function of the Hopf bundle equals $$\sum_{r\ge 1} r\cdot t^r \, =\, \frac{t}{(1-t)^2} \, =\, \frac{1}{(1-t)^2} - \frac{1}{1-t}.$$ In this case the principal residue equals $A = 1= \cat(S^1)$. 
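For the reader's convenience we recall the elementary identity used in this computation: differentiating the geometric series $\sum_{r\ge 0}t^r=\frac{1}{1-t}$ term by term gives $\sum_{r\ge 1}r\,t^{r-1}=\frac{1}{(1-t)^2}$, and multiplying by $t$ yields $$\sum_{r\ge 1} r\cdot t^r\, =\, \frac{t}{(1-t)^2}\, =\, \frac{1-(1-t)}{(1-t)^2}\, =\, \frac{1}{(1-t)^2}-\frac{1}{1-t}.$$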
Exactly the same answer for the $\TC$-generating function can be obtained in the case of a more general Hopf bundle $p: S^{2n+1}\to {\Bbb {CP}}^n$. \subsec{} Consider now the $\TC$-generating function of the Fadell - Neuwirth bundle $p: F(\rr^d, m+n)\to F(\rr^d, m)$ which was analysed in this paper. We start with the case when the dimension $d$ is odd. Then we have the $\TC$-generating function $$\mathcal F(t) \, =\, \sum_{r=1} [(r+1)n +m-1]\cdot t^r = \frac{n}{(1-t)^2} + \frac{m-1}{1-t} -n - m+1. $$ It is a rational function of the form (\ref{form}) with the principal residue $$A= n = \cat(F(\rr^d-\mathcal O_m, n))$$ equal category of the fibre. Using Theorem 1.3 from \cite{GonG15} we may write the $\TC$-generating function of the fiber $X=F(\rr^d-\mathcal O_m, n)$ (for any $d\ge 2$) as follows \begin{eqnarray} \mathcal F_X(t) = n \cdot \sum_{r\ge 1} (r+1)t^r = \frac{n}{(1-t)^2} -n. \end{eqnarray} The $\TC$-generating function of the Fadell - Neuwirth bundle is slightly different in the case when $d=2$: $$\mathcal F(t) \, =\, \sum_{r=1} [(r+1)n +m-2]\cdot t^r = \frac{n}{(1-t)^2} + \frac{m-2}{1-t} -n - m+2.$$ In this case the power series represents a rational function of form (\ref{form}) and the principal residue equals the Lusternik - Schnirelmann category of the fibre. We see that for the Fadell - Neuwirth bundle the $\TC$-generating functions of the bundle and of the fiber have the same principal residue and their difference has a simple pole at $t=1$. This suggests the following general question: {\it How the $\TC$-generating functions of a fibration $p:E\to B$ and of its fibre $X$ are related?} More specifically we may ask: {\it For which fibrations $p: E\to B$ the differences $$\TC_{r+1}[p:E\to B] - \TC_{r}[p:E\to B] \quad \mbox{and}\quad \TC_{r}[p:E\to B]-\TC_r(X)$$ are eventually constant?} This stabilisation happens in the case of the Fadell - Neuwirth fibration. \begin{thebibliography}{12} \bibitem{BasGRT14} I.~Basabe, J.~ Gonz\'{a}lez, Y. B.~Rudyak \& D.~Tamaki, ``Higher topological complexity and its symmetrization," \textit{Algebr. Geom. Topol.} \textbf{14} (2014), no. 4, pp.~2103 - 2124. \bibitem{Bor67} K.~Borsuk, ``Theory of Retracts," \textit{Monografie Matematyczne}, Tom 44, Pa\'nstwowe Wydawnictwo Naukowe, Warsaw (1967). \bibitem{CohFW21} D.C.~ Cohen, M.~ Farber and S.~ Weinberger, ``Topology of parametrized motion planning algorithms," \textit{SIAM Journal of Appl. Algebra Geometry} \textbf{5}, (2021), no. 2, pp.~229 - 249. \bibitem{CohFW} D.C.~ Cohen, M.~ Farber and S.~ Weinberger, ``Parametrized topological complexity of collision-free motion planning in the plane," Annals of Mathematics and Artificial Intelligence, https://doi.org/10.1007/s10472-022-09801-6. \bibitem{Dra} A. Dranishnikov, \textit{On topological complexity of hyperbolic groups}. Proc. Amer. Math. Soc. 148 (2020), no. 10, pp. 4547–4556. \bibitem{FadH01} E.~Fadell and S.~Husseini, \textit{Geometry and Topology of Configuration Spaces,} Springer Mono-graphs in Mathematics, Springer-Verlag, Berlin, 2001. \bibitem{FadN} E. ~Fadell and L. ~Neuwirth, \textit{Configuration spaces.} Math. Scand. {\bf 10} (1962), 111–118. \bibitem{Far03} M.~ Farber, \textit{Topological complexity of motion planning}, {Discrete Comput. Geom.} \textbf{29} (2003), no. 2, pp.~211 - 221. \bibitem{Far04} M. Farber, \textit{Topology of robot motion planning,} in Morse Theoretic Methods in Non-linear Analysis and in Symplectic Topology, NATO Sci. Ser. II Math. Phys. Chem. 217, Springer, Dordrecht, 2006, pp. 185-230. 
\bibitem{FarKS20} M.~ Farber, D.~ Kishimoto, and D.~ Stanley ``Generating functions and topological complexity", \textit{Topology Appl.} \textbf{278} (2020), 107235. \bibitem{FarO19} M.~ Farber, J.~ Oprea, ``Higher topological complexity of aspherical spaces", \textit{Topology Appl.} \textbf{258} (2019), pp.~142-160. \bibitem{FarW} M.~ Farber and S.~ Weinberger, ``Parametrized motion planning and topological complexity," \textit{ arXiv:2202.05801}, (2022). To appear in "Workshop on Algorithmic Foundations of Robotics", Springer. \bibitem{Gar19} J. M.~ Garc\'{\i}a-Calcines, ``A note on covers defining relative and sectional categories," \textit{Topology Appl.} \textbf{265} (2019), 106810. \bibitem{GonG15} J. González and M. Grant, \title{Sequential motion planning of non-colliding particles in Euclidean spaces.} Proc. Amer. Math. Soc. 143 (2015), no. 10, 4503–4512. \bibitem{GraLV} M. Grant, G. Lupton, L. Vandembroucq, \textit{Topological Complexity and Related Topics}, Contemp. Math., 702, Amer. Math. Soc., Providence, RI, 2018. \bibitem{Hat} A. Hatcher, \textit{Algebraic Topology}, Cambridge University Press, Cambridge, 2002. \bibitem{Lat} J.-C. Latombe, \textit{Robot Motion Planning}, Springer Internat. Ser. Engrg. Comput. Sci. 124, Springer, Boston, 1991. \bibitem{LaV} S. M. LaValle, \textit{Planning Algorithms}, Cambridge University Press, Cambridge, 2006. \bibitem{Rud10} Y. B. ~Rudyak, `` On higher analogs of topological complexity," \textit{Topology Appl.} \textbf{157} (2010), no. 5, pp.~916 - 920. \bibitem{Sva66} A. S. ~ Schwarz, \textit{The genus of a fiber space}, \textit{Amer. Math. Soc. Transl. Ser. 2}, \textbf{55} (1966), pp.~ 49-140. \end{thebibliography} \end{document}
2205.08375v2
http://arxiv.org/abs/2205.08375v2
Hilbert-Poincaré series and Gorenstein property for some non-simple polyominoes
\documentclass[12pt]{amsart} \usepackage[margin=2cm]{geometry} \usepackage{amsmath, amsthm, amssymb} \usepackage{epsfig} \usepackage{rawfonts} \usepackage{enumerate} \usepackage{graphics} \usepackage{multirow} \usepackage{xspace} \usepackage{graphicx} \usepackage{pgf,tikz,pgfplots} \usepackage{mathrsfs} \usetikzlibrary{arrows} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \usepackage{booktabs} \usepackage{caption} \usepackage{listings} \usepackage{setspace} \usepackage[mathscr]{eucal} \usepackage{pgfplots} \usepackage{hyperref} \usepackage{wrapfig} \usepackage{floatflt,epsfig} \usepackage{dsfont} \usepackage{amscd} \usepackage{tikz-cd} \usepackage{fancyhdr} \usepackage[all]{xy} \usepackage{latexsym} \usepackage{amscd} \usepackage{pifont} \usepackage{eufrak} \usepackage{subfig} \newcommand{\lik}{\mathrel{\mathop:}=} \newcommand{\cstar}{\mathbb{C}^{\ast}} \newcommand{\id}{\textrm{Id}} \newcommand{\pn}[1]{\mathbb{P}^{#1}} \newcommand{\dime}[1]{\textrm{dim}(#1)} \newcommand{\singl}[1]{\textrm{sing}(#1)} \newcommand{\dimemph}[1]{\textrm{\emph{dim}}(#1)} \newcommand{\singemph}[1]{\textrm{\emph{sing}}(#1)} \newcommand{\degree}[1]{\textrm{deg}\hskip 2pt #1} \newcommand{\MCD}{\mathop{\rm MCD}\nolimits} \newcommand{\mcm}{\mathop{\rm mcm}\nolimits} \newcommand{\Aut}{\mathop{\rm Aut}\nolimits} \newcommand{\Int}{\mathop{\rm Int}\nolimits} \newcommand{\Id}{\mathop{\rm Id}\nolimits} \newcommand{\Ann}{\mathop{\rm Ann}\nolimits} \newcommand{\Stab}{\mathop{\rm Stab}\nolimits} \newcommand{\Fix}{\mathop{\rm Fix}\nolimits} \newcommand{\Ht}{\mathop{\rm ht}\nolimits} \newcommand{\Spec}{\mathop{\rm Spec}\nolimits} \newcommand{\embdim}{\mathop{\rm embdim}\nolimits} \newcommand{\depth}{\mathop{\rm depth}\nolimits} \newcommand{\codh}{\mathop{\rm codh}\nolimits} \newcommand{\Hom}{\mathop{\rm Hom}\nolimits} \newcommand{\injdim}{\mathop{\rm injdim}\nolimits} \newcommand{\Ext}{\mathop{\rm Ext}\nolimits} \newcommand{\size}{\mathop{\rm size}\nolimits} \newcommand{\sign}{\mathop{\rm sign}\nolimits} \def\NN{{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\FF{{\mathbb F}} \def\KK{{\mathbb K}} \newcommand{\ord}{<^{\mathsf{P}}_{\mathrm{lex}}} \newcommand{\lego}{\left(\frac} \newcommand{\legc}{\right)} \newcommand{\gC}{\mathsf{C}} \newcommand{\gD}{\mathsf{D}} \newcommand{\gS}{\mathsf{S}} \newcommand{\gA}{\mathsf{A}} \newcommand{\gQ}{\mathsf{Q}} \newcommand{\gB}{\mathsf{B}} \newcommand{\cC}{\mathcal{C}} \newcommand{\Ob}{\mathcal{O}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\cW}{\mathcal{W}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cV}{\mathcal{V}} \newcommand{\cG}{\mathcal{G}} \newcommand{\rHP}{\mathrm{HP}} \renewcommand{\qedsymbol}{$\square$} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section]\newtheorem{proposition}[theorem]{Proposition}\newtheorem{lemma}[theorem]{Lemma}\newtheorem{coro}[theorem]{Corollary} \newtheorem{example}[theorem]{Example}\newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \begin{document} \title[Hilbert series and Gorenstein property for some non-simple polyominoes]{HILBERT-POINCAR\'E SERIES AND GORENSTEIN PROPERTY FOR SOME NON-SIMPLE POLYOMINOES} 
\author{CARMELO CISTO} \address{Universit\'{a} di Messina, Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra\\ Viale Ferdinando Stagno D'Alcontres 31\\ 98166 Messina, Italy} \email{[email protected]} \author{FRANCESCO NAVARRA} \address{Universit\'{a} di Messina, Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra\\ Viale Ferdinando Stagno D'Alcontres 31\\ 98166 Messina, Italy} \email{[email protected]} \author{ROSANNA UTANO} \address{Universit\'{a} di Messina, Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra\\ Viale Ferdinando Stagno D'Alcontres 31\\ 98166 Messina, Italy} \email{[email protected]} \subjclass[2010]{05B50, 05E40, 13C05, 13G05} \keywords{Polyominoes, Hilbert-Poincar\'e series, rook polynomial, Gorenstein.} \begin{abstract} Let $\mathcal{P}$ be a closed path having no zig-zag walks, a kind of non-simple thin polyomino. In this paper we give a combinatorial interpretation of the $h$-polynomial of $K[\mathcal{P}]$, showing that it is the rook polynomial of $\mathcal{P}$. It is known by Rinaldo and Romeo (2021), that if $\mathcal{P}$ is a simple thin polyomino then the $h$-polynomial is equal to the rook polynomial of $\mathcal{P}$ and it is conjectured that this property characterizes all thin polyominoes. Our main demonstrative strategy is to compute the reduced Hilbert-Poincar\'e series of the coordinate ring attached to a closed path $\mathcal{P}$ having no zig-zag walks, as a combination of the Hilbert-Poincar\'e series of convenient simple thin polyominoes. As a consequence we prove that the Krull dimension is equal to $\vert V(\mathcal{P})\vert -\mathrm{rank}\, \mathcal{P}$ and the regularity of $K[\mathcal{P}]$ is the rook number of $\mathcal{P}$. Finally we characterize the Gorenstein prime closed paths, proving that $K[\mathcal{P}]$ is Gorenstein if and only if $\mathcal{P}$ consists of maximal blocks of length three. \end{abstract} \maketitle \section{Introduction} \noindent A \textit{polyomino} is a finite collection of unitary squares joined edge by edge. In 2012 A.A. Qureshi defined the \textit{polyomino ideal} attached to a polyomino $\cP$ as the ideal $I_\cP$ generated by all inner 2-minors of $\cP$ in the polynomial ring over $K$ in the variables $x_v$, for each $v$ vertex of $\cP$. For more details see \cite{Qureshi}.\\ \noindent The study of the main algebraic properties of the quotient ring $K[\cP]=S/I_{\cP}$ depending on the shape of $\cP$ has become an exciting line of research. For instance, several mathematicians have studied the primality of $I_\cP$, normality and Cohen-Macaulayness of $K[\cP]$. For some references to these results we mention \cite{Cisto_Navarra_closed_path}, \cite{Cisto_Navarra_weakly}, \cite{Cisto_Navarra_CM_closed_path}, \cite{Dinu_Navarra}, \cite{Simple equivalent balanced}, \cite{def balanced}, \cite{Not simple with localization}, \cite{Trento}, \cite{Trento2}, \cite{Simple are prime}, \cite{Shikama}. We mention also other references, which provided inspiration for this work. In \cite{Andrei} the author classifies all convex polyominoes whose coordinate rings are Gorenstein and computes the regularity of the coordinate ring of any stack polyomino in terms of the smallest interval which contains its vertices. 
In \cite{L-convessi} the authors give a new combinatorial interpretation of the regularity of the coordinate ring attached to an $L$-convex polyomino, as the rook number of $\cP$, that is the maximum number of rooks which can be arranged in $\cP$ in non-attacking positions. In \cite{Trento3} it is shown that if $\cP$ is a simple thin polyomino, which is a polyomino not containing the square tetromino, then the $h$-polynomial $h(t)$ of $K[\cP]$ is the rook polynomial $r_{\cP}(t)=\sum_{i=0}^n r_i t^i$ of $\cP$, whose coefficient $r_i$ represents the number of distinct possibilities of arranging $i$ rooks on cells of $\cP$ in non-attacking positions (with the convention $r_0=1$). Gorenstein simple thin polyominoes are also characterized using the $S$-property and finally it is conjectured that a polyomino is thin if and only if $h(t)=r_{\cP}(t)$. In this paper we also give partial support to this conjecture, since we provide an affirmative answer for a particular class of non-simple thin polyominoes, namely closed paths. In \cite{Kummini rook polynomial} this conjecture is discussed for a certain class of polyominoes. In a recent paper \cite{Parallelogram Hilbert series} the authors introduce a particular equivalence relation on the rook complex of a simple polyomino and they conjecture that the number of equivalence classes of arrangements of $i$ non-attacking rooks is exactly the $i$-th coefficient of the $h$-polynomial in the reduced Hilbert-Poincar\'e series. Moreover they prove it for the class of parallelogram polyominoes and, by a computational method, also for all simple polyominoes having rank at most eleven.\\ Let $\cP$ be a closed path without zig-zag walks, which is a particular non-simple thin polyomino introduced in \cite{Cisto_Navarra_closed_path}. The aim of this paper is to provide a combinatorial view of the $h$-polynomial of $K[\cP]$. In this direction, we show that it is equal to the rook polynomial of $\cP$, studying the Hilbert-Poincar\'e series of certain non-simple polyominoes having just one hole and relating them to the Hilbert-Poincar\'e series of simple thin polyominoes included in $\cP$. As a consequence we compute the Krull dimension and the regularity of $K[\cP]$ and, in conclusion, we give a characterization of the Gorenstein property in terms of the length of the maximal blocks of $\cP$. Roughly speaking, a closed path is a sequence of cells, similar to a pearl necklace on a table, in which each pearl corresponds to a cell, producing only one hole. This class of polyominoes is defined in \cite{Cisto_Navarra_closed_path}, where the authors characterize their primality by the non-existence of zig-zag walks, equivalently by the existence of an $L$-configuration or a ladder of at least three steps. In Section \ref{Section: Introduction} we introduce definitions and notation which are fundamental throughout the paper. In Section \ref*{Section:Poincare-Hilbert series of certain non-simple polyominoes}, we introduce a particular polyomino $\cL$ and define the class of $(\cL,\cC)$-polyominoes, $\cC$ a generic polyomino, and we provide an explicit formula for the Hilbert-Poincar\'e series of the related coordinate rings, depending on the Hilbert-Poincar\'e series of some polyominoes obtained by eliminating specific cells. In a particular case we also compute the Krull dimension of the coordinate ring of a polyomino belonging to this class.
In Section \ref{Section: H-P series of prime closed path polyominoes having no L-configuration} we assume that $\cP$ is a closed path polyomino and we deal the case in which $\cP$ has no $L$-configuration but it contains a ladder of at least three steps. In order to reach our aim we describe the initial ideals of some subpolyominoes of $\cP$ with respect to some monomial orders. The case $\cP$ has an $L$-configuration is discussed implicitly in Section 3, since such a closed path is a particular $(\mathcal{L},\cC)$-polyomino, with $\cC$ a simple path polyomino. In this way we fulfil the study of the Hilbert-Poincar\'e series for the class of closed paths without zig-zag walks. In Section 5 we prove that the $h$-polynomial of $K[\cP]$, where $\cP$ is a prime closed path polyomino, is the rook polynomial of $\cP$, obtaining as a consequence the regularity and the Krull dimension of $K[\cP]$. Finally we characterize all Gorenstein prime closed paths using the $S$-property. We conclude giving some open questions. \section{Polyominoes and polyomino ideals}\label{Section: Introduction} \noindent Let $(i,j),(k,l)\in \mathbb{Z}^2$. We say that $(i,j)\leq(k,l)$ if $i\leq k$ and $j\leq l$. Consider $a=(i,j)$ and $b=(k,l)$ in $\mathbb{Z}^2$ with $a\leq b$. The set $[a,b]=\{(m,n)\in \mathbb{Z}^2: i\leq m\leq k,\ j\leq n\leq l \}$ is called an \textit{interval} of $\mathbb{Z}^2$. In addition, if $i< k$ and $j<l$ then $[a,b]$ is a \textit{proper} interval. In such a case we say $a, b$ the \textit{diagonal corners} of $[a,b]$ and $c=(i,l)$, $d=(k,j)$ the \textit{anti-diagonal corners} of $[a,b]$. If $j=l$ (or $i=k$) then $a$ and $b$ are in \textit{horizontal} (or \textit{vertical}) \textit{position}. We denote by $]a,b[$ the set $\{(m,n)\in \mathbb{Z}^2: i< m< k,\ j< n< l\}$. A proper interval $C=[a,b]$ with $b=a+(1,1)$ is called a \textit{cell} of $\ZZ^2$; moreover, the elements $a$, $b$, $c$ and $d$ are called respectively the \textit{lower left}, \textit{upper right}, \textit{upper left} and \textit{lower right} \textit{corners} of $C$. The sets $\{a,c\}$, $\{c,b\}$, $\{b,d\}$ and $\{a,d\}$ are the \textit{edges} of $C$. We put $V(C)=\{a,b,c,d\}$ and $E(C)=\{\{a,c\},\{c,b\},\{b,d\},\{a,d\}\}$. Let $\cS$ be a non-empty collection of cells in $\mathbb{Z}^2$. The set of the vertices and of the edges of $\cS$ are respectively $V(\cS)=\bigcup_{C\in \cS}V(C)$ and $E(\cS)=\bigcup_{C\in \cS}E(C)$, $\mathrm{rank}\,\cS$ is the number of cells belonging to $\cS$. If $C$ and $D$ are two distinct cells of $\cS$, then a \textit{walk} from $C$ to $D$ in $\cS$ is a sequence $\cC:C=C_1,\dots,C_m=D$ of cells of $\ZZ^2$ such that $C_i \cap C_{i+1}$ is an edge of $C_i$ and $C_{i+1}$ for $i=1,\dots,m-1$. In addition, if $C_i \neq C_j$ for all $i\neq j$, then $\cC$ is called a \textit{path} from $C$ to $D$. We say that $C$ and $D$ are connected in $\cS$ if there exists a path of cells in $\cS$ from $C$ to $D$. A \textit{polyomino} $\cP$ is a non-empty, finite collection of cells in $\mathbb{Z}^2$ where any two cells of $\cP$ are connected in $\cP$. For instance, see Figure \ref{Figura: Polimino introduzione}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{Esempio_polimino_introduzione} \caption{A polyomino.} \label{Figura: Polimino introduzione} \end{figure} \noindent We say that a polyomino $\cP$ is \textit{simple} if for any two cells $C$ and $D$ not in $\cP$ there exists a path of cells not in $\cP$ from $C$ to $D$. 
A finite collection of cells $\cH$ not in $\cP$ is a \textit{hole} of $\cP$ if any two cells of $\cH$ are connected in $\cH$ and $\cH$ is maximal with respect to set inclusion. For example, the polyomino in Figure \ref{Figura: Polimino introduzione} is not simple and has a hole. Obviously, each hole of $\cP$ is a simple polyomino and $\cP$ is simple if and only if it has no holes.\\ Let $A$ and $B$ be two cells of $\mathbb{Z}^2$, with $a=(i,j)$ and $b=(k,l)$ as the lower left corners of $A$ and $B$ and $a\leq b$. A \textit{cell interval} $[A,B]$ is the set of the cells of $\mathbb{Z}^2$ with lower left corner $(r,s)$ such that $i\leqslant r\leqslant k$ and $j\leqslant s\leqslant l$. If $(i,j)$ and $(k,l)$ are in horizontal (or vertical) position, we say that the cells $A$ and $B$ are in \textit{horizontal} (or \textit{vertical}) \textit{position}.\\ Let $\cP$ be a polyomino, $A$ and $B$ two cells of $\cP$ in vertical or horizontal position. The cell interval $[A,B]$, containing $n>1$ cells, is called a \textit{block of $\cP$ of rank $n$} if all cells of $[A,B]$ belong to $\cP$. The cells $A$ and $B$ are called \textit{extremal} cells of $[A,B]$. Moreover, a block $\cB$ of $\cP$ is \textit{maximal} if there does not exist any block of $\cP$ which properly contains $\cB$. It is clear that an interval of $\ZZ^2$ identifies a cell interval of $\ZZ^2$ and vice versa, hence we can associate to an interval $I$ of $\ZZ^2$ the corresponding cell interval denoted by $\cP_{I}$. A proper interval $[a,b]$ is called an \textit{inner interval} of $\cP$ if all cells of $\cP_{[a,b]}$ belong to $\cP$. We denote by $\cI(\cP)$ the set of all inner intervals of $\cP$. An interval $[a,b]$ with $a=(i,j)$, $b=(k,j)$ and $i<k$ is called a \textit{horizontal edge interval} of $\cP$ if the sets $\{(\ell,j),(\ell+1,j)\}$ are edges of cells of $\cP$ for all $\ell=i,\dots,k-1$. In addition, if $\{(i-1,j),(i,j)\}$ and $\{(k,j),(k+1,j)\}$ do not belong to $E(\cP)$, then $[a,b]$ is called a \textit{maximal} horizontal edge interval of $\cP$. We define similarly a \textit{vertical edge interval} and a \textit{maximal} vertical edge interval. \\ \noindent As in \cite{Trento} we call a \textit{zig-zag walk} of $\cP$ a sequence $\cW:I_1,\dots,I_\ell$ of distinct inner intervals of $\cP$ where, for all $i=1,\dots,\ell$, the interval $I_i$ has either diagonal corners $v_i$, $z_i$ and anti-diagonal corners $u_i$, $v_{i+1}$ or anti-diagonal corners $v_i$, $z_i$ and diagonal corners $u_i$, $v_{i+1}$, such that: \begin{enumerate}[(1)] \item $I_1\cap I_\ell=\{v_1=v_{\ell+1}\}$ and $I_i\cap I_{i+1}=\{v_{i+1}\}$, for all $i=1,\dots,\ell-1$; \item $v_i$ and $v_{i+1}$ are on the same edge interval of $\cP$, for all $i=1,\dots,\ell$; \item for all $i,j\in \{1,\dots,\ell\}$ with $i\neq j$, there exists no inner interval $J$ of $\cP$ such that $z_i$, $z_j$ belong to $J$. \end{enumerate} \noindent Following \cite{Cisto_Navarra_closed_path}, we recall the definition of a \textit{closed path polyomino} and the configuration of cells characterizing its primality.
We say that a polyomino $\cP$ is a \textit{closed path} if it is a sequence of cells $A_1,\dots,A_n, A_{n+1}$, $n>5$, such that: \begin{enumerate}[(1)] \item $A_1=A_{n+1}$; \item $A_i\cap A_{i+1}$ is a common edge, for all $i=1,\dots,n$; \item $A_i\neq A_j$, for all $i\neq j$ and $i,j\in \{1,\dots,n\}$; \item for all $i\in\{1,\dots,n\}$ and for all $j\notin\{i-2,i-1,i,i+1,i+2\}$ then $V(A_i)\cap V(A_j)=\emptyset$, where $A_{-1}=A_{n-1}$, $A_0=A_n$, $A_{n+1}=A_1$ and $A_{n+2}=A_2$. \end{enumerate} A path of five cells $C_1, C_2, C_3, C_4, C_5$ of $\cP$ is called an \textit{L-configuration} if the two sequences $C_1, C_2, C_3$ and $C_3, C_4, C_5$ go in two orthogonal directions. A set $\cB=\{\cB_i\}_{i=1,\dots,n}$ of maximal horizontal (or vertical) blocks of rank at least two, with $V(\cB_i)\cap V(\cB_{i+1})=\{a_i,b_i\}$ and $a_i\neq b_i$ for all $i=1,\dots,n-1$, is called a \textit{ladder of $n$ steps} if $[a_i,b_i]$ is not on the same edge interval of $[a_{i+1},b_{i+1}]$ for all $i=1,\dots,n-2$. For instance, in Figure \ref{Figura:L conf + Ladder}(a) there is a closed path having an $L$-configuration and a ladder of three steps. We recall that a closed path has no zig-zag walks if and only if it contains an $L$-configuration or a ladder of at least three steps (see \cite[Section 6]{Cisto_Navarra_closed_path}). \\A finite non-empty collection of cells $\cP$ is called a \textit{weakly closed path} (\cite[Definition 4.1]{Cisto_Navarra_weakly}) if it is a path of $n$ cells $A_1,\dots,A_{n-1},A_n=A_0$ with $n>6$ such that: \begin{enumerate}[(1)] \item $\vert V(A_0)\cap V(A_1)\vert =1$; \item $V(A_2)\cap V(A_{0})=V(A_{n-1})\cap V(A_1)=\emptyset$; \item $V(A_i)\cap V(A_j)=\emptyset$ for all $i\in\{1,\dots,n\}$ and for all $j\notin\{i-2,i-1,i,i+1,i+2\}$, where the indices are reduced modulo $n$. \end{enumerate} A finite collection of cells of $\cP$, made up of a maximal horizontal (resp. vertical) block $[A,B]$ of $\cP$ of length at least two and two distinct cells $C$ and $D$ of $\cP$, not belonging to $[A,B]$, with $V(C)\cap V([A,B])=\{a_1\}$ and $V(D)\cap V([A,B])=\{a_2,b_2\}$, $a_2\neq b_2$, is called a \textit{weak ladder} if $[a_2,b_2]$ is not on the same maximal horizontal (resp. vertical) edge interval of $\cP$ containing $a_1$ (see Figure \ref{Figura:L conf + Ladder}(b)) \begin{figure}[h!] \centering \subfloat[]{\includegraphics[scale=0.55]{Esempio_closed_path_Lconf_ladder}} \qquad\qquad \subfloat[]{\includegraphics[scale=0.65]{Esempio_weak_ladder}} \caption{Examples of a closed path (a) and a weakly closed path (b).} \label{Figura:L conf + Ladder} \end{figure} \noindent Let $\cP$ be a polyomino. We set $S_\cP=K[x_v\vert v\in V(\cP)]$, where $K$ is a field. If $[a,b]$ is an inner interval of $\cP$, with $a$, $b$ and $c$, $d$ respectively diagonal and anti-diagonal corners, then the binomial $x_ax_b-x_cx_d$ is called an \textit{inner 2-minor} of $\cP$. We call the ideal $I_{\cP}$ in $S_\cP$ generated by all the inner 2-minors of $\cP$ the \textit{polyomino ideal} of $\cP$. We set also $K[\cP] = S_\cP/I_{\cP}$, which is the \textit{coordinate ring} of $\cP$. \\ \noindent We recall some notions on the Hilbert function and the Hilbert-Poincar\'e series of a graded $K$-algebra $R/I$. Let $R$ be a graded $K$-algebra and $I$ be an homogeneous ideal of $R$. Then $R/I$ has a natural structure of graded $K$-algebra as $\bigoplus_{k\in\mathbb{N}}(R/I)_k$. 
The numerical function $\mathrm{H}_{R/I}:\mathbb{N}\to \mathbb{N}$ with $\mathrm{H}_{R/I}(k)=\dim_{K} (R/I)_k$ is called the \textit{Hilbert function} of $R/I$. The formal series $\rHP_{R/I}(t)=\sum_{k\in\mathbb{N}}\mathrm{H}_{R/I}(k)t^k$ is called the \textit{Hilbert-Poincar\'e series} of $R/I$. It is known by Hilbert-Serre theorem that there exists a polynomial $h(t)\in \mathbb{Z}[t]$ with $h(1)\neq0$ such that $\rHP_{R/I}(t)=\frac{h(t)}{(1-t)^d}$, where $d$ is the Krull dimension of $R/I$. Moreover, if $R/I$ is Cohen-Macaulay then $\mathrm{reg}(R/I)=\deg h(t)$. Recall also that if $S=K[x_1,\ldots,x_n]$ then $\rHP_{S}(t)=\frac{1}{(1-t)^n}$.\\ We will use frequently the following well known results (see for instance \cite[Chapter 5]{Villareal}). \begin{proposition}\label{Proposizione: la prima che serve per calcolare HP} Let $R$ be a graded $K$-algebra and $I$ be a graded ideal of $R$. Let $q$ be an homogeneous element of $R$ of degree $m$. Consider the exact sequence: $$0 \longrightarrow R/(I:q) \longrightarrow R/I \longrightarrow R/(I,q) \longrightarrow0 $$ Then $\rHP_{R/I}(t)=\rHP_{R/(I,q)}(t)+t^m\rHP_{R/(I:q)}(t)$. \end{proposition} \begin{proposition}\label{Hilber-tensoriale} Let $A$ and $B$ be standard graded $K$-algebras over a field $K$. Then $\rHP_{A \otimes_K B}(t)=\rHP_{A}(t) \cdot \rHP_{B}(t)$. \end{proposition} \begin{remark} \rm We will often use also the following elementary fact: if $\mathbf{X}$ is a set of indeterminates, $\mathbf{X}_1,\mathbf{X}_2\subset \mathbf{X}$ form a partition of $\mathbf{X}$ into disjoint non-empty subsets and $I$ is an ideal of $K[\mathbf{X}]$, $K$ a field, such that each generator of $I$ belongs to $K[\mathbf{X}_j]$ for $j\in \{1,2\}$, then $K[\mathbf{X}]/I\cong K[\mathbf{X}_1]/I_1 \otimes_K K[\mathbf{X}_2]/I_2$, where $I_j=I\cap K[\mathbf{X}_j]$ for $j\in \{1,2\}$ (see \cite[Proposition 3.1.33]{Villareal}). \end{remark} \noindent If $\cP$ is a polyomino then $\rHP_{K[\cP]}(t)=\frac{h(t)}{(1-t)^d}$ for some $h(t)\in \mathbb{Z}[t]$ with $h(1)\neq0$, so we denote $h(t)=h_{K[\cP]}(t)$ along the paper. Finally, if $n\in \mathbb{N}$ as usual we denote by $[n]$ the set $\{1,2,\ldots,n\}$.\\ \section{Hilbert-Poincar\'e series of certain non-simple polyominoes}\label{Section:Poincare-Hilbert series of certain non-simple polyominoes} \noindent In this section we examine a particular class of non-simple polyominoes that we introduce in the following definition. \begin{definition}\rm\label{Definizione: (L,C)-polimino} Let $\cL$ be the union of the two cell intervals $[A,A_r]$, consisting of the cells $A,A_1,\ldots ,A_r$, and $[A,B_s]$, consisting of the cells $A,B_1,\ldots,B_s$, where $A,A_r$ and $A,B_s$ are respectively in horizontal and vertical position with $r,s\geq 2$. We denote by $a,b$ and $c,d$ respectively the diagonal and anti-diagonal corners of $A$, by $d_i$ and $a_i$ respectively the upper left and upper right corners of $B_i$ for $i\in [s]$ and by $b_j$ and $c_j$ respectively the upper and lower right corners of $A_j$ for $j\in[r]$. Let $\cC$ be a polyomino. 
We say that a polyomino $\cP$ is an $(\mathcal{L},\mathcal{C})$\textit{-polyomino} if $\cP=\mathcal{L}\sqcup \mathcal{C}$ and it satisfies one and only one of the following four conditions (see also Figure~\ref{Figura:esempio P(L,C)}): \begin{enumerate}[(1)] \item $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,b_{r-1},b_r\}$; \item $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,c_{r-1},c_r\}$; \item $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,b_{r-1},b_r\}$; \item $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,c_{r-1},c_r\}$; \end{enumerate} \begin{figure}[h] \centering \subfloat[]{\includegraphics[scale=0.6]{Esempio_PLC4}} \subfloat[]{\includegraphics[scale=0.6]{Esempio_PLC3}} \subfloat[]{\includegraphics[scale=0.6]{Esempio_PLC}} \subfloat[]{\includegraphics[scale=0.6]{Esempio_PLC2}} \caption{Examples of the different cases of $(\cL,\cC)$-polyominoes} \label{Figura:esempio P(L,C)} \end{figure} If $\cP$ is an $(\mathcal{L},\mathcal{C})$-polyomino, the following related polyominoes will be used throughout the paper: \begin{itemize} \item $\cP_1=\cP\backslash [A,A_r]$; \item $\cP_2=\cP\backslash [A,B_s]$; \item $\cP_3=\cP\backslash ([A,A_r]\cup [A,B_s])=\mathcal{C}$; \item $\cP_4=\cP\backslash \{A,A_1,B_1\}$; \item $\cP'_1=\cP\backslash [A_1,A_r]$; \item $\cP'_2=\cP\backslash [B_1,B_s]$. \end{itemize} \end{definition} \begin{lemma} Let $\cP$ be an $(\cL,\cC)$-polyomino. Then $S_\cP/(I_{\cP},x_a,x_d,x_c,x_b)\cong K[\cP_4]$. \label{isomorfismoP4} \end{lemma} \begin{proof} Observe that $I_\cP$ can be written in the following way \begin{align*} &I_\cP=I_{\cP_4}+ (x_a x_b -x_c x_d)+\sum_{i=1}^s(x_a x_{a_i}-x_c x_{d_i})+\sum_{i=1}^s(x_d x_{a_i}-x_b x_{d_i})+\\ &\sum_{i=1}^r(x_a x_{b_i}-x_d x_{c_i})+\sum_{i=1}^r(x_c x_{b_i}-x_b x_{c_i}). \end{align*} It follows that $(I_{\cP},x_a,x_d,x_c,x_b)=(I_{\cP_4},x_a,x_d,x_c,x_b)$, in particular $$S_\cP/(I_{\cP},x_a,x_d,x_c,x_b)=S_\cP/(I_{\cP_4},x_a,x_d,x_c,x_b)\cong S_{\cP_4}/I_{\cP_4}=K[\cP_4].$$ \end{proof} \begin{proposition}\label{Proposizione: K[P_i] è dominio, sapendo che I è primo} Let $\cP$ be an $(\cL,\cC)$-polyomino. If $I_{\cP}$ is prime, then $K[\cP_i]$ and $K[\cP'_j]$ are domains for $i\in [4]$ and $j\in\{1,2\}$. \end{proposition} \begin{proof} We may assume that $\cP$ is an $(\cL,\cC)$-polyomino such that $V(\cL)\cap V(\cC)=\{a_{s-1},a_s, b_{r-1},b_r\}$, since similar arguments can be used in the other cases. We prove that $K[\cP_1]$ is a domain. Observe that $I_\cP$ is a toric ideal since $I_{\cP}$ is a prime binomial ideal. Then there exists a map $\phi: S_{\cP}\to K[t_1^{\pm 1},\dots,t_d^{\pm1}]$ with $x_{ij}\mapsto \mathbf{t}^{\mathbf{a_{ij}}} \footnote{If $\mathbf{t}=(t_1,\dots,t_d)$ and $\mathbf{a}=(a_1,\dots,a_d)\in \ZZ^d$, then $\mathbf{t}^{\mathbf{a}}=t_1^{a_1}\dots t_d^{a_d}$.}$ for all $(i,j)\in V(\cP)$ such that $I_{\cP}=\ker \phi$. Let $\mathcal{V}=\{a,c,c_1,\dots,c_r,b_1,\dots,b_{r-2}\}$; we define $\phi_\mathcal{V}:S_{\cP}\to K[t_1^{\pm 1},\dots,t_d^{\pm1}]$ by $\phi_\cV(x_v)=0$ if $v\in \cV$ and $\phi_\cV(x_v)=\phi(x_v)$ otherwise. Put $J_\cV:=(I_{\cP},\{x_v\mid v\in \cV\})$; we prove that $J_\cV=\ker \phi_\cV$. If $f\in \ker \phi_\cV$, we can write $f=\tilde{f}+\beta g$ where $\beta\in S_\cP$, $g\in (\{x_v\mid v\in \cV\})$ and $\tilde{f}$ contains no variable in the set $\{x_v\mid v\in \cV\}$. Since $\phi_\cV(f)=0$, we have $\phi(\tilde{f})=0$, so $\tilde{f}\in \ker\phi=I_{\cP}$. For the other inclusion it suffices to prove that $I_\cP \subseteq \ker\phi_\cV$.
In such a case observe that, for this configuration, if $f=x_{i_1}x_{i_2}-x_{j_1}x_{j_2}$ is a generator of $I_\cP$ then $\{x_{i_1},x_{i_2}\}\cap \{x_v\mid v\in \cV\}\neq \emptyset$ if and only if $\{x_{j_1},x_{j_2}\}\cap \{x_v\mid v\in \cV\}\neq \emptyset$, so in all possible cases we have $\phi_\cV(f)=0$. Therefore $J_\cV=\ker\phi_\cV$ and $J_\cV$ is a prime ideal. As in Lemma \ref{isomorfismoP4}, we have also that $J_\cV=(I_{\cP_1},x_a,x_c,x_{c_i},x_{b_j}:i\in[r],j\in[r-2])$, and $K[\cP_1]\cong S_{\cP}/J$ is a domain. The proof for this case is done. For the other polyominoes the proof is analogue, considering: \begin{itemize} \item for $\cP_2$ the set $\cV=\{a,d,d_1,\dots,d_s,a_1,\dots,a_{s-2}\}$; \item for $\cP_3$ the set $\cV=\{a,b,c,d,c_1,\ldots, c_r, d_1,\dots,d_s,a_1,\dots,a_{s-2},b_1,\ldots,b_{r-2}\}$; \item for $\cP_4$ the set $\cV=\{a,b,c,d\}$; \item for $\cP_1'$ the set $\cV=\{b_1,\dots,b_{r-2},c_1,\dots,c_r\}$; \item for $\cP_2'$ the set $\cV=\{a_1,\dots,a_{s-2},d_1,\dots,d_s\}$. \end{itemize} \end{proof} \noindent If $\cP$ is an $(\cL,\cC)$-polyomino, our aim is to provide a formula for the Hilbert-Poincar\'e series of $K[\cP]$, involving the Hilbert-Poincar\'e series of $K[\cP_1]$, $K[\cP_2]$, $K[\cP_3]$ and $K[\cP_4]$ in the hypotheses that $K[\cP]$ is an integral domain. In particular, if $(i_1,i_2,i_3,i_4)$ is a permutation of the set $\{a,b,c,d\}$, our strategy consists in considering the following four short exact sequences: \begin{footnotesize} $$0 \longrightarrow S_\cP/(I_{\cP}:x_{i_1}) \longrightarrow S_\cP/I_{\cP} \longrightarrow S_\cP/(I_{\cP},x_{i_1}) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_{i_1}):x_{i_2}) \longrightarrow S_\cP/(I_{\cP},x_{i_1}) \longrightarrow S_\cP/(I_{\cP},x_{i_1},x_{i_2}) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_{i_1},x_{i_2}):x_{i_3}) \longrightarrow S_\cP/(I_{\cP},x_{i_1},x_{i_2}) \longrightarrow S_\cP/(I_{\cP},x_{i_1},x_{i_2},x_{i_3}) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_{i_1},x_{i_2},x_{i_3}):x_{i_4}) \longrightarrow S_\cP/(I_{\cP},x_{i_1},x_{i_2},x_{i_3}) \longrightarrow S_\cP/(I_{\cP},x_a,x_d,x_c,x_b) \longrightarrow0 $$ \end{footnotesize} \noindent From the exact sequences above, we will obtain the Hilbert-Poincar\'e series of $S_\cP/I_\cP$ by a repeated application of Proposition~\ref{Proposizione: la prima che serve per calcolare HP} and considering in each case a suitable permutation $(i_1,i_2,i_3,i_4)$ of the set $\{a,b,c,d\}$ in order to compute the Hilbert-Poincar\'e series of the rings in the intermediate steps. To reach our aim we provide several preliminary lemmas, distinguishing the different possibilities for the set $V(\cL)\cap V(\mathcal{\cC})$. \begin{lemma}\label{Lemma: column+prodotti tensoriali - primo caso} Let $\cP$ be an $(\cL,\cC)$-polyomino such that $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,b_{r-1},b_r\}$. Suppose that $I_{\cP}$ is prime. Then: \begin{enumerate}[(1)] \item $S_\cP/((I_\cP,x_a):x_d)\cong K[\cP_1]\otimes_K K[x_{b_1},\dots,x_{b_{r-2}}]$; \item $S_\cP/((I_\cP,x_a,x_d):x_c)\cong K[\cP_2]\otimes_K K[x_{a_1},\dots,x_{a_{s-2}}]$; \item $S_\cP/((I_\cP,x_a,x_d,x_c):x_b)\cong K[\cP_3]\otimes_K K[x_b,x_{a_1},\dots,x_{a_{s-2}},x_{b_1},\dots,x_{b_{r-2}}]$. 
\end{enumerate} \end{lemma} \begin{proof} (1) Firstly observe that $I_{\cP}$ can be written in the following way: \begin{align*} &I_{\cP}=I_{\cP_1}+(x_ax_b-x_cx_d)+\sum_{i=1}^{s}(x_ax_{a_i}-x_cx_{d_i})+\sum_{j=1}^{r}(x_ax_{b_j}-x_dx_{c_j})+\\ &\sum_{j=1}^{r}(x_cx_{b_j}-x_bx_{c_j})+\sum_{k,l\in[r]\atop k<l}(x_{c_k}x_{b_l}-x_{c_l}x_{b_k})+\\ &(\{x_{c_{r-1}}x_{v}-x_{c_r}x_{u}\vert [c_{r-1},v]\in\cI(\cP), u=v-(1,0)\}), \end{align*} Now we describe the ideal $(I_{\cP},x_a)$ in $S_\cP$: \begin{align*} &(I_{\cP},x_a)=(I_{\cP_1},x_a)+(x_cx_d)+\sum_{i=1}^{s}(x_cx_{d_i})+\sum_{j=1}^{r}(x_dx_{c_j})+\\ &\sum_{j=1}^{r}(x_cx_{b_j}-x_bx_{c_j})+\sum_{k,l\in[r]\atop k<l}(x_{c_k}x_{b_l}-x_{c_l}x_{b_k})+\\ &(\{x_{c_{r-1}}x_{v}-x_{c_r}x_{u}\vert [c_{r-1},v]\in\cI(\cP), u=v-(1,0)\}). \end{align*} \noindent We prove that $(I_{\cP},x_a):x_d=(I_{\cP_1},x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$. It follows trivially from the previous equality that $(I_{\cP},x_a):x_d\supseteq(I_{\cP_1},x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$. Let $f\in S_\cP$ such that $x_df\in (I_\cP,x_a)$. Then \begin{align*} &x_df=g+\alpha x_a+\beta x_cx_d+\sum_{i=1}^{s}\gamma_i x_cx_{d_i}+\sum_{j=1}^{r}\delta_j x_dx_{c_j}+\sum_{i=j}^{r}\omega_{j}(x_cx_{b_j}-x_bx_{c_j})+\\ &+\sum_{k,l\in[r]\atop k<l}\nu_{kl}(x_{c_k}x_{b_l}-x_{c_l}x_{b_k})+\sum_{ [c_{r-1},v]\in\cI(\cP)\atop u=v-(1,0)} \lambda_{v}(x_{c_{r-1}}x_{v}-x_{c_r}x_{u}), \end{align*} where $g\in \cI_{\cP_1}$, $\alpha,\beta,\gamma_i,\delta_j,\omega_{j},\nu_{k,l}\lambda_{v}\in S_\cP$ for all $i,k\in[s],j,l\in[r]$ and for all $v\in V(\cP)$ such that $[c_{r-1},v]\in\cI(\cP)$. As a consequence: \begin{align*} &x_d\biggl(f-\beta x_c-\sum_{j=1}^{r}\delta_j x_{c_j}\biggr)=g+\alpha x_a+\sum_{i=1}^{s}(\gamma_i x_{d_i})x_c+\sum_{i=j}^{r}(\omega_{j}x_{b_j})x_c-\sum_{i=j}^{r}(\omega_jx_b)x_{c_j}+\\ &+\sum_{k,l\in[r]\atop k<l}(\nu_{kl}x_{b_l})x_{c_k}-\sum_{k,l\in[r]\atop k<l}(\nu_{kl}x_{b_k})x_{c_l}+\biggl(\sum_{ [c_{r-1},v]\in\cI(\cP)\atop u=v-(1,0)} \lambda_{v}x_{v}\biggr)x_{c_{r-1}}+\\ &-\biggl(\sum_{ [c_{r-1},v]\in\cI(\cP)\atop u=v-(1,0)}\lambda_{v}x_{u}\biggr)x_{c_r}. \end{align*} Hence we obtain that $x_d\bigl(f-\beta x_c-\sum_{j=1}^{r}\delta_jx_{c_j}\bigr)\in I_{\cP_1}+(x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$. Since $K[\cP_1]$ is a domain (Proposition \ref{Proposizione: K[P_i] è dominio, sapendo che I è primo}) and $a,c,c_i\notin V(\cP_1)$ for all $i\in[r]$ then $I_{\cP_1}+(x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$ is a prime ideal in $S_\cP$. Since $x_d\notin I_{\cP_1}$, we have $f-\beta x_c-\sum_{j=1}^{r}\delta_jx_{c_j}\in I_{\cP_1}+(x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$, so $f\in I_{\cP_1}+(x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$, that is $(I_{\cP},x_a):x_d\subseteq(I_{\cP_1},x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$. In conclusion we have $(I_{\cP},x_a):x_d=(I_{\cP_1},x_a)+(x_c)+\sum_{i=1}^{r}(x_{c_i})$ and as a consequence $S_\cP/((I_\cP,x_a):x_d)=S_\cP/(I_{\cP_1}+(x_a,x_c,x_{c_1},\dots,x_{c_r}))\cong S_{\cP_1}/I_{\cP_1}\otimes_K K[x_v\mid v\in V(\cP\setminus \cP_1)]/(x_a,x_c,x_{c_1},\dots,x_{c_r}) = K[\cP_1]\otimes_K K[x_{b_1},\dots,x_{b_{r-2}}]$. The claim (1) is proved.\\ (2) By similar computations as in the first part of (1) we can prove that $(I_\cP,x_a,x_d):x_c=(I_{\cP_2},x_a,x_d)+\sum_{i=1}^s(x_{d_i})$, so claim (2) follows by using similar arguments as in the last part in (1).\\ (3) The argument is the same, considering that $(I_\cP,x_a,x_d,x_c):x_b=(I_{\cP_3},x_a,x_d,x_c)+\sum_{i=1}^s(x_{d_i})+\sum_{j=1}^r(x_{c_i})$ can be proved using computations similar to the previous cases. 
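We record, for later use, the number of free variables appearing in the polynomial factors of claims (1), (2) and (3): they are $r-2$, $s-2$ and $1+(s-2)+(r-2)=s+r-3$ respectively. By Proposition~\ref{Hilber-tensoriale} and the equality $\rHP_{K[x_1,\dots,x_k]}(t)=\frac{1}{(1-t)^k}$, these counts account for the denominators $(1-t)^{r-2}$, $(1-t)^{s-2}$ and $(1-t)^{s+r-3}$ appearing in Theorem~\ref{Teorema: serie di Hilbert - primo caso} below.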
\end{proof} \noindent In the previous result we examine, for a $(\cL,\cC)$-polyomino, the case $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,b_{r-1},b_r\}$. In order to examine the other cases we need other preliminary results involving the polyominoes $\cP'_1$ and $\cP'_2$. \begin{lemma} Let $\cP$ be an $(\cL,\cC)$-polyomino. Then \begin{enumerate} \item[(1)] $S_{\cP_2'} /(I_{\cP'_2},x_b,x_c)\cong K[\cP_2]$. Moreover, if $I_{\cP_2}$ is a prime ideal then $(I_{\cP'_2},x_b,x_c)$ is a prime ideal of $S_\cP$. \item[(2)] $S_{\cP'_1} /(I_{\cP'_1},x_b,x_d)\cong K[\cP_1]$. Moreover, if $I_{\cP_1}$ is a prime ideal then $(I_{\cP'_1},x_b,x_d)$ is a prime ideal of $S_\cP$. \end{enumerate} \label{P'_2} \end{lemma} \begin{proof} (1) Let $\mathcal{R}$ be the polyomino obtained from the cells of $\cP_2$ and renaming the vertices $b$ and $c$ respectively by $d$ and $a$, in particular $S_\mathcal{R}=K[x_v \mid v\in V(\cP'_2)\setminus \{b,c\}]$. Observe that $$I_{\cP'_2}=I_\mathcal{R}+ (x_a x_b-x_c x_d)+\sum_{i=1}^r(x_c x_{b_i}-x_b x_{c_i})$$ So $(I_{\cP'_2},x_b,x_c)=(I_\mathcal{R},x_b,x_c)$ and in particular $S_{\cP'_2}/(I_{\cP'_2},x_b,x_c)=S_{\cP'_2}/(I_\mathcal{R},x_b,x_c)\cong S_\mathcal{R}/I_\mathcal{R} = K[\mathcal{R}]\cong K[\cP_2]$, since $x_b,x_c$ do not belong to the support of any element of $I_\mathcal{R}$ and observing that, apart from the name of the vertices involved, $\mathcal{R}=\cP_2$. Furthermore $S_\cP/(I_{\cP'_2},x_b,x_c)\cong S_{\cP'_2}/(I_{\cP'_2},x_b,x_c)\otimes_K K[x_v\mid v\in V(\cP\setminus \cP'_2)]\cong K[\cP_2]\otimes_K K[x_v\mid v\in V(\cP\setminus \cP'_2)]$, so also the last claim follows. \\ (2) The result can be obtained arguing as in the proof of (1). Indeed the arrangements involved in these situations can be considered the same up to one reflection and one rotation. \end{proof} \begin{lemma}\label{Lemma: column+prodotti tensoriali - secondo caso} Let $\cP$ be an $(\cL,\cC)$-polyomino such that $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,b_{r-1},b_r\}$. Suppose that $I_{\cP}$ is prime. Then: \begin{enumerate}[(1)] \item $S_\cP/((I_\cP,x_c):x_b)\cong K[\cP_1]\otimes_K K[x_{b_1},\dots,x_{b_{r-2}}]$; \item $S_\cP/((I_\cP,x_b,x_c):x_a)\cong K[\cP_2]\otimes_K K[x_{d_1},\dots,x_{d_{s-2}}]$; \item $S_\cP/((I_\cP,x_a,x_b,x_c):x_d)\cong K[\cP_3]\otimes_K K[x_d,x_{b_1},\dots,x_{b_{r-2}},x_{d_1},\dots,x_{d_{s-2}}]$. \end{enumerate} \end{lemma} \begin{proof} Arguing as in Lemma~\ref{Lemma: column+prodotti tensoriali - primo caso}, we obtain the equalities of the following ideals: \begin{enumerate}[(1)] \item $(I_\cP,x_c):x_b=(I_{\cP_1},x_c)+(x_a)+\sum_{i=1}^r(x_{c_i})$ \item $(I_\cP,x_b,x_c):x_a=(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^s(x_{a_i})$. \item $(I_\cP,x_a,x_b,x_c):x_d=(I_{\cP_3},x_a,x_b,x_c)+\sum_{i=1}^s(x_{a_i})+\sum_{i=1}^r(x_{c_i})$. \end{enumerate} In particular, the second equality above holds since $(I_{\cP'_2},x_b,x_c)$ is a prime ideal by Lemma~\ref{P'_2}. By the same lemma we have also $S_{\cP_2'} /(I_{\cP'_2},x_b,x_c)\cong K[\cP_2]$, from which claim (2) derives. 
For the sake of completeness we provide its proof.\\ Observe that $I_{\cP}$ can be written in the following way: \begin{align*} &I_{\cP}=I_{\cP'_2}+\sum_{i=1}^{s}(x_ax_{a_i}-x_cx_{d_i})+\sum_{i=1}^{s}(x_dx_{a_i}-x_bx_{d_i})+\sum_{k,l\in[s]\atop k<l}(x_{d_k}x_{a_l}-x_{d_l}x_{a_k})+\\ &+(\{x_{a_{s}}x_{v}-x_{a_{s-1}}x_{u}\vert [v, a_s]\in\cI(\cP), u=v+(0,1)\}), \end{align*} It follows: \begin{align*} &(I_{\cP},x_b,x_c)=(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_ax_{a_i})+\sum_{i=1}^{s}(x_dx_{a_i})+\sum_{k,l\in[s]\atop k<l}(x_{d_k}x_{a_l}-x_{d_l}x_{a_k})\\ &+(\{x_{a_{s}}x_{v}-x_{a_{s-1}}x_{u}\vert [v, a_s]\in\cI(\cP), u=v+(0,1)\}), \end{align*} We prove that $(I_{\cP},x_b,x_c):x_a=(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$. From the previous equality it follows that $(I_{\cP},x_b,x_c):x_a\supseteq (I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$. Let $f\in S$ such that $x_af\in (I_\cP,x_b,x_c)$. Then \begin{align*} &x_af=g+\sum_{i=1}^{s}\gamma_i x_ax_{a_i}+\sum_{j=1}^{s}\delta_j x_dx_{a_j}+\sum_{k,l\in[s]\atop k<l}\nu_{kl}(x_{d_k}x_{a_l}-x_{d_l}x_{a_k})+\\ &+\sum_{ [v,a_s]\in\cI(\cP)\atop u=v+(0,1)} \lambda_{v}(x_{a_s} x_{v}-x_{a_{s-1}}x_{u}), \end{align*} where $g\in (\cI_{\cP'_2},x_b,x_c)$, $\gamma_i,\delta_j,\nu_{k,l}\lambda_{v}\in S_\cP$ for all $i,k,j,l\in[s]$ and for all $v\in V(\cP)$ such that $[v,a_s]\in\cI(\cP)$. As a consequence: \begin{align*} &x_a\biggl(f-\sum_{i=1}^{s}\gamma_i x_{a_i}\biggr)=g+\sum_{j=1}^{s}(\delta_j x_{d})x_{a_j}+\sum_{k,l\in[s]\atop k<l}(\nu_{kl}x_{d_k})x_{a_l}-\sum_{k,l\in[s]\atop k<l}(\nu_{kl}x_{d_l})x_{a_k}+\\ &+\biggl(\sum_{ [v,a_s]\in\cI(\cP)\atop u=v+(0,1)} \lambda_{v}x_{v}\biggr)x_{a_s}-\biggl(\sum_{ [v,a_s]\in\cI(\cP)\atop u=v+(0,1)}\lambda_{v}x_{u}\biggr)x_{a_{s-1}}. \end{align*} Hence we obtain that $x_a\bigl(f-\sum_{i=1}^{s}\gamma_i x_{a_i}\bigr)\in (I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$. Since $(I_{\cP'_2},x_b,x_c)$ is prime and $a_i\notin V(\cP'_2)$ for all $i\in[s]$ then $(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$ is a prime ideal in $S_\cP$. By being $x_a\notin I_{\cP'_2}$, we have $f-\sum_{i=1}^{s}\gamma_i x_{a_i}\in (I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$, so $f\in (I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$, that is $(I_{\cP},x_b,x_c):x_a\subseteq(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$. In conclusion we have $(I_{\cP},x_b,x_c):x_a=(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^{s}(x_{a_i})$ and as a consequence $S_\cP/((I_{\cP},x_b,x_c):x_a)=S_\cP/(I_{\cP'_2},x_b,x_c)+(x_{a_1},\dots,x_{a_s}))\cong S_{\cP_2}/(I_{\cP'_2},x_b,x_c)\otimes_K K[x_v\mid v\in V(\cP\setminus \cP_1)]/(x_{a_1},\dots,x_{a_s}) \cong K[\cP_2]\otimes_K K[x_{d_1},\dots,x_{d_{s-2}}].$ \end{proof} \noindent We omit to provide the analogous result for the case $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,c_{r-1},c_r\}$. In fact, we can reduce it to the case examined in the previous Lemma up to a rotation and a reflection. \begin{lemma}\label{Lemma: column+prodotti tensoriali - quarto caso} Let $\cP$ be an $(\cL,\cC)$-polyomino such that $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,c_{r-1},c_r\}$. Suppose that $I_{\cP}$ is prime. Then: \begin{enumerate}[(1)] \item $S_\cP/((I_\cP,x_b):x_c)\cong K[\cP_1]\otimes_K K[x_{c_1},\dots,x_{c_{r-2}}]$; \item $S_\cP/((I_\cP,x_b,x_c):x_d)\cong K[\cP_2]\otimes_K K[x_{d_1},\dots,x_{d_{s-2}}]$; \item $S_\cP/((I_\cP,x_b,x_c,x_d):x_a)\cong K[\cP_3]\otimes_K K[x_d,x_{c_1},\dots,x_{c_{r-2}},x_{d_1},\dots,x_{d_{s-2}}]$. 
\end{enumerate} \end{lemma} \begin{proof} The claims follow reasoning as in Lemma~\ref{Lemma: column+prodotti tensoriali - primo caso}, obtaining the equalities of the following ideals: \begin{enumerate}[(1)] \item $(I_\cP,x_b):x_c=(I_{\cP'_1},x_b,x_d)+\sum_{i=1}^r(x_{b_i})$ \item $(I_\cP,x_b,x_c):x_d=(I_{\cP'_2},x_b,x_c)+\sum_{i=1}^s(x_{a_i})$. \item $(I_\cP,x_b,x_c,x_d):x_a=(I_{\cP_3},x_b,x_c,x_d)+\sum_{i=1}^s(x_{a_i})+\sum_{i=1}^r(x_{b_i})$. \end{enumerate} In particular, the first equality follows from the primality of $(I_{\cP'_1},x_b,x_d)$, the second equality follows from the primality of $(I_{\cP'_2},x_b,x_c)$, both by Lemma~\ref{P'_2}. \end{proof} \begin{theorem}\label{Teorema: serie di Hilbert - primo caso} Let $\cP$ be an $(\cL,\cC)$-polyomino. Suppose that $I_{\cP}$ is prime. Then: $$\mathrm{HP}_{K[\cP]}(t)=\frac{1}{1-t}\rHP_{K[\cP_4]}(t)+\frac{t}{1-t}\bigg[\frac{\rHP_{K[\cP_1]}(t)}{(1-t)^{r-2}}+\frac{\rHP_{K[\cP_2]}(t)}{(1-t)^{s-2}}+\frac{\rHP_{K[\cP_3]}(t)}{(1-t)^{s+r-3}} \bigg]$$ \end{theorem} \begin{proof} Assume that $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,b_{r-1},b_r\}$. Consider the following four short exact sequences: \begin{footnotesize} $$0 \longrightarrow S_\cP/(I_{\cP}:x_a) \longrightarrow S_\cP/I_{\cP} \longrightarrow S_\cP/(I_{\cP},x_a) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_a):x_d) \longrightarrow S_\cP/(I_{\cP},x_a) \longrightarrow S_\cP/(I_{\cP},x_a,x_d) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_a,x_d):x_c) \longrightarrow S_\cP/(I_{\cP},x_a,x_d) \longrightarrow S_\cP/(I_{\cP},x_a,x_d,x_c) \longrightarrow0 $$ $$0 \longrightarrow S_\cP/((I_{\cP},x_a,x_d,x_c):x_b) \longrightarrow S_\cP/(I_{\cP},x_a,x_d,x_c) \longrightarrow S_\cP/(I_{\cP},x_a,x_d,x_c,x_b) \longrightarrow0 $$ \end{footnotesize} Since $I_{\cP}:x_a=I_{\cP}$, because $I_{\cP}$ is prime, the claim easily follows by repeated applications of Proposition \ref{Proposizione: la prima che serve per calcolare HP} and from Proposition~\ref{Hilber-tensoriale}, Lemma~\ref{isomorfismoP4} and Lemma \ref{Lemma: column+prodotti tensoriali - primo caso}.\\ If $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,b_{r-1},b_r\}$ the formula is obtained referring to Lemma~\ref{Lemma: column+prodotti tensoriali - secondo caso} by a suitable permutation of the set $\{a,b,c,d\}$. For symmetry, we obtain the claim also for the case $V(\cL)\cap V(\mathcal{\cC})=\{a_{s-1},a_s,c_{r-1},c_r\}$.\\ Finally, if $V(\cL)\cap V(\mathcal{\cC})=\{d_{s-1},d_s,c_{r-1},c_r\}$ we use again the same argument together with Lemma~\ref{Lemma: column+prodotti tensoriali - quarto caso}. \end{proof} \begin{coro} Let $\cP$ be an $(\cL,\cC)$-polyomino, suppose that $I_{\cP}$ is a prime ideal and $\cC$ is a simple polyomino. Then: $$\mathrm{HP}_{K[\cP]}(t)=\frac{h_{K[\cP_4]}(t)+t\big[h_{K[\cP_1]}(t)+h_{K[\cP_2]}(t)+(1-t)h_{K[\cP_3]}(t)\big]}{(1-t)^{\vert V(\cP)\vert -\mathrm{rank}\,\cP}}$$ In particular $K[\cP]$ has Krull dimension $\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \label{HilbertCsimple} \end{coro} \begin{proof} Since $\cC$ is a simple polyomino, then $\cP_1$, $\cP_2$, $\cP_3$ and $\cP_4$ are simple polyominoes, so we have that $K[\cP_j]$ is a normal Cohen-Macaulay domain of dimension $\vert V(\cP_j)\vert -\mathrm{rank}\,\cP_j$ for $j\in\{1,2,3,4\}$ from \cite[Corollary 3.3]{def balanced} and \cite[Theorem 2.1]{Simple equivalent balanced}. We put $\vert V(\cP)\vert =n$ and $\mathrm{rank}\,\cP=p$. 
Observe that \begin{itemize} \item $\vert V(\cP_1)\vert =n-2r$ and $\mathrm{rank}\,\cP_1=p-r-1$, so $\vert V(\cP_1)\vert -\mathrm{rank}\,\cP_1=n-p-r+1$; \item $\vert V(\cP_2)\vert =n-2s$ and $\mathrm{rank}\,\cP_2=p-s-1$, so $\vert V(\cP_2)\vert -\mathrm{rank}\,\cP_2=n-p-s+1$; \item $\vert V(\cP_3)\vert =n-2s-2r$ and $\mathrm{rank}\,\cP_3=p-r-s-1$, so $\vert V(\cP_3)\vert -\mathrm{rank}\,\cP_3=n-p-s-r+1$; \item $\vert V(\cP_4)\vert =n-4$ and $\mathrm{rank}\,\cP_4=p-3$, so $\vert V(\cP_4)\vert -\mathrm{rank}\,\cP_4=n-p-1$. \end{itemize} Then $n-p=\vert V(\cP_4)\vert -\mathrm{rank}\,\cP_4+1=\vert V(\cP_1)\vert -\mathrm{rank}\,\cP_1+(r-2)+1=\vert V(\cP_2)\vert -\mathrm{rank}\,\cP_2+(s-2)+1$ and $\vert V(\cP_3)\vert -\mathrm{rank}\,\cP_3+(s+r-3)+1=n-p-1$. Therefore the formula for $\mathrm{HP}_{K[\cP]}(t)$ in the statement follows from Theorem \ref{Teorema: serie di Hilbert - primo caso} after an easy computation. Finally, let $h(t)$ be the polynomial in the numerator of the formula. By \cite[Corollary 4.1.10]{Bruns_Herzog}, observe that $h(1)=h_{K[\cP_4]}(1)+h_{K[\cP_1]}(1)+h_{K[\cP_2]}(1)>0$, so $(1-t)$ does not divide $h(t)$, hence $K[\cP]$ has Krull dimension $\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \end{proof} \section{Hilbert-Poincar\'e series of prime closed path polyominoes having no $L$-configuration} \label{Section: H-P series of prime closed path polyominoes having no L-configuration} \noindent In this section we suppose that $\cP$ is a prime closed path polyomino having no $L$-configurations, so $\cP$ contains a ladder of at least three steps (\cite[Section 6]{Cisto_Navarra_closed_path}). Let $\mathcal{B}_1$, $\mathcal{B}_2$ and $\mathcal{B}_3$ be three maximal horizontal blocks of a ladder of $n$ steps in $\cP$, $n\geq 3$. Without loss of generality, we can assume that there does not exist a maximal block $\mathcal{K}\neq\mathcal{B}_2,\mathcal{B}_3$ of $\cP$ such that $\{\mathcal{K},\mathcal{B}_1,\mathcal{B}_2\}$ is a ladder of three steps. Moreover, applying suitable reflections or rotations of $\cP$, we can suppose that the orientation of the ladder is right/up, as in Figure~\ref{ladder_vuota}. \begin{figure}[h!] \centering \includegraphics[scale=0.75]{Ladder} \caption{} \label{ladder_vuota} \end{figure} \noindent Our aim is to study the Hilbert-Poincar\'e series of the coordinate ring of $\cP$. We split our arguments in two cases. In the first case we suppose that at least one block between $\mathcal{B}_1$ or $\mathcal{B}_2$ contains exactly two cells, in the second one assume that $\mathcal{B}_1$ and $\mathcal{B}_2$ contain at least three cells. \subsection{At least one block between $\mathcal{B}_1$ or $\mathcal{B}_2$ contains exactly two cells.}\label{ladder2} We start with some preliminary definitions that we adopt throughout this subsection. Let $\mathcal{W}$ be a collection of cells consisting of an horizontal block $[A_s,A_1]$ of rank at least two, containing the cells $A_s,A_{s-1},\ldots,A_1$, a vertical block $[B_1,B_r]$ of rank at least two, containing the cells $B_1,B_2,\ldots,B_r$, and a cell $A$ not belonging to $[A_s,A_1]\cup[B_1,B_r]$, such that $V([A_s,A_1])\cap V([B_1,B_r])=\{b\}$, where $b$ is the lower right corner of $A$. Moreover we denote the left upper corner of $A$ by $a$, the lower right corner of $B_1$ by $d$, the lower right corner of $A_1$ by $c$. 
Moreover, let $b_i$ and $c_i$ be respectively the left upper and lower corners of $A_i$ for $i\in[s]$, let $a_j$ and $d_j$ be respectively the left and the right upper corners of $B_j$ for $j\in[r]$ (Figure~\ref{Figura:pentomino}). \begin{figure}[h!] \centering \includegraphics[scale=0.9]{W_pentomino} \caption{A collection of cells $\mathcal{W}$} \label{Figura:pentomino} \end{figure} \noindent Since $\cP$ has no $L$-configurations, it is trivial to check that $\cP$ contains a collection of cells $\mathcal{W}$ such that $[A_s,A_1]$ and $[B_1,B_r]$ are maximal blocks of $\cP$. In particular, if $\cM$ is the collection of cells such that $\cP=\cW\sqcup \cM$, then we call $\cW$: \begin{itemize} \item \textit{1-Configuration}, if $V(\cW)\cap V(\mathcal{\cM})=\{c_{s-1},c_s,d_{r-1},d_r\}$; \item \textit{2-Configuration}, if $V(\cW)\cap V(\mathcal{\cM})=\{b_{s-1},b_s,d_{r-1},d_r\}$. \end{itemize} \noindent Observe that exactly one of the following cases can occur: \begin{enumerate}[(1)] \item $\vert \mathcal{B}_1\vert =\vert \mathcal{B}_2\vert =2$. In such a case $s=2$ and $r=2$, $\mathcal{B}_1=[A_2,A_1]$ and $\mathcal{B}_2=[A,B_1]$, so we obtain a 1-Configuration. \item $\vert \mathcal{B}_1\vert > 2$ and $\vert \mathcal{B}_2\vert =2$. In such a case $s> 2$ and $r=2$, $\mathcal{B}_1=[A_s,A_1]$ and $\mathcal{B}_2=[A,B_1]$, so we have a 1-Configuration or a 2-Configuration depending on $\mathcal{M}\cap \{A_s\}$. \item $\vert \mathcal{B}_1\vert = 2$ and $\vert \mathcal{B}_2\vert > 2$. In such a case, after a suitable rotation and reflection, consider a new ladder where $\mathcal{B}_1=[A_1,A]$ and $\mathcal{B}_2=[B_1,B_r]$, $s\geq 2$ and $r> 2$. Let $C$ be a cell of $\cP$ such that $I:=[C,A_1]$ is a maximal block of $\cP$. Therefore we obtain a 1-Configuration or a 2-Configuration depending on the position of the cell of $\cP\backslash I$ adjacent to $C$. \end{enumerate} The following related polyominoes will be essential in this subsection: \begin{itemize} \item $\mathcal{Q}=\cP \setminus\{A\}$; \item $\mathcal{Q}_1=\cP \setminus \{A,A_1,B_1\}$; \item $\cR_1=\cQ\setminus \{B_1\}$; \item $\cR_2=\cQ\setminus \{B_1,\ldots,B_r\}$; \item $\cF_1=\cQ\setminus \{A_1,\ldots,A_s\}$; \item $\cF_2=\cQ\setminus \{A_1,B_1,\ldots,B_r\}$. \end{itemize} \noindent Let $<^1$ be the total order on $V(\cP)$ defined as $u<^1 v$ if and only if, for $u = (i,j)$ and $v = (k,l)$, $i < k$, or $i = k$ and $j < l$. Let $Y\subset V(\cP)$ and let $<^Y_{\mathrm{lex}}$ be the lexicographic order in $S_\cP$ induced by the following order on the variables of $S_\cP$: \[ \mbox{for}\ u,v \in V(\cP)\qquad x_u<^Y_{\mathrm{lex}} x_v \Leftrightarrow \left\{ \begin{array}{l} u\notin Y\ \mbox{and}\ v\in Y \\ u,v\notin Y\ \mbox{and}\ u<^1 v \\ u,v\in Y\ \mbox{and}\ u<^1 v \end{array} \right. \] Considering Figure~\ref{Figura:pentomino}, from \cite[Theorem 4.9]{Cisto_Navarra_CM_closed_path} we know that there exists a set $L\subset V(\cP)$, with $a,d \in L$ and $b,c,a_1,b_1,c_1,d_1 \notin L$, such that the set of generators of $I_{\cP}$ forms the reduced Gr\"obner basis of $I_\cP$ with respect to $<^L_{\mathrm{lex}}$. Furthermore, in the case of a 1-Configuration also $d_2,\ldots, d_r\notin L$. For convenience we denote such a monomial order by $\prec_\cP$. Moreover, let $\prec_{\cQ}$, $\prec_{\cQ_1}$, $\prec_{\cR_1}$, $\prec_{\cR_2}$ be the monomial orders induced from $\prec_\cP$ respectively on the rings $S_\cQ$, $S_{\cQ_1}$, $S_{\cR_1}$, $S_{\cR_2}$. For the reader's convenience, the comparison of variables defining $<^Y_{\mathrm{lex}}$ is illustrated in the sketch below; the proposition that follows it will be useful.
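\begin{remark}\rm The following Python sketch is only illustrative and is not part of the theory developed here; it assumes that vertices are encoded as integer pairs $(i,j)$ and that $Y$ is given as a set of such pairs, and the function names are ours.
\begin{verbatim}
def less_1(u, v):
    # u <^1 v : compare the first coordinates, then the second ones
    return u[0] < v[0] or (u[0] == v[0] and u[1] < v[1])

def less_Y_lex(u, v, Y):
    # x_u <^Y_lex x_v : variables indexed by Y are the largest,
    # ties inside each group are broken by <^1
    if (u in Y) != (v in Y):
        return v in Y
    return less_1(u, v)
\end{verbatim}
The induced lexicographic order on the monomials of $S_\cP$ is then the usual one determined by this order on the variables.
\end{remark}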
\begin{proposition} Let $\cP$ be a closed path polyomino containing a collection of cells of type $\cW$. Then the set of the inner 2-minors of $\cQ$ is the reduced Gr\"obner basis of $I_\cQ$ with respect to the monomial order $\prec_{\cQ}$. The same holds for the polyominoes $\cQ_1$, $\cR_1$ and $\cR_2$ considering respectively the monomial orders $\prec_{\cQ_1}$, $\prec_{\cR_1}$ and $\prec_{\cR_2}$. \label{orders} \end{proposition} \begin{proof} Let $f,g$ be two generators of $I_\cQ\subset I_\cP$. Since every $S$-polynomial $S(f,g)$ reduces to zero in $I_\cP$, the conditions of the lemmas in \cite[Section 3]{Cisto_Navarra_CM_closed_path} are satisfied for the collection of cells $\cP$. Apart from the occurrences $f=x_b x_{c_1}-x_c x_{b_1}$ and $g=x_b x_{d_1} -x_d x_{a_1}$, in which the leading terms of $f$ and $g$ have greatest common divisor equal to $1$, the other conditions of the mentioned lemmas do not involve the cell $A$. So the same conditions hold also for the collection of cells $\cQ$, hence $S(f,g)$ reduces to zero also in $I_\cQ$. By the same argument also the second claim in the statement holds. \end{proof} \begin{remark} \rm Observe that $\cQ_1$, $\cR_1$, $\cR_2$, $\cF_1$ and $\cF_2$ are simple polyominoes, so their related coordinate rings are normal Cohen-Macaulay domains whose Krull dimension is given by the difference between the number of vertices and the number of cells of the fixed polyomino (see \cite[Corollary 3.3]{def balanced} and \cite[Theorem 2.1]{Simple equivalent balanced}). The polyomino $\cQ$ is not simple but it is a weakly closed path and it is easy to see that $\cQ$ has a weak ladder in the cases we are studying. Therefore $I_\cQ$ is a prime ideal (equivalently $K[\cQ]$ is a domain) by \cite[Proposition 4.5]{Cisto_Navarra_weakly}. Moreover, from Proposition~\ref{orders} and arguing as in the proof of \cite[Theorem 4.10]{Cisto_Navarra_CM_closed_path} we also obtain that $K[\cQ]$ is a normal Cohen-Macaulay domain. \label{RemarkQ} \end{remark} \noindent We are going to use all these introductory facts in the proofs of the next results. With abuse of notation we denote by $\mathrm{in}(I_\cP)$, $\mathrm{in}(I_\cQ)$, $\mathrm{in}(I_{\cQ_1})$, $\mathrm{in}(I_{\cR_1})$, $\mathrm{in}(I_{\cR_2})$, respectively, the initial ideals of $I_\cP$ with respect to $\prec_\cP$, of $I_\cQ$ with respect to $\prec_\cQ$, of $I_{\cQ_1}$ with respect to $\prec_{\cQ_1}$, of $I_{\cR_1}$ with respect to $\prec_{\cR_1}$ and of $I_{\cR_2}$ with respect to $\prec_{\cR_2}$. \begin{proposition} Let $\cP$ be a closed path polyomino containing a collection of cells of type $\cW$. Then $$\mathrm{HP}_{K[\cP]}(t)=\mathrm{HP}_{K[\cQ]}(t)+\frac{t}{1-t}\mathrm{HP}_{K[\cQ_1]}(t)$$ \label{HilbertQ} \end{proposition} \begin{proof} Observe that: \begin{align*} &I_\cP= I_{\cQ}+ (x_{b_1}x_{a_1}-x_a x_b)+(x_{b_1} x_{d_1}-x_a x_d)+(x_{c_1} x_{a_1}-x_a x_c),\\ &I_\cP= I_{\cQ_1}+ (x_{b_1}x_{a_1}-x_a x_b)+(x_{b_1} x_{d_1}-x_a x_d)+(x_{c_1} x_{a_1}-x_a x_c)+\\ &+\sum_{i=1}^r (x_b x_{d_i}-x_d x_{a_i})+\sum_{i=1}^s (x_b x_{c_i}-x_c x_{b_i}). \end{align*} From Proposition~\ref{orders} we obtain: \begin{align*} & \mathrm{in}(I_\cP)=\mathrm{in}(I_\cQ)+(x_a x_b)+ (x_a x_d) + (x_a x_c),\\ & \mathrm{in}(I_\cP)=\mathrm{in}(I_{\cQ_1})+(x_a x_b)+ (x_a x_d) + (x_a x_c) + (\{\max_{\prec_\cP}\{x_b x_{d_i},x_d x_{a_i}\}:i\in[r]\})+\\ &+ (\{\max_{\prec_\cP} \{x_b x_{c_i},x_c x_{b_i}\}:i\in [s]\}).
\end{align*} From the above equalities it is not difficult to see that: \begin{itemize} \item $(\mathrm{in}(I_\cP),x_a)=(\mathrm{in}(I_\cQ),x_a)$, in particular $S_\cP/(\mathrm{in}(I_\cP),x_a)= S_\cP/(\mathrm{in}(I_\cQ),x_a)\cong S_\cQ/\mathrm{in}(I_\cQ)$. \item $\mathrm{in}(I_\cP):x_a=(\mathrm{in}(I_{\cQ_1}),x_b,x_c,x_d)$ (see for instance \cite[Proposition 1.2.2]{monomial_ideals}), in particular $S_\cP/(\mathrm{in}(I_\cP):x_a)=S_\cP/(\mathrm{in}(I_{\cQ_1}),x_b,x_c,x_d)\cong S_{\cQ_1}/\mathrm{in}(I_{\cQ_1})\otimes_K K[x_a]$. \end{itemize} Consider the following exact sequence: $$0 \longrightarrow S_\cP/(\mathrm{in}(I_\cP):x_a) \longrightarrow S_\cP/\mathrm{in}(I_\cP) \longrightarrow S_\cP/(\mathrm{in}(I_\cP),x_a) \longrightarrow0 $$ \noindent Since for every graded ideal $I$ of a standard graded $K$-algebra $S$ and for every monomial order $<$ on $S$ it is verified that $S/I$ and $S/\mathrm{in}_<(I)$ have the same Hilbert function (see \cite[Corollary 6.1.5]{monomial_ideals}), then from the above computations and from Propositions~\ref{Proposizione: la prima che serve per calcolare HP} and \ref{Hilber-tensoriale} we obtain $\mathrm{HP}_{K[\cP]}(t)=\mathrm{HP}_{K[\cQ]}(t)+\frac{t}{1-t}\mathrm{HP}_{K[\cQ_1]}(t)$. \end{proof} \noindent We observed that $\cQ$ is not a simple polyomino. Our aim is to provide a formula for the Hilbert-Poincar\'e series of $K[\cP]$ involving the Hilbert-Poincar\'e series related to the coordinate rings of simple polyominoes. By the previous result, since $\cQ_1$ is a simple polyomino, we have to study the Hilbert-Poincar\'e series of $K[\cQ]$. We examine 1-Configuration and 2-Configuration separately. \begin{theorem} Let $\cP$ be a closed path polyomino containing a collection of cells of type $\cW$ with the occurrence of 1-Configuration. Then $$\mathrm{HP}_{K[\cP]}(t)=\frac{h_{K[\cR_1]}(t)+t\big[h_{K[\cR_2]}(t)+h_{K[\cQ_1]}(t)\big]}{(1-t)^{\vert V(\cP)\vert -\mathrm{rank}\,\cP}}$$ In particular, the Krull dimension of $K[\cP]$ is $\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \label{Hilbert-1conf} \end{theorem} \begin{proof} Observe that: \begin{align*} & I_\cQ= I_{\cR_1}+\sum_{i=1}^r (x_b x_{d_i}-x_d x_{a_i}),\\ & I_\cQ= I_{\cR_2}+\sum_{i=1}^r (x_b x_{d_i}-x_d x_{a_i})+ \sum_{k,l\in[r]\atop k<l}(x_{a_k}x_{d_l}-x_{a_l}x_{d_k})+ \\ &+ (\{x_{a_{r-1}}x_{v}-x_{a_r}x_{u}\vert [a_{r-1},v]\in\cI(\cQ), u=v-(0,1)\}). \end{align*} From Proposition~\ref{orders} we obtain: \begin{align*} & \mathrm{in}(I_\cQ)=\mathrm{in}(I_{\cR_1})+ \sum_{i=1}^r (x_d x_{a_i}), \\ & \mathrm{in}(I_\cQ)=\mathrm{in}(I_{\cR_2})+ \sum_{i=1}^r (x_d x_{a_i})+ (\{\max_{\prec_\cQ} \{x_{a_k}x_{d_l},x_{a_l}x_{d_k}\}\vert k,l\in[r], k<l\})+\\ & +(\{\max_{\prec_\cQ} \{x_{a_{r-1}}x_{v},x_{a_r}x_{u}\}\vert [a_{r-1},v]\in\cI(\cQ), u=v-(0,1)\}). \end{align*} From the above equalities is not difficult to see that: \begin{itemize} \item $(\mathrm{in}(I_\cQ),x_d)=(\mathrm{in}(I_{\cR_1}),x_d)$, in particular $S_\cQ/(\mathrm{in}(I_\cQ),x_d)= S_\cQ/(\mathrm{in}(I_{\cR_1}),x_d)\cong S_{\cR_1}/\mathrm{in}(I_{\cR_1})$. \item $\mathrm{in}(I_\cQ):x_d=\mathrm{in}(I_{\cR_2})+\sum_{i=1}^r(x_{a_i})$, in particular $S_\cQ/(\mathrm{in}(I_\cQ):x_d)\cong S_{\cR_2}/\mathrm{in}(I_{\cR_2})\otimes_K K[x_{d_1},\ldots,x_{d_{r-2}}]$. \end{itemize} So, arguing as in the proof of Proposition~\ref{HilbertQ}, we obtain $\mathrm{HP}_{K[\cQ]}(t)=\mathrm{HP}_{K[\cR_1]}(t)+t\cdot \frac{\mathrm{HP}_{K[\cR_2]}(t)}{(1-t)^{r-2}}$. 
Combining such an equality with the claim of Proposition~\ref{HilbertQ} we have: $$ \mathrm{HP}_{K[\cP]}(t)=\mathrm{HP}_{K[\cR_1]}(t)+t\cdot \left(\frac{\mathrm{HP}_{K[\cR_2]}(t)}{(1-t)^{r-2}}+\frac{\mathrm{HP}_{K[\cQ_1]}(t)}{1-t}\right)$$ Set $\vert V(\cP)\vert =n$ and $\mathrm{rank}\,\cP=p$. Observe that \begin{itemize} \item $\vert V(\cR_1)\vert =n-2$ and $\mathrm{rank}\,\cR_1=p-2$, so $\vert V(\cR_1)\vert -\mathrm{rank}\,\cR_1=n-p$ and this is the Krull dimension of $K[\cR_1]$ since $\cR_1$ is simple; \item $\vert V(\cR_2)\vert =n-2r+1$ and $\mathrm{rank}\,\cR_2=p-r-1$, so $\vert V(\cR_2)\vert -\mathrm{rank}\,\cR_2=n-p-r+2$ and this is the Krull dimension of $K[\cR_2]$; \item $\vert V(\cQ_1)\vert =n-4$ and $\mathrm{rank}\,\cQ_1=p-3$, so $\vert V(\cQ_1)\vert -\mathrm{rank}\,\cQ_1=n-p-1$ and this is the Krull dimension of $K[\cQ_1]$. \end{itemize} Therefore, by easy computations, we obtain the formula for $\mathrm{HP}_{K[\cP]}(t)$ in the statement. Finally, because of the Cohen-Macaulay property of $K[\cR_1]$, $K[\cR_2]$ and $K[\cQ_1]$ and by \cite[Corollary 4.1.10]{Bruns_Herzog}, we have that $h_{K[\cR_1]}(1)+h_{K[\cR_2]}(1)+h_{K[\cQ_1]}(1)>0$, so $\dim K[\cP]=\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \end{proof} \noindent Now we want to study the 2-Configuration. In such a case we do not need to use the initial ideals. \begin{theorem} Let $\cP$ be a closed path polyomino containing a collection of cells of type $\cW$ with the occurrence of a 2-Configuration. Then $$\mathrm{HP}_{K[\cP]}(t)=\frac{(1+t)h_{K[\cQ_1]}(t)+t\big[h_{K[\cF_1]}(t)+h_{K[\cF_2]}(t)\big]}{(1-t)^{\vert V(\cP)\vert -\mathrm{rank}\,\cP}}$$ In particular, the Krull dimension of $K[\cP]$ is $\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \label{Hilbert-2conf} \end{theorem} \begin{proof} Arguing as in Lemma~\ref{Lemma: column+prodotti tensoriali - primo caso} we obtain the following equalities: \begin{enumerate}[(1)] \item $I_\cQ : x_c=I_\cQ$; \item $(I_\cQ,x_c):x_b= I_{\cF_1}+(x_c)+\sum_{i=1}^s(x_{c_i})$; \item $(I_\cQ,x_b,x_c):x_d= I_{\cF_2}+(x_b,x_c)+\sum_{i=1}^r(x_{a_i})$; \item $(I_\cQ,x_b,x_c,x_d)=(I_{\cQ_1},x_b,x_c,x_d)$. \end{enumerate} Again, by the same arguments as in Lemma~\ref{Lemma: column+prodotti tensoriali - primo caso}, we also obtain the following: \begin{enumerate}[(1)] \item $S_\cQ/(I_\cQ : x_c)=K[\cQ]$; \item $S_\cQ/((I_\cQ,x_c):x_b)\cong K[\cF_1]\otimes_K K[x_{b_1},\ldots,x_{b_{s-2}}]$; \item $S_\cQ/((I_\cQ,x_b,x_c):x_d)\cong K[\cF_2]\otimes_K K[x_d,x_{d_1},\ldots,x_{d_{r-2}}]$; \item $S_\cQ/(I_\cQ,x_b,x_c,x_d)\cong K[\cQ_1]$. \end{enumerate} Now considering the suitable exact sequences and arguing as in Theorem~\ref{Teorema: serie di Hilbert - primo caso}, the following holds: $$\mathrm{HP}_{K[\cQ]}(t)=\frac{1}{1-t}\mathrm{HP}_{K[\cQ_1]}(t)+\frac{t}{1-t}\left[\frac{\mathrm{HP}_{K[\cF_1]}(t)}{(1-t)^{s-2}}+\frac{\mathrm{HP}_{K[\cF_2]}(t)}{(1-t)^{r-1}}\right]$$ So, from Proposition~\ref{HilbertQ} we have: $$\mathrm{HP}_{K[\cP]}(t)=\frac{1+t}{1-t}\mathrm{HP}_{K[\cQ_1]}(t)+\frac{t}{1-t}\left[\frac{\mathrm{HP}_{K[\cF_1]}(t)}{(1-t)^{s-2}}+\frac{\mathrm{HP}_{K[\cF_2]}(t)}{(1-t)^{r-1}}\right]$$ Finally, we obtain our claims by arguing as in the last part of the previous result (or, for instance, as in Corollary~\ref{HilbertCsimple}). \end{proof} \subsection{$\cB_1$ and $\cB_2$ contain at least three cells}\label{ladder3} Suppose that $\cB_1=[B_1,B]$ consists of the cells $B_1,\ldots, B_r,B$, $r\geq 2$, and $\cB_2=[A,A_s]$ of the cells $A,A_1,\ldots,A_s$, $s\geq2$.
We denote the upper and lower left corners of $A$ by $a,c$ respectively, the upper and lower right corners of $A$ by $b,d$ respectively, the left and right lower corners of $B$ by $f,g$ respectively, the upper and lower right corners of $A_i$ by $a_i,b_i$ respectively for $i\in[s]$, the lower and upper left corners of $B_i$ by $c_i,d_i$ respectively for $i\in [r]$. Considering our assumption on the ladder at the beginning of Section \ref{Section: H-P series of prime closed path polyominoes having no L-configuration} and the fact that $\cP$ does not have any $L$-configuration, we have that $c_1,c_2\notin V(\cP)\setminus V(\cB_1)$. The described arrangement is summarized in Figure \ref{Figura:Esempio ladder con B1 e B2 di lunghezza tre}. \\ For our purpose we need to introduce the following related polyominoes: \begin{itemize} \item $\cK_1=\cP\backslash [B_1,B]$; \item $\cK_2=\cP\backslash ([A,A_s]\cup \{B,B_r\})$; \item $\cK_3=\cP\backslash ([B_1,B]\cup \{A\})$; \item $\cK_4=\cP\backslash \{A,B,A_1,B_r\}$. \end{itemize} \begin{figure}[h!] \centering \includegraphics[scale=0.9]{Esempio_ladder_con_B1_B2_di_lunghezza_tre} \caption{} \label{Figura:Esempio ladder con B1 e B2 di lunghezza tre} \end{figure} \begin{lemma}\label{Lemma: colum + prodotti tensoriali - caso ladder B1 e B2 tre celle} Let $\cP$ be a closed path polyomino having a ladder of at least three steps satisfying the previous assumptions. Then the following hold: \begin{enumerate}[(1)] \item $S_{\cP}/(I_{\cP}:x_g)\cong K[\cP]$; \item $S_{\cP}/((I_{\cP},x_g):x_d)\cong K[\cK_1]\otimes_K K[x_{d_3},\dots,x_{d_r}]$; \item $S_{\cP}/((I_{\cP},x_g,x_d):x_b)\cong K[\cK_2]\otimes_K K[x_{a},x_{b},x_{a_1},\dots,x_{a_{s-2}}]$; \item $S_{\cP}/((I_{\cP},x_g,x_d,x_b):x_f)\cong S_{\cP}/(I_{\cP},x_g,x_d,x_b)$; \item $S_{\cP}/((I_{\cP},x_g,x_d,x_b,x_f):x_c)\cong K[\cK_3]\otimes_K K[x_{d_3},\dots,x_{d_r}]$; \item $S_{\cP}/((I_{\cP},x_g,x_d,x_b,x_f,x_c):x_a)\cong K[\cK_2]\otimes_K K[x_{a},x_{a_1},\dots,x_{a_{s-2}}]$; \item $S_{\cP}/(I_{\cP},x_g,x_d,x_b,x_f,x_c,x_a)\cong K[\cK_4]$. \end{enumerate} \end{lemma} \begin{proof} To prove the isomorphisms in the statements $(1)-(7)$, it is enough to prove the following equalities: \begin{enumerate}[(1)] \item $I_{\cP}:x_g=I_{\cP}$; \item $(I_{\cP},x_g):x_d=I_{\cK_1}+(x_f,x_g)+\sum_{i=1}^{r}(x_{c_i})$; \item $(I_{\cP},x_g,x_d):x_b=I_{\cK_2}+(x_f,x_g,x_d,x_c)+\sum_{i=1}^{s}(x_{b_i})$; \item $(I_{\cP},x_g,x_d,x_b):x_f=(I_{\cP},x_g,x_d,x_b)$; \item $(I_{\cP},x_g,x_d,x_b,x_f):x_c=(I_{\cK_1},x_b,x_d)+(x_g,x_f)+\sum_{i=1}^{r}(x_{c_i})$; \item $(I_{\cP},x_g,x_d,x_b,x_f,x_c):x_a=I_{\cK_2}+(x_f,x_g,x_d,x_c,x_b)+\sum_{i=1}^{s}(x_{b_i})$; \item $(I_{\cP},x_g,x_d,x_b,x_f,x_c,x_a)=(I_{\cK_4},x_g,x_d,x_b,x_f,x_c,x_a)$. \end{enumerate} In particular, the equality $(1)$ is trivial since $I_{\cP}$ is prime; the equalities $(2)$, $(3)$ and $(6)$, together with the related claims, can be proved as done in Lemma \ref{Lemma: column+prodotti tensoriali - primo caso}. The equality $(5)$ and its related claim follow as in Lemma~\ref{Lemma: column+prodotti tensoriali - secondo caso}, considering also that $S_{\cK_1}/(I_{\cK_1},x_b,x_d)\cong K[\cK_3]$ and arguing as in Lemma~\ref{P'_2}. We obtain the equality $(7)$ and its related claim as for Lemma~\ref{isomorfismoP4}.\\ Finally, in order to show $(4)$, we prove that $(I_{\cP},x_g,x_d,x_b)$ is a prime ideal in $S_{\cP}$. Let $\{V_i\}_{i\in I}$ be the set of the maximal vertical edge intervals of $\cP$ and $\{H_j\}_{j\in J}$ be the set of the maximal horizontal edge intervals of $\cP$.
Let $\{v_i\}_{i\in I}$ and $\{h_j\}_{j\in J}$ be the set of the variables associated respectively to $\{V_i\}_{i\in I}$ and $\{H_j\}_{j\in J}$. Let $w$ be another variable and set $\cI=\{f,c,d,g,b_1,\dots,b_s\}\subset V(\cP)$. We consider the following ring homomorphism $$\phi: S_{\cP} \longrightarrow K[\{v_i,h_j,w\}:i\in I,j\in J]$$ defined by $\phi(x_{ij})=v_ih_jw^k$, where $(i,j)\in V_i\cap H_j$, $k=0$ if $(i,j)\notin \cI$, and $k=1$ if $(i,j)\in \cI$. From \cite[Theorem 5.2]{Cisto_Navarra_closed_path} we have $I_{\cP}=\ker \phi$. Let $i'\in I$ such that $V_{i'}$ is the maximal edge interval of $\cP$ containing $b$, $d$ and $g$. We define $\psi: S_{\cP}\rightarrow K[\{v_i,h_j,w\}:i\in I\backslash\{i'\},j\in J]$ as $\psi(x_v)=\phi(x_v)$ if $v\in V(\cP)\backslash\{b,d,g\}$, and $\psi(x_b)=\psi(x_d)=\psi(x_g)=0$. It is not difficult to check that $(I_{\cP},x_d,x_b,x_g)\subseteq \ker\psi$. Let $f\in \ker \psi$. We can write $f=\tilde{f}+\beta x_b+ \delta x_d + \gamma x_g$ where $\beta, \delta, \gamma \in S_{\cP}$ and $x_b,x_d,x_g$ are not variables of $\tilde{f}$. Since $\psi(f)=0$, we have $\phi(\tilde{f})=0$, so $\tilde{f}\in \ker\phi=I_{\cP}$. Hence $S_{\cP}/(I_{\cP},x_b,x_d,x_g)\cong \mathrm{Im}(\psi)$, that is a domain since it is the subring of a domain. So $(I_{\cP},x_b,x_d,x_g)$ is prime in $S_{\cP}$. \end{proof} \begin{remark}\rm If we suppose that $\cB_2$ has just two cells (so $s=1$), then $(I_{\cP},x_g,x_d,x_b)$ is not prime. In fact, set $b=a_0$, denote the cell adjacent to $A_1$ by $C$, and let $b,p$ and $q,a_1$ be respectively the diagonal and anti-diagonal corners of $C$. Observe that in such a case $x_qx_{a_1}\in (I_{\cP},x_g,x_d,x_b)$ but $x_q,x_{a_1}\notin (I_{\cP},x_g,x_d,x_b)$. \end{remark} \begin{theorem}\label{Teorema: serie di Hilbert - ladder con B1 e B2 di tre} Let $\cP$ be a closed path polyomino having a ladder of at least three steps satisfying the assumptions at the beginning of Subsection \ref{ladder3}. Then $$\mathrm{HP}_{K[\cP]}(t)=\frac{h_{K[\cK_4]}(t)+t\big[h_{K[\cK_1]}(t)+2\cdot h_{K[\cK_2]}(t)+h_{K[\cK_3]}(t)\big]}{(1-t)^{\vert V(\cP)\vert -\mathrm{rank}\,\cP}}$$ In particular $K[\cP]$ has Krull dimension $\vert V(\cP)\vert -\mathrm{rank}\,\cP$. \label{Hilbert_series_ladder} \end{theorem} \begin{proof} It follows from Lemma \ref{Lemma: colum + prodotti tensoriali - caso ladder B1 e B2 tre celle} considering the suitable exact sequences and arguing as done in Theorem \ref{Teorema: serie di Hilbert - primo caso} and Corollary \ref{HilbertCsimple}. \end{proof} \section{Rook polynomial and Gorenstein property} \noindent Let $\cP$ be a polyomino. A \textit{$k$-rook configuration} in $\cP$ is a configuration of $k$ rooks which are arranged in $\cP$ in non-attacking positions. \begin{figure}[h] \centering \includegraphics[scale=0.8]{Esempio_rook_configuration} \caption{An example of a $4$-rook configuration in $\cP$.} \label{Figura:esempio rook configuration} \end{figure} \noindent The rook number $r(\cP)$ is the maximum number of rooks which can be placed in $\cP$ in non-attacking positions. We denote by $\cR(\cP,k)$ the set of all $k$-rook configurations in $\cP$ and we set $r_k=\vert \cR(\cP,k)\vert $ for all $k\in \{0,\dots,r(\cP)\}$, conventionally $r_0=1$. The \textit{rook polynomial} of $\cP$ is the polynomial $r_{\cP}(t)=\sum_{k=0}^{r(\cP)}r_kt^k \in \mathbb{Z}[t]$. \\ \noindent We recall that a polyomino is \emph{thin} if it does not contain the square tetromino, that is the square consisting of four cells. 
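\begin{remark}\rm For small examples, the rook polynomial can be computed by brute force. The following Python sketch is only illustrative and not part of the theory developed here; it assumes that cells are encoded by integer pairs and that two rooks attack each other exactly when their cells lie in a common horizontal or vertical cell interval of the collection, i.e.\ when every cell between them in the same row or column also belongs to it (the test should be adapted if a different convention for non-attacking rooks is intended). The function names are ours.
\begin{verbatim}
from itertools import combinations

def attacking(c1, c2, cells):
    (i1, j1), (i2, j2) = c1, c2
    if j1 == j2 and i1 != i2:          # same row of the collection of cells
        lo, hi = sorted((i1, i2))
        return all((i, j1) in cells for i in range(lo + 1, hi))
    if i1 == i2 and j1 != j2:          # same column of the collection of cells
        lo, hi = sorted((j1, j2))
        return all((i1, j) in cells for j in range(lo + 1, hi))
    return False

def rook_coefficients(cells):
    # returns [r_0, r_1, ..., r_{r(P)}], the coefficients of the rook polynomial
    cells = set(cells)
    coeffs, k = [1], 1                 # r_0 = 1 by convention
    while True:
        r_k = sum(1 for conf in combinations(cells, k)
                  if all(not attacking(a, b, cells)
                         for a, b in combinations(conf, 2)))
        if r_k == 0:
            return coeffs
        coeffs.append(r_k)
        k += 1
\end{verbatim}
For instance, on the $L$-tromino $\{(0,0),(1,0),(0,1)\}$ the sketch returns $[1,3,1]$, that is, the rook polynomial is $1+3t+t^2$ and the rook number is $2$.
\end{remark}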
\noindent In \cite{Trento3} the authors prove that if $\cP$ is a simple thin polyomino then $h_{K[\cP]}(t)=r_{\cP}(t)$ and, in particular, $\mathrm{reg}(K[\cP])=r(\cP)$ (see \cite[Theorem 3.12]{Trento3}). Now we show how the rook polynomial is related to the Hilbert-Poincar\'e series of the polyominoes considered in this work. \begin{proposition}\label{Proposizione: Proprietà grado del rook polynomial I caso} Let $\cP$ be an $(\cL,\cC)$-polyomino. Let $\cP_1,\cP_2,\cP_3,\cP_4$ be the polyominoes defined in Section 3. Then: \begin{enumerate}[(1)] \item $r(\cP_1)=r(\cP_2)=r(\cP)-1$; \item $r(\cP_3)=r(\cP)-2$; \item $r(\cP)-2\leq r(\cP_4)\leq r(\cP)$. \end{enumerate} \end{proposition} \begin{proof} $(1)$ Let $\cP_1= \cP\backslash [A,A_r]$. Once we fix a rook in a cell of $[A_1,\dots,A_{r-1}]$, we cannot place another rook in $[A,A_r]$ in non-attacking position in $\cP$, so $r(\cP_1)=r(\cP)-1$. In a similar way it can be shown that $r(\cP_2)=r(\cP)-1$.\\ $(2)$ It follows by arguments similar to the previous ones, applied to the intervals $[A,A_r]$ and $[A,B_s]$.\\ $(3)$ Since $\cP=\cP_4\cup [A,A_1]\cup [A,B_1]$, it is obvious that $r(\cP_4)\leq r(\cP)$. Moreover, $\cP_4=\cP_3\cup [A_2,A_r]\cup [B_2,B_s]$, so $r(\cP_3)\leq r(\cP_4)$, that is $r(\cP)-2\leq r(\cP_4)$. In particular, observe that if $r,s>3$ then $r(\cP_4)=r(\cP)$, if either $r=3$ or $s=3$ then $r(\cP_4)=r(\cP)-1$, and if $r=s=3$ then $r(\cP_4)=r(\cP)-2$. \end{proof} \begin{theorem}\label{Teorema: h-polinomio = rook polinomio caso L-conf} Let $\cP$ be an $(\cL,\cC)$-polyomino. Suppose that $\cC$ is a simple thin polyomino. Then $h_{K[\cP]}(t)$ is the rook polynomial of $\cP$. Moreover $\mathrm{reg}(K[\cP])=r(\cP)$. \end{theorem} \begin{proof} It is known that $h_{K[\cP]}(t)=h_{K[\cP_4]}(t)+t\big[h_{K[\cP_1]}(t)+h_{K[\cP_2]}(t)+(1-t)h_{K[\cP_3]}(t)\big].$ We denote by $r_{\cP_j}(t)=\sum_{k=0}^{r(\cP_j)}r_k^{(j)}t^k$ the rook polynomial of $\cP_j$. Since $\cC$ is a simple thin polyomino, $\cP_1$, $\cP_2$, $\cP_3$ and $\cP_4$ are simple thin polyominoes, so $h_{K[\cP_j]}(t)=r_{\cP_j}(t)$ for $j\in\{1,2,3,4\}$. By Proposition \ref{Proposizione: Proprietà grado del rook polynomial I caso} we have $\deg h_{K[\cP]}=r(\cP)$. Then $$h_{K[\cP]}(t)=\sum_{k=0}^{r(\cP)}[r_k^{(4)}+r_{k-1}^{(1)}+r_{k-1}^{(2)}+r_{k-1}^{(3)}-r_{k-2}^{(3)}]t^k,$$ where we set $r_{-1}^{(j)},r_{-2}^{(3)},r_{r(\cP)-1}^{(3)},r_k^{(4)}$ equal to $0$, for all $j\in\{1,2,3\}$ and for $k> r(\cP_4)$. \\ We want to prove that $r_k^{(4)}+r_{k-1}^{(1)}+r_{k-1}^{(2)}+r_{k-1}^{(3)}-r_{k-2}^{(3)}$ is exactly the number of ways in which $k$ rooks can be placed in $\cP$ in non-attacking positions, for all $k\in\{0,\dots,r(\cP)\}$. Fix $k\in\{0,\dots,r(\cP)\}$. Observe that: \begin{enumerate}[(1)] \item $r_k^{(4)}$ can be viewed as the number of $k$-rook configurations in $\cP$ such that no rook is placed on $A,A_1$ and $B_1$. \item Assume that a rook $\mathcal{T}$ is placed in $A_1$. Then we cannot place any rook on a cell of $[A,A_r]$, so $r_{k-1}^{(1)}$ is the number of all $(k-1)$-rook configurations in $\cP_1$. Hence $r_{k-1}^{(1)}$ is the number of all $k$-rook configurations in $\cP$ such that a rook is on $A_1$. Observe that there are some $k$-rook configurations in $\cP$ in which a rook $\mathcal{T}'\neq \mathcal{T}$ is on $B_1$. In other words, $r_{k-1}^{(1)}$ is the number of all $k$-rook configurations in $\cP$ such that $\mathcal{T}$ is on $A_1$ and $\mathcal{T}'$ is not on $B_1$ plus those in which $\mathcal{T}$ is on $A_1$ and $\mathcal{T}'$ is on $B_1$.
\item Assume that a rook $\mathcal{T}$ is placed in $B_1$. Arguing as before, $r_{k-1}^{(2)}$ is the number of all $k$-rook configurations in $\cP$ such that $\mathcal{T}$ is on $B_1$ and $\mathcal{T}'$ is not on $A_1$ plus those in which $\mathcal{T}$ is on $B_1$ and $\mathcal{T}'$ is on $A_1$. \item Assume that a rook is placed on $A$. Then we cannot place any rook on a cell of $[A,A_r]\cup[A,B_s]$, so $r_{k-1}^{(3)}$ is the number of all $(k-1)$-rook configurations in $\cP_3$, that is the number of $k$-rook configurations in $\cP$ such that a rook is placed on $A$. \item Fix a rook $\mathcal{T}$ in $A_1$ and another one $\mathcal{T}'$ in $B_1$. Then we cannot place any rook on a cell of $[A,A_r]\cup [A,B_s]$, so $r_{k-2}^{(3)}$ is the number of all $(k-2)$-rook configurations in $\cP_3$. Hence $r_{k-2}^{(3)}$ is the number of all $k$-rook configurations in $\cP$ such that a rook is on $A_1$ and another is on $B_1$. \end{enumerate} From $(1),(2),(3),(4)$ and $(5)$ it follows that $r_k^{(4)}+r_{k-1}^{(1)}+r_{k-1}^{(2)}+r_{k-1}^{(3)}-r_{k-2}^{(3)}$ is the number of $k$-rook configurations in $\cP$. \end{proof} \begin{proposition} Let $\cP$ be a closed path satisfying the conditions in Subsection~\ref{ladder3}. Then $h_{K[\cP]}(t)$ is the rook polynomial of $\cP$ and $\mathrm{reg}(K[\cP])=r(\cP)$. \label{prop1} \end{proposition} \begin{proof} It can be proved, by arguments similar to those in Proposition \ref{Proposizione: Proprietà grado del rook polynomial I caso}, that the rook numbers of $\cK_1$, $\cK_2$, $\cK_3$ and $\cK_4$ satisfy the following: \begin{enumerate}[(1)] \item $r(\cK_1)=r(\cP)-1$; \item $r(\cP)-2\leq r(\cK_2)\leq r(\cP)-1$; \item $r(\cK_3)=r(\cP)-1$; \item $r(\cP)-2\leq r(\cK_4)\leq r(\cP)$. \end{enumerate} We denote by $r_{\cK_j}(t)=\sum_{k=0}^{r(\cK_j)}r_k^{(j)}t^k$ the rook polynomial of $\cK_j$, for $j=1,2,3,4$. Observe that $\cK_1$, $\cK_2$, $\cK_3$ and $\cK_4$ are simple thin polyominoes, so $h_{K[\cK_j]}(t)=r_{\cK_j}(t)$ for $j\in\{1,2,3,4\}$, and by the above formulas and Theorem~\ref{Hilbert_series_ladder} we have $\deg h_{K[\cP]}=r(\cP)$. Moreover $$h_{K[\cP]}(t)=\sum_{k=0}^{r(\cP)}[r_k^{(4)}+r_{k-1}^{(1)}+2r_{k-1}^{(2)}+r_{k-1}^{(3)}]t^k,$$ where we set $r_{-1}^{(j)},r_k^{(4)},r_l^{(2)}$ equal to $0$, for all $j\in\{1,2,3\}$, for $k> r(\cK_4)$ and $l> r(\cK_2)$. \\ Arguing as in Theorem \ref{Teorema: h-polinomio = rook polinomio caso L-conf}, we have that $r_k^{(4)}+r_{k-1}^{(1)}+2r_{k-1}^{(2)}+r_{k-1}^{(3)}$ is the number of $k$-rook configurations in $\cP$, for all $k\in\{0,\dots,r(\cP)\}$. In fact, let $k\in\{0,\dots,r(\cP)\}$. Observe that: \begin{enumerate}[(1)] \item $r_k^{(4)}$ is the number of $k$-rook configurations in $\cP$ such that no rook is placed on $A,A_1$, $B$ and $B_r$. \item Fix a rook $\mathcal{T}$ on $B_r$. Then $r_{k-1}^{(1)}$ is the number of all $k$-rook configurations in $\cP$ such that $\mathcal{T}$ is on $B_r$. Observe that among these configurations, there are some $k$-rook configurations in which $\mathcal{T}'\neq \mathcal{T}$ is placed either in $A$ or in $A_1$. \item Fix a rook $\mathcal{T}$ in $B$. Then $r_{k-1}^{(3)}$ is the number of all $k$-rook configurations in $\cP$ such that $\mathcal{T}$ is on $B$. As before, among these configurations there are some $k$-rook configurations in which $\mathcal{T}'\neq \mathcal{T}$ is placed in $A_1$. \item Assume that a rook $\mathcal{T}$ is placed in $A$ (resp. $A_1$). Then $r_{k-1}^{(2)}$ is the number of all $k$-rook configurations in $\cP$ such that $\mathcal{T}$ is on $A$ (resp.
$A_1$), and no other rook is on a cell of $[A,A_s]\cup \{B,B_r\}$. \end{enumerate} From $(1),(2),(3)$ and $(4)$ we have the desired conclusion. \end{proof} \noindent In order to complete the study of closed path polyominoes having no $L$-configuration, it remains to consider the 1-Configuration and the 2-Configuration introduced in Subsection~\ref{ladder2}. For such cases we mention only the analogous result, omitting the proof since the arguments are similar. \begin{proposition}\label{prop2} Let $\cP$ be a closed path which satisfies the conditions of the 1-Configuration or of the 2-Configuration in Subsection~\ref{ladder2}. Then $h_{K[\cP]}(t)$ is the rook polynomial of $\cP$ and $\mathrm{reg}(K[\cP])=r(\cP)$. \end{proposition} \noindent Observing that a closed path having an $L$-configuration is an $(\cL,\cC)$-polyomino with $\cC$ a path of cells, and gathering all the results above, we obtain the following general result. \begin{theorem} Let $\cP$ be a closed path having no zig-zag walks, equivalently having an $L$-configuration or a ladder of at least three steps. Then: \begin{enumerate}[(1)] \item $K[\cP]$ is a normal Cohen-Macaulay domain of Krull dimension $\vert V(\cP)\vert -\mathrm{rank}\,\cP$; \item $h_{K[\cP]}(t)$ is the rook polynomial of $\cP$ and $\mathrm{reg}(K[\cP])=r(\cP)$. \end{enumerate} \label{ultimo} \end{theorem} \noindent At this point we are ready to provide the condition for the Gorenstein property of a closed path polyomino having no zig-zag walks. Recall the definition of \textit{S-property} given in \cite{Trento3} for a thin polyomino. \begin{definition}\rm Let $\cP$ be a thin polyomino. A cell $C$ is called \textit{single} if there exists a unique maximal interval of $\cP$ containing $C$. $\cP$ has the \textit{S-property} if every maximal interval of $\cP$ has only one single cell. \end{definition} \noindent Observe that if $\cP$ is a closed path polyomino, then $\cP$ has the S-property if and only if every maximal block of $\cP$ contains exactly three cells. \begin{theorem} Let $\cP$ be a closed path having no zig-zag walks. The following are equivalent: \begin{enumerate}[(1)] \item $\cP$ has the $S$-property; \item $K[\cP]$ is Gorenstein. \end{enumerate} \end{theorem} \begin{proof} If $\cP$ has no zig-zag walks, then by Theorem~\ref{ultimo} $K[\cP]$ is a normal Cohen-Macaulay domain of Krull dimension $\vert V(\cP)\vert -\mathrm{rank}\,\cP$ and $h_{K[\cP]}(t)=r_{\cP}(t)=\sum_{k=0}^{s}r_kt^k$, where $s=r(\cP)$. In such a case it is known (\cite{Stanley}) that $K[\cP]$ is Gorenstein if and only if $r_i=r_{s-i}$ for all $i=0,\dots,s$.\\ $(1)\Rightarrow (2)$. Suppose that $\cP$ has the $S$-property. Fix $i\in\{0,1,\dots,r(\cP)\}$ and let us prove that $r_i=r_{s-i}$. Since $\cP$ has the $S$-property, $\cP$ consists of maximal cell intervals of rank three. If $i=0$ then it is trivial that $r_0=r_s=1$. Assume $i\in[s-1]$. It is not restrictive to consider a part of $\cP$ arranged as in Figure \ref{Figura:Dimostrazione Gorenstein L conf}. \begin{figure}[h] \centering \includegraphics[scale=0.7]{Dimostrazione_Gorenstein_L} \caption{} \label{Figura:Dimostrazione Gorenstein L conf} \end{figure} \noindent Define $\cP_1=\cP\backslash\{A,A_1,A_2\}$, $\cP_2=\cP\backslash\{A,A_1,A_2,C_1,C_2\}$ and $\cP_3=\cP\backslash\{A,A_1,A_2,B_1,B_2\}$. We denote by $r_{\cP_j}(t)=\sum_{k=0}^{r(\cP_j)}r_k^{(j)}t^k$ the rook polynomial of $\cP_j$. Observe that $r(\cP_1)=r(\cP)-1=s-1$ and $r(\cP_2)=r(\cP_3)=r(\cP)-2=s-2$.
By arguments similar to those in Theorem \ref{Teorema: h-polinomio = rook polinomio caso L-conf}, it is easy to prove that $r_{k}=r_k^{(1)}+r_{k-1}^{(1)}+r_{k-1}^{(2)}+r_{k-1}^{(3)}$ for all $k\in\{1,\dots,s\}$. Then \begin{align*} &r_{s-i}=r_{s-i}^{(1)}+r_{s-i-1}^{(1)}+r_{s-i-1}^{(2)}+r_{s-i-1}^{(3)}=r_{(s-1)-(i-1)}^{(1)}+\\ &+r_{(s-1)-i}^{(1)}+r_{(s-2)-(i-1)}^{(2)}+r_{(s-2)-(i-1)}^{(3)}. \end{align*} Since $\cP_1$, $\cP_2$ and $\cP_3$ are simple thin polyominoes having the $S$-property, then by Theorem 4.2 of \cite{Trento3} we have: $r_{(s-1)-(i-1)}^{(1)}=r_{i-1}^{(1)}$, $r_{(s-1)-i}^{(1)}=r_{i}^{(1)}$, $r_{(s-2)-(i-1)}^{(2)}=r_{i-1}^{(2)}$ and $r_{(s-2)-(i-1)}^{(3)}=r_{i-1}^{(3)}$. Hence \begin{align*} &r_{s-i}=r_{(s-1)-(i-1)}^{(1)}+r_{(s-1)-i}^{(1)}+r_{(s-2)-(i-1)}^{(2)}+r_{(s-2)-(i-1)}^{(3)}=r_{i-1}^{(1)}+\\ &+r_{i}^{(1)}+r_{i-1}^{(2)}+r_{i-1}^{(3)}=r_i. \end{align*} $(2)\Rightarrow (1)$. Assume that $K[\cP]$ is Gorenstein, that is $r_i=r_{s-i}$ for all $i=0,\dots,s$. We prove that $\cP$ has the $S$-property. First of all, we observe that no maximal interval of $\cP$ can have rank greater than or equal to four. In fact, if there exists a maximal interval $I=[A,B]$ with $\mathrm{rank}\, I\geq 4$, then we can consider two distinct cells $C,D\in I\backslash \{A,B\}$. Hence we can obtain an $s$-rook configuration in $\cP$ with a rook in $C$ and another one with a rook in $D$, so $r_{s}\geq 2>r_0=1$, which is a contradiction. In addition, in such a case, we can suppose that $\cP$ has an $L$-configuration, otherwise it is not difficult to see that $\cP$ has a subpolyomino as in Figure~\ref{Figura:pentomino}, and arguing as in the proof of the case $b)\Rightarrow c)$ (hypothesis (2)) of \cite[Theorem 4.2]{Trento3}, $K[\cP]$ is not Gorenstein. So, let $\{A,A_1,A_2,B_1,B_2\}$ be an $L$-configuration of $\cP$, as in Figure \ref{Figura:Dimostrazione Gorenstein L conf}. Consider $\cP'=\cP\backslash\{A,A_1,A_2\}$, which is a simple thin polyomino. Let $r_{\cP'}(t)=\sum_{k=0}^{s'}r_k't^k$ be the rook polynomial of $\cP'$, where $s'=r(\cP)-1$. We prove that $\cP'$ has the $S$-property. Suppose that $\cP'$ does not have the $S$-property; then from the case $b)\Rightarrow c)$ of \cite[Theorem 4.2]{Trento3} it follows that either $r'_{s'}>1$ or $r'_{s'-1}>\mathrm{rank}\, \cP'$. Both cases lead to a contradiction with $r_s=1$ or $r_{s-1}=\mathrm{rank}\, \cP$. By similar arguments we can prove that $\cP''=\cP\backslash \{A,B_1,B_2\}$ is a simple thin polyomino having the $S$-property. Since $\cP'$ and $\cP''$ have the $S$-property, it follows trivially that $\cP$ also has the $S$-property. \end{proof} \begin{remark}\rm If $\cQ$ is the weakly closed path introduced in Subsection~\ref{ladder2}, then $K[\cQ]$ is a normal Cohen-Macaulay domain (Remark~\ref{RemarkQ}). From Proposition~\ref{HilbertQ} we obtain that $\mathrm{HP}_{K[\cQ]}(t)=\frac{h(t)}{(1-t)^{\vert V(\cP)\vert -\mathrm{rank}\,\cP}}$ with $h(t)=h_{K[\cP]}(t)-h_{K[\cQ_1]}(t)$. We know that $h_{K[\cP]}(t)$ and $h_{K[\cQ_1]}(t)$ are the rook polynomials respectively of $\cP$ and $\cQ_1$. Since $\cQ_1$ is contained in $\cP$, we have $h(1)>0$. So the Krull dimension of $K[\cQ]$ is $\vert V(\cP)\vert -\mathrm{rank}\,\cP=\vert V(\cQ)\vert -\mathrm{rank}\,\cQ$. Moreover, by the same arguments adopted in this work, we obtain that $h(t)$ is the rook polynomial of $\cQ$ and $K[\cQ]$ is Gorenstein if and only if $\cQ$ has the $S$-property.
\end{remark} \vspace{0.1cm} \begin{flushleft} \textbf{Concluding remarks.}\\ \end{flushleft} \noindent In the existing literature, the Hilbert-Poincar\'e series and the Gorenstein property of polyomino ideals have been discussed only for a few classes of simple polyominoes. In this work, we provide some results in this direction for a class of non-simple polyominoes, known as the closed paths without zig-zag walks. The coordinate rings of these polyominoes are known to be Cohen-Macaulay. In the light of our results, we propose the following questions: \begin{enumerate}[(1)] \item Is it possible to characterize the Gorenstein property for $(\cL,\cC)$-polyominoes based on the choice of $\cC$? \item In this work, the Hilbert-Poincar\'e series of closed path polyominoes without zig-zag walks is studied. Is it possible to provide similar results for the Hilbert-Poincar\'e series of a closed path polyomino with a zig-zag walk?\\ \end{enumerate} \noindent \textbf{Disclosure of Potential Conflicts of Interest:} The authors declare that there is no conflict of interest. \begin{thebibliography}{99} \addcontentsline{toc}{chapter}{\bibname} \bibitem{Andrei} C. Andrei, Algebraic Properties of the Coordinate Ring of a Convex Polyomino, Electron. J. Comb., \textbf{28}(1), \#P1.45, 2021. \bibitem{Bruns_Herzog} W. Bruns, J. Herzog, Cohen--Macaulay Rings, Cambridge University Press, London, Cambridge, New York, 1993. \bibitem{Cisto_Navarra_closed_path} C. Cisto, F. Navarra, Primality of closed path polyominoes, J. Algebra Appl., in press, 2021. \bibitem{Cisto_Navarra_weakly} C. Cisto, F. Navarra, R. Utano, Primality of weakly closed path polyominoes, Illinois J. Math., \textbf{66}(4), 545--563, 2022. \bibitem{Cisto_Navarra_CM_closed_path} C. Cisto, F. Navarra, R. Utano, On Gr\"obner bases and Cohen-Macaulay property of closed path polyominoes, Electron. J. Comb., \textbf{29}(3): \#P3.54, 2022. \bibitem{Dinu_Navarra} R. Dinu, F. Navarra, Non-simple polyominoes of K\"onig type, arXiv:2210.12665, 2022. \bibitem{golomb} S. W. Golomb, Polyominoes, puzzles, patterns, problems, and packagings, Second edition, Princeton University Press, 1994. \bibitem{L-convessi} V. Ene, J. Herzog, A. A. Qureshi, F. Romeo, Regularity and Gorenstein property of the L-convex Polyominoes, Electron. J. Comb., \textbf{28}(1), \#P1.50, 2021. \bibitem{Ene-qureshi} V. Ene, A.A. Qureshi, Ideals generated by diagonal 2-minors, Communications in Algebra \textbf{41}(8), 2013. \bibitem{monomial_ideals} J. Herzog, T. Hibi, Monomial Ideals. GTM, vol. 260. Springer, Berlin (2010). \bibitem{Simple equivalent balanced} J. Herzog, S. S. Madani, The coordinate ring of a simple polyomino, Illinois J. Math., \textbf{58}, 981--995, 2014. \bibitem{def balanced} J. Herzog, A. A. Qureshi, A. Shikama, Gr\"obner basis of balanced polyominoes, Math. Nachr., \textbf{288}(7), 775--783, 2015. \bibitem{Not simple with localization} T. Hibi, A. A. Qureshi, Nonsimple polyominoes and prime ideals, Illinois J. Math., \textbf{59}(2), 391--398, 2015. \bibitem{Kummini rook polynomial} M. Kummini, D. Veer, The $h$-polynomial and the rook polynomial of some polyominoes, arXiv:2110.14905, preprint, 2021. \bibitem{Trento} C. Mascia, G. Rinaldo, F. Romeo, Primality of multiply connected polyominoes, Illinois J. Math., \textbf{64}(3), 291--304, 2020. \bibitem{Trento2} C. Mascia, G. Rinaldo, F. Romeo, Primality of polyomino ideals by quadratic Gr\"obner basis, Math. Nachr., \textbf{295}(3), 593--606, 2022. \bibitem{Qureshi} A.A. 
Qureshi, Ideals generated by 2-minors, collections of cells and stack polyominoes, J. Algebra \textbf{357}, 279--303, 2012. \bibitem{Parallelogram Hilbert series} A. A. Qureshi, G. Rinaldo, F. Romeo, Hilbert series of parallelogram polyominoes, Res. Math. Sci. \textbf{9},28, 2022. \bibitem{Simple are prime} A. A. Qureshi, T. Shibuta, A. Shikama, Simple polyominoes are prime, J. Commut. Algebra, \textbf{9}(3), 413--422, 2017. \bibitem{Trento3} G. Rinaldo, and F. Romeo, Hilbert Series of simple thin polyominoes, J. Algebr. Comb. \textbf{54}, 607--624, 2021. \bibitem{Shikama} A. Shikama, Toric representation of algebras defined by certain non-simple polyominoes, J. Commut. Algebra, \textbf{10}, 265--274, 2018. \bibitem{Shikam rettangolo meno } A. Shikama, Toric rings of non-simple polyominoes, Int. J. Algebra, \textbf{9}(4), 195--201, 2015. \bibitem{Stanley} R. Stanley, Hilbert functions of graded algebras, Adv. Math., \textbf{28}, 57--83, 1978. \bibitem{Villareal} R. H. Villarreal, Monomial algebras, Second edition, Monograph and Research notes in Mathematics, CRC press, 2015. \end{thebibliography} \end{document}
2205.08250v1
http://arxiv.org/abs/2205.08250v1
Analytic properties of Stretch maps and geodesic laminations
\documentclass[12pt]{amsart} \usepackage{tikz-cd} \usepackage{mathtools} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{verbatim} \usepackage{textcomp}\usepackage{amssymb} \usepackage{cite} \usepackage{color} \usepackage[all]{xy} \usepackage{graphicx} \usepackage{color} \definecolor{NoteColor}{rgb}{1,0,0} \newcommand{\note}[1]{\textcolor{NoteColor}{#1}} \usepackage{hyperref} \usepackage{soul} \hypersetup{ colorlinks, citecolor=black, filecolor=black, linkcolor=black, urlcolor=black } \usepackage{tikz} \usepackage[all]{xy} \usepackage{graphicx} \usetikzlibrary{backgrounds} \setlength{\oddsidemargin}{.25in} \setlength{\evensidemargin}{.25in} \setlength{\textwidth}{6in} \hfuzz2pt \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{claim}[theorem]{Claim} \allowdisplaybreaks \newtheorem{notation}[theorem]{Notation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{keytechnicallemmaformodelspace}[theorem]{The Key Technical Lemma for Model Space} \newtheorem{keytechnicallemma}[theorem]{The Key Technical Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newenvironment{proofET}{{\sc Proof of Theorem~\ref{thm:supptmeasure}}} {{\sc q.e.d.} \\} \newenvironment{existpk} {{\sc Proof of Proposition~\ref{existpk}.}} {{\sc q.e.d.} \\} \newenvironment{princregconv} {{\sc Proof of Corollary~\ref{princregconv}.}} {{\sc q.e.d.} \\} \newenvironment{thm:supptmeasure} {{\sc Proof of Theorem~\ref{thm:supptmeasure}.}} {{\sc q.e.d.} \\} \newenvironment{proofconvvq}{{\sc Proof of Claim~\ref{convvq}.}}{{\sc q.e.d.} \\} \newenvironment{proofappendix2}{{\sc Proof of Lemma~\ref{federeroutermeasure}.}}{{\sc q.e.d.} \\} \newenvironment{proofappendix3}{{\sc Proof of Lemma~\ref{cd2general}.}}{{\sc q.e.d.} \\} \newenvironment{proofappendix4}{{\sc Proof of Lemma~\ref{usordv}.}}{{\sc q.e.d.} \\} \newenvironment{support}{{\sc Proof of Theorem~\ref{thm:supptmeasure}.}}{{\sc q.e.d.} \\} \newenvironment{regularkpos}{{\sc Proof of Theorem~\ref{regularkpos}.}}{{\sc q.e.d.} \\} \newenvironment{theo:idealoptim}{{\sc Proof of Theorem~\ref{theo:idealoptim}.}}{{\sc q.e.d.} \\} \newenvironment{sincoslemma}{{\sc Proof of Proposition~\ref{propineq1}.}}{{\sc q.e.d.} \\} \newenvironment{thm:existence}{{\sc Proof of Theorem~\ref{thm:existence}.}}{{\sc q.e.d.} \\} \numberwithin{equation}{section} \newtheorem*{theorem*}{Theorem} \newenvironment{thm:pdirichlet}{{\sc Proof of Theorem~\ref{thm:pdirichlet}.}}{{\sc q.e.d.} \\} \newenvironment{ne2}{{\sc Proof of Theorem~\ref{regularity}.}}{{\qed} \\} \newenvironment{proof:main}{{\sc Proof of Theorem~\ref{ex}.}}{{\qed} \\} \newenvironment{proof:pluriharmonic} {{\sc Proof of Theorem~\ref{theorem:pluriharmonic}.}} {{\qed} \\} \newenvironment{proofof(iv)} {{\sc Proof of $(iv)$.}} {{\qed} \\} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\R}{\mathbb R} \newcommand{\RR}{Q} \newcommand{\Z}{\mathbb Z} \newcommand{\T}{\mathbb T} \newcommand{\PP}{\mathbb P} \newcommand{\N}{\mathbb N} \newcommand{\HH}{\mathbb H} \newcommand{\C}{\mathbb C} \newcommand{\D}{\mathbb D} \newcommand{\V}{\mathbb V} \newcommand{\E}{\mathbb E} \newcommand{\W}{\mathbb W} \newcommand{\Q}{\mathbb Q} \newcommand{\Sp}{\mathbb S}
\newcommand{\Ker}{\mathcal G} \newcommand{\B}{\mathcal B} \newcommand{\oT}{\mathcal T} \newcommand{\ep}{\epsilon} \newcommand{\eps}{\varepsilon} \newcommand{\lam}{\lambda} \newcommand{\ra}{>} \newcommand{\la}{<} \newcommand{\ttau}{{\tilde \tau}} \newcommand{\cV}{\frac{1}{2\pi}E^{\gamma_{j}}[\Sp^1]} \newcommand{\cVV}{\frac{1}{2\pi}E^{\gamma_{j}}[\Sp^1]+\frac{1}{2\pi}E^{\gamma_{i}}[\Sp^1]} \newcommand*{\avint}{\mathop{\ooalign{$\int$\cr$-$}}} \voffset -.5in \begin{document} \title{Analytic properties of Stretch maps and geodesic laminations} \author[Daskalopoulos]{Georgios Daskalopoulos} \address{Department of Mathematics \\ Brown University \\ Providence, RI}\author[Uhlenbeck]{Karen Uhlenbeck} \address{Department of Mathematics, University of Texas, Austin, TX and Distinguished Visiting Professor, Institute for Advanced Study, Princeton, NJ} \thanks{ GD supported in part by NSF DMS-2105226.} \maketitle \begin{abstract} In a 1995 preprint, William Thurston outlined a Teichm\"uller theory for hyperbolic surfaces based on maps between surfaces which minimize the Lipschitz constant. In this paper we continue the analytic investigation into best Lipschitz maps which we began in \cite{daskal-uhlen1}. In the spirit of the construction of infinity harmonic functions, we obtain best Lipschitz maps $u$ as limits of minimizers of Schatten-von Neumann integrals in a fixed homotopy class of maps between two hyperbolic surfaces. We construct Lie algebra valued dual functions which minimize a dual Schatten von-Neumann integral and limit on a Lie algebra valued function $v$ of bounded variation with least gradient properties. The main result of the paper is that the support of the measure which is a derivative of $v$ lies on the canonical geodesic lamination constructed by Thurston \cite{thurston} and further studied by Gueritaud-Kassel \cite{kassel}. In the sequel paper \cite{daskal-uhlen2} we will use these results to investigate the dependence on the hyperbolic structures and construct a variety of transverse measures. This should provide information about the geometry and make contact with results of the Thurston school. \end{abstract} \section{Introduction} Thurston proposed a Teichm\"uller theory based on maps $u:M \rightarrow N$ between two hyperbolic surfaces in a fixed homotopy class which minimize the Lipschitz constant among all maps in that class. A brief description is: Such a map $u$ takes on its Lipschitz constant $L$ on set containing a geodesic lamination $\lambda$ in $M$, which it stretches by the factor $L$ as it maps onto a geodesic lamination $\mu$ in $N$. This canonical lamination first defined by Thurston (cf. \cite[Theorem 8.2]{thurston}) has been carefully studied by Gueritaud-Kassel \cite{kassel} and we discuss their results in Section~\ref{sect:Varblm}. In the development of Thurston's theory, the topology and geometry are encoded entirely in these laminations and the transverse measures constructed on them. While he does construct maps $u$, we found by looking at the geometry carefully that most of his constructions realize the Lipschitz constant on sets larger than $\lambda.$ This paper makes very little contact with these topological constructions but sets the stage for their development in a subsequent paper. The goal of this paper is two fold. We first of all want to construct examples of best Lipschitz maps in the spirit of infinity harmonic functions. 
Secondly, we construct dual functions derived from the closed 1-forms predicted by conservation laws with derivatives supported on the canonical geodesic lamination. These dual objects are locally Lie algebra valued functions of bounded variation, which we understand to be the basis for the countable additivity of transverse measures. There is very little in the literature on the analytical underpinnings of our theory, either on the approximations to the infinity harmonic maps or on the dual functions of bounded variation. Hence we have had to develop most of the analysis from scratch. Along the way, we have included some straight PDE type results which give some insight into the behavior of the solutions to these somewhat unusual equations. This introduction contains an outline of the topics and main results in each section, and ends with a brief description of the sequel paper \cite{daskal-uhlen2} with more topological results. In Section~\ref{sect1} we fix a homotopy class of maps given by $f:M \rightarrow N$, and if the boundary $\partial M \neq \emptyset$, we either fix the boundary data (Dirichlet problem) or put no restriction on the boundary (Neumann problem). For $\dim N > 1$, the study of $p$-harmonic maps as $p \rightarrow \infty$ produces a map which minimizes $\max |df|$, which is not the Lipschitz constant and {\it has no geometrical significance}. Sheffield-Smart in \cite{smart} suggest using the approximation $\int_M s(df)^p*1$, where $s(df)$ is the largest singular value of $df$. For finite $p$ this unfortunately does not lead to a recognizable elliptic partial differential equation as its Euler-Lagrange equation. We instead use a Schatten-von Neumann norm, in which the integrand is essentially the sum of the $p$-th powers of the singular values of $df$. This section defines the norms, the integrals $J_p$, the Euler-Lagrange equations and their basic properties. Roughly speaking, \[ J_p(f) = \int_M TrQ(df)^p *1 \] where $Q(df)^2 = dfdf^T$ is a non-negative symmetric linear map mapping the tangent space $T_f N$ to itself. Details are in Section~\ref{sect1}. The Euler-Lagrange equations of $J_p$ are \[ D^*Q(df)^{p-2}df=0 \] where $D=D_f$ is the pullback of the Levi-Civita connection on $f^{-1}(TN)$. The next is Theorem~\ref{thm:pdirichlet}. \begin{theorem} Let $(M, g)$ be a compact Riemannian manifold with boundary $ \partial M$ (possibly empty) and $\dim M=n$. Let $(N, h)$ be a closed Riemannian manifold of non-positive curvature. Then, for $p>n$ there exists a minimizer $u \in W^{1,p}(M, N)$ of the functional $J_p$ with either the Neumann or the Dirichlet boundary conditions in a homotopy (resp. relative homotopy) class. Furthermore, if $ \partial M \neq \emptyset$, the solution of the Dirichlet problem is unique. \end{theorem} In Section~\ref{sec3} we discuss the Noether currents obtained from the symmetry of the target. Here is where we construct the dual functions. Technically they have values in the dual of the Lie algebra, but since we constantly use the geometry coming from the indefinite invariant inner product, we will refer to them as Lie algebra valued. In \cite{daskal-uhlen1}, we found the dual function by inspection, but we found its Lie algebra valued counterpart in the present paper only by looking at the conserved quantities from the symmetries of the target. We describe the flat bundle structure needed to encode the local action of $SO(2,1)$ on $N$. The dual functions are only defined locally. We will study their global formulation in the sequel paper \cite{daskal-uhlen2}.
The description of the closed 1-form needed to obtain them is Theorem~\ref{Prop:Elag-bund}. \begin{theorem} Let $u=u_p$ satisfy the $J_p$-Euler-Lagrange equations. Then, in the distribution sense $d* (S_{p-1}(du) \times u)=0.$ \end{theorem} Here $S_{p-1}(du)=Q(du)^{p-2}du$ and $d$ is the flat connection determined by $u$. If we set $ V_q = *(S_{p-1}(du) \times u)$ then $V_q$ is the closed $(\dim M-1=1)$-form predicted by Noether's theorem. We can set locally $dv_q = V_q$. As in the case of functions, these dual fields also satisfy the Euler-Lagrange equations for a functional based on Schatten-von Neumann $q$-norms, $1/q + 1/p = 1.$ This is the content of Theorems~\ref{TTheorem A6} and~\ref{TTheorem A66}. Ignoring regularity issues of $u$ for the moment, we can explain these theorems as follows: Given a solution $u$ satisfying the $J_p$-Euler-Lagrange equations, we can view $Z=*S_{p-1}(du)$ as a section of the pullback bundle $u^{-1}TN \otimes T^*M$. Then the minimum of the functional $J_q(\xi)=||Z+D_u\xi||^q_{sv^q}=||Z+(d\xi)_u||^q_{sv^q}$ over all sections $\xi$ of $u^{-1}TN$ is attained at $\xi=0$. Similarly for $J_q(\psi)=||V+(d\psi)_u||^q_{sv^q}$ where $V=*(S_{p-1}(du)\times u)$. Here $\psi$ is a section of the subbundle of the Lie algebra bundle $\mathfrak p_u$ corresponding to the positive definite part of the Cartan decomposition defined by $u$ (cf. Proposition~\ref{helpemb}). In both cases subscript $u$ means projection onto the positive part of the bundle. Note that this construction depends only on the local symmetric space structure of the target. We conjecture that some version is true for a variety of integrands and domains $M$ with $\dim M \geq 2.$ In Section~\ref{sect:regtheor} we discuss results on regularity of minimizers of the functional $J_p$. This section deals with two dimensions only. The functional $J_p$ leads to an Euler-Lagrange equation which is elliptic as long as the eigenvalues of the derivative of the map are non-zero. In dimension 2, at least $C^{1,\alpha}$ regularity would ordinarily be expected. However, we were unable to prove this even for $p = 4.$ Possible disparity between the two eigenvalues (which is why we picked this integrand) invalidates standard techniques such as hole filling. Since so much of the later sections involves computations on solutions of the Euler-Lagrange equations, we include the regularity theorem we were able to obtain for maps between hyperbolic surfaces. Note that the a priori estimates on smooth solutions are quite easy to obtain. However, to use these estimates, we would have to expand our integrands to a one-parameter family that is known to have smooth solutions in the family up until the final point we are seeking to estimate. This actually could be done, but is not clearly easier and is certainly not more applicable. Our main result is Theorem~\ref{mainregtH} and its Corollary~\ref{cormainredsh}. Here we abbreviate $Q(du)^{p/2-1}du = S_{p/2}(du)$. \begin{theorem} If $u=u_p$ satisfies the $J_p$-Euler-Lagrange equations in $\Omega \subset M$, and $\Omega' \subset \Omega$, then $S_{p/2}(du) \big|_{\Omega'} \in H^1(\Omega')$ and \[ ||S_{p/2}(du)||_{H^1(\Omega')} \leq kpJ_p(du\big|_{\Omega'})^{1/2}. \] Here $k$ depends on the geometry of $\Omega' \subset \Omega \subset\HH$ but not on $p.$ Moreover $du\big|_{\Omega'}$ is in $L^s$ for all $s.$ \end{theorem} In Section~\ref{sect:Varblm} we give the variational construction of the infinity harmonic map.
Here we show that as $p \rightarrow \infty$, the minimizers $u_p$ of $J_p$ converge to best Lipschitz maps $u.$ The theorem is very general, requiring only non-positive curvature of the target. This is Theorem~\ref{thm:existence}. \begin{theorem} Let $M$, $N$ be Riemannian manifolds where $M$ is convex with possibly non-empty boundary and $N$ is closed and has non-positive curvature. Given a sequence $p \rightarrow \infty$, there exists a subsequence (denoted also by $p$) and a sequence of $J_p$-minimizers $u_p: M \rightarrow N$ in the same homotopy class (either in the absolute sense or relative to the boundary depending on the context) such that \[ u=\lim_{p \rightarrow \infty} u_p \ \ \ \mbox{weakly in} \ \ W^{1,s} \ \forall s. \] Furthermore, $u_p \rightarrow u$ uniformly and $u$ is a best Lipschitz map in the homotopy class. \end{theorem} Following the statement of this theorem, we restrict ourselves to maps between hyperbolic surfaces for the rest of the paper. We also make the standing assumption that the best Lipschitz constant is at least 1. In order to follow the rest of the paper, it is necessary to absorb the structure of the canonical laminations $\lambda$ in $M$ and $\mu$ in $N$ determined by a homotopy class of maps. We refer the reader who did not find the description at the beginning of the introduction satisfactory to Definition~\ref{canlamin} and to the papers \cite{thurston} and \cite{kassel}. In fact, these papers prove the existence of these canonical laminations, and show that every best Lipschitz map must contain $\lambda$ in the set of points at which the best Lipschitz constant is taken on (=maximum stretch set). Hence our infinity harmonic maps contain $\lambda$ in their stretch set. Gueritaud-Kassel use the term {\it optimal best Lipschitz} for best Lipschitz maps whose maximum stretch set is $\lambda.$ Note that Thurston's stretch maps are not optimal. In Section~\ref{lipqgoesto1} we consider the limit $q \rightarrow 1$. Here $q$ is the conjugate to $p$ satisfying $1/p+1/q=1$. As in \cite{daskal-uhlen1}, we normalize $S_{p-1}$ and obtain a measure in the limit as $p$ goes to infinity. In this paper, we similarly define \[ S_{p-1}= S_{p-1}(\kappa_p du_p) = \kappa_p^{p-1}Q(du_p)^{p-2}du_p \] for a normalizing factor $\kappa_p$, and $|S_p| = \kappa_p(S_p,du_p)^\sharp$. The next theorem is a combination of Theorem~\ref{TTheorem A5} and Theorem~\ref{thm:limmeasures0}. \begin{theorem} Given a sequence $p \rightarrow \infty$, there exists a subsequence (denoted again by $p$), a real-valued positive Radon measure $|S|$, a Radon measure $S$ with values in $T^*M \otimes E$ and a Radon measure $V$ with values in $T^*M \otimes ad(E)$ such that \begin{itemize} \item $(i)$ $ |S_{p-1}| \rightharpoonup |S| $ and $\int_M |S|*1 = 1$ \item $(ii)$ $ S_{p-1} \rightharpoonup S$ and $V_q \rightharpoonup V$ \item $(iii)$ The masses of $S$ and $V$ are at most one and two respectively \item $(iv)$ $ S=S_u$ and $V=V_u$ \item $(v)$ The supports of $S$, $V$ and $|S|$ are all equal \item $(vi)$ The measures $Z=*S$ and $V$ are mass minimizing with respect to the best Lipschitz map $u$. Furthermore, $V$ is closed with respect to the flat connection on $ad(E)$. \end{itemize} \end{theorem} What we have not explained so far is the term mass minimizing. This is a term we introduce in Section~\ref{lipqgoesto1} and is the equivalent of least gradient for functions of bounded variation, but adapted to our situation. See Theorem~\ref{thm:limmeasures04r}.
In the case of scalar valued functions one can construct functions of least gradient by taking limits of $q$-harmonic functions as $q \rightarrow 1$ (cf. \cite{juutinen}). For the vector valued case we propose to take limits of $q$-Schatten-von Neumann harmonic maps instead. At the moment we do not know exactly where our definition fits into the bigger picture of a theory of least gradient vector valued functions of bounded variation. In Section~\ref{sec7} we relate the support of the measures $S$ and $|S|$ to the canonical lamination. Recall our remarks about the canonical lamination $\lambda$ and its image $\mu.$ The existence of the measures $S$ and $|S|$ is in itself not interesting unless we know some useful geometric properties of them. In this section, we show that the support of the measures is on the geodesic lamination $\lambda.$ Recall that \[ V_q = \kappa^{p-1}*(Q(du_p)^{p-2}du_p \times u_p) =\kappa^{p-1} *(S_{p-1}(du_p)\times u_p) \] and $S$ and $V$ are the measures we obtained in the limit of subsequences $S_p$ and $V_q$ as $p \rightarrow \infty$ or equivalently as $q \rightarrow 1$. Note that this theorem is proved by comparison with the optimal best Lipschitz maps and says the support is on the canonical $\lambda$, not the maximum stretch set of $u.$ The next is Theorem~\ref{thm:supptmeasure}. \begin{theorem} The support of the measures $S$ and $V$ is contained in the canonical geodesic lamination $ \lambda$ associated to the hyperbolic metrics on $M$, $N$ and the homotopy class. \end{theorem} We will investigate the global properties of $v$ for which $dv = V$ in the next paper \cite{daskal-uhlen2}. In Section~\ref{sec:idealoptim} we discuss optimal best Lipschitz maps in the special case when the canonical lamination consists of finitely many closed geodesics. We begin this section with an investigation of local solutions mapping a neighborhood of a closed geodesic $\lambda_0 \subset \lambda$ to a neighborhood of $\mu_0 \subset \mu$. This problem allows separated variable solutions, which provide a unique opportunity to study first hand some infinity harmonic maps. We sketch a solution to the Dirichlet problem, and find that an open set around $\lambda_0$ is invariably mapped to $\mu_0$. The discussion of the Neumann problem, leads to construction of a comparison ideal optimal best Lipschitz map, which allows us to prove our last theorem. This is Theorem~\ref{infharmopt}. \begin{theorem} If the canonical geodesic lamination $\lambda$ for a homotopy class of maps $M \rightarrow N$ consists of a finite number of closed geodesics $\lambda_j$, then the maximum stretch set of the infinity harmonic map $u_p\rightarrow u$ is $\lambda.$ In other words any infinity harmonic map is optimal. \end{theorem} We conjecture that the infinity harmonic map is in general optimal, i.e takes on its Lipschitz constant on the canonical lamination $\lambda.$ We end this introduction with a preview of some of the results which will be contained in our sequel paper \cite{daskal-uhlen2}. Recall that the Lipschitz constant is roughly speaking a ratio of length of $\mu$ to the length of $\lambda$ (cf. \cite{thurston}), and when asking questions about the Lipschitz constant, we are in reality examining the length functional. Here is a list of questions that we will address in \cite{daskal-uhlen2}: First is the meaning of the closed 1-current $V$. We have already seen that $V$ has support on the lamination and values in the Lie algebra bundle $ad(E)$. 
The local primitive $v$ satisfying $V = dv$ is a Lie algebra valued function of bounded variation that is constant on the plaques of lamination. The total variation of $dv$ defines a transverse measure. Second is the interpretation of the homology class of $V$. We will show that as an element of $H^1(M, ad(E))$ it can be interpreted as the variation of the best Lipschitz constant with respect to the metric on the target. More precisely, by varying the target hyperbolic metric (while keeping the domain fixed) the best Lipschitz constant defines a functional on Teichm\"uller space whose variation is described by the homology class of $V$. This paper does not use heavily the hyperbolic structure of the domain. We will investigate the conservation laws which come from the base, which are connected with variation of the Lipschitz constant under deformation of the hyperbolic structure in the domain. We entirely missed this point in our first paper \cite{daskal-uhlen1} on real valued best Lipschitz maps. We also like to point out that, because of the connection with Thurston theory, we have restricted ourselves to the Lie group $G=SO(2,1)$. In the first sections of the paper we allowed $G=SO(n,1)$ but we did not pursue this level of generality in the later sections. We conjecture that there should be an extension of our results for higher dimensional hyperbolic manifolds relating to the results of Gueritaud and Kassel \cite{kassel}. Even a more general question is what happens analytically for maps between other symmetric spaces. In fact, our method is based on studying maps between Lie algebras with the indefinite inner product coming from the Killing form and projecting to the positive definite part. This is necessary in order to write down a meaningful PDE. The decomposition of the Lie algebra is given by a Cartan decomposition $\mathfrak g_u=\mathfrak k_u \oplus \mathfrak p_u$ determined by a map $u$, in our case the best Lipschitz map. Hopefully we will come back to this in a future article. We conjecture that these constructions once properly understood, will make contact with the local geometry and topology as developed by the Thurston school (see for example \cite{papa} and the references therein). {\bf Acknowledgements.} We would like to thank Camillo De Lellis and Athanase Papadopoulos for useful discussions during the preparation of this paper. \section{Schatten-von Neumann Harmonic maps}\label{sect1} In this section we introduce a new version of $p$-harmonic maps between Riemannian manifolds. They are defined as critical points of a functional $J_p$ given by the integral of the $p$-power of the Schatten-von Neumann norm of the gradient. In the first section we review some simple facts from linear algebra. These include the definition of the Schatten-von-Neumann norms on the space of matrices and their basic properties. We proceed to define the functional $J_p$ on the space of maps between two Riemannian manifolds, describe its Euler-Lagrange equations and study its convexity in the case when the target manifold has non-positive curvature. \subsection{Preliminaries} Given $V_1$ and $V_2$ inner product spaces and $A \in Hom(V_1, V_2)$ denote $A^T \in Hom(V_2, V_1)$ the adjoint. Here the inner products are used to identify the spaces with their duals. First note that for $ B \in Hom(V_1, V_2)$, \begin{equation}\label{eqn:traceinn} Tr(A^TB)=Tr(AB^T)=(A, B) \end{equation} is nothing but the trace inner product of the matrices $A$ and $B$. 
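As a quick sanity check of (\ref{eqn:traceinn}), the following minimal numerical sketch may be helpful; it is written in Python/NumPy, it is not part of the argument, and the dimensions and random matrices are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Illustrative check of Tr(A^T B) = Tr(A B^T) = (A, B), the trace inner product.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # matrices of maps in Hom(V1, V2): dim V1 = 2, dim V2 = 3
B = rng.standard_normal((3, 2))

lhs = np.trace(A.T @ B)           # Tr(A^T B)
mid = np.trace(A @ B.T)           # Tr(A B^T)
rhs = np.sum(A * B)               # entrywise sum, i.e. the trace inner product (A, B)
assert np.isclose(lhs, mid) and np.isclose(lhs, rhs)
\end{verbatim}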
We denote by $|A|_2=(A, A)^{1/2}.$ Set \[ Q(A)=(AA^T)^{1/2} \ \ \mbox{and} \ \ \mathcal Q(A)=(A^TA)^{1/2}. \] Define by $s_1(A) \geq s_2(A) \geq ...\geq s_r(A) \geq 0$ their common eigenvalues (singular values of $A$) where $r=\min\{\dim V_1, \dim V_2\}$. \begin{definition}\label{schatten} Let $1 \leq p < \infty$ and $A \in Hom(V_1, V_2)$. We define the $p$-Schatten-von Neumann norm \[ |A|_{sv^p}= \left( TrQ(A)^p \right)^{1/p}= \left( Tr\mathcal Q(A)^p \right)^{1/p}=\left( \sum_{i=1}^r s_i(A)^p \right)^{1/p}. \] We extend the definition at $p=\infty$ by setting \[ |A|_{sv^\infty}=\sup_{|a|=1}|A(a)| \] the operator norm of $A$. Equivalently, $|A|_{sv^\infty}= s_1=s_1(A)$ is the largest singular value of $A$. For convenience we will denote $s_1(A)$ simply by $s(A)$. \end{definition} Below we list some standard properties of Schatten-von Neumann norms: \begin{proposition}\label{shat1} \begin{itemize} \item $(i)$ For $1/p+1/q=1/r, \ p \in [1, \infty]$ \[ |AB^T|_{sv^r} \leq |A|_{sv^p}|B|_{sv^q} \ \ \mbox{and} \] \item $(ii)$ \[ |Tr(AB^T)| \leq |A|_{sv^p}|B|_{sv^q}. \] \item $(iii)$ For $1 \leq p \leq q \leq \infty$, \[ |A|_{sv^1} \geq |A|_{sv^p} \geq |A|_{sv^q} \geq |A|_{sv^\infty}. \] \item $(iv)$ \[ \lim_{p \rightarrow \infty}|A|_{sv^p}=|A|_{sv^\infty}. \] \end{itemize} \end{proposition} \begin{proof} For $(i)$ see \cite[(IV. 42), (IV. 43)]{bhatia}. For $(ii)$ combine $(i)$ for $r=1$ with \cite[(II. 40)]{bhatia}. Properties $(iii)$ and $(iv)$ follow immediately from the following well known facts: for a fixed $r$-tuple of non-negative numbers $(x_1,...x_r) \in \R^r$ the function \[ p \mapsto \left(\sum_{i=1}^r x_i^p \right)^{1/p} \] is non-increasing in $p$ and as $p\rightarrow \infty$ converges to $\max_{i=1,...r}x_i$. \end{proof} The following consequence of the convexity of the norms will be used frequently in this paper. \begin{proposition}\label{lemmaconvvnorms} For $A, B \in Hom(V_1, V_2)$ \[ p(Q(A)^{p-2}A, B - A) \leq |B|_{sv^p}^p-|A|_{sv^p}^p \] \end{proposition} \begin{proof} The function $j(A) = Tr Q(A)^p$ is convex, and differentiable. Furthermore, \[ (dj)_A(C)= p(Q(A)^{p-2}A, C). \] If $j$ is a convex function on a vector space, it is always true that \[ (dj)_A(C) \leq j(A + C) - j(A). \] The proposition follows by setting $C = B - A$. \end{proof} In this paper we will use the induced norms on spaces of sections of vector bundles. More precisely, let $V_1, V_2$ be Riemannian vector bundles over a Riemannian manifold $(M,g)$ and $A: M \rightarrow Hom(V_1,V_2)$ a section. We define the $p$-Schatten-von Neumann norm of $A$ \[ ||A||_{sv^p}=\left(\int_M |A|^p_{sv^p}*1\right)^{1/p}\ \ \mbox{for} \ \ 1\leq p < \infty \] and \[ ||A||_{sv^\infty}=esssup |A|_{sv^\infty}. \] \begin{proposition}\label{ineqscoten} $||.||_{sv^p}$ is a norm which is equivalent to the $L^p$ norm. More precisely, \begin{equation}\label{lemma:normineq0} \frac{1}{\sqrt r} ||A||_{L^p} \leq ||A||_{sv^p} \leq ||A||_{L^p} \end{equation} where $r=\min \{\dim V_1, \dim V_2 \}$. Furthermore, for sections $A$, $B$ and $1/p+1/q=1$, $p \in [1, \infty]$ \begin{eqnarray}\label{holds} ||AB^T||_{sv^1} \leq ||A||_{sv^p}||B||_{sv^q} \ \mbox{and} \ \left| \int_M Tr(AB^T) \right| \leq ||A||_{sv^p}||B||_{sv^q}. \end{eqnarray} \end{proposition} \begin{proof}To show that $||.||_{sv^p}$ is a norm on the space of sections, it suffices to check subadditivity. 
This follows from a standard argument: \begin{eqnarray*} ||A+B||^p_{sv^p}&=&\int_M|A+B|_{sv^p}^p*1\\ &\leq&\int_M(|A|_{sv^p}+|B|_{sv^p})|A+B|_{sv^p}^{p-1}*1\\ &\leq&\left(\left(\int_M |A|_{sv^p}^p*1 \right)^{1/p}+\left(\int_M |B|_{sv^p}^p*1 \right)^{1/p}\right)\left(\int_M |A+B|_{sv^p}^p\right)^{1-1/p}\\ &=&(||A||_{sv^p}+||B||_{sv^p})||A+B||^{p-1}_{sv^p}. \end{eqnarray*} For the second statement, \[ |A|_{sv^p}^p= \sum_{i=1}^r s_i(A)^p\leq \left(\sum_{i=1}^r s_i(A)^2 \right)^{p/2}=|A|_2^p \] and \[ |A|_{sv^p}^p= \sum_{i=1}^r s_i(A)^p\geq s_1(A)^p = (s_1(A)^2)^{p/2} \geq \left(\frac{1}{r}\sum_{i=1}^r s_i(A)^2 \right)^{p/2} = \frac{1}{r^{p/2}} |A|_2^p. \] Finally, from Proposition~\ref{shat1}, \begin{eqnarray*} ||AB^T||_{sv^1} &=& \int_M |AB^T|_{sv^1} \leq \int_M |A|_{sv^p} |B|_{sv^q} \\ &\leq& \left(\int_M |A|_{sv^p}^p\right)^{1/p} \left(\int_M |B|_{sv^q}^q\right)^{1/q} =||A||_{sv^p}||B||_{sv^q}. \end{eqnarray*} The second inequality in (\ref{holds}) follows similarly. \end{proof} \begin{lemma}\label{lemma:normlim} Let $V_1, V_2$ be Riemannian vector bundles of dimension $r$ over a Riemannian manifold $(M,g)$ and $A: M \rightarrow Hom(V_1,V_2)$ a section. Then \begin{itemize} \item $(i)$ \[ \lim_{p \rightarrow \infty} ||A||_{sv^p}= ||A||_{sv^\infty}=s(A). \] \item $(ii)$ If $s<p$, then \[ \frac{1}{(r \ vol(M))^{1/s}}||A||_{sv^s}\leq \frac{1}{vol(M)^{1/p}} ||A||_{sv^p}. \] \end{itemize} \end{lemma} \begin{proof}Let $s_1(x) \geq s_2(x)\geq ... \geq s_r(x)$ be the eigenvalues of $Q(A(x))$ pointwise. Then, \begin{eqnarray*} ||A||_{sv^p}&=&\left(\int_M \sum_{i=1}^r s_i(x)^p *1\right)^{1/p} \leq ||s_1||_{L^\infty}\left(\int_M r dx \right)^{1/p}. \end{eqnarray*} Thus \[ \limsup_{p \rightarrow \infty} ||A||_{sv^p} \leq ||A||_{sv^\infty}. \] From Proposition~\ref{shat1}$(iii)$, \[ ||A||_{sv^p}=\left(\int_M |A|_{sv^p}^p \right)^{1/p} \geq \left(\int_M |A|_{sv^\infty}^p\right)^{1/p}. \] By taking limits, we obtain \[ \liminf_{p \rightarrow \infty} ||A||_{sv^p} \geq ||A||_{sv^\infty}. \] This proves $(i)$. For $(ii)$, \begin{eqnarray*} ||A||_{sv^s} &= &\left(\int_M \sum_{i=1}^r s_i(x)^s *1\right)^{1/s} \leq \left(\int_M r s_1(x)^s *1\right)^{1/s}\\ &= & (r vol(M))^{1/s}\left(\avint_{\ \ M} s_1(x)^s *1\right)^{1/s}\\ &\leq &(r vol(M))^{1/s} \left(\avint_{\ \ M} s_1(x)^p *1\right)^{1/p} \\ &\leq & \frac{(r vol(M))^{1/s}}{vol(M)^{1/p}} ||A||_{sv^p}. \end{eqnarray*} \end{proof} \subsection{The functional}\label{sect:funtional} Let $(M, g)$ be a compact Riemannian manifold with boundary $ \partial M$ (possibly empty) and $\dim M=n$. Let $(N, h)$ be a closed Riemannian manifold of non-positive sectional curvature. Throughout the paper we will denote by $(.;.)$ the inner product coming from the domain metric and by $(.,.)^\sharp$ the one coming from the target metric. The notation $(.;.)^\sharp$ means we use both metrics. For $1< p < \infty$ consider the subspace of maps $W^{1,p}(M,N) \cap C^0(M,N)$. For such a map $u$, define \[ J_p(u)=||du||^p_{sv^p}=\int_M |du|_{sv^p}^p*1. \] In order to determine the Euler-Lagrange equations of $J_p$ we let \[ Q(du)^2 :=dudu^T: T_{u(x)} N \xrightarrow{du^T} T_x M \xrightarrow{du} T_{u(x)} N. \] Here $ Q(du)$ is a section of the bundle $End(u^{-1}(TN))$ and $|du|_{sv^p}^p=TrQ(du)^p.$ It follows from the multiplication theorems of Sobolev spaces that, in the continuous range $p>n$, $J_p$ is a functional of class $C^{[p]}$, where $[p]$ denotes the largest integer no greater than $p$. We now express $ Q= Q(du)$ in local coordinates.
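Before doing so, we record a minimal numerical sketch of Definition~\ref{schatten} and of Proposition~\ref{shat1}$(iii)$ and $(iv)$; it is written in Python/NumPy, it is not part of the argument, and the matrix below is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def schatten_norm(A, p):
    # p-Schatten-von Neumann norm computed from the singular values of A
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return s.max()                     # operator norm = largest singular value
    return (s**p).sum() ** (1.0 / p)

A = np.array([[3.0, 1.0],
              [0.5, 2.0]])                 # arbitrary illustrative matrix

# (iii): p -> |A|_{sv^p} is non-increasing and bounded below by the operator norm
ps = [1.0, 2.0, 4.0, 10.0, 100.0]
norms = [schatten_norm(A, p) for p in ps]
assert all(a >= b for a, b in zip(norms, norms[1:]))
assert norms[-1] >= schatten_norm(A, np.inf)

# (iv): |A|_{sv^p} converges to |A|_{sv^infinity} as p -> infinity
assert abs(norms[-1] - schatten_norm(A, np.inf)) < 1e-2

# |A|_{sv^2} coincides with the trace norm |A|_2 (Frobenius norm)
assert np.isclose(schatten_norm(A, 2.0), np.linalg.norm(A))
\end{verbatim}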
We choose a local orthonormal frame $\{e^\alpha \}$ on the tangent bundle of $U \subset M$ and denote by $\{e_\alpha \}$ the dual frame on the cotangent bundle of $M$. We also fix a local orthonormal frame $\{F^j \}$ on the tangent bundle of $V \subset N$, with $\{F_j \}$ the dual frame on the cotangent bundle. Assume further that $u(U) \subset V.$ Then \[ du=d_\alpha u^i e_\alpha F^i=d u^i F^i \] and \begin{eqnarray}\label{formscripQ} dudu^T&=&d_\alpha u^id_\alpha u^j F^i F_j \nonumber\\ &=&(d u^i; d u^j) F^i F_j. \end{eqnarray} Recall that $(.;.)$ denotes the inner product on $M$. In other words, \[ Q(du)^2_{ij}=(d u^i; d u^j)= Q^2_{ij}, \ \ Q(du)^p=Q^p_{ij}F^i F_j. \] Equivalently, by viewing $du=d u^i F^i \in \Omega^1(u^{-1}(TN))$, define the dual with respect to the metric on $N$ \begin{equation}\label{Qeqnstarhtro} du^\sharp=d u^j F_j \in \Omega^1(u^{-1}(TN^*)). \end{equation} Then, \begin{equation}\label{Qeqnstarhtrox} Q(du)^2=(d u; d u^\sharp)=(d u^i; d u^j)F^iF_j\in \Omega^0(End(u^{-1}(TN))). \end{equation} We write \begin{equation*} Q(du)^{p-2}du=Q^{p-2}_{ij}d u^jF^i \in \Omega^1(u^{-1}(TN)) \end{equation*} \begin{equation*}\label{Qloccord2x} *Q(du)^{p-2}du=Q^{p-2}_{ij} *d u^jF_i \in \Omega^1({u^{-1}(TN)^*}). \end{equation*} We can now compute the first variation of $J_p$: \begin{proposition}\label{prop:firstvar2}The Euler-Lagrange equations of the functional $J_p$ are \begin{equation}\label{eqn:firstvargen} \int_M(Q(du)^{p-2}du; D\phi)^\sharp*1=0 \ \ \ \forall \phi \in \Omega^0(u^{-1}(TN)). \end{equation} In particular by taking $\phi$ compactly supported away from $\partial M$, \begin{equation}\label{eqn:firstvar3} D^*Q(du)^{p-2}du=0. \end{equation} Here $D=D_u$ is the pullback of the Levi-Civita connection on $u^{-1}(TN)$. \end{proposition} \begin{proof} Consider a variation $u=u_t$ with $\frac{du}{dt }= \phi$ and let $J_p=J_p(t)$. Then, \begin{eqnarray*}\label{derJ} \frac{dJ_p}{dt} &=&\int_M \frac{d}{dt} Tr(dudu^T)^{p/2} *1 \\ &=& \frac{p}{2}\int_M Tr (dudu^T)^{p/2-1} D_\frac{d}{dt} (dudu^T) *1 \\ &=& \frac{p}{2}\int_M Tr (du du^T)^{p/2-1} ((D_\frac{d}{dt}du du^T)+(du D_\frac{d}{dt}du^T))*1 \\ &=& p \int_M Tr (Q(du)^{p-2} du D_\frac{d}{dt} du^T)*1 \\ &=& p \int_M ( Q(du)^{p-2} du; D_\frac{d}{dt} du)^\sharp *1 \\ &=& p \int_M (Q(du)^{p-2} du; D \phi )^\sharp *1. \end{eqnarray*} \end{proof} A critical point of the functional $J_p=||du||^p_{sv^p}$ is called a {\it{Schatten-von Neumann $p$-harmonic map}} or simply a {\it{$J_p$-harmonic map.}} Since we are only considering the case when $N$ has non-positive curvature we will show (cf. Corollary~\ref{cor:convex}) that every $J_p$-harmonic map is a minimizer. We will call such a map simply a {\it$J_p$-minimizing map} or a {\it$J_p$-minimizer}. $J_p$-harmonic maps should not be confused with the usual $p$-harmonic maps which are critical points of the {\it{different functional}} \[ ||du||_{L^p}^p=\int_M |du|_2^p*1. \] The same can be said for the infinity norms. We define the $L^\infty$ norm of $du$ as $||du||_{L^\infty}=\mathrm{ess\,sup} |du|_2$. {\it In general this is different from the Lipschitz constant which is related to the operator norm $||.||_{sv^\infty}$} (unless one of the dimensions is one). \subsection{The second variation} In this section we continue with $(M, g)$ and $(N, h)$ as before and $p>n$. We also assume that $t \mapsto u_t \in W^{1,p}(M, N)$ is a $C^2$ geodesic homotopy. Thus, $t \mapsto J_p(t)$ is twice differentiable and \begin{equation}\label{geohom} \frac{D}{\partial t}\frac{\partial u}{\partial t} \equiv 0.
\end{equation} We will continue to denote by $D=D_u$ the pullback connection on $u^{-1}(TN)$. \begin{lemma} \label{convineq} The following holds: \[ (\frac{D}{\partial t}du \ du^T, \frac{D}{\partial t}Q(du)^{p-2} )^\sharp \geq 0. \] \end{lemma} \begin{proof} The estimate is pointwise, so we may assume that we are in normal coordinates at a point, so that the metric is Euclidean, the Christoffel symbols vanish and $\frac{D}{\partial t}=\frac{\partial }{\partial t}$. For simplicity set $du=A$, $Q^2=AA^T=R$ and notice \begin{eqnarray*} p\lefteqn{( A'A^T, {Q^{p-2}}' )^\sharp }\\ &=&\frac{p}{2}(R', (R^{(p-2)/2})' )^\sharp \\ &=&\frac{p}{2} Tr(R'(R^{p/2-1})')^\sharp \\ &=& D^2 F(R',R')^\sharp \\ &\geq& 0. \end{eqnarray*} Here we set $F(R):=TrR^{p/2}$, which is convex by the convexity of the Schatten-von Neumann norms. Also in the second equality we used (\ref{eqn:traceinn}). \end{proof} \begin{proposition}\label{prop:secvar} Let $t \mapsto u_t \in W^{1,p}(M, N)$ be a $C^2$ path which is also a geodesic homotopy. Then, \begin{eqnarray*} \frac{1}{p}\frac{d^2J_p}{dt^2}&\geq&-\int_M( R^N \left(Q(du)^{(p-2)/2}du, \frac{\partial u}{\partial t} \right)\frac{\partial u}{\partial t}; Q(du)^{(p-2)/2}du )^\sharp*1 \\ &+&\int_M \left | Q(du)^{(p-2)/2} D\frac{\partial u}{\partial t} \right |^2 *1. \end{eqnarray*} In the above, we view $Q(du)^{(p-2)/2}du$ and $D\frac{\partial u}{\partial t}$ as sections of the bundle $T^*M\otimes u^{-1}(TN)$. \end{proposition} \begin{proof} \begin{eqnarray*} \frac{1}{p}\frac{d^2J_p}{dt^2}&=&\int_M ( \frac{D}{\partial t}D\frac{\partial u}{\partial t}; Q(du)^{p-2}du )^\sharp + (D\frac{\partial u}{\partial t}; \frac{D}{\partial t}(Q(du)^{p-2}du) )^\sharp *1 \\ &=&\int_M( D \frac{D}{\partial t}\frac{\partial u}{\partial t}; Q(du)^{p-2}du)^\sharp*1 -\int_M( R^N\left(du, \frac{\partial u}{\partial t} \right) \frac{\partial u}{\partial t}; Q(du)^{p-2}du)^\sharp*1 \\ &+& \int_M (D\frac{\partial u}{\partial t}; \frac{D}{\partial t}Q(du)^{p-2} du+ Q(du)^{p-2}\frac{D}{\partial t}du)^\sharp*1 \\ &=& -\int_M( R^N\left(du, \frac{\partial u}{\partial t} \right)\frac{\partial u}{\partial t}; Q(du)^{p-2}du)^\sharp*1\\ &+&\int_M( \frac{D}{\partial t} du; \frac{D}{\partial t}Q(du)^{p-2}du)^\sharp*1 \\ &+&\int_M ( D\frac{\partial u}{\partial t}; Q(du)^{p-2}\frac{D}{\partial t}du)^\sharp*1\\ &\geq&-\int_M(R^N (Q(du)^{(p-2)/2}du, \frac{\partial u}{\partial t} )\frac{\partial u}{\partial t}; Q(du)^{(p-2)/2}du)^\sharp*1 \\ &+&\int_M \left |Q(du)^{(p-2)/2} D\frac{\partial u}{\partial t} \right |^2 *1. \end{eqnarray*} In the third equality we used (\ref{geohom}) and in the last inequality Lemma~\ref{convineq}. \end{proof} \begin{corollary}\label{cor:convex} The map \[ t \mapsto J_p(u_t) \] is convex. In particular, any critical point is a global minimum. \end{corollary} \begin{corollary}\label{cor:unique1} Let $t \mapsto u_t $ be a $C^2$ geodesic homotopy between two non-constant minimizing maps $u_0$ and $u_1$. Then, \[ \left | \frac{\partial u}{\partial t} \right | \equiv c \] and \[ ( R^N \left( Q(du)^{(p-2)/2}du, \frac{\partial u}{\partial t} \right)\frac{\partial u}{\partial t}; Q(du)^{(p-2)/2}du)^\sharp \equiv 0. \] If in addition $\partial M \neq \emptyset$, then there exists a unique $J_p$-minimizer in a fixed homotopy class. \end{corollary} \begin{proof} The convexity statement of Proposition~\ref{prop:secvar} implies both that $Q(du)^{(p-2)/2}D\frac{\partial u}{\partial t} \equiv 0$ and $( R^N \left(Q(du)^{(p-2)/2}du, \frac{\partial u}{\partial t} \right)\frac{\partial u}{\partial t}; Q(du)^{(p-2)/2}du)^\sharp \equiv 0$.
We next claim that $D\frac{\partial u}{\partial t} =0$ everywhere. The first equality implies that $\frac{D}{\partial t}du=D\frac{\partial u}{\partial t} =0$ on the set $\{du \neq 0 \}$, and therefore, by continuity, on its closure $\overline{\{du \neq 0\}}$. On the complement, $du \equiv 0$ and hence the claim also holds. Since $\frac{\partial u}{\partial t} $ is covariantly constant, $|\frac{\partial u}{\partial t}| \equiv c $. \end{proof} \begin{corollary}\label{cor:unique2} Let $t \mapsto u_t $ be a $C^2$ geodesic homotopy between two non-constant minimizing maps $u_0$ and $u_1$. Assuming the target has negative curvature, either $u_0=u_1$ or the rank of each $u_t$ is $\leq 1$. \end{corollary} \begin{proof} If $c=0$ in the above corollary, then $u_0 \equiv u_1$. Otherwise $\frac{\partial u}{\partial t}$ is never zero. By the negativity of the sectional curvature $R^N$, $ Q(du)^{(p-2)/2}du$ must have pointwise rank $\leq 1$ everywhere, hence so does $du$. \end{proof} \subsection{Existence of $J_p$-minimizers}\label{sect:thpdirichlet} Let $(M, g)$ be a compact Riemannian manifold with boundary $ \partial M$ (possibly empty) and $\dim M=n$. Let $(N, h)$ be a closed Riemannian manifold of non-positive curvature. The purpose of this section is to show existence of a minimizer $u$ of the functional $J_p$ subject to either Neumann or Dirichlet boundary conditions. For the Dirichlet problem we fix a continuous map $f:M \rightarrow N$ and seek a minimizer in the homotopy class of $f$ relative to the boundary values of $f$. For the Neumann problem we only fix a homotopy class and no boundary condition at all. The main result of the section is the following: \begin{theorem}\label{thm:pdirichlet} Let $(M, g)$ be a compact Riemannian manifold with boundary $ \partial M$ (possibly empty) and $\dim M=n$. Let $(N, h)$ be a closed Riemannian manifold of non-positive curvature. Then, for $p>n$ there exists a minimizer $u \in W^{1,p}(M, N)$ of the functional $J_p$ with either the Neumann or the Dirichlet boundary conditions in a homotopy (resp. relative homotopy) class. Furthermore, if $ \partial M \neq \emptyset$, the solution of the Dirichlet problem is unique. \end{theorem} \begin{proof} The proof of existence is fairly standard so we will only give a sketch. Indeed, the functional $J_p$ is coercive and, being convex, weakly lower semicontinuous (cf. \cite[Chapter 8.2]{evans}). Thus, given a minimizing sequence $u_i $ for $J_p$, there exists a subsequence $u_{i_j}\rightharpoonup u \in W^{1,p} $ which must be a minimum. The compactness of the inclusion $W^{1,p} \subset C^0$ implies that the map $u$ stays in the appropriate homotopy class. The uniqueness follows from Corollary~\ref{cor:unique1} above. \end{proof} By the Sobolev embedding theorem we also obtain \begin{corollary} \label{hoelderpk}The minimizer $u$ of Theorem~\ref{thm:pdirichlet} is in $C^\alpha$ for $\alpha=1-\frac{n}{p}$. \end{corollary} \subsection{The transpose formalism} There is an equivalent way of defining the variation of the functional $J_p$ where instead of $Q(du)^2$ we consider $\mathcal Q(du)^2=du^Tdu \in End(TM)$. Set \[ J_p(u)=\int_M Tr \mathcal Q(du)^p*1. \] In local coordinates, \[ du^Tdu=d_\alpha u^id_\beta u^i e^\alpha e_\beta=(d_\alpha u, d_\beta u)^\sharp e^\alpha e_\beta.
\] Setting \[ \mathcal Q^2_{\alpha \beta}=(d_\alpha u, d_\beta u)^\sharp = \mathcal Q^2_{\beta \alpha} \] we can write \[ \mathcal Q(du)^p= \mathcal Q^p_{\alpha \beta}e^\alpha e_\beta \] and \begin{equation*} du \mathcal Q(du)^{p-2}=\mathcal Q^{p-2}_{\alpha \beta}d_\alpha u e_\beta, \ \ *du \mathcal Q(du)^{p-2}= \mathcal Q^{p-2}_{\alpha \beta}d_\alpha u *e_\beta. \end{equation*} As in Proposition~\ref{prop:firstvar2}, the Euler-Lagrange equations of $J_p$ are given by \begin{equation}\label{eqn:firstvar0} D^*(du \mathcal Q(du)^{p-2})=0. \end{equation} A simple way to see that these are indeed the same Euler-Lagrange equations is via the formula \[ du \mathcal Q(du)^{p-2}=Q(du)^{p-2}du. \] \section{Conservation laws from the symmetries of the target}\label{sec3} In this section, we develop the geometry to understand $J_p$-harmonic maps and later the best Lipschitz maps into symmetric spaces of non-compact type. We work it out specifically for $G = SO^+(n,1)$ and $\HH = \HH^n$ the positive unit hyperboloid in $ \R^{n,1}$. According to Noether's theorem, every symmetry of a calculus of variations problem corresponds to a divergence-free vector field. However, the symmetries of the target manifold $N$ are local, not global. The resulting invariant is nevertheless necessary to understand the global aspects of the problem. We will be constructing a flat $G$-bundle over $M$ using the homotopy class of the map $u: M \rightarrow N$. The local symmetries appear as sections of an associated bundle. \subsection{The geometry of the hyperboloid}\label{geomhyp} We review some basic geometry. Let $e^\sharp = diag(1,...,1,-1)$. For $X \in \R^{n,1}$ let $X^\sharp = (e^\sharp X)^T$. The inner product in $\R^{n,1}$ is \[ ( X,Y)^\sharp = X^\sharp Y =Y^\sharp X. \] The associated transpose on linear maps $B$ is $B^\sharp = e^\sharp B^Te^\sharp$. Then $g \in G =SO^+(n,1)$ preserves the inner product and satisfies $g^{-1} = g^\sharp$. $B \in \mathfrak g=so(n,1)$ satisfies $B^\sharp = -B$. Note that $XY^\sharp$ is a matrix and $YX^\sharp - XY^\sharp$ is a skew-symmetric matrix (with respect to $^\sharp$). We define \begin{equation}\label{wedgedef} X \times Y=YX^\sharp - X Y^\sharp \in \mathfrak g=so(n,1). \end{equation} Let $(A,B)^\sharp= Tr A B $ denote the Killing form on $\mathfrak g$. With respect to a Cartan decomposition of $ \mathfrak g=so(n,1)$, we can write $A \in \mathfrak g$ as \begin{equation*} A = \begin{pmatrix} W & X \\ X^T & 0 \end{pmatrix} \end{equation*} for an arbitrary vector $X$ and $W=-W^T$. It follows that $(A,A)^\sharp= Tr WW+2|X|^2 $ and the metric has signature $(n, n(n-1)/2)$. In particular for $n=2$, the Killing form is (up to a constant) a flat Lorentzian metric on $\mathfrak g=so(2,1)$ of signature $(2,1)$ isometric to $\R^{2,1}$. We let \[ \HH^n = \HH = \{X \in \R^{n,1}: ( X,X )^\sharp = -1 \ \mbox{and} \ X_{n+1} \geq 1 \}. \] We can also formulate this as the unit hyperboloid divided out by the involution $X \mapsto -X$. Let $X_0=(0,\dots,0,1) \in \HH $ be a conveniently chosen base point. Let $\Pi_0$ be the orthogonal projection, with respect to $(\,, )^\sharp$, onto ${X_0}^\perp$. Let $\Pi_0^\perp$ be the orthogonal projection onto $X_0$. Note that \[ I = \Pi_0 +\Pi_0^\perp= \Pi_0 - (X_0, \cdot )^\sharp X_0 = \Pi_0 -X_0X_0^\sharp. \] If $X \in \HH $ is an arbitrary point, then since the action of $G$ on $\HH$ is transitive there exists $g(X) \in G$ such that $X=g(X)X_0$. Since $G$ acts by isometries it will preserve orthogonal projections onto the span of $X$ and onto the subspace $X^\perp$.
In other words: \begin{lemma} $g(X) {X_0}^\perp$ is the tangent space $T_X\HH$. \end{lemma} Let $\Pi(X)$ be the orthogonal projection, with respect to $(\,, )^\sharp$, onto $X^\perp$ and $\Pi^\perp(X)$ the orthogonal projection onto $X$. Then $ g(X)\Pi_0g(X)^\sharp = \Pi(X)$ is the $^\sharp$-orthogonal projection from $\R^{n,1}$ onto $T_X\HH$. It follows that \begin{equation} \label{proj} I = \Pi(X)+ \Pi(X)^\perp=\Pi(X) - XX^\sharp. \end{equation} Recall the analogous formula for $S^n$; the minus sign comes from the signature of $\R^{n,1}$. So we will use the expression \begin{equation} \label{projperp} \Pi^\perp (X)= -( X, . )^\sharp X= -XX^\sharp \end{equation} for the projection onto the normal bundle at a point. \begin{lemma}\label{metricHemb}The inner product $(,)^\sharp$ restricted to the tangent bundle of $\HH$ is a Riemannian metric which agrees with the standard metric of the hyperbolic space. \end{lemma} The Levi-Civita connection on $\R^{n,1}$ is $d$ and hence it defines a flat connection, still denoted by $d$, on $T \R^{n,1} \big |_{\HH}$. We consider the exact sequence of smooth vector bundles with $SO^+(n,1)$ action \begin{equation}\label{exactse} 0 \rightarrow T\HH \rightarrow T \R^{n,1} \big |_{\HH} \simeq \HH \times \R^{n,1} \xrightarrow{\Pi^\perp} (T\HH)^\perp \rightarrow 0. \end{equation} As smooth vector bundles with $SO^+(n,1)$ action $T \R^{n,1} \big |_{\HH}=T\HH \oplus (T\HH)^\perp$. However $d$ does not preserve the splitting and we write $D^\sharp$ for the induced split connection on $T \HH \oplus (T \HH)^\perp$ and $A$ for the off diagonal part. \begin{lemma}\label{Dec2funform} We have \[ d=D^\sharp + A \] where \[ D^\sharp=\Pi d \Pi+ \Pi^\perp d \Pi^\perp \ \mbox{and} \ A=d\Pi (\Pi - \Pi^\perp). \] Furthermore, the restrictions of $D^\sharp$ and $ A$ to $T\HH$ are the Levi-Civita connection of $\HH$ and the second fundamental form of the embedding of $\HH$ in $\R^{n,1}$. \end{lemma} \begin{proof} For $W$ a tangent vector to $\HH$, $D^\sharp W = \Pi dW= d(\Pi W) - (d\Pi)W = dW - (d\Pi )(\Pi W)$. Likewise for $W$ in the normal bundle \[ D^\sharp W = \Pi^\perp d W = d(\Pi^\perp W)- (d\Pi^\perp) W = dW + (d\Pi)( \Pi^\perp W). \] Here we use the equality \begin{equation}\label{eqnpi1} (d\Pi^\perp) W= d(I-\Pi)W=-(d\Pi)W \end{equation} because the identity is constant as a map from $\R^{n,1}$ to itself. So on an arbitrary vector $W$, \begin{eqnarray*} D^\sharp W &=& \Pi d \Pi W +\Pi^\perp d \Pi^\perp W \\ &=& d W - (d\Pi )(\Pi W)+(d\Pi)( \Pi^\perp W)\\ &=& d W- A(W) \end{eqnarray*} as claimed. The last statements are clear from the construction. \end{proof} \subsection{The pullback under $u$} The above construction is equivariant so it pulls back under equivariant maps. Indeed, consider a representation $\rho: \pi_1(M) \rightarrow SO^+(n,1)$ and a $\rho$-equivariant map $\tilde u: \tilde M \rightarrow \HH$. Here $M$ can be arbitrary as we are only exploiting the symmetries of the target. For the sake of simplicity in this section we assume that $\tilde u$ is smooth. We will first address the bundles. Consider the pullback of the exact sequence of bundles (\ref{exactse}) to obtain an exact sequence of vector bundles over $\tilde M$ \begin{equation}\label{exactse2} 0 \rightarrow {\tilde u}^{-1}(T\HH) \rightarrow {\tilde u}^{-1}(\HH \times \R^{n,1}) \xrightarrow{\Pi(u)^\perp} {\tilde u}^{-1}(T\HH^\perp) \rightarrow 0 \end{equation} with an action of $\pi_1(M)$ given by $\rho$. We also define the flat bundle on $M$ \begin{equation}\label{exact45231} E=\tilde M \times_\rho \R^{n,1}.
\end{equation} and the fiber bundle \begin{equation}\label{exact452} H=\tilde M \times_\rho \HH \subset E. \end{equation} Note that $\tilde u$ defines a section $u: M \rightarrow H$. Let \begin{equation}\label{exact452fg} u^{-1}(VTH) \subset E \end{equation} denote the pullback of the vertical tangent bundle of the fibration $H$. We have \begin{equation}\label{exactse2x} 0 \rightarrow u^{-1}(VTH) \rightarrow E \rightarrow u^{-1}(VTH^\perp) \rightarrow 0 \end{equation} which is nothing but the quotient of (\ref{exactse2}) by the action of $\pi_1(M)$. In the case $\tilde u$ descends to a map $u: M \rightarrow N$, (\ref{exactse2x}) can be restated as \begin{equation}\label{exactse20} 0 \rightarrow u^{-1}(T N) \rightarrow E \rightarrow u^{-1}(TN^\perp) \rightarrow 0. \end{equation} The whole point of the above construction can be summarized in the following Lemma: \begin{lemma}\label{recalllemm} Given a smooth map $ u: M \rightarrow N$ we have an exact sequence of bundles (\ref{exactse20}) and a fiber bundle (\ref{exact452}). In particular both $u$ and $du$ can be considered as sections of $E$. \end{lemma} Since the construction is equivariant, we can also keep track of the connections. We still denote by $d$ the flat connection on $E$ induced by pullback via $u$ (after passing to the quotient) and let \begin{equation}\label{exactse20X} D_u=u^{-1}(D^\sharp)=\Pi(u) d \Pi(u)+ \Pi^\perp(u) d \Pi^\perp(u) \end{equation} \begin{equation}\label{exactse20XTY} A_u=u^{-1} A=d\Pi(u) (\Pi(u) - \Pi^\perp(u)). \end{equation} \begin{lemma}\label{decomphig} $E$ is a flat bundle with flat connection $d$, $D_u$ is a connection on $E$ and \[ d=D_u+A_u. \] Furthermore, the restriction of $D_u$ to $u^{-1}(TN)$ (as a subbundle of $E$ given in (\ref{exactse20})) is the pullback of the Levi-Civita connection of $N$. The restriction $A_u\Pi(u)$ of $A_u$ to $u^{-1}(TN)$ is the second fundamental form. \end{lemma} \subsection{The infinitesimal symmetries}\label{sectinfis} Because of curvature, there are no nonvanishing covariantly constant sections of the associated bundles. However, we know that $G$ is a large isometry group preserving the metric on $\HH$. Noether's theorem applies to local isometries, so we first describe the role of local isometries. First recall that an element $w$ of the Lie algebra $\mathfrak g$ of $SO(n,1)$ defines a vector field on $\HH$ by setting $w(X)=wX \in T_X\HH$. Here $X \in \HH \subset \R^{n,1}$ and $w$ acts on $X$ as a skew-adjoint endomorphism with respect to $(,)^\sharp$. Note that $w X \in T_X\HH$. Indeed, \begin{eqnarray}\label{infisoacts} X^\sharp w X &=& (X, w X)^\sharp = (w X, X)^\sharp \nonumber\\ &=& (w X)^\sharp X = X^\sharp w^\sharp X =-X^\sharp w X. \end{eqnarray} Alternatively, by embedding everything into $\R^{n,1}$, $w(X)=\frac{d}{dt} \Big|_{t=0}e^{tw}X$. This agrees with the usual definition of infinitesimal action by isometries. We can thus define a map \[ \alpha_X: \mathfrak g \rightarrow T_X\HH \subset \R^{n,1}, \ \ w \mapsto wX. \] We also have a map the other way \[ \beta_X: \R^{n,1} \rightarrow \mathfrak g, \ \ v \mapsto v \times X. \] \begin{proposition}\label{helpemb0} \begin{itemize} \item $(i)$ For $v \in \R^{n,1}$, $\alpha_X \circ \beta_X(v)=v+(v,X)^\sharp X$. \item $(ii)$ For $v \in \R^{n,1}$, \[ Tr(\beta_X(v) \beta_X(v))=2 (\alpha_X \beta_X(v),v)^\sharp. \] Thus, the adjoint $\beta_X^\sharp=2\alpha_X$. \item $(iii)$ The map $\frac{1}{\sqrt 2} \beta_X$ identifies $T_X\HH$ isometrically with its image $\mathfrak p \subset \mathfrak g$.
\item $(iv)$ $\mathfrak g=\ker(\alpha_X) \oplus \mathfrak p$ is an orthogonal direct sum with respect to the Killing form. The inner product is positive on $\mathfrak p$ and negative on $\ker(\alpha_X)$ and corresponds pointwise to a Cartan decomposition of $\mathfrak g$ into a compact Lie algebra and its complement. \end{itemize} \end{proposition} \begin{proof}For $(i)$, \[ \alpha_X \circ \beta_X(v)=(v \times X) X=Xv^\sharp X-vX^\sharp X=v+(v,X)^\sharp X. \] For $(ii)$, let $v \in \R^{n,1}$. Then, \begin{eqnarray*} Tr(\beta_X(v) \beta_X(v)) &=& Tr(X v^\sharp-vX^\sharp)(X v^\sharp-vX^\sharp) \\ &=& 2 (v,v)^\sharp+2(X,v)^{\sharp 2}\\ &=&2 (v+(v,X)^\sharp X, v)^\sharp. \end{eqnarray*} In the above we used $X^\sharp X=-1$ and the fact that $Trvv^\sharp=(v,v)^\sharp$, $TrXv^\sharp=TrvX^\sharp=(X,v)^\sharp$. For $(iii)$ note that if $v \in T_X\HH$, then $\alpha_X \circ \beta_X(v)=v$. It follows that $\frac{1}{\sqrt 2} \beta_X$ restricted to $T_X\HH$ is injective and identifies $T_X\HH$ isometrically with its image in $\mathfrak g$. $(iv)$ is immediate from the above. \end{proof} We will now apply the above construction to a smooth map $u$ as before. Let $ad(E) $ be the flat Lie algebra bundle over $M$ associated to $\rho$, in other words $ad(E)=\tilde M \times_{Ad(\rho)} \mathfrak g $. Recall also the pullback bundle $u^{-1}(TN) \subset E$. Since the construction is local we will identify $N$ with $\HH$ and $u$ with $\tilde u$. We view the flat Lie algebra bundle $ad(E)$ of skew tensors as the space of infinitesimal local isometries. Given $\phi$ a section of $ad(E)$, let $\psi = \phi u $, a section of $E$. By (\ref{infisoacts}), $\phi u$ is tangent to $\HH$ at $u$ so we have a map \[ \alpha_u: ad(E) \rightarrow u^{-1}(TN) \subset E, \ \ \phi \mapsto \phi u. \] and also \[ \beta_u: E \rightarrow ad(E), \ \ v \mapsto v \times u. \] The maps $\alpha_u$ and $\beta_u$ will play an important role for the rest of the paper. The following follows immediately from Proposition~\ref{helpemb0}: \begin{proposition}\label{helpemb} \begin{itemize} \item $(i)$ For $v \in E$, $\alpha_u \circ \beta_u(v)=v+(v,u)^\sharp u$. \item $(ii)$ For $v \in E$, \[ Tr(\beta_u(v) \beta_u(v))=2 (\alpha_u \beta_u(v),v)^\sharp. \] Thus, the adjoint $\beta_u^\sharp=2\alpha_u$. \item $(iii)$ The map $\frac{1}{\sqrt 2} \beta_u$ identifies $u^{-1}(TN)$ isometrically with its image $\mathfrak p_u \subset ad(E)$. \item $(iv)$ $ad(E)=\ker(\alpha_u) \oplus \mathfrak p_u$ is an orthogonal direct sum with respect to the Killing form. The inner product is positive on $\mathfrak p_u$ and negative on $\ker(\alpha_u)$ and corresponds pointwise to a Cartan decomposition of $\mathfrak g$ into a compact Lie algebra and its complement. \end{itemize} \end{proposition} We next interpret the second fundamental form: \begin{proposition} \label{prop:2fundform} We have \[ A_u = du \times u=\beta_u(du). \] \end{proposition} \begin{proof} Recall that $\Pi^\perp = -uu^\sharp$. So \begin{eqnarray*} A_u&=& d(\Pi(u))(\Pi(u) - \Pi^\perp(u)) \\ &=& d(uu^\sharp )(I + 2 uu^\sharp ) \\ &=& du u^\sharp + u du^\sharp +2duu^\sharp uu^\sharp + 2udu^\sharp u u^\sharp\\ &=& du u^\sharp + u du^\sharp +2du(u, u)^\sharp u^\sharp + 2u(du, u)^\sharp u^\sharp\\ &=& du u^\sharp + u du^\sharp -2duu^\sharp + 2(du, u)^\sharp u u^\sharp \\ &=& -du u^\sharp+ u du^\sharp\\ &=&du \times u. \end{eqnarray*} In the above we used $u^\sharp u = -1$ and that $u$ points in the normal direction, i.e.\ $(du, u)^\sharp = 0$.
\end{proof} \subsection{The Euler-Lagrange equations revisited} We will write the Euler-Lagrange equations in this setting, where the structure of the Noether current is transparent. Since the bundles are flat, this is a global construction. First we are going to briefly discuss the case when $u: M \rightarrow N$ is only $W^{1,p}$, $p> dim M$ but not necessarily smooth. Given $X \in \HH$ and $\xi \in \R^{n,1}$, we write \[ \xi_X=\Pi(X)\xi= \xi+(\xi, X)^\sharp X. \] Recall from Lemma~\ref{recalllemm} that $u$ defines a section of the bundle $E$. We define the $W^{1,p}$-subbundle $E_u \subset E$ as \[ E_u=\{ \xi \in E: \xi_{u(x)}=\xi \ \forall x \in M\}. \] In the case $u$ is smooth $E_u=u^{-1}TN$. Next we are going to discuss the tensor $Q$ in this setting. Denote $E^\sharp$ the dual of $E$ with duality isomorphism $^\sharp: E \rightarrow E^\sharp$ induced by the indefinite metric $(,)^\sharp$. Given $W=W_iF^i \in T^*M \otimes E$ let $W^\sharp=W_iF_i=W_i (F^i)^\sharp \in T^*M \otimes E^\sharp$. Under the same formula as in (\ref{Qeqnstarhtrox}) we define \begin{equation}\label{tenform2} Q(W)^2= (W; W^\sharp) \in E \otimes E^\sharp. \end{equation} Here $(;)$ denotes the inner product in the tangent space of $M$. Note that $Q(W)^2$ is a symmetric endomorphism in $E \otimes E^\sharp$. If $W_u=W,$ then $Q(W)$ sends $E_u$ to itself. We will now assume that $u = u_p$ satisfies the $J_p$-Euler-Lagrange equations. We can apply the above to $W=du$ and use Lemma~\ref{metricHemb} that the inner product $(,)^\sharp $ restricted to $T\HH$ is the Riemannian metric of $\HH$ to obtain \[ Q(du)^2 = (du; du^\sharp) \] is consistent with the definition in Section~\ref{sect1}. For a section $\xi \in \Omega^l(E)$ we will denote by \[ \xi_u=\Pi(u)\xi= \xi+(\xi, u)^\sharp u \] its projection onto the tangent space of $\HH$ at $u$. Similarly, for $\psi \in \Omega^l(ad(E))$ we denote $\psi_u$ its projection onto the positive definite part $\mathfrak p_u$. We will also collect some shortcuts in notation: \[ <A,B> = \int_M (A; B)^\sharp *1 \] \[ <Q(W)^p> = <Q(W)^{p-2}W,W> = \int_M TrQ(W)^p *1=||W||^p_{sv^p}. \] If $\Omega \subset M$, then $<A,B>_\Omega$ refers to the integral over $\Omega$. Likewise for $<Q(W)^p>_\Omega.$ More generally we will use the notation $<f>_\Omega= \int_\Omega f*1$. \begin{proposition}\label{piati} In this notation, the Euler-Lagrange equations for $u=u_p$ can be written \[ < Q(du)^{p-2}du, d\xi > = 0. \] for all $\xi \in \Omega^0(E)$ such that $\xi_u = \xi$. \end{proposition} \begin{proof} The Euler-Lagrange equations are \[ < Q(du)^{p-2}du, D_u\xi > = 0. \] where $\xi$ is tangent i.e $\xi_u = \xi$. By definition, $D_u\xi =\Pi(u)d\xi=(d\xi)_u $. But we can drop the $\Pi$ from the formula since we are evaluating it against a tangent vector, $Q(du)^{p-2}du$. \end{proof} Recall the map $\beta_u$ identifying $T\HH$ with a subbundle of the Lie algebra bundle $ad(E)$ and that $Q(du)^{p-2}du$ is tangent. In view of the Lemma above it is natural to identify $Q(du)^{p-2}du \in T^*M \otimes T\HH$ with $(Q(du)^{p-2}du )\times u \in T^*M \otimes ad(E)$. For simplicity we denote $S_{p-1}(du)= Q(du)^{p-2}du$. With this understood, \begin{theorem}\label{Prop:Elag-bund} Let $u=u_p$ satisfy the $J_p$-Euler-Lagrange equations. Then for any section $\phi \in \Omega^0(ad(E))$, $ <S_{p-1}(du) \times u, d \phi>= 0.$ Equivalently, in the distribution sense \begin{equation}\label{Elag-bund1} d* (S_{p-1}(du) \times u)=0. 
\end{equation} \end{theorem} \begin{proof} By Proposition~\ref{piati}, since $\phi u$ is automatically in the tangent space of $u$, for all $\phi \in \Omega^0(ad(E))$ \[ \int_M ( Q(du)^{p-2}du; d(\phi u))^\sharp *1 = 0. \] We work entirely on the integrand. \[ ( Q(du)^{p-2}du; d(\phi u))^\sharp = (Q(du)^{p-2}du; d\phi u )^\sharp + (Q(du)^{p-2}du;\phi du)^\sharp. \] Rewrite the first term on the right as \begin{eqnarray*} (Q(du)^{p-2}du; d\phi u)^\sharp &=& Tr (Q(du)^{p-2}du; (d\phi u)^\sharp )\\ &=&-Tr( Q(du)^{p-2}duu^\sharp ; d\phi^\sharp )\\ &=& -1/2 Tr(S_{p-1}(du) \times u; d\phi). \end{eqnarray*} The last identity follows from the fact that $d\phi$ is skew, so we can replace $Q(du)^{p-2}duu^\sharp$ by its skew adjoint part $S_{p-1}(du) \times u=(Q(du)^{p-2}du) \times u$. The second term on the right can also be rewritten as \[ (Q(du)^{p-2}du; \phi du)^\sharp = Tr(Q(du)^{p-2}du;du^\sharp \phi^\sharp) =Tr (Q(du)^p \phi^\sharp) = 0. \] Here we use the fact that $Q$ is symmetric and $\phi$ skew-symmetric. \end{proof} We next compute $Q(\beta_u(W))=Q(W \times u)$ and the Schatten-von Neumann norm of $W \times u$. Recall $W: TM \rightarrow E$, $\beta_u(W)=W \times u: TM \rightarrow ad(E)$. Therefore, $Q(W)^2 \in End(E)$ and $Q(W\times u)^2 \in End(ad(E))$. Also recall $\alpha_u(\phi)=\phi u$ from Proposition~\ref{helpemb} is the infinitesimal action of the Lie algebra bundle. \begin{lemma}\label{mapfromlie0er} If $W=W_u,$ then \[ Q(\beta_u(W))^2=2\beta_u Q(W)^2\alpha_u \ \mbox{and} \ Q(\beta_u(W))^2 \beta_u=2\beta_u Q(W)^2. \] Moreover, $Q(\beta_u(W))^q=2^{q/2}\beta_u Q(W)^q\alpha_u$. In particular, $| \beta_u(W)|_{sv^q}= \sqrt 2|W |_{sv^q}$. \end{lemma} \begin{proof} By Proposition~\ref{helpemb}, $ \beta_u^\sharp= 2\alpha_u$. Thus, \begin{eqnarray*} Q(\beta_u(W))^2=(\beta_u(W);\beta_u(W)^\sharp ) = \beta_u (W;W^\sharp) \beta_u^\sharp= 2\beta_u Q(W)^2 \alpha_u. \end{eqnarray*} From this and again Proposition~\ref{helpemb}, \begin{eqnarray*} Q(\beta_u(W))^2\beta_u(v)=2\beta_u Q(W)^2 \alpha_u\beta_u(v)=2\beta_u Q(W)^2(v+(v,u)^\sharp u)=2\beta_u Q(W)^2(v). \end{eqnarray*} The last equality holds because $Q(W)^2$ applied to $u$ is zero. The result for general $q$ as well as the last identity follow again from Proposition~\ref{helpemb} since $\alpha_u \beta_u$ is the identity up to a term in the kernel of $Q(W)$. \end{proof} The previous Lemma is best explained by the commutative diagram below: \begin{equation*} \begin{tikzcd} E \arrow[r, "{\times u}"] \arrow[d,"{ 2Q^2(W)}"] & ad(E) \arrow[d, "{Q^2(W \times u)}" ] \\ E \arrow[r, "\times u" ] & ad(E). \end{tikzcd} \end{equation*} The next proposition follows immediately from the lemma. \begin{proposition}\label{prop:thirdcoreqn} The $J_p$-Euler-Lagrange equations for $u=u_p$ are equivalent to the equation \begin{eqnarray}\label{thirdcoreqn} d*( Q(A_u)^{p - 2}A_u)= 0 \end{eqnarray} for the Lie algebra valued form $A_u=du \times u$. \end{proposition} We end this section with a few comments on the analogue of the Corlette-Donaldson-Hitchin equations. These will play no role in the rest of the paper. Given $ u:M \rightarrow N$ a smooth map, we can decompose the flat connection $d$ on $E$ as $d=D_u+A_u$ where $D_u$ is the Levi-Civita connection on $u^{-1}(TN)$ and $A_u=du\times u$ (cf. Lemma~\ref{decomphig}). By decomposing the flatness equation for $d$ into diagonal and off diagonal parts, we obtain \begin{eqnarray}\label{flateqn} F(D_u)+1/2[A_u, A_u]=0, \ \ D_u(A_u)=0.
\end{eqnarray} The map $u$ into the symmetric space $N$ is often called a {\it generalized metric}. $A_u$ satisfies equations (\ref{flateqn}) and (\ref{thirdcoreqn}). These are the analogues of the Corlette-Donaldson-Hitchin equations for the Higgs field $A_u$ (cf. \cite{corlette}). \subsection{The dual equations}\label{dual} In this section we assume $u=u_p$ satisfies the $J_p$-Euler-Lagrange equations and set $S_{p-1}(du)=Q(du)^{p-2}du$. We will show that $Z=*S_{p-1}(du)$ satisfies equations that can be expressed as the Euler-Lagrange equations of a functional $J_q$ which is analogous to $J_p$ for $q$ conjugate to $p$, i.e.\ \begin{equation}\label{form:conjugate} 1/p+1/q=1. \end{equation} The appropriate framework in which to define this functional is the space of sections of the flat bundle $E$, one of the main difficulties being that the natural metric is indefinite. Given $Z=*S_{p-1}(du) \in \Omega^1(E)$ we would like to minimize the functional $J_q$ within the cohomology class of $Z$. In other words, we would like to minimize the $q$-Schatten-von Neumann norm among variations $Z+ d\xi$ for $\xi \in \Omega^0(E)$. The question is: What is the optimal choice for $d\xi$? In order to make sense of measuring $Z + d\xi$, we need to project into a positive definite subspace, so we will measure this by projecting into the tangent space of $\HH$ at $u$, and measure $(Z + d\xi)_u = Z + (d\xi)_u.$ In order to get a manageable estimate, we will then also restrict to $\xi = \xi_u.$ Note that $D_u \xi = (d\xi)_u$ is then the covariant derivative induced on the pullback of the tangent bundle of $\HH.$ As before, \[ Q(Z + (d\xi)_u)^2 = (Z + (d\xi)_u; Z+ (d\xi)_u)^\sharp. \] \begin{theorem}\label{TTheorem A6} Let $u=u_p$ satisfy the $J_p$-Euler-Lagrange equations and let $Z=*S_{p-1}(du)$. For $\xi \in \Omega^0(E)$ let \[ J_q(\xi) = \int_M Tr Q(Z + (d\xi)_u)^q *1=||Z+ (d\xi)_u||^q_{sv^q}. \] Then the minimum of $J_q$ over all $\xi = \xi_u$ is taken on at $\xi= 0$. Furthermore, \begin{equation}\label{eulags0} d* Q(Z)^{q - 2}Z = 0. \end{equation} \end{theorem} \begin{proof} Since the computations are done in the pull-back tangent bundle of $\HH$, the induced norm is positive definite here. By a straightforward extension of the arguments already used, $J_q $ is a convex functional. The covariant derivative satisfies $(d |\xi|;d|\xi|) \leq (D_u\xi; D_u\xi)^\sharp$, and $D_u $ has no kernel. The computation of the Euler-Lagrange equations is also straightforward. By (\ref{form:conjugate}), since $(q-2)(p-1)=2-p$, \[ Q(Z)^{q-2}Z = Q(du)^{(q-2)(p-1)}Q(du)^{p-2}*du = *du. \] Then, $ddu = 0 $, proving (\ref{eulags0}). \end{proof} There is an analogous formulation of Theorem~\ref{TTheorem A6} in terms of $V=*S_{p-1}(du) \times u$. The proof is the same. \begin{theorem}\label{TTheorem A66} Let $u=u_p$ satisfy the $J_p$-Euler-Lagrange equations and let $V=*S_{p-1}(du)\times u$. For $\psi \in \Omega^0(ad(E))$, let \[ J_q(\psi) = \int_M Tr Q(V + (d\psi)_u)^q *1=||V + (d\psi)_u||^q_{sv^q}. \] Then the minimum of $J_q$ over all $\psi =\psi_u$ is taken on at $\psi= 0$. Furthermore, \begin{equation}\label{eulags} d*Q(V)^{q - 2}V = 0. \end{equation} \end{theorem} Note that Theorems~\ref{TTheorem A6} and~\ref{TTheorem A66} are completely equivalent. Assuming $\psi=\psi_u$, we write $\psi=\xi \times u$ where $\xi=\xi_u$. Then $d\psi=d(\xi \times u)=d \xi \times u+ \xi \times du $ implies $(d\psi)_u=d \xi \times u$. By Lemma~\ref{mapfromlie0er}, the integrals $J_q(\xi)$ and $J_q(\psi)$ agree up to a multiplicative constant, and hence so do their Euler-Lagrange equations.
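The exponent arithmetic in the proof of Theorem~\ref{TTheorem A6} can also be checked pointwise on singular values: since $(p-1)(q-1)=1$, applying $S_{q-1}$ to $S_{p-1}(A)$ returns $A$. The following minimal numerical sketch (Python/NumPy, not part of the argument; the matrix is an arbitrary illustrative stand-in for $du$ at a point) verifies this.
\begin{verbatim}
import numpy as np

def S(A, m):
    # raise the singular values of A to the power m, keeping the singular vectors;
    # S(A, p - 1) plays the role of S_{p-1}(A) = Q(A)^{p-2} A in the text
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(s**m) @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))   # stand-in for du at a point (illustration only)
p = 4.0
q = p / (p - 1.0)                 # conjugate exponent: 1/p + 1/q = 1

assert np.isclose((p - 1.0) * (q - 1.0), 1.0)     # (p-1)(q-1) = 1
assert np.allclose(S(S(A, p - 1.0), q - 1.0), A)  # S_{q-1}(S_{p-1}(A)) = A
\end{verbatim}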
\subsection{The positive definite inner product}\label{posdef} Recall that the inner product $( , )^\sharp$ is not positive definite. To get a positive definite inner product, we must insert a bar operator. This serves as a replacement for conjugation in a complex setting. The operator $O = \Pi - \Pi^\bot = O^\sharp$ preserves the tangent and normal bundle structures of $\HH$. So if we define $\bar W = OW$ on vectors and $\bar B=OBO$ on matrices, then we can get positive definite inner products by setting $(W_1,W_2)^+ = (\bar W_1,W_2)^\sharp$ and $(A,B)^+ =-Tr(\bar A B )$. From this we can induce Riemannian metrics on our bundles. \begin{proposition}\label{Prop: posdef} The inner product $(.,.)^\sharp $ induces via the map $u: M \rightarrow N$ a Riemannian metric $(.,.)^{+, u} $ on the bundles $E$ and $ad(E)$. Furthermore, the map $1/\sqrt 2\beta_u$ identifies $E_u$ isometrically with its image in $ad(E)$. \end{proposition} \begin{proof} For $ x \in \tilde M$ we define the metric on the fiber over $x$ by \[ (v_x, v_x)^{+, u}=(O_{\tilde u(x)}v_{x}, v_{x})^\sharp. \] Recall $E= \tilde M \times_{Ad(\rho)} \R^{n,1}$. Since $v_x$ is identified with $\rho(\gamma)v_{x}$, $O_{\tilde u(\gamma x)}=Ad(\rho(\gamma))O_{\tilde u(x)}$ and the metric $(.,.)^\sharp $ is invariant under $\rho$, $(.,.)^{+, u}$ is well defined on $E$. This defines a (positive definite) Riemannian metric on $E$ and similarly $ad(E)$. Next, we will denote for simplicity $\Pi_{u(x)}=\Pi$ and $O_{u(x)}=O$. If $v$ is tangent to $\HH$ at $u(x)$ then $Ov = \Pi v - \Pi^\bot v=v$, hence \[ (v,v)^\sharp=( v,v )^{+, u}. \] In other words, the indefinite metric agrees with the Riemannian metric. Since \begin{eqnarray*} O\beta_u(v)O&=&(1+2uu^\sharp)(u v^\sharp-v u^\sharp)(1+2uu^\sharp) \\ &=&-\beta_u(v) \end{eqnarray*} we have by Proposition~\ref{helpemb}, \begin{eqnarray*} (\beta_u(v), \beta_u(v))^{+, u} &=&-Tr(O\beta_u(v)O \beta_u(v))\\ &=&Tr (\beta_u(v) \beta_u(v))\\ &=&2 (v,v )^\sharp \ \\ &=&2 ( v,v )^{+, u}. \end{eqnarray*} \end{proof} \section{Regularity theory}\label{sect:regtheor} The study of $J_p$-harmonic maps is as natural as studying $p$-harmonic maps. However, we found no references in the literature about Schatten-von Neumann integrals such as $J_p$. One of the problems is that the usual methods of proving regularity, even for the simplest statement, that $du_p$ is bounded, do not seem to be applicable. The discrepancy between the sizes of the eigenvalues $s_i(du_p)$ prevents uniform estimates even for $p = 4.$ However, in Theorem~\ref{mainregtH} we have the a priori estimates showing that for $u_p$ satisfying the $J_p$-Euler-Lagrange equations, $D(Q(du_p)^{p/2-1}du_p) \in L^2$. \subsection{Estimates on the indefinite metric}We first derive some basic estimates by considering maps $f: M \rightarrow N$ as sections of the $\HH$-bundle $H$ embedded in the flat $\R^{2,1}$ bundle $E$ determined by the homotopy class of the map. Because the metric on $\R^{2,1}$ is indefinite, the geometry is slightly different than what we are accustomed to. These simple results will be used throughout the subsequent sections. \begin{lemma}\label{distpos} Let $X,Y \in \HH$ and $t = dist (X,Y).$ Then \[ t^2 \leq (X - Y,X - Y)^\sharp = 2(\cosh t -1) \leq t^2 \cosh t . \] \end{lemma} \begin{proof} By equivariance, we may assume $X_i = 0, i = 1,2$, $X_3 = 1$ and $Y_1 = 0, Y_2 = \sinh t, Y_3 = \cosh t. $ Then the distance in $\HH$ from $X$ to $Y$ equals \[ \int_0^t \left (X'_2(\tau)^2 - X'_3(\tau)^2 \right)^{1/2}d\tau = t.
\] The geodesic is \[ X_1(\tau) = 0, \ X_2(\tau) = \sinh(\tau), \ X_3(\tau) = \cosh(\tau). \] Since \[ (X-Y,X-Y)^\sharp = (\sinh t)^2 - (\cosh t - 1)^2 = 2(\cosh t - 1)\geq t^2, \] the estimate $2(\cosh t - 1) \leq t^2 \cosh t $ can be obtained via calculus. \end{proof} We denote the orthogonal projection of $W$ into the tangent space of $\HH$ at $X$ by \begin{equation}\label{proj1} W_X = W + (W,X)^\sharp X. \end{equation} \begin{lemma}\label{supptLemma2} Let $X, Y \in \HH$, $W_Y = W$. Then \[ (W_X,W_X)^\sharp \leq \left (1 + 1/2(X-Y,X-Y)^\sharp \right)^2(W,W)^\sharp. \] \end{lemma} \begin{proof} \[ (W_X,W_X)^\sharp = (W + (W,X)^\sharp X, W+(W,X)^\sharp X)^\sharp = (W,W)^\sharp + {(W,X)^\sharp}^2. \] But \begin{eqnarray}\label{strineq1} {(W,X)^\sharp}^2 &=& {(W, X - Y)^\sharp}^2 \nonumber \\ &=& {(W,(X -Y)_Y)^\sharp}^2 \nonumber \\ &=& {(W, X - Y + (X - Y,Y)^\sharp Y)^\sharp}^2 \\ &\leq& (W,W)^\sharp \left((X - Y, X - Y)^\sharp + {(Y-X,Y)^\sharp}^2\right) \nonumber\\ &=& (W,W)^\sharp (X - Y, X - Y)^\sharp (1 + 1/4(X - Y, X - Y)^\sharp).\nonumber \end{eqnarray} The inequality follows from the fact that both terms are in the tangent space at $Y$, in which the inner product is positive definite, and the last equality from \begin{eqnarray*} (Y - X,Y)^\sharp &=& 1/2(2(-X,Y)^\sharp -2)\\ &=& 1/2(-2(X,Y)^\sharp +(Y,Y)^\sharp +(X,X)^\sharp) \\ &=&1/2(X - Y, X - Y)^\sharp. \end{eqnarray*} The desired inequality follows from this. \end{proof} We assume $W: T_x M\rightarrow T_X \HH \subset \R^{2,1} $ with the usual positive definite inner product on $T_x M$ and the indefinite one on $\R^{2,1}$. Recall from (\ref{tenform2}) that we defined $Q(W)$ by $Q(W)^2 = (W ; W^\sharp) $. Typically $W=dw$ for a section $w$ of the bundle $H$ defined by the homotopy class. Let $s(W) = s_1(W)$ be the largest eigenvalue of $Q(W)$ as in Definition~\ref{schatten}. Then \begin{eqnarray}\label{shatl1} s (W)^l \leq Tr Q(W)^l \leq 2s (W)^l, \end{eqnarray} which makes $s(W)$ useful in estimates. At the same time, we find that the weight \begin{eqnarray}\label{deromea} \omega(X,Y) = 1 + 1/2(X-Y, X-Y)^\sharp \end{eqnarray} appears often in our estimates. \begin{lemma} \label{compprojnorm} If $W = W_Y$, then the pointwise Schatten-von Neumann norms satisfy \[ |W|_{sv^p} \leq |W_X|_{sv^p} \leq \omega(X,Y)|W|_{sv^p} \ \ 1 \leq p \leq \infty. \] Moreover, for each singular value \[ s_j(Q(W)) \leq s_j (Q(W_X)) \leq \omega(X,Y)s_j(Q(W)). \] \end{lemma} \begin{proof}This follows from the estimates in Lemma~\ref{supptLemma2} and the definition of $\omega(X,Y).$ \end{proof} \begin{lemma}\label{formdRw} Assume $w, f :M \rightarrow N$, $W =dw$. Let $(w - f)_w = w - f + R$, where $R = (w - f,w)^\sharp w. $ Then \begin{eqnarray*} (dR)_w = 1/2(w-f,w-f)^\sharp dw. \end{eqnarray*} \end{lemma} \begin{proof} Using the fact that $w$ is orthogonal to the tangent space, \begin{eqnarray*} (dR)_w&=&\left (d(w - f,w)^\sharp w+(w - f,w)^\sharp dw \right)_w \\ &=&(w - f,w)^\sharp dw. \end{eqnarray*} Since $(w,w)^\sharp = (f,f)^\sharp = -1$, \[ (w-f,w)^\sharp = 1/2(w-f,w-f)^\sharp. \] The lemma follows immediately from this. \end{proof} \begin{proposition}\label{supptprop5} Assume $w = u_p$ satisfies the $J_p$-Euler-Lagrange equations, $W=du_p$ and $f :M \rightarrow N$ is a comparison map. Then \[ < \omega(w,f) Q(W)^{p-2}W, W - F> = 0.
\] Here $F = \omega(w,f)^{-1}(df)_{w} $ satisfies $s(F) \leq s(df).$ \end{proposition} \begin{proof} Because $w$ satisfies the Euler-Lagrange equations for $ J_p$, \begin{eqnarray*} 0 &= &<Q(W)^{p-2}W, D_{w}(w-f)_{w}> \\ &= &<Q(W)^{p-2}W, d(w-f)_{w}> \\ &= &<Q(W)^{p-2}W, W - df + dR> \\ &= &<Q(W)^{p-2}W, \left(1 + 1/2 (w-f, w-f)^\sharp \right)W - df> \\ &= &<\omega(w,f)Q(W)^{p-2}W, W - F>. \end{eqnarray*} Here we have used that $Q(W)^{p-2}W$ is tangent and Lemma~\ref{formdRw}. The fact that $s(F) \leq s(df)$ follows from Lemma~\ref{compprojnorm}. \end{proof} The maps we are dealing with are not necessarily smooth. We can approximate $W^{1,p}$ maps by $C^\infty$ maps, prove the inequalities for them (of course, remembering that the approximations do not satisfy the Euler-Lagrange equations, but do up to a term which goes to zero as the approximation goes to its limit), and then pass to the limit. This is a standard technique in basic differential topology of maps based on Banach norms, and we do not go into the details. The next Lemma follows immediately from Proposition~\ref{lemmaconvvnorms}. We simply use the pointwise inequality on differentiable sections $W$ and $F$ of the tangent bundle with a smooth weight. The weights simply multiply the inequality pointwise. Since the differentiable sections and smooth weights are dense, it follows for all $W^{1,p}$ sections $W$ and $F$ and bounded measurable non-negative weights $\omega.$ \begin{lemma}\label{sptlemma7} Let $w: M \rightarrow N$ be a $W^{1,p}$ map and $W=dw$. If $0 \leq \omega=\omega(x) \leq k$ is a weight and $F_w=F$, then \[ p<\omega Q(W)^{p-2}W, F - W> \leq <\omega Tr(Q(F)^p-Q(W)^p)>. \] \end{lemma} \subsection{Pointwise inequalities} In this section we will prove some pointwise inequalities that we will need in the sequel. We found no reference for similar inequalities in the literature. Let $V_1$ and $V_2$ be inner product spaces of dimension 2 and $A,B \in Hom(V_1, V_2)$. In the applications of the first inequalities $A = du$ and $B = (dw)_u$ both map $V_1=T_xM$ to $V_2=T_{u(x)} N$. As before $Q(A)^2 = A A^T$ is a symmetric map of $V_2$ and $\mathcal Q(A)^2 = A^T A$ a symmetric map of $V_1$. Let $S_q(A)=Q(A)^{q-1}A$, a map from $V_1$ to $V_2$. We also introduce the notation $\delta(X,Y)=(X-Y, X-Y)^\sharp.$ We start with the following non-standard inequality: \begin{lemma}\label{lemmappp} For $x, y > 0$, $p>2$ \[ (x^{p/2} \pm y^{p/2})^2 < p(x^{p-1} \pm y^{p-1})(x\pm y). \] \end{lemma} \begin{proof} Assume $x\geq y$ and note the inequality is trivial if $y = 0.$ If $y > 0$, divide the expression by $y^p.$ We see it is sufficient to prove the inequality for $x' = x/y\geq 1$, $y' = 1$. We prove this first for the minus sign, which is less standard. At $x = 1$, both sides and their derivatives are zero. If we compute the second derivatives of each side, we get on the right \[ p\left(p(p-1)x^{p-2} - (p-1)(p-2)x^{p-3}\right) = p(p-1)x^{p-3}(px- (p-2)) \geq p(p-1)x^{p-2}. \] Note we used $x\geq1$ for this last step. And on the left \[ p(p-1)x^{p-2} - p(p/2 - 1)x^{p/2 - 2} < p(p-1)x^{p-2}. \] So the second derivative of the left-hand expression is less than the second derivative of the right-hand expression, and we can conclude the inequality. For the $+$ sign, we simply write out left and right sides. Assuming $p>2$, \begin{eqnarray*} (x^{p/2} + 1)^2 = x^p +2 x^{p/2} + 1\leq x^p +2 x^{p-1} + 1\\ \leq 2(x^{p-1} + 1)(x+1) < p(x^{p-1} + 1)(x+1).
\end{eqnarray*}
\end{proof}
\begin{proposition} \label{propineq1}
\begin{eqnarray*}
(S_{p/2}(A) - S_{p/2}(B); S_{p/2}(A) - S_{p/2}(B))^\sharp \leq p (S_{p-1}(A) - S_{p-1}(B), A- B)^\sharp.
\end{eqnarray*}
\end{proposition}
\begin{proof}
Since the diagonalizable matrices are dense, it suffices to prove the inequality for those. We will do this for all pointwise inequalities in this section without explicit mention each time. Let $A = \sum \alpha_j a_j \otimes A_j$ and $B = \sum \beta_j b_j \otimes B_j$ where $a_j$ and $b_j$ are orthonormal bases for $V_1$, $A_j$ and $B_j$ are orthonormal bases for $V_2$ chosen so that the real numbers $\alpha_j$ and $\beta_j$ are positive. Then we can write
\begin{eqnarray*}
b_1 &=& \cos \theta a_1 + \sin \theta a_2 \ \ \ b_2 = -\sin \theta a_1 + \cos \theta a_2\\
B_1 &=& \cos \phi A_1 + \sin \phi A_2 \ \ B_2 = -\sin \phi A_1 + \cos \phi A_2.
\end{eqnarray*}
A direct computation gives
\begin{eqnarray*}
\lefteqn{ (S_{p-1}(A) - S_{p-1}(B); A-B)^\sharp} \\
&=&(Q(A)^{p-2}A - Q(B)^{p-2}B; A-B)^\sharp\\
&=& (\sum_k (\alpha_k^{p-1}a_k \otimes A_k - \beta_k^{p-1}b_k \otimes B_k); \sum_j (\alpha_j a_j \otimes A_j - \beta_j b_j \otimes B_j))^\sharp\\
&=& \sum_j ( \alpha_j^p +\beta_j^p - \cos \theta \cos \phi(\alpha_j^{p-1}\beta_j + \beta_j^{p-1}\alpha_j) - \sin \theta \sin \phi(\alpha_j^{p-1}\beta_{j'} + \beta_j^{p-1}\alpha_{j'})).
\end{eqnarray*}
Here $1' = 2$ and $2' = 1$. Let
\[
E_1 = (-1)^n \cos \theta\cos \phi \geq 0 \ \mbox{and} \ E_2 = (-1)^m \sin \theta\sin \phi \geq 0,
\]
where $n, m \in \{0,1\}$ are chosen to make $E_1$ and $E_2$ non-negative. Note that $E_1 + E_2 \leq 1$. We rewrite
\begin{eqnarray*}
\lefteqn{(S_{p-1}(A) - S_{p-1}(B); A-B)^\sharp} \\
&=& \sum_j (\alpha_j^p + \beta_j^p) (1 - E_1 - E_2) + E_1 (\alpha_j^{p-1} - (-1)^n \beta_j^{p-1})(\alpha_j -(-1)^n\beta_j)\\
&+&E_2 (\alpha_j^{p-1} - (-1)^m \beta_{j'}^{p-1})(\alpha_j -(-1)^m \beta_{j'}).
\end{eqnarray*}
Using the same rules, we get
\begin{eqnarray}\label{Esecondterm1}
\lefteqn{\left |S_{p/2}(A)-S_{p/2}(B) \right|^2}\nonumber \\
&=&\sum_j ( \alpha_j^p + \beta_j^p )(1 - E_1 - E_2) + E_1 (\alpha_j^{p/2} - (-1)^n \beta_j^{p/2})^2 \\
&+&E_2 (\alpha_j^{p/2} - (-1)^m \beta_{j'}^{p/2})^2. \nonumber
\end{eqnarray}
The first terms in the two expressions are equal, so the factor of $p$ only helps there. For the remaining terms, the inequality follows by applying Lemma~\ref{lemmappp} to the expressions with matching indices in the sums which multiply the $E$'s.
\end{proof}
The next inequality is much easier and follows from the same computation. The relevant one-variable inequality is the following simple one:
\begin{lemma}\label{lemmappp234}For $x, y > 0$, $p>2$ and $z \geq \max\{x,y\}$,
\[
(x^{p-1} \pm y^{p-1})^2 \leq 4z^{p-2}(x^{p/2} \pm y^{p/2})^2.
\]
\end{lemma}
\begin{proof}Assume without loss of generality that $x \geq y$. Both sides are homogeneous of degree $2p-2$ in $(x,y,z)$, so we may assume $y=1$, and it suffices to prove
\[
x^{p-1} \pm 1 \leq 2z^{p/2-1}(x^{p/2} \pm 1) \ \ \mbox{for} \ z \geq x \geq 1
\]
and then square. The inequality holds for $x = 1.$ Since the derivative of the left hand side is less than the derivative of the right hand side (here we use $z \geq x$), the inequality follows.
\end{proof}
\begin{proposition}\label{propineq2}
\begin{eqnarray*}
\left |S_{p-1}(A) - S_{p-1}(B) \right|^2 \leq 4\left(\max(s(A),s(B))\right)^{p-2}\left | S_{p/2}(A)-S_{p/2}(B) \right|^2.
\end{eqnarray*}
\end{proposition}
\begin{proof} The left hand side of this inequality is the same as the one in (\ref{Esecondterm1}) with $p$ replaced by $2p-2$.
Thus, \begin{eqnarray}\label{Esecondterm2} \lefteqn{\left |S_{p-1}(A) - S_{p-1}(B) \right|^2} \nonumber \\ &=& \left |Q(A)^{p-2}A - Q(B)^{p-2}B \right|^2 \nonumber\\ &=&\sum_j ( \alpha_j^{2p-2} + \beta_j^{2p-2} )(1 - E_1 - E_2) + E_1 (\alpha_j^{p-1} - (-1)^n \beta_j^{p-1})^2\\ &+&E_2 (\alpha_j^{p-1} - (-1)^m \beta_{j'}^{p-1})^2.\nonumber \end{eqnarray} If we multiply the terms of four times the expression of $|Q(A)^{p/2-1}A - Q(B)^{p/2-1}B|^2$ given in (\ref{Esecondterm1}) by the largest of the four terms $\alpha_k^{p-2}$ and $\beta_k^{p-2}$, $k = 1,2$, this will dominate the corresponding terms of $|Q(A)^{p-2}A - Q(B)^{p-2}B)|^2$ given in (\ref{Esecondterm2}). \end{proof} In the next inequality, we look at a slightly more complicated situation. Let \[ \tilde B: V_1 =T_xM \rightarrow \tilde V_2 = T_YN \subset R^{2,1} \] where $Y = w(x)$ is a different point than $X = u(x).$ Let \[ B = \tilde B_X = \tilde B + (\tilde B,X)^\sharp X =\tilde B + (\tilde B, X-Y)^\sharp X. \] Note that $(\tilde B, X)^\sharp$ is a cotangent (= tangent) vector on $M$ defined by \[ (\tilde B,X)^\sharp(a)=(\tilde B(a),X)^\sharp \ \mbox{where} \ a \in V_1=T_xM. \] Also (after identifying tangent with cotangent vectors) \begin{eqnarray}\label{mathcalQproj} \mathcal Q(B)^2 &= & \mathcal Q(\tilde B)^2 + (\tilde B, X)^\sharp \otimes (\tilde B, X )^\sharp \\ &= & \mathcal Q(\tilde B)^2 + (\tilde B, X-Y)^\sharp \otimes (\tilde B, X-Y )^\sharp. \nonumber \end{eqnarray} Indeed, for $a, c \in V_1=T_xM$, \begin{eqnarray*} (\mathcal Q(B)^2a;c) &=&(Ba, Bc)^\sharp \\ &=& (\tilde Ba + (\tilde Ba, X)^\sharp X, \tilde Bc + (\tilde Bc, X)^\sharp X)^\sharp \\ &=& (\mathcal Q(\tilde B)^2a;c)+(\tilde Ba, X)^\sharp (\tilde Bc, X)^\sharp. \end{eqnarray*} \begin{proposition} \label{morecomp1} If $\tilde B_Y = \tilde B$ (namely, $ \tilde B $ is in the tangent space of $N$ at $Y$), then \[ ((S_{p-1}(\tilde B),X)^\sharp; (\tilde B,X)^\sharp) \leq Tr Q(\tilde B)^p \delta(X,Y)(1 + 1/4\delta(X,Y)) \] \end{proposition} \begin{proof} We can decompose $\tilde B = \sum_j \tilde \beta_j b_j \otimes \tilde B_j$ where $b_j$ and $\tilde B_j$ are orthonormal bases of $V_1=T_xM$ and $V_2 = T_YN$ respectively. Then \begin{eqnarray*} ((S_{p-1}(\tilde B),X)^\sharp;(\tilde B,X)^\sharp )&=& \sum_j \tilde \beta_j^{p-1}b_j(\tilde B_j,X)^\sharp \sum_k \tilde \beta_k b_k (\tilde B_k,X)^\sharp \\ &=& \sum_j \tilde \beta_j^p {(\tilde B_j,X)^\sharp}^2 \\ &\leq&\sum_j \tilde \beta_j^p{(\tilde B_j, \tilde B_j)^\sharp}^2\delta(X,Y)(1 + 1/4\delta(X,Y))\\ &=& \sum_j \tilde \beta_j^p\delta(X,Y)(1 + 1/4\delta(X,Y)). \end{eqnarray*} In the above the inequality follows from (\ref{strineq1}). \end{proof} Now we come to the most complicated situation. The terms we are estimating do not appear in the final computation of the derivative from difference quotients, so they are smaller than our other terms. Because $p$ is large, it is a real nuisance to compute. We do an easy computation to warm up. We could get better decay for $p \geq 8,$ but we give the proof which includes $p = 4.$ We assume that $\delta(X,Y) = (X-Y, X-Y )^\sharp < 1/10$, so as not to carry around an extra factor. 
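\begin{remark}
Before turning to this warm-up computation, we record a quick numerical plausibility check, not used anywhere in the proofs, of Propositions~\ref{propineq1} and~\ref{propineq2}. In the positive definite setting of this subsection the pairing is the Frobenius pairing, and if $A = U\Sigma V^T$ is a singular value decomposition then $S_q(A) = U\Sigma^q V^T$. The following Python sketch (the helper \texttt{S} is ours) tests the two inequalities on random $2\times 2$ matrices with $p=4$:
\begin{verbatim}
import numpy as np

def S(A, q):                  # S_q(A) = Q(A)^{q-1} A = U diag(sigma^q) V^T
    U, sig, Vt = np.linalg.svd(A)
    return U @ np.diag(sig ** q) @ Vt

p = 4.0
rng = np.random.default_rng(0)
for _ in range(10000):
    A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    D = S(A, p / 2) - S(B, p / 2)
    lhs1 = np.sum(D ** 2)                # |S_{p/2}(A) - S_{p/2}(B)|^2
    rhs1 = p * np.sum((S(A, p - 1) - S(B, p - 1)) * (A - B))
    s_max = max(np.linalg.svd(A, compute_uv=False)[0],
                np.linalg.svd(B, compute_uv=False)[0])
    lhs2 = np.sum((S(A, p - 1) - S(B, p - 1)) ** 2)
    rhs2 = 4 * s_max ** (p - 2) * lhs1
    assert lhs1 <= rhs1 * (1 + 1e-12) + 1e-12
    assert lhs2 <= rhs2 * (1 + 1e-12) + 1e-12
\end{verbatim}
\end{remark}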
\begin{lemma}\label{props5} Let $ \mathcal Q(B)^{2q} - \mathcal Q (\tilde B)^{2q}: V_1 \rightarrow V_1$ be a symmetric map, with $\tilde B_Y = \tilde B$ and $\tilde B_X = B.$ Then the eigenvalues of $\mathcal Q(B)^{2q} - \mathcal Q (\tilde B)^{2q}$ are non-negative and bounded above by $2q\beta_1^{2q}\delta(X,Y).$ \end{lemma} \begin{proof} Recall from (\ref{mathcalQproj}), $\mathcal Q (\tilde B)^2 = \mathcal Q(B)^2 - (\tilde B,X)^\sharp \otimes (\tilde B,X)^\sharp.$ We use (\ref{strineq1}) to estimate \begin{equation}\label{normtilBX} | (\tilde B,X)^\sharp | \leq s(\tilde B)(\delta(X,Y)(1+ 1/4\delta(X,Y)))^{1/2}. \end{equation} We rewrite \[ (\tilde B,X)^\sharp \otimes (\tilde B,X)^\sharp = s(B)^2 \delta(X,Y) C \] where $C$ is a rank 1 symmetric matrix whose entries are all less than a number slightly larger than 1 (recall $\delta(X,Y)$ is small). We want to compute the operator norm of $\mathcal Q(B)^{2q} - \mathcal Q(\tilde B)^{2q}.$ As symmetric matrices, \begin{eqnarray*} 0 \leq \mathcal Q(B)^{2q} -\mathcal Q(\tilde B)^{2q} &=& \mathcal Q(B)^{2q} - (\mathcal Q(B)^2 - s(B)^2\delta(X,Y)C)^q \\ &\leq & (\mathcal Q(B)^{2q} - (\mathcal Q(B)^2 - 2s(B)^2\delta(X,Y)Id)^q. \end{eqnarray*} The eigenvalues of the larger matrix are \[ \beta_j^{2q} - (\beta_j^2 - 2\beta_1^2\delta(X,Y))^q \leq 2q \beta_1^{2q} \delta(X,Y). \] This means the smaller matrix must have a smaller operator norm. This gives the result. \end{proof} \begin{proposition}\label{propineq6} Let $\tilde B_Y = \tilde B$ and $\tilde B_X = B.$ Then, \[ |(S_{p-1}(B) - S_{p-1}(\tilde B), X - Y)^\sharp| \leq 2(p-1)s(B)^{p-1} \delta(X,Y)^{3/2}. \] \end{proposition} \begin{proof} \begin{eqnarray*} \lefteqn{\left |(S_{p-1}(B) - S_{p-1}(\tilde B), X - Y)^\sharp \right |} \\ &\leq & \left |(S_{p-1}(B) - S_{p-1}(\tilde B)_X, (X - Y)_X)^\sharp \right |\\ &+& \left | (S_{p-1}(\tilde B),X)^\sharp (X-Y,X)^\sharp\right |. \end{eqnarray*} The second factor is easy to estimate, as $(X-Y,X)^\sharp = 1/2\delta(X,Y)$ and \begin{eqnarray*} |(S_{p-1}(\tilde B),X)^\sharp | &=& |\mathcal Q(\tilde B)^{p-2}(\tilde B,X)^\sharp| \\ &\leq& s(\tilde B)^{p-2}|(\tilde B,X)^\sharp|\\ &\leq& s(\tilde B)^{p-1}(\delta(X,Y)(1 + 1/4\delta(X,Y))^{1/2}. \end{eqnarray*} In the last inequality we used (\ref{normtilBX}). Recall $s(\tilde B) \leq s(B)$, so we may replace $s(\tilde B)$ by $s(B)$ and absorb the $(1 + 1/4\delta(X,Y))^{1/2}$ in the constant. Next, \[ (S_{p-1}(B) - S_{p-1}(\tilde B)_X, (X - Y)_X)^\sharp = (\mathcal Q(B)^{p-2}- \mathcal Q(\tilde B)^{p-2})(B,(X-Y)_X)^\sharp. \] From Lemma~\ref{props5}, we can estimate the operator norm of $\mathcal Q(B)^{p-2}- \mathcal Q(\tilde B)^{p-2}$ by $(p-2)s(B)^{p-2}\delta(X,Y)$, the norm of $B$ by $s(B)$ and the norm of $(X-Y)_X$ by $\delta(X,Y)^{1/2}(1+ 1/4\delta(X,Y))^{1/2}.$ \end{proof} \begin{proposition}\label{propine7} Let $\tilde B_Y = \tilde B$ and $\tilde B_X = B.$ Then, \[ |S_{p/2}(\tilde B)_X - S_{p/2}(B)| \leq p\delta(X,Y) s(B)^{p/2}. \] \end{proposition} This follows in the same fashion as in the first estimate of the previous proposition. \begin{proposition}\label{propine8} Let \[ \tilde B_Y =\tilde B, \ \tilde B_X = B, \ A_X = A \] and $p \geq 4.$ Then, \[ |(S_{p-1}(\tilde B) - S_{p-1}(B); B - A )^\sharp| \leq 2(p-2)C s(B)^{p-2}\delta(X,Y) |S_{p/2}(B)-S_{p/2}(A)|^{4/p}. \] Here $C$ is a combinatorial constant independent of $p$ computed using the number of terms in each summand. 
\end{proposition} \begin{proof} Since $(B-A)_X = B-A$ we can compute \begin{eqnarray*} \lefteqn{|(S_{p-1}(\tilde B) - S_{p-1}(B); B - A )^\sharp|}\\ &=&| ((\mathcal Q(\tilde B)^{p-2} - \mathcal Q(B)^{p-2})B; B-A)^\sharp \\ &=& |Tr (\mathcal Q(\tilde B)^{p-2} - \mathcal Q(B)^{p-2}) (B, B-A)^\sharp |\\ &\leq & 8 \mbox{ max eigenvalue of} \ (Q(\tilde B)^{p-2}-Q(B)^{p-2}) \\ &\times & \mbox{largest entry in the $2 \times 2$ matrix} \ (B, B-A)^\sharp. \end{eqnarray*} We can take the largest entry in any orthogonal basis (we use the $b_j$). Here the $2 \times 2$ matrix \[ (B, B-A)^\sharp = \sum_j \beta_j^2 b_j \otimes b_j - \sum_{j,k} \beta_j\alpha_k (B_j, A_k) (b_j \otimes a_k). \] We have already estimated the eigenvalues of $Q(\tilde B)^{p-2}-Q(B)^{p-2}$ in Lemma~\ref{props5}. We need only estimate the terms in $(B, B-A)^\sharp. $ We recall that $B = \sum_j \beta_j b_j \otimes B_j$ and $A = \sum_k \alpha_k a_k \otimes A_k.$ We will use the calculations in the proof of Proposition~\ref{propineq1}, except we now use the bases $b_j $ of $T_xM$ and $B_j$ of $T_XN$ to expand in. This means that \begin{eqnarray*} a_1 &=& \cos\theta b_1 - \sin\theta b_2 \ \ a_2 = \sin \theta b_1 + \cos \theta b_2 \\ A_1 &=& \cos \phi B_1 - \sin \phi B_2 \ \ A_2 = \sin \phi B_1 + \cos\phi B_2. \end{eqnarray*} We get that the matrix $(B, B-A)^\sharp$ is \begin{eqnarray*} && \beta_1(\beta_1 -\alpha_1 E_1 - \alpha_2 E_2)( b_1 \otimes b_1) + \beta_2(\beta_2 - \alpha_2 E_1 - \alpha_1 E_2) (b_2 \otimes b_2)\\ &+& \beta_1 (\alpha_1 E_3 - \alpha_2 E_4) (b_1\otimes b_2) +\beta_2(\alpha_1 E_4-\alpha_2 E_3) (b_2 \otimes b_1). \end{eqnarray*} Here, as before $E_1 = \cos\theta \cos\phi$ and $E_2 = \sin\theta\sin\phi.$ New in this computation is $E_3 = \sin\theta \cos\phi$ and $E_4 = \cos\theta\sin\phi.$ For convenience, let $Z = |S_{p/2}(B)-S_{p/2}(A)|^2.$ We need to bound all the terms above by a constant times $Z^{2/p}.$ As before, some complications arise if the signs of the cosine and sines are not as expected. Without changing signs, a computation similar to (\ref{Esecondterm1}) implies \begin{eqnarray}\label{Esecondterm1Y} \lefteqn{\left |S_{p/2}(A)-S_{p/2}(B) \right|^2}\nonumber \\ &=&\sum_j ( \alpha_j^p + \beta_j^p )(1 - E_1 - E_2) + E_1 (\alpha_j^{p/2} - \beta_j^{p/2})^2 \\ &+&E_2 (\alpha_j^{p/2} - \beta_{j'}^{p/2})^2. \nonumber \end{eqnarray} If $E_1$ and $E_2$ are negative \[ Z \geq \sum (\alpha_j^p + \beta_j^p). \] In this case, we can estimate all terms easily. In the remaining cases $E_1$ and $E_2$ are interchangeable by reversing the roles of $b_1$ with $b_2$ and $B_1$ with $B_2$. We may thus assume $E_1 \geq E_2.$ Also \[ E_1 + |E_2| <1. \] In the proof of Propositions~\ref{propineq1} and~\ref{propineq2}, we already handled terms that look like the coefficients of $b_j \otimes b_j. $ The only new ingredient we need here is that $|x (x-y)| \leq |x^{p/2} - y^{p/2}|^{4/p}$ which can be easily checked. To handle the off diagonal terms, we first decide which of $|\sin\theta|$ and $|\sin\phi|$ is smaller. Suppose it is $|\sin\theta|$. Then $|E_3|^2 \leq |E_2|. $ We write the coefficient of $ b_1 \otimes b_2$ as \[ \beta_1 (\alpha_1 E_3 - \alpha_2 E_4) = \beta_1 (\alpha_1 - \alpha_2)E_3 +\beta_1\alpha_2(E_3 - E_4). \] But \begin{eqnarray*} (E_3 - E_4)^2 &=& \sin(\theta - \phi)^2 = 1 - cos(\theta - \phi)^2 \leq 2(1 - cos(\phi - \theta)) \\ &=&2(1 - E_1 - E_2). 
\end{eqnarray*} Hence, if $E_2 \geq 0$ (recall $p\geq 4$) \begin{eqnarray*} |\beta_1 \alpha_1 (E_3 - E_4)|^{p/2} &\leq& \beta_1^{p/2} \alpha_1^{p/2} |(E_3 - E_4)|\\ &\leq& \beta_1^{p/2} \alpha_1^{p/2} 2(1 - E_1 - E_2) \leq 2 Z. \end{eqnarray*} If $E_2 < 0$ we use instead \begin{eqnarray*} Z &\geq & 1/2 \sum_j (\alpha_j^p+ \beta_j^p) (1 - E_1 -E_2) \\ &\geq& 1/2 \alpha_1^{p/2}\beta_1^{p/2} (E_3 -E_4)^2 \geq 1/2 |\beta_1 \alpha_1 (E_3 - E_4)|^{p/2}. \end{eqnarray*} For the proof of the first inequality we use $1+E_2>E_1$. To estimate the other term, we write \[ |\beta_1(\alpha_1 - \alpha_2)E_3| \leq \sum_k \beta_1 |(\beta_1 - \alpha_k)E_3|. \] If $E_2 \geq 0$, \[ Z\geq \sum_{j,k} (\beta_j^{p/2} - \alpha_k^{p/2})^2 E_2. \] Recall that we chose $E_2 \leq E_1$ for exactly this purpose. Since $|E_3| ^2 \leq |E_2|$ and we already met the estimate $|x (x-y)| \leq |x^{p/2} - y^{p/2}|^{4/p},$ the bound on this term is complete. The case when $E_2<0$ is easier, since in this case \[ Z \geq \sum (\alpha_j^p + \beta_j^p)(1 - E_1) \geq \sum (\alpha_j^p + \beta_j^p) |E_2|. \] The coefficient of $b_2 \otimes b_1$ is estimated in the same way. We reverse the roles of $E_3$ and $E_4$ in the case that $|\sin\theta| \geq |\sin\phi|.$ \end{proof} \subsection{The regularity theorem} In this section we prove that if $u=u_p$ satisfies the $J_p$-Euler-Lagrange equations, $D(Q(du)^{p/2 - 1}du)=DS_{p/2}(du)$ is in $L^2.$ We find the notation $Q(du)^{q-1}du = S_q(du)$ and $\delta(u,w) = (u-w, u-w)^\sharp$ useful. The a priori estimate predicting this is easily obtained from a Bochner formula, but to obtain the result for a solution only known to be in $W^{1,p}$, we must take difference quotients. Because we are mapping into a locally symmetric space, we can locally consider differences obtained using the solution $u$, and the translated solutions $w_t = g_t^*u$, or $w_t(x) = u(g_tx)$ for $g_t = \exp ta$, for a an arbitrary elements in the Lie algebra. If we can obtain bounds in $L^2$ on the difference quotients, $1/t \left(S_{p/2}(dw_t)_u - S_{p/2}(du)\right)$, then the covariant derivative of $S_{p/2}(du)$ exists in $L^2.$ Assume $w$ and $u$ are solutions to the $J_p$-Euler-Lagrange equations in a ball, and $\phi$ is a cut-off function with support in the ball $\Omega$ (and equal to 1 on a smaller ball $\Omega'$). Use the Euler-Lagrange equations for $u$ to get: \[ <S_{p-1}(du), d(\phi^2(u - w + (u-w,u)^\sharp u))> = 0. \] Rearrange to get \[ <S_{p-1}(du),d(\phi^2(u-w))> = -1/2 <\phi^2 (S_{p-1}(du),du)^\sharp \delta(u,w)>. \] Write down the same equation with $u$ and $w$ interchanged and add the two equations. This gives \[ <S_{p-1}(dw) - S_{p-1}(du),d(\phi^2(w-u))> = -1/2<\phi^2 Tr(Q(du)^p + Q(dw)^p) \delta(u,w)>. \] Thus, \begin{eqnarray*} \lefteqn{<\phi^2(S_{p-1}(dw) - S_{p-1}(du)),d(w-u))>}\\ &=&-< d(\phi^2)(S_{p-1}(dw) - S_{p-1}(du)), w-u>\\ &-& 1/2<\phi^2\delta(u,w) (Tr Q(du)^p + Tr Q(dw)^p)>. \end{eqnarray*} Adding and rearranging some terms, we get the next equation. For the purposes of the proof of the next proposition, we label each term in the equation by a Roman numeral. 
Note that the inner product in $R^{2,1}$ is not positive definite, so to make estimates, we need to project terms into the tangent space of $N$ at either $u$ or $w.$ This explains the elaborate rearrangement of terms:
\begin{eqnarray*}
\lefteqn{<\phi^2(S_{p-1}(dw_u) - S_{p-1}(du)), dw - du> \ (I)}\\
&=&<\phi^2(S_{p-1}(dw_u) - S_{p-1}(dw)), dw - du>\\
&+&<\phi^2(S_{p-1}(dw) - S_{p-1}(du)), dw - du> \\
&=& <\phi^2(S_{p-1}(dw_u) - S_{p-1}(dw)), dw_u - du> \ (II) \\
&+& <\phi^2(S_{p-1}(dw) - S_{p-1}(dw_u)), (dw - du,u)^\sharp u> \ (III)\\
&-& 2<\phi d\phi(S_{p-1}(dw_u) - S_{p-1}(du)),(w-u)_u> \ (IV)\\
&-& <d(\phi^2) (S_{p-1}(dw) - S_{p-1}(dw_u)), w-u > \ (V)\\
&-& 1/2<\phi^2\delta(u,w)(Tr Q(du)^p + TrQ(dw)^p)> \ (VI).
\end{eqnarray*}
\begin{proposition}\label{propine9}Assume $w$ and $u$ are solutions to the $J_p$-Euler-Lagrange equations in a ball, and $\phi$ is a cut-off function with support in the ball $\Omega$ (and equal to 1 on a smaller ball $\Omega'$). Then,
\begin{eqnarray*}
\lefteqn{1/(4p)<\phi^2 (S_{p/2}(dw_u) - S_{p/2}(du)), S_{p/2}(dw_u) - S_{p/2}(du)>_\Omega }\\
&\leq& 64p \max|d\phi|^2 <(s(du)^{p-2}+ s(dw)^{p-2})\delta (w,u)>_\Omega \ (VII)\\
&+ & C(p) \max |d(\phi^2)| <s(dw_u)^{p-1}\delta(w,u)^{3/2}>_\Omega \ (VIII)\\
&+& C(p) <s(dw_u)^p (\phi \delta(w,u))^{p/(p-2)}>_\Omega \ (IX)\\
&+&1/2 < \phi^2(Q(dw)^p - Q(du)^p)\delta(u,w)>_\Omega \ (X)\\
&+&<\phi^2 Q(dw)^p\delta(w,u)^2>_\Omega \ (XI).
\end{eqnarray*}
\end{proposition}
\begin{proof} The proof comes from the pointwise inequalities. We do not keep track of the intermediate constants $C(p)$, because they do not appear in the a priori estimates and vanish once we have higher regularity.
\begin{itemize}
\item $(I)$ With the notation from the pointwise inequalities, $B = dw_u$ and $A = du,$ we have from Proposition~\ref{propineq1} that
\[
1/p<\phi^2(S_{p/2}(dw_u) - S_{p/2}(du)), S_{p/2}(dw_u) - S_{p/2}(du)> \leq I.
\]
\end{itemize}
The left hand side here is equal to four times the left hand side of the inequality in Proposition~\ref{propine9}. We now write $(I)$ as the sum of the terms $(II)$--$(VI)$ and estimate these in terms of $(VII)$--$(XI)$. Twice we will absorb terms from the right hand side into the left hand side; this explains the coefficient $1/(4p)$ on the left hand side of Proposition~\ref{propine9}.
\begin{itemize}
\item $(II)$ We use the pointwise inequality in Proposition~\ref{propine8}. With $\tilde B = dw$, $B = dw_u$ and $A = du,$ we have, using Young's inequality $ab \leq \epsilon a^{p/2}+ C(\epsilon, p) b^{p/(p-2)}$ with $\epsilon = 1/(2p)$,
\begin{eqnarray*}
\lefteqn{ \left|(S_{p-1}(dw_u) - S_{p-1}(dw); dw_u - du)^\sharp\right| }\\
&\leq& 2(p-1)C s(dw_u)^{p-2} \delta(w,u)|S_{p/2}(dw_u)-S_{p/2}(du)|^{4/p}\\
&\leq& 1/(2p) \left(|S_{p/2}(dw_u)-S_{p/2}(du)|^{4/p}\right)^{p/2}
+ C(p) \left(s(dw_u)^{p-2} \delta(w,u)\right)^{p/(p-2)}\\
&\leq& 1/(2p) |S_{p/2}(dw_u) - S_{p/2}(du)|^2 + C(p) TrQ(dw_u)^p\delta(w,u)^{p/(p-2)}.
\end{eqnarray*}
Multiply by $\phi^2$ and integrate. The first term can be subtracted from the left hand side (leaving $1/(2p)$) and the second is found in $(IX)$ of the right hand side of the inequality in Proposition~\ref{propine9}.
\item $(III)$ This is a serious term, containing part of the curvature of $N.$ Note that since $(S_{p-1}(dw_u),u)^\sharp = 0$ and $(du, u)^\sharp=0$, term $(III)$ is equal to
\[
<\phi^2(dw,u)^\sharp;(S_{p-1}(dw),u)^\sharp>.
\]
Proposition~\ref{morecomp1} bounds this term by
\[
<\phi^2 Q(dw)^p\delta(w,u)(1 + 1/4\delta(w,u))>.
\]
This combines with term $(VI)$ to give $(X)$ and $(XI)$ of Proposition~\ref{propine9}.
\item $(IV)$ A direct application of Proposition~\ref{propineq2} bounds this term by \begin{eqnarray*} \lefteqn{2\max d\phi<\phi |S_{p/2}(dw_u)-S_{p/2}(du)|}\\ &\times&(\delta(w,u)(1+1/4\delta(w,u)))^{1/2} 2\max (s(du),s(dw_u))^{(p-2)/2}> \\ &\leq& 1/(2p) <\phi^2|S_{p/2}(dw_u)-S_{p/2}(du)|^2> + (VII). \end{eqnarray*} Subtracting this from $(I)$ leaves the $1/(4p)$ as the coefficient on the right hand side of the inequality of Proposition~\ref{propine9}. \item $(V)$ There is a direct bound of $(V)$ by $(VIII)$ via Proposition~\ref{propineq6}. \end{itemize} \end{proof} We can now finish the regularity theorem. \begin{theorem}\label{mainregtH}Let $u=u_p$ satisfy the $J_p$-Euler-Lagrange equations and $w = w_t = g_t^*u$, where $g = \exp (at)$ for $a$ an element of the Lie algebra. Then \[ \lim_{t \rightarrow 0}1/{t^2} <\phi^2(S_{p/2}(dw_t)_u - S_{p/2}(u)), S_{p/2}(dw_t)_u - S_{p/2}(u)>_\Omega \] is finite and $S_{p/2}(du) \in H^1(\Omega'). $ Moreover, \[ ||S_{p/2}(du)||_{H^1(\Omega')} \leq k p <Q(du)^p>_\Omega^{1/2} \] where $k$ depends on $\Omega' \subset \Omega$ but not on $p.$ Moreover, \[ ||S_{p/2}(du)||_{H^1(M)} \leq k' p <Q(du)>^{1/2}, \] where $k'$ depends on $M$ but not on $p.$ \end{theorem} \begin{proof} We proceed as in Proposition~\ref{propine9} and in an arbitrary neighborhood with arbitrary choice of the Lie algebra element $a.$ Note that $W^{1,p} \subset C^{1 - 2/p}.$ This means that we have a uniform estimate \[ \max \delta(w_t,u) \leq kt^{2-4/p}. \] Of course, $k$ depends on the manifold and the choice of the Lie algebra element $a. $ We are going to use the estimate from Proposition~\ref{propine9} to prove Theorem~\ref{mainregtH}. However, it is not quite correct yet. Formally we need a uniform bound on \[ 1/t ||{S_{p/2}(dw_t)}_u -S_{p/2}(du)||_{L^2(\Omega')} \] when Proposition~\ref{propine9} is only giving us a bound on \[ 1/t||S_{p/2}({(dw_t)}_u) - S_{p/2}(du)||_{L^2(\Omega')}. \] However, \begin{eqnarray*} \lefteqn{||{S_{p/2}(dw_t)}_u -S_{p/2}(du)||_{L^2(\Omega')}}\\ &\leq& ||{S_{p/2}(dw_t)}_u - {S_{p/2}((dw_t)}_u)||_{L^2(\Omega')} + ||{S_{p/2}((dw_t)}_u)-S_{p/2}(du)||_{L^2(\Omega')}. \end{eqnarray*} A straight forward application of the inequalities of Proposition~\ref{propine7}, with $dw_t= \tilde B$, $dw_u = B$, $Y = w_t(x)$ and $X = u(x)$ bounds \[ ||{S_{p/2}(dw_t)}_u - S_{p/2}(({dw_t})_u)||_{L^2(\Omega')} \leq p|| Q((dw_t)_u)^{p/2} \delta(w_t,u)||_{L^2(\Omega')}. \] For $p>4$ this goes to 0 at the rate $O(t^{2-4/p}) \leq O(t)$ since $Q(dw_t)^p$ is bounded in $L^1$ and $\delta(w_t,u) \leq kt^{2(1-2/p)}.$ We start checking the terms on the right of Proposition~\ref{propine9} in reverse order. We need bounds on the order of $t^2.$ \begin{itemize} \item $(XI)$ $Q(dw_t)^p$ is bounded in $L^1$ and $\delta(w_t,u)^2 \leq k^2t^{4(1-2/p)}.$ This goes to 0 faster than $t^2$ for $p > 4$ and can be neglected. \item $(X)$ This term looks difficult at first, but we reverse the roles of $w_t$ and add the two inequalities. The left hand side of the inequalities are positive, terms $(X)$ cancel and the other terms are handled the same way in both inequalities. \item $(IX)$ The bound is exactly $\delta(w_t,u)^{p/(p-2)}=O(t^2).$ So the contribution to the estimate is exactly bounded by a constant times the $p$ norm of $w_t,$ which is the $p$ norm of $u$ as well. Once we have any estimate on $S_{p/2}\in H^1$, the next corollary and Sobolev embedding imply that this term will converge faster and can be neglected. 
\item $(VIII)$ This term is bounded by $||s(dw)^p||_{L^1}^{(p-1)/p} ||d_a u||_{L^p} t^{2 -4/p}$ which goes to 0 faster than $t^2.$ Here we use that $1/t\delta(w_t,u)^{1/2} $ approaches the derivative of $u$ in the direction $a$ as $ t \rightarrow 0.$ \item $(VII)$ This term, when divided by $t^2$, is bounded by $2Q(du)^{p-2}(d_a u)^2.$ This is the only term which survives once we know that $u$ is in $C^\alpha$ for $\alpha > 1-2/p.$ \end{itemize} \end{proof} \begin{corollary}\label{cormainredsh} If $u_p$ satisfies the $J_p$-Euler-Lagrange equations, then $|du_p|$ is in $L^s$ for all $s.$ Moreover for all $s < \infty$ \[ ||du_p||_{L^{ps}} \leq 2(k' C_s p)^{2/p}||du_p||_{sv^p}. \] Here $C_s$ is the norm of the embedding $H^1$ in $L^{2s}$ and $k'$ depends on $M$ and not on $p$. \end{corollary} \begin{proof} By applying the standard inequality $(d|\xi|; d|\xi|)\leq (D\xi; D\xi)^\sharp$ for $\xi=S_{p/2}(du_p)=Q(du_p)^{p/2-1}du_p$, \[ |d (Tr(Q(du_p)^p)^{1/2}| \leq |D(Q(du_p)^{p/2-1}du_p)|. \] Hence, from Theorem~\ref{mainregtH} and the Sobolev embedding $ H^1 \subset L^{2s}$ we have \begin{eqnarray}\label{eqnsobemb} || (TrQ(du_p)^p))^{1/2}||_{L^{2s}} \leq C_s ||(TrQ(du_p)^p)^{1/2}||_{H^1} \leq k'C_s p ||du_p||_{sv^p}^{p/2}. \end{eqnarray} Since \[ |du_p|^p \leq 2^p Tr Q(du_p)^p, \] \[ ||du_p||_{L^{sp}} \leq 2 (||TrQ(du_p)^p)^{1/2}||_{L^{2s}})^{2/p}. \] The result follows directly from this. \end{proof} Note that minimizing $p$-harmonic maps, $p>2$ are $C^{1, \alpha}$ (cf. \cite{uhlen} and \cite{hardlin}). The higher regularity of the $J_p$-minimizers is an interesting question which we state as a conjecture: \begin{conjecture}\label{highregu} Let $u_p: M \rightarrow N$ satisfiy the $J_p$-Euler-Lagrange equations between hyperbolic surfaces, $p>2$. Then $u_p$ is in $C^{1, \alpha}$. \end{conjecture} \section{Variational construction of the infinity harmonic map}\label{sect:Varblm} This section contains the construction of a special type of best Lipschitz map which we call infinity harmonic. We start with some generalities about Lipschitz maps including the notion of best the Lipschitz constant, the maximum stretch set and best Lipschitz map in a homotopy class of maps between Riemannian manifolds. We then give a variational construction of a best Lipschitz map as the limit of minimizers of the $J_p$-functionals as $p \rightarrow \infty$. We introduce the notion of canonical lamination from \cite{thurston} and \cite{kassel} which is always contained in the maximum stretch set of the best Lipschitz map. We also introduce the notion of optimal best Lipschitz map from \cite{kassel} which we will study later in the paper. \subsection{Preliminaries on Lipschitz maps} \begin{definition} Let $(M, g)$ and $(N,h)$ as in Section~\ref{sect:funtional}, $S \subset M$ and $ f: S \rightarrow (N, h) $ a map. We call $f$ $L$-Lipschitz in $S$, if there exists $L>0$ such that for all $x, y \in S$ \[ d_h(f(x), f(y))\leq L d_g(x,y). \] The smallest possible $L$, \[ L_f(S)= \inf \{L \in \R: d_h(f(x), f(y))\leq Ld_g(x,y) \ \forall x,y \in S \} \] is called {\it{the Lipschitz constant}} of $f$ on $S$. We define the local Lipschitz constant of $f$ at $x $ \[ L_f(x)= \lim_{r \rightarrow 0}L_f(B_r(x)). \] Since we are taking a non-increasing sequence the limit above exists. Also, clearly, if $f$ is $L$-Lipschitz in $S$, $L_f(x) \leq L \ \ \forall x \in S$. We next give the following generalization of \cite[Lemma 4.3]{crandal}, \cite[Propositions 2.13 and 2.14]{ambrosio} from functions to maps between Riemannian manifolds. 
\begin{proposition}\label{crandal0} For any domain $U \subset M$ and any function $f:U \rightarrow N$, \begin{itemize} \item $(i)$ the map $x \mapsto L_f(x)$ is upper semicontinuous. \item $(ii)$ If $f$ is differentiable at $x \in U$, then $|df_x|_{sv^\infty}=s(df_x)\leq L_f(x)$. (If $f$ is $C^1$, then $s(df_x)= L_f(x)$ but the example $x^2\sin(1/x)$ shows that the inequality can be strict if $f$ is differentiable but not continuously differentiable.) \item $(iii)$ (Rademacher) $f \in W^{1,\infty}(U)$ is differentiable a.e and the differential $df$ agrees a.e with the weak derivative. \item $(iv)$ Let $U \subset M$ convex. Then $f \in W^{1,\infty}(U)$ iff $L_f(U) < \infty$ and $||df||_{sv^\infty}=L_f(U)$. \end{itemize} \end{proposition} \begin{proof} For $(i)$, note that if $x_j\rightarrow x$, then $B_{r-d_g(x,x_j)}(x_j) \subset B_r(x)$ and therefore, for large $j$, \[ L_f(x_j) \leq L_f(B_{r-d_g(x,x_j)}(x_j)) \leq L_f(B_r(x)). \] Let $j \rightarrow \infty$ and then $r \rightarrow 0$ to conclude that $\limsup_{j \rightarrow \infty} L_f(x_j) \leq L_f(x)$. In order to prove $(ii)$ we will first assume that the metrics $g$ and $h$ are Euclidean. Let $P=df_x$, $p=|P|_{sv^\infty}$ and assume $p \neq 0$ since otherwise the inequality holds trivially. Then, \begin{eqnarray*} L_f(B_{p|h|}(x)) \geq \frac {|f(x+ph)-f(x)|}{p|h|} \geq \frac {|P(ph)|}{ph}+o(1)=\frac {|Ph|}{h}+o(1) \ \end{eqnarray*} Fix $r$ and taking $\sup$ over the set $|h|=r$, we obtain \[ L_f(B_{pr}(x))\geq p+o(1). \] Taking $r \rightarrow 0$ gives the desired inequality in the case of Euclidean metrics. Since, via normal coordinates, in a small scale any metric is euclidean $+o(r^2)$ where $r$ is the distance to the point, the result for general metrics follows from this. Statement $(iii)$ follows from the classical Rademacher theorem since neither the assumption nor the conclusion are sensitive to the particular metrics chosen. For a proof see \cite[Proposition 2.14]{ambrosio} among many other references. Statement $(iv)$, is clear for $C^1$ functions. To deduce the general case use convolutions as in \cite[Proposition 2.13]{ambrosio}. \end{proof} \end{definition} \begin{definition}\label{def:lipdirichlet} A map $u \in Lip(M,N)$ is called a {\it{best Lipschitz map}} if for any map $f \in Lip(M,N)$ homotopic to $u$ (either in the absolute sense or relative to the boundary depending on the context), \[ L_u \leq L_f \] where $L_u=L_u(M)$ denotes the global Lipschitz constant of $u$ (and similarly for $f$). \end{definition} \begin{lemma} \label{realized} For a Lipschitz map $ f: M \rightarrow N$ with global Lipschitz constant $L=L_f(M)$ and a number $0< \mu \leq L$, the set $ \{ x \in M: L_f(x) \geq \mu \}$ is non-empty and closed. In particular the set of maximum stretch \[ \lambda_f=\{ x \in M: L_f(x) =L \} \] is non-empty and closed. \end{lemma} \begin{proof} First, by the upper semicontinuity of the Lipschitz constant (cf. Proposition~\ref{crandal0}), the set \[ \{ x \in M: L_f(x) \geq \mu \} \] is closed. To finish the proof we need to show that $ \lambda_f$ is non-empty. Take a sequence $x_i$ such that $L_f(x_i) \nearrow L$. By compactness, we may assume $ x_i \rightarrow x$ and by upper semicontinuity $L_f(x) \geq \lim_i L_f(x_i) = L$. Thus, $x \in \lambda_f$ and hence $ \lambda_f \neq \emptyset$. By upper semicontinuity \[ \lambda_f=\{ x \in M: L_f(x)= L \}=\{ x \in M: L_f(x) \geq L \} \] is closed. 
\end{proof} \subsection{The limit of $J_p$-minimizers as $p \rightarrow \infty$.} Let $(M, g)$ be a compact Riemannian manifold with boundary $ \partial M$ (possibly empty) and $\dim M=n$. We further assume that $(M, g)$ is convex, in other words $(M, g)$ can be isometrically embedded as a convex subset of a manifold of the same dimension without boundary. Let $(N, h)$ be a closed Riemannian manifold of non-positive curvature. Fix a homotopy class of maps from $M$ to $N$ (either absolute or relative to the boundary) and choose a Lipschitz map $f: M \rightarrow N$ in that homotopy class. We are going to construct a best Lipschitz map $u: M \rightarrow N$ homotopic to $f$ as a limit as $p \rightarrow \infty$ of minimizers of the functional \[ J_p(u)=\int_M Tr Q(du)^p*1 \ \ \mbox{where} \ \ u \in W^{1,p}=W^{1,p}(M, N). \] For $N=\R$ (or maybe better $N=S^1$, since we are assuming the target is compact), this construction is due to Aronsson (cf. \cite{aron1},\cite{aron2} and \cite{lind}). See also \cite{daskal-uhlen1}. The main result of the section is the following: \begin{theorem}\label{thm:existence} Given a sequence $p \rightarrow \infty$, there exists a subsequence (denoted also by $p$) and a sequence of $J_p$-minimizers $u_p: M \rightarrow N$ homotopic to $f$ (either absolute or relative to the boundary) such that \[ u=\lim_{p \rightarrow \infty} u_p \ \ \ \mbox{weakly in} \ \ W^{1,s} \ \forall s. \] Furthermore, $u_p \rightarrow u$ uniformly and $u$ is a best Lipschitz map in the homotopy class. \end{theorem} The proof of the theorem will follow from the next two Lemmas: \begin{lemma}\label{limminimizer} For each $p>n$, let $u_p$ be a $J_p$-minimizer in the homotopy class of $f.$ Given a sequence $p \rightarrow \infty$ there exists a subsequence (denoted also by $p $) and a map $u: M \rightarrow N$ such that for any $s>n$ \[ du_p \rightharpoonup du \ \mbox{weakly in} \ L^s. \] Moreover, there exists a $C>0$ depending only on $M$, $g$, $h$ and $v$ such that \[ ||du ||_{L^{\infty}} \leq C. \] \end{lemma} \begin{proof}Take a sequence $p \rightarrow \infty$. By Lemma~\ref{lemma:normlim} and Proposition~\ref{crandal0}, we have for $n<s<p$ large, \begin{eqnarray*} \frac{1}{(n \ vol(M))^{1/s}}J_s(u_p)^{1/s} &\leq& \frac{1}{vol(M)^{1/p}} J_p(u_p)^{1/p} \\ &\leq& \frac{1}{vol(M)^{1/p}} J_p(f)^{1/p} \\ &\leq& L_f+ \epsilon. \end{eqnarray*} Formula (\ref{lemma:normineq0}) implies \[ ||du_p||_{L^s}\leq n^{1/2}(n \ vol(M))^{1/s}(L_f+ \epsilon)\leq C. \] Thus, there is a subsequence $du_p \rightharpoonup du$ weakly in $L^s$. By semicontinuity, \[ ||du ||_{L^s} \leq \liminf_{p \rightarrow \infty} ||du_p ||_{L^s}. \] Since $C$ is independent of $s$, by a diagonalization argument we can choose a single subsequence $p$ such that \[ du_p \rightharpoonup du \] weakly in $ L^s $ for all $s$. By semicontinuity, $||du ||_{L^s} \leq C$ independently of $s$ and by taking $s \rightarrow \infty$, $||du ||_{L^{\infty}} \leq C.$ \end{proof} \begin{lemma}\label{lemma:globalmin} The map $u$ is a best Lipschitz map in its homotopy class. \end{lemma} \begin{proof} Since $u_p$ is a $J_p$-minimizer, given any Lipschitz map $w$ in the homotopy class of $f$ we obtain as before for $n<s<p$, \[ \frac{1}{(n \ vol(M))^{1/s}}J_s(u_p)^{1/s} \leq L_w+\epsilon. \] Since $u_p \rightharpoonup u$ weakly in $W^{1,s}$ we have by lower semicontinuity, \[ \frac{1}{(n \ vol(M))^{1/s}}J_s(u)^{1/s} \leq L_w+ \epsilon \] and since $\epsilon$ is arbitrary, \[ \frac{1}{(n \ vol(M))^{1/s}}J_s(u)^{1/s} \leq L_w. 
\]
By taking $s \rightarrow \infty$, we obtain by Lemma~\ref{lemma:normlim}
\[
||du||_{sv^\infty}\leq L_w.
\]
Since $M$ was assumed to be convex, Proposition~\ref{crandal0} implies that $L_u \leq L_w.$
\end{proof}
\begin{definition}\label{def:infhharm} Let $(M, g)$, $(N, h)$ be as before and $u: M \rightarrow N$ a best Lipschitz map in its homotopy class. The map $u$ is called {\it $\infty$-harmonic} if there exists a sequence of $J_p$-minimizers $u_p: M \rightarrow N$ homotopic to $u$, called the $p$-approximations of $u$, such that $u=\lim_{p \rightarrow \infty} u_p$ weakly in $W^{1,s}$ for all $s$ (and thus also uniformly).
\end{definition}
\begin{lemma}\label{pintconvto} If $u$ has Lipschitz constant $L$ and $u_p$ is a $p$-approximation,
\[
\lim_{p \rightarrow \infty} J_p^{1/p}(u_p) = L.
\]
\end{lemma}
\begin{proof}The fact that $u_p$ is a $J_p$-minimizer and Lemma~\ref{lemma:normlim}$(i)$ imply
\[
J_p^{1/p}(u_p) \leq J_p^{1/p}(u) \rightarrow L.
\]
Hence the $\limsup$ is less than or equal to $L$. On the other hand, if the $\liminf$ equals $a<L$, then, proceeding as in the proof of Lemma~\ref{lemma:globalmin}, we obtain $L_u\leq a<L$, which contradicts the fact that $L$ is the best Lipschitz constant.
\end{proof}
\subsection{Optimal best Lipschitz maps}\label{optil}
{\it For the rest of the paper we make the assumption that the best Lipschitz constant is greater than or equal to 1.}

We review here some results from \cite{kassel}.
\begin{definition}\label{canlamin}Let $(M,g)$ and $(N,h)$ be closed hyperbolic surfaces. Given a best Lipschitz map $f: M \rightarrow N$, let $\lambda_f$ be its maximum stretch locus (cf. Lemma~\ref{realized}). Let $\mathcal F$ denote the collection of best Lipschitz maps in a homotopy class and set
\[
\lambda=\cap_{f \in \mathcal F} \lambda_f.
\]
We call $\lambda$ {\it{the canonical lamination}} associated to the hyperbolic metrics $g$, $h$ and the fixed homotopy class.
\end{definition}
From \cite{kassel} we know:
\begin{theorem}
\begin{itemize}
\item $(i)$ The closed set $\lambda$ is a geodesic lamination on $(M, g)$ and any $f \in \mathcal F$ multiplies arc length by $L$ on the leaves of the lamination. Thus $\mu=f(\lambda)$ is a geodesic lamination on $(N, h)$ (cf. \cite[Lemma 5.2]{kassel}).
\item $(ii)$ There exists $\hat f \in \mathcal F$ such that $\lambda_{\hat f}=\lambda$. We call a $\hat f$ with $\lambda_{\hat f}=\lambda$ an {\it{optimal map}} (cf. \cite[Lemma 4.13]{kassel}).
\item $(iii)$ If the homotopy class is that of the identity, the canonical lamination is equal to Thurston's chain recurrent lamination $\mu(g,h)$ associated to the hyperbolic structures $g,h$ (cf. \cite[Theorem 8.2]{thurston} and \cite[Lemma 9.3]{kassel}).
\end{itemize}
\end{theorem}
In Section~\ref{sec:idealoptim} we will construct an ideal optimal best Lipschitz map in the case when the canonical lamination consists of a finite union of closed geodesics (cf. Theorem~\ref{theo:idealoptim}). In Section~\ref{sec:infinityopt} we will show that in this case an infinity harmonic map is optimal (cf. Theorem~\ref{infharmopt}). We conjecture that this holds in general, but we will not prove it in this paper (cf. Conjecture~\ref{conjopt}).

We end by stating a Lipschitz approximation theorem that we will need in Section~\ref{lipqgoesto1}. The proof can be found in \cite[Theorems 4.4 and 4.6]{karcher}.
\begin{theorem}\label{thmgreenewu} If $f: M \rightarrow N$ is a Lipschitz map, then there exists a sequence of smooth maps $f_k: M \rightarrow N$ such that $f_k \rightarrow f$ uniformly and the Lipschitz constants converge, $L_{f_k} \rightarrow L_f.$ \end{theorem} \section{The limit $q \rightarrow 1$}\label{lipqgoesto1} In this section we show that $*S_{p-1}(du_p)$ and $*(S_{p-1}(du_p)\times u_p)$ converge after appropriate normalizations as $p \rightarrow \infty$ (or equivalently as $q \rightarrow 1$) to Radon measures $Z$ and $V$ with values in the flat vector bundles $E$ and $ad(E)$ respectively tensor with $T^*M$. Furthermore, $V$ is closed as 1-current with respect to the flat connection $d$. The final result is that $Z$ and $V$ are mass minimizing with respect to the best Lipschitz map $u$. First we discuss various normalizations. \subsection{The normalizations}\label{normkappa} In this section we fix a $2\leq p < \infty$, $1< q \leq 2$ satisfying (\ref{form:conjugate}) and $u_p$ is a $J_p$-harmonic map. Choose a normalizing factor $\kappa_p$ and define $U_p= \kappa_p du_p$ such that \begin{equation}\label{normintv1} ||U_p||_{sv^p}^p= \int_M Tr Q(U_p)^p*1 = <Q(U_p)^p> = 1. \end{equation} Consider the normalized Noether current \begin{equation}\label{normintv21} S_{p-1} = Q(U_p)^{p-2} U_p \in \Omega^1(E). \end{equation} Note that by (\ref{form:conjugate}) and (\ref{normintv1}) \begin{eqnarray}\label{Quv} Q(S_{p-1})^2=(S_{p-1} ; S_{p-1}^\sharp)= Q(U_p)^{2p-2} \end{eqnarray} satisfies \begin{equation}\label{dualnor} ||S_{p-1}||_{sv^q}^q=\int_M Tr Q(S_{p-1})^q*1=\int_M Tr Q(U_p)^p*1=||U_p||^p_{sv^p}=1. \end{equation} \begin{lemma} \label{klemma1}Under the normalizations above, $\lim_{p \rightarrow \infty} \kappa_p = L^{-1}. $ \end{lemma} \begin{proof} By (\ref{normintv1}) and Lemma~\ref{pintconvto} \[ \lim_{p \rightarrow \infty} \kappa_p^{-1}=\lim_{p \rightarrow \infty} J_p(u_p)^{1/p} = L. \] \end{proof} From now on we will replace all the Noether currents by the normalized Noether currents. \subsection{The measures} Let \[ f: M \rightarrow H=\tilde M\times_\rho \HH \subset E =\tilde M\times_\rho \R^{2,1} \] be a Lipschitz section and $\xi \in \Omega^1(E)$. We define $\xi_f \in \Omega^1(E)$ by setting $\xi_f \in \Omega^1(E)$ to be the section corresponding to the $\rho$-equivariant map $\tilde \xi_{\tilde f}$, where as usual tilde means lift to the universal cover. Recall from (\ref{proj1}) that $\tilde \xi_{\tilde f} =\tilde \xi+ (\tilde \xi, \tilde f)^\sharp \tilde f.$ \begin{definition}\label{projcurr} Let $T$ be a Radon measure with values in $T^*M \otimes E$. We write $T_f=T$ if for any $\xi \in \Omega^1(E)$ we have $T[\xi _f]= T[\xi]$. Note that because $f$ is Lipschitz, $T[\xi _f]$ makes sense. (We use brackets to denote measures acting on test functions instead of parentheses). \end{definition} \begin{definition}\label{measrcurr} Let $f$ be a Lipschitz section as before and $T$ a Radon measure with values in $T^*M \otimes E$. \begin{itemize} \item We define the \it{mass of $T$ with respect of $f$} \[ [[T]]_f = \sup \{ T[\xi]: \xi \in \Omega^1(E), \xi=\xi_f, s( \xi) \leq 1 \}. \] \item We say that $T$ is {\it{mass minimizing with respect to $f$}} if for all sections of bounded variation $\psi \in \Omega^1(E)$ with $\psi=\psi_f$, \[ [[T]]_f \leq [[T + d\psi ]]_f. \] \end{itemize} \end{definition} It is convenient to denote \begin{equation}\label{def|sp-1} |S_{p-1}|:=(S_{p-1},U_p)^\sharp = Tr Q(U_p)^p= |U_p|_{sv^p}^p \end{equation} viewed as a non-negative measure on $M$. 
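\begin{remark}
The duality (\ref{dualnor}) is a purely pointwise statement about singular values: if $U_p$ has singular values $\sigma_1,\sigma_2$ at a point, then $S_{p-1}$ has singular values $\sigma_i^{p-1}$ there, and $(\sigma_i^{p-1})^q = \sigma_i^p$ provided $q=p/(p-1)$, which is how we read (\ref{form:conjugate}). The small Python check below is illustrative only (all names are ours) and confirms the identity $Tr\, Q(S_{p-1})^q = Tr\, Q(U_p)^p$ on a random matrix:
\begin{verbatim}
import numpy as np

p = 6.0
q = p / (p - 1.0)                        # conjugate exponent: (p-1) q = p
U = np.random.default_rng(0).standard_normal((2, 2))
sv = np.linalg.svd(U, compute_uv=False)  # singular values of U_p at a point
lhs = np.sum((sv ** (p - 1)) ** q)       # Tr Q(S_{p-1})^q
rhs = np.sum(sv ** p)                    # Tr Q(U_p)^p
assert np.isclose(lhs, rhs)
\end{verbatim}
\end{remark}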
For a test function $\xi \in \Omega^1(E)$, define the Radon measure with values in $T^*(M) \otimes E$
\begin{eqnarray}\label{krisn}
S_{p-1}[\xi] = <S_{p-1}, *\xi >= <S_{p-1}, *\xi_{u_p}>= S_{p-1}[\xi_{u_p}].
\end{eqnarray}
Our goal is to define $\lim_{p \rightarrow \infty} S_{p-1}.$ We are forced to project into a tangent space in order to obtain a positive definite inner product, but the tangent spaces vary from point to point.
\begin{theorem}\label{TTheorem A5} Given a sequence $p \rightarrow \infty$, there exists a subsequence (denoted again by $p$), a real-valued positive Radon measure $|S|$ and a Radon measure $S$ with values in $T^*M \otimes E$ such that
\begin{itemize}
\item $(i)$ $ |S_{p-1}| \rightharpoonup |S| $ and $\int_M |S|*1 = 1$.
\item $(ii)$ $ S_{p-1} \rightharpoonup S$, $[[S]]_u \leq 1$ and $ S=S_u$.
\item $(iii)$ The support of $S$ is contained in the support of $|S|$.
\item $(iv)$ The support of $|S|$ is contained in the support of $S$.
\item $(v)$ $Z=*S$ is mass minimizing with respect to the best Lipschitz map $u$.
\end{itemize}
\end{theorem}
\begin{proof}
For $(i)$ note that for a real-valued test function $\phi$ with $||\phi||_{L^\infty} \leq 1$, it follows from (\ref{normintv1}) that
\[
\int_M |S_{p-1}|\phi*1 \leq \big|\big||S_{p-1}|\big|\big|_{L^1}= ||U_p||_{sv^p}^p=1.
\]
Thus, after passing to a subsequence, there is a non-negative Radon measure $|S|$ such that $ |S_{p-1}| \rightharpoonup |S| $ with $\int_M |S|*1= 1$.

For $(ii)$, let $\xi \in \Omega^1(E)$ be a test function. From~(\ref{krisn}), (\ref{dualnor}) and (\ref{holds}) and the convergence of $u_p$,
\begin{eqnarray}\label{krisn2}
S_{p-1}[\xi] \leq ||S_{p-1}||_{sv^q} ||\xi_{u_p}||_{sv^p} \leq ||\xi_{u_p}||_{sv^p}.
\end{eqnarray}
Assuming that $||\xi||_{L^\infty} \leq 1$ with respect to the positive definite metric on $E$ defined by $u$ (cf. Section~\ref{posdef}), $S_{p-1}[\xi]$ is uniformly bounded because $u_p$ converges to $u$ uniformly. Hence, after passing to a subsequence, $ S_{p-1} \rightharpoonup S$. As before,
\begin{eqnarray*}\label{krisn3}
S_{p-1}[\xi] &\leq& ||\xi_{u_p}||_{sv^p}\\
&\leq&\omega(u_p,u)||\xi||_{sv^p}\\
&\rightarrow& ||\xi||_{sv^p}.
\end{eqnarray*}
The fact that $[[S]]_u \leq 1$ follows easily from the above, Lemma~\ref{lemma:normlim} and the assumption that $\xi=\xi_u$ with $s(\xi) \leq 1$. In order to check $ S=S_u$,
\begin{eqnarray}\label{curwithvalinbundle}
S_{p-1}[\xi]-S_{p-1}[\xi_u]&=&<S_{p-1}, *(\xi-\xi_u)> \nonumber\\
&=&<S_{p-1},*(\xi-\xi_u)_{u_p}> \nonumber \\
&\leq& ||(\xi-\xi_u)_{u_p}||_{sv^p}\\
&\rightarrow& 0. \nonumber
\end{eqnarray}
The last step holds because $u_p \rightarrow u$ uniformly.

In order to show $(iii)$, let $B \subset M$ be an open set such that $|S| \big |_B=0$. This is equivalent to $|S_{p-1}| \rightarrow 0$ in $L^1(B)$. For any test function $\xi$ with support in $B$,
\begin{eqnarray*}
S_{p-1}[\xi] \leq K ||S_{p-1}||_{L^1(B)} ||\xi_{u_p}||_{L^\infty} \rightarrow 0.
\end{eqnarray*}
We will return to the proof of $(iv)$ in the next section (see Theorem~\ref{theoremsuptharddir}).

We now show $(v)$. Note that we already know one direction, $[[Z]]_u=[[S]]_u \leq 1$. Conversely, let $w_k \rightarrow u$ be a smooth Lipschitz approximation of $u$ as in Theorem~\ref{thmgreenewu} and set
\begin{eqnarray}\label{kaly479}
\xi_k = h_kdw_k \ \mbox{where} \ h_k = s((dw_k)_u)^{-1}.
\end{eqnarray}
Since $S_u=S$,
\begin{eqnarray}\label{kaly4}
(Z +d\psi)[(\xi_k)_u ] = h_k Z [dw_k ]+ h_k<d\psi, *(dw_k)_u>.
\end{eqnarray} Note that \begin{eqnarray}\label{kkaly6} <d\psi, *(dw_k)_u> &=& <d\psi, *(dw_k +(dw_k,u)^\sharp u)>\\ &=& <d\psi, *dw_k>+<d\psi, (*dw_k, u-w_k)^\sharp u>. \nonumber \end{eqnarray} The Lipschitz bound on $w_k$, the fact that $\psi$ is of bounded variation plus $w_k \rightarrow u$ in $C^0$ imply that the second term limits on 0. The first term is zero from $d^2=0$ because $\psi$ is an actual section of the flat bundle $E$. Note that $\liminf_{k \rightarrow \infty} h_k \geq 1/L$ since \[ \limsup_{k \rightarrow \infty} s(({dw_k})_u) \leq \limsup_{k \rightarrow \infty}( \omega(u,w_k)s(w_k) ) = L. \] As a last step, \begin{eqnarray}\label{kaly1} Z[dw_k] = \lim_{ p\rightarrow \infty} <*S_{p-1}, *(dw_k)> \nonumber \\ = \lim_{ p\rightarrow \infty} <Q(U_p)^{p-2}U_p, dw_k>. \end{eqnarray} But according to Proposition~\ref{supptprop5}, with $w = u_p$ and $f = w_k$ \[ < \omega(u_p, w_k)Q(U_p)^{p-2}U_p, du_p - dw_k> = 0. \] Thus, \begin{eqnarray}\label{kaly2} \lim_{ p\rightarrow \infty}<Q(U_p)^{p-2}U_p, dw_k> &=& \lim_{ p\rightarrow \infty} \left(1/\kappa_p<\omega(u_p,w_k)Q(U_p)^p>\right) \nonumber\\ & \geq& \lim_{ p\rightarrow \infty} 1/\kappa_p = L. \end{eqnarray} The last limit follows from Lemma~\ref{klemma1}. Since $\liminf_{k \rightarrow \infty} h_k \geq 1/L$, (\ref{kaly1}) and (\ref{kaly2}) imply $[[Z]]_u \geq1$. This completes the proof. \end{proof} We have analogues of Definition~\ref{measrcurr} and Theorem~\ref{TTheorem A5} for $V_q=*(S_{p-1}\times u_p)$ as follows: \begin{definition}Let $f$ be a Lipschitz section as before and $T$ a Radon measure with values in the flat bundle $ad(E)$ tensor with $T^*M$. \begin{itemize} \item $(i)$ We define \begin{eqnarray*} [[T]]_f &=& \sup \{ T[\xi \times f]: \xi \in \Omega^1(E), s( \xi_f) \leq 1 \}\\ &=& \sup \{ T[\phi]: \phi \in \Omega^1(ad(E)), \phi=\phi_f, s( \phi) \leq \sqrt2 \}. \end{eqnarray*} \item $(ii)$ We say that $T$ is {\it{mass minimizing with respect to $f$}} if for all sections of bounded variation $\psi: M \rightarrow E$, \[ [[T]]_f \leq [[T + d(\psi \times f)]]_f. \] Equivalently for all sections of bounded variation $\varphi \in \Omega^0(ad(E))$ with $\varphi =\varphi_f$, \[ [[T]]_f \leq [[T + d\varphi]]_f. \] \end{itemize} \end{definition} \begin{theorem} \label{thm:limmeasures0} Given a sequence $p \rightarrow \infty$ ($q \rightarrow 1$), there exists a subsequence (denoted again by $p$) such that: \begin{itemize} \item $(i)$ There exists a Radon measure $ V$ with values in $T^*M \otimes ad(E)$ such that $V_q \rightharpoonup V.$ \item $(ii)$ The support of $V$ is equal to the support of $S$. \item $(iii)$ $[[V]]_u \leq 2$ and $V=V_u$, i.e $V[\phi]=V[\phi_u]$ for all sections $\phi \in \Omega^1(ad(E)$. \item $(iv)$ The 1-current $V$ is closed and is mass minimizing with respect to the best Lipschitz map $u$. \end{itemize} \end{theorem} \begin{proof}The proof of $(i)$ is the same as that of Theorem~\ref{TTheorem A5} and is omitted. For $(ii)$ suppose that if $B \subset M$ such that $S \big |_B=0$. This is equivalent to $S_{p-1} \rightarrow 0$ in $L^1(B)$ and since the map $\beta_{u_p}$ of Proposition~\ref{helpemb} is an isometry, this is equivalent to $V_q \rightarrow 0$ in $L^1(B)$, hence $V \big |_B=0$. For $(iii)$, let $\xi \in \Omega^1(E)$ with $\xi_u=\xi$ and $s(\xi) \leq 1$. 
By using (\ref{holds}), (\ref{dualnor}) and Lemma~\ref{lemma:normlim}, \begin{eqnarray*} V[\xi \times u] &=& \lim_{p \rightarrow \infty} V_q[\xi \times u_p]=\lim_{p \rightarrow \infty} <V_q, *(\xi \times u_p)> =\lim_{p \rightarrow \infty} <S_{p-1}\times u_p, \xi_{u_p} \times u_p>\\ &=& 2\lim_{p \rightarrow \infty} < S_{p-1}, \xi_{u_p}> =2\lim_{p \rightarrow \infty} < S_{p-1}, \xi_u> \leq 2\lim_{p \rightarrow \infty} ||S_{p-1}||_{sv^q} ||\xi_u||_{sv^p}\\ &=& 2s(\xi) \leq 2. \end{eqnarray*} Hence $[[V]]_u \leq 2.$ The statement $V=V_u$ follows as in (\ref{curwithvalinbundle}), \begin{eqnarray*} V_q[\xi]-V_q[\xi_u]&=&<*(S_{p-1}\times u_p), *(\xi-\xi_u)> \nonumber\\ &=&<*(S_{p-1}\times u_p),*(\xi-\xi_u)_{u_p}> \nonumber \\ &\leq& 2||(\xi-\xi_u)_{u_p})||_{sv^p}\\ &\rightarrow& 0. \nonumber \end{eqnarray*} We now come to $(iv)$. First, $V$ is closed being the weak limit of closed currents. We already know $[[V]]_u \leq 2.$ Conversely, let $w_k \rightarrow u$ and $\xi_k = h_kdw_k$ be as in the proof of Theorem~\ref{TTheorem A5}. Then \begin{eqnarray}\label{kkaly4} (V +d(\psi \times u))[\xi_k \times u] = h_k\left( V [dw_k \times u] + d(\psi \times u) [dw_k \times u] \right) \nonumber\\ = h_k\left( V [dw_k \times u] + d(\psi_u \times u) [dw_k \times u] \right). \end{eqnarray} But \begin{eqnarray}\label{kkaly5} \lefteqn{ d(\psi_u \times u) [dw_k \times u] = <d(\psi_u \times u), *(dw_k \times u)> } \nonumber \\ &=& <d\psi_u \times u, *(dw_k \times u)> -<\psi_u \times du, *(dw_k \times u)>\\ &=& 2<d\psi_u, *(dw_k)_u> \nonumber. \end{eqnarray} In the first step, we can carry out the Leibnitz rule as we have enough regularity to integrate against a Lipschitz function. The second term vanishes because we are coupling the wedge product of tangents spaces to $N$ at $u(x)$ against a term in the wedge product of the tangent space to $N$ at $u(x)$ and the normal in $R^{2,1}$. As in the proof of Theorem~\ref{TTheorem A5} $(v)$ (cf. (\ref{kkaly6})) this terms limits to zero. Thus, combining with (\ref{kkaly4}) and (\ref{kkaly5}), it suffices to show $ \liminf_{k \rightarrow \infty}V[\xi_k \times u] \geq 2.$ Note that \begin{eqnarray*} V[dw_k \times u] = \lim_{ p\rightarrow \infty} <V_q, *(dw_k \times u_p)> \nonumber \\ = 2\lim_{ p\rightarrow \infty} <Q(U_p)^{p-2}U_p, dw_k>. \end{eqnarray*} The rest follows as in the proof of Theorem~\ref{TTheorem A5} $(v)$. \end{proof} \subsection{Constructing a local primitive of $V$}\label{sect:affbundlfinite} We now give a construction of a local primitive $v$ of $V$. We start with the case $p< \infty$ ($q>1$) and we find $dv_q=V_q$. Here we have enough regularity to show $v_q$ is continuous so the topology is preserved. We will then pass to weak limits to construct $dv=V$ for a function $v$ of bounded variation. We continue with $G=SO^+(2,1)$, $\mathfrak g=so(2,1)$ and $d$ the flat connection on $ad(E)$ induced from $\rho$. Let $u_p$ be a $J_p$-minimizer, $p<\infty$ and let $V_q=*(S_{p-1} \times u_p)$. By Theorem~\ref{Prop:Elag-bund} $dV_q=0$ weakly since by the regularity theorem (cf. Corollary~\ref{cormainredsh}), $V_q \in L^s$, $s>2$ but not necessarily smooth. In any case, $V_q$ defines a cohomology class in $H^1(M, ad(E))$. We now can lift everything to the universal cover (or work on local simply connected sets). Since in this paper we are only interested in local properties we will denote the lifts by the same notation as downstairs. We can trivialize the flat connection $d$ and write \[ dv_q=V_q \] for a locally defined function $v_q$ with values in the Lie algebra. 
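\begin{remark}
The passage from the closed form $V_q$ to the local primitive $v_q$ is simply integration along paths in a simply connected set. The toy Python sketch below (scalar-valued and on a planar domain, rather than Lie algebra-valued on the universal cover; all names are ours) illustrates the path independence that makes $v_q$ well defined locally:
\begin{verbatim}
import numpy as np

h = 1e-3
def V(x, y):                 # the closed 1-form V = d(sin(x) * y)
    return np.cos(x) * y, np.sin(x)

def x_then_y(x, y):          # integrate from (0,0) horizontally, then vertically
    return (sum(V(t, 0.0)[0] for t in np.arange(0.0, x, h)) * h
            + sum(V(x, t)[1] for t in np.arange(0.0, y, h)) * h)

def y_then_x(x, y):          # integrate vertically first, then horizontally
    return (sum(V(0.0, t)[1] for t in np.arange(0.0, y, h)) * h
            + sum(V(t, y)[0] for t in np.arange(0.0, x, h)) * h)

print(x_then_y(1.0, 1.0), y_then_x(1.0, 1.0))   # both approximately sin(1)
\end{verbatim}
\end{remark}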
Since $V_q$ is only in $L^s$ we need to approximate by smooth functions and pass to the limit. In fact, we can do this globally, preserving the cohomology class of $dv_q$ on the quotient, but we postpone this discussion to the next paper (cf. \cite{daskal-uhlen2}). By the regularity theory (cf. Corollary~\ref{cormainredsh}), $v_q \in W^{1,s}$ for all $s$ and thus $v_q$ is continuous. Furthermore, by Theorem~\ref{TTheorem A66}, $v_q$ satisfies the $q$-Schatten-von Neumann harmonic map equation
\[
d*Q(dv_q)^{q-2}dv_q=0.
\]
We now move to the limiting case $q=1$. Consider a sequence $p \rightarrow \infty$ ($q \rightarrow 1$). Express $V_q=dv_q$ locally as before. Note that even though, by the regularity theory, $v_q \in W^{1,s}_{loc}$ for all $s$, the corresponding bounds are uniform in $q$ only for $s=1$. Recall that by the normalization (\ref{dualnor}), $dv_q=V_q$ has $q$-Schatten-von Neumann norm at most 2. In particular $||dv_q||_{L^1}$ is uniformly bounded. From this, together with the Poincar\'e inequality, we obtain a subsequence such that $v_q$ converges to $v$ {\it weakly} in $BV_{loc}$. It also converges strongly in $L^s_{loc}$ for $s <2$ by the Sobolev embedding theorem. Since $V_q$ also converges, it follows that $dv=V$. We have thus proved:
\begin{theorem} \label{thm:limmeasures04r} There exists a local Lie algebra valued function of bounded variation $v$ such that $dv=V$. Moreover, $v$ is locally of least gradient in the sense that for any local section of bounded variation $\varphi \in \Omega^0(ad(E))$ with $\varphi=\varphi_u$,
\[
[[dv]]_u \leq [[ d(v+ \varphi)]]_u.
\]
\end{theorem}
The last statement follows from Theorem~\ref{thm:limmeasures0}. It is worth mentioning that even though $dv=(dv)_u$, $v$ is not equal to $v_u$.
\section{The support of the measures}\label{sec7}
The main result of this section is to provide a proof of the following theorem:
\begin{theorem}\label{thm:supptmeasure} The support of the measures $S$ and $V$ is contained in the canonical geodesic lamination $ \lambda$ associated to the hyperbolic metrics $g$, $h$ and the homotopy class (cf. Definition~\ref{canlamin}).
\end{theorem}
A rough outline is as follows. Before proving Theorem~\ref{thm:supptmeasure}, we complete the proof of Theorem~\ref{TTheorem A5} by showing that the support of the limiting measure $S$ is equal to the support of the non-negative real-valued measure $|S|.$ We then show that the support of $|S|$, and therefore of $S$, is contained in the canonical geodesic lamination $\lambda$.

For the rest of this section, we rescale $M$ to assume $L = 1.$ This rescales the curvature of $M$ to be $-1/L^2$ and multiplies the volume by $L^2.$ We recall that $L > 1$ originally, so $M$ becomes a hyperbolic manifold of larger volume and smaller curvature, although we will not need this until later. While the rescaling is not strictly necessary, it allows us to avoid carrying the extra factor of $L$ around.

From now on we assume that $w = u_p$ satisfies the $J_p$-Euler-Lagrange equations, $W =dw$ and $f$ is a comparison best Lipschitz map. This means that $s(df) = s((df)_f)\leq L$. From Proposition~\ref{supptprop5},
\[
< \omega(w,f) Q(W)^{p-2}W, W - F> = 0
\]
and $s((df)_w) \leq \omega(w,f) s(df).$ Recall that in (\ref{deromea}) we defined the weight $\omega(w,f) = 1 + 1/2(w-f, w-f)^\sharp$, which now depends on $x \in M$. We will find it both useful and a nuisance in the rest of the section.
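\begin{remark}
As a plausibility check of the projection estimate behind this weight, one can test Lemma~\ref{supptLemma2} numerically in the hyperboloid model: sample $X, Y \in \HH$, project a random vector to the tangent space at $Y$, and compare $(W_X,W_X)^\sharp$ with $\omega(X,Y)^2(W,W)^\sharp$. The Python sketch below is illustrative only and all helper names are ours.
\begin{verbatim}
import numpy as np

def sharp(a, b):                         # the R^{2,1} pairing
    return a[0]*b[0] + a[1]*b[1] - a[2]*b[2]

def point(u, v):                         # a point of HH: x1^2 + x2^2 - x3^2 = -1
    return np.array([np.sinh(u)*np.cos(v), np.sinh(u)*np.sin(v), np.cosh(u)])

rng = np.random.default_rng(1)
for _ in range(10000):
    X, Y = point(*rng.uniform(-1, 1, 2)), point(*rng.uniform(-1, 1, 2))
    W = rng.standard_normal(3)
    W = W + sharp(W, Y) * Y              # now (W, Y)^sharp = 0, i.e. W = W_Y
    WX = W + sharp(W, X) * X             # projection to the tangent space at X
    omega = 1.0 + 0.5 * sharp(X - Y, X - Y)
    assert sharp(WX, WX) <= omega**2 * sharp(W, W) + 1e-9
\end{verbatim}
\end{remark}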
Note that $F_w = F$ and $(df)_f = df.$ In the rest of the section, we will use these facts without comment to obtain inequalities in tangent spaces, where the metric is positive definite, which are not available in $R^{2,1}.$ In order to localize the Euler-Lagrange equations above, we control the region in which the integrand is negative. Let
\[
G_p =G_p(F) = \{x \in M: (Q(du_p)^{p-2}du_p, du_p - F)^\sharp < 0 \}.
\]
Another way to put this is to let $H = (Q(du_p)^{p-2}du_p; du_p - F)^\sharp$, which is an $L^1$ function, and to define $G_p$ as the support of $H - |H|.$ This can be confusing, as we do not know that $du_p$ and $F$ are smooth; we only know that $H$ is in $L^1.$ Alternatively, as we commented earlier, we can approximate by smooth maps ${\hat u}_p$ and $ \hat f$, make the estimates with an error term in the Euler-Lagrange equations, and then take the limit as the approximations converge to $u_p$ and $f.$
\begin{proposition}\label{supptprop8} Let $\kappa_p$ be the normalization introduced in (\ref{normintv1}) and $U_p = \kappa_p du_p. $ Assume that $s(F) \leq 1 \ (=L).$ Then
\[
\lim_{p \rightarrow \infty} <Q(U_p)^{p-2}U_p, F - du_p>_{G_p} = 0.
\]
\end{proposition}
\begin{proof}
Write the integrand as
\[
(Q(U_p)^{p-2}U_p; F- \kappa_p du_p)^\sharp + (1-\kappa_p^{-1}) TrQ(U_p)^p.
\]
By the normalization we have imposed and the fact that $\kappa_p \rightarrow 1$, the limit of the integral of the second term is 0. By Lemma~\ref{sptlemma7}, with $\omega$ taken to be the characteristic function of $G_p$ and $W=du_p$,
\begin{eqnarray*}
<Q(du_p)^{p-2}du_p, F -du_p>_{G_p} &\leq& 1/p \left(<Q(F)^p>_{G_p} - <Q(du_p)^p>_{G_p}\right) \\
&\leq& 2/p<s(F)^p> \\
&\leq& 2/p \ vol(M).
\end{eqnarray*}
In the second inequality we used (\ref{shatl1}).
\end{proof}
In Theorem~\ref{TTheorem A5} we showed that $S = \lim_{ p \rightarrow \infty} S_{p-1}$, $|S| = \lim_{ p \rightarrow \infty} |S_{p-1}|$ and the support of $S$ is contained in the support of $|S|$. We now show the other inclusion:
\begin{theorem}\label{theoremsuptharddir} The support of $|S|$ is contained in the support of $S$.
\end{theorem}
\begin{proof}
Suppose $B \subset M$ is an open ball on which $S =0$, i.e., $S_{p-1} \big |_B \rightarrow 0$ in $L^1$ as $p \rightarrow \infty$. Let $f$ be a comparison best Lipschitz map, set $F=\omega(u_p,f)^{-1}(df)_{u_p}$ and normalize as in (\ref{normintv1}) so that $<Q(U_p)^p> = 1$. By Proposition~\ref{supptprop5},
\begin{eqnarray*}
<\omega(u_p,f) S_{p-1}, du_p - F> = 0.
\end{eqnarray*}
Recall that $G_p$ is the set on which the integrand is negative, and by Proposition~\ref{supptprop8}, the integral over $G_p$ tends to 0. Hence
\begin{eqnarray*}
<S_{p-1}, du_p - F>_{B-G_p} &\leq& <\omega(u_p, f)S_{p-1}, du_p - F>_{M - G_p} \\
&=& <\omega(u_p,f)S_{p-1}, F - du_p>_{G_p} \\
&\leq& k<S_{p-1}, F - du_p>_{G_p} \\
& \rightarrow& 0.
\end{eqnarray*}
Off $G_p$ the integrand is non-negative, hence restricting to a smaller domain gives a smaller value for the integral; this justifies the first inequality. The limit is a direct application of Proposition~\ref{supptprop8}. Note that the weights $\omega(u_p,f)$ are just a nuisance factor: they do not affect the definition of $G_p$ and are bounded below by 1 and above by a constant $k$. Since also
\[
<S_{p-1}, du_p - F>_{B \cap G_p} \leq 0,
\]
putting the two estimates together we get
\begin{eqnarray}\label{Spneg}
\lim_{ p \rightarrow \infty} <S_{p-1}, du_p - F>_B \leq 0.
\end{eqnarray} Thus, \begin{eqnarray*} <|S_{p-1}|>_B &=& \kappa_p<S_{p-1},du_p>_B \\ & \leq& \kappa_p <S_{p-1},F>_B + O(1/p) \\ & \leq& \kappa_p || S_{p-1} ||_{L^1(B)} \max s(F) + O(1/p)\\ & \leq&K || S_{p-1} ||_{L^1(B)}+ O(1/p). \end{eqnarray*} The last term converges to zero because $S_{p-1} \big |_B \rightarrow 0$ in $L^1$. \end{proof} \begin{remark} There is a second proof of the theorem above using the regularity theory in Section~\ref{sect:regtheor} (cf. Theorem~\ref{mainregtH} and Corollary~\ref{cormainredsh}). Let $B$ be a ball in which $S_{p-1} \big |_B \rightarrow 0$ in $ L^1$ (which is equivalent to $S \big |_B = 0).$ Let $|U_p|_{sv^p}^2=|S_{p-1}|^{2/p} = R_p$. Since $|S_{p-1}| = |S_{p-1}|^{(p-2)/p}R_p$, by Holder's inequality \begin{eqnarray*} < |S_{p-1}| >_B &\leq & <|S_{p-1}|^{(p-2/p)(p-1/p-2)}>_B^{(p-2)/(p-1)} <R_p^{p-1}>_B^{1/(p-1)}\\ &=&<|S_{p-1}|^{(p-1)/p}>_B^{(p-2)/(p-1)} <R_p^{p-1}>_B^{1/(p-1)}. \end{eqnarray*} We first claim that the first factor goes to zero. If $S_{p-1} \big |_B\rightarrow 0$ in $L^1$, then $|S_{p-1}|^{(p-1)/p} \rightarrow 0$ in $L^1(B) $. Indeed, $S_{p-1} \big |_B\rightarrow 0$ in $L^1$ is equivalent to $\left(TrQ^{2p-2}(U_p) \right)^{1/2} \rightarrow 0$ in $L^1$. Thus, for the largest singular value of $U_p$, $s(U_p)^{p-1} \rightarrow 0$ in $L^1$. This in turn implies that $|S_{p-1}|^{(p-1)/p} \rightarrow 0$ in $L^1(B) $. We next claim that the second factor is bounded by the regularity Theorem~\ref{mainregtH} and Corollary~\ref{cormainredsh}. Indeed, (\ref{eqnsobemb}) for $s=2$ implies that \[ <R_p^p>_B^{1/p}=<|S_{p-1}|^2>_B^{1/p} \leq (cc_2p)^{4/p}. \] By H\"older, also $<R_p^{p-1}>_B^{1/(p-1)}$ is uniformly bounded. This, completes the proof. \end{remark} We now turn to: \vspace{.1cm} \begin{support} Let $f$ be any comparison best Lipschitz map and choose a ball $B \subset M$ away from the maximum stretch locus $\lambda_f$ of $f$. Normalize as before, $<Q(U_p)> = 1$. By Proposition~\ref{supptprop5}, $<\omega(u_p,f)S_{p-1}, du_p - F> = 0$ where $F = \omega(w,f)^{-1}(df)_{w} $ satisfies $s(F) \leq s(df).$ Hence the following is true for integrals with positive integrands: \begin{eqnarray*} <Q(U_p)^{p-2}U_p, du_p - F>_{B- G_p} &\leq& <Q(U_p)^{p-2}U_p, du_p - F>_{M - G_p} \\ &\leq& <\omega(u_p,f)Q(U_p)^{p-2}U_p, du_p - F)>_{M - G_p} \\ &=&<\omega(u_p,f)Q(U_p)^{p-2}U_p, F - du_p>_{G_p}\\ &\leq& k<Q(U_p)^{p-2}U_p, F - du_p>_{G_p}. \end{eqnarray*} The last term converges to zero by Proposition~\ref{supptprop8}. Here $k = \max \omega(u_p,f)$ is finite since $u_p\rightarrow u$ in $C^0.$ By multiplying by $\kappa_p$, we have \[ \lim_{p \rightarrow \infty}<Q(U_p)^{p-2}U_p, U_p - \kappa_p F>_{B-G_p} = 0. \] By the definition of $G_p$, \[ <Q(U_p)^{p-2}U_p, U_p - \kappa_p F>_{B \cap G_p} \leq <Q(U_p)^{p-2}U_p, U_p - \kappa_p F>_{G_p} < 0, \] hence by combining, \begin{eqnarray}\label{tauhat21} \lim_{p \rightarrow \infty} <Q(U_p)^{p-2}U_p, U_p -\kappa_p F>_B \leq 0. \end{eqnarray} Choose $p$ large enough so that \begin{eqnarray}\label{tauhat} \tau < \kappa_p s(F) < \hat \tau < 1. \end{eqnarray} Then rewrite (\ref{tauhat21}) as \begin{eqnarray}\label{tauhat2132} \lefteqn{\lim_{p \rightarrow \infty} ((1 - \hat \tau^{1/2})<Q(U_p)^p>_B} \nonumber \\ &-& \hat \tau^{1/2}<Q(U_p)^{p-2}U_p, \kappa_p \left(F/\hat \tau^{1/2}\right) - U_p>_B ) \leq 0. 
\end{eqnarray} By Lemma~\ref{sptlemma7} with $\omega$ equal to the characteristic function of $B$, discarding the negative term $- 1/p <Q(U_p)^p>_B$ and noting (\ref{tauhat}), \begin{eqnarray*} <Q(U_p)^{p-2}U_p, \kappa_p \left(F/\hat \tau^{1/2}\right) - U_p>_B &\leq& 1/p< Q(\kappa_p F /\hat \tau^{1/2})^p>_B\\ &\leq& 2/p <{{\hat \tau}^{1/2}}> \\ &\leq& \left(2/p\right) {\hat \tau}^{p/2} vol(M)\\ &\rightarrow& 0. \end{eqnarray*} It follows from (\ref{tauhat}) and (\ref{tauhat2132}) that $\lim_{p \rightarrow \infty} <Q(U_p)>_B = 0$ and completes the proof. \end{support} \section{Optimal best Lipschitz maps}\label{sec:idealoptim} In this section we assume that the canonical geodesic lamination consists of finitely many closed geodesics $\lambda = \cup \lambda_j$. Under this assumption we show that the stretch set of the limiting infinity harmonic map is equal to the canonical geodesic lamination $\lambda.$ In other words, the infinity harmonic map is optimal in the sense of Section~\ref{optil}. In the process we study the Dirichlet and Neumann problems for $J_p$ in a certain special neighborhood of $\lambda$. \subsection{The Dirichlet problem} In this section we study local properties of limits of solutions of $J_p$-harmonic maps in a special case. We start by studying some geometry we will need later to construct a best Lipschitz map. The rest of the section on the Dirichlet problem is not needed for this paper. The geometry we will study involves certain neighborhoods which we call parallel of a closed geodesic $\lambda$ in $M$ mapping with Lipschitz constant $L > 1$ to a similar neighborhood of $\mu$ in $N. $ Let $g: S^1 \rightarrow \lambda$ be a parameterization of $\lambda$ by arc length (we choose the circle of circumferance $d$). Let $e_t$ be the unit normal to $\lambda$ at $g(t)$ and define \[ \tilde g(s,t) = exp_{g(t)}(s e_t). \] By the choices of parameterization, the metric looks like $ds^2 + A(s,t)^2dt^2$ in this coordinate chart in {\it{any}} surface. Note that the curves $\tilde g_t(s) = \tilde g (s,t)$ in $s$ are geodesics parameterized by arc length with zero curvature. The curves $t \mapsto \tilde g (t,s)$ obtained by fixing $s$ are curves of fixed distance (parallel) to $\lambda$ with the curvature $A^{-1}\frac{\partial A}{\partial s}.$ However, because $s = 0$ is a geodesic parameterized by arc length, $A(0,t) = 1$ and ${\partial A}/{\partial s}(0,t) = 0.$ \begin{lemma} The map $\tilde g: \R \times S^1 \rightarrow M$ is a covering map which is an isomorphism when restricted to $[-h,h] \times S^1$ for $h$ small. The hyperbolic metric in the $(s,t)$ coordinate system is $ds^2 + \cosh s ^2 dt^2$. \end{lemma} \begin{proof} The curvature $R$ of any metric of the form $B^2ds^2 + A^2ds^2$ is \[ R = \frac{-1}{AB} \left( \frac{\partial}{\partial t}\left( A^{-1} \frac{\partial B}{\partial t} \right) + \frac{\partial}{\partial s}\left( B^{-1}\frac{\partial A}{\partial s} \right)\right). \] In our case, this is \[ -1 = (-1/A){\partial^2 A}/{\partial s^2}. \] We also know that ${\partial A}/{\partial s}(0,t) = 0.$ The only solution to this ODE with the given boundary conditions is $A(s,t) = \cosh s.$ \end{proof} We call the image of $\tilde g \big |_{[-h,h] \times [0, length(\lambda)]}$ a parallel neighborhood $\mathcal O_h(\lambda)$ of size $h$ of the closed geodesic $\lambda$ and use these canonical coordinate to describe the geometry. 
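As a quick sanity check on the lemma, note that the curvature condition reduces to the initial value problem $\partial_s^2 A = A$, $A(0,t)=1$, $\partial_s A(0,t)=0$, whose solution is $\cosh s$. The short Python sketch below is purely illustrative and not part of the argument; it integrates this ODE numerically and compares the result with $\cosh s$.
\begin{verbatim}
# Numerical check that A'' = A, A(0)=1, A'(0)=0 gives A(s) = cosh(s), i.e. the
# hyperbolic metric in the Fermi coordinates (s,t) is ds^2 + cosh^2(s) dt^2.
import numpy as np

def rhs(A, dA):                      # first-order system (A, dA)' = (dA, A)
    return dA, A

s_max, n_steps = 2.0, 4000
h = s_max / n_steps
s = np.linspace(0.0, s_max, n_steps + 1)
A, dA, values = 1.0, 0.0, [1.0]      # initial conditions A(0)=1, A'(0)=0
for _ in range(n_steps):             # classical RK4 step for the system
    k1a, k1b = rhs(A, dA)
    k2a, k2b = rhs(A + 0.5*h*k1a, dA + 0.5*h*k1b)
    k3a, k3b = rhs(A + 0.5*h*k2a, dA + 0.5*h*k2b)
    k4a, k4b = rhs(A + h*k3a, dA + h*k3b)
    A  += (h/6.0)*(k1a + 2*k2a + 2*k3a + k4a)
    dA += (h/6.0)*(k1b + 2*k2b + 2*k3b + k4b)
    values.append(A)
print(np.max(np.abs(np.array(values) - np.cosh(s))))  # tiny: A matches cosh(s)
\end{verbatim}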
Consider maps from one of these parallel neighborhoods $\mathcal O_h(\lambda)$ of a geodesic $\lambda$ in $M$ of length $d$ to a parallel neighborhood $\mathcal O_h(\mu)$ of a geodesic $\mu$ in $N$ of length $Ld.$ We first prescribe the Dirichlet boundary condition $u(h,t) = (R_0,Lt).$ We do not need this result for the rest of the paper, but it gives some insight into the properties of $J_p$-harmonic maps, so we include it here. We assume $L > 1$ and $R_0 \leq h,$ as we are interested in the case $L > 1$ and in solutions with Lipschitz constant $L.$ If $R_0 > h,$ then the Lipschitz constant of the solution would be $L(\cosh R_0)/\cosh h > L.$
\begin{proposition}
Under the Dirichlet boundary condition $u(h,t) = (R_0,Lt)$, there is a unique minimum of $J_p$, given by a separated variable solution $u(s,t) = (R(s),Lt)$ with $R(s) = - R(-s)$. The function $R$ is a critical point of the integral
\begin{eqnarray}\label{Label A}
J_p (R) = \int_0^{s_0} \left (|dR/ds|^p + (L\cosh R(s)/\cosh s )^p \right) \cosh s \ ds
\end{eqnarray}
and solves the ordinary differential equation
\begin{eqnarray}\label{Label B}
d/ds \left ( \cosh s \ dR/ds \left| dR/ds \right |^{p-2} \right) = L^p (\cosh R/ \cosh s)^{p-1} \sinh R.
\end{eqnarray}
\end{proposition}
\begin{proof}
We can write any solution as $u(s,t) = (R(s,t), T(s,t))$. By Theorem~\ref{thm:pdirichlet}, the solution is unique; hence, if we can find a solution of the form $u(s,t) = (R(s), Lt)$, it must be the unique solution. To show that $u(s,t) = (R(s),Lt)$ is a solution, it suffices to show that it minimizes the integral subject to the separated variable conditions (cf. \cite{palais}), and hence that it minimizes the integral (\ref{Label A}). The computation of the Euler-Lagrange equations is an exercise. Note that $R(s) = -R(-s)$ follows from uniqueness as well.
\end{proof}
To analyze the solutions to (\ref{Label B}), we perform some straightforward steps.

{\bf Step 1:} $R(s) = |R(s)|\,\mbox{sign}\, s.$ First note that the condition $(R, T) \in W^{1,p}$ implies the same for $(|R|, T)$ and $(|R|\,\mbox{sign}\, s,T)$. Thus, if $R(s) \neq |R(s)|\,\mbox{sign}\, s$, we could replace $R(s)$ by $|R(s)|\,\mbox{sign}\, s$ to obtain another minimum of the integral (\ref{Label A}), which contradicts uniqueness.

{\bf Step 2:} Let
\[
d \sigma/ds = (\cosh s)^{-1/(p-1)}, \ \ \sigma(0) = 0.
\]
Then for $s>0$,
\[
\frac{d}{d \sigma}\left( \left|\frac{dR}{d \sigma}\right|^{p-2} \frac{dR}{d \sigma} \right) = L^p (\cosh s)^{-p^*}(\cosh R)^{p-1} \sinh R,
\]
where for convenience we set $p^* = p-1-1/(p-1).$ Note that since $R(0)=0$ and $R(s) \geq 0$ for $s \geq 0$, we have $dR/d \sigma (0) \geq 0$. We conclude from this that $dR/d \sigma \geq 0$ and is increasing; hence we can write $dR/d \sigma = |dR/d \sigma |.$ Note that as $p$ goes to infinity, $\sigma$ converges rapidly to $s$, and $p^*$ becomes $p-1.$

{\bf Step 3:}
\[
\frac{p-1}{p}\, \frac{d}{d \sigma} \left(\frac{d R}{d\sigma} \right)^p = \frac{1}{p}\, L^p (\cosh s)^{-p^*}\, \frac{d}{d \sigma}(\cosh R)^p.
\]
This is the equation we get when we multiply the equation in Step 2 by $dR/d\sigma$ and use the chain rule.

{\bf Step 4:} Choose $0 \leq \sigma_1 < \sigma_2$ and let $s(\sigma_j) = s_j,$ $R(\sigma_j) =R_j $ and $dR/d \sigma (\sigma_j) = R'_j$, where by the discussion in Step 2, $s_1 < s_2$, $R_1 < R_2$ and $R'_1 < R'_2. $ By integrating the equation in Step 3,
\[
{R'_2}^p - {R'_1}^p < \frac{1}{p-1}\,L^p (\cosh s_1)^{-p^*} \left((\cosh R_2)^p - (\cosh R_1)^p \right).
\]

{\bf Step 5:} Similarly,
\[
{R'_2}^p - {R'_1}^p > \frac{1}{p-1}\,L^p (\cosh s_2)^{-p^*} \left((\cosh R_2)^p - (\cosh R_1)^p\right).
\]
\begin{proposition}\label{Proposition 3DIR}
Let $R_p$ be the solutions to the $p$-approximation of the Dirichlet problem and $R=\lim_{p \rightarrow \infty} R_p.$ Set $[\cosh R] = \cosh R$ for $R \neq 0$ and $[\cosh 0] = 0.$ Then, writing $R_j = R(s_j)$ and $R'_j = dR/ds (s_j)$, we have for all $s_1 < s_2$
\begin{eqnarray*}
R &=& |R|\,\mbox{sign}\, s\\
0 &\leq& dR/ds \\
R'_2 &\leq& \max\{ R'_1, L[\cosh R_2]/\cosh s_1 \}\\
R'_2 &\geq& \max \{ R'_1,L [\cosh R_2]/\cosh s_2 \}.
\end{eqnarray*}
Note that we can take the limit as $s_1 \rightarrow s_2$, but we must preserve the condition that $dR/ds $ is non-decreasing.
\end{proposition}
\begin{proof}
Take the limits of the inequalities in Steps 4 and 5. First write the inequality of Step 4 as
\[
R'_2 \leq \left({R'_1}^p + \frac{1}{p-1}\,L^p (\cosh R_2)^p \chi_p^p\, (\cosh s_1)^{-p^*} \right)^{1/p},
\]
where $\chi_p = \left(1- (\cosh R_1/\cosh R_2)^p\right)^{1/p}$. Thus,
\[
R'_2\leq 2^{1/p} \max \left\{R'_1,\ \frac{L}{(p-1)^{1/p}}\,\chi_p\, \frac{\cosh R_2}{(\cosh s_1)^{p^*/p}}\right\}.
\]
Now take the limit $p \rightarrow \infty$. The terms $2^{1/p}$ and $(p-1)^{1/p}$ tend to $1$, $p^*/p \rightarrow 1$, and $\chi_p \rightarrow 1$ except when $R_2 = 0$, in which case $\chi_p \rightarrow 0$; this is the reason for the bracket convention $[\cosh R_2]$. The argument in the other direction is similar.
\end{proof}
We state our final result as a conjecture, as we have only sketched the details.
\begin{conjecture}
Given $L> 1$, there is a one parameter family of functions $R_{s^*}$, $s^* \in (0, \infty)$, satisfying the conditions of Proposition~\ref{Proposition 3DIR}. The Dirichlet data $R(h) = R_0$, $0 < R_0 \leq h$, is realized for exactly one $s^*$. We describe $R=R_{s^*}$ by:
\begin{itemize}
\item $R(s) = 0$ for $0 \leq s \leq s^*$.
\item $R(s) = (L/\cosh s^*)(s - s^*)$ for $s^* \leq s \leq \tilde s$, where $\tilde s$ is obtained by solving
\[
\frac{\cosh R(\tilde s)}{\cosh \tilde s} = \frac{\cosh\big((\tilde s - s^*)L/\cosh s^*\big)}{\cosh \tilde s} = \frac{1}{\cosh s^*}.
\]
\item For $s\geq \tilde s,$ $R'(s) = L \cosh R/\cosh s.$
\end{itemize}
\end{conjecture}
This is described in words by saying that $R$ is $0$ up to $s^*.$ In this region, the limit of the term $(L\cosh R / \cosh s)^p$ dominates and is at its minimum. At $s^*$, the solution becomes linear, with the derivative picked to keep $R' \geq L \cosh R/\cosh s.$ In this region, the term $(dR/ds)^p$ in the integrand is the larger one and is being minimized. When $L\cosh R/\cosh s$ increases to match the linear growth, the solution follows the trajectory given by the ODE $R'(s) = L \cosh R/\cosh s,$ which keeps the two terms in the integrand equal. Each piece is necessary. If a solution $R$ were positive right away, the growth rate would have to be $E =L\cosh R/\cosh s > 1.$ Since the derivative only increases, we would immediately get $R_0 \geq E h.$ These are not the solutions we are looking for, as the Lipschitz constant would be attained on the boundary $s = h.$ In the simple case, the slope would be $L/\cosh s^*$, $(L/\cosh s^*)(h - s^*) = R_0$, and there would be no third interval. But a careful analysis shows that these linear solutions reach a point $\tilde s$ where the derivative fails to be larger than $L \cosh R / \cosh s$, and it is necessary to switch to the solution of the ODE. The solution $R_{s^*}$ parameterized by $s^*$ decreases as $s^*$ increases, except at points $s < s^*$ where $R_{s^*}(s) = 0.$ Note that $R_{s^*}(h) > Lh$ for $s^* = 0$ and $R_{s^*}(h) = 0$ for $s^* = h.$ Hence every boundary condition $R(h) = R_0 $ is realized exactly once. This leads us to a conjecture.
\begin{conjecture}
The limiting infinity harmonic maps are not homeomorphisms.
There are small regions around the canonical geodesic lamination which are collapsed onto the image geodesics; namely, $u^{-1}(\mu)$ often contains open sets.
\end{conjecture}

\subsection{The Neumann problem}
The situation for the Neumann problem is much easier to analyze, and this is the one we want for our applications. Note that each of our solutions to the Dirichlet problem contains an interval on which it agrees with the solution to the Neumann problem. Let $\mathcal O=\mathcal O_h(\lambda_j)$ be a parallel neighborhood of size $h$ about a closed geodesic $\lambda_j$ in $M.$
\begin{proposition}\label{neuprop2}
Minimize $J_p$ among all maps $u: \mathcal O \rightarrow N$, $u(s,t) = (R(s), T(t))$ with $T(t +d) = T(t) + Ld.$ For every $p$, the minimum is attained at $f(s,t) =(0, L(t - t_0))$, and the solution is unique up to rotation along the geodesic.
\end{proposition}
\begin{proof}
By looking at the integrand~(\ref{Label A}), it is immediate that the minimum is realized at $R=0$.
\end{proof}
Lift $f(s,t) = (0, L t)$ to the abstract setting on the parallel neighborhood $\mathcal O$ of $\lambda_j.$ We will denote by $ f $ the minimum obtained in Proposition~\ref{neuprop2}.
\begin{corollary}\label{neumineq}
Suppose $u: \mathcal O \rightarrow N$ is in $ W^{1,p}$ (or the Schatten-von Neumann space) and homotopic to the minimum $f$ from Proposition~\ref{neuprop2}. Then $ J_p(f) \leq J_p(u)$, with equality if and only if $u$ is a rotation of $f.$
\end{corollary}
\begin{proof}
By the results in Section~\ref{sect1}, $J_p$ is convex on the space $ W^{1,p}$ of maps $u$ homotopic to $ f$, and strictly convex if we fix $u(x_0) = X_0$. It attains a minimum, which satisfies the Neumann conditions on the boundary. The map $f$ to the geodesic $\mu$ satisfies the $J_p$-Euler-Lagrange equations and the boundary conditions (with some work one can see this without going to the PDE). The uniqueness follows by fixing the value of one point.
\end{proof}
\begin{corollary}
For solutions of the Neumann problem, on the boundary
\[
(dT/dt, dT/dt)^\sharp = L^2/\cosh^2 h < L^2.
\]
\end{corollary}

\subsection{Construction of an ideal optimal best Lipschitz map}\label{sec:idealoptim}
The purpose of this section is to provide a proof of the following theorem:
\begin{theorem}\label{theo:idealoptim}
Assume $L > 1$ and that the canonical geodesic lamination $\lambda$ for a homotopy class of maps $M \rightarrow N$ consists of a finite disjoint union of closed geodesics $\lambda_j.$ Then for $h$ sufficiently small, the solution to the Neumann problem in the parallel neighborhoods ${\mathcal O}_h(\lambda_j)$ of $\lambda_j$ of size $h$ extends to a best Lipschitz map $f: M \rightarrow N.$ Moreover, if we restrict $f$ to $M^c = M - \cup {\mathcal O}_h(\lambda_j)$, then $f \big |_{M^c}$ has best Lipschitz constant $K < L.$ The map $f$ is optimal.
\end{theorem}
Note that the translation along the geodesics has been fixed, since we know in advance that there is at least one best Lipschitz map which prescribes the relative rotations. We make use of an optimal best Lipschitz map $\hat f :M \rightarrow N $ whose maximum stretch lamination is exactly $\lambda$ (cf. Section~\ref{optil}), and simply change the parameterization along the geodesics perpendicular to $\lambda_j$ in a parallel neighborhood of $\lambda_j$. We fix $h_0>0$ sufficiently small.
We define a map \[ z: \mathcal O_{2h_0} \rightarrow \mathcal O_{2h_0} \ \ z(s,t) = (R(s),t) \] where \[ R: [0, 2h_0] \rightarrow [0, 2h_0] \] as follows: Let $K_0< L$ be the Lipschitz constant of $\hat f$ in the parallel strip in ${\mathcal O}_{2h_0}$ for $h_0/2 \leq s \leq 2 h_0.$ Choose $h$ sufficiently small and fix $k$ with \[ 1< k:= (h_0 + 2h)/h_0 \leq (L/K_0)^{1/2}. \] Define: \begin{itemize} \item $a)$ $R(s) = 0$ for $0 \leq s \leq h$ \item $b)$ $R(s) = 1/2(s - h)$ for $h\leq s \leq 3h$ \item $c)$ $R(s) = s - 2h $ for $3h \leq s \leq h_0$ \item $d)$ $R(s) = k(s - h_0) + h_0 - 2h = 2h_0 - k(2h_0 - s)$ for $h_0 \leq s \leq 2h_0.$ \end{itemize} Note that by definition of $h$, $- k h_0 + h_0 - 2h = 2h_0 - k2 h_0$. We use the fact that the metric tensor is $(1, \cosh^2 R)$ in the image and $(1, \cosh^2 s)$ in the domain. The eigenvalues of $dz$ at $(s,t)$ are $(R'(s), \cosh R(s)/ \cosh s).$ Note that because $R(s)\leq s$ the second term is bounded above by 1. We collect the information which is easily proved about the eigenvalues of $dz$: \begin{lemma} The eigenvalues of $dz$ are $(R'(s), \cosh R(s)/\cosh s)$. \begin{itemize} \item In region $a)$ the image of $z$ is contained in $\lambda_j$ and $f = \hat f \circ z$ is the solution to the Neumann problem. \item In region $b)$ the eigenvalues of $dz$ are $(1/2, \cosh R/\cosh(2R+h))$ for $0 \leq R \leq h.$ They are bounded by $k_h = max(1/2, 1/\cosh h) < 1.$ \item In region $c)$, the eigenvalues are $\leq 1$. \item In region $d)$ the eigenvalues are bounded by $k \leq (L/K_0)^{1/2}.$ \end{itemize} \end{lemma} We are now ready to prove the theorem: \vspace{.1cm} \begin{theo:idealoptim} We define $f = \hat f \circ z$ in the parallel neighborhoods $\mathcal O_{2 h_0} (\lambda_j)$, and $f = \hat f$ on $M - \cup \mathcal O_{2 h_0} (\lambda_j).$ In region $a)$, $f$ is the solution to the Neumann problem. In region $b)$ the Lipschitz constant is less than or equal to $K_h = k_h L < L$ since Lipschitz constant of $\hat f$ is $L.$ To study region $c)$, the Lipschitz constant of $\hat f \big |_{M^c}$ is $K^* < L$, since this region is disjoint from $\lambda.$ Then in region $c)$, as the image of $z$ lies in $M^c$, $f = \hat f \circ z$ has Lipschitz constant at most $K^*.$ In region $d)$, since the Lipschitz constant of $z$ is bounded by $k$ and the Lipschitz constant of $\hat f$ restricted to the image of $z$ has Lipschitz constant $K_0$, the Lipschitz constant of $\hat f \circ z$ is $kK_0 \leq (LK_0)^{1/2} < L.$ In the compliment of the parallel neighborhoods, $f = \hat f$ has Lipschitz constant $K^*$. Putting all this together, the Lipschitz constant $K$ of $f \big |_{M^c} \leq \max \{K_h, K^*, kK_0 =(LK_0)^{1/2}\} < L.$ \end{theo:idealoptim} \subsection{The infinity harmonic map is optimal}\label{sec:infinityopt} We now turn to the most delicate argument. Of course we conjecture this theorem is true for any maximum stretch lamination. The difficulties of proving this theorem demonstrate the difficulties of working with $p$-approximations in general. 
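Before turning to the proof, the eigenvalue bounds in the preceding lemma can be checked numerically. The Python sketch below is purely illustrative; the specific values of $L$, $K_0$ and $h_0$ are our own assumptions, with $h$ chosen so that $1 < k = (h_0+2h)/h_0 = (L/K_0)^{1/2}$ as in the construction.
\begin{verbatim}
# Check the eigenvalue bounds for dz, whose eigenvalues at (s,t) are
# (R'(s), cosh R(s)/cosh s). Parameter values below are illustrative only.
import numpy as np

L, K0 = 1.5, 1.0                         # assume K0 < L
h0 = 0.1
h  = 0.5 * h0 * (np.sqrt(L / K0) - 1)    # so that k = (h0 + 2h)/h0 = sqrt(L/K0)
k  = (h0 + 2*h) / h0

def R(s):
    if s <= h:   return 0.0              # region a
    if s <= 3*h: return 0.5 * (s - h)    # region b
    if s <= h0:  return s - 2*h          # region c
    return 2*h0 - k*(2*h0 - s)           # region d

def Rprime(s):
    if s <= h:   return 0.0
    if s <= 3*h: return 0.5
    if s <= h0:  return 1.0
    return k

for name, lo, hi, bound in [("b", h, 3*h, max(0.5, 1/np.cosh(h))),
                            ("c", 3*h, h0, 1.0),
                            ("d", h0, 2*h0, k)]:
    ss = np.linspace(lo, hi, 2000)
    eig = np.maximum([Rprime(s) for s in ss],
                     [np.cosh(R(s))/np.cosh(s) for s in ss])
    print(name, eig.max() <= bound + 1e-12)   # expect True in every region
\end{verbatim}
In each region the printed value should be \texttt{True}, in agreement with the bounds $k_h$, $1$ and $k$ stated in the lemma.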
We will utilize the ideal best Lipschitz map constructed in the previous section which allows us to repeat our constructions in $M^c$, the compliment of parallel neighborhoods of the geodesics in $\lambda_j.$ \begin{theorem}\label{infharmopt} If the canonical geodesic lamination $\lambda$ for a homotopy class of maps $M \rightarrow N$ consists of a finite number of closed geodesics $\lambda_j$, then the maximum stretch set of the infinity harmonic map $u_p\rightarrow u$ is $\lambda.$ In other words any infinity harmonic map is optimal. \end{theorem} \begin{proof} The goal is to show that the maximum stretch set does not intersect the domain $M^c = M \backslash \cup_j {\mathcal O}_j$, where the ${\mathcal O}_j={\mathcal O}_h(\lambda_j)$ are parallel neighborhoods of the geodesics $\lambda_j$ of size $ h$. We take $h$ arbitrarily small. By Theorem~\ref{theo:idealoptim}, if $h$ is sufficiently small there is a best Lipschitz map $f:M \rightarrow N$ such that \begin{itemize} \item a) $f \big |_{\mathcal O_j}$ solves the Neumann problem for all $p$ \item b) $f \big |_{M^c}$ has Lipschitz constant $K < L.$ \end{itemize} Our previous analysis is repeated using this best Lipschitz map (we could have used this $f$ earlier; we have not needed its properties until now). This is the first place where we use $L > 1$ to construct $f.$ Although we have renormalized $M$ so the Lipschitz constant is 1, this does not affect the construction of $f.$ Write the Euler-Lagrange equations for $u_p$ with $F=\omega(u_p, f)^{-1}df_{u_p}$ (cf. Proposition~\ref{supptprop5}) as \begin{eqnarray}\label{eulqind} \lefteqn{<\omega(u_p, f)Q(du_p)^{p-2}du_p,du_p-F>_{M^c}} \nonumber\\ &=& \sum_j <\omega(u_p, f)Q(du_p)^{p-2}du_p,F - du_p>_{\mathcal O_j}. \end{eqnarray} Recall $<. >_{\mathcal O}$ means restrict the integral to ${\mathcal O}$. By the convexity of the Shatten-von Neumann norms in Lemma~\ref{sptlemma7}, \begin{eqnarray}\label{eulqind2} \lefteqn{<\omega(u_p, f) Q(du_p)^{p-2}du_p, F-du_p>_{\mathcal O_j}} \nonumber\\ &\leq&1/p <\omega (u_p, f) (Tr( Q(F)^p -Q(du_p)^p))>_{\mathcal O_j}. \end{eqnarray} We claim that this is nonpositive. We know it is nonpositive without the weights, since $Q(F) = Q(\omega(u_p, f)^{-1}df_{u_p}) \leq Q(df)$ as non-negative matrices by Lemma~\ref{supptLemma2}. Since $f$ solves the Neumann problem we have by Corollary~\ref{neumineq}, \[ <Q(F)^p>_{\mathcal O_j} \leq J_p(f \big|_{\mathcal O_j}) \leq J_p(u_p \big|_{\mathcal O_j})=<Q(du_p)^p>_{\mathcal O_j}. \] It is a minor miracle that it is also true with the weights. We simply reverse the role of $u_p$ and $f$ and apply the Euler-Lagrange equations for $J_p$ to $f \big|_{\mathcal O_j}$. We only wrote this is in coordinates when we derived $f$, but we are certainly free to do this. Switching to roles of $f$ and $u_p$ and using the fact that the Neumann condition allows test functions with arbitrary boundary vales in ${\mathcal O_j}$, we get \[ <\omega(f, u_p) Q(df)^{p-2}df, df - U^*>_{\mathcal O_j}= 0. \] Here $Q(U^*) = Q(\omega(f, u_p)^{-1}{(du_p)}_f) \leq Q(du_p)$ as positive matrices. By the pointwise convexity argument on the Schatten-von Neumann norms in Lemma~\ref{sptlemma7}, \begin{eqnarray}\label{dfustar} 0 &=& <\omega(f, u_p) Q(df)^{p-2}df, U^* - df>_{\mathcal O_j} \nonumber \\ &\leq&1/p <\omega(f, u_p) \left(Tr (Q(U^*)^p - Q(df)^p) \right)>_{\mathcal O_j}. 
\end{eqnarray} Since $Q(U^*) \leq Q(du_p)$ and $Q(F) \leq Q(df)$ as non-negative matrices, we deduce from this the weighted inequalities \begin{eqnarray*} <\omega(f, u_p) TrQ(F)^p>_{\mathcal O_j} &\leq& <\omega(f, u_p)TrQ(df)^p>_{\mathcal O_j}\\ &\leq& <\omega(f, u_p) TrQ(U^*)^p>_{\mathcal O_j}\\ &\leq&<\omega(f, u_p) Tr( Q(du_p)^p>_{\mathcal O_j}. \end{eqnarray*} Here we used (\ref{dfustar}) in the second inequality. The upshot of this inequality together with (\ref{eulqind}) and (\ref{eulqind2}) is simply that \begin{eqnarray}\label{upsh12} <\omega(f, u_p)Q(du_p)^{p-2}du_p, du_p - F>_{M^c} \leq 0. \end{eqnarray} We can now proceed replacing $M$ by $M^c$, using the fact that $s(F) \leq s(df) \leq K < L (= 1)$. Formula (\ref{upsh12}) and the pointwise convexity of the Schatten-von Neumann norms of the integrands in Lemma~\ref{sptlemma7}, give \begin{eqnarray}\label{secupf} 0 &\leq& <\omega(f, u_p)Q(du_p)^{p-2}du_p, F - du_p>_{M^c} \nonumber \\ &\leq& 1/p<\omega(f, u_p)TrQ(F)^p>_{M^c} - <\omega(f, u_p)Tr Q(du_p)^p>_{M^c}. \end{eqnarray} Using the point-wise bounds on $1 \leq \omega (f, u_p) \leq k$, we get from (\ref{secupf}) \begin{eqnarray*} J_p(u_p \big|_{M^c})&\leq& <\omega(f, u_p) Tr Q(du_p)^p>_{M^c}\\ &\leq& <\omega(f, u_p) Tr Q(F)^p>_{M^c}\\ &\leq& k J_p(df \big|_{M^c})\\ &\leq& 2k K^p vol(M). \end{eqnarray*} By taking $p$-th roots and going to the limit, it follows that the Lipschitz constant of the limit $u$ in $M^c$ is less than or equal to $K < L ( = 1). $ \end{proof} We are not at this time able to prove the theorem above for general laminations. We make the following conjecture: \begin{conjecture}\label{conjopt} Any infinity harmonic map in a homotopy class of maps $M \rightarrow N$ between hyperbolic surfaces is optimal. Therefore, the stretch set of any infinity harmonic map is the canonical lamination. \end{conjecture} As a partial result we can show that the stretch set of an infinity harmonic map does not have a component disjoint from the canonical lamination. \begin{theorem} Assume the maximal stretch set of the infinity harmonic map $u$ can be decomposed as a finite sum $l = \cup_j l_j$ where $l_j$ is closed and the $l_j$ are mutually disjoint. Then every $l_j$ must contain a closed subset of the canonical lamination. \end{theorem} \begin{proof} We first normalize as before so that $L=1$. Let $l_0$ be such a closed subset of the maximal stretch set. Then there exist open sets $\mathcal O$ and $\tilde{\mathcal O}$ such that \[ l_0 \subset \mathcal O \subset \overline{\mathcal O} \subset \tilde{\mathcal O} \subset \overline{\tilde{\mathcal O}} \] where $\Omega =\overline{\tilde{\mathcal O}} - \mathcal O$ does not intersect the maximum stretch set of $u$. Let $\tilde L < L=1$ be the Lipschitz constant of $u \big|_\Omega.$ Choose a cut-off function $\phi$ such that $\phi = 1$ on $\mathcal O$ and 0 on $M - \tilde{\mathcal O}$. The support of $d\phi$ will be in $\Omega.$ Now choose a different normalization for $\tilde U_p = \tilde \kappa_p~du_p$, so that \begin{eqnarray}\label{est?MYU} <\phi S_{p-1} (\tilde U_p),\tilde U_p> = <\phi Tr Q(\tilde U_p)^p> = 1. \end{eqnarray} Note that in general $\tilde \kappa_p^p >\kappa_p^p$, but as before, $ \lim_{p \rightarrow \infty} \tilde \kappa_p = L^{-1}=1.$ Now repeat the argument in Proposition~\ref{supptprop5}, using an optimal best Lipschitz comparison function $f$. Recall that by definition, the maximal stretch set of $f$ is the canonical lamination. Insert as a test function $\phi(u_p - f)$. 
We get \begin{eqnarray}\label{est?M} \lefteqn{<\phi \omega(u_p,f) S_{p-1}(\tilde U_p), \tilde U_p - F> } \nonumber\\ &\leq & 2 \max |d\phi| \max |(u_p-f)_{u_p}|< s(\tilde U_p)^{p-1}>_\Omega = Error(p). \end{eqnarray} Note that \begin{eqnarray*} \lim_{p \rightarrow \infty} <s(\tilde U_p)^{p-1}>_\Omega^{1/p-1} =\lim_{p \rightarrow \infty} \tilde \kappa_p <s(d u_p)^{p-1}>_\Omega^{1/p-1} = \tilde L < 1 \end{eqnarray*} and $u_p$ is uniformly bounded, so $Error(p) \rightarrow 0.$ Using (\ref{est?M}) together with the second inequality in (\ref{holds}) on the Schatten-von Neumann norms, we get \begin{eqnarray*} \lefteqn{<\phi \omega(u_p,f) Tr Q(\tilde U_p)^p>}\\ & \leq & <\phi \omega(u_p,f) Tr Q({\tilde U_p})^p>^{(p-1)/p} {<\phi \omega(u_p, f) Tr Q(F)^p>}^{1/p} \\ & +& Error(p). \end{eqnarray*} Using the normalization (\ref{est?MYU}) we have chosen, we divide by \[ <\phi \omega(u_p,f) Tr Q({\tilde U_p})^p>^{(p-1)/p}>1. \] This gives \[ <\phi \omega(u_p, f) Tr Q(\tilde U_p)^p>^{1/p} \leq <\phi \omega(u_p, f) Tr Q(F)^p>^{1/p}+Error(p). \] In the limit, we get $1\leq L_{f |_{\tilde{\mathcal O}}}$ which shows that the Lipschitz constant of $f$ in $\tilde{\mathcal O}$ had to be $L=1$. Hence $\tilde{\mathcal O}$ contains a component of the canonical lamination. \end{proof} \begin{thebibliography}{ABC} \bibitem[Bha]{bhatia} R. Bhatia. {\it{Matrix Analysis}}. Graduate Texts in Mathematics 169, Springer (1991). \bibitem[A-F-P]{ambrosio} L. Ambrosio, N. Fusco, D. Pallara. {\it{Functions of bounded variation and free discontinuity problems.}} Oxford Mathematical Monographs (2000). \bibitem[Ar1]{aron1}G. Aronsson. {\it Extension of functions satisfying Lipschitz conditions}. Ark. Mat. 6, 551-561, (1967). \bibitem[Ar2]{aron2}G. Aronsson. {\it On the partial differential equation $u^2_xu_{xx} +2u_xu_yu_{xy} +u^2_yu_{yy} =0$}. Ark. Mat. 7, 395-425, (1968). \bibitem[Co]{corlette} K. Corlette. {\it Flat {$G$}-bundles with canonical metrics.} J. Differential Geom. 28, 361-382, (1988). \bibitem[C]{crandal} M. Crandal. {\it A Visit with the $\infty$-Laplace Equation.} Calculus of Variations and Nonlinear Partial Differential Equations, 75-122 (2008). \bibitem[D-U1]{daskal-uhlen1} G. Daskalopoulos and K. Uhlenbeck. {Transverse Measures and Best Lipschitz and Least Gradient Maps.} Preprint. \bibitem[D-U2]{daskal-uhlen2} G. Daskalopoulos and K. Uhlenbeck. {Analytic properties of Stretch maps and geodesic laminations II.} In preparation. \bibitem[Ev]{evans} C. Evans. {\it Partial Differential Equations}. Graduate Studies in Mathematics, American Mathematical Society 2nd Edition. Vol. 54, Issue 2, 200-225, (1984). \bibitem[Gu-K]{kassel} F. Gueritaud and F. Kassel. {\it Maximally stretched laminations on geometrically finite hyperbolic manifolds.} Geom. Topol. Volume 21, Number 2, 693-840 (2017). \bibitem[H-L]{hardlin} R. Hard and F. Lin. {\it Mappings minimizing the $L^p$ norm of the gradient.} Communication in Pure and Applied mathematics, Volume 40, Issue 5, 555-588 (1987). \bibitem[Ju]{juutinen} P. Juutinen. {\it{p-harmonic approximation of functions of least gradient.}} Indiana Math. Jour. Vol 54, 1015-1030 (2005). \bibitem [Kar]{karcher} H. Karcher. {\it Riemannian center of mass and smoothing.} Communications Pure and Applied math. vol xxx, 509-549, (1977). \bibitem [Lind]{lind} P. Lindqvist. {\it{Notes on the Infinity Laplace Equation.}} Springer Briefs in Mathematics (2016). \bibitem[P]{palais} R. Palais. {\it{The principle of Symmetric Criticality.}} Commun. Math. Phys. 69, 19-30, (1979). 
\bibitem[Pa-Th]{papa} A. Papadopoulos and G. Theret. {\it On Teichm\"uller's metric and Thurston's asymmetric metric on Teichm\"uller space.} Handbook of Teichm\"uller Theory, Vol. 1, European Math. Soc. Publishing House (2007). \bibitem[S-S]{smart} S. Sheffield and C. Smart. {\it Vector-valued optimal Lipschitz extensions.} Communications on Pure and Applied Math. vol LXV 128-154 (2012). \bibitem[Si]{simon} L. Simon. {\it Introduction to Geometric Measure Theory.} Tsinghua Lectures (2014). \bibitem[Thu]{thurston} W. Thurston. {\it Minimal stretch maps between hyperbolic surfaces.} Preprint arXiv:math/9801039. \bibitem[U]{uhlen} K. Uhlenbeck. {\it {Regularity for a class of nonlinear elliptic systems.}} Acta Mathematica 138, 219-240 (1977). \end{thebibliography} \end{document}
2205.08032v1
http://arxiv.org/abs/2205.08032v1
On Algebraic Constructions of Neural Networks with Small Weights
\documentclass[conference,letterpaper]{IEEEtran} \usepackage[cmex10]{amsmath} \usepackage{amsthm} \usepackage{mathtools} \usepackage{amssymb} \usepackage{dsfont} \usepackage{xcolor} \usepackage{float} \usepackage{verbatim} \usepackage{multirow} \usepackage[maxbibnames=99,style=ieee,sorting=nyt,citestyle=numeric-comp]{biblatex} \bibliography{bibliography.bib} \title{On Algebraic Constructions of Neural Networks with Small Weights} \author{ \IEEEauthorblockN{\textbf{Kordag Mehmet Kilic}, \textbf{Jin Sima} and \textbf{Jehoshua Bruck}} \IEEEauthorblockA{Electrical Engineering, California Institute of Technology, USA, \texttt{\{kkilic,jsima,bruck\}@caltech.edu} } } \date{} \newtheorem{theorem}{Theorem}\newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{definition}{Definition} \newcommand*\rfrac[2]{{}^{#1}\!/_{#2}} \newcommand{\LT}{\mathcal{L}\mathcal{T}} \usepackage{tikz}[border=2mm] \usepackage{float} \usetikzlibrary{shapes.geometric} \AtBeginBibliography{\small} \interdisplaylinepenalty=2500 \hyphenation{op-tical net-works semi-conduc-tor} \tikzset{ ->, gate/.style={draw=black,fill=#1,minimum width=6mm,circle}, square/.style={regular polygon,regular polygon sides=4}, every pin edge/.style={draw=black} } \begin{document} \maketitle \begin{abstract} Neural gates compute functions based on weighted sums of the input variables. The expressive power of neural gates (number of distinct functions it can compute) depends on the weight sizes and, in general, large weights (exponential in the number of inputs) are required. Studying the trade-offs among the weight sizes, circuit size and depth is a well-studied topic both in circuit complexity theory and the practice of neural computation. We propose a new approach for studying these complexity trade-offs by considering a related algebraic framework. Specifically, given a single linear equation with arbitrary coefficients, we would like to express it using a system of linear equations with smaller (even constant) coefficients. The techniques we developed are based on Siegel’s Lemma for the bounds, anti-concentration inequalities for the existential results and extensions of Sylvester-type Hadamard matrices for the constructions. We explicitly construct a constant weight, optimal size matrix to compute the EQUALITY function (checking if two integers expressed in binary are equal). Computing EQUALITY with a single linear equation requires exponentially large weights. In addition, we prove the existence of the best-known weight size (linear) matrices to compute the COMPARISON function (comparing between two integers expressed in binary). In the context of the circuit complexity theory, our results improve the upper bounds on the weight sizes for the best-known circuit sizes for EQUALITY and COMPARISON. \end{abstract} \section{Introduction} \label{sec:intro} An $n$-input Boolean function is a mapping from $\{0,1\}^n$ to $\{0,1\}$. In other words, it is a partitioning of $n$-bit binary vectors into two sets with labels $0$ and $1$. In general, we can use systems of linear equations as descriptive models of these two sets of binary vectors. For example, the solution set of the equation $\sum_{i=1}^n x_i = k$ is the $n$-bit binary vectors $X = (x_1,\dots, x_n)$ where each $x_i \in \{0,1\}$ and $k$ is the number of $1$s in the vectors. We can ask three important questions: How expressive can a single linear equation be? How many equations do we need to describe a Boolean function? 
Could we simulate a single equation by a system of equations with smaller integer weights? Let us begin with an example: $3$-input PARITY function where we label binary vectors with odd number of 1s as 1. We can write it in the following form: \begin{equation} \label{eq:parity_3} \text{PARITY}(X) = \mathds{1}\Big\{(2^2 x_3 + 2^1 x_2 + 2^0 x_1) \in \{1,2,4,7\} \Big\} \end{equation} where $\mathds{1}\{.\}$ is the indicator function with outputs 0 or 1. We express the binary vectors as integers by using binary expansions. Thus, it can be shown that if the weights are exponentially large in $n$, we can express all Boolean functions in this form. Now, suppose that we are only allowed to use a single equality check in an indicator function. Considering the $3$-input PARITY function, we can simply obtain \begin{equation} \label{eq:exp_construction} \begin{bmatrix} 2^0 & 2^1 & 2^2 \\ 2^0 & 2^1 & 2^2 \\ 2^0 & 2^1 & 2^2 \\ 2^0 & 2^1 & 2^2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 4 \\ 7 \end{bmatrix} \end{equation} None of the above equations can be satisfied if $X$ is labeled as 0. Conversely, if $X$ satisfies one of the above equations, we can label it as $1$. For an arbitrary Boolean function of $n$ inputs, if we list every integer associated with vectors labeled as $1$, the number of rows may become exponentially large in $n$. Nevertheless, in this fashion, we can compute this function by the following system of equations using smaller weights. \begin{equation} \label{eq:parity_best} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} \end{equation} Not only there is a simplification on the number of equations, but also the weights are reduced to smaller sizes. This phenomenon motivates the following question with more emphasis: For which Boolean functions could we obtain such simplifications in the number of equations and weight sizes? For PARITY, this simplification is possible because it is a \textit{symmetric} Boolean function, i.e., the output depends only on the number of $1$s of the input $X$. We are particularly interested in such simplifications from large weights to small weights for a class of Boolean functions called \textit{threshold functions}. Note that we use the word ``large'' to refer to exponentially large quantities in $n$ and the word ``small'' to refer to polynomially large quantities (including $O(n^0) = O(1)$) in $n$. \subsection{Threshold Functions and Neural Networks} Threshold functions are commonly treated functions in Boolean analysis and machine learning as they form the basis of neural networks. Threshold functions compute a weighted summation of binary inputs and feed it to a \textit{threshold} or \textit{equality} check. If this sum is fed to the former, we call the functions \textit{linear threshold functions} (see \eqref{eq:lt}). If it is fed to the latter, we call them \textit{exact threshold functions} (see \eqref{eq:elt}). We can write an $n$-input threshold function using the indicator function where $w_i$s are integer weights and $b$ is a \textit{bias} term. \begin{align} \label{eq:lt} f_{\LT} (X) &= \mathds{1}\Big\{\sum_{i=1}^n w_i x_i \geq b\Big\} \\ \label{eq:elt} f_{\mathcal{E}} (X) &= \mathds{1}\Big\{\sum_{i=1}^n w_i x_i = b\Big\} \end{align} A device computing the corresponding threshold function is called a \textit{gate} of that type. 
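As a sanity check on the two descriptions of $3$-input PARITY above, one can enumerate $\{0,1\}^3$ and confirm that the single large-weight indicator \eqref{eq:parity_3} and the two-row small-weight system \eqref{eq:parity_best} accept exactly the same vectors. The following Python snippet is purely illustrative:
\begin{verbatim}
# Brute-force check: the exponential-weight description of 3-input PARITY
# (eq:parity_3) and the two-equation small-weight system (eq:parity_best)
# accept exactly the same input vectors.
from itertools import product

def parity_big(x):      # 1{ 4*x3 + 2*x2 + x1 in {1,2,4,7} }
    return (4*x[2] + 2*x[1] + x[0]) in {1, 2, 4, 7}

def parity_small(x):    # one of the two rows [1 1 1] x = 1 or [1 1 1] x = 3 holds
    return sum(x) in {1, 3}

assert all(parity_big(x) == parity_small(x) == (sum(x) % 2 == 1)
           for x in product((0, 1), repeat=3))
print("both systems compute 3-input PARITY")
\end{verbatim}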
To illustrate the concept, we define COMPARISON (denoted by COMP) function which computes whether an $n$-bit integer $X$ is greater than another $n$-bit integer Y. The exact counterpart of it is defined as the EQUALITY (denoted by EQ) function which checks if two $n$-bit integers are equal (see Figure \ref{fig:comp_and_eq}). For example, we can write the EQ function in the following form. \begin{equation} \label{eq:equality} \text{EQ}(X,Y) = \mathds{1}\Big\{\sum_{i=1}^n 2^{i-1}(x_i-y_i) = 0 \Big\} \end{equation} \begin{figure}[H] \begin{tikzpicture} \tikzstyle{sum} = [gate=white,label=center:+] \tikzstyle{input} = [circle] \newcommand{\nodenum}{3} \newcommand{\eq}{=} \pgfmathsetmacro{\offset}{\nodenum} \node[gate=white,label=center:$\LT$,pin=right:COMP] (lt) at (2,-\nodenum-0.5) {}; \foreach \x in {1,...,\nodenum} { \pgfmathsetmacro{\exponent}{int(\x-1)} \node[input,label=180:$x_\x$] (xo-\x) at (0,-\x) {}; \draw (xo-\x) -- node[above,pos=0.3] {$2^{\exponent}$} (lt); } \foreach \y in {1,...,\nodenum} { \pgfmathsetmacro{\exponent}{int(\y-1)} \node[input,label=180:$y_\y$] (yo-\y) at (0,-\y - \offset) {}; \draw (yo-\y) -- node[above,pos=0.25] {$-2^{\exponent}$} (lt); } \pgfmathsetmacro{\offsetx}{4.6} \node[gate=white,label=center:$\mathcal{E}$,pin=right:EQ] (e) at (2+\offsetx,-\nodenum-0.5) {}; \foreach \x in {1,...,\nodenum} { \pgfmathsetmacro{\exponent}{int(\x-1)} \node[input,label=180:$x_\x$] (xo-\x) at (\offsetx,-\x) {}; \draw (xo-\x) -- node[above,pos=0.3] {$2^{\exponent}$} (e); } \foreach \y in {1,...,\nodenum} { \pgfmathsetmacro{\exponent}{int(\y-1)} \node[input,label=180:$y_\y$] (yo-\y) at (\offsetx,-\y - \offset) {}; \draw (yo-\y) -- node[above,pos=0.25] {$-2^{\exponent}$} (e); } \end{tikzpicture} \caption{The $3$-input COMP and EQ functions for integers $X$ and $Y$ computed by linear threshold and exact threshold gates. A gate with an $\LT$(or $\mathcal{E}$) inside is a linear (or exact) threshold gate. More explicitly, we can write $\text{COMP}(X,Y) = \mathds{1}\{4x_3 + 2x_2 + x_1 \geq 4y_3 + 2y_2 + y_1\}$ and $\text{EQ}(X,Y) = \mathds{1}\{4x_3 + 2x_2 + x_1 = 4y_3 + 2y_2 + y_1\}$} \label{fig:comp_and_eq} \end{figure} In general, it is proven that the weights of a threshold function can be represented by $O(n\log{n})$-bits and this is tight \cite{alon1997anti,babai2010weights, haastad1994size,muroga1971threshold}. However, it is possible to construct ``small'' weight \textit{threshold circuits} to compute any threshold function \cite{amano2005complexity,goldmann1993simulating,hofmeister1996note, siu1991power}. This transformation from a circuit of depth $d$ with exponentially large weights in $n$ to another circuit with polynomially large weights in $n$ is typically within a constant factor of depth (e.g. $d+1$ or $3d + 3$ depending on the context) \cite{goldmann1993simulating,vardi2020neural}. For instance, such a transformation would simply follow if we can replace any ''large`` weight threshold function with ``small'' weight depth-$2$ circuits so that the new depth becomes $2d$. It is possible to reduce polynomial size weights into constant weights by replicating the gates that is fed to the top gate recursively (see Figure \ref{fig:replicate}). Nevertheless, this would inevitably introduce a polynomial size blow-up in the circuit size. We emphasize that our focus is to achieve this weight size reduction from polynomial weights to constant weights with at most a constant size blow-up in the circuit size. 
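For concreteness, the two gates of Figure \ref{fig:comp_and_eq} can be simulated directly. The Python snippet below (illustrative only) checks for small $n$ that a single linear threshold gate with weights $\pm 2^{i-1}$ computes COMP and a single exact threshold gate with the same weights computes EQ.
\begin{verbatim}
# COMP and EQ on n-bit integers via a single gate with weights +/- 2^(i-1),
# checked against ordinary integer comparison for small n.
from itertools import product

def comp_gate(x, y):   # linear threshold gate: 1{ sum 2^(i-1)(x_i - y_i) >= 0 }
    return int(sum(2**i * (xi - yi) for i, (xi, yi) in enumerate(zip(x, y))) >= 0)

def eq_gate(x, y):     # exact threshold gate:  1{ sum 2^(i-1)(x_i - y_i) == 0 }
    return int(sum(2**i * (xi - yi) for i, (xi, yi) in enumerate(zip(x, y))) == 0)

def to_int(bits):      # bits are (x_1, ..., x_n), x_i being the 2^(i-1) digit
    return sum(b << i for i, b in enumerate(bits))

n = 4
for x in product((0, 1), repeat=n):
    for y in product((0, 1), repeat=n):
        assert comp_gate(x, y) == int(to_int(x) >= to_int(y))
        assert eq_gate(x, y) == int(to_int(x) == to_int(y))
print("COMP and EQ gates verified for n =", n)
\end{verbatim}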
\begin{figure} \centering \begin{tikzpicture} \tikzstyle{exa} = [gate=white,label=center:$\mathcal{E}$] \node[exa,draw=red] (i-1) at (0,1) {}; \node[exa,draw=blue] (i-2) at (0,0) {}; \node[exa,draw=violet] (i-3) at (0,-1) {}; \node[exa] (o-1) at (2, 0) {}; \draw[draw=red] (i-1) -- node[above,color=red] {$5$} (o-1); \draw[draw=blue] (i-2) -- node[above,color=blue] {$6$}(o-1); \draw[draw=violet] (i-3) -- node[above,color=violet] {$7$}(o-1); \pgfmathsetmacro{\offset}{4} \node[exa,draw=red] (i-11) at (\offset,3) {}; \node[exa,draw=red] (i-12) at (\offset,2) {}; \node[exa,draw=blue] (i-21) at (\offset,1) {}; \node[exa,draw=blue] (i-22) at (\offset,0) {}; \node[exa,draw=violet] (i-31) at (\offset,-1) {}; \node[exa,draw=violet] (i-32) at (\offset,-2) {}; \node[exa,draw=violet] (i-33) at (\offset,-3) {}; \node[exa] (o-11) at (\offset + 2, 0) {}; \draw[draw=red] (i-11) -- node[above,color=red] {$2$} (o-11); \draw[draw=red] (i-12) -- node[above,color=red] {$3$} (o-11); \draw[draw=blue] (i-21) -- node[above,color=blue] {$3$}(o-11); \draw[draw=blue] (i-22) -- node[above,color=blue] {$3$}(o-11); \draw[draw=violet] (i-31) -- node[above,color=violet] {$3$}(o-11); \draw[draw=violet] (i-32) -- node[above,color=violet] {$3$}(o-11); \draw[draw=violet] (i-33) -- node[above,color=violet] {$1$}(o-11); \end{tikzpicture} \caption{An example of a weight transformation for a single gate (in black) to construct constant weight circuits. Different gates are colored in red, blue, and violet and depending on the weight size, each gate is replicated a number of times. In this example, each weight in this construction is at most 3.} \label{fig:replicate} \end{figure} For neural networks and learning tasks, the weights are typically finite precision real numbers. To make the computations faster and more power efficient, the weights can be quantized to small integers with a loss in the accuracy of the network output \cite{jacob2018quantization, hubara2016binarized}. In practice, given that the neural circuit is large in size and deep in depth, this loss in the accuracy might be tolerated. We are interested in the amount of trade-off in the increase of size and depth while using as small weights as possible. More specifically, our goal is to provide insight to the computation of single threshold functions with large weights using threshold circuits with small weights by relating theoretical upper and lower bounds to the best known constructions. In this manner, for the ternary weight case ($\{-1,0,1\}$ as in \cite{li2016ternary}), we give an explicit and optimal size circuit construction for the EQ function using depth-2 circuits. This optimality is guaranteed by achieving the theoretical lower bounds asymptotically up to vanishing terms \cite{kilic2021neural,roychowdhury1994lower}. We also prove an existential result on the COMP constructions to reduce the weight size on the best known constructions. It is not known if constant weight constructions exist without a polynomial blow-up in the circuit size and an increase in the depth for arbitrary threshold functions. \subsection{Bijective Mappings from Finite Fields to Integers} It seems that choosing powers of two as the weight set is important. In fact, we can expand any arbitrary weight in binary and use powers of two as a fixed weight set for any threshold function \cite{kilic2021neural}. 
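This binary-expansion argument is easy to make concrete: each input $x_i$ is fanned out to the powers of two appearing in the binary expansion of $w_i$, so that the weighted sum is reproduced over the fixed weight set $\{2^j\}$. The snippet below is a minimal illustration (restricted to non-negative weights for simplicity):
\begin{verbatim}
# Rewrite sum_i w_i x_i with arbitrary non-negative integer weights as a sum
# over the fixed weight set {2^j}: each input is fanned out to the powers of
# two appearing in the binary expansion of its weight.
import random

def expand_to_powers_of_two(weights):
    """Return a list of (power_of_two, input_index) pairs."""
    wires = []
    for i, w in enumerate(weights):
        j = 0
        while w:
            if w & 1:
                wires.append((2**j, i))
            w >>= 1
            j += 1
    return wires

random.seed(0)
w = [random.randrange(0, 100) for _ in range(8)]
wires = expand_to_powers_of_two(w)
for _ in range(100):
    x = [random.randrange(2) for _ in range(8)]
    assert sum(wi * xi for wi, xi in zip(w, x)) == sum(c * x[i] for c, i in wires)
print("fixed weight set {2^j} reproduces the original weighted sum")
\end{verbatim}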
This weight set is a choice of convenience and a simple solution for the following question: How small could the elements of $\mathcal{W} = \{w_1, w_2, \cdots, w_n\}$ be if $\mathcal{W}$ has all distinct subset sums (DSS)? If the weights satisfy this property, called the \textbf{DSS property}, they can define a bijection between $\{0,1\}^n$ and integers. Erd\H{o}s conjectured that the largest weight $w \in \mathcal{W}$ is upper bounded by $c_0 2^n$ for some $c_0 > 0$ and therefore, choosing powers of two as the weight set is asymptotically optimal. The best known result for such weight sets yields $0.22002 \cdot 2^{n}$ and currently, the best lower bound is $\Omega(2^n/\sqrt{n})$ \cite{bohman1998construction,dubroff2021note,guy2004unsolved}. Now, let us consider the following linear equation where the weights are fixed to the ascending powers of two but $x_i$s are not necessarily binary. We denote the powers of two by the vector $w_b$. \begin{equation} \label{eq:w_b} w_b^T x = \sum_{i=1}^n 2^{i-1} x_i = 0 \end{equation} As the weights of $w_b$ define a bijection between $n$-bit binary vectors and integers, $w_b^T x = 0$ does not admit a non-trivial solution for the alphabet $\{-1,0,1\}^n$. This is a necessary and sufficient condition to compute the EQ function given in \eqref{eq:equality}. We extend this property to $m$ many rows to define \textit{\textup{EQ} matrices} which give a bijection between $\{0,1\}^n$ and $\mathbb{Z}^m$. Thus, an EQ matrix can be used to compute the EQ function in \eqref{eq:equality} and our goal is to use smaller weight sizes in the matrix. \begin{definition} A matrix $A \in \mathbb{Z}^{m\times n}$ is an $\textup{EQ matrix}$ if the homogeneous system $Ax = 0$ has no non-trivial solutions in $\{-1,0,1\}^n$. \end{definition} Let $A \in \mathbb{Z}^{m\times n}$ be an EQ matrix with the weight constraint $W \in \mathbb{Z}$ such that $|a_{ij}| \leq W$ for all $i,j$ and let $R$ denote the \textit{rate} of the matrix $A$, which is $n/m$. It is clear that any full-rank square matrix is an EQ matrix with $R = 1$. Given any $W$, how large can this $R$ be? For the maximal rate, a necessary condition can be proven by Siegel's Lemma \cite{siegel2014einige}. \begin{lemma}[Siegel's Lemma (modified)] Consider any integer matrix $A \in \mathbb{Z}^{m \times n}$ with $m < n$ and $|a_{ij}| \leq W$ for all $i,j$ and some integer $W$. Then, $Ax = 0$ has a non-trivial solution for an integer vector $x \in \mathbb{Z}^n$ such that $||x||_\infty \leq (\sqrt{n}W)^\frac{m}{n-m}$. \end{lemma} It is shown that if $m = o(n/\log{nW})$, then any $A \in \mathbb{Z}^{m \times n}$ with weight constraint $W$ admits a non-trivial solution in $\{-1,0,1\}^n$ and cannot be an EQ matrix, i.e., $R = O(\log{nW})$ is tight \cite{kilic2021neural}. A similar result can be obtained by the matrix generalizations of Erd\H{o}s' Distinct Subset Sum problem \cite{costa2021variations}. When $m = O(n/\log{nW})$, the story is different. If $m = O(n/\log{n})$, there exists a matrix $A \in \{-1,1\}^{m \times n}$ such that every non-trivial solution of $Ax = 0$ satisfies $\max_{j} |x_j| \geq c_0 \sqrt{n}^\frac{m}{n-m}$ for a positive constant $c_0$. This is given by Beck's Converse Theorem on Siegel's Lemma \cite{beck2017siegel}. For an explicit construction, it is possible to achieve the optimal rate $R = O(\log{nW})$ if we allow $W = poly(n)$. This can be done by the Chinese Remainder Theorem (CRT) and the Prime Number Theorem (PNT) \cite{amano2005complexity,hofmeister1996note,kilic2021neural}. 
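Before turning to the CRT construction, we note that the defining property of an EQ matrix can be tested exhaustively for small $n$. The following Python sketch is illustrative only (the helper \texttt{is\_eq\_matrix} is ours); it confirms, for instance, that the single row $w_b = (2^0,\dots,2^{n-1})$ is an EQ matrix, i.e., that the powers of two have distinct subset sums, while the all-ones row is not.
\begin{verbatim}
# Brute-force test of the EQ property: A x = 0 has no non-trivial solution
# with x in {-1,0,1}^n. Feasible only for small n (3^n candidate vectors).
from itertools import product
import numpy as np

def is_eq_matrix(A):
    A = np.asarray(A)
    n = A.shape[1]
    for x in product((-1, 0, 1), repeat=n):
        if any(x) and not np.any(A @ np.array(x)):
            return False
    return True

n = 8
w_b = [[2**i for i in range(n)]]      # the row (2^0, ..., 2^(n-1))
print(is_eq_matrix(w_b))              # True: the powers of two have the DSS property
print(is_eq_matrix([[1]*n]))          # False: e.g. (1,-1,0,...,0) is a solution
\end{verbatim}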
It is known that CRT can be used to define \textit{residue codes} \cite{mandelbaum1972error}. For an integer $x$ and modulo base $p$, we denote the modulo operation by $[x]_p$, which maps the integer to a value in $\{0,...,p-1\}$. Suppose $0 \leq Z < 2^n$ for an integer $Z$. One can encode this integer by the $m$-tuple $(d_1,d_2,\cdots,d_m)$ where $[Z]_{p_i} = d_i$ for a prime number $p_i$. Since we can also encode $Z$ by its binary expansion, the CRT gives a bijection between $\mathbb{Z}^m$ and $\{0,1\}^n$ as long as $p_1 \cdots p_m > 2^n$. By taking modulo $p_i$ of Equation \eqref{eq:w_b}, we can obtain the following matrix, defined as a \textit{CRT matrix}: \begin{align} \label{eq:ex_crt} &\begin{bmatrix*}[l] [2^0]_{3} & [2^1]_{3} & [2^2]_{3} & [2^4]_{3} & [2^5]_{3} & [2^6]_3 & [2^7]_3 \\ [2^0]_{5} & [2^1]_{5} & [2^2]_{5} & [2^4]_{5} & [2^5]_{5} & [2^6]_5 & [2^7]_5 \\ [2^0]_{7} & [2^1]_{7} & [2^2]_{7} & [2^4]_{7} & [2^5]_{7} & [2^6]_7 & [2^7]_7 \\ [2^0]_{11} & [2^1]_{11} & [2^2]_{11} & [2^4]_{11} & [2^5]_{11} & [2^6]_{11} & [2^7]_{11} \end{bmatrix*} \\ &\hphantom{aaaaaaaaaa}= \begin{bmatrix*}[r] 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 \\ 1 & 2 & 4 & 3 & 1 & 2 & 4 & 3 \\ 1 & 2 & 4 & 1 & 2 & 4 & 1 & 2 \\ 1 & 2 & 4 & 8 & 5 & 10 & 9 & 7 \end{bmatrix*}_{4 \times 8} \end{align} We have $Z < 256$ since $n=8$ and $3\cdot 5\cdot 7\cdot 11 = 1155$. Therefore, this CRT matrix is an EQ matrix. In general, by the PNT, one needs $O(n/\log{n})$ many rows to ensure that $p_1 \cdots p_m > 2^n$. Moreover, $W$ is bounded by the maximum prime size $p_m$, which is $O(n)$ again by the PNT. However, it is known that constant weight EQ matrices with asymptotically optimal rate exist by Beck's Converse Theorem on Siegel's Lemma \cite{beck2017siegel,kilic2021neural}. In this paper, we give an explicit construction where $W = 1$ and asymptotic efficiency in rate is achieved up to vanishing terms. It is in fact an extension of Sylvester-type Hadamard matrices. \begin{equation} \label{eq:eq_4x8} \begin{bmatrix*}[r] 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -1 & 0 & 0 & 1 \\ 1 & 1 & 1 & -1 & -1 & -1 & 0 & 0 \\ 1 & -1 & 0 & -1 & 1 & 0 & 0 & 0 \end{bmatrix*}_{4\times 8} \end{equation} One can verify that the matrix in \eqref{eq:eq_4x8} is an EQ matrix and its rate is twice the trivial rate being the same in \eqref{eq:ex_crt}. Therefore, due to the optimality in the weight size, this construction can replace the CRT matrices in the constructions of EQ matrices. We can also focus on $q$-ary representation of integers in a similar fashion by extending the definition of EQ matrices to this setting. In our analysis, we always treat $q$ as a constant value. \begin{definition} A matrix $A \in \mathbb{Z}^{m\times n}$ is an $\textup{EQ}_q\textup{ matrix}$ if the homogeneous system $Ax = 0$ has no non-trivial solutions in $\{-q+1,\dots,q-1\}^n$. \end{definition} If $q = 2$, then we drop $q$ from the notation and say that the matrix is an EQ matrix. For the $\text{EQ}_q$ matrices, the optimal rate given by Siegel's Lemma is still $R = O(\log{nW})$ and constant weight constructions exist. We give an extension of our construction to $\text{EQ}_q$ matrices where $W = 1$ and asymptotic efficiency in rate is achieved up to constant terms. 
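Both matrices above can be checked with the same brute-force test. The Python sketch below (illustrative only) builds the CRT matrix of \eqref{eq:ex_crt} from the primes $3,5,7,11$ with columns $[2^{i-1}]_p$, $i=1,\dots,8$, builds the matrix of \eqref{eq:eq_4x8} by applying two steps of the recursion of Theorem \ref{th:constr} to $A_0 = [1]$, and verifies that both are EQ matrices.
\begin{verbatim}
# Build the 4 x 8 CRT matrix (primes 3,5,7,11, entries [2^(i-1)]_p) and the
# 4 x 8 {-1,0,1} matrix obtained from A_0 = [1] by two steps of the recursion,
# then verify by brute force that both are EQ matrices.
from itertools import product
import numpy as np

def is_eq_matrix(A):
    n = A.shape[1]
    return not any(any(x) and not np.any(A @ np.array(x))
                   for x in product((-1, 0, 1), repeat=n))

A_crt = np.array([[pow(2, i, p) for i in range(8)] for p in (3, 5, 7, 11)])

def extend(A):                        # one step of the Sylvester-type recursion
    m = len(A)
    return np.vstack([np.hstack([A,  A, np.eye(m, dtype=int)]),
                      np.hstack([A, -A, np.zeros((m, m), dtype=int)])])

A_sylv = extend(extend(np.array([[1]])))   # reproduces the matrix displayed above
print(is_eq_matrix(A_crt), is_eq_matrix(A_sylv))   # expect: True True
\end{verbatim}
The same test applied to the construction with larger $k$ quickly becomes infeasible, which is why the general statement is proved by induction rather than by enumeration.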
\subsection{Maximum Distance Separable Extensions of EQ Matrices} Residue codes are treated as Maximum Distance Separable (MDS) codes because one can extend the CRT matrix by adding more prime numbers to the matrix (without increasing $n$) so that the resulting integer code achieves the Singleton bound in Hamming distance \cite{tay2015non}. However, we do not say a CRT matrix is an \textit{MDS matrix} as this refers to another concept. \begin{definition} An integer matrix $A \in \mathbb{Z}^{m \times n}$ ($m \leq n$) is \textup{MDS} if and only if no $m \times m$ submatrix $B$ is singular. \end{definition} \begin{definition} \label{def:rmds} An integer matrix $A \in \mathbb{Z}^{rm \times n}$ is \textup{MDS for $q$-ary bijections with MDS rate $r$ and EQ rate $R = n/m$} if and only if every $m \times n$ submatrix $B$ is an $\textup{EQ}_q$ matrix. \end{definition} Because Definition \ref{def:rmds} considers solutions over a \textbf{restricted} alphabet, we denote such matrices as $\textit{RMDS}_q$. Remarkably, as $q \rightarrow \infty$, both MDS definitions become the same. Similar to the $\text{EQ}_q$ definition, we drop the $q$ from the notation when $q = 2$. A CRT matrix is not MDS, however, it can be $\text{RMDS}_q$. We can demonstrate the difference of both MDS definitions by the following matrix. This matrix is an RMDS matrix with EQ rate $2$ and MDS rate $5/4$ because any $4 \times 8$ submatrix is an EQ matrix. This is in fact the same matrix in \eqref{eq:ex_crt} with an additional row with entries $[2^{i-1}]_{13}$ for $i \in \{1,...,8\}$. \begin{align} \begin{bmatrix*}[r] 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 \\ 1 & 2 & 4 & 3 & 1 & 2 & 4 & 3 \\ 1 & 2 & 4 & 1 & 2 & 4 & 1 & 2 \\ 1 & 2 & 4 & 8 & 5 & 10 & 9 & 7 \\ 1 & 2 & 4 & 8 & 3 & 6 & 12 & 11 \end{bmatrix*}_{5 \times 8} \end{align} Here, the determinant of the $5\times5$ submatrix given by the first five columns is 0. Thus, this matrix is not an MDS matrix. In this work, we are interested in $\text{RMDS}_q$ matrices. We show that for a constant weight $\text{RMDS}_q$ matrix, the MDS rate bound $r = O(1)$ is tight. We also provide an existence result for such $\text{RMDS}_q$ matrices where the weight size is bounded by $O(r)$ given that the EQ rate is $O(\log{n})$. The following is a summary of our contributions in this paper. \begin{itemize} \item In Section \ref{sec:constr}, we explicitly give a rate-efficient $\text{EQ}$ matrix construction with constant entries $\{-1,0,1\}$ where the optimality is guaranteed up to vanishing terms. This solves an open problem in \cite{kilic2021neural} and \cite{roychowdhury1994lower}. \item In Section \ref{sec:rmds}, we prove that the MDS rate $r$ of an $\text{RMDS}_q$ matrix with entries from an alphabet $\mathcal{Q}$ with cardinality $k$ should satisfy $r \leq k^{k+1}$. Therefore, constant weight $\text{RMDS}_q$ matrices can at most achieve $r = O(1)$. \item In Section \ref{sec:rmds}, provide an existence result for $\text{RMDS}_q$ matrices given that $W = O(r)$ and the optimal EQ rate $O(\log{n})$. In contrast, the best known results give $W = O(rn)$ with the optimal EQ rate $O(\log{n})$. \item In Section \ref{sec:neural}, we apply our results to Circuit Complexity Theory to obtain better weight sizes with asymptotically \textbf{no trade-off} in the circuit size for the depth-2 EQ and COMP constructions, as shown in the Table \ref{tab:cct}. \end{itemize} \begin{table}[h!] 
\caption{Results for the Depth-2 Circuit Constructions} \label{tab:cct} \centering \begin{tabular}{|c||c|c||c|c|} \hline \multirow{2}{*}{\textbf{Function}} & \multicolumn{2}{|c||}{\textbf{This Work}} & \multicolumn{2}{|c|}{\textbf{Previous Works}} \\ \cline{2-3}\cline{4-5} & Weight Size & Constructive & Weight Size & Constructive \\ \hline\hline \multirow{2}{*}{EQ} & \multirow{2}{*}{$O(1)$} & \multirow{2}{*}{Yes} & $O(1)$\cite{kilic2021neural} & No \\ \cline{4-5} & & & $O(n)$\cite{roychowdhury1994lower} & Yes \\ \hline COMP & $O(n)$ & No & $O(n^2)$\cite{amano2005complexity} & Yes \\ \hline \end{tabular} \end{table} \section{Rate-efficient Constructions with Constant Alphabet Size} \label{sec:constr} The $m\times n$ EQ matrix construction we give here is based on Sylvester's construction of Hadamard matrices. It is an extension of it to achieve higher rates with the trade-off that the matrix is no longer full-rank. \begin{theorem} \label{th:constr} Suppose we are given an EQ matrix $A_0 \in \{-1,0,1\}^{m_0\times n_0}$. At iteration $k$, we construct the following matrix $A_k$: \begin{equation} A_k = \begin{bmatrix*}[c] A_{k-1} & A_{k-1} & I_{m_{k-1}} \\ A_{k-1} & -A_{k-1} & 0 \end{bmatrix*} \end{equation} $A_k$ is an EQ matrix with $m_k = 2^k m_0$, $n_k = 2^k n_0 (\frac{k}{2}\frac{m_0}{n_0} + 1)$ for any integer $k \geq 0$. \end{theorem} \begin{proof} We will apply induction. The case $k = 0$ is trivial by assumption. For the system $A_kx = z$, let us partition the vector $x$ and $z$ in the following way: \begin{equation} \label{eq:constr_partition} \begin{bmatrix*}[c] A_{k-1} & A_{k-1} & I_{m_{k-1}} \\ A_{k-1} & -A_{k-1} & 0 \end{bmatrix*} \begin{bmatrix*} x^{(1)} \\ x^{(2)} \\ x^{(3)} \end{bmatrix*} = \begin{bmatrix*} z^{(1)} \\ z^{(2)} \end{bmatrix*} \end{equation} Then, setting $z = 0$, we have $A_{k-1}x^{(1)} = A_{k-1}x^{(2)}$ by the second row block. Hence, the first row block of the construction implies $2A_{k-1}x^{(1)} + x^{(3)} = 0$. Each entry of $x^{(3)}$ is either $0$ or a multiple of $2$. Since $x_i \in \{-1,0,1\}$, we see that $x^{(3)} = 0$. Then, we obtain $A_{k-1}x^{(1)} = A_{k-1}x^{(2)} = 0$. Applying the induction hypothesis on $A_{k-1}x^{(1)}$ and $A_{k-1}x^{(2)}$, we see that $A_kx = 0$ admits a unique trivial solution in $\{-1,0,1\}^{n_k}$. To see that $m_k = 2^k m_0$ is not difficult. For $n_k$, we can apply induction. \end{proof} To construct the matrix given in \eqref{eq:eq_4x8}, we can use $A_0 = I_1 = \begin{bmatrix}1\end{bmatrix}$, which is a trivial rate EQ matrix, and take $k = 2$. By rearrangement of the columns, we also see that there is a $4\times 4$ Sylvester-type Hadamard matrix in this matrix. For the sake of completeness, we will revisit Siegel's Lemma to prove the best possible rate one can obtain for an EQ matrix with weights $\{-1,0,1\}$ as similarly done in \cite{kilic2021neural}. We note that the following lemma gives the best possible rate upper bound to the best of our knowledge and our construction asymptotically shows the sharpness of Siegel's Lemma similar to Beck's result. \begin{lemma} \label{lem:upper} For any $\{-1,0,1\}^{m\times n}$ EQ matrix, $R = \frac{n}{m} \leq \frac{1}{2}\log{n} + 1$. \end{lemma} \begin{proof} By Siegel's Lemma, we know that for a matrix $A \in \mathbb{Z}^{m\times n}$, the homogeneous system $Ax = 0$ attains a non-trivial solution in $\{-1,0,1\}^n$ when \begin{equation} ||x||_\infty \leq (\sqrt{n}W)^{\frac{m}{n-m}} \leq 2^{1-\epsilon} \end{equation} for some $\epsilon > 0$. 
Then, since $W = 1$, we deduce that $ R = \frac{n}{m} \geq \frac{1}{2(1-\epsilon)}\log{n} + 1 $. Obviously, an EQ matrix cannot obtain such a rate. Taking $\epsilon \rightarrow 0$, we conclude the best upper bound is $ R \leq \frac{1}{2}\log{n} + 1 $. \end{proof} Using Lemma \ref{lem:upper} for our construction with $A_0 = \begin{bmatrix}1\end{bmatrix}$, we compute the upper bound on the rate and the real rate \begin{align} R_{upper} &= \frac{k + 1 + \log{(k+2)}}{2},\hphantom{a} R_{constr} = \frac{k}{2}+1 \\ \frac{R_{upper}}{R_{constr}} &= 1 + \frac{\log{(k+2)}-1}{k+2} \implies \frac{R_{upper}}{R_{constr}} \sim 1 \end{align} which concludes that the construction is \textbf{optimal} in rate up to vanishing terms. By deleting columns, we can generalize the result to any $n \in \mathbb{Z}$ to achieve optimality up to a constant factor of $2$. We conjecture that one can extend the columns of any Hadamard matrix to obtain an EQ matrix with entries $\{-1,0,1\}$ achieving rate optimality up to vanishing terms. In this case, the Hadamard conjecture implies a rich set of EQ matrix constructions. Given $Ax = z \in \mathbb{Z}^m$, we note that there is a linear time decoding algorithm to find $x \in \{0,1\}^n$ uniquely, given in the appendix. This construction can also be generalized to $\text{EQ}_q$ matrices with the same proof idea. In this case, the rate is optimal up to a factor of $q$. \begin{theorem} \label{th:constr_q} Suppose we are given an $\text{EQ}_q$ matrix $A_0 \in \{-1,0,1\}^{m_0\times n_0}$. At iteration $k$, we construct the following matrix $A_k$: \begin{equation} \begin{bmatrix*}[c] A_{k-1} & A_{k-1} & A_{k-1} & \cdots & A_{k-1} & I_{m_{k-1}} \\ A_{k-1} & -A_{k-1} & 0 &\cdots & 0 & 0 \\ 0 & A_{k-1} & -A_{k-1} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -A_{k-1} & 0 \end{bmatrix*} \end{equation} $A_k$ is an $\text{EQ}_q$ matrix with $m_k = q^k m_0$, $n_k = q^k n_0 (\frac{k}{q}\frac{m_0}{n_0} + 1)$ for any integer $k \geq 0$. \end{theorem} \section{Bounds on the Rate and the Weight Size of $\text{RMDS}_q$ Matrices} \label{sec:rmds} Similar to a CRT-based RMDS matrix, is there a way to extend the EQ matrix given in Section \ref{sec:constr} without a large trade-off in the weight size? We give an upper bound on the MDS rate $r$ based on the alphabet size of an $\text{RMDS}_q$ matrix. \begin{theorem} \label{th:mds_lower_bound} An $\text{RMDS}_q$ matrix $A \in \mathbb{Z}^{rm \times n}$ with entries from $\mathcal{Q} = \{q_1,...,q_k \}$ satisfies $r \leq k^{k+1}$ given that $n > k$. \end{theorem} \begin{proof} The proof is based on a simple counting argument. We rearrange the rows of a matrix in blocks $B_i$ for $i \in \{1,\cdots,k^{k+1}\}$ assuming $n > k$. Here, each block contains rows starting with a prefix of length $k+1$ in the lexicographical order of the indices, i.e., \begin{equation} A = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_{k^{k+1}} \end{bmatrix} = \begin{bmatrix} q_1 & q_1 & \cdots & q_{1} & \cdots \\ q_1 & q_1 & \cdots & q_{2} & \cdots \\ \vdots & \vdots & \ddots & \vdots & \ddots \\ q_{k} & q_{k} & \cdots & q_{k} & \cdots \end{bmatrix} \end{equation} For instance, the block $B_1$ contains the rows starting with the prefix $\begin{bmatrix} q_1 & q_1 & \cdots & q_{1} \end{bmatrix}_{1 \times k+1}$. 
It is easy to see that there is a vector $x \in \{-1,0,1\}^n$ that will make at least one of the $B_i x = 0$ because it is guaranteed that the $(k + 1)$-th column will be equal to the one of the preceding elements by Pigeonhole Principle. Since the matrix is MDS, any $m$ row selections should be an EQ matrix. Therefore, any $B_i$ should not contain more than $m$ rows. Again, by Pigeonhole Principle, $rm \leq k^{k+1}m$ and consequently, $r \leq k^{k+1}$. \end{proof} The condition that $n > k$ is trivial in the sense that if the weights are constant, then $k$ is a constant. Therefore, the MDS rate can only be a constant due to Theorem \ref{th:mds_lower_bound}. \begin{corollary} An $\text{RMDS}_q$ matrix $A \in \mathbb{Z}^{rm \times n}$ weight size $W = O(1)$ can at most achieve the MDS rate $r = O(1)$. \end{corollary} In Section \ref{sec:constr}, we saw that CRT-based EQ matrix constructions are not optimal in terms of the weight size. For $\text{RMDS}_q$ matrices, we now show that the weight size can be reduced by a factor of $n$. The idea is to use the probabilistic method on the existence of $\text{RMDS}_q$ matrices. \begin{lemma} \label{lem:prob} Suppose that $a \in \{-W,\dots,W\}^n$ is uniformly distributed and $x_i \in \{-q+1,\dots,q-1\} \setminus 0$ for $i \in \{1,\dots,n\}$ are fixed where $q$ is a constant. Then, for some constant $C$, \begin{equation} Pr(a^T x = 0) \leq \frac{C}{\sqrt{n}W} \end{equation} \end{lemma} \begin{proof} We start with a statement of the Berry-Esseen Theorem. \begin{lemma}[Berry-Esseen Theorem] Let $X_1,\dots,X_n$ be independent centered random variables with finite third moments $\mathbb{E}[|X_i^3|] = \rho_i$ and let $\sigma^2 = \sum_{i=1}^n \mathbb{E}[X_i^2]$. Then, for any $t > 0$, \begin{equation} \label{eq:berry_esseen} \Big| Pr\Big(\sum_{i=1}^n X_i \leq t \Big) - \Phi(t) \Big| \leq C \sigma^{-3} \sum_{i=1}^n \rho_i \end{equation} where $C$ is an absolute constant and $\Phi(t)$ is the cumulative distribution function of $\mathcal{N}(0,\sigma^2)$. \end{lemma} Since the density of a normal random variable is uniformly bounded by $1/\sqrt{2\pi\sigma^2}$, we obtain the following. \begin{equation} Pr\Big(\Big|\sum_{i=1}^n X_i \Big| \leq t \Big) \leq \frac{2t}{\sqrt{2\pi \sigma^2}} + 2C \sigma^{-3} \sum_{i=1}^n \rho_i \end{equation} Let $a^{(n-1)}$ denote the $(a_1,\dots,a_{n-1})$ and similarly, let $x^{(n-1)}$ denote $(x_1,\dots,x_{n-1})$. By the Total Probability Theorem, we have \begin{align} Pr(&a^T x = 0) = Pr\Bigg(\frac{{a^{(n-1)}}^T x^{(n-1)}}{|x_n|} \in \{-W,\dots,W\}\Bigg) \nonumber \\ &Pr\Bigg(a^T x = 0 \Big| \frac{{a^{(n-1)}}^T x^{(n-1)}}{|x_n|} \in \{-W,\dots,W\}\Bigg) \nonumber \\ & = Pr\Bigg(\frac{{a^{(n-1)}}^T x^{(n-1)}}{|x_n|} \in \{-W,\dots,W\}\Bigg)\frac{1}{2W+1} \end{align} where the last line follows from the fact that $Pr(a_n = k)$ for some $k \in \{-W,\dots,W\}$ is $\frac{1}{2W+1}$. The other conditional probability term is $0$ because $Pr(a_n = k)$ for any $k \not\in \{-W,\dots,W\}$ is $0$. We will apply Berry-Esseen Theorem to find an upper bound on the first term. We note that $\mathbb{E}[a_i^2x_i^2] = x_i^2 \frac{(2W+1)^2-1}{12}$ and $\mathbb{E}[|a_i^3x_i^3|] = |x_i|^3 \frac{2}{2W+1}\Big(\frac{W(W+1)}{2}\Big)^2$. 
Then, \begin{align} Pr\Bigg(&\Bigg|\frac{{a^{(n-1)}}^T x^{(n-1)}}{|x_n|}\Bigg| \leq W \Bigg) \leq \frac{2W|x_n|}{\sqrt{2\pi}\sqrt{\frac{(2W+1)^2-1}{12}\sum_{i=1}^{n-1} x_i^2}} \nonumber \\ &\hphantom{0000000000} + C \frac{\frac{2}{2W+1}\Big(\frac{W(W+1)}{2}\Big)^2 \sum_{i=1}^{n-1} |x_i|^3}{\Big(\frac{(2W+1)^2-1}{12}\Big)^\frac{3}{2}\Big(\sum_{i=1}^{n-1} x_i^2\Big)^\frac{3}{2}} \end{align} One can check that the whole RHS is in the form of $\frac{C'}{\sqrt{n-1}}$ for a constant $C'$ as we assume that $q$ is a constant. We can bound $|x_i|$s in the numerator by $q-1$ and $|x_i|$s in the denominator by $1$. The order of $W$ terms are the same and we can use a limit argument to bound them as well. Therefore, for some $C'' > 0$, \begin{equation} Pr(a^T x = 0) \leq \frac{C'}{\sqrt{n-1}}\frac{1}{2W+1} \leq \frac{C''}{\sqrt{n}W} \end{equation} \end{proof} Lemma \ref{lem:prob} is related to the Littlewood-Offord problem and anti-concentration inequalities are typically used in this framework \cite{halasz1977estimates, rudelson2008littlewood}. We specifically use Berry-Esseen Theorem. Building on this lemma and the union bound, we have the following theorem. \begin{theorem} \label{th:rmds} An $\text{RMDS}_q$ matrix $M \in \mathbb{Z}^{rm \times n}$ with entries in $\{-W,\dots,W\}$ exists if $m = \Omega(n/\log{n})$ and $W = O(r)$. \end{theorem} \begin{proof} Let $A$ be any $m \times n$ submatrix of $M$. Then, by the union bound (as done in \cite{karingula2021singularity}), we have \begin{align} Pr(\text{M is not RMDS}_q) = P &\leq \binom{rm}{m} Pr(\text{A is not EQ}_q) \nonumber \\ & \leq r^m e^m Pr(\text{A is not EQ}_q) \end{align} We again use the union bound to sum over all $x \in \{-q+1,\dots,q-1\}^n \setminus 0$ the event that $Ax = 0$ by counting non-zero entries in $x$ by $k$. Also, notice the independence of the events that $a_i^T x = 0$ for $i \in \{1,\dots,m\}$. Let $x^{(k)}$ denote an arbitrary $x \in \{-q+1,\dots,q-1\}^n \setminus 0$ with $k$ non-zero entries. Then, \begin{align} Pr(\text{A is not EQ}_q) &\leq \sum_{x \in \{-q+1,\dots,q-1\}^n \setminus 0} \prod_{i=1}^m Pr(a_i^Tx = 0) \\ &\leq \sum_{k=1}^n \binom{n}{k} (2(q-1))^k Pr(a^Tx^{(k)} = 0)^m \end{align} We can use Lemma \ref{lem:prob} to bound the $Pr(a^Tx^{(k)} = 0)$ term. Therefore, \begin{align} P &\leq r^m e^m \sum_{k=1}^n \binom{n}{k} (2(q-1))^k \Big(\frac{C}{\sqrt{k}W}\Big)^m \\ \label{eq:binomial_terms} &\hphantom{000}=\sum_{k=1}^{T-1} \binom{n}{k} (2(q-1))^k \Big(\frac{rC_1}{\sqrt{k}W}\Big)^m \nonumber \\ &\hphantom{000000} + \sum_{k=T}^{n} \binom{n}{k} (2(q-1))^k \Big(\frac{rC_1}{\sqrt{k}W}\Big)^m \end{align} for some $C_1, T \in \mathbb{Z}$. We bound the first summation by using $\binom{n}{k}(2(q-1))^k \leq (n(q-1))^k$ and choosing $k = 1$ for the probability term. This gives a geometric sum from $k = 1$ to $k = T-1$. \begin{align} \sum_{k=1}^{T-1} (n(q-1))^k \Big(\frac{rC_1}{W}\Big)^m = \Big(\frac{rC_1}{W}\Big)^m \Big(\frac{(n(q-1))^T - 1}{n(q-1) - 1} \Big) \end{align} For the second term, we take the highest value term $\frac{rC_1}{\sqrt{T}W}$ and $\sum_{k=T}^n \binom{n}{k} 2(q-1)^k \leq \sum_{k=0}^n \binom{n}{k} 2(q-1)^k = (2(q-1) + 1)^n$. Hence, \begin{align} P & \leq \Big(\frac{rC_1}{W}\Big)^m C_2 (n(q-1))^T + (2(q-1)+1)^n \Big(\frac{rC_1}{\sqrt{T}W}\Big)^m \\ & \leq 2^{C_3 T\log{n(q-1)} - m\log{W/rC_1}} \nonumber \\ &\hphantom{00000}+ 2^{n\log{(2(q-1)+1)} - m \log{(\sqrt{T}W/rC_1)}} \end{align} We take $T = O(n^c)$ for some $0 < c < 1$. 
It is easy to see that if $m = \Omega(n/\log{n})$ and $W = O(r)$, both terms vanish as $n$ goes to the infinity. \end{proof} When $r = 1$, we prove an existential result on EQ matrices already known in \cite{kilic2021neural}. We remark that the proof technique for Theorem \ref{th:rmds} is not powerful enough to obtain non-trivial bounds on $W$ when $m = 1$ to attack Erd\H{o}s' conjecture on the DSS weight sets. The CRT gives an explicit way to construct $\text{RMDS}_q \in \mathbb{Z}^{rm\times n}$ matrices. We obtain $p_{rm} = O(rm\log{rm}) = O(rn)$ given that $r = O(n^c)$ for some $c > 0$ and $m = O(n/\log{n})$ by the PNT. Therefore, we have a factor of $O(n)$ weight size reduction in Theorem \ref{th:rmds}. However, modular arithmetical properties of the CRT do not reflect to $\text{RMDS}_q$ matrices. Therefore, an $\text{RMDS}_q$ matrix cannot replace a CRT matrix in general (see Appendix). \section{Applications of the Algebraic Results to Neural Circuits} \label{sec:neural} In this section, we will give EQ and COMP threshold circuit constructions. We note that in our analysis, the size of the bias term is ignored. One can construct a depth-$2$ exact threshold circuit with small weights to compute the EQ function \cite{kilic2021neural}. For the first layer, we select the weights for each exact threshold gate as the rows of the EQ matrix. Then, we connect the outputs of the first layer to the top gate, which just computes the $m$-input AND function (i.e. $\mathds{1}\{z_1 + ... + z_m = m\}$ for $z_i \in \{0,1\}$). In Figure \ref{fig:eq_constr}, we give an example of EQ constructions. \begin{figure}[h] \centering \begin{tikzpicture} \tikzstyle{sum} = [gate=white,label=center:+] \tikzstyle{exa} = [gate=white,label=center:$\mathcal{E}$] \node[exa,pin=right:EQ] (out) at (4,-2) {}; \tikzstyle{input} = [circle] \newcommand{\inputnum}{8} \newcommand{\domnum}{4} \newcommand{\eq}{=} \newcommand{\offset}{1} \foreach \x in {1,...,\inputnum} { \node[input,label=180:$x_\x'$] (i-\x) at (0,-\x*.5) {}; } \foreach \x in {1,...,\domnum} { \node[exa] (e-\x) at (2.5,{-\x*(\offset)+0.25}) {}; \draw (e-\x) -- (out); } \node[input] (bias) at (3.5,-1) {}; \draw (bias) -- node[right,pos=0.3] {$-4$} (out); \def\eqmatrixorig{{{1, 1, 1, 1,1, 1,1,0}, {1,-1, 1,-1,0, 0,0,1}, {1, 1,-1,-1,1,-1,0,0}, {1,-1,-1, 1,0, 0,0,0} }} \def\eqmatrix{{{1, 1, 1, 1,1, 1,0,1}, {1, 0, 1, 0,0,-1,1,-1}, {1, 0,-1,-1,1,-1,0,1}, {1, 0,-1, 0,0, 1,0,-1} }} \foreach \x in {1,...,\inputnum} { \foreach \y in {1,...,\domnum} { \pgfmathtruncatemacro{\val}{\eqmatrix[\y-1][\x-1]} \ifnum\val=0{} \else{ \ifnum\val=1{ \draw (i-\x) -- (e-\y); } \else{ \draw[color=red] (i-\x) -- (e-\y); } } } \end{tikzpicture} \caption{An example of $\text{EQ}(X,Y)$ constructions with 8 $x_i' = x_i - y_i$ inputs (or 16 if $x_i$s and $y_i$s are counted separately) and 5 exact threshold gates (including the top gate). The black(or red) edges correspond to the edges with weight 1 (or -1).} \label{fig:eq_constr} \end{figure} We roughly follow the previous works for the construction of the COMP \cite{amano2005complexity}. First, let us define $F^{(l)}(X,Y) = \sum_{i=l+1}^n 2^{i-l-1} (x_i-y_i)$ so that $\mathds{1}\{F^{(l)}(X,Y) \geq 0\}$ is a $(n-l)$-bit COMP function for $l < n$ and $F^{(0)}(X,Y) = F(X,Y)$. We say that $X \geq Y$ when $F(X,Y) \geq 0$ and vice versa. We also denote by $X^{(l)}$ the most significant $(n-l)$-tuple of an $n$-tuple vector $X$. We have the following observation. 
\begin{lemma} \label{lem:comp} Let $F^{(l)}(X,Y) = \sum_{i=l+1}^n 2^{i-l-1}(x_i - y_i)$ and $F^{(0)}(X,Y) = F(X,Y)$ for $X, Y \in \{0,1\}^n$. Then, \begin{align} F(X,Y) > 0 &\Leftrightarrow \exists l : F^{(l)}(X,Y) = \hphantom{-}1 \\ F(X,Y) < 0 &\Leftrightarrow \exists l : F^{(l)}(X,Y) = -1 \end{align} \end{lemma} \begin{proof} It is easy to see that if $X-Y = (0,0,\dots,0,1,\times,\dots,\times)$ where the vector has a number of leading 0s and $\times$s denote any of $\{-1,0,1\}$, we see that $F(X,Y) > 0$ (this is called the \textit{domination property}). Similarly, for $F(X,Y) < 0$, the vector $X-Y$ should have the form $(0,0,\dots,0,-1,\times,\dots,\times)$ and $F(X,Y) = 0$ if and only if $X-Y = (0,\dots,0)$. The converse holds similarly. \end{proof} By searching for the $(n-l)$-tuple vectors $(X-Y)^{(l)} = (0,...,0,-1)$ for all $l \in \{0,...,n-1\}$, we can compute the COMP. We claim that if we have an $\text{RMDS}_q$ matrix $A \in \mathbb{Z}^{rm \times n}$, we can detect such vectors by solving $A^{(l)} (X-Y)^{(l)} = -a_{n-l}$ where $A^{(l)}$ is a truncated version of $A$ with the first $n-l$ columns and $a_{n-l}$ is the $(n-l)$-th column. Specifically, we obtain the following: \begin{align} \label{eq:comp_forward} X < Y &\Rightarrow \sum_{l=0}^{n-1} \mathds{1}\{A^{(l)}(X-Y)^{(l)} = -a_{n-l}\} \geq rm \\ \label{eq:comp_backward} X \geq Y &\Rightarrow \sum_{l=0}^{n-1} \mathds{1}\{A^{(l)}(X-Y)^{(l)} = -a_{n-l}\} < n(m-1) \end{align} Here, the indicator function works row-wise, i.e., we have the output vector $z \in \{0,1\}^{rmn}$ such that $z_{rml + i} = \mathds{1}\{(A^{(l)}(X-Y)^{(l)})_i = -(a_{n-l})_i\}$ for $i \in \{1,\dots,rm\}$ and $l \in \{0,\dots,n-1\}$. We use an $\text{RMDS}_3$ matrix in the construction to map $\{-1,0,1\}^{n-l}$ vectors bijectively to integer vectors with large Hamming distance. Note that each exact threshold function can be replaced by two linear threshold functions and a summation layer (i.e. $\mathds{1}\{F(X) = 0\} = \mathds{1}\{F(X) \geq 0\} + \mathds{1}\{-F(X) \geq 0\} - 1)$ which can be absorbed to the top gate (see Figure \ref{fig:absorption}) . This increases the circuit size only by a factor of two. \begin{figure} \centering \begin{tikzpicture} \tikzstyle{input} = [circle] \tikzstyle{redg} = [gate=white,label=center:$\mathcal{L}\mathcal{T}$,draw=red] \tikzstyle{blueg} = [gate=white,label=center:$\mathcal{L}\mathcal{T}$,draw=blue] \tikzstyle{sum} = [gate=white,label=center:+] \pgfmathsetmacro{\nodenum}{3} \pgfmathsetmacro{\offset}{2.75} \newcommand{\eq}{=} \node[redg,label=45:\textcolor{red}{$\mathds{1}\{F(X) \geq 0\}$}] (ex) at (1.25,-3) {}; \node at (-.5,-4.9) {$\vdots$}; \foreach \x in {1,...,\nodenum} { \pgfmathsetmacro{\inputlabels}{\x<\nodenum ? int(\x-1) : "n"} \pgfmathsetmacro{\xp}{\x<\nodenum ? \x : "L"} \draw (xo-\x) -- node[above,pos=0.3] {} (ex); } \pgfmathsetmacro{\offset}{4} \node[blueg,label=315:\textcolor{blue}{$\mathds{1}\{F(X) \leq 0\}$}] (ex_neg) at (1.25,-2-\offset) {}; \foreach \x in {1,...,\nodenum} { \pgfmathsetmacro{\inputlabels}{\x<\nodenum ? int(\x-1) : "n"} \pgfmathsetmacro{\xp}{\x<\nodenum ? 
\x : "L"} \draw (xo-\x) -- node[above,pos=0.3] {} (ex_neg); } \node[sum,pin=0:$\mathds{1}\{F(X) \eq 0\}$] (out) at (3,-4.5) {}; \node[input] (bias) at (2.75, -3.5) {}; \draw (ex) -- node[above,pos=0.6] {$1$} (out); \draw (ex_neg) -- node[above,pos=0.6] {$1$} (out); \draw (bias) -- node[above,pos=0.3] {$-1$} (out); \end{tikzpicture} \caption{A construction of an arbitrary exact threshold function ($\mathds{1}\{F(X) = 0\}$) using two linear threshold functions ($\mathds{1}\{F(X) \geq 0\}$ and $\mathds{1}\{F(X) \leq 0\}$) and a summation node. This summation node can be removed if its output is connected to another gate due to linearity.} \label{fig:absorption} \end{figure} If $F(X,Y) < 0$, $z_{rml +i}$ should be 1 for all $i \in \{1,\dots,rm\}$ for some $l$ by Lemma \ref{lem:comp}. For $F(X,Y) \geq 0$ and any $l$, the maximum number of 1s that can appear in $z_{rml + i}$ is upper bounded by $m-1$ because A is $\text{RMDS}_3$. Therefore, the maximum number of 1s that can appear in $z$ is upper bounded by $n(m-1)$. A sketch of the construction is given in Figure \ref{fig:comp_constr}. \begin{figure}[h] \centering \begin{tikzpicture} \tikzstyle{sum} = [gate=white,label=center:+] \tikzstyle{lt} = [gate=white,label=center:$\mathcal{L}\mathcal{T}$] \node[lt,pin=right:COMP] (out) at (4,-2) {}; \tikzstyle{input} = [circle] \newcommand{\inputnum}{5} \newcommand{\Lnum}{3} \newcommand{\domnum}{2} \newcommand{\eq}{=} \newcommand{\offset}{1} \def\colors{red,violet,blue} \node[input,label=180:$1$] (i-5) at (0,-3.5) {}; \node[input,label=180:$x_1'$] (i-1) at (0,0) {}; \node[input,label=180:$x_2'$] (i-2) at (0,-0.75) {}; \node at (-0.45,-1.25) {$\vdots$}; \node[input,label=180:$x_{n-1}'$] (i-3) at (0,-2) {}; \node[input,label=180:$x_n'$] (i-4) at (0,-2.75) {}; \node[input,label=0:\textcolor{red}{$\hphantom{a}(l\eq 0)$}] (l-0) at (2,1.75) {}; \node[lt,draw=red] (lt-1-1) at (2,1.75) {}; \node[color=red,label=right:\textcolor{red}{$\hphantom{a}\mathds{1}\{A^{(0)}(X-Y)^{(0)} = -a_n\}$}] at (2,1.25) {$\vdots$}; \node[lt,draw=red] (lt-1-2) at (2,0.5) {}; \node[input,label=\textcolor{violet}{$\hphantom{a}(l\eq 1)$}] (l-0) at (2,-0.75) {}; \node[lt,draw=violet] (lt-2-1) at (2,-0.75) {}; \node[color=violet] at (2,-1.25) {$\vdots$}; \node[lt,draw=violet] (lt-2-2) at (2,-2) {}; \node at (2, -3) {$\vdots$}; \node[input,label=0:\textcolor{blue}{$\hphantom{a}(l\eq n-1)$}] (l-0) at (2,-5.25) {}; \node[lt,draw=blue] (lt-3-1) at (2,-4) {}; \node[color=blue,label=right:\textcolor{blue}{$\hphantom{aaa}\mathds{1}\{A^{(n-1)}(X-Y)^{(n-1)} = -a_1\}$}] at (2,-4.5) {$\vdots$}; \node[lt,draw=blue] (lt-3-2) at (2,-5.25) {}; \foreach[count=\l] \c in \colors { \foreach \y in {1,...,\domnum} { \foreach \x in {1,...,\inputnum} { \pgfmathsetmacro{\i}{int(\x + \l - 1 + div(\l,\Lnum)} \ifnum\i<6 { \draw[color=\c] (i-\i) -- (lt-\l-\y) {}; } \else{} } \draw (lt-\l-\y) -- node[midway] {} (out); } } \node[input] (bias) at (3.66,-1) {}; \draw (bias) -- node[right,pos=0.3] {} (out); \end{tikzpicture} \caption{A sketch of the $\text{COMP}(X,Y)$ construction where $x_i' = x_i - y_i$ using linear threshold gates. Each color specifies an $l$ value in the construction. If $X < Y$, all the $rm$ many gates in at least one of the colors will give all $1$s at the output. Otherwise, all the $rm$ many gates in a color will give at most $(m-1)$ many $1$s at the output.} \label{fig:comp_constr} \end{figure} In order to make both cases separable, we choose $r = n$. 
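As a sanity check on the characterization that drives the search in \eqref{eq:comp_forward} and \eqref{eq:comp_backward}, the following minimal Python sketch (an illustration on our part, not a component of the construction) verifies Lemma \ref{lem:comp} by brute force for a small $n$:
\begin{verbatim}
# Brute-force check (illustrative only) of Lemma lem:comp for small n:
# F(X,Y) < 0 holds exactly when F^{(l)}(X,Y) = -1 for some l in {0,...,n-1}.
from itertools import product

def F(l, X, Y):
    # F^{(l)}(X,Y) = sum_{i=l+1}^{n} 2^{i-l-1} (x_i - y_i); index i = 1 is the LSB.
    n = len(X)
    return sum(2 ** (i - l - 1) * (X[i - 1] - Y[i - 1]) for i in range(l + 1, n + 1))

n = 6
for X, Y in product(product((0, 1), repeat=n), repeat=2):
    assert (F(0, X, Y) < 0) == any(F(l, X, Y) == -1 for l in range(n))
print("checked all", 4 ** n, "pairs (X, Y)")
\end{verbatim}
The analogous check with the value $1$ in place of $-1$ confirms the characterization of $F(X,Y) > 0$.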
At the second layer, the top gate is a MAJORITY gate ($\mathds{1}\{\sum_{i=1}^{rmn} z_i < n(m-1)\} = \mathds{1}\{\sum_{i=1}^{rmn} -z_i \geq -n(m-1)+1\}$). By Theorem \ref{th:rmds}, there exists an $\text{RMDS}_3$ matrix with $m = O(n/\log{n})$ and $W = O(n)$. Thus, there are $rmn + 1 = O(n^3/\log{n})$ many gates in the circuit, which is the best known result. The same size complexity can be achieved by a CRT-based approach with $W = O(n^2)$ \cite{amano2005complexity}. \section{Conclusion} We explicitly constructed a rate-efficient constant weight EQ matrix for a previously existential result. Namely, we proved that the CRT matrix is not optimal in weight size to compute the EQ function and obtained the optimal EQ function constructions. For the COMP function, the weight size complexity is improved by a linear factor, using $\text{RMDS}_q$ matrices and their existence. An open problem is whether similar algebraic constructions can be found for general threshold functions so that these ideas can be developed into a weight quantization technique for neural networks. \section*{Acknowledgement} This research was partially supported by The Carver Mead New Adventure Fund. \printbibliography \appendix \textbf{The Decoding Algorithm:} Suppose that $A_kx = z$ is given where $A_k$ is an $m\times n$ EQ matrix given in Theorem \ref{th:constr} starting with $A_0 = \begin{bmatrix} 1 \end{bmatrix}$. One can obtain a linear time decoding algorithm in $n$ by exploiting the recursive structure of the construction. Let us partition $x$ and $z$ in the way given in \eqref{eq:constr_partition}. It is clear that $x^{(3)} = [z^{(1)} + z^{(2)}]_2$. Also, after computing $x^{(3)}$, we find that $A_{k-1}x^{(1)} = (z^{(1)} + z^{(2)} - x^{(3)})/2$ and $A_{k-1}x^{(2)} = (z^{(1)} - z^{(2)} - x^{(3)})/2$. These operations can be done in $O(m_{k-1})$ time complexity. Let $T(m)$ denote the time to decode $z \in \mathbb{Z}^m$. Then, $T(m) = 2T(m/2) + O(m)$ and by the Master Theorem, $T(m) = O(m\log{m}) = O(n)$. \textbf{An Example for the Decoding:} Consider the following system Ax = b: \begin{equation*} \begin{bmatrix*}[r] 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -1 & 0 & 0 & 1 \\ 1 & 1 & 1 & -1 & -1 & -1 & 0 & 0 \\ 1 & -1 & 0 & -1 & 1 & 0 & 0 & 0 \end{bmatrix*} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \end{bmatrix} = \begin{bmatrix*} 4 \\ -2 \\ -1 \\ 0 \end{bmatrix*} \end{equation*} Here is the first step of the algorithm: \begin{equation*} a = \begin{bmatrix} 4 \\ -2\end{bmatrix},\hphantom{a} b = \begin{bmatrix*} -1 \\ 0\end{bmatrix*} \Rightarrow \begin{bmatrix} x_7 \\ x_8 \end{bmatrix} = [a + b]_2 = \begin{bmatrix*}[r] [3]_2 \\ [-2]_2 \end{bmatrix*} = \begin{bmatrix} 1 \\ 0\end{bmatrix} \end{equation*} Then, we obtain the two following systems: \begin{align*} &\begin{bmatrix*}[r] 1 & 1 & 1 \\ 1 & -1 & 0 \end{bmatrix*} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \end{bmatrix} = \frac{a + b - \begin{bmatrix} 1 \\ 0 \end{bmatrix}}{2} = \begin{bmatrix*} 1 \\ -1 \end{bmatrix*} \\ &\begin{bmatrix*}[r] 1 & 1 & 1 \\ 1 & -1 & 0 \end{bmatrix*} \begin{bmatrix} x_4 \\ x_5 \\ x_6 \\ \end{bmatrix} = \frac{a - b - \begin{bmatrix} 1 \\ 0 \end{bmatrix}}{2} = \begin{bmatrix*} 2 \\ -1 \end{bmatrix*} \end{align*} For the first system, we find that $x_3 = [1 - 1]_2 = 0$. Then, \begin{align} x_1 = \frac{1 - 1 - 0}{2} = 0 \\ x_2 = \frac{1 + 1 - 0}{2} = 1 \end{align} For the second system, we similarly find that $x_6 = [2 - 1]_2 = 1$. 
Then, \begin{align} x_4 = \frac{2 - 1 - 1}{2} = 0 \\ x_5 = \frac{2 + 1 - 1}{2} = 1 \end{align} Thus, we have $x = \begin{bmatrix} 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0\end{bmatrix}^T$. \textbf{A Modular Arithmetical Property of the CRT Matrix:} Consider the following equation with powers of two in 8 variables and the corresponding $4\times 8$ CRT matrix, say $A$. \begin{equation} w_b^Tx = \sum_{i=1}^8 2^{i-1}x_i = 0 \end{equation} \begin{align} &\begin{bmatrix*}[l] [2^0]_{3} & [2^1]_{3} & [2^2]_{3} & [2^3]_{3} & [2^4]_{3} & [2^5]_{3} & [2^6]_3 & [2^7]_3 \\ [2^0]_{5} & [2^1]_{5} & [2^2]_{5} & [2^3]_{5} & [2^4]_{5} & [2^5]_{5} & [2^6]_5 & [2^7]_5 \\ [2^0]_{7} & [2^1]_{7} & [2^2]_{7} & [2^3]_{7} & [2^4]_{7} & [2^5]_{7} & [2^6]_7 & [2^7]_7 \\ [2^0]_{11} & [2^1]_{11} & [2^2]_{11} & [2^3]_{11} & [2^4]_{11} & [2^5]_{11} & [2^6]_{11} & [2^7]_{11} \end{bmatrix*} \\ &\hphantom{aaaaaaaaaa}= \begin{bmatrix*}[r] 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 \\ 1 & 2 & 4 & 3 & 1 & 2 & 4 & 3 \\ 1 & 2 & 4 & 1 & 2 & 4 & 1 & 2 \\ 1 & 2 & 4 & 8 & 5 & 10 & 9 & 7 \end{bmatrix*}_{4 \times 8} \end{align} For the prime $p_i$, the entries of row $i$ are congruent modulo $p_i$ to the corresponding entries of $w_b$. Hence, the $i^\text{th}$ entry of the vector $Ax$ is divisible by $p_i$ whenever $w_b^T x = 0$. For instance, one can pick $x = \begin{bmatrix} 2 & 1 & 1 & 3 & 0 & 1 & -1 & 0 \end{bmatrix}^T$ as a solution of $w_b^Tx = 0$, and we see that $Ax = \begin{bmatrix} 12 & 15 & 14 & 33 \end{bmatrix}^T = \begin{bmatrix} 4\cdot 3 & 3\cdot 5 & 2 \cdot 7 & 3 \cdot 11 \end{bmatrix}^T$. This property is essential in the construction of small weight depth-2 circuits for arbitrary threshold functions, whereas $\text{RMDS}_q$ matrices do not necessarily behave in this manner. \newpage \end{document}
2205.07998v1
http://arxiv.org/abs/2205.07998v1
A Faber-Krahn inequality for wavelet transforms
\documentclass[a4paper,12pt]{amsart} \usepackage{amsmath,amssymb,amsfonts,bbm} \usepackage{graphicx,color} \usepackage{amsmath} \usepackage{float} \usepackage{caption} \captionsetup[figure]{font=small} \captionsetup{width=\linewidth} \usepackage{geometry} \geometry{ a4paper, total={140mm,230mm}, left=35mm, top=40mm, bottom=45mm,} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{cor}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem*{remark*}{Remark} \newtheorem{Alg}[theorem]{Algorithm} \theoremstyle{definition} \newcommand\realp{\mathop{Re}} \newcommand\dH{\,d{\mathcal H}^1} \def\bR{\mathbb{R}} \def\bC{\mathbb{C}} \newcommand\cB{\mathcal{B}} \newcommand\cA{\mathcal{A}} \newcommand\cF{\mathcal{F}} \newcommand\cS{\mathcal{S}} \newcommand\cH{\mathcal{H}} \newcommand\cV{\mathcal{V}} \newcommand\bN{\mathbb{N}} \newcommand{\commF}[1]{{\color{blue}*** #1 ***}} \newcommand{\commP}[1]{{\color{red}*** #1 ***}} \newcommand{\PhiOmega}[1]{\Phi_\Omega(#1)} \newcommand{\PhiOm}{\Phi_\Omega} \newcommand{\PsiOmega}[1]{\Psi_\Omega(#1)} \newcommand{\PsiOm}{\Psi_\Omega} \newcommand\Aa{{\mathcal{A}_\alpha}} \numberwithin{equation}{section} \title{A Faber-Krahn inequality for Wavelet transforms} \author{Jo\~ao P. G. Ramos and Paolo Tilli} \begin{document} \maketitle \begin{abstract} For some special window functions $\psi_{\beta} \in H^2(\bC^+),$ we prove that, over all sets $\Delta \subset \bC^+$ of fixed hyperbolic measure $\nu(\Delta),$ the ones over which the Wavelet transform $W_{\overline{\psi_{\beta}}}$ with window $\overline{\psi_{\beta}}$ concentrates optimally are exactly the discs with respect to the pseudohyperbolic metric of the upper half space. This answers a question raised by Abreu and D\"orfler in \cite{AbreuDoerfler}. Our techniques make use of a framework recently developed by F. Nicola and the second author in \cite{NicolaTilli}, but in the hyperbolic context induced by the dilation symmetry of the Wavelet transform. This leads us naturally to use a hyperbolic rearrangement function, as well as the hyperbolic isoperimetric inequality, in our analysis. \end{abstract} \section{Introduction} In this paper, our main focus will be to answer a question by L. D. Abreu and M. D\"orfler \cite{AbreuDoerfler} on the sets which maximise concentration of certain wavelet transforms. To that extent, given a fixed function $g \in L^2(\bR),$ the \emph{Wavelet transform} with window $g$ is defined as \begin{equation}\label{eq:wavelet-transform} W_gf(x,s) = \frac{1}{s^{1/2}} \int_{\bR} f(t)\overline{ g\left( \frac{t-x}{s}\right) }\, dt, \quad \forall f \in L^2(\bR). \end{equation} This map is well-defined pointwise for each $x \in \bR, s > 0,$ but in fact, it has better properties if we restrict ourselves to certain subspaces of $L^2.$ Indeed, if $f,g$ are so that $\widehat{f},\widehat{g} = 0$ over the negative half line $(-\infty,0),$ then it can be shown that the wavelet transform is an isometric inclusion from $H^2(\bC^+)$ to $L^2(\bC^+,s^{-2} \, dx \, ds).$ This operator has been introduced first by I. Daubechies and T. 
Paul in \cite{DaubechiesPaul}, where the authors discuss its properties with respect to time-frequency localisation, in comparison to the short-time Fourier transform operator introduced previously by Daubechies in \cite{Daubechies} and Berezin \cite{Berezin}. Together with the short-time Fourier transform, the Wavelet transform has attracted the attention of several authors. As the literature on this topic is extremely rich and we could not, by any means, provide a complete account of it here, we mention especially those interested in the problem of obtaining information about a domain from information on its localisation operator - see, for instance, \cite{AbreuDoerfler,AbreuSpeckbacher1, AbreuSpeckbacher2, AbreuGrochRomero, AbreuPerRomero, GroechenigBook, WongWaveletBook} and the references therein. In this manuscript, we shall be interested in the continuous wavelet transform for certain special window functions, and how much of its mass, in an $L^2(\bC^+,s^{-2} \, dx \, ds)-$sense, can be concentrated on certain subsets of the upper half space. To that extent, fix $\beta > 0.$ We then define $\psi_{\beta} \in L^2(\bR)$ to be such that \[ \widehat{\psi_{\beta}}(t) = \frac{1}{c_{\beta}} 1_{[0,+\infty)} t^{\beta} e^{-t}, \] where one lets $c_{\beta} = \int_0^{\infty} t^{2\beta - 1} e^{-2t} dt = 2^{-2\beta}\Gamma(2\beta).$ Here, we normalise the Fourier transform as \[ \widehat{f}(\xi) = \frac{1}{(2\pi)^{1/2}} \int_{\bR} f(t) e^{-it \xi} \, dt. \] Fix now a subset $\Delta \subset \bC^+$ of the upper half space. We define then \[ C_{\Delta}^{\beta} := \sup \left\{ \int_{\Delta} |W_{\overline{\psi_{\beta}}} f(x,s)|^2 \,\frac{ dx \, ds}{s^2} \colon f \in H^2(\bC^+), \|f\|_2 = 1 \right\}. \] The constant $C_{\Delta}^{\beta}$ measures, in some sense, the maximal wavelet concentration of order $\beta >0$ in $\Delta$. A natural question, in this regard, is that of providing sharp bounds for $C_{\Delta}^{\beta},$ in terms of some quantitative constraint additionally imposed on the set $\Delta.$ This problem has appeared previously in some places in the literature, especially in the context of the short-time Fourier transform \cite{AbreuSpeckbacher1, AbreuSpeckbacher2, NicolaTilli}. For the continuous wavelet transform, we mention, in particular, the paper by L. D. Abreu and M. D\"orfler \cite{AbreuDoerfler}, where the authors pose this question explicitly in their last remark. The purpose of this manuscript is, as previously mentioned, to solve such a problem, under the constraint that the \emph{hyperbolic measure} of the set $\Delta$, given by \[ \nu(\Delta) = \int_{\Delta} \frac{dx\, ds}{s^2} < +\infty, \] is \emph{prescribed}. This condition arises in particular if one tries to analyse when the localisation operators associated with $\Delta$ \[ P_{\Delta,\beta} f = ( (W_{\overline{\psi_{\beta}}})^{*} 1_{\Delta} W_{\overline{\psi_{\beta}}} ) f \] are bounded from $L^2$ to $L^2.$ One sees, by \cite[Propositions~12.1~and~12.12]{WongWaveletBook}, that \begin{equation}\label{eq:localisation-operator} \| P_{\Delta,\beta} \|_{2 \to 2} \le \begin{cases} 1, & \text{ or } \cr \left(\frac{\nu(\Delta)}{c_{\beta}}\right).
& \cr \end{cases} \end{equation} As we see that \[ C_{\Delta}^{\beta} = \sup_{f \colon \|f\|_2 = 1} \int_{\Delta} |W_{\overline{\psi_{\beta}}} f(x,s)|^2 \, \frac{ dx \, ds}{s^2} = \sup_{f \colon \|f\|_2 = 1} \langle P_{\Delta,\beta} f, f \rangle_{L^2(\bR)}, \] we have the two possible bounds for $C_{\Delta}^{\beta},$ given by the two possible upper bounds in \eqref{eq:localisation-operator}. By considering the first bound, one is led to consider the problem of maximising $C_{\Delta}^{\beta}$ over all sets $\Delta \subset \bC^{+},$ which is trivial by taking $\Delta = \bC^+.$ From the second bound, however, we are induced to consider the problem we mentioned before. In this regard, the main result of this note may be stated as follows: \begin{theorem}\label{thm:main} It holds that \begin{equation}\label{eq:first-theorem} C_{\Delta}^{\beta} \le C_{\Delta^*}^{\beta}, \end{equation} where $\Delta^* \subset \bC^+$ denotes any pseudohyperbolic disc so that $\nu(\Delta) = \nu(\Delta^*).$ Moreover, equality holds in \eqref{eq:first-theorem} if and only if $\Delta$ is a pseudohyperbolic disc of measure $\nu(\Delta).$ \end{theorem} The proof of Theorem \ref{thm:main} is inspired by the recent proof of the Faber-Krahn inequality for the short-time Fourier transform, by F. Nicola and the second author \cite{NicolaTilli}. Indeed, in the present case, one may take advantage of the fact that the wavelet transform induces naturally a mapping from $H^2(\bC^+)$ to analytic functions with some decay on the upper half plane. This parallel is indeed the starting point of the proof of the main result in \cite{NicolaTilli}, where the authors show that the short-time Fourier transform with Gaussian window induces naturally the so-called \emph{Bargmann transform}, and one may thus work with analytic functions in a more direct form. The next steps follow the general guidelines as in \cite{NicolaTilli}: one fixes a function and considers certain integrals over level sets, carefully adjusted to match the measure constraints. Then one uses rearrangement techniques, together with a coarea formula argument with the isoperimetric inequality stemming from the classical theory of elliptic equations, in order to prove bounds on the growth of such quantities. The main differences in this context are highlighted by the translation of our problem in terms of Bergman spaces of the disc, rather than Fock spaces. Furthermore, we use a rearrangement with respect to a \emph{hyperbolic} measure, in contrast to the usual Hardy--Littlewood rearrangement in the case of the short-time Fourier transform. This presence of hyperbolic structures induces us, further in the proof, to use the hyperbolic isoperimetric inequality. In this regard, we point out that a recent result by A. Kulikov \cite{Kulikov} used a similar idea in order to analyse extrema of certain monotone functionals on Hardy spaces. \\ This paper is structured as follows. In Section 2, we introduce notation and the main concepts needed for the proof, and perform the first reductions of our proof. With the right notation at hand, we restate Theorem \ref{thm:main} in more precise form - which allows us to state crucial additional information on the extremizers of inequality \eqref{eq:first-theorem} - in Section 3, where we prove it. Finally, in Section 4, we discuss related versions of the reduced problem, and remark further on the inspiration for the hyperbolic measure constraint in Theorem \ref{thm:main}. \\ \noindent\textbf{Acknowledgements.} J.P.G.R. 
would like to acknowledge financial support by the European Research Council under the Grant Agreement No. 721675 ``Regularity and Stability in Partial Differential Equations (RSPDE)''. \section{Notation and preliminary reductions} Before moving on to the proof of Theorem \ref{thm:main}, we must introduce the notions which shall be used in its proof. We refer the reader to the excellent exposition in \cite[Chapter~18]{WongWaveletBook} for a more detailed account of the facts presented here. \subsection{The wavelet transform} Let $f \in H^2(\bC^+)$ be a function on the Hardy space of the upper half plane. That is, $f$ is holomorphic on $\bC^+ = \{ z \in \bC \colon \text{Im}(z) > 0\},$ and such that \[ \sup_{s > 0} \int_{\bR} |f(x+is)|^2 \, dx < +\infty. \] Functions in this space may be identified in a natural way with functions $f$ on the real line, so that $\widehat{f}$ has support on the positive half-line $[0,+\infty).$ We fix then a function $g \in H^2(\bC^+) \setminus \{0\}$ so that \[ \| \widehat{g} \|_{L^2(\bR^+,t^{-1})}^2 < +\infty. \] Given a fixed $g$ as above, the \emph{continuous Wavelet transform} of $f$ with respect to the window $g$ is defined to be \begin{equation}\label{eq:wavelet-def} W_gf(z) = \langle f, \pi_z g \rangle_{H^2(\bC^+)} \end{equation} where $z = x + i s,$ and $\pi_z g(t) = s^{-1/2} g(s^{-1}(t-x)).$ From the definition, it is not difficult to see that $W_g$ is an \emph{isometry} from $H^2(\bC^+)$ to $L^2(\bC^+, s^{-2} \, dx \, ds),$ as long as $\| \widehat{g} \|_{L^2(\bR^+,t^{-1})}^2 = 1.$ \\ \subsection{Bergman spaces on $\bC^+$ and $D$} For every $\alpha>-1$, the Bergman space $\Aa(D)$ of the disc is the Hilbert space of all functions $f:D\to \bC$ which are holomorphic in the unit disk $D$ and are such that \[ \Vert f\Vert_\Aa^2 := \int_D |f(z)|^2 (1-|z|^2)^\alpha \,dz <+\infty. \] Analogously, the Bergman space of the upper half plane $\Aa(\bC^+)$ is defined as the set of analytic functions in $\bC^+$ such that \[ \|f\|_{\Aa(\bC^+)}^2 = \int_{\bC^+} |f(z)|^2 s^{\alpha} \, d\mu^+(z), \] where $d \mu^+$ stands for the normalized area measure on $\bC^+.$ The two spaces defined above not only share similarities in their definition; indeed, it can be shown that they are \emph{isomorphic}: if one defines \[ T_{\alpha}f(w) = \frac{2^{\alpha/2}}{(1-w)^{\alpha+2}} f \left(\frac{w+1}{i(w-1)} \right), \] then $T_{\alpha}$ maps $\Aa(\bC^+)$ to $\Aa(D)$ as a \emph{unitary isomorphism.} For this reason, dealing with one space or the other is equivalent, an important fact in the proof of the main theorem below. In light of this, let us focus on the case of $D$, and thus we abbreviate $\Aa(D) = \Aa$ from now on. The weighted $L^2$ norm defining this space is induced by the scalar product \[ \langle f,g\rangle_\alpha := \int_D f(z)\overline{g(z)} (1-|z|^2)^\alpha\, dz. \] Here and throughout, $dz$ denotes the bidimensional Lebesgue measure on $D$. An orthonormal basis of $\Aa$ is given by the normalized monomials $ z^n/\sqrt{c_n}$ ($n=0,1,2,\ldots$), where \[ c_n = \int_D |z|^{2n}(1-|z|^2)^\alpha \,dz= 2\pi \int_0^1 r^{2n+1}(1-r^2)^\alpha\,dr= \frac{\Gamma(\alpha+1)\Gamma(n+1)}{\Gamma(2+\alpha+n)}\pi. \] Notice that \[ \frac 1 {c_n}=\frac {(\alpha+1)(\alpha+2)\cdots (\alpha+n+1)}{\pi n!} =\frac{\alpha+1}\pi \binom {-\alpha-2}{n}(-1)^n , \] so that from the binomial series we obtain \begin{equation} \label{seriescn} \sum_{n=0}^\infty \frac {x^n}{c_n}=\frac{\alpha+1}\pi (1-x)^{-2-\alpha},\quad x\in D.
\end{equation} Given $w\in D$, the reproducing kernel relative to $w$, i.e. the (unique) function $K_w\in\Aa$ such that \begin{equation} \label{repker} f(w)=\langle f,K_w\rangle_\alpha\quad\forall f\in\Aa, \end{equation} is given by \[ K_w(z):=\frac {1+\alpha}\pi (1-\overline{w}z)^{-\alpha-2}= \sum_{n=0}^\infty \frac{\overline{w}^n z^n}{c_n},\quad z\in D \] (the second equality follows from \eqref{seriescn}; note that $K_w\in\Aa$, since the sequence $\overline{w}^n/\sqrt{c_n}$ of its coefficients w.r.t.\ the monomial basis belongs to $\ell^2$). To see that \eqref{repker} holds, it suffices to check it when $f(z)=z^k$ for some $k\geq 0$, but this is immediate from the series representation of $K_w$, i.e. \[ \langle z^k,K_w\rangle_\alpha =\sum_{n=0}^\infty w^n \langle z^k,z^n/c_n\rangle_\alpha=w^k=f(w). \] Concerning the norm of $K_w$, the reproducing property readily gives \[ \Vert K_w\Vert_\Aa^2=\langle K_w,K_w\rangle_\alpha= K_w(w)=\frac{1+\alpha}\pi (1-|w|^2)^{-2-\alpha}. \] We refer the reader to \cite{Seip} and the references therein for further meaningful properties in the context of Bergman spaces. \subsection{The Bergman transform} Now, we shall connect the first two subsections above by relating the wavelet transform to Bergman spaces, through the so-called \emph{Bergman transform.} For more detailed information, see, for instance, \cite{Abreu} or \cite[Section~4]{AbreuDoerfler}. Indeed, fix $\alpha > -1.$ Recall that the function $\psi_{\alpha} \in H^2(\bC^+)$ satisfies \[ \widehat{\psi_{\alpha}}(t) = \frac{1}{c_{\alpha}} 1_{[0,+\infty)} t^{\alpha} e^{-t}, \] where $c_{\alpha} > 0$ is chosen so that $\| \widehat{\psi_{\alpha}} \|_{L^2(\bR^+,t^{-1})}^2 =1.$ The \emph{Bergman transform of order $\alpha$} is then given by \[ B_{\alpha}f(z) = \frac{1}{s^{\frac{\alpha}{2} +1}} W_{\overline{\psi_{\frac{\alpha+1}{2}}}} f(-x,s) = c_{\alpha} \int_0^{+\infty} t^{\frac{\alpha+1}{2}} \widehat{f}(t) e^{i z t} \, dt. \] From this definition, it is immediate that $B_{\alpha}$ defines an analytic function whenever $f \in H^2(\bC^+).$ Moreover, it follows directly from the properties of the wavelet transform above that $B_{\alpha}$ is a unitary map between $H^2(\bC^+)$ and $\Aa(\bC^+).$ Finally, note that the Bergman transform $B_{\alpha}$ is actually an \emph{isomorphism} between $H^2(\bC^+)$ and $\Aa(\bC^+).$ Indeed, let $l_n^{\alpha}(x) = 1_{(0,+\infty)}(x) e^{-x/2} x^{\alpha/2} L_n^{\alpha}(x),$ where $\{L_n^{\alpha}\}_{n \ge 0}$ is the sequence of generalized Laguerre polynomials of order $\alpha.$ It can be shown that the function $\psi_n^{\alpha}$ such that \begin{equation}\label{eq:eigenfunctions} \widehat{\psi_n^{\alpha}}(t) = b_{n,\alpha} l_n^{\alpha}(2t), \end{equation} with $b_{n,\alpha}$ chosen so that $ \|\widehat{\psi_n^{\alpha}}\|_{L^2(\bR^+,t^{-1})}^2=1,$ satisfies \begin{equation}\label{eq:eigenfunctions-disc} T_{\alpha} (B_{\alpha}\psi_n^{\alpha}) (w) = e_n^{\alpha}(w). \end{equation} Here, $e_n^{\alpha}(w) = d_{n,\alpha} w^n,$ where $d_{n,\alpha}$ is such that $\|e_n^{\alpha}\|_{\Aa} = 1.$ Thus, $T_{\alpha} \circ B_{\alpha}$ is an isomorphism between $H^2(\bC^+)$ and $\Aa(D),$ and the claim follows.
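Before turning to the main inequality, we record a quick numerical illustration (ours, for illustration only; the choices of $\alpha$, $f$ and $w$ below are arbitrary) of the reproducing identity \eqref{repker}: a midpoint rule in polar coordinates approximates $\langle f, K_w\rangle_\alpha$ and matches $f(w)$ to several digits.
\begin{verbatim}
# Numerical sanity check (illustrative only) of f(w) = <f, K_w>_alpha on A_alpha(D).
import numpy as np

alpha = 1.0
w = 0.3 + 0.4j
f = lambda z: z**3 + 2.0 * z + 1.0                       # any polynomial lies in A_alpha
K_w = lambda z: (1 + alpha) / np.pi * (1 - np.conj(w) * z) ** (-alpha - 2)

# midpoint grid on the unit disc in polar coordinates
Nr, Nt = 800, 800
r = (np.arange(Nr) + 0.5) / Nr
t = (np.arange(Nt) + 0.5) * 2 * np.pi / Nt
R, T = np.meshgrid(r, t, indexing="ij")
Z = R * np.exp(1j * T)
dA = R * (1.0 / Nr) * (2 * np.pi / Nt)                   # area element r dr dtheta

inner = np.sum(f(Z) * np.conj(K_w(Z)) * (1 - np.abs(Z) ** 2) ** alpha * dA)
print(inner, f(w))                                       # the two values agree closely
\end{verbatim}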
\section{The main inequality} \subsection{Reduction to an optimisation problem on Bergman spaces} By the definition of the Bergman transform above, we see that \[ \int_{\Delta} |W_{\overline{\psi_{\beta}}} f(x,s)|^2 \, \frac{ dx \, ds}{s^2} = \int_{\tilde{\Delta}} |B_{\alpha}f(z)|^{2} s^{\alpha} \, dx \, ds, \] where $\tilde{\Delta} =\{ z = x + is\colon -x+is \in \Delta\}$ and $\alpha = 2\beta - 1.$ On the other hand, we may further apply the map $T_{\alpha}$ above to $B_{\alpha}f;$ this implies that \[ \int_{\tilde{\Delta}} |B_{\alpha}f(z)|^{2} s^{\alpha} \, dx \, ds = \int_{\Omega} |T_{\alpha}(B_{\alpha}f)(w)|^2 (1-|w|^2)^{\alpha} \, dw, \] where $\Omega$ is the image of $\tilde{\Delta}$ under the map $z \mapsto \frac{z-i}{z+i}$ on the upper half plane $\bC^+.$ Notice that, from this relationship, we have \begin{align*} & \int_{\Omega} (1-|w|^2)^{-2} \, dw = \int_D 1_{\Delta}\left( \frac{w+1}{i(w-1)} \right) (1-|w|^2)^{-2} \, dw \cr & = \frac{1}{4} \int_{\Delta} \frac{ dx \, ds}{s^2} = \frac{\nu(\Delta)}{4}. \cr \end{align*} This leads us naturally to consider, on the disc $D$, the Radon measure \[ \mu(\Omega):=\int_\Omega (1-|z|^2)^{-2}dz,\quad\Omega\subseteq D, \] which is, by the computation above, the area measure in the usual Poincar\'e model of the hyperbolic space (up to a multiplicative factor 4). Thus, studying the supremum of $C_{\Delta}^{\beta}$ over $\Delta$ for which $\nu(\Delta) = s$ is equivalent to maximising \begin{equation}\label{eq:optimal-bergman-object} R(f,\Omega)= \frac{\int_\Omega |f(z)|^2 (1-|z|^2)^\alpha \,dz}{\Vert f\Vert_\Aa^2} \end{equation} over all $f \in \Aa$ and $\Omega \subset D$ with $\mu(\Omega) = s/4.$ With these reductions, we are now ready to state a more precise version of Theorem \ref{thm:main}. \begin{theorem}\label{thm:main-bergman} Let $\alpha>-1,$ and $s>0$ be fixed. Among all functions $f\in \Aa$ and among all measurable sets $\Omega\subset D$ such that $\mu(\Omega)=s$, the quotient $R(f,\Omega)$ as defined in \eqref{eq:optimal-bergman-object} satisfies the inequality \begin{equation}\label{eq:upper-bound-quotient} R(f,\Omega) \le R(1,D_s), \end{equation} where $D_s$ is a disc centered at the origin with $\mu(D_s) = s.$ Moreover, there is equality in \eqref{eq:upper-bound-quotient} if and only if $f$ is a multiple of some reproducing kernel $K_w$ and $\Omega$ is a ball centered at $w$, such that $\mu(\Omega)=s$. \end{theorem} Note that, in the Poincar\'e disc model in two dimensions, balls in the pseudohyperbolic metric coincide with Euclidean balls, but the Euclidean and hyperbolic centers differ in general, as well as the respective radii. \begin{proof}[Proof of Theorem \ref{thm:main-bergman}] Let us begin by computing $R(f,\Omega)$ when $f=1$ and $\Omega=B_r(0)$ for some $r<1$. \[ R(1,B_r)=\frac {\int_0^r \rho (1-\rho^2)^\alpha\,d\rho} {\int_0^1 \rho (1-\rho^2)^\alpha\,d\rho} = \frac {(1-\rho^2)^{1+\alpha}\vert_0^r} {(1-\rho^2)^{1+\alpha}\vert_0^1} =1-(1-r^2)^{1+\alpha}. \] Since $\mu(B_r)$ is given by \begin{align*} \int_{B_r} (1-|z|^2)^{-2}\,dz & =2\pi \int_0^r \rho (1-\rho^2)^{-2}\,d\rho \cr =\pi(1-r^2)^{-1}|_0^r & =\pi\left(\frac{1}{1-r^2}-1\right), \cr \end{align*} we have \[ \mu(B_r)=s \iff \frac 1{1-r^2}=1+\frac s\pi, \] so that $\mu(B_r)=s$ implies $R(1,B_r)=1-(1+s/\pi)^{-1-\alpha}.$ The function \[ \theta(s):=1-(1+s/\pi)^{-1-\alpha},\quad s\geq 0 \] will be our comparison function, and we will prove that \[ R(f,\Omega)\leq \theta(s) \] for every $f$ and every $\Omega\subset D$ such that $\mu(\Omega)=s$. 
Consider any $f\in\Aa$ such that $\Vert f\Vert_\Aa=1$, let \[ u(z):= |f(z)|^2 (1-|z|^2)^{\alpha+2}, \] and observe that \begin{equation} \label{eq10} R(f,\Omega)=\int_\Omega u(z)\,d\mu \leq I(s):=\int_{\{u>u^*(s)\}} u(z) \,d\mu,\quad s=\mu(\Omega), \end{equation} where $u^*(s)$ is the unique value of $t>0$ such that \[ \mu(\{u>t\})=s. \] That is, $u^*(s)$ is the inverse function of the distribution function of $u$, relative to the measure $\mu$. Observe that $u(z)$ can be extended to a continuous function on $\overline D$, by letting $u\equiv 0$ on $\partial D.$ Indeed, consider any $z_0\in D$ such that, say, $|z_0|>1/2$, and let $r=(1-|z_0|)/2$. Then, on the ball $B_r(z_0)$, for some universal constant $C>1$ we have \[ C^{-1} (1-|z|^2) \leq r \leq C(1-|z|^2)\quad\forall z\in B_r(z_0), \] so that \begin{align*} \omega(z_0):=\int_{B_r(z_0)} |f(z)|^2 (1-|z|^2)^\alpha \,dz \geq C_1 r^{\alpha+2}\frac 1 {\pi r^2} \int_{B_r(z_0)} |f(z)|^2 \,dz\\ \geq C_1 r^{\alpha+2} |f(z_0)|^2 \geq C_2 (1-|z_0|^2)^{\alpha+2} |f(z_0)|^2= C_2 u(z_0). \end{align*} Here, we used that fact that $|f(z)|^2$ is subharmonic, which follows from analyticity. Since $|f(z)|^2 (1-|z|^2)^\alpha\in L^1(D)$, $\omega(z_0)\to 0$ as $|z_0|\to 1$, so that \[ \lim_{|z_0|\to 1} u(z_0)=0. \] As a consequence, we obtain that the superlevel sets $\{u > t\}$ are \emph{strictly} contained in $D$. Moreover, the function $u$ so defined is a \emph{real analytic function}. Thus (see \cite{KrantzParks}) all level sets of $u$ have zero measure, and as all superlevel sets do not touch the boundary, the hyperbolic length of all level sets is zero; that is, \[ L(\{u=t\}) := \int_{\{u = t\}} (1-|z|^2)^{-1} \, d\mathcal{H}^1 =0, \, \forall \, t > 0. \] Here and throughout the proof, we use the notation $\mathcal{H}^k$ to denote the $k-$dimensional Hausdorff measure. It also follow from real analyticity that the set of critical points of $u$ also has hyperbolic length zero: \[ L(\{|\nabla u| = 0\}) = 0. \] Finally, we note that a suitable adaptation of the proof of Lemma 3.2 in \cite{NicolaTilli} yields the following result. As the proofs are almost identical, we omit them, and refer the interested reader to the original paper. \begin{lemma}\label{thm:lemma-derivatives} The function $\varrho(t) := \mu(\{ u > t\})$ is absolutely continuous on $(0,\max u],$ and \[ -\varrho'(t) = \int_{\{u = t\}} |\nabla u|^{-1} (1-|z|^2)^{-2} \, d \mathcal{H}^1. \] In particular, the function $u^*$ is, as the inverse of $\varrho,$ locally absolutely continuous on $[0,+\infty),$ with \[ -(u^*)'(s) = \left( \int_{\{u=u^*(s)\}} |\nabla u|^{-1} (1-|z|^2)^{-2} \, d \mathcal{H}^1 \right)^{-1}. \] \end{lemma} Let us then denote the boundary of the superlevel set where $u > u^*(s)$ as \[ A_s=\partial\{u>u^*(s)\}. \] We have then, by Lemma \ref{thm:lemma-derivatives}, \[ I'(s)=u^*(s),\quad I''(s)=-\left(\int_{A_s} |\nabla u|^{-1}(1-|z|^2)^{-2}\,d{\mathcal H}^1\right)^{-1}. \] Since the Cauchy-Schwarz inequality implies \[ \left(\int_{A_s} |\nabla u|^{-1}(1-|z|^2)^{-2}\,d{\mathcal H}^1\right) \left(\int_{A_s} |\nabla u|\,d{\mathcal H}^1\right) \geq \left(\int_{A_s} (1-|z|^2)^{-1}\,d{\mathcal H}^1\right)^2, \] letting \[ L(A_s):= \int_{A_s} (1-|z|^2)^{-1}\,d{\mathcal H}^1 \] denote the length of $A_s$ in the hyperbolic metric, we obtain the lower bound \begin{equation}\label{eq:lower-bound-second-derivative} I''(s)\geq - \left(\int_{A_s} |\nabla u|\,d{\mathcal H}^1\right) L(A_s)^{-2}. 
\end{equation} In order to compute the first term in the product on the right-hand side of \eqref{eq:lower-bound-second-derivative}, we first note that \[ \Delta \log u(z) =\Delta \log (1-|z|^2)^{2 + \alpha}=-4(\alpha+2)(1-|z|^2)^{-2}, \] which then implies that, letting $w(z)=\log u(z)$, \begin{align*} \frac {-1} {u^*(s)} \int_{A_s} |\nabla u|\,d{\mathcal H}^1 & = \int_{A_s} \nabla w\cdot\nu \,d{\mathcal H}^1 = \int_{u>u^*(s)} \Delta w\,dz \cr =-4(\alpha+2)\int_{u>u^*(s)} (1-|z|^2)^{-2} \,dz & =-4(\alpha+2) \mu(\{u>u^*(s)\})= -4(\alpha+2)s.\cr \end{align*} Therefore, \begin{equation}\label{eq:lower-bound-second-almost} I''(s)\geq -4(\alpha+2)s u^*(s)L(A_s)^{-2}= -4(\alpha+2)s I'(s)L(A_s)^{-2}. \end{equation} On the other hand, the isoperimetric inequality on the Poincaré disc - see, for instance, \cite{Izmestiev, Osserman, Schmidt} - implies \[ L(A_s)^2 \geq 4\pi s + 4 s^2, \] so that, pluggin into \eqref{eq:lower-bound-second-almost}, we obtain \begin{equation}\label{eq:final-lower-bound-second} I''(s)\geq -4 (\alpha+2)s I'(s)(4\pi s+4 s^2)^{-1} =-(\alpha+2)I'(s)(\pi+s)^{-1}. \end{equation} Getting back to the function $\theta(s)$, we have \[ \theta'(s)=\frac{1+\alpha}\pi(1+s/\pi)^{-2-\alpha},\quad \theta''(s)=-(2+\alpha)\theta'(s)(1+s/\pi)^{-1}/\pi. \] Since \[ I(0)=\theta(0)=0\quad\text{and}\quad \lim_{s\to+\infty} I(s)=\lim_{s\to+\infty}\theta(s)=1, \] we may obtain, by a maximum principle kind of argument, \begin{equation}\label{eq:inequality-sizes} I(s)\leq\theta(s)\quad\forall s>0. \end{equation} Indeed, consider $G(s) := I(s) - \theta(s).$ We claim first that $G'(0) \le 0.$ To that extent, notice that \[ \Vert u\Vert_{L^\infty(D)} = u^*(0)=I'(0) \text{ and }\theta'(0)=\frac{1+\alpha}\pi. \] On the other hand, we have, by the properties of the reproducing kernels, \begin{align}\label{eq:sup-bound} u(w)=|f(w)|^2 (1-|w|^2)^{\alpha+2}& =|\langle f,K_w\rangle_\alpha|^2(1-|w|^2)^{\alpha+2}\cr \leq \Vert f\Vert_\Aa^2 \Vert K_w\Vert_\Aa^2& (1-|w|^2)^{\alpha+2}=\frac{1+\alpha}\pi, \end{align} and thus $I'(0) - \theta'(0) \le 0,$ as claimed. Consider then \[ m := \sup\{r >0 \colon G \le 0 \text{ over } [0,r]\}. \] Suppose $m < +\infty.$ Then, by compactness, there is a point $c \in [0,m]$ so that $G'(c) = 0,$ as $G(0) = G(m) = 0.$ Let us first show that $G(c)<0$ if $G \not\equiv 0.$ In fact, we first define the auxiliary function $h(s) = (\pi + s)^{\alpha + 2}.$ The differential inequalities that $I, \, \theta$ satisfy may be combined, in order to write \begin{equation}\label{eq:functional-inequality} (h \cdot G')' \ge 0. \end{equation} Thus, $h\cdot G'$ is increasing on the whole real line. As $h$ is increasing on $\bR,$ we have two options: \begin{enumerate} \item either $G'(0) = 0,$ which implies, from \eqref{eq:sup-bound}, that $f$ is a multiple of the reproducing kernel $K_w.$ In this case, It can be shown that $G \equiv 0,$ which contradicts our assumption; \item or $G'(0)<0,$ in which case the remarks made above about $h$ and $G$ imply that $G'$ is \emph{increasing} on the interval $[0,c].$ In particular, as $G'(c) =0,$ the function $G$ is \emph{decreasing} on $[0,c],$ and the claim follows. 
\end{enumerate} Thus, $c \in (0,m).$ As $G(m) = \lim_{s \to \infty} G(s) = 0,$ there is a point $c' \in [m,+\infty)$ so that $G'(c') = 0.$ But this is a contradiction to \eqref{eq:functional-inequality}: notice that $0 = G(m) > G(c)$ implies the existence of a point $d \in (c,m]$ with $G'(d) > 0.$ As $h \cdot G'$ is increasing over $\bR,$ and $(h \cdot G')(c) = 0, \, (h \cdot G')(d) > 0,$ we cannot have $(h \cdot G') (c') = 0.$ The contradiction stems from supposing that $m < +\infty,$ and \eqref{eq:inequality-sizes} follows. With \eqref{eq:upper-bound-quotient} proved, we now turn our attention to analysing the equality case in Theorem \ref{thm:main-bergman}. To that extent, notice that, as a by-product of the analysis above, the inequality \eqref{eq:inequality-sizes} is \emph{strict} for every $s>0,$ unless $I\equiv\theta$. Now assume that $I(s_0)=\theta(s_0)$ for some $s_0>0$, then $\Omega$ must coincide (up to a negligible set) with $\{u>u^*(s_0)\}$ (otherwise we would have strict inequality in \eqref{eq10}), and moreover $I\equiv \theta$, so that \[ \Vert u\Vert_{L^\infty(D)} = u^*(0)=I'(0)=\theta'(0)=\frac{1+\alpha}\pi. \] By the argument above in \eqref{eq:sup-bound}, this implies that the $L^\infty$ norm of $u$ on $D$, which is equal to $(1+\alpha)/\pi$, is attained at some $w\in D$, and since equality is achieved, we obtain that $f$ must be a multiple of the reproducing kernel $K_w$, as desired. This concludes the proof of Theorem \ref{thm:main-bergman}. \end{proof} \noindent\textbf{Remark 1.} The uniqueness part of Theorem \ref{thm:main-bergman} may also be analysed through the lenses of an overdetermined problem. In fact, we have equality in that result if and only if we have equality in \eqref{eq:final-lower-bound-second}, for almost every $s > 0.$ If we let $w = \log u$, then a quick inspection of the proof above shows that \begin{align}\label{eq:serrin-disc} \begin{cases} \Delta w = \frac{-4(\alpha+2)}{(1-|z|^2)^2} & \text { in } \{u > u^*(s)\}, \cr w = \log u^*(s), & \text{ on } A_s, \cr |\nabla w| = \frac{c}{1-|z|^2}, & \text{ on } A_s. \cr \end{cases} \end{align} By mapping the upper half plane $\mathbb{H}^2$ to the Poincar\'e disc by $z \mapsto \frac{z-i}{z+i},$ one sees at once that a solution to \eqref{eq:serrin-disc} translates into a solution of the Serrin overdetermined problem \begin{align}\label{eq:serrin-upper-half} \begin{cases} \Delta_{\mathbb{H}^2} v = c_1 & \text { in } \Omega, \cr v = c_2 & \text{ on } \partial\Omega, \cr |\nabla_{\mathbb{H}^2} v| = c_3 & \text{ on } \partial\Omega, \cr \end{cases} \end{align} where $\Delta_{\mathbb{H}^2}$ and $\nabla_{\mathbb{H}^2}$ denote, respectively, the Laplacian and gradient in the upper half space model of the two-dimensional hyperbolic plane. By the main result in \cite{KumaresanPrajapat}, the only domain $\Omega$ which solves \eqref{eq:serrin-upper-half} is a geodesic disc in the upper half space, with the hyperbolic metric. Translating back, this implies that $\{u>u^*(s)\}$ are (hyperbolic) balls for almost all $s > 0.$ A direct computation then shows that $w = \log u,$ with $u(z) = |K_w(z)|^2(1-|z|^2)^{\alpha+2},$ is the unique solution to \eqref{eq:serrin-disc} in those cases. \\ \noindent\textbf{Remark 2.} Theorem \ref{thm:main-bergman} directly implies, by the reductions above, Theorem \ref{thm:main}. In addition to that, we may use the former to characterise the extremals to the inequality \eqref{eq:first-theorem}. 
Indeed, it can be shown that the reproducing kernels $K_w$ for $\Aa(D)$ are the image under $T_{\alpha}$ of the reproducing kernels for $\Aa(\bC^+),$ given by \[ \mathcal{K}_{w}^{\alpha}(z) = \kappa_{\alpha} \left( \frac{1}{z-\overline{w}} \right)^{\alpha+2}, \] where $\kappa_{\alpha}$ accounts for the normalisation we used before. Thus, equality holds in \eqref{eq:first-theorem} if and only if $\Delta$ is a pseudohyperbolic disc, and moreover, the function $f \in H^2(\bC^+)$ is such that \begin{equation}\label{eq:equality-Bergman-kernel} B_{2\beta-1}f(z) = \lambda_{\beta} \mathcal{K}^{2\beta - 1}_w(z), \end{equation} for some $w \in \bC^+.$ On the other hand, it also holds that the functions $\{\psi^{\alpha}_n\}_{n \in \bN}$ defined in \eqref{eq:eigenfunctions} are such that $B_{\alpha}(\psi_0^{\alpha}) =: \Psi_0^{\alpha}$ is a \emph{multiple} of $\left(\frac{1}{z+i}\right)^{\alpha+2}.$ This can be seen from the fact that $T_{\alpha}(\Psi_0^{\alpha})$ is the constant function. From these considerations, we obtain that $f$ is a multiple of $\pi_{w} \psi_0^{2\beta-1},$ where $\pi_w$ is as in \eqref{eq:wavelet-def}. In summary, we obtain the following: \begin{corollary} Equality holds in Theorem \ref{thm:main} if and only if $\Delta$ is a pseudohyperbolic disc with hyperbolic center $w = x + i y,$ and $$f(t) = c \cdot \frac{1}{y^{1/2}}\psi_0^{2\beta-1} \left( \frac{t-x}{y}\right),$$ for some $c \in \mathbb{C} \setminus \{0\}.$ \end{corollary} \section{Other measure constraints and related problems} As discussed in the introduction, the constraint on the \emph{hyperbolic} measure of the set $\Delta$ can be seen as the one which makes the most sense in the framework of the Wavelet transform. In fact, another way to see this is as follows. Fix $w = x_1 + i s_1,$ and let $z = x + is, \,\, w,z \in \bC^+.$ Then \[ \langle \pi_{w} f, \pi_z g \rangle_{H^2(\bC^+)} = \langle f, \pi_{\tau_{w}(z)} g \rangle_{H^2(\bC^+)}, \] where we define $\tau_{w}(z) = \left( \frac{x-x_1}{s_1}, \frac{s}{s_1} \right).$ By \eqref{eq:wavelet-def}, we get \begin{align}\label{eq:change-of-variables} \int_{\Delta} |W_{\overline{\psi_{\beta}}}(\pi_w f)(x,s)|^2 \, \frac{ dx \, ds}{s^2} & = \int_{\Delta} |W_{\overline{\psi_{\beta}}}f(\tau_w(z))|^2 \, \frac{dx \, ds}{s^2} \cr & = \int_{(\tau_w)^{-1}(\Delta)} |W_{\overline{\psi_{\beta}}}f(x,s)|^2 \, \frac{dx \, ds}{s^2}. \cr \end{align} Thus, suppose one wants to impose a measure constraint like $\tilde{\nu}(\Delta) = s,$ where $\tilde{\nu}$ is a measure on the upper half plane. The computations in \eqref{eq:change-of-variables} tell us that $C_{\Delta}^{\beta} = C_{\tau_w(\Delta)}^{\beta}, \, \forall \, w \in \bC^+.$ Thus, one is naturally led to suppose that the class of domains $\{ \tilde{\Delta} \subset \bC^+ \colon \tilde{\nu}(\tilde{\Delta}) = \tilde{\nu}(\Delta) \}$ includes $\{ \tau_w(\Delta), \, w \in \bC^+\}.$ Therefore, $\tilde{\nu}(\Delta) = \tilde{\nu}(\tau_w(\Delta)).$ Taking first $w = x_1 + i,$ one obtains that $\tilde{\nu}$ is invariant under horizontal translations. By taking $w = is_1,$ one then obtains that $\tilde{\nu}$ is invariant with respect to (positive) dilations. It is easy to see that any measure with these properties has to be a multiple of the measure $\nu$ defined above.
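Consistently with this, the hyperbolic measure $\nu$ itself is invariant under every $\tau_w$; the following minimal Python sketch (ours, for illustration only) checks this symbolically:
\begin{verbatim}
# Symbolic check (illustrative only) that dx ds / s^2 is invariant under
# tau_w(x, s) = ((x - x1)/s1, s/s1), where w = x1 + i s1.
import sympy as sp

x, x1 = sp.symbols("x x1", real=True)
s, s1 = sp.symbols("s s1", positive=True)

u, v = (x - x1) / s1, s / s1                      # the new coordinates tau_w(x, s)
J = sp.Matrix([[sp.diff(u, x), sp.diff(u, s)],
               [sp.diff(v, x), sp.diff(v, s)]]).det()   # Jacobian determinant = 1/s1^2

print(sp.simplify(J / v**2 - 1 / s**2))           # prints 0: the measure is preserved
\end{verbatim}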
On the other hand, if one is willing to forego the original problem and focus on the quotient \eqref{eq:optimal-bergman-object}, one may wonder what happens when, instead of the hyperbolic measure on the (Poincar\'e) disc, one considers the supremum of $R(f,\Omega)$ over $f \in \Aa(D)$ and imposes $|\Omega| =s,$ where $| \cdot |$ denotes \emph{Lebesgue} measure. In that case, the problem of determining \[ \mathcal{C}_{\alpha} := \sup_{|\Omega| = s} \sup_{f \in \Aa(D)} R(f,\Omega) \] is much simpler. Indeed, take $\Omega = D \setminus D(0,r_s),$ with $r_s > 0$ chosen so that the Lebesgue measure constraint on $\Omega$ is satisfied. For such a domain, consider $f_n(z) = d_{n,\alpha} \cdot z^n,$ as in \eqref{eq:eigenfunctions-disc}. One may compute these constants explicitly as: \[ d_{n,\alpha} = \left( \frac{\Gamma(n+2+\alpha)}{n! \cdot \Gamma(2+\alpha)} \right)^{1/2}. \] For these functions, one has $\|f_n\|_{\Aa} = 1.$ We now claim that \begin{equation}\label{eq:convergence-example} \int_{D(0,r_s)} |f_n(z)|^2(1-|z|^2)^{\alpha} \, dz \to 0 \text{ as } n \to \infty. \end{equation} Indeed, the left-hand side of \eqref{eq:convergence-example} equals, after passing to polar coordinates, \begin{equation}\label{eq:upper-bound} 2 \pi d_{n,\alpha}^2 \int_0^{r_s} t^{2n} (1-t^2)^{\alpha} \, dt \le 2 \pi d_{n,\alpha}^2 (1-r_s^2)^{-1} r_s^{2n}, \end{equation} whenever $\alpha > -1.$ On the other hand, the explicit formula for $d_{n,\alpha}$ implies that this constant grows at most like a (fixed) power of $n.$ As the right-hand side of \eqref{eq:upper-bound} contains a factor of $r_s^{2n},$ and $r_s < 1,$ this proves \eqref{eq:convergence-example}. Therefore, \[ R(f_n,\Omega) \to 1 \text{ as } n \to \infty. \] So far, we have been interested in analysing the supremum of $\sup_{f \in \Aa} R(f,\Omega)$ over different classes of domains, but another natural question concerns a \emph{reversed} Faber-Krahn inequality: if one is instead interested in determining the \emph{minimum} of $\sup_{f \in \Aa} R(f,\Omega)$ over certain classes of domains, what can be said in both the Euclidean and hyperbolic cases? In that regard, we first note the following: the problem of determining the \emph{minimum} of $\sup_{f \in \Aa} R(f,\Omega)$ over $\Omega \subset D, \, \mu(\Omega) = s$ is much easier than the analysis in the proof of Theorem \ref{thm:main-bergman} above. Indeed, by letting $\Omega_n$ be a sequence of annuli of hyperbolic measure $s,$ one sees that $\sup_{f \in \Aa} R(f,\Omega_n) = R(1,\Omega_n), \, \forall n \in \bN,$ by the results in \cite{DaubechiesPaul}. Moreover, if $\mu(\Omega_n) = s,$ one sees that we may take $\Omega_n \subset D \setminus D\left(0,1-\frac{1}{n}\right), \, \forall n \ge 1,$ and thus $|\Omega_n| \to 0 \, \text{ as } n \to \infty.$ This shows that \[ \inf_{\Omega \colon \mu(\Omega) = s} \sup_{f \in \Aa(D)} R(f,\Omega) = 0, \, \forall \, \alpha > -1. \] On the other hand, the situation is starkly different when one considers the Lebesgue measure in place of the hyperbolic one. Indeed, we shall show below that we may also explicitly solve the problem of determining the \emph{minimum} of $\sup_{f \in \Aa} R(f,\Omega)$ over all $\Omega, \, |\Omega| = s.$ For that purpose, we define \[ \mathcal{D}_{\alpha} = \inf_{\Omega\colon |\Omega| = s} \sup_{f \in \Aa} R(f,\Omega). \] Then we have \begin{equation}\label{eq:lower-bound} \mathcal{D}_{\alpha} \ge \inf_{|\Omega| = s} \frac{1}{\pi} \int_{\Omega} (1-|z|^2)^{\alpha} \, dz.
\end{equation} Now, we distinguish three cases: \begin{enumerate} \item If $\alpha \in (-1,0),$ then the function $z \mapsto (1-|z|^2)^{\alpha}$ is strictly \emph{increasing} in $|z|,$ and thus the left-hand side of \eqref{eq:lower-bound} is at least \[ 2 \int_0^{(s/\pi)^{1/2}} t(1-t^2)^{\alpha} \, dt = \theta^1_{\alpha}(s). \] \item If $\alpha > 0,$ then the function $z \mapsto (1-|z|^2)^{\alpha}$ is strictly \emph{decreasing} in $|z|,$ and thus the left-hand side of \eqref{eq:lower-bound} is at least \[ 2 \int_{(1-s/\pi)^{1/2}}^1 t(1-t^2)^{\alpha} \, dt = \theta^2_{\alpha}(s). \] \item Finally, for $\alpha = 0,$ $\mathcal{D}_0 \ge s/\pi.$ \end{enumerate} In particular, we can also characterise \emph{exactly} when equality occurs in the first two cases above: for the first case, we must have $\Omega = D(0,(s/\pi)^{1/2});$ for the second case, we must have $\Omega = D \setminus D(0,(1-s/\pi)^{1/2});$ notice that, in both those cases, equality is indeed attained, as constant functions attain $\sup_{f \in \Aa} R(f,\Omega).$ Finally, in the third case, if one restricts to \emph{simply connected sets} $\Omega \subset D,$ we may resort to \cite[Theorem~2]{AbreuDoerfler}. Indeed, in order for the equality $\sup_{f \in \mathcal{A}_0} R(f,\Omega) = \frac{|\Omega|}{\pi}$ to hold, one necessarily has \[ \mathcal{P}(1_{\Omega}) = \lambda, \] where $\mathcal{P}: L^2(D) \to \mathcal{A}_0(D)$ denotes the projection onto the space $\mathcal{A}_0.$ But from the proof of Theorem 2 in \cite{AbreuDoerfler}, as $\Omega$ is simply connected, this implies that $\Omega$ has to be a disc centered at the origin. We summarise the results obtained in this section below, for the convenience of the reader. \begin{theorem}\label{thm:sup-inf} Suppose $s = |\Omega|$ is fixed, and consider $\mathcal{C}_{\alpha}$ defined above. Then $\mathcal{C}_{\alpha} =1, \, \forall \alpha > -1,$ and no domain $\Omega$ attains this supremum. Moreover, if one considers $ \mathcal{D}_{\alpha},$ one has the following assertions: \begin{enumerate} \item If $\alpha \in (-1,0),$ then $\sup_{f \in \Aa} R(f,\Omega) \ge \theta_{\alpha}^1(s),$ with equality if and only if $\Omega = D(0,(s/\pi)^{1/2}).$ \item If $\alpha > 0,$ then $\sup_{f \in \Aa} R(f,\Omega) \ge \theta_{\alpha}^2(s),$ with equality if and only if $\Omega = D \setminus D(0,(1-s/\pi)^{1/2}).$ \item If $\alpha = 0,$ then $\sup_{f \in \Aa} R(f,\Omega) \ge s/\pi.$ Furthermore, if equality is attained and $\Omega$ is simply connected, then $\Omega = D(0,(s/\pi)^{1/2}).$ \end{enumerate} \end{theorem} The assumption that $\Omega$ is simply connected in the third assertion in Theorem \ref{thm:sup-inf} cannot be dropped in general, as any radially symmetric domain $\Omega$ with Lebesgue measure $s$ satisfies the same property. We conjecture, however, that these are the \emph{only} domains with such a property: that is, if $\Omega$ is such that $\sup_{f \in \mathcal{A}_0} R(f,\Omega) = |\Omega|/\pi,$ then $\Omega$ must have radial symmetry. \begin{thebibliography}{99} \bibitem{Abreu} L. D. Abreu, \newblock Wavelet frames, Bergman spaces and Fourier transforms of Laguerre functions. \newblock \emph{arXiv preprint arXiv:0704.1487}. \bibitem{AbreuDoerfler} L. D. Abreu and M. D\"orfler, \newblock An inverse problem for localization operators. \newblock \emph{Inverse Problems}, 28(11):115001, 16, 2012. \bibitem{AbreuGrochRomero} L. D. Abreu, K. Gr\"ochenig, and J. L. Romero, \newblock On accumulated spectrograms. \newblock \emph{Transactions of the American Mathematical Society}, 368(5):3629–3649, 2016. \bibitem{AbreuPerRomero} L. D.
Abreu, J. M. Pereira, and J. L. Romero, \newblock Sharp rates of convergence for accumulated spectrograms. \newblock \emph{Inverse Problems}, 33(11):115008, 12, 2017. \bibitem{AbreuSpeckbacher1} L. D. Abreu and M. Speckbacher, \newblock Donoho-Logan large sieve principles for modulation and polyanalytic Fock spaces. \newblock \emph{arXiv preprint arXiv:1808.02258}. \bibitem{AbreuSpeckbacher2} L. D. Abreu and M. Speckbacher, \newblock Deterministic guarantees for $L^1$-reconstruction: A large sieve approach with geometric flexibility. \newblock \emph{IEEE Proceedings SampTA}, 2019. \bibitem{Berezin} F. A. Berezin, \newblock Wick and anti-Wick operator symbols. \newblock \emph{Matematicheskii Sbornik (Novaya Seriya)}, 86(128):578–610, 1971. \bibitem{Daubechies} I. Daubechies, \newblock Time-frequency localisation operators: a geometric phase space approach. \newblock \emph{IEEE Transactions on Information Theory}, 34(4):605–612, 1988. \bibitem{DaubechiesPaul} I. Daubechies and T. Paul, \newblock Time-frequency localisation operators: a geometric phase space approach: II. The use of dilations. \newblock \emph{Inverse Problems}, 4:661--680, 1988. \bibitem{GroechenigBook} K. Gr\"ochenig, \newblock \emph{Foundations of time-frequency analysis}. \newblock Applied and Numerical Harmonic Analysis. Birkh\"auser Boston, Inc., Boston, MA, 2001. \bibitem{Izmestiev} I. Izmestiev, \newblock A simple proof of an isoperimetric inequality for Euclidean and hyperbolic cone-surfaces, \newblock \emph{Differential Geometry and Applications}, 43:95--101, 2015. \bibitem{KrantzParks} S. G. Krantz and H. R. Parks, \newblock \emph{A primer of real analytic functions}. \newblock Birkh\"auser Advanced Texts: Basler Lehrb\"ucher. [Birkh\"auser Advanced Texts: Basel Textbooks]. Birkh\"auser Boston, Inc., Boston, MA, second edition, 2002. \bibitem{Kulikov} A. Kulikov, \newblock Functionals with extrema at reproducing kernels. \newblock \emph{arXiv preprint arXiv:2203.12349}. \bibitem{KumaresanPrajapat} S. Kumaresan and J. Prajapat, \newblock Serrin's result for hyperbolic space and sphere. \newblock \emph{Duke Mathematical Journal}, 91(1):17--28, 1998. \bibitem{NicolaTilli} F. Nicola and P. Tilli, \newblock The Faber-Krahn inequality for the short-time Fourier transform. \newblock \emph{arXiv preprint arXiv:2106.03423}. \bibitem{Osserman} R. Osserman, \newblock The isoperimetric inequality, \newblock \emph{Bulletin of the American Mathematical Society}, 84(6):1182--1238, 1978. \bibitem{Schmidt} E. Schmidt, \newblock \"Uber die isoperimetrische Aufgabe im $n$-dimensionalen Raum konstanter negativer Kr\"ummung. I. Die isoperimetrischen Ungleichungen in der hyperbolischen Ebene und f\"ur Rotationsk\"orper im $n$-dimensionalen hyperbolischen Raum, \newblock \emph{Mathematische Zeitschrift}, 46:204--230, 1940. \bibitem{Seip} K. Seip, \newblock Reproducing formulas and double orthogonality in Bargmann and Bergman spaces, \newblock \emph{SIAM Journal on Mathematical Analysis}, 22(3):856--876, 1991. \bibitem{WongWaveletBook} M. W. Wong, \newblock \emph{Wavelet transforms and localization operators}, volume 136 of \emph{Operator Theory: Advances and Applications}. Birkh\"auser Verlag, Basel, 2002.
\end{thebibliography} \end{document} \title[The Faber-Krahn inequality for the STFT]{The Faber-Krahn inequality for the Short-time Fourier transform} \author{Fabio Nicola} \address{Dipartimento di Scienze Matematiche, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy.} \email{[email protected]} \author{Paolo Tilli} \address{Dipartimento di Scienze Matematiche, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy.} \email{[email protected]} \subjclass[2010]{49Q10, 49Q20, 49R05, 42B10, 94A12, 81S30} \keywords{Faber-Krahn inequality, shape optimization, Short-time Fourier transform, Bargmann transform, uncertainty principle, Fock space} \begin{abstract} In this paper we solve an open problem concerning the characterization of those measurable sets $\Omega\subset \bR^{2d}$ that, among all sets having a prescribed Lebesgue measure, can trap the largest possible energy fraction in time-frequency space, where the energy density of a generic function $f\in L^2(\bR^d)$ is defined in terms of its Short-time Fourier transform (STFT) $\cV f(x,\omega)$, with Gaussian window. More precisely, given a measurable set $\Omega\subset\bR^{2d}$ having measure $s> 0$, we prove that the quantity \[ \Phi_\Omega=\max\Big\{\int_\Omega|\cV f(x,\omega)|^2\,dxd\omega: f\in L^2(\bR^d),\ \|f\|_{L^2}=1\Big\}, \] is largest possible if and only if $\Omega$ is equivalent, up to a negligible set, to a ball of measure $s$, and in this case we characterize all functions $f$ that achieve equality. This result leads to a sharp uncertainty principle for the ``essential support" of the STFT (when $d=1$, this can be summarized by the optimal bound $\Phi_\Omega\leq 1-e^{-|\Omega|}$, with equality if and only if $\Omega$ is a ball). Our approach, using techniques from measure theory after suitably rephrasing the problem in the Fock space, also leads to a local version of Lieb's uncertainty inequality for the STFT in $L^p$ when $p\in [2,\infty)$, as well as to $L^p$-concentration estimates when $p\in [1,\infty)$, thus proving a related conjecture. In all cases we identify the corresponding extremals. \end{abstract} \maketitle \section{Introduction} The notion of energy concentration for a function $f\in L^2(\bR)$ in the time-frequency plane is an issue of great theoretical and practical interest and can be formalised in terms of time-frequency distributions such as the so-called Short-time Fourier transform (STFT), defined as \[ \cV f(x,\omega)= \int_\bR e^{-2\pi i y\omega} f(y)\varphi(x-y)dy, \qquad x,\omega\in\bR, \] where $\varphi$ is the ``Gaussian window'' \begin{equation} \label{defvarphi} \varphi(x)=2^{1/4}e^{-\pi x^2}, \quad x\in\bR, \end{equation} normalized in such way that $\|\varphi\|_{L^2}=1$. It is well known that $\cV f$ is a complex-valued, real analytic, bounded function and $\cV:L^2(\bR)\to L^2(\bR^2)$ is an isometry (see \cite{folland-book,grochenig-book,mallat,tataru}). It is customary to interpret $|\cV f(x,\omega)|^2$ as the time-frequency energy density of $f$ (see \cite{grochenig-book,mallat}). Consequently, the fraction of energy captured by a measurable subset $\Omega\subseteq \bR^2$ of a function $f\in L^2(\bR)\setminus\{0\}$ will be given by the Rayleigh quotient (see \cite{abreu2016,abreu2017,daubechies,marceca}) \begin{equation}\label{defphiomegaf} \PhiOmega{f}:= \frac{\int_\Omega |\cV f(x,\omega)|^2\, dxd\omega}{\int_{\bR^2} |\cV f(x,\omega)|^2\, dxd\omega}=\frac{\langle \cV^\ast \mathbbm{1}_\Omega \cV f,f\rangle}{\|f\|^2_{L^2}}. 
\end{equation} The bounded, nonnegative and self-adjoint operator $\cV^\ast \mathbbm{1}_\Omega \cV$ on $L^2(\bR)$ is known in the literature under several names, e.g. localization, concentration, Anti-Wick or Toeplitz operator, as well as time-frequency or time-varying filter. Since its first appearance in the works by Berezin \cite{berezin} and Daubechies \cite{daubechies}, the applications of such operators have been manifold and the related literature is enormous: we refer to the books \cite{berezin-book,wong} and the survey \cite{cordero2007}, and the references therein, for an account of the main results. \par Now, when $\Omega$ has finite measure, $\cV^\ast \mathbbm{1}_\Omega \cV$ is a compact (in fact, trace class) operator. Its norm $\|\cV^\ast \mathbbm{1}_\Omega \cV \|_{{\mathcal L}(L^2)}$, given by the quantity \[ \PhiOm:=\max_{f\in L^2(\bR)\setminus\{0\}} \PhiOmega{f} = \max_{f\in L^2(\bR)\setminus\{0\}} \frac{\langle \cV^\ast \mathbbm{1}_\Omega \cV f,f\rangle}{\|f\|^2_{L^2}}, \] represents the maximum fraction of energy that can in principle be trapped by $\Omega$ for any signal $f\in L^2(\bR)$, and explicit upper bounds for $\PhiOm$ are of considerable interest. Indeed, the analysis of the spectrum of $\cV^\ast \mathbbm{1}_\Omega \cV$ was initiated in the seminal paper \cite{daubechies} for radially symmetric $\Omega$, in which case the operator is diagonal in the basis of Hermite functions --and conversely \cite{abreu2012}, if a Hermite function is an eigenfunction and $\Omega$ is simply connected, then $\Omega$ is a ball centered at $0$-- and the asymptotics of the eigenvalues (Weyl's law), in connection with the measure of $\Omega$, have been studied by many authors; again the literature is very large and we address the interested reader to the contributions \cite{abreu2016,abreu2017,demari,marceca,oldfield} and the references therein. The study of the time-frequency concentration of functions, in relation to uncertainty principles and under certain additional constraints (e.g. on subsets of prescribed measure in phase space, or under limited bandwidth etc.) has a long history which, as recognized by Landau and Pollak \cite{landau1961}, dates back at least to Fuchs \cite{fuchs}, and its relevance both to theory and applications has been well known since the seminal works by Landau-Pollak-Slepian, see e.g. \cite{folland,landau1985,slepian1983}, and other relevant contributions such as those of Cowling and Price \cite{cowling}, Donoho and Stark \cite{donoho1989}, and Daubechies \cite{daubechies}. However, in spite of the abundance of deep and unexpected results related to this circle of ideas (see e.g. the visionary work by Fefferman \cite{fefferman}) the question of characterizing the subsets $\Omega\subset\bR^2$ of prescribed measure which allow for the maximum concentration is still open. In this paper we provide a complete solution to this problem, proving that the optimal sets are balls in phase space; in dimension one, our result can be stated as follows (see Theorem \ref{thm mult} for the same result in arbitrary dimension).
\begin{theorem}[Faber-Krahn inequality for the STFT]\label{thm mainthm} Among all measurable subsets $\Omega\subset \bR^2$ having a prescribed (finite, nonzero) measure, the quantity \begin{equation} \label{eee} \Phi_\Omega:= \max_{f\in L^2(\bR)\setminus\{0\}} \frac{\int_\Omega |\cV f(x,\omega)|^2\, dxd\omega}{\int_{\bR^2} |\cV f(x,\omega)|^2\, dxd\omega} = \max_{f\in L^2(\bR)\setminus\{0\}} \frac{\langle \cV^\ast \mathbbm{1}_\Omega \cV f,f\rangle}{\|f\|^2_{L^2}} \end{equation} achieves its maximum if and only if $\Omega$ is equivalent, up to a set of measure zero, to a ball. Moreover, when $\Omega$ is a ball of center $(x_0,\omega_0)$, the only functions $f$ that achieve the maximum in \eqref{eee} are the functions of the kind \begin{equation} \label{optf} f(x)=c\, e^{2\pi i \omega_0 x }\varphi(x-x_0),\qquad c\in\bC\setminus\{0\}, \end{equation} that is, the scalar multiples of the Gaussian window $\varphi$ defined in \eqref{defvarphi}, translated and modulated according to $(x_0,\omega_0)$. \end{theorem} This ``Faber--Krahn inequality'' (see Remark \ref{remFK} at the end of this section) proves, in the $L^2$-case, a conjecture by Abreu and Speckbacher \cite{abreu2018} (the full conjecture is proved in Theorem \ref{thm lpconc}), and confirms the distinguished role played by the Gaussian \eqref{optf}, as the first eigenfunction of the operator $\cV^\ast \mathbbm{1}_\Omega \cV$ when $\Omega$ has radial symmetry (see \cite{daubechies}; see also \cite{donoho1989} for a related conjecture on band-limited functions, and \cite[page 162]{cowling} for further insight). When $\Omega$ is a ball of radius $r$, one can see that $\PhiOm=1-e^{-\pi r^2}$ (this follows from the results in \cite{daubechies}, and will also follow from our proof of Theorem \ref{thm mainthm}). Hence we deduce a more explicit form of our result, which leads to a sharp form of the uncertainty principle for the STFT. \begin{theorem}[Sharp uncertainty principle for the STFT]\label{cor maincor} For every subset $\Omega\subset\bR^2$ whose Lebesgue measure $|\Omega|$ is finite we have \begin{equation}\label{eq stima 0} \PhiOm\leq 1-e^{-|\Omega|} \end{equation} and, if $|\Omega|>0$, equality occurs if and only if $\Omega$ is a ball. As a consequence, if for some $\epsilon\in (0,1)$, some function $f\in L^2(\bR)\setminus\{0\}$ and some $\Omega\subset\bR^2$ we have $\PhiOmega{f}\geq 1-\epsilon$, then necessarily \begin{equation}\label{eq stima eps} |\Omega|\geq \log(1/\epsilon), \end{equation} with equality if and only if $\Omega$ is a ball and $f$ has the form \eqref{optf}, where $(x_0,\omega_0)$ is the center of the ball. \end{theorem} Theorem \ref{cor maincor} solves the long-standing problem of the optimal lower bound for the measure of the ``essential support'' of the STFT with Gaussian window. The best result so far in this direction was obtained by Gr\"ochenig (see \cite[Theorem 3.3.3]{grochenig-book}) as a consequence of Lieb's uncertainty inequality \cite{lieb} for the STFT, and consists of the following (rougher, but valid for any window) lower bound \begin{equation}\label{eq statart} |\Omega|\geq \sup_{p>2}\,(1-\epsilon)^{p/(p-2)}(p/2)^{2/(p-2)} \end{equation} (see Section \ref{sec genaralizations} for a discussion in dimension $d$). Notice that the $\sup$ in \eqref{eq statart} is a bounded function of $\epsilon\in (0,1)$, as opposed to the optimal bound in \eqref{eq stima eps} (see Fig.~\ref{figure1} in the Appendix for a graphical comparison).
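To get a concrete feeling for the size of this gap, one may evaluate both bounds at a specific value of $\epsilon$. The following lines are a minimal Python sketch of such a computation (an illustration only; it assumes \texttt{numpy}, and both the choice $\epsilon=e^{-4}$ and the scanning grid for $p$ are arbitrary), comparing the right-hand sides of \eqref{eq statart} and \eqref{eq stima eps}:
\begin{verbatim}
import numpy as np

eps = np.exp(-4.0)                     # admissible energy loss, about 1.8%
p = np.linspace(2.001, 500.0, 500000)  # crude scan of the parameter p > 2
lieb_bound = ((1 - eps)**(p/(p - 2)) * (p/2)**(2/(p - 2))).max()
sharp_bound = np.log(1/eps)
print(lieb_bound, sharp_bound)         # roughly 2.2 versus 4.0
\end{verbatim}
In accordance with the remark at the end of Section \ref{sec mult}, the first value can never exceed $e\approx 2.72$, whereas the optimal bound $\log(1/\epsilon)$ is unbounded as $\epsilon\to 0^+$.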
We point out that, although in this introduction the discussion of our results is confined (for ease of notation and exposition) to the one dimensional case, our results are valid in arbitrary space dimension, as discussed in Section \ref{sec mult} (Theorem \ref{thm mult} and Corollary \ref{cor cor2}). While addressing the reader to \cite{bonami,folland,grochenig} for a review of the numerous uncertainty principles available for the STFT (see also \cite{boggiatto,degosson,demange2005,galbis2010}), we observe that inequality \eqref{eq stima 0} is nontrivial even when $\Omega$ has radial symmetry: in this particular case it was proved in \cite{galbis2021}, exploiting the already mentioned diagonal representation in the Hermite basis. Some concentration--type estimates were recently provided in \cite{abreu2018} as an application of the Donoho-Logan large sieve principle \cite{donoho1992} and the Selberg-Bombieri inequality \cite{bombieri}. However, though this machinery certainly has a broad applicability, as observed in \cite{abreu2018} it does not seem to give sharp bounds for the problem above. For interesting applications to signal recovery we refer to \cite{abreu2019,pfander2010,pfander2013,tao} and the references therein. Our proof of Theorem \ref{thm mainthm} (and of its multidimensional analogue Theorem \ref{thm mult}) is based on techniques from measure theory, after the problem has been rephrased as an equivalent statement (where the STFT is no longer involved explicitly) in the Fock space. In order to present our strategy in a clear way and to better highlight the main ideas, we devote Section \ref{sec proof} to a detailed proof of our main results in dimension one, while the results in arbitrary dimension are stated and proved in Section \ref{sec mult}, focusing on all those things that need to be changed and adjusted. In Section \ref{sec genaralizations} we discuss some extensions of the above results in different directions, such as a local version of Lieb's uncertainty inequality for the STFT in $L^p$ when $p\in [2,\infty)$ (Theorem \ref{thm locallieb}), and $L^p$-concentration estimates for the STFT when $p\in [1,\infty)$ (Theorem \ref{thm lpconc}, which proves \cite[Conjecture 1]{abreu2018}), identifying in all cases the extremals $f$ and $\Omega$, as above. We also study the effect of changing the window $\varphi$ by a dilation or, more generally, by a metaplectic operator. We believe that the techniques used in this paper could also shed new light on the Donoho-Stark uncertainty principle \cite{donoho1989} and the corresponding conjecture \cite[Conjecture 1]{donoho1989}, and that also the stability of \eqref{eq stima 0} (via a quantitative version when the inequality is strict) can be investigated. We will address these issues in a subsequent work, together with applications to signal recovery. \begin{remark}\label{remFK} The maximization of $\PhiOm$ among all sets $\Omega$ of prescribed measure can be regarded as a \emph{shape optimization} problem (see \cite{bucur}) and, in this respect, Theorem \ref{thm mainthm} shares many analogies with the celebrated Faber-Krahn inequality (beyond the fact that both problems have the ball as a solution). The latter states that, among all (quasi) open sets $\Omega$ of given measure, the ball minimizes the first Dirichlet eigenvalue \[ \lambda_\Omega:=\min_{u\in H^1_0(\Omega)\setminus\{0\}} \frac{\int_\Omega |\nabla u(z)|^2\,dz}{\int_\Omega u(z)^2\,dz}. 
\] On the other hand, if $T_\Omega:H^1_0(\Omega)\to H^1_0(\Omega)$ is the linear operator that associates with every (real-valued) $u\in H^1_0(\Omega)$ the weak solution $T_\Omega u\in H^1_0(\Omega)$ of the problem $-\Delta (T_\Omega u)=u$ in $\Omega$, integrating by parts we have \[ \int_\Omega u^2 \,dz= -\int_\Omega u \Delta(T_\Omega u)\,dz=\int_\Omega \nabla u\cdot \nabla (T_\Omega u)\,dz=\langle T_\Omega u,u\rangle_{H^1_0}, \] so that Faber-Krahn can be rephrased by claiming that \[ \lambda_\Omega^{-1}:=\max_{u\in H^1_0(\Omega)\setminus\{0\}} \frac{\int_\Omega u(z)^2\,dz}{\int_\Omega |\nabla u(z)|^2\,dz} =\max_{u\in H^1_0(\Omega)\setminus\{0\}} \frac{\langle T_\Omega u,u\rangle_{H^1_0}}{\Vert u\Vert^2_{H^1_0}} \] is maximized (among all open sets of given measure) by the ball. Hence the statement of Theorem \ref{thm mainthm} can be regarded as a Faber-Krahn inequality for the operator $\cV^\ast \mathbbm{1}_\Omega \cV$. \end{remark} \section{Rephrasing the problem in the Fock space}\label{sec sec2} It turns out that the optimization problems discussed in the introduction can be conveniently rephrased in terms of functions in the Fock space on $\bC$. We address the reader to \cite[Section 3.4]{grochenig-book} and \cite{zhu} for more details on the relevant results that we are going to review, in a self-contained form, in this section. The Bargmann transform of a function $f\in L^2(\bR)$ is defined as \[ \cB f(z):= 2^{1/4} \int_\bR f(y) e^{2\pi yz-\pi y^2-\frac{\pi}{2}z^2}\, dy,\qquad z\in\bC. \] It turns out that $\cB f(z)$ is an entire holomorphic function and $\cB$ is a unitary operator from $L^2(\bR)$ to the Fock space $\cF^2(\bC)$ of all holomorphic functions $F:\bC\to\bC$ such that \begin{equation}\label{defHL} \|F\|_{\cF^2}:=\Big(\int_\bC |F(z)|^2 e^{-\pi|z|^2}dz\Big)^{1/2}<\infty. \end{equation} In fact, $\cB$ maps the orthonormal basis of Hermite functions in $\bR$ into the orthonormal basis of $\cF^2(\bC)$ given by the monomials \begin{equation}\label{eq ek} e_k(z):=\Big(\frac{\pi^k}{k!}\Big)^{1/2} z^k,\qquad k=0,1,2,\ldots; \quad z\in\bC. \end{equation} In particular, for the first Hermite function $\varphi(x)=2^{1/4}e^{-\pi x^2}$, that is, the window in \eqref{defvarphi}, we have $\cB \varphi(z)=e_0(z)=1$. The connection with the STFT is based on the following crucial formula (see e.g. \cite[Formula (3.30)]{grochenig-book}): \begin{equation}\label{eq STFTbar} \cV f(x,-\omega)=e^{\pi i x\omega} \cB f(z) e^{-\pi|z|^2/2},\qquad z=x+i\omega, \end{equation} which allows one to rephrase the functionals in \eqref{defphiomegaf} as \[ \PhiOmega{f}=\frac{\int_\Omega |\cV f(x,\omega)|^2\, dxd\omega}{\|f\|^2_{L^2}}= \frac{\int_{\Omega'}|\cB f(z)|^2e^{-\pi|z|^2}\, dz}{\|\cB f\|^2_{\cF^2}} \] where $\Omega'=\{(x,\omega):\ (x,-\omega)\in\Omega\}$. Since $\cB:L^2(\bR)\to\cF^2(\bC)$ is a unitary operator, we can safely transfer the optimization problem in Theorem \ref{thm mainthm} directly to $\cF^2(\bC)$, observing that \begin{equation}\label{eq max comp} \Phi_\Omega= \max_{F\in\cF^2(\bC)\setminus\{0\}} \frac{\int_{\Omega}|F(z)|^2e^{-\pi|z|^2}\, dz}{\|F\|^2_{\cF^2}}. \end{equation} We will adopt this point of view in Theorem \ref{thm36} below. \par In the meantime, two remarks are in order. First, we claim that the maximum in \eqref{eq max comp} is invariant under translations of the set $\Omega$. To see this, consider, for any $z_0\in\bC$, the operator $U_{z_0}$ defined as \begin{equation}\label{eq Uz_0} U_{z_0} F(z)=e^{-\pi|z_0|^2 /2} e^{\pi z\overline{z_0}} F(z-z_0).
\end{equation} The map $z\mapsto U_z$ turns out to be a projective unitary representation of $\bC$ on $\cF^2(\bC)$, satisfying \begin{equation}\label{eq transl} |F(z-z_0)|^2 e^{-\pi|z-z_0|^2}=|U_{z_0} F(z)|^2 e^{-\pi|z|^2}, \end{equation} which proves our claim. Invariance under rotations in the plane is also immediate. Secondly, we observe that the Bargmann transform intertwines the action of the representation $U_z$ with the so-called ``time-frequency shifts'': \[ \cB M_{-\omega} T_{x} f= e^{-\pi i x\omega} U_z \cB f, \qquad z=x+i\omega \] for every $f\in L^2(\bR)$, where $T_{x}f(y):=f(y-x)$ and $M_{\omega}f(y):=e^{2\pi iy\omega}f(y)$ are the translation and modulation operators. This allows us to easily write down the Bargmann transform of the maximizers appearing in Theorem \ref{thm mainthm}, namely $c U_{z_0} e_0$, $c\in\bC\setminus\{0\}$, $z_0\in\bC$. For future reference, we explicitly set \begin{equation}\label{eq Fz0} F_{z_0}(z):=U_{z_0} e_0(z)=e^{-\frac{\pi}{2}|z_0|^2} e^{\pi z\overline{z_0}}, \quad z,z_0\in\bC. \end{equation} The following result shows the distinguished role played by the functions $F_{z_0}$ in connection with extremal problems. A proof can be found in \cite[Theorem 2.7]{zhu}. For the sake of completeness we present a short and elementary proof which generalises to higher dimensions. \begin{proposition}\label{pro1} Let $F\in\cF^2(\bC)$. Then \begin{equation}\label{eq bound} |F(z)|^2 e^{-\pi|z|^2}\leq \|F\|^2_{\cF^2}\qquad \forall z\in\bC, \end{equation} and $|F(z)|^2 e^{-\pi|z|^2}$ vanishes at infinity. Moreover, equality in \eqref{eq bound} occurs at some point $z_0\in\bC$ if and only if $F=cF_{z_0}$ for some $c\in \bC$. \end{proposition} \begin{proof} By homogeneity we can suppose $\|F\|_{\cF^2}=1$, hence $F=\sum_{k\geq0} c_k e_k$ (cf.\ \eqref{eq ek}), with $\sum_{k\geq 0} |c_k|^2=1$. By the Cauchy-Schwarz inequality we obtain \[ |F(z)|^2\leq \sum_{k\geq 0} |e_k(z)|^2 =\sum_{k\geq0} \frac{\pi^k}{k!}|z|^{2k}=e^{\pi|z|^2} \quad \forall z\in\bC. \] Equality in this estimate occurs at some point $z_0\in\bC$ if and only if $c_k=ce^{-\pi |z_0|^2/2}\overline{e_k(z_0)}$, for some $c\in\bC$, $|c|=1$, which gives \[ F(z)= ce^{-\pi|z_0|^2/2}\sum_{k\geq0} \frac{\pi^k}{k!}(z \overline{z_0})^k=cF_{z_0}(z). \] Finally, the fact that $|F(z)|^2 e^{-\pi|z|^2}$ vanishes at infinity is clearly true if $F(z)=z^k$, $k\geq0$, and therefore holds for every $F\in \cF^2(\bC)$ by density, because of \eqref{eq bound}. \end{proof} \section{Proof of the main results in dimension $1$}\label{sec proof} In this section we prove Theorems \ref{thm mainthm} and \ref{cor maincor}. In fact, by the discussion in Section \ref{sec sec2}, cf.\ \eqref{eq max comp}, these will follow (without further reference) from the following result, which will be proved at the end of this section, after a few preliminary results have been established. \begin{theorem}\label{thm36} For every $F\in \cF^2(\bC)\setminus\{0\}$ and every measurable set $\Omega\subset\bR^2$ of finite measure, we have \begin{equation} \label{stimaquoz} \frac{\int_\Omega|F(z)|^2 e^{-\pi|z|^2}\, dz}{\|F\|_{\cF^2}^2} \leq 1-e^{-|\Omega|}. \end{equation} Moreover, recalling \eqref{eq Fz0}, equality occurs (for some $F$ and for some $\Omega$ such that $0<|\Omega|<\infty$) if and only if $F=c F_{z_0}$ (for some $z_0\in\bC$ and some nonzero $c\in\bC$) and $\Omega$ is equivalent, up to a set of measure zero, to a ball centered at $z_0$.
\end{theorem} Throughout the rest of this section, in view of proving \eqref{stimaquoz}, given an arbitrary function $F\in \cF^2(\bC)\setminus\{0\}$ we shall investigate several properties of the function \begin{equation} \label{defu} u(z):=|F(z)|^2 e^{-\pi|z|^2}, \end{equation} in connection with its super-level sets \begin{equation} \label{defAt} A_t:=\{u>t\}=\left\{z\in\bR^2\,:\,\, u(z)>t\right\}, \end{equation} its \emph{distribution function} \begin{equation} \label{defmu} \mu(t):= |A_t|,\qquad 0\leq t\leq \max_{\bC} u \end{equation} (note that $u$ is bounded due to \eqref{eq bound}), and the \emph{decreasing rearrangement} of $u$, i.e. the function \begin{equation} \label{defclassu*} u^*(s):=\sup\{t\geq 0\,:\,\, \mu(t)>s\}\qquad \text{for $s\geq 0$} \end{equation} (for more details on rearrangements, we refer to \cite{baernstein}). Since $F(z)$ in \eqref{defu} is entire holomorphic, $u$ (which, letting $z=x+i\omega$, can be regarded as a real-valued function $u(x,\omega)$ on $\bR^2$) has several nice properties which will simplify our analysis. In particular, $u$ is \emph{real analytic} and hence, since $u$ is not a constant, \emph{every} level set of $u$ has zero measure (see e.g. \cite{krantz}), i.e. \begin{equation} \label{lszm} \left| \{u=t\}\right| =0\quad\forall t\geq 0 \end{equation} and, similarly, the set of all critical points of $u$ has zero measure, i.e. \begin{equation} \label{cszm} \left| \{|\nabla u|=0\}\right| =0. \end{equation} Moreover, since by Proposition \ref{pro1} $u(z)\to 0$ as $|z|\to\infty$, by Sard's Lemma we see that for a.e. $t\in (0,\max u)$ the super-level set $\{u>t\}$ is a bounded open set in $\bR^2$ with smooth boundary \begin{equation} \label{boundaryAt} \partial\{u>t\}=\{u=t\}\quad\text{for a.e. $t\in (0,\max u).$} \end{equation} Since $u(z)>0$ a.e. (in fact everywhere, except at most at isolated points), \[ \mu(0)=\lim_{t\to 0^+}\mu(t)=+\infty, \] while the finiteness of $\mu(t)$ when $t\in (0,\max u]$ is entailed by the fact that $u\in L^1(\bR^2)$, according to \eqref{defu} and \eqref{defHL} (in particular $\mu(\max u)=0$). Moreover, by \eqref{lszm} $\mu(t)$ is \emph{continuous} (and not just right-continuous) at \emph{every point} $t\in (0,\max u]$. Since $\mu$ is also strictly decreasing, we see that $u^*$, according to \eqref{defclassu*}, is just the elementarily defined \emph{inverse function} of $\mu$ (restricted to $(0,\max u]$), i.e. \begin{equation} \label{defu*} u^*(s)=\mu^{-1}(s) \qquad\text{for $s\geq 0$,} \end{equation} which maps $[0,+\infty)$ decreasingly and continuously onto $(0,\max u]$. In what follows we will rely heavily on the following result. \begin{lemma}\label{lemmau*} The function $\mu$ is absolutely continuous on the compact subintervals of $(0,\max u]$, and \begin{equation} \label{dermu} -\mu'(t)= \int_{\{u=t\}} |\nabla u|^{-1} \dH \qquad\text{for a.e. $t\in (0,\max u)$.} \end{equation} Similarly, the function $u^*$ is absolutely continuous on the compact subintervals of $[0,+\infty)$, and \begin{equation} \label{deru*} -(u^*)'(s)= \left(\int_{\{u=u^*(s)\}} |\nabla u|^{-1} \dH\right)^{-1} \qquad\text{for a.e. $s\geq 0$.} \end{equation} \end{lemma} These properties of $\mu$ and $u^*$ are essentially well known to the specialists in rearrangement theory, and follow e.g. from the general results of \cite{almgren-lieb,BZ}, which are valid within the framework of $W^{1,p}$ functions (see also \cite{cianchi} for the framework of $BV$ functions, in particular Lemmas 3.1 and 3.2).
We point out, however, that of these properties only the absolute continuity of $u^*$ is valid in general, while the others strongly depend on \eqref{cszm} which, in the terminology of \cite{almgren-lieb}, implies that $u$ is \emph{coarea regular} in a very strong sense, since it rules out the possibility of a singular part in the (negative) Radon measure $\mu'(t)$ and, at the same time, it guarantees that the density of the absolutely continuous part is given (only) by the right-hand side of \eqref{dermu}. As clearly explained in the excellent Introduction to \cite{almgren-lieb}, there are several subtleties related to the structure of the distributional derivative of $\mu(t)$ (which ultimately make the validity of \eqref{deru*} highly nontrivial), and in fact the seminal paper \cite{BZ} was motivated by a subtle error in a previous work, the fixing of which has, since \cite{BZ}, stimulated a lot of original and deep research (see e.g. \cite{cianchi,fuscoAnnals} and references therein). However, since unfortunately we were not able to find a ready-to-use reference for \eqref{deru*} (and, moreover, our $u$ is very smooth but strictly speaking it does not belong to $W^{1,1}(\bR^2)$, which would require fixing a number of details when referring to the general results from \cite{almgren-lieb,BZ,cianchi}), here we present an elementary and self-contained proof of this lemma, specializing to our case a general argument from \cite{BZ} based on the coarea formula. \begin{proof}[Proof of Lemma \ref{lemmau*}] The fact that $u$ is locally Lipschitz guarantees the validity of the coarea formula (see e.g. \cite{BZ,evans}), that is, for every Borel function $h:\bR^2\to [0,+\infty]$ we have \[ \int_{\bR^2} h(z) |\nabla u(z)|\,dz = \int_0^{\max u} \left( \int_{\{u=\tau\}} h \dH\right)\,d\tau, \] where ${\mathcal H}^1$ denotes the one-dimensional Hausdorff measure (and with the usual convention that $0\cdot \infty=0$ in the first integral). In particular, when $h(z)=\chi_{A_t}(z) |\nabla u(z)|^{-1}$ (where $|\nabla u(z)|^{-1}$ is meant as $+\infty$ if $z$ is a critical point of $u$), by virtue of \eqref{cszm} the function $h(z)|\nabla u(z)|$ coincides with $\chi_{A_t}(z)$ a.e., and recalling \eqref{defmu} one obtains \begin{equation} \label{rappmu} \mu(t)=\int_t^{\max u} \left( \int_{\{u=\tau\}} |\nabla u|^{-1} \dH \right)\,d\tau\qquad\forall t\in [0,\max u]; \end{equation} therefore we see that $\mu(t)$ is \emph{absolutely continuous} on the compact subintervals of $(0,\max u]$, and \eqref{dermu} follows. Now let $D\subseteq (0,\max u)$ denote the set where $\mu'(t)$ exists and $-\mu'(t)$ coincides with the (strictly positive) integral in \eqref{dermu}, and let $D_0=(0,\max u]\setminus D$. By \eqref{dermu} and the absolute continuity of $\mu$, and since the integral in \eqref{dermu} is strictly positive for \emph{every} $t\in (0,\max u)$ (note that ${\mathcal H}^1(\{u=t\})>0$ for every $t\in (0,\max u)$, otherwise we would have that $|\{u>t\}|=0$ by the isoperimetric inequality), we infer that $|D_0|=0$, so that letting $\widehat D=\mu(D)$ and $\widehat D_0=\mu(D_0)$, one has $|\widehat D_0|=0$ by the absolute continuity of $\mu$, and $\widehat D=[0,+\infty)\setminus \widehat D_0$ since $\mu$ is invertible. On the other hand, by \eqref{defu*} and elementary calculus, we see that $(u^*)'(s)$ exists for \emph{every} $s\in \widehat{D}$ and \[ -(u^*)'(s)=\frac{-1}{\mu'(\mu^{-1}(s))} = \left(\int_{\{u=u^*(s)\}} |\nabla u|^{-1} \dH\right)^{-1} \qquad\forall s\in\widehat D, \] which implies \eqref{deru*} since $|\widehat D_0|=0$.
Finally, since $u^*$ is differentiable \emph{everywhere} on $\widehat D$, it is well known that $u^*$ maps every negligible set $N\subset \widehat D$ into a negligible set. Since $\widehat D\cup \widehat D_0=[0,+\infty)$, and moreover $u^*(\widehat D_0)=D_0$ where $|D_0|=0$, we see that $u^*$ maps negligible sets into negligible sets, hence it is absolutely continuous on every compact interval $[0,a]$. \end{proof} The following estimate for the integral in \eqref{deru*}, which can be of some interest in itself, will be the main ingredient in the proof of Theorem \ref{thm36}. \begin{proposition}\label{prop34} We have \begin{equation} \label{eq4} \left(\int_{\{u=u^*(s)\}} |\nabla u|^{-1} \dH\right)^{-1} \leq u^*(s)\qquad\text{for a.e. $s>0$,} \end{equation} and hence \begin{equation} \label{stimaderu*} (u^*)'(s)+ u^*(s)\geq 0\quad\text{for a.e. $s\geq 0$.} \end{equation} \end{proposition} \begin{proof} Letting for simplicity $t=u^*(s)$ and recalling that, for a.e. $t\in (0,\max u)$ (or, equivalently, for a.e. $s>0$, since $u^*$ and its inverse $\mu$ are absolutely continuous on compact sets) the super-level set $A_t$ in \eqref{defAt} has a smooth boundary as in \eqref{boundaryAt}, we can combine the Cauchy-Schwarz inequality \begin{equation} \label{CS} {\mathcal H}^1(\{u=t\})^2 \leq \left(\int_{\{u=t\}} |\nabla u|^{-1} \dH\right) \int_{\{u=t\}} |\nabla u| \dH \end{equation} with the isoperimetric inequality in the plane \begin{equation} \label{isop} 4\pi \,|\{ u > t \}|\leq {\mathcal H}^1(\{u=t\})^2 \end{equation} to obtain, after division by $t$, \begin{equation} \label{eq3} t^{-1} \left(\int_{\{u=t\}} |\nabla u|^{-1} \dH\right)^{-1} \leq \frac{\int_{\{u=t\}} \frac{|\nabla u|}t \dH }{4\pi \,|\{ u > t \}|}. \end{equation} The reason for dividing by $t$ is that, in this form, the right-hand side turns out to be (quite surprisingly, at least to us) independent of $t$. Indeed, since along $\partial A_t=\{u=t\}$ we have $|\nabla u|=-\nabla u\cdot \nu$ where $\nu$ is the outer normal to $\partial A_t$, along $\{u=t\}$ we can interpret the quotient $|\nabla u|/t$ as $-(\nabla\log u)\cdot\nu$, and hence \begin{equation*} \int_{\{u=t\}} \frac{|\nabla u|}t \dH =-\int_{\partial A_t} (\nabla\log u)\cdot\nu \dH =-\int_{A_t} \Delta \log u(z)\,dz. \end{equation*} But by \eqref{defu}, since $\log |F(z)|$ is a harmonic function, we obtain \begin{equation} \label{laplog} \Delta(\log u(z))= \Delta(\log |F(z)|^2 +\log e^{-\pi |z|^2}) =\Delta (-\pi |z|^2)=-4\pi, \end{equation} so that the last integral equals $4\pi |A_t|$. Plugging this into \eqref{eq3}, one obtains that the quotient on the right equals $1$, and \eqref{eq4} follows. Finally, \eqref{stimaderu*} follows on combining \eqref{deru*} with \eqref{eq4}. \end{proof} The following lemma establishes a link between the integrals of $u$ on its super-level sets (which will play a major role in our main argument) and the function $u^*$. \begin{lemma}\label{lemma3.3} The function \begin{equation} \label{defI} I(s)=\int_{\{u > u^*(s)\}} u(z)dz,\qquad s\in [0,+\infty), \end{equation} i.e. the integral of $u$ on its (unique) super-level set of measure $s$, is of class $C^1$ on $[0,+\infty)$, and \begin{equation} \label{derI} I'(s)=u^*(s)\quad\forall s\geq 0. \end{equation} Moreover, $I'$ is (locally) absolutely continuous, and \begin{equation} \label{derI2} I''(s)+I'(s)\geq 0\quad \text{for a.e. 
$s\geq 0$.} \end{equation} \end{lemma} \begin{proof} We have for every $h>0$ and every $s\geq 0$ \[ I(s+h)-I(s)= \int_{ \{u^*(s+h)< u\leq u^*(s)\}} u(z)dz \] and, since by \eqref{defu*} and \eqref{defmu} $|A_{u^*(\sigma)}|=\sigma$, \[ \left| \{u^*(s+h)< u\leq u^*(s)\}\right| = |A_{u^*(s+h)}|-|A_{u^*(s)}|=(s+h)-s=h, \] we obtain \[ u^*(s+h) \leq \frac{I(s+h)-I(s)}{h}\leq u^*(s). \] Moreover, it is easy to see that the same inequality is true also when $h<0$ (provided $s+h>0$), now using the reverse set inclusion $A_{u^*(s+h)}\subset A_{u^*(s)}$ according to the fact that $u^*$ is decreasing. Since $u^*$ is continuous, \eqref{derI} follows letting $h\to 0$ when $s>0$, and letting $h\to 0^+$ when $s=0$. Finally, by Lemma \ref{lemmau*}, $I'=u^*$ is absolutely continuous on $[0,a]$ for every $a\geq 0$, $I''=(u^*)'$, and \eqref{derI2} follows from \eqref{stimaderu*}. \end{proof} We are now in a position to prove Theorem \ref{thm36}. \begin{proof}[Proof of Theorem \ref{thm36}] By homogeneity we can assume $\|F\|_{\cF^2}=1$ so that, defining $u$ as in \eqref{defu}, \eqref{stimaquoz} is equivalent to \begin{equation} \label{eq1} \int_\Omega u(z)\,dz \leq 1-e^{-s} \end{equation} for every $s\geq 0$ and every $\Omega\subset\bR^2$ such that $|\Omega|=s$. It is clear that, for any fixed measure $s\geq 0$, the integral on the left is maximized when $\Omega$ is the (unique by \eqref{lszm}) super-level set $A_t=\{u>t\}$ such that $|A_t|=s$ (i.e. $\mu(t)=s$), and by \eqref{defu*} we see that the proper cut level is given by $t=u^*(s)$. In other words, if $|\Omega|=s$ then \begin{equation} \label{eq2} \int_\Omega u(z)\,dz\leq \int_{A_{u^*(s)}} u(z)\,dz, \end{equation} with strict inequality unless $\Omega$ coincides --up to a negligible set-- with $A_{u^*(s)}$ (to see this, it suffices to let $E:=\Omega\cap A_{u^*(s)}$ and observe that, if $|\Omega\setminus E|> 0$, then the integral of $u$ on $\Omega\setminus E$, where $u\leq u^*(s)$, is strictly smaller than the integral of $u$ on $A_{u^*(s)}\setminus E$, where $u> u^*(s)$). Thus, to prove \eqref{stimaquoz} it suffices to prove \eqref{eq1} when $\Omega=A_{u^*(s)}$, that is, recalling \eqref{defI}, prove that \begin{equation} \label{ineqI} I(s)\leq 1-e^{-s}\qquad\forall s\geq 0 \end{equation} or, equivalently, letting $s=-\log \sigma$, that \begin{equation} \label{ineqI2} G(\sigma):= I(-\log \sigma)\leq 1-\sigma \qquad\forall \sigma\in (0,1]. \end{equation} Note that \begin{equation} \label{v0} G(1)=I(0)=\int_{\{u>u^*(0)\}} u(z)\,dz = \int_{\{u>\max u\}} u(z)\,dz=0, \end{equation} while by monotone convergence, since $\lim_{s\to+\infty} u^*(s)=0$, \begin{equation} \label{vinf} \lim_{\sigma\to 0^+} G(\sigma)= \lim_{s\to+\infty} I(s)= \int_{\{u>0\}}\!\!\! u(z)\,dz = \int_{\bR^2} |F(z)|^2 e^{-\pi |z|^2}\,dz=1, \end{equation} because we assumed $F$ is normalized. Thus, $G$ extends to a continuous function on $[0,1]$ that coincides with $1-\sigma$ at the endpoints, and \eqref{ineqI2} will follow by proving that $G$ is convex. Indeed, by \eqref{derI2}, the function $e^s I'(s)$ is non decreasing, and since $G'(e^{-s})=-e^s I'(s)$, this means that $G'(\sigma)$ is non decreasing as well, i.e. $G$ is convex as claimed. Summing up, via \eqref{eq2} and \eqref{ineqI}, we have proved that for every $s\geq 0$ \begin{equation} \label{sumup} \begin{split} &\int_\Omega|F(z)|^2 e^{-\pi|z|^2}\, dz =\int_\Omega u(z)\,dz \\ \leq &\int_{A_{u^*(s)}} u(z)\,dz=I(s)\leq 1-e^{-s} \end{split} \end{equation} for every $F$ such that $\|F\|_{\cF^2}=1$. 
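Before turning to the equality case, it may be instructive to note that all of the above estimates are saturated in the model case $F=e_0\equiv 1$ (and hence, by \eqref{eq transl}, for every $F_{z_0}$): there $u(z)=e^{-\pi|z|^2}$, so that
\[
\mu(t)=\log\tfrac1t \ \ (0<t\leq 1),\qquad u^*(s)=e^{-s},\qquad I(s)=\int_{\{\pi|z|^2<s\}}e^{-\pi|z|^2}\,dz=1-e^{-s},
\]
and in particular $(u^*)'(s)+u^*(s)=0$, i.e. \eqref{stimaderu*} and \eqref{ineqI} hold with equality for every $s\geq 0$.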
Now assume that equality occurs in \eqref{stimaquoz}, for some $F$ (we may still assume $\|F\|_{\cF^2}=1$) and for some set $\Omega$ of measure $s_0>0$: then, when $s=s_0$, equality occurs everywhere in \eqref{sumup}, i.e. in \eqref{eq2}, whence $\Omega$ coincides with $A_{u^*(s_0)}$ up to a set of measure zero, and in \eqref{ineqI}, whence $I(s_0)=1-e^{-s_0}$. But then $G(\sigma_0)=1-\sigma_0$ in \eqref{ineqI2}, where $\sigma_0=e^{-s_0}\in (0,1)$: since $G$ is convex on $[0,1]$, and coincides with $1-\sigma$ at the endpoints, we infer that $G(\sigma)=1-\sigma$ for every $\sigma\in [0,1]$, or, equivalently, that $I(s)=1-e^{-s}$ for \emph{every} $s\geq 0$. In particular, $I'(0)=1$; on the other hand, choosing $s=0$ in \eqref{derI} gives \[ I'(0)=u^*(0)=\max u, \] so that $\max u=1$. But then by \eqref{eq bound} \begin{equation} \label{catena} 1=\max u =\max |F(z)|^2 e^{-\pi |z|^2}\leq \|F\|^2_{\cF^2}=1 \end{equation} and, since equality is attained, by Proposition \ref{pro1} we infer that $F=c F_{z_0}$ for some $z_0,c\in\bC$. We have already proved that $\Omega=A_{u^*(s_0)}$ (up to a negligible set) and, since by \eqref{eq Fz0} \begin{equation} \label{uradial} u(z)=|c F_{z_0}(z)|^2 e^{-\pi |z|^2} =|c|^2 e^{-\pi |z_0|^2} e^{2\pi\realp (z \overline{z_0})}e^{-\pi |z|^2}=|c|^2 e^{-\pi |z-z_0|^2} \end{equation} has radial symmetry about $z_0$ and is radially decreasing, $\Omega$ is (equivalent to) a ball centered at $z_0$. This proves the ``only if'' part of the final claim. The ``if'' part follows by a direct computation. Indeed, assume that $F=c F_{z_0}$ and $\Omega$ is equivalent to a ball of radius $r>0$ centered at $z_0$. Then, using \eqref{uradial} and polar coordinates, we can compute \[ \int_\Omega u(z)\,dz= |c|^2 \int_{\{|z|<r\}} e^{-\pi |z|^2}\,dz = 2\pi |c|^2\int_0^r \rho e^{-\pi \rho^2}\,d\rho=|c|^2(1-e^{-\pi r^2}), \] and equality occurs in \eqref{stimaquoz} because $\|c F_{z_0}\|_{\cF^2}^2=|c|^2$. \end{proof} \begin{remark} The ``only if'' part in the final claim of Theorem \ref{thm36}, once one has established that $I(s)=1-e^{-s}$ for every $s\geq 0$, instead of using \eqref{catena}, can also be proved observing that there must be equality, for a.e. $t\in (0,\max u)$, both in \eqref{CS} and in \eqref{isop} (otherwise there would be a strict inequality in \eqref{stimaderu*}, hence also in \eqref{ineqI}, on a set of positive measure). But then, for at least one value (in fact, for infinitely many values) of $t$ we would have that $A_t$ is a ball $B(z_0,r)$ (by the equality in the isoperimetric estimate \eqref{isop}) and that $|\nabla u|$ is constant along $\partial A_t=\{u=t\}$ (by the equality in \eqref{CS}). By applying the ``translation'' $U_{z_0}$ (cf.\ \eqref{eq Uz_0} and \eqref{eq transl}) we can suppose that the super-level set $A_t=B(z_0,r)$ is centred at the origin, i.e. that $z_0=0$, and in that case we have to prove that $F$ is constant (so that, translating back to $z_0$, one obtains that the original $F$ had the form $c F_{z_0}$). Since now both $u$ and $e^{-\pi|z|^2}$ are constant along $\partial A_t=\partial B(0,r)$, also $|F|$ is constant there (and does not vanish inside $\overline{B(0,r)}$, since $u\geq t>0$ there). Hence $\log|F|$ is constant along $\partial B(0,r)$, and is harmonic inside $B(0,r)$ since $F$ is holomorphic: therefore $\log |F|$ is constant in $B(0,r)$, which implies that $F$ is constant over $\bC$. Note that the constancy of $|\nabla u|$ along $\partial A_t$ has not been used.
However, also this property alone (even ignoring that $A_t$ is a ball) is enough to conclude. Letting $w=\log u$, one can use that both $w$ and $|\nabla w|$ are constant along $\partial A_t$, and moreover $\Delta w=-4\pi$ as shown in \eqref{laplog}: hence every connected component of $A_t$ must be a ball, by a celebrated result of Serrin \cite{serrin}. Then the previous argument can be applied to just one connected component of $A_t$, which is a ball, to conclude that $F$ is constant. \end{remark} \section{The multidimensional case}\label{sec mult} In this Section we provide the generalisation of Theorems \ref{thm mainthm} and \ref{cor maincor} (in fact, of Theorem \ref{thm36}) in arbitrary dimension. We recall that the STFT of a function $f\in L^2(\bR^d)$, with a given window $g\in L^2(\bR^d)\setminus\{0\}$, is defined as \begin{equation}\label{eq STFT wind} \cV_g f(x,\omega):=\int_{\bR^d} e^{-2\pi i y\cdot\omega} f(y)\overline{g(y-x)}\, dy,\qquad x,\omega\in\bR^d. \end{equation} Consider now the Gaussian function \begin{equation}\label{eq gaussian dimd} \varphi(x)=2^{-d/4}e^{-\pi|x|^2}\qquad x\in\bR^d, \end{equation} and the corresponding STFT in \eqref{eq STFT wind} with window $g=\varphi$; let us write shortly $\cV=\cV_\varphi$. Let $\boldsymbol{\omega}_{2d}$ be the measure of the unit ball in $\bR^{2d}$. Recall also the definition of the (lower) incomplete $\gamma$ function as \begin{equation} \label{defgamma} \gamma(k,s):=\int_0^s \tau^{k-1}e^{-\tau}\, d\tau \end{equation} where $k\geq 1$ is an integer and $s\geq 0$, so that \begin{equation} \label{propgamma} \frac{\gamma(k,s)}{(k-1)!}= 1-e^{-s}\sum_{j=0}^{k-1} \frac{s^j}{j!}. \end{equation} \begin{theorem}[Faber--Krahn inequality for the STFT in dimension $d$]\label{thm mult} For every measurable subset $\Omega\subset\bR^{2d}$ of finite measure and for every $f\in L^2(\bR^d)\setminus\{0\}$ there holds \begin{equation}\label{eq thm mult} \frac{\int_\Omega |\cV f(x,\omega)|^2\, dxd\omega}{\|f\|^2_{L^2}}\leq \frac{\gamma(d,c_\Omega)}{(d-1)!}, \end{equation} where $c_\Omega:=\pi(|\Omega|/\boldsymbol{\omega}_{2d})^{1/d}$ is the symplectic capacity of the ball in $\bR^{2d}$ having the same volume as $\Omega$. Moreover, equality occurs (for some $f$ and for some $\Omega$ such that $0<|\Omega|<\infty$) if and only if $\Omega$ is equivalent, up to a set of measure zero, to a ball centered at some $(x_0,\omega_0)\in\bR^{2d}$, and \begin{equation}\label{optf-bis} f(x)=ce^{2\pi ix\cdot\omega_0}\varphi(x-x_0),\qquad c\in\bC\setminus\{0\}, \end{equation} where $\varphi$ is the Gaussian in \eqref{eq gaussian dimd}. \end{theorem} We recall that the symplectic capacity of a ball of radius $r$ in phase space is $\pi r^2$ in every dimension and represents the natural measure of the size of the ball from the point of view of the symplectic geometry \cite{degosson,gromov,hofer}. \begin{proof}[Proof of Theorem \ref{thm mult}] We give only a sketch of the proof, because it follows the same pattern as in dimension $1$. \par The definition of the Fock space $\cF^2(\bC)$ extends essentially verbatim to $\bC^d$, with the monomials $(\pi^{|\alpha|}/\alpha!)^{1/2}z^\alpha$, $z\in\bC^d$, $\alpha\in\bN^d$ (multi-index notation) as orthonormal basis. The same holds for the definition of the functions $F_{z_0}$ in \eqref{eq Fz0}, now with $z,z_0\in\bC^d$, and Proposition \ref{pro1} extends in the obvious way too. 
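For the reader's convenience, the key computation behind the extension of Proposition \ref{pro1} is the $d$-dimensional analogue of the Cauchy-Schwarz estimate in its proof, namely
\[
\sum_{\alpha\in\bN^d}\frac{\pi^{|\alpha|}}{\alpha!}\,|z^\alpha|^2=\prod_{j=1}^d\,\sum_{k\geq 0}\frac{\pi^k}{k!}\,|z_j|^{2k}=e^{\pi|z|^2},\qquad z\in\bC^d,
\]
with the equality case handled exactly as before.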
Again one can rewrite the optimization problem in the Fock space $\cF^2(\bC^d)$, the formula \eqref{eq STFTbar} continuing to hold, with $x,\omega\in\bR^d$. Hence we have to prove that \begin{equation} \label{stimaquoz bis} \frac{\int_\Omega|F(z)|^2 e^{-\pi|z|^2}\, dz}{\|F\|_{\cF^2}^2} \leq \frac{\gamma(d,c_\Omega)}{(d-1)!} \end{equation} for $F\in \cF^2(\bC^d)\setminus\{0\}$ and $\Omega\subset\bC^{d}$ of finite measure, and that equality occurs if and only if $F=c F_{z_0}$ and $\Omega$ is equivalent, up to a set of measure zero, to a ball centered at $z_0$. To this end, for $F\in \cF^2(\bC^d)\setminus\{0\}$, $\|F\|_{\cF^2}=1$, we set $u(z)=|F(z)|^2 e^{-\pi|z|^2}$, $z\in\bC^d$, exactly as in \eqref{defu} when $d=1$, and define $A_t$, $\mu(t)$ and $u^*(s)$ as in Section \ref{sec proof}, replacing $\bR^{2}$ with $\bR^{2d}$ where necessary, now denoting by $|E|$ the $2d$-dimensional Lebesgue measure of a set $E\subset\bR^{2d}$, in place of the 2-dimensional measure. Note that, now regarding $u$ as a function of $2d$ real variables in $\bR^{2d}$, properties \eqref{lszm}, \eqref{cszm} etc. are still valid, as well as formulas \eqref{dermu}, \eqref{deru*} etc., provided one replaces every occurrence of $\cH^1$ with the $(2d-1)$-dimensional Hausdorff measure $\cH^{2d-1}$. Following the same pattern as in Proposition \ref{prop34}, now using the isoperimetric inequality in $\bR^{2d}$ (see e.g. \cite{fusco-iso} for an updated account) \[ \cH^{2d-1}(\{u=t\})^2\geq (2d)^2\boldsymbol{\omega}_{2d}^{1/d}|\{u>t\}|^{(2d-1)/d} \] and the fact that $\Delta \log u=-4\pi d$ on $\{u>0\}$, we see that now $u^\ast$ satisfies the inequality \[ \left(\int_{\{u=u^*(s)\}} |\nabla u|^{-1} \, d\cH^{2d-1}\right)^{-1} \leq \pi d^{-1}\boldsymbol{\omega}_{2d}^{-1/d} s^{-1+1/d} u^*(s)\quad\text{for a.e. $s>0$} \] in place of \eqref{eq4}, and hence \eqref{stimaderu*} is to be replaced with \[ (u^*)'(s)+ \pi d^{-1}\boldsymbol{\omega}_{2d}^{-1/d} s^{-1+1/d} u^*(s)\geq 0\quad\text{for a.e. $s> 0$.} \] Therefore, with the notation of Lemma \ref{lemma3.3}, $I'(s)$ is locally absolutely continuous on $[0,+\infty)$ and now satisfies \[ I''(s)+ \pi d^{-1}\boldsymbol{\omega}_{2d}^{-1/d} s^{-1+1/d} I'(s)\geq 0\quad\text{for a.e. $s> 0$.} \] This implies that the function $e^{\pi \boldsymbol{\omega}_{2d}^{-1/d} s^{1/d}}I'(s)$ is non decreasing on $[0,+\infty)$. Then, arguing as in the proof of Theorem \ref{thm36}, we are led to prove the inequality \[ I(s)\leq \frac{\gamma(d,\pi (s/\boldsymbol{\omega}_{2d})^{1/d})}{(d-1)!},\qquad s\geq0 \] in place of \eqref{ineqI}. This, with the substitution \[ \gamma(d,\pi (s/\boldsymbol{\omega}_{2d})^{1/d})/(d-1)!=1-\sigma,\qquad \sigma\in (0,1] \] (recall \eqref{propgamma}), turns into \[ G(\sigma):=I(s)\leq 1-\sigma\quad \forall\sigma\in(0,1]. \] Again $G$ extends to a continuous function on $[0,1]$, with $G(0)=1$, $G(1)=0$. At this point one observes that, regarding $\sigma$ as a function of $s$, \[ G'(\sigma(s))=-d! \pi^{-d}\boldsymbol{\omega}_{2d} e^{\pi (s/\boldsymbol{\omega}_{2d})^{1/d}}I'(s). \] Since the function $e^{\pi (s/\boldsymbol{\omega}_{2d})^{1/d}}I'(s)$ is non decreasing in $s$ and $\sigma(s)$ is decreasing, we see that $G'$ is non decreasing on $(0,1]$, hence $G$ is convex on $[0,1]$ and one concludes as in the proof of Theorem \ref{thm36}. Finally, the ``if'' part follows from a direct computation, similar to that at the end of the proof of Theorem \ref{thm36}, now integrating on a ball in dimension $2d$, and using \eqref{defgamma} to evaluate the resulting integral.
\end{proof} As a consequence of Theorem \ref{thm mult} we deduce a sharp form of the uncertainty principle for the STFT, which generalises Theorem \ref{cor maincor} to arbitrary dimension. To replace the function $\log(1/\epsilon)$ in \eqref{eq stima eps} (arising as the inverse function of $e^{-s}$ in the right-hand side of \eqref{eq stima 0}), we now denote by $\psi_d(\epsilon)$, $0<\epsilon\leq1$, the inverse function of \[ s\mapsto 1-\frac{\gamma(d,s)}{(d-1)!}=e^{-s}\sum_{j=0}^{d-1} \frac{s^j}{j!},\qquad s\geq 0 \] (cf. \eqref{propgamma}). \begin{corollary}\label{cor cor2} If for some $\epsilon\in (0,1)$, some $f\in L^2(\bR^d)\setminus\{0\}$, and some $\Omega\subset\bR^{2d}$ we have $\int_\Omega |\cV f(x,\omega)|^2\, dxd\omega\geq (1-\epsilon) \|f\|^2_{L^2}$, then \begin{equation}\label{uncertainty dim d} |\Omega|\geq \boldsymbol{\omega}_{2d}\pi^{-d}\psi_d(\epsilon)^d, \end{equation} with equality if and only if $\Omega$ is a ball and $f$ has the form \eqref{optf-bis}, where $(x_0,\omega_0)$ is the center of the ball. \end{corollary} So far, the state-of-the-art in this connection has been represented by the lower bound \begin{equation}\label{bound groc dim d} |\Omega|\geq \sup_{p>2}\,(1-\epsilon)^{p/(p-2)}(p/2)^{2d/(p-2)} \end{equation} (which reduces to \eqref{eq statart} when $d=1$, see \cite[Theorem 3.3.3]{grochenig-book}). See Figure \ref{figure1} in the Appendix for a graphical comparison with \eqref{uncertainty dim d} in dimension $d=2$. Figure \ref{figure2} in the Appendix illustrates Theorem \ref{thm mult} and Corollary \ref{cor cor2}. \begin{remark*} Notice that $\psi_1(\epsilon)=\log(1/\epsilon)$, and $\psi_d(\epsilon)$ is increasing with $d$. Moreover, it is easy to check that \begin{align*} \psi_d(\epsilon)&\sim (d!)^{1/d}(1-\epsilon)^{1/d},\quad \epsilon\to 1^-\\ \psi_d(\epsilon)&\sim \log(1/\epsilon),\quad \epsilon \to 0^+. \end{align*} On the contrary, the right-hand side of \eqref{bound groc dim d} is bounded by $e^d$; see Figure \ref{figure1} in the Appendix. \end{remark*} \section{Some generalizations}\label{sec genaralizations} In this Section we discuss some generalizations in several directions. \subsection{Local Lieb's uncertainty inequality for the STFT} An interesting variation on the theme is given by the optimization problem \begin{equation}\label{eq phip} \sup_{f\in {L^2(\bR)\setminus\{0\}}}\frac{\int_\Omega |\cV f(x,\omega)|^p\, dxd\omega}{\|f\|^p_{L^2}}, \end{equation} where $\Omega\subset\bR^2$ is a measurable subset of finite measure and $2\leq p<\infty$. Again, we look for the subsets $\Omega$, of prescribed measure, which maximize the above supremum. Observe, first of all, that by the Cauchy-Schwarz inequality, $\|\cV f\|_{L^\infty}\leq \|f\|_{L^2}$, so that the supremum in \eqref{eq phip} is finite and, in fact, it is attained. \begin{proposition}\label{pro41} The supremum in \eqref{eq phip} is attained. \end{proposition} \begin{proof} The desired conclusion follows easily by the direct method of the calculus of variations. We first rewrite the problem in the complex domain via \eqref{eq STFTbar}, as we did in Section \ref{sec sec2}, now ending up with the Rayleigh quotient \[ \frac{\int_\Omega |F(z)|^p e^{-p\pi|z|^2/2}\, dz}{\|F\|^p_{\cF^2}} \] with $F\in \cF^2(\bC)\setminus\{0\}$. It is easy to see that this expression attains a maximum at some $F\in\cF^2(\bC)\setminus\{0\}$. In fact, let $F_n\in \cF^2(\bC)$, $\|F_n\|_{\cF^2}=1$, be a maximizing sequence, and let $u_n(z)= |F_n(z)|^p e^{-p\pi|z|^2/2}$.
Since $u_n(z)= (|F_n(z)|^2 e^{-\pi|z|^2})^{p/2}\leq\|F_n\|^{p}_{\cF^2}=1$ by Proposition \ref{pro1}, we see that the sequence $F_n$ is equibounded on the compact subsets of $\bC$. Hence there is a subsequence, that we continue to call $F_n$, uniformly converging on the compact subsets to a holomorphic function $F$. By the Fatou theorem, $F\in\cF^2(\bC)$ and $\|F\|_{\cF^2}\leq 1$. Now, since $\Omega$ has finite measure, for every $\epsilon>0$ there exists a compact subset $K\subset\bC$ such that $|\Omega\setminus K|<\epsilon$, so that $\int_{\Omega\setminus K} u_n<\epsilon$ and $\int_{\Omega\setminus K} |F(z)|^p e^{-p\pi|z|^2/2}\, dz<\epsilon$. Together with the already mentioned convergence on the compact subsets, this implies that $\int_{\Omega} u_n(z)\,dz\to \int_{\Omega} |F(z)|^p e^{-p\pi|z|^2/2}\, dz$. As a consequence, $F\not=0$ and, since $\|F\|_{\cF^2}\leq 1=\|F_n\|_{\cF^2}$, \[ \lim_{n\to \infty}\frac{\int_\Omega |F_n(z)|^p e^{-p\pi|z|^2/2} }{\|F_n\|^p_{\cF^2}} \leq \frac{ \int_{\Omega} |F(z)|^p e^{-p\pi|z|^2/2}\, dz}{\|F\|^p_{\cF^2}}. \] The reverse inequality is obvious, because $F_n$ is a maximizing sequence. \end{proof} \begin{theorem}[Local Lieb's uncertainty inequality for the STFT]\label{thm locallieb} Let $2\leq p<\infty$. For every measurable subset $\Omega\subset\bR^2$ of finite measure, and every $f\in\ L^2(\bR)\setminus\{0\}$, \begin{equation}\label{eq locallieb} \frac{\int_\Omega |\cV f(x,\omega)|^p\, dxd\omega}{\|f\|^p_{L^2}}\leq\frac{2}{p}\Big(1-e^{-p|\Omega|/2}\Big). \end{equation} Moreover, equality occurs (for some $f$ and for some $\Omega$ such that $0<|\Omega|<\infty$) if and only if $\Omega$ is equivalent, up to a set of measure zero, to a ball centered at some $(x_0,\omega_0)\in\bR^{2}$, and \begin{equation*} f(x)=ce^{2\pi ix \omega_0}\varphi(x-x_0),\qquad c\in\bC\setminus\{0\}, \end{equation*} where $\varphi$ is the Gaussian in \eqref{defvarphi}. \end{theorem} Observe that when $p=2$ this result reduces to Theorem \ref{thm mainthm}. Moreover, by monotone convergence, from \eqref{eq locallieb} we obtain \begin{equation}\label{eq liebineq} \int_{\bR^2} |\cV f(x,\omega)|^p\, dxd\omega\leq \frac{2}{p}\|f\|^p_{L^2}, \quad f\in L^2(\bR), \end{equation} which is Lieb's inequality for the STFT with a Gaussian window (see \cite{lieb} and \cite[Theorem 3.3.2]{grochenig-book}). Actually, \eqref{eq liebineq} will be an ingredient of the proof of Theorem \ref{thm locallieb}. \begin{proof}[Proof of Theorem \ref{thm locallieb}] Transfering the problem in the Fock space $\cF^2(\bC)$, it is sufficient to prove that \[ \frac{\int_\Omega |F(z)|^p e^{-p\pi|z|^2/2}\, dz}{\|F\|^p_{\cF^2}}\leq \frac{2}{p}\Big(1-e^{-p|\Omega|/2}\Big) \] for $F\in \cF^2(\bC)\setminus\{0\}$, $0<|\Omega|<\infty$, and that the extremals are given by the functions $F=cF_{z_0}$ in \eqref{eq Fz0}, together with the balls $\Omega$ of center $z_0$. We give only a sketch of the proof, since the argument is similar to the proof of Theorem \ref{thm36}. \par Assuming $\|F\|_{\cF^2}=1$ and setting $ u(z)= |F(z)|^p e^{-p\pi|z|^2/2} $, arguing as in the proof of Proposition \ref{prop34} we obtain that \[ \left(\int_{\{u=u^*(s)\}} |\nabla u|^{-1} \dH\right)^{-1} \leq \frac{p}{2}u^*(s)\qquad\text{for a.e. $s>0$,} \] which implies $(u^*)'(s)+ \frac{p}{2} u^*(s)\geq 0$ for a.e.\ $s\geq 0$. With the notation of Lemma \ref{lemma3.3} we obtain $I''(s)+ \frac{p}{2} I'(s)\geq 0$ for a.e.\ $s\geq 0$, i.e. $e^{sp/2}I'(s)$ is non decreasing on $[0,+\infty)$. 
Arguing as in the proof of Theorem \ref{thm36} we reduce ourselves to study the inequality $I(s)\leq \frac{2}{p}(1-e^{-ps/2})$ or equivalently, changing variable $s= -\frac{2}{p}\log \sigma$, $\sigma\in (0,1]$, \begin{equation}\label{eq gsigma2} G(\sigma):=I\Big(-\frac{2}{p}\log \sigma\Big)\leq \frac{2}{p}(1-\sigma)\qquad \forall\sigma\in (0,1]. \end{equation} We can prove this inequality and discuss the case of strict inequality as in the proof of Theorem \ref{thm36}, the only difference being that now $G(0):=\lim_{\sigma\to 0^+} G(\sigma)=\int_{\bR^2} u(z)\, dz\leq 2/p$ by \eqref{eq liebineq} (hence, at $\sigma=0$ strict inequality may occur in \eqref{eq gsigma2}, but this is enough) and, when in \eqref{eq gsigma2} the equality occurs for some (and hence for every) $\sigma\in[0,1]$, in place of \eqref{catena} we will have \begin{align*} 1=\max u =\max |F(z)|^p e^{-p\pi |z|^2/2}&= (\max |F(z)|^2 e^{-\pi |z|^2})^{p/2} \\ &\leq \|F\|^p_{\cF^2}=1. \end{align*} The ``if part" follows by a direct computation. \end{proof} \subsection{$L^p$-concentration estimates for the STFT} We consider now the problem of the concentration in the time-frequency plane in the $L^p$ sense, which has interesting applications to signal recovery (see e.g. \cite{abreu2019}). More precisely, Theorem \ref{thm lpconc} below proves a conjecture of Abreu and Speckbacher \cite[Conjecture 1]{abreu2018} (note that when $p=2$ one obtains Theorem \ref{thm mainthm}). Let $\cS'(\bR)$ be the space of temperate distributions and, for $p\geq 1$, consider the subspace (called \textit{modulation space} in the literature \cite{grochenig-book}) \[ M^p(\bR):=\{f\in\cS'(\bR): \|f\|_{M^p}:=\|\cV f\|_{L^p(\bR^2)}<\infty\}. \] In fact, the definition of STFT (with Gaussian or more generally Schwa\-rtz window) and the Bargmann transform $\cB$ extend (in an obvious way) to injective operators on $\cS'(\bR)$. It is clear that $\cV: M^p(\bR)\to L^p(\bR^2)$ is an isometry, and it can be proved that $\cB$ is a \textit{surjective} isometry from $M^p(\bR)$ onto the space $\cF^p(\bC)$ of holomorphic functions $F(z)$ satisfying \[ \|F\|_{\cF^p}:=\Big(\int_\bC |F(z)|^p e^{-p\pi|z|^2/2}\, dz\Big)^{1/p}<\infty, \] see e.g. \cite{toft} for a comprehensive discussion. Moreover the formula \eqref{eq STFTbar} continues to hold for $f\in\cS'(\bR)$. \begin{theorem}[$L^p$-concentration estimates for the STFT]\label{thm lpconc} Let $1\leq p<\infty$. For every measurable subset $\Omega\subset\bR^2$ of finite measure and every $f\in M^p(\bR)\setminus\{0\}$, \begin{equation}\label{eq lpconc} \frac{\int_\Omega |\cV f(x,\omega)|^p\, dxd\omega}{\int_{\bR^2} |\cV f(x,\omega)|^p\, dxd\omega}\leq 1-e^{-p|\Omega|/2}. \end{equation} Moreover, equality occurs (for some $f$ and for some $\Omega$ such that $0<|\Omega|<\infty$) if and only if $\Omega$ is equivalent, up to a set of measure zero, to a ball centered at some $(x_0,\omega_0)\in\bR^{2}$, and \begin{equation}\label{eq lp concert optimal} f(x)=ce^{2\pi ix \omega_0}\varphi(x-x_0),\qquad c\in\bC\setminus\{0\}, \end{equation} where $\varphi$ is the Gaussian in \eqref{defvarphi}. \end{theorem} We omit the proof, that is very similar to that of Theorem \ref{thm36}. We just observe that \eqref{eq bound} extends to any $p\in [1,\infty)$ as \[ |F(z)|^p e^{-p\pi|z|^2/2}\leq \frac{p}{2} \|F\|^p_{\cF^p} \] and again the equality occurs at some point $z_0\in\bC$ if and only if $F=cF_{z_0}$, for some $c\in\bC$ (in particular, $\|F_{z_0}\|^p_{\cF^p}=2/p$); see e.g. \cite[Theorem 2.7]{zhu}. 
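We note in passing that the constant $2/p$ in this pointwise bound cannot be improved; as a direct check (not needed in the sequel), for the constant function $F\equiv 1$ one has
\[
\|1\|^p_{\cF^p}=\int_\bC e^{-p\pi|z|^2/2}\, dz=\frac{2}{p},
\]
so that $|F(0)|^p e^{-p\pi|0|^2/2}=1=\frac{p}{2}\,\|1\|^p_{\cF^p}$, i.e. the bound is attained at $z_0=0$.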
As a consequence we obtain at once the following sharp uncertainty principle for the STFT. \begin{corollary}\label{cor corsez5} Let $1\leq p<\infty$. If for some $\epsilon\in (0,1)$, some function $f\in M^p(\bR)\setminus\{0\}$ and some $\Omega\subset\bR^2$ we have $\int_\Omega|\cV f(x,\omega)|^p \, dxd\omega\geq (1-\epsilon)\|f\|^p_{M^p}$, then necessarily \begin{equation}\label{eq stima eps cor} |\Omega|\geq \frac{2}{p}\log(1/\epsilon), \end{equation} with equality if and only if $\Omega$ is a ball and $f$ has the form \eqref{eq lp concert optimal}, where $\varphi$ is the Gaussian in \eqref{defvarphi} and $(x_0,\omega_0)$ is the center of the ball. \end{corollary} We point out that, in the case $p=1$, the following rougher --but valid for any window in $M^1(\bR)\setminus\{0\}$-- lower bound \[ |\Omega|\geq 4(1-\epsilon)^2 \] was obtained in \cite[Proposition 2.5.2]{grochenig}. Arguing as in Section \ref{sec mult} it would not be difficult to suitably generalise Theorem \ref{thm lpconc} and Corollary \ref{cor corsez5} in arbitrary dimension. \subsection{Changing window}\label{sec change wind} Theorem \ref{thm mult} can be suitably reformulated when the Gaussian window $\varphi$ in \eqref{eq gaussian dimd} is dilated or, more generally, replaced by $\mu(\cA)\varphi$, where $\mu(\cA)$ is a metaplectic operator associated with a symplectic matrix $\cA\in Sp(d,\bR)$ (recall that, in dimension 1, $Sp(1,\bR)=SL(2,\bR)$ is the special linear group of $2\times 2$ real matrices with determinant $1$). We address to \cite[Section 9.4]{grochenig} for a detailed introduction to the metaplectic representation. Roughly speaking one associates, with any matrix $\cA\in Sp(d,\bR)$, a unitary operator $\mu(\cA)$ on $L^2(\bR^d)$ defined up to a phase factor, providing a projective unitary representation of $Sp(d,\bR)$ on $L^2(\bR^d)$. In more concrete terms, we know that $Sp(d,\bR)$ is generated by matrices of the type (in block-matrix notation) \[ \cA_1=\begin{pmatrix} 0 &I \\ -I &0 \end{pmatrix} \qquad \cA_2=\begin{pmatrix} A &0 \\ 0 &{A^*}^{-1} \end{pmatrix} \qquad \cA_3=\begin{pmatrix} I &0 \\ C & I \end{pmatrix} \] where $A\in GL(d,\bR)$, and $C$ is real and symmetric ($I$ denoting the identity matrix). The corresponding operators are then given by $\mu(\cA_1)=\cF$ (Fourier transform), $\mu(\cA_2)f(x)=|{\rm det}\,A|^{-1/2}f(A^{-1}x)$ and $\mu(\cA_3)f(x)=e^{\pi iC x\cdot x} f(x)$ (up to a phase factor). \par Now, the relevant property of the STFT is its symplectic covariance (see \cite[Lemma 9.4.3]{grochenig-book}): \[ |\cV_{\mu(\cA)\varphi} (\mu(\cA)f)(x,\omega)|=|\cV_{\varphi} (f)(\cA^{-1}(x,\omega))|. \] As a consequence, if we define, for $g,f\in L^2(\bR^d)\setminus\{0\}$, the quotients \[ \Phi_{\Omega,g}(f):=\frac{\int_\Omega |\cV_g f(x,\omega)|^2\, dxd\omega}{\int_{\bR^{2d}} |\cV_g f(x,\omega)|^2\, dxd\omega}, \] we obtain (since ${\rm det}\,\cA=1$) \[ \Phi_{\Omega,\mu(\cA)\varphi}(\mu(\cA)f)=\Phi_{\cA^{-1}(\Omega),\varphi}(f). \] Hence, since $\cA$ is measure preserving and $\mu(\cA)$ is a unitary operator, we deduce at once from Theorem \ref{thm mult} that for every measurable subset $\Omega\subset\bR^{2d}$ of finite measure and every $f\in L^2(\bR^d)\setminus\{0\}$, \[ \frac{\int_\Omega |\cV_{\mu(\cA)\varphi} f(x,\omega)|^2\, dxd\omega}{\|f\|_{L^2}^2}\leq \frac{\gamma(d,c_\Omega)}{(d-1)!} \] with $c_\Omega=\pi(|\Omega|/\boldsymbol{\omega}_{2d})^{1/d}$. 
Moreover, the equality occurs if and only if $f=\mu(\cA)M_{\omega_0}T_{x_0}\varphi$ (recall \eqref{eq gaussian dimd}) and $\Omega$ is equivalent, up to a set of measure zero, to $\cA(B)$ for some ball $B\subset\bR^{2d}$ centered at $(x_0,\omega_0)$, where $T_{x_0}f(x)=f(x-x_0)$ and $M_{\omega_0}f(x)=e^{2\pi ix\cdot\omega_0}f(x)$. \par\bigskip \nopagebreak \noindent{\bf Acknowledgements.} We wish to thank Nicola Fusco for useful discussion on the validity of Lemma \ref{lemmau*}, and for addressing us to some relevant references. \section*{Appendix} \begin{figure}[H] \includegraphics[width=12cm, height = 7cm]{figura1_f_k-ter.pdf} \caption{Left: in dimension 1, assuming that $\Omega\subset\bR^2$ captures a fraction $1-\epsilon$ of the energy of some function $f\in L^2(\bR)$, comparison between the lower bound for $|\Omega|$ in \eqref{eq statart} (state-of-the-art), the sharp lower bound $\log(1/\epsilon)$ in \eqref{eq stima eps} and the so-called \textit{weak uncertainty principle} $|\Omega|\geq 1-\epsilon$ \cite[Proposition 3.3.1]{grochenig-book} (which follows at once from the elementary estimate $\|\cV f\|_{L^\infty}\leq\|f\|_{L^2}$).\newline Right: the same comparison in dimension $d=2$. Here the state-of-the-art is represented by \eqref{bound groc dim d}, whereas the sharp bound $|\Omega|\geq \boldsymbol{\omega}_4 \pi^{-2}\psi_2(\epsilon)^2$ is given in \eqref{uncertainty dim d} ($\boldsymbol{\omega}_4=\pi^2/2$).}\label{figure1} \vspace{4mm} \includegraphics[width=12cm, height = 6.5cm]{figura2_f_k.pdf} \caption{Left: The upper bound $\gamma(d,c_\Omega)/(d-1)!$ in \eqref{eq thm mult}, for $d=1,2,3$, as a function of $c_\Omega=\pi(|\Omega|/\boldsymbol{\omega}_{2d})^{1/d}$.\newline Right: The lower bound $\psi_d(\epsilon)$ for $c_\Omega$ in \eqref{uncertainty dim d}, for $d=1,2,3$. Recall, $\psi_d(\epsilon)$ is the inverse function of $1-\gamma(d,s)/(d-1)!$, in particular $\psi_1(\epsilon)=\log(1/\epsilon)$.} \label{figure2} \end{figure} \newpage \bibliographystyle{abbrv} \bibliography{biblio.bib} \end{document}
2205.07961v1
http://arxiv.org/abs/2205.07961v1
Multipliers for Hardy spaces of Dirichlet series
\documentclass[12pt,a4paper]{article} \usepackage[utf8x]{inputenc} \usepackage{ucs} \usepackage{amsfonts, amssymb, amsmath, amsthm} \usepackage{color} \usepackage{graphicx} \usepackage[lf]{Baskervaldx} \usepackage[bigdelims,vvarbb]{newtxmath} \usepackage[cal=boondoxo]{mathalfa} \renewcommand*\oldstylenums[1]{\textosf{#1}} \usepackage[width=16.00cm, height=24.00cm, left=2.50cm]{geometry} \newtheorem{theorem}{Theorem}\newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \usepackage[colorlinks=true,linkcolor=colorref,citecolor=colorcita,urlcolor=colorweb]{hyperref} \definecolor{colorcita}{RGB}{21,86,130} \definecolor{colorref}{RGB}{5,10,177} \definecolor{colorweb}{RGB}{177,6,38} \usepackage[shortlabels]{enumitem} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\re}{Re} \DeclareMathOperator{\essinf}{essinf} \DeclareMathOperator{\ess}{ess} \DeclareMathOperator{\gpd}{gpd} \renewcommand{\theenumi}{\alph{enumi})} \renewcommand{\labelenumi}{\theenumi} \allowdisplaybreaks \title{Multipliers for Hardy spaces of Dirichlet series} \author{Tomás Fernández Vidal\thanks{Supported by CONICET-PIP 11220200102336} \and Daniel Galicer\thanks{Supported by PICT 2018-4250.} \and Pablo Sevilla-Peris\thanks{Supported by MINECO and FEDER Project MTM2017-83262-C2-1-P and by GV Project AICO/2021/170}} \date{} \newcommand{\ha}{\medskip \textcolor[RGB]{243,61,61}{\hrule} \medskip} \newcommand*{\nota}[1]{\textcolor[RGB]{243,61,61}{\bf #1}} \renewcommand{\thefootnote}{\roman{footnote}} \begin{document} \maketitle \begin{abstract} We characterize the space of multipliers from the Hardy space of Dirichlet series $\mathcal H_p$ into $\mathcal H_q$ for every $1 \leq p,q \leq \infty$. For a fixed Dirichlet series, we also investigate some structural properties of its associated multiplication operator. In particular, we study the norm, the essential norm, and the spectrum for an operator of this kind. We exploit the existing natural identification of spaces of Dirichlet series with spaces of holomorphic functions in infinitely many variables and apply several methods from complex and harmonic analysis to obtain our results. As a byproduct we get analogous statements on such Hardy spaces of holomorphic functions. \end{abstract} \footnotetext[0]{\textit{Keywords:} Multipliers, Spaces of Dirichlet series, Hardy spaces, Infinite dimensional analysis\\ \textit{2020 Mathematics subject classification:} Primary: 30H10,46G20,30B50. Secondary: 47A10 } \section{Introduction} A Dirichlet series is a formal expression of the type $D=\sum a_n n^{-s}$ with $(a_n)$ complex values and $s$ a complex variable. These are one of the basic tools of analytic number theory (see e.g., \cite{apostol1984introduccion, tenenbaum_1995}) but, over the last two decades, as a result of the work initiated in \cite{hedenmalm1997hilbert} and \cite{konyaginqueffelec_2002}, they have been analyzed with techniques coming from harmonic and functional analysis (see e.g. \cite{queffelec2013diophantine} or \cite{defant2018Dirichlet} and the references therein). One of the key point in this analytic insight on Dirichlet series is the deep connection with power series in infinitely many variables. We will use this fruitful perspective to study multipliers for Hardy spaces of Dirichlet series. We begin by recalling some standard definitions of these spaces. 
The natural regions of convergence of Dirichlet series are half-planes, and there they define holomorphic functions. To settle some notation, we consider the set $\mathbb{C}_{\sigma} = \{ s \in \mathbb{C} \colon \re s > \sigma\}$, for $\sigma \in \mathbb{R}$. With this, Queff\'elec \cite{Quefflec95} defined the space $\mathcal{H}_{\infty}$ as that consisting of Dirichlet series that define a bounded, holomorphic function on the half-plane $\mathbb{C}_{0}$. Endowed with the norm $\Vert D \Vert_{\mathcal{H}_\infty} := \sup\limits_{s\in \mathbb{C}_0} \vert \sum \frac{a_n}{n^s} \vert$ it becomes a Banach space, which together with the product $(\sum a_n n^{-s})\cdot (\sum b_n n^{-s}) = \sum\limits_{n =1}^{\infty} \big(\sum\limits_{k\cdot j = n} a_k\cdot b_j \big) n^{-s}$ becomes a Banach algebra. The Hardy spaces of Dirichlet series $\mathcal{H}_p$ were introduced by Hedenmalm, Lindqvist and Seip \cite{hedenmalm1997hilbert} for $p=2$, and by Bayart \cite{bayart2002hardy} for the remaining cases in the range $1\leq p < \infty$. A way to define these spaces is to consider first the following norm in the space of Dirichlet polynomials (i.e., all finite sums of the form $\sum_{n=1}^{N} a_{n} n^{-s}$, with $N \in \mathbb{N}$), \[ \Big\Vert \sum_{n=1}^{N} a_{n} n^{-s} \Big\Vert_{\mathcal{H}_p} := \lim_{R \to \infty} \bigg( \frac{1}{2R} \int_{-R}^{R} \Big\vert \sum_{n=1}^{N} a_{n} n^{-it} \Big\vert^{p} dt \bigg)^{\frac{1}{p}} \,, \] and define $\mathcal{H}_p$ as the completion of the Dirichlet polynomials under this norm. Each Dirichlet series in some $\mathcal{H}_{p}$ (with $1 \leq p < \infty$) converges on $\mathbb{C}_{1/2}$, and there it defines a holomorphic function. The Hardy space $\mathcal H_p$ with the function product is not an algebra for $p<\infty$. Namely, given two Dirichlet series $D, E \in \mathcal{H}_p$, it is not true, in general, that the product function $D\cdot E$ belongs to $\mathcal{H}_p$. Nevertheless, there are certain series $D$ that verify that $D \cdot E \in \mathcal{H}_p$ for every $E \in \mathcal{H}_p$. Such a Dirichlet series $D$ is called a multiplier of $\mathcal{H}_p$ and the mapping $M_D: \mathcal{H}_p \to \mathcal{H}_p$, given by $M_D(E)= D\cdot E$, is referred to as its associated multiplication operator. In \cite{bayart2002hardy} (see also \cite{defant2018Dirichlet, hedenmalm1997hilbert,queffelec2013diophantine}) it is proved that the multipliers of $\mathcal{H}_p$ are precisely those Dirichlet series that belong to the Banach space $\mathcal{H}_\infty$. Moreover, for a multiplier $D$ we have the following equality: \[ \Vert M_D \Vert_{\mathcal H_p \to \mathcal H_p} = \Vert D \Vert_{\mathcal H_{\infty}}. \] Given $1 \leq p, q \leq \infty$, we propose to study the multipliers from $\mathcal{H}_p$ to $\mathcal{H}_q$; that is, we want to understand those Dirichlet series $D$ which verify that $D\cdot E \in \mathcal{H}_q$ for every $E \in \mathcal{H}_p$. For this we use the relation that exists between the Hardy spaces of Dirichlet series and the Hardy spaces of functions. The mentioned connection is given by the so-called Bohr lift $\mathcal{L}$, which identifies each Dirichlet series with a function (both on the polytorus and on the polydisk; see below for more details). This identification allows us to relate the multipliers in spaces of Dirichlet series with those of function spaces. As a consequence of our results, we obtain a complete characterization of $\mathfrak{M}(p,q)$, the space of multipliers of $\mathcal{H}_p$ into $\mathcal{H}_q$.
It turns out that this set coincides with the Hardy space $\mathcal{H}_{pq/(p-q)}$ when $1\leq q<p \leq \infty$ and with the null space if $1 \leq p<q \leq \infty$. Precisely, for a multiplier $D \in \mathfrak{M}(p,q)$ where $1\leq q<p \leq \infty$ we have the isometric correspondence \[ \Vert M_D \Vert_{\mathcal H_p \to \mathcal H_q} = \Vert D \Vert_{\mathcal H_{pq/(p-q)}}. \] Moreover, for certain values of $p$ and $q$ we study some structural properties of these multiplication operators. Inspired by some of the results obtained by Vukoti\'c \cite{vukotic2003analytic} and Demazeux \cite{demazeux2011essential} for spaces of holomorphic functions in one variable, we get the corresponding version in the Dirichlet space context. In particular, when considering endomorphisms (i.e., $p=q$), the essential norm and the operator norm of a given multiplication operator coincide if $p>1$. In the remaining cases, that is $p=q=1$ or $1\leq q < p \leq \infty$, we compare the essential norm with the norm of the multiplier in different Hardy spaces. We continue by studying the structure of the spectrum of the multiplication operators over $\mathcal{H}_p$. Specifically, we consider the continuous spectrum, the radial spectrum and the approximate spectrum. For the latter, we use necessary and sufficient conditions on the associated Bohr lifted function $\mathcal{L}(D)$ (see the definition below) for the multiplication operator $M_D : \mathcal H_p \to \mathcal{H}_p$ to have closed range. \section{Preliminaries on Hardy spaces} \subsection{Of holomorphic functions} We denote by $\mathbb{D}^{N} = \mathbb{D} \times \mathbb{D} \times \cdots$ the Cartesian product of $N$ copies of the open unit disk $\mathbb{D}$ with $N\in \mathbb{N}\cup \{\infty\}$, and by $\mathbb{D}^{\infty}_{2}$ the domain in $\ell_2$ defined as $\ell_2 \cap \mathbb{D}^{\infty}$ (for coherence in the notation we will sometimes write $\mathbb{D}^N_2$ for $\mathbb{D}^N$ also in the case $N\in \mathbb{N}$). We define $\mathbb{N}_0^{(\mathbb{N})}$ as consisting of all sequences $\alpha = (\alpha_{n})_{n}$ with $\alpha_{n} \in \mathbb{N}_{0} = \mathbb{N} \cup \{0\}$ which are eventually null. In this case we denote $\alpha ! := \alpha_1! \cdots \alpha_M!$ whenever $\alpha = (\alpha_1, \cdots, \alpha_M, 0,0,0, \dots)$. A function $f: \mathbb{D}^{\infty}_2 \to \mathbb{C}$ is holomorphic if it is Fr\'echet differentiable at every $z\in \mathbb{D}^{\infty}_2$, that is, if there exists a continuous linear functional $x^*$ on $\ell_2$ such that \[ \lim\limits_{h\to 0} \frac{f(z+h)-f(z)- x^*(h)}{\Vert h \Vert}=0. \] We denote by $H_{\infty} (\mathbb{D}^{\infty}_2)$ the space of all bounded holomorphic functions $f : \mathbb{D}^\infty_2 \to \mathbb{C}$. For $1\leq p< \infty$ we consider the Hardy spaces of holomorphic functions on the domain $\mathbb{D}^{\infty}_2$ defined by \begin{multline*} H_p(\mathbb{D}^\infty_2) :=\{ f : \mathbb{D}^\infty_2 \to \mathbb{C} : \; f \; \text{is holomorphic and } \\ \Vert f \Vert_{H_p(\mathbb{D}_2^\infty)} := \sup\limits_{M\in \mathbb{N}} \sup\limits_{ 0<r<1} \left( \int\limits_{\mathbb{T}^M} \vert f(r\omega, 0) \vert^p \mathrm{d}\omega \right)^{1/p} <\infty \}.
\end{multline*} The definitions of $H_{\infty} (\mathbb{D}^{N})$ and $H_p(\mathbb{D}^{N})$ for finite $N$ are analogous (see \cite[Chapters~13 and~15]{defant2018Dirichlet}).\\ For $N \in \mathbb{N} \cup \{ \infty \}$, each function $f\in H_p(\mathbb{D}^N_2)$ defines a unique family of coefficients $c_{\alpha}(f)= \frac{(\partial^{\alpha} f)(0)}{\alpha !}$ (the Cauchy coefficients), where $\alpha \in \mathbb{N}_0^{N}$ always has only finitely many non-null coordinates. For $z \in \mathbb{D}^N_2$ one has the following monomial expansion \cite[Theorem~13.2]{defant2018Dirichlet} \[ f(z)= \sum\limits_{\alpha \in \mathbb{N}_0^{(\mathbb{N})}} c_{\alpha}(f) \cdot z^\alpha, \] with $z^{\alpha} = z_1^{\alpha_1} \cdots z_M^{\alpha_M}$ whenever $\alpha = (\alpha_1, \cdots, \alpha_M, 0,0,0, \dots)$.\\ Let us note that for each fixed $N \in \mathbb{N}$ and $1 \leq p \leq \infty$ we have $H_{p}(\mathbb{D}^{N}) \hookrightarrow H_{p}(\mathbb{D}_{2}^{\infty})$ by doing $f \rightsquigarrow [ z = (z_{n})_{n} \in \mathbb{D}_{2}^{\infty} \rightsquigarrow f(z_{1}, \ldots z_{N}) ]$. Conversely, given a function $f \in H_{p}(\mathbb{D}_{2}^{\infty})$, for each $N \in \mathbb{N}$ we define $f_{N} (z_{1}, \ldots , z_{N}) = f (z_{1}, \ldots , z_{N}, 0,0, \ldots)$ for $(z_{1}, \ldots , z_{N}) \in \mathbb{D}^{N}$. It is well known that $f_N \in H_p(\mathbb{D}^N)$. An important property for our purposes is the so-called Cole-Gamelin inequality (see \cite[Remark~13.14 and Theorem~13.15]{defant2018Dirichlet}), which states that for every $f\in H_p(\mathbb{D}^{N}_2)$ and $z \in \mathbb{D}^{N}_2$ (for $N \in \mathbb{N} \cup \{\infty\}$) we have \begin{equation}\label{eq: Cole-Gamelin} \vert f(z) \vert \leq \left( \prod\limits_{j=1}^{N} \frac{1}{1-\vert z_j \vert^2} \right)^{1/p} \Vert f \Vert_{H_p(\mathbb{D}^N_2)}. \end{equation} For functions of finitely many variables this inequality is optimal in the sense that if $N\in \mathbb{N}$ and $z\in \mathbb{D}^N$, then there is a function $f_z \in H_p(\mathbb{D}^N_2)$ given by \begin{equation} \label{optima} f_z(u) = \left( \prod\limits_{j=1}^N \frac{1- \vert z_j\vert^2}{(1- \overline{z}_ju_j)^2}\right)^{1/p}, \end{equation} such that $\Vert f_z \Vert_{H_p(\mathbb{D}^N_2)} = 1$ and $\vert f_z(z) \vert = \left( \prod\limits_{j=1}^N \frac{1}{1-\vert z_j \vert^2} \right)^{1/p}$. \subsection{On the polytorus} On $\mathbb{T}^\infty = \{ \omega = ( \omega_{n})_{n} \colon \vert \omega_{n} \vert =1, \text{ for every } n \}$ consider the product of the normalized Lebesgue measure on $\mathbb{T}$ (note that this is the Haar measure). For each $F \in L_1(\mathbb{T}^\infty)$ and $\alpha \in \mathbb{Z}^{(\mathbb{N})}$, the $\alpha$-th Fourier coefficient of $F$ is defined as \[ \hat{F}(\alpha) = \int\limits_{\mathbb{T}^\infty} F(\omega) \cdot \overline{\omega^{\alpha}} \, \mathrm{d}\omega \] where again $\omega^{\alpha} = \omega_1^{\alpha_1}\cdots \omega_M^{\alpha_M}$ if $\alpha = (\alpha_{1}, \ldots , \alpha_{M}, 0,0,0, \ldots)$. The Hardy space on the polytorus $H_p(\mathbb{T}^\infty)$ is the subspace of $L_p(\mathbb{T}^\infty)$ given by all the functions $F$ such that $\hat{F}(\alpha)=0$ for every $\alpha \in \mathbb{Z}^{(\mathbb{N})} - \mathbb{N}_0^{(\mathbb{N})}$. The definition of $H_{p} (\mathbb{T}^{N})$ for finite $N$ is analogous (note that these are the classical Hardy spaces, see \cite{rudin1962fourier}).
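As a simple illustration of these definitions, consider the trigonometric polynomial $F(\omega)=3+\omega_1^2\omega_2$: the orthogonality of the monomials gives $\hat{F}(0,0,0,\dots)=3$, $\hat{F}(2,1,0,\dots)=1$ and $\hat{F}(\alpha)=0$ for every other $\alpha$, so $F\in H_p(\mathbb{T}^\infty)$ for every $1\leq p\leq \infty$. On the other hand, $G(\omega)=\overline{\omega_1}$ satisfies $\hat{G}(-1,0,0,\dots)=1\neq 0$, and hence $G$ belongs to every $L_p(\mathbb{T}^\infty)$ but to no $H_p(\mathbb{T}^\infty)$.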
We have the canonical inclusion $H_{p}(\mathbb{T}^{N}) \hookrightarrow H_{p}(\mathbb{T}^{\infty})$ by doing $F \rightsquigarrow [ \omega = (\omega_{n})_{n} \in \mathbb{T}^{\infty} \rightsquigarrow F(\omega_{1}, \ldots \omega_{N}) ]$.\\ Given $N_1 < N_2 \leq \infty$ and $F\in H_p(\mathbb{T}^{N_2})$, then the function $F_{N_1}$, defined by $F_{N_1}(\omega)= \int\limits_{\mathbb{T}^{N_2-N_1}} F(\omega,u)\mathrm{d}u$ for every $\omega\in \mathbb{T}^{N_1}$, belongs to $H_{p}(\mathbb{T}^{N_1})$. In this case, the Fourier coefficients of both functions coincide: that is, given $\alpha \in \mathbb{N}_0^{N_1}$ then \[ \hat{F}_{N_1}(\alpha)= \hat{F}(\alpha_1, \alpha_2, \dots, \alpha_{N_1},0,0, \dots). \] Moreover, \begin{equation*} \Vert F \Vert_{H_p(\mathbb{T}^{N_2})} \geq \Vert F_{N_1} \Vert_{H_p(\mathbb{T}^{N_1})}. \end{equation*} Let $N \in \mathbb{N} \cup \{\infty\}$, there is an isometric isomorphism between the spaces $H_{p}(\mathbb{D}^N_2)$ and $H_p(\mathbb{T}^N)$. More precisely, given a function $f\in H_p(\mathbb{D}^N_2)$ there is a unique function $F\in H_p(\mathbb{T}^N)$ such that $c_{\alpha}(f) = \hat{F}(\alpha)$ for every $\alpha$ in the corresponding indexing set and $\Vert f \Vert_{H_{p}(\mathbb{D}^N_2)} =\Vert F \Vert_{H_p(\mathbb{T}^N)}$. If this is the case, we say that the functions $f$ and $F$ are associated. In particular, by the uniqueness of the coefficients, $f_{M}$ and $F_{M}$ are associated to each other for every $1 \leq M \leq N$. Even more, if $N\in \mathbb{N}$, then \[ F(\omega) = \lim\limits_{r\to 1^-} f(r\omega), \] for almost all $\omega \in \mathbb{T}^N$. \noindent We isolate the following important property which will be useful later. \begin{remark} \label{manon} Let $F \in H_p(\mathbb{T}^\infty)$. If $1 \leq p < \infty$, then $F_{N} \to F$ in $H_{p}(\mathbb{T}^{\infty})$ (see e.g \cite[Remark~5.8]{defant2018Dirichlet}). If $p=\infty$, the convergence is given in the $w(L_{\infty},L_1)$-topology. In particular, for any $1 \leq p \leq \infty$, there is a subsequence so that $\lim_{k} F_{N_{k}} (\omega) = F(\omega)$ for almost $\omega \in \mathbb{T}^{\infty}$ (note that the case $p=\infty$ follows directly from the inclusion $H_{\infty}(\mathbb{T}^\infty) \subset H_2(\mathbb{T}^\infty)$). \end{remark} \subsection{Bohr transform} We previously mentioned the Hardy spaces of functions both on the polytorus and on the polydisk and the relationship between them based on their coefficients. This relation also exists with the Hardy spaces of Dirichlet series and the isometric isomorphism that identifies them is the so-called Bohr transform. To define it, let us first consider $\mathfrak{p}= (\mathfrak{p}_1, \mathfrak{p}_2, \cdots)$ the sequence of prime numbers. Then, given a natural number $n$, by the prime number decomposition, there are unique non-negative integer numbers $\alpha_1, \dots , \alpha_M$ such that $n= \mathfrak{p}_1^{\alpha_1}\cdots \mathfrak{p}_M^{\alpha_M}$. Therefore, with the notation that we already defined, we have that $n= \mathfrak{p}^{\alpha}$ with $\alpha = (\alpha_1, \cdots, \alpha_M, 0,0, \dots)$. Then, given $1\leq p \leq \infty$, the Bohr transform $\mathcal{B}_{\mathbb{D}^\infty_2}$ on $H_p(\mathbb{D}^\infty_2)$ is defined as follows: \[ \mathcal{B}_{\mathbb{D}^\infty_2}(f) = \sum\limits_n a_n n^{-s}, \] where $a_n= c_{\alpha}(f)$ if and only if $n= \mathfrak{p}^{\alpha}$. The Bohr transform is an isometric isomorphism between the spaces $H_p(\mathbb{D}^{\infty}_2)$ and $\mathcal{H}_p$ (see \cite[Theorem~13.2]{defant2018Dirichlet}). 
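To illustrate the correspondence, note that $12=\mathfrak{p}_1^2\,\mathfrak{p}_2$ corresponds to the multi-index $\alpha=(2,1,0,0,\dots)$, so the monomial $z_1^2 z_2$ is matched with the term $12^{-s}$; for instance, the polynomial $f(z)=5+z_1+z_1^2z_2$ is sent by $\mathcal{B}_{\mathbb{D}^\infty_2}$ to the Dirichlet polynomial $5+2^{-s}+12^{-s}$.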
We denote by $\mathcal H^{(N)}$ the set of all Dirichlet series $\sum a_{n} n^{-s}$ that involve only the first $N$ prime numbers; that is, $a_n=0$ if $\mathfrak{p}_i$ divides $n$ for some $i>N$. We write $\mathcal{H}_p^{(N)}$ for the space $\mathcal H^{(N)} \cap \mathcal H_p$ (endowed with the norm in $\mathcal H_p$). Note that the image of $H_{p} (\mathbb{D}^{N})$ (seen as a subspace of $H_p(\mathbb{D}^{\infty}_2)$ with the natural identification) under $\mathcal{B}_{\mathbb{D}^\infty_2}$ is exactly $\mathcal{H}_p^{(N)}$. The inverse of the Bohr transform, which sends the space $\mathcal{H}_p$ into the space $H_p(\mathbb{D}^{\infty}_2)$, is called the \textit{Bohr lift}, which we denote by $\mathcal{L}_{\mathbb{D}^\infty_2}$. With the same idea, the Bohr transform $\mathcal{B}_{\mathbb{T}^\infty}$ on the polytorus for $H_p(\mathbb{T}^\infty)$ is defined; that is, \[ \mathcal{B}_{\mathbb{T}^\infty}(F) = \sum\limits_n a_n n^{-s}, \] where $a_n = \hat{F}(\alpha)$ if and only if $n = \mathfrak{p}^\alpha$. It is an isometric isomorphism between the spaces $H_p(\mathbb{T}^\infty)$ and $\mathcal{H}_p$. Its inverse is denoted by $\mathcal{L}_{\mathbb{T}^\infty}$. In order to keep the notation as clear as possible we will carefully use the following convention: we will use capital letters (e.g., $F$, $G$, or $H$) to denote functions defined on the polytorus $\mathbb{T}^{\infty}$ and lowercase letters (e.g., $f$, $g$ or $h$) to represent functions defined on the polydisk $\mathbb{D}_2^\infty$. If $f$ and $F$ are associated to each other (meaning that $c_{\alpha}(f)= \hat{F}(\alpha)$ for every $\alpha$), we will sometimes write $f \sim F$. With the same idea, if a function $f$ or $F$ is associated through the Bohr transform to a Dirichlet series $D$, we will write $f \sim D$ or $F\sim D$. \section{The space of multipliers} As we mentioned above, our main interest is to describe the multipliers of the Hardy spaces of Dirichlet series. Let us recall again that a holomorphic function $\varphi$, defined on $\mathbb{C}_{1/2}$, is a $(p,q)$-multiplier of $\mathcal{H}_{p}$ if $\varphi \cdot D \in \mathcal{H}_{q}$ for every $D \in \mathcal{H}_{p}$. We denote the set of all such functions by $\mathfrak{M}(p,q)$. Since the constant function $\mathbf{1}$ belongs to $\mathcal{H}_{p}$ we have that, if $\varphi \in \mathfrak{M}(p,q)$, then necessarily $\varphi$ belongs to $\mathcal{H}_{q}$ and it can be represented by a Dirichlet series. So, from now on we will regard the multipliers of $\mathcal{H}_{p}$ as Dirichlet series. The set $\mathfrak{M}^{(N)}(p,q)$ is defined in the obvious way, replacing $\mathcal{H}_{p}$ and $\mathcal{H}_{q}$ by $\mathcal{H}_{p}^{(N)}$ and $\mathcal{H}_{q}^{(N)}$. The same argument as above shows that $\mathfrak{M}^{(N)}(p,q) \subseteq \mathcal{H}_{q}^{(N)}$.\\ The set $\mathfrak{M}(p,q)$ is clearly a vector space. Each Dirichlet series $D \in \mathfrak{M}(p,q)$ induces a multiplication operator $M_D$ from $\mathcal{H}_p$ to $\mathcal{H}_q$, defined by $M_D(E)=D\cdot E$. By the continuity of the evaluation at each $s \in \mathbb{C}_{1/2}$ (see e.g. \cite[Corollary 13.3]{defant2018Dirichlet}), and the Closed Graph Theorem, $M_D$ is continuous. Then, the expression \begin{equation} \label{normamult} \Vert D \Vert_{\mathfrak{M}(p,q)} := \Vert M_{D} \Vert_{\mathcal{H}_{p} \to \mathcal{H}_{q}}, \end{equation} defines a norm on $\mathfrak{M}(p,q)$.
Note that \begin{equation} \label{aleluya} \Vert D \Vert_{\mathcal{H}_{q}} = \Vert M_D(1) \Vert_{\mathcal{H}_{q}} \leq \Vert M_D \Vert_{\mathcal{H}_{p} \to \mathcal{H}_{q}} \cdot \Vert 1 \Vert_{\mathcal{H}_{p}} = \Vert D \Vert_{\mathfrak{M}(p,q)} \,, \end{equation} and the inclusions that we presented above are continuous. A norm on $\mathfrak{M}^{(N)}(p,q)$ is defined analogously. \\ Clearly, if $p_{1}< p_{2}$ or $q_{1} < q_{2}$, then \begin{equation}\label{inclusiones} \mathfrak{M}(p_{1}, q) \subseteq \mathfrak{M}(p_{2},q) \text{ and } \mathfrak{M}(p, q_{2}) \subseteq \mathfrak{M}(p,q_{1}) \,, \end{equation} for fixed $p$ and $q$. Given a Dirichlet series $D = \sum a_{n} n^{-s}$, we denote by $D_{N}$ the `restriction' to the first $N$ primes (i.e., we keep those $n$'s that involve, in their factorization, only the first $N$ primes). Let us be more precise. If $n \in \mathbb{N}$, we write $\gpd (n)$ for the greatest prime divisor of $n$. That is, if $n = \mathfrak{p}_1^{\alpha_{1}} \cdots \mathfrak{p}_N^{\alpha_{N}}$ (with $\alpha_{N} \neq 0$) is the prime decomposition of $n$, then $\gpd(n) = \mathfrak{p}_{N}$. With this notation, $D_{N} := \sum_{\gpd(n) \leq \mathfrak{p}_N} a_{n} n^{-s}$. \begin{proposition} \label{hilbert} Let $D = \sum a_{n} n^{-s}$ be a Dirichlet series and $1 \leq p,q \leq \infty$. Then $D \in \mathfrak{M}(p,q)$ if and only if $D_{N} \in \mathfrak{M}^{(N)}(p,q)$ for every $N \in \mathbb{N}$ and $\sup_{N} \Vert D_{N} \Vert_{\mathfrak{M}^{(N)}(p,q)} < \infty$. \end{proposition} \begin{proof} Let us begin by noting that, if $n=jk$, then clearly $\gpd (n) \leq \mathfrak{p}_{N}$ if and only if $\gpd (j) \leq \mathfrak{p}_{N}$ and $\gpd (k) \leq \mathfrak{p}_{N}$. From this we deduce that, given any two Dirichlet series $D$ and $E$, we have $(DE)_{N}= D_{N} E_{N}$ for every $N \in \mathbb{N}$. \\ Take some Dirichlet series $D$ and suppose that $D \in \mathfrak{M}(p,q)$. Then, given $E \in \mathcal{H}_{p}^{(N)}$ we have $DE \in \mathcal{H}_{q}$, and $(DE)_{N} \in \mathcal{H}_{q}^{(N)}$. But $(DE)_{N} = D_{N} E_{N} = D_{N} E$ and, since $E$ was arbitrary, $D_{N} \in \mathfrak{M}^{(N)}(p,q)$ for every $N$. On the other hand, every $G \in \mathcal{H}_{q}$ satisfies $G_{N} \in \mathcal{H}_{q}^{(N)}$ and $\Vert G_{N} \Vert_{\mathcal{H}_q} \leq \Vert G \Vert_{\mathcal{H}_q}$ (see \cite[Corollary~13.9]{defant2018Dirichlet}). Hence, for $E \in \mathcal{H}_{p}^{(N)}$ we get $\Vert D_{N} E \Vert_{\mathcal{H}_q} = \Vert (DE)_{N} \Vert_{\mathcal{H}_q} \leq \Vert DE \Vert_{\mathcal{H}_q} \leq \Vert D \Vert_{\mathfrak{M}(p,q)} \Vert E \Vert_{\mathcal{H}_p}$, which gives $\Vert D_{N} \Vert_{\mathfrak{M}^{(N)}(p,q)} \leq \Vert D \Vert_{\mathfrak{M}(p,q)}$ for every $N$.\\ Suppose now that $D$ is such that $D_{N} \in \mathfrak{M}^{(N)}(p,q)$ for every $N$ and $ \sup_{N} \Vert D_{N} \Vert_{\mathfrak{M}^{(N)}(p,q)} < \infty$ (let us call it $C$). Then, for each $E \in \mathcal{H}_{p}$ we have, by \cite[Corollary~13.9]{defant2018Dirichlet}, \[ \Vert (DE)_{N} \Vert_{\mathcal{H}_q} = \Vert D_{N} E_{N} \Vert_{\mathcal{H}_q} \leq \Vert D_{N} \Vert_{\mathfrak{M}^{(N)}(p,q)} \Vert E_{N} \Vert_{\mathcal{H}_p} \leq C \Vert E \Vert_{\mathcal{H}_p} \,. \] Since this holds for every $N$, it shows (again by \cite[Corollary~13.9]{defant2018Dirichlet}) that $DE \in \mathcal{H}_{q}$ and completes the proof. \end{proof} We are going to exploit the connection between Dirichlet series and power series in infinitely many variables. This leads us to consider spaces of multipliers on Hardy spaces of functions.
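\begin{remark} As a concrete instance of the restriction used in Proposition~\ref{hilbert}: since $\gpd(12)=3=\mathfrak{p}_2$ while $\gpd(10)=5=\mathfrak{p}_3$, the term $a_{12}\,12^{-s}$ of a Dirichlet series $D=\sum a_n n^{-s}$ survives in $D_N$ for every $N\geq 2$, whereas $a_{10}\,10^{-s}$ appears only for $N\geq 3$; thus $D_2$ collects exactly the coefficients $a_n$ with $n\in\{2^{a}3^{b} : a,b\in\mathbb{N}_0\}=\{1,2,3,4,6,8,9,12,\dots\}$. \end{remark}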
If $U$ is either $\mathbb{T}^{N}$ or $\mathbb{D}_{2}^{N}$ (with $N \in \mathbb{N} \cup \{\infty\}$) we consider the corresponding Hardy spaces $H_{p}(U)$ (for $1 \leq p \leq \infty$), and say that a function $f$ defined on $U$ is a $(p,q)$-multiplier of $H_{p}(U)$ if $ f \cdot g \in H_{q}(U)$ for every $g \in H_{p}(U)$. We denote the space of all such functions by $\mathcal{M}_{U}(p,q)$. The same argument as before with the constant $\mathbf{1}$ function shows that $\mathcal{M}_{U} (p,q) \subseteq H_{q}(U)$. Also, each multiplier defines a multiplication operator $M : H_{p}(U) \to H_{q}(U)$ which, by the Closed Graph Theorem, is continuous, and the norm of the operator defines a norm on the space of multipliers, as in \eqref{normamult}.\\ Our first step is to see that the identifications that we have just shown behave `well' with the multiplication, in the sense that whenever two pairs of functions are identified to each other, then so also are the products. Let us make a precise statement. \begin{theorem} \label{jonas} Let $D,E \in \mathcal{H}_{1}$, $f,g \in H_{1} (\mathbb{D}_{2}^{\infty})$ and $F,G \in H_{1} (\mathbb{T}^{\infty})$ so that $f \sim F \sim D$ and $g \sim G \sim E$. Then, the following are equivalent \begin{enumerate} \item \label{jonas1} $DE \in \mathcal{H}_{1}$ \item \label{jonas2} $fg \in H_{1} (\mathbb{D}_{2}^{\infty})$ \item \label{jonas3} $FG \in H_{1} (\mathbb{T}^{\infty})$ \end{enumerate} and, in this case $DE \sim fg \sim FG$. \end{theorem} The equivalence between~\ref{jonas2} and~\ref{jonas3} is based on the case of finitely many variables. \begin{proposition} \label{nana} Fix $N \in \mathbb{N}$ and let $f,g \in H_{1} (\mathbb{D}^{N})$ and $F,G \in H_{1} (\mathbb{T}^{N})$ so that $f \sim F$ and $g \sim G$. Then, the following are equivalent \begin{enumerate} \item\label{nana2} $fg \in H_{1} (\mathbb{D}^{N})$ \item\label{nana3} $FG \in H_{1} (\mathbb{T}^{N})$ \end{enumerate} and, in this case, $fg \sim FG$. \end{proposition} \begin{proof} Let us suppose first that $fg \in H_{1} (\mathbb{D}^{N})$ and denote by $H \in H_{1} (\mathbb{T}^{N})$ the associated function. Then, since \[ F(\omega) = \lim_{r \to 1^{-}} f(r \omega) , \text{ and } G(\omega) = \lim_{r \to 1^{-}} g(r \omega) \, \] for almost all $\omega \in \mathbb{T}^{N}$, we have \[ H (\omega) = \lim_{r \to 1^{-}} (fg)(r\omega) = F(\omega) G(\omega) \] for almost all $\omega \in \mathbb{T}^{N}$. Therefore $F G = H \in H_{1}(\mathbb{T}^{N})$, and this yields~\ref{nana3}. \\ Let us conversely assume that $FG \in H_{1}(\mathbb{T}^{N})$, and take the associated function $h \in H_{1} (\mathbb{D}^{N})$. The product $fg : \mathbb{D}^{N} \to \mathbb{C}$ is a holomorphic function and $fg -h$ belongs to the Nevanlinna class $\mathcal{N}(\mathbb{D}^{N})$, that is \[ \sup_{0<r<1} \int\limits_{\mathbb{T}^{N}} \log^{+} \vert f (r\omega) g(r\omega) - h(r\omega) \vert \mathrm{d} \omega < \infty \, \] where $\log^{+}(x):= \max \{0, \log x\}$ (see \cite[Section~3.3]{rudin1969function} for a complete account on this space). Consider $H(\omega)$ defined for almost all $\omega \in \mathbb{T}^{N}$ as the radial limit of $fg-h$. Then by \cite[Theorem 3.3.5]{rudin1969function} there are two possibilities: either $\log \vert H \vert \in L_{1}(\mathbb{T}^{N})$ or $fg-h =0$ on $\mathbb{D}^{N}$. But, just as before, we have \[ \lim_{r \to 1^{-}} f(r\omega) g(r\omega) = F(\omega) G(\omega) = \lim_{r \to 1^{-}} h(r\omega) \] for almost all $\omega \in \mathbb{T}^{N}$, and then necessarily $H=0$, so that the first alternative cannot hold.
Thus $fg=h$ on $\mathbb{D}^{N}$, and $fg \in H_{1}(\mathbb{D}^{N})$. This shows that~\ref{nana3} implies~\ref{nana2} and completes the proof. \end{proof} For the general case we need the notion of the Nevanlinna class in the infinite dimensional framework. Given $\mathbb{D}_1^\infty := \ell_1 \cap \mathbb{D}^\infty$, a function $u: \mathbb{D}_1^\infty \to \mathbb{C}$ and $0< r < 1$, the mapping $u_{[r]}: \mathbb{T}^\infty \to \mathbb{C}$ is defined by \[ u_{[r]} (\omega) = u(r\omega_1, r^2 \omega_2, r^3 \omega_3, \cdots). \] The Nevanlinna class on infinitely many variables, introduced recently in \cite{guo2022dirichlet} and denoted by $\mathcal{N}(\mathbb{D}_1^\infty)$, consists of those holomorphic functions $u: \mathbb{D}_1^\infty \to \mathbb{C}$ such that \[ \sup\limits_{0<r<1} \int\limits_{\mathbb{T}^\infty} \log^+ \vert u_{[r]}(\omega) \vert \mathrm{d} \omega < \infty. \] We can now prove the general case. \begin{proof}[Proof of Theorem~\ref{jonas}] Let us show first that~\ref{jonas1} implies~\ref{jonas2}. Suppose that $D=\sum a_{n} n^{-s}, E= \sum b_{n} n^{-s} \in \mathcal{H}_{1}$ are such that $\big(\sum a_{n} n^{-s} \big) \big( \sum b_{n} n^{-s} \big) = \sum c_{n} n^{-s} \in \mathcal{H}_{1}$. Let $h \in H_{1}(\mathbb{D}_{2}^{\infty})$ be the holomorphic function associated to the product. Recall that, if $\alpha \in \mathbb{N}_{0}^{(\mathbb{N})}$ and $n = \mathfrak{p}^{\alpha} \in \mathbb{N}$, then \begin{equation} \label{producto1} c_{\alpha}(f) = a_{n} , \, c_{\alpha}(g) = b_{n} \text{ and } c_{\alpha} (h) = c_{n} = \sum_{jk=n} a_{j} b_{k} \,. \end{equation} On the other hand, the function $f \cdot g : \mathbb{D}_{2}^{\infty} \to \mathbb{C}$ is holomorphic and a straightforward computation shows that \begin{equation} \label{producto2} c_{\alpha} (fg) = \sum_{\beta + \gamma = \alpha} c_{\beta}(f) c_{\gamma}(g) \end{equation} for every $\alpha$. Now, if $jk=n = \mathfrak{p}^{\alpha}$ for some $\alpha \in \mathbb{N}_{0}^{(\mathbb{N})}$, then there are $\beta, \gamma \in \mathbb{N}_{0}^{(\mathbb{N})}$ so that $j = \mathfrak{p}^{\beta}$, $k = \mathfrak{p}^{\gamma}$ and $\beta + \gamma = \alpha$. This, together with \eqref{producto1} and \eqref{producto2}, shows that $c_{\alpha}(h) = c_{\alpha} (fg)$ for every $\alpha$ and, therefore, $fg=h \in H_{1} (\mathbb{D}_{2}^{\infty})$. This yields our claim.\\ Suppose now that $fg \in H_{1} (\mathbb{D}_{2}^{\infty})$ and take the corresponding Dirichlet series $\sum a_{n} n^{-s}$, $\sum b_{n} n^{-s}$, $\sum c_{n} n^{-s} \in \mathcal{H}_{1}$ (associated to $f$, $g$ and $fg$ respectively). The same argument as above shows that \[ c_{n} = c_{\alpha}(fg)= \sum_{\beta + \gamma = \alpha} c_{\beta}(f) c_{\gamma}(g) = \sum_{jk=n} a_{j} b_{k} \, , \] hence $\big(\sum a_{n} n^{-s} \big) \big( \sum b_{n} n^{-s} \big) = \sum c_{n} n^{-s} \in \mathcal{H}_{1}$, showing that~\ref{jonas2} implies~\ref{jonas1}.\\ Suppose now that $fg \in H_{1}(\mathbb{D}_{2}^{\infty})$ and let us see that~\ref{jonas3} holds. Let $H \in H_{1}(\mathbb{T}^{\infty})$ be the function associated to $fg$. Note first that $f_{N} \sim F_{N}$, $g_{N} \sim G_{N}$ and $(fg)_{N} \sim H_{N}$ for every $N$. A straightforward computation shows that $(fg)_{N} = f_{N} g_{N}$, and then this product is in $H_{1}(\mathbb{D}^{N})$. Then Proposition~\ref{nana} yields $f_{N} g_{N} \sim F_{N} G_{N}$, therefore \[ \hat{H}_{N} (\alpha) = \widehat{(F_{N}G_{N})} (\alpha) \] for every $\alpha \in \mathbb{N}_{0}^{(\mathbb{N})}$ and, then, $H_{N} = F_{N}G_{N}$ for every $N \in \mathbb{N}$.
We can find a subsequence in such a way that \[ \lim_{k} F_{N_{k}} (\omega) = F(\omega), \, \lim_{k} G_{N_{k}} (\omega) = G(\omega), \, \text{ and } \lim_{k} H_{N_{k}} (\omega) = H(\omega) \] for almost all $\omega \in \mathbb{T}^{\infty}$ (recall Remark~\ref{manon}). All this gives that $F(\omega)G(\omega) = H(\omega)$ for almost all $\omega \in \mathbb{T}^{\infty}$. Hence $FG = H \in H_{1} (\mathbb{T}^{\infty})$, and our claim is proved. \\ Finally, if $FG \in H_{1}(\mathbb{T}^{\infty})$, we denote by $h$ its associated function in $H_{1}(\mathbb{D}_{2}^{\infty})$. By \cite[Propostions~2.8 and 2.14]{guo2022dirichlet} we know that $H_1(\mathbb{D}_2^\infty)$ is contained in the Nevanlinna class $\mathcal{N}(\mathbb{D}_1^\infty)$, therefore $f,g,h \in \mathcal{N}(\mathbb{D}_1^\infty)$ and hence, by definition, $f\cdot g - h \in \mathcal{N}(\mathbb{D}_1^\infty)$. On the other hand, \cite[Theorem~2.4 and Corollary~2.11]{guo2022dirichlet} tell us that, if $u \in \mathcal{N}(\mathbb{D}_1^\infty)$, then the radial limit $u^*(\omega) = \lim\limits_{r\to 1^-} u_{[r]} (\omega)$ exists for almost all $\omega\in \mathbb{T}^\infty$. Even more, $u=0$ if and only if $u^*$ vanishes on some subset of $\mathbb{T}^\infty$ with positive measure. The radial limit of $f,g$ and $h$ coincide a.e. with $F, G$ and $F\cdot G$ respectively (see \cite[Theorem~1]{aleman2019fatou}). Since \[ (f\cdot g - h)^* (\omega)= \lim\limits_{r\to 1^-} f_{[r]}(\omega) \cdot g_{[r]}(\omega) -h_{[r]}(\omega) = 0, \] for almost all $\omega\in \mathbb{T}^\infty$, then $f\cdot g =h$ on $\mathbb{D}_1^\infty$. Finally, since the set $\mathbb{D}_1^\infty$ is dense in $\mathbb{D}_2^\infty$, by the continuity of the functions we have that $f\cdot g \in H_1(\mathbb{D}_2^\infty).$ \end{proof} As an immediate consequence of Theorem~\ref{jonas} we obtain the following. \begin{proposition} \label{charite} For every $1 \leq p, q \leq \infty$ we have \[ \mathfrak{M}(p,q) = \mathcal{M}_{\mathbb{D}_{2}^{\infty}}(p,q) = \mathcal{M}_{\mathbb{T}^{\infty}}(p,q) \,, \] and \[ \mathfrak{M}^{(N)}(p,q) = \mathcal{M}_{\mathbb{D}^{N}}(p,q) = \mathcal{M}_{\mathbb{T}^{N}}(p,q) \,, \] for every $N \in \mathbb{N}$, by means of the Bohr transform. \end{proposition} Again (as in Proposition~\ref{hilbert}), being a multiplier can be characterized in terms of the restrictions (this follows immediately from Proposition~\ref{hilbert} and Proposition~\ref{charite}). \begin{proposition}\label{remark multiplicadores} \, \begin{enumerate} \item $f \in \mathcal{M}_{\mathbb{D}^{\infty}_2}(p,q)$ if and only if $f_N \in \mathcal{M}_{\mathbb{D}^N_2}(p,q)$ for every $N \in \mathbb{N}$ and $\sup_{N} \Vert M_{f_{N}} \Vert < \infty$. \item $F \in \mathcal{M}_{\mathbb{T}^{\infty}}(p,q)$, then, $F_N \in \mathcal{M}_{\mathbb{T}^N}(p,q)$ for every $N \in \mathbb{N}$ and $\sup_{N} \Vert M_{F_{N}} \Vert < \infty$. \end{enumerate} \end{proposition} The following statement describes the spaces of multipliers, viewing them as Hardy spaces of Dirichlet series. A result of similar flavour for holomorphic functions in one variable appears in \cite{stessin2003generalized}. \begin{theorem}\label{descripcion} The following assertions hold true \begin{enumerate} \item \label{descr1} $\mathfrak{M}(\infty,q)= \mathcal{H}_q$ isometrically. \item \label{descr2} If $1\leq q<p<\infty$ then $\mathfrak{M}(p,q) = \mathcal{H}_{pq/(p-q)} $ \; isometrically. \item \label{descr3} If $1 \leq p \leq \infty$ then $\mathfrak{M}(p,p)= \mathcal{H}_{\infty}$ isometrically. 
\item \label{descr4} If $1 \le p<q \leq \infty$ then $\mathfrak{M}(p,q)=\{0\}$. \end{enumerate} The same equalities hold if we replace in each case $\mathfrak{M}$ and $\mathcal{H}$ by $\mathfrak{M}^{(N)}$ and $\mathcal{H}^{(N)}$ (with $N \in \mathbb{N}$) respectively. \end{theorem} \begin{proof} To get the result we use again the isometric identifications between the Hardy spaces of Dirichlet series and both Hardy spaces of functions, and also between their multipliers given in Proposition~\ref{charite}. Depending on each case we will use the most convenient identification, jumping from one to the other without further notification. \ref{descr1} We already noted that $\mathcal{M}_{\mathbb{T}^{N}}(\infty,q)\subset H_{q}(\mathbb{T}^N)$ with continuous inclusion (recall \eqref{aleluya}). On the other hand, if $D \in \mathcal{H}_{q}$ and $E \in \mathcal{H}_{\infty}$ then $D\cdot E$ a Dirichlet series in $\mathcal{H}_{q}$. Moreover, \[ \Vert M_D(E) \Vert_{\mathcal{H}_{q}} \leq \Vert D \Vert_{\mathcal{H}_{q}} \Vert E \Vert_{\mathcal{H}_{\infty}}. \] This shows that $\Vert M_D \Vert_{\mathfrak{M}(\infty,q)} \leq \Vert D \Vert_{\mathcal{H}_{q}},$ providing the isometric identification. \ref{descr2} Suppose $1 \leq q<p<\infty$ and take some $f \in H_{pq/(p-q)} (\mathbb{D}^\infty_2)$ and $g\in H_{p}(\mathbb{D}^\infty_2)$, then $f\cdot g$ is holomorphic on $\mathbb{D}^\infty_2$. Consider $t= \frac{p}{p-q}$ and note that $t$ is the conjugate exponent of $\frac{p}{q}$ in the sense that $\frac{q}{p} + \frac{1}{t} = 1$. Therefore given $M\in \mathbb{N}$ and $0< r <1$, by H\"older inequality \begin{align*} \left( \int\limits_{\mathbb{T}^M} \vert f\cdot g(r\omega,0) \vert^q \mathrm{d}\omega \right)^{1/q} & \leq \left( \int\limits_{\mathbb{T}^M} \vert f(r\omega, 0) \vert^{qt} \mathrm{d}\omega \right)^{1/qt}\left( \int\limits_{\mathbb{T}^M} \vert g(r\omega, 0) \vert^{qp/q} \mathrm{d}\omega \right)^{q/qp} \\ &= \left( \int\limits_{\mathbb{T}^M} \vert f(r\omega, 0) \vert^{qp/(p-q)} \mathrm{d}\omega \right)^{(p-q)/qp} \left( \int\limits_{\mathbb{T}^M} \vert g(r\omega, 0) \vert^p \mathrm{d}\omega \right)^{1/p} \\ &\leq \Vert f \Vert_{H_{pq/(p-q)}(\mathbb{D}^\infty_2)} \Vert g \Vert_{H_p(\mathbb{D}^\infty_2)}. \end{align*} Since this holds for every $M\in \mathbb{N}$ and $0<r<1$, then $f\in \mathcal{M}_{\mathbb{D}^\infty_2}(p,q)$ and furthermore $\Vert M_f \Vert_{\mathcal{M}_{\mathbb{D}^\infty_2}(p,q)} \leq \Vert f \Vert_{H_{pq/(p-q)}(\mathbb{D}^\infty_2)},$. Thus $H_{pq/(p-q)} (\mathbb{D}^\infty_2) \subseteq \mathcal{M}_{\mathbb{D}^\infty_2}(p,q)$. The case for $\mathbb{D}^{N}$ with $N\in\mathbb{N}$ follows with the same idea.\\ To check that the converse inclusion holds, take some $F \in \mathcal{M}_{\mathbb{T}^N}(p,q)$ (where $N \in \mathbb{N} \cup \{\infty\}$) and consider the associated multiplication operator $M_F : H_p(\mathbb{T}^N) \to H_{q}(\mathbb{T}^N)$ which, as we know, is continuous. Let us see that it can be extended to a continuous operator on $L_{q}(\mathbb{T}^{N})$. To see this, take a trigonometric polynomial $Q$, that is a finite sum of the form \[ Q(z)=\sum\limits_{\vert \alpha_i\vert \leq k} a_{\alpha} z^{\alpha} \,, \] and note that \begin{equation} \label{desc polinomio} Q= \left( \prod\limits_{j=1}^{M} z_{j}^{-k} \right) \cdot P, \end{equation} where $P$ is the polynomial defined as $P:= \sum\limits_{0\leq \beta_i \leq 2k} b_{\beta} z^{\beta}$ and $b_{\beta}= a_{\alpha}$ whenever $\beta = \alpha +(k,\cdots, k, 0)$. 
Then, \begin{align*} \left(\int\limits_{\mathbb{T}^N} \vert F\cdot Q(\omega)\vert^q \mathrm{d}\omega\right)^{1/q} &= \left(\int\limits_{\mathbb{T}^N} \vert F\cdot P(\omega)\vert^q \prod\limits_{j=1}^{M} \vert \omega_{j}\vert^{-kq} \mathrm{d}\omega\right)^{1/q} = \left(\int\limits_{\mathbb{T}^N} \vert F\cdot P(\omega)\vert^q \mathrm{d}\omega\right)^{1/q} \\ &\leq C \Vert P \Vert_{H_p(\mathbb{T}^N)} = C \left(\int\limits_{\mathbb{T}^N} \vert P(\omega)\vert^p \prod\limits_{j=1}^{M} \vert \omega_{j}\vert^{-kp} \mathrm{d}\omega\right)^{1/p} \\ &= C \Vert Q \Vert_{H_p(\mathbb{T}^N)}. \end{align*} Consider now an arbitrary $H\in L_p(\mathbb{T}^N)$ and, using \cite[Theorem~5.17]{defant2018Dirichlet} find a sequence of trigonometric polynomials $(Q_n)_n$ such that $Q_n \to H$ in $L_p$ and also a.e. on $\mathbb{T}^N$ (taking a subsequence if necessary). We have \[ \Vert F\cdot Q_n - F \cdot Q_m \Vert_{H_q(\mathbb{T}^N)} =\Vert F\cdot (Q_n-Q_m) \Vert_{H_q(\mathbb{T}^N)} \leq C \Vert Q_n - Q_m \Vert_{H_p(\mathbb{T}^N)} \to 0 \] which shows that $(F\cdot Q_n)_n$ is a Cauchy sequence in $L_q(\mathbb{T}^N)$. Since $F\cdot Q_n \to F\cdot H$ a.e. on $\mathbb{T}^N$, then this proves that $F\cdot H \in L_q (\mathbb{T}^N)$ and $F\cdot Q_n \to F\cdot H$ in $L_q(\mathbb{T}^N)$. Moreover, \[ \Vert F\cdot H \Vert_{H_q(\mathbb{T}^N)} = \lim \Vert F\cdot Q_n \Vert_{H_q(\mathbb{T}^N)} \leq C \lim \Vert Q_n \Vert_{H_p(\mathbb{T}^N)} = C \Vert H \Vert_{H_p(\mathbb{T}^N)}, \] and therefore the operator $M_F : L_p(\mathbb{T}^N) \to L_q (\mathbb{T}^N)$ is well defined and bounded. In particular, $\vert F \vert^q \cdot \vert H\vert^q \in L_1(\mathbb{T}^N)$ for every $H\in L_p(\mathbb{T}^N)$. Now, consider $H\in L_{p/q}(\mathbb{T}^N)$ then $\vert H\vert^{1/q} \in L_{p} (\mathbb{T}^N)$ and $\vert F\vert^q \cdot \vert H\vert \in L_1(\mathbb{T}^N)$ or, equivalently, $\vert F\vert^q \cdot H \in L_1(\mathbb{T}^N)$. Hence \[ \vert F \vert^q \in L_{p/q}(\mathbb{T}^N)^* = L_{p/(p-q)}(\mathbb{T}^N), \] and therefore $F\in L_{pq/(p-q)}(\mathbb{T}^N)$. To finish the argument, since $\hat{F}(\alpha)=0$ whenever $\alpha \in \mathbb{Z}^N \setminus \mathbb{N}_{0}^N$ then $F\in H_{pq/(p-q)}(\mathbb{T}^N)$. We then conclude that \[ H_{pq/(p-q)}( \mathbb{T}^N) \subseteq \mathcal{M}_{\mathbb{T}^{N}}(p,q) \,. \] In order to see the isometry, given $F\in H_{pq/(p-q)}(\mathbb{T}^N)$ and let $G=\vert F \vert^r \in L_p(\mathbb{T}^N)$ with $r = q/(p-q)$ then $F\cdot G \in L_q(\mathbb{T}^N)$. Let $Q_n$ a sequence of trigonometric polynomials such that $Q_n \to G$ in $L_p(\mathbb{T}^N)$, since $M_F: L_p(\mathbb{T}^N) \to L_q(\mathbb{T}^N)$ is continuous then $F\cdot Q_n = M_F(Q_n) \to F\cdot G$. On the other hand, writing $Q_n$ as \eqref{desc polinomio} we have for each $n\in \mathbb{N}$ a polynomial $P_n$ such that $\Vert F\cdot Q_n \Vert_{L_q(\mathbb{T}^N)} = \Vert F \cdot P_n \Vert_{L_q(\mathbb{T}^N)}$ and $\Vert Q_n \Vert_{L_p(\mathbb{T}^N)} = \Vert P_n \Vert_{L_p(\mathbb{T}^N)}$. Then we have that \begin{multline*} \Vert F \cdot G \Vert_{L_q(\mathbb{T}^N)} = \lim\limits_n \Vert F \cdot Q_n \Vert_{L_q(\mathbb{T}^N)} = \lim\limits_n \Vert F \cdot P_n \Vert_{L_q(\mathbb{T}^N)} \leq \lim\limits_n \Vert M_F \Vert_{\mathcal{M}_{\mathbb{T}^{N}}(p,q)} \Vert P_n \Vert_{L_p(\mathbb{T}^N)} \\= \lim\limits_n \Vert M_F \Vert_{\mathcal{M}_{\mathbb{T}^{N}}(p,q)} \Vert Q_n \Vert_{L_p(\mathbb{T}^N)} = \Vert M_F \Vert_{\mathcal{M}_{\mathbb{T}^{N}}(p,q)} \Vert G \Vert_{L_p(\mathbb{T}^N)}. 
\end{multline*} Now, since \[ \Vert F \Vert_{L_{pq/(p-q)}(\mathbb{T}^N)}^{p/(p-q)} = \Vert F^{r + 1} \Vert_{L_q(\mathbb{T}^N)} = \Vert F \cdot G \Vert_{L_q(\mathbb{T}^N)} \] and \[ \Vert F \Vert_{L_{pq/(p-q)}(\mathbb{T}^N)}^{q/(p-q)} = \Vert F^{r} \Vert_{L_p(\mathbb{T}^N)} = \Vert G \Vert_{L_p(\mathbb{T}^N)} \] then \[ \Vert M_F \Vert_{\mathcal{M}_{\mathbb{T}^{N}}(p,q)} \geq \Vert F \Vert_{L_{pq/(p-q)}}= \Vert F \Vert_{H_{pq/(p-q)}(\mathbb{T}^N)}, \] as we wanted to show. \ref{descr3} was proved in \cite[Theorem~7]{bayart2002hardy}. We finish the proof by seeing that~\ref{descr4} holds. On one hand, the previous case and \eqref{inclusiones} immediately give the inclusion \[ \{0\} \subseteq \mathcal{M}_{\mathbb{T}^{N}}(p,q) \subseteq H_{\infty}(\mathbb{T}^N). \] We now show that $\mathcal{M}_{\mathbb{D}_{2}^{N}}(p,q)=\{0\}$ for any $N\in\mathbb{N} \cup \{\infty\}$. We consider in first place the case $N \in \mathbb{N}$. For $1 \leq p < q < \infty$, we fix $f \in \mathcal{M}_{\mathbb{D}^N_2}(p,q)$ and $M_{f}$ the associated multiplication operator from $H_p(\mathbb{D}^N)$ to $H_q(\mathbb{D}^N)$. Now, given $g\in H_{p}(\mathbb{D}^{N}_2)$, by \eqref{eq: Cole-Gamelin} we have \begin{equation}\label{ec. desigualdad del libro} \vert f\cdot g(z) \vert \leq \left( \prod\limits_{j=1}^N \frac{1}{1-\vert z_j \vert^2} \right)^{1/q} \Vert f\cdot g\Vert_{H_q(\mathbb{D}^N_2)} \leq \left( \prod\limits_{j=1}^N \frac{1}{1-\vert z_j \vert^2} \right)^{1/q} C \Vert g \Vert_{H_p(\mathbb{D}^N_2)}. \end{equation} Now since $f\in H_{\infty}(\mathbb{D}^N_2)$ and \[ \Vert f \Vert_{H_\infty(\mathbb{D}^N)} = \lim\limits_{r\to 1} \sup\limits_{z\in r\mathbb{D}^N_2} \vert f(z) \vert = \lim\limits_{r\to 1} \sup\limits_{z\in r\mathbb{T}^N} \vert f(z) \vert, \] then there is a sequence $(u_n)_n\subseteq \mathbb{D}^N$ such that $\Vert u_n \Vert_{\infty} \to 1$ and \begin{equation}\label{limite sucesion} \vert f(u_n) \vert \to \Vert f \Vert_{H_\infty(\mathbb{D}^N_2)}. \end{equation} For each $u_n$ there is a non-zero function $g_n\in H_{p}(\mathbb{D}^N)$ (recall \eqref{optima}) such that \[ \vert g_n(u_n) \vert = \left( \prod\limits_{j=1}^N \frac{1}{1-\vert u_n^j \vert^2} \right)^{1/p} \Vert g_n \Vert_{H_p(\mathbb{D}^N)}. \] From this and \eqref{ec. desigualdad del libro} we get \[ \vert f(u_n) \vert \left( \prod\limits_{j=1}^N \frac{1}{1-\vert u_n^j \vert^2} \right)^{1/p} \Vert g_n \Vert_{H_p(\mathbb{D}^N)} \leq \left( \prod\limits_{j=1}^N \frac{1}{1-\vert u_n^j \vert^2} \right)^{1/q} C \Vert g_n \Vert_{H_p(\mathbb{D}^N)}. \] Then, \[ \vert f(u_n) \vert \left( \prod\limits_{j=1}^N \frac{1}{1-\vert u_n^j \vert^2} \right)^{1/p-1/q} \leq C. \] Since $1/p-1/q>0$ we have that $\left( \prod\limits_{j=1}^N \frac{1}{1-\vert u_n^j \vert^2} \right)^{1/p-1/q} \to \infty,$ and then, by the previous inequality, $\vert f(u_n) \vert \to 0$. By \eqref{limite sucesion} this shows that $\Vert f \Vert_{H_\infty(\mathbb{D}^N)}=0$ and this gives the claim for $q<\infty$. Now if $q=\infty$, by noticing that $H_{\infty}(\mathbb{D}^N)$ is contained in $H_{t}(\mathbb{D}^N)$ for every $1 \leq p < t < \infty$ the result follows from the previous case. This concludes the proof for $N \in \mathbb{N}$.\\ To prove that $\mathcal{M}_{\mathbb{D}^{\infty}_2}(p,q)=\{0\}$, fix again $f \in \mathcal{M}_{\mathbb{D}^{\infty}_2}(p,q).$ By Proposition~\ref{remark multiplicadores}, for every $N \in \mathbb{N}$ the truncated function $f_N \in \mathcal{M}_{\mathbb{D}^N_2}(p,q)$ and therefore, by what we have shown before, is the zero function. 
Now the proof follows using that $(f_{N})_{N}$ converges pointwise to $f$. \end{proof} \section{Multiplication operator} Given a multiplier $D \in \mathfrak{M}(p,q)$, we study in this section several properties of its associated multiplication operator $M_D : \mathcal{H}_p \to \mathcal{H}_q$. In \cite{vukotic2003analytic} Vukoti\'c provides a very complete description of certain Toeplitz operators for Hardy spaces of holomorphic functions of one variable. In particular he studies the spectrum, the range and the essential norm of these operators. Bearing in mind the relation between the sets of multipliers that we proved above (Proposition~\ref{charite}), it is natural to ask whether similar properties hold when we look at the multiplication operators on the Hardy spaces of Dirichlet series. In our first result we characterize which operators are indeed multiplication operators. These happen to be exactly those that commute with the multiplication operators associated to the monomials given by the prime numbers. \begin{theorem} Let $1\leq p,q \leq \infty$. A bounded operator $T: \mathcal{H}_p \to \mathcal{H}_q$ is a multiplication operator if and only if $T$ commutes with the multiplication operators $M_{\mathfrak{p}_i^{-s}}$ for every $i \in \mathbb{N}$. The same holds if we replace in each case $\mathcal{H}$ by $\mathcal{H}^{(N)}$ (with $N \in \mathbb{N}$), and considering $M_{\mathfrak{p}_i^{-s}}$ with $1 \leq i \leq N$. \end{theorem} \begin{proof} Suppose first that $T: \mathcal{H}_p \to \mathcal{H}_q$ is a multiplication operator (that is, $T=M_D$ for some Dirichlet series $D$) and, for $i \in \mathbb{N}$, let $\mathfrak{p}_i^{-s}$ be a monomial; then \[ T \circ M_{\mathfrak{p}_i^{-s}} (E)= D \cdot \mathfrak{p}_i^{-s} \cdot E= \mathfrak{p}_i^{-s} \cdot D \cdot E = M_{\mathfrak{p}_i^{-s}} \circ T (E). \] That is, $T$ commutes with $M_{\mathfrak{p}_i^{-s}}$. For the converse, suppose now that $T: \mathcal{H}_p \to \mathcal{H}_q$ is a bounded operator that commutes with the multiplication operators $M_{\mathfrak{p}_i^{-s}}$ for every $i \in \mathbb{N}$. Let us see that $T = M_D$ with $D = T(1)$. Indeed, for each $\mathfrak{p}_i^{-s}$ and $k\in \mathbb{N}$ we have that \[ T((\mathfrak{p}_i^{k})^{-s})=T((\mathfrak{p}_i^{-s})^{k}) = T(M_{\mathfrak{p}_i^{-s}}^{k}(1)) = M_{\mathfrak{p}_i^{-s}}^{k}( T(1)) = (\mathfrak{p}_i^{-s})^{k} \cdot D = (\mathfrak{p}_i^{k})^{-s} \cdot D, \] and then, given $n\in \mathbb{N}$ and $\alpha \in \mathbb{N}_0^{(\mathbb{N})}$ such that $n = \mathfrak{p}_1^{\alpha_1} \cdots \mathfrak{p}_k^{\alpha_k}$, \[ T(n^{-s})= T( \prod\limits_{j=1}^k (\mathfrak{p}_j^{\alpha_j})^{-s} ) = T ( M_{\mathfrak{p}_1^{-s}}^{\alpha_1} \circ \cdots \circ M_{\mathfrak{p}_k^{-s}}^{\alpha_k} (1) ) = M_{\mathfrak{p}_1^{-s}}^{\alpha_1} \circ \cdots \circ M_{\mathfrak{p}_k^{-s}}^{\alpha_k} ( T(1) ) = (n^{-s}) \cdot D. \] This implies that $T(P)= P \cdot D$ for every Dirichlet polynomial $P$. Take now some $E\in \mathcal{H}_p$ and choose a sequence of polynomials $P_n$ that converges in norm to $E$ if $1 \leq p < \infty$ or weakly if $p= \infty$ (see \cite[Theorems~5.18 and~11.10]{defant2018Dirichlet}). In any case, if $s \in \mathbb{C}_{1/2}$, the continuity of the evaluation at $s$ (see again \cite[Corollary~13.3]{defant2018Dirichlet}) yields $P_n(s) \to E(s)$. Since $T$ is continuous, we have that \[ T(E) = \lim\limits_n T(P_n)= \lim\limits_n P_n\cdot D \] (where the limit is in the weak topology if $p=\infty$). Then for each $s\in \mathbb{C}$ such that $\re s > 1/2$, we have \[ T(E)(s) = \lim\limits_n (P_n\cdot D)(s) = E(s) D(s).
\] Therefore, $T(E) = D \cdot E$ for every $E \in \mathcal{H}_p$. In other words, $T$ is equal to $M_D$, which concludes the proof. \end{proof} Given a bounded operator $T: E \to F$, the essential norm is defined as \[ \Vert T \Vert_{\ess} = \inf \{ \Vert T - K \Vert : \; K : E \to F \; \text{ compact} \}. \] This norm tells us how far from being compact $T$ is. The following result shows a series of comparisons between the essential norm of $M_D : \mathcal{H}_p \to \mathcal{H}_q$ and the norm of $D$, depending on $p$ and $q$. In all cases, as a consequence, the operator is compact if and only if $D=0$. \begin{theorem} \label{chatruc} \; \begin{enumerate} \item\label{chatruc1} Let $1\leq q < p < \infty$, $D\in \mathcal{H}_{pq/(p-q)}$ and $M_D$ its associated multiplication operator from $\mathcal{H}_p$ to $\mathcal{H}_q$. Then \[ \Vert D \Vert_{\mathcal{H}_q} \leq \Vert M_D \Vert_{\ess} \leq \Vert M_D \Vert = \Vert D \Vert_{\mathcal{H}_{pq/(p-q)}}. \] \item \label{chatruc2} Let $1\leq q < \infty$, $D\in \mathcal{H}_q$ and $M_D : \mathcal{H}_\infty \to \mathcal{H}_q$ the multiplication operator. Then \[ \frac{1}{2}\Vert D \Vert_{\mathcal{H}_q} \leq \Vert M_D \Vert_{\ess} \leq \Vert M_D \Vert = \Vert D \Vert_{\mathcal{H}_q}. \] \end{enumerate} In particular, $M_D$ is compact if and only if $D=0$. The same equalities hold if we replace $\mathcal{H}$ by $\mathcal{H}^{(N)}$ (with $N \in \mathbb{N}$). \end{theorem} We start with a lemma based on \cite[Proposition~2]{brown1984cyclic} for Hardy spaces of holomorphic functions. We prove that, for bounded sequences, weak-star convergence and uniform convergence on half-planes are equivalent on Hardy spaces of Dirichlet series. We are going to use that $\mathcal{H}_{p}$ is a dual space for every $1 \leq p < \infty$. For $1<p<\infty$ this is obvious because the space is reflexive. For $p=1$, in \cite[Theorem~7.3]{defantperez_2018} it is shown, for Hardy spaces of vector valued Dirichlet series, that $\mathcal{H}_{1}(X)$ is a dual space if and only if $X$ has the Analytic Radon-Nikodym property. Since $\mathbb{C}$ has the ARNP, this gives what we need. We include here an alternative proof in more elementary terms. \begin{proposition} \label{basile} The space $\mathcal{H}_1$ is a dual space. \end{proposition} \begin{proof} Denote by $(B_{H_1}, \tau_0)$ the closed unit ball of $H_1(\mathbb{D}_2^\infty)$, endowed with the topology $\tau_0$ given by the uniform convergence on compact sets. Let us show that $(B_{H_1}, \tau_0)$ is a compact set. Note first that, given a compact $K\subseteq \ell_2$ and $\varepsilon >0$, there exists $j_0 \in \mathbb{N}$ such that $\sum\limits_{j\geq j_0} \vert z_j \vert^2 < \varepsilon$ for all $z\in K$ \cite[Page 6]{diestel2012sequences}. Then, from the Cole-Gamelin inequality~\eqref{eq: Cole-Gamelin}, the set \[ \{f(z) : f \in B_{H_1},\, z \in K \} \subset \mathbb{C} \] is bounded for each compact set $K$. By Montel's theorem (see e.g. \cite[Theorem~15.50]{defant2018Dirichlet}), $(B_{H_1},\tau_0)$ is relatively compact. We now show that $(B_{H_1}, \tau_0)$ is closed. Indeed, suppose that $(f_\alpha) \subset B_{H_1}$ is a net that converges to some function $f$ uniformly on compact sets; then we obviously have \[ \int\limits_{\mathbb{T}^N} \vert f(r\omega,0,0, \cdots) \vert \mathrm{d} \omega \leq \int\limits_{\mathbb{T}^N} \vert f(r\omega,0,0, \cdots) -f_\alpha(r\omega,0,0, \cdots) \vert \mathrm{d} \omega + \int\limits_{\mathbb{T}^N} \vert f_\alpha(r\omega,0,0, \cdots) \vert \mathrm{d} \omega.
\] Since the first term tends to $0$ and the second term is less than or equal to $1$ for every $N \in \mathbb{N}$ and every $0 < r <1$, the limit function $f$ belongs to $B_{H_1}$. Thus, $(B_{H_1}, \tau_0)$ is compact. \\ We consider now the set of functionals \[ E = \{ev_z: H_1(\mathbb{D}_2^\infty) \to \mathbb C : z \in \mathbb{D}_2^\infty\}. \] Note that the weak topology $w(H_1,E)$ is exactly the topology given by pointwise convergence. Thus, since a priori $\tau_0$ is clearly a stronger topology than $w(H_1,E)$, we have that $(B_{H_1},w(H_1,E))$ is also compact. Since $E$ separates points, by \cite[Theorem~1]{kaijser1977note}, $H_1(\mathbb{D}_2^\infty)$ is a dual space and hence, using the Bohr transform, $\mathcal{H}_1$ is also a dual space. \end{proof} \begin{lemma}\label{bastia} Let $1\leq p <\infty$ and $(D_n) \subseteq \mathcal{H}_p$. Then the following statements are equivalent: \begin{enumerate} \item \label{bastia1} $D_n \to 0$ in the weak-star topology. \item \label{bastia2} $D_n(s) \to 0$ for each $s\in \mathbb{C}_{1/2}$ and $\Vert D_n \Vert_{\mathcal{H}_p} \leq C$ for some $C>0$. \item \label{bastia3} $D_n \to 0$ uniformly on each half-plane $\mathbb{C}_{\sigma}$ with $\sigma > 1/2$ and $\Vert D_n \Vert_{\mathcal{H}_p} \leq C$ for some $C>0$. \end{enumerate} \end{lemma} \begin{proof} The implication~\ref{bastia1} $\Rightarrow$~\ref{bastia2} follows from the continuity of the evaluations in the weak-star topology, and because the convergence in this topology implies that the sequence is bounded. Let us see that~\ref{bastia2} implies~\ref{bastia3}. Suppose not; then there exist $\varepsilon>0$, a subsequence $(D_{n_j})_j$ and a half-plane $\mathbb{C}_\sigma$ with $\sigma > 1/2$ such that $\sup\limits_{s \in \mathbb{C}_\sigma} \vert D_{n_j}(s) \vert \geq \varepsilon$. Since $D_{n_j} = \sum\limits_{m} a_m^{n_j} m^{-s}$ is uniformly bounded, by Montel's theorem for $\mathcal{H}_p$ (see \cite[Theorem~3.2]{defant2021frechet}), there exists $D = \sum\limits_{m} a_m m^{-s} \in \mathcal{H}_p$ such that \[ \sum\limits_{m} \frac{a_m^{n_j}}{m^{\delta}} m^{-s} \to \sum\limits_{m} \frac{a_m}{m^{\delta}} m^{-s} \; \text{in} \; \mathcal{H}_p \] for every $\delta >0$. Given $s \in \mathbb{C}_{1/2}$, we write $s= s_0 + \delta$ with $\delta >0$ and $s_0 \in \mathbb{C}_{1/2}$, to have \[ D_{n_j}(s) = \sum\limits_{m} a_m^{n_j} m^{-(s_0 + \delta)} = \sum\limits_{m} \frac{a_m^{n_j}}{m^{\delta}} m^{-s_0} \to \sum\limits_{m} \frac{a_m}{m^{\delta}} m^{-s_0} = D(s_0+\delta) = D(s). \] We conclude that $D=0$ and by the Cole-Gamelin inequality for Dirichlet series (see \cite[Corollary~13.3]{defant2018Dirichlet}) we have \begin{align*} \varepsilon &\leq \sup\limits_{\re s > 1/2 + \sigma} \vert D_{n_j} (s) \vert = \sup\limits_{\re s > 1/2 + \sigma/2} \vert D_{n_j} (s + \sigma/2) \vert \\ &= \sup\limits_{\re s > 1/2 + \sigma/2} \vert \sum\limits_{m} \frac{a_m^{n_j}}{m^{\sigma/2}} m^{-s} \vert \leq \zeta( 2 \re s)^{1/p} \Bigg\Vert \sum\limits_{m} \frac{a_m^{n_j}}{m^{\sigma/2}} m^{-s} \Bigg\Vert_{\mathcal{H}_p}\\ &\leq \zeta(1+ \sigma)^{1/p} \Bigg\Vert \sum\limits_{m} \frac{a_m^{n_j}}{m^{\sigma/2}} m^{-s} \Bigg\Vert_{\mathcal{H}_p} \to 0, \end{align*} for every $\sigma >0$, which is a contradiction. To see that~\ref{bastia3} implies~\ref{bastia1}, let $B_{\mathcal{H}_p}$ denote the closed unit ball of $\mathcal{H}_{p}$. Since for each $1 \leq p <\infty$ the space $\mathcal{H}_{p}$ is a dual space, by Alaoglu's theorem, $(B_{\mathcal{H}_p}, w^*)$ (i.e. endowed with the weak-star topology) is compact.
On the other hand $(B_{\mathcal{H}_p}, \tau_{0})$ (that is, endowed with the topology of uniform convergence on compact sets) is a Hausdorff topological space. If we show that the identity $Id : (B_{\mathcal{H}_p}, w^*) \to (B_{\mathcal{H}_p}, \tau_{0})$ is continuous, then it is a homeomorphism and the proof is completed. To see this let us note first that $\mathcal{H}_p$ is separable (note that the set of Dirichlet polynomials with rational coefficients is dense in $\mathcal{H}_p$) and then $(B_{\mathcal{H}_p}, w^*)$ is metrizable (see \cite[Theorem~5.1]{conway1990course}). Hence it suffices to work with sequences. If a sequence $(D_{n})_{n}$ converges in $w^{*}$ to some $D$, then in particular $(D_{n}-D)_{n}$ $w^{*}$-converges to $0$ and, by what we just have seen, it converges uniformly on compact sets. This shows that $Id$ is continuous, as we wanted. \end{proof} Now we prove Theorem~\ref{chatruc}. The arguments should be compared with \cite[Propositions~4.3 and~5.5]{demazeux2011essential} where similar statements have been obtained for weighted composition operators for holomorphic functions of one complex variable. \begin{proof}[Proof of Theorem~\ref{chatruc}] \ref{chatruc1} By definition $\Vert M_D \Vert_{\ess} \leq \Vert M_D \Vert = \Vert D \Vert_{\mathcal{H}_{pq/(p-q)}}$. In order to see the lower bound, for each $n \in \mathbb{N}$ consider the monomial $E_n= (2^n)^{-s} \in \mathcal{H}_p$. Clearly $\Vert E_n \Vert_{\mathcal{H}_p} =1$ for every $n$, and $E_n(s) \to 0$ for each $s\in \mathbb{C}_{1/2}$. Then, by Lemma~\ref{bastia}, $E_n\to 0$ in the weak-star topology. Take now some compact operator $K: \mathcal{H}_p \to \mathcal{H}_q$ and note that, since $\mathcal{H}_p$ is reflexive, we have $K(E_n) \to 0$, and hence \begin{align*} \Vert M_D -K \Vert \geq \limsup\limits_{n\to \infty} \Vert M_D(E_n) & - K(E_n) \Vert_{\mathcal{H}_q} \\ & \geq \limsup\limits_{n\to \infty} \Vert D\cdot E_n \Vert_{\mathcal{H}_q} -\Vert K(E_n) \Vert_{\mathcal{H}_q} = \Vert D \Vert_{\mathcal{H}_q}. \end{align*} \ref{chatruc2} Let $K: \mathcal{H}_\infty \to \mathcal{H}_q$ be a compact operator, and take again $E_n= (2^n)^{-s} \in \mathcal{H}_\infty$ for each $n\in \mathbb{N}$. Since $\Vert E_n \Vert_{\mathcal{H}_\infty} =1$ then there exists a subsequence $(E_{n_j})_j$ such that $(K(E_{n_j}))_j$ converges in $\mathcal{H}_q$. Given $\varepsilon > 0$ there exists $m\in \mathbb{N}$ such that if $j,l \geq m$ then \[ \Vert K(E_{n_j})-K(E_{n_l}) \Vert_{\mathcal{H}_q} < \varepsilon. \] On the other hand, if $D=\sum a_k k^{-s}$ then $D\cdot E_{n_l}= \sum a_k (k\cdot 2^{n_l})^{-s}$ and by \cite[Proposition~11.20]{defant2018Dirichlet} the norm in $\mathcal{H}_q$ of \[ (D\cdot E_{n_l})_\delta = \sum \frac{a_k}{(k\cdot 2^{n_l})^{\delta}} (k\cdot 2^{n_l})^{-s} \] tends increasingly to $\Vert D \cdot E_{n_l}\Vert_{\mathcal{H}_q} = \Vert D \Vert_{\mathcal{H}_q}$ when $\delta \to 0$. Fixed $j\geq m$, there exists $\delta >0$ such that \[ \Vert (D\cdot E_{n_j})_\delta \Vert_{\mathcal{H}_q} \geq \Vert D \Vert_{\mathcal{H}_q} - \varepsilon. 
\] Given that $\Vert \frac{E_{n_j} - E_{n_l}}{2} \Vert_{\mathcal{H}_\infty} = 1$ for every $j \not= l$, we have \begin{align*} \Vert M_D - K \Vert & \geq \Bigg\Vert (M_D -K) \frac{E_{n_j} - E_{n_l}}{2} \Bigg\Vert_{\mathcal{H}_q} \\ &\geq \frac{1}{2} \Vert (D \cdot E_{n_j} - D \cdot E_{n_l})_{\delta} \Vert_{\mathcal{H}_q} - \frac{1}{2} \Vert K(E_{n_j})-K(E_{n_l}) \Vert_{\mathcal{H}_q} \\ & >\frac{1}{2} (\Vert (D \cdot E_{n_j})_{\delta} \Vert_{\mathcal{H}_q} - \Vert (D \cdot E_{n_l})_{\delta} \Vert_{\mathcal{H}_q}) - \varepsilon/2 \\ & \geq \frac{1}{2} \Vert D \Vert_{\mathcal{H}_q} - \frac{1}{2} \Vert (D \cdot E_{n_l})_{\delta} \Vert_{\mathcal{H}_q} - \varepsilon. \end{align*} Finally, since \[ \Vert (D \cdot E_{n_l})_{\delta} \Vert_{\mathcal{H}_q} \leq \Vert D_\delta \Vert_{\mathcal{H}_q} \Vert (E_{n_l})_{\delta} \Vert_{\mathcal{H}_\infty} \leq \Vert D_\delta \Vert_{\mathcal{H}_q} \Vert \frac{(2^{n_l})^{-s}}{2^{n_l \delta}} \Vert_{\mathcal{H}_\infty} = \Vert D_\delta \Vert_{\mathcal{H}_q} \cdot \frac{1}{2^{n_l \delta}}, \] and the latter tends to $0$ as $l \to \infty$, we finally have $\Vert M_D -K \Vert \geq \frac{1}{2} \Vert D \Vert_{\mathcal{H}_q}$. \end{proof} In the case of endomorphisms, that is, $p=q$, we give the following bounds for the essential norm. \begin{theorem}\label{saja} Let $D\in \mathcal{H}_\infty$ and $M_D : \mathcal{H}_p \to \mathcal{H}_p$ the associated multiplication operator. \begin{enumerate} \item\label{saja1} If $1 < p \leq \infty$, then \[ \Vert M_D \Vert_{\ess} = \Vert M_D \Vert = \Vert D \Vert_{\mathcal{H}_\infty}. \] \item\label{saja2} If $p=1$, then \[ \max\{\frac{1}{2}\Vert D \Vert_{\mathcal{H}_\infty} \; , \; \Vert D \Vert_{\mathcal{H}_1} \} \leq \Vert M_D \Vert_{\ess} \leq \Vert M_D \Vert = \Vert D \Vert_{\mathcal{H}_\infty}. \] \end{enumerate} In particular, $M_D$ is compact if and only if $D=0$. The same equalities hold if we replace $\mathcal{H}$ by $\mathcal{H}^{(N)}$, with $N \in \mathbb{N}$. \end{theorem} The previous theorem will be a consequence of Proposition~\ref{ubeda}, which we feel is interesting on its own. For the proof we need the following technical lemma in the spirit of \cite[Proposition~2]{brown1984cyclic}. It relates weak-star convergence and uniform convergence on compact sets for Hardy spaces of holomorphic functions. It is a sort of `holomorphic version' of Lemma~\ref{bastia}. \begin{lemma}\label{maciel} Let $1\leq p <\infty$, $N\in \mathbb{N}\cup \{\infty\}$ and $(f_n) \subseteq H_p(\mathbb{D}^N_2)$. Then the following statements are equivalent: \begin{enumerate} \item\label{maciel1} $f_n \to 0$ in the weak-star topology, \item\label{maciel2} $f_n(z) \to 0$ for each $z\in \mathbb{D}^N_2$ and $\Vert f_n \Vert_{H_p(\mathbb{D}^N_2)} \leq C$ for some $C>0$, \item\label{maciel3} $f_n \to 0$ uniformly on compact sets of $\mathbb{D}^N_2$ and $\Vert f_n \Vert_{H_p(\mathbb{D}^N_2)} \leq C$ for some $C>0$. \end{enumerate} \end{lemma} \begin{proof} \ref{maciel1} $\Rightarrow$~\ref{maciel2} and~\ref{maciel3} $\Rightarrow$~\ref{maciel1} are proved with the same arguments used in Lemma~\ref{bastia}. Let us see~\ref{maciel2} $\Rightarrow$~\ref{maciel3}. Suppose not; then there exist $\varepsilon>0$, a subsequence $(f_{n_j})_j$ and a compact set $K \subseteq \mathbb{D}_{2}^{N}$ such that $\Vert f_{n_j}\Vert_{H_{\infty}(K)} \geq \varepsilon$.
Since $f_{n_j}$ is bounded, by Montel's theorem for $H_p(\mathbb{D}^N_2)$ (see \cite[Theorem~2]{vidal2020montel}), we can take a subsequence $f_{n_{j_l}}$ and $f\in H_p(\mathbb{D}^N_2)$ such that $f_{n_{j_l}} \to f$ uniformly on compact sets. But since it tends pointwise to zero, $f=0$, which is a contradiction. \end{proof} \begin{proposition}\label{ubeda} \; Let $1\leq p < \infty$, $f\in H_{\infty}(\mathbb{D}^\infty_2)$ and $M_f : H_p(\mathbb{D}^\infty_2) \to H_p(\mathbb{D}^\infty_2)$ the multiplication operator. If $p>1$ then \[ \Vert M_f \Vert_{\ess} = \Vert M_f \Vert = \Vert f \Vert_{H_{\infty}(\mathbb{D}^\infty_2)}. \] If $p=1$ then \[ \Vert M_f\Vert \geq \Vert M_f \Vert_{\ess} \geq \frac{1}{2} \Vert M_f \Vert. \] In particular $M_f : H_p(\mathbb{D}^\infty_2) \to H_p(\mathbb{D}^\infty_2)$ is compact if and only if $f=0$. The same equalities hold if we replace $\mathbb{D}^\infty_2$ by $\mathbb{D}^N$, with $N \in \mathbb{N}$. \end{proposition} \begin{proof} The inequality $\Vert M_f \Vert_{\ess} \leq \Vert M_f \Vert = \Vert f\Vert_{H_{\infty}(\mathbb{D}^N_2)}$ is already known for every $N\in \mathbb{N}\cup\{\infty\}$. It only remains, then, to see that \begin{equation} \label{cilindro} \Vert M_f \Vert \leq \Vert M_f \Vert_{\ess} \,. \end{equation} We begin with the case $N \in \mathbb{N}$. Assume first that $p>1$, and take a sequence $(z^{(n)})_n \subseteq \mathbb{D}^N$, with $\Vert z^{(n)} \Vert_\infty \to 1$, such that $\vert f(z^{(n)}) \vert \to \Vert f \Vert_{H_{\infty}(\mathbb{D}^N)}$. Consider now the function given by \[ h_{z^{(n)}}(u) = \left( \prod\limits_{j=1}^N \frac{1- \vert z^{(n)}_j\vert^2}{(1- \overline{z^{(n)}_j}u_j)^2}\right)^{1/p}, \] for $u \in \mathbb{D}^{N}$. Now, by the Cole-Gamelin inequality \eqref{eq: Cole-Gamelin}, \[ \vert f(z^{(n)})\vert = \vert f(z^{(n)}) \cdot h_{z^{(n)}}(z^{(n)}) \vert \left( \prod\limits_{j=1}^N \frac{1}{1-\vert z^{(n)}_j \vert^2} \right)^{-1/p} \leq \Vert f \cdot h_{z^{(n)}} \Vert_{H_p(\mathbb{D}_2^N)} \leq \Vert f \Vert_{H_\infty(\mathbb{D}^N_2)}, \] and then $\Vert f \cdot h_{z^{(n)}} \Vert_{H_p(\mathbb{D}^N_2)} \to \Vert f \Vert_{H_{\infty}(\mathbb{D}^N_2)}$. \\ Observe that $\Vert h_{z^{(n)}} \Vert_{H_p(\mathbb{D}^N)} =1$ and that $ h_{z^{(n)}}(u) \to 0$ as $n\to \infty$ for every $u\in \mathbb{D}^N$. Then, by Lemma~\ref{maciel}, $h_{z^{(n)}}$ tends to zero in the weak-star topology and then, since $H_p(\mathbb{D}^N_2)$ is reflexive (recall that $1<p<\infty$), also in the weak topology. So, if $K$ is a compact operator on $H_p(\mathbb{D}^N_2)$ then $K(h_{z^{(n)}}) \to 0$ and therefore \begin{multline*} \Vert M_f - K \Vert \geq \limsup\limits_{n \to \infty} \Vert f\cdot h_{z^{(n)}} - K(h_{z^{(n)}}) \Vert_{H_p(\mathbb{D}^N_2)} \\ \geq \limsup\limits_{n\to \infty} \Vert f\cdot h_{z^{(n)}} \Vert_{H_p(\mathbb{D}^N_2)} -\Vert K(h_{z^{(n)}}) \Vert_{H_p(\mathbb{D}^N_2)} =\Vert f\Vert_{H_{\infty}(\mathbb{D}^N_2)}. \end{multline*} Thus, $\Vert M_f - K\Vert \geq \Vert f \Vert_{H_{\infty}(\mathbb{D}^N_2)}$ for each compact operator $K$ and hence $\Vert M_f \Vert_{\ess} \geq \Vert M_f\Vert$ as we wanted to see.\\ The proof of the case $p=1$ follows some ideas of Demazeux in \cite[Theorem~2.2]{demazeux2011essential}. First of all, recall that the $N$-dimensional Fej\'er kernel is defined as \[ K_n^N (u)=\sum\limits_{\vert \alpha_1\vert, \ldots, \vert \alpha_N\vert \leq n} \prod\limits_{j=1}^{N} \left(1-\frac{\vert \alpha_j\vert}{n+1}\right) u^{\alpha}\,, \] for $u \in \mathbb{D}^N_2$.
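For instance (a simple illustration of the notation, not needed for the argument), in one variable this is the classical Fej\'er kernel $K_n^1(u)=\sum_{\vert \alpha \vert \leq n}\left(1-\frac{\vert\alpha\vert}{n+1}\right)u^{\alpha}$; in particular, for $n=1$ it reduces to $K_1^1(u)= \tfrac{1}{2}u^{-1} + 1 + \tfrac{1}{2}u$.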
With this, the $n$-th Fej\'er polynomial in $N$ variables of a function $g\in H_p(\mathbb{D}^N_2)$ is obtained by convolving $g$ with the $N$-dimensional Fej\'er kernel, in other words \begin{equation} \label{fejerpol} \sigma_n^N g (u) = \frac{1}{(n+1)^N} \sum\limits_{l_1,\cdots, l_N=0}^{n} \sum\limits_{\vert\alpha_j\vert\leq l_j} \hat{g}(\alpha) u^{\alpha}. \end{equation} It is well known (see e.g. \cite[Lemmas~5.21 and~5.23]{defant2018Dirichlet}) that $\sigma_n^N : H_1(\mathbb{D}^N_2) \to H_1(\mathbb{D}^N_2)$ is a contraction and $\sigma_n^N g \to g$ in $H_1(\mathbb{D}^N_2)$ as $n\to \infty$ for all $g\in H_1(\mathbb{D}^N_2)$. Let us see how $R_n^N = I - \sigma_n^N$ gives a first lower bound for the essential norm.\\ Let $K: H_1(\mathbb{D}^N_2) \to H_1(\mathbb{D}^N_2)$ be a compact operator; since $\Vert \sigma_n^N \Vert \leq 1$, we have $\Vert R_n^N \Vert \leq 2$ and hence \[ \Vert M_f - K \Vert \geq \frac{1}{2} \Vert R_n^N \circ (M_f -K) \Vert \geq \frac{1}{2} \Vert R_n^N \circ M_f \Vert - \frac{1}{2} \Vert R_n^N \circ K \Vert. \] On the other hand, since $R_n^N \to 0$ pointwise, $R_n^N$ tends to zero uniformly on compact sets of $H_1(\mathbb{D}^N)$. In particular, this happens on the compact set $\overline{K(B_{H_1(\mathbb{D}^N)})}$, and therefore $\Vert R_n^N \circ K \Vert \to 0$. We conclude then that $\Vert M_f \Vert_{\ess} \geq \frac{1}{2} \limsup\limits_{n\to\infty} \Vert R_n^N\circ M_f \Vert$.\\ Our aim now is to obtain a lower bound for the right-hand side of the inequality. To get this, we are going to see that \begin{equation} \label{agus} \Vert \sigma^N_n \circ M_f(h_z) \Vert_{H_1(\mathbb{D}^N)} \to 0 \; \text{when} \; \Vert z \Vert_\infty \to 1, \end{equation} where $h_z$ is again defined, for each fixed $z \in \mathbb{D}^{N}$, by \[ h_z(u) = \prod\limits_{j=1}^N \frac{1- \vert z_j\vert^2}{(1- \overline{z}_ju_j)^2}. \] To see this, let us consider first, for each $z \in \mathbb{D}^{N}$, the function $g_z (u) = \prod\limits_{j=1}^N \frac{1}{(1-\bar{z_j} u_{j})^{2}}$. This is clearly holomorphic and, hence, has a development as a Taylor series \[ g_{z}(u) = \sum_{\alpha \in \mathbb{N}_{0}^{N}} c_{\alpha}(g_{z}) u^{\alpha} \] for $u \in \mathbb{D}^{N}$. Our first step is to see that the Taylor coefficients up to a fixed degree are bounded uniformly in $z$. Recall that $c_{\alpha}(g_{z}) = \frac{1}{\alpha !} \frac{\partial^{\alpha} g_z(0)}{\partial u^{\alpha}}$ and, since \[ \frac{\partial^{\alpha}g_z(u)}{\partial u^{\alpha}} = \prod\limits_{j=1}^{N} \frac{(\alpha_j + 1)!}{(1- \overline{z_j}u_j)^{2+\alpha_j}} (\overline{z_j})^{\alpha_j}, \] we have \[ c_{\alpha}(g_{z}) = \frac{1}{\alpha !}\frac{\partial^{\alpha}g_z(0)}{\partial u^{\alpha}} = \frac{1}{\alpha !} \prod\limits_{j=1}^{N} (\alpha_j + 1)!(\overline{z_j})^{\alpha_j} = \left( \prod\limits_{j=1}^{N} (\alpha_j + 1) \right) \overline{z}^{\alpha} \,. \] Thus $\vert c_{\alpha} (g_{z}) \vert \leq (M+1)^{N}$ whenever $\vert \alpha \vert \leq M$. \\ On the other hand, for each $\alpha \in \mathbb{N}_{0}^{N}$ (note that $h_{z}(u) = g_{z}(u) \prod_{j=1}^{N} (1- \vert z_{j}\vert^2)$ for every $u$) we have \[ c_{\alpha} (f\cdot h_z) = \left( \prod\limits_{j=1}^N (1- \vert z_j \vert^2) \right) \sum\limits_{\beta + \gamma =\alpha} \hat{f}(\beta) \hat{g}_z(\gamma) \,.
\] Taking all this into account we finally have (recall \eqref{fejerpol}), for each fixed $n \in \mathbb{N}$, \begin{align*} \Vert \sigma_n^N & \circ M_f (h_z) \Vert_{H_1(\mathbb{D}^N)} \\ & \leq \left( \prod\limits_{j=1}^N (1- \vert z_j \vert^2) \right) \frac{1}{(n+1)^N} \sum\limits_{l_1,\cdots, l_N=0}^{n} \sum\limits_{\vert\alpha_j\vert\leq l_j} \vert \sum\limits_{\beta + \gamma =\alpha} \hat{f}(\beta) \hat{g}_z(\gamma) \vert \Vert u^{\alpha}\Vert_{H_1(\mathbb{D}^N)} \\ &\leq \left( \prod\limits_{j=1}^N (1- \vert z_j \vert^2) \right) \frac{1}{(n+1)^N} \sum\limits_{l_1,\cdots, l_N=0}^{n}\sum\limits_{\vert\alpha_j\vert\leq l_j} \sum\limits_{\beta + \gamma =\alpha} \Vert f \Vert_{H_{\infty}(\mathbb{D}^N)} (n+1)^{N} \,, \end{align*} which immediately yields \eqref{agus}. Once we have this we can easily conclude the argument. For each $n\in \mathbb{N}$ we have \begin{multline*} \Vert R_n^N \circ M_f \Vert = \Vert M_f - \sigma_n^N \circ M_f \Vert \geq \Vert M_f (h_z) - \sigma_n^N \circ M_f (h_z) \Vert_{H_1(\mathbb{D}^N)} \\ \geq \Vert M_f (h_z) \Vert_{H_1(\mathbb{D}^N_2)} - \Vert \sigma_n^N \circ M_f (h_z) \Vert_{H_1(\mathbb{D}^N)}, \end{multline*} and since the last term tends to zero as $\Vert z\Vert_{\infty} \to 1$, we get \[ \Vert R_n^N \circ M_f \Vert \geq \limsup\limits_{\Vert z\Vert \to 1} \Vert M_f (h_{z})\Vert_{H_1(\mathbb{D}^N)} \geq \Vert f\Vert_{H_{\infty}(\mathbb{D}^N)} \,, \] which finally gives \[ \Vert M_f \Vert_{\ess} \geq \frac{1}{2} \Vert f\Vert_{H_{\infty}(\mathbb{D}^N_2)} = \frac{1}{2} \Vert M_f \Vert\,, \] as we wanted.\\ To complete the proof we consider the case $N=\infty$. So, what we have to see is that \begin{equation} \label{farola} \Vert M_f \Vert \geq \Vert M_f \Vert_{\ess} \geq C \Vert M_f \Vert \,, \end{equation} where $C=1$ if $p>1$ and $C=1/2$ if $p=1$. Let $K: H_p(\mathbb{D}^\infty_2) \to H_p(\mathbb{D}^\infty_2)$ be a compact operator, and consider for each $N \in \mathbb{N}$ the continuous operators $\mathcal{I}_N : H_p (\mathbb{D}^N) \to H_p(\mathbb{D}^\infty_2)$ given by the inclusion and $\mathcal{J}_N : H_p(\mathbb{D}^\infty_2) \to H_p ( \mathbb{D}^N)$ defined by $\mathcal{J}_N(g)(u)= g(u_1,\cdots, u_N, 0,0,\ldots) = g_N(u)$; then $K_N =\mathcal{J}_{N} \circ K \circ \mathcal{I}_{N}: H_p(\mathbb{D}^N) \to H_p(\mathbb{D}^N)$ is compact. On the other side we have that $\mathcal{J}_N \circ M_f \circ \mathcal{I}_{N} (g) = f_N\cdot g = M_{f_N} (g)$ for every $g$. Furthermore, given any operator $T:H_p(\mathbb{D}^\infty_2) \to H_p(\mathbb{D}^\infty_2)$ and defining $T_N$ as before we have that \begin{align*} \Vert T \Vert =\sup\limits_{ \Vert g\Vert_{H_p(\mathbb{D}^\infty_2)}\leq 1} \Vert T(g) \Vert_{H_p(\mathbb{D}^\infty_2)} & \geq \sup\limits_{ \Vert g\Vert_{H_p(\mathbb{D}^N)}\leq 1} \Vert T(g) \Vert_{H_p(\mathbb{D}^\infty_2)} \\ & \geq \sup\limits_{ \Vert g\Vert_{H_p(\mathbb{D}^N)}\leq 1} \Vert T_N(g) \Vert_{H_p(\mathbb{D}^N_2)} =\Vert T_N \Vert, \end{align*} and therefore \[ \Vert M_f - K \Vert \geq \Vert M_{f_N} -K_N \Vert \geq \Vert M_{f_N} \Vert_{\ess} \geq C \Vert f_N \Vert_{H_{\infty}(\mathbb{D}^N_2)}\,. \] Since $\Vert f_{N} \Vert_{H_{\infty}(\mathbb{D}^N_2)} \to \Vert f \Vert_{H_{\infty}(\mathbb{D}^\infty_2)}$ as $N \to \infty$, we have \eqref{farola}, and this completes the proof. \end{proof} \noindent We can now prove Theorem~\ref{saja}.
\begin{proof}[Proof of Theorem~\ref{saja}] Since for every $1\leq p < \infty$ the Bohr lift $\mathcal{L}_{\mathbb{D}^N_2} : \mathcal{H}_p^{(N)} \to H_p(\mathbb{D}^N_2)$ and the Bohr transform $\mathcal{B}_{\mathbb{D}^N_2} : H_p(\mathbb{D}^N_2) \to \mathcal{H}_p^{(N)}$ are isometries, an operator $K : \mathcal{H}_p^{(N)} \to \mathcal{H}_p^{(N)}$ is compact if and only if $K_h = \mathcal{L}_{\mathbb{D}^N_2} \circ K \circ \mathcal{B}_{\mathbb{D}^N_2} : H_p(\mathbb{D}^N_2) \to H_p(\mathbb{D}^N_2)$ is a compact operator. On the other side, $f= \mathcal{L}_{\mathbb{D}^N_2}(D)$, hence $M_f = \mathcal{L}_{\mathbb{D}^N_2} \circ M_D \circ \mathcal{B}_{\mathbb{D}^N_2}$ and therefore \[ \Vert M_D - K \Vert = \Vert \mathcal{L}_{\mathbb{D}^N_2}^{-1} \circ ( M_f - K_h ) \circ \mathcal{L}_{\mathbb{D}^N_2} \Vert = \Vert M_f - K_h \Vert \geq C \Vert f \Vert_{H_\infty(\mathbb{D}^N_2)} = C \Vert D \Vert_{\mathcal{H}_\infty^{(N)}}, \] where $C=1$ if $p>1$ and $C= 1/2$ if $p=1$. Since this holds for every compact operator $K$, we obtain the inequality that we wanted. The upper bound is clear by the definition of essential norm. On the other hand, suppose $p=1$ and $N \in \mathbb{N} \cup\{\infty\}$. Let $1 < q < \infty$ and consider the restriction $M_D^q : \mathcal{H}_q^{(N)} \to \mathcal{H}_1^{(N)}$. If $K: \mathcal{H}_1^{(N)} \to \mathcal{H}_1^{(N)}$ is compact then its restriction $K^q : \mathcal{H}_q^{(N)} \to \mathcal{H}_1^{(N)}$ is also compact and then \begin{align*} \Vert M_D - K \Vert_{\mathcal{H}_1^{(N)} \to \mathcal{H}_1^{(N)}} &= \sup\limits_{\Vert E \Vert_{\mathcal{H}_1^{(N)}} \leq 1} \Vert M_D(E) - K(E) \Vert_{\mathcal{H}_1^{(N)}} \\ &\geq \sup\limits_{\Vert E \Vert_{\mathcal{H}_q^{(N)}} \leq 1} \Vert M_D(E) - K(E) \Vert_{\mathcal{H}_1^{(N)}} \\ &= \Vert M_D^q - K^q \Vert_{\mathcal{H}_q^{(N)} \to \mathcal{H}_1^{(N)}} \geq \Vert M_D^q \Vert_{\ess} \geq \Vert D \Vert_{\mathcal{H}_1^{(N)}}. \end{align*} Finally, the case $p=\infty$ was proved in \cite[Corollary~2.4]{lefevre2009essential}. \end{proof} \section{Spectrum of Multiplication operators} In this section, we provide a characterization of the spectrum of the multiplication operator $M_D$ in terms of the image of its associated Dirichlet series on certain half-planes. Let us first recall some definitions related to the spectrum of an operator. We say that $\lambda$ belongs to the spectrum of $M_D$, which we denote by $\sigma(M_D)$, if the operator $M_D - \lambda I : \mathcal{H}_p \to \mathcal{H}_p$ is not invertible. Now, a number $\lambda$ can be in the spectrum for different reasons and, according to these, we can group the spectral values into the following subsets: \begin{itemize} \item If $M_D - \lambda I$ is not injective then $\lambda \in \sigma_p(M_D)$, the point spectrum. \item If $M_D-\lambda I$ is injective and the range $Ran(M_D-\lambda I)$ is dense (but not closed) in $\mathcal{H}_p$ then $\lambda \in \sigma_c(M_D)$, the continuous spectrum of $M_D$. \item If $M_D-\lambda I$ is injective and the closure of its range has codimension greater than or equal to 1 (that is, the range is not dense) then $\lambda$ belongs to $\sigma_r(M_D)$, the residual spectrum. \end{itemize} We are also interested in the approximate spectrum, denoted by $\sigma_{ap}(M_D)$, given by those values $\lambda \in \sigma(M_D)$ for which there exists a sequence $(E_n)_n \subseteq \mathcal{H}_p$ with $\Vert E_n \Vert_{\mathcal{H}_p}=1$ such that $\Vert M_D(E_n) - \lambda E_n \Vert_{\mathcal{H}_p} \to 0$.
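To illustrate these notions, consider the simple (and purely illustrative) example $D(s)=2^{-s}$ with $1\leq p <\infty$. Under the Bohr lift, $M_D$ corresponds to multiplication by the variable $z_1$ on $H_p(\mathbb{D}^\infty_2)$, so $M_D$ is an isometry; in particular it is bounded below and $0\notin \sigma_{ap}(M_D)$. On the other hand, its range consists of the Dirichlet series in $\mathcal{H}_p$ whose coefficients are supported on the even integers; this is a closed proper subspace, hence not dense, and therefore $0\in \sigma_r(M_D)$.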
Vukoti\'c, in \cite[Theorem~7]{vukotic2003analytic}, proved that the spectrum of a multiplication operator, induced by a function $f$ on the one-dimensional disk, coincides with $\overline{f(\mathbb{D})}$. In the case of the continuous spectrum, the description is given in terms of the outer functions in $H_\infty(\mathbb{D})$. The notion of outer function can be extended to higher dimensions. If $N\in \mathbb{N}\cup\{\infty\}$, a function $f\in H_p(\mathbb{D}^N_2)$ is said to be outer if it satisfies \[ \log\vert f(0) \vert = \int\limits_{\mathbb{T}^N} \log\vert F(\omega)\vert \mathrm{d}\omega, \] with $f\sim F$. A closed subspace $S$ of $H_p(\mathbb{D}^N_2)$ is said to be invariant if $z_i \cdot g \in S$ for every $g\in S$ and every monomial $z_i$. Finally, a function $f$ is said to be cyclic if the invariant subspace generated by $f$ is exactly $H_p(\mathbb{D}^N_2)$. The mentioned characterization comes from the generalized Beurling theorem, which affirms that, in one variable, $f$ is a cyclic vector if and only if $f$ is an outer function. In several variables, there exist outer functions which fail to be cyclic (see \cite[Theorem~4.4.8]{rudin1969function}). We give now the aforementioned characterization of the spectrum of a multiplication operator. \begin{theorem} \label{espectro} Let $1\leq p <\infty$ and let $D\in \mathcal{H}_{\infty}$ be a non-zero Dirichlet series with associated multiplication operator $M_D : \mathcal{H}_p \to \mathcal{H}_p$. Then \begin{enumerate} \item \label{espectro1} $M_D$ is onto if and only if there is some $c>0$ such that $\vert D (s) \vert \geq c$ for every $s \in \mathbb{C}_{0}$. \item \label{espectro2} $\sigma(M_D)=\overline{D(\mathbb{C}_0)}$. \item \label{espectro3} If $D$ is not constant then $\sigma_c(M_D) \subseteq \overline{D(\mathbb{C}_0)} \setminus D(\mathbb{C}_{1/2})$. Moreover, if $\lambda \in \sigma_c(M_D)$ then $f - \lambda = \mathcal{L}_{\mathbb{D}^\infty_2}(D) - \lambda$ is an outer function in $H_{\infty}(\mathbb{D}^\infty_2)$. \end{enumerate} The same holds if we replace in each case $\mathcal{H}$ by $\mathcal{H}^{(N)}$ (with $N \in \mathbb{N}$). \end{theorem} \begin{proof} \ref{espectro1} Because of the injectivity of $M_D$, and the Closed Graph Theorem, the mapping $M_D$ is surjective if and only if $M_D$ is invertible, and this happens if and only if $M_{D^{-1}}$ is well defined and continuous; but then $D^{-1} \in \mathcal{H}_{\infty}$ and \cite[Theorem~6.2.1]{queffelec2013diophantine} gives the conclusion. \ref{espectro2} Note that $M_D - \lambda I = M_{D-\lambda}$; this and the previous result give that $\lambda \not\in \sigma( M_D)$ if and only if $\vert D(s) - \lambda \vert > \varepsilon$ for some $\varepsilon >0$ and all $s\in \mathbb{C}_0$, and this happens if and only if $\lambda \not\in \overline{D(\mathbb{C}_0)}$. \ref{espectro3} Let us suppose that the range of $M_D - \lambda I = M_{D-\lambda}$ is dense. Since polynomials are dense in $\mathcal H_p$ and $M_{D-\lambda}$ is continuous, the set $A:=\{ (D-\lambda)\cdot P : P \; \text{Dirichlet polynomial} \}$ is dense in the range of $M_{D-\lambda}$. By the continuity of the evaluation at $s_0 \in \mathbb{C}_{1/2}$, the set of Dirichlet series that vanish at a fixed $s_0$, which we denote by $B(s_0)$, is a proper closed set (because $1 \not\in B(s_0)$). Therefore, if $D-\lambda \in B(s_0)$ then $A\subseteq B(s_0)$, and hence $A$ cannot be dense in $\mathcal{H}_p$.
So we have that if $\lambda \in \sigma_c(M_D)$ then $D(s) - \lambda \not= 0$ for every $s\in \mathbb{C}_{1/2}$, and therefore $\lambda \in \overline{D(\mathbb{C}_0)} \setminus D(\mathbb{C}_{1/2})$. Finally, since $\sigma_c(M_D) = \sigma_c(M_f)$, we have that $\lambda \in \sigma_c(M_D)$ if and only if $M_{f-\lambda}(H_p(\mathbb{D}^\infty_2))$ is dense in $H_p(\mathbb{D}^\infty_2)$. Consider $S(f-\lambda)$, the smallest closed subspace of $H_p(\mathbb{D}^\infty_2)$ such that $z_i\cdot (f-\lambda) \in S(f-\lambda)$ for every $i \in \mathbb{N}$. Take $\lambda \in \sigma_c(M_f)$ and note that \[ \{ (f-\lambda)\cdot P : P \; \text{polynomial} \} \subseteq S(f-\lambda) \subseteq H_p(\mathbb{D}^\infty_2) \,. \] Since the range of $M_{f-\lambda}$ is dense and the polynomials are dense in $H_p(\mathbb{D}^\infty_2)$, the set on the left-hand side is dense in $H_p(\mathbb{D}^\infty_2)$; as $S(f - \lambda)$ is closed, we obtain that $S(f-\lambda) = H_p(\mathbb{D}^\infty_2)$. Then $f-\lambda$ is a cyclic vector in $H_{p}(\mathbb{D}^\infty_2)$ and therefore the function $f-\lambda \in H_{\infty}(\mathbb{D}^\infty_2)$ is an outer function (see \cite[Corollary~5.5]{guo2022dirichlet}). \end{proof} Note that, under the hypotheses of the previous theorem, if $D$ is non-constant, then $\sigma_p(M_D)$ is empty and therefore $\sigma_r(M_D) = \sigma(M_D) \setminus \sigma_c(M_D)$. As a consequence, $\sigma_r(M_D)$ must contain the set $D(\mathbb{C}_{1/2})$. Note that a value $\lambda$ belongs to the approximate spectrum of a multiplication operator $M_D$ if and only if $M_{D} - \lambda I = M_{D-\lambda}$ is not bounded from below. If $D$ is not constantly equal to $\lambda$, then $M_{D-\lambda}$ is injective. Therefore, being bounded from below is equivalent to having closed range. Thus, we need to understand when this operator has closed range. We therefore devote some lines to discussing this property. The range of a multiplication operator behaves very differently depending on whether or not the operator is an endomorphism. We see now that if $p\not= q$ then multiplication operators never have closed range. \begin{proposition} \label{prop: rango no cerrado} Let $1\leq q < p \leq \infty$ and $D\in \mathcal{H}_t$, with $t=pq/(p-q)$ if $p< \infty$ and $t= q$ if $p= \infty$. Then $M_D : \mathcal{H}_p \to \mathcal{H}_q$ does not have closed range. The same holds if we replace $\mathcal{H}$ by $\mathcal{H}^{(N)}$ (with $N \in \mathbb{N}$). \end{proposition} \begin{proof} Since $M_D : \mathcal{H}_p \to \mathcal{H}_q$ is injective, the range of $M_D$ is closed if and only if there exists $C>0$ such that $C \Vert E \Vert_{\mathcal{H}_p} \leq \Vert D\cdot E \Vert_{\mathcal{H}_q}$ for every $E\in \mathcal{H}_p$. Suppose that this is the case and choose some Dirichlet polynomial $P\in \mathcal{H}_t$ such that $\Vert D - P \Vert_{\mathcal{H}_t} < \frac{C}{2}$. Given $E\in \mathcal{H}_p$ we have \begin{multline*} \Vert P \cdot E \Vert_{\mathcal{H}_q} = \Vert D\cdot E - (D-P) \cdot E \Vert_{\mathcal{H}_q} \geq \Vert D \cdot E \Vert_{\mathcal{H}_q} - \Vert ( D - P ) \cdot E \Vert_{\mathcal{H}_q} \\ \geq C \Vert E \Vert_{\mathcal{H}_p} - \Vert D - P \Vert_{\mathcal{H}_t} \Vert E \Vert_{\mathcal{H}_p} \geq \frac{C}{2} \Vert E \Vert_{\mathcal{H}_p}. \end{multline*} Then $M_P : \mathcal{H}_p \to \mathcal{H}_q$ has closed range. Let now $(Q_n)_n$ be a sequence of polynomials converging in $\mathcal{H}_q$ but not in $\mathcal{H}_p$; then \[ C\Vert Q_n - Q_m \Vert_{\mathcal{H}_p} \leq \Vert P \cdot (Q_n -Q_m) \Vert_{\mathcal{H}_q} \leq \Vert P \Vert_{\mathcal{H}_\infty} \Vert Q_n - Q_m \Vert_{\mathcal{H}_q}, \] which is a contradiction.
\end{proof} As we mentioned before, the behaviour of the range is very different when the operator is an endomorphism, that is, when $p=q$. Recently, in \cite[Theorem~4.4]{antezana2022splitting}, Antezana, Carando and Scotti have established a series of equivalences for certain Riesz systems in $L_2(0,1)$. Within the proof of this result, they also characterized those Dirichlet series $D\in \mathcal{H}_\infty$ for which the associated multiplication operator $M_D: \mathcal{H}_2 \to \mathcal{H}_2$ has closed range. The proof also works for $\mathcal H_p$. Aiming to be as clear and complete as possible, we develop the arguments below, giving all the necessary definitions. A character is a function $\gamma: \mathbb{N} \to \mathbb{C}$ that satisfies \begin{itemize} \item $\gamma (m n) = \gamma(m) \gamma (n)$ for all $m,n \in \mathbb{N}$, \item $\vert \gamma (n) \vert =1$ for all $n \in \mathbb{N}$. \end{itemize} The set of all characters is denoted by $\Xi$. Given a Dirichlet series $D= \sum a_n n^{-s}$, each character $\gamma \in \Xi$ defines a new Dirichlet series by \begin{equation}\label{caracter} D^\gamma (s) =\sum a_n \gamma(n) n^{-s}. \end{equation} Each character $\gamma \in\Xi$ can be identified with an element $\omega \in \mathbb{T}^{\infty}$, taking $\omega = (\gamma ( \mathfrak{p}_1) , \gamma(\mathfrak{p}_2), \cdots )$, and then we can rewrite \eqref{caracter} as \[ D^\omega (s) =\sum a_n \omega^{\alpha(n)} n^{-s}, \] where $\alpha(n)$ is such that $n= \mathfrak{p}^{\alpha(n)}$. Note that if $\mathcal{L}_{\mathbb{T}^\infty}(D)(u) = F(u) \in H_\infty(\mathbb{T}^\infty),$ then by comparing coefficients we have that $\mathcal{L}_{\mathbb{T}^\infty}(D^\omega)(u) = F(\omega\cdot u) \in H_\infty(\mathbb{T}^\infty)$. By \cite[Lemma~11.22]{defant2018Dirichlet}, for all $\omega \in \mathbb{T}^\infty$ the limit \[ \lim\limits_{\sigma\to 0} D^\omega(\sigma + it) \; \text{exists for almost all} \; t\in \mathbb{R}. \] Using \cite[Theorem~2]{saksman2009integral}, we can choose a representative $\tilde{F}\in H_\infty(\mathbb{T}^\infty)$ of $F$ which satisfies \begin{equation*} \tilde{F}(\omega)= \left\{ \begin{aligned} &\lim\limits_{\sigma\to 0^+} D^\omega(\sigma) \; &\text{if the limit exists}; \\ &0 \; &\text{otherwise}. \end{aligned} \right. \end{equation*} To see this, consider \[ A:=\{ \omega \in \mathbb{T}^\infty : \lim\limits_{\sigma\to 0} D^\omega(\sigma) \; \text{exists} \}, \] and let us see that $\vert A \vert =1$. To this end, take $T_t: \mathbb{T}^\infty \to \mathbb{T}^\infty$ the Kronecker flow defined by $T_t(\omega)=(\mathfrak{p}^{-it} \omega),$ and notice that $T_t(\omega)\in A$ if and only if $\lim\limits_{\sigma\to 0} D^{T_t(\omega)}(\sigma)$ exists. Since \[ D^{T_t(\omega)}(\sigma)= \sum a_n (\mathfrak{p}^{-it} \omega)^{\alpha(n)} n^{-\sigma}= \sum a_n \omega^{\alpha(n)} n^{-(\sigma+it)} = D^{\omega}(\sigma+it), \] then for all $\omega\in \mathbb{T}^\infty$ we have that $T_t(\omega) \in A$ for almost all $t\in \mathbb{R}.$ Finally, since $\chi_A \in L^1(\mathbb{T}^\infty),$ applying the Birkhoff Theorem for the Kronecker flow \cite[Theorem 2.2.5]{queffelec2013diophantine}, for $\omega_0 = (1,1,1,\dots)$ we have \[ \vert A \vert = \int\limits_{\mathbb{T}^\infty} \chi_A(\omega) \mathrm{d}\omega = \lim\limits_{R\to \infty} \frac{1}{2R} \int\limits_{-R}^{R} \chi_A (T_t(\omega_0)) \mathrm{d}t = 1.
\] Then $\tilde{F} \in H_\infty (\mathbb{T}^\infty),$ and to see that $\tilde{F}$ is a representative of $F$ it is enough to compare their Fourier coefficients (see again \cite[Theorem~2]{saksman2009integral}). From now on, $F$ always denotes $\tilde{F}$.\\ Fixing the notation \[ D^\omega(it_0)= \lim\limits_{\sigma\to 0^+} D^\omega(\sigma +it_0), \] and taking $t_0= 0,$ we get \[ F(\omega) = D^\omega(0) \] for almost all $\omega \in \mathbb{T}^\infty$. Moreover, given $t_0 \in \mathbb{R}$ we have \begin{equation}\label{igualdad} D^\omega(it_0) = \lim\limits_{\sigma\to 0^+} D^\omega(\sigma + it_0) = \lim\limits_{\sigma\to 0^+} D^{T_{t_0}(\omega)} (\sigma) = F(T_{t_0}(\omega)). \end{equation} From this identity one has the following. \begin{proposition}\label{acotacion} Let $\varepsilon>0$. The following conditions are equivalent. \begin{enumerate} \item\label{acotacion1} There exists $\tilde{t}_0 \in \mathbb{R}$ such that $\vert D^{\omega} (i\tilde{t}_0) \vert \geq \varepsilon$ for almost all $\omega \in \mathbb{T}^\infty$. \item\label{acotacion2} For all $t_0$ there exists $B_{t_0} \subset \mathbb{T}^\infty$ with total measure such that $\vert D^\omega(it_0) \vert \geq \varepsilon$ for all $\omega \in B_{t_0}$. \end{enumerate} \end{proposition} \begin{proof} If~\ref{acotacion1} holds, take $t_0$ and consider \[ B_{t_0} = \{\mathfrak{p}^{-i(-t_0+\tilde{t}_0)}\cdot \omega : \; \omega\in B_{\tilde{t}_0} \}, \] which is clearly a total measure set. Take $\omega{'} \in B_{t_0}$ and choose $\omega \in B_{\tilde{t}_0}$ such that $\omega{'} = \mathfrak{p}^{-i(-t_0+\tilde{t}_0)}\cdot \omega$; then by \eqref{igualdad} we have that \[ \vert D^{\omega{'}} (it_0) \vert = \vert F(T_{\tilde{t}_0}(\omega)) \vert \geq \varepsilon\,, \] and this gives~\ref{acotacion2}. The converse implication holds trivially. \end{proof} We now give an $\mathcal H_p$-version of \cite[Theorem~4.4]{antezana2022splitting}. \begin{theorem}\label{ACS} Let $1\leq p < \infty$ and $D \in \mathcal{H}_\infty$. Then the following statements are equivalent. \begin{enumerate} \item\label{ACS1} There exists $m>0$ such that $\vert F(\omega) \vert \geq m$ for almost all $\omega\in \mathbb{T}^\infty$; \item\label{ACS2} The operator $M_D : \mathcal{H}_p \to \mathcal{H}_p$ has closed range; \item\label{ACS3} There exists $m>0$ such that for almost all $(\gamma, t) \in \Xi \times \mathbb{R}$ we have \[ \vert D^\gamma(it) \vert\geq m. \] \end{enumerate} Moreover, in that case, \begin{multline*} \inf\left\{\Vert M_D(E) \Vert_{\mathcal{H}_p} : E\in \mathcal{H}_p, \Vert E \Vert_{\mathcal{H}_p}=1 \right\} \\ = \essinf \left\{ \vert F(\omega) \vert : \omega \in \mathbb{T}^\infty \right\} = \essinf \left\{ \vert D^\gamma(it) \vert : (\gamma,t)\in \Xi \times \mathbb{R} \right\}. \end{multline*} \end{theorem} \begin{proof} \ref{ACS1} $\Rightarrow$~\ref{ACS2} $M_D$ has closed range if and only if the range of $M_F$ is closed. Because of the injectivity of $M_F$ we have, by the Open Mapping Theorem, that $M_F$ has closed range if and only if there exists a positive constant $m>0$ such that \[ \Vert M_F(G) \Vert_{H_p(\mathbb{T}^\infty)} \geq m \Vert G \Vert_{H_p(\mathbb{T}^\infty)}, \] for every $G\in H_p(\mathbb{T}^\infty)$. If $\vert F(\omega)\vert \geq m$ a.e. $\omega \in \mathbb{T}^\infty$, then for $G \in H_p(\mathbb{T}^\infty)$ we have that \[ \Vert M_F (G) \Vert_{H_p(\mathbb{T}^\infty)} = \Vert F\cdot G \Vert_{H_p(\mathbb{T}^\infty)} =\left(\int\limits_{\mathbb{T}^\infty} \vert F(\omega) G(\omega)\vert^p \mathrm{d} \omega\right)^{1/p} \geq m \Vert G\Vert_{H_p(\mathbb{T}^\infty)}.
\] \ref{ACS2} $\Rightarrow$~\ref{ACS1} Let $m>0$ be such that $\Vert M_F(G)\Vert_{H_p(\mathbb{T}^\infty)} \geq m \Vert G \Vert_{H_p(\mathbb{T}^\infty)}$ for all $G\in H_p(\mathbb{T}^\infty)$. Let us consider \[ A=\{ \omega\in \mathbb{T}^\infty : \vert F(\omega) \vert <m\}. \] Since $\chi_A \in L^p(\mathbb{T}^\infty)$, by the density of the trigonometric polynomials in $L^p(\mathbb{T}^\infty)$ (see \cite[Proposition~5.5]{defant2018Dirichlet}) there exists a sequence of trigonometric polynomials $(P_k)_k$, with $P_k$ of degree $n_k$ in $N_k$ variables (in $z$ and $\overline{z}$), such that \[ \lim\limits_{k} P_k = \chi_A \; \text{in} \; L^p(\mathbb{T}^\infty). \] Therefore \begin{align*} m^p\vert A \vert &= m^p\Vert \chi_A \Vert^p_{L^p(\mathbb{T}^\infty)} = m^p\lim\limits_k \Vert P_k \Vert^p_{L^p(\mathbb{T}^\infty)}\\ &=m^p\lim\limits_k \Vert z_1^{n_k} \cdots z_{N_k}^{n_k} P_k \Vert^p_{L_p(\mathbb{T}^\infty)}\\ &\leq \liminf\limits_k \Vert M_F(z_1^{n_k} \cdots z_{N_k}^{n_k} P_k) \Vert^p_{L_p(\mathbb{T}^\infty)}\\ &= \Vert F\cdot \chi_A \Vert^p_{L^p(\mathbb{T}^\infty)} = \int\limits_{A} \vert F(\omega) \vert^p \mathrm{d}\omega. \end{align*} Since $\vert F(\omega) \vert < m$ for all $\omega \in A$, this implies that $\vert A \vert =0$. \ref{ACS2} $\Rightarrow$~\ref{ACS3} By the definition of $F$ we have $m \leq \vert F(\omega) \vert = \lim\limits_{\sigma\to 0^+} \vert D^\omega (\sigma) \vert$ for almost all $\omega \in \mathbb{T}^\infty$. Combining this with Proposition~\ref{acotacion} we get that the $t$-sections of the set \[ C= \{ (\omega, t ) \in \mathbb{T}^\infty \times \mathbb{R} : \; \vert D^\omega(it) \vert < m \}, \] have zero measure. As a corollary of Fubini's Theorem we get that $C$ has measure zero. The converse~\ref{ACS3} $\Rightarrow$~\ref{ACS2} also follows from Fubini's Theorem. The last equality follows from the proven equivalences. \end{proof} In the case of polynomials, using the continuity of the polynomials and Kronecker's theorem (see e.g. \cite[Proposition~3.4]{defant2018Dirichlet}), the condition of Theorem~\ref{ACS} reduces to a condition on the image of the polynomial on the line of complex numbers with zero real part. As a consequence, one can extend this characterization to the Dirichlet series belonging to $\mathcal{A}(\mathbb{C}_0)$, that is, the closed subspace of $\mathcal{H}_\infty$ given by the Dirichlet series that are uniformly continuous on $\mathbb{C}_0$ (see \cite[Definition~2.1]{aron2017dirichlet}). \begin{corollary}\label{torres} Let $1\leq p < \infty$. \begin{enumerate} \item\label{torres1} Let $P\in \mathcal{H}_\infty$ be a Dirichlet polynomial. Then $M_P: \mathcal{H}_p \to \mathcal{H}_p$ has closed range if and only if there exists a constant $m>0$ such that $\vert P(it) \vert \geq m$ for all $t\in \mathbb{R}$. \item\label{torres2} Let $D\in \mathcal{A}(\mathbb{C}_0)$. Then $M_D: \mathcal{H}_p \to \mathcal{H}_p$ has closed range if and only if there exists a constant $m>0$ such that $\vert D(it) \vert \geq m$ for all $t\in \mathbb{R}$. \end{enumerate} Moreover, in each case \[ \inf \{ \Vert M_D(E) \Vert_{\mathcal{H}_p} : E \in \mathcal{H}_p,\; \Vert E \Vert_{\mathcal{H}_p}=1 \} = \inf \{ \vert D(it) \vert : t\in \mathbb{R} \}. \] \end{corollary} \begin{proof} \ref{torres1} Let $F= \mathcal{L}_{\mathbb{T}^\infty}(P)$; then, by Theorem~\ref{ACS}, $M_P$ has closed range if and only if there exists a constant $m>0$ such that $\vert F (\omega) \vert \geq m$ a.e. $\omega \in \mathbb{T}^\infty$.
Since $F(\omega)= \sum a_{\alpha} \omega^\alpha$ is continuous and, by Kronecker's theorem, \[ \{(\mathfrak{p}_1^{-it},\cdots, \mathfrak{p}_N^{-it}, \omega) \; : \; t \in \mathbb{R}, \; \omega \in \mathbb{T}^\infty \} \] is dense in $\mathbb{T}^\infty$, $M_P$ has closed range if and only if $\vert F(\mathfrak{p}_1^{-it},\cdots, \mathfrak{p}_N^{-it}, \omega) \vert \geq m$ for every $t \in \mathbb{R}$ and $\omega \in \mathbb{T}^\infty$. The result is concluded from the fact that \begin{equation*} F(\mathfrak{p}_1^{-it},\cdots, \mathfrak{p}_N^{-it}, \omega) = \sum a_{\alpha} \mathfrak{p}_1^{-it\alpha_1}\cdots \mathfrak{p}_N^{-it\alpha_N} = \sum a_n n^{-it} = P(it). \end{equation*} \ref{torres2} Since $D$ is uniformly continuous on $\mathbb{C}_0$, it admits a uniformly continuous extension to the half-plane $\{s\in \mathbb{C} : \re s\geq 0\}.$ By \cite[Theorem~2.3]{aron2017dirichlet}, $D$ is the uniform limit on $\mathbb{C}_0$ of a sequence of Dirichlet polynomials $P_n$. Let $\mathcal{A}(\mathbb{T}^\infty)$ be the closed subspace of $H_\infty(\mathbb{T}^\infty)$ given by the Bohr lift of $\mathcal{A}(\mathbb{C}_0)$. Since $\mathcal{L}_{\mathbb{T}^\infty}(D) = F \in \mathcal{A}(\mathbb{T}^\infty)$ is the uniform limit of polynomials, $F$ is continuous. Then, given $t\in \mathbb{R}$ we have that \begin{align}\label{borde} \vert F(\mathfrak{p}^{-it}) \vert = \lim\limits_n \vert \mathcal{L}_{\mathbb{T}^\infty} (P_n) (\mathfrak{p}^{-it}) \vert = \lim\limits_n \vert P_n(it) \vert = \vert D(it) \vert. \end{align} Let us suppose first that the range of $M_D:\mathcal{H}_p \to \mathcal{H}_p$ is closed and let $m>0$ be such that $\Vert M_D(E) \Vert_{\mathcal{H}_p} \geq m \Vert E \Vert_{\mathcal{H}_p}$ for every $E\in\mathcal{H}_p$. Given $\varepsilon >0$ there exists $n_0\in\mathbb{N}$ such that $\Vert D - P_n \Vert_{\mathcal{H}_\infty} <\varepsilon$ for every $n \geq n_0$. Therefore, if $E \in \mathcal{H}_p$ we have that \[ \Vert M_{P_n} (E) \Vert_{\mathcal{H}_p} \geq \Vert M_D(E) \Vert_{\mathcal{H}_p} - \Vert M_{D-P_n}(E) \Vert_{\mathcal{H}_p} \geq (m - \varepsilon) \Vert E \Vert_{\mathcal{H}_p}. \] Then by~\ref{torres1}, $\vert P_n(it) \vert \geq m-\varepsilon$ for every $n\geq n_0$ and $t\in \mathbb{R}$ and hence, by \eqref{borde}, $\vert D(it)\vert \geq m - \varepsilon$ for every $\varepsilon >0$; therefore $\vert D(it) \vert \geq m$ for every $t\in \mathbb{R}$. Let us suppose now that there exists a constant $m>0$ such that $\vert D(it) \vert \geq m$ for every $t\in \mathbb{R}$; then from \eqref{borde} we have that $\vert F(\mathfrak{p}^{-it})\vert \geq m$ for all $t\in \mathbb{R}$. Since $F$ is continuous, and by Kronecker's theorem, $\vert F(\omega) \vert \geq m$ for all $\omega \in \mathbb{T}^\infty.$ Therefore, by Theorem~\ref{ACS}, $M_D$ has closed range. \end{proof} By what was said above, in the non-trivial case, a value $\lambda$ belongs to the approximate spectrum of $M_D$ if and only if the range of $M_{D-\lambda}$ is not closed. Then, Theorem~\ref{ACS} and Corollary~\ref{torres} give us a characterization of the approximate spectrum. For this, we need the definition of the essential range of the function $[(\gamma,t) \rightsquigarrow D^{\gamma}(it)]$. That is, \[ \Big\{ \lambda \in \mathbb{C} : \text{ for all } \varepsilon>0, \; \mu\big\{(\gamma,t): \vert D^{\gamma}(it) - \lambda \vert < \varepsilon \big\}>0 \Big\}, \] where $\mu$ stands for the Haar measure on $\Xi \times\mathbb{R}$.
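As a simple illustration (included only as an example), take the monomial $D(s)=2^{-s}$. Identifying characters with points of $\mathbb{T}^\infty$, we have $D^{\gamma}(it)=\gamma(2)\,2^{-it}$, which has modulus $1$ for every $(\gamma,t)$; moreover, since $\gamma(2)$ is uniformly distributed on $\mathbb{T}$ with respect to the Haar measure of $\Xi$, every $\lambda$ with $\vert\lambda\vert=1$ is approximated on a set of positive measure. Hence the essential range of $[(\gamma,t) \rightsquigarrow D^{\gamma}(it)]$ is the unit circle $\mathbb{T}$, which is consistent with the description below, since $D\in\mathcal{A}(\mathbb{C}_0)$ and $\overline{\{2^{-it} : t\in \mathbb{R}\}}=\mathbb{T}$.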
\begin{theorem}\label{aproximado} Let $1\leq p < \infty$ \begin{enumerate} \item\label{aproximado1} If $D\in \mathcal{H}_\infty$, then $\sigma_{ap}(M_D)=essran [(\gamma,t) \rightsquigarrow D^{\gamma}(it)]$. \item\label{aproximado2} If $D\in \mathcal{A}(\mathbb{C}_0)$, then $\sigma_{ap}(M_D) = \overline{\{ D(it) : t\in\mathbb{R} \}}.$ \end{enumerate} \end{theorem} \begin{proof} \ref{aproximado1} A value $\lambda$ belongs to $\sigma_{ap}(M_D)$ if and only if the range of $M_{D-\lambda}$ is not closed; and by Theorem~\ref{ACS}, if and only if \[ \essinf \{ \vert D^\gamma(it)- \lambda\vert: (\gamma, t) \in \Xi \times\mathbb{R}\}=\essinf \{ \vert (D-\lambda)^\gamma(it)\vert: (\gamma, t) \in \Xi \times\mathbb{R}\} = 0, \] but that is equivalent to say that the measure of $\{\vert D^\gamma (it) - \lambda \vert < \varepsilon : (\gamma, t) \in \Xi\times\mathbb{R}\}$ is bigger than zero for every $\varepsilon >0$. In other words, $\lambda$ belongs to the essential range of $[(\gamma,t) \rightsquigarrow D^{\gamma}(it)]$. \ref{aproximado2} Following the same arguments used in~\ref{aproximado1} and using Corollary~\ref{torres} we have that $\lambda \in \sigma_{ap}(M_D)$ if and only if $\inf\{ \vert D(it) - \lambda \vert : t\in \mathbb{R} \} = 0,$ if and only if $\lambda \in \overline{\{ D(it) : t\in\mathbb{R} \}}.$ \end{proof} \begin{thebibliography}{10} \bibitem{aleman2019fatou} A.~Aleman, J.-F. Olsen, and E.~Saksman. \newblock Fatou and brothers {R}iesz theorems in the infinite-dimensional polydisc. \newblock {\em J. Anal. Math.}, 137(1):429--447, 2019. \newblock \href {https://doi.org/10.1007/s11854-019-0006-x} {\path{doi:10.1007/s11854-019-0006-x}}. \bibitem{antezana2022splitting} J.~Antezana, D.~Carando, and M.~Scotti. \newblock Splitting the {R}iesz basis condition for systems of dilated functions through {D}irichlet series. \newblock {\em J. Math. Anal. Appl.}, 507(1):Paper No. 125733, 20, 2022. \newblock \href {https://doi.org/10.1016/j.jmaa.2021.125733} {\path{doi:10.1016/j.jmaa.2021.125733}}. \bibitem{apostol1984introduccion} T.~M. Apostol. \newblock {\em Introduction to analytic number theory}. \newblock Undergraduate Texts in Mathematics. Springer-Verlag, New York-Heidelberg, 1976. \bibitem{aron2017dirichlet} R.~M. Aron, F.~Bayart, P.~M. Gauthier, M.~Maestre, and V.~Nestoridis. \newblock Dirichlet approximation and universal {D}irichlet series. \newblock {\em Proc. Amer. Math. Soc.}, 145(10):4449--4464, 2017. \newblock \href {https://doi.org/10.1090/proc/13607} {\path{doi:10.1090/proc/13607}}. \bibitem{bayart2002hardy} F.~Bayart. \newblock Hardy spaces of {D}irichlet series and their composition operators. \newblock {\em Monatsh. Math.}, 136(3):203--236, 2002. \newblock \href {https://doi.org/10.1007/s00605-002-0470-7} {\path{doi:10.1007/s00605-002-0470-7}}. \bibitem{brown1984cyclic} L.~Brown and A.~L. Shields. \newblock Cyclic vectors in the {D}irichlet space. \newblock {\em Trans. Amer. Math. Soc.}, 285(1):269--303, 1984. \newblock \href {https://doi.org/10.2307/1999483} {\path{doi:10.2307/1999483}}. \bibitem{conway1990course} J.~B. Conway. \newblock {\em A course in functional analysis}, volume~96 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, second edition, 1990. \bibitem{defant2021frechet} A.~Defant, T.~Fern\'{a}ndez~Vidal, I.~Schoolmann, and P.~Sevilla-Peris. \newblock Fr\'{e}chet spaces of general {D}irichlet series. \newblock {\em Rev. R. Acad. Cienc. Exactas F\'{\i}s. Nat. Ser. A Mat. RACSAM}, 115(3):Paper No. 138, 34, 2021. 
\newblock \href {https://doi.org/10.1007/s13398-021-01074-8} {\path{doi:10.1007/s13398-021-01074-8}}. \bibitem{defant2018Dirichlet} A.~Defant, D.~Garc\'{\i}a, M.~Maestre, and P.~Sevilla-Peris. \newblock {\em Dirichlet {S}eries and {H}olomorphic {F}unctions in {H}igh {D}imensions}, volume~37 of {\em New Mathematical Monographs}. \newblock Cambridge University Press, Cambridge, 2019. \newblock \href {https://doi.org/10.1017/9781108691611} {\path{doi:10.1017/9781108691611}}. \bibitem{defantperez_2018} A.~Defant and A.~P\'{e}rez. \newblock Hardy spaces of vector-valued {D}irichlet series. \newblock {\em Studia Math.}, 243(1):53--78, 2018. \newblock \href {https://doi.org/10.4064/sm170303-26-7} {\path{doi:10.4064/sm170303-26-7}}. \bibitem{demazeux2011essential} R.~Demazeux. \newblock Essential norms of weighted composition operators between {H}ardy spaces {$H^p$} and {$H^q$} for {$1\leq p, q\leq\infty$}. \newblock {\em Studia Math.}, 206(3):191--209, 2011. \newblock \href {https://doi.org/10.4064/sm206-3-1} {\path{doi:10.4064/sm206-3-1}}. \bibitem{diestel2012sequences} J.~Diestel. \newblock {\em Sequences and series in {B}anach spaces}, volume~92 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, 1984. \newblock \href {https://doi.org/10.1007/978-1-4612-5200-9} {\path{doi:10.1007/978-1-4612-5200-9}}. \bibitem{guo2022dirichlet} K.~Guo and J.~Ni. \newblock Dirichlet series and the {N}evanlinna class in infinitely many variables. \newblock {\em arXiv preprint arXiv:2201.01993}, 2022. \bibitem{hedenmalm1997hilbert} H.~Hedenmalm, P.~Lindqvist, and K.~Seip. \newblock A {H}ilbert space of {D}irichlet series and systems of dilated functions in {$L^2(0,1)$}. \newblock {\em Duke Math. J.}, 86(1):1--37, 1997. \newblock \href {https://doi.org/10.1215/S0012-7094-97-08601-4} {\path{doi:10.1215/S0012-7094-97-08601-4}}. \bibitem{kaijser1977note} S.~Kaijser. \newblock A note on dual {B}anach spaces. \newblock {\em Math. Scand.}, 41(2):325--330, 1977. \newblock \href {https://doi.org/10.7146/math.scand.a-11725} {\path{doi:10.7146/math.scand.a-11725}}. \bibitem{konyaginqueffelec_2002} S.~V. Konyagin and H.~Queff\'{e}lec. \newblock The translation {$\frac12$} in the theory of {D}irichlet series. \newblock {\em Real Anal. Exchange}, 27(1):155--175, 2001/02. \bibitem{lefevre2009essential} P.~Lef\`evre. \newblock Essential norms of weighted composition operators on the space {$\mathcal{H}^\infty$} of {D}irichlet series. \newblock {\em Studia Math.}, 191(1):57--66, 2009. \newblock \href {https://doi.org/10.4064/sm191-1-4} {\path{doi:10.4064/sm191-1-4}}. \bibitem{Quefflec95} H.~Queff\'{e}lec. \newblock H. {B}ohr's vision of ordinary {D}irichlet series; old and new results. \newblock {\em J. Anal.}, 3:43--60, 1995. \bibitem{queffelec2013diophantine} H.~Queff\'elec and M.~Queff\'elec. \newblock {\em Diophantine approximation and {D}irichlet series}, volume~80 of {\em Texts and Readings in Mathematics}. \newblock Hindustan Book Agency, New Delhi; Springer, Singapore, 2020. \newblock Second edition. \newblock \href {https://doi.org/10.1007/978-981-15-9351-2} {\path{doi:10.1007/978-981-15-9351-2}}. \bibitem{rudin1962fourier} W.~Rudin. \newblock {\em Fourier analysis on groups}. \newblock Interscience Tracts in Pure and Applied Mathematics, No. 12. Interscience Publishers (a division of John Wiley and Sons), New York-London, 1962. \bibitem{rudin1969function} W.~Rudin. \newblock {\em Function theory in polydiscs}. \newblock W. A. Benjamin, Inc., New York-Amsterdam, 1969. 
\noindent T.~Fern\'andez Vidal, D.~Galicer\\ Departamento de Matem\'{a}tica, Facultad de Cs. Exactas y Naturales, Universidad de Buenos Aires and IMAS-CONICET. Ciudad Universitaria, Pabell\'on I (C1428EGA) C.A.B.A., Argentina, [email protected], [email protected]\\ \noindent P.~Sevilla-Peris\\ Institut Universitari de Matem\`atica Pura i Aplicada. Universitat Polit\`ecnica de Val\`encia. Cmno Vera s/n 46022, Spain, [email protected] \end{document}
2205.07822v1
http://arxiv.org/abs/2205.07822v1
Description of $GL_3$-orbits on the quadruple projective varieties
\documentclass[11pt]{article} \usepackage{amsfonts,amsmath,amssymb} \usepackage{amsthm} \usepackage{enumerate} \usepackage{graphicx} \usepackage[hypertex]{hyperref} \usepackage{epic} \usepackage{eepic} \usepackage{tikz} \usepackage{comment} \usepackage{multirow} \usepackage{fouridx} \usepackage{here} \usepackage{amsmath,amsfonts,amsthm,amssymb,amscd} \usepackage[all]{xy} \usepackage{thmtools,thm-restate} \declaretheorem{theorem} \usepackage{float} \usepackage{comment} \usepackage{enumerate} \usepackage{bm} \makeatletter \renewcommand{\theequation}{ \thesection.\arabic{equation}} \@addtoreset{equation}{section} \makeatother \newcommand{\R }{\mathbb{R}} \newcommand{\C }{\mathbb{C}} \newcommand{\N }{\mathbb{N}} \newcommand{\K }{K} \newcommand{\Z }{\mathbb{Z}} \newcommand{\pr}[1]{\mathbb{P}^{#1}} \newcommand{\SL}[1]{SL(#1,\K)} \newcommand{\GL}[1]{GL_{#1}} \newcommand{\diag}[1]{\mathrm{diag}(#1)} \newcommand{\orb}[1]{\mathcal{O}(#1)} \newcommand{\prb}[1]{\mathcal P(#1)} \newcommand{\prbn}[2]{\mathcal{P}_{#1}(#2)} \newcommand{\vp}[1]{\varphi[#1]} \newcommand{\eq}[1]{\left[#1\right]} \newcommand{\bbra}[1]{\bigl(#1\bigr)} \newcommand{\tilx}[1]{\tilde{X}_{#1}} \newcommand{\ba }{\bm{a}} \newcommand{\bb }{\bm{b}} \newcommand{\bd }{\bm{d}} \newcommand{\be }{\bm{e}} \renewcommand{\labelenumi}{(\arabic{enumi})} \renewcommand{\labelenumii}{\roman{enumii})} \renewcommand{\labelenumiii}{(\alph{enumiii})} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{thmalph}{Theorem} \renewcommand{\thethmalph}{\Alph{thmalph}} \newtheorem{thmalphp}{Theorem} \renewcommand{\thethmalphp}{A'} \newtheorem*{fct}{Fact} \newtheorem{fact}[thm]{Fact} \newtheorem{factalph}[thmalph]{Fact} \renewcommand{\thefactalph}{\Alph{factalph}} \newtheorem{coralph}[thmalph]{Corollary} \renewcommand{\thecoralph}{\Alph{coralph}} \newtheorem{propalph}[thmalph]{Proposition} \renewcommand{\thepropalph}{\Alph{propalph}} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary}\newtheorem{conj}[thm]{Conjecture} \theoremstyle{definition} \newtheorem{df}[thm]{Definition} \newtheorem*{notation}{Notation} \newtheorem{exmp}[thm]{Example} \newtheorem{prob}{Problem}[section] \newtheorem{remark}[thm]{Remark} \newtheorem{ass}{Assumption} \providecommand{\bysame}{\makebox[3em]{\hrulefill}\thinspace} \usepackage[truedimen,margin=30truemm]{geometry} \pagestyle{plain} \newcommand{\bethm}{\begin{thm}} \newcommand{\enthm}{\end{thm}} \newcommand{\bedef}{\begin{df}} \newcommand{\beprop}{\begin{prop}} \newcommand{\enprop}{\end{prop}} \newcommand{\belem}{\begin{lemma}} \newcommand{\enlem}{\end{lemma}} \newcommand{\berem}{\begin{remark}} \newcommand{\enrem}{\end{remark}} \newcommand{\beex}{\begin{exmp}} \newcommand{\ene}{\end{exmp}} \newcommand{\beeq}{\begin{equation}} \newcommand{\eneq}{\end{equation}} \newcommand{\bealign}{\begin{align}} \newcommand{\bear}{\begin{array}} \newcommand{\enar}{\end{array}} \newcommand{\beca}{\begin{cases}} \newcommand{\enca}{\end{cases}} \newcommand{\been}{\begin{enumerate}} \newcommand{\enen}{\end{enumerate}} \allowdisplaybreaks \title{Description of $GL_3$-orbits on the quadruple projective varieties } \author{Naoya SHIMAMOTO \\ Chiba Institute of Technology } \date{} \begin{document} \maketitle \begin{abstract} This article gives a description of the diagonal $GL_3$-orbits on the quadruple projective variety $(\mathbb P^2)^4$. We give explicit representatives of orbits, and describe the closure relations of orbits. 
A distinguished feature of our setting is that it is the simplest case where $\diag{\GL{n}}$ has infinitely many orbits but has an open orbit in the multiple projective space $(\mathbb P^{n-1})^m$. \end{abstract} \tableofcontents \section{Introduction} The object of this article is an orbit decomposition of the flag variety $G/P$ under the action of a closed subgroup $H$ of a reductive group $G$ in the setting that $H$ has an open orbit in $G/P$ but $\#(H\backslash G/P)=\infty$. The study of the double coset $H\backslash G/P$ is motivated by representation theory. For example, for a real reductive algebraic group $G$, and for an algebraically defined closed subgroup $H$, Kobayashi-Oshima \cite{ko} proved that the finiteness (resp. boundedness) of the multiplicities of irreducible admissible representations $\pi$ of $G$ occurring in the induced representations $\mathrm{Ind}_H^G(\tau)$ for finite-dimensional irreducible representations $\tau$ of $H$ is equivalent to the existence of open $H$-orbits (resp. $H_c$-orbits) on $G/P_G$ (resp. $G_c/B_G$), where $P_G$ is a minimal parabolic subgroup of $G$, and $B_G$ is a Borel subgroup of the complexification $G_c$ of $G$. A homogeneous space $G/H$ satisfying these equivalent conditions is called a real spherical variety (resp.\ spherical variety). In this case, the existence of an open $H$-orbit is known to be equivalent to the finiteness of $H$-orbits \cite{br1, br2, morb}. Needless to say, the latter implies the former; however, the converse may fail for a general parabolic subgroup $P$ of $G$. Even in such cases, the existence of open $H$-orbits or the finiteness of $H$-orbits on the flag variety $G/P$ gives useful information on the branching laws between representations of $H$ and representations of $G$ induced from characters of $P$ \cite{k,tt}. Kobayashi-Oshima \cite{ko} also proved that the finiteness of the dimension of the space of symmetry breaking operators ($H$-intertwining operators from irreducible admissible representations of $G$ to those of $H$) is equivalent to the existence of open $P_H$-orbits on $G/P_G$, where $H$ is reductive and $P_H$ is its minimal parabolic subgroup. Furthermore, such pairs $(G,H)$ are classified by Kobayashi-Matsuki \cite{km} under the assumption that $(G,H)$ are symmetric pairs. Moreover, an explicit description of the double coset $P_H\backslash G/P_G\cong \mathrm{diag}(H)\backslash(G\times H)/(P_G\times P_H)$ parametrises the ``families'' of symmetry breaking operators, as in the works \cite{kl,ks,ks2} for the pair $(G,H)=(O(p+1,q),O(p,q))$, which is in the classification given by \cite{km}. Our research stems from these motivations. \bigskip So far, an explicit description of the double coset $H\backslash G/P$ has been studied only when $H$ has finitely many orbits in $G/P$. For instance, the descriptions of $H$-orbits on $G/P$ for symmetric pairs $(G,H)$ have been well studied by Matsuki \cite{mp,mpp}, generalising the Bruhat decomposition, where $\mathrm{diag}(G)\backslash (G\times G)/(P_1\times P_2)$ may be identified with $P_1\backslash G/P_2$. Matsuki \cite{mBex,mB} also gave a description of the diagonal action of orthogonal groups on multiple flag varieties $G^m/(P_1\times P_2\times\cdots\times P_m)$ under the assumption that the number of orbits is finite (the ``finite type'' case); such multiple flag varieties were also classified in that work.
Note that the pair $(G^m, \mathrm{diag}(G))$ is no longer a symmetric pair if $m\geq 3$, and multiple flag varieties of finite type were classified earlier in \cite{mwzA,mwzC} for $G$ of type $A$ or $C$. \bigskip In this article, we are interested in the case where $\#(H\backslash G/P)=\infty$. We fix $\mathbb K$ to be an algebraically closed field of characteristic $0$, and $Q_n$ to be the maximal parabolic subgroup of $\GL{n}$ such that $GL_n/Q_n\cong\pr{n-1}$ over $\mathbb K$. We first prove the following. \begin{thmalph}[see Theorems \ref{thm:open} and \ref{thm:finite}] \label{thm:openandfinite} Let $n,m$ be positive integers with $n\geq 2$, and let $\mathbb K$ be an algebraically closed field of characteristic $0$. Then the following hold: \begin{enumerate} \item the number of $\diag{GL_n}$-orbits on $(\pr{n-1})^m$ is finite if and only if $m\leq 3$; \item there exists an open $\diag{GL_n}$-orbit on $(\pr{n-1})^m$ if and only if $m\leq n+1$. \end{enumerate} \end{thmalph} In particular, there exist infinitely many orbits and an open orbit simultaneously if and only if $4\leq m\leq n+1$. Hence, the case $(n,m)=(3,4)$ can be regarded as the simplest case among them. In this article, we highlight this case, and give an explicit description of the $\diag{\GL{3}}$-orbits on $(GL_3/Q_3)^4\cong(\pr{2})^4$ and their closure relations. For this aim, we introduce the following $\diag{GL_3}$-invariant map with a finite image: \beeq \label{eq:prbintro} \begin{array}{cccc} \pi\colon & (\mathbb P^2)^4 & \to & Map(2^{\{1,2,3,4\}},\mathbb N) \\ &\rotatebox{90}{$\in$} && \rotatebox{90}{$\in$} \\ &([v_i])_{i=1}^4 &\mapsto &\left(I\mapsto \dim\mathrm{Span}\{v_i\}_{i\in I}\right) \end{array} \eneq Our strategy is to determine the image of $\pi$ in $Map(2^{\{1,2,3,4\}},\mathbb N)$, and to describe the $\diag{GL_3}$-orbit decomposition of each fibre $\pi^{-1}(\varphi)$ for $\varphi\in\mathrm{Image}(\pi)$. Here is a description of the $\mathrm{diag}(GL_3)$-orbits on $(GL_3/Q_3)^4\cong (\pr{2})^4$. \begin{thmalph}[see Theorems \ref{thm:single}, \ref{thm:parameter}, and Propositions \ref{thm:clsingle} and \ref{thm:clparameter}] \label{thm:orb34} \been \item The map $\pi\colon (\mathbb P^2)^4\to Map(2^{\{1,2,3,4\}},\mathbb N)$ is invariant under the diagonal action of $G=GL_3$. \item A map $\varphi\in Map(2^{\{1,2,3,4\}},\mathbb N)$ belongs to the image of $\pi$ if and only if $\varphi[\ast]$ appears in Figure \ref{hasseintro} via the correspondence $\varphi\leftrightarrow\varphi[\ast]$ (see Definitions \ref{df:map} and \ref{df:mapsplit}). \item For each $\varphi[\ast]$ listed in Figure \ref{hasseintro}, the fibre $\pi^{-1}(\varphi[\ast])$ is a single $\diag{G}$-orbit unless $\varphi[\ast]=\vp{6}$. The fibre $\pi^{-1}(\vp{6})$ is decomposed into infinitely many $\diag{G}$-orbits by means of $5$-dimensional orbits $\orb{5;p}$ with parameters $p\in \mathbb P^1\setminus\{0,1,\infty\}$ (see Definition \ref{df:mapsplit}). \item If we fix $p\in\pr{1}\setminus\{0,1,\infty\}$, then the closure relations among all the fibres $\pi^{-1}(\vp{\ast})$ and the orbit $\orb{5;p}$ are given by the Hasse diagram in Figure \ref{hasseintro}.
\enen \begin{figure}[H] \begin{minipage}{0.78\hsize} \centering \begin{tikzpicture}[scale=1.4] \node(o8)at(-0.5,3){\scalebox{0.8}{$\vp{8}$}}; \node(o71)at(-2.6,2){\scalebox{0.8}{$\vp{7;1}$}}; \node(o72)at(-1.2,2){\scalebox{0.8}{$\vp{7;2}$}}; \node(o73)at(0.2,2){\scalebox{0.8}{${\vp{7;3}}$}}; \node(o74)at(1.6,2){\scalebox{0.8}{${\vp{7;4}}$}}; \node(o634)at(-3.5,1){\scalebox{0.8}{${\vp{6;3,4}}$}}; \node(o624)at(-2.3,1){\scalebox{0.8}{${\vp{6;2,4}}$}}; \node(o614)at(-1.1,1){\scalebox{0.8}{${\vp{6;1,4}}$}}; \node(o623)at(0.1,1){\scalebox{0.8}{${\vp{6;2,3}}$}}; \node(o613)at(1.3,1){\scalebox{0.8}{${\vp{6;1,3}}$}}; \node(o612)at(2.5,1){\scalebox{0.8}{${\vp{6;1,2}}$}}; \node(p6)at(3.8,0.7){\scalebox{0.8}{${\vp{6}}$}}; \node(o534)at(-3.5,0){\scalebox{0.8}{${\vp{5;3,4}}$}}; \node(o524)at(-2.3,0){\scalebox{0.8}{${\vp{5;2,4}}$}}; \node(o514)at(-1.1,0){\scalebox{0.8}{${\vp{5;1,4}}$}}; \node(o523)at(0.1,0){\scalebox{0.8}{${\vp{5;2,3}}$}}; \node(o513)at(1.3,0){\scalebox{0.8}{${\vp{5;1,3}}$}}; \node(o512)at(2.5,0){\scalebox{0.8}{${\vp{5;1,2}}$}}; \node(o5)at(3.8,-0.2){\scalebox{0.8}{$\orb{5;p}$}}; \node(o412)at(-2.5,-1){\scalebox{0.8}{${\vp{4;1,2}}$}}; \node(o414)at(-0.5,-1){\scalebox{0.8}{${\vp{4;1,4}}$}}; \node(o413)at(1.5,-1){\scalebox{0.8}{${\vp{4;1,3}}$}}; \node(o41)at(-3.5,-1.2){\scalebox{0.8}{${\vp{4;1}}$}}; \node(o42)at(-1.5,-1.2){\scalebox{0.8}{${\vp{4;2}}$}}; \node(o43)at(0.5,-1.2){\scalebox{0.8}{${\vp{4;3}}$}}; \node(o44)at(2.5,-1.2){\scalebox{0.8}{${\vp{4;4}}$}}; \node(o2)at(-0.5,-2.2){\scalebox{0.8}{${\vp{2}}$}}; \draw[ultra thin](-0.5,2.85)--(-2.6,2.15); \draw[ultra thin](-0.5,2.85)--(-1.2,2.15); \draw[ultra thin](-0.5,2.85)--(0.2,2.15); \draw[ultra thin](-0.5,2.85)--(1.6,2.15); \draw[ultra thin](-2.6,1.85)--(-3.5,1.15); \draw[ultra thin](-2.6,1.85)--(-2.3,1.15); \draw[ultra thin](-2.6,1.85)--(0.1,1.15); \draw[ultra thin](-1.2,1.85)--(-3.5,1.15); \draw[ultra thin](-1.2,1.85)--(-1.1,1.15); \draw[ultra thin](-1.2,1.85)--(1.3,1.15); \draw[ultra thin](0.2,1.85)--(-2.3,1.15); \draw[ultra thin](0.2,1.85)--(-1.1,1.15); \draw[ultra thin](0.2,1.85)--(2.5,1.15); \draw[ultra thin](1.6,1.85)--(0.1,1.15); \draw[ultra thin](1.6,1.85)--(1.3,1.15); \draw[ultra thin](1.6,1.85)--(2.5,1.15); \draw[ultra thin](-2.6,1.85)--(3.8,1.05); \draw[ultra thin](-1.2,1.85)--(3.8,1.05); \draw[ultra thin](0.2,1.85)--(3.8,1.05); \draw[ultra thin](1.6,1.85)--(3.8,1.05); \draw[ultra thin](3.8,1.05)--(3.8,0.85); \draw[ultra thin](-3.5,0.85)--(-3.5,0.15); \draw[ultra thin](-2.3,0.85)--(-2.3,0.15); \draw[ultra thin](-1.1,0.85)--(-1.1,0.15); \draw[ultra thin](0.1,0.85)--(0.1,0.15); \draw[ultra thin](1.3,0.85)--(1.3,0.15); \draw[ultra thin](2.5,0.85)--(2.5,0.15); \draw[ultra thin](3.8,0.55)--(o5); \draw[ultra thin](3.8,0.55)--(-3.5,0.15); \draw[ultra thin](3.8,0.55)--(-2.3,0.15); \draw[ultra thin](3.8,0.55)--(-1.1,0.15); \draw[ultra thin](3.8,0.55)--(0.1,0.15); \draw[ultra thin](3.8,0.55)--(1.3,0.15); \draw[ultra thin](3.8,0.55)--(2.5,0.15); \draw[ultra thin](-3.5,-0.15)--(-2.5,-0.85); \draw[ultra thin](-2.3,-0.15)--(-0.5,-0.85); \draw[ultra thin](-1.1,-0.15)--(1.5,-0.85); \draw[ultra thin](0.1,-0.15)--(1.5,-0.85); \draw[ultra thin](1.3,-0.15)--(-0.5,-0.85); \draw[ultra thin](2.5,-0.15)--(-2.5,-0.85); \draw[ultra thin](-3.5,-0.15)--(-3.5,-1.05); \draw[ultra thin](-3.5,-0.15)--(-1.5,-1.05); \draw[ultra thin](-2.3,-0.15)--(-3.5,-1.05); \draw[ultra thin](-2.3,-0.15)--(0.5,-1.05); \draw[ultra thin](-1.1,-0.15)--(-1.5,-1.05); \draw[ultra thin](-1.1,-0.15)--(0.5,-1.05); \draw[ultra thin](0.1,-0.15)--(-3.5,-1.05); \draw[ultra 
thin](0.1,-0.15)--(2.5,-1.05); \draw[ultra thin](1.3,-0.15)--(-1.5,-1.05); \draw[ultra thin](1.3,-0.15)--(2.5,-1.05); \draw[ultra thin](2.5,-0.15)--(0.5,-1.05); \draw[ultra thin](2.5,-0.15)--(2.5,-1.05); \draw[ultra thin](o5)--(3.6,-0.54); \draw[ultra thin](3.6,-0.54)--(-2.3,-0.54); \draw[ultra thin](-2.3,-0.54)--(-3.5,-1.05); \draw[ultra thin](-0.75,-0.54)--(-1.5,-1.05); \draw[ultra thin](0.7,-0.54)--(0.5,-1.05); \draw[ultra thin](2.6,-0.54)--(2.5,-1.05); \draw[ultra thin](-2.5,-1.15)--(-0.5,-2.05); \draw[ultra thin](-0.5,-1.15)--(-0.5,-2.05); \draw[ultra thin](1.5,-1.15)--(-0.5,-2.05); \draw[ultra thin](-3.5,-1.35)--(-0.5,-2.05); \draw[ultra thin](-1.5,-1.35)--(-0.5,-2.05); \draw[ultra thin](0.5,-1.35)--(-0.5,-2.05); \draw[ultra thin](2.5,-1.35)--(-0.5,-2.05); \draw[ultra thin](o5)--(3.8,-1.25); \draw[ultra thin](3.8,-1.25)--(-0.5,-2.05); \end{tikzpicture} \caption{Hasse diagram} \label{hasseintro} \end{minipage} \hspace{-7mm} \begin{minipage}{0.19\hsize} \centering \begin{tikzpicture}[scale=1.3] \node(o8)at(0,3){\scalebox{0.8}{$\{{\vp{8}}\}$}}; \node(o7)at(0,2){\scalebox{0.8}{$\{{\vp{7;\ast}}\}$}}; \node(o6)at(-0.4,1){\scalebox{0.8}{$\{{\vp{6;\ast,\ast}}\}$}}; \node(p6)at(0.8,0.7){\scalebox{0.8}{$\{{\vp{6}}\}$}}; \node(o5)at(-0.4,0){\scalebox{0.8}{$\{{\vp{5;\ast,\ast}}\}$}}; \node(o5p)at(0.8,-0.45){\scalebox{0.8}{$\{\orb{5;q}\left|q\in O_p\right.\}$}}; \node(o4)at(-0.8,-1){\scalebox{0.8}{$\{{\vp{4;\ast,\ast}}\}$}}; \node(o4')at(0.2,-1.4){\scalebox{0.8}{$\{{\vp{4;\ast}}\}$}}; \node(o2)at(0,-2.2){\scalebox{0.8}{$\{{\vp{2}}\}$}}; \draw[ultra thin](0,2.85)--(0,2.15); \draw[ultra thin](0,1.85)--(-0.4,1.15); \draw[ultra thin](0,1.85)--(0.8,0.85); \draw[ultra thin](-0.4,0.85)--(-0.4,0.15); \draw[ultra thin](0.8,0.55)--(-0.4,0.15); \draw[ultra thin](0.8,0.55)--(o5p); \draw[ultra thin](-0.4,-0.15)--(-0.8,-0.8); \draw[ultra thin](-0.4,-0.15)--(0.2,-1.25); \draw[ultra thin](-0.8,-1.15)--(0,-2.05); \draw[ultra thin](0,-1.55)--(0,-2.05); \draw[ultra thin](0.9,-0.6)--(0.2,-1.25); \draw[ultra thin](0.9,-0.6)--(0.9,-1.55); \draw[ultra thin](0.9,-1.55)--(0,-2.05); \end{tikzpicture} \hspace{-7mm} \caption{mod $\mathcal{S}_4$} \label{hassemods4} \end{minipage} \end{figure} \end{thmalph} \berem \begin{enumerate} \item Since $\diag{G}\backslash(\mathbb P^2)^4$ admits a natural action of the symmetric group $\mathcal{S}_4$, one can consider the quotient $\left(\mathrm{diag}(G)\backslash (\mathbb P^2)^4\right)/\mathcal S_4$ in Figure \ref{hassemods4} where $O_p:=\{p,\frac1p,1-p,\frac1{1-p},1-\frac1p,\frac1{1-\frac1p}\}$. \item The symbol $\varphi[\ast]\in Map(2^{\{1,2,3,4\}},\mathbb N)$ has the following information: the dimension of the fibre $\pi^{-1}\left(\varphi[k;J]\right)\subset(\mathbb P^2)^4$ is $k$, where $J\subset \{1,2,3,4\}$ is another parameter. We observe that $\varphi[k;J]$ occurs in Figure \ref{hasseintro} if and only if so is $\varphi[k;\sigma J]$ for $\sigma\in\mathcal S_4$. \end{enumerate} \enrem \begin{notation} We set $\N=\{0,1,2,3,\ldots\}$, and $[m]$ denotes the set $\{1,2,\ldots,m \}\subset \mathbb N$ for $m\in \mathbb N$. We let $\mathbb K$ be an algebraically closed field with characteristic $0$. For a vector $v\in \mathbb K^n\setminus\{\bm 0\}$, we write the $\mathbb K^\times$-orbit through $v$ as $[v]\in\pr{n-1}$. Similarly for a matrix $(v_i)_{i=1}^m\in M(n,m)$ without any columns equal to $\bm 0$, the notion $[v_i]_{i=1}^m$ denotes the $m$-tuple of $\mathbb P^{n-1}$. 
For an $m$-tuple $[v_i]_{i=1}^m$ in $\mathbb P^{n-1}$, we write the subspace spanned by these $m$ vectors by $\langle v_i\rangle_{i=1}^m$. Furthermore, $\{e_i\}_{i=1}^n$ denotes the standard basis of $\mathbb K^n$. \end{notation} \section{Existence of open orbits and finiteness of orbits} \label{openandfinite} In this section, we prove Theorem \ref{thm:openandfinite}, which determines the existence of open orbits and finiteness of orbits under the diagonal action of $\GL{n}$ on the multiple projective space $(\pr{n-1})^m$. \begin{thm} \label{thm:open} Let $n\geq 2$ and $m$ be positive integers. There exists an open $\diag{\GL{n}}$-orbit on $(\pr{n-1})^m$ if and only if $n\geq m-1$. \end{thm} \begin{proof} \been \item If $n\geq m$, then we can take an element $[e_i]_{i=1}^m\in (\mathbb P^{n-1})^m$. Its stabiliser is \[ \left\{\left.\left(\bear{cc|c} \multicolumn{2}{c|}{\multirow{2}{*}{\scalebox{1.5}{$C$}}} & \multirow{2}{*}{\scalebox{1.2}{$\ast$}} \\ && \\ \hline \multicolumn{2}{c|}{\scalebox{1.2}{$0$}} & \scalebox{1.2}{$\ast$} \enar\right)\right| \;C=\diag{c_1,c_2,\ldots,c_m}\in\GL{m} \;\right\}\subset\GL{n}, \] and $\diag{GL_n}\cdot[e_i]_{i=1}^m$ is the open orbit since $ \dim\left(\diag{GL_n}\cdot[e_i]_{i=1}^m\right)=\dim\GL{n}-(m+n(n-m))=(n-1)m=\dim (\mathbb P^{n-1})^m. $ \item If $n+1=m$, then we can take a element $v=[v_i]_{i=1}^{n+1}\in (\mathbb P^{n-1})^{n+1}$ where $v_i:=e_i$ for $1\leq i\leq n$ and $v_{n+1}:=\sum_{k=1}^ne_k$. If $g\in \GL{n}$ stabilises $[v_i]=[e_i]$ for $1\leq i\leq n$, then it is a diagonal matrix, and if it stabilises $[v_{n+1}]= [\sum_{k=1}^ne_k]$, then it is actually a scalar matrix. Hence $\diag{GL_n}\cdot v$ is the open orbit since $ \dim\left(\diag{GL_n}\cdot v\right)=\dim\GL{n}-1=(n-1)(n+1)=\dim (\mathbb P^{n-1})^{n+1}. $ \item Let $n+2\leq m$. Remark that the centre of $GL_n$ acts trivially on $\mathbb P^{n-1}$, and the dimension of any $\diag{GL_n}$-orbit in $(\mathbb P^{n-1})^m$ is less than $n^2$. Hence there is no open orbit since $ \dim (\mathbb P^{n-1})^m=(n-1)m>(n-1)(n+1)=n^2-1. $ \enen \end{proof} \begin{thm} \label{thm:finite} Let $n\geq 2$ and $m$ be positive integers. There are only finitely many $\diag{\GL{n}}$-orbits on $(\pr{n-1})^m$ if and only if $m\leq 3$. \end{thm} \begin{proof} \been \item The action of $GL_n$ on $\mathbb P^{n-1}$ is transitive, hence there is only one orbit. \item From Bruhat decomposition, we have $\diag{GL_n}\backslash (\mathbb P^{n-1})^2=\diag{GL_n}\backslash (GL_n/Q_n)^2\cong Q_n\backslash GL_n/Q_n$ where $Q_n$ is the maximal parabolic subgroup of $GL_n$ with the Levi subgroup $GL_1\times GL_{n-1}$. It has a one to one correspondence with $\mathcal S_{n-1}\backslash \mathcal S_n/\mathcal S_{n-1}$, which is a two point set. \item If $m=3$, we have \bealign \diag{GL_n}\backslash (\mathbb P^{n-1})^3&= \diag{GL_n}\backslash\left\{[v_1, v_2, v_3]\in (\mathbb P^{n-1})^3\left|[v_1]\neq[v_2]\neq[v_3]\neq[v_1]\right.\right\} \notag \\ &\;\;\;\;\amalg\bigcup_{1\leq i<j\leq 3}\diag{GL_n}\backslash\left\{[v_1,v_2,v_3]\in (\mathbb P^{n-1})^3\left|[v_i]=[v_j]\right.\right\}. \label{eq:xn3decomp} \end{align} For $1\leq i< j\leq3$, we have \beeq \label{eq:xn3decomp1} \diag{GL_n}\backslash\left\{[v_1,v_2,v_3]\in (\mathbb P^{n-1})^3\left|[v_i]=[v_j]\right.\right\}\cong \diag{GL_n}\backslash (\mathbb P^{n-1})^2, \eneq hence it is finite from the previous case. 
Furthermore, we have \bealign &\left\{[v_1,v_2,v_3]\in (\mathbb P^{n-1})^3\left|[v_1]\neq[v_2]\neq[v_3]\neq[v_1]\right.\right\} \notag \\ &\;\;=\diag{GL_n}\cdot [e_1,e_2,e_1+e_2]\ \ \bigl(\ \amalg\ \diag{GL_n}\cdot [e_1,e_2,e_3]\bigr). \label{eq:xn3decomp2} \end{align} Remark that the inside of the bracket occurs only if $n\geq 3$. Combining (\ref{eq:xn3decomp}), (\ref{eq:xn3decomp1}), and (\ref{eq:xn3decomp2}), $\diag{GL_n}\backslash (\mathbb P^{n-1})^3$ is finite. \item If $m\geq 4$, then we can take $v(p):=[e_1,e_2,e_1+e_2, p ,p, \ldots,p]\in (\mathbb P^{n-1})^m$ for $p\in\pr{1}$. For $p,q\in\pr{1}$, assume that $g\in\GL{n}$ satisfies $\diag{g}\cdot v(p)=v(q)$. Since $g$ fixes $[e_1], [e_2]$, and $[e_1+e_2]$, the restriction $g|_{\langle e_1,e_2\rangle}$ acts as a scalar, hence $q=g\cdot p=p$ and $v(p)=v(q)$. Hence $\{\diag{GL_n}\cdot v(p)\}_{p\in\pr{1}}$ is an infinite family of pairwise distinct orbits. \enen \end{proof} Combining these results, we complete the proof of Theorem \ref{thm:openandfinite}. \section{General structures of orbits on multiple projective spaces} \label{general} In Section \ref{openandfinite}, we observed the existence of open orbits and finiteness of orbits on $(\mathbb P^{n-1})^m$ under the diagonal action of $\GL{n}$. In this section, we consider further general properties of diagonal $GL_n$-orbits on $(\pr{n-1})^m$. \subsection{Indecomposable splitting of configuration spaces} To simplify the description of the ${GL_n}$-orbit decomposition of $(\mathbb P^{n-1})^m$, we introduce some ${GL_n}$-invariant subsets of $(\mathbb P^{n-1})^m$ and consider the decompositions into them. For elements of $(\mathbb P^{n-1})^m$, we can define the notion of indecomposability as follows: \begin{df} \label{df:indecomposable} For $v=[v_i]_{i=1}^m\in (\mathbb P^{n-1})^m$, we say $v$ is decomposable if there exists a pair of non-empty subsets $I,J\subset[m]$ such that $I\amalg J=[m]$ and $\langle v_i\rangle_{i\in I}\cap \langle v_j\rangle_{j\in J}=\bm 0$. If $v$ admits no such decomposition, we say that $v$ is indecomposable. \end{df} This is equivalent to the notion of indecomposability as objects in the category of flags, and with this notion, we can consider indecomposable splittings of elements in $(\mathbb P^{n-1})^m$ as follows: \begin{df}\label{df:splitting} \begin{enumerate} \item Define a set $\mathcal P_{m}$ as below: \[\mathcal P_{m}:=\left\{\{(I_k,r_k)\}_{k=1}^l\left|\ \coprod_{k=1}^lI_k=[m],\ I_k\neq\emptyset,\ r_k\in\mathbb N\right.\right\}.\] \item Define the map $\varpi\colon (\mathbb P^{n-1})^m\to \mathcal P_{m},\ v\mapsto \{(I_k,r_k)\}_{k=1}^l$ where \begin{enumerate} \item $\langle v_i\rangle_{i=1}^m=\bigoplus_{k=1}^l \langle v_i\rangle_{i\in I_k}$ and $r_k=\dim\langle v_i\rangle_{i\in I_k}$, \item $[v_i]_{i\in I_k}\in (\mathbb P^{n-1})^{\#I_k}$ is indecomposable. \end{enumerate} \end{enumerate} \end{df} The map $\varpi$ is well-defined since it is generally known that indecomposable splittings of flags are unique up to isomorphism. However, we can also check this directly in our case. Let $\{(I_k,r_k)\}_{k=1}^l$ and $\{(J_k,s_k)\}_{k=1}^{l'}\in\mathcal P_m$ satisfy the two conditions i) and ii) in Definition \ref{df:splitting} for $[v_i]_{i=1}^m\in (\mathbb P^{n-1})^m$. If $I_k\cap J_{k'}\neq\emptyset$ and $I_k\setminus J_{k'}\neq\emptyset$, then $\langle v_i\rangle_{i\in I_k\cap J_{k'}}\cap\langle v_i\rangle_{i\in I_k\setminus J_{k'}}=\bm 0$ from i), which contradicts the indecomposability of $[v_i]_{i\in I_k}$ assumed in ii).
Hence either $I_k\cap J_{k'}=\emptyset$ or $I_k=J_{k'}$ holds. \begin{exmp} For $(n,m)=(3,4)$, we have \begin{align*} \varpi([e_1,e_1,e_2,e_3])&=\left\{(\{1,2\},1), (\{3\},1),(\{4\}, 1)\right\}, \\ \varpi([e_1,e_2,e_3,e_2+e_3])&=\left\{(\{1\},1),(\{2,3,4\},2)\right\}. \end{align*} \end{exmp} On the other hand, in this article, we mainly study the following map: \begin{equation} \label{eq:rankmatrix} \begin{array}{cccc} \pi\colon & (\mathbb P^{n-1})^m & \longrightarrow & Map(2^{[m]},\mathbb N) \\ & \rotatebox{90}{$\in$} &&\rotatebox{90}{$\in$} \\ & [v_i]_{i=1}^m &\mapsto & \left(I\mapsto \dim\langle v_i\rangle_{i\in I}\right) \end{array} \end{equation} We can characterise the correspondence between $\varpi$ and $\pi$ as below: \begin{lemma} \label{thm:rho} For the maps $\varpi$ and $\pi$ defined in Definition \ref{df:splitting} and (\ref{eq:rankmatrix}), there exist maps $\tilde\pi$ and $\rho$ (the dotted arrows) making the diagram below commute. \[\xymatrix{ & Map(2^{[m]},\mathbb N) \ar@{.>>}[r]^-{\rho} &\mathcal P_{m} \\ GL_n\backslash (\mathbb P^{n-1})^m\ar@{.>>}[r]^-{\tilde\pi}& \mathrm{Image}(\pi) \ar@<-0.3ex>@{^{(}->}[u] \ar@{.>>}[r]^-{\left.\rho\right|} & \mathrm{Image}(\varpi) \ar@<-0.3ex>@{^{(}->}[u]\\ (\mathbb P^{n-1})^m\ar@{->>}[u]\ar@{->>}[ru]_-{\pi} \ar@{->>}[rru]_-{\varpi} }\] \end{lemma} \begin{proof} Since the map $\pi$ is $GL_n$-invariant, the map $\tilde\pi$ is clearly well-defined. Now for a map $\varphi\colon 2^{[m]}\to\mathbb N$, we define it to be decomposable if there exists a partition $[m]=I_1\amalg I_2$ with $I_1,I_2\neq\emptyset$ which satisfies $\varphi(I)=\varphi(I\cap I_1)+\varphi(I\cap I_2)$ for all $I\subset [m]$. Then we can define the map $\rho\colon Map(2^{[m]},\mathbb N)\to \mathcal P_m$, $\varphi\mapsto \{(I_k,r_k)\}_{k=1}^l$ where \begin{enumerate} \item $\varphi(I)=\sum_{k=1}^l \varphi(I\cap I_k)$ for $I\subset [m]$, and $r_k=\varphi(I_k)$, \item the restricted map $\left.\varphi\right|_{I_k}$ is indecomposable for each $1\leq k\leq l$. \end{enumerate} The map $\rho$ is well-defined. Indeed, let $\{(I_k,r_k)\}_{k=1}^l$ and $\{(J_k,s_k)\}_{k=1}^{l'}\in\mathcal P_m$ satisfy the conditions (1) and (2) for $\varphi\colon 2^{[m]}\to\mathbb N$. If $I_k\cap J_{k'}\neq\emptyset$ and $I_k\setminus J_{k'}\neq\emptyset$, then for $I\subset I_k$, \begin{align*} \varphi(I)&=\sum_{k''=1}^{l'}\varphi(J_{k''}\cap I)=\varphi(J_{k'}\cap I)+\sum_{k''=1}^{l'}\varphi(J_{k''}\cap I\setminus J_{k'})=\varphi(J_{k'}\cap I)+\varphi(I\setminus J_{k'}) \\ &= \varphi\bigl((I_k\cap J_{k'})\cap I\bigr)+\varphi\bigl((I_k\setminus J_{k'})\cap I\bigr)\end{align*} from (1), which contradicts the indecomposability of $\left.\varphi\right|_{I_k}$ in (2). Hence either $I_k\cap J_{k'}=\emptyset$ or $I_k=J_{k'}$ holds, which proves the well-definedness of $\rho$. Now for $\varpi([v_i]_{i=1}^m)=\{(I_k,r_k)\}_{k=1}^l$, we have the following: \begin{enumerate} \item From $\langle v_i\rangle_{i=1}^m=\bigoplus_{k=1}^l \langle v_i\rangle_{i\in I_k}$, we have $\dim\langle v_i\rangle_{i\in I}=\sum_{k=1}^l \dim\langle v_i\rangle_{i\in I_k\cap I}$ for $I\subset[m]$. \item If $\emptyset\neq J,J'$ and $J\amalg J'=I_k$, then from the indecomposability of $[v_i]_{i\in I_k}$, we have $\langle v_i\rangle_{i\in I_k\cap J}\cap \langle v_i\rangle_{i\in I_k\cap J'}\neq \bm 0$. Hence $\dim\langle v_i\rangle_{i\in I_k}< \dim\langle v_i\rangle_{i\in I_k\cap J}+\dim \langle v_i\rangle_{i\in I_k\cap J'}$. \end{enumerate} That is, the map $I\mapsto\dim\langle v_i\rangle_{i\in I}$ satisfies the conditions (1) and (2) above for the partition $\{(I_k,r_k)\}_{k=1}^l$. Hence we have shown that $\rho\circ\pi=\varpi$.
\end{proof} \subsection{Some typical decompositions of multiple projective spaces} \label{typical} For the map $\varpi$ defined in Definition \ref{df:splitting}, there are some remarkable properties. \begin{lemma} \label{thm:imagevarpi} For the map $\varpi\colon (\mathbb P^{n-1})^m\to\mathcal P_{m}$ defined in Definition \ref{df:splitting}, an element $\left\{(I_k,r_k)\right\}_{k=1}^l$ of $\mathcal P_{m}$ is contained in the image of $\varpi$ if and only if the following conditions are satisfied: \begin{enumerate} \item $r_k=1$, or $2\leq r_k\leq \#I_k-1$ for each $1\leq k\leq l$, \item $\sum_{k=1}^lr_k\leq n$. \end{enumerate} \end{lemma} \begin{proof} If $\varpi([v_i]_{i=1}^m)=\left\{(I_k,r_k)\right\}_{k=1}^l$, since $\langle v_i\rangle_{i=1}^m=\bigoplus_{k=1}^l\langle v_i\rangle_{i\in I_k}$ and $r_k=\dim\langle v_i\rangle_{i\in I_k}$, the second condition obviously holds. Furthermore, assume that the first condition fails, in other words, that $2\leq r_k$ and $\#I_k\leq r_k$ for some $k$. Since $\#I_k<r_k=\dim\langle v_i\rangle_{i\in I_k}$ cannot occur, the equality $\#I_k=r_k$ must hold. Hence $\{v_i\}_{i\in I_k}$ is linearly independent, which contradicts the indecomposability. Conversely, if the two conditions are satisfied for $\{(I_k,r_k)\}_{k=1}^l\in\mathcal P_m$, we define $[v_i]_{i=1}^m$ as follows: \begin{enumerate} \item $R(k):=\sum_{k'\leq k}r_{k'}$ for $0\leq k\leq l$. In particular, $0=R(0)<R(1)<R(2)<\cdots<R(l)\leq n$. \item For $I_k=\{i(1)<i(2)<\cdots<i({\#I_k})\}$, set \[v_{i(j)}:=\begin{cases} e_{R(k-1)+j} & 1\leq j\leq r_k, \\ \sum_{j'=1}^{r_k}e_{R(k-1)+j'} & r_k+1\leq j \leq \#I_k. \end{cases}\] \end{enumerate} Then we have $\langle v_i\rangle_{i=1}^m=\langle e_j\rangle_{j=1}^{R(l)}=\bigoplus_{k=1}^l \langle e_{R(k-1)+j}\rangle _{j=1}^{r_k}=\bigoplus_{k=1}^l\langle v_i\rangle_{i\in I_k}$, and each $\langle v_i\rangle_{i\in I_k}$ is indecomposable since the first $r_k+1$ vectors are in general position in the $r_k$-dimensional space. Hence $\varpi([v_i]_{i=1}^m)=\{(I_k,r_k)\}_{k=1}^l$. \end{proof} \begin{lemma} \label{thm:rhobij} If we define the subset of $\mathcal P_m$ by \[\mathcal P_{n,m}':=\left\{\{(I_k,r_k)\}_{k=1}^l\in\mathrm{Image}(\varpi)\subset\mathcal P_m\left|\ r_k=1,\ \textrm{or}\ 2\leq r_k=\#I_k-1\ (1\leq k\leq l)\right.\right\}, \] then for each element $\{(I_k,r_k)\}_{k=1}^l\in\mathcal P_{n,m}'$, the fibre $\varpi^{-1}(\left\{(I_k,r_k)\right\}_{k=1}^l)\subset (\mathbb P^{n-1})^m$ is a single $GL_n$-orbit. Furthermore, this orbit coincides with the fibre $\pi^{-1}(\varphi)$ where \[\varphi\colon 2^{[m]}\to\mathbb N\colon I\mapsto\sum_{k=1}^l \min\left\{ \#(I_k\cap I), r_k\right\}. \] In particular, we have the following bijections. \[\xymatrix{ GL_n\backslash (\mathbb P^{n-1})^m \ar@{>>}@/^20pt/[rr]^-{\tilde\varpi}\ar@{>>}[r]^-{\tilde\pi}& \mathrm{Image}(\pi) \ar@{>>}[r]^-{\left.\rho\right|} & \mathrm{Image}(\varpi) \\ GL_n\backslash\varpi^{-1}(\mathcal P'_{n,m}) \ar[r]^-{\simeq}\ar@<-0.3ex>@{^{(}->}[u] & \left.\rho\right|^{-1}(\mathcal P'_{n,m}) \ar[r]^-{\simeq}\ar@<-0.3ex>@{^{(}->}[u] & \mathcal P'_{n,m} \ar@<-0.3ex>@{^{(}->}[u] }\] \end{lemma} \begin{proof} We define $v_1,\ldots,v_m$ as in Lemma \ref{thm:imagevarpi} for $\{(I_k,r_k)\}_{k=1}^l\in\mathcal P_{n,m}'$. Now let us consider an element $[w_i]_{i=1}^m\in\varpi^{-1}(\{(I_k,r_k)\}_{k=1}^l)$. \begin{enumerate} \item If $r_k=1$, there is a linear isomorphism $f_k$ between the $1$-dimensional subspaces $\langle e_{R(k)}\rangle=\langle v_i\rangle_{i\in I_k}$ and $\langle w_i\rangle_{i\in I_k}$.
In particular, $[f_k(w_i)]=[v_i]$ for all $i\in I_k$. \item For the case $2\leq r_k=\#I_k-1$, if some $r_k$-tuple of $\{w_i\}_{i\in I_k}$ is \emph{not} linearly independent, then it spans a subspace of $\langle w_i\rangle_{i\in I_k}$ of dimension strictly less than $r_k$, which does not contain the remaining vector (otherwise we would have $\dim\langle w_i\rangle_{i\in I_k}<r_k$). This contradicts the indecomposability of $[w_i]_{i\in I_k}$. Hence all $r_k$-tuples of $\{w_i\}_{i\in I_k}$ must be linearly independent. From this observation, since $\{w_{i(1)}, w_{i(2)},\ldots,w_{i(r_k)}\}$ is a basis of the $r_k$-dimensional space $\langle w_i\rangle_{i\in I_k}$, there exists a linear combination $w_{i(r_k+1)}=\sum_{j=1}^{r_k}c_jw_{i(j)}$. If $c_1=0$, then $\{w_{i(j)}\}_{j=2}^{r_k+1}$ is linearly dependent, which contradicts the observation above. Hence $c_1\neq 0$. Similarly we can prove $c_j\neq 0$ for all $1\leq j\leq r_k$. Now we can define a linear isomorphism $f_k$ between the $r_k$-dimensional spaces $\langle w_i\rangle_{i\in I_k}=\langle w_{i(j)}\rangle_{j=1}^{r_k}$ and $\langle v_i\rangle_{i\in I_k}=\langle e_{R(k-1)+j}\rangle_{j=1}^{r_k}$ by sending $c_jw_{i(j)}$ to $v_{i(j)}=e_{R(k-1)+j}$ for $1\leq j\leq r_k$, which gives $f_k(w_{i(r_k+1)})=f_k\left(\sum_{j=1}^{r_k}c_jw_{i(j)}\right)=\sum_{j=1}^{r_k}e_{R(k-1)+j}=v_{i(r_k+1)}$. \end{enumerate} Now, since $\langle w_i\rangle_{i=1}^m=\bigoplus_{k=1}^l\langle w_i\rangle_{i\in I_k}$ and $\langle v_i\rangle_{i=1}^m=\bigoplus_{k=1}^l\langle v_i\rangle_{i\in I_k}$, by taking the direct product of these linear isomorphisms $f_k\colon \langle w_i\rangle_{i\in I_k}\simeq\langle v_i\rangle_{i\in I_k}$ and a linear isomorphism between complementary subspaces, we obtain an isomorphism $g\in GL_n$ such that $g\cdot[w_i]_{i=1}^m=[v_i]_{i=1}^m$. \bigskip Now, since any $r_k$-tuple of $\{w_i\}_{i\in I_k}$ is linearly independent, $\dim \langle w_i\rangle_{i\in I_k\cap I}=\min\{\#(I_k\cap I),r_k\}$. Furthermore, since $\langle w_i\rangle_{i=1}^m=\bigoplus_{k=1}^l\langle w_i\rangle_{i\in I_k}$, we have \[\dim\langle w_i\rangle_{i\in I}=\sum_{k=1}^l \dim\langle w_i\rangle_{i\in I_k\cap I}=\sum_{k=1}^l \min\{\#(I_k\cap I),r_k\}.\] Hence $\varpi^{-1}(\{(I_k,r_k)\}_{k=1}^l)\subset \pi^{-1}(\varphi)$. On the other hand, for an element $v\in \pi^{-1}(\varphi)$, we have $\varpi(v)=\rho\circ\pi(v)=\rho(\varphi)=\{(I_k,r_k)\}_{k=1}^l$, which gives the converse inclusion. \end{proof} \section{Description of orbits} \label{main} From this section on, we set $G$ to be the general linear group of degree $3$ over an algebraically closed field $\mathbb K$ of characteristic $0$, and $Q$ to be the maximal parabolic subgroup of $G$ such that $G/Q\cong\pr{2}$. In Section \ref{prbdecomp}, we determine the image of the map $\varpi$ and its subset $\mathcal P_{3,4}'$ defined in Definition \ref{df:splitting} and Lemma \ref{thm:rhobij}. Then we can obtain a description of the $G$-orbit decomposition of $\varpi^{-1}(\mathcal P_{3,4}')\subset (\mathbb P^2)^4$ using the $G$-invariant fibres of $\varpi$ and $\pi$ from Lemma \ref{thm:rhobij} (see Theorem \ref{thm:single}). Then in Section \ref{orbdecomp}, we describe the orbit decomposition of the $G$-invariant fibres of $\varpi$ over $\mathrm{Image}(\varpi)\setminus \mathcal P_{3,4}'$ to complete the description of all orbits (see Theorem \ref{thm:parameter}).
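\beex For orientation (the following quadruple is chosen only for illustration), consider $v=[e_1,e_2,e_1+e_2,e_1+2e_2]\in(\mathbb P^2)^4$. Every pair of these four vectors is linearly independent, while the whole quadruple spans the plane $\langle e_1,e_2\rangle$; hence $\dim\langle v_i\rangle_{i\in I}=\min\{\#I,2\}$ for every $I\subset[4]$, the quadruple is indecomposable, and $\varpi(v)=\{([4],2)\}$. This is exactly the element of $\mathrm{Image}(\varpi)$ that does not belong to $\mathcal P_{3,4}'$ (see Lemma \ref{thm:splittinglist} and the remark following it), and the $G$-orbit through $v$ is one of the infinitely many orbits described in Section \ref{orbdecomp}. \ene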
\subsection{Decomposition with the indecomposable splitting} \label{prbdecomp} First of all, we describe the image of the map $\varpi\colon (\mathbb P^2)^4\to \mathcal P_4$ defined in Definition \ref{df:splitting}: \begin{lemma}\label{thm:splittinglist} For $\{(I_k,r_k)\}_{k=1}^l\in \mathcal P_4$, it is contained in the image of $\varpi\colon (\mathbb P^2)^4\to\mathcal P_4$ (see Definition \ref{df:splitting}) if and only if the tuple $\{(\#I_k,r_k)\}_{k=1}^l$ is one of the following: \[\{(4,1)\},\ \{(3,1),(1,1)\},\ \{(2,1),(2,1)\},\ \{(4,2)\},\ \{(1,1),(1,1),(2,1)\},\ \{(3,2),(1,1)\},\ \{(4,3)\}.\] \end{lemma} \begin{proof} From Lemma \ref{thm:imagevarpi}, the tuple $\{(I_k,r_k)\}_{k=1}^l\in \mathcal P_4$ is in the image of $\varpi\colon (\mathbb P^2)^4\to \mathcal P_4$ if and only if $\sum_{k=1}^lr_k\leq 3$ and $r_k=1$ or $2\leq r_k\leq \#I_k-1$ for each $1\leq k\leq l$. Hence we can classify the image of $\varpi$ as follows: \begin{enumerate} \item If $l=1$, then $\#I_1=4$ and the possible values of $r_1$ are at most $4-1=3$, hence the possibilities of $\{(\#I_k,r_k)\}_{k=1}^l$ are $\{(4,3)\}$, $\{(4,2)\}$, $\{(4,1)\}$. \item If $l=2$, the possibilities of partitions are $\{\#I_1,\#I_2\}=\{1,3\}$ or $\{2,2\}$. The first one corresponds to $\{(1,1),(3,2)\}$ or $\{(1,1),(3,1)\}$. The second one corresponds to $\{(2,1),(2,1)\}$. \item If $l=3$, the only possibility of partitions is $\{\#I_1,\#I_2,\#I_3\}=\{1,1,2\}$, hence it corresponds to $\{(1,1),(1,1),(2,1)\}$. \item If $l\geq 4$, then $\sum_{k=1}^lr_k\geq 4$ contradicts the condition $\sum_{k=1}^lr_k\leq 3$. \end{enumerate} \end{proof} \begin{remark} From the definition of $\mathcal P_{3,4}'$ in Lemma \ref{thm:rhobij}, the tuple $\{(I_k,r_k)\}_{k=1}^l\in \mathrm{Image}(\varpi)$ is in $\mathcal P_{3,4}'$ if and only if $r_k=1$ or $2\leq r_k= \#I_k-1$ for each $1\leq k\leq l$. Hence from the classification in Lemma \ref{thm:splittinglist}, we have $\mathrm{Image}(\varpi)\setminus \mathcal P_{3,4}'=\{\{([4],2)\}\}$. \end{remark} \begin{df} \label{df:map} We define maps $\varphi[\ast]\in Map( 2^{[4]},\mathbb N)$ as in Table \ref{tab:vpsingle}, according to the correspondence between $\{(I_k,r_k)\}_{k=1}^l\in \mathcal P_{3,4}'$ classified in Lemma \ref{thm:splittinglist} and maps $2^{[4]}\to\mathbb N$, $I\mapsto \sum_{k=1}^l\min\{\#(I_k\cap I),r_k\}$. In Table \ref{tab:vpsingle}, we set $\{i,j,k,l\}=[4]$.
\begin{table}[h] \centering \small \begin{tabular}{|c|c|c|c|} \hline $\{(\#I_k,r_k)\}_{k=1}^l$ & $\{I_k\}_{k=1}^l $ & $\left.\rho\right|^{-1}(\{(I_k,r_k)\}_{k=1}^l)$ & representative of the orbit $\pi^{-1}(\varphi[\ast])$ \\ \hline \hline $\{(4,1)\}$ & $\{\{1,2,3,4\}\}$ & $\displaystyle{\vp{2}}$ & $[e_1,e_1,e_1,e_1]$ \\ \hline $\{(1,1),(3,1)\}$& $\{\{i\},\{j,k,l\}\}$ & $\displaystyle{\vp{4;i}}$ & $[e_1,e_2,e_2,e_2]$ if $i=1$ \\ \hline $\{(2,1),(2,1)\}$&$\{\{i,j\},\{k,l\}\}$ & $\displaystyle{\vp{4;i,j}}$ & $[e_1,e_1,e_2,e_2]$ if $\{i,j\}=\{1,2\}$ \\ \hline $\{(1,1),(1,1),(2,1)\}$&$\{\{i\},\{j\},\{k,l\}\}$ & $\displaystyle{\vp{6;k,l}}$ & $[e_1,e_2,e_3,e_3]$ if $\{k,l\}=\{3,4\}$ \\ \hline $\{(1,1),(3,2)\}$&$\{\{i\},\{j,k,l\}\}$& $\displaystyle{\vp{7;i}}$ & $[e_1,e_2,e_3,e_2+e_3]$ if $i=1$ \\ \hline $\{(4,3)\}$&$\{\{1,2,3,4\}\}$ & $\displaystyle{\vp{8}}$ & $[e_1,e_2,e_3,e_1+e_2+e_3]$ \\ \hline \end{tabular} \caption{Definition of maps $\varphi[\ast]$} \label{tab:vpsingle} \end{table} \end{df} \begin{exmp} For instance, the map $\varphi[7;4]$ corresponds to the partition $[4]=\{4\}\amalg [3]$, and we have $\varphi[7;4](I)=\min\{\#(I\cap\{4\}),1\}+\min\{\#(I\setminus\{4\}),2\}$. In other words, $[v_i]_{i=1}^4$ is in the fibre $\pi^{-1}\left(\varphi[7;4]\right)$ if and only if $\{v_1,v_2,v_3\}$ is in general position in a $2$-dimensional space and $v_4$ is transversal to it. \end{exmp} \berem We can see that $\vp{4;i,j}=\vp{4;j,i}=\vp{4;k,l}=\vp{4;l,k}$, $\vp{5;i,j}=\vp{5;j,i},\;\vp{6;i,j}=\vp{6;j,i}$ where $\{1,2,3,4\}=\{i,j,k,l\}$. \enrem From Lemma \ref{thm:rhobij}, remark that for every map $\varphi[\ast]=\left.\rho\right|^{-1}(\{(I_k,r_k)\}_{k=1}^l)\in Map(2^{[4]},\mathbb N)$ defined in Table \ref{tab:vpsingle}, the fibre $\pi^{-1}(\varphi[\ast])\subset (\mathbb P^2)^4$ is a single $G$-orbit through the element $v\in \varpi^{-1}(\{(I_k,r_k)\}_{k=1}^l)$ defined in Lemma \ref{thm:imagevarpi} and listed in Table \ref{tab:vpsingle}. Furthermore, the dimension of the orbit through it is $\dim G-\dim \mathrm{Stab}_v=3\sum_{k=1}^lr_k-l$, since an isomorphism $g\in G$ stabilising $v$ has to act as a scalar on each $r_k$-dimensional space $\langle v_i\rangle_{i\in I_k}$ and is unconstrained on a complementary subspace, so that $\dim\mathrm{Stab}_v=l+3(3-\sum_{k=1}^lr_k)$. Hence, the dimension of each orbit $\pi^{-1}(\varphi[\ast])$ equals the number in the bracket (the number before the semicolon when there is one). \begin{thm} \label{thm:single} For the map $\varpi\colon (\mathbb P^2)^4\to \mathcal P_4$ defined in Definition \ref{df:splitting}, each fibre of $\varpi$ over an element of $\mathcal P_{3,4}'=\mathrm{Image}(\varpi)\setminus\{\{([4],2)\}\}$ is a single $G$-orbit, which coincides with the fibre $\pi^{-1}(\varphi[\ast])$ listed in Table \ref{tab:vpsingle}. Furthermore, representatives of these orbits are also listed in Table \ref{tab:vpsingle}. \end{thm} \subsection{Description of infinitely many orbits} \label{orbdecomp} In Section \ref{prbdecomp}, we described the decomposition of the multiple projective space $(G/Q)^4\cong(\pr{2})^4$ into a finite number of $G$-invariant fibres of $\varpi\colon (\mathbb P^2)^4\to\mathcal P_4$ defined in Definition \ref{df:splitting}. Furthermore, for the subset $\mathcal P_{3,4}'\subset \mathrm{Image}(\varpi)$ defined in Lemma \ref{thm:rhobij}, each fibre of $\varpi$ over it is a single $G$-orbit (see Theorem \ref{thm:single}). On the other hand, the fibre over $\{([4],2)\}$, which is the only element in $\mathrm{Image}(\varpi)\setminus \mathcal P_{3,4}'$, is not. In this section, we describe the $G$-orbit decomposition of this extra fibre.
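\berem The following dimension count, included only for orientation, already suggests that a continuous parameter must appear in this fibre: the fibre $\varpi^{-1}(\{([4],2)\})$ is $6$-dimensional (two dimensions for the plane spanned by the quadruple and one for each of the four points on the corresponding projective line), whereas the stabiliser of $[e_1,e_2,e_1+e_2,p]$ contains the $4$-dimensional subgroup of elements of $G$ acting by a scalar on $\langle e_1,e_2\rangle$, so that the orbit through any such element has dimension at most $5$. \enrem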
For this aim, we define some notation as follows: \bedef \label{df:mapsplit} \been \item The symbol $(\pr{1})'$ denotes the subset $ \pr{1}\setminus\left\{[e_1],[e_2],[e_1+e_2] \right\} $ of $\pr{1}$. \item Define the maps $\varphi[6],\varphi[5;i,j]\in Map(2^{[4]},\mathbb N)$ for $1\leq i<j\leq 4$ as follows: \[\varphi[6]\colon I\mapsto \min\{\#I,2\},\ \ \ \varphi[5;i,j]\colon I\mapsto \min\{\#(I/i\sim j),2\}.\] \item For $p=[p_1e_1+p_2e_2]\in\pr{1}$, we define a subset of $(\pr{2})^4$ by \[ \orb{5;p}:=\left\{\left.[v_i]_{i=1}^4\in (\pr{2})^4\right|[v_1]\neq [v_2],\;v_3=v_1+v_2,\;v_4=p_1v_1+p_2v_2\right\}. \] \enen \end{df} An element $v=[v_i]_{i=1}^4\in (\mathbb P^2)^4$ lies in the fibre of $\varpi$ over $\{([4],2)\}$ if and only if $\{v_i\}_{i=1}^4$ spans a $2$-dimensional space and is indecomposable. If $\dim\langle v_i\rangle_{i=1}^4=2$, there exists a pair $\{v_i,v_j\}$ which forms a basis of this space. Then $\{v_i,v_j,v_k,v_l\}$ is indecomposable if and only if either $v_k$ or $v_l$ is contained in neither $\langle v_i\rangle$ nor $\langle v_j\rangle$, where $\{i,j,k,l\}=[4]$. Hence, $\varpi(v)=\{([4],2)\}$ if and only if its rank is $2$ and there exists a triple of pairwise distinct points among $\{[v_i]\}_{i=1}^4$. Let $\{[v_1],[v_2],[v_3]\}$ be such a pairwise distinct triple; then there is an isomorphism $g\in GL_3$ sending it to $\{[e_1],[e_2],[e_1+e_2]\}$. Then the line $[v_4]$ is sent to some $p=[p_1e_1+p_2e_2]\in\mathbb P^1$. Remark that $p$ is independent of the choice of $g$, since any element of $G$ stabilising $[e_1,e_2,e_1+e_2]$ has to act as a scalar on this $2$-dimensional space. Now if $p\in \left(\mathbb P^1\right)'$, then we have $\pi(v)=\varphi[6]$. On the other hand, $\pi(v)=\varphi[5;1,4]$, $\varphi[5;2,4]$, or $\varphi[5;3,4]$ if $p=[e_1]$, $[e_2]$, or $[e_1+e_2]$, respectively. By considering the other pairwise distinct triples, all the maps $\varphi[5;\ast]$ are covered, and we obtain the following: \begin{thm} \label{thm:parameter} For the maps $\varpi\colon (\pr{2})^4\overset{\pi}{\to} Map(2^{[4]},\mathbb N)\overset{\rho}{\to}\mathcal P_{4}$ defined in Definition \ref{df:splitting}, (\ref{eq:rankmatrix}), and Lemma \ref{thm:rho}, \begin{enumerate} \item for the only element $\{([4],2)\}$ in $\mathrm{Image}(\varpi)\setminus\mathcal P_{3,4}'$, the fibre of $\left.\rho\right|_{\mathrm{Image}(\pi)}$ consists exactly of the maps $\varphi[6]$ and $\varphi[5;i,j]$ ($1\leq i<j\leq 4$) defined in Definition \ref{df:mapsplit}. \item Each fibre $\pi^{-1}(\varphi[5;i,j])$ is a single $G$-orbit through $[e_1,e_1,e_2,e_1+e_2]$ (if $\{i,j\}=\{1,2\}$), and the fibre $\pi^{-1}(\varphi[6])$ is decomposed into $G$-orbits as follows: \[\pi^{-1}(\varphi[6])=\coprod_{p\in\left(\mathbb P^1\right)'}\mathcal O(5;p)=\coprod_{p\in\left(\mathbb P^1\right)'}G\cdot[e_1,e_2,e_1+e_2,p].\] \end{enumerate} \end{thm} \section{Closure relations among orbits} In this section, we determine the closure relations among the $GL_3$-orbits on $(\pr{2})^4$ and the $GL_3$-invariant fibres described in Theorems \ref{thm:single} and \ref{thm:parameter}. For an integer $d$, consider the $\mathbb K$-submodule $\mathcal H^d_n$ of the $n$-variable polynomial ring consisting of all homogeneous polynomials of degree $d$. Then the polynomial ring $Pol(M(n,m))$ over $n\times m$-matrices admits an $\mathbb N^m$-graded structure as $Pol(M(n,m))=\bigoplus_{\bm d\in \mathbb N^m}\mathcal H_{n,m}^{\bm d}$ where $\mathcal H_{n,m}^{(d_1,d_2,\ldots,d_m)}:=\mathcal H_n^{d_1}\otimes \mathcal H^{d_2}_n\otimes\cdots\otimes \mathcal H^{d_m}_n$.
Then closed subsets in $(\mathbb P^{n-1})^m$ correspond to the zero-point sets $Z(I)$ of homogeneous ideals $I=\bigoplus I\cap\mathcal H_{n,m}^{\bm d}$ of $Pol(M(n,m))$. In particular, $Z(I)$ is irreducible if and only if the radical of $I$ is a prime ideal. \bedef \label{thm:varphirelation} For $\varphi$, $\psi\in Map(2^{[m]},\mathbb N)$, we say that $\varphi\leq\psi$ if $\varphi(I)\leq\psi(I)$ for all $I\subset[m]$. \end{df} With this notation, we state the result on the closure relations among the $\GL{3}$-invariant fibres of the map $\pi\colon (\mathbb P^2)^4\to Map(2^{[4]},\mathbb N)$ defined in (\ref{eq:rankmatrix}). \beprop \label{thm:clsingle} For $\varphi\in \mathrm{Image}(\pi)\subset Map(2^{[4]},\mathbb N)$, we have $\overline{\pi^{-1}(\varphi)}=\coprod_{\psi\leq\varphi}\pi^{-1}(\psi)$. \enprop \begin{proof} For $\varphi\in Map(2^{[m]},\mathbb N)$, we have \begin{align*} \coprod_{\psi\leq\varphi}\pi^{-1}(\psi)&=\left\{[v]\in(\pr{n-1})^m\left|\ \dim\langle v_i\rangle_{i\in I}\leq \varphi(I) \textrm{ for all }I\subset [m]\right.\right\} \\ &=\left\{[v]\in(\pr{n-1})^m\left|\ \textrm{ all }(\varphi(I)+1)\textrm{-minors of }(v_i)_{i\in I}\textrm{ vanish for all }I\subset[m]\right.\right\}, \\ \pi^{-1}(\varphi)&=\left\{[v]\in(\pr{n-1})^m\left|\ \dim\langle v_i\rangle_{i\in I}= \varphi(I) \textrm{ for all }I\subset [m]\right.\right\} \\ &=\left\{[v]\in(\mathbb P^{n-1})^m\left|\begin{array}{l}\textrm{there exists some non-vanishing }\\ \varphi(I)\textrm{-minor of }(v_i)_{i\in I} \textrm{ for all }I\subset[m]\end{array}\right.\right\}\ \cap\ \coprod_{\psi\leq\varphi}\pi^{-1}(\psi). \end{align*} From this characterisation, $\coprod_{\psi\leq \varphi}\pi^{-1}(\psi)$ is a closed subset defined by the homogeneous ideal generated by all the $(\varphi(I)+1)$-minors with columns contained in $I$, for $I\subset[m]$, and $\pi^{-1}(\varphi)$ is open in it. Hence we have $\overline{\pi^{-1}(\varphi)}\subset \coprod_{\psi\leq \varphi}\pi^{-1}(\psi)$. Furthermore, since $\pi^{-1}(\varphi)$ is not empty for $\varphi\in\mathrm{Image}(\pi)$, we can prove the converse inclusion by checking the irreducibility of $\coprod_{\psi\leq\varphi}\pi^{-1}(\psi)$. Now, let $X_{n,m}(r)\subset (\mathbb P^{n-1})^m$ denote the irreducible closed subset consisting of all elements (regarded as $n\times m$-matrices) whose rank is less than or equal to $r$. Then for the maps $\varphi=\varphi[8]$, $\varphi[7;i]$, $\varphi[6;i,j]$, $\varphi[6]$, $\varphi[5;i,j]$, $\varphi[4;\ast]$, $\varphi[2]$ in $\mathrm{Image}(\pi)$ introduced in Definitions \ref{df:map} and \ref{df:mapsplit}, the closed subsets $\coprod_{\psi\leq \varphi}\pi^{-1}(\psi)$ are naturally identified with $X_{3,4}(3)$, $X_{3,3}(2)\times X_{3,1}(1)$, $X_{3,3}(3)$, $X_{3,4}(2)$, $X_{3,3}(2)$, $X_{3,2}(2)$, and $X_{3,1}(1)$ respectively. Hence they are all irreducible and the claim holds. \end{proof} According to Theorems \ref{thm:single} and \ref{thm:parameter}, the $GL_3$-invariant fibres $\pi^{-1}(\varphi)$ are all $GL_3$-orbits by themselves unless $\varphi=\varphi[6]$. On the other hand, $\pi^{-1}(\varphi[6])$ is decomposed into infinitely many orbits $\orb{5;p}$ introduced in Definition \ref{df:mapsplit}. Hence, in what follows, we determine the closure of $\orb{5;p}$, which completes the determination of all closure relations among orbits.
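\berem As a quick consistency check with Figure \ref{hasseintro}, included only as an illustration of the partial order of Definition \ref{thm:varphirelation}: since every $v_i$ is non-zero, $\psi(I)\geq 1=\vp{2}(I)$ for every non-empty $I\subset[4]$ and every $\psi\in\mathrm{Image}(\pi)$ (and both sides vanish at $I=\emptyset$), so $\vp{2}\leq\psi$. By Proposition \ref{thm:clsingle}, the $2$-dimensional closed orbit $\pi^{-1}(\vp{2})$, the diagonally embedded $\pr{2}$, is therefore contained in the closure of every fibre $\pi^{-1}(\psi)$, in accordance with the bottom vertex of the Hasse diagram; for the orbits $\orb{5;p}$ the analogous containment is part of Proposition \ref{thm:clparameter} below. \enrem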
\beprop \label{thm:clparameter} For the orbits $\orb{5;p}:=GL_3\cdot([e_1], [e_2], [e_1+e_2],p)$, $\pi^{-1}(\varphi[4;i])$, and $\pi^{-1}(\varphi[2])$, where $p\in(\pr{1})'$ and $1\leq i\leq 4$ (see Definitions \ref{df:map} and \ref{df:mapsplit}), we have \[ \overline{\orb{5;p}}=\orb{5;p}\ \amalg\ \coprod_{i=1}^4\pi^{-1}(\varphi[4;i])\ \amalg\ \pi^{-1}(\varphi[2]). \] \enprop \belem \label{thm:iotahomeo}For a linear inclusion $\iota\colon \mathbb K^r\hookrightarrow\mathbb K^n$, the map below is a closed embedding. \[\tilde\iota\colon GL_r\backslash(\mathbb P^{r-1})^m \to GL_n\backslash (\mathbb P^{n-1})^m\colon [v_i]_{i=1}^m \mapsto [\iota(v_i)]_{i=1}^m. \] \enlem \begin{proof} Since $g\in GL_r$ defines a linear isomorphism of $\mathrm{Image}(\iota)$, there exists a linear isomorphism $\tilde g\in GL_n$ such that $\iota\circ g=\tilde g\circ \iota$, obtained by taking the direct product with a linear isomorphism of a complementary subspace. Hence the map $\tilde \iota$ is well-defined. On the other hand, for a linear isomorphism $\tilde g\in GL_n$ which satisfies $\tilde g\cdot[\iota(v_i)]_{i=1}^m=[\iota(w_i)]_{i=1}^m$, it defines a linear isomorphism between $\langle v_i\rangle_{i=1}^m$ and $\langle w_i\rangle_{i=1}^m$ sending $[v_i]$ to $[w_i]$ via $\iota$. So there is an isomorphism $g\in GL_r$ satisfying $g[v_i]=[w_i]$. Hence the map $\tilde\iota$ is injective. Since the continuity is obvious, we shall only show that the map is closed. For a closed subset $C$ of $GL_r\backslash (\mathbb P^{r-1})^m$, there exists a $GL_r$-invariant zero point set $Z(I)\subset (\mathbb P^{r-1})^m$ such that $C=GL_r\backslash Z(I)$, where $I$ is a homogeneous ideal in $Pol(M(r,m))$. Then we have $\tilde\iota(C)=GL_n\backslash GL_n\cdot\iota(Z(I))$. Now we define a homogeneous ideal $J\subset Pol(M(n,m))$ generated by the homogeneous polynomials $f\circ p$ for all $f\in I$ and all linear maps $p\colon \mathbb K^n\to\mathbb K^r$. Let $v=[v_i]_{i=1}^m \in Z(J)\cap X_{n,m}(r)$, where $X_{n,m}(r)$ is the closed subset of $(\mathbb P^{n-1})^m$ consisting of all elements with rank at most $r$. Then there exists a linear isomorphism $g\in GL_n$ sending $v_i$ into the $r$-dimensional space $\mathrm{Image}(\iota)$. Hence, taking some linear projection $p\colon \mathbb K^n\twoheadrightarrow\mathbb K^r$ such that $\iota p$ is the identity map on $\mathrm{Image}(\iota)$, we have $v=g^{-1}gv=g^{-1}\iota pgv\in GL_n\cdot \iota(pgv)$, and $f(pgv)=f\circ(pg)(v)=0$ for all $f\in I$. Hence $v\in GL_n\cdot \iota(Z(I))$. On the other hand, let $v \in Z(I)$. Since $Z(I)$ is $GL_r$-invariant, $f(gv)=0$ for all $g\in GL_r$. Hence $f(xv)=0$ holds for all linear endomorphisms $x$ on $\mathbb K^r$ (since $GL_r$ is Zariski-dense in the space of endomorphisms and $x\mapsto f(xv)$ is polynomial), which implies that $f\circ p(\iota(v))=f(p\iota v)=0$ for all $p\colon \mathbb K^n\to\mathbb K^r$. Hence $\iota(Z(I))\subset Z(J)$. From these arguments, and since $Z(J)$ and $X_{n,m}(r)$ are $GL_n$-invariant (indeed $f\circ p\circ g=f\circ(pg)\in J$ for $g\in GL_n$), we have shown that $GL_n\cdot \iota(Z(I))=Z(J)\cap X_{n,m}(r)$, which shows that the map $\tilde\iota$ is closed. \end{proof} From this lemma, we only have to determine the closure of $GL_2\cdot [e_1,e_2,e_1+e_2,p]=\tilde\iota^{-1}(\mathcal O(5;p))\subset(\mathbb P^1)^4$. Remark that $\tilde\iota$ intertwines the map $\pi$. For this aim, we introduce the following notation: for a $2\times 4$-matrix, $|i,j|$ denotes the $2$-minor with the $i$ and $j$-th columns.
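For instance (a small worked evaluation of this notation, with the columns of the minor taken in the indicated order), for the $2\times 4$-matrix with columns $e_1$, $e_2$, $e_1+e_2$, and $p_1e_1+p_2e_2$, one has
\[
|1,2|=1,\quad |1,3|=1,\quad |1,4|=p_2,\quad |2,3|=-1,\quad |2,4|=-p_1,\quad |3,4|=p_2-p_1,
\]
in accordance with the identity $|1,2||3,4|+|1,3||4,2|+|1,4||2,3|=0$ used below.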
Then we define a homogeneous polynomial $P_{p}\in \mathcal{H}^{(1,1,1,1)}_{2,4}$ for $p=[p_1e_1+p_2e_2]\in\mathbb P^1$ by \begin{align} \label{eq:poly} P_{p}(x):=&p_2|1,3||4,2|+p_1|1,4||2,3| \\ =&-p_1|1,2||3,4|+(p_2-p_1)|1,3||4,2|=-p_2|1,2||3,4|+(p_1-p_2)|1,4||2,3|. \notag \end{align} Remark that the equalities hold since $|1,2||3,4|+|1,3||4,2|+|1,4||2,3|=0$ from the general property of determinants. Considering the $\mathcal S_4$-action on the columns, we have \[ P_{p}((1,2)x)=P_{{}^t(-p_2,-p_1)}(x), \;\; P_{p}((2,3)x)= P_{{}^t(p_2-p_1,p_2)}(x),\;\; P_{p}((3,4)x)=P_{{}^t(-p_2,-p_1)}(x). \] Hence, for the group homomorphism $\Psi\colon\mathcal{S}_4\rightarrow\GL{2}$ determined by $(1,2), (3,4)\mapsto\scalebox{0.5}{$\left(\bear{cc}0&-1\\-1&0 \enar\right)$}$ and $(2,3)\mapsto\scalebox{0.5}{$\left(\bear{cc}-1&1\\0&1\enar\right)$}$, we have \begin{align} P_{p}(\sigma^{-1} x)=P_{\Psi(\sigma)p}(x)\;\;\textrm{for }\sigma\in\mathcal{S}_4. \end{align} Remark that the kernel of $\Psi$ is the Klein $4$-group $\{\mathrm{id}, (1,2)(3,4), (1,3)(2,4), (1,4)(2,3)\}$, and that the image of $\Psi$, which is isomorphic to $\mathcal{S}_3$, acts effectively on the $3$ points $\{0,1,\infty\}\subset\pr{1}$. \belem \label{thm:polyirred} The polynomial $P_{p}$ is irreducible if and only if $p\in(\pr{1})'$ (see Definition \ref{df:mapsplit}). \enlem \begin{proof} Remark that $p$ is in $\left(\mathbb P^1\right)'$ if and only if $p_1p_2(p_1-p_2)\neq 0$. Since the polynomial $P_p$ is of the homogeneous degree $(1,1,1,1)$, we only have to check that it has no divisor of homogeneous degree $(1,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$, $(0,0,0,1)$, $(1,1,0,0)$, $(1,0,1,0)$, or $(1,0,0,1)$. By a direct computation, we have \begin{align*} P_{p}(x)&=x_{1,1}x_{1,2}((p_1-p_2)x_{2,3}x_{2,4})+x_{2,1}x_{2,2}((p_1-p_2)x_{1,3}x_{1,4}) \\ &\;\;+x_{1,1}x_{2,2}(p_2x_{2,3}x_{1,4}-p_1x_{1,3}x_{2,4})-x_{2,1}x_{1,2}(p_1x_{2,3}x_{1,4}-p_2x_{1,3}x_{2,4}). \end{align*} Hence it is divisible by a polynomial of the homogeneous degree ${(1,1,0,0)}$ if and only if $p_1= p_2$. Similarly, since $P_{p}((2,3)x)=P_{{}^t(p_2-p_1,p_2)}(x)$ and $P_{p}((2,4)x)=P_{{}^t(p_1,p_1-p_2)}(x)$, $P_{p}(x)$ is divisible by a polynomial of the homogeneous degree ${(1,0,1,0)}$ (resp. ${(1,0,0,1)}$) if and only if $p_1=0$ (resp. $p_2=0$). On the other hand, suppose that $P_{p}(x)$ has a divisor $f(x_1)$ of the homogeneous degree $(1,0,0,0)$ (the degrees $(0,1,0,0)$, $(0,0,1,0)$, and $(0,0,0,1)$ are treated in the same way). Since $P_{p}(x)$ is fixed under the action of $(1,2)(3,4)$, $(1,3)(2,4)$, and $(1,4)(2,3)$, it is then factorised as $P_{p}(x)=cf(x_1)f(x_2)f(x_3)f(x_4)$, which can occur only if $p_1p_2(p_1-p_2)=0$ by the argument above. \end{proof} Next, we observe the relationship between the polynomial $P_{p}$ introduced in (\ref{eq:poly}) and the orbit $GL_2\cdot[e_1,e_2,e_1+e_2,p]$: \belem \label{thm:orb5pcharacterise} For $v_p=[e_1,e_2,e_1+e_2,p]\in(\mathbb P^1)^4$ and the homogeneous polynomial $P_{p}$ introduced in (\ref{eq:poly}), we have the following: \bealign {\GL{2}}\cdot v_p=\left\{[v]\in(\pr{1})^4\left|\ |1,2||2,3||3,1|\neq 0, \;P_{p}(v)=0\right.\right\}. \label{eq:orb5pcharacterise} \end{align} \enlem \begin{proof} First of all, $P_{p}(x)$ is relatively $GL_2$-invariant with the character $g\mapsto (\det g)^2$, since all the $2$-minors are relatively $GL_2$-invariant with the character $\det$. Let $X_p\subset(\pr{1})^4$ be the right-hand side of (\ref{eq:orb5pcharacterise}); then the orbit $GL_2\cdot v_p$ is clearly included in $X_p$, from $v_p\in X_p$ and the relative $GL_2$-invariance of $P_p$.
Conversely, for $[v]=[v_1,v_2,v_3,v_4]\in X_p$, then $\{v_1,v_2\}$ is linearly independent and \begin{align*} |2,3|v_1+|3,1|v_2&=-|1,2|v_3\\ |1,4|\left(p_1|2,3|v_1+p_2|3,1|v_2\right)&=p_1|1,4||2,3|v_1+p_2|1,4||3,1|v_2 \\ &=-p_2|1,3||4,2|v_1+p_2|1,4||3,1|v_2=-p_2|3,1|\left(|2,4|v_1+|4,1|v_2\right)\\ &=p_2|3,1||1,2|v_4 \end{align*} Hence $v=(|2,3|v_1\ |3,1|v_2)\cdot [e_1,e_2,e_1+e_2,p]\in GL_2\cdot v_p$. \end{proof} From Lemmas \ref{thm:polyirred} and \ref{thm:orb5pcharacterise}, the orbit $GL_2\cdot v_p$ is open in the irreducible closed subset $Z(P_p)$ where $p\in (\mathbb P^1)'$. Hence we have $\overline{GL_2\cdot v_p}=Z(P_p)$, and shall only determine the set $Z(P_p)$. \belem \label{thm:polysolution} Let $p\in(\pr{1})'$ and $v_p:=[e_1, e_2, e_1+e_2, p]\in \mathcal O_p$. Then the zero point set of the polynomial $P_p$ defined in (\ref{eq:poly}) is given as follows: \beeq \label{eq:polysolution} Z(P_p) = GL_2\cdot v_p \amalg\coprod_{i=1}^4\pi^{-1}(\vp{4;i})\amalg\pi^{-1}(\vp{2}). \eneq \enlem \begin{proof} We already have classified orbits in $GL_2\backslash (\mathbb P^1)^4\hookrightarrow GL_3\backslash (\mathbb P^2)^4$ in Theorems \ref{thm:single} and \ref{thm:parameter}. Hence, we shall only determine whether each of them is contained in $Z(P_p)$ or not. If any $2$-minors do not vanish, we only have to consider the orbit $GL_2\cdot v_q= \tilde\iota^{-1}(\mathcal O(5;q))$ where $q\in(\pr{1})'$. Then since $P_p(v_q)=p_2q_1-p_1q_2$, it is contained in $Z(P_p)$ if and only if $p=q$. On the other hand, consider the case where exists some vanishing $2$-minor. Since the polynomial $P_p$ is of the form $c_1|i,j||k,l|+c_2|i,k||l,j|$ for some $c_1c_2(c_1-c_2)\neq 0$, if $P_p(v)=0$, then there has to be a triple of columns of $v$ whose $2$-minors all vanish. Conversely, if there exists a triple of columns whose $2$-minors all vanish, then clearly $P_p(v)=0$. \end{proof} Combining these results, we completed the proof of the closure relations in Theorem \ref{thm:orb34}. \begin{thebibliography}{99999} \bibitem{br1} M. Brion, Classification des espaces homogenes spheriques, {\it Compositio Math.} {\bf 63} (1987), pp. 189--208. \bibitem{br2} M. Brion, Lectures on the Geometry of Flag Varieties, Topics in cohomological studies of algebraic varieties, Trends Math., Birkh\"{a}user, 2005, pp. 33--85. \bibitem{kac} V. Kac, Infinite root systems, representations of graphs and invariant theory, {\it Invent. Math.} {\bf 56} (1980), pp. 57--92. \bibitem{k} T. Kobayashi, Bounded multiplicity theorems for induction and restriction, {\it J. Lie Theory} {\bf 32} (2022), pp. 197--238. \bibitem{kl} T. Kobayashi, A. Leontiev, Symmetry breaking operators for the restriction of representations of indefinite orthogonal groups $O(p,q)$, {\it Proc. Japan Acad. Ser. A Math. Sci.} {\bf 93} (2017), no. 8, pp. 86--91. \bibitem{km} T. Kobayashi, T. Matsuki, Classification of finite multiplicity symmetric pairs, {\it Transform. Groups} {\bf 19} (2014), pp. 457--493. \bibitem{ko} T. Kobayashi, T. Oshima, Finite multiplicity theorems for induction and restriction, {\it Adv. Math.} {\bf 248} (2013), pp. 921--944. \bibitem{ks} T. Kobayashi, B. Speh, Symmetry Breaking for Representations of Rank One Orthogonal Groups, Memoirs of the American Mathematical Society {\bf 238}, no. 1126, Amer. Math. Soc., 2015, v+110pp. \bibitem{ks2} T. Kobayashi, B. Speh. Symmetry Breaking for Representations of Rank One Orthogonal Groups II, Lecture Notes in Mathematics {\bf 2234}, Springer, 2018, xv+342pp. \bibitem{mwzA} P. Magyar, J. Weyman, A. 
Zelevinsky, Multiple flag varieties of finite type, {\it Adv. Math.} {\bf 141} (1999), pp. 97--118. \bibitem{mwzC} P. Magyar, J. Weyman, A. Zelevinsky, Symplectic multiple flag varieties of finite type, {\it J. Alg.} {\bf 230} (2000), pp. 245--265. \bibitem{mp} T. Matsuki, The orbits of affine symmetric spaces under the action of minimal parabolic subgroups, {\it J. Math. Soc. Jpn.} {\bf 31} (1979), pp. 331--357. \bibitem{mpp} T. Matsuki, Orbits on affine symmetric spaces under the action of parabolic subgroups, {\it Hiroshima Math. J.} {\bf 12} (1982), pp. 307--320. \bibitem{morb} T. Matsuki, Orbits on flag manifolds, Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), Math. Soc. Japan, Tokyo, 1991, pp. 807--813. \bibitem{mBex} T. Matsuki, An example of orthogonal triple flag variety of finite type, {\it J. Alg.} {\bf 375} (2013), pp. 148--187. \bibitem{mB} T. Matsuki, Orthogonal multiple flag varieties of finite type I : Odd degree case, {\it J. Alg.} {\bf 425} (2015), pp. 450--523. \bibitem{mD} T. Matsuki, Multiple flag varieties for orthogonal groups, {\it RIMS k\^{o}ky\^{u}roku} {\bf 2031} (2017), pp. 33--38. \bibitem{s} N. Shimamoto, Description of infinite orbits on multiple projective spaces, {\it J. Algebra} {\bf 563} (2020), pp. 404--425. \bibitem{tt} T. Tauchi, Dimension of the space of intertwining operators from degenerate principal series representations, {\it Selecta Math. (N.S.)} {\bf 24} (2018), pp. 3649--3662. \end{thebibliography} \end{document}
2205.07818v2
http://arxiv.org/abs/2205.07818v2
Sharp Hessian estimates for fully nonlinear elliptic equations under relaxed convexity assumptions, oblique boundary conditions and applications
\documentclass[10pt, leqno]{article} \usepackage{amsmath,amssymb,amsthm, epsfig} \usepackage{hyperref} \usepackage{amsmath} \usepackage[pdftex]{color} \usepackage{color} \usepackage{ulem} \usepackage{mathrsfs} \usepackage{wasysym} \usepackage{stackrel} \title{\large{\bf Sharp Hessian estimates for fully nonlinear elliptic equations under relaxed convexity assumptions, oblique boundary conditions and applications}} \author{\it by \smallskip \\ Junior da S. Bessa\footnote{\noindent Universidade Federal do Cear\'{a}. Department of Mathematics. Fortaleza - CE, Brazil. \noindent \texttt{E-mail address: \url{[email protected]}}},\,\,\,Jo\~{a}o Vitor da Silva\footnote{\noindent Universidade Estadual de Campinas - UNICAMP. Department of Mathematics. Campinas - SP, Brazil. \noindent \texttt{E-mail address: \url{[email protected]}}}, \,\,\,Maria N.B. Frederico\footnote{\noindent Universidade Federal do Cear\'{a}. Department of Mathematics. Fortaleza - CE, Brazil. \noindent \texttt{E-mail address: \url{[email protected]}}}\\ $\&$\\ Gleydson C. Ricarte \footnote{\noindent Universidade Federal Cear\'{a}. Department of Mathematics. Fortaleza, CE-Brazil 60455-760. \noindent \texttt{E-mail address: \url{[email protected]}}} } \newlength{\hchng} \newlength{\vchng} \setlength{\hchng}{0.55in} \setlength{\vchng}{0.55in} \addtolength{\oddsidemargin}{-\hchng} \addtolength{\textwidth}{2\hchng} \addtolength{\topmargin}{-\vchng} \addtolength{\textheight}{2\vchng} \def \N {\mathbb{N}} \def \R {\mathbb{R}} \def \supp {\mathrm{supp } } \def \div {\mathrm{div}} \def \dist {\mathrm{dist}} \def \redbdry {\partial_\mathrm{red}} \def \loc {\mathrm{loc}} \def \Per {\mathrm{Per}} \def \suchthat {\ \big | \ } \def \Lip {\mathrm{Lip}} \def \L {\mathfrak{L}} \def \LL {\mathbf{L}} \def \J {\mathfrak{J}} \def \H {\mathcal{H}^{N-1}(X)} \def \A {\mathcal{A}} \def \Leb {\mathcal{L}^n} \def \tr {\mathrm{Tr}} \def \esslimsup{\mathrm{ess limsup}} \def \essliminf{\mathrm{ess liminf}} \def \esslim{\mathrm{ess lim}} \def \M {\mathrm{T}} \newcommand{\pe}{E_{\varepsilon}} \newcommand{\defeq}{\mathrel{\mathop:}=} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{statement}[]{Statement} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \numberwithin{equation}{section} \newcommand{\intav}[1]{\mathchoice {\mathop{\vrule width 6pt height 3 pt depth -2.5pt \kern -8pt \intop}\nolimits_{\kern -6pt#1}} {\mathop{\vrule width 5pt height 3 pt depth -2.6pt \kern -6pt \intop}\nolimits_{#1}} {\mathop{\vrule width 5pt height 3 pt depth -2.6pt \kern -6pt \intop}\nolimits_{#1}} {\mathop{\vrule width 5pt height 3 pt depth -2.6pt \kern -6pt \intop}\nolimits_{#1}}} \begin{document} \maketitle \begin{abstract} We derive global $W^{2,p}$ estimates (with $n\le p <\infty$) for viscosity solutions to fully nonlinear elliptic equations under relaxed structural assumptions on the governing operator that are weaker than convexity and oblique boundary conditions as follows: $$ \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \Omega \\ \beta(x) \cdot Du(x) + \gamma(x) u(x)&=& g(x) &\mbox{on}& \partial \Omega, \end{array} \right. $$ for $f \in L^p(\Omega)$ and under appropriate assumptions on the data $\beta, \gamma$, $g$ and $\Omega \subset \R^n$. 
Our approach makes use of geometric tangential methods, which consist of importing ``fine regularity estimates'' from a limiting profile, i.e., the \textit{Recession} operator, associated with the original second-order one via compactness and stability procedures. As a result, we pay special attention to the borderline scenario, i.e., $f \in \text{BMO}_p \varsupsetneq L^{\infty}$. In such a setting, we prove that solutions enjoy $\text{BMO}_p$ type estimates for their second derivatives. Finally, as another application of our findings, we obtain Hessian estimates to obstacle-type problems under oblique boundary conditions and no convexity assumptions, which may have their own mathematical interest. A density result for a suitable class of viscosity solutions will also be addressed. \medskip \noindent \textbf{Keywords:} Hessian estimates, fully nonlinear elliptic equations, oblique boundary conditions, relaxed convexity assumptions, obstacle type problems. \vspace{0.2cm} \noindent \textbf{AMS Subject Classification:} 35J25, 35J60, 35B65, 35R35. \end{abstract} \newpage \section{Introduction} \hspace{0.4cm}In this manuscript, we investigate global $W^{2,p}$ estimates (with $n \le p< \infty$) for viscosity solutions to fully nonlinear elliptic equations of the form: \begin{equation}\label{E1} \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \Omega \\ \mathcal{B}(x,u,Du)&=& g(x) &\mbox{on}& \partial \Omega, \end{array} \right. \end{equation} where $$ \displaystyle \mathcal{B}(x,u,Du) = \beta(x) \cdot Du(x) + \gamma(x) u(x) $$ is the oblique boundary condition, $\Omega \subset \mathbb{R}^n$ a bounded domain with a regular boundary and data $f$, $g$, $\beta$ and $\gamma$ under appropriate conditions to be clarified soon. Throughout this work, it is assumed that $F:\textit{Sym}(n)\times \R^n \times \R \times \Omega \to \R$ is a second-order operator with uniformly elliptic structure, i.e., there exist \textit{ellipticity constants} $0<\lambda \le \Lambda< \infty$ such that \begin{equation}\label{Unif.Ellip.} \lambda\|\mathrm{Y}\|\leq F(\mathrm{X}+\mathrm{Y}, \varsigma, s, x)-F(\mathrm{X}, \varsigma, s, x) \leq \Lambda \|\mathrm{Y}\| \end{equation} for every $\mathrm{X}, \mathrm{Y} \in \textit{Sym}(n)$ with $\mathrm{Y} \ge 0$ (in the usual partial ordering on symmetric matrices) and $(\varsigma, s, x)\in \mathbb{R}^n \times \R \times \Omega $. Moreover, our approach does not impose any extra regularity assumption on nonlinearity $F$, \textit{e.g.} concavity, convexity, or smoothness hypothesis (cf. \cite{Caff1}, \cite{CC}, \cite{Lie-Tru} and \cite{Sav07} for classical and modern references). As a matter of fact, we obtain global $W^{2,p}$-estimates, i.e., $$ \|u\|_{W^{2, p}(\Omega)} \le \mathrm{C}(\verb"universal")\left(\|u\|_{L^{\infty}(\Omega)}+ \|f\|_{L^p(\Omega)}+\Vert g\Vert_{C^{1,\alpha}(\partial \Omega)}\right) $$ (with $n \le p < \infty$ and some $\alpha \in (0, 1)$) for viscosity solutions to \eqref{E1} under relaxed convexity assumptions on governing operator (cf. \cite{Kry13}, \cite{PT} and \cite{ST} for modern results) -- a kind of asymptotic convexity condition at infinity on $F$. To do this concept more precise, we introduce the notion of \textit{Recession operator}, whose terminology comes from Giga-Sato's work \cite{GS01} on theory of Hamilton-Jacobi PDEs. 
\begin{definition}[{\bf Recession profile}]\label{DefAC} We say that $F(\mathrm{X},\varsigma, s,x)$ is an asymptotically fully nonlinear elliptic operator if there exists a uniformly elliptic operator $F^{\sharp}(\mathrm{X}, \varsigma, s ,x)$, designated the \textit{Recession} operator, such that \begin{equation}\tag{{\bf \text{\small{Reces.}}}}\label{Reces} \displaystyle F^{\sharp}(\mathrm{X}, \varsigma, s, x) \defeq \lim_{\tau \to 0^{+}} \tau \cdot F\left(\frac{1}{\tau}\mathrm{X}, \varsigma, s, x\right) \end{equation} for all $\mathrm{X} \in \textrm{Sym}(n)$, $\varsigma \in \mathbb{R}^n$, $s \in \mathbb{R}$ and $x \in \Omega$. \end{definition} By way of illustration, such a limiting profile \eqref{Reces} appears naturally in singularly perturbed free boundary problems ruled by fully nonlinear equations, in which the Hessian of the solutions blows up across the phase transition of the model, i.e., $\partial\{u^{\varepsilon}> \varepsilon\}$, where $u^{\varepsilon}$ satisfies in the viscosity sense $$ F(D^2 u^{\varepsilon}, x) = \mathcal{Q}_0(x)\frac{1}{\varepsilon} \zeta \left(\frac{u^{\varepsilon}}{\varepsilon}\right). $$ For such approximations, we have $0< \mathcal{Q}_0 \in C^0(\overline{\Omega})$ and $0\leq \zeta \in C^{\infty}(\R)$ with $\supp ~\zeta = [0,1]$. For this reason, in the above model, the limiting free boundary condition is governed by $F^{\sharp}$ rather than by $F$, i.e., $$ F^{\sharp}(z_0, \nabla u(z_0) \otimes \nabla u(z_0)) = 2\mathrm{T}, \quad z_0 \in \partial\{u_0>0\} $$ in some appropriate viscosity sense, for a certain total mass $\mathrm{T}>0$ (cf. \cite[Section 6]{RT} for an enlightening example and details). Furthermore, limit profiles such as \eqref{Reces} also appear in higher-order convergence rates in periodic homogenization of fully nonlinear uniformly parabolic Cauchy problems with rapidly oscillating initial data, as shown below: $$ \left\{ \begin{array}{rclcl} \frac{d}{dt}u^{\varepsilon}(x, t) & = & \frac{1}{\varepsilon^2}F(\varepsilon^2D^2 u^{\varepsilon}, x, t, \frac{x}{\varepsilon}, \frac{t}{\varepsilon}) & \text{in} & \R^n \times (0, T) \\ u^{\varepsilon}(x, 0) & = & g\left(x, \frac{x}{\varepsilon}\right)& \text{on} & \R^n. \end{array} \right. $$ In such a context, $$ \frac{1}{\varepsilon^2}F(\varepsilon^2 \mathrm{P}, x, t, y, s) \to F^{\sharp}( \mathrm{P}, x, t, y, s) \quad \text{as} \quad \varepsilon \to 0^+, $$ uniformly for all $( \mathrm{P}, x, t, y, s) \in (\text{Sym}(n)\setminus \{\mathcal{O}_{n\times n}\})\times \R^n \times [0, T] \times \mathbb{T}^n \times \mathbb{T}$ (see \cite{KL20}). As a result, there exists a unique function $v: \R^n \times [0, T] \times \mathbb{T}^n \times [0, \infty) \to \R$ such that $v(x, t, \cdot, \cdot)$ is a viscosity solution to $$ \left\{ \begin{array}{rclcl} \frac{d}{ds}v(y, s) & = & F^{\sharp}(D_y^2 v, x, t, y, s) & \text{in} & \mathbb{T}^n \times (0, \infty) \\ v(x, t, y, 0) & = & g(y, x)& \text{on} & \mathbb{T}^n. \end{array} \right. $$ \bigskip To obtain our sharp Hessian estimates, we will assume that $F^{\sharp}$ enjoys good structural properties (e.g., convexity/concavity, or suitable \textit{a priori} estimates). Thus, using geometric tangential mechanisms, we may access good regularity estimates for the limiting profile and, in a suitable manner, transfer such estimates to solutions of the original problem via compactness and stability processes.
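To fix ideas, we record a minimal model example (chosen here only for orientation, and not drawn from the references above). Consider the uniformly elliptic operator $$ \widetilde{F}(\mathrm{X}) \defeq \mathrm{tr}(\mathrm{X}) + \sum_{i=1}^{n}\arctan\left(\lambda_i(\mathrm{X})\right), $$ where $\lambda_i(\mathrm{X})$ denote the eigenvalues of $\mathrm{X}\in\text{Sym}(n)$; such an operator is neither concave nor convex. Since $|\arctan(t)|\le \frac{\pi}{2}$ for all $t\in\R$, we have $$ \tau \cdot \widetilde{F}\left(\frac{1}{\tau}\mathrm{X}\right) = \mathrm{tr}(\mathrm{X}) + \tau\sum_{i=1}^{n}\arctan\left(\frac{\lambda_i(\mathrm{X})}{\tau}\right) \quad \text{with} \quad \left|\tau\sum_{i=1}^{n}\arctan\left(\frac{\lambda_i(\mathrm{X})}{\tau}\right)\right| \le \frac{n\pi}{2}\,\tau \to 0 \quad \text{as} \quad \tau \to 0^+, $$ so that $\widetilde{F}^{\sharp}(\mathrm{X}) = \mathrm{tr}(\mathrm{X})$: the bounded, non-convex perturbation is not detected by the recession profile, which is linear (hence convex) and therefore enjoys classical regularity estimates.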
It is important to emphasize that convexity assumptions with respect to second-order derivatives play an essential role in obtaining several regularity issues in fully nonlinear elliptic models, e.g., Calder\'{o}n-Zygmund estimates \cite{Caff1}, Schauder type estimates \cite{CC} and higher estimates in some nonlinear models \cite{Evans}, \cite{Kry82}, \cite{Tru83} and \cite{Tru84}, and free boundary problems \cite{daSV21-1}, \cite{daSV21}, \cite{FiShah14}, \cite{Indr19}, \cite{Indrei19}, \cite{IM16} and \cite{R-Oton-Serr17} just to mention a few. Nevertheless, ensuring that such regularity issues hold true for problems like \eqref{E1} driven by operators satisfying the structure \eqref{Reces} is not an easy task. For instance, consider perturbations of Bellman operators of the form {\scriptsize{ $$ \left\{ \begin{array}{rclcl} \displaystyle F(\mathrm{X}, \varsigma, s, x) \defeq \inf_{\iota \in \hat{\mathcal{A}}} \left( \sum_{i,j=1}^{n} a^{\iota}_{ij}(x) \mathrm{X}_{ij} + \sum_{i=1}^{n} b^{\iota}_i(x).\varsigma_i + c^{\iota}(x)s\right) + \sum_{i=1}^{n} \textrm{arctg}(1+\lambda_i(\mathrm{X})) &=& f(x)& \mbox{in} & \Omega \\ \mathcal{B}(x,s,\varsigma)&=& g(x) &\mbox{on}& \partial \Omega, \end{array} \right. $$}} where $(\lambda_i(\mathrm{X}))^{n}_{i=1}$ are eigenvalues of the matrix $\mathrm{X} \in \text{Sym}(n)$, $b_i^{\iota}, c^{\iota}: \Omega \to \R$ are real functions for each $\iota \in \hat{\mathcal{A}}$ (for $\hat{\mathcal{A}}$ any set of index), and $\left( a^{\iota}_{ij}(x)\right)^{n}_{i,j=1} \subset C^{0, \alpha}(\Omega)$ have eigenvalues in $[\lambda, \Lambda]$ for each $x \in \Omega$ and $\iota \in \hat{\mathcal{A}}$. Thus, it is easy to check that such operators are neither concave nor convex. However, its Recession profile \eqref{Reces} inherits good structure properties, i.e., $$ \left\{ \begin{array}{rclcl} \displaystyle F^{\sharp}(D^2 \mathfrak{h}, D \mathfrak{h}, \mathfrak{h}, x) & = & \displaystyle \inf_{\iota \in \mathcal{A}} \left( \sum_{i,j=1}^{n} a^{\iota}_{ij}(x) \mathfrak{h}_{ij}\right) = 0 & \text{in} & \Omega\\ \mathcal{B}(x,\mathfrak{h}, D \mathfrak{h})&=& g(x) & \mbox{on} & \partial \Omega, \end{array} \right. $$ which is a convex operator, thereby enjoying higher regularity estimates in the face of Evans-Krylov-Trudinger's type theory (with oblique boundary conditions) addressed in \cite[Theorem 2.4.]{DL20}, \cite[Theorem 1.3]{LiZhang}, \cite[Theorem 5.4]{Lieb02}, \cite[\S 8]{Saf91}, \cite[Theorem 3.3]{Saf94} and \cite[Theorem 8.3]{Sil1} (see also \cite[Chapter 6]{CC}, \cite{Evans}, \cite{Kry82}, \cite{Tru83} and \cite{Tru84} for local estimates), i.e., for some $\alpha \in (0, 1)$ and $\Omega^{\prime} \subset\subset \Omega \cup \Gamma$ - with $C^{1, \alpha}\ni \Gamma \subset \partial \Omega$, $\beta, \gamma, g \in C^{1, \alpha}(\overline{\Gamma})$, there holds {\scriptsize{ \begin{equation}\label{AprioriEst} \|\mathfrak{h}\|_{C^{2, \alpha}(\overline{\Omega^{\prime}})} \le \mathrm{C}\left(n, \lambda, \Lambda, \alpha, \|a^{\iota}_{ij}\|_{C^{0, \alpha}(\Omega)}, \|\beta\|_{C^{1, \alpha}(\overline{\Gamma})}, \|\gamma\|_{C^{1, \alpha}(\overline{\Gamma})}, \Omega^{\prime}, \Omega\right)\left(\|\mathfrak{h}\|_{L^{\infty}(\Omega)}+\|g\|_{C^{1, \alpha}(\overline{\Gamma})}\right). \end{equation}}} Heuristically, if the ``limiting profile'' in \eqref{Reces} there exists point-wisely and fulfills a suitable \textit{a priori} estimate like \eqref{AprioriEst}, then a $W^{2, p}-$regularity theory to \eqref{E1} holds true up to the boundary via geometric tangential methods (cf. 
\cite{BOW16}, \cite{daSR19} and \cite{PT}). Therefore, one of the main purposes of this work is to establish sharp Hessian estimates for problems enjoying a regular asymptotically elliptic property as \eqref{Reces} on the governing operator in \eqref{E1}, as well as present a number of applications in related analysis themes and free boundary problems of the obstacle-type. Historically, the concept of an asymptotically regular problem can be traced back to Chipot-Evans' work \cite{Ch} in the context of certain problems in calculus of variations. Since then, there has been an increasing research interest in this class of problems and their related topics, see \cite{BJ0}, \cite{MF}, \cite{JP}, \cite{CS}, \cite{CS1} for an incomplete list of contributions. To the best of our knowledge, no research has been conducted on such boundary asymptotic qualitative estimates of general problems as \eqref{E1}, as well as their intrinsic connections and applications in the analysis of PDEs and free boundary problems. \subsection{Assumptions and main results} \hspace{0.4cm} For a given $r >0$, we denote by $\mathrm{B}_r(x)$ the ball of radius $r$ centered at $x=(x^{\prime},x_n) \in \R^n$, where $x^{\prime}=(x_1,x_2,\ldots, x_{n-1}) \in \R^{n-1}$. We write $\mathrm{B}_r = \mathrm{B}_r(0)$ and $\mathrm{B}^+_r = \mathrm{B}_r \cap \mathbb{R}^n_+$. Also, we write $\mathrm{T}_r \defeq \{(x^{\prime},0) \in \mathbb{R}^{n-1} : |x^{\prime}| < r\}$ and $\mathrm{T}_r(x_0) \defeq \mathrm{T}_r + x^{\prime}_0$ where $x^{\prime}_0 \in \mathbb{R}^{n-1}$. Finally, $\textrm{Sym}(n)$ will denote the set of all $n \times n$ real symmetric matrices. Throughout this manuscript, we will make the following assumptions: \begin{enumerate} \item[(A1)] ({\bf Structural conditions}) We assume that $F \in C^0(\text{Sym}(n), \R^n, \R, \Omega)$. Moreover, there are constants $0 < \lambda \le \Lambda$, $\sigma\geq 0$ and $\xi\geq 0$ such that \begin{eqnarray}\label{EqUnEll} \mathscr{P}^{-}_{\lambda,\Lambda}(\mathrm{X}-\mathrm{Y}) - \sigma |q_1-q_2| -\xi|r_1-r_2| &\le& F(\mathrm{X}, q_1,r_1,x)-F(\mathrm{Y},q_2,r_2,x) \nonumber \\ &\le& \mathscr{P}^{+}_{\lambda, \Lambda}(\mathrm{X}-\mathrm{Y})+ \sigma |q_1-q_2| + \xi|r_1-r_2| \label{5} \end{eqnarray} for all $\mathrm{X},\mathrm{Y} \in \textit{Sym}(n)$, $q_1,q_2 \in \mathbb{R}^n$, $r_1,r_2 \in \mathbb{R}$, $x \in \Omega$, where \begin{equation} \mathscr{P}^{+}_{\lambda,\Lambda}(\mathrm{X}) \defeq \Lambda \sum_{e_i >0} e_i +\lambda \sum_{e_i <0} e_i \quad \text{and} \quad \mathscr{P}^{-}_{\lambda,\Lambda}(\mathrm{X}) \defeq \Lambda \sum_{e_i <0} e_i + \lambda \sum_{e_i > 0} e_i, \end{equation} are the \textit{Pucci's extremal operators} and $e_i = e_i(\mathrm{X})$ ($1\leq i\leq n$) denote the eigenvalues of $\mathrm{X}$. \item[(A2)] ({\bf Regularity of the data}) \, The data satisfy $f \in C^0(\Omega) \cap L^p(\Omega)$ for $n \le p<\infty$, $g, \gamma \in C^0(\partial \Omega)$ with $\gamma \le 0$ and $\beta \in C^0( \partial \Omega; \mathbb{R}^n)$ with $\|\beta\|_{L^{\infty}(\partial \Omega )} \le 1$ and there exists a positive constant $\mu_0$ such that $\beta\cdot \overrightarrow{\textbf{n}}\ge \mu_0$, where $\overrightarrow{\textbf{n}}$ is the outward normal vector of $\Omega$. 
\item[(A3)] ({\bf Continuous coefficients in the $L^p$-average sense}) \,According to \cite{CCKS} for a fixed $x_0\in \Omega$, we define the quantity: $$ \psi_{F^{\sharp}}(x; x_0) \defeq \sup_{\mathrm{X} \in \textrm{Sym}(n)} \frac{|F^{\sharp}(\mathrm{X},0,0,x) - F^{\sharp}(\mathrm{X},0,0,x_0)|}{\|\mathrm{X}\|+1}, $$ which measures the oscillation of the coefficients of $F$ around $x_0$. Furthermore, for notation purposes, we shall often write $\psi_{F}(x, 0) = \psi_F(x)$. Finally, the mapping $\Omega \ni x \mapsto F^{\sharp}(\mathrm{X},0,0,x)$ is assumed to be H\"{o}lder continuous function (in the $L^{p}$-average sense) for every $\mathrm{X} \in \textrm{Sym}(n)$. This means that, there exist universal constants\footnote{Throughout this work, a constant is said to be \textit{universal} if it depends only on $n, \lambda, \Lambda, p, \mu_0, \|\gamma\|_{C^{1,\alpha}(\partial \Omega)}$ and $\|\beta\|_{C^{1,\alpha}(\partial \Omega)}$} $\hat{\alpha} \in (0,1)$, $\theta_0 >0$ and $0 < r_0 \le 1$ such that $$ \left( \intav{\mathrm{B}_r(x_0) \cap \Omega} \psi_{F^{\sharp}}(x,x_0)^{p} dx \right)^{1/p} \le \theta_0 r^{\hat{\alpha}} $$ for $x_0 \in \overline{\Omega}$ and $0 < r \le r_0$. \item[(A4)] ({\bf $C^{1,1}$ \textit{a priori} estimates}) We will assume that the recession operator $F^{\sharp}$ there exists and fulfills a up-to-the boundary $C^{1,1}$ \textit{a priori} estimates, i.e., for $x_0 \in \mathrm{B}^{+}_{1}$ and $g_0 \in C^{1,\alpha}(\mathrm{T}_1)$ (for some $\alpha \in (0, 1)$), there exists a solution $\mathfrak{h} \in C^{1,1}(\mathrm{B}^+_1) \cap C^0(\overline{\mathrm{B}^+_1})$ of $$ \left\{ \begin{array}{rclcl} F^{\sharp}(D^2 \mathfrak{h},x_0) &=& 0& \mbox{in} & \mathrm{B}^+_1 \\ \mathcal{B}(x,w,D\mathfrak{h})&=& g_0(x) &\mbox{on}& \mathrm{T}_1, \end{array} \right. $$ such that $$ \|\mathfrak{h}\|_{C^{1,1}\left(\overline{\mathrm{B}^{+}_{\frac{1}{2}}}\right)} \le \mathrm{C}_1(\verb"universal") \left(\|\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^{+}_1)}+\|g_0\|_{C^{1,\alpha}(\overline{\mathrm{T}_1})}\right) $$ for some constant $\mathrm{C}_{1}>0$. \end{enumerate} From now on, an operator fulfilling $\text{(A1)}$ will be referred to as a $(\lambda, \Lambda, \sigma, \xi)-$elliptic operator. Furthermore, we shall always assume the normalization condition: $ F(\mathcal{O}_{n\times n}, \overrightarrow{0}, 0, x) = 0 \quad \text{for all} \,\,\, x \in \Omega,$ which is not restrictive, because one can reduce the problem in order to check it. We now present our first main result, which endures global $W^{2,p}$ estimates for viscosity solutions to asymptotically fully nonlinear equations. \vspace{0.4cm} \begin{theorem}[{\bf $W^{2, p}$ estimates under oblique boundary conditions}]\label{T1} Let $n \le p< \infty$, $\Omega \subset \mathbb{R}^n$, $\partial \Omega \in C^{2,\alpha}$, $\beta,\gamma,g \in C^{1,\alpha}(\partial \Omega)$ (for some $\alpha \in (0, 1)$) and $u$ be an $L^p-$viscosity solution of \eqref{E1}. Further, assume that structural assumptions (A1)-(A4) are in force. Then, there exists a constant $\mathrm{C}= \mathrm{C}(n, \lambda, \Lambda, p, \mu_0, \|\gamma\|_{C^{1,\alpha}(\partial \Omega)}, \|\beta\|_{C^{1,\alpha}(\partial \Omega)}, \|\partial \Omega\|_{C^{1,1}})>0$ such that $u \in W^{2,p}(\Omega)$. Furthermore, the following estimate holds \begin{equation} \label{2} \|u\|_{W^{2, p}(\Omega)} \le \mathrm{C}\cdot\left(\|u\|_{L^{\infty}(\Omega)}+ \|f\|_{L^p(\Omega)}+\| g\|_{C^{1,\alpha}(\partial \Omega)} \right). 
\end{equation} \end{theorem} \vspace{0.4cm} We also supply a sharp estimate in the scenario that $f$ exceeds the scope of $L^{\infty}$ functions. Precisely, our second result concerns $\text{BMO}$ type \textit{a priori} estimates for second derivatives provided the RHS of \eqref{E1} is a BMO function and $F^{\sharp}$ enjoys classical \textit{a priori} estimates. In contrast to the assumptions on Section \ref{section2}, we will assume, for the next Theorem, the following hypothesis on $F^{\sharp}$: \begin{enumerate} \item[(A4)$^{\star}$] ({\bf Higher \textit{a priori} estimates})\,\,$F^{\sharp}$ fulfills a $C^{2, \psi}$ \textit{a priori} estimate up to the boundary, for some $\psi \in (0,1)$, i.e., for any $g_0 \in C^0(\mathrm{B}^+_1) \cap C^{1, \psi}(\mathrm{T}_1)$, $\beta \in C^{1, \psi}(\mathrm{T}_1)$, there exists $\mathrm{C}_{\ast}>0$, depending only on universal parameters, such that any viscosity solution of $$ \left\{ \begin{array}{rclcl} F^{\sharp}(D^2 \mathfrak{h}, 0,0, x) &=& 0& \mbox{in} & \mathrm{B}^+_1,\\ \beta \cdot D\mathfrak{h} &=& g_0(x) & \mbox{on} &\mathrm{T}_1 \end{array} \right. $$ belongs to $C^{2, \psi}(\mathrm{B}^{+}_1) \cap C^0(\overline{\mathrm{B}^+_1})$ and fulfills $$ \|\mathfrak{h}\|_{C^{2, \psi}\left(\overline{\mathrm{B}^+_{\rho}}\right)} \le\frac{\mathrm{C}_{\ast}}{\rho^{2+\psi}} \left(\|\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^+_1)} + \|g_0\|_{C^{1,\psi}(\overline{\mathrm{T}_1})}\right)\,\,\, \forall \,\,0<\rho \ll 1. $$ \end{enumerate} \begin{definition} For a function $f \in L^1_{\text{loc}}(\R^n)$ $x_0 \in \Omega$ and $\rho>0$, we define $(f)_{x_0, \rho}$ by $$ (f)_{x_0, \rho} \defeq \intav{\mathrm{B}_{\rho}(x_0) \cap \Omega} f(x) dx. $$ Moreover, when $x_0 = 0$ we will denote $(f)_{\rho}$ by simplicity. An $f \in L^1_{\text{loc}(\Omega)}$ is a function of $p-$bounded mean oscillation, in short $f \in \textrm{p-BMO}(\Omega)$, if \begin{equation}\label{p-BMOnorm} \|f\|_{p-BMO(\Omega)} \defeq \sup_{\mathrm{B}_{\rho} \subseteq \Omega \atop{x_0 \in \Omega}} \left(\intav{\mathrm{B}_{\rho}(x_0) \cap \Omega} |f(x) - (f)_{x_0, \rho}|^p dx\right)^{\frac{1}{p}} < \infty \end{equation} for a constant $\mathrm{C}>0$ that is independent on $\rho$. As a consequence of the John–Nirenberg inequality, such a semi-norm is equivalent to the one in the classical BMO spaces (see \cite[pp. 763-764]{PT}). \end{definition} \vspace{0.4cm} \begin{theorem}[{\bf Boundary $p-$BMO type estimates}]\label{BMO} Let $u$ be an $L^{p}$-viscosity solution to $$ \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \mathrm{B}^+_1 \\ \beta \cdot Du&=& g(x) &\mbox{on}& \mathrm{T}_1, \end{array} \right. $$ where $f \in \textrm{p-BMO}(\mathrm{B}^+_1) \cap L^{p}(\mathrm{B}^+_1)$, for some $n \le p\leq \infty$. Assume further that assumptions (A1)-(A3) and $(A4)^{\star}$ are in force. Then, $D^2 u \in \text{p-BMO}(\mathrm{B}^+_1)$, Moreover, the following estimate holds {\scriptsize{ \begin{equation}\label{BMO2} \|D^2 u\|_{p-\textrm{BMO}\left(\overline{\mathrm{B}^{+}_{\frac{1}{2}}}\right)} \le \mathrm{C}\left(n, \lambda, \Lambda, p, \mathrm{C}_{\ast}, \mu_0, \|\beta\|_{C^{1, \alpha}(\overline{\mathrm{T}_1})}\right)\left(\|u\|_{L^{\infty}(\mathrm{B}^+_1)} + \|g\|_{C^{1, \alpha}(\overline{\mathrm{T}_1})} + \|f\|_{\textrm{p-BMO}(\mathrm{B}^+_1)}\right). 
\end{equation}}} \end{theorem} \begin{remark}[{\bf $C^{1, \text{Log-Lip}}$ estimates}] Since viscosity solutions to \eqref{E1} enjoy a Hessian control in the $L^p$-average sense, by invoking the embedding result from \cite[Lemma 1]{AZBED} we may establish $C^{1, \text{Log-Lip}}$ type estimates: Let $h: \mathrm{B}^{+}_{1} \rightarrow \mathbb{R}$ be a function such that for all $1 \le i, j\le n$, locally $D_{ij}h \in BMO_{p}(\mathrm{B}_{1}^{+})$. Then, $h \in C_{loc}^{1, \text{Log-Lip}}(\mathrm{B}_{1}^{+})$. In other words, {\scriptsize{ $$ \displaystyle \sup_{\rho \in (0, 1)} \sup_{z \in \partial \Omega}\sup_{\mathrm{B}_{\rho}(z)\cap\Omega} \frac{|u(x)-[u(z)+D u(z)\cdot (x-z)]|}{\rho^2\ln \rho^{-1}} \leq \hat{\mathrm{C}}\cdot \left(\|u\|_{L^{\infty}(\Omega)} + \|g\|_{C^{1, \alpha}(\partial \Omega)} + \|f\|_{\textrm{p-BMO}(\Omega)}\right), $$}} for some constant $\hat{\mathrm{C}} = \hat{\mathrm{C}}(n, \lambda, \Lambda, p, \mathrm{C}_{\ast}, \mu_0, \|\beta\|_{C^{1, \alpha}(\overline{\mathrm{T}_1})},\|\partial \Omega\|_{C^{1,1}})$ (cf. \cite[Theorem 5.2]{daSR19} for related boundary estimates). Finally, we must quote \cite{IMN17}, where the authors establish essentially close-to-optimal assumptions on $f(x,u)$ under which $C_{\text{loc}}^{1,1}$ regularity of $W^{2,p}$ solutions to the classical semi-linear equation $\Delta u = f(x,u)$ holds true. \end{remark} We would like to mention that, although our manuscript has been strongly motivated by the recent works \cite{BJ}, \cite{daSR19} and \cite{PT}, our approach has required non-trivial adaptations due to the presence of a non-homogeneous oblique boundary condition and the asymptotic nature of the limiting operator. In addition, unlike \cite{ZZZ21}, our findings provide additional quantitative applications, such as BMO type estimates (see Section \ref{Section5}), $W^{2,p}$-regularity for obstacle-type problems with oblique boundary conditions (see Section \ref{obst}), and density results for oblique type problems (see Section \ref{Sec_Density}) via tangential methods. Additionally, our recession profiles \eqref{Reces} encode more general operators than the linear ones addressed in \cite{ZZZ21} (see also \cite{BOW16}). Our results, in particular, extend, in the oblique boundary scenario, the former results from \cite{BOW16}, \cite{PT} and \cite{ST} (see also \cite{daSR19}), and to some extent, those from \cite{BOW16}, \cite{Kry13}, \cite{Kry17} and \cite{ZZZ21}, by employing techniques tailored to the general framework of fully nonlinear models under relaxed convexity assumptions and oblique boundary data. Finally, we also highlight that Theorems \ref{T1} and \ref{BMO} can be extended to a more general class of models, including fully nonlinear parabolic PDEs. For the sake of simplicity and clarity, we have decided to consider just the elliptic scenario in this paper. We intend to study such issues in a forthcoming project. \subsection{Applications to obstacle type problems} \hspace{0.4cm} In closing, we would like to highlight that boundary Hessian estimates are also useful in the context of obstacle type problems. Physically, the classical obstacle problem refers to the equilibrium position of an elastic membrane (i.e., $u: \Omega \to \R$), whose boundary is held fixed (i.e., $u_{|\partial \Omega} = g$), lying above a given obstacle and subject to the action of external forces, \textit{e.g.}, friction, tension, and/or gravity ($f:\Omega \to \R$).
More specifically, given an elastic membrane $u$ attached to a fixed boundary $\partial \Omega$ and an obstacle $\phi$, we seek the equilibrium position of the membrane when we move it downwards toward the obstacle, satisfying:  $$ \left\{ \begin{array}{rcll} \Delta u & \le & f & \text{in the weak sense in} \quad \Omega \\ \Delta u & = & f & \text{in the weak sense in} \quad \{u> \varphi\} \\ u & \ge & \varphi & \text{in} \quad \Omega \end{array} \right. $$ with $u-g \in H^1_0(\Omega)$. We refer \cite{BLOP18} for Calder\'{o}n-Zygmund estimates for elliptic and parabolic obstacle problems in non-divergence form (under convex structure) with discontinuous coefficients and irregular obstacles. We must quote \cite{Lieb01} concerning gradient estimates for obstacle problems of elliptic models (under convex/convex structure) with oblique boundary conditions. Now, we would like to focus our attention to the recent work \cite[Theorem 1.1.]{BJ1}, where $W^{2,p}$ estimates for the obstacle problem with oblique boundary conditions \begin{equation} \label{obss1} \left\{ \begin{array}{rclcl} F(D^2 u,Du,u,x) &\le& f(x)& \mbox{in} & \Omega \\ (F(D^2 u, Du, u,x) - f(x))(u(x)-\phi(x)) &=& 0 &\mbox{in}& \Omega\\ u(x) &\ge& \phi(x) &\mbox{in}& \Omega\\ \beta(x) \cdot Du(x) + \gamma(x)u(x) &=& g(x) &\mbox{on}& \partial \Omega\\ \end{array} \right. \end{equation} for a given obstacle $\phi \in W^{2,p}(\Omega)$ satisfying $\beta(x) \cdot D \phi(x) \ge g(x)$ a.e., on $\partial \Omega$, have been obtained in the case when $F$ is a convex operator and $\gamma = g \equiv 0$. Therefore, motivated by above problem, we will investigate $W^{2,p}$ regularity estimates for obstacle-type problems \eqref{obss1} under an asymptotic convexity assumption (weaker than convexity) and non-homogeneous oblique conditions. For our purpose, it is necessary to make the following assumptions: \begin{enumerate} \item[(A5)] There exists a modulus of continuity $\omega: [0,+\infty) \to [0,+\infty)$ with $\omega(0)=0$, such that $$ F(\mathrm{X}_1, q, r,x_1) - F(\mathrm{X}_2,q,r,x_2) \le \omega\left(|x_1-x_2|\right)\left[(|q| +1) + \alpha_0 |x_1-x_2|^2\right] $$ holds for any $x_1,x_2 \in \Omega$, $q \in \mathbb{R}^n$, $r \in \mathbb{R}$, $\alpha_0 >0$ and $\mathrm{X}_1,\mathrm{X}_2 \in \textrm{Sym}(n)$ satisfying $$ - 3 \alpha_0 \begin{pmatrix} \mathrm{Id}_n& 0 \\ 0& \mathrm{Id}_n \end{pmatrix} \leq \begin{pmatrix} \mathrm{X}_2&0\\ 0&-\mathrm{X}_1 \end{pmatrix} \leq 3 \alpha_0 \begin{pmatrix} \mathrm{Id}_n & -\mathrm{Id}_n \\ -\mathrm{Id}_n& \mathrm{Id}_n \end{pmatrix}, $$ where $\mathrm{Id}_n$ is the identity matrix. \item[(A6)] $F$ is a proper operator in the following sense: $$ d\cdot (r_{2}-r_{1}) \leq F(\mathrm{X},q,r_{1},x)-F(\mathrm{X},q,r_{2},x), $$ for any $\mathrm{X} \in \text{Sym}(n)$, $r_1,r_2 \in \mathbb{R}$, with $r_{1}\leq r_{2}$, $x \in \Omega$, $q \in \mathbb{R}^n$, and some $d >0$. \end{enumerate} The previous assumptions are invoked to assure the validity of the Comparison Principle for oblique derivatives problems like \eqref{E1} (see \cite[Theorem 2.10]{CCKS} and \cite[Theorem 7.17]{Leiberman}), ensuring the use of Perron's Method for viscosity solutions (see Lieberman's Book \cite[Chapter 7.4 and 7.6]{Leiberman}). Finally, our third result addresses a $W^{2,p}$-regularity theory for \eqref{obss1}, that does not rely on any additional regularity assumptions on nonlinearity $F$. 
\begin{theorem}[{\bf Obstacle problems with \textit{oblique} boundary conditions}]\label{T3} Let $u$ be an $L^p$-viscosity solution of \eqref{obss1} with $n<p< \infty$, where $F$ satisfies the structural assumption (A1)-(A4) and (A5)-(A6), $\partial \Omega \in C^{3}$, $f \in L^p(\Omega)$, $\beta \in C^{2}(\partial \Omega)$ with $\beta \cdot {\bf \overrightarrow{\bf{n}}} \ge \mu_0$ for some $\mu_0 >0$, $\phi \in W^{2,p}(\Omega)$ and $g \in C^{1,\alpha}(\partial \Omega)$ (for some $\alpha \in (0, 1)$). Then, $u \in W^{2,p}(\Omega)$ with the following estimate \begin{equation} \label{obs2} \|u\|_{W^{2, p}(\Omega)} \le \mathrm{C}_0\cdot \left( \|f\|_{L^p(\Omega)}+ \|\phi\|_{W^{2,p}(\Omega)} +\|g\|_{C^{1,\alpha}(\partial \Omega)} \right). \end{equation} where $\mathrm{C}_0 = \mathrm{C}_0(n,\lambda,\Lambda, p,\mu_0, \sigma, \omega, \|\beta\|_{C^2(\partial \Omega)}, \|\gamma\|_{C^2(\partial \Omega)}, \partial \Omega, \textrm{diam}(\Omega), \theta_0)$. \end{theorem} In conclusion, we must stress that similar Hessian estimates hold true for obstacle problems with Dirichlet boundary conditions: $$ \left\{ \begin{array}{rclcl} F(D^2 u,Du,u,x) &\le& f(x)& \mbox{in} & \Omega \\ (F(D^2 u, Du, u,x) - f(x))(u(x)-\phi(x)) &=& 0 &\mbox{in}& \Omega\\ u(x) &\ge& \phi(x) &\mbox{in}& \Omega\\ u(x) &=& g(x) &\mbox{on}& \partial \Omega \end{array} \right. $$ by invoking the related global $W^{2, p}$ regularity estimates addressed in \cite[Theorem 2.7]{BOW16} and \cite[Theorems 1.1]{daSR19} with the following estimate $$ \|u\|_{W^{2, p}(\Omega)} \le \mathrm{C}(\verb"universal")\cdot \left( \|f\|_{L^p(\Omega)}+ \|\phi\|_{W^{2,p}(\Omega)} +\|g\|_{W^{2,p}(\partial \Omega)} \right) \quad \text{for} \quad n<p< \infty. $$ Finally, we would like to mention the interesting manuscript \cite{Indr19}, in which the author utilized the regularity (in the case $p=\infty$) to prove a conjecture on the geometry of the free and fixed boundaries in the setting of fully nonlinear obstacle problems (cf. also \cite{Indrei19} for related regularity results). \subsection{Hessian estimates to models in non-divergence form: State-of-Art } \hspace{0.4cm} Regularity estimates in Sobolev spaces for nonlinear models have instituted an important chapter in the modern history of the elliptic theory of viscosity solutions. Moreover, because of its non-variational nature, obtaining sharp integrability properties of solutions of such elliptic operators have been a demanding task throughout the last years. One of the most pressing issues was whether $W^{2,p}$ a priori estimates could be addressed for any fully nonlinear operator, thereby attracting notable efforts from the academic community over the last three decades or so, while remaining open to the advancement of modern techniques. In this direction, Caffarelli proved in his seminal work \cite[Theorem 7.1]{Caff1} that $C^0-$viscosity solutions to \begin{equation}\label{EqCaf} F(D^2 u, x) = f(x) \quad \text{in} \quad \mathrm{B}_1 \end{equation} satisfy an interior $W^{2,p}-$regularity for $n < p < \infty$, with the following estimate $$ \|u\|_{W^{2, p}\left(\mathrm{B}_{\frac{1}{2}}\right)} \le \mathrm{C}(n, p, \lambda, \Lambda )\left(\|u\|_{L^{\infty}(\mathrm{B}_1)} + \|f\|_{L^p(\mathrm{B}_1)}\right), $$ provided a small oscillation of $F(\mathrm{X},x)$ in the variable $x$ and $C^{1,1}$ \textit{a priori} estimates for homogeneous equation with ``frozen coefficients'' $F(D^2 w, x_0)=0$ are in force. 
Furthermore, Caffarelli also demonstrated that for any $p<n$, there exists a fully nonlinear operator fulfilling the previous hypotheses, for which $W^{2,p}-$estimates fail to hold. In this regard, Dong-Kim's work \cite{DK91} is worth mentioning for notable examples of linear elliptic operators in non-divergence form with piecewise constant coefficients, whose solutions lack $W^{2, p}$ estimates. A few years later, Caffarelli's $W^{2, p}-$estimates were extended by Escauriaza in \cite[Theorem 1]{Es93} in the context of $L^n-$viscosity solutions to the range of exponents $n - \varepsilon_0 < p < \infty$ with $\varepsilon_0 = \varepsilon_0\left(\frac{\Lambda}{\lambda}, n\right) \in \left(\frac{n}{2}, n\right)$ (Escauriaza's constant), which provides the minimal range of integrability for which the Harnack inequality (resp. H\"{o}lder regularity) holds for viscosity solutions to \eqref{EqCaf}. Furthermore, when the boundary of the domain is smooth enough, namely $C^{1,1}$, a global $W^{2,p}$ estimate was obtained for $n - \varepsilon_0< p < \infty$ by Winter in \cite[Theorem 4.5]{Winter}. In the aftermath, Byun-Lee-Palagachev in \cite{BLP} extended the Escauriaza-Winter results by studying the global regularity of viscosity solutions to the following Dirichlet problem for a fully nonlinear uniformly elliptic equation: \begin{equation}\label{Eq-Weight} \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \Omega \\ u(x) &=& 0 &\mbox{on}& \partial \Omega \end{array} \right. \end{equation} in weighted Lebesgue spaces. Indeed, under appropriate structural conditions, if $f\in L^p_w(\Omega)$ for $p>n-\varepsilon_0$ and $w \in \mathcal{A}_{\frac{p}{n-\varepsilon_0}}$ (Muckenhoupt classes), then the problem \eqref{Eq-Weight} admits a unique solution $u \in W^{2,p}_w(\Omega)$ such that $$ \|u\|_{W^{2, p}_w\left(\Omega\right)} \le \mathrm{C}(\verb"universal")\|f\|_{L^p_w(\Omega)}. $$ Recently, in a new landmark for this theory, Caffarelli's local $W^{2, p}$ regularity estimates were extended by Pimentel-Teixeira in \cite{PT}. Precisely, the novelty with respect to Caffarelli's results is the concept of \textit{Recession} operator (a sort of tangent profile for $F$ at ``infinity'', see \eqref{Reces}). In this context, the authors relaxed the hypothesis of $C^{1,1}$ \textit{a priori} estimates for solutions of equations without dependence on $x$, by assuming that $F$ must be ``convex or concave'' only at the ends of $\textit{Sym}(n)$. In this setting, the authors proved that any viscosity solution to \begin{equation}\label{Eq-Asymp-Local} F(D^2 u) =f(x) \quad \textrm{in} \quad \mathrm{B}_1, \end{equation} where $f \in L^{p}(\mathrm{B}_1) \cap C^0(\mathrm{B}_1)$, fulfills $u \in W^{2,p}_{loc}(\mathrm{B}_1)$ with an \textit{a priori} estimate $$ \|u\|_{W^{2, p}\left(\mathrm{B}_{\frac{1}{2}}\right)} \le \mathrm{C}(n, p, \lambda, \Lambda)\left(\|u\|_{L^{\infty}(\mathrm{B}_1)} + \|f\|_{L^p(\mathrm{B}_1)}\right). $$ We must quote similar global $W^{2, p}-$estimates independently proved by Byun \textit{et al}. in \cite[Theorems 2.5 and 2.7]{BOW16} for solutions to asymptotically linear operators with a zero Dirichlet boundary condition. Their approach is based on a proper transformation that converts a given asymptotically elliptic PDE into a suitable uniformly elliptic one. We also cite Silvestre-Teixeira's work \cite[Theorems 1 and 4]{ST} for an interesting survey on this theme in the setting of sharp gradient estimates.
In the sequel, global $W^{2,p}$, $\text{BMO}_p$ and $C^{1, \text{Log-Lip}}$ regularity estimates, in the context of asymptotically convex operators (see Definition \ref{DefAC}), i.e., for $$ \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \Omega\\ u(x) &=& g(x) &\mbox{on} & \partial \Omega \end{array} \right. $$ were addressed by da Silva-Ricarte in \cite[Theorems 1.1, 1.2 and 5.2]{daSR19}. Precisely, they obtain $$ \displaystyle \|u\|_{W^{2, p}\left(\Omega\right)} \le \mathrm{C}(n, p, \lambda, \Lambda, \|\partial \Omega\|_{C^{1, 1}})\left(\|u\|_{L^{\infty}(\Omega)} + \|f\|_{L^p(\Omega)} + \|g\|_{W^{2,p}(\partial \Omega)}\right). $$ The proof of such estimates in \cite{daSR19} is based on the successful adaptation of compactness arguments inspired by the ideas from Pimentel-Teixeira \cite{PT}, Silvestre-Teixeira \cite{ST}, and Winter \cite{Winter}. We also refer the reader to Lee's work \cite{Lee19} for Hessian estimates for solutions of \eqref{Eq-Asymp-Local} in the framework of weighted Orlicz spaces. Among other works on this research topic, we must cite the sequence of Krylov's fundamental manuscripts \cite{Kry12}, \cite{Kry13} and \cite{Kry17}, which are associated with the existence of $L^p-$viscosity solutions under either relaxed or no convexity assumptions on $F$. Now, concerning problems with oblique boundary conditions, Hessian estimates for viscosity solutions of elliptic equations like \eqref{E1} have been regarded as an important issue in the study of PDEs for a long time. The theory of this subject has also been improved over the last several decades. For general fully nonlinear problems like \eqref{E1}, the existence and uniqueness of viscosity solutions to oblique boundary problems were proved in \cite{Hi} and \cite{Leiberman}. Inspired by the ideas from Caffarelli's fundamental work \cite{Caff1}, Byun and Han in \cite{BJ} establish global $W^{2,p}-$estimates for viscosity solutions of convex fully nonlinear elliptic models of the form: $$ \left\{ \begin{array}{rclcl} F(D^2u,Du,u,x) &=& f(x)& \mbox{in} & \Omega \\ \beta(x) \cdot Du(x) &=& 0 &\mbox{on}& \partial \Omega. \end{array} \right. $$ Moreover, the following estimate holds $$ \|u\|_{W^{2,p}(\Omega)} \le C(\verb"universal")\left( \|u\|_{L^{\infty}(\Omega)} + \|f\|_{L^p(\Omega)} \right). $$ It is also worth highlighting that $C^{0, \alpha}$, $C^{1,\alpha}$ and $C^{2, \alpha}$ boundary regularity were addressed for the Neumann problem on flat boundaries by Milakis-Silvestre in \cite{Sil1}. Moreover, such results were recently extended to the general oblique scenario by Li-Zhang in \cite{LiZhang}. Finally, we must refer to the work of Zhang \textit{et al.} \cite{ZZZ21}, which establishes $W^{2,p}-$regularity for viscosity solutions of fully nonlinear elliptic equations with a homogeneous oblique derivative boundary condition. In this context, the nonlinearity is assumed to fulfill a certain asymptotic regularity condition. In conclusion, taking such historical advances into account, in this work we aim to obtain global $W^{2,p}$ estimates for a large class of nonlinear elliptic PDEs. To that end, we only assume that $F(\mathrm{X}, \varsigma, s, x)$ is asymptotically elliptic, that is, we establish global Sobolev \textit{a priori} estimates for solutions of \eqref{E1} under a relaxed convexity assumption (a suitable asymptotic structure of $F$ just at ``infinity'' of $\textit{Sym}(n)$ according to Definition \ref{DefAC}).
This paper is organized as follows: Section \ref{section2} contains the main notations and preliminary results on which we will work throughout this manuscript. In Section \ref{section3}, we present the \textit{Modus Operandi}, i.e., the tangential approximation mechanism used in order to relate the estimates coming from \textit{Recession} operator $F^{\sharp}$ with its original counterpart $F$. Section \ref{Sec4} is devoted to proving Theorem \ref{T1}. In Section \ref{Section5} we establish the regularity estimates in the borderline setting, namely for $\textrm{p-BMO}$ spaces. Finally, in the final sections, we establish the global $W^{2,p}$ estimate for obstacle problems with oblique boundary conditions, and a density result in a suitable class of viscosity solutions. \section{Preliminaries and auxiliary results} \label{section2} \hspace{0.4cm}To \eqref{E1}, we introduce the appropriate notions of viscosity solutions. \begin{definition}[{\bf $C^{0}-$viscosity solutions}]\label{VSC_0} Let $F$ be a $\left(\lambda, \Lambda, \sigma, \xi\right)-$elliptic operator and $\Gamma \subset \partial\Omega$ a relatively open set. A function $u \in C^0(\overline{\Omega})$ is said a $C^{0}-$viscosity solution if the following condition hold: \begin{enumerate} \item[a)] for all $ \forall\,\, \varphi \in C^{2} (\Omega \cup \Gamma)$ touching $u$ by above at $x_0 \in \Omega \cup \Gamma$, $$ \left\{ \begin{array}{rcl} F\left(D^2 \varphi(x_{0}), D \varphi(x_{0}), \varphi(x_{0}), x_{0}\right) \geq f(x_0) & \text{when} & x_0 \in \Omega \\ \text{and} \quad \mathcal{B}(x_0, u(x_0), D \varphi(x_0)) \ge g(x_0) & \text{at} & x_0 \in \Gamma. \end{array} \right. $$ \item[b)] for all $ \forall\,\, \varphi \in C^{2 } (\Omega \cup \Gamma)$ touching $u$ by below at $x_0 \in \Omega \cup \Gamma$, $$ \left\{ \begin{array}{rcl} F\left(D^2 \varphi(x_{0}), D \varphi(x_{0}), \varphi(x_{0}), x_{0}\right) \leq f(x_0) & \text{when} & x_0 \in \Omega \\ \text{and} \quad \mathcal{B}(x_0, u(x_0), D \varphi(x_0)) \le g(x_0) & \text{at} & x_0 \in \Gamma. \end{array} \right. $$ \end{enumerate} \end{definition} \begin{definition}[{\bf$L^{p}$-viscosity solution}]\label{VSLp} Let $F$ be a $(\lambda,\Lambda,\sigma, \xi)$-elliptic operator, $p\ge n$ and $f\in L^{p}(\Omega)$. Assume that $F$ is continuous in $\mathrm{X}$, $q$, $r$ and measurable in $x$. A function $u\in C^0(\overline{\Omega})$ is said an $L^{p}$-viscosity solution for $\ref{E1}$ if the following assertions hold: \begin{enumerate} \item [a)] For all $\varphi\in W^{2,p}(\overline{\Omega})$ touching $u$ by above at $x_0 \in \overline{\Omega}$ $$ \left\{ \begin{array}{rcl} F\left(D^2 \varphi(x_{0}), D \varphi(x_{0}), \varphi(x_{0}), x_{0}\right) \geq f(x_0) & \text{when} & x_0 \in \Omega \\ \text{and} \quad \mathcal{B}(x_0, u(x_0), D \varphi(x_0)) \ge g(x_0) & \text{at} & x_0 \in \partial \Omega. \end{array} \right. $$ \item [b)] For all $\varphi\in W^{2,p}(\overline{\Omega})$ touching $u$ by below at $x_0 \in \overline{\Omega}$ $$ \left\{ \begin{array}{rcl} F\left(D^2 \varphi(x_{0}), D \varphi(x_{0}), \varphi(x_{0}), x_{0}\right) \leq f(x_0) & \text{when} & x_0 \in \Omega \\ \text{and} \quad \mathcal{B}(x_0, u(x_0), D \varphi(x_0)) \le g(x_0) & \text{at} & x_0 \in \partial \Omega. \end{array} \right. $$ \end{enumerate} \end{definition} For convenience of notation, we define $$ \mathcal{L}^{\pm}(u) \defeq \mathscr{P}^{\pm}_{\lambda,\Lambda}(D^2 u) \pm \sigma |Du|. 
$$ \begin{definition} We define the class $\overline{\mathcal{S}}\left(\lambda,\Lambda,\sigma, f\right)$ and $\underline{S}\left(\lambda,\Lambda,\sigma, f\right)$ to be the set of all continuous functions $u$ that satisfy $\mathcal{L}^{+}(u) \ge f$, respectively $\mathcal{L}^{-}(u) \le f$ in the viscosity sense (see Definition \ref{VSC_0}). We define, $$ \mathcal{S}\left(\lambda, \Lambda, \sigma,f\right) \defeq \overline{\mathcal{S}}\left(\lambda, \Lambda, \sigma,f\right) \cap \underline{\mathcal{S}}\left(\lambda, \Lambda,\sigma, f\right)\,\,\text{and}\,\, \mathcal{S}^{\star}\left(\lambda, \Lambda,\sigma, f\right) \defeq \overline{\mathcal{S}}\left(\lambda, \Lambda, \sigma,|f|\right) \cap \underline{\mathcal{S}}\left(\lambda, \Lambda,\sigma, -|f|\right). $$ Moreover, when $\sigma=0$, we denote $\mathcal{S}^{\star}(\lambda,\Lambda,0,f)$ just by $S^{\star}(\lambda,\Lambda,f)$ (resp. by $\underline{S}, \overline{S}, \mathcal{S}$). \end{definition} Now, we present a compactness result, whose proof can be found in \cite[Theorem 1.1]{LiZhang}. \begin{theorem}[{\bf $C^{0, \alpha^{\prime}}$ regularity}]\label{Holder_Est} Let $u \in C^0(\Omega) \cap C^0(\Gamma)$ be satisfying $$ \left\{ \begin{array}{rcl} u \in \mathcal{S}(\lambda, \Lambda, f) & \text{in} & \Omega \\ \mathcal{B}(x, u, Du)= g(x)& \text{on} & \Gamma. \end{array} \right. $$ Then, for any $\Omega^{\prime} \subset\subset \Omega \cup \Gamma$, $u \in C^{0, \alpha^{\prime}}(\overline{\Omega^{\prime}})$ and $$ \|u\|_{C^{0, \alpha^{\prime}}(\overline{\Omega^{\prime}})} \le \mathrm{C}(n, \lambda,\Lambda, \mu_0, \|\gamma\|_{L^{\infty}(\Gamma)}, \Omega^{\prime}, \Omega)\left(\|u\|_{L^{\infty}(\Omega)}+ \|f\|_{L^n(\Omega)}+\|g\|_{L^{\infty}(\Gamma)}\right) $$ where $\alpha^{\prime} \in (0, 1)$ depends on $n, \lambda,\Lambda$ and $\mu_0$. \end{theorem} Next, we present the following Stability result (see \cite[Theorem 3.8]{CCKS} for a proof). \begin{lemma}[{\bf Stability Lemma}]\label{Est} For $k \in \mathbb{N}$ let $\Omega_k \subset \Omega_{k+1}$ be an increasing sequence of domains and $\displaystyle \Omega \defeq \bigcup_{k=1}^{\infty} \Omega_k$. Let $p \ge n$ and $F, F_k$ be $(\lambda, \Lambda, \sigma, \xi)-$elliptic operators. Assume $f \in L^{p}(\Omega)$, $f_k \in L^p(\Omega_k)$ and that $u_k \in C^0(\Omega_k)$ are $L^{p}-$viscosity sub-solutions (resp. super-solutions) of $$ F_k(D^2 u_k,Du_k,u_k,x)=f_k(x) \quad \textrm{in} \quad \Omega_k. $$ Suppose that $u_k \to u_{\infty}$ locally uniformly in $\Omega$ and that for $\mathrm{B}_r(x_0) \subset \Omega$ and $\varphi \in W^{2,p}(\mathrm{B}_r(x_0))$ we have \begin{equation} \label{Est1} \|(\hat{g}-\hat{g}_k)^+\|_{L^p(\mathrm{B}_r(x_0))} \to 0 \quad \left(\textrm{resp.} \,\,\, \|(\hat{g}-\hat{g}_k)^-\|_{L^p(\mathrm{B}_r(x_0))} \to 0 \right), \end{equation} where $\hat{g}(x) \defeq F(D^2 \varphi, D \varphi, u,x)-f(x)$ and $\hat{g}_k(x) = F_k(D^2 \varphi, D \varphi, u_{k},x)-f_k(x)$. Then, $u$ is an $L^{p}-$viscosity sub-solution (resp. super-solution) of $$ F(D^2 u,Du,u,x)=f(x) \quad \textrm{in} \quad \Omega. $$ Moreover, if $F, f$ are continuous, then $u$ is a $C^0-$viscosity sub-solution (resp. super-solution) provided that \eqref{Est1} holds for all $\varphi \in C^2(\mathrm{B}_r(x_0))$ test function. \end{lemma} The next result is an A.B.P. Maximum Principle (see \cite[Theorem 2.1]{LiZhang} for a proof). \begin{lemma}[{\bf A.B.P. 
Maximum Principle}]\label{ABP-fullversion} Let $\Omega\subset \mathrm{B}_{1}$ and $u \in C^{0}(\overline{\Omega})$ satisfying \begin{equation*} \left\{ \begin{array}{rclcl} u\in \mathcal{S}(\lambda,\Lambda,f) &\mbox{in}& \Omega \\ \mathcal{B}(x,u,Du)=g(x) &\mbox{on}& \Gamma. \end{array} \right. \end{equation*} Suppose that exists $\varsigma\in \partial \mathrm{B}_{1}$ such that $\beta\cdot\varsigma\geq \mu_0$ and $\gamma\le 0$ in $\Gamma$. Then, \begin{eqnarray*} \|u\|_{L^{\infty}(\Omega)}\leq \|u\|_{L^{\infty}(\partial \Omega\setminus \Gamma)}+\mathrm{C}(n, \lambda, \Lambda, \mu_0)(\| g\|_{L^{\infty}(\Gamma)}+\| f\|_{L^{n}(\Omega)}) \end{eqnarray*} \end{lemma} In the sequel, we comment on the existence and uniqueness of viscosity solutions with oblique boundary conditions. For that purpose, we will assume the following condition on $F$: \begin{enumerate} \item [$(\bf SC)$] There exists a modulus of continuity $\tilde{\omega}$, i.e., $\tilde{\omega}$ is nondecreasing with $ \displaystyle \lim_{t \to 0} \tilde{\omega}(t) =0$ and $$ \psi_{F}(x,y) \le \tilde{\omega}(|x-y|). $$ \end{enumerate} Now, we will ensure the existence and uniqueness of viscosity solutions to the following problem: $$ \left\{ \begin{array}{rclcl} F(D^2u, x) &=& f(x) & \mbox{in} & \Omega,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \Gamma\\ u(x)&=&\varphi(x) & \mbox{on} & \partial \Omega\setminus \Gamma, \end{array} \right. $$ where $\Gamma$ is relatively open of $\partial\Omega$. The next proof follows the ideas from \cite[Theorem 3.1]{LiZhang} with minor modifications. For this reason, we will omit it here. \begin{theorem}\label{comparation} Suppose that $\Gamma\in C^{2}$ and $\beta\in C^{2}(\overline{\Gamma})$. Assume that $F$ satisfies (A1) and $(SC)$ . Let $u$ and $v$ be satisfying $$ \left\{ \begin{array}{rclcl} F(D^2u, x) &\geq& f_{1}(x) & \mbox{in} & \Omega,\\ \mathcal{B}(x,u,Du)&\geq& g_{1}(x) & \mbox{on} & \Gamma\\ \end{array} \right. $$ and $$ \left\{ \begin{array}{rclcl} F(D^2v, x) &\leq& f_{2}(x) & \mbox{in} & \Omega,\\ \mathcal{B}(x,v,Dv)&\leq& g_{2}(x) & \mbox{on} & \Gamma.\\ \end{array} \right. $$ Then, $$ \left\{ \begin{array}{rclcl} u-v\in \underline{\mathcal{S}}\left(\frac{\lambda}{n},\Lambda,f_{1}-f_{2}\right) & \mbox{in} & \Omega,\\ \mathcal{B}(x,u-v,D(u-v))\geq (g_{1}-g_{2})(x) & \mbox{on} & \Gamma.\\ \end{array} \right. $$ \end{theorem} \begin{theorem}[{\bf Uniqueness}]\label{Unicidade} Let $\Gamma\in C^{2}$, $\beta\in C^{2}(\overline{\Gamma})$, $\gamma\le 0$ and $\varphi\in C^{0}(\partial \Omega\setminus \Gamma)$. Suppose that there exists $\varsigma\in\partial \mathrm{B}_{1}$ such that $\beta\cdot \varsigma\ge \mu_0$ on $\Gamma$. Assume that $F$ satisfies $(A1)$. Then, there exists at most one viscosity solution of $$ \left\{ \begin{array}{rclcl} F(D^2u, x) &=& f(x) & \mbox{in} & \Omega,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \Gamma\\ u(x)&=&\varphi(x) & \mbox{on} & \partial \Omega\setminus \Gamma. \end{array} \right. $$ \end{theorem} \begin{proof} The proof is direct consequence of Theorem \ref{comparation} with A.B.P. estimate (Lemma \ref{ABP-fullversion}). \end{proof} \begin{theorem}[{\bf Existence}]\label{Existencia} Let $\Gamma\in C^{2}$, $\beta\in C^{2}(\overline{\Gamma})$, $\gamma\le 0$ and $\varphi\in C^{0}(\partial \Omega\setminus \Gamma)$. Suppose that there exists $\varsigma\in\partial \mathrm{B}_{1}$ such that $\beta\cdot \varsigma\ge \mu_0$ on $\Gamma$ and assume $(SC)$. 
In addition, suppose that $\Omega$ satisfies an exterior cone condition at any $x\in \partial \Omega\setminus \overline{\Gamma}$ and satisfies an exterior sphere condition at any $x\in \overline{\Gamma}\cap (\partial \Omega\setminus \Gamma)$. Then, there exists a unique viscosity solution $u\in C^{0}(\overline{\Omega})$ of $$ \left\{ \begin{array}{rclcl} F(D^2u, x) &=& f(x) & \mbox{in} & \Omega,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \Gamma\\ u(x)&=&\varphi(x) & \mbox{on} & \partial \Omega\setminus \Gamma. \end{array} \right. $$ \end{theorem} \begin{proof} The proof follows the same lines as the one in \cite[Theorem 3.3]{LiZhang} with minor modifications. For instance, we must invoke Theorem \ref{Unicidade} instead of \cite[Theorem 3.2]{LiZhang}. Other necessary modifications are the same made in the proof of Theorem \ref{comparation}. \end{proof} Next result is a key tool in our tangential approach. Precisely, it describes how the point-wise convergence of $F_{\tau}$ to $F^{\sharp}$ takes place. \begin{lemma} Let $F$ be a uniformly elliptic operator and assume $F^{\sharp}$ there exists. Then, given $\epsilon >0$ there exists $\tau_0(\lambda, \Lambda, \epsilon, \psi_{F^{\ast}}) >0$ such that, for every $\tau \in (0, \tau_0)$ there holds $$ \frac{\left|\tau F \left(\tau^{-1} \mathrm{X}, 0, 0, x\right) - F^{\sharp}(\mathrm{X}, 0, 0, x) \right|}{1+\|\mathrm{X}\|} \le \epsilon, $$ for every $\mathrm{X} \in \text{Sym}(n)$. \end{lemma} \begin{proof} The proof follows the same lines as the one in \cite{ST} (see also \cite{PT}). For this reason, we omit it here. \end{proof} \begin{remark} It is worth highlighting that the $W^{2, p}$ estimates from Theorem \ref{T1} depends not only on universal constants, but in effect on the ``modulus of convergence'' $F_{\tau} \to F^{\sharp}$. Precisely, by defining $\varsigma:(0, \infty) \to (0, \infty)$ as follows $$ \displaystyle \varsigma(\varepsilon) \defeq \sup_{\mathrm{X} \in \text{Sym}(n) \atop{\tau \in (0, \tau_0(\varepsilon))}} \left\{\frac{\left|\tau F\left(\tau^{-1} \mathrm{X}, 0, 0, x\right) - F^{\sharp}(\mathrm{X}, 0, 0, x) \right|}{1+\|\mathrm{X}\|}\right\}, \quad (\varepsilon \to 0^{+}\,\,\,\Rightarrow\,\,\,\varsigma(\varepsilon)\to 0^+), $$ then the constant $\mathrm{C}>0$ appearing in up-to-the boundary $W^{2,p}$ \textit{a priori} estimates \eqref{2} also depends on $\varsigma$. \end{remark} \subsection{Approximation device in geometric tangential analysis} \label{section3} \hspace{0.4cm} The main result of this section provides a compactness method that will be used as a key point throughout the whole paper. As a matter of fact, we will show that if our equation is close enough to the homogeneous equation with constant coefficients, then our solution is sufficiently close to a solution of the homogeneous equation with frozen coefficients. At the core of our techniques is the notion of the recession operator (see Definition \ref{DefAC}). The appropriate way to formalize this intuition is with an approximation lemma. \begin{lemma}[{\bf Approximation Lemma}] \label{Approx} Let $n \le p < \infty$, $0 \le \nu \le 1$ and assume that $(A1)-(A4)$ are in force. 
Then, for every $\hat{\delta} >0$, $\varphi \in C^0(\partial \mathrm{B}_1(0^{\prime},\nu))$ with $\|\varphi\|_{L^{\infty}(\partial \mathrm{B}_1(0^{\prime},\nu))} \le \mathfrak{c}_1$ and $g \in C^{1,\alpha}(\overline{\mathrm{T}}_2)$ with $0 < \alpha < 1$ and $\|g\|_{C^{1,\alpha}(\overline{\mathrm{T}}_2)} \le \mathfrak{c}_2$ for some $\mathfrak{c}_2 >0$ there exist positive constants $\epsilon =\epsilon(\hat{\delta},n, \mu_0, p, \lambda, \Lambda, \gamma,\mathfrak{c}_1, \mathfrak{c}_2) < 1$ and $\tau_0 = \tau_0(\hat{\delta}, n, \lambda,\Lambda, \mu_0, \mathfrak{c}_1, \mathfrak{c}_2) >0$ such that, if $$ \max\left\{ |F_{\tau}(\mathrm{X},x) - F^{\sharp}(\mathrm{X},x)|, \, \|\psi_{F^{\sharp}}(\cdot,0)\Vert_{L^{p}(\mathrm{B}^{+}_{r})},\,\|f\|_{L^{p}(\mathrm{B}^+_{r})} \right\} \le \epsilon \quad \textrm{and} \quad \tau \le \tau_0 $$ then, any two $L^p$-viscosity solutions $u$ (normalized, i.e., $\|u\|_{L^{\infty}(\mathrm{B}^{+}_r(0^{\prime},\nu))}\le 1$) and $\mathfrak{h}$ of $$ \left\{ \begin{array}{rclcl} F_{\tau}(D^2u,x) &=& f(x)& \mbox{in} & \mathrm{B}^{+}_r(0^{\prime},\nu) \cap \mathbb{R}^{n}_+ \\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{B}_r(0^{\prime},\nu) \cap \mathrm{T}_r\\ u(x) &=& \varphi(x) &\mbox{on}& \overline{\partial \mathrm{B}_r(0^{\prime},\nu) \cap \mathbb{R}^n_+} \end{array} \right. $$ and $$ \left\{ \begin{array}{rclcl} F^{\sharp}(D^2 \mathfrak{h},0) &=& 0& \mbox{in} & \mathrm{B}^{+}_{\frac{7}{8}r}(0^{\prime},\nu) \cap \mathbb{R}^n_+ \\ \mathcal{B}(x,\mathfrak{h},D\mathfrak{h}) &=& g(x) & \mbox{on} & \mathrm{B}_{\frac{7}{8} r}(0^{\prime},\nu) \cap \mathrm{T}_r\\ \mathfrak{h}(x) &=& u(x) &\mbox{on}& \overline{\partial \mathrm{B}_{\frac{7}{8}r}(0^{\prime}, \nu) \cap \mathbb{R}^n_+} \end{array} \right. $$ satisfy $$ \|u-\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^{+}_{\frac{7}{8}r}(0^{\prime},\nu))} \le \hat{\delta}. $$ \end{lemma} \begin{proof} We may assume, without loss of generality that $r=1$. We will proceed with a \textit{reductio at absurdum} argument. Thus, let us assume that the claim is not satisfied. Then, there exists $\delta_0>0$ and a sequence of functions $(F_{\tau_j})_{j \in \mathbb{N}}$, $(F^{\sharp}_j)_{j \in \mathbb{N}}$, $(u_{j})_{j \in \mathbb{N}}$, $(f_j)_{j \in \mathbb{N}}$, $(\varphi_j)_{j \in \mathbb{N}}$, $\{g_j\}_{j \in \mathbb{N}}$ and $(\mathfrak{h}_j)_{j \in \mathbb{N}}$ linked through $$ \left\{ \begin{array}{rclcl} F_{\tau_j}(D^2u_j,x) &=& f_j(x) & \mbox{in} & \mathrm{B}_1(0^{\prime},\nu_j) \cap \mathbb{R}^{n}_{+}\\ \mathcal{B}(x,u_{j},Du_{j})&=& g_j(x) & \mbox{on} & \mathrm{B}_1(0^{\prime},\nu_j) \cap T_1\\ u_j(x) &=& \varphi_j(x) &\mbox{on}& \overline{\partial \mathrm{B}_1(0^{\prime},\nu) \cap \mathbb{R}^n_+} \end{array} \right. $$ and $$ \left\{ \begin{array}{rclcl} F^{\sharp}_{j}(D^2 \mathfrak{h}_j,0) &=& 0 & \mbox{in} & \mathrm{B}_{\frac{7}{8}}(0^{\prime},\nu_j) \cap \mathbb{R}^{n}_{+}\\ \mathcal{B}(x,\mathfrak{h}_{j},D\mathfrak{h}_{j})&=& g_j(x) & \mbox{on} &\mathrm{B}_{\frac{7}{8}}(0^{\prime},\nu_j) \cap \mathrm{T}_1\\ \mathfrak{h}_j(x) &=& u_j(x) &\mbox{on}& \overline{\partial \mathrm{B}_{\frac{7}{8}}(0^{\prime}, \nu) \cap \mathbb{R}^n_+}. \end{array} \right. $$ where $\tau_j$, $\Vert \psi_{F_{\tau_{j}}^{\sharp}}(\cdot,0)\Vert_{L^{p}(\mathrm{B}^{+}_{1})}$, $\|f_j\|_{L^p(\mathrm{B}^+_{1})}$ go to zero as $j \to \infty$ and such that \begin{equation} \label{1} \|u_j-\mathfrak{h}_j\|_{L^{\infty}(\mathrm{B}_{\frac{7}{8}}(0^{\prime}, \nu) \cap \mathbb{R}^n_+)} > \delta_0. 
\end{equation} Here, $\varphi_j\in C^0(\partial \mathrm{B}_{1}(0^{\prime},\nu_j))$ and $g_j \in C^{1,\alpha}(\overline{\mathrm{T}_2})$ satisfy $\|\varphi_j\|_{L^{\infty}(\partial \mathrm{B}_{1}(0^{\prime},\nu_j))}\leq \mathfrak{c}_{1}$ and $\|g_j\|_{C^{1,\alpha}(\overline{\mathrm{T}_2})} \le \mathfrak{c}_2$, respectively. From Theorem $\ref{Holder_Est}$, we have for all $0<\rho<1$ \begin{equation} \label{4.6} \|u_j\|_{C^{0, \alpha^{\prime}}( \overline{\mathrm{B}_{1-\rho}(0^{\prime},\nu_j) \cap \mathbb{R}^n_+} )}\le \mathrm{C}(n,\lambda,\Lambda,\mathfrak{c}_{1},\mathfrak{c}_{2},\mu_0) \rho^{-\alpha^{\prime}} \end{equation} for some $\alpha^{\prime} = \alpha^{\prime}(n,\lambda,\Lambda, \mu_0)$ and sufficiently large $j$. Suppose that there exists a number $\nu_{\infty}$ and a subsequence $\{\nu_{j_k}\}$ such that $\nu_{j_k} \to \nu_{\infty}$ as $k \to +\infty$. We can assume that such a subsequence is monotone. If $v_{j_k}$ is decreasing, we can check that $$ \mathrm{B}_1(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+ \subset \mathrm{B}_1(0^{\prime},\nu_{j_k}) \cap \mathbb{R}^n_+ $$ for any $k$. Thus, we observe that \begin{equation} \label{4.7} \|u_{j_k}\|_{C^{0, \alpha^{\prime}} ( \overline{\mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+})}\le C^0(\mathfrak{c}_1,\mathfrak{c}_2,n,\lambda,\Lambda,\mu_0) \end{equation} by using \eqref{4.6}. On the other hand, if $\nu_{j_k}$ is increasing, there exists a number $k_0$ such that $$ \mathrm{B}_{31/32}(0^{\prime}, \nu_{k_j}) \cap \mathbb{R}^n_+ \supset \mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+ \quad \textrm{for} \,\,\,\, k \ge k_0. $$ Then we can deduce once again \eqref{4.7} for some proper subsequence $u_{j_k}$. Thus, according to Arzel\`{a}-Ascoli's compactness criterium, there exist functions $u_{\infty} \in C^{0,\alpha}(\overline{\mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+})$, $g_{\infty} \in C^0(\partial \mathrm{B}^+_{1})$ and subsequences such that $u_{j_k} \to u_{\infty}$ in $C^{0,\alpha^{\prime}}(\mathrm{B}^+_{1})$ and $u_{\infty}= g_{\infty}$ on $\mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap \mathrm{T}_1$. Since the functions $F^{\sharp}_{j}(\cdot, 0) \to F^{\sharp}_{\infty}(\cdot, 0)$ uniformly in compact sets of $ \textit{Sym}(n)$ and for every $\varphi \in C^2(\overline{\mathrm{B}^+_2})$, \begin{eqnarray*} |F_{\tau_{j_k}}(D^2 \varphi, x) - f_{j_k}(x) - F^{\sharp}_{\infty}(D^2 \varphi, 0)| &\le& |F_{\tau_{j_k}}(D^2 \varphi, x) - F^{\sharp}_{j_k}(D^2 \varphi, x)|+|f_{j_{k}}|+ \\ &+& |F^{\sharp}_{j_k}(D^2 \varphi, x) - F^{\sharp}_{j_{k}}(D^2 \varphi, 0)|+\\ &+&|F^{\sharp}_{j_k}(D^2 \varphi, 0)-F^{\sharp}_{\infty}(D^2 \varphi, 0)| \\ &\le& |F_{\tau_{j_k}}(D^2 \varphi, x) - F^{\sharp}_{j_k}(D^2 \varphi, x)|+|f_{j_{k}}|+\\ &+&\psi_{F_{\tau_{j_{k}}}^{\sharp}}(x,0)(1+|D^{2}\varphi|) \end{eqnarray*} then $$ \lim_{k \to +\infty} \| F_{\tau_{j_k}}(D^2 \varphi, x) - f_{j_k}(x) - F^{\sharp}_{\infty}(D^2 \varphi, 0) \|_{L^p(\mathrm{B}_r(x_0))} =0, $$ for any ball $\mathrm{B}_r(x_0) \subset \mathrm{B}_{15/16}(0,\nu_{\infty}) \cap \mathbb{R}^n_+$. Therefore, the Stability Lemma \ref{Est} ensures that $u_{\infty}$ satisfies $$ \left\{ \begin{array}{rclcl} F^{\sharp}_{\infty}(D^2 u_{\infty},0) &=& 0 & \mbox{in} & \mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+ \\ \mathcal{B}(x,u_{\infty},Du_{\infty})&=& g_{\infty}(x) & \mbox{on} & \mathrm{B}_{15/16}(0^{\prime},\nu_{\infty}) \cap T_1 \end{array} \right. $$ in the viscosity sense. 
Now, we consider $w_{j_k} \colon= u_{\infty} - \mathfrak{h}_{j_k}$ for each $k$. Then, by \cite[Lemma 4.3]{BJ}, $w_{j_k}$ satisfies $$ \left\{ \begin{array}{rclcl} w_{j_k} &\in& \mathcal{S}(\frac{\lambda}{n}, \Lambda, 0 ) & \mbox{in} & \mathrm{B}_{7/8}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+ \\ \ \mathcal{B}(x,w_{j_{k}},Dw_{j_{k}}) &=& g_{\infty}(x)-g_{j_k}(x) & \mbox{on} & \mathrm{B}_{7/8}(0^{\prime},\nu_{\infty}) \cap T_1\\ w_{j_k}(x) &=& u_{\infty}(x) - u_{j_k}(x) &\mbox{on}& \overline{\partial \mathrm{B}_{7/8}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+}. \end{array} \right. $$ Now, using Lemma \ref{ABP-fullversion}, we observe that \begin{eqnarray*} \|w_{j_k}\|_{L^{\infty}(\mathrm{B}_{7/8}(0^{\prime},\nu_{\infty}) \cap \mathbb{R}^n_+)} &\le& \|u_{\infty} - u_{j_k}\|_{L^{\infty}(\partial \mathrm{B}_{7/8}(0^{\prime},\nu_{\infty}))} +\\ &+& \mathrm{C}(n,\lambda, \Lambda, \mu_0) \|g_{\infty} - g_{j_k}\|_{L^{\infty}(\mathrm{B}_{7/8}(0^{\prime},\nu_{\infty})\cap \mathrm{T}_1)} \to 0 \quad \text{as}\,\,\, k \to +\infty. \end{eqnarray*} This is, $w_{j_k}$ converges uniformly to zero. This implies that $\mathfrak{h}_{j_k}$ converges uniformly to $u_{\infty}$ in $\overline{\mathrm{B}_{7/8}(0^{\prime}, \nu_{\infty}) \cap \mathbb{R}^n_+}$, which contradicts \eqref{1} for $k \gg 1$. \end{proof} \section{$W^{2, p}$ estimates: Proof of Theorem \ref{T1}}\label{Sec4} \hspace{0.4cm}Before starting the proof of Theorem \ref{T1} we must introduce some useful terminologies: let $$ \mathcal{Q}^d_r(x_0) \defeq \left(x_0-\frac{r}{2}, x_0 + \frac{r}{2}\right)^d $$ be the $d-$dimensional cube of side-length $r$ and center $x_0$. In the case $x_0=0$, we will write $\mathcal{Q}^d_r$. Furthermore, if $d=n$ we will write $\mathcal{Q}_r(x_0)$. We will need later on the Calder\'{o}n-Zygmund decomposition Lemma for cubes: Let $\mathcal{Q}_{1}^{n-1}\times (0,1)$ unit cube. We split it into $2^{n}$ cubes of half side. We do the same splitting procedure with each one of these $2^{n}$ cubes, and we iterate such a process. Each cube in such a procedure is called a dyadic cube. Given two $\mathcal{Q}, \tilde{\mathcal{Q}}\neq \mathcal{Q}_{1}^{n-1}\times(0,1)$ dyadic cubes, we say that $\mathcal{Q}$ is a predecessor cube of $\tilde{\mathcal{Q}}$ if $\mathcal{Q}$ is one of the $2^{n}$ cubes obtained in the partition of $\tilde{\mathcal{Q}}$. In the sequel, we may enunciate a key result whose proof can be found in Caffarelli-Cabr\'{e}'s Book \cite[Lemma 4.2]{CC}. \begin{lemma}[{\bf Calder\'{o}n-Zygmund cube decomposition}]\label{Cal-Zyg} Let $\mathcal{A}\subset \mathcal{B}\subset \mathcal{Q}_{1}^{n-1}\times (0,1)$ be measurable sets and $\delta\in(0,1)$ such that \begin{enumerate} \item[({\bf a})] $\Leb(\mathcal{A})\leq \delta$; \item[({\bf b})] If $\mathcal{Q}$ is a dyadic cube such that $\Leb(\mathcal{A}\cap \mathcal{Q})>\delta \Leb(\mathcal{Q})$, then $\tilde{\mathcal{Q}}\subset \mathcal{B}$. \end{enumerate} Then, $\Leb(\mathcal{A})\leq \delta \Leb(\mathcal{B})$. \end{lemma} Remember that the Hardy-Littlewood maximal operator is defined as follows for $f \in L^1_{\textrm{loc}}(\mathbb\R^n)$: $$ \mathcal{M}(f)(x) \defeq \sup_{\rho>0} \intav{\mathrm{B}_{\rho}(x)} |f(y)| dy. $$ We say that $\mathrm{P}_{\mathrm{M}}$ is a paraboloid with \textit{opening} $\mathrm{M}>0$ whenever $\mathrm{P}_{\mathrm{M}}(x)= \pm \frac{\mathrm{M}}{2} |x|^2 + p_1\cdot x + p_0$. Observe that such a paraboloid is a convex function in the case of ``plus'' sign and a concave function otherwise. 
Now, for $u \in C^0(\Omega)$, $\Omega^{\prime} \subset \overline{\Omega}$ and $\mathrm{M} >0$ we define $$ \underline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime}) \defeq \left\{x_0 \in \Omega^{\prime} ; \exists \, \mathrm{P}_{\mathrm{M}} \,\,\, \textrm{concave parabolid s. t.} \,\,\, \mathrm{P}_{\mathrm{M}}(x_0)=u(x_0), \,\, \mathrm{P}_{\mathrm{M}}(x) \le u(x)\,\, \forall \, x \in \Omega^{\prime}\right\} $$ and $$ \underline{\mathcal{A}}_{\mathrm{M}}(u,\Omega^{\prime}) \defeq \Omega^{\prime} \setminus \underline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime}). $$ Similarly, by using convex paraboloid we define $\overline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime})$ and $\overline{\mathcal{A}}_{\mathrm{M}}(u,\Omega^{\prime})$ and the sets $$ \mathcal{G}_{\mathrm{M}}(u,\Omega^{\prime}) \defeq \underline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime}) \cap \overline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime})\,\,\,\text{and}\,\,\,\mathcal{A}_{\mathrm{M}}(u,\Omega^{\prime}) \defeq \underline{\mathcal{A}}_{\mathrm{M}}(u,\Omega^{\prime}) \cap \overline{\mathcal{A}}_{\mathrm{M}}(u,\Omega^{\prime}). $$ In addition, we define: $$ \overline{\Theta}(u,\Omega^{\prime}, x) \defeq \inf\left\{\mathrm{M}>0 ; \, x \in \overline{\mathcal{G}}_{\mathrm{M}}(u,\Omega^{\prime})\right\}. $$ We also can define $\underline{\Theta}(u,\Omega^{\prime}, x)$. Finally, we define: $$ \Theta(u,\Omega^{\prime}, x) \defeq \sup\left\{\underline{\Theta}(u,\Omega^{\prime}, x), \overline{\Theta}(u,\Omega^{\prime}, x)\right\}. $$ \bigskip The first step towards $W^{2,p}$-estimates up to the boundary are estimates for paraboloids on the boundary. Hence, in this section, we will prove an appropriated power decay on the boundary for $\Leb(\mathcal{G}_{t}(u,\Omega))$. For this end, we will need the following standard technical result (cf. \cite{CC}): \begin{proposition}\label{P1} Let $0\leq h: \Omega \to \R$ be a measurable function, $\mu_{h}(t) \defeq \Leb(\{x \in \Omega: h(x) \ge t\})$ its distribution function and $\eta>0$, $\mathrm{M} >1$ constants. Then, $$ h \in L^p(\Omega) \Longleftrightarrow \sum_{j=1}^{\infty} \mathrm{M}^{pj} \mu_h(\eta \mathrm{M}^j)< \infty $$ for every $p \in (0,+\infty)$. Particularly, there exists a constant $\mathrm{C}=\mathrm{C}(n,\eta, \mathrm{M})$ such that $$ \displaystyle \mathrm{C}^{-1}.\left(\sum_{j=1}^{\infty} \mathrm{M}^{pj}.\mu_h(\eta \mathrm{M}^j)\right)^{\frac{1}{p}} \le \|h\|_{L^p(\Omega)} \le \mathrm{C} \left(\Leb(\Omega) + \sum_{j=1}^{\infty} \mathrm{M}^{pj} \mu_h(\eta \mathrm{M}^j)\right)^{\frac{1}{p}}. $$ \end{proposition} Once we have such estimates for paraboloids on the boundary, the ideas will follow as the ones of Caffarelli's original proof for the interior estimates (cf. \cite{CC}). In effect, consider the distribution function of $\Theta$, i,e., $\mu_{\Theta, \Omega^{\prime}}(t) \defeq \Leb\left(\left\{x \in \Omega^{\prime} : \, \Theta(x) > t\right\}\right)$. Then, one check that $\mu_{\Theta, \Omega^{\prime}}(t) = \Leb(\mathcal{A}_t(u,\Omega^{\prime}))$. Moreover, as an application of Proposition \ref{P1} we have $$ \Theta(u,\Omega^{\prime}, \cdot) \in L^p(\Omega^{\prime}) \Longleftrightarrow \sum_{j=1}^{\infty} \mathrm{M}^{pj} \Leb\left(\mathcal{A}_{\eta \mathrm{M}^j(u,\Omega^{\prime})}\right) < \infty. $$ Finally, from \cite[Proposition 1.1]{CC} we conclude $$ \|D^2 u\|_{L^p(\Omega^{\prime})} \le \mathrm{C}(\eta,\mathrm{M}, p) \left(\Leb(\Omega) + \sum_{j=1}^{\infty} \mathrm{M}^{pj} \Leb\left(\mathcal{A}_{\eta \mathrm{M}^j(u,\Omega^{\prime})}\right)\right). 
$$ Therefore, to obtain the desired $W^{2,p}$-estimates in $\Omega^{\prime}$ it is enough to prove the corresponding summability for $\displaystyle \sum_{j=1}^{\infty} \mathrm{M}^{pj} \Leb\left(\mathcal{A}_{\eta \mathrm{M}^j(u,\Omega^{\prime})}\right)$. We gather only a few elements involved in the proof of Theorem \ref{T1}, despite the fact that such results are well-known in the literature. We observe that such an \textit{a priori} estimate is independent of further assumptions on the operator $F$, and it follows merely from uniform ellipticity and the integrability of the source term. Thus, the proof is omitted in what follows. See \cite[Lemma 7.8]{CC} and \cite[Lemma 2.7]{Winter} for details. \begin{proposition}[{\bf Power Decay on the boundary}]\label{Prop2.12} Let $u\in \mathcal{S}(\lambda,\Lambda,f)$ in $\mathrm{B}^{+}_{12\sqrt{n}}\subset \Omega\subset\mathbb{R}^{n}_{+}$, $u\in C^0(\Omega)$ and $\Vert u\Vert_{L^{\infty}(\Omega)}\leq 1$. Then, there exist universal constants $\mathrm{C}>0$ and $\mu>0$ such that $\Vert f\Vert_{L^{n}(\mathrm{B}^{+}_{12\sqrt{n}})}\le 1$ implies $$ \Leb\left(\mathcal{A}_t(u,\Omega) \cap \left(\mathcal{Q}^{n-1}_1 \times (0,1) + x_0\right)\right) \le \mathrm{C}.t^{-\mu} $$ for any $x_0 \in \mathrm{B}_{9\sqrt{n}} \cap \overline{\mathbb{R}^n_+}$ and $t>1$. \end{proposition} Proposition \ref{Prop2.12} yields key information on the measure of $\mathcal{G}_{\mathrm{M}}(u,\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)$, provided $$ \mathcal{G}_1(u,\Omega) \cap \left((\mathcal{Q}^{n-1}_2 \times (0,2)) + x_0\right)\not= \emptyset. $$ The proof of Theorem \ref{T1} will be split into two main steps: Firstly, we will investigate equations governed by operators without dependence on the function or gradient entry. In the sequel, we reduce the analysis for operators with full dependence. The first step towards such estimates is the following Proposition. \begin{proposition}\label{T-flat} Let $u$ be a normalized viscosity solution of \begin{equation*} \label{mens} \left\{ \begin{array}{rclcl} F(D^2u, x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_1, \end{array} \right. \end{equation*} where $\beta,\gamma, g \in C^{1,\alpha}(\mathrm{T}_1)$ with $\beta \cdot \overrightarrow{\textbf{n}} \ge \mu_0$ for some $\mu_0 >0$, $\gamma \le 0$ and $f \in L^p(\mathrm{B}^+_1)\cap C^{0}(\mathrm{B}^{+}_{1})$, for $n\le p < \infty$. Further, assume that assumptions (A1)-(A4) are in force. Then, there exist positive constants $\psi_{0}$ and $r_0$ depending on $n$, $\lambda$, $\Lambda$, $p$, $\mu_0$, $\alpha$, $\|\beta\|_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}$ and $\|\gamma\|_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}$ such that if $$ \left(\intav{\mathrm{B}_r(x_0) \cap \mathrm{B}^+_1} \psi_{F^{\sharp}}(x, x_0)^p dx\right)^{\frac{1}{p}} \le \psi_0 $$ for any $x_0 \in \mathrm{B}^+_1$ and $r \in (0, r_0)$, then $u \in W^{2, p}\left(\overline{\mathrm{B}^+_{\frac{1}{2}}}\right)$ and $$ \|u\|_{W^{2, p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)} \le \mathrm{C} \cdot \left( \|u\|_{L^{\infty}(\mathrm{B}^+_1)} + \|f\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right), $$ where $\mathrm{C}=\mathrm{C}(n,\lambda,\Lambda,\mu_0,p, \Vert \gamma\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}, \alpha,r_{0})>0$. \end{proposition} In order to prove Proposition \ref{T-flat} we need a number of auxiliary results: \begin{proposition}\label{Prop.4.6} Assume that assumptions (A1)-(A4) there hold. 
Let $\mathrm{B}^{+}_{14 \sqrt{n}} \subset \Omega \subset \mathbb{R}^n_+$ and $u$ be a viscosity solution of $$ \left\{ \begin{array}{rclcl} F_{\tau}(D^2u,x) &=& f(x) & \mbox{in} & \mathrm{B}^+_{14\sqrt{n}},\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_{14 \sqrt{n}}. \end{array} \right. $$ Assume further that $\max \left\{ \|f\|_{L^n(\mathrm{B}^+_{14\sqrt{n}})}, \,\,\tau \right\} \le \epsilon$. Finally, suppose for some $\tilde{x}_0 \in \mathrm{B}_{9\sqrt{n}} \cap \{x_n\ge0\}$ the following $$ \mathcal{G}_1(u,\Omega) \cap \left((\mathcal{Q}^{n-1}_2 \times (0,2)) + \tilde{x}_0\right) \not= \emptyset. $$ Then, $$ \Leb\left(\mathcal{G}_{\mathrm{M}}(u;\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)\right) \ge 1-\epsilon_0, $$ where $x_0 \in \mathrm{B}_{9 \sqrt{n}} \cap \{x_n \ge 0\}$, $\mathrm{M}>1$ depends only on $n$, $\lambda$, $\Lambda$, $\mu_0$, $\alpha$, $\|\gamma\|_{C^{1,\alpha}(\overline{\mathrm{T}}_{14 \sqrt{n}})}$, $\mathrm{C}_{1}$ (from Assumption A4) and $\|g\|_{C^{1,\alpha}(\mathrm{T}_{14 \sqrt{n}})}$, and $\epsilon_0 \in (0,1)$. \end{proposition} \begin{proof} Consider $x_1 \in \mathcal{G}_1(u,\Omega) \cap \left(\mathcal{Q}^{n-1}_2 \times (0,2) + \tilde{x}_0\right) $. Then, there exist paraboloids with an opening $t=1$ touching $u$ at $x_1$ from above and below, this is, $$ -\frac{1}{2}|x-x_1|^2 \le u(x)-\ell(x) \le \frac{1}{2}|x-x_1|^2 $$ for $x \in \Omega$ and an affine function $\ell$. Now, we set $$ v(x) = \frac{u(x)-\ell(x)}{\mathrm{C}_{\ast}}, $$ where $\mathrm{C}_{\ast}>0$ is a dimensional constant selected (large enough) so that $ \|v\|_{L^{\infty}(\mathrm{B}^+_{12\sqrt{n}})} \le 1$ and $$ -|x|^2 \le v(x) \le |x|^2 \quad \text{in} \quad \Omega \setminus \mathrm{B}^{+}_{12\sqrt{n}}. $$ Next, we observe that $v$ is a viscosity solution to $$ \left\{ \begin{array}{rclcl} \tilde{F}_{\tau}(D^2v,x) &=& \tilde{f}(x)& \mbox{in} & \mathrm{B}^+_{14\sqrt{n}},\\ \mathcal{B}(x,v,Dv)&=&\frac{1}{\mathrm{C}_{\ast}}[g(x)-\beta \cdot D\ell-\gamma\ell]& \mbox{on} & \mathrm{T}_{14 \sqrt{n}}. \end{array} \right. $$ where $$ \tilde{F}_{\tau}(\mathrm{X},x) \defeq \frac{1}{\mathrm{C}_{\ast}}F_{\tau}(\mathrm{C}_{\ast} \mathrm{X}, x) \quad \text{and} \quad \tilde{f}(x) \defeq \frac{1}{\mathrm{C}_{\ast}} f(x) $$ Now, we consider $\mathfrak{h}$ the function $\epsilon$-close to $u$, coming from Lemma \ref{Approx}, this is, let $\mathfrak{h} \in C^{1,1}(\mathrm{B}^+_{13\sqrt{n}}) \cap C^0(\overline{\mathrm{B}^{+}_{13\sqrt{n}}})$ be solution of $$ \left\{ \begin{array}{rclcl} \tilde{F}^{\sharp}(D^2 \mathfrak{h},0) &=& 0 & \mbox{in} & \mathrm{B}^+_{13\sqrt{n}},\\ \mathcal{B}(x,\mathfrak{h}, D\mathfrak{h}) &=&\frac{1}{\mathrm{C}_{\ast}}[g(x)-\beta(x) \cdot D\ell(x)-\gamma(x)\ell(x)]& \mbox{on} & \mathrm{T}_{13\sqrt{n}}. \end{array} \right. $$ such that $$ \|v-\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^+_{13\sqrt{n}})} \le \hat{\delta}<1 $$ Notice that $\beta \cdot D \ell \in C^{1,\alpha}(\overline{\mathrm{T}}_{14 \sqrt{n}})$ since $\beta \in C^{1,\alpha}(\overline{T}_{14\sqrt{n}})$ and $D \ell$ is a constant vector field. Hence, the A.B.P. 
Maximum Principle (Lemma \ref{ABP-fullversion}) assures that \begin{eqnarray*} \|\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^+_{13\sqrt{n}})} &\le& \| v\|_{L^{\infty}(\partial \mathrm{B}^{+}_{13\sqrt{n}}\setminus \mathrm{T}_{13\sqrt{n}})}+\frac{\mathrm{C}}{\mathrm{C}_{\ast}}\left[\|g\|_{L^{\infty}(\overline{\mathrm{T}}_{13\sqrt{n}})}+|D\ell|\Vert \beta\Vert_{L^{\infty}(\overline{\mathrm{T}}_{14\sqrt{n}})}+\Vert \gamma\ell\Vert_{L^{\infty}(\overline{\mathrm{T}}_{13\sqrt{n}})}\right]\\ &\le& \mathrm{C}(n,\Vert\ell\Vert_{L^{\infty}(\overline{\mathrm{T}}_{14\sqrt{n}})},\Vert \gamma\Vert_{C^{1,\alpha}(\mathrm{T}_{14\sqrt{n}})},\|g\|_{C^{1,\alpha}(\overline{\mathrm{T}}_{14\sqrt{n}})})\\ &\defeq &\widetilde{\mathrm{C}} \end{eqnarray*} In consequence, condition (A4) ensures that $$ \|\mathfrak{h}\|_{C^{1,1}(\overline{\mathrm{B}^+}_{12\sqrt{n}})} \leq \mathrm{C}(\mathrm{C}_{1},\widetilde{\mathrm{C}})\Longrightarrow \mathcal{A}_{\mathrm{N}}\left(\mathfrak{h},\mathrm{B}^{+}_{12\sqrt{n}}\right) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)=\emptyset $$ for some $\mathrm{N}=\mathrm{N}(\mathrm{C}_{1},\widetilde{C})>1$, and we extended $\mathfrak{h} \Big|_{\mathrm{B}^+_{12\sqrt{n}}}$ (continuously) outside $\mathrm{B}^{+}_{12\sqrt{n}}$ such that $\mathfrak{h}=v$ outside $\mathrm{B}^+_{13\sqrt{n}}$ and $\| v-\mathfrak{h}\|_{L^{\infty}(\Omega)} = \|v-\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^+_{12\sqrt{n}})}$. For this reason, $$ \|v-\mathfrak{h}\|_{L^{\infty}(\Omega)} \le \overline{\mathrm{C}}(\mathrm{C}_{1},\widetilde{\mathrm{C}}) $$ and $$ -(\overline{\mathrm{C}}(\mathrm{C}_{1},\widetilde{\mathrm{C}})+|x|^2) \le \mathfrak{h}(x) \le \overline{\mathrm{C}}(\mathrm{C}_{1},\widetilde{\mathrm{C}})+|x|^2 \quad \text{in} \quad \Omega \setminus \mathrm{B}^+_{12\sqrt{n}}. $$ Therefore, there exists $\mathrm{M}_0 = \mathrm{M}_0(\mathrm{C}_{1},\tilde{\mathrm{C}})\ge \mathrm{N}>1$, for which $$ \mathcal{A}_{\mathrm{M}_0}(\mathfrak{h},\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)=\emptyset. $$ Summarizing, \begin{equation} \label{Sub} \left( \mathcal{Q}^{n-1}_1 \times (0,1) + x_0 \right) \subset \mathcal{G}_{\mathrm{M}_0}(\mathfrak{h},\Omega). \end{equation} Now, we define $$ w(x) \defeq \frac{1}{2\mathrm{C}\hat{\delta}}(v-\mathfrak{h})(x). $$ Therefore, $w$ fulfills the assumptions of Proposition \ref{Prop2.12}. Thus, we obtain for $t>1$ the following $$ \Leb\left(\mathcal{A}_t(w,\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)\right) \le \mathrm{C}t^{-\mu} \quad (\text{for a} \,\,\,\mu \,\,\text{universal}). $$ By using $\mathcal{A}_{2\mathrm{M}_0}(u) \subset \mathcal{A}_{\mathrm{M}_0}(w) \cup \mathcal{A}_{\mathrm{M}_0}(\mathfrak{h})$ and \eqref{Sub} we conclude that $$ \Leb\left(\mathcal{G}_{2\mathrm{M}_0}(v-\mathfrak{h},\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)\right) \ge 1-\mathrm{C} \epsilon^{-\mu}. $$ Finally, we conclude that $$ \Leb\left(\mathcal{G}_{2\mathrm{M}_0}(v,\Omega) \cap \left((\mathcal{Q}^{n-1}_1 \times (0,1)) + x_0\right)\right) \ge 1-\mathrm{C} \epsilon^{-\mu}. $$ The proof is completed by choosing $\epsilon\ll 1$ in an appropriate manner and by setting $\mathrm{M} \equiv 2\mathrm{M}_0$. \end{proof} \begin{lemma} \label{lemma4.8} Given $\epsilon_0 \in (0, 1)$ and let $u$ be a normalized viscosity solution to $$ \left\{ \begin{array}{rclcl} F_{\tau}(D^2u,x) &=& f(x) & \mbox{in} & \mathrm{B}^+_{14\sqrt{n}},\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_{14 \sqrt{n}}. \end{array} \right. 
$$ Assume that (A1)-(A4) hold true and that $f$ is extended by zero outside $\mathrm{B}^+_{14\sqrt{n}}$. For $x \in \mathrm{B}_{14 \sqrt{n}}$, let $$ \max \left\{\tau, \|f\|_{L^n(\mathrm{B}_{14 \sqrt{n}})} \right\} \le \epsilon $$ for some $\epsilon >0$ depending only on $n, \epsilon_0, \lambda, \Lambda, \mu_0, \alpha$. Then, for $k \in \mathbb{N}\setminus \{0\}$ we define \begin{eqnarray*} \mathcal{A} & \defeq & \mathcal{A}_{\mathrm{M}^{k+1}}(u, \mathrm{B}^+_{14 \sqrt{n}}) \cap \left(\mathcal{Q}^{n-1}_1 \times (0,1)\right)\\ \mathcal{B} &\defeq & \left(\mathcal{A}_{\mathrm{M}^k}(u, \mathrm{B}^+_{14\sqrt{n}}) \cap \left(Q^{n-1}_1 \times (0,1)\right)\right)\cup \left\{x \in \mathcal{Q}^{n-1}_1 \times (0,1); \mathcal{M}(f^n) \ge (\mathrm{C}_0\mathrm{M}^k)^n \right\}, \end{eqnarray*} where $\mathrm{M} = \mathrm{M}(n, \mathrm{C}_0)>1$. Then, $$ \Leb(\mathcal{A}) \le \epsilon_0(n, \epsilon, \lambda, \Lambda)\Leb(\mathcal{B}). $$ \end{lemma} \begin{proof} We will use Lemma \ref{Cal-Zyg}. Observe that $\mathcal{A} \subset \mathcal{B} \subset \left(\mathcal{Q}^{n-1}_1 \times (0,1)\right)$ and of the Proposition \ref{Prop.4.6} we conclude that $\Leb(\mathcal{A}) \le \delta <1$ provided we choice $\delta=\epsilon_0$. Thus, it remains to show the following implication: for dyadic cubes $\mathcal{Q}$ $$ \Leb\left(\mathcal{A} \cap \mathcal{Q}\right) > \epsilon_0 \Leb(\mathcal{Q}) \quad \Rightarrow \quad \tilde{\mathcal{Q}} \subset \mathcal{B}. $$ For that purpose, assume that for some $i \ge 1$, $\mathcal{Q}= \left(\mathcal{Q}^{n-1}_{\frac{1}{2^{i}}} \times \left(0, \frac{1}{2^{i}}\right) \right) + x_0$ is a dyadic cube with predecessor $\tilde{\mathcal{Q}} = \left(\mathcal{Q}^{n-1}_{\frac{1}{2^{i-1}}} \times \left(0, \frac{1}{2^{i-1}}\right)\right) + \tilde{x}_0$. Now, we assume that $\mathcal{Q}$ satisfies \begin{equation} \label{(14)} \Leb\left(\mathcal{A} \cap \mathcal{Q}\right)= \Leb\left(\mathcal{A}_{\mathrm{M}^{k+1}}(u, \mathrm{B}^+_{14\sqrt{n}}) \cap \mathcal{Q}\right) > \epsilon_0 \Leb(\mathcal{Q}), \end{equation} however the inclusion $\tilde{\mathcal{Q}} \subseteq \mathcal{B}$ does not hold. In consequence, there must be $x_1 \in \tilde{\mathcal{Q}}\setminus \mathcal{B}$, i.e., \begin{equation}\label{(15)} x_1 \in \tilde{\mathcal{Q}} \cap \mathcal{G}_{\mathrm{M}^k}(u, \mathrm{B}^+_{14\sqrt{n}}) \quad \text{and} \quad \mathcal{M}(f^n)(x_1) < (\mathrm{C}_0 \mathrm{M}^k)^n. \end{equation} In the sequel, we must split the analysis into two cases: \begin{enumerate} \item[] \textbf{Case 1:} \, If $|x_0-(x^{\prime}_0, 0)| < \frac{1}{2^{i-3}}\sqrt{n}$. In this case, we define: $\mathrm{T}(y) \defeq (x^{\prime}_0,0) + \frac{1}{2^i}.y$ and $\tilde{u}: \tilde{\Omega} \rightarrow \mathbb{R}$, where $\tilde{\Omega} = \mathrm{T}^{-1}(\Omega)$, given by $\tilde{u}(y) \defeq \frac{2^{2i}}{\mathrm{M}^k}u(\mathrm{T}(y))$. Now, observe that $\mathcal{Q} \subset \left(\mathcal{Q}^{n-1}_1 \times (0,1)\right)$ implies that $\mathrm{B}^+_{14\sqrt{n}/2^i}(x^{\prime}_0,0) \subset \mathrm{B}^+_{14\sqrt{n}}$. Furthermore, such a $\tilde{u}$ is a viscosity solution to $$ \left\{ \begin{array}{rclcl} \tilde{F}_{\tau}(D^2\tilde{u},y) &=& \tilde{f}(x) & \mbox{in} & \mathrm{B}^+_{14\sqrt{n}},\\ \tilde{\mathcal{B}}(y,\tilde{u},D\tilde{u}) &=&\tilde{g}(x) & \mbox{on} & \mathrm{T}_{14\sqrt{n}} , \end{array} \right. 
$$ where $$ \left\{ \begin{array}{rcl} \tilde{F}_{\tau}(\mathrm{X},y)& \defeq& \frac{\tau}{\mathrm{M}^k} F\left(\frac{\mathrm{M}^k}{\tau} \mathrm{X}, \mathrm{T}(y)\right),\\ \tilde{f}(y)&\defeq& \frac{1}{\mathrm{M}^k} f(\mathrm{T}(y)),\\ \tilde{\mathcal{B}}(y,s, \overrightarrow{v})&\defeq&\tilde{\beta}(y)\cdot \overrightarrow{v}+\tilde{\gamma}(y)s,\\ \tilde{\beta}(y) &\defeq& \beta(\mathrm{T}(y)), \\ \tilde{\gamma}(y)&\defeq& \frac{1}{2^{i}}\gamma(\mathrm{T}(y)),\\ \tilde{g}(y)&\defeq& \frac{2^{i}}{\mathrm{M}^{k}}g(\mathrm{T}(y)) \end{array} \right. $$ Now notice that $\tilde{F}^{\sharp}$ fulfills $C^{1,1}$-estimates with the same constant as $F^{\sharp}$. Moreover, from \eqref{(15)} we obtain $$ \|\tilde{f}\|_{L^n(\mathrm{B}^+_{14\sqrt{n}})} \le \frac{2^{i}}{\mathrm{M}^{k}} \left(\int_{\mathcal{Q}_{\frac{28\sqrt{n}}{2^i}(x_1)}}|f(x)|^{n}dx\right)^{\frac{1}{n}} \le 2^n \mathrm{C}_0; $$ As a result, $\|\tilde{f}\|_{L^n(\mathrm{B}^+_{14\sqrt{n}})} \le \epsilon$ provided we select $\mathrm{C}_0$ small enough in \eqref{(15)}. In addition, from \eqref{(15)} we conclude that $$ \mathcal{G}_1(\tilde{u}, \mathrm{T}^{-1}(\mathrm{B}^+_{14\sqrt{n}})) \cap \left(\mathcal{Q}^{n-2}_2 \times (0,2) + 2^i \left(\tilde{x}_0 - (x^{\prime}_0,0)\right) \right) \not= \emptyset. $$ Furthermore, $|x_0-\tilde{x}_0| \le \frac{1}{2^i} \sqrt{n}$ implies $|2^i(\tilde{x}_0 - (x^{\prime}_0,0))| < 9\sqrt{n}$. Therefore, we have shown that the assumptions of Proposition \ref{Prop.4.6} are true. Hence, it follows: $$ \Leb\left( \mathcal{G}_{\mathrm{M}}(\tilde{u}, \mathrm{T}^{-1}(\mathrm{B}^+_{14\sqrt{n}})) \cap \left(\left(\mathcal{Q}^{n-1}_1 \times (0,1)\right) + 2^i(x_0-(x^{\prime}_0, 0))\right)\right) \ge 1-\epsilon_0. $$ Thus, $$\Leb\left(\mathcal{G}_{\mathrm{M}^{k+1}}(u; \mathrm{B}^+_{14\sqrt{n}}) \cap \mathcal{Q}\right) \ge (1-\epsilon_0) \Leb(\mathcal{Q}), $$ which contradicts \eqref{(14)}. \item[]\textbf{Case 2:}\, If $|x_0-(x^{\prime}_0,0)| \ge \frac{1}{2^{i-3}} \sqrt{n}$. In this case, we conclude that $\mathrm{B}_{\frac{\sqrt{n}}{2^{i-3}}}\left(x_0+\frac{1}{2^{i+1}}e_n\right) \subset \mathrm{B}^+_{8 \sqrt{n}}$, where $e_n$ is the $n^{\underline{th}}$ unit vector of canonical base. Now, by defining the transformation: $\mathrm{T}(y) \defeq \left(x_0 +\frac{1}{2^{i+1}}e_n\right) + \frac{1}{2^i}y$, we proceed similarly to the first part of the proof. Finally, applying \cite[Lemma 5.2]{PT} instead of Proposition \ref{Prop.4.6} we obtain a contradiction to \eqref{(14)}. This completes the proof of the Lemma. \end{enumerate} \end{proof} In the sequel, we will establish the proof of Proposition \ref{T-flat}. \begin{proof}[{\bf Proof of Proposition \ref{T-flat}}] Fix $x_0 \in \mathrm{B}_{1/2} \cap \{x_n \ge 0\}$. When $x_0 \in \mathrm{T}_{\frac{1}{2}}$, $0 < r < \frac{1-|x_0|}{14\sqrt{n}}$ define: $$ \kappa \defeq \frac{\epsilon r}{\epsilon r^{-1} \|u\|_{L^{\infty}(\mathrm{B}^+_{14r\sqrt{n}}(x_0))} + \|f\|_{L^{n}(\mathrm{B}^+_{14r\sqrt{n}}(x_0))}+\epsilon r^{-1}\Vert g\Vert_{C^{1,\alpha}(\mathrm{T}_{14r\sqrt{n}}(x_{0}))}} $$ where the constant $\epsilon=\epsilon(n,\epsilon_0,\lambda,\Lambda,p, \mu_0, \alpha, \|\beta\|_{C^{1,\alpha}(\mathrm{T}_1)})$ is as the one in Proposition \ref{Prop.4.6} and $\epsilon_0 \in (0, 1)$ will be determined \textit{a posteriori}. Now, we define: $\tilde{u}(y) \defeq \frac{\kappa}{r^{2}}u(x_0+ry)$. 
Then, $\tilde{u}$ is a normalized viscosity solution to $$ \left\{ \begin{array}{rclcl} \tilde{F}(D^2 \tilde{u}, y) &=& \tilde{f}(x) & \mbox{in} & \mathrm{B}^+_{14\sqrt{n}},\\ \tilde{\mathcal{B}}(y,\tilde{u},D\tilde{u})&=&\tilde{g}(x) & \mbox{on} & \mathrm{T}_{14\sqrt{n}} . \end{array} \right. $$ where $$ \left\{ \begin{array}{rcl} \tilde{F}(\mathrm{X}, y) &\defeq& \kappa F\left(\frac{1}{\kappa} \mathrm{X}, ry+x_0\right) \\ \tilde{f}(y) &\defeq& \kappa f(x_0+ry)\\ \tilde{\mathcal{B}}(y,s, \overrightarrow{v})&\defeq& \tilde{\beta}(y)\cdot \overrightarrow{v}+\tilde{\gamma}(y)s\\ \tilde{\beta}(y) &\defeq& \beta(x_0+ry)\\ \tilde{\gamma}(y) &\defeq& r\gamma(x_{0}+ry)\\ \tilde{g}(y)&\defeq& \frac{\kappa}{r}g(x_{0}+ry). \end{array} \right. $$ Hence, $\tilde{F}$ fulfills (A1)-(A4). Moreover, \begin{eqnarray} \label{(16)} \|\tilde{f}\|_{L^n\left(\mathrm{B}^+_{14\sqrt{n}}\right)} &=& \frac{\kappa}{r} \|f\|_{L^n\left(\mathrm{B}^+_{14r\sqrt{n}}\right)} \le \epsilon \,\,\,\, \textrm{and} \,\,\, \|\tilde{\beta}\|_{C^{1,\alpha}(\mathrm{T}_{14\sqrt{n}})} \leq 1, \end{eqnarray} which ensures that the hypotheses of Lemma \ref{lemma4.8} are in force. Now, let $\mathrm{M}>0$ and $\mathrm{C}_0>0$ be as in the Lemma \ref{lemma4.8} and choose $\epsilon_0 \defeq \frac{1}{2\mathrm{M}^p}$. Now, for $k \geq 0$ we define \begin{eqnarray*} \alpha_k&\defeq & \Leb\left(\mathcal{A}_{\mathrm{M}^k}(\tilde{u}, \mathrm{B}^+_{14\sqrt{n}}) \cap \left(\mathcal{Q}^{n-1}_1 \times (0,1)\right)\right)\\ \beta_k &\defeq& \Leb\left( \left\{ x \in \left(\mathcal{Q}^{n-1}_1 \times (0,1)\right); \,\, \mathcal{M}(\tilde{f}^n)(x) \ge (\mathrm{C}_0\mathrm{M}^k)^n\right\} \right). \end{eqnarray*} As a result, Lemma \ref{lemma4.8} implies that $\alpha_{k+1} \le \epsilon_0 \left(\alpha_k+\beta_k\right)$ and thus \begin{equation} \label{(17)} \alpha_k \le \epsilon^k_0 + \sum_{i=0}^{k-1} \epsilon^{k-i}_0 \beta_i. \end{equation} On the other hand, from assumption (A2), we have $\tilde{f}^n \in L^{\frac{p}{n}}(\mathrm{B}^+_{14\sqrt{n}})$, consequently $\mathcal{M}(\tilde{f}^n) \in L^{\frac{p}{n}}(\mathrm{B}^+_{14\sqrt{n}})$. Thus, by \eqref{(16)} $$ \|\mathcal{M}(\tilde{f}^n)\|_{L^{\frac{n}{p}}(\mathrm{B}^+_{14\sqrt{n}})} \le \mathrm{C}(n,p) \|\tilde{f}^n\|_{L^{\frac{n}{p}}(\mathrm{B}^+_{14\sqrt{n}})} \le \mathrm{C}(n,p) \|\tilde{f}\|^n_{L^p(\mathrm{B}^+_{14\sqrt{n}})} \le \mathrm{C}. $$ Therefore, by Proposition \ref{P1} we obtain \begin{equation} \label{(18)} \sum_{k=1}^{\infty} \mathrm{M}^{p k} \beta_k \le \mathrm{C}(n,p). \end{equation} Finally, taking into account the choice of $\epsilon_0$, \eqref{(17)} and \eqref{(18)} we conclude that $$ \sum_{k=1}^{\infty} \mathrm{M}^{pk} \alpha_k \le \sum_{k=1}^{\infty} 2^{-k} + \left(\sum_{k=0}^{\infty}\mathrm{M}^{pk}\beta_k\right) \cdot \left(\sum_{k=1}^{\infty} \mathrm{M}^{pk} \epsilon^k_0\right) \le \mathrm{C}(n,p), $$ which implies that $\|D^{2} \tilde{u}\|_{L^p(\overline{\mathrm{B}^+_{1/2}})} \le \mathrm{C}(n,p,\mathrm{M})$ and consequently \begin{equation*} \label{(19)} \|D^2 u\|_{L^p\left(\overline{\mathrm{B}^+_{\frac{r}{2}}(x_0)}\right)} \le \mathrm{C}(n,\lambda,\Lambda,p,r) \left(\|u\|_{L^{\infty}(\mathrm{B}^+_1)}+ \|f\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\mathrm{T}_{1})}\right). \end{equation*} On the other hand, if $x_0 \in \mathrm{B}^+_{1/2}$, we can use the result of interior estimates (cf. \cite[Theorem 6.1]{PT}, see also \cite{daSR19}). Finally, by combining interior and boundary estimates, we obtain the desired results by using a standard covering argument. 
This completes the proof of the Proposition. \end{proof} \begin{corollary} Let $u$ be a bounded viscosity solution of $$ \left\{ \begin{array}{rclcl} F(D^2u,Du,u, x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_1, \end{array} \right. $$ where $\beta,\gamma,g \in C^{1,\alpha}(\mathrm{T}_1)$ with $\beta \cdot \overrightarrow{\textbf{n}} \ge \mu_0$ for some $\mu_0 >0$, $\gamma \le 0$. Further, assume that (A1)-(A4) are in force. Then, there exists a constant $\mathrm{C}>0$ depending on $n$, $\lambda$, $\Lambda$, $p$, $C_{1}$, such that $u \in W^{2, p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)$ and $$ \|u\|_{W^{2, p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)} \le \mathrm{C} \cdot \left( \|u\|_{L^{\infty}(\mathrm{B}^+_1)} + \|f\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right). $$ \end{corollary} \begin{proof} Notice that $u$ is also a viscosity solution of \begin{equation*} \left\{ \begin{array}{rclcl} \tilde{F}(D^2u, x) &=& \tilde{f}(x) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_1 . \end{array} \right. \end{equation*} where $\tilde{F}(\mathrm{X},x) \defeq F(\mathrm{X},0,0,x)$ and $\tilde{f}$ is a function satisfying \begin{eqnarray*} |\tilde{f}|\leq \sigma|Du|+\xi|u|+|f|. \end{eqnarray*} Thus, we can apply Proposition \ref{T-flat} and conclude that \begin{eqnarray}\label{mens2} \|u\|_{W^{2, p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)} \le \mathrm{C} \cdot \left( \|u\|_{L^{\infty}(\mathrm{B}^+_1)} + \|\tilde{f}\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right). \end{eqnarray} Now, using the same reasoning as in \cite[Corollary 4.5]{BJ} we conclude that $u\in W^{1,p}(\mathrm{B}_{1}^{+})$ and \begin{equation}\label{mens3} \Vert u\Vert_{W^{1,p}(\mathrm{B}_{1}^{+})}\leq \mathrm{C}\cdot (\Vert u\Vert_{L^{\infty}(\mathrm{B}^{+}_{1})}+\Vert f\Vert_{L^{p}(\mathrm{B}^{+}_{1})}). \end{equation} Finally, by combining (\ref{mens2}) and (\ref{mens3}) we complete the desired estimate. \end{proof} \begin{corollary}\label{Cor} Let $u$ be a bounded $L^{p}-$viscosity solution of $$ \left\{ \begin{array}{rclcl} F(D^2u, Du, u, x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u,Du)&=& g(x) & \mbox{on} & \mathrm{T}_1, \end{array} \right. $$ where $\beta,\gamma, g \in C^{1,\alpha}(\mathrm{T}_1)$ with $\beta \cdot \overrightarrow{\textbf{n}} \ge \mu_0$ for some $\mu_0 >0$, $\gamma \le 0$ and $f \in L^p(\mathrm{B}^+_1)$, for $n \leq p < \infty$. Further, assume that $F^{\sharp}$ satisfies (A4) and $F$ fulfills the condition $(SC)$. Then, there exist positive constants $\beta_0=\beta_0(n,\lambda,\Lambda,p)$, $r_0 = r_0(n, \lambda, \Lambda, p)$ and $\mathrm{C}=\mathrm{C}(n,\lambda,\Lambda,p,r_0)>0$, such that if $$ \left(\intav{\mathrm{B}_r(x_0) \cap \mathrm{B}^+_1} \psi_{F^{\sharp}}(x, x_0)^p dx\right)^{\frac{1}{p}} \le \psi_0 $$ for any $x_0 \in \mathrm{B}^+_1$ and $r \in (0, r_0)$, then $u \in W^{2, p}\left(\overline{\mathrm{B}^+_{\frac{1}{2}}}\right)$ and $$ \|u\|_{W^{2, p}\left(\overline{\mathrm{B}^+_{\frac{1}{2}}}\right)} \le \mathrm{C} \cdot\left( \|u\|_{L^{\infty}(\mathrm{B}^+_1)} +\|f\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right). $$ \end{corollary} \begin{proof} It is sufficient to prove the result for equations with no dependence on $Du$ and $u$ (see \cite[Theorem 4.3]{Winter} for details). 
We will approximate $f$ in $L^p$ by functions $f_j \in C^{\infty}(\overline{\mathrm{B}^+_1}) \cap L^p(\mathrm{B}^+_1)$ such that $f_{j}\to f$ in $L^{p}(\mathrm{B}^{+}_{1})$, and also approximate $g$ by a sequence $(g_{j})$ in $C^{1,\alpha}(\mathrm{T}_{1})$ with the property that $g_{j}\to g$ in $C^{1,\alpha}(\mathrm{T}_{1})$. By Theorems of Uniqueness and Existence (Theorem \ref{Unicidade} and \ref{Existencia}), there exists a sequence of functions $ u_{j}\in C^{0}(\overline{\mathrm{B}^{+}_{1}})$, which they are viscosity solutions of following family of PDEs: $$ \left\{ \begin{array}{rclcl} F(D^2u_j, x) &=& f_j(x) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u_{j},Du_{j})&=& g_{j}(x) & \mbox{on} & \mathrm{T}_{1}\\ u_{j}(x)&=&u(x) & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}. \end{array} \right. $$ Therefore, the assumptions of the Theorem \ref{T-flat} are in force. As a result, $$ \|u_j\|_{W^{2,p}\left(\overline{\mathrm{B}^{+}_{\frac{1}{2}}}\right)} \le \mathrm{C}(\verb"universal").\left( \|u_j\|_{L^{\infty}(\mathrm{B}^+_1)} + \|f_j\|_{L^p(\mathrm{B}^+_1)}+\Vert g_{j}\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right), $$ for a constant $\mathrm{C}>0$. Furthermore, a standard covering argument also yields $u_j \in W^{2,p}(\mathrm{B}^+_1)$. From Lemma $\ref{ABP-fullversion}$, $(u_j)_{j \in \mathbb{N}}$ is uniformly bounded in $W^{2,p}(\overline{\mathrm{B}^+_{\rho}})$ for $\rho \in (0, 1)$. Once again, we can apply Lemma \ref{ABP-fullversion} and obtain $$ \|u_j-u_k\|_{L^{\infty}\left(\mathrm{B}^+_{1}\right)} \le \mathrm{C}(n,\lambda,\Lambda, \mu_0)(\|f_j-f_k\|_{L^p(\mathrm{B}^+_1)}+\Vert g_{j}-g_{k}\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}). $$ Thus, $u_j \to u_{\infty} \quad \textrm{in} \quad C^0(\overline{\mathrm{B}^+_1})$. Moreover, since $(u_j)_{j \in \mathbb{N}}$ is bounded in $W^{2,p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)$ we obtain $u_j \to u_{\infty}$ weakly in $W^{2,p}\left(\mathrm{B}^+_{\frac{1}{2}}\right)$. Thus, $$ \|u_{\infty}\|_{W^{2,p}(\overline{\mathrm{B}^{+}_{1/2}})} \le \mathrm{C} \cdot\left( \|u_{\infty}\|_{L^{\infty}(\mathrm{B}^+_1)} + \|f\|_{L^p(\mathrm{B}^+_1)}+\Vert g\Vert_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right). $$ Finally, stability results (see Lemma \ref{Est}) ensure that $u_{\infty}$ is an $L^{p}-$viscosity solution to $$ \left\{ \begin{array}{rclcl} F(D^2u_{\infty}, x) &=& f(x)& \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x,u_{\infty},Du_{\infty})&=& g(x) & \mbox{on} & \mathrm{T}_1 \\ u_{\infty}(x)&=& u(x) & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}. \end{array} \right. $$ Thus, $w \defeq u_{\infty}-u$ fulfills $$ \left\{ \begin{array}{rclcl} w\in S(\frac{\lambda}{n}, \Lambda,0) & \mbox{in} & \mathrm{B}^+_1,\\ \mathcal{B}(x, w, Dw)= 0 & \mbox{on} & \mathrm{T}_1 \\ w= 0 & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}. \end{array} \right. $$ which by Lemma $\ref{ABP-fullversion}$ we can conclude that $w=0$ in $\overline{\mathrm{B}^{+}_{1}}\setminus \mathrm{T}_{1}$. By continuity, $w=0$ in $\overline{\mathrm{B}^{+}_{1}}$ which completes the proof. \end{proof} \bigskip Finally, we are now in position to prove the Theorem \ref{T1}. \begin{proof}[{\bf Proof of Theorem \ref{T1}}] Based on standard reasoning (cf. \cite{Winter}) it is important to highlight that it is always possible to perform a change of variables in order to flatten the boundary. For this end, consider $x_0 \in \partial \Omega$. 
Since $\partial \Omega \in C^{2,\alpha}$ there exists a neighborhood of $x_0$, which we will label $\mathcal{V}(x_0)$ and a $C^{2, \alpha}-$diffeomorfism $\Phi: \mathcal{V}(x_0) \to \mathrm{B}_1(0)$ such that $ \Phi(x_0) = 0 \quad \mbox{and} \quad \Phi(\Omega \cap \mathcal{V}(x_0)) = \mathrm{B}^{+}_1$. Now, for $\tilde{\varphi} \in W^{2,p}(\mathrm{B}^+_1)$ we set $\varphi = \tilde{\varphi} \circ \Phi \in W^{2, p}(\mathcal{V}(x_0))$. Thus, we obtain that $$ D \varphi = (D \tilde{\varphi} \circ \Phi) D \Phi \quad \text{and}\quad D^2 \varphi = D \Phi^t \cdot (D^2 \tilde{\varphi} \circ \Phi) \cdot D \Phi + ((D \tilde{\varphi} \circ \Phi) \partial_{ij} \Phi)_{1 \le i,j \le n}. $$ Moreover, $\tilde{u} \defeq u \circ \Phi^{-1} \in C^0(\mathrm{B}^+_1)$. Next, we observe that $\tilde{u}$ is an $L^{p}-$viscosity solution to $$ \left\{ \begin{array}{rclcl} \tilde{F}(D^2 \tilde{u}, D \tilde{u}, \tilde{u}, y) &=& \tilde{f}(x) & \mbox{in} & \mathrm{B}^+_1,\\ \tilde{\mathcal{B}} (y, \tilde{u}, D \tilde{u}) & = & \tilde{g}(x) &\mbox{on} & \mathrm{T}_1. \end{array} \right. $$ where for $y=\Phi^{-1}(x)$ we have $$ \left\{ \begin{array}{rcl} \tilde{F}\left(D^2 \tilde{\varphi},D \tilde{\varphi}, x\right) & \defeq & F\left(D\Phi^t(y) D^2 \tilde{\varphi} D\Phi(y) + D \tilde{\varphi}D^2 \Phi(y), D \tilde{\varphi}D\Phi(y), \tilde{u}, y\right)\\ \tilde{f}(x) & \defeq & f \circ \Phi^{-1}(x)\\ \tilde{\mathcal{B}}(x,s, \overrightarrow{v})& \defeq &\tilde{\beta}(x)\cdot \overrightarrow{v}+\tilde{\gamma}(x)s\\ \tilde{\beta}(x) & \defeq & (\beta \circ \Phi^{-1}) \cdot (D \Phi \circ \Phi^{-1})^{t}\\ \tilde{\gamma}(x) & \defeq & (\gamma \circ \Phi^{-1}) \cdot (D \Phi \circ \Phi^{-1})^{t}\\ \tilde{g}(x) & \defeq & g \circ \Phi^{-1} \end{array} \right. $$ Furthermore, note that $\tilde{F}(\mathrm{X}, \varsigma, \eta, y) = F\left(D \Phi^t(y) \cdot \mathrm{X} \cdot D \Phi(y) + \varsigma D^2\Phi, \varsigma D\Phi(y), \eta, y\right)$ is a uniformly elliptic operator with ellipticity constants $\lambda \mathrm{C}(\Phi)$, $\Lambda \mathrm{C}(\Phi)$. Thus, $$ \tilde{F}^{\sharp}(\mathrm{X}, \varsigma, \eta, x) = F^{\sharp}\left(D\Phi^t(\Phi^{-1}(x)) \cdot \mathrm{X}\cdot D\Phi(\Phi^{-1}(x)) + \varsigma D^2 \Phi(\Phi^{-1}(x)), 0,0, \Phi^{-1}(x)\right). $$ In consequence, we conclude that $\psi_{\tilde{F}^{\sharp}}(x,x_0) \le \mathrm{C}(\Phi) \psi_{F^{\sharp}}(x,x_0)$, ensuring that $\tilde{F}$ falls into the assumptions of Corollary \ref{Cor}. This finishes the proof, as well as establishes the proof of the Theorem \ref{T1}. \end{proof} \section{BMO type estimates: Proof of Theorem \ref{BMO}}\label{Section5} \hspace{0.4cm} This section will be devoted to establishing boundary BMO type estimates. Before proving Theorem \ref{BMO} we will present some key results that play a crucial role in our strategy. \begin{lemma}[{\bf Approximation Lemma II}]\label{Prop-BMO} Let $g \in C^{1,\psi}(\mathrm{T}_1)$ such that $\|g\|_{C^{1,\psi}(\mathrm{T}_1)} \le \mathrm{C}_g$ and $\beta \in C^{1, \psi}(\overline{\mathrm{T}_1})$. Let $F$ and $F^{\infty}$ be $\left(\frac{\lambda}{n}, \Lambda\right)$-elliptic operators. 
Given $\hat{\delta}_0>0$, there exists $\epsilon_0=\epsilon_0(\hat{\delta}_0,n,\lambda,\Lambda, \mathrm{C}_g)<1$ such that, if $$ \max\left\{\frac{|F(\mathrm{X}, x)-F^{\infty}(\mathrm{X}, x_0)|}{\|X\|+1}, \,\,\psi_{F^{\infty}}(x), \,\,\|f\|_{L^p(\mathrm{B}^+_1)\cap p-\text{BMO}(\mathrm{B}^+_1)}\right\} \le \epsilon_0, $$ then any two $L^{p}-$viscosity solutions $u$ and $v$ of $$ \left\{ \begin{array}{rclcl} F(D^2 u , x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1\\ \beta \cdot Du &=& g(x) & \mbox{on} & \mathrm{T}_1 . \end{array} \right. \quad \textrm{and} \quad \left\{ \begin{array}{rclcl} F^{\infty}(D^2 \mathfrak{h}, x_0) &=& 0 & \mbox{in} & \mathrm{B}^+_1\\ \beta \cdot D\mathfrak{h} &=& g(x) & \mbox{on} &\mathrm{T}_1. \end{array} \right. $$ satisfy $$ \|u-\mathfrak{h}\|_{L^{\infty}\left(\mathrm{B}^+_\frac{7}{8}\right)} \le \hat{\delta}_0. $$ \end{lemma} \begin{proof} The proof includes the same lines as Lemma \ref{Approx} along with minor changes. \end{proof} \begin{lemma}[{\bf A quadratic approximation}] \label{BMO4} Under the assumptions from Lemma \ref{Prop-BMO}, assume that $F^{\infty}$ satisfies (A4)$^{\star}$. Let $u$ be an $L^p-$viscosity solution to $$ \left\{ \begin{array}{rclcl} F(D^2 u, x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1,\\ \beta \cdot Du &=& g(x)& \mbox{on} & \mathrm{T}_1 . \end{array} \right. $$ Then, there exist universal constants $\mathrm{C}_{\sharp}>0$ and $r_0 \in (0, 1/2)$, as well as a quadratic polynomial $\mathfrak{P}$, with $\|\mathfrak{P}\|_{L^{\infty}(\mathrm{B}^+_1)} \le \mathrm{C}_{\sharp}$ such that $$ \displaystyle \sup_{\mathrm{B}^+_r} |u(x) - \mathfrak{P}(x)| \le r^2 \quad \text{for every} \quad r \le r_0 $$ \end{lemma} \begin{proof} Firstly, take $\hat{\delta}_0 \in (0,1)$, to be chosen \textit{a posteriori}, and apply Lemma \ref{Prop-BMO} to obtain $\epsilon_0 >0$ and an $L^p-$viscosity solution to $$ \left\{ \begin{array}{rclcl} F^{\infty}(D^2 \mathfrak{h}, x) &=& 0 & \mbox{in} & \mathrm{B}^+_1,\\ \beta \cdot D\mathfrak{h} &=&g(x)& \mbox{on} & \mathrm{T}_1. \end{array} \right. $$ such that $$\displaystyle \sup_{\mathrm{B}^+_1} |u-\mathfrak{h}| \le \hat{\delta}_0. $$ Remember that $F^{\infty}$ fulfills assumption (A4)$^{\star}$, then $\mathfrak{h}$ enjoys classical estimates. As a result, its Taylor's expansion of second order, namely $\mathfrak{P}$, is well-defined. Furthermore, we have $$ \sup_{\mathrm{B}^+_{\rho}} |\mathfrak{h}-\mathfrak{P}| \le \mathrm{C}_{\ast} \rho^{2+\psi} \quad \forall \,\, \rho\leq r_0. $$ Now, we select $0<r \ll 1$ (small enough) in such a way that $$ r \le \min\left\{r_0, \left(\frac{1}{10\mathrm{C}_{\ast}}\right)^{\frac{1}{\psi}}\right\}. $$ Thus, $$ \displaystyle \sup_{\mathrm{B}^+_r} |\mathfrak{h}-\mathfrak{P}| \le \frac{1}{10}r^2. $$ On the other hand, we select $\hat{\delta}_0 \defeq \frac{1}{10}r^2$. As a result, we obtain $$ \displaystyle \sup_{\mathrm{B}^+_r} |u- \mathfrak{h}| \le \frac{1}{10}r^2. $$ Finally, by combining the previous inequalities, we conclude that $$ \sup_{\mathrm{B}^+_r} |u-\mathfrak{P}| \le r^2. $$ \end{proof} As a byproduct of the above result, we can yield a quadratic approximation at the recession level. The proof holds along with straightforward modifications in the Lemma \ref{BMO4}. For this reason, we will omit it here. \begin{corollary} \label{BMO3} Assume the assumptions from Lemma \ref{Prop-BMO}. 
Then, there exist universal constants $\mathrm{C}_{\sharp}>0$, $\tau_0 >0$ and $r>0$, such that if $u$ is a (normalized) viscosity solution of $$ \left\{ \begin{array}{rclcl} F_{\tau}(D^2 u, x) &=& f(x) & \mbox{in} & \mathrm{B}^+_1,\\ \beta(x) \cdot Du(x) &=& g(x) & \mbox{on} & \mathrm{T}_1. \end{array} \right. $$ with $\max\left\{\tau, \,\|f\|_{p-BMO(\mathrm{B}^+_1)}\right\} \le \tau_0$, there exists a quadratic polynomial $\mathfrak{P}$, with $\|\mathfrak{P}\|_{\infty} \le \mathrm{C}_{\sharp}$ satisfying $$ \sup_{\mathrm{B}^+_r} |u(x)-\mathfrak{P}(x)| \le r^2. $$ \end{corollary} We are already prepared to prove Theorem \ref{BMO} in the face of such above auxiliary results. \begin{proof}[{\bf Proof of Theorem \ref{BMO}}] In fact, for $\kappa \in (0,1)$ to be determined \textit{a posteriori}, we define $v(x) \defeq \kappa u(x)$ such that $v$ is a (normalized) $L^p-$viscosity solution to $$ \left\{ \begin{array}{rclcl} F_{\tau}(D^2 v, x) &=& \tilde{f}(x) & \mbox{in} & \mathrm{B}^+_1,\\ \beta(x) \cdot Dv(x) &=& \tilde{g}(x) &\mbox{on} & \mathrm{T}_1. \end{array} \right. $$ where $\tau \defeq \kappa$, $\tilde{f}(x) \defeq \kappa f(x)$ and $\tilde{g}(x) \defeq \kappa g(x)$. Now, we are going to determine $\kappa$. This will be done in such a way that $\max\left\{\tau, \|\tilde{f}\|_{\textrm{p-BMO}(\mathrm{B}^+_1)}\right\} \le \tau_0$, where $\tau_0$ comes from Corollary \ref{BMO3}. Finally, we will establish the result for $v$, which will lead to the Theorem's statement. From now on, we will show that there exists a sequence of second order polynomials $(\mathfrak{P}_k)_{k \in \mathbb{N}}$ given by $\mathfrak{P}_k(x)= \frac{1}{2} x^T \cdot \mathcal{A}_k \cdot x + \mathcal{B}_k \cdot x + \mathcal{C}_k$ such that \begin{eqnarray} F^{\sharp}(\mathcal{A}_k, x) & = & (\tilde{f} )_1, \label{proof-uc-eq01} \\ r^{2(k-1)} |\mathcal{A}_{k} - \mathcal{A}_{k-1}| + r^{k-1} |\mathcal{B}_{k-1} - \mathcal{B}_{k}| + |\mathcal{C}_{k-1}-\mathcal{C}_k| & \le & \mathrm{C}_{\sharp}.r^{2(k-1)},\label{proof-uc-eq05} \end{eqnarray} where $\mathrm{C}_{\sharp}>0$ is a universal constant, $0<r\ll 1$ comes from Lemma \ref{BMO4} and $$ \sup_{\mathrm{B}^+_{r^k}} \left|v(x) - \mathfrak{P}_k(x)\right| \le r^{2k}. $$ The proof will follow by way of an induction process. Let us define $\mathfrak{P}_0$ and $\mathfrak{P}_{-1}$ to be $\mathfrak{P}_{0}(x) = \mathfrak{P}_{-1}(x) = \frac{1}{2} x^T \cdot \mathrm{X}_0 \cdot x$, where $\mathrm{X}_0 \in \textit{Sym}(n)$ fulfills $F^{\sharp}(\mathrm{X}_0, x) = ( \tilde{f})_1$. The first step of the argument, i.e., the case $k=0$, is naturally verified. Now, suppose we have established the existence of such polynomials for $k=0,1,\ldots, j$. Then, we define the following auxiliary function $v_j: \mathrm{B}^+_1 \rightarrow \mathbb{R}$ given by $$ v_{j}(x) \defeq \frac{(v-\mathfrak{P}_j)(r^j x)}{r^{2j}}, $$ where we have by the induction assumption that $v_j$ is a (normalized) $L^p-$viscosity solution to $$ \left\{ \begin{array}{rclcl} F_{j}(D^2 v_j, x) &=&\tilde{f}_j(x) & \mbox{in} & \mathrm{B}^+_1\\ \tilde{\beta}(x) \cdot Dv_j(x) &=& \tilde{g}_{j}(x) & \mbox{on} & \mathrm{T}_{1}. \end{array} \right. $$ where $$ \left\{ \begin{array}{rcl} F_j(\mathrm{X}, x) & \defeq & \tau F \left(\frac{1}{\tau}(\mathrm{X} + \mathcal{A}_j), r^jx)\right) \\ \tilde{f}_j(x) & \defeq & \tilde{f}(r^j x) \\ \tilde{g}_{j}(x) & \defeq & \tilde{g}(r^{j}x)-\frac{1}{r^{j}}\tilde{\beta}(x) \cdot D\mathfrak{P}_j(r^{j}x)\\ \tilde{\beta}(x) & \defeq & r^{j} \beta(r^{j}x) \end{array} \right. 
$$ with $$ \begin{array}{lll} \|f_j\|_{\textrm{p-BMO}(\mathrm{B}^{+}_1)} = \displaystyle \sup_{0<s\leq 1} \left( \intav{\mathrm{B}^{+}_{s}} \left|f_j(x) - (f_j)_{s}\right|^pdx\right)^{\frac{1}{p}}\\ \hspace{2.35 cm}= \displaystyle \sup_{0<s\leq 1} \left( \intav{\mathrm{B}^{+}_{sr}} \left|f(z) - (f)_{sr}\right|^pdz\right)^{\frac{1}{p}} \\ \hspace{2.35 cm} \leq \|\tilde{f}\|_{p-\textrm{BMO}(\mathrm{B}^{+}_1)} \\ \hspace{2.35 cm}\leq \tau_0. \end{array} $$ Furthermore, since $F^{\sharp}(\mathcal{A}_j, x)= ( \tilde{f} )_1$, the PDE with oblique boundary conditions $$ \left\{ \begin{array}{rclcl} F^{\sharp}_{j}(D^2 \mathfrak{h}, x) &=& ( \tilde{f} )_1 & \mbox{in} & \mathrm{B}^+_1\\ \beta \cdot D \mathfrak{h}(x) & = & \tilde{g}(x) &\mbox{on} & \mathrm{T}_1 \end{array} \right. $$ satisfies the same (up to the boundary) $C^{2,\psi}$ \textit{a priori} estimates from the problem driven by $F^{\sharp}$, and it is under the assumption of Corollary \ref{BMO3}. Thus, there exists a quadratic polynomial $\tilde{\mathfrak{P}}$ with $\|\tilde{\mathfrak{P}}\|_{\infty} \le \mathrm{C}_{\sharp}$ such that \begin{equation}\label{BMO5} \sup_{\mathrm{B}^+_r} |v_{j}-\tilde{\mathfrak{P}}| \le r^2. \end{equation} Rewriting \eqref{BMO5} back to the original domain yields $$ \displaystyle \sup_{\mathrm{B}^{+}_{r^{k+1}}} \left|v(x) - \left[\mathfrak{P}_k(x)+ r^{2k}\tilde{\mathfrak{P}}\left(\frac{x}{r^k}\right)\right]\right|\leq r^{2(k+1)}. $$ Finally, by defining $\mathfrak{P}_{j+1}(x) \defeq \mathfrak{P}_j(x) + r^{2j} \tilde{\mathfrak{P}}(r^{-j}x)$ we check the $(k+1)^{\underline{th}}$ step of induction. Furthermore, the required conditions \eqref{proof-uc-eq01} and \eqref{proof-uc-eq05} are satisfied. In conclusion, for $\rho>0$, choose an integer $k$ such that $r^{k+1} < \rho \le r^{k}$. Then, using Theorem \ref{T1} to $v_k$, we obtain $$ \begin{array}{rcl} \displaystyle \sup_{\rho \in (0, 1/2)}\left(\intav{\mathrm{B}^+_{\rho} } |D^2 v(z) - \mathcal{A}_k|^p dz\right)^{\frac{1}{p}} & \le & \displaystyle \sup_{\rho \in (0, 1/2)} \left(\frac{1}{r^n}.\intav{\mathrm{B}^+_{r^k} } |D^2 v(z) - \mathcal{A}_k|^p dz\right)^{\frac{1}{p}}\\ & = & \displaystyle \sup_{\rho \in (0, 1/2)} \left(\frac{1}{r^n}.\int_{\mathrm{B}^+_1 } |D^2 v_k|^p dx\right)^{\frac{1}{p}} \\ & \le & \mathrm{C}(\verb"universal"). \end{array} $$ Now, recall the general inequality: $$ \displaystyle \intav{\mathrm{B}^{+}_{\rho} } \left|D^2 v - \intav{\mathrm{B}^{+}_{\rho}} D^2 v \ dy\right|^p dx \le 2^{p} \displaystyle \intav{\mathrm{B}^{+}_{\rho} } |D^2 v - \mathcal{A}_k|^p dx. 
$$ In effect, by triangular inequality, $$ \begin{array}{rcl} \displaystyle \left(\intav{\mathrm{B}^{+}_{\rho} } \left|D^2 v - \intav{\mathrm{B}^{+}_{\rho}} D^2 v \ dy\right|^p dx\right)^{\frac{1}{p}} & \le & \displaystyle \left(\intav{\mathrm{B}^{+}_{\rho} } |D^2 v - \mathcal{A}_k|^p dx\right)^{\frac{1}{p}}+\left|\intav{\mathrm{B}^{+}_{\rho} } D^2 v \ dy - \mathcal{A}_k\right|\\ & \le & \displaystyle \left(\intav{\mathrm{B}^{+}_{\rho} } |D^2 v - \mathcal{A}_k|^p dx\right)^{\frac{1}{p}}+ \displaystyle \intav{\mathrm{B}^{+}_{\rho} } |D^2 v - \mathcal{A}_k| dx\\ & \le & 2 \displaystyle \left(\intav{\mathrm{B}^{+}_{\rho} } |D^2 v - \mathcal{A}_k|^p dx\right)^{\frac{1}{p}}, \end{array} $$ Therefore, $$ \displaystyle \|Dv\|_{p-BMO\left(\overline{\mathrm{B}^{+}_{\frac{1}{2}}}\right)} \defeq \sup_{\rho \in (0, 1/2)}\left(\intav{\mathrm{B}^{+}_{\rho} } \left|D^2 v - \intav{\mathrm{B}^{+}_{\rho}} D^2 v \ dy\right|^p dx\right)^{\frac{1}{p}} \leq \mathrm{C}(\verb"universal"), $$ thereby finishing the proof of the estimate \eqref{BMO2}. \end{proof} \section{Obstacle-type problems: Proof of Theorem \ref{T3}}\label{A1} \label{obst} \hspace{0.4cm} In this section, we will look at an important class of free boundary problems, namely the obstacle problem with oblique boundary datum \eqref{obss1} for a given obstacle $\phi \in W^{2,p}(\Omega)$ satisfying $\mathcal{B}(x,u, D \phi) \ge g$ a.e. on $\partial \Omega$, and $F$ a uniformly elliptic operator fulfilling $F(\mathcal{O}_{n \times n},\overrightarrow{0},0,x)=0$. The primary purpose will be to develop a $W^{2,p}$-regularity theory for \eqref{obss1} that does not rely on any extra regularity assumptions on nonlinearity $F$. In the sequel, similarly to \cite[Theorem 3.3]{BLOP18} and \cite[Appendix]{daSV21-1} we will reduce the analysis of the obstacle problem eqrefobss1 in studying a family of penalized problems as follows: \begin{equation} \label{NN4-4} \left\{ \begin{array}{rclcl} F(D^2u_{\varepsilon},Du_{\varepsilon},u_{\varepsilon},x) &=& \mathrm{h}^+(x) \Psi_{\varepsilon}(u_{\varepsilon} - \phi) + f(x) - \mathrm{h}^+(x)& \mbox{in} & \Omega \\ \mathcal{B}(x,u_{\varepsilon},Du_{\varepsilon}) &=& g(x) & \mbox{on} &\partial \Omega, \end{array} \right. \end{equation} for $\varepsilon \in (0, 1)$, where $\Psi_{\varepsilon}(s)$ is a smooth function such that $$ \Psi_{\varepsilon}(s) \equiv 0 \quad \textrm{if} \quad s \le 0; \quad \Psi_{\varepsilon}(s) \equiv 1 \quad \textrm{if} \quad s \ge \varepsilon, $$ $$ 0 \le \Psi_{\varepsilon}(s) \le 1 \quad \textrm{for any} \quad s \in \mathbb{R}. $$ and $$ \mathrm{h}(x) \defeq f(x) - F(D^2 \phi, D \phi, \phi, x). $$ Now, we define $\hat{f}_{u_{\varepsilon}}(x) \defeq \mathrm{h}^+(x) \Psi_{\varepsilon}(u_{\varepsilon} - \phi) + f(x) - \mathrm{h}^+(x)$. Then, note that, $\mathrm{h} \in L^p(\Omega)$ with the estimate \begin{eqnarray}\label{5.4} \|\mathrm{h}\|_{L^p(\Omega)} &\le& \|f\|_{L^p(\Omega)} + \|F(D^2 \phi, D \phi, \phi, x) \|_{L^p(\Omega)}\nonumber \\ &\le& \mathrm{C} \cdot \left( \|f\|_{L^p(\Omega)} + \|\phi\|_{W^{2,p}(\Omega)}\right) \end{eqnarray} for some $\mathrm{C}=\mathrm{C}(n,\lambda,\Lambda, \sigma, \xi, p )>0$, since $F$ fulfills (A1). In a consequence, we have \begin{equation}\label{4.3} \|\hat{f}_{u_{\varepsilon}}\|_{L^p(\Omega)} \leq \mathrm{C}(n,\lambda,\Lambda, \sigma, \xi, p )\left( \|f\|_{L^p(\Omega)} + \|\phi\|_{W^{2,p}(\Omega)}\right) \end{equation} Next, we claim that the problem \eqref{NN4-4} admits a viscosity solution that enjoys \textit{a priori} estimates. 
In effect, according to Perron's Method (see Lieberman's Book \cite[Theorem 7.19]{Leiberman}) it follows that for each $v_0 \in L^p(\Omega)$ fixed, there exists a unique viscosity solution $u_{\varepsilon} \in W^{2,p}(\Omega)$ satisfying in the viscosity sense $$ \left\{ \begin{array}{rclcl} F(D^2u_{\varepsilon},Du_{\varepsilon},u_{\varepsilon},x) &=& \mathrm{h}^+(x) \Psi_{\varepsilon}(v_0 - \phi) + f(x) - \mathrm{h}^+(x)& \mbox{in} & \Omega \\ \mathcal{B}(x,u_{\varepsilon},Du_{\varepsilon})&=& g(x) & \mbox{on} &\partial \Omega, \end{array} \right. $$ fulfilling (due to Theorem \ref{T1}) the estimate \begin{equation}\label{Eq_Est_Obst} \|u_{\varepsilon}\|_{W^{2,p}(\Omega)} \le \mathrm{C} \cdot \left(\|u_{\varepsilon}\|_{L^{\infty}(\Omega)} + \|\hat{f}_{v_0}\|_{L^p(\Omega)}+\|g\|_{C^{1,\alpha}(\partial \Omega)}\right) \end{equation} for some $\mathrm{C}= \mathrm{C}(n,\lambda,\Lambda, p, \mu_0, \sigma, \omega, \|\beta\|_{C^2(\partial \Omega)}, \|\gamma\|_{C^2(\partial \Omega)}, \theta_0, \textrm{diam}(\Omega))>0$. Moreover, as in \eqref{4.3}, we have $\hat{f}_{v_0} \in L^p(\Omega)$. Now, from A.B.P. Maximum Principle (Lemma \ref{ABP-fullversion}) we obtain \begin{equation}\label{Eq_Est_Obst} \begin{array}{rcl} \|u_{\varepsilon}\|_{W^{2,p}(\Omega)} & \le & \mathrm{C} \cdot \left(\|\hat{f}_{v_0}\|_{L^p(\Omega)}+\|g\|_{C^{1,\alpha}(\partial \Omega)}\right)\\ &\le& \mathrm{C} \cdot\left( \|f\|_{L^p(\Omega)} + \|h\|_{L^p(\Omega)} + \|g\|_{C^{1,\alpha}(\partial \Omega)}\right)\\ &\le& \hat{\mathrm{C}}\cdot \left(\|f\|_{L^p(\Omega)} + \|g\|_{C^{1,\alpha}(\partial \Omega)} + \|\phi\|_{W^{2,p}(\Omega)} \right). \end{array} \end{equation} Thus, $$ \|u_{\varepsilon}\|_{W^{2,p}(\Omega)} \le \mathrm{C}_0(\hat{\mathrm{C}}, n,\lambda,\Lambda,p,\mu_0, \sigma, \omega, \|\beta\|_{C^2(\partial \Omega)}, \theta_0, \textrm{diam}(\Omega), \|f\|_{L^p(\Omega)}, \|g\|_{C^{1,\alpha}(\partial \Omega)}, \|\phi\|_{W^{2,p}(\Omega)}), $$ where $\mathrm{C}_0>0$ is independent on $v_0$. At this point, by defining the operator $\mathcal{T}: L^p(\Omega) \rightarrow W^{2,p}(\Omega) \subset L^p(\Omega)$ given by $\mathcal{T}(v_0)=u_{\varepsilon}$, we conclude that $\mathcal{T}$ maps the $\mathrm{C}_0$-ball (in $L^p(\Omega)$) into itself. Hence, $\mathcal{T}$ is a compact operator. Therefore, by Schauder's fixed point theorem, there exists $u_{\varepsilon}$ such that $\mathcal{T}(u_{\varepsilon})=u_{\varepsilon}$, which is a viscosity solution to \eqref{NN4-4}. Finally, we are in a position to establish our main result. \begin{proof}[{\bf Proof of Theorem \ref{T3}}] From \eqref{Eq_Est_Obst} we observe that $$ \|u_{\varepsilon}\|_{W^{2,p}(\Omega)} \le \mathrm{C}\cdot \left( \|f\|_{L^p(\Omega)} + \|g\|_{C^{1,\alpha}(\partial \Omega)} + \|\phi\|_{W^{2,p}(\Omega)}\right) $$ for some $\mathrm{C}=\mathrm{C}(n,\lambda, \Lambda, p, \mu_0, \sigma, \xi, \|\beta\|_{C^2(\partial \Omega)}, \textrm{diam}(\Omega), \theta_0)$. This ensures that $\{u_{\varepsilon}\}_{\varepsilon >0}$ is uniformly bounded in $W^{2,p}(\Omega)$. Hence, by standard compactness arguments, we can find a subsequence $\{u_{\varepsilon_j}\}_{j \in \mathbb{N}}$ with $\varepsilon_j \to 0$ and a function $u_{\infty} \in W^{2,p}(\Omega)$ such that $$ \left\{ \begin{array}{rcl} u_{\varepsilon_j} \rightharpoonup u_{\infty} & \text{in} & W^{2, p}(\Omega)\\ u_{\varepsilon_j} \to u_{\infty} & \text{in}& C^{0,\alpha_0}(\overline{\Omega})\\ u_{\varepsilon_j} \to u_{\infty} & \text{in}& C^{1,1-\frac{n}{p}}(\overline{\Omega}) \end{array} \right. 
$$ for some universal $\alpha_0 \in (0,1)$. Now, we assert that $u_{\infty}$ is a viscosity solution of \eqref{obss1}. Indeed: \begin{enumerate} \item[\checkmark] Since $\beta(x) \cdot Du_{\varepsilon_j}(x) + \gamma(x) u_{\varepsilon_j}(x) =g(x)$ and $\{u_{\varepsilon_j}\}_{j \in \mathbb{N}}$ are uniformly bounded and equi-continuous on $\partial \Omega$, from \eqref{Eq_Est_Obst} and the Morrey embedding $W^{2, p}(\Omega) \hookrightarrow C^{1, 1-\frac{n}{p}}(\overline{\Omega})$ we have $$ \beta(x) \cdot Du_{\infty}(x) + \gamma(x) u_{\infty}(x)= g(x) \quad \textrm{on} \quad \partial \Omega $$ in the viscosity sense (and point-wisely). \item[\checkmark] On the other hand, from \eqref{NN4-4}, we have \begin{eqnarray*} F(D^2 u_{\varepsilon_j}, Du_{\varepsilon_j}, u_{\varepsilon_j},x) &=& \mathrm{h}^+(x) \Psi_{\varepsilon_j}(u_{\varepsilon_j} - \phi) + f(x) - \mathrm{h}^+(x)\\ & \le& f(x) \quad \textrm{in} \quad \Omega \quad \text{for each} \quad j \in \mathbb{N}. \end{eqnarray*} Thus, by applying Stability results (Lemma \ref{Est}) and passing to the limit $j \to +\infty$, we can conclude $$ F(D^2 u_{\infty}, Du_{\infty}, u_{\infty}, x) \le f \quad \textrm{in} \quad \Omega. $$ \item[\checkmark] Next, we are going to prove that $$ u_{\infty} \ge \phi \quad \textrm{in} \quad \overline{\Omega}. $$ Firstly, we see that $\Psi_{\varepsilon_j}(u_{\varepsilon_j} - \phi) \equiv 0$ on the set $\mathcal{O}_j \defeq \{x \in \overline{\Omega} \, : \, u_{\varepsilon_j}(x) < \phi(x)\}$. Note that, if $\mathcal{O}_j = \emptyset$, we have nothing to prove. Thus, we suppose that $\mathcal{O}_j \not= \emptyset$. Then, $$ F(D^2 u_{\varepsilon_j}, Du_{\varepsilon_j}, u_{\varepsilon_j},x) = f(x) - \mathrm{h}^+(x) \quad \textrm{for} \,\,\, x \in \mathcal{O}_j. $$ Now, for each $j \in \mathbb{N}$ note that $\mathcal{O}_j$ is relatively open in $\overline{\Omega}$, since $u_{\varepsilon_j} - \phi \in C^0(\overline{\Omega})$. Furthermore, from the definition of $\mathrm{h}$ we have $$ F(D^2 \phi, D \phi, \phi, x ) = f(x)-\mathrm{h}(x) \ge F(D^2 u_{\varepsilon_j}, Du_{\varepsilon_j}, u_{\varepsilon_j}, x) \,\,\, \textrm{in} \,\,\, \mathcal{O}_j. $$ Moreover, we have $u_{\varepsilon_j} = \phi$ on $\partial \mathcal{O}_j \setminus \partial \Omega$. Thus, from the Comparison Principle (see \cite[Theorem 2.10]{CCKS} and \cite[Theorem 7.17]{Leiberman}) we conclude that $u_{\varepsilon_j} \ge \phi$ in $\mathcal{O}_j$, which yields a contradiction. Therefore, $\mathcal{O}_j = \emptyset$ for every $j \in \mathbb{N}$, that is, $u_{\varepsilon_j} \ge \phi$ in $\overline{\Omega}$; letting $j \to +\infty$ we obtain $u_{\infty} \ge \phi$ in $\overline{\Omega}$. \item[\checkmark] Finally, it remains to show that $$ F(D^2 u_{\infty}, Du_{\infty}, u_{\infty} ,x) = f(x) \,\,\,\, \textrm{in} \,\,\,\, \{ u_{\infty} > \phi \} $$ in the viscosity sense. To this end, for each $ k \in \mathbb{N}$ we notice that $$ \mathrm{h}^+(x)\Psi_{\varepsilon_j}(u_{\varepsilon_j} -\phi) + f(x)-\mathrm{h}^{+}(x) \to f(x) \,\,\, \textrm{a.e. in} \,\,\, \left\{x \in \Omega \, : \, u_{\infty}(x) > \phi(x) + \frac{1}{k} \right\} \quad \textrm{as} \quad j \to +\infty. $$ Therefore, via Stability results (Lemma \ref{Est}) we conclude (in the viscosity sense) that $$ F(D^2 u_{\infty}, Du_{\infty}, u_{\infty} ,x) = f(x) \,\,\, \textrm{in} \,\,\, \{u_{\infty} > \phi\} = \bigcup_{k=1}^{\infty} \left\{ u_{\infty} > \phi + \frac{1}{k}\right\}, $$ thereby proving the claim.
\end{enumerate} Finally, from \eqref{Eq_Est_Obst} and the weak lower semicontinuity of the $W^{2,p}$-norm, $u_{\infty}$ satisfies the following regularity estimate {\scriptsize{ \begin{eqnarray*} \|u_{\infty}\|_{W^{2,p}(\Omega)} &\le& \liminf_{j \to +\infty} \|u_{\varepsilon_j}\|_{W^{2,p}(\Omega)} \\ &\le& \mathrm{C}(n,\lambda, \Lambda, p, \mu_0, \sigma, \omega, \|\beta\|_{C^2(\partial \Omega)}, \|\gamma\|_{C^2(\partial \Omega)}, \textrm{diam}(\Omega), \theta_0)\cdot\left( \|f\|_{L^p(\Omega)} + \|g\|_{C^{1,\alpha}(\partial \Omega)} + \| \phi \|_{W^{2,p}(\Omega)}\right) \end{eqnarray*}}} thereby finishing the proof of the theorem. \end{proof} As a consequence, we obtain the uniqueness of such viscosity solutions. \begin{corollary}[\textbf{Uniqueness}] The viscosity solution found in Theorem \ref{T3} is unique. \end{corollary} \begin{proof} Indeed, let $u$ and $v$ be two viscosity solutions of \eqref{obss1}. Assume that $u \not= v$. Then, we may suppose without loss of generality that $$ \mathcal{O}_{\sharp} = \{v > u\} \not= \emptyset. $$ Since $v > u \ge \phi$ in $\mathcal{O}_{\sharp}$, we obtain in the viscosity sense $$ F(D^2 v, Dv, v,x) = f(x) \quad \textrm{in} \quad \mathcal{O}_{\sharp}. $$ Hence, we conclude that $$ \left\{ \begin{array}{rclclcc} F(D^2u,Du,u,x) &\le& f(x) & \le & F(D^2 v, Dv,v,x)& \mbox{in} & \mathcal{O}_{\sharp} \\ & & u(x)&=& v(x) & \mbox{on} &\partial \mathcal{O}_{\sharp} \setminus \partial \Omega,\\ \mathcal{B}(x,u,Du) &=& g(x) & =& \mathcal{B}(x,v,Dv) & \mbox{on} & \partial \mathcal{O}_{\sharp} \cap \partial \Omega \end{array} \right. $$ Therefore, according to the Comparison Principle for problems with oblique boundary conditions \cite[Theorem 2.10]{CCKS} and \cite[Theorem 7.17]{Leiberman}, we conclude that $u \ge v$ in $\mathcal{O}_{\sharp}$ whether $\partial \mathcal{O}_{\sharp} \cap \partial \Omega = \emptyset$ or not. However, this contradicts the definition of the set $\mathcal{O}_{\sharp}$, thereby proving that $u =v$. \end{proof} \section{Final conclusions: Density of viscosity solutions}\label{Sec_Density} \hspace{0.4cm}In this final part, we will deliver another application of the $W^{2,p}$ regularity estimates. \begin{theorem}[{\bf $W^{2,p}$ density in the class of $C^{0}-$viscosity solutions}] Let $u$ be a $C^{0}-$viscosity solution of $$ \left\{ \begin{array}{rclcl} F(D^{2}u,x) & = & f(x) & \text{in} & \mathrm{B}^{+}_{1} \\ \mathcal{B}(x, u, Du) & = & g(x) & \text{on} & \mathrm{T}_{1}, \end{array} \right. $$ where $f\in L^{p}(\mathrm{B}^{+}_{1})\cap C^{0}(\mathrm{B}^{+}_{1})$ (for $n \le p<\infty$), $\beta,\gamma, g\in C^{1,\alpha}(\mathrm{T}_{1})$ with $\gamma\leq 0$ and $\beta \cdot \overrightarrow{\textbf{n}} \geq \mu_{0}$ on $\mathrm{T}_{1}$, for some $\mu_{0}>0$. Then, for any $\delta>0$, there exists a sequence $(u_{j})_{j\in\mathbb{N}}\subset W^{2,p}_{loc}(\mathrm{B}^{+}_{1})\cap \mathcal{S}(\lambda-\delta,\Lambda+\delta,f)$ converging locally uniformly to $u$. \end{theorem} \begin{proof} The proof follows some ideas from \cite[Theorem 8.1]{PT} adapted to the oblique boundary setting. We present the details for the sake of completeness.
Firstly, we are going to build up such a desired sequence of operators $F_{j}:\text{Sym}(n)\times \mathrm{B}^{+}_{1}\longrightarrow\mathbb{R}$ as follows: Given $\delta>0$, we consider the Pucci maximal operator \begin{eqnarray*} \mathscr{L}_{\delta}(\mathrm{X})\defeq \mathscr{P}^{+}_{(\lambda-\delta),(\Lambda+\delta)}(\mathrm{X})=(\Lambda +\delta)\sum_{e_i >0} e_i(\mathrm{X}) +(\lambda-\delta) \sum_{e_i <0} e_i(\mathrm{X}), \end{eqnarray*} where $e_{i}(\mathrm{X})$ are the eigenvalues of the matrix $\mathrm{X}\in \text{Sym}(n)$. Now, we define \begin{eqnarray*} F_{j}:&\text{Sym}(n)\times \mathrm{B}^{+}_{1}&\longrightarrow \mathbb{R}\\ &(\mathrm{X},x)&\longmapsto \max\{F(\mathrm{X},x),\mathscr{L}_{\delta}(\mathrm{X})-\mathrm{C}_{j}\}, \end{eqnarray*} for $(\mathrm{C}_{j})_{j\in\mathbb{N}}$ a divergent sequence of positive numbers (to be chosen precisely \textit{a posteriori}). Thus, notice that $F_{j}$ is continuous (because it is the maximum of two continuous functions) and uniformly elliptic with ellipticity constants $\lambda-\delta$ and $\Lambda+\delta$, as $F$ and $\mathscr{L}_{\delta}$ are. Now, taking into account the $(\lambda,\Lambda)$-ellipticity of $F$ there holds \begin{eqnarray*} F(\mathrm{X},x)&\geq& \lambda\sum_{e_i >0} e_i(\mathrm{X}) +\Lambda\sum_{e_i <0} e_i(\mathrm{X})\\ &\geq& \lambda\displaystyle\sum_{e_i >0}e_{i}(\mathrm{X})-\Lambda\| \mathrm{X}\|\\ &=& \mathscr{L}_{\delta}(\mathrm{X})-(\Lambda+\delta-\lambda)\sum_{e_i >0}e_{i}(\mathrm{X})-(\lambda-\delta)\sum_{e_i <0}e_{i}(\mathrm{X})-\Lambda\| \mathrm{X}\|\\ &\geq& \mathscr{L}_{\delta}(\mathrm{X})-(2\Lambda -\lambda+\delta)\| \mathrm{X}\|\\ &\geq& \mathscr{L}_{\delta}(\mathrm{X})-\mathrm{C}_{j}, \ \forall\, \mathrm{X}\in \mathrm{B}_{j}\subset \text{Sym}(n), \ \forall x\in \mathrm{B}^{+}_{1}, \end{eqnarray*} where we select $\mathrm{C}_{j}\defeq j(2\Lambda-\lambda+\delta)$. Hence, $F\equiv F_{j}$ in $\mathrm{B}_{j}\times \mathrm{B}^{+}_{1}\subset \text{Sym}(n)\times \mathrm{B}^{+}_{1}$. On the other hand, we investigate the recession operator associated to $F_{j}$. For that purpose, for each $\mu>0$ we have \begin{eqnarray*} (F_{j})_{\mu}(\mathrm{X},x)=\mu F_{j}(\mu^{-1}\mathrm{X},x)=\max\{F_{\mu}(\mathrm{X},x),\mathscr{L}_{\delta}(\mathrm{X})-\mu \mathrm{C}_{j}\}. \end{eqnarray*} Since $F_{\mu}$ is $(\lambda,\Lambda)$-elliptic for any $\mu>0$, we get \begin{eqnarray*} F_{\mu}(\mathrm{X},x)&\leq& \Lambda\sum_{e_i >0} e_i(\mathrm{X}) +\lambda\sum_{e_i <0} e_i(\mathrm{X})\\ & = & \mathscr{L}_{\delta}(\mathrm{X})-\delta \sum_{e_i >0} e_{i}(\mathrm{X})+\delta\sum_{e_i <0} e_{i}(\mathrm{X})\\ &\leq& \mathscr{L}_{\delta}(\mathrm{X})-\delta\|\mathrm{X}\|\\ &\leq & \mathscr{L}_{\delta}(\mathrm{X})-\mu \mathrm{C}_{j}, \end{eqnarray*} for all $\mathrm{X}\in \text{Sym}(n)\setminus\mathrm{B}_{\frac{\mu \mathrm{C}_{j}}{\delta}}$ and $x\in \mathrm{B}^{+}_{1}$. Therefore, we conclude that $(F_{j})_{\mu}(\mathrm{X},x)= \mathscr{L}_{\delta}(\mathrm{X})-\mu\mathrm{C}_{j}$ outside a ball of radius $\sim \mu\mathrm{C}_{j}$, and hence $F_{j}^{\sharp}=\mathscr{L}_{\delta}$. Now, notice that $F_{j}^{\sharp}$ fulfills $C^{1,1}$ \textit{a priori} estimates, i.e., given $g_{0}\in C^{1,\alpha}(\mathrm{T}_{1})$, any viscosity solution to $$ \left\{ \begin{array}{rclcl} F_{j}^{\sharp}(D^{2}\mathfrak{h}, x_{0}) & = & 0 & \mbox{in} & \mathrm{B}^{+}_{1}\\ \mathcal{B}(x, \mathfrak{h}, D\mathfrak{h}) & = & g_{0}(x) & \mbox{on} & \mathrm{T}_{1} \end{array} \right.
$$ satisfies $\mathfrak{h}\in C^{1,1}(\overline{\mathrm{B}^{+}_{1/2}})$ (see e.g., \cite[Theorem 1.3]{LiZhang}) with the following estimate: \begin{eqnarray*} \|\mathfrak{h}\|_{C^{1,1}(\overline{\mathrm{B}^{+}_{1/2}})}\leq \mathrm{C}\cdot \left(\|\mathfrak{h}\|_{L^{\infty}(\mathrm{B}^{+}_{1})}+\|g_{0}\|_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}\right). \end{eqnarray*} Therefore, the statements of Proposition \ref{T-flat} are in force. Then, for each fixed $j\in\mathbb{N}$, any viscosity solution of $$ \left\{ \begin{array}{rcrcl} F_{j}(D^{2}v, x) & = & f(x) & \text{in} & \mathrm{B}^{+}_{1} \\ \mathcal{B}(x, v, Dv) & = & g(x) & \text{on} & \mathrm{T}_{1} \end{array} \right. $$ enjoys $W^{2,p}$ regularity estimates. To be more specific, for each $j\in\mathbb{N}$ there exists a constant $\kappa_{j}>0$ such that \begin{eqnarray*} \|v\|_{W^{2,p}(\mathrm{B}_{1/2}^{+})}\leq \kappa_{j}\cdot (\|v\|_{L^{\infty}(\mathrm{B}_{1}^{+})}+\|f\|_{L^{p}(\mathrm{B}^{+}_{1})}+\|g\|_{C^{1,\alpha}(\overline{\mathrm{T}_{1}})}). \end{eqnarray*} Finally, we build up the desired sequence $(u_{j})_{j \in \mathbb{N}}$ by letting $u_{j}$ be a viscosity solution to $$\left\{ \begin{array}{rcrcl} F_{j}(D^{2}u_{j},x) & = & f(x) & \mbox{in} & \mathrm{B}^{+}_{1} \\ \mathcal{B}(x, u_{j}, Du_{j}) & = & g(x) & \mbox{on} & \mathrm{T}_{1}\\ u_{j}(x) & = & u(x) & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}, \end{array} \right. $$ whose existence follows from Theorem \ref{Existencia}. Furthermore, each $u_{j}\in W^{2,p}(\mathrm{B}^{+}_{1})$ and, since the operators $F_{j}$ are uniformly elliptic, it follows that $F_{j}(\cdot,x)\to F_{0}(\cdot,x)$ locally uniformly in $\text{Sym}(n)$ for each $x\in \mathrm{B}^{+}_{1}$. Furthermore, since $F_{j}\equiv F$ in $\mathrm{B}_{j}\times \mathrm{B}^{+}_{1}$, we have $F_{0}\equiv F$. In conclusion, using local $C^{0, \alpha}$ regularity and A.B.P. estimates (see Theorem \ref{Holder_Est} and Lemma \ref{ABP-fullversion}, respectively), the sequence $(u_{j})_{j \in \mathbb{N}}$ converges, up to a subsequence, locally uniformly to $u_{0}$ in the $C^{0, \alpha}$-topology. In addition, according to Stability results (Lemma \ref{Est}), $u_{0}$ is a viscosity solution of $$ \left\{ \begin{array}{rclcl} F(D^{2}u_{0},x) & = & f(x) & \mbox{in} & \mathrm{B}^{+}_{1}\\ \mathcal{B}(x, u_{0}, Du_{0}) & = & g(x) & \mbox{on} & \mathrm{T}_{1}\\ u_{0}(x) & = & u(x) & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}. \end{array} \right. $$ Finally, by taking $w=u_{0}-u$, we can see that $w$ satisfies in the viscosity sense $$ \left\{ \begin{array}{rcl} w\in \mathcal{S}(\lambda/n,\Lambda,0) & \mbox{in} & \mathrm{B}^{+}_{1}\\ \mathcal{B}(x, w, Dw)=0 & \mbox{on} & \mathrm{T}_{1}\\ w=0 & \mbox{on} & \partial \mathrm{B}^{+}_{1}\setminus \mathrm{T}_{1}, \end{array} \right. $$ and, once again, by A.B.P. estimates (Lemma \ref{ABP-fullversion}) we conclude that $w=0$ in $\overline{\mathrm{B}^{+}_{1}}\setminus \mathrm{T}_{1}$. Therefore, $w\equiv 0$, i.e., $u=u_{0}$, thereby finishing the proof. \end{proof} \subsection*{Acknowledgments} \hspace{0.4cm} J.S. Bessa was partially supported by CAPES-Brazil under Grant No. 88887.482068/2020-00. J.V. da Silva, M. N. Barreto Frederico and G.C. Ricarte have been partially supported by CNPq-Brazil under Grant No. 307131/2022-0, No 165746/2020-3, and No. 304239/2021-6. J.V. da Silva has been partially supported by FAPDF Demanda Espont\^{a}nea 2021 and FAPDF - Edital 09/2022 - DEMANDA ESPONTÂNEA.
Part of this work was developed during the \textit{Fortaleza Conference on Analysis and PDEs} (2022) at the Universidade Federal do Cear\'{a} (UFC-Brazil). J.V. da Silva would like to thank to UFC's Department of Mathematics for fostering a pleasant scientific and research atmosphere during his visit in the Summer of 2022. \begin{thebibliography}{99} \bibitem{AZBED} Azzam, J. and Bedrossian, J. \textit{Bounded mean oscillation and the uniqueness of active scalar equations}. Trans. Amer. Math. Soc. 367 (2015), no. 5, 3095-3118. \bibitem{BJ} Byun, S.S. and Han, J. \textit{$W^{2,p}$-estimates for fully nonlinear elliptic equations with oblique boundary conditions}. J. Differential Equations 268 (2020), no. 5, 2125-2150. \bibitem{BJ1} Byun, S.S., Han J. and Oh, J., \textit{On $W^{2,p}$-estimates for solutions of obstacle problems for fully nonlinear elliptic equations with oblique boundary conditions}. Calc. Var. Partial Differential Equations 61 (2022), no. 5, Paper No. 162, 15 pp. \bibitem{BLOP18} Byun, S.S., Lee, K.-A., Oh, J. and Park, J. \textit{Nondivergence elliptic and parabolic problems with irregular obstacles}. Math. Z. 290 (2018), no. 3-4, 973-990. \bibitem{BLP} Byun, S.-S.; Lee, M. and Palagachev, D.K. \textit{Hessian estimates in weighted Lebesgue spaces for fully nonlinear elliptic equations}. J. Differential Equations 260 (2016), no. 5, 4550-4571. \bibitem{BOW16} Byun, S.-S.; Oh, J. and Wang, L. \textit{$W^{2,p}$ estimates for solutions to asymptotically elliptic equations in nondivergence form}. J. Differential Equations 260 (2016), no. 11, 7965-7981. \bibitem{BJ0} Byun, S.S. and Wang, L. \textit{Global Calder\'{o}n-Zygmund theory for asymptotically regular nonlinear elliptic and parabolic equations}. Int. Math. Res. Not. IMRN 2015, no. 17, 8289-8308. \bibitem{Caff1} Caffarelli, L.A. \textit{Interior a priori estimates for solutions of fully nonlinear equations.} Ann. of Math.(2) 130 (1989), no. 1, 189--213. \bibitem{CC} Caffarelli, L.A. and Cabr\'{e}, X. \textit{Fully Nonlinear Elliptic Equations}. American Mathematical Society Colloquium Publications, 43. American Mathematical Society, Providence, RI, 1995. vi+104 pp. ISBN: 0-8218-0437-5. \bibitem{CCKS} Caffarelli, L.A., Crandall, M.G., Kocan, M. and \'{S}wi\c{e}ch, A. \textit{On viscosity solutions of fully nonlinear equations with measurable ingredients}. Comm, Pure Appl. Math. 49 (1996) (4), 365--397. \bibitem{Ch} Chipot, M. and Evans, L.C. \textit{Linearisation at infinity and Lipschitz estimates for certain problems in the calculus of variations}. Proc. Roy. Soc. Edinburgh Sect. A. 102 (3-4) (1986). \bibitem{daSR19} da Silva, J.V. and Ricarte, G.C. \textit{An asymptotic treatment for non-convex fully nonlinear elliptic equations: Global Sobolev and BMO type estimates.} Commun. Contemp. Math. 21 (2019), no. 7, 1850053, 28 pp. \bibitem{daSV21-1} da Silva, J.V. and Vivas, H. \textit{Sharp regularity for degenerate obstacle type problems: a geometric approach}. Discrete Contin. Dyn. Syst. 41 (2021), no. 3, 1359-1385. \bibitem{daSV21} da Silva, J.V. and Vivas, H. \textit{The obstacle problem for a class of degenerate fully nonlinear operators}. Rev. Mat. Iberoam. 37 (2021), no. 5, 1991-2020. \bibitem{DK91} Dong, H. and Kim, D. \textit{On the impossibility of $W^{2,p}$ estimates for elliptic equations with piecewise constant coefficients}. J. Funct. Anal. 267 (10) (2014) 3963--3974. \bibitem{DL20} Dong, H. and Li, Z. \textit{Classical solutions of oblique derivative problem in nonsmooth domains with mean Dini coefficients}. 
Trans. Amer. Math. Soc. 373 (2020), no. 7, 4975-4997. \bibitem{Es93} Escauriaza, L. \textit{$W^{2,n}$ a priori estimates for solutions to fully non-linear elliptic equation.} Indiana Univ. Math. J. 42, no. 2 (1993), 413-423. \bibitem{Evans} Evans, L. C., \textit{Classical solutions of fully nonlinear, convex, second-order elliptic equations}. Comm. Pure Appl. Math. 35(3), 333-363, 1982. \bibitem{FiShah14} Figalli, A. and Shahgholian, H., \textit{A general class of free boundary problems for fully nonlinear elliptic equations}. Arch. Ration. Mech. Anal. 213 (2014), no. 1, 269–286. \bibitem{MF} Foss, M. \textit{Global regularity for almost minimizers of nonconvex variational problems.} Ann. Mat. Pura Appl. (4) 187 (2) (2008), 263-321. \bibitem{GS01} Giga, Y. and Sato, M.-H. \textit{On semicontinuous solutions for general Hamilton-Jacobi equations}. Comm. Partial Differential Equations 26 (2001), 813-839. \bibitem{Indr19} Indrei, E. \textit{Boundary regularity and nontransversal intersection for the fully nonlinear obstacle problem}. Comm. Pure Appl. Math. 72 (2019), no. 7, 1459-1473. \bibitem{Indrei19} Indrei, E. \textit{Non-transversal intersection of the free and fixed boundary in the mean-field theory of superconductivity}. Interfaces Free Bound. 21 (2019), no. 2, 267-272. \bibitem{IM16} Indrei, E. and Minne, A. \textit{Regularity of solutions to fully nonlinear elliptic and parabolic free boundary problems}. Ann. Inst. H. Poincar\'{e} C Anal. Non Lin\'{e}aire 33 (2016), no. 5, 1259-1277. \bibitem{IMN17} Indrei, E., Minne, A. and Nurbekyan, L. \textit{Regularity of solutions in semilinear elliptic theory}. Bull. Math. Sci. 7 (2017), no. 1, 177-200. \bibitem{Hi} Ishii, H. and Lions, P.L. \textit{Fully nonlinear oblique derivative problems for nonlinear second-order elliptic PDEs}. Duke math. J. 62 (3) (1991) 633-661. \bibitem{KL20} Kim, S. and Lee, K-A. \textit{Higher order convergence rates in theory of homogenization II: Oscillatory initial data}. Adv. Math. 362 (2020), 106960, 52 pp. \bibitem{Kry82} Krylov, N.V. \textit{Boundedly nonhomogeneous elliptic and parabolic equations}. Izv. Akad. Nak. SSSR Ser. Mat. 46 (1982), 487-523; English transl. in Math USSR Izv. 20 (1983), 459-492. \bibitem{Kry12} Krylov, N.V. \textit{On the existence of smooth solutions for fully nonlinear elliptic equations with measurable ``coefficients'' without convexity assumptions}. Methods Appl. Anal., 19 (2012), pp. 119--146. \bibitem{Kry13} Krylov, N.V. \textit{On the existence of $W^{2,p}$ solutions for fully nonlinear elliptic equations under relaxed convexity assumptions}. Comm. Partial Diff. Eqs., 38 (2013), pp. 687--710. \bibitem{Kry17} Krylov, N.V. \textit{On the existence of $W^{2, p}$ solutions for fully nonlinear elliptic equations under either relaxed or no convexity assumptions}. Commun. Contemp. Math. 19 (2017), no. 6, 1750009, 39 pp. \bibitem{Lee19} Lee, M. \textit{Weighted Orlicz regularity estimates for fully nonlinear elliptic equations with asymptotic convexity}. Commun. Contemp. Math. 21 (2019), no. 4, 1850024, 29 pp. \bibitem{LiZhang} Li, D. and Zhang, K. \textit{Regularity for fully nonlinear elliptic equations with oblique boundary conditions}. Arch. Ration. Mech. Anal. 228(3)(2018) 923-967. \bibitem{Lieb01} Lieberman, G.M. \textit{Regularity of solutions of obstacle problems for elliptic equations with oblique boundary conditions}. Pacific J. Math. 201 (2001), no. 2, 389–419. \bibitem{Lieb02} Lieberman, G.M. 
\textit{Higher regularity for nonlinear oblique derivative problems in Lipschitz domains}. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 1 (2002), no. 1, 111-151. \bibitem{Leiberman} Lieberman, G.M. \textit{Oblique derivative problems for elliptic equations}. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2013. xvi+509 pp. ISBN: 978-981-4452-32-8. \bibitem{Lie-Tru} Lieberman, G.M. and Trudinger, N.S. \textit{Nonlinear oblique boundary value problems for nonlinear elliptic equations}. Trans. Amer. Math. Soc. 295 (1986), no. 2, 509-546. \bibitem{Sil1} Milakis, E. and Silvestre, L. \textit{Regularity for fully nonlinear elliptic equations with Neumann boundary data.} Commum. Partial Differ. Equ. 31 (7-9) (2006) 1227-1252. \bibitem{PT} Pimentel, E. and Teixeira, E.V. \textit{Sharp hessian integrability estimates for nonlinear elliptic equations: an asymptotic approach}. J. Math. Pures Appl. 106 (2016), pp. 744--767. \bibitem{JP} Raymond, J.P. \textit{Lipschitz regularity of solutions of some asymptotically convex problems.} Proc. Roy. Soc. Edinburgh Sect. A. 117 (1-2) (1991), 59-73. \bibitem{RT} Ricarte G.C. and Teixeira, E.V. \textit{Fully nonlinear singularly perturbed equations and asymptotic free boundary}. J. Funct. Anal., vol. 261, Issue 6, 2011, 1624-1673. \bibitem{R-Oton-Serr17} Ros-Oton, X. and Serra, J. \textit{The structure of the free boundary in the fully nonlinear thin obstacle problem}. Adv. Math. 316 (2017), 710-747. \bibitem{Saf91} Safonov, M.V. \textit{Nonlinear elliptic equations of second order}, Lecture notes. Dipartimento di Matematica Applicata ``G. Sansone'', Universita degli studi di Firenze, 1991, 52 p. \bibitem{Saf94} Safonov, M.V. \textit{On the boundary value problems for fully nonlinear elliptic equations of second order}. CMA Research Report MRR 049-94, 1994, 34 p. \bibitem{Sav07} Savin, O. \textit{Small perturbation solutions for elliptic equations}. Comm. Partial Differential Equations 32 (2007), no. 4-6, 557-578. \bibitem{CS} Scheven, C. and Schmidt, T. \textit{Asymptotically regular problems I: Higher integrability.} J. Differential Equations 248 (4) (2010), 745-791. \bibitem{CS1} Scheven, C. and Schmidt, T. \textit{Asymptotically regular problems II: Partial Lipschitz continuity and a singular set of positive measure.} Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 8 (3) (2009), 469-507. \bibitem{ST} Silvestre, L. and Teixeira, E.V. \textit{Regularity estimates for fully non linear elliptic equations which are asymptotically convex}. Contributions to nonlinear elliptic equations and systems, 425–438, Progr. Nonlinear Differential Equations Appl., 86, Birkh\"{a}user/Springer, Cham, 2015. \bibitem{Tru83} Trudinger, N.S. \textit{Fully nonlinear, uniformly elliptic equations under natural structure conditions}. Trans. Amer. Math. Soc. 278 (1983), no. 2, 751-769. \bibitem{Tru84} Trudinger, N.S. \textit{Regularity of solutions of fully nonlinear elliptic equations}. Boll. Un. Mat. Ital. A (6) 3 (1984), no. 3, 421–430. \bibitem{Winter} Winter, N. \textit{$W^{2,p}$ and $W^{1,p}$-Estimates at the Boundary for Solutions of Fully Nonlinear, Uniformly Elliptic Equations}. Z. Anal. Adwend. (J. Anal. Appl.) 28 (2009), 129--164. \bibitem{ZZZ21} Zhang, J., Zheng, S. and Zuo, C. \textit{$W^{2,p}$-regularity for asymptotically regular fully nonlinear elliptic and parabolic equations with oblique boundary values}. Discrete Contin. Dyn. Syst. Ser. S 14 (2021), no. 9, 3305–3318. \end{thebibliography} \end{document}
2205.07753v2
http://arxiv.org/abs/2205.07753v2
A tropical view on Landau-Ginzburg models
\documentclass[11pt,letter]{amsart} \usepackage{amsmath} \usepackage{amscd} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsfonts} \usepackage[latin1]{inputenc} \usepackage{mathrsfs} \usepackage{marvosym} \usepackage{srcltx} \usepackage[centering,text={15.5cm,24cm}]{geometry} \usepackage[dvips]{graphicx} \usepackage{pstricks} \usepackage[all]{xy} \usepackage[colorlinks=true]{hyperref} \hypersetup{ colorlinks = true, urlcolor = violet, linkcolor = blue, citecolor = violet } \pdfoptionpdfminorversion=7 \renewcommand{\baselinestretch}{1.2} \setlength{\marginparwidth}{10ex} \setcounter{tocdepth}{1} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{construction}[theorem]{Construction} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{convention}[theorem]{Convention} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem*{acknowledgement}{Acknowledgement} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{remarks}[theorem]{Remarks} \numberwithin{equation}{section} \numberwithin{figure}{section} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \newcommand{\bary}{\mathrm{bar}} \newcommand{\length}{\operatorname{length}} \newcommand{\cover}{\tau^{\mathbf w}_s} \newcommand{\Cover}{\Delta^{\mathbf w}_{s} } \newcommand{\ray}{\frak s} \newcommand{\defbase}{S^{\mathbf w}} \newcommand{\defcone}{C^{\mathbf w}} \newcommand{\effvert}{\widetilde \Gamma^{[0]}} \newcommand{\vvert}{\tilde \Gamma^{[0]}_{\mathrm{leaf}}} \newcommand{\doublebar}[1]{\shortstack{$\overline{\hphantom{#1}} $\\[-4pt] $\overline{#1}$}} \newcommand{\underlinehigh}[1]{\shortstack{#1\\[-5pt] \underline{\hphantom{#1}}}} \newcommand{\lperp}{{}^\perp\!} \newcommand{\lparallel}{{}^\parallel\!} \newcommand{\bbfamily}{\fontencoding{U}\fontfamily{bbold}\selectfont} \newcommand{\textbb}[1]{{\bbfamily#1}} \newcommand {\lfor} {\mbox{\textbb{[}}} \newcommand {\rfor} {\mbox{\textbb{]}}} \newcommand{\HH} {\mathbb{H}} \newcommand{\NN} {\mathbb{N}} \newcommand{\ZZ} {\mathbb{Z}} \newcommand{\QQ} {\mathbb{Q}} \newcommand{\RR} {\mathbb{R}} \newcommand{\CC} {\mathbb{C}} \newcommand{\FF} {\mathbb{F}} \newcommand{\PP} {\mathbb{P}} \renewcommand{\AA} {\mathbb{A}} \newcommand{\GG} {\mathbb{G}} \newcommand {\shA} {\mathcal{A}} \newcommand {\shAff} {\mathcal{A}\text{\textit{ff}}} \newcommand {\shAut} {\mathcal{A}\text{\textit{ut}}} \newcommand {\shB} {\mathcal{B}} \newcommand {\shC} {\mathcal{C}} \newcommand {\shD} {\mathcal{D}} \newcommand {\shDiv} {\mathcal{D} \!\text{\textit{iv}}} \newcommand {\shCDiv} {\mathcal{C}\mathcal{D}\!\text{\textit{iv}}} \newcommand {\shEx} {\mathcal{E} \!\text{\textit{x}}} \newcommand {\shExt} {\mathcal{E} \!\text{\textit{xt}}} \newcommand {\shE} {\mathcal{E}} \newcommand {\shF} {\mathcal{F}} \newcommand {\shG} {\mathcal{G}} \newcommand {\shH} {\mathcal{H}} \newcommand {\shHom} {\mathcal{H}\!\text{\textit{om}}} \newcommand {\shI} {\mathcal{I}} \newcommand {\shJ} {\mathcal{J}} \newcommand {\shK} {\mathcal{K}} \newcommand {\shL} {\mathcal{L}} \newcommand {\shM} {\mathcal{M}} \newcommand {\shMPL} {\mathcal{MPL}} \newcommand {\shm} {\mathcal{m}} \newcommand {\shN} {\mathcal{N}} \newcommand {\shPL} {\mathcal{PL}} \newcommand {\shQ} {\mathcal{Q}} \newcommand {\shR} {\mathcal{R}} \newcommand {\shS} {\mathcal{S}} \newcommand 
{\shT} {\mathcal{T}} \newcommand {\shProj} {\mathcal{P}\!\text{\textit{roj}}\,} \newcommand {\shP} {\mathcal{P}} \newcommand {\shV} {\mathcal{V}} \newcommand {\shW} {\mathcal{W}} \newcommand {\shX} {\mathcal{X}} \newcommand {\shY} {\mathcal{Y}} \newcommand {\shZ} {\mathcal{Z}} \newcommand {\foA} {\mathfrak{A}} \newcommand {\foB} {\mathfrak{B}} \newcommand {\foC} {\mathfrak{C}} \newcommand {\foD} {\mathfrak{D}} \newcommand {\foE} {\mathfrak{E}} \newcommand {\foF} {\mathfrak{F}} \newcommand {\foG} {\mathfrak{G}} \newcommand {\foH} {\mathfrak{H}} \newcommand {\foI} {\mathfrak{I}} \newcommand {\foJ} {\mathfrak{J}} \newcommand {\foK} {\mathfrak{K}} \newcommand {\foL} {\mathfrak{L}} \newcommand {\foM} {\mathfrak{M}} \newcommand {\foN} {\mathfrak{N}} \newcommand {\foO} {\mathfrak{O}} \newcommand {\foP} {\mathfrak{P}} \newcommand {\foQ} {\mathfrak{Q}} \newcommand {\foR} {\mathfrak{R}} \newcommand {\foS} {\mathfrak{S}} \newcommand {\foT} {\mathfrak{T}} \newcommand {\foU} {\mathfrak{U}} \newcommand {\foV} {\mathfrak{V}} \newcommand {\foW} {\mathfrak{W}} \newcommand {\foX} {\mathfrak{X}} \newcommand {\foY} {\mathfrak{Y}} \newcommand {\foZ} {\mathfrak{Z}} \newcommand {\foa} {\mathfrak{a}} \newcommand {\fob} {\mathfrak{b}} \newcommand {\foc} {\mathfrak{c}} \newcommand {\fod} {\mathfrak{d}} \newcommand {\foe} {\mathfrak{e}} \newcommand {\fof} {\mathfrak{f}} \newcommand {\fog} {\mathfrak{g}} \newcommand {\foh} {\mathfrak{h}} \newcommand {\foi} {\mathfrak{i}} \newcommand {\foj} {\mathfrak{j}} \newcommand {\fok} {\mathfrak{k}} \newcommand {\fol} {\mathfrak{l}} \newcommand {\fom} {\mathfrak{m}} \newcommand {\fon} {\mathfrak{n}} \newcommand {\foo} {\mathfrak{o}} \newcommand {\fop} {\mathfrak{p}} \newcommand {\foq} {\mathfrak{q}} \newcommand {\forr} {\mathfrak{r}} \newcommand {\fos} {\mathfrak{s}} \newcommand {\fot} {\mathfrak{t}} \newcommand {\fou} {\mathfrak{u}} \newcommand {\fov} {\mathfrak{v}} \newcommand {\fow} {\mathfrak{w}} \newcommand {\fox} {\mathfrak{x}} \newcommand {\foy} {\mathfrak{y}} \newcommand {\foz} {\mathfrak{z}} \newcommand {\scrA} {\mathscr{A}} \newcommand {\aff} {\mathrm{aff}} \newcommand {\Aff} {\operatorname{Aff}} \newcommand {\an} {\mathrm{an}} \newcommand {\Ad} {\operatorname{Ad}} \newcommand {\Ann} {\operatorname{Ann}} \newcommand {\APic} {\operatorname{APic}} \newcommand {\as} {\mathrm{as}} \newcommand {\Ass} {\operatorname{Ass}} \newcommand {\Aut} {\operatorname{Aut}} \renewcommand {\Bar} {\operatorname{Bar}} \newcommand {\base} {\operatorname{Base}} \newcommand {\Br} {\operatorname{Br}} \newcommand {\Bs} {\operatorname{Bs}} \newcommand {\bct} {\mathrm{bct}} \newcommand {\C} {\mathscr{C}} \newcommand {\Cat} {\underline{\mathrm{Cat}}} \newcommand {\CDiv} {\operatorname{CDiv}} \newcommand {\Cl} {\operatorname{Cl}} \newcommand {\Char} {\operatorname{Char}} \newcommand {\Chambers} {\operatorname{Chambers}} \newcommand {\cp} {\bullet} \newcommand {\cl} {\operatorname{cl}} \newcommand {\codim} {\operatorname{codim}} \newcommand {\Coh} {\operatorname{Coh}} \newcommand {\coker} {\operatorname{coker}} \newcommand {\conv} {\operatorname{conv}} \newcommand {\cyc} {\mathrm{cyc}} \newcommand {\Diag} {\operatorname{Diag}} \newcommand {\da} {\downarrow} \newcommand {\depth} {\operatorname{depth}} \renewcommand{\div} {\operatorname{div}} \newcommand {\Di} {{\rm D}} \newcommand {\di} {{\rm d}} \newcommand {\Div} {\operatorname{Div}} \newcommand {\dlog} {\operatorname{dlog}} \newcommand {\Dlog} {\operatorname{Dlog}} \newcommand {\dom} {\operatorname{dom}} \newcommand {\dual} {\vee} \newcommand 
{\eps} {\varepsilon} \newcommand {\et} {\mathrm{\'{e}t}} \newcommand {\eqand} {\quad\text{and}\quad} \newcommand {\Ex} {\operatorname{Ex}} \newcommand {\End} {\operatorname{End}} \newcommand {\Ext} {\operatorname{Ext}} \newcommand {\Gal} {\operatorname{Gal}} \newcommand {\GL} {\operatorname{GL}} \newcommand {\Glue} {\underline{\mathrm{Glue}}} \newcommand {\Gm} {\GG_m} \newcommand {\gp} {{\operatorname{gp}}} \newcommand {\Gr} {\operatorname{Gr}} \newcommand {\hol} {\mathrm{hol}} \newcommand {\Hom} {\operatorname{Hom}} \newcommand {\height} {\operatorname{ht}} \newcommand {\id} {\operatorname{id}} \newcommand {\im} {\operatorname{im}} \newcommand {\inc} {\mathrm{in}} \newcommand {\ind} {\operatorname{ind}} \newcommand {\Int} {\operatorname{Int}} \newcommand {\irr} {\operatorname{irr}} \newcommand {\iso} {\simeq} \newcommand {\Isom} {\operatorname{Isom}} \newcommand {\Joints} {\operatorname{Joints}} \renewcommand {\ker } {\operatorname{kern}} \newcommand {\kk} {\Bbbk} \newcommand {\koker } {\operatorname{kokern}} \newcommand {\la} {\leftarrow} \newcommand {\leafvertices} {\Gamma^{[0]}_{\mathrm{leaf}}} \newcommand {\Lie} {\operatorname{Lie}} \newcommand {\Lin} {\operatorname{Lin}} \newcommand {\limdir} {\varinjlim} \newcommand {\liminv} {\varprojlim} \newcommand {\lla} {\longleftarrow} \newcommand {\loc} {\mathrm{loc}} \newcommand {\LogRings} {\underlinehigh{LogRings}} \newcommand {\longeur} {\operatorname{long}} \newcommand {\LPoly} {\underlinehigh{LPoly}} \newcommand {\lra} {\longrightarrow} \newcommand {\ls} {\dagger} \newcommand {\M} {\mathcal{M}} \newcommand {\Mbar} {\overline{\M}} \newcommand {\Map} {{\operatorname{Map}}} \renewcommand {\max} {{\operatorname{max}}} \newcommand {\maxid} {\mathfrak{m}} \newcommand {\minus} {\smallsetminus} \newcommand {\Mor} {\operatorname{Mor}} \newcommand {\MPL} {\operatorname{MPL}} \newcommand {\mult} {\operatorname{mult}} \newcommand {\myplus}[1]{\underset{#1}{\oplus}} \newcommand {\N} {\operatorname{N}} \newcommand {\no} {\mathrm{no}} \newcommand {\nor} {{\operatorname{nor}}} \newcommand {\Num} {\operatorname{Num}} \newcommand {\NA} {\operatorname{NA}} \newcommand {\NAP} {\operatorname{\overline{NA}}} \newcommand {\NB} {\operatorname{NB}} \newcommand {\NBP} {\operatorname{\overline{NB}}} \newcommand {\NEP} {\operatorname{\overline{NE}}} \newcommand {\NPP} {\operatorname{\overline{NP}}} \newcommand {\NS} {\operatorname{NS}} \renewcommand{\O} {\mathcal{O}} \newcommand {\ob} {\operatorname{ob}} \newcommand {\Ob} {\operatorname{Ob}} \newcommand {\ol} {\overline} \newcommand {\ord} {\operatorname{ord}} \renewcommand{\P} {\mathscr{P}} \newcommand {\PDiv} {\operatorname{PDiv}} \newcommand {\Pic} {\operatorname{Pic}} \newcommand {\PGL} {\operatorname{PGL}} \newcommand {\PL} {\operatorname{PL}} \newcommand {\PM} {\operatorname{PM}} \newcommand {\Pol} {\underline{\mathrm{Pol}}} \newcommand {\PQ} {\operatorname{PQ}} \newcommand {\pr} {\operatorname{pr}} \newcommand {\pre} {\mathrm{pre}} \newcommand {\pri} {\mathfrak{p}} \newcommand {\Proj} {\operatorname{Proj}} \newcommand {\pur} {\operatorname{pur}} \newcommand {\QP} {\operatorname{\overline{Q}}} \newcommand {\quadand} {\quad\text{and}\quad} \newcommand {\ra} {\to} \newcommand {\Ra} {\to} \newcommand {\rk} {\operatorname{rk}} \newcommand {\re} {\Re} \renewcommand {\red} {{\operatorname{red}}} \newcommand {\reg} {\mathrm{reg}} \newcommand {\Reg} {\operatorname{Reg}} \newcommand {\res} {\negthinspace\mid\negthinspace} \newcommand {\Rings} {\underlinehigh{Rings}} \newcommand {\rootvertex} {V_{\mathrm{root}}} 
\newcommand {\scrR} {\mathscr{R}} \newcommand {\scrB} {\mathscr{B}} \newcommand {\scrS} {\mathscr{S}} \newcommand {\sat} {\operatorname{sat}} \newcommand {\SAff} {\operatorname{SAff}} \newcommand {\SBs} {\operatorname{SBs}} \newcommand {\Sch} {\underline{\mathrm{Sch}}} \newcommand {\SF} {\operatorname{SF}} \newcommand {\shLS} {\mathcal{LS}} \newcommand {\sides} {\operatorname{Sides}} \newcommand {\sing} {\mathrm{sing}} \newcommand {\Sing} {\operatorname{Sing}} \newcommand {\SL} {\operatorname{SL}} \newcommand {\Spec} {\operatorname{Spec}} \newcommand {\Spf} {\operatorname{Spf}} \newcommand {\Strata} {\underline{\mathrm{Strata}}} \newcommand {\supp} {\operatorname{supp}} \newcommand {\Sym} {\operatorname{Sym}} \newcommand {\Symp} {\operatorname{Sym^+}} \newcommand {\sgn} {\operatorname{sgn}} \newcommand {\std} {\mathrm{std}} \newcommand {\tcont} {\operatorname{cont_\mathit{t}}} \newcommand {\tlog} {\operatorname{tlog}} \renewcommand {\top} {\mathrm{top}} \newcommand {\Top} {\underlinehigh{Top}} \newcommand {\topp} {\operatorname{Top}} \newcommand {\Tor} {\operatorname{Tor}} \newcommand {\Toric} {\underline{\mathrm{Toric}}} \newcommand {\Trans} {\operatorname{Trans}} \newcommand {\trdeg} {\operatorname{trdeg}} \newcommand {\vmult} {\operatorname{vmult}} \newcommand {\WDiv} {\operatorname{WDiv}} \newcommand {\D} {\mathfrak D} \newcommand {\T} {\mathfrak T} \newcommand {\X} {\mathfrak X} \newcommand {\Y} {\mathfrak Y} \newcommand {\Z} {\mathfrak Z} \newcommand {\U} {\mathscr{U}} \newcommand {\W} {\mathscr{W}} \newcommand {\I} {\mathrm{\,I}} \newcommand {\II} {\mathrm{\,II}} \newcommand {\III} {\mathrm{\,III}} \newcommand {\IV} {\mathrm{\,IV}} \newcommand{\set}[2]{ \left\{ \left. {#1} \; \right| \;\: {#2} \right\} } \newcommand{\grad}{\nabla} \newcommand{\ham}{\mathcal{X}} \newcommand{\bij}{\stackrel{\sim}{\to}} \newcommand{\sur}{\twoheadrightarrow} \newcommand{\inj}{\hookrightarrow} \newcommand{\targetring}{R_{g,\sigma}^k} \newcommand{\so}{\quad \Rightarrow \quad} \newcommand{\Changed}[1]{{\color{blue} #1}} \def\dual#1{{#1}^{\scriptscriptstyle \vee}} \def\mapright#1{\smash{ \mathop{\longrightarrow}\limits^{#1}}} \def\mapleft#1{\smash{ \mathop{\longleftarrow}\limits^{#1}}} \def\exact#1#2#3{0\to#1\to#2\to#3\to0} \def\mapup#1{\Big\uparrow \rlap{$\vcenter{\hbox{$\scriptstyle#1$}}$}} \def\mapdown#1{\Big\downarrow \rlap{$\vcenter{\hbox{$\scriptstyle#1$}}$}} \def\mydate{\ifcase\month \or January\or February\or March\or April\or May\or June\or July\or August\or September\or October\or \space\number\day,\space\number\year} \long\def\symbolfootnote[#1]#2{\begingroup\def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup} \begin{document} \title[Tropical Landau-Ginzburg]{A tropical view on Landau-Ginzburg models} \author{Michael Carl} \address{\tiny Alfred-Delp-Str.~2, 73430 Aalen, Germany} \email{[email protected]} \author{Max Pumperla} \address{\tiny IU Internationale Hochschule, Juri-Gagarin-Ring~152, 99084~Erfurt, Germany\\ Anyscale, Inc., 2080 Addison St Ste 7, Berkeley, CA 94704, USA} \email{[email protected]} \author{Bernd Siebert} \address{\tiny Department of Mathematics, The Univ.\ of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA} \email{[email protected]} \begin{abstract} This paper, largely written in 2009/2010, fits Landau-Ginzburg models into the mirror symmetry program pursued by the last author jointly with Mark Gross since 2001. 
This point of view transparently brings in tropical disks of Maslov index~$2$ via the notion of broken lines, previously introduced in two dimensions by Mark Gross in his study of mirror symmetry for $\PP^2$. A major insight is the equivalence of properness of the Landau-Ginzburg potential with smoothness of the anticanonical divisor on the mirror side. We obtain proper superpotentials which agree on an open part with those classically known for toric varieties. Examples include mirror LG models for non-singular and singular del Pezzo surfaces, Hirzebruch surfaces and some Fano threefolds. \end{abstract} \thanks{M.P.\ was supported by the \emph{Studienstiftung des deutschen Volkes}; research by B.S.\ was supported by NSF grant DMS-1903437.} \date{\today} \maketitle \tableofcontents \section*{Preface} This paper, largely written in 2009/2010, investigates the incorporation of Landau-Ginzburg models into the toric degeneration approach to mirror symmetry of the last author with Mark Gross \cite{logmirror,affinecomplex}. At the time we could not answer a key question concerning the existence of the algorithm in \cite{affinecomplex} in the relevant unbounded case. We also felt that our construction poses many interesting questions and more should be said. With two of the authors leaving academia (M.C.~2010, M.P.~2011), the paper was eventually left in preliminary form on the last author's Hamburg webpage in the state of September~2010.\footnote{The last named author presented our findings at the workshop ``Derived Categories, Holomorphic Symplectic Geometry, Birational Geometry, Deformation Theory'' at IHP/Paris in May 2010 and at VBAC~2010 in Lisbon in June~2010.} This preliminary version will remain available as an ancillary file in the arXiv-submission. A variation of the last three sections partly treating other cases appeared as part of the second author's doctoral thesis \cite{MaxThesis}. The key consistency proof of the superpotential in \S3 and \S4 has been repeatedly used in the construction of generalized theta functions in the surface and cluster variety cases, notably in \cite{GHK,GHKK}. The question of existence of a consistent wall structure in the non-compact case eventually turned out to be best answered trivially by a compactification requirement (Definition~\ref{Def: compactifiable (B,P)}, Proposition~\ref{Prop: wall structures exists for compactifiable cases}), or in the context of intrinsic mirror symmetry \cite{Assoc, CanonicalWalls}. With a recent increase in studies of the smooth anticanonical divisor case, the case of central interest in this paper, it now seems the right time to finalize this paper. To preserve the line of historical developments, we have mostly only done minor edits for accuracy and clarity. The exceptions are the mentioned compactification criterion in \S1, Remark~\ref{Rem: algebraizability} on algebraizability of the superpotential, a new section on the fibers of the superpotential (\S5), and a corrected treatment of the mirror of Hirzebruch surfaces taken from \cite[\S5.3]{MaxThesis}. We have also added some references to newer developments in the introduction and in some footnotes, but this did not seem the right place to give a comprehensive overview and due credit to all the wonderful developments that have happened around the topic in the last decade. We apologize with everybody whose newer contributions are not mentioned in this paper. 
\S5 existed in some form for many years and has been distributed occasionally, but the LG mirror map (Definition~\ref{Def: LG mirror map} and Theorem~\ref{Thm: W versus w}) has only rather recently been spelled out in discussions with Helge Ruddat in a joint project with Michel van Garrel on enumerative period integrals in Landau-Ginzburg models \cite{GRS}. \section*{Introduction} Mirror symmetry has been suggested both by mathematicians \cite{Givental} and physicists \cite{Witten,HoriVafa} to extend from Calabi-Yau varieties to a correspondence between Fano varieties and Landau-Ginzburg models. Mathematically a Landau-Ginzburg model is a non-compact K\"ahler manifold with a holomorphic function, the superpotential. Until recently, the majority of studies confined themselves to toric cases where the construction of the mirror is immediate. The one exception we are aware of is the work of Auroux, Katzarkov and Orlov on mirror symmetry for del Pezzo surfaces \cite{AKO}, where a symplectic mirror is constructed by a surgery construction. The general Floer-theoretic perspective for the mirror Landau-Ginzburg model of an SYZ fibered logarithmic Calabi-Yau manifold has been discussed by Auroux in \cite{auroux1,auroux2}. In the following we use the phrase log Calabi-Yau to refer to a pair $(X,D)$ of a complete variety $X$ over a field with a non-zero effective anticanonical divisor $D\subset X$.\footnote{In \cite{affinecomplex} log Calabi-Yau varieties were referred to as Calabi-Yau pairs to avoid confusion with the central fiber of toric degenerations of Calabi-Yau varieties.} The purpose of this paper is to fit the Fano/Landau-Ginzburg mirror correspondence into the mirror symmetry program via toric degenerations pursued by the last author jointly with Mark Gross \cite{logmirror,affinecomplex}. The program as it stands suggests a non-compact variety as the mirror of a log Calabi-Yau variety, or rather toric degenerations of these varieties. So the main new ingredient is the construction of the superpotential. The key technical idea of \emph{broken lines} (Definition~\ref{def: broken lines}) for the construction of the superpotential has already appeared in a different context in the two-dimensional situation in Gross' mirror correspondence for $\PP^2$ \cite{PP2mirror}. We replace his case-by-case study of well-definedness with a scattering computation, making it work in all dimensions. Our main findings can be summarized as follows. \begin{enumerate} \item From our point of view, the natural data on the Fano side is a toric degeneration of log Calabi-Yau varieties as defined in \cite{affinecomplex}, Definition~1.8. In particular, if arising from the localization of an algebraic family, the general fiber is a pair $(\check X,\check D)$ of a complete variety $\check X$ and a reduced anticanonical divisor $\check D$. No positivity property is ever used in our construction apart from effectivity of the anticanonical bundle. \item The mirror is a toric degeneration of non-compact Calabi-Yau varieties, together with a canonically defined regular function on the total space of the degeneration (Proposition~\ref{Prop: wall structures exists for compactifiable cases}). \item The superpotential is proper if and only if the anticanonical divisor $\check D$ on the log Calabi-Yau side is locally irreducible (Theorem~\ref{Thm: Properness}). These conditions also have clean descriptions on the underlying tropical models governing the mirror construction from \cite{logmirror,affinecomplex}. 
\S3 and \S4 give the all order construction of the superpotential, summarized in Theorem~\ref{Thm: all order W}. \item For smooth toric Fano varieties our construction provides a canonical (partial) compactification of the Hori-Vafa construction \cite{HoriVafa} (Corollary~\ref{Cor: reproduce Hori-Vafa} in the surface case, \cite[Thm.\,5.4]{MaxThesis} in all dimensions). But note Remark~\ref{Rem: algebraizability} concerning the general question of algebraizability of the superpotential. \item The terms in the superpotential can be interpreted in terms of virtual numbers of tropical disks, at least in dimension two (Proposition~\ref{Prop: Virtual count}). On the Fano side these conjecturally count holomorphic disks with boundary on a Lagrangian torus.\footnote{This picture has recently been confirmed in \cite{CanonicalWalls} in terms of punctured Gromov-Witten invariants.} \item The natural holomorphic parameters occurring in the construction on the Fano side lie in $H^1(\check X_0,\Theta_{(\check X_0,\check D_0)})$ where $\Theta_{(\check X_0,\check D_0)}$ is the logarithmic tangent bundle of the central fiber $(\check X_0,\check D_0)$ in (1). This group rules infinitesimal deformations of the pair $(\check X_0,\check D_0)$. We have not carefully analyzed the parameters on the Landau-Ginzburg side and their correspondence to the K\"ahler parameters on the log Calabi-Yau side.\footnote{The correspondence is transparent from the intrinsic mirror symmetry perspective \cite{Assoc,CanonicalWalls}. } Note however that all parameters come from deformations of the underlying space, our superpotential does not add extra parameters. \item Explicit computations include non-singular and singular del Pezzo surfaces, the Hirzebruch surfaces $\FF_2$ and $\FF_3$, $\PP^3$ and a singular Fano threefold (\S7--\S8). \end{enumerate} Throughout we work over an algebraically closed field $\kk$ of characteristic $0$. We use check-adorned symbols $\check X,\check D, \check\X,\check\D, \check B$ etc.\ for the log Calabi-Yau side, and unadorned symbols for the Landau-Ginzburg side. For an integral polyhedron $\tau\subset\RR^n$ we denote by $\Lambda_\tau$ the free abelian group of integral tangent vector fields on $\tau$. We would like to thank Denis Auroux, Mark Gross and Helge Ruddat for valuable discussions. \section{Landau-Ginzburg tropical data and scattering diagram} Throughout the paper we assume familiarity with the basic notions of toric degenerations from \cite{logmirror}, as reviewed in \cite[\S\S1.1--1.2]{affinecomplex}. We quickly review the basic picture. Let $(\check B,\check \P,\check\varphi)$ be the polarized intersection complex or \emph{cone picture} \cite[Expl.\,1.13]{affinecomplex} associated to a (schematic or formal) proper polarized toric degeneration $(\check\pi:\check\X\to T,\check\D)$ of log Calabi-Yau varieties \cite[Defs.\,1.8/1.9]{affinecomplex}. Here $T$ is the (formal) spectrum of a discrete valuation $\kk$-algebra, typically $\kk\lfor t\rfor$. Recall that $\check B$ is a topological manifold with a $\ZZ$-affine structure outside a codimension two cell complex $\check\Delta\subset \check B$, also called the discriminant locus; $\check\P$ is a decomposition of $\check B$ into integral, convex, but possibly unbounded polyhedra containing $\check\Delta$ as subcomplex of the first barycentric subdivision disjoint from vertices and the interiors of maximal cells; and $\check\varphi$ is a (generally multivalued) strictly convex piecewise linear function with integral slopes. 
The irreducible components of the central fiber $\check X_0\subset\check\X$ are the toric varieties with momentum polytopes the maximal cells in $\check \P$, and lower dimensional cells describe their intersections. Equivalently, one has the discrete Legendre dual data $(B,\P,\varphi)$, referred to as the dual intersection complex or \emph{fan picture} of the same degeneration \cite[Expl.\,1.11]{affinecomplex}, or the cone picture of the mirror via discrete Legendre duality \cite[Constr.\,1.16]{affinecomplex}. While \cite{logmirror} only treated the case of trivial canonical bundle or closed $B$, \cite{affinecomplex} gave the straightforward generalization to the case of interest here of log Calabi-Yau varieties, that is, a variety $\check X$ and an anticanonical divisor $\check D\subseteq \check X$. These correspond to compact $\check B$ with locally convex boundary $\partial \check B$, with $\partial\check B\neq\emptyset$ iff $\check D\neq\emptyset$. It holds $\partial \check B\neq \emptyset$ iff the discrete Legendre-dual $B$ is non-compact. The proof of \cite[Thm.\,5.4]{logmirror} then still shows that under a primitivity assumption on the local affine geometry (``simple singularities, \cite[Def.\,1.60]{logmirror}), the corresponding central fibers $(\check X_0,\M_{\check X_0})$ of toric degenerations of log Calabi-Yau varieties $(\check \X\to T,\check \D)$, as a log space, are classified by the cohomology groups \[ H^1(B, i_*\Lambda_B\otimes_\ZZ\GG_m(\kk))= H^1(\check B,\check\imath_*\check\Lambda_{\check B}\otimes_\ZZ\GG_m(\kk)) \] computed from the affine geometry of $B$ or $\check B$. Here $\Lambda_B$ denotes the sheaf of integral tangent vectors on the complement of the real codimension two singular locus $\Delta\subseteq B$, $i:B\setminus\Delta\to B$ is the inclusion, and $\check\Lambda_B=\shHom(\Lambda_B,\ZZ_B)$. Similar notations apply to $\check B$. The correspondence works via twisting toric patchings of standard affine toric models for the degeneration via so-called \emph{open gluing data} $s$ \cite[Def.\,1.18]{affinecomplex} and showing that any choice of $s$ gives rise to a log structure on $\check X_0=\check X_0(s)$ over the standard log point. This log structure is unique up to isomorphisms fixing $\check X_0$. Unlike in abstract deformation theory, the space of deformations is not just a torsor over the controlling cohomology group, but a group itself. In particular, trivial gluing data $s=1$ lead to a distinguished choice of log Calabi-Yau central fiber $(\check X_0,\M_{\check X_0})$. The log structure also carries the information of the anticanonical divisor $\check D_0\subseteq \check X_0$, which hence is suppressed in the notation. Conversely, \cite[Thm.\,3.1]{affinecomplex} constructs a canonical formal polarized toric degeneration $\pi:(\check\X\to T,\check\D)$ with given logarithmic central fiber $(\check X_0,\M_{\check X_0})$, even under a weaker assumption (\emph{local rigidity}, \cite[Def.\,1.26]{affinecomplex}) than simplicity. Thus we understand this side of the mirror correspondence rather well. The objective of this paper is to give a similarly canonical picture on the mirror side. The mirrors of Fano varieties are suggested to be so-called Landau-Ginzburg models (LG-models). Mathematically these are non-compact algebraic varieties with a holomorphic function, referred to as \emph{superpotential}, see e.g.\ \cite{HoriVafa,ChoOh,AKO,FOOO1}. 
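To fix ideas, we briefly recall the simplest classical example; it is included only for orientation, and the normalization of the constant below is a matter of convention. In the Hori-Vafa picture \cite{HoriVafa}, the Landau-Ginzburg model proposed as the mirror of $\PP^2$ is the algebraic torus $\Spec \kk[x^{\pm 1},y^{\pm 1}]$ together with the superpotential \[ W(x,y)= x+y+\frac{\kappa}{xy}, \] where the constant $\kappa$ keeps track of the K\"ahler parameter. Note that $W$ is not a proper map to $\AA^1$, since its fibers are non-compact affine curves; as explained in the introduction, one aim of the construction below is to canonically (partially) compactify such models, making the superpotential proper under suitable hypotheses on the mirror log Calabi-Yau pair.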
Following the general program laid out in \cite{logmirror} and \cite{affinecomplex}, we construct LG-models via deformations of a non-compact union of toric varieties. The superpotential is constructed by canonical extension from the central fiber. Our starting point in this paper is as follows. \begin{assumption} \label{overall assumption} Let $(B,\P,\varphi)$ be a non-compact, polarized, (integral) tropical manifold without boundary \cite[Def.\,1.2]{affinecomplex}. We further assume that $(B,\P,\varphi)$ comes with a compatible sequence of consistent (wall) structures $\scrS_k$ as defined in \cite[Defs.\,2.22, 2.28, 2.41]{affinecomplex}, for some choice of open gluing data $s$. \end{assumption} For applications in mirror symmetry, $(B,\P,\varphi)$ is the Legendre-dual of a compact $(\check B,\check \P,\check \varphi)$ with locally convex boundary. Unfortunately, the algorithmic construction of consistent structures in \cite{affinecomplex} is problematic at several places in the non-compact case. The only practical general assumption we are aware of to make the algorithm work in the non-compact case adds the following convexity requirement at infinity. \begin{definition} \label{Def: compactifiable (B,P)} We call a tropical manifold $(B,\P)$ with or without boundary \emph{compactifiable} if there exists a compact subset $K\subseteq B$ containing a neighborhood of the union of all bounded cells of $\P$ and a proper continuous map $\psi: B\setminus K\to \RR_{\ge 0}$ with the following properties: (1)~$\psi$ is locally a convex function on $B\setminus (K\cup\Delta)$. (2)~Each unbounded cell $\sigma\in\P$ has a finite integral polyhedral decomposition $\P_\sigma$ such that $\psi|_{\sigma\cap (B\setminus K)}$ is a convex, integral, piecewise affine function with respect to $\P_\sigma$. \end{definition} The existence of $\psi$ in Definition~\ref{Def: compactifiable (B,P)} makes it possible to exhaust $B$ by tropical manifolds as follows. \begin{lemma} \label{Lem: Exhaustion} Assume that $(B,\P)$ is compactifiable (Definition~\ref{Def: compactifiable (B,P)}). Then there exists a sequence of compact subsets $\ol B_1\subseteq\ol B_2\subseteq \ldots\subseteq B$ with (1)~$B=\bigcup_{\nu\ge 1} \ol B_\nu$, and (2)~$(\ol B_\nu,\ol\P_\nu)$ with $\ol\P_\nu=\big\{\sigma\cap\ol B_\nu\,\big|\, \sigma\in\P\big\}$ is a tropical manifold in the sense of \cite[Def.\,1.32]{affinecomplex}. \end{lemma} \begin{proof} Let $K\subseteq B$ and $\psi:B\setminus K\to \RR_{\ge0}$ be as in Definition~\ref{Def: compactifiable (B,P)}. Define $\ol B_\nu= K\cup \psi^{-1}\big([0,\nu]\big)$. Properness of $\psi$ implies $B=\bigcup_\nu \ol B_\nu$ as claimed in (1). Next observe that all bounded cells of $\P$ are contained in all $\ol B_\nu$ since they are contained in $K$. For an unbounded cell $\sigma\in\P$, Definition~\ref{Def: compactifiable (B,P)},(2) implies that the intersection $\sigma\cap \ol B_\nu$ is a compact convex polyhedron defined over $\QQ$. The denominators appearing in the vertices of $\partial(\sigma\cap\ol B_\nu)$ disappear after passing to an appropriate multiple of $\nu$. Thus, up to passing to a subsequence, $\P$ induces an integral polyhedral decomposition $\ol\P_\nu$ of $\ol B_\nu$. Convexity at boundary points as required in \cite[Def.\,1.32,(2)]{affinecomplex} follows from convexity of $\psi$ posited in Definition~\ref{Def: compactifiable (B,P)},(1), provided $\partial\ol B_\nu\cap K=\emptyset$.
The last condition clearly holds for $\nu\gg0$, hence can be achieved for all $\nu$ by relabelling the $\ol B_\nu$. \end{proof} The proof of Lemma~\ref{Lem: Exhaustion} motivates the following definition. \begin{definition} \label{Def: truncation} We call a tropical manifold $(\ol B,\ol \P)$, with compact $\ol B\subset B$ containing all bounded cells of $\P$ and intersecting the interior of each unbounded cell, a \emph{truncation} of $(B,\P)$. \end{definition} With an exhaustion as in Lemma~\ref{Lem: Exhaustion} we can now apply the main theorem of \cite{affinecomplex} to produce a compatible sequence of consistent wall structures on $(B,\P,\varphi)$. \begin{proposition} \label{Prop: wall structures exists for compactifiable cases} Let $(B,\P,\varphi)$ be a polarized, tropical manifold with $(B,\P)$ compactifiable. Assume further that each $(\ol B_\nu,\ol \P_\nu)$ from Lemma~\ref{Lem: Exhaustion} describes the intersection complex of a locally rigid, positive, pre-polarized toric log Calabi-Yau variety $(\ol X_0,\ol B_0)$ \cite[Defs.\,1.4, 1.26, 1.23]{affinecomplex}\footnote{The assumptions are fulfilled for example if $(B,\P)$ has simple singularities.}. Then there exists a compatible sequence of consistent (wall) structures $\scrS_k$ on $(B,\P,\varphi)$. The formal toric degeneration \cite[Def.\,1.9]{affinecomplex} $\pi: (\X,\D)\to\Spf\kk\lfor t\rfor$ defined by the $\scrS_k$ according to \cite[Prop.\,2.42]{affinecomplex} has an open embedding into a proper formal family $(\ol\X,\ol\D)\to\Spf\kk\lfor t\rfor$, with central fiber $\ol X_0$. Moreover, assuming $H^1(\ol X_0,\O_{\ol X_0})=H^2(\ol X_0,\O_{\ol X_0})=0$, this family is algebraizable: There exists a proper flat morphism $\ol\pi:(\ol\shX,\ol\shD)\to \Spec\kk\lfor t\rfor$, a toric degeneration in the sense of \cite[Def.1.8]{affinecomplex}, and a divisor $\shZ\subset \ol\shX$ flat over $\Spec\kk\lfor t\rfor$ such that $\pi$ is isomorphic to the completion of $\ol\pi|_{\ol\shX\setminus\shZ}$ at the central fiber. \end{proposition} \begin{proof} For each $\nu$, \cite[Thm.\,3.1]{affinecomplex} produces a compatible sequence $\scrS_{k,\nu}$ of consistent wall structures on $(\ol B_\nu,\ol\P_\nu)$. Comparing the inductive construction of $\scrS_{k,\nu}$ for fixed $k$, but taking $\nu\to\infty$ shows that the sets of walls stabilize on any compact subset of $B$. Note that by convexity no walls emanate from $\partial \ol B_\nu$. We can therefore take the limit over $\nu$ to define $\scrS_k$ on $B$. Mutual compatibility and consistency follows by consideration on compact subsets of $B$, using consistency and compatibility for the wall structure on $(\ol B_\nu,\ol\P_\nu)$ for $\nu$ sufficiently large. The compactification $(\ol\X,\ol\D)$ is constructed by observing that for each $k$ the proper schemes over $\kk[t]/(t^{k+1})$ obtained from $\scrS_{k,\nu}$ stabilize for $\nu\to\infty$. Taking this limit first and then the limit over $k$ now defines the formal scheme $\ol\X$ proper over $\Spf\kk\lfor t\rfor$ and containing $(\X,\D)$ as an open dense formal subscheme. With the additional assumptions, algebraizability follows from the Grothendieck algebraization theorem as in \cite[Cor.\,1.31]{affinecomplex}. \end{proof} \begin{remark} \label{Rem: compactifying divisor} The compactifying divisor $\Z\subset\ol\X$ with $\X=\ol\X\setminus\Z$ can be described by inspection of the polyhedral decomposition $\P_\nu$ of $\ol B_\nu$ for any sufficiently large $\nu$ and the asymptotic behavior of $\scrS_k$. 
For each sequence $F=(F_\nu)_\nu$ of mutually parallel maximal flat affine subsets $F_\nu\subseteq\partial\ol B_\nu$ there exists a maximal closed reduced subscheme $\Z_F\subseteq\Z$, and $\Z$ is the schematic union of the $\Z_F$. The restriction of $\pi$ to $\Z_F$ is itself a toric degeneration defined by the walls of $\scrS_k$ that intersect $F_\nu$ for all $\nu$, with wall functions obtained by disregarding all monomials not tangent to $F_\nu$. These statements follow directly from the gluing construction \cite[\S2.6]{affinecomplex}. See \S\ref{sect: special fiber} for a further discussion in the context of fibers of the superpotential. \end{remark} \section{The superpotential at $t$-order zero} Recall from \cite[Constr.\,2.7]{affinecomplex} the definition of the rings $R_{g,\sigma}^k$ used to build $\X$ over $\kk[t]/(t^{k+1})$. These depend on an inclusion $g:\omega\to \tau$ of cells $\omega,\tau\in\P$ and a maximal reference cell $\sigma$ containing $\tau$ (hence $\omega$). The ring $R_{g,\sigma}^k$ provides the $k$-th order thickening inside $\X$ of the stratum $X_\tau\subseteq X_0$ with momentum polytope $\tau$ in an affine open neighbourhood of the $\omega$-stratum $X_\omega\subseteq X_\tau$. The order is measured with respect to all irreducible components of $X_0$ containing the $\tau$-stratum. The reference cell $\sigma$ is necessary to fix the affine chart to work on. The rings obtained in this way from affine toric models are then localized at all functions carried by codimension one cells (``\emph{slabs}'') containing $\tau$. The details of this construction are rather irrelevant for what follows except that the ring $R^k_{g,\sigma}$ is a localization of a monomial ring, with each monomial $z^m$ having an associated underlying tangent vector $\ol m\in\Lambda_\sigma$, and an order of vanishing $\ord_{\sigma'} z^m$ for each maximal cell $\sigma'\supseteq\tau$. If $\tau=\sigma$ and $\ord_\sigma z^m=l$ then $z^m$ restricts to $z^{\ol m}t^l$ in the Laurent polynomial ring $R^k_{\id_\sigma,\sigma}= A[\Lambda_\sigma]$ describing the trivial $k$-th order deformation of the big cell of the irreducible component $X_\sigma\subseteq X_0$ defined by $\sigma$ over $A=\kk[t]/(t^{k+1})$. We need the following definition. \begin{definition} \label{Def: parallel edges} We call unbounded edges $\omega,\omega'\in\P$ \emph{parallel} if there exists a sequence of unbounded edges $\omega=\omega_0,\omega_1,\ldots,\omega_r=\omega'$ and maximal cells $\sigma_1,\ldots,\sigma_r\in\P$ with $\omega_{i-1},\omega_i\subseteq\sigma_i$ parallel ($\Lambda_{\omega}=\Lambda_{\omega'}$ as sublattices of $\Lambda_\sigma$) and unbounded in the same direction. A tropical manifold $(B,\P)$ with all unbounded edges parallel is called \emph{asymptotically cylindrical}. \end{definition} Let now $\sigma\in\P$ be an unbounded maximal cell. For each unbounded edge $\omega\subseteq \sigma$ there is a unique monomial $z^{m_\omega}\in R^0_{\id_\sigma,\sigma}$ with $\ord_\sigma (m_\omega)=0$ and $-\overline{m_\omega}$ a primitive generator of $\Lambda_\omega\subseteq \Lambda_\sigma$ pointing in the unbounded direction of $\omega$. Denote by $\scrR(\sigma)$ the set of such monomials $m_\omega$. We identify monomials for parallel unbounded edges $\omega,\omega'$. So these contribute only one exponent $m_\omega= m_{\omega'}$ to $\scrR(\sigma)$. Now at any point of $\partial\sigma$, the tangent vector $-\overline{m_\omega}$ points into $\sigma$. 
Hence \begin{equation} \label{Eqn: W^0(sigma)} W^0(\sigma):= \sum_{m\in \scrR(\sigma)} z^m \end{equation} extends to a regular function on the component $X_{\sigma}\subseteq X_0$ corresponding to $\sigma$. For bounded $\sigma$ define $W^0(\sigma)=0$. To simplify the following discussion, from now on we only consider the following situation.\footnote{This assumption was missing in preliminary versions of this paper and fixes a subtlety arising with non-trivial gluing data. The correct treatment of gluing data is given in \cite[\S5.2]{Theta}.} \begin{assumption} \label{Ass: trivial gluing data} One of the following two conditions hold. \begin{enumerate} \item If $\omega,\omega'\in \P$ are parallel edges there exists a maximal cell $\sigma\in\P$ with $\omega\cup\omega'\subseteq \sigma$. \item The open gluing data $s$ are trivial. \end{enumerate} \end{assumption} Since by Assumption~\ref{Ass: trivial gluing data} the restrictions of the $W^0(\sigma)$ to lower dimensional toric strata agree, they define a function $W^0\in\O(X_0)$. This is our \emph{superpotential at order $0$}. A motivation for this definition in terms of counts of tropical analogues of holomorphic disks will be given in Section~\ref{sect:tropical disks}. One insight in this paper is that in studying LG-models tropically it is advisable to restrict to asymptotically cylindrical $B$. \begin{proposition} \label{properness criterion} The order zero superpotential $W^0:X_0\to\AA^1$ is proper iff $(B,\P)$ is asymptotically cylindrical. \end{proposition} \begin{proof} It suffices to show the claimed equivalence after restriction to a non-compact irreducible component $X_\sigma\subseteq X_0$, that is, for $W^0(\sigma)$ from \eqref{Eqn: W^0(sigma)}. If all unbounded edges are parallel, $W^0(\sigma)$ is a multiple of a monomial with compact zero locus, and hence is proper. For the converse we show that if $m_{\omega}\neq m_{\omega'}$ for some $\omega,\omega' \subseteq\sigma$ then $W^0(\sigma)$ is not proper. The idea is to look at the closure of the zero locus of $W^0(\sigma)$ in an appropriate toric compactification $X_{\tilde \sigma}\supset X_\sigma$. Let $\omega_0,\ldots,\omega_r$ be the unbounded edges of $\sigma$ and write $m_i=m_{\omega_i}$ for their primitive generators. By assumption $\conv\{0,m_0,\ldots,m_r\}$ has a face not containing $0$ of dimension at least one. Let $H\subseteq \Lambda_\sigma$ be a supporting affine hyperplane of such a face. After relabeling we may assume $m_0,\ldots,m_s$ are the vertices of this face. Note that all $m_i-m_0$ are contained in the affine half-space $H-\RR_{\ge 0} m_0$. Now $\tilde\sigma= \sigma\cap(H-\RR_{\ge 0} m_0)$ is a rational bounded polytope $\tilde\sigma\subseteq \sigma$ with a single facet $\tau\subset \tilde\sigma$ not contained in a facet of $\sigma$. Thus the toric variety $X_{\tilde\sigma}$ with momentum polytope $\tilde\sigma$ contains $X_\sigma$ as the complement of the toric prime divisor $X_\tau\subset X_{\tilde \sigma}$. Note that $\Lambda_\tau= H-m_0$ by construction. To study the closure of the zero locus of $W^0(\sigma)$ in $X_{\tilde\sigma}$ consider the rational function $z^{-m_0}\cdot W^0(\sigma)$ on $X_{\tilde\sigma}$. This rational function does not contain $X_\tau$ in its polar locus, and its restriction to the big cell of $X_\tau$ is \[ 1+\sum_{i=1}^s z^{m_i-m_0}\in \kk[\Lambda_\tau]. \] In fact, $z^{m_i-m_0}$ for $i>s$ vanishes along $X_\tau$. Since $s\ge 1$ this Laurent polynomial has a non-empty zero locus. 
This proves that unless $m_i=m_j$ for all $i,j$ the closure of the zero locus of $W^0(\sigma)$ in $X_{\tilde\sigma}$ has a non-empty intersection with $X_\tau$, and hence $W^0(\sigma)$ cannot be proper. \end{proof} Thus if one is to study LG models via our degeneration approach, then to obtain the full picture one has to restrict to asymptotically cylindrical $(B,\P)$. The mirror-side interpretation of the condition that $(B,\P)$ be asymptotically cylindrical brings us to one of the main insights of this paper. We first formulate the Legendre-dual of Definition~\ref{Def: parallel edges}. \begin{definition} A tropical manifold $(\check B,\check \P)$ is said to have \emph{flat boundary} if $\partial\check B$ is locally flat in the affine structure. \end{definition} \begin{theorem} \label{Thm: Properness} Let $\X\to \Spf\kk\lfor t\rfor$ and $(\pi:\check\X\to T,\check\D)$ be polarized toric degenerations with Legendre dual intersection complexes $(B,\P,\varphi)$, $(\check B,\check\P,\check\varphi)$. Then the following are equivalent. \begin{enumerate} \item The order zero superpotential $W^0:X_0\to\AA^1$ defined in \eqref{Eqn: W^0(sigma)} is proper. \item $(B,\P)$ is asymptotically cylindrical. \item $(\check B,\check \P)$ has flat boundary. \item The restriction $\pi|_{\check\D}:\check\D\to T$ of $\pi:\check\X\to T$ is itself a toric degeneration. \end{enumerate} \end{theorem} \begin{proof} The equivalence of (1) and (2) is the content of Proposition~\ref{properness criterion}. The Legendre dual of the condition that $(B,\P)$ is asymptotically cylindrical says that $\partial \check B\subseteq \check B$ is itself an affine manifold with singularities to which our program applies. If this is the case then from the definition of $\check\D\subseteq \check\X$ it follows that $\check\D\to T$ is obtained by restricting the slab functions to $\check D_0\subseteq \check X_0$ and running our gluing construction \cite[\S2.6,\S2.7]{affinecomplex} on $\partial\check B$. The result is hence a toric degeneration of Calabi-Yau varieties. Conversely, assume that $\rho,\rho'\subseteq\partial \check B$ are two neighboring $(n-1)$-faces with $\Lambda_\rho\neq\Lambda_{\rho'}$ as subspaces of $\Lambda_v$ for $v\in\rho\cap\rho'$. In the notation of \cite[Constr.\,2.7]{affinecomplex}, the toric local model of $\check\X\to T$ is given by $\kk[P_{\rho\cap\rho',\sigma}]$ for $\sigma\in\check\P$ a maximal cell containing $\rho\cap\rho'$, with $\check\D$ locally corresponding to $\rho\cup\rho'$. It is now easy to see that $\check\D\to T$ is not formally a smoothing of $\check D_0$ at the generic point of $\check X_{\rho\cap\rho'}$ unless $\partial \check B$ is straight at $\rho\cap\rho'$. Hence $\partial\check B$ has to be straight everywhere, that is, flat, for $\check\D\to T$ to be a toric degeneration. \end{proof} Theorem~\ref{Thm: Properness},(4) motivates the following definition. \begin{definition} \label{Def: compact type} A toric degeneration of log Calabi-Yau varieties $(\pi:\check\X\to T,\check\D)$ is called \emph{of compact type} if $\check\D\to T$ is also a toric degeneration of Calabi-Yau varieties. \end{definition} As a first example we consider the case of $\PP^2$. \begin{example} \label{Expl: PP2} The standard method to construct the LG-mirror for $\PP^2$ is to start from the momentum polytope $\Xi= \conv\{(-1,-1),(2,-1),(-1,2)\}$ of $\PP^2$ with its anticanonical polarization \cite{HoriVafa}.
The rays of the corresponding normal fan associated to this polytope (using inward pointing normal vectors as in~\cite{affinecomplex}) are generated by $(1,0), (0,1), (-1,-1)$. Calling the monomials corresponding to the generators of the first two rays $x$ and $y$, respectively, we obtain the usual (non-proper) Landau-Ginzburg model on the big torus $(\GG_m(\kk))^2$, the function $x+y+\frac{1}{xy}$. \begin{figure} \input{PP2.pspdftex}\hspace{1cm} \input{mirrorPP2.pspdftex} \vspace{0.5cm} \input{scattering_mirrorPP2.pspdftex} \caption{An intersection complex $(\check B,\check \P)$ for $\PP^2$ with straight boundary and its Legendre dual $(B,\varphi)$ for the minimal polarization, with a chart on the complement of the shaded region and a chart showing the three parallel unbounded edges.} \label{fig:PP^2} \end{figure} To obtain a proper superpotential we need to make the boundary of the momentum polytope flat in affine coordinates. To do this we trade the corners with singular points in the interior. The simplest choice is a decomposition $\check \P$ of $\check B=\Xi$ into three triangles with three singular points with simple monodromy, that is, conjugate to $\left(\begin{smallmatrix}1&0\\1&1 \end{smallmatrix}\right)$, as depicted in Figure~\ref{fig:PP^2}. A minimal choice of the PL-function $\check \varphi$ takes values $0$ at the origin and $1$ on $\partial \check B$.\footnote{\cite[Expl.\,6.2]{Theta} identifies the generic fiber of the algebraization $(\check X\to T,\check D)$ of the resulting toric degeneration with the family of elliptic curves $g(t)(x_0^3+x_1^3+x_2^3)+x_0 x_1 x_2$ in the trivial deformation of $\PP^2$. Here $g(t)=t+O(t^2)$ is an analytic change of parameter related to Jacobian theta functions.} For this choice of $\check \varphi$ the Legendre dual of $(\check B,\check\P,\check \varphi)$ is shown in Figure~\ref{fig:PP^2} on the right. Note that the unbounded edges are indeed parallel, so each unbounded edge comes with copies of the other two unbounded edges parallel at integral distance~$1$. Now let us compute $X_0$ and $W_{\PP^2}^0$. The polyhedral decomposition has one bounded maximal cell $\sigma_0$ and three unbounded maximal cells $\sigma_1,\sigma_2,\sigma_3$. The bounded cell is the momentum polytope of the $X_{\sigma_0}$, a $\ZZ/3$-quotient of $\PP^2$. Each unbounded cell is affine isomorphic to $[0,1]\times \RR_{\ge0}$, the momentum polytope of $\PP^1\times\AA^1=: X_{\sigma_i}$, $i=1,2,3$. The $X_{\sigma_i}$ glue together by torically identifying pairs of $\PP^1$'s and $\AA^1$'s as prescribed by the polyhedral decomposition to yield $X_0$. By definition, $W_{\PP^2}^0$ vanishes identically on the compact component $X_{\sigma_0}$. Each of the unbounded components has two parallel unbounded edges, leading to the pull-back to $\PP^1\times\AA^1$ of the toric coordinate function of $\AA^1$, say $z_i$ for the $i$-th copy. Thus $W^0_{\PP^2}|_{X_{\sigma_i}}=z_i$ for $i=1,2,3$ and $W^0:X_0\to\AA^1$ is proper. These functions are clearly compatible with the toric gluings. \end{example} \begin{remark} An interesting feature of the degeneration point of view is that the mirror construction respects the finer data related to the degeneration such as the monodromy representation of the affine structure. In particular, this poses a question of uniqueness of the Landau-Ginzburg mirror. 
For the anticanonical polarization such as the chosen one in the case of $\PP^2$, the tropical data $(\check B,\check \P)$ is essentially unique, see Theorem~\ref{Thm: unique} for a precise statement. For larger polarizations (thus enlarging $\check B$) there are certainly many more possibilities. For example, as an affine manifold with singularities one can perturb the location of the singular points transversally to the invariant directions over the rational numbers and choose an adapted integral polyhedral decomposition after appropriate rescaling. It is not clear to us if all $(\check B,\check \P)$ with flat boundary leading to $\PP^2$ can be obtained by this procedure. \end{remark} \section{Scattering of monomials} \label{sect: scattering of monomials} Scattering diagrams are a central tool in \cite{affinecomplex}. The purpose of this section is to study the propagation of monomials through scattering diagrams. Assume $\scrS_k$ is a structure that is consistent to order $k$ and let $\foj$ be a \emph{joint} of $\scrS_k$. Recall that a joint of a wall structure is a codimension two cell of the polyhedral decomposition of $B$ whose codimension one skeleton is the union of the walls in $\scrS_k$. Thus each joint is the intersection of the walls that contain the joint. These walls containing $\foj$ define various \emph{scattering diagrams} $\D=(\forr_i, f_\foc)$ that collect the data carried by $\scrS_k$ relevant around $\foj$. Each joint defines several, closely related scattering diagrams, one for each choice of vertex $v\in\sigma_\foj$ and $\omega\in\P$ with $\omega\subseteq \sigma_\foj$, see \cite[Def.\,3.3, Constr.\,3.4]{affinecomplex}. Here $\sigma_\foj\in\P$ is the minimal cell containing $\foj$. A scattering diagram is a collection of half-lines $\RR_{\ge0}\doublebar m$ in the two-dimensional quotient space $\shQ_{\foj,\RR}^v=\big(\Lambda_{B,v}/ \Lambda_{\foj,v}\big)_\RR$ along with a function attached to each half-line. The double-bar notation denotes the image of an element or subset in $\shQ_{\foj,\RR}^v$, such as $\doublebar m$, $\doublebar\sigma$. The functions lie in the ring $\kk[P_x]$ defining the local model of $\X$ at some $x\in (\foj\cap \Int \omega)\setminus\Delta$. Any codimension one cell $\rho\in\P$ containing $\foj$ produces one or two half-lines, the latter in the case $\foj\cap\Int\rho\neq\emptyset$. Half-lines obtained from codimension one cells are called \emph{cuts}, all other half-lines \emph{rays}. For an exponent $m_0$ with $\overline m_0\in\Lambda_v\setminus \Lambda_\foj$ we wish to define the scattering of the monomial $z^{m_0}$, which we think of as traveling along the ray $-\RR_{\ge0} \doublebar m_0$ into the origin of $\shQ_{\foj,\RR}^v\simeq\RR^2$. In a scattering diagram monomials travel along \emph{trajectories}. These are defined in exactly the same way as rays \cite[Def.\,3.3]{affinecomplex}, but will have an additive meaning. \begin{definition} A \emph{trajectory} in $\shQ_{\foj,\RR}^v$ is a triple $(\fot,m_\fot, a_\fot)$, where $m_\fot$ is a monomial on a maximal cell $\sigma\ni v$ with $\pm \doublebar m_\fot\in \doublebar \sigma$ and $m_\fot\in P_x$ for all $x\in \foj\setminus\Delta$, $\fot=\pm \RR_{\ge 0} \doublebar m_\fot$, and $a_\fot\in \kk$. The trajectory is called \emph{incoming} if $\fot=\RR_{\ge 0} \doublebar m_\fot$, and \emph{outgoing} if $\fot=-\RR_{\ge 0} \doublebar m_\fot$. By abuse of notation we often suppress $m_\fot$ and $a_\fot$ when referring to trajectories.
\end{definition} Here is the generalization of the central existence and uniqueness result for scattering diagrams \cite[Prop.\,3.9]{affinecomplex} incorporating trajectories. This result is crucial for the existence proof of the superpotential in Lemmas~\ref{lem:independence of p} and~\ref{lem:change of chambers}. \begin{proposition} \label{prop: trajectory scattering} Let $\D$ be the scattering diagram defined by $\scrS_k$ for $\foj\in\Joints(\scrS_k)$, $g:\omega\to\sigma_\foj$, $v\in\omega$. Let $(\RR_{\ge0}\doublebar m_0,m_0,1)$ be an incoming trajectory and $\sigma\supset \foj$ a maximal cell with $\doublebar m_0\in \doublebar \sigma$. For $\doublebar m\in \shQ_{\foj,\RR}^v\setminus \{0\}$ denote by \[ \theta_{\scriptsize\doublebar m}: R^k_{g,\sigma'} \to R^k_{g,\sigma} \] the ring isomorphism defined by $\D$ for a path connecting $-\doublebar m$ to $\doublebar m_0$, where $\sigma'$ is a maximal cell with $-\doublebar m\in\doublebar{\sigma'}$. Then there is a set $\foT$ of outgoing trajectories such that \begin{equation} \label{eqn:monomial scattering} z^{m_0}= \sum_{\fot\in\foT} \theta_{\scriptsize\doublebar m_\fot} (a_\fot z^{m_\fot}) \end{equation} holds in $R^k_{g,\sigma}$. Moreover, $\foT$ is unique if $a_{\fot}z^{m_\fot}\neq 0$ in $R^k_{g,\sigma'}$ for all $\fot\in\foT$, and if $m_\fot\neq m_{\fot'}$ whenever $\fot\neq\fot'$. \end{proposition} \begin{proof} The proof is by induction on $l\le k$. We first discuss the case that $\sigma_\foj$ is a maximal cell, that is, $\codim \sigma_\foj=0$. Then $\D$ has only rays, no cuts. In particular, any $\theta_{\scriptsize\doublebar m}$ is an automorphism of $R^k_{g,\sigma}$ that is the identity modulo $I_{g,\sigma}^{>0}$. Thus for $l=0$, \eqref{eqn:monomial scattering} forces one outgoing trajectory $(-\RR_{\ge 0}\doublebar m_0,m_0,1)$ if $\ord_{\sigma_\foj}(m_0)=0$ or none otherwise. For the induction step assume \eqref{eqn:monomial scattering} holds in $R^{l-1}_{g,\sigma}$. Then in $R^l_{g,\sigma}$ the difference of the two sides of \eqref{eqn:monomial scattering} is a sum of monomials $a z^m$ with $\ord_{\sigma_\foj} (m)=l$. Since $l>0$ and since there are no cuts (these represent slabs containing $\foj$), it holds $\theta_{\scriptsize\doublebar m}(a z^m)=a z^m$. Thus after adding appropriate trajectories $(-\RR_{\ge0}\doublebar m,m,a)$ with $\ord_{\sigma_\foj}(m)=l$ to $\foT$, Equation~\eqref{eqn:monomial scattering} holds in $R^l_{g,\sigma}$. This is the unique minimal choice of $\foT$. This finishes the proof in the case $\codim\sigma_\foj=0$. Under the presence of cuts we have several rings $R^k_{g,\sigma'}$ for various maximal cells $\sigma'\supset\foj$. This possibly brings in denominators that are powers of $f_{\rho,v}$ for cells $\rho\supset \foj$ of codimension one. In this case we show existence by a perturbation argument. To this end consider first the simplest scattering diagram in codimension one consisting of only two cuts $\foc_\pm=(\pm \foc,f_{\rho,v})$ dividing $\shQ$ into two halfplanes $\doublebar\sigma_\pm$ and with the same attached function. The signs are chosen in such a way that $\doublebar m_0\in \doublebar \sigma_-$. Let $\theta: R^k_{g,\sigma_-}\to R^k_{g,\sigma_+}$ be the isomorphism defined by a path from $\doublebar \sigma_-$ to $\doublebar \sigma_+$ and let $n\in\Lambda_\rho^\perp\subseteq \Lambda_v^*$ be the primitive integral vector that is positive on $\sigma_-$. Then $\langle m_0,n \rangle \ge 0$ and \[ \theta(z^{m_0})= f_{\rho,v}^{\langle m_0,n\rangle} \cdot z^{m_0}. 
\] Expanding yields the finite sum \begin{equation} \label{Eqn: theta(z^m)} \theta(z^{m_0})= \sum_{\langle m,n\rangle\ge 0} a_m z^m = \theta\Big(\sum_{\langle m,n\rangle\ge 0} a_m \theta^{-1}(z^m)\Big) =\theta\Big(\sum_{\langle m,n\rangle\ge 0} \theta_{\scriptsize\doublebar m}(a_m z^m)\Big), \end{equation} for some $a_m\in\kk$. Note that $\theta^{-1}(z^m)=\theta_{\scriptsize\doublebar m}(z^m)$ by the definition of $\theta_{\scriptsize\doublebar m}$. Now \eqref{Eqn: theta(z^m)} equals $\theta$ applied to \eqref{eqn:monomial scattering} for the set of trajectories \[ \foT:=\big\{(-\RR_{\ge0}\doublebar m,m,a_m)\,\big|\,\langle m,n\rangle\ge 0, a_mz^m\neq 0 \big\}. \] Hence existence is clear in this case. In the general case we work with perturbed trajectories as suggested by Figure~\ref{fig:perturbed scattering}. \begin{figure} \input{perturbed_scattering.pspdftex} \caption{Scattering diagram with perturbed trajectories (cuts and rays solid, perturbed trajectories dashed).} \label{fig:perturbed scattering} \end{figure} More precisely, a perturbed trajectory is a trajectory with the origin shifted. There is one unbounded perturbed incoming trajectory, a translation of $(-\RR_{\ge 0}\doublebar m_0,m_0,1)$, and a number of perturbed outgoing trajectories, each the result of scattering of other trajectories with rays or cuts. At each intersection point of a trajectory with a ray or cut, the incoming and outgoing trajectories at this point fulfill an equation analogous to \eqref{eqn:monomial scattering}. Similar to \cite[Constr.\,4.5]{affinecomplex} with our additive trajectories replacing the multiplicative $s$-rays\footnote{For technical reasons $s$-rays were not asked to be piecewise affine. In the present situation we insist in piecewise affine objects.}, there is then an asymptotic scattering diagram with trajectories obtained by taking the limit $\lambda\to 0$ of rescaling the whole diagram by $\lambda\in\RR_{>0}$. Any choice of perturbed incoming trajectory determines a unique minimal scattering diagram with perturbed trajectories. Moreover, for a generic choice of perturbed incoming trajectory the intersection points of trajectories with rays or cuts are pairwise disjoint, and they are in particular different from the origin. Hence the perturbed diagram can be constructed uniquely by induction on $l\le k$. Taking the associated asymptotic scattering diagram with trajectories now establishes the existence in the general case. Next we show uniqueness for $\codim\sigma_\foj>0$. For $\codim\sigma_\foj=2$ any monomial $z^m$ in $\kk[\Lambda_{\sigma_\foj}]$ fulfills $\ord_{\sigma_\foj}(m)=0$, and all $\theta_{\scriptsize\doublebar m}$ extend to $\kk(\Lambda_{ \sigma_\foj})$-algebra automorphisms of $R^k_{g,\sigma}\otimes_{\kk[\Lambda_{\sigma_\foj}]} \kk(\Lambda_{\sigma_\foj})$. Hence we can deduce uniqueness by induction on $l\le k$ as in codimension~$0$ by taking the factors $a$ of trajectories to be polynomials with coefficients in $\kk(\Lambda_{\sigma_\foj})$. Thus we combine all trajectories $\fot$ with the same $\doublebar m_\fot$ and the same $\ord_{\sigma_\foj} (m_\fot)$. It is clear that such generalized trajectories can be split uniquely into proper trajectories with all $m$ distinct, showing uniqueness in this case. Finally, for uniqueness in codimension one we can not argue just with $\ord_{\sigma_\foj}$ because there are monomials $z^m$ with $\ord_{\sigma_\foj}(m)=0$ but $\doublebar m\neq0$. Instead we look closer at the effect of adding trajectories. 
By induction it suffices to study the insertion of trajectories $(-\RR_{\ge 0}\doublebar m, m,a)$ with $\ord_{\sigma_\foj} (m)=l$ for each $m$ and such that \eqref{eqn:monomial scattering} continues to hold. By working with a perturbed scattering diagram as in Figure~4.1 of \cite{affinecomplex} and the associated asymptotic scattering diagram as in \cite[\S4.3]{affinecomplex}, it suffices to consider the case of only two cuts as already considered above. In this case we have \[ 0= \sum_m \theta_{\scriptsize\doublebar m} (a_m z^m) \;=\;\sum_{i=0}^l \sum_{\langle m,n\rangle=i} \theta_{\scriptsize\doublebar m} (a_m z^m) =\sum_{i=0}^l f_{\rho,v}^{-i}\sum_{\langle m,n\rangle=i} a_m z^m. \] Since all monomials in $f_{\rho,v}$ have vanishing $\ord_{\sigma_{\foj}}$ and only monomials $z^m$ with the same value of $\langle m,n\rangle$ can cancel, this equation implies that \[ f_{\rho,v}^{-i}\sum_{\langle m,n\rangle=i} a_m z^m=0 \] holds for each $i$. Multiplying by $f_{\rho,v}^i$ thus shows $\sum_{\langle m,n\rangle=i} a_m z^m=0$ in $R_{g,\sigma}^l$, and hence $a_m=0$ for all $m$. This proves uniqueness also in codimension one. \end{proof} \section{The superpotential via broken lines} \label{Ch: Broken lines} The easiest way to define the superpotential in full generality is by the method of broken lines. Broken lines were introduced by Mark Gross for $\dim B=2$ in his work on mirror symmetry for $\PP^2$ \cite{PP2mirror}. We assume we are given a locally finite wall structure $\scrS_k$ for the non-compact intersection complex $(B,\P,\varphi)$ of a polarized LG-model that is consistent to order $k$, as given by Assumption~\ref{overall assumption}. The notion of broken lines is based on the transport of monomials by changing chambers of $\scrS_k$. Recall from \cite[Def.\,2.22]{affinecomplex} that a chamber is the closure of a connected component of $B\setminus|\scrS_k|$. \begin{definition} \label{def: monomial transport} Let $\fou,\fou'$ be neighboring chambers of $\scrS_k$, that is, $\dim(\fou\cap\fou') =n-1$. Let $a z^m$ be a monomial defined at all points of $\fou\cap\fou'$ and assume that $\overline m$ points from $\fou'$ to $\fou$. Let $\tau:= \sigma_\fou\cap \sigma_{\fou'}$ and \[ \theta: R^k_{\id_\tau,\sigma_\fou}\to R^k_{\id_\tau,\sigma_{\fou'}} \] be the gluing isomorphism changing chambers. Then if \begin{equation} \label{eqn: transport} \theta(az^m)=\sum_i a_iz^{m_i} \end{equation} we call any summand $a_i z^{m_i}$ with $\ord_{\sigma_{\fou'}}(m_i)\le k$ a \emph{result of transport of $az^m$} from $\fou$ to $\fou'$. \end{definition} Note that since the change of chamber isomorphisms commute with changing strata, the monomials $a_iz^{m_i}$ in Definition~\ref{def: monomial transport} are defined at all points of $\fou\cap\fou'$. \begin{definition} \label{def: broken lines} (Cf.\ \cite[Def.\,4.9]{PP2mirror}) A \emph{broken line} for $\scrS_k$ is a proper continuous map \[ \beta: (-\infty,0]\to B \] with image disjoint from any joint of $\scrS_k$, along with a sequence $-\infty=t_0<t_1<\ldots< t_{r-1}\le t_r=0$ for some $r\ge 0$ with $\beta(t_i)\in |\scrS_k|$, and for $i=1,\ldots,r$ monomials $a_i z^{m_i}$ defined at all points of $\beta([t_{i-1},t_i])$ (for $i=1$, $\beta((-\infty,t_1])$), subject to the following conditions. \begin{enumerate} \item $\beta|_{(t_{i-1},t_i)}$ is a non-constant affine map with image disjoint from $|\scrS_k|$, hence contained in the interior of a unique chamber $\fou_i$ of $\scrS_k$, and $\beta'(t)=-\overline m_i$ for all $t\in (t_{i-1},t_i)$.
Moreover, if $t_r=t_{r-1}$ then $\fou_r\neq \fou_{r-1}$. \item $a_1=1$ and\footnote{The normalization condition $a_1=1$ has to be modified if there are parallel unbounded edges not contained in one cell and the gluing data are not trivial, see \cite[\S5.2]{Theta}.} there exists a (necessarily unbounded) $\omega\in\P^{[1]}$ with $\overline m_1\in\Lambda_\omega$ primitive and $\ord_\omega(m_1)=0$. \item For each $i=1,\ldots,r-1$ the monomial $a_{i+1} z^{m_{i+1}}$ is a result of transport of $a_i z^{m_i}$ from $\fou_i$ to $\fou_{i+1}$ (Definition~\ref{def: monomial transport}). \end{enumerate} The \emph{type} of $\beta$ is the tuple of all $\fou_i$ and $m_i$. By abuse of notation we suppress the data $t_i,a_i, m_i$ when talking about broken lines, but introduce the notation \[ a_\beta:= a_r,\quad m_\beta:=m_r. \] For $p\in B$ the set of broken lines $\beta$ with $\beta(0)=p$ is denoted $\foB(p)$. \end{definition} \begin{remark} \label{rem: broken lines} 1)\ \ If $(B,\P)$ is asymptotically cylindrical (Definition~\ref{Def: parallel edges}) then in Definition~\ref{def: broken lines} the existence of a one-cell $\omega\in\P$ with $\overline m_1\in\Lambda_\omega$ in (2) follows from (1).\\[1ex] 2)\ \ A broken line $\beta$ is determined uniquely by specifying its endpoint $\beta(0)$ and its type. In fact, the coefficients $a_i$ are determined inductively from $a_1=1$ by Equation~\eqref{eqn: transport}. \end{remark} According to Remark~\ref{rem: broken lines},(2) the map $\beta\mapsto \beta(0)$ identifies the space of broken lines of a fixed type with a subset of $\fou_r$. This subset is the interior of a polyhedron: \begin{proposition} \label{prop: broken line moduli polyhedral} For each type $(\fou_i,m_i)$ of broken lines there is an integral, closed, convex polyhedron $\Xi$, of dimension $n$ if non-empty, and an affine immersion \[ \Phi: \Xi\lra \fou_r, \] so that $\Phi\big(\Int\Xi\big)$ is the set of endpoints $\beta(0)$ of broken lines $\beta$ of the given type. \end{proposition} \begin{proof} This is an exercise in polyhedral geometry left to the reader. For the statement on dimensions it is important that broken lines are disjoint from joints. \end{proof} \begin{remark} \label{degenerate broken lines} A point $p\in \Phi(\partial \Xi)$ still has a meaning as an endpoint of a piecewise affine map $\beta:(-\infty,0]\to B$ together with data $t_i$ and $a_i z^{m_i}$, defining a \emph{degenerate broken line}. For $\beta$ not to be a broken line, $\im(\beta)$ has to intersect a joint. Indeed, if $\beta$ violates Definition~\ref{def: broken lines},(1) then $t_{i-1}=t_i$ or there exists $t\in (t_{i-1},t_i)$ with $\beta(t)\in|\scrS_k|$. In the first case denote by $\fop_j\in\scrS_k$, $j=i-1,i$, the wall with $\beta'(t_j)\in\fop_j$ for all broken lines $\beta'$ of the given type. Then $\beta(t_{i-1})=\beta(t_i)$ lies in $\fop_{i-1}\cap\fop_i$, hence in a joint. In the second case convexity of the chambers implies that $\beta([t_{i-1},t])\subset\partial \fou$ or $\beta([t,t_i])\subset\partial \fou$. Thus $\beta$ maps a whole open interval to $|\scrS_k|$. The image of the break point $t_{i-1}$ or $t_i$ contained in this interval then again lies in two walls, hence in a joint. The other conditions in the definition of broken lines are closed. The set of endpoints $\beta(0)$ of degenerate broken lines of a given type is the $(n-1)$-dimensional polyhedral subset $\Phi(\partial\Xi)\subseteq \fou_r$. The set of degenerate broken lines \emph{not transverse} to some joint of $\scrS_k$ is polyhedral of smaller dimension.
\end{remark} Any finite structure $\scrS_k$ involves only finitely many slabs and walls, and each polynomial coming with each slab or wall carries only finitely many monomials. Hence broken lines for $|\scrS_k|$ exist only for finitely many types. The following definition is therefore meaningful. \begin{definition} \label{def: general points} A point $p\in B$ is called \emph{general} (for the given structure $\scrS_k$) if it is not contained in $\Phi(\partial \Xi)$, for any $\Phi$ as in Proposition~\ref{prop: broken line moduli polyhedral}. \end{definition} Recall from \cite[\S2.6]{affinecomplex} that $\scrS_k$ defines a $k$-th order deformation of $X_0$ by gluing the sheaf of rings defined by $R^k_{g,\sigma_\fou}$, with $g:\omega\to\tau$ and $\fou$ a chamber of $\scrS_k$ with $\omega\cap\fou\neq\emptyset$, $\tau\subseteq\sigma_\fou$. Given a general $p\in \fou$ we can now define the \emph{superpotential} up to order $k$ locally as an element of $R^k_{g,\sigma_\fou}$ by \begin{equation} \label{Eqn: W^k} W^k_{g,\fou}(p):=\sum_{\beta\in \foB(p)} a_\beta z^{m_\beta}. \end{equation} The existence of a canonical extension $W^k$ of $W^0$ to $X_k$ follows once we check that (i)~$W^k_{g,\fou}(p)$ is independent of the choice of a general $p\in\fou$ and (ii)~the $W^k_{g,\fou}(p)$ are compatible with changing strata or chambers \cite[Constr.\,2.24]{affinecomplex}. This is the content of the following two lemmas. \begin{lemma} \label{lem:independence of p} Let $\fou$ be a chamber of $\scrS_k$ and $g:\omega\to\tau$ with $\omega\cap\fou \neq\emptyset$, $\tau\subseteq\sigma_\fou$. Then $W^k_{g,\fou}(p)$ is independent of the choice of $p\in\fou$. \end{lemma} \begin{proof} By Proposition~\ref{prop: broken line moduli polyhedral} the set $A \subseteq \fou$ of non-general points is a finite union of nowhere dense polyhedra. Moreover, since all $\Phi$ in Proposition~\ref{prop: broken line moduli polyhedral} are local affine isomorphisms, for each path $\gamma: [0,1]\to \fou\setminus A$ and broken line $\beta_0$ with $\beta_0(0)= \gamma(0)$ there exists a unique family $\beta_s$ of broken lines with endpoints $\beta_s(0)=\gamma(s)$ and with the same type as $\beta_0$. Hence $W^k_{g,\fou}(p)$ is locally constant on $\fou\setminus A$. To pass between the different connected components of $\fou\setminus A$, consider the set $A'\subseteq A$ of endpoints of degenerate broken lines that are not transverse to the joints of $\scrS_k$. More precisely, for each type of broken line, the set of endpoints of broken lines intersecting a given joint defines a polyhedral subset of $\fou$ of dimension at most $n-1$. Then $A'$ is the union of the $(n-2)$-cells of these polyhedral subsets, for any joint and any type of broken line. Since $\dim A'= n-2$, we conclude that $\fou\setminus A'$ is path-connected. It thus suffices to study the following situation. Let $\gamma:[-1,1]\to \Int \fou\setminus A'$ be an affine map with $\gamma(0)$ the only point of intersection with $A$. Let $\overline\beta_0: (-\infty,0]\to B$ be the underlying map of a degenerate broken line with endpoint $\gamma(0)$. The point is that $\overline\beta_0$ may arise as a limit of several different types of broken lines with endpoints $\gamma(s)$ for $s\neq0$. The lemma follows once we show that the contributions to $W^k_{g,\fou} \big(\gamma(s)\big)$ of such broken lines for $s<0$ and for $s>0$ coincide. Note we do not claim a bijection between the sets of broken lines for $s<0$ and for $s>0$, which in fact need not be true. 
For later use let $V\subset \Int\fou$ be a local affine hyperplane intersecting $\ol\beta_0$ only in $\ol\beta_0(0)=\gamma(0)$ and containing $\im(\gamma)$. Note that $V$ is also transverse to $A$. Thus by the unique continuation statement at the beginning of the proof, each broken line $\beta$ with endpoint $\beta(0)\in V$ fits into a unique family of broken lines of the same type and with endpoint any other point of the connected component of $\beta(0)$ in $V\setminus A$. In particular, since $\gamma^{-1}(A)=\{0\}$ any broken line $\beta$ with endpoint $\gamma(s_0)$ for $s_0\neq 0$ extends uniquely to a family of broken lines $\beta_s$ for $s\in [-1,0)$ or $s\in(0,1]$. Thus $\beta$ has a unique limit $\lim\beta:=\lim_{s\to 0} \beta_s$, a possibly degenerate broken line. For $s\neq 0$ denote by $\foB_s$ the space of broken lines $\beta$ with endpoint $\gamma(s)$ and such that the map underlying $\lim\beta$ equals $\overline\beta_0$. Since $\overline{\beta_0}$ is the underlying map of a degenerate broken line, $\foB_s\neq \emptyset$ for some sufficiently small $s$, hence also for all $s$ of the same sign, by unique continuation. Possibly by changing signs in the domain of $\gamma$ we may thus assume $\foB_s\neq \emptyset$ for $s<0$. We have to show \begin{equation} \label{eqn: equality of contributions} \sum_{\beta\in\foB_{-1}} a_\beta z^{m_{\beta}} = \sum_{\beta\in\foB_{1}} a_\beta z^{m_{\beta}}. \end{equation} Denote by $\foT_s$ the set of types of broken lines in $\foB_s$. Obviously, $\foT_s$ only depends on the sign of $s$. The central observation is the following. Let $J\subset B$ be the union of the joints of $\scrS_k$ intersected by $\im \overline\beta_0$. Let $x:= \overline\beta_0(t)$ for $t\ll 0$ be a point far off to $-\infty$. Thus $x$ lies in one of the unbounded cells of $\P$ and $\overline\beta_0$ is asymptotically parallel to an unbounded edge and does not cross a wall for $t'<t$. Let $U\subseteq B$ be a local affine hyperplane intersecting $\overline{\beta_0}$ transversally at $x$. By transversality with $J$ the images of the degenerate broken lines of types contained in any $\foB_s$ lie in a local affine hyperplane $H\subset U$ ($\dim H=n-2$). Note that each joint that $\ol\beta_0$ meets defines such a local hyperplane $H\subset U$, and if $\ol\beta_0$ meets several joints, the hyperplanes agree locally near $x$ since $\gamma(0)\not\in A'$. Moreover, the images of degenerate broken lines arising as the limit of broken lines of type in $\foT_{-1}\cup\foT_1$ separate a contractible neighbourhood of $\im\ol\beta_0$ in $B$ into two connected components. It follows that broken lines of type in $\foT_s$ for $s<0$ intersect $U$ only on one side of $H$, and for $s>0$ only on the other. Now let $\breve\beta_s\in\foB_s$ be one family of broken lines, say for $s<0$. Denote by $\fot$ the type of $\breve\beta_s$. Thus $\breve\beta_s$ is the unique broken line $\beta$ of type $\fot$ with endpoint $\beta(0)=\gamma(s)$. Since $\lim\breve\beta_s= \ol\beta_0$, the broken line $\breve\beta_s$ does not pass a wall before hitting $U$, thus is a straight line. Denote by $x_s\in U\setminus H$ the point of intersection of $\breve\beta_s$ with $U$. The $x_s$ vary affine linearly with $s$ with $\lim_{s\to0} x_s= x$, hence define an affine line segment in $U$. This line segment can be viewed as a lift of $\gamma([-1,0])\subset\fou$ to a line segment in $U$ via broken lines of type $\fot$ and their limits. 
If $\beta_s$ is another family of broken lines in $\foB_s$ for $s<0$, of type $\fot'$, then by the same argument $\beta_s$ hits $U$ in another point $x'_s$ in the same connected component of $U\setminus H$. Moreover, there is a unique such $\beta'_s$ with endpoint $\beta'_s(0)\in V$, where $V\subset\Int \fou$ is the local affine hyperplane chosen above. In particular, for each $s$ there is a unique broken line $\beta'_s$ of type $\fot'$ passing through $x_s$. In other words, each broken line $\beta_s\in\foB_s$, $s<0$, deforms uniquely to a broken line $\beta'_s$ with endpoint on $V$ and passing through $x_s$. The set of $\beta'_s$ obtained in this way can alternatively be constructed as follows. For $s<0$ all broken lines in $\foB_s$ have the same first chamber $\fou_1$ and monomial $z^{m_1}$. Now start with the straight broken line ending at $x_s$ and of type $(\fou_1,m_1)$. This broken line can be continued until it hits a wall or slab, where it splits into several broken lines, one for each summand in \eqref{eqn: transport}. Iterating this process leads to the infinite set of all broken lines with asymptotics given by $m_1$ and running through $x_s$. The $\beta'_s$ form the subset of those of the considered types, that is, those whose unique deformation for $s\to 0$ has underlying map $\overline\beta_0$ and endpoint on $V$. From this point of view it is clear that at each joint $\foj$ intersected by $\im\overline \beta_0$ the $\beta'_s$ compute a scattering of monomials as considered in \S\ref{sect: scattering of monomials}. In fact, the union of the $\beta'_s$ with the same incoming part $az^m$ near $\foj$ induces a scattering diagram with perturbed trajectories as considered in the proof of Proposition~\ref{prop: trajectory scattering}. Thus the corresponding sum of monomials leaving a neighborhood of $\foj$ can be read off from the right-hand side of \eqref{eqn:monomial scattering} in this proposition, applied to the incoming trajectory $(\RR_{\ge0}\doublebar m,m,a)$. We conclude that $\sum_{\beta\in\foB_s} a_\beta z^{m_\beta}$ for $s<0$ computes the transport of $z^{m_1}$ along $\overline \beta_0$. This transport is defined by applying \eqref{eqn:monomial scattering} instead of \eqref{eqn: transport} at each joint intersected transversally by $\overline\beta_0$. The same argument holds for $s>0$, thus proving \eqref{eqn: equality of contributions}. \end{proof} \begin{remark} \label{rem:generalized broken lines} The proof of Lemma~\ref{lem:independence of p} really shows that the scattering of monomials introduced in \S\ref{sect: scattering of monomials} makes it possible to replace the condition that broken lines have image disjoint from joints by transversality with joints. In the following we refer to these as \emph{generalized broken lines}. The set of generalized broken lines with endpoint $p$ is denoted $\ol\foB(p)$. \end{remark} \noindent By Lemma~\ref{lem:independence of p} and Remark~\ref{rem:generalized broken lines} we are now entitled to define \begin{equation} \label{Eqn: W from degenerate broken lines} W^k_{g,\fou}:= W^k_{g,\fou}(p)= \sum_{\beta\in \ol\foB(p)} a_\beta z^{m_\beta}, \end{equation} for any choice of $p\in \fou\setminus A'$, $A'$ the set of endpoints in $\fou$ of degenerate broken lines \emph{not} transverse to all joints of $\scrS_k$. \begin{lemma} \label{lem:change of chambers} The $W^k_{g,\fou}$ are compatible with changing strata and changing chambers. \end{lemma} \begin{proof} Compatibility with changing strata follows trivially from the definitions.
As for changing from a chamber $\fou$ to a neighboring chamber $\fou'$ ($\dim \fou\cap \fou'=n-1$) the argument is similar to the proof of Lemma~\ref{lem:independence of p}. Let $g:\omega\to\tau$ be such that $\omega\cap\fou \cap\fou' \neq\emptyset$, $\tau\subseteq \sigma_\fou\cap\sigma_{\fou'}$ and \[ \theta: R^k_{g,\fou}\lra R^k_{g,\fou'} \] be the corresponding change of chamber isomorphism \cite[\S2.4]{affinecomplex}. We have to show $\theta \big(W^k_{g,\fou}\big) = W^k_{g,\fou'}$. Let $A\subseteq \fou\cup\fou'$ be the set of endpoints of degenerate broken lines. Consider a path $\gamma:[-1,1]\to \fou\cup\fou'$ connecting general points $\gamma(-1)\in\fou$, $\gamma(1)\in \fou'$ and with $\gamma^{-1}(\fou\cap\fou')=\{0\}$. We may also assume that $\gamma(s)\in A$ at most for $s=0$, and that any degenerate broken line with endpoint $\gamma(0)$ is transverse to joints. For $s\neq 0$ we then consider the space $\foB_s$ of broken lines $\beta_s$ with endpoint $\gamma(s)$ and with deformation for $s\to 0$ a fixed underlying map of a degenerate broken line $\overline\beta_0$. By transversality of $\overline\beta_0$ with the set of joints the limits of families $\beta_s$, $s\to 0$, group into generalized broken lines (Remark~\ref{rem:generalized broken lines}). Each such generalized broken line $\beta$ has as endpoint $p_0:= \gamma(0)$, but viewed as an element either of $\fou$ or of $\fou'$. We call this chamber the \emph{reference chamber} of $\beta$. Generalized broken lines with reference chambers $\fou$ and $\fou'$ contribute to $W^k_{g,\fou}$ and $W^k_{g,\fou'}$, respectively. Moreover, $m_\beta$ is either tangent to $\fou\cap\fou'$ or points properly into $\fou$ or into $\fou'$. We claim that $\theta$ maps each of the three types of contributions to $W^k_{g,\fou}$ to the three types of contributions to $W^k_{g,\fou'}$. Then $\theta \big(W^k_{g,\fou}\big) = W^k_{g,\fou'}$ and the proof is finished. Let us first consider the set of generalized broken lines $\beta$ with $m_\beta$ tangent to $\fou\cap\fou'$. Then changing the reference chamber from $\fou$ to $\fou'$ defines a bijection between the considered generalized broken lines with endpoint $p_0$ and reference chamber $\fou$ and those with reference chamber $\fou'$. Note that in this case $\beta$ has to intersect a joint, so this statement already involves the arguments from the proof of Lemma~\ref{lem:independence of p}. Because $\theta(a_\beta z^{m_\beta})= a_\beta z^{m_\beta}$ this proves the claim in this case. Next assume $m_\beta$ points from $\fou\cap\fou'$ into the interior of $\fou$. This means that $\beta$ approaches $p_0$ from the interior of $\fou$. If we want to change the reference chamber to $\fou'$ we need to introduce one more point $t_{r+1}:=t_r=0$ and chamber $\fou_{r+1}:=\fou'$. According to Equation~\eqref{eqn: transport} in Definition~\ref{def: monomial transport} the possible monomials $a_{r+1}z^{m_{r+1}}$ are given by the summands in $\theta(a_r z^{m_r})= \sum_i a_{r+1,i} z^{m_{r+1,i}}$. Thus for each summand we obtain one generalized broken line with reference chamber $\fou'$. Clearly, this is exactly what is needed for compatibility with $\theta$ of the respective contributions to the local superpotentials. By symmetry the same argument works for generalized broken lines $\beta$ with reference chamber $\fou'$ and $m_\beta$ pointing from $\fou\cap\fou'$ into the interior of $\fou'$, and $\theta^{-1}$ replacing $\theta$.
Inverting $\theta$ means that a number of generalized broken lines with reference chamber $\fou_{r+1}=\fou$ and two points $t_{r+1}=t_r$ (and necessarily $\fou_r=\fou'$), one for each summand of $\theta^{-1}(a_\beta z^{m_\beta})$, combine into a single generalized broken line with reference chamber $\fou_r=\fou'$. This process is again compatible with applying $\theta$ to the respective contributions to the local superpotentials. This finishes the proof of the claim, which was left to complete the proof of the lemma. \end{proof} Summarizing the construction and Lemmas~\ref{lem:independence of p} and~\ref{lem:change of chambers}, we now state the existence and uniqueness of the superpotential $W$ on canonical toric degenerations: \begin{theorem} \label{Thm: all order W} Let $\pi:\X\to \Spf\kk\lfor t\rfor$ be the canonical toric degeneration given by the compatible system of wall structures in Assumption~\ref{overall assumption}. Then there exists a unique formal function $W:\X\to \AA^1$ that modulo $t^{k+1}$ agrees with the expressions \eqref{Eqn: W^k} and~\eqref{Eqn: W from degenerate broken lines} at each point $p$. \qed \end{theorem} Having defined the superpotential $W$ as a regular function on the formal scheme $\X$, a natural question concerns algebraizability of $W$ assuming $\X\to\Spf\kk\lfor t\rfor$ is algebraizable. This generally appears to be a difficult question, but sometimes more can be said by methods going beyond the scope of this paper, as detailed in the following remark. \begin{remark} \label{Rem: algebraizability} Given a toric degeneration, we have defined the superpotential $W^k$ locally at $p\in B$ in the interior of a chamber $\fou$ as an expression in the Laurent polynomial ring $A_k[\Lambda_\sigma]$, $A_k=\kk[t]/(t^{k+1})$, with $\sigma=\sigma_\fou$ the maximal cell containing $\fou$. Increasing $k$ may make $\fou$ smaller, but if we choose $p$ sufficiently transcendental we can achieve that $p$ never hits a wall of any order. Then the all-order potential $W:= \varprojlim W^k$ still has an expression in the ring \[ \kk[\Lambda_\sigma]\lfor t\rfor = \varprojlim_k A_k[\Lambda_\sigma]. \] To understand the dependence on $p$ note that by the construction of $\X$ as a colimit, a point $p\in B$ contained in the interior of a chamber for each $k$ defines an open embedding of the formal algebraic torus \begin{equation} \label{Eqn: formal algebraic torus} \hat \GG_m(\Lambda_\sigma)= \Spf \kk[\Lambda_\sigma]\lfor t\rfor \end{equation} into $\X$. Here $\sigma\in \P$ is the maximal cell containing $p$. The expression $W(p)=\varprojlim W^k(p)$ computes the pull-back of $W$ to this open formal subscheme. But the embedding $\hat \GG_m(\Lambda_\sigma)\to \X$ depends on $p$. Expressions for $W(p)$ for different choices of $p$ inside the same maximal cell $\sigma$ are related by a (typically infinite) series of wall crossing automorphisms. If $p$ moves to a $p'$ in a neighboring maximal cell then one needs to use the codimension one relations $XY=f_\rho t^e$ from the proof of \cite[Lem.\,2.34]{affinecomplex} to see that a similar wall crossing transformation involving quotients by slab functions relates $W(p)$ and $W(p')$. In any case, for each general $p$ we obtain an expression for $W$ in $\kk [\Lambda_\sigma]\lfor t\rfor$ rather than in the algebraic subring $\kk\lfor t\rfor [\Lambda_\sigma]$. However, as we will see in the examples in the last three sections, sometimes $\X$ is algebraizable and there exists a point $p\in B$ such that only finitely many broken lines have endpoint $p$.
Then $W(p)$ lies in the subring of finite type \[ \kk[\Lambda_\sigma][t] \subset \kk\lfor t\rfor[\Lambda_\sigma] \subset \kk[\Lambda_\sigma]\lfor t \rfor, \] hence is even a Laurent polynomial with polynomial coefficients in $t$. It is then tempting to believe that this algebraic expression describes a lift of $W$ to an algebraization of $\X$. But this may not be the case: Formal local representability by an algebraic expression neither means that $W$ lifts to an algebraization, nor that such a lift could locally be represented with polynomial coefficients in $t$ rather than with coefficients that are formal power series. In such a situation we therefore say that $W$ is \emph{ostensibly algebraic}. To conclude algebraizability of $W$, one rather has to write $W$ as a finite sum of quotients of sections of the polarizing line bundle $\O_\X(1)$. This can indeed sometimes be achieved by using \emph{generalized theta functions}, defined analogously to $W$ via sums of broken lines, see \cite{thetasurvey,Theta}. This method, which is beyond the scope of this paper, appears to work for example in all cases derived from reflexive polytopes via Construction~\ref{Const: Reflexive polytope degeneration}. \end{remark} \section{Fibers of the superpotential over $0,1,\infty$, and the LG mirror map} \label{sect: special fiber} Given a finite wall structure $\scrS_k$ for a non-compact $(B,\P,\varphi)$, consistent to order $k$ as in Assumption~\ref{overall assumption}, we have constructed in the last section the superpotential \begin{equation} W:\X\lra \AA^1 \end{equation} as a morphism of formal schemes. In this section we discuss some general features of $W$. Denote by $A\subset|\X|=X_0$ the union of complete irreducible components of $X_0$, that is, the union of the irreducible components $X_\sigma\subset X_0$ defined by the bounded maximal cells $\sigma\in\P$. We start with $W^{-1}(0)$, viewed as a Cartier divisor on the ringed space $\X$. To compute the order of $W$ along $X_\sigma\subset\X$ we define the following notion. \begin{definition} For a maximal cell $\sigma\in\P$, $\depth\sigma$ is the minimum of $\ord_\sigma m_\beta$ for all broken lines $\beta$ with endpoint in $\Int\sigma$. \end{definition} \begin{proposition} \label{Prop: 0-fiber of W} The multiplicity of the Cartier divisor $W^{-1}(0)\subset \X$ on the irreducible component $X_\sigma\subset X_0$ given by a maximal cell $\sigma\in\P$ equals $\depth\sigma$. \end{proposition} \begin{proof} This is obvious from the definition of $W^k$ in \eqref{Eqn: W^k}. \end{proof} If $(B,\P)$ is asymptotically cylindrical, properness of $W|_{X_0}$ (Proposition~\ref{properness criterion}) implies that if $\sigma\in\P$ is a maximal cell with $W^{-1}(0)\cap X_\sigma\neq \emptyset$ then $\sigma\in\P$ is bounded. In other words, $W^{-1}(0)\cap X_0\subseteq A$. Otherwise not much can generally be said about $W^{-1}(0)$. \medskip We next turn to the behavior of $W$ over $\infty$. This discussion only makes sense in the case that $\X\to\Spf\kk\lfor t\rfor$ is compactifiable, that is, if it extends to a proper family $\ol\X\to\Spf\kk\lfor t\rfor$. But even in this case, $W$ may not be a formal meromorphic function \cite[Tag~01X1]{stacks-project} near $\infty$. In other words, $W$ may have essential singularities near a compactifying formal closed subscheme. We therefore assume that $(B,\P)$ is compactifiable and that $W$ extends to a meromorphic function on the corresponding partial completion $\ol\X$ of $\X$.
A sufficient condition is that $W$ itself be algebraizable, as discussed in Remark~\ref{Rem: algebraizability}. Another problem is that the meromorphic lift may have a locus of indeterminacy containing components of the added divisor at $\infty$ on the central fiber. Here is a purely toric example. \begin{example} Let $(B,\P)$ be given by the complete fan in $\RR^2$ with rays generated by $(1,0)$, $(0,1)$, $(0,-1)$ and $(-1,k)$ for some $k\ge 2$, and $\varphi$ the convex PL function with value $1$ at all the ray generators. The discriminant locus is empty. With trivial gluing data and trivial wall structure we obtain the completion at $t=0$ of a toric threefold $\shX$ over $\AA^1=\Spec\kk[t]$. Letting $x,y$ be the monomial functions defined by $(1,0)$ and $(0,1)$, the superpotential equals \[ W=x+y+ty^{-1}+tx^{-1}y^k \] on the big torus. Using $\varphi$ for the truncation, we obtain a compactification $\ol\shX$ of $\shX$ by intersecting $B$ with the $4$-gon $\ol B$ with vertices $(k,0)$, $(0,k)$, $(0,-k)$ and $(-1,k)$. Now express $W$ in the neighborhood \[ \Spec\kk[u,v^{\pm 1},t]\subset \ol\shX,\quad y=u^{-1}, x=u^{-1}v, \] of the divisor $\shD_\rho\subset \partial\shX$ defined by the face $\rho\subset \ol B$ with vertices $(k,0),(0,k)$: \[ W= vu^{-1}+u^{-1}+tu+tu^{-k+1}v^{-1}= \frac{(1+v)u^{k-2}+tu^k+tv^{-1}}{u^{k-1}}. \] This rational function on $\ol\shX$ has locus of indeterminacy $u=t=0$. \end{example} In the asymptotically cylindrical case a meromorphic extension of $W$ has empty locus of indeterminacy. We therefore now restrict ourselves to the following situation. Let $(B,\P)$ be compactifiable and asymptotically cylindrical and let $(\ol B_\nu,\ol\P_\nu)$ be an associated sequence of truncations obtained by Lemma~\ref{Lem: Exhaustion}. Denote by $\ol\X\to \Spf\kk\lfor t\rfor$ the corresponding formal toric degeneration constructed in Proposition~\ref{Prop: wall structures exists for compactifiable cases} from this sequence of truncations. Assume further that $W$ lifts to a meromorphic function $\shW$ on an algebraization $\ol\shX$ of $\ol\X$, and denote by $\shZ\subset\ol\shX$ the compactifying reduced divisor. For a sequence $F=(F_\nu)$ of maximal flat affine subspaces of $\partial \ol B_\nu$ denote by $\shZ_F\subset\shZ$ the irreducible component defined in Remark~\ref{Rem: compactifying divisor}. For such an $F$ define a multiplicity $\mu_F\in\NN\setminus\{0\}$ as follows. Let $\rho\subseteq F_\nu$ be an $(n-1)$-cell for some $\nu$, $\sigma\in\P$ the unique maximal cell containing $\rho$ and $\xi\in\Lambda_\sigma$ the primitive generator of $\Lambda_\omega$ for an unbounded $1$-cell $\omega\subseteq\sigma$, pointing in the unbounded direction. Then \begin{equation} \label{Eqn: mu_F} \mu_F= \big[ \Lambda_\sigma: \Lambda_\rho+ \ZZ\cdot\xi\big]. \end{equation} Note that $\mu_F$ does not depend on the choice of $\rho$. \begin{proposition} \label{Prop: infty-fiber of W} Let $\shW$ be a meromorphic lift of the superpotential $W$ to an algebraization $\ol\shX$ of the partial compactification $\ol\X$ of $\X$ from Proposition~\ref{Prop: wall structures exists for compactifiable cases} of the compactifiable, asymptotically cylindrical $(B,\P,\varphi)$. Then $\shW$ defines a morphism of schemes $\shW:\ol\shX\to\PP^1$, and we have \[ \shW^{-1}(\infty)= \sum_F \mu_F\cdot \shZ_F. \] The sum is over all sequences of parallel maximal flat affine subspaces of $\partial \ol B_\nu$ and $\mu_F$ is defined in \eqref{Eqn: mu_F}.
\end{proposition} \begin{proof} Since $W$ defines a proper map to $\AA^1$, the locus of indeterminacy of $\shW$ is empty. Thus $\shW$ defines a morphism to $\PP^1$. Let $\sigma\in\P$ be an unbounded maximal cell and $X_\sigma\subseteq X_0$ the corresponding irreducible component of the central fiber. Then $\shW|_{X_\sigma} =W|_{X_\sigma}\neq0$, so the multiplicity of a prime divisor $\shZ_F\subseteq \shW^{-1}(\infty)$ can be computed after restriction to the central fiber $X_0$, that is, from $W^0$. Here, choosing $\sigma$ to contain an $(n-1)$-cell $\rho\subseteq F_\nu$ as in \eqref{Eqn: mu_F}, the closure of $X_\sigma$ in $\ol\shX$ meets $\shZ_F$ in the toric prime divisor $D_\rho$ defined by $\rho$. Moreover, in the coordinate ring $\kk[\Lambda_\sigma]$ of the big torus of $X_\sigma$ we have $W^0= z^\xi$. Thus the multiplicity of $\shZ_F$ in $\shW^{-1}(\infty)$ equals \[ -\ord_{D_\rho}W^0= -\ord_{D_\rho}z^\xi = \big[ \Lambda_\sigma: \Lambda_\rho+ \ZZ\cdot\xi\big]=\mu_F, \] by a standard fact in toric geometry. \end{proof} \medskip The most interesting result in this section concerns a tropical description of the special fiber $W^{-1}(1)$ in the asymptotically cylindrical case. We will see that it is described by the asymptotic behavior of $(B,\P,\varphi)$ and $\scrS_k$. Moreover, $W^{-1}(1)$ is canonically the mirror toric degeneration of the anticanonical divisor $\check\D\to T$ from Theorem~\ref{Thm: Properness}, up to a tropically interesting mirror map reparametrizing the codomain $\AA^1$ of $W$. \begin{construction} (\emph{Asymptotic tropical manifold and asymptotic scattering diagram.}) \label{Constr: B_infinity} Assume that $(B,\P)$ is asymptotically cylindrical. Denote by $K\subseteq B$ the compact subset defined by the union of bounded cells of $\P$. Then there exists a non-zero integral vector field $\xi\in\Gamma(B\setminus K, i_*\Lambda)$ that is parallel to all unbounded $1$-cells of $\P$. We fix $\xi$ uniquely by requiring it to be primitive (indivisible) and pointing in the unbounded direction. The integral curves of $\xi$ generate an equivalence relation $\sim$ on $B\setminus K$. Define the \emph{asymptotic tropical manifold} $B_\infty$ associated to $B$ as the quotient $(B\setminus K)/\!\sim$. An explicit description of $B_\infty$ runs as follows. Let $\sigma\in\P$ be an unbounded cell. Let $\bar\sigma$ be the convex hull of its vertices. Since $B$ is asymptotically cylindrical we have \[ \sigma= \bar\sigma+\RR_{\ge 0}\cdot\xi, \] as an equation of subsets of $\Lambda_{\sigma,\RR}$. Then define $\sigma_\infty$ as the image of $\sigma$ or $\bar\sigma$ under the canonical quotient \begin{equation} \label{Eqn: asymptotic monomial map} \Lambda_{\sigma,\RR} \to \Lambda_{\sigma,\RR}/\RR_{\ge 0}\cdot\xi. \end{equation} Clearly, this construction is compatible with the inclusion of faces. Taking the colimit of the $\sigma_\infty$ defines $B_\infty$ along with a polyhedral decomposition $\P_\infty$. The charts for the affine structure at vertices of $B_\infty$ are induced by the charts of $B$ along unbounded $1$-cells of $\P$. Note that unbounded $1$-cells of $\P$ are disjoint from $\Delta$. It is also clear that the strictly convex PL function $\varphi$ on $(B,\P)$ induces a strictly convex PL function $\varphi_\infty$ on $(B_\infty,\P_\infty)$. We have thus constructed a polarized tropical manifold $(B_\infty,\P_\infty,\varphi_\infty)$ of dimension $\dim B-1$, the \emph{asymptotic tropical manifold} of the asymptotically cylindrical $(B,\P,\varphi)$.
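For instance, in the situation of Example~\ref{Expl: PP2} each of the three unbounded maximal cells is affinely isomorphic to $[0,1]\times\RR_{\ge0}$ with $\xi$ the common unbounded direction, and neighboring unbounded cells share an unbounded edge. One therefore expects $(B_\infty,\P_\infty)$ to be an integral affine circle of integral length~$3$, decomposed by $\P_\infty$ into three unit intervals, with $\varphi_\infty$ the induced strictly convex PL function.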
A monomial at a point $x$ on an unbounded component of $\tau\setminus\Delta$ for $\tau \in\P$ induces a monomial at the image $x_\infty$ of $x$ in the corresponding cell $\tau_\infty =\tau/{\RR_{\ge0} \xi}$. Since $\scrS_k$ is finite there exists a compact subset $K'\subseteq B$ such that only unbounded slabs or walls intersect $B\setminus K'$. We call these slabs or walls \emph{asymptotic}. In the present asymptotically cylindrical case $\xi$ is then tangent to the support of any such asymptotic slab or wall. Thus for any asymptotic slab $\fob$ or wall $\fop$ in $\scrS_k$ the image under the quotient map $B\setminus K'\to B_\infty$ defines the support of a slab or wall in $(B_\infty,\P_\infty,\varphi_\infty)$. The associated slab function or exponent is defined via the projection of monomials \eqref{Eqn: asymptotic monomial map}. Denote by $\scrS_k^\infty$ the structure on $B_\infty$ obtained in this way. \qed \end{construction} To further relate the wall structures $\scrS_k$ on $B$ and $\scrS_k^\infty$ on $B_\infty$, only the rings $R^k_{g,\sigma}$ with $g:\omega\to \tau$ an inclusion of unbounded cells are relevant. Denote by $g_\infty:\omega_\infty\to \tau_\infty$ the induced inclusion of cells in $\P_\infty$. Taking a splitting of the inclusion $\ZZ\cdot\xi\subseteq \Lambda_\sigma$ provides a (non-canonical) isomorphism \begin{equation} \label{Eqn: w} \big(R^k_{g,\sigma}\big)_{z^\xi} \simeq R^k_{g_\infty,\sigma_\infty}[w,w^{-1}], \end{equation} where by abuse of notation $\xi$ denotes the unique monomial $m$ of order $0$ with $\ol m=\xi\in\Lambda_\sigma$. The isomorphism identifies $z^\xi$ with $w$. Mapping $w$ to $1$ now induces the canonical isomorphism of quotients \begin{equation} \label{eqn: quotient hom} \big(R^k_{g,\sigma}\big)_{z^\xi} /(z^\xi-1)\simeq R^k_{g_\infty,\sigma_\infty}. \end{equation} Note that this isomorphism is compatible with the map of monomials discussed in Construction~\ref{Constr: B_infinity} and does not depend on choices. In particular, there is a well-defined formal function $w$ on $\X\setminus A$ locally given by $w$ in \eqref{Eqn: w}. From~\eqref{Eqn: w} it is also obvious that consistency of $\scrS_k$ implies consistency of $\scrS_k^\infty$. \begin{proposition} \label{Prop: w^(-1)(1)} Let $(\pi:\X\to\Spf\kk\lfor t\rfor, W)$ be the Landau-Ginzburg model with an asymptotically cylindrical intersection complex $(B,\P)$. Then the composition $w^{-1}(1)\to \X\stackrel{\pi}{\to} \Spf\kk\lfor t\rfor$ of the inclusion followed by $\pi$ is canonically isomorphic to the toric degeneration defined by the compatible system of wall structures $\scrS_k^\infty$ on $(B_\infty,\P_\infty,\varphi_\infty)$. \end{proposition} \begin{proof} It is enough to check the statement for a fixed finite order $k$. The key observation is that the asymptotic vector field $\xi$ is tangent to all unbounded walls on $B$. Now for fixed $k$ the complement $U\subset B$ of the compact subset $K'\subset B$ in Construction~\ref{Constr: B_infinity} only intersects unbounded walls. Moreover, gluing the rings associated to chambers in $U$ is enough to describe $X_k\setminus A$. The statement now readily follows from the construction of $\scrS_k^\infty$ and \eqref{eqn: quotient hom}. \end{proof} \begin{remark} \label{Rem: parallel asymptotic monomials} Monomials in unbounded walls or slabs proportional to $z^\xi$ map to elements of the base ring $\kk\lfor t\rfor$ under the homomorphism~\eqref{eqn: quotient hom}.
Such constant terms do not appear either in the wall structures constructed by the algorithm in \cite{affinecomplex} or in the canonical wall structure of \cite{CanonicalWalls}. Thus $\scrS_\infty$ is a more general wall structure than those previously constructed, in that it also involves \emph{undirectional walls}, that is, walls $\fop$ with $f_\fop\in\kk\lfor t\rfor^\times$. This fact has been overlooked in earlier versions of this paper and affects the mirror statements for $w^{-1}(1)$. The corrected statement is given in Proposition~\ref{Prop: mirror statement for w^{-1}(0)} below. \end{remark} To understand the influence of undirectional walls, we observe a close relationship to gluing data. \begin{remark} \label{Rem: Unidirectional walls from gluing data} Consider a wall structure $\scrS$ on an affine manifold with singularities $B$ with all walls and slabs undirectional. To emphasize the constant nature of the slab and wall functions, we now write $c_\fop, c_\fob\in \kk\lfor t\rfor^\times$ rather than $f_\fop, f_\fob$. Let $\foj$ be a codimension zero joint, that is, one intersecting the interior of a maximal cell $\sigma$. Then the automorphism $\theta_\fop$ of $\kk\lfor t\rfor[\Lambda_\sigma]$ associated to a wall $\fop$ containing $\foj$ equals \begin{equation} \label{Eqn: Undirectional wall automorphism} \theta_\fop: z^m\longmapsto \big\langle c_\fop\otimes n_\fop,m\big\rangle\cdot z^m, \end{equation} with $n_\fop\in\check\Lambda_\sigma=\Hom(\Lambda_\sigma,\ZZ)$ the primitive normal vector spanning $\Lambda_\fop^\perp$, with sign depending on the direction of wall crossing. Now all such automorphisms $\theta_\fop$ with $\foj\subset\fop$ commute. Moreover, their product is trivial iff \begin{equation} \label{Eqn: balancing in codim 0} \prod_\fop c_\fop \otimes n_\fop=1\otimes 0\in \kk\lfor t\rfor^\times\otimes_\ZZ\check\Lambda_\sigma. \end{equation} Note that in the tensor product the first factor is written multiplicatively, the second additively, so $1\otimes 0$ is the unit in this abelian group. We observe that the consistency condition \eqref{Eqn: balancing in codim 0} for $\scrS$ at $\foj$ is the cocycle condition at $\foj$ for the \emph{tropical $1$-cocycle} on $B$ supported on $|\scrS|$ that assigns $c_\fop\otimes n_\fop$, $c_\fob\otimes n_\fob$ to the elements of $\scrS$. This motivates viewing a consistent, undirectional wall structure as a tropical $1$-cocycle, with the cocycle condition reflected by consistency in all codimensions. Let us denote the group of tropical $1$-cocycles by $C^{n-1}_{\mathrm{trop}}(B)$. Next note that the ring homomorphisms defined by undirectional walls have the same form as in applying open gluing data, see e.g.\ \cite[(5.2)]{Theta}. In the simple singularity case, the group of equivalence classes of lifted open gluing data obeying a similar local consistency condition is given by the cohomology group $H^1(B,\iota_*\check\Lambda\otimes\kk^\times)$ \cite[Prop.\,4.25, Def.\,5.1, Lem.\,5.5]{logmirror}. Now there is an obvious map \[ C^{n-1}_{\mathrm{trop}}(B)\lra H_{n-1}(B_0,\check\Lambda\otimes \kk\lfor t\rfor^\times), \] which by consistency factors over $H_{n-1}(B,\iota_*\check\Lambda\otimes \kk\lfor t\rfor^\times)$. Moreover, by \cite[Thm.\,1]{Ruddat} we have an isomorphism \[ H_{n-1}(B,\iota_*\check\Lambda\otimes \kk\lfor t\rfor^\times) \simeq H^1(B,\iota_*\check\Lambda\otimes \kk\lfor t\rfor^\times). \] Thus we obtain a homomorphism \[ C^{n-1}_{\mathrm{trop}}(B)\lra H^1(B,\iota_*\check\Lambda\otimes \kk\lfor t\rfor^\times).
\] This map associates to the tropical $1$-cocycle given by the undirectional wall structure $\scrS$ certain open gluing data that we denote $s_\scrS$. Now one can run the algorithm in \cite{affinecomplex} in two ways. First as usual with a cocycle representative of the gluing data $s_\scrS$, leading to a wall structure $\scrS'$. Second, starting with an initial wall structure that takes into account the reduction modulo $t$ of $\scrS$, interpreted as gluing data; then run the algorithm with the undirectional walls inserted at each order to obtain a wall structure $\scrS''$. Consistency of the undirectional wall structure $\scrS$ should be a necessary and sufficient condition for this to work. We conjecture that the two families $\foX',\foX''\to T$ obtained from $\scrS',\scrS''$ are isomorphic. One can compare $\foX',\foX''$ by choosing a general point in each cell as a reference point and relate the diagrams of schemes in \cite[\S\,2.6]{affinecomplex} by sequences of wall crossing automorphisms. To prove that this procedure induces an isomorphism of diagrams would require to carefully analyze the scattering algorithm on a neighborhood of the interior of each cell of $\P$, including the difference of the presence of undirectional walls versus the associated gluing data. For the application to our asymptotic wall structure $\scrS_\infty$ on $B_\infty$, one takes for the walls of the undirectional wall structure all undirectional walls of the wall structure on the asymptotically cylindrical $B$. Applying \cite[Prop.\,3.10,(2)]{affinecomplex} one can prove consistency at codimension $0$ joints order by order. For the slabs one observes that undirectional monomials in an unbounded slab $\fob$ are never of order $0$. Hence as in \cite[Thm.\,5.2]{affinecomplex} we can factor \[ f_\fob= \bar f_\fob\cdot f_\fob^\parallel \] with $f_\fob^\parallel\in \kk\lfor t\rfor[z^{\pm\xi}]^\times$ and $\bar f_\fob$ having a product decomposition with no factors in $\kk\lfor t\rfor[z^{\pm\xi}]$. We use $f_\fob^\parallel$ to define the slab of the undirectional wall structure on $B_\infty$ induced by $\fob$. A very careful analysis of the algorithm, which we have not carried out, should now show that this undirectional wall structure is consistent. \qed \end{remark} We finally relate the mirror of $\check\foD\to T$ to the LG-fiber $w^{-1}(1)$. This makes precise and proves a conjecture of Auroux in our setup \cite[Conj.\,7.4]{auroux1}. \begin{proposition} \label{Prop: mirror statement for w^{-1}(0)} If $\X\to\Spf\kk\lfor t\rfor$ is mirror dual to $(\check\X\to T,\check\D)$ then $w^{-1}(1)\to \Spf\kk\lfor t\rfor$ is the mirror family to $\check\foD\to T$ twisted by an undirectional wall structure as discussed in Remarks~\ref{Rem: parallel asymptotic monomials} and~\ref{Rem: Unidirectional walls from gluing data}.\footnote{In the interpretation of wall structures via punctured invariants \cite{CanonicalWalls}, undirectional walls count punctured Gromov-Witten invariants with one positive contact order with $\check\foD$, hence relate to traditional log Gromov-Witten invariants of $(\check\X,\check\foD)$. Further details will appear in \cite{GRS}. } \end{proposition} Most interestingly, the function $w$ in Proposition~\ref{Prop: w^(-1)(1)} and \eqref{Eqn: w} is closely related to the superpotential $W$. 
To explain this relation, note that for any fixed finite order $k$ we may restrict to the complement $U\subset B$ of a compact set such that each broken line $\beta$ ending in $U$ is parallel to the asymptotic vector field $\xi$. In other words, there exists $c\in\ZZ\setminus\{0\}$ with $m_\beta= c\cdot\xi$. There are two types of such broken lines, depending on the sign of $c$. If $c>0$ then $\beta$ is (part of) a broken line not intersecting any wall, and hence $c=1$ and the monomial carried by $\beta$ equals $z^\xi$. Otherwise $\beta$ is a broken line that returned from entering the compact set due to some non-trivial interaction with walls outside of $U$. We call these broken lines $\beta$ and the corresponding monomial $m_\beta$ at the root vertex \emph{outgoing}. In this case we can write $a_\beta z^{m_\beta} = a_\beta t^l z^{-d\xi}$ for some $l>0$ and $d=-c>0$. Summing over all such broken lines with general endpoint $p\in U$ leads to a polynomial with coefficients $N_{d,l}\in\kk$: \begin{equation} \label{Eqn: N(d,l)} h_k=\sum_{l=1}^k\sum_{d>0} N_{d,l}t^l z^{-d\xi}\in \kk[t,z^{-\xi}]. \end{equation} \begin{lemma} The coefficients $N_{d,l}\in\kk$ in \eqref{Eqn: N(d,l)} do not depend on the choices of $U$, $p\in U$ or $k\ge l$. \end{lemma} \begin{proof} Observe first that $h_k+w$ equals $W^k_{g,\fou}(p)$ from \eqref{Eqn: W^k}, for $\fou$ any unbounded chamber of $\scrS_k$ containing the chosen endpoint $p\in\fou$ of broken lines in~\eqref{Eqn: N(d,l)}, and $g:\omega\to\tau$ any inclusion of cells relevant to $\fou$, that is, with $\omega\cap\fou\neq \emptyset$, $\tau\subseteq\sigma_\fou$. Since $\xi$ is tangent to all walls intersecting $U$, this expression is unchanged under wall crossing automorphisms. The statement now follows from the independence of $W^k_{g,\fou}(p)$ of the choice of $p\in \fou$ (Lemma~\ref{lem:independence of p}) and the compatibility of $W^k_{g,\fou}$ with changing strata and chambers (Lemma~\ref{lem:change of chambers}). The statement on the independence of $k\ge l$ follows since a broken line $\beta$ interacting with a term in a wall of order $>k$ has an outgoing monomial $m_\beta$ of order $>k$. \end{proof} We are now in position to define the map relating $w$ and $W$ as the automorphism $\Phi$ of the formal algebraic torus \[ \hat\GG_m=\GG_m\times \Spf\kk\lfor t\rfor =\Spf\!\big( \kk[u^{\pm1}]\hat\otimes_\kk\kk\lfor t\rfor\big) = \Spf \kk[u^{\pm1}]\lfor t\rfor \] over $\kk\lfor t\rfor$ defined by \begin{equation} \label{Def: mirror map} \Phi^\sharp(u)= u+\sum_{l>0} \Big(\sum_{d>0} N_{d,l} u^{-d}\Big) t^l= u\left(1+\sum_{l>0} \Big(\sum_{d>0} N_{d,l} u^{-d-1}\Big) t^l\right). \end{equation} Note that for each fixed $l$ there are only finitely many broken lines contributing to the coefficient of $t^l$ in \eqref{Eqn: N(d,l)}. Hence $\Phi^\sharp(u)\in \kk[u^{\pm1}]\lfor t\rfor$ as required to define $\Phi$. Note also that $\Phi$ induces the identity on $\GG_m\subset \GG_m\times\Spf\kk\lfor t\rfor$, the reduction modulo $t$. \begin{definition} \label{Def: LG mirror map} We call the automorphism $\Phi$ of $\GG_m\times\Spf \kk\lfor t\rfor$ the \emph{Landau-Ginzburg (LG-) mirror map}. \end{definition} We emphasize that $\Phi$ is not in general induced by an automorphism of the scheme \[ (\GG_m)_{\Spec {\kk\lfor t\rfor}}= \Spec \kk\lfor t\rfor [u^{\pm1}]. \] For the following mirror map statement in the asymptotically cylindrical case we consider both $W|_{\X\setminus A}$ and $w$ as maps to $\hat\GG_m= \GG_m\times\Spf\kk\lfor t\rfor$. 
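Before stating the comparison of $W$ and $w$, here is a purely illustrative computation with hypothetical values of the counts, not derived from any particular wall structure. Suppose $N_{1,1}=1$ were the only non-vanishing coefficient in \eqref{Eqn: N(d,l)}, so that $\Phi^\sharp(u)=u+tu^{-1}$. Solving $v+tv^{-1}=u$ order by order in $t$ gives the inverse
\[
(\Phi^{-1})^\sharp(u)= u-tu^{-1}-t^2u^{-3}-2t^3u^{-5}-\cdots,
\]
which lies in $\kk[u^{\pm1}]\lfor t\rfor$ but not in $\kk\lfor t\rfor[u^{\pm1}]$, since arbitrarily negative powers of $u$ occur. This illustrates the remark above that $\Phi$ is in general not induced by an automorphism of $(\GG_m)_{\Spec\kk\lfor t\rfor}$.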
\begin{theorem} \label{Thm: W versus w} In the situation of Proposition~\ref{Prop: w^(-1)(1)} it holds $W|_{\X\setminus A}= \Phi\circ w$. \end{theorem} \begin{proof} The statement follows by observing that modulo $t^{k+1}$, \[ (w^\sharp\circ\Phi^\sharp)(u)= w+\sum_{l>0} \Big(\sum_{d>0} N_{d,l} w^{-d}\Big) t^l \] equals $W^k_{g,\fou}=h_k+w$ with $h_k$ as in \eqref{Eqn: N(d,l)}, for all unbounded chambers $\fou$. \end{proof} Theorem~\ref{Thm: W versus w} together with Proposition~\ref{Prop: mirror statement for w^{-1}(0)} yields the following. \begin{corollary} \label{Cor: fiber of W over 1 is mirror to check D} Let $\X\to\Spf\kk\lfor t\rfor$ be mirror dual to $(\check\X\to T,\check\D)$. Then $(\Phi^{-1}\circ W)^{-1}(1)\to \Spf\kk\lfor t\rfor$ is the mirror family of $\check\D\to T$ twisted by the undirectional wall structure discussed in Remarks~\ref{Rem: parallel asymptotic monomials} and~\ref{Rem: Unidirectional walls from gluing data}. \qed \end{corollary} \section{Wall structures and broken lines via tropical disks} \label{sect:tropical disks} We now aim for an alternative construction of the potential $W$ in terms of tropical disks.\footnote{ The interpretation of wall structures and the superpotential in terms of tropical disks of Maslov indices $0$ and $2$ in this section has a more speculative and inconclusive nature than the rest of the paper. Recent advances in our understanding of wall structures in the context of intrinsic mirror symmetry \cite{CanonicalWalls} make it now feasible to develop the picture given in this section in full generality. This section is therefore included with only minor changes from the original version, but should be read with some caution.} \subsection{Tropical disks} \label{subsec:disks} Our definition of tropical disks depends only on the integral affine geometry of $B$ and not on its polyhedral decomposition $\P$. As usual let $i: B\setminus\Delta \to B$ denote the inclusion for $\Delta$ the singular locus of the integral affine structure, and let $\Lambda_B$ be the sheaf of integral tangent vectors. We restrict to the asymptotically cylindrical case (Definition~\ref{Def: parallel edges}). Without reference to $\P$ we require that $B$ is non-compact and that for some orientable compact subset $K\subset B$, $\Gamma(B\setminus K, i_*\Lambda)$ has rank one. Then there exists a unique primitive integral affine vector field $\xi$ on $B\setminus K$ pointing away from $K$. We assume the semiflow of $\xi$ is complete and call its integral curves the \emph{asymptotic rays}. \begin{definition} \label{def:disk} Let $\Gamma$ be a tree with root vertex $\rootvertex$. Denote by $\Gamma^{[1]}, \Gamma^{[0]}, \leafvertices$ the sets of edges, vertices, and leaf vertices (univalent vertices different from the root vertex), respectively. We allow unbounded edges, that is, edges adjacent to only one vertex, defining a subset $\Gamma_\infty^{[1]}\subseteq \Gamma^{[1]}$. Let $w: \Gamma^{[1]} \to \mathbb{N}\setminus \{0\}$ be a weight function. Let $x\in B\setminus \Delta$. A \emph{tropical disk bounded by $x$} is a proper, locally injective, continuous map \[ h: \big( |\Gamma|, \{\rootvertex\} \big) \lra \big(B,\{x\} \big) \] with the following properties. \begin{enumerate} \item $h^{-1}(\Delta)= \leafvertices$. \item For every edge $E\in \Gamma^{[1]}$ the image $h(E\setminus\partial E)$ is a locally closed integral affine submanifold of $B\setminus \Delta$ of dimension one.
\item If $V\in\Gamma^{[0]}$ is a vertex of an edge $E\in\Gamma^{[1]}$, there is a primitive integral vector $m\in \Lambda_{B,h(V)}$ extending to a local vector field tangent to $h(E)$ and pointing away from $h(V)$. Define the \emph{tangent vector of $h$ at $V$ along $E$} as $\overline m_{V,E}:= w(E)\cdot m$. \item For every $V\in \Gamma^{[0]}\setminus \big(\leafvertices\cup\{\rootvertex\}\big)$ the following \emph{balancing condition} holds: \[ \sum_{\{E\in \Gamma^{[1]}| \,V\in E\} } \overline m_{V,E}= 0. \] \item The image of an unbounded edge is an asymptotic ray. \end{enumerate} Two disks $h: |\Gamma| \to B$, $h': |\Gamma'| \to B$ are identified if $h=h'\circ \phi$ for a homeomorphism $\phi: |\Gamma|\to |\Gamma'|$ respecting the weights. The \emph{Maslov index} of $h$ is defined as $\displaystyle \mu(h):= 2\sum_{E\in\Gamma^{[1]}_\infty} w(E)$. \qed \end{definition} Note that for a tropical disk $h$ the pull-back $h^*(i_*\Lambda_B)$ is a trivial local system. In particular, there is a unique parallel transport of tangent vectors along $h$. \begin{example} \label{triangle} \begin{figure}[h!] \input{discriminant_moduli.pspdftex} \caption{A tropical Maslov index zero disk bounding $x$ belonging to a moduli space of dimension $5$. The dashed lines indicate a part of the discriminant locus.} \label{fig:moduli} \end{figure} Suppose $\dim B=3$ and $\Delta$ bounds an affine $2$-simplex $\sigma$ with $T_x\sigma$ contained in the image of $i_*\Lambda_{B,x}$ for all $x\in \Delta$. Such a situation occurs in the mirror toric degenerations of local Calabi-Yau threefolds, for example in the mirror of $K_{\mathbb P^2}$ \cite[Expl.\,5.2]{Invitation}. Then any point $x\in \sigma \setminus \Delta$ bounds families of tropical Maslov index zero disks of arbitrarily large dimension, as illustrated in Figure~\ref{fig:moduli}. \end{example} So far, our definition of tropical disks only depends on $|\Gamma|$ and not on its underlying graph $\Gamma$. A distinguished choice of $\Gamma$ is obtained by requiring that there are no divalent vertices. At an interior vertex $V\in\Gamma^{[0]}$ (that is, neither the root vertex nor a leaf vertex) the rays $\RR_{\ge 0}\cdot \overline m_{V,E}$ of adjacent edges $E$ define a fan $\Sigma_{h,V}$ in $i_*\Lambda_{B,h(V)} \otimes_\ZZ\RR$. Denote by $\Sigma_{h,V}^0$ the parallel transport along $h$ of $\Sigma_{h,V}$ to $\rootvertex$. The \emph{type} of $h$ consists of the weighted graph $(\Gamma,w)$ along with the $\Sigma_{h,V}^0$ for $V$ an interior vertex. For $x\in B\setminus \Delta$ and $\overline m\in \Lambda_{B,x}$ denote by $\M_\mu(\overline m)$ the moduli space of tropical disks of Maslov index $\mu$ and root tangent vector $\overline m$. It comes with a natural stratification by type: A stratum consists of disks of fixed type, and the boundary of a stratum is reached when the image of an interior edge contracts, producing a vertex of higher valency. \smallskip From now on assume $B$ is equipped with a compatible polyhedral structure $\P$ as defined in \cite[\S1.3]{logmirror}. It is then natural to adapt $\Gamma$ to $\P$ by appending the following condition to Definition~\ref{def:disk}: \begin{enumerate} \item[(6)] For any $E\in\Gamma^{[1]}$ there exists $\tau\in \P$ with $h(\Int E)\subseteq \Int(\tau)$, and if $V\in E$ is a divalent vertex then $h(V)\in\partial\tau$. \end{enumerate} \noindent In other words, we insert divalent vertices precisely at those points of $|\Gamma|$ where $h$ changes cells of $\P$ locally. Note however that we still consider the stratification on $\M_\mu(\overline m)$ defined with all divalent vertices removed.
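For orientation, here is a schematic illustration of Definition~\ref{def:disk}, not tied to a particular $(B,\P)$. Suppose that at a trivalent interior vertex $V$ two of the adjacent edges have weight one and tangent vectors $\overline m_{V,E_1}=(1,0)$ and $\overline m_{V,E_2}=(0,1)$ in some local chart. Then the balancing condition (4) forces $\overline m_{V,E_3}=(-1,-1)$ along the third edge. If the disk moreover has a single unbounded edge, of weight one, its Maslov index is $\mu(h)=2$. Letting the edge carrying $(-1,-1)$ contract against a further trivalent vertex produces a four-valent vertex, which is exactly how the boundary of a stratum of $\M_\mu(\overline m)$ is reached.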
\begin{example} \begin{figure}[h] \input{cells.pspdftex} \caption{Disks near $\Delta$ (left) and their moduli cell complex (right).} \label{fig:cells} \end{figure} As it stands, the type does not define a good stratification of the moduli space of tropical disks. For each vertex $V\in\Gamma$ mapping to a codimension one cell $\rho\in\P$ we also need to specify the connected component of $\rho \setminus \Delta$ containing $h(V)$ (that is, specify a reference vertex $v\in\rho$). This is illustrated in Figure~\ref{fig:cells}. Here the dotted lines in the right picture correspond to generalized tropical disks, fulfilling all but (1) in Definition~\ref{def:disk}. \end{example} Tropical disks are closely related to broken lines as follows. We place ourselves in the context of \S\ref{Ch: Broken lines}. In particular, we assume given a structure $\scrS_k$ that is consistent to order $k$. \begin{lemma}[Disk completion]\footnote{Cf.\ \cite[Prop.4.13]{CanonicalWalls} for a refined treatment in a slightly different setup.} \label{line-disk} As a map, any broken line is the restriction of a Maslov index two disk $h: |\Gamma| \to B$ to the smallest connected subset of $|\Gamma|$ containing the root vertex and the (unique) unbounded leaf. The restriction of $h$ to the closure of the complement of this subset consists of Maslov index zero disks. \end{lemma} \begin{proof} We continue to use the terminology of \cite{affinecomplex}. First we show that any projected exponent $\overline m$ at a point $p$ of a wall or slab in $\scrS_k$ is the root tangent vector of a Maslov index zero disk $h$ rooted in $h(\rootvertex)=p$. This is true for $\scrS_0$ since by simplicity the exponents of a slab function $f_{\rho,v}$ are root tangent vectors of Maslov index zero disks with only one edge. Assume inductively this holds as well for $\scrS_{l}$, $0\le l \le k$, and show the claim for walls in $\scrS_{l+1}\setminus \scrS_{l}$ arising from scattering. We must show that the exponents of the outgoing rays are generated by those of the incoming rays or cuts. But if there existed an additional exponent, it would be preserved by any product with log automorphisms attached to the rays or cuts, as up to higher orders the latter are multiplications by polynomials with non trivial constant term. This contradicts consistency. In particular, if $p=\beta(t_i)$ is a break point of a broken line $\beta$ then $t_i$ can be turned into a balanced trivalent vertex by attaching a Maslov index zero disk $h$ with root tangent vector $\overline m$ equal to the projected exponent taken from the unique wall or slab containing $p$. \end{proof} \noindent We call any tropical disk as in the lemma a \emph{disk completion} of the broken line. The disk completion is in general not unique due to the following reasons: \begin{enumerate} \item First, Example~\ref{triangle} shows that tropical Maslov index zero disks may come in families of arbitrarily high dimension. \item Even if the moduli space of tropical Maslov index zero disks is of expected finite dimension, there may be joints with different incoming root tangent vectors. \item There may exist several Maslov index zero disks with the same root tangent vector, for example a closed geodesic of different winding numbers. \end{enumerate} We now take care of these issues. 
\subsection{Virtual tropical disks} Example~\ref{triangle} illustrates that for $\dim B\geq 3$, a tropical disk whose image is contained in a union of slabs leads to an unbounded dimension of the moduli space of tropical Maslov index $2\mu$ disks. In order to get enumerative invariants which recover broken lines we need a virtual count of tropical disks. Throughout we assume $B$ is oriented. \smallskip Suppose $\Delta$ is straightened as in \cite[Rem.\,1.49]{logmirror}, that is, $\Delta$ defines a subcomplex $\Delta^\bullet$ of the barycentric refinement of the polyhedral decomposition $\P$ of $B$. Note that the simplicial structure of $\Delta^\bullet$ refines the natural stratification of $\Delta$ given by local monodromy type. Let $\Delta_\max$ denote the set of maximal cells of $\Delta ^\bullet$ together with an orientation, chosen once and for all. Each $\tau \in \Delta_\max$ is contained in a unique $(n-1)$-cell $\rho\in\P$. Then monodromy along a small loop about $\tau$ defines a monodromy transvection vector $m_\tau\in\Lambda_\rho$, where the signs are fixed by the orientations via some sign convention. In view of the orientations of $\tau$ and $B$ we can then also choose a maximal cell $\sigma_\tau\supset \tau$ unambiguously. For each $\tau \in \Delta_\max$ let $w_\tau$ be the choice of a partition of $|w_\tau|\in \mathbb N$ (with $w_\tau = \emptyset$ for $|w_\tau|=0$). To separate leaves of tropical disks we will now locally replace $\Delta$ by a branched cover. We can then consider deformations of a disk $h$ whose leaves end on that cover instead of $\Delta$, with weights and directions prescribed by the partitions $\mathbf{w}:=(w_\tau|\tau \in \Delta_\max)$. \medskip \paragraph{\emph{Deformations of $\Delta$}} We first define a deformation of the barycentric refinement $\Delta_\bary$ of $\Delta$ as a polyhedral subset of $B$. For each $\tau \in \Delta_\max$, denote by $\ray_\tau \subseteq \sigma_\tau$ the 1-cell connecting the barycenter $b_\tau$ of $\tau$ to the barycenter of $\sigma_\tau$. Note that $\Lambda_{\ray_\tau} \otimes_\ZZ \RR$ intersects $i_*\Lambda_{b_\tau} \otimes_\ZZ \RR= \mathrm{span}(\Lambda_\tau,m_\tau)$ transversally. Moving the barycenter of the barycentric refinement $\tau_\bary$ of $\tau$ along $\ray_\tau$ while fixing $\partial \tau_\bary$ now defines a piecewise linear deformation $\tau_s$ of $\tau$ over $s\in \ray_\tau$ as a polyhedral subset of $\sigma_\tau$. Thus we obtain a deformation $\{ \Delta_s |s \in S\}$ of $ \Delta$ over the cone $S:=\prod_\tau \ray_\tau$. It is trivial as deformation of cell complexes, as parallel transport in direction $\ray_\tau$ in each cell $\sigma_\tau$ induces an isomorphism of cell complexes $\Delta_\bary \cong \Delta_s$. For an infinitesimal point of view let $i_\tau:\tau\to\sigma$ be the inclusion. Consider the preimage of the deformation of $\tau\subseteq\Delta$ under the natural inclusion $\sigma_\tau \hookrightarrow i_\tau^*T\sigma_\tau$. For $s=(s_1,\ldots, s_{\length w_\tau})\in \ray_\tau^{\length w_\tau}$ with $s_i$ pairwise different, \[ \cover:= \bigcup_{k=1}^{\length w_\tau} \tau_{s_k} \subseteq i_\tau^*T\sigma_\tau \] is then a $\length w_\tau$-fold branched cover of $\tau$, via the natural projection \[ \pi: i_\tau^*T\sigma_\tau \to \tau. \] Note that $\cover = \emptyset$ if $|w_\tau|=0$ and $\cover \subseteq \tau_{s'}^{\mathbf w'}$ if $\length w_\tau \leq \length w_\tau'$ and if the entries of $s$ agree with the first $\length w_\tau$ entries of $s'$. 
We make $\tau_s^{\mathbf w}$ into a weighted cell complex by equipping each cell of $\cover$ with the weight defined by the partition $w_\tau$. Finally, set $\defbase:= \prod_\tau \ray_\tau^{\length w_\tau}$ and $\Cover:= \bigcup_\tau \tau^{\mathbf w}_{s_\tau}$, where $s \in \defbase$. We still call $\Cover$ a deformation of $\Delta$, as for $\epsilon \to 0$, $\tilde \Delta_{\epsilon s}^{\mathbf w}$ converges to $\Delta$ as \emph{weighted} complexes in an obvious sense. \medskip \paragraph{\emph{Deformations of tropical disks}} We now want to define a virtual tropical disk as an infinitesimal deformation $\tilde h$ of a tropical disk $h$ such that the leaves of $\tilde h$ end on $\Cover$ as prescribed by $\mathbf w$. The idea is that for small $\epsilon>0$ and suitable neighborhoods $ U_v\subseteq T_vB$ of $0$, where $v=h(V)$ is the image of an interior vertex, the rescaled exponential map $\exp\big|_{\bigcup_v \left(\frac1\epsilon U_v\right)} \circ \epsilon \id_{T(B\setminus \Delta)}$ maps the union of the tropical curves $\tilde h_V$ in the following Definition~\ref{Def: virtual tropical disk} onto the image of a tropical disk $\tilde h_\epsilon: \tilde \Gamma \to B$ with leaves emanating from $ \Delta_{\epsilon s}^{\mathbf w} $. By choosing $\epsilon>0$ sufficiently small, the image of $\tilde h_\epsilon$ is contained in an arbitrarily small neighborhood of the image of $h$. Thus $\tilde h_\epsilon$ indeed defines a deformation of $h$. Conversely, for $\epsilon$ sufficiently small, $\tilde h_\epsilon$ determines $\tilde h$ uniquely. Hence in order to simplify language and visualization, we may and will identify a virtual curve $\tilde h$ with its ``images'' $\tilde h_\epsilon$ in $B$ for small $\epsilon>0$. \begin{definition} \label{Def: virtual tropical disk} Let $h: |\Gamma|\to B$ be a tropical disk not intersecting $|\Delta^{[\dim B-3]}|$. A \emph{virtual tropical disk $\tilde h$ of intersection type $\mathbf w$ deforming $h$} consists of: \begin{enumerate} \item For each interior vertex $V\in\Gamma^{[0]}$, a possibly disconnected genus zero ordinary tropical curve $\tilde h_V: \tilde \Gamma_V \to T_vB$ with respect to the fan $\Sigma_{h,V}$. This means that $\tilde \Gamma_V$ is a possibly disconnected graph with simply connected components and without di- and univalent vertices, the map $\tilde h_V$ satisfies conditions (2)--(4) of Definition \ref{def:disk}, while instead of (5) the unbounded edges are parallel displacements of rays of $\Sigma_{h,V}$. \item A cover $\tilde h_E$ of each edge $E$ of $h$ by weighted parallel sections of the normal bundle $h|_E^*TB/Th(E)$. For each edge $E$ adjacent to an interior vertex $V$, we require that the inclusion defines a weight-preserving bijection between the cosets of $\tilde h_E^{-1}(V)$ over $V$ and rays of $\tilde h_V$ in direction $E$. Moreover, the intersection defines a weight-preserving bijection between the cosets of $\{ \tilde h_E^{-1}(V) \, | \, h( V)\in \tau, E\ni V \}$ over the leaf vertices in $\tau$ and branches of $\cover$. \item A virtual root position, that is, a point $\tilde h_{\rootvertex} (\widetilde\rootvertex)$ in $T_{h(\rootvertex)}B$ such that $\tilde h_{\rootvertex}(\widetilde\rootvertex) + Th(E) = \tilde h_E^{-1}({\rootvertex})$, where $E$ is the root edge. \end{enumerate} \end{definition} We denote by $\M_{\mu}(\Cover,h)$ the moduli space of virtual Maslov index $\mu$ disks of intersection type $\mathbf w$ deforming $h$.
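To fix the combinatorial notation in a concrete, hypothetical case: suppose a single $\tau\in\Delta_\max$ meets the leaves of $h$ and $w_\tau=(2,1,1)$. Then $|w_\tau|=4$ and $\length w_\tau=3$, the branched cover has three branches over $\tau$ carrying the weights $2,1,1$, and a virtual disk of intersection type $\mathbf w$ has three leaf vertices over $\tau$, matched weight-preservingly with these branches. The partition $(2,1,1)$ admits exactly two automorphisms, the identity and the transposition of the two entries equal to $1$; this is the kind of count entering the normalization $|\Aut(\mathbf w)|$ of the virtual multiplicity defined below.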
In order to exclude the phenomena in Example~\ref{triangle}, we now restrict to sufficiently general tropical disks. For such tropical disks a local deformation of the constraints on $\tilde h(\tilde \Gamma^{[0]})$ lifts to a local deformation of $\tilde h$ preserving the type. Formally, we define: \begin{definition} Let $ s\in \defbase$ and $\mu \in\{0,1\}$. A virtual tropical disk $\tilde h \in \M_{2\mu}(\Cover,h)$ is \emph{sufficiently general} if: \begin{enumerate} \item $\tilde h$ has no interior vertices of valency higher than three, \item all intersections of $\tilde h$ with the codimension one cells of $\P$ are transverse intersections at divalent vertices outside $|\P^{[\dim B-2]}|$, \item there exists a subspace $L\subseteq T_{h(\rootvertex)}B$ of dimension $\max\{1-\mu,0\}$ and an open cone $\defcone \subseteq \defbase$ containing $s$ such that the natural map \begin{equation} \label{eq:constraintmap} \pi \times \mathrm{ev}_{\widetilde \rootvertex}: \quad \bigcup_{s\in \defcone}\M_\mu(\Cover,h) \lra \defcone \times \big(T_{h(\rootvertex)}B\big)/ L \end{equation} is open. \end{enumerate} $\Cover$ is in \emph{general position} if for all Maslov index zero disks $h$ the complement of the set $\M_0(\Cover,h)^{gen}$ of sufficiently general disks in $\M_0(\Cover, h)$ is nowhere dense. \end{definition} \begin{lemma} \label{dimModuli} Given $\mathbf w$, the space of non-general position deformations of $\Delta$ is nowhere dense in $S^{\mathbf w}$. For general position, $\M_{2\mu}(\Cover,h)$ is of expected dimension $\dim B+\mu-1$. \end{lemma} \begin{proof}(Sketch) Consider a generalized class of tropical disks by forgetting the leaf constraints, allowing edge contractions and replacing condition~(6) by the assumption that the graph contains no divalent vertices. Fix a type with a trivalent graph $\Gamma$. Then any tropical disk of the given type is determined by the position of the root vertex $x$ and the lengths of the $N:=| \Gamma^{[1]} | -\mu = 2| \leafvertices|-1 -\mu$ bounded edges. This shows that the inverse map restricts to an open embedding $ \bigcup_{s\in \defbase} \M_\mu(\Cover)\to B\times \mathbb R^{N}_{\geq 0}$ under obvious identifications dictated by $\Gamma$. The statement now follows from the observation that the map \eqref{eq:constraintmap} expressed in the induced affine structure on $\M_\mu(\Cover,h)$ is piecewise linear, and any violation of stability defines a subset of a finite union of hyperplanes. In particular, the dimension statement follows by noting that the positions of the leaf vertices define constraints of codimension $2 (| \leafvertices | - \mu)$. \end{proof} \begin{remark} \label{foj} A stratum of $\M_0(\overline m)$ admits a natural affine structure. Hence a disk $h\in \M_0(\overline m)^{[k]}$ belonging to a $k$-dimensional stratum naturally comes with the $k$-dimensional subspace of induced infinitesimal vertex deformations \[ \foj_V(h)^{[k]}:=T_h \mathrm{ev}_{V}(T_h\M_0^{[k]}(\overline m)) \subseteq T_{h(V)}B. \] Likewise, infinitesimal deformations of a sufficiently general Maslov index zero disk $\tilde h$ give rise to \emph{virtual joints}, that is, the codimension two subsets defined by restricting the deformation family to the vertices. Such virtual joints converge to some codimension two subspace $\foj_V(\tilde h)\subseteq T_{h(V)}B$, as $s \to 0 \in\defcone$.
Moreover, if the limiting disk $h$ of $\tilde h$ belongs to a $(\dim B- 2)$-stratum of $\M_0(\overline m)$ such that \eqref{eq:constraintmap} extends to an open map at $0\in \partial \defcone$, then $ \foj_V(h)^{[\dim B-2]} = \foj_V(\tilde h)$. This may be used to define stability for tropical disks. \end{remark} \subsection{Structures via virtual tropical disks} We now relate the counting of virtual Maslov index zero disks to the structures of \cite{affinecomplex}. Let $\scrB$ be the set of closures of connected components of $\rho\setminus\Delta$, for $\rho$ running over the codimension one cells of $\P$. For $\fob\in \scrB$ contained in $\rho\in \P^{[n-1]}$ and $v\in\rho$ a vertex contained in $\fob$ denote by $f_\fob:= f_{\rho,v}$ the order zero slab function attached to $\fob \in \scrB$ via the log structure. Then $f_\fob \in \kk[C_\fob]$ where $C_\fob$ is the monoid generated by one of the two primitive invariant $\tau$-transverse vectors $\pm m_{\tau}$ for each positively oriented $\tau \in \Delta_\max$ with $\overline \fob \cap \tau \not = \emptyset$. Let $k \in \mathbb{N}$. Define the order $k$ scattering parameter ring by \[ R^{k}:= \kk[t_{ \tau} \, | \, \tau \in \Delta_\max ] / \mathcal I_{k}, \quad \mathcal I _{k}:= (t_{ \tau}^{k+1}| \tau \in \Delta_\max), \] and let $\widehat R$ be its completion as $k\to \infty$. As $f_\fob$ has a non-trivial constant term, we can take its logarithm as in \cite{gps}: \begin{equation} \label{logslab} \log f_{\fob} = \sum_{\overline m\in C_{\fob} } \length(\overline m) a_{\fob, \overline m} z^{\overline m} \in \kk\lfor C_\fob\rfor, \end{equation} defining virtual multiplicities $a_{\fob, \overline m}\in \kk$. We consider $\log f_{\fob}$ as an element of $\kk\lfor C_\fob\rfor \otimes _\kk\widehat R$ via the completion of the inclusions \[ \iota_k: \kk[C_\fob]\to \kk[C_{\fob}]\otimes_\kk R^{k}, \quad z^{m_{\tau}} \mapsto z^{m_{\tau }} t_{\tau}. \] \begin{definition} Attach the following numbers to a sufficiently general virtual tropical disk $\tilde h \in \M_\mu(\Cover,h)^{gen} $: \begin{enumerate} \item The virtual multiplicity of a vertex $V\in\effvert$ of $\tilde h $ is \[ \vmult_V(\tilde h):=\left\{ \begin{array}{ll} a_{\fob, \overline m} & \text{if $V$ is univalent, $\pi (\tilde h(V)) \in \fob $}\\ s(\overline m) & \text{if $V$ is divalent } \\ \big|\overline m \wedge \overline m'\big|_{\foj_V(h)} & \text{if $V$ is trivalent} \end{array}\right. \] where $\overline m$ denotes the tangent vector of $\tilde h$ at $V$ in the direction leading to the root, $\overline m'$ the tangent vector of a different edge of $\tilde h$ at $V$, $a_{\fob,\overline m}$ the coefficients in \eqref{logslab}, $s: \Lambda_{B,\tilde h(V)} \to \kk^\times$ the change of stratum function at $\tilde h(V)$ coming from the gluing data, and the last expression the quotient density on $T_{\tilde h(V)}B/\foj_V(\tilde h)$ induced from the natural density on $B\setminus \Delta$. Explicitly, letting $\overline j_1, \ldots,\overline j_{n-2}$ be generators of $ \foj_V(\tilde h) \cap \Lambda_{B,h(V)} $ (cf. Remark~\ref{foj}) then \[ \big|\overline m \wedge \overline m'\big|_{\foj_V(h)} := |\overline m\wedge \overline m' \wedge \overline j_1 \wedge \ldots \wedge \overline j_{n-2}|. \] \item The virtual multiplicity $\vmult(\tilde h)$ of $\tilde h$ is the total product \[ \vmult(\tilde h):=\frac{1}{|\Aut(\mathbf w)|} \cdot \prod_{V \in \effvert } \vmult_{V}(\tilde h).
\] Here $\mathbf w$ is the intersection type of $\tilde h$, and $|\Aut(\mathbf w)|$ is the product of the numbers of automorphisms\footnote{that is, of permutations of the entries of $w_\tau$ that do not change the partition} of the partitions $w_\tau$ over all $\tau \in \Delta_\max$. \item The $t$-order of $\tilde h$ is the sum of the changes in the $t$-order of the tangent vectors $\overline m_V$ at divalent vertices $V$ under changing the adjacent maximal strata $\sigma^\pm_V$, that is, \[ \ord \tilde h := \sum _{\substack{V\in \Gamma^{[0]}:\\ \tilde h(V) \in \sigma^+_V\cap \sigma^-_V \in \P^{[n-1]} } } \left|\left< d\varphi|_{\sigma^-_V} - d\varphi|_{\sigma^+_V}, \overline m_V\right>\right|. \] \end{enumerate} \end{definition} \begin{remark} \label{t-order} The $t$-order may be considered as a combinatorial analogue of the symplectic area of a holomorphic disk. \end{remark} Note that the virtual multiplicity of a sufficiently general tropical disk depends only on its type. Moreover, we have: \begin{lemma} The virtual multiplicity of a (type of) sufficiently general tropical disk $\tilde h$ of intersection type $\mathbf w$ deforming a Maslov index zero disk $h$ is independent of the choice $s\in \defbase$ of the general position deformation $\Delta_s$. \end{lemma} \begin{proof} We only give a very rough sketch here: If the type only changes by the number of divalent vertices, the claim follows immediately from the definition of $\varphi$ as a continuous and piecewise linear function. In dimension two, the result then reduces to a standard one, cf.~\cite{gat}. In higher dimension, the only remaining unstable hyperplanes consist of disks with a four-valent vertex. Here the independence of their stable deformations reduces to the dimension two case, as the virtual multiplicity is invariant under splitting each edge of $\Gamma$, acting with the stabilizer $SL(n,\mathbb Z)_{\foj_V(h)}$ on each fan $\Sigma_{h,V}$, and regluing formally. \end{proof} We are now ready for our central definitions. Denote by $\# \mathcal M_0(\mathbf w,\overline m,\ell)^{gen}$ the number of types of sufficiently general virtual tropical disks with intersection type $\mathbf w$ and $t$-order $\ell$ which deform a tropical Maslov index zero disk with root tangent vector $\overline m\in \Lambda_{B\setminus \Delta, x}$, counted with virtual multiplicity. More generally, for $\mu\in \{0,2\}$ we can define $\# \mathcal M_\mu(\mathbf w,\overline m,\ell)^{gen} $ by counting the corresponding disks themselves, but specifying the virtual root as follows: The virtual root is $0\in T_xB$ if $\mu =2$, and belongs to a line in $T_{h(\rootvertex)}B$ transverse to $\foj_{\rootvertex}(\tilde h)$ if $\mu = 0$. Let $\sigma \in \P_\max$ and let $ P_{v,\sigma}$ be the associated monoid at $v\in \sigma$, determined by $\varphi$ as in \cite[Constr.\,2.7]{affinecomplex}. Define the \emph{counting function} to order $k$ in $x\in \sigma$ by \begin{equation} \label{countfct} \log f_{\sigma, x}:= \sum_ {\substack{\overline m \in \Lambda_{B,x}, \\ \ell \leq k} } \sum_ {\mathbf w} \length (\overline m)\#\mathcal M_0 (\mathbf w, \overline m, \ell)^{gen}z^{\overline m}t^{\ell} \prod_{ \tau} t_{ \tau}^{|w_{ \tau}| }, \end{equation} which is an element of the ring $\kk[P_{v,\sigma}]\otimes_\kk R^k$. \begin{conjecture} \label{conject} For each $k \in \mathbb N$, the counting polynomial \eqref{countfct} modulo $(t^{k+1})$ stabilizes in $\mathbf w$ and then maps to the rings $\targetring$ via $t_\tau \mapsto 1$.
The sets \begin{equation} \label{wall} \fop^k[x]:= \overline{ \set{ y\in \sigma \setminus \partial \sigma } {\log f_{\sigma,y} =\log f_{\sigma,x} \not = 0 \in \targetring}}, \end{equation} are either empty or define polyhedral subsets of codimension at least one. Up to refinement and adding trivial walls, the $\fop^k[x]$ together with their functions $\log f_{\sigma,y}$ reproduce the consistent wall structure $\scrS_k$ constructed in \cite{affinecomplex}. \end{conjecture} \begin{remark} Note that general position of $\Delta$ is not essential as long as we obtain the same virtual counts. \end{remark} \begin{proposition} The conjecture is true for $\dim B = 2$. \end{proposition} \begin{proof} (Sketch) It is sufficient to show the following two claims: \paragraph{\underline{Claim 1}: Over $R^k$, the counting monomials arise via scattering.} This can be proved by first decomposing $\exp f_{\sigma,x} $ into products of binomials and then proceeding inductively by applying \cite[Thm.\,2.7]{gps} to each joint. Alternatively, one can adapt their proof directly: Consider the thickening \[ \iota_\tau: \; \kk[t_\tau ]/(t_\tau^{k+1}) \lra \frac{\kk[u_{\tau i} \,|\, 1\leq i \leq k ]}{(u_{\tau i}^2 \,|\, 1\leq i \leq k ) },\quad t_{\tau} \longmapsto \sum_{i=1}^{k} u_{\tau i}, \] inducing a thickening $\bigotimes_{ \tau} \iota_{ \tau}: R^k\to \tilde R^k$ of the scattering parameter ring. Then consider virtual tropical disks with respect to the $2^k$-fold branched covering of $\Delta$ whose branches $\tau_s^J$ over $\tau$ are labeled by $J \subseteq \{1,\ldots,k\}$. Denote $u_{\tau J}:= \prod_{i\in J}u_{\tau i}$. We say such a disk is special if it has the following additional properties: The weight of a leaf is $|J|$ if it emanates from $\tau_s^J$, and $u_{\tau J} u_{\tau J'} \not = 0$ whenever there are leaf vertices in $\tau_s^J$ and $\tau_s^{J'}$. We can now attach the following function to the root tangent vector $-\overline m_{\tilde h}$ of such disks: \begin{equation} \label{f_h} f_{\tilde h} := 1 + \length(\overline m_{\tilde h}) \vmult'(\tilde h) \cdot z^{\overline m_{\tilde h}}t^{\ord \tilde h} \hspace{-1.5em} \prod_{\tau^J_s \cap \tilde h(\vvert) \not = \emptyset} \hspace {-1em} |J|! \cdot u_{\tau J}, \end{equation} where $\vmult'$ equals $\vmult$ without the combinatorial factor $|\Aut(\mathbf w)|$. Now the order~0 terms appearing in the thickening of the exponential of \eqref{countfct} are precisely the $f_{\tilde h}$ of those special disks $\tilde h$ that contain only one edge. The others indeed arise from scattering, meaning the following: Whenever the root vertices of two special disks $\tilde h, \tilde h'$ map to the same point $p$ with transverse root leaves, there are at most two ways to extend both disks beyond $p$ locally: Either glue them into a single tropical disk, which is possible only if $f_{\tilde h}f_{\tilde h'}\not = 0$, or enlarge the root leaves such that $p$ stays a point of intersection, which is always possible. The functions attached to the two old and the three new roots then define a consistent scattering diagram, that is, the counterclockwise product of the automorphisms \[ z^{\overline m} \mapsto z^{\overline m} f_{\tilde h}^{|\overline m \wedge \overline m_{\tilde h}|} \] equals one. This is the content of \cite[Lem.\,1.9]{gps}, to which we refer for details.
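For orientation, we record an elementary computation consistent with the factor $|J|!$ appearing in \eqref{f_h}: since $u_{\tau i}^2=0$, one finds for $j\le k$
\[
\iota_\tau\big(t_\tau^j\big)=\Big(\sum_{i=1}^{k} u_{\tau i}\Big)^j= j!\sum_{|J|=j} u_{\tau J},
\qquad
\iota_\tau\big(t_\tau^{k+1}\big)=0,
\]
so $\iota_\tau$ is indeed well-defined on $\kk[t_\tau]/(t_\tau^{k+1})$, and a power $t_\tau^j$ is distributed over all branches $\tau_s^J$ with $|J|=j$, each contributing $j!\,u_{\tau J}$.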
Now the proof of \cite[Thm.\,2.7]{gps} shows that the sum $\sum_{\tilde h} \log f_{\tilde h}$ over all special disks with $k$-intersection type $\mathbf w$, root tangent vector $-\overline m$ and $t$-order $\ell$ equals the thickening of the corresponding monomial in \eqref{countfct}. \paragraph{\underline{Claim 2}: The counting functions \eqref{countfct} can be lifted.} We must show that the scattering diagrams at each joint $\foj_V(\tilde h)$ produce liftable monomials: In case of a codimension zero joint this follows from the observation that each incoming non constant ray monomial has $t$-order greater than zero. Hence working modulo $(t^\ell)$ implies working up to a finite $k$-order. In case of codimension one joints, by assumption there is only one non constant monomial of zero $t$-order present in each scattering diagram, namely that given by the log structure. In this case, we can apply \cite{kansas}. Finally, there are no codimension two joints by assumption. \smallskip From both claims it follows that the gluing functions of both constructions must indeed coincide, as by our assumption on $\Delta$ both rely on scattering only, and scattering is unique up to equivalence. \end{proof} \subsection{Virtual counts of tropical Maslov index two disks} Assume now $(B,\P,\varphi)$ fulfills Conjecture~\ref{conject}, for example $\dim B=2$. \begin{proposition} \label{Prop: Virtual count} The coefficient $a_\beta $ of the last monomial $z^{m_\beta}$ of a general broken line $\beta$ is the virtual number of tropical Maslov index two disks with root tangent vector $m_\beta$ which complete $\beta$ as in Lemma~\ref{line-disk} and whose $t$-order equals the total change in the $t$-order of the exponents along $\beta$. \end{proposition} \begin{proof} Let $az^m, a'z^{m'}$ be the functions attached to the edges adjacent to a fixed break point $\beta(t)$ of $\beta$. Let $h$ be a virtual Maslov index zero disk bounded by $\beta(t)$ and with root tangent vector the required difference $\overline m- \overline m'$. Define the completed multiplicity of $h$ as the virtual multiplicity of $h$ times that of the break point. By definition, $a'/a$ is a coefficient in the exponential of $a_\fob:=|\overline m \wedge \overline m'| \log f_{\fob}$ for the function $f_\fob$ belonging to the wall or slab with tangent space containing $\overline m-\overline m'$. By formula \eqref{countfct}, $a_\fob$ equals the virtual number of completing virtual Maslov index zero disks with root tangent leaf $\overline m - \overline m'$ and $t$-order $\ell$, completed by the break point multiplicity. Hence the required coefficient in $\exp a_\fob$ is given by counting {\em disconnected} virtual tropical disks of total $t$-order $\ell$ and total root tangent leaf $\overline m$ with completed multiplicity. \end{proof} \section{Toric degenerations of del Pezzo surfaces and their mirrors} \label{sect:delpezzo} In this section we will compare superpotentials for mirrors of toric degenerations of del Pezzo surfaces, using broken lines and tropical Maslov index two disks. Recall that apart from $\PP^1 \times \PP^1$ all non-singular del Pezzo surfaces $dP_k$ can be obtained by blowing up $\PP^2$ in $0 \leq k \leq 8$ points. Note that $dP_k$ for $k\ge 5$ is not unique up to isomorphism but has a $2(k-4)$-dimensional moduli space. For the anticanonical bundle to be ample the blown-up points need to be in sufficiently general position. 
This means that no three points are collinear, no six points lie on a conic and no eight points lie on an irreducible cubic which has a double point at one of the points. However, rather than ampleness of $-K_X$, it is the existence of certain toric degenerations that is central to our approach. For example, our point of view naturally includes the case $k=9$. \subsection{Toric del Pezzo surfaces} Up to lattice isomorphism there are exactly five toric del Pezzo surfaces $X(\Sigma)$ whose fans $\Sigma$ are depicted in Figure~\ref{fig:toric_fans}, namely $\PP^2$ blown up torically in at most three distinct points and $\PP^1 \times \PP^1$. To construct distinguished superpotentials for these surfaces we consider the following class of toric degenerations. For the definition note that by the Grothendieck algebraization theorem a toric degeneration $(\check\X\to T,\check\D)$ with $\check\D$ relatively ample can be algebraized. By abuse of notation we write $(\check\X_\eta,\check\D_\eta)$ for the generic fiber of an algebraization, and also simply speak of the \emph{generic fiber of $(\check\X\to T,\check\D)$}. \begin{figure} \input{toric_fans.pspdftex} \caption{Fans of the five toric del Pezzo surfaces} \label{fig:toric_fans} \end{figure} \begin{definition} \label{Def: toric degen dP_k} A \emph{distinguished toric degeneration of Fano varieties} is a toric degeneration $(\check\X \to T, \check\D)$ of compact type (Definition~\ref{Def: compact type}) with $ \check\D$ relatively ample over $T$ and such that the generic fiber $ \check\D_{\eta}\subseteq \check \X_{\eta}$ of an algebraization is an anticanonical divisor in a Gorenstein surface. The \emph{associated intersection complex} $( \check B, \check\P, \check\varphi)$ is the intersection complex of $( \check\X \to T, \check\D)$ polarized by $\O_{ \check\X}( \check\D)$. \end{definition} The point of this definition is both the irreducibility of the anticanonical divisor and the fact that this divisor extends to a polarization on the central fiber. If the generic fiber $\check\X_\eta$ is a surface then it is a $dP_k$ for some $k$, together with a smooth anticanonical divisor. Starting from a reflexive polytope, there is a canonical construction of the intersection complex of a distinguished toric degeneration as follows. \begin{construction} \label{Const: Reflexive polytope degeneration} Let $\Xi$ be a reflexive polytope and $v_0\in \Xi$ the unique interior integral point. Define the polyhedral decomposition $\check \P$ of $\check B=\Xi$ with maximal cells the convex hulls of the facets of $\Xi$ and $v_0$. The affine chart at $v_0$ is the one defined by the affine structure of $\Xi$. At any other vertex define the affine structure by the unique chart compatible with the affine structure of the adjacent maximal cells and making $\partial \check B$ totally geodesic. This works because, by reflexivity, for any vertex $v$ the integral tangent vectors of any adjacent facet together with $v-v_0$ generate the full lattice. Moreover, $(\check B,\check \P)$ has a natural polarization by defining $\check\varphi(v_0)=0$ and $\check\varphi(v)=1$ for each other vertex $v$. The tropical manifold $(\check B,\check \P)$ obtained in this way does not generally have simple singularities. Using a similar computation from \cite{GBB} on the Legendre dual side one can show, however, that $(\check B,\check\P)$ has simple singularities iff the normal fan $\Sigma$ of $\Xi$ is elementary simplicial, meaning that each cone is the cone over an elementary simplex.
In dimensions two and three this is equivalent to requiring the toric variety $\check X(\Xi)$ with momentum polytope $\Xi$ to be smooth. See \cite[Constr.\,5.2]{MaxThesis} for more details. Assuming that $(\check B,\check \P)$ has locally rigid singularities (e.g.\ simple singularities, or in dimension two), so that we can run \cite{affinecomplex}, or that there is a consistent compatible sequence of wall structures on $(\check B,\check \P,\check\varphi)$ by other means, we obtain a distinguished, anticanonically polarized toric degeneration $\check\X\to\Spf\kk\lfor t\rfor$ together with an irreducible anticanonical divisor $\check\D\subset\check\X$. The generic fiber $\check\X_\eta$ of this toric degeneration may not be isomorphic to the toric variety $\check X(\Xi)$. But by introducing an additional parameter $s$ scaling the non-constant coefficients of the slab functions one can produce a two-parameter family with $\check\X\to\Spf\kk\lfor t\rfor$ the restriction to $s=1$, and with the restriction to $s=0$ the family that is constant with fiber $\check X(\Xi)$ for $t\neq0$. The discrete Legendre transform $(B,\P,\varphi)$ has a unique bounded cell $\sigma_0$, isomorphic to the dual polytope of $\Xi$. Up to the addition of a global affine function, the dual polarizing function $\varphi$ is the unique piecewise affine function changing slope by one along the unbounded facets.\footnote{In the present case $\check\varphi$ is single-valued.} \end{construction} \begin{remark} Alternatively, one can use an MPCP resolution \cite[Thm.\,2.2.24]{Bat} of $\check X(\Xi)$ defined by a simplicial subdivision of $\Sigma$ to split the discriminant locus of $(\check B,\check\P)$ into simple singularities. On the Legendre-dual side the subdivision is given by writing the bounded maximal cell $\sigma_0$ as a union of elementary simplices. The resolution process leads to the introduction of more K\"ahler parameters on the log Calabi-Yau side, hence more complex parameters on the Landau-Ginzburg side, reflected in the choice of $\varphi$. See \cite{MaxThesis} for some discussions in this direction. \end{remark} \begin{example} Specializing to del Pezzo surfaces, we start from the momentum polytopes of the five non-singular toric Fano surfaces. The result of the construction is depicted in Figure~\ref{fig: toric_del_pezzo}, which shows a chart in the complement of the dotted segments. \begin{figure} \input{toric_del_pezzo.pspdftex} \caption{The intersection complexes $(\check B,\check\P)$ of the five distinguished toric degenerations of del Pezzo surfaces with simple singularities and their Legendre duals $(B,\P)$.} \label{fig: toric_del_pezzo} \end{figure} Note that the discrete Legendre transform $(B,\P,\varphi)$, also depicted in Figure~\ref{fig: toric_del_pezzo}, indeed has parallel outgoing rays. \end{example} Conversely, in dimension~$2$ we have the following uniqueness result. \begin{theorem} \label{Thm: unique} Let $(\pi:\check \X\to T,\check \D)$ be a distinguished toric degeneration of del Pezzo surfaces. Then the associated intersection complex $(\check B, \check\P)$ is a subdivision of the star subdivision of a reflexive polygon $\Xi$, with the edges containing a singular point being precisely those connecting the interior integral point to a vertex of $\Xi$. If furthermore the toric degeneration has simple singularities \cite[\S1.5]{logmirror}, then $(\check B,\check\P)$ is isomorphic to an integral subdivision of one of the cases listed in Figure~\ref{fig: toric_del_pezzo}.
\end{theorem} \begin{proof} Let $(\pi:\check \X \to T,\check\D)$ be the given toric degeneration. Thus the generic fiber $\check \X_\eta$ is isomorphic to a del Pezzo surface $dP_k$ over $\eta$ for some $0\le k\le 8$, or to $\PP^1\times\PP^1$. By assumption $\partial\check B$ is locally straight in the affine structure. First we determine the number of integral points of $\check B$. Let $\shL=\O_{\check\X}(\check\D)$ be the polarizing line bundle on $\check\X$. By assumption \begin{equation} \label{H^0 dP_k} h^0(\check \X_\eta,\shL|_{\check\X_\eta})= h^0(dP_k,-K_{dP_k})= \begin{cases} 10-k,&\check\X_\eta\simeq dP_k\\ 9,&\check\X_\eta\simeq \PP^1\times\PP^1. \end{cases} \end{equation} Let $t\in \O_{T,0}$ be a uniformizing parameter and $\check X_n:= \Spec \big( \kk[t]/(t^{n+1})\big) \times_T \check\X$ the $n$-th order neighborhood of $\check X_0:= \pi^{-1}(0)$ in $\check \X$. Denote by $\shL_n=\shL|_{\check X_n}$. Then for any $n$ there is an exact sequence of sheaves on $\check X_0$, \[ 0\lra \O_{\check X_0}\lra \shL_{n+1}\lra \shL_n\lra 0. \] By the analogue of \cite[Thm.\,4.2]{PartII} for log Calabi-Yau varieties, we know \[ h^1(\check X_0, \O_{\check X_0}) = h^1(\check \X_\eta, \O_{\check\X_\eta})= 0. \] Thus the long exact cohomology sequence induces a surjection $H^0(\check X_0,\shL_{n+1}) \twoheadrightarrow H^0(\check X_0,\shL_n)$ for each $n$. By the theorem on formal functions and cohomology and base change \cite[Thms.\,11.1 \& 12.11]{Hartshorne}, we thus conclude that $\pi_*\shL$ is locally free, with fiber over $0$ isomorphic to $H^0(\check X_0,\shL_0)$. In view of \eqref{H^0 dP_k} we thus conclude \[ h^0(\check X_0,\shL_0)= \begin{cases} 10-k,&\check\X_\eta\simeq dP_k\\ 9,&\check\X_\eta\simeq \PP^1\times\PP^1. \end{cases} \] Now on a toric variety the dimension of the space of sections of a polarizing line bundle equals the number of integral points of the momentum polytope. Since $\check X_0$ is a union of toric varieties, each integral point $x\in \check B$ provides a monomial section of $\shL_0$ on any irreducible component $\check X_\sigma\subseteq \check X_0$ with $\sigma\in \P$ containing $x$. These provide a basis of sections of $H^0 (\check X_0,\shL_0)$.\footnote{This also follows by the description of $(\check X_0,\shL_0)$ by a homogeneous coordinate ring in \cite[Def.\,2.4]{logmirror}.} Hence $\check B$ has $10-k$ integral points. An analogous argument shows that the number of integral points of $\partial \check B$ equals \[ h^0(\check D_0,\shL_0)= h^0(\check\D_\eta,\shL_\eta), \] which by Riemann-Roch equals $K_{dP_k}^2 = 9 -k$ or $K_{\PP^1\times\PP^1}^2=8$. In either case we thus have a unique integral interior point $v_0\in \check B$. In particular, $\check B$ has the topology of a disk, and each interior edge connects $v_0$ to an integral point of $\partial \check B$. Viewed in the chart at the interior integral point, $\check B$ is therefore a reflexive polygon $\Xi$. Moreover, since $\partial\check B$ is locally straight, each of the interior edges with endpoint a vertex of $\Xi$ has to contain a singular point. None of the other interior edges can contain a singular point for otherwise $\partial\check B$ would not be straight in the affine structure (and $\check B$ would not even be locally convex on the boundary). If the singularities are simple, one finds that $\partial\check B$ is only locally straight in the five cases shown in Figure~\ref{fig: toric_del_pezzo}, up to adding some edges connecting $v_0$ to $\partial \check B$ without singular point. 
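For instance, for $\check\X_\eta\simeq dP_3$ these counts give $10-3=7$ integral points of $\check B$ in total and $9-3=6$ integral points on $\partial\check B$, hence indeed a single interior integral point, matching the seven lattice points of the reflexive hexagon underlying the toric $dP_3$.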
The remaining 11~cases are discussed in \S\ref{Subsect: Singular Fano surfaces} below. \end{proof} \begin{remark} 1)\ The proof shows that the five types can be distinguished by $\dim H^0(\check \X_\eta, \shL_\eta)$, except for $\PP^1\times\PP^1$ and $dP_1$. Alternatively, by Proposition~\ref{Prop: Number of singular points} below one could use $H^1(\check \X_\eta, \Omega^1_{\check\X_\eta})$.\\[1ex] 2)\ For each $(\check B,\check \P)$ there is a discrete set of choices of $\check \varphi$, which determines the local toric models of $\check\X \to\check\D$. This reflects the fact that the base of (log smooth) deformations of the central fiber $\check X_0^\ls$ as a log space over the standard log point $\kk^\ls$ is higher dimensional. In fact, let $r$ be the number of vertices on $\partial \check B$. Then taking a representative of $\check \varphi$ that vanishes on one maximal cell, $\check\varphi$ is defined by the value at $r-2$ vertices on $\partial \check B$. Convexity then defines a submonoid $Q\subseteq \NN^{r-2}$ with the property that $\Hom(Q,\NN)$ is isomorphic to the space of (not necessarily strictly) convex, piecewise affine functions on $(\check B,\check \P)$ modulo global affine functions. Running the construction of \cite{affinecomplex} with parameters then produces a log smooth deformation over the completion at the origin of $\Spec \kk[Q]$ with central fiber $(\check X_0,\check D_0)$. For the minimal polyhedral decompositions of Figure~\ref{fig: toric_del_pezzo} with $r=l$ the number of singular point we have $\rk Q=l-2$, which by Remark~\ref{Rem: H^1(B,Lambda)},2 below agrees with the dimension of the space $H^1(\check X_0, \Theta_{\check X_0^\ls/\kk^\ls})$ of infinitesimal log smooth deformations of $X_0^\ls/ \kk^\ls$. One can show that in this case the constructed deformation is in fact semi-universal.\footnote{\cite[Thm.\,4.4]{RS} proves semi-universality for all simple toric degenerations of compact type.} \qed \end{remark} The technical tool to compute superpotentials in the toric del Pezzo cases and in related examples in finitely many steps is the following lemma, suggested to us by Mark Gross. It greatly reduces the number of broken lines to be considered, especially in asymptotically cylindrical situations and with a finite structure on the bounded cells. \begin{lemma} \label{Lem: Broken lines are rays} Let $\scrS$ be a structure for a non-compact, polarized tropical manifold $(B,\P,\varphi)$ that is consistent to all orders. We assume that there is a subdivision $\P'$ of a subcomplex of $\P$ with vertices disjoint from $\Delta$ and with the following properties. \begin{enumerate} \item Each $\sigma\in \P'$ is affine isomorphic to $\rho\times\RR_{\ge0}$ for some bounded face $\rho\subseteq \sigma$. \item $B\setminus \Int(|\P'|)$ is compact and locally convex at the vertices (this makes sense in an affine chart). \item If $m$ is an exponent of a monomial of a wall or unbounded slab intersecting some $\sigma\in\P'$, $\sigma=\rho+\RR_{\ge0}\overline m_{\sigma}$, then $-\overline m\in\Lambda_\rho+\RR_{> 0}\cdot \overline m_{\sigma}$. \end{enumerate} Then the first break point $t_1$ of a broken line $\beta$ with $\im(\beta)\not\subseteq |\P'|$ can only happen after leaving $\Int|\P'|$, that is, \[ t_1\ge \inf\big\{t\in (-\infty,0]\,\big|\, \beta(t)\not\in |\P'|\big\}. \] \end{lemma} \begin{proof} Assume $\beta(t_1)\in \sigma\setminus\rho$ for some $\sigma= \rho+\RR_{\ge0}\overline m_{\sigma} \in\P'$. 
Then $\beta|_{(-\infty,t_1]}$ is an affine map with derivative $-\overline m_{\sigma}$, and $\beta(t_1)$ lies on a wall. By the assumption on exponents of walls on $\sigma$, the result of nontrivial scattering at time $t_1$ only leads to exponents $m_2$ with $-\overline m_2\in \Lambda_\rho+\RR_{\ge0} \overline m_{\sigma}$, the outward pointing half-space. In particular, the next break point cannot lie on $\rho$. Going by induction one sees that any further break point in $\sigma$ preserves the condition that $\beta'$ does not point inward. Moreover, by the convexity assumption, this condition is also preserved when moving to a neighboring cell in $\P'$. Thus $\im(\beta)\subseteq |\P'|$. \end{proof} \begin{proposition} \label{Prop: Simple Hori-Vafa cases} Let $(B,\P)$ be the dual intersection complex of a distinguished toric degeneration of del Pezzo surfaces with simple singularities and let $\sigma_0\subseteq B$ be the bounded cell. Then there is a neighborhood $U$ of the interior vertex $v_0\in \sigma_0$ such that for any $p\in U$ there is a canonical bijection between broken lines with endpoint $p$ and rays of $\Sigma$, the fan over the proper faces of $\sigma_0$. \end{proposition} \begin{proof} We can embed $\Sigma$ in the tangent space at $v_0$ by extending the unbounded edges to $v_0$ in the chart shown in Figure~\ref{fig:toric_fans}. Each ray of $\Sigma$ can then be interpreted as the image of a unique degenerate broken line. Because each such degenerate broken line has positive distance from the shaded regions in Figure~\ref{fig: toric_del_pezzo}, it can be deformed to a proper broken line by a small perturbation of $p$. Conversely, by inspection of the five cases, the result of non-trivial scattering at $\partial\sigma_0$ leads to a broken line not entering $\Int(\sigma_0)$. There are no walls entering $\Int\sigma_0$, so by Lemma~\ref{Lem: Broken lines are rays} any such broken line can bend at most at the intersection with $\partial\sigma_0$. \end{proof} \begin{corollary} \label{Cor: reproduce Hori-Vafa} Let $(\X\to\Spf\kk\lfor t\rfor,W)$ be the Landau-Ginzburg model mirror to a distinguished toric degeneration of del Pezzo surfaces with simple singularities. Then there is an open subset $U\simeq \Spf \kk[x^{\pm1},y^{\pm1}] \lfor t\rfor \subseteq \X$ such that $W|_U$ equals the usual Hori-Vafa monomial sum times $t$. \qed \end{corollary} \begin{remark} 1)\ For polarizations of the del Pezzo surface other than the anticanonical one, the terms in the superpotential receive different powers of $t$, just as in the Hori-Vafa proposal.\\[1ex] 2)\ Analogous arguments work for Landau-Ginzburg mirrors of smooth toric Fano varieties of any dimension \cite[Thm.\,5.4]{MaxThesis}. \end{remark} \begin{figure} \input{example_dp3.pspdftex} \caption{Tropical disks in the mirror of the distinguished base of $dP_3$ indicating the invariance under change of root vertex.} \label{fig:example_dp3} \end{figure} \begin{example} \label{ex: dP_3} Let us study the mirror of the distinguished toric degeneration of $dP_3$ with the minimal polarization $\check\varphi$ explicitly. In Figure~\ref{fig:example_dp3} the first two pictures show all Maslov index two tropical disks, respectively broken lines using disk completion (Lemma~\ref{line-disk}), for two choices of root vertex. Moving the root vertex within the shaded open hexagon yields the same result, that is, none of the six broken lines has a break point.
In contrast, moving the root vertex inside $\sigma_0$ out of the shaded hexagon leads to some bent broken lines, but the set of root tangent vectors always remains $(1,0),(1,1),(0,1), (-1,0),(-1,-1)$ and $(0,-1)$, all with coefficient~$1$. This illustrates the invariance under the change of root vertex proved in Lemma~\ref{lem:independence of p}. The potential in the chart for the bounded cell $\sigma_0$ is thus computed as \[ W_{dP_3}(\sigma_0) = \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t, \] which for $t\neq 0$ has six critical points. The picture on the right shows two tropical disks with weight two unbounded leaves. These do not contribute to the superpotential. An analogous picture arises for the other four distinguished del Pezzo degenerations. \end{example} Morally speaking the last example shows that in toric situations ray generators of the fan are sufficient to compute the superpotential, but really they should be seen as special cases of tropical disks or broken lines. \subsection{Non-toric del Pezzo surfaces} In this section we consider del Pezzo surfaces $dP_k$ for $k\ge 4$, referred to as higher del Pezzo surfaces. Let us first determine the topology of $B$ and the number of singular points of the affine structure. We need the following statement for Fano varieties with smooth anticanonical divisor \begin{lemma} \label{Lem: No log holomorphic 1-form} Let $X$ be a smooth Fano variety and $D\subset X$ a smooth anticanonical divisor. Then $H^0(X,\Omega_X(\log D))=0$. \end{lemma} \begin{proof} It is a folklore result in Hodge theory that the connecting homomorphism of the residue sequence \[ 0\lra \Omega_X\lra \Omega_X(\log D)\stackrel{\text{res}}{\lra} \O_D\lra 0 \] maps $1\in H^0(D,\O_D)$ to $c_1(\O_X(D))\in H^{1,1}(X)= H^1(X,\Omega_X)$. By ampleness of $D$ this class is nonzero. Thus $H^0(X,\Omega_X(\log D))\simeq H^0(X,\Omega_X)$. The claim now follows from the Kodaira vanishing theorem: \[ H^0(X,\Omega_X)\simeq H^n(X,\O_X) = H^n(X,K_X\otimes K_X^{-1})=0. \qedhere \] \end{proof} \smallskip \begin{proposition} \label{Prop: Number of singular points} Let $(B,\P)$ be the dual intersection complex of a distinguished toric degeneration $(\pi:\check\X\to T, \check\D)$ of del Pezzo surfaces with simple singularities (Definition~\ref{Def: toric degen dP_k}). In particular, we assume the generic fiber $\check\X_\eta$ is a proper surface with $\check\D_\eta$ a smooth anticanonical divisor. Then $B$ is homeomorphic to $\RR^2$, and the affine structure has $l=\dim H^1(\check\X_\eta, \Omega^1_{\check\X_\eta})+2$ singular points. \end{proposition} \begin{proof} Since the relative logarithmic dualizing sheaf $\omega_{\check\X/ \check\D} (-\log \check\D)$ is trivial, the generalization of \cite[Thm.\,2.39]{logmirror} to the case of log Calabi-Yau varieties shows that $B$ is orientable. By the classification of surfaces with effective anticanonical divisor we know $H^i(\check\X_\eta, \O_{\check\X_\eta})=0$, $i=1,2$. As in the proof of Theorem~\ref{Thm: unique} this implies $H^1(\check X_0,\O_{\check X_0})=0$. Thus by the log Calabi-Yau analogue of \cite[Prop.\,2.37]{logmirror}, \[ H^1(B,\kk)= H^1(\check X_0,\O_{\check X_0})=0. \] In particular, $B$ has the topology of $\RR^2$. 
As for the number of singular points, the generalization of \cite[Thms.\,3.21 \& 4.2]{PartII} to the case of log Calabi-Yau varieties \cite[Thm.\,3.5 \& Thm.\,3.9]{Tsoi} shows that $\dim H^1(\check\X_\eta, \Omega^1_{\check \X_\eta})$ is related to an affine Hodge group:\footnote{The proof of \cite[Thm.\,4.2]{PartII} has a gap fixed in \cite[Thm.\,1.10]{FFR}.} \begin{equation} \label{Eqn: H^1 affine} \dim_{\kk((t))} H^1(\check\X_\eta, \Omega^1_{\check\X_\eta}) =\dim_\RR H^1(B,i_*\check\Lambda \otimes_{\ZZ} \RR). \end{equation} To compute $H^1(B,i_*\check\Lambda \otimes_{\ZZ} \RR)$ we choose the following \v Cech cover of $B=\RR^2$. Let $\ell_1,\ldots,\ell_l$ be disjoint real half-lines emanating from the singular points $p_1,\ldots,p_l$. Define $U_0=\RR^2\setminus \bigcup_{i=1}^l \ell_i$, and $U_i=B_\eps(p_i)$ for $\eps$ sufficiently small to achieve $U_i\cap \ell_j=\emptyset$ unless $i=j$. Then $\foU:= \big\{U_0,U_1,\ldots,U_l\big\}$ is a Leray covering of $B$ for $i_*\check\Lambda_\RR:= i_*\check\Lambda\otimes_\ZZ \RR$, cf.\ \cite[Lem.\,5.5]{logmirror}. The terms in the \v Cech complex are \[ C^0(\foU,i_*\check\Lambda_\RR) = \RR^2 \times \prod_{i=1}^{l} \RR,\quad C^1(\foU,i_*\check\Lambda_\RR) = \prod_{i=1}^{l} \RR^2,\quad C^k(\foU,i_*\check\Lambda_\RR) = 0 \text{ for $k\ge 2$.} \] The analogue of \eqref{Eqn: H^1 affine} for degree~0 cohomology groups shows that the kernel of the \v Cech differential $C^0(\foU,i_*\check\Lambda_\RR) \to C^1 (\foU,i_*\check\Lambda_\RR)$ computes $H^0(\check\X_\eta,\Omega_{\check\X_\eta}(\log \check\D_\eta))$. This latter group vanishes by Lemma~\ref{Lem: No log holomorphic 1-form}. Hence \[ \dim H^1(B,i_*\check\Lambda_\RR)= 2l - (l + 2) = l-2 \] determines the number $l$ of focus-focus points as claimed. \end{proof} \begin{remark} \label{Rem: H^1(B,Lambda)} 1)\ From the analysis in Theorem~\ref{Thm: unique} and Proposition~\ref{Prop: Number of singular points} it is clear that for del Pezzo surfaces $dP_k$ with $k\ge4$ the anticanonical polarization is too small to extend over a toric degeneration with simple singularities. The associated tropical manifold would simply not have enough integral points to admit the required number of singular points.\\[1ex] 2)\ Essentially the same argument also computes the dimension of the space of infinitesimal deformations: \[ h^1(\check \X_\eta, \Theta_{\check\X_\eta}(\log \check\D_\eta))= h^1(\check X_0,\Theta_{\check X_0^\ls/\kk^\ls})= h^1(B, i_*\Lambda_\RR)= l-2. \] \vspace{-7ex} \qed \end{remark} It is easy to write down toric degenerations of non-toric del Pezzo surfaces, since they can be represented as hypersurfaces or complete intersections in weighted projective spaces, as for example done for $dP_6$ in~\cite[Expl.\,4.4]{Invitation}. The most natural toric degenerations in this setup have central fiber part of the toric boundary divisor of the ambient space. But because this construction gives nodal $\check\D_\eta$, such toric degenerations are never distinguished. To obtain proper superpotentials we therefore need a different approach. \begin{construction} \label{con: nontoric} Start from the intersection complex $(\check B, \check\P)$ of the distinguished toric degeneration of $dP_3$ depicted as a hexagon in Figure~\ref{fig: toric_del_pezzo}. The six focus-focus points in the interior of the bounded two-cell make the boundary $\rho$ straight. There is no space to introduce more singular points of the affine structure because all interior edges already contain a singular point.
To get around this, polarize by $-2 \cdot K_{dP_3}$ and adapt $\P$ in the obvious way, see Figure~\ref{fig:nontoric_del_pezzo}. This scales the affine manifold $B$ by two, but keeps the singular points fixed. The new boundary now has $12$ integral points and the union $\gamma$ of edges neither intersecting the central vertex nor $\partial B$ is a geodesic. We can then introduce new singular points on the boundary of the interior hexagon as visualized in Figure~\ref{fig:nontoric_del_pezzo}. Moreover, let $\check \varphi$ be unchanged on the interior cells and change slope by one when passing to a cell intersecting $\partial B$. Plugging in up to five singular points, the Hodge numbers from Proposition~\ref{Prop: Number of singular points} show that the toric degenerations obtained from the tropical data are in fact toric degenerations of $dP_k$, $4\le k\le 8$. \qed \end{construction} \begin{figure} \input{nontoric_del_pezzo.pspdftex} \caption{Straight boundary models for higher del Pezzo surfaces obtained by changing affine data for $dP_3$ and their Legendre duals.} \label{fig:nontoric_del_pezzo} \end{figure} Unlike in the anticanonically polarized case, the models constructed in this way are not unique. The geodesic $\gamma$ is divided into six segments by $\P$, and the choice on which of these segments we place the singular points, modulo the $\ZZ/6$-rotational symmetry, results in non-isomorphic models. We will see in Example~\ref{ex: dp5} how this choice influences tropical curve counts. Although there are other ways to define distinguished models for higher del Pezzo surfaces, for example by choosing another polarization, in this way we can extend the unique toric models most easily, since all tropical disks and broken lines we studied before arise in these models without any change. \begin{figure} \input{AKO_PP2.pspdftex} \caption{An alternative base for higher del Pezzo surfaces and their mirror.} \label{fig: AKO} \end{figure} \begin{remark} Note that introducing six new points, for instance as in the rightmost picture in Figure~\ref{fig:nontoric_del_pezzo}, corresponds to a blow up of $\PP^2$ in nine points, which is not Fano anymore, but from our point of view still has a Landau-Ginzburg mirror. From a different point of view this has already been noted in~\cite{AKO}, where the authors construct a compactification of the Hori-Vafa mirror as a symplectic Lefschetz fibration as follows. Start with the standard potential $x+y+\frac{1}{xy}$ for $\PP^2$ and compactify by a divisor at infinity consisting of nine rational curves. Then by a deformation argument it is possible to push $k$ of those rational curves to the finite part and decompactify to obtain a potential for $dP_k$, including $k=9$. We can reproduce this result from our point of view by starting with $\PP^2$ rather than with $dP_3$, as illustrated in Figure~\ref{fig: AKO}. Moving rational curves from infinity to the finite part is analogous to introducing new focus-focus points. In the present case one may put three focus-focus points on each unbounded ray of $(B,\P)$ until the respective Legendre dual vertex becomes straight. Figure~\ref{fig: AKO} on the right shows nine such points (corresponding to the case $k=9$ above), and any additional singular point would result in a concave boundary. This can be seen as an affine-geometrical explanation for why the compactification constructed by the authors in~\cite{AKO} has exactly nine irreducible components. 
Note that it is possible to introduce more singular points when passing to larger polarizations, but in this way we will not end up with degenerations of del Pezzo surfaces. \end{remark} In order to determine the superpotential, we depicted in Figure~\ref{fig:nontoric_del_pezzo} an appropriate chart of the relevant $(B,\P)$. When two regions to be removed overlap we shade them darker to indicate the non-trivial transformation there. \begin{figure} \input{scatter_higher_dp4.pspdftex} \caption{Mirror base to $dP_4$ showing new broken lines contributing to $W_{dP_4}$, indicating the wall crossing phenomenon and the invariance under change of endpoint within a chamber.} \label{fig:higher_dp4} \end{figure} \begin{example} \label{ex: dP4} Figure~\ref{fig:higher_dp4} shows the dual intersection complex $(B,\P)$ of a toric degeneration of $dP_4$ from Construction~\ref{con: nontoric}. The additional focus-focus point changes the structure $\scrS$ and allows broken lines to scatter with the wall in direction $(1,1)$ in the central cell $\sigma_0$ which subdivides $\sigma_0$ into two chambers $\fou,\fou'$ and yields new root tangent directions. A broken line coming from infinity in direction $(\pm 1,0)$ produces root tangent vectors $(-1\pm 1,-1)$, whereas one with direction $(0,\pm 1)$ takes directions $(-1,-1\pm 1)$. By construction, every broken line reaching the interior cell $\sigma_0$ has $t$-order at least $2$. Note also that by \cite[Expl.\,4.3]{Invitation} only the wall indicated by a dotted line in Figure~\ref{fig:higher_dp4} enters $\sigma_0$. Thus $\sigma_0$ is subdivided into two chambers $\fou,\fou'$, in all orders. A broken line can have at most one break point within $\sigma_0$, in which case the $t$-order increases by one. So let us compute $W_{dP_4}^3$. We get two new root tangent directions, namely $(-1,-2)$ and $(-2,-1)$, and possibly more contributions from directions $(0,-1)$ and $(-1,0)$. The two leftmost pictures in Figure~\ref{fig:higher_dp4} show all new broken lines for different choices of root vertex, apart from the six toric ones we have already encountered in Example~\ref{ex: dP_3}. Depending on this choice, the superpotential to order three is therefore either given by \begin{align*} W_{dP_4}(\fou) &= \Big( x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big(\frac{1}{x} + \frac{1}{x^2y}\Big) \cdot t^3 \quad\text{or } \\ W_{dP_4}(\fou') &= \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big(\frac{1}{y} + \frac{1}{xy^2}\Big) \cdot t^3 , \end{align*} both of which have seven critical points, as expected. These superpotentials are not only related by interchanging $x$ and $y$, for symmetry reasons, but also by wall crossing along the wall separating $\sigma_0$ into two chambers. This is the first non-trivial example of an ostensibly algebraic superpotential, as defined at the end of \S\ref{Ch: Broken lines}. In the rightmost picture in Figure~\ref{fig:higher_dp4} we indicated the behaviour of a single broken line of root tangent direction $(-1,-2)$ under change of root vertex. If the root vertex changes a chamber by passing one of the dotted lines drawn, the broken lines change accordingly. 
\end{example} \begin{figure} \input{scatter_higher_dp5.pspdftex} \caption{Mirror bases to $dP_5$ showing all new broken lines.} \label{fig:higher_dp5} \end{figure} \begin{example} \label{ex: dp5} Attaching another singular point on the unbounded ray in direction $(0,-1)$ as in Figure~\ref{fig:higher_dp5} on the left we arrive at a degeneration of $dP_5$. For a structure consistent to all orders there are three walls in the bounded maximal cell necessary, indicated by dotted lines in the figure. They are the extensions of the slabs with tangent directions $(1,1)$ and $(0,1)$ caused by additional singular points, and the result of scattering of these, the wall with tangent direction $(1,2)$. Because $(1,1)$ and $(0,1)$ form a lattice basis, the scattering procedure at the origin does not produce any additional walls. In any case, any broken line coming in from direction $(1,1)$ and with endpoint $p$ as indicated in Figure~\ref{fig:higher_dp5} can not interact with any of the scattering products. Tracing any possible broken lines starting from $t=-\infty$ one arrives at only five broken lines with endpoint $p$, with only one, drawn in red, having more than one breakpoint. We therefore obtain the following superpotential on the chamber $\fou$ containing $p$: \[ W_{dP_5}(\fou) = \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big( \frac{1}{y} + \frac{1}{xy} + \frac{1}{x^2y} + \frac{1}{xy^2}\Big) \cdot t^3 + \frac{1}{xy}t^4. \] \end{example} \begin{figure} \input{scatter_higher_dp6.pspdftex} \caption{Two mirror bases to $dP_6$.} \label{fig:higher_dp6} \end{figure} \begin{example} We study another model of the mirror to $dP_5$, which differs from the last one in the position of the second focus-focus point. Instead of placing it on the ray with generator $(0,-1)$ we move it to the ray generated by $(1,1)$, as shown in Figure~\ref{fig:higher_dp5} on the right. This alternative choice yields the superpotential \[ W_{dP_5}(\fou) = \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big( \frac{1}{y} + x + x^2y + \frac{1}{xy^2}\Big) \cdot t^3 \] on the selected chamber $\fou$. It is an interesting question to understand in detail the effect of particular choices of singular points and the corresponding degenerations. \end{example} \begin{example} As a last example, we study broken lines in the mirror of a distinguished model of $dP_6$, depicted on the left in Figure~\ref{fig:higher_dp6}. This time we obtain the superpotential \[ W_{dP_6}(\fou) = \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big(2 \cdot \frac{1}{xy^2} + \frac{1}{xy} + 2\cdot \frac{1}{y}\Big) \cdot t^3 + \frac{1}{y} \cdot t^4 + \frac{1}{y} \cdot t^5, \] with nine critical points. Again, this potential comes from a special choice of positions of critical points and root vertex among many others. This superpotential is ostensibly algebraic although the three walls meeting at the origin produce an infinite wall structure on the bounded cell $\sigma_0$. In contrast, Figure~\ref{fig:higher_dp6} on the right shows the mirror base of an alternative $dP_6$-degeneration featuring a finite wall structure in the bounded cell $\sigma_0$ by \cite[Expl.\,4.3]{Invitation}. We again obtain an ostensibly algebraic superpotential \[ W_{dP_6}(\fou) = \Big(x + y + xy + \frac{1}{x} + \frac{1}{y} + \frac{1}{xy}\Big) \cdot t^2 + \Big(\frac{y}{x} + \frac{1}{x} + \frac{1}{xy^2} + 2\cdot \frac{1}{y} + \frac{x}{y}\Big) \cdot t^3. 
\] \end{example} These examples illustrate that if we leave the realm of toric geometry, Landau-Ginzburg potentials for del Pezzo surfaces can, at least locally, still be described by Laurent polynomials, as in the toric setting. \section{Singular Fano and smooth non-Fano surfaces} Having studied smooth Fano surfaces, we now show that our approach also works if we admit Gorenstein singularities or drop the Fano condition. \subsection{Singular Fano surfaces} \label{Subsect: Singular Fano surfaces} We now classify the remaining distinguished toric degenerations of del Pezzo surfaces (Definition~\ref{Def: toric degen dP_k}) from Theorem~\ref{Thm: unique} with non-simple singularities. In this theorem we have already seen that the intersection complex $(\check B,\check\P)$ is obtained from a star subdivision of a reflexive polygon $\Xi$ with a singular point on each of the interior edges, and then possibly a further subdivision adding more edges, not containing a singular point, that connect the interior integral point to a non-vertex point on $\partial\check B$. There are 16~well-known isomorphism classes of reflexive polygons, among which five give rise to smooth varieties, studied in the last section. The remaining eleven polygons are characterized by the property that the dual polygon has at least one integral non-vertex boundary point. These 11~polygons are in one-to-one correspondence with isomorphism classes of singular toric del Pezzo surfaces. To obtain a smooth boundary model $(\check B=\Xi,\check \P)$, we now need to add a non-simple singular point on some of the added edges to straighten the boundary. Non-simple means that the affine monodromy along a counterclockwise loop about such a singular point is conjugate to $\left(\begin{smallmatrix}1&-k\\0&1\end{smallmatrix}\right)$ for some $k>1$. We refer to $k$ as the \emph{order} of the singular point. Legendre-duality then yields an affine singularity of the same order~$k$ on the dual edge. As in the smooth case, we polarize $(\check B,\check\P)$ by the minimal polarizing function $\check\varphi$ changing slope by one along each interior edge. Figure~\ref{fig:singular-del-pezzo} depicts the discrete Legendre duals $(B,\P)$ thus obtained from star subdivision of the remaining $11$~reflexive polygons. Note that the affine monodromy of the singular point is reflected in the shaded regions. The order~$k$ of a singular point now equals the number of integral points on the edge of $\P$. We obtain the following addition to Theorem~\ref{Thm: unique}. \begin{theorem} In the situation of Theorem~\ref{Thm: unique} assume that $(\pi:\check\X\to T,\check\D)$ does not have simple singularities. Then $(\check B,\check\P)$ is Legendre-dual to one of the cases listed in Figure~\ref{fig:singular-del-pezzo}. \qed \end{theorem} With non-simple singularities on some edges in $(B,\P)$, the slab functions are not uniquely determined by the gluing data. If $v$ is a vertex on an edge $\rho$ containing an order~$k$ singularity, the slab function $f_{\rho,v}$ has the form $1+a_1 x+\ldots+a_{k-1} x^{k-1}+a_k x^k$ for $x$ the generating monomial for the $\rho$-stratum of $X_0$. The coefficient $a_k$ is determined by the gluing data, with $a_k=1$ for trivial gluing data. Thus in any case, there are $k-1$ free coefficients for each singular point of order $k$. These coefficients reflect classes of exceptional curves on the resolution of the Fano side. Ignoring these classes, as suggested by working with the unresolved del Pezzo surface, leads to the slab function $(1+x)^k$.
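The binomial coefficients appearing in the next theorem are then just the coefficients of this distinguished slab function. Purely as an illustration, and assuming nothing beyond the formula $(1+x)^k$ above, the following short sympy sketch expands the slab function for an edge of integral length $k$ and lists the coefficients attached to the integral points at integral distance $l=0,\ldots,k$ from a chosen vertex.
\begin{verbatim}
# Illustrative sketch only: read off the coefficients binom(k, l)
# from the distinguished slab function (1 + x)^k.
from sympy import symbols, expand, binomial

x = symbols('x')

def slab_coefficients(k):
    """Coefficients of (1+x)^k, indexed by the integral distance l."""
    f = expand((1 + x)**k)
    return [f.coeff(x, l) for l in range(k + 1)]

for k in (1, 2, 3):
    coeffs = slab_coefficients(k)
    assert coeffs == [binomial(k, l) for l in range(k + 1)]
    print(k, coeffs)
\end{verbatim}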
\begin{theorem}\footnote{The closely related superpotential of the corresponding $11$~semi-Fano surfaces obtained by MPCP resolution \cite[Thm.\,2.2.24]{Bat} has independently been computed in \cite{ChanLau} by other methods. See \cite{MaxThesis} for the reproduction and comparison of their results with our method.} \label{singular del pezzo} Let $(\X \to \Spec \kk\lfor t\rfor,W)$ be mirror to a distinguished toric degeneration $(\check\X\to T,\check\D)$ of del Pezzo surfaces (Definition~\ref{Def: toric degen dP_k}), and $(B,\P,\varphi)$ the associated intersection complex for the anticanonical polarization on $\check\X$. Denote by $\sigma_0$ the bounded cell of $(B,\P)$. For an integral point $m$ on an edge $\omega\subset\partial\sigma_0$ of integral length $k$ define $N_m=\binom{k}{l}$, where $l$ is the integral distance between $m$ and one of the vertices of $\omega$. Then \[ W(\sigma_0) = t\cdot \sum_{m \in \partial \sigma_0\cap\Lambda_{\sigma_0}} N_m z^m. \] \end{theorem} \begin{figure}[h!] \input{singular_del_pezzo.pspdftex} \caption{The broken lines contributing to the proper superpotential of the eleven singular toric del Pezzo surfaces.} \label{fig:singular-del-pezzo} \end{figure} \begin{proof} Corollary~\ref{Cor: reproduce Hori-Vafa} already treats the case with simple singularities. The remaining 11~cases are most easily done by inspection of the set of broken lines ending at a specified point $p$ as given in Figure~\ref{fig:singular-del-pezzo}. Note that there are no walls in $\Int\sigma_0$, so the result of this computation is independent of the choice of $p$. It is instructive to check this continuity property explicitly in some cases. Alternatively one can argue more generally as follows. If a broken line $\beta$ ends at $p\in\Int(\sigma_0)$, then its next to last vertex maps to the intersection of the ray $p+\RR_{\ge0} m_\beta$ with $\partial\sigma_0$. Denote by $\omega\subset \partial\sigma_0$ the edge containing this point of intersection. Then by Lemma~\ref{Lem: Broken lines are rays} there are no more bending points of $\beta$, and hence the remaining part of $\beta$ has to be parallel to the unbounded edges of the unbounded cell containing $\omega$. This determines the kink of $\beta$ when crossing $\omega$. Note that this argument also limits the exponents $m$ appearing in $W(\sigma_0)$ to be contained in $\partial \sigma_0\cap\Lambda_{\sigma_0}$. Moreover, each such exponent belongs to at most one broken line ending at $p$. Now choose $p$ very close to the unique interior integral point of $\sigma_0$ and let $m\in \partial \sigma_0\cap\Lambda_{\sigma_0}$. Then $p+\RR_{\ge0}m$ intersects $\partial\sigma_0$ very close to $m$. A local computation now shows that depending on the choice of a vertex $v\in\omega$ the coefficient of $z^m$ comes from either the $l$-th or the $(k-l)$-th coefficient of the associated slab function $(1+x)^k$. In either case we obtain the stated binomial coefficient $\binom{k}{l}$. \end{proof} \subsection{Hirzebruch surfaces} As a last application featuring some non-Fano cases, we will study proper superpotentials for Hirzebruch surfaces $\mathbb{F}_m$. We fix the fan $\Sigma$ in $N \cong \ZZ^2$ of $\mathbb{F}_m$ to be the fan with rays $\rho_0,\ldots,\rho_3$ generated by the four primitive vectors \[ v_0 = (0,1),\ v_1 = (-1,0),\ v_2 = (0,-1),\ v_3 = (1,m).
\] Since $\mathbb{F}_m$ is only Fano in the cases $\mathbb{F}_0 = \PP^1 \times \PP^1$ and $\mathbb{F}_1 = dP_1$, for $m\geq2$ the normal fan of the anticanonical polytope is not the fan of a Hirzebruch surface. We fix $m$ in the following and denote by $D_{\rho_i}$ the torus-invariant divisor associated to $\rho_i$. Instead of the anticanonical divisor, which is not ample for $m\geq 2$, we now consider a smooth divisor $D$ on $\FF_m$ in the ample class \[ D_{\rho_0} + D_{\rho_1} + D_{\rho_2} + m \cdot D_{\rho_3}. \] Define the tropical manifold $(\check B, \check \P, \check \varphi)$ with straight boundary as follows. $\check B$ is obtained from the Newton polytope \[ \Xi_{D}=\conv\big\{(-1,-1),(2m,-1),(0,1),(-1,1)\big\} \] of $D$ by joining each vertex with one of the endpoints of the line segment $[0,m-1]\times\{0\}\subset \Int\Xi_D$ as shown in~Figure~\ref{fig:Hirzebruch} on the top, and then introducing a single focus-focus singularity on each of the four joining one-cells. It is again elementary to check that this makes $\partial \check B$ totally geodesic. Moreover, setting $\check \varphi(v) = 1$ for all vertices $v$ of $\Xi_D$ and \[ \check\varphi(0,0) = \check\varphi(m-1,0) = 0 \] defines a strictly convex, integral PL-function $\check\varphi$ on $(\check B,\check \P)$. \begin{figure} \input{Hirzebruch.pspdftex} \caption{A straight boundary model for the Hirzebruch surfaces $\mathbb{F}_m$ with mirrors for $m=2,3$.} \label{fig:Hirzebruch} \end{figure} The Legendre dual $(B, \P,\varphi)$ has the four vertices $v_0,\ldots,v_3$ we started with, two bounded cells \[ \sigma_0 = \conv(v_0,v_1,v_2), \quad \sigma_1 = \conv(v_0,v_2,v_3), \] and four unbounded one-cells contained in $\RR_{\ge0} v_i$, $i=0,\ldots,3$. These one-cells are indeed parallel in any chart with domain an open set in the complement of the union of bounded cells. Moreover, $\partial (\sigma_0 \cup \sigma_1)$ has precisely four integral points, namely $v_0$, $v_1$, $v_2$ and $v_3$. Note also that by the definition of the Legendre-dual, $\varphi$ is uniquely determined by \[ \varphi(v_0) = \varphi(v_1) = \varphi(v_2)= 1,\ \varphi(v_3) = m, \] and by the requirement to change slope by one along $\partial(\sigma_0 \cup \sigma_1)$. We see that for $m\geq 3$ the union $\sigma_0 \cup \sigma_1$ is non-convex. In this case the scattering of the two walls emanating from the singular points on the two edges with vertices $v_0,v_1,v_3$ produces walls entering $\Int(\sigma_0\cup\sigma_1)$. For $m>3$ there is even an infinite number of walls to be added to make the wall structure consistent to all orders. While all these walls are added in the half-plane below the line $\RR\cdot(1,m)$ and hence there is an open set contained in $\sigma_0$ not containing any wall, we have not carried out the necessary analysis to decide if $W(\sigma_0)$ is given to all orders by an algebraic expression. We now restrict to the cases $m=2,3$ and explicitly compute the full superpotentials. First, for the case $m=2$ there are no walls in $\Int(\sigma_0\cup\sigma_1)$. The broken lines ending at a specified point $p\in\Int(\sigma_0)$ are depicted on the lower left in Figure~\ref{fig:Hirzebruch}. The Landau-Ginzburg superpotential can then be read off as \[ W(\sigma_0) = \Bigl(\frac{1}{x} + \frac{1}{y} + y\Bigr) \cdot t + xy^2 \cdot t^2. \] This is indeed the full potential, as there is no scattering in $\sigma_0\cup \sigma_1$, so we can apply Lemma~\ref{Lem: Broken lines are rays}.
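As a quick plausibility check, one can verify by machine that this Laurent polynomial has only finitely many critical points. The following sympy sketch (an illustration only, specializing $t=1$) clears denominators and solves $\partial_x W=\partial_y W=0$; it finds four critical points over the complex numbers.
\begin{verbatim}
# Illustration only: critical points of W = 1/x + 1/y + y + x*y^2,
# the m = 2 potential above with t specialized to 1.
from sympy import symbols, diff, together, numer, solve

x, y = symbols('x y')
W = 1/x + 1/y + y + x*y**2

# Clear denominators, then discard spurious solutions with x*y = 0.
eqs = [numer(together(diff(W, v))) for v in (x, y)]
crit = [s for s in solve(eqs, [x, y], dict=True)
        if s[x] != 0 and s[y] != 0]
print(len(crit))   # 4
\end{verbatim}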
For $m=3$ let us first compute the walls entering $\Int(\sigma_0\cup\sigma_1)$. These come from scattering at the point $(0,1)$ of the adjacent edges with focus-focus singular points in directions $(1,1)$ and $(-1,-2)$. Locally this scattering situation is equivalent to the scattering of incoming walls from directions $(-1,0)$ and $(0,-1)$ meeting at the origin. An explicit computation carried out in~\cite[\S4.1]{Invitation} shows that this scattering diagram can be made consistent to all orders by introducing outgoing walls in directions $(1,0)$, $(0,1)$ and $(1,1)$. These translate to walls in directions $(1,1)$, $(-1,-2)$ and $(0,-1)$ in our situation, as indicated by the dashed lines in the lower right of Figure~\ref{fig:Hirzebruch}. Of course it will be necessary to insert more walls \emph{outside} of the bounded part, but this is unessential for the computation of the superpotential. Hence scattering on the bounded part $\sigma_0 \cup \sigma_1$ is finite and we can once more apply Lemma~\ref{Lem: Broken lines are rays}. For the choice of root vertex $p'\in\sigma_1$ as indicated in the lower right of Figure~\ref{fig:Hirzebruch} we get the following full superpotential for $(B_3, \P_3)$ \[ W(\fou) = \Bigl(\frac{1}{x} + \frac{1}{y} + y + \frac{y}{x}\Bigr) \cdot t + \frac{y}{x} \cdot t^2 + \Bigl(xy^3 + y^2\Bigr)\cdot t^3. \] Thus, neglecting $t$-orders for a moment, we have three new contributions that differ from the Hori-Vafa potential $\frac{1}{x} + \frac{1}{y} + y + xy^3$, namely the monomial $y^2$ and twice the term $\frac{y}{x}$. These come from broken lines that have a break point at the new walls emanating from $(0,1)$ in direction $(1,1)$, $(0,-1)$ and $(-1,-2)$, respectively. Note that these are precisely the terms Auroux found in~\cite[Prop.\,3.2]{auroux2}, when we make the coordinate change $x \mapsto \frac{1}{x}$ and $y \mapsto \frac{1}{y}$. The computation in~\cite[Prop.\,3.2]{auroux2} is very explicit and rather long when compared with our derivation. Of course all the hard work is hidden in the scattering process of \cite{affinecomplex} and the propagation of monomials via broken lines, but still it is remarkably easy to compute Landau-Ginzburg models with this approach, once everything is set up. \section{Three-dimensional examples} So far we restricted ourselves to $\dim B =2$. We now turn to a few simple examples illustrating some features of higher dimensional cases. \begin{example} \label{Expl: PP3} Starting from the momentum polytope $\Xi$ for $\PP^3$ with its anticanonical polarization, Construction~\ref{Const: Reflexive polytope degeneration} provides a model with a distinguished polarized tropical manifold $(\check B,\check \P,\check \varphi)$ with $\check B=\Xi$ and with flat boundary. In dimension three this is done by trading corners \emph{and edges} with a one-dimensional singular locus of the affine structure. Explicitly, $\check \P$ is the star-subdivision of \[ \Xi=\conv\big\{ (2,-1,-1),(-1,2,-1),(-1,-1,2),(-1,-1,-1)\big\} \] that introduces six two-faces spanned by the origin and two distinct vertices of $\Xi$. The discriminant locus $\check\Delta$ is the subcomplex of the first barycentric subdivision of these six affine triangles shown in Figure~\ref{fig:three-dimensional}. The affine structure is fixed by the embedding of $\Xi$ into $\RR^3$ at the origin, and by the affine charts at the vertices of $\check B$ making $\partial \Xi$ flat and inducing the given affine chart on the maximal cells of $\check \P$. 
Finally, $\check\varphi$ is determined by $\check \varphi(v)=1$ for every vertex $v$ of $\Xi$ and $\check \varphi(0)=0$. The discrete Legendre dual $(B,\P,\varphi)$, also drawn in Figure~\ref{fig:three-dimensional}, has four parallel unbounded rays and a discriminant locus $\Delta$ with six unbounded rays. Note that unlike in the case of closed tropical manifolds or in dimension two, $\check\Delta$ and $\Delta$ are not homeomorphic, but $\check \Delta$ is homeomorphic to the compactification of $\Delta$ that adds a point at infinity to each unbounded one-cell of $\Delta$. Every bounded two-face is subdivided into three $4$-gons by $\Delta$ and at every vertex of $B$ three of these $4$-gons meet. Denote the bounded three-cell by $\sigma_0$. As in the proof of Theorem~\ref{singular del pezzo} it now follows that any broken line ending at $p\in\Int(\sigma_0)$ is straight. This shows \[ W_{\PP^3}(\sigma_0) = \Big(x + y + z + \frac{1}{xyz}\Big) \cdot t. \] \begin{figure} \input{P3.pspdftex} \caption{The distinguished model of $\PP^3$ and its affine Legendre dual. The trivalent graphs indicate the discriminant loci, the dashed lines on the left the one-cells added in the star-subdivision. The little arrows indicate parts of unbounded cells of the discriminant locus.} \label{fig:three-dimensional} \end{figure} \end{example} \begin{example} \label{ex: corti} Consider the reflexive polytope $\Xi$ depicted on the left in Figure~\ref{fig:corti}, a truncated tetrahedron with parallel top and bottom facets that is symmetric under cyclic permutation of the coordinates. The polar dual is the bounded polyhedron $\sigma_0$ on the right of the same figure. Note that each edge of $\sigma_0$ has integral length two. This means that the toric Fano variety $\check X$ with anticanonical Newton polyhedron $\Xi$ is singular along each one-dimensional toric stratum; each such stratum has a neighborhood isomorphic to a product of a two-dimensional $A_1$-singularity with $\GG_m$. \begin{figure} \input{cortis_poly.pspdftex} \caption{A reflexive polytope with edges and corners pushed in and its Legendre-dual, following the same conventions as in Figure~\ref{fig:three-dimensional}.} \label{fig:corti} \end{figure} In exactly the same way as in Example~\ref{Expl: PP3}, we arrive at a tropical manifold $(\check B,\check \P,\check \varphi)$ with flat boundary and $\check \P$ given by the star subdivision of $\Xi$. The Legendre-dual $(B,\P,\varphi)$ has a double tetrahedron as the unique bounded maximal cell. Both tropical manifolds are depicted in a chart at the origin in Figure~\ref{fig:corti}. The discriminant locus $\check\Delta$ of $(\check B,\check \P)$ is now contained in the union of triangles added in the star-subdivision, with the intersection of $\check\Delta$ with any such triangle $\tau$ being the union of three edges meeting in the barycenter of $\tau$. The discriminant locus $\Delta$ of $(B,\P)$ has nine rays emanating from the barycenters of the edges of $\sigma_0$, and then for each facet $\tau\subset\sigma_0$ again a union of three edges intersecting in the barycenter of $\tau$. Now neither side has simple singularities. For example, the monodromy of the affine structure of $(B,\P)$ about an interior point of an edge of $\Delta$ on $\partial\sigma_0$ is conjugate to $\left(\begin{smallmatrix} 1&-2&0\\0&1&0\\0&0&1 \end{smallmatrix}\right)$, the square of the monodromy of a focus-focus singularity times an interval.
Thus although it is not hard to see that $(B,\P)$ is compactifiable (Definition~\ref{Def: compactifiable (B,P)}), the assumptions of \cite[Def.\,1.26]{affinecomplex} can not be fulfilled for this compactification, and hence the existence of a consistent wall structure is not immediately clear. \begin{figure} \input{subdivision_Fano3.pspdftex} \caption{Refinement of the discriminant locus on a bounded two-cell of $\P$.} \label{fig:refinement Fano3} \end{figure} In the present case we can proceed as follows. Each of the bounded two-cells of $(B,\P)$ is integral affine isomorphic to the planar triangle with vertices $(0,0),(2,0),(0,2)$. Subdivide each such two-cell as in Figure~\ref{fig:refinement Fano3} and refine $\P$ by intersection with the fan over the faces of this subdivision. The discriminant locus can then be refined as indicated by the dashed graph in Figure~\ref{fig:refinement Fano3}, along with replacing each unbounded cell of $\Delta$ by two copies joined with the rest of the graph at disjoint trivalent vertices lying on the edges of $\sigma_0$. Now we can indeed run the algorithm\footnote{There is a technical problem to assure that the process is finite at each step. This can be done by working with $d\varphi+\psi$ for $d>0$ and an appropriate $\psi$ or by inspection of the local scattering situations in the case at hand.} to construct a compatible sequence $\tilde\scrS_k$ of consistent wall structures. In a second step undo the refinement process to show that the algorithm indeed works starting with the non-rigid data $(B,\P,\varphi)$. As in Theorem~\ref{singular del pezzo} we also have a choice for the initial slab function, with a distinguished choice a square $f_{\rho,v}=(1+x+y)^2$ for each bounded $2$-cell $\rho$ and vertex $v\in \rho$. Now the details of the construction of the wall structure are completely irrelevant for the computation of the superpotential in the bounded cell $\sigma_0\in\P$: By the same arguments as in Theorem~\ref{singular del pezzo}, it is again just the sum over broken lines with at most one bend when crossing $\partial\sigma_0$. Moreover, the set of such broken lines with endpoint at any $p\in\Int\sigma_0$ are in bijection with the integral points of $\partial\sigma_0$. For the distinguished slab functions the coefficient carried by the broken line equals $1$ for the ones without bend and $2$ for the others. The superpotential therefore equals \[ W(\sigma_0) = \Big(xyz + \frac{1}{xyz} + \frac{x}{yz} + \frac{y}{xz} + \frac{z}{xy} +2(x^2+y^2+z^2)+\frac{2}{x^2}+\frac{2}{y^2}+\frac{2}{z^2}+\frac{2}{xy}+\frac{2}{yz}+\frac{2}{xz}\Big) \cdot t . \] A similar challenge concerns the existence of a consistent wall structure on $(\check B,\check \P,\check\varphi)$, but analogous arguments apply. The resulting toric degeneration $(\check\foX\to T,\check\foD)$ then fits into an algebraizable two-parameter family with the singular toric Fano manifold $\check X$ with its toric anticanonical divisor $\check D$ as another fiber. We have not performed a more detailed analysis to identify this family. Possibly it is just isomorphic to a deformation of $\check D$ inside $\check X$, as in Example~\ref{Expl: PP2} for $\PP^2$ and $\check D$ a family of elliptic curves. \end{example} \section*{Concluding remarks.} It would be interesting to more systematically analyze Landau-Ginzburg models for non-toric Fano threefolds with our method. In~\cite{Pr1, Pr2} so called \emph{very weak Landau-Ginzburg potentials} are found. 
The terms and coefficients of these Laurent polynomials have to be chosen very carefully. As the potentials presented there do not come from a specific algorithm, but rather are written down in an ad hoc way, one would like to have an interpretation of the terms occurring. One might ask whether there are toric degenerations reproducing the potentials in~\cite{Pr1, Pr2} via tropical disk counting, as in the examples here. \begin{thebibliography}{cccccccc} \bibitem[Au1]{auroux1} D.\ Auroux, \emph{Mirror symmetry and T-duality in the complement of an anticanonical divisor}, J.\ G\"okova Geom.\ Topol.\ GGT~1 (2007), 51--91. \bibitem[Au2]{auroux2} D.\ Auroux, \emph{Special Lagrangian fibrations, wall-crossing, and mirror symmetry}, Surv.\ Differ.\ Geom.~\textbf{13}, 1--47, Int.\ Press 2009. \bibitem[AKO]{AKO} D.\ Auroux, L.\ Katzarkov, D.\ Orlov, \emph{Mirror symmetry for Del Pezzo surfaces: Vanishing cycles and coherent sheaves}, Inventiones Math.~\textbf{166} (2006), 537--582. \bibitem[Ba]{Bat} V.~Batyrev: \emph{Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties}, J.\ Algebraic Geom.~\textbf{3} (1994), 493--535. \bibitem[CL]{ChanLau} K.~Chan, S.-C.~Lau: \emph{Open Gromov-Witten invariants and superpotentials for semi-Fano toric surfaces}, Int.\ Math.\ Res.\ Not.~\textbf{14} (2014), 3759--3789. \bibitem[CO]{ChoOh}C.-H.\ Cho and Y.-G.\ Oh, \emph{Floer cohomology and disc instantons of Lagrangian torus fibers in Fano toric manifolds}, Asian J.\ Math.~\textbf{10} (2006), 773--814. \bibitem[FOOO]{FOOO1} K.\ Fukaya, Y.-G.\ Oh, H.\ Ohta, and K.\ Ono, \emph{Lagrangian Floer theory on compact toric manifolds I}, Duke Math.\ J.~\textbf{151} (2010), 23--174. \bibitem[FFR]{FFR} S.\ Felten, M.\ Filip, H.\ Ruddat: \emph{Smoothing toroidal crossing spaces}, Forum Math.\ Pi~\textbf{9:e7} (2021), 1--36. \bibitem[GRS]{GRS} M.\ van Garrel, H.\ Ruddat, B.\ Siebert: \emph{Enumerative period integrals of Landau-Ginzburg models via wall structures}, in progress. \bibitem[GM]{gat} A.\ Gathmann, H.\ Markwig, \emph{The numbers of tropical plane curves through points in general position}, J.\ Reine Angew.\ Math.~\textbf{602} (2007), 155--177. \bibitem[Gi]{Givental} A.\ Givental: \emph{Stationary phase integrals, quantum Toda lattices, flag manifolds and the mirror conjecture}, in: Topics in singularity theory, Amer.\ Math.\ Soc.\ Transl.\ Ser.~2, \textbf{180} (1997), 103--115, AMS~1997. \bibitem[GHK]{GHK} M.~Gross, P.~Hacking, S.~Keel: \emph{Mirror symmetry for log Calabi-Yau surfaces I}, Publ.\ Math.\ Inst.\ Hautes \'Etudes Sci.~\textbf{122} (2015), 65--168. \bibitem[GHKK]{GHKK} M.~Gross, P.~Hacking, S.~Keel, M.~Kontsevich, \emph{Canonical bases for cluster algebras}, J.\ Amer.\ Math.\ Soc.\ {\bf 31} (2018), 497--608. \bibitem[GHS]{Theta} M.~Gross, P.~Hacking, B.~Siebert: \emph{Theta functions on varieties with effective anticanonical class}, Mem.\ Amer.\ Math.\ Soc.~\textbf{278} (2022), no.~1367, \href{https://arxiv.org/pdf/1601.07081}{arXiv:1601.07081} [math.AG]. \bibitem[GPS]{gps} M.\ Gross, P.\ Pandharipande, B.\ Siebert, \emph{The tropical vertex}, Duke Math.\ J.\ \textbf{153} (2010), 297--362. \bibitem[Gr1]{GBB} M.~Gross: \emph{Toric Degenerations and Batyrev-Borisov Duality}, Math.\ Ann.\ \textbf{333} (2005), 645--688. \bibitem[Gr2]{PP2mirror} M.\ Gross: \emph{Mirror symmetry for $\PP^2$ and tropical geometry}, Adv.~Math.\ {\bf 224} (2010), 169--245. 
\bibitem[Gr3]{kansas} M.\ Gross: \emph{Tropical geometry and mirror symmetry}, CBMS Regional Conference Series in Mathematics~\textbf{111}, AMS~2011. \bibitem[GS1]{logmirror} M.\ Gross, B.\ Siebert: \emph{Mirror symmetry via logarithmic degeneration data I}, J.\ Differential Geom.~\textbf{72} (2006), 169--338. \bibitem[GS2]{PartII} M.\ Gross, B.\ Siebert: \emph{Mirror symmetry via logarithmic degeneration data II}, J.\ Algebraic Geom. \textbf{19} (2010), 679--780. \bibitem[GS3]{affinecomplex} M.\ Gross, B.\ Siebert: \emph{From real affine to complex geometry}, Ann.\ of Math.~\textbf{174}, (2011), 1301--1428. \bibitem[GS4]{Invitation} M.\ Gross, B.\ Siebert: \emph{An invitation to toric degenerations}, Surv.\ Differ.\ Geom.~\textbf{16}, 43--78, \bibitem[GS5]{thetasurvey} M.\ Gross, B.\ Siebert: \emph{Theta functions and mirror symmetry}, Surv.\ Differ.\ Geom.~\textbf{21}, International Press 2016. \bibitem[GS6]{Assoc} M.~Gross and B.~Siebert: \emph{Intrinsic mirror symmetry}, \href{https://arxiv.org/pdf/1909.07649v2}{arXiv:1909.07649v2} [math.AG]. \bibitem[GS7]{CanonicalWalls} M.~Gross and B.~Siebert: \emph{The canonical wall structure and intrinsic mirror symmetry}, to appear in Invent.\ Math., \href{https://arxiv.org/pdf/2105.02502}{https://arxiv.org/pdf/2105.02502} \bibitem[Ha]{Hartshorne} R.\ Hartshorne: \emph{Algebraic geometry}, Springer~1977. \bibitem[HV]{HoriVafa} K.\ Hori, C.\ Vafa: \emph{Mirror symmetry}, \href{https://arxiv.org/pdf/hep-th/0002222}{arXiv:hep-th/0002222}. \bibitem[KS]{ks} M.\ Kontsevich, Y.\ Soibelman: \emph{Affine structures and non-Archimedean analytic spaces}, in: \textsl{The unity of mathematics} (P.~Etingof, V.~Retakh, I.M.~Singer, eds.), 321--385, Progr.\ Math.~244, Birkh\"auser~2006. \bibitem[Pr1]{Pr1} V.\ Przyjalkowski: \emph{On Landau--Ginzburg models for Fano varieties}, Commun.\ Number Theory Phys.~\textbf{1} (2007), 713--728. \bibitem[Pr2]{Pr2} V.\ Przyjalkowski: \emph{ Weak Landau--Ginzburg models for smooth Fano threefolds}, Izv.\ Math.~\textbf{77} (2013), 772--794 \bibitem[Pu]{MaxThesis} M.\ Pumperla: \emph{Unifying Constructions in Toric Mirror Symmetry}, doctoral thesis, University of Hamburg~2011, \href{https://ediss.sub.uni-hamburg.de/handle/ediss/4618}{ https://ediss.sub.uni-hamburg.de/handle/ediss/4618} \bibitem[Ru]{Ruddat} H.\ Ruddat: \emph{A homology theory for tropical cycles on integral affine manifolds and a perfect pairing}, Geom.\ Topol.~\textbf{25} (2021), 3079--3132. \bibitem[RS]{RS} H.\ Ruddat, B.\ Siebert: \emph{Period integrals from wall structures via tropical cycles, canonical coordinates in mirror symmetry and analyticity of toric degenerations}, Publ.\ Math.\ Inst.\ Hautes \'Etudes Sci.~\textbf{132} (2020), 1--82. \bibitem[{SP}]{stacks-project} The {Stacks Project Authors}: \emph{\itshape Stacks project}, \url{http://stacks.math.columbia.edu}, 2022. \bibitem[Sy]{symington} M.\ Symington: \emph{Four dimensions from two in symplectic topology}, in: \textsl{Topology and geometry of manifolds (Athens, GA, 2001)}, 153--208, Proc.\ Sympos.\ Pure Math.~71, Amer.\ Math.\ Soc.~2003. \bibitem[Ts]{Tsoi} H.-M.\ Tsoi, \emph{Cohomological Properties of Toric Degenerations of Calabi-Yau Pairs}, doctoral thesis, University of Hamburg~2013, \href{https://ediss.sub.uni-hamburg.de/handle/ediss/5247}{ https://ediss.sub.uni-hamburg.de/handle/ediss/5247} \bibitem[Wi]{Witten} E.\ Witten: \emph{Phases of N=2 theories in two dimensions}, Nuclear Phys.~B~\textbf{403} (1993), 159--222. \end{thebibliography} \end{document}
2205.07710v2
http://arxiv.org/abs/2205.07710v2
The maximum spectral radius of irregular bipartite graphs
\documentclass[4pt]{article} \usepackage[utf8]{inputenc} \usepackage{geometry} \geometry{a4paper} \usepackage{graphicx} \usepackage[colorlinks,citecolor=red]{hyperref}\usepackage{amsmath} \usepackage{amsfonts} \usepackage{dsfont} \usepackage{mathrsfs} \usepackage{amsthm} \usepackage{latexsym} \usepackage{graphicx} \usepackage{amssymb} \usepackage{float}\usepackage{epsfig} \usepackage{epstopdf} \usepackage{color,xcolor} \usepackage{booktabs} \usepackage{array} \usepackage{paralist} \usepackage{verbatim} \usepackage{subfig} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{claim}[theorem]{Claim} \newtheorem{definition}[theorem]{Definition} \newtheorem{problem}[theorem]{Problem} \newtheorem{example}[theorem]{Example} \newtheorem{construction}[theorem]{Construction} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{remark}[theorem]{Remark} \newtheorem{question}[theorem]{Question} \newtheorem{algorithm}[theorem]{Algorithm} \usepackage{fancyhdr} \pagestyle{fancy} \renewcommand{\baselinestretch}{1.2}\renewcommand{\headrulewidth}{0pt} \lhead{}\chead{}\rhead{} \lfoot{}\cfoot{\thepage}\rfoot{} \usepackage{sectsty} \allsectionsfont{\sffamily\mdseries\upshape} \usepackage[nottoc,notlof,notlot]{tocbibind} \usepackage[titles,subfigure]{tocloft} \renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape} \renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} \title{\textsf{The maximum spectral radius of irregular bipartite graphs}} \author{{Jie Xue$^{a}$}, {Ruifang Liu$^{a}$}\thanks{Corresponding author. E-mail address: [email protected] (Liu).}, { Jiaxin Guo$^{a}$}, {Jinlong Shu$^{b}$} \medskip \\ {\footnotesize $^a$School of Mathematics and Statistics, Zhengzhou University, Zhengzhou, China}\\ {\footnotesize $^b$School of Computer Science and Technology, East China Normal University, Shanghai, China}} \date{} \begin{document} \maketitle \begin{abstract} \maketitle A bipartite graph is subcubic if it is an irregular bipartite graph with maximum degree three. In this paper, we prove that the asymptotic value of maximum spectral radius over subcubic bipartite graphs of order $n$ is $3-\varTheta(\frac{\pi^{2}}{n^{2}})$. Our key approach is taking full advantage of the eigenvalues of certain tridiagonal matrices, due to Willms [SIAM J. Matrix Anal. Appl. 30 (2008) 639--656]. Moreover, for large maximum degree, i.e., the maximum degree is at least $\lfloor n/2 \rfloor$, we characterize irregular bipartite graphs with maximum spectral radius. For general maximum degree, we present an upper bound on the spectral radius of irregular bipartite graphs in terms of the order and maximum degree. \bigskip \noindent {\bf AMS Classification:} 05C50 \noindent {\bf Key words:} Spectral radius, Bipartite graph, Irregular, Subcubic, Maximum degree \end{abstract} \section{Introduction} The spectral radius of a graph is the largest eigenvalue of its adjacency matrix. A classical issue in spectral graph theory is the Brualdi-Solheid problem \cite{Brualdi1986}, which aims to characterize graphs with extremal values of the spectral radius in a given class of graphs. A lot of results concerning the Brualdi-Solheid problem were presented, and some of these results were exhibited in a recent monograph on the spectral radius by Stevanovi\'{c} \cite{Stevanovic2018}. Let $G$ be a connected graph on $n$ vertices with maximum degree $\Delta$. 
The spectral radius of $G$ is denoted by $\rho(G)$, or simply $\rho$ when there is no risk of confusion. By the Perron-Frobenius theorem and the Rayleigh quotient, it is easy to deduce the well-known upper bound $\rho(G)\leq \Delta$, with equality if and only if $G$ is regular. In particular, the spectral radius of a connected irregular graph is strictly less than its maximum degree. It is natural to ask how large the spectral radius of an irregular graph can be; this can be regarded as the Brualdi-Solheid problem for irregular graphs. Equivalently, one may ask how small $\Delta-\rho$ can be when the graph $G$ is irregular. A lower bound for $\Delta-\rho$ was first given by Stevanovi\'{c} in \cite{Stevanovic2004}. \begin{theorem}{\rm(\cite{Stevanovic2004})}\label{th5} Let $G$ be a connected irregular graph of order $n$ and maximum degree $\Delta$. Then \begin{equation} \Delta-\rho(G)>\frac{1}{2n(n\Delta-1)\Delta^{2}}. \end{equation} \end{theorem} Some lower bounds for $\Delta-\rho(G)$ in terms of other graph parameters, such as the diameter and the minimum degree, were established in \cite{Cioaba2007EJC,Cioaba2007JCTB,Liu2007,Shi2009,Zhang2005}. In particular, Cioab\u{a} \cite{Cioaba2007EJC} presented the following lower bound, which confirmed a conjecture in \cite{Cioaba2007JCTB}. \begin{theorem}{\rm(\cite{Cioaba2007EJC})} Let $G$ be a connected irregular graph with $n$ vertices, maximum degree $\Delta$ and diameter $D$. Then \begin{equation} \Delta-\rho(G)>\frac{1}{nD}. \end{equation} \end{theorem} Another approach is to determine the asymptotic value of the maximum spectral radius of irregular graphs. Denote by $\mathcal{F}(n,\Delta)$ the set of all connected irregular graphs on $n$ vertices with maximum degree $\Delta$. Let $\rho(n,\Delta)$ be the maximum spectral radius of graphs in $\mathcal{F}(n,\Delta)$, that is, $$\rho(n,\Delta)=\max\{\rho(G): G\in \mathcal{F}(n,\Delta)\}.$$ In \cite{Liu2007}, Liu, Shen and Wang proposed the following conjecture on the asymptotic value of $\rho(n,\Delta)$. \begin{conjecture}{\rm(\cite{Liu2007})}\label{conj1} For each fixed $\Delta$, the limit of $n^{2}(\Delta-\rho(n,\Delta))/(\Delta-1)$ exists. Furthermore, $$\lim_{n\rightarrow \infty}\frac{n^{2}(\Delta-\rho(n,\Delta))}{\Delta-1}=\pi^{2}.$$ \end{conjecture} The conjecture clearly holds for $\Delta=2$, since the path $P_{n}$ is the only graph in $\mathcal{F}(n,2)$, and its spectral radius is $2\cos(\frac{\pi}{n+1})$. Very recently, the conjecture was disproved by Liu \cite{Liu2022}, who proved for subcubic graphs that $\lim_{n\rightarrow \infty}n^{2}(3-\rho(n,3))=\pi^{2}/2$. Moreover, the extremal graph with maximum spectral radius is path-like, and it is always non-bipartite. It is therefore natural to consider the maximum spectral radius over irregular bipartite graphs; in this paper we determine its asymptotic value for subcubic bipartite graphs. Denote by $\mathcal{B}(n,\Delta)$ the set of all connected irregular bipartite graphs on $n$ vertices with maximum degree $\Delta$; thus $\mathcal{B}(n,3)$ is the set of all connected subcubic bipartite graphs on $n$ vertices. Let $\lambda(n,\Delta)$ be the maximum spectral radius among all graphs in $\mathcal{B}(n,\Delta)$, that is, $$\lambda(n,\Delta)=\max\{\rho(G): G\in \mathcal{B}(n,\Delta)\}.$$ The first main result of the paper presents the limit of $n^{2}(\Delta-\lambda(n,\Delta))$ for $\Delta=3$.
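Before stating it, we note that the quantities discussed above are easy to experiment with numerically. For instance, the following numpy sketch (an illustration only, not part of any proof) verifies $\rho(P_{n})=2\cos(\frac{\pi}{n+1})$ and the lower bound of Theorem \ref{th5} for the path $P_{n}$, the unique graph in $\mathcal{F}(n,2)$.
\begin{verbatim}
# Illustration only: check rho(P_n) = 2*cos(pi/(n+1)) and the bound
# Delta - rho > 1/(2n(n*Delta-1)*Delta^2) for the path P_n (Delta = 2).
import numpy as np

def spectral_radius_path(n):
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.linalg.eigvalsh(A)[-1]   # largest eigenvalue

for n in range(3, 9):
    rho, Delta = spectral_radius_path(n), 2
    assert abs(rho - 2*np.cos(np.pi/(n + 1))) < 1e-9
    assert Delta - rho > 1/(2*n*(n*Delta - 1)*Delta**2)
print("checks passed")
\end{verbatim}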
\begin{theorem}\label{th1} Let $\lambda=\lambda(n,3)$ be the maximum spectral radius among all graphs in $\mathcal{B}(n,3)$. Then $$\lim_{n\to \infty}n^{2}(3-\lambda)=\pi^{2}.$$ \end{theorem} Note that $\mathcal{B}(n,3)\subset\mathcal{F}(n,3)$, which implies that $\lambda(n,3)\leq\rho(n,3)$. If we take $\Delta=3$, then Conjecture \ref{conj1} means that $$\lim_{n\rightarrow \infty}n^{2}(3-\rho(n,3))=2\pi^{2}.$$ However, according to Theorem \ref{th1}, it follows that $$\lim_{n\rightarrow \infty}n^{2}(3-\rho(n,3))\leq \lim_{n\rightarrow \infty}n^{2}(3-\lambda(n,3))=\pi^{2},$$ which provides a counterexample to Conjecture \ref{conj1}. A simple modification of Theorem \ref{th1} leads to the following consequence, which establishes the asymptotic value of the maximum spectral radius of subcubic bipartite graphs. \begin{theorem} The maximum spectral radius of subcubic bipartite graphs on $n$ vertices is $3-\varTheta(\frac{\pi^{2}}{n^{2}})$. \end{theorem} On the other hand, we focus on the maximum spectral radius of irregular bipartite graphs with large maximum degree. Define an irregular bipartite graph $H_{n,\Delta}$ as follows. \medskip \noindent $\bullet$ For $2\Delta>n$, $H_{n,\Delta}$ is isomorphic to the complete bipartite graph $K_{\Delta,n-\Delta}$.\\ \noindent $\bullet$ For $2\Delta=n$, $H_{n,\Delta}$ is obtained from $K_{\Delta,\Delta}$ by deleting an edge.\\ \noindent $\bullet$ For $2\Delta=n-1$, $H_{n,\Delta}$ is obtained from $K_{\Delta,\Delta}$ by deleting an edge, and then adding a new vertex and joining it to a vertex of degree less than $\Delta$. \medskip When the maximum degree is at least $\lfloor n/2 \rfloor$, the maximum spectral radius is completely determined. \begin{theorem}\label{th2} Let $G$ be an irregular bipartite graph on $n$ vertices with maximum degree $\Delta$. If $\Delta\geq \lfloor n/2 \rfloor$, then $$\rho(G)\leq \rho(H_{n,\Delta}),$$ with equality if and only if $G\cong H_{n,\Delta}$. \end{theorem} Note that the above theorem also means that $\lambda(n,\Delta)=\rho(H_{n,\Delta})$ if $\Delta\geq \lfloor n/2 \rfloor$. Furthermore, for general $n$ and $\Delta$, we provide a lower bound for $\Delta-\lambda(n,\Delta)$, which yields an upper bound for $\lambda(n,\Delta)$. \begin{theorem}\label{th3} Let $\lambda(n,\Delta)$ be the maximum spectral radius among all the graphs in $\mathcal{B}(n,\Delta)$. Then $$\Delta-\lambda(n,\Delta)>\frac{2\Delta}{n(4n+\Delta-4)}.$$ \end{theorem} The rest of the paper is organized as follows. In Section 2, we prove Theorem \ref{th1} by utilizing the eigenvalues of certain tridiagonal matrices. The proof of Theorem \ref{th2} is presented in Section 3. In Section 4, we establish some structural properties of the irregular bipartite graph with maximum spectral radius, and then give the proof of Theorem \ref{th3}. In the final section, some additional remarks are provided. \section{Subcubic bipartite graphs} The eigenvalues and eigenvectors of a certain tridiagonal matrix were discussed in \cite{Willms2008}. Let us consider the tridiagonal matrix \begin{equation} A=\begin{bmatrix} -\alpha+b & c_{1} & \\ a_{1} & b & c_{2} \\ & a_{2} & \ddots & \ddots \\ & & \ddots & b & c_{n-1}\\ & & & a_{n-1} & -\beta+b \end{bmatrix} \end{equation} with the restriction $\sqrt{a_{i}c_{i}}=d\neq 0$ for $1\leq i\leq n-1$. All variables appearing in the matrix are complex. For some special cases, Willms \cite{Willms2008} presented the eigenvalues and corresponding eigenvectors.
\begin{lemma}{\rm(\cite{Willms2008})}\label{lem4} If $\alpha=d$ and $\beta=0$, then the eigenvalues of $A$ are $$\lambda_{i}=b+2d\cos\left(\frac{2i\pi}{2n+1}\right),$$ where $1\leq i\leq n$. \end{lemma} We remark that the above lemma is only one of several results in \cite{Willms2008}; the eigenvalues and eigenvectors of $A$ were also determined there for other values of $\alpha$ and $\beta$. Let us define a special tridiagonal matrix of order $n$: \begin{equation}\label{eq3} M_{n}=\begin{bmatrix} 1 & -1 & \\ -1 & 2 & -1 \\ & -1 & \ddots & \ddots \\ & & \ddots & 2 & -1\\ & & & -1 & 2 \end{bmatrix}. \end{equation} Obviously, the matrix $M_{n}$ is obtained from $A$ by setting $\alpha=1$, $\beta=0$, $b=2$ and $a_{i}=c_{i}=-1$ for $1\leq i\leq n-1$. Using Lemma \ref{lem4}, the eigenvalues and eigenvectors of $M_{n}$ can be determined directly. In particular, we display its least eigenvalue in the following result. \begin{proposition}\label{pro1} For a given positive integer $n$, the least eigenvalue of $M_{n}$ is $4\sin^{2}\left(\frac{\pi}{4n+2}\right)$. \end{proposition} Let us record two trigonometric identities which will be used in the subsequent proofs. A basic fact is that $$\cos^{2}\alpha+\cos^{2}\left(\frac{\pi}{2}-\alpha\right)=\sin^{2}\alpha+\sin^{2}\left(\frac{\pi}{2}-\alpha\right)=1.$$ Using this fact, for any integer $k\geq 3$, one can see that \begin{equation}\label{eq6} \sum_{i=1}^{k-1}\sin^{2}\left(\frac{i}{2k}\pi\right)=\frac{k-1}{2} \end{equation} and \begin{equation}\label{eq7} \sum_{i=0}^{k-1}\cos^{2}\left(\frac{2i+1}{4k}\pi\right)=\frac{k}{2}. \end{equation} Given an irregular graph with maximum degree $\Delta$, we say that a vertex is unsaturated if its degree is less than $\Delta$. The irregular graph with the maximum spectral radius cannot have many unsaturated vertices. In \cite{Xue2022+}, Xue and Liu considered the subcubic bipartite graph with maximum spectral radius, and showed that the extremal graph contains at most two unsaturated vertices. Let $G$ be a bipartite graph with bipartition $(X,Y)$. We say that $G$ is balanced if $|X|=|Y|$. For $n\geq 6$, let us define a family of bipartite graphs $B_{n}$, constructed as follows. \medskip \noindent $\bullet$ $B_{6}$ is a balanced bipartite graph obtained from $K_{3,3}$ by deleting an edge.\\ \noindent $\bullet$ If $B_{n}$ is balanced, then $B_{n+1}$ is obtained from $B_{n}$ by adding a vertex and joining it to an unsaturated vertex in one part of $B_{n}$.\\ \noindent $\bullet$ If $B_{n}$ is unbalanced, then $B_{n+1}$ is obtained from $B_{n}$ by adding a vertex and joining it to the unsaturated vertices of $B_{n}$. \medskip It was proved in \cite{Xue2022+} that $B_{n}$ is the unique subcubic bipartite graph of order $n$ with maximum spectral radius. The following lemma establishes lower and upper bounds for the spectral radius of $B_{n}$. \begin{figure}[t] \begin{center} \scalebox{0.85}[0.85] {\includegraphics{maximal_graph.pdf}} \end{center} \caption{Subcubic bipartite graphs.} \label{fig1} \end{figure} \begin{lemma}\label{lem5} Let $\rho$ be the spectral radius of $B_{n}$ with $n\geq 6$. If $n$ is even, then $$4\sin^{2}\left(\frac{\pi}{2n+2}\right)\leq 3-\rho\leq \frac{4n+48}{n-2}\sin^{2}\left(\frac{\pi}{2n}\right).$$ \end{lemma} \begin{proof} Suppose that $n$ is even, that is, $n=2k$ with $k\geq 3$. We may label the vertices of $B_{2k}$ as presented in Fig. \ref{fig1}. Let $\mathtt{x}$ be the unit eigenvector of the adjacency matrix $A(B_{2k})$ corresponding to $\rho$.
According to the symmetry of $B_{2k}$, one can see that $\mathtt{x}(u_{i})=\mathtt{x}(v_{i})$ for $1\leq i\leq k$. Set $\mathtt{x}(u_{i})=a_{i}$ for $1\leq i\leq k$. Let $L$ be the Laplacian matrix of $B_{2k}$, and let $\Lambda$ be the diagonal matrix with diagonal elements $1,1,0,\ldots,0$, where the two nonzero entries of $\Lambda$ correspond to the vertices $u_{k}$ and $v_{k}$. Thus, \begin{equation}\label{eq1} 3I-A(B_{2k})=L+\Lambda. \end{equation} Since $\rho=\mathtt{x}^{t}A(B_{2k})\mathtt{x}$, we have \begin{equation}\label{eq2} 3-\rho=3\mathtt{x}^{t}\mathtt{x}-\mathtt{x}^{t}A(B_{2k})\mathtt{x} =\mathtt{x}^{t}(3I-A(B_{2k}))\mathtt{x}. \end{equation} Combining (\ref{eq1}) and (\ref{eq2}), it follows that $3-\rho=\mathtt{x}^{t}(L+\Lambda)\mathtt{x}$. Note that \[ \mathtt{x}^{t} L \mathtt{x}=2(a_{1}-a_{3})^{2}+2 \sum_{i=1}^{k-1}(a_{i}-a_{i+1})^{2}~~~~\text{and}~~~\mathtt{x}^{t}\Lambda\mathtt{x}=2a_{k}^{2}. \] In $B_{2k}$, the vertices $u_{1}$ and $u_{2}$ have the same neighbors, hence $\mathtt{x}(u_{1})=\mathtt{x}(u_{2})$, that is, $a_{1}=a_{2}$. It follows that \begin{eqnarray}\label{eq4} \begin{split} 3-\rho&=\mathtt{x}^{t}L\mathtt{x}+\mathtt{x}^{t}\Lambda\mathtt{x}\\ &=2(a_{1}-a_{3})^{2}+2 \sum_{i=1}^{k-1}(a_{i}-a_{i+1})^{2}+2a_{k}^{2}\\ &\geq 2 \sum_{i=1}^{k-1}(a_{i}-a_{i+1})^{2}+2a_{k}^{2}. \end{split} \end{eqnarray} Let $\mathtt{y}=(a_{1},a_{2},\ldots,a_{k})$. Suppose that $M_{k}$ is the tridiagonal matrix defined in (\ref{eq3}). Thus, one can see that \begin{equation}\label{eq5} \mathtt{y}^{t}M_{k}\mathtt{y}=a_{k}^{2}+\sum_{i=1}^{k-1}(a_{i}-a_{i+1})^{2}. \end{equation} Combining (\ref{eq4}) and (\ref{eq5}), we obtain that \begin{equation} 3-\rho\geq 2\mathtt{y}^{t}M_{k}\mathtt{y}. \end{equation} Let $\lambda_{\min}$ be the least eigenvalue of the matrix $M_{k}$. Note that $\mathtt{y}^{t}\mathtt{y}=\sum_{i=1}^{k}a_{i}^{2}=(\mathtt{x}^{t}\mathtt{x})/2=1/2$. Therefore, $$\lambda_{\min}\leq \frac{\mathtt{y}^{t}M_{k}\mathtt{y}}{\mathtt{y}^{t}\mathtt{y}}=2\mathtt{y}^{t}M_{k}\mathtt{y}.$$ By Proposition \ref{pro1}, we have $\lambda_{\min}=4\sin^{2}(\pi/(4k+2))$. Combining the above equations, it follows that \begin{equation} 3-\rho\geq 2\mathtt{y}^{t}M_{k}\mathtt{y}\geq \lambda_{\min}=4\sin^{2}\left(\frac{\pi}{4k+2}\right)=4\sin^{2}\left(\frac{\pi}{2n+2}\right). \end{equation} In the following, we will give an upper bound for $3-\rho$. In order to prove the upper bound, we construct a vector on the vertices of $B_{2k}$. Let $\mathtt{z}$ be the $n$-vector whose entries satisfy $$\mathtt{z}(u_{i})=\mathtt{z}(v_{i})=\sin\left(\frac{k-i}{2k}\pi\right),$$ for $1\leq i\leq k$. According to (\ref{eq6}), we can see that \begin{equation}\label{eq8} \mathtt{z}^{t}\mathtt{z}=2\sum_{i=1}^{k}\sin^{2}\left(\frac{k-i}{2k}\pi\right)=2\sum_{i=1}^{k-1}\sin^{2}\left(\frac{i}{2k}\pi\right)=k-1.
\end{equation} Note also that \begin{eqnarray}\label{eq9} \begin{split} \mathtt{z}^{t}(L+\Lambda)\mathtt{z}&=2(\mathtt{z}(u_{1})-\mathtt{z}(u_{3}))^{2}+2\mathtt{z}(u_{k})^{2} +2\sum_{i=1}^{k-1}(\mathtt{z}(u_{i})-\mathtt{z}(u_{i+1}))^{2}\\ &=2\left(\sin\left(\frac{k-1}{2k}\pi\right)-\sin\left(\frac{k-3}{2k}\pi\right)\right)^{2} +2\sum_{i=0}^{k-2}\left(\sin\left(\frac{i}{2k}\pi\right)-\sin\left(\frac{i+1}{2k}\pi\right)\right)^{2}\\ &=8\sin^{2}\left(\frac{2}{4k}\pi\right)\cos^{2}\left(\frac{2k-4}{4k}\pi\right) +8\sin^{2}\left(\frac{1}{4k}\pi\right)\sum_{i=0}^{k-2}\cos^{2}\left(\frac{2i+1}{4k}\pi\right)\\ &\leq 24\sin^{2}\left(\frac{1}{4k}\pi\right)+8\sin^{2}\left(\frac{1}{4k}\pi\right)\sum_{i=0}^{k-1}\cos^{2}\left(\frac{2i+1}{4k}\pi\right)\\ &=\left(4k+24\right)\sin^{2}\left(\frac{1}{4k}\pi\right), \end{split} \end{eqnarray} where the last equality is obtained from (\ref{eq7}). Moreover, since $$\rho\geq \frac{\mathtt{z}^{t}A(B_{2k})\mathtt{z}}{\mathtt{z}^{t}\mathtt{z}},$$ we obtain that \begin{eqnarray}\label{eq10} 3-\rho\leq \frac{3\mathtt{z}^{t}\mathtt{z}}{\mathtt{z}^{t}\mathtt{z}}-\frac{\mathtt{z}^{t}A(B_{2k})\mathtt{z}}{\mathtt{z}^{t}\mathtt{z}} =\frac{\mathtt{z}^{t}(L+\Lambda)\mathtt{z}}{\mathtt{z}^{t}\mathtt{z}}. \end{eqnarray} Combining (\ref{eq8}), (\ref{eq9}) and (\ref{eq10}), it follows that \begin{equation} 3-\rho\leq \frac{\left(4k+24\right)\sin^{2}\left(\frac{\pi}{4k}\right)}{k-1}=\frac{4n+48}{n-2}\sin^{2}\left(\frac{\pi}{2n}\right), \end{equation} as required. \end{proof} Now, we are in a position to present the proof of Theorem \ref{th1}. \medskip \noindent \textbf{Proof of Theorem \ref{th1}.} As mentioned above, $B_{n}$ is the unique bipartite graph in $\mathcal{B}(n,3)$ with the maximum spectral radius. Thus, $\lambda=\rho(B_{n})$. It is equivalent to show that $$\lim_{n\to \infty}n^{2}(3-\rho(B_{n}))=\pi^{2}.$$ We divide the proof into two cases according to the parity of $n$. \smallskip \noindent{\bf Case 1}. $n$ is even. \smallskip According to Lemma \ref{lem5}, it follows that \begin{equation} \lim_{n\to \infty}4n^{2}\sin^{2}\left(\frac{\pi}{2n+2}\right)\leq \lim_{n\to \infty}n^{2}(3-\rho(B_{n}))\leq \lim_{n\to \infty}\frac{n^{2}(4n+48)}{n-2}\sin^{2}\left(\frac{\pi}{2n}\right). \end{equation} The left limit is $$\lim_{n\to \infty}4n^{2}\sin^{2}\left(\frac{\pi}{2n+2}\right)=\lim_{n\to \infty}\frac{4n^{2}\pi^{2}}{(2n+2)^{2}}=\pi^{2}.$$ On the other hand, one can see that $$\lim_{n\to \infty}\frac{n^{2}(4n+48)}{n-2}\sin^{2}\left(\frac{\pi}{2n}\right)=\lim_{n\to \infty}\frac{n^{2}(4n+48)\pi^{2}}{4n^{2}(n-2)}=\pi^{2}.$$ Thus, the result holds in this case. \smallskip \noindent{\bf Case 2}. $n$ is odd. \smallskip Obviously, $B_{n}$ is a proper subgraph of $B_{n+1}$, and $B_{n-1}$ is a proper subgraph of $B_{n}$. Using Lemma \ref{lem5} for $B_{n-1}$ and $B_{n+1}$, respectively, we obtain that $$3-\rho(B_{n-1})\leq \frac{4n+44}{n-3}\sin^{2}\left(\frac{\pi}{2n-2}\right)$$ and $$3-\rho(B_{n+1})\geq 4\sin^{2}\left(\frac{\pi}{2n+4}\right).$$ Note that $\rho(B_{n-1})<\rho(B_{n})< \rho(B_{n+1})$. This implies that $$\lim_{n\to \infty}n^{2}(3-\rho(B_{n+1}))\leq \lim_{n\to \infty}n^{2}(3-\rho(B_{n}))\leq \lim_{n\to \infty}n^{2}(3-\rho(B_{n-1})).$$ It follows that $$\lim_{n\to \infty}4n^{2}\sin^{2}\left(\frac{\pi}{2n+4}\right)\leq \lim_{n\to \infty}n^{2}(3-\rho(B_{n})) \leq \lim_{n\to \infty}\frac{n^{2}(4n+44)}{n-3}\sin^{2}\left(\frac{\pi}{2n-2}\right),$$ that is, $$\pi^{2}\leq \lim_{n\to \infty}n^{2}(3-\rho(B_{n})) \leq \pi^{2}.$$ Hence $\lim_{n\to \infty}n^{2}(3-\rho(B_{n}))=\pi^{2}$. 
The proof is completed. \hfill$\Box$ \section{Irregular bipartite graphs with $\Delta\geq \left\lfloor n/2\right\rfloor$} In this section, we consider the maximum spectral radius of irregular bipartite graphs with $\Delta\geq \left\lfloor n/2 \right\rfloor$. An upper bound for the spectral radius of bipartite graphs with given size was presented by Bhattacharya, Friedland and Peled \cite{Bhattacharya2008}. The following lemma is part of \cite[Proposition 2.1]{Bhattacharya2008}. \begin{lemma}{\rm(\cite{Bhattacharya2008})}\label{lem1} Let $G$ be a bipartite graph of size $m$. Then $$\rho(G)\leq \sqrt{m},$$ with equality if and only if $G$ is a disjoint union of a complete bipartite graph and isolated vertices. \end{lemma} Note that the complete bipartite graph $K_{\Delta,n-\Delta}$ is irregular if $\Delta>\left\lfloor n/2\right\rfloor$. Indeed, the maximum spectral radius is attained by the complete bipartite graph $K_{\Delta,n-\Delta}$. \begin{lemma}\label{lem2} Let $G$ be an irregular bipartite graph on $n$ vertices with maximum degree $\Delta$. If $\Delta>n-\Delta$, then $$\rho(G)\leq \rho(K_{\Delta,n-\Delta}),$$ with equality if and only if $G\cong K_{\Delta,n-\Delta}$. \end{lemma} \begin{proof} Let $G$ be an irregular bipartite graph on $n$ vertices with maximum degree $\Delta$ that attains the maximum spectral radius. It suffices to show that $G\cong K_{\Delta,n-\Delta}$. Suppose that the bipartition of $G$ is $(X,Y)$. Without loss of generality, assume that $|X|\leq |Y|$. Since the maximum degree of $G$ is $\Delta$, we have $|Y|\geq \Delta$. If $|Y|=\Delta$, then $G$ is a spanning subgraph of $K_{\Delta,n-\Delta}$. Note that the spectral radius strictly increases when an edge is added to a graph. Thus, $G\cong K_{\Delta,n-\Delta}$. If $|Y|>\Delta$, then $$|E(G)|\leq |X|\times |Y|<\Delta(n-\Delta).$$ By Lemma \ref{lem1}, we obtain that $\rho(G)\leq\sqrt{|E(G)|}<\sqrt{\Delta(n-\Delta)}$. However, the spectral radius of $K_{\Delta,n-\Delta}$ equals $\sqrt{\Delta(n-\Delta)}$, which implies that $\rho(K_{\Delta,n-\Delta})>\rho(G)$, a contradiction. \end{proof} The operation of edge transformation is a classic tool in spectral graph theory. The following lemma appeared in a number of references (see, for example, \cite{Cvetkovic1997,Stevanovic2018,Wu2005}). \begin{lemma}{\rm(\cite{Cvetkovic1997,Stevanovic2018,Wu2005})}\label{lem3} Let $u$ and $v$ be two vertices of a connected graph $G$, and let $S\subseteq N(u)\backslash N(v)$. Let $G'=G-\{wu:w\in S\}+\{wv:w\in S\}$. If $\mathtt{x}(v)\geq \mathtt{x}(u)$, where $\mathtt{x}$ is the principal eigenvector of $G$, then $\rho(G')>\rho(G)$. \end{lemma} In the following we present the proof of Theorem \ref{th2}. We remark that $H_{n,\Delta}$ is an irregular bipartite graph on $n$ vertices with maximum degree $\Delta$. If $2\Delta>n$, then $H_{n,\Delta}$ contains $\Delta$ unsaturated vertices. If $2\Delta=n$ or $n-1$, then $H_{n,\Delta}$ contains two unsaturated vertices. Note also that $H_{6,3}\cong B_{6}$ and $H_{7,3}\cong B_{7}$ (see Fig. \ref{fig1}). \medskip \noindent \textbf{Proof of Theorem \ref{th2}.} Suppose that $G$ attains the maximum spectral radius among irregular bipartite graphs on $n$ vertices with maximum degree $\Delta$. If $\Delta>\lfloor n/2 \rfloor$, then $\Delta>n-\Delta$. For this case, it follows from Lemma \ref{lem2} that $G\cong H_{n,\Delta}$. Suppose now that $\Delta=\lfloor n/2 \rfloor$. We may assume that the bipartition of $G$ is $(X,Y)$, where $|X|\leq |Y|$. According to the parity of $n$, we divide the proof into two cases. \smallskip \noindent{\bf Case 1}. $n$ is even, that is, $n=2k$ for some $k\geq 2$. \smallskip Here, $\Delta=k$.
Thus, one can see that $|Y|\geq k$. Note that $K_{k,k-1}$ is a proper subgraph of $H_{n,\Delta}$. Thus, $\rho(H_{n,\Delta})>\rho(K_{k,k-1})=\sqrt{k(k-1)}$. If $|Y|\geq k+1$, then $|X|\leq k-1$. Hence, $|E(G)|\leq \Delta|X|\leq k(k-1)$. By Lemma \ref{lem1}, we obtain that $\rho(G)\leq \sqrt{k(k-1)}<\rho(H_{n,\Delta})$, a contradiction. Therefore, $|Y|=k$, which implies that $|X|=|Y|=k$. Obviously, $G$ is a proper subgraph of $K_{k,k}$. Moreover, since the spectral radius strictly increases when edges are added, $G$ is the graph obtained from $K_{k,k}$ by deleting one edge, and hence $G\cong H_{n,\Delta}$. \smallskip \noindent{\bf Case 2}. $n$ is odd, that is, $n=2k+1$ for some $k\geq 2$. \smallskip In this case, $\Delta=k$. If $|Y|\geq k+2$, then $|X|=n-|Y|\leq k-1$. Thus, $|E(G)|\leq \Delta|X|\leq k(k-1)$. It follows from Lemma \ref{lem1} that $\rho(G)\leq \sqrt{k(k-1)}$. However, since $K_{k,k-1}$ is also a proper subgraph of $H_{n,\Delta}$, we have $\rho(H_{n,\Delta})>\rho(K_{k,k-1})=\sqrt{k(k-1)}$. Hence, $\rho(H_{n,\Delta})>\rho(G)$, a contradiction. Suppose that $|Y|\leq k+1$. This implies that $|X|=k$ and $|Y|=k+1$. Let $Y^{*}=\{v^{*}\in Y: d(v^{*})<k\}$. Thus, for any vertex $v\in Y\backslash Y^{*}$, we have $d(v)=k$, and hence $v$ is adjacent to all vertices of $X$. If $|Y\backslash Y^{*}|\geq k$, then the subgraph of $G$ induced by $X\cup (Y\backslash Y^{*})$ is isomorphic to $K_{k,|Y\backslash Y^{*}|}$. Since $G$ is connected and $|Y|=k+1$, some vertex of $X$ is then adjacent to more than $k$ vertices of $Y$, so the maximum degree of $G$ is greater than $k$, a contradiction. Hence, $|Y\backslash Y^{*}|\leq k-1$, and so $|Y^{*}|\geq 2$. For vertices in $Y^{*}$, we obtain the following claim. \smallskip \noindent{\bf Claim}. If $u$ and $v$ are two vertices in $Y^{*}$, then $N(u)\cup N(v)=X$ and $N(u)\cap N(v)=\emptyset$. \smallskip If $X\backslash (N(u)\cup N(v))\neq \emptyset$, then we choose a vertex $w\in X\backslash (N(u)\cup N(v))$. Thus, $d(w)\leq |Y|-2=k-1$. After adding the edge $uw$, the resulting graph is still an irregular bipartite graph with maximum degree $\Delta$. However, the spectral radius of the resulting graph is greater than that of $G$, which contradicts the maximality of $G$. Therefore, we have $N(u)\cup N(v)=X$. Let $\mathtt{x}$ be the principal eigenvector of $G$. Without loss of generality, we may assume that $\mathtt{x}(u)\geq \mathtt{x}(v)$. Since $u\in Y^{*}$ and $N(u)\cup N(v)=X$, we can find a vertex $w\in N(v)\backslash N(u)$. If $N(u)\cap N(v)\neq\emptyset$, then the graph $G+wu-wv$ is also a connected irregular bipartite graph. However, according to Lemma \ref{lem3}, it follows that $\rho(G+wu-wv)>\rho(G)$, a contradiction. Therefore, $N(u)\cap N(v)=\emptyset$. The claim is proved. \smallskip Indeed, by the Claim, it is easy to see that there are at most two vertices in $Y^{*}$. Combined with the fact that $|Y^{*}|\geq 2$, this shows that $Y^{*}$ contains exactly two vertices, say $u$ and $v$. Suppose that $\mathtt{x}$ is the principal eigenvector of $G$, and $\mathtt{x}(u)\geq \mathtt{x}(v)$. Let $w^{*}$ be a vertex in $N(v)$. If $|N(v)|>1$, then we consider the graph $G'=G-\{wv:w\in N(v)\backslash \{w^{*}\}\}+\{wu:w\in N(v)\backslash \{w^{*}\}\}$. It follows from Lemma \ref{lem3} that $\rho(G')>\rho(G)$, a contradiction. Hence $|N(v)|=1$, that is, $v$ is a pendant vertex of $G$. Note also that the subgraph of $G$ induced by $X\cup (Y\backslash Y^{*})$ is isomorphic to $K_{k,k-1}$. Combining the above observations on $N(u)$ and $N(v)$, one can see that $G\cong H_{n,\Delta}$. Thus, we complete the proof. \hfill$\Box$
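\medskip \noindent For a concrete illustration of Theorem \ref{th2} (included here only as a worked example), take $n=7$ and $\Delta=4$. Then $\Delta\geq\lfloor n/2\rfloor$ and $2\Delta>n$, so $H_{7,4}\cong K_{4,3}$, and hence $$\lambda(7,4)=\rho(K_{4,3})=\sqrt{4\cdot 3}=2\sqrt{3}\approx 3.46,$$ which is indeed strictly smaller than $\Delta=4$.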
\section{Lower bound on $\Delta-\lambda(n,\Delta)$ for irregular bipartite graphs} In this section, we present a lower bound on $\Delta-\lambda(n,\Delta)$ for irregular bipartite graphs with general maximum degree $\Delta$. As mentioned above, $\lambda(n,\Delta)$ is the maximum spectral radius of the graphs in $\mathcal{B}(n,\Delta)$. Let $\Gamma$ be a graph in $\mathcal{B}(n,\Delta)$ attaining the maximum spectral radius. Hence $\rho(\Gamma)=\lambda(n,\Delta)$. In order to establish the lower bound on $\Delta-\lambda(n,\Delta)$, it suffices to establish a lower bound on $\Delta-\rho(\Gamma)$. Suppose that the bipartition of $\Gamma$ is $(X,Y)$. Let $$X^{*}=\{w\in X: d(w)<\Delta\}~~~~ \text{and} ~~~~Y^{*}=\{w\in Y: d(w)<\Delta\}.$$ The following properties of $\Gamma$ are useful. \begin{lemma}\label{lem6} The irregular bipartite graph $\Gamma$ satisfies the following properties.\\ \noindent (I) $|X^{*}\cup Y^{*}|\geq 2$.\\ \noindent (II) If $|X^{*}|\geq 1$, $|Y^{*}|\geq 1$ and $|X^{*}|+|Y^{*}|\geq 3$, then the subgraph of $\Gamma$ induced by $X^{*}\cup Y^{*}$ is isomorphic to a complete bipartite graph. \end{lemma} \begin{proof} (I) Obviously, $|X^{*}\cup Y^{*}|\geq 1$ since $\Gamma$ is irregular. If $|X^{*}\cup Y^{*}|=1$, then $\Gamma$ contains exactly one vertex of degree less than $\Delta$. Thus, $\sum_{u\in X}d(u)\neq \sum_{v\in Y}d(v).$ But this is impossible, since $\Gamma$ is bipartite and hence both sums equal $|E(\Gamma)|$. (II) Suppose, to the contrary, that there exist two nonadjacent vertices $w_{1}\in X^{*}$ and $w_{2}\in Y^{*}.$ Then we can obtain a new bipartite graph by adding the edge $w_{1}w_{2}.$ Obviously, the new bipartite graph is also irregular, since $|X^{*}|+|Y^{*}|\geq 3$. However, the spectral radius of the new bipartite graph is greater than that of $\Gamma$, which contradicts the maximality of $\Gamma$. \end{proof} Let $\mathtt{x}$ be a unit eigenvector of $\Gamma$ corresponding to $\rho(\Gamma)$. Define $$\mathtt{x}(\hat{w})=\max\{\mathtt{x}(w):w\in V(\Gamma)\}~~~~\text{and}~~~~\mathtt{x}(\check{w})=\min\{\mathtt{x}(w):w\in V(\Gamma)\}.$$ Since $\Gamma$ is irregular, $\mathtt{x}$ is not a constant vector, and hence $\hat{w}\neq \check{w}$. Next we estimate the distance between $\hat{w}$ and $\check{w}$ in $\Gamma$. \begin{lemma}\label{lem7} The distance between $\hat{w}$ and $\check{w}$ is at most $2(n-1)/\Delta$. \end{lemma} \begin{proof} Suppose that $d(\check{w})=\Delta$. Since $\mathtt{x}$ is the eigenvector corresponding to $\rho(\Gamma)$, we have $\rho(\Gamma)\mathtt{x}=A(\Gamma)\mathtt{x}$. It follows that $$\rho(\Gamma)\mathtt{x}(\check{w})=\sum_{v\in N(\check{w})}\mathtt{x}(v)\geq \Delta\mathtt{x}(\check{w}),$$ which implies that $\rho(\Gamma)\geq \Delta$, a contradiction. Therefore, we obtain that $d(\check{w})<\Delta$. Denote by $\text{dist}(\hat{w},\check{w})$ the distance between $\hat{w}$ and $\check{w}$. \smallskip \noindent{\bf Case 1}. $\hat{w}$ and $\check{w}$ belong to the same part. \smallskip Assume that $\{\hat{w},\check{w}\}\subseteq X$. Let $P: \hat{w}=u_{0}v_{1}u_{1}\cdots v_{k}u_{k}=\check{w}$ be a shortest path from $\hat{w}$ to $\check{w}$. Thus, $V(P)\cap X=\{u_{0},u_{1},\ldots, u_{k}\}$ and $V(P)\cap Y=\{v_{1},v_{2},\ldots, v_{k}\}$. We claim that $|X^{*}\cap V(P)|\leq 3$. If not, we can find two vertices $u_{p}$ and $u_{q}$ in $X^{*}\cap V(P)$ with $1\leq p<q<k$. Without loss of generality, we may suppose that $\mathtt{x}(u_{p})\leq \mathtt{x}(u_{q})$. Since $P$ is a shortest path, $v_{p}$ and $u_{q}$ are nonadjacent.
It is easy to see that the graph $\Gamma-u_{p}v_{p}+u_{q}v_{p}$ also belongs to $\mathcal{B}(n,\Delta)$. By Lemma \ref{lem3}, we have $\rho(\Gamma-u_{p}v_{p}+u_{q}v_{p})>\rho(\Gamma)$, a contradiction. Since $P$ is a shortest path, one can see that $N(u_{i})\cap N(u_{j})=\emptyset$ for $|i-j|\geq 2$. It follows that \begin{equation}\label{eq11} \left|\bigcup_{u\in V(P)\cap X}N(u)\backslash V(P)\right|\geq \left\lceil \frac{k-2}{2}\right\rceil(\Delta-2). \end{equation} If $|Y^{*}\cap V(P)|\geq 2$, then it follows from Lemma \ref{lem6} that $u_{k}$ is adjacent to all vertices of $Y^{*}\cap V(P)$, contradicting the fact that $P$ is a shortest path. Therefore, $|Y^{*}\cap V(P)|\leq 1$. Again since $P$ is a shortest path, it follows that $N(v_{i})\cap N(v_{j})=\emptyset$ for $|i-j|\geq 2$. Thus, we obtain that \begin{equation}\label{eq12} \left|\bigcup_{v\in V(P)\cap Y}N(v)\backslash V(P)\right|\geq \left\lceil \frac{k-1}{2}\right\rceil(\Delta-2). \end{equation} Combining (\ref{eq11}) and (\ref{eq12}), it follows that \begin{eqnarray*} n&\geq& |V(P)|+\left|\bigcup_{u\in V(P)\cap X}N(u)\backslash V(P)\right|+\left|\bigcup_{v\in V(P)\cap Y}N(v)\backslash V(P)\right|\\ &\geq &2k+1+\left\lceil \frac{k-2}{2}\right\rceil(\Delta-2)+\left\lceil \frac{k-1}{2}\right\rceil(\Delta-2)\\ &\geq& k\Delta+1, \end{eqnarray*} which implies that $k\leq (n-1)/\Delta$. Hence $\text{dist}(\hat{w},\check{w})=2k\leq 2(n-1)/\Delta$. \smallskip \noindent{\bf Case 2}. $\hat{w}$ and $\check{w}$ belong to different parts. \smallskip By symmetry, we may assume that $\hat{w}\in Y$ and $\check{w}\in X$. Let $P: \hat{w}=v_{1}u_{1}\cdots v_{k}u_{k}=\check{w}$ be a shortest path from $\hat{w}$ to $\check{w}$. Thus, $V(P)\cap X=\{u_{1},u_{2},\ldots, u_{k}\}$ and $V(P)\cap Y=\{v_{1},v_{2},\ldots, v_{k}\}$. If $|V(P)\cap X^{*}|\geq 3$, then there exist two vertices $u_{p}$ and $u_{q}$ in $V(P)\cap X^{*}$ with $1\leq p<q<k$. We may assume that $\mathtt{x}(u_{p})\leq \mathtt{x}(u_{q})$. Since $P$ is a shortest path, $v_{p}$ and $u_{q}$ are nonadjacent. It is easy to see that the graph $\Gamma-u_{p}v_{p}+u_{q}v_{p}$ also belongs to $\mathcal{B}(n,\Delta)$. By Lemma \ref{lem3}, we have $\rho(\Gamma-u_{p}v_{p}+u_{q}v_{p})>\rho(\Gamma)$, a contradiction. Hence $|V(P)\cap X^{*}|\leq 2$. If $|V(P)\cap Y^{*}|\geq 2$, it follows from Lemma \ref{lem6} that $u_{k}$ is adjacent to all vertices in $Y^{*}\cap V(P)$. Hence $P$ cannot be a shortest path, a contradiction. Therefore, $|V(P)\cap Y^{*}|\leq 1$. Similar to Case 1, we still see that $$\left|\bigcup_{u\in V(P)\cap X}N(u)\backslash V(P)\right|\geq \left\lceil \frac{k-2}{2}\right\rceil(\Delta-2),$$ and $$\left|\bigcup_{v\in V(P)\cap Y}N(v)\backslash V(P)\right|\geq \left\lceil \frac{k-1}{2}\right\rceil(\Delta-2).$$ Combining the above inequalities, it follows that \begin{eqnarray*} n&\geq& |V(P)|+\left|\bigcup_{u\in V(P)\cap X}N(u)\backslash V(P)\right|+\left|\bigcup_{v\in V(P)\cap Y}N(v)\backslash V(P)\right|\\ &\geq& 2k+\left\lceil \frac{k-2}{2}\right\rceil(\Delta-2)+\left\lceil \frac{k-1}{2}\right\rceil(\Delta-2)\\ &\geq& k\Delta. \end{eqnarray*} Thus, we have $\text{dist}(\hat{w},\check{w})=2k-1\leq (2n-\Delta)/\Delta\leq 2(n-1)/\Delta$, as required. \end{proof} The next theorem presents a lower bound for $\Delta-\rho(\Gamma)$. Before proceeding with the proof, let us recall an inequality proposed by Shi \cite[Lemma 1]{Shi2009}. If $a,b>0$, then \begin{equation}\label{eq14} a(p-q)^{2}+bq^{2}\geq\frac{abp^{2}}{a+b}, \end{equation} with equality if and only if $q=ap/(a+b)$.
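For completeness, we note that (\ref{eq14}) can be verified by completing the square: for $a,b>0$, $$a(p-q)^{2}+bq^{2}=(a+b)\left(q-\frac{ap}{a+b}\right)^{2}+\frac{abp^{2}}{a+b},$$ from which both the inequality and the equality condition are immediate.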
\begin{theorem}\label{th4} The spectral radius $\rho(\Gamma)$ satisfies $$\Delta-\rho(\Gamma)> \frac{2\Delta}{n(4n+\Delta-4)}.$$ \end{theorem} \begin{proof} Let $\mathtt{x}$, $\hat{w}$ and $\check{w}$ be defined as above. Since $\mathtt{x}$ is the eigenvector corresponding to $\rho(\Gamma)$, by the Rayleigh quotient, we have \begin{equation*} \rho(\Gamma)=2\sum_{uv\in E(\Gamma)}\mathtt{x}(u)\mathtt{x}(v). \end{equation*} Similar to the arguments in the proof of Lemma \ref{lem5}, we obtain that \begin{eqnarray}\label{eq13} \begin{split} \Delta-\rho(\Gamma)&=\Delta \mathtt{x}^{t}\mathtt{x}-2\sum_{uv\in E(\Gamma)}\mathtt{x}(u)\mathtt{x}(v)\\ &=\sum_{uv\in E(\Gamma)}(\mathtt{x}(u)-\mathtt{x}(v))^{2}+\sum_{v\in X^{*}\cup Y^{*}}(\Delta-d(v))\mathtt{x}(v)^{2}. \end{split} \end{eqnarray} Let $P$ be a shortest path between $\hat{w}$ and $\check{w}$, and let $k$ be its length. Using the Cauchy-Schwarz inequality, it follows that \begin{eqnarray} \sum_{uv\in E(\Gamma)}(\mathtt{x}(u)-\mathtt{x}(v))^{2}\geq \sum_{uv\in E(P)}(\mathtt{x}(u)-\mathtt{x}(v))^{2} \geq \frac{1}{k}\left(\sum_{uv\in E(P)}(\mathtt{x}(u)-\mathtt{x}(v))\right)^{2} =\frac{1}{k}(\mathtt{x}(\hat{w})-\mathtt{x}(\check{w}))^{2}. \end{eqnarray} Recall that $\mathtt{x}(\hat{w})$ and $\mathtt{x}(\check{w})$ are the maximum and minimum components of $\mathtt{x}$, respectively. Since $\mathtt{x}$ is a non-constant unit vector, one can see that $$\mathtt{x}(\check{w})^{2}<\frac{1}{n}<\mathtt{x}(\hat{w})^{2}.$$ By Lemma \ref{lem6}, we have $|X^{*}\cup Y^{*}|\geq 2$, and hence \begin{eqnarray} \sum_{v\in X^{*}\cup Y^{*}}(\Delta-d(v))\mathtt{x}(v)^{2}\geq \mathtt{x}(\check{w})^{2}\sum_{v\in X^{*}\cup Y^{*}}(\Delta-d(v)) \geq 2\mathtt{x}(\check{w})^{2}. \end{eqnarray} According to (\ref{eq14}), it follows that \begin{eqnarray}\label{eq15} \frac{1}{k}(\mathtt{x}(\hat{w})-\mathtt{x}(\check{w}))^{2}+2\mathtt{x}(\check{w})^{2}\geq \frac{2}{2k+1}\mathtt{x}(\hat{w})^{2} >\frac{2}{n(2k+1)}. \end{eqnarray} Combining Lemma \ref{lem7} and (\ref{eq13})-(\ref{eq15}), we have $$\Delta-\rho(\Gamma)>\frac{2}{n(2k+1)}\geq \frac{2\Delta}{n(4n+\Delta-4)},$$ which completes the proof. \end{proof} Since $\rho(\Gamma)=\lambda(n,\Delta)$, Theorem \ref{th3} follows from Theorem \ref{th4}. It should be noted that the spectral radius of any connected irregular bipartite graph on $n$ vertices with maximum degree $\Delta$ is not greater than that of $\Gamma$. Hence Theorem \ref{th4} immediately implies the following consequence. \begin{corollary} Let $G$ be a connected irregular bipartite graph on $n$ vertices with maximum degree $\Delta$. Then $$\Delta-\rho(G)>\frac{2\Delta}{n(4n+\Delta-4)}.$$ \end{corollary} For bipartite graphs, this bound is stronger than the bound of Stevanovi\'{c} in Theorem \ref{th5}. \section{Concluding remarks} The maximum spectral radius of irregular bipartite graphs is considered in this paper. In order to get a better estimate of $\lambda(n,\Delta)$, we need to find the extremal graph in $\mathcal{B}(n,\Delta)$ with maximum spectral radius. The extremal graph in $\mathcal{B}(n,3)$ is completely determined in \cite{Xue2022+}. If $\Delta\geq \left\lfloor n/2\right\rfloor$, the extremal graph in $\mathcal{B}(n,\Delta)$ is determined in Theorem \ref{th2}. For other cases, the extremal graph in $\mathcal{B}(n,\Delta)$ is still unknown. Lemmas \ref{lem6} and \ref{lem7} present some structural properties of the extremal graph, which may be useful for finding the extremal graph in $\mathcal{B}(n,\Delta)$. To determine the extremal graph, it is crucial to determine its degree sequence.
The degree sequence of the extremal graph in $\mathcal{F}(n,\Delta)$ was considered by Liu and Li \cite{Liu2008}. They conjectured that the extremal graph in $\mathcal{F}(n,\Delta)$ has exactly one vertex of degree less than $\Delta$. Until now, very little progress has been made on the degree sequence of the extremal graph in $\mathcal{F}(n,\Delta)$ (see \cite{Liu2009,Liu2012}). Naturally, we focus on finding the possible degree sequences of the extremal graph in $\mathcal{B}(n,\Delta)$. We have seen from Lemma \ref{lem6} that the extremal graph in $\mathcal{B}(n,\Delta)$ has at least two vertices of degree less than $\Delta$. An analogous problem is that of finding the minimum algebraic connectivity of regular graphs. The cubic graph with minimum algebraic connectivity was determined in \cite{Brand2007}. Recently, Abdi, Ghorbani and Imrich \cite{Abdi2021} obtained the asymptotic value of the minimum algebraic connectivity of cubic graphs, and presented the structure of the quartic graph with minimum algebraic connectivity. For the bipartite case, we investigated the minimum algebraic connectivity of cubic bipartite graphs. Moreover, the unique cubic bipartite graph with minimum algebraic connectivity was also completely characterized. The structure of the extremal graph with minimum algebraic connectivity is very similar to that of the extremal graph $B_{n}$. We therefore believe that this may be an effective approach for finding the extremal graph with maximum spectral radius in $\mathcal{B}(n,\Delta)$. \section*{Acknowledgements} This work was supported by National Natural Science Foundation of China (Nos. 12001498 and 11971445) and Natural Science Foundation of Henan Province (No. 202300410377). \begin{thebibliography}{99} \bibitem{Abdi2021} M. Abdi, E. Ghorbani, W. Imrich, Regular graphs with minimum spectral gap, \emph{European J. Combin.} \textbf{95} (2021) 103328. \bibitem{Brualdi1986} R.A. Brualdi, E.S. Solheid, On the spectral radius of complementary acyclic matrices of zeros and ones, \emph{SIAM J. Algebra Discrete Methods} \textbf{7} (1986) 265--272. \bibitem{Bhattacharya2008} A. Bhattacharya, S. Friedland, U.N. Peled, On the first eigenvalue of bipartite graphs, \emph{Electron. J. Combin.} \textbf{15} (2008) R144. \bibitem{Brand2007} C. Brand, B. Guiduli, W. Imrich, The characterization of cubic graphs with minimal eigenvalue gap, \emph{Croat. Chem. Acta} \textbf{80} (2007) 193--201. \bibitem{Cvetkovic1997} D. Cvetkovi\'{c}, P. Rowlinson, S. Simi\'{c}, Eigenspaces of Graphs, Cambridge University Press, Cambridge, 1997. \bibitem{Cioaba2007EJC} S.M. Cioab\u{a}, The spectral radius and the maximum degree of irregular graphs, \emph{Electron. J. Combin.} \textbf{14} (2007) R38. \bibitem{Cioaba2007JCTB} S.M. Cioab\u{a}, D.A. Gregory, V. Nikiforov, Extreme eigenvalues of nonregular graphs, \emph{J. Combin. Theory Ser. B} \textbf{97} (2007) 483--486. \bibitem{Liu2007} B.L. Liu, J. Shen, X.M. Wang, On the largest eigenvalue of nonregular graphs, \emph{J. Combin. Theory Ser. B} \textbf{97} (2007) 1010--1018. \bibitem{Liu2008} B.L. Liu, G. Li, A note on the largest eigenvalue of non-regular graphs, \emph{Electron. J. Linear Algebra} \textbf{17} (2008) 54--61. \bibitem{Liu2009} B.L. Liu, Y.F. Huang, Z.F. You, On $\lambda_{1}$-extremal non-regular graphs, \emph{Electron. J. Linear Algebra} \textbf{18} (2009) 735--744. \bibitem{Liu2012} M.H. Liu, The (signless Laplacian) spectral radii of connected graphs with prescribed degree sequences, \emph{Electron. J. Combin.} \textbf{19} (2012) \#R35. \bibitem{Liu2022} L.L.
Liu, Extremal spectral radius of nonregular graphs with prescribed maximum degree, arXiv: 2203.10245v2. \bibitem{Stevanovic2004} D. Stevanovi\'{c}, The largest eigenvalue of nonregular graphs, \emph{J. Combin. Theory Ser. B} \textbf{91} (2004) 143--146. \bibitem{Stevanovic2018} D. Stevanovi\'{c}, Spectral radius of graphs, \emph{Birkh\"{a}user/Springer, Cham}, 2018. \bibitem{Shi2009} L.S. Shi, The spectral radius of irregular graphs, \emph{Linear Algebra Appl.} \textbf{431} (2009) 189--196. \bibitem{Willms2008} A.R. Willms, Analytic results for the eigenvalues of certain tridiagonal matrices, \emph{SIAM J. Matrix Anal. Appl.} \textbf{30} (2008) 639--656. \bibitem{Wu2005} B.F. Wu, E.L. Xiao, Y. Hong, The spectral radius of trees on $k$ pendant vertices, \emph{Linear Algebra Appl.} \textbf{395} (2005) 343--349. \bibitem{Xue2022+} J. Xue, R.F. Liu, Maxima of spectral radius of irregular graphs with given maximum degree, \emph{submitted}. \bibitem{Zhang2005} X.D. Zhang, Eigenvectors and eigenvalues of nonregular graphs, \emph{Linear Algebra Appl.} \textbf{409} (2005) 79--86. \end{thebibliography} \end{document}
2205.07667v3
http://arxiv.org/abs/2205.07667v3
Selfadhesivity in Gaussian conditional independence structures
\documentclass[11pt, a4paper]{tboege-preprint} \pdfoutput=1 \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{tboege-bordermatrix} \newcommand{\sa}{\SF{sa}} \newcommand{\A}{\CC{A}} \newcommand{\Id}{\BBm1} \newcommand{\PD}{\RM{PD}} \title{Selfadhesivity in Gaussian \\ conditional independence structures} \author{Tobias Boege} \address{\mbox{Tobias Boege, Department of Mathematics, KTH Stockholm, Sweden}} \email{[email protected]} \date{\today} \subjclass[2020]{62R01, 62B10, 15A29, 05B20} \keywords{ selfadhesivity, adhesive extension, positive definite matrix, conditional independence, structural semigraphoid, orientable gaussoid} \begin{document} \begin{abstract} Selfadhesivity is a property of entropic polymatroids which guarantees that the polymatroid can be glued to an identical copy of itself along arbitrary restrictions such that the two pieces are independent given the common restriction. We show that positive~definite matrices satisfy this condition as well and examine consequences for Gaussian conditional independence structures. New axioms of Gaussian CI are obtained by applying selfadhesivity to the previously known axioms of structural semigraphoids and orientable gaussoids. \end{abstract} \maketitle \section{Introduction} In matroid theory, the term \emph{amalgam} refers to a matroid in which two smaller matroids are glued together along a common restriction, similar to how four triangles can be glued together along edges to form the boundary of a tetrahedron. This concept is meaningful for conditional independence (CI) structures as~well. The bridge from the geometric (matroid-theoretical) concept to probability theory (conditional independence) was built by Matúš \cite{MatusAdhesive} who defined a special kind of amalgam, the \emph{adhesive extension}, for polymatroids and proved that such extensions always exist for entropic polymatroids with a common restriction. The purpose of this article is two-fold: First, it is to extend this methodology beyond polymatroids and to introduce a derived collection of amalgamation properties known as \emph{selfadhesivity} for general conditional independence structures. Second, this general treatment of selfadhesivity is driven by its applications to Gaussian instead of discrete CI~inference. The main result, \Cref{thm:Adhe}, shows that, also in the Gaussian setting, adhesive extensions (of covariance matrices) exist and are even unique. We use the non-trivial structural constraints implied by this result to derive new axioms for Gaussian conditional independence structures. These results are heavily based on computations. All source code and further details on computations are provided on the mathematical research data repository \emph{MathRepo} hosted by the Max-Planck Institute for Mathematics in the Sciences; the output data, due to its size, is available only on the archiving service \emph{KEEPER} of the Max-Planck Society: \begin{center} \begin{tabular}{rl} MathRepo: & \url{https://mathrepo.mis.mpg.de/SelfadhesiveGaussianCI/} \\ KEEPER: & \url{https://keeper.mpdl.mpg.de/d/fbfe463162e94a14ac28/} \end{tabular} \end{center} \section{Preliminaries} \paragraph{Gaussian conditional independence} Let $N$ be a finite ground set indexing jointly distributed random variables $\xi = (\xi_i : i \in N)$. By convention, elements of $N$ are denoted by $i,j,k, \dots$ and subsets by $I,J,K, \dots$. 
Elements are identified with singleton subsets of $N$ and juxtaposition of subsets abbreviates set union. Thus, an expression such as $iK$ is shorthand for $\Set{i} \cup K$ as a subset of~$N$. The complement of $K \subseteq N$ is $K^\co$. The set of all $k$-element subsets of $N$ is $\binom{N}{k}$ and the powerset of $N$ is $2^N$. We are mostly interested in Gaussian (i.e., multivariate normal) distributions. These distributions are specified by a small number of parameters, namely by the mean vector $\mu \in \BB R^N$ and the covariance matrix $\Sigma \in \PD_N$, where $\PD_N$ is the set of positive definite matrices. Throughout this article, ``Gaussian'' means ``regular Gaussian'', i.e., the covariance matrix is strictly positive~definite. For positive semidefinite covariance matrices, which lie on the boundary of $\PD_N$, the CI~theory is algebraically more complicated and valid inference properties for regular Gaussians can fail to be valid for singular ones; see \cite[Section~2.3.6]{Studeny}. The following result summarizes basic facts from algebraic statistics relating subvectors of $\xi$ and their (positive~definite) covariance matrices. It can be found, for instance, in \S2.4 of \cite{Sullivant}. For $\Sigma \in \PD_N$ and $I,J,K\subseteq N$, let $\Sigma_{I,J}$ denote the submatrix with rows indexed by $I$ and columns by~$J$. Submatrices of the form $\Sigma_K \defas \Sigma_{K,K}$ are \emph{principal}. Dual to a principal submatrix is its \emph{Schur complement} $\Sigma^K \defas \Sigma_{K^\co} - \Sigma_{K^\co,K} \Sigma_K^{-1} \Sigma_{K,K^\co}$ in~$\Sigma$. Its rows and columns are indexed by $K^\co$ and its entries are functions of all the entries of $\Sigma$. Principal submatrices of and Schur complements in positive~definite matrices are also~positive~definite. The Schur complement construction is valid in greater generality which we will need below as well. Let $A$ be any (not necessarily positive definite, or even square) matrix whose rows are indexed by $IK$ and columns by $JK$, where $I,J,K$ are pairwise disjoint, and suppose that the principal submatrix $A_K$ is invertible. Then the Schur complement $A^K = A_{I,J} - A_{I,K} A_K^{-1} A_{K,J}$ is well-defined and its rows are indexed by $I$ and its columns by~$J$. See \cite{Zhang} for an introduction to theory of Schur complements in matrix analysis. \begin{theorem*} Let $\xi$ be distributed according to the (regular) Gaussian distribution with mean $\mu \in \BB R^N$ and covariance $\Sigma \in \PD_N$. Let $K\subseteq N$. \begin{itemize}[itemsep=0.6em] \item The marginal vector $\xi_K = (\xi_k : k \in K)$ is a regular Gaussian in $\BB R^K$ with mean vector $\mu_K$ and covariance~$\Sigma_K$. \item Given $y \in \BB R^K$, the conditional $\xi_{K^\co} \mid \xi_K = y$ is a regular Gaussian in $\BB R^{K^\co}$ with mean vector $\mu_{K^\co} + \Sigma_{K^\co,K} \Sigma_K^{-1} (y - \mu_K)$ and covariance $\Sigma^K$. \item Let a Gaussian distribution over $N = IJ$ be given with covariance~$\Sigma \in \PD_{IJ}$. Then~the marginal independence $\CI{\xi_I,\xi_J}$ holds if and only if $\Sigma_{I,J} = 0$. \qedhere \end{itemize} \end{theorem*} The general CI~statement $\CI{\xi_I,\xi_J|\xi_K}$, with $I,J,K$ pairwise disjoint, is the result of marginalizing $\xi$ to $IJK$, conditioning on $K$ and then checking for independence of $I$ and $J$. 
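To make this concrete, consider the smallest non-trivial case $N=\Set{1,2,3}$ with $I=1$, $J=2$ and $K=3$ (this worked instance is added only for illustration). The conditional distribution of $(\xi_1,\xi_2)$ given $\xi_3$ has covariance matrix $\Sigma^{3}=\Sigma_{12}-\Sigma_{12,3}\Sigma_{3}^{-1}\Sigma_{3,12}$, whose off-diagonal entry is $\Sigma_{1,2}-\Sigma_{1,3}\Sigma_{3}^{-1}\Sigma_{3,2}$. Hence $\CI{\xi_1,\xi_2|\xi_3}$ holds if and only if this entry vanishes, that is, if and only if \[ \det \Sigma_{13,23} \;=\; \Sigma_{1,2}\Sigma_{3,3}-\Sigma_{1,3}\Sigma_{3,2} \;=\; 0. \]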
The previous theorem implies the following algebraic CI~criteria for regular Gaussians: \begin{align*} \CI{\xi_I,\xi_J|\xi_K} &\;\Leftrightarrow\; \left( \Sigma_{IJ} - \Sigma_{IJ,K} \Sigma_K^{-1} \Sigma_{K,IJ} \right)_{I,J} = 0 \\ &\;\Leftrightarrow\; \Sigma_{I,J} - \Sigma_{I,K} \Sigma_K^{-1} \Sigma_{K,J} = 0 \tag{$\CIperp_1$} \label{eq:CIL} \\ &\;\Leftrightarrow\; \Rk \Sigma_{IK,JK} = |K|. \tag{$\CIperp_2$} \label{eq:CI} \end{align*} Here, $\Rk$ denotes the rank of a matrix and the last equivalence follows from rank additivity of the Schur complement (see \cite{Zhang}). Indeed, the matrix in \eqref{eq:CIL} is the Schur complement of $K$ in $\Sigma_{IK,JK}$ and must have rank zero since the principal submatrix $\Sigma_K$ has full rank $|K|$ already because it is positive~definite. In particular, the truth of a conditional independence statement does not depend on the conditioning event and it does not depend on the mean~$\mu$. Hence, for CI purposes in this article, we identify regular Gaussians with their covariance matrices~$\Sigma \in \PD_N$. Rank additivity of the Schur complement also shows that the ``$\ge$'' part of the rank condition in \eqref{eq:CI} always holds. Hence, the minimal rank $|K|$ is attained if and only if all minors of $\Sigma_{IK,JK}$ of size $|K|+1$ vanish. But only a subset of these minors is necessary: by \eqref{eq:CIL} the rank of $\Sigma_{IK,JK}$ is $|K|$ if and only if $\Sigma_{I,J} = \Sigma_{I,K}\Sigma_K^{-1}\Sigma_{K,J}$ holds. This is one polynomial condition for each $i \in I$ and $j \in J$, namely $\det \Sigma_{iK,jK} = 0$ --- again by Schur complement expansion of the determinant. These minors correspond to CI~statements of the form $\CI{\xi_i,\xi_j|\xi_K}$. This proves the following ``localization rule'' for Gaussian conditional independence: \[ \label{eq:L} \tag{L} \CI{\xi_I,\xi_J|\xi_K} \;\Leftrightarrow\; \bigwedge_{i \in I, j \in J} \CI{\xi_i,\xi_j|\xi_K}. \] Rules of this form go back to \cite{MatusAscending}. A weaker localization rule~\eqref{eq:L'} (discussed below) holds for all semigraphoids, whereas the one presented above can be proved for compositional graphoids; see~\cite{UnifyingMarkov} in the context of graphical models. In~both cases, a general CI~statement is reduced to a conjunction of \emph{elementary CI~statements} $\CI{\xi_i,\xi_j|\xi_K}$ about the independence of two singletons. We adopt the form $\CI{I,J|K}$ for CI~statements $\CI{\xi_I,\xi_J|\xi_K}$ without the mention of a random vector. These symbols are treated as combinatorial objects and $\A_N \defas \{\, \CI{i,j|K} : ij \in \binom{N}{2}, \allowbreak {K \subseteq N \setminus ij} \,\}$ is the set of all elementary CI~statements. The \emph{CI~structure} of $\Sigma$ is the set \[ \CIS{\Sigma} \defas \Set{ \CI{i,j|K} \in \A_N : \det \Sigma_{iK,jK} = 0 }. \] The localization rule shows that $\CIS{\Sigma}$ encodes the entire set of true CI~statements for a Gaussian with covariance matrix~$\Sigma$ and with slight abuse of notation we employ statements such as $\CI{I,J|K} \in \CIS{\Sigma}$. It is important to note in this context that we treat only \emph{pure} CI~statements, i.e., $\CI{I,J|K}$ where $I,J,K$ are pairwise disjoint. Any general CI~statement with overlaps between the three sets decomposes, analogously to the localization rule, into a conjunction of pure CI~statements and functional dependence statements. For~a~regular Gaussian, functional dependences are always false, so this is no restriction in generality. 
In particular, the general statement $\CI{N,M|L}$, which frequently appears later, is equivalent to $\CI{(N\setminus L), (M\setminus L)|L}$ which is pure provided that~${L \supseteq N \cap M}$. \paragraph{Polymatroids and selfadhesivity} A \emph{polymatroid} over the finite ground set $N$ is a function $h: 2^N \to \BB R$ assigning to every subset $K \subseteq N$ a real number, such that $h$ is \begin{description}[itemsep=0pt] \item[normalized] $h(\emptyset) = 0$, \item[isotone] $h(I) \le h(J)$ for $I \subseteq J$, \item[submodular] $h(I) + h(J) \ge h(I \cup J) + h(I \cap J)$. \end{description} With the linear functional $\CId{I,J|K} \cdot h \defas h(IK) + h(JK) - h(IJK) - h(K)$, submodularity can be restated as $\CId{I,J|K} \cdot h \ge 0$ for all pairwise disjoint $I, J, K$. If $h_\xi$ is the entropy vector of a discrete random vector~$\xi$, i.e., $h_\xi(K)$ is the Shannon entropy of the marginal vector $\xi_K$, then it is a polymatroid and the quantity $\CId{I,J|K} \cdot h_\xi$ is known as the \emph{conditional mutual information} $I(\xi_I; \xi_J | \xi_K)$. Its vanishing is equivalent to the conditional independence $\CI{\xi_I,\xi_J|\xi_K}$. Hence we may define the CI~structure of a polymatroid as $\CIS{h} \defas \Set{ \CI{i,j|K} \in \A_N : \CId{ij|K} \cdot h = 0 }$. These structures are called \emph{(elementary) semimatroids} in \cite{MatusMatroids} and (equivalently, but based on properties of multiinformation instead of entropy vectors) \emph{structural semigraphoids} in \cite{StudenyStructural}. Again, per \cite{MatusMatroids} a localization rule holds for them which we use to interpret the containment of non-elementary CI~statements: \[ \label{eq:L'} \tag{$\text{L}'$} \CI{I,J|K} \in \CIS{h} \;\Leftrightarrow\; \bigwedge_{\substack{i \in I, j \in J, \\ K \subseteq L \subseteq IJK \setminus ij}} \CI{i,j|L} \in \CIS{h}. \] This rule can be proved from the semigraphoid axioms and hence it holds true also for Gaussians. In this case, it is equivalent to the shorter rule~\eqref{eq:L} using that Gaussians are compositional graphoids. Matúš in~\cite{MatusAdhesive} introduced the notions of adhesive extensions and selfadhesive polymatroids to mimic a curious amalgamation property of entropy vectors. The underlying construction is the \emph{Copy lemma} of \cite{ZhangYeung}, also known as the \emph{conditional product}; see \cite[Section~2.3.3]{Studeny}. For any polymatroid $g: 2^N \to \BB R$ and subset $L \subseteq N$ the \emph{restriction} $g|_L: 2^L \to \BB R$ given by $g|_L(K) \defas g(K)$, $K \subseteq L$, is again a polymatroid. Let $g$ and $h$ be two polymatroids on ground sets $N$~and~$M$, respectively, and suppose that their restrictions $g|_L$ and $h|_L$ to $L = N \cap M$ coincide. A~polymatroid $f$ on $NM$ is an \emph{adhesive extension} of $g$ and $h$ if: \begin{itemize}[itemsep=0pt] \item $f|_N = g$ and $f|_M = h$, \item $\CI{N,M|L} \in \CIS{f}$. \end{itemize} Since $L \subseteq N$ and $L \subseteq M$, the statement $\CI{N,M|L}$ is naturally equivalent to the pure CI~statement $\CI{N',M'|L}$ with $N' = N \setminus L$ and $M' = M \setminus L$. In polymatroidal terms, $N$~and $M$ are said to form a \emph{modular pair} in $f$ if this CI~statement holds. Next, suppose that we have only one polymatroid $h$ on ground set $N$ and fix $L \subseteq N$. An \emph{$L$-copy} of $N$ is a finite set $M$ with $|M| = |N|$ and $M \cap N = L$. We fix a bijection $\pi: N \to M$ which preserves $L$ pointwise. 
The polymatroid $h$ is a \emph{selfadhesive polymatroid at $L$} if there exists a polymatroid $\ol{h}$ which is an adhesive extension of $h$ and its induced copy $\pi(h)$ over their common restriction to~$L$. The polymatroid is \emph{selfadhesive} if it is selfadhesive at every $L \subseteq N$. The fundamental result of \cite{MatusAdhesive} is: \begin{theorem*} Any two of the restrictions of an entropic polymatroid have an entropic adhesive extension. In particular, entropy vectors are selfadhesive. \end{theorem*} \begin{remark} The set of polymatroids on $N$ which are selfadhesive forms a rational, polyhedral cone in $\BB R^{2^N}$. To see this, let $N$, a subset $L \subseteq N$ and an $L$-copy $M$ of $N$ with bijection $\pi$ be fixed. The conditions for a pair $(h, \ol{h})$, where $h: 2^N \to \BB R$ and $\ol{h}: 2^{NM} \to \BB R$, to be polymatroids and $\ol{h}$ to be an adhesive extension of $h$ and $\pi(h)$ are homogeneous linear equalities and inequalities with integer coefficients in the entries of $h$ and $\ol{h}$. Hence, the set of such pairs is a rational, polyhedral cone in $\BB R^{2^N} \times \BB R^{2^{NM}}$. By the Fourier--Motzkin elimination theorem \cite[Theorem~1.4]{Ziegler}, these properties are inherited by the projection down to $\BB R^{2^N}$ which consists of all polymatroids $h$ which are selfadhesive at~$L$. Intersecting these cones for all $L$ gives the desired set of selfadhesive polymatroids and shows that this set is a rational, polyhedral cone. \end{remark} \begin{remark} Linear inequalities which are valid for entropic polymatroids are called \emph{information inequalities}. The above observation implies that selfadhesivity, as a necessary condition for entropicness, captures only finitely many information inequalities for each fixed~$N$. By contrast, Matúš~\cite{MatusInfinite} showed that even for $|N| = 4$ there are infinitely many irredundant information inequalities. In the $|N| = 4$ case, the cone of selfadhesive polymatroids is characterized (in addition to the polymatroid properties) by the validity of the Zhang--Yeung inequalities (see \Cref{rem:ZY}). In~this sense, selfadhesivity is a reformulation of the Zhang--Yeung inequalities using only the notions of restriction and conditional independence. The generalization of the concept of adhesive extension to more than one $L$-copy of a polymatroid leads to the \emph{book inequalities} of~\cite{BookIneq}. \end{remark} \section{Adhesive extensions of Gaussians} The analogous result for Gaussian covariance matrices is our main theorem: \begin{theorem} \label{thm:Adhe} Let $\Sigma \in \PD_N$ and $\Sigma' \in \PD_M$ be two covariance matrices with common restriction $\Sigma_L = \Sigma'_L$, where $L = N \cap M$. There exists a unique $\Phi \in \PD_{NM}$ such that: \begin{itemize}[itemsep=0pt] \item $\Phi_N = \Sigma$ and $\Phi_M = \Sigma'$, \item $\CI{N,M|L} \in \CIS{\Phi}$. \end{itemize} \end{theorem} \begin{proof} Let $N' = N \setminus L$, $M' = M \setminus L$. We use the following names for blocks of $\Sigma$ and~$\Sigma'$: \[ \Sigma = \kbordermatrix{ & L & N' \\ L & X & A \\ N' & A^\T & Y }, \qquad\qquad \Sigma' = \kbordermatrix{ & L & M' \\ L & X & B \\ M' & B^\T & Z }. \] Consider the matrix \[ \Phi = \kbordermatrix{ & L & N' & M' \\ L & X & A & B \\ N' & A^\T & Y & \Lambda \\ M' & B^\T & \Lambda^\T & Z }, \] where $\Lambda$ will be determined shortly. Its restrictions to $N$ and $M$ are clearly equal to~$\Sigma$ and $\Sigma'$, respectively. 
The CI~statement $\CI{N,M|L}$ is equivalent to the rank requirement $\Rk \Phi_{N,M} = |N \cap M| = |L|$, but then rank additivity of the Schur complement shows \[ |L| = \Rk \Phi_{N,M} = \Rk \begin{pmatrix} X & B \\ A^\T & \Lambda\end{pmatrix} = \underbrace{\Rk X}_{= |L|} + \Rk(\Lambda - A^\T X^{-1} B). \] This implies $\Lambda = A^\T X^{-1} B$ and thus $\Phi$ is uniquely determined by $\Sigma$ and $\Sigma'$ via the two conditions in the \namecref{thm:Adhe}. To show positive definiteness, consider the transformation \[ P = \kbordermatrix{ & L & N' & M' \\ L & \Id & -X^{-1} A & -X^{-1} B \\ N' & 0 & \Id & 0 \\ M' & 0 & 0 & \Id } \] of the bilinear form $\Phi$: \[ P^\T \Phi P = \begin{pmatrix} X & 0 & 0 \\ 0 & Y - A^\T X^{-1} A & 0 \\ 0 & 0 & Z - B^\T X^{-1} B \end{pmatrix} = \begin{pmatrix} \Sigma_L & 0 & 0 \\ 0 & \Sigma^L & 0 \\ 0 & 0 & \Sigma'^L \end{pmatrix}. \] The result is clearly positive~definite and since $P$ is invertible, this shows $\Phi \in \PD_{NM}$. \end{proof} \begin{remark} \label{rem:Machinery} An alternative proof of this theorem was kindly pointed out by one of the referees. It relies on viewing the existence of $\Phi$ as a positive definite matrix completion problem where the entries of $\Phi_N$ and $\Phi_M$ are prescribed and the submatrix $\Phi_{N',M'}$ is left unspecified. The machinery developed in \cite{PosdefCompletion} shows that a positive definite completion exists and that there is a unique completion $\Psi$ with maximal determinant. This matrix satisfies $(\Psi^{-1})_{N',M'} = 0$ which is equivalent to $\CI{N,M|L}$ by the duality concept in Gaussian CI~theory; cf.~\cite[Proposition~3.10]{Dissert}. \end{remark} \begin{remark} \label{rem:ZY} Zhang and Yeung~\cite{ZhangYeung} proved the first information inequality for entropy vectors which is not a consequence of the Shannon inequalities (equivalently, the polymatroid properties). It can be expressed as the non-negativity of the functional \[ \CIz{i,j|kl} \defas \CId{kl|i} + \CId{kl|j} + \CId{ij|} - \CId{kl|} + \CId{ik|l} + \CId{il|k} + \CId{kl|i}. \] Matúš~\cite{MatusAdhesive} characterized the selfadhesive polymatroids over a 4-element ground set as those polymatroids satisfying $\CIz{i,j|kl} \ge 0$ for all choices of $i,j,k,l$. As a corollary to \Cref{thm:Adhe} we obtain that the multiinformation vectors and hence the differential entropy vectors of Gaussian distributions satisfy the Zhang--Yeung inequalities. This is one half of the result proved by Lněnička~\cite{Lnenicka}. However, that result also follows from the metatheorem of Chan~\cite{ChanBalanced} since $\CIz{i,j|kl}$ is balanced. \end{remark} In the theory of regular Gaussian conditional independence structures, it is natural to relax the positive definiteness assumption on $\Sigma$ to that of \emph{principal regularity}, i.e., all principal minors, instead of being positive, are required not to vanish. Principal regularity is the minimal technical condition which allows the formation of all Schur complements and the property is inherited by principal submatrices and Schur complements, hence enabling analogues of marginalization and conditioning over general fields instead of the field $\BB R$; see \cite{Gaussant} for applications. 
However, the last step in the above proof of \Cref{thm:Adhe} requires positive definiteness and does not work for principally regular matrices: \begin{example} \label{ex:PRnotSelfadhe} Consider the following principally regular matrix over $N = ijkl$: \[ \renewcommand{\arraystretch}{1.2} \Gamma = \kbordermatrix{ & i & j & & k & l \\ i & 1 & 0 & \vrule & 0 & \sfrac1{\sqrt2} \\ j & 0 & 1 & \vrule & \sfrac1{2\sqrt2} & 0 \\ \cline{2-6} k & 0 & \sfrac1{2\sqrt2} & \vrule & 1 & \sfrac{\sqrt3}{2} \\ l & \sfrac1{\sqrt2} & 0 & \vrule & \sfrac{\sqrt3}{2} & 1 } \] and fix $L = ij$. By the proof of \Cref{thm:Adhe}, the submatrix and rank conditions uniquely determine an adhesive extension of $\Gamma$ with an $L$-copy of itself over the ground set $\I{ijklk'l'}$. This unique candidate matrix~is \[ \renewcommand{\arraystretch}{1.2} \kbordermatrix{ & i & j & & k & l & & k' & l' \\ i & 1 & 0 & \vrule & 0 & \sfrac1{\sqrt2} & \vrule & 0 & \sfrac1{\sqrt2} \\ j & 0 & 1 & \vrule & \sfrac1{2\sqrt2} & 0 & \vrule & \sfrac1{2\sqrt2} & 0 \\ \cline{2-9} k & 0 & \sfrac1{2\sqrt2} & \vrule & 1 & \sfrac{\sqrt3}{2} & \vrule & \sfrac18 & 0 \\ l & \sfrac1{\sqrt2} & 0 & \vrule & \sfrac{\sqrt3}{2} & 1 & \vrule & 0 & \sfrac12 \\ \cline{2-9} k' & 0 & \sfrac1{2\sqrt2} & \vrule & \sfrac18 & 0 & \vrule & 1 & \sfrac{\sqrt3}{2} \\ l' & \sfrac1{\sqrt2} & 0 & \vrule & 0 & \sfrac12 & \vrule & \sfrac{\sqrt3}{2} & 1 }. \] But this matrix is not principally regular, as the $\I{lk'l'}$-principal minor is zero. However, the CI~structure $\CC G = \CIS{\Gamma}$ is the dual of the graphical model for the undirected path {$i$~--~$l$~--~$k$~--~$j$}; cf.~\cite[Section~3]{LnenickaMatus}. This implies that $\CC G$ is representable by a positive definite matrix with rational entries and even though the particular matrix representation $\Gamma$ does not have a selfadhesive extension (in the sense of \Cref{thm:Adhe}), another representation of $\CC G$ exists which is positive definite and hence selfadhesive. \end{example} \section{Structural selfadhesivity} The existence of adhesive extensions and in particular selfadhesivity of positive~definite matrices induces similar properties on their CI~structures, since the conditions in \Cref{thm:Adhe} can be formulated using only the concepts of restriction and conditional independence. On the CI~level, we sometimes use the term \emph{structural selfadhesivity} to emphasize that it is generally a weaker notion than what is proved for covariance matrices above. Selfadhesivity can be used to strengthen known properties of CI~structures: if it is known that all positive~definite matrices have a certain distinguished property $\FR p$, then the fact that $\Sigma$ and any $L$-copy of it fit into an adhesive, positive~definite extension obeying $\FR p$ says more about the structure of~$\Sigma$ than $\FR p$ alone. We begin by making precise the notion of a \emph{property}: \begin{definition} Let $\FR A_N = 2^{\A_N}$ be the set of all CI~structures over $N$. For $N = [n] = \Set{1, \dots, n}$ we use abbreviations $\A_n$ and $\FR A_n$. A \emph{property} of CI~structures is an element $\FR p$ of the \emph{property lattice} \[ \FR P \defas \bigtimes_{n=1}^\infty 2^{\FR A_n}. \] \end{definition} A property $\FR p$ consists of one set $\FR p(n) \subseteq \FR A_n$ per finite cardinality~$n$. This is the set of CI~structures over~$[n]$ which ``have property $\FR p$''. 
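To get a sense of the size of these lattices, note that $|\A_n| = \binom{n}{2}\,2^{\,n-2}$; for example, $|\A_3|=6$ and $|\A_4|=24$, so the components $\FR A_3$ and $\FR A_4$ consist of $2^{6}$ and $2^{24}$ CI~structures, respectively.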
CI~structures $\CC L$ and $\CC M$ over $N$ and $M$, respectively, are \emph{isomorphic} if there is a bijection $\pi: N \to M$ such that under the induced map $\CC M = \pi(\CC L)$. We are only interested in properties which are invariant under isomorphy. Hence, the choice of ground sets $[n]$ presents no restriction. Moreover, we freely identify isomorphic CI~structures in the following. In particular, each $k$-element subset $K \subseteq [n]$ will be tacitly identified with~$[k]$ and we use notation such as $\FR p(K)$. \begin{example} By the localization rule \eqref{eq:L'}, the well-known semigraphoid axioms of \cite{PearlPazGraphoids} reduce to the single inference rule \[ \label{eq:sg} \tag{S} \CI{i,j|L} \wedge \CI{i,k|jL} \Rightarrow \CI{i,j|kL} \wedge \CI{i,k|L}. \] Being a semigraphoid is a property defined by \begin{gather*} \FR{sg}(n) \defas \Set{ \CC L \subseteq \A_n : \text{\eqref{eq:sg} holds for $\CC L$ for all $ijk \in \binom{[n]}{3}$ and $L \subseteq [n] \setminus ijk$} }. \end{gather*} Being realizable by a Gaussian distribution is another property \[ \FR g^+(n) \defas \Set{ \CIS{\Sigma} \in \A_n : \Sigma \in \PD_n }. \] Both are closed under restriction, which can be expressed as follows: for every $\CC L \in \FR p(N)$ and every $K \subseteq N$ we have $\CC L|_K \defas \CC L \cap \A_K \in \FR p(K)$. \end{example} The property lattice is equipped with a natural order relation of component-wise set inclusion from the boolean lattices $2^{\FR A_n}$. This order relation $\le$ compares properties by generality: if $\FR p \le \FR q$, then for all $n \ge 1$ we have $\FR p(n) \subseteq \FR q(n)$, and $\FR p$ is \emph{sufficient} for $\FR q$ and, equivalently, $\FR q$ is \emph{necessary} for~$\FR p$. A function $\varphi$ on the property lattice is \emph{recessive} if for every $\FR p \in \FR P$ we have $\varphi(\FR p) \le \FR p$. It is \emph{monotone} if $\FR p \le \FR q$ entails $\varphi(\FR p) \le \varphi(\FR q)$. \begin{definition} \label{def:selfadhe} Let $\FR p$ be a property of CI~structures. The \emph{selfadhesion} $\FR p^\sa(N)$ of $\FR p$ is the set of CI~structures $\CC L$ such that for every $L \subseteq N$ together with an $L$-copy $M$ of $N$ and bijection $\pi: N \to M$ there exists $\ol{\CC L} \in \FR p(NM)$ satisfying the conditions: \begin{itemize}[itemsep=0pt] \item $\ol{\CC L}|_N = \CC L$, $\ol{\CC L}|_M = \pi(\CC L)$, and \item $\CI{N,M|L} \in \ol{\CC L}$. \end{itemize} A~property is \emph{selfadhesive} if $\FR p = \FR p^\sa$. \end{definition} The following is a direct consequence of \Cref{thm:Adhe}: \begin{corollary} \label{cor:Gaussadhe} The property $\FR g^+$ of being regular Gaussian is selfadhesive. \end{corollary} \begin{proof} Let $\CC L \in \FR g^+(N)$ be Gaussian and $\Sigma \in \PD_N$ a realizing matrix. For any $L \subseteq N$, \Cref{thm:Adhe} applies with $\Sigma' = \Sigma$ and gives a matrix $\Phi$ whose CI~structure is a witness for the structural selfadhesivity of $\CC L$ at~$L$. \end{proof} \begin{lemma} \label{lemma:AdheProp} The operator $\cdot^\sa$ is recessive and monotone on the property lattice. \end{lemma} \begin{proof} Let $\FR p$ be a property and $\CC L \in \FR p^\sa(N)$. In particular, $\CC L$ is selfadhesive with respect to $\FR p$ at $L = N$. The $L$-copy $M$ of $N$ in the definition must be $M = N$ and it follows that $\CC L \in \FR p(NM) = \FR p(N)$. This proves recessiveness $\FR p^\sa \le \FR p$. For monotonicity, let $\FR p \le \FR q$ and $\CC L$ in $\FR p^\sa(N)$. 
Then for every $L$ with $L$-copy $M$ of $N$ there exists a certificate for the existence of $\CC L$ in $\FR p^\sa$. This certificate lives in $\FR p(NM) \subseteq \FR q(NM)$ which proves $\CC L \in \FR q^\sa(N)$. \end{proof} Thus, from monotonicity and the fact that $\FR g^+$ is a fixed point of self\-adhesion, we can conclude that a property which is necessary for Gaussianity remains necessary after selfadhesion. Since selfadhesion makes properties more specific, this allows us to take known necessary properties of Gaussian~CI and to derive new, stronger properties from them. \begin{corollary} \label{cor:AdheNecessary} If $\FR g^+ \le \FR p$, then $\FR g^+ \le \FR p^\sa$. \qed \end{corollary} Iterated application of selfadhesion gives rise to a chain of ever more specific properties $\FR g^+ \le \cdots \le \FR p^{k \cdot \sa} \le \cdots \le \FR p^{2\cdot \sa} \defas (\FR p^\sa)^\sa \le \FR p^\sa \le \FR p$. For~each fixed component $n$ of the property, this results in a descending chain in the finite boolean lattice $2^{\FR A_n}$ which must stabilize eventually. However, the whole property $\FR p$ has a countably infinite number of components and it is not clear if iterated selfadhesions converge after finitely many steps to the limit $\FR p^{\omega\cdot \sa} \defas \bigwedge_{k=1}^\infty {\FR p^{k\cdot \sa}}$ in the property lattice. \begin{question} Does $\cdot^\sa$ stabilize after the first application to ``well-behaved'' properties like~$\FR{sg}$, i.e., is $\FR{sg}^\sa = \FR{sg}^{\omega\cdot\sa}$? Under which assumptions on a property does $\cdot^\sa$ stabilize after a finite number of applications? \end{question} We now turn to the question which closure properties of $\FR p$ are recovered for $\FR p^\sa$. For example, if for every $\CC L, \CC L' \in \FR p(N)$ we have $\CC L \cap \CC L' \in \FR p(N)$, then $\FR p$ is \emph{closed under intersection}. Semigraphoids enjoy this closure property because they are axiomatized by the Horn clauses~\eqref{eq:sg}. The following \namecref{lemma:AdheIntersect} shows that all iterated selfadhesions inherit closure under intersection. \begin{lemma} \label{lemma:AdheIntersect} If $\FR p$ is closed under intersection, then so is $\FR p^\sa$. \end{lemma} \begin{proof} Let $\CC L, \CC L' \in \FR p^\sa(N)$ and fix a set $L \subseteq N$ and an $L$-copy $M$ of $N$ with bijection~$\pi$. There are $\ol{\CC L}$ and $\ol{\CC L'}$ in $\FR p(NM)$ witnessing the selfadhesivity of $\CC L$ and $\CC L'$, respectively, at~$L$. Their intersection $\ol{\CC L} \cap \ol{\CC L'}$ is in $\FR p(NM)$ by assumption and we have \begin{itemize}[itemsep=0pt, wide] \item $(\ol{\CC L} \cap \ol{\CC L'})|_N = \ol{\CC L}|_N \cap \ol{\CC L'}|_N = \CC L \cap \CC L'$, \item $(\ol{\CC L} \cap \ol{\CC L'})|_M = \ol{\CC L}|_M \cap \ol{\CC L'}|_M = \pi(\CC L) \cap \pi(\CC L') = \pi(\CC L \cap \CC L')$, \item $\CI{N,M|L} \in \ol{\CC L} \cap \ol{\CC L'}$. \end{itemize} Thus it proves selfadhesivity of $\CC L \cap \CC L'$ with respect to $\FR p$ at~$L$. \end{proof} Similarly to matroid theory, \emph{minors} are the natural subconfigurations of CI~structures. They are the CI-theoretic abstraction of marginalization and conditioning on random vectors. \begin{definition} Let $\CC L \subseteq \CC A_N$ and $x \in N$. 
The \emph{marginal} and the \emph{conditional} of $\CC L$ on $N \setminus x$ are, respectively, \begin{align*} \CC L \mathbin{\backslash} x &\defas \Set{ \CI{i,j|K} \in \CC A_{N \setminus x} : \CI{i,j|K} \in \CC L } = \CC L \cap \CC A_{N \setminus x}, \\ \CC L \mathbin{/} x &\defas \Set{ \CI{i,j|K} \in \CC A_{N \setminus x} : \CI{i,j|xK} \in \CC L}. \end{align*} A \emph{minor} of $\CC L$ is any CI~structure which is obtained by a sequence of marginalizations and conditionings. \end{definition} If for every $\CC L \in \FR p(N)$ and every minor $\CC K$ of $\CC L$ on ground set $M \subseteq N$ we have $\CC K \in \FR p(M)$, then $\FR p$ is \emph{minor-closed}. Minor-closedness is necessary for the existence of a finite axiomatization of a property~$\FR p$. More concretely, \cite{MatusMinors} studied descriptions of properties by finitely many ``forbidden minors'', which is under natural regularity assumptions equivalent to having a finite axiomatic description by boolean CI~inference formulas; cf.~\cite[Section~4.4]{Dissert} for details. \begin{lemma} If $\FR p \le \FR{sg}$ is minor-closed, then so is $\FR p^\sa$. \end{lemma} \begin{proof} By induction it suffices to prove closedness under marginals and conditionals. Let $\CC L \in \FR p^\sa(N)$ and $x \in N$. First, we prove that $\CC L \mathbin{\backslash} x \in \FR p^\sa(N \setminus x)$. Fix $L \subseteq N \setminus x$ and an $L$-copy $M$ of $N$ with bijection~$\pi$ and let $\ol{\CC L}$ be the witness for selfadhesivity of $\CC L$ at~$L$. The minor $\ol{\CC L} \mathbin{\backslash} \Set{x, \pi(x)}$ is in $\FR p(NM \setminus \Set{x, \pi(x)})$ by assumption of minor-closedness; and note that $M \setminus \pi(x)$ is an $L$-copy of $N \setminus x$. Moreover, $(\ol{\CC L} \mathbin{\backslash} \Set{x, \pi(x)})|_{N \setminus x} = \CC L \mathbin{\backslash} x$ which is isomorphic to $\pi(\CC L \mathbin{\backslash} x) = \pi(\CC L) \mathbin{\backslash} \pi(x) = (\ol{\CC L} \mathbin{\backslash} \Set{x, \pi(x)})|_{M \setminus \pi(x)}$. For the last argument we need the semigraphoid property to hold for~$\FR p$. This ensures by \cite[Lemma~2.2]{Studeny} that the localization rule~\eqref{eq:L'} applies. This rule shows that $\CI{N,M|L} \in \ol{\CC L}$ is equivalent to \[ \bigwedge_{\substack{i \in N', j \in M', \\ L \subseteq P \subseteq NM \setminus ij}} \CI{i,j|P} \in \ol{\CC L}. \] Applying the rule \eqref{eq:L'} again in reverse to a subset of these elementary CI~statements shows that $\CI{(N\setminus x), (M \setminus \pi(x))|L} \in \ol{\CC L}$ holds, which finishes the proof that $\ol{\CC L} \mathbin{\backslash} \Set{x, \pi(x)}$ is a witness for the selfadhesion of $\CC L \mathbin{\backslash} x$ at $L$. To prove that $\CC L \mathbin{/} x \in \FR p^\sa(N \setminus x)$, pick any $L \subseteq N \setminus x$ and let $M$ be an $Lx$-copy of $N$ with bijection~$\pi$. Note that $M \setminus x$ is an $L$-copy of $N \setminus x$ with bijection $\pi|_{N \setminus x}$. Let $\ol{\CC L} \in \FR p(NM)$ be a witness for the selfadhesivity of $\CC L$ at~$Lx$ and consider the conditional $\ol{\CC L} \mathbin{/} x$: \begin{align*} (\ol{\CC L} \mathbin{/} x)|_{N \setminus x} &= \Set{ (ij|K) \in \CC A_{N \setminus x} : (ij|Kx) \in \ol{\CC L} } \\ &= (\ol{\CC L}|_N) \mathbin{/} x = \CC L \mathbin{/} x. \end{align*} An analogous computation shows $(\ol{\CC L} \mathbin{/} x)|_{M \setminus x} = \pi(\CC L \mathbin{/} x)$ using that $x$ is fixed by~$\pi$. 
Moreover, we have $\CI{N,M|Lx} \in \ol{\CC L}$ which is equivalent to $\CI{(N\setminus x), (M\setminus x)|Lx} \in \ol{\CC L}$ since $x \in N \cap M$. But this entails $\CI{(N\setminus x),(M\setminus x)|L} \in \ol{\CC L} \mathbin{/} x$ and hence $\CC L \mathbin{/} x$ is selfadhesive at $L$ with witness~$\ol{\CC L} \mathbin{/} x$. \end{proof} \begin{question} Does $\FR{sg}^\sa$ have a finite axiomatization? Is finite axiomatizability or finite non-axiomatizability in general preserved by selfadhesion? \end{question} \subsection{Selfadhesivity testing} Whether or not a CI~structure $\CC L \subseteq \A_N$ is in $\FR p^\sa(N)$ can be checked algorithmically if an \emph{oracle} $\SF p(\tilde{\CC L})$ for the property~$\FR p$ is available. This oracle is a subroutine which receives a \emph{partially defined} CI~structure $\tilde{\CC L}$ over~$N$, i.e., a set of CI~statements or negated CI~statements specifying constraints on some statements from~$\A_N$. Then~$\SF p$ decides if $\tilde{\CC L}$ can be extended to a member of~$\FR p(N)$. \begin{algorithm}[h!] \caption{Blackbox selfadhesion membership test} \algblockdefx[function]{BeginFunction}{EndFunction}{\textbf{function}\xspace}{\textbf{end function}} \begin{algorithmic}[1] \BeginFunction $\textsf{is-selfadhesive}(\CC L, \SF p)$ \Comment{tests if $\CC L \in \FR p^\sa(N)$} \ForAll{$L \subseteq N$} \State $(M, \pi) \gets \text{$L$-copy of $N$ with bijection $\pi: N \to M$}$ \State $\tilde{\CC L} \gets \emptyset$ \ForAll{$s \in \A_N$} \State \textbf{if} $s \in \CC L$ \textbf{then} $\tilde{\CC L} \gets \tilde{\CC L} \cup \Set{ \hphantom\neg s, \hphantom\neg \pi(s) }$ \State \textbf{if} $s \not\in \CC L$ \textbf{then} $\tilde{\CC L} \gets \tilde{\CC L} \cup \Set{ \neg s, \neg \pi(s) }$ \EndFor \State $\tilde{\CC L} \gets \tilde{\CC L} \cup \Set{ \CI{N,M|L} }$ \Comment{or equivalent statements via \eqref{eq:L'}} \State \textbf{if} $\SF p(\tilde{\CC L}) = \TT{false}$ \textbf{then} \textbf{return} $\TT{false}$ \EndFor \State \textbf{return} $\TT{true}$ \EndFunction \end{algorithmic} \label{alg:Selfadhe} \end{algorithm} Each component $\FR p(n)$ of a property $\FR p$ is a set of subsets of~$\CC A_n$. There are two principal ways of representing this set: \emph{explicitly}, by listing its elements, or \emph{implicitly}, by listing a set of abstract axioms in the form of boolean formulas which all its elements and no other CI~structures satisfy. A typical application of \Cref{alg:Selfadhe} takes in both, an explicit description of $\FR p(n)$ to iterate over, as well as an implicit description~$\SF p$ of~$\FR p$ to perform selfadhesivity testing for ground sets of sizes between $n$ and~$2n$. It~outputs only an explicit description of $\FR p^\sa$ at a given index~$n$. Transforming this explicit description obtained from \Cref{alg:Selfadhe} into an implicit description to call the algorithm again is akin to transforming a disjunctive normal form of a boolean formula into a conjunctive normal form, which is a hard problem. Moreover, it would be required to compute $\FR p^\sa(m)$ explicitly for all $n \le m \le 2n$. This makes it difficult to iterate selfadhesions. \begin{remark} The proof of \Cref{lemma:AdheProp} shows that a CI~structure $\CC L$ satisfies selfadhesivity with respect to $\FR p$ at $L = N$ if and only if $\CC L$ has property $\FR p$. In the other extreme case, every structure in $\FR p$ is selfadhesive at $L = \emptyset$ if $\FR p$ is closed under the direct sum operation introduced in \cite{MatusMatroids}. 
Many useful properties are closed under direct sums because this operation mimics the independent joining of two random vectors; see \cite{MatusClassification}. If this is known a~priori, some selfadhesivity tests can be skipped. \end{remark} We now proceed to apply \Cref{alg:Selfadhe} to two practically tractable necessary conditions for Gaussian realizability. The computational results allow, via \Cref{cor:AdheNecessary}, the deduction of new CI~inference axioms for Gaussians on five random variables. \subsection{Structural semigraphoids} It is easy to see that every Gaussian CI~structure $\CC L = \CIS{\Sigma}$ can also be obtained from the \emph{correlation matrix} $\Sigma'$ of the original distribution $\Sigma$. Hence, we may assume that $\Sigma$ is a correlation matrix. In that case, the \emph{multiinformation vector} of $\Sigma$ is the map $m_\Sigma: 2^N \to \BB R$ given by $m_\Sigma(K) \defas -\sfrac12 \log \det \Sigma_K$. This function satisfies $m_\Sigma(\emptyset) = m_\Sigma(i) = 0$ for all $i \in N$ and it is super\-modular by the Koteljanskii inequality; see \cite{JohnsonBarrett}. Similarly to entropy vectors, the equality condition in these inequalities characterizes conditional independence: ${\CId{ij|K} \cdot m_\Sigma = 0} \;\Leftrightarrow\; {\CI{i,j|K} \in \CIS{\Sigma}}$. In the nomenclature of \cite[Chapter~5]{Studeny}, $m_\Sigma$ is an \emph{$\ell$-standardized super\-modular function}. The functions having these two properties form a rational, polyhedral cone $\BO{S}_N$ of codimension $|N| + 1$ in $\BB R^{2^N}$. Each of its facets is given by equality in precisely one of the supermodular inequalities $\CId{ij|K} \le 0$ for an elementary CI~statement $\CI{i,j|K} \in \A_N$. Since the facets of this cone are in bijection with CI~statements, it is natural to identify faces (intersections of facets) dually with CI~structures (unions of CI~statements). The property of CI~structures defined by arising from a face of $\BO{S}_N$ is that of \emph{structural semigraphoids}, denoted by $\FR{sg}_*$, and it is necessary for $\FR g^+$ since every Gaussian CI~structure $\CIS{\Sigma}$ is associated with the unique face on which $m_\Sigma \in \BO{S}_N$ lies in the relative interior. \begin{remark} Structural semigraphoids can be equivalently defined via the face lattice of the cone of \emph{tight} polymatroids, i.e., polymatroids~$h$ with $h(N) = h(N \setminus i)$ for every $i \in N$. The tightness condition poses no extra restrictions: for every polymatroid, there exists a tight polymatroid inducing the same pure CI~statements (only differing in the functional dependences); cf.~\cite[Section~III]{MatusCsirmaz}. A~proof of the equivalence is contained in \cite[Section~6.3]{Dissert}, \end{remark} Deciding whether a partially defined CI~structure $\tilde{\CC L}$ is consistent with the structural semigraphoid property is a question about the incidence structure of the face lattice of~$\BO{S}_N$. Such questions reduce to the feasibility of a rational linear program as previously demonstrated by \cite{EfficientCI}. \Cref{alg:Struct} relies on this insight by setting up the polyhedral description of the structural semigraphoidality test and then delegating the computation to specialized linear programming software. \begin{algorithm}[h!] 
\caption{Structural semigraphoid consistency test} \algblockdefx[function]{BeginFunction}{EndFunction}{\textbf{function}\xspace}{\textbf{end function}} \begin{algorithmic}[1] \BeginFunction $\textsf{is-structural}(\tilde{\CC L})$ \Comment{tests if $\tilde{\CC L}$ is consistent with $\FR{sg}_*(N)$} \State $P \gets \Set{ \text{$m(\emptyset) = m(i) = 0$ for all $i \in N$} }$ \Comment{$H$ description of polyhedron} \ForAll{$s \in \A_N$} \State \textbf{if} $\hphantom\neg s \in \tilde{\CC L}$ \textbf{then} $P \gets P \cup \Set{ -\CId{s} \cdot m = 0 }$ \State \textbf{if} $ \neg s \in \tilde{\CC L}$ \textbf{then} $P \gets P \cup \Set{ -\CId{s} \cdot m \ge 1 }$ \State \Comment{The condition $-\CId{s} \cdot m > 0$ is equivalent to $\ge 1$ in a cone} \State \textbf{else} $P \gets P \cup \Set{ -\CId{s} \cdot m \ge 0 }$ \EndFor \State \textbf{return} $\textsf{is-feasible}(P)$ \Comment{call an \TT{LP} solver} \EndFunction \end{algorithmic} \label{alg:Struct} \end{algorithm} Equipped with this oracle for $\FR{sg}_*$, \Cref{alg:Selfadhe} can be applied to compute membership in~$\FR{sg}_*^\sa$. We run the structural selfadhesivity test for the \emph{gaussoids} of \cite{LnenickaMatus} because they are easily computable candidates for Gaussian CI~structures; see also \cite{Geometry}. For~$n=4$ random variables, the gaussoids which are structural semigraphoids already coincide with the realizable Gaussian structures (as classified in \cite{LnenickaMatus}) and selfadhesivity offers no improvement. This is no longer the case for five random variables: \begin{computation} There are $508\,817$ gaussoids on $n=5$ random variables modulo isomorphy. Of~these $336\,838$ are structural semigraphoids and $335\,047$ of them are selfadhesive with respect to~$\FR{sg}_*$. \end{computation} A semigraphoid $\CC L$ is structural if and only if it is induced by a polymatroid, i.e., $\CC L = \CIS{h}$. In~this case, two distinct notions of selfadhesivity can be applied to $\CC L$: the first is Matúš's definition of selfadhesivity for the inducing polymatroid~$h$; and the second is structural selfadhesivity from \Cref{def:selfadhe} for the CI~structure $\CC L$ with respect to the property~$\FR{sg}_*$. Analogously to \Cref{cor:Gaussadhe}, one sees that the second condition is implied by the first. The existence of a selfadhesive inducing polymatroid can be efficiently tested for ground set size four based on the polyhedral description of the cone of selfadhesive 4-polymatroids from \cite[Corollary~6]{MatusAdhesive}. \begin{computation} Out of the $1\,285$ isomorphy representatives of $\FR{sg}_*(4)$, exactly $1\,224$ are in $\FR{sg}_*^\sa(4)$. Each of them is induced by a selfadhesive 4-polymatroid. \end{computation} \begin{question} Is every element of $\FR{sg}_*^\sa(N)$ induced by a selfadhesive $N$-polymatroid, for every finite set~$N$? \end{question} \subsection{Orientable gaussoids} Recall from \cite{Geometry} that a gaussoid is \emph{orientable} if it is the support of an oriented gaussoid. Oriented gaussoids are a variant of CI~structures in which every statement $\CI{i,j|K}$ has a sign $\Set{ \TT0, \TT+, \TT- }$ attached, indicating conditional independence, positive or negative partial correlation, respectively. Oriented gaussoids are axiomatically defined and therefore \TT{SAT} solvers are ideally suited to decide the consistency of a partially defined CI~structure with these axioms. 
The property of orientability, denoted $\FR{o}$, is obtained from the set of oriented gaussoids by mapping all CI~statements oriented as $\TT0$ to elements of a CI~structure and all statements oriented $\TT+$ or $\TT-$ to non-elements. To~facilitate orientability testing, one allocates two boolean variables $V^{\TT0}_s$ and $V^{\TT+}_s$ for every CI~statement~$s$. The~former indicates whether $s$ is $\TT0$ or not while the latter indicates, provided that $V^{\TT0}_s$ is false, if $s$ is $\TT+$ or $\TT-$. Further details about oriented gaussoids, their axioms and use of \TT{SAT} solvers for CI~inference are available in \cite{Geometry}. \Cref{alg:Orient} gives a condensed account of the algorithm. \begin{algorithm}[h!] \caption{Orientable gaussoid consistency test} \algblockdefx[function]{BeginFunction}{EndFunction}{\textbf{function}\xspace}{\textbf{end function}} \begin{algorithmic}[1] \BeginFunction $\textsf{is-orientable}(\tilde{\CC L})$ \Comment{tests if $\tilde{\CC L}$ is consistent with $\FR o(N)$} \State $\varphi \gets \textsf{oriented-gaussoid-axioms}(N)$ \Comment{boolean formula} \ForAll{$s \in \A_N$} \State \textbf{if} $\hphantom\neg s \in \tilde{\CC L}$ \textbf{then} $\varphi \gets \varphi \wedge [V^{\TT0}_s = \TT{true}]$ \State \textbf{if} $ \neg s \in \tilde{\CC L}$ \textbf{then} $\varphi \gets \varphi \wedge [V^{\TT0}_s = \TT{false}]$ \State $\varphi \gets \varphi \wedge [V^{\TT0}_s = \TT{true} \Rightarrow V^{\TT+}_s = \TT{false}]$ \State \Comment{there are only three signs $\Set{\TT0, \TT+, \TT-}$} \EndFor \State \textbf{return} $\textsf{is-satisfiable}(\varphi)$ \Comment{call a \TT{SAT} solver} \EndFunction \end{algorithmic} \label{alg:Orient} \end{algorithm} \begin{computation} All orientable gaussoids on $n=4$ are Gaussian. Of the $508\,817$ isomorphy classes of gaussoids on $n=5$ precisely $175\,215$ are orientable and $168\,010$ are selfadhesive with respect to orientability. \end{computation} \subsection{Structural orientable gaussoids} The meet $\FR{sg}_* \wedge \FR{o}$ of structural semigraphoids and orientable gaussoids in the property lattice is likewise necessary for Gaussianity and an oracle for it can be combined from the oracles of its two constituents. Its selfadhesion yields no improvement over apparently weaker properties: \begin{computation} The properties $\FR{sg}_* \wedge \FR o$ and $\FR{sg}_*^\sa \wedge \FR o$ coincide at $n=5$ with $175\,139$ isomorphy types. On the other hand, $\FR{sg}_* \wedge \FR{o}^\sa$, $\FR{sg}_*^\sa \wedge \FR{o}^\sa$ and $(\FR{sg}_* \wedge \FR o)^\sa$ coincide at $n=5$ with $167\,989$~types. \end{computation} Up to a few isolated examples in the literature, this represents the currently best known upper bound in the classification of realizable Gaussian conditional independence structures on five random variables. Examination of the difference $(\FR{sg}_* \wedge \FR{o})(5) \setminus (\FR{sg}_* \wedge \FR{o})^\sa(5)$ reveals new axioms for Gaussian CI beyond structural semigraphoids and orientability,~e.g.: \begin{align*} \CI{i,j|km} \wedge \CI{i,m|l} \wedge \CI{j,k|i} \wedge \CI{j,m} \wedge \CI{k,l} &\;\Rightarrow\; \CI{i,j}, \\ \CI{i,k|jl} \wedge \CI{i,l|km} \wedge \CI{j,k|i} \wedge \CI{j,m|k} \wedge \CI{k,l} &\;\Rightarrow\; \CI{i,k}, \\ \CI{i,k|j} \wedge \CI{i,l|jm} \wedge \CI{j,k|il} \wedge \CI{j,m|k} \wedge \CI{k,l} &\;\Rightarrow\; \CI{i,k}. \end{align*} The MathRepo page corresponding to this paper contains code and more information on how to obtain these inference rules algorithmically. 
Due to the large amount of data involved and the complexity of minimizing boolean formulas, it is currently not known how many genuinely new and mutually irredundant axioms are encoded in the results. \subsubsection*{Mathematical software and data repository} \TT{SoPlex v4.0.0} was used to solve rational linear programs exactly; see \cite{Soplex2012,Soplex2015,SCIP2018}. To check orientability, we used the incremental \TT{SAT} solver \TT{CaDiCaL v1.3.1} by \cite{CaDiCaL} and to enumerate satisfying assignments the \TT{AllSAT} solver \TT{nbc\_minisat\_all v1.0.2} by \cite{TodaSAT}. \Cref{ex:PRnotSelfadhe} was found using Wolfram \TT{Mathematica v11.3} \cite{Mathematica}. The~source code and results for all computations are available on the supplementary MathRepo website of the MPI-MiS and the KEEPER of the Max-Planck Society: \begin{center} \begin{tabular}{rl} MathRepo: & \url{https://mathrepo.mis.mpg.de/SelfadhesiveGaussianCI/} \\ KEEPER: & \url{https://keeper.mpdl.mpg.de/d/fbfe463162e94a14ac28/} \end{tabular} \end{center} \subsubsection*{Acknowledgement} This project was started at the Otto-von-Guericke-Uni\-ver\-si\-tät Magdeburg and finished at the Max-Planck Institute for Mathematics in the Sciences, Leipzig. I~would like to thank the OvGU and the MPI for providing me with the resources to carry out the computations whose results are presented here. I also wish to thank the anonymous referees for their thorough and critical reading of this manuscript, and especially for drawing my attention to the result mentioned in \Cref{rem:Machinery}. \bibliographystyle{tboege} \bibliography{gaussadhe} \end{document}
2205.07599v10
http://arxiv.org/abs/2205.07599v10
On the operators of Hardy-Littlewood-Pólya type
\documentclass[11pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \textwidth=142truemm \textheight=210truemm \usepackage{geometry} \geometry{left=4cm, right=4cm, top=3.2cm, bottom=3.2cm} \def\beqnn{\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}} \DeclareMathOperator{\esssup}{esssup} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \theoremstyle{question} \newtheorem{question}[theorem]{Question} \theoremstyle{problem} \newtheorem{problem}[theorem]{Problem} \theoremstyle{conjecture} \newtheorem{conjecture}[theorem]{Conjecture} \numberwithin{equation}{section} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\blankbox}[2]{ \parbox{\columnwidth}{\centering \psetlength{\fboxsep}{0pt} \fbox{\raisebox{0pt}[#2]{\hspace{#1}}} }} \begin{document} \begin{center} \title{ {On the operators of Hardy-Littlewood-P\'olya type}} \end{center} \author{Jianjun Jin} \address{School of Mathematics Sciences, Hefei University of Technology, Xuancheng Campus, Xuancheng 242000, P.R.China} \email{[email protected], [email protected]} \subjclass[2010]{47B37; 26D15; 47A30} \keywords{Hardy-Littlewood-P\'olya-type operators; boundedness of operator; compactness of operator; norm of operator} \begin{abstract} In this paper we introduce and study several new Hardy-Littlewood-P\'olya-type operators. In particular, we study a Hardy-Littlewood-P\'olya-type operator induced by a positive Borel measure on $[0,1)$. We establish some sufficient and necessary conditions for the boundedness (compactness) of these operators. We also determine the exact values of the norms of the Hardy-Littlewood-P\'olya-type operators for certain special cases. \end{abstract} \maketitle \pagestyle{plain} \section{\bf {Introduction and main results}} Throughout this paper, for two positive numbers $A, B$, we write $A \preceq B$, or $A \succeq B$, if there exists a positive constant $C$ independent of the arguments such that $A \leq C B$, or $A \geq C B$, respectively. We will write $A \asymp B$ if both $A \preceq B$ and $A \succeq B$. Let $p>1$. We denote the conjugate of $p$ by $p'$, i.e., $\frac{1}{p}+\frac{1}{p'}=1$. Let $l^p$ be the space of $p$-summable sequences of complex numbers, i.e., \begin{equation*}l^{p}:=\{a=\{a_n\}_{n=1}^{\infty}: \|a\|_{p}=(\sum_{n=1}^{\infty} |a_{n}|^p)^{\frac{1}{p}}<+\infty \}.\end{equation*} For $a=\{a_n\}_{n=1}^{\infty}$, the Hardy-Littlewood-P\'olya operator $\mathbf{H}$ is defined as $$\mathbf{H}(a)(m):=\sum_{n=1}^{\infty}\frac{a_n}{\max\{m, n\}}, \, m\in \mathbb{N}. $$ It is well known (see \cite[page 254]{HLP}) that \begin{theorem}\label{m-0} Let $p>1$. Then $\mathbf{H}$ is bounded on $l^p$ and the norm of $\mathbf{H}$ is $p+p'$. \end{theorem} The Hardy-Littlewood-P\'olya operator is related to some important topics in analysis and there have been many results about this operator and its analogues and generalizations. The classical results on this topic can be found in the famous monograph \cite{HLP}. In the past three decades, the so-called Hilbert-type operators, including Hardy-Littlewood-P\'olya-type operators, have been extensively studied by Yang and his coauthors; see the survey \cite{YR} and Yang's book \cite{Y3}. For more recent results see, for example, \cite{WHY} and \cite{YZ}. Fu et al.
have studied in \cite{FWL} some $p$-adic Hardy-Littlewood-P\'olya-type operators. Very recently, in the work \cite{B}, Brevig established some norm estimates for certain Hardy-Littlewood-P\'olya-type operators in terms of the Riemann zeta function. Some further results have been obtained in \cite{B-1}. In this paper, we first introduce and study the following operator of the Hardy-Littlewood-P\'olya type, $$\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}(a)(m):=m^{\frac{1}{p}[(\alpha-1)+\alpha \mu]}\sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}}{ [\max\{m^{\alpha}, n^{\beta}\}]^{\gamma}}a_n, a=\{a_n\}_{n=1}^{\infty}, m\geq 1, $$ where $\gamma>0$, $0<\alpha, \beta \leq 1$, $-1<\mu, \nu <p-1$. The operator $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ reduces to the classical Hardy-Littlewood-P\'olya operator $\mathbf{H}$ when $\alpha=\beta=\gamma=1$, $\mu=\nu=0$. We first study the boundedness of $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$. We will provide a sufficient and necessary condition for the boundedness of $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ in terms of the parameters $\gamma, \mu, \nu$ and prove that \begin{theorem}\label{m-1-1} Let $p>1$, $\gamma>0$, $0<\alpha, \beta \leq 1$ and $-1<\mu, \nu <p-1$. Let $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ be defined as above. Then $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ is bounded on $l^p$ if and only if $p(\gamma-1)-(\mu-\nu)\geq 0.$\end{theorem} When $p(\gamma-1)-(\mu-\nu)= 0$, i.e., $\gamma=1+\frac{\mu-\nu}{p}$, we use $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$ to denote the operator $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$. That is to say, $$\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}(a)(m):=m^{\frac{1}{p}[(\alpha-1)+\alpha \mu]}\sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}}{ [\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}a_n,a=\{a_n\}_{n=1}^{\infty},m\geq 1.$$ We denote by $\|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\|$ the norm of $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$. We will show the following result, which is an extension of Theorem \ref{m-0}. \begin{theorem}\label{m-1-2} Let $p>1$, $0<\alpha, \beta \leq 1$ and $-1<\mu, \nu <p-1$. Let $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$ be defined as above. Then $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$ is bounded on $l^p$ and \begin{equation}\label{norm}\|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\|= \frac{p}{{\alpha}^{\frac{1}{p}}{\beta}^{\frac{1}{p'}}}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right).\end{equation} \end{theorem} When $\alpha=\beta=1$, we know from Theorem \ref{m-1-1} that the operator $$\mathbf{H}^{\mu, \nu}_{1, 1, \gamma}(a)(m)=\sum_{n=1}^{\infty} \frac{m^{\frac{\mu}{p}}n^{-\frac{\nu}{p}}}{[\max\{m, n\}]^{\gamma}}a_n, a=\{a_n\}_{n=1}^{\infty},m\geq 1, $$ is not bounded on $l^p$ when $\gamma<1+\frac{\mu-\nu}{p}$. On the one hand, we note that $$\int_{[0, 1)}t^{\max\{m, n\}-1}(1-t)^{\gamma-1}dt=B(\max\{m, n\}, \gamma)=\frac{\Gamma(\max\{m, n\})\Gamma(\gamma)}{\Gamma(\gamma+\max\{m, n\})}, $$ for $\gamma>0$, $m, n\geq 1$. Here $B(\cdot, \cdot)$ is the Beta function, which is defined as $$B(u,v):=\int_{0}^{1}t^{u-1}(1-t)^{v-1}\,dt,\: u>0,v>0.$$ The Gamma function $\Gamma(\cdot)$ is defined as $$\Gamma(x)=\int_{0}^{\infty}e^{-t} t^{x-1}\,dt,\: x>0.$$ It is known that $$B(u,v)=\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}.$$ For more background on these special functions, see \cite{AAR}.
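As a quick numerical sanity check of the Beta-integral identity above, the following Python sketch (assuming SciPy is available; the sample values of $m$, $n$ and $\gamma$ are arbitrary) compares the integral with the Gamma-function expression.
\begin{verbatim}
from math import gamma
from scipy.integrate import quad

m, n, g = 7, 4, 1.5                       # arbitrary sample values of m, n, gamma
M = max(m, n)

lhs, _ = quad(lambda t: t**(M - 1) * (1 - t)**(g - 1), 0, 1)
rhs = gamma(M) * gamma(g) / gamma(g + M)  # B(max{m, n}, gamma)

print(abs(lhs - rhs) < 1e-8)              # True, up to quadrature error
\end{verbatim}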
On the other hand, we see from \begin{equation}\label{g} \Gamma(x) = \sqrt{2\pi} x^{x-\frac{1}{2}}e^{-x}[1+r(x)],\, |r(x)|\leq e^{\frac{1}{12x}}-1,\, x>0,\end{equation} that $$\frac{\Gamma(\max\{m, n\})\Gamma(\gamma)}{\Gamma(\gamma+\max\{m, n\})} \asymp \frac{1}{[\max\{m, n\}]^{\gamma}},\, \gamma>0, m, n\geq 1.$$ Hence, in order to obtain a bounded analogue of $\mathbf{H}^{\mu, \nu}_{1, 1, \gamma}$ on $l^p$ when $\gamma<1+\frac{\mu-\nu}{p}$, we let $\lambda$ be a positive Borel measure on $[0, 1)$ and consider the following operator $$\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}(a)(m):=\sum_{n=1}^{\infty} m^{\frac{\mu}{p}}n^{-\frac{\nu}{p}}\mathcal{I}_\lambda[m,n]a_n, \, a=\{a_n\}_{n=1}^{\infty}, \, m\geq 1.$$ Here \begin{equation}\label{mea}\mathcal{I}_\lambda[m, n]=\int_{[0, 1)}t^{\max\{m, n\}-1}(1-t)^{\gamma-1}d\lambda(t), \,m, n\geq 1.\end{equation} We will characterize measures $\lambda$ such that $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is bounded (compact) on $l^p$. To state our results, we introduce the notion of a generalized Carleson measure on $[0,1)$. Let $s>0$ and let $\lambda$ be a positive Borel measure on $[0,1)$. We say that $\lambda$ is an $s$-Carleson measure if there is a constant $C>0$ such that $$\lambda([t, 1))\leq C (1-t)^s$$ holds for all $t\in [0, 1)$. Moreover, an $s$-Carleson measure $\lambda$ on $[0, 1)$ is said to be a vanishing $s$-Carleson measure if, in addition, $$\lim_{t\rightarrow 1^{-}}\frac{\lambda([t, 1))}{(1-t)^s}=0.$$ We shall prove the following criterion for the boundedness of $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$. \begin{theorem}\label{m-1-3} Let $p>1, \gamma>0$ and $-1<\mu, \nu<p-1$. Let $\lambda$ be a positive Borel measure on $[0, 1)$ such that $d\rho(t):=(1-t)^{\gamma-1}d\lambda(t)$ is a finite measure on $[0, 1)$, and $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ be defined as above. Then $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is bounded on $l^p$ if and only if $\rho$ is a $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$. \end{theorem} For the compactness of $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$, we shall show that \begin{theorem}\label{m-1-4} Let $p>1, \gamma>0$ and $-1<\mu, \nu<p-1$. Let $\lambda$ be a positive Borel measure on $[0, 1)$ such that $d\rho(t):=(1-t)^{\gamma-1}d\lambda(t)$ is a finite measure on $[0, 1)$, and $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ be defined as above. Then $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is compact on $l^p$ if and only if $\rho$ is a vanishing $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$. \end{theorem} The paper is organized as follows. Two lemmas will be given in the next section. We will first prove Theorem \ref{m-1-2} in Section 3. The proof of Theorem \ref{m-1-1} will be given in Section 4. We prove Theorems \ref{m-1-3} and \ref{m-1-4} in Section 5. Final remarks will be presented in Section 6. \section{\bf {Two lemmas}} We need the following lemmas in the proofs of the main results of this paper. \begin{lemma}\label{lem-1} Let $p>1$, $0<\alpha, \beta \leq 1$ and $-1<\mu, \nu <p-1$. We define $$E(m):=\sum_{n=1}^{\infty} \frac{n^{\beta-1}}{[\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}} \cdot \frac{m^{\frac{\alpha(1+\mu)}{p}}}{n^{\frac{\beta(1+\nu)}{p}}}, \, m\geq 1; $$ $$F(n):=\sum_{m=1}^{\infty} \frac{m^{\alpha-1}}{[\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}\cdot \frac{n^{\frac{\beta(p-1-\nu)}{p}}}{m^{\frac{\alpha(p-1-\mu)}{p}}}, \, n\geq 1.
$$ Then we have \begin{equation}\label{ineq-1}E(m)\leq \frac{p}{\beta}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right),\, m\geq 1,\end{equation} \begin{equation} \label{ineq-2}F(n)\leq \frac{p}{\alpha}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right),\, n\geq 1. \end{equation} \end{lemma} \begin{proof} In view of the assumption, we see that, for $m\geq 1$, \begin{equation*} E(m)\leq \int_{0}^{\infty} \frac{x^{\beta-1}}{[\max\{m^{\alpha}, x^{\beta}\}]^{1+\frac{\mu-\nu}{p}}} \cdot \frac{m^{\frac{\alpha(1+\mu)}{p}}}{x^{\frac{\beta(1+\nu)}{p}}}\,dx. \end{equation*} Consequently, by the change of variables $s=x^{\beta}$ and then $t=s/m^{\alpha}$, we obtain that \begin{eqnarray} E(m) &\leq&\frac{1}{\beta} \int_{0}^{\infty} \frac{1}{[\max\{m^{\alpha}, s\}]^{1+\frac{\mu-\nu}{p}}} \cdot \frac{m^{\frac{\alpha(1+\mu)}{p}}}{s^{\frac{1+\nu}{p}}}\,ds \nonumber \\ &=& \frac{1}{\beta} \int_{0}^{\infty} \frac{t^{-\frac{1+\nu}{p}}}{[\max\{1, t\}]^{1+\frac{\mu-\nu}{p}}}\,dt\nonumber \\ &=& \frac{1}{\beta} \int_{0}^{1}t^{-\frac{1+\nu}{p}}dt+\frac{1}{\beta} \int_{1}^{\infty}t^{-\frac{1+\mu}{p}-1}dt \nonumber \\ &=& \frac{p}{\beta}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right). \nonumber \end{eqnarray} This proves (\ref{ineq-1}). In a similar way, we can show that (\ref{ineq-2}) also holds. The lemma is proved. \end{proof} \begin{lemma}\label{lem} Let $\gamma>0, -1<\mu, \nu<p-1$. Let $\lambda$ be a positive Borel measure on $[0, 1)$ and $\mathcal{I}_{\lambda}[m, n]$ be defined as in (\ref{mea}) for $m, n\geq 1$. Set $d\rho(t)=(1-t)^{\gamma-1}d \lambda(t)$. If $\rho$ is a $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$, then \begin{equation}\label{mun-1}\mathcal{I}_{\lambda}[m, n] \preceq \frac{1}{[\max\{m, n\}]^{1+\frac{1}{p}(\mu-\nu)}}\end{equation} holds for all $m, n\geq 1$. Furthermore, if $\rho$ is a vanishing $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$, then \begin{equation}\label{mun-2} \mathcal{I}_{\lambda}[m, n] =o\left(\frac{1}{[\max\{m, n\}]^{1+\frac{1}{p}(\mu-\nu)}}\right), \,\,\max\{m, n\}\rightarrow \infty.\end{equation} \end{lemma} \begin{proof} When $m\geq 1, n\geq 2$, or $m\geq 2, n\geq 1$, that is, when $\max\{m, n\}\geq 2$, we get from integration by parts that \begin{eqnarray} \mathcal{I}_{\lambda}[m, n]&=&\int_{0}^1 t^{\max\{m, n\}-1}d\rho(t) \nonumber \\&=&\rho([0,1))-(\max\{m, n\}-1)\int_{0}^1 t^{\max\{m, n\}-2}\rho([0, t))dt \nonumber \\ &=& (\max\{m, n\}-1)\int_{0}^1 t^{\max\{m, n\}-2}\rho([t, 1))dt.\nonumber \end{eqnarray} If $\rho$ is a $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$, then we see that there is a constant $C_1>0$ such that $$\rho([t,1))\leq C_1 (1-t)^{1+\frac{1}{p}(\mu-\nu)}$$ holds for all $t\in [0,1)$. It follows that \begin{eqnarray}\mathcal{I}_{\lambda}[m, n] &\leq & C_1(\max\{m, n\}-1)\int_{0}^1 t^{\max\{m, n\}-2}(1-t)^{1+\frac{1}{p}(\mu-\nu)}dt\nonumber \\&=&C_1 \frac{(\max\{m, n\}-1)\Gamma(\max\{m, n\}-1)\Gamma(2+\frac{1}{p}(\mu-\nu))}{\Gamma(\max\{m, n\}+1+\frac{1}{p}(\mu-\nu))}.\nonumber \end{eqnarray} By using (\ref{g}) again, we obtain that $$\frac{(\max\{m, n\}-1)\Gamma(\max\{m, n\}-1)\Gamma(2+\frac{1}{p}(\mu-\nu))}{\Gamma(\max\{m, n\}+1+\frac{1}{p}(\mu-\nu))} \asymp \frac{1}{\max\{m, n\}^{1+\frac{1}{p}(\mu-\nu)}}.$$ It follows that (\ref{mun-1}) holds for $m\geq 1, n\geq 2$ or $m\geq 2, n\geq 1$. Next we consider the case $m=n=1$. We see from the fact that $\rho$ is a finite measure on $[0,1)$ that \begin{equation} \mathcal{I}_{\lambda}[1, 1]=\int_{0}^1 d\rho(t)=\rho([0, 1))\preceq 1.\nonumber \end{equation} Then we get that (\ref{mun-1}) holds for all $m, n\geq 1$.
Similarly, if $\rho$ is a vanishing $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$, by minor modifications of above arguments, we can show that (\ref{mun-2}) holds. The lemma is proved. \end{proof} \section{\bf {Proof of Theorem \ref{m-1-2}}} For $a=\{a_n\}_{n=1}^{\infty} \in l^p, m\geq 1$, we have \begin{eqnarray}\lefteqn{m^{\frac{1}{p}[(\alpha-1)+\alpha \mu]}\left|\sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}}{ [\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}a_n\right|}\nonumber \\&&\leq \sum_{n=1}^{\infty} \left\{[K(m,n)]^{\frac{1}{p}}O_1(m, n)\cdot [K(m,n)]^{\frac{1}{p'}}O_2(m, n)\right\}:=I(m).\nonumber \end{eqnarray} Where \begin{equation*} {K}(m, n)=\frac{1}{[\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}, \end{equation*} \begin{equation*} {O}_1(m, n)= \frac{n^{\frac{\beta(1+\nu)}{pp'}-\frac{\beta \nu}{p}}} {m^{\frac{\alpha(p-1-\mu)}{p^2}}} \cdot m^{\frac{1}{p}(\alpha-1)}\cdot |a_n|, \end{equation*} \begin{equation*} {O}_2(m, n)= \frac{m^{\frac{\alpha(p-1-\mu)}{p^2}+\frac{\alpha\mu}{p}}} {n^{\frac{\beta(1+\nu)}{pp'}}} \cdot n^{\frac{1}{p'}(\beta-1)}. \end{equation*} Applying the H\"{o}lder's inequality on $I(m)$, we get from (\ref{ineq-1}) that \begin{eqnarray} I(m) &\leq & \left[\sum_{n=1}^{\infty}{K}(m, n)[{O}_1(m, n)]^p \right]^{\frac{1}{p}}\left[\sum_{n=1}^{\infty}{K}(m, n)[{O}_2(m, n)]^{p'} \right]^{\frac{1}{p'}} \nonumber \\ & =& [{E}(m)]^{\frac{1}{p'}}\left[\sum_{n=1}^{\infty}{K}(m, n)[{O}_1(m, n)]^p \right]^{\frac{1}{p}}. \nonumber \end{eqnarray} It follows from (\ref{ineq-2}) that \begin{eqnarray} \lefteqn{\|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}a\|_p=[\sum_{m=1}^{\infty}I^p(m) ]^{\frac{1}{p}}} \nonumber \\ & \leq& \frac{1}{{\beta}^{\frac{1}{p'}}} \left(\frac{p}{1+\mu}+\frac{p}{p-1-\nu}\right)^{\frac{1}{p'}}\left[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}{K}(m, n)[{O}_1(m, n)]^p \right]^{\frac{1}{p}} \nonumber \\ &=& \frac{1}{{\beta}^{\frac{1}{p'}}} \left(\frac{p}{1+\mu}+\frac{p}{p-1-\nu}\right)^{\frac{1}{p'}} \left[\sum_{n=1}^{\infty}{F}(n) |a_n|^p \right]^{\frac{1}{p}} \nonumber \\ & \leq& \frac{p}{{\alpha}^{\frac{1}{p}}{\beta}^{\frac{1}{p'}}}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right)\|a\|_p. \nonumber \end{eqnarray} This means that $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$ is bounded on $l^p$ and \begin{equation}\label{low}\|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\|\leq \frac{p}{{\alpha}^{\frac{1}{p}}{\beta}^{\frac{1}{p'}}}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right).\end{equation} For $\varepsilon>0$, we take $\widetilde{a}=\{\widetilde{a}_n\}_{n=1}^{\infty}$ with $\widetilde{a}_n=\varepsilon^{\frac{1}{p}}n^{-\frac{1+\beta\varepsilon}{p}}$. On the one hand, we have $$\|\widetilde{a}\|_p^p=\varepsilon \sum_{n=1}^{\infty} n^{-1-\beta\varepsilon}\geq \varepsilon \int_{1}^{\infty} x^{-1-\beta\varepsilon}\, dx=\frac{1}{\beta}.$$ On the other hand, we have $$\|\widetilde{a}\|_p^p=\varepsilon+ \sum_{n=2}^{\infty}n^{-1-\beta\varepsilon}\leq \varepsilon+\varepsilon\int_{1}^{\infty} x^{-1-\beta\varepsilon}\, dx=\varepsilon+\frac{1}{\beta}.$$ Thus, we obtain that \begin{equation}\label{est-0} \|\widetilde{a}\|_p^p=\frac{1}{\beta}(1+o(1)),\,\, \,\,\varepsilon \rightarrow 0^{+}. \end{equation} We write \begin{eqnarray}\label{shar-1} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\widetilde{a}\|_p^p=\varepsilon \sum_{m=1}^{\infty}m^{(\alpha-1)+\alpha \mu }\cdot [J(m)]^p. 
\end{eqnarray} Here $$J(m):=\sum_{n=1}^{\infty}\frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}\cdot n^{-\frac{1+\beta\varepsilon}{p}}}{[\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}.$$ In view of the assumption $0<\beta\leq 1, -1<\nu<p-1$, we have $1+\beta \nu\geq 0$. Hence we get that \begin{eqnarray}\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]-\frac{1+\beta \varepsilon}{p}=\frac{1}{p'}(\beta-1)-\frac{1+\beta \nu+\beta\varepsilon}{p}<0. \nonumber \end{eqnarray} Consequently, \begin{eqnarray}\label{shar-2} J(m)&\geq& \int_{1}^{\infty} \frac{x^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}\cdot x^{-\frac{1+\beta\varepsilon}{p}}}{[\max\{m^{\alpha}, x^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}\,dx \nonumber \\&=&\frac{1}{\beta} \int_{1}^{\infty} \frac{s^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{m^{\alpha}, s\}]^{1+\frac{\mu-\nu}{p}}}\,ds\nonumber \\ &=&\frac{1}{\beta}m^{-\frac{\alpha}{p}(1+\mu+\varepsilon)} \int_{\frac{1}{m^{\alpha}}}^{\infty} \frac{t^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{1, t\}]^{1+\frac{\mu-\nu}{p}}}\,dt. \end{eqnarray} Also, for $0<\varepsilon< p-1-\nu$, we have \begin{eqnarray}\label{shar-3} \lefteqn{\int_{\frac{1}{m^{\alpha}}}^{\infty} \frac{t^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{1, t\}]^{1+\frac{\mu-\nu}{p}}}\,dt=\int_{0}^{\infty} \frac{t^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{1, t\}]^{1+\frac{\mu-\nu}{p}}}\,dt -\int_{0}^{\frac{1}{m^{\alpha}}} t^{-\frac{1+\nu+\varepsilon}{p}}\,dt }\nonumber \\ &&= p\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu-\varepsilon}\right)-\frac{p}{p-1-\nu-\varepsilon}m^{-\frac{\alpha(p-1-\nu-\varepsilon)}{p}} \nonumber \\ &&:=L(\varepsilon)-Q(m). \end{eqnarray} Combining (\ref{shar-1}), (\ref{shar-2}) and (\ref{shar-3}), we get that \begin{eqnarray}\label{mo-1} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\widetilde{a}\|_p^p&\geq &\frac{\varepsilon}{\beta^p}\sum_{m=1}^{\infty} m^{-1-\alpha \varepsilon}\cdot [L(\varepsilon)-Q(m)]^p, \end{eqnarray} for $0<\varepsilon< p-1-\nu$. By using the Bernoulli's inequality(see \cite{M}), we obtain that \begin{equation}\label{mo-2} [L(\varepsilon)-Q(m)]^p \geq [L(\varepsilon)]^p \left [1-\frac{p^2}{L(\varepsilon)(p-1-\nu-\varepsilon)}m^{-\frac{\alpha(p-1-\nu-\varepsilon)}{p}}\right], \end{equation} for $0<\varepsilon< p-1-\nu$. From (\ref{mo-1}) and (\ref{mo-2}), we obtain that \begin{eqnarray}\label{mo-3} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\widetilde{a}\|_p^p&\geq &\frac{\varepsilon}{\beta^p}[L(\varepsilon)]^p\sum_{m=1}^{\infty} m^{-1-\alpha \varepsilon} \nonumber \\& &-\frac{\varepsilon p^2}{\beta^{p}L(\varepsilon)(p-1-\nu-\varepsilon)}\sum_{m=1}^{\infty} m^{-1-\alpha \varepsilon-\frac{\alpha(p-1-\nu-\varepsilon)}{p}}. \end{eqnarray} We note that \begin{eqnarray}\label{mo-4} \varepsilon \sum_{m=1}^{\infty}m^{-1-\alpha \varepsilon}=\frac{1}{\alpha}(1+o(1)), \,\, \,\, \varepsilon \rightarrow 0^{+}, \end{eqnarray} and, for $0<\varepsilon<p-1-\nu$, \begin{eqnarray}\label{mo-5} \sum_{m=1}^{\infty}m^{-1-\alpha \varepsilon-\frac{\alpha(p-1-\nu-\varepsilon)}{p}}=O(1), \,\, \varepsilon \rightarrow 0^{+}. \end{eqnarray} It follows from (\ref{mo-3})-(\ref{mo-5}) that \begin{eqnarray} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\widetilde{a}\|_p^p&\geq &\frac{1}{\alpha\beta^p}(1+o(1))\cdot [L(\varepsilon)]^p\cdot [1-\varepsilon O(1)]. 
\nonumber \end{eqnarray} Hence, by (\ref{est-0}), we get that \begin{eqnarray} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\| &\geq & \frac{\|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\widetilde{a}\|_p}{\|\widetilde{a}\|_p}\geq \frac{\frac{1}{\alpha^{\frac{1}{p}}\beta}(1+o(1))\cdot [L(\varepsilon)]\cdot [1-\varepsilon O(1)]^{\frac{1}{p}}}{\frac{1}{\beta^{\frac{1}{p}}}(1+o(1))}. \nonumber \end{eqnarray} Take $\varepsilon \rightarrow 0^{+}$, we see that \begin{equation}\label{big} \|\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}\|\geq \frac{p}{{\alpha}^{\frac{1}{p}}{\beta}^{\frac{1}{p'}}}\left(\frac{1}{1+\mu}+\frac{1}{p-1-\nu}\right).\end{equation} Combining (\ref{low}) and (\ref{big}), we see that (\ref{norm}) is true and the proof of Theorem \ref{m-1-2} is finished. \section{\bf {Proof of Theorem \ref{m-1-1}}} We first prove the if part. If $p(\gamma-1)-(\mu-\nu)\geq 0$, that is $\gamma\geq 1+\frac{\mu-\nu}{p}$, then, for $a=\{a_n\}_{n=1}^{\infty}, \, m\geq 1$, it is easy to see that $$\left|\sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}}{[\max\{m^{\alpha}, n^{\beta}\}]^{\gamma}}a_n\right|\leq \sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}}{[\max\{m^{\alpha}, n^{\beta}\}]^{1+\frac{\mu-\nu}{p}}}|a_n|.$$ Consequently, in view of the boundedness of $\widetilde{\mathbf{H}}^{\mu, \nu}_{\alpha, \beta}$, we conclude that $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ is bounded on $l^p$ when $p(\gamma-1)-(\mu-\nu)\geq 0$. Next, we prove the only if part. We will show that, if $p(\gamma-1)-(\mu-\nu)<0$, then $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ can not be bounded on $l^p$. Actually, let $\varepsilon>0$, we still take $\widetilde{a}=\{\widetilde{a}_n\}_{n=1}^{\infty}$ with $\widetilde{a}_n=\varepsilon^{\frac{1}{p}} n^{-\frac{1+\beta\varepsilon}{p}}$. We have \begin{equation}\label{jia-1}\|\widetilde{a}\|_p^p=\frac{1}{\beta}(1+o(1)), \, \varepsilon \rightarrow 0^{+}.\end{equation} It follows that \begin{eqnarray}\label{3-0}\|\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma} \widetilde{a}\|_{p}^p&=& \sum_{m=1}^{\infty}m^{(\alpha-1)+\alpha \mu}\left[\sum_{n=1}^{\infty} \frac{n^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}\cdot n^{-\frac{1+\beta\varepsilon}{p}}}{[\max\{m^{\alpha}, n^{\beta}\}]^{\gamma}}\right]^p \nonumber \\ & :=& \sum_{m=1}^{\infty}m^{(\alpha-1)+\alpha \mu}\cdot [R(m)]^p. 
\end{eqnarray} On the other hand, we have, for $m \geq 1$, \begin{eqnarray}\label{3-1} R(m)&\geq& \int_{1}^{\infty} \frac{x^{\frac{1}{p'}[(\beta-1)-(p'-1)\beta \nu]}\cdot x^{-\frac{1+\beta\varepsilon}{p}}}{[\max\{m^{\alpha}, x^{\beta}\}]^{\gamma}}\,dx \\ &=&\frac{1}{\beta} \int_{1}^{\infty} \frac{s^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{m^{\alpha}, s\}]^{\gamma}}\,ds \nonumber \\ &=&\frac{1}{\beta}m^{-\frac{\alpha}{p}[p(\gamma-1)+(1+\nu+\varepsilon)]} \int_{\frac{1}{m^{\alpha}}}^{\infty} \frac{t^{-\frac{1+\nu+\varepsilon}{p}}}{[\max\{1, t\}]^{\gamma}}\,dt \nonumber \\ &\geq & \frac{1}{\beta}m^{-\frac{\alpha}{p}[p(\gamma-1)+(1+\nu+\varepsilon)]} \int_{1}^{\infty}t^{-\frac{1+\nu+\varepsilon}{p}-\gamma}\,dt .\nonumber \end{eqnarray} Since $p(\gamma-1)-(\mu-\nu)<0$, i.e., $\gamma<1+\frac{\mu-\nu}{p}$, we get that \begin{equation}\label{3-2} \int_{1}^{\infty}t^{-\frac{1+\nu+\varepsilon}{p}-\gamma}\,dt \geq \int_{1}^{\infty}t^{-\frac{1+\mu+\varepsilon}{p}-1}\,dt=\frac{p}{1+\mu+\varepsilon}.\end{equation} Consequently, from (\ref{3-0})-(\ref{3-2}), we obtain that \begin{eqnarray}\|\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma} \widetilde{a}\|_{p}^p&\geq &\frac{p^p}{[\beta(1+\mu+\varepsilon)]^p}\left[\sum_{m=1}^{\infty} m^{\alpha[p(1-\gamma)+(\mu-\nu)-\varepsilon]-1}\right]. \nonumber \end{eqnarray} Suppose that $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}: l^p \rightarrow l^p$ is bounded. Then it follows from (\ref{jia-1}) that \begin{eqnarray}\label{c-1} +\infty&>&\frac{\|\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}\widetilde{a}\|_p^p}{\|\widetilde{a}\|_p^p}\nonumber\\&\geq&(1+o(1)) \frac{p^p}{[\beta(1+\mu+\varepsilon)]^p}\left[\sum_{m=1}^{\infty} m^{\alpha[p(1-\gamma)+(\mu-\nu)-\varepsilon]-1}\right]. \end{eqnarray} However, by $p(\gamma-1)-(\mu-\nu)<0$, we know that $p(1-\gamma)+(\mu-\nu)>0$. Hence, when $\varepsilon<p(1-\gamma)+(\mu-\nu)$, setting $\theta:=p(1-\gamma)+(\mu-\nu)-\varepsilon>0$, we see that $$\sum_{m=1}^{\infty}m^{\alpha[p(1-\gamma)+(\mu-\nu)-\varepsilon]-1} =\sum_{m=1}^{\infty}m^{\theta\alpha-1} =+\infty.$$ This contradicts (\ref{c-1}). This proves that $\mathbf{H}^{\mu, \nu}_{\alpha, \beta, \gamma}$ cannot be bounded on $l^p$ if $p(\gamma-1)-(\mu-\nu)<0$. Theorem \ref{m-1-1} is proved. \section{\bf {Proofs of Theorems \ref{m-1-3} and \ref{m-1-4}}} We shall first prove Theorem \ref{m-1-3}. We begin with the if part. By Lemma \ref{lem} and checking the proof of Theorem \ref{m-1-2}, we see that $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is bounded on $l^p$ if $d\rho(t)=(1-t)^{\gamma-1}d\lambda(t)$ is a $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$. The if part of Theorem \ref{m-1-3} is proved. Next, we show the only if part of Theorem \ref{m-1-3}. In our proof, we need the following well-known estimate; see \cite[Page 54]{Zh}. Let $0<w<1$. For any $c>0$, we have \begin{equation}\label{est}\sum_{n=1}^{\infty}n^{c-1}w^{2n}\asymp \frac{1}{(1-w^2)^c}. \end{equation} Now fix $0<w<1$.
We define $\mathbf{a}=\{\mathbf{a}\}_{n=1}^{\infty}$ as \begin{equation}\label{re} \mathbf{a}_n=(1-w^2)^{\frac{1}{p}}w^{\frac{2}{p}(n-1)}, n\in \mathbb{N}.\end{equation} Then it is easy to see that $\|\mathbf{a}\|_{p}=1.$ In view of the boundedness of $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$, we obtain that \begin{eqnarray}\label{last} 1 &\succeq& \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}\mathbf{a}\|_{p}^p \nonumber \\ &=&\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=1}^{m}\mathbf{a}_n n^{-\frac{\nu}{p}}\int_{0}^1t^{m-1}d\rho(t) +\sum_{n=m+1}^{\infty}\mathbf{a}_n n^{-\frac{\nu}{p}}\int_{0}^1t^{n-1}d\rho(t) \Bigg|^p \nonumber \\ &=&(1-w^2)\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=1}^{m} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{0}^1t^{m-1}d\rho(t) \nonumber \\ &&\quad\quad\quad+\sum_{n=m+1}^{\infty}w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{0}^1t^{n-1}d\rho(t) \Bigg|^p. \nonumber \end{eqnarray} (${\bf {I}}$) When $0\leq \nu<p-1$, we see that \begin{eqnarray}\label{re-1} 1& \succeq& \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}\mathbf{a}\|_{p}^p\geq (1-w^2)\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=1}^{m} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{0}^1t^{m-1}d\rho(t) \Bigg|^p \nonumber \\ &\geq & (1-w^2)\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=1}^{m} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{w}^1t^{m-1}d\rho(t) \Bigg|^p \nonumber \\ &\geq & (1-w^2)[\rho([w,1))]^p\sum_{m=1}^{\infty}m^{\mu}w^{p(m-1)}\Bigg[\sum_{n=1}^{m} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}} \Bigg]^p. \end{eqnarray} On the other hand, we note that, for any $m\geq 1$, \begin{equation*}\label{n-1}\sum_{n=1}^{m}w^{\frac{2}{p}(n-1)}n^{-\frac{\nu}{p}}\geq m\cdot w^{\frac{2}{p}(m-1)}m^{-\frac{\nu}{p}}=w^{\frac{2}{p}(m-1)}m^{1-\frac{\nu}{p}}.\end{equation*} Then we get that \begin{equation} \sum_{m=1}^{\infty}m^{\mu}w^{p(m-1)}\Bigg[\sum_{n=1}^{m} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}} \Bigg]^p\geq \sum_{m=1}^{\infty} m^{\mu+p(1-\frac{\nu}{p})}w^{(p+2)(m-1)}. \nonumber \end{equation} It follows from (\ref{re-1}) that \begin{equation}\label{end} 1\succeq(1-w^2)[\rho([w, 1))]^{p} \sum_{m=1}^{\infty} m^{\mu+p(1-\frac{\nu}{p})}w^{(p+2)(m-1)}. \nonumber \end{equation} Then we conclude from (\ref{est}) that $$(1-w^2)[\rho([w, 1))]^{p}\frac{1}{(1-w^2)^{\mu+p(1-\frac{\nu}{p})+1}}\preceq 1.$$ This implies that $$\rho([w, 1))\preceq (1-w^2)^{1+\frac{1}{p}(\mu-\nu)},\,\, {\text {for all}}\,\, w\in (0,1).$$ (${\bf {II}}$) When $-1<\nu<0$, we see that \begin{eqnarray}\label{re-2} 1& \succeq& \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}\mathbf{a}\|_{p}^p\geq (1-w^2)\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=m+1}^{\infty} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{0}^1t^{n-1}d\rho(t) \Bigg|^p \nonumber \\ &\geq & (1-w^2)\sum_{m=1}^{\infty}m^{\mu}\Bigg|\sum_{n=m+1}^{\infty} w^{\frac{2}{p}(n-1)} n^{-\frac{\nu}{p}}\int_{w}^1t^{n-1}d\rho(t) \Bigg|^p \nonumber \\ &\geq & (1-w^2)[\rho([w,1))]^p\sum_{m=1}^{\infty}m^{\mu}\Bigg[\sum_{n=m+1}^{\infty} w^{(\frac{2}{p}+1)(n-1)} n^{-\frac{\nu}{p}} \Bigg]^p. 
\end{eqnarray} Meanwhile, we note that, for any $m\geq 1$, \begin{eqnarray} \lefteqn{\sum_{n=m+1}^{\infty} w^{(\frac{2}{p}+1)(n-1)} n^{-\frac{\nu}{p}} \geq\sum_{n=m+1}^{\infty} w^{(\frac{2}{p}+1)(n-1)} m^{-\frac{\nu}{p}} } \nonumber\\ &&=m^{-\frac{\nu}{p}} \frac{w^{(\frac{2}{p}+1)m}}{1-w^{\frac{2}{p}+1}} \succeq m^{-\frac{\nu}{p}} \frac{w^{(\frac{2}{p}+1)m}}{1-w^{2}}.\nonumber \end{eqnarray} Then we get that \begin{equation} \sum_{m=1}^{\infty}m^{\mu}\Bigg[\sum_{n=m+1}^{\infty} w^{(\frac{2}{p}+1)(n-1)} n^{-\frac{\nu}{p}} \Bigg]^p \succeq \frac{1}{(1-w^{2})^p}\sum_{m=1}^{\infty}m^{\mu-\nu}w^{(p+2)m}. \nonumber \end{equation} It follows from (\ref{re-2}) that \begin{equation} 1\succeq\frac{1-w^2}{(1-w^{2})^p}[\rho([w, 1))]^{p} \sum_{m=1}^{\infty} m^{\mu-\nu}w^{(p+2)m}. \nonumber \end{equation} Then, from again (\ref{est}), we see that $$(1-w^2)[\rho([w, 1))]^{p}\frac{1}{(1-w^2)^{\mu-\nu+p+1}}\preceq 1.$$ This also implies that $$\rho([w, 1))\preceq (1-w^2)^{1+\frac{1}{p}(\mu-\nu)},\,\, {\text {for all}}\,\, w\in (0,1).$$ Combining (${\bf {I}}$) and (${\bf {II}}$), we see that $\rho$ is a $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$ and the only if part of \ref{m-1-3} is proved. Now, the proof of Theorem \ref{m-1-3} is finished. We next prove Theorem \ref{m-1-4}. We first show the if part. We assume that $\rho$ is a vanishing $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$. Let $\mathfrak{M}\in \mathbb{N}$, we define the operator $\mathbf{H}^{[\mathfrak{M}]}$ as, for $a=\{a_n\}_{n=1}^{\infty}$, $$\mathbf{H}^{[\mathfrak{M}]}(a)(m):=m^{\frac{\mu}{p}}\sum_{n=1}^{\infty}n^{-\frac{\nu}{p}}\mathcal{I}_{\lambda}[m, n]a_n,$$ when $m\leq \mathfrak{M}$, and $\mathbf{H}^{[\mathfrak{M}]}(a)(m):=0,$ when $m\geq \mathfrak{M}+1$. Then we see that $\mathbf{H}^{[\mathfrak{M}]}$ is a finite rank operator and hence it is compact on $l^{p}$. By Lemma \ref{lem}, we know that, for any $\epsilon>0$, there is an ${\mathbf{M}}\in \mathbb{N}$ such that $$\mathcal{I}_{\lambda}[m, n]\preceq \frac{\epsilon}{[\max\{m, n\}]^{1+\frac{1}{p}(\mu-\nu)}}$$ holds for all $n\geq 1, m>{\mathbf{M}}$. Then, we see from \begin{eqnarray}\|(\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}-\mathbf{H}^{[\mathfrak{M}]})a\|_{p}^p=\sum_{m=\mathfrak{M}+1}^{\infty}m^{{\mu}}\left|\sum_{n=1}^{\infty}n^{-\frac{\nu}{p}}\mathcal{I}_{\lambda}[m, n]a_n\right|^p,\nonumber \end{eqnarray} that, \begin{eqnarray}\|(\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}-\mathbf{H}^{[\mathfrak{M}]})a\|_{p}^p\preceq {\epsilon}^p \sum_{m=\mathfrak{M}+1}^{\infty}m^{{\mu}}\left|\sum_{n=1}^{\infty}\frac{a_n}{[\max\{m, n\}]^{1+\frac{1}{p}(\mu-\nu)}}\right|^p,\nonumber\end{eqnarray} when $\mathfrak{M}>{\mathbf{M}}.$ Consequently, by checking the proof of Theorem \ref{m-1-2}, we see that, for any $\epsilon>0$, it holds that \begin{eqnarray}\|(\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}-\mathbf{H}^{[\mathfrak{M}]})a\|_{p}\preceq {\epsilon} \|a\|_p, \nonumber\end{eqnarray} for all $a\in l^p$ when $\mathfrak{M}>{\mathbf{M}}$. It follows that $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is compact on $l^p$. This proves the if part of Theorem \ref{m-1-4} . Finally, we prove the only if part. For $0<w<1$. We take $\mathbf{{a}}=\{\mathbf{{a}}_n\}_{n=1}^{\infty}$ as in (\ref{re}). It is easy to check that $\{\mathbf{{a}}_n\}_{n=1}^{\infty}$ is convergent weakly to $0$ on $l^{p}$ as $w\rightarrow 1^{-}$. 
Since $\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}$ is compact on $l^p$, we get that \begin{equation}\label{com-0}\lim_{w\rightarrow {1^{-}}} \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}\mathbf{{a}}\|_{p}=0.\end{equation} On the other hand, by checking the arguments of the proof of Theorem \ref{m-1-3}, we have \begin{eqnarray} \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}\mathbf{{a}}\|_{p}^p &\succeq&[\rho([w,1))]^p\cdot \frac{1}{(1-w^2)^{\mu-\nu+p}}.\nonumber \end{eqnarray} This yields that \begin{eqnarray} \rho([w,1))\preceq \|\widehat{\mathbf{H}}^{\mu, \nu}_{\gamma, \lambda}{\mathbf{{a}}}\|_{p} (1-w^2)^{1+\frac{1}{p}(\mu-\nu)}.\nonumber \end{eqnarray} It follows from (\ref{com-0}) that $\rho$ is a vanishing $[1+\frac{1}{p}(\mu-\nu)$-Carleson measure on $[0, 1)$. This proves the only if part of Theorem \ref{m-1-4} and the proof of Theorem \ref{m-1-4} is completed. \section{\bf {Final Remarks}} \begin{remark} We first point out that the assumptions $-1<\mu, \nu<p-1$ in Theorem \ref{m-1-1} and \ref{m-1-2} are both necessary. We consider the case $\alpha=\beta=\gamma=1, \mu=\nu:=\delta$. That is to say, we will consider the operator $$\mathbf{H}^{\delta, \delta}_{1, 1, 1}(a)(m)=m^{\frac{\delta}{p}}\sum_{n=1}^{\infty} \frac{n^{-\frac{\delta}{p}}a_n}{\max\{m, n\}} ,\, a=\{a_n\}_{n=1}^{\infty}, \, m\geq 1. $$ We will use $\overline{\mathbf{H}}_{\delta}$ to denote $\mathbf{H}^{\delta, \delta}_{1, 1, 1}$. We shall show that \begin{proposition}$\overline{\mathbf{H}}_{\delta}$ is not bounded on $l^p$, if $\delta\leq -1$, or $\delta\geq p-1$.\end{proposition} \begin{proof} For $\varepsilon>0$, we take $\bar{a}=\{\bar{a}_n\}_{n=1}^{\infty}$ with $\bar{a}_n=\varepsilon^{\frac{1}{p}}n^{-\frac{1+\varepsilon}{p}}$. We see that $$\|\bar{a}\|_p^p=\varepsilon+ \varepsilon\sum_{n=2}^{\infty}n^{-1-\varepsilon}\leq \varepsilon+1,$$ and $$\|\overline{\mathbf{H}}_{\delta}\bar{a}\|_p^p=\sum_{m=1}^{\infty}m^{\delta}\left[\sum_{n=1}^{\infty} \frac{n^{-\frac{\delta+1+\varepsilon}{p}}}{\max\{m, n\}}\right]^p.$$ (${\bf {I}}$) If $\delta< -1$, when $\varepsilon<-(\delta+1)$, we see from $\delta+1+\varepsilon<0$ that, for any fixed $m \geq 1$, it holds that $$\sum_{n=1}^{\infty} \frac{n^{-\frac{\delta+1+\varepsilon}{p}}}{\max\{m, n\}}\geq \sum_{n=m}^{\infty} \frac{n^{-\frac{\delta+1+\varepsilon}{p}}}{n}\geq \sum_{n=m}^{\infty} n^{-1-\frac{\delta+1+\varepsilon}{p}}=+\infty. $$ This means that $\overline{\mathbf{H}}_{\delta}$ is not bounded on $l^p$ in this case. (${\bf {II}}$) If $\delta=-1$ or $\delta\geq p-1$, for all $m\geq 1$, we have \begin{eqnarray}\sum_{n=1}^{\infty} \lefteqn{\frac{ n^{-\frac{\delta+1+\varepsilon}{p}}}{\max\{m, n\}} \geq \int_{1}^{\infty} \frac{x^{-\frac{\delta+1+\varepsilon}{p}}}{\max\{m, x\}}\,dx} \nonumber \\ &=&m^{-\frac{\delta+1+\varepsilon}{p}} \int_{\frac{1}{m}}^{\infty} \frac{t^{-\frac{\delta+1+\varepsilon}{p}}}{\max\{1, t\}}\,dt \nonumber \\&\geq &m^{-\frac{\delta+1+\varepsilon}{p}}\int_{1}^{\infty} t^{-1-\frac{\delta+1+\varepsilon}{p}}\,dt=\frac{p}{1+\delta+\varepsilon}m^{-\frac{\delta+1+\varepsilon}{p}}. \nonumber \end{eqnarray} Consequently, \begin{eqnarray}\|\overline{\mathbf{H}}_{\delta}\bar{a}\|_p^p\geq \left(\frac{p}{1+\delta+\varepsilon}\right)^p\cdot \sum_{m=1}^{\infty}\frac{1}{m^{1+\varepsilon}}. 
\nonumber \end{eqnarray} On the other hand, we have \begin{eqnarray} \sum_{m=1}^{\infty}\frac{1}{m^{1+\varepsilon}}=\frac{1}{\varepsilon}(1+o(1)), \, \varepsilon \rightarrow 0^{+}.\nonumber \end{eqnarray} Therefore, we get that \begin{eqnarray}\|\overline{\mathbf{H}}_{\delta}\bar{a}\|_p^p\geq \left(\frac{p}{1+\delta+\varepsilon}\right)^p \frac{1}{\varepsilon}(1+o(1)).\nonumber \end{eqnarray} Taking $\varepsilon \rightarrow 0^{+}$, we obtain that $\|\overline{\mathbf{H}}_{\delta}\bar{a}\|_p^p \rightarrow +\infty$. This implies that $\overline{\mathbf{H}}_{\delta}$ is not bounded on $l^p$ when $\delta=-1$ or $\delta\geq p-1$. The proposition is proved. \end{proof} \end{remark} \begin{remark} When $\gamma=1$, from Theorems \ref{m-1-3} and \ref{m-1-4}, we have \begin{corollary} Let $p>1$ and $-1<\mu, \nu<p-1$. Let $\lambda$ be a positive finite Borel measure on $[0, 1)$ and $\check{\mathbf{H}}^{\mu, \nu}_{\lambda}$ be defined as $$\check{\mathbf{H}}^{\mu, \nu}_{\lambda}(a)(m):=\sum_{n=1}^{\infty} m^{\frac{\mu}{p}}n^{-\frac{\nu}{p}}\check{\mathcal{I}}_\lambda[m,n]a_n, \, a=\{a_n\}_{n=1}^{\infty}, \, m\geq 1.$$ Here \begin{equation}\check{\mathcal{I}}_\lambda[m, n]=\int_{[0, 1)}t^{\max\{m, n\}-1}d\lambda(t), \,m, n\geq 1. \nonumber \end{equation} Then $\check{\mathbf{H}}^{\mu, \nu}_{\lambda}$ is bounded (compact) on $l^p$ if and only if $\lambda$ is a (vanishing) $[1+\frac{1}{p}(\mu-\nu)]$-Carleson measure on $[0, 1)$, respectively. \end{corollary} \end{remark} \begin{remark} We finally consider the Hardy-Littlewood-P\'olya-type operator acting on spaces of analytic functions in the unit disk $\mathbb{D}$. Let $\mathcal{A}(\mathbb{D})$ be the class of all analytic functions in the unit disk $\mathbb{D}$ of the complex plane. In recent years, for a function $f(z)=\sum_{n=0}^{\infty}a_n z^n \in \mathcal{A}(\mathbb{D})$, the following Hilbert operator $\mathcal{H}$, acting on the Taylor coefficients of $f$, together with its variants and generalizations, has been extensively studied, see \cite{BK}, \cite{D-1, D, DS, DJV}, \cite{GGPS, GP, GM-2}, \cite{Ka}, \cite{LMW}, \cite{LMN}, \cite{PR}. \begin{equation}\label{hhh-1}\mathcal{H}(f)(z):=\sum_{m=0}^{\infty}\Bigg[\sum_{n=0}^{\infty}\frac{a_n}{m+n+1}\Bigg]z^m. \nonumber \end{equation} For $\gamma>0$ and $f=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{A}(\mathbb{D})$, we similarly define the Hardy-Littlewood-P\'olya-type operator ${\mathbb{H}}_{\gamma}$ as \begin{equation*} {\mathbb{H}}_{\gamma}(f)(z):=\sum_{m=0}^{\infty} \left(\sum_{n=0}^{\infty} \frac{a_n}{[\max\{m+1, n+1\}]^{\gamma}}\right)z^m. \end{equation*} We will investigate the boundedness of ${\mathbb{H}}_{\gamma}$ acting on certain spaces of analytic functions in $\mathbb{D}$. Let $q$ be a positive number and $X_q$ be a Banach space of analytic functions in $\mathbb{D}$. For any $f\in X_q$, we assume that the norm $\|f\|_{X_{q}}$ of $f$ is determined by $f$, $q$ and finitely many other parameters $\beta_1, \beta_2, \cdots, \beta_k$. Here $k$ is a non-negative integer and $k=0$ means that there is no parameter. We denote by $\mathcal{P}(\mathbb{D})$ the class of all functions $f=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{H}(\mathbb{D})$ such that $\{a_n\}_{n=0}^{\infty}$ is a decreasing sequence of non-negative real numbers.
We say $X_q$ have the {\em sequence-like property}, if, for a function $f\in \mathcal{P}(\mathbb{D})$, there is a constant $\mathbb{I}_X=\mathbb{I}_{X}(q, \beta_1, \beta_2, \cdots, \beta_k)$ with $\mathbb{I}_X>-1$ such that $f \in X_q$ if and only if $$\sum_{n=0}^{\infty} (n+1)^{\mathbb{I}_X}a_n^{q}<+\infty.$$ We point out that many classical spaces of analytic functions in $\mathbb{D}$ have the sequence-like property. Let $f=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{P}(\mathbb{D})$. For example, (\dag) the Hardy space $H^q(\mathbb{D}), 1<q<\infty$, we know that, see \cite[page 127]{Pa}, $f \in H^q(\mathbb{D})$ if and only if $$\sum_{n=0}^{\infty} (n+1)^{q-2}a_n^{q}<+\infty.$$ (\dag\dag) For $1<q<\infty$, let $-2<\alpha\leq q-1$. It holds that, see \cite[Lemma 4]{GM-2}, $f \in \mathcal{D}^q_{\alpha}(\mathbb{D})$ if and only if $$\sum_{n=0}^{\infty} (n+1)^{2q-3-\alpha}a_n^{q}<+\infty.$$ Here $\mathcal{D}^q_{\alpha}(\mathbb{D})$ is the Dirichlet-type space, defined as \begin{eqnarray}\lefteqn{\mathcal{D}^q_{\alpha}(\mathbb{D})=\Bigg\{f\in \mathcal{H}({\mathbb{D}}): \|f\|_{\mathcal{D}^q_{\alpha}}=|f(0)|} \nonumber\\ &&\quad\quad\quad\quad+\Bigg[(\alpha+1)\int_{\mathbb{D}}|f'(z)|^q(1-|z|^2)^{\alpha}dA(z)\Bigg]^{\frac{1}{q}}<+\infty\Bigg\}.\nonumber \end{eqnarray} (\dag\dag\dag) For $1<q<\infty$, let $-1<\alpha<q+2$. It holds that, see \cite[Proposition 1]{GM-2}, $f \in \mathcal{A}^q_{\alpha}(\mathbb{D})$ if and only if $$\sum_{n=0}^{\infty} (n+1)^{q-3-\alpha}a_n^{q}<+\infty.$$ Here $\mathcal{A}^q_{\alpha}(\mathbb{D})$ is the Bergman space, defined as $$\mathcal{A}^q_{\alpha}(\mathbb{D})=\left\{f\in \mathcal{H}({\mathbb{D}}): \|f\|_{\mathcal{A}^q_{\alpha}}^q=(\alpha+1)\int_{\mathbb{D}}|f(z)|^q(1-|z|^2)^{\alpha}dA(z)<+\infty\right\}.$$ We obtain that \begin{proposition}\label{rem} Let $\gamma$, $q$ be two positive numbers. Let $X_q$ be a Banach space of analytic functions in $\mathbb{D}$ which has the sequence-like property and ${\mathbb{H}}_{\gamma}$ be as above. Then the necessary condition of ${\mathbb{H}}_{\gamma}: X_q\rightarrow X_q$ is bounded is $\gamma\geq 1$. \end{proposition} \begin{proof} We will prove that, ${\mathbb{H}}_{\gamma}: X_q\rightarrow X_q$ can not be bounded, if $0<\gamma<1$. Let $\varepsilon>0$ and set $\widetilde{f}_{\varepsilon}=\sum_{n=0}^{\infty}\widetilde{a}_n z^n$ with $\widetilde{a}_n=(\frac{\varepsilon}{1+\varepsilon})^{\frac{1}{q}}(n+1)^{-\frac{\mathbb{I}_X+1+\varepsilon}{q}}.$ It is easy to see that $\{\widetilde{a}_n\}_{n=0}^{\infty}$ is a decreasing sequence and $\sum_{n=0}^{\infty}(n+1)^{\mathbb{I}_{X}}\widetilde{a}_n^{q}<\infty.$ Hence $\widetilde{f}_\varepsilon \in X_q$. We set $$b_m=\sum_{n=0}^{\infty}\frac{\widetilde{a}_n}{[\max\{m+1, n+1\}]^{\gamma}}, \, m\geq 0.$$ We suppose that ${\mathbb{H}}_{\gamma}: X_q\rightarrow X_q$ is bounded. 
Then, by the fact that $\{b_n\}_{n=0}^{\infty}$ is a decreasing sequence, we see that $g(z)=\sum_{n=0}^{\infty}b_n z^n \in X_q$ and hence $$\sum_{m=0}^{\infty} (m+1)^{\mathbb{I}_X} b_m^q <+\infty.$$ That is, \begin{eqnarray}\label{e-1} + \infty &>& \sum_{m=0}^{\infty} (m+1)^{\mathbb{I}_X} \left[\sum_{n=0}^{\infty}\frac{\widetilde{a}_n}{[\max\{m+1, n+1\}]^{\gamma}}\right]^q \\ &=& \frac{\varepsilon}{1+\varepsilon} \sum_{m=0}^{\infty} (m+1)^{\mathbb{I}_X} \left[\sum_{n=0}^{\infty}\frac{(n+1)^{-\frac{\mathbb{I}_X+1+\varepsilon}{q}}}{[\max\{m+1, n+1\}]^{\gamma}}\right]^q.\nonumber \end{eqnarray} On the other hand, for any $m\geq 0$, we have \begin{eqnarray}\label{e-2} \lefteqn{\sum_{n=0}^{\infty}\frac{(n+1)^{-\frac{\mathbb{I}_X+1+\varepsilon}{q}}}{[\max\{m+1, n+1\}]^{\gamma}}=\sum_{n=1}^{\infty}\frac{n^{-\frac{\mathbb{I}_X+1+\varepsilon}{q}}}{[\max\{m+1, n\}]^{\gamma}}}\\ &\geq& \int_{1}^{\infty} \frac{x^{-\frac{\mathbb{I}_X+1+\varepsilon}{q}}}{[\max\{m+1,x\}]^{\gamma}}\,dx \nonumber \\ & =& (m+1)^{(1-\gamma)-\frac{\mathbb{I}_X+1+\varepsilon}{q}} \cdot \int_{\frac{1}{m+1}}^{\infty} \frac{y^{-\frac{\mathbb{I}_X+1+\varepsilon}{q} }}{[\max\{1, y\}]^{\gamma}}\,dy \nonumber \\ &\geq& (m+1)^{(1-\gamma)-\frac{\mathbb{I}_X+1+\varepsilon}{q}}\int_{1}^{\infty} y^{-\gamma-\frac{\mathbb{I}_X+1+\varepsilon}{q} }\,dy \nonumber \\ &:=& (m+1)^{(1-\gamma)-\frac{\mathbb{I}_X+1+\varepsilon}{q}}E(\varepsilon).\nonumber \end{eqnarray} Combining (\ref{e-1}) and (\ref{e-2}), we get that \begin{eqnarray}\label{e-3} +\infty &>& \frac{\varepsilon}{1+\varepsilon} [E(\varepsilon)]^q \left[\sum_{m=0}^{\infty}(m+1)^{q(1-\gamma)-1-\varepsilon}\right].\end{eqnarray} But, if $\gamma< 1$, when $\varepsilon<q(1-\gamma)$, we have $$\sum_{m=0}^{\infty}(m+1)^{q(1-\gamma)-1-\varepsilon}=+\infty.$$ Thus (\ref{e-3}) is a contradiction since $E(\varepsilon)>0$. This means that ${\mathbb{H}}_{\gamma}: X_q\rightarrow X_q$ cannot be bounded when $\gamma< 1$. The proposition is proved. \end{proof} \end{remark} For $\gamma>0$, let $\lambda$ be a positive Borel measure on $[0,1)$; we define the operator \begin{equation}\label{hhh-2}\mathbb{H}_{\gamma, \lambda}(f)(z):=\sum_{m=0}^{\infty}\Bigg[\sum_{n=0}^{\infty}\mathbf{I}_{\gamma, \lambda}[m, n]a_n\Bigg]z^m, \,f=\sum_{n=0}^{\infty}a_n z^n \in \mathcal{A}(\mathbb{D}). \nonumber \end{equation} Here \begin{equation}\label{las}\mathbf{I}_{\gamma, \lambda}[m, n]=\int_{[0, 1)}t^{\max\{m, n\}}(1-t)^{\gamma-1}d\lambda(t), \,m, n\geq 0. \end{equation} When $\lambda$ in (\ref{las}) is the Lebesgue measure on $[0, 1)$, we see that $$\mathbf{I}_{\gamma, \lambda}[m, n] \asymp \frac{1}{[\max\{m+1, n+1\}]^{\gamma}}, \, m, n\geq 0.$$ It is interesting to study the following question. \begin{question} Characterize the measures $\lambda$ such that $\mathbb{H}_{\gamma, \lambda}$ is bounded (compact) from one analytic function space $X$ to another one $Y$. \end{question} \begin{thebibliography}{1} \bibitem{AAR} Andrews G., Askey R., Roy R., {\em Special functions}, Cambridge University Press, Cambridge, 1999. \bibitem{BK} Bo\v{z}in V., Karapetrovi\'{c} B., {\em Norm of the Hilbert matrix on Bergman spaces}, J. Funct. Anal., 274 (2018), no. 2, pp. 525-543. \bibitem{B} Brevig O., {\em Sharp norm estimates for composition operators and Hilbert-type inequalities}, Bull. Lond. Math. Soc., 49 (2017), pp. 965-978. \bibitem{B-1} Brevig O., {\em The best constant in a Hilbert-type inequality}, Expo. Math., 42 (2024), no. 1, Paper No. 125530, 11 pp.
\bibitem{D-1} Diamantopoulos E., {\em Operators induced by Hankel matrices on Dirichlet spaces}, Analysis (Munich), 24 (2004), no. 4, pp. 345-360. \bibitem{D}Diamantopoulos E., {\em Hilbert matrix on Bergman spaces}. Illinois J. Math., 48 (2004), no. 3, pp. 1067-1078. \bibitem{DS}Diamantopoulos E., Siskakis Aristomenis G., {\em Composition operators and the Hilbert matrix}, Studia Math., 140 (2000), no. 2, pp. 191-198. \bibitem{DJV} Dostani\'{c} M., Jevti\'{c}, M., Vukoti\'{c}, D., {\em Norm of the Hilbert matrix on Bergman and Hardy spaces and a theorem of Nehari type}, J. Funct. Anal., 254 (2008), no. 11, pp. 2800-2815. \bibitem{FWL}Fu Z., Wu Q., Lu, S., {\em Sharp estimates for $p$-adic Hardy, Hardy- Littlewood- P\'{o}lya operators and commutators}, Acta Math. Sin., 29(2013), pp. 137-150. \bibitem{GGPS}Galanopoulos P., Girela D., Pel\'{a}ez J., Siskakis Aristomenis G., {\em Generalized Hilbert operators}, Ann. Acad. Sci. Fenn. Math., 39 (2014), no. 1, pp. 231-258. \bibitem{GP}Galanopoulos P., Pel\'{a}ez J., {\em A Hankel matrix acting on Hardy and Bergman spaces}, Studia Math., 200(2010), pp. 201-220. \bibitem{GM-2}Girela D., Merch\'{a}n N., {\em Hankel matrices acting on the Hardy space $H^1$ and on Dirichlet spaces}, Rev. Mat. Complut., 32(2019), pp. 799-822. \bibitem{HLP}Hardy G., Littlewood J., P\'{o}lya G., {\em Inequalities}, Cambridge University Press, Cambridge, 1952. \bibitem{Ka} Karapetrovi\'{c} B., {\em Hilbert matrix and its norm on weighted Bergman spaces}, J. Geom. Anal., 31 (2021), no. 6, pp. 5909-5940. \bibitem{LMW}Lindstr\"om M., Miihkinen S., Wikman N., {\em On the exact value of the norm of the Hilbert matrix operator on the weighted Bergman spaces}, Ann. Fenn. Math., 46(2021), pp. 201-224. \bibitem{LMN}Lindstr\"om M., Miihkinen S., Norrbo D., {\em Exact essential norm of generalized Hilbert matrix operators on classical analytic function spaces}, Adv. Math., 408(2022), Paper No. 108598, 34 pp. \bibitem{M}Mitrinovi\'{c}, D., {\em Analytic inequalities}, Springer-Verlag, New York-Berlin, 1970. \bibitem{Pa} Pavlovi\'{c} M., {\em Introduction to function spaces on the disk}, Matemati\v{c}ki Institut SANU, Belgrade, 2004. \bibitem{PR}Pel\'{a}ez J., R\"{a}tty\"{a} J., {\em Generalized Hilbert operators on weighted Bergman spaces}. Adv. Math., 240 (2013), pp. 227-267. \bibitem{WHY} Wu F., Hong Y., Yang B., {\em A refined Hardy-Littlewood-Polya inequality and the equivalent forms}, J. Math. Inequal., 16(2022), no.4, pp. 1477-1491. \bibitem{YR}Yang B., Rassias T., {\em On the way of weight coefficient and research for the Hilbert-type inequalities}, Math. Inequal. Appl., 6(2003), pp. 625-658. \bibitem{Y3}Yang B., {\em The Norm of Operator and Hibert-type Inequalities(in Chinese)}, Science Press, 2009. \bibitem{YZ} Yang, B., Zhong Y., {\em On a reverse Hardy-Littlewood-Pólya's inequality}, J. Appl. Anal. Comput., 10(2020), no.5, pp. 2220-2232. \bibitem{Zh}Zhu K., {\em Operator Theory in Function Spaces}, Marcel Dekker, New York, 1990. \end{thebibliography} \end{document}
2205.07594v1
http://arxiv.org/abs/2205.07594v1
Random walks and rank one isometries on CAT(0) spaces
\documentclass[12pt,a4paper]{article} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{url} \usepackage{amsmath,amsthm,amssymb} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{esint} \usepackage{soul} \usepackage{floatrow} \usepackage[a4paper,hcentering,vcentering]{geometry} \usepackage{subeqnarray} \usepackage[all]{xy} \usepackage{varioref} \usepackage{tikz} \usepackage{hyperref} \usepackage{pgfplots} \DeclareMathOperator{\cat}{\text{CAT}} \DeclareMathOperator{\sigmap}{\sigma (+ \infty)} \DeclareMathOperator{\sigmam}{\sigma(- \infty)} \DeclareMathOperator{\supp}{\text{supp}} \DeclareMathOperator{\haar}{Haar} \DeclareMathOperator{\ugm}{\it{U^{-}_g}} \DeclareMathOperator{\ugp}{\it{U^{+}_g}} \DeclareMathOperator{\uhm}{\it{U^{-}_h}} \DeclareMathOperator{\uhp}{\it{U^{+}_h}} \DeclareMathOperator{\iso}{Isom} \DeclareMathOperator{\zp}{\z^{+}(\omega)} \newcommand{\bd}{\partial_\infty} \newcommand{\nui}{\check{\nu}} \newcommand{\mui}{\check{\mu}} \newcommand{\pmui}{P_{\mui}} \newcommand{\prob}{\text{Prob}} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem*{Rank thm}{Rank Rigidity Theorem} \newtheorem*{Rank cjt}{Rank Rigidity Conjecture} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{Def}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \theoremstyle{definition} \newtheorem*{ackn}{Acknowledgement} \title{Random walks and rank one isometries on CAT(0) spaces} \author{Corentin Le Bars} \date{\vspace{-5ex}} \begin{document} \maketitle \section{Introduction} Let $G$ be a discrete countable group and $\mu \in \prob (G)$ a probability measure on $G$. We define the random walk $(Z_n)_n$ on $G$ as the sequence $Z_n := \omega_1 \dots \omega_n$, where the $\omega_i'$s are chosen independently according to the law $\mu$. Fixing a point $x \in X$, where $X$ is a given metric space on which $G$ acts by isometries, we want to study the sequence of points $(Z_n x)_n$. In this paper, we are particularly interested in the asymptotic behaviour of this random walk, and if the space $X$ has the right geometric features, one can hope that $(Z_n x)_n$ is going to converge almost surely in a natural compactification of $X$. The typical setting in which results have been obtained is when $X$ is of negative curvature, or at least when $X$ has some kind of hyperbolic properties. In the fundamental paper of V. Kaimanovich \cite{kaimanovitch00}, the convergence of $(Z_n x)_n$ to a point of the visual boundary is proven for groups acting geometrically on proper hyperbolic spaces and several other classes of actions. More recently this result has been extended by J. Maher and G. Tiozzo in \cite{maher_tiozzo18} for groups acting by isometries on non proper hyperbolic spaces. We emphasize on the fact studying random walks on non proper spaces involves different techniques, as in the non proper case the space $\overline{X}$ is no longer a compactification of $X$: $\overline{X}$ might be non compact, and $X$ is no longer open in $\overline{X}$. In the sequel, we will only consider proper metric spaces. \newline In our case, $X$ is a proper $\cat(0)$ space, and $G$ acts on $X$ by isometries, but not necessarily cocompactly nor properly. If the reader wants a detailed introduction to $\cat(0)$ geometry, standard references are \cite{bridson_haefliger99} and \cite{ballman95}. 
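To fix ideas, here is a small numerical sketch; it is ours, purely illustrative, and not used anywhere in the paper. We take for $G$ the subgroup of $\mathrm{SL}(2, \mathbb{Z})$ generated by the two matrices listed in the script below (a classical example of a free group of rank $2$), acting by M\"obius transformations on the hyperbolic plane, with basepoint $x=i$, and for $\mu$ the uniform measure on the generators and their inverses. The script prints $\frac{1}{n}d(Z_n x, x)$, which stabilizes around a positive value (the drift), and the real part of $Z_n x$, which settles near a point of the boundary $\mathbb{R}\cup\{\infty\}$, in accordance with the results described above.
\begin{verbatim}
import math, random

# Illustrative sketch (not part of the paper): random walk Z_n = w_1...w_n on a
# free subgroup of SL(2,Z), acting on the hyperbolic plane with basepoint i.
GENS = [((1, 2), (0, 1)), ((1, -2), (0, 1)),
        ((1, 0), (2, 1)), ((1, 0), (-2, 1))]

def mult(g, h):                          # product of two 2x2 integer matrices
    (a, b), (c, d) = g
    (e, f), (u, v) = h
    return ((a*e + b*u, a*f + b*v), (c*e + d*u, c*f + d*v))

random.seed(0)
Z = ((1, 0), (0, 1))                     # Z_0 = identity
for n in range(1, 201):
    Z = mult(Z, random.choice(GENS))     # Z_n = Z_{n-1} w_n, w_n i.i.d. uniform
    if n % 40 == 0:
        (a, b), (c, d) = Z
        # d(i, Z_n.i) = acosh((a^2+b^2+c^2+d^2)/2) for matrices of determinant 1
        drift = math.acosh((a*a + b*b + c*c + d*d) / 2) / n
        limit = (a*c + b*d) / (c*c + d*d)   # Re(Z_n.i), approaching a boundary point
        print(n, round(drift, 3), round(limit, 4))
\end{verbatim}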
The leading examples to have in mind when thinking about $\cat(0)$ spaces are Hadamard manifolds, $\cat(0)$ cube complexes and buildings. We simply recall that there exists a natural compactification $\overline{X}$ of $X$, for which every point of the boundary is represented by some class of asymptotic geodesic rays. In particular, for $(g_n)_n$ a sequence of isometries in $G$, the convergence of $(g_n x)_n$ to a point $\xi \in \bd X$ does not depend on the basepoint $x$. In this paper, we show that the random walk $(Z_n x)_n$ converges almost surely to a point of the boundary, provided that there exist rank one elements in the group. A rank one element is an axial isometry whose axes do not bound any flat half-plane. Due to \cite[Theorem 5.4]{bestvina_fujiwara09}, an isometry is rank one if one of its axes $\sigma$ is ``contracting'', namely there exists a constant $K$ such that the projection onto $\sigma$ of every geodesic ball disjoint from $\sigma$ has diameter less than $K$. This type of behaviour typically appears in hyperbolic spaces, where any loxodromic isometry is contracting in this sense. Thus, it is often fruitful to think about rank one isometries as isometries that satisfy hyperbolic-like properties. Another important feature of rank one isometries is that they act on the boundary with North-South dynamics \cite[Lemma III. 3. 3]{ballman95}. Namely, for $g$ a rank one isometry of $X$, there exist two points $g^{+}$ and $g^{-}$ in $\bd X$ such that the successive powers of $g$ contract the whole boundary $\bd X$ minus $g^{-}$ onto $g^{+}$. In Section \ref{rank one section}, we review some of these results and their consequences. \newline The study of rank one elements is motivated by the Rank Rigidity theory for Riemannian manifolds of non positive curvature, due to W. Ballmann, M. Brin, R. Spatzier, P. Eberlein and K. Burns (see \cite{ballmann_brin95}, \cite{ballman_brin_eberlein}, \cite{burns_spatzier}, \cite{eberlein_heber} and \cite{ballman_burns_spatzier}). For a detailed introduction and proof of the Rank Rigidity Theorem for Hadamard manifolds, see \cite{ballman95}. The Theorem states that if $M$ is a Hadamard manifold, and if $G$ is a discrete group acting properly and cocompactly on $M$, then either $M$ decomposes as a non-trivial product of two manifolds, or $M$ is a higher rank symmetric space, or $G$ contains a rank one isometry. This alternative appears to hold for wider classes of $\cat(0)$ spaces, which led to the formulation of a general conjecture. We state one of the possible formulations as it is written in \cite{caprace_sageev11}. Recall that a metric space is said to be geodesically complete if every geodesic can be extended to a bi-infinite geodesic. \begin{Rank cjt} Let $X$ be a locally compact geodesically complete $\cat(0)$ space, and $G$ a discrete group acting properly and cocompactly on $X$ by isometries. Assume that $X$ is irreducible. Then $X$ is either a higher rank symmetric space or a Euclidean building of dimension $\geq 2$, or $G$ contains a rank one isometry. \end{Rank cjt} Over the last thirty years, the conjecture was proven to hold for Euclidean cell complexes of dimension 2 and 3 (Ballmann and Brin \cite{ballmann_brin95}), and P-E. Caprace and K. Fujiwara have proven that it also holds within the class of buildings and Coxeter groups \cite{caprace_fujiwara10}. More recently, P-E. Caprace and M. Sageev have proven that it remains true for finite-dimensional $\cat (0)$ cube complexes \cite{caprace_sageev11}.
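As a concrete illustration of the North-South behaviour described above (this aside is ours and is not used in the sequel), consider the matrix $g=\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$, viewed as an isometry of the hyperbolic plane: it is axial, automatically rank one since the hyperbolic plane contains no flat half-plane, and its fixed points on the boundary $\mathbb{R}\cup\{\infty\}$ are $g^{+}=\frac{1+\sqrt{5}}{2}$ (attracting) and $g^{-}=\frac{1-\sqrt{5}}{2}$ (repelling). The short script below iterates the boundary action $x\mapsto \frac{2x+1}{x+1}$ and shows starting points other than $g^{-}$ being pushed onto $g^{+}$.
\begin{verbatim}
# Illustrative sketch (not from the paper): North-South dynamics on the
# boundary R u {oo} of the hyperbolic plane for g = [[2,1],[1,1]].
g = lambda x: (2*x + 1) / (x + 1)

for x0 in (-100.0, -0.7, 0.0, 0.5, 3.0, 100.0):
    x = x0
    for _ in range(25):
        x = g(x)                  # iterate the boundary action of g
    print(x0, "->", round(x, 6))  # every orbit approaches (1 + sqrt(5))/2
\end{verbatim}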
\newline It is to be noted that rank one elements may play important roles in other fields of research, among which the Tits Alternative conjecture, stating that every finitely generated subgroup of a group acting geometrically on a $\cat(0)$ space either contains a nonabelian free subgroup or is virtually solvable. The Tits alternative is known in many cases, see \cite{caprace_sageev11}, \cite{osajda_przytycki21} and references therein. More recently, D.~Osajda and P.~Przytycki have proven that it holds in the context of an almost free action on a $\cat(0)$ triangle complex, which we do not assume locally compact \cite[Theorem A]{osajda_przytycki21}. In the proof, they give a geometric criterion on the $\cat(0)$ triangle complex for finding rank one elements in the group \cite[Proposition 7.3]{osajda_przytycki21}, and hence free subgroups. Note that in this situation, the action needs not be cocompact. \newline Last, note that the presence of rank one elements in a group acting on a $\cat(0)$ space can be detected by general conditions on the group action along with some purely geometric features of the space, for example concerning the Tits boundary of the space, see Section \ref{tits metric subsection}. \newline A measure $\nu $ on $X$ is said to be $\mu$-stationary if $\mu \ast \nu = \nu $, where $\mu \ast \nu$ defines the convolution measure of $\mu $ and $\nu$. When we want to study random walks generated by a measure $\mu$ on $G$, it is often fruitful to endow $\overline{X}$ with a probability measure which is stationary with respect to $\mu$. It allows to use important results in measure theory including those of H. Furstenberg \cite{furstenberg73}. They will be reviewed in Section \ref{stationary section}. \newline The first result of this paper is the fact that in our context, there is a unique $\mu$-stationary measure on $\overline{X}$. \begin{thm}\label{measure thm} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$, and assume that $G $ contains a rank one element. Then there exists a unique $\mu$-stationary measure $\nu \in \prob (\overline{X})$. \end{thm} Theorem \ref{measure thm} is fundamental in order to obtain the almost sure convergence of the random walk $(Z_n x)_n$ to the boundary. However, we think that the uniqueness of the stationary measure can be of independent interest. The second result, and the main Theorem of this article is the almost sure convergence to the boundary. \begin{thm}\label{convergence thm} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$, and assume that $G $ contains a rank one element. Then for every $x \in X$, and for $\mathbb{P}$-almost every $\omega \in \Omega$, the random walk $(Z_n (\omega) x)_n $ converges almost surely to a boundary point $z^{+}(\omega) \in\bd X$. Moreover, $z^{+}(\omega)$ is distributed according to the stationary measure $\nu$. \end{thm} Results of convergence of this type had already been obtained for special cases. In the context of a fundamental group $\pi_1(M)$ of a compact rank one Riemannian manifold $M$ acting on the sphere at infinity $\partial \widetilde{M}$ of the universal covering space $\widetilde{M}$, the convergence of the sample paths was proven by Ballmann \cite[Theorem 2.2]{ballmann89}. 
In the context of finite dimensional $\cat(0)$ cube complexes, the behaviour of the sample paths of a given group has been extensively studied by T.~Fern\'os, J.~Lécureux and F.~Mathéus who showed under weak hypotheses that $(Z_n x)_n$ converges almost surely to a point of the visual boundary \cite[Theorem 1.3]{fernos_lecureux_matheus18}, and who gave an extensive description of the asymptotic behaviour of the random walk: nature of the limit points, occurrence of the contracting elements... It is also to be noted that the techniques we used in order to prove Theorem \ref{measure thm} can be rewritten in view of Theorem \ref{convergence thm} as the following: \begin{cor} Let $\xi \in \bd X$ be a limit of the random walk $(Z_n x)_n$. Then for $\nu$-almost every point $\eta \in \bd X$, there exists a rank one geodesic joining $\xi $ to $\eta$. \end{cor} In other words, limit points are almost surely rank one. This result will be useful for the proof of Theorem \ref{drift thm}, but we think it could be used in different contexts. In the more general setting of a non-specified $\cat(0)$ space, Karlsson and Margulis had already proven in \cite[Thoerem 2.1]{karlsson_margulis} a first general result of convergence of the random walk. In fact, they studied the more general behaviour of cocycles on $X$, but it is a problem that we will not consider here. When the measure $\mu$ has finite first moment $\int_G d(g x, x) d\mu(g) < \infty$, the subadditive ergodic Theorem implies that the limit $\lambda:= \lim_{n} \frac{1}{n}d(Z_n x, x)$ exists almost surely. This limit is called the \textit{drift} of the random walk, and can be understood as the speed at which the random walk goes to infinity. Since the action is isometric, $\lambda$ does not depend on the choice of the basepoint. Under the hypothesis that the drift is positive, Karlsson and Margulis had showed, among other results, that the random walk $(Z_n x)_n$ converges almost surely to a point of the visual boundary. Theorem \ref{convergence thm} is different because we don't assume that the measure $\mu $ has finite first moment, nor that the drift is positive. In the case that $G$ is non-amenable, Guivarc'h showed that the random walk generated by a word metric has positive drift \cite{guivarch80}, but it can be quite difficult to prove in general. \newline When we assume that the measure $\mu$ has finite first moment, the drift $\lambda$ exists, and another important subject concerning the asymptotic behaviour of the random walk is knowing whether $\lambda$ is positive or not. When the group $G$ is non-amenable and endowed with some word metric, the drift is positive, but for general actions, the answer is not clear. In \cite{karlsson_margulis}, the positivity of the drift was used to prove the convergence to the boundary, while here we obtain this result after the convergence. \begin{thm}\label{drift thm} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$ with finite first moment, and assume that $G $ contains a rank one element. Let $x \in X$ be a basepoint of the random walk. Then the drift $\lambda$ is almost surely positive: \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n} d(Z_n(\omega) x, x) = \lambda >0. 
\end{equation} \end{thm} \vspace{5mm} In order to prove Theorem \ref{measure thm}, we use a result from Papasoglu and Swenson about the dynamics on the Tits boundary \cite[Lemma 19]{papasoglu_swenson09}, stating that in the $\cat(0)$ case, the action of the group $G$ satisfies a $\pi$-convergence property: if $(g_n)_n$ is a sequence of isometries satisfying $g_n x \underset{n \rightarrow \infty}{\longrightarrow} \xi \in \bd X$ and $g_n^{-1} x \underset{n \rightarrow \infty}{\longrightarrow} \eta \in \bd X$, then for every compact subset $K \subseteq \partial X - B_T(\eta, \pi)$ and every open set $U$ containing $\xi$, we have $g_n K \subseteq U$ for all $n$ large enough. It will then be useful to prove that the $\nu$-measure of $B_T(\eta, \pi)$ is zero (Proposition \ref{zero measure}), which we do by using a result from Maher and Tiozzo \cite[Lemma 4.5]{maher_tiozzo18}. Once we know that the stationary measure is unique, proving that the random walk $(Z_n x)_n$ converges to the boundary (Theorem \ref{convergence thm}) follows from a geometric result concerning rank one geodesics proven by Ballmann \cite[Lemma III.3.1]{ballman95}. Lastly, to show that the drift is positive (Theorem \ref{drift thm}), we have followed the strategy implemented in Guivarc'h and Raugi \cite{guivarch_raugi85}, see also \cite{benoist_quint16} and \cite{benoist_quint}. \newline While we were writing this paper, it came to our attention that H. Petyt, D.~Spriano and A.~Zalloum have proposed another approach to the subject of $\cat(0)$ actions. More precisely, given an action of a group $G$ on a $\cat(0)$ space $X$, it is possible to build a hyperbolic space $(X_L, d_L)$ out of $X$ using ``curtains'', in such a way that information on the original action can be nicely translated to the action on the derived hyperbolic space. Considering the results already available for actions on hyperbolic spaces (e.g. \cite{maher_tiozzo18}), it is likely that one could use them to deduce some of the properties we have investigated in this paper about random walks on $\cat(0)$ spaces. \newline Section \ref{background} is an introduction to the notions that we are going to use, and presents the general setting. In Section \ref{uniqueness measure section}, we prove Proposition \ref{zero measure} and Theorem \ref{measure thm}. In Section \ref{convergence random walk}, we prove Theorem \ref{convergence thm} stating that the random walk is convergent, which is the main result of this article. In Section \ref{drift section}, we give applications of the convergence, especially the positivity of the drift and geodesic tracking results. \begin{ackn} The author is grateful to Jean Lécureux for the weekly conversations and comments; his contribution to this article was invaluable. \end{ackn} \section{Background}\label{background} \subsection{Random walks and general setting} Let $G$ be a discrete countable group and $\mu \in \prob(G)$ a probability measure on $G$. Throughout the article we will assume that $\mu$ is admissible, i.e. $\supp(\mu)$ generates $G$ as a semigroup. Let $(\Omega, \mathbb{P})$ be the probability space $(G^{\mathbb{N}}, \delta_e \times \mu^{\mathbb{N^\ast}})$. The map \begin{equation*} (n, \omega) \in \mathbb{N} \times \Omega \mapsto Z_n(\omega) = \omega_1 \omega_2 \dots \omega_n, \end{equation*} where $\omega$ is chosen according to the law $\mathbb{P}$, defines the random walk on $G$ generated by the measure $\mu$. Let now $(X,d)$ be a proper $\cat(0)$ metric space, on which $G$ acts by isometries.
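As a toy illustration of the admissibility assumption (this aside is ours and can be safely skipped), note that $\supp(\mu)$ generating $G$ as a semigroup amounts to asking that every $g\in G$ satisfies $\mu_m(g)>0$ for some $m\geq 1$, where $\mu_m=\mu^{\ast m}$ denotes the $m$-th convolution power introduced in Section \ref{stationary section}; this is the form in which admissibility is used in the proof of Lemma \ref{subsequence}. The script below checks it on the symmetric group $S_3$ with a measure supported on a $3$-cycle and a transposition.
\begin{verbatim}
# Toy check (ours): for an admissible mu on G = S_3, the support of the law of
# Z_m (that is, of the convolution power mu_m) grows until it covers all of G.
comp = lambda g, h: tuple(g[h[i]] for i in range(3))  # composition in S_3
mu = {(1, 2, 0): 0.7, (1, 0, 2): 0.3}                 # a 3-cycle and a transposition

law = {(0, 1, 2): 1.0}                                # law of Z_0 = identity
for m in range(1, 7):
    new = {}
    for h, ph in law.items():
        for w, pw in mu.items():                      # Z_m = Z_{m-1} w_m
            new[comp(h, w)] = new.get(comp(h, w), 0.0) + ph * pw
    law = new                                         # law of Z_m, i.e. mu_m
    print(m, len(law), "of the 6 elements of S_3 reached")
\end{verbatim}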
If the reader wants a detailed introduction to $\cat (0)$ spaces, the main references that we will use are \cite{bridson_haefliger99} and \cite{ballman95}. We recall that the boundary $\bd X$ of a $\cat (0)$ space $X$ is the set of equivalence classes of geodesic rays $\sigma : [0, \infty) \rightarrow X$, where two rays $\sigma_1, \sigma_2$ are equivalent if they are asymptotic, i.e. if $d(\sigma_1(t), \sigma_2 (t))$ is bounded uniformly in $t$. Given two points on the boundary $\xi$ and $\eta$, if there exists a geodesic line $\sigma : \mathbb{R} \rightarrow X$ such that the geodesic ray $\sigma_{[0, \infty)}$ is in the class of $\xi$ and the geodesic ray $t \in [0, \infty) \mapsto \sigma(-t)$ is in the class of $\eta$, we will say that the points $\xi$ and $\eta$ are joined by a geodesic line. The reader should be aware that in general, such a geodesic need not exist between any two points of the boundary, as can be seen in $\mathbb{R}^2$. A point $\xi$ of the boundary is called a \textit{visibility point} if, for all $\eta \in \bd X - \{\xi\}$, there exists a geodesic from $\xi$ to $\eta$. We will see in the next section a criterion to prove that a given boundary point is a visibility point. An important feature in $\cat(0)$ spaces is the existence of closest-point projections on convex subsets. More precisely, given a closed convex subset $C$ in a proper $\cat(0)$ space, there exists a map $p_C : X \rightarrow C$ such that $p_C(x)$ realizes the distance $d(x,C)$ \cite[Proposition 2.4]{bridson_haefliger99}. This map is a retraction of $X$ onto $C$ and is distance decreasing: for all $x, y \in X$, \begin{equation*} d(p_C(x), p_C(y)) \leq d(x,y). \end{equation*} Now, given a closed ball $B:= \overline{B}(x_0, r)$, the projection $p_r : X \rightarrow \overline{B}(x_0, r)$ can actually be extended to $\overline{X}$, by identifying any point $\xi$ of the boundary with the geodesic ray $\sigma$ issuing from $x_0$ in the class of $\xi$. In this setting, if $\sigma(0) = x_0$, we define $p_B(\xi) = \sigma (r)$. Following the notations in \cite[Chapter II.8]{bridson_haefliger99}, the visual topology on $\overline{X}$ is defined by a basis of open sets $U(c, r, \varepsilon)$, where $c$ is a geodesic ray, $r,\varepsilon>0$, and \begin{equation*} U(c, r, \varepsilon) := \{ x \in \overline{X} \ | \ d(x, c(0)) >r, \ d(p_r(x), c(r)) < \varepsilon\}, \end{equation*} where $p_r$ denotes the projection onto the closed (convex) ball $\overline{B}(c(0), r)$ of centre $c(0)$ and radius $r$. Given $x \in X$ and $\xi \in \overline{X}$, there is a unique geodesic ray (or segment) $c$ joining $x$ to $\xi$, so we will write alternatively $U(x, \xi, r, \varepsilon)$ for $U(c, r, \varepsilon)$. The following proposition is taken from \cite[Proposition II. 8.8]{bridson_haefliger99}, and implies that the visual topology does not depend on the basepoint. It will be useful later. \begin{prop}[{\cite[Proposition II. 8.8]{bridson_haefliger99}}]\label{continuite proj} Let $x, x' \in X$ and $r >0$, and let $c$ and $c'$ be the geodesic rays issuing from $x$ and $x'$ respectively such that $c(\infty)=c'(\infty) = \xi$. Let $p_r : \overline{X} \rightarrow \overline{B}(x, r)$ be the projection of $\overline{X}$ onto $\overline{B}(x, r)$. Then, for all $\varepsilon >0$, there exists $R= R(r, d(x, x'), \varepsilon)>0$ such that for all $R' \geq R$, $p_{r} (U(c', R', \varepsilon/3)) \subseteq B(c(r), \varepsilon)$. In particular, for all $R' \geq R$, $U(c', R', \varepsilon/3)$ is contained in $U(c, r, \varepsilon)$.
\end{prop} \begin{figure} \centering \begin{center} \begin{tikzpicture}[scale=1.5] \draw (0,0) -- (3, 3) ; \draw (1, 2) to[bend right = 50] (3, 1) node[above right] {$U(x, \xi, r, \epsilon )$}; \draw (2, 2.7) to[bend right = 80] (3, 2) node[above right] {$U(x', \xi, R', \varepsilon/3)$}; \draw (1, -1) to[bend left=10] (3,3) ; \draw (0,0) node[below left]{$x$} ; \draw (1,-1) node[below left]{$x'$} ; \draw (3,3) node[above]{$\xi$}; \end{tikzpicture} \end{center} \caption{Illustration of Proposition \ref{continuite proj}.} \end{figure} When $X$ is a proper space, the space $\overline{X} = X \cup \partial X $ is a compactification of $X$, that is, $\overline{X}$ is compact and $X$ is an open and dense subset of $\overline{X}$. We recall that the action of $G$ on $X$ extends to an action on $\bd X$ by homeomorphisms. Another equivalent construction of the boundary can be done using horofunctions. If $x_n \rightarrow \xi \in \bd X$, we denote by $h_\xi^{x} : X \mapsto \mathbb{R}$ the horofunction given by \begin{equation*} h_\xi^{x}(z) = \lim_n d(x_n, z) - d(x_n, x). \end{equation*} It is a standard result in $\cat(0)$ geometry (see for example \cite[Proposition II.2.5]{ballman95}) that this limit exists and that given any basepoint $x$, a horofunction characterizes the boundary point $\xi $. \subsection{Rank one elements}\label{rank one section} Let $g \in G$. We say that $g$ is a \textit{semisimple} isometry if its displacement function $ x \in X \mapsto \tau_g(x) = d(x , gx)$ has a minimum in $X$. If this minimum is non-zero, it is a standard result (see for example \cite[Proposition II.3.3]{ballman95}) that the set on which this minimum is obtained is of the form $C \times \mathbb{R}$, where $C$ is a closed convex subset of X. On the set $\{c\}\times \mathbb{R}$ for $c \in C$, $g$ acts as a translation, which is why $ g$ is called \textit{axial} and the subset $\{c\}\times \mathbb{R}$ is called an \textit{axis} of $g$. A \textit{flat half-plane} in $X$ is defined as a euclidean half plane isometrically embedded in $X$. \begin{Def} We say that a geodesic in $X$ is \textit{rank one} if it does not bound a flat half-plane. If $g$ is an axial isometry of $X$, we say that $g$ is rank one if no axis of $g$ bounds a flat half-plane. \end{Def} \begin{rem}\label{flat strip} Let $g$ be a rank one isometry, and let $\sigma$ be one of its axes. Then there exists $R \geq 0 $ such that $\sigma $ does not bound a flat strip of width $R$. \end{rem} More information on rank one isometries and geodesics can be found in \cite[Section III. 3]{ballman95}, and more recently in \cite{caprace_fujiwara10} and in \cite{bestvina_fujiwara09}. If $X$ is a proper space, M.~Bestvina and K.~Fujiwara showed in \cite{bestvina_fujiwara09} that an isometry is rank one if and only if it induces a contraction property on its axes. More precisely, an isometry of $X$ has rank one if and only if there exists $C \geq 0$ such that one of its axis $\sigma$ is $C$-contracting: for every metric ball $B$ disjoint from the geodesic $\sigma$, the projection $\pi_\sigma (B)$ of the ball onto $\sigma$ has diameter at most $C$. \begin{Def} We say that the action $G \curvearrowright X$ of a rank one group $G$ on a $\cat (0) $ space $X$ is \textit{non-elementary} if $G$ neither fixes a point in $\bd X$ nor stabilizes a geodesic line in $X$. \end{Def} To justify this definition, we use a result from Caprace and Fujiwara in \cite{caprace_fujiwara10}. What follows comes from the aforementioned paper. 
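Before quoting these results, let us illustrate numerically the contraction property recalled above (this aside is ours and is not needed in the sequel). In the upper half-plane model of the hyperbolic plane, the closest-point projection onto the geodesic $\sigma=\{iy : y>0\}$ sends a point $z$ to $i|z|$, and the distance between two points $ia$ and $ib$ of $\sigma$ is $|\log(a/b)|$. The script below samples hyperbolic balls disjoint from $\sigma$, centred further and further away from $\sigma$ and of larger and larger radius, and estimates the diameter of their projections, which remains bounded.
\begin{verbatim}
import math, random

# Illustrative sketch (not from the paper): projections of balls disjoint from
# the geodesic {iy : y > 0} in the upper half-plane have bounded diameter.
def hyp_dist(z, w):
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

random.seed(1)
for x0 in (2.0, 5.0, 10.0, 20.0):
    r = 0.9 * math.asinh(x0)            # radius < dist(x0 + i, axis) = asinh(x0)
    centre = complex(x0, math.cosh(r))  # the hyperbolic ball B(x0 + i, r) is the
    R = math.sinh(r)                    # Euclidean disk of centre `centre`, radius R
    logs = []
    for _ in range(4000):
        t, s = 2*math.pi*random.random(), R*math.sqrt(random.random())
        z = centre + s*complex(math.cos(t), math.sin(t))
        assert hyp_dist(z, complex(x0, 1.0)) <= r + 1e-6
        logs.append(math.log(abs(z)))   # the projection of z onto the axis is i*|z|
    print(x0, round(max(logs) - min(logs), 2))  # diameter of the projected set
\end{verbatim}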
\begin{Def} Let $g_1, \, g_2 \in G$ be axial isometries of $G$, and fix $x_0 \in X$. The elements $g_1, g_2 \in G$ are called independent if the map \begin{equation} \mathbb{Z} \times \mathbb{Z} \rightarrow [0, \infty) : (m,n) \mapsto d(g_1^m x_0, g_2^nx_0) \end{equation} is proper. \end{Def} \begin{rem} In particular, the fixed points of two independent axial elements form four distinct points of the visual boundary. \end{rem} The following result was proven by P-E.~Caprace and K.~Fujiwara in \cite{caprace_fujiwara10}. \begin{prop}[{\cite[Proposition 3.4]{caprace_fujiwara10}}]\label{non elem caprace fuj} Let $X$ be a proper $\cat (0) $ space and let $G < \iso (X)$. Assume that $G$ contains a rank one element. Then exactly one of the following assertions holds: \begin{enumerate} \item \label{elem} $G$ either fixes a point in $\bd X$ or stabilizes a geodesic line. In both cases, it possesses a subgroup of index at most 2 of infinite Abelianization. Furthermore, if $X$ has a cocompact isometry group, then $\overline{G} < \iso (X)$ is amenable. \item \label{non elem} G contains two independent rank one elements. In particular, $\overline{G}$ contains a discrete non-Abelian free subgroup. \end{enumerate} \end{prop} As a consequence, the action $G \curvearrowright X$ of a rank one group $G$ on a $\cat (0) $ space $X$ is non-elementary if and only if alternative \ref{non elem} of the previous Proposition holds. The next Lemma comes from \cite{hamenstadt09}, and extends a result from Ballmann and Brin \cite{ballmann_brin95}. It is a fundamental result on the dynamics induced by rank one isometries. \begin{thm}[{\cite[Lemma 4.4]{hamenstadt09}}] \label{hamenstadt} An axial isometry $f$ in $G$ is rank one if and only if $f$ acts with North-South dynamics with respect to its fixed points $f^{-}$ et $f^{+}$ : for every neighbourhood $V$ of $f^{-}$ and $U$ of $f^{+}$, there exists $k_0 \geq 0 $ such that $f^{k} (\bd X - V) \subseteq U$ and $f^{-k} (\bd X - U) \subseteq V$ for all $k \geq k_0$. \end{thm} \subsection{Tits metric on the boundary}\label{tits metric subsection} Let us now recall some results about Tits geometry. It is a useful tool to detect flats in $\cat(0)$ spaces. The following definitions and properties can be found in \cite[Section II.4]{ballman95}, and in \cite{bridson_haefliger99}. Let $\sigma_1 : [0, C_1] \rightarrow X$ and $ \sigma_2 : [0, C_2] \rightarrow X$ be two geodesic segments in $X$ emanating from the same basepoint $\sigma_1 (0 ) = \sigma_2(0) = x$. For every $(s, t) \in (0, C_1) \times (0,C_2)$, there exists a euclidean comparison triangle $\overline{\Delta}_{s,t}$ of the triangle $\Delta_{s,t} $ in $X$ spanned by $(x, \sigma_1 (s), \sigma_2(t))$. Write $\overline{\angle}_{\overline{x}} (\overline{\sigma_1} (s), \overline{\sigma_2} (t) )$ the angle at $\overline{x}$ of the comparison triangle $\overline{\Delta}_{s,t}$. Then by the $\cat (0)$ inequality, $\overline{\angle}_{\overline{x}} (\overline{\sigma_1} (s), \overline{\sigma_2} (t) )$ is monotonically decreasing and we can define \begin{equation*} \angle (\sigma_1, \sigma_2) = \lim_{s, t \rightarrow 0} \overline{\angle}_{\overline{x}} (\overline{\sigma_1} (s), \overline{\sigma_2} (t) ). \end{equation*} Since $X$ is uniquely geodesic, for any triple $x, y, z \in X$, there exist exactly one geodesic segment $\sigma_1$ (resp.$ \sigma_2$) from $x$ to $y$ (resp. from $x$ to $z$), and we define the angle $\angle_x (y,z) $ at $x$ between $y$ and $z$ as $ \angle (\sigma_1, \sigma_2)$. 
If $x_n \rightarrow \xi \in \bd X$, $y_p \rightarrow \eta \in \bd X$ in the visual topology, one can extend the notion of angle between points of the boundary by \begin{equation*} \angle_x (\xi, \eta) = \lim_{n, p \rightarrow \infty} \angle_x (x_n, y_p). \end{equation*} It turns out that $\angle_x (\xi, \eta)$ does not depend on the choice of the sequences $(x_n)_n$ and $(y_p)_p$. Finally, we define the angular metric $\angle : \overline{X} \times \overline{X} \rightarrow [0, \pi]$ on $\overline{X}$ by \begin{equation*} \angle (\xi, \eta) = \sup_{x \in X} \angle_x(\xi, \eta). \end{equation*} \begin{rem} For example, given two points on the boundary $\xi$ and $\eta$, if there exists a geodesic $\sigma$ joining them, then the above supremum is attained on $\sigma$ and $\angle(\xi, \eta) = \pi$. \end{rem} The Tits metric on the boundary $d_T : \overline{X} \times \overline{X} \rightarrow \mathbb{R} \cup \{\infty\}$ is the length metric associated to $\angle (.,.)$. We denote by $B_T(\xi, r)$ the closed ball of radius $r$ and centre $\xi$, defined by $B_T(\xi, r) := \{ \eta \in \bd X : d_T(\xi, \eta) \leq r\}$. The following theorem summarizes important properties of the Tits metric. \begin{thm}[{\cite[Theorem II.4.11]{ballman95}}]\label{tits} Let $X$ be a proper $\cat (0)$ space. Then $(\partial_T X, d_T)$ is a complete $\cat (1)$ space. Moreover, for all $\eta, \, \xi \in \bd X$: \begin{enumerate} \item If there is no geodesic in $X$ from $\xi$ to $\eta$, then $d_T(\xi, \eta) = \angle (\xi, \eta) \leq \pi$. \item If $\angle (\xi, \eta) < \pi$, there is no geodesic in $X$ joining $\xi$ to $\eta$ and there exists a unique geodesic (for the Tits metric) from $\xi$ to $\eta$ in $\partial_T X$. \item If there is a geodesic $\sigma$ in $X$ from $\xi$ to $\eta$, then $d_T(\xi, \eta) \geq \pi$, with equality if and only if $\sigma$ bounds a flat half-plane. \item \label{semi continuite} If $(\xi_n)$ and $(\eta_n)$ are two sequences in $\bd X$, such that $\xi_n \rightarrow \xi \in \bd X$ and $\eta_n \rightarrow \eta\in \bd X$ in the visual topology, then $d_T(\xi, \eta) \leq \liminf_{n\rightarrow \infty} d_T(\xi_n, \eta_n)$. \end{enumerate} \end{thm} \begin{rem} In fact, for $0 \leq r< \infty$, $B_T(\xi, r)$ is closed for the visual topology. Indeed, let $(\xi_n) \subseteq B_T(\xi, r)$ such that $\xi_n \rightarrow z \in \bd X$ in the visual topology. By lower semicontinuity, $\liminf d_T(\xi, \xi_n) \geq d_T(z, \xi)$, hence $d_T(z, \xi) \leq r$. In particular, the ball $B_T(\xi, r)$ is $\nu$-measurable. \end{rem} Let now $g \in G$ be a rank one element and let $\sigma$ be an axis of $g$. Then $\sigma(+\infty)$ and $\sigma(- \infty)$ are visibility points of the boundary. In particular, $d_T (\sigmam, \xi) = + \infty$ for all $\xi \in \bd X - \{\sigma(- \infty)\}$, see \cite[Lemma 1.7]{ballman_buyalo08}. In fact, the converse can be made true once we add some conditions on the group action. The next propositions show that, given some conditions on the group action, there are ways to detect rank one elements in $G$ provided we have isolated points on the Tits boundary. \begin{thm}[{\cite[Main Theorem]{ruane01}}] Let $G$ be a group acting properly discontinuously and cocompactly by isometries on a $\cat(0)$ space $X$, and let $a, b$ be infinite order elements such that $d_T(\{a^{\pm \infty}\},\{b^{\pm \infty}\}) > \pi$. Then the subgroup generated by $a$ and $b$ contains a free subgroup. In fact, there exists $N\geq 0$ such that for all $n \geq N$, $a^n b^{-n}$ is a rank one isometry.
\end{thm} Given an action by isometries of a group $G$ on a $\cat(0)$ space $X$, the limit set $\Lambda$ of the action $G \curvearrowright X$ is the set of all points $ \xi \in \bd X$ such that there exists a sequence $(g_n) \in G $, and $x \in X$ for which $g_n x \rightarrow \xi $. \begin{prop}[{\cite[Proposition 1]{ballman_buyalo08}}] Suppose that $\Lambda = \partial X$, and that for each $\xi \in \partial_T X$, there exists $\eta \in \partial_T X$ with $d_T(\xi, \eta) > \pi$. Then $G$ contains a rank one isometry. \end{prop} \subsection{Stationary measures and boundary theory} \label{stationary section} In the study of random walks on a group $G$ acting on a metric space $X$, it is often very useful to use stationary measures on $X$. In the following, for $Y$ a measurable space, we denote $\prob(Y) $ the set of probability measures on $Y$. When a Polish group $G$ acts continuously on a topological probability space $Y$, with $\mu \in \prob(G)$ and $\nu \in \prob(Y)$, we define the convolution probability measure $\mu \ast \nu$ as the image of $\mu \times \nu $ under the action $G \times Y \rightarrow Y $. In other words, for $f $ a bounded measurable function on $Y$, \begin{equation*} \int_Y f(y)d(\mu \ast \nu)(y) = \int_G\int_Y f(g \cdot y ) d\mu(g) d\nu (y). \end{equation*} In our situation, $G$ is countable, so for $A$ any measurable set in $Y$, \begin{equation*} \mu \ast \nu (A) = \sum_{g \in G} \mu(g) \nu (g^{-1}A). \end{equation*} We will write $\mu_m = \mu^{\ast m}$ the $m$-th convolution power of $\mu$, where $G$ acts on itself by left translation $(g,h) \mapsto gh$. \begin{Def} A probability measure $\nu \in \prob(X) $ is $\mu$-stationary if $\mu \ast \nu = \nu $. \end{Def} The Banach-Alaoglu Theorem implies that the set of measures on $\overline{X}$ is a weakly-$\ast$ compact space. The next result is a straightforward consequence of this fact. \begin{thm}\label{existence} Let $G$ be a countable group acting by homeomorphisms on a compact metric space $Y$, and let $\mu \in \prob(G) $ a probability measure on $G$. Then there exists a $\mu$-stationary Borel probability measure $\nu \in \prob (Y)$ on $Y$. \end{thm} \begin{rem} Since $X$ is a proper $\cat(0)$ space, $\overline{X}$ is a compact metrizable space and Theorem \ref{existence} states that there exists a probability measure $\nu$ on $\overline{X}$ that is $\mu$-stationary. \end{rem} One of the reasons why we use stationary measures is given by the following Theorem, which is an important consequence of the martingale convergence Theorem and which goes back to Furstenberg \cite{furstenberg73}. \begin{thm}[{\cite[Lemma 1.33]{furstenberg73}}]\label{furstenberg73} Let $G$ be a discrete group, $\mu \in \prob(G)$ and $(n, \omega) \in \mathbb{N} \times \Omega \mapsto Z_n(\omega)$ be the random walk on $G$ associated to the measure $\mu$. Let $Y$ be a locally compact, $\sigma$-compact metric space on which $G$ acts by isometries, and let $\nu$ be a $\mu$-stationary measure on $Y$. Then, for $\mathbb{P}$-almost every $\omega \in \Omega$, there exists $\nu_\omega \in \prob(Y)$ such that $Z_n (\omega)\nu \rightarrow \nu_\omega $ in the weak-$\ast$ topology. Moreover, for all $g \in G$, $Z_n (\omega)g\nu \rightarrow \nu_\omega $ in the weak-$\ast$ topology, and $\nu = \int_{\Omega} \nu_\omega d \mathbb{P}(\omega)$. \end{thm} Let us give a brief overview of boundary theory. For more details, one can study \cite{kaimanovitch00} and \cite{furman02}. 
We denote by $S$ the \textit{shift map}, defined by \begin{equation*} S : (\omega_0, \omega_1,\dots) \in \Omega \mapsto (\omega_0 \omega_1, \omega_2, \dots). \end{equation*} If we define $f :\Omega \rightarrow G$ by $f(\omega) = \omega_1$, the random walk $(Z_n)$ can be written as \begin{equation*} (n, \omega) \in \mathbb{N} \times \Omega \mapsto Z_n(\omega) = f(\omega) f(S\omega) \dots f(S^{n-1} \omega). \end{equation*} Given a measure $\mu$ on a group and a measurable $G$-action on a metric space $M$ endowed with a probability $\nu$, we say that $(M, \nu)$ is a $(G, \mu)$-space if the measure $\nu$ is $\mu$-stationary. In that case, Theorem \ref{furstenberg73} states that there exists a limit measure $\nu_\omega = \lim_{n \rightarrow \infty } Z_n(\omega) \nu$. \begin{Def} A $(G, \mu)$-space $(M, \nu)$ is a $(G,\mu)$-boundary if for $\mathbb{P}$-almost every $\omega\in \Omega$, $\nu_\omega$ is a Dirac measure. \end{Def} The study of $(G, \mu)$-boundaries has strong connections with the existence of harmonic functions and Poisson transforms on a group, but these results will be omitted here. We refer to \cite{furman02} for more information on these subjects. It is straightforward to see that any $G$-equivariant factor $(M', \nu')$ of a $(G, \mu)$-boundary is still a $(G, \mu)$-boundary. In fact, the following Theorem states that there exists a pair $(B, \nu_B)$ which is maximal and universal among $(G, \mu)$-boundaries. \begin{thm}[{\cite[Theorem 10.1]{furstenberg73}}]\label{poisson boundary} Given a locally compact group $G$ with an admissible probability measure $\mu$, there exists a maximal $(G, \mu)$-boundary, called the Poisson-Furstenberg boundary $(B, \nu_B)$ of $(G, \mu)$, which is uniquely characterized by the following property: \begin{description} \item[Universality] For every measurable $(G, \mu)$-boundary $(M, \nu)$, there is a $G$-equivariant measurable quotient map $p : (B, \nu_B) \rightarrow (M, \nu)$, uniquely defined up to $\nu_B$-null sets. \end{description} \end{thm} A construction of the Poisson boundary can be described as follows. The Poisson boundary of the measure $\mu$ is the space $B$ of ergodic components of the action of $S$ on $(\Omega, \haar \otimes \mu^{\mathbb{N^\ast}})$. It is a measured space equipped with the pushforward $\nu_B$ of the measure $\mathbb{P}$ by the natural projection $\Omega \rightarrow B$. The space $(B, \nu_B)$ is a $(G, \mu)$-space on which $G$ acts ergodically. There is a lot to say about the action of $G$ on $B$, especially concerning $G$-equivariant boundary maps, and the interested reader may consult \cite{bader_furman14} for recent developments in this direction, where it is proven that the action $G \curvearrowright B$ is isometrically ergodic, which is an enhanced version of ergodicity. \section{Uniqueness of the stationary measure}\label{uniqueness measure section} From now on, let $G$ be a countable group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G)$ be an admissible probability measure on $G$, and assume that $G$ contains a rank one element. Theorem~\ref{existence} gives the existence of a $\mu$-stationary measure $\nu \in \prob(\overline{X})$ on $\overline{X}$. The goal of this section is to show that $\nu$ is the unique $\mu$-stationary measure on $\overline{X}$.
In order to do so, we show that the measures $\nu_\omega $ given by the Theorem \ref{furstenberg73} are in fact Dirac measures $\delta_{z(\omega)}$, and that they do not depend on $\nu $. \subsection{Dynamics on the Tits boundary} Let $x \in X$ be a basepoint. We start by showing that almost surely, a subsequence of $(Z_n(\omega)x)$ goes to infinity. \begin{lem}\label{subsequence} For all $x \in X$, $(Z_n(\omega) x)_n$ is $\mathbb{P}$-almost surely unbounded. \end{lem} \begin{rem}\label{subsequence rem} Since $\overline{X}$ is compact, a straightforward consequence is that $\mathbb{P}$-almost surely, there exist a subsequence $\phi (n)$ (depending on $\omega$) and $z^{+} (\omega), z^{-}(\omega) \in \bd X$ such that $(Z_{\phi(n)}(\omega) x)_n$ converges to $z^{+} (\omega)$, and $(Z^{-1}_{\phi(n)}(\omega) x)_n$ converges to $z^{-}(\omega)$. \end{rem} \begin{proof} Let $K$ be a compact subset of $X$, and let $D$ be its diameter. By hypothesis, there exists $g$ a rank one element in the group, of translation length $\tau(g) := l > 0$. Hence there exists $k \in \mathbb{N}$ such that $\tau(g^k) = k l > D$. In particular, if $Z_n(\omega) x \in K$, then $Z_n(\omega) g^k x \notin K$. Since $\mu $ is admissible, there exists $m \in \mathbb{N}$ such that $\mu_m(g^k) =: a >0$. Then for $n \in \mathbb{N}$, \begin{equation*} \mathbb{P}(Z_{n+m}(\omega)x \in K \, | \, Z_n(\omega)x \in K) < 1-a. \end{equation*} Then, $\mathbb{P}$-almost surely, there exists $n_0 $ such that $Z_{n_0}(\omega) x \notin K $. Let us take an increasing sequence of compacts $(K_p)_p$ such that $\bigcup K_p = X$. For all $p \in \mathbb{N}$, $\mathbb{P}$-almost surely there exists $n_p \in \mathbb{N}$ such that $Z_{n_p}(\omega) x \notin K_p $. Then the subsequence $(Z_{n_p}(\omega) x)_p$ escapes every compact of $X$. Since $G$ acts by isometries, $(Z^{-1}_{n_p}(\omega) x)_p$ also escapes every compact of $X$. \end{proof} The following Theorem, due to P. Papasoglu and E. Swenson in \cite{papasoglu_swenson09} is a key ingredient in our proof. \begin{thm}[{\cite[Lemma 19]{papasoglu_swenson09}}]\label{PS} Let $X$ be a proper $\cat (0)$ space, and $G \curvearrowright X$ an action by isometries. Let $x \in X$, $\theta \in [0, \pi]$ and $(g_n) \subseteq G$ be a sequence of isometries for which there exists $x \in X $ such that $g_n(x) \rightarrow \xi \in \bd X $ and $ g^{-1}_n(x) \rightarrow \eta \in \bd X$. Then for all compact subset $K \subseteq \bd X - B_T(\eta, \theta)$ and for all open subset $U$ such that $B_T(\xi, \pi - \theta)\subseteq U$, there exists $n_0$ such that for all $n \geq n_0$, $g_n(K) \subseteq U$. \end{thm} We want to apply this theorem in order to prove that the limit measures given by Theorem \ref{furstenberg73} are Dirac measures. First, we start by a technical Lemma. \begin{lem}\label{separation} Let $g$ be a rank one isometry in $G$, with fixed points $g^{+}, \, g^{-} \in \bd X$ respectively attractive and repulsive. Then there exists $U^{+}, U^{-} \subseteq \bd X$ neighbourhoods of $g^{+}, \, g^{-}$ respectively such that for all $ \xi \in \bd X$, \begin{equation*} B_T(\xi, \pi) \cap U^{-} \neq \emptyset \Rightarrow B_T(\xi, \pi) \cap U^{+} = \emptyset, \end{equation*} and \begin{equation} B_T(\xi, \pi) \cap U^{+} \neq \emptyset \Rightarrow B_T(\xi, \pi) \cap U^{-} = \emptyset. \end{equation} In other words, we can find neighbourhoods of the fixed points of $g$ small enough so that the Tits ball of radius $\pi $ around any point $\xi \in \bd X$ do not intersect both neighbourhoods simultaneously. 
\end{lem} \begin{figure} \centering \begin{center} \begin{tikzpicture}[scale=0.5] \draw (0,0) circle (5cm); \draw[line width =1mm] (3.83, 3.21) arc (40 : 70 : 5cm) node[right=5mm] {$B_T(\xi, \pi)$}; \draw (3.21, 3.83) to[bend right] (5,0) node[left=3mm] {$U^{+}$} ; \draw (-4.62, 1.91) to[bend left] (-4.62, -1.91) node[right=2mm] {$U^{-}$} ; \draw (-5, 0) to[bend right = 10] (4.83, 1.3) ; \draw (4.83, 1.3) node[above, right] {$g^{+}$}; \draw (-5,0) node[above, left]{$g^{-}$}; \end{tikzpicture} \end{center} \caption{Illustration of Lemma \ref{separation}.} \end{figure} \begin{proof} By contradiction, assume there exist decreasing sequences of neighbourhoods $\{ U^{+}_n \} $ and $\{ U^{-}_n \} $ of respectively $g^{+}$ and $g^{-}$, such that $\bigcap_n U^{+}_n = \{g^{+}\}$ and $\bigcap_n U^{-}_n = \{g^{-}\}$, and a sequence of points $(\xi_n) \subseteq \bd X$ such that for all $n \in \mathbb{N}$, $B_T(\xi_n, \pi) \cap U^{-}_n \neq \emptyset $ and $B_T(\xi_n, \pi) \cap U^{+}_n \neq \emptyset $. Notice that due to Theorem \ref{tits}, since $g^{-}$ is a fixed point of a rank one isometry, $d_T(g^{-}, \eta) = + \infty$ for all $\eta \in \bd X - \{ g^{-}\} $. For all $n \in \mathbb{N}$, take $z_n \in B_T(\xi_n, \pi) \cap U^{-}_n$. By hypothesis, $z_n \rightarrow g^{-}$. Since $\bd X$ is compact, then passing to a subsequence, $\xi_n \rightarrow \xi \in \bd X$ in the visual topology. By lower semicontinuity of the Tits metric (Theorem \ref{tits}), $\liminf d_T (\xi_n, z_n) \geq d_T(g^{-}, \xi )$; since $d_T(\xi_n, z_n) \leq \pi$ for all $n$, this gives $d_T(g^{-}, \xi) \leq \pi$, thus $\xi = g^{-}$. The same argument with $z_n \in B_T(\xi_n, \pi) \cap U^{+}_n$ gives $\xi = g^{+}$, a contradiction. \end{proof} The next step is to show that the $\nu$-measure of such a Tits ball $B_T(\xi, \pi)$ is actually zero. Let us begin by showing that $\nu$ is non-atomic, that is, for all $\xi \in \overline{X}$, $\nu(\{\xi\}) = 0 $. The next Lemma follows standard ideas. \begin{lem}\label{non atomic} Let $G$ be a discrete group acting by isometries on a metric space $X$, let $\mu \in \prob(G)$ be a probability measure whose support generates $G$ as a semigroup, and let $\nu$ be a $\mu$-stationary probability measure on $X$. If the action $G \curvearrowright X$ does not have finite orbits, then $\nu$ is non-atomic. \end{lem} \begin{proof} Let us assume that $\nu$ has an atom. Let $m := \max\{ \nu(\{x\}) : x \in X\}$ and $X_m = \{ x \in X : \nu (\{x\}) = m\}$. The set $X_m $ is non-empty by hypothesis, and finite because $\nu(X) = 1$. Let $x \in X_m$. Since $\nu $ is $\mu$-stationary, $\mu \ast \nu (\{x\} ) = \nu (\{x\})$, hence \begin{equation} \sum_{g \in G} \mu(g) \nu(\{g^{-1} x\} ) = m. \nonumber \end{equation} Since $\nu(\{g^{-1}x\}) \leq m$ for every $g$, this forces $\nu(\{g^{-1} x\}) = m$ for every $g$ in the support of $\mu$ and, iterating the same argument with the convolutions $\mu_n$, for every $g$ in the semigroup generated by the support of $\mu$, that is, for all $g \in G$. The set $X_m $ is therefore $G$-invariant, finite and non-empty, which is in contradiction with the fact that the action does not have finite orbits. \end{proof} \begin{rem}\label{r1} The group $G$ acts on $(\partial_T X, d_T)$ by isometries, hence for all $\xi \in \bd X$, $f \in G$, $fB_T(\xi, \pi) = B_T(f\xi, \pi)$. \end{rem} \subsection{$B_T(\xi, \pi)$ is a null set} In this subsection, we show that $\nu(B_T(\xi, \pi))=0$. In order to do this, we use a Lemma from J. Maher and G. Tiozzo (\cite{maher_tiozzo18}), and North-South dynamics on the boundary. \begin{lem}[{\cite[Lemma 4.5]{maher_tiozzo18}}]\label{mt} Let $G$ be a discrete group acting by homeomorphisms on a metric space $M$, and let $\mu \in \prob(G)$ be a probability measure whose support generates $G$ as a semigroup. Let $\nu $ be a $\mu$-stationary measure on $M$.
Let $Y \subseteq M$, and assume that there exists a sequence of positive numbers $(\varepsilon_n)_{n \in \mathbb{N}}$ such that for all $f \in G$, there exists a sequence $(g_n)_{n \in \mathbb{N}}$ such that the translates $fY, g^{-1}_1 f Y, g^{-1}_2 f Y, \dots $ are all disjoint and for all $n$, there exists $m \in \mathbb{N}$ such that $\mu_m(g_n) \geq \varepsilon _n$. Then $\nu(Y)= 0$. \end{lem} We can then apply Lemma \ref{mt} to obtain the following result: \begin{prop}\label{zero measure} With the previous notations, for all $\xi \in \bd X$, $\nu (B_T( \xi, \pi)) = 0$. \end{prop} \begin{proof} By assumption, the action of $G$ on $X$ is non-elementary and there exists a rank one isometry. By Proposition \ref{non elem caprace fuj}, there exist two rank one isometries $g$ and $h$ whose fixed points are pairwise distinct. If we repeat the argument given in Lemma \ref{separation}, we obtain that there exist $U^{+}_g, U^{-}_g, U^{+}_h , U^{-}_h $, neighbourhoods of $g^{+}, g^{-}, h^{+}, h^{-}$ respectively, such that for all $\xi \in \bd X, \, B_T(\xi, \pi) \cap U^{+}_g \neq \emptyset \Rightarrow B_T(\xi, \pi) \cap U = \emptyset$, for $U = U^{-}_g, U^{+}_h , U^{-}_h $, and $B_T(\xi, \pi) \cap U^{-}_g \neq \emptyset \Rightarrow B_T(\xi, \pi) \cap U = \emptyset$, for $U = U^{+}_g, U^{+}_h , U^{-}_h $. \newline Let us apply Lemma \ref{hamenstadt} to the rank one isometries $g$ and $h$ with distinct fixed points. There exists $k_0$ such that for all $k \geq k_0$, $g^{k} (\bd X - U^{-}_g) \subseteq U^{+}_g$, $g^{-k} (\bd X - U^{+}_g) \subseteq U^{-}_g$, $h^{k} (\bd X - U^{-}_h) \subseteq U^{+}_h$ and $h^{-k} (\bd X - U^{+}_h) \subseteq U^{-}_h$. Let now $\xi \in \bd X$ be a boundary point, $f \in G$ an isometry and write $Y := B_T(\xi, \pi)$. We have shown in Remark \ref{r1} that $fY = B_T(f \xi, \pi)$. We are looking for a sequence of elements $(g_n)_n$ of $G$ such that the conditions in Lemma~\ref{mt} are satisfied. We distinguish three cases; note that they cover all possibilities, since $fY$ cannot intersect both $\ugm$ and $\ugp$: \newline \textbf{\underline{First case}}: If $fY \cap \ugm \neq \emptyset$, then $fY \cap \uhm = \emptyset$ and $fY \cap \uhp = \emptyset$. Hence, for all $k \geq k_0$, $h^{k} f Y \subseteq \uhp $. The translates \begin{equation*} \{ fY, h^{ k_0} f Y, h^{ 2k_0} f Y, \dots, h^{ n k_0} f Y, \dots \} \end{equation*} are all disjoint. Indeed, $f Y \cap \uhp = \emptyset $ by hypothesis, and if there existed $n, p \in \mathbb{N}$ with $p \geq 1$ such that $h^{ (n + p)k_0} f Y \cap h^{ n k_0} f Y \neq \emptyset$, then $h^{pk_0} fY \cap fY \neq \emptyset$, which is impossible because $h^{pk_0} fY \subseteq \uhp $ and $fY \cap \uhp = \emptyset$. \newline \textbf{\underline{Second case}}: If $fY \cap \ugp \neq \emptyset$, then the translates \begin{equation*} \{ fY, h^{ k_0} f Y, h^{2k_0 }fY, \dots, h^{ n k_0} f Y, \dots \} \end{equation*} are all disjoint for the same reasons. \newline \textbf{\underline{Third case}}: If $fY \cap \ugm = \emptyset$ and $fY \cap \ugp = \emptyset$, then the same argument shows that the translates \begin{equation*} \{ fY, g^{ k_0} f Y, g^{ 2k_0} f Y, \dots, g^{ n k_0} f Y, \dots \} \end{equation*} are all disjoint.
\newline \begin{figure} \centering \begin{center} \begin{tikzpicture}[scale=0.5] \draw (0,0) circle (5cm); \draw[line width =1mm] (3.83, 3.21) arc (40 : 70 : 5cm) node[right=5mm] {$B_T(f\xi, \pi)$}; \draw (3.21, 3.83) to[bend right] (5,0) node[left=3mm] {$U_h^{+}$} ; \draw (-4.62, 1.91) to[bend left] (-4.62, -1.91) node[right=2mm] {$U_g^{-}$} ; \draw (-5, 0) to[bend right = 10] (4.83, 1.3) ; \draw (0,-5) to[bend right = 10] (-2.5, 4.33) ; \draw (1.29,-4.83) to[bend right = 40] (-1.29,-4.83) node[above=3mm] {$U_h^{-}$}; \draw (-3.53, 3.53) to[bend right = 40] (-1.29,4.83) node[below=3mm] {$U_h^{+}$}; \draw (4.83, 1.3) node[above, right] {$g^{+}$}; \draw (-5,0) node[above, left] {$g^{-}$}; \draw (-2.5, 4.33) node[above, left] {$h^{+}$} ; \draw (0,-5) node[below]{$h^{-}$} ; \end{tikzpicture} \end{center} \caption{Illustration of Proposition \ref{zero measure}.} \end{figure} Now, since the measure $\mu $ is admissible, for all $n \in \mathbb{N}$, there exists $ m \in \mathbb{N}$ (depending on $n$) such that $\mu_m (h^{-nk_0}) > 0$. Similarly, for all $n\in \mathbb{N}$, there also exists $m' \in \mathbb{N} $ such that $\mu_{m'}(g^{-nk_0}) > 0 $. Consider the sequence $(\varepsilon_n)_{n \in \mathbb{N}}$ defined by $\varepsilon_n := \min \{\mu_m (h^{-nk_0}), \mu_{m'}(g^{-nk_0})\}$. It is a sequence of positive numbers, and it does not depend on $f$. Finally, for all $f \in G$, we have found a sequence $(g_n)$ such that the translates $\{ fY, \dots, g^{-1}_n f Y\}$ are all disjoint and such that for all $n$, there exists $m \in \mathbb{N}$ such that $\mu_m(g_n) \geq \varepsilon_n$. The proposition follows from Lemma \ref{mt}. \end{proof} We can now prove the main result of this section. \begin{thm}\label{unicite} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$, and assume that $G $ contains a rank one element. Then there exists a unique $\mu$-stationary measure $\nu \in \prob(\overline{X})$. \end{thm} \begin{proof} By Lemma \ref{subsequence} and Remark \ref{subsequence rem}, for all $x \in X$, there almost surely exists a subsequence $(Z_{\phi (n)}(\omega)x)_n $ of $(Z_n(\omega) x)_n$ and $z^{+} (\omega), z^{-}(\omega) \in \bd X$ such that $(Z_{\phi(n)}(\omega) x)_n$ converges to $z^{+} (\omega)$, and $(Z^{-1}_{\phi(n)}(\omega) x)_n$ converges to $z^{-}(\omega)$. By Theorem \ref{PS}, for every compact subset $K \subseteq \bd X - B_T( z^{-}(\omega), \pi)$ and every open neighbourhood $U$ of $z^{+}(\omega)$, there exists $n_0$ such that $Z_{\phi(n)}(\omega)K \subseteq U$ for all $n \geq n_0$. By Proposition \ref{zero measure}, $\nu (B_T( z^{-}(\omega), \pi)) = 0$, hence for every open neighbourhood $A$ of $z^{+}(\omega)$, $Z_{\phi(n)}(\omega)\nu(A)$ converges to $1$. In other words, $Z_{\phi(n)}(\omega) \nu \rightarrow \delta_{z^{+}(\omega)}$ in the weak-$\ast$ topology. By Theorem \ref{furstenberg73}, $Z_n(\omega)\nu \rightarrow \nu_\omega$ in the weak-$\ast$ topology, so $\nu_\omega = \delta_{z^{+}(\omega)}$ by uniqueness of the limit. Since $\nu = \int_{\Omega} \delta_{z^{+}(\omega)} d \mathbb{P}(\omega)$ and $z^{+}(\omega)$ does not depend upon $\nu$, the measure $\nu \in \prob(\overline{X}) $ is the unique $\mu$-stationary measure on $\overline{X}$. \end{proof} \section{Convergence of the random walk} \label{convergence random walk} The goal of this section is to show that for all $x \in X$, $Z_n(\omega)x \rightarrow z^{+}(\omega)$ $\mathbb{P}$-almost surely.
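Before turning to the proofs, let us mention a guiding example; it is classical, is included only as an illustration, and is not used in the sequel. Let $X$ be the Cayley tree of the free group $G = F_2$ with respect to a free generating set, which is a proper $\cat(0)$ space, and note that every hyperbolic isometry of a tree is rank one, since trees contain no flat half-planes. The action of $F_2$ on $X$ is non-elementary, and for any admissible probability measure $\mu$ on $F_2$ the statement we are aiming at specialises to the classical fact that the random walk $(Z_n(\omega)x)_n$ converges almost surely to an end of the tree; the limiting end is distributed according to the corresponding harmonic (hitting) measure on the space of ends $\bd X$, which is also the unique $\mu$-stationary measure by Theorem \ref{unicite}.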
Note that since $G$ acts by isometries, if $ (Z_n(\omega)x) $ converges to $\xi \in \bd X$ for some $x \in X$, then $(Z_n(\omega)x') $ also converges to $\xi $ for all $x' \in X$. The following is known as the Portmanteau Theorem, and is a classical result in measure theory. \begin{prop} Let $Y$ be a metric space, $P_n$ a sequence of probability measures on $Y$, and $P$ a probability measure on $Y$. Then the following are equivalent: \begin{itemize} \item $P_n \rightarrow_n P$ in the weak$-\ast$ topology; \item $\underset{n \rightarrow \infty}{\liminf} \ P_n(O) \geq P(O)$ for every open set $O \subseteq Y$. \end{itemize} \end{prop} \begin{cor}\label{portmanteau} Let $O$ be an open neighbourhood of $z^{+}(\omega)$ (for the visual topology). Then \begin{equation} \liminf_{n \rightarrow \infty} \nu(Z^{-1}_n(\omega) (O)) = \delta_{z^{+}(\omega)}(O) = 1. \end{equation} \end{cor} The next result was proven by W. Ballmann in \cite{ballman95}, and will be fundamental in the sequel. \begin{lem}[{\cite[Lemma III.3.1]{ballman95}}]\label{ballm} Let $\sigma : \mathbb{R} \rightarrow X$ be a bi-infinite geodesic in $X$ that does not bound a flat half strip of width $R >0$, with endpoints $\sigmam $ and $\sigmap$ in $\bd X$. Then there exist neighbourhoods $U$, $V$ of $\sigmam$ and $ \sigmap$ respectively in $\overline{X}$ such that for all $\xi \in U$ and $\eta \in V$, there is a geodesic $\sigma'$ in $X$ from $\xi $ to $\eta$, and for any such geodesic, we have $d(\sigma(0), \sigma') < R$. In particular, $\sigma'$ does not bound a flat strip of width $2R$. \end{lem} We recall that we have proven in Section \ref{stationary section} that $\nu$ is the unique stationary measure on $\overline{X}$, that $\mathbb{P}$-almost surely, $Z_n (\omega)\nu \rightarrow \delta_{z^{+}(\omega)}$ for some $z^{+}(\omega) \in \bd X$, and that $\nu $ disintegrates as $\nu = \int_{\Omega} \delta_{z^{+}(\omega)} d \mathbb{P}(\omega)$. In other words, for every open set $U$ in $\overline{X} $, $\nu(U) = \mathbb{P} (\omega \in \Omega \, : \, z^{+}(\omega) \in U)$. We also know from Lemma \ref{non atomic} that $\nu$ is non-atomic. \begin{rem}\label{support} The \textit{support} of a measure $m$ on a topological space $Y$ is the smallest closed set $C$ such that $m(Y \setminus C)= 0$. In other words, $y \in \supp(m)$ if and only if for all $U $ open containing $y $, $m(U) >0$. From what we have obtained in Section \ref{stationary section}, $\supp(\nu)$ is infinite, and for each $z \in \supp(\nu)$, $\nu(B_T(z, \pi))= 0 $. In other words, for every $z \in \supp(\nu)$ and $\nu$-almost every $w \in \bd X$, the points $z$ and $w$ are joined by a rank one geodesic. \end{rem} Using Proposition \ref{continuite proj}, one can now prove that the random walk goes to infinity almost surely. \begin{lem}\label{unbdd} Let $x \in X$ be a basepoint. Then $d(x, Z_n(\omega)x) \rightarrow \infty $ almost surely. \end{lem} \begin{proof} Let $z_1$ and $z_2$ be two distinct points of the support of $\nu$. By Remark \ref{support}, we can take $z_1$ and $z_2$ to be joined by a rank one geodesic $\sigma$. Let us suppose without loss of generality that $\sigma(- \infty) = z_1$ and $\sigmap= z_2$. Recall that by Remark \ref{flat strip}, there exists $R > 0 $ large enough so that $\sigma $ does not bound a flat strip of width $R$. Since $G$ acts by isometries on $X$, what we want to show does not depend on the basepoint and we can take $x~:=~\sigma(0)$. Let us assume by contradiction that $(Z_n(\omega)x)_n$ admits a bounded subsequence.
Because $X$ is proper, there exist $y \in X$ and a subsequence $(\phi(n))_n$ such that $Z_{\phi(n)}(\omega) x \rightarrow y \in X$. In particular, there exists $n_0$ such that for all $n \geq n_0$, $d(Z_{\phi(n)}(\omega) x , y) \leq 1$. Due to Corollary \ref{portmanteau}, for every open neighbourhood $O$ of $z^{+}(\omega)$ and every $\varepsilon>0$, there exists $N \in \mathbb{N}$ such that for all $n~\geq~N$, $\nu(Z^{-1}_n(\omega) O) \geq 1 - \varepsilon$. Now define $U$, $V$ to be the open neighbourhoods of $\sigmap$ and $\sigma(-\infty)$ respectively given by Lemma \ref{ballm}. Since $z_2 = \sigmap$ belongs to the support of $\nu$, $U$ has non-zero $\nu$-measure, thus in particular there exists $n_1\geq n_0$ such that for all $n \geq n_1$, $Z_n(\omega)U \cap O \neq \emptyset $. Repeating the same argument with $V$, there exists $n_2 \geq n_1$ so that for all $n \geq n_2$, $Z_n(\omega)U \cap O \neq \emptyset $ and $Z_n(\omega)V \cap O \neq \emptyset $. Now fix $r, \varepsilon > 0 $, with $r > \varepsilon$, and let $R'= R'(r, R+1, \varepsilon) $ be given by Proposition \ref{continuite proj}. Observe that the set $O := U(y, z^{+}(\omega), R', \varepsilon/3)$ is an open neighbourhood of $z^{+}(\omega)$, so by the previous argument, there exist $n_2 \geq n_0$ and $(\xi, \eta) \in U \times V$ such that $Z_{\phi(n_2)}(\omega)\xi$ and $Z_{\phi(n_2)}(\omega)\eta$ both belong to $O$. Now by Lemma~\ref{ballm}, there exists a geodesic line $\sigma_{\xi, \eta}$ in $X$ joining $\xi $ and $\eta $ such that $d(x, \sigma_{\xi, \eta})\leq R$. Let $x'$ be the projection of $x$ on $\sigma_{\xi, \eta}$, so that $d(x,x') \leq R$. Then for all $n \in \mathbb{N}$, \begin{eqnarray} d(Z_{\phi(n)}(\omega) x', \, y) &\leq& d( Z_{\phi(n)}(\omega) x' , Z_{\phi(n)}(\omega) x ) + d (Z_{\phi(n)}(\omega) x, y) \nonumber \\ & \leq & d( x' , x ) + d (Z_{\phi(n)}(\omega) x, y) \nonumber \\ & \leq & R + d (Z_{\phi(n)}(\omega) x, y) \nonumber. \end{eqnarray} In particular, applying this inequality for $n= n_2$ yields $d(Z_{\phi(n_2)}(\omega) x', \, y) \leq R+1$. From now on, denote $Z_{\phi(n_2)}(\omega) x'$ by $y'$, and by $p_{r}$ the closest point projection $p_{r} : \overline{X} \rightarrow \overline{B}(y', r)$. Due to Proposition~\ref{continuite proj}, and because we have chosen $R'$ accordingly, \begin{equation*} U(y, z^{+}(\omega), R', \varepsilon/3) \subseteq U(y', z^{+}(\omega), r, \varepsilon). \end{equation*} In particular, since we have defined $\xi $ and $\eta $ such that $Z_{\phi(n_2)}(\omega)\xi$ and $Z_{\phi(n_2)}(\omega)\eta$ belong to $U(y, z^{+}(\omega), R', \varepsilon/3)$, it implies that $Z_{\phi(n_2)}(\omega)\xi$ and $Z_{\phi(n_2)}(\omega)\eta$ both belong to $U(y', z^{+}(\omega), r, \varepsilon)$. However, there is a geodesic line from $Z_{\phi(n_2)}(\omega)\xi$ to $Z_{\phi(n_2)}(\omega)\eta$ passing through $y' = Z_{\phi(n_2)}(\omega) x'$, so that \begin{equation*} d(p_{r}(Z_{\phi(n_2)}(\omega)\xi), p_{r}(Z_{\phi(n_2)}(\omega)\eta)) = 2r. \end{equation*} Now $Z_{\phi(n_2)}(\omega)\xi$ and $Z_{\phi(n_2)}(\omega)\eta$ both belong to $U(y', z^{+}(\omega), r, \varepsilon)$, which means that \begin{equation*} d(p_{r}(Z_{\phi(n_2)}(\omega)\xi), p_{r}(z^{+}(\omega))) < \varepsilon, \end{equation*} and \begin{equation*} d(p_{r}(Z_{\phi(n_2)}(\omega)\eta), p_{r}(z^{+}(\omega))) < \varepsilon. \end{equation*} Now by the triangle inequality, \begin{equation*} d(p_{r}(Z_{\phi(n_2)}(\omega)\xi), p_{r}(Z_{\phi(n_2)}(\omega)\eta)) < 2 \varepsilon, \end{equation*} a contradiction with the fact that $\varepsilon < r$. See Figure \ref{lemme 4.4}.
\end{proof} \begin{figure}[h] \centering \begin{center} \begin{tikzpicture}[scale=1] \filldraw[black] (0,0) circle (1pt) node[left] {$y$} ; \filldraw[black] (6,0) circle (1pt) node[right] {$z^{+}(\omega)$} ; \draw (5,2) node[right] {$Z_{\phi(n_2)} \xi$} ; \draw (5,-2) node[right] {$Z_{\phi(n_2)} \eta$} ; \filldraw[black] (1.26,0) circle (1pt) node[above= 3mm] {$y'$} ; \draw[red] (5,2) .. controls (0,0) .. (5,-2); \draw (3,3) .. controls (1.5,0) .. (3,-3) node[below=3mm,right] {$U(y',z^{+}, r, \varepsilon)$}; \draw (4.5,-3) .. controls (3,0) .. (4.5,3) node[above,right] {$U(y,z^{+}, R', \varepsilon/3)$}; \end{tikzpicture} \end{center} \caption{$Z_{\phi(n_2)} \xi$ and $Z_{\phi(n_2)} \eta$ cannot both belong to $U(y',z^{+}, r, \varepsilon)$.}\label{lemme 4.4} \end{figure} Now we can prove the convergence to the boundary. \begin{thm}\label{convergence} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$, and assume that $G $ contains a rank one element. Then for every $x \in X$, and for $\mathbb{P}$-almost every $\omega \in \Omega$, the random walk $(Z_n (\omega) x)_n $ converges to a boundary point $z^{+}(\omega) \in \bd X$. Moreover, $z^{+}(\omega)$ is distributed according to $\nu $. \end{thm} \begin{proof} Let $x \in X$. Because of Lemma \ref{unbdd}, the random walk $(Z_n (\omega) x)_n$ goes to infinity, so it is enough to show that there is no accumulation point of $(Z_n (\omega) x)_n $ in $\bd X$ other than the boundary point $z^{+} (\omega)$ given by Remark \ref{subsequence rem}. Assume that for a given subsequence, $Z_{\phi(n)}(\omega) x \rightarrow \xi$, with $\xi \in \bd X$. Then we can apply the results of the previous section and Theorem \ref{PS} to get that $Z_{\phi(n)}(\omega) \nu \rightarrow \delta_\xi$ in the weak-$\ast$ topology. By Theorem \ref{furstenberg73}, we have $Z_n (\omega) \nu \rightarrow \delta_{z^{+}(\omega)}$, so $z^{+}(\omega) = \xi$ by uniqueness. \end{proof} Now, Proposition \ref{zero measure} combined with Theorem \ref{tits} allows us to state a geometric result which may be of independent interest. \begin{cor}\label{limit points are rank one} Let $\xi \in \bd X$ be a limit point of the random walk $(Z_n x)_n$. Then for $\nu$-almost every point $\eta \in \bd X$, there exists a rank one geodesic joining $\xi $ to $\eta$. \end{cor} \section{Positivity of the drift}\label{drift section} \subsection{Proof of Theorem \ref{drift thm}} Now that we know that the random walk converges to the boundary, we can wonder ``at which speed'' it converges. The goal of this section is to show that this speed is linear. The strategy is classical: it was initiated by Guivarc'h and Raugi for random walks on Lie groups \cite{guivarch_raugi85} and was later used for the study of free groups by Ledrappier \cite{ledrappier01}. This type of result can be understood as a generalised version of the Law of Large Numbers for a random walk in a metric space. These questions have been extensively studied by Benoist and Quint in \cite{benoist_quint}, who have also proven a Central Limit Theorem for random walks on Gromov-hyperbolic groups, see \cite{benoist_quint16}. Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$.
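Before stating the results of this section, let us record for the reader's convenience why Kingman's subadditive ergodic theorem, invoked below, applies in this setting; we only sketch the (standard) computation, written with the convention $Z_n(\omega) = \omega_0 \omega_1 \dots \omega_{n-1}$, so that $Z_{n+m}(\omega) = Z_n(\omega)\, Z_m(S^n \omega)$, where $S$ denotes the shift on the space of increments $(\Omega, \mathbb{P})$. Since $G$ acts on $X$ by isometries, the quantities $a_n(\omega) := d(Z_n(\omega) x, x)$ satisfy \begin{equation*} a_{n+m}(\omega) \leq d(Z_{n+m}(\omega)x, Z_n(\omega)x) + d(Z_n(\omega)x, x) = d(Z_m(S^n\omega)x, x) + d(Z_n(\omega)x, x) = a_m(S^n\omega) + a_n(\omega), \end{equation*} so that $(a_n)_n$ is a subadditive cocycle over the shift $S$, which preserves the Bernoulli measure $\mathbb{P}$ and is ergodic.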
As a consequence of Kingman's subadditive ergodic theorem (see for example \cite[Corollary 4.3]{karlsson_margulis}), if the probability measure $\mu$ has finite first moment, i.e. $\sum_{g \in G} \mu(g) d(gx, x) < \infty$, then there is a constant $\lambda$ such that for $\mathbb{P}$-almost every sample path $(Z_n(\omega)x)_n$ we have \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n} d(Z_n(\omega) x, x) = \lambda. \end{equation} The constant $\lambda$ is referred to as the \textit{drift} of the random walk. We prove that, under this finite first moment assumption, the drift is positive. More precisely, we establish the following result: \begin{thm}\label{drift} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$ with finite first moment, and assume that $G $ contains a rank one element. Let $x \in X$ be a basepoint of the random walk. Then the drift $\lambda$ is almost surely positive: \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n} d(Z_n(\omega) x, x) = \lambda > 0. \end{equation} \end{thm} From now on, we denote by $\nu$ the unique $\mu$-stationary measure on $\overline{X}$ given by Theorem \ref{unicite}. As in \cite[Theorem 9.3]{fernos_lecureux_matheus18}, we begin by showing that the displacement $d(Z_n(\omega) x, x)$ is well approximated by the horofunctions $h_\xi (Z_n(\omega)x)$. For the remainder of the section, we keep the notation introduced in Theorem \ref{drift}. \begin{prop} Let $x \in X$ be a basepoint. Then for $\nu$-almost every $\xi \in \partial X$, and $\mathbb{P}$-almost every $\omega \in \Omega$, there exists $C > 0 $ such that for all $n \geq 0$ we have \begin{equation} |h_\xi (Z_n(\omega)x) - d(Z_n(\omega) x, x)| < C. \end{equation} \end{prop} \begin{proof} Because of Proposition \ref{zero measure}, for $\nu$-almost every $\xi \in \bd X$, $d_T(\xi, z^{+}(\omega)) > \pi$. With Theorem \ref{tits}, this implies that for $\nu$-almost every $\xi \in \partial X$, there is a rank one geodesic $\sigma_\xi$ in $X$ joining $\xi $ to $z^{+}(\omega)$. Let $\xi \in \partial X$ be such that $d_T(\xi, z^{+}(\omega)) > \pi$, and fix $R>0 $ such that $\sigma_\xi $ does not bound a flat strip of width $R$. By Lemma \ref{ballm}, there exist neighbourhoods $U$, $V$ of $\xi $ and $ z^{+}(\omega)$ respectively in $\overline{X}$ such that for all $\xi' \in U$ and $\eta \in V$, there is a geodesic from $\xi' $ to $\eta$, and for any such geodesic $\sigma'$, we have $d(\sigma_\xi(0), \sigma') < R$. Assume first that $x = \sigma_\xi(0)$. Since $Z_n(\omega)x \rightarrow z^{+}(\omega)$ almost surely, there exists $n_0 $ such that for all $n \geq n_0$, $Z_n(\omega)x \in V$. We are going to show that for all $n \geq n_0 $, $|h^x_\xi (Z_n(\omega)x) - d(Z_n(\omega) x, x)| \leq 2R$. Take a sequence $(y_p)_p$ in $X$ converging to $\xi $. There exists $p_0$ such that for all $p \geq p_0$, $y_p \in U$. Fix $n \geq n_0$ and $p \geq p_0$, and define $x' = x'(n,p)$ as the projection of $x$ on the geodesic segment joining $y_{p}$ to $Z_{n}(\omega)x$. By Lemma \ref{ballm}, $d(x', x) \leq R$, hence \begin{eqnarray} d(y_p, Z_n(\omega)x) & = & d(y_p, x') + d(x', Z_n (\omega)x) \nonumber \\ & \geq & d(y_p, x) - R + d(x, Z_n (\omega)x) - R \nonumber. \end{eqnarray} Then for all $p \geq p_0$, $n \geq n_0$, $d(y_p, Z_n(\omega)x) -d(y_p, x) \geq d(x, Z_n (\omega)x) -2R$.
Letting $p \rightarrow \infty$, we get that for all $n \geq n_0$, \begin{equation*} h^x_\xi (Z_n(\omega)x) + 2R \geq d(x, Z_n(\omega)x). \end{equation*} Conversely, for all $n \in \mathbb{N}$, $d(x, Z_n(\omega)x) \geq d(y_p, Z_n(\omega)x) - d(x, y_p)$, hence $d(x, Z_n(\omega)x) \geq h^x_\xi (Z_n(\omega)x) $ by taking the limit. Then there exists $C >0 $ such that for all $n \in \mathbb{N}$, \begin{equation} |h^x_\xi (Z_n(\omega)x) - d(Z_n(\omega) x, x)| \leq C. \label{bdd} \end{equation} Now if we take a different basepoint $z \in X$, \begin{eqnarray} d(Z_n(\omega)z, z ) &\leq &d(Z_n(\omega)z, Z_n(\omega)x) + d(Z_n(\omega)x, x ) + d(x, z ) \nonumber \\ & \leq & d(Z_n(\omega)x, x ) + 2 d(x, z ), \nonumber \end{eqnarray} hence $|d(Z_n(\omega)z, z ) - d(Z_n(\omega)x, x )| \leq 2 d(x, z )$. Similarly, $|h^x_\xi(Z_n(\omega)z) - h^x_\xi(Z_n(\omega)x)| \leq 2 d(x, z )$ and if we change the basepoint, $|h^x_\xi - h^z_\xi | \leq d(z, x)$, so equation \eqref{bdd} does not depend on the choice of the basepoint. \end{proof} As a consequence, we have the following corollary: \begin{cor}\label{sublin approx} For every $x \in X$, $\mathbb{P}$-almost every $\omega \in \Omega $ and $\nu$-almost every $\xi \in \partial X$, we have \begin{equation*} \lambda = \lim_{n \rightarrow \infty} \frac{1}{n} h_\xi (Z_n(\omega) x). \end{equation*} \end{cor} The rest of the proof is now very similar to what is done in \cite[Theorem 9.3]{fernos_lecureux_matheus18}, which was itself inspired by \cite{benoist_quint}. We include it for completeness. The idea is to apply results about additive cocycles. Define $\check{\mu} $ by $\check{\mu}(g)= \mu(g^{-1})$. It is still an admissible measure for $G$, so we can apply Theorem \ref{unicite} to find $\nui$, the unique $\mui$-stationary measure on $\overline{X}$. Let $T : (\Omega \times \overline{X}, \mathbb{P} \times \nui ) \rightarrow (\Omega \times \overline{X}, \mathbb{P} \times \nui )$ be defined by $T(\omega, \xi ) = (S\omega, \omega_0^{-1} \xi)$. The following Proposition is proved in \cite{benoist_quint}. Its key ingredient is the fact that $\nui$ is the unique $\mui$-stationary measure. \begin{prop}\label{ergodic} The transformation $T$ preserves the measure $\mathbb{P} \times \nui$ and acts ergodically. \end{prop} \begin{proof} Let $\beta = \mathbb{P} \times \nui$. For any bounded Borel function $\psi $ on $\Omega \times \overline{X}$, \begin{equation*} \beta(\psi)= \int_\Omega \int_{\overline{X}} \psi(\omega,x) d\mathbb{P}(\omega)d\nui(x). \end{equation*} The following computation shows that $T$ preserves the probability measure $\beta$: \begin{eqnarray} \beta(\psi \circ T)& = & \int_\Omega \int_{\overline{X}} \psi(S \omega,\omega_0^{-1}x) d\mathbb{P}(\omega)d\nui(x) \nonumber \\ & = & \int_\Omega \int_{\overline{X}} \psi(S\omega,x) d\mathbb{P}( \omega)d\nui(x) \text{ because $\omega_0$ has law $\mu$, is independent of $S\omega$, and $\nui$ is $\mui$-stationary}\nonumber \\ & = & \beta(\psi) \text{ because $S$ preserves $\mathbb{P}$}. \nonumber \end{eqnarray} Let us now show that $T$ is ergodic with respect to $\beta$. Let us denote by $\pmui$ the Markov operator associated to $\mui $: for every bounded Borel function $f $ on $\overline{X}$, $\pmui f (x) = \int f(gx)d\mui(g)$.
It follows that \begin{eqnarray} \nui(\pmui f) & = & \int \int f(gx)d\mui(g) d\nui(x) \nonumber \\ & = & \mui \ast \nui (f) \nonumber \\ & = & \nui(f) \text{ by $\mui$-stationarity.} \nonumber \end{eqnarray} Reversing this computation, a measure $\nui'$ on $\overline{X}$ is $\mui$-stationary if and only if it is $\pmui$-invariant. Since $\nui$ is the unique $\mui$-stationary measure, it is the only $\pmui$-invariant probability measure on $\overline{X}$, hence $\nui$ is $\pmui$-ergodic. Let $\psi $ be a $T$-invariant bounded Borel function on $\Omega \times \overline{X}$. Denote $\phi (x) = \int \psi(\omega, x) d\mathbb{P}(\omega)$, which is a bounded Borel function on $\overline{X}$. \begin{eqnarray} \pmui \phi (x) &=& \int \int \psi (\omega, gx) d\mathbb{P}(\omega) d\mui(g) \nonumber \\ &=& \int \int \psi (\omega, g^{-1}x) d\mathbb{P}(\omega) d\mu(g) \nonumber \\ &=& \int (\psi \circ T) (\omega, x) d\mathbb{P}(\omega) = \phi (x) \text{ by $T$-invariance of $\psi$} \nonumber. \end{eqnarray} Thus $\phi$ is $\pmui$-invariant, hence constant $\nui$-almost everywhere by ergodicity, say equal to $C$. Let $\mathcal{X}_n $ be the $\sigma$-algebra on $\Omega \times \overline{X}$ generated by the coordinates $(\omega_0, \dots, \omega_{n-1})$ and the $\overline{X}$-coordinate, and let $\phi_n = \mathbb{E}[\psi \, | \, \mathcal{X}_n]$, so that the sequence $(\phi_n)$ is a bounded martingale, which converges to $\psi$ by the martingale convergence theorem. We have by definition \begin{eqnarray} \phi_n(\omega_0, \dots, \omega_{n-1}, x) & = & \int \psi((\omega_0, \dots, \omega_{n-1}), \omega, x) d \mathbb{P}(\omega) \nonumber \\ & = & \int \psi \circ T^n ((\omega_0, \dots, \omega_{n-1}), \omega, x) d \mathbb{P}(\omega) \nonumber \text{ by $T$-invariance}\\ & = & \int \psi (\omega, \omega_{n-1}^{-1} \dots \omega_0^{-1}x) d \mathbb{P}(\omega) \nonumber\\ & = & \phi (\omega_{n-1}^{-1} \dots \omega_0^{-1}x) \nonumber \\ & = & C. \nonumber \end{eqnarray} Then $\psi $ is also constant. We have proven that $T$ acts ergodically with respect to $\beta$. \end{proof} We can now conclude the proof of Theorem \ref{drift}. \begin{proof}[Proof of Theorem \ref{drift}] Let $x \in X$ be a basepoint. Define the function $H : \Omega \times \overline{X} \rightarrow \mathbb{R}$ by \begin{equation*} H(\omega, \xi) = h_\xi (\omega_0 x). \end{equation*} Recall that $h_\xi $ is 1-Lipschitz on $X$, hence $|H(\omega, \xi)| \leq d(x, \omega_0 x ) $. Since $\mu $ has finite first moment, $\int |H(\omega, \xi)| d\mathbb{P}(\omega)d\nui(\xi) < \infty$. Now observe that for all $g_1, g_2 \in G$, horofunctions satisfy the following cocycle relation: \begin{eqnarray} h_\xi (g_1g_2 x ) & = & \lim_{x_n \rightarrow \xi}d(g_1g_2x , x_n) - d(x_n, x) \nonumber \\ & = & \lim_{x_n \rightarrow \xi}d(g_2x , g_1^{-1}x_n) - d(g_1 x, x_n ) + d(g_1 x, x_n ) - d(x_n, x) \nonumber \\ & = & \lim_{x_n \rightarrow \xi}d(g_2x , g_1^{-1}x_n) - d( x, g_1^{-1}x_n ) + d(g_1 x, x_n ) - d(x_n, x) \nonumber \\ & = & h_{g_1^{-1}\xi} (g_2 x) + h_\xi (g_1 x). \label{cocycle horof} \end{eqnarray} Relation \eqref{cocycle horof} gives that \begin{eqnarray} h_\xi (Z_n x) = \sum_{k=0}^{n-1} h_{Z_k^{-1}\xi} (\omega_k x ) = \sum_{k=0}^{n-1} H(T^k (\omega, \xi)) \label{transient cocycle}. \end{eqnarray} By Corollary \ref{sublin approx}, we have that for $\nu$-almost every $\xi \in \overline{X}$, $\frac{1}{n}h_\xi (Z_n x) \rightarrow \lambda$, thus $\frac{1}{n}\sum_{k=0}^{n-1} H(T^k (\omega, \xi)) \rightarrow \lambda$.
In the meantime, due to Proposition \ref{ergodic}, we can apply the Birkhoff ergodic theorem and obtain, for $\mathbb{P} \times \nui$-almost every $(\omega, \xi)$, \begin{equation*} \frac{1}{n}\sum_{k=0}^{n-1} H(T^k (\omega, \xi)) \rightarrow \int H(\omega, \xi) d\mathbb{P}(\omega)d\nui (\xi). \end{equation*} Now, the Proposition comparing $h_\xi (Z_n(\omega)x)$ with $d(Z_n(\omega) x, x)$, together with Theorem \ref{convergence}, gives that $h_\xi(Z_n(\omega)x) $ tends to $+\infty$ almost surely. By equation \eqref{transient cocycle}, this means that $\sum_{k=0}^{n-1} H(T^k (\omega, \xi))$ is a transient cocycle. Now by \cite[Lemma 3.6]{guivarch_raugi85}, we obtain that $\int H(\omega, \xi) d\mathbb{P}(\omega)d\nui (\xi) > 0$. Combined with the two convergences above, this shows that $\lambda = \int H(\omega, \xi) d\mathbb{P}(\omega)d\nui (\xi) > 0$: the drift is positive and we have proven Theorem \ref{drift}. \end{proof} \subsection{Applications} Now that we know that the drift is positive, we can state an application which is a reformulation of \cite[Theorem 2.1]{karlsson_margulis}. It states that we have a geodesic tracking of the random walk. \begin{cor} Let $G$ be a discrete group and $G \curvearrowright X$ a non-elementary action by isometries on a proper $\cat (0)$ space $X$. Let $\mu \in \prob(G) $ be an admissible probability measure on $G$ with finite first moment, and assume that $G $ contains a rank one element. Let $x \in X$ be a basepoint of the random walk. Then for almost every $\omega \in \Omega$, there is a unique geodesic ray $\gamma^\omega : [0, \infty) \rightarrow X$ starting at $x$ such that \begin{equation} \lim_{n\rightarrow \infty} \frac{1}{n} d(\gamma^\omega(\lambda n), Z_n(\omega)x) = 0, \end{equation} where $\lambda $ is the (positive) drift of the random walk. \end{cor} Another application that could be of interest concerns boundary theory. The convergence of the random walk stated in Theorem \ref{convergence} provides a natural map \begin{equation} z^{+} : \left\{ \begin{array}{rcr} \Omega & \rightarrow & \bd X \nonumber\\ \omega & \mapsto & z^{+}(\omega). \nonumber \end{array} \right. \end{equation} Since for all $n, \omega$, $Z_n(S\omega) = \omega_0^{-1} Z_{n+1}(\omega)$, we have the equivariance property \begin{equation} z^{+}(S\omega) = \omega_0^{-1}z^{+}(\omega). \nonumber \end{equation} In other words, $(\bd X, \nu)$ is a $(G, \mu)$-boundary. A natural question is to determine under which conditions $(\bd X, \nu)$ is maximal among $(G, \mu)$-boundaries in the sense of Theorem \ref{poisson boundary}. Now Kaimanovich gave a criterion \cite[Theorem 6.4]{kaimanovitch00}, namely the ``strip criterion'', for determining whether $(\bd X, \nu)$ is maximal within the category of $(G, \mu)$-boundaries. \newline A \textit{gauge} on $G$ is an increasing sequence $\mathcal{G} = (\mathcal{G}_k)_k$ of subsets of $G$ exhausting $G$. The \textit{gauge function} associated to $\mathcal{G}$ is the function \begin{equation*} |g|_\mathcal{G} := \min \{k \, : \, g \in \mathcal{G}_k\}. \end{equation*} \begin{thm}[{\cite[Theorem 6.4]{kaimanovitch00}}]\label{kai strips} Let $\mu$ be a probability measure on a countable group $G$ with finite entropy $H(\mu) = - \sum_{g \in G} \mu(g) \log (\mu(g)) < \infty$, and let $(B_{-}, m_-)$ and $(B_{+}, m_+)$ be $(G, \mui)$ and $(G, \mu)$-boundaries respectively. Assume that there exists a gauge $\mathcal{G}$ on $G$ and a measurable $G$-equivariant map $S$ assigning to pairs of points $(b_-, b_+ ) \in B_- \times B_+$ non-empty ``strips'' $S(b_-, b_+) \subseteq G$ such that for all $g \in G$, and $(m_- \otimes m_+ )$-a.e.
$(b_-, b_+) \in B_- \times B_+$, \begin{equation} \frac{1}{n} \log |S(b_-, b_+) g \cap \mathcal{G}_{|Z_n|_{\mathcal{G}}}| \underset{n \rightarrow \infty}{\longrightarrow} 0 \label{strip criterion equation} \end{equation} in probability with respect to $\mathbb{P}$, then the boundary $(B_+, m_+)$ is maximal. \end{thm} Using this celebrated result, it could be possible to adapt our situation to this context in order to give satisfactory criteria under which $(\bd X, \nu)$ is in fact the Poisson boundary of $(G, \mu)$. If we further assume that the action is proper and cocompact, the criterion is satisfied; this was done by Karlsson and Margulis \cite[Corollary 6.2]{karlsson_margulis}. If we do not assume that the action is geometric, we think that Corollary \ref{limit points are rank one} could be useful in order to find the strips required in Theorem \ref{kai strips}, and thus prove the maximality of $(\bd X, \nu)$ as a $(G, \mu)$-boundary. T.~Fern\'{o}s used this kind of strategy in order to give weak conditions under which the Roller boundary of a finite dimensional $\cat(0)$ cube complex is in fact the Furstenberg-Poisson boundary of a random walk on an acting group $G$. Nevertheless, we were not able to determine satisfactory assumptions under which Theorem \ref{kai strips} could be applied in our context. \bibliographystyle{alpha} \bibliography{bibliography} \end{document}
2205.07570v1
http://arxiv.org/abs/2205.07570v1
Weighted approximation in higher-dimensional missing digit sets
\documentclass[12pt]{article} \usepackage[margin=1in]{geometry} \usepackage{amscd} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{enumerate} \usepackage{tikz} \usepackage{wrapfig, framed, caption} \usepackage{float} \usetikzlibrary{arrows} \usepackage[font=small,labelfont=bf]{caption} \usepackage{xcolor} \usepackage{mathtools} \usepackage[colorlinks=true, linkcolor=blue, citecolor=blue, pagebackref=true]{hyperref} \usepackage{ulem} \usepackage{cite} \allowdisplaybreaks \normalem \parskip=5pt \newtheorem{theorem}{Theorem} \newtheorem{theoremY}{Theorem Y} \newtheorem*{theoremY*}{Theorem Y} \newtheorem{theoremAB}{Theorem AB} \newtheorem*{theoremAB*}{Theorem AB} \newtheorem*{theoremIDSC}{Inhomogeneous Duffin--Schaeffer Conjecture} \newtheorem*{linearformsmtp*}{Mass transference principle for linear forms} \newtheorem{corollary}{Corollary} \newtheorem*{corollary*}{Corollary} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{claim}{Claim} \newtheorem*{claim*}{Claim} \newtheorem{conjecture}{Conjecture} \newtheorem{problem}{Problem} \newtheorem{question}{Question} \theoremstyle{definition} \newtheorem{definition}{Definition} \theoremstyle{remark} \newtheorem{remark}{Remark} \newtheorem*{remark*}{Remark} \newtheorem{example}{Example} \newcommand{\bp}{\mathbf{p}} \newcommand{\cRp}{\cR_{\cC}(\Phi)} \newcommand{\rao}{R_{\alpha,1}} \newcommand{\rak}{R_{\alpha,k}} \newcommand{\lp}{\lambda_\psi} \newcommand{\cJp}{\mathcal{J}_{\mathcal{C}}(\Phi)} vemsy=msbm5 scaled 1200 vemsy \renewcommand{\Bbb}[1]{\mathbb{#1}} \newcommand{\A}{{\Bbb A}} \newcommand{\B}{{\Bbb B}} \newcommand{\C}{{\Bbb C}} \newcommand{\D}{{\Bbb D}} \newcommand{\E}{{\Bbb E}} \newcommand{\F}{{\Bbb F}} \newcommand{\G}{{\Bbb G}} \newcommand{\bbH}{{\Bbb H}} \newcommand{\I}{{\Bbb I}} \newcommand{\J}{{\Bbb J}} \newcommand{\K}{{\Bbb K}} \newcommand{\bbL}{{\Bbb L}} \newcommand{\M}{{\Bbb M}} \newcommand{\N}{{\Bbb N}} \newcommand{\bbO}{{\Bbb O}} \newcommand{\bbP}{{\Bbb P}} \newcommand{\Q}{{\Bbb Q}} \newcommand{\R}{{\Bbb R}} \newcommand{\Rp}{{\Bbb R}^{+}} \newcommand{\bbS}{{\Bbb S}} \newcommand{\T}{{\Bbb T}} \newcommand{\bbU}{{\Bbb U}} \newcommand{\V}{{\Bbb V}} \newcommand{\W}{{\Bbb W}} \newcommand{\X}{{\Bbb X}} \newcommand{\Y}{{\Bbb Y}} \newcommand{\Z}{{\Bbb Z}} \newcommand{\cA}{{\cal A}} \newcommand{\cB}{{\cal B}} \newcommand{\cC}{{\cal C}} \newcommand{\cD}{{\cal D}} \newcommand{\cE}{{\cal E}} \newcommand{\cF}{{\cal F}} \newcommand{\cG}{{\cal G}} \newcommand{\cH}{{\cal H}} \newcommand{\cI}{{\cal I}} \newcommand{\cJ}{{\cal J}} \newcommand{\cK}{{\cal K}} \newcommand{\cL}{{\cal L}} \newcommand{\cM}{{\cal M}} \newcommand{\cN}{{\cal N}} \newcommand{\cO}{{\cal O}} \newcommand{\cP}{{\cal P}} \newcommand{\cQ}{{\cal Q}} \newcommand{\cR}{{\cal R}} \newcommand{\cS}{{\cal S}} \newcommand{\cSM}{{\cal S}^*} \newcommand{\cT}{{\cal T}} \newcommand{\cU}{{\cal U}} \newcommand{\cV}{{\cal V}} \newcommand{\cW}{{\cal W}} \newcommand{\cX}{{\cal X}} \newcommand{\cY}{{\cal Y}} \newcommand{\cZ}{{\cal Z}} \newcommand{\ve}{\varepsilon} \newcommand{\vphi}{\varphi} \newcommand{\Om}{\Omega} \newcommand{\U}{\Upsilon} \newcommand{\La}{\Lambda} \newcommand{\tpsi}{\tilde\psi} \newcommand{\tphi}{\tilde\phi} \newcommand{\tU}{\tilde\Upsilon} \newcommand{\Pn}{\Phi_n} \newcommand{\Pm}{\Phi_m} \newcommand{\Pj}{\Phi_j} \newcommand{\pn}{\varphi_n} \newcommand{\bal}{\beta_\alpha} \newcommand{\balpha}{\boldsymbol{\alpha}} \newcommand{\bgamma}{\boldsymbol{\gamma}} \newcommand{\btau}{\boldsymbol{\tau}} xx}{\Phi} xy}{\Theta} 
\newcommand{\ba}{\mathbf{a}} \newcommand{\bb}{\mathbf{b}} \newcommand{\bc}{\mathbf{c}} \newcommand{\bi}{\mathbf{i}} \newcommand{\bj}{\mathbf{j}} \newcommand{\x}{\mathbf{x}} \newcommand{\y}{\mathbf{y}} \newcommand{\p}{\mathbf{p}} \newcommand{\q}{\mathbf{q}} \newcommand{\bt}{\mathbf{t}} \newcommand{\br}{\mathbf{r}} \newcommand{\bv}{\mathbf{v}} \newcommand{\0}{\mathbf{0}} \newcommand{\ie}{{\it i.e.}\/ } \newcommand{\eg}{{\it e.g.}\/ } \newcommand{\diam}{\mathrm{diam}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\vv}[1]{{\mathbf{#1}}} \newcommand{\Veronese}{\cV} \renewcommand{\le}{\leq} \renewcommand{\ge}{\geq} \newcommand{\ra}{R_{\alpha}} \newcommand{\kgb}{K_{G,B}} \newcommand{\kgbl}{K_{G',B^{(l)}}} \newcommand{\kgbf}{K^f_{G,B}} \newcommand{\hf}{\cH^f} \newcommand{\hs}{\cH^s} \newcommand{\fB}{\mathfrak{B}} \newcommand{\an}{(A;n)} \newcommand{\ann}{(A';n')} \newcommand{\Ln}{(L;n)} \newcommand{\Lj}{(L;j)} \newcommand{\aj}{(A;j)} \newcommand{\ajj}{(A';j')} \newcommand{\ajs}{(A;j^*)} \newcommand{\aji}{(A;j_i)} \newcommand{\ajis}{(A;j_{i^{**}})} \newcommand{\as}{(A;s)} \newcommand{\at}{(A;t)} \newcommand{\can}{\cC\an} \newcommand{\caj}{\cC\aj} \newcommand{\cajs}{\cC\ajs} \newcommand{\caji}{\cC\aji} \newcommand{\cajis}{\cC\ajis} \newcommand{\cas}{\cC\as} \newcommand{\cat}{\cC\at} \newcommand{\tW}{\widetilde{W}} \newcommand{\Id}{\text{Id}} \newcommand{\bad}{\mathbf{BA}} \newcommand{\sing}{\mathbf{Sing}} \newcommand{\DI}{\mathbf{DI}} \newcommand{\FS}{\mathbf{FS}} \newcommand{\ibad}{\mathbf{IBA}} \newcommand{\well}{\mathbf{WA}} \newcommand{\vwa}{\mathbf{VWA}} \newcommand{\nvwa}{\mathbf{NVWA}} \newcommand{\bone}{\boldsymbol{1}} \newcommand{\recipso}{\mathcal R_0} \newcommand{\freeD}{\mathcal{D}'} \newcommand{\comp}{^{\mathsf{c}}} \newcommand{\nopar}{{\parfillskip=0pt \par}} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\set}{\lbrace}{\rbrace} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\parens}{\lparen}{\rparen} \DeclarePairedDelimiter{\brackets}{\lbrack}{\rbrack} \DeclarePairedDelimiter{\supn}{\|}{\|} \DeclareMathOperator{\Leb}{Leb} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\dimh}{\dim_H} \DeclareMathOperator{\Mu}{M} \title{Weighted approximation in higher-dimensional missing digit sets} \author{Demi Allen \\ (Exeter) \and Benjamin Ward \\ (York)} \date{\today} \begin{document} \frenchspacing \maketitle \begin{abstract} In this note, we use the mass transference principle for rectangles, recently obtained by Wang and Wu (Math. Ann., 2021), to study the Hausdorff dimension of sets of ``weighted $\Psi$-well-approximable'' points in certain self-similar sets in $\R^d$. Specifically, we investigate weighted $\Psi$-well-approximable points in ``missing digit'' sets in $\R^d$. The sets we consider are natural generalisations of Cantor-type sets in $\R$ to higher dimensions and include, for example, \emph{four corner Cantor sets} (or \emph{Cantor dust}) in the plane with contraction ratio $\frac{1}{n}$ with $n \in \N$. 
\end{abstract} \section{Introduction and motivation} The work of this current paper is motivated by a question posed in a seminal paper by Mahler \cite{Mahler_1984}; namely, how well can we approximate points in the middle-third Cantor set by: \begin{enumerate}[(i)] \itemsep-2pt \item{rational numbers contained in the Cantor set, or} \label{Mahler part 1} \item{rational numbers not in the Cantor set?} \end{enumerate} The first contribution to this question was arguably made by Weiss \cite{Weiss_2001}, who showed that almost no point in the middle-third Cantor set is \emph{very well approximable} with respect to the natural probability measure on the middle-third Cantor set. Since this initial contribution, numerous authors have contributed to answering these questions, approaching them from many different perspectives. For example, Levesley, Salp, and Velani \cite{LSV_2007} considered triadic approximation in the middle-third Cantor set, different subsets of the first named author, Baker, Chow, and Yu \cite{ABCY, ACY, Baker4} studied dyadic approximation in the middle-third Cantor set, Kristensen \cite{Kristensen_2006} considered approximation of points in the middle-third Cantor set by algebraic numbers, and Tan, Wang and Wu \cite{TWW_2021} have recently studied part (\ref{Mahler part 1}) by introducing a new notion of the ``height'' of a rational number. There has also been considerable effort invested in trying to generalise some of the above results to more general self-similar sets in $\R$ and also to various fractal sets in higher dimensions. See, for example, \cite{Allen-Barany, Baker1, Baker2, Baker3, Baker-Troscheit, BFR_2011, FS_2014, FS_2015, Khalil-Luethi, KLW_2005, PV_2005, WWX_Cantor, Yu_self-similar_2021} and references therein. The results in this paper can be thought of as a contribution to answering a natural $d$-dimensional weighted variation of part (\ref{Mahler part 1}) of Mahler's question. In particular, we will be interested in weighted approximation in $d$-dimensional ``missing digit'' sets. Before we introduce the general framework we will consider here, we provide a very brief overview of some of the classical results on weighted Diophantine approximation in the ``usual'' Euclidean setting which provide further motivation for the current work. Fix $d \in \N$ and let $\Psi=(\psi_{1}, \dots , \psi_{d})$ be a $d$-tuple of approximating functions $\psi_{i}:\N \to [0, \infty)$ with $\psi_{i}(r) \to 0$ as $r \to \infty$ for each $1 \leq i \leq d$. The set of weighted simultaneously $\Psi$-well-approximable points in $\R^d$ is defined as \begin{equation*} W_{d}(\Psi):= \left\{ \x = (x_{1}, \dots , x_{d}) \in [0,1]^{d}: \left|x_{i}-\frac{p_{i}}{q}\right| < \psi_{i}(q)\, ,\; 1 \leq i \leq d, \text{ for i.m.} \; (p_1, \dots p_d, q) \in \Z^{d} \times \N \right\}, \end{equation*} where i.m. denotes infinitely many. Note that the special case where each approximating function is the same, that is $\Psi=(\psi, \dots , \psi)$, is generally the more intensively studied set. The case where each approximating function is potentially different, usually referred to as \emph{weighted simultaneous approximation}, is a natural generalisation of this. Simultaneous approximation (i.e. when the approximating function is the same in each coordinate axis) can generally be seen as a metric generalisation of Dirichlet's Theorem, whereas weighted simultaneous approximation is a metric generalisation of Minkowski's Theorem. 
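To illustrate the weighted setting (this example is included purely for orientation and is not used in what follows), take $d=2$ and $\Psi(q)=(q^{-2},q^{-3})$. A point $(x_1,x_2) \in [0,1]^2$ then belongs to $W_2(\Psi)$ precisely when the system of inequalities \[ \left|x_1-\frac{p_1}{q}\right| < \frac{1}{q^{2}} \qquad \text{and} \qquad \left|x_2-\frac{p_2}{q}\right| < \frac{1}{q^{3}} \] has infinitely many solutions $(p_1,p_2,q) \in \Z^{2} \times \N$. The quality of approximation demanded in the second coordinate is thus strictly higher than in the first, which is precisely the feature that distinguishes weighted approximation from the classical simultaneous setting.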
Weighted simultaneous approximation has earned interest in the past few decades due to Schmidt and natural connections to Littlewood’s Conjecture, see for example \cite{BRV_2016, BPV_2011, An_2013, An_2016, Schmidt_1983}. Motivated by classical works due to the likes of Khintchine \cite{Khintchine_24, Khintchine_25} and Jarn\'{i}k \cite{Jarnik_31} which tell us, respectively, about the Lebesgue measure and Hausdorff measures of the sets of classical simultaneously $\Psi$-well-approximable points (i.e. when $\Psi=(\psi, \dots, \psi)$), one may naturally also wonder about the ``size'' of sets of weighted simultaneously $\Psi$-well-approximable points in terms of Lebesgue measure, Hausdorff dimension, and Hausdorff measures. Khintchine \cite{Weighted_Khintchine} showed that if $\psi: \N \to [0,\infty)$ and $\Psi(q)=(\psi(q)^{\tau_1}, \dots, \psi(q)^{\tau_d})$ for some $\btau = (\tau_1,\dots,\tau_d) \in (0,1)^d$ with $\tau_1+\tau_2+\dots+\tau_d=1$, then \begin{equation*} \lambda_d(W_d(\Psi))=\begin{cases} \quad 0 \quad &\text{ if } \quad \sum_{q=1}^{\infty} q^d \psi(q) < \infty, \\[2ex] \quad 1 \quad &\text{ if } \quad \sum_{q=1}^{\infty} q^d \psi(q) = \infty, \text{ and $q^d\psi(q)$ is monotonic}. \end{cases} \end{equation*} Throughout we use $\lambda_d(X)$ to denote the $d$-dimensional Lebesgue measure of a set \mbox{$X \subset \R^d$}. For more general approximating functions $\Psi(q)=(\psi_1(q), \dots, \psi_d(q))$, with $\prod_{i=1}^{d}\psi_i(q)$ monotonically decreasing and $\psi_{i}(q)<q^{-1}$ for each $1\leq i \leq d$, it has been proved, see \cite{Cassels_1950, Weighted_Khintchine, Schmidt_1960, Gallagher_1962}, that \begin{equation*} \lambda_d(W_d(\Psi))=\begin{cases} \quad 0 \quad &\text{ if } \quad \sum_{q=1}^{\infty} q^d \psi_1(q)\dots \psi_d(q) < \infty, \\[2ex] \quad 1 \quad &\text{ if } \quad \sum_{q=1}^{\infty} q^d \psi_1(q)\dots \psi_d(q) = \infty. \end{cases} \end{equation*} For approximating functions of the form $\Psi(q)=(\psi_1(q),\dots,\psi_d(q))$ where \[\psi_i(q)=q^{-t_{i}-1}, \quad \text{for some vector } \bt=(t_{1}, \dots , t_{d}) \in \R^{d}_{>0},\] Rynne \cite{Rynne_1998} proved that if $\sum_{i=1}^{d}t_{i} \geq 1$, then \begin{equation*} \dimh W_{d}(\Psi)=\min_{1 \leq k \leq d} \left\{ \frac{1}{t_{k}+1} \left( d+1+\sum_{i:t_{k} \geq t_{i}}(t_{k}-t_{i})\right) \right\}. \end{equation*} Throughout, we write $\dimh{X}$ to denote the \emph{Hausdorff dimension} of a set $X \subset \R^d$, we refer the reader to \cite{Falconer} for definitions and properties of Hausdorff dimension and Hausdorff measures. Rynne's result has recently been extended to a more general class of approximating functions by Wang and Wu \cite[Theorem 10.2]{WW2019}. In recent years, there has been rapidly growing interest in whether similar statements can be proved when we intersect $W_d(\Psi)$ with natural subsets of $[0,1]^{d}$, such as submanifolds or fractals. The study of such questions has been further incentivised by many remarkable works of the recent decades, such as \cite{KLW_2005, KM_1998, VV_2006}, and applications to other areas, such as wireless communication theory \cite{ABLVZ_2016}. \section{$d$-dimensional missing digit sets and main results} In this paper we study weighted approximation in $d$-dimensional missing digit sets, which are natural extensions of classical missing digit sets (i.e. generalised Cantor sets) in $\R$ to higher dimensions. 
A very natural class of higher dimensional missing digit sets included within our framework are the \emph{four corner Cantor sets} (or \emph{Cantor dust}) in $\R^2$ with contraction ratio $\frac{1}{n}$ for $n \in \N$. Throughout we consider $\R^d$ equipped with the supremum norm, which we denote by $\supn{\cdot}$. For subsets $X,Y \subset \R^{d}$ we define $\diam(X) = \sup\{\|u-v\|:u,v \in X\}$ and $\dist(X,Y)= \inf\{\|x-y\|: x \in X, y \in Y\}$. We define \emph{higher-dimensional missing digit sets} via iterated function systems as follows. Let $b \in \N$ be such that $b \geq 3$ and let $J_1,\dots,J_d$ be proper subsets of $\set{0,1,\dots,b-1}$ such that for each $1 \leq i \leq d$, we have \[N_i:=\#J_i\geq 2.\] Suppose $J_i=\set{a^{(i)}_1,\dots,a^{(i)}_{N_i}}$. For each $1 \leq i \leq d$, we define the iterated function system \[\Phi^i=\left\{f_j:[0,1]\to[0,1]\right\}_{j=1}^{N_i} \quad \text{where} \quad f_j(x)=\frac{x+a^{(i)}_j}{b}.\] Let $K_i$ be the attractor of $\Phi^i$; that is, $K_i \subset \R$ is the unique non-empty compact set which satisfies \[K_i=\bigcup_{j=1}^{N_i}{f_j(K_i)}.\] We know that such a set exists due to work of Hutchinson \cite{Hutchinson}. Equivalently $K_i$ is the set of $x \in [0,1]$ for which there exists a base $b$ expansion of $x$ consisting only of digits from $J_i$. In view of this, we will also use the notation $K_{b}(J_i)$ to denote this set. For example, in this notation, the classical middle-third Cantor set is precisely the set $K_3(\{0,2\})$. We call the sets $K_b(J_i)$ \emph{missing digit sets} since they consist of numbers with base-$b$ expansions missing specified digits. Note that, for each $1 \leq i \leq d$, the Hausdorff dimension of $K_i$, which we will denote by $\gamma_i$, is given by \[\gamma_i = \dimh{K_i} = \frac{\log{N_i}}{\log{b}}.\] We will be interested in the \emph{higher-dimensional missing digit set} \[K:=\prod_{i=1}^{d}{K_i}\] formed by taking the Cartesian product of the sets $K_i$, $1 \leq i \leq d$. As a natural concrete example, we note that the \emph{four corner Cantor set} in $\R^2$ with contraction ratio $\frac{1}{b}$ (with $b \geq 3$ an integer) can be written in our notation as $K_{b}(\{0,b-1\}) \times K_{b}(\{0,b-1\})$. We note that $K$ is the attractor of the iterated function system \[\Phi=\left\{f_{(j_1,\dots,j_d)}:[0,1]^d \to [0,1]^d, (j_1,\dots,j_d) \in \prod_{i=1}^{d}{J_i}\right\}\] where \[f_{(j_1,\dots,j_d)}\begin{pmatrix} x_1 \\ \vdots \\ x_d\end{pmatrix} = \begin{pmatrix}\frac{x_1+j_1}{b} \\ \vdots \\ \frac{x_d+j_d}{b}\end{pmatrix}.\] Notice that $\Phi$ consists of \[N:=\prod_{i=1}^{d}{N_i}\] maps and so, for convenience, we will write \[\Phi = \left\{g_j:[0,1]^d\to[0,1]^d\right\}_{j=1}^{N}\] where the $g_j$'s are just the maps $f_{(j_1,\dots,j_d)}$ from above written in some order. The Hausdorff dimension of $K$, which we denote by $\gamma$, is \[\gamma = \dimh{K} = \frac{\log{N}}{\log{b}}.\] We will write \[\La = \set{1,2,\dots,N} \qquad \text{and} \qquad \La^*=\bigcup_{n=0}^{\infty}{\La^n}.\] We write $\bi$ to denote a word in $\La$ or $\La^*$ and we write $|\bi|$ to denote the length of $\bi$. For $\bi \in \Lambda^*$ we will also use the shorthand notation \[g_{\bi} = g_{i_1} \circ g_{i_2} \circ \dots \circ g_{i_{|\bi|}}.\] We adopt the convention that $g_{\emptyset}(x)=x$. Let $\Psi: \La^* \to [0,\infty)$ be an approximating function. For each $x \in K$, we define the set \begin{align*} W(x,\Psi)=\left\{y \in K: \supn{y-g_{\bi}(x)}<\Psi(\bi) \text{ for infinitely many } \bi \in \La^*\right\}. 
\end{align*} The following theorem is a special case of \cite[Theorem 1.1]{Allen-Barany}. \begin{theorem} \label{self-similar approx thm} Let $\Phi$ and $K$ be as defined above. Let $x \in K$ and let $\vphi: \N \to [0,\infty)$ be a monotonically decreasing function. Let $\Psi(\bi) = \diam(g_{\bi}(K))\vphi(|\bi|)$. Then, for $s>0$, \[ \cH^{s}(W(x,\Psi))= \begin{cases} 0 & \text{if} \quad \sum_{\bi \in \La^*}{\Psi(\bi)^s}<\infty, \\[2ex] \cH^s(K) & \text{if} \quad \sum_{\bi \in \La^*}{\Psi(\bi)^s}=\infty. \end{cases} \] \end{theorem} Of particular interest to us here is the following easy corollary. \begin{corollary} \label{simultaneous corollary} Let $\Phi$ and $K$ be as above and suppose that $\diam(K)=1$. Let $\psi: \N \to [0,\infty)$ be such that $b^n\psi(b^n)$ is monotonically decreasing and define $\vphi: \N \to [0,\infty)$ by $\vphi(n)=b^n\psi(b^n)$. Let $\Psi(\bi) = \diam(g_{\bi}(K))\vphi(|\bi|)$. Recall that $\gamma = \dimh{K}$. Then, for $x \in K$, we have \[ \cH^{\gamma}(W(x,\Psi))= \begin{cases} 0 & \text{if} \quad \sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}}<\infty, \\[2ex] \cH^{\gamma}(K) & \text{if} \quad \sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}}=\infty. \end{cases} \] \end{corollary} \begin{proof} It follows from Theorem \ref{self-similar approx thm} that \[ \cH^{\gamma}(W(x,\Psi))= \begin{cases} 0 & \text{if} \quad \sum_{\bi \in \La^*}{\Psi(\bi)^{\gamma}}<\infty, \\[2ex] \cH^{\gamma}(K) & \text{if} \quad \sum_{\bi \in \La^*}{\Psi(\bi)^{\gamma}}=\infty. \end{cases} \] However, in this case, by the definition of $\vphi$ and our assumption that $\diam(K)=1$, we have \[\sum_{\bi \in \La^*}{\Psi(\bi)^{\gamma}} = \sum_{n=1}^{\infty}{\sum_{\substack{\bi \in \La^* \\ |\bi| = n}}{(\diam(g_{\bi}(K))\vphi(|\bi|))^{\gamma}}} = \sum_{n=1}^{\infty}{\sum_{\substack{\bi \in \La^* \\ |\bi| = n}}{\psi(b^n)^{\gamma}}} = \sum_{n=1}^{\infty}{N^n \psi(b^n)^{\gamma}} = \sum_{n=1}^{\infty}{(b^n \psi(b^n))^{\gamma}}. \qedhere \] \end{proof} For an approximating function $\psi: \N \to [0,\infty)$, define \begin{align} \label{W x psi} W(x,\psi)=\left\{y \in K: \supn{y-g_{\bi}(x)} < \psi(b^{|\bi|}) \text{ for infinitely many } \bi \in \La^*\right\}. \end{align} In essence, $W(x,\psi)$ is a set of ``simultaneously $\psi$-well-approximable'' points in $K$. The following statement regarding these sets can be deduced immediately from Corollary \ref{simultaneous corollary}. \begin{corollary} \label{W x psi corollary} Let $\Phi$ and $K$ be defined as above and let $\psi: \N \to [0,\infty)$ be such that $b^n\psi(b^n)$ is monotonically decreasing. Suppose further that $\diam(K)=1$. Then, \[ \cH^{\gamma}(W(x,\psi))= \begin{cases} 0 & \text{if} \quad \sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}}<\infty, \\[2ex] \cH^{\gamma}(K) & \text{if} \quad \sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}}=\infty. \end{cases} \] \end{corollary} Here we will be interested in weighted versions of the sets $W(x,\psi)$. More specifically, for $\bt=(t_1,\dots,t_d) \in \R^d_{\geq 0}$ and for $x \in K$, we define the \emph{weighted approximation set} \[W(x,\psi,\bt) = \left\{\y=(y_1,\dots,y_d) \in K: |y_j-g_{\bi}(x)_j|<\psi(b^{|\bi|})^{1+t_j}, 1 \leq j \leq d, \text{ for i.m. } \bi \in \La^*\right\}.\] Here we are using the notation $g_{\bi}(x)=(g_{\bi}(x)_1,\dots,g_{\bi}(x)_d)$. Our main results relating to the Hausdorff dimension of sets of the form $W(x,\psi,\bt)$ are as follows. \begin{theorem} \label{lower bound theorem} Let $\Phi$ and $K$ be defined as above.
Recall that $\gamma = \dimh{K}$ and $\gamma_i=\dimh{K_i}$ for each $1 \leq i \leq d$. Let $\psi: \N \to [0, \infty)$ be such that $b^n\psi(b^n)$ is monotonically decreasing. Further suppose that $\diam(K)=1$ and \[\sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}} = \infty.\] Then, for $\bt = (t_1,\dots,t_d) \in \R^d_{\geq 0}$, we have \[\dimh{W(x,\psi,\bt)} \geq \min_{1 \leq k \leq d}\left\{\frac{1}{1+t_k}\left(\gamma + \sum_{j:t_j \leq t_k}{(t_k-t_j)\gamma_{j}}\right)\right\}.\] \end{theorem} If $\psi$ satisfies more stringent conditions, then we can show that the lower bound given in Theorem \ref{lower bound theorem} in fact gives an exact formula for the Hausdorff dimension of $W(x,\psi,\bt)$. More precisely, we are able to show the following. \begin{theorem} \label{upper bound theorem} Let $\Phi$ and $K$ be as defined above. Let $x \in K$ and let $\psi:\N \to [0,\infty)$ be such that: \begin{enumerate}[(i)] \item{$b^n\psi(b^n)$ is monotonically decreasing,} \item{$\displaystyle{\sum_{n=1}^{\infty}{(b^n \psi(b^n))^{\gamma}}=\infty}, \quad$ and} \item{$\displaystyle{\sum_{n=1}^{\infty}{(b^n \psi(b^n)^{1+\ve})^{\gamma}}<\infty} \quad$ for every $\ve>0$.} \end{enumerate} Then, for $\bt = (t_1, \dots, t_d) \in \R^d_{\geq 0}$, we have \[\dimh{W(x,\psi,\bt)} = \min_{1 \leq k \leq d}\left\{\frac{1}{1+t_k}\left(\gamma + \sum_{j:t_j \leq t_k}{(t_k-t_j)\gamma_{j}}\right)\right\}.\] \end{theorem} As an example of an approximating function which satisfies conditions (i)--(iii), one can think of $\psi(q)=\left(q(\log_{b}q)^{1/\gamma}\right)^{-1}$. This function naturally appears when one considers analogues of Dirichlet's theorem in missing digit sets (see \cite{BFR_2011, FS_2014}). As a corollary to Theorem \ref{upper bound theorem} we deduce the following statement which can be interpreted as a higher-dimensional weighted generalisation of \cite[Theorem 4]{LSV_2007}. In \cite[Theorem 4]{LSV_2007}, Levesley, Salp, and Velani establish the Hausdorff measure of the set of points in a one-dimensional base-$b$ missing digit set (i.e. of the form $K_b(J)$ in our present notation) which can be well-approximated by rationals with denominators which are powers of $b$. Before we state our corollary, we fix one more piece of notation. Given an approximating function $\psi: \N \to [0,\infty)$, an infinite subset $\cB \subset \N$, and $\bt = (t_1,\dots,t_d) \in \R_{\geq 0}^d$, we define \[W_{\cB}(\psi,\bt)=\left\{x \in K: \left|x_{i}-\frac{p_i}{q}\right|<\psi(q)^{1+t_i}, 1 \leq i \leq d, \text{ for i.m. } (p_1,\dots,p_d,q) \in \Z^d \times \cB \right\}.\] \begin{corollary} \label{LSV_equivalent} Fix $b \in \N$ with $b \geq 3$ and let $\cB=\{b^n: n=0,1,2,\dots\}$. Let $K$ be a higher dimensional missing digit set as defined above (with base $b$) and write $\gamma=\dimh{K}$. Furthermore, suppose that $\set{0,b-1} \subset J_i$ for every $1 \leq i \leq d$. In particular, this also means that $\diam(K)=1$. Let $\psi: \N \to [0,\infty)$ be an approximating function such that \begin{enumerate}[(i)] \item{$b^{n}\psi(b^{n})$ is monotonically decreasing with $b^{n}\psi(b^{n}) \to 0$ as $n \to \infty$}, \item{$\displaystyle{\sum_{n=1}^{\infty}{(b^n \psi(b^n))^{\gamma}}=\infty}, \quad$ and} \item{$\displaystyle{\sum_{n=1}^{\infty}{(b^n \psi(b^n)^{1+\ve})^{\gamma}}<\infty} \quad$ for every $\ve>0$.} \end{enumerate} Then \begin{equation*} \dimh W_{\cB}(\psi, \bt) = \min_{1 \leq k \leq d}\left\{\frac{1}{1+t_k}\left(\gamma + \sum_{j:t_j \leq t_k}{(t_k-t_j)\, \gamma_j}\right)\right\}.
\end{equation*} \end{corollary} \begin{proof} Observe that the conditions imposed in the statement of Corollary \ref{LSV_equivalent} guarantee that Theorem \ref{upper bound theorem} is applicable. Furthermore, by our assumption that $b^{n}\psi(b^{n}) \to 0$ as $n \to \infty$, we may assume without loss of generality that $\psi(b^n) < b^{-n}$ for all $n \in \N$. Next, we note that if $\p=(p_1,\dots,p_d) \in \Z^d$ and $\frac{\p}{b^n} = \left(\frac{p_1}{b^n},\dots,\frac{p_d}{b^n}\right) \notin K$, then we must have \[\dist\left(\frac{\p}{b^n},K\right) \geq b^{-n}, \quad \text{where} \quad \dist(x,K)=\inf\set{\|x-y\|: y \in K}.\] (Recall that we use $\|\cdot\|$ to denote the supremum norm in $\R^d$.) Thus we need only concern ourselves with pairs $(\p,q) \in \Z^{d} \times \cB$ for which $\frac{\p}{q} \in K$. Let $G=\left\{x=(x_1,\dots,x_d) \in \{0,1\}^{d}\right\}$ and note that $G \subset K$ by the assumption that $\{0,b-1\} \subset J_{i}$ for each $1\leq i \leq d$. For any $x \in G$ and any $\bj \in \Lambda^{n}$ it is possible to write $g_{\bj}(x)=\frac{\p}{b^{n}}$ for some $\p \in (\N\cup \set{0})^{d}$. Hence \[W(x, \psi, \bt) \subset W_{\cB}(\psi, \bt).\] Furthermore, the set of all rational points of the form $\frac{\p}{b^{n}}$ contained in $K$ is \begin{equation*} \bigcup_{x \in G} \bigcup_{\bj \in \Lambda^{n}} g_{\bj}(x). \end{equation*} Hence \begin{equation*} W_{\cB}(\psi, \bt) \subset \bigcup_{x \in G} W(x, \psi, \bt). \end{equation*} By the finite stability of Hausdorff dimension (see \cite{Falconer}), Corollary \ref{LSV_equivalent} now follows from Theorem~\ref{upper bound theorem}. \end{proof} Notice that in Theorem \ref{lower bound theorem}, Theorem \ref{upper bound theorem}, and Corollary \ref{LSV_equivalent}, we insist on the same underlying base $b$ in each coordinate direction. This is somewhat unsatisfactory and one might hope to be able to obtain results where we can have different bases $b_i$ in each coordinate direction. The first steps towards proving results relating to weighted approximation in this setting can be seen in \cite[Section 12]{WW2019}. Proving more general results with different bases in different coordinate directions is likely to be a very challenging problem since such sets are \emph{self-affine} and, generally speaking, self-affine sets are more difficult to deal with than self-similar or self-conformal sets. Indeed, very little is currently known even regarding non-weighted approximation in self-affine sets. \noindent{\bf Structure of the paper:} The remainder of the paper will be arranged as follows. In Section~\ref{measure theory section} we will present some measure theoretic preliminaries which will be required for the proofs of our main results. The key tool required for proving Theorem \ref{lower bound theorem} is a mass transference principle for rectangles proved recently by Wang and Wu \cite{WW2019}. We introduce this in Section \ref{mtp section}. In Section \ref{lower bound section} we present our proof of Theorem \ref{lower bound theorem} and we conclude in Section \ref{upper bound section} with the proof of Theorem \ref{upper bound theorem}. \section{Some Measure Theoretic Preliminaries} \label{measure theory section} Recall that $\gamma = \dimh{K}$ and that $\gamma_i=\dimh{K_i}$ for $1 \leq i \leq d$, where $K$ and $K_i$ are as defined above. Furthermore, note that $0 < \cH^{\gamma}(K) < \infty$ and $0< \cH^{\gamma_{i}}(K_{i})<\infty$ for each $1 \leq i \leq d$, see for example \cite[Theorem 9.3]{Falconer}. 
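As a purely illustrative aside (not used anywhere in our arguments), the quantities recalled above are straightforward to compute in concrete examples. The following minimal Python sketch does this for a hypothetical two-dimensional missing digit set with base $b=3$ and digit sets $J_1=\{0,2\}$ and $J_2=\{0,1,2\}$, using the formulae $\gamma_i=\log N_i/\log b$ and $\gamma=\log N/\log b$ which are recalled in the proof of Lemma \ref{M regularity lemma} below; the particular choice of digit sets is ours and serves only as an example.
\begin{verbatim}
# Illustrative sketch (hypothetical digit sets): Hausdorff dimensions of a
# higher-dimensional missing digit set, via gamma_i = log(N_i)/log(b) and
# gamma = log(N)/log(b) = gamma_1 + ... + gamma_d.
import math

b = 3                                  # hypothetical base
digit_sets = [{0, 2}, {0, 1, 2}]       # hypothetical digit sets J_1, J_2

N_list = [len(J) for J in digit_sets]                        # N_i = #J_i
gammas = [math.log(N_i) / math.log(b) for N_i in N_list]     # gamma_i = dim_H K_i
gamma = math.log(math.prod(N_list)) / math.log(b)            # gamma = dim_H K

print("gamma_i:", gammas)
print("gamma  :", gamma, "(= sum of the gamma_i:", sum(gammas), ")")
\end{verbatim}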
Let us define the measures \[\mu:=\frac{\cH^{\gamma}|_{K}}{\cH^{\gamma}(K)} \qquad \text{and} \qquad \mu_i:=\frac{\cH^{\gamma_i}|_{K_i}}{\cH^{\gamma_i}(K_i)} \quad \text{for each } 1 \leq i \leq d.\] So, for $X \subset \R^d$, we have \[\mu(X) = \frac{\cH^{\gamma}(X \cap K)}{\cH^{\gamma}(K)}.\] Similarly, for $X \subset \R$, for each $1 \leq i \leq d$ we have \[\mu_i(X) = \frac{\cH^{\gamma_i}(X \cap K_i)}{\cH^{\gamma_i}(K_i)}.\] Note that $\mu$ defines a probability measure supported on $K$ and, for each $1 \leq i \leq d$, $\mu_i$ defines a probability measure supported on $K_i$. Note also that the measure $\mu$ is $\delta$-Ahlfors regular with $\delta = \gamma$ and, for each $1 \leq i \leq d$, the measure $\mu_i$ is $\delta$-Ahlfors regular with $\delta=\gamma_i$ (see, for example, \cite[Theorem 4.14]{Mattila_1999}). We will also be interested in the product measure \[\Mu:=\prod_{i=1}^{d}{\mu_i}.\] We note that $\Mu$ is $\delta$-Ahlfors regular with $\delta = \gamma$. This fact follows straightforwardly from the Ahlfors regularity of each of the $\mu_i$'s. \begin{lemma} \label{M regularity lemma} The product measure $\Mu = \prod_{i=1}^{d}{\mu_i}$ on $\R^d$ is $\delta$-Ahlfors regular with $\delta=\gamma$. \end{lemma} \begin{proof} Let $B=\prod_{i=1}^{d}{B(x_i,r)}$, $r>0$, be an arbitrary ball in $\R^d$. The aim is to show that \mbox{$\Mu(B) \asymp r^{\gamma}$}. Recall that for each $1 \leq i \leq d$, the measure $\mu_i$ is $\delta$-Ahlfors regular with \mbox{$\delta = \gamma_i = \dimh{K_i} = \frac{\log{N_i}}{\log{b}}$}. Also recall that $N=\prod_{i=1}^{d}{N_i}$ and $\gamma = \dimh{K} = \frac{\log{N}}{\log{b}}$. Thus, we have \[\Mu(B) = \prod_{i=1}^{d}{\mu_i(B(x_i,r))} \asymp \prod_{i=1}^{d}{r^{\gamma_i}} = r^{\sum_{i=1}^{d}{\gamma_i}}.\] Note that \[\sum_{i=1}^{d}{\gamma_i} = \sum_{i=1}^{d}{\frac{\log{N_i}}{\log{b}}} = \frac{\log(\prod_{i=1}^{d}{N_i})}{\log{b}} = \frac{\log{N}}{\log{b}} = \gamma.\] Hence, $\Mu(B) \asymp r^{\gamma}$ as claimed. \end{proof} We also note that, up to a constant factor, the product measure $\Mu$ is equivalent to the measure $\mu = \frac{\cH^{\gamma}|_K}{\cH^{\gamma}(K)}$. \begin{lemma}\label{measure equivalence lemma} Let $\Mu=\prod_{i=1}^{d}{\mu_i}$. Then, up to a constant factor, $\Mu$ is equivalent to $\mu$; i.e. for any Borel set $F \subset \R^d$, we have $\Mu(F) \asymp \mu(F)$. \end{lemma} Lemma \ref{measure equivalence lemma} follows immediately upon combining Lemma \ref{M regularity lemma} with \cite[Proposition 2.2 (a) + (b)]{Falconer_techniques}. In our present setting, where $K$ is a self-similar set with well-separated components, we can actually show the stronger statement that $\mu=\Mu$. \begin{proposition} \label{measures_are_equal} The measures $\mu$ and $\Mu$ are equal, i.e. for every Borel set $F \subset \R^d$, we have $\mu(F) = \Mu(F)$. \end{proposition} \begin{proof} For each $1 \leq i \leq d$, there exists a unique Borel probability measure (see, for example, \cite[Theorem 2.8]{Falconer_techniques}) $m_i$ satisfying \begin{align} \label{measure uniqueness 1} m_i &= \sum_{j=1}^{N_i}{\frac{1}{N_i}m_i \circ f_j^{-1}}. \end{align} Likewise, there exists a unique Borel probability measure $m$ satisfying \begin{align} \label{measure uniqueness 2} m &= \sum_{j=1}^{N}{\frac{1}{N}m \circ g_j^{-1}}. \end{align} We begin by showing that $\mu_i$ satisfies \eqref{measure uniqueness 1} for each $1 \leq i \leq d$. Note that $\cH^{\gamma_i}(f_{j_1}(K_i) \cap f_{j_2}(K_i)) = 0$ for any $1\leq j_1,j_2 \leq N_i$ with $j_1 \neq j_2$. 
Thus, for any Borel set $X \subset \R$, we have \begin{align*} \mu_i(X) &= \frac{1}{\cH^{\gamma_i}(K_i)}\cH^{\gamma_i}(X \cap K_{i}) \\ &= \frac{1}{\cH^{\gamma_i}(K_i)}\sum_{j=1}^{N_i}{\cH^{\gamma_i}(X \cap f_j(K_i))} \\ &= \frac{1}{\cH^{\gamma_i}(K_i)}\sum_{j=1}^{N_i}{\cH^{\gamma_i}(f_j(f_j^{-1}(X) \cap K_i))} \\ &= \frac{1}{\cH^{\gamma_i}(K_i)}\sum_{j=1}^{N_i}{\left(\frac{1}{b}\right)^{\gamma_i}\cH^{\gamma_i}(f_j^{-1}(X) \cap K_i)} \\ &= \frac{1}{\cH^{\gamma_i}(K_i)}\sum_{j=1}^{N_i}{\frac{1}{N_i}\cH^{\gamma_i}(f_j^{-1}(X) \cap K_i)} \\ &= \sum_{j=1}^{N_i}{\frac{1}{N_i}\mu_i \circ f_j^{-1}(X)}. \end{align*} By an almost identical argument, it can be shown that $\mu$ satisfies \eqref{measure uniqueness 2}. Finally, we show that $\Mu$ also satisfies \eqref{measure uniqueness 2} and, hence, by the uniqueness of solutions to~\eqref{measure uniqueness 2}, we conclude that $\Mu$ must be equal to $\mu$. Since $\mu_i$ satisfies \eqref{measure uniqueness 1} for each $1 \leq i \leq d$, and since each map $g_j$ acts coordinatewise as a $d$-tuple $(f_{j_1},\dots,f_{j_d})$ of maps from the coordinate iterated function systems, we have \begin{align*} \Mu &= \prod_{i=1}^{d}{\mu_i} \\ &= \prod_{i=1}^{d}{\left(\sum_{j=1}^{N_i}{\frac{1}{N_i}\mu_i \circ f_j^{-1}}\right)} \\ &= \sum_{\bj = (j_1, \dots, j_d) \in \prod_{i=1}^{d}{\{1,\dots,N_i\}}}{\frac{1}{N}\prod_{i=1}^{d}{\mu_i \circ f_{j_i}^{-1}}} \\ &= \sum_{j=1}^{N}{\frac{1}{N}\Mu\circ g_j^{-1}}. \qedhere \end{align*} \end{proof} \section{Mass transference principle for rectangles} \label{mtp section} To prove Theorem \ref{lower bound theorem}, we will use the mass transference principle for rectangles established recently by Wang and Wu in \cite{WW2019}. The work of Wang and Wu generalises the famous Mass Transference Principle originally proved by Beresnevich and Velani \cite{BV_MTP}. Since its initial discovery in \cite{BV_MTP}, the Mass Transference Principle has found many applications, especially in Diophantine Approximation, and has by now been extended in numerous directions. See \cite{Allen-Beresnevich, Allen-Baker, BV_MTP, BV_Slicing, Koivusalo-Rams, WWX2015, WW2019, Zhong2021} and references therein for further information. Here we shall state the general ``full measure'' mass transference principle from rectangles to rectangles established by Wang and Wu in \cite[Theorem~3.4]{WW2019}. Fix an integer $d \geq 1$. For each $1 \leq i \leq d$, let $(X_i,|\cdot|_i,m_i)$ be a bounded locally compact metric space equipped with a $\delta_i$-Ahlfors regular probability measure $m_i$. We consider the product space $(X, |\cdot|, m)$ where \[X = \prod_{i=1}^{d}{X_i}, \qquad |\cdot|=\max_{1 \leq i \leq d}{|\cdot|_i}, \quad \text{and} \quad m=\prod_{i=1}^{d}{m_i}.\] Note that a ball $B(x,r)$ in $X$ is the product of balls in $\{X_i\}_{1 \leq i \leq d}$; \[B(x,r) = \prod_{i=1}^{d}{B(x_i,r)} \quad \text{for} \quad x=(x_1,\dots,x_d).\] Let $J$ be an infinite countable index set and let $\beta: J \to \R_{\geq 0}: \alpha \mapsto \beta_{\alpha}$ be a positive function such that for any $M > 1$, the set \[\{\alpha \in J: \beta_{\alpha} < M\}\] is finite. Let $\rho: \R_{\geq 0} \to \R_{\geq 0}$ be a non-increasing function such that $\rho(u) \to 0$ as $u \to \infty$. For each $1 \leq i \leq d$, let $\{R_{\alpha,i}: \alpha \in J\}$ be a sequence of subsets of $X_i$.
Then, the \emph{resonant sets} in $X$ that we will be concerned with are \[\left\{R_{\alpha}=\prod_{i=1}^{d}{R_{\alpha,i}}:\alpha \in J\right\}.\] For a vector $\ba = (a_1,\dots,a_d) \in \R_{>0}^d$, write \[\Delta(R_{\alpha}, \rho(\beta_{\alpha})^{\ba}) = \prod_{i=1}^{d}{\Delta(R_{\alpha,i},\rho(\beta_{\alpha})^{a_i})},\] where $\Delta(R_{\alpha,i},\rho(\beta_{\alpha})^{a_i})$ appearing on the right-hand side denotes the $\rho(\beta_{\alpha})^{a_i}$-neighbourhood of $R_{\alpha,i}$ in $X_i$. We call $\Delta(R_{\alpha,i},\rho(\beta_{\alpha})^{a_i})$ the \emph{part of $\Delta(R_{\alpha},\rho(\beta_{\alpha})^{\ba})$ in the $i$th direction.} Fix $\ba = (a_1, \dots, a_d) \in \R_{>0}^d$ and suppose $\bt = (t_1,\dots,t_d) \in \R_{\geq 0}^d$. We are interested in the set \[W_{\ba}(\bt) = \left\{x \in X: x \in \Delta(R_{\alpha},\rho(\beta_{\alpha})^{\ba+\bt}) \quad \text{for i.m. } \alpha \in J \right\}.\] We can think of $\Delta(R_{\alpha},\rho(\beta_{\alpha})^{\ba+\bt})$ as a smaller ``rectangle'' obtained by shrinking the ``rectangle'' $\Delta(R_{\alpha},\rho(\beta_{\alpha})^{\ba})$. Finally, we require that the resonant sets satisfy a certain \emph{$\kappa$-scaling} property, which in essence ensures that locally our sets behave like affine subspaces. \begin{definition} \label{kappa scaling} Let $0 \leq \kappa < 1$. For each $1 \leq i \leq d$, we say that $\{R_{\alpha,i}\}_{\alpha \in J}$ has the \emph{$\kappa$-scaling property} if for any $\alpha \in J$ and any ball $B(x_i,r)$ in $X_i$ with centre $x_i \in R_{\alpha,i}$ and radius $r>0$, for any $ 0 < \varepsilon < r$, we have \[c_1 r^{\delta_i \kappa}\varepsilon^{\delta_i(1-\kappa )} \leq m_i(B(x_i,r) \cap \Delta(R_{\alpha,i},\varepsilon)) \leq c_2 r^{\delta_i \kappa} \varepsilon^{\delta_i(1-\kappa)}\] for some absolute constants $c_1, c_2 > 0$. \end{definition} In our case $\kappa=0$ since our resonant sets are points. For justification of this, and calculations of $\kappa$ for other resonant sets, see \cite{Allen-Baker}. Wang and Wu established the following mass transference principle for rectangles in \cite{WW2019}. \begin{theorem}[Wang -- Wu, \cite{WW2019}] \label{Theorem WW} Assume that for each $1 \leq i \leq d$, the measure $m_i$ is $\delta_i$-Ahlfors regular and that the family of resonant sets $\{R_{\alpha,i}\}_{\alpha \in J}$ has the $\kappa$-scaling property. Suppose \[m\left(\limsup_{\substack{\alpha \in J \\ \beta_{\alpha} \to \infty}}{\Delta(R_{\alpha},\rho(\beta_{\alpha})^{\ba})}\right) = m(X).\] Then we have \[\dimh{W_{\ba}(\bt)} \geq s(\bt):= \min_{A \in \cA}\left\{\sum_{k \in \cK_1}{\delta_k} + \sum_{k \in \cK_2}{\delta_k} + \kappa \sum_{k \in \cK_3}{\delta_k}+(1-\kappa)\frac{\sum_{k \in \cK_3}{a_k \delta_k} - \sum_{k \in \cK_2}{t_k \delta_k}}{A}\right\},\] where \[\cA = \{a_i, a_i+t_i: 1 \leq i \leq d\}\] and for each $A \in \cA$, the sets $\cK_1, \cK_2, \cK_3$ are defined as \[\cK_1 = \{k: a_k \geq A\}, \quad \cK_2=\{k: a_k+t_k \leq A\} \setminus \cK_1, \quad \cK_3=\{1,\dots,d\}\setminus (\cK_1 \cup \cK_2)\] and thus give a partition of $\{1,\dots,d\}$. \end{theorem} \section{Proof of Theorem \ref{lower bound theorem}} \label{lower bound section} To prove Theorem \ref{lower bound theorem}, we will apply Theorem \ref{Theorem WW} with $X_i = K_i$, $m_i=\mu_i$ and $\abs{\cdot}_i = \abs{\cdot}$ (absolute value in $\R$) for each $1 \leq i \leq d$.
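Before carrying out this specialisation, we record a purely numerical illustration of the bound appearing in Theorem \ref{Theorem WW}. The following minimal Python sketch (with entirely hypothetical values of the $\delta_i$ and of $\bt$, and with $\kappa=0$ and $\ba=(1,\dots,1)$ as in our setting) evaluates $s(\bt)$ by running over $A \in \cA$ and forming the index sets $\cK_1,\cK_2,\cK_3$, and compares the result with the closed-form expression appearing in Theorem \ref{lower bound theorem}; it is only a sanity check of the bookkeeping and plays no role in the proofs.
\begin{verbatim}
# Illustrative sketch: evaluate the lower bound s(t) of the mass transference
# principle for rectangles (with kappa = 0, a = (1,...,1), delta_i = gamma_i)
# and compare it with the closed-form expression of the lower bound theorem.
# All numerical values below are hypothetical.

def mtp_bound(gammas, ts):
    d = len(gammas)
    a = [1.0] * d
    A_values = set(a) | {a[i] + ts[i] for i in range(d)}
    best = float("inf")
    for A in A_values:
        K1 = [k for k in range(d) if a[k] >= A]
        K2 = [k for k in range(d) if a[k] + ts[k] <= A and k not in K1]
        K3 = [k for k in range(d) if k not in K1 and k not in K2]
        val = (sum(gammas[k] for k in K1) + sum(gammas[k] for k in K2)
               + (sum(gammas[k] for k in K3)
                  - sum(ts[k] * gammas[k] for k in K2)) / A)
        best = min(best, val)
    return best

def closed_form(gammas, ts):
    gamma, d = sum(gammas), len(gammas)
    return min((gamma + sum((ts[k] - ts[j]) * gammas[j]
                            for j in range(d) if ts[j] <= ts[k])) / (1 + ts[k])
               for k in range(d))

gammas = [0.6309, 0.8928]   # hypothetical gamma_1, gamma_2
ts = [0.0, 0.5]             # hypothetical weight vector t
print(mtp_bound(gammas, ts), closed_form(gammas, ts))   # the two values agree
\end{verbatim}
We now return to the specialisation of Theorem \ref{Theorem WW}.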
Then, in our setting, we will be interested in the product space $(X, \supn{\cdot}, \Mu)$ where \[X = \prod_{i=1}^{d}{K_i} = K, \qquad \Mu = \prod_{i=1}^{d}{\mu_i},\] and $\supn{\cdot}$ denotes the supremum norm in $\R^d$. Recall that for each $1 \leq i \leq d$, the measure $\mu_i$ is $\delta_i$-Ahlfors regular with \[\delta_i = \gamma_i = \dimh{K_i}\] and the measure $\Mu$ is $\delta$-Ahlfors regular with \[\delta = \gamma = \dimh{K}.\] For us, the appropriate indexing set is \[\cJ = \{\bi \in \La^*\}.\] We define our \emph{weight function} $\beta: \La^* \to \R_{\geq 0}$ by \[\beta_{\bi} = \beta(\bi) = |\bi|.\] Note that $\beta$ satisfies the requirement that for any real number $M > 1$ the set $\set{\bi \in \La^*: \beta_{\bi} < M}$ is finite. Next we define $\rho: \R_{\geq 0} \to \R_{\geq 0}$ by \[\rho(u) = \psi(b^u).\] Since $b^n\psi(b^n)$ is monotonically decreasing by assumption, it follows that $\psi(b^n)$ is monotonically decreasing and $\psi(b^n) \to 0$ as $n \to \infty$. For a fixed $x = (x_1,\dots,x_d) \in K$, we define the resonant sets of interest as follows. For each $\bi \in \La^*$, take \[R_{\bi}^x=g_{\bi}(x).\] Correspondingly, for each $1 \leq j \leq d$, \[R_{\bi,j}^x = g_{\bi}(x)_j,\] where $g_{\bi}(x) = (g_{\bi}(x)_1, \dots, g_{\bi}(x)_d)$. So, $R_{\bi,j}^x$ is the coordinate of $g_{\bi}(x)$ in the $j$th direction. In each coordinate direction, the $\kappa$-scaling property is satisfied with $\kappa=0$, since our resonant sets are points. Let us fix $\ba = (1,1,\dots,1) \in \R_{>0}^d$. Then, in this case, we note that \[\limsup_{\substack{\alpha \in \cJ \\ \bal \to \infty}}{\Delta(R_{\alpha}^{x}, \rho(\bal)^{\ba})} = \limsup_{\substack{\bi \in \La^* \\ |\bi| \to \infty}}{\Delta(g_{\bi}(x), \psi(b^{|\bi|})^{\ba})} = W(x,\psi),\] where $W(x,\psi)$ is as defined in \eqref{W x psi}. Moreover, it follows from Corollary \ref{W x psi corollary} and Proposition \ref{measures_are_equal} that $\Mu(W(x,\psi)) = \Mu(K)$, since we assumed that $\sum_{n=1}^{\infty}{(b^n\psi(b^n))^{\gamma}} = \infty$. Now suppose that $\bt = (t_1,\dots,t_d) \in \R_{\geq 0}^d$. Then, in our case, \[W_{\ba}(\bt) = W(x,\psi,\bt),\] which is the set we are interested in. So, recalling that $\kappa = 0$ in our setting, we may now apply Theorem \ref{Theorem WW} directly to conclude that \[\dimh{W(x,\psi,\bt)} \geq \min_{A \in \cA}\left\{\sum_{k \in \cK_1}{\delta_k} + \sum_{k \in \cK_2}{\delta_k}+\frac{\sum_{k \in \cK_3}{\delta_k}-\sum_{k \in \cK_2}{t_k \delta_k}}{A}\right\} =: s(\bt),\] where \[\cA = \set{1} \cup \set{1+t_i: 1 \leq i \leq d}\] and for each $A \in \cA$ the sets $\cK_1, \cK_2, \cK_3$ are defined as follows: \[\cK_1 = \set{k: 1 \geq A}, \quad \cK_2 = \{k: 1+t_k \leq A\} \setminus \cK_1, \quad \text{and} \quad \cK_3 = \set{1,\dots,d} \setminus (\cK_1 \cup \cK_2). \] Note that $\cK_1, \cK_2, \cK_3$ give a partition of $\set{1,\dots,d}$. To obtain a neater expression for $s(\bt)$, as given in the statement of Theorem \ref{lower bound theorem}, we consider the possible cases which may arise. To this end, let us suppose, without loss of generality, that \[0 \leq t_{i_1} \leq t_{i_2} \leq \dots \leq t_{i_d}.\] \smallskip \underline{{\bf Case 1: $A=1$}} If $A = 1$, then $\cK_1 = \set{1,\dots,d}$, $\cK_2 = \emptyset$, and $\cK_3 = \emptyset$.
In this case, the ``dimension number'' simplifies to \[\sum_{j=1}^{d}{\delta_{j}} = \sum_{j=1}^{d}{\dimh{K_j}} = \sum_{j=1}^{d}{\frac{\log{N_j}}{\log{b}}} = \frac{\log{\left(\prod_{j=1}^{d}N_{j}\right)}}{\log{b}} = \frac{\log{N}}{\log{b}} = \dimh{K}.\] \smallskip \underline{{\bf Case 2:} $A=1+t_{i_k}$ with $t_{i_k}>0$} \nopagebreak Suppose $A= 1+t_{i_k}$ for some $1 \leq k \leq d$ and that $t_{i_k}>0$ (otherwise we are in Case 1). Let $k \leq k' \leq d$ be the maximal index such that $t_{i_k} = t_{i_{k'}}$. In this case, \[\cK_1 = \emptyset, \qquad \cK_2 = \set{i_1, \dots, i_{k'}}, \quad \text{and} \quad \cK_3 = \set{i_{k'+1},\dots,i_d}\] and the ``dimension number'' is \begin{align*} \sum_{j=1}^{k'}{\delta_{i_j}} + \frac{\sum_{j=k'+1}^{d}{\delta_{i_j}}-\sum_{j=1}^{k'}{t_{i_j}\delta_{i_j}}}{1+t_{i_k}} &= \frac{1}{1+t_{i_k}}\left((1+t_{i_k})\sum_{j=1}^{k'}{\delta_{i_j}} + \sum_{j=k'+1}^{d}{\delta_{i_j}}-\sum_{j=1}^{k'}{t_{i_j}\delta_{i_j}}\right) \\ &= \frac{1}{1+t_{i_k}}\left(\sum_{j=1}^{d}{\delta_{i_j}}+\sum_{j=1}^{k'}{\delta_{i_j}(t_{i_k}-t_{i_j})}\right) \\ &= \frac{1}{1+t_{i_k}}\left(\dimh{K} + \sum_{j=1}^{k'}{(t_{i_k}-t_{i_j})\dimh{K_{i_j}}}\right). \end{align*} Note that the value $\gamma=\dimh{K}$ arising in Case 1 does not affect the overall minimum, since choosing $k$ with $t_{k}$ minimal already yields a value of at most $\gamma/(1+t_{k})\leq \gamma$. Putting the two cases together, we conclude that \[\dimh{W(x,\psi,\bt)} \geq \min_{1 \leq k \leq d}\left\{\frac{1}{1+t_k}\left(\gamma + \sum_{j:t_j \leq t_k}{(t_k-t_j)\gamma_{j}}\right)\right\},\] as claimed. This completes the proof of Theorem \ref{lower bound theorem}. \section{Proof of Theorem \ref{upper bound theorem}} \label{upper bound section} Let \begin{equation*} \cA_{n}(x, \psi, \bt):= \bigcup_{\bi \in \Lambda^{n}} \Delta\left(R^{x}_{\bi}, \psi(b^{n})^{1+\bt}\right)=\bigcup_{\bi \in \Lambda^{n}}\prod_{j=1}^{d}B\left( R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{j}} \right). \end{equation*} Then \begin{equation*} W(x, \psi, \bt) = \limsup_{n \to \infty} \cA_{n}(x, \psi, \bt) \, . \end{equation*} For any $m \in \N$ we have that \begin{equation} \label{cover_1} W(x, \psi, \bt) \subset \bigcup_{n \geq m} \cA_{n}(x, \psi, \bt) \, . \end{equation} Observe that $\cA_{n}(x, \psi, \bt)$ is a collection of $N^{n}=(b^{n})^{\gamma}$ rectangles with sidelength $2\psi(b^{n})^{1+t_{j}}$ in the $j$th coordinate direction. Fix some $1 \leq k \leq d$. Throughout, suppose that $n$ is sufficiently large that $\psi(b^{n})<1$. Condition $(i)$ of Theorem \ref{upper bound theorem} implies that $\psi(b^{n})^{1+t_{k}} \leq \psi(b^{n}) \to 0$ as $n \to \infty$, and so for any $\rho>0$ there exists a sufficiently large positive integer $n_{0}(\rho)$ such that \begin{equation*} \psi(b^{n})^{1+t_{k}} \leq \rho \quad \text{ for all } n \geq n_{0}(\rho). \end{equation*} Suppose $n \geq n_0(\rho)$ and that for each $1 \leq j \leq d$ we can construct an efficient finite $\psi(b^{n})^{1+t_{k}}$-cover $\cB_{j}(\bi,k,\rho)$ for $B\left(R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{j}}\right)$ with cardinality $\#\cB_{j}(\bi,k,\rho)$ for each $\bi \in \Lambda^{n}$. Then we can construct a $\psi(b^{n})^{1+t_{k}}$-cover of $\Delta\left(R^{x}_{\bi}, \psi(b^{n})^{1+\bt}\right)$ for each $\bi \in \Lambda^{n}$ with cardinality $\prod_{j=1}^{d}\#\cB_{j}(\bi,k,\rho)$ by considering the Cartesian product of the individual covers $\cB_j(\bi, k, \rho)$ for each $1 \leq j \leq d$. By \eqref{cover_1} \begin{equation} \label{cover_2} \bigcup_{n \geq n_{0}(\rho)} \cA_{n}(x, \psi, \bt) \end{equation} is a cover of $W(x, \psi, \bt)$.
So, supposing that we can find such covers $\cB_{j}(\bi,k,\rho)$, we have that \begin{equation*} \bigcup_{n \geq n_{0}(\rho)}\bigcup_{\bi \in \Lambda^{n}}\prod_{j=1}^{d}\cB_{j}(\bi,k,\rho) \end{equation*} is a $\psi(b^{n})^{1+t_{k}}$-cover of $W(x,\psi,\bt)$. To calculate the values $\#\cB_{j}(\bi,k,\rho)$ we consider two possible cases depending on the fixed $1\leq k\leq d$. Without loss of generality suppose that $0\leq t_{1}\leq t_{2} \leq \dots \leq t_{d}$. Then, since we are assuming that $\psi(b^{n})<1$, we have that $\psi(b^{n})^{1+t_{1}}\geq \dots \geq \psi(b^{n})^{1+t_{d}}$. \smallskip \underline{{\bf Case 1: $t_{j} \geq t_{k}$}} In this case, $\psi(b^{n})^{1+t_{k}} \geq \psi(b^{n})^{1+t_{j}}$ and so, for any $\bi \in \Lambda^{n}$, we have \begin{equation*} B\left(R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{k}}\right) \supset B\left(R^{x}_{\bi,j}, \psi(b^{|\bi|})^{1+t_{j}}\right). \end{equation*} Hence, we may take our covers to be $\cB_{j}(\bi,k, \rho)=\left\{B\left(R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{k}}\right)\right\}$, and so $\#\cB_{j}(\bi,k,\rho)=1$. \smallskip \underline{{\bf Case 2: $t_{j} < t_{k}$}} In this case, $\psi(b^{n})^{1+t_{k}} < 2\psi(b^{n})^{1+t_{j}}$. Let $u \in \N$ be the unique integer such that \begin{equation} \label{u_bound} b^{-u} \leq 2\psi(b^{n})^{1+t_{j}} < b^{-u+1}, \end{equation} and observe that, for any $\bi \in \Lambda^{n}$, we have \begin{equation*} B\left(R^{x}_{\bi,j}, \psi(b^{|\bi|})^{1+t_{j}}\right) \subset \bigcup_{\substack{\ba = (a_{1},\dots,a_{u-1}) \in \Lambda_{j}^{u-1} \\ f_{a_i} \in \Phi^{j},\, 1 \leq i \leq u-1}} f_{\ba}([0,1]), \end{equation*} where $\Lambda_j = \{1,\dots,N_j\}$. Let $A$ denote the set of $\ba \in \Lambda_{j}^{u-1}$ such that \begin{equation*} f_{\ba}([0,1]) \cap B\left(R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{j}}\right)\neq \emptyset. \end{equation*} Note, by the definition of $u$ and the fact that the intervals $f_{\ba}([0,1])$ corresponding to distinct words of the same length intersect in at most a single point, that $\#A \leq 2$, since \begin{equation*} \diam \left(f_{\ba}([0,1])\right)=b^{-(u-1)} > \diam \left( B\left(R^{x}_{\bi,j}, \psi(b^{n})^{1+t_{j}}\right) \right). \end{equation*} Observe that $f_{\bb}([0,1]) \subset f_{\ba}([0,1])$ if and only if $\bb=\ba\bc$ for some $\bc \in \Lambda^{*}_{j}:=\bigcup_{m=0}^{\infty}{\Lambda_j^m}$, where we write $\ba\bc$ to denote the concatenation of the two words $\ba$ and $\bc$. Let $v \geq 0$ be the unique integer such that \begin{equation} \label{v_bound} b^{-u-v} \leq \psi(b^{n})^{1+t_{k}} < b^{-u-v+1}. \end{equation} Note that $v$ is well defined since $\psi(b^{n})^{1+t_{k}} < 2\psi(b^{n})^{1+t_{j}} < b^{-u+1}$, and so $v \geq 0$. Then \begin{equation*} \bigcup_{\substack{\ba \in A \\ \bc \in \Lambda_{j}^{v}}} f_{\ba \bc}([0,1]) \supset B\left(R^{x}_{\bi,j}, \psi(b^{|\bi|})^{1+t_{j}}\right). \end{equation*} Notice that the left-hand side above gives rise to a $\psi(b^n)^{1+t_k}$-cover of the right-hand side; let us denote this cover by $\cB_{j}(\bi,k,\rho)$. By the above arguments an easy upper bound on $\#\cB_{j}(\bi,k,\rho)$ is seen to be $2N_{j}^{v}$. Furthermore, by \eqref{u_bound} and \eqref{v_bound} we have that \begin{equation*} \#\cB_{j}(\bi,k,\rho) \leq 2N_{j}^{v} = 2 (b^{v})^{\gamma_{j}} \overset{\eqref{v_bound}}{\leq} 2\left(b^{1-u}\psi(b^{n})^{-1-t_{k}}\right)^{\gamma_{j}} \overset{\eqref{u_bound}}{\leq}2^{1+\gamma_j}b^{\gamma_{j}}\psi(b^{n})^{(t_{j}-t_{k})\gamma_{j}}.
\end{equation*} Taking the product over $1 \leq j \leq d$ and summing over $\bi \in \Lambda^n$ and $n \geq n_0(\rho)$, we see that \begin{align} \cH^{s}_{\rho}(W(x, \psi, \bt)) \, & \ll \sum_{n \geq n_0(\rho)}{\left(\left(\psi(b^n)^{1+t_k}\right)^s \times \sum_{\bi \in \Lambda^n}{\prod_{j=1}^{d}{\#\cB_j(\bi,k,\rho)}}\right)} \nonumber \\ &\ll \sum_{n\geq n_{0}(\rho)} \left(\psi(b^{n})^{1+t_{k}}\right)^{s}N^{n} \prod_{j:t_{j}<t_{k}} b^{\gamma_{j}}\psi(b^{n})^{(t_{j}-t_{k})\gamma_{j}} \nonumber \\ & \ll \sum_{n\geq n_{0}(\rho)} \psi(b^{n})^{s(1+t_{k})+\sum_{j:t_{j}<t_{k}}(t_{j}-t_{k})\gamma_{j} - \gamma} \left( \psi(b^{n})b^{n} \right)^{\gamma}. \end{align} Thus, since $\psi(b^{n})^{\delta\gamma}\left(\psi(b^{n})b^{n}\right)^{\gamma}=\left(b^{n}\psi(b^{n})^{1+\delta}\right)^{\gamma}$, it follows from condition $(iii)$ in Theorem \ref{upper bound theorem} that for any $\delta>0$ and any \[s \geq s_{0}:=\frac{\gamma+\sum_{j:t_j<t_k}(t_{k}-t_{j})\gamma_{j}+\delta\gamma}{1+t_{k}},\] we have \[\cH^{s}_{\rho}(W(x,\psi,\bt)) \to 0 \quad \text{as } \rho \to 0.\] This implies that $\dimh W(x, \psi, \bt) \leq s_{0}$ and, since $\delta>0$ was arbitrary, letting $\delta \to 0$ yields \[\dimh W(x,\psi,\bt) \leq \frac{1}{1+t_{k}}\left(\gamma+\sum_{j:t_j<t_k}(t_{k}-t_{j})\gamma_{j}\right).\] The above argument holds for any initial choice of~$k$, and so we conclude that \begin{equation*} \dimh W(x, \psi, \bt) \leq \min_{1 \leq k \leq d} \left\{ \frac{1}{1+t_{k}}\left(\gamma+ \sum_{j:t_j<t_k}(t_{k}-t_{j})\gamma_{j}\right) \right\}. \end{equation*} Combining this upper bound with the lower bound result from Theorem \ref{lower bound theorem} completes the proof of Theorem \ref{upper bound theorem}. \smallskip \noindent{\bf Acknowledgements.} The authors are grateful to Bal\'{a}zs B\'{a}r\'{a}ny, Victor Beresnevich, Jason Levesley, Baowei Wang, and Wenmin Zhong for useful discussions. \bibliographystyle{plain} \begin{thebibliography}{10} \itemsep-0.05cm \bibitem{ABLVZ_2016} F. Adiceam, V. Beresnevich, J. Levesley, S. Velani, E. Zorin, \emph{Diophantine approximation and applications in interference alignment}, Adv. Math. 302 (2016), 231--279. \bibitem{Allen-Baker} D. Allen, S. Baker, \emph{A general mass transference principle}, Selecta Math. (N.S.) 25 (2019), no.~3, Paper No. 39, 38 pp. \bibitem{ABCY} D. Allen, S. Baker, S. Chow, H. Yu, \emph{A note on dyadic approximation in Cantor's set}, preprint (2022), arXiv:2204.09452, 5 pages. \bibitem{Allen-Barany} D. Allen, B. B\'{a}r\'{a}ny, \emph{Hausdorff measures of shrinking targets on self-conformal sets}, Mathematika 67 (2021), no. 4, 807--839. \bibitem{Allen-Beresnevich} D. Allen, V. Beresnevich, \emph{A mass transference principle for systems of linear forms and its applications}, Compos. Math. 154 (2018), no. 5, 1014--1047. \bibitem{ACY} D. Allen, S. Chow, H. Yu, \emph{Dyadic approximation in the middle-third Cantor set}, preprint (2020), arXiv:2005.09300, 39 pages. \bibitem{An_2013} J. An, \emph{Badziahin-{P}ollington-{V}elani's theorem and {S}chmidt's game}, Bull. Lond. Math. Soc. (2013), no. 4, 721--733. \bibitem{An_2016} J. An, \emph{2-dimensional badly approximable vectors and {S}chmidt's game}, Duke Math. J. (2016), no. 2, 267--284. \bibitem{BPV_2011} D. Badziahin, A. Pollington, S. Velani, \emph{On a problem in simultaneous {D}iophantine approximation: {S}chmidt's conjecture}, Ann. of Math. (2) (2011), no. 3, 1837--1883. \bibitem{Baker1} S. Baker, \emph{An analogue of Khintchine's theorem for self-conformal sets}, Math. Proc. Cambridge Philos. Soc. 167 (2019), no. 3, 567--597. \bibitem{Baker2} S. Baker, \emph{Overlapping iterated function systems from the perspective of Metric Number Theory}, Mem. Amer. Math. Soc., to appear, arXiv:1901.07875. \bibitem{Baker3} S.
Baker, \emph{Intrinsic Diophantine Approximation for overlapping iterated function systems}, preprint (2021), arXiv:2104.14249, 31 pages. \bibitem{Baker4} S. Baker, \emph{Approximating elements of the middle third Cantor set with dyadic rationals}, preprint (2022), arXiv:2203.12477, 15 pages. \bibitem{Baker-Troscheit} S. Baker, S. Troscheit, \emph{Analogues of Khintchine’s theorem for random attractors}, Trans. Amer. Math. Soc. 375 (2022), 1411--1441. \bibitem{BRV_2016} V. Beresnevich, F. Ram{\'\i}rez, S. Velani, \emph{Metric Diophantine approximation: aspects of recent work}, London Math. Soc. Lecture Note Ser. (2016), no. 437, 1--95. \bibitem{BV_MTP} V. Beresnevich, S. Velani, \emph{A mass transference principle and the Duffin-Schaeffer conjecture for Hausdorff measures}, Ann. of Math. (2) 164 (2006), no. 3, 971--992. \bibitem{BV_Slicing} V. Beresnevich, S. Velani, \emph{Schmidt's theorem, Hausdorff measures, and slicing}, Int. Math. Res. Not. 2006, Art. ID 48794, 24 pp. \bibitem{BFR_2011} R. Broderick, L. Fishman, A. Reich, \emph{Intrinsic approximation on Cantor-like sets, a problem of Mahler}, Mosc. J. Comb. Number Theory 1 (2011), no. 4, 3--12. \bibitem{Cassels_1950} J. Cassels, \emph{Some metrical theorems in {D}iophantine approximation. {I}}, Proc. Cambridge Philos. Soc. 46 (1950), 209--218. \bibitem{Falconer} K.~Falconer, \newblock {\em Fractal geometry: Mathematical foundations and applications}. \newblock John Wiley \& Sons, Ltd., Chichester, third edition, 2014. \bibitem{Falconer_techniques} K. Falconer, \emph{Techniques in fractal geometry}, John Wiley \& Sons, Ltd., Chichester, 1997. xviii+256 pp. \bibitem{FS_2014} L. Fishman, D. Simmons, \emph{Intrinsic approximation for fractals defined by rational iterated function systems: Mahler's research suggestion}, Proc. Lond. Math. Soc. 3 (2014), no. 1, 189--212. \bibitem{FS_2015} L. Fishman, D. Simmons, \emph{Extrinsic Diophantine approximation on manifolds and fractals}, J. Math. Pures Appl. (9) 104 (2015), no. 1, 83--101. \bibitem{Gallagher_1962} P. Gallagher, \emph{Metric simultaneous Diophantine approximation}, J. London Math. Soc. 1 (1962), no. 1, 387--390. \bibitem{Hutchinson} J. Hutchinson, \emph{Fractals and self-similarity}, Indiana Univ. Math. J. 30 (1981), no. 5, 713--747. \bibitem{Jarnik_31} V. Jarn\'{i}k, \emph{\"{U}ber die simultanen diophantischen Approximationen}, Math. Z. 33 (1931), no. 1, 505--543. \bibitem{Khalil-Luethi} O. Khalil, M. L\"{u}thi, \emph{Random Walks, Spectral Gaps, and Khintchine's Theorem on Fractals}, preprint (2021), arXiv:2101.05797, 73 pages. \bibitem{Khintchine_24} A. Khintchine, {\em Einige S\"atze \"uber Kettenbr\"uche, mit Anwendungen auf die Theorie der Diophantischen Approximationen}, Math. Ann. 92 (1924), 115--125. \bibitem{Khintchine_25} A. Khintchine, \emph{\"{U}ber die angen\"{a}herte Aufl\"{o}sung linearer Gleichungen in ganzen Zahlen}, Rec. Math. Soc. Moscou, 32 (1925), 203--218. \bibitem{Weighted_Khintchine} A. Khintchine, \emph{Zur metrischen Theorie der diophantischen Approximationen}, Math. Z. 24 (1926), no. 1, 706--714. \bibitem{KLW_2005} D. Kleinbock, E. Lindenstrauss, B. Weiss, \emph{On fractal measures and Diophantine approximation}, Selecta Math. (N.S.) 10 (2005), no. 4, 479--523. \bibitem{KM_1998} D. Kleinbock, G. Margulis, \emph{Flows on homogeneous spaces and Diophantine approximation on manifolds}, Ann. of Math. (2) (1998), 339--360. \bibitem{Koivusalo-Rams} H. Koivusalo, M. Rams, \emph{Mass transference principle: from balls to arbitrary shapes}, Int. Math. Res. Not. 
IMRN 2021, no. 8, 6315--6330. \bibitem{Kristensen_2006} S. Kristensen, \emph{Approximating numbers with missing digits by algebraic numbers}, Proc. Edinb. Math. Soc. (2) (2006), no. 3, 657--666. \bibitem{LSV_2007} J. Levesley, C. Salp, S. Velani, \emph{On a problem of {K}. {M}ahler: {D}iophantine approximation and {C}antor sets}, Math. Ann. 338 (2007), no. 1, 97--118. \bibitem{Mahler_1984} K. Mahler, \emph{Some suggestions for further research}, Bull. Austral. Math. Soc. 29 (1984), 101--108. \bibitem{Mattila_1999} P. Mattila, \emph{Geometry of Sets and Measures in Euclidean Space: Fractals and rectifiability}, Cambridge University Press, 1999. \bibitem{PV_2005} A. Pollington, S. Velani, \emph{Metric Diophantine approximation and ``absolutely friendly'' measures}, Selecta Math. (N.S.) 11 (2005), no. 2, 297--307. \bibitem{Rynne_1998} B. Rynne, \emph{Hausdorff dimension and generalized simultaneous {D}iophantine approximation}, Bull. London Math. Soc. 30 (1998), no. 4, 365--376. \bibitem{Schmidt_1960} W. Schmidt, \emph{A metrical theorem in {D}iophantine approximation}, Canadian J. Math. 12 (1960), 619--631. \bibitem{Schmidt_1983} W. Schmidt, \emph{Open problems in Diophantine approximation}, Diophantine approximations and transcendental numbers ({L}uminy, 1982), Progr. Math. 31 (1983), 271--287. \bibitem{TWW_2021} B. Tan, B. Wang, J. Wu, \emph{Mahler's question for intrinsic Diophantine approximation on triadic Cantor set: the divergence theory}, preprint (2021), arXiv:2103.00544, 16 pages. \bibitem{VV_2006} R. Vaughan, S. Velani, \emph{Diophantine approximation on planar curves: the convergence theory}, Invent. Math. 166 (2006), no. 1, 103--124. \bibitem{WW2019} B. Wang, J. Wu, \emph{Mass transference principle from rectangles to rectangles in Diophantine approximation}, Math. Ann. 381 (2021), no. 1-2, 243--317. \bibitem{WWX2015} B. Wang, J. Wu, J. Xu, \emph{Mass transference principle for limsup sets generated by rectangles}, Math. Proc. Cambridge Philos. Soc. 158 (2015), no. 3, 419--437. \bibitem{WWX_Cantor} B. Wang, J. Wu, J. Xu, Jian, \emph{Dynamical covering problems on the triadic Cantor set}, C. R. Math. Acad. Sci. Paris 355 (2017), no. 7, 738--743. \bibitem{Weiss_2001} B. Weiss, \emph{Almost no points on a Cantor set are very well approximable}, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 457 (2001), no. 2008, 949--952. \bibitem{Yu_self-similar_2021} H. Yu, \emph{Rational points near self-similar sets}, preprint (2021), arXiv:2101.05910, 60 pages. \bibitem{Zhong2021} W. Zhong, \emph{Mass transference principle: from balls to arbitrary shapes: measure theory}, J. Math. Anal. Appl. 495 (2021), no. 1, Paper No. 124691, 23 pp. \end{thebibliography} \begin{minipage}{0.5\linewidth} \begin{flushleft} {\footnotesize D.~Allen \\ Coll. of Eng., Maths. and Phys. Sci. \\ University of Exeter \\ Harrison Building \\ North Park Road \\ Exeter \\ EX4 4QF, UK \\ \texttt{[email protected]} } \end{flushleft} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{flushleft} {\footnotesize B.~Ward \\ Department of Mathematics \\ University of York \\ Heslington \\ York \\ YO10 5DD, UK \\ \texttt{[email protected]} } \end{flushleft} \end{minipage} \end{document}
2205.07492v2
http://arxiv.org/abs/2205.07492v2
Moduli spaces of $\mathbb{Z}/k\mathbb{Z}$-constellations over $\mathbb{A}^2$
\DeclareSymbolFont{AMSb}{U}{msb}{m}{n} \documentclass[11pt,a4paper,leqno,noamsfonts]{amsart} \linespread{1.15} \makeatletter \renewcommand\@biblabel[1]{#1.} \makeatother \usepackage[english]{babel} \usepackage[dvipsnames]{xcolor} \usepackage{graphicx,pifont,soul,physics} \usepackage[utopia]{mathdesign} \usepackage[a4paper,top=3cm,bottom=3cm, left=3cm,right=3cm,marginparwidth=60pt]{geometry} \usepackage[utf8]{inputenc} \usepackage{braket,caption,comment,mathtools,stmaryrd,ytableau} \usepackage[usestackEOL]{stackengine} \usepackage{multirow,booktabs,microtype,relsize} \usepackage[colorlinks,bookmarks]{hyperref} \hypersetup{colorlinks, citecolor=britishracinggreen, filecolor=black, linkcolor=cobalt, urlcolor=cornellred} \setcounter{tocdepth}{1} \setcounter{section}{-1} \numberwithin{equation}{section} \usepackage[capitalise]{cleveref} \input{macros.tex} \usetikzlibrary{patterns} \DeclareMathAlphabet\BCal{OMS}{cmsy}{b}{n} \title{Moduli spaces of $\Z/k\Z$-constellations over $\A^2$} \author{Michele Graffeo} \begin{document} \maketitle \begin{abstract}Let $\rho:\Z/k \Z\rightarrow \SL(2,\C)$ be a representation of a finite abelian group and let $\Theta^{\gen}\subset \Hom_\Z(R(\Z/k\Z),\Q)$ be the space of generic stability conditions on the set of $G$-constellations. We provide a combinatorial description of all the chambers $C\subset\Theta^{\gen}$ and prove that there are $k!$ of them. Moreover, we introduce the notion of simple chamber and we show that, in order to know all toric $G$-constellations, it is enough to build all simple chambers. We also prove that there are $k\cdot 2^{k-2} $ simple chambers. Finally, we provide an explicit formula for the tautological bundles $\Calr_C$ over the moduli spaces $\Calm _C$ for all chambers $C\subset \Theta^{\gen}$ which only depends upon the chamber stair which is a combinatorial object attached to the chamber $C$. \end{abstract} \tableofcontents \section{Introduction}Given a Gorenstein singular variety $X$, a crepant resolution is a proper birational morphism $$Y\xrightarrow{\varepsilon}X$$ where $Y$ is smooth and the canonical bundle is preserved, i.e. $\omega_Y=\varepsilon^*\omega_X$. It was proven by Watanabe in \cite{WATANABE} that the singularities of the form $\A^n/G$, where $G\subset\SL(n,\C)$ is a finite subgroup, are Gorenstein. Their crepant resolutions appear in several fields of Algebraic Geometry and Mathematical Physics, for example see \cite{BRUZZO,ITOREID,REID} and the references therein. Even though, in general, crepant resolutions may not exist, their existence is guaranteed in dimension 2 and 3: see \cite{DUFREE} for dimension 2, and see Roan \cite{ROAN1,ROAN2}, Ito \cite{ITOCREP} and Markushevich \cite{MARKU} for dimension 3. In particular, the 3-dimensional case was solved by a case by case analysis, taking advantage of the fact that the conjugacy classes of finite subgroups of $\SL(3,\C)$ were listed, for example in \cite{YU}. More recently, in \cite{BKR}, Bridgeland, King and Reid proved in one shot that a resolution always exists in dimension 3. The resolution that they proposed is made in terms of $G$-clusters, i.e. $G$-equivariant zero-dimensional subschemes $Z$ of $\A^n$ such that $H^0(Z,\Calo_Z)\cong\C[G]$ as $G$-modules (\Cref{cluster}). In particular, in \cite{BKR} it was proved that there exists a crepant resolution $$G\mbox{-}\Hilb(\A^3)\rightarrow \A^3/G$$ where $G$-$\Hilb(\A^3)$ is the fine moduli space of $G$-clusters. 
Notice that this result had already been obtained for abelian actions by Nakamura in \cite{NAKAMURA}. In \cite{CRAWISHII} Craw and Ishii generalized the notion of $G$-cluster to that of $G$-constellation, i.e. a coherent $G$-sheaf $\Calf$ such that $H^0(\A^n,\Calf)\cong \C[G]$ as $G$-modules (\Cref{constellation}). Moreover, in the case of $G$ abelian the authors in \cite{CRAWISHII} introduced a notion of $\theta$-stability for $G$-constellations (\Cref{stability}), following the ideas in King \cite{KING}. They proved that, for any abelian subgroup $G\subset\SL(3,\C)$ and for any crepant resolution $Y\xrightarrow{\varepsilon} \A^3/G$ there exists at least a generic stability condition $\theta$ and an isomorphism $\Calm_\theta \xrightarrow{\varphi} Y $ such that the composition $\varepsilon\circ\varphi$ agrees with the restriction $$\Calm_\theta\rightarrow\A^3/G$$ of the Hilbert--Chow morphism, to the irreducible component $\Calm_\theta$ of the fine moduli space of $\theta$-stable $G$-constellations containing free orbits. Moreover, they conjectured that the same is true for any finite subgroup of $\SL(3,\C)$. Recently, this conjecture has been affirmatively solved by Yamagishi in \cite{prova2}. It turns out that the space of generic stability conditions $\Theta^{\gen}\subset \Theta$ is a disjoint union of connected components called chambers. Moreover, in each chamber $C$, the notion of stability is constant, i.e. for any $\theta,\theta'\in C$, a $G$-constellation is $\theta$-stable if and only if it is $\theta'$-stable. In this paper I will focus on the 2-dimensional abelian case, i.e. the case when $G\subset\SL(2,\C)$ is a finite abelian, and hence cyclic, subgroup. In the literature the singularity $\A^2/G$ is sometimes called the $A_{|G|-1}$ singularity. This case is particularly simple from the point of view of the resolution because we know, from classical surface theory, that there is a unique minimal crepant resolution. Therefore, all the moduli spaces $\Calm_\theta$ are isomorphic as quasi-projective varieties. As a consequence, in order to distinguish two chambers it is enough to study their universal families $\Calu_C \in\Ob\Coh(\Calm_C\times \A^2)$. The first main result in the paper is the following. \begin{customthm}{\ref{TEO1}} If $G\subset \SL(2,\C)$ is a finite abelian subgroup of cardinality $k=|G|$, then the space of generic stability conditions $\Theta^{\gen}$ is the disjoint union of $k!$ chambers. \end{customthm} The result in \Cref{TEO1} can be also recovered, via different arguments, from the theory developed by Kronheimer in \cite{KRON} (See also \cite[Chapter 3-\S 3]{CASSLO} for the algebraic interpretation), but the approach to the abelian case here is different and it helps to prove the other results. In order to prove \Cref{TEO1}, I will give an exhaustive combinatorial description of the toric points of the spaces $\Calm_\theta$ in terms of very classical combinatorial objects, namely skew Ferrers diagrams. Such diagrams are standard tools in many branches of mathematics, e.g. enumerative geometry, group theory, commutative algebra etc (for example \cite{BRIANCON,FULTREP,ANDREA}). Next, I will introduce the notion of simple chamber (\Cref{simplechamber}) and I will show that, for any indecomposable $G$-constellation $\Calf$, there exists at least a simple chamber $C$ such that $\Calf$ is $\theta$-stable for all $\theta\in C$. This property makes simple chambers useful, because knowing them is the same as knowing all the $ G $-constellations. 
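Purely as an arithmetical illustration of the two counts just mentioned (the $k!$ chambers of \Cref{TEO1} and the $k\cdot 2^{k-2}$ simple chambers of \Cref{teosimple} below), the following short Python sketch tabulates both quantities for a few small values of $k\ge 2$; it is not used anywhere in the paper.
\begin{verbatim}
# Illustrative sketch: tabulate the number of chambers (k!) and the number of
# simple chambers (k * 2^(k-2)) predicted by the two main theorems, k >= 2.
from math import factorial

for k in range(2, 8):
    print(f"k = {k}: k! = {factorial(k)}, k*2^(k-2) = {k * 2 ** (k - 2)}")
\end{verbatim}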
In order to define simple chambers, I will need to construct chamber stairs (\Cref{chamberstair}), combinatorial objects that I will use to encode all the data of a chamber $C$. The second theorem I prove is the following. \begin{customthm}{\ref{teosimple}}If $G\subset \SL(2,\C)$ is a finite abelian subgroup of cardinality $k=|G|$, then the space of generic stability conditions $\Theta^{\gen}$ contains $k\cdot 2^{k-2}$ simple chambers. \end{customthm} Finally, in \Cref{costruzione} I will give a commutative algebra construction that allows one to write an explicit formula for the tautological bundle $$\Calr_\theta\in\Ob\Coh(\Calm_\theta),$$ i.e. the pushforward of the universal family $\Calu_\theta \in\Ob\Coh(\Calm_\theta\times \A^2)$ via the first projection. This construction can be easily implemented using some software such as Macaulay2 \cite{M2}. Moreover, it provides a realization of all the moduli spaces $\Calm_\theta$ as a $G$-invariant subvariety of $\Quot_{\Calk_C}^{|G|}(\A^2)$ where $\Calk_C\in\Ob\Coh(\A^2)$ is an ideal sheaf dependent only upon the chamber $C$ such that $\theta\in C$ (see \Cref{quot}). This solves, in 2-dimensions, a problem related to the one raised by Nakamura in \cite[Problem 6.5.]{NAKAMURA} and it also implies that to give a chamber is equivalent to give its chamber stair (\Cref{chamberstair}). This paper gives some contributions to the solution of several open problems regarding the subject, and provides some techniques that seem to be applicable to more general situations, such as some non-abelian, even 3-dimensional, case for example following the ideas in \cite{NOLLA1,NOLLA2}. After providing, in the first section, some technical preliminaries and some known facts, I will devote the second section to a brief description of the singularity $ \A ^ 2 / G $ and to its minimal resolution. In the third section I will prove that the toric $ G $-constellations are completely described in terms of $ G $-stairs, which are certain diagrams whose definition I will give in \Cref{stair}. The following sections (4 and 5), are devoted to the proofs of the two main theorems, while in the last section I will give the above mentioned commutative algebra construction. \section*{Acknowledgments}I thank my advisor Ugo Bruzzo for his guidance and support. I also thank Mario De Marco, Massimo Gisonni and Felipe Luis López Reyes for very helpful discussions. Special thanks to Alastair Craw, Maria Luisa Graffeo and Andrea T. Ricolfi for giving me useful suggestions while writing the paper. \section{Preliminaries}\label{sec1} Given a finite group $G$ and a representation $\rho:G\rightarrow \GL(n,\C)$, we have an action of $G$ on the polynomial ring $\C[x_1,\ldots,x_n]$, given by \[ \begin{tikzcd}[row sep=tiny] G\times\C[x_1,\ldots,x_n] \arrow{r} & \C[x_1,\ldots,x_n]& \\ (g,p)\arrow[mapsto]{r} & p\circ \rho(g)^{-1} \end{tikzcd} \] where $p$ and $\rho(g)^{-1}$ are thought respectively as a polynomial and a linear function. Out of this, we can build the quotient singularity $$\A^n/G=\Spec\C[x_1,\ldots,x_n]^G $$ whose points parametrize the set-theoretic orbits of the action of $G$ on $\A^n$ induced by $\rho$. Given a representation $\rho:G\rightarrow\GL(n,\C)$, a \textit{$\rho$-equivariant sheaf} (or a \textit{$\rho$-sheaf} in the sense of \cite{BKR}) is a coherent sheaf $\Calf\in\Ob\Coh(\A^n)$ together with a lift to $\Calf$ of the $G$-action on $\A^n$ induced by $\rho$, i.e. 
for all $g\in G$ there are morphisms $\lambda_g^\Calf:\Calf\rightarrow\rho(g)^*\Calf$ such that: \begin{itemize} \item $\lambda_{1_G}^\Calf=\id_\Calf$, \item $\lambda_{hg}^\Calf=\rho(g)^*(\lambda^\Calf_h)\circ \lambda^\Calf_g$, \end{itemize} where $1_G$ is the unit of $G$. In particular, this induces a structure of $G$-representation on the vector space $H^0(\A^n,\Calf)$, as follows: \[ \begin{tikzcd}[row sep=tiny] G\times H^0(\A^n,\Calf) \arrow{r} & H^0(\A^n,\Calf) \\ (g,s)\arrow[mapsto]{r} & (\lambda_g^\Calf)^{-1}\circ \rho(g)^*(s). \end{tikzcd} \] Whenever the representation is an inclusion $G\subset \GL(n,\C)$ we will omit the representation and we will talk about a \textit{$G$-equivariant sheaf} (or a \textit{$G$-sheaf}). \begin{definition}\label{cluster} Let $G\subset \GL(n,\C)$ be a finite subgroup. A \textit{$G$-cluster} is a zero-dimensional subscheme $Z$ of $\A^n$ such that: \begin{itemize} \item the structure sheaf $\Calo_Z$ is $G$-equivariant, i.e. the ideal $I_Z$ is invariant with respect to the action of $G$ on $\C[x_1,\ldots,x_n]$, and \item if $\rho_{\reg}:G\rightarrow\GL(\C[G])$ is the regular representation, then there is an isomorphism of representations $$\varphi: H^0(Z,\Calo_Z)\rightarrow \C[G],$$ i.e. $\varphi$ is an isomorphism of vector spaces such that the following diagram \begin{center} \begin{tikzpicture} \node at (-2,1) {$G\times H^0(Z,\Calo_Z)$}; \node at (-2,-1) {$G\times \C[G]$}; \node at (1,1) {$ H^0(Z,\Calo_Z)$}; \node at (1,-1) {$\C[G]$}; \node[right] at (1,0) {\small$\varphi$}; \node[left] at (-2,0) {\small$\id_G\times\varphi$}; \draw[->] (1,0.7)--(1,-0.7); \draw[->] (-2,0.7)--(-2,-0.7); \draw[->] (-0.7,1)--(0.1,1); \draw[->] (-1.1,-1)--(0.5,-1); \end{tikzpicture} \end{center} where the horizontal arrows are the $G$-actions, commutes. \end{itemize} We will denote by $\Hilb^G(\A^n)$ the fine moduli space of $G$-clusters and by $G$-$\Hilb(\A^n)$ the irreducible component of $\Hilb^G(\A^n)$ containing the free $G$-orbits. \end{definition} Recall that, for all $n \ge 1$ and for every finite subgroup $G\subset \SL(n,\C)$, the singularities of the form $\A^n/G$ are Gorenstein (cf. \cite{WATANABE}). \begin{theorem}[{\cite[Theorem 1.2]{BKR}}]\label{BKR} Let $G\subset \SL(n,\C)$ be a finite subgroup where $n=2,3$. Then, the Hilbert-Chow morphism $$Y:=G\mbox{-}\Hilb(\A^n)\xrightarrow{\varepsilon}\A^n/G=:X$$ is a crepant resolution of singularities, i.e. $\omega_Y\cong \varepsilon^*\omega_X$. \end{theorem} \begin{remark} The Hilbert-Chow morphism $\varepsilon$ mentioned in \Cref{BKR} is a $G$-equivariant version of the usual Hilbert-Chow morphism $$\overline{\varepsilon}:\Hilb^{|G|}(\A^n)\rightarrow \sym^{|G|}(\A^n).$$ In particular $\varepsilon$ can be thought of as the restriction of $\overline{\varepsilon}$ to the $G$-invariant subvariety $G$-$\Hilb(\A^n)\subset \Hilb^{|G|}(\A^n)$. \end{remark} A natural generalization of the concept of a $G$-cluster is given in \cite{CRAWISHII}, and it is achieved by considering coherent $\Calo_{\A^n}$-modules which are not necessarily the structure sheaves of zero-dimensional subschemes of $\A^n$. \begin{definition}[{\cite[Definition 2.1]{CRAWISHII}}]\label{constellation} Let $G\subset \GL(n,\C)$ be a finite subgroup. A \textit{$G$-constellation} is a coherent $\Calo_{\A^n}$-module $\Calf$ on $\A^n$ such that: \begin{itemize} \item $\Calf$ is $G$-equivariant, i.e.
there is a fixed lift on $\Calf$ of the $G$-action on $\A^n$, and \item there is an isomorphism of representations $$\varphi: H^0(\A^n,\Calf)\rightarrow \C[G].$$ \end{itemize}\end{definition} \begin{remark} Since a $G$-constellation $\Calf$ is a coherent sheaf on the affine variety $\A^n$, by abuse of notation we will sometimes use the name $G$-constellation for the space of global sections $H^0(\A^n,\Calf)$ as well as for $\Calf$, and we will sometimes treat a $G$-constellation as if it were a $\C[x_1,\ldots,x_n]$-module, meaning that we are working with the space of its global sections. \end{remark} \begin{remark}\label{freunic} The $G$-equivariance hypothesis implies that the support of a $G$-constellation is a union of $G$-orbits. Moreover, for dimensional reasons, the only constellations supported on a free orbit $Z$ are isomorphic to the structure sheaf $\Calo_Z$. \end{remark} \begin{remark}\label{shur}Recall that (see, for example, \cite[Chapters 1 and 2]{FULTREP}), given a finite group $G$ and the set of isomorphism classes of its irreducible representations $$\Irr(G)=\{\mbox{Irreducible representations}\}/\mbox{iso},$$ there is a ring isomorphism $$\Psi:R(G)\xrightarrow{\sim}\underset{\rho\in\Irr(G)}{\bigoplus} \Z\rho, $$ where $(R(G),\oplus)$ is the Grothendieck group of isomorphism classes of representations of $G$, and the ring structure (on both sides) is induced by the tensor product $\otimes$ of representations. Moreover $\Irr(G)=\{\rho_1,\ldots,\rho_s\}$ is finite, and we have the correspondence: \[ \begin{tikzcd}[row sep=tiny] R(G) \arrow{r}{\Psi} & \underset{i=1}{\overset{s}{\bigoplus}}\Z\rho_i \\ \C[G]\arrow[mapsto]{r} & (\dim \rho_1,\ldots,\dim \rho_s). \end{tikzcd} \] \end{remark} Following the ideas in \cite{KING}, the above-mentioned properties allow one to introduce a notion of stability on the set of $G$-constellations. Given a finite subgroup $G\subset \SL(n,\C)$ (where $n=2,3$), the \textit{space of stability conditions} for $G$-constellations is \[ \Theta=\Set{ \theta\in\Hom_\Z(R(G),\Q)|\theta(\C[G])=0}. \] \begin{definition}\label{stability} Let $\theta\in\Theta$ be a stability condition. A $G$-constellation $\Calf$ is said to be \textit{$\theta$-(semi)stable} if, for any proper $G$-equivariant subsheaf $0\subsetneq \Cale\subsetneq \Calf$, we have $$\theta(H^0(\A^n,\Cale)) \underset{(\ge)}{>}0.$$ A stability condition $\theta$ is \textit{generic} if the notion of $\theta$-semistability is equivalent to the notion of $\theta$-stability. Finally, we denote by $\Theta^{\gen}\subset\Theta$ the subset of generic stability conditions. \end{definition} \begin{definition} A $G$-constellation $\Calf$ is \textit{indecomposable} if it cannot be written as a direct sum $$\Calf=\Cale_1\oplus \Cale_2,$$ where $\Cale_1,\Cale_2$ are proper $G$-subsheaves, and it is \textit{decomposable} otherwise. \end{definition} \begin{remark} If we think of a $ G $-constellation as its space of global sections, a $G$-constellation $F=H^0(\A^n,\Calf)$ is indecomposable if it cannot be written as a direct sum $$F=E_1\oplus E_2,$$ where $E_1,E_2$ are proper $G$-equivariant $\C[x_1,\ldots,x_n]$-submodules. \end{remark} \begin{remark} If $\Calf$ is decomposable, then it is not $\theta$-stable for any stability condition $\theta\in\Theta$. Since, for our purposes, we are only interested in indecomposable $G$-constellations, from now on, unless otherwise specified, a $G$-constellation will always be assumed to be indecomposable.
\end{remark} \begin{remark}\label{frestable} If $Z\subset\A^n$ is a free orbit, then $\Calo_Z$ does not admit any proper $G$-subsheaf. Therefore, it is $\theta$-stable for all $\theta\in\Theta$. \end{remark} \begin{definition} Let $\theta\in\Theta^{\gen}$ be a generic stability condition. We denote by $\Calm_\theta$ the (fine) moduli space of $\theta$-stable $G$-constellations. \end{definition} The theorem below brings together results from \cite{CRAWISHII,BKR,prova2}. \begin{theorem}\label{CRAWthm} The following results are true for $n=2,3$. \begin{itemize} \item The subset $\Theta^{\gen}\subset\Theta$ of generic parameters is open and dense. It is the disjoint union of finitely many open convex polyhedral cones in $\Theta$ called \textit{chambers}. \item For generic $\theta\in\Theta^{\gen}$, the moduli space $\Calm_\theta$ exists and it depends only upon the chamber $C\subset\Theta^{\gen}$ containing $\theta$, so we write $\Calm_C$ in place of $\Calm_\theta$ for any $\theta\in C$. Moreover, the Hilbert--Chow morphism $\varepsilon\colon \Calm_C\rightarrow \A^n/G$, which associates to each $G$-constellation $\Calf$ its support $\Supp(\Calf)$, is a crepant resolution. \item(Craw--Ishii Theorem \cite{CRAWISHII}) Given a finite abelian subgroup $G\subset\SL(n,\C)$, suppose $Y\xrightarrow{\varepsilon} \A^n/G$ is a projective crepant resolution. Then $Y \cong \Calm_C$ for some chamber $C\subset\Theta$ and $\varepsilon=\varepsilon_C$ is the Hilbert-Chow morphism. \item(Yamagishi Theorem \cite{prova2}) Given a finite subgroup $G\subset\SL(n,\C)$, suppose $Y\xrightarrow{\varepsilon} \A^n/G$ is a projective crepant resolution. Then $Y \cong \Calm_C$ for some chamber $C\subset\Theta$. \item There exists a chamber $C_G\subset \Theta^{\gen}$ such that $\Calm_{C_G}=G$-$\Hilb(\A^n)$. \end{itemize} \end{theorem} We will adopt the same notation as \cite{CRAWISHII} for the universal family of $C$-stable $G$-constellations, namely $\Calu_C\in\Ob\Coh(\Calm_C\times\A^n)$, and for the tautological bundle $\Calr_C:={(\pi_{\Calm_C})}_*\Calu_C$. \begin{remark} The hypotheses of \Cref{CRAWthm}, together with \Cref{freunic,frestable} and the third point of \Cref{CRAWthm} itself, imply that if we denote by $U_C=\Calm_C\smallsetminus\exc(\varepsilon_C)$ the complement of the exceptional locus of the Hilbert--Chow morphism, then, for any two chambers $C,C'\subset\Theta^{\gen}$, there is a canonical isomorphism of families over $\A^n/G$-schemes \begin{center} \begin{tikzpicture} \node at (0,0) {${\Calu_C}_{|_{U_{C}\times\A^n}}$}; \node at (0,-1) {$U_C\times\A^n$}; \draw (0,-0.2)--(0,-0.8); \node at (3,0) {${\Calu_{C'}}_{|_{U_{C'}\times\A^n}}$}; \node at (3,-1) {$U_{C'}\times\A^n$,}; \draw (3,-0.2)--(3,-0.8); \node at (1.5,-0.5) {$\cong$}; \end{tikzpicture} \end{center} i.e. there exists a unique isomorphism $\varphi:U_C\rightarrow U_{C'}$ such that the diagram \[ \begin{tikzcd} U_C \arrow{rr}{\varphi} \arrow[swap]{dr}{{\varepsilon}_C} & & U_{C'}\arrow{dl}{\varepsilon_{C'}}\\ & \A^n/G & \end{tikzcd} \] commutes and ${\Calu_C}_{|_{U_{C}\times\A^n}}\cong (\varphi\times \id_{\A^n})^*{\Calu_{C'}}_{|_{U_{C'}\times\A^n}}$. In particular, any $U_C$ parametrizes the free orbits of the $G$-action, just as the complement of the singular locus of $\A^n/G$ does. \end{remark} \section{The two-dimensional abelian case} In this section we introduce some notation that we will use throughout the rest of the paper. Moreover, we give a very brief description of the singularities $ A_{|G|-1} $ and of their respective resolutions.
Throughout all the section, we fix a finite abelian subgroup $ G \subset \SL (n, \C) $. \subsection{The action of $G$}\label{section2.1} Whenever $G\subset \SL(n,\C)$ is a finite abelian subgroup, it is well known that its irreducible representations are 1-dimensional and that the group $G$ and the set $\Irr(G)$ are in bijection. Moreover, the map $\Psi$ in \Cref{shur} is such that \[ \begin{tikzcd}[row sep=tiny] R(G) \arrow{r}{\Psi} & \underset{\rho\in\Irr(G)}{\bigoplus}\Z\rho\\ \C[G]\arrow[mapsto]{r} & (1,\ldots,1). \end{tikzcd} \] In particular, in dimension 2, it is well known that all finite abelian subgroups $G\subset \SL(2,\C)$ are cyclic. Moreover, for any $k\ge1$, there is only one conjugacy class of abelian subgroups of $\SL(2,\C)$ isomorphic to $\Z/k\Z$. In what follows we will choose, as representative of such conjugacy class, \begin{equation}\label{Zkaction} \Z/k\Z\cong G=\left< g_k=\begin{pmatrix} \xi_k^{-1}&0\\0&\xi_k \end{pmatrix} \right>\subset\SL(2,\C), \end{equation} where $\xi_k$ is a (fixed) primitive $k$-th root of unity. We adopt the following notation for the irreducible representations of $G$: \[ \Irr(G)=\Set{\begin{matrix}\begin{tikzpicture} \node at (-0.9,0) {$\rho_i:$}; \node at (0,0) {$\Z/k\Z$}; \node at (1.5,0) {$\C^*$}; \node at (0,-0.5) {$g_k$}; \node at (1.5,-0.5) {$\xi_k^i$}; \draw[->] (0.5,0)--(1.2,0); \draw[|->] (0.3,-0.5)--(1.2,-0.5); \end{tikzpicture}\end{matrix} |i=0,\ldots,k-1}. \] Sometimes, we will identify $\Irr(G)$ with the set $\{0,\ldots,k-1\}$ according to the bijection $\rho_j\mapsto j$. Notice that, one may also identify $(\Irr(G),\otimes)$ with the abelian group $(\Z/k\Z,+)$, but in what follows we will mostly deal with $\Irr(G)$ as a set of indices, hence we will ignore the natural group structure on it. \subsection{The quotient singularity $\A^2/G$ and its resolution}\label{section22} The singularity obtained in this case is the so-called $A_{k-1}$ singularity, i.e. $$A_{k-1}:=\A^2/G.$$ This is a rational double point. It is well known that it has a unique minimal, in fact crepant, resolution $Y\xrightarrow{\varepsilon}A_{k-1}$ whose exceptional divisor is a chain of $k-1$ smooth $(-2)$-rational projective curves. As a consequence of \Cref{CRAWthm} and of the uniqueness of the minimal model of a surface, for any chamber $C$, there is an isomorphism of varieties $\varphi_C:\Calm_C\xrightarrow{\sim} Y$ such that the diagram \[ \begin{tikzcd} \Calm_C \arrow{rr}{\varphi_C} \arrow[swap]{dr}{{\varepsilon}_C} & & Y\arrow{dl}{\varepsilon}\\ & A_{k-1}& \end{tikzcd} \] commutes. What changes between two different chambers $C,C'$ is that they have different universal families $\Calu_C,\Calu_{C'}\in\Ob\Coh(Y\times\A^2)$. \section{Toric \texorpdfstring{$G$}{}-constellations} This section is devoted to the study of toric $G$-constellations, i.e. those $G$-constellations which, in addition to being $ G $-sheaves, are also $ \T^2 $-sheaves. As it usually happens when dealing with $ \T^2 $-modules, we will see that the $ \C [x, y] $-module structure of a toric $G$-constellation is fully described in terms of combinatorial objects, namely the skew Ferrers diagrams. This way of proceeding in the description of a $ \T^2 $-module is not new, and it is actually adopted very often in the literature; for example in the study of monomial ideals (see \cite{BRIANCON}) or, more generally, in the study of $ \T^2 $-modules of finite length (see \cite{ANDREA}). 
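Before turning to toric $G$-constellations, we include a small, purely illustrative computation (not needed in what follows) which makes the action \eqref{Zkaction} and the quotient $A_{k-1}=\Spec\C[x,y]^G$ of \Cref{section22} concrete. The Python sketch below, for a hypothetical value of $k$, lists the $G$-invariant monomials of low degree, namely the monomials $x^ay^b$ with $a\equiv b \pmod k$, and checks that each of them is a product of powers of the three invariants $x^k$, $y^k$ and $xy$; this reflects the classical description of the invariant ring of the $A_{k-1}$ singularity, and we use it here only as a sanity check.
\begin{verbatim}
# Illustrative sketch (hypothetical k): G-invariant monomials x^a y^b are
# exactly those with a = b (mod k), and each of them factors through the
# invariants x^k, y^k and x*y.
k = 4          # hypothetical order of the cyclic group G
max_deg = 10   # degree bound for the search

def is_invariant(a, b):
    # g_k rescales x^a y^b by xi_k^(a - b), so invariance means a = b (mod k)
    return (a - b) % k == 0

def factors_through_generators(a, b):
    # try to write x^a y^b as (x*y)^m * (x^k)^p * (y^k)^q with m, p, q >= 0
    m = min(a, b)
    a, b = a - m, b - m
    return a % k == 0 and b % k == 0

invariants = [(a, b) for a in range(max_deg + 1) for b in range(max_deg + 1)
              if 0 < a + b <= max_deg and is_invariant(a, b)]
assert all(factors_through_generators(a, b) for (a, b) in invariants)
print("invariant monomials of degree at most", max_deg, ":", invariants)
\end{verbatim}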
Although many statements can be generalized to higher dimension, from now on we will focus on the 2-dimensional case. \subsection{The torus action} Recall that $\A^2$ is a toric variety via the standard torus action: \begin{equation}\label{eq:sttorusaction} \begin{tikzcd}[row sep=tiny] \mathbb{T}^2\times\A^2 \arrow{r} & \A^2 \\ ((\sigma_1,\sigma_2),(x,y))\arrow[mapsto]{r} & (\sigma_1\cdot x ,\sigma_2\cdot y). \end{tikzcd} \end{equation} Notice that, under our assumptions, $G$ is a finite subgroup of the torus $\T^2$. Hence, the action of $\T^2$ commutes with the action of the finite abelian (diagonal) subgroup $G\subset\T^2$. This implies that, given a $\theta$-stable $G$-constellation $\Calf$ and an element $\sigma\in \mathbb T^2 $, the pullback $\sigma^*\Calf$ is a $\theta$-stable $G$-constellation. Indeed, $\sigma^*$ induces an isomorphism between the global sections of $\sigma^*\Calf$ and $\Calf$ and hence, $\dim H^0(\A^2,\sigma^*\Calf)=k$. Moreover, $\sigma^*\Calf$ is still a $G$-sheaf if we define, for all $ g\in G$, the morphisms $\lambda^{\sigma^*\Calf}_g:\sigma^*\Calf\rightarrow g^*\sigma^*\Calf$ as $$\lambda^{\sigma^*\Calf}_g=\sigma^*\lambda^{\Calf}_g.$$ Such morphisms are well defined because $\sigma^*$ and $g^*$ commute, i.e. $g^*\sigma^*\Calf\cong\sigma^*g^*\Calf$ for all $ (g,\sigma)\in G\times\T^2$. Finally, we have to check that $\sigma^*\Calf$ is $\theta$-stable. This follows from the fact that both the groups $G\subset \T^2$ act diagonally and, as a consequence, if $\Cale\subset \Calf$ is a proper $G$-subsheaf and $$H^0(\A^2,\Cale)=\underset{j=1}{\overset{r}{\bigoplus}}\rho_{i_j}$$ as representations, then $\sigma^*\Cale\subset\sigma^*\Calf$ is a proper $G$-subsheaf and $$H^0(\A^2,\sigma^*\Cale)=\underset{j=1}{\overset{r}{\bigoplus}}\rho_{i_j}$$as representations. \begin{definition} As explained above, the torus $\T^2$ acts on $\Calm_C$, for any chamber $C$. We say that a (indecomposable) $G$-constellation $\Calf$ is \textit{toric} if it corresponds to a torus fixed point. \end{definition} \begin{remark}\label{toric torus} A $G$-constellation $\Calf$ is toric if and only if it admits a structure of $\T^2$-sheaf. Indeed, if $\Calf$ is a torus fixed point one possible $\T^2$-structure is obtained from the following associations \[ \begin{tikzcd}[row sep=tiny] \T^2\times H^0(\A^2,\Calf) \arrow{r} & H^0(\A^2,\Calf)& \\ (\sigma,p)\arrow[mapsto]{r} & p\circ \sigma^{-1}, \end{tikzcd} \] for $\sigma\in\T^2$ acting on $\A^2$ as in \eqref{eq:sttorusaction}. We stress that the $\T^2$-equivariant structure on $\Calf$ is not unique. Indeed any such structure can be twisted by characters of $\mathbb T^2$. \end{remark} \begin{definition} We say that a $G$-constellation $\Calf$ is \textit{nilpotent} if the endomorphisms $x\cdot $ and $y\cdot$ of the $\C[x,y]$-module $H^0(\A^2,\Calf)$ are nilpotent. \end{definition} \begin{remark}\label{suppnilp} A $G$-constellation $\Calf$ is supported at the origin $0\in\A^2$ if and only if it is nilpotent. This follows from the relation between the annihilator of a $\C[x,y]$-module and the support of the sheaf associated to it (see \cite[Section 2.2]{EISENBUD}). Moreover, \Cref{CRAWthm} implies that nilpotent $C$-stable $G$-constellations correspond to points of the exceptional locus of the crepant resolution $\Calm_C$. \end{remark} \begin{remark}\label{modrep} Given a $G$-constellation $F=H^0(\A^2,\Calf)$, we can compare its structures of $G$-representation and of $\C[x,y]$-module. 
Looking at the induced action of $G$ on $\C[x,y]$, it turns out that, if $s\in\rho_i$ via the isomorphism $F\cong\C[G]$, then $$x\cdot s\in\rho_{i+1}$$ and $$y\cdot s\in\rho_{i-1}.$$ \end{remark} \begin{prop}\label{xyfazero} If $F=H^0(\A^2,\Calf)$ is a nilpotent $G$-constellation, then the endomorphism $xy\cdot$ is the zero endomorphism. \end{prop} \begin{proof} The $G$-constellation $F$ is a $k$-dimensional $\C$-vector space. Let us pick a basis $$\{v_0,\ldots,v_{k-1}\}$$ of $F$ such that, for all $i=0,\ldots,k-1$, $v_i\in\rho_i$ under the isomorphism $F\cong\C[G]$. As in \Cref{modrep}, for all $i=0,\ldots,k-1$, we have $$x\cdot v_i\in\rho_{i+1}$$ and $$y\cdot v_i\in\rho_{i-1},$$ where the indices are thought modulo $k$. In other words, $$x\cdot v_i\in\Span(v_{i+1})\mbox{ and }y\cdot v_i\in\Span(v_{i-1}).$$ Therefore, we get $$xy\cdot v_i\in\Span(v_i),\quad \forall i=0,\ldots,k-1,$$ i.e. $$xy\cdot v_i=\alpha_iv_i,\mbox{ with }\alpha_i\in\C,\quad \forall i=0,\ldots,k-1.$$ Now, the nilpotency hypothesis implies that $\alpha_i=0$ for all $i=0,\ldots,k-1$. \end{proof} \begin{remark}\label{toricisnilp} If a $G$-constellation $F=H^0(\A^2,\Calf)$ is toric, then it is also nilpotent. Indeed, following the same logic as in the proof of \Cref{xyfazero}, we have $$x^k\cdot v_i=\alpha_i v_i,\mbox{ with }\alpha_i\in\C, \quad\forall i=0,\ldots,k-1,$$ but torus equivariance implies $\alpha_i =0$ for all $i=0,\ldots,k-1$. \end{remark} \subsection{Skew Ferrers diagrams and $G$-stairs}\label{section 32} The advantage of working with toric $G$-constellations is that their spaces of global sections can be described in terms of monomial ideals whose data are encoded by combinatorial objects. We can associate, to each element of the natural plane $\N^2$, two labels: a monomial and an irreducible representation. We achieve this by saying that \textit{a polynomial $p\in\C[x,y]$ belongs to an irreducible representation $\rho_i$} if $$\forall g\in G,\quad g \cdot p=\rho_i(g)p, $$ i.e. $p$ is an eigenfunction for the linear map $g\cdot$ with the complex number $\rho_i(g)$ as eigenvalue. In particular, with the notation in \Cref{section2.1}, the monomial $x^iy^j$ belongs to the irreducible representation $\rho_{i-j}$ of the abelian group $G$, where the index is thought modulo $k$.
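Before packaging this association into the representation tableau below, we record a small computational rendering of it; the sketch is ours and purely illustrative, the function name \texttt{rep\_label} and the sample values are hypothetical, and it simply restates \Cref{modrep}: moving one box to the right raises the index of the representation by one, moving one box up lowers it by one, always modulo $k$.
\begin{verbatim}
# A minimal rendering of the labelling just described (ours, purely illustrative):
# the box (i, j) carries the monomial x^i y^j and the representation rho_{(i-j) mod k};
# one step to the right raises the index by 1 (mod k), one step up lowers it by 1.
def rep_label(i, j, k):
    """Index t such that the monomial x^i y^j belongs to rho_t."""
    return (i - j) % k

k, size = 3, 7                     # hypothetical sample values
for i in range(size):
    for j in range(size):
        t = rep_label(i, j, k)
        assert rep_label(i + 1, j, k) == (t + 1) % k    # multiplication by x
        assert rep_label(i, j + 1, k) == (t - 1) % k    # multiplication by y
\end{verbatim}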
According to this association, we can define the \textit{representation tableau $\Calt_G$} as \[\Calt_G=\Set{(i,j,t)\in\N^2\times \Irr(G)|i-j\equiv t\ (\mod k\ )}\subset \N^2\times \Irr(G).\] \begin{figure}[H] \begin{tikzpicture} \node[left] at (0,5) {\small$\N$}; \node[right] at (6,0) {\small$\N$}; \draw[<->] (0,5)--(0,0)--(6,0); \node at (5.5,2) {$\cdots$}; \node at (2.5,4.5) {$\vdots$}; \Quadrant{5}{3}{4} \newcommand{\y}{4} \draw[dashed] (3,0.5)--(5,0.5); \draw[dashed] (3,1.5)--(5,1.5); \draw[dashed] (0,0.5)--(2,0.5); \draw[dashed] (0,1.5)--(2,1.5); \draw[dashed] (3,3.5)--(5,3.5); \draw[dashed] (0,3.5)--(2,3.5); \node at (0.5,0.2) {\tiny$0$}; \node at (1.5,0.2) {\tiny$1$}; \node at (2.5,0.5) {\small$\cdots$}; \node at (0.5,0.7) {\tiny$1$}; \node at (1.5,0.7) {\tiny$x$}; \node at (4.5,0.7) {\tiny$x^k$}; \node at (3.5,0.7) {\tiny$x^{k-1}$}; \node at (3.5,1.7) {\tiny $x^{k-1}y$}; \node at (4.5,0.2) {\tiny$0$}; \node at (3.5,0.2) {\tiny$k-1$}; \node at (1.5,1.7) {\tiny$xy$}; \node at (4.5,1.7) {\tiny$x^ky$}; \node at (0.5,1.7) {\tiny$y$}; \node at (0.5,1.2) {\tiny$k-1$}; \node at (1.5,1.2) {\tiny$0$}; \node at (2.5,1.5) {\small$\cdots$}; \node at (3.5,1.2) {\tiny$k-2$}; \node at (4.5,1.2) {\tiny$k-1$}; \node at (0.5,3.7) {\tiny$y^{k-1}$}; \node at (1.5,3.7) {\tiny$xy^{k-1}$}; \node at (4.5,3.7) {\tiny$x^ky^{k-1}$}; \node at (3.5,3.7) {\tiny$(xy)^{k-1}$}; \node at (0.5,2.5) {\small$\vdots$}; \node at (1.5,2.5) {\small$\vdots$}; \node at (2.5,2.5) {\small$\cdots$}; \node at (3.5,2.5) {\small$\vdots$}; \node at (4.5,2.5) {\small$\vdots$}; \node at (0.5,3.2) {\tiny$1$}; \node at (1.5,3.2) {\tiny$2$}; \node at (2.5,3.5) {\small$\cdots$}; \node at (3.5,3.2) {\tiny$0$}; \node at (4.5,3.2) {\tiny$1$}; \node[below] at (0.5,0) {\small 0}; \node[below] at (1.5,0) {\small 1}; \node[below] at (2.5,0) {\small $\cdots$}; \node[below] at (3.5,0) {\small $k-1$}; \node[below] at (4.5,0) {\small $k$}; \node[below] at (5.5,0) {\small $k+1$}; \node[left] at (0,0.5) {\small 0}; \node[left] at (0,1.5) {\small 1}; \node[left] at (0,2.5) {\small $\vdots$}; \node[left] at (0,3.5) {\small $k-1$}; \node[left] at (0,4.5) {\small $k$}; \end{tikzpicture} \caption{The representation tableau $\Calt_G$.} \label{Tableau} \end{figure} Notice that the labeling with the representation is superfluous because the first projection $$\pi_{\N^2}:\Calt_G\rightarrow \N^2 $$ is a bijection. In any case, this notation is useful to keep in mind that we are dealing with the representation structure as well as with the module structure. In summary, the representation tableau has the property that \begin{center} \textit{moving to the right ``increases" the irreducible representation by 1 $ \pmod k $}\\ \textit{moving up ``decreases" the irreducible representation by 1 $ \pmod k $.} \end{center} \begin{definition} A \textit{Ferrers diagram (Fd)} is a subset $A$ of the natural plane $\N^2$ such that $$(\N^2\smallsetminus A)+\N^2\subset (\N^2\smallsetminus A)$$ i.e. there exist $s\ge 0$ and $t_0\ge \cdots\ge t_s\ge 0$ such that \[ A=\Set{(i,j)|i=0,\ldots,s \mbox{ and }\ j=0,\ldots,t_i}. \] \end{definition} \begin{remark} In the literature there is some ambiguity about the name to be given to such diagrams. Indeed, sometimes, they are also called Young tableaux and, by Ferrers diagrams, something else is meant (for some different notations, see for example \cite{FULTREP,ANDREWS}). In any case, we will adopt the notation in \cite{FEDOU}. \end{remark} Pictorially, we see $s$ consecutive columns of weakly decreasing heights. 
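Since $\N^2$ is generated by $(1,0)$ and $(0,1)$, the condition on the complement in the definition above only needs to be tested one step at a time, and this gives a quick mechanical check of the equivalence with the description by column heights. The following sketch is ours and purely illustrative; the function names and the sample heights are hypothetical.
\begin{verbatim}
# Two equivalent descriptions of a (finite) Ferrers diagram, checked mechanically:
# the complement of A is stable under adding elements of N^2 exactly when A is
# closed under moving one box to the left or one box down, i.e. when A is the set
# of boxes below weakly decreasing column heights t_0 >= ... >= t_s.
def from_heights(heights):
    """Boxes {(i, j) : 0 <= i <= s, 0 <= j <= t_i} with column heights t_0, ..., t_s."""
    return {(i, j) for i, t in enumerate(heights) for j in range(t + 1)}

def is_fd(boxes):
    """One-step version of the condition (N^2 minus A) + N^2 inside N^2 minus A."""
    return all((i == 0 or (i - 1, j) in boxes) and (j == 0 or (i, j - 1) in boxes)
               for (i, j) in boxes)

A = from_heights([3, 2, 2, 0])          # a hypothetical example with s = 3
assert is_fd(A) and len(A) == 4 + 3 + 3 + 1
assert not is_fd({(0, 0), (1, 1)})      # two boxes meeting only at a corner fail
\end{verbatim}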
An example is depicted in \Cref{figure2}. \begin{figure}[H]\scalebox{0.5}{ \begin{tikzpicture} \node[left] at (0,5) {\small$\N$}; \node[right] at (6,0) {\small$\N$}; \draw[<->] (0,5)--(0,0)--(6,0); \draw[-] (0,4)--(1,4)--(1,3)--(3,3)--(3,1)--(4,1)--(4,0); \draw (0,3)--(1,3) --(1,0); \draw (0,2)--(3,2); \draw (2,3)--(2,0); \draw (0,1)--(3,1)--(3,0); \end{tikzpicture}} \caption{An example of a Fd where $s=3$, $t_0=3$, $t_1=2$, $t_2=2$, $t_3=0$.} \label{figure2} \end{figure} \begin{remark}\label{sFdmodulestructre} We briefly recall that, starting from a Ferrers diagram $A$, we can build a torus-invariant zero-dimensional subscheme $Z$ of $\A^2$. Indeed, if $B=\N^2\smallsetminus A$ is the complement of $A$, then \[I_Z=\left(\Set{x^{b_1}y^{b_2}|(b_1,b_2)\in B}\right) \] is the ideal of the above-mentioned subscheme $Z\subset \A^2$. In particular, the $\C[x,y]$-module structure of $H^0(\A^2,\Calo_Z)=\C[x,y]/I_Z$ is encoded in the Fd, by saying that a box, labeled by the monomial $m\in\C[x,y]$, corresponds to the one-dimensional vector subspace of $H^0(\A^2,\Calo_Z)$ generated by $m$, and \begin{center} \textit{moving to the right in the Fd is the multiplication by $x$}\\ \textit{moving up in the Fd is the multiplication by $y$.} \end{center} \end{remark} \begin{definition} Let $\Gamma\subset \N^2$ be a subset of the natural plane. We will say that $\Gamma$ is a \textit{skew Ferrers diagram (sFd)} if there exist two Ferrers diagrams $\Gamma_1,\Gamma_2\subset\N^2$ such that $\Gamma=\Gamma_1\smallsetminus\Gamma_2$. Moreover, we will say that a sFd $\Gamma$ is \textit{connected} if, for any decomposition $$\Gamma=\Gamma'\cup\Gamma''$$ into two non-empty disjoint subsets, there exist a box in $\Gamma'$ and a box in $\Gamma''$ which share an edge. \end{definition} \begin{lemma}\label{lemmatec} Let $A_1,A_2\subset \N^2$ be two Ferrers diagrams and let $\Gamma\subset \N^2$ be the skew Ferrers diagram $\Gamma=A_1\smallsetminus A_2$. Consider, for $i=1,2$, the ideals \[ I_{A_i}=\left( \Set{x^{b_1}y^{b_2}\in\C[x,y]|(b_1,b_2)\in \N^2\smallsetminus A_i}\right). \] Then, the isomorphism class of the torus equivariant $\C[x,y]$-module $$M_\Gamma=I_{A_2}/I_{A_2}\cap I_{A_1}=I_{A_2}/I_{A_2\cup A_1}$$ is independent of the choice of $A_1,A_2$. Equivalently, for any other choice of $A_1',A_2'\subset \N^2$ such that $\Gamma=A_1'\smallsetminus A_2'$, the torus equivariant $\C[x,y]$-modules $M_\Gamma$ and $I_{A_2'}/I_{A_2'\cup A_1'}$ are isomorphic. \end{lemma} \begin{proof} The fact that $M_\Gamma$ does not depend on the decomposition $\Gamma=A_1\smallsetminus A_2$ follows by noticing that, if we pick another decomposition $\Gamma=A_1'\smallsetminus A_2'$, then the isomorphism of $\C$-vector spaces $$I_{A_2}/I_{A_2}\cap I_{A_1}\rightarrow I_{A_2'}/I_{A_2'}\cap I_{A_1'},$$ which sends the class $x^\alpha y^\beta+ I_{A_2}\cap I_{A_1}$ to the class $x^\alpha y^\beta+ I_{A_2'}\cap I_{A_1'}$, is an isomorphism of $\C[x,y]$-modules. \end{proof} Now, instead of focusing just on subsets of the natural plane $\N^2$, we introduce more structure by looking at subsets of the representation tableau. In some instances, we will need to work with abstract sFd's obtained by forgetting the monomials. \begin{definition} A \textit{$G$-sFd} is a subset $A\subset\Calt_G$ of the representation tableau whose image $\pi_{\N^2}(A)$, under the first projection $$\pi_{\N^2}:\Calt_G\rightarrow\N^2,$$ is a sFd.
An \textit{abstract $G$-sFd} is a diagram $\Gamma$ made of boxes labeled by the irreducible representations of $G$ that can be embedded into the representation tableau as a $G$-sFd. \end{definition} \begin{example} Consider the $\Z/3\Z$-action on $ \A^2$ defined in \eqref{Zkaction}. In \Cref{figure3} are shown an abstract $G$-sFd and two of its possible realizations as $G$-sFd. \begin{figure}[H] \begin{tikzpicture}[scale=0.8] \draw (0,2)--(0,4)-- (1,4)--(1,3)--(0,3); \draw (2,1)--(2,3)--(1,3)--(1,2)--(2,2); \draw (1,1)--(2,1)--(2,0)--(1,0)--(1,2)--(0,2)--(0,1)--(1,1); \node at (1.5,0.5) {1}; \node at (1.5,1.5) {0}; \node at (1.5,2.5) {2}; \node at (0.5,1.5) {2}; \node at (0.5,2.5) {1}; \node at (0.5,3.5) {0}; \end{tikzpicture} \begin{tikzpicture}[scale=0.8] \node at (-2,0) {$\ $}; \node at (4,0) {$\ $}; \draw (0,2)--(0,4)-- (1,4)--(1,3)--(0,3); \draw (2,1)--(2,3)--(1,3)--(1,2)--(2,2); \draw (1,1)--(2,1)--(2,0)--(1,0)--(1,2)--(0,2)--(0,1)--(1,1); \draw[dashed] (1,0.5)--(2,0.5); \draw[dashed] (0,1.5)--(2,1.5); \draw[dashed] (0,2.5)--(2,2.5); \draw[dashed] (0,3.5)--(1,3.5); \node at (1.5,0.2) {\tiny$1$}; \node at (1.5,1.2) {\tiny$0$}; \node at (1.5,2.2) {\tiny$2$}; \node at (0.5,2.2) {\tiny$1$}; \node at (0.5,3.2) {\tiny$0$}; \node at (0.5,1.2) {\tiny$2$}; \node at (0.5,1.7) {\tiny$y$}; \node at (1.5,1.7) {\tiny$xy$}; \node at (1.5,2.7) {\tiny$xy^2$}; \node at (0.5,2.7) {\tiny$y^2$}; \node at (0.5,3.7) {\tiny$y^3$}; \node at (1.5,0.7) {\tiny$x$}; \end{tikzpicture} \begin{tikzpicture}[scale=0.8] \draw (0,2)--(0,4)-- (1,4)--(1,3)--(0,3); \draw (2,1)--(2,3)--(1,3)--(1,2)--(2,2); \draw (1,1)--(2,1)--(2,0)--(1,0)--(1,2)--(0,2)--(0,1)--(1,1); \draw[dashed] (1,0.5)--(2,0.5); \draw[dashed] (0,1.5)--(2,1.5); \draw[dashed] (0,2.5)--(2,2.5); \draw[dashed] (0,3.5)--(1,3.5); \node at (1.5,0.2) {\tiny$1$}; \node at (1.5,1.2) {\tiny$0$}; \node at (1.5,2.2) {\tiny$2$}; \node at (0.5,2.2) {\tiny$1$}; \node at (0.5,3.2) {\tiny$0$}; \node at (0.5,1.2) {\tiny$2$}; \node at (0.5,1.7) {\tiny$x^4y^2$}; \node at (1.5,1.7) {\tiny$x^5y^2$}; \node at (1.5,2.7) {\tiny$x^5y^3$}; \node at (0.5,2.7) {\tiny$x^4y^3$}; \node at (0.5,3.7) {\tiny$x^4y^4$}; \node at (1.5,0.7) {\tiny$x^5y$}; \end{tikzpicture} \caption{An abstract $\Z/3\Z$-sFd and two of its possible realizations as $\Z/3\Z$-sFd.} \label{figure3} \end{figure} On the other hand, the diagram in \Cref{figure4} is not an abstract $G$-sFd. \begin{figure}[H] \begin{tikzpicture}[scale=0.6] \draw (1,0)--(1,2)--(0,2)--(0,0)--(2,0)--(2,1)--(0,1); \node at (0.5,0.5) {0}; \node at (1.5,0.5) {2}; \node at (0.5,1.5) {2}; \end{tikzpicture} \caption{\ } \label{figure4} \end{figure} \end{example} \begin{remark} Given any subset $\Xi$ of the representation tableau and any monomial $x^\alpha y^\beta$ we will denote by $x^\alpha y^\beta\cdot \Xi$ the subset of the representation tableau obtained by translating $\Xi$ $\alpha$ steps to the right and $\beta$ steps up. Notice that this is compatible with the association $\N^2\leftrightarrow\{\mbox{monomials in two variables}\}$ as explained in \Cref{sFdmodulestructre}. \end{remark} \begin{lemma}\label{toric basis} If $\Calf$ is a torus equivariant $G$-constellation then there exists a basis $\{v_0,\ldots,v_{k-1}\}$ of $F=H^0(\A^2,\Calf)$ such that \begin{enumerate} \item for all $i=0,\ldots,k-1$, we have $v_i\in\rho_i$, \item for all $i=0,\ldots,k-1$, the sections $v_i$ are semi-invariant functions with respect some character $\chi_i$ of $\T^2$, i.e. 
$(a,b)\cdot v_i=\chi_i(a,b) v_i$ for all $(a,b)\in\T^2$, \item for all $i=0,\ldots,k-1$, $$\begin{cases} x\cdot v_i\in \{v_{i+1},0\},\\ y\cdot v_i\in\{ v_{i-1},0\}. \end{cases}$$ \end{enumerate} \end{lemma} \begin{proof} We can always pick a basis $\{\widetilde{v}_0,\ldots,\widetilde{v}_{k-1}\}$ which satisfies \textit{(1)} and \textit{(2)}. Moreover, it follows from \Cref{modrep} that: $$\begin{cases} x\cdot \widetilde{v}_i\in\Span(\widetilde{v}_{i+1}),\\ y\cdot \widetilde{v}_i\in\Span(\widetilde{v}_{i-1}), \end{cases}$$ where the indices are thought modulo $k$. The fact that $\Calf$ is toric implies that there are no \virg cycles", i.e. there are no $1<s<k$ and \[ \Set{(i_j,k_j,h_j,\sigma_j)\in \Irr(G)\times \N^2\times \C^*| \begin{array}{c} j=1,\ldots,s,\\ i_j\not=i_{j'}\mbox{ for }j\not=j',\\ k_j+h_{j+1}>0 \end{array}} \] where the indices are thought modulo $s$, such that \begin{equation}\label{cycle}\begin{cases} (x\cdot)^{k_{1}}\widetilde{v}_{i_1}&=\sigma_1(y\cdot)^{h_{2}}\widetilde{v}_{i_2},\\ (x\cdot)^{k_{2}}\widetilde{v}_{i_2}&=\sigma_2(y\cdot)^{h_{3}}\widetilde{v}_{i_3},\\ &\vdots\\ (x\cdot)^{k_{{s-1}}}\widetilde{v}_{i_{s-1}}&=\sigma_{s-1}(y\cdot)^{h_{s}}\widetilde{v}_{i_s},\\ (x\cdot)^{k_{s}}\widetilde{v}_{i_{s}}&=\sigma_s(y\cdot)^{h_{1}}\widetilde{v}_{i_1}. \end{cases} \end{equation} Indeed, $x$ and $y$ are semi-invariant functions with respect to the characters \[ \begin{tikzcd}[row sep=tiny] \T^2 \arrow{r}{\lambda_x} & \C^* \\ (a,b)\arrow[mapsto]{r} & a \end{tikzcd} \] and\[ \begin{tikzcd}[row sep=tiny] \T^2 \arrow{r}{\lambda_y} & \C^* \\ (a,b)\arrow[mapsto]{r} & b \end{tikzcd} \] of the torus $\T^2$. Then, if we act on both sides of the Equations \eqref{cycle} with some $(a,b)\in \T^2$, we get: \begin{equation}\label{cycle1} \begin{cases} \lambda_x(a,b)^{k_{1}}\chi_{i_1}(a,b)(x\cdot)^{k_{1}}\widetilde{v}_{i_1}=\sigma_1\lambda_y(a,b)^{h_{2}}\chi_{i_2}(a,b)(y\cdot)^{h_{2}}\widetilde{v}_{i_2},\\ \lambda_x(a,b)^{k_{2}}\chi_{i_2}(a,b)(x\cdot)^{k_{2}}\widetilde{v}_{i_2}=\sigma_2\lambda_y(a,b)^{h_{3}}\chi_{i_3}(a,b)(y\cdot)^{h_{3}}\widetilde{v}_{i_3},\\ \quad\qquad \quad\quad\quad\quad\quad\quad \quad\vdots\\ \lambda_x(a,b)^{k_{{s-1}}}\chi_{i_{s-1}}(a,b)(x\cdot)^{k_{{s-1}}}\widetilde{v}_{i_{s-1}}=\sigma_{s-1}\lambda_y(a,b)^{h_{s}}\chi_{i_s}(a,b)(y\cdot)^{h_{s}}\widetilde{v}_{i_s},\\ \lambda_x(a,b)^{k_{{s}}}\chi_{i_s}(a,b)(x\cdot)^{k_{{s}}}\widetilde{v}_{i_{s}}=\sigma_s\lambda_y(a,b)^{h_{1}}\chi_{i_1}(a,b)(y\cdot)^{h_{1}}\widetilde{v}_{i_1},\\ \end{cases} \end{equation} Now, the System \eqref{cycle1} is equivalent to: $$\begin{cases} a^{k_{1}}\chi_{i_1}(a,b)=b^{h_{2}}\chi_{i_2}(a,b),\\ a^{k_{2}}\chi_{i_2}(a,b)=b^{h_{3}}\chi_{i_3}(a,b),\\ \quad \quad\quad\quad\quad\ \ \ \vdots\\ a^{k_{{s-1}}}\chi_{i_{s-1}}(a,b)=b^{h_{s}}\chi_{i_s}(a,b),\\ a^{k_{{s}}}\chi_{i_s}(a,b)=b^{h_{1}}\chi_{i_1}(a,b),\\ \end{cases}$$ which is equivalent to \begin{equation}\label{cycle2} a^{{k_1}+\cdots+{k_s}}=b^{h_{1}+\cdots+h_{s}}\quad \forall (a,b)\in\T^2. \end{equation} Finally, the only solution of \Cref{cycle2} is $${{k_1}=\cdots={k_s}}={h_{1}=\cdots=h_{s}}=0,$$ which contradicts the hypothesis $k_i+h_{i+1}>0$ for all $ i=1,\ldots,s$. We are now ready to build the requested basis. Let $\{w_1,\ldots,w_\ell\}\subset\{\widetilde{v}_0,\ldots,\widetilde{v}_{k-1}\}$ be a minimal set of generators of the $\C[x,y]$-module $F$, i.e. the set \[ \Set{w_j+\mm\cdot F\in F/\mm\cdot F | j=1,\ldots, \ell } \] is a basis of the $\C$-vector space $F/\mm\cdot F$. 
Let us also denote by $F_j$, for $j=1,\ldots,\ell$, the submodule generated by $w_j$. We start by taking, for all $j=1,\ldots,\ell$, as basis of $F_j$ the set \[B_j=\Set{x^\alpha y^\beta w_j|\alpha\cdot\beta=0}. \] The problem is that in general the union of all $ B_j$'s is not a basis of $F$ because there can be some relations $x^\alpha w_i=\mu y^\beta w_j$ for $i\not=j$ and $\mu\in\C^*\smallsetminus 1 $. The fact that there are no cycles implies that we can re-scale all the elements in each $B_j$ obtaining new $\overline{B}_j$ so that $\underset{j}{\bigcup}\overline{B}_j$ is a basis of $F$ that verifies properties \textit{(1)}, \textit{(2)}, \textit{(3)}. \end{proof} \begin{prop}\label{const-sFd} Given a, possibly decomposable, torus equivariant $G$-constellation $F=H^0(\A^2,\Calf)$, there is (at least) one $G$-sFd whose associated $\C[x,y]$-module is a $G$-constellation isomorphic to $F$. \end{prop} \begin{remark}\label{periodn} If we find one $G$-sFd with the required property, then there are infinitely many of them. Indeed, a special property of the representation tableau is that translations enjoy some periodicity properties. Let $\Gamma$ be a $G$-sFd, then: \begin{enumerate} \item multiplication by $x$ has period $k$, i.e there is an isomorphism of $\C[x,y]$-modules $$M_\Gamma\xrightarrow{\sim}M_{x^k\cdot\Gamma}$$ which induces an isomorphism of representations between $M_\Gamma$ and $M_{x^k\cdot\Gamma}$; \item multiplication by $y$ has period $k$, i.e there is an isomorphism of $\C[x,y]$-modules $$M_\Gamma\xrightarrow{\sim}M_{y^k\cdot\Gamma}$$ which induces an isomorphism of representations between $M_\Gamma$ and $M_{y^k\cdot\Gamma}$; \item multiplication by $xy$ is an isomorphism, i.e there is an isomorphism of $\C[x,y]$-modules $$M_\Gamma\xrightarrow{\sim}M_{xy\cdot\Gamma}$$ which induces an isomorphism of representations between $M_\Gamma$ and $M_{xy\cdot\Gamma}$. \end{enumerate} In particular, all these $G$-sFd's correspond to the same abstract $G$-sFd. \end{remark} \begin{proof}( \textit{of \Cref{const-sFd}} ). Let $\{v_0,\ldots,v_{k-1}\}$ be a $\C$-basis of $F$ with the properties listed in \Cref{toric basis}, and let $\set{w_j=v_{i_j}|j=1,\ldots,s}$ be a minimal set of generators of $F$ as a $\C[x,y]$-module (see the proof of \Cref{toric basis}). Denote by $F_j$, for $j=1,\ldots,s$, the $\C[x,y]$-submodule of $F$ generated by $w_j$. We can represent each $F_j$ by using diagrams of the form shown in \Cref{diagfja}, \begin{figure}[ht] {\scalebox{0.7}{\begin{tikzpicture} \draw (0,2)--(0,0)--(3,0)--(3,1)--(1,1)--(1,3)--(0,3)--(0,2); \draw (0,4)--(0,5)--(1,5)--(1,4)--(0,4)--(0,5); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (0,2)--(1,2); \draw (2,0)--(2,1); \draw (0,1)--(1,1)--(1,0); \node at (0.5,0.5) {$w_j$}; \node at (1.5,0.5) {$v_{i_j+1}$}; \node at (2.5,0.5) {$v_{i_j+2}$}; \node at (3.5,0.5) {$\cdots$}; \node at (4.5,0.5) {$v_{i_j+k_j}$}; \node at (0.5,1.5) {$v_{i_j-1}$}; \node at (0.5,2.5) {$v_{i_j-2}$}; \node at (0.5,3.6) {$\vdots$}; \node at (0.5,4.5) {$v_{i_j-h_j}$}; \end{tikzpicture}}} \caption{\ } \label{diagfja} \end{figure} where the integers $k_j$ and $h_j$ are defined by \[k_j=\max\Set{\alpha|(x\cdot)^\alpha w_j\not=0}\] and \[h_j=\max\Set{\alpha|(y\cdot)^\alpha w_j\not=0},\] and they are well defined because any toric $G$-constellation is nilpotent by \Cref{toricisnilp}. The $\C[x,y]$-module structure of $F_j$ is encoded in the fact that the multiplication by $ x $ (resp. 
$ y $) sends the generator of a box (i.e., the generator of the corresponding vector space) to the generator of the box on the right (resp. above). If there is no box on the right (resp. above) this means that the multiplication by $x$ (resp. $y$) is zero. Now, we have to glue these diagrams to form the required $G$-sFd. We glue them along boxes with the same labels. First, notice that, if, for some $j\not=j'$ and $r,t\ge 1$, we have $(x\cdot)^rw_j=(x\cdot)^tw_{j'}$, i.e. $i_j+r=i_{j'}+t$ modulo $k$, then $$(x\cdot)^rw_j=(x\cdot)^tw_{j'}=0.$$ Indeed, if $r<t$ (the case $r\ge t$ is analogous) then, a representation argument (see \Cref{xyfazero}) tells us that $w_j=(x\cdot)^{t-r}w_{j'}$ which, whenever $(x\cdot)^rw_i\not=0$, contradicts the minimality of the generating set $\{ w_1,\ldots,w_s\}$. Analogously, if, for some $j\not=j'$ and $r,t\ge 1$, we have $(y\cdot)^rw_j=(y\cdot)^tw_{j'}$, then $(y\cdot)^rw_j=0$. Now we show that, if, for some $j\not=j'$ and $r,t\ge 1$, we have $(x\cdot)^rw_j=(y\cdot)^tw_{j'}$, then $r=k_j$ and $t=h_{j'}$. Suppose, by contradiction, that there exists $1\le r<k_{j}$ such that $(x\cdot)^rw_j=(y\cdot)^{t}w_{j'}$ (the case $1\le t<h_{j'}$ is similar). In particular, the minimality assumption implies $t\ge 1$. Since $r<k_j$, by definition of $k_j$, we have $(x\cdot)^{r+1}w_i\not= 0$. Therefore, we get $$0\not=(x\cdot)^{r+1} w_j=x\cdot((x\cdot)^{r} w_j)= x\cdot y^{t}\cdot w_{j'}= (x y)\cdot y^{t-1}\cdot w_{j'}=0 $$ which gives a contradiction. We show now that there are no ``cycles". Explicitly, suppose that, up to reordering the $v_i's$, and consequently the $w_i's$, we have already glued $\ell $ diagrams of the form depicted in \cref{diagfja} to a diagram of the form shown in \Cref{diagrammone}. \begin{figure}[ht]\scalebox{0.7}{ \begin{tikzpicture}[scale=0.8] \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (-1.5,3.5) {$w_{1}$}; \node at (1.5,0.5) {$w_{{2}}$}; \node at (5.5,-3.5) {$w_{{\ell-1}}$}; \node at (8.5,-6.5) {$w_{{\ell}}$}; \node at (13.2,-4.5) {$x^{k_{{\ell}}}w_{\ell}$}; \node[right] at (2,6.5) {$y^{h_{{1}}}w_{1}$}; \node at (4.5,5) {${y^{h_{{2}}}w_{2} }{=}{ x^{k_{{1}}}w_{1} }$}; \node at (10.5,-1.5) {${{ y^{h_{{\ell}}}w_{\ell} }{=}}{ x^{k_{{\ell-1}}}w_{{\ell-1}}}$}; \draw[->] (4.5,4.8) to [out = 270 ,in =0 ] (1.5,3.5); \draw[->] (10.2,-1.7) to [out = 270 ,in =0 ] (8.5,-3.5); \draw[->] (13.2,-4.7) to [out = 270 ,in =0 ] (11.5,-6.5); \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \draw[<-] (-1.5,6.5)--(2,6.5); \end{tikzpicture}} \caption{\ } \label{diagrammone} \end{figure} Then, we want to show that there is no gluing 
$(x\cdot)^{k_{\ell}}w_{\ell}=\sigma(y\cdot)^{h_{1}}w_{1}$ for some $\sigma\in\C^*$, i.e. no gluing of the first and the last boxes of the above diagram. The presence of this cycle would translate into the following system of equalities $$\begin{cases} (x\cdot)^{k_{1}}w_{1}&=(y\cdot)^{h_{2}}w_{2},\\ (x\cdot)^{k_{2}}w_{2}&=(y\cdot)^{h_{3}}w_{3},\\ &\vdots\\ (x\cdot)^{k_{{\ell-1}}}w_{{\ell-1}}&=(y\cdot)^{h_{\ell}}w_{\ell},\\ (x\cdot)^{k_{\ell}}w_{{\ell}}&=\sigma(y\cdot)^{h_{1}}w_{1}, \end{cases}$$ which cannot be verified by any toric $G$-constellation as explained in the proof of \Cref{toric basis}. So far we have proven that each connected component of the required $G$-sFd have the shape depicted in \Cref{shape}. \begin{figure}[ht]\scalebox{0.4}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}} \caption{\ } \label{shape} \end{figure} Moreover, if we forget about the reordering, each box contains a label $v_i$ whose index increases by one when moving to the right or downward in the diagram. Since we have chosen $ v_i\in\rho_i $ for $i=0,\ldots,k-1$, this diagram fits in the representation tableau (see \Cref{section 32}), i.e. it is an abstract $G$-sFd. After performing all possible gluings, we obtain a number of abstract $G$-sFd's $A_1,\ldots,A_m$ whose shape is drawn in \Cref{shape}. The last thing to do is to show that we can realize $A_1,\ldots,A_m$ as subsets $\Gamma_1,\ldots,\Gamma_m$ of the representation tableau to get a $G$-sFd, i.e. in such a way that $$\pi_{\N^2}\left(\underset{i=1}{\overset{m}{\bigcup}}\Gamma_i\right)$$ is a sFd. This can be done in many ways and we explain one possible way to proceed. We start by realizing $A_1,\ldots,A_m$ as disjoint $G$-sFd's $\Gamma_1,\ldots,\Gamma_m$. This can always be done because, as we observed, $A_1,\ldots,A_m$ are abstract $G$-sFd's and, from any choice of realizations $\widetilde{\Gamma}_1,\ldots,\widetilde{\Gamma}_m$ of them as non-necessarily disjoint $G$-sFd's, we can obtain disjoint $\Gamma_1,\ldots,\Gamma_m$ by performing the translations described in \Cref{periodn}. 
At this point, we have $m$ disjoint $G$-sFd's as described in \Cref{tris}, \begin{figure}[ht]\scalebox{1}{ \begin{tikzpicture} \node at (-4,0) {\scalebox{0.2}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}}}; \node at (0,0) {\scalebox{0.2}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}}}; \node at (6,0) {\scalebox{0.2}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}}}; \node at (3,0) {$\cdots$}; \draw[<-] (1.3,-1.2) to [out=90,in=210] (1.6,-0.6); \draw[<-] (-2.7,-1.2) to [out=70,in=210] (-2.4,-0.6); \node[right] at (1.5,-0.5) {\small$x^{\alpha_2}y^{\beta_2}$}; \node[right] at (-2.5,-0.5) 
{\small$x^{\alpha_1}y^{\beta_1}$}; \draw[->] (-0.4,1.3) to [out=135,in=0] (-1.1,1.3); \node[right] at (-0.5,1.3) {\small$x^{\gamma_2}y^{\delta_2}$}; \draw[->] (5.7,1.3) to [out=135,in=0] (4.9,1.3); \node[right] at (5.6,1.3) {\small$x^{\gamma_m}y^{\delta_m}$}; \node at (-4,-2) {$\Gamma_1$}; \node at (0,-2) {$\Gamma_2$}; \node at (6,-2) {$\Gamma_m$}; \end{tikzpicture}} \caption{\ } \label{tris} \end{figure} where just the labels of the boxes we are interested in are shown. The problem is that, in general, the union $\underset{i=1}{\overset{m}{\bigcup}}\Gamma_i$ is not a $G$-sFd, i.e. $\pi_{\N^2}\left(\underset{i=1}{\overset{m}{\bigcup}}\Gamma_i\right)$ is not a sFd. In order to solve this problem, we have to perform some translations, and a possible choice of $G$-sFd is $$\Gamma=\underset{i=1}{\overset{m}{\bigcup}}\overline{\Gamma}_i,$$ where $$\overline{\Gamma}_i=x^{k\ssum{j=1}{i-1}\alpha_j}y^{k\ssum{j=1+i}{m}\delta_j}\cdot\Gamma_i\quad\mbox{for }i=1,\ldots,m.$$ The proof that $\Gamma$ is a $G$-sFd is now an easy check. \end{proof} As a byproduct of the proof, we also get that any $G$-sFd associated to a toric $G$-constellation has a particular shape. \begin{definition}\label{stair} We say that a a connected $G$-sFd $\Gamma$ is a \textit{stair} if $$(m,n)\in\pi_{\N^2}(\Gamma) \Rightarrow (m+1,n+1),(m-1,n-1)\notin \pi_{\N^2}(\Gamma).$$ Moreover, \begin{itemize} \item a \textit{$G$-stair} is a stair made of $k$ boxes, \item an \textit{abstract ($G$-)stair} is an abstract $G$-sFd whose realization in the representation tableau is a ($G$-)stair, \item given a stair $\Gamma$, the \textit{(anti)generators} of $\Gamma$ are the boxes positioned in the (top) lower corners of $\Gamma$ (see \Cref{genantigen}), \item a substair is any (possibly not connected) subset of a stair. 
\end{itemize} \begin{figure}[H]\scalebox{0.9}{ \begin{tikzpicture} \node at (-4,0) {\scalebox{0.5}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}}}; \draw[<-] (-0.7,-2.8) to [out=90,in=210] (0,-1.2); \draw[<-] (-2,-1.3) to [out=70,in=150] (0,-1.1); \draw[<-] (-5.3,1.8) to [out=0,in=130] (0,-1); \draw[<-] (-6.8,3.2) to [out=-10,in=110] (0,-0.9); \node[right] at (0,-1) {antigenerators}; \node[right] at (-13,0) {generators}; \draw[->] (-10.9,-0.2) to [out=-30,in=180] (-2.6,-3.2); \draw[->] (-10.9,-0.1) to [out=-20,in=180] (-4.1,-1.7); \draw[->] (-10.9,0) to [out=5,in=180] (-6.1,0.3); \draw[->] (-10.9,0.1) to [out=70,in=180] (-7.6,1.7); \end{tikzpicture}} \caption{Generators and antigenerators of a stair.} \label{genantigen} \end{figure} \end{definition} \begin{remark}\label{orderbox} If $\Calf$ is any torus equivariant $G$-constellation, and $\Gamma_\Calf$ is any $G$-sFd associated to $\Calf$, then $\Gamma_\Calf$ is connected, i.e. it is a $G$-stair, if and only if $\Calf$ is indecomposable, i.e. if it is toric. In this case we will refer to the upper left box as the first box and we will refer to the lower right box as the last box. In this a way, we provide of a total order the boxes of a $G$-stair and, consequently, we provide of a total order also the irreducible representations of $G$. \end{remark} \begin{remark} The set of generators of a stair $\Gamma$ corresponds to a minimal set of generators of the $\C[x,y]$-module $M_\Gamma$ associated to $\Gamma$, i.e. $m_1,\ldots,m_s\in M_{\Gamma}$ such that \[\Set{m_i+\mm\cdot M_\Gamma\in M_\Gamma/\mm\cdot M_\Gamma|i=1,\ldots,s} \] is a $\C$-basis of $M_\Gamma/\mm\cdot M_\Gamma$. Antigenerators correspond to one dimensional $\C[x,y]$-submodules of $M_\Gamma$, i.e. they form a $\C$-basis of the so-called socle $$(0:_{M_\Gamma}\mm)=\Set{m\in M_\Gamma|\mm\cdot m=0\in M_\Gamma }.$$ Since each irreducible representation of $G$ appears once in a $G$-stair $L$, sometimes, with abuse of notation, we will say that an irreducible representation is a (anti)generator for $L$. \end{remark} \begin{definition} Given a connected $G$-sFd $\Gamma$, we denote respectively by $\mathfrak{h}(\Gamma)$ and $\mathfrak{w}(\Gamma)$ the \textit{height} and the \textit{width} of $\Gamma$, i.e. the height and the width of the smallest rectangle in $\N^2$ containing $\pi_{\N^2}(\Gamma)$. Moreover, the \textit{height} and the \textit{width}, $\mathfrak{h}(\Calf)$ and $\mathfrak{w}(\Calf)$, of a toric $G$-constellation $\Calf$ are respectively the height and the width of any $G$-stair which represents $\Calf$. 
\end{definition} \section{The chamber decomposition of \texorpdfstring{$\Theta$}{} and the moduli spaces \texorpdfstring{$\Calm_C$}{}} This section is devoted to the proof of the first main result (\Cref{TEO1}). In the first part of the section we analyze the toric points of $\Calm_C$ and the corresponding $G$-constellations. Then, we show how to construct 1-dimensional families of nilpotent $G$-constellations. Finally, in the last part, we give the proof of the first main result. \subsection{The crepant resolution $\Calm_C$ and its toric points}\label{toric resolution} As noticed in \Cref{section22}, the crepant resolution $\Calm_C\xrightarrow{\varepsilon_C}\A^2/G $ does not depend on the chamber $C$, i.e. for all $C,C'\in\Theta^{\gen}$ different chambers, there exists a canonical isomorphism $\varphi:\Calm_C\xrightarrow{\sim}\Calm_C'$ such that the diagram \[ \begin{tikzcd} \Calm_C \arrow{rr}{\varphi} \arrow[swap]{dr}{{\varepsilon}_C} & & \Calm_{C'}\arrow{dl}{\varepsilon_{C'}}\\ & \A^2/G& \end{tikzcd} \] commutes. The varieties $\A^2$, $\A^2/G$ and $\Calm_C$ are toric (see for example \cite[Chapter 10]{COX} or \cite[Chapter 2]{FULTORI}) and we can rewrite the diagram \[ \begin{tikzcd} &\A^2 \arrow{d}{\pi}\\ \Calm_C\arrow{r}{\varepsilon_C} &\A^2/G \end{tikzcd} \] in terms of fans as follows: \begin{center} \begin{tikzpicture} \node at (0.9,0) {\begin{tikzpicture}[scale=0.8] \draw[-] (1,0)--(0,0)--(3,-2); \draw[-] (0,1)--(0,0)--(2,-1); \node at (0.7,-1) {$\Calm_C$}; \node at (2.2,-1.2) {$\ddots$}; \node[above] at (0,1) {\tiny$(0,1)$}; \node[right] at (3,-2) {\tiny$(k,-k+1)$}; \node[right] at (2,-1) {\tiny$(2,-1)$}; \node[right] at (1,0) {\tiny$(1,0)$}; \end{tikzpicture}}; \draw[->] (2,0)--(4,0); \node at (6,0) { \begin{tikzpicture}[scale=0.8] \draw[-] (0,1)--(0,0)--(3,-2); \node at (0.65,-1) {$\A^2/G$}; \node[above] at (0,1) {\tiny$(0,1)$}; \node[right] at (3,-2) {{\tiny$(k,-k+1)$}.}; \end{tikzpicture}}; \node at (5.5,3.5) { \begin{tikzpicture} \draw[-] (0,1)--(0,0)--(1,0); \node at (1,1) {$\A^2$}; \node[right] at (1,0) {\tiny$(1,0)$}; \node[above] at (0,1) {\tiny$(0,1)$}; \end{tikzpicture}}; \node[above] at (3,0) {\small$\varepsilon_C$}; \draw[<-] (5.5,1.5)--(5.5,2.5); \node[right] at (5.5,2) {\small$\pi$}; \end{tikzpicture} \end{center} In particular, $\Calm_C$ is covered by the $k$ toric charts $U_j\cong\A^2$, for $j=1,\ldots,k$, associated to the maximal cones of the fan for $\Calm_C$ showed above. Let us identify $\A^2/G$ with the subvariety of $\A^3$ \[ \A^2/G=\Set{(\alpha,\beta,\gamma)\in\A^3|\alpha\beta-\gamma^k=0}, \] and let us put (toric) coordinates $a_j,c_j$ on each $U_j$ for $j=1,\ldots,k$. Then, we can encode the diagram above into the following $k$ diagrams \begin{center} \begin{tikzpicture} \node at (0,0) {$U_j$}; \node at (3.5,0) {$\A^2/G$}; \node at (3.5,1) {$\A^2$}; \draw[->] (0.3,0)--(3,0); \draw[|->] (0.6,-0.5)--(1.7,-0.5); \draw[->] (3.5,0.8)--(3.5,0.3); \draw[|->] (5,0.8)--(5,0.3); \node[right] at (3.5,0.6) {\tiny$\pi$}; \node[above] at (1.5,0) {\tiny$\varepsilon_j$}; \node at (0,-0.5) {\scriptsize$(a_j,c_j)$}; \node at (3.5,-0.5) {\scriptsize{$(a_j^{k-j+1}c_j^{k-j},a_j^{j-1}c_j^{j},a_jc_j)$}}; \node at (5,0) {\scriptsize{$(x^k,y^k,xy)$}}; \node at (5,1) {\scriptsize{$(x,y)$}}; \end{tikzpicture} \end{center} for $j=1,\ldots,k$. In this way, we obtain some relations between the coordinates $x,y$ on $\A^2$ and the coordinates $a_j,c_j$ on $U_j$, namely \begin{equation}\label{toricrelations} \begin{array}{c} a_j=x^{j}y^{j-k},\\ c_j=x^{1-j}y^{k-j+1}. 
\end{array} \end{equation} Formally, these are relations between regular functions $x,y,a_j,c_j\in \C[a_j,c_j]\underset{\C[x,y]^G}{\otimes}\C[x,y]$ defined on $U_j\underset{\A^2/G}{\times}\A^2= \Spec\left(\left(\C[a_j,c_j]\underset{\C[x,y]^G}{\otimes}\C[x,y]\right)_{\red}\right) $. \begin{remark}\label{orderconst} The toric points of $\Calm_C$ are the origins of the charts $U_j$ and they correspond to the toric $C$-stable $G$-constellations. Indeed, the torus $\T^2/G$ acts on $\Calm_C$ making it into a toric variety, as described at the beginning of this section, and this toric action coincides with the action \[ \begin{tikzcd}[row sep=tiny] \T^2\times \Calm_C \arrow{r} & \Calm_C \\ (\sigma ,[\Calf])\arrow[mapsto]{r} & {[}\sigma^*\Calf{]} . \end{tikzcd} \] This is a consequence of the universal property of $\Calm _C$. Notice that, outside the exceptional locus of $\Calm_C$, i.e. on the open subset of free orbits, a direct computation is enough to show that the two actions agree. Hence we have a total order on the toric $G$-constellations over $\Calm_C$, in the sense that the first toric $G$-constellation is the $G$-constellation over the origin of $U_1$, the second one is the $G$-constellation over the origin of $U_2$, and so on. \end{remark} \begin{remark}\label{defomations} Let $\Gamma$ be a $G$-stair. Then there exists a unique $\sigma\in\Irr(G)$ such that $$y\cdot \sigma =0 \mbox{ and } x\cdot \sigma\otimes\rho_{-1}=0$$ in $\Gamma$. In particular, the representation $\sigma$ corresponds to the first box of $\Gamma$. This representation is important because, if we want to deform in a non-trivial way the $G$-constellation $\Calf_\Gamma$ associated to $\Gamma$ keeping the property of being nilpotent, there are only two ways to do it, namely to modify the $\C[x,y]$-module structure of $\Calf_\Gamma$ by imposing $$y\cdot\sigma=\lambda \cdot \sigma\otimes\rho_{-1},\quad \lambda\in\C^*$$ or $$x\cdot\sigma\otimes\rho_{-1}=\mu \cdot \sigma,\quad \mu\in\C^*.$$ Indeed, if $y\cdot \sigma=\lambda\cdot \sigma\otimes\rho_{-1}$ is not zero, then the nilpotency hypothesis implies $$x\cdot \sigma\otimes\rho_{-1}=\frac{1}{\lambda}xy\cdot \sigma=0,$$ and the other case is similar. Comparing this with the proof of \Cref{toric basis} one can show that letting $\lambda$ (resp. $\mu$) varying in $\C^*$ all the $G$-constellations so obtained are not isomorphic to each other (as $G$-constellations). In particular $\lambda,\mu$ are coordinates on a chart of $\Calm_C$ around $\Calf_\Gamma$. \end{remark} As a consequence of the above remark, we obtain the following lemma. \begin{lemma}\label{hwchart} If $\Calf_j$ is the toric $G$-constellation over the origin of the $j$-th chart of some $\Calm_C$, then we have $$\mathfrak{h}(\Calf_j)=k-j+1$$ or, equivalently $$\mathfrak{w}(\Calf_j)=j.$$ \end{lemma} \begin{proof} Let $\Gamma_j\subset\Calt_G$ be a $G$-stair for $\Calf_j$. 
In particular, it has the form in \Cref{shapelabel}\begin{figure}[ht]\scalebox{1}{ \begin{tikzpicture} \node at (0,0) {\scalebox{0.3}{ \begin{tikzpicture} \draw (8,-6)--(8,-7)--(10,-7)--(10,-6)--(9,-6)--(9,-5)--(8,-5)--(8,-6); \draw (5,-3)--(5,-4)--(7,-4)--(7,-3)--(6,-3)--(6,-2)--(5,-2)--(5,-3); \draw (1,1)--(1,0)--(3,0)--(3,1)--(2,1)--(2,2)--(1,2)--(1,1); \draw (-2,4)--(-2,3)--(0,3)--(0,4)--(-1,4)--(-1,5)--(-2,5)--(-2,4); \draw (1,3)--(1,4)--(2,4)--(2,3)--(1,3)--(1,4); \draw (-2,6)--(-2,7)--(-1,7)--(-1,6)--(-2,6)--(-2,7); \draw (4,0)--(5,0)--(5,1)--(4,1)--(4,0)--(5,0); \draw (5,-1)--(6,-1)--(6,0)--(5,0)--(5,-1)--(6,-1); \draw (9,-3)--(9,-4)--(8,-4)--(8,-3)--(9,-3)--(9,-4); \draw (5,-3)--(6,-3)--(6,-4); \draw (1,1)--(2,1)--(2,0); \draw (-2,4)--(-1,4)--(-1,3); \node at (3.5,0.5) {$\cdots$}; \node at (-1.5,5.6) {$\vdots$}; \node at (5.5,-1.4) {$\vdots$}; \node at (1.5,2.6) {$\vdots$}; \node at (7.5,-3.5) {$\cdots$}; \node at (5.5,0.5) {$\ddots$}; \node at (10.5,-6.5) {$\cdots$}; \node at (0.5,3.5) {$\cdots$}; \node at (8.5,-4.4) {$\vdots$}; \draw (9,-7)--(9,-6)--(8,-6); \draw (11,-6)--(11,-7)--(12,-7)--(12,-6)--(11,-6)--(11,-7); \end{tikzpicture}}}; \draw[<-] (2,-1.8) to [out=90,in=210] (2.3,-1.2); \node[right] at (-1.1,1.9){\small$x^{\alpha}y^{\beta}$}; \draw[->] (-1,1.9) to [out=135,in=0] (-1.8,2); \node[right] at (2.2,-1.1) {\small$x^{\gamma}y^{\delta}$}; \end{tikzpicture}} \caption{\ } \label{shapelabel} \end{figure} where just the labels of the boxes we are interested in are shown. Recall, from \Cref{section 32}, that, if we write the skew Ferrers diagram $\pi_{\N^2}(\Gamma_j)=A\smallsetminus B$ as the difference of two Ferrers diagrams $A$ and $B$, then $\Calf_j\cong M_{\Gamma_j}$, where $$M_{\Gamma_j}\cong \frac{I_A}{I_A\cap I_B},$$ and $I_A,I_B$ are as in the proof of \Cref{lemmatec}. Now, if we deform $\Calf_j$ as in \Cref{defomations}, by using the parameters $a_j,c_j\in\C$, we get relations: $$x\cdot x^{\gamma}y^{\delta}=a_jx^{\alpha}y^{\beta}$$ $$y\cdot x^{\alpha}y^{\beta}=c_jx^{\gamma}y^{\delta} $$ and, the relations \eqref{toricrelations} tell us that $$({\gamma-\alpha+1},{\delta-\beta})=({\mathfrak{w}(\Calf)},{-\mathfrak{h}(\Calf)+1})=(j,j-k)\in\N^2$$ $$({\alpha-\gamma},{\beta-\delta+1})=({-\mathfrak{w}(\Calf)+1},{\mathfrak{h}(\Calf)})=(1-j,k-j+1) \in\N^2$$ which completes the proof. \end{proof} \begin{remark} \Cref{hwchart} implies that any two toric $G$-constellations of the same height (or equivalently width) cannot belong to the same chamber, i.e. they cannot be $\theta$-stable for the same generic parameter $\theta\in\Theta^{\gen}$ simultaneously. \end{remark} \subsection{One dimensional families} \begin{definition} Given a toric $G$-constellation $\Calf$ and its abstract $G$-stair $\Gamma_\Calf$, its \textit{favorite condition} is the stability condition $\theta_\Calf\in\Theta$ defined by: $$(\theta_\Calf)_i=\begin{cases} -2 &\mbox{if $\rho_i$ is a generator and it is neither the first nor the last box of $\Gamma_\Calf$},\\ -1 &\mbox{if $\rho_i$ is a generator and it is either the first or the last box of $\Gamma_\Calf$},\\ 2 &\mbox{if $\rho_i$ is an antigenerator and it is neither the first nor the last box of $\Gamma_\Calf$},\\ 1 &\mbox{if $\rho_i$ is an antigenerator and it is either the first or the last box of $\Gamma_\Calf$},\\ 0 & \mbox{otherwise} \end{cases}$$ Moreover, the \textit{cone of good conditions for $\Calf$}, is the cone: \[ \Theta_\Calf=\Set{\theta\in\Theta^{\gen}|\Calf \mbox{ is $\theta$-stable}}. 
\] \end{definition} \begin{remark} It is worth mentioning that the favorite condition $\theta_{\Calf}$ of a toric $G$-constellation $\Calf$ can be understood as the stability condition determined by an appropriate flow on a certain quiver as explained in \cite[\S 6]{INFIRRI}. \end{remark} \begin{definition} Let $\Gamma$ be a stair and let $\Gamma'\subset \Gamma$ be a substair. We say that an element $v\in\Gamma'$ is \begin{itemize} \item a \textit{left internal endpoint} of $\Gamma'$ if there exists $w\in\Gamma\smallsetminus\Gamma'$ such that $x\cdot w=v$ or if $y\cdot v \in\Gamma\smallsetminus\Gamma'$; \item a \textit{right internal endpoint} of $\Gamma'$ if there exists $w\in\Gamma\smallsetminus\Gamma'$ such that $y\cdot w=v$ or if $x\cdot v \in\Gamma\smallsetminus\Gamma'$. \end{itemize} Moreover, we say that \begin{itemize} \item a left (resp. right) internal endpoint is a \textit{horizontal left (resp. right) cut} if $y\cdot v \in\Gamma\smallsetminus\Gamma'$ (resp. there exists $w\in\Gamma\smallsetminus\Gamma'$ such that $y\cdot w=v$); \item a left (resp. right) internal endpoint is a \textit{vertical left (resp. right) cut} if there exists $w\in\Gamma\smallsetminus\Gamma'$ such that $x\cdot w=v$ (resp. $x\cdot v \in\Gamma\smallsetminus\Gamma'$); \end{itemize} \end{definition} \begin{remark}\label{endpoints}If $\Calf$ is a $G$-constellation and $\Gamma_\Calf$ is a $G$-stair for $\Calf$, then a substair $\Gamma\subset\Gamma_\Calf$ corresponds to a $G$-equivariant $\C[x,y]$-submodule $\Cale_\Gamma$ of $\Calf$ if and only if it has only vertical left cuts and horizontal right cuts. Moreover, if $\Gamma$ is connected and $\theta_\Calf$ is the favorite condition of $\Calf $, then, $$\theta_\Calf(\Cale_\Gamma)=\begin{cases} 1&\mbox{if $\Gamma$ has one internal endpoint},\\ 2&\mbox{if $\Gamma$ has two internal endpoints}. \end{cases}$$ \end{remark} \begin{remark}\label{ororverver} Let $\Calf$ be a toric $G$-constellation with abstract $G$-stair $\Gamma_\Calf$ and let $\Cale<\Calf$ be a subrepresentation, i.e. a $G$-invariant linear subspace, whose substair $\Gamma_\Cale\subset\Gamma_\Calf$ is connected. Then, if $\Gamma_\Cale$ has two horizontal cuts or two vertical cuts and $\theta_\Calf$ is the favorite condition of $\Calf$, we have $$\theta_\Calf(\Cale)=0.$$ \end{remark} \begin{remark}\label{propfavorite} The following properties are easy to check for a toric $G$-constellation $\Calf$: \begin{itemize} \item favorite conditions are never generic, \item the $G$-constellation $\Calf$ is $\theta_\Calf$-stable, \item there exist generic conditions $\theta \in \Theta^{\gen}$ such that $\Calf$ is $\theta $-stable, i.e. the cone of good conditions $\Theta_\Calf$ is not empty. \end{itemize} Moreover, given a chamber $C$, we have: $$C=\underset{[\Calf]\in\Calm_C}{\bigcap}\Theta_\Calf.$$ For example, one can prove the third property using the openness of the nonempty set $\set{\theta \in\Theta | \Calf\mbox{ is strictly $\theta$-stable}} $ and the denseness of $\Theta^{\gen}$. However, we give here an alternative proof of this fact as in what follows we shall need a similar argument. Let $\rho_i$ be any irreducible representation, we denote by $\Calf_{\rho_i}$ the $G$-equivariant $\C[x,y]$-submodule of $\Calf$ generated by $\rho_i$ and, we denote by $\Gamma_{\rho_i}\subset\Gamma _\Calf$ the abstract substair and $G$-stair corresponding to $\Calf_{\rho_i}$ and $\Calf$ respectively. 
Consider an $\varepsilon\in\Theta$ with the following properties: $$\begin{cases} \varepsilon_i=0 & \mbox{if }\rho_i\mbox{ is an antigenerator},\\ \varepsilon_i<0 & \mbox{if }\rho_i\mbox{ is neither a generator nor an antigenerator},\\ \varepsilon_i=-\ssum{\rho_j\in(\Gamma_{\rho_i}\smallsetminus\rho_i)}{ }\varepsilon_j & \mbox{if }\rho_i\mbox{ is a generator},\\ \ssum{\mbox{\tiny $\rho_i$ generator}}{ }\varepsilon_i<1. \end{cases}$$ Then, for any subrepresentation $\Cale<\Calf$, we have $$\varepsilon(\Cale)>-\ssum{\mbox{\tiny $\rho_i$ generator}}{ }\varepsilon_i>-1.$$ Hence, the $G$-constellation $\Calf$ is $(\theta_\Calf+\varepsilon)$-stable. Indeed, \Cref{endpoints} implies that, given an indecomposable proper $G$-equivariant $\C[x,y] $-submodule we have $$(\theta_\Calf+\varepsilon)(\Cale)>0.$$ On the contrary, if $\Cale $ is not indecomposable then it is a direct sum of indecomposable components and $(\theta_\Calf+\varepsilon)(\Cale)>0$ follows by the additivity of $\theta_\Calf+\varepsilon$ on direct sums. We conclude by noticing that $\Theta\smallsetminus\Theta^{\gen}$ is a union of hyperplanes and so, there is at least a choice $\varepsilon\in\Theta$ such that $\theta_\Calf+\varepsilon$ is generic. We will see in the proof of \Cref{TEO1} that there is an easier way, which does not involve any $\varepsilon$, to prove that $\Theta _\Calf$ is not empty. \end{remark} \begin{definition}\label{deflink} An \textit{abstract linking stair} is an abstract stair made of $2k$ boxes obtained from an abstract $G$-stair $\Gamma$ in either of the following ways: \begin{enumerate} \item (\textit{decreasing} linking stair of $\Gamma$) take two copies of $\Gamma $ and make a new abstract stair by gluing the right edge of the last box of one copy to the left edge of the first box of the other copy; \item (\textit{increasing} linking stair of $\Gamma$) take two copies of $\Gamma $ and make a new abstract stair by gluing the lower edge of the last box of one copy to the upper edge of the first box of the other copy. \end{enumerate} A \textit{linking stair} is a realization of an abstract linking stair as a subset of the representation tableau. \end{definition} \begin{remark} An abstract linking stair contains exactly $k$ different abstract $G$-stairs. \end{remark} \begin{prop}\label{propcoppia} Let $\Gamma$ be the abstract $G$-stair of a $G$-constellation $\Calf$ and let $L$ be its abstract decreasing linking stair. Consider any $G$-stair $\Gamma' \subset L$ and its associated $G$-constellation $\Calf'$. Then, the following are equivalent: \begin{enumerate} \item there exists at least a chamber $C$ such that both $\Calf $ and $\Calf'$ belong to $C$, i.e. $\Theta_{\Calf}\cap \Theta_{\Calf'}\not=\emptyset$, \item $\mathfrak{h}(\Calf')=\mathfrak{h}(\Calf)-1$, \item the substair $\Gamma'\subset L$ has a horizontal left cut. \end{enumerate} In particular, $ \Calf' $ is the $G$-constellation next to $\Calf$ in $\Calm_C$ as per \Cref{orderconst}. \end{prop} \begin{example} \Cref{examplefig} describes the situation via an example. Here, we are considering the $\Z/9\Z $-action on $\A^2 $ given in \eqref{Zkaction}. 
\begin{figure}[H] \begin{tikzpicture}\node at (-3,0) {\scalebox{0.5}{ \begin{tikzpicture} \draw (0,-2)--(0,-3)--(2,-3)--(2,-5)--(3,-5)--(3,-6)--(4,-6)--(4,-4)--(3,-4)--(3,-2)--(1,-2)--(1,0)--(0,0)--(0,-2); \draw (0,-1)--(1,-1); \draw (0,-2)--(1,-2)--(1,-3); \draw (2,-2)--(2,-3)--(3,-3); \draw (2,-4)--(3,-4)--(3,-5)--(4,-5); \node at (0.5,-0.5) {\Huge 0}; \node at (0.5,-1.5) {\Huge 1}; \node at (0.5,-2.5) {\Huge 2}; \node at (1.5,-2.5) {\Huge 3}; \node at (2.5,-2.5) {\Huge 4}; \node at (2.5,-3.5) {\Huge 5}; \node at (2.5,-4.5) {\Huge 6}; \node at (3.5,-4.5) {\Huge 7}; \node at (3.5,-5.5) {\Huge 8}; \end{tikzpicture}}}; \node at (3,0) {\scalebox{0.5}{ \begin{tikzpicture} \draw (2,-4)--(2,-5)--(3,-5)--(3,-6)--(4,-6)--(4,-4)--(3,-4)--(3,-4); \draw (4,-6)--(4,-8)--(7,-8)--(7,-7)--(5,-7)--(5,-5)--(4,-5); \draw (2,-4)--(2,-3)--(3,-3)--(3,-4); \draw (4,-6)--(5,-6); \draw (4,-7)--(5,-7)--(5,-8); \draw (2,-4)--(3,-4)--(3,-5)--(4,-5); \draw (6,-8)--(6,-7); \node at (2.5,-3.5) {\Huge 5}; \node at (2.5,-4.5) {\Huge 6}; \node at (3.5,-4.5) {\Huge 7}; \node at (3.5,-5.5) {\Huge 8}; \node at (4.5,-5.5) {\Huge 0}; \node at (4.5,-6.5) {\Huge 1}; \node at (4.5,-7.5) {\Huge 2}; \node at (5.5,-7.5) {\Huge 3}; \node at (6.5,-7.5) {\Huge 4}; \end{tikzpicture}}}; \node at (-2.8,1.9) {{\Large $\Gamma$}}; \node at (3.2,1.4) {{\Large $\Gamma'$}}; \end{tikzpicture} \begin{tikzpicture} \node at (0,0) {\scalebox{0.6}{ \begin{tikzpicture} \draw (0,-2)--(0,-3)--(2,-3)--(2,-5)--(3,-5)--(3,-6)--(4,-6)--(4,-4)--(3,-4)--(3,-2)--(1,-2)--(1,0)--(0,0)--(0,-2); \draw (4,-6)--(4,-8)--(6,-8)--(6,-10)--(7,-10)--(7,-11)--(8,-11)--(8,-9)--(7,-9)--(7,-7)--(5,-7)--(5,-5)--(4,-5); \draw (0,-1)--(1,-1); \draw (0,-2)--(1,-2)--(1,-3); \draw (2,-2)--(2,-3); \draw (2,-4)--(3,-4)--(3,-5)--(4,-5); \draw(4,-6)--(5,-6); \draw(4,-7)--(5,-7)--(5,-8); \draw(6,-7)--(6,-8)--(7,-8); \draw(6,-9)--(7,-9)--(7,-10)--(8,-10); \draw[pattern=north west lines] (4,-6)--(4,-4)--(3,-4)--(3,-3)--(2,-3)--(2,-5)--(3,-5)--(3,-6); \draw[ultra thick,dashed] (8.5,-8)--(8.5,-3); \draw[ultra thick,dashed] (11,-11)--(11,0); \draw[ultra thick,dashed] (8,-8)--(9,-8); \draw[ultra thick,dashed] (8,-3)--(9,-3); \draw[ultra thick,dashed] (11.5,0)--(10.5,0); \draw[ultra thick,dashed] (11.5,-11)--(10.5,-11); \draw[ultra thick,dashed] (0,-9)--(4,-9); \draw[ultra thick,dashed] (0,-8.5)--(0,-9.5); \draw[ultra thick,dashed] (4,-8.5)--(4,-9.5); \draw[ultra thick,->] (5.5,0) to [out=225 , in =80] (3.5,-3.5); \node at (0.5,-0.5) {\Huge 0}; \node at (0.5,-1.5) {\Huge 1}; \node at (0.5,-2.5) {\Huge 2}; \node at (1.5,-2.5) {\Huge 3}; \node at (2.5,-2.5) {\Huge 4}; \node at (2.5,-3.5) {\Huge 5}; \node at (2.5,-4.5) {\Huge 6}; \node at (3.5,-4.5) {\Huge 7}; \node at (3.5,-5.5) {\Huge 8}; \node at (4.5,-5.5) {\Huge 0}; \node at (4.5,-6.5) {\Huge 1}; \node at (4.5,-7.5) {\Huge 2}; \node at (5.5,-7.5) {\Huge 3}; \node at (6.5,-7.5) {\Huge 4}; \node at (6.5,-8.5) {\Huge 5}; \node at (6.5,-9.5) {\Huge 6}; \node at (7.5,-9.5) {\Huge 7}; \node at (7.5,-10.5) {\Huge 8}; \end{tikzpicture}}}; \node at (-2.2,-2.4) {$\Gamma$}; \node at (-0.1,3.5) {$\Gamma\cap \Gamma'$}; \node at (2,0) {$\Gamma'$}; \node at (3.4,0) {$L$}; \node at (5,1) {$\mathfrak{h}(\Gamma)=6$,}; \node at (5,-1) {$\mathfrak{w}(\Gamma)=4$,}; \node at (7,1) {$\mathfrak{h}(\Gamma')=5$,}; \node at (7,-1) {$\mathfrak{w}(\Gamma')=5$.}; \end{tikzpicture} \caption{The abstract linking stair $L$ of an abstract $G$-stair $\Gamma$ and a substair $\Gamma'$ of $L$ which satisfies the hypotheses of \Cref{propcoppia}.} \label{examplefig} \end{figure} 
\end{example} \begin{proof} (of \Cref{propcoppia}). We start by introducing some notation. Let $\Calf,\Calf'$ be two $G$-constellations. Given a proper subrepresentation $\Cale<\Calf$ (resp. $\Cale'<\Calf'$), we denote by $\Cale'$ (resp. $\Cale$) the corresponding subrepresentation $\Cale'<\Calf'$ (resp. $\Cale<\Calf$). Here, by ``corresponding'' we mean that, since $\Cale$ is a subrepresentation of the regular representation $\C[G]$ of an abelian group, it decomposes as a direct sum of distinct indecomposable representations $\Cale\cong\underset{j}{\oplus}\rho_{i_j}$. Then, we denote by $\Cale'$ the subrepresentation of $\Calf'\cong\C[G]$ given by the same summands: $$\Cale'\cong\underset{j}{\oplus}\rho_{i_j}.$$ In particular, for all $ \theta\in\Theta$, the two rational numbers $$\theta(\Cale)\ \ \mbox{and}\ \ \theta(\Cale')$$ are the same. Moreover, we denote by $\Gamma_{\Cale}\subset\Gamma$ (resp. $\Gamma_{\Cale'}\subset\Gamma'$) the substair associated to $\Cale$ (resp. $\Cale'$). Notice that, given a proper $G$-equivariant $\C[x,y]$-submodule $\Cale<\Calf$, the subrepresentation $\Cale'$ is not necessarily a $\C[x,y]$-submodule of $\Calf'$. We are now ready to proceed with the proof. \begin{itemize} \item[(\textit{2})$\Leftrightarrow$(\textit{3})] We omit the easy proof. \item[(\textit{1})$\Rightarrow$(\textit{3})] Suppose by contradiction that $\Gamma'\subset L$ has a vertical left cut. Then, by \Cref{endpoints}, the subrepresentation $\Cale_{\Gamma\cap\Gamma'}<\Calf$ is a $\C[x,y]$-submodule because, in $\Gamma$, the substair $\Gamma\cap\Gamma'$ has a vertical left cut by hypothesis and its last box is not internal. At the same time, again by \Cref{endpoints}, $\Cale_{\Gamma\cap\Gamma'}'<\Calf'$ is the complement of a $\C[x,y]$-submodule, because its first box is not internal and it has a vertical right cut. Hence, $$C\subset\Theta_{\Calf}\cap\Theta_{\Calf'}\subset \{\theta(\Cale_{\Gamma\cap\Gamma'})>0\}\cap \{-\theta(\Cale_{\Gamma\cap\Gamma'})>0\}=\emptyset,$$ which contradicts (\textit{1}). \item[(\textit{3})$\Rightarrow$(\textit{1})] In order to prove statement (\textit{1}), we need to show that $$\Theta_\Calf\cap\Theta_{\Calf'}\not=\emptyset.$$ We start by identifying the proper indecomposable $G$-equivariant subsheaves $\Cale<\Calf$ (resp. $\Cale'<\Calf'$) such that also $\Cale'$ (resp. $\Cale$) is a proper $G$-equivariant subsheaf of $\Calf'$ (resp. $\Calf$). Let $\Cale'<\Calf'$ be a proper indecomposable $G$-equivariant submodule of $\Calf'$; we consider three different cases.\\ \textbf{Case 1.} Both the first and the last box of the substair $\Gamma_{\Cale'}\subset\Gamma'$ are internal endpoints. Then, the same happens for $\Gamma_{\Cale}\subset\Gamma$. This is true because $\Gamma$ has a vertical right cut in $L$, by the construction of a decreasing linking stair (see \Cref{deflink}), and hence, the right internal endpoint of $\Gamma_{\Cale'}$ in $\Gamma'$, which is a horizontal cut by \Cref{endpoints}, is different from the right internal endpoint of $\Gamma$ in $L$. Therefore, both internal endpoints of $\Gamma_{\Cale'}$ correspond to internal endpoints of $\Gamma_\Cale$ of the same respective nature. As a consequence, the subrepresentation $\Cale$ is a proper, not necessarily indecomposable, $G$-equivariant submodule of $\Calf$.\\ \textbf{Case 2.} The substair $\Gamma_{\Cale'}$ has only the vertical left cut in $\Gamma'$, and hence, its last box coincides with the last box of $\Gamma'$. In particular, this box is not the right internal endpoint of $\Gamma $ in $L$.
We have to study the nature of the internal endpoints of $\Gamma_\Cale$. Notice first that it is enough to study the right internal endpoint of $\Gamma_\Cale$ because, if $\Gamma_\Cale$ still has a left internal endpoint, then it is a vertical left cut. Let $\rho_i$ be the label on the last box of $\Gamma'$; then the label on the horizontal left cut of $\Gamma'$ (i.e. its first box) is $\rho_{i+1}$. Now, since, by hypothesis (\textit{3}), the box labeled by $\rho_{i+1}$ is a horizontal left cut of $\Gamma'\subset L$, the box labeled by $\rho_i$ in $\Gamma$ has to be a horizontal right cut for the substair $\Gamma_\Cale$. Therefore, $\Gamma_\Cale$ has only vertical left cuts and horizontal right cuts, and so, by \Cref{endpoints}, $\Cale$ is a proper, not necessarily indecomposable, $G$-equivariant submodule.\\ \textbf{Case 3.} The substair $\Gamma_{\Cale'}\subset \Gamma'$ has only the horizontal right cut, i.e. its first box coincides with the first box of $\Gamma'$. First of all, notice that, as in Case 1, the right internal endpoint of $\Gamma_{\Cale'}$ in $\Gamma'$, which is a horizontal cut by hypothesis, is different from the right internal endpoint of $\Gamma$ in $L$, which is vertical by definition of decreasing linking stair. Therefore, the box of $\Gamma$ with the same label as the horizontal right cut of $\Gamma_{\Cale'}$ is an internal endpoint of $\Gamma_\Cale$ and it is a horizontal right cut. Finally, the first box of $\Gamma' $ in $L$ is a left internal endpoint for $\Gamma_\Cale$, and so it is a horizontal left cut by point (\textit{3}) of the statement. As a consequence, $\Gamma_\Cale$ has two horizontal cuts. In summary, if $\Cale'<\Calf'$ is a proper indecomposable $G$-equivariant submodule of $\Calf'$ such that $\Gamma_{\Cale'}$ has a vertical left cut, then also $\Cale<\Calf$ is a proper, not necessarily indecomposable, $G$-equivariant submodule. If, instead, $\Gamma_{\Cale'}<\Gamma'$ has only the horizontal right cut, then $\Gamma_\Cale$ has two horizontal cuts. Following the same logic, if $\Cale<\Calf$ is a proper indecomposable $G$-equivariant submodule of $\Calf$ such that $\Gamma_{\Cale}$ has a horizontal right cut, then also $\Cale'<\Calf'$ is a proper, not necessarily indecomposable, $G$-equivariant submodule. If, instead, $\Gamma_{\Cale}<\Gamma$ has only the vertical left cut, then $\Gamma_{\Cale'}$ has two vertical cuts. We are now ready to exhibit a $\theta\in\Theta^{\gen}$ such that $\Calf$ and $\Calf'$ are $\theta$-stable. Let $\theta_\Calf$ and $\theta_{\Calf'}$ be the respective favorite conditions for $\Calf$ and $\Calf'$, and let $\theta=\theta_\Calf+\theta_{\Calf'}$ be their sum. Then, both $\Calf$ and $\Calf'$ are $\theta$-stable.
Indeed, \begin{itemize} \item if $\Cale<\Calf$ is a proper indecomposable $G$-equivariant $\C[x,y]$-submodule of $\Calf$ such that also $\Cale'$ is a $\C[x,y]$-submodule of $\Calf'$, then $$\theta(\Cale)=\theta_\Calf(\Cale)+\theta_{\Calf'}(\Cale)=\theta_\Calf(\Cale)+\theta_{\Calf'}(\Cale')>0$$ follows from the fact that $\Calf $ is $\theta_\Calf$-stable and $\Calf' $ is $\theta_{\Calf'}$-stable (see \Cref{propfavorite}); \item if $\Cale'<\Calf'$ is a proper indecomposable $G$-equivariant $\C[x,y]$-submodule of $\Calf'$ such that $\Gamma_{\Cale}$ has two horizontal cuts, then $$\theta(\Cale')=\theta_\Calf(\Cale')+\theta_{\Calf'}(\Cale')=\theta_\Calf(\Cale)+\theta_{\Calf'}(\Cale')=\theta_{\Calf'}(\Cale')=1>0$$ follows from the fact that $\Calf' $ is $\theta_{\Calf'}$-stable (see \Cref{propfavorite}) and from \Cref{endpoints,ororverver}; \item if $\Cale<\Calf$ is a proper indecomposable $G$-equivariant $\C[x,y]$-submodule of $\Calf$ such that $\Gamma_{\Cale'}$ has two vertical cuts, then $$\theta(\Cale)=\theta_\Calf(\Cale)+\theta_{\Calf'}(\Cale)=\theta_\Calf(\Cale)+\theta_{\Calf'}(\Cale')=\theta_{\Calf}(\Cale)=1>0$$ follows from the fact that $\Calf $ is $\theta_{\Calf}$-stable (see \Cref{propfavorite}) and from \Cref{endpoints,ororverver}; \item if $\Cale<\Calf$ (resp. $\Cale'<\Calf'$) is a proper decomposable $G$-equivariant $\C[x,y]$-submodule, then $$\theta(\Cale)>0$$ follows by applying the previous points to the indecomposable components of $\Cale$ and from the additivity of $\theta$. \end{itemize} The last issue here is that, in general, such $\theta$ is not generic, i.e. $$\theta \in \overline{\Theta_\Calf\cap\Theta_{\Calf'}}\smallsetminus \left(\Theta_\Calf\cap\Theta_{\Calf'}\right).$$ In order to solve this problem, we can perturb $\theta_\Calf$ and $\theta_{\Calf'}$ the same way as we did in \Cref{propfavorite}, thus obtaining a generic $\widetilde{\theta}\in\Theta_{\Calf}\cap\Theta_{\Calf'}$. Consider the stability conditions $\varepsilon,\varepsilon'\in\Theta$ defined as follows: $$\begin{cases} \begin{matrix*}[l] \varepsilon_i=0 & \mbox{if }\rho_i\mbox{ is an antigenerator of }\Gamma_\Calf ,\\ \varepsilon_i'=0 & \mbox{if }\rho_i\mbox{ is an antigenerator of }\Gamma_{\Calf'},\\ \varepsilon_i<0 & \mbox{if }\rho_i\mbox{ is neither a generator nor an antigenerator of }\Gamma_\Calf,\\ \varepsilon_i'<0 & \mbox{if }\rho_i\mbox{ is neither a generator nor an antigenerator of }\Gamma_{\Calf'},\\ \varepsilon_i=-\ssum{\rho_j\in(\Gamma_{\rho_i}\smallsetminus\rho_i)}{ }\varepsilon_j & \mbox{if }\rho_i\mbox{ is a generator of }\Gamma_\Calf,\\ \varepsilon_i'=-\ssum{\rho_j\in(\Gamma_{\rho_i}'\smallsetminus\rho_i)}{ }\varepsilon_j' & \mbox{if }\rho_i\mbox{ is a generator of }\Gamma_{\Calf'}, \end{matrix*}\\ \ssum{\mbox{\tiny $\rho_i$ generator of }\Gamma_\Calf}{ }\varepsilon_i+\ssum{\mbox{\tiny $\rho_i$ generator of }\Gamma_{\Calf'}}{ }\varepsilon_i'<1, \end{cases}$$ where, as in \Cref{propfavorite}, $\Gamma_{\rho_i}\subset\Gamma$ (resp. $\Gamma_{\rho_i}'\subset\Gamma'$) is the substair associated to the $\C[x,y]$-submodule of $\Calf $ (resp. $ \Calf '$) generated by the irreducible subrepresentation $\rho_i$. Now, if $$\widetilde{\theta}=(\theta_{\Calf}+\varepsilon)+(\theta_{\Calf'}+\varepsilon')$$ then $\Calf$ and $\Calf'$ are $\widetilde{\theta}$-stable, and $\varepsilon$ and $\varepsilon'$ can be chosen in such a way that $\widetilde{\theta}$ is generic. As a consequence, $\Theta_\Calf\cap\Theta_{\Calf'}\not=\emptyset$.
\end{itemize} \end{proof} We will see, in the proof of \Cref{TEO1}, that there is an easier way to prove that $\Theta _\Calf\cap\Theta_{\Calf'}$ is not empty. By following the same logic, one can prove a similar statement for the increasing linking stairs. \begin{prop}\label{propcoppia1} Let $\Gamma$ be the abstract $G$-stair of a $G$-constellation $\Calf$ and let $L$ be its abstract increasing linking stair. Consider any $G$-stair $\Gamma'\subset L $ and its associated $G$-constellation $\Calf'$. Then, the following are equivalent: \begin{enumerate} \item there exists at least one chamber $C$ such that both $\Calf $ and $\Calf'$ belong to $C$, i.e. $\Theta_{\Calf}\cap \Theta_{\Calf'}\not=\emptyset$, \item $\mathfrak{h}(\Calf')=\mathfrak{h}(\Calf)+1$, \item the substair $\Gamma'\subset L$ has a vertical right cut. \end{enumerate} In particular, $ \Calf $ is the $G$-constellation next to $\Calf'$ in $\Calm_C$ in the sense of \Cref{orderconst}. \end{prop} \subsection{Counting the chambers} \begin{remark}\label{minoreouguale} Propositions \ref{propcoppia} and \ref{propcoppia1} provide a way to build 1-dimensional families of nilpotent $G$-constellations. In particular, each of these families corresponds to some exceptional line in some $\Calm_C$. Moreover, the two gluings described in the definition of linking stair are nothing but the two possible ways of deforming a toric $G$-constellation while keeping the property of being nilpotent, as described in \Cref{defomations}. This implies that the families coming from \Cref{propcoppia} and \Cref{propcoppia1} are exactly the 1-dimensional families of nilpotent $G$-constellations appearing in the moduli spaces $\Calm_C$. An easy combinatorial computation tells us that the maximum number of chambers is $k!$. Indeed, if we start from a $G$-constellation $\Calf_1$ of maximum height $\mathfrak{h}(\Calf_1)=k$, i.e. $\Calf_1$ has one of the $k$ abstract $G$-stairs shown in \Cref{maxheight}, \begin{figure}[ht]$\begin{matrix}\scalebox{0.7}{ \begin{tikzpicture} \node at (0.5,0.5) {\large 0}; \node at (0.5,1.5) {\large 1}; \node at (0.5,2.6) {\large $ \vdots $}; \node at (0.5,3.5) {\large $k-2$}; \node at (0.5,4.5) {\large $k-1$}; \draw (0,3)--(0,5)--(1,5)--(1,3); \draw (0,4)--(1,4); \draw (0,2)--(0,0)--(1,0)--(1,2); \draw (0,1)--(1,1); \draw[dashed] (0,2)--(1,2)--(1,3)--(0,3)--(0,2); \end{tikzpicture}} \end{matrix},\quad \begin{matrix}\scalebox{0.7}{ \begin{tikzpicture} \node at (0.5,0.5) {\large 1}; \node at (0.5,1.5) {\large 2}; \node at (0.5,2.6) {\large $ \vdots $}; \node at (0.5,3.5) {\large $k-1$}; \node at (0.5,4.5) {\large $0$}; \draw (0,3)--(0,5)--(1,5)--(1,3); \draw (0,4)--(1,4); \draw (0,2)--(0,0)--(1,0)--(1,2); \draw (0,1)--(1,1); \draw[dashed] (0,2)--(1,2)--(1,3)--(0,3)--(0,2); \end{tikzpicture}} \end{matrix},\quad \cdots,\quad \begin{matrix}\scalebox{0.7}{ \begin{tikzpicture} \node at (0.5,0.5) {\large $k-1$}; \node at (0.5,1.5) {\large 0}; \node at (0.5,2.6) {\large $ \vdots $}; \node at (0.5,3.5) {\large $k-3$}; \node at (0.5,4.5) {\large $k-2$}; \draw (0,3)--(0,5)--(1,5)--(1,3); \draw (0,4)--(1,4); \draw (0,2)--(0,0)--(1,0)--(1,2); \draw (0,1)--(1,1); \draw[dashed] (0,2)--(1,2)--(1,3)--(0,3)--(0,2); \end{tikzpicture}} \end{matrix}$ \caption{The abstract $G$-stairs of maximum height.} \label{maxheight} \end{figure} we can construct toric $G$-constellations $\Calf_2,\ldots,\Calf_k$ with respective abstract $G$-stairs $\Gamma_{j}$ for $j=2,\ldots,k$ by recursively applying the prescriptions in \Cref{propcoppia}.
Precisely, for any $j>1$, each $\Gamma_j$ is a connected substair, with horizontal left cut, of the decreasing linking stair of $\Gamma_{j-1}$. To conclude that the maximum number of chambers is $k!$, we notice that the $j$-th time that we apply \Cref{propcoppia} there are $k-j$ possible $G$-stairs with horizontal left cut in the decreasing linking stair of the abstract $G$-stair of $\Calf_j$. \end{remark} \begin{theorem}\label{TEO1} If $G\subset \SL(2,\C)$ is a finite abelian subgroup of cardinality $k=|G|$, then the space of generic stability conditions $\Theta^{\gen}$ is the disjoint union of $k!$ chambers. \end{theorem} \begin{proof} It is enough to show that, if $\Calf_1,\ldots,\Calf_k$ are as in \Cref{minoreouguale}, then there exists a chamber $$C=\Theta_{\Calf_1}\cap\Theta_{\Calf_2}\cap\cdots\cap\Theta_{\Calf_k}\not=\emptyset,$$ such that $\Calf_j$ is $C$-stable for all $j=1,\ldots,k$. We claim that, if, for all $j=1,\ldots,k$, the favorite condition of $\Calf_j$ is $\theta_{\Calf_j}$, then $$\theta=\ssum{j=1}{k}\theta_{\Calf_j}\in C.$$ A priori, in order to prove the claim, we need to show both that $\theta$ is generic and that every $\Calf_j$ is $\theta$-stable. In fact, it is enough to show just that every $\Calf_j$ is $\theta$-stable, because this implies that $\Calm_\theta$ has $k$ torus fixed-points and, as a consequence, that $\theta $ is generic. Let $\Cale_j<\Calf_j$ be a proper $G$-equivariant indecomposable $\C[x,y]$-submodule of $\Calf_j$ with substair $\Gamma_{\Cale_j}\subset\Gamma_{\Calf_j}$. Suppose also that $\Cale_j=\underset{s=m}{\overset{n}{\bigoplus}}\rho_s$, where $0\le m\le n\le k-1$. We denote by $\Cale_i$, for $i=1,\ldots,j-1,j+1,\ldots,k$, the subrepresentation of $\Calf_i$ corresponding to $\Cale_j$, i.e. $$\Cale_i=\underset{s=m}{\overset{n}{\bigoplus}}\rho_s,\ \ \forall i=1,\ldots,j-1,j+1,\ldots,k.$$ Notice that \begin{itemize} \item if $\Gamma_{\Cale_{j+1}}$ has two vertical cuts, then $\Gamma_{\Cale_{i}}$ has two vertical cuts for every $i>j+1$; \item if $\Gamma_{\Cale_{j-1}}$ has two horizontal cuts, then $\Gamma_{\Cale_{i}}$ has two horizontal cuts for every $i<j-1$. \end{itemize} This is true because every time we increase (resp. decrease) the index $i$, we perform a horizontal left (resp. vertical right) cut in the decreasing (resp. increasing) linking stair which does not affect the vertical left (resp. horizontal right) cut of $\Gamma_{\Cale_{j+1}}$ (resp. $\Gamma_{\Cale_{j-1}}$). Hence, for all $i =1,\ldots,j-1,j+1,\ldots,k$, we have $\theta_{\Calf_i}(\Cale_j)\ge0 $ and, as a consequence, $$\theta(\Cale_j)=\left(\theta_{\Calf_j}+\ssum{i\not=j}{ }\theta_{\Calf_i}\right)(\Cale_j)>0.$$ \end{proof} \begin{remark} The proof of \Cref{TEO1} provides an alternative way to prove that $$\Theta_{\Calf}\not=\emptyset$$ in \Cref{propfavorite} and that $$\Theta_{\Calf}\cap\Theta_{\Calf'}\not=\emptyset$$ in the last part of the third point of the proof of \Cref{propcoppia}. For example, let $\Calf$ be a toric $G$-constellation with abstract $G$-stair of height $\mathfrak{h}(\Calf)=j$. We construct $\Calf_1,\ldots,\Calf_{j-1},\Calf_{j+1},\ldots,\Calf_k$ by recursively applying Propositions \ref{propcoppia} and \ref{propcoppia1}, i.e. \begin{itemize} \item if $i>j$, then $\Calf_i$ has, as $G$-stair, a $G$-substair, with a horizontal left cut, of the decreasing linking stair of $\Calf_{i-1}$, \item if $i<j$, then $\Calf_i$ has, as $G$-stair, a $G$-substair, with a vertical right cut, of the increasing linking stair of $\Calf_{i+1}$.
\end{itemize} Then, if $\theta = \theta_\Calf+\ssum{ }{ }\theta_{\Calf_i}$ is the sum of all favorite conditions, we have $\theta\in\Theta_\Calf.$ \end{remark} \section{Simple chambers} In this section we first introduce the notion of chamber stair. Roughly speaking, it is a stair that encodes all the data needed to reconstruct a chamber. Then, we define simple chambers, which are a particular kind of chamber with the property that any toric $G$-constellation belongs to at least one of them. Finally, we prove that there are exactly $k\cdot 2^{k-2}$ simple chambers. \begin{remark}Given a chamber $C\subset \Theta^{\gen}$, we can make a stair $\Gamma_C$ out of it, and we say that $\Gamma_C$ is the chamber stair of $C$. Let $\Calf_1,\ldots,\Calf_k$ be the toric $G$-constellations in $\Calm_C$. As explained in \Cref{propcoppia} (resp. \Cref{propcoppia1}), the abstract $G$-stairs $\Gamma_j,\Gamma_{j+1}$ of two consecutive $G$-constellations $\Calf_j,\Calf_{j+1}$ are substairs of the same stair $L$, namely the decreasing linking stair of $\Gamma_j$ (resp. the increasing linking stair of $\Gamma_{j+1}$). Moreover, they have non-empty intersection in $L$. Now, if $\Gamma_1,\ldots,\Gamma_k$ are the respective abstract $G$-stairs of $\Calf_1,\ldots,\Calf_k$, we can construct a new abstract stair $\Gamma_C$ by gluing consecutive abstract $G$-stairs along their common parts. \end{remark} \begin{definition}\label{chamberstair} The \textit{abstract chamber stair of $C$} or the \textit{abstract $C$-stair} is the abstract stair $\Gamma_C$ obtained as described above. \end{definition} \begin{example} Consider the case $G\cong \Z/5\Z$. \Cref{CHAMBER} explains how to build an abstract $C$-stair starting from the abstract $G$-stairs of the $G$-constellations in some chamber $C$.
\begin{figure}[ht] \scalebox{1}{\begin{tikzpicture} \node at (0,0) { \scalebox{0.5}{\begin{tikzpicture} \draw (0,0)--(0,5)--(1,5)--(1,0)--(0,0); \draw (0,1)--(1,1); \draw (0,2)--(1,2); \draw (0,3)--(1,3); \draw (0,4)--(1,4); \draw[pattern=north east lines] (0,1)--(0,0)--(1,0)--(1,2)--(0,2)--(0,1); \node at (0.5,0.5) {\Huge 0}; \node at (0.5,1.5) {\Huge 4}; \node at (0.5,2.5) {\Huge 3}; \node at (0.5,3.5) {\Huge 2}; \node at (0.5,4.5) {\Huge 1}; \end{tikzpicture}}}; \node at (1.5,-0.25) { \scalebox{0.5}{\begin{tikzpicture} \draw (2,0)--(0,0)--(0,2)--(1,2)--(1,1)--(2,1)--(2,-2)--(1,-2)--(1,0)--(0,0); \draw (0,1)--(1,1); \draw (1,-1)--(2,-1); \draw[pattern=north west lines] (0,0)--(0,2)--(1,2)--(1,0)--(0,0); \draw[pattern=north east lines] (1,-2)--(2,-2)--(2,-1)--(1,-1)--(1,-2); \node at (0.5,0.5) {\Huge 0}; \node at (0.5,1.5) {\Huge 4}; \node at (1.5,-1.5) {\Huge 3}; \node at (1.5,-0.5) {\Huge 2}; \node at (1.5,0.5) {\Huge 1}; \end{tikzpicture}}}; \node at (3.5,-0.5) { \scalebox{0.5}{\begin{tikzpicture} \draw (0,0)--(1,0)--(1,-1)--(2,-1)--(2,-2)--(3,-2)--(3,0)--(2,0)--(2,1)--(0,1)--(0,0); \draw (1,1)--(1,0)--(2,0); \draw (2,0)--(2,-1)--(3,-1); \draw[pattern=north west lines] (0,0)--(0,1)--(1,1)--(1,0)--(0,0)--(0,1); \draw[pattern=north east lines] (2,-2)--(3,-2)--(3,0)--(1,0)--(1,-1)--(2,-1); \node at (0.5,0.5) {\Huge 3}; \node at (2.5,-1.5) {\Huge 2}; \node at (2.5,-0.5) {\Huge 1}; \node at (1.5,-0.5) {\Huge 0}; \node at (1.5,0.5) {\Huge 4}; \end{tikzpicture}}}; \node at (6,-0.75) { \scalebox{0.5}{\begin{tikzpicture} \draw (1,0)--(1,-1); \draw (3,-2)--(3,-1); \draw (1,-1)--(2,-1)--(2,-2); \draw (1,0)--(2,0)--(2,-1)--(4,-1)--(4,-2)--(1,-2)--(1,-1)--(0,-1)--(0,0)--(1,0); \draw[pattern=north west lines] (0,0)--(2,0)--(2,-2)--(1,-2)--(1,-1)--(0,-1)--(0,0); \draw[pattern=north east lines] (1,-1)--(4,-1)--(4,-2)--(1,-2); \node at (2.5,-1.5) {\Huge 3}; \node at (1.5,-1.5) {\Huge 2}; \node at (1.5,-0.5) {\Huge 1}; \node at (0.5,-0.5) {\Huge 0}; \node at (3.5,-1.5) {\Huge 4}; \end{tikzpicture}}}; \node at ( 9 ,-1) { \scalebox{0.5}{\begin{tikzpicture} \draw (1,0)--(5,0)--(5,1)--(0,1)--(0,0)--(1,0); \draw (1,0)--(1,1); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \draw (4,0)--(4,1); \draw[pattern=north west lines] (0,0)--(3,0)--(3,1)--(0,1)--(0,0); \node at (0.5,0.5) {\Huge 2}; \node at (1.5,0.5) {\Huge 3}; \node at (2.5,0.5) {\Huge 4}; \node at (3.5,0.5) {\Huge 0}; \node at (4.5,0.5) {\Huge 1}; \end{tikzpicture}}}; \node at (0,-1.75) {$\Gamma_1$}; \node at (1.5,-1.75) {$\Gamma_2$}; \node at (3.5,-1.75) {$\Gamma_3$}; \node at (6,-1.75) {$\Gamma_4$}; \node at (9,-1.75) {$\Gamma_5$}; \end{tikzpicture}}\\$\ $\\ \begin{tikzpicture} \node at (0,0) { \scalebox{0.5}{\begin{tikzpicture} \draw (0,-4)--(0,-5)--(1,-5)--(1,-7)--(2,-7)--(2,-8)--(3,-8)--(3,-9)--(8,-9)--(8,-8)--(4,-8)--(4,-7)--(3,-7)--(3,-6)--(2,-6)--(2,-4)--(1,-4)--(1,0)--(0,0)--(0,-4); \draw (0,-1)--(1,-1); \draw (0,-2)--(1,-2); \draw (0,-3)--(1,-3); \draw (0,-4)--(1,-4)--(1,-5)--(2,-5); \draw (1,-6)--(2,-6)--(2,-7)--(3,-7)--(3,-8)--(4,-8)--(4,-9); \draw (5,-8)--(5,-9); \draw (6,-8)--(6,-9); \draw (7,-8)--(7,-9); \node at (0.5,-0.5) {\Huge 1}; \node at (0.5,-1.5) {\Huge 2}; \node at (0.5,-2.5) {\Huge 3}; \node at (0.5,-3.5) {\Huge 4}; \node at (0.5,-4.5) {\Huge 0}; \node at (1.5,-4.5) {\Huge 1}; \node at (1.5,-5.5) {\Huge 2}; \node at (1.5,-6.5) {\Huge 3}; \node at (2.5,-6.5) {\Huge 4}; \node at (2.5,-7.5) {\Huge 0}; \node at (3.5,-7.5) {\Huge 1}; \node at (3.5,-8.5) {\Huge 2}; \node at (4.5,-8.5) {\Huge 3}; \node at (5.5,-8.5) {\Huge 4}; \node at 
(6.5,-8.5) {\Huge 0}; \node at (7.5,-8.5) {\Huge 1}; \end{tikzpicture}}}; \node at (0,-2.7) {$\Gamma_C$}; \end{tikzpicture} \caption{The abstract $C$-stair $\Gamma_C$ is obtained by gluing, along their common part, the abstract $\Z/5\Z$-stairs $\Gamma_i$ and $\Gamma_{i+1}$ for $i=1,\ldots,4$.} \label{CHAMBER} \end{figure} In particular, we have glued the boxes \scalebox{0.4}{\begin{tikzpicture} \draw[pattern=north east lines] (0,0)--(1,0)--(1,1)--(0,1)--(0,0)--(1,0); \end{tikzpicture}} of an abstract $G$-stair with the boxes \scalebox{0.4}{\begin{tikzpicture} \draw[pattern=north west lines] (0,0)--(1,0)--(1,1)--(0,1)--(0,0)--(1,0); \end{tikzpicture}} of the next abstract $G$-stair. \end{example} \begin{definition} A \textit{chamber stair associated to $C$} or a \textit{$C$-stair} is any realization $\widetilde{\Gamma}_C$ of the abstract chamber stair $\Gamma_C$ associated to $C$ as a subset of the representation tableau. \end{definition} \begin{remark}\label{CONCLUSION} Let $C\subset \Theta^{\gen}$ be a chamber and let $\Gamma_C\subset \Calt_G$ be a $C$-stair. Consider a $G$-stair $\Gamma\subset \Gamma_C$ of width $\mathfrak{w}(\Gamma)=j$ and the associated $G$-constellation $\Calf_\Gamma$. Let us also denote by $b,b'\in\Gamma$ the first and the last box of $\Gamma$. Suppose that $\Calf_\Gamma$ is not $C$-stable. Then, there are two consecutive $C$-stable $G$-constellations $\Calf$ and $\Calf'$ with associated respective $G$-stairs $\Gamma_{\Calf},\Gamma_{\Calf'}\subset\Gamma_C$ such that $b\in\Gamma_{\Calf}$ and $b'\in\Gamma_{\Calf'}$. Therefore, $\Gamma$ is a substair of both the decreasing linking stair $L$ of $\Gamma_\Calf$ and the increasing linking stair $L'$ of $\Gamma_{\Calf'}$. In particular, as a consequence of Propositions \ref{propcoppia} and \ref{propcoppia1}, one and only one of the following two possibilities must occur: \begin{equation} \label{conditions} \begin{array}{l} \mathfrak{w}(\Calf)=j-1, \mathfrak{w}(\Calf')=j, \mbox{ and $b$ (resp. $b'$) is a left (resp. right) horizontal cut of $\Gamma$ in $L$}, \\ \mathfrak{w}(\Calf)=j ,\mathfrak{w}(\Calf')=j+1, \mbox{ and $b$ (resp. $b'$) is a right (resp. left) vertical cut of $\Gamma$ in $L$}. \end{array} \end{equation} On the other hand, again as a consequence of \Cref{propcoppia} and \Cref{propcoppia1}, if $\Calf_\Gamma$ is $C$-stable, none of the conditions in \eqref{conditions} can hold true, and in this case $\Gamma$ has a horizontal left cut and a vertical right cut in $\Gamma_C$. Summing up, if $\Gamma\subset\Gamma_C$ is a connected $G$-substair associated to a toric $G$-constellation $\Calf_\Gamma$, then only the following two cases can occur: \begin{itemize} \item the $G $-constellation $\Calf_\Gamma$ is $C$-stable and $\Gamma$ has a horizontal left cut and a vertical right cut, or \item the $G $-constellation $\Calf_\Gamma$ is not $C$-stable and $\Gamma$ has two horizontal cuts or two vertical cuts. \end{itemize} \end{remark} \begin{remark}\label{unicCstair} Different chambers have different abstract chamber stairs. First, recall from \Cref{CONCLUSION} that, as per \Cref{propcoppia}, the $G$-stair of any toric $C$-stable $ G$-constellation has a vertical right cut in the $C$-stair and a horizontal right cut in the decreasing linking stair of the previous $G$-constellation. Suppose that two chambers $C$ and $C'$ have the same abstract chamber stair $\Gamma$.
In particular, from the construction of abstract chamber stairs, it follows that $C$ and $C'$ have the same first (in the sense of \Cref{orderconst}) toric $G$-constellation. Suppose that $C$ and $C'$ differ in the $j$-th toric $G$-constellation. This translates into the fact that, if $\Calf_j$ and $\Calf_j'$ are the respective $j$-th $G$-constellations of $C$ and $C'$ and $\Gamma_j,\Gamma_j'$ are their abstract $G$-stairs, then $$\Gamma_j\not=\Gamma_j'.$$ Let us denote by $\Calf_{j-1}$ the $(j-1)$-th toric $G$-constellation of $C$ (and $C'$) and by $\Gamma_{j-1}$ its abstract $G$-stair. Then, both $\Gamma_j$ and $\Gamma_j'$ are substairs of the decreasing linking stair $L_{j-1}$ of $\Gamma_{j-1}$ and they have a horizontal right cut in $L_{j-1}$, as noticed above. Since $\Gamma_{j-1},\Gamma_j$ and $\Gamma_j'$ are connected and $\Gamma_{j-1}\cap \Gamma_j,\Gamma_{j-1}\cap \Gamma_j'\not=\emptyset$ in $L_{j-1}$, it follows that: $$\Gamma_{j-1}\cup \Gamma_j\subsetneq \Gamma_{j-1}\cup \Gamma_j'\mbox{ or }\Gamma_{j-1}\cup \Gamma_j\supsetneq \Gamma_{j-1}\cup \Gamma_j'.$$ Finally, if without loss of generality we suppose $$\Gamma_{j-1}\cup \Gamma_j\subsetneq \Gamma_{j-1}\cup \Gamma_j'\subset \Gamma,$$ then we get a contradiction. Indeed, as noticed at the beginning, $\Gamma_{j}$ has a vertical right cut in $\Gamma$, but it has to have a horizontal right cut in $\Gamma_{j-1}\cup \Gamma_j'$ because the latter is a connected substair of $L_{j-1}$ which strictly contains $\Gamma_{j}$. \end{remark} \begin{remark}Since the abstract chamber stair $\Gamma_C$ of a chamber $C$ contains a copy of the abstract $G$-stairs of the toric $C$-stable $G$-constellations, we will think of such abstract $G$-stairs as substairs of $\Gamma_C$. Similarly, given a $C$-stair $\widetilde{\Gamma}_C\subset\Calt_G$ which realizes $\Gamma_C$, we will realize the abstract $G$-stairs associated to the $G$-constellations in $C$ as substairs of $\widetilde{\Gamma}_C$. \end{remark} \begin{definition}\label{simplechamber} Given a chamber $C$, we say that a toric $C$-stable $G$-constellation is \textit{$C$-characteristic} if its abstract $G$-stair has the same generators as the abstract $C$-stair, see \Cref{stair}. We say that a chamber $C$ is \textit{simple} if there is a toric $C$-stable $G$-constellation whose abstract $G$-stair has the same generators as the abstract $C$-stair, i.e. if there exists at least one $C$-characteristic $G$-constellation. \end{definition} \begin{example} An example of a simple chamber is given by the chamber $C_G$ in \Cref{CRAWthm}, i.e. the chamber whose associated moduli space is $G$-$ \Hilb(\A^2) $. In particular, the abstract $C_G$-stair has only one generator, namely $\rho_0$. \end{example} \begin{definition} Let $\Gamma$ be a $G$-stair and let $\rho_i$ and $\rho_j$ be its first and its last generators. \begin{itemize} \item The \textit{left tail} of $\Gamma$ is the substair of $\Gamma$ given by \[\mathfrak{lt}(\Gamma)=\Set{y^s\cdot\rho_i|s>0}.\] \item The \textit{right tail} of $\Gamma$ is the substair of $\Gamma$ given by \[\mathfrak{rt}(\Gamma)= \Set{x^s\cdot\rho_j|s>0}.\] \item The \textit{tail} of $\Gamma$ is the substair of $\Gamma$ given by $$\mathfrak{t}(\Gamma)=\mathfrak{lt}(\Gamma)\cup \mathfrak{rt}(\Gamma).$$ \end{itemize} Similarly one can define left/right tails for abstract $G$-stairs. \end{definition} \begin{remark}\label{stessigen} If two $G$-stairs $\Gamma$ and $\Gamma'$ have the same generators, then they differ at most by their tails, i.e.
the following equality of subsets of the representation tableau holds true: $$\Gamma\smallsetminus\mathfrak{t}(\Gamma)=\Gamma'\smallsetminus\mathfrak{t}(\Gamma').$$ In particular, if a $G$-stair $\Gamma$ has a tail of cardinality $m$, then there are $m+1$ $G$-stairs with the same generators as $\Gamma$. In simple words, the other $G$-stairs are obtained by moving some boxes from the left tail of $\Gamma$ to its right tail (and vice versa). \end{remark} \begin{prop}\label{propsimple}The following properties are true. \begin{enumerate} \item Any toric $G$-constellation is $C$-stable for some simple chamber $C$. \item Given a simple chamber $C$ and a $C$-characteristic $G$-constellation $\Calf$, there is an algorithm to produce all the toric $C$-stable constellations. \item If $C$ is a simple chamber, all the toric $G$-constellations that admit a $G$-stair with the same generators as the $C$-stair belong to $C$, i.e. they are $C$-stable. In particular, they are $C$-characteristic. \end{enumerate} \end{prop} \begin{proof} Let $\Gamma_C$ be the abstract $C$-stair. We prove the first two points in a constructive way. In order to do so, we show that, given a toric $G$-constellation $\Calf$, there is a unique simple chamber $C$ such that $\Calf$ is $C$-characteristic. Let $\Calf$ be a toric $G$-constellation with associated abstract $G$-stair $\Gamma_\Calf$ of height $\mathfrak{h}(\Calf)=j$. In order to build a chamber starting from $\Calf$, we first have to recursively apply Propositions \ref{propcoppia} and \ref{propcoppia1}, $j-1$ times and $k-j$ times respectively, to obtain $k$ toric constellations $$\Calf_1,\ldots,\Calf_{j-1},\Calf,\Calf_{j+1},\ldots,\Calf_k$$ and, finally, apply \Cref{TEO1} to conclude that there exists a chamber $C$ such that the constellations $\Calf_1,\ldots,\Calf_{j-1},\Calf,\Calf_{j+1},\ldots,\Calf_k$ correspond to the toric points of $\Calm_C$. The condition that the chamber must be simple translates into the fact that, at every step, no new generators appear. This can only be achieved by performing, every time that we apply \Cref{propcoppia} (resp. \Cref{propcoppia1}), the first (resp. last) possible horizontal (resp. vertical) cut in the decreasing (resp. increasing) linking stair. In order to prove the last point, we start by considering a $G$-constellation $\Calf$ whose abstract $G$-stair $\Gamma_\Calf$ has the same generators as the $C$-stair and such that it has empty right tail, i.e. $\mathfrak{t}(\Gamma_\Calf)=\mathfrak{lt}(\Gamma_\Calf)$. Let $m=\#\mathfrak{lt}(\Gamma_\Calf)$ be the cardinality of the left tail of $\Gamma_\Calf$. Each of the first $m$ times we apply \Cref{propcoppia}, performing the first possible horizontal cut increases the cardinality of $\mathfrak{rt}(\Gamma_\Calf)$ by 1 and, consequently, decreases the cardinality of $\mathfrak{lt}(\Gamma_\Calf)$ by 1. In this way we find, as explained in \Cref{stessigen}, all the toric $G$-constellations which admit a $G$-stair with the same generators as the $C$-stair, and all of them are $C$-stable by \Cref{TEO1}. \end{proof} \begin{lemma}\label{genconst} Let $\Gamma$ be a $G$-stair. Then $\Gamma$ has at most $$\left\lfloor \frac{k+1}{2} \right\rfloor$$ generators. \end{lemma} \begin{proof}The statement follows from the following observation. If a stair has $r$ generators, then it has at least $2r-1$ boxes, as shown in \Cref{numgens}: any two consecutive generators of a stair are separated by at least one further box.
\begin{figure}[H]\scalebox{0.6}{ \begin{tikzpicture}[scale=0.7] \draw (0,0)--(0,1)--(1,1)--(1,0)--(0,0)--(0,1); \draw (2,0)--(2,1)--(3,1)--(3,0)--(2,0)--(2,1); \draw (2,-2)--(2,-1)--(3,-1)--(3,-2)--(2,-2)--(2,-1); \draw (4,-2)--(4,-1)--(5,-1)--(5,-2)--(4,-2)--(4,-1); \draw (5,-3)--(5,-2)--(6,-2)--(6,-3)--(5,-3)--(5,-2); \draw (5,-5)--(5,-4)--(6,-4)--(6,-5)--(5,-5)--(5,-4); \node at (0.5,1.6) {$\vdots$}; \node at (1.5,0.5) {$\cdots$}; \node at (2.5,-0.4) {$\vdots$}; \node at (3.5,-1.5) {$\cdots$}; \node at (5.5,-1.5) {$\ddots$}; \node at (5.5,-3.4) {$\vdots$}; \node at (6.5,-4.5) {$\cdots$}; \node at (2,-3) {\rotatebox{-45}{$\underbrace{\hspace{5cm}}$}}; \node at (5,-0.5) {\rotatebox{-45}{$\overbrace{\hspace{3.5cm}}$}}; \node[left] at (2,-3.5) {$r$}; \node[right] at (5,0) {$r-1$}; \end{tikzpicture}} \caption{\ } \label{numgens} \end{figure} Now, a $G$-stair has exactly $k$ boxes. Hence, $$r\le \left\lfloor \frac{k+1}{2} \right\rfloor.$$ \end{proof} \begin{example}\label{hilbop} Non-simple chambers exist. As already mentioned in \Cref{CRAWthm}, there is a chamber $C_G$ such that $G$-$\Hilb(\A^2)\cong\Calm_{C_G}$ as moduli spaces. In particular, \[C_G\subset\Set{\theta\in\Theta | \theta_0<0,\ \theta_i>0\ \forall i=1,\ldots,k-1 },\] and the abstract $G$-stairs of its toric constellations are shown in \Cref{torichilb}. \begin{figure}[H] \scalebox{1}{ \begin{tikzpicture} \node at (0,-2.5) {$\Gamma_{\Calf_1}$}; \node at (0,0) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,3); \draw[dashed](1,2)--(1,3); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,4)--(1,6)--(0,6)--(0,3)--(1,3)--(1,4)--(0,4); \draw (1,5)--(0,5); \node at (0.5,2.6) {$\vdots$}; \node at (0.5,0.5) {\small$0$}; \node at (0.5,1.5) {\small$k$-$1$}; \node at (0.5,3.5) {\small$3$}; \node at (0.5,4.5) {\small$2$}; \node at (0.5,5.5) {\small$1$}; \end{tikzpicture} \end{matrix}$}; \node at (1.5,-2.5) {$\Gamma_{\Calf_2}$}; \node at (1.5,-0.35) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,3); \draw[dashed](1,2)--(1,3); \draw (1,0)--(2,0)--(2,1)--(1,1); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,4)--(1,5)--(0,5)--(0,3)--(1,3)--(1,4)--(0,4); \node at (0.5,2.6) {$\vdots$}; \node at (0.5,0.5) {\small$0$}; \node at (0.5,1.5) {\small$k$-$1$}; \node at (0.5,3.5) {\small$3$}; \node at (0.5,4.5) {\small$2$}; \node at (1.5,0.5) {\small$1$}; \end{tikzpicture} \end{matrix}$}; \node at (3.7,-2.5) {$\Gamma_{\Calf_3}$}; \node at (3.7,-0.7) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,3); \draw[dashed](1,2)--(1,3); \draw (2,0)--(2,1); \draw (1,0)--(3,0)--(3,1)--(1,1); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,3)--(1,4)--(0,4)--(0,3)--(1,3)--(1,4); \node at (0.5,2.6) {$\vdots$}; \node at (0.5,0.5) {\small$0$}; \node at (0.5,1.5) {\small$k$-$1$}; \node at (0.5,3.5) {\small$3$}; \node at (2.5,0.5) {\small$2$}; \node at (1.5,0.5) {\small$1$}; \end{tikzpicture} \end{matrix}$}; \node at (5.3,-1.4) {$\cdots$}; \node at (5.3,-2.5) {$\cdots$}; \node at (7.7,-2.5) {$\Gamma_{\Calf_{k-1}}$}; \node at (7.7,-1.4) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](2,0)--(3,0); \draw[dashed](2,1)--(3,1); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,0)--(2,0)--(2,1)--(1,1); \node at (2.5,0.5) {$\cdots$}; \draw (4,1)--(3,1)--(3,0)--(5,0)--(5,1)--(4,1)--(4,0); \node at (0.5,0.5) {\small$0$}; \node at (0.5,1.5) {\small$k$-$1$}; \node at (4.5,0.5) {\small$k$-$2$}; \node at (3.5,0.5) {\small$k$-$3$}; \node at 
(1.5,0.5) {\small$1$}; \end{tikzpicture} \end{matrix}$}; \node at (12,-2.5) {$\Gamma_{\Calf_k}$.}; \node at (12,-1.75) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](2,0)--(3,0); \draw[dashed](2,1)--(3,1); \draw (1,1)--(1,0)--(0,0)--(0,1)--(1,1); \draw (1,0)--(2,0)--(2,1)--(1,1); \node at (2.5,0.5) {$\cdots$}; \draw (4,1)--(3,1)--(3,0)--(5,0)--(5,1)--(4,1)--(4,0); \draw (5,0)--(6,0)--(6,1)--(5,1); \node at (0.5,0.5) {\small$0$}; \node at (5.5,0.5) {\small$k$-$1$}; \node at (4.5,0.5) {\small$k$-$2$}; \node at (3.5,0.5) {\small$k$-$3$}; \node at (1.5,0.5) {\small$1$}; \end{tikzpicture} \end{matrix}$}; \end{tikzpicture}} \caption{The abstract $G$-stairs of the $C_G$-stable toric $G$-constellations.} \label{torichilb} \end{figure} Notice that, for $i=1,\ldots,k$ and $j=0,\ldots,k-1$, the favorite conditions $\theta_{\Calf_i}$ are defined by $$(\theta_{\Calf_i})_j=\begin{cases} -2&\mbox{if } j=0 \ \&\ i\not=1,k,\\ -1&\mbox{if } j=0 \ \&\ (i=1 \mbox{ or } i=k),\\ 1&\mbox{if } j=i-1\not=0,\\ 1&\mbox{if } j=i ,\\ 0& \mbox{otherwise.} \end{cases}$$ and that the condition $$\theta=\ssum{i=1}{k}\theta_{\Calf_i}=(-2k+2,\underbrace{2,\ldots,2}_{k-1})$$ belongs to $C_G$. More precisely, the moduli space $G$-$\Hilb(\A^2)$ parametrises all the toric $G$-constellations generated by the trivial representation. As a consequence, the abstract $G$-stairs $\Gamma_{\Calf_i}$, for $i=1,\ldots,k$, have as only generator the trivial representation. Let us reverse this property by asking the presence of just one antigenerator, for example, the trivial representation. It is easy to see that there is a chamber $C_G^{\OP}$ whose toric $G$-constellations, as requested, have the abstract $G$-stairs in \Cref{torichilbop}. \begin{figure}[ht] \scalebox{1}{ \begin{tikzpicture} \node at (0,-2.5) {$\Gamma_{\Calf_1'}$}; \node at (0,0) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,3); \draw[dashed](1,2)--(1,3); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,4)--(1,6)--(0,6)--(0,3)--(1,3)--(1,4)--(0,4); \draw (1,5)--(0,5); \node at (0.5,2.6) {$\vdots$}; \node at (0.5,0.5) {\small$k$-1}; \node at (0.5,1.5) {\small$k$-2}; \node at (0.5,3.5) {\small$2$}; \node at (0.5,4.5) {\small$1$}; \node at (0.5,5.5) {\small$0$}; \end{tikzpicture} \end{matrix}$}; \node at (1.5,-2.5) {$\Gamma_{\Calf_2'}$}; \node at (1.5,-0.35) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,3); \draw[dashed](1,2)--(1,3); \draw (0,5)--(-1,5)--(-1,4)--(0,4); \draw (1,1)--(1,2)--(0,2)--(0,0)--(1,0)--(1,1)--(0,1); \draw (1,4)--(1,5)--(0,5)--(0,3)--(1,3)--(1,4)--(0,4); \node at (0.5,2.6) {$\vdots$}; \node at (0.5,0.5) {\small$k$-2}; \node at (0.5,1.5) {\small$k$-3}; \node at (0.5,3.5) {\small$1$}; \node at (0.5,4.5) {\small$0$}; \node at (-0.5,4.5) {\small$k$-1}; \end{tikzpicture} \end{matrix}$}; \node at (3.7,-2.5) {$\Gamma_{\Calf_3'}$}; \node at (3.7,-0.7) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](0,2)--(0,1); \draw[dashed](1,2)--(1,1); \draw (-1,3)--(-1,4); \draw (0,3)--(-2,3)--(-2,4)--(0,4); \draw (0,3)--(0,2)--(1,2)--(1,3); \draw (0,0)--(1,0)--(1,1)--(0,1)--(0,0)--(1,0); \draw (1,3)--(1,4)--(0,4)--(0,3)--(1,3); \node at (0.5,1.6) {$\vdots$}; \node at (0.5,0.5) {\small$k$-3}; \node at (0.5,2.5) {\small1}; \node at (0.5,3.5) {\small$0$}; \node at (-0.5,3.5) {\small$k$-1}; \node at (-1.5,3.5) {\small$k$-2}; \end{tikzpicture} \end{matrix}$}; \node at (5.3,-1.4) {$\cdots$}; \node at (5.3,-2.5) {$\cdots$}; \node at (7.7,-2.5) {$\Gamma_{\Calf_{k-1}'}$}; \node at 
(7.7,-1.4) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](2,0)--(3,0); \draw[dashed](2,1)--(3,1); \draw (4,0)--(4,-1)--(5,-1)--(5,0); \draw (0,0)--(1,0)--(1,1)--(0,1)--(0,0)--(1,0); \draw (1,0)--(2,0)--(2,1)--(1,1); \node at (2.5,0.5) {$\cdots$}; \draw (4,1)--(3,1)--(3,0)--(5,0)--(5,1)--(4,1)--(4,0); \node at (0.5,0.5) {\small2}; \node at (4.5,-0.5) {\small1}; \node at (4.5,0.5) {\small0}; \node at (3.5,0.5) {\small$k$-1}; \node at (1.5,0.5) {\small3}; \end{tikzpicture} \end{matrix}$}; \node at (12,-2.5) {$\Gamma_{\Calf_k'}$}; \node at (12,-1.75) {$\begin{matrix} \begin{tikzpicture}[scale=0.7] \draw[dashed](2,0)--(3,0); \draw[dashed](2,1)--(3,1); \draw (1,1)--(1,0)--(0,0)--(0,1)--(1,1); \draw (1,0)--(2,0)--(2,1)--(1,1); \node at (2.5,0.5) {$\cdots$}; \draw (4,1)--(3,1)--(3,0)--(5,0)--(5,1)--(4,1)--(4,0); \draw (5,0)--(6,0)--(6,1)--(5,1); \node at (0.5,0.5) {\small$1$}; \node at (5.5,0.5) {\small$0$}; \node at (4.5,0.5) {\small$k$-1}; \node at (3.5,0.5) {\small$k$-$2$}; \node at (1.5,0.5) {\small$2$}; \end{tikzpicture} \end{matrix}$}; \end{tikzpicture}} \caption{The abstract $G$-stairs of the $C_G^{\OP}$-stable toric $G$-constellations.} \label{torichilbop} \end{figure} In particular, \[C_G^{\OP}\subset\Set{\theta\in\Theta|\theta_0>0,\ \theta_i<0\ \forall i=1,\ldots,k-1}.\] We denote the associated moduli space by $$G\mbox{-}\Hilb^{\OP}(\A^2):= \Calm_{C_G^{\OP}}.$$ Notice that, while $C_{\Z/3\Z}^{\OP}$ is simple, $C_{\Z/k\Z}^{\OP}$ is not simple for $k\ge 4$ because the number of generators of the $C_{\Z/k\Z}^{\OP}$-stair is $$k-1>\left\lfloor\frac{k+1}{2}\right\rfloor \ \ \forall k\ge 4.$$ Therefore, as a consequence of \Cref{genconst}, there is no $ C_{\Z/k\Z}^{\OP}$-characteristic $G$-constellation. We show, as an example, the abstract chamber stairs of $C_G$ and $C_G^{\OP}$ in the case $k=5$. 
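Before displaying them, we note that the closed form $(-2k+2,2,\ldots,2)$ for the sum of the favorite conditions $\theta_{\Calf_i}$ obtained above can be checked mechanically from their case-by-case definition. The following minimal Python sketch does this for small $k$; the encoding of a stability condition as a list of $k$ integers indexed by $j=0,\ldots,k-1$ is an assumption of the sketch.
\begin{verbatim}
# Minimal sketch: check that the favorite conditions of the C_G-stable toric
# constellations, defined case by case above, sum to (-2k+2, 2, ..., 2).

def favorite(i, k):
    # (theta_{F_i})_j for j = 0, ..., k-1, following the displayed formula.
    theta = [0] * k
    theta[0] = -1 if i in (1, k) else -2    # the cases j = 0
    if i - 1 != 0:
        theta[i - 1] += 1                   # the case j = i-1 != 0
    if i <= k - 1:
        theta[i] += 1                       # the case j = i (needs i <= k-1)
    return theta

def total(k):
    return [sum(favorite(i, k)[j] for i in range(1, k + 1)) for j in range(k)]

for k in range(2, 9):
    assert total(k) == [-2 * k + 2] + [2] * (k - 1)
\end{verbatim}
For instance, for $k=5$ this gives $\theta=(-8,2,2,2,2)$.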
\begin{figure}[H] \begin{tikzpicture} \node at (0,0) { $\begin{matrix} \scalebox{0.5}{ \begin{tikzpicture} \draw (0,-4)--(0,-5)--(5,-5)--(5,-4)--(1,-4)--(1,0)--(0,0)--(0,-4); \draw (0,-1)--(1,-1); \draw (0,-2)--(1,-2); \draw (0,-3)--(1,-3); \draw (0,-4)--(1,-4)--(1,-5); \draw (2,-4)--(2,-5); \draw (3,-4)--(3,-5); \draw (4,-4)--(4,-5); \node at (0.5,-0.5) {\Large1}; \node at (0.5,-1.5) {\Large2}; \node at (0.5,-2.5) {\Large3}; \node at (0.5,-3.5) {\Large4}; \node at (0.5,-4.5) {\Large0}; \node at (1.5,-4.5) {\Large1}; \node at (2.5,-4.5) {\Large2}; \node at (3.5,-4.5) {\Large3}; \node at (4.5,-4.5) {\Large4}; \end{tikzpicture}} \end{matrix}$}; \node at (7,1.5 ) { $\begin{matrix}\scalebox{0.5}{ \begin{tikzpicture} \draw (0,-4)--(0,-5)--(1,-5)--(1,-8)--(3,-8)--(3,-10)--(6,-10)--(6,-11)--(11,-11)--(11,-10)--(7,-10)--(7,-9)--(4,-9)--(4,-7)--(2,-7)--(2,-4)--(1,-4)--(1,0)--(0,0)--(0,-4); \draw (0,-1)--(1,-1); \draw (0,-2)--(1,-2); \draw (0,-3)--(1,-3); \draw (0,-4)--(1,-4)--(1,-5)--(2,-5); \draw (1,-6)--(2,-6); \draw (1,-7)--(2,-7)--(2,-8); \draw (3,-7)--(3,-8)--(4,-8); \draw (3,-9)--(4,-9)--(4,-10); \draw (5,-9)--(5,-10); \draw (6,-9)--(6,-10)--(7,-10)--(7,-11); \draw (8,-10)--(8,-11); \draw (9,-10)--(9,-11); \draw (10,-10)--(10,-11); \node at (0.5,-0.5) {\Large0}; \node at (0.5,-1.5) {\Large1}; \node at (0.5,-2.5) {\Large2}; \node at (0.5,-3.5) {\Large3}; \node at (0.5,-4.5) {\Large4}; \node at (1.5,-4.5) {\Large0}; \node at (1.5,-5.5) {\Large1}; \node at (1.5,-6.5) {\Large2}; \node at (1.5,-7.5) {\Large3}; \node at (2.5,-7.5) {\Large4}; \node at (3.5,-7.5) {\Large0}; \node at (3.5,-8.5) {\Large1}; \node at (3.5,-9.5) {\Large2}; \node at (4.5,-9.5) {\Large3}; \node at (5.5,-9.5) {\Large4}; \node at (6.5,-9.5) {\Large0}; \node at (6.5,-10.5) {\Large1}; \node at (7.5,-10.5) {\Large2}; \node at (8.5,-10.5) {\Large3}; \node at (9.5,-10.5) {\Large4}; \node at (10.5,-10.5) {\Large0}; \end{tikzpicture}} \end{matrix}$}; \end{tikzpicture} \caption{ The abstract {$C_{\Z/5\Z}$-stair} and the abstract {$C_{\Z/5\Z}^{\OP}$-stair.}} \end{figure} \end{example} \begin{theorem}\label{teosimple} If $G\subset \SL(2,\C)$ is a finite abelian subgroup of cardinality $k=|G|$, then the space of generic stability conditions $\Theta^{\gen}$ contains $k\cdot 2^{k-2}$ simple chambers. \end{theorem} \begin{proof}Let $\Calb$ be the set of possible sets of generators for a $G$-stair, i.e. \[\Calb=\Set{A\subset \Calt_G|\mbox{there exists a $G$-stair whose generators are the elements in $A$}},\] and let $\Calg$ be the set of all $G$-stairs \[\Calg=\Set{\Gamma\subset \Calt_G|\mbox{$\Gamma$ is a $G$-stair}}.\] Consider the subsemigroup $Z$ of $\Calt_G$ \[Z=\Set{(\alpha k +\gamma , \beta k+\gamma,\rho_0)\in\Calt_G|\alpha,\beta,\gamma\ge 0 }.\] We denote by $\overline{\Calb}$ and $\overline{\Calg}$ the sets of equivalence classes $$\overline{\Calb}=\Calb/\sim_Z,\ \mbox{and} \ \overline{\Calg}=\Calg/\sim_Z$$ where, if $A_1,A_2\in \Calb$ (resp. $\Gamma_1,\Gamma_2\in\Calg$), then $A_1\sim_Z A_2$ (resp. $\Gamma_1\sim_Z \Gamma_2$) if there exists $z\in Z$ such that $$A_1=A_2+z\mbox{ or }A_2= A_1+z \quad (\mbox{resp. }\Gamma_1=\Gamma_2+z\mbox{ or }\Gamma_2= \Gamma_1+z).$$ Notice that, if two $G$-stairs are $\sim_Z$-equivalent, then also their sets of generators are $\sim_Z$-equivalent. However, the converse is not true. Now, the number of simple chambers equals the cardinality of $\overline{\Calb}$. Indeed, \Cref{propsimple} implies that the chamber $C$ is uniquely determined by a constellation $\Calf$ whose $G$-stair is $C$-characteristic.
More precisely, $C$ is uniquely determined by the generators of any characteristic $C$-stair $\Gamma_\Calf$. Although there are infinitely many $G$-stairs corresponding to $\Calf$, \Cref{periodn} tells us that two $G$-stairs correspond to the same $G$-constellation if and only if they differ by an element in $Z$, i.e. they are $\sim_Z$-equivalent. Let $\Calg_r$ be the set of $G$-stairs with $r$ generators and let $\overline{\Calg}_r=\Calg_r/\sim_Z$ be the induced quotient. We have a surjective map $$\Psi:\Calg\rightarrow\Calb$$ which associates to each $G$-stair its set of generators, and this map descends to the sets of equivalence classes $$\overline{\Psi}:\overline{\Calg}\rightarrow\overline{\Calb},$$ because $\sim_Z$-equivalent $G$-stairs correspond to $\sim _Z$-equivalent sets of generators. Now, $\overline{\Calb}$ decomposes as a disjoint union (see \Cref{genconst}) as follows: $$\overline{\Calb}=\underset{r=1}{\overset{\left\lfloor \frac{k+1}{2}\right\rfloor}{\bigsqcup}}\overline{\Psi}(\overline{\Calg}_r).$$ Our strategy is to compute the cardinality of $\overline{\Psi}(\overline{\Calg}_r)$ for every $1\le r \le\left\lfloor \frac{k+1}{2}\right\rfloor$ and then sum over all $r$. For $r=1$ we have $|\overline{\Psi}(\overline{\Calg}_1)|=k$. If we impose the presence of $r\ge2$ generators and of a tail of cardinality $j$, then there are $$k\cdot\binom{k-2-j}{2r-3}$$ elements in $\overline{\Psi}(\overline{\Calg}_r)$ which come from $G$-stairs with a tail of cardinality $j$. Indeed, as shown in \Cref{arrangement}, we have $2r-1$ fixed boxes (generators and anti-generators), $j$ boxes contained in the tails (dashed areas) and $k-2r+1-j$ boxes left to arrange in $2r-2$ places (dotted areas). \begin{figure}\scalebox{0.8}{ \begin{tikzpicture}[scale=0.6] \draw (0,0)--(0,1)--(1,1)--(1,0)--(0,0)--(0,1); \draw (2,0)--(2,1)--(3,1)--(3,0)--(2,0)--(2,1); \draw (2,-2)--(2,-1)--(3,-1)--(3,-2)--(2,-2)--(2,-1); \draw (4,-2)--(4,-1)--(5,-1)--(5,-2)--(4,-2)--(4,-1); \draw (5,-3)--(5,-2)--(6,-2)--(6,-3)--(5,-3)--(5,-2); \draw (5,-5)--(5,-4)--(6,-4)--(6,-5)--(5,-5)--(5,-4); \node at (1.5,0.5) {$\cdots$}; \node at (2.5,-0.35) {$\vdots$}; \node at (3.5,-1.5) {$\cdots$}; \node at (5.5,-1.5) {$\ddots$}; \node at (5.5,-3.35) {$\vdots$}; \node at (2.2,-3) {\rotatebox{-45}{$\underbrace{\hspace{4.7cm}}$}}; \node at (4.8,-0.2) {\rotatebox{-45}{$\overbrace{\hspace{3cm}}$}}; \node[left] at (2.2,-3.5) {$r$}; \node[right] at (4.8,0.2) {$r-1$}; \draw[dashed] (0,1)--(0,2); \draw[dashed] (1,1)--(1,2); \draw[dashed] (6,-5)--(7,-5); \draw[dashed] (6,-4)--(7,-4); \end{tikzpicture}} \caption{\ } \label{arrangement} \end{figure} The number of possible ways to arrange the boxes is computed via the stars and bars method\footnote{In a more suggestive way, one can say \virg combinations with repetition of $2r-2$ elements of class $k-2r+1-j$".}. In particular, there are $$\binom{(2r-2)+(k-2r+1-j)-1}{k-2r+1-j}=\binom{k-2-j}{2r-3}$$ of them. Finally, if we sum over all possible $r$ and $j$, we get $$k\cdot \left[1+\ssum{r=2}{\left\lfloor\frac{k+1}{2}\right\rfloor}\ssum{j=0}{k-2r+1}\binom{k-2-j}{2r-3}\right]=k\cdot {2^{k-2}},$$ where the last equality holds because, for each $r$, the inner sum telescopes to $\sum_{j=0}^{k-2r+1}\binom{k-2-j}{2r-3}=\binom{k-1}{2r-2}$ (hockey-stick identity), and the even binomial coefficients $\binom{k-1}{0},\binom{k-1}{2},\ldots$ add up to $2^{k-2}$. \end{proof} \begin{remark} An easy combinatorial computation shows that the set $\overline{\Calg}$ in the proof of \Cref{teosimple} has cardinality $k\cdot 2^{k-1}$, i.e. that there are exactly $k\cdot 2^{k-1}$ isomorphism classes of toric $G$-constellations. \end{remark} We conclude this section with two examples which help to understand the notions just introduced.
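Before turning to these examples, we record a quick numerical check of the identity $1+\sum_{r=2}^{\left\lfloor\frac{k+1}{2}\right\rfloor}\sum_{j=0}^{k-2r+1}\binom{k-2-j}{2r-3}=2^{k-2}$ used at the end of the proof of \Cref{teosimple}. The following is a minimal Python sketch; the function name \texttt{simple\_chambers} is only a label introduced for the sketch.
\begin{verbatim}
# Minimal sketch: numerical check of the counting identity behind the theorem
# on simple chambers, namely that
#   1 + sum_{r=2}^{floor((k+1)/2)} sum_{j=0}^{k-2r+1} C(k-2-j, 2r-3)
# equals 2^(k-2), so that there are k * 2^(k-2) simple chambers.

from math import comb

def simple_chambers(k):
    total = 1                                  # the r = 1 contribution
    for r in range(2, (k + 1) // 2 + 1):
        for j in range(0, k - 2 * r + 2):      # j = 0, ..., k-2r+1
            total += comb(k - 2 - j, 2 * r - 3)
    return k * total

for k in range(2, 16):
    assert simple_chambers(k) == k * 2 ** (k - 2)
\end{verbatim}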
\begin{example} In this example we treat the case $G\cong \Z/5\Z$. The following picture contains a list of the possible shapes of the abstract chamber stairs of simple chambers and, in each case, the shapes of the $G$-stairs associated to the toric $G$-constellations belonging to the respective simple chamber. \begin{figure}[H] \scalebox{0.9}{ \begin{tabular}{||c|c||c|c||} \hline\hline $\begin{matrix} {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(5,0)--(5,1)--(1,1)--(1,5)--(0,5)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \draw (4,0)--(4,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,5) --(0,5)--(0,1); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(2,0)--(2,1)--(1,1)--(1,4) --(0,4)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(3,0)--(3,1)--(1,1)--(1,3) --(0,3)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \draw (2,0)--(2,1); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(4,0)--(4,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,0)--(0,0)--(0,1)--(5,1) --(5,0)--(1,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-1)--(6,-1)--(6,0)--(2,0)--(2,1)--(1,1)--(1,2)--(0,2)--(0,6) --(-1,6)--(-1,1)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,5)--(-1,5); \draw (0,4)--(-1,4); \draw (0,3)--(-1,3); \draw (0,2)--(-1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \draw (4,0)--(4,-1); \draw (5,0)--(5,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,5) --(0,5)--(0,1); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,0)--(1,0)--(1,1)--(0,1)--(0,2)--(2,2)--(2,1)--(4,1)--(4,0)--(2,0); \draw (1,2)--(1,1)--(2,1)--(2,0); \draw (3,1)--(3,0); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-1)--(2,-1)--(2,1)--(1,1)--(1,2) --(-1,2)--(-1,1)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-1)--(2,-1)--(2,1)--(1,1)--(1,3) --(0,3)--(0,1); \draw (0,1)--(1,1)--(1,0)--(2,0); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); 
\end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(0,-1)--(5,-1)--(5,0)--(1,0)-- (1,2)--(0,2)--(0,6) --(-1,6)--(-1,1)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,0)--(1,0)--(1,-1); \draw (0,5)--(-1,5); \draw (0,4)--(-1,4); \draw (0,3)--(-1,3); \draw (0,2)--(-1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \draw (4,0)--(4,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(4,0)--(4,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,1)--(0,1)--(0,0)--(1,0)--(1,-2)--(3,-2)--(3,-1)--(2,-1)--(2,1)--(1,1); \draw (1,1)--(1,0)--(2,0); \draw (1,-1)--(2,-1)--(2,-2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-2)--(2,-2)--(2,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0)--(2,0); \draw (1,-1)--(2,-1); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,5)--(1,5)--(1,1)--(4,1)--(4,0)--(8,0)--(8,-1)--(3,-1)--(3,0)--(0,0)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \draw (0,3)--(1,3); \draw (0,4)--(1,4); \draw (2,1)--(2,0); \draw (3,1)--(3,0)--(4,0)--(4,-1); \draw (5,0)--(5,-1); \draw (6,0)--(6,-1); \draw (7,0)--(7,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (1,2)--(4,2)--(4,0)--(3,0)--(3,1)--(0,1)--(0,2)--(1,2); \draw (4,1)--(3,1)--(3,2); \draw (2,1)--(2,2); \draw (1,1)--(1,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(3,0)--(3,1)--(1,1)--(1,3) --(0,3)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \draw (2,0)--(2,1); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(2,0)--(2,1)--(1,1)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1)--(1,0); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline $\begin{matrix} {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,5)--(1,5)--(1,1)--(2,1)--(2,0)--(6,0)--(6,-1)--(1,-1)--(1,0)--(0,0)--(0,1); \draw 
(0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \draw (4,0)--(4,-1); \draw (5,0)--(5,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,0)--(1,0)--(1,1)--(0,1)--(0,2)--(2,2)--(2,1)--(4,1)--(4,0)--(2,0); \draw (1,2)--(1,1)--(2,1)--(2,0); \draw (3,1)--(3,0); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-1)--(3,-1)--(3,0)--(2,0)--(2,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0)--(2,0)--(2,-1); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,2)--(0,0)--(1,0)--(1,-1)--(2,-1)--(2,1)--(1,1)--(1,3) --(0,3)--(0,2); \draw (0,1)--(1,1)--(1,0)--(2,0); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,5)--(1,5)--(1,1)--(3,1)--(3,0)--(7,0)--(7,-1)--(2,-1)--(2,0)--(0,0)--(0,1); \draw (0,2)--(1,2); \draw (0,3)--(1,3); \draw (0,4)--(1,4); \draw (0,1)--(1,1)--(1,0); \draw (2,1)--(2,0)--(3,0)--(3,-1); \draw (4,0)--(4,-1); \draw (5,0)--(5,-1); \draw (6,0)--(6,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} };[scale=0.15] \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(2,0)--(2,1)--(0,1)--(0,2)--(3,2)--(3,1)--(4,1)--(4,0)--(3,0); \draw (2,2)--(2,1)--(3,1)--(3,0); \draw (1,1)--(1,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)-- (0,0)--(2,0)--(2,-1)--(3,-1)--(3,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,1)--(2,0)--(3,0); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)-- (0,0)--(2,0)--(2,1)--(1,1)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1)--(1,0); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,5)--(1,5)--(1,1)--(2,1)--(2,-2)--(6,-2)--(6,-3)--(1,-3)--(1,0)--(0,0)--(0,1); \node at (0.5,5) {$\ $}; \draw (0,4) -- (1,4); \draw (0,3) -- (1,3); \draw (0,2) -- (1,2); \draw (0,1) -- (1,1)-- (1,0)-- (2,0); \draw (1,-1)-- (2,-1); \draw (1,-2)-- (2,-2)-- (2,-3); \draw (3,-2)-- (3,-3); \draw (4,-2)-- (4,-3); \draw (5,-2)-- (5,-3); 
\end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(4,0)--(4,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(3,0)--(3,1)--(1,1)--(1,3) --(0,3)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \draw (2,0)--(2,1); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,1)--(2,4)--(0,4)--(0,3)--(1,3)--(1,0)--(2,0)--(2,1); \draw (1,1)--(2,1); \draw (1,2)--(2,2); \draw (1,4)--(1,3)--(2,3); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,5)--(1,5)--(1,1)--(3,1)--(3,-1)--(7,-1)--(7,-2)--(2,-2)--(2,0)--(0,0)--(0,1); \node at (0.5,5) {$\ $}; \draw (0,4) -- (1,4); \draw (0,3) -- (1,3); \draw (0,2) -- (1,2); \draw (0,1) -- (1,1)-- (1,0); \draw (2,1) -- (2,0)-- (3,0); \draw (2,-1) -- (3,-1)-- (3,-2); \draw (4,-1)-- (4,-2); \draw (5,-1)-- (5,-2); \draw (6,-1)-- (6,-2); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \draw (4.5,0) to [out=30,in=150] (6.5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,4)--(0,0)--(1,0)--(1,5) --(0,5)--(0,4); \draw (0,1)--(1,1); \draw (0,4)--(1,4); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(4,0)--(4,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,3)--(3,3)--(3,0)--(2,0)--(2,2)--(0,2)--(0,3)--(1,3); \draw (1,3)--(1,2); \draw (2,3)--(2,2)--(3,2); \draw (3,1)--(2,1); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(2,0)--(2,1)--(1,1)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1)--(1,0); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (6.25,1) {\begin{tikzpicture}[scale=0.15] \draw (4,0)--(0,0)--(0,1)--(5,1) --(5,0)--(4,0); \draw (1,0)--(1,1); \draw (4,0)--(4,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline \end{tabular}} \caption{Description of the simple chambers for the action of $\Z/5\Z$.} \end{figure} As predicted by \Cref{teosimple}, the possible shapes for the chamber stairs of simple chambers are $ 8 = 2 ^ {5-2} $, and there are 5 different ways to label each chamber stair. \end{example} \begin{example}In this example we treat the case $G\cong \Z/4\Z$. 
The following picture contains a list of the possible shapes of the abstract chamber stairs and, in each case, the shapes of the $G$-stairs associated to the toric $G$-constellations belonging to the respective chamber. \begin{figure}[H] \begin{tabular}{||c|c||c|c||} \hline\hline $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(4,0)--(4,1)--(1,1)--(1,4)--(0,4)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \draw (3,0)--(3,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (0,2)--(0,0)--(2,0)--(2,1)--(1,1)--(1,3) --(0,3)--(0,2); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(3,0)--(3,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(0,-1)--(4,-1)--(4,0)--(1,0)-- (1,2)--(0,2)--(0,5) --(-1,5)--(-1,1)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,0)--(1,0)--(1,-1); \draw (0,4)--(-1,4); \draw (0,3)--(-1,3); \draw (0,2)--(-1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,1)--(2,4)--(0,4)--(0,3)--(1,3)--(1,1)--(2,1)--(2,1); \draw (1,2)--(2,2); \draw (1,4)--(1,3)--(2,3); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(3,0)--(3,1)--(1,1)--(1,2) --(0,2)--(0,1); \draw (0,1)--(1,1)--(1,0); \draw (2,0)--(2,1); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,4)--(1,4)--(1,1)--(3,1)--(3,0)--(6,0)--(6,-1)--(2,-1)--(2,0)--(0,0)--(0,1); \draw (0,2)--(1,2); \draw (0,3)--(1,3); \draw (0,1)--(1,1)--(1,0); \draw (2,1)--(2,0)--(3,0)--(3,-1); \draw (4,0)--(4,-1); \draw (5,0)--(5,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw 
(0,2)--(0,0)--(2,0)--(2,1)--(1,1)--(1,3) --(0,3)--(0,2); \draw (0,1)--(1,1)--(1,0); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,3)--(3,3)--(3,1)--(2,1)--(2,2)--(0,2)--(0,3)--(1,3); \draw (1,3)--(1,2); \draw (2,3)--(2,2)--(3,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,4)--(1,4)--(1,1)--(2,1)--(2,0)--(5,0)--(5,-1)--(1,-1)--(1,0)--(0,0)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \draw (4,0)--(4,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,1)--(2,2)--(1,2)--(1,3)--(0,3)--(0,1)--(1,1)--(1,0)--(2,0)--(2,1); \draw (2,1)--(1,1)--(1,2)--(0,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,2)--(2,2)--(2,1)--(3,1)--(3,0)--(1,0)--(1,1)--(0,1)--(0,2)--(1,2); \draw (1,2)--(1,1)--(2,1)--(2,0); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,0)--(1,0)--(1,-1)--(5,-1)--(5,0)--(2,0)--(2,1)--(1,1)--(1,2)--(0,2)--(0,5) --(-1,5)--(-1,1)--(0,1); \draw (0,2)--(0,1)--(1,1)--(1,0)--(2,0); \draw (0,4)--(-1,4); \draw (0,3)--(-1,3); \draw (0,2)--(-1,2); \draw (2,0)--(2,-1); \draw (3,0)--(3,-1); \draw (4,0)--(4,-1); \end{tikzpicture}} \end{matrix}$& $\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,1)--(2,2)--(1,2)--(1,3)--(0,3)--(0,1)--(1,1)--(1,0)--(2,0)--(2,1); \draw (2,1)--(1,1)--(1,2)--(0,2); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,2)--(2,2)--(2,1)--(3,1)--(3,0)--(1,0)--(1,1)--(0,1)--(0,2)--(1,2); \draw (1,2)--(1,1)--(2,1)--(2,0); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$& $\begin{matrix} \scalebox{1}{\begin{tikzpicture}[scale=0.15] \draw (0,1)--(0,4)--(1,4)--(1,1)--(2,1)--(2,-1)--(4,-1)--(4,-2)--(7,-2)--(7,-3)--(3,-3)--(3,-2)--(1,-2)--(1,0)--(0,0)--(0,1); \node at (0.5,4) {$\ $}; \draw (0,3)--(1,3); \draw (0,2)--(1,2); \draw (0,1)--(1,1)--(1,0)--(2,0); \draw (1,-1)--(2,-1)--(2,-2); \draw (3,-1)--(3,-2)--(4,-2)--(4,-3); \draw (5,-2)--(5,-3); \draw (6,-2)--(6,-3); \end{tikzpicture}} \end{matrix}$& 
$\begin{matrix}\scalebox{0.9}{ \begin{tikzpicture} \draw (0,0) to [out=30,in=150] (2,0); \draw (1.5,0) to [out=30,in=150] (3.5,0); \draw (3,0) to [out=30,in=150] (5,0); \node at (0.25,1) {\begin{tikzpicture}[scale=0.15] \draw (0,3)--(0,0)--(1,0)--(1,4) --(0,4)--(0,3); \draw (0,1)--(1,1); \draw (0,3)--(1,3); \draw (0,2)--(1,2); \end{tikzpicture} }; \node at (1.75,1) {\begin{tikzpicture}[scale=0.15] \draw (2,1)--(2,4)--(0,4)--(0,3)--(1,3)--(1,1)--(2,1)--(2,1); \draw (1,2)--(2,2); \draw (1,4)--(1,3)--(2,3); \end{tikzpicture} }; \node at (3.25,1) {\begin{tikzpicture}[scale=0.15] \draw (1,3)--(3,3)--(3,1)--(2,1)--(2,2)--(0,2)--(0,3)--(1,3); \draw (1,3)--(1,2); \draw (2,3)--(2,2)--(3,2); \end{tikzpicture} }; \node at (4.75,1) {\begin{tikzpicture}[scale=0.15] \draw (3,0)--(0,0)--(0,1)--(4,1) --(4,0)--(3,0); \draw (1,0)--(1,1); \draw (3,0)--(3,1); \draw (2,0)--(2,1); \end{tikzpicture} }; \end{tikzpicture}} \end{matrix}$\\\hline\hline \end{tabular} \caption{Description of the chambers for the action of $\Z/4\Z$.} \label{camerez4} \end{figure} Notice that the first $4=2^{4-2}$ pictures correspond to simple chambers. Moreover, as predicted by \Cref{TEO1}, the possible shapes for the chamber stairs are $ 6 = (4-1) ! $, and there are 4 different ways to label each chamber stair. Note also that, after having labeled each box appropriately, the first and last chambers in \Cref{camerez4} correspond to $C_G$ and $C_G^{\OP}$ respectively (see \Cref{hilbop}). \end{example} \section{The construction of the tautological bundles \texorpdfstring{$\Calr_C$}{}} The quasi-projective variety $\Calm_C$ is a fine moduli space obtained by GIT as described by King in \cite{KING}. In particular, there exists a universal family $\Calu_C\in\Ob\Coh(\Calm_C\times\A^2)$. The tautological bundle is the pushforward $$\Calr_C=(\pi_{\Calm_C})_*(\Calu_C).$$ It is a vector bundle of rank $k=|G|$ whose fibers are $G$-constellations and, more precisely, over each point $[\Calf]\in\Calm_C$ the fiber $(\Calr_C)_{[\Calf]}$ is canonically isomorphic to the space of global sections $H^0(\A^2,\Calf)$. In this section we give an explicit construction of the tautological bundles $\Calr_C$ for all chambers $C\subset\Theta^{\gen}$ in terms of their chamber stairs. We will adopt the same notation as in \Cref{toric resolution}. \begin{notation} From now on, given a coherent monomial ideal sheaf $\Calk\subset\Calo_{\A^2}$, we denote by $\widetilde{\Calk}$ the $\Calo_Y$-module defined by \[ \widetilde{\Calk}= \varepsilon^*\pi_*\Calk/\tor_{\Calo_Y}\varepsilon^*\pi_*\Calk. \] \end{notation} \begin{lemma}\label{GSVsullecarte} Suppose that $\Calk$ is generated by the monomials $x^{\alpha_1}y^{\beta_1},\ldots,x^{\alpha_s}y^{\beta_s}$. Then, on each toric chart $U_j\subset Y$ with coordinates $(a_j,c_j)$, the sheaf $\widetilde{\Calk}$ agrees with the sheaf $\Calh_j$ associated to the $\C[a_j,c_j]$-module: $$H_j= \frac{K_j}{K_j\cap I_j}\subset \frac{\C[a_j,c_j,x,y]}{K_j\cap I_j},$$ where $K_j$ and $I_j$ are the ideals of $\C[a_j,c_j,x,y]$ given by $$K_j=(x^{\alpha_1}y^{\beta_1},\ldots,x^{\alpha_s}y^{\beta_s})$$ and $$I_j=(a_jy^{k-j}-x^{j},c_jx^{j-1}-y^{k-j+1},a_jc_j-xy),$$ and the gluings on the intersections $U_i\cap U_j$, for $ 1\le i ,j \le k $, are given by: \[ \begin{tikzcd}[row sep=tiny] \Gamma(U_i\cap U_j,\Calh_i) \arrow{r}{\varphi_{ij}} & \Gamma(U_i\cap U_j,\Calh_j) \\ x\arrow[mapsto]{r} & x,\\ y\arrow[mapsto]{r} & y,\\ a_i\arrow[mapsto]{r} & a_j^{i-j+1}c_j^{i-j},\\ c_i\arrow[mapsto]{r} & a_j^{j-i}c_j^{j-i+1}.
\end{tikzcd} \] \end{lemma} \begin{proof} The proof is achieved by direct computation, after noticing that the gluings on the intersections are deduced from the toric description of the quasi-projective variety $\Calm_C$ given at the beginning of \Cref{toric resolution} and, in particular, from Equations \eqref{toricrelations}. \end{proof} \begin{remark} Using the relations given in \eqref{toricrelations}, the modules $H_j$, for $j=1,\ldots,k$, can be regarded as $\C[a_j,c_j]$-submodules of the rational function field $\C(x, y)$. \end{remark} \begin{remark}\label{GSFDGIUSTO} If $x^{\alpha_1}y^{\beta_1},\ldots,x^{\alpha_s}y^{\beta_s}$ are the generators of some $C$-stair $\Gamma_C$ and $\Calk$ is defined as in \Cref{GSVsullecarte}, all the $G$-sFd associated to the toric fibers of $\widetilde{\Calk}$ are substairs of $\Gamma_C$. This is a consequence of Nakayama's Lemma together with the following three facts: \begin{equation}\label{app} \begin{array}{cc} \forall j=1,\ldots,k,\forall i=1,\ldots,s& x^{\alpha_i+1}y^{\beta_i+1}\in(K_j\cap I_j)+(a_j,c_j),\\ \forall j=1,\ldots,k,& x^{\alpha_1}y^{\beta_1+k}\in(K_j\cap I_j)+(a_j,c_j),\\ \forall j=1,\ldots,k,& x^{\alpha_s+k}y^{\beta_s}\in(K_j\cap I_j)+(a_j,c_j). \end{array} \end{equation} The relations \eqref{app} follow from the easy observations that $$x^{\alpha_i}y^{\beta_i}\cdot(a_jc_j-xy)=a_jc_jx^{\alpha_i}y^{\beta_i}-x^{\alpha_i+1}y^{\beta_i+1}\in K_j\cap I_j,$$ $$y^{j-1}\cdot x^{\alpha_1}y^{\beta_1}\cdot(c_jx^{j-1}-y^{k-j+1})=c_jx^{\alpha_1+j-1}y^{\beta_1+j-1}-x^{\alpha_1}y^{\beta_1+k}\in K_j\cap I_j,$$ $$x^{k-j}\cdot x^{\alpha_s}y^{\beta_s}\cdot(a_jy^{k-j}-x^{j})=a_jx^{\alpha_s+k-j}y^{\beta_s+k-j}-x^{\alpha_s+k}y^{\beta_s}\in K_j\cap I_j.$$ \end{remark} In this final part of the paper we state and prove the last of our main theorems. Before giving the proof, we also state some corollaries and prove a few auxiliary results that will be needed. \begin{theorem}\label{costruzione} Let $C\subset\Theta^{\gen}$ be a chamber and let $\Gamma_C\subset\Calt_G$ be a $C$-stair. Suppose that $\Gamma_C$ has $s\ge1$ ordered (see \Cref{orderbox}) generators $v_1,\ldots,v_s$ with associated monomials $$x^{\alpha_1}y^{\beta_1},\ldots,x^{\alpha_s}y^{\beta_s}\in\C[x,y].$$ Consider the ideal sheaf $\Calk= (x^{\alpha_1}y^{\beta_1},\ldots,x^{\alpha_s}y^{\beta_s})\Calo_{\A^2}$. Then $$\Calr_C\cong\varepsilon^*\pi_*\Calk /\tor_{\Calo_{\Calm_C}}(\varepsilon^*\pi_*\Calk).$$ \end{theorem} The following corollaries are direct consequences of \Cref{costruzione}. \begin{corollary}\label{sullecarte} On each toric chart $U_j\subset\Calm_C$ with coordinates $(a_j,c_j)$, the tautological bundle ${\Calr_C}_{|_{U_j}}$ agrees with the sheaf $\Calh_j$ associated to the $\C[a_j,c_j]$-module $H_j$ in \Cref{GSVsullecarte}. \end{corollary} \begin{corollary}\label{nuovo GSV} Under the hypotheses of \Cref{costruzione}, the $\Calo_Y$-module $\widetilde{\Calk}$ is locally free of rank $|G|$. \end{corollary} \begin{corollary}\label{quot} Let $C$ and $\Calk$ be as in \Cref{costruzione}. Then, $\Calm_C$ can be identified with a closed $G$-invariant subvariety of $\Quot_{\Calk}^{|G|}(\A^2)$. \end{corollary} \begin{remark} \Cref{nuovo GSV} is a generalisation of {\cite[Proposition 2.4.]{GSV}} in the abelian setting. \end{remark} \begin{remark}\label{GSVhilb} For the trivial ideal $K=(1)=\C[x,y]$, \Cref{sullecarte} recovers Nakamura's description of the $G$-Hilbert scheme when $G$ is abelian (see \cite{NAKAMURA}).
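As a purely illustrative specialization, recorded only for the reader's convenience and not used in the sequel (the notation is that of \Cref{GSVsullecarte}): for $K=(1)$ one has $K_j=\C[a_j,c_j,x,y]$ and $K_j\cap I_j=I_j$, so that on each chart $U_j$ the module $H_j$ reduces to
\[
H_j\cong \frac{\C[a_j,c_j,x,y]}{(a_jy^{k-j}-x^{j},\,c_jx^{j-1}-y^{k-j+1},\,a_jc_j-xy)},
\]
which is the explicit chartwise presentation to be compared with the local description in \cite{NAKAMURA}.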
\end{remark} \begin{remark}\label{onefibreisok} Notice that, over the origin of the first and the last charts, the $\Calo_{U_1}$-module $\Calh_1$ and the $\Calo_{U_k}$-module $\Calh_k$ have, as toric fibers, the expected $G$-constellations $\Calf_1$ and $\Calf_k$, i.e. $$\Calf_1\cong {\Calh_1}_{0_1}\cong\frac{(x^{\alpha_1}y^{\beta_1})}{(x^{\alpha_1}y^{\beta_1+k},x^{\alpha_1+1}y^{\beta_1})}\subset \frac{\C[x,y]}{(x^{\alpha_1}y^{\beta_1+k},x^{\alpha_1+1}y^{\beta_1})}$$ and $$\Calf_k\cong {\Calh_k}_{0_k}\cong\frac{(x^{\alpha_s}y^{\beta_s})}{(x^{\alpha_s+k}y^{\beta_s},x^{\alpha_s}y^{\beta_s+1})}\subset \frac{\C[x,y]}{(x^{\alpha_s+k}y^{\beta_s},x^{\alpha_s}y^{\beta_s+1})},$$ where $0_i\in U_i$ is, for $i=1,k$, the origin. We prove this only for the origin of $U_k$; the proof for $U_1$ is similar. We start by showing that $$x^{\alpha_i}y^{\beta_i}\in (K_k\cap I_k)+(a_k,c_k) \ \mbox{ for }i=1,\ldots,s-1. $$ Notice that, for all $ i=1,\ldots,s-1$, we have $$\alpha_i\ge0,\quad\beta_i>\beta_{i+1}\ge\beta_{s}\ge 0,\quad\alpha_{i}+k-1\ge \alpha_{i+1} .$$ Therefore, we can write: $$c_kx^{\alpha_i+k-1}y^{\beta_i-1}-x^{\alpha_i}y^{\beta_i}= \begin{cases} c_kx^{\alpha_i+k-1-\alpha_{i+1}}y^{\beta_i-1-\beta_{i+1}}(x^{\alpha_{i+1}}y^{\beta_{i+1}})-x^{\alpha_i}y^{\beta_i}\\ (x^{\alpha_i}y^{\beta_i-1})(c_kx^{k-1}-y), \end{cases}$$ which implies $$x^{\alpha_i}y^{\beta_i}\in(K_k\cap I_k )+(a_k,c_k)\ \forall i=1,\ldots,s-1.$$ Now, we have $$K_k\cap I_k+(a_k,c_k)=(x^{\alpha_s}y^{\beta_s})\cap I_k+(a_k,c_k)=(x^{\alpha_s}y^{\beta_s})\cdot I_k+(a_k,c_k)=(x^{\alpha_s+k}y^{\beta_s},x^{\alpha_s}y^{\beta_s+1},a_k,c_k),$$ which gives \[ {\Calh_k}_{0_k}\cong \frac{(x^{\alpha_s}y^{\beta_s})}{(x^{\alpha_s+k}y^{\beta_s},x^{\alpha_s}y^{\beta_s+1},a_k,c_k)}\subset \frac{\C[x,y,a_k,c_k]}{(x^{\alpha_s+k}y^{\beta_s},x^{\alpha_s}y^{\beta_s+1},a_k,c_k)}. \] \end{remark} \begin{definition}\label{defajcj} Let $K\subset\C[x,y]$ be the ideal generated by the (ordered) set of monomials \[ \Set{x^{\alpha_i}y^{\beta_i}|i=1,\ldots,s} \] associated to the generators of some chamber stair $\Gamma_C$ and let $\Gamma_K=\Set{(m,i)\in \Calt_G|m\in K}$ be the subset of the representation tableau corresponding to $K$. Given a monomial $m_b\in K$ corresponding to a box $b\in \Gamma_C\subset \Gamma_K$, we say that: \begin{itemize} \item the property \textbf{(A$_j$)} holds for $m_b$ (or for $b$) if \[x^{-j}y^{k-j}\cdot m_b \in\Gamma_K,\] \item the property \textbf{(C$_j$)} holds for $m_b$ (or for $b$) if \[x^{j-1}y^{-k+j-1}\cdot m_b \in\Gamma_K.\] \end{itemize} \end{definition} \begin{lemma}\label{fine} If the property \textbf{(A$_j$)} (resp. \textbf{(C$_j$)}) holds for a box $b\in\Gamma_C$, then it holds also for the box after (resp. before) $b$. \end{lemma} \begin{proof} Let $m_b=x^\alpha y^\beta$ be the monomial associated to the box $b$. From \Cref{defajcj}, it follows immediately that, if the property \textbf{(A$_j$)} (resp. \textbf{(C$_j$)}) holds for $b$, then it holds for all the monomials $x^\gamma y^\delta$ such that $\gamma \ge\alpha$ and $\delta\ge \beta$. This proves the Lemma in the case in which the box after (resp. before) $b$ is on the right of (resp. above) $b$. We prove the remaining case for the property \textbf{(C$_j$)} and we leave the similar proof for \textbf{(A$_j$)} to the reader. We have to prove that, if two monomials of the form $x^\alpha y^\beta,x^{\alpha-1} y^\beta$ correspond to some successive boxes in $\Gamma_C$ and the property \textbf{(C$_j$)} holds for $x^\alpha y^\beta$, then it holds also for $x^{\alpha-1} y^\beta$.
In other words, we suppose that \[m_1=x^{\alpha +j-1} y^{\beta-k+j-1}\in K,\] and we want to prove that \[m_2=x^{\alpha +j-2} y^{\beta-k+j-1}\in K.\] Let $b_1,b_2$ be the boxes corresponding to $m_1,m_2$ and let $b$ be the box corresponding to $x^{\alpha-1}y^\beta$. If $b_1\in\Gamma_K\smallsetminus\Gamma_C$, it follows easily that $b_2\in\Gamma_K$. Suppose $b_1\in \Gamma_C$ and consider the connected substair $\Gamma\subset\Gamma_C$ whose first box is $b$ and whose last box is $b_1$. We have, by construction, \[\mathfrak{w}(\Gamma)=j\mbox{ and }\mathfrak{h}(\Gamma)=k-j+2,\] which imply that $\Gamma$ contains $k+1$ boxes. Let $\Gamma'=\Gamma\smallsetminus \{b_1\}$ be the connected $G$-substair of $\Gamma_C$ obtained by removing the last box from $\Gamma$ and let $b'\in\Gamma_C$ be the last box of $\Gamma'$. Now, by construction, $b$ is a vertical left cut for $\Gamma'$ in $\Gamma_C$ and, as a consequence of \Cref{CONCLUSION}, also $b'$ is a vertical cut. Therefore $b'$ must correspond to the monomial $m_2$, from which it directly follows that \[b'=b_2\in \Gamma_C, \] which implies the thesis. \end{proof} \begin{proof}[Proof of \Cref{costruzione}] If we endow the product $\Calm_C\times \A^2$ with the $G$-action defined by \[ \begin{tikzcd}[row sep=tiny] G\times\Calm_C\times \A^2 \arrow{r} &\Calm_C\times \A^2 \\ (g_k^i,p,(x,y))\arrow[mapsto]{r} & (p,(\xi_k^{-i}x,\xi_k^{i}y)), \end{tikzcd} \] where $g_k$ is the (fixed) generator of the cyclic group $G$ (see subsection \ref{section2.1}), it turns out that the $\Calo_{\Calm_C\times \A^2}$-module $\widetilde{\Calk}$ is $G$-equivariant with respect to this action. To prove the theorem, we use the description of $\widetilde{\Calk}$ given in \Cref{GSVsullecarte}. We know from \Cref{freunic} that the tautological bundles $\Calr_C$ and $\Calr_{C_G}$ agree on the complement $U_C$ of the exceptional locus of $\Calm_C$. Moreover, we have, as a consequence of the construction of $\widetilde{\Calk}$ and of \Cref{GSVhilb}, isomorphisms \[ {\Calr_{C}}_{|_{U_C}}\cong{\Calr_{C_G}}_{|_{U_C}}\cong{\widetilde{\Calk}}_{|_{U_C}}\cong \Calo_{U_C}^{\oplus k}. \] Now we show that the fibers of $\Calr_C$ and $\widetilde{\Calk}$ over the toric points of $\Calm_C$ are the same $G$-constellations. This will be enough to prove the statement, because each chamber is uniquely identified by its toric $G$-constellations. We split this part into several steps: \begin{itemize} \item[STEP 0] Over each point $p\in \Calm_C$ the fiber $\widetilde{\Calk}_p$ is a $G$-equivariant $\C[x,y]$-module and, over each origin $0_j\in U_j$, the fiber $\widetilde{\Calk}_{0_j}$ is also $\T^2$-equivariant. This follows from the fact that the ideal $K_j$ is generated by monomials and that the ideal $I_j$ is generated by $G$-eigenbinomials (recall that the group $G$ acts trivially on $U_j$) of positive degrees in the variables $a_j,c_j$. \item[STEP 1] All the $G$-sFd associated to the toric fibers of $\widetilde{\Calk}$ are substairs of the $C$-stair $\Gamma_C$. For this, see \Cref{GSFDGIUSTO}. \item[STEP 2] For all $j=1,\ldots,k$, the $j$-th torus-equivariant $G$-module $\widetilde{\Calk}_{0_j}$ is indecomposable. Let $\Gamma_j\subset\Gamma_C$ be the $G$-sFd associated to $\widetilde{\Calk}_{0_j}$. Then, the $G$-constellation $\widetilde{\Calk}_{0_j}$ is indecomposable if and only if $\Gamma_j$ is connected. First observe that, for a box $b\in\Gamma_C$, each of the properties \textbf{(A$_j$)} and \textbf{(C$_j$)} implies that the corresponding monomial $m_b$ belongs to $(K_j\cap I_j)+(a_j,c_j)$.
This is true because, if $m_b=x^\alpha y^\beta$, then \begin{equation} \label{motivi} \begin{array}{cc} \mbox{\textbf{(A$_j$)}}\Rightarrow & a_jx^{\alpha-j}y^{\beta +k-j}-x^\alpha y^\beta\in K_j\cap I_j, \\ \mbox{\textbf{(C$_j$)}}\Rightarrow & c_jx^{\alpha+j-1}y^{\beta -k+j-1}-x^\alpha y^\beta\in K_j\cap I_j. \end{array} \end{equation} On the other hand, $b\in\Gamma_C\smallsetminus \Gamma_j$ if and only if $m_b\in (K_j \cap I_j)+(a_j,c_j)$. In particular, if $b\in\Gamma_C\smallsetminus \Gamma_j$ then, by construction, at least one of the following relations is true. \begin{enumerate} \item $a_jx^{\alpha-j}y^{\beta +k-j}-x^\alpha y^\beta\in K_j\cap I_j$, \item $ c_jx^{\alpha+j-1}y^{\beta -k+j-1}-x^\alpha y^\beta\in K_j\cap I_j$, \item\label{3} $ a_jc_jx^{\alpha-1}y^{\beta -1}-x^\alpha y^\beta\in K_j\cap I_j$. \end{enumerate} Notice that $b\in\Gamma_C$ implies (see STEP 1) that \eqref{3} cannot hold. Therefore, given $b\in\Gamma_C$, it belongs to $\Gamma_C\smallsetminus\Gamma_j$ if and only if at least one of the two properties $\mbox{\textbf{(A$_j$)}}$ and $\mbox{\textbf{(C$_j$)}}$ holds for $b$. Now, the connectedness of $\Gamma_j$ is a consequence of \Cref{fine}. \item[STEP 3] Let, for all $j=1,\ldots,k$, $\mathfrak m_j \subset \Calo_{\Calm_C,0_j}$ be the maximal ideal, and let \[ F_j= \widetilde{\Calk}_{0_j}/\mathfrak m_j =\widetilde{\Calk}_{0_j}\underset{\Calo_{\Calm_C,0_j}}{\otimes}(\Calo_{\Calm_C,0_j} /\mathfrak m_j) \] be the fiber of the sheaf $\widetilde{\Calk}$ over the point $0_j$. We show now that \[ \dim_\C F_j= k. \] Together with the previous step, this implies that, for all $j=1,\ldots,k$, the $G$-module $ \widetilde{\Calk}_{0_j} $ is a toric $G$-constellation. First notice that, by semicontinuity of the dimension of the fibers of a coherent sheaf (cf. \cite[Example 12.7.2]{HARTSHORNE}), we have \begin{equation} \label{primainequ} \dim_\C F_j\ge k. \end{equation} Let us suppose $j\ge 2$; the case $j=1$ was treated in \Cref{onefibreisok}. Let $\Gamma_j\subset\Gamma_C$ be, as in the previous step, the $G$-sFd associated to $\widetilde{\Calk}_{0_j}$, and let $x^\alpha y^\beta$ be the monomial in $\C[x,y]\subset\C[a_j,c_j,x,y]$ corresponding to the first box of $\Gamma_j$. Then, if the monomial $x^{\alpha+a}y^{\beta+b}$ corresponds to a box of $\Gamma_C$ for some $a\ge j$ and $j-k\le b \le 0$, it has the property \textbf{(A$_j$)}. As in STEP 2, \Cref{fine} implies that $x^{\alpha+a}y^{\beta+b}\notin\Gamma_j$. Suppose that $x^{\gamma}y^{\beta-k+j-1}\in\Gamma_j$ for some $\alpha+1\le\gamma\le \alpha+j$. Then, if we have $x^{\gamma'}y^\beta\in\Gamma_j$ for some $\gamma-j+1\le\gamma'\le \gamma$, the relation \[ c_jx^{\gamma'+j-1} y^{\beta -k+j-1}-x^{\gamma'} y^{\beta} \in K_j\cap I_j \] implies that $ x^{\gamma'} y^{\beta} \in K_j\cap I_j+(a_j,c_j)$. Now, by construction we have $\alpha-j+2\le\gamma'\le\alpha+j$ and we have fixed $j\ge 2$. Thus, $\gamma'=\alpha$ gives the contradiction $x^\alpha y^\beta\notin\Gamma_j$. As a consequence, $x^{\gamma}y^{\beta-k+j-1}\notin\Gamma_j$ for all $\alpha+1\le\gamma\le \alpha+j$. Now, thanks to the connectedness proven in STEP 2, we have $$ \mathfrak{w}(\widetilde{\Calk}_{0_j})\le j \quad \mbox{ and }\quad \mathfrak{h}(\widetilde{\Calk}_{0_j})\le k-j+1, $$ which together imply \[ \dim_\C F_j= \mathfrak{w}(\widetilde{\Calk}_{0_j})+ \mathfrak{h}(\widetilde{\Calk}_{0_j})-1\le k. \] The equality $\dim_\C F_j= k$ follows now from \Cref{primainequ}.
\item[STEP 4] As an immediate consequence of the previous step, for all $j=1,\ldots,k$, the $G$-constellation $\widetilde{\Calk}_{0_j}$ has width $\mathfrak{w}(\widetilde{\Calk}_{0_j})= j$, see \Cref{hwchart}. Hence, for $j=1,\ldots,k$, they are pairwise distinct. \end{itemize} Now, the above listed properties imply that $\widetilde{\Calk}$ is the tautological bundle $\Calr_{C'}$ of some chamber $C'\subset\Theta^{\gen}$ which admits $\Gamma_C$ as a $C'$-stair; by \Cref{unicCstair}, this implies $C'=C$. \end{proof} \begin{remark} As expected, in dimension 3 \Cref{costruzione} is in general false. For instance, given the $(\Z/2\Z)^2$-action on $\A^3$ defined by the inclusion \[ \begin{tikzcd}[row sep=tiny] (\Z/2\Z)^2\arrow{r}{ } & \SL(3,\C) \\ (1,0)\arrow[mapsto]{r} & \begin{pmatrix} -1&0&0\\0&1&0\\0&0&-1 \end{pmatrix},\\ (0,1)\arrow[mapsto]{r} & \begin{pmatrix} 1&0&0\\0&-1&0\\0&0&-1 \end{pmatrix}, \end{tikzcd}\] the quotient singularity $X=\A^3/(\Z/2\Z)^2$ admits four different crepant resolutions $\varepsilon_i:Y_i\rightarrow X$, for $i=1,\ldots,4$. All of them are toric and they are described by the planar graphs in \Cref{Z2Z2}. \begin{figure}[H] \scalebox{1}{ \begin{tikzpicture}[scale=0.65] \draw (-1,2)--(1,2)--(0,0)--(-1,2); \draw (-2,0)--(2,0)--(0,4)--(-2,0); \fill (-2,0) circle (1.2pt); \fill (2,0) circle (1.2pt); \fill (0,4) circle (1.2pt); \fill (-1,2) circle (1.2pt); \fill (0,0) circle (1.2pt); \fill (1,2) circle (1.2pt); \node at (-2.2,-0.2) {$e_1$}; \node at (2.2,-0.2) {$e_2$}; \node at (0,4.2) {$e_3$}; \node at (1.2,2.2) {$v_1$}; \node at (-1.3,2.2) {$v_2$}; \node at (0,-0.3) {$v_3$}; \node at (0,-1.5) {$Y_1$}; \end{tikzpicture} } \scalebox{1}{ \begin{tikzpicture}[scale=0.65] \draw (-1,2)--(1,2)--(0,0); \draw (1,2)--(-2,0); \draw (-2,0)--(2,0)--(0,4)--(-2,0); \fill (-2,0) circle (1.2pt); \fill (2,0) circle (1.2pt); \fill (0,4) circle (1.2pt); \fill (-1,2) circle (1.2pt); \fill (0,0) circle (1.2pt); \fill (1,2) circle (1.2pt); \node at (-2.2,-0.2) {$e_1$}; \node at (2.2,-0.2) {$e_2$}; \node at (0,4.2) {$e_3$}; \node at (1.2,2.2) {$v_1$}; \node at (-1.3,2.2) {$v_2$}; \node at (0,-0.3) {$v_3$}; \node at (0,-1.5) {$Y_2$}; \end{tikzpicture} } \scalebox{1}{ \begin{tikzpicture}[scale=0.65] \draw (0,0)--(-1,2)--(1,2); \draw (-1,2)--(2,0); \draw (-2,0)--(2,0)--(0,4)--(-2,0); \fill (-2,0) circle (1.2pt); \fill (2,0) circle (1.2pt); \fill (0,4) circle (1.2pt); \fill (-1,2) circle (1.2pt); \fill (0,0) circle (1.2pt); \fill (1,2) circle (1.2pt); \node at (-2.2,-0.2) {$e_1$}; \node at (2.2,-0.2) {$e_2$}; \node at (0,4.2) {$e_3$}; \node at (1.2,2.2) {$v_1$}; \node at (-1.3,2.2) {$v_2$}; \node at (0,-0.3) {$v_3$}; \node at (0,-1.5) {$Y_3$}; \end{tikzpicture} } \scalebox{1}{ \begin{tikzpicture}[scale=0.65] \draw (-1,2)--(0,0)--(1,2); \draw (0,0)--(0,4); \draw (-2,0)--(2,0)--(0,4)--(-2,0); \fill (-2,0) circle (1.2pt); \fill (2,0) circle (1.2pt); \fill (0,4) circle (1.2pt); \fill (-1,2) circle (1.2pt); \fill (0,0) circle (1.2pt); \fill (1,2) circle (1.2pt); \node at (-2.2,-0.2) {$e_1$}; \node at (2.2,-0.2) {$e_2$}; \node at (0,4.2) {$e_3$}; \node at (1.2,2.2) {$v_1$}; \node at (-1.3,2.2) {$v_2$}; \node at (0,-0.3) {$v_3$}; \node at (0,-1.5) {$Y_4$}; \end{tikzpicture} } \caption{Toric description of the crepant resolutions of $\A^3/(\Z/2\Z)^2$.} \label{Z2Z2} \end{figure} These diagrams are obtained by considering a fan $\Sigma_i$ for each resolution $Y_i$: each simplex in the planar graph is the intersection of a cone in $\Sigma_i$ with the plane containing the heads of the rays that generate $\Sigma_i$.
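Purely as an illustration (the coordinates $u_1,u_2,u_3,w$ below are introduced only for this aside and are not used elsewhere), a direct check shows that the invariant ring of this action is
\[
\C[x,y,z]^{(\Z/2\Z)^2}=\C[x^2,y^2,z^2,xyz]\cong\frac{\C[u_1,u_2,u_3,w]}{(w^2-u_1u_2u_3)},
\]
so that $X$ is the affine hypersurface $\{w^2=u_1u_2u_3\}\subset\A^4$.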
Notice that $Y_1$ differs from the other resolutions by just one flop $\begin{tikzcd}[row sep=tiny] {Y_1}\arrow[dashed,leftrightarrow]{r}{\sigma_i} & Y_i \end{tikzcd}$ for $i=2,3,4$. Now, let $\widetilde{\Calo}_i$, for $i=1,\ldots,4$, be the torsion-free $\Calo_{Y_i}$-module defined by \[ \widetilde{\Calo}_i=\varepsilon_i^*\pi_*\Calo_{\A^3}/\tor_{\Calo_{Y_i}}\varepsilon_i^*\pi_*\Calo_{\A^3}, \] where $\pi:\A^3\rightarrow X$ is the canonical projection. A direct computation shows that only $\widetilde{\Calo}_1$ is locally free, and, for $i=2,3,4$, the locus where $\widetilde{\Calo}_i$ fails to be locally free coincides with the line flopped by $\sigma_i$. In this setting, it can be shown that the pair $(Y_1,\widetilde{\Calo}_1)$ is canonically isomorphic to the pair $((\Z/2\Z)^2-\Hilb(\A^3),\Calr)$, where $\Calr$ is the tautological bundle. My future project is to work out conditions on an ideal sheaf $\Calk \subset \Calo_{\A^3}$ and a crepant resolution $Y$ of $\A^3/G$, for a finite subgroup $G\subset \SL(3,\C)$, in order to have $\widetilde \Calk$ locally free and isomorphic to $\Calo_{\A^3}[G]$. \end{remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{ANDREWS} George~E. Andrews, \emph{The theory of partitions}, Encyclopedia of Mathematics and its Applications, Vol. 2, Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1976. \bibitem{BRIANCON} Jo\"{e}l Brian\c{c}on, \emph{Description de {$H{\rm ilb}^{n}\C\{x,y\}$}}, Invent. Math. \textbf{41} (1977), no.~1, 45--89. \bibitem{BKR} Tom Bridgeland, Alastair King, and Miles Reid, \emph{The {M}c{K}ay correspondence as an equivalence of derived categories}, J. Amer. Math. Soc. \textbf{14} (2001), no.~3, 535--554. \bibitem{BRUZZO} Ugo Bruzzo, Anna Fino, and Pietro Fr\'{e}, \emph{The {K}\"{a}hler quotient resolution of {$\C^3/\Gamma$} singularities, the {M}c{K}ay correspondence and {$D = 3$} {$\Caln = 2$} {C}hern-{S}imons gauge theories}, Comm. Math. Phys. \textbf{365} (2019), no.~1, 93--214. \bibitem{CASSLO} Heiko Cassens and Peter Slodowy, \emph{On {K}leinian singularities and quivers}, Singularities ({O}berwolfach, 1996), Progr. Math., vol. 162, Birkh\"{a}user, Basel, 1998, pp.~263--288. \bibitem{COX} David~A. Cox, John~B. Little, and Henry~K. Schenck, \emph{Toric varieties}, Graduate Studies in Mathematics, vol. 124, American Mathematical Society, Providence, RI, 2011. \bibitem{CRAWISHII} Alastair Craw and Akira Ishii, \emph{Flops of {$G$}-{H}ilb and equivalences of derived categories by variation of {GIT} quotient}, Duke Math. J. \textbf{124} (2004), no.~2, 259--307. \bibitem{FEDOU} Marie-Pierre Delest and Jean-Marc F\'{e}dou, \emph{Enumeration of skew {F}errers diagrams}, Discrete Math. \textbf{112} (1993), no.~1-3, 65--79. \bibitem{DUFREE} Alan~H. Durfee, \emph{Fifteen characterizations of rational double points and simple critical points}, Enseign. Math. (2) \textbf{25} (1979), no.~1-2, 131--163. \bibitem{EISENBUD} David Eisenbud, \emph{Commutative algebra with a view toward algebraic geometry}, Graduate Texts in Mathematics, vol. 150, Springer-Verlag, New York, 1995. \bibitem{FULTORI} William Fulton, \emph{Introduction to toric varieties}, Annals of Mathematics Studies, vol. 131, Princeton University Press, Princeton, NJ, 1993. \bibitem{FULTREP} William Fulton and Joe Harris, \emph{Representation theory}, Graduate Texts in Mathematics, vol.
129, Springer-Verlag, New York, 1991. \bibitem{GSV} Gérard Gonzalez-Sprinberg and Jean-Louis Verdier, \emph{Construction g\'{e}om\'{e}trique de la correspondance de {M}c{K}ay}, Ann. Sci. \'{E}cole Norm. Sup. (4) \textbf{16} (1983), no.~3, 409--449 (1984). \bibitem{M2} Daniel~R. Grayson and Michael~E. Stillman, \emph{Macaulay2, a software system for research in algebraic geometry}, Available at http://www.math.uiuc.edu/Macaulay2/. \bibitem{HARTSHORNE} Robin Hartshorne, \emph{Algebraic geometry}, Graduate Texts in Mathematics, vol.~52, Springer-Verlag, New York-Heidelberg, 1977. \bibitem{ITOCREP} Yukari Ito, \emph{Crepant resolution of trihedral singularities and the orbifold {E}uler characteristic}, Internat. J. Math. \textbf{6} (1995), no.~1, 33--43. \bibitem{ITOREID} Yukari Ito and Miles Reid, \emph{The {M}c{K}ay correspondence for finite subgroups of {${ \SL}(3,\C)$}}, Higher-dimensional complex varieties ({T}rento, 1994), de Gruyter, Berlin, 1996, pp.~221--240. \bibitem{KING} Alastair~D. King, \emph{Moduli of representations of finite-dimensional algebras}, Quart. J. Math. Oxford Ser. (2) \textbf{45} (1994), no.~180, 515--530. \bibitem{KRON} Peter~Benedict Kronheimer, \emph{The construction of {ALE} spaces as hyper-{K}\"{a}hler quotients}, J. Differential Geom. \textbf{29} (1989), no.~3, 665--683. \bibitem{MARKU} Dimitri Markushevich, \emph{Resolution of {${\C}^3/H_{168}$}}, Math. Ann. \textbf{308} (1997), no.~2, 279--289. \bibitem{ANDREA} Riccardo Moschetti and Andrea~T. Ricolfi, \emph{On coherent sheaves of small length on the affine plane}, J. Algebra \textbf{516} (2018), 471--489. \bibitem{NAKAMURA} Iku Nakamura, \emph{Hilbert schemes of abelian group orbits}, J. Algebraic Geom. \textbf{10} (2001), no.~4, 757--779. \bibitem{NOLLA1} \'{A}lvaro Nolla~de Celis, \emph{{$G$}-graphs and special representations for binary dihedral groups in {${\rm GL}(2,\C)$}}, Glasg. Math. J. \textbf{55} (2013), no.~1, 23--57. \bibitem{NOLLA2} \'{A}lvaro Nolla~de Celis and Yuhi Sekiya, \emph{Flops and mutations for crepant resolutions of polyhedral singularities}, Asian J. Math. \textbf{21} (2017), no.~1, 1--45. \bibitem{REID} Miles Reid, \emph{La correspondance de {M}c{K}ay}, no. 276, 2002, S\'{e}minaire Bourbaki, Vol. 1999/2000, pp.~53--72. \bibitem{ROAN1} Shi-Shyr Roan, \emph{On {$c_1=0$} resolution of quotient singularity}, Internat. J. Math. \textbf{5} (1994), no.~4, 523--536. \bibitem{ROAN2} Shi-Shyr Roan, \emph{Minimal resolutions of {G}orenstein orbifolds in dimension three}, Topology \textbf{35} (1996), no.~2, 489--508. \bibitem{INFIRRI} Alexander~V. Sardo-Infirri, \emph{{Resolutions of orbifold singularities and flows on the McKay quiver}}, 1996. \bibitem{WATANABE} Keiichi Watanabe, \emph{Certain invariant subrings are {G}orenstein. {I}, {II}}, Osaka Math. J. \textbf{11} (1974), 1--8; ibid. {\bf 11 (1974), 379--388}. \bibitem{prova2} Ryo Yamagishi, \emph{{Moduli of $G$-constellations and crepant resolutions II: the Craw-Ishii conjecture}}, 2022. \bibitem{YU} Stephen S.-T. Yau and Yung Yu, \emph{Gorenstein quotient singularities in dimension three}, Mem. Amer. Math. Soc. \textbf{105} (1993), no.~505, viii+88. 
\end{thebibliography} \bigskip \medskip \noindent \emph{Michele Graffeo}, \texttt{[email protected]} \\ \textsc{Scuola Internazionale Superiore di Studi Avanzati (SISSA), Via Bonomea 265, 34136 Trieste, Italy} \end{document} \usepackage[colorinlistoftodos]{todonotes}\definecolor{antiquewhite}{rgb}{0.98, 0.92, 0.84} \definecolor{buff}{rgb}{0.94, 0.86, 0.51} \definecolor{palecopper}{rgb}{0.85, 0.54, 0.4} \definecolor{fluorescentyellow}{rgb}{0.8, 1.0, 0.0} \definecolor{bole}{rgb}{0.47, 0.27, 0.23} \newcommand{\towrite}[1]{\leavevmode\todo[inline,color=antiquewhite]{\textbf{TO WRITE: }#1}} \newcommand{\tonote}[1]{\todo[size=\tiny,color=buff]{\textbf{NOTE: } #1}} \newcommand{\tofix}[1]{\todo[size=\tiny,color=palecopper]{\textbf{TO FIX: } #1}} \newcommand{\toask}[1]{\todo[size=\tiny,color=fluorescentyellow]{\textbf{QUESTION: } #1}} \usepackage{amsmath,float} \usetikzlibrary{decorations.markings} \definecolor{cornellred}{rgb}{0.7, 0.11, 0.11} \definecolor{britishracinggreen}{rgb}{0.0, 0.26, 0.15} \definecolor{cobalt}{rgb}{0.0, 0.28, 0.67} \DeclareSymbolFont{usualmathcal}{OMS}{cmsy}{m}{n} \DeclareSymbolFontAlphabet{\mathcal}{usualmathcal} \newcommand{\TT}{\mathbf{T}} \newcommand{\bfk}{\mathbf{k}} \newcommand{\A}{{\mathbb{A}}} \newcommand{\B}{{\mathbb{B}}} \newcommand{\C}{{\mathbb{C}}} \newcommand{\E}{{\mathbb{E}}} \newcommand{\F}{{\mathbb{F}}} \newcommand{\G}{{\mathbb{G}}} \newcommand{\I}{{\mathbb{I}}} \newcommand{\J}{{\mathbb{J}}} \newcommand{\K}{{\mathbb{K}}} \newcommand{\M}{{\mathbb{M}}} \newcommand{\N}{{\mathbb{N}}} \newcommand{\Proj}{{\mathbb{P}}} \newcommand{\Q}{{\mathbb{Q}}} \newcommand{\R}{{\mathbb{R}}} \newcommand{\T}{{\mathbb{T}}} \newcommand{\U}{{\mathbb{U}}} \newcommand{\V}{{\mathbb{V}}} \newcommand{\W}{{\mathbb{W}}} \newcommand{\X}{{\mathbb{X}}} \newcommand{\Y}{{\mathbb{Y}}} \newcommand{\Z}{{\mathbb{Z}}} \newcommand{\CA}{{\mathcal A}} \newcommand{\CB}{{\mathcal B}} \newcommand{\CC}{{\mathcal C}} \newcommand{\CD}{{\mathcal D}} \newcommand{\CE}{{\mathcal E}} \newcommand{\CF}{{\mathcal F}} \newcommand{\CG}{{\mathcal G}} \newcommand{\CH}{{\mathcal H}} \newcommand{\CI}{{\mathcal I}} \newcommand{\CJ}{{\mathcal J}} \newcommand{\CK}{{\mathcal K}} \newcommand{\CL}{{\mathcal L}} \newcommand{\CM}{{\mathcal M}} \newcommand{\CN}{{\mathcal N}} \newcommand{\CO}{{\mathcal O}} \newcommand{\CP}{{\mathcal P}} \newcommand{\CQ}{{\mathcal Q}} \newcommand{\CR}{{\mathcal R}} \newcommand{\CS}{{\mathcal S}} \newcommand{\CT}{{\mathcal T}} \newcommand{\CU}{{\mathcal U}} \newcommand{\CV}{{\mathcal V}} \newcommand{\CW}{{\mathcal W}} \newcommand{\CX}{{\mathcal X}} \newcommand{\CY}{{\mathcal Y}} \newcommand{\CZ}{{\mathcal Z}} \newcommand{\Fa}{{\mathfrak{a}}} \newcommand{\Fb}{{\mathfrak{b}}} \newcommand{\Fc}{{\mathfrak{c}}} \newcommand{\Fd}{{\mathfrak{d}}} \newcommand{\Fe}{{\mathfrak{e}}} \newcommand{\Ff}{{\mathfrak{f}}} \newcommand{\Fg}{{\mathfrak{g}}} \newcommand{\Fh}{{\mathfrak{h}}} \newcommand{\Fi}{{\mathfrak{i}}} \newcommand{\Fj}{{\mathfrak{j}}} \newcommand{\Fk}{{\mathfrak{k}}} \newcommand{\Fl}{{\mathfrak{l}}} \newcommand{\Fm}{{\mathfrak{m}}} \newcommand{\Fn}{{\mathfrak{n}}} \newcommand{\Fo}{{\mathfrak{o}}} \newcommand{\Fp}{{\mathfrak{p}}} \newcommand{\Fq}{{\mathfrak{q}}} \newcommand{\Fr}{{\mathfrak{r}}} \newcommand{\Fs}{{\mathfrak{s}}} \newcommand{\Ft}{{\mathfrak{t}}} \newcommand{\Fu}{{\mathfrak{u}}} \newcommand{\Fv}{{\mathfrak{v}}} \newcommand{\Fw}{{\mathfrak{w}}} \newcommand{\Fx}{{\mathfrak{x}}} \newcommand{\Fy}{{\mathfrak{y}}} \newcommand{\Fz}{{\mathfrak{z}}} \newcommand{\FA}{{\mathfrak{A}}} 
\newcommand{\FB}{{\mathfrak{B}}} \newcommand{\FC}{{\mathfrak{C}}} \newcommand{\FD}{{\mathfrak{D}}} \newcommand{\FE}{{\mathfrak{E}}} \newcommand{\FF}{{\mathfrak{F}}} \newcommand{\FG}{{\mathfrak{G}}} \newcommand{\FH}{{\mathfrak{H}}} \newcommand{\FI}{{\mathfrak{I}}} \newcommand{\FJ}{{\mathfrak{J}}} \newcommand{\FK}{{\mathfrak{K}}} \newcommand{\FL}{{\mathfrak{L}}} \newcommand{\FM}{{\mathfrak{M}}} \newcommand{\FN}{{\mathfrak{N}}} \newcommand{\FO}{{\mathfrak{O}}} \newcommand{\FP}{{\mathfrak{P}}} \newcommand{\FQ}{{\mathfrak{Q}}} \newcommand{\FR}{{\mathfrak{R}}} \newcommand{\FS}{{\mathfrak{S}}} \newcommand{\FT}{{\mathfrak{T}}} \newcommand{\FU}{{\mathfrak{U}}} \newcommand{\FV}{{\mathfrak{V}}} \newcommand{\FW}{{\mathfrak{W}}} \newcommand{\FX}{{\mathfrak{X}}} \newcommand{\FY}{{\mathfrak{Y}}} \newcommand{\FZ}{{\mathfrak{Z}}} \newcommand{\ddd}{\mathbf{d}} \newcommand{\ord}{\mathrm{ord}} \newcommand{\divisor}{\mathrm{div}} \newcommand{\codim}{\mathrm{codim}} \newcommand{\simto}{\,\widetilde{\to}\,} \newcommand{\dq}{{q \frac{d}{dq}}} \newcommand{\pt}{{\mathsf{pt}}} \newcommand{\ch}{{\mathrm{ch}}} \newcommand{\td}{{\mathrm{td}}} x}{\mathsf{fix}} \newcommand{\mov}{\mathsf{mov}} \newcommand{\MF}{{\mathsf{MF}}} \DeclareMathOperator{\Hilb}{Hilb} \DeclareMathOperator{\tor}{Tor} \DeclareMathOperator{\Eu}{Eu} \DeclareMathOperator{\im}{image} \DeclareMathOperator{\Con}{Con} \DeclareMathOperator{\Sets}{Sets} \DeclareMathOperator{\Sch}{Sch} \DeclareMathOperator{\Quot}{Quot} \DeclareMathOperator{\con}{con} \DeclareMathOperator{\red}{red} \DeclareMathOperator{\Der}{Der} \DeclareMathOperator{\nilp}{nilp} \DeclareMathOperator{\coker}{coker} \DeclareMathOperator{\depth}{depth} \DeclareMathOperator{\ann}{ann} \DeclareMathOperator{\Ann}{Ann} \DeclareMathOperator{\AP}{AP} \DeclareMathOperator{\BPS}{BPS} \DeclareMathOperator{\Td}{Td} \DeclareMathOperator{\BPSs}{\CB\CP\CS} \DeclareMathOperator{\IC}{IC} \DeclareMathOperator{\ICs}{\CI\CC} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\Mod}{Mod} \DeclareMathOperator{\mot}{mot} \DeclareMathOperator{\coh}{coh} \DeclareMathOperator{\crr}{cr} \DeclareMathOperator{\Per}{Per} \DeclareMathOperator{\Coh}{Coh} \DeclareMathOperator{\QCoh}{QCoh} \DeclareMathOperator{\MHS}{MHS} \DeclareMathOperator{\MMHS}{MMHS} \DeclareMathOperator{\MHM}{MHM} \DeclareMathOperator{\MMHM}{MMHM} \DeclareMathOperator{\Chow}{Chow} \DeclareMathOperator{\Frac}{Frac} \DeclareMathOperator{\HTop}{HTop} \DeclareMathOperator{\Orb}{Orb} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\vd}{vd} \DeclareMathOperator{\ob}{ob} \DeclareMathOperator{\Ob}{Ob} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\Obs}{Obs} \DeclareMathOperator{\Boxtimes}{\mathlarger{\mathlarger{\boxtimes}}} \DeclareMathOperator{\Art}{Art} \DeclareMathOperator{\vir}{\mathrm{vir}} \DeclareMathOperator{\twist}{\text{sh}} \DeclareMathOperator{\Exp}{Exp} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\KK}{K} \DeclareMathOperator{\Db}{D^b} \DeclareMathOperator{\D}{D} \DeclareMathOperator{\Dlf}{D^{\geq,\textrm{lf}}} \DeclareMathOperator{\Rep}{Rep} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\PGL}{PGL} \DeclareMathOperator{\Mat}{Mat} \DeclareMathOperator{\mon}{mon} \DeclareMathOperator{\tw}{tw} \DeclareMathOperator{\cyc}{cyc} \DeclareMathOperator{\HN}{HN} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\St}{St} \DeclareMathOperator{\length}{length} \DeclareMathOperator{\NCHilb}{NCHilb} \DeclareMathOperator{\NCQuot}{Quot} \DeclareMathOperator{\colim}{colim} \renewcommand{\ss}{\operatorname{ss}} 
\newcommand{\phim}[1]{\phi^{\mathrm{mon}}_{#1}} \newcommand{\muhat}{\hat{\mu}} \newcommand{\derived}{\mathbf{D}} \newcommand{\ff}{\mathbf{f}} \newcommand{\BBS}{\mathrm{BBS}} \newcommand{\HC}{\mathsf{hc}} \newcommand{\HO}{\mathrm{H}} \newcommand{\fra}{\mathrm{fr}} \newcommand{\crit}{\operatorname{crit}} \newcommand{\zetass}{\zeta\textrm{-ss}} \newcommand{\zetast}{\zeta\textrm{-st}} \newcommand*{\defeq}{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =} \newcommand{\into}{\hookrightarrow} \newcommand{\onto}{\twoheadrightarrow} \DeclareFontFamily{OT1}{rsfs}{} \DeclareFontShape{OT1}{rsfs}{n}{it}{<-> rsfs10}{} \DeclareMathAlphabet{\curly}{OT1}{rsfs}{n}{it} \renewcommand\hom{\mathscr{H}\kern-0.3em\mathit{om}} \newcommand\ext{\curly Ext} \newcommand\Ext{\operatorname{Ext}} \newcommand\Hom{\operatorname{Hom}} \newcommand\End{\operatorname{End}} \DeclareMathOperator{\lHom}{\mathscr{H}\kern-0.3em\mathit{om}} \DeclareMathOperator{\RRlHom}{\mathbf{R}\kern-0.025em\mathscr{H}\kern-0.3em\mathit{om}} \DeclareMathOperator{\RHom}{{\mathbf{R}\mathrm{Hom}}} \DeclareMathOperator{\lExt}{{\mathscr{E}\kern-0.2em\mathit{xt}}} \DeclareMathOperator{\Def}{Def} \DeclareMathOperator{\Jac}{Jac} \DeclareMathOperator{\pr}{pr} \DeclareMathOperator{\At}{At} \newcommand{\RR}{\mathbf R} \newcommand{\hint}{\textbf{Hint}} \newcommand\Id{\operatorname{Id}} \newcommand\Spec{\operatorname{Spec}} \newcommand\Supp{\operatorname{Supp}} \newcommand\Tot{\operatorname{Tot}} \newcommand\inv{\operatorname{inv}} \newcommand\id{\operatorname{id}} \newcommand\SFA{\mathsf{A}} \newcommand\SFB{\mathsf{B}} \newcommand\cA{\curly A} \newcommand{\II}{{I^{\bullet}}} \newcommand{\JJ}{{J^{\bullet}}} \newcommand{\Mbar}{{\overline M}} \newcommand{\Pic}{\mathop{\rm Pic}\nolimits} \newcommand{\PT}{\mathsf{PT}} \newcommand{\DT}{\mathsf{DT}} \newcommand{\GW}{\mathsf{GW}} \newcommand{\SFQ}{\mathsf{Q}} \newcommand{\HH}{\mathrm{H}} \newcommand{\Calo}{\mathscr O} \newcommand{\Cali}{\mathscr I} \newcommand{\Calk}{\mathscr K} \makeatletter \newenvironment{proofof}[1]{\par \pushQED{\qed} \normalfont \topsep6\p@\@plus6\p@\relax \trivlist \item[\hskip3\labelsep \itshape Proof of #1\@addpunct{.}]\ignorespaces }{ \popQED\endtrivlist\@endpefalse } \makeatother \usepackage[all]{xy} \usepackage{tikz} \usepackage{tikz-cd} \usepackage{adjustbox} \usepackage{rotating} \usepackage{comment} \newcommand*{\isoarrow}[1]{\arrow[#1,"\rotatebox{90}{\(\sim\)}" ]} \usetikzlibrary{matrix,shapes,intersections,arrows,decorations.pathmorphing} \tikzset{commutative diagrams/arrow style=math font} \tikzset{commutative diagrams/.cd, mysymbol/.style={start anchor=center,end anchor=center,draw=none}} \newcommand\MySymb[2][\square]{ \arrow[mysymbol]{#2}[description]{#1}} \tikzset{ shift up/.style={ to path={([yshift=#1]\tikztostart.east) -- ([yshift=#1]\tikztotarget.west) \tikztonodes} } } \newenvironment{citazione} {\begin{quotation}\normalsize} {\end{quotation}} \theoremstyle{definition} \newtheorem*{lemma*}{Lemma} \newtheorem*{theorem*}{Theorem} \newtheorem*{example*}{Example} \newtheorem*{fact*}{Fact} \newtheorem*{notation*}{Notation} \newtheorem*{definition*}{Definition} \newtheorem*{prop*}{Proposition} \newtheorem*{remark*}{Remark} \newtheorem*{corollary*}{Corollary} \newtheorem*{solution}{Solution} \newtheorem*{conventions*}{Conventions} \newtheorem*{convention*}{Convention} \newtheorem{definition}{Definition}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{problem}[definition]{Problem} 
\newtheorem{example}[definition]{Example} \newtheorem{fact}[definition]{Fact} \newtheorem{conventions}[definition]{Conventions} \newtheorem{convention}{Convention} \newtheorem{question}[definition]{Question} \newtheorem{notation}[definition]{Notation} \newtheorem{remark}[definition]{Remark} \newtheorem{conjecture}[definition]{Conjecture} \newtheorem{claim}[definition]{Claim} \newtheorem{observation}[definition]{Observation} \newtheoremstyle{thm} {3mm} {3mm} {\slshape} {0mm} {\bfseries} {.} {1mm} {}\theoremstyle{thm} \newtheorem{theorem}[definition]{Theorem} \newtheorem{corollary}[definition]{Corollary} \newtheorem{lemma}[definition]{Lemma} \newtheorem{prop}[definition]{Proposition} \newtheorem{thm}{Theorem} \renewcommand*{\thethm}{\Alph{thm}} \newtheoremstyle{ex} {3mm} {3mm} {} {0mm} {\scshape} {.} {1mm} {}\theoremstyle{ex} \newtheorem{exercise}[definition]{Exercise} \newtheoremstyle{sol} {3mm} {3mm} {} {0mm} {\scshape} {.} {1mm} {}\theoremstyle{sol} \newtheorem*{Acknowledgments*}{Acknowledgments} \newcommand{\virg}{``} \newcommand{\p}{{\mathfrak{p}}} \newcommand{\q}{{\mathfrak{q}}} \newcommand{\nn}{{\mathfrak{n}}} \newcommand{\mm}{{\mathfrak{m}}} \newcommand{\Calt}{{\mathscr{T}}} \newcommand{\Cale}{{\mathscr{E}}} \newcommand{\Caloo}{{\mathcal{O}}} \newcommand{\Cala}{{\mathcal{A}}} \newcommand{\Calb}{{\mathscr{B}}} \newcommand{\Calc}{{\mathcal{C}}} \newcommand{\Calf}{{\mathscr{F}}} \newcommand{\Cald}{{\mathcal{D}}} \newcommand{\Calx}{{\mathcal{X}}} \newcommand{\Calp}{{\mathcal{P}}} \newcommand{\Caly}{{\mathcal{Y}}} \newcommand{\Calm}{{\mathscr{M}}} \newcommand{\Caln}{{\mathcal{N}}} \newcommand{\Calq}{{\mathcal{Q}}} \newcommand{\Calz}{{\mathcal{Z}}} \newcommand{\Calh}{{\mathscr{H}}} \newcommand{\Calkk}{{\mathcal{K}}} \newcommand{\Call}{{\mathcal{L}}} \newcommand{\Calg}{{\mathscr{G}}} \newcommand{\Calii}{{\mathcal{I}}} \newcommand{\Calr}{{\mathscr{R}}} \newcommand{\Cals}{{\mathcal{S}}} \newcommand{\Calu}{{\mathscr{U}}} \newcommand{\Calv}{{\mathcal{V}}} \newcommand{\cano}{{\mathcal{C}an}} \newcommand{\scop}{{\hat{\mathbb{A}}^2}} \newcommand{\sing}{{\mbox{Sing}}} \newcommand{\chain}{{\mbox{CHAIN}}} \newcommand{\znz}{{\sfrac{\Z}{n\Z}}} \newcommand{\zpz}{{\sfrac{\Z}{p\Z}}} \newcommand{\zqz}{{\sfrac{\Z}{q\Z}}} \newcommand{\zmz}{{\sfrac{\Z}{m\Z}}} \newcommand{\zhz}{{\sfrac{\Z}{h\Z}}} \newcommand{\zkz}{{\sfrac{\Z}{k\Z}}} \DeclareMathOperator{\sym}{Sym} \newcommand{\sfrac}[2]{ {\raise0.8ex\hbox{$#1$} \!\mathord{\left/ {\vphantom {#1 #2}}\right.\kern-\nulldelimiterspace} \!\lower0.8ex\hbox{$#2$}} } \newcommand{\ssum}[2]{ \underset{#1}{\overset{#2}{\sum}} } \newcommand{\sprod}[2]{ \underset{#1}{\overset{#2}{\prod}} } \newcommand{\pfrac}[2]{ \frac{\partial{#1}}{\partial{#2}} } \newcommand{\uunderset}[2]{ \underset{\mbox{\tiny$\begin{matrix}#1 \end{matrix}$}}{#2} } \def\dashmapsto{\mathrel{\mapstochar\dashrightarrow}} \DeclareMathOperator{\bul}{{\mbox{\tiny$\bullet$}}} \newcommand{\ppfrac}[3]{ \frac{\partial{#1}}{\partial{#2}}\big|_{#3} } \newcommand{\curv}[2]{{#1}_{\mbox{\tiny curv}}^{[#2]}} \newcommand{\insi}[2]{ \left\{\ {#1}\ \left| \ {#2}\ \right.\right\} } \newcommand{\numci}[1]{ \begin{tikzpicture}\node at (0,0) {#1}; \draw (0,0) circle (0.25); \end{tikzpicture} } \newcommand{\Quadrant}[2]{ \foreach \i in {0,...,{#1}} \draw[thick] (\i,0)--(\i,{#2+1}); \foreach \j in {1,...,{#2}} \draw[thick] (0,\j)--({#1},\j); \draw[thick] (0,1)--(0,0)--({#1},0)--({#1},1); \draw[thick] (0,{#2})--(0,{#2+1})--({#1},{#2+1})--({#1},{#2}); } \newcommand{\Bl}[2]{\bl_{{#1}}(#2)} 
\newcommand{\consteta}[1]{{G\mbox{-}\Const_{#1}}} \newcommand{\funz}[1]{ {<#1,\underline{\ \ }>} } \newcommand{\wwedge}[2]{ {\underset{#2}{\overset{#1}{\bigwedge}}} } \DeclareMathOperator{\Parallel}{\begin{matrix}\begin{tikzpicture}\draw (0,0)--(0.5,0)--(0.75,0.25)--(0.25,0.25)--(0,0); \end{tikzpicture} \end{matrix}} \DeclareMathOperator{\OP}{OP} \DeclareMathOperator{\cl}{cl} \DeclareMathOperator{\bl}{Bl} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\reg}{reg} \DeclareMathOperator{\colen}{colen} \DeclareMathOperator{\Lip}{Lip} \DeclareMathOperator{\osc}{osc} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\Sur}{Sur} \DeclareMathOperator{\gr}{Gr} \DeclareMathOperator{\comp}{comp} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Kom}{Kom} \DeclareMathOperator{\Irr}{Irr} \DeclareMathOperator{\Com}{Com} \DeclareMathOperator{\Mult}{Mult} \DeclareMathOperator{\Clust}{Clust} \DeclareMathOperator{\Const}{Const} \DeclareMathOperator{\eext}{\Cale xt} \DeclareMathOperator{\pa}{p_a} \DeclareMathOperator{\pg}{p_g} \DeclareMathOperator{\dho}{dh} \DeclareMathOperator{\invert}{Invert} \DeclareMathOperator{\dhp}{dh_p} \DeclareMathOperator{\dhi}{dh_i} \DeclareMathOperator{\Ral}{\mathcal{R-}alg} \DeclareMathOperator{\sesu}{setsupp} \DeclareMathOperator{\trdeg}{trdeg} \DeclareMathOperator{\htt}{ht} \DeclareMathOperator{\coht}{coht} \DeclareMathOperator{\Max}{Max} \DeclareMathOperator{\gras}{Grass} \DeclareMathOperator{\graf}{graf} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\topd}{topdim} \DeclareMathOperator{\conv}{Conv} \DeclareMathOperator{\inte}{int} \DeclareMathOperator{\cliff}{Cliff} \DeclareMathOperator{\ost}{OSt} \DeclareMathOperator{\lk}{Lk} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\gen}{gen} \DeclareMathOperator{\exc}{Exc} \DeclareMathOperator{\ran}{ran} \DeclareMathOperator{\sheaf}{Sheaf} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Div}{Div} \DeclareMathOperator{\rot}{rot} \DeclareMathOperator{\ba}{ba} \DeclareMathOperator{\gon}{gon} \DeclareMathOperator{\Sl}{Sl} \DeclareMathOperator{\dive}{div} \DeclareMathOperator{\mult}{mult} \DeclareMathOperator{\injrad}{injrad} \DeclareMathOperator{\pdim}{dim_{pr}} \DeclareMathOperator{\SL}{SL} \newtheorem{innercustomthm}{Theorem} \newenvironment{customthm}[1] {\renewcommand\theinnercustomthm{#1}\innercustomthm} {\endinnercustomthm}
2205.07490v2
http://arxiv.org/abs/2205.07490v2
Graded Hecke algebras and equivariant constructible sheaves on the nilpotent cone
\documentclass[11pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{latexsym} \usepackage{graphicx} \usepackage{hyperref} \usepackage{enumerate} \usepackage[all]{xy} \usepackage{chngcntr} \setlength{\unitlength}{1cm} \setlength{\topmargin}{0cm} \setlength{\textheight}{22cm} \setlength{\oddsidemargin}{1cm} \setlength{\textwidth}{14cm} \setlength{\voffset}{-1cm} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}{Conjecture} \newtheorem{thmintro}{Theorem} \newtheorem{propintro}[thmintro]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{cond}[thm]{Condition} \newtheorem{ex}[thm]{Example} \renewcommand{\thethmintro}{\Alph{thmintro}} \newcommand{\N}{\mathbb N} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \newcommand{\R}{\mathbb R} \newcommand{\C}{\mathbb C} \newcommand{\F}{\mathbb F} \newcommand{\mf}{\mathfrak} \newcommand{\mc}{\mathcal} \newcommand{\mb}{\mathbf} \newcommand{\mh}{\mathbb} \def\cC{{\mathcal C}} \def\cF{{\mathcal F}} \def\cG{{\mathcal G}} \def\cL{{\mathcal L}} \def\cS{{\mathcal S}} \def\cR{{\mathcal R}} \def\cV{{\mathcal V}} \def\cE{{\mathcal E}} \def\Irr{{\rm Irr}} \newcommand{\mr}{\mathrm} \newcommand{\ind}{\mathrm{ind}} \newcommand{\enuma}[1]{\begin{enumerate}[\textup{(}a\textup{)}] {#1} \end{enumerate}} \newcommand{\Fr}{\mathrm{Frob}} \newcommand{\Sc}{\mathrm{sc}} \newcommand{\ad}{\mathrm{ad}} \newcommand{\cusp}{\mathrm{cusp}} \newcommand{\nr}{\mathrm{nr}} \newcommand{\unr}{\mathrm{unr}} \newcommand{\Wr}{\mathrm{wr}} \newcommand{\cpt}{\mathrm{cpt}} \newcommand{\Rep}{\mathrm{Rep}} \newcommand{\Res}{\mathrm{Res}} \newcommand{\af}{\mathrm{aff}} \def\tor{{\rm tor}} \newcommand{\unip}{\mathrm{unip}} \newcommand{\der}{\mathrm{der}} \newcommand{\red}{\mathrm{red}} \newcommand{\matje}[4]{\left(\begin{smallmatrix} #1 & #2 \\ #3 & #4 \end{smallmatrix}\right)} \newcommand{\inp}[2]{\langle #1 , #2 \rangle} \newcommand{\Mod}{\mathrm{Mod}} \newcommand{\Hom}{\mathrm{Hom}} \newcommand{\End}{\mathrm{End}} \newcommand{\mi}{\text{\bf -}} \newcommand{\sfa}{\mathsf{a}} \newcommand{\sfb}{\mathsf{b}} \newcommand{\sfq}{\mathsf{q}} \newcommand{\an}{\mathrm{an}} \newcommand{\me}{\mathrm{me}} \newcommand{\un}{\mathrm{un}} \newcommand{\isom}{\xrightarrow{\sim}} \newcommand{\triv}{\mathrm{triv}} \newcommand{\ds}{\displaystyle} \newcommand{\sgn}{\mathrm{sgn}} \newcommand{\pt}{\mathrm{pt}} \newcommand{\IM}{\mathrm{IM}} \newcommand{\Zy}{{Z^\circ_{G \times \C^\times} (y)}} \newcommand{\Modf}[1]{\mathrm{Mod}_{\mathrm{fl}, #1}} \newcommand{\Ad}{\mathrm{Ad}} \newcommand{\pr}{\mathrm{pr}} \newcommand{\reg}{\mathrm{reg}} \newcommand{\IC}{\mathrm{IC}} \newcommand{\Ql}{\overline{\mathbb Q_\ell}} \newcommand{\Ext}{\mathrm{Ext}} \begin{document} \title[Hecke algebras and constructible sheaves on the nilpotent cone]{Graded Hecke algebras and equivariant constructible sheaves on the nilpotent cone} \date{\today} \subjclass[2010]{20C08, 14F08, 22E57} \maketitle \vspace{5mm} \begin{center} {\Large Maarten Solleveld} \\[1mm] IMAPP, Radboud Universiteit Nijmegen\\ Heyendaalseweg 135, 6525AJ Nijmegen, the Netherlands \\ email: [email protected] \end{center} \vspace{5mm} \begin{abstract} Graded Hecke algebras can be constructed geometrically, with constructible sheaves and equivariant cohomology. 
The input consists of a complex reductive group $G$ (possibly disconnected) and a cuspidal local system on a nilpotent orbit for a Levi subgroup of $G$. We prove that every such ``geometric" graded Hecke algebra is naturally isomorphic to the endomorphism algebra of a certain $G \times \C^\times$-equivariant semisimple complex of sheaves on the nilpotent cone $\mf g_N$ in the Lie algebra of $G$. From there we provide an algebraic description of the $G \times \C^\times$-equivariant bounded derived category of constructible sheaves on $\mf g_N$. Namely, it is equivalent with the bounded derived category of finitely generated differential graded modules over a suitable direct sum of graded Hecke algebras. This can be regarded as a categorification of graded Hecke algebras. This paper prepares for a study of representations of reductive $p$-adic groups with a fixed infinitesimal central character. In the sequel papers \cite{SolSheaves,SolStand}, this will lead to proofs of the generalized injectivity conjecture and of the Kazhdan--Lusztig conjecture for $p$-adic groups. \end{abstract} \vspace{5mm} \tableofcontents \section*{Introduction} The story behind this paper started with the seminal work of Kazhdan and Lusztig \cite{KaLu}. They showed that an affine Hecke algebra $\mc H$ is naturally isomorphic with a $K$-group of equivariant coherent sheaves on the Steinberg variety of a complex reductive group. (Here $\mc H$ has a formal variable $\mb q$ as its single parameter and the reductive group must have simply connected derived group.) This isomorphism enables one to regard the category of equivariant coherent sheaves on that particular variety as a categorification of an affine Hecke algebra. Later that became quite an important theme in the geometric Langlands program, see for instance \cite{Bez,ChGi,MiVi}. Our paper is inspired by the quest for a generalization of such a categorification of $\mc H$ to affine Hecke algebras with more than one $q$-parameter. That is relevant because such algebras arise in abundance from reductive $p$-adic groups and types \cite[\S 2.4]{ABPS}. However, to date it is unclear how several independent $q$-parameters can be incorporated in a setup with equivariant $K$-theory or $K$-homology. The situation improves when one formally completes an affine Hecke algebra with respect to (the kernel of) a central character, as in \cite{Lus-Gr}. Such a completion is Morita equivalent with a completion of a graded Hecke algebra with respect to a central character.\\ Graded Hecke algebras $\mh H$ with several parameters (now typically called $k$) do admit a geometric interpretation \cite{Lus-Cusp1,Lus-Cusp2}. (Not all combinations of parameters occur, though; there are conditions on the ratios between the different $k$-parameters.) For this reason graded Hecke algebras, instead of affine Hecke algebras, play the main role in this paper. Such algebras, and minor generalizations called twisted graded Hecke algebras, appear in several independent ways. Consider a connected reductive group $\mc G$ defined over a non-archimedean local field $F$. Let $\Rep (\mc G (F))^{\mf s}$ be any Bernstein block in the category of (complex, smooth) $\mc G (F)$-representations. Locally on the space of characters of the Bernstein centre of $\mc G (F)$, $\Rep (\mc G (F))^{\mf s}$ is always equivalent with the module category of some twisted graded Hecke algebra \cite[\S 7]{SolEnd}.
This is derived from an equivalence of $\Rep (\mc G (F))^{\mf s}$ with the module category of an algebra which is almost an affine Hecke algebra, established in full generality in \cite{SolEnd}. The same kind of algebras arise from enhanced Langlands parameters for $\mc G (F)$ \cite{AMS2}. That construction involves complex geometry and the cuspidal support map for enhanced L-parameters from \cite{AMS1}. It matches specific sets of enhanced L-parameters for $\mc G (F)$ with specific sets of irreducible representations of twisted graded Hecke algebras, see \cite{AMS2,AMS3}. Like affine Hecke algebras, graded Hecke algebras are related to equivariant sheaves on varieties associated to complex reductive groups. However, here the sheaves must be constructible and one uses equivariant cohomology instead of equivariant K-theory. Equivariant sheaves alone do not suffice to capture the entire structure of graded Hecke algebras; rather, one needs differential complexes of those. Thus we arrive at the (bounded) equivariant derived categories of constructible sheaves from \cite{BeLu}. Via intersection cohomology, such objects have many applications in representation theory, see for instance \cite{Lus-IC,Ach}.\\ \textbf{Main results}\\ Let $G$ be a complex reductive group and let $M$ be a Levi subgroup of $G$. To cover all enhanced Langlands parameters for $p$-adic groups and all instances of twisted graded Hecke algebras mentioned above, we must allow disconnected reductive groups. Let $q\cE$ be an irreducible $M$-equivariant cuspidal local system on a nilpotent orbit in the Lie algebra of $M$. From these data a twisted graded Hecke algebra $\mh H (G,M,q\cE)$ can be constructed \cite[\S 4]{AMS2}. As a graded vector space, it is the tensor product of: \begin{itemize} \item the algebra $\mc O (\mf t)$ of polynomial functions on Lie$(Z(M^\circ)) = \mf t$, with grading twice the standard grading, \item $\C [\mb r]$, where $\mb r$ is a formal variable of degree 2, \item the (twisted) group algebra of a finite ``Weyl-like" group $W_{q\cE}$ (in degree 0). \end{itemize} We will work in $\mc D^b_{G \times \C^\times}(X)$, the $G \times \C^\times$-equivariant bounded derived category of constructible sheaves on a complex variety $X$ \cite{BeLu}. We let $G$ act on its Lie algebra $\mf g$ via the adjoint representation, and we let $\lambda \in \C^\times$ act on $\mf g$ as multiplication by $\lambda^{-2}$. In \cite{Lus-Cusp1,Lus-Cusp2,AMS2} an important object $K \in \mc D^b_{G \times \C^\times} (\mf g)$ was constructed from $q\cE$, by a process that bears some similarity to parabolic induction. With $G^\circ$ instead of $G \times \C^\times$, $K$ would be a character sheaf as in \cite{Lus-Char}. In general it does not fit entirely with Lusztig's notion of character sheaves on disconnected reductive groups, because those are only $G^\circ$-equivariant. Let $\mf g_N$ be the variety of nilpotent elements in the Lie algebra $\mf g$ of $G$ and let $K_N$ be the pullback of $K$ to $\mf g_N$. Up to degree shifts, both $K$ and $K_N$ are direct sums of simple perverse sheaves. This $K_N$ generalizes the equivariant perverse sheaves used to establish the (generalized) Springer correspondence \cite{Lus-Int}. The following was already known for connected $G$, from \cite{Lus-Cusp1,Lus-Cusp2}, while for disconnected $G$ it follows quickly from \cite{AMS2}.
\begin{thmintro}\label{thm:A} (see Theorem \ref{thm:1.2}) \\ There exist natural isomorphisms of graded algebras \[ \mh H (G,M,q\cE) \longrightarrow \End^*_{\mc D^b_{G \times \C^\times}(\mf g)}(K) \longrightarrow \End^*_{\mc D^b_{G \times \C^\times}(\mf g_N)}(K_N) . \] \end{thmintro} We point out that here the additional $\C^\times$-action makes things much more interesting (like in \cite{KaLu} for K-theory and affine Hecke algebras). Indeed, the simpler version $\End^*_{\mc D^b_G (\mf g)}(K)$ is isomorphic to the crossed product of $\mc O (\mf t)$ with a twisted group algebra of $W_{q\cE}$, and that does not involve any Hecke type relations. Let $\mc D^b_{G \times \C^\times}(\mf g_N ,K_N)$ be the full triangulated subcategory of $\mc D^b_{G \times \C^\times}(\mf g_N)$ generated by $K_N$. By analogy with progenerators of module categories, Theorem \ref{thm:A} indicates that $\mc D^b_{G \times \C^\times}(\mf g_N ,K_N)$ should be equivalent to some category of right $\mh H (G,M,q\cE)$-modules. Our geometric objects are differential complexes of sheaves (up to equivalences), and accordingly we need (equivalence classes of complexes of) differential graded $\mh H (G,M,q\cE)$-modules. \begin{thmintro}\label{thm:B} (see Theorem \ref{thm:3.3}) \\ There exists an equivalence of categories between $\mc D^b_{G \times \C^\times}(\mf g_N ,K_N)$ and\\ $\mc D^b (\mh H (G,M,q\cE)-\Mod_{\mr{fgdg}})$, the bounded derived category of finitely generated differential graded right $\mh H (G,M,q\cE)$-modules. \end{thmintro} This is a geometric categorification of $\mh H (G,M,q\cE)$, albeit of a different kind than in \cite{KaLu,Bez}. It is a variation (with $G \times \C^\times$ instead of $G^\circ$) on the derived version of the generalized Springer correspondence from \cite{Rid,RiRu}. In that setting, the algebra is $\mc O (\mf t) \rtimes W(G,T)$, which can also be considered as a graded Hecke algebra with parameters $k = 0$. Further, one may regard Theorem \ref{thm:B} as ``formality" of the graded algebra $\mh H (G,M,q\cE)$, in the following sense. There exists a differential graded algebra $\mc R$ (with nonzero differential) such that $H^* (\mc R) \cong \mh H (G,M,q\cE)$ and $\mc R$ is formal, that is, quasi-isomorphic with $H^* (\mc R)$. The equivalence in Theorem \ref{thm:B} maps $\mc D^b_{G \times \C^\times}(\mf g_N ,K_N)$ to $\mc D^b (\mc R -\Mod_{\mr{fgdg}})$ via some Hom-functor, and from there to $\mc D^b (\mh H (G,M,q\cE)-\Mod_{\mr{fgdg}})$ by taking cohomology. From a geometric point of view, it is more natural to consider the entire category $\mc D^b_{G \times \C^\times}(\mf g_N)$ in Theorem \ref{thm:B}. It turns out that this category decomposes, like in a related setting in \cite{RiRu1}: \begin{thmintro}\label{thm:C} (see Theorem \ref{thm:3.6}) \\ There exists an orthogonal decomposition \[ \mc D^b_{G \times \C^\times}(\mf g_N) = \bigoplus\nolimits_{[M,q\cE]_G} \mc D^b_{G \times \C^\times}(\mf g_N ,K_N) . \vspace{-1mm} \] Here $K_N$ is constructed from an $M$-equivariant cuspidal local system $q\cE$ on a nilpotent orbit in a Levi Lie-subalgebra $\mr{Lie} (M)$, and the direct sum runs over $G$-conjugacy classes of such pairs $(\mr{Lie} (M),q\cE)$. \end{thmintro} \textbf{Outlook}\\ Theorems \ref{thm:B} and \ref{thm:C} describe $\mc D^b_{G \times \C^\times}(\mf g_N)$ as a derived module category. 
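For instance, for $G = \mr{SL}_2 (\C)$ the direct sum in Theorem \ref{thm:C} has exactly two summands; this small case is already visible in the generalized Springer correspondence \cite{Lus-Int}. One summand comes from the principal cuspidal support, a maximal torus $T$ with the trivial local system on its zero orbit, for which $W_{q\cE} \cong S_2$ and $\mh H (G,M,q\cE)$ is a graded Hecke algebra of type $A_1$. The other comes from the pair $(G, q\cE_{\mr{cusp}})$, where $q\cE_{\mr{cusp}}$ denotes the nontrivial $G$-equivariant rank one local system on the regular nilpotent orbit; there $\mf t = 0$ and $W_{q\cE} = 1$, so that
\[
\mh H (G,G,q\cE_{\mr{cusp}}) \cong \C [\mb r] .
\]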
Let us point out that the category $\mh H (G,M,q\cE)-\Mod_{\mr{fgdg}}$ in Theorem \ref{thm:B} is much smaller than the category of ungraded finitely generated right $\mh H (G,M,q\cE)$-modules (which relates to categories of smooth representations of reductive $p$-adic groups). In the sequels to this paper \cite{SolSheaves,SolStand} we focus on standard and irreducible $\mh H (G,M,q\cE)$-modules, that is, modules with a fixed central character. Those will be analysed using versions of localization for equivariant derived constructible sheaves. In the end, this will be used to verify the Kazhdan--Lusztig conjecture for $p$-adic groups \cite[Conjecture 8.11]{Vog} and the generalized injectivity conjecture \cite{CaSh}. To prove these conjectures entirely, it is essential to work in the generality of the current paper. In words, Theorem \ref{thm:C} says that the $G \times \C^\times$-equivariant derived category of constructible sheaves on the nilpotent cone $\mf g_N$ decomposes as a direct sum of subcategories associated to the various cuspidal supports $(M,q\cE)$ involved. Let us speculate on how this relates to the Langlands program. An element $N \in \mf g_N$ and a semisimple element $g \in G$ with Ad$(g)N = q_F N$ can be used to define a Langlands parameter for a reductive group over a non-archimedean local field $F$. It may be an unramified L-parameter as in \cite{KaLu}: trivial on the inertia group $\mb I_F$ and with $g$ the image of an arithmetic Frobenius element $\Fr$. But one can also start with a more complicated L-parameter $\phi$, let $G^\circ$ be the connected centralizer of $\phi (\mb I_F)$ in the complex dual group and let $G/G^\circ$ be generated by $\phi (\Fr)$. One may hope that an analogue of Theorem \ref{thm:C} holds for equivariant sheaves on varieties (or stacks) of Langlands parameters, as defined in \cite{DHKM,Zhu}. It would say that the relevant sheaves on Langlands parameters can be decomposed as orthogonal direct sums of pieces associated to suitable cuspidal supports. That would be similar to the Bernstein decomposition of the category of smooth complex representations of a reductive $p$-adic group. A result of this kind is already known for enhanced L-parameters \cite[\S 8]{AMS1}; it covers the cases of simple equivariant constructible sheaves. To fit with the (conjectural) framework for geometrization of the local Langlands correspondence from \cite{FaSc,Zhu}, a version of Theorem \ref{thm:C} with equivariant coherent sheaves on varieties of L-parameters is desirable.\\ \textbf{Structure of the paper}\\ We start by recalling twisted graded Hecke algebras in terms of generators and relations. We generalize a few results from \cite{SolHecke}, which say that the set of irreducible representations of a graded Hecke algebra is essentially independent of the parameters $k$ and $r$. Then we prove a generally useful result: \begin{thmintro}\label{thm:D} The global dimension of $\mh H (G,M,q\cE)$ equals $\dim (Z(M^\circ)) + 1$. \end{thmintro} In Paragraph \ref{par:geomConst} we describe the geometric construction of $\mh H (G,M,q\cE)$ in detail, and we establish Theorem \ref{thm:A}. Next we check that $K_N$ is a semisimple object of $\mc D^b_{G \times \C^\times}(\mf g_N)$ and we relate it to parabolic induction for perverse sheaves -- which is needed for Theorems \ref{thm:B} and \ref{thm:C}. Paragraph \ref{par:centralizer} is mainly preparation for an argument with localization to $\exp (\C \sigma)$-invariants.
We include it here because it is closely related to Paragraph \ref{par:geomConst} and because our analysis of $(G/P)^\sigma = (G/P)^{\exp (\C \sigma)}$ for $\sigma \in \mf t$ is of independent interest. Paragraph \ref{par:iso} and Section \ref{sec:cuspidal} (basically the complement of Theorems \ref{thm:B}, \ref{thm:C} and \ref{thm:D}) will be used directly in \cite{SolSheaves,SolStand}. Section \ref{sec:description} is dedicated to Theorems \ref{thm:B} and \ref{thm:C}. We prove them by reduction to the setting of \cite{Rid,RiRu1,RiRu}, where sheaves of $\Ql$-modules on varieties over fields of positive characteristic are considered. This involves checking many things, among others that $\mh H (G,M,q\cE)$ is Koszul as differential graded algebra.\\ \textbf{Acknowledgements}\\ We thank Eugen Hellmann for some enlightening conversations. A big thanks to the referees for their detailed remarks, which helped to avoid several problems and to substantially clarify the paper. \vspace{3mm} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \counterwithin*{equation}{section} \section{Graded Hecke algebras} \label{sec:defGHA} Let $\mf a$ be a finite dimensional Euclidean space and let $W$ be a finite Coxeter group acting isometrically on $\mf a$, and hence also on the linear dual space $\mf a^\vee$. Let $R \subset \mf a^\vee$ be a reduced integral root system, stable under the action of $W$, such that the reflections $s_\alpha$ with $\alpha \in R$ generate $W$. These conditions imply that $W$ acts trivially on the orthogonal complement of $\R R$ in $\mf a^\vee$. Write $\mf t = \mf a \otimes_\R \C$ and let $S(\mf t^\vee) = \mc O (\mf t)$ be the algebra of polynomial functions on $\mf t$. We also fix a base $\Delta$ of $R$. Let $\Gamma$ be a finite group which acts faithfully and orthogonally on $\mf a$ and stabilizes $R$ and $\Delta$. Then $\Gamma$ normalizes $W$ and $W \rtimes \Gamma$ is a group of automorphisms of $(\mf a,R)$. We choose a $W \rtimes \Gamma$-invariant parameter function $k : R \to \C$. Let ${\mb r}$ be a formal variable, identified with the coordinate function on $\C$ (so $\mc O (\C) = \C [\mb r]$). Let $\natural : \Gamma^2 \to \C^\times$ be a 2-cocycle and inflate it to a 2-cocycle of $W \rtimes \Gamma$. Recall that the twisted group algebra $\C[W \rtimes \Gamma,\natural]$ has a $\C$-basis $\{ N_w : w \in W \rtimes \Gamma \}$ and multiplication rules \[ N_w \cdot N_{w'} = \natural (w,w') N_{w w'} . \] In particular it contains the group algebra of $W$. \begin{prop}\label{prop:1.1} \textup{\cite[Proposition 2.2]{AMS2}} \\ There exists a unique associative algebra structure on $\C [W \rtimes \Gamma,\natural] \otimes \mc O (\mf t) \otimes \C[{\mb r}]$ such that: \begin{itemize} \item the twisted group algebra $\C[W \rtimes \Gamma,\natural]$ is embedded as subalgebra; \item the algebra $\mc O (\mf t) \otimes \C[{\mb r}]$ of polynomial functions on $\mf t \oplus \C$ is embedded as a subalgebra; \item $\C[{\mb r}]$ is central; \item the braid relation $N_{s_\alpha} \xi - {}^{s_\alpha}\xi N_{s_\alpha} = k (\alpha) {\mb r} (\xi - {}^{s_\alpha} \xi) / \alpha$ holds for all $\xi \in \mc O (\mf t)$ and all simple roots $\alpha$; \item $N_w \xi N_w^{-1} = {}^w \xi$ for all $\xi \in \mc O (\mf t)$ and $w \in \Gamma$. \end{itemize} \end{prop} We denote the algebra from Proposition \ref{prop:1.1} by $\mh H (\mf t, W \rtimes \Gamma,k,\mb{r},\natural)$ and we call it a twisted graded Hecke algebra. 
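For instance, for a linear $\xi \in \mf t^\vee$ one has ${}^{s_\alpha} \xi = \xi - \langle \xi , \alpha^\vee \rangle \alpha$, with $\alpha^\vee$ the coroot of $\alpha$, so the braid relation specializes to
\[
N_{s_\alpha} \xi - {}^{s_\alpha} \xi \, N_{s_\alpha} = k (\alpha) \mb{r} \, \langle \xi , \alpha^\vee \rangle ;
\]
in particular $N_{s_\alpha} \alpha + \alpha N_{s_\alpha} = 2 k (\alpha) \mb{r}$ holds in $\mh H (\mf t, W \rtimes \Gamma,k,\mb{r},\natural)$.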
It is graded by putting $\C[W \rtimes \Gamma,\natural]$ in degree 0 and $\mf t^\vee \setminus \{0\}$ and $\mb r$ in degree 2. When $\Gamma$ is trivial, we omit $\natural$ from the notation, and we obtain the usual notion of a graded Hecke algebra $\mh H (\mf t, W,k,\mb{r})$. Notice that for $k = 0$ Proposition \ref{prop:1.1} yields the crossed product algebra \begin{equation}\label{eq:1.9} \mh H (\mf t, W \rtimes \Gamma, 0,\mb{r}, \natural) = \C[\mb{r}] \otimes_\C \mc O (\mf t) \rtimes \C [W \rtimes \Gamma, \natural], \end{equation} with multiplication rule \[ N_w \xi N_w^{-1} = {}^w \xi \qquad w \in W \rtimes \Gamma, \xi \in \mc O (\mf t) . \] It is possible to scale all parameters $k(\alpha)$ simultaneously. Namely, scalar multiplication with $z \in \C^\times$ defines a bijection $m_z : \mf t^\vee \to \mf t^\vee$, which clearly extends to an algebra automorphism of $S(\mf t^\vee)$. From Proposition \ref{prop:1.1} we see that it extends even further, to an algebra isomorphism \begin{equation}\label{eq:1.16} m_z : \mh H (\mf t, W \rtimes \Gamma, zk, \mb{r}, \natural) \to \mh H (\mf t,W \rtimes \Gamma ,k,\mb{r}, \natural) \end{equation} which is the identity on $\C [W \rtimes \Gamma, \natural] \otimes_\C \C [\mb{r}]$. Notice that for $z=0$ the map $m_z$ is well-defined, but no longer bijective. It is the canonical surjection \[ \mh H (\mf t,W \rtimes \Gamma,0,\mb{r}, \natural) \to \C [W \rtimes \Gamma, \natural] \otimes_\C \C [\mb{r}] . \] One also encounters versions of $\mh H (\mf t,W \rtimes \Gamma,k,\mb{r},\natural)$ with $\mb{r}$ specialized to a nonzero complex number. In view of \eqref{eq:1.16} it hardly matters which specialization, so it suffices to look at $\mb r \mapsto 1$. The resulting algebra $\mh H (\mf t,W \rtimes \Gamma,k, \natural)$ has underlying vector space $\C [W \rtimes \Gamma, \natural] \otimes_\C \mc O (\mf t)$ and cross relations \begin{equation}\label{eq:1.17} \xi \cdot s_\alpha - s_\alpha \cdot s_\alpha (\xi) = k(\alpha) (\xi - s_\alpha (\xi)) / \alpha \qquad \alpha \in \Delta, \xi \in S(\mf t^\vee) . \end{equation} Since $\Gamma$ acts faithfully on $(\mf a,\Delta)$, and $W$ acts simply transitively on the collection of bases of $R$, $W \rtimes \Gamma$ acts faithfully on $\mf a$. From \eqref{eq:1.17} we see that the centre of $\mh H (\mf t,W \rtimes \Gamma,k, \natural)$ is \begin{equation}\label{eq:1.18} Z(\mh H (\mf t,W \rtimes \Gamma,k,\natural)) = S(\mf t^\vee)^{W \rtimes \Gamma} = \mc O (\mf t / W \rtimes \Gamma) . \end{equation} As a vector space, $\mh H(\mf t,W \rtimes \Gamma,k, \natural)$ is still graded by $\deg (w) = 0$ for $w \in W \rtimes \Gamma$ and $\deg (x) = 2$ for $x \in \mf t^\vee \setminus \{0\}$. However, it is not a graded algebra any more, because \eqref{eq:1.17} is not homogeneous in the case $\xi = \alpha$. Instead, the above grading merely makes $\mh H(\mf t,W \rtimes \Gamma,k, \natural)$ into a filtered algebra. The graded algebra associated to this filtration is obtained by setting the right hand side of \eqref{eq:1.17} equal to 0. In other words, the associated graded object of $\mh H(\mf t,W \rtimes \Gamma,k, \natural)$ is the crossed product algebra \eqref{eq:1.9}. Graded Hecke algebras can be decomposed like root systems and reductive Lie algebras. Let $R_1, \ldots, R_d$ be the irreducible components of $R$. Write $\mf a_i^\vee = \mr{span}(R_i) \subset \mf a^\vee$, $\mf t_i = \Hom_\R (\mf a_i^\vee,\C)$ and $\mf z = R^\perp \subset \mf t$. Then \begin{equation}\label{eq:1.7} \mf t = \mf t_1 \oplus \cdots \oplus \mf t_d \oplus \mf z . 
\end{equation} The inclusions $W(R_i) \to W(R), \mf t_i^\vee \to \mf t^\vee$ and $\mf z^\vee \to \mf t^\vee$ induce an algebra isomorphism \begin{equation}\label{eq:1.22} \mh H (\mf t_1, W(R_1), k) \otimes_\C \cdots \otimes_\C \mh H (\mf t_d, W(R_d),k) \otimes_\C \mc O (\mf z) \; \longrightarrow \; \mh H (\mf t, W, k) . \end{equation} The central subalgebra $\mc O (\mf z) \cong S(\mf z^\vee)$ is of course very simple, so the study of graded Hecke algebras can be reduced to the case where the root system $R$ is irreducible. \subsection{Some representation theory} \ \label{par:iso} We list some isomorphisms of (twisted) graded Hecke algebras that will be useful later on. For any $z \in \C^\times$, $\mh H (\mf t, W \rtimes \Gamma,k,\mb{r},\natural)$ admits a ``scaling by degree" automorphism \begin{equation}\label{eq:1.11} x \mapsto z^n x \qquad \text{if } x \in \mh H (\mf t, W \rtimes \Gamma,k,\mb{r},\natural) \text{ has degree } 2n. \end{equation} Extend the sign representation to a character $\sgn$ of $W \rtimes \Gamma$, trivial on $\Gamma$. That yields the sign involution \begin{equation}\label{eq:1.6} \begin{aligned} & \sgn : \mh H(\mf t,W \rtimes \Gamma,k, \mb r, \natural) \to \mh H(\mf t,W \rtimes \Gamma,k, \mb r, \natural) \\ & \sgn (N_w) = \sgn (w) N_w ,\quad \sgn (\mb r) = -\mb r ,\quad \sgn (\xi) = \xi \qquad w \in W \rtimes \Gamma, \xi \in \mf t^\vee. \end{aligned} \end{equation} Upon specializing $\mb r = 1$, it induces an algebra isomorphism \[ \sgn : \mh H(\mf t,W \rtimes \Gamma,k, \natural) \to \mh H(\mf t,W \rtimes \Gamma,-k, \natural) . \] More generally, we can pick a sign $\epsilon (s_\alpha)$ for every simple reflection $s_\alpha \in W$, such that $\epsilon (s_\alpha) = \epsilon (s_\beta)$ if $s_\alpha$ and $s_\beta$ are conjugate in $W \rtimes \Gamma$. Then $\epsilon$ extends uniquely to a character of $W \rtimes \Gamma$ trivial on $\Gamma$ (and every character of $W \rtimes \Gamma$ which is trivial on $\Gamma$ has this form). Define a new parameter function $\epsilon k$ by \[ \epsilon k (\alpha) = \epsilon (s_\alpha) k(\alpha) . \] Then there are algebra isomorphisms \begin{equation} \begin{array}{llll} \phi_\epsilon : & \mh H(\mf t,W \rtimes \Gamma,k,\mb r,\natural) & \to & \mh H(\mf t,W \rtimes \Gamma,\epsilon k,\mb r,\natural) ,\\ \phi_\epsilon : & \mh H(\mf t,W \rtimes \Gamma,k,\natural) & \to & \mh H(\mf t,W \rtimes \Gamma,\epsilon k,\natural) ,\\ \multicolumn{4}{l}{\phi_\epsilon (N_w) = \epsilon (w) N_w ,\quad \phi_\epsilon (\mb r) = \mb r ,\quad \phi_\epsilon (\xi) = \xi, \qquad w \in W \rtimes \Gamma, \xi \in \mc O (\mf t) .} \end{array} \end{equation} Notice that for $\epsilon$ equal to the sign character of $W$, $\phi_\epsilon$ agrees with $\sgn$ from \eqref{eq:1.6} on $\mh H(\mf t,W \rtimes \Gamma,k,\natural)$ but not on $\mh H(\mf t,W \rtimes \Gamma,k,\mb r,\natural)$. For $R$ irreducible of type $B_n, C_n, F_4$ or $G_2$, there are two further nontrivial possible $\epsilon$'s. Consider the characters $\epsilon_s, \epsilon_l$ of $W$ with \[ \epsilon_s (s_\alpha) = \left\{ \begin{array}{cc} 1 & \alpha \text{ long} \\ -1 & \alpha \text{ short} \end{array}\right. ,\qquad \epsilon_l (s_\alpha) = \left\{ \begin{array}{cc} 1 & \alpha \text{ short} \\ -1 & \alpha \text{ long} \end{array}\right. . \] Since $\Gamma$ acts isometrically on $\mf a$, $\epsilon_l$ and $\epsilon_s$ are $\Gamma$-invariant. 
Thus we obtain algebra isomorphisms \[ \phi_{\epsilon_s} : \mh H (\mf t,W \rtimes \Gamma,k,\natural) \to \mh H(\mf t,W \rtimes \Gamma,\epsilon_s k,\natural) ,\; \phi_{\epsilon_l} : \mh H (\mf t,W \rtimes \Gamma,k,\natural) \to \mh H(\mf t,W \rtimes \Gamma,\epsilon_l k,\natural) . \] \begin{lem}\label{lem:1.4} Let $\mh H(\mf t,W \rtimes \Gamma,k,\natural)$ be a twisted graded Hecke algebra with a real-valued parameter function $k$. Then it is isomorphic to a twisted graded Hecke algebra $\mh H(\mf t,W \rtimes \Gamma,\epsilon k,\natural)$ with $\epsilon k : R \to \R_{\geq 0}$, via an isomorphism $\phi_\epsilon$ that is the identity on $\mc O (\mf t \oplus \C)$. \end{lem} \begin{proof} Define \[ \epsilon (s_\alpha) = \left\{ \begin{array}{cc} 1 & k (\alpha) \geq 0 \\ -1 & k (\alpha) < 0 \end{array}\right. . \] Since $k$ is $\Gamma$-invariant, this extends to a $\Gamma$-invariant quadratic character of $W$. Then $\phi_\epsilon$ has the required properties. \end{proof} With the above isomorphisms we will generalize the results of \cite[\S 6.2]{SolHecke}, from graded Hecke algebras with positive parameters to twisted graded Hecke algebras with real parameters. For the moment, we let $\mh H$ stand for either $\mh H (\mf t,W \rtimes \Gamma,k,\mb r,\natural)$ or $\mh H (\mf t,W \rtimes \Gamma,k,\natural)$. Every finite dimensional $\mh H$-module $V$ is the direct sum of its generalized $\mc O (\mf t)$-weight spaces \[ V_\lambda := \{ v \in V : (\xi - \xi(\lambda))^{\dim V} v = 0 \; \forall \xi \in \mc O (\mf t) \} \qquad \lambda \in \mf t . \] We denote the set of $\mc O (\mf t)$-weights of $V$ by \[ \mr{Wt}(V) = \{ \lambda \in \mf t : V_\lambda \neq 0 \} . \] Let $\mf a^{-}$ be the obtuse negative cone in $\R R \subset \mf a$ determined by $(R,\Delta)$. We denote the interior of $\mf a^{-}$ in $\R R$ by $\mf a^{--}$. We recall that a finite dimensional $\mh H$-module $V$ is tempered if \[ \mr{Wt}(V) \subset \mf a^{-} \oplus i \mf a \] and that $V$ is essentially discrete series if, with $\mf z$ as in \eqref{eq:1.7}: \[ \mr{Wt}(V) \subset \mf a^{--} \oplus (\mf z \cap \mf a) \oplus i \mf a. \] For a subset $U$ of $\mf t$ we let $\Modf{U}(\mh H)$ be the category of finite dimensional $\mh H$-modules $V$ with Wt$(V) \subset U$. For example, we have the category of $\mh H$-modules with ``real" weights $\Modf{\mf a}(\mh H)$. We indicate a subcategory/subset of tempered modules by a subscript ``temp". In particular, we have the category of finite dimensional tempered $\mh H$-modules $\Mod_{\mr{fl}}(\mh H )_{\mr{temp}}$. We want to compare the irreducible representations of \[ \mh H (\mf t,W \rtimes \Gamma,k,\natural) = \mh H (\mf t,W \rtimes \Gamma,k,\mb r,\natural) / (\mb r - 1) \] with those of \[ \mh H (\mf t,W \rtimes \Gamma,0,\natural) = \mh H (\mf t,W \rtimes \Gamma,k,\mb r,\natural) / (\mb r) . \] The latter algebra has $\Irr (\C [W \rtimes \Gamma,\natural])$ as the set of irreducible representations on which $\mc O (\mf t)$ acts via evaluation at $0 \in \mf t$. The correct analogue of this for $\mh H (\mf t,W \rtimes \Gamma,k,\natural)$, at least with $k$ real-valued, is \[ \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k,\natural))_{\mr{temp}} := \Irr (\mh H (\mf t,W \rtimes \Gamma,k, \natural))_{\mr{temp}} \cap \Modf{\mf a}( \mh H (\mf t,W \rtimes \Gamma,k,\natural)) . 
\] As $\C [W \rtimes \Gamma, \natural]$ is a subalgebra of $\mh H (\mf t,W \rtimes \Gamma,k,\natural)$, there is a natural restriction map \[ \Res_{W \rtimes \Gamma} : \Mod_{\mr{fl}} (\mh H (\mf t,W \rtimes \Gamma,k,\natural)) \to \Mod_{\mr{fl}}(\C [W \rtimes \Gamma, \natural]) . \] However, when $k \neq 0$ this map usually does not preserve irreducibility, not even on $\Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k,\natural))_{\mr{temp}}$. In the remainder of this paragraph we assume that the parameter function $k$ only takes real values. Let $\epsilon$ be as in Lemma \ref{lem:1.4}. Since $\phi_\epsilon$ is the identity on $\mc O (\mf t \oplus \C)$, it induces equivalences of categories \[ \begin{array}{llll} \Modf{U}(\mh H (\mf t,W \rtimes \Gamma,\epsilon k,\natural) ) & \longrightarrow & \Modf{U} (\mh H (\mf t,W \rtimes \Gamma,k,\natural)) & U \subset \mf t , \\ \Mod_{\mr{fl}}(\mh H (\mf t,W \rtimes \Gamma,\epsilon k,\natural) )_{\mr{temp}} & \longrightarrow & \Mod_{\mr{fl}} (\mh H (\mf t,W \rtimes \Gamma,k,\natural))_{\mr{temp}} & \end{array} \] and a bijection \[ \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,\epsilon k,\natural) )_{\mr{temp}} \longrightarrow \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k,\natural) )_{\mr{temp}} . \] \begin{thm}\label{thm:1.3} Let $k : R \to \R$ be a $\Gamma$-invariant parameter function. \enuma{ \item The set $\Res_{W \rtimes \Gamma} (\Irr_{\mf a} \mh H (\mf t,W \rtimes \Gamma,k, \natural)_{\mr{temp}} )$ is a $\Z$-basis of $\Z \, \Irr (\C [W \rtimes \Gamma, \natural])$. } Suppose that the restriction of $k$ to any type $F_4$ component of $R$ has $k(\alpha) = 0$ for a root $\alpha$ in that component or is of the form $\epsilon k'$ for a character $\epsilon : W (F_4) \to \{\pm 1\}$ and a parameter function $k' : F_4 \to \R_{>0}$ which is geometric in the sense of the next remark. \enuma{ \setcounter{enumi}{1} \item There exist total orders on $\Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k,\natural)_{\mr{temp}})$ and on $\Irr (\C [W \rtimes \Gamma ,\natural])$, such that the matrix of the $\Z$-linear map \[ \Res_{W \rtimes \Gamma} : \Z \, \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k,\natural))_{\mr{temp}} \to \Z \, \Irr (\C [W \rtimes \Gamma ,\natural]) \] is upper triangular and unipotent. \item There exists a unique bijection \[ \zeta_{\mh H (\mf t,W \rtimes \Gamma,k,\natural)} : \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma, k,\natural))_{\mr{temp}} \to \Irr (\C [W \rtimes \Gamma, \natural]) \] such that $\zeta_{\mh H (\mf t,W \rtimes \Gamma,k,\natural)} (\pi)$ always occurs in $\Res_{W \rtimes \Gamma} (\pi)$. } \end{thm} \begin{rem} Geometric parameter functions will appear in Section \ref{sec:cuspidal}. Let us make the allowed parameter functions for a type $F_4$ root system explicit here. Write $k = (k(\alpha), k(\beta))$ where $\alpha$ is a short root and $\beta$ is a long root. The possibilities are \[ (0,0), (c,0), (0,c), (c,c), (2c,c), (c/2,c), (4c,c), (-c,c), (-2c,c), (-c/2,c), (-4c,c) , \] where $c \in \R^\times$ is arbitrary. We expect that Theorem \ref{thm:1.3} also holds without extra conditions for type $F_4$. \end{rem} \begin{proof} (a) is known from \cite[Proposition 1.7]{SolK}. The proof of that result shows that we can reduce the entire theorem to the case where $\natural$ is trivial. We assume that from now on, and omit $\natural$ from the notations. Parts (b) and (c) were shown in \cite[Theorem 6.2]{SolHecke}, provided that $k(\alpha) \geq 0$ for all $\alpha \in R$.
Choose $\epsilon$ as in Lemma \ref{lem:1.4}, so that $\epsilon k : R \to \R_{\geq 0}$. For $V \in \Mod_{\mr{fl}}(\mh H (\mf t, W ,\epsilon k))$ we have \[ \Res_{W} (\phi_\epsilon^* V) = \Res_{W} (V) \otimes \epsilon , \] so we obtain a commutative diagram \begin{equation}\label{eq:1.8} \begin{array}{ccc} \Z \, \Irr_{\mf a} (\mh H (\mf t, W ,\epsilon k) )_{\mr{temp}} & \xrightarrow{\Res_W} & \Z \, \Irr (W) \\ \downarrow \phi_\epsilon^* & & \downarrow \otimes \epsilon \\ \Z \, \Irr_{\mf a} (\mh H (\mf t, W , k) )_{\mr{temp}} & \xrightarrow{\Res_W} & \Z \, \Irr (W) \end{array} \end{equation} All the maps in this diagram are bijective and the vertical maps preserve irredu\-ci\-bi\-li\-ty. Thus the theorem for $\mh H (\mf t,W,\epsilon k)$ implies it for $\mh H (\mf t,W,k)$. The commutative diagram \eqref{eq:1.8} also allows us to extend \cite[Lemma 6.5]{SolHecke} from $\mh H (\mf t,W, \epsilon k)$ to $\mh H (\mf t,W, k)$. Then we can finish our proof for $\mh H (\mf t,W \rtimes \Gamma, k)$ by applying \cite[Lemma 6.6]{SolHecke}. \end{proof} \begin{thm}\label{thm:1.10} Let $\mh H (\mf t,W \rtimes \Gamma,k,\natural)$ be as in Theorem \ref{thm:1.3}.b. There exists a canonical bijection \[ \zeta_{\mh H (\mf t,W \rtimes \Gamma,k,\natural)} : \Irr (\mh H (\mf t,W \rtimes \Gamma, k,\natural)) \to \Irr (\mh H (\mf t,W \rtimes \Gamma,0,\natural)) \] which (as well as its inverse) \begin{itemize} \item respects temperedness, \item preserves the intersections with $\Modf{\mf a}$, \item generalizes Theorem \ref{thm:1.3}.c, via the identification \[ \Irr_{\mf a} (\mh H (\mf t,W \rtimes \Gamma,0,\natural))_{\mr{temp}} = \Irr (\C [W \rtimes \Gamma,\natural]) . \] \end{itemize} \end{thm} \begin{proof} As discussed in the proof of Theorem \ref{thm:1.3}.a, we can easily reduce to the case where $\natural$ is trivial. In \cite[Proposition 6.8]{SolHecke}, that case is derived from \cite[Theorem 6.2]{SolHecke} (under stricter conditions on the parameters $k$). Using Theorem \ref{thm:1.3} instead of \cite[Theorem 6.2]{SolHecke}, this works for all parameters allowed in Theorem \ref{thm:1.3}. Although \cite[Proposition 6.8]{SolHecke} is only formulated for irreducible representations in $\Modf{\mf a} (\mh H (\mf t,W \rtimes \Gamma,k))$, the argument applies to all of $\Irr (\mh H (\mf t,W \rtimes \Gamma,k))$. \end{proof} \subsection{Global dimension} \ \label{par:gldim} We want to determine the global dimension of $\mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural)$. For $\mh H (\mf t,W,kr)$ this has already been done in \cite{SolGHA}, and our argument is based on reduction to that case. A lower bound for the global dimension is easily obtained: \begin{lem}\label{lem:1.13} $\mr{gl. \, dim} (\mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural)) \geq \dim_\C (\mf t \oplus \C)$. \end{lem} \begin{proof} We abbreviate $\mh H = \mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural)$. Pick $\lambda \in \mf t$ such that $w \lambda \neq \lambda$ for all $w \in W \rtimes \Gamma \setminus \{1\}$. Fix any $r \in \C$ and let $\C_{\lambda,r}$ be the one-dimensional $\mc O (\mf t \oplus \C)$-module with character $(\lambda,r)$. By \cite[Theorem 6.4]{BaMo}, which generalizes readily to include $\Gamma$, the $\mc O (\mf t)$-weights of \begin{equation}\label{eq:1.35} \Res^{\mh H}_{\mc O (\mf t \oplus \C)} \mr{ind}_{\mc O (\mf t \oplus \C)}^{\mh H} \C_{\lambda,r} \end{equation} are precisely the $w \lambda$ with $w \in W \rtimes \Gamma$.
These are all different and the dimension of \eqref{eq:1.35} is $|W \rtimes \Gamma|$, so \eqref{eq:1.35} must be isomorphic with $\bigoplus\nolimits_{w \in W \rtimes \Gamma} \C_{w \lambda,r}$. By Frobenius reciprocity \begin{align}\nonumber \mr{Ext}^n_{\mh H} \big( \mr{ind}_{\mc O (\mf t \oplus \C)}^{\mh H} \C_{\lambda,r} , \mr{ind}_{\mc O (\mf t \oplus \C)}^{\mh H} \C_{\lambda,r} \big) \cong \mr{Ext}^n_{\mc O (\mf t \oplus \C)} \big( \C_{\lambda,r}, \Res^{\mh H}_{\mc O (\mf t \oplus \C)} \mr{ind}_{\mc O (\mf t \oplus \C)}^{\mh H} \C_{\lambda,r} \big) \\ \label{eq:1.65} \cong \bigoplus\nolimits_{w \in W \rtimes \Gamma} \mr{Ext}^n_{\mc O (\mf t \oplus \C)} \big( \C_{\lambda,r},\C_{w \lambda, r} \big) = \mr{Ext}^n_{\mc O (\mf t \oplus \C)} \big( \C_{\lambda,r}, \C_{\lambda, r} \big) . \end{align} It is well-known (and can be computed with a Koszul resolution) that the last expression equals (with $\mf T$ for tangent space) \begin{equation}\label{eq:1.66} \bigwedge\nolimits^n \big( \mf T_{\lambda,r} (\mf t \oplus \C) \big) = \bigwedge\nolimits^n (\mf t \oplus \C) . \end{equation} This is nonzero when $0 \leq n \leq \dim_\C (\mf t \oplus \C)$, so the global dimension must be at least $\dim_\C (\mf t \oplus \C)$. \end{proof} With a general argument, the computation of the global dimension of \\ $\mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural)$ can be reduced to the cases with $\Gamma = \{1\}$. \begin{lem}\label{lem:1.11} Let $\Gamma$ be a finite group acting by automorphisms on a complex algebra $A$. Let $\natural : \Gamma^2 \to \C^\times$ be a 2-cocycle and build the twisted crossed product $A \rtimes \C [\Gamma,\natural]$ with multiplication relations as in Proposition \ref{prop:1.1} -- the role of $\mh H (\mf t,W,k,\mb r)$ is played by $A$. Then \[ \mr{gl. \, dim} (A \rtimes \C [\Gamma,\natural]) = \mr{gl. \, dim} (A) . \] \end{lem} \begin{proof} For any $A$-module $M$ \[ \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} M \cong \bigoplus\nolimits_{\gamma \in \Gamma} \gamma^* (M) . \] Hence $\mr{Ext}^n_A (M',M)$ is a direct summand of \[ \mr{Ext}^n_A \big( M', \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} M \big) \cong \mr{Ext}^n_{A \rtimes \C [\Gamma,\natural]} \big( \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} M',\mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} M \big) . \] In particular gl. dim $(A) \leq$ gl. dim $(A\rtimes \C [\Gamma,\natural])$. For any $A \rtimes \C [\Gamma,\natural]$-module $V$ there is a surjective module homomorphism \[ \begin{array}{cccc} p : & \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V & \to & V \\ & x \otimes v & \mapsto & x v \end{array} . \] On the other hand, there is a natural injection \[ \begin{array}{cccc} \imath : & V & \to & \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V \\ & v & \mapsto & \sum_{\gamma \in \Gamma} N_\gamma^{-1} \otimes N_\gamma v \end{array} . \] This is in fact a module homomorphism. Namely, for $a \in A$: \begin{align*} \imath (a v) & = \sum\nolimits_{\gamma \in \Gamma} N_\gamma^{-1} \otimes N_\gamma a v = \sum\nolimits_{\gamma \in \Gamma} N_\gamma^{-1} \otimes \gamma (a) N_\gamma v \\ & = \sum\nolimits_{\gamma \in \Gamma} N_\gamma^{-1} \gamma (a) \otimes N_\gamma v = \sum\nolimits_{\gamma \in \Gamma} a N_\gamma^{-1} \otimes N_\gamma v = a \imath (v) .
\end{align*} Similarly, for $g \in \Gamma$: \begin{align*} \imath (N_g v) & = \sum\nolimits_{\gamma \in \Gamma} N_\gamma^{-1} \otimes N_\gamma N_g v = \sum\nolimits_{\gamma \in \Gamma} N_g N_g^{-1} N_\gamma^{-1} \otimes N_\gamma N_g v \\ & = \sum\nolimits_{\gamma \in \Gamma} N_g N_{\gamma g}^{-1} \otimes N_{\gamma g} v = \sum\nolimits_{h \in \Gamma} N_g N_h^{-1} \otimes N_h v = N_g \imath (v) . \end{align*} Clearly $p \circ \imath = |\Gamma| \, \mr{id}_V$, so \[ \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V \cong V \oplus \ker p \qquad \text{as } A \rtimes \C [\Gamma,\natural] \text{-modules.} \] For any second $A \rtimes \C [\Gamma,\natural]$-module $V'$, $\mr{Ext}^n_{A \rtimes \C [\Gamma,\natural]} (V,V')$ is a direct summand of \[ \mr{Ext}^n_{A \rtimes \C [\Gamma,\natural]} \big( V \oplus \ker p,V' \big) = \mr{Ext}^n_{A \rtimes \C [\Gamma,\natural]} \big( \mr{ind}_A^{A\rtimes \C [\Gamma,\natural]} \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V,V' \big) . \] By Frobenius reciprocity the latter is isomorphic with \[ \mr{Ext}^n_A \big( \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V, \mr{Res}_A^{A \rtimes \C [\Gamma,\natural]} V' \big). \] Hence $\mr{Ext}^n_{A \rtimes \C [\Gamma,\natural]} (V,V')$ vanishes whenever $n >\,$gl. dim$(A)$. \end{proof} In view of Lemma \ref{lem:1.11} and the construction of $\mh H (\mf t, W \rtimes \Gamma,k, \mb{r},\natural)$, it can be expected that it has the same global dimension as $\mc O (\mf t \oplus \C)$. The latter equals $\dim_\C (\mf t \oplus \C)$, see for instance \cite[Theorem 4.3.7]{Wei}. While the global dimensions of these algebras do indeed agree, Lemma \ref{lem:1.11} does not suffice to show that. One complication is that a map like $\imath$ above is not a module homomorphism in the setting of the group $W \rtimes \Gamma$ and the algebra $\mc O (\mf t \oplus \C)$, when the parameters of the Hecke algebra are nonzero. The centre of $\mh H (\mf t,W,k,\mb r)$ was identified in \cite[Proposition 4.5]{Lus-Gr} as \begin{equation}\label{eq:1.56} Z(\mh H (\mf t,W,k,\mb r)) = \mc O (\mf t)^W \otimes \C [\mb r] . \end{equation} From this and Proposition \ref{prop:1.1} we see that \begin{equation}\label{eq:1.61} \mh H (\mf t,W,k,\mb r) \text{ has finite rank as module over } Z(\mh H (\mf t,W,k,\mb r)). \end{equation} For $r \in \C$, let $\hat{\mh H}_r$ be the formal completion of $\mh H (\mf t,W,k,\mb r)$ with respect to the ideal $(\mb r - r)$ of $\C [\mb r]$. By \eqref{eq:1.56}, $\hat{\mh H}_r$ is also the formal completion of $\mh H (\mf t,W,k,\mb r)$ with respect to the ideal \[ I_r = Z(\mh H (\mf t,W,k,\mb r)) (\mb r - r) \; \subset \; Z(\mh H (\mf t,W,k,\mb r)). \] For an $\mh H (\mf t,W,k,\mb r)$-module $V$, we denote its completion with respect to $(\mb r -r)$, or equivalently with respect to $I_r$, by $\hat V_r$. If $V$ is finitely generated as $\mh H (\mf t,W,k,\mb r)$-module, then by \eqref{eq:1.61} it is also finitely generated over $Z(\mh H (\mf t,W,k,\mb r))$, so the natural map \begin{equation}\label{eq:1.58} Z(\hat{\mh H}_r) \otimes_{Z(\mh H(\mf t,W,k,\mb r))} V = \hat{\mh H}_r \otimes_{\mh H (\mf t,W,k,\mb r)} V \longrightarrow \varprojlim_n V / I_r^n V = \hat V_r \end{equation} is an isomorphism of $\hat{\mh H}_r$-modules. \begin{lem}\label{lem:1.16} $\mr{gl. \, dim} (\mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural )) \leq \mr{sup}_{r \in \C} \, \mr{gl.
\, dim}(\hat{\mh H}_r).$ \end{lem} \begin{proof} By Lemma \ref{lem:1.11} we may assume that $\Gamma = \{1\}$, so that $\natural$ disappears. We abbreviate $\mh H = \mh H (\mf t,W,k,\mb r)$. All the algebras in this proof are Noetherian, so by \cite[Proposition 4.1.5]{Wei} their global dimensions equal their Tor-dimensions. We will use both the characterization in terms of Ext-groups and that in terms of Tor-groups, whatever we find more convenient. Let $V$ and $M$ be finitely generated $\mh H$-modules. By \eqref{eq:1.58}, exactness of completion for finitely generated $Z(\mh H)$-modules and \cite[Corollary 3.2.10]{Wei}: \begin{equation}\label{eq:1.30} Z(\hat{\mh H}_r) \otimes_{Z(\mh H)} \mr{Tor}_m^{\mh H} (V,M) \cong \mr{Tor}_m^{\hat{\mh H}_r} (\hat{V}_r, \hat{M}_r) . \end{equation} It is known, e.g. from \cite[Lemma 3]{KNS}, that $V$ and $M$ have projective resolutions consisting of free $Z(\mh H)$-modules of finite rank. It follows that $\mr{Tor}_m^{\mh H} (V,M)$ is finitely generated as $Z (\mh H)$-module. By \cite[Lemma 2.9]{SolTwist}, $\mr{Tor}_m^{\mh H} (V,M)$ is nonzero if and only if its formal completion with respect to some maximal ideal $I$ of $Z(\mh H)$ is nonzero. That happens if and only if \eqref{eq:1.30} is nonzero for the $r \in \C$ with $\mb r - r \in I$. For any $m \leq \mr{gl. \, dim}(\mh H)$ such $V,M$ exist, because finitely generated modules suffice to detect the global dimension of a Noetherian ring \cite[\S 4.1]{Wei}. It follows that $\mr{gl. \, dim}(\hat{\mh H}_r) \geq m$ for the appropriate $r$. \end{proof} It remains to find a good upper bound for the global dimension of $\hat{\mh H}_r$. \begin{thm}\label{thm:1.12} \enuma{ \item For any $r \in \C$, the global dimension of $\hat{\mh H}_r$ equals $\dim_\C (\mf t) + 1$. \item The global dimension of $\mh H (\mf t, W \rtimes \Gamma, k, \mb r, \natural)$ equals $\dim_\C (\mf t) + 1$. } \end{thm} \begin{proof}(a) By Lemma \ref{lem:1.11} it suffices to consider the cases with $\Gamma = \{1\}$. The crucial point of our proof is that the global dimension of the graded Hecke algebra \[ \mh H / (\mb r - r) = \mh H (\mf t,W,rk) \] has already been computed, and equals $\dim_\C (\mf t)$ \cite[Theorem 5.3]{SolGHA}. For any $\mh H /(\mb r - r)$-module $V_1$, \cite[Theorem 4.3.1]{Wei} provides a comparison of projective dimensions \begin{equation}\label{eq:1.67} \mr{pd}_{\mh H}(V_1) \leq \mr{pd}_{\mh H / (\mb r - r)}(V_1) + \mr{pd}_{\mh H}(\mh H / (\mb r - r)) . \end{equation} From the short exact sequence \[ 0 \to \mh H \xrightarrow{\mb r - r} \mh H \to \mh H / (\mb r - r) \to 0 \] we see that $\mr{pd}_{\mh H}(\mh H / (\mb r - r)) = 1$. Hence \begin{equation}\label{eq:1.32} \mr{pd}_{\mh H}(V_1) \leq \mr{pd}_{\mh H / (\mb r - r)}(V_1) + 1 \leq \dim_\C (\mf t) + 1. \end{equation} In other words, $\mr{Tor}_m^{\mh H}(V_1,M) = 0$ for all $m > \dim_\C (\mf t) + 1$. Let $V_2$ be an $\mh H$-module on which $(\mb r - r)^2$ acts as 0. In the short exact sequence \[ 0 \to (\mb r - r) V_2 \to V_2 \to V_2 / (\mb r - r) V_2 \to 0 , \] $\mb r - r$ annihilates both the outer terms, so \eqref{eq:1.32} applies to them. Applying $\mr{Tor}_*^{\mh H}(?,M)$ to this short exact sequence yields a long exact sequence, and taking \eqref{eq:1.32} into account we see that $\mr{Tor}_m^{\mh H}(V_2,M) = 0$ for all $m > \dim_\C (\mf t) + 1$. 
This argument can be applied recursively, and then it shows that \begin{equation}\label{eq:1.33} \mr{Tor}_m^{\mh H}(V_n,M) = 0 \qquad \text{if } m > \dim_\C (\mf t) + 1 \text{ and } (\mb r - r)^n V_n = 0 \text{ for some } n \in \N . \end{equation} Assume now that $V$ and $M$ are finitely generated $\hat{\mh H}_r$-modules. By \eqref{eq:1.61} they are also finitely generated over $Z(\hat{\mh H}_r) = \widehat{Z(\mh H)}_r$, and therefore \begin{equation}\label{eq:1.60} V \cong \varprojlim_n V / I_r^n V \cong \varprojlim_n V / (\mb r - r)^n V . \end{equation} By \eqref{eq:1.30} and \eqref{eq:1.33} \begin{equation}\label{eq:1.34} \mr{Tor}_m^{\hat{\mh H}_r} (V / (\mb r - r)^n V, M) = 0 \qquad \text{for } m > \dim_\C (\mf t) + 1 \text{ and } n \in \N . \end{equation} Let $P_* \to M$ be a resolution by free $\hat{\mh H}_r$-modules $P_i$ of finite rank $\mu_i$ (this is possible because $\hat{\mh H}_r$ is Noetherian). Then \[ \mr{Tor}_m^{\hat{\mh H}_r} (V / (\mb r - r)^n V, M) = H_m \big( V / (\mb r - r)^n V \otimes_{\hat{\mh H}_r} \hat{\mh H}_r^{\mu_*}, d_* \big) = H_m \big( (V / (\mb r - r)^n V )^{\mu_*} , d_* \big) . \] Here the sequence of differential complexes $\big( (V / (\mb r - r)^n V )^{\mu_*} , d_* \big)$, indexed by $n \in \N$, satisfies the Mittag--Leffler condition because the transition maps are surjective. By \eqref{eq:1.60} the inverse limit of the sequence is $(V^{\mu_*}, d_*)$, which computes $\mr{Tor}_m^{\hat{\mh H}_r} (V,M)$. According to \cite[Theorem 3.5.8]{Wei} there is a short exact sequence \[ 0 \to \varprojlim_n \! {}^1 \mr{Tor}_{m+1}^{\hat{\mh H}_r} (V / (\mb r - r)^n V, M) \to \mr{Tor}_m^{\hat{\mh H}_r} (V,M) \to \varprojlim_n \mr{Tor}_m^{\hat{\mh H}_r} (V / (\mb r - r)^n V, M) \to 0. \] For $m > \dim_\C (\mf t) + 1$, \eqref{eq:1.34} shows that both outer terms vanish, so $\mr{Tor}_m^{\hat{\mh H}_r} (V,M) = 0$ as well. Hence \begin{equation}\label{eq:1.59} \text{gl. dim}(\hat{\mh H}_r) \leq \dim_\C (\mf t) + 1 , \end{equation} which already suffices for part (b). Consider the $\hat{\mh H}_r$-module \[ V_{\lambda,r} := \ind_{\mc O (\mf t \oplus \C)}^{\mh H} \C_{\lambda,r} = \ind_{\widehat{\mc O (\mf t \oplus \C)}_r}^{\hat{\mh H}_r} \C_{\lambda,r} \] from \eqref{eq:1.35}. Analogous to \eqref{eq:1.65} and \eqref{eq:1.66}, one computes that \[ \mr{Ext}^n_{\hat{\mh H}_r}(V_{\lambda,r},V_{\lambda,r}) \neq 0 \qquad \text{for } 0 \leq n \leq \dim_\C (\mf t) + 1 , \] which shows that \eqref{eq:1.59} is actually an equality.\\ (b) This follows from Lemmas \ref{lem:1.13} and \ref{lem:1.16} in combination with \eqref{eq:1.59}. \end{proof} \section{Equivariant sheaves and equivariant cohomology} \label{sec:cuspidal} We follow the setup from \cite{Lus-Cusp1,Lus-Cusp2,AMS1,AMS2}. In these references a graded Hecke algebra was associated to a cuspidal local system on a nilpotent orbit for a complex reductive group, via equivariant cohomology. For future applications to Langlands parameters we deal not only with connected groups, but also with disconnected reductive groups $G$. We work in the $G$-equivariant bounded derived category $\mc D^b_G (X)$, as in \cite{BeLu}, \cite[\S 1]{Lus-Cusp1} and \cite[\S 1]{Lus-Cusp2}. The formalism of \cite{BeLu} entails that (for non-discrete groups) this is not exactly the bounded derived category of the category of $G$-equivariant constructible sheaves on a $G$-variety $X$. 
Morphisms in $\mc D_G^b (X)$ are defined via a resolution of $X$ by $G$-varieties $Y$ as in \cite{BeLu}, and on each such $Y$ we use morphisms in a (non-equivariant) derived category of sheaves. In general, checking that an object or a morphism belongs to $\mc D^b_G (X)$ can be reduced to two steps: \begin{itemize} \item show that it belongs to $\mc D^b_{G^\circ}(X)$, \item show that the $G^\circ$-equivariant structure extends to a $G$-equivariant structure. \end{itemize} Typically the first step above is much harder than the second step, which is about abstract group actions only. The reason for this structure of $\mc D^b_G (X)$ is that from any $G^\circ$-resolution $Y$ of a $G$-variety $X$, one obtains a $G$-resolution $G \times^{G^\circ} Y$ of $X$, and those resolutions suffice to study $\mc D_G^b (X)$ as constructed in \cite[\S 2]{BeLu}. This entails for instance that a morphism in $\mc D^b_G (X)$ is an isomorphism if and only if it becomes an isomorphism in $\mc D^b_{G^\circ}(X)$. Equivariant cohomology for objects of $\mc D^b_G (X)$ is defined via push-forward to a point, representing the result as a complex of sheaves on a classifying space $\mf B G$ for $G$ and then taking cohomology in $\mc D^b (\mf B G)$. For more background we refer to \cite[Chapter 6]{Ach}. We will use some notations and conventions from \cite{Lus-Cusp2}; in particular, functors from or to $\mc D_G^b (X)$ are by default derived functors. Let $[n]$ be the functor that shifts degrees by $n$. For objects $A,B$ of $\mc D^b_G (X)$ and $n \in \Z$, we write \[ \Hom^n_{\mc D^b_G (X)} (A, B) = \Hom_{\mc D^b_G (X)} (A, B[n]) . \] In the case $A = B$ one obtains the graded algebra \[ \mr{End}^*_{\mc D^b_G (X)}(A) = \bigoplus\nolimits_{n \in \Z} \Hom^n_{\mc D^b_G (X)} (A, A) . \] \subsection{Geometric construction of graded Hecke algebras} \ \label{par:geomConst} Recall from \cite{AMS1} that a quasi-Levi subgroup of $G$ is a group of the form $M = Z_G (Z(L)^\circ)$, where $L$ is a Levi subgroup of $G^\circ$. Thus $Z (M)^\circ = Z(L)^\circ$ and $M \longleftrightarrow L = M^\circ$ is a bijection between the quasi-Levi subgroups of $G$ and the Levi subgroups of $G^\circ$. \begin{defn}\label{def:2.5} A cuspidal quasi-support for $G$ is a triple $(M,\cC_v^M,q\cE)$ where: \begin{itemize} \item $M$ is a quasi-Levi subgroup of $G$; \item $\cC_v^M$ is the $\Ad(M)$-orbit of a nilpotent element $v \in \mf m = \mathrm{Lie}(M)$; \item $q \cE$ is an $M$-equivariant cuspidal local system on $\cC_v^M$, i.e. as an $M^\circ$-equivariant local system it is a direct sum of cuspidal local systems. \end{itemize} \end{defn} We denote the $G$-conjugacy class of $(M,\cC_v^M,q \cE)$ by $[M,\cC_v^M,q\cE]_G$. With this cuspidal quasi-support we associate the groups \begin{equation}\label{eq:3.15} N_G (q \cE) = \mathrm{Stab}_{N_G (M)}(q \cE) \quad \text{and} \quad W_{q \cE} = N_G (q \cE) / M . \end{equation} Let $\mf g_N$ be the variety of nilpotent elements in the Lie algebra $\mf g = \mr{Lie}(G)$. Cuspidal quasi-supports are useful to partition the set of $G$-equivariant local systems on Ad$(G)$-orbits in $\mf g_N$. Let $\cE$ be an irreducible constituent of $q\cE$ as $M^\circ$-equivariant local system on $\cC_v^M$ (which by the cuspidality of $\cE$ equals the Ad$(M^\circ)$-orbit of $v$). Then \[ W_\cE^\circ := N_{G^\circ}(M^\circ) / M^\circ \cong N_{G^\circ} (M^\circ) M / M \] is a subgroup of $W_{q\cE}$. It is normal because $G^\circ$ is normal in $G$. Write $T = Z(M)^\circ$ and $\mf t = \mr{Lie}(T)$.
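For example, suppose that $G$ is connected and that $M = T$ is a maximal torus of $G$ (so $L = T$), with $\cC_v^M = \{0\}$ and $q\cE$ the trivial local system, which is automatically cuspidal for a torus. Then $N_G (q\cE) = N_G (T)$ and
\[
W_{q\cE} = W_\cE^\circ = W (G,T) ,
\]
and the constructions below recover the setting of the classical Springer correspondence. This principal case may serve as a guiding example throughout this paragraph.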
It is known from \cite[Proposition 2.2]{Lus-Cusp1} that $R(G^\circ,T) \subset \mf t^\vee$ is a root system with Weyl group $W_\cE^\circ$. Let $P^\circ$ be a parabolic subgroup of $G^\circ$ with Levi decomposition $P^\circ = M^\circ \ltimes U$. The definition of $M$ entails that it normalizes $U$, so \[ P := M \ltimes U \] is again a group, a ``quasi-parabolic" subgroup of $G$. Since $W_\cE^\circ = W(R(G^\circ,T))$, all possible $P$ are conjugate by elements of $N_{G^\circ}(M^\circ)$. We put \begin{align*} & N_G (P,q\cE) = N_G (P,M) \cap N_G (q\cE) , \\ & \Gamma_{q\cE} = N_G (P,q\cE) / M . \end{align*} The same proof as for \cite[Lemma 2.1.b]{AMS2} shows that \begin{equation}\label{eq:3.1} W_{q\cE} = W_\cE^\circ \rtimes \Gamma_{q\cE} . \end{equation} The $W_{q\cE}$-action on $T$ gives rise to an action of $W_{q\cE}$ on $\mc O (\mf t) = S(\mf t^\vee)$. We specify our parameters $k (\alpha)$. For $\alpha$ in the root system $R(G^\circ,T)$, let $\mf g_\alpha \subset \mf g$ be the associated eigenspace for the $T$-action. Let $\Delta_P$ be the set of roots in $R(G^\circ,T)$ which are simple with respect to $P$. For $\alpha \in \Delta_P$ we define $k (\alpha) \in \Z_{\geq 2}$ by \begin{equation}\label{eq:1.5} \begin{array}{ccc} \ad (v)^{k (\alpha) - 2} : & \mf g_{\alpha} \oplus \mf g_{2 \alpha} \to \mf g_{\alpha} \oplus \mf g_{2 \alpha} & \text{ is nonzero,} \\ \ad (v)^{k (\alpha) - 1} : & \mf g_{\alpha} \oplus \mf g_{2 \alpha} \to \mf g_{\alpha} \oplus \mf g_{2 \alpha} & \text{ is zero.} \end{array} \end{equation} Then $(k (\alpha) )_{\alpha \in \Delta_P}$ extends to a $W_{q\cE}$-invariant function $k : R(G^\circ,T)_{\mathrm{red}} \to \C$, where the subscript ``red" indicates the set of reduced (or indivisible) roots. Let $\natural : (W_{q\cE} / W_\cE^\circ)^2 \to \C^\times$ be a 2-cocycle (to be specified later). To these data we associate the twisted graded Hecke algebra $\mh H (\mf t, W_{q\cE},k,\mb{r},\natural)$, as in Proposition \ref{prop:1.1}. To make the connection of the above twisted graded Hecke algebra with the cuspidal local system $q\cE$ complete, we involve the geometry of $G$ and $\mf g$. Write \[ \mf t_\reg = \{ x \in \mf t : Z_{\mf g}(x) = \mf l \} \quad \text{and} \quad \mf g_{RS} = \Ad (G) (\cC_v^M \oplus \mf t_\reg \oplus \mf u) . \] Consider the varieties \begin{align*} & \dot{\mf g} = \{ (X,gP) \in \mf g \times G/P : \Ad (g^{-1}) X \in \cC_v^M \oplus \mf t \oplus \mf u \} , \\ & \dot{\mf g}^{\circ} = \{ (X,gP) \in \mf g \times G^{\circ}/P^\circ : \Ad (g^{-1}) X \in \cC_v^M \oplus \mf t \oplus \mf u \}, \\ & \dot{\mf g}_{RS} = \dot{\mf g} \cap (\mf g_{RS} \times G/P) ,\\ & \dot{\mf g}_N = \dot{\mf g} \cap (\mf g_N \times G/P) . \end{align*} We let $G \times \C^\times$ act on these varieties by \[ (g_1,\lambda) \cdot (X,gP) = (\lambda^{-2} \Ad (g_1) X, g_1 g P) . \] By \cite[Proposition 4.2]{Lus-Cusp1} there is a natural isomorphism \begin{equation}\label{eq:1.1} H^*_{G \times \C^\times} (\dot{\mf g}) \cong \mc O (\mf t) \otimes_\C \C [\mb r] . \end{equation} The same calculation (omitting $\mf t$ from the definition of $\dot{\mf g}$) shows that \begin{equation}\label{eq:1.50} H^*_{G \times \C^\times} (\dot{\mf g}_N) \cong \mc O (\mf t) \otimes_\C \C [\mb r] . \end{equation} Consider the maps \begin{equation}\label{eq:1.2} \begin{aligned} & \cC_v^M \xleftarrow{f_1} \{ (X,g) \in \mf g \times G : \Ad (g^{-1}) X \in \cC_v^M \oplus \mf t \oplus \mf u\} \xrightarrow{f_2} \dot{\mf g} , \\ & f_1 (X,g) = \mathrm{pr}_{\cC_v^M}(\Ad (g^{-1}) X) , \hspace{2cm} f_2 (X,g) = (X,gP) .
\end{aligned} \end{equation} The group $G \times \C^\times \times P$ acts on $\{ (X,g) \in \mf g \times G : \Ad (g^{-1}) X \in \cC_v^M \oplus \mf t \oplus \mf u\}$ by \[ (g_1,\lambda,p) \cdot (X,g) = ( \lambda^{-2} \Ad (g_1)X ,g_1 g p). \] Notice that the local system $q\cE$ on $\cC_v^M$ is $M \times \C^\times$-equivariant, because $\C^\times$ is connected and stabilizes nilpotent $M$-orbits. Further $f_1$ is constant on $G$-orbits, so $f_1^* q\cE$ is naturally a $G \times \C^\times$-equivariant local system. Let $\dot{q\cE}$ be the unique $G \times \C^\times $-equivariant local system on $\dot{\mf g}$ such that $f_2^* \dot{q\cE} = f_1^* q\cE$. Let $\mr{pr}_1 : \dot{\mf g} \to \mf g$ be the projection on the first coordinate. When $G$ is connected, Lusztig \cite{Lus-Cusp2} has constructed graded Hecke algebras from \[ K := \pr_{1,!} \dot{q\cE} \quad \in \mc D^b_{G \times \C^\times} (\mf g ) . \] For our purposes the pullback $K_N$ of $K$ to the nilpotent variety $\mf g_N \subset \mf g$ will be more suitable than $K$ itself. We can relate $\dot{\mf g}$ and $K$ to their versions for $G^\circ$, as follows. Write \begin{equation}\label{eq:1.13} G = \bigsqcup_{\gamma \in N_G (P,M) / M} G^\circ \gamma M / M \quad \text{and} \quad G/P = \bigsqcup_{\gamma \in N_G (P,M) / M} G^\circ \gamma P / P . \end{equation} Then we can decompose \begin{multline}\label{eq:1.21} \dot{\mf g} \; = \; \bigsqcup\nolimits_{\gamma \in N_G (P,M) / M} \dot{\mf g}^\circ_\gamma \; := \; \\ \hspace{-28mm} \bigsqcup\nolimits_{\gamma \in N_G (P,M) / M} \big\{ (X, g \gamma P) \in \dot{\mf g} : g \in G^\circ \big\} \cong \\ \bigsqcup_{\gamma \in N_G (P,M) / M} \hspace{-4mm} \big\{ (X,g \gamma P^\circ \gamma^{-1}) : X \in \mf g, g \in G^\circ / \gamma P^\circ \gamma^{-1}, \Ad (g^{-1}) X \in \Ad (\gamma) (\cC_v^M + \mf t + \mf u) \big\} . \end{multline} Here each term $\dot{\mf g}^\circ_\gamma$ is a twisted version of $\dot{\mf g}^\circ$. Consequently $K$ is a direct sum of $G^\circ \times \C^\times$-equivariant subobjects, each of which is a twist of the $K$ for $(G^\circ M, \cC_v^M, q\cE)$ by an element of $N_G (P,M) / M$. Let $\dot{q \cE}_{RS}$ be the pullback of $\dot{q\cE}$ to $\dot{\mf g}_{RS}$. Let $\IC_{G \times \C^\times} (\mf g \times G/P,\dot{q \cE}_{RS})$ be the equivariant intersection cohomology complex determined by $\dot{q \cE}_{RS}$. This is just the usual intersection cohomology complex in $\mc D^b (\mf g \times G/P)$, but with its $G \times \C^\times$-equivariant structure. It is supported on the closure of $\dot{\mf g}_{RS}$ in $\mf g \times G/P$, a domain on which $\mr{pr}_1$ becomes proper. The map \[ \mathrm{pr}_{1,RS} : \dot{\mf g}_{RS} \to \mf g_{RS} \] is a fibration with fibre $N_G (M) / M$, so $(\pr_{1,RS})_! \dot{q\cE}_{RS}$ is a local system on $\mf g_{RS}$. It is shown in \cite[\S 3.4.a]{Lus-Cusp1} and \cite[Proposition 7.12.c]{Lus-Cusp2} that \begin{equation}\label{eq:1.4} K \cong \mr{pr}_{1,!} \, \IC_{G \times \C^\times} (\mf g\times G/P,\dot{q \cE}_{RS}) \cong \IC_{G \times \C^\times}(\mf g, (\pr_{1,RS})_! \dot{q\cE}_{RS}) . \end{equation} Although in these references $G$ is assumed to be connected and $G \times \C^\times$-equivariance is not mentioned, the entire argument in \cite[\S 3.4]{Lus-Cusp2} can be placed in the appropriate $G \times \C^\times$-equivariant derived categories. The right hand side of \eqref{eq:1.4} shows that $K$ is a direct sum of simple perverse sheaves with support $\overline{\mf g_{RS}}$. 
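To keep a concrete picture in mind: in the principal case ($G$ connected, $M = T$ a maximal torus, $q\cE$ trivial, so that $P = B$ is a Borel subgroup), $\dot{\mf g} = \{ (X,gB) \in \mf g \times G/B : \Ad (g^{-1}) X \in \mr{Lie}(B) \}$ is the Grothendieck--Springer variety, $\mf g_{RS}$ is the set of regular semisimple elements of $\mf g$, $\mr{pr}_{1,RS}$ is a Galois covering with group $W(G,T)$, and $K \cong \mr{pr}_{1,!} \, \C_{\dot{\mf g}}$ is the $G \times \C^\times$-equivariant Grothendieck--Springer sheaf.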
Further, \cite[Lemma 5.4]{AMS1} and \cite[Proposition 7.14]{Lus-Cusp2} say that \begin{equation}\label{eq:1.10} \C[W_{q \cE},\natural_{q \cE}] \cong \End^0_{\mc D^b_{G \times \C^\times}(\mf g_{RS})} \big( (\mathrm{pr}_{1,RS})_! \dot{q \cE}_{RS} \big) \cong \End^0_{\mc D^b_{G \times \C^\times}(\mf g)} ( K ) , \end{equation} where $\natural_{q\cE} : (W_{q\cE} / W_\cE^\circ )^2 \to \C^\times$ is a suitable 2-cocycle. As in \cite[(8)]{AMS2}, we record the subalgebra of endomorphisms that stabilize Lie$(P)$: \begin{equation}\label{eq:3.60} \End^0_{\mc D^b_{G \times \C^\times}(\mf g)} \big( \mathrm{pr}_{1,!} \dot{q \cE} \big)_P \cong \C[\Gamma_{q \cE},\natural_{q \cE}] . \end{equation} Now we associate to $(M,\cC_v^M,q\cE)$ the twisted graded Hecke algebra \[ \mh H (G,M,q\cE) := \mh H (\mf t, W_{q\cE},k ,\mb r,\natural_{q \cE}) , \] where the parameters $k (\alpha)$ come from \eqref{eq:1.5}. As in \cite[Lemma 2.8]{AMS2}, we can regard it as \[ \mh H (G,M,q\cE) = \mh H (\mf t, W_\cE^\circ,k ,\mb r) \rtimes \End^0_{\mc D^b_{G \times \C^\times}(\mf g)} \big( \mathrm{pr}_{1,!} \dot{q \cE} \big)_P , \] and then it depends canonically on $(G,M,q\cE)$. We note that \eqref{eq:3.1} implies \begin{equation}\label{eq:3.13} \mh H (G^\circ N_G (P,q\cE),M,q\cE) = \mh H (G,M,q\cE) . \end{equation} There is also a purely geometric realization of this algebra. For $\Ad (G) \times \C^\times$-stable subvarieties $\cV$ of $\mf g$, we define, as in \cite[\S 3]{Lus-Cusp1}, \begin{equation}\label{eq:1.27} \begin{aligned} & \dot{\cV} = \{ (X,gP) \in \dot{\mf g} : X \in \cV \} , \\ & \ddot{\cV} = \{ (X,gP,g' P) : (X,g P) \in \dot{\cV}, (X,g' P) \in \dot{\cV} \} . \end{aligned} \end{equation} Let $q\cE^\vee$ be the dual equivariant local system on $\cC^M_v$, which is also cuspidal. It gives rise to $K^\vee = \pr_{1,!} \dot{q\cE}^\vee$, another equivariant intersection cohomology complex on $\mf g$. The two projections $\pi_{12}, \pi_{13} : \ddot{\cV} \to \dot{\cV}$ give rise to a $G \times \C^\times$-equivariant local system \[ \ddot{q\cE} = (\pi_{12} \times \pi_{13})^* \big( \dot{q\cE} \boxtimes \dot{q\cE}^\vee \big) \quad \text{on} \quad \ddot{\cV}, \] which carries a natural action of \eqref{eq:1.1}. As in \cite{Lus-Cusp1}, the action of $\C [W_{q\cE},\natural_{q\cE}^{-1}]$ on $K^\vee$ leads to \begin{equation}\label{eq:1.3} \text{actions of } \C [W_{q\cE},\natural_{q\cE}] \otimes \C [W_{q\cE},\natural_{q\cE}^{-1}] \text{ on } \ddot{q\cE} \text{ and on } H_j^{G^\circ \times \C^\times} \big( \ddot{\cV},\ddot{q\cE} \big) . \end{equation} In \cite{Lus-Cusp1} and \cite[\S 2]{AMS2} a left action $\Delta$ and a right action $\Delta'$ of $\mh H (G,M,q\cE)$ on $H_*^{G \times \C^\times}(\ddot{\mf g_N}, \ddot{q\cE})$ are constructed. \begin{thm}\label{thm:1.2} \enuma{ \item The actions $\Delta$ and $\Delta'$ identify $H_*^{G \times \C^\times}(\ddot{\mf g}, \ddot{q\cE})$ and \\ $H_*^{G \times \C^\times}(\ddot{\mf g_N}, \ddot{q\cE})$ with the biregular representation of $\mh H (G,M,q\cE)$. \item Methods from equivariant cohomology provide natural isomorphisms of graded vector spaces \[ \begin{array}{lll} \End^*_{\mc D^b_{G \times \C^\times}(\mf g)}(K) & \cong & H_*^{G \times \C^\times}(\ddot{\mf g}, \ddot{q\cE}) ,\\ \End^*_{\mc D^b_{G \times \C^\times}(\mf g_N)}(K_N) & \cong & H_*^{G \times \C^\times}(\ddot{\mf g_N}, \ddot{q\cE}) . 
\end{array} \] \item Parts (a) and (b) induce canonical isomorphisms of graded algebras \[ \mh H (G,M,q\cE) \to \End^*_{\mc D^b_{G \times \C^\times}(\mf g)}(K) \to \End^*_{\mc D^b_{G \times \C^\times}(\mf g_N)}(K_N) . \] } \end{thm}
\begin{proof} (a) When $G$ is connected, this is shown for $\ddot{\mf g_N}$ in \cite[Corollary 6.4]{Lus-Cusp1} and for $\ddot{\mf g}$ in the proof of \cite[Theorem 8.11]{Lus-Cusp2}, based on \cite{Lus-Cusp1}. In \cite[Corollary 2.9 and \S 4]{AMS2} both are generalized to possibly disconnected $G$. \\ (b) For $(\mf g , K)$ with $G$ connected this is the beginning of the proof of \cite[Theorem 8.11]{Lus-Cusp2}. The same argument applies when $G$ is disconnected, and with $(\mf g_N, K_N)$ instead of $(\mf g,K)$.\\ (c) In \cite[Theorem 8.11]{Lus-Cusp2} the first isomorphism is shown when $G$ is connected. Using parts (a,b) the same argument applies when $G$ is disconnected. Similarly we obtain \[ \mh H (G,M,q\cE) \cong \End^*_{\mc D^b_{G \times \C^\times}(\mf g_N)}(K_N) . \] These two graded algebra isomorphisms are linked via parts (a,b) and functoriality for the inclusion $\mf g_N \to \mf g$. \end{proof}
\subsection{Semisimplicity of some complexes of sheaves} \ \label{par:semisimplicity}
For an alternative construction of $\dot{q\cE}$ and $K$, we consider the isomorphism of $G \times \C^\times$-varieties
\begin{equation}\label{eq:1.38} \begin{array}{ccc} G \times^P (\cC_v^M \oplus \mf t \oplus \mf u) & \to & \dot{\mf g} \\ (g,X) & \mapsto & (\mr{Ad}(g) X, gP) \end{array}. \end{equation}
We note that the middle term in \eqref{eq:1.2} is isomorphic to $G \times ( \cC_v^M \oplus \mf t \oplus \mf u )$ via the map $(X,g) \mapsto (g,\mr{Ad}(g^{-1})X)$. In these terms, \eqref{eq:1.2} becomes
\begin{equation}\label{eq:1.51} \cC_v^M \xleftarrow{f'_1} G \times ( \cC_v^M \oplus \mf t \oplus \mf u ) \xrightarrow{f'_2} G \times^P (\cC_v^M \oplus \mf t \oplus \mf u), \end{equation}
with the natural maps $f'_1$ and $f'_2$. We get $\dot{q\cE}$ as a $G \times \C^\times$-equivariant local system on $G \times^P (\cC_v^M \oplus \mf t \oplus \mf u)$, satisfying $f^{'*}_2 \dot{q\cE} = f^{'*}_1 q\cE$. In this setup $\mr{pr}_1$ is replaced by
\begin{equation}\label{eq:1.40} \begin{array}{cccc} \mu : & G \times^P (\cC_v^M \oplus \mf t \oplus \mf u) & \to & \mf g \\ & (g,X) & \mapsto & \mr{Ad}(g)X \end{array} \end{equation}
and then
\begin{equation}\label{eq:1.46} K = \mu_! \, \dot{q\cE} . \end{equation}
Recall that we defined $K_N$ as the pullback of $K$ to the variety $\mf g_N$, and that $K$ is a semisimple complex (that is, isomorphic to a direct sum of simple perverse sheaves, maybe with degree shifts). We will prove that $K_N$ is also a semisimple complex of sheaves. We write
\[ \dot{\mf g}_N = \dot{\mf g} \cap (\mf g_N \times G/P) . \]
The maps \eqref{eq:1.2} restrict to
\begin{equation}\label{eq:1.36} \cC_v^M \xleftarrow{f_{1,N}} \{ (X,g) \in \mf g_N \times G : \Ad (g^{-1}) X \in \cC_v^M \oplus \mf u\} \xrightarrow{f_{2,N}} \dot{\mf g}_N , \end{equation}
which allows us to define a local system $\dot{q\cE}_N$ on $\dot{\mf g}_N$ by $f_{2,N}^* \dot{q\cE}_N = f_{1,N}^* q\cE$. Then $\dot{q\cE}_N$ is the pullback of $\dot{q\cE}$ to $\dot{\mf g}_N$, because $f_{1,N}^* q\cE$ is the restriction of $f_1^* q\cE$. Let $\mr{pr}_{1,N}$ be the restriction of $\mr{pr}_1$ to $\dot{\mf g}_N$. 
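The passage from $K$ to $K_N$ will repeatedly use base change along the inclusion of the nilpotent variety. For the reader's convenience we spell out the isomorphism in the form in which we use it (writing $i : \mf g_N \to \mf g$ and $i' : \dot{\mf g}_N \to \dot{\mf g}$ for the inclusions; this notation is needed only here):
\[ i^* \, \mr{pr}_{1,!} \, \mc F \; \cong \; \mr{pr}_{1,N,!} \, (i')^* \mc F \qquad \text{for } \mc F \in \mc D^b_{G \times \C^\times}(\dot{\mf g}) . \]
Applied to $\mc F = \dot{q\cE}$, this gives the description of $K_N$ recorded below.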
From the Cartesian diagram
\begin{equation}\label{eq:1.48} \xymatrix{ \dot{\mf g}_N \ar[r] \ar[d]^{\mr{pr}_{1,N}} & \dot{\mf g} \ar[d]^{\mr{pr}_1} \\ \mf g_N \ar[r] & \mf g } \end{equation}
we see with base change \cite[Theorem 3.4.3]{BeLu} that
\begin{equation}\label{eq:1.37} \mr{pr}_{1,N,!} \, \dot{q\cE}_N \text{ equals the pullback } K_N \text{ of } K \text{ to } \mf g_N . \end{equation}
\begin{prop}\label{prop:1.14} There is a natural isomorphism \[ K_N \cong \mr{pr}_{1,N,!} \, \IC_{G \times \C^\times} (\mf g_N \times G/P, \dot{q\cE}_N) . \] \end{prop}
\begin{proof} Notice that the middle term in \eqref{eq:1.36} is isomorphic with $G \times (\cC_v^M \oplus \mf u)$ and that \eqref{eq:1.38} provides an isomorphism \[ \dot{\mf g}_N \cong G \times^P (\cC_v^M \oplus \mf u ). \]
With the commutative diagram
\begin{equation}\label{eq:1.39} \xymatrix{ \cC_v^M \ar[d] & & \cC_v^M \oplus \mf u \ar[ll]^{\mr{pr}_{\cC_v^M}} \ar[d] \\ G \times^P \cC_v^M & & G \times^P (\cC_v^M \oplus \mf u) \ar[ll]^{\mr{id}_G \times \mr{pr}_{\cC_v^M}} } \end{equation}
we can construct $\dot{q\cE}_N \in \mc D^b_{G \times \C^\times}(G \times^P (\cC_v^M \oplus \mf u))$ in two equivalent ways:
\begin{itemize} \item pullback of $q\cE$ to $\cC_v^M \oplus \mf u$ (as a $P \times \C^\times$-equivariant local system) and then equivariant induction $\mr{ind}_{P \times \C^\times}^{G \times \C^\times}$ as in \cite[\S 2.6.3]{BeLu}; \item equivariant induction $\mr{ind}_{P \times \C^\times}^{G \times \C^\times}$ of $q\cE$ to $G \times^P \cC_v^M$ and then pullback to \\ $G \times^P (\cC_v^M \oplus \mf u)$. \end{itemize}
In these terms
\begin{equation}\label{eq:1.47} K_N = \mu_{N,!} \, \dot{q\cE}_N , \end{equation}
where $\mu_N : G \times^P (\cC_v^M \oplus \mf u) \to \mf g_N$ is the restriction of \eqref{eq:1.40}. Let $j_{\mf m_N} : \cC_v^M \to \mf m_N$ be the inclusion. Then
\begin{equation}\label{eq:1.41} K_N = \mu_{N,!} (\mr{id}_G \times j_{\mf m_N} \times \mr{id}_{\mf u})_! \dot{q\cE}_N , \end{equation}
where now the domain of $\mu_N$ is $G \times^P (\mf m_N \oplus \mf u)$. Regarded as an $M^\circ \times \C^\times$-equivariant local system on $\cC_v^M$, $q\cE$ is a direct sum of irreducible cuspidal local systems $\cE$. Each of those $\cE$ is clean \cite[Theorem 23.1]{Lus-CharV}, which means that the natural maps \[ j_{\mf m_N,!} \cE \to \IC (\mf m_N, \cE) \to j_{\mf m_N,*} \cE \] are isomorphisms in $\mc D^b_{M^\circ \times \C^\times}(\mf m_N)$. Taking direct sums over the appropriate $\cE$, we find that the maps
\begin{equation}\label{eq:1.49} j_{\mf m_N,!} q\cE \to \IC (\mf m_N, q\cE) \to j_{\mf m_N,*} q\cE \end{equation}
are isomorphisms in $\mc D^b_{M^\circ \times \C^\times}(\mf m_N)$ as well. In fact, since the maps in \eqref{eq:1.49} are $M \times \C^\times$-equivariant, they are also isomorphisms in $\mc D^b_{M \times \C^\times}(\mf m_N)$ (see the remarks at the start of Section \ref{sec:cuspidal}). In other words, $q\cE$ is also clean. In the diagram \eqref{eq:1.39} the map $\mr{pr}_{\cC_v^M}$ extends naturally to \[ \mr{pr}_{\mf m_N} : \mf m_N \oplus \mf u \to \mf m_N , \] and both are trivial vector bundles. Hence (up to degree shifts)
\begin{multline}\label{eq:1.42} \mr{pr}_{\mf m_N}^* j_{\mf m_N,!} q\cE = \mr{pr}_{\mf m_N}^* \IC_{P \times \C^\times} (\mf m_N, q\cE) = \mr{pr}_{\mf m_N}^* j_{\mf m_N,*} q\cE = \\ (j_{\mf m_N} \times \mr{id}_{\mf u})_* \mr{pr}_{\cC_v^M}^* q\cE = \IC_{P \times \C^\times} (\mf m_N \oplus \mf u, \mr{pr}_{\cC_v^M}^* q\cE ) = (j_{\mf m_N} \times \mr{id}_{\mf u})_! \mr{pr}_{\cC_v^M}^* q\cE . 
\end{multline} The vertical maps in \eqref{eq:1.39} induce equivalences of categories $\mr{ind}_{P \times \C^\times}^{G \times \C^\times}$, which commute with the relevant functors induced by the horizontal maps in \eqref{eq:1.39}, so \begin{equation}\label{eq:1.43} \begin{aligned} (\mr{id}_G \times j_{\mf m_N} \times \mr{id}_{\mf u})_! \dot{q\cE}_N & = (\mr{id}_G \times j_{\mf m_N} \times \mr{id}_{\mf u})_! \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \mr{pr}_{\cC_v^M}^* q\cE \\ & = \mr{ind}_{P \times \C^\times}^{G \times \C^\times} (j_{\mf m_N} \times \mr{id}_{\mf u})_! \mr{pr}_{\cC_v^M}^* q\cE \\ & = \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \IC_{P \times \C^\times} (\mf m_N \oplus \mf u, \mr{pr}_{\cC_v^M}^* q\cE ) \\ & = \IC_{G \times \C^\times} \big( G \times^P (\mf m_N \oplus \mf u), \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \mr{pr}_{\cC_v^M}^* q\cE \big) \\ & = \IC_{G \times \C^\times} \big( G \times^P (\mf m_N \oplus \mf u), \dot{q\cE}_N \big) . \end{aligned} \end{equation} Since $G \times^P (\mf m_N \oplus \mf u)$ is closed in $G \times^P \mf g_N$, the last expression is isomorphic with \begin{equation}\label{eq:1.55} \IC_{G \times \C^\times} (G \times^P \mf g_N , \dot{q\cE}_N ). \end{equation} Via the isomorphism \begin{equation}\label{eq:1.57} G \times^P \mf g_N \cong \mf g_N \times G/P \end{equation} obtained from \eqref{eq:1.38} by restriction, \eqref{eq:1.55} becomes $\IC_{G \times \C^\times} (\mf g_N \times G/P, \dot{q\cE}_N )$. Combine that with \eqref{eq:1.41} and \eqref{eq:1.43}. \end{proof} The following method to prove semisimplicity of $K_N$ is based on the decomposition theorem for perverse sheaves of geometric origin \cite[Th\'eor\`eme 6.2.5]{BBD}. The same method can be applied to $K$, using the first isomorphism in \eqref{eq:1.4}. \begin{lem}\label{lem:1.17} $K_N$ is a semisimple object of $\mc D^b_{G \times \C^\times} (\mf g_N)$. \end{lem} \begin{proof} By construction every $M^\circ$-equivariant (cuspidal) local system on a Ad$(M^\circ)$-orbit in $\mf m_N$ is geometric. The automorphism group $\mr{Aut}(M^\circ_{\mr{der}})$ of the derived subgroup of $M^\circ$ is algebraic and defined over $\Z$. The action of $M$ on $\mf m_N$ factors through $\mr{Aut}(M^\circ_{\mr{der}})$, and hence the cuspidal local system $q\cE$ on $\cC_v^M$ is of geometric origin. Like for $M$, the automorphism group of $G^\circ_\der$ is algebraic and defined over $\Z$, and the adjoint actions of $G$ and $P$ on $\mf g$ factor via that group. Therefore not only $\pr_{\cC_v^M}^* q\cE$ but also \[ \dot{q\cE}_N = \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \pr_{\cC_v^M}^* q\cE \in \mc D^b_{G \times \C^\times} (G \times^P \mf m_N \oplus \mf u) \] is of geometric origin. As the isomorphism \eqref{eq:1.57} only involves $G$ via the adjoint action, it follows that $\IC_{G \times \C^\times} (\mf g_N \times G / P, \dot{q\cE}_N)$ is of geometric origin as well. Since \[ \pr_{1,N} : \mf g_N \times G/P \to \mf g_N \] is proper, we can apply the decomposition theorem for equivariant perverse sheaves \cite[\S 5.3.1]{BeLu} to Proposition \ref{prop:1.14}. This is based on the non-equivariant version from \cite[\S 6]{BBD}, and therefore requires objects of geometric origin. 
\end{proof} For compatibility with other papers we record that, by \eqref{eq:1.41}, \eqref{eq:1.42} and \eqref{eq:1.43}: \begin{equation}\label{eq:1.44} \begin{aligned} K_N & \cong \mu_{N,!} \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \IC_{P \times \C^\times} (\mf m_N \oplus \mf u, \mr{pr}_{\cC_v^M}^* q\cE ) \\ & \cong \mu_{N,!} \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \mr{pr}_{\mf m_N}^* \IC_{P \times \C^\times} (\mf m_N, q\cE) \\ & \cong \mu_{N,!} (\mr{id}_G \times \mr{pr}_{\mf m_N})^* \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \IC_{P \times \C^\times} (\mf m_N, q\cE) . \end{aligned} \end{equation} Like in \cite[\S 8.4]{Ach}, the diagram \begin{equation}\label{eq:1.52} \mf m_N \to G \times^P \mf m_N \xleftarrow{\mr{id}_G \times \mr{pr}_{\mf m_N}} G \times^P (\mf m_N \oplus \mf u) \xrightarrow{\mu_N} \mf g_N \end{equation} gives rise to a parabolic induction functor \begin{equation}\label{eq:1.53} \mc I_{P \times \C^\times, \mf m_N}^{G \times \C^\times} = \mu_{N,!} (\mr{id}_G \times \mr{pr}_{\mf m_N})^* \mr{ind}_{P \times \C^\times}^{G \times \C^\times} : \mc D^b_{P \times \C^\times} (\mf m_N) \to \mc D^b_{G \times \C^\times} (\mf g_N) . \end{equation} Since $U \subset P$ is contractible and acts trivially on $\mf m_N$, inflation along the quotient map $P \to M$ induces an equivalence of categories \[ \mc D^b_{P \times \C^\times} (\mf m_N) \cong \mc D^b_{M \times \C^\times} (\mf m_N) . \] With these notions \eqref{eq:1.44} says precisely that \begin{equation}\label{eq:1.45} K_N \cong \mc I_{P \times \C^\times, \mf m_N}^{G \times \C^\times} \, \IC_{M \times \C^\times} (\mf m_N, q\cE) . \end{equation} For later use we also mention the parabolic restriction functor \begin{equation} \mc R_{P \times \C^\times, \mf m_N}^{G \times \C^\times} = \big( \mr{ind}_{P \times \C^\times }^{G \times \C^\times} \big)^{-1} (\mr{id}_G \times \mr{pr}_{\mf m_N})_* \mu_N^! : \mc D^b_{G \times \C^\times} (\mf g_N) \to \mc D^b_{P \times \C^\times} (\mf m_N) . \end{equation} The arguments in Proposition \ref{prop:1.14} and \eqref{eq:1.44} admit natural analogues for $K$. Namely, with the diagram \[ \mf m_N \to G \times^P \mf m_N \xleftarrow{\mr{id}_G \times \mr{pr}_{\mf m_N}} G \times^P (\mf m_N \oplus \mf t \oplus \mf u) \xrightarrow{\mu} \mf g \] instead of \eqref{eq:1.52}, we get a functor similar to \eqref{eq:1.53}. That yields an isomorphism \begin{equation}\label{eq:1.54} K \cong \mu_! (\mr{id}_G \times \mr{pr}_{\mf m_N})^* \mr{ind}_{P \times \C^\times}^{G \times \C^\times} \IC_{P \times \C^\times} (\mf m_N, q\cE) . \end{equation} This also follows from \cite[Proposition 7.12]{Lus-Cusp2}, at least when $G$ is connected. \subsection{Variations for centralizer subgroups} \ \label{par:centralizer} Let $\sigma \in \mf t$, so that $M = Z_G (T) \subset Z_G (\sigma)$. We would like to compare Theorem \ref{thm:1.2} with its version for $(Z_G (\sigma), M, q\cE)$. First we analyse the variety \[ (G/P)^\sigma := \{ g P \in G / P : \sigma \in \mr{Lie} (g P g^{-1}) \} . \] This is also the fixed point set of $\exp (\C \sigma)$ in $G/P$. Let $Z_G^\circ (\sigma)$ be the connected component of $Z_G (\sigma)$. \begin{lem}\label{lem:1.5} For any $g P \in (G/P)^\sigma$, the subgroup $g P^\circ g^{-1} \cap Z_G^\circ (\sigma)$ of $Z_G^\circ (\sigma)$ is parabolic. \end{lem} \begin{proof} Consider the parabolic subgroup $P' := g P^\circ g^{-1}$ of $G^\circ$. Its Lie algebra $\mf p'$ contains the semisimple element $\sigma$, so there exists a maximal torus $T'$ of $P'$ with $\sigma \in \mf t'$. 
Let $M'$ be the unique Levi factor of $P'$ containing $T'$. The unipotent radical $U'$ of $P'$ and the opposite parabolic $M' \bar{U}'$ give rise to decompositions of $Z(\mf m')$-modules \[ \mf g = \bar{\mf u}' \oplus \mf p', \quad \mf p' = Z(\mf m') \oplus \mf m'_{\der} \oplus \mf u' . \] Since $Z(\mf m') \subset \mf t' \subset Z_{\mf g}(\sigma)$, these decompositions are preserved by intersecting with $Z_{\mf g} (\sigma)$: \[ Z_{\mf g}(\sigma) = Z_{\bar{\mf u}'}(\sigma) \oplus Z_{\mf p'}(\sigma), \quad Z_{\mf p'}(\sigma) = Z(\mf m') \oplus Z_{\mf m'_{\der}}(\sigma) \oplus Z_{\mf u'}(\sigma) . \] This shows that $Z_{\mf g}(\sigma) \cap \mf p'$ is a parabolic subalgebra of $Z_{\mf g}(\sigma)$. Hence $Z^\circ_G (\sigma) \cap P'$ is a parabolic subgroup of $Z_G^\circ (\sigma)$. \end{proof} The subgroup $Z_G (\sigma) \subset G$ stabilizes $(G/P)^\sigma$, so the latter is a union of $Z_G (\sigma)$-orbits. \begin{lem}\label{lem:1.6} The connected components of $(G/P)^\sigma$ are precisely its $Z_G^\circ (\sigma)$-orbits. \end{lem} \begin{proof} Clearly every $Z_G^\circ (\sigma)$-orbit is connected. From \eqref{eq:1.13} we get an isomorphism of varieties \begin{equation}\label{eq:1.14} G / P = \bigsqcup\nolimits_{\gamma \in N_G (M)/M} \gamma G^\circ P / P \cong \bigsqcup\nolimits_\gamma \gamma G^\circ / P^\circ . \end{equation} Here $Z_G^\circ (\sigma)$ acts on $\gamma G^\circ / P^\circ$ by \[ z \cdot \gamma g P^\circ = \gamma (\gamma^{-1} z \gamma) g P^\circ , \] so via conjugation by $\gamma^{-1}$ and the natural action of $\gamma^{-1} Z_G^\circ (\sigma) \gamma = Z_G^\circ (\Ad (\gamma^{-1}) \sigma)$ on $G^\circ / P^\circ$. Taking $\exp (\C \sigma)$-fixed points in \eqref{eq:1.14} gives \begin{align*} (G/P)^\sigma & \cong \bigsqcup\nolimits_\gamma (\gamma G^\circ / P^\circ )^\sigma \\ & = \bigsqcup\nolimits_\gamma \{ \gamma g P^\circ : g \in G^\circ, \sigma \in \mr{Lie}(\gamma g P^\circ g^{-1} \gamma^{-1}) \} \\ & = \bigsqcup\nolimits_\gamma \gamma \{ g P^\circ : g \in G^\circ, \Ad (\gamma^{-1}) \sigma \in \mr{Lie} (g P^\circ g^{-1}) \} \\ & = \bigsqcup\nolimits_\gamma \gamma (G^\circ / P^\circ )^{\Ad (\gamma^{-1}) \sigma} . \end{align*} This reduces the lemma to the case $G^\circ / P^\circ$, so to the connected group $G^\circ$. For that we refer to \cite[Proposition 8.8.7.ii]{ChGi}. That reference is written for Borel subgroups, but with Lemma \ref{lem:1.5} the proof also applies to other conjugacy classes of parabolic subgroups. \end{proof} It is also shown in \cite[Proposition 8.8.7.ii]{ChGi} that every $Z_G^\circ (\sigma)$-orbit in $(G/P)^\sigma$ is a submanifold and an irreducible component. \begin{lem}\label{lem:1.7} There are isomorphisms of $Z_G (\sigma)$-varieties \[ \bigsqcup_{w \in N_{Z_G (\sigma)}(M) \backslash N_G (M)} Z_G (\sigma) / Z_{w P w^{-1}}(\sigma) \cong \bigsqcup_{w \in N_{Z_G (\sigma)}(M) \backslash N_G (M)} Z_G (\sigma) \cdot w P = (G / P)^\sigma . \] \end{lem} \begin{proof} By Lemma \ref{lem:1.6} there exist finitely many $\gamma \in G$ such that \begin{equation}\label{eq:1.15} (G/P)^\sigma = \sqcup_\gamma Z_G^\circ (\sigma) \cdot \gamma P . \end{equation} Then the same holds with $Z_G (\sigma)$ instead of $Z_G^\circ (\sigma)$, and fewer $\gamma$'s. The $Z_G (\sigma)$-stabilizer of $\gamma P$ is \[ \{ z \in Z_G (\sigma) : z \gamma P \gamma^{-1} = \gamma P \gamma^{-1} \} = Z_G (\sigma) \cap \gamma P \gamma^{-1} = Z_{\gamma P \gamma^{-1}}(\sigma) . \] That proves the lemma, except for the precise index set. 
Fix a maximal torus $T'$ of $Z_G^\circ (\sigma)$ with $T \subset T'$. Every parabolic subgroup of $G^\circ$ or $Z_G^\circ (\sigma)$ is conjugate to one containing $T'$. The $G^\circ$-conjugates of $P^\circ$ that contain $T'$ are the $w P^\circ w^{-1}$ with $w \in N_{G^\circ}(T')$, or equivalently with \[ w \in N_{G^\circ}(T') / N_{P^\circ}(T') = N_{G^\circ}(T') / N_{M^\circ}(T') \cong N_{G^\circ}(M^\circ) / M^\circ . \] For $w,w' \in N_{G^\circ}(M^\circ)$, $w P^\circ$ and $w' P^\circ$ are in the same $Z_G^\circ (\sigma)$-orbit if and only if $w' w^{-1} \in N_{Z_G^\circ (\sigma)}(M^\circ)$. We find that \begin{equation}\label{eq:1.19} (G^\circ / P^\circ )^\sigma = \bigsqcup\nolimits_{w \in N_{Z_G^\circ (\sigma)}(M^\circ) \backslash N_{G^\circ}(M^\circ)} Z_G^\circ (\sigma) \cdot w P^\circ . \end{equation} We note that the group \[ N_{G^\circ}(M^\circ) / M^\circ = N_{G^\circ}(T) / Z_{G^\circ}(T) = N_{G^\circ}(M) / M^\circ \cong N_{G^\circ}(M) M / M \] normalises $P$. When we replace $G^\circ / P^\circ$ by $G / P$ in \eqref{eq:1.19}, the options for $w$ need to be enlarged to $N_G (M) / M$. Next we replace $Z_G^\circ (\sigma)$ by $Z_G (\sigma)$, so that $w P$ and $w' P$ are in the same $Z_G (\sigma)$-orbit if and only if $w' w^{-1} \in N_{Z_G (\sigma)}(M) / M$. Notice that $wP \in (G/P)^\sigma$ because \[ \sigma \in \mf m = \mr{Lie}(w M w^{-1}) \subset \mr{Lie}(w P w^{-1}) . \] We conclude that \[ (G / P )^\sigma = \bigsqcup_{w \in N_{Z_G^\circ (\sigma)}(M) \backslash N_{G}(M)} Z_G^\circ (\sigma) \cdot w P = \bigsqcup_{w \in N_{Z_G (\sigma)}(M) \backslash N_{G}(M)} Z_G (\sigma) \cdot w P . \qedhere \] \end{proof} The fixed point set of $\exp (\C \sigma)$ in $\dot{\mf g}$ is \[ \dot{\mf g}^\sigma = \dot{\mf g} \cap (Z_{\mf g} (\sigma) \times (G/P)^\sigma) = \{ (X,gP) \in Z_{\mf g}(\sigma) \times (G/P)^\sigma : \Ad (g^{-1}) X \in \cC_v^M + \mf t + \mf u \} . \] Clearly $\dot{\mf g}^\sigma$ is related to $\dot{Z_{\mf g}(\sigma)}$ and to $\dot{Z_{\mf g}(\sigma)}^\circ$. With \eqref{eq:1.19} and \eqref{eq:1.21} we can make that precise: \begin{align} \label{eq:1.24} & \dot{\mf g}^\sigma = \bigsqcup_{w \in N_{Z_G^\circ (\sigma)}(M) \backslash N_{G}(M)} \dot{Z_{\mf g}(\sigma)}^\circ_w = \bigsqcup_{w \in N_{Z_G (\sigma)}(M) \backslash N_{G}(M)} \dot{Z_{\mf g}(\sigma)}_w \\ \nonumber & \dot{Z_{\mf g}(\sigma)}_w = \{ (X,g Z_{w P w^{-1}}(\sigma)) \in Z_{\mf g}(\sigma) \times Z_G (\sigma) / Z_{w P w^{-1}}(\sigma) : \\ \nonumber & \hspace{2cm} \Ad (g^{-1})X \in \Ad (w) (\cC_v^M + \mf t + \mf u) \} . \end{align} Let $j' : \dot{\mf g}^\sigma \to \dot{\mf g}$ be the inclusion and let $\pr_1^\sigma$ be the restriction of $\pr_1$ to $\dot{\mf g}^\sigma$. We define \[ K_\sigma = (\pr_1^\sigma )_! j^{'*} \dot{q\cE} \in \mc D^b_{Z_G (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)) . \] From \eqref{eq:1.24} we infer that $K_\sigma$ is a direct sum of the parts $K_{\sigma,w}$ (resp. $K^\circ_{\sigma,w}$) coming from $\dot{Z_{\mf g}(\sigma)}_w$ (resp. from $\dot{Z_{\mf g}(\sigma)}_w^\circ$), and each such part is a version of the $K$ for $Z_G (\sigma)$ (resp. for $Z_G^\circ (\sigma)$), twisted by $w \in N_G (M) / M$. These objects admit versions restricted to subvarieties of nilpotent elements, which we indicate by a subscript $N$. In particular \[ K_{N,\sigma} = (\pr_{1,N}^\sigma )_! j_N^{'*} \dot{q\cE}_N \in \mc D^b_{Z_G (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)_N) \] can be decomposed as a direct sum of subobjects $K_{N,\sigma,w}$ or $K_{N,\sigma,w}^\circ$. 
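As a sanity check, which we record only for orientation, consider the degenerate case $\sigma = 0$. Then $Z_G (\sigma) = G$ and $\exp (\C \sigma) = \{1\}$, so
\[ (G/P)^\sigma = G/P, \qquad \dot{\mf g}^\sigma = \dot{\mf g}, \qquad K_\sigma = K, \qquad K_{N,\sigma} = K_N , \]
and the constructions of this paragraph reduce to those of Paragraph \ref{par:geomConst}.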
\begin{lem}\label{lem:1.15} The objects $K_\sigma, K_{\sigma,w} \in \mc D^b_{Z_G (\sigma) \times \C^\times} (Z_{\mf g}(\sigma))$ and \\ $K_{N,\sigma}, K_{N,\sigma,w} \in \mc D^b_{Z_G (\sigma) \times \C^\times} (Z_{\mf g}(\sigma)_N)$ are semisimple. \end{lem}
\begin{proof} We note that, like in \eqref{eq:1.38}, there is an isomorphism of $Z_G (\sigma) \times \C^\times$-varieties
\[ \dot{Z_{\mf g} (\sigma)}_w \cong Z_G (\sigma) \times^{Z_{w P w^{-1}}(\sigma)} \big( \mr{Ad}(w) (\cC_v^M \oplus \mf t \oplus \mf u) \cap Z_{\mf g} (\sigma) \big) . \]
Here $Z_{w P w^{-1}}(\sigma)$ is a quasi-parabolic subgroup of $Z_G (\sigma)$ with quasi-Levi factor $M$ and
\[ \mr{Ad}(w) (\cC_v^M \oplus \mf t \oplus \mf u) \cap Z_{\mf g} (\sigma) = \cC_v^M \oplus \mf t \oplus \big( \mr{Ad}(w) \mf u \cap Z_{\mf g} (\sigma) \big) \]
with $\mr{Ad}(w) \mf u \cap Z_{\mf g} (\sigma)$ the Lie algebra of the unipotent radical of $Z_{w P w^{-1}}(\sigma)$. Comparing that with the construction of $K$ in \eqref{eq:1.40}--\eqref{eq:1.46}, we deduce that $K_{\sigma,w}$ is the $K$ for the group $Z_G (\sigma)$ and the cuspidal local system $\mr{Ad}(w)_* q\cE$. As $K$ is semisimple, see \eqref{eq:1.4}, so is the current $K_{\sigma,w}$. The same reasoning, now using \eqref{eq:1.47}, shows that $K_{N,\sigma,w}$ is the $K_N$ for $Z_G (\sigma)$ and $\mr{Ad}(w)_* q\cE$. By Lemma \ref{lem:1.17}, $K_{N,\sigma,w}$ is semisimple. The objects $K_\sigma$ and $K_{N,\sigma}$ are direct sums of objects $K_{\sigma,w}$ and $K_{N,\sigma,w}$, so these are also semisimple. \end{proof}
The above decompositions of $K_\sigma$ and $K_{N,\sigma}$ are the key to analogues of parts of Paragraph \ref{par:geomConst} for $Z_G (\sigma)$.
\begin{lem}\label{lem:1.8} Let $w,w' \in N_G (M) / M$. The inclusion $Z_{\mf g}(\sigma)_N \to Z_{\mf g}(\sigma)$ induces an isomorphism of graded $H^*_{Z_G^\circ (\sigma) \times \C^\times}(\pt)$-modules
\[ \Hom^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma))} (K^\circ_{\sigma,w},K^\circ_{\sigma,w'}) \longrightarrow \Hom^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)_N)} (K^\circ_{N,\sigma,w},K^\circ_{N,\sigma,w'}) . \] \end{lem}
\begin{proof} Decompose $\dot{q\cE} |_{\dot{Z_{\mf g}(\sigma)}^\circ_w}$ as a direct sum of irreducible $Z_G^\circ (\sigma) \times \C^\times$-equivariant local systems. Each summand is of the form $\Ad (w)_* \dot{\cE}$, for an irreducible summand $\cE$ of $q\cE$ as an $M^\circ$-equivariant local system. Similarly we decompose $\dot{q\cE} |_{\dot{Z_{\mf g}(\sigma)}^\circ_{w'}}$ as a direct sum of terms $\Ad (w')_* \dot{\cE}$. Like in the proof of Lemma \ref{lem:1.15}:
\begin{equation}\label{eq:1.29} K^\circ_{\sigma,w} = \bigoplus\nolimits_{\cE} (\pr_{1,Z_{\mf g}(\sigma)})_! \Ad (w)_* \dot{\cE} , \end{equation}
and similarly for $K^\circ_{N,\sigma,w}, K^\circ_{\sigma,w'}$ and $K^\circ_{N,\sigma,w'}$. A computation like the start of the proof of \cite[Theorem 8.11]{Lus-Cusp2} (already used in Theorem \ref{thm:1.2}.b) shows that
\begin{multline}\label{eq:1.25} \Hom^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma))} \big( (\pr_{1,Z_{\mf g} (\sigma)})_! \Ad (w)_* \dot{\cE}, (\pr_{1,Z_{\mf g}(\sigma)})_! \Ad (w')_* \dot{\cE'} \big) \\ \cong H_*^{Z_G^\circ (\sigma) \times \C^\times} \big( \ddot{Z_{\mf g}(\sigma)}^\circ , i_\sigma^* \big( \Ad (w)_* \dot{\cE} \boxtimes \Ad (w')_* \dot{\cE'}^\vee \big) \big) . 
\end{multline} Here $\ddot{Z_{\mf g}(\sigma)}^\circ = \dot{Z_{\mf g}(\sigma)}^\circ \times_{Z_{\mf g}(\sigma)} \dot{Z_{\mf g}(\sigma)}^\circ$ and \[ i_\sigma : \ddot{Z_{\mf g}(\sigma)}^\circ \to \dot{Z_{\mf g}(\sigma)}^\circ \times \dot{Z_{\mf g}(\sigma)}^\circ \] denotes the inclusion. The same applies with subscripts $N$:
\begin{multline}\label{eq:1.26} \Hom^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)_N)} \big( (\pr_{1,Z_{\mf g} (\sigma)_N})_! \Ad (w)_* \dot{\cE}_N, (\pr_{1,Z_{\mf g}(\sigma)_N})_! \Ad (w')_* \dot{\cE'}_N \big) \\ \cong H_*^{Z_G^\circ (\sigma) \times \C^\times} \big( \ddot{Z_{\mf g}(\sigma)}_N^\circ , i_{N,\sigma}^* \big( \Ad (w)_* \dot{\cE} \boxtimes \Ad (w')_* \dot{\cE'}^\vee \big) \big) . \end{multline}
When $w = w'$ and $\cE = \cE'$, \eqref{eq:1.25} and \eqref{eq:1.26} are computed in \cite[Proposition 4.7]{Lus-Cusp1}. In fact \cite[Proposition 4.7]{Lus-Cusp1} also applies in our more general setting, with different $\Ad (w)_* \dot{\cE}$ and $\Ad (w')_* \dot{\cE'}$. Namely, to handle those we add the argument from the proof of \cite[Proposition 2.6]{AMS2}, especially \cite[(11)]{AMS2}. That works for both $Z_{\mf g}(\sigma)$ and for $Z_{\mf g}(\sigma)_N$, and entails that there are natural isomorphisms of graded $H^*_{Z_G^\circ (\sigma) \times \C^\times}(\pt)$-modules
\begin{equation}\label{eq:1.28} \begin{array}{ccc} H^*_{Z_G^\circ (\sigma) \times \C^\times}(\dot{Z_{\mf g}(\sigma)}^\circ) \otimes_\C H_0 \big( \ddot{Z_{\mf g}(\sigma)}^\circ , i_\sigma^* \big( \Ad (w)_* \dot{\cE} \boxtimes \Ad (w')_* \dot{\cE'}^\vee \big) \big) & \cong & \text{\eqref{eq:1.25}}, \\ H^*_{Z_G^\circ (\sigma) \times \C^\times}(\dot{Z_{\mf g}(\sigma)}^\circ) \otimes_\C H_0 \big( \ddot{Z_{\mf g}(\sigma)}_N^\circ , i_{N,\sigma}^* \big( \Ad (w)_* \dot{\cE} \boxtimes \Ad (w')_* \dot{\cE'}^\vee \big) \big) & \cong & \text{\eqref{eq:1.26}} . \end{array} \end{equation}
Moreover, the proof of \cite[Proposition 4.7]{Lus-Cusp1} shows that the two lines of \eqref{eq:1.28} are isomorphic via the inclusion $Z_{\mf g}(\sigma)_N \to Z_{\mf g}(\sigma)$. \end{proof}
Finally, we can generalize the second isomorphism in Theorem \ref{thm:1.2}.c.
\begin{prop}\label{prop:1.9} The inclusion $Z_{\mf g}(\sigma)_N \to Z_{\mf g}(\sigma)$ induces a graded algebra isomorphism
\[ \End^*_{\mc D^b_{Z_G (\sigma) \times \C^\times}(Z_{\mf g}(\sigma))}(K_\sigma) \longrightarrow \End^*_{\mc D^b_{Z_G (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)_N)}(K_{N,\sigma}) . \] \end{prop}
\begin{proof} Take the direct sum of the instances of Lemma \ref{lem:1.8}, over all \\ $w,w' \in N_{Z_G^\circ (\sigma)} (M) \backslash N_G (M)$. By \eqref{eq:1.24}, that yields a natural isomorphism
\[ \End^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma))}(K_\sigma) \longrightarrow \End^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times}(Z_{\mf g}(\sigma)_N)}(K_{N,\sigma}) . \]
Now we take $\pi_0 (Z_G (\sigma))$-invariants on both sides; this replaces $\End^*_{\mc D^b_{Z_G^\circ (\sigma) \times \C^\times} (?)}$ by $\End^*_{\mc D^b_{Z_G (\sigma) \times \C^\times} (?)}$. \end{proof}
\section{Description of $\mc D^b_{G \times GL_1}(\mf g_N)$ in terms of Hecke algebras} \label{sec:description}
We want to make a (right) module category of $\mh H = \mh H (G,M,q\cE)$ equivalent with a category of equivariant constructible sheaves. 
In view of Theorem \ref{thm:1.2}, we should compare with $\mc D^b_{G \times GL_1} (\mf g_N,K_N)$, the triangulated subcategory of $\mc D^b_{G \times \C^\times} (\mf g_N)$ generated by the simple summands of the semisimple object $K_N$. Since that involves complexes of sheaves, we have to look at differential graded $\mh H$-modules. Recall that $\mh H$ has no terms in odd degrees, so that its differential can only be zero. Hence a differential graded $\mh H$-module $M$ is just a graded $\mh H$-module $\bigoplus_{n \in \Z} M_n$ with a differential $d_M$ of degree 1. As $\mc D^b_{G \times GL_1} (\mf g_N)$ is a derived category, we are led to $\mc D (\mh H - \Mod_{dg})$, the derived category of differential graded right $\mh H$-modules. Its bounded version is $\mc D^b (\mh H - \Mod_{\mr{fgdg}})$, where the subscript stands for ``finitely generated differential graded''. We note that $\mh H -\Mod_{dg}$ is much smaller than $\mh H -\Mod$, for instance, the only irreducible $\mh H$-modules it contains are those on which $\mc O (\mf t \oplus \C)$ acts via evaluation at $(0,0)$. In fact the triangulated category $\mc D^b (\mh H - \Mod_{\mr{fgdg}})$ is already generated by a single object, namely $\mh H$ \cite[Corollary 11.1.5]{BeLu}. From another angle, we aim to analyse the entire category $\mc D^b_{G \times GL_1}(\mf g_N)$ in terms of suitable module categories of Hecke algebras. As there may be several Levi subgroups and equivariant cuspidal local systems involved, we will need a finite collection of such Hecke algebras. The motivating and archetypical example is \cite[\S 12]{BeLu}. There it is shown that, for a connected complex reductive group $G'$ with a Borel subgroup $B'$ containing a maximal torus $T'$, there are equivalences of triangulated categories
\begin{equation}\label{eq:3.9} \mc D^b_{G'} (G' / B') \cong \mc D^b_{B'} (\mr{pt}) \cong \mc D^b (\mc O (\mr{Lie}(T')) - \Mod_{\mr{fgdg}}) . \end{equation}
The isomorphism $\mh H (G,M,q\cE) \cong \End^*_{\mc D^b_{G \times \C^\times} (\mf g_N)} (K_N)$ from Theorem \ref{thm:1.2} gives rise to an additive functor
\begin{equation}\label{eq:3.3} \begin{array}{ccc} \mc D^b_{G \times \C^\times} (\mf g_N, K_N) & \to & \mc D^b (\mh H - \Mod_{\mr{fgdg}}) \\ S & \mapsto & \Hom^*_{\mc D^b_{G \times \C^\times} (\mf g_N)} (K_N,S) \end{array} . \vspace{-2mm} \end{equation}
However, it is not clear whether this functor is triangulated or fully faithful (on an appropriate subcategory). One problem is that $\mc D^b_{G \times \C^\times} (\mf g_N)$ does not arise as the (bounded) derived category of an abelian category, another is that $\Hom^*_{\mc D^b_{G \times \C^\times} (\mf g_N)}$ is defined rather indirectly.
\subsection{Transfer to a ground field of positive characteristic} \ \label{par:transfer}
We will overcome the above problems by constructing a more subtle functor instead of \eqref{eq:3.3}, which will lead to an equivalence of categories. To this end, we will first transfer the entire setup from the ground field $\C$ to a ground field of positive characteristic. All our varieties, algebraic groups and (complexes of) sheaves may be considered over an algebraically closed ground field instead of $\C$, see \cite[\S 4.3]{BeLu}. In particular we can take an algebraic closure $k_s$ of a finite field $\F_q$ whose characteristic $p$ is good for $G$, like in \cite{Rid,RiRu}. As we do not require that $G$ is connected, we decree that ``good'' also means that $p$ does not divide the order of $\pi_0 (G)$. 
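For orientation we recall the standard list of good primes (this is classical and included only as a reminder; $p$ is good for $G$ when it is good for every simple factor of $G^\circ$, and torus factors impose no condition):
\[ \begin{array}{ll} A_n : & \text{every } p, \\ B_n, C_n, D_n : & p \neq 2, \\ G_2, F_4, E_6, E_7 : & p \notin \{2,3\}, \\ E_8 : & p \notin \{2,3,5\} . \end{array} \]
Our additional requirement excludes, on top of this, the primes dividing $|\pi_0 (G)|$.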
For consistency, we replace the variety $\mh A^1 (\C) = \C$ (on which $\mb r$ is the standard coordinate) by the affine space $\mh A^1$. In our setting, the topology of the coefficient field $\C$ of our sheaves does not play a role. Since $\Ql \cong \C$ as fields, we may just as well look at sheaves of $\Ql$-vector spaces everywhere. This setup has the advantage that one can pass to varieties over finite fields, and to mixed (equivariant) sheaves. To emphasize that we consider an object with ground field $k_s$ we will sometimes add a subscript $s$. This notation comes from \cite[\S 6]{BBD}, where $k_s$ arises as the residue field for some discrete valuation ring, which relates $k_s$ to special fibres. In the remainder of this section we will regard $G$ as an algebraic group, and for an action of $G$ or $G \times GL_1$ we tacitly assume that these groups are considered over the same field as the varieties on which they act. To that end, and to get semisimplicity of $K_N$ from Lemma \ref{lem:1.17}, we assume that $G$ can be defined over a finite extension of $\Z$. That is hardly a restriction, since by Chevalley's construction it holds for any connected reductive group over an algebraically closed field. For $G^\circ$ we may use Chevalley's $\Z$-model. Then reduction modulo $p$ and extension of scalars to $k_s$ gives the corresponding Chevalley group $G^\circ_s$ over $k_s$. Similarly, from $\mf g_N$ over $\Z$ we obtain by reduction modulo $p$ and extension to $k_s$ the nilpotent variety $\mf g_{N,s}$ in the Lie algebra of $G^\circ_s$.
\begin{lem}\label{lem:3.5} The classification of $G \times GL_1$-equivariant cuspidal local systems (with $\Ql$-coefficients) on $\mf g_N$ is the same for the two base fields $\C$ and $k_s$. \end{lem}
\begin{proof} The classification of equivariant cuspidal local systems supported on the variety of unipotent elements in $G^\circ$ \cite{Lus-Int} shows this for $G^\circ$. By \cite[\S 2.1.f]{Lus-Cusp1}, such local systems are automatically $G^\circ \times GL_1$-equivariant. With the natural bijection between the unipotent orbits in $G^\circ$ and the nilpotent orbits in $\mf g$ (recall that $p$ is good for $G^\circ$), we obtain the same result for $G^\circ \times GL_1$-equivariant cuspidal local systems supported on $\mf g_N$. Getting from there to $G \times GL_1$-equivariant cuspidal local systems boils down to extending representations of a finite group
\[ \pi_0 (Z_{G^\circ}(X)) \cong \pi_0 (Z_{G^\circ \times GL_1}(X)) \qquad X \in \mf g_N \]
to a larger finite group
\[ \pi_0 (Z_G (X)) \cong \pi_0 (Z_{G \times GL_1}(X)), \]
see \cite[\S 3]{AMS1}. In view of the short exact sequence
\[ 1 \to \pi_0 (Z_{G^\circ}(X)) \to \pi_0 (Z_G (X)) \to G / G^\circ = \pi_0 (G) \to 1 \]
from \cite[(21)]{AMS1} and because $p$ does not divide $|\pi_0 (G)|$, this works in the same way over $\C$ and over $k_s$. \end{proof}
Of course Lemma \ref{lem:3.5} also applies to a quasi-Levi subgroup $M$ of $G$. That provides, for each $q\cE, \mf m_N, K_N$ as before, versions $q\cE_s, \mf m_{N,s}, K_{N,s}$ with base field $k_s$.\\ For the transfer of $\mc D^b_{G \times GL_1} (\mf g_N)$ from $\C$ to $k_s$ we follow the strategy that was used to derive the decomposition theorem for equivariant perverse sheaves \cite[\S 5.3]{BeLu} from its non-equivariant version \cite[Th\'eor\`eme 6.2.5]{BBD}, which in turn relied on an analogue for varieties over finite fields. To apply the techniques from \cite[\S 6.1]{BBD}, it seems necessary that $G$ can be defined over a finite extension of $\Z$. 
In that case, all the varieties below can also be defined over a finite extension of $\Z$. Fix a segment $I \subset \Z$ and assume that $G \times GL_1$ is embedded in $GL_k$. It was noted in \cite[\S 3.1]{BeLu} that the variety $M_{|I|}$ of $k$-frames in the affine space $\mh A^{|I|+k}$ is an acyclic $G \times GL_1$-space. Then $G \times GL_1$ acts freely on $Q := M_{|I|} \times \mf g_N$ and the projection $p : Q \to \mf g_N$ is an $|I|$-acyclic resolution of $G \times GL_1$-varieties. Let \[ \overline{Q} = Q / (G \times GL_1) = (M_{|I|} \times \mf g_N) / (G \times GL_1) \] be the quotient variety. By \cite[\S 2.3.2]{BeLu}, $\mc D^I_{G \times GL_1}(\mf g_N)$ is naturally equivalent with $\mc D^I (\overline Q |p)$, the full subcategory of $\mc D^I (\overline Q )$ made from all the objects that come from $\mf g_N$ via $p$ and $q : Q \to \overline{Q}$. Notice that all these objects are of geometric origin. Let $\overline{Q}_{et}$ be $\overline{Q}$ with the \'etale topology. According to \cite[\S 6.1.2.B'']{BBD} there is a fully faithful embedding (for sheaves with $\Ql$-coefficients) \[ \mc D^b (\overline{Q}_{et}) \longrightarrow \mc D^b (\overline Q) , \] whose essential image consists of the objects of geometric origin. This restricts to an equivalence of categories \[ \mc D^b (\overline{Q}_{et} | p ) \longrightarrow \mc D^b (\overline Q | p) . \] Therefore we may replace the analytic topology on $\overline{Q}$ by the \'etale topology, and we tacitly do that from now on. For a variety $X$ defined over some finite extension of $\Z$, we denote by $X_s$ the base change to $k_s$. According to \cite[\S 6.1.10 and p.159]{BBD} there is an equivalence of categories \begin{equation}\label{eq:3.8} \mc D^b_{\mc T,L} (\overline{Q}) \longleftrightarrow \mc D^b_{\mc T,L} (\overline{Q}_s) . \end{equation} Here $\mc T$ denotes an algebraic stratification of $\overline{Q}$ and $L$ means that for every stratum a finite collection of irreducible smooth sheaves with $\Z / \ell \Z$-coefficients has been chosen. The subscript $\mc T,L$ refers to $(\mc T,L)$-constructible sheaves, as in \cite[\S 6.1.8]{BBD}. Let $p_s : Q_s \to \mf g_{N,s}$ be the base change of $p$ and let $q_s : Q_s \to \overline{Q}_s$ be the base change of $q$, the quotient map for the free $G \times GL_1$-action. \begin{lem}\label{lem:3.10} From \eqref{eq:3.8} one can obtain an equivalence of categories \[ \mc D^b (\overline Q | p) \longleftrightarrow \mc D^b (\overline{Q}_s | p_s) . \] \end{lem} \begin{proof} The $G \times GL_1$-orbits on $\mf g_N$ give a stratification of $Q = M_{|I|} \times \mf g_N$, and dividing out by $G \times GL_1$ produces a stratification $\mc T_I$ of $\overline{Q}$. As $L_I$ we take those sheaves (of the indicated kind) on the strata of $\mc T_I$ that come from $\mf g_N$ via $p$ and $q$. Since there are only finitely many $G \times GL_1$-orbits in $\mf g_N$, this is a finite collection. Every $G \times GL_1$-equivariant sheaf on $\mf g_N$ is constructible with respect to the orbit stratification, so $L_I$ provides enough objects to make $\mc D^b (\overline Q | p)$ and $\mc D^b (\overline{Q}_s | p_s)$ constructible with respect to $(\mc T_I,L_I)$. On each stratum in $\mf g_N$ or $\mf g_{N,s}$ the isomorphism classes of irreducible equivariant sheaves with $\Z / \ell \Z$-coefficients can be put in bijection with the isomorphism classes of irreducible representations of the equivariant fundamental group in $G \times GL_1$. 
Hence $L_I$ has the same property for $\overline{Q}$ and $\overline{Q}_s$, with respect to $p,q$ and $p_s,q_s$. However, $\mc T_I$ may not have the right geometric properties to apply \cite[Lemma 6.1.9 and \S 6.1.10]{BBD}. As in \cite[p. 155]{BBD}, we may refine the situation by passing to a larger finite extension $A$ of $\Z$ and a finer stratification $\mc T$ defined over $A$, whose fibers are smooth and geometrically connected. For $L$ we may take a finite collection as on \cite[p. 156]{BBD}, such that $(\mc T_I,L_I)$-constructibility implies $(\mc T,L)$-constructibility. Then \cite[\S 6.1.10]{BBD} may be used. For this $(\mc T,L)$, $\mc D^b_{\mc T,L} (\overline{Q})$ contains $\mc D^b (\overline{Q} | p)$ as the full subcategory of objects whose pullback along $q : Q \to \overline{Q}$ is isomorphic to the pullback along \[ p : M_{|I|} \times \mf g_N \to \mf g_N \] of an object of $\mc D^b (\mf g_N)$. Under the equivalence \eqref{eq:3.8}, this corresponds to the full subcategory of $\mc D^b_{\mc T,L} (\overline{Q}_s)$ formed by the objects whose pullback along $q_s$ is isomorphic to the pullback along \[ p_s : M_{|I|,s} \times \mf g_{N,s} \to \mf g_{N,s} \] of an object of $\mc D^b (\mf g_{N,s})$. In view of the properties of $L,L_I$ with respect to $p_s,q_s$, this full subcategory is exactly $\mc D^b (\overline{Q}_s | p_s)$. \end{proof}
Lemma \ref{lem:3.10} can be restricted from $\mc D^b (\overline Q |p)$ to $\mc D^I (\overline Q |p)$. From that and \cite[\S 2.3.2]{BeLu} we obtain equivalences of categories
\begin{equation}\label{eq:3.10} \mc D^I_{G \times GL_1} (\mf g_N) \longleftrightarrow \mc D^I (\overline Q |p) \longleftrightarrow \mc D^I (\overline{Q}_s | p_s) \longleftrightarrow \mc D^I_{G \times GL_1} (\mf g_{N,s}) . \end{equation}
Recall $K_{N,s}$ from Lemma \ref{lem:3.5} and the subsequent remark.
\begin{thm}\label{thm:3.11} Assume that $G$ can be defined over a finite extension of $\Z$. There are equivalences of categories
\[ \begin{array}{l@{\qquad}lll} (a) & \mc D^b_{G \times GL_1}(\mf g_N) & \longleftrightarrow & \mc D^b_{G \times GL_1}(\mf g_{N,s}) ,\\ (b) & \mc D^b_{G \times GL_1}(\mf g_N, K_N) & \longleftrightarrow & \mc D^b_{G \times GL_1}(\mf g_{N,s}, K_{N,s}) . \end{array} \] \end{thm}
\begin{proof} (a) The equivalences \eqref{eq:3.10} can be achieved for any segment $I \subset \Z$, but we note that $Q$ and $\overline{Q}$ depend on $|I|$. For a larger segment $I' \supset I$, the $G \times GL_1$-space $Q' = M_{|I'|} \times \mf g_N$ contains $Q = M_{|I|} \times \mf g_N$. Hence \eqref{eq:3.10} for $I$ embeds canonically in \eqref{eq:3.10} for $I'$. This enables us to take the limit of all these instances of \eqref{eq:3.10}. \\ (b) It remains to show that part (a) sends $K_N$ to $K_{N,s}$. By the definition of equivariant derived categories \cite{BeLu}, that means that we have to compare $p^* K_N \in \mc D^b_{G \times GL_1} (Q)$ with $p_s^* K_{N,s} \in \mc D^b_{G \times GL_1} (Q_s)$. Let $K'_N$ be the image of $p^* K_N$ in $\mc D^b_{\mc T,L}(\overline{Q})$ via the proof of Lemma \ref{lem:3.10}, so $q^* K'_N \cong p^* K_N$. We define $K'_{N,s}$ analogously. Since \[ q^* : \mc D^b (\overline{Q}) \to \mc D^b_{G \times GL_1}(Q) \] is an equivalence of categories, it suffices to show that \eqref{eq:3.8} sends $K'_N$ to $K'_{N,s}$. By \cite[\S 6.1.7 and 6.1.10]{BBD} equivalences of the kind \eqref{eq:3.8} respect the usual derived functors associated to maps between varieties, provided $(\mc T,L)$ may be enlarged depending on the object under consideration. 
We claim that the step from $\mc D^b (\overline Q)$ to $\mc D^b_{G \times GL_1} (\mf g_N)$ respects equivariant induction $\mr{ind}_{P \times GL_1}^{G \times GL_1}$. To shorten notation we check this for $\mr{ind}_P^G$. Let $Y$ be a $P$-variety with a resolution $X$ and let $\mc F \in \mc D^b_P (Y)$ come from $X / P$. From the diagram
\[ Y \longleftarrow X \longrightarrow X / P = ( G \times^P X) / G \longleftarrow G \times^P X \longrightarrow G \times^P Y \]
we see that $\mc F$ and $\mr{ind}_P^G (\mc F) \in \mc D^b_G (G\times^P Y)$ come from the same object of $\mc D^b (X/P)$. It follows that equivalences of the kind \eqref{eq:3.8}, when restricted to equivariant derived categories like in Lemma \ref{lem:3.10}, also respect equivariant induction. We resort to the expression $K_N = \mu_{N,!} \dot{q\cE}_N$ from \eqref{eq:1.47}. That and the constructions described in terms of \eqref{eq:1.39} only use derived functors of the kind discussed above. In this way we reduce our task from $K_N$ to $q\cE \in \mc D^b_{M \times GL_1} (\cC_v^M)$. The construction of the equivalence \eqref{eq:3.8} in \cite[\S 6.1.8--6.1.9]{BBD} uses only base change maps; it goes via $\mc D^b_{\mc T,L}(\overline{Q}_A)$ for a suitable subring $A$ of $\C$. Knowing that, Lemma \ref{lem:3.10} and its proof entail that $q\cE$ is sent to $q\cE_s$ by the version of \eqref{eq:3.10} for $M$. This proves that the equivalence of categories from part (a) sends $K_N$ to $K_{N,s}$. \end{proof}
\subsection{Orthogonal decomposition} \ \label{par:orthogonal}
From \cite{RiRu1} it can be expected that $\mc D^b_{G \times GL_1}(\mf g_{N,s})$ decomposes as an orthogonal direct sum of full subcategories of the form $\mc D^b_{G \times GL_1}(\mf g_{N,s} ,K_{N,s})$. Here orthogonality means that there are no nonzero morphisms between objects from different summands. In view of Theorem \ref{thm:3.11}, the same should work over the base field $\C$. We start with an orthogonality statement on the cuspidal level. Let $\cC_v^M, \cC_{v'}^M$ be nilpotent Ad$(M)$-orbits in $\mf m$ and let $q\cE, q\cE'$ be $M$-equivariant irreducible cuspidal local systems on $\cC_v^M$ and $\cC_{v'}^M$, respectively. As noted in \cite[\S 2.1.f]{Lus-Cusp1}, $q\cE$ and $q\cE'$ are automatically $M \times GL_1$-equivariant. Let $\IC (\mf m_N, q\cE)$ and $\IC (\mf m_N,q\cE' )$ be the associated $M \times GL_1$-equivariant intersection cohomology complexes. Notice that $\IC_{M \times GL_1}(\mf m_N,q\cE)$ is the version of $K_N$ for $M$.
\begin{lem}\label{lem:3.4} Suppose that $\cC_v^M \neq \cC_{v'}^M$ or that $\cC_v^M = \cC_{v'}^M$ and $q\cE, q\cE'$ are not isomorphic in $\mc D^b_{M \times GL_1}(\cC_v^M)$. Then
\[ \Hom^*_{\mc D^b_{M \times GL_1}(\mf m_N)} \big( \IC (\mf m_N, q\cE), \IC (\mf m_N,q\cE' )\big) = 0 . \] \end{lem}
\begin{proof} Suppose that the given Hom-space is nonzero. By the remarks at the start of Section \ref{sec:cuspidal}, this remains true if we replace $M \times GL_1$ by its neutral component $M^\circ \times GL_1$. As $M^\circ \times GL_1$-equivariant local systems, we can decompose $q\cE = \bigoplus_i \cE_i$ and $q\cE' = \bigoplus_j \cE'_j$, where the $\cE_i$ and the $\cE'_j$ are irreducible and cuspidal. Then
\begin{equation}\label{eq:3.16} \bigoplus\nolimits_{i,j} \Hom^*_{\mc D^b_{M^\circ \times GL_1}(\mf m_N)} \big( \IC (\mf m_N, \cE_i), \IC (\mf m_N,\cE'_j )\big) \neq 0 . \end{equation}
Pick indices $i,j$ for which the corresponding summand of \eqref{eq:3.16} is nonzero. 
The results in \cite[Appendix A]{RiRu1} are formulated for a connected reductive group and its centre, but they also work for an $M^\circ \times GL_1$-variety and with $Z(M^\circ)$ instead of the centre of $M^\circ \times GL_1$. Thus, we can analyse the $Z(M^\circ)$-characters of the $M^\circ \times GL_1$-equivariant perverse sheaves $\cE_i$ and $\cE'_j$ as in \cite[Appendix A]{RiRu1}. Notice that Theorem \ref{thm:3.11} enables us to apply this with base field $\C$ as well. From \cite[Proposition A.8]{RiRu} and \eqref{eq:3.16} one concludes that $\cE_i$ and $\cE'_j$ have the same $Z(M^\circ)$-character. According to \cite[p. 205]{Lus-Int}, that implies that $\cE_i$ is isomorphic to $\cE'_j$ in $\mc D^b_{M^\circ}(\mf m_N)$. Hence $\cC_v^{M^\circ} = \cC_{v'}^{M^\circ}$, so we may (and will) assume that $v = v'$. Recall from \eqref{eq:1.49} that $q\cE$ and $q\cE'$ are clean (on $\mf m_N$). With adjunction we compute
\begin{equation}\label{eq:3.11} \begin{aligned} & \Hom^*_{\mc D^b_{M \times GL_1}(\mf m_N)} \big( \IC (\mf m_N, q\cE), \IC (\mf m_N,q\cE' )\big) = \\ & \Hom^*_{\mc D^b_{M \times GL_1}(\mf m_N)} \big( \IC (\mf m_N, q\cE), j_{\mf m_N,*} q\cE' \big) = \\ & \Hom^*_{\mc D^b_{M \times GL_1}(\cC_{v}^M)} \big( j^*_{\mf m_N} \IC (\mf m_N, q\cE), q\cE' \big) = \Hom^*_{\mc D^b_{M \times GL_1}(\cC_{v}^M)} \big( q\cE, q\cE' \big) . \end{aligned} \end{equation}
Let $\rho, \rho' \in \Irr \big( \pi_0 (Z_{M \times GL_1}(v)) \big)$ be the images of $q\cE$ and $q\cE'$ under the equivalence of categories
\[ \mc D^b_{M \times GL_1}(\cC_{v}^M) \cong \mc D^b_{Z_{M \times GL_1}(v)} (\{v\}) . \]
Then \eqref{eq:3.11} reduces to
\[ \Hom^*_{\mc D^b_{Z_{M \times GL_1}(v)} (\{v\}) } \big( \rho, \rho' \big) = \Hom_{\pi_0 (Z_{M \times GL_1} (v))} (\rho, \rho') . \]
Since $q\cE$ and $q\cE'$ are not isomorphic, $\rho$ and $\rho'$ are not isomorphic, and this expression vanishes. That contradicts the assumption at the start of the proof. \end{proof}
Consider the collection of all cuspidal quasi-supports $(M,\cC_v^M,q\cE)$ for $G$. Since each $\mf m_N$ admits only very few irreducible $M$-equivariant cuspidal local systems \cite[Introduction]{Lus-Int}, there are only finitely many $G$-conjugacy classes of cuspidal quasi-supports for $G$. We note that by Lemma \ref{lem:3.5} the classification is the same over the ground fields $\C$ and $k_s$. Each such conjugacy class $[M,\cC_v^M,q\cE ]_G$ gives rise to a full triangulated subcategory
\[ \mc D^b_{G \times GL_1}(\mf g_N ,K_N) = \mc D^b_{G \times GL_1} \big( \mf g_N ,\mc I_{P \times \C^\times, \mf m_N}^{G \times \C^\times} \IC_{M \times GL_1}(\mf m_N,q\cE) \big) , \]
see \eqref{eq:1.45} for the equality.
\begin{thm}\label{thm:3.6} There is an orthogonal decomposition
\[ \mc D^b_{G \times GL_1} (\mf g_N) = \bigoplus\nolimits_{[M,\cC_v^M,q\cE]_G} \mc D^b_{G \times GL_1} \big( \mf g_N ,\mc I_{P \times \C^\times, \mf m_N}^{G \times \C^\times} \IC_{M \times GL_1}(\mf m_N,q\cE) \big) . \]
The same holds over the ground field $k_s$. \end{thm}
\begin{proof} Over $k_s$, this is the translation of \cite[Theorem 3.5]{RiRu1} to our setting. Almost the entire proof in \cite[\S 2--3]{RiRu1} is valid in our generality; only the argument with central characters (near the end of the proof of \cite[Theorem 3.5]{RiRu1}) does not work any more. We extend that argument to our setting with Lemma \ref{lem:3.4}. We may pass to the base field $\C$ with Theorem \ref{thm:3.11}. 
\end{proof} \subsection{An equivalence of triangulated categories} \ \label{par:triangulated}
We aim to show that $\mc D^b_{G \times GL_1}(\mf g_{N,s}, K_{N,s})$ is equivalent with $\mc D^b (\mh H - \Mod_{\mr{fgdg}})$, as triangulated categories. We follow the strategy outlined in \cite[\S 4]{RiRu}, based on \cite{Rid}, but with $G \times GL_1$ instead of $G$. We need the following objects as substitutes for objects appearing in the derived generalized Springer correspondence from \cite{Rid,RiRu}:
\[ \begin{array}{l|l|l} \text{our setting} & \text{setting from \cite{RiRu}} & \text{setting from \cite{Rid}} \\ \hline \mf g_N & \mc N & \mc N \\ \IC (\mf m_N, q\cE) & \IC_{\mb c} & \IC (\{0\}, \overline{\Q_\ell}) \\ K_N & \mh A_{\mb c} & \mb A \\ \C [W_{q\cE} ,\natural_{q\cE}] & \Ql [W(L)] & \Ql [W] \\ H^*_{G \times \C^\times} (\dot{\mf g_N}) \cong \mc O (\mf t \oplus \C) & H_L (\mc O_L) \cong S \mf z^* & H^*_G (G/B) \cong S \mf h^* \\ \mh H (G,M,q\cE) & \Ql [W(L)] \ltimes S \mf z^* & \mc A_G = \Ql [W(L)] \ltimes S \mf h^* \\ \mc I_{P \times \C^\times, \mf m_N}^{G \times \C^\times} & \mc I_P^G & \Psi \\ \mc R_{P \times \C^\times, \mf m_N}^{G \times \C^\times} & \mc R_P^G & \Phi \\ \mc D^b_{M \times \C^\times} (\mf m_N, \IC (\mf m_N, q\cE)) & \mc D^b_L (\mc N_L ,\IC_{\mb c}) \cong \mc D^b_Z (\pt) & \mc D^b_G (G/B) \cong \mc D_T^b (\pt) \end{array} \]
For the third to sixth lines of the table we refer to, respectively, \eqref{eq:1.45}, \eqref{eq:1.10}, \eqref{eq:1.50} and Theorem \ref{thm:1.2}.c. To justify the last line of the table, we note that the proof of \cite[Lemma 2.3 and Proposition 2.4]{RiRu} shows that
\begin{equation}\label{eq:3.2} \mc D^b_{M \times \C^\times} (\mf m_N, \IC (\mf m_N, q\cE)) \cong \mc D^b_{Z(M)^\circ \times \C^\times} (\pt) = \mc D^b_{T \times \C^\times} (\pt) . \end{equation}
As explained in \cite[\S 3.2]{RiRu}, the $M$-equivariant cuspidal local system $q\cE_s$ on $\mc C_{v,s}^M \subset \mf m_{N,s}$ admits a version over a finite field $\F_q$, such that a Frobenius element of Gal$(k_s / \F_q)$ acts trivially (after base change to $k_s$). Then everything can be set up over $\F_q$ with mixed sheaves, as in \cite[\S 4--5]{Rid}. Like in \cite{Rid,RiRu}, we indicate the analogous objects over $\F_q$ with a subscript $\circ$.
\begin{lem}\label{lem:3.8} There are algebra isomorphisms
\[ \Ql [W_{q\cE}, \natural_{q\cE}] \cong \End_{\mc D^b_{G \times GL_1} (\mf g_{N,s})}(K_{N,s}) \cong \End_{\mc D^b_{G \times GL_1} (\mf g_{N,\circ})}(K_{N,\circ}) , \]
and the Frobenius action on the middle term is trivial. \end{lem}
\begin{proof} The first isomorphism can be shown in the same way as \eqref{eq:1.10}. This uses, among other things, a notion of good and bad $P$--$P$ double cosets in $G$, or equivalently $G$-orbits in $G/P \times G/P$, extended from \cite{Lus-Cusp1} to disconnected $G$ in \cite[proof of Proposition 2.6]{AMS2}. With this notion, \cite[\S 3.3]{RiRu} generalizes to disconnected $G$ over $\F_q$. In particular \cite[Proposition 3.7]{RiRu} shows that
\[ \dim_{\Ql} \End_{\mc D^b_{G \times GL_1} (\mf g_{N,\circ})}(K_{N,\circ}) = |W_{q\cE}| . \]
Next \cite[Proposition 3.9]{RiRu} proves that Gal$(k_s/\F_q)$ acts trivially on\\ $\End_{\mc D^b_{G \times GL_1} (\mf g_{N,s})}(K_{N,s})$, and shows that the natural map
\[ \End_{\mc D^b_{G \times GL_1} (\mf g_{N,\circ})}(K_{N,\circ}) \to \End_{\mc D^b_{G \times GL_1} (\mf g_{N,s})}(K_{N,s}) \]
induced by base change is an algebra isomorphism. 
\end{proof} Like in Theorem \ref{thm:1.2}, we obtain \[ \End^*_{\mc D^b_{G \times GL_1} (\mf g_{N,s})} (K_{N,s}) = \mh H_{\Ql} = \mh H_{\Ql} (G,M,q\cE) , \] the version of $\mh H$ with scalars $\Ql$ instead of $\C$. With that settled, the proof of \cite[Theorem 4.1]{RiRu} applies to $(\mf g_{N,\circ}, K_{N,\circ})$. It provides a triangulated category \[ K^b \mr{Pure}_{G \times GL_1} (\mf g_{N,\circ}, K_{N,\circ}) , \] which is a mixed version of $\mc D^b_{G \times GL_1} (\mf g_{N,s}, K_{N,s})$ in the sense of \cite[Definition 4.2]{Rid}. Next \cite[Theorem 4.2]{RiRu} and \cite[\S 6]{Rid} generalize readily to our setting (but with objects over the ground field $\F_q$). In particular these entail an equivalence of triangulated categories \begin{equation}\label{eq:3.4} K^b \mr{Pure}_{G \times GL_1} (\mf g_{N,\circ}, K_{N,\circ}) \cong \mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}}) . \end{equation} Recall the notion of Koszulity for differential graded algebras from \cite{BGS}. \begin{lem}\label{lem:3.1} \enuma{ \item The algebra $\mh H_{\Ql}$ is Koszul. \item The Koszul dual $E(\mh H_{\Ql})$ of $\mh H_{\Ql}$ is a finite dimensional graded algebra. } \end{lem} \begin{proof} (a) Consider the degree zero part $\mh H_{\Ql,0} = \Ql [W_{q\cE},\natural_{q\cE}]$ as $\mh H_{\Ql}$-module, annihilated by all terms of positive degree. We have to find a resolution of $\mh H_{\Ql,0}$ by projective graded modules $P^n$, such that each $P^n$ is generated by its part in degree $n$. We will use that the multiplication map \[ \mh H_{\Ql,0} \otimes_{\Ql} \mc O (\mf t \oplus \mh A^1) \to \mh H_{\Ql} \] is an isomorphism of graded vector spaces. Start with the standard Koszul resolution for $\mc O (\mf t \oplus \mh A^1)$: \[ \Ql \leftarrow \mc O (\mf t \oplus \mh A^1) \leftarrow \mc O (\mf t \oplus \mh A^1) \otimes_{\Ql} \bigwedge\nolimits^1 (\mf t \oplus \mh A^1) \leftarrow \mc O (\mf t \oplus \mh A^1) \otimes_{\Ql} \bigwedge\nolimits^2 (\mf t \oplus \mh A^1) \leftarrow \cdots \] It is graded so that $\mc O (\mf t \oplus \mh A^1)_d \otimes_{\Ql} \bigwedge^n (\mf t \oplus \mh A^1)$ sits in degree $d+n$. Define \[ P^n = \mr{ind}_{\mc O (\mf t \oplus \mh A^1)}^{\mh H_{\Ql}} \big( \mc O (\mf t \oplus \mh A^1) \otimes_{\Ql} \bigwedge\nolimits^n (\mf t \oplus \mh A^1) \big) = \mh H_{\Ql} \otimes_{\Ql} \bigwedge\nolimits^n (\mf t \oplus \mh A^1) . \] Then $P^n = \mh H_{\Ql} P^n_n$ and we have a graded projective resolution \vspace{-1mm} \[ P^* \to \mr{ind}_{\mc O (\mf t \oplus \mh A^1)}^{\mh H_{\Ql}} (\Ql) = \mh H_{\Ql,0} . \] Thus $\mh H_{\Ql}$ fulfills \cite[Definition 1.1.2]{BGS} and is Koszul.\\ (b) In \cite[\S 1.2]{BGS}, the Koszul dual $E(\mh H_{\Ql})$ is defined as $\Ext^*_{\mh H_{\Ql}} (\mh H_{\Ql,0}, \mh H_{\Ql,0})$. This is easily computed as graded vector space: \begin{align*} E (\mh H_{\Ql}) & = \Ext^*_{\mh H_{\Ql}} \big( \mr{ind}_{\mc O (\mf t \oplus \mh A^1)}^{\mh H_{\Ql}} \Ql, \mh H_{\Ql,0} \big) \\ & = \Ext^*_{\mc O (\mf t \oplus \mh A^1)} \big( \Ql, \mh H_{\Ql,0} \big) \\ & = \Ext^*_{\mc O (\mf t \oplus \mh A^1)} \big( \Ql, \Ql \big) \otimes_{\Ql} \mh H_{\Ql,0} \\ & = \bigwedge\nolimits^* (\mf t \oplus \mh A^1) \otimes_{\Ql} \mh H_{\Ql,0} . \end{align*} Note that both $\bigwedge\nolimits^* (\mf t \oplus \mh A^1)$ and $\mh H_{\Ql,0}$ have finite dimension. \end{proof} The opposite algebra of $\mh H_{\Ql}$ is of the same kind, namely $\mh H_{\Ql} (G,M,q\cE^\vee)$ for the dual local system $q\cE^\vee$. 
Hence Lemma \ref{lem:3.1} also holds for $\mh H_{\Ql}^{op}$, which means that we may use the results of \cite{BGS} with right modules instead of left modules. Lemma \ref{lem:3.1} entails that $\mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}})$ admits the ``geometric t-structure" from \cite[\S 2.13]{BGS}. Its heart is equivalent with $E(\mh H_{\Ql})-\Mod_{g}$, the abelian category of graded right $E(\mh H_{\Ql})$-modules. Next \cite[Theorem 7.1]{Rid} shows that \eqref{eq:3.4} sends this t-structure to the ``second t-structure" on $K^b \mr{Pure}_{G \times GL_1} (\mf g_{N,\circ}, K_{N,\circ})$ from \cite[\S 4.2]{Rid}. In particular the heart of the second t-structure is equivalent with the heart of the geometric t-structure: \begin{equation}\label{eq:3.5} \mr{Perv}_{KD}(\mf g_{N,\circ},K_{N,\circ}) \cong E(\mh H_{\Ql})-\Mod_{g} . \end{equation} Let $F_\circ \in \mr{Perv}_{KD}(\mf g_{N,\circ}, K_{N,\circ})$ be the image of $E(\mh H_{\Ql})$ and let $F_s$ be the image of $F_\circ$ in $\mc D^b_{G \times GL_1}(\mf g_{N,s}, K_{N,s})$ via \cite[Theorem 4.1]{RiRu}. We note that by Lemma \ref{lem:3.1}.b and \eqref{eq:3.5}, \begin{equation}\label{eq:3.13} \Hom_{\mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s})} (F_s ,F_s [m]) \text{ is nonzero for only finitely many } m \in \Z . \end{equation} Choose a resolution of $E(\mh H_{\Ql})$ by free (graded right) modules of finite rank, that is possible by Lemma \ref{lem:3.1}.b. Via \eqref{eq:3.5}, that yields a projective resolution \[ \cdots \to P_\circ^{-2} \to P_\circ^{-1} \to P_\circ^0 \to K_{N,\circ} \] in $\mr{Perv}_{KD}(\mf g_{N,\circ},K_{N,\circ})$. Let $P_s^n$ be the image of $P_\circ^n$ in $\mc D^b_{G \times GL_1}(\mf g_{N,s}, K_{N,s})$. Then each $P_\circ^n$ (resp. $P_s^n$) is a direct sum of finitely many copies of $F_\circ$ (resp $F_s$). If $I \subset \Z$ is a segment such that $F_s$ lies in $\mc D^I_{G \times GL_1}(\mf g_{N,s},K_{N,s})$, then all $P_s^n$ belong to $\mc D^I_{G \times GL_1}(\mf g_{N,s},K_{N,s})$. This yields a chain complex \begin{equation}\label{eq:3.6} \cdots \to P_s^{-2} \to P_s^{-1} \to P_s^0 \to K_{N,s} , \end{equation} where all objects and all morphisms come from $\mc D^I_{G \times GL_1}(\mf g_{N,s},K_{N,s})$. However, the entire complex is usually unbounded, because it is likely that $P_\circ^n$ and $P_s^n$ are nonzero for all $n \in \Z_{\leq 0}$. We define a graded algebra $\mc R = \bigoplus_{n \in \Z_{\geq 0}} \mc R^n$ with \[ \mc R^n = \prod\nolimits_{k,j \in \Z_{\leq 0}} \Hom_{\mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s})} (P_s^k ,P_s^j [n+k-j]) . \] The multiplication in $\mc R$ comes from composition in $\mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s})$. For fixed $k$ and $n$, \eqref{eq:3.13} shows that only finitely many $j$ give a nonzero contribution to the part of $\mc R^n$ starting in $P_s^k$. This guarantees that the multiplication map $\mc R^n \times \mc R^m \to \mc R^{n+m}$ is well-defined. For $M \in \mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s})$ and $n \in \Z_{\geq 0}$ we put \[ {\mc Hom}^n (P_s^* ,M) = \prod\nolimits_{j \in \Z_{\leq 0}} \Hom_{\mc D^b_{G \times GL_1}(\mf g_{N,s})} (P_s^j, M[j+n]) , \] so that we obtain a functor \[ {\mc Hom}^* (P_s^* ,?) = \bigoplus\nolimits_{n \geq 0} {\mc Hom}^n (P_s^*,?) : \mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s}) \to \mc D^b (\mc R -\Mod_{\mr{fgdg}}) . \] By \cite[Theorem 7.4]{Rid} and \cite[Proposition 4]{Sch}, $\mc R$ is quasi-isomorphic to its own cohomology ring and \[ H^* (\mc R) \cong \End^*_{\mc D^b_{G \times GL_1}(\mf g_{N,s})} (K_{N,s}) \cong \mh H_{\Ql} . 
\] Moreover, by \cite[Remark 7.5]{Rid} there exists a quasi-isomorphism $\mc R \to \mh H_{\Ql}$. According to \cite[Theorem 10.12.5.1 and \S 11.1]{BeLu}, that induces an equivalence of categories \[ \otimes^L_{\mc R} \mh H_{\Ql} :\; \mc D^b (\mc R -\Mod_{\mr{fgdg}}) \longrightarrow \mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}}) . \] Combining all the above, we get an additive functor \begin{equation}\label{eq:3.7} \otimes^L_{\mc R} \mh H_{\Ql} \circ {\mc Hom}^* (P_s^* ,?) : \; \mc D^b_{G \times GL_1} (\mf g_{N,s},K_{N,s}) \longrightarrow \mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}}) . \end{equation} We want to show that \eqref{eq:3.7} is triangulated. An analogous statement is \cite[Lemma 7.7]{Rid}, which is proven in \cite[Appendix A]{Rid}. We cannot apply \cite[\S A.1]{Rid} directly to $\mh H_{\Ql}$, because for Hecke algebras averaging $\mc O (\mf t \oplus \mh A^1)$-module homomorphisms (between $\mh H$-modules) over $W_{q\cE}$ does not preserve the $\mc O (\mf t \oplus \mh A^1)$-linearity. Fortunately, from \cite[\S A.2]{Rid} only the last three lines rely on this averaging over a Weyl group. We can apply \cite[\S A.2]{Rid} with the group $G^\circ \times GL_1$, so that the $G^\circ \times GL_1$-equivariant cohomology of the variety of Borel subgroups is isomorphic to $\mc O (\mf t \oplus \mh A^1)$, like in \eqref{eq:3.9}. These arguments show the following: the functor \eqref{eq:3.7} sends any exact triangle in $\mc D^b_{G \times GL_1}(\mf g_{N,s},K_{N,s})$ to a triangle \begin{equation}\label{eq:3.12} L \to M \to N \to L[1] \quad \text{in} \quad \mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}}), \end{equation} whose image in $\mc D^b (\mc O (\mf t \oplus \mh A^1) -\Mod_{\mr{fgdg}})$ is an exact triangle. \begin{lem}\label{lem:3.9} The triangle \eqref{eq:3.12} is already exact in $\mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}})$. \end{lem} \begin{proof} We recall from \cite[10.12.2.9]{BeLu} that there is an equivalence of categories \[ \mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}}) \cong {\mc KP} (\mh H_{\Ql}) , \] where the right hand side denotes the homotopy category of $\mc K$-projective differential graded (right) $\mh H_{\Ql}$-modules. The same holds for $\mc O (\mf t \oplus \mh A^1)$. Recall that the cone of a morphism $f : L \to M$ in ${\mc KP} (\mh H_{\Ql})$ is $M \oplus L[1]$ with the differential $(d_M + f[1],-d_L)$. It comes with natural maps $\pi_1 : M \to \mr{cone}(f)$ and $\pi_2 : \mr{cone}(f) \to L[1]$. Phrased in these terms, \eqref{eq:3.12} yields a commutative diagram \begin{equation}\label{eq:3.14} \begin{array}{ccccccc} L & \xrightarrow{f} & M & \xrightarrow{\pi_1} & \mr{cone}(f) & \xrightarrow{\pi_2} & L[1] \\ || & & | | & & \downarrow \phi & & || \\ L & \xrightarrow{f} & M & \xrightarrow{g_1} & N & \xrightarrow{g_2} & L[1] \end{array}, \end{equation} where all the modules and the horizontal morphisms belong to $\mc{KP} (\mh H_{\Ql})$ and $\phi$ is an isomorphism in $\mc{KP} (\mc O (\mf t \oplus \mh A^1))$. We need to prove that $\phi$ can be replaced by an isomorphism in $\mc{ KP} (\mh H_{\Ql})$, for then \eqref{eq:3.12} becomes an exact triangle. The resolutions constructed in \cite[\S 10.12.2.4]{BeLu} show that every object of \\ $\mc D^b (\mh H_{\Ql} -\Mod_{\mr{fgdg}})$ can be represented by a free $\mh H_{\Ql}$-module, that is, an object of $\mc{ KP} (\mh H_{\Ql})$ of the form $V \otimes \mh H_{\Ql}$ with $V$ a graded $\Ql$-vector space (and some differential). 
Hence we may assume that all the objects in \eqref{eq:3.14} are free differential graded $\mh H_{\Ql}$-modules, say \[ L = V_L \otimes \mh H_{\Ql},\quad M = V_M \otimes \mh H_{\Ql},\quad N = V_N \otimes \mh H_{\Ql}. \] With the $\mc K$-projectivity \cite[\S 10.12.2]{BeLu} we can easily compute some Hom-spaces: \begin{align*} & \Hom_{\mc{ KP}(\mh H_{\Ql})} (V \otimes \mh H_{\Ql}, V' \otimes \mh H_{\Ql}) \cong \Hom_{\mc{ KP} (\Ql)} (V,V') \otimes \mh H_{\Ql} \\ & \Hom_{\mc{ KP}(\mc O (\mf t \oplus \mh A^1))} (V \otimes \mh H_{\Ql}, V' \otimes \mh H_{\Ql}) = \\ & \Hom_{\mc{ KP}(\mc O (\mf t \oplus \mh A^1))} \big( V \otimes \Ql [W_{q\cE},\natural_{q\cE}] \otimes \mc O (\mf t \oplus \mh A^1), V' \otimes \Ql [W_{q\cE},\natural_{q\cE}] \otimes \mc O (\mf t \oplus \mh A^1) \big) = \\ & \Hom_{\mc{ KP} (\Ql)} (V,V') \otimes \mr{End}_{\Ql} \big( \Ql [W_{q\cE},\natural_{q\cE}] \big) \otimes \mc O (\mf t \oplus \mh A^1) \end{align*} Notice that $\mr{End}_{\Ql} \big( \Ql [W_{q\cE},\natural_{q\cE}] \big)$ is naturally a $W_{q\cE}$-representation, with as invariants the operators from left multiplication by elements of $\Ql [W_{q\cE},\natural_{q\cE}]$. Now we see that averaging over $W_{q\cE}$ provides canonical surjections \begin{align*} & \mr{Av} : \mr{End}_{\Ql} \big( \Ql [W_{q\cE},\natural_{q\cE}] \big) \to \mr{End}_{\Ql [W_{q\cE},\natural_{q\cE}]} \big( \Ql [W_{q\cE},\natural_{q\cE}] \big) = \Ql [W_{q\cE},\natural_{q\cE}] ,\\ & \mr{Av} : \Hom_{\mc{ KP}(\mc O (\mf t \oplus \mh A^1))} (V \otimes \mh H_{\Ql}, V' \otimes \mh H_{\Ql}) \to \Hom_{\mc{ KP}(\mh H_{\Ql})} (V \otimes \mh H_{\Ql}, V' \otimes \mh H_{\Ql}) \end{align*} For $f' \in \Hom_{\mc{ KP}(\mc O (\mf t \oplus \mh A^1))} (V \otimes \mh H_{\Ql}, V' \otimes \mh H_{\Ql})$, $x \in V \otimes \Ql [W_{q\cE},\natural_{q\cE}]$ and\\ $T \in \mc O (\mf t \oplus \mh A^1)$, this works out as \[ \mr{Av} (f') (x T) = |W_{q\cE}|^{-1} \sum\nolimits_{w \in W_{q\cE}} f' (x N_w^{-1}) N_w T . \] In the same notation we consider $m = \sum_i v_i \otimes x_i T_i \in M$ and $\pi_1 (m) \in \mr{cone}(f)$. By the commutativity of \eqref{eq:3.14} \begin{align*} \mr{Av}(\phi) (\pi_1 (m)) & = |W_{q\cE}|^{-1} \sum_{w \in W_{q\cE}} \sum_i \phi (\pi_1 (v_i \otimes x_i N_w^{-1})) N_w T_i \\ & = |W_{q\cE}|^{-1} \sum_{i,w} g_1 (v_i \otimes x_i N_w^{-1}) N_w T_i \;=\; |W_{q\cE}|^{-1} \sum_{i,w} g_1 (v_i \otimes x_i T_i ) \\ & = \sum\nolimits_i g_1 (v_i \otimes x_i T_i ) = g_1 (m) = \phi (\pi_1 (m)) . \end{align*} For $c = \sum_j c_j \otimes x_j T_j \in \mr{cone}(f)$ we compute \begin{align*} g_2 \mr{Av}(\phi) (c) & = |W_{q\cE}|^{-1} \sum_{w \in W_{q\cE}} \sum_j g_2 (\phi (c_j \otimes x_j N_w^{-1}) N_w T_j) \\ & = |W_{q\cE}|^{-1} \sum_{j,w} \pi_2 (c_j \otimes x_j N_w^{-1}) N_w T_j \;=\; |W_{q\cE}|^{-1} \sum_{j,w} \pi_2 (c_j \otimes x_j T_j) \\ & = \sum\nolimits_j \pi_2 (c_j \otimes x_j T_j) = \pi_2 (c) = g_2 \phi (c) . \end{align*} The calculations show that the diagram \eqref{eq:3.14} remains commutative if we replace $\phi$ by the $\mh H_{\Ql}$-linear map Av$(\phi)$. Finally, the proof of \cite[Proposition A.3]{Rid} shows that Av$(\phi)$ is an isomorphism in $\Hom_{\mc{ KP}(\mh H_{\Ql})} (\mr{cone}(f),N)$. \end{proof} \begin{thm}\label{thm:3.2} Transfer the setup of Paragraph \ref{par:geomConst} to groups and varieties over an algebraically closed field of good characteristic for $G$, and use $\Ql$ as coefficient field for all sheaves and representations. 
\enuma{ \item The functor \eqref{eq:3.7} is an equivalence between the triangulated categories\\ $\mc D^b_{G \times GL_1}(\mf g_{N,s}, K_{N,s})$ and $\mc D^b \big( \mh H_{\Ql}(G,M,q\cE) -\Mod_{\mr{fgdg}} \big)$. \item There is an equivalence of triangulated categories \[ \mc D^b_{G \times GL_1}(\mf g_{N,s}) \longrightarrow \bigoplus\nolimits_{[M,\cC_v^M,q\cE]_G} \mc D^b \big( \mh H_{\Ql} (G,M,q\cE) -\Mod_{\mr{fgdg}} \big) . \] } \end{thm} \begin{proof} (a) The proof of \cite[Theorem 4.3]{RiRu} explains why the arguments from \cite[\S 7]{Rid} generalize to our setting. These results show that \eqref{eq:3.7} commutes with the shift operator and sends $K_{N,s}$ to $\mh H_{\Ql}$. By Lemma \ref{lem:3.9}, the functor \eqref{eq:3.7} is triangulated. We conclude with an application of Beilinson's lemma (in the version from \cite[Lemma 6]{Sch}).\\ (b) This follows from part (a) and Theorem \ref{thm:3.6}. \end{proof} Combining Theorems \ref{thm:3.11} and \ref{thm:3.2}, we have proven: \begin{thm}\label{thm:3.3} Assume that $G$ can be defined over a finite extension of $\Z$. \enuma{ \item There exists an equivalence of categories \[ \mc D^b_{G \times GL_1}(\mf g_N, K_N) \longrightarrow \mc D^b \big( \mh H_{\Ql}(G,M,q\cE) -\Mod_{\mr{fgdg}} \big) , \] which sends $K_N$ to $\mh H_{\Ql}(G,M,q\cE)$. The same holds with the coefficient field $\C$ instead of $\Ql$. \item There exists an equivalence of categories \begin{align*} \mc D^b_{G \times GL_1} (\mf g_N) \longrightarrow & \bigoplus\nolimits_{[M,\cC_v^M,q\cE]_G} \mc D^b \big( \mh H (G,M,q\cE) -\Mod_{\mr{fgdg}} \big) \\ & = \; \mc D^b \big( \bigoplus\nolimits_{[M,\cC_v^M,q\cE]_G} \mh H (G,M,q\cE) -\Mod_{\mr{fgdg}} \big) . \end{align*} } \end{thm} We note that replacing $\Ql$ by the isomorphic field $\C$ is allowed because the topology of $\Ql$ does not play a role here. Part (a) categorifies $\mh H (G,M,q\cE)$ as differential graded algebra, while part (b) expresses $\mc D^b_{G \times GL_1} (\mf g_N)$ as a (derived) module category. \begin{thebibliography}{99} \bibitem[Ach]{Ach} P.N. Achar, \emph{Perverse sheaves and applications to representation theory}, Mathematical Surveys and Monographs {\bf 258}, American Mathematical Society, 2021 \bibitem[ABPS]{ABPS} A.-M. Aubert, P.F. Baum, R.J. Plymen, M. Solleveld, ``Conjectures about $p$-adic groups and their noncommutative geometry", Contemp. Math. {\bf 691} (2017), 15--51 \bibitem[AMS1]{AMS1} A.-M. Aubert, A. Moussaoui, M. Solleveld, ``Generalizations of the Springer correspondence and cuspidal Langlands parameters'', Manus. Math. {\bf 157} (2018), 121--192 \bibitem[AMS2]{AMS2} A.-M. Aubert, A. Moussaoui, M. Solleveld, ``Graded Hecke algebras for disconnected reductive groups", pp. 23--84 in: \emph{Geometric aspects of the trace formula, W. M\"uller, S. W. Shin, N. Templier (eds.)}, Simons Symposia, Springer, 2018 \bibitem[AMS3]{AMS3} A.-M. Aubert, A. Moussaoui, M. Solleveld, ``Affine Hecke algebras for Langlands parameters", arXiv:1701.03593v3, 2019 \bibitem[BaMo]{BaMo} D. Barbasch, A. Moy, ``Unitary spherical spectrum for $p$-adic classical groups", Acta Appl. Math. {\bf 44} (1996), 3--37 \bibitem[BBD]{BBD} A. Beilinson, J. Bernstein, P. Deligne, ``Faisceaux pervers", pp. 5--171 in: \emph{Analysis and topology on singular spaces (Luminy, 1981)}, Ast\'erisque 100, Soc. Math. France, Paris, 1982 \bibitem[BGS]{BGS} A. Beilinson, V. Ginzburg, W. Soergel, ``Koszul duality patterns in representation theory", J. Amer. Math. Soc. {\bf 9.2} (1996), 473--527 \bibitem[BeLu]{BeLu} J. Bernstein, V. 
Lunts, \emph{Equivariant sheaves and functors}, Lecture Notes in Mathematics {\bf 1578}, Springer-Verlag, Berlin, 1994 \bibitem[Bez]{Bez} R. Bezrukavnikov, ``On two geometric realizations of an affine Hecke algebra", Publ. math. I.H.\'E.S. {\bf 123} (2016), 1--67 \bibitem[CaSh]{CaSh} W. Casselman, F. Shahidi, ``On irreducibility of standard modules for generic representations", Ann. Scient. \'Ec. Norm. Sup (4) {\bf 31} (1998), 561--589 \bibitem[ChGi]{ChGi} N. Chriss, V. Ginzburg, \emph{Representation theory and complex geometry}, Birkh\"auser, 1997 \bibitem[DHKM]{DHKM} J.-F. Dat, D. Helm, R. Kurinczuk, G. Moss, ``Moduli of Langlands parameters", arXiv:2009.06708, 2020 \bibitem[FaSc]{FaSc} L. Fargues, P. Scholze, ``Geometrization of the local Langlands correspondence", arXiv:2102.13459, 2021 \bibitem[KaLu]{KaLu} D. Kazhdan, G. Lusztig, ``Proof of the Deligne--Langlands conjecture for Hecke algebras", Invent. Math. {\bf 87} (1987), 153--215 \bibitem[KNS]{KNS} D. Kazhdan, V. Nistor, P. Schneider, ``Hochschild and cyclic homology of finite type algebras", Sel. Math. New Ser. {\bf 4.2} (1998), 321--359 \bibitem[Lus1]{Lus-Int} G. Lusztig, ``Intersection cohomology complexes on a reductive group'', Invent. Math. {\bf 75.2} (1984), 205--272 \bibitem[Lus1]{Lus-CharV} G. Lusztig, ``Character sheaves V", Adv. Math. {\bf 61} (1986), 103--155 \bibitem[Lus2]{Lus-Char} G. Lusztig, ``Introduction to character sheaves", Proc. Symp. Pure Math. {\bf 47} (1987), 161--179 \bibitem[Lus3]{Lus-Cusp1} G. Lusztig, ``Cuspidal local systems and graded Hecke algebras'', Publ. Math. Inst. Hautes \'Etudes Sci. {\bf 67} (1988), 145--202 \bibitem[Lus4]{Lus-Gr} G. Lusztig, ``Affine Hecke algebras and their graded version", J. Amer. Math. Soc {\bf 2.3} (1989), 599--635 \bibitem[Lus5]{Lus-IC} G. Lusztig, ``Intersection cohomology methods in representation theory", Proc. Int. Congr. Math. Kyoto, Springer, 1991 \bibitem[Lus6]{Lus-Cusp2} G. Lusztig, ``Cuspidal local systems and graded Hecke algebras. II'', pp. 217--275 in: \emph{Representations of groups}, Canadian Mathematical Society Conference Proceedings {\bf 16}, 1995 \bibitem[MiVi]{MiVi} I. Mirkovi\'c, K. Vilonen, ``Geometric Langlands duality and representations of algebraic groups over commutative rings", Ann. Math. {\bf 166.1} (2007), 95--143 \bibitem[Rid]{Rid} L. Rider, ``Formality for the nilpotent cone and a derived Springer correspondence", Adv. Math. {\bf 235} (2013), 208--236 \bibitem[RiRu1]{RiRu1} L. Rider, A. Russell, ``Perverse sheaves on the nilpotent cone and Lusztig's generalized Springer correspondence", pp. 273--292 in: \emph{Lie algebras, Lie superalgebras, vertex algebras and related topics}, Proc. Sympos. Pure Math. {\bf 92}, Amer. Math. Soc., Providence RI, 2016 \bibitem[RiRu2]{RiRu} L. Rider, A. Russell, ``Formality and Lusztig's generalized Springer correspondence", Algebras Repr. Th. {\bf 24} (2021), 699--714 \bibitem[Sch]{Sch} O.M. Schn\"urer, ``Equivariant sheaves on flag varieties", Math. Z. {\bf 267} (2011), 27--80 \bibitem[Sol1]{SolGHA} M. Solleveld, ``Parabolically induced representations of graded Hecke algebras", Algebr. Represent. Th. {\bf 15.2} (2012), 233--271 \bibitem[Sol2]{SolK} M. Solleveld, ``Topological K-theory of affine Hecke algebras", Ann. K-theory {\bf 3.3} (2018), 395--460 \bibitem[Sol3]{SolHecke} M. Solleveld, ``Affine Hecke algebras and their representations", Indag. Math. {\bf 32.5} (2021), 1005--1082 \bibitem[Sol4]{SolEnd} M. Solleveld, ``Endomorphism algebras and Hecke algebras for reductive $p$-adic groups", J. 
Algebra {\bf 606} (2022), 371--470 \bibitem[Sol5]{SolTwist} M. Solleveld, ``Hochschild homology of twisted crossed products and twisted graded Hecke algebras", Ann. K-theory {\bf 8.1} (2023), 81--126 \bibitem[Sol6]{SolSheaves} M. Solleveld, ``Graded Hecke algebras, constructible sheaves and the $p$-adic Kazhdan--Lusztig conjecture", arXiv:2106.03196v2, 2022 \bibitem[Sol7]{SolStand} M. Solleveld, ``On submodules of standard modules", arXiv:2309.10401, 2023 \bibitem[Vog]{Vog} D. Vogan, ``The local Langlands conjecture", pp. 305--379 in: \emph{Representation theory of groups and algebras}, Contemp. Math. {\bf 145}, American Mathematical Society, Providence RI, 1993 \bibitem[Wei]{Wei} C. Weibel, \emph{An introduction to homological algebra}, Cambridge Studies in Advanced Mathematics {\bf 38}, Cambridge University Press, 1994 \bibitem[Zhu]{Zhu} X. Zhu, ``Coherent sheaves on the stack of Langlands parameters", arXiv:2008.02998, 2020 \end{thebibliography} \end{document}
2205.07431v2
http://arxiv.org/abs/2205.07431v2
Radial projection theorems in finite spaces
\documentclass[12pt]{amsart} \usepackage{amssymb,amsmath,amsthm,bm} \usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes} \oddsidemargin=-.0cm \evensidemargin=-.0cm \textwidth=16cm \textheight=22cm \topmargin=0cm \definecolor{darkblue}{RGB}{0,0,160} \usepackage{fouriernc} \usepackage[colorlinks=true,allcolors=darkblue]{hyperref} \DeclareSymbolFont{usualmathcal}{OMS}{cmsy}{m}{n} \DeclareSymbolFontAlphabet{\mathcal}{usualmathcal} \usepackage[T1]{fontenc} \usepackage{color} \usepackage{parskip} \usepackage{setspace} \renewcommand{\baselinestretch}{1.1} \newtheorem{theorem}{Theorem}[section] \newtheorem{remark}{Remark}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{question}[theorem]{Question} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem*{definition*}{Definition} \def\thang#1{\noindent \textcolor{blue} {\textsc{(Thang:} \textsf{#1})}} \def\ben#1{\noindent \textcolor{red} {\textsc{(Ben:} \textsf{#1})}} \def\F{\mathcal{F}} \def\Fp{\mathbb{F}_p} \def\Fq{\mathbb{F}_q} \def\R{\mathbb{R}} \def\C{\mathcal{C}} \def\A{\mathcal{A}} \def\B{\mathcal{B}} \def\E{E}\def\spt{\mathtt{spt}} \def\RR{\mathcal{R}} \def\S{\mathcal{S}} \newcommand{\I}{\mathcal{I}} \def \R{{\mathbb R}} \def\rr{{\mathbb R}} \def \F{{\mathbb F}} \def \Z{{\mathbb Z}} \def\supp{\hbox{supp\,}} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\spn}{span} \def\P{\mathcal{P}} \def\L{\mathcal{L}} \def\C{\mathcal{C}} \def\S{\mathcal{S}} \def\V{\mathcal{V}} \def\Q{\mathcal{Q}} \def\K{\mathcal{K}} \def\U{\mathcal{U}} \def\HH{\mathcal{H}} \def\W{\mathcal{W}} \def\A{\mathcal{A}} \def\C{\mathcal{C}} \def\B{\mathcal{B}} \newcommand{\sss}{\mathcal S} \newcommand{\kkk}{\mathcal K} \newcommand{\uuu}{\mathcal E} \begin{document} \title{Radial projection theorems in finite spaces} \author{Ben Lund\and Thang Pham\and Vu Thi Huong Thu} \address{Discrete Mathematics Group, Institute for Basic Science (IBS)} \email{[email protected]} \address{Hanoi University of Science, Vietnam National University} \email{[email protected]} \address{Hanoi University of Science, Vietnam National University} \email{vuthihuongthu\[email protected]} \date{} \maketitle \begin{abstract} Motivated by recent results on radial projections and applications to the celebrated Falconer distance problem, we study radial projections in the setting of finite fields. More precisely, we extend results due to Mattila and Orponen (2016), Orponen (2018), and Liu (2020) to finite spaces. In some cases, our results are stronger than the corresponding results in the continuous setting. In particular, we solve the finite field analog of a conjecture due to Liu and Orponen on the exceptional set of radial projections of a set of dimension between $d-2$ and $d-1$. \end{abstract} \section{Introduction} For any point $y \in \mathbb{R}^d$, the radial projection $\pi^y:\mathbb{R}^d \setminus \{y\} \rightarrow S^{d-1}$ is \[\pi^y(x) = \frac{x-y}{|x-y|}. \] For $E \subset \mathbb{R}^d$, the radial projection of $E$ from $y$ is \[\pi^y(E) = \{\pi^y(x) : x \in E \setminus \{y\}\}.\] There are many results and open problems related to the general question: What can be said about the radial projections $\{\pi^y(E): y \in \mathbb{R}^d\}$, given a bound on the Hausdorff dimension $\dim_H(E)$ of $E$? Here we consider finite analogs to various results on this general question. 
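As a toy illustration of these definitions (recorded here only for orientation, and not used in what follows): if $y$ is the origin in $\mathbb{R}^2$ and $E$ is a circle centered at $y$, then $\pi^y(E) = S^1$, which has Hausdorff dimension $1$; if instead $E \setminus \{y\}$ is contained in a single line through $y$, then $\pi^y(E)$ consists of at most two antipodal points of $S^1$. In general, $\pi^y(E)$ is small exactly when $E$ concentrates on few rays emanating from $y$, and the results discussed below quantify how rare such exceptional centers $y$ can be.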
Over a general field $\mathbb{F}$, for a given point $y \in \mathbb{F}^d$, the radial projection $\pi^y$ from $y$ maps a point $x \neq y$ to the line that contains $x$ and $y$. Usually we work over a finite field $\mathbb{F}_q$, and in this setting the standard analog to $\dim_H(E)$ is $\log_q(|E|)$. One reason to be interested in finite analogs is that working in the finite field model can help to understand the behavior of the corresponding problems in the continuous setting. A second reason that we are interested in radial projections of point sets is because of a connection to the Falconer distance problem. Recent work on the Falconer distance problem \cite{Du3, alex-fal} relies on a radial projection theorem due to Orponen \cite{o1}. We hope that a better understanding of radial projections of finite point sets may also lead to progress on finite versions of the Falconer distance problem - see the last paragraph for a more detailed discussion. Mattila and Orponen \cite{MO16} proved that for any Borel set $E\subset \mathbb{R}^d$, if $\dim_H(E)>d-1$, then \begin{equation}\label{conti-1}\dim_H\{y\in \mathbb{R}^d\colon \mathcal{H}^{d-1}(\pi^y(E))=0\}\le 2(d-1)-\dim_H(E),\end{equation} where $\mathcal{H}^{d}$ denotes the $d$-dimensional Hausdorff measure. Orponen showed \cite{o1} that this is sharp in the sense that for any $\alpha\in (d-1, d]$, there exists a Borel set $E\subset \mathbb{R}^d$ of Hausdorff dimension $\alpha$ such that $\dim_H\{y: \mathcal{H}^{d-1}(\pi^y(E)) = 0\} = 2(d-1) - \alpha$. Our first result is an analogous bound for sets in finite space. \begin{theorem}\label{th:largeESC} Let $E \subset \mathbb{F}_q^d$ and let $M$ be a positive integer, with $|E| \geq 6q^{d-1}$ and $M \leq 4^{-1}q^{d-1}$. Then, \begin{equation} \label{eq:largeESC} \#\{y\in \mathbb{F}_q^d\colon |\pi^y(E)|\leq M\} < 12q^{d-1}M|E|^{-1}.\end{equation} \end{theorem} This is stronger than the most natural finite analog to (\ref{conti-1}) in the case that $M$ is much smaller than $q^{d-1}$, and suggests the following strengthening of (\ref{conti-1}). \begin{conjecture}\label{conj12} Let $E \subset \mathbb{R}^d$ be a Borel set with $\dim_H(E) > d-1$, and let $s\leq d-1$ be a real number. Then, \[\dim_H\{y \in \mathbb{R}^d: \dim_H\pi^y(E)< s\} \leq d-1+s-\dim_H(E). \] \end{conjecture} Liu \cite{liu} proved that for any Borel set $E\subset \mathbb{R}^d$ with $\dim_H(E) \leq d-1$, \begin{equation}\label{conti-2} \dim_H\{y\in \mathbb{R}^d\colon \dim_H \pi^y(E)<\dim_H(E)\}\le \min(2(d-1)-\dim_H(E), \dim_H(E)+1). \end{equation} Liu and Orponen \cite{liu} further suggested the following, stronger conjecture. \begin{conjecture}\label{conjec} Let $E\subset \mathbb{R}^d$ be a Borel set with $\dim_H(E)\in (k-1, k]$ for $k \in \{0, 1,\ldots, d-1\}$. Then \[\dim_H\left\lbrace y\in \mathbb{R}^d\colon \dim_H\pi^y(E)<\dim_H(E) \right\rbrace\le k. \] \end{conjecture} We prove the natural finite field analog to Conjecture \ref{conjec} for $|E| \geq q^{d-2}$. \begin{theorem}\label{th:largeTSC} Let $E \subset \mathbb{F}_q^d$ with $|E| \leq 100^{-1}q^{d-1}$. Then, \begin{equation}\label{eq:largeTSC}\#\{y \in \mathbb{F}_q^d: |\pi^y(E )| \leq 10^{-1}|E|\} < 8q^{d-1}. \end{equation} \end{theorem} For $|E| < q^{d-2}$, we prove the following bound, which is slightly stronger than the natural analog to (\ref{conti-2}) over the corresponding range. \begin{theorem}\label{th:justCS-intro} Let $E \subset \mathbb{F}_q^d$ and let $1<C<|E|$. 
Then, \begin{equation}\label{eq:justCS} \#\{y \in \mathbb{F}_q^d:|\pi^y(E)| < C^{-1}|E|\} < (C-1)^{-1}q|E|. \end{equation} \end{theorem} We do not believe that Theorem \ref{th:justCS-intro} is sharp, and offer the following conjecture, which is a natural finite analog to Conjecture \ref{conjec}. \begin{conjecture}\label{conj:ff} Let $E \subset \mathbb{F}_q^d$ with $q^{k-1} < |E| \leq q^k$ for $k \in \{1,\ldots,d-1\}$. Then, \[\#\{y \in \mathbb{F}_q^d: |\pi^y(E)| < 10^{-1}|E|\} \leq 10q^k . \] \end{conjecture} If true, then Conjectures \ref{conjec} and \ref{conj:ff} would be best possible, up to constant factors. Indeed, if $E$ is contained in a translate of a $k$-dimensional subspace $F$, then $\dim_H \pi^y(E) \leq k-1$ for every $y \in F$. However, in some cases it is possible to obtain stronger conclusions by assuming that $E$ is not nearly contained in a translate of a low-dimensional subspace. For example, Orponen showed that, if $E \subset \mathbb{R}^2$ is a Borel set such that $\dim_H(E \setminus \ell) = \dim_H(E)$ for each line $\ell$, then \begin{equation}\label{conti-4}\dim_H\left\lbrace y\in \mathbb{R}^2\colon \dim_H \pi^y(E)<\frac{\dim_H(E)}{2} \right\rbrace=0.\end{equation} As a consequence of this result, when $\dim_H(E)>0$ and $\dim_H(E\setminus \ell) = \dim_H(E)$ for each line $\ell$, there exists $y\in E$ such that $\dim_H \pi^y(E)>\frac{\dim_H(E)}{2}$. Several improvements have been obtained in \cite{LIU, Shmerkin, Shmerkin2}. The most recent, due to Shmerkin and Wang \cite{Shmerkin2}, states the following: for a fixed $k\in \{1, \ldots, d-1\}$, $d\ge 2$, and a Borel set $E\subset \mathbb{R}^d$ with $\dim_H(E)=s\in (k-1/k-\eta(d), k]$, where $\eta(d)$ is a small positive constant satisfying $\eta(1)=0$, one has \begin{equation}\label{best-current} \sup_{y\in E} \dim_H \pi^y(E) \ge k-1+\phi(s-k+1),~~\phi(u)=\frac{u}{2}+\frac{u^2}{2(2+\sqrt{u^2+4})}, \end{equation} under the condition that $E$ is not contained in any $k$-plane. Hence, with $d=2$ and $k=1$, there exists $y\in E$ such that \begin{equation}\label{best-current-plane}\dim_H\pi^y(E)\ge \phi(s)>s/2.\end{equation} The following finite analog to (\ref{conti-4}) has a very simple proof, using only the fact that there is a unique line through any pair of distinct points. \begin{theorem}\label{prop1.6} Let $\mathbb{F}$ be an arbitrary field, and let $E \subset \mathbb{F}^d$ be a finite set of points such that no line contains more than $(3/4)|E|$ points of $E$. Then, \begin{equation} \#\{y \in \mathbb{F}^d : |\pi^y(E)|< 2^{-1}|E|^{1/2}\} \leq 1. \end{equation} \end{theorem} In a more general setting, we have the following theorem. \begin{theorem}\label{th:3.3intro} Let $\mathbb{F}$ be an arbitrary field and $E$ be a set in $\mathbb{F}^d$ such that no line contains more than $|E|/2$ points of $E$. For $M < |E|/4$, we have \[\#\{y \in \mathbb{F}^d: |\pi^y(E)| < M\} < 4M^2. \] \end{theorem} This theorem is essentially sharp up to constant factors. For example, suppose that $\mathbb{F}_q$ has a subfield of order $p$, and that $E \subset \mathbb{F}_q^2$ is the set of points in a subplane isomorphic to $\mathbb{F}_p^2$. Then, $|\pi^y(E)| = p+1 > |E|^{1/2}$ for every point $y \in E$. We can reasonably hope to improve Theorem \ref{th:3.3intro} under the additional assumption that $\mathbb{F}$ does not have a subfield of suitable size.
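To make this sharpness example quantitative (a back-of-the-envelope check rather than a formal statement): take $q = p^2$ with $p \geq 7$ and let $E \subset \mathbb{F}_q^2$ be such a subplane, so that $|E| = p^2$ and every line of $\mathbb{F}_q^2$ meets $E$ in at most $p \leq |E|/2$ points. Choosing $M = p+2 \approx |E|^{1/2}$, we have $M < |E|/4$, every $y \in E$ satisfies $|\pi^y(E)| = p+1 < M$, and hence the set counted in Theorem \ref{th:3.3intro} has size at least $|E| = (M-2)^2$, matching the upper bound $4M^2$ up to a constant factor.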
We obtain some such improvements using incidence bounds, in particular the Szemer\'edi-Trotter theorem in the real plane \cite{Szemeredi} and an analog proved by Stevens and de Zeeuw for planes over fields of finite characteristic \cite{frank}. \begin{theorem}\label{radial-real-intro} There exist constants $0<c_1,c_2<1$ such that the following holds. For any finite $E \subset \mathbb{R}^2$ such that $|\ell \cap E| < c_1 |E|$ for each line $\ell$ and $M \leq c_2 |E|$, we have \[\#\{y \in \mathbb{R}^2 : |\pi^y(E)| < M\} < O(M^2 |E|^{-1}). \] \end{theorem} One corollary to Theorem \ref{radial-real-intro} (in the spirit of (\ref{best-current-plane})) is that, for any finite set $E \subset \mathbb{R}^2$ with no more than $c_1|E|$ points contained in any line, there is a point $y \in E$ such that $|\pi^y(E)| = \Omega(|E|)$. \begin{theorem}\label{primefields2D} There exists a constant $0<c<1$ such that the following holds. Let $p$ be prime. For any set $E\subset \mathbb{F}_p^2$ with $|E|\ll p^{8/5}$ and $|\ell\cap E|<c|E|$ for each line $\ell$ and $M\leq c^{11/7}|E|^{4/7}$, we have \[\#\{y \in \mathbb{F}_p^2 : |\pi^y(E)| < M\} < O(M^{11/4}|E|^{-1}).\] \end{theorem} We conjecture that the conclusion of Theorem \ref{radial-real-intro} holds for sets $E \subset \mathbb{F}_p^2$ with $|E| \ll p$ and no more than $c|E|$ points in any line. {\bf The Falconer distance problem:} Another motivation for this work that we want to emphasize here is a connection between this topic and the Falconer distance problem, which is one of the central problems in Geometric Measure Theory. Given a compact set $E$ in $\mathbb{R}^d$, the Falconer distance conjecture states that if the Hausdorff dimension of $E$ is greater than $d/2$, then its distance set has positive Lebesgue measure. The recent breakthrough of Guth, Iosevich, Ou, and Wang \cite{alex-fal} says that in two dimensions the conclusion holds when $\dim_H(E)>5/4$. In higher dimensions, the best current dimensional thresholds are as follows: \begin{itemize} \item $d=3$, Du, Guth, Ou, Wang, Wilson, and Zhang \cite{Du1} (2017): $\frac{3}{2}+\frac{3}{10}$ \item $d\ge 4$ even, Du, Iosevich, Ou, Wang, and Zhang \cite{Du3} (2020): $\frac{d}{2}+\frac{1}{4}$ \item $d\ge 5$ odd, Du and Zhang \cite{Du2} (2018): $\frac{d}{2}+\frac{d}{4d-2}$ \end{itemize} The finite field variant of this problem is known as the Erd\H{o}s-Falconer distance problem, which asks for the smallest exponent $\alpha$ such that for any $E\subset \mathbb{F}_q^d$, if $|E|\ge q^{\alpha}$, then the distance set covers a positive proportion of all distances. In 2005, Iosevich and Rudnev \cite{IR07} showed that if $|E|\gg q^{\frac{d+1}{2}}$, then $E$ determines a positive proportion of all distances. Unlike the continuous version, the exponent $\frac{d+1}{2}$ has been proved to be sharp in \cite{HIKR10} except for even dimensions or dimensions $d\equiv 3\mod 4$ with $q\equiv 3\mod 4$. For these cases, the conjectured exponent is $d/2$, which is in line with the Falconer distance conjecture. Compared to the continuous version, only a few improvements have been made during the last 15 years. In particular, Chapman, Erdogan, Hart, Iosevich, and Koh \cite{CEHIK10} proved the exponent $\frac{4}{3}$ over arbitrary finite fields, which was recently improved to $\frac{5}{4}$ over prime fields by Murphy, Petridis, Pham, Rudnev, and Stevens \cite{murphy}. However, in higher even dimensions, we know nothing except the exponent $(d+1)/2$.
Since one of the key steps in the papers \cite{Du3, alex-fal} is a radial projection theorem due to Orponen \cite{o2}, if one wishes to adapt the methods from $\mathbb{R}^d$ to $\mathbb{F}_q^d$, then a full understanding of radial projections over finite fields is needed. Thus, the results in this paper provide some new progress in this direction. {\bf An update on Conjectures \ref{conj12} and \ref{conjec}:} In a very recent paper \cite{OS}, Orponen and Shmerkin proved that Conjecture \ref{conj12} and Conjecture \ref{conjec} hold in two dimensions. We refer the reader to their paper for more details and discussions. \section{Notation and definitions} Here we fix notation and definitions that we use in several proofs. For $E\subset \mathbb{F}_q^d$ and an integer $M>0$, let \[T := \{y \in \mathbb{F}_q^d: |\pi^y(E)| \leq M\}. \] For any line $\ell$, let $e(\ell) = |\ell \cap E|$, let $t(\ell) = |\ell \cap T|$, and let $\ell(x)$ be the indicator function for $x \in \ell$. Let $L$ be the set of lines that each contain at least one point of $T$ and at least one point of $E$, and let $G$ be the set of all lines in $\mathbb{F}_q^d$. We also use the notation $\binom{d}{1}_q$, which is defined by \[\binom{d}{1}_q:=\frac{q^d-1}{q-1}.\] Note that $\binom{d}{1}_q$ is the number of lines in $\mathbb{F}_q^d$ that are incident to a specified point. \section{Proofs of Theorem \ref{th:justCS-intro}, Theorem \ref{prop1.6}, and Theorem \ref{th:3.3intro}} The proofs in this section use the Cauchy-Schwarz inequality together with the fact that there is a unique line through each pair of distinct points. The following lemma immediately implies Theorem \ref{th:justCS-intro}, and plays an important role in the proof of Theorem \ref{th:3.3intro}. \begin{lemma}\label{lem:TOffALine} Let $E \subset \mathbb{F}^d$ be a finite set of points, let $1 < C < |E|$, let $k$ be a positive integer, and let $T$ be a set of points such that \begin{itemize} \item for each point $y \in T$, we have $|\pi^y(E)| < |E|C^{-1}$, and \item for any line $\ell$, we have $|\ell \cap T| < k$. \end{itemize} Then, \[|T| \leq (C-1)^{-1}k|E|. \] \end{lemma} \begin{proof} The strategy is to find upper and lower bounds on the number of triples in \[ R := \{(y,x_1,x_2) \in T \times E \times E: y,x_1,x_2 \text{ are collinear and } y \notin \{x_1, x_2\}\}.\] Denote by $M = |E|C^{-1}$ the assumed upper bound on $|\pi^y(E)|$ for $y \in T$. To obtain a lower bound on $|R|$, we bound from below the number of triples in $R$ that contain a fixed $y \in T$. By the Cauchy-Schwarz inequality, applied over the at most $M$ lines through $y$ that meet $E \setminus \{y\}$, \[ \sum_{\substack{\ell: y \in \ell}} (e(\ell) - \mathbf{1}_{y \in E})^2 \geq M^{-1} \left(\sum_{\substack{\ell: y \in \ell}} (e(\ell) - \mathbf{1}_{y \in E}) \right)^2 \geq M^{-1}(|E|-M\cdot \mathbf{1}_{y \in E})^2.\] Summing over all $y \in T$: \[|R| = \sum_{y \in T}\sum_{\ell: y\in\ell} (e(\ell)-\mathbf{1}_{y \in E})^2 \geq \sum_{y \in T} M^{-1}(|E|- M \cdot \mathbf{1}_{y \in E})^2 = C|T||E| - 2|E||T \cap E| + M|T\cap E|.
\] The upper bound relies on the observation that each pair of distinct points in $E$ belongs to at most $k$ triples in $R$: \begin{align*} |R| &= \sum_{x \in E} \sum_{\substack{y \in T \\ x \neq y}} 1 + \sum_{\substack{x_1,x_2 \in E\\ x_1 \neq x_2}} \sum_{\substack{y \in T \\ y \notin \{x_1,x_2\}}} \sum_{\ell \in G}\ell(x_1)\ell(x_2)\ell(y) \\ &\leq |T|\,|E| - |E \cap T| + \sum_{\substack{x_1,x_2 \in E\\ x_1 \neq x_2}} \sum_{\ell \in G} \ell(x_1)\ell(x_2) \sum_{\substack{y \in T \\ y \notin \{x_1,x_2\}}} \ell(y) \\&= |T|\,|E| - |E \cap T| + \sum_{\substack{x_1,x_2 \in E\\ x_1 \neq x_2}} \sum_{\ell \in G} \ell(x_1)\ell(x_2)(t(\ell)- \mathbf{1}_{x_1 \in T} - \mathbf{1}_{x_2 \in T}) \\ &\leq |T|\,|E| - |E \cap T| +k|E|^2 - \sum_{\substack{x_1,x_2 \in E\\ x_1 \neq x_2}} (\mathbf{1}_{x_1 \in T} + \mathbf{1}_{x_2 \in T}) \\ &= |T|\,|E| - |E \cap T| + k|E|^2 - 2(|E \cap T|(|E \cap T| - 1) + (|E|-|E \cap T|)|E \cap T|)\\ &= |T|\,|E| + k|E|^2 - 2 |E| \, |E \cap T| + |E \cap T|.\end{align*} Combining the lower and upper bounds, we get \begin{align*}C|T|\,|E|-2|E|\,|T\cap E| + M|T\cap E| &\leq |T|\,|E| + k|E|^2 - 2|E|\,|E\cap T| + |E \cap T|,\end{align*} hence $|T| < (C-1)^{-1}k|E|$. \end{proof} As stated above, Lemma \ref{lem:TOffALine} immediately implies Theorem \ref{th:justCS-intro}. \begin{proof}[Proof of Theorem \ref{th:justCS-intro}] At most $q$ points of $T$ are contained in any line in $\mathbb{F}_q^d$. Hence, Theorem \ref{th:justCS-intro} follows from the case $k=q$ of Lemma \ref{lem:TOffALine}. \end{proof} Next we prove Theorem \ref{th:3.3intro}. Lemma \ref{lem:TOffALine} gives a bound on $|T|$ when not too many points of $T$ are on any single line. The next lemma bounds the number of elements of $T$ that can be contained in a single line. \begin{lemma}\label{lem:TOnALine} Let $\mathbb{F}$ be a field, and let $E \subset \mathbb{F}^d$ be a finite set of points. Let $\ell$ be a line such that $\ell \cap E = \emptyset$. Then, for $M<|E|/2$, \[|\ell \cap \{y \in \mathbb{F}^d: |\pi^y(E)| \leq M\}| \leq 2M.\] \end{lemma} \begin{proof} Let $T$ be the set of $y\in \mathbb{F}^d$ such that $|\pi^y(E)| \leq M$, and let $L$ be the set of lines that contain at least one point of $T$ and at least one point of $E$. Let \[R:= \{(y,x_1,x_2) \in (T\cap \ell) \times E \times E: x_1 \neq x_2 \text{ and } y, x_1, x_2 \text{ are collinear}\}. \] Since each ordered pair $(x_1,x_2) \in E^2$ of distinct points in $E$ determines a line that intersects $\ell$ in at most one point, we have \begin{equation} \label{eq:upperBoundRForCollinear} |R| \leq |E|(|E|-1) < |E|^2. \end{equation} On the other hand, we have \begin{align} |R| &= \sum_{y \in T\cap \ell} \sum_{\substack{\ell' \in L \\ y \in \ell'}} e(\ell')(e(\ell') - 1)\nonumber\\ &\geq |T\cap \ell|M^{-1}|E|^2 - |T\cap \ell|\,|E| = |T\cap \ell||E|^2M^{-1}(1-M|E|^{-1})\nonumber\\ &\geq 2^{-1}|T\cap \ell|\,|E|^2M^{-1}.\label{eq:lowerBoundRForCollinear} \end{align} Here, we use the Cauchy-Schwarz inequality and the fact that each point of $T$ is incident to at most $M$ lines of $L$. The result follows directly from (\ref{eq:upperBoundRForCollinear}) and (\ref{eq:lowerBoundRForCollinear}). \end{proof} Theorem \ref{th:3.3intro} follows easily from Lemmas \ref{lem:TOffALine} and \ref{lem:TOnALine}. \begin{proof}[Proof of Theorem \ref{th:3.3intro}] Let $\ell$ be a line. Since there are at least $|E|/2$ points of $E$ that do not lie on $\ell$ and $M < |E|/4$, Lemma \ref{lem:TOnALine}, applied to the set $E \setminus \ell$, implies that there are at most $2M$ points of $T$ contained in $\ell$.
Consequently, Lemma \ref{lem:TOffALine} applied with $C = |E|M^{-1}$ and $k = 2M$ implies that \[|T| \leq (|E|M^{-1}-1)^{-1}2M|E| < 4M^2. \] \end{proof} \subsection{Proof of Theorem \ref{prop1.6}} As promised in the introduction, the proof of Theorem \ref{prop1.6} is very simple. \begin{proof}[Proof of Theorem \ref{prop1.6}] Let $T$ be the set of points $y$ such that $|\pi^y(E)| < 2^{-1}|E|^{1/2}$, and let $M= 2^{-1}|E|^{1/2}-1$. Let $L$ be the set of lines that contain at least one point of $T$ and at least one point of $E$. Suppose, for contradiction, that $y,z \in T$ with $y \neq z$. Let $L_y,L_z \subset L$ be those lines of $L$ incident to $y$ and $z$, respectively. Let $\ell_M$ be the line that contains both $y$ and $z$. Let $\ell \in L_y \setminus \{\ell_M\}$. Then each point of $E \cap \ell$ is on a different line of $L_z$, hence $|\ell \cap E| \leq M$. Since $|L_y \setminus \{\ell_M\}| \leq M$, we have that $\left |\bigcup_{\ell \in L_y \setminus \{\ell_M\}} (\ell \cap E) \right | \leq M^2$. Since $M^2 < (1/4)|E|$, it must be the case that $|\ell_M \cap E| > (3/4)|E|$, contradicting our assumption. \end{proof} \section{Proofs of Theorems \ref{th:largeESC} and \ref{th:largeTSC}} The proofs in this section use the Cauchy-Schwarz inequality together with several combinatorial properties of points and lines in $\mathbb{F}_q^d$. In particular, each point is incident to $\binom{d}{1}_q$ lines, each line is incident to $q$ points, and $\mathbb{F}_q^d$ has $q^d$ points in total. The proofs of Theorems \ref{th:largeESC} and \ref{th:largeTSC} are similar. The basic plan of both proofs is to give upper and lower bounds on $\sum_{\ell \in L} e(\ell)t(\ell)$. We use the same simple lower bound for both proofs. Each pair $(x,y) \in E \times T$ is contained in exactly one line of $L$ if $x \neq y$, and at least one line of $L$ if $x = y$. Hence, \begin{equation}\label{eq:etLowerBound} \sum_{\ell \in L} e(\ell)t(\ell) \geq |E|\,|T|. \end{equation} The upper bounds on $\sum_{\ell \in L} e(\ell)t(\ell)$ used in the two proofs of this section rely on different arguments, but both are inspired by work on incidence bounds for large sets in finite space. These bounds were first investigated by Haemers \cite{Ham}, and have had many recent applications to problems in arithmetic combinatorics and Erd\H{o}s-type questions over finite vector spaces \cite{YMRS, BIP, MPRRS, RS, RRS}. The following lemma and its proof are nearly identical to Lemma 1 in \cite{MP2016}. \begin{lemma}\label{lem:MP} Let $E \subset \mathbb{F}_q^d$. Then, \begin{align} \label{eq:e2}&\sum_{\ell \in G} e(\ell)^2 = \binom{d}{1}_q |E| + |E|(|E|-1), \text{ and}\\ \label{eq:eVar}&\sum_{\ell \in G} (e(\ell) - |E|q^{1-d})^2 \leq \binom{d}{1}_q|E|. \end{align} \end{lemma} \begin{proof} The following calculation yields (\ref{eq:e2}): \begin{align*} \sum_{\ell \in G} e(\ell)^2 &= \sum_{\ell \in G} \sum_{(x,x') \in E^2} \ell(x) \ell(x') \\ &= \sum_\ell \sum_{x \in E} \ell(x) + \sum_\ell \sum_{\substack{(x,x') \in E^2: \\x \neq x'}} \ell(x)\ell(x') \\ &= \binom{d}{1}_q |E| + |E|(|E|-1). \end{align*} In the third line, we use the facts that each point is incident to exactly $\binom{d}{1}_q$ lines, and each pair of distinct points is contained in exactly one line. We now show that (\ref{eq:eVar}) follows from (\ref{eq:e2}).
Indeed, \begin{align*} \sum_{\ell \in G}(e(\ell) - |E|q^{-(d-1)})^2 &= \sum_{\ell \in G} e(\ell)^2 - 2 |E|q^{-(d-1)} \sum_{\ell \in G}e(\ell) + |G| |E|^2q^{-2(d-1)} \\ &= \sum_{\ell \in G} e(\ell)^2 - q^{-(d-1)}\binom{d}{1}_q |E|^2 \\ &< \binom{d}{1}_q |E|. \end{align*} In the second line, we use the facts that each point is incident to exactly $\binom{d}{1}_q$ lines, and that $|G| = q^{d-1}\binom{d}{1}_q$. \end{proof} \subsection{Proof of Theorem \ref{th:largeESC}} First we obtain an upper bound on $\sum_{\ell \in L} e(\ell)t(\ell)$. \begin{lemma}\label{lem:largeEUpperBound} We have \begin{equation}\label{eq:largeEUpperBound} \sum_{\ell \in L} e(\ell)t(\ell) \leq (M q^{-(d-1)})|E|\,|T| + \sqrt{(M|T|+|T|^2)|E|\binom{d}{1}_q}. \end{equation} \end{lemma} \begin{proof} Since each point of $T$ is incident to at most $M$ lines of $L$, we have $\sum_{\ell \in L} t(\ell) \leq M|T|$, and hence \begin{align}\sum_{\ell \in L} \nonumber e(\ell)t(\ell) & \leq \frac{M|E|\,|T|}{q^{d-1}} + \sum_{\ell \in L}t(\ell)\left(e(\ell) - |E|q^{-(d-1)}\right) \\ \nonumber &\leq \frac{M|E|\,|T|}{q^{d-1}} + \sqrt{\sum_{\ell \in L}t(\ell)^2 \cdot \sum_{\ell \in L}\left(e(\ell) - |E|q^{-(d-1)}\right)^2 } \\ \label{eq:intermediateLargeE} &\leq \frac{M|E|\,|T|}{q^{d-1}} + \sqrt{\sum_{\ell \in L}t(\ell)^2 \cdot \sum_{\ell \in G}\left(e(\ell) - |E|q^{-(d-1)}\right)^2 }. \end{align} Lemma \ref{lem:MP} provides an upper bound for $\sum_{\ell \in G}\left(e(\ell) - |E|q^{-(d-1)}\right)^2$, so we only need to bound $\sum_{\ell \in L}t(\ell)^2$. \begin{align} \nonumber \sum_{\ell \in L} t(\ell)^2 &= \sum_{\ell \in L} \sum_{(y,y') \in T^2} \ell(y) \ell(y') \\ \nonumber &= \sum_{\ell \in L} t(\ell) + \sum_\ell \sum_{\substack{(y,y') \in T^2: \\y \neq y'}} \ell(y)\ell(y') \\ \label{eq:boundT2ForLargeE}&\leq M |T| + |T|(|T|-1) < M|T| + |T|^2. \end{align} Combining (\ref{eq:boundT2ForLargeE}) with the bound $\sum_{\ell \in G}\left(e(\ell) - |E|q^{-(d-1)}\right)^2 \leq |E| \binom{d}{1}_q$ given by Lemma \ref{lem:MP} and (\ref{eq:intermediateLargeE}) completes the proof. \end{proof} We use Lemma \ref{lem:largeEUpperBound} to prove the following, which is slightly more general than Theorem \ref{th:largeESC}. \begin{theorem}\label{th:largeE} Let $E \subset \mathbb{F}_q^d$ and let $M$ be a positive integer. Let $a= \binom{d}{1}_q|E|^{-1}$ and let $b = Mq^{1-d}$. Let $C = (1-2b+b^2-a)^{-1}$. If $C>0$, then \begin{equation}\#\{y\in \mathbb{F}_q^d\colon |\pi^y(E)|\leq M\} < C\binom{d}{1}_qM|E|^{-1}.\end{equation} \end{theorem} \begin{proof} Combining the lower bound (\ref{eq:etLowerBound}) with the upper bound (\ref{eq:largeEUpperBound}) from Lemma \ref{lem:largeEUpperBound} leads quickly to the conclusion of the theorem: \begin{align*} (1-2b+b^2)|E|^2|T|^2 &= (1-Mq^{-(d-1)})^2 |E|^2 |T|^2 \\ &< (M|T|+|T|^2)|E|\binom{d}{1}_q \\ &= M|T|\,|E|\binom{d}{1}_q + a|E|^2|T|^2. \end{align*} Hence, \[|T| < C\binom{d}{1}_q M |E|^{-1},\] as claimed. \end{proof} \begin{proof}[Proof of Theorem \ref{th:largeESC}] Apply Theorem \ref{th:largeE} with $a=3^{-1}$ and $b=4^{-1}$. \end{proof} \subsection{Proof of Theorem \ref{th:largeTSC}} To prove Theorem \ref{th:largeTSC}, we first need a refinement of Lemma \ref{lem:largeEUpperBound}. \begin{lemma}\label{lem:largeTUpperBound} Let $a = |E|q^{1-d}$, let $b=M |E|^{-1}$, and let $c = \sqrt{(1-b)/a}$. If $(1-b)/a>1$, then \[\sum_{\ell \in L} e(\ell)t(\ell) < (b+ac)|E|\,|T| + (1-c^{-1})^{-1}|E|\sqrt{2\binom{d}{1}_q|T|}.
\] \end{lemma} \begin{proof} We divide the sum $\sum_{\ell \in L} e(\ell)t(\ell)$ into three parts: \[\sum_{\ell \in L} e(\ell)t(\ell) = \sum_{\ell:e(\ell)= 1} e(\ell)t(\ell) + \sum_{\substack{\ell: e(\ell) \geq 2, \\t(\ell) \leq c|T|q^{1-d}}} e(\ell)t(\ell) + \sum_{\substack{\ell : e(\ell) \geq 2, \\t(\ell) > c|T|q^{1-d}}} e(\ell)t(\ell).\] Bounding the first two terms is easy: \begin{equation}\label{eq:L1} \sum_{\ell:e(\ell) = 1} t(\ell) \leq \sum_{y \in T} |\pi^y(E )| \leq M|T| = b|T|\,|E|, \end{equation} \begin{equation}\label{eq:L2} \sum_{\substack{\ell: e(\ell) \geq 2, \\t(\ell) \leq c|T|q^{1-d}}} e(\ell)t(\ell) \leq c|T|q^{1-d} \sum_{\ell: e(\ell) \geq 2} e(\ell) < c|T|q^{1-d}|E|^{2} = ac |T| \, |E|. \end{equation} Let $L'=\{\ell \in L: e(\ell) \geq 2, t(\ell) > c|T|q^{1-d}\}$. We have \begin{equation}\label{eq:L'CS} \left(\sum_{\ell \in L'} e(\ell)t(\ell)\right)^2 \leq \sum_{\ell \in L'}t(\ell)^2 \sum_{\ell \in L'}e(\ell)^2.\end{equation} We use the lower bound on $t(\ell)$ together with Lemma \ref{lem:MP} to bound $\sum_{\ell \in L'} t(\ell)^2$. Denote $d = (1-c^{-1})^{-2}$. Note that \[d^{-1}t(\ell)^2 = (t(\ell)-c^{-1}t(\ell))^2 \leq (t(\ell)-|T|q^{1-d})^2.\] Together with (\ref{eq:eVar}) of Lemma \ref{lem:MP}, this yields \begin{equation}\label{eq:L'T2} \sum_{\ell \in L'}t(\ell)^2 = d \sum_{\ell \in L'} d^{-1}t(\ell)^2 \leq d \sum_{\ell \in L'}(t(\ell) - |T|q^{1-d})^2 < (1-c^{-1})^{-2}\binom{d}{1}_q |T|. \end{equation} To bound $\sum_{\ell \in L'} e(\ell)^2$, we use the fact that each point of $E$ is incident to fewer than $|E|$ lines of $L'$ to infer \begin{equation}\label{eq:L'E2} \sum_{\ell \in L'} e(\ell)^2 < 2|E|^2, \end{equation} Taken together, (\ref{eq:L'CS}), (\ref{eq:L'T2}), and (\ref{eq:L'E2}) imply \begin{equation}\label{eq:L'} \left(\sum_{\ell \in L'} e(\ell)t(\ell)\right)^2 \leq \sum_{\ell \in L'}t(\ell)^2 \sum_{\ell \in L'}e(\ell)^2 < 2(1-c^{-1})^{-2}\binom{d}{1}_q|T| \, |E|^2. \end{equation} Combining (\ref{eq:L1}), (\ref{eq:L2}), and (\ref{eq:L'}) leads immediately to the claimed conclusion. \end{proof} We remark that a very similar proof leads to a bound analogous to \ref{conti-2}. The key idea that leads to our stronger result is to use the trivial bound \ref{eq:L'E2} instead of Lemma \ref{lem:MP} to bound $\sum_{\ell \in L'}e(\ell)^2$. We use Lemma \ref{lem:largeTUpperBound} to obtain the following, which is slightly stronger than Theorem \ref{th:largeTSC}. \begin{theorem}\label{th:largeT} Let $E \subset \mathbb{F}_q^d$ and let $M$ be a positive integer. Let $a = |E|q^{1-d}$, and let $b=M |E|^{-1}$. Let $c = \sqrt{(1-b)/a}$. Then, \[\#\{y \in \mathbb{F}_q^d: |\pi^y(E )| \leq M\} < 2(1-b-ac)^{-2}(1-c^{-1})^{-2}\binom{d}{1}_q. \] \end{theorem} \begin{proof} Lemma \ref{lem:largeTUpperBound} and (\ref{eq:etLowerBound}) together imply \[|T|\,|E| < (b+ac)|T|\,|E| + (1-c^{-1})^{-1}|E|\sqrt{2 \binom{d}{1}_q|T|}, \] which easily implies the claimed bound on $|T|$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:largeTSC}] Apply Theorem \ref{th:largeT} with $a=100^{-1}$ and $b=10^{-1}$. \end{proof} \section{Proofs Theorems \ref{radial-real-intro} and \ref{primefields2D}} The proofs in this section are less elementary than those in the previous sections, and rely on incidence bounds. The two proofs are identical in outline, but the calculations are simpler over the reals. \subsection{Over the reals} To proceed, we need to recall the following celebrated Szemer\'{e}di-Trotter theorem and one of its consequence on the number of $k$-rich lines. 
\begin{theorem}[\cite{Szemeredi}]\label{th:SzTr} Let $E$ be a set of points and $L$ be a set of lines in $\mathbb{R}^2$. Then, \begin{equation}\label{eq:SzTrIncidenceBound} \sum_{\ell \in L}e(\ell) = O(|E|^{2/3}|L|^{2/3} + |E| + |L|). \end{equation} Let $L_{\ge k}$ be the set of $k$-rich lines, i.e., lines with at least $k$ points from $E$. Then \begin{equation}\label{eq:SzTrRichLineBound} |L_{\geq k}| = O(|E|^2k^{-3} + |E|k^{-1}). \end{equation} \end{theorem} The following is a standard consequence; we include a proof for the reader's convenience. \begin{lemma}\label{th:SzTrSquareBound} There exist positive constants $c_1,c_2$ such that, for any set $E$ of points in $\mathbb{R}^2$, \begin{equation} \sum_{c_1 \leq k \leq c_2|E|} k^2 |L_{=k}| < 10^{-1}|E|^2. \end{equation} \begin{proof} \begin{align*} \sum_{c_1 \leq k \leq c_2|E|} k^2 |L_{=k}| &= \sum_{c_1 \leq k \leq c_2|E|} k^2 (|L_{\geq k}| - |L_{\geq k+1}|) \\ &= \sum_{c_1 \leq k \leq c_2|E|} k^2 |L_{\geq k}| - \sum_{c_1 + 1 \leq k \leq c_2|E| + 1} (k-1)^2 |L_{\geq k}| \\ &= \sum_{c_1+1 \leq k \leq c_2|E|} (k^2-(k-1)^2) |L_{\geq k}|+ c_1^2|L_{\geq c_1}| - (c_2|E|)^2|L_{\geq c_2 |E|+1}| \\ &\leq \sum_{c_1+1 \leq k \leq c_2|E|} k O(|E|^2 k^{-3} + |E|k^{-1}) + O(c_1^{-1}|E|^2)\\ &\leq \sum_{c_1 \leq k} O(|E|^2 k^{-2}) + \sum_{k \leq c_2|E|}O(|E|) + O(c_1^{-1}|E|^2)\\ &\leq O(c_1^{-1}|E|^2) + O(c_2|E|^2). \end{align*} No matter what the constants hidden in the $O$-notation are, there are choices for $c_1$ and $c_2$ so that the right side is bounded by $10^{-1}|E|^2$, as claimed. \end{proof} \end{lemma} \begin{theorem}\label{radial-real} Let $c_1$ and $c_2$ be as in Lemma \ref{th:SzTrSquareBound}. There exists a constant $0 <c_3<1$ such that the following holds. For any finite $E \subset \mathbb{R}^2$ such that $|\ell \cap E| < c_2 |E|$ for each line $\ell$ and $M \leq c_3 |E|$, we have \[\#\{y \in \mathbb{R}^2 : |\pi^y(E)| < M\} < O(M^2 |E|^{-1}). \] \end{theorem} \begin{proof} As before, we proceed by bounding the sum $\sum_{\ell \in L}e(\ell)t(\ell)$ from above and below. We use the same lower bound $\sum_{\ell \in L} e(\ell)t(\ell) \geq |E|\,|T|$ as before. For the upper bound, we partition the sum as follows: \begin{equation} \sum_{\ell \in L} e(\ell)t(\ell) \leq \sum_{\substack{\ell \in L: \\ e(\ell) < c_1}} e(\ell)t(\ell) + \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell)+\sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) = 1}}e(\ell)t(\ell). \end{equation} {\bf Bounding the first term:} This is easy, namely, \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) < c_1}} e(\ell)t(\ell) = \sum_{y \in T} \sum_{\substack{\ell \in L: y \in \ell, \\ e(\ell) < c_1}} e(\ell) < c_1 M\, |T| \leq c_1c_3|E|\,|T|. \end{equation} {\bf Bounding the second term:} By the Cauchy-Schwarz inequality, \begin{equation}\label{30-e} \left(\sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell)\right)^2 \leq \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)^2 \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} t(\ell)^2. \end{equation} Lemma \ref{th:SzTrSquareBound} bounds the first sum on the right side by $10^{-1}|E|^2$.
To bound the second sum, we use the observation that each point of $T$ is incident to fewer than $|T|$ lines $\ell$ such that $t(\ell) > 1$: \begin{align*} \sum_{\substack{\ell \in L: \\ t(\ell) > 1}}t(\ell)^2 &= \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \sum_{y_1 \neq y_2 \in T} \ell(y_1)\ell(y_2) + \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \sum_{y \in T} \ell(y)\\ &\leq |T|(|T|-1) + \sum_{y \in T} \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \ell(y) \\ &\leq |T|(|T|-1) + \sum_{y \in T} |T| < 2|T|^2. \end{align*} Combining these leads to \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell) \leq 5^{-1/2}|E|\,|T|. \end{equation} {\bf Bounding the third term:} The key observation is that \[\#\{\ell \in L: t(\ell) = 1\} \leq M|T|.\] Combined with the Szemer\'edi-Trotter theorem (Theorem \ref{th:SzTr}), one has \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) = 1}}e(\ell)t(\ell) \leq \sum_{\substack{\ell \in L : \\ t(\ell) = 1}} e(\ell) = O((M|T|\,|E|)^{2/3} + M|T| + |E|). \end{equation} Combining the upper and lower bounds gives \begin{equation}\nonumber |E|\,|T| \leq (c_1\,c_3 + 5^{-1/2})|E|\,|T| + O((M|E|\,|T|)^{2/3} + c_3|E|\,|T| + |E|). \end{equation} By choosing $c_3$ sufficiently small, we have \begin{equation}\nonumber |E|\,|T| = O((M|E|\,|T|)^{2/3}), \end{equation} which leads directly to the desired conclusion. \end{proof} \subsection{Over prime fields} In the prime field setting, we make use of a variant of the Stevens--de Zeeuw point-line incidence bound in \cite[Theorem 14]{lund}. \bigskip \begin{theorem}[\cite{frank}]\label{frank} Let $P$ be a point set in $\mathbb{F}_p^2$ and $L$ be a set of lines in $\mathbb{F}_p^2$. If $|P|\le p^{8/5}$, then \[\sum_{\ell\in L}e(\ell)=O(|P|^{11/15}|L|^{11/15}+|P|+|L|).\] Let $L_{\ge k}$ be the set of $k$-rich lines, i.e., lines with at least $k$ points from $P$. Then \[|L_{\ge k}|=O(|P|^{11/4}k^{-\frac{15}{4}}+|P|k^{-1}).\] \end{theorem} We also have an analog of Lemma \ref{th:SzTrSquareBound}. \begin{lemma}\label{th:SzTrSquareBound-ff} There exists a positive constant $c$ such that, for any set $E$ of points in $\mathbb{F}_p^2$ with $|E| \leq p^{8/5}$, \begin{equation} \sum_{c^{-4/7}|E|^{3/7} \leq k \leq c|E|} k^2 |L_{=k}| < 10^{-1}|E|^2. \end{equation} \begin{proof} Set $c_1:=c^{-4/7}|E|^{3/7}$. \begin{align*} \sum_{c_1 \leq k \leq c|E|} k^2 |L_{=k}| &= \sum_{c_1 \leq k \leq c|E|} k^2 (|L_{\geq k}| - |L_{\geq k+1}|) \\ &= \sum_{c_1 \leq k \leq c|E|} k^2 |L_{\geq k}| - \sum_{c_1 + 1 \leq k \leq c|E| + 1} (k-1)^2 |L_{\geq k}| \\ &= \sum_{c_1+1 \leq k \leq c|E|} (k^2-(k-1)^2) |L_{\geq k}|+ c_1^2|L_{\geq c_1}| - (c|E|)^2|L_{\geq c |E|+1}| \\ &\leq \sum_{c_1+1 \leq k \leq c|E|} k O(|E|^{11/4} k^{-15/4} + |E|k^{-1}) + O(c_1^{-7/4}|E|^{11/4})+O(c_1|E|)\\ &\leq \sum_{c_1 \leq k} O(|E|^{11/4} k^{-11/4}) + \sum_{k \leq c|E|}O(|E|) + O(c_1^{-7/4}|E|^{11/4})+O(c_1|E|)\\ &\leq O(c_1^{-7/4}|E|^{11/4})+O(c_1|E|) + O(c|E|^2). \end{align*} Thus we can choose $c$ small enough that the right side is bounded by $10^{-1}|E|^2$. \end{proof} \end{lemma} \begin{proof}[Proof of Theorem \ref{primefields2D}] Our argument is similar to that over the reals; for the sake of completeness, we reproduce all details here. As above, we want to bound the sum $\sum_{\ell \in L}e(\ell)t(\ell)$ from above and below. We have $\sum_{\ell \in L} e(\ell)t(\ell) \geq |E|\,|T|$.
For the upper bound, we partition the sum as follows: \begin{equation} \sum_{\ell \in L} e(\ell)t(\ell) \leq \sum_{\substack{\ell \in L: \\ e(\ell) < c_1}} e(\ell)t(\ell) + \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell)+\sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) = 1}}e(\ell)t(\ell), \end{equation} where $c_1:=c^{-4/7}|E|^{3/7}$. {\bf Bounding the first term:} This is easy, namely, \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) < c_1}} e(\ell)t(\ell) = \sum_{y \in T} \sum_{\substack{\ell \in L: y \in \ell, \\ e(\ell) < c_1}} e(\ell) < c_1 M\, |T| \leq c|E|\,|T|. \end{equation} {\bf Bounding the second term:} By the Cauchy-Schwarz inequality, \begin{equation}\label{30-eee} \left(\sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell)\right)^2 \leq \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)^2 \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} t(\ell)^2. \end{equation} Lemma \ref{th:SzTrSquareBound-ff} bounds the first sum on the right side by $10^{-1}|E|^2$. To bound the second sum, we use the observation that each point of $T$ is incident to fewer than $|T|$ lines $\ell$ such that $t(\ell) > 1$: \begin{align*} \sum_{\substack{\ell \in L: \\ t(\ell) > 1}}t(\ell)^2 &= \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \sum_{y_1 \neq y_2 \in T} \ell(y_1)\ell(y_2) + \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \sum_{y \in T} \ell(y)\\ &\leq |T|(|T|-1) + \sum_{y \in T} \sum_{\substack{\ell \in L: \\ t(\ell) > 1}} \ell(y) \\ &\leq |T|(|T|-1) + \sum_{y \in T} |T| < 2|T|^2. \end{align*} Combining these leads to \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) > 1}} e(\ell)t(\ell) \leq 5^{-1/2}|E|\,|T|. \end{equation} {\bf Bounding the third term:} The key observation is that \[\#\{\ell \in L: t(\ell) = 1\} \leq M|T|.\] Combined with Theorem \ref{frank}, one has \begin{equation}\nonumber \sum_{\substack{\ell \in L: \\ e(\ell) \geq c_1, t(\ell) = 1}}e(\ell)t(\ell) \leq \sum_{\substack{\ell \in L : \\ t(\ell) = 1}} e(\ell) = O((M|T|\,|E|)^{11/15} + M|T| + |E|)\leq O\left((M|T|\,|E|)^{11/15}+|E|\right)+\frac{|T|\,|E|}{10}. \end{equation} Combining the upper and lower bounds gives \begin{equation}\nonumber |E|\,|T| \leq (c + 5^{-1/2})|E|\,|T| + O\left((M|T|\,|E|)^{11/15}+|E|\right)+\frac{|T|\,|E|}{10}. \end{equation} By choosing $c$ sufficiently small, we have \begin{equation}\nonumber |E|\,|T| = O((M|E|\,|T|)^{11/15}), \end{equation} which leads directly to the desired conclusion. \end{proof} \section{Acknowledgements} We would like to thank Bochen Liu and Tuomas Orponen for many discussions about the results in the continuous setting. B. Lund was supported by the Institute for Basic Science (IBS-R029-C1). T. Pham would like to thank the VIASM for the hospitality and for the excellent working conditions. \bibliographystyle{amsplain} \begin{thebibliography}{10} \bibitem{YMRS} E. Aksoy Yazici, B. Murphy, M. Rudnev and I. Shkredov, \textit{Growth estimates in positive characteristic via collisions}, International Mathematics Research Notices, \textbf{2017}(23) (2017), 7148--7189. \bibitem{BIP} M. Bennett, A. Iosevich and J. Pakianathan, \textit{Three-point configurations determined by subsets of $\mathbb{F}_q^2$ via the Elekes-Sharir Paradigm}, Combinatorica, \textbf{34}(6) (2014): 689--706. \bibitem{CEHIK10} J. Chapman, M. Burak Erdo\u{g}an, D. Hart, A. Iosevich and D. Koh, {\it Pinned distance sets, $k$-simplices, Wolff's exponent in finite fields and sum-product estimates}, Mathematische Zeitschrift, \textbf{271}(1) (2012), 63--93.
\bibitem{Du1} X. Du, L. Guth, Y. Ou, H. Wang, B. Wilson and R. Zhang, \textit{Weighted restriction estimates and application to Falconer distance set problem}, American Journal of Mathematics, \textbf{143}(1) (2021), 175--211. \bibitem{Du2} X. Du and R. Zhang, \textit{Sharp $L^2$-estimates of the Schr\"{o}dinger maximal function in higher dimensions}, Annals of Mathematics, \textbf{189} (2019), no. 3, 837--861. \bibitem{Du3} X. Du, A. Iosevich, Y. Ou, H. Wang and R. Zhang, \textit{An improved result for Falconer's distance set problem in even dimensions}, Mathematische Annalen, \textbf{380}(3) (2021), 1215--1231. \bibitem{alex-fal} L. Guth, A. Iosevich, Y. Ou and H. Wang, \textit{On Falconer's distance set problem in the plane}, Inventiones mathematicae, \textbf{219}(3) (2020), 779--830. \bibitem{point-lineforlargeset} B. Hanson, B. Lund and O. Roche-Newton, \textit{On distinct perpendicular bisectors and pinned distances in finite fields}, Finite Fields and Their Applications, \textbf{37} (2016): 240--264. \bibitem{Ham} W.H. Haemers, \textit{Eigenvalue techniques in design and graph theory}, Number 121, Mathematisch Centrum, Amsterdam, 1980. \bibitem{HIKR10} D.~Hart, A.~Iosevich, D.~Koh and M.~Rudnev, \emph{Averages over hyperplanes, sum-product theory in vector spaces over finite fields and the Erd\H{o}s-Falconer distance conjecture}, Transactions of the American Mathematical Society, \textbf{363}(6) (2011), 3255--3275. \bibitem{IR07} A. Iosevich and M. Rudnev, \textit{Erd\H{o}s distance problem in vector spaces over finite fields}, Transactions of the American Mathematical Society, \textbf{359}(12) (2007): 6127--6142. \bibitem{lund} B. Lund and G. Petridis, \textit{Bisectors and pinned distances}, Discrete and Computational Geometry, \textbf{64}(3) (2020): 995--1012. \bibitem{liu} B. Liu, \textit{On Hausdorff dimension of radial projections}, Revista Matem\'{a}tica Iberoamericana, \textbf{37}(4) (2020): 1307--1319. \bibitem{LIU} B. Liu and C-Y. Shen, \textit{Intersection between pencils of tubes, discretized sum-product, and radial projections}, arXiv:2001.02551 (2020). \bibitem{Mat04} P.~Mattila, \newblock \textit{Hausdorff dimension, projections, and the {F}ourier transform}, \newblock {\em Publicacions Matem\`atiques}, \textbf{48}(1) (2004), 3--48. \bibitem{MO16} P.~Mattila and T.~Orponen, \newblock \textit{Hausdorff dimension, intersections of projections and exceptional plane sections}, \newblock {\em Proceedings of the American Mathematical Society}, \textbf{144}(8) (2016): 3419--3430. \bibitem{murphy} B. Murphy, G. Petridis, T. Pham, M. Rudnev and S. Stevens, \textit{On the pinned distances problem over finite fields}, Journal of the London Mathematical Society, \textbf{105}(1) (2022): 469--499. \bibitem{MP2016} B. Murphy and G. Petridis, \textit{A point-line incidence identity in finite fields, and applications}, Moscow Journal of Combinatorics and Number Theory, 2016. \bibitem{MPRRS} B. Murphy, G. Petridis, O. Roche-Newton, M. Rudnev and I. D. Shkredov, \textit{New results on sum-product type growth over fields}, Mathematika, \textbf{65}(3) (2019), 588--642. \bibitem{o2} T. Orponen, \textit{On the dimension and smoothness of radial projections}, Analysis and PDE, \textbf{12}(5) (2019): 1273--1294. \bibitem{o1} T. Orponen, \textit{A sharp exceptional set estimate for visibility}, Bulletin of the London Mathematical Society, \textbf{50}(1) (2018): 1--6. \bibitem{OS} T. Orponen and P. Shmerkin, \textit{On exceptional sets of radial projections}, arXiv:2205.13890 (2022). \bibitem{RS} M. Rudnev and I. D.
Shkredov, \textit{On growth rate in $SL_2(F_p)$, the affine group and sum product type implications}, arXiv:1812.01671 (2018). \bibitem{RRS} O. Roche-Newton, M. Rudnev and I. D. Shkredov, \textit{New sum-product type estimates over finite fields}, Advances in Mathematics, \textbf{293} (2016), 589--605. \bibitem{ps} Y. Peres and W. Schlag, \textit{Smoothness of projections, Bernoulli convolutions, and the dimension of exceptions}, Duke Mathematical Journal, \textbf{102}(2) (2000): 193--251. \bibitem{Shmerkin} P. Shmerkin, \textit{A nonlinear version of Bourgain’s projection theorem}, accepted in the Journal of the European Mathematical Society (2022). \bibitem{Shmerkin2} P. Shmerkin and H. Wang, \textit{On the distance sets spanned by sets of dimension $d/2$ in $\mathbb{R}^d$}, arXiv:2112.09044 (2021). \bibitem{frank} S. Stevens and F. De Zeeuw, \textit{An improved point-line incidence bound over arbitrary fields}, Bulletin of the London Mathematical Society, \textbf{49}(5) (2017): 842--858. \bibitem{Szemeredi} E. Szemer\'{e}di and W.T. Trotter, \textit{Extremal problems in discrete geometry}, Combinatorica, \textbf{3}(3) (1983): 381--392. \end{thebibliography} \end{document}
2205.07392v2
http://arxiv.org/abs/2205.07392v2
Saturation for Small Antichains
\documentclass[11pt, twoside]{article} \usepackage{amssymb, amsmath, amsthm} \usepackage[left=25mm, right=25mm, bottom=28mm]{geometry} \usepackage{inputenc} \usepackage{fancyhdr} \usepackage{tikz} \usetikzlibrary{positioning} \usepackage{titling} \usepackage{url} \usepackage{hyperref} \usepackage[T1]{fontenc} \usepackage{currvita} \usepackage{xcolor} \predate{} \postdate{} \usepackage{lipsum} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{claim}{Claim} \renewcommand{\theclaim}{\Alph{claim}} \newtheorem{claim1}{Claim} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}[theorem]{Conjecture} \begin{document} \newcommand{\Addresses}{{ \bigskip \footnotesize \medskip Irina~\DJ ankovi\'c, \textsc{St John's College, Cambridge, CB2 1TP, UK.}\par\nopagebreak\textit{Email address:} \texttt{[email protected]} \medskip Maria-Romina~Ivan, \textsc{Department of Pure Mathematics and Mathematical Statistics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge, CB3 0WB, UK.}\par\nopagebreak\textit{Email address:} \texttt{[email protected]} }} \pagestyle{fancy} \fancyhf{} \fancyhead [LE, RO] {\thepage} \fancyhead [CE] {IRINA \DJ ANKOVI\'C, MARIA-ROMINA IVAN} \fancyhead [CO] {SATURATION FOR SMALL ANTICHAINS} \renewcommand{\headrulewidth}{0pt} \renewcommand{\l}{\rule{6em}{1pt}\ } \title{\Large\textbf{SATURATION FOR SMALL ANTICHAINS}} \author{IRINA \DJ ANKOVI\'C, MARIA-ROMINA IVAN} \date{} \maketitle \begin{abstract} For a given positive integer $k$ we say that a family of subsets of $[n]$ is $k$-antichain saturated if it does not contain $k$ pairwise incomparable sets, but whenever we add to it a new set, we do find $k$ such sets. The size of the smallest such family is denoted by $\text{sat}^*(n, \mathcal A_{k})$. Ferrara, Kay, Kramer, Martin, Reiniger, Smith and Sullivan conjectured that $\text{sat}^*(n, \mathcal A_{k})=(k-1)n(1+o(1))$, and proved this for $k\leq4$. In this paper we prove this conjecture for $k=5$ and $k=6$. Moreover, we give the exact value for $\text{sat}^*(n, \mathcal A_5)$ and $\text{sat}^*(n, \mathcal A_6)$. We also give some open problems inspired by our analysis. \end{abstract} \section{Introduction} \par For a positive integer $k$ we say that a family $\mathcal F$ of subsets of $[n]=\{1,\ldots,n\}$ is \textit{$k$-antichain saturated} if $\mathcal F$ does not contain $k$ pairwise incomparable sets, but for every set $X\notin\mathcal F$, the family $\mathcal F\cup\{X\}$ does contain $k$ incomparable sets. We denote by $\text{sat}^*(n,\mathcal A_{k})$ the size of the smallest $k$-antichain saturated family. Equivalently (by Dilworth's theorem), $\text{sat}^*(n,\mathcal A_k)$ is the size of the smallest family that is maximal subject to being the union of $k-1$ chains. \par To see an example of a $k$-saturated family, let us call a chain of subsets of $[n]$ \textit{full} if it has size $n+1$. Then it is easy to see that a collection of $k-1$ full chains that intersect only at $\emptyset$ and $[n]$ is a $k$-antichain saturated family. Thus for $n$ large enough we certainly have $\text{sat}^*(n, \mathcal A_{k})\leq (k-1)(n-1)+2$. \par Ferrara, Kay, Kramer, Martin, Reiniger, Smith and Sullivan \cite{Ferrara2017TheSN} improved this upper bound slightly, showing that for $n\geq k\geq4$, we have $\text{sat}^*(n, \mathcal A_{k})\leq (n-1)(k-1)-(\frac{1}{2}\log_2k+\frac{1}{2}\log_2\log_2k+c)$, for some absolute constant $c$. 
In the other direction, they also showed that $\text{sat}^*(n,\mathcal A_k)\geq 3n-1$ for $n\geq k\geq 4$. This immediately implies that for $n\geq4$ we have $\text{sat}^*(n,\mathcal A_4)=3n-1$. They also showed that $\text{sat}^*(n,\mathcal A_2)=n+1$ and $\text{sat}^*(n,\mathcal A_3)=2n$, and conjectured that $\text{sat}^*(n,\mathcal A_{k})=(k-1)n(1+o(1))$. Here $o(1)$ denotes a function that tends to 0 as $n$ tends to infinity for each fixed $k$, in other words we are thinking of $k$ as fixed and $n$ growing. Later on, Martin, Smith and Walker \cite{martin2019improved} improved the lower bound by showing that for $k\geq4$ and $n$ large enough $\text{sat}^*(n,\mathcal A_{k})\geq (1-\frac{1}{\log_2(k-1)})\frac{(k-1)n}{\log_2(k-1)}$.\par We mention that this problem is part of a growing area in combinatorics, induced and non-induced poset saturation. We refer the reader to Gerbner, Keszegh, Lemons, Palmer, P{\'a}lv{\"o}lgyi and Patk{\'o}s \cite{gerbner2013saturating}, and Gerbner and Patk{\'o}s \cite{gerbner2018extremal} for nice introductions to the area, as well as to the paper of Keszegh, Lemons, Martin, P{\'a}lv{\"o}lgyi and Patk{\'o}s \cite{KESZEGH2021} for recent results on a variety of posets. \par In this paper we determine the exact value for $k=5$ and $k=6$. We show that for $n\geq5$ we have $\text{sat}^*(n, \mathcal A_5)=4n-2$, and for $n\geq6$ we have $\text{sat}^*(n,\mathcal A_6)=5n-5$. Our starting strategy in each proof is to cover the saturated family with full chains and look at the number of sets on each level. If the number is too small, then we try to `deflect' a chain to add a set not in the family in such a way that everything is still covered by the initial number of chains, which will contradict the saturation property. We believe that this approach, combined with more structural knowledge of the family might lead to improvements in the lower bound for general $k$. \par To end the Introduction, we record two immediate observations that we will use several times. The first is that any $k$-antichain saturated family must contain $\emptyset$ and $[n]$. The second is the following. \begin{lemma} If $\mathcal{F}$ is an induced $k$-antichain saturated family, then $\mathcal{F}$ is the union of $k-1$ full chains. \label{fullchainlemma} \end{lemma} \begin{proof} By Dilworth's theorem, we may partition $\mathcal F$ into $k-1$ chains, and so $\mathcal F$ is certainly contained in the union of some $k-1$ full chains, say $\mathcal D_1,\ldots,\mathcal D_{k-1}$. But $\mathcal D_1\cup\ldots\cup\mathcal D_{k-1}$ is a $k$-antichain saturated family, so by maximality of $\mathcal F$ we must have that $\mathcal F=\mathcal D_1\cup\ldots\cup\mathcal D_{k-1}$. \end{proof} \section{5-antichain saturation} \begin{theorem} For any positive integer $n\geq5$ we have $\text{sat}^{*}(n,\mathcal{A}_{5})=4n-2$. \label{thmfork=5} \end{theorem} \begin{proof} Let $\mathcal{F}$ be an induced 5-antichain saturated family. By Lemma \ref{fullchainlemma} we can cover $\mathcal{F}$ with $4$ full chains $\mathcal{D}_{1},\ldots,\mathcal{D}_{4}$. For each $i \in \{ 1,\ldots,n-1\}$ let $\mathcal{F}_{i}$ be the collection of sets in $\mathcal{F}$ of size $i$, and $x_{i}=|\mathcal{F}_{i}|$. We will now examine the following 4 cases:\\ \par\textbf{Case 1.} There exists $i \in \{ 1,\ldots,n-1\}$ such that $x_{i}=1$.\\ Let $A$ be the unique set in $\mathcal{F}$ of size $i$. Since each of the chains $\mathcal{D}_{1},\ldots \mathcal{D}_{4}$ is a full chain, it follows that all of them must contain $A$. 
Consider the sets of size $i-1$ and $i+1$ in $\mathcal{D}_{1}$. They must be of the form $A\setminus \{x\}$ and $A\cup \{y\}$ respectively, for some $x\in A$ and $y\in \left[ n\right] \setminus A$. Let $A'=A\setminus \{x\}\cup \{y\}$. Since $A'\neq A$ and $|A'|=i$, $A'\notin \mathcal{F}$. On the other hand, by setting $\mathcal{D}'_1=\mathcal{D}_1\setminus\{A\}\cup\{A'\}$, we observe that the chains $\mathcal {D}'_1,\mathcal{D}_2,\mathcal{D}_3,\mathcal{D}_4$ cover $\mathcal{F}\cup \{A'\}$ (note that $A$ is still covered by $\mathcal{D}_{2}$). This implies that $\mathcal{F}\cup \{A'\}$ is $5$-antichain free, contradicting the fact that $\mathcal{F}$ is 5-antichain saturated.\\ \par\textbf{Case 2.} There is no $j$ such that $x_{j}=1$, but there exists $i$ such that $x_{i}=2$.\\Since $\text{sat}^*(n,\mathcal{A}_{5})\geq 3n-1$ we get that $|\mathcal F|\geq 3n-1$, thus there must be some $l \in \{1,\ldots,n-1\}$ for which $x_{l}\geq 3$. Combining this with the fact that there exist $i$ such that $x_{i}=2$ and $x_{m}\neq1$ for all $1\leq m\leq n-1$, we deduce that there exists some index $1\leq j\leq n-1$ such that $x_{j}=2$ and $x_{j+1}\geq 3$, or $x_{j}=2$ and $x_{j-1}\geq 3$. Since a family is antichain-saturated if and only if the family of the complements of its sets is antichain-saturated, we can assume without loss of generality that there exists $j$ such that $x_j=2$ and $x_{j+1}\geq3$. Let $A_{1}$ and $A_{2}$ be the two sets of size $j$. Since the 4 chains $\mathcal D_1,\ldots,\mathcal D_4$ that cover $\mathcal F$ are full, they have to go through $A_1$ and $A_2$ as well as cover the sets of size $j+1$. This implies that at least two chains with different sets of size $j+1$ have the same element of size $j$. Thus we can assume without loss of generality that these chains are $\mathcal D_1$ and $\mathcal D_2$, and $A_1\in\mathcal D_1,\mathcal D_2$. Let also $B_1$ and $B_2$ be the two (distinct) sets of size $j+1$ in these two chains respectively. Let $B_3$ be another set of size $j+1$ and assume without loss of generality that it is part of $\mathcal D_3$. We either have $A_2\in\mathcal D_3$, or $A_1\in\mathcal D_3$ which implies $A_2\in\mathcal D_4$. As $\mathcal D_4$ must contain an element of size $j+1$, we can assume, after relabelling if necessary, that $A_1\subset B_1,B_2$, and $A_2\subset B_3$, and $A_1,B_1\in\mathcal D_1$, and $A_1, B_2\in\mathcal D_2$, and $A_2, B_3\in\mathcal D_3$. Moreover, since $j\neq 0$, there exist sets $C_{1}, C_{2} \subseteq A_{1}$ of size $j-1$ that are part of the chains $\mathcal{D}_{1}$ and $\mathcal{D}_{2}$ respectively. Note that $C_1$ may be equal to $C_2$. Hence we can write $$C_{1}\cup \{c_{1}\}=A_{1}=B_{1}\setminus \{b_{1}\} \text{ and }  C_{2}\cup \{c_{2}\}=A_{1}=B_{2}\setminus \{b_{2}\},$$ where $b_{1}\neq b_{2} \in \left[ n \right] \setminus A_{1}$ and $c_{1}, c_{2} \in A_{1}$. Let $A'=A_{1}\setminus \{c_{1}\}\cup \{b_{1}\}$ and $A''=A_{1}\setminus \{c_{2}\}\cup \{b_{2}\}$. If $A' \notin \mathcal{F}$, then by modifying $\mathcal D_1$ by replacing $A_1$ with $A'$ we obtain a cover of $\mathcal{F}\cup \{A'\}$ with 4 chains, contradicting the fact that $\mathcal{F}$ is 5-antichain saturated. Thus $A'\in \mathcal{F}$, and similarly, $A'' \in \mathcal{F}$ too. Moreover, by construction, $|A'|=|A''|=j$ and $A'\neq A_{1}\neq A''$. Because $\mathcal F$ contains exactly 2 sets of size $j$, we must have that $A'=A_{2}=A''$. However $A'$ contains $b_{1}$, while $A''$ does not, a contradiction.\par The picture below summarises the above analysis. 
\begin{center} \begin{tikzpicture} \draw [red] (-2,1) -- (0,0); \draw [red] (-2,-1.80) -- (0,-0.80); \node at (0,-0.4) {$A_1$}; \draw [blue] (0,0) -- (2,1); \draw [blue] (0,-0.8) -- (2,-1.8); \node at (-2,1.5) {$B_1 = A_1\cup \{b_1\}$}; \node at (2,1.5) {$B_2 = A_1\cup \{b_2\}$}; \node at (-2,-2.3) {$C_1 = A_1\setminus\{c_1\}$}; \node at (2,-2.3) {$C_2 = A_1\setminus\{c_2\}$}; \node at (-2,2.25) {\color{red} $\mathcal{D}_1$}; \node at (2,2.25) {\color{blue} $\mathcal{D}_2$}; \draw (4,0) -- (5.5,1); \node at (4,-0.35) {$A_2$}; \node at (5.5,1.35) {$B_3$}; \draw [dotted] (-4,0) -- (-2,1); \draw [dotted] (-4,-0.8) -- (-2,-1.8); \node at (-4,-0.4) {$A' = A_1\setminus\{c_1\}\cup \{b_1\}$}; \end{tikzpicture} \end{center} \par\textbf{Case 3.} For all $i \in \{1,\ldots, n-1\}$, $x_{i}=3$.\\ We will show that this implies that $\mathcal{F}$ can be covered by $3$ chains, contradicting the 5-saturation property of $\mathcal{F}$.\par We start with the $4$ full chains $\mathcal{D}_{1}, \ldots ,\mathcal{D}_{4}$ that cover $\mathcal F$. By modifying them if necessary, we can choose them in such a way that two of them coincide. Equivalently, we prove that for each $i\in\{0,\ldots,n\}$, two of these chains can be chosen to coincide on sets of size less than or equal to $i$. We proceed by induction on $i$.\par Clearly for $i=0$ all of $\mathcal{D}_{j}$ start with the empty set, so they all coincide on sets of size at most $0$. For $i=1$ we have three different options for the sets of size $1$ and $4$ chains, so two chains must coincide on sets of size at most 1.\par Let now $i>1$ and assume that we can cover $\mathcal F$ by 4 full chains, $\mathcal D^i_1,\mathcal D^i_2,\mathcal D^i_3,\mathcal D^i_4$, two of which coincide on sets of size less than $i$. Without loss of generality, $\mathcal{D}^i_{1}$ and $\mathcal{D}^i_{2}$ coincide on sets of size less than $i$. If they coincide on sets of size $i$, we are done. Thus we now assume that they do not, and let $A_{1}$ be the set of size $i$ in $\mathcal{D}^i_{1}$ and $A_{2}$ the set of size $i$ in $\mathcal{D}^i_{2}$. Let also $A_3$ be the third set of size $i$.\par If $\mathcal{D}^i_{3}$ contains $A_{1}$, then by replacing the sets of size not more than $i$ in the chain $\mathcal{D}^i_{1}$ with the sets of size not more than $i$ in $\mathcal{D}^i_{3}$, we obtain a cover of $\mathcal{F}$ by $4$ chains, two of which coincide on all sets of size less than or equal to $i$, so we are done. Similarly we are done if any of $A_{2}\in \mathcal{D}^i_{3}$, $A_{1}\in \mathcal{D}^i_{4}$ or $A_{2}\in \mathcal{D}^i_{4}$ holds. Therefore we may assume that $A_{3} \in \mathcal{D}^i_{3},\mathcal{D}^i_{4}$.\par Let $B$ be the set of size $i-1$ in chains $\mathcal{D}^i_{1}$ and $\mathcal{D}^i_{2}$. Then $A_{1}$ must be of the form $B\cup \{x\}$ for some $x\in \left[ n\right]\setminus B$. Similarly, $A_{2}=B\cup \{y\}$ for some $y\in \left[ n\right]\setminus B$. We observe that $x\neq y$ as $A_{1}\neq A_{2}$. For any $b \in B$, let $X_{b}=B\cup \{x\} \setminus \{b\}$ and $Y_{b}=B\cup \{y\} \setminus \{b\}$. We observe that the family $\mathcal S=\{X_b, Y_b:b\in B\}$ has size $2|B|=2(i-1)$ since the $X$'s are pairwise distinct, the $Y$'s are pairwise distinct, and $X_b\neq Y_{b'}$ for any $b,b'\in B$ (as one set contains $x$, but the other does not). 
Moreover, all sets in $\mathcal S$ have size $i-1$ and $B\notin\mathcal S$.\par If $i\geq 3$, then $2(i-1)\geq 4 >2$, and since there are exactly $2$ sets of size $i-1$ in $\mathcal{F}$ that are not equal to $B$, at least one of the sets in $\mathcal S$ is not in $\mathcal{F}$. Without loss of generality, assume $X_{b}\notin\mathcal F$ for some $b\in B$. However, by removing all sets of size less than $i$ from $\mathcal{D}^i_{1}$ and adding $X_{b}$ to it, we obtain a $4$-chain cover of $\mathcal{F}\cup \{X_{b}\}$, which contradicts the fact that $\mathcal{F}$ is 5-antichain saturated. \par If $i=2$, then $B=\{b\}$ for some $b\in[n]$, and so $A_{1}=\{b,x\}$, $A_{2}=\{b,y\}$, $X_{b}=\{x\}$ and $Y_{b}=\{y\}$. As in the above case, if $\{x\}\notin \mathcal{F}$ or $\{y\}\notin \mathcal{F}$ we obtain a contradiction. Thus we must have $\{x\},\{y\}\in \mathcal{F}$. Without loss of generality we can assume that $\{x\}\in \mathcal{D}_{3}$ and $\{y\} \in \mathcal{D}_{4}$. As argued previously, we must have $A_{3}\in\mathcal D^2_3,\mathcal D^2_4$, which immediately implies that $A_{3}=\{x,y\}$. Now we modify the chains as follows: we set $\mathcal{D}^3_{1}=\mathcal D^2_1$, $\mathcal{D}^3_{3}=\mathcal D^2_3$, $\mathcal D^3_2=\mathcal D^2_2\setminus\{\{b\}\}\cup\{\{y\}\}$ and $\mathcal D^3_4=\mathcal D^2_4\setminus\{\{y\}\}\cup\{\{x\}\}$. This forms a cover of $\mathcal F$ by 4 full chains such that $\mathcal D^3_{3}$ and $\mathcal D^3_{4}$ coincide on all sets of size not greater than $2$. Thus the induction step is complete.\\ \par\textbf{Case 4.} There exist $j,t \in \{1,\ldots, n-1\}$ such that $x_{j}=3$ and $x_{t}=4$.\\ We know that no $x_{i}$ is equal to $1$ or $2$ for $i\in \{ 1, \ldots, n-1\}$, thus there must exist an index $l$ such that $x_{l}=3$ and $x_{l+1}=4$, or $x_{l}=3$ and $x_{l-1}=4$. As in previous cases, we can assume without loss of generality that there exists $l$ such that $x_l=3$ and $x_{l+1}=4$. Let $A$, $B$ and $C$ be the sets of size $l$ in $\mathcal{F}$. Since $\mathcal{F}$ is covered by the 4 full chains $\mathcal D_1,\ldots,\mathcal D_4$, these 4 chains have to go through the 4 distinct sets of size $l+1$ in $\mathcal{F}$. Moreover, since there are exactly 3 sets of size $l$, we must have that two chains go through the same set of size $l$, while the other two chains go through the remaining sets of size $l$. Putting this together, we can assume without loss of generality that $A\in\mathcal D_1, \mathcal D_2$, $B\in\mathcal D_3$ and $C\in\mathcal D_4$. Furthermore, the sets of size $l+1$ are of the form $A\cup\{a_{1}\}\in\mathcal D_1$, $A\cup\{a_{2}\}\in\mathcal D_2$, $B\cup\{b\}\in\mathcal D_3$ and $C\cup\{c\}\in\mathcal D_4$, where $a_{1},a_{2} \in \left[ n \right] \setminus A$ and $a_1\neq a_2$, $b \in \left[ n \right] \setminus B$ and $c \in \left[ n \right] \setminus C$.\par We now consider the sets of size $l-1$ corresponding to these chains. They must be of the form $A\setminus\{a_{1}'\}\in\mathcal D_1$, $A\setminus\{a_{2}'\}\in\mathcal D_2$, $B\setminus\{b'\}\in\mathcal D_3$ and $C\setminus\{c'\}\in\mathcal D_4$, where $a_{1}',a_{2}'\in A$, $b'\in B$ and $c'\in C$. We note that these sets need not be distinct.\par Let $A'=A\setminus \{ a_{1}'\} \cup \{a_{1}\}$ and $A''=A\setminus \{ a_{2}'\} \cup \{a_{2}\}$. It is clear that $A\neq A'$, $A\neq A''$ and $A'\neq A''$, thus $A, A', A''$ are 3 distinct sets of size $l$.
If $A'\notin\mathcal F$, then by replacing $A$ with $A'$ in the chain $\mathcal{D}_{1}$ we obtain a cover of $\mathcal{F}\cup\{A'\}$ by 4 chains, which contradicts the fact that $\mathcal{F}$ is 5-antichain saturated. Thus we must have $A'\in\mathcal F$, and since it has size $l$, $A'=B$ or $A'=C$. Similarly we get that $A''\in\mathcal F$. Therefore, the 3 sets of size $l$ in our family are $A$, $A'$ and $A''$, and we assume without loss of generality that $B=A'$ and $C=A''$.\par Let $B'=B\setminus\{a_{1}\}\cup \{b\}$. It is clear that $B\neq B'$. If $B' \notin \mathcal{F}$, then by leaving the chains $\mathcal{D}_{2}$ and $\mathcal{D}_{4}$ unchanged, swapping the sets of size less than $l$ between the chains $\mathcal{D}_{1}$ and $\mathcal{D}_{3}$, then replacing $A$ with $B'$ in chain $\mathcal{D}_{3}$, and $A$ with $B$ in chain $\mathcal D_1$, we obtain a cover of $\mathcal{F}\cup \{B'\}$ with 4 full chains. This implies that $\mathcal{F}\cup \{B'\}$ is still 5-antichain free, a contradiction. Hence $B'\in \mathcal{F}$ and thus it has to be equal to either $A$ or $A''$.\par The picture below illustrates the cover of $\mathcal F$ by the modified 4 chains: $\mathcal D_1', \mathcal D_2, \mathcal D'_3, \mathcal D_4$. \begin{center} \begin{tikzpicture} \draw [olive] (0,0) -- (0,1); \draw [olive] (0,-0.8) -- (0,-1.8); \draw [dotted, violet] (0,0) -- (2,1); \draw [dotted, violet] (0.1,-0.8) -- (0.1,-1.8); \draw [blue] (2,1) -- (3,0); \draw [blue] (2,-1.8) -- (3,-0.8); \draw [red] (3,0) -- (4,1); \draw [red] (3,-0.8) -- (4,-1.8); \draw [dotted] (4,1) -- (6,0); \draw [dotted] (4,-1.8) -- (6,-0.8); \draw (6,0) -- (6,1); \draw (6,-0.8) -- (6,-1.8); \draw [dotted, orange] (2,-1.8) -- (-4,-0.8); \draw [dotted, orange] (0,1) -- (-4,0); \node at (3,-0.4) {$A$}; \node at (2,1.5) {$A\cup \{a_1\}$}; \node at (2,2.25) {\color{blue} $\mathcal{D}_1$}; \node at (2,-2.3) {$A\setminus\{a'_1\}$}; \node at (4,1.5) {$A\cup \{a_2\}$}; \node at (4,2.25) {\color{red} $\mathcal{D}_2$}; \node at (4,-2.3) {$A\setminus\{a'_2\}$}; \node at (6,1.5) {$C\cup \{c\}$}; \node at (6,2.25) {$\mathcal{D}_4$}; \node at (6,-2.3) {$C\setminus\{c'\}$}; \node at (6,-0.4) {$C = A\cup \{a_2\}\setminus\{a_2'\}$}; \node at (0,1.5) {$B\cup \{b\}$}; \node at (0,2.25) {\color{olive} $\mathcal{D}_3$}; \node at (0,-2.3) {$B\setminus\{b'\}$}; \node at (0.75,0.75) {\color{violet} $\mathcal{D}_1'$}; \node at (-4,-0.4) {$B' = B\cup \{b\}\setminus\{a_1\}$}; \node at (0,-0.4) {$B = A\cup \{a_1\}\setminus\{a'_1\}$}; \node at (-2,0.8) {\color{orange} $\mathcal{D}_3'$}; \end{tikzpicture} \end{center} \par We now examine the two cases: \begin{enumerate} \item[(a)] If $B'=A$, then $A=(A\setminus \{ a_{1}'\} \cup \{a_{1}\})\setminus\{a_{1}\}\cup \{b\}=A\setminus \{a_{1}'\} \cup \{b\}$, which implies that $a_{1}'=b$. It then follows that $B\cup\{b\}=(A\setminus \{ a_{1}'\} \cup \{a_{1}\})\cup\{a_{1}'\}=A\cup\{a_{1}\}.$ This contradicts the original assumption that these 4 sets of size $l+1$ are distinct. \item[(b)] If $B'=C$, let $C'=C\setminus\{a_{2}\}\cup \{c\}$. By the same reasoning as above $C'\in \mathcal{F}$ and $C'\neq A$, thus we must have $C'=B$.\\ From $B'=C$ we get that $(A\setminus \{ a_{1}'\} \cup \{a_{1}\})\setminus\{a_{1}\}\cup \{b\}=A\setminus \{ a_{2}'\} \cup \{a_{2}\}$, which implies that $ b=a_{2}$ and $a'_1=a'_2$. Similarly, from $C'=B$ we get that $c=a_1$. 
This implies that $B\cup \{b\}=(A\setminus\{a_{1}'\}\cup\{a_{1}\})\cup \{a_{2}\}=(A\setminus\{a_{1}'\}\cup\{a_{2}\})\cup \{a_{1}\}=C \cup \{c\}$, which contradicts the assumption that there are 4 sets of size $l+1$. \end{enumerate} \par We conclude that none of the 4 cases analysed above is possible, and thus $x_{i}=4$ for all $i\in \{1, \ldots ,n-1\}$. We already know that $x_{0}=x_{n}=1$, thus $|\mathcal F|\geq 4n-2$. This implies that $\text{sat}^*(n, \mathcal A_5)\geq 4n-2$ for $n\geq5$. On the other hand, a family of 4 full chains that only intersect at $\emptyset$ and $\left[ n\right]$ is 5-antichain saturated and has size $4n-2$, thus $\text{sat}^*(n, \mathcal A_5)\leq 4n-2$, which finishes the proof.\end{proof} \section{6-antichain saturation} The proof presented in this section is very similar to the proof of Theorem \ref{thmfork=5}. We therefore focus only on the parts that are specific to the 6-antichain and, where necessary, direct the reader to the analogous parts in the previous proof. \begin{theorem} For every positive integer $n\geq 6$ we have $\text{sat}^{*}(n,\mathcal{A}_{6})=5n-5$. \end{theorem} \begin{proof} Let $\mathcal{F}$ be an induced $6$-antichain saturated family of subsets of $[n]$. By Lemma \ref{fullchainlemma}, we can cover $\mathcal{F}$ with $5$ full chains $\mathcal{D}_{1},\ldots,\mathcal{D}_{5}$. Let $x_{0}, \ldots ,x_{n}$ be the numbers of sets of sizes $0,\ldots , n$ respectively in $\mathcal{F}$. In the same way as in the proof of Theorem \ref{thmfork=5}, we deduce that we cannot have $x_{i}\in\{1,2,3\}$ for any $i\in \{1, \ldots, n-1\}$.\par The case when $x_{i}=4$ for all $i\in \{1, \ldots, n-1\}$ is completely analogous to Case 3 in the proof of Theorem \ref{thmfork=5}, except for the base case $i=2$ of the induction. More precisely, we need to show that if the 5 full chains cover $\mathcal F$ and two of them agree on sets of size at most 1, then we can modify them in such a way that they still cover $\mathcal F$ (and are full chains) and two of them coincide on sets of size at most 2. The figures below illustrate the two situations in which we need to modify the chains. The colour-coded figures are enough to show that this is possible. For the left figure we note that it is easy to show, by the same argument as in the previous section, that $\{x\}$ and $\{y\}$ are in $\mathcal F$, thus one of them is in $\mathcal D_3$ or $\mathcal D_4$. Without loss of generality we assume $\{x\}\in\mathcal D_3$.
\begin{figure}[h] \centering \begin{minipage}{0.5\textwidth} \centering \begin{tikzpicture} \draw [red] (-2,2) -- (-2,0) -- (0,-2); \draw [olive] (-1,2) -- (-1.9,0.05) -- (0,-1.9); \draw [magenta] (1,2) -- (0,0) -- (0,-2); \draw [blue] (1,2) -- (2,0) -- (0,-2); \draw [black] (3,2) -- (3,0) -- (0,-2); \node at (-2,2.25) {\footnotesize$\{a,x\}$}; \node at (-1,2.25) {\footnotesize$\{a,y\}$}; \node at (1,2.25) {\footnotesize$\{b,c\}$}; \node at (3,2.25) {\footnotesize$\{d,e\}$}; \node at (-2.3,0) {\footnotesize$\{a\}$}; \node at (0.8,0) {\footnotesize$\{b\}=\{x\}$}; \node at (2.3,0) {\footnotesize$\{c\}$}; \node at (3.3,0) {\footnotesize$\{d\}$}; \node at (0,-2.3) {\footnotesize$\emptyset$}; \node at (-2.3,1.25) {\color{red}\footnotesize $\mathcal{D}_1$}; \node at (-1,1.25) {\color{olive}\footnotesize$\mathcal{D}_2$}; \node at (0.3,1.25) {\color{magenta}\footnotesize$\mathcal{D}_3$}; \node at (1.75,1.25) {\color{blue}\footnotesize$\mathcal{D}_4$}; \node at (3.3,1.25) {\footnotesize$\mathcal{D}_5$}; \draw[dotted, orange] (-0.1,-1.9) -- (-0.1,0) -- (-2,2); \draw[dotted, purple] (0.1,-1.8) -- (1.9,0) -- (0.95,1.95); \node at (-0.4,-0.65) {\color{orange}\footnotesize$\mathcal{D}_1'$}; \node at (0.8,-0.65) {\color{purple}\footnotesize$\mathcal{D}_3'$}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \begin{tikzpicture} \draw [red] (-2,2) -- (-2,0) -- (0,-2); \node at (-2,2.25) {\footnotesize $\{a,x\}$}; \node at (-2.3,1.25) {\color{red}\footnotesize$\mathcal{D}_1$}; \draw [olive] (-1,2) -- (-1.9,0.05) -- (-0.1,-1.8); \node at (-1,2.25) {\footnotesize$\{a,y\}$}; \node at (-1.65,1.25) {\color{olive}\footnotesize$\mathcal{D}_2$}; \draw [magenta] (-1,2) -- (-1,0) -- (0,-2); \node at (-0.7,1.25) {\color{magenta}\footnotesize$\mathcal{D}_3$}; \draw [blue] (1,2) -- (1,0) -- (0,-2); \node at (0.7,1.25) {\color{blue}\footnotesize$\mathcal{D}_4$}; \draw (2,2) -- (2,0) -- (0,-2); \node at (1.7,1.25) {\footnotesize$\mathcal{D}_5$}; \node at (-2.3,0) {\footnotesize $\{a\}$}; \node at (0,-2.3) {\footnotesize $\emptyset$}; \draw [dotted, orange] (-1.05,1.9) -- (-1.1,0) -- (-0.1,-1.9); \node at (-1.35,0.25) {\footnotesize\color{orange}$\mathcal{D}_2'$}; \end{tikzpicture} \end{minipage} \end{figure} \par Finally, suppose that there exists an index $i$ such that $x_{i}=4$ and an index $j$ such that $x_{j}=5$. Since all $x_{k}$ are either 4 or 5 for $0<k<n$, there exists some $l\in \{1, \ldots, n-1\}$ such that $x_{l}=4$ and $x_{l+1}=5$, or $x_l=4$ and $x_{l-1}=5$. As before, we can assume without loss of generality that there exists $l$ such that $x_l=4$ and $x_{l+1}=5$. Let $A$, $B$, $C$ and $D$ be the sets of size $l$ in $\mathcal{F}$. Since there are only 4 sets of size $l$, each of the 5 full chains must pass through one of them, and together they must cover all 4, so it follows that exactly two chains have the same element of size $l$. On the other hand, there are 5 elements of size $l+1$, thus each of them belongs to exactly one of the 5 full chains. Putting this together we can assume without loss of generality that $A\in\mathcal D_1,\mathcal D_2$, and $B$, $C$ and $D$ are part of the chains $\mathcal D_3$, $\mathcal D_4$ and $\mathcal D_5$ respectively. Let $A\cup\{a_{1}\}$, $A\cup\{a_{2}\}$, $B\cup\{b\}$, $C\cup\{c\}$ and $D\cup\{d\}$ be the 5 elements of size $l+1$ in the chains $\mathcal{D}_{1}, \ldots ,\mathcal{D}_{5}$ respectively, where $a_1\neq a_2$.\par We define the sets $A'$ and $A''$ as in Case 4 of Theorem \ref{thmfork=5} and deduce by exactly the same argument that they both belong to $\mathcal{F}$.
Thus, we may assume without loss of generality that $B=A'$ and $C=A''$. We also define $B'$ and $C'$ as in the previous section and deduce in the same way that both $B'$ and $C'$ belong to $\mathcal{F}$. The sets of size $l$ are $A, A', A''$ and $D$, two of which have to be $B'$ and $C'$. By the analogue of the subcases (a) and (b) of Case 4 in the previous section, we have that $B'\neq A$, $B'\neq B=A'$, $C'\neq A$, $C'\neq C$, and $B'=C$ and $C'=B$ cannot both hold. Thus we deduce that either $B'=D$ or $C'=D$. Without loss of generality assume $C'=D$. Moreover, we either have $B'=C$ or $B'=D=C'$. It is an easy exercise to see that both cases imply that $a_1'=a_2'$, and either $b=a_2$ or $b=c$.\par Let $W=A\setminus \{a_{1}'\} \in \mathcal{F}$. We observe that the 4 sets of size $l$ are $W\cup\{w_1\}$, $W\cup\{w_2\}$, $W\cup\{w_3\}$ and $W\cup\{w_4\}$, where $w_{1}, \ldots, w_{4}$ are $a_{1}, a_{2}, a_{1}'$ and $c$ in some order. We note that each of these sets has at least two supersets of size $l+1$ in $\mathcal{F}$ -- for example $W\cup\{c\}=C'$ is comparable to both $C\cup\{c\}$ and $D\cup\{d\}$. This immediately tells us that for every $i$ we can easily construct full chains $\mathcal{C}_{1},\ldots,\mathcal{C}_{5}$ that cover $\mathcal F$ such that two of these chains go through the set $W \cup \{w_{i}\}$. On the other hand, we have that $a_1'=a_2'$, which tells us that the two chains that coincide on level $l$ must also coincide on level $l-1$ and, more importantly, their common set of size $l-1$ has to be a subset of all 4 sets of size $l$. Combining everything, we see that there must be only one set of size $l-1$ in our family, and thus $l=1$. In the analogous case where $x_l=4$ and $x_{l-1}=5$, we get $l=n-1$. To summarise, $x_{i}=5$ for all $i\in \{2, \ldots ,n-2\}$, $x_1\geq 4$, $x_{n-1}\geq 4$ and $x_0=x_n=1$. Therefore we have that $|\mathcal{F}| \geq 5n-5$.\par We are left to show that this bound is achieved for every $n\geq6$. Let $\mathcal F$ be the following family:$$\mathcal{F}=\{ \emptyset, \{1\}, \{2\}, \{3\}, \{4\}, \left[n\right] \setminus \{1\},\left[n\right] \setminus \{2\}, \left[n\right] \setminus \{3\}, \left[n\right] \setminus \{4\}, $$ $$ \{1,2\}, \{1,2,5\}, \{1,2,5,6\}, \ldots , \left[n\right] \setminus \{3,4\},$$ $$ \{1,3\}, \{1,3,5\}, \{1,3,5,6\}, \ldots , \left[n\right] \setminus \{2,4\}, $$ $$\{2,3\}, \{2,3,5\}, \{2,3,5,6\}, \ldots , \left[n\right] \setminus \{1,4\},$$ $$\{4,3\}, \{4,3,5\}, \{4,3,5,6\}, \ldots , \left[n\right] \setminus \{1,2\},$$ $$\{4,2\}, \{4,2,5\}, \{4,2,5,6\}, \ldots , \left[n\right] \setminus \{1,3\}\}.$$ \par This family is pictured below.
\begin{center} \begin{tikzpicture}[scale=0.9] \node at (0,-0.4) {$\emptyset$}; \draw (-1,1) -- (0,0) -- (-3,1); \draw (1,1) -- (0,0) -- (3,1); \node at (-3,1.4) {$\{1\}$}; \node at (-1,1.4) {$\{2\}$}; \node at (1,1.4) {$\{3\}$}; \node at (3,1.4) {$\{4\}$}; \draw (-4,2.8) -- (-3,1.8) -- (-2,2.8) -- (-1,1.8) -- (0,2.8) -- (1,1.8) -- (2,2.8) -- (3,1.8) -- (4,2.8); \draw (-4,2.8) -- (1,1.8); \draw (-1,1.8) -- (4,2.8); \node at (-4,3.2) {$\{1,3\}$}; \node at (-2,3.2) {$\{1,2\}$}; \node at (0,3.2) {$\{2,3\}$}; \node at (2,3.2) {$\{3,4\}$}; \node at (4,3.2) {$\{2,4\}$}; \draw (-4,3.6) -- (-4,4.4); \draw (-2,3.6) -- (-2,4.4); \draw (0,3.6) -- (0,4.4); \draw (2,3.6) -- (2,4.4); \draw (4,3.6) -- (4,4.4); \node at (-4,4.8) {$\{1,3,5\}$}; \node at (-2,4.8) {$\{1,2,5\}$}; \node at (0,4.8) {$\{2,3,5\}$}; \node at (2,4.8) {$\{3,4,5\}$}; \node at (4,4.8) {$\{2,4,5\}$}; \draw (-4,5.2) -- (-4,6); \draw (-2,5.2) -- (-2,6); \draw (0,5.2) -- (0,6); \draw (2,5.2) -- (2,6); \draw (4,5.2) -- (4,6); \node at (-4,6.4) {$\{1,3,5,6\}$}; \node at (-2,6.4) {$\{1,2,5,6\}$}; \node at (0,6.4) {$\{2,3,5,6\}$}; \node at (2,6.4) {$\{3,4,5,6\}$}; \node at (4,6.4) {$\{2,4,5,6\}$}; \draw (-4,6.8) -- (-4,7.2); \draw [dotted] (-4,7.2) -- (-4,7.6); \draw (-4,7.6) -- (-4,8); \draw (-2,6.8) -- (-2,7.2); \draw [dotted] (-2,7.2) -- (-2,7.6); \draw (-2,7.6) -- (-2,8); \draw (0,6.8) -- (0,7.2); \draw [dotted] (0,7.2) -- (0,7.6); \draw (0,7.6) -- (0,8); \draw (2,6.8) -- (2,7.2); \draw [dotted] (2,7.2) -- (2,7.6); \draw (2,7.6) -- (2,8); \draw (4,6.8) -- (4,7.2); \draw [dotted] (4,7.2) -- (4,7.6); \draw (4,7.6) -- (4,8); \node at (-4,8.4) {$[n]\setminus\{2,4\}$}; \node at (-2,8.4) {$[n]\setminus\{3,4\}$}; \node at (0,8.4) {$[n]\setminus\{1,4\}$}; \node at (2,8.4) {$[n]\setminus\{1,2\}$}; \node at (4,8.4) {$[n]\setminus\{1,3\}$}; \draw (3,9.8) -- (-4,8.8) -- (-1,9.8); \draw (1,9.8) -- (-2,8.8) -- (3,9.8); \draw (-3,9.8) -- (0,8.8) -- (3,9.8); \draw (-3,9.8) -- (2,8.8) -- (-1,9.8); \draw (-3,9.8) -- (4,8.8) -- (1,9.8); \node at (-3,10.2) {$[n]\setminus\{1\}$}; \node at (-1,10.2) {$[n]\setminus\{2\}$}; \node at (1,10.2) {$[n]\setminus\{3\}$}; \node at (3,10.2) {$[n]\setminus\{4\}$}; \draw (-3,10.6) -- (0,11.6) -- (-1,10.6); \draw (3,10.6) -- (0,11.6) -- (1,10.6); \node at (0,12) {$[n]$}; \end{tikzpicture} \end{center} \par It is easy to see that $\mathcal{F}$ is $6$-antichain free as it is covered by 5 full chains, and that it has size $1+4+1+4+5(n-3)=5n-5$. We now prove that whenever we add a set to $\mathcal F$ we create a 6-antichain.\par Let $X\notin\mathcal F$. If $|X|\in\{2, \ldots, n-2\}$, then $X$ will form a 6-antichain with the 5 sets in $\mathcal F$ that have the same size as $X$. If $X=\{k\}$ for $k \notin \{1,2,3,4\}$, then $X$ will form a 6-antichain with the sets of size $2$ in $\mathcal{F}$. Similarly, if $X$ is the complement of a singleton, it will form a 6-antichain with the sets of size $n-2$ in $\mathcal F$.\par This proves that $\mathcal F$ is 6-antichain saturated. Thus $\text{sat}^*(n, \mathcal A_6)=5n-5$ for all $n\geq6$. \end{proof} \section{Further work} Although the saturation number for the $k$-antichain is known to be roughly between $(k-1)n$ and $((k-1)/\log_2 {(k-1)})n$, the exact coefficient of $n$ is not known for general $k$. We believe that the following conjecture is true, strengthening the conjecture in $\cite{Ferrara2017TheSN}$ that $\text{sat}^*(n, \mathcal A_{k})=(k-1)n(1+o(1))$. \begin{conjecture} For each fixed positive integers $k$ we have $\text{sat}^*(n,\mathcal{A}_{k})=n(k-1)-O(1)$. 
\end{conjecture} \par The results in this paper prove the conjecture for $k=5$ and $k=6$, but in addition, the proofs hint at a more general behaviour of antichain-saturated families. In both cases we have seen that almost all levels of the antichain-saturated family have to have the maximal size possible, namely $k-1$, and based on this we make the following conjecture. \begin{conjecture} For each fixed $k>1$ there exists $l$ with the following property. For $n$ sufficiently large, any $k$-antichain saturated family $\mathcal F$ of subsets of $[n]$ has exactly $k-1$ sets of size $i$ for all $l\leq i\leq n-l$. \end{conjecture} \par With the techniques in this paper, the main obstacle in proving the above conjecture for $k>6$ is the increased number of choices available to the chains we analyse when traversing between 2 or 3 consecutive levels of the family. A first step in proving this conjecture would be to answer the following simple yet elusive question. \begin{conjecture} Let $\mathcal{F}$ be a $k$-antichain saturated family and let $x_{i}$ be the number of sets of size $i$ in $\mathcal{F}$ for $0\leq i\leq n$. Then there exists an $i$ such that $x_i=k-1$. \end{conjecture} \textbf{Note added in proof.} Recently Bastide, Groenland, Jacob and Johnston showed in the paper `Exact antichain saturation numbers via a generalisation of a result of Lehman-Ron' (arXiv:2207.07391) that all our conjectures are true, thus solving the general saturation problem for the antichain. \bibliographystyle{amsplain} \bibliography{document} \Addresses \end{document}
2205.07351v2
http://arxiv.org/abs/2205.07351v2
Non-invertible planar self-affine sets
\documentclass[11pt,twoside,reqno]{amsart} \usepackage{microtype} \usepackage{cite} \usepackage[OT1]{fontenc} \usepackage{type1cm} \usepackage{amssymb} \usepackage{comment} \usepackage{xcolor} \usepackage{geometry} \geometry{a4paper,centering} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=black, anchorcolor=black, citecolor=black, filecolor=black, menucolor=red, runcolor=black, urlcolor=black, } \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{maintheorem}{Theorem} \renewcommand{\themaintheorem}{\Alph{maintheorem}} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem*{claim}{Claim} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{remarks}[theorem]{Remarks} \newtheorem{example}[theorem]{Example} \newtheorem*{ack}{Acknowledgement} \newtheorem*{acknowledgements}{Acknowledgement} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{question}{Question} \newcommand{\note}[1]{\fcolorbox{red}{white}{\color{red}#1}} \newcommand{\BB}{\mathcal{B}} \newcommand{\HH}{\mathcal{H}} \newcommand{\LL}{\mathcal{L}} \newcommand{\PP}{\mathcal{P}} \newcommand{\MM}{\mathcal{M}} \newcommand{\EE}{\mathcal{E}} \newcommand{\CC}{\mathcal{C}} \newcommand{\TT}{\mathcal{T}} \newcommand{\UU}{\mathcal{U}} \newcommand{\OO}{\mathcal{O}} \newcommand{\R}{\mathbb{R}} \newcommand{\RP}{\mathbb{RP}^1} \newcommand{\C}{\mathbb{C}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\hhh}{\mathtt{h}} \newcommand{\iii}{\mathtt{i}} \newcommand{\jjj}{\mathtt{j}} \newcommand{\kkk}{\mathtt{k}} \renewcommand{\lll}{\mathtt{l}} \newcommand{\eps}{\varepsilon} \newcommand{\fii}{\varphi} \newcommand{\roo}{\varrho} \newcommand{\ualpha}{\overline{\alpha}} \newcommand{\lalpha}{\underline{\alpha}} \newcommand{\Wedge}{\textstyle\bigwedge} \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \newcommand{\A}{\mathsf{A}} \newcommand{\B}{\mathsf{B}} \newcommand{\F}{\mathsf{F}} \newcommand{\pv}{\underline{p}} \newcommand{\dd}{\,\mathrm{d}} \renewcommand{\ge}{\geqslant} \renewcommand{\le}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \DeclareMathOperator{\dimloc}{dim_{loc}} \DeclareMathOperator{\udimloc}{\overline{dim}_{loc}} \DeclareMathOperator{\ldimloc}{\underline{dim}_{loc}} \DeclareMathOperator{\dimm}{dim_M} \DeclareMathOperator{\udimm}{\overline{dim}_M} \DeclareMathOperator{\ldimm}{\underline{dim}_M} \DeclareMathOperator{\dimh}{dim_H} \DeclareMathOperator{\udimh}{\overline{dim}_H} \DeclareMathOperator{\ldimh}{\underline{dim}_H} \DeclareMathOperator{\dimp}{dim_p} \DeclareMathOperator{\udimp}{\overline{dim}_p} \DeclareMathOperator{\ldimp}{\underline{dim}_p} \DeclareMathOperator{\dims}{dim_S} \DeclareMathOperator{\diml}{dim_L} \DeclareMathOperator{\dimaff}{dim_{aff}} \DeclareMathOperator{\dima}{dim_A} \DeclareMathOperator{\cdima}{\mathcal{C}dim_A} \DeclareMathOperator{\cdimh}{\mathcal{C}dim_H} \DeclareMathOperator{\dimf}{dim_F} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Tan}{Tan} \DeclareMathOperator{\linspan}{span} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\nproj}{\overline{proj}} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\inter}{int} \DeclareMathOperator{\por}{por} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\spt}{spt}
\DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\length}{length} \DeclareMathOperator{\im}{im} \renewcommand{\atop}[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \begin{document} \title{Non-invertible planar self-affine sets} \author{Antti K\"aenm\"aki} \address[Antti K\"aenm\"aki] {Research Unit of Mathematical Sciences \\ P.O.\ Box 8000 \\ FI-90014 University of Oulu \\ Finland} \email{[email protected]} \author{Petteri Nissinen} \address[Petteri Nissinen] {Department of Physics and Mathematics \\ University of Eastern Finland \\ P.O.\ Box 111 \\ FI-80101 Joensuu \\ Finland} \email{[email protected]} \subjclass[2000]{Primary 28A80; Secondary 37C45, 37D35.} \keywords{Self-affine set, Hausdorff dimension} \date{\today} \begin{abstract} We compare the dimension of a non-invertible self-affine set to the dimension of the respective invertible self-affine set. In particular, for generic planar self-affine sets, we show that the dimensions coincide when they are large and differ when they are small. Our study relies on thermodynamic formalism where, for dominated and irreducible matrices, we completely characterize the behavior of the pressures. \end{abstract} \maketitle \section{Introduction} Let $J$ be a finite set and $(A_i+v_i)_{i \in J}$ a tuple of contractive affine self-maps on $\R^2$, where we have written $A+v$ to denote the affine map $x \mapsto Ax+v$ defined on $\R^2$ for all matrices $A \in M_2(\R)$ and translation vectors $v \in \R^2$. If the affine maps $A_i+v_i$ do not have a common fixed point, then we call such a tuple an \emph{affine iterated function system}. We also write $f_i = A_i+v_i$ for all $i \in J$ and note that the associated tuple of matrices $(A_i)_{i \in J}$ is an element of $M_2(\R)^J$. A classical result of Hutchinson \cite{Hutchinson1981} shows that for each affine iterated function system $(f_i)_{i \in J}$ there exists a unique non-empty compact set $X' \subset \R^2$, called the \emph{self-affine set}, such that \begin{equation} \label{eq:self-affine-set-def} X' = \bigcup_{i \in J} f_i(X'). \end{equation} In this article, if $I = \{i \in J : A_i \text{ is invertible}\}$ is non-empty, then the self-affine set $X \subset X'$ associated to $(f_i)_{i \in I}$ is called \emph{invertible}, and if $J \setminus I$ is non-empty, then the self-affine set $X'$ associated to $(f_i)_{i \in J}$ is called \emph{non-invertible}. B\'ar\'any, Hochman, and Rapaport \cite{BHR} and Hochman and Rapaport \cite{HochmanRapaport2021} have recently shown that the Hausdorff dimension reaches a natural upper bound, the affinity dimension, on a large deterministic class of invertible self-affine sets. In our main result, Theorem \ref{thm:main} below, part (1) shows that generically under a separation condition the dimensions of $X'$ and $X$ agree when they are at least $1$. Furthermore, if the dimension of $X$ is strictly less than $1$, then part (2) demonstrates that generically the dimensions of $X'$ and $X$ are distinct. Regarding part (3), let us first recall that Marstrand's projection theorem \cite{Marstrand1954} gives $\dimh(\proj_{V}(X')) = \min\{1,\dimh(X')\}$ for Lebesgue almost all $V \in \RP$. Although the equality holds for generic $V$, it is often difficult to say whether a particular $V$ satisfies it. The purpose of part (3) is to verify that the orthogonal complement of the kernel of one of the rank one matrices is such a direction. The precise definitions of the assumptions used in the theorem will be given in coming sections. 
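To fix ideas, here is a simple instance of this setting; the particular matrices and translations are chosen only for illustration. Take $J = \{1,2,3\}$, let
\begin{equation*}
  A_1 = \begin{pmatrix} \tfrac13 & 0 \\ 0 & \tfrac14 \end{pmatrix}, \qquad
  A_2 = \begin{pmatrix} \tfrac14 & 0 \\ 0 & \tfrac13 \end{pmatrix}, \qquad
  A_3 = \begin{pmatrix} \tfrac13 & 0 \\ 0 & 0 \end{pmatrix},
\end{equation*}
and choose translations $v_1 = (0,0)$, $v_2 = (\tfrac34,\tfrac23)$ and $v_3 = (\tfrac13,0)$. Then $I = \{1,2\}$, the invertible self-affine set $X$ is the one associated to $(A_1+v_1,A_2+v_2)$, and the non-invertible set $X'$ contains, in addition, the image $f_3(X')$, which lies on the horizontal line $\{(x,y) : y=0\}$ because $A_3$ has rank one with image spanned by $(1,0)$ and the second coordinate of $v_3$ vanishes. In particular, $X \subset X'$; Theorem \ref{thm:main} below compares the dimensions of such pairs.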
\begin{theorem} \label{thm:main} Suppose that $X'$ and $X$ are the planar self-affine sets associated to affine iterated function systems $(A_i+v_i)_{i \in J}$ and $(A_i+v_i)_{i \in I}$ such that $A_i \in GL_2(\R)$ for all $i \in I \subset J$, respectively. \begin{enumerate} \item If $(A_i)_{i \in I}$ is strictly affine and strongly irreducible such that $\dimaff((A_i)_{i \in I}) \ge 1$ and $X$ satisfies the strong open set condition, then \begin{align*} \udimm(X') &= \dimh(X), \\ \dimh(\proj_V(X')) &= 1 \end{align*} for all $V \in \RP$. \item If $(A_i)_{i \in J}$ is dominated or irreducible such that $\max_{i \in J} \|A_i\| < \frac12$, contains a rank one matrix, and $\dimaff((A_i)_{i \in I}) < 1$, then \begin{equation*} \dimh(X_{\mathsf{v}}') > \udimm(X_{\mathsf{v}}) \end{equation*} for $\LL^{2\# J}$-almost all translation vectors $\mathsf{v} = (v_i)_{i \in J} \in (\R^2)^{\# J}$. \item If $(A_i)_{i \in J}$ contains a rank one matrix, $(A_i)_{i \in I}$ is strictly affine and strongly irreducible such that $\dimaff((A_i)_{i \in I}) < 1$, and $X$ satisfies the strong open set condition, then there exists a rank one matrix $A$ in $\A$ such that \begin{equation*} \dimh(X') = \dimh(\proj_{\ker(A)^\bot}(X')) \le 1. \end{equation*} \end{enumerate} \end{theorem} We remark that B\'ar\'any and K\"ortv\'elyesi \cite{BaranyKort2024} have recently continued the above study. They have demonstrated that if the affinity dimension is strictly less than one, then there exist two large parameter sets for the defining matrices so that in the first one, the Hausdorff dimension of the non-invertible self-affine set equals the affinity dimension, and in the second one, the Hausdorff dimension is strictly smaller than the affinity dimension. This observation suggests that determining the Hausdorff dimension in this situation requires a better understanding of the geometry. The remainder of the paper is organized as follows. In Section \ref{sec:matrices}, we compare the behavior of the pressures and study their continuity. In particular, for dominated and irreducible matrices, we completely characterize the continuity of the pressure in the non-invertible case. In Section \ref{sec:dim-results}, we uncover how the study of non-invertible self-affine sets is connected to the theory of sub-self-affine and inhomogeneous self-affine sets, and prove the main result. \section{Products of matrices} \label{sec:matrices} \subsection{Rank one matrices} We denote the collection of all $2 \times 2$ matrices with real entries by $M_2(\R)$, the general linear group of degree $2$ over $\R$ by $GL_2(\R) \subset M_2(\R)$, and the orthogonal group in dimension $2$ over $\R$ by $O_2(\R) \subset GL_2(\R)$. A matrix $A \in GL_2(\R)$ is called \emph{proximal} if it has two real eigenvalues with different absolute values. If $A \in M_2(\R)$, then the \emph{singular values} of $A$ are defined to be the non-negative square roots of the eigenvalues of the positive-semidefinite matrix ${A^\top}A$ and are denoted by $\alpha_1(A)$ and $\alpha_2(A)$ in non-increasing order. Recall that the rank of $A$ is the number of non-zero singular values of $A$. The identities $\alpha_1(A)=\|A\|$ and $\alpha_1(A)\alpha_2(A) = |\det(A)|$ for all $A \in M_2(\R)$ are standard, as is the identity $\alpha_2(A)=\|A^{-1}\|^{-1}$ in the case where $A$ is invertible.
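For instance, in the simplest illustrative case of a diagonal matrix $A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$ with $|a| \ge |b|$, we have ${A^\top}A = \begin{pmatrix} a^2 & 0 \\ 0 & b^2 \end{pmatrix}$, and therefore
\begin{equation*}
  \alpha_1(A) = |a| = \|A\|, \qquad \alpha_2(A) = |b|, \qquad \alpha_1(A)\alpha_2(A) = |ab| = |\det(A)|,
\end{equation*}
and, if $b \neq 0$, also $\alpha_2(A) = |b| = \|A^{-1}\|^{-1}$; the general case follows from the singular value decomposition.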
For each $A \in M_2(\R)$ and $s \geq 0$ we define the \emph{singular value function} by setting \begin{equation*} \varphi^s(A)= \begin{cases} \alpha_1(A)^s, &\text{if } 0 \le s \le 1, \\ \alpha_1(A)\alpha_2(A)^{s-1}, &\text{if } 1 < s \le 2, \\ |\det(A)|^{s/2}, &\text{if } 2 < s < \infty, \end{cases} \end{equation*} The quantity $\varphi^s(A)$ represents a measurement of the $s$-dimensional volume of the image of the Euclidean unit ball under $A$. Since $\alpha_1(A)\alpha_2(A)^{s-1} = \alpha_1(A)^{2-s}|\det(A)|^{s-1}$ for all $1 < s \le 2$, the inequality $\varphi^s(AB) \leq \varphi^s(A)\varphi^s(B)$ is valid for all $s \ge 0$. In other words, the singular value function is sub-multiplicative. Note also that if $A$ has rank zero, that is, $A = 0$, then $\varphi^s(A)=0$ for all $s > 0$. Let us next recall that rank one matrices are projections. Let $\RP$ be the real projective line, that is, the set of all lines through the origin in $\R^2$. If $V,W \in \RP$, then the \emph{projection} $\proj_V^W \colon \R^2 \to V$ is the linear map such that $\proj_V^W|_V=\mathrm{Id}|_V$ and $\ker(\proj_V^W)=W$. Furthermore, the \emph{orthogonal projection} $\proj_V^{V^\bot}$ onto the subspace $V$ is denoted by $\proj_V$. The following lemma is well known, but as the proof is short, we provide the reader with full details. \begin{lemma} \label{thm:rank-one1} A matrix $A \in M_2(\R)$ has rank one if and only if there exist $v,w \in \R^2 \setminus \{(0,0)\}$ such that $A = vw^\top$. In this case, \begin{equation*} A = \begin{cases} \langle v,w \rangle\proj_{\im(A)}^{\ker(A)}, &\text{if $A$ is not nilpotent}, \\ |v||w|R\proj_{\ker(A)^\perp}, &\text{if $A$ is nilpotent}, \end{cases} \end{equation*} where $R \in O_2(\R)$ is a rotation by an angle $\pi/2$. In particular, $A(X)$ is bi-Lipschitz equivalent to $\proj_{\ker(A)^\bot}(X)$ for all $X \subset \R^2$. \end{lemma} \begin{proof} Let us first prove the characterization of rank one matrices. If $A = vw^\top$ for some $v,w \in \R^2 \setminus \{(0,0)\}$, then $Ax = vw^\top x = \langle w,x \rangle v$ for all $x \in \R^2$. Therefore, $A$ maps every $x$ to a scalar multiple of $v$, $\rank(A)=1$, and $\im(A) = \linspan(v)$. If $x \in \linspan(w)^\bot$, then $Ax = vw^\top x = \langle w,x \rangle v = 0$ and $\ker(A)=\linspan(w)^\bot$. Conversely, if $\rank(A)=1$, then there is $v \in \R^2 \setminus \{(0,0)\}$ such that $Ax$ is a scalar multiple of $v$ for all $x \in \R^2$. In particular, this is true when $x = (1,0)$ and $x = (0,1)$. That is, there are $w_1,w_2 \in \R$, not both equal to zero, such that $A(1,0)=w_1v$ and $A(0,1)=w_2v$. In other words, \begin{equation*} A = \begin{pmatrix} w_1v_1 & w_2v_1 \\ w_1v_2 & w_2v_2 \end{pmatrix} = vw^\top, \end{equation*} where $w = (w_1,w_2) \in \R^2 \setminus \{(0,0)\}$. Let us then show that a rank one matrix $A$ is a projection. If $A$ is not nilpotent, then $\linspan(v) = \im(A) \ne \ker(A) = \linspan(w)^\bot$. Since $Ax = vw^\top x = \langle x,w \rangle v$ and it is easy to see that \begin{equation*} \proj_{\im(A)}^{\ker(A)}(x) = \frac{\langle x,w \rangle}{\langle v,w \rangle} v = \frac{1}{\langle v,w \rangle} Ax \end{equation*} for all $x \in \R^2$, we have shown the first case. If $A$ is nilpotent, then $\linspan(v) = \im(A) = \ker(A) = \linspan(w)^\bot$.
Since $Rw/|w| = v/|v|$, where $R \in O_2(\R)$ is a rotation by an angle $\pi/2$, we have \begin{equation*} \proj_{\ker(A)^\bot}(x) = \proj_{\linspan(w)}(x) = \frac{\langle x,w \rangle}{|w|^2} w = \frac{\langle x,w \rangle}{|v||w|} R^{-1}v \end{equation*} and hence, \begin{equation*} R\proj_{\ker(A)^\bot}(x) = \frac{\langle x,w \rangle}{|v||w|} v = \frac{1}{|v||w|} Ax \end{equation*} as claimed. Since the last claim follows immediately from the fact that a rank one matrix is a projection, we have finished the proof. \end{proof} \subsection{Pressure} Let $J$ be a finite set and $\A = (A_i)_{i \in J} \in M_2(\R)^J$ be a tuple of matrices. We say that $\A$ is \emph{irreducible} if there does not exist $V \in \RP$ such that $A_iV \subset V$ for all $i \in J$; otherwise $\A$ is \emph{reducible}. Note that the irreducibility is equivalent to the property that the matrices in $\A$ do not have a common eigenvector. Therefore, $\A$ is reducible if and only if the matrices in $\A$ can simultaneously be presented (in some coordinate system) as upper triangular matrices. The tuple $\A$ is \emph{strongly irreducible} if there does not exist a finite set $\mathcal{V} \subset \RP$ such that $A_i\mathcal{V}=\mathcal{V}$ for all $i\in J$. We call a proper subset $\CC\subset\RP$ a \emph{multicone} if it is a finite union of closed non-trivial projective intervals. We say that $\A$ is \emph{dominated} if each matrix $A_i$ is non-zero and there exists a multicone $\CC\subset\RP$ such that $A_i\CC\subset\CC^o$ for all $i \in J$, where $\CC^o$ is the interior of $\CC$. If a multicone $\CC\subset\RP$ satisfies such a condition, then we say that $\CC$ is a \emph{strongly invariant multicone} for $\A$. For example, the first quadrant is strongly invariant for any tuple of positive matrices. Note that a dominated tuple is not necessarily irreducible and vice versa. If $\A \in GL_2(\R)^J$ is dominated and irreducible, then, by \cite[Lemma 2.10]{BaranyKaenmakiYu2021}, $\A$ is strongly irreducible. We let $J^*$ denote the set of all finite words $\{ \varnothing \} \cup \bigcup_{n \in \N} J^n$, where $\varnothing$ satisfies $\varnothing\iii = \iii\varnothing = \iii$ for all $\iii \in J^*$. For notational convenience, we set $J^0 = \{ \varnothing \}$. The set $J^\N$ is the collection of all infinite words. We define the \emph{left shift} $\sigma \colon J^\N \to J^\N$ by setting $\sigma\iii = i_2i_3\cdots$ for all $\iii = i_1i_2\cdots \in J^\N$. The concatenation of two words $\iii \in J^*$ and $\jjj \in J^* \cup J^\N$ is denoted by $\iii\jjj \in J^* \cup J^\N$ and the length of $\iii \in J^* \cup J^\N$ is denoted by $|\iii|$. If $\jjj \in J^* \cup J^\N$ and $1 \le n < |\jjj|$, then we define $\jjj|_n$ to be the unique word $\iii \in J^n$ for which $\iii\kkk = \jjj$ for some $\kkk \in J^* \cup J^\N$. Write $\iii|_0 = \varnothing$. If $\iii \in J^* \setminus \{\varnothing\}$, then $\iii^- = \iii|_{|\iii|-1}$ is the word obtained from $\iii$ by deleting its last element. Furthermore, if $\iii \in J^n$ for some $n \in \N$, then we set $[\iii] = \{\jjj \in J^\N : \jjj|_n=\iii\}$. The set $[\iii]$ is called a \emph{cylinder set}. We write $A_\iii = A_{i_1} \cdots A_{i_n}$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$. We say that $\mathsf{A} \in GL_2(\R)^J$ is \emph{strictly affine} if there is $\iii \in I^*$ such that $A_\iii$ is proximal. Recall that $A \in GL_2(\R)$ is \emph{proximal} if it has two real eigenvalues with different absolute values.
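For example, the matrices
\begin{equation*}
  A = \begin{pmatrix} \tfrac12 & 0 \\ 0 & \tfrac14 \end{pmatrix}
  \qquad\text{and}\qquad
  B = \tfrac12\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  \quad\text{with } \theta \in (0,\pi),
\end{equation*}
which serve only as an illustration, have eigenvalues $\tfrac12,\tfrac14$ and $\tfrac12 e^{\pm i\theta}$, respectively; hence $A$ is proximal while $B$ is not.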
By \cite[Corollary 2.4]{BaranyKaenmakiMorris2018}, a dominated tuple in $GL_2(\R)^J$ is strictly affine. If $\Gamma \subset J^\N$ is a non-empty compact set such that $\sigma(\Gamma) \subset \Gamma$, then we define $\Gamma_n = \{\iii|_n \in J^n : \iii \in \Gamma\}$ and $\Gamma_* = \bigcup_{n \in \N} \Gamma_n$. We keep denoting $(I^\N)_n$ and $(I^\N)_*$ by $I^n$ and $I^*$, respectively, for all $I \subset J$ and $n \in \N$. Given a tuple $\A = (A_i)_{i \in J} \in M_2(\R)^J$ of matrices, we define for each such $\Gamma \subset J^\N$ and $s \ge 0$ the \emph{pressure} by setting \begin{equation*} P(\Gamma,\A,s) = \lim_{n \to \infty} \frac{1}{n} \log \sum_{\iii \in \Gamma_n} \varphi^s(A_\iii) \in [-\infty,\infty). \end{equation*} Since the singular value function is sub-multiplicative, the sequence $(\log \sum_{\iii \in \Gamma_n} \varphi^s(A_\iii))_{n \in \N}$ is sub-additive and hence, the limit above exists or is $-\infty$ by Fekete's lemma. Since $\varphi^s(A_\iii) \le \varphi^t(A_\iii) \alpha_1(A_\iii)^{s-t} \le \varphi^t(A_\iii) \max_{k \in J}\|A_k\|^{(s-t)n}$ for all $\iii \in J^n$, we see that $P(\Gamma,\A,s) \le P(\Gamma,\A,t) + (s-t) \log\max_{k \in J}\|A_k\|$ for all $s > t \ge 0$. Since $\A$ consists only of strictly contractive matrices, we have $\max_{k \in J}\|A_k\|<1$ and hence, the pressure $P(\Gamma,\A,s)$ is strictly decreasing as a function of $s$ whenever it is finite. Notice also that $P(\Gamma,\A,0) = \lim_{n \to \infty} \frac{1}{n} \log \#\Gamma_n \ge 0$ and $\lim_{s \to \infty} P(\Gamma,\A,s) = -\infty$. In this case, we define the \emph{affinity dimension} by setting \begin{equation*} \dimaff(\Gamma,\A) = \inf\{s \ge 0 : P(\Gamma,\A,s) \le 0\}. \end{equation*} Notice that if the pressure $s \mapsto P(\Gamma,\A,s)$ is continuous at $s_0 = \dimaff(\Gamma,\A)$, then $P(\Gamma,\A,s_0) = 0$. We are interested in the properties of the pressure \begin{equation*} P(\A,s) = P(J^\N,\A,s) \end{equation*} as a function of $s$ and the affinity dimension $\dimaff(\A) = \dimaff(J^\N,\A)$. To that end, let us introduce some further notation. Let $I = \{i \in J : A_i \text{ is invertible}\}$. Then we trivially have that \begin{equation*} I^\N = \{\iii \in J^\N : A_{\iii|_n} \text{ is invertible for all }n \in \N\} \end{equation*} is a compact subset of $J^\N$ and satisfies $\sigma(I^\N) = I^\N$. Therefore, the pressure $P(I^\N,\A,s)$ is well-defined for all $s \ge 0$. We also define \begin{equation*} \Sigma = \{\iii \in J^\N : A_{\iii|_n} \text{ is non-zero for all }n \in \N\}. \end{equation*} It is easy to see that $\Sigma$ is a compact subset of $J^\N$ and satisfies $\sigma(\Sigma) \subset \Sigma$. Indeed, if $\jjj \in \sigma(\Sigma)$, then there is $\iii \in \Sigma$ such that $\jjj = \sigma\iii$ and $A_{\iii|_n} \ne 0$ for all $n \in \N$. As clearly $A_{\sigma\iii|_n} \ne 0$ for all $n \in \N$, we see that $\jjj = \sigma\iii \in \Sigma$ as claimed. Hence, also the pressure $P(\Sigma,\A,s)$ is well-defined for all $s \ge 0$. Observe that the inclusion $\sigma(\Sigma) \subset \Sigma$ can be strict: if $J = \{0,1\}$ and \begin{equation*} A_0 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \end{equation*} then $\Sigma = \{0111\cdots, 111\cdots\}$ and $\sigma(\Sigma) = \{111\cdots\}$. \begin{lemma} \label{thm:pressure-continuous} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$, then \begin{equation*} P(\A,s) = \begin{cases} \log \# J, &\text{if } s = 0, \\ P(\Sigma,\A,s), &\text{if } 0 < s \le 1, \\ P(I^\N,\A,s), &\text{if } 1 < s < \infty. \end{cases} \end{equation*} Furthermore, the function $s \mapsto P(\A,s)$ is strictly decreasing on $[0,\infty)$, continuous on $(0,1)$, and uniformly continuous on $(1,\infty)$ whenever it is finite.
\end{lemma} \begin{proof} Recall that $\fii^s(A) = \alpha_1(A)^s = \|A\|^s$ for all $0 \le s \le 1$. Therefore, as we interpreted $0^0=1$, we have \begin{equation*} P(\A,0) = \lim_{n \to \infty} \frac{1}{n} \log \sum_{\iii \in J^n} \|A_\iii\|^0 = \log \# J. \end{equation*} Furthermore, for $s > 1$ we have $\fii^s(A_\iii)>0$ if and only if $\iii \in I^*$. This shows $P(\A,s) = P(I^\N,\A,s)$ for all $1<s<\infty$. The function $s \mapsto P(\A,s)$ has already been seen to be strictly decreasing. The continuity on $(0,1)$ follows from \cite[Theorem 1.2(3)]{FengShmerkin2014} and the uniform continuity on $(1,\infty)$ follows directly from \cite[Lemma 2.1]{KaenmakiVilppolainen2010}. \end{proof} The following lemma characterizes the continuity of the function $s \mapsto P(\A,s)$ at $0$. \begin{lemma} \label{thm:pressure-right-continuous-at-zero} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$, then the function $s \mapsto P(\A,s)$ is right-continuous at $0$ if and only if the semigroup $\{A_\iii : \iii \in J^*\}$ does not contain rank zero matrices. \end{lemma} \begin{proof} If the semigroup $\{A_\iii : \iii \in J^*\}$ does not contain rank zero matrices, then $\Sigma = J^\N$ and the right-continuity at $0$ is guaranteed by Lemma \ref{thm:pressure-continuous}. If $A_\iii$ has rank zero for some $\iii \in J^n$ and $n \in \N$, then clearly $\# \Sigma_n < \# J^n = (\# J)^n$. Fix $0 < s \le 1$ and notice that Lemma \ref{thm:pressure-continuous} implies \begin{equation*} P(\A,s) \le \frac{1}{n} \log \sum_{\iii \in \Sigma_n} \|A_\iii\|^s \end{equation*} and \begin{equation*} \lim_{s \downarrow 0} P(\A,s) \le \frac{1}{n} \log \# \Sigma_n < \frac{1}{n} \log \# J^n = P(\A,0), \end{equation*} where the limit exists by Lemma \ref{thm:pressure-continuous}. In particular, the function $s \mapsto P(\A,s)$ is not right-continuous at $0$. \end{proof} The possible discontinuity at $1$ has already been observed by Feng and Shmerkin \cite[Remark 1.1]{FengShmerkin2014}. In their example, the pressure is not finite when $s > 1$, but it is easy to see that this is not a necessity. If $J = \{0,1\}$ and \begin{equation*} A_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \end{equation*} then, by Lemma \ref{thm:pressure-continuous}, for $\A = (A_0,A_1) \in M_2(\R)^J$ we have $P(\A,1) = \log 2$ and $P(\A,s) = 0$ for all $s>1$. The continuity of the function $s \mapsto P(\A,s)$ at $1$ will be characterized for dominated and irreducible tuples in Lemma \ref{thm:pressure-continuous-at-one}. Let us next determine when the pressure is finite. For that, we need the following definition. Given a tuple $\A = (A_i)_{i \in J} \in M_2(\R)^J$ of matrices, we define the \emph{joint spectral radius} by setting \begin{equation*} \roo(\A) = \lim_{n \to \infty} \max_{\iii \in J^n} \|A_\iii\|^{1/n}. \end{equation*} As the operator norm is sub-multiplicative, the sequence $(\log \max_{\iii \in J^n} \|A_\iii\|)_{n \in \N}$ is sub-additive and hence, the limit above exists by Fekete's lemma. \begin{lemma} \label{thm:joint-spectral-radius} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ is dominated or irreducible, then $\roo(\A) > 0$. \end{lemma} \begin{proof} Let us first assume that $\A$ is dominated and $\CC \subset \RP$ is a strongly invariant multicone for $\A$.
Since there exists a multicone $\CC_0 \subset \RP$ such that $\bigcup_{\iii \in J^n} A_\iii \CC \subset \bigcup_{i \in J} A_i \CC \subset \CC_0 \subset \CC^o$ for all $n \in \N$, we find, by applying \cite[Lemma 2.2]{BochiMorris2015}, a constant $\kappa > 0$ such that \begin{equation} \label{eq:bochi-morris} \|A_\iii|V\| \ge \kappa \|A_\iii\| \end{equation} for all $V \in \CC_0$ and $\iii \in J^*$. It follows that if $V \in \CC_0$, then $A_\jjj V \in \CC_0$ and $\|A_\iii A_\jjj\| \ge \|A_\iii A_\jjj | V\| = \|A_\iii|A_\jjj V\| \|A_\jjj|V\| \ge \kappa^2\|A_\iii\|\|A_\jjj\|$ for all $\iii,\jjj \in J^*$. Therefore, \begin{equation*} \roo(\A) \ge \liminf_{n \to \infty} \max_{i_1 \cdots i_n \in J^n} \kappa^{2(n-1)/n} \|A_{i_1}\|^{1/n} \cdots \|A_{i_n}\|^{1/n} \ge \kappa^2 \min_{j \in J} \|A_j\| > 0 \end{equation*} as claimed. Although the proof in the irreducible case can be found in \cite[Lemma 2.2]{Jungers2009}, we present the full details for the convenience of the reader. Denote the unit circle by $S^1$ and suppose that for each $k \in \N$ there is $x_k \in S^1$ such that for every $i \in J$ we have $|A_ix_k| < \frac{1}{k}$. By the compactness of $S^1$, there is $x \in S^1$ such that $|A_ix|=0$ for all $i \in J$. Choosing $V = \linspan(x) \in \RP$, we see that $A_iV = \{(0,0)\} \subset V$ for all $i \in J$ and $\A$ is reducible. It follows that there is $\delta > 0$ such that for every $x \in S^1$ there exists $i \in J$ for which $|A_ix| \ge \delta$. Let us next apply this inductively. Fix $x_0 \in S^1$ and choose $i_1 \in J$ such that $|A_{i_1}x_0| \ge \delta$. Write $x_1 = A_{i_1}x_0$ and choose $i_2 \in J$ such that $|A_{i_2}\frac{x_1}{|x_1|}| \ge \delta$ whence $|A_{i_2}A_{i_1}x_0| = |A_{i_2}x_1| \ge \delta|x_1| = \delta|A_{i_1}x_0| \ge \delta^2$. Continuing in this manner, we find for each $n \in \N$ a word $\iii_n \in J^n$ such that $\|A_{\iii_n}\| \ge |A_{\iii_n} x_0| \ge \delta^n$. Hence, \begin{equation*} \roo(\A) \ge \liminf_{n \to \infty} \|A_{\iii_n}\|^{1/n} \ge \delta > 0 \end{equation*} as wished. \end{proof} The following two lemmas characterize the finiteness of the pressure. \begin{lemma} \label{thm:pressure-finite1} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$, then the following five conditions are equivalent: \begin{enumerate} \item\label{it:11} $P(\A,s) > -\infty$ for all $0 \le s \le 1$, \item\label{it:12} $\lim_{s \downarrow 0} P(\A,s) > -\infty$, \item\label{it:13} there does not exist $n \in \N$ such that $A_\iii = 0$ for all $\iii \in J^n$, \item\label{it:14} there exists $\jjj \in J^\N$ such that $A_{\jjj|_n} \ne 0$ for all $n \in \N$, \item\label{it:15} $\roo(\A)>0$. \end{enumerate} Furthermore, all of these conditions hold if $\A$ is dominated or irreducible. \end{lemma} \begin{proof} Notice that the limit in \eqref{it:12} exists by Lemma \ref{thm:pressure-continuous} and the implications \eqref{it:11} $\Rightarrow$ \eqref{it:12} and \eqref{it:14} $\Rightarrow$ \eqref{it:13} are trivial. Let us first show the implication \eqref{it:12} $\Rightarrow$ \eqref{it:13}. If \eqref{it:13} does not hold, then there exists $n_0 \in \N$ such that $A_\iii = 0$ for all $\iii \in J^{n_0}$. Since now $\|A_\iii\| = 0$ for all $\iii \in J^n$ and $n \ge n_0$, we see that $P(\A,s) = -\infty$ for all $s>0$ and \eqref{it:12} cannot hold. Let us then show the implication \eqref{it:13} $\Rightarrow$ \eqref{it:14}. If \eqref{it:14} does not hold, then for every $\jjj \in J^\N$ there is $n(\jjj) \in \N$ such that $A_{\jjj|_{n(\jjj)}} = 0$. 
By compactness of $J^\N$, there exist $M \in \N$ and $\jjj_1,\ldots,\jjj_M \in J^\N$ such that $\{[\jjj_i|_{n(\jjj_i)}]\}_{i=1}^M$ still covers $J^\N$. Choosing $n = \max_{i \in \{1,\ldots,M\}} n(\jjj_i)$, we see that for every $\iii \in J^n$ there is $i \in \{1,\ldots,M\}$ such that $A_\iii = A_{\jjj_i|_{n(\jjj_i)}}A_{\sigma^{n(\jjj_i)}\iii} = 0$ and \eqref{it:13} cannot hold. Since $\A$ is a tuple of strictly contractive matrices, the function $s \mapsto P(\A,s)$ is strictly decreasing whenever it is finite. Therefore, we have $P(\A,s) \ge P(\A,1) \ge \log \roo(\A)$ for all $0 \le s \le 1$ and hence, we have the implication \eqref{it:15} $\Rightarrow$ \eqref{it:11}. Therefore, to conclude the proof, it suffices to show the implication \eqref{it:13} $\Rightarrow$ \eqref{it:15} and also verify condition \eqref{it:15} when $\A$ is dominated or irreducible. While the latter is immediately assured by Lemma \ref{thm:joint-spectral-radius}, we also see that to prove the former, we may assume that $\A$ is reducible. This means that, after possibly a change of basis, the matrices $A_i$ in $\A$ are of the form \begin{equation*} A_i = \begin{pmatrix} a_i & b_i \\ 0 & c_i \end{pmatrix} \end{equation*} for all $i \in J$. Since $A_i(1,0) = a_i(1,0)$ and $A_i(\frac{b_i}{c_i-a_i},1) = c_i(\frac{b_i}{c_i-a_i},1)$ when $a_i \ne c_i$, we see that $\max\{|a_i|,|c_i|\} \le \|A_i\|$ for all $i \in J$. As the product of upper triangular matrices is upper triangular with diagonal entries obtained as products of the corresponding diagonal entries, we also have $\max\{|a_{i_1} \cdots a_{i_n}|, |c_{i_1} \cdots c_{i_n}|\} \le \|A_\iii\|$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$. Therefore, if condition \eqref{it:15} does not hold, i.e.\ $\roo(\A) = 0$, then \begin{equation*} \max_{i \in J} |a_i| = \lim_{n \to \infty} \max_{i_1 \cdots i_n \in J^n} |a_{i_1} \cdots a_{i_n}|^{1/n} \le \roo(\A) = 0 \end{equation*} and, similarly, $\max_{i \in J} |c_i| = 0$. In other words, the diagonal entries in all of the matrices $A_i$ are zero. Thus, $A_\iii = 0$ for all $\iii \in J^2$ and condition \eqref{it:13} does not hold. \end{proof} \begin{lemma} \label{thm:pressure-finite2} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$, then the following five conditions are equivalent: \begin{enumerate} \item\label{it:20} $P(I^\N,\A,s) > -\infty$ for all $s \ge 0$, \item\label{it:21} $P(\A,s) > -\infty$ for all $s \ge 0$, \item\label{it:22} $\lim_{s \downarrow 1} P(\A,s) > -\infty$, \item\label{it:23} there does not exist $n \in \N$ such that $A_\iii$ has rank at most one for all $\iii \in J^n$, \item\label{it:24} there exists $j \in J$ such that $A_j \in GL_2(\R)$. \end{enumerate} \end{lemma} \begin{proof} Notice that the implication \eqref{it:21} $\Rightarrow$ \eqref{it:22} is trivial and that the implication \eqref{it:20} $\Rightarrow$ \eqref{it:21} follows immediately from the fact that $P(\A,s) \ge P(I^\N,\A,s)$ for all $s \ge 0$. Let us first show the implication \eqref{it:22} $\Rightarrow$ \eqref{it:23}. If \eqref{it:23} does not hold, then there exists $n_0 \in \N$ such that $A_\iii$ has rank at most one for all $\iii \in J^{n_0}$ and hence, for all $\iii \in J^n$ with $n \ge n_0$. Since then $\fii^s(A_\iii) = 0$ for all $\iii \in J^n$, $n \ge n_0$, and $s>1$, we have $P(\A,s) = -\infty$ for all $s>1$ and \eqref{it:22} cannot hold. Let us then show the implication \eqref{it:23} $\Rightarrow$ \eqref{it:24}. If \eqref{it:24} does not hold, then $A_j$ has rank at most one for all $j \in J$. It follows that for every $\iii \in J^n$ and $n \in \N$ the rank of $A_\iii$ is at most one and \eqref{it:23} cannot hold. Finally, to show the implication \eqref{it:24} $\Rightarrow$ \eqref{it:20}, let $j \in J$ be such that $A_j \in GL_2(\R)$ and write $\jjj = jjj\cdots \in I^\N$. Since $\fii^s(A_{\jjj|_n}) \ge \alpha_2(A_{\jjj|_n})^s \ge \alpha_2(A_j)^{sn} > 0$ for all $n \in \N$ and $s \ge 0$, we see that $P(I^\N,\A,s) \ge s\log\alpha_2(A_j) > -\infty$ for all $s \ge 0$ as wished. \end{proof} \subsection{Equilibrium states} Let $\MM_\sigma(J^\N)$ be the collection of all $\sigma$-invariant Borel probability measures on $J^\N$.
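For instance, every Bernoulli measure, obtained from a probability vector $(p_i)_{i \in J}$ by setting $\mu([\iii]) = p_{i_1} \cdots p_{i_n}$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$, belongs to $\MM_\sigma(J^\N)$.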
If $0 < s \le 1$, then we say that a measure $\mu_K \in \MM_\sigma(J^\N)$ is \emph{$s$-Gibbs-type} if there exists a constant $C \ge 1$ such that \begin{equation*} C^{-1}e^{-nP(\A,s)}\|A_\iii\|^s \le \mu_K([\iii]) \le Ce^{-nP(\A,s)}\|A_\iii\|^s \end{equation*} for all $\iii \in J^n$ and $n \in \N$. \begin{lemma} \label{thm:weak-gibbs} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$ and is dominated or irreducible, then for every $0 < s \le 1$ there exists a unique ergodic $s$-Gibbs-type measure $\mu_K \in \MM_\sigma(J^\N)$. \end{lemma} \begin{proof} Recall first that, by Lemma \ref{thm:pressure-finite1}, the pressure $P(\A,s)$ is finite for all $0 < s \le 1$. If $\A$ is irreducible, then the existence of the claimed measure $\mu_K \in \MM_\sigma(J^\N)$ follows immediately from \cite[Proposition 1.2]{FengKaenmaki2011}. We may thus assume that $\A$ is dominated. Fix $0 < s \le 1$ and notice that, by \eqref{eq:bochi-morris}, there exist $\kappa > 0$ and a multicone $\CC_0 \subset \RP$ such that $\|A_\iii|V\| \ge \kappa\|A_\iii\|$ for all $V \in \CC_0$ and $\iii \in J^*$. Fixing $V \in \CC_0$, we see that \begin{equation*} \log\|A_{\iii|_n}\|^s + \log\kappa^s \le \sum_{k=0}^{n-1} \log\|A_{\sigma^k \iii|_1}|A_{\sigma\iii}V\|^s \le \log\|A_{\iii|_n}\|^s \end{equation*} for all $\iii \in J^\N$ and $n \in \N$. By \cite[Theorems 1.7 and 1.16]{Bowen2008}, there exist an ergodic measure $\mu_K \in \MM_\sigma(J^\N)$ and a constant $C \ge 1$ such that \begin{equation*} \kappa^sC^{-1} e^{-nP(\A,s)}\|A_\iii\|^s \le \mu_K([\iii]) \le Ce^{-nP(\A,s)}\|A_\iii\|^s \end{equation*} for all $\iii \in J^n$ and $n \in \N$; see also \cite[Lemma 2.12]{BaranyKaenmakiYu2021}. The uniqueness of $\mu_K$ is now evident as two different ergodic measures are mutually singular. \end{proof} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ is dominated, then it follows from \eqref{eq:bochi-morris} that $\|A_\iii\| \ge \kappa^{2(n-1)}\|A_{i_1}\| \cdots \|A_{i_n}\| \ge \kappa^{2(n-1)}\min_{i \in J}\|A_i\|^n > 0$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$. Hence the semigroup $\{A_\iii : \iii \in J^*\}$ does not contain rank zero matrices and, by Lemma \ref{thm:pressure-right-continuous-at-zero}, the function $s \mapsto P(\A,s)$ is right-continuous at $0$. Furthermore, if there are no rank zero matrices, then $\Sigma = J^\N$ and the $s$-Gibbs-type measure $\mu_K \in \MM_\sigma(J^\N)$ is fully supported on $J^\N$. If $\A$ is irreducible, then $\mu_K$ is supported only on $\Sigma$. Given $\mu \in \MM_\sigma(J^\N)$ and $\A = (A_i)_{i \in J} \in M_2(\R)^J$, we define for each $s \ge 0$ the \emph{energy} by setting \begin{equation*} \Lambda(\mu,\A,s) = \lim_{n \to \infty} \frac{1}{n} \sum_{\iii \in J^n} \mu([\iii]) \log \fii^s(A_\iii). \end{equation*} The limit above exists or is $-\infty$ again by Fekete's lemma. Recall that the \emph{entropy} of $\mu$ is \begin{equation*} h(\mu) = -\lim_{n \to \infty} \frac{1}{n} \sum_{\iii \in J^n} \mu([\iii]) \log\mu([\iii]). \end{equation*} It is well-known that \begin{equation} \label{eq:eq-state-ineq} P(\A,s) \ge h(\mu) + \Lambda(\mu,\A,s) \end{equation} for all $\mu \in \MM_\sigma(J^\N)$ and $s \ge 0$; for example, see \cite[\S 3]{KaenmakiVilppolainen2010}. A measure $\mu_K \in \MM_\sigma(J^\N)$ is an \emph{$s$-equilibrium state} if it satisfies \begin{equation} \label{eq:eq-state-def} P(\A,s) = h(\mu_K) + \Lambda(\mu_K,\A,s) > -\infty. \end{equation} The following lemma shows the uniqueness of the equilibrium state in dominated and irreducible cases.
\begin{lemma} \label{thm:unique-equilibrium-state} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$ and is dominated or irreducible, then for every $0 < s \le 1$ the ergodic $s$-Gibbs-type measure $\mu_K \in \MM_\sigma(J^\N)$ is the unique $s$-equilibrium state. \end{lemma} \begin{proof} Fix $0 < s \le 1$ and let $\mu_K \in \MM_\sigma(J^\N)$ be the ergodic $s$-Gibbs-type measure. Since, by Lemmas \ref{thm:weak-gibbs} and \ref{thm:pressure-finite1}, \begin{align*} h(\mu_K) + \Lambda(\mu_K,\A,s) &= \lim_{n \to \infty} \frac{1}{n} \sum_{\iii \in \Sigma_n} \mu_K([\iii]) \log \frac{\|A_\iii\|^s}{\mu_K([\iii])} \\ &= \lim_{n \to \infty} \frac{1}{n} \sum_{\iii \in \Sigma_n} \mu_K([\iii]) \log e^{nP(\A,s)} = P(\A,s) > -\infty, \end{align*} we see that $\mu_K$ is an $s$-equilibrium state. As $\mu_K$ is ergodic, the uniqueness follows from \cite[Theorem 3.6]{KaenmakiVilppolainen2010}. \end{proof} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ contains an invertible matrix, then $I \ne \emptyset$ and, by Lemma \ref{thm:pressure-finite2}, $P(I^\N,\A,s) > -\infty$ for all $s \ge 0$. In this case, regardless of domination and irreducibility, it follows from \cite[Theorem 4.1]{Kaenmaki2004} that for every $s > 0$ there exists an ergodic measure $\nu_K \in \MM_\sigma(J^\N)$ supported on $I^\N$ such that \begin{equation} \label{eq:equilibrium-state} P(I^\N,\A,s) = h(\nu_K) + \Lambda(\nu_K,\A,s). \end{equation} Note that such a measure is not necessarily unique; see \cite{FengKaenmaki2011,KaenmakiVilppolainen2010,KaenmakiMorris2018,BochiMorris2018}. \begin{lemma} \label{thm:pressure-drop} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$, contains a rank one matrix, and is dominated or irreducible, then \begin{equation*} P(I^\N,\A,s) < P(\A,s) \end{equation*} for all $0 \le s \le 1$. \end{lemma} \begin{proof} Since $\A$ is dominated or irreducible, Lemma \ref{thm:pressure-finite1} shows that $P(\A,s) > -\infty$ for all $0 \le s \le 1$. Notice that, by Lemma \ref{thm:pressure-continuous}, $P(I^\N,\A,0) = \log\# I < \log\# J = P(\A,0)$ and we may fix $0 < s \le 1$. Therefore, by Lemma \ref{thm:unique-equilibrium-state}, there exists unique $\mu_K \in \MM_\sigma(J^\N)$ such that \begin{equation} \label{eq:equilibrium-large} P(\A,s) = h(\mu_K) + \Lambda(\mu_K,\A,s) > -\infty. \end{equation} Furthermore, by Lemma \ref{thm:weak-gibbs}, $\mu_K$ satisfies \begin{equation*} \mu_K([\iii]) \ge C^{-1}e^{-nP(\A,s)} \|A_\iii\|^s > 0 \end{equation*} for all $\iii \in \Sigma_n$ and $n \in \N$, where $C \ge 1$ is a constant. In particular, if $A_k$ is a rank one matrix in $\A$, then $\mu_K([k]) > 0$. If $\A$ does not contain invertible matrices, then trivially $P(I^\N,\A,s) = -\infty$ for all $s > 0$ and there is nothing to prove. We may thus assume that $\A$ contains an invertible matrix. Therefore, by \eqref{eq:equilibrium-state}, there exists a measure $\nu_K \in \MM_\sigma(J^\N)$ supported on $I^\N$ such that \begin{equation} \label{eq:equilibrium-small} P(I^\N,\A,s) = h(\nu_K) + \Lambda(\nu_K,\A,s). \end{equation} Since $A_k$ is not invertible and $\nu_K$ is supported on $I^\N$, we have $\nu_K([k]) = 0$. As $\mu_K$ is the unique measure in $\MM_\sigma(J^\N)$ satisfying \eqref{eq:equilibrium-large} and $\mu_K([k]) > \nu_K([k])$, we see that $\nu_K$ does not satisfy \eqref{eq:equilibrium-large} and therefore, by \eqref{eq:equilibrium-small}, \begin{equation*} P(\A,s) > h(\nu_K) + \Lambda(\nu_K,\A,s) = P(I^\N,\A,s) \end{equation*} as claimed. 
\end{proof} The following lemma characterizes the continuity of the function $s \mapsto P(\A,s)$ at $1$. \begin{lemma} \label{thm:pressure-continuous-at-one} If $\A = (A_i)_{i \in J} \in M_2(\R)^J$ satisfies $\max_{i \in J} \|A_i\| < 1$ and is dominated or irreducible, then the function $s \mapsto P(\A,s)$ is continuous at $1$ if and only if $\A$ does not contain rank one matrices. \end{lemma} \begin{proof} If $\A$ does not contain rank one matrices, then it contains only invertible or rank zero matrices. By Lemma \ref{thm:pressure-continuous}, rank zero matrices do not have any effect on the value of the pressure $P(\A,s)$ when $s > 0$. Therefore, rank zero matrices have no impact on the continuity at $1$ and we may assume that $\A \in GL_2(\R)^J$. But in this case, the continuity follows from \cite[Lemma 2.1]{KaenmakiVilppolainen2010}. Let us then assume that $\A$ contains a rank one matrix. If $\A$ does not contain invertible matrices, then, as the function $s \mapsto P(\A,s)$ is strictly decreasing, Lemma \ref{thm:pressure-finite2} implies that $P(\A,s) = -\infty$ for all $s > 1$. Furthermore, since $\A$ is dominated or irreducible, Lemma \ref{thm:pressure-finite1} shows that $P(\A,s) > -\infty$ for all $0 \le s \le 1$ and the function $s \mapsto P(\A,s)$ is discontinuous at $1$. We may thus assume that $\A$ contains an invertible matrix. By Lemma \ref{thm:pressure-finite2}, we thus have $P(I^\N,\A,s) > -\infty$ for all $s \ge 0$. Recall that, by \cite[Lemma 2.1]{KaenmakiVilppolainen2010}, the function $s \mapsto P(I^\N,\A,s)$ is continuous at $1$. Therefore, by Lemma \ref{thm:pressure-continuous}, showing \begin{equation*} P(I^\N,\A,1) < P(\A,1) \end{equation*} proves the function $s \mapsto P(\A,s)$ discontinuous at $1$. But, as $\A$ contains a rank one matrix, this follows immediately from Lemma \ref{thm:pressure-drop}. \end{proof} \section{Dimension of non-invertible self-affine sets} \label{sec:dim-results} Recall that $J$ is a finite set and the affine iterated function system is a tuple $(f_i)_{i \in J}$ of contractive affine self-maps on $\R^2$ not having a common fixed point. We write $f_i = A_i+v_i$ for all $i \in J$, where $A_i \in M_2(\R)$ and $v_i \in \R^2$, and $f_\iii = f_{i_1} \circ \cdots \circ f_{i_n}$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$. We let $f_\varnothing = \mathrm{Id}$ to be the identity map. Note that the associated tuple of matrices $(A_i)_{i \in J}$ is an element of $M_2(\R)^J$ and satisfies $\max_{i \in J}\|A_i\|<1$. If $I = \{i \in J : A_i \text{ is invertible}\}$ is non-empty, then the invertible self-affine set $X$ is associated to $(f_i)_{i \in I}$, and if $J \setminus I$ is non-empty, then the non-invertible self-affine set $X'$ is associated to $(f_i)_{i \in J}$. Recall the defining property \eqref{eq:self-affine-set-def} of a self-affine set. We use the convention that whenever we speak about a self-affine set, then it is automatically accompanied with a tuple of affine maps which defines it. This makes it possible to write that e.g.\ ``a non-invertible self-affine set is dominated'' which obviously then means that ``the associated tuple $\A = (A_i)_{i \in J}$ of matrices in $M_2(\R)^J$ is dominated''. The study of non-invertible self-affine sets is connected to the theory of sub-self-affine sets. 
If the \emph{canonical projection} $\pi \colon J^\N \to \R^2$ is defined such that \begin{equation*} \pi(\iii) = \lim_{n \to \infty} f_{\iii|_n}(0) = \lim_{n \to \infty} \sum_{k=1}^n A_{\iii|_{k-1}}v_{i_k} \end{equation*} for all $\iii = i_1i_2\cdots \in J^\N$, then we write $X'' = \pi(\Sigma)$, where $\Sigma = \{\iii \in J^\N : A_{\iii|_n}$ is non-zero for all $n \in \N\}$. Observe that $X = \pi(I^\N) \subset X'' \subset \pi(J^\N) = X'$ and, as $\sigma(\Sigma) \subset \Sigma$, the set $X''$ is \emph{sub-self-affine}, i.e.\ \begin{equation} \label{eq:sub-self-affine} X'' \subset \bigcup_{i \in J} f_i(X''); \end{equation} see \cite{KaenmakiVilppolainen2010}. The study is also connected to inhomogeneous self-affine sets. If $C \subset \R^2$ is compact, then there exists a unique non-empty compact set $X_C \subset \R^2$ such that \begin{equation*} X_C = \bigcup_{i \in I} f_i(X_C) \cup C. \end{equation*} The set $X_C$ is called the \emph{inhomogeneous self-affine set} with condensation $C$. Such sets were introduced by Barnsley and Demko \cite{BarnsleyDemko1985} and they have been studied for example in \cite{Barnsley2006,KaenmakiLehrback2017,Burrell2019,BurrellFraser2020,BakerFraserMathe2019}. Note that $X_\emptyset$ is the invertible self-affine set $X$. \begin{lemma} \label{thm:inhomog} If $X'$ and $X$ are non-invertible and invertible planar self-affine sets, respectively, and $X''$ is the associated sub-self-affine set defined in \eqref{eq:sub-self-affine}, then $X' \setminus X''$ is countable and \begin{equation*} X' = X_C = X \cup \bigcup_{\iii \in I^*} f_\iii(C), \end{equation*} where $X_C$ is the inhomogeneous self-affine set with condensation $C = \bigcup_{i \in J \setminus I} f_i(X')$. \end{lemma} \begin{proof} Let us first show that $X' \setminus X''$ is countable. Writing $v_\iii = \sum_{k=1}^n A_{\iii|_{k-1}}v_{i_k}$, we see that $f_\iii = A_\iii + v_\iii$ for all $\iii = i_1 \cdots i_n \in J^n$ and $n \in \N$. Let $\iii \in J^\N \setminus \Sigma$ and choose $n_0(\iii) = \min\{n \in \N : A_{\iii|_n}$ is zero$\}$. Since $v_{\iii|_{n+1}} = \sum_{k=1}^{n+1} A_{\iii|_{k-1}}v_{i_k} = A_{\iii|_n} v_{i_{n+1}} + \sum_{k=1}^n A_{\iii|_{k-1}}v_{i_k} = v_{\iii|_n}$ for all $n \ge n_0(\iii)$, a simple induction shows that \begin{equation*} f_{\iii|_n}(X') = \{v_{\iii|_{n_0(\iii)}}\} \end{equation*} for all $n \ge n_0(\iii)$. As $J^\N \setminus \Sigma$ is clearly separable, there exist countably many infinite words $\iii_1,\iii_2,\ldots \in J^\N \setminus \Sigma$ such that $J^\N \setminus \Sigma \subset \bigcup_{k \in \N} [\iii_k|_{n_0(\iii_k)}]$. It follows that \begin{equation*} X' \setminus X'' \subset \{v_{\iii_k|_{n_0(\iii_k)}} : k \in \N\} \end{equation*} is countable. Let us then prove the claimed equalities. Noting that the argument of \cite[Lemma 3.9]{Snigireva2008} works also in the self-affine setting, we have \begin{equation} \label{eq:inhomog-eq} X_C = X \cup \bigcup_{\iii \in I^*} f_\iii(C). \end{equation} To prove the remaining equality, let us first show that $X' \subset X_C$. To that end, fix $x \in X'$. By \eqref{eq:inhomog-eq}, we have $X \subset X_C$ and we may assume that $x \in X' \setminus X$. But this implies that there exist $\iii \in I^*$ and $i \in J \setminus I$ such that $x \in f_{\iii i}(X')$. Since, again by \eqref{eq:inhomog-eq}, \begin{equation*} f_{\iii i}(X') \subset f_\iii(C) \subset \bigcup_{\iii \in I^*} f_\iii(C) \subset X_C \end{equation*} we have shown that $X' \subset X_C$. 
The inclusion $X_C \subset X'$ follows immediately from \eqref{eq:inhomog-eq} since we trivially have $X \subset X'$ and $f_\iii(C) \subset X'$ for all $\iii \in I^*$. Thus $X' = X_C$ as claimed. \end{proof} Observe that the affinity dimension is defined through the values $\fii^s(A_\iii)$ and hence, through the pressure $P(\A,s)$; in particular, it does not depend on the translation vectors. Recall that the upper Minkowski dimension $\udimm$ is an upper bound for the Hausdorff dimension $\dimh$ for all compact sets; see \cite[\S 5.3]{Mattila1995}. The following lemma, generalizing \cite[Theorem 5.4]{Falconer1988}, shows that the affinity dimension is an upper bound for the upper Minkowski dimension for all non-invertible self-affine sets. \begin{lemma} \label{thm:affinity-upper} If $X'$ is a planar self-affine set, then \begin{equation*} \udimm(X') \le \dimaff(\A). \end{equation*} \end{lemma} \begin{proof} We may assume that $\dimaff(\A) < 2$ as otherwise there is nothing to prove. Let $k \in \{0,1\}$ be such that $k \le \dimaff(\A) < k+1$. Fix $\dimaff(\A) < s < k+1$ and notice that $P(\A,s) < 0$. By \cite[Proposition 4.1]{Falconer1988}, we thus have \begin{equation} \label{eq:star-sum-finite} M = \sum_{\jjj \in J^*} \fii^s(A_\jjj) < \infty. \end{equation} Let $B$ be a ball containing $X'$. By scaling and translating, we may assume that $B$ is the unit ball. Write \begin{equation*} \CC_r = \{\iii \in J^* : \alpha_{k+1}(A_\iii) \le r < \alpha_{k+1}(A_{\iii^-})\} \end{equation*} for all $0<r<1$. If $\jjj \in J^\N$, then $\alpha_{k+1}(A_{\jjj|_0}) = \alpha_{k+1}(\mathrm{Id}) = 1$ and $\alpha_{k+1}(A_{\jjj|_n}) \to 0$ as $n \to \infty$. Therefore, for each $0<r<1$ there exists a unique $n \in \N$ such that $\jjj|_n \in \CC_r$ and the collection $\{[\iii] : \iii \in \CC_r\}$ of pairwise disjoint cylinder sets is a cover of $J^\N$. Fix $0<r<1$ and $\iii \in \CC_r$, and observe that $f_\iii(B)$ is an ellipse with semi-axes $\alpha_1(A_\iii)$ and $\alpha_2(A_\iii)$. Since $\alpha_{k+1}(A_\iii) \le r < \alpha_{k+1}(A_{\iii^-})$, the set $f_\iii(B)$ is covered by \begin{equation*} \begin{cases} 4, &\text{if } k=0, \\ 4\max\{r^{-1}\alpha_1(A_\iii),1\}, &\text{if } k=1 \end{cases} \end{equation*} many balls of radius $r$, and in both cases this number is at most $4\fii^k(A_{\iii^-})r^{-k}$. Write \begin{equation*} N_\iii(r) = 4\fii^k(A_{\iii^-})r^{-k} \end{equation*} and observe that \begin{equation*} N_\iii(r)r^s \le 4\fii^k(A_{\iii^-})\alpha_{k+1}(A_{\iii^-})^{s-k} = 4\fii^s(A_{\iii^-}) \end{equation*} for all $\iii \in \CC_r$. Recalling \eqref{eq:star-sum-finite}, we thus have \begin{equation} \label{eq:star-box} \begin{split} \sum_{\iii \in \CC_r} N_\iii(r) &\le 4r^{-s} \sum_{\iii \in \CC_r} \fii^s(A_{\iii^-}) \\ &\le 4r^{-s} \# J \sum_{\jjj \in J^*} \fii^s(A_\jjj) \le 4M\# Jr^{-s}, \end{split} \end{equation} where the factor $\# J$ appears since every $\jjj \in J^*$ satisfies $\jjj = \iii^-$ for at most $\# J$ words $\iii \in \CC_r$. Since $\{[\iii] : \iii \in \CC_r\}$ is a covering of $J^\N$, it follows that $\{f_\iii(B) : \iii \in \CC_r\}$ is a covering of $X'$. Hence $X'$ can be covered by $\sum_{\iii \in \CC_r} N_\iii(r)$ many balls of radius $r$. This together with \eqref{eq:star-box} gives $\udimm(X') \le s$. The proof is finished by letting $s \downarrow \dimaff(\A)$. \end{proof} It is easy to construct examples of self-affine sets having dimension strictly less than the affinity dimension. For example, several self-affine carpets have this property. Nevertheless, the classical result of Falconer \cite[Theorem 5.3]{Falconer1988} shows that, perhaps rather surprisingly, the Hausdorff dimension of a non-invertible self-affine set equals the affinity dimension for Lebesgue-almost every choice of translation vectors.
\begin{theorem} \label{thm:falconer} If $X_{\mathsf{v}}'$ is a planar self-affine set and $\A$ satisfies $\max_{i \in J}\|A_i\|<\frac12$, then \begin{equation*} \dimh(X_{\mathsf{v}}') = \min\{2,\dimaff(\A)\} \end{equation*} for $\LL^{2 \#J}$-almost all translation vectors $\mathsf{v} = (v_i)_{i \in J} \in (\R^2)^{\#J}$. \end{theorem} Originally, Falconer assumed that the matrices are invertible and their norms are bounded above by $\frac13$. Solomyak \cite{Solomyak1998} relaxed the bound to $\frac12$ which, by the example of Edgar \cite{Edgar1992}, is known to be the best possible. To see that $\min\{2,\dimaff(\A)\}$ in Theorem \ref{thm:falconer} is a lower bound for the Hausdorff dimension also when the matrices are non-invertible, by Lemma \ref{thm:pressure-continuous} it suffices to notice that \cite[Lemma 2.2]{Falconer1988} remains valid for all parameters $s$ strictly less than the rank of the matrix. Recently, a deterministic class of invertible self-affine sets was found for which the Hausdorff dimension equals the affinity dimension. We say that $X$ satisfies the \emph{open set condition} if there exists a non-empty open set $U \subset \R^2$ such that $f_i(U) \cap f_j(U) = \emptyset$ and $f_i(U) \subset U$ for all $i,j \in I$ with $i \ne j$. If such a set $U$ also intersects $X$, then we say that $X$ satisfies the \emph{strong open set condition}. The following breakthrough result for self-affine sets was proved by B\'ar\'any, Hochman, and Rapaport \cite[Theorems 1.1 and 7.1]{BHR}: \begin{theorem} \label{thm:BHR} If $X$ is an invertible strictly affine strongly irreducible planar self-affine set satisfying the strong open set condition, then \begin{align*} \dimh(X) &= \min\{2,\dimaff(I^\N,\A)\}, \\ \dimh(\proj_V(X)) &= \min\{1,\dimaff(I^\N,\A)\} \end{align*} for all $V \in \RP$. \end{theorem} We emphasize that Theorem \ref{thm:BHR} uses the assumption that the affine iterated function system consists only of invertible maps. It is currently not known whether the result holds also with non-invertible maps. We also remark that Hochman and Rapaport \cite{HochmanRapaport2021} have recently managed to relax the assumptions of the result. They showed that the strong open set condition can be replaced by exponential separation, a separation condition which allows overlapping. The following three propositions collect our dimension results for non-invertible self-affine sets. \begin{proposition} \label{thm:prop1} Suppose that $X'$ and $X$ are non-invertible and invertible planar self-affine sets, respectively. If \begin{align*} \dimh(X) &= \min\{2,\dimaff(I^\N,\A)\} \ge 1, \\ \dimh(\proj_{V}(X)) &= \min\{1,\dimaff(I^\N,\A)\} = 1 \end{align*} for all $V \in \RP$, then $\udimm(X') = \dimh(X)$ and $\dimh(\proj_V(X'))=1$ for all $V \in \RP$. \end{proposition} \begin{proof} To simplify notation, write $s = \dimaff(I^\N,\A)$. If $1 < s < \infty$, then Lemma \ref{thm:pressure-continuous} shows that $\dimaff(\A) = s \ge 1$. If $s=1$, then we get $P(\A,t) = P(I^\N,\A,t) < 0 = P(I^\N,\A,1) \le P(\A,1)$ for all $1<t<\infty$ and we again have $\dimaff(\A) = s \ge 1$. Therefore, by Lemma \ref{thm:affinity-upper}, we have $\udimm(X') \le \min\{2,\dimaff(\A)\} = \min\{2,s\} = \dimh(X) \le \dimh(X')$. To finish the proof, notice that $1 = \dimh(\proj_{V}(X)) \le \dimh(\proj_{V}(X')) \le 1$ for all $V \in \RP$. \end{proof} \begin{proposition} \label{thm:prop3} Suppose that $X'$ and $X$ are non-invertible and invertible planar self-affine sets, respectively.
If $X'$ is dominated or irreducible, $\A$ contains a rank one matrix, $\dimaff(I^\N,\A) < 1$, and \begin{equation*} \dimh(X') = \min\{2,\dimaff(\A)\}, \end{equation*} then $\dimh(X') > \udimm(X)$. \end{proposition} \begin{proof} To simplify notation, write $s=\dimaff(I^\N,\A)$. Since $s < 1$, Lemma \ref{thm:pressure-drop} implies that $0 = P(I^\N,\A,s) < P(\A,s)$. Therefore, as Lemmas \ref{thm:pressure-finite1} and \ref{thm:pressure-continuous} guarantee the continuity of the pressure, we have $s < \dimaff(\A)$. Therefore, by Lemma \ref{thm:affinity-upper}, we have $\udimm(X) \le s < \min\{2,\dimaff(\A)\} = \dimh(X')$. \end{proof} \begin{proposition} \label{thm:prop2} Suppose that $X'$ and $X$ are non-invertible and invertible planar self-affine sets, respectively. If $\A$ contains a rank one matrix and \begin{equation*} \dimh(X) = \dimh(\proj_V(X)) < 1 \end{equation*} for all $V \in \RP$, then there exists a rank one matrix $A$ in $\A$ such that $\dimh(X') = \dimh(\proj_{\ker(A)^\bot}(X')) \le 1$. \end{proposition} \begin{proof} To simplify notation, write $s = \dimh(X)$. By Lemma \ref{thm:inhomog}, the non-invertible self-affine set can be expressed as an inhomogeneous self-affine set, \begin{equation*} X' = X_C = X \cup \bigcup_{\iii \in I^*} f_\iii(C), \end{equation*} where $C = \bigcup_{i \in J \setminus I} f_i(X')$. Therefore, by the countable stability of Hausdorff dimension, \begin{equation} \label{eq:main1} \begin{split} \dimh(X') &= \max\{s, \sup_{\iii \in I^*}\dimh(f_\iii(C))\} \\ &= \max\{s, \dimh(C)\} = \max\{s, \max_{i \in J \setminus I}\dimh(A_i(X'))\}. \end{split} \end{equation} Let $A$ be a rank one matrix in $\A$ such that $\dimh(A(X')) = \max_{i \in J \setminus I}\dimh(A_i(X'))$. Since, by the assumption and Lemma \ref{thm:rank-one1}, $s = \dimh(\proj_{\ker(A)^\bot}(X)) \le \dimh(\proj_{\ker(A)^\bot}(X')) = \dimh(A(X'))$, the claim follows from \eqref{eq:main1}. \end{proof} We are now ready to prove the main result. The proof basically just applies Theorems \ref{thm:falconer} and \ref{thm:BHR} in the above propositions. \begin{proof}[Proof of Theorem \ref{thm:main}] (1) Since, by Theorem \ref{thm:BHR}, we have \begin{align*} \dimh(X) &= \min\{2,\dimaff(I^\N,\A)\} \ge 1, \\ \dimh(\proj_{V}(X)) &= \min\{1,\dimaff(I^\N,\A)\} = 1 \end{align*} for all $V \in \RP$, Proposition \ref{thm:prop1} implies $\udimm(X') = \dimh(X)$ and $\dimh(\proj_V(X'))=1$ for all $V \in \RP$. (2) Since, by Theorem \ref{thm:falconer}, we have \begin{equation*} \dimh(X_{\mathsf{v}}') = \min\{2,\dimaff(\A)\} \end{equation*} for $\LL^{2 \#J}$-almost all $\mathsf{v} \in (\R^2)^{\#J}$, Proposition \ref{thm:prop3} implies $\dimh(X_{\mathsf{v}}') > \udimm(X_{\mathsf{v}})$ for $\LL^{2 \#J}$-almost all $\mathsf{v} \in (\R^2)^{\#J}$. (3) Since, by Theorem \ref{thm:BHR}, we have \begin{equation*} \dimh(X) = \dimh(\proj_V(X)) < 1 \end{equation*} for all $V \in \RP$, Proposition \ref{thm:prop2} implies that there exists a rank one matrix $A$ in $\A$ such that $\dimh(X') = \dimh(\proj_{\ker(A)^\bot}(X')) \le 1$. \end{proof} \begin{ack} The authors thank De-Jun Feng for pointing out Lemma \ref{thm:affinity-upper}. \end{ack} \begin{thebibliography}{10} \bibitem{BakerFraserMathe2019} S.~Baker, J.~M. Fraser, and A.~M\'{a}th\'{e}. \newblock Inhomogeneous self-similar sets with overlaps. \newblock {\em Ergodic Theory Dynam. Systems}, 39(1):1--18, 2019. \bibitem{BHR} B.~B\'{a}r\'{a}ny, M.~Hochman, and A.~Rapaport. \newblock Hausdorff dimension of planar self-affine sets and measures. \newblock {\em Invent. 
Math.}, 216(3):601--659, 2019. \bibitem{BaranyKaenmakiMorris2018} B.~B\'{a}r\'{a}ny, A.~K\"{a}enm\"{a}ki, and I.~D. Morris. \newblock Domination, almost additivity, and thermodynamic formalism for planar matrix cocycles. \newblock {\em Israel J. Math.}, 239(1):173--214, 2020. \bibitem{BaranyKaenmakiYu2021} B.~B\'{a}r\'{a}ny, A.~K{\"a}enm{\"a}ki, and H.~Yu. \newblock Finer geometry of planar self-affine sets. \newblock Preprint, available at arXiv:2107.00983, 2021. \bibitem{BaranyKort2024} B.~B\'{a}r\'{a}ny and V.~K{\"o}rtv{\'e}lyesi. \newblock On the dimension of planar self-affine sets with non-invertible maps. \newblock Proc. Roy. Soc. Edinburgh Sect. A, to appear, available at arXiv:2302.13037, 2024. \bibitem{Barnsley2006} M.~F. Barnsley. \newblock {\em Superfractals}. \newblock Cambridge University Press, Cambridge, 2006. \bibitem{BarnsleyDemko1985} M.~F. Barnsley and S.~Demko. \newblock Iterated function systems and the global construction of fractals. \newblock {\em Proc. Roy. Soc. London Ser. A}, 399(1817):243--275, 1985. \bibitem{BochiMorris2015} J.~Bochi and I.~D. Morris. \newblock Continuity properties of the lower spectral radius. \newblock {\em Proc. Lond. Math. Soc. (3)}, 110(2):477--509, 2015. \bibitem{BochiMorris2018} J.~Bochi and I.~D. Morris. \newblock Equilibrium states of generalised singular value potentials and applications to affine iterated function systems. \newblock {\em Geom. Funct. Anal.}, 28(4):995--1028, 2018. \bibitem{Bowen2008} R.~Bowen. \newblock {\em Equilibrium states and the ergodic theory of {A}nosov diffeomorphisms}, volume 470 of {\em Lecture Notes in Mathematics}. \newblock Springer-Verlag, Berlin, revised edition, 2008. \newblock With a preface by David Ruelle, Edited by Jean-Ren\'{e} Chazottes. \bibitem{Burrell2019} S.~A. Burrell. \newblock On the dimension and measure of inhomogeneous attractors. \newblock {\em Real Anal. Exchange}, 44(1):199--215, 2019. \bibitem{BurrellFraser2020} S.~A. Burrell and J.~M. Fraser. \newblock The dimensions of inhomogeneous self-affine sets. \newblock {\em Ann. Acad. Sci. Fenn. Math.}, 45(1):313--324, 2020. \bibitem{Edgar1992} G.~A. Edgar. \newblock Fractal dimension of self-affine sets: some examples. \newblock Number~28, pages 341--358. 1992. \newblock Measure theory (Oberwolfach, 1990). \bibitem{Falconer1988} K.~J. Falconer. \newblock The {H}ausdorff dimension of self-affine fractals. \newblock {\em Math. Proc. Cambridge Philos. Soc.}, 103(2):339--350, 1988. \bibitem{FengKaenmaki2011} D.-J. Feng and A.~K{\"a}enm{\"a}ki. \newblock Equilibrium states of the pressure function for products of matrices. \newblock {\em Discrete Contin. Dyn. Syst.}, 30(3):699--708, 2011. \bibitem{FengShmerkin2014} D.-J. Feng and P.~Shmerkin. \newblock Non-conformal repellers and the continuity of pressure for matrix cocycles. \newblock {\em Geom. Funct. Anal.}, 24(4):1101--1128, 2014. \bibitem{HochmanRapaport2021} M.~Hochman and A.~Rapaport. \newblock Hausdorff dimension of planar self-affine sets and measures with overlaps. \newblock {\em J. Eur. Math. Soc. (JEMS)}, 24(7):2361--2441, 2022. \bibitem{Hutchinson1981} J.~E. Hutchinson. \newblock Fractals and self-similarity. \newblock {\em Indiana Univ. Math. J.}, 30(5):713--747, 1981. \bibitem{Jungers2009} R.~Jungers. \newblock {\em The joint spectral radius}, volume 385 of {\em Lecture Notes in Control and Information Sciences}. \newblock Springer-Verlag, Berlin, 2009. \newblock Theory and applications. \bibitem{Kaenmaki2004} A.~K\"{a}enm\"{a}ki. 
\newblock On natural invariant measures on generalised iterated function systems. \newblock {\em Ann. Acad. Sci. Fenn. Math.}, 29(2):419--458, 2004. \bibitem{KaenmakiLehrback2017} A.~K\"{a}enm\"{a}ki and J.~Lehrb\"{a}ck. \newblock Measures with predetermined regularity and inhomogeneous self-similar sets. \newblock {\em Ark. Mat.}, 55(1):165--184, 2017. \bibitem{KaenmakiMorris2018} A.~K\"{a}enm\"{a}ki and I.~D. Morris. \newblock Structure of equilibrium states on self-affine sets and strict monotonicity of affinity dimension. \newblock {\em Proc. Lond. Math. Soc. (3)}, 116(4):929--956, 2018. \bibitem{KaenmakiVilppolainen2010} A.~K{\"a}enm{\"a}ki and M.~Vilppolainen. \newblock Dimension and measures on sub-self-affine sets. \newblock {\em Monatsh. Math.}, 161(3):271--293, 2010. \bibitem{Marstrand1954} J.~M. Marstrand. \newblock Some fundamental geometrical properties of plane sets of fractional dimensions. \newblock {\em Proc. London Math. Soc. (3)}, 4:257--302, 1954. \bibitem{Mattila1995} P.~Mattila. \newblock {\em Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability}. \newblock Cambridge University Press, Cambridge, 1995. \bibitem{Snigireva2008} N.~Snigireva. \newblock Inhomogeneous self-similar sets and measures. \newblock PhD Dissertation, University of St Andrews, 2008. \bibitem{Solomyak1998} B.~Solomyak. \newblock Measure and dimension for some fractal families. \newblock {\em Math. Proc. Cambridge Philos. Soc.}, 124(3):531--546, 1998. \end{thebibliography} \end{document}
2205.07346v2
http://arxiv.org/abs/2205.07346v2
Optimal Error-Detecting Codes for General Asymmetric Channels via Sperner Theory
\documentclass[conference]{IEEEtran} \usepackage{amsmath, amssymb, amsthm, mathtools} \usepackage{relsize, paralist, hyperref, xcolor, balance, setspace} \usepackage[T1]{fontenc} \newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newcommand{ \C }{ \bs{C} } \newcommand{ \myF }{ \mathbb{F} } \newcommand{ \myA }{ \mathcal A } \newcommand{ \myC }{ \mathcal C } \newcommand{ \myG }{ \mathcal G } \newcommand{ \myK }{ \mathcal K } \newcommand{ \myP }{ \mathcal P } \newcommand{ \myS }{ \mathcal S } \newcommand{ \myU }{ \mathcal U } \newcommand{ \myX }{ \mathcal X } \newcommand{ \myY }{ \mathcal Y } \newcommand{ \Z }{ \mathbb{Z} } \newcommand{ \N }{ \mathbb{N} } \newcommand{ \rank }{ \operatorname{rank} } \newcommand{ \myarrow }{ \stackrel{\sml{\myK}}{\rightsquigarrow} } \newcommand{ \sml }[1]{ \mathsmaller{#1} } \newcommand{ \bs }[1]{ \boldsymbol{#1} } \newcommand{ \ceil }[1]{ \lceil #1 \rceil } \newcommand{ \floor }[1]{ \lfloor #1 \rfloor } \newcommand{ \myqed }{ \hfill $\blacktriangle$ } \newcommand{ \qqed }{ \hfill \IEEEQED } \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} \title{\huge Optimal Error-Detecting Codes for General Asymmetric Channels via Sperner Theory} \author{\IEEEauthorblockN{Mladen~Kova\v{c}evi\'c and Dejan~Vukobratovi\'{c}} \IEEEauthorblockA{Faculty of Technical Sciences, University of Novi Sad, Serbia\\ Emails: [email protected], [email protected]} } \maketitle \begin{abstract} Several communication models that are of relevance in practice are asymmetric in the way they act on the transmitted ``objects''. Examples include channels in which the amplitudes of the transmitted pulses can only be decreased, channels in which the symbols can only be deleted, channels in which non-zero symbols can only be shifted to the right (e.g., timing channels), subspace channels in which the dimension of the transmitted vector space can only be reduced, unordered storage channels in which the cardinality of the stored (multi)set can only be reduced, etc. We introduce a formal definition of an asymmetric channel as a channel whose action induces a partial order on the set of all possible inputs, and show that this definition captures all the above examples. Such a general approach allows one to treat all these different models in a unified way, and to obtain a characterization of optimal error-detecting codes for many interesting asymmetric channels by using Sperner theory. \end{abstract} \section{Introduction} \label{sec:intro} Several important channel models possess an intrinsic asymmetry in the way they act on the transmitted ``objects''. A classical example is the binary $ \mathsf{Z} $-channel in which the transmitted $ 1 $'s may be received as $ 0 $'s, but not vice versa. In this article we formalize the notion of an asymmetric channel by using order theory, and illustrate that the given definition captures this and many more examples. Our main goals are the following: \begin{inparaenum} \item[1)] to introduce a framework that enables one to treat many different kinds of asymmetric channels in a unified way, and \item[2)] to demonstrate its usefulness and meaningfulness through examples. In particular, the usefulness of the framework is illustrated by describing \emph{optimal} error-detecting codes for a broad class of asymmetric channels (for all channel parameters), a result that follows from Kleitman's theorem on posets satisfying the so-called LYM inequality. 
\end{inparaenum} \subsection{Communication channels} \label{sec:channels} \begin{definition} \label{def:channel} Let $ \myX, \myY $ be nonempty sets. A communication channel on $ (\myX, \myY) $ is a subset $ \myK \subseteq \myX \times \myY $ satisfying\linebreak $ \forall x \in \myX \; \exists y \in \myY \; (x,y) \in \myK $ and $ \forall y \in \myY \; \exists x \in \myX \; (x,y) \in \myK $. We also use the notation $ {x \myarrow y} $, or simply $ x \rightsquigarrow y $ when there is no risk of confusion, for $ (x,y) \in \myK $. For a given channel $ \myK \subseteq \myX \times \myY $, we define its dual channel as $ \myK^\textnormal{d} = \{ (y, x) : (x, y) \in \myK \} $. \end{definition} Note that we describe communication channels purely in combinatorial terms, as \emph{relations} in Cartesian products $ \myX \times \myY $.\linebreak Here $ \myX $ is thought of as the set of all possible inputs, and $ \myY $ as the set of all possible outputs of the channel. The \pagebreak expression $ x \rightsquigarrow y $ means that the input $ x $ can produce the output $ y $ with positive probability. We do not assign particular values of probabilities to each pair $ (x,y) \in \myK $ as they are irrelevant for the problems that we intend to discuss. \subsection{Partially ordered sets} \label{sec:posets} In what follows, we shall use several notions from order theory, so we recall the basics here \cite{engel, stanley}. A partially ordered set (or poset) is a set $ \myU $ together with a relation $ \preceq $ satisfying, for all $ x, y, z \in \myU $: \begin{inparaenum} \item[1)] reflexivity: $ x \preceq x $, \item[2)] asymmetry (or antisymmetry): if $ x \preceq y $ and $ y \preceq x $, then $ x = y $, \item[3)] transitivity: if $ x \preceq y $ and $ y \preceq z $, then $ x \preceq z $. \end{inparaenum} Two elements $ x, y \in \myU $ are said to be comparable if either $ x \preceq y $ or $ y \preceq x $. They are said to be incomparable otherwise. A chain in a poset $ (\myU, \preceq) $ is a subset of $ \myU $ in which any two elements are comparable. An antichain is a subset of $ \myU $ any two distinct elements of which are incomparable. A function $ \rho: \myU \to \mathbb{N} $ is called a rank function if $ \rho(y) = \rho(x) + 1 $ whenever $ y $ covers $ x $, meaning that $ x \preceq y $ and there is no $ y' \in \myU $ such that $ x \preceq y' \preceq y $. A poset with a rank function is called graded. In a graded poset with rank function $ \rho $ we denote $ \myU_{[\underline{\ell}, \overline{\ell}]} = \{ x \in \myU : \underline{\ell} \leqslant \rho(x) \leqslant \overline{\ell} \} $, and we also write $ \myU_\ell = \myU_{[\ell,\ell]} $ (here the rank function $ \rho $ is omitted from the notation as it is usually understood from the context). Hence, $ \myU = \bigcup_\ell \myU_\ell $. A graded poset is said to have Sperner property if $ \myU_\ell $ is an antichain of maximum cardinality in $ (\myU, \preceq) $, for some $ \ell $. A poset is called rank-unimodal if the sequence $ |\myU_\ell| $ is unimodal (i.e., an increasing function of $ \ell $ when $ \ell \leqslant \ell' $, and decreasing when $ \ell \geqslant \ell' $, for some $ \ell' $). We say that a graded poset $ (\myU, \preceq) $ possesses the LYM (Lubell--Yamamoto--Meshalkin) property \cite{kleitman} if there exists a nonempty list of maximal chains such that, for any $ \ell $, each of the elements of rank $ \ell $ appear in the same number of chains. 
In other words, if there are $ L $ chains in the list, then each element of rank $ \ell $ appears in $ L/|\myU_\ell| $ of the chains. We shall call a poset \emph{normal} if it satisfies the LYM property, see \cite[Sec.~4.5 and Thm 4.5.1]{engel}. A simple sufficient condition for a poset to be normal is that it be regular \cite[Cor.~4.5.2]{engel}, i.e., that both the number of elements that cover $ x $ and the number of elements that are covered by $ x $ depend only on the rank of $ x $. In Section \ref{sec:examples} we shall see that many standard examples of posets, including the Boolean lattice, the subspace lattice, the Young's lattice, chain products, etc., arise naturally in the analysis of communications channels. \pagebreak \section{General asymmetric channels and\\error-detecting codes} \label{sec:asymmetric} In this section we give a formal definition of asymmetric channels and the corresponding codes which unifies and generalizes many scenarios analyzed in the literature. We assume hereafter that the sets of all possible channel inputs and all possible channels outputs are equal, $ \myX = \myY $. For a very broad class of communication channels, the relation $ \rightsquigarrow $ is reflexive, i.e., $ x \rightsquigarrow x $ (any channel input can be received unimpaired, in case there is no noise), and transitive, i.e., if $ x \rightsquigarrow y $ and $ y \rightsquigarrow z $, then $ x \rightsquigarrow z $ (if there is a noise pattern that transforms $ x $ into $ y $, and a noise pattern that transforms $ y $ into $ z $, then there is a noise pattern -- a combination of the two -- that transforms $ x $ into $ z $). Given such a channel, we say that it is \emph{asymmetric} if the relation $ \rightsquigarrow $ is asymmetric, i.e., if $ x \rightsquigarrow y $, $ x \neq y $, implies that $ y \not\rightsquigarrow x $. In other words, we call a channel asymmetric if the channel action induces a partial order on the space of all inputs $ \myX $. \begin{definition} \label{def:asymmetric} A communication channel $ \myK \subseteq \myX^2 $ is said to be asymmetric if $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $ is a partially ordered set. We say that such a channel is * if the poset $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $ is *, where * stands for an arbitrary property a poset may have (e.g., graded, Sperner, normal, etc.). \end{definition} Many asymmetric channels that arise in practice, including all the examples mentioned in this paper, are graded as there are natural rank functions that may be assigned to them. For a graded channel $ \myK $, we denote by $ \myK_{[\underline{\ell}, \overline{\ell}]} = \myK \cap \big( \myX_{[\underline{\ell}, \overline{\ell}]} \big)^{\!2} $ its natural restriction to inputs of rank $ \underline{\ell}, \ldots, \overline{\ell} $. \begin{definition} \label{def:edc} We say that $ \bs{C} \subseteq \myX $ is a code detecting up to $ t $ errors in a graded asymmetric channel $ \myK \subseteq \myX^2 $ if, for all $ x, y \in \C $, \begin{align} \label{eq:detectgen} x \myarrow y \; \land \; x \neq y \quad \Rightarrow \quad | \rank(x) - \rank(y) | > t . \end{align} We say that $ \bs{C} \subseteq \myX $ detects \emph{all} error patterns in an asymmetric channel $ \myK \subseteq \myX^2 $ if, for all $ x, y \in \C $, \begin{align} \label{eq:detectgen2} x \myarrow y \quad \Rightarrow \quad x = y . 
\end{align} \end{definition} For graded channels, the condition \eqref{eq:detectgen2} is satisfied if and only if the condition \eqref{eq:detectgen} holds for any $ t $. In words, $ \bs{C} $ detects all error patterns in a given asymmetric channel if no element of $ \C $ can produce another element of $ \C $ at the channel output. If this is the case, the receiver will easily recognize whenever the transmission is erroneous because the received object is not going to be a valid codeword which could have been transmitted. Yet another way of saying that $ \C $ detects all error patterns is the following. \begin{proposition} \label{thm:edc} $ \C \subseteq \myX $ detects all error patterns in an asymmetric channel $ \myK \subseteq \myX^2 $ if and only if $ \C $ is an antichain in the corresponding poset $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $. \end{proposition} A simple example of an antichain, and hence a code detecting all error patterns in a graded asymmetric channel, is the level set $ \myX_\ell $, for an arbitrary $ \ell $. \pagebreak \begin{definition} \label{def:optimal} We say that $ \C \subseteq \myX $ is an optimal code detecting up to $ t $ errors (resp. all error patterns) in a graded asymmetric channel $ \myK \subseteq \myX^2 $ if there is no code of cardinality larger than $ |\C| $ that satisfies \eqref{eq:detectgen} (resp. \eqref{eq:detectgen2}). \end{definition} Hence, an optimal code detecting all error patterns in an asymmetric channel $ \myK \subseteq \myX^2 $ is an antichain of maximum cardinality in the poset $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $. Channels in which the code $ \myX_\ell $ is optimal, for some $ \ell $, are called Sperner channels. All channels treated in this paper are Sperner. An example of an error-detecting code, of which the code $ \myX_\ell $ is a special case (obtained for $ t \to \infty $), is given in the following proposition. \begin{proposition} \label{thm:tedc} Let $ \myK \subseteq \myX^2 $ be a graded asymmetric channel, and $ (\ell_n)_n $ a sequence of integers satisfying $ \ell_n - \ell_{n-1} > t $, $ \forall n $. The code $ \C = \bigcup_{n} \myX_{\ell_n} $ detects up to $ t $ errors in $ \myK $. \end{proposition} If the channel is normal, an optimal code detecting up to $ t $ errors is of the form given in Proposition \ref{thm:tedc}. We state this fact for channels which are additionally rank-unimodal, as this is the case that is most common. \begin{theorem} \label{thm:optimal} Let $ \myK \subseteq \myX^2 $ be a normal rank-unimodal asymmetric channel. The maximum cardinality of a code detecting up to $ t $ errors in $ \myK_{[\underline{\ell}, \overline{\ell}]} $ is given by \begin{equation} \label{eq:maxsumgen} \max_{m} \sum^{\overline{\ell}}_{\substack{ \ell=\underline{\ell} \\ \ell \, \equiv \, m \; (\operatorname{mod}\, t+1) } } |\myX_\ell| . \end{equation} \end{theorem} \begin{IEEEproof} This is essentially a restatement of the result of Kleitman~\cite{kleitman} (see also \cite[Cor.~4.5.4]{engel}) which states that, in a finite normal poset $ ( \myU, \preceq ) $, the largest cardinality of a family $ \C \subseteq \myU $ having the property that, for all distinct $ x, y \in \C $, $ x \preceq y $ implies that $ \rank(y) - \rank(x) > t $, is $ \max_F \sum_{x \in F} |\myU_{\rank(x)}| $. 
The maximum here is taken over all chains $ F = \{x_1, x_2, \ldots, x_c\} $ satisfying $ x_1 \preceq x_2 \preceq \cdots \preceq x_c $ and $ \rank(x_{i+1}) - \rank(x_i) > t $ for $ i = 1, 2, \ldots, c-1 $, and all $ c = 1, 2, \ldots $. If the poset $ ( \myU, \preceq ) $ is in addition rank-unimodal, then it is easy to see that the maximum is attained for a chain $ F $ satisfying $ \rank(x_{i+1}) - \rank(x_i) = t + 1 $ for $ i = 1, 2, \ldots, c-1 $, and that the maximum cardinality of a family $ \C $ having the stated property can therefore be written in the simpler form \begin{equation} \label{eq:maxsumgen2} \max_{m} \sum_{\ell \, \equiv \, m \; (\operatorname{mod}\, t+1)} |\myU_\ell| . \end{equation} Finally, \eqref{eq:maxsumgen} follows by recalling that the restriction $ ( \myU_{[\underline{\ell}, \overline{\ell}]}, \preceq ) $ of a normal poset $ ( \myU, \preceq ) $ is normal \cite[Prop. 4.5.3]{engel}. \end{IEEEproof} \vspace{2mm} We note that an optimal value of $ m $ in \eqref{eq:maxsumgen} can be determined explicitly in many concrete examples (see Section~\ref{sec:examples}). We conclude this section with the following claim which enables one to directly apply the results pertaining to a given asymmetric channel to its dual. \begin{proposition} \label{thm:dual} A channel $ \myK \subseteq \myX^2 $ is asymmetric if and only if its dual $ \myK^\textnormal{d} $ is asymmetric. A code $ \bs{C} \subseteq \myX $ detects up to $ t $ errors in $ \myK $ if and only if it detects up to $ t $ errors in $ \myK^\textnormal{d} $. \end{proposition} \section{Examples} \label{sec:examples} In this section we list several examples of communication channels that have been analyzed in the literature in different contexts and that are asymmetric in the sense of Definition \ref{def:asymmetric}. For each of them, a characterization of optimal error-detecting codes is given based on Theorem \ref{thm:optimal}. \subsection{Codes in power sets} \label{sec:subset} Consider a communication channel with $ \myX = \myY = 2^{\{1,\ldots,n\}} $ and with $ A \rightsquigarrow B $ if and only if $ B \subseteq A $, where $ A, B \subseteq \{1, \ldots, n\} $. Codes defined in the power set $ 2^{\{1,\ldots,n\}} $ were proposed in \cite{gadouleau+goupil2, kovacevic+vukobratovic_clet} for error control in networks that randomly reorder the transmitted packets (where the set $ \{1,\ldots,n\} $ is identified with the set of all possible packets), and are also of interest in scenarios where data is written in an unordered way, such as DNA-based data storage systems \cite{lenz}. Our additional assumption here is that the received set is always a subset of the transmitted set, i.e., the noise is represented by ``set reductions''. These kinds of errors may be thought of as consequences of packet losses/deletions. Namely, if $ t $ packets from the transmitted set $ A $ are lost in the channel, then the received set $ B $ will be a subset of $ A $ of cardinality $ |A| - t $. We are interested in codes that are able to detect up to $ t $ packet deletions, i.e., codes having the property that if $ B \subsetneq A $, $ |A| - |B| \leqslant t $, then $ A $ and $ B $ cannot both be codewords. It is easy to see that the above channel is asymmetric in the sense of Definition \ref{def:asymmetric}; the ``asymmetry'' in this model is reflected in the fact that the cardinality of the transmitted set can only be reduced. The poset $ ( \myX, \rightsquigarrow ) $ is the so-called Boolean lattice \cite[Ex.~1.3.1]{engel}. 
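For instance, for $ n = 2 $ the poset consists of the four sets $ \emptyset $, $ \{1\} $, $ \{2\} $, and $ \{1,2\} $, ordered by inclusion, and its unique antichain of maximum cardinality is the middle level $ \{\{1\},\{2\}\} $; accordingly, a code detecting all deletions can contain at most two of the possible transmitted sets.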
The rank function associated with it is the set cardinality: $ \rank(A) = |A| $, for any $ A \subseteq \{1, \ldots, n\} $. This poset is rank-unimodal, with $ |\myX_\ell| = \binom{n}{\ell} $, and normal \cite[Ex.~4.6.1]{engel}. By applying Theorem~\ref{thm:optimal} we then obtain the maximum cardinality of a code $ \C \subseteq 2^{\{1,\ldots,n\}} $ detecting up to $ t $ deletions. Furthermore, an optimal value of $ m $ in \eqref{eq:maxsumgen} can be found explicitly in this case. This claim was first stated by Katona~\cite{katona} in a different terminology. \begin{theorem} \label{thm:subset} The maximum cardinality of a code $ \C \subseteq 2^{\{1,\ldots,n\}} $ detecting up to $ t $ deletions is \begin{equation} \label{eq:maxsumsets} \sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n}{2} \rfloor \; (\operatorname{mod}\, t+1) } } \binom{n}{\ell} \end{equation} \end{theorem} Setting $ t \to \infty $ (in fact, $ t > \lceil n/2 \rceil $ is sufficient), we conclude that the maximum cardinality of a code detecting any number of deletions is $ \binom{n}{\lfloor n/2 \rfloor} = \binom{n}{\lceil n/2 \rceil} $. This is a restatement of the well-known Sperner's theorem \cite{sperner}, \cite[Thm 1.1.1]{engel}. For the above channel, its dual (see Definition~\ref{def:channel}) is the channel with $ \myX = 2^{\{1, \ldots, n\}} $ in which $ A \rightsquigarrow B $ if and only if $ B \supseteq A $. This kind of noise, ``set augmentation'', may be thought of as a consequence of packet insertions. Proposition~\ref{thm:dual} implies that the expression in \eqref{eq:maxsumsets} is also the maximum cardinality of a code $ \C \subseteq \myX $ detecting up to $ t $ insertions. \subsection{Codes in the space of multisets} \label{sec:multiset} A natural generalization of the model from the previous subsection, also motivated by unordered storage or random permutation channels, is obtained by allowing repetitions of symbols, i.e., by allowing the codewords to be \emph{multisets} over a given alphabet \cite{multiset}. A multiset $ A $ over $ \{1, \ldots, n\} $ can be uniquely described by its multiplicity vector $ \mu_A = (\mu_A(1), \ldots, \mu_A(n)) \in \N^n $, where $ \N = \{0, 1, \ldots\} $. Here $ \mu_A(i) $ is the number of occurrences of the symbol $ i \in \{1, \ldots, n\} $ in $ A $. We again consider the deletion channel in which $ A \rightsquigarrow B $ if and only if $ B \subseteq A $ or, equivalently, if $ \mu_B \leqslant \mu_A $ (coordinate wise). If we agree to use the multiplicity vector representation of multisets, we may take $ \myX = \myY = \N^n $. The channel just described is asymmetric in the sense of Definition~\ref{def:asymmetric}. The rank function associated with the poset $ {(\myX, \rightsquigarrow)} $ is the multiset cardinality: $ \rank(A) = \sum_{i=1}^n \mu_A(i) $. We have $ |\myX_\ell| = \binom{\ell + n - 1}{n - 1} $. The following claim is a multiset analog of Theorem~\ref{thm:subset}. \begin{theorem} \label{thm:multiset} The maximum cardinality of a code $ \C \subseteq \myX_{[\underline{\ell}, \overline{\ell}]} $, $ \myX = \N^n $, detecting up to $ t $ deletions is \begin{align} \label{eq:Mcodesize} \sum^{\lfloor \frac{\overline{\ell} - \underline{\ell}}{t+1} \rfloor}_{i=0} \binom{\overline{\ell} - i (t+1) + n - 1}{n - 1} . \end{align} \end{theorem} \begin{IEEEproof} The poset $ (\myX, \rightsquigarrow) $ is normal as it is a product of chains \cite[Ex.~4.6.1]{engel}. 
We can therefore apply Theorem~\ref{thm:optimal}.\linebreak Furthermore, since $ |\myX_\ell| = \binom{\ell + n - 1}{n - 1} $ is a monotonically increasing function of $ \ell $, the optimal choice of $ m $ in \eqref{eq:maxsumgen} is $ \overline{\ell} $, which implies \eqref{eq:Mcodesize}. \end{IEEEproof} \vspace{2mm} The dual channel is the channel in which $ A \rightsquigarrow B $ if and only if $ B \supseteq A $, i.e., $ \mu_B \geqslant \mu_A $. These kinds of errors -- multiset augmentations -- may be caused by insertions or duplications. \subsection{Codes for the binary $ \mathsf{Z} $-channel and its generalizations} \label{sec:Z} Another interpretation of Katona's theorem \cite{katona} in the coding-theoretic context, easily deduced by identifying subsets of $ \{1, \ldots, n\} $ with sequences in $ \{0, 1\}^n $, is the following: the expression in \eqref{eq:maxsumsets} is the maximum size of a binary code of length $ n $ detecting up to $ t $ asymmetric errors, i.e., errors of the form $ 1 \to 0 $ \cite{borden}. By using Kleitman's result \cite{kleitman}, Borden~\cite{borden} also generalized this statement and described optimal codes over arbitrary alphabets detecting $ t $ asymmetric errors. (Error control problems in these kinds of channels have been studied quite extensively; see, e.g., \cite{blaum, bose+rao}.) To describe the channel in more precise terms, we take $ \myX = \myY = \{0, 1, \ldots, a-1\}^n $ and we let $ (x_1, \ldots, x_n) \rightsquigarrow (y_1, \ldots, y_n) $ if and only if $ y_i \leqslant x_i $ for all $ i = 1, \ldots, n $. This channel is asymmetric and the poset $ (\myX, \rightsquigarrow) $ is normal \cite[Ex.~4.6.1]{engel}. The appropriate rank function here is the Manhattan weight: $ \rank(x_1, \ldots, x_n) = \sum_{i=1}^n x_i $. In the binary case ($ {a = 2} $), this channel is called the $ \mathsf{Z} $-channel and the Manhattan weight coincides with the Hamming weight. Let $ c(N, M, \ell) $ denote the number of \emph{compositions} of the number $ \ell $ with $ M $ non-negative parts, each part being $ \leqslant\! N $ \cite[Sec.~4.2]{andrews}. In other words, $ c(N, M, \ell) $ is the number of vectors from $ \{0, 1, \ldots, N\}^M $ having Manhattan weight $ \ell $. Restricted integer compositions are well-studied objects; for an explicit expression for $ c(N, M, \ell) $, see \cite[p.~307]{stanley}. \begin{theorem}[Borden \cite{borden}] \label{thm:Z} The maximum cardinality of a code $ \C \subseteq \{0, 1, \ldots, a-1\}^n $ detecting up to $ t $ asymmetric errors is \begin{align} \label{eq:Zcode} \sum^{n(a-1)}_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n(a-1)}{2} \rfloor \; (\operatorname{mod}\, t+1) }} c(a-1, n, \ell) . \end{align} \end{theorem} The channel dual to the one described above is the channel in which $ (x_1, \ldots, x_n) \rightsquigarrow (y_1, \ldots, y_n) $ if and only if $ y_i \geqslant x_i $ for all $ i = 1, \ldots, n $. \subsection{Subspace codes} \label{sec:subspace} Let $ \myF_q $ denote the field of $ q $ elements, where $ q $ is a prime power, and $ \myF_q^n $ an $ n $-dimensional vector space over $ \myF_q $. Denote by $ \myP_q(n) $ the set of all subspaces of $ \myF_q^n $ (also known as the projective space), and by $ \myG_q(n , \ell) $ the set of all subspaces of dimension $ \ell $ (also known as the Grassmannian).
The cardinality of $ \myG_q(n , \ell) $ is expressed through the $ q $-binomial (or Gaussian) coefficients \cite[Ch.~24]{vanlint+wilson}: \begin{align} \label{eq:gcoeff} \left| \myG_q(n , \ell) \right| = \binom{n}{\ell}_{\! q} = \prod_{i=0}^{\ell-1} \frac{ q^{n-i} - 1 }{ q^{\ell-i} - 1 } . \end{align} The following well-known properties of $ \binom{n}{\ell}_{\! q} $ will be useful: \begin{inparaenum} \item[1)] symmetry: $ \binom{n}{\ell}_{\! q} = \binom{n}{n-\ell}_{\! q} $, and \item[2)] unimodality: $ \binom{n}{\ell}_{\! q} $ is increasing in $ \ell $ for $ \ell \leqslant \frac{n}{2} $, and decreasing for $ \ell \geqslant \frac{n}{2} $. \end{inparaenum} We use the convention that $ \binom{n}{\ell}_{\! q} = 0 $ when $ \ell < 0 $ or $ \ell > n $. Codes in $ \myP_q(n) $ were proposed in \cite{koetter+kschischang} for error control in networks employing random linear network coding \cite{ho}, in which case $ \myF_q^n $ corresponds to the set of all length-$ n $ packets (over a $ q $-ary alphabet) that can be exchanged over the network links. We consider a channel model in which the only impairments are ``dimension reductions'', meaning that, for any given transmitted vector space $ U \subseteq \myF_q^n $, the possible channel outputs are subspaces of $ U $. These kinds of errors can be caused by packet losses, unfortunate choices of the coefficients in the performed linear combinations in the network (resulting in linearly dependent packets at the receiving side), etc. In the notation introduced earlier, we set $ \myX = \myY = \myP_q(n) $ and define the channel by: $ U \rightsquigarrow V $ if and only if $ V $ is a subspace of $ U $. This channel is asymmetric. The poset $ (\myX, \rightsquigarrow) $ is the so-called linear lattice (or the subspace lattice) \cite[Ex.~1.3.9]{engel}. The rank function associated with it is the dimension of a vector space: $ \rank(U) = \dim U $, for $ U \in \myP_q(n) $. We have $ |\myX_\ell| = | \myG_q(n , \ell) | = \binom{n}{\ell}_{\! q} $. The following statement may be seen as the $ q $-analog \cite[Ch.~24]{vanlint+wilson} of Katona's theorem \cite{katona}, or of Theorem \ref{thm:subset}. \begin{theorem} \label{thm:subspace} The maximum cardinality of a code $ \C \subseteq \myP_q(n) $ detecting dimension reductions of up to $ t $ is \begin{align} \label{eq:codesize} \sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n}{2} \rfloor \; (\operatorname{mod}\, t+1) } } \binom{n}{\ell}_{\! q} . \end{align} \end{theorem} \begin{IEEEproof} The poset $ (\myP_q(n), \subseteq) $ is rank-unimodal and normal \cite[Ex.~4.5.1]{engel} and hence, by Theorem \ref{thm:optimal}, the maximum cardinality of a code detecting dimension reductions of up to $ t $ can be expressed in the form \begin{subequations} \begin{align} \label{eq:maxsum} &\max_{m} \sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, m \; (\operatorname{mod}\, t+1) } } \binom{n}{\ell}_{\! q} \\ \label{eq:maxsumr} &\;\;\;\;= \max_{r \in \{0, 1, \ldots, t\}} \; \sum_{j \in \Z} \binom{n}{\lfloor \frac{n}{2} \rfloor + r + j(t+1)}_{\! q} . \end{align} \end{subequations} (Expression \eqref{eq:maxsum} was also given in \cite[Thm~7]{ahlswede+aydinian}.) We need to show that $ m = \lfloor n / 2 \rfloor $ is a maximizer in \eqref{eq:maxsum} or, equivalently, that $ r = 0 $ is a maximizer in \eqref{eq:maxsumr}. Let us assume for simplicity that $ n $ is even; the proof for odd $ n $ is similar. 
What we need to prove is that the following expression is non-negative, for any $ r \in \{1, \ldots, t\} $, \begin{subequations} \label{eq:j} \begin{align} \nonumber &\sum_{j \in \Z} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} - \sum_{j \in \Z} \binom{n}{\frac{n}{2} + r + j(t+1)}_{\! q} \\ \label{eq:jpos} &\;\;= \sum_{j > 0} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} - \binom{n}{\frac{n}{2} + r + j(t+1)}_{\! q} + \\ \label{eq:jzero} &\;\;\phantom{=}\ \binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + r}_{\! q} - \binom{n}{\frac{n}{2} + r - (t+1)}_{\! q} + \\ \label{eq:jneg} &\;\;\phantom{=}\ \sum_{j < 0} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} - \binom{n}{\frac{n}{2} + r + (j-1)(t+1)}_{\! q} . \end{align} \end{subequations} Indeed, since the $ q $-binomial coefficients are unimodal and maximized at $ \ell = n / 2 $, each of the summands in the sums \eqref{eq:jpos} and \eqref{eq:jneg} is non-negative, and the expression in \eqref{eq:jzero} is also non-negative because\begin{subequations} \begin{align} \nonumber &\binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + r}_{\! q} - \binom{n}{\frac{n}{2} + r - (t+1)}_{\! q} \quad \\ \label{eq:a1} &\;\;\;\;\geqslant \binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + 1}_{\! q} - \binom{n}{\frac{n}{2} -1}_{\! q} \\ \label{eq:a2} &\;\;\;\;= \binom{n}{\frac{n}{2}}_{\! q} - 2 \binom{n}{\frac{n}{2} - 1}_{\! q} \\ \label{eq:a3} &\;\;\;\;= \binom{n}{\frac{n}{2}}_{\! q} \left( 1 - 2 \frac{q^{\frac{n}{2}} - 1}{q^{\frac{n}{2} + 1} - 1} \right) \\ \label{eq:a4} &\;\;\;\;> \binom{n}{\frac{n}{2}}_{\! q} \left( 1 - 2 \frac{1}{q} \right) \\ \label{eq:a5} &\;\;\;\;\geqslant 0 , \end{align} \end{subequations} where \eqref{eq:a1} and \eqref{eq:a2} follow from unimodality and symmetry of $ \binom{n}{\ell}_{\! q} $, \eqref{eq:a3} is obtained by substituting the definition of $ \binom{n}{\ell}_{\! q} $, \eqref{eq:a4} follows from the fact that $ \frac{\alpha-1}{\beta-1} < \frac{\alpha}{\beta} $ when $ 1 < \alpha < \beta $, and \eqref{eq:a5} is due to $ q \geqslant 2 $. \end{IEEEproof} \vspace{2mm} As a special case when $ t \to \infty $ (in fact, $ t > \lceil n/2 \rceil $ is sufficient), we conclude that the maximum cardinality of a code detecting arbitrary dimension reductions is $ \binom{n}{\lfloor n/2 \rfloor}_{\! q} $. In other words, $ \myG_q(n, \lfloor n/2 \rfloor) $ is an antichain of maximum cardinality in the poset $ (\myP_q(n), \subseteq) $ (see Prop.~\ref{thm:edc}). This is the well-known $ q $-analog of Sperner's theorem \cite[Thm 24.1]{vanlint+wilson}. The dual channel in this example is the channel in which $ U \rightsquigarrow V $ if and only if $ U $ is a subspace of $ V $. \subsection{Codes for deletion and insertion channels} \label{sec:deletions} Consider the channel with $ \myX = \myY = \{0, 1, \ldots, a-1\}^* = \bigcup_{n=0}^\infty \{0, 1, \ldots, a-1\}^n $ in which $ x \rightsquigarrow y $ if and only if $ y $ is a subsequence of $ x $. This is the so-called deletion channel in which the output sequence is produced by deleting some of the symbols of the input sequence. The channel is asymmetric in the sense of Definition~\ref{def:asymmetric}. The rank function associated with the poset $ (\myX, \rightsquigarrow) $ is the sequence length: for any $ x = x_1 \cdots x_\ell $, where $ x_i \in \{0, 1, \ldots, a-1\} $, $ \rank(x) = \ell $. We have $ |\myX_\ell| = a^\ell $.
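For small parameters the detection requirement is easy to verify exhaustively. The following Python sketch (with ad hoc function names) checks that the set of all binary sequences whose lengths lie between $2$ and $6$ and are congruent to $6$ modulo $t+1$, with $t=1$, detects up to one deletion; its size, $84$, agrees with the expression in the theorem that follows.

\begin{verbatim}
from itertools import product

def is_subsequence(y, x):
    it = iter(x)
    return all(any(c == d for d in it) for c in y)

def detects_t_deletions(code, t):
    # No codeword may be obtainable from a longer codeword by at most t deletions.
    return not any(len(x) > len(y) and len(x) - len(y) <= t and is_subsequence(y, x)
                   for x in code for y in code)

a, l_lo, l_hi, t = 2, 2, 6, 1
code = [w for L in range(l_lo, l_hi + 1) if (l_hi - L) % (t + 1) == 0
        for w in product(range(a), repeat=L)]
assert detects_t_deletions(code, t)
print(len(code))   # 84 = 2^6 + 2^4 + 2^2
\end{verbatim}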
Given that $ \myX $ is infinite, we shall formulate the following statement for the restriction $ \myX_{[\underline{\ell}, \overline{\ell}]} $, i.e., under the assumption that only sequences of lengths $ \underline{\ell}, \ldots, \overline{\ell} $ are allowed as inputs. This is a reasonable assumption from the practical viewpoint. \begin{theorem} The maximum cardinality of a code $ \C \subseteq \bigcup_{\ell=\underline{\ell}}^{\overline{\ell}} \{0, 1, \ldots, a-1\}^\ell $ detecting up to $ t $ deletions is \begin{align} \sum_{j=0}^{\lfloor \frac{\overline{\ell} - \underline{\ell}}{t+1} \rfloor} a^{\overline{\ell} - j (t+1)} . \end{align} \end{theorem} \begin{IEEEproof} The poset $ (\myX_{[0, \overline{\ell}]}, \rightsquigarrow) $ is normal. To see this, note that the list of $ a^{\overline{\ell}} $ maximal chains of the form $ \epsilon \rightsquigarrow x_1 \rightsquigarrow x_1 x_2 \rightsquigarrow \cdots \rightsquigarrow x_1 x_2 \ldots x_{\overline{\ell}} $, where $ \epsilon $ is the empty sequence and $ x_i \in \{0, 1, \ldots, a-1\} $, satisfies the condition that each element of $ \myX_{[0, {\overline{\ell}}]} $ of rank $ \ell $ appears in the same number of chains, namely $ a^{\overline{\ell} - \ell} $ (see Section~\ref{sec:posets}). The claim now follows by invoking Theorem \ref{thm:optimal} and by using the fact that $ |\myX_\ell| = a^\ell $ is a monotonically increasing function of $ \ell $, implying that the optimal choice for $ m $ in \eqref{eq:maxsumgen} is $ \overline{\ell} $. \end{IEEEproof} \vspace{2mm} The dual channel in this example is the insertion channel in which $ x \rightsquigarrow y $ if and only if $ x $ is a subsequence of $ y $. \subsection{Codes for bit-shift and timing channels} \label{sec:shift} Let $ \myX = \myY = \{0, 1\}^n $, and let us describe binary sequences by specifying the positions of $ 1 $'s in them. More precisely, we identify $ x \in \{0,1\}^n $ with the integer sequence $ \lambda_x = (\lambda_x(1), \ldots, \lambda_x(w)) $, where $ \lambda_x(i) $ is the position of the $ i $'th $ 1 $ in $ x $, and $ w $ is the Hamming weight of $ x $. This sequence satisfies $ 1 \leqslant \lambda_x(1) < \lambda_x(2) < \cdots < \lambda_x(w) \leqslant n $. For example, for $ x = 1 1 0 0 1 0 1 $, $ \lambda_x = (1, 2, 5, 7) $. In fact, it will be more convenient to use a slightly different description of a sequence $ x $, namely $ \tilde{\lambda}_x = \lambda_x - (1, 2, \ldots, w) $, for which it holds that $ 0 \leqslant \tilde{\lambda}_x(1) \leqslant \tilde{\lambda}_x(2) \leqslant \cdots \leqslant \tilde{\lambda}_x(w) \leqslant n - w $. Consider a communication model in which each of the $ 1 $'s in the input sequence may be shifted to the right \cite{shamai+zehavi, kovacevic}. Such models are also useful for describing timing channels wherein $ 1 $'s indicate the time slots in which packets have been sent and shifts of these $ 1 $'s are consequences of packet delays; see for example \cite{kovacevic+popovski}. Thus $ x \rightsquigarrow y $ if and only if $ x $ and $ y $ have the same Hamming weight and $ \lambda_x \leqslant \lambda_y $ (coordinate wise). Since a necessary condition for $ x \rightsquigarrow y $ is that $ x $ and $ y $ have the same Hamming weight, we may consider the sets of inputs $ \{0, 1\}^n_w \equiv \{ x \in \{0, 1\}^n : \sum_{i=1}^n x_i = w\} $ separately, for each $ w = 0, \ldots, n $ (here $ x = x_1 \cdots x_n $). The above channel is asymmetric. The poset $ (\{0, 1\}^n_w, \rightsquigarrow) $ is denoted $ L(n-w, w) $ in \cite[Ex.~1.3.13]{engel}. 
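The description of sequences through the profiles $ \lambda_x $ and $ \tilde{\lambda}_x $ is easy to experiment with. The following Python sketch (function names are ours) computes both profiles and tests the degradation relation $ x \rightsquigarrow y $ of this channel, i.e., equal Hamming weight and $ \lambda_x \leqslant \lambda_y $ coordinate wise; it reproduces the example $ x = 1100101 $ given above.

\begin{verbatim}
def profiles(x):
    # lambda_x: positions of the 1's;  tilde_lambda_x: lambda_x - (1, 2, ..., w)
    lam = [i + 1 for i, b in enumerate(x) if b == 1]
    return lam, [pos - (j + 1) for j, pos in enumerate(lam)]

def right_shift_degradation(x, y):
    lx, _ = profiles(x)
    ly, _ = profiles(y)
    return len(lx) == len(ly) and all(a <= b for a, b in zip(lx, ly))

assert profiles([1, 1, 0, 0, 1, 0, 1]) == ([1, 2, 5, 7], [0, 0, 2, 3])
assert right_shift_degradation([1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 0, 0, 1, 1])
\end{verbatim}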
The rank function on this poset is defined by: $ \rank(x) = \sum_{i=1}^{w} \tilde{\lambda}_x(i) $, where $ w $ is the Hamming weight of $ x $. Let $ p(N, M, \ell) $ denote the number of \emph{partitions} of the number $ \ell $ into at most $ M $ positive parts, each part being $ \leqslant\! N $ \cite[Sec.~3.2]{andrews}. These too are very well-studied objects. An interesting connection between them and the Gaussian coefficients which we encountered in Section~\ref{sec:subspace} is the following \cite[Sec.~3.2]{andrews}, \cite[Thm 24.2]{vanlint+wilson}: \begin{align} \label{eq:partitions} \sum_{\ell=0}^{MN} p(N, M, \ell) q^\ell = \binom{N+M}{M}_{\! q} . \end{align} \begin{theorem} The maximum cardinality of a code $ \C \subseteq \{0, 1\}^n_w $ detecting up to $ t $ right-shifts is lower-bounded by \begin{align} \label{eq:shift} \sum^{w (n-w)}_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{w(n-w)}{2} \rfloor \; (\operatorname{mod}\, t+1) } } p(n-w, w, \ell) . \end{align} The maximum cardinality of a code $ \C \subseteq \{0, 1\}^n_w $ detecting all patterns of right-shifts is $ p(n-w, w, \lfloor \frac{w(n-w)}{2} \rfloor) $. \end{theorem} \begin{IEEEproof} The number of elements in $ \{0, 1\}^n_w $ of rank $ \ell $ is $ p(n-w, w, \ell) $. These numbers are symmetric, $ p(n-w, w, \ell) = p(n-w, w, w(n-w) - \ell) $, and unimodal, and hence maximized when $ \ell = \lfloor \frac{w(n-w)}{2} \rfloor $ \cite[Thm 3.10]{andrews}. Furthermore, it follows from \cite[Thm 6.2.10 and Cor.~6.2.1]{engel} that the poset $ (\{0, 1\}^n_w, \rightsquigarrow) $ is Sperner. This implies the second statement. The first statement follows from Proposition \ref{thm:tedc}. \end{IEEEproof} \vspace{2mm} We believe the lower bound in \eqref{eq:shift} is actually the optimal value, i.e., the maximum cardinality of a code detecting $ t $ right-shifts, but at present we do not have a proof of this fact. The dual channel in this example is the channel in which non-zero symbols may be shifted only to the left. \section{Conclusion} As we have seen, order theory is a powerful tool for analyzing asymmetric channel models, particularly the error detection problem for which an optimal solution may be obtained in many cases of interest. Developing the introduced framework further and exploring other applications and channel models that fit into it is a topic of ongoing investigation. Note that we have not discussed here error-\emph{correcting} codes in the posets we encountered. This is also left for future work (see \cite{firer} for a related study). \vspace{3mm} \emph{Acknowledgment:} This work was supported by European Union's Horizon 2020 research and innovation programme (Grant Agreement no.\ 856967), and by the Secretariat for Higher Education and Scientific Research of the Autonomous Province of Vojvodina (project no.\ 142-451-2686/2021). \balance \begin{thebibliography}{99} \bibitem{ahlswede+aydinian} R. Ahlswede and H. Aydinian, ``On Error Control Codes for Random Network Coding,'' in \emph{Proc. Workshop on Network Coding, Theory and Applications (NetCod)}, pp.~68--73, Lausanne, Switzerland, June 2009. \bibitem{andrews} G. E. Andrews, \emph{The Theory of Partitions}, Addison-Wesley Publishing Company, 1976. \bibitem{blaum} M. Blaum, \emph{Codes for Detecting and Correcting Unidirectional Errors}, IEEE Computer Society Press, 1993. \bibitem{borden} J. M. Borden, ``Optimal Asymmetric Error Detecting Codes,'' \emph{Inform. and Control}, vol.~53, no.~1-2, pp.~66--73, 1982. \bibitem{bose+rao} B. Bose and T. R. N. 
Rao, ``Theory of Unidirectional Error Correcting/Detecting Codes,'' \emph{IEEE Trans. Comput.}, vol.~31, no.~6, pp.~521--530, 1982. \bibitem{engel} K. Engel, \emph{Sperner Theory}, Cambridge University Press, 1997. \bibitem{firer} M. Firer, M. M. S. Alves, J. A. Pinheiro, and L. Panek, \emph{Poset Codes: Partial Orders, Metrics and Coding Theory}, Springer, 2018. \bibitem{gadouleau+goupil2} M. Gadouleau and A. Goupil, ``A Matroid Framework for Noncoherent Random Network Communications,'' \emph{IEEE Trans. Inform. Theory}, vol.~57, no.~2, pp.~1031--1045, 2011. \bibitem{ho} T. Ho, M. M\'{e}dard, R. K\"{o}tter, D. Karger, M. Effros, J. Shi, and B. Leong, ``A Random Linear Network Coding Approach to Multicast,'' \emph{IEEE Trans. Inform. Theory}, vol.~52, no.~10, pp.~4413--4430, 2006. \bibitem{katona} G. O. H. Katona, ``Families of Subsets Having no Subset Containing Another One with Small Difference,'' \emph{Nieuw Arch. Wiskd.}, vol.~20, no.~3, pp.~54--67, 1972. \bibitem{kleitman} D. J. Kleitman, ``On an Extremal Property of Antichains in Partial Orders, the {LYM} Property and Some of Its Implications and Applications,'' in: M. Hall Jr. and J. H. Van Lint (Eds.), \emph{Combinatorics}, pp.~277--290, Amsterdam, 1975. \bibitem{kovacevic} M. Kova\v{c}evi\'c, ``Runlength-Limited Sequences and Shift-Correcting Codes: Asymptotic Analysis,'' \emph{IEEE Trans. Inform. Theory}, vol.~65, no.~8, pp.~4804--4814, 2019. \bibitem{kovacevic+popovski} M. Kova\v{c}evi\'c and P. Popovski, ``Zero-Error Capacity of a Class of Timing Channels,'' \emph{IEEE Trans. Inform. Theory}, vol.~60, no.~11, pp.~6796--6800, 2014. \bibitem{multiset} M. Kova\v{c}evi\'{c} and V. Y. F. Tan, ``Codes in the Space of Multisets---Coding for Permutation Channels with Impairments,'' \emph{IEEE Trans. Inform. Theory}, vol.~64, no.~7, pp.~5156--5169, 2018. \bibitem{kovacevic+vukobratovic_clet} M. Kova\v cevi\'c and D. Vukobratovi\'c, ``Subset Codes for Packet Networks,'' \emph{IEEE Commun. Lett.}, vol.~17, no.~4, pp.~729--732, 2013. \bibitem{koetter+kschischang} R. K\"{o}tter and F. R. Kschischang, ``Coding for Errors and Erasures in Random Network Coding,'' \emph{IEEE Trans. Inform. Theory}, vol.~54, no.~8, pp.~3579--3591, 2008. \bibitem{lenz} A. Lenz, P. H. Siegel, A. Wachter-Zeh, and E. Yaakobi, ``Coding Over Sets for DNA Storage,'' \emph{IEEE Trans. Inform. Theory}, vol.~66, no.~4, pp.~2331--2351, 2020. \bibitem{shamai+zehavi} S. Shamai (Shitz) and E. Zehavi, ``Bounds on the Capacity of the Bit-Shift Magnetic Recording Channel,'' \emph{IEEE Trans. Inform. Theory}, vol.~37, no.~3, pp.~863--872, 1991. \bibitem{sperner} E. Sperner, ``Ein Satz \"{u}ber Untermengen einer endlichen Menge,'' \emph{Math. Z.}, vol.~27, pp.~44--48, 1928. \bibitem{stanley} R. P. Stanley, \emph{Enumerative Combinatorics}, Vol I, Cambridge University Press, 1997. \bibitem{vanlint+wilson} J. H. van Lint and R. M. Wilson, \emph{A Course in Combinatorics}, 2nd ed., Cambridge University Press, 2001. \end{thebibliography} \end{document}
2205.07328v1
http://arxiv.org/abs/2205.07328v1
Quotients of the Bruhat-Tits tree by function field analogs of the Hecke congruence subgroups
\NeedsTeXFormat{LaTeX2e} \documentclass{amsart} \usepackage{amssymb} \usepackage{amsmath} \usepackage{subfigure} \usepackage{epsfig} \usepackage[utf8]{inputenc} \usepackage{mathabx} \usepackage{graphicx} \usepackage{xcolor} \usepackage[T1]{fontenc} \usepackage[all]{xy} \usepackage{xcolor} \usepackage{comment} \usepackage{stmaryrd} \usepackage{enumitem} \usepackage{hyperref} \hypersetup{colorlinks=true,linkcolor=blue,citecolor=red,urlcolor=blue} \newcommand{\red}[1]{{\color{red}#1}} \newcommand\classgp{\mathfrak{g}} \newcommand\anything{*} \newcommand\sgn{\textnormal{sgn}} \newcommand\diag{\textnormal{diag}} \newcommand\disc{\textnormal{disc}} \newcommand\ad{\mathbb{A}} \newcommand\Z{\mathbb{Z}} \newcommand\lu{\mathfrak{L}} \newcommand\compleji{\mathbb{C}} \newcommand\alge{\mathfrak{A}} \newcommand\tr{\textnormal{ tr}} \newcommand\la{\Lambda} \newcommand\ideala{\mathcal{A}} \newcommand\idealb{\mathcal{B}} \newcommand\bark{\bar{k}} \newcommand\uno{\{1\}} \newcommand\quadc{k^*/(k^*)^2} \newcommand\Ba{\mathfrak B} \newcommand\oink{\mathcal O} \newcommand\D{\mathcal D} \newcommand\nuD{\nu_{\D}} \newcommand\Q{\mathbb Q} \newcommand\enteri{\mathbb Z} \newcommand\gal{\mathcal G} \newcommand\dete{\textnormal{det}} \newcommand\ovr[2]{#1|#2} \newcommand\vaa{\longrightarrow} \newcommand\Da{\mathfrak{D}} \newcommand\Ha{\mathfrak{H}} \newcommand\Ea{\mathfrak{E}} \newcommand\Fa{\mathfrak{F}} \newcommand\Ma{\mathfrak{M}} \newcommand\ma{\mathfrak{m}} \newcommand\Txi{\lceil} \newcommand\bmattrix[4]{\left(\begin{array}{cc}#1&#2\\#3&#4\end{array}\right)} \newcommand\sbmattrix[4]{\textnormal{\scriptsize$\left(\begin{array}{cc}#1&#2\\#3&#4\end{array}\right)$\normalsize}} \newcommand\blattice[4]{\left<\begin{array}{cc}#1&#2\\#3&#4\end{array}\right>} \newcommand\bspace[4]{\left[\begin{array}{cc}#1&#2\\#3&#4\end{array}\right]} \newcommand\wq{\mathfrak{q}} \newcommand\uink{\mathcal U} \newcommand\qunitaryomega{\uink} \newcommand\quniti[4]{\rm{SU}^{#4}} \newcommand\qunitarylam{\quniti nkh{\la}} \newcommand\spinomega{Spin_{n}(Q)} \newcommand\hspinomega{Spin_{n}(\D,h)} \newcommand\matrici{\mathbb{M}} \newcommand\finitum{\mathbb{F}} \newcommand\splitt[1]{\matrici_2(#1)} \newcommand\splitk{\splitt k} \newcommand\Hgot{\mathfrak{H}} \newcommand{\rf}{\mathfrak{r}} \newcommand{\cf}{\mathfrak{c}} \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{obs}[theorem]{Observation} \newtheorem{rem}[theorem]{Remark} \newtheorem{ex}[theorem]{Example} \newtheorem{defi}[theorem]{Definition} \newtheorem{problem}[theorem]{Problem} \newtheorem{solution}[theorem]{Solution} \newcommand\bbmatrix[4]{\left(\begin{array}{cc}#1&#2\\#3&#4\end{array}\right)} \newcommand\lbmatrix[4]{\textnormal{\scriptsize{$\left(\begin{array}{cc}#1&#2\\#3&#4 \end{array}\right)$}}\normalsize} \newcommand\lcvector[2]{\textnormal{\scriptsize{$\left(\begin{array}{c}#1\\#2\end{array}\right)$}}\normalsize} \numberwithin{equation}{section} \DeclareRobustCommand{\rchi}{{\mathpalette\irchi\relax}} \newcommand{\irchi}[2]{\raisebox{\depth}{$#1\chi$}} \title[Quotients of trees by Hecke congruence subgroups] {Quotients of the Bruhat-Tits tree by function field analogs of the Hecke congruence subgroups} \author{Claudio Bravo} \begin{document} \maketitle \footnotesize \textbf{MSC Numbers (2020): 20G30-20E08 (primary), 11R58-14H60 (secondary)} \textbf{Keywords: Hecke subgroups, Quotient graphs, Global function
fields, Eichler orders} \normalsize \begin{abstract} Let $\mathcal{C}$ be a smooth, projective and geometrically integral curve defined over a finite field $\mathbb{F}$. For each closed point $P_{\infty}$ of $\mathcal{C}$, let $R$ be the ring of functions that are regular outside $P_{\infty}$, and let $K$ be the completion at $P_{\infty}$ of the function field of $\mathcal{C}$. In order to study groups of the form $\mathrm{GL}_2(R)$, Serre describes the quotient graph $\mathrm{GL}_2(R) \backslash \mathfrak{t}$, where $\mathfrak{t}$ is the Bruhat-Tits tree defined from $\mathrm{SL}_2(K)$. In particular, Serre shows that $\mathrm{GL}_2(R) \backslash \mathfrak{t}$ is the union of a finite graph and a finite number of ray-shaped subgraphs, which are called cusps. It is not hard to see that finite index subgroups inherit this property. In this work we describe the associated quotient graph $\mathrm{H} \backslash \mathfrak{t}$ for the action on $\mathfrak{t}$ of the group $\mathrm{H}= \left\lbrace \sbmattrix{a}{b}{c}{d} \in \text{GL}_2(R) : c \equiv 0 \, (\text{mod } I)\right\rbrace \normalsize$, where $I$ is an ideal of $R$. More specifically, we give an explicit formula for the cusp number of $\mathrm{H} \backslash \mathfrak{t}$. Then, by using Bass-Serre Theory, we describe the combinatorial structure of $\mathrm{H}$. These groups play, in the function field context, the same role as the Hecke congruence subgroups of $\mathrm{SL}_2(\mathbb{Z})$. \end{abstract} \section{Introduction}\label{Section Intro} Geometric group theory is the area of mathematics devoted to studying group-theoretic problems via topological and geometric methods. A classical example of this approach is the characterization of discrete subgroups of real Lie groups by their action on symmetric spaces. In the late seventies, Bass and Serre initiated a program to study certain groups via their actions on trees, i.e. on connected graphs without cycles. This theory is currently known as Bass-Serre theory (cf.~\cite{S}). Bass-Serre theory can be applied to study discrete subgroups of $\mathrm{GL}_2(K)$, where $K$ is a discrete valuation field. For instance, Serre proves in \cite[Chapter I, \S 3.3]{S} that a discrete subgroup of $\mathrm{SL}_2(K)$ is torsion-free if and only if it acts freely on a tree. This result gives another proof of a theorem due to Ihara, which states that any discrete torsion-free subgroup of $\mathrm{SL}_2(\mathbb{Q}_p)$ is free (cf.~\cite[Theorem 1]{Ihara}). In \cite[Chapter II, \S 1.1]{S} Serre introduces the Bruhat-Tits tree $\mathfrak{t}=\mathfrak{t}(K)$ associated to the group $\mathrm{SL}_2$ over a discretely valued field $K$. Moreover, the same author shows that $\mathrm{GL}_2(K)$ acts on $\mathfrak{t}$. Let $R$ be the coordinate ring of an affine open set of a smooth, projective, geometrically integral curve $\mathcal{C}$, defined over a field $\mathbb{F}$, with a unique point $P_{\infty}$ at infinity. In \cite[Chapter II, \S 2]{S} Serre uses $\mathfrak{t}$ in order to study groups of the form $\mathrm{GL}_2(R)$. Indeed, the closed point $P_{\infty}$ gives rise to a discrete valuation $\nu$ on the function field $k$ of $\mathcal{C}$ and hence we have an action of $\mathrm{GL}_2(R)$, as a subgroup of $\mathrm{GL}_2(K)$, on the Bruhat-Tits tree $\mathfrak{t}(K)$ defined from the completion $K$ of $k$ at $P_{\infty}$. In this situation, Serre gave the following description of the quotient graph $\mathrm{GL}_2(R) \backslash \mathfrak{t}$.
\begin{theorem} \cite[Chapter II, Theorem 9]{S}\label{teo serre quot} Let $\mathfrak{t}$ be the local Bruhat-Tits tree defined by the group $\text{SL}_2$ at the completion $K$ associated to the valuation induced by $P_{\infty}$ (cf.~\S \ref{subsection BTT}). Then, the graph $\mathrm{GL}_2(R) \backslash \mathfrak{t}$ is combinatorially finite, i.e. it is obtained by attaching a finite number of infinite half lines, called cuspidal rays, to a certain finite graph $Y$. The set of such cuspidal rays is indexed by the elements of the Picard group $\mathrm{Pic}(R)=\mathrm{Pic}(\mathcal{C})/\langle \overline{P_{\infty}}\rangle$. \end{theorem} Using Bass-Serre Theory, Serre concludes the following structural result for the groups $\mathrm{GL}_2(R)$ defined above. \begin{theorem}\cite[Chapter II, Theorem 10]{S}\label{teo serre grup} There exists a finitely generated group $H$, and a family $ \lbrace (I_{\sigma}, \mathcal{P}_{\sigma}, \mathcal{B}_{\sigma}) \rbrace_{\sigma \in \mathrm{Pic}(R)}$, where: \begin{itemize} \item[1.] $I_\sigma$ is an $R$-fractional ideal and $\mathcal{P}_\sigma= (\mathbb{F}^{*}\times \mathbb{F}^{*})\ltimes I_\sigma$, \item[2.] $\mathcal{B}_{\sigma}$ is a group with canonical injections $\mathcal{B}_{\sigma} \rightarrow H$ and $\mathcal{B}_{\sigma} \rightarrow \mathcal{P}_{\sigma}$, \end{itemize} such that $\mathrm{GL}_2(R)$ is isomorphic to the sum of $\mathcal{P}_\sigma$, for $\sigma \in \mathrm{Pic}(R)$, and $H$, amalgamated along their common subgroups $\mathcal{B}_\sigma$ according to the above injections. \end{theorem} Moreover, Serre describes the structure of $\mathrm{GL}_2(R)$ as an amalgamated sum in certain cases, by explicitly describing the corresponding quotient graphs. That work considers, for example, the cases $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$, for $\text{deg}(P_{\infty}) \in \lbrace 1, 2,3,4 \rbrace$, or when $\mathcal{C}$ is a curve of genus $0$ without rational points and $\text{deg}(P_{\infty})=2$. The case $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$ and $\deg(P_{\infty})=1$ reduces to a classical result, now called Nagao's Theorem (cf.~\cite{N}), where the corresponding quotient graph is a ray. Theorem \ref{teo serre quot} and Theorem \ref{teo serre grup} have some interesting consequences for the groups involved. For instance, we can apply Theorem \ref{teo serre grup} in order to show that $\mathrm{GL}_2(R)$ is never finitely generated. Also, the preceding theorems allow us to study the homology and cohomology groups of $\mathrm{GL}_2(R)$. Indeed, in \cite[Chapter II, \S 2.8]{S} Serre applies this approach in order to understand the homology group of $\mathrm{GL}_2(R)$ with coefficients in certain modules. For instance, he concludes that $H_i(\mathrm{GL}_2(R), \mathbb{Q})=0$, for all $i >1$, and $H_1(\mathrm{GL}_2(R), \mathbb{Q})$ is a finite dimensional $\mathbb{Q}$-vector space. Then, in \cite[Chapter II, \S 2.9]{S} Serre interprets the Euler-Poincar\'e characteristic of certain subgroups $G$ of $\mathrm{GL}_2(R)$ in terms of the relative homology of $G$ modulo a representative system of its conjugacy classes of maximal unipotent subgroups. In order to prove Theorem \ref{teo serre quot}, Serre makes extensive use of the theory of vector bundles of rank $2$ over $\mathcal{C}$. On the other hand, Mason in \cite{M} gives a more elementary approach which involves substantially less algebraic geometry. This point of view only requires the Riemann-Roch Theorem and some basic notions about Dedekind rings.
The price to pay for this simplicity is that one is not able to prove the finiteness of the diameter of the graph $Y$ in Theorem \ref{teo serre quot}. However, Mason applies this result on quotient graphs in order to study the lowest index non-congruence subgroups of $\mathrm{GL}_2(R)$. In a more general context, let $\mathbf{G}$ be an arbitrary reductive algebraic $k$-group. We can define a poly-simplicial complex $\mathcal{X}(\mathbf{G},K)$ associated to the group $\mathbf{G}$ and the field $K$. This topological space is called the building of $\mathbf{G}(K)$, and this notion generalizes the definition of the Bruhat-Tits tree. When $R = \mathbb{F}[t]$ and $\mathbf{G}$ is split over $k$, there exists a result that generalizes Nagao's theorem, which describes the structure of the quotient space $\overline{\mathcal{X}}= \textbf{G}(\mathbb{F}[t]) \backslash \mathcal{X}$ associated to the action of $\textbf{G}(\mathbb{F}[t])$ on $\mathcal{X}=\mathcal{X}(\textbf{G},\mathbb{F}((t^{-1})))$. This result is due to Soul\'e and is described in \cite[Theorem 1]{So}. Soul\'e shows that $\overline{\mathcal{X}}$ is isomorphic to a sector of $\mathcal{X}$, which is the analog of a ray in the general building context. Then, in the same article, the author describes $\textbf{G}(\mathbb{F}[t])$ as an amalgamated sum of certain well-known subgroups. This structural result can be extended to the context where $\mathbf{G}$ is an isotrivial $k$-group, i.e.~a reductive $k$-group that splits in the composite field $\ell=\mathbb{L} k$, for a finite extension $\mathbb{L}$ of $\mathbb{F}$. This problem has been developed by Margaux in \cite{Margaux}. Indeed, Margaux manages to prove the same result as Soul\'e, replacing the condition ``split'' by ``isotrivial''. Finally, in the context of quasi-split groups, with L.~Arenas-Carmona, B.~Loisel and G.~Lucchini Arteche, we extend Theorem \ref{teo serre quot} and Theorem \ref{teo serre grup} to the special unitary groups of split rank one, which are the smallest quasi-split non-split reductive groups. These results are described in \cite{ABLL}, and they are not covered by the preceding works. In the context where $\mathbb{F}$ is a finite field, one of the strongest results about the structure of the preceding quotients that exist in the literature is due to Bux, K\"ohl and Witzel in \cite{B}. In order to present this result, let $\mathcal{S}$ be a set of closed points in $\mathcal{C}$, and denote by $\mathcal{O}_\mathcal{S}$ the ring of functions that are regular outside $\mathcal{S}$. In particular, we have $\mathcal{O}_{P_\infty}=R$. Assume that $\mathbf{G}$ is an isotropic and non-commutative algebraic $k$-group and let $\mathcal{X}_\mathcal{S}$ be the product of buildings $\prod_{P \in \mathcal{S}} \mathcal{X}(\mathbf{G}, k_P)$, where $k_P$ is the completion of $k$ at $P$. Choose a particular realization $\mathbf{G}_{\mathrm{real}}$ of $\mathbf{G}$ as an algebraic set of some affine space defined over $k$. Given this realization, we define $G$ as the group of $\mathcal{O}_{\mathcal{S}}$-points of $\mathbf{G}_{\mathrm{real}}$. Then, \cite[Proposition 13.6]{B} shows that there exists a constant $L$ and finitely many pairwise disjoint sectors $Q_1, \cdots, Q_s$ of $\mathcal{X}_\mathcal{S}$ such that the $G$-translates of the $L$-neighborhood of $\bigcup_{i=1}^{s} Q_i$ cover $\mathcal{X}_\mathcal{S}$. Note that almost all the previous results are specific to groups of points of certain reductive groups. Thus, it is natural to seek extensions to certain subgroups of them.
Broadly speaking, the goal of this work is to study a certain family of congruence subgroups $\mathrm{H}$ of $\mathrm{GL}_2(R)$, which is a direct analog of the Hecke congruence subgroups of $\text{SL}_2(\mathbb{Z})$ in the function field context. We do this through the analysis of the associated group actions on trees. Specifically, in analogy with Theorem \ref{teo serre quot}, we show that $\mathrm{H} \backslash \mathfrak{t}$ is combinatorially finite and we give an explicit formula for the number of its cuspidal rays. Then, by using Bass-Serre Theory, we describe the combinatorial structure of $\mathrm{H}$ and its abelianization. Finally, we present some interesting examples in the context where $\mathcal{C}=\mathbb{P}^1_\mathbb{F}$. \section{Context and Main results}\label{Section on the principal problem} In order to introduce the main results of this work, we use the following definitions and notations. Let $\mathcal{C}$ be a smooth, projective, geometrically integral $\mathbb{F}$-curve and let $k$ be its function field. Since in the sequel we use the spinor genera theory in some proofs, and this theory is set in the context where \underline{the ground field $\mathbb{F}$ is finite}, we assume this throughout and we denote its cardinality by $q=p^s$. A $\mathcal{C}$-order $\mathfrak{D}$ on the matrix algebra $\mathbb{M}_2(k)$ is a locally free sheaf of $\mathcal{O}_{\mathcal{C}}$-algebras whose generic fiber is $\mathbb{M}_2(k)$ (cf.~Definition \ref{definition of max and eichler orders}). Analogously, an $R$-order is a locally free $R$-algebra. As we explain in \S \ref{Section Spinor}, any $R$-order can be extended to a $\mathcal{C}$-order by choosing an arbitrary local order at the point $P_{\infty} \in \mathcal{C}$. We say that a $\mathcal{C}$-order is maximal if its completions are maximal at all places of $\mathcal{C}$. By definition, a split maximal order is an order that is $\text{GL}_2(k)$-conjugate to the sheaf $$ \small \mathfrak{D}_D= \left( \begin{array}{cc} \mathcal{O}_{\mathcal{C}} & \mathfrak{L}^D\\ \mathfrak{L}^{-D} & \mathcal{O}_{\mathcal{C}} \end{array} \right), \normalsize $$ where $D$ is an arbitrary divisor on $\mathcal{C}$, and where $\mathfrak{L}^D$ is the invertible sheaf defined in every open set $U \subseteq \mathcal{C}$ by $$ \mathfrak{L}^D(U)= \left\lbrace f \in k: \text{div}(f)|_{U}+D|_{U} \geq 0 \right\rbrace . $$ In general, an Eichler $\mathcal{C}$-order is a sheaf-theoretical intersection of two maximal $\mathcal{C}$-orders. We define a specific family of Eichler $\mathcal{C}$-orders $\mathfrak{E}_D$ by taking \begin{equation}\label{eq de eD} \mathfrak{E}_D= \mathfrak{D}_D \cap \mathfrak{D}_0, \end{equation} where $D$ is an effective divisor. Let $U_0$ be the open set in $\mathcal{C}$ defined as the complement of $\lbrace P_{\infty} \rbrace$. We define $\mathrm{H}_D$ as the group of invertible elements in $\mathfrak{E}_D(U_0)$. In other words, \begin{equation}\label{eq eichler} \small \text{H}_D= \left\lbrace \left( \begin{array}{cc} a & b\\ c & d \end{array} \right) \in \text{GL}_2(R) : c \equiv 0 \, (\text{mod } I_D)\right\rbrace , \normalsize \end{equation} where $I_D$ is the $R$-ideal defined as $I_D=\mathfrak{L}^{-D}(U_0)$. Then, the family of groups $\mathcal{H}=\lbrace \mathrm{H}_D: D \, \,\text{effective divisor} \rbrace$ plays the same role as the Hecke congruence subgroups of $\mathrm{SL}_2(\mathbb{Z})$ in the function field context.
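For computations it is sometimes convenient to make \eqref{eq eichler} completely explicit in the simplest case $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$ with $\deg(P_{\infty})=1$, so that $R=\mathbb{F}[t]$ and $I_D$ is generated by a single polynomial $f$. The following Python sketch (all function names are ours, and we take $\mathbb{F}=\mathbb{F}_p$ with $p$ prime to keep the arithmetic elementary) tests whether a matrix with entries in $\mathbb{F}_p[t]$, given by coefficient lists, belongs to $\mathrm{H}_D$: its determinant must be a nonzero constant, i.e.\ a unit of $R$, and its lower-left entry must vanish modulo $f$.

\begin{verbatim}
# Polynomials over F_p are lists of coefficients [c_0, c_1, ...], c_i in {0,...,p-1}.
def trim(f):
    while f and f[-1] == 0:
        f = f[:-1]
    return f

def add(f, g, p, sign=1):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) + sign * (g[i] if i < len(g) else 0)) % p
                 for i in range(n)])

def mul(f, g, p):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return trim(out)

def rem(f, g, p):
    f, g = trim(f), trim(g)
    inv = pow(g[-1], p - 2, p)        # inverse of the leading coefficient (p prime)
    while len(f) >= len(g):
        coef = (f[-1] * inv) % p
        f = add(f, [0] * (len(f) - len(g)) + [(coef * c) % p for c in g], p, sign=-1)
    return f

def in_H_D(a, b, c, d, f, p):
    # (a b; c d) lies in H_D  <=>  det is a nonzero constant and c = 0 (mod f)
    det = add(mul(a, d, p), mul(b, c, p), p, sign=-1)
    return len(det) == 1 and rem(c, f, p) == []

p, f = 3, [0, 1]                                  # I_D = (t) inside F_3[t]
assert in_H_D([1], [0, 1], [0, 1], [1, 0, 1], f, p)   # (1, t; t, 1 + t^2)
assert not in_H_D([1], [], [1], [1], f, p)            # (1, 0; 1, 1): c not in (t)
\end{verbatim}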
The objective of this work is to characterize the quotient graph associated to the action of $\mathrm{H}_D$ on the Bruhat-Tits tree $\mathfrak{t}$, to subsequently describe the combinatorial structure of $\mathrm{H}_D$. Note that $\mathrm{H}_D$ naturally contains the kernel of $\mathrm{GL}_2(R) \to \mathrm{GL}_2(R/I_D)$. This implies, as we prove in Corollary \ref{Cor comb fin} (which follows from a lemma by Serre in \cite{Se2}), that the quotient graph $\mathrm{H}_D \backslash \mathfrak{t}$ is combinatorially finite, and the number of cuspidal rays of $\mathfrak{t}_D= \text{H}_D \backslash \mathfrak{t}$ is equal to the number of $\text{H}_D$-orbits in $\mathbb{P}^1(k)$. The previous observation is useful in the context where $D$ has small degree. Unfortunately, this set of $\text{H}_D$-orbits is really hard to characterize in the general case. Another obstruction for a direct computation of $\mathfrak{t}_D$ is that $\mathrm{H}_D$ is not a normal subgroup of $\mathrm{GL}_2(R)$. In particular, $\mathrm{GL}_2(R) \backslash \mathfrak{t}$ is not always a quotient of $\text{H}_D \backslash \mathfrak{t}$. In order to present our main result we introduce some additional notations. For any divisor $D$ on $\mathcal{C}$, we denote by $\overline{D}$ its linear equivalence class. Also, we denote by $\lfloor a \rfloor$ the largest integer not exceeding $a \in \mathbb{R}$. Observe that, when $D=0$, we have $\text{H}_D=\text{GL}_2(R)$. In particular, the next theorem can be considered as a partial generalization of Serre's result on the structure of quotient graphs. \begin{theorem}\label{teo cusp} Let $D$ be an effective divisor, which we write as $D=\sum_{i=1}^r n_i P_i$, where the points $P_1, \dots, P_r, P_{\infty}$ are all different. Then, the graph $\mathfrak{t}_D=\mathrm{H}_D\backslash \mathfrak{t}$ is obtained by attaching a finite number of rays, called cuspidal rays, to a certain finite graph $Y\subset \mathfrak{t}_D$. The cardinality $\mathfrak{c}_D$ of the set of such cuspidal rays satisfies \small \begin{equation}\label{numero de patas} \mathfrak{c}_D \leq c(\mathrm{H}_D) \footnotesize :=2^r |g(2)| \left| \frac{ 2\mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle}{ \langle \overline{P_{\infty}} \rangle} \right| \normalsize \left( 1+ \frac{1}{q-1} \prod_{i=1}^r \left( q^{\text{deg}(P_i)\lfloor \frac{n_i}{2} \rfloor}-1\right) \right), \end{equation} \normalsize where $g(2)$ is the maximal exponent-2 subgroup of $\mathrm{Pic}(R)$. Moreover, equality holds when $g(2)$ is trivial and each $n_i$ is odd. \end{theorem} Note that $g(2)$ is trivial in various cases: for example, when $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$ and $P_{\infty}$ has odd degree, or when $\mathcal{C}$ is an elliptic curve with no non-trivial $2$-torsion rational points. The previous results give a more precise description than \cite[\S 3.3, Lemma 8]{Se2}, which in fact only says that the set of cuspidal rays is finite. Indeed, in Theorem \ref{teo cusp}, we have a control on the number of cusps, and, in the case where $g(2)$ is trivial and each $n_i$ is odd, we have an explicit expression to compute it. When $\mathcal{C}=\mathbb{P}^1_\mathbb{F}$, Theorem 3.3 of \cite{A5} characterizes a set of representatives for all but finitely many vertex classes. In particular, this theorem gives a set of representatives for the action of $\mathrm{H}_D$ on the ends of $\mathfrak{t}$. 
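Since all the quantities appearing in \eqref{numero de patas} are elementary once the Picard-theoretic data is known, the bound is easy to evaluate in examples. The following Python sketch (names are ours; the factors $|g(2)|$ and $\left|\frac{2\mathrm{Pic}(\mathcal{C})+\langle \overline{P_{\infty}} \rangle}{\langle \overline{P_{\infty}} \rangle}\right|$ must be supplied by hand) computes $c(\mathrm{H}_D)$ from $q$ and the local data $(\deg(P_i), n_i)$.

\begin{verbatim}
from fractions import Fraction

def cusp_bound(q, local_data, g2_order=1, picard_index=1):
    # local_data is a list of pairs (deg P_i, n_i) for D = n_1 P_1 + ... + n_r P_r.
    r = len(local_data)
    prod = 1
    for deg, n in local_data:
        prod *= q ** (deg * (n // 2)) - 1
    return 2 ** r * g2_order * picard_index * (1 + Fraction(prod, q - 1))

# C = P^1 with deg(P_inf) = deg(P) = 1 and D = P: both extra factors are trivial and
# the bound equals 2, consistent with the double-ray quotient described below.
assert cusp_bound(q=4, local_data=[(1, 1)]) == 2
\end{verbatim}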
Now, by using Bass-Serre theory and Theorem \ref{teo cusp} we can deduce the following general result on the combinatorial structure of $\text{H}_D$. This can be considered as a partial generalization of Theorem \ref{teo serre grup}, and as a more detailed description than \cite[\S 3.3, Lemma 8]{Se2}. \begin{theorem}\label{teo grup} In the notation of Theorem \ref{teo cusp}, assume that each $n_i$ is odd and $g(2)= \lbrace e \rbrace$. Then, there exist a finitely generated group $H$, two sets of indices, denoted by $\mathbf{D}$ and $\mathbf{I}$, and a family $\lbrace (I_{\sigma}, \mathcal{P}_{\sigma}, \mathcal{B}_{\sigma}) : \sigma \in \mathbf{D} \sqcup \mathbf{I} \rbrace$, where \begin{itemize} \item[1.] $\mathrm{Card}(\mathbf{D})= 2^r [2\mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle: \langle \overline{P_{\infty}} \rangle],$ and $\mathrm{Card}(\mathbf{I})=c(\mathrm{H}_D)-\mathrm{Card}(\mathbf{D}),$ \item[2.] $ I_\sigma$ is an $R$-ideal contained in $I_D$, \item[3.] $\mathcal{P}_\sigma= (\mathbb{F}^{*}\times \mathbb{F}^{*})\ltimes I_\sigma$, for any $\sigma \in \mathbf{D},$ while $\mathcal{P}_{\sigma} = \mathbb{F}^{*}\times I_\sigma$, for any $\sigma \in \mathbf{I},$ \item[4.] $\mathcal{B}_{\sigma}$ is a group with canonical injections $\mathcal{B}_{\sigma} \rightarrow H$ and $\mathcal{B}_{\sigma} \rightarrow \mathcal{P}_{\sigma}$, for any $\sigma \in \mathbf{D} \sqcup \mathbf{I}$, \end{itemize} such that $\mathrm{H}_D$ is isomorphic to the sum of $\mathcal{P}_\sigma$, for $\sigma \in \mathbf{D}\sqcup \mathbf{I}$, and $H$, amalgamated along their common subgroups $\mathcal{B}_\sigma$ according to the above injections. \end{theorem} As we mention in \S \ref{Section Intro}, Serre in \cite[Chapter II, \S 2.8]{S} gives a series of results that relate the homology of congruence subgroups of $\mathrm{GL}_2(R)$ with the structure of its quotient graphs. We can apply them to our context. In particular, the following result, which gives a more precise description of the abelianization of $\mathrm{H}_D$, is a consequence of Theorem \ref{teo cusp}. \begin{theorem}\label{teo grup ab} With the same notation and hypotheses of Theorem \ref{teo grup}, the abelianization $\mathrm{H}_D^{\mathrm{ab}}$ of $\mathrm{H}_D$ is \begin{itemize} \item[1.] finitely generated, if each $n_i$ equals one, or \item[2.] the direct product of a finitely generated group with an infinite-dimensional $\mathbb{F}_p$-vector space, in any other case. \end{itemize} \end{theorem} Finally, we present an interesting example in the context where $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$, $D=P$ is a closed point and $\deg(P_{\infty})=\deg(P)=1$. Indeed, by using these hypotheses on $\mathcal{C}$ and $D$ we can show that the graph $\mathfrak{t}_D=\mathrm{H}_D\backslash \mathfrak{t}$ is isomorphic to a double ray (cf.~Proposition \ref{teo quot t}). We subsequently describe $\mathrm{H}_D$ as an explicit amalgamated sum (cf.~Proposition \ref{teo group t}) and compute its abelianization (cf.~Proposition \ref{teo group ab t}). \section{Preliminaries on the Bruhat-Tits tree}\label{Section BTT} \subsection{Conventions and notations for graphs} We recall some basic definitions on graphs and trees. We define a graph $\mathfrak{g}$ as a pair of sets $\text{V}=\text{V}(\mathfrak{g})$ and $\text{E}=\text{E}(\mathfrak{g})$, and three functions $s,t:E\rightarrow V$ and $r:E\rightarrow E$ satisfying the identities $r(a)\neq a$, $r\big(r(a)\big)=a$ and $s\big(r(a)\big)=t(a)$, for every $a \in E$.
In all that follows $V$ and $E$ are called vertex and edge set, respectively, and the functions $s,t$ and $r$ are called respectively source, target and reverse. Our definition is chosen in a way that allows the existence of multiple edges and loops. Two vertices $v,w \in \text{V}$ are neighbors if there exists an edge $e \in \text{E}$ satisfying $s(e)=v$ and $t(e)=w$. The valency of a vertex $v$ is the cardinality of its set of neighboring vertices. A simplicial map $\gamma:\mathfrak{g} \rightarrow \mathfrak{g}'$ between graphs is a pair of functions $\gamma_V:\text{V}(\mathfrak{g})\rightarrow \text{V}(\mathfrak{g}')$ and $\gamma_E: \mathrm{E}(\mathfrak{g})\rightarrow \mathrm{E}(\mathfrak{g}')$ preserving the source, target and reverse functions. We say that a simplicial map $\gamma:\mathfrak{g} \rightarrow \mathfrak{g}'$ is an isomorphism if there exists another simplicial map $\gamma':\mathfrak{g}' \rightarrow \mathfrak{g}$ such that $\gamma_V \circ \gamma'_V= \mathrm{id}_{\text{V}(\mathfrak{g'})}$, $\gamma'_V \circ \gamma_V= \mathrm{id}_{\text{V}(\mathfrak{g})}$, $\gamma_E \circ \gamma'_E= \mathrm{id}_{\text{E}(\mathfrak{g'})}$ and $\gamma'_E \circ \gamma_E= \mathrm{id}_{\text{E}(\mathfrak{g})}$. The group of automorphisms $\mathrm{Aut}(\mathfrak{g})$ of a graph $\mathfrak{g}$ is the set of isomorphisms from $\mathfrak{g}$ to $\mathfrak{g}$, with the composition as a group law. We say that a group $\Gamma$ acts on a graph $\mathfrak{g}$ if there exists a homomorphism from $\Gamma$ to $\mathrm{Aut}(\mathfrak{g})$. A group action of $\Gamma$ on a graph $\mathfrak{g}$ has no inversions if $g \cdot a\neq r(a)$, for every edge $a$ and every element $g\in\Gamma$. An action without inversions defines a quotient graph in a natural sense. Indeed, if $\Gamma$ acts on $\mathfrak{g}$ without inversions, then the vertex set of $\Gamma \backslash \mathfrak{g}$ corresponds to $\Gamma \backslash V$, and the edge set corresponds to $\Gamma \backslash E$. When the action of $\Gamma$ has inversions, we can replace $\mathfrak{g}$ by its first barycentric subdivision in order to obtain a graph where $\Gamma$ acts without inversions. Then, the quotient graph is also defined in this case. Let $\mathfrak{g}$ be a graph. A finite line in $\mathfrak{g}$, usually denoted by $\mathfrak{p}$, is a subgraph whose vertex and edge sets are $\mathrm{V}(\mathfrak{p})=\lbrace v_i \rbrace_{i=0}^{n}$ and $\mathrm{E}(\mathfrak{p})=\lbrace e_i, r(e_i) \rbrace_{i=0}^{n-1}$, where $s(e_i)=v_i$ and $t(e_{i})=v_{i+1}$, for each index $0 \leq i \leq n-1$. The length of $\mathfrak{p}$ is, by definition, $n=\mathrm{Card} (\mathrm{V}(\mathfrak{p}))-1=\mathrm{Card} (\mathrm{E}(\mathfrak{p}))/2$. We often emphasize the vertices $v_0$, the initial vertex of $\mathfrak{p}$, and $v_n$, the final vertex of $\mathfrak{p}$, by saying ``$\mathfrak{p}$ is a path connecting $v_0$ with $v_n$''. A graph $\mathfrak{g}$ is connected if, given two vertices $v,w \in \mathrm{V}(\mathfrak{g})$, there exists a finite path $\mathfrak{p}$ connecting them. We define a ray $\mathfrak{r}$ in $\mathfrak{g}$ by replacing $n$ and $n-1$ by $\infty$ in the definition of finite line. A cycle in $\mathfrak{g}$ is a finite line with an additional pair of edges connecting $v_n$ with $v_0$. We define a tree as a connected graph without cycles. A maximal path in $\mathfrak{g}$ is a doubly infinite line, i.e. the union of two rays intersecting only in one vertex.
Let $\mathfrak{r}_1$ and $\mathfrak{r}_2$ be two rays whose vertex sets are respectively denoted by $V_1= \left\lbrace v_i : i\in \mathbb{Z}_{\geq 0}\right\rbrace $ and $V_2=\left\lbrace v'_i: i\in \mathbb{Z}_{\geq 0}\right\rbrace$. We say that $\mathfrak{r}_1$ and $\mathfrak{r}_2$ are equivalent if there exists $t, i_0 \in \mathbb{Z}_{\geq 0}$ such that $v_{i}= v_{i+t}'$, for all $i \geq i_0$. In this case we write $\mathfrak{r}_1 \sim \mathfrak{r}_2$. We define the visual limit, also called the end set, $\partial_{\infty}(\mathfrak{g})$ of $\mathfrak{g}$ as the set of equivalence classes of rays $\rf$ in $\mathfrak{g}$. We denote the class of $\rf$ by $\partial_{\infty}(\mathfrak{r})$. By a cuspidal ray in a graph $\mathfrak{g}$, we mean a ray such that every non-initial vertex has valency two in $\mathfrak{g}$. A cusp in $\mathfrak{g}$ is an equivalence class of cuspidal rays in $\mathfrak{g}$. We denote the cusp set of $\mathfrak{g}$ by $\partial^{\infty}(\mathfrak{g})$. We say that a graph is combinatorially finite if it is the union of a finite graph and a finite number of cuspidal rays. In particular, when a graph is combinatorially finite its visual limit is also finite. \subsection{The Bruhat-Tits tree}\label{subsection BTT} Let $k$ be the function field of a smooth, projective, geometrically integral curve $\mathcal{C}$ defined over a field $\mathbb{F}$. Let $K$ be the completion of $k$ with respect to a discrete valuation $\nu: k^{*} \to \mathbb{Z}$, and let $\mathcal{O}$ be its ring of integers. Recall that a tree is a connected graph without cycles. An example of tree is the Bruhat-Tits building $\mathfrak{t}=\mathfrak{t}(K)$ associated to the reductive group $\text{SL}_2$ and the field $K$. In order to introduce this tree, we have to fix some definitions concerning lattices. Let $\pi \in \mathcal{O}$ be a fixed uniformizing parameter of $K$. A lattice in a $K$-vector space $V$ is a finitely generated $\mathcal{O}$-submodule of $V$, which generates $V$ as a vector space. Assume that $V$ is a two-dimensional $K$-vector space. Then, every lattice on $V$ is free of rank $2$. The group $K^{*}$ acts on the set of lattices by homothetic transformations. The vertex set of $\mathfrak{t}(K)$ can be defined as the set of homothetic classes of lattices in $V$. We adopt this convention. Let $\Lambda$ and $\Lambda'$ be two lattices in $V$. By the Invariant Factor Theorem of Algebraic Number Theory, there is an $\mathcal{O}$-basis $\lbrace e_1, e_2 \rbrace$ of $\Lambda$ and integers $a,b$ such that $\lbrace \pi^a e_1, \pi^b e_2 \rbrace$ is an $\mathcal{O}$-basis for $\Lambda'$. The set $\lbrace a,b \rbrace$ does not depend on the choice of basis for $\Lambda, \Lambda'$. Moreover, if we replace $\Lambda$ by $x\Lambda$, and $\Lambda'$ by $y \Lambda'$, where $x,y \in K^{*}$, then $\lbrace a,b \rbrace$ changes into $\lbrace a+c , b+c\rbrace$, where $c=\nu(y/x)$. So, the integer $|a-b|$ is called the distance between the classes $[\Lambda]$ and $[\Lambda']$. We define one pair of mutually reverse edges in $\mathfrak{t}(K)$ for each pair of lattice classes at distance one. This defines a graph, which can be proved to be a tree (cf.~\cite[Chapter II, \S 1, Theorem 1]{S}). The group $\text{GL}_2(K)$ acts on $\mathfrak{t}$ by $g \cdot [\Lambda]=[g(\Lambda)]$, for any $\mathcal{O}$-lattice $\Lambda \subset K^2$ and any $g \in \text{GL}_2(K)$. This induces an action of $\text{PGL}(V)=\text{PGL}_2(K)$ on $\mathfrak{t}$. 
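The distance just described is straightforward to compute from valuations. The following Python sketch (names are ours) does so for the standard lattice $\Lambda=\mathcal{O}^2$ and a lattice $\Lambda'=M\Lambda$, where, for the sake of a runnable example, we take integer matrices and the $p$-adic valuation as a stand-in for a completion $K$ with uniformizer $\pi$; for a $2\times 2$ matrix with entries in $\mathcal{O}$ the two invariant factors have valuations $m$ and $\nu(\det M)-m$, where $m$ is the minimal valuation of the entries.

\begin{verbatim}
def val(x, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def tree_distance(M, p):
    # M = (a, b, c, d) is an integer matrix with nonzero determinant; the classes
    # [O^2] and [M O^2] are at distance |v(det M) - 2*min v(entries)| in the tree.
    a, b, c, d = M
    m = min(val(x, p) for x in (a, b, c, d) if x != 0)
    return abs(val(a * d - b * c, p) - 2 * m)

assert tree_distance((1, 0, 0, 1), 2) == 0       # the same class
assert tree_distance((2, 0, 0, 8), 2) == 2       # elementary divisors 2 and 8
assert tree_distance((2, 2, 2, 6), 2) == 1       # invariant factors 2 and 4
\end{verbatim}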
An order in $\mathbb{M}_2(K)$ is a lattice with a ring structure induced by the multiplication of $\mathbb{M}_2(K)$. We say that an order is maximal when it fails to be contained in any other order. One can reinterpret the Bruhat-Tits tree for $\text{SL}_2$ in several ways. One of these arises from the following remark. There exists a bijective map from the vertex set of $\mathfrak{t}(K)$ to the set of maximal orders in $\mathbb{M}_2(K)$. Indeed, this function is defined by $[\Lambda] \mapsto \mathrm{End}_{\mathcal{O}}(\Lambda)$, which is well defined, since the endomorphism rings of $\Lambda$ and $x\Lambda$ coincide for any $x \in K^{*}$. Moreover, under this identification, two maximal orders $\mathfrak{D}$ and $\mathfrak{D}'$ are neighbors if the pair $ \left\lbrace \mathfrak{D}, \mathfrak{D}'\right\rbrace$ is $\mathrm{GL}_2(K)$-conjugate to the pair $ \left\lbrace \sbmattrix {\oink}{\oink}{\oink}{\oink}, \sbmattrix {\oink}{\pi^{-1} \oink}{\pi \oink}{\oink} \right\rbrace.$ Another reinterpretation of the Bruhat-Tits tree for $\text{SL}_2$ comes from the topological structure of $K$, which is very useful in order to have a concrete representation of its visual limit. We denote by $B_a^{|r|}$ the closed ball in $K$ whose center is $a$ and radius is $|\pi^r|$. Then, we can define the function $\Sigma$ between the set of closed balls and the set of maximal orders in $\mathbb{M}_2(K)$ by $ B_a^{|r|} \mapsto \mathrm{End}_{\mathcal{O}}( \Lambda_B ),$ where $\Lambda_B = \left\langle \binom{a}{1}, \binom{\pi^r}{0} \right\rangle $. It follows from \cite[\S 4]{AAC} that $\Sigma$ is bijective. Thus, this induces a correspondence between the vertex set of $\mathfrak{t}=\mathfrak{t}(K)$ and the set of closed balls in $K$. Moreover, if we say that two balls are neighbors whenever one is a proper maximal sub-ball of the other, then $\Sigma$ induces an isomorphism of graphs. In other words, under the previous definition, we have that two balls $B$ and $B'$ are neighbors precisely when $\Sigma(B)$ and $\Sigma(B')$ are neighbors. So, by using this reinterpretation of the Bruhat-Tits tree in terms of balls, it is straightforward that any ray $\mathfrak{r}$ in $\mathfrak{t}$ satisfies either $V(\mathfrak{r})=\left\lbrace B_{a}^{|r+n|} : n \in \mathbb{Z}_{\geq 0} \right\rbrace$, for certain $a \in K$ and $r \in \mathbb{Z}$, or $V(\mathfrak{r})=\left\lbrace B_{0}^{|r-n|} : n \in \mathbb{Z}_{\geq 0} \right\rbrace$, for certain $r \in \mathbb{Z}$. In the first case, the visual limit of $\mathfrak{r}$ can be identified with $a \in K$, and, in the second, we identify it with the point at infinity $\infty$. This brief remark shows that the visual limit of the Bruhat-Tits tree $\mathfrak{t}=\mathfrak{t}(K)$ is in natural correspondence with the $K$-points of the projective line $\mathbb{P}^1$. In all that follows, the equivalence classes of rays in $\partial_{\infty}(\mathfrak{t})$ are called ends of $\mathfrak{t}$. This set of ends is acted on naturally by the group $\mathrm{GL}_2(K)$, via Moebius transformations with coefficients in $K$. In fact, this action is compatible with the previously defined action of $\mathrm{GL}_2(K)$ on lattices, or the subsequent action on balls induced by the former (cf.~\cite[\S 4]{AAC}). It follows from the density of $k$ in $K$ that, for any finite line $\mathfrak{p}$ of $\mathfrak{t}$, there is a ray containing $\mathfrak{p}$ whose end corresponds to a rational element $s \in \mathbb{P}^1(k) \subset \mathbb{P}^1(K)$.
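In the ball model it is also easy to list the neighbors of a given vertex explicitly. The following Python sketch (names are ours) does this for the subtree of balls of radius at most $1$ in $\mathbb{Q}_p$, used here only as a concrete discretely valued field: the ball $B=a+p^{r}\mathbb{Z}_p$, encoded by the pair $(a \bmod p^{r}, r)$, has as neighbors its $p$ maximal proper sub-balls and the unique ball containing it as a maximal sub-ball (the latter is formed only when $r>0$, so as to stay inside the chosen subtree); in the full tree every vertex therefore has valency $p+1$, in agreement with the regularity of $\mathfrak{t}$.

\begin{verbatim}
def ball_neighbors(a, r, p):
    # The vertex is the ball a + p^r Z_p, encoded as (a mod p^r, r) with r >= 0.
    subballs = [((a + j * p ** r) % p ** (r + 1), r + 1) for j in range(p)]
    superball = [(a % p ** (r - 1), r - 1)] if r > 0 else []
    return subballs + superball

# The ball 1 + 3 Z_3 has the three sub-balls 1, 4, 7 (mod 9) and the super-ball Z_3.
assert ball_neighbors(1, 1, 3) == [(1, 2), (4, 2), (7, 2), (0, 0)]
\end{verbatim}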
\section{On combinatorially finite quotients of the Bruhat-Tits tree}\label{Section comb finite} We keep the notation from the last section. Here we give a detailed description of the quotient graphs of the Bruhat-Tits tree $\mathfrak{t}$ by certain subgroups of $\mathrm{GL}_2(k)$. In order to do this, we introduce the following notion. \begin{defi}\label{def good quotient} Let $H$ be a subgroup of $\mathrm{GL}_2(k)$. We say that $H$ \textit{closes enough umbrellas} if there exists a finite family of rays $ \mathfrak{R}_{H}= \lbrace \mathfrak{r}_i \rbrace_{i=1}^{\gamma} \subset \mathfrak{t}$, each with a vertex set $\lbrace v_{n}(i) \rbrace_{n>0}^{\infty}$, where $v_n(i)$ and $v_{n+1}(i)$ are neighbors, satisfying each of the following statements: \begin{itemize} \item[(a)] The set of ends of all rays in $\mathfrak{R}_{H}$ is a representative system of $H \backslash \mathbb{P}^1(k)$. \item[(b)] $H \backslash \mathfrak{t}$ is obtained by attaching all the images $\overline{\mathfrak{r}_i} \subseteq H \backslash \mathfrak{t}$ to a certain finite graph $Y_{H}$. \item[(c)] No $\mathfrak{r}_i$ contains a pair of vertices in the same $H$-orbit, and $\overline{\mathfrak{r}_i} \cap \overline{\mathfrak{r}_j} = \emptyset$, for each $i \neq j$. \item[(d)] For each index $i$ and each $n>0$, we have $ \mathrm{Stab}_H(v_{n}(i))\subseteq \mathrm{Stab}_H(v_{n+1}(i))$. \item[(e)] $\mathrm{Stab}_H(v_{n}(i))$ acts transitively on the set of neighboring vertices in $\mathfrak{t}$ of $v_{n}(i)$, other than $v_{n+1}(i)$. \end{itemize} In particular, if $H$ closes enough umbrellas, then $H \backslash \mathfrak{t}$ is combinatorially finite. Moreover, for any ray $\mathfrak{r} \subset \mathfrak{t}$ whose visual limit belongs to $\mathbb{P}^1(k)$, there exists a subray $\mathfrak{r}' \subseteq \mathfrak{t}$ satisfying conditions (d) and (e). Note that the notion of ``closing umbrellas'' corresponds to these two statements, while (a), (b) and (c) convey the idea of ``closing \textit{enough} umbrellas'', so as to have a good quotient graph. \end{defi} We say that a subgroup $H$ of $\mathrm{GL}_2(k)$ is net when each element in $H$ fails to admit a root of unity different from one as an eigenvalue. It follows from \cite[Chapter II, \S 2.1- \S 2.3]{Se2} that every net subgroup of $\mathrm{GL}_2(k)$ closes enough umbrellas. We say that two groups are commensurable if they have a common finite index subgroup. The notion of ``closing enough umbrellas'' behaves well when we pass to commensurable groups as is shown in the following results. This is probably known to experts but, as far as we are aware, a precise reference does not exist. \begin{theorem}\label{Teo comb finito} Let $H $ be a discrete subgroup of $\mathrm{GL}_2(k)$. Let $H' \subseteq \mathrm{GL}_2(k)$ be a group that is commensurable with $H$. If $H$ closes enough umbrellas, then $H'$ also closes enough umbrellas. \end{theorem} In order to prove this theorem, we analyze separately in the following two propositions the cases of subgroups of $H$ and of groups containing $H$. \begin{prop}\label{prop comb finito up} Let $H$ be a subgroup of $\mathrm{GL}_2(k)$. Assume that $H$ closes enough umbrellas. Let $H'\subset \mathrm{GL}_2(k)$ be a group containing $H$ as a finite index normal subgroup. Then, $H'$ also closes enough umbrellas. \end{prop} In order to prove the proposition, we need the following lemma. \begin{lemma}\label{lemma comb finite} Let $\mathfrak{g}$ be a combinatorially finite graph, and let $G$ be a finite group acting on this graph.
Then, each cuspidal ray $\mathfrak{r}$ in $\mathfrak{g}$ has a finite number of vertices in the same $G$-orbit. In particular, $\mathfrak{r}$ has a subray whose image in $G \backslash \mathfrak{g}$ is a cuspidal ray, and hence $G \backslash \mathfrak{g}$ is a combinatorially finite graph. \end{lemma} \begin{proof} By definition we have that there exists a set of rays $\mathfrak{R}=\lbrace \tilde{\mathfrak{r}}_i \rbrace_{i=1}^{\gamma}$ all contained in $\mathfrak{g}$, such that $\mathfrak{g}$ is obtained by attaching all $\tilde{\mathfrak{r}}_i$ to a certain finite graph $Y$. Let $\tilde{\mathfrak{r}}$ be a ray in $\mathfrak{R}$. Since $G$ acts simplicially on $\mathfrak{g}$, for each $g \in G$, the graph $g \cdot \tilde{\mathfrak{r}}$ is also a ray in $\mathfrak{g}$. Then, since $\mathfrak{g}$ is combinatorially finite, $g \cdot \tilde{\mathfrak{r}}$ has the same visual limit as some ray in $\mathfrak{R}$. First assume that $\partial_{\infty}(\tilde{\mathfrak{r}})=\partial_{\infty}(g \cdot \tilde{\mathfrak{r}})$. Then, $\mathfrak{r}^{\circ}:= \tilde{\mathfrak{r}} \cap (g \cdot \tilde{\mathfrak{r}})$ is a ray. Since each non-initial vertex of $\tilde{\mathfrak{r}}$ and $g \cdot \tilde{\mathfrak{r}}$ has valency two, we get $\mathfrak{r}^{\circ}= \tilde{\mathfrak{r}}$ or $\mathfrak{r}^{\circ}=g \cdot \tilde{\mathfrak{r}}$. In other words, $\tilde{\mathfrak{r}} \subseteq g \cdot \tilde{\mathfrak{r}}$ or $\tilde{\mathfrak{r}} \supseteq g \cdot \tilde{\mathfrak{r}}$. Assume that $\tilde{\mathfrak{r}} \subseteq g \cdot \tilde{\mathfrak{r}}$, then $$\tilde{\mathfrak{r}}\subseteq g \cdot \tilde{\mathfrak{r}} \subseteq \cdots \subseteq g^k \cdot \tilde{\mathfrak{r}} \subseteq g^{k+1} \cdot \tilde{\mathfrak{r}}, \text{ for all } k \in \mathbb{Z}_{\geq 0}.$$ Since $G$ is finite, we get that $\tilde{\mathfrak{r}}=g \cdot \tilde{\mathfrak{r}}$. By an analogous argument we also prove that $\tilde{\mathfrak{r}}=g \cdot \tilde{\mathfrak{r}}$, when $\tilde{\mathfrak{r}} \supseteq g \cdot \tilde{\mathfrak{r}}$. We conclude that $g$ fixes every vertex in this case. Now, assume that the visual limit of $g \cdot \tilde{\mathfrak{r}}$ is not $\partial_{\infty}(\tilde{\mathfrak{r}})$. Then, $\tilde{\mathfrak{r}} \cap (g \cdot \tilde{\mathfrak{r}})$ is a finite graph. So, for each index $i$, we define the ray $\tilde{\mathfrak{r}}_i'$ as the unique unbounded connected component of $$\tilde{\mathfrak{r}}_i \smallsetminus \left( \bigcup_{\substack{ h \in G \\ \partial_{\infty}(\tilde{\mathfrak{r}}_i) \neq \partial_{\infty}(h \cdot \tilde{\mathfrak{r}}_i) } } \tilde{\mathfrak{r}}_i \cap (h \cdot \tilde{\mathfrak{r}}_i) \right) . $$ By definition, and by the final statement in last paragraph, the ray $\tilde{\mathfrak{r}}_i'$ does not have two vertices in the same $G$-orbit. Since $\tilde{\mathfrak{r}}_i$ and $\tilde{\mathfrak{r}}_i'$ differ by a finite graph, the first assertion follows. In order to prove the last assertion, we say that $\tilde{\mathfrak{r}}_i'$ and $\tilde{\mathfrak{r}}_j'$ are $G$-equivalent if $\partial_{\infty}(\tilde{\mathfrak{r}}'_i) = \partial_{\infty}(g \cdot \tilde{\mathfrak{r}}_j')$, for some $g=g(i,j) \in G$. So, we define $\mathfrak{r}''_i \subseteq G \backslash \mathfrak{g}$ as the intersection of the images by $\pi:\mathfrak{g} \to G \backslash \mathfrak{g}$ of all rays $\tilde{\mathfrak{r}}_j'$ in the $G$-equivalence class of $\tilde{\mathfrak{r}}_i'$. We claim that $\mathfrak{r}''_i$ is a cuspidal ray in $G \backslash \mathfrak{g}$. 
Indeed, any element $g \in G$ sending $\partial_{\infty}(\tilde{\mathfrak{r}}'_i)$ to $\partial_{\infty}(\tilde{\mathfrak{r}}'_j)$ gives an injective simplicial correspondence between the vertices of the two rays, whose image contains a pre-image in $\mathfrak{g}$ of $\mathfrak{r}_i''$. This correspondence is independent of the choice of $g$, since a different choice $g'$ defines an element $g'g^{-1}$ fixing every vertex in $\tilde{\mathfrak{r}}'_i$. This proves the claim. Finally, let us define $Y''$ as the union of $\pi(Y)$ with all $\pi(\tilde{\mathfrak{r}}_j) \smallsetminus \mathfrak{r}''_i$, for all pairs $(i,j)$ whose corresponding rays $\tilde{\mathfrak{r}}_i'$ and $\tilde{\mathfrak{r}}_j'$ are $G$-equivalent. Thus, $G \backslash \mathfrak{g}$ is obtained by attaching all $\mathfrak{r}''_i$ to the finite graph $Y''$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop comb finito up}] By hypothesis there exists a family of rays $\mathfrak{R}_{H}=\lbrace \mathfrak{r}_j \rbrace_{j=1}^{\gamma}$ satisfying (a), (b), (c), (d) and (e) in Definition \ref{def good quotient}. For each index $j$, we denote by $\xi_j $ the visual limit of $\mathfrak{r}_j$, and by $\lbrace v_n(\xi_j) \rbrace_{n=1}^{\infty}$ the vertex set of $\mathfrak{r}_j$. So, we have $\mathbb{P}^1(k)= H \cdot \lbrace \xi_j \rbrace_{j=1}^{\gamma} $. Let $\lbrace \omega_i \rbrace_{i=1}^{\delta}$ be a set of representatives of $H' \backslash \mathbb{P}^1(k)$. Then, each $\omega_i$ can be written as $\omega_i=h \cdot \xi_j$ for some suitable index $j=j(i)$ and some suitable element $h=h(i) \in H$. Thus, we define $\widehat{\mathfrak{r}}'_i$ as the intersection of $h \cdot \mathfrak{r}_j$ with the unique ray in $\mathfrak{t}$ joining $B_0^{|0|}$ with $\omega_i$. Let us write $\mathrm{V}(\widehat{\mathfrak{r}}'_i)=\lbrace v_n(\omega_i) \rbrace_{n=1}^{\infty}$, where $v_{n}(\omega_i)$ and $v_{n+1}(\omega_i)$ are neighbors. By definition, for each vertex $v_n(\omega_i)$, there exists $m=m(n)\in \mathbb{Z}_{>0}$ such that $v_n(\omega_i)=h \cdot v_m(\xi_j)$. Thus, we have $\mathrm{Stab}_H(v_{n}(\omega_i))=h \mathrm{Stab}_H(v_{m}(\xi_j)) h^{-1}$, where $H \subseteq H'$. Hence, condition (e) follows. Let $G$ be the finite group $H'/H$. Note that $H' \backslash \mathbb{P}^1(k)= G \backslash (H \backslash \mathbb{P}^1(k))$. Moreover, note that the quotient graph $H' \backslash \mathfrak{t}$ is the quotient of the combinatorially finite graph $H \backslash \mathfrak{t}$ by the finite group $G$. Then, it follows from Lemma \ref{lemma comb finite} that $H' \backslash \mathfrak{t}$ is combinatorially finite, and that for each ray $\overline{\mathfrak{r}_j}$ in $H \backslash \mathfrak{t}$ there exists a subray $\tilde{\mathfrak{r}_j}^{\circ}$ not containing two vertices in the same $G$-orbit. Let $\mathfrak{r}_j^{\circ} \subseteq \mathfrak{r}_j\subset \mathfrak{t}$ be a lift of $\tilde{\mathfrak{r}_j}^{\circ}$. So, for each index $i$, we define $\mathfrak{r}_i'$ as the intersection of $\widehat{\mathfrak{r}}_i'$ with $h(i) \cdot \mathfrak{r}_j^{\circ}$. We write $\mathrm{V}(\mathfrak{r}_i')=\lbrace v_n(\omega_i) \rbrace_{n=N_i}^{\infty}$, where $N_i>0$. Then, for each $n \geq N_i+1$, the vertices $v_{n-1}(\omega_i)$ and $v_{n+1}(\omega_i)$ are not in the same $H'$-orbit. So, since, by condition (e), all other neighbors are in the same $\mathrm{Stab}_{H'}(v_n(\omega_i))$-orbit as $v_{n-1}(\omega_i)$, we see that $\mathrm{Stab}_{H'}(v_n(\omega_i))$ stabilizes $v_{n+1}(\omega_i)$, i.e. condition (d) holds.
In order to check condition (c) on $\mathfrak{R}_{H'}:= \lbrace \mathfrak{r}_i' \rbrace_{i=1}^{\delta}$, we just have to prove that the projections $\overline{\mathfrak{r}_i'}$ and $\overline{\mathfrak{r}_l'}$ to $H' \backslash \mathfrak{t}$ do not intersect when $i \neq l$ in $\lbrace 1, \cdots, \delta \rbrace$. Indeed, it follows from Lemma \ref{lemma comb finite}, and the construction of the rays $\mathfrak{r}_i'$, that $\overline{\mathfrak{r}_i'} \cap \overline{\mathfrak{r}_l'} \neq \emptyset $ if and only if $\overline{\mathfrak{r}_i'}=\overline{\mathfrak{r}_l'}$, and also if and only if their visual limits coincide. By definition, the last assertion does not hold if $i \neq l$. Finally, condition (b) is an immediate consequence of Lemma \ref{lemma comb finite}, and condition (a) is immediate by construction. \end{proof} \begin{prop}\label{prop comb finito down} Let $H$ be a discrete subgroup of $\mathrm{GL}_2(k)$. Assume that $H$ closes enough umbrellas. Then, any finite index subgroup $H_0$ of $H$ closes enough umbrellas. \end{prop} \begin{proof} Since any finite index subgroup of $H$ contains a finite index subgroup that is normal in $H$, by Proposition \ref{prop comb finito up} we may assume that $H_0$ is normal in $H$. By hypothesis there exists a family of rays $\mathfrak{R}_{H}=\lbrace \mathfrak{r}_j \rbrace_{j=1}^{\gamma}$ satisfying (a), (b), (c), (d) and (e) in Definition \ref{def good quotient}. For each index $j$, we denote by $\xi_j $ the visual limit of $\mathfrak{r}_j$, and by $\lbrace v_n(\xi_j) \rbrace_{n=1}^{\infty}$ the vertex set of $\mathfrak{r}_j$. So, we have $\mathbb{P}^1(k)= H \cdot \lbrace \xi_j \rbrace_{j=1}^{\gamma} $. Let $\lbrace \mu_i \rbrace_{i=1}^{\beta}$ be a set of representatives of $H_0 \backslash \mathbb{P}^1(k)$. Then, each $\mu_i$ can be written as $\mu_i=h \cdot \xi_j$ for some suitable index $j=j(i)$ and some suitable element $h=h(i) \in H$. Thus, we define $\widehat{\mathfrak{r}}_i= h \cdot \mathfrak{r}_j$, i.e. $\mathrm{V}(\widehat{\mathfrak{r}}_i)=\lbrace v_n(\mu_i) \rbrace_{n=1}^{\infty}$, where $v_n(\mu_i)=h \cdot v_n(\xi_j)$. So, we have $$\mathrm{Stab}_{H_0}(v_{n}(\mu_i))= H_0 \cap h \mathrm{Stab}_H(v_{n}(\xi_j)) h^{-1}.$$ In particular, condition (d) for $H_0$ follows immediately. Now, we check condition (c) for $H_0$. Indeed, assume that there exist $v_0 \in \mathrm{V}(\widehat{\mathfrak{r}}_k)$, $w_0 \in \mathrm{V}(\widehat{\mathfrak{r}}_l)$ and $h_0 \in H_0$ such that $h_0 \cdot v_0 = w_0$. Write $v_0= h(k) \cdot v$ and $w_0= h(l) \cdot w $, with $v \in \mathrm{V}(\mathfrak{r}_{j(k)})$ and $w \in \mathrm{V}(\mathfrak{r}_{j(l)})$. Then $h \cdot v=w$ with $h= h(l)^{-1} h_0 h(k) \in H$, which contradicts condition (c) for $H$. So, condition (c) for $H_0$ follows. Let $G$ be the finite group $H/H_0$. Let $\pi: \mathrm{Stab}_H(v_n(\mu_i)) \to G$ be the map defined by composing the natural inclusion $\mathrm{Stab}_H(v_n(\mu_i)) \to H$ with the projection $H \to G$.
Since, for each $n \in \mathbb{Z}_{\geq 1}$, we have $\mathrm{ker}(\pi)=\mathrm{Stab}_{H_0}(v_n(\mu_i))$, we obtain from condition (d) for $H$ the chain of containments $$ \mathrm{Stab}_H(v_{1}(\mu_i))/\mathrm{Stab}_{H_0}(v_{1}(\mu_i)) \subseteq \cdots \subseteq \mathrm{Stab}_H(v_{n}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n}(\mu_i)) \subseteq \cdots$$ Then, since $G$ is finite, there exists $t_0=t_0(i) \in \mathbb{Z}_{\geq 1}$ such that, for each $n \geq t_0$, \begin{equation}\label{eq stab in closing umbrellas} \mathrm{Stab}_H(v_{n}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n}(\mu_i)) = \mathrm{Stab}_H(v_{n+1}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n+1}(\mu_i)). \end{equation} Recall that, since $K$ is locally compact, we have that $\mathrm{Stab}_{\mathrm{GL}_2(K)}(v_{n}(\mu_i))$ is compact. Then, for each discrete subgroup $D$, for instance $H$ or $H_0$, we get that $\mathrm{Stab}_{D}(v_{n}(\mu_i))$ is finite. Then, Equation \eqref{eq stab in closing umbrellas} implies that, for each $n > t_0$, $$ |\mathrm{Stab}_{H_0}(v_{n}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n-1}(\mu_i))|= |\mathrm{Stab}_H(v_{n}(\mu_i))/\mathrm{Stab}_{H}(v_{n-1}(\mu_i)) | . $$ In particular, the injective map $$\psi: \mathrm{Stab}_{H_0}(v_{n}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n-1}(\mu_i)) \to \mathrm{Stab}_{H}(v_{n}(\mu_i))/\mathrm{Stab}_{H}(v_{n-1}(\mu_i)),$$ induced by the inclusion $\iota: \mathrm{Stab}_{H_0}(v_{n}(\mu_i)) \to \mathrm{Stab}_{H}(v_{n}(\mu_i))$, is a bijection. It follows from condition (e) for $H$ and the orbit-stabilizer relation that the set $\mathrm{Stab}_{H}(v_{n}(\mu_i))/\mathrm{Stab}_{H}(v_{n-1}(\mu_i))$ parametrizes all the neighboring vertices in $\mathfrak{t}$ of $v_n(\mu_i)$ other than $v_{n+1}(\mu_i)$. So, since $\psi$ is a bijection, we deduce that the set $ \mathrm{Stab}_{H_0}(v_{n}(\mu_i))/\mathrm{Stab}_{H_0}(v_{n-1}(\mu_i))$ also parametrizes the aforementioned set of vertices. In other words, up to replacing $\widehat{\mathfrak{r}}_i$ by the ray $\mathfrak{r}_i'$ defined by the vertex set $\lbrace v_n(\mu_i)\rbrace_{n=t_0+1}^{\infty}$, condition (e) follows for $H_0$. Now, note that the graph $H \backslash \mathfrak{t}$ is the quotient of the graph $H_0 \backslash \mathfrak{t}$ by the finite group $G$. In particular, the pre-image of the finite graph $Y_{H}$ by the projection $H_0 \backslash \mathfrak{t} \to H \backslash \mathfrak{t} $ is a finite graph. So, since $\widehat{\mathfrak{r}}_i \smallsetminus \mathfrak{r}_i'$ is also a finite graph, we conclude that condition (b) holds for $H_0$. Condition (a) for $H_0$ follows by definition. Thus, we conclude the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Teo comb finito}] Let $H_0$ be a common finite index subgroup of $H$ and $H'$. By replacing $H_0$ by a smaller subgroup if needed, we can assume that $H_0$ is normal in $H'$. Then, it follows from Proposition \ref{prop comb finito down} that $H_0$ closes enough umbrellas. By applying Proposition \ref{prop comb finito up} to $H_0$ and $H'$, we conclude that $H'$ also closes enough umbrellas. \end{proof} As in \S \ref{Section on the principal problem} and \S \ref{subsection BTT}, let $k$ be the function field of a smooth, projective, geometrically integral curve $\mathcal{C}$ defined over a field $\mathbb{F}$. Let $Q$ be a closed point in $\mathcal{C}$, and set $U'=\mathcal{C} \smallsetminus \lbrace Q\rbrace$. Denote by $R'$ the ring of regular functions on $U'$. Let $\nu_Q$ be the discrete valuation on $k$ defined by the closed point $Q$.
Let us denote by $k_Q$ the completion of $k$ with respect to $\nu_Q$. In the remainder of this section, we give a detailed description of certain quotients of the Bruhat-Tits tree $\mathfrak{t}=\mathfrak{t}(k_Q)$ defined from $\mathrm{SL}_2$ and $k_Q$. In order to do this, let us introduce the following definition: \begin{defi}\label{definition of max and eichler orders} A $\mathcal{C}$-order of maximal rank $\mathfrak{R}$ is a locally free sheaf of $\mathcal{O}_{\mathcal{C}}$-algebras whose generic fiber is $\mathbb{M}_2(k)$. We say that a $\mathcal{C}$-order $\mathfrak{D}$ is maximal when it is maximal with respect to inclusion. An Eichler $\mathcal{C}$-order $\mathfrak{E}$ is the sheaf-theoretical intersection of two maximal $\mathcal{C}$-orders. \end{defi} \begin{ex} Let us denote by $\mathfrak{D}_0$ the sheaf $\mathfrak{D}_0=\mathbb{M}_2(\mathcal{O}_{\mathcal{C}})$. Then $\mathfrak{D}_0$ is a maximal $\mathcal{C}$-order. Moreover $\mathfrak{D}_0(U')^{*}=\mathrm{GL}_2(R')$. \end{ex} \begin{corollary}\label{Cor comb fin} Let $H\subset \mathrm{GL}_2(k)$ be a group commensurable with $\mathrm{GL}_2(R')$. Then $H$ closes enough umbrellas. In particular, for any Eichler $\mathcal{C}$-order $\mathfrak{E}$, we have that $\tilde{H}=\mathfrak{E}(U')^{*}$ and $\tilde{\Gamma}= \mathrm{Stab}_{\mathrm{GL}_2(k)} (\mathfrak{E}(U'))$ close enough umbrellas. \end{corollary} \begin{proof} First, it follows from \cite[Chapter II, \S 2.1- \S 2.3]{Se2} that $\mathrm{GL}_2(R')$ closes enough umbrellas. Then, it follows from Theorem \ref{Teo comb finito} that any group $H \subset \mathrm{GL}_2(k)$ commensurable with $\mathrm{GL}_2(R')$ closes enough umbrellas. Now, we claim that $\tilde{H}$ and $\tilde{\Gamma}$ are commensurable with $\mathrm{GL}_2(R')$. Indeed, let $\mathfrak{D}$ be a maximal $\mathcal{C}$-order containing $\mathfrak{E}$. Let us fix $\tilde{\Gamma}_0= \text{Stab}_{\text{GL}_2(k)}(\mathfrak{D}(U'))$. Note that $\tilde{\Gamma}_0$ and $\tilde{\Gamma}$ are commensurable, since they contain the respective finite index subgroups $\tilde{H}_0= \mathfrak{D}(U')^{*}$ and $\tilde{H}$, where $\tilde{H}$ is a finite index subgroup of $\tilde{H}_0$ (cf.~\cite[Theorem 1.2]{A2}). Moreover, note that $\tilde{H}_0$ belongs to the same commensurability class as $\mathrm{GL}_2(R')$, since $\mathfrak{D} \cap \mathfrak{D}_0$ is a finite index Eichler $\mathcal{C}$-order simultaneously contained in $\mathfrak{D}$ and $\mathfrak{D}_0$. \end{proof} \begin{rem} Theorem \ref{Teo comb finito} and Corollary \ref{Cor comb fin} can be easily extended to subgroups of $\mathrm{PGL}_2(k)$. \end{rem} \section{Spinor class fields}\label{Section Spinor} In this section we introduce the basic definitions and results about completions, spinor genera and spinor class fields of orders. See \cite{abelianos} for details. We denote by $|\mathcal{C}|$ the set of closed points in the smooth projective geometrically integral curve $\mathcal{C}$, and we fix $P_{\infty} \in |\mathcal{C}|$. Let $U_0$ be the affine open set $ \mathcal{C} \smallsetminus \lbrace P_{\infty} \rbrace$. For every point $P \in |\mathcal{C}|$, we denote by $k_P$ the completion at $P$ of the function field $k=k(\mathcal{C})$, and by $\mathcal{O}_{P}$ the ring of integers of $k_P$. For any open set $U \subseteq \mathcal{C}$, we define the ad\`ele ring $\ad_{U}$ of $U$ as the subring of $\prod_{P \in |U|} k_P$ consisting of the tuples $a=(a_{P})_{P}$ where $a_P$ lies in the ring $\mathcal{O}_{P}$ for all but finitely many closed points $P$.
We also define the id\`ele group $\mathbb{I}_U$ as the group of invertible ad\`eles $\ad_U^{*}$. We write $\ad=\ad_{\mathcal{C}}$ and $\mathbb{I}=\mathbb{I}_{\mathcal{C}}$. A $\mathcal{C}$-lattice or $\mathcal{C}$-bundle in a finite dimensional $k$-vector space $V$ is a locally free subsheaf of the constant sheaf $V$. For any sheaf of groups $\Lambda$ on $\mathcal{C}$ we denote by $\Lambda(U)$ its group of $U$-sections. In particular, this convention applies to $\mathcal{C}$-lattices. By definition, the completion at $P$ of $\Lambda$, denoted $\Lambda_P$, is the topological closure of $\Lambda(U)$ in $V_P:=V \otimes_{k} k_{P}$, where $U$ is an affine open subset containing $P$. Thus defined, $\Lambda_P$ does not depend on the choice of $U$. Note that, for every affine open subset $U\subseteq \mathcal{C}$, the $\oink_\mathcal{C}(U)$-module $\Lambda(U)$ is an $\oink_\mathcal{C}(U)$-lattice. The same property holds for orders. As in the affine context, every $\mathcal{C}$-lattice is determined by its set of local completions $\lbrace \Lambda_P: P\in |\mathcal{C}| \rbrace$, as follows: \begin{enumerate} \item[(a)] For any two lattices $\Lambda$ and $\Lambda'$ in $V$, we have $\Lambda_P=\Lambda'_P$ for almost all $P$, \item[(b)] if $\Lambda_P=\Lambda'_P$ for all $P$, then $\Lambda=\Lambda'$, and \item[(c)] every family $\{\Lambda''(P)\}_P$ of local lattices satisfying $\Lambda''(P)=\Lambda_P$ for almost all $P$ is the family of completions of a global lattice $\Lambda''$ in $V$. \end{enumerate} In particular, the preceding properties apply to orders. Given a finite dimensional vector space $W$ over $k$, we define its adelization $W_{\ad}$ as $W_{\ad}=W \otimes_{k} \ad$. This is an $\ad$-module isomorphic to $\ad^{\mathrm{dim}_k W}$. In particular, this definition applies to $W=\text{End}_k(V)$. We also define the adelization of a lattice $\Lambda$ by $\Lambda_{\mathbb{A}} = \prod_{P \in |\mathcal{C}|} \Lambda_P$, which is an open and compact subgroup of $V_{\mathbb{A}}$. Given an arbitrary $\mathcal{C}$-lattice $\Lambda$ and an adelic matrix $$a\in \mathrm{End}_\ad(V_\ad)= \big(\text{End}_k(V)\big)_{\mathbb{A}},$$ we define the adelic image $L=a\Lambda$ of $\Lambda$ as the unique $\mathcal{C}$-lattice satisfying $L_\ad=a\Lambda_\ad$. To each $\mathcal{C}$-lattice $\Lambda$ in $k^2$, we associate the $\mathcal{C}$-order $\Da_{\Lambda}=\mathrm{End}_{\oink_{\mathcal{C}}}(\Lambda)$ in the matrix algebra $\matrici_2(k)$, which is defined on every open set $U\subseteq \mathcal{C}$ by $$\Da_\Lambda(U)=\left\{a\in \matrici_2(k)\Big|a\Lambda(U)\subseteq\Lambda(U)\right\}.$$ This is a maximal $\mathcal{C}$-order in $\mathbb{M}_2(k)$ (cf.~Definition \ref{definition of max and eichler orders}). Moreover, every maximal $\mathcal{C}$-order in the two-by-two matrix algebra equals $\Da_\Lambda$, for some $\mathcal{C}$-lattice $\Lambda$ in $k^2$. In particular, if we fix a maximal $\mathcal{C}$-order $\Da$, then any other maximal $\mathcal{C}$-order in $\mathbb{M}_2(k)$ is equal to $\Da'=a\Da a^{-1}$, for some $a\in\mathrm{GL}_2(\ad)$. In general, if we fix a $\mathcal{C}$-order $\Da$ of maximal rank, then we can define the genus $\mathrm{gen}(\Da)$ of $\Da$ as the set of all $\mathcal{C}$-orders $a\Da a ^{-1}$, for $a\in \matrici_2(\ad)^*$. So, the previous statement is equivalent to the fact that the set of maximal $\mathcal{C}$-orders is a genus, which we denote by $\mathbb{O}_0$. Let $\mathfrak{D}$ be a $\mathcal{C}$-order of maximal rank, i.e. of rank $4$.
Let $U$ be either an affine open set of $\mathcal{C}$ or the full set $\mathcal{C}$. We define the $U$-spinor class field of $\mathfrak{D}$ as the field corresponding, via class field theory, to the subgroup $k^*H(\Da, U)\subseteq \mathbb{I}=\ad^*$, where \begin{equation}\label{Eq class fields} H(\Da, U)=\left \lbrace \mathrm{det}(a)|a\in\matrici_2(\ad)^*,\ a\Da(V) a^{-1}=\Da(V), \, \forall \, \, V \stackrel{\circ}{\subseteq} U \right \rbrace. \end{equation} The symbol $\stackrel{\circ}{\subseteq}$ above denotes an open subset. This field depends only on the genus $\mathbb{O}=\mathrm{gen}(\Da)$ of $\Da$, and we denote it by $\Sigma(\mathbb{O},U)$. When $U=\mathcal{C}$ we simplify the notation by using $\Sigma=\Sigma(\mathbb{O})$. Let $\mathbb{I} \to \mathrm{Gal}(\Sigma/k)$, $t\mapsto [t,\Sigma/k]$ be the Artin map on the id\`ele group (cf.~\cite[Chapter VI, \S 5, p. 387]{Neukirch}). There exists a well-defined distance map $\rho:\mathbb{O}\times\mathbb{O}\rightarrow\mathrm{Gal}\big(\Sigma/k\big)$, given by $\rho(\Da,\Da')=[\det(a),\Sigma/k]$, where $a\in\mathrm{GL}_2(\ad)$ is any adelic element satisfying $\Da'=a\Da a^{-1}$. The distance map has a multiplicative property, in the sense that, for any tuple $(\Da,\Da',\Da'')\in\mathbb{O}^3$, it satisfies $\rho(\Da,\Da'')=\rho(\Da,\Da')\rho(\Da',\Da'')$. The kernel of $\rho$ consists of the pairs $(\Da,\Da')$ such that $\Da(U)$ and $\Da'(U)$ are $\mathrm{GL}_2(\mathcal{O}_{\mathcal{C}}(U))$-conjugate for every affine open subset $U\subseteq \mathcal{C}$. In the case of maximal orders, the map defined above is $\rho_0:\mathbb{O}_0^2\rightarrow\mathrm{Gal}\big(\Sigma(\mathbb{O}_0)/k\big)$, and it can be characterized as follows: The image under $\rho_0$ of a pair $(\Da,\Da')$ of maximal orders is given by the formula $\rho_0(\Da,\Da')=[[D(\Da,\Da'),\Sigma(\mathbb{O}_0)/k]]$, where $D\mapsto [[D,\Sigma(\mathbb{O}_0)/k]]$ is the Artin map on divisors and the divisor $D(\Da,\Da')$ is defined in Equation \eqref{eq divisor def from two max or}. \section{Eichler orders and grids}\label{Section grids} In this section, the notation is as in \S \ref{Section Spinor}. \subsection{The grid defined from an Eichler order} In the local algebra $\matrici_2(k_P)$, any two $\mathcal{O}_{P}$-maximal orders are simultaneously $\mathrm{GL}_2(k_P)$-conjugate to the orders $\Da_P=\sbmattrix {\oink_P}{\oink_P}{\oink_P}{\oink_P}$ and $\Da'_P=\sbmattrix {\oink_P}{\pi_P^d\oink_P}{\pi_P^{-d}\oink_P}{\oink_P}$ for some $d \in \mathbb{Z}_{\geq 0}$, where $\pi_P$ is a local uniformizing parameter in $k_P$. So, we define the local distance $d_P$ between maximal orders in $\matrici_2(k_P)$ by taking $d_P(\Da_P,\Da'_P)=d$, where $d$ is as above. As we introduced in Definition \ref{definition of max and eichler orders}, an Eichler $\mathcal{C}$-order, or simply an Eichler order, is the intersection of two maximal $\mathcal{C}$-orders. This is a local definition in the sense that $\Da_P\cap\Da'_P=(\Da\cap\Da')_P$ for every pair of orders. Moreover, locally, for any Eichler order $\mathfrak{E}_{P}$ there exists a unique pair of maximal orders whose intersection is $\mathfrak{E}_{P}$. So, we define the level of a local Eichler order as the distance between the preceding maximal orders. Globally, there exists a well-defined distance map on the set of maximal $\mathcal{C}$-orders, whose image on a pair $(\Da, \Da')$ is the effective divisor \begin{equation}\label{eq divisor def from two max or} D=D(\Da,\Da')=\sum_{P\in|\mathcal{C}|}d_P(\Da_P,\Da'_P)P.
\end{equation} In particular, there exists a global level defined, on an Eichler $\mathcal{C}$-order $\Ea = \Da \cap \Da'$, as the distance $D(\Da,\Da')$. A useful property of the level function is that two local Eichler orders are $\mathrm{GL}_2(k_P)$-conjugate precisely when their local levels coincide. This property can be interpreted in terms of genera by saying that two Eichler $\mathcal{C}$-orders belong to the same genus exactly when they have the same global level. So, for any effective divisor $D$, there exists a genus of Eichler $\mathcal{C}$-orders of level $D$, which is denoted by $\mathbb{O}_D$. It follows from the characterization of the Bruhat-Tits tree in terms of maximal orders (cf.~\S \ref{Section BTT}) that there exists a bijective map between the set of local Eichler orders $ \Ea$ of level $\kappa$ and the set of finite lines $\mathfrak{p}$ of length $\kappa$ in the Bruhat-Tits tree. Formally, a local Eichler order $\mathfrak{E}$ corresponds to the finite line $\mathfrak{p}=\mathfrak{s}(\mathfrak{E})$ whose vertices are the maximal orders containing $\mathfrak{E}$. Let $\Ea$ be an Eichler $\mathcal{C}$-order of level $D=\sum_P n_P P$. Let us denote by $S(\mathfrak{E})$ the product of finite lines $S(\mathfrak{E})=\prod_P\mathfrak{s}(\mathfrak{E}_P)$, where $P$ belongs to the set of closed points such that $n_P>0$. This is called the grid of $\mathfrak{E}$. It follows from Property (c) in \S \ref{Section Spinor} that the set of maximal $\mathcal{C}$-orders containing $\Ea$ corresponds to the vertex set of $S(\Ea)$. Moreover, it is easy to see that this correspondence is compatible with the action of $\mathrm{PGL}_2(k)$ on Eichler $\mathcal{C}$-orders by conjugation. In order to compare different Eichler $\mathcal{C}$-orders, we fix an effective divisor $D=\sum_P n_P P$, and a finite set of places $T \supseteq \mathrm{Supp}(D)$. Denote by $\mathrm{Eich}(D, T)$ the set of Eichler $\mathcal{C}$-orders of level $D$ satisfying $\mathfrak{E}_Q = \mathbb{M}_2(\mathcal{O}_Q)$ for $Q \notin T$. Then, given an Eichler $\mathcal{C}$-order in $\mathrm{Eich}(D, T)$, its grid can be seen naturally as a subcomplex of the finite product of Bruhat-Tits trees $\prod_{P \in T} \mathfrak{t}(k_P)$. Any grid of the form $S(\mathfrak{E})$, for $\mathfrak{E} \in \mathrm{Eich}(D, T)$, is called a concrete $D$-grid. Note that the group $G_T=\mathrm{GL}_2(\mathcal{O}_{\mathcal{C}}(\mathcal{C} \smallsetminus T))$ acts on the set of concrete $D$-grids by conjugation. Indeed, we can define this action as the extension of the conjugacy action of $G_T$ on the set of maximal $\mathcal{C}$-orders to $D$-grids, which is valid since $G_T$ acts simplicially on each local tree. The orbits of concrete $D$-grids by this action are called abstract $D$-grids. Any representative of an abstract grid is called a concrete representative. Note that all these definitions depend on the set $T$. This is why it is important to consider the following result. \begin{prop}\cite[Proposition 3.1]{A5}\label{eichler=grillas} Let $D$ be an effective divisor. Then, there exists a finite set of places $ T $ containing $\mathrm{Supp}(D)$ such that every $\mathrm{PGL}_2(k)$-conjugacy class of Eichler $\mathcal{C}$-orders contains a representative in $\mathrm{Eich}(D, T)$. \end{prop} Let $Q$ be a closed point of $\mathcal{C}$, and write $D=D'+n_Q Q$, where $n_Q>0$ and the support of $D'$ fails to contain $Q$. Then, each concrete $D$-grid is a parallelotope having two concrete $D'$-grids as opposite faces.
These opposite faces are called the $Q$-faces of the $D$-grid. We say that two concrete $D'$-grids $S$ and $S'$ are $Q$-neighbors if there exists a concrete $D$-grid $\tilde{S}$, with $D=D'+Q$ and $Q \notin \mathrm{Supp}(D')$, such that $S$ and $S'$ are the $Q$-faces of $\tilde{S}$. Let $\mathfrak{D}$ be a maximal $\mathcal{C}$-order corresponding to a vertex $v$ in $S$. Then, there exists one and only one $Q$-neighbor $v'$ of $v$ among the vertices of $S'$. We call it the $Q$-neighbor of $v$ in $\tilde{S}$. \subsection{Classifying graphs} As an intermediate step to prove Theorem \ref{teo cusp}, we characterize a quotient graph of $\mathfrak{t}$ other than $ \mathfrak{t}_D=\mathrm{H}_D \backslash \mathfrak{t}$. In order to introduce this quotient structure, fix an effective divisor $D$, and let $\mathbb{O}_D$ be the genus containing all Eichler $\mathcal{C}$-orders of level $D$. Let $Q \in |\mathcal{C}|$ be a closed point not contained in $\mathrm{Supp}(D)$. Let $V_0$ be the affine open set $\mathcal{C} \smallsetminus \lbrace Q \rbrace$. Then, any order in $\mathbb{O}_D$ is maximal at $Q$, i.e. its completion at $Q$ is maximal. For any $\mathfrak{E} \in \mathbb{O}_D$, we define the C-graph $C_Q(\mathfrak{E})=\Gamma\backslash\mathfrak{t}$, where $\mathfrak{t}=\mathfrak{t}(k_Q)$, and $\Gamma$ is the stabilizer of $\mathfrak{E}(V_0)$ in $\mathrm{PGL}_2(k)$. Note that it follows from Corollary \ref{Cor comb fin} that $C_Q(\mathfrak{E})$ is combinatorially finite. Two Eichler $\mathcal{C}$-orders $\mathfrak{E}$ and $\mathfrak{E}'$ such that $\rho(\mathfrak{E} ,\mathfrak{E}')$ belongs to the group generated by $[[Q, \Sigma(\mathbb{O}_D)/k]]$ define isomorphic quotient graphs. Indeed, it follows from \cite[\S 2]{abelianos} that, if $\rho(\mathfrak{E} ,\mathfrak{E}') \in \left \langle [[Q, \Sigma(\mathbb{O}_D)/k]] \right \rangle$, then $\mathfrak{E}(V_0)$ and $\mathfrak{E}'(V_0)$ are $\mathrm{GL}_2(k)$-conjugate. In this case we write $\mathfrak{E}\sim \mathfrak{E}'$. We denote by $\mathfrak{Sp}(\mathbb{O}_D, Q)$ the quotient set of $\mathbb{O}_D$ by the previous equivalence relation. The classifying graph $C_{Q}(\mathbb{O}_D)$ is the disjoint union of the finitely many C-graphs corresponding to all elements in $\mathfrak{Sp}(\mathbb{O}_D, Q)$. In particular, it is combinatorially finite. All definitions and conventions introduced in \S \ref{Section BTT} apply to $C_{Q}(\mathbb{O}_D)$ by adapting them to the context of disjoint unions of graphs. In particular, by the cusp set of $C_{Q}(\mathbb{O}_D)$ we mean the disjoint union of the cusp sets of all connected components of $C_{Q}(\mathbb{O}_D)$. In the following section we study the combinatorial structure of the classifying graphs of Eichler orders. With this in mind, we make frequent use of the next result: \begin{prop}\cite[Proposition 3.2]{A5}\label{func corresp D-grillas} Let $D$ be an effective divisor supported away from the place $Q$. The vertices of the classifying graph $C_Q(\mathbb{O}_D)$ are in bijection with the abstract $D$-grids, while its pairs of mutually reverse edges are in bijection with the abstract $(D+Q)$-grids. The endpoints of an edge are the vertices of $C_Q(\mathbb{O}_D)$ corresponding to the $Q$-faces of the grid corresponding to that edge. \end{prop} \subsection{Spinor class fields of Eichler orders} We finish this section by recalling some results about the spinor class field associated to the genus of Eichler $\mathcal{C}$-orders of level $D$.
Let $D$ be a divisor, which we write as $D=\sum_{i=1}^r n_i P_{i}$, where $P_i \neq P_{\infty}$, and let $U_0$ be the affine set $ \mathcal{C} \smallsetminus \lbrace P_{\infty} \rbrace$ as above. It follows from \cite[Theorem 1.2]{A13} that the spinor class field $\Sigma_D=\Sigma(\mathbb{O}_D)$ (resp. $\Sigma(\mathbb{O}_D,U_0)$), for Eichler $\mathcal{C}$-orders of level $D$, is the maximal subfield of $\Sigma_0=\Sigma(\mathbb{O}_0)$ (resp. $\Sigma(\mathbb{O}_0,U_0)$) splitting at every place $P_i$ for which $n_i$ is odd. \begin{prop}\label{gal grupo de clase} Let $J= \lbrace i: n_i \text{ is odd}\rbrace$. The Galois group $\mathrm{Gal}(\Sigma_D/k)$ is isomorphic to the abelian group $\mathrm{Pic}(\mathcal{C})/(2 \mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{j}}: j \in J \rangle)$. Using the same notation, $\mathrm{Gal}(\Sigma(\mathbb{O}_D,U_0)/k)$ is isomorphic to $\mathrm{Pic}(\mathcal{C})/(2 \mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle+ \langle \overline{P_{j}}: j \in J \rangle)$. \end{prop} \begin{proof} Let $L/F$ be a finite abelian extension (i.e. Galois with abelian Galois group) of global fields. It follows from \cite[Chapter VI, \S 6, Theorem 6.1 and Corollary 6.6]{Neukirch} that there exists an isomorphism from $\mathrm{Gal}(L/F)$ to $\mathbb{I}_F/F^{*} H(L)$, where $\mathbb{I}_F$ is the id\`ele group of $F$, and $H(L):=\lbrace N_{L/F}(a): a \in \mathbb{I}_L \rbrace$ is the kernel of the Artin map, which satisfies the following properties: \begin{itemize} \item[(i)] $Q$ is unramified in $L/F$ if and only if $\mathcal{O}_Q^{*} \subseteq F^{*} H(L)$. \item[(ii)] $Q$ splits completely in $L/F$ if and only if $F_Q^{*} \subseteq F^{*} H(L)$. \end{itemize} Apply this when $F$ is the global function field $k= \mathbb{F}(\mathcal{C})$ and $L$ is $\Sigma_D$. Recall that $H(\Sigma_0)$ equals $H(\mathbb{M}_2(\mathcal{O}_{\mathcal{C}}), \mathcal{C})$ as in Equation \eqref{Eq class fields}. Then, since the localization of $\mathbb{M}_2(\mathcal{O}_{\mathcal{C}})$ at $Q$ is $\mathbb{M}_2(\mathcal{O}_{Q})$, it is easy to see that $k_Q^{*2}\mathcal{O}_Q^{*} \subseteq H(\Sigma_0)$, for all closed points $Q \in \mathcal{C}$. In particular, since $\Sigma_D \subseteq \Sigma_0$, we obtain $ k_Q^{*2}\mathcal{O}_Q^{*} \subseteq H(\Sigma_0) \subseteq H(\Sigma_D)$. So, if we write $\mathbb{I}_{k, \infty}:=\prod_{Q \in \mathcal{C}} \mathcal{O}_Q^{*}$, then $\mathbb{I}_{k}^2 \mathbb{I}_{k, \infty}$ is contained in $k^{*}H(\Sigma_D)$. And, since $\mathbb{I}_k/ k^{*} \mathbb{I}_{k, \infty} \cong \mathrm{Pic}(\mathcal{C})$, the Galois group of $\Sigma_D/k$ is a quotient of $\mathrm{Pic}(\mathcal{C})/2\mathrm{Pic}(\mathcal{C})$. Let us write $e(Q)$ for the id\`ele whose coordinate at $Q$ is $\pi_Q$ and any other coordinate equals one. Since $\Sigma_D/k$ splits at $P_j$, with $j \in J$, we deduce from (ii) that all $e(P_j)$, with $j \in J$, belong to $k^{*}H(\Sigma_D)$. In particular, we obtain the inclusion $$k^{*} \mathbb{I}_k^{2} \mathbb{I}_{k, \infty} \langle e(P_j): j \in J \rangle \subseteq k^{*} H(\Sigma_D).$$ Furthermore, the maximality condition on $\Sigma_D$ implies equality. 
We conclude that $$\mathrm{Gal}(\Sigma_D/k) \cong \frac{\mathbb{I}_k}{k^{*} \mathbb{I}_k^{2} \mathbb{I}_{k, \infty} \langle e(P_j): j \in J \rangle} \cong \frac{\mathrm{Pic}(\mathcal{C})}{2 \mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{j}}: j \in J \rangle}.$$ Moreover, we can analogously prove that $\mathfrak{G} =\mathrm{Gal}(\Sigma(\mathbb{O}_D,U_0)/k)$ is isomorphic to $\mathrm{Pic}(\mathcal{C})/(2 \mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}}, \overline{P_{j}}: j \in J \rangle)$, by noting that to compute $\mathfrak{G}$ we no longer need a condition at the place $P_{\infty}$, so the corresponding local stabilizer must be replaced by the full local group $\mathrm{GL}_2(k_{P_{\infty}})$. \end{proof} Moreover, it follows from \cite[Proposition 6.1]{A2} that the corresponding distance function $\rho_D$ on the genus of Eichler $\mathcal{C}$-orders of level $D$ is related to $\rho_0$ through restriction, i.e. \begin{equation}\label{eq rho} \rho_D(\Ea_{\Lambda,\Lambda'},\Ea_{L,L'})=\rho_0(\Da_\Lambda,\Da_L)\Big|_{\Sigma(D)}, \end{equation} for any four $\mathcal{C}$-lattices $\Lambda, \Lambda', L$ and $L'$ (cf.~\S \ref{Section Spinor}). \section{On quotient graphs of Eichler groups}\label{Section Quotient} The objective of this section is to prove Theorem \ref{teo cusp}. To do so, we extensively use the following remark. As we said in \S \ref{Section BTT}, every subgroup of $\text{GL}_2(k_{P_{\infty}})$ acts on $\mathfrak{t}$ via its image in $\text{PGL}_2(k_{P_{\infty}})$. In particular, the topological space $\mathfrak{t}_D$ equals the quotient of $\mathfrak{t}=\mathfrak{t}(k_{P_{\infty}})$ by the projective image $\text{PH}_D$ of $$\small \text{H}_D= \left\lbrace \left( \begin{array}{cc} a & b\\ c & d \end{array} \right) \in \text{GL}_2(R) : c \equiv 0 \, (\text{mod } I_D)\right\rbrace , \normalsize $$ where $I_D$ is the $R$-ideal defined as $I_D=\mathfrak{L}^{-D}(U_0)=\mathfrak{L}^{-D}(\mathcal{C} \smallsetminus \lbrace P_{\infty }\rbrace)$. \\ We start this section by presenting a proof of Theorem \ref{teo cusp} assuming the following result, which is implied by Proposition \ref{lema4} below. \begin{prop}\label{prop aux} The number of cusps of any connected component of $C_{P_{\infty}}(\mathbb{O}_D)$ is the same, and it equals \begin{equation} c(D)= \alpha(D) [2\mathrm{Pic}(\mathcal{C})+\left\langle \overline{P_{a_1}}, \cdots ,\overline{P_{a_u}}, \overline{P_{\infty}} \right\rangle : \langle \overline{P_{\infty}} \rangle], \end{equation} where $$ \alpha(D)= 1 + \frac{1}{q-1} \prod_{i=1}^{r} \left( q^{\deg(P_i) \lfloor \frac{n_i}{2}\rfloor}-1\right),$$ and $P_{a_1}, \cdots, P_{a_u}$ are the closed points in $\mathcal{C}$ whose coefficients in $D=\sum_{i=1}^r n_i P_i$ are odd. \end{prop} \subsection{A proof of Theorem \ref{teo cusp}}\label{subsection proof of Theorem} As we just said, we can replace $\text{H}_D$ by its image $\text{PH}_D$ in $\text{PGL}_2(k)$ to compute the cusp number of $\mathfrak{t}_D$. First, we prove inequality \eqref{numero de patas}. Set $\Gamma= \mathrm{Stab}_{\text{PGL}_2(k)}(\mathfrak{E}_D(U_0))$. On one hand, it follows from Proposition \ref{prop aux} that the cusp number of $\Gamma \backslash \mathfrak{t}$ is equal to $c(D)$. On the other hand, it follows from \cite[Theorem 1.2]{A2} that $$[\Gamma:\text{PH}_D]=\frac{2^r|g(2)|}{[\Sigma(\mathbb{O}_0,U_0):\Sigma(\mathbb{O}_D,U_0)]},$$ where $g(2)$ is the maximal exponent-2 subgroup of $\mathrm{Pic}(R)$. 
So, we obtain from Proposition \ref{gal grupo de clase} that \begin{equation}\label{eq gamma/phd} [\Gamma:\text{PH}_D]=\frac{2^r|g(2)|}{[2 \text{Pic}(\mathcal{C})+ \langle \overline{P_{a_1}}, \cdots, \overline{P_{a_u}}, \overline{P_{\infty}} \rangle: 2 \text{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle]}. \end{equation} Recall now that, by Corollary \ref{Cor comb fin}, the set of cusps of $\mathfrak{t}_D$ (resp. $\Gamma \backslash \mathfrak{t}$) is parametrized by $\mathbb{P}^1(k) / \text{PH}_D$ (resp. $\mathbb{P}^1(k) / \Gamma$). Then, the cusp number of $\mathfrak{t}_D$ cannot exceed $c(\text{H}_D)=c(D)[\Gamma:\text{PH}_D]$, and inequality \eqref{numero de patas} follows. Now, we assume that each $n_i$ is odd and $g(2)$ is trivial. Then, we have to prove that the cusp number of $\mathfrak{t}_D$ is exactly $c(\text{H}_D)$. This is a consequence of the following lemma. \begin{lemma}\label{lemma cubrimiento de cuspides} Assume that each $n_i$ is odd and that $g(2)$ is trivial. Then, there are exactly $[\Gamma: \text{PH}_D]$ cusps in $\mathfrak{t}_D$ with the same image in $\Gamma \backslash \mathfrak{t}$. \end{lemma} \begin{proof} Let $\Theta=\Theta(\eta)$ be the set of cusps of $\mathfrak{t}_D$ whose image in $\Gamma \backslash \mathfrak{t}$ is the cusp $\eta$. Then, $\mathrm{Card}(\Theta)$ is strictly less than $[\Gamma: \text{PH}_D]$ precisely when there exists a non-trivial element $\overline{g} \in \Gamma/\text{PH}_D$ stabilizing an element of $\Theta$. Since the set of cusps of $\mathfrak{t}_D$ (resp. $\Gamma \backslash \mathfrak{t}$) is parametrized by $\mathbb{P}^1(k) / \text{PH}_D$ (resp. $\mathbb{P}^1(k) / \Gamma$), if we prove that, for any $s \in \mathbb{P}^1(k)$, we have \begin{equation}\label{id stab H} \mathrm{Stab}_{\Gamma}(s) \subset \text{PH}_D, \end{equation} then the result follows. Let $g \in \Gamma$ and assume that $g$ stabilizes some class of rays corresponding to $s \in \mathbb{P}^1(k)$. By definition, $g \mathfrak{E}_D(U_0) g^{-1}= \mathfrak{E}_D(U_0)$. So, as we saw in \S \ref{Section grids}, $g$ acts on the concrete $D$-grid $S_D$ associated to $\mathfrak{E}_D$ as an automorphism.
In particular, we have \begin{figure} \[ \fbox{ \xygraph{ !{<0cm,0cm>;<.8cm,0cm>:<0cm,.8cm>::} !{(2.5,2.6) }*+{\textbf{(A)}}="na" !{(4.5,1) }*+{\bullet}="b11" !{(3.,1) }*+{\bullet}="c11" !{(0.5,1) }*+{\bullet}="a11" !{(2.,1) }*+{\bullet}="d11" !{(1.5,1) }*+{\bullet}="z11" !{(3.5,1) }*+{\bullet}="y11" !{(0,-0.08) }*+{\star}="e1" !{(0,2.0) }*+{\star}="e3" !{(1,-0.08) }*+{\star}="d1" !{(1,2.0) }*+{\star}="d3" !{(5,-0.08) }*+{\star}="e2" !{(5,2.0) }*+{\star}="e4" !{(4,-0.08) }*+{\star}="d2" !{(4,2.0) }*+{\star}="d4" "a11"-@{.}"e3" "a11"-@{.}"e1" "c11"-@{~}"d11" "c11"-@{.}"b11" "a11"-@{.}"d11" "b11"-@{.}"e2" "b11"-@{.}"e4" "z11"-@{.}"d3" "z11"-@{.}"d1" "y11"-@{.}"d2" "y11"-@{.}"d4" } } \fbox{ \xygraph{ !{<0cm,0cm>;<0.8cm,0cm>:<0cm,0.8cm>::} !{(0.8,2.6) }*+{\textbf{(B)}}="na" !{(-0.2,0.1) }*+{\bullet}="e2" !{(1.2,0.1) }*+{\bullet}="e3" !{(-0.6,-0.0) }*+{{}^{\mathfrak{D}_B}}="e2n" !{(-0.6,1.8) }*+{{}^{\mathfrak{D}_{B+P_{\infty}}}}="e3n" !{(1.3,1.72) }*+{{}^{\mathbb{S}^{+}}}="gn" !{(-0.2,1.5) }*+{\bullet}="e4" !{(1.2,1.5) }*+{\bullet}="e5" !{(1,0.6) }*+{\bullet}="e6" !{(2.4,0.6) }*+{\bullet}="e7" !{(1.0,0.25) }*+{{}^{\mathbb{S}}}="gn" !{(1,2.0) }*+{\bullet}="e8" !{(2.4,2.0) }*+{\bullet}="e9" "e2"-@{-}"e4" "e2"-@{-}"e3""e4"-@{-}"e5""e3"-@{-}"e5" "e6"-@{-}"e8" "e6"-@{-}"e7""e8"-@{-}"e9""e7"-@{-}"e9" "e2"-@{-}"e6" "e3"-@{-}"e7""e4"-@{-}"e8""e5"-@{-}"e9" }} \] \caption{Figure \textbf{(A)} shows the Bruhat-Tits tree $\mathfrak{t}(k_{P_i})$, where $\mathfrak{p}_i$ corresponds to the finite central line and where the middle edge of $\mathfrak{p}_i$ is represented by a cursive edge. Figure \textbf{(B)} shows a concrete $(D+P_{\infty})$-grid, or equivalently two $P_{\infty}$-neighboring $D$-grids.}\label{fig 1} \end{figure} \begin{itemize} \item[(1)] $g \in \mathrm{Stab}((\mathfrak{D}_0)_{Q})$ for every $Q \neq P_1, \cdots, P_r, P_{\infty}$, and \item[(2)] $g (\mathfrak{D}_0)_{P_i} g^{-1}= (\mathfrak{D}_{\epsilon_i D})_{P_i}$ with $\epsilon_i \in \lbrace 0,1 \rbrace$ for any $P_i$ in the support of $D$, i.e., $g$ can either pointwise fix the line in $\mathfrak{t}(k_{P_i})$ joining $(\mathfrak{D}_0)_{P_i}$ with $(\mathfrak{D}_{D})_{P_i}$, or flip it. \end{itemize} If some $\epsilon_i =1$, then $g$ acts on $\mathfrak{t}_i=\mathfrak{t}(k_{P_i})$ without fixing any point of the finite path $\mathfrak{p}_{i}= \mathfrak{s}(\mathfrak{E}(U_0)_{P_i})$ whose length is odd. Let $\mathfrak{t}_i^{d}$ be the topological space obtained from $\mathfrak{t}_i$ by removing the central edge of $\mathfrak{p}_i$. Then, the action of $g$ on $\mathfrak{t}_i$ exchanges the two connected components of $\mathfrak{t}_i^{d}$. See Figure \ref{fig 1}\textbf{(A)}. We conclude that $g$ fixes no visual limit in $\mathfrak{t}_i$, whence it fixes no element in $\mathbb{P}^1(k)$, which contradicts the hypothesis on $g$. On the other hand, if every $\epsilon_i=0$, we have $$g \in \Gamma_0=\mathrm{Stab}_{\text{PGL}_2(k)}(\mathfrak{D}_0(U_0)) \cap \mathrm{Stab}_{\text{PGL}_2(k)}(\mathfrak{D}_D(U_0)) = \mathrm{Stab}_{\text{PGL}_2(k)}(\mathfrak{E}_D(U_0)).$$ We claim that $\Gamma_0/\text{PH}_D \hookrightarrow g(2)$, and since $g(2)$ is trivial, we get $g \in \mathrm{PH}_D$, which concludes the proof. To prove the claim, we follow \cite[Theorem 1.2]{A2}. For $E \in \lbrace 0,D \rbrace$ we denote by $\Lambda_E$ the $\mathcal{O}_{\mathcal{C}}(U_0)$-lattice satisfying $\mathfrak{D}_E(U_0)=\mathrm{End}_{\mathcal{O}_{\mathcal{C}}(U_0)}( \Lambda_E )$. Let $h \in \Gamma_0$ be an arbitrary element, and fix $h_0 \in \mathrm{GL}_2(k)$ a lift of $h$. 
Then, by definition, we get $h_0 \in \mathrm{Stab}_{\mathrm{GL}_2(k)}(\mathfrak{D}_E(U_0))$, for $E \in \lbrace 0,D \rbrace$. Hence, there exists $b_E \in \mathbb{I}_{U_0}$ such that $h_0 \Lambda_E= b_E \Lambda_E$ (recall \S \ref{Section Spinor}). By taking determinants in the preceding equality, we deduce that $b_E^2 \mathcal{O}_{\mathcal{C}}(U_0)=\det(h_0)\mathcal{O}_{\mathcal{C}}(U_0)$, whence we deduce $b:=b_0=b_D$, since $ \mathcal{O}_{\mathcal{C}}(U_0)$ is a Dedekind domain. Recall that $\mathrm{Pic}(R)$ is isomorphic to the ideal class group $\mathbb{I}_{U_0}/(k^{*} \prod_{Q \in |U_0|} \mathcal{O}_Q^{*})$. Moreover, note that the class $[b]$ of $b=b(h_0)$ in $\mathrm{Pic}(R)$ only depends on the class $h=[h_0] \in \mathrm{PGL}_2(k)$. Indeed, if we change the representative $h_0$ of $h$ by $\lambda h_0$, then we obtain $[b(\lambda h_0)]=[b(h_0) \cdot \mathrm{div}(\lambda)]=[b(h_0)]$. In all that follows we denote by $[b]$ the class of any $b=b(h_0)$, which only depends on $h$. Let us define $\Xi: \Gamma_0 \to \mathrm{Pic}(R)$ as the function satisfying $\Xi(h)=[b]\in \mathrm{Pic}(R)$. On one hand, note that $2 \Xi(h)=[b^2]=[\mathrm{div}(\det(h_0))]=0$. In particular, we have $\mathrm{Im}(\Xi)\subseteq g(2)$. On the other hand, if $\Xi(h)=0$, then $b=\mathrm{div}(\lambda)$, for some $\lambda \in k^{*}$. This implies that $h_0 \lambda^{-1} \in \mathrm{Aut}(\Lambda_E)=\mathfrak{D}_E(U_0)^{*}$, for all $E \in \lbrace 0,D \rbrace$. Thus, we get $h_0 \lambda^{-1} \in \mathfrak{E}_D(U_0)^{*}$, whence $h \in \mathrm{PH}_D$. Hence, we conclude $\Gamma_0/\mathrm{PH}_D$ injects into $g(2)$. \end{proof} This concludes the proof of Theorem \ref{teo cusp}. In the remainder of this section, we prove Proposition \ref{lema4}, which is a stronger version of Proposition \ref{prop aux}. In \S \ref{subsection dec. grids}, we study vertices in the classifying graph $C_{P_{\infty}}(\mathbb{O}_D)$. To do so, we present the concept of \textit{semi-decomposition datum} of $D$-grids. Then, in \S \ref{subsection combinatorial structure}, we analyze the topological structure of the classifying graph. Specifically, we define and characterize a ramified covering $C_{P_{\infty}}(\mathbb{O}_D) \to C_{P_{\infty}}(\mathbb{O}_0) $ via the characterization of vertices in $C_{P_{\infty}}(\mathbb{O}_D)$ obtained in \S \ref{subsection dec. grids}. Finally, we use what is known about the classifying graph $C_{P_{\infty}}(\mathbb{O}_0)$, which is summarized in the following result. \begin{theorem}\cite[Theorem 1.2]{A1}\label{Teo Arenas cusp in C0} The classifying graph $C_{P_{\infty}}(\mathbb{O}_0)$ is combinatorially finite, and it has exactly $|\text{Pic}(R)|$ cuspidal rays, where $\text{Pic}(R) \cong \text{Pic}(\mathcal{C}) / \langle \overline{P_{\infty}} \rangle$. The vertices corresponding to conjugacy classes of the form $[\mathfrak{D}_{E}]$, for some divisor $E$ on $\mathcal{C}$, are located in the cuspidal rays of $C_{P_{\infty}}(\mathbb{O}_0)$. Conversely, almost every vertex in a cuspidal ray $\mathfrak{r}_{\sigma}$ of $C_{P_{\infty}}(\mathbb{O}_0)$, with $\sigma \in \mathrm{Pic}(R)$, corresponds to a $\mathrm{GL}_2(k)$-conjugacy class of the form $[\mathfrak{D}_{B+nP_{\infty}}]$, where $B=B(\sigma)$ depends only on $\sigma$. \end{theorem} In Theorem \ref{Teo Arenas cusp in C0} and hereafter, by ``almost every'' we mean all but finitely many. \subsection{On the decomposition of grids}\label{subsection dec. grids} Here the main goal is to establish and prove a ``decomposition criterion'' for grids, and subsequently for Eichler orders.
We fix the following notation for the rest of this section. Let $D$ be an effective divisor, and write $D=\sum_{i=1}^{r} n_i P_i$ where the points $P_1, \cdots, P_r, P_{\infty}$ are all different. Denote by $d_D$ the degree of $D$. Using Proposition \ref{eichler=grillas} we fix a finite set of places $T$ such that every $\mathrm{GL}_2(k)$-conjugacy class of Eichler $\mathcal{C}$-orders contains a representative in $\mathrm{Eich}(D, T)$. For any pair of divisors $(B,B')$ such that $B+B'$ is effective, consider the Eichler $\mathcal{C}$-order $$ \Ea[B,B']:=\sbmattrix {\oink_\mathcal{C}}{\mathfrak{L}^{-B'}}{\mathfrak{L}^{-B}}{\oink_\mathcal{C}}, $$ whose level is $B+B'$. Any $\mathrm{GL}_2(k)$-conjugate of such an order is called split. We denote an abstract grid by $\mathbb{S}$, and we often choose a representative of this class by writing $S \in \mathbb{S}$, or some verbal analogue thereof. \begin{defi} For any basis $\beta \subset k^2$ we denote by $A(\beta)$ the matrix whose columns are the vectors in $\beta$. A maximal $\mathcal{C}$-order $\mathfrak{D}$ is called $\beta$-split if $A(\beta) \mathfrak{D} A(\beta)^{-1} = \mathfrak{D}_E$, for some divisor $E$. We say that a $D$-grid $S$ is $\beta$-split if every vertex of $S$ is $\beta$-split as an order. This is equivalent to \begin{equation}\label{semi-desc pi} A(\beta) S A(\beta)^{-1} = S \left( \Ea \left[ B,B+D \right] \right), \end{equation} for some divisor $B$. A corner of a $D$-grid $S$ is a vertex of $S$ having a unique $P$-neighbor, for each $P$ in the support of $D$. Let $D' \leq D$ be an effective divisor. A $D'$-corner of $S$ is a $D'$-grid $S' \subset S$ containing a corner of $S$. \end{defi} \begin{defi}\label{defi semi-desc} A semi-decomposition datum of $S$ is a $3$-tuple $(\beta, B, D')$, where: \begin{itemize} \item[(a)] $\beta$ is a basis of $k^2$, \item[(b)] $B$ and $D'$ are two divisors on $\mathcal{C}$ satisfying $2D \geq 2D' \geq D$, \item[(c)] there exists a corner of $S$ of the form $v_0=A(\beta)^{-1} \mathfrak{D}_{B}A(\beta)$, \item[(d)] there is a $\beta$-split $D'$-corner $\Pi_{\mathrm{SD}} \subseteq S$ whose set of vertices is $$ \mathrm{V}(\Pi_{\mathrm{SD}}) = \left\lbrace A(\beta)^{-1} \mathfrak{D}_{E} A(\beta) : B \leq E\leq B+D' \right \rbrace, $$ \item[(e)] no vertex outside $\Pi_{\mathrm{SD}}$ is $\beta$-split. \end{itemize} If $D'=D$, then $(\beta,B, D')$ is called a total decomposition datum. The basis $\beta$ is called the semi-decomposition basis of $S$. The subgrid $\Pi_{\mathrm{SD}}$ is called the decomposed subgrid of $S$ associated to the datum. The degree of a semi-decomposition datum $(\beta, B , D')$ is the degree of $B$. \end{defi} \begin{ex}\label{ex semi desc for mult free} Assume that $D$ is multiplicity free. Then, condition (b) in Definition \ref{defi semi-desc} implies that any semi-decomposition datum of a concrete $D$-grid is a total decomposition datum. \end{ex} Note that the pair of divisors $(B,D')$ in the previous definition depends only on the $\mathrm{GL}_2(k)$-conjugacy class of the $D$-grid $S$. Indeed, if $(\beta, B, D')$ is a semi-decomposition datum of $S$, and $S=G S'G^{-1}$ with $G \in \text{GL}_2(k)$, then $(\beta', B, D')$ is a semi-decomposition datum of $S'$, where $\beta'=G(\beta)$. Furthermore, we have $A(\beta')=GA(\beta)$. This allows us to extend the definition of semi-decomposition data to abstract $D$-grids. However, in order to (partially) extend the notion of degree, we need the following result. \begin{lemma}\label{grado bien definido} Let $S$ be a concrete $D$-grid.
Let $(\beta, B, D')$ and $(\beta^{\circ}, B^{\circ}, D^{\circ\prime})$ be two semi-decomposition data of $S$ with positive degree. Then $B$ and $B^{\circ}$ are linearly equivalent. \end{lemma} \begin{proof} Set $A=A(\beta)$, $A^{\circ}=A(\beta^{\circ})$, and let $G$ be the base change matrix from $\beta$ to $\beta^{\circ}$. We start by showing that we can restrict our proof to the context where $D$ is multiplicity free. Indeed, $A^{\circ} S(A^{\circ})^{-1}$ is the image of the concrete $D$-grid $ASA^{-1}$ by the conjugation map induced by $G$. Let us write $D= \sum_{i=1}^{r} n_i P_i$, and let $\lbrace P_{b_1}, \cdots, P_{b_s}\rbrace$ be the set of those points whose coefficient $n_i$ is odd. Set $D_c=P_{b_1}+ \cdots+ P_{b_s}$. Let $\Delta$ be the subcomplex of $S$ whose vertex set is $$ \text{V}(\Delta) = \left\lbrace A^{-1} \mathfrak{D}_{E} A : B_0 \leq E \leq B_0+D_c \right\rbrace,$$ where $B_0=B+\sum_{i=1}^r \lfloor \frac{n_i}{2}\rfloor P_i$. Note that the intersection of the maximal $\mathcal{C}$-orders corresponding to the vertex set of $\Delta$ is an Eichler $\mathcal{C}$-order of level $D_c$. Equivalently, we get that $\Delta$ is a concrete $D_c$-grid. Since $\text{PGL}_2(k)$ acts simplicially on each grid, we get $G A \Delta A^{-1} G^{-1} = A^{\circ} \Delta (A^{\circ})^{-1}$. Moreover, we have that $$ \text{V}(\Delta)= \left\lbrace (A^{\circ})^{-1} \mathfrak{D}_{E} A^{\circ} : B_0^{\circ} \leq E \leq B_0^{\circ}+D_c \right\rbrace,$$ where $B_0^{\circ}= B^{\circ}+\sum_{i=1}^r \lfloor \frac{n_i}{2} \rfloor P_i$. Thus, we conclude that the $D_c$-grid $\Delta$ has two induced positive degree total-decomposition data $(\beta, B_0, D_c)$ and $(\beta^{\circ}, B_0^{\circ}, D_c)$. Note that if $B_0$ and $B_0^{\circ}$ are linearly equivalent, then $B$ and $B^{\circ}$ are linearly equivalent as well. Therefore, by replacing $S$ by $\Delta$, we can assume that $D$ is multiplicity free. Now, assume that $D$ is multiplicity free. In this case, every vertex in $A SA^{-1} $ and $A^{\circ} S(A^{\circ})^{-1}$ corresponds to a split maximal $\mathcal{C}$-order (cf.~Example \ref{ex semi desc for mult free}). Set $\mathfrak{D}_{B''}=G\mathfrak{D}_{B} G^{-1}$. Then one of the following holds: \begin{enumerate} \item $B''$ is principal, \item $\mathfrak{L}^{-B''}(\mathcal{C})=\lbrace 0 \rbrace$, or \item $\mathfrak{L}^{B''}(\mathcal{C})=\lbrace 0 \rbrace$. \end{enumerate} If $B''$ is principal, then $\mathfrak{D}_{B''}(\mathcal{C})= \mathbb{M}_2(\mathbb{F})$. On the other hand, since $\deg(B)>0$, we obtain $\mathfrak{L}^{-B}(\mathcal{C})=\lbrace 0 \rbrace$. Thus, we conclude $\mathfrak{D}_{B}(\mathcal{C})= \sbmattrix {\mathbb{F}}{\mathfrak{L}^{B}(\mathcal{C})}{0}{\mathbb{F}}$, which is not a simple algebra. So, the first case is impossible. Now, assume the second case, i.e. $\mathfrak{L}^{-B}(\mathcal{C})=\mathfrak{L}^{-B''}(\mathcal{C})=\lbrace 0 \rbrace$. Then, it follows from \cite[\S 4, Proposition 4.1]{A1} that one of the following conditions holds: \begin{itemize} \item[(a)] $G= \sbmattrix {x}{y}{0}{z}$ and $B-B''= \text{div}(x^{-1}z)$, or \item[(b)] $G=\sbmattrix {0}{x}{z}{0}$ and $B+B''= \text{div}(x^{-1}z)$. \end{itemize} In case (a), for any divisor $0 \leq E \leq D$ we have that \begin{equation} G\mathfrak{D}_{B+E} G^{-1} \subseteq \sbmattrix {\mathcal{O}_{\mathcal{C}}-yz^{-1}\mathfrak{L}^{-B-E}}{\mathfrak{J}}{xz^{-1}\mathfrak{L}^{-B-E}}{\mathcal{O}_{\mathcal{C}}+yz^{-1}\mathfrak{L}^{-B-E}}, \end{equation} for some invertible sheaf $\mathfrak{J}$, where the other coefficients are optimal.
Since $$xz^{-1}\mathfrak{L}^{-B-E}=\mathfrak{L}^{-B-\text{div}(xz^{-1})-E}=\mathfrak{L}^{-B''-E},$$ and $G\mathfrak{D}_{B+E} G^{-1}$ is a split maximal $\mathcal{C}$-order, we deduce that $G\mathfrak{D}_{B+E} G^{-1}=\mathfrak{D}_{B''+E}$, for any divisor $0 \leq E \leq D$. This implies that $B''=B^{\circ}$, and then $B$ and $B^{\circ}$ are linearly equivalent. In case (b), for any divisor $0 \leq E \leq D$ we have that \begin{equation} G\mathfrak{D}_{B+E} G^{-1}=\sbmattrix {\mathcal{O}_{\mathcal{C}}}{xz^{-1}\mathfrak{L}^{-B-E}}{zx^{-1}\mathfrak{L}^{B+E}}{\mathcal{O}_{\mathcal{C}}}. \end{equation} Then, it follows directly from the previous equation that $G\mathfrak{D}_{B+E} G^{-1}= \mathfrak{D}_{B''-E}$, for any divisor $0 \leq E \leq D$. Therefore $B''=B^{\circ}+D$, whence $B$ is linearly equivalent to $-B^{\circ}-D$. But the last condition contradicts the hypothesis of positive degree on $B$ and $B^{\circ}$. We conclude that only case (a) can hold. Finally, if $\mathfrak{L}^{B''}(\mathcal{C})=\lbrace 0 \rbrace$, we can replace $B''$ by $-B''$ in the preceding argument. \end{proof} \begin{defi} Let $\mathbb{S}$ be an abstract $D$-grid. A semi-decomposition datum of $\mathbb{S}$ is a pair $(B, D')$, where $(\beta, B, D')$ is a semi-decomposition datum of some concrete representative $S \in \mathbb{S}$. When $D'=D$, we say that $(B, D')$ is a total decomposition datum of $\mathbb{S}$. When $\deg(B)>0$, the degree of this datum is by definition $\deg(B)$, which is well-defined by Lemma \ref{grado bien definido}. \end{defi} Let $\beta= \lbrace e_1, e_2 \rbrace$ be a basis of $k^2$. We say that a rank-two vector bundle $\mathfrak{L}$ on $\mathcal{C}$ is $\beta$-split if $\mathfrak{L}= \mathfrak{L}^B e_1 \oplus \mathfrak{L}^C e_2 $, where $B$ and $C$ are divisors on $\mathcal{C}$. Then, a maximal $\mathcal{C}$-order $\mathfrak{D}_{\Lambda}$ splits in the basis $\beta$ if and only if at least one (and therefore every) vector bundle in the class $[\Lambda]$ is $\beta$-split. Moreover, either condition is equivalent to $\sbmattrix 1000, \sbmattrix 0001\in \mathfrak{D}_{\Lambda}(\mathcal{C})$. More generally, an Eichler $\mathcal{C}$-order is split precisely when it contains a non-trivial idempotent as a global section, or equivalently, when the corresponding grid has a total-decomposition datum. In fact, we have a more precise result that follows immediately from the preceding paragraph and \cite[Theorem 1.2]{A5}: \begin{prop}\label{prop des} Let $\mathfrak{E}$ be an Eichler $\mathcal{C}$-order of level $D$. Then the following statements are equivalent: \begin{itemize} \item[(1)] $\mathfrak{E}$ is split, \item[(2)] $S=S(\mathfrak{E})$ has a total-decomposition datum, \item[(3)] The ring of global sections $\mathfrak{E}(\mathcal{C}) \subseteq \mathbb{M}_2(k)$ contains a non-trivial idempotent matrix, \item[(4)] There exists a non-trivial idempotent matrix of $\mathbb{M}_2(k)$ contained in the ring of global sections $\mathfrak{D}(\mathcal{C})$ for every maximal $\mathcal{C}$-order $\mathfrak{D}$ corresponding to a vertex of $S$. \end{itemize} Assume moreover that $D$ is multiplicity free. Then, for almost all conjugacy classes in $\mathbb{O}_D$, the orders in each such class are split. \end{prop} \begin{ex}\label{ex0} Assume that $D=0$. Let $Q$ be a closed point on $\mathcal{C}$, and set $R'=\mathcal{O}_{\mathcal{C}}(\mathcal{C} \smallsetminus \lbrace Q \rbrace)$. Then, each concrete $D$-grid consists of precisely one vertex, which represents a maximal $\mathcal{C}$-order.
According to Theorem \ref{Teo Arenas cusp in C0}, almost all abstract $0$-grids admit a total decomposition datum, and these $0$-grids correspond to vertices located in a finite union of rays in $C_{Q}(\mathbb{O}_0)$, which are parametrized by $\text{Pic}(R') \cong \text{Pic}(\mathcal{C}) / \langle \overline{Q} \rangle$. Moreover, almost every vertex in a cuspidal ray $\mathfrak{r}_{\sigma}$ of $C_{P_{\infty}}(\mathbb{O}_0)$, with $\sigma \in \mathrm{Pic}(A)$, corresponds to a $\mathrm{GL}_2(k)$-conjugacy class of the form $[\mathfrak{D}_{B+nP_{\infty}}]$, where $B=B(\sigma)$ depends only on $\sigma$. In particular, given a divisor $D_0$, almost all classes of maximal $\mathcal{C}$-orders have a representative of the form $\mathfrak{D}_E$ with $\deg(E) > \deg(D_0)$. \end{ex} The rest of this sub-section is exclusively devoted to proving the following proposition, which generalizes the previous example. \begin{prop}\label{max semi-desc} Let $D$ be an effective divisor and $d_D=\deg(D)$. Then, for almost every abstract $D$-grid $\mathbb{S}$ there exists a semi-decomposition datum $(B,D')$ of degree $d$ for $\mathbb{S}$ with $d > d_D$. Moreover, $D'$ is unique and the class of $B$ in $\mathrm{Pic}(\mathcal{C})$ is unique. In particular, $d$ is unique. \end{prop} In order to prove Proposition \ref{max semi-desc} we work extensively with some subgrids defined by a certain stratification, which we formalize in Definition \ref{defi strata} and Lemma \ref{Lemma buenos caminos}. \begin{defi}\label{defi strata} Let $S$ be a concrete $D$-grid. We define the $P_i$-axis $S(P_i) \subseteq S$ as the finite line in $S$ whose vertex set is $\lbrace v_j \rbrace_{j=0}^{n_i}$, where $v_0$ is as in Definition \ref{defi semi-desc}, and where $v_j$ is a $P_i$-neighbor of $v_{j+1}$, whenever $0 \leq j \leq n_i-1$. Write $D=D_0+nQ$, with $D_0$ supported away from $Q$ and $n>0$. Then, the vertex set of $S$ can be naturally written as the disjoint union of the vertex sets of $n+1$ different $D_0$-grids, denoted by $S_0, \cdots, S_n$. We do this in a way such that $S_0$ is a $Q$-face of $S$, and $S_i$ is a $Q$-neighbor of $S_{i+1}$, for each $i \in \lbrace 0, \cdots, n-1 \rbrace$. These $D_0$-grids are called the $Q$-strata of $S$. See Figure \ref{fig 01}\textbf{(B)}. The numbering of the strata can be inverted if necessary. \end{defi} The sequence of strata $\lbrace S_0, \cdots, S_n \rbrace$ defines a finite line in $\mathfrak{t}(k_Q)$ of length $n$. The following lemma characterizes, for almost every grid, the image of this line in the classifying graph. \begin{lemma}\label{Lemma buenos caminos} Let us write $D=D_0+nQ$ as above. Then, for almost every abstract $D$-grid $\mathbb{S}$, there exists $S \in \mathbb{S}$ such that the corresponding line $\mathfrak{c}(S)$ in $\mathfrak{t}(k_Q)$ is defined by vertices $ z_0, \cdots, z_s, \zeta_{s+1}, \cdots, \zeta_{n} \in \mathfrak{t}(k_{Q})$ satisfying: \begin{itemize} \item[(i)] the vertices in each pair $(z_i, z_{i+1})$, $(\zeta_i, \zeta_{i+1})$, or $(z_{s}, \zeta_{s+1})$ are neighbors, \item[(ii)] $s> \lfloor \frac{n+1}{2}\rfloor $, \item[(iii)] $z_0, \cdots, z_{s}$ are pairwise non-$\Gamma$-equivalent vertices, \item[(iv)] $z_0, \cdots, z_{s}$ are on the maximal path joining $\infty$ with some $\epsilon \in k$, and \item[(v)] $\zeta_{s+i}$ and $z_{s-i}$ are $\Gamma$-equivalent, for any $i\in \lbrace 1, \cdots, s\rbrace$.
\end{itemize} In particular, the image $\mathfrak{c}(\mathbb{S})$ of $\mathfrak{c}(S)$ in $C_Q(\mathbb{O}_{D_0})$ is a line of length $s$ contained in a cuspidal ray. \end{lemma} \begin{proof} Let $\mathbb{S}$ be an abstract $D$-grid, and let $S \in \mathbb{S}$ be a concrete representative. Let $\mathfrak{c}=\mathfrak{c}(S)$ be the finite line in $\mathfrak{t}(k_{Q})$ corresponding to $S$. The image $\mathfrak{c}(\mathbb{S})$ of $\mathfrak{c}$ in $C_{Q}(\mathbb{O}_{D_0})$ is a line of length $s \leq n$, which only depends on $\mathbb{S}$ by Proposition \ref{func corresp D-grillas}. Now, as we noted in \S \ref{Section grids} (see the paragraphs before Proposition \ref{func corresp D-grillas}), the graph $C_{Q}(\mathbb{O}_{D_0})$ is combinatorially finite. Thus, there exist finitely many lines of length at most $n$ that are not contained in a cuspidal ray. Hence, we can assume that $\mathfrak{c}(\mathbb{S})$ is contained in a cuspidal ray $\mathfrak{r}$ of $C_{Q}(\mathbb{O}_{D_0})$. Let $\mathfrak{X}= \Gamma \backslash\mathfrak{t}(k_{Q})$ be the connected component in $C_{Q}(\mathbb{O}_{D_0})$ containing $\mathfrak{r}$. Let $T$ be a maximal subtree of $\mathfrak{X}$, let $j: T \rightarrow \mathfrak{t}(k_{Q})$ be a lift of $T$, and let $\pi: \mathfrak{t}(k_{Q}) \rightarrow \mathfrak{X}$ be the canonical projection. By Corollary \ref{Cor comb fin} we may assume that the visual limit $\epsilon=\epsilon(\mathfrak{r})$ of $j(\mathfrak{r})$ belongs to $\mathbb{P}^1(k)$, and, up to changing the lift, we can assume $\epsilon\neq \infty$. Moreover, there exists a subray $\mathfrak{r}_0 \subset j(\mathfrak{r})$ such that: \begin{itemize} \item $\text{V}(\mathfrak{r}_0)= \lbrace w_i\rbrace_{i=0}^{\infty}$, where $w_i$ and $w_{i+1}$ are adjacent, \item $\text{Stab}_{\Gamma}(w_i) \subset \text{Stab}_{\Gamma}(w_{i+1})$, and \item $\text{Stab}_{\Gamma}(w_i)$ acts transitively on the set of neighboring vertices of $w_i$ other than $w_{i+1}$. \end{itemize} Note that the last statement implies that all neighbors of $w_i$, besides $w_{i+1}$, are in the same $\Gamma$-orbit. By induction on $L \geq 1$, we can show that, for any $i \geq 0$ and for any vertex $v$ at distance $\leq L$ from $w_{i+L}$ such that the line connecting $v$ and $w_{i+L}$ does not contain $w_{i+L+1}$, the vertex $v$ is in the same $\mathrm{Stab}_{\Gamma}(w_{i+L})$-orbit as $w_{i} \in V(\mathfrak{r}_0)$. Define $\mathfrak{r}_0'$ as the subray of $\mathfrak{r}_0$ whose vertex set is $\lbrace w_i\rbrace_{i=n}^{\infty}$. In particular, this last statement applies to $L < n$ and every vertex in $\mathfrak{r}_0'$. Since $\mathfrak{r} \smallsetminus \pi(\mathfrak{r}_0')$ is a finite line, arguing as above, we may assume that $\pi(\mathfrak{c})$ is contained in $\pi(\mathfrak{r}_0')$. This implies that $\mathfrak{c}$ is $\Gamma$-equivalent to a finite line that intersects $\mathfrak{r}_0'$ in a finite line of length $s \leq n$ (recall that $\mathfrak{t}(k_Q)$ is a tree), so we may assume that this is the case for $\mathfrak{c}$. Let us write $\mathrm{V}(\mathfrak{c})= \lbrace \varpi_i \rbrace_{i=0}^n$, where $\varpi_i$ and $\varpi_{i+1}$ are adjacent, and $\mathrm{V}(\mathfrak{c} \cap \mathfrak{r}_0') = \lbrace \varpi_i \rbrace_{i=t}^{s+t+1}$, where $\varpi_{s+t+1}$ is the vertex that is closest to $\epsilon$. In particular, there exists $k >n$ such that, for each $i \in \lbrace t, \cdots, s+t+1 \rbrace$, the vertex $\varpi_i$ equals $w_{i+k}$.
Since $t<n$ and $w_{k}$ and $\varpi_0$ are both at distance $t$ from $\varpi_t=w_{t+k}$, we see that $w_{k}$ is $\mathrm{Stab}_{\Gamma}(\varpi_{t})$-equivalent to $\varpi_0$. Then, up to replacing $\mathfrak{c}$ by a $\Gamma$-equivalent line, we may assume that $t=0$. We claim that we can assume that $s+1 \geq \lfloor \frac{n+1}{2}\rfloor$. Indeed, if $s+1 < \lfloor \frac{n+1}{2}\rfloor$ we argue as follows. Since $n-s-1<n$ and $w_{2s+2-n+k}$ and $\varpi_n$ are both at distance $n-s-1$ from $\varpi_{s+1}=w_{s+1+k}$, we note that $w_{2s+2-n+k}$ is $\mathrm{Stab}_{\Gamma}(\varpi_{s+1})$-equivalent to $\varpi_n$. Equivalently, there exists $\gamma \in \Gamma$ such that $\gamma \cdot \lbrace \varpi_i \rbrace_{i=s+1}^{n} \subset \mathfrak{r}_0'$, and we may replace $\mathfrak{c}$ by $\gamma \cdot \mathfrak{c}$. Write $V(\mathfrak{c})= \lbrace z_0, \cdots, z_s, \zeta_{s+1}, \cdots \zeta_{n}\rbrace$, which satisfies conditions (i), (ii) in Lemma \ref{Lemma buenos caminos}. Condition (iii) follows from the fact that the image of $\pi(\mathfrak{c})=\mathfrak{c}(\mathbb{S})$ belongs to a cuspidal ray. Condition (iv) is immediate since we can always extend a ray to a maximal path reaching infinity. Finally, condition (v) follows by the same argument used above. \end{proof} \begin{figure} \[ \fbox{ \xygraph{ !{<0cm,0cm>;<.8cm,0cm>:<0cm,.8cm>::} !{(-0.2,2.6) }*+{\textbf{(A)}}="na" !{(0.6,2.995) }*+{\infty}="e3" !{(2.6,1.4) }*+{\bullet}="f3" !{(3,1.4) }*+{{}^{z_s}}="fi3" !{(1.5,2.3) }*+{\bullet}="f4" !{(1.6,2.5) }*+{{}^{z_0}}="f4" !{(2.2,1.7) }*+{\bullet}="f5" !{(2.5,1.9) }*+{{}^{z_{s-1}}}="fi5" !{(4.4,0) }*+{\epsilon}="e5" !{(1.5,0.2)}*+{{}^{\zeta_{n}}}="fi2" !{(1.3,0.5)}*+{\bullet}="f2" !{(2.4,0.7)}*+{{}^{\zeta_{s+1}}}="fi4" !{(2.2,1.1)}*+{\bullet}="f4" !{(0.5,0) }*+{}="e4" "e3"-@{.}"f3" "f3"-@{.}"e5" "f3"-@{.}"e4" } } \fbox{ \xygraph{ !{<0cm,0cm>;<0.8cm,0cm>:<0cm,0.8cm>::} !{(-1.5,2.6) }*+{\textbf{(B)}}="na" !{(-0.2,0.0) }*+{\bullet}="e2" !{(1.2,0.0) }*+{\bullet}="e3" !{(0.75,0.09) }*+{{}^{S_0}}="s1" !{(0.75,1.09) }*+{{}^{S_1}}="s2" !{(0.75,2.59) }*+{{}^{S_{n}}}="s2" !{(-0.2,1.0) }*+{\bullet}="e4" !{(1.2,1.0) }*+{\bullet}="e5" !{(1,0.5) }*+{\bullet}="e6" !{(2.4,0.5) }*+{\bullet}="e7" !{(-0.2,2.5) }*+{\bullet}="e10" !{(1.2,2.5) }*+{\bullet}="e11" !{(1,3.0) }*+{\bullet}="e12" !{(2.4,3.0) }*+{\bullet}="e13" !{(1,1.5) }*+{\bullet}="e8" !{(2.4,1.5) }*+{\bullet}="e9" "e2"-@{-}"e4" "e2"-@{-}"e3""e4"-@{-}"e5""e3"-@{-}"e5" "e6"-@{-}"e8" "e6"-@{-}"e7""e8"-@{-}"e9""e7"-@{-}"e9" "e2"-@{-}"e6" "e3"-@{-}"e7""e4"-@{-}"e8""e5"-@{-}"e9" "e10"-@{-}"e11" "e10"-@{-}"e12""e13"-@{-}"e11""e13"-@{-}"e12" "e4"-@{.}"e10" "e11"-@{.}"e5" "e12"-@{.}"e8" "e13"-@{.}"e9" }} \] \caption{Figure \textbf{(A)} shows the finite line $\mathfrak{c}(S)$ as in Lemma \ref{Lemma buenos caminos}, while Figure \textbf{(B)} shows the strata of a concrete $D$-grid. In the latter, vertical neighbors are $P_1$-neighbors.}\label{fig 01} \end{figure} We prove the existence of semi-decomposition data by induction on $r$. In order to be able to use the inductive hypothesis we need the following result. \begin{lemma}\label{lemma finitas D-grillas} Let $D$ be an effective divisor, and write $D=D_0+nQ$, where $D_0$ is supported away from $Q$ and $n>0$. Let $\mathbb{S}_0$ be an abstract $D_0$-grid. Then the set of abstract $D$-grids $\mathbb{S}$ such that there exist $S_0 \in \mathbb{S}_0$ and $S \in \mathbb{S}$ with $S_0 \subset S$, is finite. \end{lemma} \begin{proof} Let $q'$ be the cardinality of the residue field of $k_Q$. 
First we claim that any concrete $D_0$-grid is contained in finitely many concrete $D$-grids. In order to prove the claim, let us fix an Eichler $\mathcal{C}$-order $\mathfrak{E}_0$ of level $D_0$. Let $\mathfrak{E}$ be an Eichler $\mathcal{C}$-order of level $D$ contained in $\mathfrak{E}_0$. Since, for each $P\neq Q$, the $P$-coefficient of $D$ and $D_0$ is the same, we have $(\mathfrak{E}_0)_P=(\mathfrak{E})_P$, for all $P \neq Q$. Moreover, we have that $(\mathfrak{E}_0)_Q$ is a maximal order, while $(\mathfrak{E})_Q$ is a local Eichler order of level $n$. Denote by $v_0$ the vertex in $\mathfrak{t}(k_Q)$ corresponding to $(\mathfrak{E}_0)_Q$. Then $(\mathfrak{E})_Q$ corresponds to the intersection of all maximal orders in a finite line of length $n$ in $\mathfrak{t}(k_Q)$ containing $v_0$. Moreover, it follows from the local-global principles (b) and (c) in \S \ref{Section Spinor} that there exists a bijective map between the set of Eichler $\mathcal{C}$-orders $\mathfrak{E}$ of level $D$ contained in $\mathfrak{E}_0$, and the set of finite lines of length $n$ in $\mathfrak{t}(k_Q)$ containing $v_0$. In particular, there are finitely many Eichler $\mathcal{C}$-orders $\mathfrak{E}$ of level $D$ contained in $\mathfrak{E}_0$. Since there exists a bijective map between the set of concrete $D$-grids containing $S_0=S(\mathfrak{E}_0)$, and the set of Eichler $\mathcal{C}$-orders $\mathfrak{E}$ of level $D$ contained in $\mathfrak{E}_0$ (cf.~\S \ref{Section grids}), the claim follows. Let us define $\mathbb{W}$ as the set of abstract $D$-grids $\mathbb{S}$ such that there exist $S_0 \in \mathbb{S}_0$ and $S \in \mathbb{S}$ with $S_0 \subseteq S$. Fix a concrete representative $S_0^{\circ} \in \mathbb{S}_0$. We define $W$ as the set of concrete $D$-grids containing $S_0^{\circ}$. Then, it follows from the previous claim that $W$ is finite. Now, we claim that the map $\phi: W \to \mathbb{W}$, defined by $\phi(S) = [S]$, is surjective. Indeed, let $\mathbb{S} \in \mathbb{W}$. Then, by definition, there exist $S_0' \in \mathbb{S}_0$ and $S' \in \mathbb{S}$ with $S_0' \subseteq S'$. Since $S_0^{\circ}$ and $S_0'$ belong to $\mathbb{S}_0$, there exists $\gamma \in \mathrm{GL}_2(k)$ such that $S_0^{\circ}=\gamma \cdot S_0'$. Thus, the $D$-grid $S=\gamma \cdot S'$ contains $S_0^{\circ}$. This implies that $S \in W$, and, by definition, we have $\phi(S)=[\gamma \cdot S']=[S']=\mathbb{S}$. So, the claim follows. In particular, since $\phi$ is surjective, we conclude that $\mathbb{W}$ is finite, which completes the proof. \end{proof} We are now ready to prove Proposition \ref{max semi-desc}. \begin{proof}[Proof of Proposition \ref{max semi-desc}] Recall that $D=\sum_{i=1}^r n_i P_i$. We prove the existence of semi-decomposition data by induction on $r \in \mathbb{N}$. Note that, if $r=0$, then $D$ is trivial, and, in such case, the result follows from Example \ref{ex0}. Now we prove the result for $r>0$. We may assume that $P_1$ has minimal degree in the support of $D$. Set $U'= \mathcal{C } \smallsetminus \lbrace P_1\rbrace$. Then, we can write $D=n_1P_1+ D_0$, where $D_0$ is supported in $U'$. Let $\mathbb{S}$ be an abstract $D$-grid, and fix a concrete representative $S \in \mathbb{S}$. Let $S_0, \cdots, S_{n_1}$ be the $D_0$-grids in the $P_1$-strata of $S$. See Definition \ref{defi strata} and Figure \ref{fig 01}\textbf{(B)}. 
These define a line $\mathfrak{c}(S)$ in $\mathfrak{t}(k_{P_1})$, and we may assume that it satisfies conditions (i), (ii), (iii), (iv) and (v) in Lemma \ref{Lemma buenos caminos}, and that its image $\mathfrak{c}(\mathbb{S})$ is contained in a cuspidal ray of $C_{P_1}(\mathbb{O}_{D_0})$. Moreover, we can enumerate the strata of $S$ in a way such that $S_i$ corresponds to $z_i \in \mathrm{V}\big(\mathfrak{t}(k_{P_1})\big)$, for each $i \in \lbrace 0, \cdots, s \rbrace$. Now, by the inductive hypothesis and Lemma \ref{lemma finitas D-grillas}, we may assume that there exists a semi-decomposition datum $(\beta, B, D_0')$ of degree $\deg(B)>d_{D}>d_{D_0}$ for $S_0$. We claim that $(\beta, B, sP_1+D_0')$ is a semi-decomposition datum for $S$. Let $A=A(\beta)$ be the base change matrix, as defined in Definition \ref{defi semi-desc}. Then, by definition of a semi-decomposition datum, $\text{V}(S_0)$ contains the vertex set $ \mathrm{V}(\Pi(S_0))=\left\lbrace A \mathfrak{D}_{E} A^{-1}: B \leq E \leq B+D_0' \right\rbrace$. We already know that the vertex $\pi_V(z_i)$ corresponding to the class $\mathbb{S}_i$ of $S_i$ has valency two in $C_{P_{1}}(\mathbb{O}_{D_0})$. Denote by $\mathbb{S}_{-1} \neq \mathbb{S}_1$ the other $P_1$-neighbor of $\mathbb{S}_0$. Then, we can describe a decomposed subgrid of some concrete representatives of $\mathbb{S}_{1}$ and $\mathbb{S}_{-1}$. Indeed, we know that the $D_0'$-grid $\Pi(S_0)$ is a $P_1$-neighbor of the $D_0'$-grids $\nabla_{-1}, \nabla_{1}$, whose vertex sets are respectively $$ \text{V}(\nabla_{-1})= \lbrace A^{-1} \mathfrak{D}_{E} A : B-P_1 \leq E \leq B-P_1+D_0' \rbrace ,$$ and $$ \text{V}(\nabla_{1})= \lbrace A^{-1} \mathfrak{D}_{E} A : B+P_1 \leq E \leq B+P_1+D_0' \rbrace .$$ Then, we can complete $\nabla_{-1}, \nabla_1$ in order to obtain two $D_0$-grids, denoted respectively by $S_{-1}$ and $S_{1}$, which are $P_1$-neighbors of $S_0$. We claim that $S_{1}$ and $S_{-1}$ belong to different abstract $D_0$-grids. Indeed, if $S_{1}$ and $S_{-1}$ define the same abstract $D_0$-grid, then each concrete $D_0$-grid in the class has two semi-decomposition data of positive degree of the form $(\beta_1, B-P_1, D_0')$ and $(\beta_2, B+P_1, D_0')$. Note that $\deg(B)> \deg(P_1)$ by hypothesis, whence $\deg(B+P_1)>0$ and $\deg(B-P_1)>0$. Thus, by Lemma \ref{grado bien definido}, we deduce that $B+P_1$ is linearly equivalent to $B-P_1$, which is impossible. So, the claim follows. Now, we claim that $S_1 \in \mathbb{S}_1$ and $S_{-1} \in \mathbb{S}_{-1}$. In order to prove this, let $\mathfrak{E}_0'$ and $\mathfrak{E}_{-1}'$ be the Eichler $\mathcal{C}$-orders defined as the intersection of all the maximal orders corresponding to vertices of the respective grids $S_0$ and $S_{-1}$. Since $S_0$ and $S_{-1}$ are $D_0$-grids, the level of $\mathfrak{E}_0'$ and $\mathfrak{E}_{-1}'$ is $D_0$. In particular, these Eichler $\mathcal{C}$-orders are maximal at $P_1$. By definition $\Pi(S_0) \subseteq S_0$ and $\nabla_{-1}\subseteq S_{-1}$. This implies that $$ \mathfrak{E}_0' \subseteq \bigcap_{B \leq E \leq B+D_0'} A^{-1}\mathfrak{D}_E A, \, \, \text{ and } \, \, \mathfrak{E}_{-1}' \subseteq \bigcap_{B-P_1 \leq E \leq B-P_1+D_0'} A^{-1} \mathfrak{D}_E A .$$ Thus, if $m$ is the coefficient of $P_1$ in $B$, then $(\mathfrak{E}_0')_{P_1} \subseteq A^{-1}(\mathfrak{D}_{mP_1})_{P_1} A$ and $(\mathfrak{E}_{-1}')_{P_1} \subseteq A^{-1}(\mathfrak{D}_{(m-1)P_1})_{P_1} A$.
Moreover, since $(\mathfrak{E}_0')_{P_1}$ and $(\mathfrak{E}_{-1}')_{P_1}$ are maximal, we get that $(\mathfrak{E}_0')_{P_1} = A^{-1}(\mathfrak{D}_{mP_1})_{P_1} A$ and $(\mathfrak{E}_{-1}')_{P_1}= A^{-1} (\mathfrak{D}_{(m-1) P_1})_{P_1} A$. Since the vertex $v_{-1}$ in $\mathfrak{t}(k_{P_1})$ corresponding to $S_{-1}$ is the projection of some (any) maximal $\mathcal{C}$-order in $\mathrm{V}(S_{-1})$ by localizing at $P_1$, it coincides with $A^{-1} (\mathfrak{D}_{(m-1) P_1})_{P_1} A$. This implies that $v_{-1}$ is the unique neighbor of $z_0$ in the maximal path joining the ends $\epsilon$ and $\infty$ that is further from $\epsilon$ than $z_0$. Thus, we deduce that $S_{-1} \in \mathbb{S}_{-1}$, whence we also obtain $S_{1} \in \mathbb{S}_{1}$. We conclude from this that $(\beta, B+P_1, D_0')$ is a semi-decomposition datum of degree $>d_{D_0}$ for $S_1$. So, by an inductive argument, we can show that, for each $i \in \lbrace 0, \cdots, n_1 \rbrace$, we have that $(\beta, B+ i P_1, D_0')$ is a semi-decomposition datum of degree $>d_{D_0}$ for $S_i$. Therefore, $(\beta, B, sP_1+D_0')$ is a semi-decomposition datum for $S$. It only remains to prove the condition $\deg(B)> d_D$. Let $\mathbb{S}$ be an abstract $D$-grid as above, and fix $S \in \mathbb{S}$. Let us denote by $v_{\mathbb{S}} \in \mathrm{V}(C_{P_{\infty}}(\mathbb{O}_D))$ the vertex corresponding to $\mathbb{S}$. Then we know that almost every vertex $v_{\mathbb{S}}$ belongs to a cuspidal ray $\mathfrak{r} \subset C_{P_{\infty}}(\mathbb{O}_D)$. In particular, almost every vertex $v_{\mathbb{S}}$ has exactly two $P_{\infty}$-neighbors. Let $v_{\mathbb{S}}^{+}$ be the neighbor of $v_{\mathbb{S}}$ that is closest to the end of $\mathfrak{r}$. Then, the argument above shows that, for almost every abstract $D$-grid $\mathbb{S}$ with a semi-decomposition datum $(B,D')$, the abstract grid corresponding to $v_{\mathbb{S}}^{+}$ has the semi-decomposition datum $(B+P_{\infty}, D')$. See Figure \ref{fig X}. Recalling that there are finitely many cuspidal rays in $C_{P_{\infty}}(\mathbb{O}_D)$, we see that, up to increasing $n \in \mathbb{Z}$ in the divisor $B+nP_{\infty}$, almost every abstract grid, and any of its concrete representatives, has a semi-decomposition datum of degree $>d_D$. \begin{figure} \[ \fbox{ \xygraph{ !{<0cm,0cm>;<0.9cm,0cm>:<0cm,0.9cm>::} !{(-1,0.0) }*+{{}^{(B, D')}}="i0" !{(1.0,0.6) }*+{{}^{ (B+P_{\infty}, D') }}="i1" !{(3,0.0)}*+{{}^{(B+2P_{\infty}, D') }}="i2" !{(-1,0.3) }*+{\bullet}="e0" !{(1.0,0.3) }*+{\bullet}="e1"!{(3,0.3) }*+{\bullet}="e2" !{(5,0.3) }*+{}="e3" "e2"-@{.}"e3" "e0"-@{-}"e1" "e1"-@{-}"e2" }} \] \caption{A cuspidal ray $\mathfrak{r} \subseteq C_{P_{\infty}}(\mathbb{O}_D)$. Each vertex in $\mathrm{V}(\mathfrak{r})$ represents an abstract $D$-grid with a semi-decomposition datum of the form $(B+nP_{\infty}, D')$.}\label{fig X} \end{figure} We now prove the uniqueness of semi-decomposition data. Let $\mathbb{S}$ be an abstract $D$-grid as above, and let $S \in \mathbb{S}$ be a concrete representative. In particular, $S$ has a semi-decomposition datum $(\beta, B, D')$ of degree $>d_D$. Write $D'=\sum_{i=1}^r s_i P_i$, where $s_i \geq \lfloor \frac{n_i+1}{2}\rfloor$. We can characterize the isomorphism classes of the non-$\beta$-split vertices in $\mathrm{V}(S)$, i.e. the vertices in $\mathrm{V}(S) \smallsetminus \mathrm{V}(\Pi_{\mathrm{SD}})$.
Indeed, condition (v) in Lemma \ref{Lemma buenos caminos} implies that any non-$\beta$-split maximal $\mathcal{C}$-order in $\mathrm{V}(S)$ is isomorphic to a split maximal $\mathcal{C}$-order in $\Pi_{\mathrm{SD}}$. Then, the vertex set $\lbrace v_j \rbrace_{j=0}^{n_i}$ of the $P_i$-axis of $S$ satisfies that $v_j \cong \mathfrak{D}_{B+j P_i}$, if $j\leq s_i$, and $v_{j} \cong \mathfrak{D}_{B+(2s_i-j)P_i}$, if $j \geq s_i$. On the other hand, by hypothesis we have $\deg(B)> d_D \geq 0$. Then, Lemma \ref{grado bien definido} implies that $\mathfrak{D}_{B+(2s_i-j)P_i}$ is not isomorphic to $\mathfrak{D}_{B+jP_i}$, for any $j > s_i$. This implies that the integers $\lbrace s_i \rbrace_{i=1}^{r}$ are unique, and hence $D'$ is unique. The uniqueness of the class of $B$ follows from Lemma \ref{grado bien definido}. \end{proof} An immediate corollary of the end of the preceding proof is the following statement, which we state here for later reference. \begin{corollary}\label{coro semi-desc of neighbor} Let $\mathbb{S}$ be an abstract $D$-grid corresponding to a vertex in a cuspidal ray in $C_{P_{\infty}}(\mathbb{O}_D)$. Let $(B,D')$ be a semi-decomposition datum of $\mathbb{S}$ of degree $>d_D$. Let $\mathbb{S}^{+}$ be the abstract $D$-grid corresponding to the unique neighbor, in $C_{P_{\infty}}(\mathbb{O}_D)$, of the vertex corresponding to $\mathbb{S}$ that is closer to the end of the cuspidal ray. Then $(B+P_{\infty},D')$ is a semi-decomposition datum of $\mathbb{S}^{+}$. \end{corollary} \begin{rem} Any semi-decomposed grid with a sufficiently negative degree datum is totally decomposed. Indeed, let $S$ be a $D$-grid, and let $(\beta, B, D')$ be a semi-decomposition datum for $S$. Replacing $S$ by another representative in the same class if needed, we can assume that $\beta$ is the canonical basis, or equivalently that $A(\beta)=\mathrm{Id}$. Let us write $D=\sum_{i=1}^{r} n_iP_i$ and $D'=\sum_{i=1}^{r} s_iP_i$. For each $i \in \lbrace 1, \cdots, r \rbrace$, we denote by $S(P_i) \subset \mathfrak{t}(k_{P_i})$ the $P_i$-axis of $S$. Note that $S(P_i)$ is a length-$n_i$ line. Moreover, if we write $\mathrm{V}(S(P_i))=\lbrace v_j\rbrace_{j=0}^{n_i}$, then, for any $0 \leq j \leq s_i$, the vertex $v_j$ corresponds to the local maximal order $(\mathfrak{D}_{B+jP_i})_{P_i}$. Thus, all vertices in $\lbrace v_j \rbrace_{j=0}^{s_i}$ are located on the maximal path $\mathfrak{p}(0,\infty)$ joining the visual limits $0$ and $\infty$ in $\partial_{\infty}(\mathfrak{t}(k_{P_i}))= \mathbb{P}^1(k_{P_i})$. Let $\mathfrak{E}$ be the Eichler $\mathcal{C}$-order corresponding to the class $\mathbb{S}$ of $S$, i.e., assume that $\mathbb{S}=[S(\mathfrak{E})]$. Set $U_i= \mathcal{C} \smallsetminus \lbrace P_i \rbrace$ and $\Gamma= \mathrm{Stab}_{\mathrm{PGL}_2(k)}(\mathfrak{E}(U_i))$. Arguing as in the proof of Lemma \ref{Lemma buenos caminos} we can prove that, for each $s_i+1 \leq j \leq n_i$, $v_j$ is $\Gamma$-equivalent to a vertex in $\mathfrak{p}(0,\infty)$ not in $\lbrace v_l \rbrace_{l=0}^{j-1}$. Thus, we get that $s_i=n_i$, and hence any semi-decomposition datum $(\beta, B, D')$ of $S$ is a total decomposition datum. Finally, note that, if $\mathfrak{I}=\sbmattrix {0}{1}{1}{0}$, then $S$ is in the same class as $\mathfrak{I} A(\beta) S A(\beta)^{-1}\mathfrak{I}$, whose total-decomposition datum $(\beta_0, D-B,D)$ has a positive degree.
\end{rem} \subsection{On the combinatorial structure of the classifying graph}\label{subsection combinatorial structure} Here the main goal is to compute the cusp number of the classifying graph $C_{P_{\infty}}(\mathbb{O}_D)$, which corresponds to Proposition \ref{prop aux}. Actually, we prove a more precise result stated in Proposition \ref{lema4} below. To do this, we use the existence and uniqueness of semi-decomposition data of almost every abstract $D$-grid proved in the previous section. More specifically, using Proposition \ref{max semi-desc} we define a natural simplicial map $\tilde{\mathfrak{d}}: C_{P_{\infty}}(\mathbb{O}_D) \smallsetminus Y \to C_{P_{\infty}}(\mathbb{O}_0)$, where $Y$ is a finite subgraph; this map is a regular cover over the cuspidal rays of $C_{P_{\infty}}(\mathbb{O}_0)$. Then, we compute the number of pre-images of a given cuspidal ray and we apply Theorem \ref{Teo Arenas cusp in C0} in order to obtain the desired result. \begin{defi}\label{ideal tip} Let $S$ be a concrete $D$-grid with a semi-decomposition datum $(\beta, B, D')$ of positive degree. We denote by $\delta(S)$ the $\text{PGL}_2(k)$-conjugacy class of the maximal $\mathcal{C}$-order $\mathfrak{D}_{B}$. Let $\mathbb{S}$ be an abstract $D$-grid. Assume that $\mathbb{S}$ has a semi-decomposition datum $(B, D')$ with positive degree. We define the principal corner of $\mathbb{S}$ as $\mathfrak{d}(\mathbb{S}):=\delta(S)$, where $S \in \mathbb{S}$. \end{defi} It follows from Lemma \ref{grado bien definido} that, if $S,S^{\circ} \in \mathbb{S}$ have respective semi-decomposition data $(\beta, B, D')$ and $(\beta^{\circ}, B^{\circ}, {D'}^{\circ})$ with positive degree, then $\mathfrak{D}_{B}$ is $\mathrm{GL}_2(k)$-conjugate to $\mathfrak{D}_{B^{\circ}}$. Hence the previous definition does not depend on the choice of $S$. \begin{defi} Let $\mathbb{S}$ be an abstract $D$-grid with a semi-decomposition datum $(B, D')$. Let us write $D=\sum_{i=1}^r n_i P_i$ and $D'=\sum_{i=1}^r s_i P_i$. We define the semi-decomposition vector associated to the previous datum as $l=l(\mathbb{S}):=(s_1, \cdots, s_r)$. Note that $(B, D')$ is a total decomposition datum exactly when $l=(n_1, \cdots, n_r)$. \end{defi} Now, it follows from Proposition \ref{max semi-desc} that there exists a finite graph $Y \subset C_{P_{\infty}}(\mathbb{O}_D)$ such that, for each vertex $v \in \mathrm{V}(C_{P_{\infty}}(\mathbb{O}_D) \smallsetminus Y)$, the corresponding abstract $D$-grid $\mathbb{S}=\mathbb{S}(v)$ has a representative with a semi-decomposition datum of degree $>\deg(D)$. Then, $\mathfrak{d}$ induces a well-defined function $\widetilde{\mathfrak{d}}: \mathrm{V}(C_{P_{\infty}}(\mathbb{O}_D) \smallsetminus Y) \rightarrow \mathrm{V}(C_{P_{\infty}}(\mathbb{O}_0))$. Let $\mathbb{S}$ be an abstract $D$-grid with a semi-decomposition datum $(B,D')$ of degree $>\deg(D)$. Let $\mathbb{S}^{+}$ be the abstract $D$-grid corresponding to the unique neighbor $v_{\mathbb{S}}^{+}$ of $v_{\mathbb{S}}$ in $C_{P_{\infty}}(\mathbb{O}_D)$ that is closer to the end of the cuspidal ray containing $v_{\mathbb{S}}$. By Corollary \ref{coro semi-desc of neighbor}, $(B+P_{\infty},D')$ is a semi-decomposition datum of $\mathbb{S}^{+}$. In this case, we get $\mathfrak{d}(\mathbb{S}^{+}) = [\mathfrak{D}_{B+P_{\infty}}]$. See Figure \ref{fig 1}\textbf{(B)}. In particular, this implies that the function $\widetilde{\mathfrak{d}}$ sends neighboring vertices into neighboring vertices.
So, we can extend $\widetilde{\mathfrak{d}}$ to a simplicial map from $C_{P_{\infty}}(\mathbb{O}_D) \smallsetminus Y$ to $C_{P_{\infty}}(\mathbb{O}_0)$, which we also denote by $\widetilde{\mathfrak{d}}$. The following proposition describes the fibers of $\widetilde{\mathfrak{d}}$. \begin{prop}\label{lema 3} Let $\mathfrak{D}=\mathfrak{D}_B$ be a split maximal $\mathcal{C}$-order satisfying $\deg(B)> \deg(D)$. Assume that $[\mathfrak{D}] \neq \mathfrak{d}(\mathbb{S})$ for every $\mathbb{S}$ corresponding to a vertex in $Y$. Then, $\widetilde{\mathfrak{d}}^{-1}([\mathfrak{D}])$ contains \begin{itemize} \item[(1)] Exactly one totally decomposed $D$-grid, and \item[(2)] Exactly $\frac{1}{q-1} \prod_{s_i \neq n_i} (q^{\deg(P_i)}-1) q^{\deg(P_i)(n_i-s_i-1)}$ semi-decomposed $D$-grids whose semi-decomposition vector is $(s_1, \cdots, s_r) \neq (n_1, \cdots, n_r)$. \end{itemize} \end{prop} In order to prove this proposition we use the following lemma. \begin{lemma}\label{lemma equal tip} Let $\mathbb{S}$ and $\mathbb{S}^{\circ}$ be two $D$-grids with semi-decomposition data of degree $> \deg(D)$ such that $\mathfrak{d}(\mathbb{S})=\mathfrak{d}(\mathbb{S}^{\circ})$ and $l(\mathbb{S})=l(\mathbb{S}^{\circ})$. Then there exist concrete $D$-grids $S \in \mathbb{S}$ and $S^{\circ} \in \mathbb{S}^{\circ}$ such that their respective decomposed subgrids $\Pi_{\mathrm{SD}}$ and $\Pi_{\mathrm{SD}}^{\circ}$ are equal. \end{lemma} \begin{proof} Let $\mathbb{S}$ and $\mathbb{S}^{\circ}$ be two abstract $D$-grids such that $\mathfrak{d}(\mathbb{S})=\mathfrak{d}(\mathbb{S}^{\circ})$. Fix $S_0\in \mathbb{S}$ and $S_0^{\circ} \in \mathbb{S}^{\circ}$, and let $(\beta, B, D')$ and $(\beta^{\circ}, B^{\circ}, D')$ be their respective semi-decomposition data. Let $\Pi_{\mathrm{SD}} \subset S_0$ and $\Pi_{\mathrm{SD}}^{\circ} \subset S^{\circ}_0$ be the respective decomposed subgrids, and write $A=A(\beta)$ and $A^{\circ}=A(\beta^{\circ})$. By hypothesis $\mathfrak{d}(\mathbb{S})=\mathfrak{d}(\mathbb{S}^{\circ})$, whence we have that $\delta(S_0)= \mathfrak{D}_{B}$ is $\mathrm{GL}_2(k)$-conjugate to $\delta(S^{\circ}_0)= \mathfrak{D}_{B^{\circ}}$. Moreover, by hypothesis, $\deg(B), \deg(B^{\circ})> \deg(D) \geq 0$. So, by \cite[\S 4, Proposition 4.1]{A1}, there exists $f \in k^{*}$ such that $B-B^{\circ}= \text{div}(f)$. Set $G= \sbmattrix {f}{0}{0}{1} \in \text{GL}_2(k)$ so that $G \mathfrak{D}_{B^{\circ}} G^{-1}=\mathfrak{D}_{B}$. We claim that $S:=(GA) S_0 (GA)^{-1} \in \mathbb{S}$ and $S^{\circ}:=(A^{\circ}) S_0^{\circ} (A^{\circ})^{-1} \in \mathbb{S}^{\circ}$ have the same decomposed subgrids. Indeed, for any divisor $E$ satisfying $0 \leq E \leq D'$, we have $$ G \mathfrak{D}_{B^{\circ}+E} G^{-1} = \sbmattrix {\mathcal{O}_{\mathcal{C}}}{f^{-1}\mathfrak{L}^{-B^{\circ}-E}}{f \mathfrak{L}^{B^{\circ}+E}}{\mathcal{O}_{\mathcal{C}}}= \mathfrak{D}_{B+E}.$$ In particular, we obtain that $(GA) \Pi_{\mathrm{SD}} (GA)^{-1}= (A^{\circ}) \Pi_{\mathrm{SD}}^{\circ} (A^{\circ})^{-1}$, i.e., the decomposed subgrids of $S$ and $S^{\circ}$ are equal. \end{proof} \begin{proof}[Proof of Proposition \ref{lema 3}] Let $\mathbb{S}$ and $\mathbb{S}^{\circ}$ be two abstract $D$-grids such that $\mathfrak{d}(\mathbb{S})=\mathfrak{d}(\mathbb{S}^{\circ})=[\mathfrak{D}_B]$, where $\deg(B) > \deg(D)$. It follows from Lemma \ref{lemma equal tip} that there exist $S \in \mathbb{S}$ and $S^{\circ} \in\mathbb{S}^{\circ}$ with the same decomposed subgrid $\Pi_{\mathrm{SD}}$. So, if all $s_i=n_i$, then $S=S^{\circ}$, whence $\mathbb{S}=\mathbb{S}^{\circ}$.
On the other hand, we can always consider the totally decomposed $D$-grid whose vertices are $\mathfrak{D}_{B+E}$, where $0 \leq E \leq D$. Thus (1) follows. More generally, $\mathbb{S}=\mathbb{S}^{\circ}$ if and only if there exists $g \in \text{GL}_2(k)$ such that \begin{itemize} \item[(1D)] $g \Pi_{\mathrm{SD}} g^{-1}= \Pi_{\mathrm{SD}}$, and \item[(2D)] $g (S \smallsetminus \Pi_{\mathrm{SD}})g^{-1}= S^{\circ} \smallsetminus \Pi_{\mathrm{SD}}$, \end{itemize} where we recall that no vertex in $S \smallsetminus \Pi_{\mathrm{SD}}$ or in $S^{\circ} \smallsetminus \Pi_{\mathrm{SD}}$ is split in the canonical basis. As $\Pi_{\mathrm{SD}}$ is totally decomposed, by \cite[\S 4, Proposition 4.1]{A1}, we have $g=\sbmattrix {x}{y}{0}{z}$, where $\text{div}(x^{-1}z)=0$, i.e., $x^{-1}z \in \mathbb{F}^{*}$. Let $m_i$ be the multiplicity of $P_i$ in $B$. Then, Condition (1D) is equivalent to the following statement: \textit{For every $i \in \lbrace 1, \cdots, r \rbrace$, the action of $g \in \mathrm{GL}_2(k)$ on the tree $\mathfrak{t}_i=\mathfrak{t}(k_{P_i})$ pointwise stabilizes the finite path $\mathfrak{c}_i$ whose vertex set is \begin{equation} \left\lbrace \sbmattrix {\mathcal{O}_{P_{i}}}{ \pi_{i}^{-m_i-t}\mathcal{O}_{P_{i}}}{\pi_{i}^{m_i+t}\mathcal{O}_{P_{i}}}{\mathcal{O}_{P_{i}}} : t \in \lbrace 0, \cdots, s_i \rbrace \right\rbrace, \end{equation} where $\pi_i \in \mathcal{O}_{P_i}$ is a local uniformizing parameter}. Moreover, this condition is equivalent to $\nu_{P_i}(y) \geq -m_i$. On the other hand, when $s_i \neq n_i$, the localizations at $P_i$ of orders in $S$ that are different from the localizations of vertices in $ \Pi_{\mathrm{SD}}$ correspond to the vertices of a line $\widehat{\mathfrak{p}}_i$ of length $(n_i-s_i-1)$ in $\mathfrak{t}(k_{P_i})$, which does not intersect the maximal path joining $0$ and $\infty$. See Figure \ref{fig 01}\textbf{(A)}. Hence, vertices in $\widehat{\mathfrak{p}}_i$ are in correspondence with the local rings of endomorphisms of the lattices \begin{equation} \binom{a}{1}\mathcal{O}_{P_i}+ \binom{\pi_i^{-m_i-s_i-j}}{0} \mathcal{O}_{P_i}, \end{equation} where $a = a_1 \pi_{i}^{-m_i-s_i+1}+ \cdots+ a_j \pi_{i}^{-m_i-s_i+j}$, and where, for any $t>1$, we have $a_t \in \mathbb{F}(P_i)= \mathcal{O}_{P_i}/ \pi_i\mathcal{O}_{P_i}$, while $a_1 \in \mathbb{F}(P_i)^{*}$. The same characterization holds for the localizations at $P_i$ of orders in $S^{\circ}$, by replacing the coefficients $a_t$ by coefficients $a_t^{\circ} \in \mathbb{F}(P_i)$. Since $\nu_{P_i}(y) \geq -m_i$, Condition (2D) is equivalent to $a_j= a_j^{\circ}(x^{-1}z)$, for each $j \in \lbrace 1, \cdots, n_i-s_i \rbrace$. Since two $D$-grids are equal if and only if all their $P_i$-projections coincide, it follows that the number of abstract $D$-grids in $\tilde{\mathfrak{d}}^{-1}([\mathfrak{D}_B])$ with semi-decomposition vector $(s_1, \cdots, s_r)$ equals the number of $\mathbb{F}^{*}$-homothety classes in $$\prod_{s_i \neq n_i} \left( \mathbb{F}(P_i)^{*} \times \mathbb{F}(P_i)^{n_i-s_i-1}\right),$$ which is $\frac{1}{q-1} \prod_{s_i \neq n_i} (q^{\deg(P_i)}-1) q^{\deg(P_i)(n_i-s_i-1)}$. This proves (2). \end{proof} Let us denote by $\alpha(D)$ the positive integer $$\alpha(D)=1+\frac{1}{q-1} \prod_{i=1}^r \left( q^{\deg(P_i)\lfloor \frac{n_i}{2}\rfloor }-1\right).$$ \begin{lemma}\label{lem preimagen de eta} Let $B$ be a divisor whose degree is greater than $\deg(D)$, and assume that $[\mathfrak{D}_B] \neq \mathfrak{d}(\mathbb{S})$ for every $\mathbb{S}$ corresponding to a vertex in $Y$. Then, $\tilde{\mathfrak{d}}^{-1}([\mathfrak{D}_B])$ has $\alpha(D)$ elements.
In particular, there are $\alpha(D)$ different cuspidal rays in $C_{P_{\infty}}(\mathbb{O}_D)$ whose initial vertex corresponds to an abstract $D$-grid $\mathbb{S}$ satisfying $\mathfrak{d}(\mathbb{S})= [\mathfrak{D}_{B}]$. \end{lemma} \begin{proof} It follows directly from Proposition \ref{lema 3} that $\tilde{\mathfrak{d}}^{-1}([\mathfrak{D}_{B}])$ contains one and only one split abstract $D$-grid, and, for each possible semi-decomposition vector $l=(s_1, \cdots, s_r) \neq (n_1, \cdots, n_r)$, there are precisely $\frac{1}{q-1} \prod_{s_i \neq n_i} (q^{\deg(P_i)}-1) q^{\deg(P_i)(n_i-s_i-1)}$ non-split abstract $D$-grids whose semi-decomposition vector is $l$. By definition of semi-decomposition data we know that $l \in \Upsilon= \lbrace (s_1, \cdots, s_r): n_i \geq s_i \geq \lfloor \frac{n_i+1}{2}\rfloor \rbrace$. Then, $\tilde{\mathfrak{d}}^{-1}([\mathfrak{D}_{B}])$ contains $$ 1+ \sum_{\Upsilon'} \frac{1}{q-1} \prod_{s_i \neq n_i} (q^{\deg(P_i)}-1) q^{\deg(P_i)(n_i-s_i-1)}= 1+\frac{1}{q-1} \prod_{i=1}^{r} \left( q^{\deg(P_i)\lfloor \frac{n_i}{2}\rfloor }-1\right)$$ different abstract $D$-grids, where $\Upsilon':= \Upsilon \smallsetminus \lbrace (n_1, \cdots, n_r) \rbrace$. Hence, the first claim is proved. Now, recall that, by Corollary \ref{coro semi-desc of neighbor}, if $\mathbb{S}$ and $\mathbb{S}^{\circ}$ are two $P_{\infty}$-neighboring $D$-grids satisfying $\mathfrak{d}(\mathbb{S})=[\mathfrak{D}_{B}]$ and $\mathfrak{d}(\mathbb{S}^{\circ})=[\mathfrak{D}_{B^{\circ}}]$, then $(B,D')$ and $( B+P_{\infty},D')$ are respective semi-decomposition data of $\mathbb{S}$ and $\mathbb{S}^{\circ}$. So, it follows that $\mathbb{S}$ and $\mathbb{S}^{\circ}$ have the same semi-decomposition vector. Thus, the second claim follows. \end{proof} Recall that, by definition, the number of cusps of a disjoint finite union of graphs is the sum of the number of cusps in each of its connected components. \begin{prop}\label{prop cuspides en C} The number of cusps of $C_{P_{\infty}}(\mathbb{O}_D)$ equals $\mathrm{Card}(\mathrm{Pic}(R))\, \alpha(D)$. \end{prop} \begin{proof} It follows from Theorem \ref{Teo Arenas cusp in C0} that the vertices of $C_{P_{\infty}}(\mathbb{O}_0)$ corresponding to the $\text{GL}_2(k)$-classes of split maximal $\mathcal{C}$-orders $\mathfrak{D}_B$ are located in a finite disjoint union of infinite lines or half lines in $C_{P_{\infty}}(\mathbb{O}_0)$. Viewing an infinite line as the union of two half lines, the number of such half lines is equal to the number of cusps of $C_{P_{\infty}}(\mathbb{O}_0)$, which coincides with $\mathrm{Card}(\mathrm{Pic}(R))$. Recall that we can remove a finite set of vertices in the classifying graphs in order to simplify some arguments. Thus, we can consider only vertices associated to abstract $D$-grids with a semi-decomposition datum of degree $>\deg(D)$ (cf.~Proposition \ref{max semi-desc}). Recall also that $\tilde{\mathfrak{d}}$ is a simplicial map from the disjoint union of a set of cuspidal rays in $C_{P_{\infty}}(\mathbb{O}_D)$, representing all classes of cuspidal rays, to an analogous set in $C_{P_{\infty}}(\mathbb{O}_0)$. In particular, the image of a cuspidal ray of the former set under $\tilde{\mathfrak{d}}$ is also a cuspidal ray. So, $\tilde{\mathfrak{d}}$ can be seen as a function between such cusps. In particular, to compute the cusp number in $C_{P_{\infty}}(\mathbb{O}_D)$ it suffices to compute the number of pre-images of each cusp in $C_{P_{\infty}}(\mathbb{O}_0)$.
Moreover, this can be reduced to computing the number of pre-images of any vertex in a cuspidal ray of $C_{P_{\infty}}(\mathbb{O}_0)$. Hence, the proposition follows from Lemma \ref{lem preimagen de eta}. \end{proof} We say that a cusp $\eta \in \partial^{\infty}(C_{P_{\infty}}(\mathbb{O}_D))$ is split if it is represented by a cuspidal ray formed only by vertices corresponding to totally decomposed $D$-grids. In any other case we say that the cusp is non-split. Note that the arguments given in the proof above imply that $\widetilde{\mathfrak{d}}$ induces a natural function $\widetilde{\mathfrak{d}}^{\infty}:\partial^{\infty}( C_{P_{\infty}}(\mathbb{O}_D)) \rightarrow \partial^{\infty}( C_{P_{\infty}}(\mathbb{O}_0))$ between the respective cusp sets of $C_{P_{\infty}}(\mathbb{O}_D)$ and $C_{P_{\infty}}(\mathbb{O}_0)$. Then, Proposition \ref{lema 3}.(1) and Lemma \ref{lem preimagen de eta} imply that for each $\eta \in \partial^{\infty}(C_{P_{\infty}}(\mathbb{O}_0))$ there exists a unique split cusp in $(\widetilde{\mathfrak{d}}^{\infty})^{-1}(\eta)$, and $\alpha(D)-1$ non-split cusps. \begin{rem}\label{rem 3} We can characterize the unique split cusp in $(\widetilde{\mathfrak{d}}^{\infty})^{-1}(\eta)$. Indeed, let $\eta'_B \in \partial^{\infty}( C_{P_{\infty}}(\mathbb{O}_D))$ be the class of the cuspidal ray $\mathfrak{r}_{B,D} \subseteq C_{P_{\infty}}(\mathbb{O}_D)$, whose vertices correspond to the $\text{PGL}_2(k)$-classes of decomposable Eichler $\mathcal{C}$-orders $\Ea_{B,n}$ (equivalently, the totally decomposed abstract $D$-grids $\mathbb{S}=[S(\Ea_{B,n})]$) defined by \begin{equation} \Ea_{B,n}=\sbmattrix {\oink_\mathcal{C}}{\mathfrak{L}^{B+nP_{\infty}}}{\mathfrak{L}^{-B-nP_{\infty}+D}}{\oink_\mathcal{C}}, \quad n \geq 0. \end{equation} Then, the image by $\widetilde{\mathfrak{d}}^{\infty}$ of $\eta'_B$ is the class $\eta_B \in \partial^{\infty}( C_{P_{\infty}}(\mathbb{O}_0))$ of the cuspidal ray $\mathfrak{r}_B \subseteq C_{P_{\infty}}(\mathbb{O}_0)$, whose vertices correspond to $ \lbrace [\mathfrak{D}_{B+nP_{\infty}}] : n \geq 0 \rbrace$. In other words, $\eta'_B$ is the unique split cusp in $(\widetilde{\mathfrak{d}}^{\infty})^{-1}(\eta_B)$. Moreover, it follows from Theorem \ref{Teo Arenas cusp in C0} that, if we fix a representative set $\Delta_R \subset \mathrm{Div}(\mathcal{C})$ of $\mathrm{Pic}(R)$, then $\lbrace \eta_B : B \in \Delta_R \rbrace$ is a representative set of $\partial^{\infty}(C_{P_{\infty}}(\mathbb{O}_0))$. \end{rem} We are finally ready to prove Proposition \ref{prop aux} as an immediate consequence of the following result, which concludes the proof of Theorem \ref{teo cusp}. \begin{prop}\label{lema4} The number of cusps of any connected component of $C_{P_{\infty}}(\mathbb{O}_D)$ is the same, and it equals \begin{equation} c(D)= \alpha(D) [2\mathrm{Pic}(\mathcal{C})+\left\langle \overline{P_{a_1}}, \cdots ,\overline{P_{a_u}}, \overline{P_{\infty}} \right\rangle : \langle \overline{P_{\infty}} \rangle], \end{equation} where $P_{a_1}, \cdots, P_{a_u}$ are the closed points in $\mathrm{Supp}(D) \subseteq \mathcal{C}$ whose coefficients are odd. Moreover, there are $c(D)/\alpha(D)$ split cusps in any connected component of $C_{P_{\infty}}(\mathbb{O}_D)$. \end{prop} \begin{proof} We will use the bijection between the set of abstract $D$-grids and the set of Eichler $\mathcal{C}$-orders of level $D$ (cf.~Proposition \ref{eichler=grillas}). In particular, we will assign to any Eichler $\mathcal{C}$-order of level $D$ a vertex in $C_{P_{\infty}}(\mathbb{O}_D)$.
Let $\mathfrak{E}$ and $\mathfrak{E'}$ be two split Eichler $\mathcal{C}$-orders associated to two vertices $v,v'$ in the same connected component of $C_{P_{\infty}}(\mathbb{O}_D)$. We denote by $\mathbb{S}$ and $\mathbb{S}'$ the respective abstract $D$-grids corresponding to $\mathfrak{E}$ and $\mathfrak{E'}$. Assume that $\mathfrak{d}(\mathbb{S})= [\mathfrak{D}_B]$, $\mathfrak{d}(\mathbb{S}')= [\mathfrak{D}_{B'}]$, and $\deg(B), \deg(B')> \deg(D)$. Let $\Sigma_D=\Sigma(\mathbb{O}_D)$ be the spinor class field of Eichler $\mathcal{C}$-orders of level $D$. By definition of $C_{P_{\infty}}(\mathbb{O}_D)$, the vertices $v$ and $v'$ are in the same connected component if and only if $\rho(\mathfrak{E},\mathfrak{E'}) \in \lbrace \text{id}_{\Sigma_D}, [[P_{\infty},\Sigma_D/k]]\rbrace$. It follows from Equation \eqref{eq rho} that $\rho(\mathfrak{E},\mathfrak{E'})= \rho(\mathfrak{D}_B, \mathfrak{D}_{B'})=[[B-B', \Sigma_D/k]]$. Hence, by Proposition \ref{gal grupo de clase}, $v$ and $v'$ are in the same connected component precisely when $\overline{B-B'} \in 2\text{Pic}(\mathcal{C})+\left\langle \overline{P_{a_1}}, \cdots ,\overline{P_{a_u}}, \overline{P_{\infty}} \right\rangle$. Moreover, since the cuspidal ray $\mathfrak{r} \subseteq C_{P_{\infty}}(\mathbb{O}_{D})$ containing $v$ consists of vertices corresponding to $[\mathfrak{D}_{B+nP_{\infty}}]$ (cf.~Corollary \ref{coro semi-desc of neighbor}), we see that the coset $\overline{B}+\langle \overline{P_{\infty}} \rangle$ determines the image in $C_{P_{\infty}}(\mathbb{O}_{0})$ of $\mathfrak{r}$. This tells us that there are $[2\mathrm{Pic}(\mathcal{C})+\left\langle \overline{P_{a_1}}, \cdots ,\overline{P_{a_u}}, \overline{P_{\infty}} \right\rangle : \langle \overline{P_{\infty}} \rangle]$ possible images via $\widetilde{\mathfrak{d}}^{\infty}$ for a cusp in a given connected component of $C_{P_{\infty}}(\mathbb{O}_D)$. Since by Lemma \ref{lem preimagen de eta} there are $\alpha(D)$ non-equivalent cuspidal rays in $C_{P_{\infty}}(\mathbb{O}_D)$ covering each cuspidal ray in $C_{P_{\infty}}(\mathbb{O}_{0})$, we get that there are $c(D)$ different cusps in any connected component of $C_{P_{\infty}}(\mathbb{O}_D)$. This proves the first statement. For the last statement, it follows from Proposition \ref{lema 3}.(1) that there exists only one split cuspidal ray in each fiber of $\widetilde{\mathfrak{d}}^{\infty}$. Thus, it suffices to count the number of cusps in the image of $\widetilde{\mathfrak{d}}^{\infty}$ in any given connected component. As we proved above, this number is precisely $c(D)/\alpha(D)$. \end{proof} \section{Stabilizers and amalgams}\label{Section amalgams} In this section we analyze the structure of $\text{H}_D$ as an amalgam. More specifically, the main goal of this section is to prove Theorems \ref{teo grup} and \ref{teo grup ab}. To this end, in all that follows we assume that $g(2)$ is trivial and that each $n_i$ is an odd positive integer. We also assume that $\mathrm{Supp}(D) \neq \emptyset$, since the remaining case can be reduced to Serre's result. In order to prove the aforementioned results we extensively use Bass-Serre theory (cf.~\cite[Chapter I, \S 5]{S}). Let $\mathbf{C}$ be a set indexing all the cusps in $\mathfrak{t}_D$. It follows from Theorem \ref{teo cusp} that $\mathfrak{t}_D$ is the union of a finite graph $Y$ with a finite number of cuspidal rays, namely $\mathfrak{r}(\sigma)$, for $\sigma \in \mathbf{C}$.
Moreover, the same result implies the following identity: $$\mathrm{Card}(\mathbf{C})=c(\mathrm{H}_D)=2^r |g(2)| \left| \frac{ 2\mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle}{ \langle \overline{P_{\infty}} \rangle} \right| \left( 1+ \frac{1}{q-1} \prod_{i=1}^r \left( q^{\deg(P_i)\lfloor \frac{n_i}{2} \rfloor}-1\right) \right).$$ Now, we choose a maximal tree $T$ of $\mathfrak{t}_D$ and a lift $j: T \rightarrow \mathfrak{t}=\mathfrak{t}(K)$. Note that each cuspidal ray $\mathfrak{r}(\sigma)$ of $\mathfrak{t}_D$ is contained in $T$, whence $j(\mathfrak{r}(\sigma))$ is a ray of $\mathfrak{t}$. More explicitly, we can fix a tree $T$ by taking the union of the cuspidal rays $\mathfrak{r}(\sigma)$ with a maximal tree in the finite graph $Y \subseteq \mathfrak{t}_D$. \subsection{Review of Bass-Serre Theory}\label{subsection recalls on BS Theory} Let us recall some definitions from \S \ref{Section BTT}. Let $(s, t, r)$ and $(\tilde{s}, \tilde{t}, \tilde{r})$ be the triplets indicating source, target and reverse maps for the graphs $\mathfrak{t}$ and $\mathfrak{t}_D$, respectively. An orientation on $\mathfrak{t}_D$ is a subset $O$ of $\mathrm{E}(\mathfrak{t}_D)$ such that $\mathrm{E}(\mathfrak{t}_D)$ is the disjoint union of $O$ and $\tilde{r}(O)$. In order to simplify some of the subsequent definitions, let us fix an orientation $O$ for $\mathfrak{t}_D$, and set $o(y)=0$, if $y \in O$, while $o(y)=1$, if $y \not\in O$, i.e., if $\tilde{r}(y) \in O$. We extend $j$ to a function $j: \mathrm{E}(\mathfrak{t}_D) \to \mathrm{E}(\mathfrak{t})$ satisfying the relation \begin{equation}\label{eq j ext} j(\tilde{r}(y))=r(j(y)), \end{equation} as follows: For each $y \in O \smallsetminus \mathrm{E}(T)$, we choose $j(y)$ so that $s(j(y)) \in \mathrm{V}(j(T))$. For the remaining edges we define $j(y)$ by the relation \eqref{eq j ext}. Note that we have $s(j(y))=j(\tilde{s}(y))$, for all $y \in O$. In general, however, the corresponding relation for the target does not hold. Next, for each $y \in O \smallsetminus \mathrm{E}(T)$ we choose $g_{y} \in \mathrm{H}_D$ satisfying $t(j(y))=g_{y}\cdot j(\tilde{t}(y))$. This is always possible since $t(j(y))$ and $j(\tilde{t}(y))$ have the same image $\tilde{t}(y)$ in the quotient set $\mathrm{V}(\mathfrak{t}_D)$. Now, we extend the map $y \mapsto g_{y}$ to all edges in $\mathfrak{t}_D$ by setting $g_{y}=\mathrm{id}$, for all $y \in \mathrm{E}(T)$, and, for all remaining edges, $g_{\tilde{r}(y)}=g_{y}^{-1}$. Note that the latter relation holds for each pair of reverse edges. Therefore, for each edge $y$ in the quotient graph, we get $s(j(y))= g_{y}^{-o(y)} j(\tilde{s}(y))$ and $ t(j(y))=g_{y}^{1-o(y)}j(\tilde{t}(y))$. For each vertex $\overline{v}\in \mathrm{V}(\mathfrak{t}_D)$, we define $\mathrm{Stab}_{\mathrm{H}_D}({\overline{v}})$ as the stabilizer in $\mathrm{H}_D$ of the lift $j(\overline{v})$. An analogous convention applies to an edge $y$. Thus, for each pair $(\overline{v},y)$ where $\overline{v}=\tilde{t}(y)$, we have a morphism $f_y: \mathrm{Stab}_{\mathrm{H}_D}(y)\to \mathrm{Stab}_{\mathrm{H}_D}(\overline{v})$ defined by $g \mapsto g_{y}^{o(y)-1} g g_{y}^{1-o(y)}$.
This function is well defined since $$g_{y}^{o(y)-1} \mathrm{Stab}_{\mathrm{H}_D}\big(j(y)\big) g_{y}^{1-o(y)} \subseteq \mathrm{Stab}_{\mathrm{H}_D}\Big(j\big(\tilde{t}(y)\big)\Big).$$ Thus, the data presented above allow us to define the graph of groups $(\mathfrak{h}_D, \mathfrak{t}_D)=(\mathfrak{h}_D, T, \mathfrak{t}_D)$ associated to the action of $\mathrm{H}_D$ on $\mathfrak{t}$ (cf.~\cite[Chapter~I, \S 4.4]{S}). Now, we can define the fundamental group associated to this graph of groups. Indeed, let $F(\mathfrak{h}_D, \mathfrak{t}_D)$ be the group generated by all $\mathrm{Stab}_{\mathrm{H}_D}(\overline{v})$, where $\overline{v} \in \mathrm{V}(\mathfrak{t}_D)$, and elements $a_y$, for each $y\in \mathrm{E}(\mathfrak{t}_D)$, subject to the relations $$ a_{\tilde{r}(y)}=a_y^{-1}, \, \, \mathrm{ and } \, \, a_y f_y(b) a_y^{-1}= f_{\tilde{r}(y)}(b), \, \, \forall y \in \mathrm{E}(\mathfrak{t}_D), \, \, \forall b \in \mathrm{Stab}_{\mathrm{H}_D}(y). $$ The fundamental group $\pi_1(\mathfrak{h}_D)=\pi_1(\mathfrak{h}_D,\mathfrak{t}_D)$ is, by definition, the quotient of $F(\mathfrak{h}_D,\mathfrak{t}_D)$ by the normal subgroup generated by the elements $a_y$ for $y\in \mathrm{E}(T)$. In other words, if we denote by $h_y$ the image of $a_y$ in $\pi_1(\mathfrak{h}_D, \mathfrak{t}_D)$, then the group $\pi_1(\mathfrak{h}_D, \mathfrak{t}_D)$ is generated by all $\mathrm{Stab}_{\mathrm{H}_D}(\overline{v})$, where $\overline{v} \in \mathrm{V}(\mathfrak{t}_D)$, and the elements $h_y$, for $y \in \mathrm{E}(\mathfrak{t}_D)$, subject to the relations \[\begin{array}{cl} h_{\tilde{r}(y)}=h_y^{-1}, \quad h_y f_y(b) h_y^{-1}= f_{\tilde{r}(y)}(b), \quad \mathrm{and} \quad h_z=\mathrm{id}, \end{array}\] for all $(z,y) \in \mathrm{E}(T) \times \mathrm{E}(\mathfrak{t}_D)$ and for all $b \in \mathrm{Stab}_{\mathrm{H}_D}(y)$. It can be proven that the group $\pi_1$ is independent, up to isomorphism, of the choice of the graph of groups $\mathfrak{h}_D$, and in particular of the tree $T\subset \mathfrak{t}_D$. As mentioned in \S \ref{Section Intro}, Bass-Serre Theory implies that all subgroups of $\mathrm{GL}_2(K)$ can be described in terms of their actions on $\mathfrak{t}(K)$ (cf.~\cite[Chapter~I, \S 5.4]{S}). More specifically, they are isomorphic to their corresponding fundamental groups, as defined above (cf.~\cite[Chapter~I, \S 5, Theorem 13]{S}). In our case, $\mathrm{H}_D$ is isomorphic to the fundamental group $\pi_1(\mathfrak{h}_D)=\pi_1(\mathfrak{h}_D, \mathfrak{t}_D)$. Let $\mathfrak{r}(\sigma)$ be a cuspidal ray in $\mathfrak{t}_D$. We denote by $\mathcal{P}_{\sigma}$ the fundamental group $\pi_1(\mathfrak{h}_D|_{\mathfrak{r}(\sigma)})$ of the restriction of $\mathfrak{h}_D$ to $\mathfrak{r}(\sigma)$. Analogously, we define $H=\pi_1(\mathfrak{h}_D|_{Y})$, for $Y$ as in Theorem \ref{teo cusp}. For each $\sigma$ as above, let $\mathcal{B}_{\sigma}$ be the vertex stabilizer in $\mathrm{H}_D$ of the unique vertex in $Y \cap \mathfrak{r}(\sigma)$. We have canonical injections $\mathcal{B}_{\sigma} \to \mathcal{P}_{\sigma}$ and $\mathcal{B}_{\sigma} \to H$.
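For orientation, we record two elementary model cases of the preceding construction; they are meant only as an informal guide, the groups $G_1$, $G_2$, $G_{12}$ and $G_i$ below are generic and are not part of our notation, and both statements follow from \cite[Chapter~I, \S 4--5]{S}. If the underlying graph of a graph of groups is a single segment with vertex groups $G_1$ and $G_2$ and edge group $G_{12}$ injecting into both, then its fundamental group is the amalgamated free product $$\pi_1 \cong G_1 \ast_{G_{12}} G_2.$$ If the underlying graph is a ray with vertex groups $G_0 \subseteq G_1 \subseteq G_2 \subseteq \cdots$, each edge group being the smaller of its two vertex groups, then the fundamental group is the direct limit of the $G_i$, i.e., $$\pi_1 \cong \bigcup_{i \geq 0} G_i.$$ These are precisely the two mechanisms that appear below: each $\mathcal{P}_{\sigma}$ arises as an increasing union of vertex stabilizers, while $\mathrm{H}_D$ is recovered by amalgamating the groups $\mathcal{P}_{\sigma}$ with $H$ along the groups $\mathcal{B}_{\sigma}$.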
Now, as Serre points out in \cite[Chapter~II, \S 2.5, Theorem 10]{S}, if we have a graph of groups $\mathfrak{h}$, which is obtained by ``gluing'' two graphs of groups $\mathfrak{h}_1$ and $\mathfrak{h}_2$ by a tree of groups $\mathfrak{h}_{12}$, then there exist two injections $\iota_1:\mathfrak{h}_{12} \to \mathfrak{h}_{1}$ and $\iota_2: \mathfrak{h}_{12} \to \mathfrak{h}_{2}$, such that $\pi_1(\mathfrak{h})$ is isomorphic to the sum of $\pi_1(\mathfrak{h}_1)$ and $\pi_1(\mathfrak{h}_2)$, amalgamated along $\pi_1(\mathfrak{h}_{12})$ according to $\iota_1$ and $\iota_2$. In our context, we conclude that $\mathrm{H}_D$ is isomorphic to the sum of $\mathcal{P}_\sigma$, for all $\sigma$, and $H$, amalgamated along their common subgroups $\mathcal{B}_\sigma$ according to the above injections. Since $\mathfrak{r}(\sigma) \subseteq T$, each $\mathcal{P}_{\sigma}$ coincides with the direct limit of the vertex stabilizers defined by all $v \in \mathrm{V}(\mathfrak{r}(\sigma))$. In all that follows, we exploit this property of the groups $\mathcal{P}_{\sigma}$ in order to describe them in more detail. We start with some comments on the choice of the rays $\mathfrak{r}(\sigma)$ that simplify our work. Note that, in our context, each vertex stabilizer is finite, since it is the intersection of a compact set with a discrete subgroup of $\mathrm{GL}_2(k)$. So, replacing $\mathfrak{r}(\sigma)$ by another equivalent cuspidal ray has no effect on the statements of Theorem \ref{teo grup}. Hence, to prove the aforementioned theorem, we only need to describe the groups $\mathcal{P}_{\sigma}$ for a suitable set of cuspidal rays. We describe a convenient choice in what follows. \subsection{On vertex stabilizers} For any closed point $Q$ in $\mathcal{C}$, any $s \in k$ and any $n \in \mathbb{Z}$, let $\mathfrak{D}(s,n,Q)$ be the $\mathcal{O}_{Q}$-maximal order defined by \begin{equation} \mathfrak{D}(s,n,Q)= \sbmattrix {1}{0}{s}{\pi_{Q}^n} \mathbb{M}_2(\mathcal{O}_{Q}) \sbmattrix {1}{0}{s}{\pi_{Q}^n}^{-1}. \end{equation} We denote by $\mathcal{O}$ the ring of local integers at $P_{\infty}$, and we fix a uniformizing parameter $\pi \in \mathcal{O}$. We define $v_n(s) \in \text{V}(\mathfrak{t})$ as the vertex corresponding to the $\mathcal{O}$-maximal order $\mathfrak{D}(s,n,P_{\infty})$. For any $s \in \mathbb{P}^1(k)$, we define the $R$-ideal $\mathcal{Q}_s$ by $\mathcal{Q}_s= R \cap s^{-1}R \cap s^{-2}R$, when $s \in k^{*}$, and by $\mathcal{Q}_s=R$, when $s \in \lbrace 0, \infty \rbrace$. Let us write $R(n)=\lbrace a \in R: \nu(a)\geq -n \rbrace$. Recall that $I_D$ denotes the $R$-ideal $\mathfrak{L}^{-D}(U_0)$, where $U_0= \mathcal{C} \smallsetminus \lbrace P_{\infty} \rbrace$ (cf.~Equation \eqref{eq eichler}). So, the next lemma follows immediately from \cite[Lemma 3.2]{M} and \cite[Lemma 3.4]{M}. \begin{lemma}\label{lema mason} Assume first that $s =0$ and $n<0$. Then \begin{equation}\label{eq 1 stab H_D} \mathrm{Stab}_{\mathrm{H}_D}(v_n(0))= \left\lbrace \sbmattrix {\alpha}{c}{0}{\beta}: \alpha, \beta \in \mathbb{F}^{*} \text{ and } c \in R(-n)\right\rbrace. \end{equation} On the other hand, if $s=0$ and $n\geq 1$, then \begin{equation}\label{eq 2 stab H_D} \mathrm{Stab}_{\mathrm{H}_D}(v_n(0))= \left\lbrace \sbmattrix {\alpha}{0}{c}{\beta}: \alpha, \beta \in \mathbb{F}^{*} \text{ and } c \in R(n)\cap I_D \right\rbrace. \end{equation} Finally, if $s \in k \smallsetminus \mathbb{F}$ and $n \deg(P_{\infty})> \deg(\mathcal{Q}_s)$, then an element $g \in \mathrm{GL}_2(k)$ belongs to $\mathrm{Stab}_{\mathrm{H}_D}(v_n(s))$ if and only if it has the form \begin{equation}\label{eq 3 stab H_D} g=A(\alpha, \beta, c)= \sbmattrix {\beta-sc}{(\alpha-\beta)s+s^2c}{-c}{\alpha+sc} = \sbmattrix {0}{-1}{1}{-s} ^{-1} \sbmattrix {\alpha}{c}{0}{\beta} \sbmattrix {0}{-1}{1}{-s}, \end{equation} where \begin{itemize} \item[(a)] $\alpha, \beta \in \mathbb{F}^{*}$, and \item[(b)] $ c \in R(n) \cap I_D \cap Rs^{-1} \cap ((\beta-\alpha)s^{-1}+Rs^{-2})$. \end{itemize} In all cases, the stabilizer group $\mathrm{Stab}_{\mathrm{H}_D}(v_n(s))$ contains only triangularizable matrices. Moreover, in \eqref{eq 2 stab H_D} and \eqref{eq 3 stab H_D}, the matrix group $$\text{Sb}_s(n):=\lbrace A(\alpha, \alpha,c): \alpha \in \mathbb{F}^{*}, c\in I_D \cap R(n) \cap \mathcal{Q}_s \rbrace,$$ which is always isomorphic to $\mathbb{F}^{*} \times ( R(n) \cap \mathcal{Q}_s \cap I_D)$, is contained in $\mathrm{Stab}_{\mathrm{H}_D}(v_n(s))$. \end{lemma} Recall that $\mathbf{C}$ is a set indexing all the cusps in $\mathfrak{t}_D$. For each $\sigma \in \mathbf{C}$, let $\mathfrak{r}(\sigma)$ be a cuspidal ray in $\mathfrak{t}_D$ representing the corresponding cusp, and let $j(\mathfrak{r}(\sigma))$ be its lift to $\mathfrak{t}$. We can assume that its visual limit $\xi=\xi_{\sigma}$ is in $\mathbb{P}^1(k)$ by Corollary \ref{Cor comb fin}, and we assume moreover that one of these visual limits is $\infty$. It follows from the previous lemma that there exists a ray $\mathfrak{r}'(\sigma)$, equivalent to $\mathfrak{r}(\sigma)$, with vertex set $\lbrace \overline{v_i} : i \geq 1 \rbrace$, where any pair of neighboring vertices $(\overline{v_i}, \overline{v_{i+1}})$ has the property $\mathrm{Stab}_{\text{H}_D}(j(\overline{v_i})) \subset \mathrm{Stab}_{\text{H}_D}(j(\overline{v_{i+1}}))$. Then, up to changing the representing cuspidal ray for each class, we can assume that the previous inclusion holds for each $\mathfrak{r}(\sigma)$ in $\mathfrak{t}_D$. In particular, in this case, the direct limit $\mathcal{P}_{\sigma}$ coincides with the increasing union $\bigcup_{i=1}^{\infty}\mathrm{Stab}_{\text{H}_D}(\overline{v_i})$. Thus, in order to describe $\mathcal{P}_{\sigma}$, we only need to study the stabilizers of some unbounded subset $\lbrace v_{\alpha(i)} \rbrace_{i=1}^{\infty} \subseteq \text{V}(j(\mathfrak{r}(\sigma)))$. So, let $\mathfrak{r}({\infty}) \subseteq \mathfrak{t}_D$ be the cuspidal ray at infinity, i.e., the projection on $\mathfrak{t}_D$ of a ray whose vertex set is $\lbrace v_{-n}(0): n \geq N_0\rbrace$, for some suitable integer $N_0$. Then, Lemma \ref{lema mason} directly shows that $\mathcal{P}_{\infty}$ is isomorphic to $ (\mathbb{F}^{*} \times \mathbb{F}^{*} )\ltimes R$. When $\mathfrak{r}(\sigma)$ is different from $\mathfrak{r}(\infty)$ we have to work with other tools. We do this in what follows. Let us fix a cuspidal ray $\mathfrak{r}( \sigma)$ different from $\mathfrak{r}(\infty)$. Then, the vertex set of $j(\mathfrak{r}(\sigma))$ equals $\lbrace v_n(\xi) : n \geq N_0 \rbrace $, where $\xi$ and $N_0$ depend on $\sigma$. It follows from Lemma \ref{lema mason} that the maximal unipotent subgroup of each $\mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi))$ is isomorphic to the intersection of $R(n)$ with the $R$-ideal $\mathcal{Q}_{\xi} \cap I_D$.
Therefore, since $\bigcup_{n>N_0} R(n)=R$, it follows that the maximal unipotent subgroup of $\mathcal{P}_{\sigma}$ is isomorphic to the $R$-ideal $\mathcal{Q}_{\xi} \cap I_D$. Thus, by Equation \eqref{eq 3 stab H_D}, in order to describe $\mathcal{P}_{\sigma}$, we only need to characterize the eigenvalues of some elements in $\mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi))$. We start by recalling some relevant definitions from \S \ref{Section grids}. As before, $\Gamma \subset \text{PGL}_2(k)$ denotes the stabilizer of $\mathfrak{E}_D(U_0)$, where $\mathfrak{E}_D$ is the Eichler $\mathcal{C}$-order defined in \eqref{eq de eD}. So, for each $v \in \text{V}(\mathfrak{t})$, its image $\overline{v} \in \mathrm{V}(\Gamma \backslash \mathfrak{t})$ represents one and only one $\mathcal{C}$-Eichler order of level $D$, or equivalently one abstract $D$-grid. Next, we make explicit this correspondence for any vertex $v=v_n(\xi)$. It follows from condition (c) in \S \ref{Section Spinor} that, given the family of local orders $\lbrace \mathfrak{E}(P) : P \in |\mathcal{C}| \rbrace$ defined by \begin{itemize} \item[($\mathrm{E}_1$)] $\mathfrak{E}({P_i})= \mathfrak{D}(0,0,P_i) \cap \mathfrak{D}(0, n_i, P_i) $, for any $i \in \lbrace 1, \cdots, n \rbrace$, \item[($\mathrm{E}_2$)] $\mathfrak{E}({P_{\infty}})= \mathfrak{D}(\xi,n, P_{\infty})$, and \item[($\mathrm{E}_3$)] $\mathfrak{E}({Q})=\mathfrak{D}(0,0,Q)$, for any $Q\neq P_1, \cdots, P_n, P_{\infty}$, \end{itemize} there exists an Eichler $\mathcal{C}$-order $\mathfrak{E}=\mathfrak{E}[v]$ such that $\mathfrak{E}_P=\mathfrak{E}(P)$, for all $P \in |\mathcal{C}|$. Then, the abstract $D$-grid corresponding to $v$ is $\mathbb{S}(v)=\big[S(\mathfrak{E}[v])\big]$. Observe that the level of $\mathfrak{E}$ is equal to $D=\sum_{i=1}^n n_i P_i$. Now, by definition, $g \in \mathrm{Stab}_{\text{H}_D}(v_n(\xi))$ if and only if $g \in \mathrm{H}_D=\mathfrak{E}_D(U_0)^{*}$ and $g \in \mathrm{Stab}_{\text{GL}_2(k)}(v_n(\xi))$. Observe that $g \in \mathfrak{E}_D(U_0)^{*}$ is equivalent to $g \in \mathfrak{E}_D(U_0)$ and $\det(g) \in R^{*}=\mathbb{F}^{*}$. The following result describes the normalizer of local maximal orders. See \cite[\S 3]{A2} for details. \begin{lemma}\label{lemma normalizer of maximal order} For any closed point $Q$, the normalizer in $\mathrm{GL}_2(k_Q)$ of a local maximal order $\mathfrak{D}(s,n,Q)$ is $k_Q^{*} \mathfrak{D}(s,n,Q)^{*}$. \end{lemma} \begin{proof} Since any $\mathcal{O}_Q$-maximal order is the ring of endomorphisms of an $\mathcal{O}_Q$-lattice, we can write $\mathfrak{D}(s,n,Q)=\mathrm{End}_{\mathcal{O}_Q}(\Lambda)$, for some lattice $\Lambda=\Lambda(s,n,Q)$ of $k_Q \times k_Q$. Then $g \in \mathrm{GL}_2(k_Q)$ normalizes $\mathfrak{D}(s,n,Q)$ if and only if $\mathrm{End}_{\mathcal{O}_Q}(\Lambda)= \mathrm{End}_{\mathcal{O}_Q}(g(\Lambda))$. So, since two lattices have the same endomorphism rings precisely when they belong to the same homothety class, we have that $\mathrm{End}_{\mathcal{O}_Q}(\Lambda)= \mathrm{End}_{\mathcal{O}_Q}(g(\Lambda))$ precisely when $g(\Lambda)= \lambda \Lambda$, for some $\lambda \in k_Q^{*}$, i.e. $\lambda^{-1}g \in \mathrm{End}_{\mathcal{O}_Q}(\Lambda)^{*}=\mathfrak{D}(s,n,Q)^{*}$. 
\end{proof} We deduce from the preceding lemma that $g \in \mathfrak{E}_D(U_0)^{*}$ if and only if the following conditions hold: \begin{itemize} \item $\det(g) \in \mathbb{F}^{*}$, \item $g$ normalizes $\mathfrak{D}(0,m, P_i)$, for each $P_i \in \mathrm{Supp}(D)$ and $m \in \lbrace 0, \cdots, n_i \rbrace$, and \item $g$ normalizes $\mathfrak{D}(0,0,Q)$, for every $Q\neq P_1, \cdots, P_r, P_{\infty}$. \end{itemize} Thus, we conclude that $g$ belongs to $\mathrm{Stab}_{\text{H}_D}(v_n(\xi))$ precisely when it satisfies conditions $(\mathrm{A}_1)-(\mathrm{A}_4)$ below: \begin{itemize} \item[($\mathrm{A}_1$)] $\text{det}(g) \in \mathbb{F}^{*}$, \item[($\mathrm{A}_2$)] $g$ normalizes $\mathfrak{D}(0,m,P_i)$, for each $P_i \in \mathrm{Supp}(D)$ and $m \in \lbrace 0, \cdots, n_i \rbrace$, \item[($\mathrm{A}_3$)] $g$ normalizes $\mathfrak{D}(0,0,Q)$, for every $Q\neq P_1, \cdots, P_r, P_{\infty}$, and \item[($\mathrm{A}_4$)] $g$ normalizes $\mathfrak{D}(\xi ,n,P_{\infty})$. \end{itemize} Recall that we only have to describe the vertex stabilizers for an arbitrary unbounded set of vertices of $j(\mathfrak{r}(\sigma))$. Then, by changing $v_n(\xi)$ to $v_{n+1}(\xi)$ if needed, we can assume without loss of generality that the type of $v_n(\xi)$ coincides with the type of $v_0(0)$. Thus, there exists $h_{P_\infty} \in \mathrm{SL}_2(k_{P_{\infty}})$ such that $h_{P_\infty} \cdot v_n(\xi)=v_0(0)$, i.e., $h_{P_{\infty}} \mathfrak{D}(\xi,n,P_{\infty}) h_{P_{\infty}}^{-1}= \mathfrak{D}(0,0,P_{\infty})$. Now, it follows from Lemma \ref{lemma normalizer of maximal order} that the $\mathrm{GL}_2(k_Q)$-normalizers of local maximal orders $\mathfrak{D}_Q$ are open. So, by the Strong Approximation Theorem applied on the open set $\mathcal{C} \smallsetminus \mathrm{Supp}(D)$, there exists $h=h(v) \in \text{SL}_2(k)$ satisfying $h \mathfrak{D}(\xi,n,P_{\infty}) h^{-1}= \mathfrak{D}(0,0,P_{\infty})$ and normalizing each $\mathfrak{D}(0,0,Q)$, for $Q\neq P_1, \cdots, P_r, P_{\infty}$. For each $P_i \in \mathrm{Supp}(D)$, let $\mathfrak{s}_i$ be the finite line in $\mathfrak{t}(k_{P_i})$ whose vertex set is $ \lbrace h^{-1}\mathfrak{D}(0,m,P_i)h : m \in \lbrace 0, \cdots, n_i\rbrace \rbrace$. We define $S=S(v)$ as the concrete $D$-grid $S= \prod_{i=1}^r \mathfrak{s}_i$. Note that $S= h S(\mathfrak{E}[v]) h^{-1}$ is another representative of $\mathbb{S}=\mathbb{S}(v)$. Thus, we deduce from conditions $(\mathrm{A}_1)-(\mathrm{A}_4)$ that $g$ belongs to $\mathrm{Stab}_{\text{H}_D}(v_n(\xi))$ if and only if $\widetilde{g}=hgh^{-1} \in \text{GL}_2(k)$ satisfies the following: \begin{itemize} \item[($\mathrm{B}_1$)] $\text{det}(\widetilde{g}) \in \mathbb{F}^{*}$, \item[($\mathrm{B}_2$)] $\widetilde{g}$ normalizes each maximal $\mathcal{C}$-order in the vertex set of $S$, and \item[($\mathrm{B}_3$)] $\widetilde{g}$ normalizes each local maximal order $\mathfrak{D}(0,0,Q)$, where $Q\neq P_1, \cdots, P_r$. \end{itemize} Let $v_{n+2}(\xi)$ be the vertex at distance two from $v_n(\xi)$ towards $\xi$. Since $C_{P_{\infty}}(\mathbb{O}_D)$ is combinatorially finite, for $n \gg 1$, we have $ \overline{v_n(\xi)} \neq \overline{v_{n+2}(\xi)}$. Moreover, it follows from Proposition \ref{max semi-desc} and Corollary \ref{coro semi-desc of neighbor} that there exists a suitable integer $N_{\sigma}$ such that for all $n> N_{\sigma}$ the concrete $D$-grid $S=S(v_{n}(\xi))$ has a semi-decomposition datum with positive degree. First, assume that $S$ has a total-decomposition datum $(\beta, B, (n_i)_{i=1}^r)$.
Let $A=A(\beta)$ be the base change matrix from the canonical basis to $\beta$, and let $\mathfrak{E}= A^{-1} \mathfrak{E}[B,B+D] A$ be the split Eichler $\mathcal{C}$-order, defined as the intersection of all maximal orders in the vertex set of $S$. Then it follows from Proposition \ref{prop des} that there exists a global idempotent matrix $\epsilon_1 \in \mathfrak{E}(\mathcal{C})$. Since $\mathcal{T}:= \mathbb{F}^{*} \epsilon_1 + \mathbb{F}^{*}(\text{id}-\epsilon_1)$ is contained in $\mathfrak{E}(\mathcal{C})^{*}$, any matrix in $\mathcal{T}$ normalizes the Eichler $\mathcal{C}$-order $\mathfrak{E}$. In other words, any matrix $g \in \mathcal{T}$ satisfies the properties $(\mathrm{B}_1),(\mathrm{B}_2)$ and $(\mathrm{B}_3)$. Thus, for any pair of elements $a,b \in \mathbb{F}^{*}$ there is a matrix $g_{a,b} \in \mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi))$ whose eigenvalues are $a$ and $b$. Then, since the group generated by $\lbrace g_{a,b}: a,b \in \mathbb{F}^{*} \rbrace$ and the maximal unipotent subgroup of $\mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi))$ equals $$\lbrace A(\alpha, \beta,c): \alpha, \beta \in \mathbb{F}^{*}, c \in R(n) \cap \mathcal{Q}_{\xi} \cap I_D \rbrace,$$ we conclude from Lemma \ref{lema mason} that \begin{equation} \mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi)) \cong (\mathbb{F}^{*} \times \mathbb{F}^{*}) \ltimes (R(n) \cap \mathcal{Q}_{\xi} \cap I_D). \end{equation} Now, assume that $S$ has a non-total semi-decomposition datum $(\beta, B, (s_i)_{i=1}^r)$, and let $A=A(\beta)$ be as before. Then $A^{-1} \widetilde{g}A$ normalizes each maximal $\mathcal{C}$-order in the vertex set of $A^{-1} S A$. In particular, the matrix $A^{-1} \widetilde{g}A$ fixes $\mathfrak{D}_B$, with $\deg(B)>0$, whence $A^{-1} \widetilde{g}A= \sbmattrix {x}{y}{0}{z}$ with $x^{-1}z \in \mathbb{F}^{*}$ (cf.~\cite[\S 4, Proposition 4.1]{A1}). So, by taking $S^{\circ}=S$ in the proof of Proposition \ref{lema 3}, we deduce that $x=z$. Thus, the eigenvalues of $\widetilde{g}$ are equal. The same holds for $g$. Therefore, we conclude from Lemma \ref{lema mason} that $\mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi))= \text{Sb}_{\xi}(n)$ in this case. In particular, we have that \begin{equation} \mathrm{Stab}_{\mathrm{H}_D}(v_n(\xi)) \cong \mathbb{F}^{*} \times (R(n) \cap \mathcal{Q}_{\xi} \cap I_D). \end{equation} \subsection{End of the proof of Theorems \ref{teo grup} and \ref{teo grup ab}} Note that it follows from Corollary \ref{coro semi-desc of neighbor} that we can assume that the semi-decomposition vectors of $\mathbb{S}(v_n(\xi))$ and $\mathbb{S}(v_{n+2}(\xi))$ are equal. Then, by the arguments presented above, we deduce that: \begin{itemize} \item $\mathrm{Stab}_{\mathrm{H}_D}(v_{n+2}(\xi)) \cong (\mathbb{F}^{*} \times \mathbb{F}^{*}) \ltimes (R(n+2) \cap \mathcal{Q}_{\xi})$, if $\mathbb{S}(v_{n+2}(\xi))$ is split, and \item $\mathrm{Stab}_{\mathrm{H}_D}(v_{n+2}(\xi)) \cong \mathbb{F}^{*} \times (R(n+2) \cap \mathcal{Q}_{\xi})$, if not. \end{itemize} An inductive argument shows that, for each $t \in \mathbb{Z}_{\geq 0}$, $\mathrm{Stab}_{\mathrm{H}_D}(v_{n+2t}(\xi))$ is isomorphic to \begin{itemize} \item $(\mathbb{F}^{*} \times \mathbb{F}^{*}) \ltimes (R(n+2t) \cap \mathcal{Q}_{\xi} \cap I_D)$, if $\mathbb{S}(v_{n+2t}(\xi))$ is split, and \item $\mathbb{F}^{*} \times (R(n+2t) \cap \mathcal{Q}_{\xi} \cap I_D)$, if not. \end{itemize} Now, we say that $\mathfrak{r}(\sigma)$ is split when it only contains vertices corresponding to split abstract grids.
Since we can assume that $\mathbb{S}(v_{n}(\xi))$ and $\mathbb{S}(v_{n+t}(\xi))$ correspond to vertices at distance $t>0$ in the same cuspidal ray of $C_{P_{\infty}}(\mathbb{O}_D)$, we get from Corollary \ref{coro semi-desc of neighbor} that they have the same semi-decomposition vectors. In particular, if $\mathfrak{r}(\sigma)$ is not split, then every vertex in $\mathfrak{r}(\sigma)$ corresponds to a nonsplit abstract grid. Thus, we conclude that \begin{equation} \mathcal{P}_{\sigma} \cong \left\{ \begin{array}{ll} (\mathbb{F}^{*} \times \mathbb{F}^{*}) \ltimes (\mathcal{Q}_{\xi} \cap I_D ) & \text{if } \mathfrak{r}(\sigma) \text{ is split}, \\ \mathbb{F}^{*} \times ( \mathcal{Q}_{\xi} \cap I_D) & \text{if not}.\\ \end{array} \right. \end{equation} Since each $n_i$ is odd by hypothesis, it follows from Proposition \ref{lema4} that there are $$c(D)/\alpha(D)=[2\mathrm{Pic}(\mathcal{C})+\left\langle \overline{P_{1}}, \cdots ,\overline{P_{r}}, \overline{P_{\infty}} \right\rangle : \langle \overline{P_{\infty}} \rangle]$$ split cusps in $\Gamma \backslash \mathfrak{t}$. Moreover, it follows from Lemma \ref{lemma cubrimiento de cuspides} that there are exactly $[\Gamma: \text{PH}_D]$ cusps of $\mathfrak{t}_D$ with the same image in $\Gamma \backslash \mathfrak{t}$. Thus, we conclude from Equation \eqref{eq gamma/phd} that there are $2^r [2\mathrm{Pic}(\mathcal{C})+ \langle \overline{P_{\infty}} \rangle: \langle \overline{P_{\infty}} \rangle]$ elements $\sigma \in \mathbf{C}$ such that $\mathfrak{r}(\sigma)$ is split. Then, Theorem \ref{teo grup} follows. Now, it follows directly from \cite[Chapter~II, \S 2.8, Exercise 2]{S} that $\mathrm{H}_D^{\mathrm{ab}}$ is finitely generated if and only if each group $\mathcal{P}_{\sigma}^{\mathrm{ab}}$ is finite, and, in any other case, $\mathrm{H}_D^{\mathrm{ab}}$ is the product of a finitely generated group with an $\mathbb{F}_p$-vector space of infinite dimension, where $p=\mathrm{Char}(\mathbb{F})$. Moreover, it follows from a direct computation that $\mathcal{P}_{\sigma}^{\mathrm{ab}}=\mathbb{F}^*\times \mathbb{F}^*$, if $\mathfrak{r}(\sigma)$ is split, while $\mathcal{P}_{\sigma}^{\mathrm{ab}}=\mathcal{P}_{\sigma}$, otherwise. Thus, the group $\mathrm{H}_D^{\mathrm{ab}}$ is finitely generated if and only if $\mathfrak{t}_D$ has only split cusps or, equivalently, if $\mathrm{Card}(\mathbf{D})=c(\mathrm{H}_D)$ (cf.~Theorem \ref{teo grup}). Then, it follows from Theorem \ref{teo cusp} that $\mathrm{H}_D^{\mathrm{ab}}$ is finitely generated if and only if each $n_i$ equals one. \subsection{An example where \texorpdfstring{$\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$}{CP}} Let $D$ be a square-free divisor supported at a closed point $P$ of degree one. Let $P_{\infty}$ be another closed point of degree one. Without loss of generality, we assume that $P_{\infty}$ is the point at infinity of $\mathbb{P}^1_{\mathbb{F}}$ and $D= \mathrm{div}(t)|_{U_0}$, whence $R=\mathbb{F}[t]$. Then $\mathcal{O}=\mathcal{O}_{P_{\infty}}=\mathbb{F}[[t^{-1}]]$ and $\pi=1/t$ is a uniformizing parameter. Since $r=1$, $\deg(P)=1$ and $\deg(P_{\infty})=1$, Theorem \ref{teo cusp} implies that $\mathfrak{t}_D=\mathrm{H}_D\backslash \mathfrak{t}$ is the union of a doubly infinite ray with a certain finite graph. In order to perform some explicit computations, in all that follows, we interpret the Bruhat-Tits tree $\mathfrak{t}$ in terms of closed balls in $K$, as we describe in \S \ref{subsection BTT}.
Using this interpretation, it is not hard to see that the set of vertices in the doubly infinite ray $\mathfrak{p}_{a,\infty}$, joining $a \in k$ with $\infty$, corresponds to the set of closed balls $B_a^{|r|}$, with $r \in \mathbb{Z}$, whose center is $a$ and whose radius is $|\pi^r|$. Then, it is immediate that the union of all these doubly infinite rays covers $\mathfrak{t}$. \begin{lemma}\label{lemma quot t} One has $\mathrm{H}_D \cdot \mathrm{V}(\mathfrak{p}_{0,\infty})=\mathrm{V}(\mathfrak{t})$ and $\mathrm{H}_D \cdot \mathrm{E}(\mathfrak{p}_{0,\infty})=\mathrm{E}(\mathfrak{t})$. \end{lemma} \begin{proof} First, we prove $\mathrm{H}_D \cdot \mathrm{V}(\mathfrak{p}_{0,\infty})=\mathrm{V}(\mathfrak{t})$. Assume, by contradiction, that there exists a vertex in $\mathfrak{t}$ that fails to belong to the same $\mathrm{H}_D$-orbit as any vertex in $\mathfrak{p}_{0,\infty}$. Let $B=B_s^{|r|}$ be such a vertex at minimal distance from $\mathfrak{p}_{0,\infty}$. Since $B'=B_s^{|r-1|}$ is closer than $B$ to $\mathfrak{p}_{0,\infty}$, we have that $h \cdot B' \in \mathfrak{p}_{0,\infty}$, for some $h \in \mathrm{H}_D$. Thus, up to replacing $B$ by $h \cdot B$, we assume that $B$ is at distance one from $\mathfrak{p}_{0,\infty}$. In other words, we assume that $B=B_s^{|\nu(s)+1|}$. For each $f \in R$, let us write: $$\tau_f= \sbmattrix {1}{-f}{0}{1} , \quad I= \sbmattrix {0}{1}{1}{0}, \, \, \text{and} \quad \sigma_f= \sbmattrix {1}{0}{-tf}{1}= I \cdot \tau_{tf} \cdot I .$$ Note that $\tau_f, \sigma_f \in \mathrm{H}_D$, for any $f \in R$. In all that follows we extensively use the fact that, for any $p \in \mathbb{Z}$ and any pair $x,s \in k$, we have the following relations: $$ \tau_x\cdot B_s^{|p|} = B_{s-x}^{|p|}, \quad I \cdot B_0^{|p|}= B_0^{|-p|}, \quad \mathrm{and} \quad I \cdot B_s^{|p|}= B_{1/s}^{|p-2\nu(s)|}, \, \, \mathrm{if} \, \, 0 \notin B_s^{|p|}.$$ Assume that $\nu(s) \leq 0$. Then, we can write $s=f_0+\epsilon_0$, where $f_0 \in R=\mathbb{F}[t]$ and $\nu(\epsilon_0) \geq 1 $. Note that $\tau_{f_0} \cdot B_s^{|p|}=B_{\epsilon_0}^{|p|}$ belongs to $\mathrm{V}(\mathfrak{p}_{0,\infty})$ if and only if $\nu(\epsilon_0) \geq p$. In particular, we obtain $\tau_{f_0} \cdot B \in \mathrm{V}(\mathfrak{p}_{0,\infty})$. Thus, this case is impossible. Now, assume that $\nu(s) \geq 1$. Then, $1/s= tf_0 + \epsilon_0$, where $f_0 \in R$ and $\nu(\epsilon_0) \geq 1 $. Assuming $p > \nu(s)$, we get $$\sigma_{f_0} \cdot B_s^{|p|}= (I \cdot \tau_{t f_0} \cdot I )\cdot B_s^{|p|}= (I \cdot \tau_{t f_0}) \cdot B_{1/s}^{|p-2\nu(s)|} = I \cdot B_{\epsilon_0}^{|p-2\nu(s)|}. $$ So, since $I \cdot \mathfrak{p}_{0,\infty}=\mathfrak{p}_{0,\infty}$, we conclude that $\sigma_{f_0} \cdot B_s^{|p|} \in \mathrm{V}(\mathfrak{p}_{0,\infty})$ precisely when $B_{\epsilon_0}^{|p-2\nu(s)|} \in \mathrm{V}(\mathfrak{p}_{0,\infty})$, or equivalently, when $\nu(\epsilon_0)+2\nu(s) \geq p $. Thus, we get that $\sigma_{f_0} \cdot B \in \mathrm{V}(\mathfrak{p}_{0, \infty})$, which is also absurd. Therefore, we conclude $\mathrm{H}_D \cdot \mathrm{V}(\mathfrak{p}_{0,\infty})=\mathrm{V}(\mathfrak{t})$. Moreover, since $\mathrm{H}_D$ acts simplicially on $\mathfrak{t}$, we also conclude $\mathrm{H}_D \cdot \mathrm{E}(\mathfrak{p}_{0,\infty})=\mathrm{E}(\mathfrak{t})$. \end{proof} For each $m \in \mathbb{Z}_{\geq 0}$, let us denote by $\mathbb{F}[t]_m$ the set of polynomials of degree less than or equal to $m$.
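To fix ideas, note that $\mathbb{F}[t]_m$ is an $\mathbb{F}$-vector space of dimension $m+1$; for instance, if $\mathbb{F}=\mathbb{F}_2$ and $m=2$, then $\mathbb{F}[t]_2=\lbrace 0,\, 1,\, t,\, t+1,\, t^2,\, t^2+1,\, t^2+t,\, t^2+t+1 \rbrace$ has exactly $2^{3}=8$ elements. This elementary count is all that is needed below in order to compare the sizes of the vertex stabilizers described in the next lemma.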
\begin{lemma}\label{lemma stab t} The stabilizer in $\mathrm{H}_D$ of the vertex $v_n \in \mathrm{V}(\mathfrak{p}_{0, \infty})$ corresponding to the ball $B_{0}^{|n|}$ is $\sbmattrix {\mathbb{F}^*}{\mathbb{F}[t]_{-n}}{0}{\mathbb{F}^*}$, if $n <0$, $\sbmattrix {\mathbb{F}^*}{0}{0}{\mathbb{F}^*}$, if $n=0$, and $\sbmattrix {\mathbb{F}^*}{0}{t\mathbb{F}[t]_{n-1}}{\mathbb{F}^*}$, if $n>0$. \end{lemma} \begin{proof} By definition, we have $\mathrm{Stab}_{\mathrm{H}_D}(v_n)= \mathrm{Stab}_{\mathrm{GL}_2(K)}(v_n) \cap \mathrm{H}_D$, where the stabilizer in $\mathrm{GL}_2(K)$ of $v_n$ equals $K^{*}\sbmattrix {\oink}{\pi^{n} \oink}{\pi^{-n} \oink}{\oink} ^{*}$. Since $\det(\mathrm{H}_D)=\mathbb{F}^{*}$, we have $\mathrm{Stab}_{\mathrm{H}_D}(v_n)= \sbmattrix {\oink}{\pi^{n} \oink}{\pi^{-n} \oink}{\oink} ^{*} \cap \mathrm{H}_D$. Then, the result follows from a straightforward computation. \end{proof} \begin{prop}\label{teo quot t} If $\mathcal{C}=\mathbb{P}^1_\mathbb{F}$, $D=P$ is a closed point and $\deg(P_{\infty})=\deg(P)=1$, then the doubly infinite ray $\mathfrak{p}_{0,\infty}$ is isomorphic to the quotient graph $\mathfrak{t}_D=\mathrm{H}_D\backslash \mathfrak{t}$. \end{prop} \begin{proof} First, we claim that $\mathrm{V}(\mathfrak{p}_{0,\infty})$ contains one and only one representative of each $\mathrm{H}_D$-orbit in $\mathrm{V}(\mathfrak{t})$. It follows from Lemma \ref{lemma quot t} that, in order to prove this claim, we just have to prove that any two different vertices in $\mathfrak{p}_{0,\infty}$ belong to different $\mathrm{H}_D$-orbits. Let $v_n=B_0^{|n|}$ and $v_m=B_0^{|m|}$ be two vertices in $\mathrm{V}(\mathfrak{p}_{0,\infty})$ which belong to the same $\mathrm{H}_D$-orbit. Then, the stabilizers $S_n, S_m \subseteq \mathrm{H}_D$ of $v_n$ and $v_m$ are conjugate, say $S_n= h S_m h^{-1}$, for some $h \in \mathrm{H}_D$. In particular, they have the same number of elements. Thus, it follows from Lemma \ref{lemma stab t} that $n=m$ or $n=-m+1$. If the second alternative holds, then $h\tau \in \mathrm{Stab}_{\mathrm{GL}_2(K)}(v_n)$, where $\tau=\sbmattrix {\pi^{-2m+1}}{0}{0}{1}$. Since Lemma \ref{lemma normalizer of maximal order} implies that $\mathrm{Stab}_{\mathrm{GL}_2(K)}(v_n)=K^{*}\mathfrak{D}_{n}^*$, where $\mathfrak{D}_n$ is the maximal order corresponding to $v_n$, we have that $\det(h\tau)$ belongs to $K^{*2}\mathcal{O}^{*}$, which is impossible since $\det(\tau) \in \pi K^{*2}\mathcal{O}^{*}$ and $\det(h) \in \mathbb{F}^{*}$. Then $n=m$, which proves the claim. Now, we claim that $\mathrm{E}(\mathfrak{p}_{0,\infty})$ contains exactly one representative of each $\mathrm{H}_D$-orbit in $\mathrm{E}(\mathfrak{t})$. Note that, since $\mathfrak{p}_{0,\infty}$ does not contain two vertices in the same $\mathrm{H}_D$-orbit, it also fails to contain two edges in the same $\mathrm{H}_D$-orbit. Thus, the latter claim follows from Lemma \ref{lemma quot t}. \end{proof} \begin{prop}\label{teo group t} The group $\mathrm{H}_D$ of invertible matrices in $\sbmattrix {\mathbb{F}[t]}{\mathbb{F}[t]}{t\mathbb{F}[t]}{\mathbb{F}[t]}$ is isomorphic to the sum of $\sbmattrix {\mathbb{F}^{*}}{\mathbb{F}[t]}{0}{\mathbb{F}^{*}}$ and $\sbmattrix {\mathbb{F}^{*}}{0}{t\mathbb{F}[t]}{\mathbb{F}^{*}}$ amalgamated along the diagonal group $\mathbb{F}^{*} \times \mathbb{F}^{*}$. \end{prop} \begin{proof} Let $\mathfrak{r}_1$ (resp.\ $\mathfrak{r}_2$) be the ray whose vertex set is, in the notations of Lemma \ref{lemma stab t}, the set $\lbrace v_n: n\geq 0\rbrace$ (resp. $\lbrace v_n: n\leq 0\rbrace$).
Let $\mathcal{P}_1$ (resp.\ $\mathcal{P}_2$) be the fundamental group of the graph of groups defined on $\mathfrak{r}_1$ (resp. $\mathfrak{r}_2$) by restriction. Denote by $\mathcal{B}_0$ the fundamental group of the graph of groups defined on the intersection $\mathfrak{r}_1 \cap \mathfrak{r}_2$. Since $\mathfrak{p}_{0, \infty}=\mathfrak{r}_1 \cup \mathfrak{r}_2$, it is not hard to see that the fundamental group defined on $\mathfrak{p}_{0, \infty}$, i.e.\ $\mathrm{H}_D$, is the sum of $\mathcal{P}_1$ with $\mathcal{P}_2$ amalgamated along $\mathcal{B}_0$. Moreover, since $\mathfrak{r}_1 \cap \mathfrak{r}_2= \lbrace v_0 \rbrace$, we have that $\mathcal{B}_0$ is the stabilizer of $v_0$, which coincides with a diagonal group isomorphic to $\mathbb{F}^{*} \times \mathbb{F}^{*}$. Since $\mathfrak{r}_1$ is simply connected and Lemma \ref{lemma stab t} implies that $\mathrm{Stab}_{\mathrm{H}_D}(v_n) \subset \mathrm{Stab}_{\mathrm{H}_D}(v_{n+1})$, for any $n \geq 0$, we have that $\mathcal{P}_1$ is the union of the preceding vertex stabilizers. Thus, Lemma \ref{lemma stab t} implies that $\mathcal{P}_1$ equals $\sbmattrix {\mathbb{F}^{*}}{0}{t\mathbb{F}[t]}{\mathbb{F}^{*}}$. Analogously, $\mathcal{P}_2$ equals $\sbmattrix {\mathbb{F}^{*}}{\mathbb{F}[t]}{0}{\mathbb{F}^{*}}$, whence the result follows. \end{proof} \begin{prop}\label{teo group ab t} With the same notation and hypotheses as in Proposition \ref{teo group t}, the group $\mathrm{H}_D^{\mathrm{ab}}$ is isomorphic to $\mathbb{F}^* \times \mathbb{F}^*$. \end{prop} \begin{proof} Let us write $\mathrm{H}_D$ as the sum of $\mathcal{P}_1= \sbmattrix {\mathbb{F}^{*}}{0}{t\mathbb{F}[t]}{\mathbb{F}^{*}}$ and $\mathcal{P}_2 =\sbmattrix {\mathbb{F}^{*}}{\mathbb{F}[t]}{0}{\mathbb{F}^{*}}$ amalgamated along the diagonal group $\mathcal{B}_0 \cong \mathbb{F}^* \times \mathbb{F}^*$. It follows from \cite[\S 3]{Hu} that the preceding amalgamated sum naturally defines the following Mayer-Vietoris exact sequence $$ \cdots \xrightarrow{\delta} H_i(\mathcal{B}_0, \mathbb{Z}) \xrightarrow{\iota_*} H_i(\mathcal{P}_1, \mathbb{Z}) \oplus H_i(\mathcal{P}_2, \mathbb{Z}) \xrightarrow{d_*} H_{i}(\mathrm{H}_D, \mathbb{Z}) \xrightarrow{\delta} \cdots, $$ where any group acts trivially on $\mathbb{Z}$, $\delta$ is the connecting morphism, and the morphisms $\iota_*$ and $d_*$ are defined by $\iota_*(x)=(x,x)$ and $d_*(x,y)=x-y$, respectively. In particular, at level $i=1$, we obtain that $ H_1(\mathcal{B}_0, \mathbb{Z}) \xrightarrow{\iota_*} H_1(\mathcal{P}_1, \mathbb{Z}) \oplus H_1(\mathcal{P}_2, \mathbb{Z}) \xrightarrow{d_*} H_{1}(\mathrm{H}_D, \mathbb{Z}) \rightarrow 0 $ is exact. Thus, the morphism $d_*:\mathcal{P}_1^{\mathrm{ab}} \oplus\mathcal{P}_2^{\mathrm{ab}} \to \mathrm{H}_D^{\mathrm{ab}}$ defined by $d_*(x,y)=x-y$ is surjective. Since the canonical projection restricts to an isomorphism between the group $\mathcal{B}_0$ and either $\mathcal{P}_1^{\mathrm{ab}}$ or $ \mathcal{P}_2^{\mathrm{ab}}$, the result follows. \end{proof} \section*{Acknowledgements} I wish to express my hearty thanks to my PhD advisors Luis Arenas-Carmona and Giancarlo Lucchini Arteche for several helpful remarks. I also want to express my gratitude to ANID-Conicyt for the Doctoral fellowship No.~$21180544$. \begin{thebibliography}{99} \bibitem[AbB08]{Brown} {\sc P.~Abramenko and K.~S.~Brown}, {\em Buildings: Theory and applications}, Graduate Texts in Mathematics \textbf{248}, Springer, New York, 2008. \bibitem[A12]{abelianos} {\sc L.
Arenas-Carmona}, {\em Representation fields for commutative orders}, Ann. Inst. Fourier (Grenoble) \textbf{62} (2012), 807-819. \bibitem[A13]{A13} {\sc L. Arenas-Carmona}, {\em Eichler orders, trees and representation fields}, Int. J. Number Theory \textbf{9} (2013), no. 7, 1725-1741. \bibitem[A14]{A1} {\sc L. Arenas-Carmona}, {\em Computing quaternion quotient graphs via representation of orders}, J. of Algebra \textbf{402} (2014), 258-279. \bibitem[A16]{A2} {\sc L. Arenas-Carmona}, {\em Spinor class fields for generalized Eichler orders}, J. Th\'eor. Nombres Bordeaux \textbf{28} (2016), 679-698. \bibitem[AAC18]{AAC} {\sc M. Arenas}, {\sc L. Arenas-Carmona}, and {\sc J. Contreras}, {\em On optimal embeddings and trees}, J. Number Theor. \textbf{91} (2018), 91-117. \bibitem[ABp]{A5} {\sc L. Arenas-Carmona} and {\sc C. Bravo}, {\em On genera containing non-split Eichler orders over function fields}, arXiv:1905.08244 [math.NT]. \bibitem[ABLL]{ABLL} {\sc L. Arenas-Carmona}, {\sc C. Bravo}, {\sc B. Loisel} and {\sc G. Lucchini-Arteche} {\em Quotients of the Bruhat-Tits tree by arithmetic subgroups of special unitary groups}, J. of Pure and Applied Algebra \textbf{266} (2021). \bibitem[BH91]{Bri} {\sc M. Bridson, A. Haefliger}, {\em Metric Spaces of Non-Positive Curvature}, Springer, Berlin (1991). \bibitem[BKW13]{B} {\sc K. Bux, R. K{\"o}hl, S. Witzel}, {\em Higher Finiteness Properties of Reductive Arithmetic Groups in Positive Characteristic: the Rank Theorem}, Annals of Mathematics \textbf{177} (2013), 311–366. \bibitem[Hu15]{Hu} {\sc K. Hutchinson}, {\em On the low-dimensional homology of $\mathrm{SL}_2(k[t,t^{-1}])$}, J. of Algebra \textbf{425} (2015) 324-366. \bibitem[Ih66]{Ihara} {\sc Y. Ihara}, {\em On discrete subgroups of the two by two projective linear group over $p$-adic fields}, J. Math. Soc. Japan \textbf{18} (1966), 219-235. \bibitem[Mar09]{Margaux} {\sc B. Margaux}, {\em The structure of the group $G(k[t])$: Variations on a theme of Soul\'e}, Algebra and Number Theory \textbf{3} (2009), 393-409. \bibitem[Ma01]{M} {\sc A. W. Mason}, {\em Serre's generalization of Nagao's theorem: an elementary approach}, Trans. Amer. Math. Soc. \textbf{353} (2001), 749-767. \bibitem[MS03]{M3} {\sc A. W. Mason and A. Schweizer}, {\em The minimum index of a non-congruence subgroup of $\mathrm{SL}_2$ over an arithmetic domain}. Israel J. Math. \textbf{133} (2003), 29-44. \bibitem[MS13]{M2} {\sc A. W. Mason and A. Schweizer}, {\em The stabilizers in a Drinfeld modular group of the vertices of its Bruhat-Tits tree: an elementary approach}. Internat. J. Algebra Comput. \textbf{23} (2013), 1653-1683. \bibitem[Na59]{N} {\sc H. Nagao}, {\em On $\mathrm{GL}(2,K[x])$}, J. Inst. Polytech. Osaka City Univ. Ser. A \textbf{10} (1959), 117-121. \bibitem[Ne99]{Neukirch} {\sc J. Neukirch}, {\em Algebraic number theory}, Springer Verlag, Berlin, 1999. \bibitem[Pau02]{Paulin} {\sc F. Paulin}, {\em Groupe modulaire, fractions continues et approximation diophantinne en carat\'eristique $p$}, Geom. Dedi. \textbf{95} (2002), 65-85. \bibitem[Se70]{Se2} {\sc J.-P. Serre}, {\em Le Probl\`eme des Groupes de Congruence Pour $\text{SL}_2$}, Annals of Mathematics \textbf{92} (1970), 489-527. \bibitem[Se80]{S} {\sc J.-P. Serre}, {\em Trees}, Springer Verlag, Berlin, 1980. \bibitem[So77]{So} {\sc C. Soul\'e}, {\em Chevalley groups over polynomial rings}, Homological group theory (Durham, 1977), edited by C.T.C. Wall, London Math. Soc. Lectures Note Ser., Cambridge Univ. Press, \textbf{36} (1979), 359-361. 
\bibitem[St93]{Stichlenoth} {\sc H.~Stichtenoth}, {\em Algebraic Function Fields and Codes}, Springer-Verlag, Berlin, 1993. \end{thebibliography} \scriptsize $ $ {\sc Claudio Bravo}\\ Universidad de Chile, Facultad de Ciencias, \\Casilla 653, Santiago, Chile\\ \email{[email protected]} \end{document} \section{Preliminaries on divisors and vector bundles}\label{Section VB} Let $\mathcal{C}$ be a smooth, projective, geometrically integral curve over a finite field $\mathbb{F}$. By definition, a divisor on $\mathcal{C}$ is a formal finite linear combination $n_1P_1+ \cdots+ n_rP_r$ of distinct closed points $P_1, \cdots, P_r \in \mathcal{C}$ with integer coefficients $n_1, \cdots, n_r \in \mathbb{Z}$, for some $r \in \mathbb{N}$. Obviously, the divisors on $\mathcal{C}$ form an abelian group under coefficient-wise addition. We denote it by $\mathrm{Div}(\mathcal{C})$. A divisor $D = n_1P_1 + \cdots + n_r P_r$ as above is called effective, and written $D \geq 0$, if each $n_i \geq 0$. If $D_1$ and $D_2$ are two divisors such that $D_2-D_1$ is effective, then we write $D_2 \geq D_1$ or $D_1 \leq D_2$. Each closed point $P$ in $\mathcal{C}$ defines a discrete valuation $\nu_P$ on the global function field $k=\mathbb{F}(\mathcal{C})$. Let $k_{P}$ be the completion of $k$ at $P$, i.e. the completion of $k$ with respect to $\nu_P$. Let $\mathcal{O}_P$ be the ring of integers of $k_P$, and fix a uniformizing parameter $\pi_P \in \mathcal{O}_P$. Then, we define the degree of the point $P$ as the degree of the finite extension $\mathbb{F}(P)= \mathcal{O}_P/\pi_P \mathcal{O}_P$ of $\mathbb{F}$. More generally, the degree of a divisor $D = n_1P_1 + \cdots + n_r P_r$ as above is the integer $\deg(D) := n_1 \deg(P_1) + \cdots n_r \deg(P_r) \in \mathbb{Z}$. Thus defined, the degree is a group homomorphism $\deg : \mathrm{Div}(\mathcal{C}) \to d\mathbb{Z},$ where $d$ is the gcd of $\deg(Q)$ with $Q \in \mathcal{C}$. Its kernel is denoted by $\mathrm{Div}^0(\mathcal{C})$. Moreover, every element $f \in k^{*}$ defines a divisor $\mathrm{div}(f) \in \mathrm{Div}^0(\mathcal{C})$. We define the Picard group $\mathrm{Pic}(\mathcal{C})$ as the quotient of $\mathrm{Div}(\mathcal{C})$ by the subgroup $\mathrm{div}(k^{*})=\lbrace \mathrm{div}(f): f \in k^{*}\rbrace$. Hence, one has the exact sequence: \begin{equation} 0 \to J(\mathbb{F}) \to \mathrm{Pic}(\mathcal{C}) \to d\mathbb{Z} \to 0, \end{equation} where $J(\mathbb{F})=\mathrm{Div}^0(\mathcal{C})/\mathrm{div}(k^{*})$, also denoted $\mathrm{Pic}^{0}(\mathcal{C})$, corresponds to the set of $\mathbb{F}$-points of the Jacobian variety of $\mathcal{C}$ (cf.~\cite[Chapter II, \S 2.2]{S}). Since $\mathbb{F}$ is finite, the group $J(\mathbb{F})$ is also finite (cf.~\cite[Chapter II, \S 2.2]{S}). Let $\mathcal{A}$ be the affine line considered as an algebraic variety. A vector bundle on $\mathcal{C}$ is a variety which ``locally looks like a direct product of $\mathcal{C}$ with a vector space''. 
Formally, a vector bundle of rank $s$ over $\mathcal{C}$ is an algebraic variety $\mathcal{B}$ over $\mathbb{F}$ equipped with a morphism $\pi : \mathcal{B} \to \mathcal{C}$ such that there exists a covering $\mathcal{C}=\bigcup_{i \in I} U_i$ by Zariski open sets satisfying \begin{itemize} \item[(a)] For each $i \in I $ there exists an isomorphism $\phi_i: \pi^{-1}(U_i) \to U_i \times \mathcal{A}^s$ satisfying that the composition $\pi \circ \phi_i^{-1}: U_i \times \mathcal{A}^{s} \to U_i $ is the projection onto the first coordinate, and \item[(b)] For each $i,j \in I$ there exists an $(s \times s)$-matrix $A_{ij}$, whose entries are regular functions on $U_i \cap U_j$, satisfying that the composition $$\phi_{ij}: \phi_j \circ \phi_i^{-1} |_{(U_i \cap U_j) \times \mathcal{A}^{s} }: (U_i \cap U_j) \times \mathcal{A}^{s} \to (U_i \cap U_j) \times \mathcal{A}^{s} ,$$ takes the form $ \phi_{ij}(x,v)=(x, A_{ij}(x) v)$. \end{itemize} We call the tuple $(U_i, \phi_i, \phi_{ij} )$ a trivialization of the respective vector bundle. If $s = 1$, we say that $(\mathcal{B}, \pi)$ is a line bundle. Let $(\mathcal{B}, \pi)$ be a vector bundle of rank $s$ with trivialization $(U_i, \phi_i, \phi_{ij})$. Define $(\mathcal{B}', \pi')$, $s'$ and $(U'_i, \phi'_i, \phi'_{ij})$ analogously. A morphism of vector bundles $f : \mathcal{B} \to \mathcal{B}'$ is a $\mathcal{C}$-morphism, i.e., such that the following diagram commutes $$\xymatrix{\mathcal{B} \ar[dr]_{\pi} \ar^{f}[rr]& & \mathcal{B}' \ar[dl]^{\pi'} \\ & \mathcal{C} & } ,$$ and satisfying that, for any $i \in I$ and $i' \in I'$, the algebraic morphism $\phi'_{i'} \circ f \circ \phi_i^{-1}$, defined on $(U_i\cap U'_{i'}) \times \mathcal{A}^{s}$, has the form $\mathrm{id} \times f_{i i'}$, for some linear map $f_{i i'}: \mathcal{A}^s \to \mathcal{A}^{s'}$. Elements of the Picard group $\mathrm{Pic}(\mathcal{C})$ correspond to isomorphism classes of line bundles over $\mathcal{C}$. This bijection is induced by the following map. Let $D \in \mathrm{Div}(\mathcal{C})$ and let $\mathfrak{L}^D$ be the sheaf defined in every open set $U \subseteq \mathcal{C}$ by \begin{equation} \mathfrak{L}^D(U)= \left\lbrace f \in k: \text{div}(f)|_{U}+D|_{U} \geq 0 \right\rbrace . \end{equation} Then, we can show that $\mathfrak{L}^D$ is a locally free sheaf of rank one. Thus, it defines a line bundle on $\mathcal{C}$. If we define a group structure on the set of classes via tensor products, then the previously defined map is actually a group isomorphism. Naturally associated to a line bundle $\mathfrak{L}^D$ we can define the maximal $\mathcal{C}$-order $\mathfrak{D}_D$ (cf.~Definition \ref{definition of max and eichler orders}) as follows \begin{equation}\label{eq maximales} \small \mathfrak{D}_D= \mathrm{End}_{\oink_{\mathcal{C}}}\binom{\mathcal{O}_{\mathcal{C}}}{\mathfrak{L}^{-D}} =\left( \begin{array}{cc} \mathcal{O}_{\mathcal{C}} & \mathfrak{L}^D\\ \mathfrak{L}^{-D} & \mathcal{O}_{\mathcal{C}} \end{array} \right). \normalsize \end{equation} In this thesis we will study a special family of intersections of two maximal orders as above. This family consists of objects of the form \begin{equation}\label{eq de eD} \mathfrak{E}_D=\mathfrak{D}_0 \cap \mathfrak{D}_D.
\end{equation} More specifically, one of our main goals is to understand the quotient graph $\mathfrak{t}_D= \mathrm{H}_D \backslash \mathfrak{t}(k_{P_{\infty}})$, where $\mathrm{H}_D = \mathfrak{E}_D(\mathcal{C} \smallsetminus \lbrace P_{\infty} \rbrace)^{*} $. See Theorems \ref{teo cusp}, \ref{Explicit teo cusp}, \ref{teo grup} and \ref{teo grup ab} for more details. \subsection{An interpretation of the Riemann-Roch Theorem} Let $D$ be a divisor on $\mathcal{C}$, and let $U$ be an open affine set. The Riemann-Roch Theorem allows us to estimate $\dim_{\mathbb{F}}(\mathfrak{L}^D(U))$ in terms of the degree of $D$ and the genus $g$ of $\mathcal{C}$ (cf.~\cite[\S 1, Theorem~1.5.17]{Stichlenoth}). Indeed, we have the following statements: \begin{itemize} \item $\dim_{\mathbb{F}}(\mathfrak{L}^D(U)) \geq \deg(D)+1-g,$ and \item if $\deg(D) \geq 2g-1$, then $\dim_{\mathbb{F}}(\mathfrak{L}^D(U))=\deg(D)+1-g$. \end{itemize} Let $P_{\infty}$ be a fixed closed point in $\mathcal{C}$, and let $R$ be the ring of functions that are regular outside $P_{\infty}$. Then $R$ is a Dedekind domain whose quotient field is $k$. Let $\nu: k \to \mathbb{Z} \cup \lbrace \infty \rbrace$ be the discrete valuation induced by $P_{\infty}$. We recall some elementary properties, which follow immediately from the product formula and the hypothesis that $\mathcal{C}$ is geometrically integral (which implies that $\mathbb{F}$ is algebraically closed in $k$): \begin{itemize} \item $\nu(x) \leq 0$, for all $x \in R \smallsetminus \lbrace 0 \rbrace$, \item for any $x \in R$, we have $\nu(x) = 0$ if and only if $x \in \mathbb{F}^{*}$, and \item $R^{*}=\mathbb{F}^{*}$. \end{itemize} Each closed point $P \in \mathcal{C}$ other than $P_{\infty}$ corresponds to a prime ideal $I(P)= \lbrace x \in R: \nu_{P}(x) \geq 1 \rbrace$ of $R$, and conversely. Furthermore, every non-zero fractional $R$-ideal $J$ has a decomposition $J=\prod_{P \neq P_{\infty}} I(P)^{n_P}$, and an associated divisor $D_J=\sum_{P \neq P_{\infty}} n_P P$. We define $\deg(J):=\deg(D_J)$. For any $m \in \mathbb{N}$, we define \[J[m]:= \mathfrak{L}^{-D_J+mP_{\infty}}(U_0)=\left\lbrace x \in J:\nu(x) \geq -m\right\rbrace,\] where $U_0=\mathrm{Spec}(R)=\mathcal{C} \smallsetminus \lbrace P_{\infty} \rbrace$. We denote by $g$ the genus of $\mathcal{C}$. Then, by the Riemann-Roch Theorem, the set $J[m]$ is a finite-dimensional vector space over $\mathbb{F}$, and when $\deg(-D_J+mP_{\infty})\geq 2g-1$ we have \[\text{dim}_{\mathbb{F}}(J[m])= \deg(-D_J+mP_{\infty})+ 1-g.\] It follows from a simple computation that \[\deg(-D_J+mP_{\infty})=-\deg(J)+m\deg(P_{\infty}),\] whence we finally get \begin{equation}\label{eq Riemann-Roch} \text{dim}_{\mathbb{F}}(J[m])= -\deg(J)+m \mathrm{deg}(P_{\infty}) + 1-g, \end{equation} when $m\mathrm{deg}(P_{\infty})\geq \deg(J)+2g-1$. In all that follows, by abuse of language, we refer to Equation \eqref{eq Riemann-Roch} as the Riemann-Roch Theorem. \section{On some explicit examples}\label{Section examples} In the current section, the main goal is to prove Theorem \ref{Explicit teo cusp}, and to subsequently present some explicit computations of quotient graphs $\mathfrak{t}_D$ associated to the action of Eichler groups $\mathrm{H}_D$, for small values of $\deg(D)$. We deduce from Theorem \ref{teo cusp} that the computation of the number of cusps gets more involved as $\deg(D)$, $\deg(P_{\infty})$, or the genus of $\mathcal{C}$ increases.
Naturally, we expect the same behavior from the finite graphs $Y \subset \mathfrak{t}_D$ (cf.~Theorem \ref{teo cusp}). For these reasons, in this section we work in the most elementary non-trivial context possible. In other words, we set $\mathcal{C}=\mathbb{P}^{1}_{\mathbb{F}}$ and $P_{\infty}$ to be the point at infinity. In all that follows we denote by $P[i]$ the degree-one closed point corresponding to $i \in \mathbb{F} \cup \lbrace \infty \rbrace$. In particular, we have that $\mathrm{div}(t-i)= P[i]- P[\infty]$ and $P_{\infty}=P[\infty]$, whence $R=\mathbb{F}[t]$, and $K=k_{P_\infty}=\mathbb{F}((t^{-1}))$. In this context $\mathcal{O}_{P_\infty}=\mathbb{F}[[t^{-1}]]$ is the ring of integers of $K$, $\nu=\nu_{P_\infty}=-\mathrm{deg}$ and $\pi=1/t$ is a uniformizing parameter. We also consistently fix the absolute value $x\mapsto |x|=|x|_{P_\infty}$. In all that follows, we identify the Bruhat-Tits tree for $\mathrm{SL}_2(K)$ (cf.~\S \ref{Section BTT}) with the tree $\mathfrak{b}=\mathfrak{b}(K)$, whose vertices are the closed balls in $K$, and two of them are neighbors if one is a proper sub-ball of the other. \subsection{Conventions on fundamental regions} It follows from \cite[\S 3 and \S 4]{S} that in order to define quotient graphs in full generality, i.e., in the context where a group acts with some edge inversions, it is convenient to work with the barycentric subdivision. Here, in order to define the fundamental domains associated to non-simply connected quotient graphs, we adopt this convention. Then, in order to define a fundamental domain in the Bruhat-Tits tree, we begin by performing a finite number of ``surgeries'' on the quotient graph $\mathfrak{q}$ to turn it into a tree. See Figures \ref{fig 5}\textbf{(A)} and \ref{fig 5} \textbf{(B)}. By a surgery we mean the process of replacing an edge by a pair of half edges, provided that the resulting graph is still connected. After surgery, we get a tree $\mathfrak{q}'$, we fix a vertex $v \in \mathrm{V}(\mathfrak{q}')$ that corresponds to a ``real'' vertex in $\mathfrak{q}$, and then we choose a pre-image $\tilde{v}$ in the Bruhat-Tits tree. Successively, we consistently lift the path from $v$ to any vertex or non-vertex, where a non-vertex is the lift in a half edge in $\mathfrak{q}'$. The union of the images of such liftings is the \textit{fundamental domain} under consideration. See Figure \ref{fig 5}\textbf{(C)}. Note that any structural result on a quotient graph can be translated into a result on its corresponding fundamental region. Moreover, this correspondence is perfect, in the sense that the quotient graph can be recovered from the fundamental domain and the pairs of corresponding non-vertices. Indeed, this is done by gluing the latter in an obvious manner. In particular, any combinatorial or topological result on the fundamental region can be also interpreted in terms of the corresponding quotient graph. We can define the ends of a fundamental region as the visual limit of its rays. This point of view allows us to explicitly describe the ends of the fundamental regions or quotient graphs in terms of representatives of the $\mathrm{H}_D$-orbits in $\mathbb{P}^{1}(k)$. 
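To illustrate the surgery convention described above in a minimal case, suppose that the quotient graph $\mathfrak{q}$ consists of a single vertex $w$ together with one loop; this happens, for instance, for the translation action of $\mathbb{Z}$ on a doubly infinite path. A single surgery replaces the loop by two half edges attached to $w$, and the resulting graph $\mathfrak{q}'$ is a tree. Choosing a pre-image $\tilde{w}$ of $w$ in the tree and lifting the two half edges, we obtain a fundamental domain consisting of $\tilde{w}$ together with two half edges; gluing back the two corresponding non-vertices recovers $\mathfrak{q}$.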
\begin{figure} \[ \fbox{ \xygraph{ !{<0cm,0cm>;<0.7cm,0cm>:<0cm,0.7cm>::} !{(-0.8,2.2) }*+{\textbf{(A)}}="na" !{(-0.2,0.1) }*+{\bullet}="e2" !{(1.2,0.1) }*+{\bullet}="e3" !{(1.2,3) }*+{\bullet}="m1" !{(-0.2,3) }*+{\bullet}="m2" !{(2.6,3) }*+{\bullet}="m3" !{(-1,3) }*+{}="i1" !{(3.4,3) }*+{}="i2" !{(-0.2,1.5) }*+{\bullet}="e4" !{(1.2,1.5) }*+{\bullet}="e5" !{(2.6,0.1) }*+{\bullet}="e6" !{(2.6,1.5) }*+{\bullet}="e7" !{(2.6,-0.41) }*+{}="nad" "e2"-@{-}"e4" "e2"-@{-}"e3""e4"-@{-}"e5""e3"-@{-}"e5" "e3"-@{-}"e6" "e5"-@{-}"e7" "e6"-@{-}"e7" "m1"-@{-}"m2" "m1"-@{-}"m3" "m1"-@{-}"e5" "m2"-@{.}"i1" "m3"-@{.}"i2" }} \fbox{ \xygraph{ !{<0cm,0cm>;<0.7cm,0cm>:<0cm,0.7cm>::} !{(-0.8,2.2) }*+{\textbf{(B)}}="na" !{(-1.2,-0.3) }*+{\star}="e2i" !{(-0.2,0.1) }*+{\bullet}="e2" !{(0.3,-0.3) }*+{\star}="e3i" !{(1.2,0.1) }*+{\bullet}="e3" !{(2.2,-0.3) }*+{\ast}="e3ii" !{(1.2,3) }*+{\bullet}="m1" !{(-0.2,3) }*+{\bullet}="m2" !{(2.6,3) }*+{\bullet}="m3" !{(-1,3) }*+{}="i1" !{(3.4,3) }*+{}="i2" !{(-0.2,1.5) }*+{\bullet}="e4" !{(1.2,1.5) }*+{\bullet}="e5" !{(2.6,0.1) }*+{\bullet}="e6" !{(3.6,-0.3) }*+{\ast}="e6i" !{(2.6,1.5) }*+{\bullet}="e7" "e2"-@{-}"e4" "e2"-@{-}"e2i" "e3i"-@{-}"e3" "e3ii"-@{-}"e3" "e4"-@{-}"e5""e3"-@{-}"e5" "e6i"-@{-}"e6" "e5"-@{-}"e7" "e6"-@{-}"e7" "m1"-@{-}"m2" "m1"-@{-}"m3" "m1"-@{-}"e5" "m2"-@{.}"i1" "m3"-@{.}"i2" }} \fbox{ \xygraph{ !{<0cm,0cm>;<0.7cm,0cm>:<0cm,0.7cm>::} !{(-0.8,2.2) }*+{\textbf{(C)}}="na" !{(0.6,2.4) }*+{ }="m1" !{(2.8,0.6) }*+{ }="m2" !{(4,-0.4) }*+{ }="m3" !{(-0.2,3.11) }*+{ }="m4" !{(1.28,1.8) }*+{ \bullet}="n01" !{(0.8,1.4) }*+{ \bullet }="n1" !{(0.8,0.9) }*+{ \bullet }="n2" !{(0.4,0.5) }*+{ \star }="n21" !{(1.1,0.5) }*+{ \ast }="n22" !{(1.35,0.9) }*+{ \bullet }="n3" !{(1.8,0.5) }*+{ \bullet }="n3p" !{(2.1,0.2) }*+{ \ast }="n31" !{(0.15,0.9) }*+{ \bullet }="n4" !{(-0.4,0.5) }*+{ \bullet }="n4p" !{(-0.8,0.2) }*+{ \star }="n41" !{(1.5,2) }*+{ }="i01" !{(-1,0.0) }*+{ }="i41" !{(-0.8,0.2) }*+{ }="i412" !{(-1.4,-0.4) }*+{ }="i42" !{(0.6,1.6) }*+{ }="i1" !{(2.3,0.0) }*+{}="i31" !{(2.1,0.2) }*+{}="i312" !{(2.8,-0.4) }*+{}="i32" !{(0.8,1.6) }*+{ }="i1d" !{(0.8,0.7) }*+{ }="i2d" !{(0.85,1.1) }*+{ }="i3d1" !{(0.75,1.1) }*+{ }="i3d2" !{(0.25,0.3) }*+{ }="i21d" !{(1.25,0.3) }*+{ }="i22d" !{(0.4,0.5) }*+{ }="i21f" !{(1.1,0.5) }*+{ }="i22f" !{(-0.25,-0.4) }*+{ }="i21ff" !{(1.65,-0.4) }*+{ }="i22ff" "m1"-@{-}"m2" "m2"-@{.}"m3" "m1"-@{.}"m4" "i01"-@{-}"i41" "i42"-@{.}"i412" "i1"-@{-}"i31" "i32"-@{.}"i312" "i1d"-@{-}"i2d" "i3d1"-@{-}"i21d" "i3d2"-@{-}"i22d" "i21f"-@{.}"i21ff" "i22f"-@{.}"i22ff"}} \] \caption{In the Figure, \textbf{(A)} represents a quotient graph, \textbf{(B)} shows the tree obtained from the previous graph by the process of surgery, and finally \textbf{(C)} represents the corresponding choice of a fundamental domain.}\label{fig 5} \end{figure} \subsection{A Proof of Theorem \ref{Explicit teo cusp}} Let $N=(t-\lambda_1) \cdots (t-\lambda_n)$. We can write $\mathrm{div}(t-\lambda_i)=P[\lambda_i]-P[\infty]$, where $P[j]$ is a degree-one closed point on $\mathbb{P}^{1}_{\mathbb{F}}$. Let $D= \sum_{i=1}^{r} P[\lambda_i]$ be the corresponding multiplicity free divisor on $\mathcal{C}$. Then, since $n_1=\cdots =n_r=1$ and $\deg(P[\infty])=\deg(P[\lambda_1])=\cdots =\deg(P[\lambda_r])=1$, it follows from Theorem \ref{teo cusp} that the quotient graph $\mathfrak{t}_D$ as exactly $2^{n}$ cusps. 
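For instance, for $n=1$ and $n=2$ this count of $2^{n}$ cusps is realized explicitly in Examples \ref{ex1 rf} and \ref{ex 2 r} below, where the corresponding fundamental regions are described.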
Thus, in order to prove Theorem \ref{Explicit teo cusp}, it suffices to prove that the restriction of the canonical projection $\pi: \mathfrak{t}(K)\cong \mathfrak{b}(K) \to \mathfrak{t}_D$ to the tree $\mathfrak{s}$ is an injection. Consequently, the result follows from the next lemma, which we prove following the techniques in \cite{M}: \begin{lemma} The vertices in $\mathfrak{s}$ are in different $\mathrm{H}_D$-orbits. \end{lemma} \begin{proof} Note that, when $N=1$, the lemma reduces to Nagao's Theorem (cf.~\cite{N}). So, we assume throughout that $n \geq 1$. As in \S \ref{Section BTT} we write $B_x^{|r|}$ for the ball of radius $|\pi|^r$ centered at $x\in K$, where $\pi=\pi_{P_\infty}$ is a uniformizing parameter. This ball corresponds to the local maximal order $\mathrm{End}_{\mathcal{O}_{P_{\infty}}} (\Lambda_B)$, where $\Lambda_B = \left\langle \binom{x}{1}, \binom{\pi^r}{0} \right\rangle $. In particular, $B_0:=B_0^{|0|}$ corresponds to the local maximal order $\mathbb{M}_2(\mathcal{O}_{P_{\infty}})$. Let $B_1=B_{x_1}^{|r_1|}$ and $B_2=B_{x_2}^{|r_2|}$ be two vertices in $\mathfrak{s}$, where each center is either $0$ or the multiplicative inverse of a proper monic nonconstant divisor of $N=\prod_{i=1}^{n}(t-\lambda_i)$. Assume that there exists a matrix $g=\sbmattrix ab{Nc}d\in \mathrm{H}_D$ satisfying $g \cdot B_1=B_2$. We must prove that $B_1=B_2$. Set $h_1= \sbmattrix {x_1}{\pi^{r_1}}{1}{0}$ and $h_2=\sbmattrix {x_2}{\pi^{r_2}}{1}{0}$, so that we have both $B_1=h_1 \cdot B_0$ and $B_2=h_2 \cdot B_0$. Since $K^{*}\mathrm{GL}_2(\mathcal{O}_{P_\infty})$ is the stabilizer of $B_0$ in $\mathrm{GL}_2(K)$, we must have $h_2^{-1}gh_1 \in \lambda \mathrm{GL}_2(\mathcal{O}_{P_\infty})$, for some $\lambda\in K^{*}$. By taking determinants, we get $2\nu(\lambda)=r_1-r_2$, where $\nu$ is the valuation corresponding to $P_{\infty}$. Hence, $r_1-r_2$ is an even integer and $\pi^{\frac{r_2-r_1}{2}}h_2^{-1}gh_1 \in\mathrm{GL}_2(\mathcal{O}_{P_\infty})$. After a simple computation we have that $\pi^{\frac{r_2-r_1}{2}}h_2^{-1}gh_1$ equals \begin{equation}\label{eq def} \bmattrix {\pi^{\frac{r_2-r_1}2}(d+Ncx_1)}{\pi^{\frac{r_2+r_1}2}Nc} {\pi^{\frac{-r_1-r_2}2}(ax_1-dx_2+b-Ncx_1x_2)}{\pi^{\frac{r_1-r_2}2}(a-Ncx_2)}. \end{equation} We conclude that $\pi^{\frac{r_1-r_2}{2}}(a-Ncx_2),\pi^{\frac{r_2-r_1}{2}}(d+Ncx_1) \in \mathcal{O}_{P_\infty}$. On the other hand, the polynomials $a-Ncx_2$ and $d+Ncx_1$ either vanish or have non-positive valuations. This leaves us with three alternatives: \begin{itemize} \item[(i)] $r:=r_1=r_2$, and $\nu(a-Ncx_2)=\nu(d+Ncx_1)=0$, \item[(ii)] $a=Ncx_2$, or \item[(iii)] $d=-Ncx_1$. \end{itemize} The last two alternatives imply $\det(g) \notin \mathbb{F}^{*}$, so (i) must hold. The result follows if $x_1=x_2$, as this implies $B_1=B_2$. The same holds if $r\leq0$ since $\nu(x_1),\nu(x_2)>0$ and hence $B_1=B_0^{|r|}=B_2$. We assume in the sequel that $x_1\neq x_2$ and $r>0$. In particular, at least one of $x_1$, $x_2$ is not zero. From Equation \eqref{eq def} and (i) we deduce the following facts: \begin{itemize} \item[(a)] $a-Ncx_2 =: a_0 \in \mathbb{F}^*$, \item[(b)] $d+Ncx_1 =: d_0 \in \mathbb{F}^*$, \item[(c)] $ Nc \in \pi^{-r}\mathcal{O}_{P_\infty}$, or equivalently $\deg(Nc)\leq r$, and \item[(d)] $a_0 x_1- d_0 x_2 + b+Ncx_1x_2 = ax_1-dx_2+b-Ncx_1x_2 \in \pi^r \mathcal{O}_{P_\infty}$.
\end{itemize} First, assume that either $\nu(Ncx_1x_2)>0$ or $x_1x_2=0$. Then the dominant term on the left-hand side of identity (d) is $b \in \mathbb{F}[t]$, unless it vanishes. As $r>0$ we must conclude the latter. It follows that $g=\sbmattrix {a}{0}{Nc}{d}$; in particular, $a,d \in \mathbb{F}^{*}$. Then, it follows from (a) and (b) that $Nc x_2, Ncx_1 \in \mathbb{F}$, and then $c=0$, as at least one element in $\{x_1,x_2\}$ is the inverse of a nonconstant proper monic divisor of $N$. From the preceding considerations, we get the identity $B_{x_2}^{|r|}=B_2=g \cdot B_1= B_{ax_1/d}^{|r|}$. This implies that $ dx_2-ax_1 \in \pi^r \mathcal{O}_{P_{\infty}}$. Note that if $\nu(x_1)\geq r$, then $0 \in B_1$, whence we can assume that $x_1=0$. This implies that $dx_2 \in \pi^r \mathcal{O}_{P_{\infty}}$, i.e., $\nu(x_2) \geq r$. Thus, we conclude $B_1=B_{0}^{|r|}=B_2$. In any other case we have $\nu(x_1), \nu(x_2)< r $, whence we deduce $\nu(x_1)=\nu(x_2)$ and $a=d$. We conclude $x_1-x_2 \in \pi^r \mathcal{O}_{P_{\infty}}$, hence $B_1=B_2$. Finally, assume that both $x_1, x_2 \neq 0$ and $\nu(Ncx_1x_2)\leq 0$. We can assume $r> \max{ \lbrace \nu(x_1), \nu(x_2)\rbrace }$ since otherwise we could redefine $x_1$ or $x_2$ as $0$ and return to the preceding case. Let \begin{equation}\label{bin}\epsilon= b+Ncx_1x_2 \in -a_0x_1+d_0x_2+\pi^r\mathcal{O}_{P_\infty}\subseteq \pi \mathcal{O}_{P_\infty}.\end{equation} By a simple computation, we get $\det(g) =a_0 d_0- \xi \in \mathbb{F}^{*}$, where $ \xi= Nc(a_0 x_1-d_0 x_2 + \epsilon)\in \mathbb{F}$. If $\xi=0$, we have that $c=0$ or \begin{equation}\label{eq divisi} Nc+ bx_1^{-1} x_2^{-1} =\epsilon (x_1x_2)^{-1}= d_0 x_1^{-1}-a_0x_2^{-1}. \end{equation} If $c=0$, then $b \in \pi\mathcal{O}_{P_\infty}$ by \eqref{bin}, so that $b=0$ and we argue as in the previous paragraph. Otherwise, Equation \eqref{eq divisi} and conditions (a) and (b) imply that $x_1^{-1}$ divides $x_2^{-1}$ and conversely, as each divides $N$, whence $B_1=B_2$. Assume now that $\xi \neq 0$. Then, by applying, in the given order, (c), the definition of $\xi$, the definition of $\epsilon$, and (d), we obtain the following chain of inequalities: $$r\geq-\nu(Nc)=\nu(a_0 x_1- d_0 x_2 +\epsilon)=\nu(a_0 x_1- d_0 x_2 +b+Ncx_1x_2)\geq r,$$ whence $\nu(a_0 x_1- d_0 x_2 +\epsilon)=-\nu(Nc)= r$. In this case we have $$ r=\nu(a_0 x_1- d_0 x_2 + \epsilon)=\nu(x_1x_2)+\nu(a_0x_2^{-1}-d_0x_1^{-1}+ \epsilon(x_1x_2)^{-1}) \leq \nu(x_1x_2), $$ as the second term is a polynomial. On the other hand, the hypothesis $\nu(Ncx_1x_2)\leq 0$ implies $\nu(x_1x_2) = \nu(Ncx_1 x_2)-\nu(Nc)\leq r$. Thus, $r=\nu(x_1x_2)$ and $$\sigma:= a_0x_2^{-1}-d_0x_1^{-1}+ \epsilon(x_1x_2)^{-1} = a_0x_2^{-1}-d_0x_1^{-1}+ b(x_1x_2)^{-1} + Nc,$$ is a nonzero constant polynomial. But $\sigma$ is divisible by $\text{gcd}(x_1^{-1},x_2^{-1})$, and therefore $\text{gcd}(x_1^{-1},x_2^{-1})=1$. If $\epsilon \neq 0$ we conclude that $b(x_1x_2)^{-1} + Nc$ is a multiple of $(x_1x_2)^{-1}$. By the strong triangle inequality, $\nu(\sigma)=0$ implies $$\nu(a_0x_1^{-1}-d_0x_2^{-1}) = \nu(b(x_1x_2)^{-1} + Nc)\leq \nu((x_1x_2)^{-1}).$$ A degree argument using conditions (a) and (b) shows that the preceding inequality is impossible. To finish the proof we consider $\epsilon=0$, in which case $\nu(a_0x_1^{-1}-d_0x_2^{-1})=0$. As the polynomials $1/x_1$ and $1/x_2$ are monic, this is only possible when $a_0=d_0$. Then condition (d) implies $\nu(x_1-x_2)\geq r$. We conclude that $B_1=B_2$.
\end{proof} \begin{figure} \[ \fbox{ \xygraph{ !{<0cm,0cm>;<.8cm,0cm>:<0cm,.8cm>::} !{(0.5,4) }*+{\infty}="e3" !{(2,2.5) }*+{\bullet}="f3" !{(2.5,2.8) }*+{{}^{B_0^{|\nu(s)|}}}="fis" !{(1.6,2.0) }*+{\bullet}="f4" !{(0.8,2.2) }*+{{}^{B_s^{|\nu(s)+1|}}}="fi4" !{(0.8,1.0) }*+{\bullet}="f5" !{(0.6,1.2) }*+{{}^{B}}="fi5" !{(4.5,0) }*+{0}="e5" !{(0.0,0.0) }*+{{}^{s}}="f2" !{(-0.5,3.5) }*+{\textbf{(A)}}="ti99" "e3"-@{-}"e5" "f3"-@{.}"f2" } } \fbox{ \xygraph{ !{<0cm,0cm>;<.8cm,0cm>:<0cm,.8cm>::} !{(0.5,4) }*+{\infty}="e3" !{(4.5,0) }*+{0}="e5" !{(0.0,0.1) }*+{{}^{1/t}}="f2" !{(2,2.5) }*+{\bullet}="t3" !{(2.5,2.8) }*+{{}^{B_{0}^{|1|}}}="ti3" !{(1.6,2.0) }*+{\bullet}="f4" !{(1.15,2.2) }*+{{}^{B_{1/t}^{|2|}}}="fi4" !{(3.5,0.1) }*+{{}^{1/t^2}}="f5" !{(-0.5,3.5) }*+{\textbf{(B)}}="ti99" "e3"-@{-}"e5" "f2"-@{-}"t3" "f5"-@{-}"f4" } } \] \caption{In \textbf{(A)} continuous line represents the double ray $\mathfrak{p}_{0,\infty}$, which is a fundamental region for the action of $\mathrm{H}_{D}$ on $\mathfrak{b}(K)$, when $D=P[0]|_{U_0}$. On the other hand, Figure \textbf{(B)} shows a fundamental region for the action of $\mathrm{H}_{D}$ on $\mathfrak{b}(K)$, when $D=(P[0]+P[1])|_{U_0}$.}\label{fig4} \end{figure} \subsection{Small Examples} Here we compute some examples of fundamental regions (or equivalently, quotient graphs) in the context where $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$, $P_{\infty}=P[\infty]$ and $\deg(D)$ is small. \begin{ex}\label{ex1 rf} Assume that $N=t$, or equivalently assume that $D= \mathrm{div}(t)|_{U_0}=P[0]|_{U_0}$. We denote by $\mathfrak{p}_{a,b} \subset \mathfrak{b}(K)$ the double ray joining two different elements $a,b \in \mathbb{P}^{1}(k)$. Then, Theorem \ref{Explicit teo cusp} implies that the union of $\mathfrak{p}_{0,\infty}$ with a finite graph is a fundamental region for the action of $\mathrm{H}_D$ on $\mathfrak{b}(K)$. More precisely, we claim that $\mathfrak{p}_{0,\infty}$ alone is a fundamental region in this case. See Figure \ref{fig4}\textbf{(A)}. In order to prove this claim, we introduce the following algorithm. For each $f \in R$, let us write: $$\tau_f= \sbmattrix {1}{-f}{0}{1} , \quad I= \sbmattrix {0}{1}{1}{0}, \, \, \text{and} \quad \sigma_f= \sbmattrix {1}{0}{-tf}{1}= I \cdot \tau_{tf} \cdot I .$$ Note that $\tau_f, \sigma_f \in \mathrm{H}_D$, for any $f \in R$. Let $s \in k$ be a finite rational element, and let $B=B_s^{|r|}$ be any vertex in the double infinity ray $\mathfrak{p}_{s, \infty}$. We claim that $B$ is in the $\mathrm{H}_D$-orbit of some vertex of $\mathfrak{p}_{0,\infty}$. So, first note that, if $\nu(s) \geq r$ or $s=0$, then $B=B_{0}^{|r|}$, whence the claim holds immediately. Thus, assume that $s \neq 0$ and that $\nu(s)<r$. In all that follows we extensively use the fact that for any $p \in \mathbb{Z}$ and any pair $x,s \in k$ we have that $$ \tau_x\cdot B_s^{|p|} = B_{s-x}^{|p|}, \quad I \cdot B_0^{|p|}= B_0^{|-p|}, \quad \mathrm{and} \quad I \cdot B_s^{|p|}= B_{1/s}^{|p-2\nu(s)|}, \, \, \mathrm{if} \, \, 0 \notin B_s^{|p|}.$$ Fix the uniformizing parameter $1/t \in \mathcal{O}_{P_{\infty}}$. Assume that $\nu(s) \leq 0$. Then, we can write $s=f_0+\epsilon_0$, where $f_0 \in R=\mathbb{F}[t]$ and $\nu(\epsilon_0) \geq 1 $. Let us define $\mathfrak{c}_0$ as the unique finite path in $\mathfrak{b}(K)$ joining $B_{s}^{|\nu(s)+1|}$ with $B_{s}^{|\nu(\epsilon_0)|}$, when $\epsilon_0 \neq 0$, and define it as the ray joining $B_{s}^{|\nu(s)+1|}$ with the end $s$, in the remaining case. 
Note that, since $\nu(s) < \nu(\epsilon_0)$, the path $\mathfrak{c}_0$ is non-trivial in any case. Moreover, note that $\tau_{f_0} \cdot B_s^{|p|}=B_{\epsilon_0}^{|p|}$ belongs to $\mathrm{vert}(\mathfrak{p}_{0,\infty})$ if and only if $\nu(\epsilon_0) \geq p$. In particular, we obtain $\tau_{f_0} \cdot \mathfrak{c}_0 \subset \mathfrak{p}_{0,\infty}$, i.e., each vertex in $\mathfrak{c}_0$ is in the $\mathrm{H}_D$-orbit of a vertex in $\mathfrak{p}_{0,\infty}$. If $B \notin \mathrm{vert}(\mathfrak{c}_0)$, we have not proven that $B \in \mathrm{H}_D \cdot \mathfrak{p}_{0,\infty}$, but we have proven that there exist vertices satisfying this condition in the path joining $B_s^{|\nu(s)+1|}$ to $B$. Since $\tau_{f_0} \cdot \mathfrak{p}_{s, \infty}=\mathfrak{p}_{\epsilon_0, \infty}$, where $\nu(\epsilon_0) \geq 1$, in the latter case we replace $B$ by $\tau_{f_0} \cdot B = B_{\epsilon_0}^{|r|}$, which leads us to the next case. Now, assume that $\nu(s) \geq 1$. Then, $1/s= tf_0 + \epsilon_0$, where $f_0 \in R$ and $\nu(\epsilon_0) \geq 1 $. So, in analogy with the previous case, let $\mathfrak{c}_0'$ be the unique finite path in $\mathfrak{b}(K)$ joining $B_{s}^{|\nu(s)+1|}$ with $B_{s}^{|2\nu(s)+\nu(\epsilon_0)|}$, when $\epsilon_0 \neq 0$, and define it as the ray joining $B_{s}^{|\nu(s)+1|}$ with $s$, in the remaining case. Assuming $p > \nu(s)$, we get $$\sigma_{f_0} \cdot B_s^{|p|}= (I \cdot \tau_{t f_0} \cdot I )\cdot B_s^{|p|}= (I \cdot \tau_{t f_0}) \cdot B_{1/s}^{|p-2\nu(s)|} = I \cdot B_{\epsilon_0}^{|p-2\nu(s)|}. $$ So, since $I \cdot \mathfrak{p}_{0,\infty}=\mathfrak{p}_{0,\infty}$, we conclude that $\sigma_{f_0} \cdot B_s^{|p|} \in \mathrm{vert}(\mathfrak{p}_{0,\infty})$ precisely when $B_{\epsilon_0}^{|p-2\nu(s)|} \in \mathrm{vert}(\mathfrak{p}_{0,\infty})$, or equivalently, when $\nu(\epsilon_0)+2\nu(s) \geq p $. Thus, we get that $\sigma_{f_0} \cdot \mathfrak{c}_0' \subset \mathfrak{p}_{0, \infty}$. Again, if $B \notin \mathrm{vert}(\mathfrak{c}_0')$, we have not yet proven that $B \in \mathrm{H}_D \cdot \mathfrak{p}_{0,\infty}$. But, as in the first case, we have proven that there exist vertices in the path joining $B_s^{|\nu(s)+1|}$ to $B$, which satisfy this condition. This shows that $\sigma_{f_0} \cdot B$ is closer to $\mathfrak{p}_{0,\infty}$ than $B$. In particular, we can keep applying either case until $B$ belongs to $\mathfrak{p}_{0,\infty}$, and we are done, i.e., we have shown that each $\mathrm{H}_D$-orbit of vertices has a representative in $\mathrm{vert}(\mathfrak{p}_{0, \infty})$. Moreover, Theorem \ref{Explicit teo cusp} shows that $\mathfrak{p}_{0, \infty}$ does not contain two vertices in the same $\mathrm{H}_D$-orbit. Since $\mathrm{H}_D$ acts without inversions, we conclude that $\mathfrak{t}_D$ is isomorphic to $\mathfrak{p}_{0, \infty}$, whence it is a fundamental region for the corresponding group action. \end{ex} \begin{rem} A similar method allows us to give another proof of Nagao's Theorem (cf.~\cite{N}). \end{rem} \begin{rem} We define a continued fraction with coefficient sets $S \subseteq K$ and $T \subseteq K^{*}$ as a sequence $(s_n)_{n=1}^{\infty}$, satisfying $$ s_n = f_0+\cfrac{b_1}{f_1+\cfrac{b_2}{\;\ddots+\cfrac{b_n}{f_{n}}}} ,$$ where $f_i \in S$ and $b_i \in T$ for each $i$. When this sequence converges in $K$ we say that its limit has an expression as an infinite continued fraction.
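For instance, taking $S=\mathbb{F}[t]$ and $T=\lbrace 1 \rbrace$, the Euclidean division $t^3+t+1=t\,(t^2+1)+1$ yields the finite expansion $$\frac{t^3+t+1}{t^2+1}= t+ \frac{1}{t^2+1},$$ i.e., $f_0=t$, $f_1=t^2+1$ and $b_1=1$, while elements of $K$ that are not rational functions are expressed by infinite continued fractions of this type.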
\begin{rem} A similar method allows us to give another proof of Nagao's Theorem (cf.~\cite{N}). \end{rem} \begin{rem} We define a continued fraction with coefficient sets $S \subseteq K$ and $T \subseteq K^{*}$ as a sequence $(s_n)_{n=1}^{\infty}$, satisfying $$ s_n = f_0+\cfrac{b_1}{f_1+\cfrac{b_2}{\ddots+\cfrac{b_n}{f_{n}}}},$$ where each $f_n \in S$ and $b_n \in T$. When this sequence converges in $K$ we say that its limit has an expression as an infinite continued fraction. In \cite{Paulin} Paulin interprets the existence of continued fractions with coefficient sets $S=\mathbb{F}[t]$ and $T=\lbrace 1 \rbrace$ in terms of the action of $\mathrm{H}_0=\mathrm{GL}_2(\mathbb{F}[t])$ on the tree $\mathfrak{t}(K)$. These continued fractions express any element in $K$. Moreover, it is well-known that the elements $s_n$ in the sequence are the best rational Diophantine approximations of elements in $K$. In our setting, we can use a generalization of the previously introduced algorithm in order to extend Paulin's results. More specifically, with Arenas-Carmona we have shown (unpublished) the existence of continued fractions approximating all elements in $K$, associated to the action of certain arithmetical subgroups of $\mathrm{GL}_2(K)$ whose action on $\mathfrak{t}(K)$ has a small fundamental region. In particular, this applies to the case where the arithmetical subgroup is $\mathrm{H}_D$ with $D= P[0]|_{U_0}$. \end{rem} \begin{ex}\label{ex 2 r} Here we exhibit a fundamental region for the action of $\mathrm{H}_D$, when $D=\mathrm{div}(t(t-1))|_{U_0} = (P[0] + P[1])|_{U_0}$ and $\mathbb{F}=\mathbb{F}_2$. In order to achieve this, we use a method different from the one introduced in the previous example. Indeed, the key step in this example is to compute the valency of the image in $\mathfrak{t}_D$ of some vertices in $\mathfrak{b}(K)$. Fix a vertex $x \in \mathrm{vert}(\mathfrak{s})$, where $\mathfrak{s} \subset \mathfrak{t}(K)$ is the smallest subtree containing all ends in $\lbrace \infty, 0, 1/t, 1/(t-1) \rbrace$, as in Theorem \ref{Explicit teo cusp}. We denote by $\mathfrak{v}^1(x)$ the star of $x$, i.e., the full subgraph of $\mathfrak{t}(K)$ whose vertices are precisely $x$ and its neighbors. In order to prove that $\mathfrak{s}$ is a fundamental region, we just need to show that every edge in $\mathfrak{v}^1(x)$ is in the same $\mathrm{H}_D$-orbit as some edge in $\mathfrak{s}$. Fix $s \in \lbrace 0, 1/t, 1/(t-1) \rbrace$, and assume $x=x_n$ for some $n \in \mathbb{Z}$, where $x_i= B_{s}^{|i|}\in \mathfrak{p}_{s, \infty}$. Furthermore, assume that $n>\nu(s)$ when $s \neq 0$, and make no assumption on $n$ when $s=0$. Note that every vertex in $\mathfrak{s}$ is accounted for in this way. Let $r_{n}$ be the edge connecting $x_n$ to $x_{n+1}$. We begin our analysis by assuming that $s=0$ and $-n=m \geq 0$. In this case we claim that the image of $x$ has valency two in $\mathfrak{t}_D$. First recall that $\tau = \sbmattrix {a}{b}{c}{d} \in \mathrm{H}_D$ precisely when $a,b,d \in \mathbb{F}[t]$, $c \in t(t-1) \mathbb{F}[t]$ and $ad-bc \in \mathbb{F}^{*}$. On the other hand, $\tau$ stabilizes $x$ if and only if $\nu(a), \nu(d) \geq 0$, $\nu(c) \geq m$ and $\nu(b) \geq -m$. Since the valuation of a non-constant polynomial is strictly negative, the two previous conditions imply that $\tau \in \mathrm{Stab}_{\mathrm{H}_D}(x_n)$ precisely when $a,d \in \mathbb{F}$, $c=0$ and $b \in \mathbb{F}[t]_{m}:= \lbrace f \in \mathbb{F}[t]: \deg(f) \leq m \rbrace$. In other words $$ \mathrm{Stab}_{\mathrm{H}_D}(x_n)= \sbmattrix {\mathbb{F}^{*} }{\mathbb{F}[t]_{m}}{0}{\mathbb{F}^{*} }:= \left\lbrace \sbmattrix {a}{b}{0}{d} : a, d \in \mathbb{F}^{*} , b \in \mathbb{F}[t]_{m} \right \rbrace.$$ So, the set of unipotent elements in $\mathrm{Stab}_{\mathrm{H}_D}(x_n)$ equals $$U_{m}:=\sbmattrix {1 }{\mathbb{F}[t]_{m}}{0}{1}:= \left\lbrace \sbmattrix {1}{b}{0}{1} : b \in \mathbb{F}[t]_{m} \right \rbrace.
$$ For each $i \in \mathbb{Z}_{\geq 0}$, let us introduce the group $$\Delta_i= \left\lbrace \sbmattrix {1}{z}{0}{1} : z \in \pi^{-i}\mathcal{O}_{P_{\infty}} \right \rbrace.$$ It follows from \cite[\S 5.1]{A5} that $\Delta_m$ acts transitively on the set of edges in $\mathfrak{v}^{1}(x)$ other than $r_{n}$. Moreover, $\Delta_{m-1}$ acts trivially on this set. Note that $U_{m}$ covers $\Delta_{m}/\Delta_{m-1}$, since $\pi^{-m}\mathcal{O}_{P_{\infty}}/\pi^{1-m}\mathcal{O}_{P_{\infty}} \cong \mathbb{F}(P_{\infty})=\mathbb{F}$. Therefore, $U_m$ acts transitively on the set of edges in $\mathfrak{v}^{1}(x)$ other than $r_{n}$, whence the claim follows. Now, assume that either $s=0$ and $n>0$ or $s\neq 0$ and $n>\nu(s)=1$. In this case, we extend the previous arguments in order to show that the image of $x$ in $\mathfrak{t}_D$ has valency two, except when $s=0$ and $n=1$, or $s\in \lbrace 1/t, 1/(t-1) \rbrace$ and $n=2$. Indeed, let us fix $$g_s= \sbmattrix {s}{1}{1}{0}.$$ Then, we obtain $x=g_s \cdot y_n$, where $y_i:= B_{0}^{|-i|}$. Thus, to prove the preceding statement, we just need to show that the set of unipotent elements in $g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x \right) g_{s}$ covers $\Delta_{n}/\Delta_{n-1}$. We compute these stabilizer subgroups next. Indeed, note that $\tau \in g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x \right) g_{s}$ precisely when $\tau \in g_{s}^{-1} \mathrm{H}_D g_{s}$ and $\tau \in \mathrm{Stab}_{\mathrm{GL}_2(K)} \left( y_n \right)$. Let us write $\tau= g_{s}^{-1} g g_{s}$, where $g=\sbmattrix {a}{b}{c}{d} \in \mathrm{H}_D$. Then $\tau \in g_{s}^{-1} \mathrm{H}_D g_{s}$ if and only if $$ \tau= \sbmattrix {d+cs}{c}{(a-d)s-cs^{2}+b }{a-cs} \in \mathrm{Stab}_{\mathrm{GL}_2(K)} \left( y_n \right) .$$ Equivalently, we have that $\nu(d+cs), \nu(a-cs) \geq 0$, $\nu(c) \geq -n$ and $\nu((a-d)s-cs^{2}+b ) \geq n$. If $s=0$, then these previous conditions hold precisely when $a,d \in \mathbb{F}^{*}$, $b=0$ and $c \in t(t-1) \mathbb{F}[t]_{n-2}$. So, we get $$g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x \right) g_{s} = \left\lbrace \sbmattrix {a}{c}{0}{d} : a,d \in \mathbb{F}^{*}, \, \, c \in t(t-1) \mathbb{F}[t]_{n-2} \right \rbrace.$$ In particular, the set of unipotent elements in $g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x \right) g_{s}$ is exactly $$ U_n(0):= \left\lbrace \sbmattrix {1}{c}{0}{1} : c \in t(t-1) \mathbb{F}[t]_{n-2} \right \rbrace. $$ Thus, the image of $x=B_0^{|n|}$, $n \geq 2$ by the canonical projection has valency two in $\mathfrak{t}_D$, since $ t(t-1) \mathbb{F}[t]_{n-2} $ covers $\pi^{-n}\mathcal{O}/\pi^{1-n}\mathcal{O} \cong \mathbb{F}(P_{\infty})=\mathbb{F}$ Now, assume $s \in \lbrace 1/t, 1/(t-1) \rbrace$ and $n \geq 3$. Note that $cs \in \mathbb{F}[t]$ in either case. So, we have that $a_0=a-cs$ and $d_0=d+cs$ are two polynomials with non-negative valuations, whence $a_0, d_0 \in \mathbb{F}$. Moreover, we have that $(a_0-d_0)s+cs^{2}+b=(a-d)s-cs^{2}+b \in \pi^{n}\mathcal{O}$. Since $\nu(s)=1$, we conclude that the polynomial $(a_0-d_0)s^{-1}+c+bs^{-2}$ belongs $\pi^{n-2}\mathcal{O}$, whence it is zero. So, we can write $(a_0-d_0)+cs+bs^{-1}=0$. Thus, since $\mathbb{F}=\mathbb{F}_2$, we have $a_0=d_0=1$, whence we conclude $c=bs^{-2} \in t(t-1) \mathbb{F}[t]$. We deduce that: $$ g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x \right) g_{s}= \left\lbrace \sbmattrix {1}{t(t-1)s^{-1}f}{0}{1} : f \in \mathbb{F}[t]_{n-3} \right \rbrace. 
$$ In particular, every element of $g_{s}^{-1} \mathrm{Stab}_{\mathrm{H}_D} \left( x_n \right) g_{s}$ is unipotent, and this group covers $\Delta_{n}/\Delta_{n-1}$, since $t(t-1)s^{-1} \mathbb{F}[t]_{n-3} $ covers $\pi^{-n}\mathcal{O}/\pi^{1-n}\mathcal{O} \cong \mathbb{F}(P_{\infty})=\mathbb{F}$. It now follows from Theorem \ref{Explicit teo cusp} that $\mathfrak{s}$ is contained in a fundamental region for the action of $\mathrm{H}_D$ on $\mathfrak{t}(K)$. Moreover, the previous analysis shows that $\mathfrak{s}$ contains a fundamental region, since the valencies of $B_0^{|1|}$ and $B_{1/t}^{|2|}$ are both exactly three, as $\mathbb{F}=\mathbb{F}_2$. We conclude that $\mathfrak{s}$ is a fundamental region. \begin{rem} In Example \ref{ex 2 r}, we can check that when $\mathbb{F} \neq \mathbb{F}_2$, the group \[\left\lbrace \sbmattrix {1}{t(t-1)s^{-1}f}{0}{1} : f \in \mathbb{F}[t]_{n-3} \right \rbrace\] is also contained in $g_s^{-1} \mathrm{Stab}_{\mathrm{H}_D}(x) g_s$. In particular, $x$ has valency two again. Thus, the only part where the property $\mathbb{F}=\mathbb{F}_2$ is actually used is in the final paragraph, where the equality allows us to prove that there are no edges coming out from $B_0^{|1|}$ or $B_{1/t}^{|2|}$ that are not contained in $\mathfrak{s}$. \end{rem} We can refine the above method to explicitly compute more complex quotient graphs up to a certain degree. To do so, the key step is to properly use the Riemann-Roch equality \eqref{eq Riemann-Roch}. There exist several published computations employing this technique for $D=0$. See \cite{A1} and \cite{M} for more details. \end{ex} \begin{rem} Note that Examples \ref{ex1 rf} and \ref{ex 2 r} show simply connected quotient graphs. In \cite[\S 6]{M2}, Mason and Schweizer proposed the question: $$\textit{When is the quotient graph } \mathrm{GL}_2(R) \backslash \mathfrak{t}(K) \textit{ a tree?}$$ They indicated that the theory of Drinfeld modular curves provides a complete answer when $\mathbb{F}$ is finite (cf.~\cite{M3}). An interesting question is whether the same theory can be used to extend these results to the Hecke congruence subgroups $\mathrm{H}_D$. This could eventually be studied in order to obtain results complementary to Theorems \ref{teo cusp}, \ref{teo grup} and \ref{Explicit teo cusp}. \end{rem}
2205.07322v1
http://arxiv.org/abs/2205.07322v1
Hook length and symplectic content in partitions
\documentclass{amsart} \usepackage{amssymb,amsmath,color} \usepackage{amsthm} \usepackage{array} \usepackage{multirow} \usepackage{url} \usepackage{tikz} \usepackage[enableskew]{youngtab} \usepackage{ytableau,varwidth } \allowdisplaybreaks \def\({\left(} \def\){\right)} \def\R{\bold R} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}{Proposition} \newtheorem{conjecture}{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{remark}{Remark} \begin{document} \title{Hook length and symplectic content in partitions} \author{T. Amdeberhan} \address{Department of Mathematics\\ Tulane University\\ New Orleans, LA 70118, USA} \email{[email protected]} \author{G. E. Andrews} \address{Department of Mathematics\\ Penn State University\\ University Park, PA 16802, USA} \email{[email protected]} \author{C. Ballantine} \address{Department of Mathematics and Computer Science\\ College of the Holy Cross \\ Worcester, MA 01610, USA \\} \email{[email protected]} \begin{abstract} The dimension of an irreducible representation of $GL(n,\mathbb{C})$, $Sp(2n)$, or $SO(n)$ is given by the respective hook-length and content formulas for the corresponding partition. The first author, inspired by the Nekrasov-Okounkov formula, conjectured combinatorial interpretations of analogous expressions involving hook-lengths and symplectic/orthogonal contents. We prove special cases of these conjectures. In the process, we show that partitions of $n$ with all symplectic contents non-zero are equinumerous with partitions of $n$ into distinct even parts. We also present Beck-type companions to this identity. In this context, we give the parity of the number of partitions into distinct parts with odd (respectively, even) rank. We study the connection between the sum of hook-lengths and the sum of inversions in the binary representation of a partition. In addition, we introduce a new partition statistic, the $x$-ray list of a partition, and explore its connection with distinct partitions as well as partitions maximally contained in a given staircase partition. \end{abstract} \maketitle \section{introduction}\label{intro} \noindent The interplay between the hook lengths of a partition $\lambda$ and the contents of its cells appears in Stanley's formula for the dimension of the irreducible polynomial representation of the general linear group, $GL(n, \mathbb C)$, indexed by $\lambda$ \cite[(7.106)]{ec2}. There are also analogous formulas for irreducible polynomial representations of symplectic and orthogonal groups \cite{CS, EK, S}. \smallskip \noindent In this article, we prove a variety of results on the combinatorial nature of the expressions involving the symplectic and orthogonal hook-content formulas and certain generalizations thereof. These objects have been conjectured by the first author \cite{A12}. In order to state the conjectures and our results, we begin by introducing the relevant notations. \smallskip \noindent A {\it partition} $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$ of $n\in\mathbb{N}$, denoted $\lambda\vdash n$, is a finite non-increasing sequence of positive integers, called \textit{parts}, that add up to $n$. The {\it size} of $\lambda$, denoted by $\vert\lambda\vert$, is the sum of all its parts. The {\it length} of $\lambda$, denoted by $\ell(\lambda)$, is the number of parts of $\lambda$. 
As usual, $p(n)$ denotes the number of partitions of $n$. We denote by $\mathcal P(n \mid X)$ the set of partitions of $n$ satisfying some condition $X$ and define $p(n\mid X):=|\mathcal P(n\mid X)|$. The {\it Young diagram} of a partition $\lambda$ is a left-justified array of squares such that the $i^{th}$-row of the array contains $\lambda_i$ squares. \begin{example}\label{eg1} If $\lambda= (5, 3, 3, 2,1)$, then $\vert\lambda\vert=14, \ell(\lambda)=5$. The Young diagram of $\lambda$ is shown below. $$\tiny\ydiagram{5, 3, 3, 2,1}$$ \end{example} \noindent A cell $(i,j)$ of $\lambda$ is the square in the $i^{th}$-row and $j^{th}$-column in the Young diagram of $\lambda$. The conjugate of $\lambda$ is the partition $\lambda'$ whose Young diagram has rows that are precisely the columns of the Young diagram of $\lambda$. For example, $\lambda'=(6,5,4,2,2)$ is the conjugate of $\lambda= (5,5, 3, 3, 2,1)$. When necessary, we will use the exponential notation $\lambda=(a_1^{m_1}, a_2^{m_2}, \ldots, a_k^{m_k})$ to mean that $a_i$ appears $m_i$ times as a part of $\lambda$, for $i=1, 2, \ldots, k$. This will mainly be used for hooks; that is, partitions of the form $(a, 1^b)$. The {\it hook length} of a cell $u=(i,j)$ of $\lambda$ is $h^\lambda(u)=\lambda_i+\lambda_j'-i-j+1$ and its {\it content} is $c^\lambda(u)=j-i$. Then, the dimension of the irreducible representation of $GL(n, \mathbb C)$ corresponding to $\lambda$ with $\ell(\lambda)\leq n$ is given by \cite[(7.106)]{ec2} $$\dim_{GL}(\lambda,n)=\prod_{u\in \lambda}\frac{n+c^\lambda(u)}{h^\lambda(u)}.$$ \smallskip \noindent The irreducible representations of the symplectic group $Sp(2n)$, consisting of $2n\times 2n$ matrices which preserve any non-degenerate, skew-symmetric form on $\mathbb C^{2n}$, are also indexed by partitions $\lambda$ with $\ell(\lambda)\leq n$. The {\it symplectic content} of cell $(i,j)\in\lambda$ is $$c^\lambda _{sp}(i,j)=\begin{cases}\lambda_i+\lambda_j-i-j+2 & \text{ if } i>j\\ i+j-\lambda'_i-\lambda'_j & \text{ if } i\leq j. \end{cases}$$ The irreducible representations of the special orthogonal group $SO(n)$, consisting of orthogonal matrices of determinant $1$, are indexed by partitions $\lambda$ with $\ell(\lambda)\leq \lfloor\frac{n}{2}\rfloor$. The {\it orthogonal content} of cell $(i,j)\in\lambda$ is defined by $$c_{O}^{\lambda}(i,j)=\begin{cases} \lambda_i+\lambda_j-i-j \qquad \text{if $i\geq j$} \\ i+j-\lambda_i'-\lambda_j'-2 \,\,\, \text{if $i<j$}.\end{cases}$$ The symplectic and orthogonal counterparts to Stanley's hook-content formula are, respectively, \cite{EK, S} $$\dim_{Sp}(\lambda,2n)=\prod_{u\in\lambda}\frac{2n+c_{sp}^{\lambda}(u)}{h^{\lambda}(u)} \qquad \text{and} \qquad \dim_{SO}(\lambda,n)=\prod_{u\in\lambda}\frac{n+c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \smallskip \noindent We are now ready to introduce the conjectures listed in \cite{A12}. 
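\smallskip \noindent Before stating the conjectures, we record a small computational illustration (informal, and not used in what follows) of the hook, content, and dimension formulas above. The helper functions below are ad hoc; the assertions check the products against the well-known dimensions of the irreducible representation of $GL(3,\mathbb C)$ labelled by $(2,1)$, the representation of $Sp(6)$ labelled by $(1,1)$, and the defining representation of $SO(5)$.
\begin{verbatim}
# Hook lengths, contents, symplectic/orthogonal contents, and the three
# dimension products, evaluated with exact rational arithmetic.
from fractions import Fraction

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1))

def cells(lam):
    return [(i, j) for i, p in enumerate(lam, 1) for j in range(1, p + 1)]

def hook(lam, i, j):
    return lam[i - 1] + conjugate(lam)[j - 1] - i - j + 1

def c_sp(lam, i, j):
    conj = conjugate(lam)
    if i > j:
        return lam[i - 1] + lam[j - 1] - i - j + 2
    return i + j - conj[i - 1] - conj[j - 1]

def c_O(lam, i, j):
    conj = conjugate(lam)
    if i >= j:
        return lam[i - 1] + lam[j - 1] - i - j
    return i + j - conj[i - 1] - conj[j - 1] - 2

def dim_GL(lam, n):
    d = Fraction(1)
    for i, j in cells(lam):
        d *= Fraction(n + (j - i), hook(lam, i, j))
    return d

def dim_Sp(lam, two_n):
    d = Fraction(1)
    for i, j in cells(lam):
        d *= Fraction(two_n + c_sp(lam, i, j), hook(lam, i, j))
    return d

def dim_SO(lam, n):
    d = Fraction(1)
    for i, j in cells(lam):
        d *= Fraction(n + c_O(lam, i, j), hook(lam, i, j))
    return d

assert dim_GL((2, 1), 3) == 8    # 8-dimensional irreducible of GL(3,C)
assert dim_Sp((1, 1), 6) == 14   # 14-dimensional fundamental representation of Sp(6)
assert dim_SO((1,), 5) == 5      # defining 5-dimensional representation of SO(5)
\end{verbatim}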
The author was inspired by the remarkable hook-length identity of Nekrasov and Okounkov \cite[(6.12)]{NO06} (later given a more elementary proof by Han \cite{H10}) $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(h^{\lambda}(u))^2}{(h^{\lambda}(u))^2}=\prod_{j\geq1}\frac1{(1-q^j)^{t+1}},$$ and Stanley's analogous hook-content identity \cite[Theorem 2.2]{RPS10} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c^{\lambda}(u))^2}{(h^{\lambda}(u))^2}=\frac1{(1-q)^t}.$$ \smallskip \noindent \begin{conjecture} [\cite{A12}, Conjecture 6.2(a)] \label{conj6.2a} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac{(1-q^{8j})^{\binom{t+1}2}}{(1-q^{8j-2})^{\binom{t+1}2-1}} \left(\frac{1-q^{4j-1}}{1-q^{4j-3}}\right)^t \left(\frac{1-q^{8j-4}}{1-q^{8j-6}}\right)^{\binom{t}2-1}.$$ \end{conjecture} \begin{conjecture} [\cite{A12}, Conjecture 6.2(b)] \label{conj6.2b} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+c_{O}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac{(1-q^{8j})^{\binom{t}2}}{(1-q^{8j-6})^{\binom{t}2-1}} \left(\frac{1-q^{4j-1}}{1-q^{4j-3}}\right)^t \left(\frac{1-q^{8j-4}}{1-q^{8j-2}}\right)^{\binom{t+1}2-1}.$$ \end{conjecture} \smallskip \noindent In this article, we prove the case $t=0$ of Conjectures \ref{conj6.2a} and \ref{conj6.2b}. \smallskip \noindent The next conjecture from \cite{A12} is a symplectic, respectively orthogonal, counterpart to the hook-content identity of Stanley, which is also in the spirit of Nekrasov-Okounkov's hook formula. \begin{conjecture} [\cite{A12}, Conjecture 6.3(a)] \label{conj6.3a} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}= \prod_{j\geq1}\frac1{(1-q^{4j-2})(1-q^j)^t}= \sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c_{O}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$ \end{conjecture} \smallskip \noindent We prove the cases $t=0$ and $t=-1$ of Conjecture \ref{conj6.3a}. \smallskip \noindent The special cases of the above conjectures will be proved in section \ref{no-thms}. We denote by $sy\mathcal P_0$ the set of partitions $\lambda$ such that $c^{\lambda}_{sp}(u)\neq 0$ for all cells $u\in \lambda$. To prove the case $t=0$ of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}, we describe the partitions in $sy\mathcal P_0$ explicitly. The {\it nested hooks} of a partition $\lambda$ are hook partitions whose Young diagrams consist of a cell $(i,i)$ on the main diagonal of $\lambda$ together with all cells directly to the right and directly below the cell $(i,i)$. Thus, the $i^{th}$-hook of $\lambda$ is the partition $(\lambda_i-i+1, 1^{\lambda'_i-i})$. In section \ref{no-thms} we prove the following characterization. \begin{theorem} \label{non-zero cont} $\lambda\in sy\mathcal P_0$ if and only if $\lambda=\emptyset$ or all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$. \end{theorem} \noindent Using the transformation of {\it straightening} nested hooks, i.e., transforming each nested hook $(a, 1^b)$ into a part equal to $a+b$, one can easily see that partitions of $n$ in $sy\mathcal P_0$ are in bijection with partitions of $n$ into distinct even parts. Therefore \begin{equation}\label{dist-parts} |sy\mathcal P_0(n)|=p(n \, \big| \text{ distinct even parts}). 
\end{equation} \smallskip \noindent In section \ref{beck-section}, we prove two Beck-type companion identities to \eqref{dist-parts}. These give combinatorial interpretations for the excess of the number of parts in all partitions in $sy\mathcal P_0(n)$ over the number of parts in all partitions in $\mathcal P(n \, \big| \text{ distinct even parts})$. The combinatorial description in our first Beck-type identity involves partitions with even rank. In section \ref{odd-even rank}, we examine the parity of $p(n\mid \text{distinct parts, odd rank})$ and $p(n\mid \text{distinct parts, even rank})$. In section \ref{bi-hooks}, we investigate the connection between the sum of all hook-lengths of a partition $\lambda$ and the sum of all inversions in the binary representation of $\lambda$. Finally, in section \ref{two stats}, we study the $x$-ray list of a partition, an analogue to the $x$-ray of a permutation \cite{BMPS}, and we determine links with $p(n \mid \text{distinct parts})$ and also with the number of partitions maximally contained in a given staircase partition. \section{Proofs of Theorem \ref{non-zero cont} and special cases of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}}\label{no-thms} \noindent In this section, we first prove Theorem \ref{non-zero cont} and derive two corollaries which lead to the proofs for the case $t=0$ of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}. We then conclude by proving the case $t=-1$ of Conjecture \ref{conj6.3a}. \smallskip \noindent We start with basic observations about the symplectic and orthogonal contents of cells in partitions. \begin{remark} \label{remark sp-o} The definitions imply that, for any partition $\lambda$ and any cell $(i,j)\in \lambda$, we have $$c^\lambda_{sp}(i,j)=-c^{\lambda'}_O(j,i).$$ \smallskip \noindent Moreover, since $h^\lambda(i,j)=h^{\lambda'}(j,i)$, conjugation gives $$\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}=(-1)^{n}\prod_{u\in\lambda'}\frac{c_{O}^{\lambda'}(u)}{h^{\lambda'}(u)} \qquad \text{ for } \lambda\vdash n.$$ Both sums below vanish when $n$ is odd (by Theorem \ref{non-zero cont}, a partition all of whose symplectic contents are non-zero has even size, and the identity above transfers this statement to orthogonal contents), so for any positive integer $n$ we have $$\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= \sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{remark} \smallskip \noindent Given a partition $\lambda$, by the {\it outer hook} of $\lambda$ we mean the hook determined by the cell $(1,1)$, i.e., the partition $(\lambda_1, 1^{\lambda'_1-1})$. \begin{remark} \label{R1} Let $\mu$ be the partition obtained from $\lambda$ by removing the outer hook. Then, it is immediate from the definitions that $$c^\mu _{sp}(i,j)=c^\lambda _{sp}(i+1,j+1) \qquad \text{and} \qquad c^\mu _{O}(i,j)=c^\lambda _{O}(i+1,j+1).$$ Thus, the symplectic and orthogonal contents of a cell are preserved by the operation of adding or removing an outer hook. \end{remark} \smallskip \noindent The \textit{rank}, $r(\lambda)$, of a partition $\lambda$ is defined as the difference between the largest part and the length of the partition, i.e., $r(\lambda)=\lambda_1-\lambda'_1$. We denote by $r_j(\lambda)$ the rank of the partition obtained from $\lambda$ by removing the first $j-1$ outer hooks, successively. Thus, $r_j(\lambda)=\lambda_j-\lambda'_j$. \begin{remark}\label{remark c-h} For any partition $\lambda$ and any cell $(i,j)\in \lambda$, we have $$c^\lambda_{sp}(i,j)=\begin{cases} r_j(\lambda)+1+h^\lambda(i,j) & \text{ if } i>j\\ r_i(\lambda)+1-h^\lambda(i,j) & \text{ if } i\leq j.\end{cases} $$ \end{remark}
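\smallskip \noindent The identities in Remarks \ref{remark sp-o} and \ref{remark c-h} are easy to confirm experimentally. The following short Python sketch (an informal illustration only; the helper functions are ad hoc and not used elsewhere) checks them, together with the invariance of hook lengths under conjugation, for every cell of every partition of $n\leq 8$.
\begin{verbatim}
# Brute-force check of c_sp^lam(i,j) = -c_O^{lam'}(j,i),
# h^lam(i,j) = h^{lam'}(j,i), and the rank/hook expression for c_sp.

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1))

def hook(lam, conj, i, j):
    return lam[i - 1] + conj[j - 1] - i - j + 1

def c_sp(lam, conj, i, j):
    if i > j:
        return lam[i - 1] + lam[j - 1] - i - j + 2
    return i + j - conj[i - 1] - conj[j - 1]

def c_O(lam, conj, i, j):
    if i >= j:
        return lam[i - 1] + lam[j - 1] - i - j
    return i + j - conj[i - 1] - conj[j - 1] - 2

for n in range(1, 9):
    for lam in partitions(n):
        conj = conjugate(lam)
        for i, part in enumerate(lam, 1):
            for j in range(1, part + 1):
                assert c_sp(lam, conj, i, j) == -c_O(conj, lam, j, i)
                assert hook(lam, conj, i, j) == hook(conj, lam, j, i)
                r = lam[j - 1] - conj[j - 1] if i > j else lam[i - 1] - conj[i - 1]
                h = hook(lam, conj, i, j)
                assert c_sp(lam, conj, i, j) == (r + 1 + h if i > j else r + 1 - h)
print("all identities verified for n <= 8")
\end{verbatim}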
\smallskip \noindent Next, we determine explicitly the symplectic contents of cells in hook partitions $\lambda=(a, 1^b)$ with $a\geq 1, b\geq 0$. In detail, we have \begin{align*}c^\lambda_{sp}(1,1)& =-2b, & \ \\ c^\lambda_{sp}(1,j)&=-b+j-1 & \text{ for } &2\leq j\leq a,\\ c^\lambda_{sp}(i,1)&=a-i+2 & \text{ for } &2\leq i\leq b+1. \end{align*} The contents $c^\lambda_{sp}(1,j)$, for $j\geq 2$, are seen as consecutive integers between $-b+1$ and $a-b-1$, increasing from left to right. The contents $c^\lambda_{sp}(i,1)$, for $i\geq 2$, are consecutive integers between $a-b+1$ and $a$, decreasing from top to bottom. These observations lead to the following characterization of hook partitions when all symplectic contents are unequal to a fixed integer $t$. \newpage \begin{proposition}\label{not t} Let $t$ be an integer. A hook partition $\lambda=(a, 1^b)$, with $a\geq 1, b\geq 0$, satisfies $c^\lambda_{sp}(i,j)\neq t$ for all cells $(i,j)\in \lambda$ if and only if all of the following conditions are satisfied: \begin{itemize} \item[(i)] $b\neq -t/2$; \item[(ii)] if $a\geq 2$, then $b<1-t$ or $a-b-1<t$; \item[(iii)] if $b\geq 1$, then $t<a-b+1$ or $a<t$. \end{itemize} \end{proposition} \noindent The cases needed for our proofs are given in the next two corollaries. Recall that $sy\mathcal P_0$ denotes the set of partitions having all symplectic contents non-zero. We also denote by $sy\mathcal P_{\pm1}$ the set of partitions with all symplectic contents not equal to $\pm 1$. \begin{corollary} \label{cor 0} For $a\geq 1, b\geq 0$, the hook partition $\lambda =(a, 1^b)$ is in $sy\mathcal P_0$ if and only if $a=b$. \end{corollary} \begin{proof} Choose $t=0$ in Proposition \ref{not t} and suppose $(a, 1^b)\in sy\mathcal P_0$. Proposition \ref{not t}(i) implies $b\geq 1$. If $a=1$, by Proposition \ref{not t}(iii), we gather $b<2$. Therefore $b=1=a$. If $a\geq 2$, then, by (ii) and (iii) of Proposition \ref{not t}, $a=b$. Conversely, if $a=b$, then the conditions of Proposition \ref{not t} are satisfied. \end{proof} \begin{corollary} \label{cor 1} The only hook partitions in $sy\mathcal P_{\pm 1}$ are $(1)$ and $(2,1)$. \end{corollary} \begin{proof} The two partitions $(1)$ and $(2,1)$ satisfy the conditions of Proposition \ref{not t} with $t=\pm 1$. Consider $(a, 1^b)$ with $a\geq 1, b\geq 0$ and $(a, 1^b)\not \in \{(1),(2,1)\}$. Condition (i) is satisfied. If $b=0$ and $a\geq 2$, condition (ii) is not satisfied for $t=1$. If $a=1$, then condition (iii) is not satisfied for $t=1$ and any $b\geq 1$. Suppose $a\geq 2$ and $b\geq 1$. If $(a, 1^b)$ were in $sy\mathcal P_{\pm 1}$, then for $t=1$ conditions (ii) and (iii) would force $a=b+1$, while for $t=-1$ condition (ii) would force $b=1$ or $a\leq b-1$; together these give $a=2$ and $b=1$, which we have excluded. Thus, the only hook partitions in $sy\mathcal P_{\pm 1}$ are $(1)$ and $(2,1)$. \end{proof} \noindent Before we prove Theorem \ref{non-zero cont} we introduce one more concept. The {\it Durfee square} of a partition $\lambda$ is the largest partition of the form $(m^m)$ whose Young diagram fits inside the Young diagram of $\lambda$. The length of the Durfee square of $\lambda$ equals the number of nested hooks of $\lambda$. \begin{proof}[Proof of Theorem \ref{non-zero cont}] Clearly $\emptyset\in sy\mathcal P_0$. If $\lambda \neq \emptyset$, we prove by induction on the length $m$ of the Durfee square that $\lambda \in sy\mathcal P_0$ if and only if all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$. \smallskip \noindent If $m=1$, the statement is true by Corollary \ref{cor 0}. \smallskip \noindent Next, assume that the statement of the theorem is true for every partition with Durfee square of length less than $k$ and let $\lambda$ be a partition with Durfee square of length $k$. Suppose $\lambda\in sy\mathcal P_0$.
By Remark \ref{R1}, we only need to show that the outer hook, $(\lambda_1, 1^{\lambda'_1-1})$, satisfies $\lambda_1=\lambda'_1-1$. \smallskip \noindent If $\lambda_1<\lambda'_1-1$, we must have $\lambda_{\lambda_1+2}=1$. Otherwise, if $\mu$ is the partition obtained from $\lambda$ by removing the outer hook, we have $\mu'_1\geq \lambda_1+1\geq\mu_1+2$ which is impossible since $\mu \in sy\mathcal P_0$ and hence $\mu_1=\mu'_1-1$. Consequently, we see that $c^\lambda_{sp}(\lambda_1+2, 1)=\lambda_{\lambda_1+2}+\lambda_1-(\lambda_1+2)-1+2= 0$. Similarly, if $\lambda_1>\lambda'_1-1$, we have $\lambda'_{\lambda'_1}=1$ and thus $c_{sp}(1, \lambda'_1)=1+\lambda'_1-\lambda'_1-\lambda'_{\lambda'_1}=0$. Therefore $\lambda_1=\lambda'_1-1$. \smallskip \noindent Finally, if every nested hook $(a, 1^b)$ of $\lambda$ satisfies $a=b$, by induction, $c^\lambda_{sp}(i,j)\neq 0$ for all $i,j\geq 2$. Moreover, since $\lambda_1=\lambda'_1-1$, for $1\leq j\leq \lambda_1$ we have $$ c^{\lambda}_{sp}(1,j)=1+j-\lambda'_1-\lambda'_j\leq 1+\lambda_1-\lambda'_1-\lambda'_j=-\lambda'_j<0,$$ and for $2\leq i\leq \lambda'_1$ we have $ c^{\lambda}_{sp}(i,1)=\lambda_i+\lambda_1-i-1+2\geq\lambda_i+\lambda_1-\lambda'_1+1 \geq \lambda_i>0$. Thus, $\lambda \in sy\mathcal P_0$. \end{proof} \begin{corollary}\label{Euler} For any positive integer $n$ we have $$|sy\mathcal P_0(n)|=p(n \, \big| \text{ parts} \equiv 2\bmod 4).$$\end{corollary} \begin{proof} Apply \eqref{dist-parts} and Euler's identity \cite[(1.2.5)]{A98} with $q$ replaced by $q^2$. \end{proof} \smallskip \noindent From Theorem \ref{non-zero cont} and Remark \ref{remark c-h} we have the following. \begin{corollary} \label{cc=h} For $\lambda\neq \emptyset$, we have $\lambda \in sy\mathcal P_0$ if and only if $$c^\lambda_{sp}(i,j)=\begin{cases} h^\lambda(i,j) & \text{ if } i>j \\ -h^\lambda(i,j) & \text{ if } i\leq j,\end{cases} $$ for all $(i,j)\in \lambda$. \end{corollary} \begin{proof} The assertion holds due to the fact that all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$ if and only if $r_j(\lambda)=-1$ for all $j$ no larger than the length of the Durfee square of $\lambda$. \end{proof} \begin{remark} From Corollary \ref{cc=h}, if $\lambda \in sy\mathcal P_0$, we have $c^\lambda_{sp}(i,j)>0$ if $i> j$ and $c^\lambda_{sp}(i,j)<0$ if $i\leq j$. \end{remark} \noindent From Corollaries \ref{Euler} and \ref{cc=h} and Remark \ref{remark sp-o} we obtain the proof for the case $t=0$ of Conjecture \ref{conj6.3a}. \begin{theorem} [\cite{A12}, Conjecture 6.3(b)] \label{conj6.3b} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}= \prod_{j\geq1}\frac1{1-q^{4j-2}}=\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{O}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$ \end{theorem} \smallskip \noindent Next, we prove care $t=0$ of Conjectures \ref{conj6.2a} and \ref{conj6.2b}. \begin{theorem} [\cite{A12}, Conjecture 6.2(c)] \label{conj6.2c} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac1{1+q^{4j-2}}=\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{theorem} \begin{proof} From Theorem \ref{non-zero cont}, if $n$ is odd, $sy\mathcal P_0(n)=\emptyset$. Assume $n$ is even. If $\lambda \in sy\mathcal P_0$, by Corollary \ref{cc=h}, for each $u\in \lambda$, we have $\displaystyle \frac{c^\lambda_{sp}(u)}{h^\lambda(u)}=\pm 1$. 
Since all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$, the number of cells $u\in \lambda$ with $\displaystyle \frac{c^\lambda_{sp}(u)}{h^\lambda(u)}=1$ equals $\displaystyle \frac{n}{2}$. Thus, if $n>0$ is even and $\lambda \in sy\mathcal P_0(n)$, $$\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}=\begin{cases} 1 & \text{ if } n\equiv 0\bmod 4\\ -1 & \text{ if } n\equiv 2\bmod 4,\end{cases}$$ while the product vanishes for every $\lambda\vdash n$ not in $sy\mathcal P_0(n)$. On the other hand, $\displaystyle \prod_{j\geq 1}\frac{1}{1+q^{4j-2}}$ is the generating function for the number of partitions $\lambda\vdash n$ with parts congruent to $2\bmod 4$, each partition counted with weight $(-1)^{\ell(\lambda)}$. If $\lambda$ is such a partition, then $$(-1)^{\ell(\lambda)}=\begin{cases} 1 & \text{ if } n\equiv 0\bmod 4\\ -1 & \text{ if } n\equiv 2\bmod 4.\end{cases}$$ Since, by Corollary \ref{Euler}, $|sy\mathcal P_0(n)|=p(n \, \big| \text{ parts} \equiv 2\bmod 4)$, the coefficients of $q^n$ on the two sides agree. This proves the left-hand identity of Theorem \ref{conj6.2c}. The right-hand identity follows from Remark \ref{remark sp-o}. \end{proof} \smallskip \noindent Theorems \ref{conj6.2c} and \ref{conj6.3b} lead to the following identity, as conjectured in \cite{A12}. \begin{corollary} [\cite{A12}, Conjecture 6.3(c)] \label{conj6.3c} For any positive integer $n$ we have $$(-1)^{\binom{n}2}\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2} =\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{corollary}
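\smallskip \noindent As a quick numerical sanity check (an informal illustration, not part of the proofs), the following Python sketch verifies the identity of Corollary \ref{conj6.3c} for all $n\leq 10$, using exact rational arithmetic; the helper functions are ad hoc and simply implement the definitions recalled in the introduction.
\begin{verbatim}
# Check of the corollary above for n <= 10 with exact rational arithmetic.
from fractions import Fraction
from math import comb

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1))

def cells(lam):
    return [(i, j) for i, p in enumerate(lam, 1) for j in range(1, p + 1)]

def hook(lam, conj, i, j):
    return lam[i - 1] + conj[j - 1] - i - j + 1

def c_sp(lam, conj, i, j):
    if i > j:
        return lam[i - 1] + lam[j - 1] - i - j + 2
    return i + j - conj[i - 1] - conj[j - 1]

def product(lam, power):
    conj = conjugate(lam)
    result = Fraction(1)
    for i, j in cells(lam):
        result *= Fraction(c_sp(lam, conj, i, j), hook(lam, conj, i, j)) ** power
    return result

for n in range(1, 11):
    lhs = (-1) ** comb(n, 2) * sum(product(lam, 2) for lam in partitions(n))
    rhs = sum(product(lam, 1) for lam in partitions(n))
    assert lhs == rhs, (n, lhs, rhs)
print("corollary verified for n <= 10")
\end{verbatim}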
\noindent Before proving the case $t=-1$ of Conjecture \ref{conj6.3a}, we characterize the partitions in $sy\mathcal P_{\pm 1}$. \smallskip\noindent Denote by $\delta_r$ the {\it staircase partition} with $r$ consecutive parts $\delta_r=(r, r-1, \ldots, 2,1)$. Notice that the length of the Durfee square of $\delta_r$ is $\lceil \frac{r}{2}\rceil$. \begin{theorem} \label{non-pm1 cont} $\lambda\in sy\mathcal P_{\pm 1}$ if and only if $\lambda=\emptyset$ or $\lambda=\delta_r$ for some $r$. \end{theorem} \begin{proof} Clearly $\emptyset\in sy\mathcal P_{\pm1}$. If $\lambda \neq \emptyset$, we prove the statement by induction on the length $m$ of the Durfee square of the partition. \smallskip \noindent If $m=1$, the statement is true by Corollary \ref{cor 1}. \smallskip \noindent For the inductive step, assume that if $\mu$ is a partition with Durfee square of length at most $m$, then $\mu\in sy\mathcal P_{\pm1}$ if and only if $\mu$ is a staircase partition. Let $\lambda\in sy\mathcal P_{\pm 1}$ be a partition with Durfee square of length $m+1$. Then, by the induction hypothesis, the partition $\lambda^-$ obtained from $\lambda$ by removing the outer hook is a staircase partition with Durfee square of length $m$, i.e., $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$. Conversely, it follows from Remark \ref{R1} that if $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$, then $c^\lambda_{sp}(i,j)\neq \pm 1$ for all cells $(i,j)\in \lambda$ with $i,j\geq 2$. Assume $\lambda$ is such that $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$. We have $$c^\lambda_{sp}(1,j)=\begin{cases}2-2\lambda'_1 & \text{ if } j=1\\ -\lambda'_1-k+2j-2 & \text{ if } 2\leq j\leq k+1\\ j-\lambda'_1 & \text{ if } j\geq k+2 \text{ \ (if any) } \end{cases} $$ and $$c^\lambda_{sp}(j,1)=\begin{cases}2-2\lambda'_1 & \text{ if } j=1\\ \lambda_1+k-2j+4 & \text{ if } 2\leq j\leq k+1\\ \lambda_1-j+2 & \text{ if } j\geq k+2 \text{ \ (if any) } \end{cases}$$ By construction, $\lambda_1, \lambda'_1\geq k+1$. \smallskip \noindent If $\lambda_1=\lambda'_1=k+1$, then $c^\lambda_{sp}(1,k+1)=-1$. \smallskip \noindent If $\lambda_1=k+1, \lambda'_1\geq k+2$, then $c^\lambda_{sp}(k+2,1)=1$. \smallskip \noindent If $\lambda_1\geq k+2$ and $\lambda'_1= k+1$, then $c^\lambda_{sp}(1,k+2)=1$. \smallskip \noindent If $\lambda'_1>\lambda_1\geq k+2$, then $c^\lambda_{sp}(\lambda_1+1,1)=1$. \smallskip \noindent If $\lambda_1>\lambda'_1= k+2$, then $c^\lambda_{sp}(1,k+3)=1$. \smallskip \noindent If $\lambda_1\geq\lambda'_1\geq k+3$, then $c^\lambda_{sp}(1,\lambda'_1-1)=-1$. \smallskip \noindent Finally, if $\lambda_1=\lambda'_1=k+2$, then $\lambda=\delta_{k+2}$ and $c^\lambda_{sp}(i,j)$ is even, hence different from $\pm 1$, for every cell $(i,j)$ in the first row or column; this completes the induction. \end{proof} \noindent From Theorem \ref{non-pm1 cont} and Remark \ref{remark c-h} we have the following. \begin{corollary} \label{c=h} For $\lambda\neq \emptyset$, we have $\lambda \in sy\mathcal P_{\pm1}$ if and only if $$c^\lambda_{sp}(i,j)=\begin{cases} h^\lambda(i,j)+1 & \text{ if } i>j \\ -h^\lambda(i,j)+1 & \text{ if } i\leq j,\end{cases} $$ for all $(i,j)\in \lambda$. \end{corollary}\begin{proof} The statement follows from the fact that $\lambda$ is a staircase partition if and only if $r_j(\lambda)=0$ for all $j$ no larger than the length of the Durfee square of $\lambda$. \end{proof} \begin{theorem}\label{t=-1} We have the generating function $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2}=\prod_{j\geq1}\frac{1-q^j}{1-q^{4j-2}}=\sum_{n\geq0}(-q)^{\binom{n+1}2}.$$ \end{theorem} \begin{proof} \smallskip \noindent Suppose $\lambda=\delta_k\vdash n$. For each $1\leq j\leq k$, we refer to the cells $(a,c)\in \delta_k$ with $a+c=j+1$ as the $j^{th}$ anti-diagonal of $\lambda$. Then, in the $j^{th}$ anti-diagonal we have $j$ cells with hook-length $2(k-j)+1$. Of these, $\lceil \frac{j}{2}\rceil$ cells have symplectic content $-2(k-j)$ and $\lfloor \frac{j}{2}\rfloor$ cells have symplectic content $2(k-j+1)$. Therefore, since by Theorem \ref{non-pm1 cont} only staircase partitions contribute a non-zero product, if $n$ is not a triangular number, then $\sum_{\lambda\vdash n} \prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2}=0$, and if $n$ is the triangular number $\binom{k+1}{2}$, then \begin{align*} \sum_{\lambda\vdash n}&\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2} \\ = &\prod_{j=1}^k\frac{(4(k-j)^2-1)^{\lceil\frac{j}{2}\rceil}(4(k-j+1)^2-1)^{\lfloor\frac{j}{2}\rfloor}}{(2(k-j)+1)^{2j}}\\ =& \prod_{j=1}^k\frac{(2(k-j)-1)^{\lceil\frac{j}{2}\rceil}(2(k-j+1)+1)^{\lfloor\frac{j}{2}\rfloor}}{(2(k-j)+1)^{j}}\\ =& \frac{ (-1)^{\lceil\frac{k}{2}\rceil}\prod_{j=2}^k(2(k-j+1)-1)^{\lceil\frac{j-1}{2}\rceil} \cdot(2k-1) \prod_{j=2}^{k-1}(2(k-j)+1)^{\lfloor\frac{j+1}{2}\rfloor}}{\prod_{j=1}^k(2(k-j)+1)^{j}}. \end{align*} Since $\lceil\frac{j-1}{2}\rceil+\lfloor \frac{j+1}{2}\rfloor=j$ for all $2\leq j\leq k-1$, we obtain $$\sum_{\lambda\vdash n} \prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2} =(-1)^{\lceil \frac{k}{2}\rceil}=(-1)^{\binom{k+1}{2}}=(-1)^n.$$ Lastly, $$ \prod_{j\geq1}\frac{1-q^j}{1-q^{4j-2}}=\sum_{n\geq0}(-q)^{\binom{n+1}2}$$ follows from Gauss' identity \cite[(2.2.13)]{A98}, with $q$ replaced by $-q$, and Euler's identity \cite[(1.2.5)]{A98}, with $q$ replaced by $q^2$. \end{proof} \begin{remark} Let $\beta(n)$ be the number of {\it overcubic partitions} of an integer $n$, see \cite{K12} and references therein.
The generating function for $\beta(n)$ is $$\sum_{n\geq0}\beta(n)\,q^n=\prod_{j\geq1}\frac1{(1-q^{4j-2})(1-q^j)^2}=\prod_{j\geq1}\frac{1+q^{2j}}{(1-q^j)^2}.$$ Thus, the case $t=2$ of Conjecture \ref{conj6.3a}, which is still an open problem, becomes $$\beta(n)=\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{2+(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$\end{remark} \smallskip \noindent \section {Beck-Type identities}\label{beck-section} \smallskip \noindent In this section, we consider again the identity \eqref{dist-parts}, $$p(n\mid \mbox{distinct even parts})=\vert sy\mathcal P_0(n)\vert,$$ and establish two Beck-type companion identities for \eqref{dist-parts}. If $\mathcal P(n \mid X)$ denotes the set of partitions of $n$ satisfying condition $X$ and $p(n\mid X)=|\mathcal P(n\mid X)|$, a Beck-type companion identity to $p(n\big| X)= p(n\mid Y)$ is an identity that equates the excess of the number of parts of all partitions in $\mathcal P(n\mid X)$ over the number of parts of all partitions in $\mathcal P(n\mid Y)$ and (a multiple of) the number of partitions of $n$ satisfying a condition that is a slight relaxation of $X$ (or $Y$). \smallskip\noindent Recall that $sy\mathcal P_0(n)$ is the set of partitions of $n$ for which the symplectic content of all cells is non-zero. From the proof of Theorem \ref{non-zero cont}, these partitions are precisely those which are {\it almost self-conjugate}, i.e.. the nested hooks have $\text{leg $=$ arm $+1$}$. Moreover, if the Durfee square has side length $m$, then the $(m+1)^{st}$ part of the partition is $m$ and removing a box from each of the first $m$ columns of the Young diagram leaves a self-conjugate partition. \smallskip\noindent In \cite{AB19}, the authors give a Beck-type companion identity for $$p(n\mid \mbox{distinct odd parts})=p(n\mid \mbox{self-conjugate}),$$ and the work of this section has many similarities with \cite{AB19}. \smallskip \noindent Denote by $s_{c'}(n)$ the number of parts of all partitions in $sy\mathcal P_0(n)$ and by $s_e(n)$ the number of parts of all partitions of $n$ into distinct even parts. Before we introduce our first Beck-type identity, we need a definition. Recall that the rank of a partition $\lambda$ is the difference, $\lambda_1-\ell(\lambda)$, between the largest part in $\lambda$ and the length of $\lambda$. In \cite{BG}, the $M_2$-rank of a partition $\lambda$ is defined as the difference between the largest part and the number of parts in the $\text{mod } 2$ diagram of $\lambda$, that is, $$M_2(\lambda)=\left\lceil\frac{\lambda_1}{2}\right\rceil-\ell(\lambda).$$ \begin{theorem}\label{first beck} For all $n > 0$, we have $s_{c'}(n)-s_e(n)$ equals twice the number of partitions into even parts with exactly one even part repeated plus the number of partitions into distinct even parts with even $M_2$-rank. \end{theorem} \begin{proof} We use the notation $D f(z,q)$ to mean the derivative of $f(z,q)$ with respect to $z$ evaluated at $z=1$, i.e., $$Df(z,q):=\left.\left(\frac{\partial}{\partial z}f(z,q)\right)\right|_{z=1}.$$ Note that if $f(z,q)$ is a partition generating function wherein the exponent of $q$ keeps track of the number being partitioned and the exponent of $z$ is the number of parts, then $Df(z,q)$ is the generating function for the number of parts in the partitions considered. \noindent We denote the generating functions for $s_{c'}(n)$ by $S_{c'}(q)$. 
Thus, $$S_{c'}(q)=\sum_{n\geq 0} s_{c'}(n)q^n .$$ \noindent To obtain the generating function for the number of partitions in $sy\mathcal P_0$ wherein the exponent of $z$ keeping track of the number of parts, we use the bivariate generating function for self-conjugate partitions \cite{AB19} given by $$ \sum_{m\geq 0}\frac{z^{m}q^{m^2}}{(zq^2;q^2)_m}.$$ Given a partition in $sy\mathcal P_0$ with Durfee square of side length $m$, we remove one box from each of the first $m$ columns of the Young diagram to obtain a self-conjugate partition. Thus, the bivariate generating function for partitions in $sy\mathcal P_0$ wherein the exponent of $z$ keeping track of the number of parts is \begin{equation}\label{bivp_0}F(q;z):=\sum_{m\geq 1}\frac{z^{m+1}q^{m^2+m}}{(zq^2;q^2)_m}.\end{equation} \smallskip \noindent We begin by writing $F(q;z)$ as a limit. We mean $$F(q;z)=\lim_{\tau\to 0}z\sum_{m\geq 1}\frac{(-\frac{q^2}{\tau};q^2)_m(q^2;q^2)_m z^m\tau^m}{(q^2;q^2)_m(zq^2;q^2)_m}.$$ Next, we apply the transformation on the last line of pg. 38 of \cite{A98} in which we first replace $q$ by $q^2$ and then substitute $a= -\frac{q^2}{\tau}, b=q^2, c=zq^2, t=z\tau$. We follow this through the limit as $\tau \to 0$. We obtain \begin{equation}F(q;z)=-z+ z(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m. \end{equation} \noindent Applying the operator $D$, we obtain \begin{align*} S_{c'}(q)=&D F(q;z) \\ =&-1+ \lim_{z\rightarrow 1^{-}}(1-z)\sum_{m\geq0}(-q^2;q^2)_mz^m+D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m \\ =&-1+\sum_{m\geq0}\frac{q^{m^2+m}}{(q^2;q^2)_m}+D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m \\ =&-1+(-q^2;q^2)_{\infty}++D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m. \end{align*} \noindent To find $D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m$ we use \cite[Proposition 2.1 and Theorem 1]{AJO01}. When using Theorem 1 in \cite{AJO01}, we first replace $q$ by $q^2$ and then set $a=0$ and $t=-q^2$. Thus, \begin{align*} D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m &=\sum_{n\geq0}((-q^2;q^2)_{\infty}-(-q^2;q^2)_n) \\ &=\frac12\sum_{n\geq1}\frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_{\infty}\left(-\frac12+\sum_{m\geq1}\frac{q^{2m}}{1-q^{2m}}\right). \end{align*} Therefore, the generating function for the number of parts of all partitions in $sy\mathcal P_0(n)$ can be written as \newpage \begin{align*} S_{c'}(q)&=D F(q;z) \\ &= -1+\frac{1}{2} \sum_{n=1}^\infty \frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_\infty \left(\frac12+\sum_{m\geq 1}\frac{q^{2m}}{1-q^{2m}}\right). \end{align*} We notice that $$\sum_{n\geq1}\frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}=\sigma(q^2),$$ where $\sigma(q^2)$ is the expression (1.1) in \cite{ADH88}. There, it is shown that $\sigma(q)=\sum_{n=0}^\infty S(n)q^n$, where $S(n)$ counts the number of partitions of $n$ into distinct parts with even rank minus the number of partitions of $n$ into distinct parts with odd rank. Thus, $\sigma(q^2)=\sum_{n\geq0}\widetilde{S}(n)q^n$, where $\widetilde{S}(n)$ is the number of partitions of $n$ into distinct even parts with even $M_2$-rank minus the number of partitions of $n$ into distinct even parts with odd $M_2$-rank. Then, \begin{equation}\label{m2-rank id} \frac{1}{2} \left(\sum_{n=1}^\infty \frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_\infty\right)=p(n\mid \text{distinct even parts, even $M_2$-rank}). 
\end{equation} \smallskip \noindent Next, we observe that $(-q^2;q^2)_\infty\sum_{m\geq 1}\frac{q^{2m}}{1-q^{2m}}$ is the generating function for the cardinality of $$\mathcal E(n)= \{(\lambda, (2m)^k)\mid \lambda \text{ has distinct even parts, }m,k\geq 1,|\lambda|+2mk=n \}.$$ Clearly, the subset of $\mathcal E(n)$ consisting of pairs $(\lambda, (2m))$, where $2m$ is not a part of $\lambda$, is in one-to-one correspondence with the set of parts of all partitions of $n$ into distinct even parts. To see this, notice that if $\mu$ is a partition into distinct even parts, for each part $2t$ in $\mu$, we can create a pair $(\mu\setminus (2t), (2t))\in \mathcal E(n)$. Here and throughout, if $\eta$ is a partition whose parts (with multiplicity) are also parts of $\mu$, then $\mu\setminus \eta$ stands for the partition obtained from $\mu$ by removing all parts of $\eta$ (with multiplicity). \smallskip \noindent Now, consider the remaining pairs of partitions $(\lambda, (2m)^k)$ in $\mathcal E(n)$, i.e, pairs with $k\geq 2$ or pairs with $k=1$ and $2m$ is a part of $\lambda$. For each such pair $(\lambda, (2m)^k)$, we create the partition $\mu=\lambda \cup (2m)^k$. Then, $\mu$ is a partition into even parts with exactly one even part repeated. Each such partition is obtained twice: if $2t$ is the repeated part of $\mu$ and it appears with multiplicity $b\geq 2$, then $\mu$ is obtained from $(\mu\setminus (2t)^b, (2t)^b)$ and also from $(\mu\setminus (2t)^{b-1}, (2t)^{b-1})$. \smallskip \noindent This completes the proof of the theorem. \end{proof} \smallskip \noindent We proceed to derive a weighted Beck-type companion identity for \eqref{dist-parts}. \begin{theorem}\label{second beck} For $n > 0$, we have $s_{c'}(n)-2s_e(n)$ equals the number of partitions partitions into even parts with exactly one even part repeated and if the repeated part is not the smallest, the partition is counted with weight $2$. \end{theorem} \begin{proof} In addition to the notation introduced in the proof of Theorem \ref{first beck}, we denote by $S_e(q)$ the generating function for $s_e(n)$. Thus, $$S_e(q)=\sum_{n\geq 0} s_e(n)q^n.$$ The generating function for the number of partitions into distinct even parts, where the exponent of $z$ keeping track of the number of parts, is \begin{equation}\label{biv_even} E(q;z):=\sum_{m\geq 0}\frac{z^mq^{m^2+m}}{(q^2;q^2)_m}.\end{equation} To see this, given a partition $\lambda$ with distinct even parts, remove an even staircase, i.e., remove parts $2, 4, \ldots$ from each part of $\lambda$ starting with the smallest. We are left with a partition into even parts. In $\lambda$ there are as many parts as the height of the even staircase removed. \smallskip \noindent Hence, using \eqref{bivp_0} and \eqref{biv_even}, we have \begin{align} S_{c'}(q)-S_e(q) & = D\left(\sum_{m\geq 1}\frac{z^{m+1}q^{m^2+m}}{(zq^2;q^2)_m}-\sum_{m\geq 0}\frac{z^mq^{m^2+m}}{(q^2;q^2)_m} \right)\nonumber \\ & =\sum_{m\geq 1}\left(\frac{(m+1)q^{m^2+m}}{(q^2;q^2)_m}+ D \frac{q^{m^2+m}}{(zq^2;q^2)_m}-\frac{mq^{m^2+m}}{(q^2;q^2)_m}\right)\nonumber\\ & = \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} + D \sum_{m\geq 1}\frac{q^{m^2+m}}{(zq^2;q^2)_m}.\label{gf} \end{align} Now \begin{align*}\sum_{m\geq 0}\frac{q^{m^2+m}}{(zq^2;q^2)_m}& = \lim_{\tau\to 0}\sum_{m\geq 0}\frac{(-\frac{q^2}{\tau};q^2)_m(q^2;q^2)_m\tau^m}{(q^2;q^2)_m(zq^2;q^2)_m}\\ & = \lim_{\tau\to 0} \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty(\tau;q^2)_\infty}\sum_{m\geq 0}\frac{(z;q^2)_m(\tau;q^2)_mq^{2m}}{(q^2;q^2)_m(-q^2;q^2)_m}. 
\end{align*} The last equality was obtained from Heine's transformation \cite[p.19, Cor. 2.3]{A98} by first replacing $q$ with $q^2$ and then substituting $a= -\frac{q^2}{\tau}, b=q^2, c=zq^2, t=\tau$. \smallskip \noindent Therefore \begin{align} \sum_{m\geq 0}\frac{q^{m^2+m}}{(zq^2;q^2)_m} & = \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty} \label{S_c-S_o}\\ & \qquad + \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty}(1-z)\sum_{m\geq 1}\frac{(zq^2;q^2)_{m-1}q^{2m}}{(q^2;q^2)_m(-q^2;q^2)_m}. \nonumber \end{align} We apply $D$ to \eqref{S_c-S_o} and use \eqref{gf} to obtain \begin{align} S_{c'}(q)- S_e(q) \nonumber &= \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} +\sum_{m\geq 1}\frac{ (-q^2;q^2)_\infty q^{2m}}{1-q^{2m}} - \sum_{m\geq 1}\frac{(-q^2;q^2)_\infty q^{2m}}{(1-q^{2m})(-q^2; q^2)_m}\nonumber\\ & = \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} + \sum_{m\geq 1}\frac{(-q^2;q^2)_\infty q^{2m}}{1-q^{2m}} - \sum_{m\geq 1}\frac{q^{2m}(-q^{2m+1};q^2)_\infty}{1-q^{2m}}.\label{gfs} \end{align} The first expression in \eqref{gfs} is the generating function for partitions with distinct even parts. As in the proof of Theorem \ref{first beck}, the second expression is the generating function for $\mathcal E(n)$, i.e., pairs of partitions $(\lambda, (2m)^k)$, where $\lambda$ has distinct even parts, $m,k\geq 1$ and $|\lambda|+ 2mk=n$. Finally, the last expression generates partitions into even parts in which the smallest part may be repeated. These partitions correspond to pairs $(\lambda, (2m)^k)\in \mathcal E(n)$, such that the parts of $\lambda$ are greater than $2m$. \smallskip \noindent Then, the difference of the last two expressions is the generating function for $$\mathcal E'(n):=\{(\lambda, (2m)^k) \in \mathcal E(n) \mid \lambda \text{ has parts of size at most $2m$}\}.$$ As in the proof of Theorem \ref{first beck}, the subset of $\mathcal E'(n)$ consisting of pairs $(\lambda, (2m))$ such that $2m$ is not a part of $\lambda$ is in one-to-one correspondence with the set of parts that are not smallest in all partitions of $n$ into distinct even parts. We can view the set of partitions of $n$ into distinct even parts (generated by the first sum of \eqref{gfs}) as corresponding to the set of smallest parts in all partitions of $n$ into distinct even parts. \smallskip \noindent Next, we consider the remaining pairs of partitions $(\lambda, (2m)^k)$ in $\mathcal E'(n)$, i.e, pairs with $k\geq 2$ or pairs with $k=1$ and $2m$ is a part of $\lambda$. For each such pair $(\lambda, (2m)^k)$, we create the partition $\mu=\lambda \cup (2m)^k$. Then, $\mu$ is a partition into even parts with exactly one even part repeated. If the repeated part of $\mu$, $2t$ is the smallest, the partition $\mu$ is obtained exactly once: from $(\mu\setminus {2t}^{b-1})$, where $b$ is the multiplicity of $2t$ in $\mu$. If the repeated part is not the smallest, each such partition is obtained twice: if $2t$ is the repeated part of $\mu$ and it appears with multiplicity $b\geq 2$, then $\mu$ is obtained from $(\mu\setminus (2t)^b, (2t)^b)$ and also from $(\mu\setminus (2t)^{b-1}, (2t)^{b-1})$. \smallskip \noindent This completes the proof of the theorem. 
\end{proof} \smallskip \noindent \section {On the parity of $p(n \mid \text{distinct parts, odd/even rank})$} \label{odd-even rank} \noindent As mentioned in the previous section, the generating function for the number of partitions of $n$ into distinct parts with even rank minus the number of partitions of $n$ into distinct parts with odd rank is given by \cite{ADH88} $$\sigma(q)=\sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(-q;q)_n}.$$ Analogous to identity \eqref{m2-rank id}, we have \begin{equation}\label{rank id} \frac{1}{2} \left(\sum_{n=0}^\infty \frac{q^{\binom{n+1}2}}{(-q;q)_{n}}+(-q;q)_\infty\right)=p(n\mid \text{distinct parts, even rank}). \end{equation} Computed modulo $2$, the identity \eqref{rank id} together with the pentagonal number theorem implies \begin{equation}\label{PNT} \sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(-q;q)_n}\equiv (-q;q)_{\infty}\equiv \sum_{m\in\mathbb{Z}}q^{\frac{m(3m-1)}2}.\end{equation} In this context, we consider the generating function \cite{ADH88} for the number $g(n,r)$ of distinct partitions of $n$ having rank $r$: $$G(z,q):=\sum_{n\geq0}\sum_{r\geq 0} g(n,r)z^rq^n=\sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(zq;q)_n}.$$ Thus, $\sigma(q)=G(-1,q)$. \smallskip \noindent To determine the parity behavior of the number of distinct partitions of $n$ with odd, respectively even rank, we first prove a preliminary result. \begin{lemma} \label{pre-lemma} We have \begin{align*} \sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(zq;q)_n}=\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\prod_{j=1}^n\frac{z+q^j}{1-zq^j}. \end{align*} \end{lemma} \begin{proof} In the Rogers-Fine identity \cite[p. 233, eq. (9.1.1)]{AB05}, replace $\alpha=-\frac{q}{\tau}, \beta=zq$, and let $\tau\rightarrow0$. Simplification yields the desired result. \end{proof} \begin{theorem} \label{odd-odd-odd-even} Let $n$ be a positive integer. Then, \begin{itemize}\item[(i)] $p(n \mid \text{distinct parts, odd rank})$ is odd if and only if $n=\frac{k(3k-(-1)^k)}2$ for some positive integer $k$; \item[(ii)] $p(n \mid \text{distinct parts, even rank})$ is odd if and only if $n=\frac{k(3k+(-1)^k)}2$ for some positive integer $k$.\end{itemize} \end{theorem} \begin{proof} As usual, operator $D$ computes the derivative $\frac{d}{dz}$ and evaluates at $z=1$. All congruences in this proof are {\it modulo $2$}. It is straightforward to check that, expanded as a series, $D\frac{z+x}{1-zx}\equiv1$ and $\frac{1+x}{1-x}\equiv 1$, i.e., all coefficients of powers of $x$ except the constant term are even. Using these facts and Lemma \ref{pre-lemma}, we have \begin{align} DG(z,q)&=\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n\prod_{\substack{i=1 \\ i\neq j}}^n\frac{1+q^i}{1-q^i}D\frac{z+q^j}{1-zq^j}\label{der} \\ &\equiv \sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n1 \nonumber \\ &\equiv \sum_{\substack{n\geq0 \\ \text{$n$ odd}}} q^{\frac{n(3n+1)}2}(1+q^{2n+1})=\sum_{n\geq0}q^{(2n+1)(3n+2)}(1+q^{4n+3}), \nonumber \end{align} which is equivalent to the first assertion of the theorem. 
\smallskip \noindent To prove the second assertion, we calculate $D\,zG=G(1,q)+DG(z,q)$ by making use of (\ref{PNT}) and \eqref{der} \begin{align*} D\,zG(z,q)&=\sum_{n\geq1}\frac{q^{\binom{n+1}2}}{(q;q)_n}+\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n\prod_{\substack{i=1 \\ i\neq j}}^n\frac{1+q^i}{1-q^i}D\frac{z+q^j}{1-zq^j} \\ &\equiv \sum_{m\in\mathbb{Z}^{*}}q^{\frac{m(3m-1)}2}+ \sum_{k\geq1}q^{\frac{k(3k-(-1)^k)}2} \\ &\equiv \sum_{k\geq1}q^{\frac{k(3k+(-1)^k)}2}, \end{align*} where $\mathbb{Z}^{*}$ denotes the set of non-zero integers. The proof is complete. \end{proof} \smallskip \noindent The preceding discussion leads to the following $q$-series identity. \begin{corollary} We have $$\sum_{n\geq1}\frac{q^{\binom{n+1}2}}{(q;q)_n}\sum_{m=1}^n\frac{q^m}{1-q^m} =\sum_{n\geq1}\frac{q^{\frac{n(3n+1)}2}(1+q^{2n+1})\,(-q;q)_n}{(q;q)_n}\sum_{j=1}^n\frac{1+q^{2j}}{1-q^{2j}}.$$ \end{corollary} \begin{proof} Both sides of the identity equal $D\, G(z,q)$. The left-hand side is obtained by logarithmic-differentiation, while the right-hand side is obtained from \eqref{der}. \end{proof} \section{Binary inversion sums and sums of hook lengths} \label{bi-hooks} \noindent A pair $i<j$ is an {\it inversion} of a binary string $w=w_1\cdots w_n$ if $w_i=1>0=w_j$. The {\it binary inversion sum} of $w$ is given by the sum $s(w)=\sum(j-i)$ over all inversions of $w$. Given a binary string (or word) $w=w_1\cdots w_n$, the {\it position} of $w_i$ in $w$ is $i$. \smallskip \noindent Given a partition $\lambda$, its {\it bit string}, $b(\lambda)$, is a binary word starting with $1$ and ending with $0$ defined as follows. Start at SW corner of the Young diagram and travel along the outer profile going NE. For each step to the right, record a $1$; for each step up, record a $0$. By the {\it length} of $b(\lambda)$ we mean the number of digits in $b(\lambda)$. For instance, for the partition $\lambda= (5,3,3,2,1)$ of Example \ref{eg1} we get the bit string $b(\lambda)=1010100110$ and the length of $b(\lambda)$ is $10$. \smallskip \noindent Before we state the main result of this section, we give a motivating example. \begin{example} We list the partitions of $n=2, 3$, and $4$. For each partition $\lambda$, we give the bit string $b(\lambda)$, the binary inversion sum $s(b(\lambda))$, the list $H_\lambda$ of hook lengths of cells of $\lambda$, and the sum of all hook lengths of $\lambda$, i.e., $\sum_{u\in\lambda}h^{\lambda}(u)$. Notice that we give $H_{\lambda}$ as a list of sub-lists of hook lengths along rows of the Young diagram. \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline partition $\lambda$ & $b(\lambda)$ & $s(b(\lambda))$ & $H_{\lambda}$ & sum of elements in $H_{\lambda}$ \\ \hline (2) & 110 & 3 & [(2,1)] & 3 \\ (1,1) &100 & 3 & [(2),(1)]& 3 \\ (3) & 1110 & 6 & [(3,2,1)] & 6 \\ (2,1) & 1010 & 5 &[(3,1),(1)] & 5 \\ (1,1,1) & 1000 & 6 & [(3),(2),(1)] & 6 \\ (4) & 11110 & 10 & [(4,3,2,1)] & 10 \\ (3,1) & 10110 & 8 & [(4,2,1),(1)] & 8 \\ (2,2) & 1100 & 8 & [(3,2),(2,1)] & 8 \\ (2,1,1) & 10010 & 8 & [(4,1),(2),(1)] & 8 \\ (1,1,1,1) & 10000 & 10 & [(4),(3),(2),(1)] & 10 \\ \hline \end{tabular} \end{center} \end{example} \begin{theorem} Given any partition $\lambda$, the sum of its hooks, $\sum_{u\in\lambda}h^{\lambda}(u)$, equals the inversion sum, $s(b(\lambda))$, of its binary bit $b(\lambda)$. \end{theorem} \begin{proof} Let $\lambda$ be a partition of $n$ and $b(\lambda)$ its bit string. The number of parts $\ell(\lambda)$ of $\lambda$ equals the number of $0$ digits in $b(\lambda)$. 
The perimeter of $\lambda$, which is defined as the hook-length $h^{\lambda}(1,1)$, is equal to the length of $b(\lambda)$ minus $1$. \smallskip \noindent Each cell $(a,c)$ in the Young diagram of $\lambda$ defines a sub-diagram consisting of the cell $(a,c)$ and all cells of the Young diagram of $\lambda$ weakly to the right and weakly below $(a,c)$. Denote the partition corresponding to this sub-diagram by $\lambda^{(a,c)}$. Then $b(\lambda^{(a,c)})$ is the sub-string of $b(\lambda)$ starting at the $c^{th}$ digit from the left equal to $1$ (we do not count the $0$ digits) and ending after the $a^{th}$ digit from the right equal to $0$ (we do not count the $1$ digits). The length of $b(\lambda^{(a,c)})$ minus $1$ is precisely the hook length $h(a,c)$ in $\lambda$. Moreover, if the $c^{th}$ digit from the left equal to $1$ is in position $i$ in $b(\lambda)$ and the $a^{th}$ digit from the right equal to $0$ is in position $j$, then $j-i$ is precisely the length of $b(\lambda^{(a,c)})$ minus $1$, i.e., $h(a,c)$. Conversely, every inversion $i<j$ determines a sub-string of $b(\lambda)$ that starts with $1$ in position $i$ in $b(\lambda)$ and ends with $0$ in position $j$ in $b(\lambda)$. The length of the sub-string minus $1$ is precisely $j-i$ and is the hook length $h(a,c)$ where $c$ is the number of $1$ digits in positions $\leq i$ and $a$ is the number of $0$ digits in positions $\geq j$ in $b(\lambda)$. Thus, for each inversion $i<j$, the difference $j-i$ is the hook length of a unique cell in the Young diagram of $\lambda$. \end{proof} \section{On the $x$-ray list of a partition} \label{two stats} \noindent Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell(\lambda)})$ be an integer partition of length $\ell(\lambda)$ and designate $m:=\max\{\lambda_1,\ell(\lambda)\}$. We construct an $m\times m$ matrix whose entry in the $i^{th}$ row and $j^{th}$ column is $1$ if $(i,j)$ is a cell in the Young diagram of $\lambda$ and $0$ otherwise. Since the Young diagram of a partition is also known as the Ferrers diagram, the matrix defined here is referred to as the {\it Ferrers matrix} of $\lambda$ on OEIS \cite{FMatrix}. \begin{example} \label{eg 3} Given the partition, $\lambda=(4,3,1)\vdash 8$, we have $m=4$ and the corresponding $4\times 4$ matrix is shown below. $$\begin{bmatrix} 1&1&1&1 \\ 1&1&1&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{bmatrix}$$ \end{example} \noindent Given a partition $\lambda$, we add the elements of each anti-diagonal in the Ferrers matrix of $\lambda$ to obtain a composition, denoted by $\lambda_x$, which we call the {\it $x$-ray list} of $\lambda$. For the partition $\lambda=(4,3,1)$ in Example \ref{eg 3}, we have $\lambda_x=(1,2,3,2,0,0,0)$. \smallskip \noindent Since the number of entries equal to $1$ in an anti-diagonal is invariant under conjugation, we have $\lambda_x=\lambda'_x$ for all partitions $ \lambda$. Thus, when studying $x$-ray lists, it is enough to consider partitions $\lambda$ satisfying $\lambda_1\geq \ell(\lambda)$. So, the Ferrers matrix is an $M\times M$ matrix, where $M=\lambda_1$. \smallskip \noindent The first result, below, proves that the $x$-ray lists of partitions are unimodal compositions. In addition, the sub-sequence of an $x$-ray consisting of the initial terms up to the first occurrence of a peak is strictly increasing. \begin{lemma} \label{x-ray-lemma} If $\lambda$ is a partition, then $\lambda_x$ is of the form $(1,2,3,\dots,n,a_1,a_2,\dots,a_r)$, where $n\geq a_1\geq a_2\geq\cdots\geq a_r$. 
\end{lemma} \begin{proof} Let $\lambda$ be a partition such that $\lambda_1\geq\ell(\lambda)$ and let $M=\lambda_1$. Consider all initial anti-diagonals with no entries equal to $0$. These contribute the following initial parts in the composition $\lambda_x$: \begin{itemize} \item[(i)]$1, 2, 3,\dots, n$ where $n\leq M$, or \medskip \item[(ii)] $1, 2, 3, \dots, M, M-1, M-2, \dots, M-j$. \end{itemize} \smallskip \noindent Let us consider the first anti-diagonal containing a $0$ entry. In case (i), the sum of its entries, $a_1$, will be at most $n$. In case (ii), the sum of its entries will be strictly less than $M-j$. Furthermore, zeros propagate zeros, i.e., in a column, below a $0$ there are only $0$ entries and, in a row, to the right of a $0$ there are only $0$ entries. Thus, for any string of consecutive zeros in a given anti-diagonal, there is a string with at least as many zeros in the next anti-diagonal. This necessitates a non-increase in the sum of entries of the next anti-diagonal.\end{proof} \begin{example} In the partially filled Ferrers matrices below, $$\begin{bmatrix} 1&1&1&1&1 \\ 1&1&1&0&\square \\ 1&1&0&\square \\ 1&0&\square \\ 1&\square \end{bmatrix} \qquad \qquad \begin{bmatrix} 1&1&1&1&1 \\ 1&1&1&0&\square \\ 1&1&0&\square \\ 1&0&\square \\ 0&\square \end{bmatrix},$$ \bigskip \noindent the positions marked with $\square$ are forced to be filled with $0$.\end{example} \begin{theorem} \label{x-ray-thm} The number of different $x$-ray lists of $n$ equals to the number of partitions of $n$ into distinct parts. \end{theorem} \begin{proof} We first show that if $\pi=(b_1, b_2, \ldots)$ and $\rho=(c_1, c_2, \ldots)$ are partition of $n$ into distinct parts such that $\pi\neq \rho$, then their $x$-ray lists are different, i.e., $\pi_x\neq\rho_x$. Let $i$ be the smallest integer such that $b_i\neq c_i$, say $b_i>c_i$. In the Ferrers matrix of a partition with distinct parts, in an anti-diagonal, the entries SW of a $0$ are all equal to $0$. Therefore, the anti-diagonal of $\pi$ containing the end point of part $b_i$, i.e., the $(i+b_i-1)^{st}$ anti-diagonal, contains at least one more $1$ than the same anti-diagonal in $\rho$. Then, $\pi_x\neq \rho_x$. Consequently, the number of $x$-ray lists is at least as large as the number of partitions into distinct parts. \smallskip \noindent On the other hand, each $x$-ray list, $\lambda_x$, of $n$ corresponds to a partition $\mu$ of $n$ with distinct parts as follows. If the $i^{th}$ entry of $\lambda_x$ is $r$, fill in the $i^{th}$ anti-diagonal of a matrix with $r$ entries equal to $1$ (starting in the first row of the matrix) followed by $0$ entries. Let $\phi(\lambda_x)$ be the partition whose Ferrers matrix is the matrix obtained above. By Lemma \ref{x-ray-lemma} and the construction of the matrix, $\phi(\lambda_x)$ is a partition with distinct parts. Hence, the number of $x$-ray lists is no more than the number of partitions of $n$ into distinct parts. The proof is complete. \end{proof} \noindent \begin{example} We illustrate the proof of Theorem \ref{x-ray-thm} for $n=7$. 
\bigskip \begin{center} \begin{tabular}{ |c|c|c|c| } \hline $\lambda$ with $\lambda_1\geq\ell(\lambda)$ & $\lambda_x$ & $\phi(\lambda_x)$ \\ \hline \multirow{1}{4em} {7} & (1,1,1,1,1,1,1) & 7 \\ \hline \multirow{1}{4em} {61} & (1,2,1,1,1,1) & 61 \\ \hline \multirow{2}{4em}{52 \\ 511} & (1,2,2,1,1) & 52 \\ & & \\ \hline \multirow{2}{4em}{43 \\ 411} & (1,2,2,2) & 43 \\ & & \\ \hline \multirow{3}{4em}{421 \\ 331 \\ 322 } & (1,2,3,1) & 421 \\ & & \\ & & \\ \hline \end{tabular} \end{center} \end{example} \smallskip \noindent We now explore a different aspect of $x$-ray lists of partitions. \smallskip\noindent We say that a partition $\mu$ is contained in partition $\eta$ if $\mu_i\leq \eta_i$ for all $i$ (the partitions are padded with $0$ after the last part). Recall that $\delta_r$ denotes the staircase partition of length $r$. We say that $\lambda$ is {\it maximally contained} in $\delta_r$ if it is contained in $\delta_r$ but not in $\delta_{r-1}$. We consider the following motivating example. \begin{example} The table below \cite{STSP} is constructed as follows. For $n\geq1$, the $n^{th}$ row records the number of partitions of $n$ that are maximally contained in $\delta_r$ for $1\leq r\leq n$. \begin{align*} &1 \\ &0, 2 \\ &0, 1, 2 \\ &0, 0, 3, 2 \\ &0, 0, 3, 2, 2 \\ &0, 0, 1, 6, 2, 2 \\ &0, 0, 0, 7, 4, 2, 2. \end{align*} For example, $\{(4), (3,1), (2,2), (2,1,1), (1,1,1,1)\}$ is the set of partitions of $n=4$. The fourth row in the above table shows that there are no partitions maximally contained in $\delta_1$ or $\delta_2$; there are $3$ partitions maximally contained in $\delta_3$; there are $2$ partitions maximally contained in $\delta_4$. Thus, the the fourth row is $0, 0, 3, 2$. \smallskip \noindent On the other hand, the multiset of $x$-ray lists of $n=4$ with zero entries omitted is $\{1111, 211, 211, 211, 1111\}$. There are no $x$-ray lists of length $1$ or $2$; there are $3$ $x$-ray lists of length $3$; there are 2 $x$-ray lists of length $4$. This data is precisely the fourth row in the above table: $0, 0, 3, 2$. \end{example} \noindent This agreement is not accidental as shown in the next theorem. By the {\it length} of an $x$-ray list, we mean the number of non-zero entries in the list. \begin{theorem} \label{biject-triangle} For $n\geq1$ and $1\leq r\leq n$, the number of partitions of $n$ with $x$-ray list of length $r$ equals the number of partitions of $n$ maximally contained in $\delta_r$. \end{theorem} \begin{proof} Let $\lambda\vdash n$ and suppose its x-ray list, $\lambda_x$, has length $r$. From Lemma \ref{x-ray-lemma}, the first $r$ anti-diagonals in the Young diagram of $\lambda$ are non-empty while the $(r+1)^{st}$ anti-diagonal is empty. Since $(\delta_r)_x=(1,2,3,\dots,r)$, it follows that $\lambda$ is maximally contained in $\delta_r$. \smallskip \noindent Conversely, if $\lambda$ is maximally contained in $\delta_r$, then the $(r+1)^{st}$ anti-diagonal of $\lambda$ is empty and the $r^{th}$ anti-diagonal is non-empty. Again, by Lemma \ref{x-ray-lemma}, the $x$-ray list $\lambda_x$ has length $r$. \end{proof} \smallskip \begin{thebibliography}{00} \bibitem{A12} T. Amdeberhan, \textit{Theorems, problems and conjectures.}\\ http://dauns01.math.tulane.edu/~tamdeberhan/conjectures.pdf \bibitem{A98} G. E. Andrews, \textit{The Theory of Partitions.} Reprint of the 1976 original. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1998, +255 pp. \bibitem{A90} G. E. 
Andrews, \textit{Three aspects of partitions.} S\'eminaire Lotharingien de Combinatoire, B25f, 1990. \bibitem{AB19} G. E. Andrews and C. Ballantine, \textit{Almost partition identities}, Proc. Natl. Acad. Sci. USA 116 (2019). no. 12, 5428--5436. \bibitem{AB05} G. E. Andrews, B. C. Berndt, \textit{Ramanujan's Lost notebook, Part I.} Springer, New York, 2005. \bibitem{ADH88} G. E. Andrews, F. Dyson, and D. Hickerson, \textit{Partitions and indefinite quadratic forms.} Invent. Math. 91 (1988), 391--407. \bibitem{AJO01} G. E. Andrews, J. Jim\'enez-Urroz, and K. Ono, \textit{$q$-series identities and values of certain $L$-functions.} Duke Math. J. 108(3) (2001) 395--419. \bibitem{BMPS} C. Bebeacua, T. Mansour, A. Postnikov, S. Severini, \textit{On the $X$-rays of permutations.} Proceedings of the Workshop on Discrete Tomography and its Applications, 193--203, Electron. Notes Discrete Math., 20, Elsevier Sci. B. V., Amsterdam, 2005. \bibitem{BG} A. Berkovich and F. G. Garvan, \textit{Some observations on Dyson’s new symmetries of partitions.} J. Combin. Theory, Ser. A 100 (2002), 61--93. \bibitem{CS} P. S. Campbell and A. Stokke, \textit{Hook-content formulae for symplectic and orthogonal tableaux.} Canad. Math. Bull. 55 (2012), no. 3, 462--473. \bibitem{H10} G.-N. Han, \textit{The Nekrasov-Okounkov hook length formula: refinement, elementary proof, extension, and applications}, Ann. Inst. Fourier (Grenoble) 60 (2010), no. 1, 1–29 \bibitem{K12} B. Kim, \textit{On partition congruences for overcubic partition pairs}, Commun. Korean Math. Soc. 27 (2012), no. 3, 477–482 \bibitem{NO06} N.A. Nekrasov and A. Okounkov, \textit{Seiberg-Witten theory and random partitions.} The unity of mathematics, 525--596, Progr. Math., 244, Birkh\"auser Boston, Boston, MA, 2006. \bibitem{EK} N. El Samra and R. C. King, \textit{Dimensions of irreducible representations of the classical Lie groups.} J. Phys. A 12 (1979), no. 12, 2317--2328. \bibitem{RPS10} R. P. Stanley, \textit{Some combinatorial properties of hook lengths, contents, and parts of partitions}, Ramanujan J. 23 (2010), no. 1-3, 91–105. \bibitem{ec2} R. P. Stanley, \textit{Enumerative combinatorics.} Vol. 2. Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge, 1999. xii+581 pp. \bibitem{S} S. Sundaram, \textit{Tableaux in the representation theory of the classical Lie groups.} Invariant theory and tableaux (Minneapolis, MN, 1988), 191--225, IMA Vol. Math. Appl., 19, Springer, New York, 1990. \bibitem{FMatrix} The On-Line Encyclopedia of Integer Sequences, Sequence A237981, https://oeis.org. \bibitem{STSP} The On-Line Encyclopedia of Integer Sequences, Sequence A325189, https://oeis.org. \end{thebibliography} \end{document}
2205.07317v2
http://arxiv.org/abs/2205.07317v2
The domino problem of the hyperbolic plane is undecidable, new proof
\documentclass{article} \usepackage{graphicx} \usepackage{xcolor} \usepackage{amsfonts} \usepackage{amssymb} \def\encadre#1#2{\setbox100=\hbox{\kern#1{#2}\kern#1} \dimen100=\ht100 \advance \dimen100 by #1 \dimen101=\dp100 \advance \dimen101 by #1 \setbox100=\hbox{\vrule height \dimen100 depth \dimen101\box100\vrule} \setbox100=\vbox{\hrule\box100\hrule} \advance \dimen100 by .4pt \ht100=\dimen100 \advance \dimen101 by .4pt \dp100=\dimen101 \box100 \relax } \setbox221=\hbox{\special{psfile=cercle_L20.eps}} \def\encercle#1#2{\hbox{\raise-5pt\copy221\hskip#2#1}} \newtheorem{thm}{\textbf{Theorem}} \newtheorem{lem}{\textbf{Lemma}} \newtheorem{cor}{\textbf{Corollary}} \newtheorem{prop}{\textbf{Proposition}} \newtheorem{defn}{\textbf{Definition}} \newtheorem{cla}{\textbf{Claim}} \newtheorem{fig}{\textbf{Figure}} \newtheorem{tab}{\textbf{Table}} \newtheorem{algo}{\textbf{Algorithm}} \newtheorem{const}{\textbf{Construction}} \def\leurre{\noindent\leftskip0pt\rmix\mathix\baselineskip 10pt\parindent0pt} \def\lanote #1 {\noindent{\leftskip0pt\rmix\mathix\baselineskip 10pt \parindent0pt {\sc Note:} #1\par}} \makeatletter \newcount\lapage\lapage=413 \newcount\pagei\pagei=\lapage\advance\lapage by 1 \newcount\pageii\pageii=\lapage\advance\lapage by 1 \newcount\pageiii\pageiii=\lapage\advance\lapage by 1 \newcount\pageiv\pageiv=\lapage\advance\lapage by 1 \newcount\pagev\pagev=\lapage\advance\lapage by 1 \newcount\pagevi\pagevi=\lapage\advance\lapage by 1 \newcount\pagevii\pagevii=\lapage\advance\lapage by 1 \newcount\pageviii\pageviii=\lapage\advance\lapage by 1 \newcount\pageix\pageix=\lapage\advance\lapage by 1 \newcount\pagex\pagex=\lapage\advance\lapage by 1 \makeatother \makeatletter \renewcommand{\sectionmark}[1]{ \markright {\itshape{ #1}}} \makeatother \def\ttV{\vrule height 12pt depth 6pt width 0pt} \def\ttH{\hrule height 0pt depth 0pt width \hsize} \def\chapterskip{\vskip 17.5mm} \def\grostrait{\ligne{\vrule height 1pt depth 1pt width \hsize}} \def\demitrait{\ligne{\vrule height 0.5pt depth 0.5pt width \hsize}} \makeindex \begin{document} \parindent=12pt \def\ligne#1{\hbox to \hsize{#1}} \def\PlacerEn#1 #2 #3 {\rlap{\kern#1\raise#2\hbox{#3}}} \font\bfxiv=cmbx12 at 12pt \font\bfxii=cmbx12 \font\bfix=cmbx9 \font\bfvii=cmbx7 \font\bfvi=cmbx6 \font\bfviii=cmbx8 \font\pc=cmcsc10 \font\itix=cmti9 \font\itviii=cmti8 \font\rmix=cmr9 \font\mmix=cmmi9 \font\symix=cmsy9 \def\mathix{\textfont0=\rmix \textfont1=\mmix \textfont2=\symix} \font\rmviii=cmr8 \font\mmviii=cmmi8 \font\symviii=cmsy8 \def\mathviii{\textfont0=\rmviii \textfont1=\mmviii \textfont2=\symviii} \font\rmvii=cmr7 \font\mmvii=cmmi7 \font\symvii=cmsy7 \def\mathvii{\textfont0=\rmvii \textfont1=\mmvii \textfont2=\symvii} \font\rmvi=cmr6 \font\mmvi=cmmi6 \font\symvi=cmsy6 \def\mathvi{\textfont0=\rmvi \textfont1=\mmvi \textfont2=\symvi} \font\rmv=cmr5 \font\mmv=cmmi5 \font\symv=cmsy5 \def\mathv{\textfont0=\rmv \textfont1=\mmv \textfont2=\symv} \font\rmvii=cmr7 \font\rmv=cmr5 \ligne{\hfill} \pagenumbering{arabic} \begin{center}\bf\Large The domino problem of the hyperbolic plane is undecidable, new proof \end{center} \vskip 20pt \begin{center}\bf\large Maurice Margenstern\end{center} \vskip10pt \begin{center}\small Laboratoire de G\'enie Informatique, de Production et de Maintenance, EA 3096,\\ Universit\'e de Lorraine, B\^atiment A de l'UFR MIM,\\ 3 rue Augustin Fresnel,\\ BP 45112,\\ 57073 Metz Cedex 03, France,\\ {\it e-mail} : [email protected] \end{center} \def\cqfd{\hbox{\kern 2pt\vrule height 6pt depth 2pt width 8pt\kern 1pt}} 
\def\HH{\hbox{$I\!\!H$}} \def\Hii{\hbox{$I\!\!H^2$}} \def\Hiii{\hbox{$I\!\!H^3$}} \def\Hiv{\hbox{$I\!\!H^4$}} \vskip 10pt \noindent \begin{abstract} \it The present paper is a new version of the arXiv paper revisiting the proof given in a paper of the author published in 2008 proving that the general tiling problem of the hyperbolic plane is undecidable by proving a slightly stronger version using only a regular polygon as the basic shape of the tiles. The problem was raised by a paper of Raphael Robinson in 1971, in his famous simplified proof that the general tiling problem is undecidable for the Euclidean plane, initially proved by Robert Berger in~1966. The present construction simplifies that of the recent arXiv paper. It again strongly reduces the number of prototiles. \end{abstract} \def\blank{\hbox{$\,$\_$\!$\_$\,$}} \section{Introduction} Whether it is possible to tile the plane with copies of a fixed set of tiles was a question raised by Hao {\sc Wang}, \cite{wang} in the late 50's of the previous century. {\sc Wang} solved the {\it origin-constrained} problem, which consists in fixing an initial tile in the above finite set of tiles. Indeed, fixing one tile is enough to entail the undecidability of the problem. Also called the {\bf general tiling problem} further in this paper, the general case, free of any condition, in particular with no fixed initial tile, was proved undecidable by Robert {\sc Berger} in 1966, \cite{berger}. Both {\sc Wang}'s and {\sc Berger}'s proofs deal with the problem in the Euclidean plane. In 1971, Raphael {\sc Robinson} found an alternative, simpler proof of the undecidability of the general problem in the Euclidean plane, see \cite{robinson1}. In that 1971 paper, Robinson raised the question of the general problem for the hyperbolic plane. Seven years later, in 1978, he proved that in the hyperbolic plane, the origin-constrained problem is undecidable, see \cite{robinson2}. Since then, the problem had remained open. In 2007, I proved the undecidability of the tiling problem in the hyperbolic plane, published in 2008, see \cite{mmundecTCS}. In a recent arXiv paper~\cite{mmarXiv22}, I presented a new proof of what was established in \cite{mmundecTCS}. I follow the same general idea but the tiling itself is changed. It is that of~\cite{mmarXiv22} but several details of the presentation are changed in the present version whose result is a new significant reduction of the number of prototiles. In order the paper to be self-contained, I repeat the frame of the paper as well as the strategy used to address the tiling problem. In the present introduction, we remind the reader the general strategy to attack the tiling problem, as already established in the famous proofs dealing with the Euclidean case. We assume that the reader is familiar with the tiling $\{7,3\}$ of the hyperbolic plane. That tiling is the frame which our solution of the problem lies in. The reader familiar with hyperbolic geometry can skip that part of the paper. We also refer the reader to~\cite{mmbook1} and to~\cite{mmDMTCS} for a more detailed introduction and for other bibliographical references. Also, in order that the paper can be self-contained, we sketch the notion of a space-time diagram of a Turing machine. With respect to paper~\cite{mmarXiv22}, I append a new section devoted to the construction of an aperiodic tiling. Hao Wang mentioned in~\cite{wang} that if any tiling of the hyperbolic plane were necessarily periodic then, the tiling problem would be decidable. 
Accordingly, the unsolvability of the problem entails the existence of an aperiodic tiling of the hyperbolic plane. Section~\ref{s_aperiod} deals with that point. In Section~\ref{s_proof}, I reuse the construction of Section~\ref{s_aperiod} to establish the properties of the particular tiling which we consider within the tiling $\{7,3\}$ and which are later used for the proof of Theorem~\ref{undec}. In Section~\ref{s_proof}, we proceed to the proof itself, leaning on the definition of the needed tiles. In Subsection~\ref{ss_tiles} we proceed to the counting of the needed prototiles. That allows us to prove: \begin{thm}\label{undec} \it The domino problem of the hyperbolic plane is undecidable. \end{thm} Reproducing the similar section of \cite{mmarXiv22} for self-containedness, Section~\ref{s_corol} gives several corollaries of Theorem~\ref{undec}. We conclude about the difference between the present paper, paper~\cite{mmarXiv22} and paper~\cite{mmundecTCS}. From Theorem~\ref{undec}, we immediately conclude that the general tiling problem is undecidable in the hyperbolic plane. \section{An aperiodic tiling of the hyperbolic plane}\label{s_aperiod} Subsection~\ref{ss_heptatil} briefly mentions the frame of our constructions. Then, in Subsection~\ref{ss_aperiod}, we proceed to the construction of an aperiodic tiling of the hyperbolic plane. \subsection{The tiling $\{7,3\}$}\label{ss_heptatil} We assume the reader to be familiar with hyperbolic geometry. We can refer him/her to \cite{mmbook1} for an introduction. {\bf Regular tessellations} are a particular case of tilings. They are generated from a regular polygon by reflection in its sides and, recursively, of the images in their sides. In the Euclidean case, there are, up to isomorphism and up to similarities, three tessellations, respectively based on the square, the equilateral triangle and on the regular hexagon. Later on we say {\bf tessellation}, for short. In the hyperbolic plane, there are infinitely many tessellations. They are based on the regular polygons with $p$ sides and with $\displaystyle{{2\pi}\over q}$ as vertex angle and they are denoted by $\{p,q\}$. This is a consequence of a famous theorem by Poincar\'e which characterises the triangles starting from which a tiling can be generated by the recursive reflection process which we already mentioned. Any triangle tiles the hyperbolic plane if its vertex angles are of the form $\displaystyle{\pi\over p}$, $\displaystyle{\pi\over q}$ and $\displaystyle{\pi\over r}$ with the condition that $\displaystyle{1\over p}+\displaystyle{1\over q}+ \displaystyle{1\over r}< 1$. Among these tilings, we choose the tiling $\{7,3\}$ which we called the {\bf ternary heptagrid} in \cite{ibkmACRI}. It is below illustrated by Figure~\ref{splittil_73}. From now on we call it the {\bf heptagrid}. \vskip 0pt \vtop{ \ligne{\hfill \includegraphics[scale=0.5]{tiling_7_3.ps} \hfill} \begin{fig}\label{splittil_73} \leurre The heptagrid in the Poincar\'e's disc model. \end{fig} } In \cite{ibkmACRI,mmbook1}, many properties of the heptagrid are described. An important tool to establish them is the splitting method, prefigured in \cite{mmJUCSii} and for which we refer to~\cite{mmbook1}. Here, we just suggest the use of this method which allows us to exhibit a tree, spanning the tiling: the {\bf Fibonacci tree}. Below, the left-hand side of Figure~\ref{til_73} illustrates the splitting of \Hii\ into a central tile~$T$ and seven sectors dispatched around~$T$. Each sector is spanned by a Fibonacci tree. 
The right-hand side of Figure~\ref{til_73} illustrates how the sector can be split into sub-regions. Now, we notice that two of these regions are copies of the same sector and that the third region~$S$ can be split into a tile and then a copy of a sector and a copy of~$S$. Such a process gives rise to a tree whose nodes are in bijection with the tiles of the sector. The tree structure will be used in the sequel and other illustrations will allow the reader to better understand the process. \ligne{\hfill} \vtop{ \vspace{-195pt} \ligne{\hfill \includegraphics[scale=0.5]{new_eclate_7_3.ps} \hskip-40pt \includegraphics[scale=1.1]{ch4_figure13.ps} \hfill} \vskip 5pt \begin{fig}\label{til_73} \leurre Left-hand side: the standard Fibonacci trees which span the heptagrid. Right-hand side: the splitting of a sector, spanned by a Fibonacci tree. \end{fig} } \vtop{ \ligne{\hfill \includegraphics[scale=0.55]{middle_global.ps} \hfill} \begin{fig}\label{the_mids} \leurre The mid-point lines delimiting a sector of the heptagrid. The rays $u$ and $v$ are supported by mid-point lines. \end{fig} } \vskip 10pt Another important tool to study the tiling $\{7,3\}$ is given by the {\bf mid-point} lines, which are illustrated by Figure~\ref{the_mids}. The lines have that name as far as they join the mid-points of contiguous edges of tiles. Let $s_0$ be a side of a tile. Let $M$ be the mid-point of~$s_0$ and let $A$ be one of the vertices defined by~$s_0$. Two sides~$s_1$ and~$s_2$ of tiles of the heptagrid also share $A$. They define a tile~$\mu$. Let~$u$, $v$ be the rays issued from~$A$ which crosses the mid-point of~$s_1$, $s_2$ respectively, see Figure~\ref{the_mids}. There, we can see how such rays us allow to delimit a sector, a property which is proved in~\cite{ibkmACRI,mmbook1}. Later on, such a sector will be called a {\bf sector of the heptagrid}. We say that $\mu$ is its {\bf root} or that the sector is {\bf rooted at}~$\mu$. \subsection{Generating the heptagrid with tiles} \label{ss_pre_tiles} \def\GG{{\bf G}} \def\YY{{\bf Y}} \def\BB{{\bf B}} \def\OO{{\bf O}} \def\MM{{\bf M}} \def\RR{{\bf R}} Now, we show that the tiling which we have described in general terms in the previous section can effectively be generated from a small finite set of tiles which we call the {\bf prototiles}: we simply need 4~of them. The basic colours we consider are {\bf green}, {\bf yellow}, {\bf blue} and {\bf orange}: we denote them by \GG, \YY, \BB{} and \OO{} respectively. \subsubsection{Trees of the heptagrid} \label{sss_trees} Using the tiles defined previously, we define a tiling by applying the rules~$(R_0)$ also illustrated by Figure~\ref{f_proto_0}. The tiles we use are copies of the prototiles. \vskip 5pt \ligne{\hfill $\vcenter{\vtop{\leftskip 0pt\parindent 0pt\hsize 250pt \ligne{\hfill \GG{} $\rightarrow$ \YY\BB\GG,\hskip 20pt \BB{} $\rightarrow$ \BB\OO,\hskip 20pt \YY{} $\rightarrow$ \YY\BB\GG,\hskip 20pt \OO{} $\rightarrow$ \YY\BB\OO \hfill} }}$ \hfill$(R_0)$\hskip 10pt} \vskip 10pt \ligne{\hfill \vtop{\hsize=300pt \ligne{\hfill \includegraphics[scale=0.3]{proto_tiles_0.ps} \hfill} \vspace{-10pt} \begin{fig}\label{f_proto_0} \leurre The prototiles generating the tiling: all cases for the neighbourhood of a tile are considered, whatever the tile: \BB, \YY, \OO{} or \GG. The neighbourhoods around a tile of the same colour correspond to the different occurrences of that colour in the right-hand side part of the rules~$(R_0)$. 
\end{fig} } \hfill} Infinitely many tilings of the heptagrid can be constructed by applying the rules~$(R_0)$ with the help of Figure~\ref{f_proto_0}. That figure illustrates all possible neighbourhoods of tiles which are copies of the proto-tiles \GG, \YY, \OO{} and \BB. We later on say that such a symbol constitutes the {\bf status} of the tile. The first line of the figure deals with the four possible neighbourhoods for a \BB-tile: the central tile is a \BB-tile and the father is, successively, a \BB-, an \OO-, a \YY- and a \GG-tile. As far as \YY{} occurs three times in the right side of the rules of~$(R)$, the second line of Figure~\ref{f_proto_0} illustrates the possible neighbourhoods for a \YY-tile. Next, both \GG{} and \OO{} appear twice only in the right-hand side part of a rule of~$(R_0)$. It is the reason why in the third line of the figure, the first two neighbourhoods concern a \GG-tile while the next two ones deal with an \OO-tile. \def\nng{\hbox{\bf\small g} } \def\nny{\hbox{\bf\small y} } \def\nnb{\hbox{\bf\small b} } \def\nno{\hbox{\bf\small o} } In order to get some order in the tiling, we introduce a numbering of the sides of each tile of the heptagrid. Assume that the side which is given number~1 is fixed. We then number the other sides increasingly while counterclockwise turning around the tile. In a tile, side~1 is the side shared with its father. As far as the central tile has no father, its side~1 is arbitrarily fixed. The neighbours are numbered after the side it shares with the tile we consider. A side receives different numbers in the tiles which share it. Table~$(N)$ indicates the correspondence between those numbers depending on the status of a tile~$\tau$ and a neighbour~$\nu$. The index \nng, \nny, \nnb{} or \nno{} refers to the status of~$\nu$. When the number in $\nu$ is~1 it means that $\nu$ is a son of~$\tau$. \newdimen\ntlsizea\ntlsizea=45pt \newdimen\ntlsizeb\ntlsizeb=35pt \def\numtiline #1 #2 #3 #4 #5 #6 #7 #8 {\ligne{\hbox to \ntlsizea{#1\hfill} \hbox to \ntlsizea{#2\hfill} \hbox to \ntlsizeb{#3\hfill} \hbox to \ntlsizeb{#4\hfill} \hbox to \ntlsizeb{#5\hfill} \hbox to \ntlsizeb{#6\hfill} \hbox to \ntlsizeb{#7\hfill} \hbox to \ntlsizeb{#8\hfill} \hfill} } \vskip 5pt \ligne{\hfill $\vcenter{\vtop{\leftskip 0pt\parindent 0pt\hsize 320pt \numtiline {} {1} {2} {3} {4} {5} {6} {7} \numtiline {\GG-tile} {6$_{\nny}$,7$_{\nng}$} {7$_{\nnb}$} {1$_{\nny}$} {1$_{\nnb}$} {1$_{\nng}$} {2$_{\nnb}$} {3$_{\nnb}$} \numtiline {\YY-tile} {4$_{\nny}$,3$_{\nng,\nno}$} {6$_{\nno,\nnb}$} {7$_{\nno}$} {1$_{\nny}$} {1$_{\nnb}$} {1$_{\nng}$} {2$_{\nnb}$} \numtiline {\BB-tile} {5$_{\nny}$,4$_{\nng,\nnb,\nno}$} {7$_{\nny}$,6$_{\nng}$} {7$_{\nng}$} {1$_{\nnb}$} {1$_{\nno}$} {2$_{\nny}$} {2$_{\nng,\nno}$} \numtiline {\OO-tile} {5$_{\nnb,\nno}$} {7$_{\nnb}$} {1$_{\nny}$} {1$_{\nnb}$} {1$_{\nno}$} {2$_{\nny}$} {3$_{\nny}$} }}$ \hfill$(N)$\hskip 15pt} \vskip 10pt Let us start with a few definitions needed to explain the properties which the construction needs for proving the result of the present section. Let $\tau_0$ and $\tau_1$ be two tiles of the heptagrid. A {\bf path} between $\tau_0$ and $\tau_1$ is a finite sequence $\{\mu_i\}_{i\in\{0..n\}}$ such that $\mu_i$ and $\mu_{i+1}$ when \hbox{$0\leq i<n$}, share a side, say in that case the tiles are {\bf adjacent}, and such that \hbox{$\mu_0=\tau_0$} and \hbox{$\mu_n=\tau_1$}. In that case, we say that $n$+1 is the {\bf length} of the path and we also say that the path {\bf joins} $\tau_0$ to~$\tau_1$. 
The {\bf distance} between two tiles $\tau_0$ and $\tau_1$ is the smallest $m$ for which there is a path joining~$\tau_0$ to~$\tau_1$ whose length is~$m$. Consider a sector~$\mathcal S$ as defined above, rooted at a tile~$\mu$, see Figure~\ref{the_mids}. We know that the set of tiles whose centre is contained in $\mathcal S$ is spanned by a tree rooted at~$\mu$. In~\cite{mmJUCSii,mmbook1}, it is proved that such a tree is spanned by the rules of~$(R_0)$. We call such a tree a {\bf tree of the heptagrid} when its root is not a \BB-tile. We indifferently say that $A$, see the figure, is the origin of the tree or of the sector and that $A$ points at $\mu$. We say that the origin of the tree points at the root of the tree. We denote that tree by $T(\mu)$. Note that a tree of the heptagrid is a set of tiles; it is not the set of points contained in those tiles. We call left-, right-hand side {\bf border} of $T(\mu)$ the set of tiles of the tree which are crossed by the left-, right-hand side ray respectively which delimits the sector defining the tree. In a tree of the heptagrid $\mathcal T$, the {\bf level~$m$} is the set of tiles of~$\mathcal T$ which are at distance~$m$ from the root. By induction, it is easy to prove that the sons of a tile on the level~$m$ belong to the level~$m$+1. A tiling of the heptagrid can be defined by the following process: \begin{const}\label{cons_til} \phantom{construction}\\ $-$ Time~$0$: fix a tile~$\tau_0$ as the root of a tree~$T(\tau_0)$ of the heptagrid; that root is the level~$0$ of $T(\tau_0)$ and choose its status among \YY, \GG{} or \OO; $-$ time~$m$+1, $m\in \mathbb N$: construct $\tau_{m+1}$ as a father of~$\tau_m$, taking for $\tau_{m+1}$ a status which is compatible with that of~$\tau_m$; construct the level~$1$ of $T(\tau_{m+1})$ and for $i$ in $\{0..m\}$, if $h_i$ is the level of $T(\tau_i)$ constructed at time~$m$, construct the level~$h_i$$+$$1$. \end{const} It is not difficult to establish the following property: \def\dprop #1 {\begin{prop}\label{#1}} \def\fprop{\end{prop}} \dprop {psubtree_levels} Let $T_0$, $T_1$ be two trees of the heptagrid with $T_1\subset T_0$. Then a level of $T_1$ is contained in a level of $T_0$. More precisely, let $\rho_1$ be the root of $T_1$. Let $h$ be the level of~$\rho_1$ in $T_0$. Let $\tau$ be a tile of~$T_1$ and let $k$ be its level in $T_1$. Then, the level of $\tau$ in~$T_0$ is $h$+$k$. \fprop \noindent Proof. Let $\rho_0$, $\rho_1$ be the roots of~$T_0$, $T_1$ respectively. If $\rho_1$ belongs to the level~1 of $T_0$, the sons of~$\rho_1$, which belong to the level~1 of~$T_1$, belong to the level~2 of~$T_0$. Consequently, if the property is true when $\rho_1$ belongs to the level~$k$ of $T_0$, it is also true for the trees of the heptagrid which are rooted at a tile of the level~$k$+1 of~$T_0$. That proves the property. \hfill$\Box$ Below, Figure~\ref{f_hepta_tils} illustrates two different applications of Construction~\ref{cons_til}. The left-hand side picture represents an implementation where at time~0 the initial root is a \GG-tile and, at each time, the father~$\tau_{m+1}$ of~$\tau_m$ is also a \GG-tile for several consecutive values of~$m$ and then it is a \YY-tile. In the central and in the right-hand side pictures, we have two views of the same implementation: an infinite sequence of consecutive \GG-tiles is crossed by a line~$\ell$, so that an infinite sequence of consecutive \BB-tiles is crossed by~$\ell$ too.
\vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.3]{nov_til_hepta_0.ps} \includegraphics[scale=0.3]{nov_til_isoclines.ps} \includegraphics[scale=0.3]{nov_til_B_isoclines.ps} \hfill} \begin{fig}\label{f_hepta_tils} \leurre Two examples of an implementation of the rules~$(R_0)$ to tile the heptagrid. On the left, a sequence of successive \GG- and \YY-tiles is crossed by the same line. In the centre and on the right, a sequence of \GG-tiles is crossed by the same line. \end{fig} } On those figures, we can notice levels for which Proposition~\ref{psubtree_levels} is of course true. But the figures lead us to introduce a definition. From Construction~\ref{cons_til} and from Proposition~\ref{psubtree_levels}, we can see that a level of $T(\tau_i)$ is continued in $T(\tau_{i+1})$ and, {\it a fortiori}, in all $T(\tau_{i+k})$ for every positive integer~$k$. \dprop {pinclusion} For any tree of the heptagrid $T(\tau)$, if $\mu$ is a non \BB-tile belonging to that tree, then \hbox{$T(\mu)\subset T(\tau)$}. \fprop \noindent Proof. It is true when $\mu$ is a son of~$\tau$ or if $\mu$ is the \OO-son of the \BB-son of~$\tau$, see Figure~\ref{f_tree}. Let $u$ and $v$ be the left- and right-hand side ray respectively issued from the origin $A$ of $T(\tau)$. In each of those cases, $T(\mu)$ is the image of $T(\tau)$ under an appropriate shift: a shift along $u$, $v$ when $\mu$ is the \YY-, \GG-son respectively of $\tau$; a shift along the left-hand side ray~$w$ defining $T(\mu)$ when $\mu$ is the \OO-son of the \BB-son of~$\tau$. Accordingly, the proposition follows for any tile~$\nu$ whose status is not \BB{} by induction on the level of~$\nu$.\hfill$\Box$ \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.6]{tree_seed.ps} \hfill} \begin{fig}\label{f_tree} \leurre A tree~$\mathcal T$ of the heptagrid with two sub-trees of $\mathcal T$. One of them is generated by the \GG-son of the \YY-son of the root of $\mathcal T$, the other is generated by the \GG-son of the root of $\mathcal T$. Side~$1$ in each tile is defined according to Table~$(N)$. \end{fig} } We remind the reader that in $T(\tau)$, whatever the status of~$\tau$ which is assumed to be not \BB, the number of tiles belonging to the level~$m$ of that tree is $f_{2m+1}$, where $\{f_n\}_{n\in\mathbb N}$ is the Fibonacci sequence satisfying \hbox{$f_0=f_1=1$}. If the tiles $\mu$ and $\nu$ belong to the same level of $T(\tau)$, we call {\bf appartness} between $\mu$ and $\nu$ the number $n$ of tiles $\omega_i$ with $i\in\{0..n\}$ such that those $\omega_i$ belong to the same level and such that $\omega_0=\mu$, $\omega_n=\nu$ and that for $i$ with \hbox{$0\leq i<n$}, we have that $\omega_i$ and $\omega_{i+1}$ share a side. Accordingly, denoting the appartness between $\mu$ and $\nu$ by $appart(\mu,\nu)$, we get that if those tiles belong to the level~$m$, then \hbox{$appart(\mu,\nu)\leq f_{2m+1}$}. From Proposition~\ref{psubtree_levels}, it is plain that the definition of the appartness of two tiles does not depend on the tree of the heptagrid to which they belong, provided that they belong to the same tree. Let us take a closer look at the two implementations of Construction~\ref{cons_til} illustrated by Figure~\ref{f_hepta_tils}. Fix a \GG-tile~$\tau$ of the heptagrid. Fix a mid-point~$A$ of a side of another tile sharing a vertex~$V$ only with~$\tau$.
From~$A$, draw two rays: one of them, $u$, passes through the mid-point of one of the sides of~$\tau$ sharing~$V$ while the other one, $v$, passes through the mid-point of the other side of~$\tau$ sharing~$V$. The rays~$u$ and~$v$ allow us to define a sector of the heptagrid pointed by~$A$. Applying the rules~$(R_0)$ to~$\tau$, to its sons and, recursively, to the sons of its sons, we define a tree of the heptagrid. Let $v$ be the ray issued from~$A$ which also crosses the \GG-son of~$\tau$. From the rules $(R_0)$ and from Figure~\ref{f_proto_0}, it is not difficult to establish that in $T(\tau)$, $v$ crosses only \GG-tiles. We can notice that time~$m$+1 of Construction~\ref{cons_til} gives us only two possibilities to define the father of a root: indeed, such a father can be neither a \BB-tile nor an \OO-tile, as far as \GG-tiles are never sons of either a \BB-tile or an \OO-tile. Figure~\ref{f_inc_conf} shows us what may happen when a father~$\varphi$ is appended to a \YY- or a \GG-tile which is the central tile~$\kappa$ in the pictures of the figure, the stress being put on the corresponding trees of the heptagrid generated in that way. In the leftmost column of the figure, two pictures illustrate what happens if $\varphi$ has the same status as~$\kappa$. In the central column $\varphi$ has the other status with respect to~$\kappa$. In the rightmost column a new father~$\psi$ is appended to~$\varphi$ with, again, a change in the status. Note that if $\psi$ had the same status as~$\varphi$, the obtained tree would be the same but later appendings could change the situation. Let us call the situation illustrated by the rightmost column an {\bf alternation}. We distinguish \YY\GG\YY- and \GG\YY\GG-alternations where the symbols are self-explanatory. We also note that in an alternation, the rays defining the tree rooted at~$\psi$ are both non-secant with the rays defined by~$\kappa$, so that \hbox{$T(\kappa)\subset T(\psi)$}. \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.3]{nov_til_isol_YY.ps} \includegraphics[scale=0.3]{nov_til_isol_YG.ps} \includegraphics[scale=0.3]{nov_til_isol_YGY.ps} \hfill} \ligne{\hfill \includegraphics[scale=0.3]{nov_til_isol_GG.ps} \includegraphics[scale=0.3]{nov_til_isol_GY.ps} \includegraphics[scale=0.3]{nov_til_isol_GYG.ps} \hfill} \begin{fig}\label{f_inc_conf} \leurre Appending a father to the root \YY{} or \GG{} of a tree of the heptagrid: first line: the root is a \YY-tile; second line: it is a \GG-tile. The rightmost column illustrates the possible alternations. \end{fig} } From those observations, we conclude that there are two basic situations: either, starting from a time~$k$, the father appended at time~$k$+$i$ always has the same status as the root at time~$k$, or there are infinitely many times $i_j$, with $i_j>k$, such that the situation at times~$k$+$i_j$, $k$+$i_j$+1 and $k$+$i_j$+2 is an alternation. Consider the first case. There are two sub-cases: starting from a time~$k$, the appended father is always a \GG-, \YY-tile; we call that sub-case the \GG-, \YY{\bf -ultimate configuration} respectively. First, let us study the case of a \GG-ultimate configuration. In that case, we may infer that the ray issued from the origin of~$T(\tau_k)$ is supported by a line~$\ell$ which also supports the right-hand side ray delimiting $T(\tau_{k+i})$ for $i\in\mathbb N$. Let $A_j$ be the origin of $T(\tau_j)$.
It is not difficult to see that for $i\in\mathbb N$, $T(\tau_{k+i+1})$ is the image of $T(\tau_{k+i})$ under the shift along~$\ell$ which transforms $A_k$ into $A_{k+1}$. Accordingly, the left-hand side ray $u_{i+1}$ defining $T(\tau_{k+i+1})$ and the left-hand side ray $u_i$ defining $T(\tau_{k+i})$ are non-secant. So that the rays $u_i$ define a partition of the half-plane $\pi_\ell$ defined by~$\ell$ which contains $T(\tau_k)$. As a consequence, any tile $\mu$ belonging to $\pi_\ell$ falls within one of the $T(\tau_{k+i})$'s for some $i$. From that, we infer that there is a level~$\lambda$ of $T(\tau_{k+i})$ which contains $\mu$. What happens on the other half-plane~$\pi_r$ also defined by~$\ell$? It is not difficult to see, as shown by the central and the right-hand side pictures of Figure~\ref{f_hepta_tils}, that $\ell$ also crosses a sequence of consecutive \BB-tiles which lie in $\pi_r$. Let $\beta_i$ be the \BB-son of the \GG-tile $\tau_{k+i}$ for $i\in\mathbb N$ and let $\omega_i$ be the \OO-son of $\beta_i$. Then, if $u_i$ and $v_i$ are the left- and right-hand side ray respectively defining $T(\omega_i)$, it can be seen that \hbox{$u_{i+1}=v_i$} for all $i$, $i\geq k$. Indeed, the shift which transforms $\tau_{k+i}$ into $\tau_{k+i+1}$ also transforms $\beta_i$ into $\beta_{i+1}$ and, accordingly, it also transforms the line supporting $u_i$ into that supporting $u_{i+1}$ which is $v_i$. From that property, it easily follows that \hbox{$T(\omega_{i+1})\cap T(\omega_i)=\emptyset$} when $i\geq k$. Accordingly, any tile $\mu$ of $\pi_r$ which is not a \BB-tile having two sides crossed by $\ell$ falls within $T(\omega_i)$ for some~$i$ in $\mathbb N$. Consider the second sub-case when, starting from some~$k$, $\tau_{k+i}$ is a \YY-tile for all~$i$, $i\in\mathbb N$. Call that case the {\bf \YY-ultimate configuration}. This time all those tiles are crossed by the line~$\ell$ which supports the left-hand side ray defining $T(\tau_k)$. Similarly, a shift along $\ell$ transforms each $T(\tau_{k+i})$ into $T(\tau_{k+i+1})$. Let $u_i$ be the left-hand side ray defining $T(\tau_{k+i})$. The same shift also transforms $u_i$ into $u_{i+1}$ so that those rays define a partition of the half-plane $\pi_1$ defined by $\ell$ which contains the $\tau_{k+i}$'s. Accordingly, for any tile~$\mu$ in $\pi_1$, there is an integer~$i$ in $\mathbb N$ such that $\mu\in T(\tau_{k+i})$. Let $\pi_2$ be the other half-plane defined by~$\ell$. From Figure~\ref{f_proto_0}, we know that $\ell$ also crosses a sequence of \OO-tiles: on the level of~$\tau_{k+i+1}$, there is an \OO-tile which shares a side with that node and whose \OO-son shares a side with $\tau_{k+i}$, being on the other side of~$\ell$ with respect to $\tau_{k+i}$. Let $\omega_{k+i}$ be the \OO-tile sharing a side with both $\tau_{k+i+1}$ and $\tau_{k+i}$. From what we said, the $\omega_{k+i}$'s belong to $\pi_2$. The same shift as that which transforms $T(\tau_{k+i})$ into $T(\tau_{k+i+1})$ transforms $T(\omega_{k+i})$ into $T(\omega_{k+i+1})$. We can note that the $T(\omega_{k+i})$ have their right-hand side ray supported by~$\ell$ and the left-hand side ray $u_{k+i}$ defining that tree is such that $u_{k+i+1}$ is the image of $u_{k+i}$ under the shift which transforms $T(\omega_{k+i})$ into $T(\omega_{k+i+1})$. The $u_{k+i}$'s define a partition of $\pi_2$ so that each tile~$\nu$ in $\pi_2$ belongs to some $T(\omega_{k+i})$.
In fact, as far as \hbox{$T(\omega_{k+i})\subset T(\omega_{k+i+1})$}, once $\nu$ is in $T(\omega_{k+i})$ it is also in all $T(\omega_{k+j})$ with $j>i$. \def\reunion{\mathop{\cup}} Consider the second case. For each $\tau_i$ defined in Construction~\ref{cons_til}, we define a segment joining the centre of that tile to the centre of~$\tau_{i+1}$. The union of those segments constitutes an infinite broken line which splits $\mathcal C$, the complement in the hyperbolic plane of the set of points contained in the tiles of $T(\tau_0)$, into two parts. In the second case, we know that there is a sequence $\{i_j\}_{j\in\mathbb N}$ of times at which we have an alternation. Whether the alternation is \GG\YY\GG{} or \YY\GG\YY, we noted that the left-, right-hand side rays defined by $T(\tau_{i_j+2})$ are non-secant with the left-, right-hand side rays respectively defined by $T(\tau_{i_j})$. The consequence is that in both cases the rays defined at the alternations constitute a partition of~$\mathcal C$. Accordingly, if $\cal T$ is the set of tiles of the heptagrid, then \hbox{${\cal T}=\displaystyle{\reunion\limits_{j\in\mathbb N} {T(\tau_{i_j})}}$}. So that if $\mu$ is a tile, it belongs to $T(\tau_0)$ or to $T(\tau_{i_j})$ for some~$j$. That allows us to prove: \begin{lem}\label{absorb} Let $\{\tau_i\}_{i\in\mathbb Z}$ be a sequence of tiles in a tiling constructed with the rules~$(R_0)$, such that for all $i$ in $\mathbb Z$ \hbox{$T(\tau_i)\subset T(\tau_{i+1})$}. If the sequence is in an alternating configuration, then for any tile~$\mu$ there is an index $i$ such that $\mu$ falls within $T(\tau_i)$. If it is not the case: If the sequence is in a \YY-ultimate configuration, let $k$ be the smallest integer such that $\tau_i$ is a \YY-tile for all $i\geq k$ and let $\omega_i$ be the tile sharing a side with $\tau_{i+1}$ and another one with $\tau_i$. Then for any tile $\mu$ there is either an index $i$ such that $\mu$ falls within $T(\tau_i)$ or there is an index $j$, $j\geq k$, such that $\mu$ falls within $T(\omega_j)$. If the sequence is in a \GG-ultimate configuration, let $k$ be the smallest integer such that $\tau_i$ is a \GG-tile for all $i\geq k$, let $\beta_i$ be the \BB-son of $\tau_i$ for $i\geq k$ and let $\omega_i$ be the \OO-son of $\beta_i$ for $i\geq k$ too. Then for any tile $\mu$ which is not a $\beta_i$ with $i\geq k$, there is an index $i$ such that $\mu$ falls within $T(\tau_i)$ or there is an index $j$, $j\geq k$, such that $\mu$ falls within $T(\omega_j)$. \end{lem} Lemma~\ref{absorb} and Proposition~\ref{psubtree_levels} allow us to call {\bf isocline} the bi-infinite extension of any level of a tree $T(\tau_m)$, for any value of $m$ in $\mathbb N$. Note that Figure~\ref{f_proto_0} allows us to define isoclines in a \GG-ultimate configuration by completing the levels for the exceptional \BB-tiles indicated in the Lemma and by joining the levels of the $T(\omega_i)$ with the corresponding $T(\tau_i)$. Note that the pictures of Figure~\ref{f_hepta_tils} also represent the tilings with their isoclines. Let $T$ be a tree of the heptagrid, let $\rho$ be its root and let $\tau$ be a tile in $T$. Say that a path $\pi=\{\pi_i\}_{i\in\{0..n\}}$ joining $\rho$ to~$\tau$ is a {\bf tree path} if and only if for each non-negative integer $i$, with $i<n$, $\pi_{i+1}$ is a son of $\pi_i$. It is not difficult to see that, in that case, $n$ is the level of $\tau$, which can easily be proved by induction on~$n$.
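\smallskip \noindent As a quick sanity check of the level counts recalled above, the following small Python sketch (ours, not part of the construction) expands the rules~$(R_0)$ level by level from a non-\BB{} root and verifies that level~$m$ contains $f_{2m+1}$ tiles, with \hbox{$f_0=f_1=1$}; the count does not depend on which non-\BB{} status is chosen for the root.
\begin{verbatim}
# A minimal sketch (not from the paper): expand the substitution rules (R_0)
# on strings of statuses and check that level m of a tree of the heptagrid
# rooted at a non-B tile contains f_{2m+1} tiles, where f_0 = f_1 = 1.
RULES = {"G": "YBG", "B": "BO", "Y": "YBG", "O": "YBO"}   # rules (R_0)

def next_level(level):
    # sons of every tile of the current level, read from left to right
    return "".join(RULES[status] for status in level)

def fib(n):                       # Fibonacci numbers with f_0 = f_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for root in "YGO":                # any non-B status may serve as the root
    level = root
    for m in range(8):
        assert len(level) == fib(2 * m + 1)
        level = next_level(level)
print("level sizes match f_{2m+1} for m = 0..7")
\end{verbatim}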
A {\bf branch} in a tree~$T$ of the heptagrid is an infinite sequence $\beta=\{\beta_i\}_{i\in\mathbb N}$ such that for each $i$ in $\mathbb N$, $\beta_{i+1}$ is a son of $\beta_i$. Accordingly, a tree path in $T$ is a path from the root of $T$ to a tile~$\tau$ in $T$ which is contained in the branch of~$T$ which passes through $\tau$. We can state: \dprop{psepar_char} Let $T$ be a tree of the heptagrid. Let $\mu$ and $\nu$ be tiles in $T$. Let $\pi_\mu$, $\pi_\nu$ be the tree paths from the root $\rho$ of $T$ to $\mu$, $\nu$ respectively. Then, $T(\mu)\subset T(\nu)$ if and only if $\pi_\mu\subset \pi_\nu$ and $T(\mu)\cap T(\nu) = \emptyset$ if and only if we have both $\pi_\mu\not\subset \pi_\nu$ and $\pi_\nu\not\subset\pi_\mu$. \fprop \noindent Proof. Assume that $\pi_\mu\subset\pi_\nu$. As far as the rules $(R_0)$ applied to $T(\rho)$ have the same result when they are applied in $T(\nu)$ and in $T(\mu)$, we have that \hbox{$T(\mu)\subset T(\nu)\subset T(\rho)$} and, clearly, from Proposition~\ref{psubtree_levels} we get that \hbox{$\pi_\mu\subset\pi_\nu$}. Presently, consider $\mu$ and $\nu$ and let $\pi_\mu$, $\pi_\nu$ be the tree paths from $\rho$ to $\mu$, $\nu$ respectively. Consider $\pi_\mu\cap\pi_\nu$. There is a tile~$\beta$ belonging to $\pi_\mu$ and $\pi_\nu$ such that $\pi_\beta$, the tree path from $\rho$ to~$\beta$, satisfies $\pi_\beta=\pi_\mu\cap\pi_\nu$. Clearly, $\pi_\beta$ is the greatest common path of $\pi_\mu$ and $\pi_\nu$. Consider the case when $\beta=\nu$. It means that $\pi_\mu\subset\pi_\nu$. From what we already proved, we have that $T(\mu)\subset T(\nu)$. Consider the case when $\pi_\mu\not\subset\pi_\nu$. In that case, the length of $\pi_\beta$ is shorter than both that of $\pi_\mu$ and that of $\pi_\nu$. It means that one son of $\beta$, say $\beta_\mu$, belongs to $\pi_\mu$ and the other, say $\beta_\nu$, belongs to $\pi_\nu$ and $\beta_\mu$, $\beta_\nu$ do not belong to $\pi_\nu$, $\pi_\mu$ respectively by definition of $\beta$. Now, $\mu$ and $\nu$ are non \BB-tiles. If $\beta$ is a non \BB-tile, we have three cases as far as $\mu$ and $\nu$ can be exchanged if needed: \vskip 0pt $(i)$ $\beta_\mu$ is the \YY-son and $\beta_\nu$ is the \GG- or \OO-son; \vskip 0pt $(ii)$ $\beta_\mu$ is the \YY-son and $\beta_\nu$ is the \BB-son; \vskip 0pt $(iii)$ $\beta_\mu$ is the \BB-son and $\beta_\nu$ is the \GG- or \OO-son. \vskip 0pt In the case $(i)$, Figure~\ref{f_tree} shows us that the right-hand side ray~$u$ of $T(\beta_\mu)$ and the left-hand side ray~$v$ of $T(\beta_\nu)$ have a common perpendicular which is the line containing the side which the \BB-son of $\beta$ shares with $\beta$. So that $T(\beta_\mu)\cap T(\beta_\nu)=\emptyset$. In the case $(ii)$, as far as $\nu$ is a non \BB-tile, there is a tile $\gamma$ in $\pi_\nu$ which is a descendant of $\beta$ and which is the first non \BB-tile on $\pi_\nu$ in between $\beta_\nu$ and $\nu$. It may happen that $\gamma=\nu$. In that case, all tiles on $\pi_\nu$ after $\beta$ and until $\gamma$ are \BB-tiles. As can be seen from Figure~\ref{f_tree}, those \BB-tiles are crossed by~$u$. We have that $\gamma$ is an \OO-tile and if $w$ is the left-hand side ray of $T(\gamma)$, $u$ and $w$ have a common perpendicular as can be seen in Figure~\ref{f_tree} for the \OO-son of the \BB-son of the tree which is there represented. Accordingly, $T(\beta_\mu)\cap T(\gamma)=\emptyset$. As far as $T(\nu)\subset T(\gamma)$, we again get that $T(\mu)\cap T(\nu)=\emptyset$. In the case $(iii)$, we can argue in a similar way.
This time, let $y$ be the left-hand side ray of $T(\beta_\nu)$. If $\gamma$ is the \OO-son of $\beta_\mu$, Figure~\ref{f_tree} shows us that $T(\gamma)\cap T(\beta_\nu)=\emptyset$ as far as $y$ is the right-hand side ray of $T(\gamma)$. Now, if $\pi_\mu$ does not pass through $\gamma$ it passes outside the left-hand side ray $z$ of $T(\gamma)$. Accordingly, $T(\mu)\cap T(\beta_\nu)=\emptyset$, so that, all the more, \hbox{$T(\mu)\cap T(\nu)=\emptyset$}. Assume that $T(\mu)\cap T(\nu)=\emptyset$. Clearly, $\pi_\mu\not\subset\pi_\nu$ and also $\pi_\nu\not\subset\pi_\mu$. So that we have the situation depicted with $\pi_\beta=\pi_\mu\cap\pi_\nu$ and both $\pi_\beta\not=\pi_\mu$ and $\pi_\beta\not=\pi_\nu$. Consequently, if $\pi_\nu\not\subset\pi_\mu$ and $\pi_\mu\not\subset\pi_\nu$ do not both hold, then necessarily $\pi_\nu\subset\pi_\mu$ or $\pi_\mu\subset\pi_\nu$, so that $T(\mu)\subset T(\nu)$ or $T(\nu)\subset T(\mu)$ holds.\hfill$\Box$ We have an important property: \begin{lem}\label{separ_trees} Two distinct trees of the heptagrid are either disjoint or one of them contains the other. \end{lem} The lemma is an immediate corollary of Proposition~\ref{psepar_char}. Moreover, from the proof of that proposition, we clearly get the following result: \dprop {ptreepath} Let $T(\tau)$ be a tree of the heptagrid. Let $T(\mu)$ be another tree of the heptagrid with $\mu$ within $T(\tau)$. Let $\pi_\mu$ be a tree path from the root of $T(\tau)$ to $\mu$. Then $\pi_\mu$ contains at least one tile $\nu$ which is not a \BB-tile. Moreover, for any non \BB-tile $\omega$ in $\pi_\mu$, we have \hbox{$T(\omega)\subset T(\tau)$}. \fprop \def\ww{{\bf w}} \def\bb{{\bf b}} Note that in Figure~\ref{f_proto_0}, the curves representing the isoclines are constituted by two kinds of segments defined as follows. Those segments join the mid-points of two different sides of a tile: one kind, denoted by \ww, is defined by joining two sides which are separated by one side, namely joining side~2 and side~7; the other kind, denoted by \bb, is defined by joining two sides which are separated by two contiguous sides, namely joining side~2 and side~6 or joining side~3 and side~7. Call these marks on a tile its {\bf level mark}. The distribution of the level marks obeys the following rules: \vskip 5pt \ligne{\hfill \ww $\rightarrow$ \bb\ww\ww\hskip 30pt \bb $\rightarrow$ \bb\ww,\hfill$(S)$ \hskip 10pt} \vskip 10pt \noindent \begin{lem}\label{levels} It is not difficult to tile the heptagrid with the prototiles \YY, \GG, \BB{} and \OO{} by applying the rules~$(R_0)$ so that the rules of~$(S)$ also apply if we put \ww{} marks on \BB- and \OO-tiles only and \bb{} marks on \YY- and \GG-tiles only. \end{lem} \noindent Proof. Inside a tree of the heptagrid, the result follows by induction on the levels. If we consider two trees of the heptagrid where one of them contains the other, the result follows from Proposition~\ref{psubtree_levels} which tells us that the levels of a sub-tree in a tree of the heptagrid are contained in levels of the tree. From Lemma~\ref{absorb}, it is possible to construct a sequence of trees of the heptagrid $\{T(\tau_i)\}_{i\in \mathbb N}$ such that $\displaystyle{\reunion\limits_{i\in\mathbb N} T(\tau_i)}$ covers the whole hyperbolic plane, so that the lemma follows. \hfill$\Box$ Note that in \ww-tiles, sides~2 and~7 are joined by the mark while in \YY-tiles it is the case for sides~3 and~7 and in \GG-tiles it is the case for sides~2 and~6.
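\smallskip \noindent Anticipating a remark made in the next paragraph, here is a tiny Python sketch (ours, using only the son relation of the rules~$(R_0)$ as input) which computes, for each status, the least number of son-steps needed to reach a \GG-tile; the worst case, attained by a \BB-tile through its \OO-son, then a \YY-son, then a \GG-son, is~$3$.
\begin{verbatim}
# A tiny sketch (ours, not from the paper): from the son relation of the
# rules (R_0), compute for each status the least number of son-steps
# needed before a G-tile appears among the descendants.
SONS = {"G": "YBG", "B": "BO", "Y": "YBG", "O": "YBO"}

def steps_to_G(status, seen=()):
    if status == "G":
        return 0
    if status in seen:                      # cut the B -> B -> ... branch
        return float("inf")
    return 1 + min(steps_to_G(s, seen + (status,)) for s in SONS[status])

print({s: steps_to_G(s) for s in "GYBO"})   # {'G': 0, 'Y': 1, 'B': 3, 'O': 2}
\end{verbatim}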
Construction~\ref{cons_til} allows us to tile the whole hyperbolic plane in infinitely many ways. The number of such tilings is uncountable as far as at each time we have a choice between two possibilities and the number of steps is infinite. It can be argued that any construction of a tiling which, by definition, starts with any tile, is in some sense described by Construction~\ref{cons_til}. Indeed, whatever the starting tile, we find at some point a \GG-tile as far as in a tiling, there is a \GG-tile at distance at most~3 from any tile~$\mu$. That distance can be observed for a \BB-tile: its \OO-son has a \YY-son which in its turn has a \GG-son. \subsubsection{The trees of the tiling}\label{sss_treetil} From now on, we introduce two new colours for the tiles, mauve and red, which we denote by \MM{} and \RR{} respectively. We decide that \MM-tiles duplicate the \BB-tiles when they are sons of a \GG-tile, and only in that case, and that an \RR-tile duplicates the \OO-son of an \MM-tile, so that the rules $(R_0)$ are replaced by the following ones: \vskip 5pt \ligne{\hfill $\vcenter{\vtop{\leftskip 0pt\parindent 0pt\hsize 280pt \ligne{\hfill \GG{} $\rightarrow$ \YY\MM\GG,\hskip 20pt \BB{} $\rightarrow$ \BB\OO,\hskip 20pt \YY{} $\rightarrow$ \YY\BB\GG,\hskip 20pt \OO{} $\rightarrow$ \YY\BB\OO \hfill} \ligne{\hfill \RR{} $\rightarrow$ \YY\BB\OO,\hskip 20pt \MM{} $\rightarrow$ \BB\RR, \hfill} }}$ \hfill$(R_1)$\hskip 10pt} \vskip 5pt As previously, the status of a tile is, by definition, its colour. Figure~\ref{f_proto} illustrates the application of the rules~$(R_1)$ by giving what they induce for the neighbours of a tile given its status and the status of its father. As in Figure~\ref{f_proto_0}, we also define levels by using the rules~$(S)$. As far as a \BB-tile or an \OO-tile is \ww, \MM- and \RR-tiles are also \ww. As in Figure~\ref{f_proto_0}, the number of central tiles associated with the same status is the number of occurrences of the status in the right-hand side parts of the rules~$(R_1)$. We call {\bf tree of the tiling} any $T(\nu)$ where $\nu$ is an \RR-tile. We repeat that a tree of the tiling is a set of tiles, not the set of points contained in those tiles. We also indicate here that, as far as \MM-, \RR-tiles behave like \BB-, \OO-tiles respectively, we later refer to \BB-, \OO-tiles only unless the specificity of \MM-, \RR-tiles is required. From Lemma~\ref{separ_trees} we can state: \begin{lem}\label{par_trees} Let ${\mathcal T}_1$ and ${\mathcal T}_2$ be two trees of the tiling. Either those trees are disjoint or one of them contains the other. Moreover, a ray which delimits one of those trees does not intersect any of the rays delimiting the other tree. The same also applies to the rightmost and the leftmost branches of those trees. \end{lem} \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.5]{nov_neigh_tiles.ps} \hfill} \begin{fig}\label{f_neigh} \leurre Figure used to establish Figures~\ref{f_proto_0} and \ref{f_proto}. Each circle crosses the neighbours of the tile which contains its centre. The circles have the colour of the tiles containing their centres. \end{fig} } \hfill} \vskip 10pt \noindent Proof. Immediate from the proof of Proposition~\ref{psepar_char} when the trees are disjoint: the rays do not intersect and the property follows for the borders as far as they are inside the considered trees.
We have to look at the situation when \hbox{$T(\tau_1)\subset T(\tau_0)$}, where $T(\tau_0)$ and $T(\tau_1)$ are two trees of the tiling. Consider the tree path $\pi$ in $T(\tau_0)$ joining $\tau_0$ to $\tau_1$. When going on~$\pi$ from $\tau_0$ to $\tau_1$, let $\nu$ be the last non \BB-tile of~$\pi$ different from $\tau_1$. From Proposition~\ref{ptreepath}, we know that \hbox{$T(\tau_1)\subset T(\nu)$}. Let $u$ and~$v$ be the rays delimiting $T(\nu)$ and let $u_1$ and $v_1$ be those delimiting $T(\tau_1)$. We repeat here the discussion of cases $(i)$ up to $(iii)$ in the proof of Proposition~\ref{psepar_char}. We have seen there that the same observation about the rays holds, so that it extends to the borders as far as they are inside the trees and as far as two rays cannot both cross a border. The proof of Lemma~\ref{par_trees} is completed.\hfill$\Box$ Figure~\ref{f_til_trees} illustrates Lemma~\ref{par_trees}. Note that the figure does not mention all trees of the tiling which can be drawn within the limits of that figure. Let us go back to the process described by Construction~\ref{cons_til}. The process leads us to introduce the following notion: \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.35]{proto_tiles.ps} \hfill} \vspace{-10pt} \begin{fig}\label{f_proto} \leurre The prototiles generating the tiling: we describe all cases for the neighbourhood of a tile, whatever it is: \BB, \YY, \OO, \GG, \MM{} or \RR. The neighbourhoods around a tile of the same colour correspond to the different occurrences of that colour in the right-hand side part of rules~$(R_1)$. \end{fig} } \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.6]{til_seeds.ps} \hfill} \begin{fig}\label{f_til_trees} \leurre A tree~$\mathcal T$ of the tiling with three sub-trees of the tiling contained in $\mathcal T$. One of them is contained in another one while two of those trees of the tiling inside $\mathcal T$ are disjoint. \end{fig} } \begin{defn}\label{thread} A {\bf thread} is a set $\cal F$ of trees of the tiling such that: \vskip 2pt {\leftskip 20pt\parindent 0pt $(i)$ if $A_1,A_2\in{\cal F}$, then either $A_1\subset A_2$ or $A_2\subset A_1$; $(ii)$ if $A\in{\cal F}$, then there is $B\in{\cal F}$ with $B\subset A$, the inclusion being proper; $(iii)$ if $A_1,A_2\in{\cal F}$ with $A_1\subset A_2$ and if $A$ is a tree of the tiling with $A_1\subset A$ and $A\subset A_2$, then $A\in {\cal F}$. \par} \end{defn} Said in words, a thread is a set of trees of the tiling on which the inclusion defines a linear order, which has no smallest element and which contains all trees of the tiling which belong to a segment of the set, according to the order defined by inclusion. \begin{defn}\label{ultrathread} A thread $\cal F$ of the tiling is called an {\bf ultra-thread} if it possesses the following additional property: \vskip 2pt {\leftskip 20pt\parindent 0pt $(iv)$ there is no $A\in{\cal F}$ such that for all $B\in{\cal F}$, $B\subset A$. \par} \end{defn} \begin{lem}\label{ultrathread_defs} A set $\cal F$ of trees of the tiling is an ultra-thread if and only if it possesses properties~$(i)$, $(ii)$ and~$(iii)$ of Definition~{\rm\ref{thread}} together with the following: \vskip 2pt {\leftskip 20pt\parindent 0pt $(v)$ for any $A\in{\cal F}$ and for any tree $B$ of the tiling, if $A\subset B$, then $B\in{\cal F}$. \par} \end{lem} For proving the theorem, we need a kind of converse of Proposition~\ref{ptreepath}. \dprop{inpath} Let $A$ and $B$ be two trees of the tiling with $A\subset B$.
Let $\rho$ be the root of~$B$ and let $\pi$ be the tree path from $\rho$ to the root~$\tau$ of~$A$. Let $C$ be a tree of the tiling such that \hbox{$A\subset C\subset B$}. Then there is a tile~$\nu$ of~$\pi$ such that $C=T(\nu)$.
\fprop
\noindent Proof of the Lemma. Indeed, an ultra-thread satisfies $(v)$. Assume the opposite. There are $A\in{\mathcal F}$ and a tree $B$ of the tiling such that $A\subset B$ and $B\not\in\mathcal F$. Let $C$ be another tree of $\mathcal F$. Then $B\not\subset C$, otherwise, from~$(iii)$, we get $B\in\mathcal F$. From Lemma~\ref{par_trees}, $C\subset B$. As far as $C\in\mathcal F$, we may apply to~$C$ the argument we applied to~$A$. We get a sequence $C_i$ of elements of $\mathcal F$ such that \hbox{$A\subset C_i\subset C_{i+1}\subset B$}. From Proposition~\ref{inpath}, the roots of the $C_i$ belong to the path from the root of~$B$ to that of~$A$, which is contained in a branch of~$B$. Accordingly, as such a path is finite, the sequence of the $C_i$ is also finite. Let $C_m$ be the biggest tree of the sequence of the $C_i$. Repeating the argument applied to~$A$ by applying it to~$C_m$, we get that for any $C$ in $\mathcal F$, $C\subset C_m$, which is a contradiction with $(iv)$. So that an ultra-thread satisfies $(v)$. Conversely, assume that a thread $\cal F$ satisfies $(v)$. Then it obviously satisfies $(iv)$. \hfill$\Box$

\noindent Proof of Proposition~\ref{inpath}. Let $A$, $B$ and $C$ be as in the statement of the proposition. Let $\rho$, $\tau$, $\nu$ be the roots of $B$, $A$, $C$ respectively. Let $\pi_A$, $\pi_C$ be the tree paths from $\rho$ to~$\tau$, to~$\nu$ respectively. Let $\omega$ be the tile on both $\pi_A$ and $\pi_C$ which is the farthest from~$\rho$ and assume that $\pi_A$ and $\pi_C$ go through different sons of~$\omega$. From Proposition~\ref{psepar_char} we get that \hbox{$A\cap C=\emptyset$}, which is impossible. So that necessarily $\omega=\nu$, which means that $\nu\in\pi_A$.\hfill$\Box$

Accordingly, an ultra-thread is a maximal thread with respect to the inclusion. A thread $\mathcal F$ which is not an ultra-thread is said to be {\bf bounded}: there is then a tree $A$ in $\mathcal F$ such that for each $B$ in $\mathcal F$, we get \hbox{$B\subset A$}. In that case, $A$ is called the {\bf bound} of $\mathcal F$. Consider the following construction:
\begin{const}\label{cultra}
{\leftskip 20pt\parindent 0pt\rm \phantom{construction}\\
- time~0: fix an \RR-son $\rho$ of an~\MM-tile which is itself the son of a \GG-tile; let $F_0$ be $T(\rho)$;
- time~1: at the level~3 of~$F_0$, and on its left-hand side border, there is another \RR-tile $\rho_{-1}$: let $F_{-1}$ be $T(\rho_{-1})$; clearly, \hbox{$F_{-1}\subset F_0$}. Repeating that process by induction, we produce a sequence $\{F_n\}_{n\leq0}$ of trees of the tiling such that $F_{i-1}$ is contained in $F_i$ for all non positive~$i$; denote by $\rho_i$ the root of~$F_i$. If $\tau_{i+1}$ is the son of a tile~$\tau_i$, we say that $T(\tau_i)$ {\bf completes} $T(\tau_{i+1})$.
- time~$2n$+1, $n\geq0$: complete~$T(\rho_{2n})$ by $T(\rho_{2n+1})$, where $\rho_{2n+1}$ is an \MM-tile which is the son of a \GG-tile $\omega_{2n+1}$ which we take as the \GG-son of a \YY-tile $\xi_{2n+1}$;
- time $2n$+2: complete $T(\xi_{2n+1})$ by $T(\rho_{2n+2})$, where $\rho_{2n+2}$ is an \RR-tile whose \YY-son is $\xi_{2n+1}$; let $F_{n+1}$ be $T(\rho_{2n+2})$ which contains~$F_n$.
\par}
\end{const}
\vskip 2pt
\dprop{pultra}
The sequence constituted by the $F_n$, $n\in\mathbb Z$, of Construction~{\rm\ref{cultra}} is an ultra-thread.
\fprop \noindent Proof. By construction, \hbox{$F_n\subset F_{n+1}$}. Moreover, as a consequence of Lemma~\ref{par_trees}, we know that the rays delimiting $F_{n+1}$ do not meet those delimiting $F_n$. The linear order follows from the construction itself. Clearly, the property~$(ii)$ is also satisfied. By construction, in between $F_n$ and $F_{n+1}$ there is no tree of the tiling: there are two trees of the heptagrid whose roots are not \RR-tiles. So that the sequence constitute a thread. Clearly, property~$(iv)$ is also satisfied by the construction.\hfill$\Box$ \subsection{Isoclines} \label{subsubsec_iso} We go back to the rules~$(S)$ defined in the Subsection~\ref{ss_pre_tiles}. We proved there that it is possible to tile the plane with the rules~$(R_0)$ so that the rules~$(S)$ also apply provided that \ww-marks are put on \BB- and \OO-tiles exactly and that \bb-marks are put on \YY- and \GG-tiles exactly. In fact, we can prove more: \begin{lem}\label{les_iso} Consider a tiling of the heptagrid with \YY, \GG, \BB{} and \OO{} as prototiles obtained by applying the rules~$(R_0)$. Then, defining \ww-marks on \BB- and \OO-tiles exactly and \bb-marks on \YY- and \GG-tiles exactly, then the \bb- and \ww-marks obey the rules of~$(S)$. \end{lem} Proof. According to the assumption, around any tile of the tiling we have the configurations of Figure~\ref{f_proto_0}. Now, the pictures of the figure satisfy the rules of~$(S)$ if we put \bb- and \ww-marks as stated in the assumption of the lemma. Accordingly, the tiling also obey the rules of $(S)$ if we consider \bb- and \ww-marks only.\hfill$\Box$ Figure~\ref{f_isoc} illustrates the property that the levels of a tree of the tiling coincide with those of its sub-trees which are also trees of the tiling. We already noticed that property for the trees of the heptagrid, see Proposition~\ref{psubtree_levels}. That allows us to continue the levels to infinity on both sides of a tree of the tiling. We call {\bf isoclines} the curves obtained by continuing the levels of trees in~$\cal T$. In the sequel, it will be important to mark the path of some isoclines on each tile of the tiling. The isoclines are unchanged if some \BB- and \OO-tiles are replaced by \MM- and \RR-ones respectively provided that rules $(R_1)$ are applied. Figure~\ref{f_proto} shows us that the levels are also defined in the same way. \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.6]{nov_til_isoclines.ps} \hfill} \begin{fig}\label{f_isoc} \leurre Illustration of the levels in the tiling. Seven of them are indicated in the figure. Four trees of the tiling are shown with the rays defining the corresponding tree of the tiling. \end{fig} } \vskip 5pt \noindent \subsection{Constructing an aperiodic tiling}\label{ss_aperiod} We remind the reader that in the heptagrid, a tiling is periodic if there is a shift~$\tau$ such the tiling is globally invariant under the application of~$\tau$. The goal of the present subsection is to prove: \begin{thm}\label{th_aperiod} There is a tiling of the heptagrid which is not periodic. It can be constructed with $157$ prototiles. \end{thm} We presently turn to the construction which will be reused to prove Theorem~\ref{undec}. The idea is the following: with trees of the tiling, we define an infinite family of triangles which do not intersect and which are bigger and bigger. That property entails that the tiling cannot be periodic. The construction is performed as follows. We define two families of {\bf trilaterals}, a red one and a blue one. 
In each family, we have {\bf triangles} and {\bf phantoms}, so that we have blue and red triangles and also blue and red phantoms. We call them the {\bf interwoven} triangles as far as the blue trilaterals generate the red ones which, in their turn, generate the blue trilaterals. Blue and red are the {\bf colours} of the trilaterals. Blue and red are said {\bf opposite colours}. Triangle or phantom is the {\bf attribute} of a trilateral. Triangle and phantom are said of {\bf opposite attributes}. The first steps of the construction are represented by Figure~\ref{f_interwov}. Although the figure is drawn in the Euclidean plane, it can be implemented in the heptagrid. We require that triangles of the same colour do not intersect each other. They will be implemented by following trees of the tiling as far as the borders of such a tree do not intersect those of another one. The legs of a triangle or those of a phantom will follow the borders of a tree $T(\tau)$ of the tiling. The basis of the triangle or of the phantom will follow a level of $T(\tau)$. For properties shared by both triangles and phantoms whichever the colour, we shall speak of trilaterals. For the set of all trilaterals, we shall speak of the {\bf interwoven triangles}.

For the construction, we consider a sequence of \RR-tiles $\{\rho_i\}_{i\in \mathbb N}$ such that for each $i$ in $\mathbb N$, \hbox{$T(\rho_{i+1})\subset T(\rho_i)$}, and such that $\rho_{i+1}$ is the \RR-son of an \MM-tile which is the \MM-son of a \GG-tile, that \GG-tile being the \GG-son of the \YY-son of $\rho_i$. We say that the pattern \YY\GG\MM\RR{} joins $\rho_i$ to~$\rho_{i+1}$. Now, we require that $\rho_0$ belongs to an isocline, chosen at random and which we call {\bf isocline 0}. We number the isoclines with numbers in $\mathbb Z$. Each isocline $8n$, $n\in\mathbb Z$, is said {\bf green} and each isocline $n$ with \hbox{$n \equiv 4 \pmod 8$} is said {\bf orange}. Under that condition, the sequence of the $\rho_i$ is called a {\bf wire}. For any $\rho_i$ we say that $i$ is its {\bf abscissa}. We say that $\rho_{2i+1}$ is the {\bf mid-point} between $\rho_{2i}$ and $\rho_{2i+2}$. Note that, by construction, the mid-point lies on an orange isocline and each $\rho_{2i}$ lies on a green isocline. The role of the green isoclines is to construct the generation~0 of the trilaterals whose colour is blue. Each \RR-tile on a green isocline, which is called a {\bf primary seed}, triggers a trilateral; moreover, for each $i$ in $\mathbb N$, the trilaterals raised at $\rho_{2i}$ and $\rho_{2(i+1)}$ have the same colour and opposite attributes. The \RR-tiles on an orange isocline raise the {\bf principal seeds} which trigger a blue or a red trilateral.
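For the reader who wants to check the above bookkeeping, here is a small sketch which encodes the rules~$(R_1)$ together with the colouring of the isoclines; it is written in Python, for illustration only, and the names it uses are ours, not part of the construction.
\begin{verbatim}
# Sketch, for illustration only, of the rules (R_1) and of the
# colouring of the isoclines: green for the isoclines 8n, orange
# for the isoclines n with n = 4 (mod 8), blue for the other ones.

RULES_R1 = {              # status of a tile -> statuses of its sons
    'G': ['Y', 'M', 'G'],
    'B': ['B', 'O'],
    'Y': ['Y', 'B', 'G'],
    'O': ['Y', 'B', 'O'],
    'R': ['Y', 'B', 'O'],
    'M': ['B', 'R'],
}

def isocline_colour(n):
    if n % 8 == 0:
        return 'green'
    if n % 8 == 4:
        return 'orange'
    return 'blue'

def next_level(level):
    # apply the rules (R_1) to a whole level of statuses
    return [son for status in level for son in RULES_R1[status]]

# A seed rho_i has the abscissa i and sits on the isocline 4i, so
# that even abscissas fall on green isoclines and odd abscissas on
# orange ones, in accordance with the text.
for abscissa in range(4):
    print(abscissa, isocline_colour(4 * abscissa))

# Statuses three levels below a seed, obtained by the rules (R_1).
level = ['R']
for _ in range(3):
    level = next_level(level)
print(level)
\end{verbatim}
\noindent The first loop confirms that the seeds of even abscissa sit on green isoclines and those of odd abscissa on orange ones, provided that, as above, $\rho_0$ sits on isocline~0.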
\begin{const}\label{c_interwov}
\phantom{construction}\\
Along each wire $\{\rho_i\}_{i\in\mathbb N}$ of the tiling:

$-$ step $0$ defines the trilaterals of generation~$0$ which are blue; $\rho_{2i}$ emits legs of a trilateral $T_0$ which are stopped by the isocline passing through $\rho_{2i+2}$; $\rho_{2i+2}$ emits legs of a trilateral $T_1$ which has the same colour as $T_0$ but the opposite attribute with respect to~$T_0$; the $\rho_{2i+1}$ which lies inside a triangle of generation~0 emits a red trilateral; let $S_1$ and $S_2$ be the trilaterals raised at that $\rho_{2i+1}$ and at $\rho_{2i+5}$ respectively for the same $i$; $S_1$ and $S_2$ are both red and they have opposite attributes; accordingly, the basis of $S_1$ is raised at $\rho_{2i+5}$; the seeds at the $\rho_{2i+1}$ also emit a mauve signal along their orange isocline from side to side;

$-$ step $n$$+$$1$, $n\in\mathbb N$: for each trilateral $T$ of the generation~$n$, let $\rho_i$ be its vertex and let $\rho_j$ emit its basis; then $\rho_k$ is its {\bf mid-point} where $k$ satisfies \hbox{$2k=i+j$}; also, $j$$-$$i$ is the {\bf height} of~$T$; the isocline passing through $\rho_k$ is called the {\bf mid-line} of~$T$; then for each triangle~$T_0$ of the generation~$n$, its mid-point emits the vertex of a trilateral~$T_1$ and the basis of a trilateral~$T_2$; $T_1$ and $T_2$ have opposite attributes and both have the opposite colour with respect to~$T_0$; when the mauve signal~$\mu$ emitted at step~$0$ is accompanied by the basis of a phantom, it is stopped by the legs of the first triangle~$T$ which it meets and the isocline of~$\mu$ is the mid-line of $T$; when the mauve signal is accompanied by the basis~$\beta$ of a triangle~$T$, it is stopped by the first legs of the same colour as~$\beta$, which completes the construction of~$T$; the trilaterals of the generation~$n$$+$$1$ are the trilaterals whose vertex is raised at the mid-point of a triangle of the generation~$n$.
\end{const}

The construction is illustrated by Figure~\ref{f_interwov}.

\dprop{tri_indices}
The trilaterals of the odd generations are red, those of the even generations are blue. If $h_n$ is the height of a trilateral of the generation~$n$, we have $h_n=2^{n+1}$. The abscissa $\xi_{n,m}$ of the vertex of the $m^{\rm th}$ trilateral of the generation~$n$, $m\in \mathbb N$, is given by \hbox{$\xi_{n,m} = 2^n-1+m.2^{n+1}$}, assuming that $\xi_{0,0}=0$.
\fprop

\noindent Proof. As far as the trilaterals of generation~0 are blue, the trilaterals of generation~1 are red and those of generation~2 are blue so that, by induction, the trilaterals of an odd generation are red and those of an even generation are blue. By construction, the abscissas of the mid-points of the trilaterals of generation~0 are $2m+1$, $m\in\mathbb N$. As far as $h_0=2$ trivially holds, the formula is true for generation~0. We also have that the abscissas of the heads of the trilaterals of generation~0 are $2m$, $m\in\mathbb N$, which also satisfies the formula of the proposition. From Construction~\ref{c_interwov}, as far as $\xi_{0,0}=0$, we can see that $\xi_{1,0}=1$. From Construction~\ref{c_interwov}, we can also see that $\xi_{n+1,0}=\xi_{n,0}$+$h_n/2$, the vertex of a trilateral of the generation~$n$$+$$1$ being raised at the mid-point of a triangle of the generation~$n$, and that $h_{n+1}=2h_n$. As far as $\xi_{0,0}=0$, we get that $\xi_{n,0}=2^n-1$, from which we obtain the formula of the proposition. \hfill$\Box$
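As an illustration of Proposition~\ref{tri_indices}, the following small sketch, written in Python and with names which are ours, lists the heights together with the abscissas of the vertices and of the mid-points of the first trilaterals of the generations 0, 1 and~2, the mid-point being computed here as the point half-way between the vertex and the basis.
\begin{verbatim}
# Illustration of Proposition tri_indices, assuming xi(0,0) = 0.

def height(n):                 # h_n
    return 2 ** (n + 1)

def vertex(n, m):              # xi_{n,m}
    return 2 ** n - 1 + m * 2 ** (n + 1)

def mid_point(n, m):           # half-way between vertex and basis
    return vertex(n, m) + height(n) // 2

for n in range(3):
    print(n, height(n),
          [vertex(n, m) for m in range(4)],
          [mid_point(n, m) for m in range(4)])

# n = 0:  h = 2,  vertices 0, 2, 4, 6,    mid-points 1, 3, 5, 7
# n = 1:  h = 4,  vertices 1, 5, 9, 13,   mid-points 3, 7, 11, 15
# n = 2:  h = 8,  vertices 3, 11, 19, 27, mid-points 7, 15, 23, 31
\end{verbatim}
\noindent On those values, one can already observe that every second mid-point of a generation is again the mid-point of a trilateral of the next generation, a remark which is made precise just below.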
Denote by $\mu_{n,m}$ the mid-point of the $m^{\rm th}$ trilateral of the generation~$n$. From the proof of the proposition, we note that $\mu_{n,m}=(m$+$1).2^{n+1}-1=(2m$+$2).2^n-1$, which means that $\mu_{n,m}$ is also the mid-point of a trilateral of the previous generation. In fact, each second mid-point of trilaterals of the previous generations is still the mid-point of a trilateral of the generation~$n$. The other mid-points are mid-points of triangles so that they emit vertices of trilaterals of the generation~$n$. That also proves the correctness of the construction. Note that the proof is illustrated by Figure~\ref{f_interwov}. The reader is referred to the Appendix for other pictures illustrating the first five steps of the construction. Together with Proposition~\ref{tri_indices}, we have additional properties:

\begin{lem}\label{inter_gene}
A trilateral~$T$ of the generation~$n$$+$$1$ contains a single phantom~$P$ of the generation~$n$ and there are two triangles $T_0$, $T_1$ of the generation~$n$ such that $T_0$, $T_1$ contain the vertex, the basis of~$T$ respectively, in both cases on their mid-line. Moreover, $T$ and $P$ have the same mid-line. A trilateral~$T$ of the generation~$n$$+$$2$ contains three trilaterals of the generation~$n$ which are of the same colour when $n\geq 1$, two of them being triangles and, in between them, a phantom~$P$, the third one. Also, $T$ and $P$ have the same mid-point.
\end{lem}

\noindent Proof. The lemma is an easy consequence of Proposition~\ref{tri_indices}. Taking the notations of the proposition, note that
\vskip 0pt
$\xi_{n+1,m+1}=2^{n+1}-1+(m$+$1).2^{n+2}=\xi_{n+1,m}+2^{n+2}=\xi_{n+1,m}+2^n+2^{n+1}+2^n$

Moreover, $\xi_{n+1,m}=2^{n+1}-1+m.2^{n+2}=2^n-1+2m.2^{n+1}+2^n$\\
$=\xi_{n,2m}+2^n$,

\noindent which proves the first assertion of the lemma, as far as $2^n$ is half the height of $T_1$. Note that the abscissa of the mid-point of~$T$ is $\xi_{n+1,m}+2^{n+1}$ while that of $P$ is $\xi_{n,2m+1}+2^n$. An easy computation shows us that the two abscissas are equal. Similarly,\\
$\xi_{n+2,m+1}= 2^{n+2}-1+(m$+$1).2^{n+3}=\xi_{n+2,m}+4.2^{n+1}=\xi_{n+2,m}+2^n+3.2^{n+1}+2^n$,\\
\noindent which proves the last part of the lemma. For what concerns the mid-points, the proof follows from the latter computation and from two applications of the first part of the lemma.
\vskip 10pt
\vtop{\leftskip 0pt\parindent 0pt
\ligne{\hfill \includegraphics[scale=0.5]{interwov_demo.ps} \hfill}
\begin{fig}\label{f_interwov}
\leurre Illustrating the construction of the interwoven triangles. We can see how to construct a triangle of the generation~$n$+1 from triangles of the generation~$n$.
\end{fig}
}
We remain with the proof that the attributes of the trilaterals we have found in the above computations are those given by the lemma. For what concerns the connection between a trilateral of the generation~$n$+1 and the trilaterals mentioned in the lemma, we know from Construction~\ref{c_interwov} that $T_0$ and $T_1$ are triangles. Consequently, the trilateral $P$ contained in~$T$ is a phantom as far as the vertex of $P$ belongs to the basis of~$T_0$ and the basis of~$P$ contains the vertex of~$T_1$. Consider a trilateral~$T$ of the generation~$n$+2. From what we just proved, it contains a phantom~$P_0$ of generation~$n$+1. Applying the lemma to~$P_0$, we get that $P_0$ contains a phantom~$P$ of the generation~$n$ and two triangles $T_0$ and~$T_1$ such that the basis of $T_0$ contains the vertex of~$P$ and the vertex of~$T_1$ belongs to the basis of~$P$. Now, the above computations show us that $T_0$ and~$T_1$ are contained in $T$. That completes the proof of the lemma. \hfill$\Box$

\dprop{no_cross}
The legs of a trilateral do not intersect the legs of another one, whichever its colour, whichever its attribute. Moreover, two triangles of the same colour are either disjoint or one of them is embedded in the other one.
\fprop

\noindent Proof. Immediate corollary of Lemma~\ref{par_trees} and of Proposition~\ref{tri_indices}. \hfill$\Box$

The colour, the attribute and the generation of a trilateral constitute its {\bf characteristics}.
\vskip 10pt
\ligne{\hskip 10pt \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt
\ligne{\hskip 0pt \includegraphics[scale=0.45]{nov_til_ultra.ps} \includegraphics[scale=0.45]{nov_til_no_ultra.ps} \hfill}
\begin{fig}\label{f_ultra}
\leurre Representation of seeds and isoclines in two tilings of the heptagrid. On the left, two trees of the tiling are illustrated, both rooted at an \RR-tile. They belong to an ultra-thread. On the right, the tiling has no ultra-thread, threads only.
\end{fig}
}
\hfill}
The trilaterals we defined in Section~\ref{s_aperiod} can be embedded in the tiling of the hyperbolic plane illustrated by Figure~\ref{f_ultra}. In the figure, we can see that the periodic numbering of the isoclines from~0 up to~4 is implemented with the help of three colours used to materialise the isoclines: blue, green and orange.

\dprop{mid-lines}
In each triangle~$T$ of the generation~$n$, $n\geq 1$, its mid-line $\mu$ crosses $n$ phantoms~$P_m$ of the generation~$m$ with \hbox{$0\leq m < n$}. Moreover, $\mu$ is also a mid-line for each $P_m$, where \hbox{$0\leq m < n$}.
\fprop

\noindent Proof. Apply Lemma~\ref{inter_gene}: $T$ contains a phantom $P_{n-1}$ of the generation~$n$$-$1. If $n>1$, the lemma also applies to $P_{n-1}$, giving rise to a phantom $P_{n-2}$ of the generation $n$$-$2. By induction, we prove the proposition; it is clearly true for the generation~1, as follows from the previous argument. The statement about $\mu$ also follows by induction from the computation performed in the proof of Proposition~\ref{tri_indices}. \hfill$\Box$
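As an illustration of the sharing of mid-lines stated in Lemma~\ref{inter_gene} and in Proposition~\ref{mid-lines}, and with the help of the abscissas given by Proposition~\ref{tri_indices}, consider the trilateral of the generation~2 whose vertex has the abscissa $\xi_{2,0}=3$: its mid-line passes through the abscissa $3+2^2=7$. As far as
\vskip 0pt
\ligne{\hfill $\mu_{1,1}=2.2^2-1=7$ \hskip 20pt and \hskip 20pt $\mu_{0,3}=4.2-1=7$,\hfill}
\noindent that isocline is also the mid-line of the trilateral of the generation~1 whose vertex has the abscissa~5 and of the trilateral of the generation~0 whose vertex has the abscissa~6, in accordance with those statements.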
\subsection{Application to isoclines and to threads}\label{ss_iso_thr}

Going back to isoclines, we already noticed that they allow us to define levels in the whole hyperbolic plane. As shown by Figures~\ref{f_proto_0} and~\ref{f_proto}, isoclines do not intersect and above an isocline there is always an isocline, so that isoclines constitute a partition of the tiling. We need some additional information about Construction~\ref{cultra}. In that construction, we defined a {\bf wire}, denoted by $\mathcal Q$, as the sequence of tiles joining all $\rho_i$, $i\in \mathbb N$, in the tree. The sequence of the $T(\rho_i)$, $i\in\mathbb N$, defines a thread. Remember that from $\rho_i$ to $\rho_{i+1}$, both included, we have the statuses \RR, \YY, \GG, \MM{} and \RR{} for the elements of~$\mathcal Q$. Those five tiles belong to five consecutive isoclines. Note that the number of tiles of~$T(\rho_0)$ which are at distance~$n$ from $\rho_0$ is $f_{2n+1}$.

\begin{lem}\label{iso_apart}
Let $I_n$ be the elements of $T(\rho_0)$ belonging to the isocline~$n$ where $n\in \mathbb N$. Let $u_n$, $v_n$ be the leftmost, rightmost element of~$I_n$ respectively. Let $y_n$ be the element of $\mathcal Q$ belonging to the isocline~$n$. Then
\vskip 5pt
\ligne{\hfill $\vcenter{\vtop{\leftskip 0pt\parindent 0pt\hsize=250pt
\ligne{\hfill $appart(u_n,y_n)\geq f_{2n-3}$, $appart(y_n,v_n)\geq f_{2n-1}$,\hfill}
\ligne{\hfill $appart(u_n,y_n), appart(y_n,v_n) \leq f_{2n+1}$\hfill}
}}$ \hfill$(A)$\hskip 20pt}
\end{lem}

\noindent Proof. It is plain, from Figure~\ref{f_proto}, that if $\rho_i$, \hbox{$i\in\{1,2,3\}$}, denote the sons of~$\rho_0$, their statuses are, for $i$ from 1 to~3, \YY, \BB{} and \OO{} respectively. From Lemma~\ref{separ_trees}, we have \hbox{$T(\rho_1)\cap T(\rho_3)=\emptyset$} and ${\mathcal Q}\backslash \{\rho_0\} \subset T(\rho_1)$. Now, consider $\eta_j$, \hbox{$j\in\{1,2,3\}$}, the sons of~$\rho_1$. As far as the statuses of those sons are, in the order of~$j$, \YY, \BB{} and \GG{} respectively, Lemma~\ref{separ_trees} tells us that \hbox{$T(\eta_1)\cap T(\eta_3)=\emptyset$}. Moreover, it is plain that \hbox{${\mathcal Q}\backslash\{\rho_0,\rho_1\}\subset T(\eta_3)$} and we know that \hbox{$T(\eta_3)\subset T(\rho_1)$}. Accordingly, we may conclude that \hbox{$({\mathcal Q}\backslash\{\rho_0,\rho_1\})\cap T(\eta_1)=\emptyset$}. In $T(\eta_1)$ the level of the isocline~$n$ is $n$$-$2 and in $T(\rho_3)$, the same isocline contains the level $n$$-$1 of that tree. From that, we get the estimates of~$(A)$. \hfill$\Box$

Let us remember that Construction~\ref{cultra} defines an ultra-thread $\{{\mathcal F}_i\}_{i\in\mathbb Z}$, where each ${\mathcal F}_i$ is $T(\rho_i)$ and where the tiles joining $\rho_i$ to $\rho_{i+1}$ have the statuses \RR, \YY, \GG, \MM{} and \RR{} in that order. As far as $i$ runs over $\mathbb Z$ we may, in that case, define $\mathcal Q$ as a sequence of tiles indexed in $\mathbb Z$ with the property that ${\mathcal Q}_{4i}$ is exactly $\rho_i$. We again call that new sequence the quasi-axis of that ultra-thread. Then, it is possible to prove:

\begin{lem}\label{bigtree}
Let $\{T(\rho_i)\}_{i\in\mathbb Z}$ be the sequence of trees of the tiling defined by Construction~{\rm\ref{cultra}}. Then, for each tile~$\tau$ of the heptagrid, there is $i\in\mathbb Z$ such that $\tau\in T(\rho_i)$. Accordingly, for any tile $\tau$ of the heptagrid which is not a \BB-tile, there is $i\in \mathbb Z$ such that \hbox{$T(\tau)\subset T(\rho_i)$}.
\end{lem}

\noindent Proof. There is an index $n$ such that $\tau$ belongs to the isocline~$n$. Let $y_n$ be the tile of $\mathcal Q$ belonging to the isocline~$n$ too.
Let $\rho_i$ be the closest $\rho_j$ such that $y_n\in T(\rho_j)$ as far as, clearly, ${\mathcal Q}_m$ belong to $T(\rho_j)$ for any $j$ and any $m$ starting from some value. Now, we can find $j<i$, $j\in \mathbb Z$, such that $(A)$ should be satisfied with $u_j$ and $v_j$ being the leftmost and rightmost tiles respectively of the trace of $T(\rho_j)$ on the isocline $n$. Taking $\tau$ a tile of status different from \BB{} and from \MM, as we can find $j\in\mathbb Z$ such that $\tau\in T(\rho_j)$, from Proposition~\ref{ptreepath} we conclude that \hbox{$T(\tau)\subset T(\rho_j)$}.\hfill$\Box$ As a corollary of Lemma~\ref{bigtree}, we can deduce the following property of the ultra-thread obtained from Construction~\ref{cultra}: \begin{lem}\label{ultra_ultra} Let $\mathcal F$ be the ultra-thread given by Construction~{\rm\ref{cultra}} and let $\mathcal G$ be another ultra-thread. Then, for each tree of the tiling $G$ in $\mathcal G$, there is a tree of the tiling $F$ belonging to~$\mathcal F$ such that \hbox{$G\subset F$}. \end{lem} \noindent Proof. Immediate. \dprop{p_level_ultra} Let $\mathcal F=\displaystyle{\reunion_{n\in\mathbb Z}F_n}$ be an ultra-thread and let $\tau_n$ be the root of $F_n$. Consider the set of levels of the tiling. For each level~$m$ in $\mathbb Z$, there is an $n$ in $\mathbb Z$ such that the level of~$\tau_n$ is higher than~$m$. Moreover, let $I_m$ be the set of tiles on the level~$m$ belonging to~$F_n$. Let $\ell_m$, $r_m$ be the leftmost, rightmost tile respectively in $I_m$. Let $u$ in $\mathbb Z$, $u>n$. Then \hbox{$I_u\subset I_m$} and \hbox{$appart(\ell_m,\ell_u).appart(r_m,r_u)>0$}. \fprop \noindent Proof. Let $h_k$ be the level of $\tau_k$, $k$ in $\mathbb Z$. Let $\tau_\ell$ with $\ell>k$. Then, by definition of $\mathcal F$, we have \hbox{$F_k=T(\tau_k)\subset T(\tau_\ell)=F_\ell$}, so that $h_\ell>h_k$. The relations concerning $I_m$, $I_u$ and their respective extremal tiles comes from the fact the borders of trees of the tiling do not meet.\hfill$\Box$ \begin{lem}\label{bigtree_ultra} Let $\mathcal F=\displaystyle{\reunion_{n\in\mathbb Z}F_n}$ be an ultra-thread and let $\tau$ be a tile. Then there is $m$ in $\mathbb Z$ such that $\tau\in F_m$. \end{lem} \noindent Proof. Consider a broken line $\mathcal B$ which joins the centers of each $\tau_n$, $n\in\mathbb Z$, where $\tau_n$ is the root of $F_n$. Let $k$ be the level of~$\tau$. That level meets $\mathcal B$ at some tile~$\nu$. From Proposition~\ref{p_level_ultra}, there is an $m$ in $\mathbb Z$ such that the level of~$\tau_m$ is higher than $k$. By construction, $F_m$ contains $\nu$ as far as each $F_n$ contains all the tiles crossed by $\mathcal B$, starting from its root. Let \hbox{$\delta= appart(\tau,\nu)$}. Let $I_u$, $\ell_u$ and $r_u$ defined as in the proof of Proposition~\ref{p_level_ultra}. As far as $appart(\ell_u,\ell_v)>0$ and $appart(r_u,r_v)>0$ if $u<v$ and as far as those appartnesses are integers, we have that $appart(\ell_u,\tau)\rightarrow\infty$ and \hbox{$appart(r_u,\tau)\rightarrow\infty$} when $u\rightarrow\infty$. Accordingly, there is $w$ in $\mathbb Z$ such that $appart(\ell_w,\tau),appart(r_w,\tau)>\delta$, so that $F_w$ contains $\tau$. \hfill$\Box$ \begin{lem}\label{no_ultra} There are tilings of the heptagrid with the tiles \YY, \GG, \BB, \OO, \MM{} and \RR{} and the application of the rules~$(R_1)$ such that all its threads are bounded. \end{lem} \noindent Proof. Consider a mid-point line $\ell$ of the hyperbolic plane as defined in Section~\ref{ss_heptatil}. 
Assume that $\ell$ crosses all the levels which can be put by a tiling of the heptagrid. It is possible to assume that $\ell$ crosses \GG-tiles and that the centres of those tiles lie on the same side of~$\ell$. Each one of those tiles generates a tree of the heptagrid whose right-hand side ray is contained in~$\ell$. That rules out the possibility of an ultra-thread in such a tiling. Otherwise, let $\mathcal F=\displaystyle{\reunion_{n\in\mathbb Z} F_n}$ be an ultra-thread. Take $\tau$ as a \GG-tile crossed by~$\ell$. From Lemma~\ref{bigtree_ultra}, there is $n$ in $\mathbb Z$, such that $\tau$ is contained in~$F_n$. From Lemma~\ref{separ_trees}, $T(\tau)\subset F_n$. But now, $F_{n+1}$ contains $F_n$ but, as its border does not meet that of~$F_n$, that border should meet that of $T(\nu)$ where $\nu$ is a \GG-tile crossed by~$\ell$ such that $T(\tau)\subset T(\nu)$. But in that case, we could choose $\nu$ such that its border meet that of $F_{n+1}$, a contradiction with Lemma~\ref{separ_trees}. That proves Lemma~\ref{no_ultra}.\hfill$\Box$ Accordingly, some realisations of the tiling contain ultra-threads, some realisations of it contain none of them as illustrated by Figure~\ref{f_ultra}. \subsection{The prototiles for an aperiodic tiling}\label{ss_til_aperiod} From now on, we introduce a distinction of the isoclines. Each fourth isecline, starting from one of them defined at random, receives a colour: alternatively {\bf green} and {\bf orange}. Considering that a green isocline is higher than an orange one, that defines the directions {\bf up} and {\bf down} in the hyperbolic plane. The isoclines also allow us to define the directions {\bf to left} and {\bf to right}. From now on, an \RR-tile will be called a {\bf seed}. The seeds which are crossed by a green isocline are the vertices of a trilateral of the generation~0, so that they trigger the construction of that trilateral. A seed which sits on an orange isocline is the vertex of a trilateral of the generation~$n$ with $n\geq 1$. An isocline which is neither green nor orange is said to be {\bf blue}. In the present subsection, we implement Construction~\ref{c_interwov} as a tiling. To that purpose, we define a set of {\bf prototiles}: the tiles of the tiling are copies of prototiles. By copy we mean an isometric image which place a tile from an isocline onto another one such that left, right up and down of the former place coincide with those directions on the new isocline. Figure~\ref{f_nproto_a_i} defines the tiles required for the implementation of the isoclines. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.25]{nnov_prototuiles_libres.ps} \hfill} \ligne{\hskip-10pt \includegraphics[scale=0.3]{nov_tuiles_arbo_iso.ps} \hfill} \begin{fig}\label{f_nproto_a_i} \leurre Implementation of the rules~$(R_1)$ and of the isoclines by 29 prototiles. Note the convention for representing heptagonal tiles by squares. \end{fig} } \hfill} To force the succession of green and orange isoclines, we use the \BB-tiles as far as they occur in most rules in $(R_1)$. Tiles 1 up to~8 of the figure illustrate how we define that succession. Tiles 9-12, 13-16 and 17-21 illustrate the other tiles than \BB-ones when they receive a green isocline, an orange one and a blue one respectively. Note that \RR-tiles are mentioned with blue isoclines only as far as they are seeds when sitting on a green or an orange isocline. 
The last row illustrates the tiles needed to start marking the \BB-tiles: it is the case for the son of a \YY-, an \OO{} or an \RR-tile. Depending on the isocline of the father of such a \BB-tile and the surrounding isoclines, we have the appropriate tile to be synchronised with \BB-tiles on the same isocline as far as green and orange isoclines split the hyperbolic plane into two halves. Consider a tree of the tiling ${\mathcal T} = T(\tau)$, rooted at $\tau$. Define \hbox{$\{\beta_i\}_{i\in\mathbb N}$} to be the branch of $\mathcal T$ as follows. If $\tau$ is a \GG-tile, then $\beta_0 =\tau$. Otherwise, $\beta_0$ is the \BB-son of the \MM-son of $\tau$. Then, for any non negative integer $n$, $\beta_{n+1}$ is the \BB-son of $\beta_n$. Call that branch the {\bf $\beta$-branch} from $\tau$. We say that the branch consists of $\tau$ and of its recursive \BB-off-springs. The interest of that definition is that the $\beta$-branch of $\mathcal T$ does not intersect any tree of the tiling contained in $\mathcal T$. It can easily be seen from Figure~\ref{f_proto} that the prototiles of the figure can tile $T(\tau)$. The first three rows of the figure indicate the convention we use to represent the heptagonal prototiles by square ones. The convention is based on the fact that we have mainly a top down direction and a left right one given by the isoclines. The top number indicates 1, the side to the father. At the bottom side of the square we have the numbers of the sides to the sons of the tile. On the left- and right-hand side edges, we have the number of the sides crossed by the isocline on which the tile sits. \def\gg{{\bf g}} As an example, it is not difficult to see that a \ww-tile cannot abut another \ww-tile and that, similarly, a \bb-tile cannot abut another \bb-tile. From that and similar considerations we leave to the reader, the tiles with a blue isocline can build the pictures of Figure~\ref{f_proto} and only them. As far as besides the isocline the tiles with a green isocline of Figure~\ref{f_nproto_a_i} look like those with a blue isocline, we obtain the pictures of Figure~\ref{f_proto} and only them with the tiles of Figure~\ref{f_nproto_a_i}. We also clearly obtain that green and blue isocline do not mix and do not cross each other. The first row of Figure~\ref{f_nproto_a_i} allow us to build a $\beta$-branch in any tree of the tiling. But the first row alone generates a $\beta$-branch whose root is rejected at infinity. For a true $\beta$-branch rooted at a tree of the heptagrid, we need the tiles of the last row of Figure~\ref{f_nproto_a_i}: the father of a \BB-tile is either a \YY-, an \OO- or an \RR-tile. In each case, the father may be on a green or on a blue isocline while the \BB-tile may be on a blue or on a green one. Of course, if the \BB-tile, its father is on a green tile then its father, the \BB-tile respectively, is on a blue one. Note that the green, orange isocline defined by a first tile~1, 5 respectively impose the position of all other green, orange isoclines by the fact that the tiles bearing a green, orange isocline can only abut on the same level tiles also bearing a green, orange isocline respectively. 
In order to define the prototiles to construct the trilaterals, we need another property which can be deduced from Proposition~\ref{tri_indices} and Lemma~\ref{inter_gene}: \dprop{legs_and_bases} The legs of a trilateral~$T$ of the generation~$n$$+$$1$ are cut once by the basis~$B$ of the triangle~$T_0$ of the generation~$n$ whose mid-point is the vertex~$V$ of~$T$. That isocline~$\beta$ which contains $B$ is issued from $\rho_j$ where $\rho_j$ is the mid-point between $V$ and the mid-point of~$T$. In between $V$ and $\beta$, the legs of $T$ are cut by bases of phantoms only. In between $\beta$ and the basis of~$T$, the legs of $T$ are not cut by any trilateral of whichever generation. \fprop \noindent Proof. From Lemma~\ref{inter_gene}, we know that the vertex of~$T$ is the mid-point of a triangle~$T_0$ of the generation~$n$ and that the basis of~$T$ is issued from the mid-point of a triangle $T_1$ of the generation~$n$ too. As far as the height of~$T_0$ is the half of that of~$T$ the basis of~$T_0$ satisfies the statement of the proposition. Some trilaterals of generation~$m$, with $m\leq n$ whose vertices are contained in~$T$ cut the basis of~$T$, but their basis does not cut the legs of~$T$. For what are trilaterals of generation higher than $n$+1, either they contain~$T$ or they are disjoint from~$T$ so that in both cases, their basis do not cut the legs of~$T$.\hfill$\Box$ \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hskip-15pt \includegraphics[scale=0.4]{nov_tuiles_tri.ps} \hfill} \vspace{-10pt} \begin{fig}\label{f_nproto_tri} \leurre The 128 prototiles for constructing the trilaterals. Among those prototiles, 28 of them represent a red or a blue trilateral. Note the conventions of colours in order to restrict the number of pictures downto 40 of them. \end{fig} } \hfill} Figure~\ref{f_nproto_tri} gives the prototiles for constructing the trilaterals which have to be appended to those of Figure~\ref{f_nproto_a_i}. Note that the prototiles~1 to~4 of Figure~\ref{f_nproto_tri} complete the prototiles of Figure~\ref{f_nproto_a_i} for what are the seeds on a green or an orange isocline. The first row of the second part of Figure~\ref{f_nproto_tri} illustrates prototiles to trigger the construction of the legs of a trilateral. Note that both left- and right-hand side legs are represented. Moreover, as indicated by the figure itself, we use a few grey colours to be replaced by various hues of blue and red. It is the reason why the 128 prototiles are illustrated by only 40 ones. In fact, as indicated in the propositions and lemmas devoted to the trilaterals, the legs can be uniformly dealt with. Note the fact that for a leg, whichever its side is, we use two hues of the same colour: a dark version for the first half of the leg starting from the vertex, and a lighter version for the second half. The rightmost part of the first row in the second part of the figure illustrates thee conventions we use for the hues which represent two or three colours. \vskip 10pt \vtop{ \ligne{\hfill \includegraphics[scale=0.35]{tttt_interwov_4.ps} \hfill} \vspace{-10pt} \begin{fig}\label{f_tttt_interwov} \leurre The construction of an aperiodic tiling. \end{fig} } Consider a {\bf triangle} $T$ whose vertex is denoted by~$V$ and its mid-point by $\omega$. From Proposition~\ref{legs_and_bases}, the first basis of a {\it triangle} $T_0$ cutting a leg of~$T$ cuts the path from $V$ to the basis of~$T$ at the mid-point~$\nu$ between $V$ and~$\omega$. 
Such a basis, met by a leg when running over it from~$V$, occurs at a quarter of the leg. From Lemma~\ref{inter_gene}, the other bases cutting the leg of~$T$ in between $V$ and $\nu$ are bases of trilaterals of lower generations. Clearly, the mauve signals running on the isoclines of those bases meet triangles of a generation lower than that of~$T_0$, so that when those bases cut the leg of $T$ there is no mauve signal with them. So that the first time a leg of~$T$ meets a mauve signal, it is on the isocline passing through~$\omega$. Accordingly, the change of colour for the leg of~$T$ occurs at that moment. Later, the leg of~$T$ meets no basis of a trilateral, the basis of~$T$ being excepted; at that meeting, the basis does not contain a mauve signal. Accordingly, the proof of Theorem~\ref{th_aperiod} is completed. The number of needed prototiles is the sum of the numbers indicated in Figures~\ref{f_nproto_a_i} and~\ref{f_nproto_tri}, namely $29+128=157$ prototiles, as stated in the theorem. \hfill$\Box$

Figure~\ref{f_tttt_interwov} illustrates the proof of Theorem~\ref{th_aperiod}.

\section{Completing the proof of Theorem~\ref{undec}}
\label{s_proof}

Presently, we shall see how to obtain the prototiles we need in order to prove Theorem~\ref{undec}. It is important to see that the theorem will be proved only when we have produced the set of prototiles. From the previous construction, we know that we have bigger and bigger triangles, so that taking the red triangles as the frame of the simulation of a Turing machine is a possible solution. It is enough that the set of prototiles is adapted to a given Turing machine $M$ in order to perform its computation in any triangle. If $M$ does not halt, the computation is stopped when the computing signal meets the basis of the triangle and it will be the case in all triangles. If $M$ halts, the halting will be observed in some triangle. It is easy to implement the halting state by a prototile one side of which cannot be abutted by any prototile. However, that program can be fulfilled if we can perform the computation in a red triangle.

The scenario is the following. The initial configuration is displayed along the right-hand side leg $\ell_r$ of the red triangle~$T$. That leg consists of \OO-tiles, the vertex of~$T$ being excepted: it is the tile~\ref{f_nproto_tri}.3, an \RR-tile sitting on an orange isocline. From the \OO-tiles, we consider the path in $T$ which goes from a tile~$\tau$ of $\ell_r$ to a tile of the basis of~$T$ by following the \YY-son of~$\tau$ and, recursively, the \YY-sons of those \YY-sons. Call such a path the {\bf \YY-path from~$\tau$} and say that $\tau$ is its {\bf source}. From Lemma~\ref{separ_trees}, we know that a \YY-path from a tile of $\ell_r$ does not meet the legs of a triangle contained in~$T$. The role of a \YY-path from~$\tau$ is to convey the content of the square of the tape of~$M$ which lies in $\tau$. The \YY-path updates that content as soon as the appropriate state is seen, so that the \YY-path records the history of the computation on the square represented by its source. A computing signal $\xi$ starts from the root $\rho$ of $T$ and it visits the \YY-paths according to the program of~$M$. In order to go from one \YY-path to the next one, $\xi$ travels on a level of~$T(\rho)$. That signal conveys the current state~$\eta$ of~$M$. When $\xi$ meets a \YY-path conveying the current content $\sigma$ of the square of the tape which is the source of that \YY-path, $\xi$ performs the instruction associated to $\eta$ and $\sigma$ in the program of~$M$.
The \YY-path conveys the new letter contained by the square at the source of the \YY-path. It also conveys the new state of~$M$ as well as the direction~$\delta$ towards the \YY-path whose source is a neighbour of the source from which the previous \YY-path originated. To that end, $\xi$ goes to the next level along the \YY-path it met and, on that level, goes to the new \YY-path in the direction given by~$\delta$. As far as $T$ may contain other red triangles in which the same computation of $M$ is performed, those computations should not interfere with each other. We already know that the \YY-paths generated in $T$ do not meet those of a triangle inside~$T$. It is also necessary that the levels on which $\xi$ travels in~$T$ are not those on which a similar signal travels in a triangle contained in~$T$. Accordingly, we have to deal with that point. Call {\bf free row} of a trilateral~$T$ the intersection with~$T$ of an orange or a green isocline which does not meet the legs of a triangle contained in~$T$. Note that the notion might be applied to red phantoms as well but we reserve it for red triangles. We deal with that problem in Subsection~\ref{ss_freerows}.

We also notice from Figure~\ref{f_nproto_tri} that active seeds trigger both the construction of legs of a trilateral~$T$ and the construction of the basis of a trilateral whose attribute is opposite to that of~$T$. However, as indicated by Figure~\ref{f_ultra}, it may happen that the basis triggered by an active seed will not meet legs of an appropriate triangle. It is the case if the tree of the tiling raised by the active seed is the bound of a thread. That raises another problem dealt with in Subsection~\ref{ss_synchro}.

\subsection{Free rows in red triangles}\label{ss_freerows}

Before considering how to detect the free rows in a red triangle, it is important to know whether there are enough of them for the computation purpose.

\begin{lem}\label{free_rows}
In a red trilateral of the generation~$2n$$+$$1$ there are $2^{n+1}$$+$$1$ free rows.
\end{lem}

\noindent Proof. The smallest generation for a red triangle is generation~1 and such a triangle contains two green isoclines and one orange one. Accordingly, such a triangle contains 3 free rows. From Lemma~\ref{inter_gene}, a red triangle~$T$ of the generation $2n$$+$$1$ contains two red triangles $T_0$ and $T_1$ of the generation~$2n$$-$1 with, in between them, a phantom of that generation which contains $\varphi_{2n-1}$ free rows. The height of $T$ is four times that of $T_0$. It is not difficult to see that the vertex of~$T$ is contained in a phantom~$P_0$ of the generation~$2n$$-$1 too and that the vertex of~$T$ is the mid-point of~$P_0$. Similarly, the basis of~$T$ is contained in the mid-line of a phantom~$P_1$ of the generation~$2n$$-$1 too. Accordingly,
\vskip 0pt
\ligne{\hfill $\varphi_{2n+1} -1 = 2(\varphi_{2n-1}-1)$ \hfill\hskip 20pt}
\noindent which gives us
\vskip 0pt
\ligne{\hfill $\varphi_{2n+1} = 2\varphi_{2n-1}-1$ \hfill$(\ast)$\hskip 20pt}
\noindent as far as the mid-line of~$T$ is counted twice if we consider $2\varphi_{2n-1}$. An easy induction from $(\ast)$ shows us that $\varphi_{2n+1}=2^{n+1}+1$. We again find that $\varphi_1=3$. We can check on Figure~\ref{f_interwov_4}, see the Appendix, that $\varphi_3=5$.\hfill$\Box$
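For the reader's convenience, the easy induction mentioned at the end of the proof can be unfolded as follows: from the relation \hbox{$\varphi_{2n+1}-1 = 2(\varphi_{2n-1}-1)$},
\vskip 0pt
\ligne{\hfill $\varphi_{2n+1}-1 = 2(\varphi_{2n-1}-1) = \ldots = 2^n(\varphi_{1}-1) = 2^{n+1}$,\hfill}
\noindent so that $\varphi_{2n+1}=2^{n+1}+1$, which gives $\varphi_1=3$, $\varphi_3=5$, $\varphi_5=9$ and $\varphi_7=17$ for the first red triangles.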
Note that a red triangle of the generation~$2n$$+$$1$ is crossed by $4^{n+2}=2^{2n+4}$ isoclines. Accordingly, if the number of free rows of a red triangle of the generation~$2n$+1 is very small with respect to its height, it still tends to infinity as $n$ tends to infinity. It is worth noticing that if we chose the blue triangles instead of the red ones in order to simulate the computation of the Turing machine, using a similar definition for free rows with the help of blue signals instead of the red ones, we would obtain that in each blue triangle there is a single free row, the mid-line of the triangle, see \cite{mmundecTCS} for the proof. The reason is that generation~0 consists of blue trilaterals in which there is a single free row while in a red triangle of generation~1 there are three free rows. Accordingly, it is worth dealing with the detection of the free rows in red triangles.

To that aim we proceed as follows. The legs of a red triangle~$T$ send a red signal inside~$T$ along a green or an orange isocline. If the signal meets a leg of the same side and of the same colour, the signal goes on. It goes on too if it meets the legs of a blue triangle or the legs of a phantom, whichever the colour. If the signal meets a leg of an opposite side, no tile is provided to implement such a meeting. We also give the leg the possibility to send a yellow signal inside~$T$ along such an isocline. If the yellow signal meets a leg of an opposite side, it is the other leg of $T$ and the signal is established: a free row is detected. If the yellow signal meets a leg of the same side, it means that it must be replaced by a red signal as far as no tile implements the crossing of the yellow signal by a leg of a red triangle. However, the yellow signal may freely cross legs of blue triangles and of phantoms whatever their colour. Accordingly, each green or orange isocline inside~$T$ conveys a signal: a red one if on that isocline the signal meets the leg of a red triangle inside $T$, a yellow one if on that isocline the signal does not meet a red triangle inside~$T$. In Subsection~\ref{ss_tiles} we shall see how the problem is solved.

\subsection{Synchronisation}\label{ss_synchro}

We already noted the problem of possible active seeds which raise the bound of some bounded thread. Another problem arises: as far as on the same green or orange isocline there might be several seeds, it is important that the red signal raised in a triangle occurs on the same isocline as a yellow signal inside another triangle. Call {\bf latitude} a finite set of consecutive green and orange isoclines. The {\bf latitude of a trilateral} is the set of green and orange isoclines from the isocline of its vertex to that of its basis. Note that the lateral red signals give rise to signals which may travel along an isocline far away from the legs of any triangle. Those signals of opposite laterality may meet in between two red triangles and outside them: in that case, a left-hand side signal coming from the right meets a right-hand side signal coming from the left. It is important that the latitude of a trilateral coincide with that of trilaterals of the same characteristics belonging to different threads. The red signals used for detecting the free rows are not enough for that property. To better see what is involved, we need the following notion.

Consider a trilateral~$T$ of generation~$n$+1. If the vertex~$V$ of~$T$ is inside a triangle~$T_1$, we say that $T_1$ is the {\bf father} of~$T$. Note that a trilateral may have no father: it is the case in a wire defined by a bounded thread.
If $T$ has a father $T_1$, we may define the father of~$T_1$ if the latter exists. Accordingly, for any trilateral~$T$, we construct a sequence $\{T_k\}_{k\in [0..h]}$ such that $T_0=T$ and for each $k$ in $[0..h$$-$$1]$, $T_{k+1}$ is the father of~$T_k$ and $T_h$ has no father. Each $T_k$ with $k$ in $[0..h]$ is called an {\bf ancestor} of~$T$ and $T_h$ is called the {\bf remotest ancestor} of $T$. Note that the generation of the remotest ancestor of a trilateral~$T$ depends on the wire to which $T$ belongs.

We append two kinds of signals. We consider a special signal for blue trilaterals: the vertex of a blue trilateral as well as the ends of its basis trigger a {\bf blue} signal, the same one whichever the laterality of the end emitting it, whichever the attribute of the trilateral. Such a signal is important due to the fact that a thread may not cross the latitude of a given trilateral. Also, to distinguish latitudes of red trilaterals we need to mark the isoclines passing through the heads of red triangles. We call that latter mark the {\bf silver} signal. It is raised by the vertex~$V$ of a red triangle and it travels on the orange isocline which passes through~$V$. The silver and the blue signals allow us to prove the following property:

\begin{lem}\label{remote}
Let $T$ be a trilateral belonging to a wire $\mathcal W$. Let $S$ be a trilateral whose characteristics are those of~$T$, $S$ belonging to a wire~$\mathcal V$, with ${\mathcal V}\not={\mathcal W}$. Then $T$ and $S$ have the same latitude if and only if $T$, $S$ have an ancestor $X$, $Y$ respectively, such that the vertex of~$X$ and that of~$Y$ lie on the same isocline.
\end{lem}

Note that the lemma mentions an ancestor within the same latitude and it says nothing of the remotest ancestors of~$T$ and~$S$, which may belong to different latitudes. As a consequence of the lemma, we can say that the latitudes of red triangles of the generation~$2n$$+$$1$ are the same whatever the wire giving rise to the interwoven triangles and for any $n\in\mathbb N$.

\noindent Proof of Lemma~\ref{remote}. We prove that property by induction on the generation~$n$. Note that for any trilateral~$T$ of generation~0, its only ancestor is~$T$ itself. Consider a triangle~$T$ of generation~0{} in $\mathcal W$. Its vertex~$W$ sits on a green isocline $\omega$. If $\omega$ meets a seed~$V$ belonging to $\mathcal V$, $V$ receives the blue signal emitted by~$W$ as far as that signal cannot leave~$\omega$: that signal may be interrupted by a basis lying on $\omega$, but that very basis restores the signal starting from its ends. Accordingly, $V$ triggers a trilateral~$S$ of generation~0. The colour of~$S$ is the same as that of~$T$; we remain with its attribute. What we have seen up to now shows us that the trilaterals of generation~0 lie within the same latitudes both for $\mathcal W$ and for $\mathcal V$. We have to see that the attributes are the same for the same latitude. Let $A$ be the mid-point of~$T$. It triggers a trilateral~$Q$ of generation~1. If $Q$ is not a triangle, its basis defines a seed $B$ of~$\mathcal W$ which triggers a triangle $H$. It is not difficult to see that $B$ belongs to a triangle $J$ of generation~0 whose vertex is on the basis of the phantom whose vertex is on the basis of~$T$. Accordingly, we may assume that $Q$ is a red triangle. Now $Q$ emits a silver signal which travels on its orange isocline~$\varpi$ which meets a seed $C$ of $\mathcal V$.
As far as isoclines cannot cut each other, the distance from $W$ to~$A$ is the same as the distance from $V$ to~$C$. And so, $C$ is the mid-point of~$S$, which raises a red triangle. Consequently, $S$ is a triangle too. We proved the lemma for generation~0. The argument of the silver signal to identify the attribute of~$S$ shows us that the lemma is also true for generation~1.

Assume that the lemma is true for the generation~$n$. Consider a trilateral~$T$ of the generation~$n$+1 whose vertex is in $\mathcal W$. We may assume that $T$ is a triangle: if it were a phantom $P$ we would consider as $T$ the triangle triggered by the seed of $\mathcal W$ lying on the basis of~$P$. Let $W$ be the vertex of~$T$ and let~$N$ be its mid-point. We know that on the same wire, there is a seed $Q$ which also triggers a triangle of the same colour as~$T$ and the distance of $Q$ from $W$ is twice the height of~$T$. Let $\omega$, $\varpi$ and $\varphi$ be the isoclines passing through~$W$, $N$ and $Q$ respectively. Let $V$, $M$ and $P$ be the seeds of $\mathcal V$ lying on the isoclines $\omega$, $\varpi$ and $\varphi$ respectively. The distance from~$V$ to $P$ is that from $W$ to~$Q$, which means that the trilateral~$S$ issued from $V$ has the same height as $T$, so that its generation is the same and its colour is also the same. Also, $M$ is the mid-point of~$S$. Clearly, if $T$ is a red triangle, so is $S$ as far as the silver signal emitted by $W$ passes also through~$V$. If $T$ is blue, $N$ triggers a red trilateral and, arguing like we did for generation~1, we may assume that $N$ triggers a red triangle. Accordingly, $M$ also triggers a red triangle as far as it receives the silver signal emitted by $N$. Now, a triangle is triggered at the mid-point of another triangle, so that $S$ is a triangle too as far as $M$ is its mid-point. Consequently, we have proved that for the generation~$n$+1 the latitudes are the same for trilaterals with the same attributes, provided that the trilaterals are both present in $\mathcal W$ and in $\mathcal V$. From that, the lemma follows. If $S$ and $T$ have the same latitude, their attributes are the same and their ancestors lie within the same latitudes as long as they are present in both wires. If $T$ and $S$ have ancestors whose vertices are on the same isocline, clearly, those ancestors have the same latitude and, step by step, their successive sons occupy the same latitudes, so that it is the case for $S$ and~$T$ too. \hfill$\Box$

Later on, we refer to Lemma~\ref{remote} when we say that the silver and the blue signals allow us to synchronise all wires of the tiling. The problem raised by possible bounds of threads is dealt with as follows. The blue signal emitted by the basis of a trilateral of a wire may meet the basis emitted on the same isocline by a blue trilateral of another wire. Such a meeting is permitted: it solves the problem of possibly missing trilaterals in a bounded thread.
\vskip 10pt
\vtop{
\ligne{\hfill \includegraphics[scale=0.35]{ntttt_interwov_lib.ps} \hfill}
\vspace{-10pt}
\begin{fig}\label{f_freerows}
\leurre The free rows in the red triangles. They are in yellow in the figure. Note that the yellow signal is superposed with the mauve one on the mid-line of red triangles.
\end{fig}
}
From our description of the signals emitted by the legs of a triangle in order to detect free rows inside it, we can see that such signals must cross legs of the same laterality.
It is the reason why we consider that instead, the legs of a red triangle emit a red signal of their laterality outside the triangle. As illustrated by Figure~\ref{f_freerows}, we can see that a red signal emitted by a red triangle $T_0$ included into another red triangle~$T_1$ also cuts the legs of~$T_1$. Those red signals are similar to the blue signals above defined for both trilaterals. The difference is here that they concern red triangles only and that they are not emitted by the vertex and the basis only: they are emitted on each green or orange isocline crossing the leg. We a lso decide that right-, left-hand side red signals coming from left, from right respectively may match with a red basis coming from right, from left respectively. Note that appending the silver signal means just changing a bit the tiles conveying an orange isocline, but it also requires to append five tiles as far as there are tiles outside legs and bases of trilaterals which convey an orange isocline with no signal at all. We remain with the condition meant by the {\it general tiling problem}. We borrow the next subsection to paper~\cite{mmundecTCS} with a few changes. \subsection{The general tiling problem}\label{ss_til_issue} In the proofs of the general tiling problem in the Euclidean plane by Berger and by Robinson, there is an assumption which is implicit and which was, most probably, considered as obvious at that time. Consider a finite set~$S$ of {\bf prototiles}. We call {\bf solution} of the tiling of the plane by~$S$ a partition~$\mathcal P$ such that the closure of any element of~$\mathcal P$ is a copy of an element of~$S$. We notice that the definition contains the traditional condition on matching signs in the case when the elements of~$S$ possess signs. We also notice that a copy means an isometric image. In that problem, we assume that only shifts are allowed and we exclude rotations. Note that, in the Euclidean case, rotations are also ruled out. Here rotations have to be explicitly ruled out as far as the shifts leaving the tiling globally invariant also generate the rotations which leave the tiling globally invariant. In fact we accept isometries and only those such that a tile marked \ww{} or \bb{} on a given isocline is transformed into a tile marked \ww{} or \bb{} respectively on an isocline, the same one or another one. Note that the general tiling problem can be formalized as follows: \vskip 3pt \ligne{\hfill$\forall S\ \ (\exists\,{\cal P}\ sol({\cal P},S)\vee% \neg(\exists\,{\cal P}\ sol({\cal P},S))),$ \hfill} \vskip 3pt \noindent where $sol({\cal P},S)$ means that $\cal P$~is a solution of~$S$ and where $\vee$~is interpreted in a constructive way: there is an algorithm which, applied to~$S$ provides us with~'yes' if there is a solution and~'no' if there is none. The origin-constrained problem can be formalized in a similar way by: \vskip 3pt \ligne{\hfill$\forall (S,a)\ \ (\exists\,{\cal P}\ sol({\cal P},S,a)\vee% \neg(\exists\,{\cal P}\ sol({\cal P},S,a))),$ \hfill} \vskip 3pt \noindent where $a\in S$, with the same algorithmic interpretation of~$\vee$ and where the formula $sol({\cal P},S,a)$ means that $\cal P$~is a solution of~$S$ which starts with~$a$. Note that if $a$ is a blocking tile, {\it i.e.} a tile which cannot abut any one in $S$, then we may face a situation where we cannot tile the plane because $a$ was chosen at random while it is possible to tile the plane. A solution is to exclude $a$ from the choice. 
The other one is to allow the occurrence of contradictions because a wrong tile was chosen while the appropriate one would raise no contradiction. Of course, there must be a restriction: such a change should occur at most finitely many times for the same place. Obviously, if we have a solution of the general tiling problem for the considered instance, we also have a solution of the origin-constrained problem, with the facility that we may choose the first tile. To prove that the general tiling problem, in the considered instance, has no solution, we have to prove that, whatever the initial tile, except the blocking one, the corresponding origin-constrained problem has no solution either. In Berger's and Robinson's proofs the construction starts with a special tile, called the {\bf origin}. Their proof holds for the general problem since they force the tiling to have a dense subset of origins. In the construction, the origins start the simulation of the space-time diagram of the computation of a Turing machine~$M$. All origins compute the same machine~$M$ with the same initial configuration of~$M$. The origins define infinitely many domains of computation of infinitely many sizes. If the machine does not halt, starting from an origin, it is possible to tile the plane. If the machine halts, whatever the initial tile, we nearby find an origin and, from this one, we shall eventually fall into a domain which contains the halting of the machine: at that point, it is easy to prevent the tiling from going on. The present construction aims at the same goal. From Proposition~\ref{tri_indices}, we know that the trilaterals are bigger and bigger once their generation is triggered along a wire. Consequently, what we suggested with the \YY-paths and the free rows answers positively the possibility of simulating any Turing machine working on a semi-infinite tape which, as is well known, does not alter the generality. We remain with the way to force such computations. \subsection{The seeds}\label{ss_sides} We establish that there are enough seeds for starting the computation of a Turing machine in the interwoven triangles. We have the important property: \begin{lem} \label{iso_seeds} Let the root of a tree of the tiling~$T$ be on a green or an orange isocline. Then, there is a seed in the tiles of~$T$ on the next orange or green isocline respectively, downwards. Starting from that last isocline, there are seeds, downwards, on all the isoclines. \end{lem} We have a very important density property: \begin{lem}\label{density} For any tile~$\tau$ in a tiling, fitted with the isoclines, there is a seed on a green or an orange isocline within a ball around~$\tau$ of radius~$8$. \end{lem} \noindent Proof. From Figure~\ref{f_proto} we can see that if we consider a \GG-tile $\tau$, there is a seed at a distance~2 from~$\tau$. Since a \YY-tile has a \GG-son, there is a seed at a distance at most~3 from a \YY-tile. By construction, there is a seed at distance~1 from an \MM-tile. Figure~\ref{f_proto} shows us that there is a \GG-tile at distance~1 from a \BB-tile. Accordingly, there is a seed at distance at most~3 from a \BB-tile. There is a \YY-tile at distance~1 from an \OO-tile so that there is a seed at distance at most 4 from an \OO-tile. Accordingly, there is another seed at distance at most~4 from a seed. If the seed found in that way is on a green or an orange isocline, four isoclines further there is a seed on a green or on an orange isocline at distance at most 8.
\hfill$\Box$ From Lemma~\ref{density}, we know that around any tile~$\tau$ there is a seed in a disc of radius~4. We can say a bit more: \begin{lem}\label{all_iso} Assume that there is a seed on a green isocline. Then, there is at least one seed on the next green isocline and on each further isocline, whatever its colour. \end{lem} \noindent Proof. Let $\tau$ be a seed on an isocline. From Lemma~\ref{density}, such a seed can easily be found everywhere. Say that the isocline to which $\tau$ belongs is isocline~0. The sons of~$\tau$, say $\tau_i$, $i\in\{1..3\}$, are not seeds and none of them is \MM. Accordingly there is no seed on the levels~1 and~2 of~$T(\tau)$. On the level~2 the statuses of the sons of the $\tau_i$, $i\in\{1..3\}$, are \YY, \BB, \GG{} or \OO. Accordingly, there is no seed on the level~3 but since there is a tile~\MM{} on that level, there is a seed on level~4. But a \GG-tile always has a \GG-son so that if the \GG-tile on level~2 raises a seed on level~4, the \GG-descendants of that tile generate a seed at each level~$n$ with $n>4$. At least one of those isoclines is green so that we can repeat the argument. That completes the proof of the lemma. \hfill$\Box$. From now on, a seed on a green or an orange isocline is called an {\bf active seed}. In each red triangle, we define a limited grid in which we simulate the execution of the same Turing machine starting from the same initial finite configuration. Of course, the whole initial configuration occurs in a big enough red triangle. If the configuration is not complete in a red triangle, the computation halts on the basis of the red triangle. Accordingly, as the red triangles are bigger and bigger, if the machine does not stop, it is possible to tile the plane. If the machine halts, the halting produces a tile which prevents the tiling from being completed. Since the halting problem of Turing machines starting from a finite initial configuration is undecidable, that reduction proves that the tiling problem of the hyperbolic plane is also undecidable. Since we know that there are enough active seeds, we have to look at how the triangles constructed from them behave. The construction performed in Section~\ref{s_aperiod} required the realization of the interwoven triangles starting from at least one wire since that alone entails the construction of bigger and bigger triangles which are disjoint from each other. To prove Theorem~\ref{undec} we need the computation to be performed more or less similarly in the triangles of each wire. But that property is guaranteed by the synchronisation property of the silver and blue signals, as proved by Lemma~\ref{remote}. \subsubsection{The implementation}\label{ss_implem} As can immediately be seen, the important feature is not that we have strictly parallel lines, and that squares are aligned along lines which are perpendicular to the tapes. What is important is that we have a {\bf grid}, which may be a more or less distorted image of the just described representation. We can reinforce Lemma~\ref{density}: \begin{lem}\label{superdensity} For each tile~$\tau$, there is an active seed whose distance from~$\tau$ is at most~$12$. \end{lem} \noindent Proof. In the worst case, assume that $\tau$ is a \BB-tile on an isocline $m$. The \OO-son of $\tau$ has a \YY-son so that at distance at most~4 there is a seed~$\rho$, say on the isocline~$n$. Since there are seeds on levels $n$+4+$k$ for any $k\in\mathbb N$, there is at least one active seed on some level $n$+$j$ with $j\leq 8$.
Now, clearly, $n\leq m$+4. That proves the lemma. \hfill$\Box$ In Figure~\ref{f_ultra}, we can see two active seeds and several seeds which are not active. Accordingly, most seeds are not active but the active ones are also dense in the heptagrid. It means that if we start to tile the heptagrid from an arbitrary tile, the blocking one being excepted, sooner or later we fall upon an active seed. We go back to that topic a bit later and also when we discuss the exact set of prototiles. As already mentioned, the legs issued from an active seed~$\sigma$ follow the borders of $T(\sigma)$. Note that the active seeds also send signals on the green and the orange isoclines. What is important is the thread-structure and Lemma~\ref{inter_gene}. Note that the silver and the blue signals prevent the occurrence of two active seeds on the same isocline giving rise to trilaterals of different characteristics. \ifnum 1=0 { We now turn to Subsection~\ref{ss_freerows} in order to solve the problem of the free rows. \subsection{Detecting the free rows}\label{ss_freerows} We remain with a problem about the detection of the free rows. We already defined the principle of that detection which guarantees that the free rows lie on the same isoclines for red triangles of the same latitude. We have simply to define the prototiles which implement that principle. The set of prototiles defined by Figures~\ref{f_nproto_a_i} and~\ref{f_nproto_tri} guarantees the definition of green and orange isoclines for the whole heptagrid in such a way that we can number the isoclines with integers and in such a way that the green and orange isoclines receive numbers $4n$, $n\in\mathbb Z$. The silver signal allows us to synchronise the generation of triangles and to obtain the alternation of triangles and phantoms among the trilaterals of the same generation and the alternation of colours between the generations: even ones are blue, odd ones are red. We complete the prototiles of Subsection~\ref{ss_aperiod} in order to signalise the free rows. When that is done, we shall turn to the definition of the prototiles for the computation of the simulated Turing machine. \subsection{The tiles}\label{ss_tiles} In this sub-section, we shall describe as precisely as possible the tiles needed for the constructions defined in the previous sub-sections. The description is split into two parts. We revisit the prototiles defined in Subsection~\ref{ss_til_aperiod} with Figures~\ref{f_nproto_a_i} and~\ref{f_nproto_tri}. Indeed, we have to implement the detection of the free rows, the construction of the red and of the blue signals and then the travel of the computing signal $\xi$. The latter is tightly connected with the program of the simulated Turing machine~$M$ so that the related prototiles should better be called {\bf meta-tiles} since they bear variable signs for the content of a square of the tape of~$M$, for the state of~$M$ and for the direction~$\delta$ which has to be followed in order to meet the next \YY-path. The detection of the free rows and the construction of red and blue signals are defined in Subsubsection~\ref{sss_proto}. The management of the signal~$\xi$ is performed in Subsubsection~\ref{sss_meta}. \subsubsection{The proto-tiles} \label{sss_proto} With the silver signal, we fix the implementation of the triangles and of the phantoms. The actual place of the generation~$n$+1 is fixed by the first choice of an active seed which is on a free green isocline.
If the active seed triggers a triangle, a phantom, the active seeds of the basis of the trilateral trigger a phantom, a triangle respectively. Whence the expected alternation which the whole construction is based upon. The set of tiles we turn to now is called the set of {\bf prototiles}. We subdivide the set into two parts: the construction of the isoclines and the construction of the trilaterals. A prototile is a pattern. Indeed, a tile is the indication of two data: the location of a tile in the heptagrid and the indication of a copy of the prototile which is placed at that location. The mark of the isoclines indicates which shifts are allowed: from a tile on an isocline to another tile on another isocline, provided that the marks of the image match with those of the new isocline. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hskip-15pt \includegraphics[scale=0.3]{nov_tuiles_lib.ps} \hfill} \vspace{-10pt} \begin{fig}\label{f_proto_frow} \leurre Generation of the red and the yellow signals by the legs of the triangle. Note that there are two red signals: one for the left-hand side legs and the other for the right-hand side ones. Taking into account the possible colours of the isocline, we get $75$ tiles to be appended for those from Figures~{\rm\ref{f_nproto_a_i}} and {\rm\ref{f_nproto_tri}}. \end{fig} } \hfill} The set of prototiles forces the construction of the tiling with the isoclines. It also forces the activation of the seeds and the consecutive construction of the embedded triangles including the detection of the free rows in the triangles. We have already two figures to illustrate the prototiles: Figures~\ref{f_nproto_a_i} and~\ref{f_nproto_tri}. Each figure defines marks on the tiles for the construction of the tiling, of the isoclines, of the triangles and of the phantoms respectively. We append to those figures two new ones in order to introduce the new signals we defined: the red and the yellow ones which allow us to locate the free rows. Figure~\ref{f_proto_frow} illustrates the generation of the yellow and the red signals by the legs of a triangle. The figure makes use of meta-tiles in that meaning that the light mauve colour indicating the isocline can be freely replaced either by the orange colour or by the green one depending on which isocline we consider: remember that the red, blue and yellow signals run on green or orange isoclines only. Accordingly, that colour represents a variable for colours of the isoclines. A part of the tiles of Figure~\ref{f_proto_frow} are already present in Figure~\ref{f_nproto_tri}: the new red, blue or yellow signs are appended to those of Figure~\ref{f_nproto_tri} for 20 of them. \ifnum 1=0 { \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.4]{nov_prototuiles_freerowsII.ps} \hfill} \begin{fig}\label{f_proto_frowII} \leurre The prototiles conveying the red signals outside the legs of a triangle both inside and outside the triangle. Note the use of meta-colours for the isoclines and for the signals. The light blue colour represents both forms of the red signal. The light purple colour represents the two red signals and the yellow signal. The last picture of the third line illustrates the join tiles. We also append the prototiles for conveying the silver signal. Taking into account that the isoclines may have two colours, we get $67$ prototiles. 
\end{fig} } \hfill} Note that the tiles allowing red and blue signals to meet are attached to \BB-tiles: there are enough of them on each isocline. The distance between two consecutive \BB-tiles on an isocline is at most 5, as can be checked in Figure~\ref{f_ultra}. From that remark and summing the prototiles defined in Figures~\ref{f_nproto_a_i}, \ref{f_nproto_tri} and \ref{f_proto_frow}, we get 232 prototiles. \ifnum 1=0 { A last remark is about the vertex of a triangle. From Figure~\ref{f_nproto_tri}, we can see that the tile {\bf Rtg0} does not send a red signal but it sends a green signal and a silver one together. The silver signal allows us to synchronise the construction of the trilaterals as already mentioned and, with respect to free rows, that signal should be considered as a red one. The figure indicates the prototiles for the tiles outside the legs and the basis of a triangle, both inside and outside the triangle. Note that we get 67 prototiles. Among them, note that we have two join tiles, one for each kind of isocline. The join tile of Figure~\ref{f_proto_frowII} allows a right-hand side signal coming from the left to join a left-hand side signal coming from the right. It means that the red signals emitted by the legs of two triangles lying on the same isoclines can join outside the triangles. Moreover, we decided to permit that joining on a \BB-tile only, which reduces the number of prototiles but it does not alter the generality of the construction: in between legs of such triangles there are always \BB-tiles: the vertex~$\nu$ of a triangle is the \RR-son of an \MM-tile which has a \BB-son $\beta$. The \BB-son of $\beta$ is on an isocline which comes just after that of~$\nu$. The figure also displays the ten prototiles required to convey the silver signal, of course on a green isocline. Summing all the numbers obtained in each figure among Figures~\ref{f_proto_a_i}, \ref{f_nproto_tri}, \ref{f_proto_frowI} and~\ref{f_proto_frowII}, we obtain: \begin{lem}\label{proto_count} There is a set of $232$ prototiles which allow us to construct a tiling of the heptagrid implementing the embedded triangles with their isoclines together with the detection of the free rows in each triangle, the latitudes of trilaterals with identical attributes being synchronised. \end{lem} Note that the number of free rows in a trilateral is that of Lemma~\ref{free_rows} since the basis of a triangle is not signalised as a free row. In the appendix, several figures illustrate the construction of the tiling by focusing each one on one of the pictures belonging to Figure~\ref{f_proto}. \def\oo{{\bf o}} \def\yy{{\bf y}} \def\gg{{\bf g}} \def\mm{{\bf m}} \vskip 10pt \noindent \subsubsection{The meta-tiles} \label{sss_meta} Let us now examine the construction of prototiles for simulating a Turing machine. As already mentioned, that part of the prototiles depends upon the Turing machine~$M$ which is simulated. It also depends on the data $\mathcal D$ to which $M$ is applied. Of course, it would be possible to consider Turing machines starting from an empty tape. The consequence would be a huge complexification of the machine which would store the data in its states. It is simpler to consider that $M$ applies to actual data. It is the reason why we call these tiles {\bf meta-tiles}. As already mentioned, the simulation of the computation of~$M$ is organised in the red triangles, starting from generation~0.
The interest of those infinitely many generations is the fact that, as the number $n$ of the generation increases, the number of free rows in the corresponding triangles also increases, which gives the tiling more time in the simulation of~$M$. Also note that the space also increases since the height of a red triangle of the generation~$2n$+1 is $8^{n+1}$ according to Lemma~\ref{tri_indices}. As already mentioned, the initial configuration is displayed along the rightmost branch of a red triangle~$T$ which, outside the head of~$T$, consists of \OO-tiles. A tree of the heptagrid rooted at a tile on the rightmost branch of~$T$ has its leftmost branch constituted of \YY-tiles. Now, from Lemma~\ref{par_trees}, that border does not meet the legs of a triangle inside~$T$. Accordingly, the computation signal~$\xi$ travels on \YY-tiles only when it goes from a free row of~$T$ to the next one and it crosses consecutive tiles when it travels on a free row of~$T$. The meta-tiles are illustrated by Figure~\ref{f_nmeta}, where the caption explains the meaning of the colours. \vskip 0pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hskip-10pt \includegraphics[scale=0.3]{nov_tuiles_compute.ps} \hfill} \vspace{-15pt} \begin{fig}\label{f_nmeta} \leurre The meta-tiles for simulating the execution of the Turing machine~$M$. Tile~1 represents the active seed for a triangle. Tiles~2 up to~5 allow the signals emitted for the computation to go down until the first orange or green isocline reached by tiles~3 and~4 for the left- and right-hand side legs respectively. Tiles~6 up to~11 allow the computing signal to travel on a free row of the triangle. Tiles 12 up to~15 illustrate the execution of an instruction. Tiles 16 up to~19 represent the crossing by the \YY-path of the isoclines covered by a red lateral signal. Tiles 20 and~21 illustrate the junction of the \YY-path with a free row, allowing the computing signal to go towards the next \YY-path. Tile~22 illustrates the case when the computing signal reaches the basis of the triangle which interrupts the computation. Tiles~23 and~24 indicate that the computation by the Turing machine halts. Those tiles are unique and cannot abut any other tile of the tiling. \end{fig} } \hfill} \vskip 10pt A few supplementary explanations are needed. Meta-tile~\ref{f_nmeta}.1 sends a white signal to the right-hand side leg of the triangle it generates. It triggers tile \ref{f_nmeta}.3 which is on that leg in order to represent the squares of the Turing tape of~$M$. Meta-tiles \ref{f_nmeta}.6 up to \ref{f_nmeta}.11 illustrate the travel of the current state of~$M$ on a free row. Note that if a seed occurs on that isocline, it must be active and it is either meta-tile~\ref{f_nmeta}.1 or \ref{f_nmeta}.10 depending on whether the tile triggers a red triangle or a blue trilateral respectively. Meta-tiles~\ref{f_nmeta}.12 up to \ref{f_nmeta}.15 illustrate the execution of an instruction: the current state travels on the free row going to left or to right, meta-tiles~\ref{f_nmeta}.14, \ref{f_nmeta}.15 or \ref{f_nmeta}.12, \ref{f_nmeta}.13 respectively. The difference is seen on the \YY-son: if the new state goes to left, to right, the mark is put to left, to right respectively of the yellow mark of the \YY-son. Meta-tiles \ref{f_nmeta}.16 up to \ref{f_nmeta}.19 allow the new signal following the \YY-path to cross non-free rows.
When it reaches the free row, the new current state goes to left, to right, meta-tiles~\ref{f_nmeta}.20, \ref{f_nmeta}.21 respectively, depending on the side from which the current state came along the \YY-path. Meta-tile~\ref{f_nmeta}.22 is used when the current state reaches the basis of the red triangle: the computation is stopped since there are not enough free rows in that triangle for the computation of~$M$. Meta-tiles~\ref{f_nmeta}.23 and \ref{f_nmeta}.24 are used when the current state is the halting state: when it meets the free row, the halting of the computation of~$M$ is implemented. Note that those latter tiles cannot abut any other tiles among those we defined and cannot abut each other or each one with itself. And so, we can see that the computation in a triangle always stops. Either it happens by the meeting of the computing signal along a \YY-path with the basis of the triangle, or it happens by the halting of~$M$ itself. Meta-tiles \ref{f_nmeta}.22 up to \ref{f_nmeta}.24 illustrate those situations. The number of meta-tiles depends upon the number~$s$ of states and the number~$\ell$ of letters of~$M$. From Figure~\ref{f_nmeta}, we can make the following count: {\leftskip 30pt\parindent 0pt in Figure~\ref{f_nmeta}, tiles: 1 $-$ 5: 5 tiles;\\ 6 $-$ 11: 12$\times\ell$ tiles: two possible isoclines and $\ell$ possible letters;\\ 12 $-$ 15: 2$\times I$ tiles; $I$: number of instructions of $M$;\\ 16 $-$ 19: 3$\times I$ tiles: the four tiles together, three possible isoclines and $I$ possible instructions;\\ 20 $-$ 21: 2$\times I$ tiles; two possible isoclines, $I$ possible instructions;\\ 22 $-$ 24: 3 tiles. \par} \noindent where $I$ is the number of instructions of the program of~$M$, $s$ is the number of states and $\ell$ is the number of letters. Also $D$ is the length of the data written with letters of~$M$. Accordingly, the total number of meta-tiles is \hbox{$7.I$+$12.\ell$+$D$+8}. Combining that result with the previous countings we get: \begin{lem}\label{countundec} For each Turing machine $M$ with $s$ states and $\ell$ letters whose program contains $I$ instructions exactly, and whose data requires $D$ letters, there is a set ${\mathcal P}_{M,\mathcal D}$ of \hbox{$240$$+$$7.I$$+$$12.\ell$$+$$D$} prototiles, so that the tiling problem is undecidable for the set of all ${\mathcal P}_{M,\mathcal D}$ applied to data written with letters of~$M$. \end{lem} That completes the proof of Theorem~\ref{undec}. \section{A few corollaries for connected tiling problems}\label{s_corol} For the convenience of the reader, this section reproduces the similar section of \cite{mmarXiv22}. The construction leading to the proof of Theorem~\ref{undec} allows us to get a few results in the same line of problems. As indicated in \cite{goodmana,goodmanb}, there is a connection between the general tiling problem and the {\bf Heesch number} of a tiling. That number is defined as the maximum number of {\bf coronas} of a disc which can be formed with the tiles of a given set of tiles, see \cite{mann} for more information. As indicated in \cite{goodmanb}, and as our construction fits in the case of domino tilings, we have the following corollary of Theorem~\ref{undec}. \begin{thm}\label{heesch} There is no computable function which bounds the Heesch number for the tilings of the hyperbolic plane. \end{thm} The construction of~\cite{mmarXiv1} gives the following result, see \cite{mmarXiv3,mmrp07}. \begin{thm}\label{finite} The finite tiling problem is undecidable for the hyperbolic plane.
\end{thm} Indeed, when the Turing machine halts, the halting state triggers a signal which encloses the computation area and which compels the tiling to be completed by blank tiles only, see~\cite{mmrp07}. Combining the construction proving Theorem~\ref{finite} and the partition theorem which is proved in~\cite{mmbook1}, chapter~4, section~4.5.2 about the splitting of Fibonacci patchworks, also see~\cite{mmJCA}, the construction of this paper allows us to establish the following result, see~\cite{mmarXiv4}. \begin{thm}\label{periodictiling} The periodic tiling problem is undecidable for the hyperbolic plane, also in its domino version. \end{thm} Note that the analog of Theorem~\ref{periodictiling} for the Euclidean plane was proved by Gurevich and Koriakov, see \cite{gurevich}. In the statement of Theorem~\ref{periodictiling}, {\bf periodic} means that there is a shift which leaves the tiling globally invariant. The construction mimics that of Theorem~\ref{finite} in the fact that if the simulated Turing machine halts, we also enclose the computing area. But this time, we enlarge the notion of computing area and of triangles so as to also permit black trees to support embedded triangles. In this way, we can define areas of the kind defined by Fibonacci patchworks and of the size dictated by the halting of the machine. We define colours for these surrounding signals in such a way that they entail the construction of a scaled Fibonacci tree, see~\cite{mmJCAd}. Next, it is not difficult to construct a tiling of the hyperbolic plane in this way, periodically, applying the shifts already mentioned in~\cite{mmbook1}, chapter~4, section~4.5.3, also see~\cite{mmJCA}. \vskip 7pt Lastly, in another direction, we may apply the arguments of Hanf and Myers, see~\cite{hanf,myers}, and prove the following result. \begin{thm}\label{nonrec} There is a finite set of tiles such that it generates only non-recursive tilings of the hyperbolic plane. \end{thm} The proof makes use of the construction of two incomparable recursively enumerable sets~$A$ and~$B$ of integers. The set of tiles defines the computation of these sets by a Turing machine. Moreover, the set of tiles tiles the plane if and only if there is a set separating~$A$ from~$B$. As such a set cannot be constructed by an algorithm, we obtain the result stated in Theorem~\ref{nonrec}. \subsection{Conclusion} It seems to me that the construction of Section~\ref{s_aperiod} could be applied to prove undecidability problems on cellular automata. Of course, the halting problem for cellular automata is undecidable, but this is a simple consequence of the undecidability of the same problem for Turing machines. In fact, it is interesting to notice that the construction of Section~\ref{s_aperiod} which is based on Construction~\ref{c_interwov} can be performed by a cellular automaton. The working of the automaton could be devised as follows. We consider that the automaton works on three layers. On the first one, it tries to construct the tiling. The initial configuration of this layer is a blank plane, except at a tile called central, which is an active seed. On the second layer, the cellular automaton updates a ball around the central tile which coincides with that of the first layer. The third layer is a `working sheet' for intermediate computations performed by the automaton. It is plain that if the Turing machine implemented in the set of tiles does not halt, the cellular automaton will tile the plane in infinite time.
If the Turing machine halts, the cellular automaton will notice that the construction is stopped at some point. \vskip 7pt A last consequence of the construction of Section~\ref{s_aperiod} leads us back to hyperbolic geometry. We used Figure~\ref{f_interwov} in order to better understand Construction~\ref{c_interwov}. It is worth noticing that both figures are indeed Euclidean constructions. However, Construction~\ref{c_interwov} proceeds in a hyperbolic tiling. It seems to me that the fact that this transfer is possible has an important meaning. From my humble point of view, it means that a construction which seems to be purely Euclidean has indeed a purely combinatorial character. It belongs to absolute geometry and it mainly requires Archimedes' axiom. Note that absolute geometry itself has no pure model. A realisation is necessarily either Euclidean or hyperbolic. To conclude, we suggest that the extent of absolute geometry is probably somehow under-estimated. As indicated in the Introduction, the construction of the paper is inspired by the construction of the paper I wrote in 2007 to prove Theorem~\ref{undec}. However, and it was the main goal of the present paper, the number of needed prototiles is significantly reduced. From Lemma~\ref{countundec}, for simulating with a tiling a Turing machine~$M$ whose program contains $I$ instructions for $s$ states and $\ell$ letters and which is applied to a data $\mathcal D$ of length $D$, \hbox{$240$$+$$7.I$$+$$12.\ell$$+$$D$} prototiles are needed in our simulation. In contrast, paper \cite{mmarXiv22} required \hbox{556+20.$I$+136.$s$+12.$\ell$+$D$} for the same goal while my 2007 paper~\cite{mmundecTCS} required \hbox{$18870$+$880.s$+$1852.\ell$+$192.I$$+$$D$} prototiles. Note that the importance of the parameters involved in those formulas seriously depends on~$M$ and on its data. For the same Turing machine $M$ there are infinitely many possible data so that $D$ is the single variable. If we consider tiny universal Turing machines, see~\cite{neary_woods} for example, $D$ is enormous in comparison with~$I$. Since, for a single Turing machine, there are infinitely many possible data, it makes sense to focus our attention on the program of~$M$. If we apply those formulas to the universal Turing machine with 6 states and 4 letters from~\cite{neary_woods} we get 449 prototiles with the present paper, while 1884 prototiles are required in \cite{mmarXiv22} and 35782 of them for \cite{mmundecTCS}. The present result is at least four times better than that of \cite{mmarXiv22} and more than seventy nine times better than that of \cite{mmundecTCS}. If we consider Turing machines with a high number of instructions and if we ignore the size of the data, then in the present paper the number of prototiles is of order 7.$I$; it is 12.$I$ in \cite{mmarXiv22} and 192.$I$ in \cite{mmundecTCS}. Accordingly, in magnitude, the present paper is a bit more than 1.71 times better than \cite{mmarXiv22} and it divides that of~\cite{mmundecTCS} by more than 27. Note that if we consider a fixed universal Turing machine $U$ and if we consider the halting problem for $U$ applied to all its possible data, that problem is also undecidable. If $U$ is the considered tiny universal Turing machine, the single variable is then $D$. In that case, the previous formulas are all of the order of~$D$. \ifnum 1=0 { The present paper is probably the last or the penultimate I shall write.
I take that occasion to thank all my friends in science, namely Andrew Adamatsky, Serge Grigorieff, Genaro Juarez Martinez and George Paun. \subsection*{Acknowledgement} I am very pleased to acknowledge the interest of several colleagues and friends to the main result of this paper. Let me especially thank Andr\'e Barb\'e, Jean-Paul Delahaye, Chaim Goodman-Strauss, Serge Grigorieff, Yuri Gurevich, Tero Harju, Oscar Ibarra, Hermann Maurer, Gheorghe P\u aun, Grzegorz Rozenberg and Klaus Sutner. I am specially in debt to Professor Chaim Goodman-Strauss for most valuable comments to improve the presentation of the proof. \def\kvs{\vspace{-1pt}} \begin{thebibliography}{5} \font\itviii=cmti10 \font\bfviii=cmbx10 \bibitem{berger} Berger~R., The undecidability of the domino problem, {\it Memoirs of the American Mathematical Society}, {\bf 66}, (1966), 1-72. \kvs \bibitem{ibkmACRI} Chelghoum~K., Margenstern~M., Martin~B., Pecci~I., Cellular automata in the hyperbolic plane: proposal for a new environment, {\it Lecture Notes in Computer Sciences}, {\bf 3305}, (2004), 678-687, proceedings of ACRI'2004, Amsterdam, October, 25-27, 2004. \kvs \bibitem{goodmana} Goodman-Strauss, Ch., A strongly aperiodic set of tiles in the hyperbolic plane, {\it Inventiones Mathematicae}, {\bf 159}(1), (2005), 119-132. \kvs \bibitem{goodmanb} Goodman-Strauss, Ch., Growth, aperiodicity and undecidability, {\it invited address at the AMS meeting at Davidson, NC}, March, 3-4, (2007). \kvs \bibitem{gurevich} Gurevich~Yu., Koriakov~I., A remark on Berger's paper on the domino problem, {\it Siberian Mathematical Journal}, {\bf 13}, (1972), 459--463. \kvs \bibitem{hanf} Hanf~W., Nonrecursive tilings of the plane. I. {\it Journal of Symbolic Logic}, {\bf 39}, (1974), 283-285. \kvs \bibitem{hopcroft} Hopcroft, J.E., Motwani, R., Ullman, J.D., {\it Introduction to Automata Theory, Languages, and Computation}, Addison Wesley, Boston/San Francisco/New York, (2001). \kvs \bibitem{mann} Mann~C., Heesch's tiling problem, {\it American Mathematical Monthly}, {\bf 111}(6), (2004), 509-517. \kvs \bibitem{mmJUCSii} Margenstern~M., New Tools for Cellular Automata of the Hyperbolic Plane, {\it Journal of Universal Computer Science} {\bf 6{\rm N$^\circ$12}}, 1226--1252, (2000) \kvs \bibitem{mmDMTCS} Margenstern~M., Cellular Automata and Combinatoric Tilings in Hyperbolic Spaces, a survey, {\it Lecture Notes in Computer Sciences}, {\bf 2731}, (2003), 48-72. \kvs \bibitem{mmJCA} Margenstern~M., A new way to implement cellular automata on the penta- and heptagrids, {\it Journal of Cellular Automata} {\bf 1}, N$^\circ1$, (2006), 1-24. \kvs \bibitem{mmarXiv1} Margenstern~M., About the domino problem in the hyperbolic plane from an algorithmic point of view, {\it iarXiv:cs.CG/$0603093$ v$1$}, (2006), 11p. \kvs \bibitem{mmarXiv3} Margenstern~M., The finite tiling problem is undecidable in the hyperbolic plane, {\it arxiv$:$cs.CG/$0703147v1$}, (2007), March, 8p. \kvs \bibitem{mmarXiv4} Margenstern~M., The periodic domino problem is undecidable in the hyperbolic plane, {\it arxiv$:$cs.CG/$0703153v1$}, (2007), March, 12p. \kvs \bibitem{mmrp07} Margenstern~M., The Finite Tiling Problem Is Undecidable in the Hyperbolic Plane, {\it Workshop on Reachability Problems}, {\bf RP07}, July 2007, Turku, Finland, \kvs \bibitem{mmbook1} Margenstern~M., Cellular Automata in Hyperbolic Spaces, Volume 1, Theory, {\it OCP}, Philadelphia, (2007), 422p. 
\kvs \bibitem{mmBEATCS} Margenstern~M., The Domino Problem of the Hyperbolic Plane is Undecidable, {\it Bulletin of the EATCS}, {\bf 93}, October, (2007), 220-237. \kvs \bibitem{mmundecTCS} Margenstern~M., The domino problem of the hyperbolic plane is undecidable, {\it Theoretical Computer Science}, {\bf 407}(1-3), (2008), 29-84. \kvs \bibitem{mmJCAd} Margenstern~M., A Uniform and Intrinsic Proof that there are Universal Cellular Automata in Hyperbolic Spaces, {\it Journal of Cellular Automata} {\bf 3}, N$^\circ2$, (2008), 157-180. \kvs \bibitem{mmarXiv22} Margenstern~M., The domino problem of the hyperbolic plane is undecidable, new proof, {\it arXiv$:$cs.DM/$2205.07317v1$}, (2022), May, 49pp. \kvs \bibitem{myers} Myers~D., Nonrecursive tilings of the plane. II. {\it Journal of Symbolic Logic}, {\bf 39}, (1974), 286-294. \kvs \bibitem{neary_woods} Neary~T., Woods~D., Four Small Universal Turing Machines, {\it Fundamenta Informaticae}, {\bf 91}(1), (2009), 123-144. \kvs \bibitem{robinson1} Robinson~R.M. Undecidability and nonperiodicity for tilings of the plane, {\it Inventiones Mathematicae}, {\bf 12}, (1971), 177-209. \kvs \bibitem{robinson2} Robinson~R.M. Undecidable tiling problems in the hyperbolic plane. {\it Inventiones Mathematicae}, {\bf 44}, (1978), 259-264. \kvs \bibitem{turing} Turing A.M., On computable real numbers, with an application to the Entscheidungsproblem, {\it Proceedings of the London Mathematical Society}, ser. 2, {\bf 42}, 230-265, (1936). \kvs \bibitem{wang} Wang~H. Proving theorems by pattern recognition, Bell System Tech. J. vol. {\bf 40} (1961), 1--41. \end{thebibliography} \newpage \ligne{\hfill\bf\huge Appendix\hfill} \vskip 10pt We first reproduce the pictures for the construction of the interwoven triangles at a larger scale. \vskip 0pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hfill\hskip-10pt \includegraphics[scale=0.4]{tttt_interwov_0.ps} \hfill} \begin{fig}\label{t_interwov_0} \leurre The isoclines and the generation~$0$ of the interwoven triangles. Note the alternation of triangles and phantoms. \end{fig} } \hfill} \vskip 0pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hfill\hskip-10pt \includegraphics[scale=0.4]{tttt_interwov_1.ps} \hfill} \begin{fig}\label{f_interwov_1} \leurre The isoclines and the generations~$0$ and~1 of the interwoven triangles. Note the alternation of triangles and phantoms of the same colour \end{fig} } \hfill} \newpage \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hfill\hskip-10pt \includegraphics[scale=0.4]{tttt_interwov_2.ps} \hfill} \begin{fig}\label{f_interwov_2} \leurre The isoclines and the generations~$0$, 1 and~2 of the interwoven triangles. Note the alternation of triangles and phantoms of the same colour \end{fig} } \hfill} \vskip 0pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hfill\hskip-10pt \includegraphics[scale=0.4]{tttt_interwov_3.ps} \hfill} \begin{fig}\label{f_interwov_3} \leurre The isoclines and the generations~$0$, 1, 2 and~3 of the interwoven triangles. Note the alternation of triangles and phantoms of the same colour \end{fig} } \hfill} \newpage \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=320pt \ligne{\hfill\hskip-10pt \includegraphics[scale=0.4]{tttt_interwov_4.ps} \hfill} \begin{fig}\label{f_interwov_4} \leurre The isoclines and the generations~$0$, 1, 2, 3 and~4 of the interwoven triangles. 
Note the alternation of triangles and phantoms of the same colour \end{fig} } \hfill} \newpage \vskip 20pt As announced in Subsection~\ref{ss_iso_thr}, we give several figures where the neighbourhood of the central tile occurs in Figure~\ref{f_proto}. Figure~\ref{f_pavBB} illustrates how the rules are applied starting from a fixed central tile and its father according to Figure~\ref{f_proto}. The following figures apply the same principle. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.35]{nov_pav_scheme.ps} \hfill} \begin{fig}\label{f_pavsch} \leurre The central tile and its immediate neighbourhood are those of Figure~\ref{f_pavBB}. \end{fig} } \hfill} \vskip 10pt \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.55]{nov_pav_BB.ps} \hfill} \begin{fig}\label{f_pavBB} \leurre Central tile: a \BB-tile whose father is also a \BB-tile. The rules of~$(R_1)$ are applied. \end{fig} } \hfill} \newpage Presently, the \BB-tiles with, as father, an \OO-tile and then a \YY-one. There cannot be a \GG-father: in that case, the \BB-son is replaced by an \MM-one, see Figure~\ref{f_pavMG}. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.55]{nov_pav_BO.ps} \hfill} \begin{fig}\label{f_pavBO} \leurre Central tile: a \BB-tile whose father is an \OO-tile. The rules of~$(R_1)$ are applied. \end{fig} } \hfill} \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_BY.ps} \hfill} \begin{fig}\label{f_pavBY} \leurre Central tile: a \BB-tile whose father is a \YY-tile. The rules of~$(R_1)$ are applied. \end{fig} } \hfill} \newpage Presently, the case of an \MM-tile whose father is necessarily a \GG-tile, according to rules~$(R_1)$. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_MG.ps} \hfill} \begin{fig}\label{f_pavMG} \leurre Central tile: an \MM-tile. Its father is necessarily a \GG-tile. The rules of~$(R_1)$ are applied. Note that the neighbourhood is different from those of Figures~\ref{f_pavBB} and~\ref{f_pavBO}. \end{fig} } \hfill} \vskip 5pt Presently, the central tiles are \YY-tiles in the four following figures. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_YY.ps} \hfill} \begin{fig}\label{f_pavYY} \leurre Central tile: a \YY-tile whose father is also a \YY-tile. The rules of~$(R_1)$ are applied. \end{fig} } \hfill} \newpage \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_YO.ps} \hfill} \begin{fig}\label{f_pavYO} \leurre Central tile: a \YY-tile whose father is an \OO-tile. The rules of~$(R_1)$ are applied. The neighbourhood is different from that of Figure~\ref{f_pavYY}. \end{fig} } \hfill} \vskip 10pt In Figures~\ref{f_pavYGM} and \ref{f_pavYGB}, the father is a \GG-tile. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_YGM.ps} \hfill} \begin{fig}\label{f_pavYGM} \leurre Central tile: a \YY-tile whose father is a \GG-tile. The rules of~$(R_1)$ are applied. Neighbours~2 and~7 of the central tile are \MM-tiles.
\end{fig} } \hfill} \vskip 10pt \newpage \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_YGB.ps} \hfill} \begin{fig}\label{f_pavYGB} \leurre Central tile: a \YY-tile whose father is a \GG-tile. The rules of~$(R_1)$ are applied. Neighbour~2 of the central tile is a \BB-tile, neighbour~7 is an \MM-one. \end{fig} } \hfill} \vskip 5pt Presently, the central tile is a \GG-tile. The father is either a \YY-tile or a \GG-one. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_GY.ps} \hfill} \begin{fig}\label{f_pavGY} \leurre Central tile: a \GG-tile whose father is a \YY-tile. The rules of~$(R_1)$ are applied. Neighbours~4 of the central tile is an \MM-tile. Neighbour~2 is a \BB-one. \end{fig} } \hfill} \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_GG.ps} \hfill} \begin{fig}\label{f_pavGG} \leurre Central tile: a \GG-tile whose father is also a \GG-tile. The rules of~$(R_1)$ are applied. Neighbour~2 of the central tile is a \BB-tile, neighbour~3 is an \MM-one. \end{fig} } \hfill} \vskip 10pt Presently, the central tile is an \OO-tile. The father is either a \BB-tile or an \OO-one. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_OB.ps} \hfill} \begin{fig}\label{f_pavOB} \leurre Central tile: an \OO-tile whose father is a \BB-tile. The rules of~$(R_1)$ are applied. \end{fig} } \hfill} \newpage \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_OO.ps} \hfill} \begin{fig}\label{f_pavOO} \leurre Central tile: an \OO-tile whose father is an \OO-tile. The rules of~$(R_1)$ are applied. Besides the father, the neighbourhood is the same as in Figure~\ref{f_pavOB}. \end{fig} } \hfill} \vskip 10pt Presently, the central tile is an \RR-tile. The father is necessarily an \MM-tile. \vskip 10pt \ligne{\hfill \vtop{\leftskip 0pt\parindent 0pt\hsize=300pt \ligne{\hfill \includegraphics[scale=0.6]{nov_pav_RM.ps} \hfill} \begin{fig}\label{f_pavRM} \leurre Central tile: an \RR-tile. The father is an \MM-tile. The rules of~$(R_1)$ are applied. The neighbourhood is different from those of Figures~\ref{f_pavOB} and~\ref{f_pavOO} despite the fact that the rules are similar to those involving \OO-tiles. \end{fig} } \hfill} \end{document}
2205.07189v1
http://arxiv.org/abs/2205.07189v1
Simultaneous coloring of vertices and incidences of graphs
\documentclass[11pt,letterpaper]{article} \usepackage{amssymb,amsmath,graphicx,amsfonts} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{tikz} \usetikzlibrary{arrows} \usepackage{color} \renewcommand{\baselinestretch}{1.0} \oddsidemargin = 0 cm \evensidemargin = 0 cm \textwidth = 16cm \textheight = 22 cm \headheight=0cm \topskip=0cm \topmargin=0cm \newtheorem{theorem}{Theorem} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{problem}[theorem]{Problem} \newtheorem{questions}[theorem]{Questions} \newtheorem{construction}[theorem]{Construction} \newtheorem{notation}[theorem]{Notation} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{preproof}{{\bf Proof}} \renewcommand{\thepreproof}{} \newenvironment{proof}[1]{\begin{preproof}{\rm #1}\hfill{$\blacksquare$}}{\end{preproof}} \newtheorem{presproof}{{\bf Sketch of Proof.\ }} \renewcommand{\thepresproof}{} \newenvironment{sproof}[1]{\begin{presproof}{\rm #1}\hfill{$\blacksquare$}}{\end{presproof}} \newtheorem{prepro}{{\bf Proposition}} \renewcommand{\theprepro} {{\arabic{prepro}}} \newenvironment{pro}{\begin{prepro}{\hspace{-0.5 em}{\bf.\ }}}{\end{prepro}} \title{Simultaneous coloring of vertices and incidences of graphs} {\small \author{Mahsa Mozafari-Nia$^a$, Moharram N. Iradmusa$^{a,b}$\\ {\small $^{a}$Department of Mathematical Sciences, Shahid Beheshti University,}\\ {\small G.C., P.O. Box 19839-63113, Tehran, Iran.}\\ {\small $^{b}$School of Mathematics, Institute for Research in Fundamental Sciences (IPM),}\\ {\small P.O. Box: 19395-5746, Tehran, Iran.}} \begin{document} \maketitle \begin{abstract} An $n$-subdivision of a graph $G$ is a graph constructed by replacing each edge of $G$ with a path of length $n$, and an $m$-power of $G$ is a graph with the same vertices as $G$ in which any two vertices at distance at most $m$ are adjacent. The graph $G^{\frac{m}{n}}$ is the $m$-power of the $n$-subdivision of $G$. In [M. N. Iradmusa, M. Mozafari-Nia, A note on coloring of $\frac{3}{3}$-power of subquartic graphs, Vol. 79, No.3, 2021] it was conjectured that the chromatic number of the $\frac{3}{3}$-power of graphs with maximum degree $\Delta\geq 2$ is at most $2\Delta+1$. In this paper, we introduce the simultaneous coloring of vertices and incidences of graphs and show that the minimum number of colors for a simultaneous proper coloring of the vertices and incidences of $G$, denoted by $\chi_{vi}(G)$, is equal to the chromatic number of $G^{\frac{3}{3}}$. Also, by determining the exact value or an upper bound for this parameter, we investigate the correctness of the conjecture for some classes of graphs such as $k$-degenerated graphs, cycles, forests, complete graphs and regular bipartite graphs. In addition, we investigate the relationship between this new chromatic number and other parameters of graphs. \end{abstract} \section{Introduction}\label{sec1} All graphs we consider in this paper are simple, finite and undirected. For a graph $G$, we denote its vertex set, edge set and face set (if $G$ is planar) by $V(G)$, $E(G)$ and $F(G)$ respectively. The maximum degree, independence number and clique number of $G$ are denoted by $\Delta(G)$, $\alpha(G)$ and $\omega(G)$, respectively.
Also, for a vertex $v\in V(G)$, $N_G(v)$ is the set of neighbors of $v$ in $G$ and any vertex of degree $k$ is called a $k$-vertex. From now on, we use the notation $[n]$ instead of $\{1,\ldots,n\}$. We mention some of the definitions that are referred to throughout the note and for other necessary definitions and notations we refer the reader to a standard text-book \cite{bondy}.\\ A mapping $c$ from $V(G)$ to $[k]$ is a proper $k$-coloring of $G$ if $c(v)\neq c(u)$ for any two adjacent vertices. The minimum integer $k$ such that $G$ has a proper $k$-coloring is the chromatic number of $G$ and is denoted by $\chi(G)$. Instead of the vertices, we can color the edges of a graph. A mapping $c$ from $E(G)$ to $[k]$ is a proper edge-$k$-coloring of $G$ if $c(e)\neq c(e')$ for any two adjacent edges $e$ and $e'$ ($e\cap e'\neq\varnothing$). The minimum integer $k$ such that $G$ has a proper edge-$k$-coloring is the chromatic index of $G$ and is denoted by $\chi'(G)$.\\ Another coloring of a graph is the coloring of its incidences. The concepts of incidence, incidence graph and incidence coloring were introduced by Brualdi and Massey in 1993 \cite{Bruldy}. In a graph $G$, any pair $i=(v,e)$ is called an incidence of $G$ if $v\in V(G)$, $e\in E(G)$ and $v\in e$. Also in this case the elements $v$ and $i$ are called incident. For any edge $e=\{u,v\}$, we call $(u,e)$ the first incidence of $u$ and $(v,e)$ the second incidence of $u$. In general, for a vertex $v\in V(G)$, the set of the first incidences and the set of the second incidences of $v$ are denoted by $I_1^G(v)$ and $I_2^G(v)$, respectively. Also let $I_G(v)=I_1^G(v)\cup I_2^G(v)$, $I_1^G[v]=\{v\}\cup I_1^G(v)$ and $I_G[v]=\{v\}\cup I_G(v)$. Sometimes we remove the index $G$ for simplicity.\\ Let $I(G)$ be the set of the incidences of $G$. The incidence graph of $G$, denoted by $\mathcal{I}(G)$, is a graph with vertex set $V(\mathcal{I}(G))=I(G)$ such that two incidences $(v,e)$ and $(w,f)$ are adjacent in $\mathcal{I}(G)$ if $(i)$ $v=w$, or $(ii)$ $e=f$, or $(iii)$ $\{v,w\}=e$ or $f$. Any proper $k$-coloring of $\mathcal{I}(G)$ is an incidence $k$-coloring of $G$. The incidence chromatic number of $G$, denoted by $\chi_i(G)$, is the minimum integer $k$ such that $G$ is incidence $k$-colorable.\\ Total coloring is one of the first simultaneous colorings of graphs. A mapping $c$ from $V(G)\cup E(G)$ to $[k]$ is a proper total-$k$-coloring of $G$ if $c(x)\neq c(y)$ for any two adjacent or incident elements $x$ and $y$. The minimum integer $k$ such that $G$ has a proper total-$k$-coloring is the total chromatic number of $G$ and is denoted by $\chi''(G)$ \cite{behzad}. In 1965, Behzad conjectured that $\chi''(G)$ never exceeds $\Delta(G)+2$.\\ Another simultaneous coloring began in the mid-1960s with Ringel \cite{ringel}, who conjectured that the vertices and faces of a planar graph may be colored with six colors such that every two adjacent or incident elements are colored differently. In addition to total coloring, which is defined for any graph, there are three other types of simultaneous colorings of a planar graph $G$, depending on the use of at least two of the sets $V(G)$, $E(G)$ and $F(G)$ in the coloring. These colorings of graphs have been studied extensively in the literature and there are many results and also many open problems.
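Before going further, we illustrate the incidence graph $\mathcal{I}(G)$ defined above with a small computational sketch. It is only an illustration and not part of the paper's toolkit: it assumes Python with the \texttt{networkx} package, and the function names are ours.
\begin{verbatim}
# Illustrative sketch only (not part of the paper): it assumes Python 3 with
# the networkx package and builds the incidence graph I(G) from the adjacency
# conditions (i)-(iii) given above.
from itertools import product
import networkx as nx

def incidence_graph(G):
    """Vertices are the incidences (v, e) of G, with e stored as a frozenset."""
    incidences = [(v, e) for e in map(frozenset, G.edges()) for v in e]
    IG = nx.Graph()
    IG.add_nodes_from(incidences)
    for (v, e), (w, f) in product(incidences, repeat=2):
        if (v, e) == (w, f):
            continue
        # (i) v = w, (ii) e = f, (iii) {v, w} = e or {v, w} = f
        if v == w or e == f or frozenset((v, w)) in (e, f):
            IG.add_edge((v, e), (w, f))
    return IG

def chromatic_number(H):
    """Brute-force chromatic number; only meant for very small graphs."""
    nodes = list(H.nodes())
    for k in range(1, len(nodes) + 1):
        for colours in product(range(k), repeat=len(nodes)):
            c = dict(zip(nodes, colours))
            if all(c[u] != c[v] for u, v in H.edges()):
                return k

# chromatic_number(incidence_graph(nx.cycle_graph(4))) evaluates to 4,
# i.e. the incidence chromatic number of C_4 used in the example on cycles below.
\end{verbatim}
The brute-force routine is of course exponential and is meant only to check very small instances such as the cycles considered in this paper. We now return to the simultaneous colorings discussed above.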
For further information see \cite{borodin, chan, wang1,wang2}.\\ Inspired by the total coloring of a graph $G$ and its connection with the fractional power of graphs which was introduced in \cite{paper13}, in this paper we define a new kind of simultaneous coloring of graphs. In this type of coloring, we color simultaneously the vertices and the incidences of a graph. \begin{definition}\label{verinccol} Let $G$ be a graph. A $vi$-simultaneous proper $k$-coloring of $G$ is a coloring $c:V(G)\cup I(G)\longrightarrow[k]$ in which any two adjacent or incident elements in the set $V(G)\cup I(G)$ receive distinct colors. The $vi$-simultaneous chromatic number, denoted by $\chi_{vi}(G)$, is the smallest integer k such that $G$ has a $vi$-simultaneous proper $k$-coloring. \end{definition} \begin{example} {\rm Suppose cycles of order 3 and 4. we know that $\chi(C_3)=\chi'(C_3)=3$ and $\chi''(C_3)=\chi_i(C_3)=4$. But four colors are not enough for $vi$-simultaneous proper coloring of $C_3$ and easily one can show that $\chi_{vi}(C_3)=5$. For the cycle of order four, we have $\chi(C_4)=\chi'(C_4)=2$ and $\chi''(C_4)=\chi_i(C_4)=4$. In addition, Figure \ref{C4} shows that $\chi_{vi}(C_4)=4$.} \end{example} \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1.0] \tikzset{vertex/.style = {shape=circle,draw, line width=1pt, opacity=1.0, inner sep=2pt}} \tikzset{vertex1/.style = {shape=circle,draw, fill=black, line width=1pt,opacity=1.0, inner sep=2pt}} \tikzset{arc/.style = {->,> = latex', line width=1pt,opacity=1.0}} \tikzset{edge/.style = {-,> = latex', line width=1pt,opacity=1.0}} \node[vertex1] (a) at (0,0) {}; \node at (-0.3,-0.3) {$1$}; \node[vertex] (b) at (1,0) {}; \node at (1,-0.4) {$2$}; \node[vertex] (c) at (2,0) {}; \node at (2,-0.4) {$3$}; \node[vertex1] (d) at (3,0) {}; \node at (3.3,-0.3) {$4$}; \node[vertex] (e) at (3,1) {}; \node at (3.4,1) {$1$}; \node[vertex] (f) at (3,2) {}; \node at (3.4,2) {$2$}; \node[vertex1] (g) at (3,3) {}; \node at (3.3,3.3) {$3$}; \node[vertex] (h) at (2,3) {}; \node at (2,3.4) {$4$}; \node[vertex] (i) at (1,3) {}; \node at (1,3.4) {$1$}; \node[vertex1] (j) at (0,3) {}; \node at (-0.3,3.3) {$2$}; \node[vertex] (k) at (0,2) {}; \node at (-0.4,2) {$3$}; \node[vertex] (m) at (0,1) {}; \node at (-0.4,1) {$4$}; \draw[edge] (a) to (b); \draw[edge] (b) to (c); \draw[edge] (c) to (d); \draw[edge] (d) to (e); \draw[edge] (e) to (f); \draw[edge] (f) to (g); \draw[edge] (g) to (h); \draw[edge] (h) to (i); \draw[edge] (i) to (j); \draw[edge] (j) to (k); \draw[edge] (k) to (m); \draw[edge] (m) to (a); \node[vertex1] (a1) at (5,0) {}; \node at (4.7,-0.3) {$a$}; \node[vertex] (b1) at (6,0) {}; \node at (6,-0.4) {$(a,b)$}; \node[vertex] (c1) at (7,0) {}; \node at (7,-0.4) {$(b,a)$}; \node[vertex1] (d1) at (8,0) {}; \node at (8.3,-0.3) {$b$}; \node[vertex] (e1) at (8,1) {}; \node at (8.6,1) {$(b,c)$}; \node[vertex] (f1) at (8,2) {}; \node at (8.6,2) {$(c,b)$}; \node[vertex1] (g1) at (8,3) {}; \node at (8.3,3.3) {$c$}; \node[vertex] (h1) at (7,3) {}; \node at (7,3.4) {$(c,d)$}; \node[vertex] (i1) at (6,3) {}; \node at (6,3.4) {$(d,c)$}; \node[vertex1] (j1) at (5,3) {}; \node at (4.7,3.3) {$d$}; \node[vertex] (k1) at (5,2) {}; \node at (4.4,2) {$(d,a)$}; \node[vertex] (m1) at (5,1) {}; \node at (4.4,1) {$(a,d)$}; \draw[edge] (a1) to (b1); \draw[edge] (b1) to (c1); \draw[edge] (c1) to (d1); \draw[edge] (d1) to (e1); \draw[edge] (e1) to (f1); \draw[edge] (f1) to (g1); \draw[edge] (g1) to (h1); \draw[edge] (h1) to (i1); \draw[edge] (i1) to (j1); \draw[edge] (j1) to (k1); 
\draw[edge] (k1) to (m1); \draw[edge] (m1) to (a1); \end{tikzpicture} \caption{$vi$-simultaneous proper $4$-coloring of $C_4$. Black vertices correspond to the vertices of $C_4$ and white vertices correspond to the incidences of $C_4$. The incidence $(u,\{u,v\})$ is denoted by $(u,v)$.} \label{C4} \end{center} \end{figure} Similar to incidence coloring, we can define some special kinds of $vi$-simultaneous coloring of graphs according to the number of colors that appear on the incidences of each vertex. \begin{definition}\label{(k,l)IncidenceCol} A $vi$-simultaneous proper $k$-coloring of a graph $G$ is called a $vi$-simultaneous $(k,s)$-coloring of $G$ if for any vertex $v$, the number of colors used for coloring $I_2(v)$ is at most $s$. We denote by $\chi_{vi,s}(G)$ the smallest number of colors required for a $vi$-simultaneous $(k,s)$-coloring of $G$. \end{definition} For example, the $vi$-simultaneous coloring of $C_4$ in Figure \ref{C4} is a $vi$-simultaneous $(4,1)$-coloring and so $\chi_{vi,1}(C_4)=4$. Observe that $\chi_{vi,1}(G)\geq\chi_{vi,2}(G)\geq\cdots\geq\chi_{vi,\Delta}(G)=\chi_{vi}(G)$ for every graph $G$ with maximum degree $\Delta$. \subsection{Fractional power of graph} For the edge coloring and total coloring of any graph $G$, two corresponding graphs are defined. In the line graph of $G$, denoted by $\mathcal{L}(G)$, the vertex set is $E(G)$ and two vertices $e$ and $e'$ are adjacent if $e\cap e'\neq\varnothing$. In the total graph of $G$, denoted by $\mathcal{T}(G)$, the vertex set is $V(G)\cup E(G)$ and two vertices are adjacent if and only if they are adjacent or incident in $G$. According to these definitions, we have $\chi'(G)=\chi(\mathcal{L}(G))$ and $\chi''(G)=\chi(\mathcal{T}(G))$. Therefore, edge coloring and total coloring of graphs can be converted to vertex coloring of graphs.\\ Motivated by the concept of total graph, the fractional power of a graph was first introduced in \cite{paper13}. Let $G$ be a graph and $k$ be a positive integer. The \emph{$k$-power of $G$}, denoted by $G^k$, is defined on the vertex set $V(G)$ by adding edges joining any two distinct vertices $x$ and $y$ with distance at most $k$. Also the $k$-subdivision of $G$, denoted by $G^{\frac{1}{k}}$, is constructed by replacing each edge $xy$ of $G$ with a path of length $k$ with the vertices $x=(xy)_0,(xy)_1,\ldots, (xy)_{k-1},y=(xy)_k$. Note that the vertex $(xy)_l$ has distance $l$ from the vertex $x$, where $l\in \{0,1,\ldots,k\}$. Also, $(xy)_l=(yx)_{k-l}$, for any $l\in \{0,1,\ldots,k\}$. The vertices $(xy)_0$ and $(xy)_k$ are called terminal vertices and the others are called internal vertices. We refer to these vertices, for short, as $t$-vertices and $i$-vertices of $G$, respectively. Now the fractional power of a graph $G$ is defined as follows. \begin{definition}\label{def1} Let $G$ be a graph and $m,n\in \mathbb{N}$. The graph $G^{\frac{m}{n}}$ is defined to be the $m$-power of the $n$-subdivision of $G$. In other words, $G^{\frac{m}{n}}=(G^{\frac{1}{n}})^m$. \end{definition} The sets of terminal and internal vertices of $G^\frac{m}{n}$ are denoted by $V_t(G^\frac{m}{n})$ and $V_i(G^\frac{m}{n})$, respectively. It is worth noting that $G^{\frac{1}{1}}=G$ and $G^{\frac{2}{2}}=\mathcal{T}(G)$.\\ By virtue of Definition \ref{def1}, one can show that $\omega(G^{\frac{2}{2}})=\Delta(G)+1$ and the Total Coloring Conjecture can be reformulated as follows.
\begin{conjecture}\label{conj1} {For any simple graph $G$, $\chi(G^{\frac{2}{2}})\leq \omega(G^{\frac{2}{2}})+1$.} \end{conjecture} In \cite{paper13}, the chromatic number of some fractional powers of graphs was first studied and it was proved that $\chi(G^{\frac{m}{n}})=\omega(G^{\frac{m}{n}})$ where $n=m+1$ or $m=2<n$. It was also conjectured that $\chi(G^{\frac{m}{n}})=\omega(G^{\frac{m}{n}})$ for any graph $G$ with $\Delta(G)\geq3$ when $\frac{m}{n}\in\mathbb{Q}\cap(0,1)$. This conjecture was disproved by Hartke, Liu and Petrickova \cite{hartke2013}, who proved that the conjecture is not true for the Cartesian product $C_3\Box K_2$ (the triangular prism) when $m=3$ and $n=5$. However, they claimed that the conjecture is valid except when $G=C_3\Box K_2$. In addition, they proved that the conjecture is true when $m$ is even.\\ It can easily be seen that $G$ and $\mathcal{I}(G)$ are isomorphic to the subgraphs of $G^\frac{3}{3}$ induced by $V_t(G^\frac{3}{3})$ and $V_i(G^\frac{3}{3})$, the sets of terminal and internal vertices of $G^\frac{3}{3}$, respectively. So $\chi_i(G)=\chi(G^{\frac{3}{3}}[V_i(G^\frac{3}{3})])$. Also, by considering the $3$-subdivision of a graph $G$, the two internal vertices $(uv)_1$ and $(uv)_2$ of the edge $uv$ in $G^{\frac{3}{3}}$ correspond to the incidences of the edge $\{u,v\}$ in $G$. For convenience, we denote $(uv)_1$ and $(uv)_2$ by $(u,v)$ and $(v,u)$, respectively.\\ Similar to the equality $\chi''(G)=\chi(G^{\frac{2}{2}})$, we have the following basic theorem about the relation between the $vi$-simultaneous coloring of a graph and the vertex coloring of its $\frac{3}{3}$-power. \begin{theorem}\label{vi-simultaneous} For any graph $G$, $\chi_{vi}(G)=\chi(G^{\frac{3}{3}})$. \end{theorem} Because of Theorem~\ref{vi-simultaneous}, we use the notations $\chi_{vi}(G)$ and $\chi(G^{\frac{3}{3}})$ interchangeably in the rest of the paper. We often use the notation $\chi_{vi}(G)$ to express the theorems and the notation $\chi(G^{\frac{3}{3}})$ in the proofs.\\ As mentioned in \cite{paper13}, one can easily show that $\omega(G^{\frac{3}{3}})=\Delta(G)+2$ when $\Delta(G)\geq 2$ and $\omega(G^{\frac{3}{3}})=4$ when $\Delta(G)=1$. Therefore, $\Delta+2$ is a lower bound for $\chi(G^{\frac{3}{3}})$ and $\chi_{vi}(G)$ when $\Delta(G)\geq 2$. In \cite{paper13}, the chromatic numbers of the fractional powers of cycles and paths were determined, which can be used to show that the graphs with maximum degree two are $vi$-simultaneous 5-colorable (see Section \ref{sec4}). In \cite{iradmusa2020,3power3subdivision} it is shown that $\chi(G^{\frac{3}{3}})\leq7$ for any graph $G$ with maximum degree $3$. Moreover, in \cite{mahsa} it is proved that $\chi(G^{\frac{3}{3}})\leq 9$ for any graph $G$ with maximum degree $4$. Also, in \cite{iradmusa2020} it is proved that $\chi(G^{\frac{3}{3}})\leq\chi(G)+\chi_i(G)$ when $\Delta(G)\leq2$ and $\chi(G^{\frac{3}{3}})\leq \chi(G)+\chi_i(G)-1$ when $\Delta(G)\geq 3$. In addition, in \cite{Bruldy}, it is shown that $\chi_i(G)\leq2\Delta(G)$ for any graph $G$. Hence, if $G$ is a graph with $\Delta(G)\geq2$, then $\chi(G^{\frac{3}{3}})=\chi_{vi}(G)\leq 3\Delta(G)$.\\ According to the results mentioned in the previous paragraph, the following conjecture is true for graphs with maximum degree at most $4$. \begin{conjecture}{\em{\cite{mahsa}}}\label{cmahsa} Let $G$ be a graph with $\Delta(G)\geq 2$. Then $\chi_{vi}(G)\leq 2\Delta(G)+1$. \end{conjecture} We know that $\chi(G^{\frac{3}{3}})\geq \omega(G^{\frac{3}{3}})=\Delta(G)+2$ when $\Delta(G)\geq 2$.
In addition, Total Coloring Conjecture states that $\chi(G^{\frac{2}{2}})\leq \Delta(G)+2$. Therefore if Total Coloring Conjecture is correct, then the following conjecture is also true. \begin{conjecture}{\em{\cite{mahsa}}}\label{tcmahsa} Let $G$ be a graph with $\Delta(G)\geq 2$. Then $\chi(G^{\frac{2}{2}})\leq\chi(G^{\frac{3}{3}})$. \end{conjecture} Similar to the graphs $\mathcal{L}(G)$, $\mathcal{T}(G)$ and $\mathcal{I}(G)$, for any graph $G$, we can define a corresponding graph, denoted by $\mathcal{T}_{vi,1}(G)$, such that $\chi_{vi,1}(G)=\chi(\mathcal{T}_{vi,1}(G))$. \begin{definition}\label{Tvi1} Let $G$ be a nonempty graph. The graph $\mathcal{T}_{vi,1}(G)$, is a graph with vertex set $V(G)\times [2]$ and two vertices $(v,i)$ and $(u,j)$ are adjacent in $\mathcal{T}_{vi,1}(G)$ if and only if one of the following conditions hold: \begin{itemize} \item $i=j=1$ and $d_G(v,u)=1$, \item $i=j=2$ and $1\leq d_G(v,u)\leq 2$, \item $i\neq j$ and $0\leq d_G(v,u)\leq 1$, \end{itemize} \end{definition} \begin{example}\label{Ex:Tvi1C6} {\rm As an example, $\mathcal{T}_{vi,1}(C_6)$ shown in Figure \ref{Tvi1C6}. Unlabeled vertices belong to $V(C_6)\times\{2\}$. }\end{example} \begin{figure}[h] \begin{center} \resizebox{7.7cm}{5cm}{ \begin{tikzpicture}[scale=0.5] \tikzset{vertex/.style = {shape=circle,draw, line width=1pt, opacity=1.0, inner sep=2pt}} \tikzset{edge/.style = {-,> = latex', line width=1pt,opacity=1.0}} \node [vertex] (0) at (0, 2.5) {}; \node [vertex] (1) at (3, 2.5) {}; \node [vertex] (2) at (5, 0) {}; \node [vertex] (3) at (-2, 0) {}; \node [vertex] (4) at (3, -2.5) {}; \node [vertex] (5) at (0, -2.5) {}; \node [vertex] (6) at (4, 4) {}; \node at (5.5,4) {$(v_2,1)$}; \node [vertex] (7) at (7, 0) {}; \node at (8.5,0) {$(v_1,1)$}; \node [vertex] (8) at (4, -4) {}; \node at (5.5,-4) {$(v_6,1)$}; \node [vertex] (9) at (-1, -4) {}; \node at (-2.5,-4) {$(v_5,1)$}; \node [vertex] (10) at (-4, 0) {}; \node at (-5.5,0) {$(v_4,1)$}; \node [vertex] (11) at (-1, 4) {}; \node at (-2.5,4) {$(v_3,1)$}; \draw [edge] (1) to (2); \draw [edge] (1) to (0); \draw [edge] (0) to (3); \draw [edge] (2) to (4); \draw [edge] (4) to (5); \draw [edge] (5) to (3); \draw [edge] (6) to (11); \draw [edge] (11) to (10); \draw [edge] (10) to (9); \draw [edge] (9) to (8); \draw [edge] (8) to (7); \draw [edge] (7) to (6); \draw [edge] (1) to (6); \draw [edge] (2) to (7); \draw [edge] (4) to (8); \draw [edge] (5) to (9); \draw [edge] (3) to (10); \draw [edge] (0) to (11); \draw [edge] (0) to (6); \draw [edge] (11) to (1); \draw [edge] (1) to (7); \draw [edge] (2) to (6); \draw [edge] (2) to (8); \draw [edge] (4) to (7); \draw [edge] (4) to (9); \draw [edge] (5) to (8); \draw [edge] (5) to (10); \draw [edge] (3) to (9); \draw [edge] (10) to (0); \draw [edge] (3) to (11); \draw [edge] (1) to (4); \draw [edge] (2) to (5); \draw [edge] (4) to (3); \draw [edge] (5) to (0); \draw [edge] (3) to (1); \draw [edge] (0) to (2); \end{tikzpicture}} \caption{$\mathcal{T}_{vi,1}(C_6)$} \label{Tvi1C6} \end{center} \end{figure} \begin{theorem}\label{start2} For any nonempty graph $G$, $\chi_{vi,1}(G)=\chi(\mathcal{T}_{vi,1}(G))$. \end{theorem} An incidence coloring of a graph can be viewed as a proper arc coloring of a corresponding digraph. For a graph $G$, digraph $\overrightarrow{G}$ is a digraph obtained from $G$ by replacing each edge of $E(G)$ by two opposite arcs. Any incidence $(v,e)$ of $I(G)$, with $e=\{v,w\}$, can then be associated with the arc $(v,w)$ in $A(\overrightarrow{G})$. 
Therefore, an incidence coloring of $G$ can be viewed as a proper arc coloring of $\overrightarrow{G}$ satisfying $(i)$ any two arcs having the same tail vertex are assigned distinct colors and $(ii)$ any two consecutive arcs are assigned distinct colors.\\ Similarly, a proper coloring of the $\frac{3}{3}$-power of a graph, or equivalently a $vi$-simultaneous proper coloring, admits an equivalent description in terms of the following digraph. \begin{definition}\label{underlying} Let $G$ be a graph, let $S=S_t\cup S_i$ be a subset of $V(G^{\frac{3}{3}})$ such that $S_t\subseteq V_t(G^{\frac{3}{3}})$ and $S_i\subseteq V_i(G^{\frac{3}{3}})$, and let $H$ be the subgraph of $G^{\frac{3}{3}}$ induced by $S$. Also let $A(S_i)=\{(u,v)\ |\ (uv)_1\in S_i\}$ and $V(S_i)=\{u\in V(G)\ |\ I(u)\cap S_i\neq\varnothing\}$. The underlying digraph of $H$, denoted by $D(H)$, is the digraph with vertex set $S_t\cup V(S_i)$ and arc set $A(S_i)$. In particular, $D(G^{\frac{3}{3}})=\overrightarrow{G}$. \end{definition} Now any proper coloring of $G^{\frac{3}{3}}$ (or, equivalently, any $vi$-simultaneous coloring of $G$) can be viewed as a coloring of the vertices and arcs of $D(G^{\frac{3}{3}})$ satisfying $(i)$ any two adjacent vertices are assigned distinct colors, $(ii)$ any arc and its head and tail are assigned distinct colors, $(iii)$ any two arcs having the same tail vertex (of the form $(u,v)$ and $(u,w)$) are assigned distinct colors and $(iv)$ any two consecutive arcs (of the form $(u,v)$ and $(v,w)$) are assigned distinct colors. A short computational illustration of these conditions is given at the end of this section.\\ A star is a tree with diameter at most two. A star forest is a forest whose connected components are stars. The star arboricity $st(G)$ of a graph $G$ is the minimum number of star forests in $G$ whose union covers all edges of $G$. In \cite{planarinc} it was proved that $\chi_i(G)\leq \chi'(G)+st(G)$. Similar to this result, we can give an upper bound for $\chi_{vi}(G)$ in terms of the total chromatic number and the star arboricity. \begin{theorem}\label{start1} For any graph $G$, we have $\chi_{vi}(G)\leq \chi(G^{\frac{2}{2}})+st(G)$. \end{theorem} The aim of this paper is to find the exact value of, or an upper bound for, the $vi$-simultaneous chromatic number of some classes of graphs by coloring the vertices of $G^{\frac{3}{3}}$, and to check the validity of Conjecture \ref{cmahsa} for these classes. We show that Conjecture~\ref{cmahsa} is true for several classes of graphs, such as trees, complete graphs and bipartite graphs. We also study the relationship between the $vi$-simultaneous chromatic number and other parameters of graphs. \subsection{Structure of the paper} After this introductory section, in which we have established the background, the purpose and some basic definitions and theorems of the paper, the rest of the paper is organized as follows. In Section \ref{sec2}, we prove Theorems \ref{vi-simultaneous}, \ref{start2} and \ref{start1} together with some basic lemmas and theorems. In Section \ref{sec3}, we give an upper bound for the $vi$-simultaneous chromatic number of a $k$-degenerated graph in terms of $k$ and the maximum degree of the graph. In Section \ref{sec4}, we provide exact values for the chromatic numbers of the $\frac{3}{3}$-powers of cycles, complete graphs and complete bipartite graphs, give an upper bound for the chromatic number of the $\frac{3}{3}$-powers of bipartite graphs, and conclude that Conjecture~\ref{cmahsa} is true for these classes of graphs.
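To make the arc formulation above concrete, we end this section with a small Python sketch; it is only an illustration and not part of the results of the paper, and the names \verb|is_vi_simultaneous|, \verb|C4| and \verb|col| are ad hoc. The function checks conditions $(i)$--$(iv)$ for a given coloring of the vertices and arcs of $\overrightarrow{G}$ and, as a sanity check, verifies the $vi$-simultaneous $(4,1)$-coloring of $C_4$ from Figure~\ref{C4}.
\begin{verbatim}
# Sketch: checking a vi-simultaneous coloring via the arc formulation.
# A graph is a dict mapping each vertex to the set of its neighbors.
# The incidence (u,{u,v}) is encoded as the ordered pair (u, v), i.e.
# as an arc of the digraph obtained from G by replacing each edge
# with two opposite arcs.

def is_vi_simultaneous(graph, col):
    arcs = [(u, v) for u in graph for v in graph[u]]
    # (i) adjacent vertices receive distinct colors
    if any(col[u] == col[v] for (u, v) in arcs):
        return False
    # (ii) an arc, its tail and its head receive distinct colors
    if any(col[(u, v)] in (col[u], col[v]) for (u, v) in arcs):
        return False
    # (iii) arcs with the same tail receive distinct colors
    for u in graph:
        out = [col[(u, v)] for v in graph[u]]
        if len(out) != len(set(out)):
            return False
    # (iv) consecutive arcs (u,v),(v,w) receive distinct colors
    if any(col[(u, v)] == col[(v, w)]
           for (u, v) in arcs for w in graph[v]):
        return False
    return True

# the (4,1)-coloring of C_4 depicted in the introduction
C4 = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'a'}}
col = {'a': 1, 'b': 4, 'c': 3, 'd': 2,
       ('a', 'b'): 2, ('b', 'a'): 3, ('b', 'c'): 1, ('c', 'b'): 2,
       ('c', 'd'): 4, ('d', 'c'): 1, ('d', 'a'): 3, ('a', 'd'): 4}
assert is_vi_simultaneous(C4, col)
# each vertex sees exactly one color on its incoming arcs, so s = 1
assert max(len({col[(u, v)] for u in C4[v]}) for v in C4) == 1
\end{verbatim}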
\section{Basic theorems and lemmas}\label{sec2} First, we prove Theorems \ref{vi-simultaneous}, \ref{start2} and \ref{start1}.\\ \textbf{Proof of Theorem \ref{vi-simultaneous}} First, suppose that $\chi(G^{\frac{3}{3}})=k$ and $c:V(G^{\frac{3}{3}})\longrightarrow[k]$ is a proper coloring of $G^{\frac{3}{3}}$. We show that the following $k$-coloring $c'$ of $G$ is a proper $vi$-simultaneous coloring. \[c'(x)=\left\{\begin{array}{cc} c(x) & x\in V(G)=V_t(G^{\frac{3}{3}}),\\ c((uv)_1) & x=(u,v)\in I(G). \end{array}\right.\] Since $G$ is isomorphic to the subgraph of $G^{\frac{3}{3}}$ induced by the terminal vertices, $c$ induces a proper coloring of $G$. So $c'$ assigns different colors to the adjacent vertices of $G$. Now suppose that $(u,v)$ and $(r,s)$ are adjacent vertices in $\mathcal{I}(G)$. There are three cases:\\ (i) $(r,s)=(v,u)$. Since $(vu)_1$ and $(uv)_1$ are adjacent in $G^{\frac{3}{3}}$, $c'((u,v))=c((uv)_1)\neq c((vu)_1)=c'((r,s))$.\\ (ii) $r=u$. Since $d_{G^{\frac{1}{3}}}((uv)_1, (us)_1)=2$, $(uv)_1$ and $(us)_1$ are adjacent in $G^{\frac{3}{3}}$. So in this case, $c'((u,v))=c((uv)_1)\neq c((us)_1)=c'((u,s))$.\\ (iii) $r=v$. Since $d_{G^{\frac{1}{3}}}((uv)_1, (vs)_1)=3$, $(uv)_1$ and $(vs)_1$ are adjacent in $G^{\frac{3}{3}}$. So in this case, $c'((u,v))=c((uv)_1)\neq c((vs)_1)=c'((v,s))$.\\ Finally, suppose that $u\in V(G)$ and $(r,s)\in I(G)$ are incident. So $u=r$ or $u=s$. In the first case we have $d_{G^{\frac{1}{3}}}(u, (rs)_1)=1$ and in the second case we have $d_{G^{\frac{1}{3}}}(u, (rs)_1)=2$; in both cases $u$ and $(rs)_1$ are adjacent in $G^{\frac{3}{3}}$. So $c'(u)=c(u)\neq c((rs)_1)=c'((r,s))$.\\ Similarly, one can show that each $vi$-simultaneous proper $k$-coloring of $G$ gives a proper $k$-coloring of $G^{\frac{3}{3}}$. Therefore $\chi_{vi}(G)=\chi(G^{\frac{3}{3}})$. \hfill $\blacksquare$\\\\ \textbf{Proof of Theorem \ref{start2}} First, suppose that $\chi_{vi,1}(G)=k$ and $c:V(G)\cup I(G)\longrightarrow [k]$ is a $vi$-simultaneous $(k,1)$-coloring of $G$. We show that the following $k$-coloring of $\mathcal{T}_{vi,1}(G)$ is proper. \[c'(x)=\left\{\begin{array}{cc} c(u) & x=(u,1),\\ s & x=(u,2), s\in c(I_2(u)). \end{array}\right.\] Since $c$ is a $vi$-simultaneous $(k,1)$-coloring, $|c(I_2(u))|=1$ for any vertex $u\in V(G)$ and so $c'$ is well-defined. Now suppose that $(v,i)$ and $(u,j)$ are adjacent in $\mathcal{T}_{vi,1}(G)$. \begin{itemize} \item If $i=j=1$, then $c'((v,i))=c(v)\neq c(u)=c'((u,j))$. \item If $i=j=2$ and $d_G(v,u)=1$, then $c'((v,i))=c((u,v))\neq c((v,u))=c'((u,j))$. \item If $i=j=2$ and $d_G(v,u)=2$, then $c'((v,i))=c((z,v))\neq c((z,u))=c'((u,j))$ where $z\in N_G(v)\cap N_G(u)$. \item If $i=1$, $j=2$ and $v=u$, then $c'((v,i))=c(v)\neq c((z,v))=c'((u,j))$ where $z\in N_G(v)$. \item If $i=1$, $j=2$ and $d_G(v,u)=1$, then $c'((v,i))=c(v)\neq c((v,u))=c'((u,j))$. \end{itemize} So $c'$ assigns different colors to the adjacent vertices of $\mathcal{T}_{vi,1}(G)$.\\ Now suppose that $\chi(\mathcal{T}_{vi,1}(G))=k$ and $c':V(\mathcal{T}_{vi,1}(G))\longrightarrow [k]$ is a proper $k$-coloring of $\mathcal{T}_{vi,1}(G)$. One can easily show that the following $k$-coloring is a $vi$-simultaneous $(k,1)$-coloring of $G$. \[c(x)=\left\{\begin{array}{cc} c'((x,1)) & x\in V(G),\\ c'((v,2)) & x=(u,v)\in I(G). \end{array}\right.\] Thus $\chi_{vi,1}(G)=\chi(\mathcal{T}_{vi,1}(G))$.
\hfill $\blacksquare$\\\\ \noindent\textbf{Proof of Theorem \ref{start1}} Let $G$ be an undirected graph with star arboricity $st(G)$ and let $s \hspace{1mm}:\hspace{1mm} E(G) \longrightarrow [st(G)]$ be a mapping such that $s^{-1}(i)$ is a forest of stars for any $i$, $1\leq i \leq st(G)$. Also, suppose that $c$ is a total coloring of $G^{\frac{2}{2}}$ with colors $\{st(G)+1,\ldots,st(G)+\chi''(G)\}$. Now, to color the $t$-vertices and $i$-vertices of the graph $G$, define the mapping $c'$ by $c'((u,v))=s(uv)$ if $v$ is the center of the star containing the edge $uv$ in the star forest $s^{-1}(s(uv))$. If some star is reduced to one edge, we arbitrarily choose one of its end vertices as the center. Note that, for any edge $uv$, one of the $t$-vertices $u$ or $v$ is the center of a star of some star forest. Hence it remains to color the $t$-vertices and the uncolored $i$-vertices of $G$.\\ Consider the subgraph $G'$ of $G^{\frac{3}{3}}$ induced by the uncolored $t$-vertices and $i$-vertices. It can easily be seen that $G'$ is isomorphic to $G^{\frac{2}{2}}$. Now, using this isomorphism, assign the colors $c(u)$ and $c((u,v))$ to a $t$-vertex $u$ and an $i$-vertex $(u,v)$ in $G'$, respectively. Moreover, two $i$-vertices that receive the same color under $c'$ correspond to arcs directed towards the centers of stars of the same star forest; such arcs either have the same head or lie in vertex-disjoint stars, and in both cases they are not adjacent in $G^{\frac{3}{3}}$. Since the colors used by $c'$ are disjoint from those used by $c$, the resulting coloring of $G^{\frac{3}{3}}$ is proper. Therefore, we have $\chi(G^{\frac{3}{3}})\leq\chi(G^{\frac{2}{2}})+st(G)$. \hfill $\blacksquare$\\\\ For any star forest $F$, we have $st(F)=1$, $\chi(F^{\frac{2}{2}})=\Delta(F)+1$ and $\chi(F^{\frac{3}{3}})=\Delta(F)+2$. Therefore, the upper bound of Theorem \ref{start1} is tight.\\ The following lemmas will be used in the proofs of some theorems in the next sections. The set $\{c(a)\ |\ a\in A\}$ is denoted by $c(A)$, where $c:D\rightarrow R$ is a function and $A\subseteq D$. \begin{lemma}\label{firstlem} Let $G$ be a graph with maximum degree $\Delta$ and let $c$ be a proper $(\Delta+2)$-coloring of $G^{\frac{3}{3}}$ with colors from $[\Delta+2]$. Then $|c(I_2(v))|\leq\Delta-d_G(v)+1$ for any $t$-vertex $v$. In particular, $|c(I_2(v))|=1$ for any $\Delta$-vertex $v$ of $G$. \end{lemma} \begin{proof}{ Let $v$ be a $t$-vertex of $G$. Since all vertices in $I_1[v]$ are pairwise adjacent in $G^{\frac{3}{3}}$, there are exactly $d_G(v)+1$ colors in $c(I_1[v])$. Now, consider the vertices in $I_2(v)$. Since any vertex in $I_2(v)$ is adjacent to each vertex of $I_1[v]$, the only available colors for these $i$-vertices are the remaining colors in $[\Delta+2]\setminus c(I_1[v])$. Therefore, $|c(I_2(v))|\leq\Delta-d_G(v)+1$. }\end{proof} \begin{lemma}\label{secondlem} Let $G$ be a graph, $e$ be a cut edge of $G$ and $C_1$ and $C_2$ be two components of $G-e$. Then $\chi_{vi,l}(G)=\max\{\chi_{vi,l}(H_1),\chi_{vi,l}(H_2)\}$ where $H_i=C_i+e$ for $i\in\{1,2\}$ and $1\leq l\leq\Delta(G)$. \end{lemma} \begin{proof}{ Obviously $\chi_{vi,l}(H_1)\leq \chi_{vi,l}(G)$ and $\chi_{vi,l}(H_2)\leq \chi_{vi,l}(G)$. So $\max\{\chi_{vi,l}(H_1),\chi_{vi,l}(H_2)\}\leq\chi_{vi,l}(G)$. Now suppose that $\chi_{vi,l}(H_1)=k_1\geq k_2=\chi_{vi,l}(H_2)$. We show that $\chi_{vi,l}(G)\leq k_1$. Let $c_i:V(H_i)\rightarrow [k_i]$ be a $vi$-simultaneous $(k_i,l)$-coloring ($1\leq i\leq2$) and let $e=\{u,v\}$. Since $V(H_1)\cap V(H_2)=\{u, (u,v), (v,u), v\}$ and these four vertices induce a clique, by a suitable permutation of the colors of $c_1$ we obtain a new coloring $c'_1$ such that $c'_1(x)=c_2(x)$ for any $x\in\{u, (u,v), (v,u), v\}$. Now we can easily prove that the following coloring is a $vi$-simultaneous $(k_1,l)$-coloring: \[c(x)=\left\{\begin{array}{cc} c'_1(x) & x\in V(H_1),\\ c_2(x) & x\in V(H_2).
\end{array}\right.\] }\end{proof} \begin{lemma}\label{thirdlem} Let $G_1$ and $G_2$ be two graphs, $V(G_1)\cap V(G_2)=\{v\}$ and $G=G_1\cup G_2$. Then \[\chi_{vi,1}(G)=\max\{\chi_{vi,1}(G_1),\chi_{vi,1}(G_2), d_G(v)+2\}.\] \end{lemma} \begin{proof}{ Suppose that $k=\max\{\chi_{vi,1}(G_1),\chi_{vi,1}(G_2), d_G(v)+2\}$. Obviously $\chi_{vi,1}(G_1)\leq \chi_{vi,1}(G)$, $\chi_{vi,1}(G_2)\leq \chi_{vi,1}(G)$ and $d_G(v)+2\leq\Delta(G)+2\leq\chi_{vi}(G)\leq\chi_{vi,1}(G)$. So $k\leq\chi_{vi,1}(G)$. Now suppose that $c_1$ and $c_2$ are $vi$-simultaneous $(k,1)$-colorings of $G_1$ and $G_2$, respectively. Note that $I_1^{G_1}[v]$, $I_1^{G_2}[v]$ and $I_1^{G}[v]$ are cliques and $I_2^{G_1}(v)$, $I_2^{G_2}(v)$ and $I_2^{G}(v)$ are independent sets in $G_1$, $G_2$ and $G$, respectively. Also $c_i(I_1^{G_i}[v])\cap c_i(I_2^{G_i}(v))=\varnothing$ and $|c_i(I_2^{G_i}(v))|=1$ for each $i\in [2]$. So, by suitable permutations of the colors of $c_2$ in three steps, we obtain a new coloring $c_3$: \begin{itemize} \item [(1)] If $c_1(v)=a\neq b=c_2(v)$, then we swap the colors $a$ and $b$ in $c_2$; otherwise we do nothing. We denote the new coloring by $c'_2$. \item [(2)] Let $c_1(x)=c$ and $c'_2(y)=d$ for each $x\in I_2^{G_1}(v)$ and $y\in I_2^{G_2}(v)$. If $c\neq d$, then we swap the colors $c$ and $d$ in $c'_2$; otherwise we do nothing. We denote the new coloring by $c''_2$. Obviously, $c\neq a$ and $d\neq a$, and so $c''_2(v)=a$. \item [(3)] If $c''_2(I_1^{G_2}(v))\cap c_1(I_1^{G_1}(v))=\varnothing$, we do nothing. Otherwise, suppose that $c''_2(I_1^{G_2}(v))\cap c_1(I_1^{G_1}(v))=\{a_1,\ldots,a_s\}$. Since $k\geq d_G(v)+2$ and $|c''_2(I_{G_2}[v])\cup c_1(I_{G_1}[v])|=d_{G}(v)+2-s$, there are $s$ colors $b_1,\ldots,b_s$ which have not appeared in $c''_2(I_{G_2}[v])\cup c_1(I_{G_1}[v])$. Now we swap the colors $a_i$ and $b_i$ in $c''_2$ for each $i\in\{1,\ldots,s\}$. We denote the new coloring by $c_3$. \end{itemize} Now we can easily show that the following function is a $vi$-simultaneous proper $(k,1)$-coloring of $G$: \[c(x)=\left\{\begin{array}{cc} c_1(x) & x\in V(G_1)\cup I(G_1),\\ c_3(x) & x\in V(G_2)\cup I(G_2). \end{array}\right.\] }\end{proof} \begin{theorem}\label{blocks} Let $k\in\mathbb{N}$ and $G$ be a graph with blocks $B_1,\ldots,B_k$. Then \[\chi_{vi,1}(G)=\max\{\chi_{vi,1}(B_1),\ldots,\chi_{vi,1}(B_k), \Delta(G)+2\}.\] In particular, $\chi_{vi,1}(G)=\max\{\chi_{vi,1}(B_1),\ldots,\chi_{vi,1}(B_k)\}$ when $G$ has at least one $\Delta(G)$-vertex which is not a cut vertex. \end{theorem} \begin{proof}{ The proof follows by induction on $k$, applying Lemma \ref{thirdlem}. }\end{proof} We can determine an upper bound on the $vi$-simultaneous chromatic number $\chi_{vi,s}(G)$ in terms of $\Delta(G)$ and the list chromatic number of $G$.\\ \begin{definition}\label{listcoloring}\cite{bondy} Let $G$ be a graph and $L$ be a function which assigns to each vertex $v$ of $G$ a set $L(v)\subset\mathbb{N}$, called the list of $v$. A coloring $c:V(G)\rightarrow\mathbb{N}$ such that $c(v)\in L(v)$ for all $v\in V(G)$ is called a list coloring of $G$ with respect to $L$, or an $L$-coloring, and we say that $G$ is $L$-colorable. A graph $G$ is $k$-list-colorable if it has a list coloring whenever all the lists have size $k$. The smallest value of $k$ for which $G$ is $k$-list-colorable is called the list chromatic number of $G$, denoted $\chi_{l}(G)$. \end{definition} \begin{theorem}\label{upperbound-list} Let $G$ be a nonempty graph and $s\in\mathbb{N}$.
Then\\ (i) $\chi_{vi,s}(G)\leq\max\{\chi_{i,s}(G),\chi_{l}(G)+\Delta(G)+s\}$,\\ (ii) If $\chi_{i,s}(G)\geq\chi_{l}(G)+\Delta(G)+s$, then $\chi_{vi,s}(G)=\chi_{i,s}(G)$. \end{theorem} \begin{proof}{ (i) Suppose that $\max\{\chi_{i,s}(G),\chi_{l}(G)+\Delta(G)+s\}=k$. So there exists an incidence $(k,s)$-coloring $c_i: I(G)\rightarrow [k]$ of $G$ and hence $|c_i(I_2(u))|\leq s$ for any vertex $u\in V(G)$. Therefore, $|c_i(I_G(u))|\leq \Delta(G)+s$. Now we extend $c_i$ to a $vi$-simultaneous $(k,s)$-coloring $c$ of $G$. The set of available colors for the vertex $u$ is $L(u)=[k]\setminus c_i(I_G(u))$, which has at least $k-\Delta(G)-s\geq \chi_l(G)$ colors. Since $|L(u)|\geq\chi_{l}(G)$ for any vertex $u\in V(G)$, there exists a proper vertex coloring $c_v$ of $G$ such that $c_v(u)\in L(u)$. Now one can easily show that the following coloring is a $vi$-simultaneous $(k,s)$-coloring of $G$: \[c(x)=\left\{\begin{array}{cc} c_i(x) & x\in I(G),\\ c_v(x) & x\in V(G). \end{array}\right.\] (ii) If $\chi_{i,s}(G)\geq\chi_{l}(G)+\Delta(G)+s$, then $\chi_{vi,s}(G)\leq\chi_{i,s}(G)$. In addition, any $vi$-simultaneous $(k,s)$-coloring of $G$ induces an incidence $(k,s)$-coloring of $G$ and so $\chi_{i,s}(G)\leq\chi_{vi,s}(G)$. Therefore, $\chi_{vi,s}(G)=\chi_{i,s}(G)$. }\end{proof} \begin{corollary}\label{upperbound-list-vi1} $\chi_{vi,1}(G)\leq\max\{\chi(G^2),\chi_{l}(G)+\Delta(G)+1\}$ for any nonempty graph $G$. In particular, if $\chi(G^2)\geq\chi_{l}(G)+\Delta(G)+1$, then $\chi_{vi,1}(G)=\chi(G^2)$. \end{corollary} \begin{corollary}\label{upperbound-diam-vi1} Let $G$ be a graph of order $n$ with $diam(G)=2$. Then $\chi_{vi,1}(G)\leq\max\{n, \chi_l(G)+\Delta(G)+1\}$. In particular, if $\Delta(G)\leq\frac{n}{2}-1$, then $\chi_{vi,1}(G)=n$. \end{corollary} \begin{remark}{\rm In \cite{Cranston}, it was proved that the square of any cubic graph $G$ other than the Petersen graph is 8-list-colorable, and so $\chi(G^2)\leq8$ for any such graph. In addition, the diameter of the Petersen graph $P$ is two. Therefore, by Corollaries \ref{upperbound-list-vi1} and \ref{upperbound-diam-vi1}, $\chi_{vi,1}(P)=10$ and $\chi_{vi,1}(G)\leq 8$ for any graph $G$ with $\Delta(G)=3$ other than the Petersen graph. }\end{remark} \section{$k$-degenerated graphs}\label{sec3} A graph $G$ is said to be $k$-degenerated if any subgraph of $G$ contains a vertex of degree at most $k$. For example, a graph $G$ is 1-degenerated if and only if $G$ is a forest. We can give an upper bound for the $vi$-simultaneous chromatic number of a $k$-degenerated graph in terms of $k$ and its maximum degree.\\ Let $\mathcal{F}=\{A_1,\ldots,A_n\}$ be a finite family of $n$ subsets of a finite set $X$. A system of distinct representatives (SDR) for the family $\mathcal{F}$ is a set $\{a_1,\ldots,a_n\}$ of distinct elements of $X$ such that $a_i\in A_i$ for all $i\in [n]$. \begin{theorem}\label{kdegenerated} Let $k\in\mathbb{N}$ and $G$ be a $k$-degenerated graph with $\Delta(G)\geq2$. Then $\chi_{vi,k}(G)\leq \Delta(G)+2k$. \end{theorem} \begin{proof}{ If $k=\Delta(G)$, then $\chi_{vi,k}(G)=\chi_{vi}(G)\leq 3\Delta(G)=\Delta(G)+2k$. So we suppose that $1\leq k\leq\Delta(G)-1$. Assume to the contrary that the theorem is false, and let $G$ be a minimal counterexample. Let $u$ be a vertex of $G$ with degree $r\leq k$, let $N_G(u)=\{u_1,\ldots,u_r\}$ and let $G'=G-u$. By the minimality of $G$, $\chi_{vi,k}(G')\leq \Delta(G)+2k$ and there exists a $vi$-simultaneous $(\Delta(G)+2k,k)$-coloring $c'$ of $G'$.
We extend $c'$ to a $vi$-simultaneous $(\Delta(G)+2k,k)$-coloring $c$ of $G$, which is a contradiction.\\ First, we color the vertices of $I_1(u)$. For each $(u,u_i)\in I_1(u)$ there are at least $k$ available colors if $|c'(I_2(u_i))|=k$ and there are at least $2k$ available colors if $|c'(I_2(u_i))|<k$. Let $A_i$ be the set of available colors for $(u,u_i)\in I_1(u)$. Since we must select distinct colors for the vertices of $I_1(u)$, we prove that the family $\mathcal{F}=\{A_1,\ldots,A_r\}$ has a system of distinct representatives. Because $|\cup_{j\in J}A_j|\geq k\geq |J|$ for any subset $J\subseteq [r]$, using Hall's Theorem (see Theorem 16.4 in \cite{bondy}), we conclude that $\mathcal{F}$ has an SDR $\{a_1,\ldots,a_r\}$ such that $|\{a_j\}\cup c'(I_2(u_j))|\leq k$ for any $j\in [r]$. We color the vertex $(u,u_j)$ by $a_j$ for any $j\in [r]$. Now we color the vertices of $I_2(u)$. Since $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|<\Delta(G)+2k$ for each $j\in [r]$, there exists at least one available color for the vertex $(u_j,u)$. Finally, we select the color of the vertex $u$. Since $|I_G(u)\cup N_G(u)|=3r<\Delta(G)+2k$, we can color the vertex $u$ and complete the coloring $c$. }\end{proof} \begin{corollary}\label{tree} Let $F$ be a forest. Then \[\chi_{vi,1}(F)=\left\{\begin{array}{lll} 1 & \Delta(F)=0,\\ 4 & \Delta(F)=1,\\ \Delta(F)+2 & \Delta(F)\geq2. \end{array}\right.\] \end{corollary} \begin{proof}{ The proof is trivial for $\Delta(F)\leq1$. So we suppose that $\Delta(F)\geq2$. Each forest is a 1-degenerated graph, so by Theorem \ref{kdegenerated} we have $\chi_{vi,1}(F)\leq\Delta(F)+2$. In addition, $\chi_{vi,1}(F)\geq\chi_{vi}(F)=\chi(F^{\frac{3}{3}})\geq\omega(F^{\frac{3}{3}})=\Delta(F)+2$. Hence $\chi_{vi,1}(F)=\Delta(F)+2$. }\end{proof} \begin{corollary} For any $n\in\mathbb{N}\setminus\{1\}$, $\chi_{vi,1}(P_n)=4$. \end{corollary} \begin{remark}{\rm The following simple algorithm produces a proper $(\Delta+2)$-coloring of the $\frac{3}{3}$-power of any tree $T$ with $\Delta(T)=\Delta$:\\ Suppose that $v_1,\ldots,v_n$ are the $t$-vertices of $T$ and that the $t$-vertex $v_1$, of degree $\Delta$, is the root of $T$. To achieve a $(\Delta+2)$-coloring of $T^{\frac{3}{3}}$, assign color $1$ to $v_1$ and color all $i$-vertices in $I_1(v_1)$ with distinct colors in $\{2,\ldots,\Delta+1\}$. Note that, since these $i$-vertices are pairwise adjacent, they must have different colors. Also, color all $i$-vertices in $I_2(v_1)$ with color $\Delta+2$.\\ Now, to color the other $t$-vertices and $i$-vertices of $T$, for each $t$-vertex $v_i$ ($2\leq i\leq n$) whose parent $p_{v_i}$ has already been colored, color all the uncolored $i$-vertices in $I_2(v_i)$ with the same color as $(p_{v_i}v_i)_1$. Then color $v_i$ with a color from $[\Delta+2]\setminus\{c(p_{v_i}),c((p_{v_i}v_i)_1), c((p_{v_i}v_i)_2)\}$. Finally, color all the uncolored $i$-vertices in $I_1(v_i)$ with distinct colors from $[\Delta+2]\setminus\{c((p_{v_i}v_i)_1), c((p_{v_i}v_i)_2), c(v_i)\}$. An implementation sketch of this procedure is given at the end of this section.} \end{remark} As each outerplanar graph is a $2$-degenerated graph and each planar graph is a $5$-degenerated graph, the following corollary follows from Theorem \ref{kdegenerated}. \begin{corollary} Let $G$ be a graph with maximum degree $\Delta$. \begin{itemize} \item[(i)] If $G$ is an outerplanar graph, then $\chi_{vi,2}(G)\leq \Delta+4$. \item[(ii)] If $G$ is a planar graph, then $\chi_{vi,5}(G)\leq \Delta+10$.
\end{itemize} \end{corollary} We decrease the upper bound of Theorem \ref{kdegenerated} to $\Delta+5$ for 3-degenerated graphs with maximum degree at least five. \begin{theorem}\label{3degenerated} Every $3$-degenerated graph $G$ with $\Delta(G)\geq5$ admits a $vi$-simultaneous $(\Delta(G)+5,3)$-coloring. Therefore, $\chi_{vi,3}(G)\leq\Delta(G)+5$. \end{theorem} \begin{proof}{ Assume to the contrary that the theorem is false, and let $G$ be a minimal counterexample. Let $u$ be a vertex of $G$ with degree $r\leq 3$, let $N_G(u)=\{u_1,\ldots,u_r\}$ and let $G'=G-u$. If $\Delta(G')=4$, then by Theorem \ref{kdegenerated} we have $\chi_{vi,3}(G')\leq 4+6=10=\Delta(G)+5$ and if $\Delta(G')\geq 5$, by the minimality of $G$, $\chi_{vi,3}(G')\leq \Delta(G)+5$. So there exists a $vi$-simultaneous $(\Delta(G)+5,3)$-coloring $c'$ of $G'$. We extend $c'$ to a $vi$-simultaneous $(\Delta(G)+5,3)$-coloring $c$ of $G$, which is a contradiction.\\ First, we color the vertices of $I_1(u)$. For each $(u,u_i)\in I_1(u)$ there are at least $3$ available colors if $|c'(I_2(u_i))|=3$ and there are at least $5$ available colors if $|c'(I_2(u_i))|\leq 2$. Let $A_i$ be the set of available colors for $(u,u_i)\in I_1(u)$ and $C_i=c'(I_2(u_i))$. Since we must select distinct colors for the vertices of $I_1(u)$, we prove that the family $\mathcal{F}=\{A_1,\ldots,A_r\}$ has an SDR. According to the degree of $u$ and the sizes of $C_1$, $C_2$ and $C_3$, we consider five cases: \begin{itemize} \item [(1)] $r\leq2$. Since $|A_i|\geq3$, one can easily show that $\mathcal{F}$ has an SDR $\{a_j|\ j\in [r]\}$ such that $|\{a_j\}\cup c'(I_2(u_j))|\leq 3$ for any $j\in [r]$. We color the vertex $(u,u_j)$ by $a_j$ for any $j\in [r]$. Now we color the vertices of $I_2(u)$. Since $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|<\Delta(G)+2+r\leq \Delta(G)+4$ for each $j\in [r]$, there exists at least one available color for the vertex $(u_j,u)$. Finally, we select the color of the vertex $u$. Since $|I_G(u)\cup N_G(u)|=3r\leq 6<\Delta(G)+5$, we can color the vertex $u$ and complete the coloring $c$. \item [(2)] $r=3$ and $|C_j|\leq2$ for any $j\in [3]$. Because $|\cup_{j\in J}A_j|\geq 5\geq |J|$ for any subset $J\subseteq [r]$, using Hall's Theorem (see Theorem 16.4 in \cite{bondy}), we conclude that $\mathcal{F}$ has an SDR $\{a_1,\ldots,a_r\}$ such that $|\{a_j\}\cup c'(I_2(u_j))|\leq 3$ for any $j\in [r]$. We color the vertex $(u,u_j)$ by $a_j$ for any $j\in [r]$. Now we color the vertices of $I_2(u)$. Since $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|<\Delta(G)+2+r-1\leq \Delta(G)+4$ for each $j\in [r]$, there exists at least one available color for the vertex $(u_j,u)$. Finally, we select the color of the vertex $u$. Since $|I_G(u)\cup N_G(u)|=9<\Delta(G)+5$, we can color the vertex $u$ and complete the coloring $c$. \item [(3)] $r=3$ and $|C_j|\leq2$ for exactly two of the sets $C_j$. Without loss of generality, let $|C_1|=|C_2|=2$ and $|C_3|=3$. If $C_j\cap c'(I_{G'}[u_3])$ is nonempty for some $j\in\{1,2\}$ and $a\in C_j\cap c'(I_{G'}[u_3])$, then we color the vertex $(u,u_j)$ with $a$, the vertex $(u,u_i)$ ($j\neq i\in [2]$) with color $b$ from $C_i\setminus\{a\}$ ($b\in A_i\setminus\{a\}$ if $C_i=\{a\}$) and the vertex $(u,u_3)$ with color $d$ from $C_3\setminus\{a,b\}$.\\ Because $|c'(I_{G'}[u_3])|=\Delta(G)+3$, if $C_1\cap c'(I_{G'}[u_3])=\varnothing=C_2\cap c'(I_{G'}[u_3])$ then $C_1=C_2$. Suppose that $C_1=C_2=\{a,b\}$ and $d\in A_1\setminus\{a,b\}$ (note that $|A_1|=5$). So $d\in c'(I_{G'}[u_3])$.
We color the vertex $(u,u_1)$ with $d$, the vertex $(u,u_2)$ with color $a$ and the vertex $(u,u_3)$ with color $f$ from $C_3\setminus\{a,d\}$. Now we color the vertices of $I_2(u)$. Since $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|\leq\Delta(G)+4$ for each $j\in [r]$, there exists at least one available color for the vertex $(u_j,u)$. Finally, we select the color of the vertex $u$. Since $|I_G(u)\cup N_G(u)|=9<\Delta(G)+5$, we can color the vertex $u$ and complete the coloring $c$. \item [(4)] $r=3$ and $|C_j|\leq2$ for exactly one of the sets $C_j$. Without loss of generality, let $|C_1|=2$ and $|C_2|=|C_3|=3$. If $C_1\cap c'(I_{G'}[u_j])$ is nonempty for some $j\in\{2,3\}$ and $a\in C_1\cap c'(I_{G'}[u_j])$, then we color the vertex $(u,u_1)$ with $a$. Suppose that $i\in\{2,3\}$ and $i\neq j$. Since $|C_i|+|c'(I_{G'}[u_j])|=\Delta(G)+6$, $C_i\cap c'(I_{G'}[u_j])\neq\varnothing$. Let $b\in C_i\cap c'(I_{G'}[u_j])$ and color the vertex $(u,u_i)$ with color $b$ and the vertex $(u,u_j)$ with color $d$ from $C_j\setminus\{a,b\}$.\\ Because $|c'(I_{G'}[u_2])|=|c'(I_{G'}[u_3])|=\Delta(G)+3$, if $C_1\cap c'(I_{G'}[u_2])=\varnothing=C_1\cap c'(I_{G'}[u_3])$ then $c'(I_{G'}[u_2])=c'(I_{G'}[u_3])$. Since $|C_i|+|c'(I_{G'}[u_j])|=\Delta(G)+6$, $C_i\cap c'(I_{G'}[u_j])\neq\varnothing$ when $\{i,j\}=\{2,3\}$. Therefore, there exist $b\in C_2\cap c'(I_{G'}[u_3])$ and $d\in C_3\cap c'(I_{G'}[u_2])$ such that $b\neq d$. Now we color the vertex $(u,u_1)$ with $a\in C_1$, the vertex $(u,u_2)$ with color $b$ and the vertex $(u,u_3)$ with color $d$. Now we color the vertices of $I_2(u)$. Since $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|\leq\Delta(G)+4$ for each $j\in [r]$, there exists at least one available color for the vertex $(u_j,u)$. Finally, we select the color of the vertex $u$. Since $|I_G(u)\cup N_G(u)|=9<\Delta(G)+5$, we can color the vertex $u$ and complete the coloring $c$. \item [(5)] $r=3$ and $|C_j|=3$ for any $j\in [3]$. For any $i,j\in [3]$, since $|C_i|+|c'(I_{G'}[u_j])|=\Delta(G)+6$, $C_i\cap c'(I_{G'}[u_j])\neq\varnothing$. So there exist $a_1\in C_1\cap c'(I_{G'}[u_2])$, $a_2\in C_2\cap c'(I_{G'}[u_3])$ and $a_3\in C_3\cap c'(I_{G'}[u_1])$. If $|\{a_1,a_2,a_3\}|=3$, then we color the vertex $(u,u_j)$ with color $a_j$ ($j\in [3]$) and, as in the previous cases, we can complete the coloring $c$. Now suppose that $|\{a_1,a_2,a_3\}|=2$. Without loss of generality, suppose that $a_1=a_2\neq a_3$ and $b\in C_2\setminus\{a_1\}$. In this case, we color $(u,u_1)$ with $a_1$, the vertex $(u,u_2)$ with color $b$ and the vertex $(u,u_3)$ with color $a_3$. Finally, suppose that $a_1=a_2=a_3$. If $(C_i\setminus\{a_1\})\cap c'(I_{G'}[u_j])\neq\varnothing$ for some $i,j\in [3]$ and $b\in (C_i\setminus\{a_1\})\cap c'(I_{G'}[u_j])$, we color $(u,u_i)$ with $b$, the vertex $(u,u_j)$ with color $a_1$ and the vertex $(u,u_s)$ with color $d\in C_s\setminus\{a_1,b\}$ where $i\neq s\neq j$. Otherwise, we have $(C_1\setminus\{a_1\})\cap c'(I_{G'}[u_3])=\varnothing=(C_2\setminus\{a_1\})\cap c'(I_{G'}[u_3])$, which implies $C_1=C_2$. Suppose that $C_1=C_2=\{a_1,b,d\}$. Now we color $(u,u_1)$ with $b$, the vertex $(u,u_2)$ with color $a_1$ and the vertex $(u,u_3)$ with color $f\in C_3\setminus\{a_1,b\}$.\\ In all three subcases, we have $|c'(I_{G'}[u_j])\cup c(I_1^{G}(u))|\leq\Delta(G)+4$ for each $j\in [3]$ and, as in the previous cases, we can complete the coloring $c$. \end{itemize} }\end{proof} \begin{problem}{\rm Let $G$ be a $3$-degenerated graph with $\Delta(G)=4$. We know that $\chi_{vi}(G)\leq9$. What is the sharp upper bound for $\chi_{vi,1}(G)$, $\chi_{vi,2}(G)$ and $\chi_{vi,3}(G)$? By Theorem \ref{kdegenerated}, $\chi_{vi,3}(G)\leq10$. Is this upper bound sharp, or is the correct upper bound $9$, as in Theorem \ref{3degenerated}? }\end{problem}
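To illustrate the procedure described in the remark following Corollary~\ref{tree}, we end this section with a short Python sketch (an illustration only; the names \verb|color_tree| and \verb|T| are ad hoc). It produces the $(\Delta+2)$-coloring of $T^{\frac{3}{3}}$ for a small tree and then checks, via the arc formulation described in the introduction, that the resulting coloring is proper.
\begin{verbatim}
# Sketch of the tree-coloring procedure from the remark above;
# the incidence (u,{u,v}) is again written as the arc (u, v).

def color_tree(tree, root):
    """tree: dict vertex -> set of neighbors; root: a vertex of
    maximum degree.  Returns a (Delta+2)-coloring of T^(3/3)."""
    D = max(len(tree[v]) for v in tree)
    palette = set(range(1, D + 3))                    # colors 1,...,Delta+2
    c = {root: 1}
    for col, w in zip(range(2, D + 2), sorted(tree[root])):
        c[(root, w)] = col                            # I_1(root): distinct colors
    for w in tree[root]:
        c[(w, root)] = D + 2                          # I_2(root): one color
    stack = [(w, root) for w in tree[root]]           # (vertex, its parent)
    while stack:
        v, p = stack.pop()
        children = [w for w in tree[v] if w != p]
        for w in children:                            # uncolored part of I_2(v)
            c[(w, v)] = c[(p, v)]
        c[v] = min(palette - {c[p], c[(p, v)], c[(v, p)]})
        free = sorted(palette - {c[(p, v)], c[(v, p)], c[v]})
        for col, w in zip(free, children):            # uncolored part of I_1(v)
            c[(v, w)] = col
        stack.extend((w, v) for w in children)
    return c

# a small tree with Delta = 3, rooted at a vertex of maximum degree
T = {0: {1, 2, 3}, 1: {0, 4, 5}, 2: {0}, 3: {0}, 4: {1}, 5: {1}}
c = color_tree(T, 0)
arcs = [(u, v) for u in T for v in T[u]]
assert all(c[u] != c[v] for (u, v) in arcs)                          # vertices
assert all(c[(u, v)] not in (c[u], c[v]) for (u, v) in arcs)         # arc ends
assert all(c[(u, v)] != c[(u, w)] for (u, v) in arcs
           for w in T[u] if w != v)                                  # same tail
assert all(c[(u, v)] != c[(v, w)] for (u, v) in arcs for w in T[v])  # consecutive
assert len(set(c.values())) <= 5                                     # Delta + 2
\end{verbatim}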
\section{Cycles, Complete and Bipartite Graphs}\label{sec4} In \cite{paper13}, it was proved that $\chi(C_k^m)=k$ when $m\geq \lfloor\frac{k}{2}\rfloor$ and that $\chi(C_k^m)=\lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil$ otherwise. By a routine check of these values, one can show that $\chi(G^{\frac{3}{3}})=\chi_{vi}(G)\leq 5$ when $\Delta(G)=2$ and that, in this case, $\chi(G^{\frac{3}{3}})=\chi_{vi}(G)=4$ if and only if every component of $G$ is a path or a cycle of order divisible by 4. In the first theorem of this section, we show that any cycle of order at least four is $vi$-simultaneous $(5,1)$-colorable. To avoid drawing too many edges in the figures, we use $\frac{1}{3}$-powers of graphs instead of $\frac{3}{3}$-powers of graphs. Internal vertices are shown in white and terminal vertices in black. \begin{theorem}\label{cycles} Let $3\leq n\in\mathbb{N}$. Then \[\chi_{vi,1}(C_n)=\left\{\begin{array}{lll} 6 & n=3,\\ 4 & n\equiv 0 \pmod{4},\\ 5 & \mbox{otherwise}. \end{array}\right.\] \end{theorem} \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1.0] \tikzset{vertex/.style = {shape=circle,draw, line width=1pt, opacity=1.0, inner sep=2pt}} \tikzset{vertex1/.style = {shape=circle,draw, fill=black, line width=1pt,opacity=1.0, inner sep=2pt}} \tikzset{arc/.style = {->,> = latex', line width=1pt,opacity=1.0}} \tikzset{edge/.style = {-,> = latex', line width=1pt,opacity=1.0}} \node[vertex1] (a) at (0,0) {}; \node at (0,-0.4) {$1$}; \node[vertex] (b) at (1,0) {}; \node at (1,-0.4) {$2$}; \node[vertex] (c) at (2,0) {}; \node at (2,-0.4) {$3$}; \node[vertex1] (d) at (3,0) {}; \node at (3,-0.4) {$4$}; \node[vertex] (e) at (2.5,0.85) {}; \node at (3,0.85) {$5$}; \node[vertex] (f) at (2,1.7) {}; \node at (2.5,1.7) {$2$}; \node[vertex1] (g) at (1.5,2.55) {}; \node at (1.9,2.55) {$6$}; \node[vertex] (h) at (1,1.7) {}; \node at (0.6,1.7) {$3$}; \node[vertex] (i) at (0.5,0.85) {}; \node at (0.1,0.85) {$5$}; \draw[edge] (a) to (b); \draw[edge] (b) to (c); \draw[edge] (c) to (d); \draw[edge] (d) to (e); \draw[edge] (e) to (f); \draw[edge] (f) to (g); \draw[edge] (g) to (h); \draw[edge] (h) to (i); \draw[edge] (i) to (a); \end{tikzpicture} \caption{$vi$-simultaneous proper $(6,1)$-coloring of $C_3$. Black vertices correspond to the vertices of $C_3$ and white vertices correspond to the incidences of $C_3$.} \label{C3} \end{center} \end{figure} \begin{proof}{ Suppose that $V(C_n)=\{v_1,v_2,\ldots,v_n\}$ and that $c$ is a $vi$-simultaneous $(k,1)$-coloring of $C_3$. We have $c(v_i)\neq c((v_i,v_j))=c((v_l,v_j))$ where $\{i,j,l\}=[3]$. So \[|\{c(v_1),c(v_2),c(v_3), c((v_1,v_2)),c((v_2,v_1)),c((v_1,v_3))\}|=6.\] Therefore, $k\geq6$. Figure \ref{C3} shows a $vi$-simultaneous $(6,1)$-coloring of $C_3$ and so $\chi_{vi,1}(C_3)=6$. In the second part, $\chi_{vi}(C_n)=\chi(C_n^{\frac{3}{3}})=\chi(C_{3n}^3)=\lceil\frac{3n}{\lfloor\frac{3n}{4}\rfloor}\rceil=4=\Delta(C_n)+2$ and hence Lemma \ref{firstlem} shows that any $vi$-simultaneous $4$-coloring of $C_n$ is a $vi$-simultaneous $(4,1)$-coloring.\\ For the last part, we consider three cases:\\ (i) $n=4q+1$, $q\in\mathbb{N}$.
Suppose that $c$ is a $vi$-simultaneous $(4,1)$-coloring of $C_{n-1}$ and \[(c(v_1),c((v_1,v_{n-1})), c((v_{n-1},v_1)), c(v_{n-1}))=(1,4,3,2).\] In this coloring, the colors of the other vertices uniquely determined. To find a $vi$-simultaneous $(5,1)$-coloring of $C_{n}$, we replace the edge $\{v_1,v_{n-1}\}$ with the path $P=v_{n-1}v_{n}v_1$. Now we define the coloring $c'$ as follows (See Figure \ref{4q+1}): \[c'(x)=\left\{\begin{array}{lllll} 2 & x=v_n,\\ 3 & x\in \{v_{n-1}, (v_n,v_1)\},\\ 4 & x=(v_n,v_{n-1}),\\ 5 & x\in\{v_{n-2},(v_1,v_n), (v_{n-1},v_n\},\\ c(x) & otherwise. \end{array}\right.\] \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1.0] \tikzset{vertex/.style = {shape=circle,draw, line width=1pt, opacity=1.0, inner sep=2pt}} \tikzset{vertex1/.style = {shape=circle,draw, fill=black, line width=1pt,opacity=1.0, inner sep=2pt}} \tikzset{edge/.style = {-,> = latex', line width=1pt,opacity=1.0}} \node[vertex1] (a) at (0,0) {}; \node at (0,0.4) {$3$}; \node at (0,-0.5) {$v_{n-2}$}; \node[vertex] (b) at (1,0) {}; \node at (1,0.4) {$4$}; \node[vertex] (c) at (2,0) {}; \node at (2,0.4) {$1$}; \node[vertex1] (d) at (3,0) {}; \node at (3,0.4) {$2$}; \node at (3,-0.5) {$v_{n-1}$}; \node[vertex] (e) at (4,0) {}; \node at (4, 0.4) {$3$}; \node[vertex] (f) at (5,0) {}; \node at (5,0.4) {$4$}; \node[vertex1] (g) at (6,0) {}; \node at (6,0.4) {$1$}; \node at (6,-0.5) {$v_{1}$}; \node[vertex] (h) at (7,0) {}; \node at (7,0.4) {$2$}; \node[vertex] (i) at (8,0) {}; \node at (8,0.4) {$3$}; \node[vertex1] (j) at (9,0) {}; \node at (9,0.4) {$4$}; \node at (9,-0.5) {$v_{2}$}; \node at (4.5,-0.5) {$v_{n}$}; \node at (-0.5,0) {{\large $\cdots$}}; \node at (-2.5,0) {{\large Coloring $c$ :}}; \node at (9.6,0) {{\large $\cdots$}}; \node at (-2.5,-1) {{\large Coloring $c'$ :}}; \draw[edge] (a) to (b); \draw[edge] (b) to (c); \draw[edge] (c) to (d); \draw[edge] (d) to (e); \draw[edge] (e) to (f); \draw[edge] (f) to (g); \draw[edge] (g) to (h); \draw[edge] (h) to (i); \draw[edge] (i) to (j); \node[vertex1] (a1) at (0,-1) {}; \node at (0,-1.4) {$5$}; \node[vertex] (b1) at (1,-1) {}; \node at (1,-1.4) {$4$}; \node[vertex] (c1) at (2,-1) {}; \node at (2,-1.4) {$1$}; \node[vertex1] (d1) at (3,-1) {}; \node at (3,-1.4) {$3$}; \node[vertex] (e1) at (3.5,-1) {}; \node at (3.5, -1.4) {$5$}; \node[vertex] (f1) at (4,-1) {}; \node at (4,-1.4) {$4$}; \node[vertex1] (g1) at (4.5,-1) {}; \node at (4.5,-1.4) {$2$}; \node[vertex] (h1) at (5,-1) {}; \node at (5,-1.4) {$3$}; \node[vertex] (i1) at (5.5,-1) {}; \node at (5.5,-1.4) {$5$}; \node[vertex1] (j1) at (6,-1) {}; \node at (6,-1.4) {$1$}; \node[vertex] (k1) at (7,-1) {}; \node at (7,-1.4) {$2$}; \node[vertex] (l1) at (8,-1) {}; \node at (8,-1.4) {$3$}; \node[vertex1] (m1) at (9,-1) {}; \node at (9,-1.4) {$4$}; \node at (-0.5,-1) {{\large $\cdots$}}; \node at (9.6,-1) {{\large $\cdots$}}; \draw[edge] (a1) to (b1); \draw[edge] (b1) to (c1); \draw[edge] (c1) to (d1); \draw[edge] (d1) to (e1); \draw[edge] (e1) to (f1); \draw[edge] (f1) to (g1); \draw[edge] (g1) to (h1); \draw[edge] (h1) to (i1); \draw[edge] (i1) to (j1); \draw[edge] (i1) to (k1); \draw[edge] (k1) to (l1); \draw[edge] (l1) to (m1); \end{tikzpicture} \caption{Extension $vi$-simultaneous $(4,1)$-coloring $c$ to a $vi$-simultaneous $(5,1)$-coloring $c'$.} \label{4q+1} \end{center} \end{figure} (ii) $n=4q+2$, $q\in\mathbb{N}$ and $q\in\mathbb{N}$. Figure \ref{C6} shows a $vi$-simultaneous $(5,1)$-coloring of $C_6$. Now suppose that $n\geq 10$. 
Easily we can use the method of case (i) on two edges $e_1=\{v_{1},v_2\}$ and $e_2=\{v_4,v_5\}$ of $C_{n-2}$ to achieve a $vi$-simultaneous $(5,1)$-coloring of $C_n$.\\ (iii) $n=4q+3$, $q\in\mathbb{N}$. Figure \ref{C6} shows a $vi$-simultaneous $(5,1)$-coloring of $C_7$. Now suppose that $n\geq 11$. Again we use the method of case (i) on three edges $e_1=\{v_1,v_2\}$ (with change the color of $v_{3}$ to $5$ instead of vertex $v_{n-3}$), $e_2=\{v_4,v_5\}$ and $e_3=\{v_7,v_8\}$ of $C_{n-3}$ to achieve a $vi$-simultaneous $(5,1)$-coloring of $C_n$. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=1.0] \tikzset{vertex/.style = {shape=circle,draw, line width=1pt, opacity=1.0, inner sep=2pt}} \tikzset{vertex1/.style = {shape=circle,draw, fill=black, line width=1pt,opacity=1.0, inner sep=2pt}} \tikzset{edge/.style = {-,> = latex', line width=1pt,opacity=1.0}} \node[vertex1] (a) at (0,0) {}; \node at (0,-0.4) {$1$}; \node[vertex] (a1) at (1,0) {}; \node at (1,-0.4) {$3$}; \node[vertex] (a2) at (2,0) {}; \node at (2,-0.4) {$4$}; \node[vertex1] (b) at (3,0) {}; \node at (3,-0.4) {$2$}; \node[vertex] (b1) at (4,0) {}; \node at (4,-0.4) {$5$}; \node[vertex] (b2) at (5,0) {}; \node at (5,-0.4) {$3$}; \node[vertex1] (c) at (6,0) {}; \node at (6,-0.4) {$1$}; \node[vertex] (c1) at (7,0) {}; \node at (7,-0.4) {$4$}; \node[vertex] (c2) at (8,0) {}; \node at (8,-0.4) {$5$}; \node[vertex1] (d) at (8,1) {}; \node at (8,1.4) {$2$}; \node[vertex] (d1) at (7,1) {}; \node at (7,1.4) {$3$}; \node[vertex] (d2) at (6,1) {}; \node at (6,1.4) {$4$}; \node[vertex1] (e) at (5,1) {}; \node at (5,1.4) {$1$}; \node[vertex] (e1) at (4,1) {}; \node at (4,1.4) {$5$}; \node[vertex] (e2) at (3,1) {}; \node at (3,1.4) {$3$}; \node[vertex1] (f) at (2,1) {}; \node at (2,1.4) {$2$}; \node[vertex] (f1) at (1,1) {}; \node at (1,1.4) {$4$}; \node[vertex] (f2) at (0,1) {}; \node at (0,1.4) {$5$}; \draw[edge] (a) to (a1); \draw[edge] (a1) to (a2); \draw[edge] (a2) to (b); \draw[edge] (b) to (b1); \draw[edge] (b1) to (b2); \draw[edge] (b2) to (c); \draw[edge] (c) to (c1); \draw[edge] (c1) to (c2); \draw[edge] (c2) to (d); \draw[edge] (d) to (d1); \draw[edge] (d1) to (d2); \draw[edge] (d2) to (e); \draw[edge] (e) to (e1); \draw[edge] (e1) to (e2); \draw[edge] (e2) to (f); \draw[edge] (f) to (f1); \draw[edge] (f1) to (f2); \draw[edge] (f2) to (a); \node[vertex1] (a) at (0,2) {}; \node at (0,2.4) {$5$}; \node[vertex] (a1) at (1,2) {}; \node at (1,2.4) {$1$}; \node[vertex] (a2) at (2,2) {}; \node at (2,2.4) {$3$}; \node[vertex1] (b) at (3,2) {}; \node at (3,2.4) {$4$}; \node[vertex] (b1) at (4,2) {}; \node at (4,2.4) {$2$}; \node[vertex] (b2) at (5,2) {}; \node at (5,2.4) {$1$}; \node[vertex1] (c) at (6,2) {}; \node at (6,2.4) {$5$}; \node[vertex] (c1) at (7,2) {}; \node at (7,2.4) {$3$}; \node[vertex] (c2) at (8,2) {}; \node at (8,2.4) {$2$}; \node[vertex1] (x) at (9,2) {}; \node at (9,1.6) {$1$}; \node[vertex] (x1) at (9,3) {}; \node at (9,3.4) {$4$}; \node[vertex] (x2) at (8,3) {}; \node at (8,3.4) {$3$}; \node[vertex1] (d) at (7,3) {}; \node at (7,3.4) {$2$}; \node[vertex] (d1) at (6,3) {}; \node at (6,3.4) {$5$}; \node[vertex] (d2) at (5,3) {}; \node at (5,3.4) {$4$}; \node[vertex1] (e) at (4,3) {}; \node at (4,3.4) {$3$}; \node[vertex] (e1) at (3,3) {}; \node at (3,3.4) {$2$}; \node[vertex] (e2) at (2,3) {}; \node at (2,3.4) {$5$}; \node[vertex1] (f) at (1,3) {}; \node at (1,3.4) {$4$}; \node[vertex] (f1) at (0,3) {}; \node at (0,3.4) {$3$}; \node[vertex] (f2) at (-1,2.5) {}; \node at (-1,2.1) {$2$}; \draw[edge] (a) to (a1); 
\draw[edge] (a1) to (a2); \draw[edge] (a2) to (b); \draw[edge] (b) to (b1); \draw[edge] (b1) to (b2); \draw[edge] (b2) to (c); \draw[edge] (c) to (c1); \draw[edge] (c1) to (c2); \draw[edge] (c2) to (x); \draw[edge] (x) to (x1); \draw[edge] (x1) to (x2); \draw[edge] (x2) to (d); \draw[edge] (d) to (d1); \draw[edge] (d1) to (d2); \draw[edge] (d2) to (e); \draw[edge] (e) to (e1); \draw[edge] (e1) to (e2); \draw[edge] (e2) to (f); \draw[edge] (f) to (f1); \draw[edge] (f1) to (f2); \draw[edge] (f2) to (a); \end{tikzpicture} \caption{$vi$-simultaneous $(5,1)$-colorings of $C_6$ and $C_7$.} \label{C6} \end{center} \end{figure} }\end{proof} \begin{corollary} Let $G$ be a nonempty graph with $\Delta(G)\leq2$. Then $\chi_{vi,1}(G)=4$ if and only if each component of $G$ is a cycle of order divisible by 4 or a path. \end{corollary} The following lemma concerns the underlying digraph of any subgraph of the $\frac{3}{3}$-power of a graph induced by an independent set. We leave the proof to the reader. \begin{lemma}\label{stardiforest} Let $G$ be a graph and $S$ be an independent set of $G^{\frac{3}{3}}$. Then each component of $D(G^{\frac{3}{3}}[S])$ is trivial or a star whose arcs are directed towards its center. In addition, the vertices of the trivial components form an independent set in $G$. \end{lemma} \begin{theorem}\label{complete} $\chi_{vi}(K_n)=n+2$ for each $n\in\mathbb{N}\setminus\{1\}$. \end{theorem} \begin{proof}{ Let $G=K_n^{\frac{3}{3}}$, let $c:V(G)\rightarrow [\chi(G)]$ be a proper coloring and let $C_j=c^{-1}(j)$ ($1\leq j\leq\chi(G)$). Lemma \ref{stardiforest} implies that each color class $C_j$ has at most $n-1$ vertices. So \[\chi(G)\geq\frac{|V(G)|}{n-1}=\frac{n^2}{n-1}=n+1+\frac{1}{n-1}.\] Therefore, $\chi(G)\geq n+2$. Now we define a proper $(n+2)$-coloring of $G$.\\ When $n=2$, $\chi(G)=\chi(K_4)=4$. Now we consider $n\geq 3$. Consider the Hamiltonian cycle of $K_n$, say $C=(v_1,v_2,\ldots,v_n)$. For $1\leq j\leq n$, assign color $j$ to the $t$-vertex $v_j$ and to all $i$-vertices $(v_k,v_{j+1})$, where $k\in [n]\setminus\{j,j+1\}$ and $v_{n+1}=v_1$. It can easily be seen that all $t$-vertices of $G$ have a color in $[n]$ and the only uncolored vertices of $G$ are $(v_j,v_{j+1})$, for $1\leq j\leq n$. Now, it is enough to color the mentioned $i$-vertices. Suppose that $n$ is even. Assign color $n+1$ to the $i$-vertex $(v_j,v_{j+1})$ if $j$ is odd, and otherwise color it with the color $n+2$. Now suppose that $n$ is an odd integer. Then, for $1\leq j\leq n-1$, color the $i$-vertex $(v_j,v_{j+1})$ with color $n+1$ if $j$ is odd, and otherwise assign color $n+2$ to it. Also, color the $i$-vertex $(v_n,v_1)$ with color $n$ and recolor the $t$-vertex $v_n$ with color $n+1$. }\end{proof} Suppose that $c$ is a $vi$-simultaneous $(n+2)$-coloring of $K_n$. For any vertex $v$, $|c(I_1[v])|=n$ and so $|c(I_2(v))|\leq2$. Therefore $\chi_{vi,2}(K_n)=\chi_{vi}(K_n)=n+2$. In the following theorem, we determine $\chi_{vi,1}(K_n)$. \begin{theorem}\label{(vi,1)Kn} Let $n\in\mathbb{N}\setminus\{1\}$ and $G$ be a graph of order $n$. Then $\chi_{vi,1}(G)=2n$ if and only if $G\cong K_n$. \end{theorem} \begin{proof}{First, suppose that $G\cong K_n$. Since $diam(G)=1$, by Definition \ref{Tvi1}, any two vertices $(u,i)$ and $(v,j)$ of $\mathcal{T}_{vi,1}(G)$ are adjacent. So $\chi_{vi,1}(G)=\chi(\mathcal{T}_{vi,1}(G))=|V(\mathcal{T}_{vi,1}(G))|=2n$. Conversely, suppose that $\chi_{vi,1}(G)=2n$.
Therefore, $\chi(\mathcal{T}_{vi,1}(G))=2n=|V(\mathcal{T}_{vi,1}(G))|$, which implies that $\mathcal{T}_{vi,1}(G)$ is a complete graph. Now, for any two distinct vertices $u$ and $v$ of $G$, the vertices $(u,1)$ and $(v,2)$ of $\mathcal{T}_{vi,1}(G)$ are adjacent and so $d_G(u,v)=1$. Thus $G$ is a complete graph. }\end{proof} A dynamic coloring of a graph $G$ is a proper coloring in which the neighborhood of each vertex of degree at least two receives at least two distinct colors. The dynamic chromatic number $\chi_d(G)$ is the least number of colors in such a coloring of $G$ \cite{Dynamic}. Akbari et al. proved the following theorem, which we use to give a proper coloring of the $\frac{3}{3}$-power of a regular bipartite graph. \begin{theorem} {\em{\cite{Akbari}}}\label{dynamic} Let $G$ be a $k$-regular bipartite graph, where $k\geq 4$. Then, there is a $4$-dynamic coloring of $G$, using two colors for each part. \end{theorem} \begin{theorem} {\em{\cite{bondy}}}\label{Hallregular} Every regular bipartite graph has a perfect matching. \end{theorem} \begin{theorem}\label{regularbipartite} If $G=G(A,B)$ is a $k$-regular bipartite graph with $k\geq 4$ and $|A|=|B|=n$, then $\chi_{vi}(G)\leq \min\{n+3,2k\}$. \end{theorem} \begin{proof} {Suppose that $A=\{v_1,\ldots,v_n\}$ and $B=\{u_1,\ldots,u_n\}$. Since $G$ is a $k$-regular bipartite graph, by Theorem~\ref{Hallregular}, $G$ has a perfect matching $M=\{v_1u_1,\ldots,v_nu_n\}$. First, we present a proper $(n+3)$-coloring of $G^{\frac{3}{3}}$. For $2\leq i\leq n$, color the two $t$-vertices $v_i$ and $u_i$ with colors $1$ and ${n+1}$, respectively. Also, for $u\in N(v_1)$ and $v\in N(u_1)$, color the $i$-vertices $(u,v_1)$ and $(v,u_1)$ with colors $1$ and $n+1$, respectively.\\ Now, for $2\leq i\leq n$, for $u\in N(v_i)\setminus\{u_i\}$ and $v\in N(u_i)\setminus\{v_i\}$, assign color $i$ to the $i$-vertices $(u,v_i)$ and $(v,u_i)$. It can easily be seen that all the $t$-vertices of $G$ except $\{v_1,u_1\}$ and all $i$-vertices of $G$ except $\{(v_i,u_i),(u_i,v_i)|\hspace{1mm}2\leq i\leq n\}$ have colors in $[n+1]$. Now, assign colors $n+2$ and $n+3$ to the $t$-vertices $v_1$ and $u_1$, respectively. Also, for $2\leq i\leq n$, color the $i$-vertices $(v_i,u_i)$ and $(u_i,v_i)$ with colors $n+2$ and $n+3$, respectively. One can check that this coloring is a proper coloring of $G^{\frac{3}{3}}$ with $n+3$ colors.\\ Next, we present a proper $(2k)$-coloring of $G^{\frac{3}{3}}$. By Theorem~\ref{dynamic}, there is a $4$-dynamic coloring of $G$, say $c$, using two colors in each part. Without loss of generality, suppose that each $t$-vertex in $A$ has one of the colors $1$ and $2$ and each $t$-vertex in $B$ has one of the colors $3$ and $4$. For $1\leq i\leq n$, consider the $t$-vertex $u_i\in B$ with set of neighbors $N(u_i)$. Since $c$ is a $4$-dynamic coloring, $u_i$ has at least one neighbor of each of the colors $1$ and $2$. Let $u$ and $u'$ be two $t$-vertices in $N(u_i)$, where $c(u)=1$ and $c(u')=2$. First, assign colors $1$ and $2$ to the $i$-vertices $(u_i,u')$ and $(u_i,u)$, respectively. Then, for $w\in N(u_i)\setminus \{u,u'\}$, color all $i$-vertices $(u_i,w)$ with distinct colors from $\{5,\ldots,{k+2}\}$. Similarly, for a $t$-vertex $v_i\in A$, suppose that $v$ and $v'$ are neighbors of $v_i$ with colors $3$ and $4$, respectively. Color the $i$-vertices $(v_i,v')$ and $(v_i,v)$ with colors $3$ and $4$, respectively.
Then, for $w'\in N(v_i)\setminus \{v,v'\}$, color all $i$-vertices $(v_i,w')$ with distinct colors from $\{k+3,\ldots,2k\}$. It can easily be seen that the presented coloring is a proper $(2k)$-coloring of $G^{\frac{3}{3}}$. }\end{proof} Since any bipartite graph with maximum degree $\Delta$ can be extended to a $\Delta$-regular bipartite graph, we have the following corollary. \begin{corollary} If $G$ is a bipartite graph with maximum degree $\Delta$, then $\chi_{vi}(G)\leq 2\Delta$. \end{corollary} A derangement of a set $S$ is a bijection $\pi : S\rightarrow S$ such that no element $x\in S$ has $\pi(x)=x$. \begin{theorem} Let $n,m\in\mathbb{N}$ and $n\geq m$. Then $\chi_{vi}(K_{n,m})=\left\{\begin{array}{ll} n+2 & m\leq 2\\ n+3 & m\geq 3\end{array}\right.$. \end{theorem} \begin{proof}{ Let $A=\{v_1,\ldots,v_n\}$ and $B=\{u_1,\ldots,u_m\}$ be the two parts of $K_{n,m}$ and let $G=K_{n,m}^{\frac{3}{3}}$. If $m=1$, then $K_{n,1}$ is a tree and, by Corollary~\ref{tree}, we have $\chi(G)=n+2$. Now suppose that $m=2$. Since $\omega(G)=\Delta+2$, $\chi(G)\geq n+2$. It suffices to present a proper $(n+2)$-coloring of $G$ with colors in $[n+2]$. Suppose that $\pi$ is a derangement of the set $[n]$. Assign color $n+1$ to the vertices of $\{u_1\}\cup I_2(u_2)$ and color $n+2$ to the vertices of $\{u_2\}\cup I_2(u_1)$. Also, for $j\in[n]$, color the $i$-vertices $(u_1,v_j)$ and $(u_2,v_j)$ with color $j$ and the vertex $v_j$ with color $\pi(j)$. The given coloring is a proper $(n+2)$-coloring of $G$.\\ In the case $m\geq 3$, suppose that $c$ is a proper coloring of $G$ with colors $1,\ldots,n+2$. Since the vertices of $I_1[u_1]$ are pairwise adjacent in $G$, there are exactly $n+1$ colors in $c(I_1[u_1])$. Without loss of generality, suppose that $c(u_1)=1$ and $c(I_1(u_1))=[n+1]\setminus\{1\}$. By Lemma~\ref{firstlem}, all $i$-vertices of $I_2(u_1)$ have the same color $n+2$.\\ Now, consider the $t$-vertices $u_2$ and $u_3$. All $i$-vertices of $I_2(u_2)$ have the same color, all $i$-vertices of $I_2(u_3)$ have the same color, and these colors do not belong to $\{2,\ldots,n+2\}$. Hence, the only available color for these vertices is the color $1$. But the subgraph of $G$ induced by $I_2(u_2)\cup I_2(u_3)$ is 1-regular and so two colors are needed to color it, a contradiction.\\ To complete the proof, it suffices to show that $\chi((K_{n,n})^{\frac{3}{3}})\leq n+3$. Since $n\geq 3$, $n+3\leq 2n$ and, by Theorem~\ref{regularbipartite}, we have $\chi(G)\leq\chi({K_{n,n}}^{\frac{3}{3}})\leq \min\{n+3,2n\}=n+3$. Hence, $\chi(G)=n+3$. }\end{proof} \begin{theorem}\label{vi1Knm} Let $n,m\in\mathbb{N}\setminus\{1\}$. Then $\chi_{vi,1}(K_{n,m})=n+m$. \end{theorem} \begin{proof}{ Since $(K_{n,m})^2\cong K_{n+m}$, $K_{n+m}$ is a subgraph of $\mathcal{T}_{vi,1}(K_{n,m})$ and so $\chi_{vi,1}(K_{n,m})=\chi(\mathcal{T}_{vi,1}(K_{n,m}))\geq n+m$. Now we show that $\chi(\mathcal{T}_{vi,1}(K_{n,m}))\leq n+m$. Let $V=\{v_1,\ldots,v_n\}$ and $U=\{u_1,\ldots,u_m\}$ be the two parts of $K_{n,m}$, let $\pi$ be a derangement of $[n]$ and let $\sigma$ be a derangement of $[m]$. One can easily show that the following vertex coloring of $\mathcal{T}_{vi,1}(K_{n,m})$ is proper. \[c(x)=\left\{\begin{array}{llll} i & x=(v_i,2)\\ n+j & x=(u_j,2)\\ \pi(i) & x=(v_i,1)\\ n+\sigma(j) & x=(u_j,1).\end{array}\right.\] }\end{proof}
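To illustrate the coloring used in the proof of Theorem~\ref{vi1Knm}, the following short Python sketch (again only an illustration; the names \verb|tvi1_adjacent| and \verb|check| are ad hoc, and cyclic shifts are used as the derangements $\pi$ and $\sigma$) builds this coloring for small values of $n$ and $m$ and verifies that it is a proper coloring of $\mathcal{T}_{vi,1}(K_{n,m})$ with respect to the adjacency rule of Definition~\ref{Tvi1}.
\begin{verbatim}
# Sketch: the (n+m)-coloring of T_{vi,1}(K_{n,m}) from the proof above,
# checked against the adjacency rule defining T_{vi,1}(G).
from itertools import combinations

def tvi1_adjacent(dist, x, y):
    """x = (vertex, layer); dist[u][v] = distance in G."""
    (v, i), (u, j) = x, y
    if i == j == 1:
        return dist[v][u] == 1
    if i == j == 2:
        return 1 <= dist[v][u] <= 2
    return dist[v][u] <= 1                  # i != j

def check(n, m):
    A = ['v%d' % i for i in range(1, n + 1)]
    B = ['u%d' % j for j in range(1, m + 1)]
    # distances in K_{n,m}: 0 on the diagonal, 1 across parts, 2 inside a part
    dist = {x: {y: (0 if x == y else 1 if (x in A) != (y in A) else 2)
                for y in A + B} for x in A + B}
    pi = {i: i % n + 1 for i in range(1, n + 1)}      # a derangement of [n]
    sg = {j: j % m + 1 for j in range(1, m + 1)}      # a derangement of [m]
    c = {}
    for i, v in enumerate(A, 1):
        c[(v, 2)], c[(v, 1)] = i, pi[i]
    for j, u in enumerate(B, 1):
        c[(u, 2)], c[(u, 1)] = n + j, n + sg[j]
    V = [(x, l) for x in A + B for l in (1, 2)]
    return all(c[x] != c[y] for x, y in combinations(V, 2)
               if tvi1_adjacent(dist, x, y))

assert check(4, 3) and check(5, 2)
\end{verbatim}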
As we have seen, there are graphs, such as trees and $K_{n,2}$, with maximum degree $\Delta$ whose $\frac{3}{3}$-power has chromatic number equal to $\Delta+2$. This leads to the problem of characterizing all graphs with this property. \begin{problem}{\rm Characterize all graphs $G$ with maximum degree at least 3 such that $\chi(G^{\frac{3}{3}})=\omega(G^{\frac{3}{3}})=\Delta(G)+2$. }\end{problem} {\bf Acknowledgements.} This research was in part supported by a grant from IPM (No. 1400050116). \begin{thebibliography}{100} \bibitem{Akbari} S. Akbari, M. Ghanbari and S. Jahanbekam, {\it On the dynamic chromatic number of graphs}, Combinatorics and Graphs, in: Contemporary Mathematics-American Mathematical Society. vol. 531, 11--18 (2010). \bibitem{behzad} M. Behzad, {\it Graphs and their Chromatic Numbers}, Ph.D. thesis, Michigan State University (1965). \bibitem{bondy} J. A. Bondy and U. S. R. Murty, {\it Graph theory}, Graduate Texts in Mathematics, vol. 244, Springer, New York (2008). \bibitem{Bruldy} R.A. Brualdi and J.Q. Massey, {\it Incidence and strong edge colorings of graphs}, Discrete Math., vol. 122, 51--58 (1993). \bibitem{borodin} O.V. Borodin, {\it Simultaneous coloring of edges and faces of plane graphs}, Discrete Math., vol. 128 (1994) 21--33. \bibitem{chan} W. H. Chan, Peter C. B. Lam, and W. C. Shiu, {\it Edge-face total chromatic number of Halin graphs}, SIAM J. Discrete Math., 23(3), 1646--1654 (2009). \bibitem{Cranston} D. W. Cranston, S. J. Kim, {\it List coloring the square of a subcubic graph}, J. Graph Theory, vol. 57, no. 1, 65--87 (2008). \bibitem{hartke2013} S. Hartke, H. Liu and S. Petrickova, {\it On coloring of fractional powers of graphs}, arxiv.org/pdf/1212.3898v1.pdf (2013). \bibitem{iradmusa2020} M. N. Iradmusa, {\it A short proof of 7-colorability of $\frac{3}{3}$-power of sub-cubic graphs}, Iranian Journal of Science and Technology, Transactions A: Science, 44, 225--226 (2020). \bibitem{paper13} M. N. Iradmusa, {\it On colorings of graph fractional powers}, Discrete Math. vol. 310 (10-11), 1551--1556 (2010). \bibitem{Dynamic} B. Montgomery, {\it Dynamic Coloring}, Ph.D. thesis, West Virginia University (2001). \bibitem{mahsa} M. Mozafari-Nia, M. N. Iradmusa, {\it A note on coloring of $\frac{3}{3}$-power of subquartic graphs}, Australasian J. of Combinatorics, 79 (3), 454--460 (2021). \bibitem{ringel} G. Ringel, {\it Ein Sechsfarbenproblem auf der Kugel}, Abh. Math. Sem. Univ. Hamburg 29 (1965), 107--117. \bibitem{3power3subdivision} F. Wang and X. Liu, {\it Coloring 3-power of 3-subdivision of subcubic graph}, Discrete Mathematics, Algorithms and Applications. vol. 10 (3), 1850041 (2018). \bibitem{wang1} W. Wang, J. Liu, {\it On the vertex face total chromatic number of planar graphs}, J. Graph Theory 22 (1996) 29--37. \bibitem{wang2} W. Wang, Xuding Zhu, {\it Entire colouring of plane graphs}, Journal of Combinatorial Theory, Series B, 101 (2011) 490--501. \bibitem{west2} D. B. West, {\it Introduction to graph theory}, Prentice Hall, Second edition, (2001). \bibitem{planarinc} D. Yang, {\it Fractional incidence coloring and star arboricity of graphs}, Ars Combinatoria. vol. 105, 213--224 (2012). \end{thebibliography} \end{document}
2205.07165v2
http://arxiv.org/abs/2205.07165v2
Zagier-Hoffman's conjectures in positive characteristic
\documentclass{amsart} \usepackage{color} \usepackage{amsmath} \usepackage{kotex} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amscd} \usepackage{bm} \usepackage{eucal} \usepackage{enumitem} \usepackage{tikz} \usepackage[utf8]{inputenc} \usepackage[all]{xy} \allowdisplaybreaks \usepackage{hyperref} \hypersetup{ colorlinks = true, urlcolor = blue, linkcolor = blue, citecolor = red } \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{problem}{Problem} \newtheorem{question}{Question} \newtheorem{theoremx}{Theorem} \newtheorem{conjecturex}[theoremx]{Conjecture} \newtheorem{corollaryx}[theoremx]{Corollary} \renewcommand{\thetheoremx}{\Alph{theoremx}} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{notation}[theorem]{Notation} \numberwithin{equation}{section} \newcommand\fantome[1]{} \def\bL{\mathbb L} \def\bN{\mathbb N} \def\bB{\mathbb B} \def\bZ{\mathbb Z} \def\bF{\mathbb F} \def\bQ{\mathbb Q} \def\bC{\mathbb C} \def\bG{\mathbb G} \def\bK{\mathbb K} \def\bO{\mathbb O} \def\bS{\mathbb S} \def\bT{\mathbb T} \def\bt{\mathbf t} \def\cI{\mathcal I} \def\cF{\mathcal F} \def\cG{\mathcal G} \def\calM{\mathcal M} \def\cO{\mathcal O} \def\cL{\mathcal L} \def\calp{\mathcal p} \def\fa{\mathfrak{a}} \def\ff{\mathfrak{f}} \def\Fq{\mathbb F_q} \def\lra{\longrightarrow} \def\tpi{\widetilde{\widetilde{\pi}}} \def\fl{\mathrm{fl}} \newcommand{\af}{\mathfrak{a}} \newcommand{\A}{\mathcal{A}} \newcommand{\Ab}{\textbf{A}} \newcommand{\LL}{\mathbb{L}} \newcommand{\E}{\mathcal{E}} \newcommand{\Gm}{\mathbb{G}_m} \newcommand{\Kb}{\textbf{K}} \newcommand{\Hb}{\textbf{H}} \newcommand{\iso}{\overset{\sim}{\longrightarrow}} \newcommand{\oX}{\overline{X}} \newcommand{\rg}{\on{rg}} \newcommand{\p}{\perp} \newcommand{\spn}{\mathrm{span}} \newcommand{\Fc}{\mathcal{F}} \newcommand{\Fz}{\mathcal{F}_0} \newcommand{\Lc}{\mathcal{L}} \newcommand{\Lz}{\mathcal{L}_0} \newcommand{\Pc}{\mathcal{P}} \newcommand{\Pz}{\mathcal{P}_0} \newcommand{\Lis}{\Li^{*}} \newcommand{\tLis}{\widetilde{\Li}^{*}} \newcommand{\fLis}{\mathfrak{Li}^{*}} \newcommand{\ppar}{${}$\par} \DeclareMathOperator{\e}{exp} \DeclareMathOperator{\Init}{Init} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\Log}{Log} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Fitt}{Fitt} \DeclareMathOperator{\Frob}{Frob} \DeclareMathOperator{\Fib}{Fib} \DeclareMathOperator{\Pic}{Pic} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\Nm}{Nm} \DeclareMathOperator{\Div}{Div} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Reg}{Reg} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Leng}{Length} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\Lie}{Lie} \DeclareMathOperator{\Li}{Li} \DeclareMathOperator{\Si}{Si} \DeclareMathOperator{\fLi}{\mathfrak{Li}} \DeclareMathOperator{\fL}{\mathfrak{L}} \newcommand{\power}[2]{{#1 [[ #2 ]]}} \newcommand{\laurent}[2]{{#1 (( #2 ))}} \newcommand{\F}{\mathbb{F}} \newcommand{\C}{\mathbb{C}} \newcommand{\cM}{\mathcal{M}} \newcommand{\bA}{\textbf{A}} \newcommand{\bu}{\mathbf{u}} \newcommand{\bff}{\mathbf{f}} \newcommand{\bg}{\mathbf{g}} \newcommand{\bk}{\mathbf{k}} \newcommand{\bh}{\mathbf{h}} \newcommand{\bv}{\mathbf{v}} \newcommand{\bi}{\mathbf{i}} 
\newcommand{\bz}{\mathbf{z}} \newcommand{\by}{\mathbf{y}} \newcommand{\bx}{\mathbf{x}} \newcommand{\bb}{\mathbf{b}} \newcommand{\bw}{\mathbf{w}} \newcommand{\fla}{\bm{\lambda}} \newcommand{\fm}{\bm{\mu}} \newcommand{\fs}{\mathfrak{s}} \newcommand{\fe}{\bm{\epsilon}} \newcommand{\fve}{\bm{\varepsilon}} \newcommand{\flamb}{\bm{\lambda}} \newcommand{\frakL}{\mathfrak{L}} \newcommand{\frakLi}{\mathfrak{Li}} \newcommand{\fQ}{\mathfrak{Q}} \newcommand{\inv}{\ensuremath ^{-1}} \newcommand{\isom}{\ensuremath \cong} \newcommand{\tsgn}{\widetilde{\sgn}} \newcommand{\pd}{\partial} \newcommand{\tcG}{\widetilde{\mathcal{G}}} \newcommand{\N}{\ensuremath \mathbb{N}} \DeclareMathOperator{\Exp}{Exp} \DeclareMathOperator{\Res}{Res} \DeclareMathOperator{\RES}{RES} \DeclareMathOperator{\Mat}{Mat} \DeclareMathOperator{\divisor}{div} \DeclareMathOperator{\depth}{depth} \newcommand{\on}{\ensuremath ^{\otimes n}} \newcommand{\twist}{^{(1)}} \newcommand{\invtwist}{^{(-1)}} \newcommand{\twistinv}{^{(-1)}} \newcommand{\twisti}{^{(i)}} \newcommand{\twistk}[1]{^{(#1)}} \newcommand{\pinv}{\ensuremath ^{'-1}} \newcommand{\tuan}[1]{{\color{red} #1}} \newcommand{\bohae}[1]{{\color{green} #1}} \newcommand{\hojin}[1]{{\color{cyan} #1}} \newcommand{\nhuan}[1]{{\color{violet} #1}} \author[B.-H. Im]{Bo-Hae Im} \address{ Dept. of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, South Korea } \email{[email protected]} \author[H. Kim]{Hojin Kim} \address{ Dept. of Mathematical Sciences, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, South Korea } \email{[email protected]} \author[K. N. Le]{Khac Nhuan Le} \address{ Normandie Université, Université de Caen Normandie - CNRS, Laboratoire de Mathématiques Nicolas Oresme (LMNO), UMR 6139, 14000 Caen, France. } \email{[email protected]} \author[T. Ngo Dac]{Tuan Ngo Dac} \address{ Normandie Université, Université de Caen Normandie - CNRS, Laboratoire de Mathématiques Nicolas Oresme (LMNO), UMR 6139, 14000 Caen, France. } \email{[email protected]} \author[L. H. Pham]{Lan Huong Pham} \address{ Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, 10307 Hanoi, Viet Nam } \email{[email protected]} \title{Zagier-Hoffman's conjectures in positive characteristic} \subjclass[2010]{Primary 11M32; Secondary 11G09, 11J93, 11M38, 11R58} \keywords{Anderson $t$-motives, Anderson-Brownawell-Papanikolas criterion, (alternating) multiple zeta values, (alternating) Carlitz multiple polylogarithms} \parskip 2pt \setcounter{tocdepth}{1} \begin{document} \maketitle \begin{abstract} Multiples zeta values and alternating multiple zeta values in positive characteristic were introduced by Thakur and Harada as analogues of classical multiple zeta values of Euler and Euler sums. In this paper we determine all linear relations between alternating multiple zeta values and settle the main goals of these theories. As a consequence we completely establish Zagier-Hoffman's conjectures in positive characteristic formulated by Todd and Thakur which predict the dimension and an explicit basis of the span of multiple zeta values of Thakur of fixed weight. \end{abstract} \tableofcontents \section*{Introduction} \label{introduction} \subsection{Classical setting} \subsubsection{Multiple zeta values} Multiple zeta values of Euler (MZV's for short) are real positive numbers given by \[ \zeta(n_1,\dots,n_r)=\sum_{0<k_1<\dots<k_r} \frac{1}{k_1^{n_1} \dots k_r^{n_r}}, \quad \text{where } n_i \geq 1, n_r \geq 2. 
\] Here $r$ is called the depth and $w=n_1+\dots+n_r$ is called the weight of the presentation $\zeta(n_1,\dots,n_r)$. These values cover the special values $\zeta(n)$ for $n \geq 2$ of the Riemann zeta function and have been studied intensively, especially in the last three decades, with important and deep connections to different branches of mathematics and physics, for example arithmetic geometry, knot theory and high energy physics. We refer the reader to \cite{BGF, Zag94} for more details. The main goal of this theory is to understand all $\mathbb Q$-linear relations between MZV's. Goncharov \cite[Conjecture 4.2]{Gon97} conjectures that all $\mathbb Q$-linear relations between MZV's can be derived from those between MZV's of the same weight. As the next step, precise conjectures formulated by Zagier \cite{Zag94} and Hoffman \cite{Hof97} predict the dimension and an explicit basis for the $\mathbb Q$-vector space $\mathcal{Z}_k$ spanned by MZV's of weight $k$ for $k \in \mathbb{N}$. \begin{conjecture}[Zagier's conjecture] We define a Fibonacci-like sequence of integers $d_k$ as follows. Letting $d_0=1$, $d_1=0$ and $d_2=1$, we define $d_k=d_{k-2}+d_{k-3}$ for $k \geq 3$. Then for $k \in \N$ we have \[ \dim_{\mathbb Q} \mathcal Z_k = d_k. \] \end{conjecture} \begin{conjecture}[Hoffman's conjecture] The MZV's of weight $k$ of the form $\zeta(n_1,\dots,n_r)$ with $n_i \in \{2,3\}$ form a basis of the $\mathbb Q$-vector space $\mathcal Z_k$. \end{conjecture} The algebraic part of these conjectures, which concerns upper bounds for $\dim_{\mathbb Q} \mathcal Z_k$, was solved by Terasoma \cite{Ter02}, Deligne-Goncharov \cite{DG05} and Brown \cite{Bro12} using the theory of mixed Tate motives. \begin{theorem}[Deligne-Goncharov, Terasoma] For $k \in \N$ we have $\dim_{\mathbb Q} \mathcal Z_k \leq d_k$. \end{theorem} \begin{theorem}[Brown] The $\mathbb Q$-vector space $\mathcal Z_k$ is generated by MZV's of weight $k$ of the form $\zeta(n_1,\dots,n_r)$ with $n_i \in \{2,3\}$. \end{theorem} Unfortunately, the transcendental part, which concerns lower bounds for $\dim_{\mathbb Q} \mathcal Z_k$, is completely open. We refer the reader to \cite{BGF,Del13,Zag94} for more details and more exhaustive references. \subsubsection{Alternating multiple zeta values} There exists a variant of MZV's called the alternating multiple zeta values (AMZV's for short), also known as Euler sums. They are real numbers given by \[ \zeta \begin{pmatrix} \epsilon_1 & \dots & \epsilon_r \\ n_1 & \dots & n_r \end{pmatrix}=\sum_{0<k_1<\dots<k_r} \frac{\epsilon_1^{k_1} \dots \epsilon_r^{k_r}}{k_1^{n_1} \dots k_r^{n_r}} \] where $\epsilon_i \in \{\pm 1\}$, $n_i \in \N$ and $(n_r,\epsilon_r) \neq (1,1)$. Similar to MZV's, these values have been studied by Broadhurst, Deligne-Goncharov, Hoffman, Kaneko-Tsumura and many others because of their many connections in different contexts. We refer the reader to \cite{Har21,Hof19,Zha16} for further references. As before, it is expected that all $\mathbb Q$-linear relations between AMZV's can be derived from those between AMZV's of the same weight. In particular, it is natural to ask whether one could formulate conjectures similar to those of Zagier and Hoffman for AMZV's of fixed weight. By the work of Deligne-Goncharov \cite{DG05}, the expected sharp upper bounds are established: \begin{theorem}[Deligne-Goncharov] For $k \in \N$, if we denote by $\mathcal A_k$ the $\mathbb Q$-vector space spanned by AMZV's of weight $k$, then $\dim_{\mathbb Q} \mathcal A_k \leq F_{k+1}$.
Here $F_n$ is the $n$-th Fibonacci number defined by $F_1=F_2=1$ and $F_{n+2}=F_{n+1}+F_n$ for all $n \geq 1$. \end{theorem} The fact that the previous upper bounds would be sharp was also explained by Deligne in \cite{Del10} (see also \cite{DG05}) using a variant of a conjecture of Grothendieck. In the direction of extending Brown's theorem to AMZV's, there are several sets of generators for $\mathcal A_k$ (see for example \cite{Cha21,Del10}). However, we mention that these generators are only linear combinations of AMZV's. Finally, we know nothing about non-trivial lower bounds for $\dim_{\mathbb Q} \mathcal A_k$. \subsection{Function field setting} \label{sec:function fields} \subsubsection{MZV's of Thakur and analogues of Zagier-Hoffman's conjectures} \label{sec:ZagierHoffman} By analogy between number fields and function fields, based on the pioneering work of Carlitz \cite{Car35}, Thakur \cite{Tha04} defined analogues of multiple zeta values in positive characteristic. We now introduce some notation. Let $A=\Fq[\theta]$ be the polynomial ring in the variable $\theta$ over a finite field $\Fq$ with $q$ elements and characteristic $p>0$. We denote by $A_+$ the set of monic polynomials in $A$. Let $K=\Fq(\theta)$ be the fraction field of $A$ equipped with the rational point $\infty$. Let $K_\infty$ be the completion of $K$ at $\infty$ and $\C_\infty$ be the completion of a fixed algebraic closure $\overline K$ of $K$ at $\infty$. We denote by $v_\infty$ the discrete valuation on $K$ corresponding to the place $\infty$ normalized such that $v_\infty(\theta)=-1$, and by $\lvert\cdot\rvert_\infty= q^{-v_\infty}$ the associated absolute value on $K$. The unique valuation of $\mathbb C_\infty$ which extends $v_\infty$ will still be denoted by $v_\infty$. Finally we denote by $\overline{\mathbb F}_q$ the algebraic closure of $\mathbb F_q$ in $\overline{K}$. Let $\mathbb N=\{1,2,\dots\}$ be the set of positive integers and $\mathbb Z^{\geq 0}=\{0,1,2,\dots\}$ be the set of non-negative integers. In \cite{Car35} Carlitz introduced the Carlitz zeta values $\zeta_A(n)$ for $n \in \N$ given by \[ \zeta_A(n) := \sum_{a \in A_+} \frac{1}{a^n} \in K_\infty \] which are analogues of classical special zeta values in the function field setting. For any tuple of positive integers $\mathfrak s=(s_1,\ldots,s_r) \in \N^r$, Thakur \cite{Tha04} defined the characteristic $p$ multiple zeta value (MZV for short) $\zeta_A(\fs)$ or $\zeta_A(s_1,\ldots,s_r)$ by \begin{equation*} \zeta_A(\fs):=\sum \frac{1}{a_1^{s_1} \ldots a_r^{s_r}} \in K_\infty \end{equation*} where the sum runs through the set of tuples $(a_1,\ldots,a_r) \in A_+^r$ with $\deg a_1>\cdots>\deg a_r$. We call $r$ the depth of $\zeta_A(\fs)$ and $w(\fs)=s_1+\dots+s_r$ the weight of $\zeta_A(\fs)$. We note that Carlitz zeta values are exactly depth one MZV's. Thakur \cite{Tha09a} showed that no MZV vanishes. We refer the reader to \cite{AT90,AT09,GP21,LRT14,LRT21,Pel12,Tha04,Tha09,Tha10,Tha17,Tha20,Yu91} for more details on these objects. As in the classical setting, the main goal of the theory is to understand all linear relations over $K$ between MZV's. We now state analogues of Zagier-Hoffman's conjectures in positive characteristic formulated by Thakur in \cite[\S 8]{Tha17} and by Todd in \cite{Tod18}. For $w \in \N$ we denote by $\mathcal Z_w$ the $K$-vector space spanned by the MZV's of weight $w$.
We denote by $\mathcal T_w$ the set of $\zeta_A(\fs)$ where $\fs=(s_1,\ldots,s_r) \in \N^r$ is of weight $w$ with $1\leq s_i\leq q$ for $1\leq i\leq r-1$ and $s_r<q$. \begin{conjecture}[Zagier's conjecture in positive characteristic] \label{conj: dimension} Letting \begin{align*} d(w)=\begin{cases} 1 & \text{ if } w=0, \\ 2^{w-1} & \text{ if } 1 \leq w \leq q-1, \\ 2^{w-1}-1 & \text{ if } w=q, \end{cases} \end{align*} we put $d(w)=\sum_{i=1}^q d(w-i)$ for $w>q$. Then for any $w \in \N$, we have \[ \dim_K \mathcal Z_w = d(w). \] \end{conjecture} \begin{conjecture}[Hoffman's conjecture in positive characteristic] \label{conj: basis} A $K$-basis for $\mathcal Z_w$ is given by $\mathcal T_w$, consisting of $\zeta_A(s_1,\ldots,s_r)$ of weight $w$ with $s_i \leq q$ for $1 \leq i <r$, and $s_r<q$. \end{conjecture} In \cite{ND21} one of the authors succeeded in proving the algebraic part of these conjectures (see \cite[Theorem A]{ND21}): for all $w \in \N$, we have \[ \dim_K \mathcal Z_w \leq d(w). \] This part is based on shuffle relations for MZV's due to Chen and Thakur and some operations introduced by Todd. For the transcendental part, he used the Anderson-Brownawell-Papanikolas criterion in \cite{ABP04} and proved sharp lower bounds for small weights $w \leq 2q-2$ (see \cite[Theorem D]{ND21}). It has already been noted that it is very difficult to extend his method to general weights (see \cite{ND21} for more details). \subsubsection{AMZV's in positive characteristic} Recently, Harada \cite{Har21} introduced the alternating multiple zeta values in positive characteristic (AMZV's) as follows. Letting $\fs=(s_1,\dots,s_r) \in \N^r$ and $\fve=(\varepsilon_1,\dots,\varepsilon_r) \in (\Fq^\times)^r$, we define \begin{equation*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_r^{\deg a_r }}{a_1^{s_1} \dots a_r^{s_r}} \in K_{\infty} \end{equation*} where the sum runs through the set of tuples $(a_1,\ldots,a_r) \in A_+^r$ with $\deg a_1>\cdots>\deg a_r$. The numbers $r$ and $w(\fs)=s_1+\dots+s_r$ are called the depth and the weight of $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix}$, respectively. We set $\zeta_A \begin{pmatrix} \emptyset \\ \emptyset \end{pmatrix} = 1$. Harada \cite{Har21} extended basic properties of MZV's to AMZV's, namely non-vanishing, shuffle relations, period interpretation and linear independence. Again the main goal of this theory is to determine all linear relations over $K$ between AMZV's. It is natural to ask whether the previous work on analogues of the Zagier-Hoffman conjectures can be extended to this setting. More precisely, if for $w \in \N$ we denote by $\mathcal{AZ}_w$ the $K$-vector space spanned by the AMZV's of weight $w$, then we would like to determine the dimensions of $\mathcal{AZ}_w$ and exhibit nice bases of these vector spaces. \subsection{Main results} \subsubsection{Statements of the main results} In this manuscript we present complete answers to all the previous conjectures and problems raised in \S \ref{sec:function fields}. First, for all $w$ we calculate the dimension of $\mathcal{AZ}_w$ and give an explicit basis in the spirit of Hoffman. \begin{theoremx} \label{thm: ZagierHoffman AMZV} We define a Fibonacci-like sequence $s(w)$ as follows. We put \begin{equation*} s(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{equation*} and for $w>q$, $s(w)=(q-1)\sum \limits_{i = 1}^{q-1} s(w-i) + s(w - q)$.
Then for all $w \in \N$, \[ \dim_K \mathcal{AZ}_w = s(w). \] Further, we can exhibit a Hoffman-like basis of $\mathcal{AZ}_w$. \end{theoremx} Second, we give a proof of both Conjectures \ref{conj: dimension} and \ref{conj: basis}, which generalizes the previous work of the fourth author \cite{ND21}. \begin{theoremx} \label{thm: ZagierHoffman} For all $w \in \N$, $\mathcal T_w$ is a $K$-basis for $\mathcal Z_w$. In particular, \[ \dim_K \mathcal Z_w = d(w). \] \end{theoremx} We recall that analogues of Goncharov's conjectures in positive characteristic were proved in \cite{Cha14}. As a consequence, we give a framework for understanding all linear relations over $K$ between MZV's and AMZV's and settle the main goals of these theories. \subsubsection{Ingredients of the proofs} Let us emphasize here that Theorem \ref{thm: ZagierHoffman AMZV} is much harder than Theorem \ref{thm: ZagierHoffman} and that it is not enough to work within the setting of AMZV's. On the one hand, although it is straightforward to extend the algebraic part for AMZV's following the same lines as in \cite[\S 2 and \S 3]{ND21}, we only obtain a weak version of Brown's theorem in this setting. More precisely, we get a set of generators for $\mathcal{AZ}_w$ but it is too large to be a basis of this vector space. For small weights, we find ad hoc arguments to produce a smaller set of generators, but this does not work for arbitrary weights (see \S \ref{sec:without ACMPL}). Roughly speaking, in \cite[\S 2 and \S 3]{ND21} we have an algorithm which moves forward so that we can express any AMZV as a linear combination of generators. But we lack precise control over the coefficients in these expressions, so that we cannot go backward and change bases. On the other hand, the transcendental part for AMZV's shares the same difficulties as the case of MZV's, as noted above. In this paper we use a completely new approach which is based on the study of alternating Carlitz multiple polylogarithms (ACMPL's for short) defined as follows. We put $\ell_0:=1$ and $\ell_d:=\prod_{i=1}^d (\theta-\theta^{q^i})$ for all $d \in \mathbb N$. For any tuple $\fs=(s_1,\ldots,s_r) \in \N^r$ and $\fve=(\varepsilon_1,\dots,\varepsilon_r) \in (\Fq^\times)^r$, we introduce the corresponding alternating Carlitz multiple polylogarithm by \begin{equation*} \Li\begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum\limits_{d_1> \dots > d_r\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_r^{d_r} }{\ell_{d_1}^{s_1} \dots \ell_{d_r}^{s_r}} \in K_{\infty}. \end{equation*} We also set $\Li \begin{pmatrix} \emptyset \\ \emptyset \end{pmatrix} = 1$. The key result is to establish a non-trivial connection between AMZV's and ACMPL's which allows us to go back and forth between these objects (see Theorem \ref{thm:bridge}). To do this, following \cite[\S 2 and \S 3]{ND21} we use stuffle relations to develop an algebraic theory for ACMPL's and obtain a weak version of Brown's theorem, i.e., a set of generators for the $K$-vector space $\mathcal{AL}_w$ spanned by ACMPL's of weight $w$. We observe that this set of generators is exactly the same as that for AMZV's. Thus $\mathcal{AL}_w=\mathcal{AZ}_w$, which provides a dictionary between AMZV's and ACMPL's. We then determine all $K$-linear relations between ACMPL's (see Theorem \ref{thm:ACMPL}). The proof we give here, while using tools similar to those in \cite{ND21}, differs in some crucial points and requires three new ingredients.
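Before turning to these ingredients, let us record a small computational aside (not needed for the proofs): the recurrences defining $d(w)$ in Conjecture \ref{conj: dimension} and $s(w)$ in Theorem \ref{thm: ZagierHoffman AMZV} are straightforward to tabulate, for instance with the following Python sketch in which $q$ is treated as a parameter.
\begin{verbatim}
# Tabulate the recurrences d(w) and s(w) stated above; q is a parameter.
from functools import lru_cache

def dims(q, w_max):
    """Return ([d(0), ..., d(w_max)], [s(1), ..., s(w_max)])."""
    @lru_cache(maxsize=None)
    def d(w):
        if w == 0:
            return 1
        if 1 <= w <= q - 1:
            return 2 ** (w - 1)
        if w == q:
            return 2 ** (w - 1) - 1
        return sum(d(w - i) for i in range(1, q + 1))                   # w > q

    @lru_cache(maxsize=None)
    def s(w):
        assert w >= 1
        if w < q:
            return (q - 1) * q ** (w - 1)
        if w == q:
            return (q - 1) * (q ** (w - 1) - 1)
        return (q - 1) * sum(s(w - i) for i in range(1, q)) + s(w - q)  # w > q

    return [d(w) for w in range(w_max + 1)], [s(w) for w in range(1, w_max + 1)]

if __name__ == "__main__":
    for q in (2, 3, 4):
        d_vals, s_vals = dims(q, 10)
        print(f"q = {q}: d(w) = {d_vals}")
        print(f"        s(w) = {s_vals}")
\end{verbatim}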
The first new ingredient is the construction of an appropriate Hoffman-like basis $\mathcal{AS}_w$ of $\mathcal{AL}_w$. In fact, our transcendental method dictates that we must find a complete system of bases $\mathcal{AS}_w$ of $\mathcal{AL}_w$ indexed by weights $w$ with strong constraints as given in Theorem \ref{theorem: linear independence}. The failure to find such a system of bases is the main obstacle to generalizing \cite[Theorem D]{ND21} (see \S \ref{sec: application AMZV} and \cite[Remark 6.3]{ND21} for more details). The second new ingredient is formulating and proving (a strong version of) Brown's theorem for ACMPL's (see Theorem \ref{thm: strong Brown}). As mentioned before, the method in \cite{ND21} only gives a weak version of Brown's theorem for ACMPL's as the set of generators is not a basis. Roughly speaking, given any ACMPL, we can express it as a linear combination of generators. The fact that stuffle relations for ACMPL's are ``simpler'' than shuffle relations for AMZV's gives more precise information about the coefficients of these expressions. Consequently, we show that a certain transition matrix is invertible and obtain Brown's theorem for ACMPL's. This completes the algebraic part for ACMPL's. The last new ingredient is proving the transcendental part for ACMPL's in full generality, i.e., the ACMPL's in $\mathcal{AS}_w$ are linearly independent over $K$ (see Theorem \ref{thm: trans ACMPL}). We emphasize that we do need the full strength of the algebraic part to prove the transcendental part. The proof follows the same lines as \cite[\S 4 and \S 5]{ND21}, and is formulated in a more general setting in \S \ref{sec: dual motives}. First, we have to consider not only linear relations between ACMPL's in $\mathcal{AS}_w$ but also those between ACMPL's in $\mathcal{AS}_w$ and the suitable power $\widetilde \pi^w$ of the Carlitz period $\widetilde \pi$. Second, starting from such a non-trivial relation we apply the Anderson-Brownawell-Papanikolas criterion in \cite{ABP04} and reduce to solving a system of $\sigma$-linear equations. While in \cite[\S 4 and \S 5]{ND21} this system has no non-trivial solution, which allows us to conclude, our system has a unique solution for even $w$ (i.e., $q-1$ divides $w$). This means that for such $w$, up to a scalar, there is a unique linear relation between ACMPL's in $\mathcal{AS}_w$ and $\widetilde \pi^w$. The last step consists of showing that in this unique relation, the coefficient of $\widetilde \pi^w$ is nonzero. Unexpectedly, this is a consequence of Brown's theorem for ACMPL's mentioned above. \subsubsection{Plan of the paper} We will briefly explain the organization of the manuscript. \begin{itemize} \item In \S \ref{sec:algebraic part} we recall the definition and basic properties of ACMPL's. We then develop an algebraic theory for these objects and obtain weak and strong Brown's theorems (see Proposition \ref{prop: weak Brown} and Theorem \ref{thm: strong Brown}). \item In \S \ref{sec: dual motives} we generalize some transcendental results in \cite{ND21} and give statements in a more general setting (see Theorem \ref{theorem: linear independence}). \item In \S \ref{sec:transcendental part} we prove transcendental results for ACMPL's and completely determine all linear relations between ACMPL's (see Theorems \ref{thm: trans ACMPL} and \ref{thm:ACMPL}).
\item Finally, in \S \ref{sec:applications} we present two applications and prove the main results, i.e., Theorems \ref{thm: ZagierHoffman AMZV} and \ref{thm: ZagierHoffman}. The first application is to prove the above connection between ACMPL's and AMZV's and then to determine all linear relations between AMZV's in positive characteristic (see \S \ref{sec: application AMZV}). The second application is a proof of Zagier-Hoffman's conjectures in positive characteristic, which generalizes the main results of \cite{ND21} (see \S \ref{sec: application MZV}). \end{itemize} \subsection{Remark} When our work was released as arXiv:2205.07165, Chieh-Yu Chang informed us that Chen, Mishiba and he were working towards a proof of Theorem \ref{thm: ZagierHoffman} (i.e., the MZV version) by using a similar method, and their paper \cite{CCM22} is now available at arXiv:2205.09929. \subsection{Acknowledgments} The first named author (B.-H. Im) was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.~2020R1A2B5B01001835). Two of the authors (KN. L. and T. ND.) were partially supported by the Excellence Research Chair ``$L$-functions in positive characteristic and applications'' funded by the Normandy Region. The fourth author (T. ND.) was partially supported by the ANR Grant COLOSS ANR-19-CE40-0015-02. The fifth author (LH. P.) was supported by the grant ICRTM.02-2021.05 funded by the International Center for Research and Postgraduate Training in Mathematics (VAST, Vietnam). \section{Weak and strong Brown's theorems for ACMPL's} \label{sec:algebraic part} In this section we first extend the work of \cite{ND21} and develop an algebraic theory for ACMPL's. Then we prove a weak version of Brown's theorem for ACMPL's (see Proposition \ref{prop: weak Brown}), which gives a set of generators for the $K$-vector space spanned by ACMPL's of weight $w$. The techniques of Sections \ref{sec: power sums}-\ref{sec:weak Brown} are similar to those of \cite{ND21} and the reader may wish to skip the details. Contrary to what happens in \cite{ND21}, it turns out that the previous set of generators is too large to be a basis. Consequently, in \S \ref{sec:strong Brown} we introduce another set of generators and prove a strong version of Brown's theorem for ACMPL's (see Theorem \ref{thm: strong Brown}). \subsection{Analogues of power sums} \label{sec: power sums} \subsubsection{} We recall and introduce some notation from \cite{ND21}. A tuple $\fs$ is a sequence of the form $\fs=(s_1,\dots,s_n) \in \mathbb N^n$. We call $\depth(\fs) = n$ the depth and $w(\fs) = s_1 + \dots + s_n$ the weight of $\fs$. If $\fs$ is nonempty, we put $\fs_- := (s_2, \dotsc, s_n)$. Let $\fs $ and $\mathfrak{t}$ be two tuples of positive integers. We set $s_i = 0$ (resp. $t_i = 0$) for all $i > \textup{depth}(\fs)$ (resp. $i > \textup{depth}(\mathfrak{t})$). We say that $\fs \leq \mathfrak{t}$ if $s_1 + \cdots + s_i \leq t_1 + \cdots + t_i$ for all $i \in \mathbb{N}$, and $w(\fs) = w(\mathfrak{t})$. This defines a partial order on tuples of positive integers. For $i \in \mathbb{N}$, we define $T_i(\fs)$ to be the tuple $(s_1+\dots+s_i,s_{i+1},\dots,s_n)$. Further, for $i \in \mathbb N$, if $T_i(\fs) \leq T_i(\mathfrak t)$, then $T_k(\fs) \leq T_k(\mathfrak t)$ for all $k \geq i$. Let $\fs=(s_1,\dots,s_n) \in \mathbb N^n$ be a tuple of positive integers.
We denote by $0 \leq i \leq n$ the largest integer such that $s_j \leq q$ for all $1 \leq j \leq i$ and define the initial tuple $\Init(\fs)$ of $\fs$ to be the tuple \[ \Init(\fs):=(s_1,\dots,s_i). \] In particular, if $s_1>q$, then $i=0$ and $\Init(\fs)$ is the empty tuple. For two different tuples $\fs$ and $\mathfrak t$, we consider the lexicographic order for initial tuples and write $\Init(\mathfrak t) \preceq \Init(\fs)$ (resp. $\Init(\mathfrak t) \prec \Init(\fs)$, $\Init(\mathfrak t) \succeq \Init(\fs)$ and $\Init(\mathfrak t) \succ \Init(\fs)$). \subsubsection{} Letting $\fs = (s_1, \dotsc, s_n) \in \mathbb{N}^{n}$ and $\fve = (\varepsilon_1, \dotsc, \varepsilon_n) \in (\mathbb{F}_q^{\times})^{n}$, we set $\fs_- := (s_2, \dotsc, s_n)$ and $\fve_- := (\varepsilon_2, \dotsc, \varepsilon_n) $. By definition, an array $\begin{pmatrix} \fve \\ \fs \end{pmatrix} $ is an array of the form $$\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dotsb & \varepsilon_n \\ s_1 & \dotsb & s_n \end{pmatrix}.$$ We call $\depth(\fs) = n$ the depth, $w(\fs) = s_1 + \dots + s_n$ the weight and $\chi(\fve) = \varepsilon_1 \dots \varepsilon_n$ the character of $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$. We say that $\begin{pmatrix} \fve \\ \fs \end{pmatrix} \leq \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ if the following conditions are satisfied: \begin{enumerate} \item $\chi(\fve) = \chi(\fe)$, \item $w(\fs) = w(\mathfrak{t})$, \item $s_1 + \dotsb + s_i \leq t_1 + \dotsb + t_i$ for all $i \in \N$. \end{enumerate} We note that this only defines a preorder on arrays. \begin{remark} \label{rmk: compa depth} We claim that if $\begin{pmatrix} \fve \\ \fs \end{pmatrix} \leq \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $, then $\depth(\fs) \geq \depth(\mathfrak{t})$. In fact, assume that $\depth(\fs) < \depth(\mathfrak{t})$. Thus \begin{align*} w(\fs) = s_1 + \cdots + s_{\depth(\fs)} \leq t_1 + \cdots + t_{\depth(\fs)} < t_1 + \cdots + t_{\depth(\mathfrak{t})} = w(\mathfrak{t}), \end{align*} which contradicts the condition $w(\fs) = w(\mathfrak{t})$. \end{remark} \subsubsection{} \label{sec: definition Sd} We recall the power sums and MZV's studied by Thakur \cite{Tha10}. For $d \in \mathbb{Z}$ and for $\fs=(s_1,\dots,s_n) \in \N^n$ we introduce \begin{equation*} S_d(\fs) := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K \end{equation*} and \begin{equation*} S_{<d}(\mathfrak s) := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d > \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K. \end{equation*} We define the multiple zeta value (MZV) by \begin{equation*} \zeta_A(\fs) := \sum \limits_{d \geq 0} S_d(\fs) = \sum \limits_{d \geq 0} \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K_{\infty}. \end{equation*} We put $\zeta_A(\emptyset) = 1$. We call $\depth(\fs) = n$ the depth and $w(\fs) = s_1 + \dots + s_n$ the weight of $\zeta_A(\fs)$. We also recall that $\ell_0 := 1$ and $\ell_d := \prod^d_{i=1}(\theta - \theta^{q^i})$ for all $d \in \mathbb{N}$. 
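As a quick illustration (not part of the arguments below, and assuming for simplicity that $q=p$ is prime, so that $\Fq=\mathbb Z/p\mathbb Z$), the polynomials $\ell_d$ can be computed directly from their definition; the following Python sketch builds $\ell_d \in \Fq[\theta]$ as a list of coefficients and checks that $\deg \ell_d = q + q^2 + \dots + q^d$.
\begin{verbatim}
# Compute l_d = prod_{i=1}^d (theta - theta^(q^i)) in F_p[theta], with q = p prime.
# A polynomial is stored as a list of coefficients mod p: c[0] + c[1]*theta + ...

def poly_mul(f, g, p):
    """Product of two polynomials over F_p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def ell(d, p):
    """Return l_d over F_p (here q = p), with l_0 = 1."""
    q = p
    f = [1]                                    # the constant polynomial 1
    for i in range(1, d + 1):
        factor = [0] * (q ** i + 1)            # theta - theta^(q^i)
        factor[1] = 1
        factor[q ** i] = (-1) % p
        f = poly_mul(f, factor, p)
    return f

if __name__ == "__main__":
    p = 3
    for d in range(5):
        deg = len(ell(d, p)) - 1
        print(d, deg, deg == sum(p ** i for i in range(1, d + 1)))
\end{verbatim}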
Letting $\mathfrak s = (s_1 , \dots, s_n) \in \mathbb{N}^n$, for $d \in \mathbb{Z}$, we define analogues of power sums by \begin{equation*} \Si_d(\mathfrak s) := \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{1}{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K, \end{equation*} and \begin{equation*} \Si_{<d}(\mathfrak s) := \sum\limits_{d>d_1> \dots > d_n\geq 0} \dfrac{1 }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K. \end{equation*} We introduce the Carlitz multiple polylogarithm (CMPL for short) by \begin{equation*} \Li(\fs) := \sum \limits_{d \geq 0} \Si_d(\fs) = \sum \limits_{d \geq 0} \ \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{1}{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K_{\infty}. \end{equation*} We set $\Li(\emptyset) = 1$. We call $\depth(\fs) = n$ the depth and $w(\fs) = s_1 + \dots + s_n$ the weight of $\Li(\fs)$. \subsubsection{} Let $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ be an array. For $d \in \mathbb{Z}$, we define \begin{equation*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K \end{equation*} and \begin{equation*} S_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d > \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K. \end{equation*} We also introduce \begin{equation*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K, \end{equation*} and \begin{equation*} \Si_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum\limits_{d>d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K. \end{equation*} One verifies easily the following formulas: \begin{align*} \Si_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} &= \varepsilon^d \Si_d(s),\\ \Si_d \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} &= \Si_{d}(s_1, \dots, s_n),\\ \Si_{<d} \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} &= \Si_{<d}(s_1, \dots, s_n),\\ \Si_{d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} &= \Si_{d} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} \Si_{<d} \begin{pmatrix} \fve_{-} \\ \fs_{-} \end{pmatrix} . \end{align*} Then we define the alternating Carlitz multiple polylogarithm (ACMPL for short) by \begin{equation*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} := \sum \limits_{d \geq 0} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \sum\limits_{d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K_{\infty}. \end{equation*} Recall that $\Li \begin{pmatrix} \emptyset \\ \emptyset \end{pmatrix} = 1$. We call $\depth(\fs) = n$ the depth, $w(\fs) = s_1 + \dots + s_n$ the weight and $\chi(\fve) = \varepsilon_1 \dots \varepsilon_n$ the character of $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix}$. 
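The combinatorial bookkeeping attached to tuples and arrays above (depth, weight, character, the initial tuple $\Init$, the maps $T_i$ and the preorder $\leq$) can be made very concrete. The following Python sketch is not part of the paper and only mirrors these definitions; for simplicity $q=p$ is prime, elements of $\Fq^\times$ are represented by integers, and the convention $s_j=0$ beyond the depth is used for partial sums.
\begin{verbatim}
# Tuples s are lists of positive integers; an array is a pair (eps, s) with
# eps a list of nonzero residues mod p representing elements of F_q^x (q = p prime).

def weight(s): return sum(s)
def depth(s):  return len(s)

def character(eps, p):
    """chi(eps) = eps_1 ... eps_n computed in F_p."""
    prod = 1
    for e in eps:
        prod = (prod * e) % p
    return prod

def init(s, q):
    """Init(s): the longest initial segment (s_1, ..., s_i) with all s_j <= q."""
    out = []
    for sj in s:
        if sj > q:
            break
        out.append(sj)
    return out

def T(i, s):
    """T_i(s) = (s_1 + ... + s_i, s_{i+1}, ..., s_n)."""
    return [sum(s[:i])] + list(s[i:])

def array_leq(eps, s, eta, t, p):
    """(eps; s) <= (eta; t): same character, same weight, dominated partial sums."""
    if character(eps, p) != character(eta, p) or weight(s) != weight(t):
        return False
    n = max(len(s), len(t))                    # beyond n both partial sums equal the weight
    return all(sum(s[:i]) <= sum(t[:i]) for i in range(1, n + 1))

if __name__ == "__main__":
    q = p = 3
    s, eps = [1, 3, 2], [1, 2, 1]
    t, eta = [4, 2], [2, 1]
    print(init(s, q), T(2, s))                 # [1, 3, 2] [4, 2]
    print(array_leq(eps, s, eta, t, p))        # True; note depth(s) >= depth(t), as in the remark on depths
\end{verbatim}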
\begin{lemma} \label{agree} For all $ \begin{pmatrix} \fve \\ \fs \end{pmatrix}$ as above such that $s_i \leq q$ for all $i$, we have \begin{equation*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} Therefore, \begin{equation*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} . \end{equation*} \end{lemma} \begin{proof} We denote by $\mathcal{J}$ the set of all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ for some $n$ such that $s_1, \dots, s_n \leq q$. The second statement follows at once from the first statement. We prove the first statement by induction on $\depth(\fs)$. For $\depth(\fs) = 1$, we let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ s \end{pmatrix} $ with $s \le q$. It follows from special cases of power sums in \cite[\S 3.3]{Tha09} that for all $d \in \mathbb{Z}$, $ S_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} = \dfrac{\varepsilon^d}{\ell^s_d} = \Si_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} . $ Suppose that the first statement holds for all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{J}$ with $\depth(\fs) = n - 1$ and for all $ d \in \mathbb{Z}$. Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ be an element of $\mathcal{J}$. Note that if $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{J}$, then $ \begin{pmatrix} \fve_{-} \\ \fs_{-} \end{pmatrix} \in \mathcal{J}$. It follows from induction hypothesis and the fact $s_1 \leq q$ that for all $d \in \mathbb{Z}$ \begin{equation*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} = S_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} S_{<d} \begin{pmatrix} \fve_{-} \\ \fs_{-} \end{pmatrix} = \Si_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} \Si_{<d} \begin{pmatrix} \fve_{-} \\ \fs_{-} \end{pmatrix} = \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} . \end{equation*} This proves the lemma. \end{proof} \subsubsection{} Let $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, $\begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}$ be two arrays. We recall $s_i = 0$ and $\varepsilon_i = 1$ for all $i > \depth(\fs)$; $t_i = 0$ and $\epsilon_i = 1$ for all $i > \depth(\mathfrak{t})$. We define the following operation \begin{equation*} \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} := \begin{pmatrix} \fve \fe \\ \fs + \mathfrak{t} \end{pmatrix}, \end{equation*} where $\fve \fe$ and $\fs + \mathfrak{t}$ are defined by component multiplication and component addition, respectively. We now consider some formulas related to analogues of power sums. It is easily seen that \begin{equation}\label{eq: redsum} \Si_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \Si_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = \Si_d \begin{pmatrix} \varepsilon \epsilon \\ s + t \end{pmatrix} , \end{equation} hence, for $\mathfrak{t} = (t_1, \dots, t_n)$, \begin{equation}\label{redsum} \Si_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \Si_d \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \Si_d \begin{pmatrix} \varepsilon \epsilon_1 & \fe_{-} \\ s + t_1 & \mathfrak{t}_{-} \end{pmatrix}. \end{equation} More generally, we deduce the following proposition which will be used frequently later. 
\begin{proposition} \label{polysums} Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ be two arrays. Then we have the following: \begin{enumerate} \item There exist $f_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} $ with $ \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_d \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f_i \Si_d \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} \item There exist $f'_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} $ with $ \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}'_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} \Si_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f'_i \Si_{<d} \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} \item There exist $f''_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} $ with $ \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}''_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f''_i \Si_d \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} \end{enumerate} \end{proposition} \begin{proof} The proof follows the same line as in \cite[Proposition 2.1]{ND21}. We write down the proof for the reader's convenience. We first prove Part 1 and Part 2 by induction on $\depth(\fs) + \depth(\mathfrak{t})$. For $\depth(\fs) + \depth(\mathfrak{t}) = 2$, we have $\depth(\fs) = \depth(\mathfrak{t}) = 1$, hence we may assume that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ s \end{pmatrix}$ and $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \begin{pmatrix} \epsilon \\ t \end{pmatrix}$, where $s, t \in \mathbb{N}$ and $\varepsilon, \epsilon \in \mathbb{F}_q^{\times}$. For Part 1, it follows from \eqref{eq: redsum} that \begin{align*} \Si_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \Si_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = \Si_d \begin{pmatrix} \varepsilon \epsilon \\ s + t \end{pmatrix}. \end{align*} We have $\begin{pmatrix} \varepsilon \epsilon \\ s + t \end{pmatrix} = \begin{pmatrix} \varepsilon \\ s \end{pmatrix} + \begin{pmatrix} \epsilon \\ t \end{pmatrix}$ and $\depth(s + t) = 1 < \depth(s) + \depth(t) = 2$, which shows that Part 1 holds in this case. 
For Part 2, it follows from \eqref{eq: redsum} that \begin{align*} \Si_{<d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \Si_{<d} \begin{pmatrix} \epsilon \\ t \end{pmatrix} =& \ \left(\sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon \\ s \end{pmatrix}\right)\left(\sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon \\ t \end{pmatrix}\right)\\ =& \ \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon \\ s \end{pmatrix}\Si_{<m} \begin{pmatrix} \epsilon \\ t \end{pmatrix} + \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon \\ t \end{pmatrix}\Si_{<n} \begin{pmatrix} \varepsilon \\ s \end{pmatrix}\\ &+ \sum_{m = n < d}\Si_{m} \begin{pmatrix} \varepsilon \\ s \end{pmatrix}\Si_{n} \begin{pmatrix} \epsilon \\ t \end{pmatrix}\\ =& \ \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon & \epsilon \\ s & t \end{pmatrix} + \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon & \varepsilon\\ t & s\end{pmatrix} + \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon\epsilon \\ s + t \end{pmatrix}\\ =& \ \Si_{<d} \begin{pmatrix} \varepsilon & \epsilon \\ s & t \end{pmatrix} + \Si_{<d} \begin{pmatrix} \epsilon & \varepsilon\\ t & s\end{pmatrix} + \Si_{<d} \begin{pmatrix} \varepsilon\epsilon \\ s + t \end{pmatrix}. \end{align*} One verifies that $\begin{pmatrix} \varepsilon & \epsilon \\ s & t \end{pmatrix} \leq \begin{pmatrix} \varepsilon \\ s \end{pmatrix} + \begin{pmatrix} \epsilon \\ t \end{pmatrix}, \begin{pmatrix} \epsilon & \varepsilon\\ t & s\end{pmatrix} \leq \begin{pmatrix} \varepsilon \\ s \end{pmatrix} + \begin{pmatrix} \epsilon \\ t \end{pmatrix}$ and $\begin{pmatrix} \varepsilon \epsilon \\ s + t \end{pmatrix} = \begin{pmatrix} \varepsilon \\ s \end{pmatrix} + \begin{pmatrix} \epsilon \\ t \end{pmatrix}$. Moreover, we have $\depth(s, t) = \depth(t, s) = \depth(s) + \depth(t) = 2$ and $\depth(s + t) = 1 < \depth(s) + \depth(t) = 2$, which shows that Part 2 holds in this case. We assume that Part 1 and Part 2 hold when $\depth(\fs) + \depth(\mathfrak{t}) < d$. We need to show that Part 1 and Part 2 hold when $\depth(\fs) + \depth(\mathfrak{t}) = d$. For Part 1, we deduce from \eqref{eq: redsum} that \begin{align} \label{eq: Part 1 Si} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_d \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} &= \Si_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_d \begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}\Si_{<d} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}\\ \notag &= \Si_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_d \begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}\Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}\\ \notag &= \Si_d \begin{pmatrix} \varepsilon_1\epsilon_1 \\ s_1 + t_1 \end{pmatrix}\Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}. 
\end{align} Since $\depth(\fs_- ) + \depth(\mathfrak{t}_-) = \depth(\fs) + \depth(\mathfrak{t}) - 2 < \depth(\fs) + \depth(\mathfrak{t}) = d$, it follows from the induction hypothesis that there exist $f'_j \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm'_j \\ \mathfrak{u}'_j \end{pmatrix} $ with $ \begin{pmatrix} \fm'_j \\ \mathfrak{u}'_j \end{pmatrix} \leq \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} + \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix} $ and $\depth(\mathfrak{u}'_j) \leq \depth(\fs_-) + \depth(\mathfrak{t}_-)$ for all $j$ such that \begin{equation*} \Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix} = \sum \limits_j f'_j \Si_{<d} \begin{pmatrix} \fm'_j \\ \mathfrak{u}'_j \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} Thus we deduce from \eqref{eq: Part 1 Si} that \begin{align*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_d \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_j f'_j\Si_d \begin{pmatrix} \varepsilon_1\epsilon_1 \\ s_1 + t_1 \end{pmatrix} \Si_{<d} \begin{pmatrix} \fm'_j \\ \mathfrak{u}'_j \end{pmatrix} = \sum \limits_j f'_j \Si_{d} \begin{pmatrix} \varepsilon_1\epsilon_1 & \fm'_j \\ s_1 + t_1 & \mathfrak{u}'_j \end{pmatrix}. \end{align*} One verifies that $\begin{pmatrix} \varepsilon_1\epsilon_1 & \fm'_j \\ s_1 + t_1 & \mathfrak{u}'_j \end{pmatrix} \leq \begin{pmatrix} \varepsilon_1 & \fve_- \\ s_1 & \fs_- \end{pmatrix} + \begin{pmatrix} \epsilon_1 & \fe_- \\ t_1 & \mathfrak{t}_- \end{pmatrix} = \begin{pmatrix} \fve \\ \fs\end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}$, and $\depth(s_1 + t_1 , \mathfrak{u}'_j) = 1 + \depth(\mathfrak{u}'_j) \leq 1 + \depth(\fs_-) + \depth(\mathfrak{t}_-) < \depth(\fs) + \depth(\mathfrak{t})$, which proves Part 1. For Part 2, we have \begin{align} \label{eq: Part 2 Si} \Si_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} & \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}\\ \notag =& \ \left(\sum_{m < d}\Si_{m} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\right)\left(\sum_{n < d}\Si_{n} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}\right)\\ \notag =& \ \sum_{m < d}\Si_{m} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\Si_{<m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} + \sum_{n < d}\Si_{n} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}\Si_{<n} \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \sum_{m = n < d}\Si_{m} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\Si_{n} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}\\ \notag =& \ \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_{<m} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix}\Si_{<m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} + \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}\Si_{<n} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}\Si_{<n} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\\ \notag &+ \sum_{m < d}\Si_{m} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\Si_{m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}. 
\end{align} Since $\depth(\fs_- ) + \depth(\mathfrak{t}) = \depth(\fs) + \depth(\mathfrak{t}_-) = \depth(\fs) + \depth(\mathfrak{t}) - 1 < \depth(\fs) + \depth(\mathfrak{t}) = d$, it follows from the induction hypothesis that there exist $g_j, g_j' \in \mathbb{F}_q$ and \begin{itemize} \item arrays $ \begin{pmatrix} \boldsymbol{\eta}_j \\ \mathfrak{v}_j \end{pmatrix} $ with $ \begin{pmatrix} \boldsymbol{\eta}_j \\ \mathfrak{v}_j \end{pmatrix}\leq \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{v}_j) \leq \depth(\fs_-) + \depth(\mathfrak{t})$ for all $j$, \item arrays $ \begin{pmatrix} \boldsymbol{\eta}'_j \\ \mathfrak{v}'_j \end{pmatrix} $ with $ \begin{pmatrix} \boldsymbol{\eta}'_j \\ \mathfrak{v}'_j \end{pmatrix} \leq \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix} + \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ and $\depth(\mathfrak{v}'_j) \leq \depth(\mathfrak{t}_-) + \depth(\fs)$ for all $j$, \end{itemize} such that \begin{align*} \Si_{<m} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix}\Si_{<m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} &= \sum \limits_j g_j \Si_{<m} \begin{pmatrix} \boldsymbol{\eta}_j \\ \mathfrak{v}_j \end{pmatrix} \quad \text{for all } m \in \mathbb{Z},\\ \Si_{<n} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}\Si_{<n} \begin{pmatrix} \fve \\ \fs \end{pmatrix} &= \sum \limits_j g_j' \Si_{<n} \begin{pmatrix} \boldsymbol{\eta}'_j \\ \mathfrak{v}'_j \end{pmatrix} \quad \text{for all } n \in \mathbb{Z}. \end{align*} Moreover, from Part 1, there exist $f_j \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix} $ with $ \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}_j) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $j$ such that \begin{equation*} \Si_m \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_m \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_j f_j \Si_m \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix} \quad \text{for all } m \in \mathbb{Z}. 
\end{equation*} We deduce from \eqref{eq: Part 2 Si} that \begin{align*} \Si_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} & \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}\\ =& \ \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_{<m} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix}\Si_{<m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} + \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}\Si_{<n} \begin{pmatrix} \fe_- \\ \mathfrak{t}_- \end{pmatrix}\Si_{<n} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\\ \notag &+ \sum_{m < d}\Si_{m} \begin{pmatrix} \fve \\ \fs \end{pmatrix}\Si_{m} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} \\ =& \ \sum \limits_j g_j \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_{<m} \begin{pmatrix} \boldsymbol{\eta}_j \\ \mathfrak{v}_j \end{pmatrix} + \sum \limits_j g_j' \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}\Si_{<n} \begin{pmatrix} \boldsymbol{\eta}'_j \\ \mathfrak{v}'_j \end{pmatrix}\\ \notag &+ \sum \limits_j f_j \sum_{m < d}\Si_m \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix} \\ =& \ \sum \limits_j g_j \sum_{m < d}\Si_{m} \begin{pmatrix} \varepsilon_1 & \boldsymbol{\eta}_j\\ s_1 & \mathfrak{v}_j \end{pmatrix} + \sum \limits_j g_j' \sum_{n < d}\Si_{n} \begin{pmatrix} \epsilon_1 & \boldsymbol{\eta}'_j\\ t_1 & \mathfrak{v}'_j \end{pmatrix} + \sum \limits_j f_j \Si_{<d} \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix} \\ =& \ \sum \limits_j g_j \Si_{<d} \begin{pmatrix} \varepsilon_1 & \boldsymbol{\eta}_j\\ s_1 & \mathfrak{v}_j \end{pmatrix} + \sum \limits_j g_j' \Si_{<d} \begin{pmatrix} \epsilon_1 & \boldsymbol{\eta}'_j\\ t_1 & \mathfrak{v}'_j \end{pmatrix} + \sum \limits_j f_j \Si_{<d} \begin{pmatrix} \fm_j \\ \mathfrak{u}_j \end{pmatrix}. \end{align*} One verifies that \begin{align*} \begin{pmatrix} \varepsilon_1 & \boldsymbol{\eta}_j\\ s_1 & \mathfrak{v}_j \end{pmatrix} &\leq \bigg( \begin{pmatrix} \varepsilon_1 \\ s_1\end{pmatrix} , \begin{pmatrix} \fve_-\\ \fs_- \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} \bigg) \leq \begin{pmatrix} \fve\\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix},\\ \begin{pmatrix} \epsilon_1 & \boldsymbol{\eta}'_j\\ t_1 & \mathfrak{v}'_j \end{pmatrix} & \leq \bigg(\begin{pmatrix} \epsilon_1\\ t_1\end{pmatrix}, \begin{pmatrix} \fe_-\\ \mathfrak{t}_- \end{pmatrix} + \begin{pmatrix} \fve\\ \fs \end{pmatrix}\bigg) \leq \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} + \begin{pmatrix} \fve\\ \fs \end{pmatrix}, \end{align*} and \begin{align*} \depth(s_1, \mathfrak{v}_j) &= 1 + \depth(\mathfrak{v}_j) \leq 1 + \depth(\fs_-) + \depth(\mathfrak{t}) = \depth(\fs) + \depth(\mathfrak{t}),\\ \depth(t_1, \mathfrak{v}'_j)&= 1 + \depth(\mathfrak{v}'_j) \leq 1 + \depth(\mathfrak{t}_-) + \depth(\fs) = \depth(\mathfrak{t}) + \depth(\fs), \end{align*} which proves Part 2. The proof of Part 1 and Part 2 is finished. For Part 3, we have \begin{align*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \Si_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix}\Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix}. 
\end{align*} From Part 2, there exist $f'_k \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm'_k \\ \mathfrak{u}'_k \end{pmatrix} $ with $ \begin{pmatrix} \fm'_k \\ \mathfrak{u}'_k \end{pmatrix} \leq \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}'_k) \leq \depth(\fs_-) + \depth(\mathfrak{t})$ for all $k$ such that \begin{equation*} \Si_{<d} \begin{pmatrix} \fve_- \\ \fs_- \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_k f'_k \Si_{<d} \begin{pmatrix} \fm'_k \\ \mathfrak{u}'_k \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} We deduce that \begin{align*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} &= \sum \limits_k f'_k \Si_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} \Si_{<d} \begin{pmatrix} \fm'_k \\ \mathfrak{u}'_k \end{pmatrix}= \sum \limits_k f'_k \Si_d \begin{pmatrix} \varepsilon_1 & \fm'_k \\ s_1 & \mathfrak{u}'_k \end{pmatrix}. \end{align*} One verifies that \begin{align*} \begin{pmatrix} \varepsilon_1 & \fm'_k\\ s_1 & \mathfrak{u}'_k \end{pmatrix} \leq \bigg( \begin{pmatrix} \varepsilon_1 \\ s_1\end{pmatrix} , \begin{pmatrix} \fve_-\\ \fs_- \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} \bigg) \leq \begin{pmatrix} \fve\\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} \end{align*} and $\depth(s_1, \mathfrak{u}'_k) = 1 + \depth(\mathfrak{u}'_k) \leq 1 + \depth(\fs_-) + \depth(\mathfrak{t}) = \depth(\fs) + \depth(\mathfrak{t})$, which proves Part 3. \end{proof} We denote by $\mathcal{AL}$ (resp. $\mathcal{L}$) the $K$-vector space generated by the ACMPL's (resp. by the CMPL's) and by $\mathcal{AL}_w$ (resp. $\mathcal{L}_w$) the $K$-vector space generated by the ACMPL's of weight $w$ (resp. by the CMPL's of weight $w$). It follows from Proposition~\ref{polysums} that $\mathcal{AL}$ is a $K$-algebra. Considering only arrays with trivial character, we deduce from Proposition~\ref{polysums} that $\mathcal{L}$ is also a $K$-algebra. \subsection{Operators $\mathcal B^*$, $\mathcal C$ and $\mathcal{BC}$} \label{sec:Todd} In this section we extend operators $\mathcal B^*$ and $\mathcal C$ of Todd \cite{Tod18} and the operator $\mathcal{BC}$ of Ngo Dac \cite{ND21} to the case of ACMPL's. \begin{definition} A binary relation is a $K$-linear combination of the form \begin{equation*} \sum \limits_i a_i \Si_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} =0 \quad \text{for all } d \in \mathbb{Z}, \end{equation*} where $a_i,b_i \in K$ and $ \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} , \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays of the same weight. A binary relation is called a fixed relation if $b_i = 0$ for all $i$. \end{definition} We denote by $\mathfrak{BR}_{w}$ the set of all binary relations of weight $w$. One verifies at once that $\mathfrak{BR}_{w}$ is a $K$-vector space. It follows from the fundamental relation in \cite[\S 3.4.6]{Tha09} and Lemma \ref{agree} that we have an important example of binary relations, namely \begin{equation*} R_{\varepsilon} \colon \quad \Si_d \begin{pmatrix} \varepsilon\\ q \end{pmatrix} + \varepsilon^{-1}D_1 \Si_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} =0, \end{equation*} where $D_1 = \theta^q - \theta \in A$.
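The manipulations of $\Si_d$ and $\Si_{<d}$ carried out in the proof of Proposition \ref{polysums}, as well as in the operators introduced below, ultimately rest on splitting a product of two partial sums along the two strict triangles $\{m<n\}$, $\{m>n\}$ and the diagonal $\{m=n\}$. The following Python sketch (a numerical sanity check only, with arbitrary rational numbers standing in for the values $\Si_m$) verifies this splitting.
\begin{verbatim}
# Check: (sum_{m<D} x_m)(sum_{n<D} y_n)
#      = sum_{m<D} x_m*(sum_{n<m} y_n) + sum_{n<D} y_n*(sum_{m<n} x_m) + sum_{m<D} x_m*y_m
from fractions import Fraction
import random

def check_splitting(D, trials=5, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = [Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(D)]
        y = [Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(D)]
        lhs = sum(x) * sum(y)
        rhs = (sum(x[m] * sum(y[:m]) for m in range(D))      # terms with n < m
               + sum(y[n] * sum(x[:n]) for n in range(D))    # terms with m < n
               + sum(x[m] * y[m] for m in range(D)))         # the diagonal m = n
        assert lhs == rhs
    return True

if __name__ == "__main__":
    print(check_splitting(D=7))
\end{verbatim}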
For later definitions, let $R \in \mathfrak{BR}_w$ be a binary relation of the form \begin{equation} \label{eq: Rd0} R(d) \colon \quad \sum \limits_i a_i \Si_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} =0, \end{equation} where $a_i,b_i \in K$ and $ \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} , \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays of the same weight. We now define some operators on $K$-vector spaces of binary relations. \subsubsection{Operators $\mathcal B^*$} Let $ \begin{pmatrix} \sigma \\ v \end{pmatrix} $ be an array. We define an operator \begin{equation*} \mathcal B^*_{\sigma,v} \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+v} \end{equation*} as follows: for each $R \in \mathfrak{BR}_{w}$ given as in \eqref{eq: Rd0}, the image $\mathcal B^*_{\sigma,v}(R) = \Si_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \sum_{j < d} R(j)$ is a fixed relation of the form \begin{align*} 0 &= \Si_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \left(\sum \limits_ia_i \Si_{<d} \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_{<d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \right) \\ &= \sum \limits_i a_i \Si_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \Si_{<d} \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \Si_{<d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \Si_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \\ &= \sum \limits_i a_i \Si_d \begin{pmatrix} \sigma & \fve_i \\ v& \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma & \fe_i \\ v& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma \epsilon_{i1} & \fe_{i-} \\ v + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} . \end{align*} The last equality follows from \eqref{redsum}. Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \sigma_1 & \dots & \sigma_n \\ v_1 & \dots & v_n \end{pmatrix} $ be an array. We define an operator $\mathcal{B}^*_{\Sigma,V}(R) $ by \begin{equation*} \mathcal B^*_{\Sigma,V}(R) := \mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_n,v_n}(R). \end{equation*} \begin{lemma} \label{polybesao} Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \sigma_1 & \dots & \sigma_n \\ v_1 & \dots & v_n \end{pmatrix} $ be an array. Then $\mathcal B^*_{\Sigma,V}(R)$ is of the form \begin{align*} \sum \limits_i a_i \Si_d \begin{pmatrix} \Sigma & \fve_i \\ V& \fs_i \end{pmatrix} & + \sum \limits_i b_i \Si_d \begin{pmatrix} \Sigma & \fe_i \\ V& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma_1 & \dots & \sigma_{n-1} & \sigma_n \epsilon_{i1} &\fe_{i-} \\ v_1 & \dots & v_{n-1} & v_n+ t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0 . \end{align*} \end{lemma} \begin{proof} From the definition and \eqref{redsum}, we have $\mathcal{B}^*_{\sigma_n,v_n}(R)$ is of the form \begin{align*} \sum \limits_i a_i \Si_d \begin{pmatrix} \sigma_n & \fve_i \\ v_n& \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma_n & \fe_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\fe_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix} = 0. 
\end{align*} Apply the operator $\mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_{n - 1},v_{n - 1}}$ to $\mathcal{B}^*_{\sigma_n,v_n}(R)$, the result then follows from the definition. \end{proof} \subsubsection{Operators $\mathcal C$} Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ be an array of weight $v$. We define an operator \begin{equation*} \mathcal C_{\Sigma,V}(R) \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+v} \end{equation*} as follows: for each $R \in \mathfrak{BR}_{w}$ given as in \eqref{eq: Rd0}, the image $\mathcal C_{\Sigma,V}(R) = R(d) \Si_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ is a binary relation of the form \begin{align*} 0 &= \left( \sum \limits_i a_i \Si_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \right) \Si_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i a_i \Si_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \Si_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i a_i \Si_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \Si_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \Si_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i c_i \Si_d \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} + \sum \limits_i c'_i \Si_{d+1} \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} . \end{align*} The last equality follows from Proposition \ref{polysums}. In particular, the following proposition gives the form of $\mathcal C_{\Sigma,V}(R_{\varepsilon})$. \begin{proposition} \label{polycer1} Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix}$ be an array with $V = (v_1,V_{-})$ and $\Sigma = (\sigma_1, \Sigma_{-})$. Then $\mathcal C_{\Sigma,V}(R_{\varepsilon})$ is of the form \begin{equation*} \Si_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon& \Sigma \\ q & V \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \varepsilon& \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation*} where $b_i \in A$ are divisible by $D_1$ and $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ for all $i$. \end{proposition} \begin{proof} We see that $\mathcal C_{\Sigma,V}(R_{\varepsilon})$ is of the form \begin{equation*} \Si_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \Si_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \Si_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \varepsilon^{-1} D_1 \Si_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} \Si_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} = 0. 
\end{equation*} It follows from \eqref{redsum} and Proposition \ref{polysums} that \begin{align*} \Si_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \Si_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \Si_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \Si_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon& \Sigma \\ q & V \end{pmatrix}, \\ \varepsilon^{-1}D_1 \Si_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} \Si_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \varepsilon& \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} , \end{align*} where $b_i \in A$ are divisible by $D_1$ and $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ for all $i$. This proves the proposition. \end{proof} \subsubsection{Operators $\mathcal{BC}$} Let $\varepsilon \in \mathbb{F}_q^{\times}$. We define an operator \begin{equation*} \mathcal{BC}_{\varepsilon,q} \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+q} \end{equation*} as follows: for each $R \in \mathfrak{BR}_{w}$ given as in \eqref{eq: Rd0}, the image $\mathcal{BC}_{\varepsilon,q}(R)$ is a binary relation given by \begin{align*} \mathcal{BC}_{\varepsilon,q}(R) = \mathcal B^*_{\varepsilon,q}(R) - \sum\limits_i b_i \mathcal C_{\fe_i,\mathfrak{t}_i} (R_{\varepsilon}). \end{align*} Let us clarify the definition of $\mathcal{BC}_{\varepsilon,q}$. We know that $\mathcal B^*_{\varepsilon,q}(R)$ is of the form \begin{equation*} \sum \limits_i a_i \Si_d \begin{pmatrix} \varepsilon& \fve_i \\ q& \fs_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \varepsilon& \fe_i \\ q& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \Si_d \begin{pmatrix} \varepsilon\epsilon_{i1} & \fe_{i-} \\ q + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0. \end{equation*} Moreover, $\mathcal C_{\fe_i,\mathfrak{t}_i} (R_{\varepsilon})$ is of the form \begin{equation*} \Si_d \begin{pmatrix} \varepsilon& \fe_i \\ q& \mathfrak{t}_i \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon\epsilon_{i1} & \fe_{i-} \\ q + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} + \varepsilon^{-1}D_1 \Si_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} \Si_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} \Si_{<d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{equation*} Combining with Proposition \ref{polysums}, Part 2, we have that $\mathcal{BC}_{\varepsilon,q}(R)$ is of the form \begin{equation*} \sum \limits_i a_i \Si_d \begin{pmatrix} \varepsilon& \fve_i \\ q& \fs_i \end{pmatrix} + \sum \limits_{i,j} b_{ij} \Si_{d+1} \begin{pmatrix} \varepsilon& \fe_{ij} \\ 1& \mathfrak{t}_{ij} \end{pmatrix} =0, \end{equation*} where $b_{ij} \in K$ and $ \begin{pmatrix} \fe_{ij} \\ \mathfrak{t}_{ij} \end{pmatrix} $ are arrays satisfying $ \begin{pmatrix} \fe_{ij} \\ \mathfrak{t}_{ij} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q-1 \end{pmatrix} + \begin{pmatrix} \fe_{i} \\ \mathfrak{t}_{i} \end{pmatrix} $ for all $j$. 
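Before moving on, we record the simplest instance of these operators as a sanity check for the notation. Applying $\mathcal B^*_{\sigma,v}$ to the binary relation $R_{\varepsilon}$, that is, substituting $a_1 = 1$, $b_1 = \varepsilon^{-1}D_1$ and the arrays $\begin{pmatrix} \varepsilon \\ q \end{pmatrix}$ and $\begin{pmatrix} \varepsilon & 1 \\ 1 & q-1 \end{pmatrix}$ into the formula for $\mathcal B^*_{\sigma,v}(R)$ displayed above, yields the fixed relation
\begin{equation*}
\Si_d \begin{pmatrix} \sigma & \varepsilon \\ v & q \end{pmatrix} + \varepsilon^{-1}D_1 \Si_d \begin{pmatrix} \sigma & \varepsilon & 1 \\ v & 1 & q-1 \end{pmatrix} + \varepsilon^{-1}D_1 \Si_d \begin{pmatrix} \sigma\varepsilon & 1 \\ v+1 & q-1 \end{pmatrix} = 0
\end{equation*}
of weight $q+v$.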
\subsection{A weak version of Brown's theorem for ACMPL's} \label{sec:weak Brown} \subsubsection{Preparatory results} \begin{proposition}\label{polydecom} 1) Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ be an array such that $\Init(\fs) = (s_1, \dots, s_{k-1})$ for some $1 \leq k \leq n$, and let $\varepsilon$ be an element in $\mathbb{F}_q^{\times}$. Then $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be decomposed as follows: \begin{equation*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{ - \Li \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} }_\text{type 1} + \underbrace{\sum\limits_i b_i\Li \begin{pmatrix} \fe_i' \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\Li \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation*} where $ b_i, c_i \in A$ are divisible by $D_1$ and, for all $i$, the following properties are satisfied: \begin{itemize} \item For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side, \begin{equation*} \depth(\mathfrak{t}) \geq \depth(\fs) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\fs). \end{equation*} \item For the array $ \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} $ of type $1$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have \begin{align*} \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix}. \end{align*} Moreover, for all $k \leq \ell \leq n$, \begin{equation*} s'_{1} + \dots + s'_\ell < s_1 + \dots + s_\ell. \end{equation*} \item For the array $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} $ of type $2$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, for all $k \leq \ell \leq n$, \begin{equation*} t'_{1} + \dots + t'_\ell < s_1 + \dots + s_\ell. \end{equation*} \item For the array $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} $ of type $3$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have $\Init(\fs) \prec\Init(\mathfrak{u})$. \end{itemize} \noindent 2) Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_k \\ s_1 & \dots & s_k \end{pmatrix} $ be an array such that $\Init(\fs) = \fs$ and $s_k = q$. Then $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be decomposed as follows: \begin{equation*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{\sum\limits_i b_i\Li \begin{pmatrix} \fe'_i \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\Li \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation*} where $ b_i, c_i \in A$ are divisible by $D_1$ and, for all $i$, the following properties are satisfied: \begin{itemize} \item For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side, \begin{equation*} \depth(\mathfrak{t}) \geq \depth(\fs) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\fs). \end{equation*} \item For the array $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} $ of type $2$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, \begin{equation*} t'_{1} + \dots + t'_k < s_1 + \dots + s_k.
\end{equation*} \item For the array $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} $ of type $3$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have $\Init(\fs) \prec\Init(\mathfrak{u})$. \end{itemize} \end{proposition} \begin{proof} For Part 1, since $\Init(\fs) = (s_1, \dots, s_{k-1})$, we get $s_k > q$. Set $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon^{-1} \varepsilon_{k} & \varepsilon_{k+1} &\dots & \varepsilon_n \\ s_k - q & s_{k+1} &\dots & s_n \end{pmatrix} $. By Proposition \ref{polycer1}, $\mathcal{C}_{\Sigma,V}(R_{\varepsilon})$ is of the form \begin{equation} \label{polyr1} \Si_d \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} + \Si_d \begin{pmatrix} \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} \varepsilon & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation} where $b_i \in A$ divisible by $D_1$ and $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying for all $i$, \begin{equation*} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon^{-1} \varepsilon_{k} & \varepsilon_{k+1} &\dots & \varepsilon_n \\ s_k - 1 & s_{k+1} &\dots & s_n \end{pmatrix} . \end{equation*} For $m \in \mathbb{N}$, we denote by $q^{\{m\}}$ the sequence of length $m$ with all terms equal to $q$. We agree by convention that $q^{\{0\}}$ is the empty sequence. Setting $s_0 = 0$, we may assume that there exists a maximal index $j$ with $0 \leq j \leq k-1$ such that $s_j < q$, hence $\Init(\fs) = (s_1, \dots, s_j, q^{\{k-j-1\}})$. Then the operator $ \mathcal{BC}_{\varepsilon_{j+1},q} \circ \dots \circ \mathcal{BC}_{\varepsilon_{k-1},q}$ applied to the relation \eqref{polyr1} gives \begin{align*} \Si_d &\begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon_{k} & \dots & \varepsilon_n \\ q & \dots & q & s_{k} & \dots & s_n \end{pmatrix} \\ & + \Si_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \epsilon_{k+1} & \dots & \epsilon_n \\ q & \dots & q & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align*} where $b_{i_1 \dots i_{k-j}} \in A$ are divisible by $D_1$ and for $2 \leq l \leq k-j$, $ \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} $ are arrays satisfying \begin{equation*} \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} = \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ q & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} . \end{equation*} Thus \begin{equation} \label{eq: part 1 compa} \begin{pmatrix} \fe_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1}& \dots & \varepsilon_n \\ q& \dots & q & q & s_{k} - 1 & s_{k+1} &\dots & s_n \end{pmatrix} . 
\end{equation} Letting $ \begin{pmatrix} \Sigma' \\ V' \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} &\dots & \varepsilon_j \\ s_1 &\dots & s_j \end{pmatrix} $, by Lemma \ref{polybesao}, we continue to apply $\mathcal{B}^*_{\Sigma',V'}$ to the above relation and get \begin{align*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} &+ \Si_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. \end{align*} Hence \begin{align*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} &+ \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. \end{align*} In other words, we have \begin{align} \label{eq: part 1 Li} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} =& - \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\\notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} The first term, the second term, and the third term on the right hand side of \eqref{eq: part 1 Li} are referred to as type 1, type 2, and type 3 respectively. We now verify the conditions of arrays of type 1, type 2, and type 3 with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $. We first note that $\fs = (s_1, \dots, s_j, q^{\{k-j-1\}}, s_k, \dotsc, s_n)$. \textit{Type 1: } For $\begin{pmatrix} \fve' \\ \fs' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix}$, we have \begin{align*} \depth(\fs') = (k - 1) + 1 + (n - k + 1) = n + 1 > \depth(\fs). 
\end{align*} For $\ell = k$, since $s_k > q$, we have \begin{align*} s'_1 + \cdots + s'_k = s_1 + \cdots + s_{k - 1} + q < s_1 + \cdots + s_{k}. \end{align*} For $k < \ell \leq n$, one verifies that \begin{align*} s'_1 + \cdots + s'_{\ell} = s_1 + \cdots + s_{\ell - 1} < s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\fs') = w(\fs)$, one deduces that $T_k(\fs') \leq T_k(\fs)$. \textit{Type 2:} For $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 1 compa} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{t}') &= j + 1 + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}})\\ &\geq j + 1 + (k - j - 2) + 1 + (n - k + 1) = n + 1 > \depth(\fs). \end{align*} For $\ell = k$, since $s_k > q$, it follows from \eqref{eq: part 1 compa} that \begin{align*} t'_1 + \cdots + t'_k \leq s_1 + \cdots + s_j + 1 + q(k - j - 1) < s_1 + \cdots + s_{k}. \end{align*} For $k < \ell \leq n$, it follows from \eqref{eq: part 1 compa} that \begin{align*} t'_1 + \cdots + t'_{\ell} \leq s_1 + \cdots + s_{\ell - 1} < s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\mathfrak{t}') = w(\fs)$, one deduces that $T_k(\mathfrak{t}') \leq T_k(\fs)$. \textit{Type 3:} For $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 1 compa} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{u}) &= j + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + (k - j - 2) + 1 + (n - k + 1) = \depth(\fs). \end{align*} For $k \leq \ell \leq n$, it follows from \eqref{eq: part 1 compa} that \begin{align*} u_1 + \cdots + u_{\ell} \leq s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\mathfrak{u}) = w(\fs)$, one deduces that $T_k(\mathfrak{u}) \leq T_k(\fs)$. Moreover, we have $\Init(\fs) \prec (s_1, \dotsc, s_{j - 1}, s_j + 1) \preceq \Init(\mathfrak{u})$. We have proved Part 1. For Part $2$, we recall the relation $R_{\varepsilon_k}$ given by \begin{equation*} R_{\varepsilon_k} \colon \quad \Si_d \begin{pmatrix} \varepsilon_k\\ q \end{pmatrix} + \varepsilon_k^{-1}D_1 \Si_{d+1} \begin{pmatrix} \varepsilon_k& 1 \\ 1 & q-1 \end{pmatrix} =0. \end{equation*} Setting $s_0 = 0$, we may assume that there exists a maximal index $j$ with $0 \leq j \leq k-1$ such that $s_j < q$. We note that $s_k = q$, hence $\Init(\fs) = \fs = (s_1, \dots, s_j, q^{\{k-j-1\}}, q)$. 
Then $ \mathcal{BC}_{\varepsilon_{j+1},q} \circ \dots \circ \mathcal{BC}_{\varepsilon_{k-1},q}(R_{\varepsilon_k})$ is of the form \begin{align} \label{eq: Part 2 Li} \Si_d &\begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k} \\ q & \dots & q \end{pmatrix} + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align} where $b_{i_1 \dots i_{k-j}} \in A$ are divisible by $D_1$ and for $2 \leq l \leq k-j$, $ \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} $ are arrays satisfying \begin{equation*} \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} = \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ q & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} . \end{equation*} Thus \begin{equation} \label{eq: part 2 compa} \begin{pmatrix} \fe_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_{k} & 1 \\ q& \dots & q & q - 1 \end{pmatrix} . \end{equation} Letting $ \begin{pmatrix} \Sigma' \\ V' \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} &\dots & \varepsilon_j \\ s_1 &\dots & s_j \end{pmatrix} $, by Lemma \ref{polybesao}, we continue to apply $\mathcal{B}^*_{\Sigma',V'}$ to \eqref{eq: Part 2 Li} and get \begin{align*} \Si_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Si_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align*} hence \begin{align*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. \end{align*} In other words, we have \begin{align} \label{eq: part 2 Li 2} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} =& - \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} The first term and the second term on the right hand side of \eqref{eq: part 2 Li 2} are referred to as type 2 and type 3 respectively. We now verify the conditions of arrays of type 2 and type 3 with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $. 
\textit{Type 2:} For $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 2 compa} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{t}') = j + 1 + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + 1 + (k - j)= k + 1 > \depth(\fs). \end{align*} It follows from \eqref{eq: part 2 compa} that \begin{align*} t'_1 + \cdots + t'_k \leq s_1 + \cdots + s_j + 1 + q(k - j - 1) < s_1 + \cdots + s_{k}. \end{align*} Since $w(\mathfrak{t}') = w(\fs)$, one deduces that $T_k(\mathfrak{t}') \leq T_k(\fs)$. \textit{Type 3:} For $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 2 compa} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{u}) = j + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + (k - j) = \depth(\fs). \end{align*} It follows from \eqref{eq: part 2 compa} that \begin{align*} u_1 + \cdots + u_k \leq s_1 + \cdots + s_{j - 1} + (s_{j} + 1) + q(k - j - 1) + (q - 1) = s_1 + \cdots + s_k. \end{align*} Since $w(\mathfrak{u}) = w(\fs)$, one deduces that $T_k(\mathfrak{u}) \leq T_k(\fs)$. Moreover, we have $\Init(\fs) \prec (s_1, \dotsc, s_{j - 1}, s_j + 1) \preceq \Init(\mathfrak{u})$. We finish the proof. \end{proof} We recall the following definition of \cite{ND21} (see \cite[Definition 3.1]{ND21}): \begin{definition} Let $k \in \mathbb N$ and $\fs$ be a tuple of positive integers. We say that $\fs$ is $k$-admissible if it satisfies the following two conditions: \begin{itemize} \item[1)] $s_1,\dots,s_k \leq q$. \item[2)] $\fs$ is not of the form $(s_1,\dots,s_r)$ with $r \leq k$, $s_1,\dots,s_{r-1} \leq q$, and $s_r=q$. \end{itemize} Here we recall $s_i=0$ for $i > \depth(\fs)$. By convention the empty array $\begin{pmatrix} \emptyset \\ \emptyset \end{pmatrix}$ is always $k$-admissible. An array is $k$-admissible if the corresponding tuple is $k$-admissible. \end{definition} \begin{proposition} \label{polyalgpartlem} For all $k \in \mathbb{N}$ and for all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be expressed as a $K$-linear combination of $\Li \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $'s of the same weight such that $\mathfrak{t}$ is $k$-admissible. \end{proposition} \begin{proof} The proof follows the same line as that of \cite[Proposition 3.2]{ND21}. We write down the proof for the reader's convenience. We consider the following statement:\\ $(H_k) \quad$ For all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we can express $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ as a $K$-linear combination of $\Li \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $'s of the same weight such that $\mathfrak{t}$ is $k$-admissible.\\ We need to show that $(H_k)$ holds for all $k \in \mathbb{N}$. We proceed the proof by induction on $k$. For $k = 1$, we consider all the cases for the first component $s_1$ of $\fs$. If $s_1 \leq q$, then either $\fs$ is $1$-admissible, or $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ q \end{pmatrix}$. 
For the case $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ q \end{pmatrix}$, we deduce from the relation $R_{\varepsilon}$ that \begin{equation*} \Li \begin{pmatrix} \varepsilon\\ q \end{pmatrix} = - \varepsilon^{-1}D_1 \Li \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix}, \end{equation*} hence $(H_1)$ holds in this case since $(1, q - 1)$ is $1$-admissible. If $s_1 > q$, we assume that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix}$. Set $\begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \varepsilon_2 & \cdots & \varepsilon_n \\ s_1 - q & s_2 & \cdots & s_n \end{pmatrix}$. From Proposition \ref{polycer1}, we have $\mathcal C_{\Sigma,V}(R_{1})$ is of the form \begin{equation*} \Si_d \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix} + \Si_d \begin{pmatrix} 1& \varepsilon_1 & \varepsilon_2 & \cdots & \varepsilon_n \\ q & s_1 - q & s_2 & \cdots & s_n \end{pmatrix} + \sum \limits_i b_i \Si_{d+1} \begin{pmatrix} 1 & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation*} where $b_i \in K$ for all $i$. Thus we deduce that \begin{align*} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} = - \Li \begin{pmatrix} 1& \varepsilon_1 & \varepsilon_2 & \cdots & \varepsilon_n \\ q & s_1 - q & s_2 & \cdots & s_n \end{pmatrix} - \sum \limits_i b_i \Li \begin{pmatrix} 1 & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix}, \end{align*} which proves that $(H_1)$ holds in this case since $(q, s_1 - q, s_2, \dotsc, s_n)$ and $(1, \mathfrak{t}_i)$ are $1$-admissible. We next assume that $(H_{k - 1})$ holds. We need to show that $(H_k)$ holds. It suffices to consider the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ where $\fs$ is not $k$-admissible. Moreover, from the induction hypothesis of $(H_{k - 1})$, we may assume that $\fs$ is $(k - 1)$-admissible. For such array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, we claim that $\depth(\fs) \geq k$. Indeed, assume that $\depth(\fs) < k$. Since $\fs$ is $(k - 1)$-admissible, one verifies that $\fs$ is $k$-admissible, which is a contradiction. Assume that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix}$ where $\fs$ is not $k$-admissible and $\depth(\fs) \geq k$. In order to prove that $(H_k)$ holds for the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, we proceed by induction on $s_1 + \cdots + s_k$. The case $s_1 + \cdots + s_k = 1$ is a simple check. Assume that $(H_k)$ holds when $s_1 + \cdots + s_k < s$. We need to show that $(H_k)$ holds when $s_1 + \cdots + s_k = s$. To do so, we give the following algorithm: \textit{Algorithm: } We begin with an array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ where $\fs$ is not $k$-admissible, $\depth(\fs) \geq k$ and $s_1 + \cdots + s_k = s$. \textit{Step 1:} From Proposition \ref{polydecom}, we can decompose $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ as follows: \begin{equation} \label{eq: polyalgpartlem} \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{ - \Li \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} }_\text{type 1} + \underbrace{\sum\limits_i b_i\Li \begin{pmatrix} \fe_i' \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\Li \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation} where $ b_i, c_i \in A$. 
Note that the term of type $1$ disappears when $\Init(\fs) = \fs$ and $s_n = q$. Moreover, for all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side of \eqref{eq: polyalgpartlem}, we have $\depth(\mathfrak{t}) \geq \depth(\fs) \geq k$ and $t_1 + \cdots + t_k \leq s_1 + \cdots + s_k = s$. \textit{Step 2:} For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side of \eqref{eq: polyalgpartlem}, if $\mathfrak{t}$ is either $k$-admissible or $\mathfrak{t}$ satisfies the condition $t_1 + \cdots + t_k < s$, then we deduce from the induction hypothesis that $(H_k)$ holds for the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, and hence we stop the algorithm. Otherwise, there exists an array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ where $\fs_1$ is not $k$-admissible, $\depth(\fs_1) \geq k$ and $s_{11} + \cdots + s_{1k} = s$. For such an array, we repeat the algorithm for $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$. It should be remarked that the array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ is of type $3$ with respect to $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$. Indeed, if the array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ is of type $1$ or type $2$, then we deduce from Proposition \ref{polydecom} that $s_{11} + \cdots + s_{1k} < s_{1} + \cdots + s_{k} = s$, which is a contradiction. It remains to show that the above algorithm stops after a finite number of steps. Set $\begin{pmatrix} \fve_0 \\ \fs_0 \end{pmatrix} = \begin{pmatrix} \fve \\ \fs \end{pmatrix}$. Assume that the above algorithm does not stop. Then there exists a sequence of arrays $\begin{pmatrix} \fve_0 \\ \fs_0 \end{pmatrix}, \begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}, \begin{pmatrix} \fve_2 \\ \fs_2 \end{pmatrix}, \dotsc$ such that for all $i \geq 0$, $\fs_i$ is not $k$-admissible, $\depth(\fs_i) \geq k$ and $\begin{pmatrix} \fve_{i + 1} \\ \fs_{i + 1} \end{pmatrix}$ is of type $3$ with respect to $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}$. From Proposition \ref{polydecom}, we obtain an infinite sequence \begin{align*} \Init(\fs_0) \prec \Init(\fs_1) \prec \Init(\fs_2) \prec \cdots \end{align*} which is strictly increasing. For all $i \geq 0$, since $\fs_i$ is not $k$-admissible and $\depth(\fs_i) \geq k$, we have $\depth(\Init(\fs_i)) \leq k$, hence $\Init(\fs_i) \preceq q^{\{k\}}$. This shows that $\Init(\fs_i) = \Init(\fs_{i + 1})$ for all $i$ sufficiently large, which is a contradiction. We deduce that the algorithm stops after a finite number of steps. We finish the proof. \end{proof} \subsubsection{A set of generators $\mathcal{AT}_w$ for ACMPL's} We recall that $\mathcal{AL}_w$ is the $K$-vector space generated by ACMPL's of weight $w$. We denote by $\mathcal{AT}_w$ the set of all ACMPL's $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \Li \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ of weight $w$ such that $s_1, \dots, s_{n-1} \leq q$ and $s_n < q$. We put $t(w)=|\mathcal{AT}_w|$. Then one verifies that \begin{equation*} t(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{equation*} and for $w>q$, $t(w)=(q-1)\sum \limits_{i = 1}^q t(w-i)$. We are ready to state a weak version of Brown's theorem for ACMPL's. 
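Before doing so, let us record a quick consistency check of the formula for $t(w)$ in low weight, assuming $q > 2$: the tuples underlying $\mathcal{AT}_2$ are $(2)$ and $(1,1)$, which carry $(q-1)$ and $(q-1)^2$ choices of characters respectively, so that
\begin{equation*}
t(2) = (q-1) + (q-1)^2 = (q-1)q,
\end{equation*}
in agreement with the case $1 \leq w < q$ above.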
\begin{proposition} \label{prop: weak Brown} The set of all elements $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ such that $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{AT}_w$ forms a set of generators for $\mathcal{AL}_w$. \end{proposition} \begin{proof} The result follows immediately from Proposition \ref{polyalgpartlem} in the case of $k = w$. \end{proof} \subsection{A strong version of Brown's theorem for ACMPL's} \label{sec:strong Brown} \subsubsection{Another set of generators $\mathcal{AS}_w$ for ACMPL's} \label{sec:AS_w} We consider the set $\mathcal{J}_w$ consisting of positive tuples $\fs = (s_1, \dots, s_n)$ of weight $w$ such that $s_1, \dots, s_{n-1} \leq q$ and $s_n <q$, together with the set $\mathcal{J}'_w$ consisting of positive tuples $\fs = (s_1, \dots, s_n)$ of weight $w$ such that $ q \nmid s_i$ for all $i$. Then there is a bijection \begin{equation*} \iota \colon \mathcal{J}'_w \longrightarrow \mathcal{J}_w \end{equation*} given as follows: for each tuple $\fs = (s_1, \dots, s_n) \in \mathcal{J}'_w$, since $q \nmid s_i$, we can write $s_i = h_i q + r_i $ where $0 < r_i < q$ and $h_i \in \mathbb{Z}^{\ge0}$. The image $\iota(\mathfrak s)$ is the tuple \begin{equation*} \iota(\mathfrak s) = (\underbrace{q, \dots, q}_{\text{$h_1$ times}}, r_1 , \dots, \underbrace{q, \dots, q}_{\text{$h_n$ times}}, r_n). \end{equation*} Let $\mathcal{AS}_w$ denote the set of ACMPL's $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ such that $\mathfrak s \in \mathcal{J}'_w$. We note that in general, $\mathcal{AS}_w$ is strictly smaller than $\mathcal{AT}_w$. The only exceptions are when $q=2$ or $w \leq q$. \subsubsection{Cardinality of $\mathcal{AS}_w$.} We now compute $s(w)=|\mathcal{AS}_w|$. To do so we denote by $\mathcal{AJ}_w$ the set consisting of arrays $ \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ of weight $w$ such that $q \nmid s_i$ for all $i$ and by $\mathcal{AJ}^1_w$ the set consisting of arrays $ \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ of weight~$w$ such that $s_1, \dots, s_{n-1} \leq q, s_n < q$ and $\varepsilon_i = 1$ whenever $s_i = q$ for $1 \leq i \leq n$. We construct a map \begin{equation*} \varphi \colon \mathcal{AJ}_w \longrightarrow \mathcal{AJ}^1_w \end{equation*} as follows: for each array $ \begin{pmatrix} \fve\\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} \in \mathcal{AJ}_w$, since $q \nmid s_i$, we can write $s_i = (h_i-1) q + r_i $ where $0 < r_i < q$ and $h_i \in \mathbb N$. The image $\varphi \begin{pmatrix} \fve\\ \fs \end{pmatrix} $ is the array \begin{equation*} \varphi \begin{pmatrix} \fve\\ \fs \end{pmatrix} = \bigg( \underbrace{\begin{pmatrix} 1 & \dots & 1\\ q & \dots & q \end{pmatrix}}_\text{$h_1-1$ times} \begin{pmatrix} \varepsilon_1 \\ r_1 \end{pmatrix} \dots \underbrace{\begin{pmatrix} 1 & \dots & 1\\ q & \dots & q \end{pmatrix}}_\text{$h_n-1$ times} \begin{pmatrix} \varepsilon_n \\ r_n \end{pmatrix} \bigg) . \end{equation*} It is easily seen that $\varphi$ is a bijection, hence $|\mathcal{AS}_w| =|\mathcal{AJ}_w| = |\mathcal{AJ}^1_w|$. Thus $s(w) = |\mathcal{AJ}^1_w|$. One verifies that \begin{equation*} s(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{equation*} and for $w>q$, \[ s(w)=(q-1)\sum \limits_{i = 1}^{q-1} s(w-i) + s(w - q). 
\] \subsubsection{} We state a strong version of Brown's theorem for ACMPL's. \begin{theorem} \label{thm: strong Brown} The set $\mathcal{AS}_w$ forms a set of generators for $\mathcal{AL}_w$. In particular, \[ \dim_K \mathcal{AL}_w \leq s(w). \] \end{theorem} \begin{proof} We recall that $\mathcal{AT}_w$ is the set of all ACMPL's $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix}$ with $\begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{AJ}_w$ Let $\textup{Li} \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \textup{Li}\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} \in \mathcal{AT}_w$. Then $\begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{AJ}_w$, which implies $s_1, \dotsc, s_{n - 1} \leq q$ and $s_n < q$. We express $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ in the following form \begin{align*} \bigg( \underbrace{\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{h_1 - 1}\\ q & \dots & q \end{pmatrix}}_\text{$h_1 - 1$ times} \begin{pmatrix} \varepsilon_{h_1} \\ r_1 \end{pmatrix} \dots \underbrace{\begin{pmatrix} \varepsilon_{h_1 + \cdots + h_{\ell - 1} + 1} & \dots & \varepsilon_{h_1 + \cdots + h_{\ell - 1} + (h_\ell - 1)}\\ q & \dots & q \end{pmatrix}}_\text{$h_\ell - 1$ times} \begin{pmatrix} \varepsilon_{h_1 + \cdots + h_{\ell - 1} + h_\ell} \\ r_\ell \end{pmatrix} \bigg), \end{align*} where $h_1, \dotsc, h_\ell \geq 1$, $h_1 + \cdots + h_\ell = n$ and $0 < r_1, \dotsc, r_\ell < q$. Then we set \begin{align*} \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} = \begin{pmatrix} \varepsilon'_1 & \dots & \varepsilon'_\ell \\ s'_1 & \dots & s'_\ell \end{pmatrix}, \end{align*} where $\varepsilon'_i = \varepsilon_{h_1 + \cdots + h_{i - 1} + 1} \cdots \varepsilon_{h_1 + \cdots + h_{i - 1} + h_i}$ and $s'_i = (h_i - 1)q + r_i$ for $1 \leq i \leq \ell$. We note that $\iota(\fs')=\fs$. From Proposition \ref{polydecom} and Proposition \ref{prop: weak Brown}, we can decompose $\textup{Li}\begin{pmatrix} \fve' \\ \fs' \end{pmatrix}$ as follows: \begin{equation*} \Li \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} = \sum a_{\fe,\mathfrak t}^{\fve',\fs'} \Li \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} , \end{equation*} where $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ ranges over all elements of $\mathcal{AJ}_w$ and $a_{\fe,\mathfrak t}^{\fve',\fs'} \in A$ satisfying \begin{equation*} a_{\fe,\mathfrak t}^{\fve',\fs'} \equiv \begin{cases} \pm 1 \ (\text{mod } D_1) & \text{if } \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \begin{pmatrix} \fve \\ \fs \end{pmatrix},\\ 0 \ \ (\text{mod } D_1) & \text{otherwise}. \end{cases} \end{equation*} Note that $\Li \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} \in \mathcal{AS}_w$. Thus the transition matrix from the set consisting of such $\Li \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} $ as above (we allow repeated elements) to the set consisting of $\Li \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ with $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} \in \mathcal{AJ}_w$ is invertible. It then follows again from Proposition \ref{prop: weak Brown} that $\mathcal{AS}_w$ is a set of generators for $\mathcal{AL}_w$, as desired. \end{proof} \section{Dual $t$-motives and linear independence} \label{sec: dual motives} We continue with the notation given in the Introduction. Further, letting $t$ be another independent variable, we denote by $\bT$ the Tate algebra in the variable $t$ with coefficients in $\C_\infty$ equipped with the Gauss norm $\lVert . \rVert_\infty$, and by $\bL$ the fraction field of $\bT$. 
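Concretely, $\bT$ consists of the power series $f = \sum_{n \geq 0} a_n t^n$ with $a_n \in \C_\infty$ and $|a_n|_\infty \to 0$ as $n \to \infty$, and the Gauss norm is given by $\lVert f \rVert_\infty = \sup_n |a_n|_\infty$; we recall this standard description only to fix conventions.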
We denote by $\mathcal E$ the ring of series $\sum_{n \geq 0} a_nt^n \in \overline K[[t]]$ such that $\lim_{n \to +\infty} \sqrt[n]{|a_n|_\infty}=0$ and $[K_\infty(a_0,a_1,\ldots):K_\infty]<\infty$. Then any $f \in \mathcal E$ is an entire function. For $a \in A=\Fq[\theta]$, we set $a(t):=a \rvert_{\theta=t} \in \Fq[t]$. \subsection{Dual $t$-motives} We recall the notion of dual $t$-motives due to Anderson (see \cite[\S 4]{BP20} and \cite[\S 5]{HJ20} for more details). We refer the reader to \cite{And86} for the related notion of $t$-motives. For $i \in \mathbb Z$ we consider the $i$-fold twisting of $\C_\infty((t))$ defined by \begin{align*} \C_\infty((t)) & \rightarrow \C_\infty((t)) \\ f=\sum_j a_j t^j & \mapsto f^{(i)}:=\sum_j a_j^{q^i} t^j. \end{align*} We extend $i$-fold twisting to matrices with entries in $\C_\infty((t))$ by twisting entry-wise. Let $\overline K[t,\sigma]$ be the non-commutative $\overline K[t]$-algebra generated by the new variable $\sigma$ subject to the relation $\sigma f=f^{(-1)}\sigma$ for all $f \in \overline K[t]$. \begin{definition} An effective dual $t$-motive is a $\overline K[t,\sigma]$-module $\mathcal M'$ which is free and finitely generated over $\overline K[t]$ such that for $\ell\gg 0$ we have \[(t-\theta)^\ell(\mathcal M'/\sigma \mathcal M') = \{0\}.\] \end{definition} We mention that effective dual $t$-motives are called Frobenius modules in \cite{CPY19,CH21,Har21,KL16}. Note that Hartl and Juschka \cite[\S 4]{HJ20} introduced a more general notion of dual $t$-motives. In particular, effective dual $t$-motives are always dual $t$-motives. Throughout this paper we will always work with effective dual $t$-motives. Therefore, we will sometimes drop the word ``effective" where there is no confusion. Let $\mathcal M$ and $\mathcal M'$ be two effective dual $t$-motives. Then a morphism of effective dual $t$-motives $\mathcal M \to \mathcal M'$ is just a homomorphism of left $\overline K[t,\sigma]$-modules. We denote by $\cF$ the category of effective dual $t$-motives equipped with the trivial object $\mathbf{1}$. We say that an object $\mathcal M$ of $\cF$ is given by a matrix $\Phi \in \Mat_r(\overline K[t])$ if $\mathcal M$ is a $\overline K[t]$-module free of rank $r$ and the action of $\sigma$ is represented by the matrix $\Phi$ on a given $\overline K[t]$-basis for $\mathcal M$. We say that an object $\mathcal M$ of $\cF$ is uniformizable or rigid analytically trivial if there exists a matrix $\Psi \in \text{GL}_r(\bT)$ satisfying $\Psi^{(-1)}=\Phi \Psi$. The matrix $\Psi$ is called a rigid analytic trivialization of $\mathcal M$. We now recall the Anderson-Brownawell-Papanikolas criterion which is crucial in the sequel (see \cite[Theorem 3.1.1]{ABP04}). \begin{theorem}[Anderson-Brownawell-Papanikolas] \label{thm:ABP} Let $\Phi \in \Mat_\ell(\overline K[t])$ be a matrix such that $\det \Phi=c(t-\theta)^s$ for some $c \in \overline K^\times$ and $s \in \mathbb Z^{\geq 0}$. Let $\psi \in \Mat_{\ell \times 1}(\mathcal E)$ be a vector satisfying $\psi^{(-1)}=\Phi \psi$ and $\rho \in \Mat_{1 \times \ell}(\overline K)$ such that $\rho \psi(\theta)=0$. Then there exists a vector $P \in \Mat_{1 \times \ell}(\overline K[t])$ such that \[ P \psi=0 \quad \text{and} \quad P(\theta)=\rho. \] \end{theorem} \subsection{Some constructions of dual $t$-motives} \label{subsec:dual motives} \subsubsection{General case} We briefly review some constructions of dual $t$-motives introduced in \cite{CPY19} (see also \cite{Cha14,CH21,Har21}). 
Let $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ be a tuple of positive integers and $\fQ=(Q_1,\dots,Q_r) \in \overline K[t]^r$ satisfying the condition \begin{equation} \label{eq: condition for Q} (\lVert Q_1 \rVert_\infty / |\theta|_\infty^{\frac{qs_1}{q-1}})^{q^{i_1}} \ldots (\lVert Q_r \rVert_\infty / |\theta|_\infty^{\frac{qs_r}{q-1}})^{q^{i_r}} \to 0 \end{equation} as $0 \leq i_r < \dots < i_1$ and $ i_1 \to \infty$. We consider the dual $t$-motives $\mathcal M_{\fs,\fQ}$ and $\mathcal M_{\fs,\fQ}'$ attached to $(\fs,\fQ)$ given by the matrices \begin{align*} \Phi_{\fs,\fQ} &= \begin{pmatrix} (t-\theta)^{s_1+\dots+s_r} & 0 & 0 & \dots & 0 \\ Q_1^{(-1)} (t-\theta)^{s_1+\dots+s_r} & (t-\theta)^{s_2+\dots+s_r} & 0 & \dots & 0 \\ 0 & Q_2^{(-1)} (t-\theta)^{s_2+\dots+s_r} & \ddots & & \vdots \\ \vdots & & \ddots & (t-\theta)^{s_r} & 0 \\ 0 & \dots & 0 & Q_r^{(-1)} (t-\theta)^{s_r} & 1 \end{pmatrix} \\ &\in \Mat_{r+1}(\overline K[t]), \end{align*} and $\Phi'_{\fs,\fQ} \in \Mat_r(\overline K[t])$ is the upper left $r \times r$ sub-matrix of $\Phi_{\fs,\fQ}$. Throughout this paper, we work with the Carlitz period $\widetilde \pi$ which is a fundamental period of the Carlitz module (see \cite{Gos96, Tha04}). We fix a choice of $(q-1)$st root of $(-\theta)$ and set \[ \Omega(t):=(-\theta)^{-q/(q-1)} \prod_{i \geq 1} \left(1-\frac{t}{\theta^{q^i}} \right) \in \bT^\times \] so that \[ \Omega^{(-1)}=(t-\theta)\Omega \quad \text{ and } \quad \frac{1}{\Omega(\theta)}=\widetilde \pi. \] Given $(\fs,\fQ)$ as above, Chang introduced the following series (see \cite[Lemma 5.3.1]{Cha14} and also \cite[Eq. (2.3.2)]{CPY19}) \begin{align} \label{eq: series L} \frakL(\fs;\fQ)=\frakL(s_1,\dots,s_r;Q_1,\dots,Q_r):=\sum_{i_1 > \dots > i_r \geq 0} (\Omega^{s_r} Q_r)^{(i_r)} \dots (\Omega^{s_1} Q_1)^{(i_1)}. \end{align} It is proved that $\frakL(\fs,\fQ) \in \mathcal E$ (see \cite[Lemma 5.3.1]{Cha14}). Here we recall that $\mathcal E$ denotes the ring of series $\sum_{n \geq 0} a_nt^n \in \overline K[[t]]$ such that $\lim_{n \to +\infty} \sqrt[n]{|a_n|_\infty}=0$ and $[K_\infty(a_0,a_1,\ldots):K_\infty]<\infty$. In the sequel, we will use the following crucial property of this series (see \cite[Lemma 5.3.5]{Cha14} and \cite[Proposition 2.3.3]{CPY19}): for all $j \in \mathbb Z^{\geq 0}$, we have \begin{equation} \label{eq: power seriesChang} \frakL(\fs;\fQ) \left(\theta^{q^j} \right)=\left(\frakL(\fs;\fQ)(\theta)\right)^{q^j}. \end{equation} Then the matrix given by \begin{align*} \Psi_{\fs,\fQ} &= \begin{pmatrix} \Omega^{s_1+\dots+s_r} & 0 & 0 & \dots & 0 \\ \frakL(s_1;Q_1) \Omega^{s_2+\dots+s_r} & \Omega^{s_2+\dots+s_r} & 0 & \dots & 0 \\ \vdots & \frakL(s_2;Q_2) \Omega^{s_3+\dots+s_r} & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ \frakL(s_1,\dots,s_{r-1};Q_1,\dots,Q_{r-1}) \Omega^{s_r} & \frakL(s_2,\dots,s_{r-1};Q_2,\dots,Q_{r-1}) \Omega^{s_r} & \dots & \Omega^{s_r}& 0 \\ \frakL(s_1,\dots,s_r;Q_1,\dots,Q_r) & \frakL(s_2,\dots,s_r;Q_2,\dots,Q_r) & \dots & \frakL(s_r;Q_r) & 1 \end{pmatrix} \\ &\in \text{GL}_{r+1}(\bT) \end{align*} satisfies \[ \Psi_{\fs,\fQ}^{(-1)}=\Phi_{\fs,\fQ} \Psi_{\fs,\fQ}. \] Thus $\Psi_{\fs,\fQ}$ is a rigid analytic trivialization associated to the dual $t$-motive $\mathcal M_{\fs,\fQ}$. We also denote by $\Psi_{\fs,\fQ}'$ the upper $r \times r$ sub-matrix of $\Psi_{\fs,\fQ}$. It is clear that $\Psi_\fs'$ is a rigid analytic trivialization associated to the dual $t$-motive $\mathcal M_{\fs,\fQ}'$. Further, combined with Eq. 
\eqref{eq: power seriesChang}, the above construction of dual $t$-motives implies that $\widetilde \pi^w \frak L(\fs;\fQ)(\theta)$, where $w=s_1+\dots+s_r$, has the MZ (multizeta) property in the sense of \cite[Definition 3.4.1]{Cha14}. By \cite[Proposition 4.3.1]{Cha14}, we get \begin{proposition} \label{prop: MZ property} Let $(\fs_i;\fQ_i)$ be as before for $1 \leq i \leq m$. We suppose that all the tuples of positive integers $\fs_i$ have the same weight, say $w$. Then the following assertions are equivalent: \begin{itemize} \item[i)] $\frak L(\fs_1;\fQ_1)(\theta),\dots,\frak L(\fs_m;\fQ_m)(\theta)$ are $K$-linearly independent. \item[ii)] $\frak L(\fs_1;\fQ_1)(\theta),\dots,\frak L(\fs_m;\fQ_m)(\theta)$ are $\overline K$-linearly independent. \end{itemize} \end{proposition} We end this section by mentioning that Chang \cite{Cha14} also proved an analogue of Goncharov's conjecture in this setting.
\subsubsection{Dual $t$-motives connected to MZV's and AMZV's} \label{sec:MZV motives} Following Anderson and Thakur \cite{AT09} we introduce dual $t$-motives connected to MZV's and AMZV's. We briefly review Anderson-Thakur polynomials introduced in \cite{AT90}. For $k \geq 0$, we set $[k]:=\theta^{q^k}-\theta$ and $D_k:= \prod^k_{\ell=1} [\ell]^{q^{k-\ell}}$. For $n \in \N$ we write $n-1 = \sum_{j \geq 0} n_j q^j$ with $0 \leq n_j \leq q-1$ and define \[ \Gamma_n:=\prod_{j \geq 0} D_j^{n_j}. \] We set $\gamma_0(t) :=1$ and $\gamma_j(t) :=\prod^j_{\ell=1} (\theta^{q^j}-t^{q^\ell})$ for $j\geq 1$. Then Anderson-Thakur polynomials $\alpha_n(t) \in A[t]$ are given by the generating series \[ \sum_{n \geq 1} \frac{\alpha_n(t)}{\Gamma_n} x^n:=x\left( 1-\sum_{j \geq 0} \frac{\gamma_j(t)}{D_j} x^{q^j} \right)^{-1}. \] Finally, we define $H_n(t)$ by switching $\theta$ and $t$: \begin{align*} H_n(t)=\alpha_n(t) \big|_{t=\theta, \, \theta=t}. \end{align*} By \cite[Eq. (3.7.3)]{AT90} we get \begin{equation} \label{eq: deg theta Hn} \deg_\theta H_n \leq \frac{(n-1) q}{q-1}<\frac{nq}{q-1}. \end{equation} Let $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ be a tuple and $\mathfrak \epsilon=(\epsilon_1,\ldots,\epsilon_r) \in (\F_q^\times)^r$. Recall that $\overline{\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline{K}$. For all $1 \leq i \leq r$ we fix a $(q-1)$-th root $\gamma_i \in \overline{\mathbb F}_q$ of $\epsilon_i \in \mathbb F_q^\times$ and set $Q_{s_i,\epsilon_i}:=\gamma_i H_{s_i}$. Then we set $\fQ_{\fs,\fe}:=(Q_{s_1,\epsilon_1},\dots,Q_{s_r,\epsilon_r})$ and put $\frak L(\fs;\fe):=\frak L(\fs;\fQ_{\fs,\fe})$. By \eqref{eq: deg theta Hn} we know that $\lVert H_n \rVert_\infty < |\theta|_\infty ^{\tfrac{n q}{q-1}}$ for all $n \in \N$, thus $\fQ_{\fs,\fe}$ satisfies Condition \eqref{eq: condition for Q}. Thus we can define the dual $t$-motives $\mathcal M_{\fs,\fe}=\mathcal M_{\fs,\fQ_{\fs,\fe}}$ and $\mathcal M_{\fs,\fe}'=\mathcal M_{\fs,\fQ_{\fs,\fe}}'$ attached to $(\fs,\fe)$, whose matrices and rigid analytic trivializations will be denoted by $(\Phi_{\fs,\fe},\Psi_{\fs,\fe})$ and $(\Phi_{\fs,\fe}',\Psi_{\fs,\fe}')$, respectively. These dual $t$-motives are connected to MZV's and AMZV's by the following result (see \cite[Proposition 2.12]{CH21} for more details): \begin{equation} \label{eq: MZV} \frak L(\fs;\fe)(\theta)=\frac{\gamma_1 \dots \gamma_r \Gamma_{s_1} \dots \Gamma_{s_r} \zeta_A \begin{pmatrix} \fe \\ \fs \end{pmatrix}}{\widetilde \pi^{w(\fs)}}.
\end{equation} By a result of Thakur \cite{Tha09}, one can show (see \cite[Theorem 2.1]{Har21}) that $\zeta_A \begin{pmatrix} \fe \\ \fs \end{pmatrix} \neq 0$. Thus $\frak L(\fs;\fe)(\theta) \neq 0$.
\subsubsection{Dual $t$-motives connected to CMPL's and ACMPL's} \label{sec:CMPL motives} We keep the notation as above. Let $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ be a tuple and $\fe=(\epsilon_1,\ldots,\epsilon_r) \in (\F_q^\times)^r$. For all $1 \leq i \leq r$ we have a fixed $(q-1)$-th root $\gamma_i$ of $\epsilon_i \in \mathbb F_q^\times$ and set $Q_{s_i,\epsilon_i}':=\gamma_i$. Then we set $\fQ_{\fs,\fe}':=(Q_{s_1,\epsilon_1}',\dots,Q_{s_r,\epsilon_r}')$ and put \begin{align} \label{eq: series Li} \frakLi(\fs;\fe)=\frak L(\fs;\fQ_{\fs,\fe}')=\sum_{i_1 > \dots > i_r \geq 0} (\gamma_{r} \Omega^{s_r})^{(i_r)} \dots (\gamma_{1} \Omega^{s_1})^{(i_1)}. \end{align} Thus we can define the dual $t$-motives $\mathcal N_{\fs,\fe}=\mathcal M_{\fs,\fQ_{\fs,\fe}'}$ and $\mathcal N_{\fs,\fe}'=\mathcal M_{\fs,\fQ_{\fs,\fe}'}'$ attached to $(\fs,\fe)$. These dual $t$-motives are connected to CMPL's and ACMPL's by the following result (see \cite[Lemma 5.3.5]{Cha14} and \cite[Prop. 2.3.3]{CPY19}): \begin{equation} \label{eq: ACMPL} \frakLi(\fs;\fe)(\theta)=\frac{\gamma_1 \dots \gamma_r \Li \begin{pmatrix} \fe \\ \fs \end{pmatrix}}{\widetilde \pi^{w(\fs)}}. \end{equation}
\subsection{A result for linear independence} \subsubsection{Setup} Let $w \in \N$ be a positive integer. Let $\{(\fs_i;\fQ_i)\}_{1 \leq i \leq n}$ be a collection of pairs satisfying Condition \eqref{eq: condition for Q} such that $\fs_i$ always has weight $w$. We write $\fs_i=(s_{i1},\dots,s_{i \ell_i}) \in \mathbb N^{\ell_i}$ and $\fQ_i=(Q_{i1},\dots,Q_{i\ell_i}) \in \overline K[t]^{\ell_i}$ so that $s_{i1}+\dots+s_{i \ell_i}=w$. We introduce the set of tuples \[ I(\fs_i;\fQ_i):=\{\emptyset,(s_{i1};Q_{i1}),\dots,(s_{i1},\dots,s_{i (\ell_i-1)};Q_{i1},\dots,Q_{i(\ell_i-1)})\},\] and set \[ I:=\cup_i I(\fs_i;\fQ_i). \] \subsubsection{Linear independence} We are now ready to state the main result of this section. \begin{theorem} \label{theorem: linear independence} We keep the above notation. We suppose further that $\{(\fs_i;\fQ_i)\}_{1 \leq i \leq n}$ satisfies the following conditions: \begin{itemize} \item[(LW)] For any weight $w'<w$, the values $\frak L(\frak t;\fQ)(\theta)$ with $(\frak t;\fQ) \in I$ and $w(\frak t)=w'$ are all $K$-linearly independent. In particular, $\frak L(\frak t;\fQ)(\theta)$ is always nonzero. \item[(LD)] There exist $a \in A$ and $a_i \in A$ for $1 \leq i \leq n$ which are not all zero such that \begin{equation*} a+\sum_{i=1}^n a_i \frak L(\fs_i;\fQ_i)(\theta)=0. \end{equation*} \end{itemize} For all $(\frak t;\fQ) \in I$, we set the following series in $t$ \begin{equation} \label{eq:f} f_{\mathfrak t;\fQ}:= \sum_i a_i(t) \mathfrak L(s_{i(k+1)},\dots,s_{i \ell_i};Q_{i(k+1)},\dots,Q_{i \ell_i}), \end{equation} where the sum runs through the set of indices $i$ such that $(\mathfrak t;\fQ)=(s_{i1},\dots,s_{i k};Q_{i1},\dots,Q_{i k})$ for some $0 \leq k \leq \ell_i-1$. Then for all $(\mathfrak t;\fQ) \in I$, $f_{\mathfrak t;\fQ}(\theta)$ belongs to $K$. \end{theorem} \begin{remark} 1) Here we note that LW stands for Lower Weights and LD for Linear Dependence. 2) With the above notation we have \[ f_{\emptyset}= \sum_i a_i(t) \mathfrak L(\fs_i;\fQ_i). \] 3) In fact, we improve \cite[Theorem B]{ND21} in two directions. First, we remove the restriction to Anderson-Thakur polynomials and tuples $\fs_i$.
Second, and more importantly, we allow an additional term $a$, which is crucial in the sequel. More precisely, in the case of MZV's, while \cite[Theorem B]{ND21} investigates linear relations between MZV's of weight $w$, Theorem \ref{theorem: linear independence} investigates linear relations between MZV's of weight $w$ and the power $\widetilde \pi^w$ of the Carlitz period. \end{remark} \begin{proof} The proof will be divided into two steps. \medskip \noindent {\bf Step 1.} We first construct a dual $t$-motive to which we will apply the Anderson-Brownawell-Papanikolas criterion. We recall $a_i(t):=a_i \rvert_{\theta=t} \in \Fq[t]$. For each pair $(\fs_i;\fQ_i)$ we have attached to it a matrix $\Phi_{\fs_i,\fQ_i}$. For $\fs_i=(s_{i1},\dots,s_{i \ell_i}) \in \mathbb N^{\ell_i}$ and $\fQ_i=(Q_{i1},\dots,Q_{i\ell_i}) \in \overline K[t]^{\ell_i}$ we recall \[ I(\fs_i;\fQ_i)=\{\emptyset,(s_{i1};Q_{i1}),\dots,(s_{i1},\dots,s_{i (\ell_i-1)};Q_{i1},\dots,Q_{i(\ell_i-1)})\}, \] and $I:=\cup_i I(\fs_i;\fQ_i)$. We now construct a new matrix $\Phi'$ indexed by elements of $I$, say \[ \Phi'=\left(\Phi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}\right)_{(\mathfrak t;\fQ),(\mathfrak t';\fQ') \in I} \in \Mat_{|I|}(\overline K[t]). \] For the row which corresponds to the empty pair $\emptyset$ we put \begin{align*} \Phi'_{\emptyset,(\mathfrak t';\fQ')}= \begin{cases} (t-\theta)^w & \text{if } (\mathfrak t';\fQ')=\emptyset, \\ 0 & \text{otherwise}. \end{cases} \end{align*} For the row indexed by $(\mathfrak t;\fQ)=(s_{i1},\dots,s_{i j};Q_{i1},\dots,Q_{ij})$ for some $i$ and $1 \leq j \leq \ell_i-1$ we put \begin{align*} \Phi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}= \begin{cases} (t-\theta)^{w-w(\mathfrak t')} & \text{if } (\mathfrak t';\fQ')=(\mathfrak t;\fQ), \\ Q_{ij}^{(-1)} (t-\theta)^{w-w(\mathfrak t')} & \text{if } (\mathfrak t';\fQ')=(s_{i1},\dots,s_{i (j-1)};Q_{i1},\dots,Q_{i (j-1)}), \\ 0 & \text{otherwise}. \end{cases} \end{align*} Note that $\Phi_{\fs_i,\fQ_i}'=\left(\Phi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}\right)_{(\mathfrak t;\fQ),(\mathfrak t';\fQ') \in I(\fs_i;\fQ_i)}$ for all $i$. We define $\Phi \in \Mat_{|I|+1}(\overline K[t])$ by \begin{align*} \Phi=\begin{pmatrix} \Phi' & 0 \\ \bv & 1 \end{pmatrix} \in \Mat_{|I|+1}(\overline K[t]), \quad \bv=(v_{\mathfrak t,\fQ})_{(\mathfrak t;\fQ) \in I} \in \Mat_{1 \times|I|}(\overline K[t]), \end{align*} where \[ v_{\mathfrak t,\fQ}=\sum_i a_i(t) Q_{i \ell_i}^{(-1)} (t-\theta)^{w-w(\mathfrak t)} . \] Here the sum runs through the set of indices $i$ such that $(\mathfrak t;\fQ)=(s_{i1},\dots,s_{i (\ell_i-1)};Q_{i1},\dots,Q_{i (\ell_i-1)})$ and the empty sum is defined to be zero. We now introduce a rigid analytic trivialization matrix $\Psi$ for $\Phi$. We define $\Psi'=\left(\Psi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}\right)_{(\mathfrak t;\fQ),(\mathfrak t';\fQ') \in I} \in \text{GL}_{|I|}(\bT)$ as follows. For the row which corresponds to the empty pair $\emptyset$ we define \begin{align*} \Psi'_{\emptyset,(\mathfrak t';\fQ')}= \begin{cases} \Omega^w & \text{if } (\mathfrak t';\fQ')=\emptyset, \\ 0 & \text{otherwise}.
\end{cases}
\end{align*}
For the row indexed by $(\mathfrak t;\fQ)=(s_{i1},\dots,s_{i j};Q_{i1},\dots,Q_{i j})$ for some $i$ and $1 \leq j \leq \ell_i-1$ we put
\begin{align*}
&\Psi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}= \\
& \begin{cases} \mathfrak L(\mathfrak t;\fQ) \Omega^{w-w(\mathfrak t)} & \text{if } (\mathfrak t';\fQ')=\emptyset, \\ \mathfrak L(s_{i(k+1)},\dots,s_{ij};Q_{i(k+1)},\dots,Q_{ij}) \Omega^{w-w(\mathfrak t)} & \text{if } (\mathfrak t';\fQ')=(s_{i1},\dots,s_{i k};Q_{i1},\dots,Q_{i k}) \text{ for some } 1 \leq k \leq j, \\ 0 & \text{otherwise}. \end{cases}
\end{align*}
Note that $\Psi_{\fs_i,\fQ_i}'=\left(\Psi'_{(\mathfrak t;\fQ),(\mathfrak t';\fQ')}\right)_{(\mathfrak t;\fQ),(\mathfrak t';\fQ') \in I(\fs_i;\fQ_i)}$ for all $i$. We define $\Psi \in \text{GL}_{|I|+1}(\bT)$ by
\begin{align*}
\Psi=\begin{pmatrix} \Psi' & 0 \\ \bff & 1 \end{pmatrix} \in \text{GL}_{|I|+1}(\bT), \quad \bff=(f_{\mathfrak t;\fQ})_{(\mathfrak t;\fQ) \in I} \in \Mat_{1 \times|I|}(\bT).
\end{align*}
Here we recall (see Eq. \eqref{eq:f})
\begin{equation*}
f_{\mathfrak t;\fQ}= \sum_i a_i(t) \mathfrak L(s_{i(k+1)},\dots,s_{i \ell_i};Q_{i(k+1)},\dots,Q_{i \ell_i})
\end{equation*}
where the sum runs through the set of indices $i$ such that $(\mathfrak t;\fQ)=(s_{i1},\dots,s_{i k};Q_{i1},\dots,Q_{i k})$ for some $0 \leq k \leq \ell_i-1$. In particular, $f_{\emptyset}= \sum_i a_i(t) \mathfrak L(\fs_i;\fQ_i)$. By construction and by \S \ref{subsec:dual motives}, we get $\Psi^{(-1)}=\Phi \Psi$, which means that $\Psi$ is a rigid analytic trivialization for $\Phi$.
\medskip
\noindent {\bf Step 2.} Next we apply the Anderson-Brownawell-Papanikolas criterion (see Theorem \ref{thm:ABP}) to prove Theorem \ref{theorem: linear independence}. In fact, we define
\begin{align*}
\widetilde \Phi=\begin{pmatrix} 1 & 0 \\ 0 & \Phi \end{pmatrix} \in \Mat_{|I|+2}(\overline K[t])
\end{align*}
and consider the vector constructed from the first column vector of $\Psi$
\begin{align*}
\widetilde \psi=\begin{pmatrix} 1 \\ \Psi_{(\mathfrak t;\fQ),\emptyset}' \\ f_\emptyset \end{pmatrix}_{(\mathfrak t;\fQ) \in I}.
\end{align*}
Then we have $\widetilde \psi\twistinv=\widetilde \Phi \widetilde \psi$. We also observe that for all $(\mathfrak t;\fQ) \in I$ we have $\Psi_{(\mathfrak t;\fQ),\emptyset}'=\mathfrak L(\mathfrak t;\fQ) \Omega^{w-w(\mathfrak t)}$. Further,
\begin{align*}
a+f_\emptyset(\theta)=a+\sum_i a_i \mathfrak L(\fs_i;\fQ_i)(\theta)=0.
\end{align*}
By Theorem \ref{thm:ABP} with $\rho=(a,0,\dots,0,1)$ we deduce that there exists $\bh=(g_0,g_{\mathfrak t,\fQ},g) \in \Mat_{1 \times (|I|+2)}(\overline K[t])$ such that $\bh \widetilde \psi=0$, and that $g_{\mathfrak t,\fQ}(\theta)=0$ for $(\mathfrak t,\fQ) \in I$, $g_0(\theta)=a$ and $g(\theta)=1 \neq 0$. If we put $ \bg:=(1/g)\bh \in \Mat_{1 \times (|I|+2)}(\overline K(t))$, then all the entries of $ \bg$ are regular at $t=\theta$. Now we have
\begin{align} \label{eq: reduction}
( \bg- \bg\invtwist \widetilde \Phi) \widetilde \psi= \bg \widetilde \psi-( \bg \widetilde \psi)\invtwist=0.
\end{align}
We write $ \bg- \bg\invtwist \widetilde \Phi=(B_0,B_{\mathfrak t,\fQ},0)_{(\mathfrak t;\fQ) \in I}$. We claim that $B_0=0$ and $B_{\mathfrak t,\fQ}=0$ for all $(\mathfrak t;\fQ) \in I$. In fact, expanding \eqref{eq: reduction} we obtain
\begin{equation} \label{eq: B}
B_0+\sum_{(\mathfrak t;\fQ) \in I} B_{\mathfrak t,\fQ} \mathfrak L(\mathfrak t;\fQ) \Omega^{w-w(\mathfrak t)}=0.
\end{equation} By \eqref{eq: power seriesChang} we see that for $(\mathfrak t;\fQ) \in I$ and $j \in \N$, \begin{equation*} \frakL(\mathfrak t;\fQ)(\theta^{q^j})=(\frakL(\mathfrak t;\fQ)(\theta))^{q^j} \end{equation*} which is nonzero by Condition $(LW)$. First, as the function $\Omega$ has a simple zero at $t=\theta^{q^k}$ for $k \in \N$, specializing \eqref{eq: B} at $t=\theta^{q^j}$ yields $B_0(\theta^{q^j})=0$ for $j \geq 1$. Since $B_0$ belongs to $\overline K(t)$, it follows that $B_0=0$. Next, we put $w_0:=\max_{(\mathfrak t;\fQ) \in I} w(\mathfrak t)$ and denote by $I(w_0)$ the set of $(\mathfrak t;\fQ) \in I$ such that $w(\mathfrak t)=w_0$. Then dividing \eqref{eq: B} by $\Omega^{w-w_0}$ yields \begin{equation} \label{eq: B1} \sum_{(\mathfrak t;\fQ) \in I} B_{\mathfrak t,\fQ} \mathfrak L(\mathfrak t;\fQ) \Omega^{w_0-w(\mathfrak t)}=\sum_{(\mathfrak t;\fQ) \in I(w_0)} B_{\mathfrak t,\fQ} \mathfrak L(\mathfrak t;\fQ)+\sum_{(\mathfrak t;\fQ) \in I \setminus I(w_0)} B_{\mathfrak t,\fQ} \mathfrak L(\mathfrak t;\fQ) \Omega^{w_0-w(\mathfrak t)}=0. \end{equation} Since each $B_{\mathfrak t,\fQ}$ belongs to $\overline K(t)$, they are defined at $t=\theta^{q^j}$ for $j \gg 1$. Note that the function $\Omega$ has a simple zero at $t=\theta^{q^k}$ for $k \in \N$. Specializing \eqref{eq: B1} at $t=\theta^{q^j}$ and using \eqref{eq: power seriesChang} yields \[ \sum_{(\mathfrak t;\fQ) \in I(w_0)} B_{\mathfrak t,\fQ}(\theta^{q^j}) (\mathfrak L(\mathfrak t;\fQ)(\theta))^{q^j}=0 \] for $j \gg 1$. We claim that $B_{\mathfrak t,\fQ}(\theta^{q^j})=0$ for $j \gg 1$ and for all $(\mathfrak t;\fQ) \in I(w_0)$. Otherwise, we get a non-trivial $\overline K$-linear relation between $\mathfrak L(\mathfrak t;\fQ)(\theta)$ with $(\frak t;\fQ) \in I$ of weight $w_0$. By Proposition \ref{prop: MZ property} we deduce a non-trivial $K$-linear relation between $\mathfrak L(\mathfrak t;\fQ)(\theta)$ with $(\frak t;\fQ) \in I(w_0)$, which contradicts with Condition $(LW)$. Now we know that $B_{\mathfrak t,\fQ}(\theta^{q^j})=0$ for $j \gg 1$ and for all $(\mathfrak t;\fQ) \in I(w_0)$. Since each $B_{\mathfrak t,\fQ}$ belongs to $\overline K(t)$, it follows that $B_{\mathfrak t,\fQ}=0$ for all $(\mathfrak t;\fQ) \in I(w_0)$. Next, we put $w_1:=\max_{(\mathfrak t;\fQ) \in I \setminus I(w_0)} w(\mathfrak t)$ and denote by $I(w_1)$ the set of $(\mathfrak t;\fQ) \in I$ such that $w(\mathfrak t)=w_1$. Dividing \eqref{eq: B} by $\Omega^{w-w_1}$ and specializing at $t=\theta^{q^j}$ yields \[ \sum_{(\mathfrak t;\fQ) \in I(w_1)} B_{\mathfrak t,\fQ}(\theta^{q^j}) (\mathfrak L(\mathfrak t;\fQ)(\theta))^{q^j}=0 \] for $j \gg 1$. Since $w_1<w$, by Proposition \ref{prop: MZ property} and Condition $(LW)$ again we deduce that $B_{\mathfrak t,\fQ}(\theta^{q^j})=0$ for $j \gg 1$ and for all $(\mathfrak t;\fQ) \in I(w_1)$. Since each $B_{\mathfrak t,\fQ}$ belongs to $\overline K(t)$, it follows that $B_{\mathfrak t,\fQ}=0$ for all $(\mathfrak t;\fQ) \in I(w_1)$. Repeating the previous arguments we deduce that $B_{\mathfrak t,\fQ}=0$ for all $(\mathfrak t;\fQ) \in I$ as required. We have proved that $ \bg- \bg\invtwist \widetilde \Phi=0$. Thus \begin{align*} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \text{Id} & 0 \\ g_0/g & (g_{\mathfrak t,\fQ}/g)_{(\mathfrak t;\fQ) \in I} & 1 \end{pmatrix}^{(-1)} \begin{pmatrix} 1 & 0 \\ 0 & \Phi \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \Phi' & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \text{Id} & 0 \\ g_0/g & (g_{\mathfrak t,\fQ}/g)_{(\mathfrak t;\fQ) \in I} & 1 \end{pmatrix}. 
\end{align*}
By \cite[Prop. 2.2.1]{CPY19} we see that the common denominator $b$ of $g_0/g$ and $g_{\mathfrak t,\fQ}/g$ for $(\mathfrak t,\fQ) \in I$ belongs to $\Fq[t] \setminus \{0\}$. If we put $\delta_0=bg_0/g$ and $\delta_{\mathfrak t,\fQ}=b g_{\mathfrak t,\fQ}/g$ for $(\mathfrak t,\fQ) \in I$, which belong to $\overline K[t]$, and $\delta:=(\delta_{\mathfrak t,\fQ})_{(\mathfrak t;\fQ) \in I} \in \Mat_{1 \times |I|}(\overline K[t])$, then $\delta_0^{(-1)}=\delta_0$ and
\begin{align} \label{eq:equation for delta}
\begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix}^{(-1)} \begin{pmatrix} \Phi' & 0 \\ b \bv & 1 \end{pmatrix} = \begin{pmatrix} \Phi' & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix}.
\end{align}
If we put $X:=\begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix} \begin{pmatrix} \Psi' & 0 \\ b \bff & 1 \end{pmatrix}$, then $X^{(-1)}=\begin{pmatrix} \Phi' & 0 \\ 0 & 1 \end{pmatrix} X$. By \cite[\S 4.1.6]{Pap08} there exist $\nu_{\mathfrak t,\fQ} \in \Fq(t)$ for $(\mathfrak t,\fQ) \in I$ such that if we set $\nu=(\nu_{\mathfrak t,\fQ})_{(\mathfrak t,\fQ) \in I} \in \Mat_{1 \times |I|}(\Fq(t))$,
\begin{align*}
X=\begin{pmatrix} \Psi' & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \text{Id} & 0 \\ \nu & 1 \end{pmatrix}.
\end{align*}
Thus the equation $\begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix} \begin{pmatrix} \Psi' & 0 \\ b \bff & 1 \end{pmatrix}=\begin{pmatrix} \Psi' & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \text{Id} & 0 \\ \nu & 1 \end{pmatrix}$ implies
\begin{equation} \label{eq:nu}
\delta \Psi'+b \bff=\nu.
\end{equation}
The left-hand side belongs to $\bT$, hence so does the right-hand side. Thus $\nu=(\nu_{\mathfrak t,\fQ})_{(\mathfrak t,\fQ) \in I} \in \Mat_{1 \times |I|}(\Fq[t])$. For any $j \in \N$, by specializing \eqref{eq:nu} at $t=\theta^{q^j}$ and using \eqref{eq: power seriesChang} and the fact that $\Omega$ has a simple zero at $t=\theta^{q^j}$ we deduce that
\[ \bff(\theta)=\nu(\theta)/b(\theta). \]
Thus for all $(\mathfrak t,\fQ) \in I$, $f_{\mathfrak t;\fQ}(\theta)$ given as in \eqref{eq:f} belongs to $K$.
\end{proof}
\section{Linear relations between ACMPL's} \label{sec:transcendental part}
In this section we use freely the notation of \S \ref{sec:algebraic part} and \S \ref{sec:CMPL motives}.
\subsection{Preliminaries}
We begin this section by proving several auxiliary lemmas which will be useful in the sequel. We recall that $\overline{\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline{K}$.
\begin{lemma} \label{lem: gamma_i}
Let $\epsilon_i \in \Fq^\times$ be pairwise distinct elements. We denote by $\gamma_i \in \overline{\F}_q$ a $(q-1)$-th root of $\epsilon_i$. Then the $\gamma_i$ are $\Fq$-linearly independent.
\end{lemma}
\begin{proof}
We know that $\Fq^\times$ is cyclic as a multiplicative group. Let $\epsilon$ be a generating element of $\Fq^\times$ so that $\Fq^\times=\langle \epsilon \rangle$. Let $\gamma$ be the associated $(q-1)$-th root of $\epsilon$. Then for all $1 \leq i \leq q-1$ it follows that $\gamma^i$ is a $(q-1)$-th root of $\epsilon^i$. Thus it suffices to show that the polynomial $P(X)=X^{q-1}-\epsilon$ is irreducible in $\Fq[X]$: indeed, in that case $1,\gamma,\dots,\gamma^{q-2}$ form an $\Fq$-basis of $\Fq(\gamma)$, and since each $\gamma_i$ is, up to a factor in $\Fq^\times$, a power of $\gamma$ with pairwise distinct exponents modulo $q-1$, the $\Fq$-linear independence of the $\gamma_i$ follows. Suppose that $P(X)$ is not irreducible and write $P(X)=P_1(X)P_2(X)$ with $1 \leq \deg P_1<q-1$. Since the roots of $P(X)$ are of the form $\alpha \gamma$ with $\alpha \in \Fq^\times$, those of $P_1(X)$ are also of this form. Looking at the constant term of $P_1(X)$, we deduce that $\gamma^{\deg P_1} \in \Fq^\times$.
If we put $m=\text{gcd}(\deg P_1,q-1)$, then $1 \leq m<q-1$ and $\gamma^m \in \Fq^\times$. Letting $\beta:=\gamma^m \in \Fq^\times$, we get $\beta^{\frac{q-1}{m}}=\gamma^{q-1}=\epsilon$. Since $1 \leq m<q-1$, this contradicts the fact that $\Fq^\times=\langle \epsilon \rangle$. The proof is finished.
\end{proof}
\begin{lemma} \label{lem: same character}
Let $ \Li \begin{pmatrix} \fe_i \\ \fs_i \end{pmatrix} \in \mathcal{AL}_w$ and $a_i \in K$ be such that
\begin{equation*}
\sum_i a_i \frakLi(\fs_i;\fe_i)(\theta)=0.
\end{equation*}
For $\epsilon \in \Fq^\times$ we denote by $I(\epsilon)=\{i:\, \chi(\fe_i)=\epsilon\}$ the set of indices whose corresponding character equals $\epsilon$. Then for all $\epsilon \in \Fq^\times$,
\[ \sum_{i \in I(\epsilon)} a_i \frakLi(\fs_i;\fe_i)(\theta)=0. \]
\end{lemma}
\begin{proof}
We keep the notation of Lemma \ref{lem: gamma_i}. Suppose that we have a relation \[\sum_i \gamma_i b_i=0\] with $b_i \in K_\infty$. By Lemma \ref{lem: gamma_i} and the fact that $K_\infty=\Fq((1/\theta))$ (expand each $b_i$ as a Laurent series in $1/\theta$ and compare coefficients), we deduce that $b_i=0$ for all $i$. By \eqref{eq: ACMPL} the relation $\sum_i a_i \frakLi(\fs_i;\fe_i)(\theta)=0$ is equivalent to the following one
\[ \sum_i a_i \gamma_{i1} \dots \gamma_{i\ell_i} \Li \begin{pmatrix} \fe_i \\ \fs_i \end{pmatrix}=0. \]
By the previous discussion, for all $\epsilon \in \Fq^\times$,
\[ \sum_{i \in I(\epsilon)} a_i \gamma_{i1} \dots \gamma_{i\ell_i} \Li \begin{pmatrix} \fe_i \\ \fs_i \end{pmatrix}=0. \]
By \eqref{eq: ACMPL} again we deduce the desired relation
\[ \sum_{i \in I(\epsilon)} a_i \frakLi(\fs_i;\fe_i)(\theta)=0. \]
\end{proof}
\begin{lemma} \label{lem: KuanLin}
Let $m \in \mathbb N$, $\varepsilon \in \Fq^\times$, $\delta \in \overline K[t]$ and $F(t,\theta) \in \overline{\F}_q[t,\theta]$ (resp. $F(t,\theta) \in \F_q[t,\theta]$) satisfying
\[ \varepsilon\delta = \delta^{(-1)}(t-\theta)^m+F^{(-1)}(t,\theta). \]
Then $\delta \in \overline{\F}_q[t,\theta]$ (resp. $\delta \in \F_q[t,\theta]$) and
\[ \deg_\theta \delta \leq \max\left\{\frac{qm}{q-1},\frac{\deg_\theta F(t,\theta)}{q}\right\}. \]
\end{lemma}
\begin{proof}
The proof follows the same lines as that of \cite[Theorem 2]{KL16} where it is shown that if $F(t,\theta) \in \F_q[t,\theta]$ and $\varepsilon=1$, then $\delta \in \F_q[t,\theta]$. We write down the proof for the case $F(t,\theta) \in \overline{\F}_q[t,\theta]$ for the convenience of the reader. Twisting the equality $\varepsilon\delta = \delta^{(-1)}(t-\theta)^m+F^{(-1)}(t,\theta)$ once and using the fact that $\varepsilon^q=\varepsilon$, we get
\[ \varepsilon\delta^{(1)} = \delta(t-\theta^q)^m+F(t,\theta). \]
We put $n=\deg_t \delta$ and express
\[ \delta=a_n t^n + \dots +a_1t+a_0 \in \overline K[t] \]
with $a_0,\dots,a_n \in \overline K$. For $i<0$ we put $a_i=0$. Since $\deg_t \delta^{(1)}=\deg_t \delta=n < \deg_t\left(\delta(t-\theta^q)^m\right)=n+m$, it follows that $\deg_t F(t,\theta)=n+m$. Thus we write $F(t,\theta)=b_{n+m} t^{n+m}+\dots+b_1 t+b_0$ with $b_0,\dots,b_{n+m} \in \overline{\F}_q[\theta]$. Plugging into the previous equation, we obtain
\[ \varepsilon(a_n^q t^n + \dots +a_0^q) = (a_n t^n + \dots +a_0)(t-\theta^q)^m+b_{n+m} t^{n+m}+\dots+b_0. \]
Comparing the coefficients of $t^j$ for $n+1 \leq j \leq n+m$ yields
\[ a_{j-m}+\sum_{i=j-m+1}^{n} {m \choose j-i} (-\theta^q)^{m-j+i} a_i+b_j=0. \]
Since $b_j \in \overline{\F}_q[\theta]$ for all $n+1 \leq j \leq n+m$, we can show by descending induction that $a_j \in \overline{\F}_q[\theta]$ for all $n+1-m \leq j \leq n$. If $n+1-m \leq 0$, then we are done.
Otherwise, comparing the coefficients $t^j$ for $m \leq j \leq n$ yields \[ a_{j-m}+\sum_{i=j-m+1}^{n} {m \choose j-i} (-\theta^q)^{m-j+i} a_i+b_j-\varepsilon a_j^q=0. \] Since $b_j \in \overline{\F}_q[\theta]$ for all $m \leq j \leq n$ and $a_j \in \overline{\F}_q[\theta]$ for all $n+1-m \leq j \leq n$, we can show by descending induction that $a_j \in \overline{\F}_q[\theta]$ for all $0 \leq j \leq n-m$. We conclude that $\delta \in \overline{\F}_q[t,\theta]$. We now show that $\deg_\theta \delta \leq \max\{\frac{qm}{q-1},\frac{\deg_\theta F(t,\theta)}{q}\}$. Otherwise, suppose that $\deg_\theta \delta > \max\{\frac{qm}{q-1},\frac{\deg_\theta F(t,\theta)}{q}\}$. Then $\deg_\theta \delta^{(1)}=q \deg_\theta \delta$. It implies that $\deg_\theta \delta^{(1)} > \deg_\theta (\delta(t-\theta^q)^m)=\deg_\theta \delta+qm$ and $\deg_\theta \delta^{(1)} >\deg_\theta F(t,\theta)$. Hence we get \[\deg_\theta (\varepsilon \delta^{(1)})= \deg_\theta \delta^{(1)} > \deg_\theta(\delta(t-\theta^q)^m+F(t,\theta)), \] which is a contradiction. \end{proof} \subsection{Linear relations: statement of the main result} \begin{theorem} \label{thm: trans ACMPL} Let $w \in \N$. We recall that the set $\mathcal{J}'_w$ consists of positive tuples $\fs = (s_1, \dots, s_n)$ of weight $w$ such that $ q \nmid s_i$ for all $i$. Suppose that we have a non-trivial relation \begin{equation*} a+\sum_{\fs_i \in \mathcal{J}'_w} a_i \frakLi(\fs_i;\fe_i)(\theta)=0, \quad \text{for $a, a_i \in K$.} \end{equation*} Then $q-1 \mid w$ and $a \neq 0$. Further, if $q-1 \mid w$, then there is a unique relation \begin{equation*} 1+\sum_{\fs_i \in \mathcal{J}'_w} a_i \frakLi(\fs_i;\fe_i)(\theta)=0, \quad \text{for $a_i \in K$.} \end{equation*} Also, for indices $(\fs_i; \fe_i)$ with nontrivial coefficient $a_i$, we have $\fe_i = (1, \dots, 1)$. In particular, the ACMPL's in $\mathcal{AS}_w$ are linearly independent over $K$. \end{theorem} \begin{remark} \label{rem:remark on Brown} We emphasize that although Theorem \ref{thm: trans ACMPL} is a purely transcendental result, it is crucial that we need the full strength of algebraic theory for ACMPL's (i.e., Theorem \ref{thm: strong Brown}) to conclude (see the last step of the proof). \end{remark} As a direct consequence of Theorem \ref{thm: trans ACMPL}, we obtain: \begin{theorem} \label{thm:ACMPL} Let $w \in \N$. Then the ACMPL's in $\mathcal{AS}_w$ form a basis for $\mathcal{AL}_w$. In particular, \[ \dim_K \mathcal{AL}_w=s(w). \] \end{theorem} \begin{proof} By Theorem \ref{thm: trans ACMPL} the ACMPL's in $\mathcal{AS}_w$ are all linearly independent over $K$. Then by Theorem \ref{thm: strong Brown} we deduce that the ACMPL's in $\mathcal{AS}_w$ form a basis for $\mathcal{AL}_w$. Hence $\dim_K \mathcal{AL}_w=|\mathcal{AS}_w|=s(w)$ as required. \end{proof} \subsection{Proof of Theorem \ref{thm: trans ACMPL}} \ppar We outline the ideas of the proof. Starting from such a nontrivial relation, we apply the Anderson-Brownawell-Papanikolas criterion in \cite{ABP04} and reduce to the solution of a system of $\sigma$-linear equations. In contrast to \cite[\S 4 and \S 5]{ND21}, this system has a unique solution when $q-1$ divides $w$. We first show that for such a weight $w$ up to a scalar in $K^\times$ there is at most one linear relation between ACMPL's in $\mathcal{AS}_w$ and $\widetilde \pi^w$. Second we show a linear relation between ACMPL's in $\mathcal{AS}_w$ and $\widetilde \pi^w$ where the coefficient of $\widetilde \pi^w$ is nonzero. 
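To illustrate the second step in the simplest case $w=q-1$, the identity $\Li(q-1)=\zeta_A(q-1)=-D^{-1}_1 \widetilde \pi^{q-1}$, which is recalled at the end of this subsection, already provides such a relation:
\[ (-D_1)^{-1} \widetilde \pi^{q-1}-\Li(q-1)=0. \]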
For the second step we use Brown's theorem for ACMPL's, i.e., Theorem \ref{thm: strong Brown}.
We are back to the proof of Theorem \ref{thm: trans ACMPL}. We claim that if $q-1 \nmid w$, then any linear relation
\[ a+\sum_{\fs_i \in \mathcal{J}'_w} a_i \frakLi(\fs_i;\fe_i)(\theta)=0 \]
with $a,a_i \in K$ implies that $a=0$. In fact, if we recall that $\overline{\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline{K}$, then the claim follows from Eq.~\eqref{eq: ACMPL} and the fact that $\widetilde{\pi}^w\not\in \overline{\mathbb F}_q \left(\left( \frac{1}{\theta}\right)\right)$ since $q-1 \nmid w$.
The proof is by induction on the weight $w \in \N$. For $w=1$, we distinguish two cases:
\begin{itemize}
\item If $q>2$, then by the previous remark it suffices to show that if
\[a+\sum_i a_i \frakLi(1;\epsilon_i)(\theta)=0, \]
then $a_i=0$ for all $i$. In fact, this follows immediately from Lemma \ref{lem: same character}.
\item If $q=2$, then $w=q-1=1$. Then the theorem follows from the facts that there is only one index $(\fs_1;\fe_1)=(1;1)$ and that $\Li(1)=\zeta_A(1)=-D^{-1}_1 \widetilde \pi$.
\end{itemize}
Suppose that Theorem \ref{thm: trans ACMPL} holds for all $w'<w$. We now prove that it holds for $w$. Suppose that we have a linear relation
\begin{equation} \label{eq: non trivial relation MZV}
a+\sum_i a_i \frakLi(\fs_i;\fe_i)(\theta)=0.
\end{equation}
By Lemma \ref{lem: same character} and its proof we can suppose further that the $\fe_i$ all have the same character, i.e., there exists $\epsilon \in \Fq^\times$ such that for all $i$,
\begin{equation} \label{eq: same character}
\chi(\fe_i)=\epsilon_{i1} \dots \epsilon_{i\ell_i}=\epsilon.
\end{equation}
We now apply Theorem \ref{theorem: linear independence} to our setting of ACMPL's. We recall that by Eq. \eqref{eq: series Li},
\[ \frakLi(\fs;\fe)=\frak L(\fs;\fQ_{\fs,\fe}'), \]
and also
\begin{align} \label{eq: I recall}
I(\fs_i; \fe_i) &= \{\emptyset, (s_{i1}; \epsilon_{i1}), \dots, (s_{i1}, \dots, s_{i(\ell_{i}-1)}; \epsilon_{i1}, \dots, \epsilon_{i(\ell_i-1)})\}, \notag \\
I &= \cup_i I(\fs_i;\fe_i).
\end{align}
We know that the hypotheses are satisfied:
\begin{itemize}
\item[(LW)] By the induction hypothesis, for any weight $w'<w$, the values $\frakLi(\frak t;\fe)(\theta)$ with $(\frak t;\fe) \in I$ and $w(\frak t)=w'$ are all $K$-linearly independent.
\item[(LD)] By \eqref{eq: non trivial relation MZV}, there exist $a \in A$ and $a_i \in A$ for $1 \leq i \leq n$ which are not all zero such that
\begin{equation*}
a+\sum_{i=1}^n a_i \frakLi(\fs_i;\fe_i)(\theta)=0.
\end{equation*}
\end{itemize}
Thus Theorem \ref{theorem: linear independence} implies that for all $(\mathfrak t;\fe) \in I$, $f_{\mathfrak t;\fe}(\theta)$ belongs to $K$ where $f_{\mathfrak t;\fe}$ is given by
\begin{equation*}
f_{\mathfrak t;\fe}:= \sum_i a_i(t) \frakLi(s_{i(k+1)},\dots,s_{i \ell_i};\epsilon_{i(k+1)},\dots,\epsilon_{i \ell_i}).
\end{equation*}
Here, the sum runs through the set of indices $i$ such that $(\mathfrak t;\fe)=(s_{i1},\dots,s_{i k};\epsilon_{i1},\dots,\epsilon_{i k})$ for some $0 \leq k \leq \ell_i-1$.
We derive a direct consequence of the previous rationality result. Let $(\mathfrak t;\fe) \in I$ and $\mathfrak t \neq \emptyset$. Then $(\mathfrak t;\fe)=(s_{i1},\dots,s_{i k};\epsilon_{i1},\dots,\epsilon_{i k})$ for some $i$ and $1 \leq k \leq \ell_i-1$. We denote by $J(\mathfrak t;\fe)$ the set of all such $i$.
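To illustrate these notations, consider the following small configuration, chosen purely for illustration (it does not come from the relation \eqref{eq: non trivial relation MZV} above): assume $q>2$, $w=3$, $(\fs_1;\fe_1)=(1,2;\epsilon,1)$ and $(\fs_2;\fe_2)=(1,1,1;\epsilon,1,1)$. Then $I=\{\emptyset,(1;\epsilon),(1,1;\epsilon,1)\}$, and for $(\mathfrak t;\fe)=(1;\epsilon)$ we have $J(\mathfrak t;\fe)=\{1,2\}$ and
\[ f_{(1;\epsilon)}=a_1(t)\, \frakLi(2;1)+a_2(t)\, \frakLi(1,1;1,1). \]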
We know that there exists $a_{\mathfrak t;\fe} \in K$ such that
\begin{equation*}
a_{\mathfrak t;\fe}+f_{\mathfrak t;\fe}(\theta)=0,
\end{equation*}
or equivalently,
\begin{equation*}
a_{\mathfrak t;\fe}+\sum_{i \in J(\frak t;\fe)} a_i \frakLi(s_{i(k+1)},\dots,s_{i \ell_i};\epsilon_{i(k+1)},\dots,\epsilon_{i \ell_i})(\theta)=0.
\end{equation*}
The ACMPL's appearing in the above equality belong to $\mathcal{AS}_{w-w(\mathfrak t)}$. By the induction hypothesis, we can suppose that $\epsilon_{i(k+1)}=\dots=\epsilon_{i \ell_i}=1$. Further, if $q-1 \nmid w-w(\frak t)$, then $a_i(t)=0$ for all $i \in J(\frak t;\fe)$.
Therefore, letting $(\fs_i;\fe_i)=(s_{i1},\dots,s_{i \ell_i};\epsilon_{i1},\dots,\epsilon_{i \ell_i})$ we can suppose that $s_{i2},\dots,s_{i \ell_i}$ are all divisible by $q-1$ and $\epsilon_{i2}=\dots=\epsilon_{i \ell_i}=1$. In particular, for all $i$, $\epsilon_{i1}=\chi(\fe_i)=\epsilon$.
Now we want to solve \eqref{eq:equation for delta}. Further, in this system we can assume that the corresponding element $b \in \Fq[t] \setminus \{0\}$ equals $1$. We define
\[ J:=I \cup \{(\fs_i;\fe_i) : 1 \leq i \leq n\} \]
where $I$ is given as in \eqref{eq: I recall}. For $(\mathfrak t;\fe) \in J$ we denote by $J_0(\frak t;\fe)$ the set of pairs $(\frak t';\fe') \in J$ such that there exist $i$ and $0 \leq j <\ell_i$ so that $(\frak t;\fe)=(s_{i1},s_{i2},\dots,s_{ij};\epsilon,1,\dots,1)$ and $(\frak t';\fe')=(s_{i1},s_{i2},\dots,s_{i(j+1)};\epsilon,1,\dots,1)$. In particular, for $(\frak t;\fe)=(\fs_i;\fe_i)$, $J_0(\frak t;\fe)$ is the empty set. For $(\mathfrak t;\fe) \in J \setminus \{\emptyset\}$, we also put
\[ m_{\frak t}:=\frac{w-w(\frak t)}{q-1} \in \mathbb Z^{\geq 0}. \]
Then it is clear that \eqref{eq:equation for delta} is equivalent to finding $(\delta_{\mathfrak t;\fe})_{(\mathfrak t;\fe) \in J} \in \Mat_{1 \times |J|}(\overline K[t])$ such that
\begin{equation} \label{eq:delta}
\delta_{\mathfrak t;\fe}=\delta_{\mathfrak t;\fe}^{(-1)} (t-\theta)^{w-w(\frak t)}+\sum_{(\mathfrak t';\fe') \in J_0(\frak t;\fe)} \delta_{\mathfrak t';\fe'}^{(-1)} (t-\theta)^{w-w(\frak t)}, \quad \text{for all } (\frak t;\fe) \in J \setminus \{\emptyset\},
\end{equation}
and
\begin{equation} \label{eq:delta emptyset}
\delta_{\mathfrak t;\fe}=\delta_{\mathfrak t;\fe}^{(-1)} (t-\theta)^{w-w(\frak t)}+\sum_{(\mathfrak t';\fe') \in J_0(\frak t;\fe)} \delta_{\mathfrak t';\fe'}^{(-1)} \gamma^{(-1)} (t-\theta)^{w-w(\frak t)}, \quad \text{for } (\frak t;\fe)=\emptyset.
\end{equation}
Here $\gamma^{q-1}=\epsilon$. In fact, for $(\frak t;\fe)=(\fs_i;\fe_i)$, the corresponding equation becomes $\delta_{\fs_i;\fe_i}=\delta_{\fs_i;\fe_i}^{(-1)}$. Thus $\delta_{\fs_i;\fe_i}=a_i(t) \in \Fq[t]$.
Letting $y$ be a variable, we denote by $v_y$ the valuation associated to the place $y$ of the field $\Fq(y)$. We put
\[ T:=t-t^q, \quad X:=t^q-\theta^q. \]
We claim that
\begin{itemize}
\item[1)] For all $(\mathfrak t;\fe) \in J \setminus \{\emptyset\}$, the polynomial $\delta_{\frak t;\fe}$ is of the form
\[ \delta_{\frak t;\fe}=f_{\frak t} \left(X^{m_{\frak t}}+\sum_{i=0}^{m_{\frak t}-1} P_{\frak t,i}(T) X^i \right)\]
where
\begin{itemize}
\item $f_{\frak t} \in \Fq[t]$,
\item for all $0 \leq i \leq m_{\frak t}-1$, $P_{\frak t,i}(y)$ belongs to $\Fq(y)$ with $v_y(P_{\frak t,i}) \geq 1$.
\end{itemize}
\item[2)] For all $(\mathfrak t;\fe) \in J \setminus \{\emptyset\}$ and all $(\frak t';\fe') \in J_0(\frak t;\fe)$, there exists $P_{\frak t,\frak t'} \in \Fq(y)$ such that
\[ f_{\frak t'}=f_{\frak t} P_{\frak t,\frak t'}(T).
\]
In particular, if $f_{\frak t}=0$, then $f_{\frak t'}=0$.
\end{itemize}
The proof is by induction on $m_{\frak t}$. We start with $m_{\frak t}=0$. Then $\frak t=\fs_i$ and $\fe=\fe_i$ for some $i$. We have observed that $\delta_{\fs_i;\fe_i}=a_i(t) \in \Fq[t]$ and the assertion follows.
Suppose that the claim holds for all $(\frak t;\fe) \in J \setminus \{\emptyset\}$ with $m_{\frak t}<m$. We now prove the claim for all $(\frak t;\fe) \in J \setminus \{\emptyset\}$ with $m_{\frak t}=m$. In fact, we fix such a pair $(\frak t;\fe)$ and want to find $\delta_{\frak t;\fe} \in \overline K[t]$ such that
\begin{equation} \label{eq:delta1}
\delta_{\mathfrak t;\fe}=\delta_{\mathfrak t;\fe}^{(-1)} (t-\theta)^{(q-1)m}+\sum_{(\mathfrak t';\fe') \in J_0(\frak t;\fe)} \delta_{\mathfrak t';\fe'}^{(-1)} (t-\theta)^{(q-1)m}.
\end{equation}
By the induction hypothesis, for all $(\frak t';\fe') \in J_0(\frak t;\fe)$, we know that
\[ \delta_{\frak t';\fe'}=f_{\frak t'} \left(X^{m_{\frak t'}}+\sum_{i=0}^{m_{\frak t'}-1} P_{\frak t',i}(T) X^i \right)\]
where
\begin{itemize}
\item $f_{\frak t'} \in \Fq[t]$,
\item for all $0 \leq i \leq m_{\frak t'}-1$, $P_{\frak t',i}(y) \in \Fq(y)$ with $v_y(P_{\frak t',i}) \geq 1$.
\end{itemize}
For $(\frak t';\fe') \in J_0(\frak t;\fe)$, we write $\frak t'=(\frak t,(m-k)(q-1))$ with $0 \leq k < m$ and $k \not\equiv m \pmod{q}$, in particular $m_{\frak t'}=k$. We put $f_k=f_{\frak t'}$ and $P_{\frak t',i}=P_{k,i}$ so that
\begin{equation} \label{eq:delta2}
\delta_{\frak t';\fe'}=f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right) \in \Fq[t,\theta^q].
\end{equation}
By Lemma \ref{lem: KuanLin}, $\delta_{\frak t;\fe}$ belongs to $K[t]$, and $\deg_\theta \delta_{\frak t;\fe} \leq mq$. Further, since $\delta_{\frak t;\fe}$ is divisible by $(t-\theta)^{(q-1)m}$, we write $\delta_{\frak t;\fe}=F (t-\theta)^{(q-1)m}$ with $F \in K[t]$ and $\deg_\theta F \leq m$. Dividing \eqref{eq:delta1} by $(t-\theta)^{(q-1)m}$ and twisting once yields
\begin{equation} \label{eq:F}
F^{(1)}=F (t-\theta)^{(q-1)m}+\sum_{(\mathfrak t';\fe') \in J_0(\frak t;\fe)} \delta_{\mathfrak t';\fe'}.
\end{equation}
As $\delta_{\frak t';\fe'} \in \Fq[t,\theta^q]$ for all $(\frak t';\fe') \in J_0(\frak t;\fe)$, it follows that $F (t-\theta)^{(q-1)m} \in \Fq[t,\theta^q]$. As $\deg_\theta F \leq m$, we get
\[ F=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{m-iq}, \quad \text{for $f_{m-iq} \in \Fq[t]$}. \]
Thus
\begin{align*}
& F (t-\theta)^{(q-1)m}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}, \\
& F^{(1)}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta^q)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}.
\end{align*}
Putting these and \eqref{eq:delta2} into \eqref{eq:F} gives
\begin{align*}
& \sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}+\sum_{\substack{0 \leq k<m \\ k \not\equiv m \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right).
\end{align*}
Comparing the coefficients of powers of $X$ yields the following linear system in the variables $f_0,\dots,f_{m-1}$:
\begin{equation*}
B_{\big| y=T} \begin{pmatrix} f_{m-1} \\ \vdots \\ f_0 \end{pmatrix}=f_m \begin{pmatrix} Q_{m-1} \\ \vdots \\ Q_0 \end{pmatrix}_{\big| y=T}.
\end{equation*}
Here for $0 \leq i \leq m-1$, $Q_i={m \choose i} y^{m-i} \in y\Fq[y]$ and $B=(B_{ij})_{0 \leq i,j \leq m-1} \in \Mat_m(\Fq(y))$ such that
\begin{itemize}
\item $v_y(B_{ij}) \geq 1$ if $i>j$,
\item $v_y(B_{ij}) \geq 0$ if $i<j$,
\item $v_y(B_{ii})=0$ as $B_{ii}=\pm 1$.
\end{itemize}
The above properties follow from the fact that $P_{k,i} \in \Fq(y)$ and $v_y(P_{k,i}) \geq 1$. Thus $v_y(\det B)=0$ so that $\det B \neq 0$. It follows that for all $0 \leq i \leq m-1$, $f_i=f_m P_i(T)$ with $P_i \in \Fq(y)$ and $v_y(P_i) \geq 1$, and we are done.
To conclude, we have to solve \eqref{eq:delta emptyset}, i.e., the equation for $(\frak t;\fe)=\emptyset$. This requires some extra work, as a factor $\gamma^{(-1)}$ appears on the right-hand side of \eqref{eq:delta emptyset}. We use $\gamma^{(-1)}=\gamma/\epsilon$ and put $\delta:=\delta_{\emptyset}/\gamma \in \overline K[t]$. Then we have to solve
\begin{equation} \label{eq:delta emptyset2}
\epsilon \delta=\delta^{(-1)} (t-\theta)^w+\sum_{(\mathfrak t';\fe') \in J_0(\emptyset)} \delta_{\mathfrak t';\fe'}^{(-1)} (t-\theta)^w.
\end{equation}
We distinguish two cases.
\subsubsection{Case 1: $q-1 \nmid w$, say $w=m(q-1)+r$ with $0<r<q-1$}
\ppar We know that for all $(\frak t';\fe') \in J_0(\emptyset)$, say $\frak t'=((m-k)(q-1)+r)$ with $0 \leq k \leq m$ and $k \not\equiv m-r \pmod{q}$,
\begin{equation} \label{eq:delta4}
\delta_{\frak t';\fe'}=f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right) \in \Fq[t,\theta^q]
\end{equation}
where
\begin{itemize}
\item $f_k \in \Fq[t]$,
\item for all $0 \leq i \leq k-1$, $P_{k,i}(y)$ belongs to $\Fq(y)$ with $v_y(P_{k,i}) \geq 1$.
\end{itemize}
By Lemma \ref{lem: KuanLin}, $\delta$ belongs to $K[t]$. We claim that $\deg_\theta \delta \leq mq$. Otherwise, we have $\deg_\theta \delta > mq$. Twisting \eqref{eq:delta emptyset2} once gives
\begin{equation*}
\epsilon \delta^{(1)}=\delta (t-\theta^q)^w+\sum_{(\mathfrak t';\fe') \in J_0(\emptyset)} \delta_{\mathfrak t';\fe'} (t-\theta^q)^w.
\end{equation*}
As $\deg_\theta \delta > mq$, we compare the degrees of $\theta$ on both sides and obtain
\[ q\deg_\theta \delta=\deg_\theta \delta+wq. \]
Thus $q-1 \mid w$, which is a contradiction. We conclude that $\deg_\theta \delta \leq mq$.
From \eqref{eq:delta emptyset2} we see that $\delta$ is divisible by $(t-\theta)^w$. Thus we write $\delta=F (t-\theta)^w$ with $F \in K[t]$ and $\deg_\theta F \leq mq-w=m-r$. Dividing \eqref{eq:delta emptyset2} by $(t-\theta)^w$ and twisting once yields
\begin{equation} \label{eq:F1}
\epsilon F^{(1)}=F (t-\theta)^w+\sum_{(\mathfrak t';\fe') \in J_0(\emptyset)} \delta_{\mathfrak t';\fe'}.
\end{equation}
Since $\delta_{\frak t';\fe'} \in \Fq[t,\theta^q]$ for all $(\frak t';\fe') \in J_0(\emptyset)$, it follows that $F (t-\theta)^w \in \Fq[t,\theta^q]$. As $\deg_\theta F \leq m-r$, we write
\[ F=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta)^{m-r-iq}, \quad \text{for $f_{m-r-iq} \in \Fq[t]$}. \]
It follows that
\begin{align*}
& F (t-\theta)^w=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} X^{m-i}, \\
& F^{(1)}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta^q)^{m-r-iq}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (T+X)^{m-r-iq}.
\end{align*}
Putting these and \eqref{eq:delta4} into \eqref{eq:F1} yields
\begin{align*}
& \epsilon \sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (T+X)^{m-r-iq} \\
&=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} X^{m-i}+\sum_{\substack{0 \leq k \leq m \\ k \not\equiv m-r \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right).
\end{align*}
Comparing the coefficients of powers of $X$ yields the following linear system in the variables $f_0,\dots,f_m$:
\begin{equation*}
B_{\big| y=T} \begin{pmatrix} f_m \\ \vdots \\ f_0 \end{pmatrix}=0.
\end{equation*}
Here $B=(B_{ij})_{0 \leq i,j \leq m} \in \Mat_{m+1}(\Fq(y))$ such that
\begin{itemize}
\item $v_y(B_{ij}) \geq 1$ if $i>j$,
\item $v_y(B_{ij}) \geq 0$ if $i<j$,
\item $v_y(B_{ii})=0$ as $B_{ii} \in \Fq^\times$.
\end{itemize}
The above properties follow from the fact that $P_{k,i} \in \Fq(y)$ and $v_y(P_{k,i}) \geq 1$. Thus $v_y(\det B)=0$. Hence $f_0=\dots=f_m=0$. It follows that $\delta=0$, hence $\delta_\emptyset=0$, and that $\delta_{\frak t';\fe'}=0$ for all $(\frak t';\fe') \in J_0(\emptyset)$. We conclude that $\delta_{\frak t;\fe}=0$ for all $(\frak t;\fe) \in J$. In particular, for all $i$, $a_i(t)=\delta_{\fs_i;\fe_i}=0$, which is a contradiction. Thus this case can never happen.
\subsubsection{Case 2: $q-1 \mid w$, say $w=m(q-1)$}
\ppar By arguments similar to those above, we show that $\delta=F (t-\theta)^{(q-1)m}$ with $F \in K[t]$ of the form
\[ F=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{m-iq}, \quad \text{for $f_{m-iq} \in \Fq[t]$}. \]
Thus
\begin{align*}
& F (t-\theta)^{(q-1)m}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}, \\
& F^{(1)}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta^q)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}.
\end{align*}
Putting these and \eqref{eq:delta2} into \eqref{eq:delta emptyset2} gives
\begin{align*}
& \epsilon \sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}\\
&=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}+\sum_{\substack{0 \leq k<m \\ k \not\equiv m \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right).
\end{align*}
Comparing the coefficients of powers of $X$ yields
\[ \epsilon f_m=f_m \]
and the following linear system in the variables $f_0,\dots,f_{m-1}$:
\begin{equation*}
B_{\big| y=T} \begin{pmatrix} f_{m-1} \\ \vdots \\ f_0 \end{pmatrix}=f_m \begin{pmatrix} Q_{m-1} \\ \vdots \\ Q_0 \end{pmatrix}_{\big| y=T}.
\end{equation*}
Here for $0 \leq i \leq m-1$, $Q_i={m \choose i} y^{m-i} \in y\Fq[y]$ and $B=(B_{ij})_{0 \leq i,j \leq m-1} \in \Mat_m(\Fq(y))$ such that
\begin{itemize}
\item $v_y(B_{ij}) \geq 1$ if $i>j$,
\item $v_y(B_{ij}) \geq 0$ if $i<j$,
\item $v_y(B_{ii})=0$ as $B_{ii} \in \Fq^\times$.
\end{itemize}
The above properties follow from the fact that $P_{k,i} \in \Fq(y)$ and $v_y(P_{k,i}) \geq 1$. Thus $v_y(\det B)=0$ so that $\det B \neq 0$. We distinguish two subcases.
\medskip
\noindent {\bf Subcase 1: $\epsilon \neq 1$.}
\ppar It follows that $f_m=0$. Then $f_0=\dots=f_{m-1}=0$. Thus $\delta_{\frak t;\fe}=0$ for all $(\frak t;\fe) \in J$. In particular, for all $i$, $a_i(t)=\delta_{\fs_i;\fe_i}=0$. This is a contradiction and we conclude that this case can never happen.
\medskip
\noindent {\bf Subcase 2: $\epsilon=1$.}
\ppar It follows that $\gamma \in \Fq^\times$ and thus
\begin{itemize}
\item[1)] The polynomial $\delta_\emptyset=\delta \gamma$ is of the form
\[ \delta_\emptyset=f_\emptyset \left(X^m+\sum_{i=0}^{m-1} P_{\emptyset,i}(T) X^i \right)\]
with
\begin{itemize}
\item $f_\emptyset \in \Fq[t]$,
\item for all $0 \leq i \leq m-1$, $P_{\emptyset,i}(y) \in \Fq(y)$ with $v_y(P_{\emptyset,i}) \geq 1$.
\end{itemize}
\item[2)] For all $(\frak t';\fe') \in J_0(\emptyset)$, there exists $P_{\emptyset,\frak t'} \in \Fq(y)$ such that
\[ f_{\frak t'}=f_\emptyset P_{\emptyset,\frak t'}(T). \]
\end{itemize}
Hence, up to a factor in $\Fq(t)$, there exists a unique solution $(\delta_{\mathfrak t;\fe})_{(\mathfrak t;\fe) \in J} \in \Mat_{1 \times |J|}(K[t])$ of the system \eqref{eq:delta}--\eqref{eq:delta emptyset}. Recall that for all $i$, $a_i(t)=\delta_{\fs_i;\fe_i}$.
Therefore, up to a scalar in $K^\times$, there exists at most one non-trivial relation
\[ a \widetilde \pi^w+\sum_i a_i \Li \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}=0 \]
with $a, a_i \in K$ and $\Li \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \in \mathcal{AS}_w$. Further, we must have $\fve_i=(1,\dots,1)$ for all $i$.
To conclude, it suffices to exhibit such a relation with $a \neq 0$. In fact, we recall $w=(q-1)m$ and then express $\Li(q-1)^m=\Li \begin{pmatrix} 1 \\ q-1 \end{pmatrix}^m$ as a $K$-linear combination of ACMPL's of weight $w$. By Theorem \ref{thm: strong Brown}, we can write
\[ \Li(q-1)^m=\Li \begin{pmatrix} 1 \\ q-1 \end{pmatrix}^m=\sum_i a_i \Li \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}, \quad \text{where $a_i \in K$, $\Li \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \in \mathcal{AS}_w.$} \]
We note that $\Li(q-1)=\zeta_A(q-1)=-D^{-1}_1 \widetilde \pi^{q-1}$. Thus
\[ (-D_1)^{-m} \widetilde \pi^w-\sum_i a_i \Li \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}=0, \]
which is the desired relation.
\section{Applications to AMZV's and Zagier-Hoffman's conjectures in positive characteristic} \label{sec:applications}
In this section we give two applications of the study of ACMPL's. First, we use Theorem \ref{thm:ACMPL} to prove Theorem \ref{thm: ZagierHoffman AMZV}, which computes the dimension of the vector space $\mathcal{AZ}_w$ spanned by the alternating multiple zeta values in positive characteristic (AMZV's) of fixed weight $w$ introduced by Harada \cite{Har21}. Consequently, we determine all linear relations among AMZV's. To do so we develop an algebraic theory to obtain a weak version of Brown's theorem for AMZV's. Then we deduce that $\mathcal{AZ}_w$ and $\mathcal{AL}_w$ are equal and conclude. In contrast to the setting of MZV's, although the results are clean, without the theory of ACMPL's we are unable to obtain either sharp upper bounds or sharp lower bounds for $\mathcal{AZ}_w$ for general $w$.
Second, we restrict our attention to MZV's and determine all linear relations between MZV's. In particular, we obtain a proof of Zagier-Hoffman's conjectures in positive characteristic in full generality (i.e., Theorem \ref{thm: ZagierHoffman}) and generalize the work of one of the authors \cite{ND21}.
\subsection{Linear relations between AMZV's} \label{sec: application AMZV}
\subsubsection{Preliminaries}
For $d \in \mathbb{Z}$ and for $\fs=(s_1,\dots,s_n) \in \N^n$, recalling $S_d(\fs)$ and $S_{<d}(\fs)$ given in \S \ref{sec: definition Sd}, and further letting $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ be an array, we recall (see \S \ref{sec: definition Sd})
\begin{equation*}
S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K
\end{equation*}
and
\begin{equation*}
S_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d > \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K.
\end{equation*} One verifies easily the following formulas: \begin{align*} & S_{<d} \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} = S_{<d}(s_1, \dots, s_n),\quad S_d \begin{pmatrix} 1 &\dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} = S_{d}(s_1, \dots, s_n),\\ & S_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} = \varepsilon^d S_d(s),\quad S_{d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} = S_{d} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} S_{<d} \begin{pmatrix} \fve_{-} \\ \fs_{-} \end{pmatrix}. \end{align*} Harada \cite{Har21} introduced the alternating multiple zeta value (AMZV) as follows; \begin{equation*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \sum \limits_{d \geq 0} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K_{\infty}. \end{equation*} Using Chen's formula (see \cite{Che15}), Harada proved that for $s, t \in \mathbb{N}$ and $\varepsilon, \epsilon \in \mathbb{F}_q^{\times}$, we have \begin{equation} \label{eq: sum AMZV} S_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} S_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = S_d \begin{pmatrix} \varepsilon\epsilon \\ s+t \end{pmatrix} + \sum \limits_i \Delta^i_{s,t} S_d \begin{pmatrix} \varepsilon\epsilon & 1 \\ s+t-i & i \end{pmatrix} , \end{equation} where \begin{equation} \label{eq:Delta Chen} \Delta^i_{s,t} = \begin{cases} (-1)^{s-1} {i - 1 \choose s - 1} + (-1)^{t-1} {i-1 \choose t-1} & \quad \text{if } q - 1 \mid i \text{ and } 0 < i < s + t, \\ 0 & \quad \text{otherwise.} \end{cases} \end{equation} \begin{remark} \label{rmk: reduce sum} When $s + t \leq q$, we deduce from the above formulas that \begin{align*} S_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} S_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = S_d \begin{pmatrix} \varepsilon\epsilon \\ s+t \end{pmatrix}. \end{align*} \end{remark} He then proved similar results for products of AMZV's (see \cite{Har21}): \begin{proposition} \label{sums} Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ be two arrays. Then \begin{enumerate} \item There exist $f_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} $ with $ \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} S_d \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f_i S_d \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} \item There exist $f'_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} $ with $ \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}'_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} S_{<d} \begin{pmatrix} \fve \\ \fs \end{pmatrix} S_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f'_i S_{<d} \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. 
\end{equation*} \item There exist $f''_i \in \mathbb{F}_q$ and arrays $ \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} $ with $ \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} \leq \begin{pmatrix} \fve \\ \fs \end{pmatrix} + \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ and $\depth(\mathfrak{u}''_i) \leq \depth(\fs) + \depth(\mathfrak{t})$ for all $i$ such that \begin{equation*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} S_{<d} \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f''_i S_d \begin{pmatrix} \fm''_i \\ \mathfrak{u}''_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{equation*} \end{enumerate} \end{proposition} We denote by $\mathcal{AZ}$ the $K$-vector space generated by the AMZV's and $\mathcal{AZ}_w$ the $K$-vector space generated by the AMZV's of weight $w$. It follows from Proposition~\ref{sums} that $\mathcal{AZ}$ is a $K$-algebra. \subsubsection{Algebraic theory for AMZV's} \label{sec: alg theory AMZV} We can extend an algebraic theory for AMZV's which follow the same line as that in \S \ref{sec:algebraic part}. \begin{definition} A binary relation is a $K$-linear combination of the form \begin{equation*} \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} =0 \quad \text{for all } d \in \mathbb{Z}, \end{equation*} where $a_i,b_i \in K$ and $ \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} , \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays of the same weight. A binary relation is called a fixed relation if $b_i = 0$ for all $i$. \end{definition} We denote by $\mathfrak{R}_{w}$ the set of all binary relations of weight $w$. From Lemma \ref{agree} and the relation $R_{\varepsilon}$ defined in \S \ref{sec:Todd}, we obtain the following binary relation \begin{equation*} R_{\varepsilon} \colon \quad S_d \begin{pmatrix} \varepsilon\\ q \end{pmatrix} + \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} =0, \end{equation*} where $D_1 = \theta^q - \theta$. For later definitions, let $R \in \mathfrak{R}_w$ be a binary relation of the form \begin{equation} \label{eq: Rd} R(d) \colon \quad \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} =0, \end{equation} where $a_i,b_i \in K$ and $ \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} , \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays of the same weight. We now define some operators on $K$-vector spaces of binary relations. First, we define operators $\mathcal B^*$. Let $ \begin{pmatrix} \sigma \\ v \end{pmatrix} $ be an array. 
We introduce \begin{equation*} \mathcal B^*_{\sigma,v} \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+v} \end{equation*} as follows: for each $R \in \mathfrak{R}_{w}$ as given in \eqref{eq: Rd}, the image $\mathcal B^*_{\sigma,v}(R) = S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \sum_{j < d} R(j)$ is a fixed relation of the form \begin{align*} 0 &= S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \left(\sum \limits_ia_i S_{<d} \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{<d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \right) \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{<d} \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{<d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \sigma & \fve_i \\ v& \fs_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma & \fe_i \\ v& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \sum \limits_j f_{i,j} S_d \begin{pmatrix} \fm_{i,j} \\ \mathfrak{u}_{i,j} \end{pmatrix} . \end{align*} The last equality follows from Proposition \ref{sums}. Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \sigma_1 & \dots & \sigma_n \\ v_1 & \dots & v_n \end{pmatrix} $ be an array. We define an operator $\mathcal{B}^*_{\Sigma,V}(R) $ by \begin{equation*} \mathcal B^*_{\Sigma,V}(R) := \mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_n,v_n}(R). \end{equation*} \begin{lemma} \label{polybesao AMZV} Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \sigma_1 & \dots & \sigma_n \\ v_1 & \dots & v_n \end{pmatrix} $ be an array. Under the notations of \eqref{eq: Rd}, suppose that for all $i$, $v_n + t_{i1} \leq q$ where $\mathfrak{t}_{i} = (t_{i1}, \mathfrak{t}_{i-})$ . Then $\mathcal B^*_{\Sigma,V}(R)$ is of the form \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \Sigma & \fve_i \\ V& \fs_i \end{pmatrix} & + \sum \limits_i b_i S_d \begin{pmatrix} \Sigma & \fe_i \\ V& \mathfrak{t}_i \end{pmatrix} \\ & + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_1 & \dots & \sigma_{n-1} & \sigma_n \epsilon_{i1} &\fe_{i-} \\ v_1 & \dots & v_{n-1} & v_n+ t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0 . \end{align*} \end{lemma} \begin{proof} From the definition, we have $\mathcal{B}^*_{\sigma_n,v_n}(R)$ is of the form \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \sigma_n & \fve_i \\ v_n& \fs_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n & \fe_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} = 0. 
\end{align*} For all $i$, since $v_n + t_{i1} \leq q$, it follows from Remark \ref{rmk: reduce sum} that \begin{align*} S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} = S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_d \begin{pmatrix} \epsilon_{i1} \\ t_{i1} \end{pmatrix} S_{<d} \begin{pmatrix} {\fe_i}_- \\ {\mathfrak{t}_i}_- \end{pmatrix} &= S_d \begin{pmatrix} \sigma_n \epsilon_{i1} \\ v_n + t_{i1} \end{pmatrix} S_{<d} \begin{pmatrix} {\fe_i}_- \\ {\mathfrak{t}_i}_- \end{pmatrix}\\ &= S_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\fe_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix}, \end{align*} hence $\mathcal{B}^*_{\sigma_n,v_n}(R)$ is of the form \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \sigma_n & \fve_i \\ v_n& \fs_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n & \fe_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\fe_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix} = 0. \end{align*} Apply the operator $\mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_{n - 1},v_{n - 1}}$ to $\mathcal{B}^*_{\sigma_n,v_n}(R)$, the result then follows from the definition. \end{proof} Second, we define operators $\mathcal C$. Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ be an array of weight $v$. We introduce \begin{equation*} \mathcal C_{\Sigma,V}(R) \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+v} \end{equation*} as follows: for each $R \in \mathfrak{R}_{w}$ as given in \eqref{eq: Rd}, the image $\mathcal C_{\Sigma,V}(R) = R(d) S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ is a binary relation of the form \begin{align*} 0 &= \left( \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \right) S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i f_i S_d \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} + \sum \limits_i f'_i S_{d+1} \begin{pmatrix} \fm'_i \\ \mathfrak{u}'_i \end{pmatrix} . \end{align*} The last equality follows from Proposition \ref{sums}. In particular, the following proposition gives the form of $\mathcal C_{\Sigma,V}(R_{\varepsilon})$. \begin{proposition} \label{polycer1 AMZV} Let $ \begin{pmatrix} \Sigma \\ V \end{pmatrix}$ be an array with $V = (v_1,V_{-})$ and $\Sigma = (\sigma_1, \Sigma_{-})$. 
Then $\mathcal C_{\Sigma,V}(R_{\varepsilon})$ is of the form \begin{equation*} S_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon& \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation*} where $a_i, b_i \in K$ and $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}, \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying \begin{itemize} \item $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \leq \begin{pmatrix} \varepsilon \\ q \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix}$ and $s_{i1} < q + v_1$ for all $i$; \item $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ for all $i$. \end{itemize} \end{proposition} \begin{proof} From the definition, $\mathcal C_{\Sigma,V}(R_{\varepsilon})$ is of the form \begin{equation*} S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \varepsilon^{-1} D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} = 0. \end{equation*} It follows from \eqref{eq: sum AMZV} and Proposition \ref{sums} that \begin{align*} S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= S_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}, \\ \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon& \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} , \end{align*} where $a_i, b_i \in K$ and $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}, \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying \begin{itemize} \item $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} \leq \begin{pmatrix} \varepsilon \\ q \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix}$ and $s_{i1} < q + v_1$ for all $i$; \item $ \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} $ for all $i$. \end{itemize} This proves the proposition. \end{proof} Finally, we define operators $\mathcal{BC}$. Let $\varepsilon \in \mathbb{F}_q^{\times}$. We introduce \begin{equation*} \mathcal{BC}_{\varepsilon,q} \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+q} \end{equation*} as follows: for each $R \in \mathfrak{R}_{w}$ as given in \eqref{eq: Rd}, the image $\mathcal{BC}_{\varepsilon,q}(R)$ is a binary relation given by \begin{align*} \mathcal{BC}_{\varepsilon,q}(R) = \mathcal B^*_{\varepsilon,q}(R) - \sum\limits_i b_i \mathcal C_{\fe_i,\mathfrak{t}_i} (R_{\varepsilon}). \end{align*} Let us clarify the definition of $\mathcal{BC}_{\varepsilon,q}$. 
We know that $\mathcal B^*_{\varepsilon,q}(R)$ is of the form \begin{equation*} \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon& \fve_i \\ q& \fs_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \varepsilon& \fe_i \\ q& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{equation*} Moreover, $\mathcal C_{\fe_i,\mathfrak{t}_i} (R_{\varepsilon})$ is of the form \begin{equation*} S_d \begin{pmatrix} \varepsilon& \fe_i \\ q& \mathfrak{t}_i \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} + \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} S_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{equation*} Combining with Proposition \ref{sums}, we have that $\mathcal{BC}_{\varepsilon,q}(R)$ is of the form \begin{equation*} \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon& \fve_i \\ q& \fs_i \end{pmatrix} + \sum \limits_{i,j} b_{ij} S_{d+1} \begin{pmatrix} \varepsilon& \fe_{ij} \\ 1& \mathfrak{t}_{ij} \end{pmatrix} =0, \end{equation*} where $b_{ij} \in K$ and $ \begin{pmatrix} \fe_{ij} \\ \mathfrak{t}_{ij} \end{pmatrix} $ are arrays satisfying $ \begin{pmatrix} \fe_{ij} \\ \mathfrak{t}_{ij} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q-1 \end{pmatrix} + \begin{pmatrix} \fe_{i} \\ \mathfrak{t}_{i} \end{pmatrix} $ for all $j$. \begin{proposition}\label{polydecom AMZV} 1) Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} $ be an array such that $\Init(\fs) = (s_1, \dots, s_{k-1})$ for some $1 \leq k \leq n$. Then $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be decomposed as follows: \begin{equation*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{\sum\limits_i a_i \zeta_A \begin{pmatrix} \fve'_i \\ \fs'_i \end{pmatrix} }_\text{type 1} + \underbrace{\sum\limits_i b_i\zeta_A \begin{pmatrix} \fe_i' \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\zeta_A \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation*} where $a_i, b_i, c_i \in K$ such that for all $i$, the following properties are satisfied: \begin{itemize} \item For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side, \begin{equation*} \depth(\mathfrak{t}) \geq \depth(\fs) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\fs). \end{equation*} \item For the array $ \begin{pmatrix} \fve' \\ \fs' \end{pmatrix} $ of type $1$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have $\Init(\fs) \preceq \Init(\fs')$ and $s'_k < s_k$. \item For the array $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} $ of type $2$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, for all $k \leq \ell \leq n$, \begin{equation*} t'_{1} + \dots + t'_\ell < s_1 + \dots + s_\ell. \end{equation*} \item For the array $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} $ of type $3$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have $\Init(\fs) \prec\Init(\mathfrak{u})$. \end{itemize} \noindent 2) Let $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_k \\ s_1 & \dots & s_k \end{pmatrix} $ be an array such that $\Init(\fs) = \fs$ and $s_k = q$. 
Then $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be decomposed as follows: \begin{equation*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{\sum\limits_i b_i\zeta_A \begin{pmatrix} \fe'_i \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\zeta_A \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation*} where $ b_i, c_i \in K$ such that for all $i$, the following properties are satisfied: \begin{itemize} \item For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side, \begin{equation*} \depth(\mathfrak{t}) \geq \depth(\fs) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\fs). \end{equation*} \item For the array $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} $ of type $2$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, \begin{equation*} t'_{1} + \dots + t'_k < s_1 + \dots + s_k. \end{equation*} \item For the array $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} $ of type $3$ with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we have $\Init(\fs) \prec\Init(\mathfrak{u})$. \end{itemize} \end{proposition} \begin{proof} For Part 1, since $\Init(\fs) = (s_1, \dots, s_{k-1})$, we get $s_k > q$. Set $ \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} 1 & \varepsilon_{k+1} &\dots & \varepsilon_n \\ s_k - q & s_{k+1} &\dots & s_n \end{pmatrix} $. By Proposition \ref{polycer1 AMZV}, $\mathcal{C}_{\Sigma,V}(R_{\varepsilon_k})$ is of the form \begin{equation} \label{polyr1 AMZV} S_d \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon_k & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation} where $a_i, b_i \in K$ and $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}, \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} $ are arrays satisfying \begin{align} \label{eq: part 1 compa 0 AMZV} \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} &\leq \begin{pmatrix} \varepsilon_k \\ q \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} \quad \text{and} \quad s_{i1} < q + v_1 = s_k;\\ \notag \begin{pmatrix} \fe_i \\ \mathfrak{t}_i \end{pmatrix} &\leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} 1 & \varepsilon_{k + 1} & \dots & \varepsilon_n \\ s_k - 1 & s_{k + 1} & \dots & s_n \end{pmatrix}. \end{align} For $m \in \mathbb{N}$, we recall that $q^{\{m\}}$ is the sequence of length $m$ with all terms equal to $q$. Setting $s_0 = 0$, we may assume that there exists a maximal index $j$ with $0 \leq j \leq k-1$ such that $s_j < q$, hence $\Init(\fs) = (s_1, \dots, s_j, q^{\{k-j-1\}})$. 
Then the operator $ \mathcal{BC}_{\varepsilon_{j+1},q} \circ \dots \circ \mathcal{BC}_{\varepsilon_{k-1},q}$ applied to the relation \eqref{polyr1 AMZV} gives \begin{align} \label{polyr1 AMZV 2} S_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon_{k} & \dots & \varepsilon_n \\ q & \dots & q & s_{k} & \dots & s_n \end{pmatrix} & + \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \fve_i \\ q & \dots & q & \fs_i \end{pmatrix} \\ \notag & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align} where $b_{i_1 \dots i_{k-j}} \in K$ and for $2 \leq l \leq k-j$, $ \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} $ are arrays satisfying \begin{equation*} \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} = \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ q & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} . \end{equation*} Thus \begin{equation} \label{eq: part 1 compa AMZV} \begin{pmatrix} \fe_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_k & 1 & \varepsilon_{k+1}& \dots & \varepsilon_n \\ q& \dots & q & s_{k} - 1 & s_{k+1} &\dots & s_n \end{pmatrix} . \end{equation} We let $ \begin{pmatrix} \Sigma' \\ V' \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} &\dots & \varepsilon_j \\ s_1 &\dots & s_j \end{pmatrix} $ and we apply $\mathcal{B}^*_{\Sigma',V'}$ to \eqref{polyr1 AMZV 2}. Since $s_j < q$, i.e., $s_j + 1 \leq q$, it follows from Lemma \ref{polybesao AMZV} that \begin{align*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} &+ \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \fve_i \\ s_1 & \dots & s_j & q & \dots & q & \fs_i \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. \end{align*} Hence \begin{align*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} &+ \sum \limits_i a_i \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \fve_i \\ s_1 & \dots & s_j & q & \dots & q & \fs_i \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. 
\end{align*} In other words, we have \begin{align} \label{eq: part 1 Li AMZV} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = &- \sum \limits_i a_i \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \fve_i \\ s_1 & \dots & s_j & q & \dots & q & \fs_i \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} The first term, the second term, and the third term on the right hand side of \eqref{eq: part 1 Li AMZV} are referred to as type 1, type 2, and type 3, respectively. We now verify the conditions of arrays of type 1, type 2, and type 3 with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $. We first note that $\fs = (s_1, \dots, s_j, q^{\{k-j-1\}}, s_k, \dotsc, s_n)$. \textit{Type 1: } For $\begin{pmatrix} \fve' \\ \fs' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \fve_i \\ s_1 & \dots & s_j & q & \dots & q & \fs_i \end{pmatrix}$, it follows from \eqref{eq: part 1 compa 0 AMZV} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\fs') = j + (k - j - 1) + \depth(\fs_i) \geq j + (k - j - 1) + (n - k + 1) = \depth(\fs). \end{align*} For $k \leq \ell \leq n$, it follows from \eqref{eq: part 1 compa 0 AMZV} that \begin{align*} s'_1 + \cdots + s'_{\ell} \leq s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\fs') = w(\fs)$, one deduces that $T_k(\fs') \leq T_k(\fs)$. Moreover, one verifies that $\Init(\fs) = (s_1, \dotsc, s_{k - 1}) \preceq \Init(\fs')$ and $s'_k = s_{i1} < s_k$ by \eqref{eq: part 1 compa 0 AMZV}. \textit{Type 2:} For $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 1 compa AMZV} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{t}') &= j + 1 + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}})\\ &\geq j + 1 + (k - j - 2) + 1 + (n - k + 1) = n + 1 > \depth(\fs). \end{align*} For $\ell = k$, since $s_k > q$, it follows from \eqref{eq: part 1 compa AMZV} that \begin{align*} t'_1 + \cdots + t'_k \leq s_1 + \cdots + s_j + 1 + q(k - j - 1) < s_1 + \cdots + s_{k}. \end{align*} For $k < \ell \leq n$, it follows from \eqref{eq: part 1 compa AMZV} that \begin{align*} t'_1 + \cdots + t'_{\ell} \leq s_1 + \cdots + s_{\ell - 1} < s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\mathfrak{t}') = w(\fs)$, one deduces that $T_k(\mathfrak{t}') \leq T_k(\fs)$. \textit{Type 3:} For $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 1 compa AMZV} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{u}) &= j + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + (k - j - 2) + 1 + (n - k + 1) = \depth(\fs).
\end{align*} For $k \leq \ell \leq n$, it follows from \eqref{eq: part 1 compa AMZV} that \begin{align*} u_1 + \cdots + u_{\ell} \leq s_1 + \cdots + s_{\ell}. \end{align*} Since $w(\mathfrak{u}) = w(\fs)$, one deduces that $T_k(\mathfrak{u}) \leq T_k(\fs)$. Moreover, we have $\Init(\fs) \prec (s_1, \dotsc, s_{j - 1}, s_j + 1) \preceq \Init(\mathfrak{u})$. We have proved Part 1. For Part $2$, we recall the relation $R_{\varepsilon_k}$ given by \begin{equation*} R_{\varepsilon_k} \colon \quad S_d \begin{pmatrix} \varepsilon_k\\ q \end{pmatrix} + \varepsilon_k^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon_k& 1 \\ 1 & q-1 \end{pmatrix} =0. \end{equation*} Setting $s_0 = 0$, we may assume that there exists a maximal index $j$ with $0 \leq j \leq k-1$ such that $s_j < q$. We note that $s_k = q$, hence $\Init(\fs) = \fs = (s_1, \dots, s_j, q^{\{k-j-1\}}, q)$. Then $ \mathcal{BC}_{\varepsilon_{j+1},q} \circ \dots \circ \mathcal{BC}_{\varepsilon_{k-1},q}(R_{\varepsilon_k})$ is of the form \begin{align} \label{eq: Part 2 Li AMZV} S_d &\begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k} \\ q & \dots & q \end{pmatrix} + \sum \limits_i b_{i_1 \dots i_{k-j}} S_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align} where $b_{i_1 \dots i_{k-j}} \in K$ and for $2 \leq l \leq k-j$, $ \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} $ are arrays satisfying \begin{equation*} \begin{pmatrix} \fe_{i_1 \dots i_l} \\ \mathfrak{t}_{i_1 \dots i_l} \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} = \begin{pmatrix} \varepsilon_{k-l+2} & \fe_{i_1 \dots i_{l-1}} \\ q & \mathfrak{t}_{i_1 \dots i_{l-1}} \end{pmatrix} . \end{equation*} Thus \begin{equation} \label{eq: part 2 compa AMZV} \begin{pmatrix} \fe_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_{k} & 1 \\ q& \dots & q & q - 1 \end{pmatrix} . \end{equation} We let $ \begin{pmatrix} \Sigma' \\ V' \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} &\dots & \varepsilon_j \\ s_1 &\dots & s_j \end{pmatrix} $ and we apply $\mathcal{B}^*_{\Sigma',V'}$ to \eqref{eq: Part 2 Li AMZV}. Since $s_j < q$, i.e., $s_j + 1 \leq q$, it follows from Lemma \ref{polybesao AMZV} that \begin{align*} S_d \begin{pmatrix} \fve \\ \fs \end{pmatrix} & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_d \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align*} hence \begin{align*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} & + \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0. 
\end{align*} In other words, we have \begin{align} \label{eq: part 2 Li 2 AMZV} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} =& - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} The first term and the second term on the right hand side of \eqref{eq: part 2 Li 2 AMZV} are referred to as type 2 and type 3, respectively. We now verify the conditions of arrays of type 2 and type 3 with respect to $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $. \textit{Type 2:} For $ \begin{pmatrix} \fe' \\ \mathfrak{t}' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 2 compa AMZV} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{t}') = j + 1 + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + 1 + (k - j)= k + 1 > \depth(\fs). \end{align*} It follows from \eqref{eq: part 2 compa AMZV} that \begin{align*} t'_1 + \cdots + t'_k \leq s_1 + \cdots + s_j + 1 + q(k - j - 1) < s_1 + \cdots + s_{k}. \end{align*} Since $w(\mathfrak{t}') = w(\fs)$, one deduces that $T_k(\mathfrak{t}') \leq T_k(\fs)$. \textit{Type 3:} For $ \begin{pmatrix} \fm \\ \mathfrak{u} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \fe_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}$, it follows from \eqref{eq: part 2 compa AMZV} and Remark \ref{rmk: compa depth} that \begin{align*} \depth(\mathfrak{u}) = j + \depth(\mathfrak{t}_{i_1 \dots i_{k-j}}) \geq j + (k - j) = \depth(\fs). \end{align*} It follows from \eqref{eq: part 2 compa AMZV} that \begin{align*} u_1 + \cdots + u_k \leq s_1 + \cdots + s_{j - 1} + (s_{j} + 1) + q(k - j - 1) + (q - 1) = s_1 + \cdots + s_k. \end{align*} Since $w(\mathfrak{u}) = w(\fs)$, one deduces that $T_k(\mathfrak{u}) \leq T_k(\fs)$. Moreover, we have $\Init(\fs) \prec (s_1, \dotsc, s_{j - 1}, s_j + 1) \preceq \Init(\mathfrak{u})$. This finishes the proof. \end{proof} \begin{proposition} \label{polyalgpartlem AMZV} For all $k \in \mathbb{N}$ and for all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ can be expressed as a $K$-linear combination of $\zeta_A \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $'s of the same weight such that $\mathfrak{t}$ is $k$-admissible. \end{proposition} \begin{proof} The proof follows the same lines as that of \cite[Proposition 3.2]{ND21}. We write down the proof for the reader's convenience. We consider the following statement: $(H_k) \quad$ For all arrays $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} $, we can express $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ as a $K$-linear combination of $\zeta_A \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $'s of the same weight such that $\mathfrak{t}$ is $k$-admissible.\\ We need to show that $(H_k)$ holds for all $k \in \mathbb{N}$. We proceed by induction on $k$.
For $k = 1$, we prove that $(H_1)$ holds by induction on the first component $s_1$ of $\fs$. If $s_1 \leq q$, then either $\fs$ is $1$-admissible, or $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ q \end{pmatrix}$. For the case $\begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon \\ q \end{pmatrix}$, we deduce from the relation $R_{\varepsilon}$ that \begin{equation*} \zeta_A \begin{pmatrix} \varepsilon\\ q \end{pmatrix} = - \varepsilon^{-1}D_1 \zeta_A \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix}, \end{equation*} hence $(H_1)$ holds in this case since $(1, q - 1)$ is $1$-admissible. If $s_1 > q$, we assume that $(H_1)$ holds for every array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ with $s_1 < s$. We need to show that $(H_1)$ holds for every array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ with $s_1 = s$. Indeed, assume that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix}$. Set $\begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \varepsilon_2 & \cdots & \varepsilon_n \\ s_1 - q & s_2 & \cdots & s_n \end{pmatrix}$. From Proposition \ref{polycer1 AMZV}, we have that $\mathcal C_{\Sigma,V}(R_{1})$ is of the form \begin{equation*} S_d \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} 1 & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{equation*} where $a_i, b_i \in K$ and $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}$ are arrays satisfying $s_{i1} < q + v_1 = s_1 = s$ for all $i$. Thus we deduce that \begin{align*} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = - \sum \limits_i a_i \zeta_A \begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix} - \sum \limits_i b_i \zeta_A \begin{pmatrix} 1 & \fe_i \\ 1 & \mathfrak{t}_i \end{pmatrix}. \end{align*} It is clear that $(1, \mathfrak{t}_i)$ is $1$-admissible. Moreover, for all $i$, since $s_{i1} < s$, we deduce from the induction hypothesis that $(H_1)$ holds for $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}$, which proves that $(H_1)$ holds for $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$. We next assume that $(H_{k - 1})$ holds. We need to show that $(H_k)$ holds. It suffices to consider the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ where $\fs$ is not $k$-admissible. Moreover, from the induction hypothesis of $(H_{k - 1})$, we may assume that $\fs$ is $(k - 1)$-admissible. For such an array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, we claim that $\depth(\fs) \geq k$. Indeed, assume that $\depth(\fs) < k$. Since $\fs$ is $(k - 1)$-admissible, one verifies that $\fs$ is $k$-admissible, which is a contradiction. Assume that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix}$ where $\fs$ is not $k$-admissible and $\depth(\fs) \geq k$. In order to prove that $(H_k)$ holds for the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, we proceed by induction on $s_1 + \cdots + s_k$. The case $s_1 + \cdots + s_k = 1$ is clear. Assume that $(H_k)$ holds when $s_1 + \cdots + s_k < s$. We need to show that $(H_k)$ holds when $s_1 + \cdots + s_k = s$.
To do so, we give the following algorithm: \textit{Algorithm: } We begin with an array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$ where $\fs$ is not $k$-admissible, $\depth(\fs) \geq k$ and $s_1 + \cdots + s_k = s$. \textit{Step 1:} From Proposition \ref{polydecom AMZV}, we can decompose $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} $ as follows: \begin{equation} \label{eq: polyalgpartlem AMZV} \zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \underbrace{\sum\limits_i a_i \zeta_A \begin{pmatrix} \fve'_i \\ \fs'_i \end{pmatrix} }_\text{type 1} + \underbrace{\sum\limits_i b_i\zeta_A \begin{pmatrix} \fe_i' \\ \mathfrak{t}'_i \end{pmatrix} }_\text{type 2} + \underbrace{\sum\limits_i c_i\zeta_A \begin{pmatrix} \fm_i \\ \mathfrak{u}_i \end{pmatrix} }_\text{type 3} , \end{equation} where $a_i, b_i, c_i \in K$. Note that the term of type $1$ disappears when $\Init(\fs) = \fs$ and $s_n = q$. Moreover, for all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side of \eqref{eq: polyalgpartlem AMZV}, we have $\depth(\mathfrak{t}) \geq \depth(\fs) \geq k$ and $t_1 + \cdots + t_k \leq s_1 + \cdots + s_k = s$. \textit{Step 2:} For all arrays $ \begin{pmatrix} \fe \\ \mathfrak{t} \end{pmatrix} $ appearing on the right hand side of \eqref{eq: polyalgpartlem AMZV}, if $\mathfrak{t}$ is either $k$-admissible or $\mathfrak{t}$ satisfies the condition $t_1 + \cdots + t_k < s$, then we deduce from the induction hypothesis that $(H_k)$ holds for the array $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$, and hence we stop the algorithm. Otherwise, there exists an array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ where $\fs_1$ is not $k$-admissible, $\depth(\fs_1) \geq k$ and $s_{11} + \cdots + s_{1k} = s$. For such an array, we repeat the algorithm for $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$. It should be remarked that the array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ is of type $1$ or type $3$ with respect to $\begin{pmatrix} \fve \\ \fs \end{pmatrix}$. Indeed, if the array $\begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}$ is of type $2$, then we deduce from Proposition \ref{polydecom AMZV} that $s_{11} + \cdots + s_{1k} < s_{1} + \cdots + s_{k} = s$, which is a contradiction. It remains to show that the above algorithm stops after a finite number of steps. Set $\begin{pmatrix} \fve_0 \\ \fs_0 \end{pmatrix} = \begin{pmatrix} \fve \\ \fs \end{pmatrix}$. Assume that the above algorithm does not stop. Then there exists a sequence of arrays $\begin{pmatrix} \fve_0 \\ \fs_0 \end{pmatrix}, \begin{pmatrix} \fve_1 \\ \fs_1 \end{pmatrix}, \begin{pmatrix} \fve_2 \\ \fs_2 \end{pmatrix}, \dotsc$ such that for all $i \geq 0$, $\fs_i$ is not $k$-admissible, $\depth(\fs_i) \geq k$ and $\begin{pmatrix} \fve_{i + 1} \\ \fs_{i + 1} \end{pmatrix}$ is of type $1$ or type $3$ with respect to $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}$. From Proposition \ref{polydecom AMZV}, we obtain an infinite sequence \begin{align*} \Init(\fs_0) \preceq \Init(\fs_1) \preceq \Init(\fs_2) \preceq \cdots \end{align*} which is increasing. For all $i \geq 0$, since $\fs_i$ is not $k$-admissible and $\depth(\fs_i) \geq k$, we have $\depth(\Init(\fs_i)) \leq k$, hence $\Init(\fs_i) \preceq q^{\{k\}}$. Thus there exists an index $i_0$ sufficiently large such that $\Init(\fs_i) = \Init(\fs_{i + 1})$ for all $i \geq i_0$. 
It follows from Proposition \ref{polydecom AMZV} that $\begin{pmatrix} \fve_{i + 1} \\ \fs_{i + 1} \end{pmatrix}$ is of type $1$ with respect to $\begin{pmatrix} \fve_i \\ \fs_i \end{pmatrix}$ for all $i \geq i_0$. Moreover, if we set $\ell = \depth(\Init(\fs_i)) + 1$, then $s_{(i + 1)\ell} < s_{i\ell}$ for all $i \geq i_0$, which is a contradiction. We deduce that the algorithm stops after a finite number of steps. \end{proof} Consequently, we obtain a weak version of Brown's theorem for AMZV's as follows. \begin{proposition} \label{prop: weak Brown AMZV} The set $\mathcal{AT}_w$ forms a set of generators for $\mathcal{AZ}_w$. Here we recall that $\mathcal{AT}_w$ is the set of all AMZV's $\zeta_A \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \Li \begin{pmatrix} \fve \\ \fs \end{pmatrix}$ of weight $w$ such that $ \begin{pmatrix} \fve \\ \fs \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \cdots & \varepsilon_n \\ s_1 & \cdots & s_n \end{pmatrix}$ with $s_1, \dots, s_{n-1} \leq q$ and $s_n < q$, introduced in the paragraph preceding Proposition \ref{prop: weak Brown}. \end{proposition} \begin{proof} This follows from Proposition \ref{polyalgpartlem AMZV} applied with $k = w$. \end{proof} \subsection{Proof of Theorem \ref{thm: ZagierHoffman AMZV}} \label{sec:proof Theorem A} As a direct consequence of Proposition \ref{prop: weak Brown} and Proposition \ref{prop: weak Brown AMZV}, we get \begin{theorem} \label{thm:bridge} The $K$-vector space $\mathcal{AZ}_w$ of AMZV's of weight $w$ and the $K$-vector space $\mathcal{AL}_w$ of ACMPL's of weight $w$ are the same. \end{theorem} By this identification, we apply Theorem \ref{thm:ACMPL} to obtain Theorem \ref{thm: ZagierHoffman AMZV}. \subsection{Zagier-Hoffman's conjectures in positive characteristic} \label{sec: application MZV} \subsubsection{Known results} We use freely the notation introduced in \S \ref{sec:ZagierHoffman}. We recall that for $w \in \N$, $\mathcal Z_w$ denotes the $K$-vector space spanned by the MZV's of weight~$w$ and $\mathcal T_w$ denotes the set of $\zeta_A(\fs)$ where $\fs=(s_1,\ldots,s_r) \in \N^r$ is of weight $w$ with $1\leq s_i\leq q$ for $1\leq i\leq r-1$ and $s_r<q$. Recall that the main results of \cite{ND21} state that \begin{itemize} \item For all $w \in \N$ we always have $\dim_K \mathcal Z_w \leq d(w)$ (see \cite[Theorem A]{ND21}). \item For $w \leq 2q-2$ we have $\dim_K \mathcal Z_w \geq d(w)$ (see \cite[Theorem B]{ND21}). In particular, Conjecture \ref{conj: basis} holds for $w \leq 2q-2$ (see \cite[Theorem D]{ND21}). \end{itemize} However, as stated in \cite[Remark 6.3]{ND21}, it would be very difficult to extend the method of \cite{ND21} to general weights. As an application of our main results, we present a proof of Theorem \ref{thm: ZagierHoffman} which settles both Conjectures \ref{conj: dimension} and \ref{conj: basis}. \subsubsection{Proof of Theorem \ref{thm: ZagierHoffman}} \label{sec:proof Theorem B} Since the sharp upper bound for $\mathcal Z_w$ is already known (see \cite[Theorem A]{ND21}), Theorem \ref{thm: ZagierHoffman} follows immediately from the following proposition. \begin{proposition} \label{prop:lower bound} For all $w \in \N$ we have $\dim_K \mathcal Z_w \geq d(w)$. \end{proposition} \begin{proof} We denote by $\mathcal S_w$ the set of CMPL's consisting of $\Li(s_1,\ldots,s_r)$ of weight $w$ with $q \nmid s_i$ for all $i$.
Then $\mathcal{S}_w$ can be considered as a subset of $\mathcal{AS}_w$ by taking $\fe=(1,\ldots, 1)$. In fact, all algebraic relations in \S \ref{sec:algebraic part} hold for the CMPL version, i.e., for $\Si_d(s_1,\dots,s_r)=\Si_d \begin{pmatrix}1 & \dots &1 \\ s_1& \dots & s_r\end{pmatrix}$ and $\Li(s_1,\dots,s_r)=\Li \begin{pmatrix}1 & \dots &1 \\ s_1& \dots & s_r\end{pmatrix}$. It follows that $\mathcal S_w$ is contained in $\mathcal Z_w$ by Theorem \ref{thm:bridge}. Further, by \S \ref{sec:AS_w}, $|\mathcal S_w|=d(w)$. By Theorem \ref{thm: trans ACMPL} we deduce that elements in $\mathcal S_w$ are all linearly independent over $K$. Therefore, $\dim_K \mathcal Z_w \geq |\mathcal S_w|=d(w)$. \end{proof} \subsection{Sharp bounds without ACMPL's} \label{sec:without ACMPL} To end this paper, we mention that without ACMPL's it seems very hard to obtain, for arbitrary weight $w$, \begin{itemize} \item either the sharp upper bound $\dim_K \mathcal{AZ}_w \leq s(w)$, \item or the sharp lower bound $\dim_K \mathcal{AZ}_w \geq s(w)$. \end{itemize} We can only do this for small weights with ad hoc arguments. We collect the results below, sketch some ideas for the proofs, and refer the reader to \cite{IKLNDP23} for full details. \begin{proposition} \label{prop:Brown adhoc} Let $w \leq 2q-2$. Then $\dim_K \mathcal{AZ}_w \leq s(w)$. \end{proposition} \begin{proof} We denote by $\mathcal{AT}_w^1$ the subset of AMZV's $\zeta_A \begin{pmatrix} \fe \\ \fs \end{pmatrix}$ of $\mathcal{AT}_w$ such that $\epsilon_i=1$ whenever $s_i=q$ (i.e., the character attached to any entry equal to $q$ is always $1$) and by $\langle \mathcal{AT}_w^1 \rangle$ the $K$-vector space spanned by the AMZV's in $\mathcal{AT}_w^1$. We see that $|\mathcal{AT}_w^1|=s(w)$. Thus it suffices to prove that $\langle \mathcal{AT}_w^1 \rangle=\mathcal{AZ}_w$. Recall that for any $\epsilon \in \Fq^\times$ we have the relation \begin{equation*} R_{\varepsilon} \colon \quad S_d \begin{pmatrix} \varepsilon\\ q \end{pmatrix} + \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} =0, \end{equation*} where $D_1 = \theta^q - \theta$. We recall that the coefficients $\Delta^i_{s,t}$ are given in \eqref{eq:Delta Chen}. Let $U=(u_1,\dots,u_n)$ and $W=(w_1,\dots,w_r)$ be tuples of positive integers such that $u_n \leq q-1$ and $w_1,\dots,w_r \leq q$. Let $\fe=(\epsilon_1,\dots,\epsilon_n)\in(\Fq^\times)^n$ and $\fla=(\lambda_1,\dots,\lambda_r)\in (\Fq^\times)^r$.
By direct calculations, we show that $\mathcal B_{\fe,U} \mathcal C_{\fla,W} (R_{\epsilon})$ can be written as \begin{align*} &S_d\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix}+S_d\begin{pmatrix} \fe & \epsilon \lambda_1 & \fla_-\\ U & q + w_1 & W_- \end{pmatrix}\\ & +\Delta^{q-1}_{q,w_1}S_d\begin{pmatrix} \fe & \epsilon \lambda_1 & 1 & \fla_-\\ U & w_1 + 1 & q - 1 & W_- \end{pmatrix}\\ &+\Delta^{q-1}_{q,w_1}S_d\begin{pmatrix} \fe & \epsilon \lambda_1 & \lambda_2 & \lambda_3 & \dots & \lambda_r\\ U & w_1 + 1 & q + w_2 - 1 & w_3 & \dots & w_r \end{pmatrix}\\ & +\Delta^{q-1}_{q,w_1} \sum_{i=2}^{r-1}\prod_{j=2}^i(\Delta_{q-1,w_j}^{q-1}+1)S_d\begin{pmatrix} \fe & \epsilon \lambda_1 & \lambda_2 \dots \lambda_i & \lambda_{i+1} & \lambda_{i+2} \dots \lambda_r\\ U & w_1 + 1 & w_2 \dots w_i& q + w_{i+1} - 1 & w_{i+2} \dots w_r \end{pmatrix}\\ & +\Delta^{q-1}_{q,w_1} \sum_{i=2}^{r}\prod_{j=2}^i(\Delta^{q-1}_{q-1,w_j}+1)S_d \begin{pmatrix} \fe & \epsilon \lambda_1 & \lambda_2 & \dots & \lambda_i & 1 & \lambda_{i+1} & \dots & \lambda_r\\ U & w_1 + 1 & w_2 & \dots & w_i& q - 1 & w_{i+1} & \dots & w_r \end{pmatrix} \\ &+\epsilon^{-1}D_1S_d\begin{pmatrix} \fe & \epsilon & 1 & \fla\\ U & 1 & q - 1 & W \end{pmatrix}+\epsilon^{-1}D_1S_d\begin{pmatrix} \fe & \epsilon & \lambda_1 & \fla_-\\ U & 1 & q + w_1 - 1 & W_- \end{pmatrix}\\ &+\epsilon^{-1}D_1S_d\begin{pmatrix} \epsilon_1 & \dots & \epsilon_{n-1} & \epsilon_n\epsilon& 1 & \fla\\ u_1 & \dots & u_{n-1} & u_n + 1 & q - 1 & W \end{pmatrix}\\ &+\epsilon^{-1}D_1S_d\begin{pmatrix} \epsilon_1 & \dots & \epsilon_{n-1} & \epsilon_n\epsilon& \lambda_1 & \fla_-\\ u_1 & \dots & u_{n-1} & u_n + 1 & q + w_1 - 1 & W_- \end{pmatrix}\\ &+\epsilon^{-1}D_1\sum_{i=1}^{r-1}\prod_{j=1}^i(\Delta_{q-1,w_j}^{q-1}+1)S_d\begin{pmatrix} \fe & \epsilon & \lambda_1 & \dots & \lambda_i & \lambda_{i+1} & \lambda_{i+2} & \dots & \lambda_r\\ U & 1 & w_1 &\dots & w_i & q + w_{i+1} - 1 & w_{i+2} &\dots & w_r \end{pmatrix}\\ &+\epsilon^{-1}D_1\sum_{i=1}^{r}\prod_{j=1}^i(\Delta^{q-1}_{q-1,w_j}+1)S_d \begin{pmatrix} \fe&\epsilon&\lambda_1\dots\lambda_i&1&\lambda_{i+1}\dots\lambda_r\\ U&1&w_1\dots w_i&q-1&w_{i+1}\dots w_r \end{pmatrix}\\ &+\epsilon^{-1}D_1\sum_{i=1}^{r}\prod_{j=1}^i(\Delta^{q-1}_{q-1,w_j}+1)S_d\begin{pmatrix} \epsilon_1 \dots \epsilon_{n-1}&\epsilon_n\epsilon&\lambda_1 \dots \lambda_i&1&\lambda_{i+1} \dots \lambda_r\\ u_1 \dots u_{n-1}&u_n+1&w_1 \dots w_i&q-1&w_{i+1} \dots w_r \end{pmatrix}\\+&\epsilon^{-1}D_1\sum_{i=1}^{r-1}\prod_{j=1}^i(\Delta^{q-1}_{q-1,w_j}+1)S_d \begin{pmatrix} \epsilon_1 \dots \epsilon_{n-1}&\epsilon_n\epsilon&\lambda_1 \dots \lambda_i& \lambda_{i+1}&\lambda_{i+2} \dots \lambda_r\\ u_1 \dots u_{n-1} &u_n+1&w_1 \dots w_i& q+w_{i+1}-1&w_{i+2} \dots w_r \end{pmatrix} \\ &=0. \end{align*} We denote by $(*)$ this formula. We denote by $H_r$ the following claim: for any tuples of positive integers $U$ and $W=(w_1,\dots,w_r)$ of depth $r$, $\fe \in(\Fq^\times)^{\depth(U)}$ of any depth, $\fla=(\lambda_1,\dots,\lambda_r)\in(\Fq^\times)^{r}$, and $\epsilon \in \Fq^\times$ such that $w(U)+w(W)+q=w$, the AMZV's $\zeta_A\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix}$ and $\zeta_A\begin{pmatrix} \fe & \epsilon \lambda_1 & \fla_-\\ U & q+w_1 & W_- \end{pmatrix}$ belong to $\langle\mathcal{AT}_w^1\rangle$. We claim that $H_r$ holds for any $r \geq 0$. The proof is by induction on $r$. For $r=0$, we know that $W=\emptyset$. 
Letting $U=(u_1,\dots,u_n)$ and $\fe=(\epsilon_1,\dots,\epsilon_n)$ we apply the formula $(*)$ to get an explicit expression for $\mathcal B_{\fe,U}(R_{\epsilon})$ given by \begin{align*} S_d\begin{pmatrix} \fe & \epsilon\\ U & q \end{pmatrix}&+\epsilon^{-1}D_1S_d\begin{pmatrix} \fe & \epsilon& 1\\ U & 1 & q - 1 \end{pmatrix} +\epsilon^{-1}D_1S_d\begin{pmatrix} \epsilon_1 & \dots & \epsilon_{n-1} & \epsilon_n \epsilon & 1 \\ u_1 & \dots & u_{n-1} & u_n + 1 & q - 1 \end{pmatrix}=0. \end{align*} Since $u_i \leq w(U)=w-q \leq q-2$, we deduce that $\zeta_A\begin{pmatrix} \fe & \epsilon\\ U & q \end{pmatrix}\in\langle \mathcal{AT}_w^1 \rangle$ as required. Suppose that $H_{r'}$ holds for any $r'<r$. We now show that $H_r$ holds. We proceed again by induction on $w_1$. For $w_1=1$, letting $U=(u_1,\dots,u_n)$ and $\fe=(\epsilon_1,\dots,\epsilon_n)$ we apply the formula $(*)$ to get an explicit expression for $\mathcal B_{\fe,U} \mathcal C_{\fla,W} (R_{\epsilon})$. As $w(U)+w(W)=w-q \leq q-2$, by induction we deduce that all the terms except the first two ones in this expression belong to $\langle \mathcal{AT}_w^1 \rangle$. Thus for any $\epsilon \in \Fq^\times$, \begin{equation} \label{eq:alg_2q-2} \zeta_A\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix}+\zeta_A\begin{pmatrix} \fe & \epsilon \lambda_1 & \fla_-\\ U & q + 1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{equation} We take $\epsilon=1$. As the first term lies in $\mathcal{AT}_w^1$ by definition, we deduce that \begin{align*} \zeta_A\begin{pmatrix} \fe & \lambda_1 & \fla_-\\ U & q + 1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align*} Thus in \eqref{eq:alg_2q-2} we now know that the second term lies in $\langle \mathcal{AT}_w^1 \rangle$, which implies that \[ \zeta_A\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \] We suppose that $H_r$ holds for all $W'=(w_1',\dots,w_r')$ such that $w_1'<w_1$. We have to show that $H_r$ holds for all $W=(w_1,\dots,w_r)$. The proof follows the same line as before. Letting $U=(u_1,\dots,u_n)$ and $\fe=(\epsilon_1,\dots,\epsilon_n)$ we apply the formula $(*)$ to get an explicit expression for $\mathcal B_{\fe,U} \mathcal C_{\fla,W} (R_{\epsilon})$. As $w(U)+w(W)=w-q \leq q-2$, by induction we deduce that all the terms except the first two ones in this expression belong to $\langle \mathcal{AT}_w^1 \rangle$. Thus for any $\epsilon \in \Fq^\times$, \begin{equation} \label{eq:alg_2q-2 2} \zeta_A\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix}+\zeta_A\begin{pmatrix} \fe & \epsilon \lambda_1 & \fla_-\\ U & q + w_1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{equation} We take $\epsilon=1$ and deduce \begin{align*} \zeta_A\begin{pmatrix} \fe & \lambda_1 & \fla_-\\ U & q + w_1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align*} Thus in \eqref{eq:alg_2q-2 2} the second term lies in $\langle \mathcal{AT}_w^1 \rangle$, which implies \[ \zeta_A\begin{pmatrix} \fe & \epsilon & \fla\\ U & q & W \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \] The proof is complete. \end{proof} \begin{remark} \label{rem:2q-1} The condition $w \leq 2q-2$ is essential in the previous proof as it allows us to significantly simplify the expression of $\mathcal B_{\fe,U} \mathcal C_{\fla,W} (R_{\epsilon})$ (see Eq. \eqref{eq:alg_2q-2 2}). For $w=2q-1$ the situation is already complicated but we can manage to prove Proposition \ref{prop:Brown adhoc}. 
Unfortunately, we are not able to extend it to $w=2q$. \end{remark} \begin{proposition} Let either $w \leq 3q-3$, or $w=3q-2$ and $q=2$. Then $\dim_K \mathcal{AZ}_w \geq s(w)$. \end{proposition} \begin{proof} We outline a proof of this proposition and refer the reader to \cite{IKLNDP23} for more details. For $1 \leq w \leq 3q-2$, we denote by $\mathcal I_w'$ the set of tuples $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ of weight $w$ as follows: \begin{itemize} \item For $1 \leq w \leq 2q-2$, $\mathcal I_w'$ consists of tuples $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ of weight $w$ where $s_i \neq q$ for all $i$. \item For $2q-1 \leq w \leq 3q-3$, $\mathcal I_w'$ consists of tuples $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ of weight $w$ of the form \begin{itemize} \item either $s_i \neq q, 2q-1, 2q$ for all $i$, \item or there exists a unique integer $1 \leq i <r$ such that $(s_i,s_{i+1})=(q-1,q)$. \end{itemize} \item For $w = 3q-2$ and $q>2$, $\mathcal I_w'$ consists of tuples $\fs=(s_1,\ldots,s_r) \in \mathbb N^r$ of weight $w$ of the form \begin{itemize} \item either $s_i \neq q, 2q-1, 2q, 3q-2$ for all $i$, \item or there exists a unique integer $1 \leq i <r$ such that $(s_i,s_{i+1}) \in \{(q-1,q),(2q-2,q)\}$, but $\fs \neq (q-1,q-1,q)$, \item or $\fs=(q-1,2q-1)$. \end{itemize} \item For $q=2$ and $w = 3q-2=4$, $\mathcal I_w'$ consists of the following tuples: $(2,1,1)$, $(1,2,1)$ and $(1,3)$. \end{itemize} We denote by $\mathcal{AT}_w'$ the subset of AMZV's given by \[ \mathcal{AT}_w':=\left\{\zeta_A\begin{pmatrix} \fe \\ \fs \end{pmatrix} : \fs \in \mathcal I_w', \text{ and } \epsilon_i=1 \text{ whenever } s_i \in \{q,2q-1\} \right\}.\] Thus, if either $w \leq 3q-3$, or $w=3q-2$ and $q=2$, then one shows that \[ |\mathcal{AT}_w'|=s(w). \] Further, for $w \leq 3q-3$ and any $(\fs;\fe)=(s_1,\dots,s_r;\epsilon_1,\dots,\epsilon_r) \in \N^r \times (\Fq^\times)^r$, if $\zeta_A \begin{pmatrix} \fe \\ \fs \end{pmatrix} \in \mathcal{AT}_w'$, then $\zeta_A \begin{pmatrix} \epsilon_1 & \dots & \epsilon_{r-1} \\ s_1 & \dots & s_{r-1} \end{pmatrix}$ belongs to $\mathcal{AT}_{w-s_r}'$. This property allows us to apply Theorem~\ref{theorem: linear independence} and show by induction on $w \leq 3q-3$ that the AMZV's in $\mathcal{AT}_w'$ are all linearly independent over $K$. The proof is similar to that of Theorem~\ref{thm: trans ACMPL}. We apply Theorem \ref{theorem: linear independence} and reduce the problem to solving a system of $\sigma$-linear equations. By direct but complicated calculations, we show that there do not exist any non-trivial solutions, and we are done. The case $w=3q-2$ and $q=2$ can be treated separately by the same method. \end{proof} \begin{remark} 1) We note that the MZV's $\zeta_A(1,2q-2)$ and $\zeta_A(2q-1)$ (resp. $\zeta_A(1,3q-3)$ and $\zeta_A(3q-2)$) are linearly dependent over $K$ by \cite[Theorem 3.1]{LRT14}. This explains the above ad hoc construction of $\mathcal{AT}_w'$. \noindent 2) Despite extensive numerical experiments, we cannot find a suitable basis $\mathcal{AT}_w'$ for the case $w=3q-1$. \end{remark} \begin{thebibliography}{10} \bibitem{And86} G.~Anderson. \newblock {{$t$}-motives}. \newblock {\em Duke Math. J.}, 53(2):457--502, 1986. \bibitem{ABP04} G. Anderson, W. D. Brownawell, and M. Papanikolas. \newblock {Determination of the algebraic relations among special {$\Gamma$}-values in positive characteristic}. \newblock {\em Ann. of Math. (2)}, 160(1):237--313, 2004. \bibitem{AT90} G.~Anderson and D.~Thakur. \newblock {Tensor powers of the Carlitz module and zeta values}. \newblock {\em Ann.
of Math. (2)}, 132(1):159--191, 1990. \bibitem{AT09} G.~Anderson and D.~Thakur. \newblock {Multizeta values for {$\mathbb F_q[t]$}, their period interpretation, and relations between them}. \newblock {\em {Int. Math. Res. Not. IMRN}}, (11):2038--2055, 2009. \bibitem{Bro12} F.~Brown. \newblock {Mixed {T}ate motives over {$\mathbb Z$}}. \newblock {\em Ann. of Math. (2)}, 175:949--976, 2012. \bibitem{BP20} D.~Brownawell and M.~Papanikolas. \newblock {A rapid introduction to Drinfeld modules, $t$-modules and $t$-motives}. \newblock In G.~B\"ockle, D.~Goss, U.~Hartl, and M.~Papanikolas, editors, {\em {$t$-motives: Hodge structures, transcendence and other motivic aspects", EMS Series of Congress Reports}}, pages 3--30. European Mathematical Society, 2020. \bibitem{BGF} J.~{Burgos Gil} and J.~Fresan. \newblock {\em Multiple zeta values: from numbers to motives}. \newblock to appear, Clay Mathematics Proceedings. \bibitem{Car35} L.~Carlitz. \newblock {On certain functions connected with polynomials in Galois field}. \newblock {\em Duke Math. J.}, 1(2):137--168, 1935. \bibitem{Cha14} C.-Y. Chang. \newblock {Linear independence of monomials of multizeta values in positive characteristic}. \newblock {\em Compos. Math.}, 150(11):1789--1808, 2014. \bibitem{CCM22} C.-Y. Chang, Y.-T Chen, and Y.~Mishiba. \newblock {On Thakur’s basis conjecture for multiple zeta values in positive characteristic}. \newblock {\em arXiv:2205.09929}, 2022. \bibitem{CPY19} C.-Y. Chang, M.~Papanikolas, and J.~Yu. \newblock {An effective criterion for {E}ulerian multizeta values in positive characteristic}. \newblock {\em J. Eur. Math. Soc. (JEMS)}, 21(2):405--440, 2019. \bibitem{Cha21} S.~Charlton. \newblock {On motivic multiple t values, Saha's basis conjecture, and generators of alternating MZV's}. \newblock {\em arXiv:2112.14613v1}, 2021. \bibitem{Che15} H.-J. Chen. \newblock {On shuffle of double zeta values over {$\Bbb{F}_q[t]$}}. \newblock {\em J. Number Theory}, 148:153--163, 2015. \bibitem{CH21} Y.-T. Chen and R.~Harada. \newblock {On lower bounds of the dimensions of multizeta values in positive characteristic}. \newblock {\em {Doc. Math.}}, 26:537--559, 2021. \bibitem{Del10} P.~Deligne. \newblock {Le groupe fondamental unipotent motivique de {$G_m-\mu_N$}, pour {$N=2,3,4,6$} ou {$8$}}. \newblock {\em {Publ. Math. Inst. Hautes \'{E}tudes Sci.}}, 112:101--141, 2010. \bibitem{Del13} P.~Deligne. \newblock Multiz\^{e}tas, d'apr\`es {F}rancis {B}rown. \newblock S\'{e}minaire Bourbaki. Vol. 2011/2012. Expos\'{e}s 1043--1058. \newblock {\em Ast\'{e}risque}, 352:161--185, 2013. \bibitem{DG05} P.~Deligne and A.~Goncharov. \newblock Groupes fondamentaux motiviques de {T}ate mixte. \newblock {\em Ann. Sci. \'{E}cole Norm. Sup. (4)}, 38(1):1--56, 2005. \bibitem{GP21} O.~Gezmis and F.~Pellarin. \newblock {Trivial multiple zeta values in Tate algebras}. \newblock {\em International Mathematics Research Notices, to appear}, rnab104, 2021. \bibitem{Gon97} A.~Goncharov. \newblock The double logarithm and {M}anin's complex for modular curves. \newblock {\em Math. Res. Lett.}, 4(5):617--636, 1997. \bibitem{Gos96} D.~Goss. \newblock {\em Basic Structures of function field arithmetic}, volume~35 of {\em Ergebnisse der Mathematik und ihrer Grenzgebiete (3)}. \newblock {Springer-Verlag, Berlin}, 1996. \bibitem{Har21} R.~Harada. \newblock {Alternating multizeta values in positive characteristic}. \newblock {\em {Math. Z.}}, 298(3-4):1263--1291, 2021. \bibitem{HJ20} U.~Hartl and A.~K. Juschka. 
\newblock {Pink's theory of Hodge structures and the Hodge conjectures over function fields}. \newblock In G.~B\"ockle, D.~Goss, U.~Hartl, and M.~Papanikolas, editors, {\em {$t$-motives: Hodge structures, transcendence and other motivic aspects", EMS Series of Congress Reports}}, pages 31--182. European Mathematical Society, 2020. \bibitem{Hof97} M.~Hoffman. \newblock The algebra of multiple harmonic series. \newblock {\em J. Algebra}, 194:477--495, 1997. \bibitem{Hof19} M.~Hoffman. \newblock {An odd variant of multiple zeta values}. \newblock {\em {Commun. Number Theory Phys.}}, 13(3):529--567, 2019. \bibitem{IKLNDP23} B.-H. Im, H.~Kim, K.~N. Le, T.~{Ngo Dac}, and L.~H. Pham. \newblock Note on linear independence of alternating multiple zeta values in positive characteristic. \newblock {\em {available at https://hal.science/hal-04240841}}, 2023. \bibitem{KL16} Y.-L. Kuan and Y.-H. Lin. \newblock {Criterion for deciding zeta-like multizeta values in positive characteristic}. \newblock {\em Exp. Math.}, 25(3):246--256, 2016. \bibitem{LRT14} J.~A.~Lara Rodriguez and D.~Thakur. \newblock {Zeta-like multizeta values for {$\mathbb{F}_q[t]$}}. \newblock {\em {Indian J. Pure Appl. Math.}}, 45(5):787--801, 2014. \bibitem{LRT21} J.~A.~Lara Rodriguez and D.~Thakur. \newblock {Zeta-like multizeta values for higher genus curves}. \newblock {\em {J. Th\'{e}or. Nombres Bordeaux}}, 33(2):553--581, 2021. \bibitem{LND22} H.~H. {Le} and T.~{Ngo Dac}. \newblock {Zeta-like multiple zeta values in positive characteristic}. \newblock {\em {to appear, Math. Z., hal-03093398}}, 2022. \bibitem{ND21} T.~{Ngo Dac}. \newblock {On Zagier-Hoffman's conjectures in positive characteristic}. \newblock {\em {Ann. of Math. (2)}}, 194(1):361--392, 2021. \bibitem{Pap08} M.~Papanikolas. \newblock {Tannakian duality for {A}nderson-{D}rinfeld motives and algebraic independence of {C}arlitz logarithms}. \newblock {\em Invent. Math.}, 171(1):123--174, 2008. \bibitem{Pel12} F.~Pellarin. \newblock {Values of certain $L$-series in positive characteristic}. \newblock {\em Ann. of Math. (2)}, 176(3):2055--2093, 2012. \bibitem{Ter02} T.~Terasoma. \newblock Mixed {T}ate motives and multiple zeta values. \newblock {\em Invent. Math.}, 149(2):339--369, 2002. \bibitem{Tha04} D.~Thakur. \newblock {\em Function field arithmetic}. \newblock {World Scientific Publishing Co., Inc., River Edge, NJ}, 2004. \bibitem{Tha09a} D.~Thakur. \newblock {Power sums with applications to multizeta and zeta zero distribution for {$\mathbb F_q[t]$}}. \newblock {\em Finite Fields Appl.}, 15(4):534--552, 2009. \bibitem{Tha09} D.~Thakur. \newblock {Relations between multizeta values for {$\mathbb F_q[t]$}}. \newblock {\em Int. Math. Res. Not.}, (12):2318--2346, 2009. \bibitem{Tha10} D.~Thakur. \newblock {Shuffle relations for function field multizeta values}. \newblock {\em Int. Math. Res. Not. IMRN}, (11):1973--1980, 2010. \bibitem{Tha17} D.~Thakur. \newblock {Multizeta values for function fields: a survey}. \newblock {\em {J. Th\'{e}or. Nombres Bordeaux}}, 29(3):997--1023, 2017. \bibitem{Tha20} D.~Thakur. \newblock {Multizeta in function field arithmetic}. \newblock In G.~B\"ockle, D.~Goss, U.~Hartl, and M.~Papanikolas, editors, {\em {$t$-motives: Hodge structures, transcendence and other motivic aspects", EMS Series of Congress Reports}}, pages 441--452. European Mathematical Society, 2020. \bibitem{Tod18} G.~Todd. \newblock A conjectural characterization for {$\mathbb F_q(t)$}-linear relations between multizeta values. \newblock {\em J. 
Number Theory}, 187:264--28, 2018. \bibitem{Yu91} J.~Yu. \newblock {Transcendence and special zeta values in characteristic $p$}. \newblock {\em Ann. of Math. (2)}, 134(1):1--23, 1991. \bibitem{Zag94} D.~Zagier. \newblock Values of zeta functions and their applications. \newblock In {\em First {E}uropean {C}ongress of {M}athematics, {V}ol. {II} {P}aris, 1992)}, volume 120 of {\em Progr. Math.}, pages 497--512. Birkh\"{a}user, Basel, 1994. \bibitem{Zha16} J.~Zhao. \newblock {\em {Multiple zeta functions, multiple polylogarithms and their special values}}, volume~12 of {\em {Series on Number Theory and its Applications}}. \newblock {World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ}, 2016. \end{thebibliography} \end{document}
2205.07155v1
http://arxiv.org/abs/2205.07155v1
Characterization of blowups via time change in a mean-field neural network
\documentclass[aap,preprint]{imsart} \RequirePackage{amsthm,amsmath,amsfonts,amssymb,mathrsfs,stmaryrd,stackrel} \RequirePackage[numbers]{natbib} \RequirePackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \RequirePackage{graphicx,color,epsfig,subfigure} \RequirePackage{bm,bbm} \startlocaldefs \newtheorem{axiom}{Axiom} \newtheorem{claim}[axiom]{Claim} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{assumption}{Assumption} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{remark} \newtheorem{remark}{\emph{ Remark}} \newtheorem{definition}[theorem]{Definition} \newtheorem*{example}{Example} \newtheorem*{fact}{Fact} \newcommand{\Prob}[1]{\mathbb{P}\left[#1\right]} \newcommand{\Probm}[1]{\mathbb{P}_m\left[#1\right]} \newcommand{\Exp}[1]{\mathbb{E}\left[#1\right]} \newcommand{\Expm}[1]{\mathbb{E}_m\left[#1\right]} \newcommand{\Expt}[1]{\mathbb{E}_\theta \left[#1\right]} \newcommand{\dd}{\mathrm{d}} \endlocaldefs \begin{document} \begin{frontmatter} \title{Characterization of blowups via time change in a mean-field neural network} \runtitle{Blowing up via time change} \begin{aug} \author[A,B]{\fnms{Thibaud} \snm{Taillefumier}\ead[label=e1]{[email protected]}} \and \author[A,B]{\fnms{Phillip} \snm{Whitman}\ead[label=e3]{[email protected]}} \address[A]{Department of Mathematics, University of Texas, Austin, } \address[B]{Department of Neuroscience, University of Texas, Austin, } \end{aug} \begin{abstract} Idealized networks of integrate-and-fire neurons with impulse-like interactions obey McKean-Vlasov diffusion equations in the mean-field limit. These equations are prone to blowups: for a strong enough interaction coupling, the mean-field rate of interaction diverges in finite time with a finite fraction of neurons spiking simultaneously, thereby marking a macroscopic synchronous event. Characterizing these blowup singularities analytically is the key to understanding the emergence and persistence of spiking synchrony in mean-field neural models. However, such a resolution is hindered by the first-passage nature of the mean-field interaction in classically considered dynamics. Here, we introduce a delayed Poissonian variation of the classical integrate-and-fire dynamics for which blowups are analytically well defined in the mean-field limit. Albeit fundamentally nonlinear, we show that this delayed Poissonian dynamics can be transformed into a noninteracting linear dynamics via a deterministic time change. We specify this time change as the solution of a nonlinear, delayed integral equation via renewal analysis of first-passage problems. This formulation also reveals that the fraction of simultaneously spiking neurons can be determined via a self-consistent, probability-conservation principle about the time-changed linear dynamics. We utilize the proposed framework in a companion paper to show analytically the existence of singular mean-field dynamics with sustained synchrony for large enough interaction coupling. 
\end{abstract} \begin{keyword}[class=MSC2020] \kwd[Primary ]{60G99} \kwd{60K15, 35Q92, 35D30, 35K67, 45H99} \end{keyword} \begin{keyword} \kwd{mean-field neural network models; blowups in parabolic partial differential equations; regularization by time change; singular interactions; delayed integral equations; inhomogeneous renewal processes} \end{keyword} \end{frontmatter} \section{Introduction} \subsection{Background} This work introduces neural network models for which the emergence of synchrony can be studied analytically in the idealized, mean-field limit of infinite-size networks. By synchrony, we refer to the possibility that a finite fraction of the network's neurons simultaneously spikes. Dynamics exhibiting such synchrony can serve as models to study the maintenance of precise temporal information in neural networks \cite{Panzeri:2010,Kasabov:2010,Brette:2015}. The maintenance of precise temporal information in the face of neural noise remains a debated issue from an experimental and computational perspective. Mathematical approaches to understanding synchrony involve making simplifying assumptions about the individual neuronal processing as well as about the network supporting their interactions. Integrate-and-fire neurons \cite{Lapicque:1907,Knight:1972} constitute perhaps the simplest class of models susceptible to displaying synchrony \cite{Taillefumier:2013uq}. In integrate-and-fire models, the internal state of a neuron $i$ is modeled as a continuous-time diffusive process $X_{i,t}$, whose dynamics stochastically integrates past neural interactions. Spiking times are then defined as first-passage times of this diffusive process to a spiking boundary $L$. Upon spiking, the process $X_{i,t}$ resets to a base real value $\Lambda$. In other words, $X_{i,t^+}=\Lambda$ whenever neuron $i$ spikes at time $t$. There is no loss of generality in assuming that $L=0$ and $\Lambda>0$, so that $X_{i,t}$ has nonnegative state space. Moreover, classical integrate-and-fire models assume that $X_{i,t}$ follows a Wiener diffusive dynamics with negative drift $-\nu<0$ \cite{Carrillo:2015}. Drifted Wiener processes are the simplest diffusive dynamics for which spikes occur in finite time with probability one, even in the absence of interactions. A key feature of integrate-and-fire models is that they allow for the occurrence of synchronous spiking events. This is most conveniently seen by considering a finite neural network with instantaneous, homogeneous, impulse-like excitatory interactions. For such interactions, if neuron $i$ spikes at time $t$, downstream neurons $j \neq i$ instantaneously update their internal states according to $X_{j,t^+} = X_{j,t} - w$, where $ w>0$ is the size of the impulse-like interaction. Thus, the spiking of neuron $i$ causes the states of all other neurons to move toward the zero spiking boundary, leading to two possible outcomes for downstream neuron $j$: either $X_{j,t}>w$ and the update merely hastens the next spiking time of neuron $j$, or $X_{j,t} \leq w$ and the interaction causes neuron $j$ to spike in synchrony with $i$ \cite{Faugeras:2011,Taillefumier:2012uq}. The latter synchronous spiking event occurs with finite probability, as we generically have $\Prob{X_{j,t} \in (0,w]}>0$ for regular diffusion processes. In turn, the synchronous spiking of downstream neuron $j$ can trigger additional synchronous spiking events in the network, via branching processes referred to as spiking avalanches.
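To make the above dichotomy concrete, the following toy computation spells out the two possible outcomes of a single impulse-like interaction; the numerical values are arbitrary and purely illustrative, and are not taken from any calibration or simulation.
\begin{example}
Take the spiking boundary $L=0$ and impulse size $w=0.5$, and suppose that, just before neuron $i$ spikes at time $t$, two downstream neurons hold the states $X_{j,t}=1.2$ and $X_{k,t}=0.3$. The impulse yields $X_{j,t^+}=1.2-0.5=0.7>0$, so that neuron $j$ is merely pushed closer to the spiking boundary, whereas $X_{k,t}=0.3 \leq w$ gives $X_{k,t}-w=-0.2 \leq 0$, so that neuron $k$ spikes in synchrony with neuron $i$ and is in turn reset to the base value $\Lambda$. The spike of neuron $k$ may then push further downstream neurons below the boundary, which is the branching mechanism behind spiking avalanches.
\end{example}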
Spiking avalanches are well-defined under the modeling assumptions that neurons transiently enter a post-spiking refractory state and always exit this refractory state by resetting to $\Lambda$ \cite{Taillefumier:2014fk}. Under such assumptions, neurons can spike at most once within an avalanche and synchronously spiking neurons can be distinguished according to their generation number \cite{Delarue:2015}. Tellingly, the finite probability to observe a spiking avalanche is maintained in certain simplifying limits, such as the thermodynamic mean-field limit \cite{Amari:1975aa, Faugeras:2009,Touboul:2014,Renart:2004}. For a homogeneous, excitatory, integrate-and-fire network, the thermodynamic mean-field limit considers a network of $N$ exchangeable neurons in the infinite-size limit, $N\to \infty$, with vanishingly small impulse size $w_N = \lambda/N \to 0$, where $\lambda$ is a parameter quantifying the interaction coupling. In this mean-field limit, individual neurons only interact with one another via a deterministic population-averaged firing rate $f(t)$ \cite{Caceres:2011,Carrillo:2013,Carrillo:2015}. Specifically, the dynamics of a representative process $X_t$ obeys a nonlinear partial differential equation (PDE) of the McKean-Vlasov type \begin{eqnarray}\label{eq:classicalMV} \partial_t p = \big(\nu + \lambda f(t) \big) \partial_x p + \partial_x^2 p/2 + f(t^-) \delta_\Lambda \, , \quad \mathrm{with} \quad p(t,x) \, \dd x =\Prob{X_t \in \dd x} \, , \end{eqnarray} and where the effective drift features the firing rate $f(t)$. The nonlinearity of the above equation stems from the conservation of probability, which equates $f(t)$ with a boundary flux of probability: \begin{eqnarray}\label{eq:classicalFlux} f(t) = \partial_x p(t,0)/2 \, . \end{eqnarray} A formal derivation of this identity is sketched in the remark below. In the following, we refer to equations \eqref{eq:classicalMV} and \eqref{eq:classicalFlux} as the classical McKean-Vlasov (cMV) equations and to the underlying dynamics supporting these equations as the classical mean-field (cMF) dynamics. Within the setting of cMF dynamics, a blowup occurs at time $T_0$ if the spiking rate diverges when $t \to T_0^-$ and a synchronous event happens if a fraction $\pi_0>0$ of the neurons synchronously spikes at $T_0$. \subsection{Motivation} Following on seminal computational work in \cite{Brunel:1999aa,Brunel:2000aa}, the cMF dynamics was first investigated in a PDE setting by C\'aceres {\it et al.} \cite{Caceres:2011}, who established the occurrence of blowups. The existence and regularity of solutions to \eqref{eq:classicalMV} and \eqref{eq:classicalFlux} have been considered from the standpoint of stochastic analysis by several authors \cite{Delarue:2015,Delarue:2015b,Hambly:2019,Nadtochiy:2019,Nadtochiy:2020}. These authors combined results from the theory of interacting-particle systems \cite{Liggett:1985,Sznitman:1989} and of the convergence of probability measures \cite{Billingsley:2013} to establish criteria for the existence of global solutions \cite{Delarue:2015,Delarue:2015b} and to classify the type of singularities displayed by these solutions \cite{Hambly:2019,Nadtochiy:2019,Nadtochiy:2020}. However, the analytical characterization of blowup singularities has proven rather challenging. Here, we propose a modified interacting-particle system with Poisson-like attributes that is also prone to blowup, the so-called delayed Poissonian mean-field (dPMF) model.
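Before discussing the dPMF model further, we record how the flux identity \eqref{eq:classicalFlux} expresses conservation of probability. The computation below is only a formal sketch: it assumes the absorbing boundary condition $p(t,0)=0$ (made explicit further below) together with sufficient decay of $p$ and $\partial_x p$ as $x \to \infty$.
\begin{remark}
Integrating \eqref{eq:classicalMV} over $x \in (0,\infty)$ formally yields
\begin{eqnarray}
\frac{\dd}{\dd t} \int_0^\infty p(t,x) \, \dd x = -\big(\nu + \lambda f(t) \big) p(t,0) - \frac{1}{2} \partial_x p(t,0) + f(t^-) = - \frac{1}{2} \partial_x p(t,0) + f(t^-) \, , \nonumber
\end{eqnarray}
since the boundary terms at infinity vanish and $p(t,0)=0$. Away from synchronous events, the total mass $\int_0^\infty p(t,x) \, \dd x$ must remain equal to one, so that the right-hand side above vanishes, which is precisely the flux identity \eqref{eq:classicalFlux}.
\end{remark}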
By contrast with \cite{Delarue:2015b} and in line with \cite{Caceres:2011,Carrillo:2013}, we only conjecture that the propagation of chaos holds to motivate the form of the corresponding mean-field PDE problem. This conjecture is numerically supported in the weak interaction regime $\lambda < \Lambda$ and for the strong interaction regime $\lambda > \Lambda$. The interest of the proposed framework lies in introducing a mean-field model where blowups, including full blowup whereby a finite fraction of neurons fires synchronously, can be studied analytically. In particular, we utilize this framework in \cite{TTLS} to show the existence of global solutions defined on the whole real lines with an infinite but countable number of blowups for large interaction parameters $\lambda \gg \Lambda$. \subsection{Approach} The crux of our approach is the introduction of an analytically tractable neural-network model that is closely related to the cMF model, the so-called delayed Poissonian mean-field (dPMF) dynamics. dPMF dynamics are derived from the classical ones by considering that $(a)$ neurons are driven by noisy inputs with Poisson-like attributes and that $(b)$ neurons exhibit a post-spiking refractory period. Concretely, assumption $(a)$ corresponds to approximating the counting process registering neuronal inputs in the thermodynamic limit by a Gaussian Markov process with time-dependent drift $-\big(\nu + \lambda f(t)\big)$ and unit Fano factor, i.e., with variance and drift of identical magnitude. At the same time, assumption $(b)$ corresponds to enforcing that neurons remain in a noninteracting, inactive state for a duration $\epsilon$ after reaching the zero spiking threshold and before reseting in $\Lambda$. In addition of being relevant from a modeling standpoint, the inclusion of a finite refractory period $\epsilon$ allows for the unambiguous definition of dPMF dynamics during synchrony. Specifically, refractory period enforces that every neuron engaging in an instantaneous spiking avalanche at time $t$ spikes only once and resets in $\Lambda$ at time $t+\epsilon$. Overall, the delayed Poissonian version of the nonlinear McKean-Vlasov dynamics \eqref{eq:classicalMV} reads \begin{eqnarray}\partial_t p = \big(\nu + \lambda f(t) \big) \left( \partial_x p + \partial_x^2 p/2 \right) + f(t-\epsilon) \delta_\Lambda \, , \nonumber \end{eqnarray} where we will see that the conservation of probability imposes that \begin{eqnarray}\label{eq:cPMF} f(t) = \frac{\nu \partial_x p(t-\epsilon,0)}{2-\lambda \partial_x p(t,0)} \, . \end{eqnarray} The above relation directly indicates the criterion for blowups in dPMF dynamics: blowups occur whenever $\partial_x p(t,0)/2$, the instantaneous flux through the absorbing boundary, reaches the value $1/\lambda$. In other words, blowups emerge at finite boundary flux, which allows for the continuous maintenance of the absorbing boundary condition: $p(t,0)=0$. This is by contrast with cMF dynamics for which blowups involve diverging fluxes at times $T_0$, for which the absorbing boundary condition must locally fail: $p(T_0,0) \geq 1/\lambda$ \cite{Nadtochiy:2019,Nadtochiy:2020}. Such singular behavior is a major hurdle to elucidating blowup analytically in cMF dynamics. The expected regularized behavior of dPMF dynamics during blowups is the primary motivation for their introduction. Ideally, the PDE problem that defines dPMF dynamics shall be established as the mean-field limit of the corresponding finite-size interacting-particle system. 
The present work only conjectures that such a mean-field limit holds, which is supported by numerical simulations (except possibly for interaction parameter $\lambda \simeq \Lambda$). Then, the core idea of our approach is to solve the PDE problem defining dPMF dynamics by formally introducing the time change \begin{eqnarray}\label{eq:sigmaTime_int} \sigma = \Phi(t) = \nu t + \lambda F(t) \, , \quad \mathrm{with} \quad F(t)=\int_0^t f(s) \, \dd s \, , \end{eqnarray} which is a smooth increasing function in the absence of blowups. Due to the Poissonian attributes of the neuronal drives, the time change $\Phi$ can serve to parametrize the dPMF dynamics of a representative process as $X_t=Y_{\Phi(t)}$, where $Y_\sigma$ is a process obeying a linear, noninteracting dynamics. The dynamics of $Y_\sigma$ is that of a Wiener process absorbed in zero, with constant negative unit drift and with reset in $\Lambda$, but with time-inhomogeneous refractory period specified via a $\Phi$-dependent delay function $\sigma \mapsto \eta[\Phi](\sigma)$. In the following, we will refer to $\eta$ as the backward delay function associated to $\Phi$. Concretely, this means that assuming the backward-delay function $\eta$ known, the transition kernel of $Y_\sigma$ denoted by $(\sigma,x) \mapsto q(\sigma,x)$ satisfies the time-changed PDE problem \begin{eqnarray}\label{eq:qPDE_int} \partial_\sigma q &=& \partial_x q +\frac{1}{2} \partial^2_{x} q + \frac{\dd}{\dd \sigma} [G(\sigma-\eta(\sigma))] \delta_{\Lambda} \, , \end{eqnarray} with absorbing and conservation conditions respectively given by \begin{eqnarray}\label{eq:abscons_int} q(\sigma,0)=0 \quad \mathrm{and} \quad \partial_\sigma G(\sigma)=\partial_x q(\sigma ,0) /2 \, . \end{eqnarray} In equations \eqref{eq:qPDE_int} and \eqref {eq:abscons_int}, $G$ denotes the $\eta$-dependent cumulative flux of $Y_\sigma$ through the zero threshold. By definition of the time change $\Phi$, $G$ is related to the cumulative flux $F$ via $F=G \circ \Phi$. Moreover, in the absence of blowups, the functional dependence of $\eta$ on the time change $\Phi$ is given by \begin{eqnarray} \eta(\sigma) = \sigma - \Phi(\Psi(\sigma) -\epsilon) \, . \nonumber \end{eqnarray} where $\Psi=\Phi^{-1}$ refers to the inverse time change of $\Phi$. Thus, $G=G[\Phi]$ actually depends on $\Phi$ via $\eta$, which motivates considering equation \eqref{eq:sigmaTime_int} as a self-consistent equation specifying admissible time changes: \begin{eqnarray}\label{eq:selfcons_int} \Phi(t) = \nu t + \lambda G[\Phi](\Phi(t)) \, . \end{eqnarray} Our approach then elaborates on the fact that in the absence of blowups, dPMF dynamics are fully parametrized by the time-change function that uniquely solves equation \eqref{eq:selfcons_int} for some reasonable initial conditions. Given such a solution $\Phi$, the transition kernel of a representative dPMF dynamics $X_t$ is found as $(t,x) \mapsto p(t,x) = q(\Phi(t),x)$, where $q$ uniquely solves the time-changed PDE for the corresponding backward-delay function $\eta[\Phi]$. From there, our general aim is to show that this time-changed characterization is preserved in the presence of blowups, thereby justifying dPMF dynamics as a convenient modeling framework to analytically study mean-field dynamics with blowups. \subsection{Results} Our main result is to characterize explosive dPMF dynamics via a fixed-point problem bearing on a regularized time-changed dynamics. 
To state this fixed-point problem, we first need to define a notion of initial conditions in the time-changed picture. In principle, the most general initial conditions for the original dPMF dynamics are specified by two measures $(p_0,f_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([-\epsilon,0))$, where $\mathcal{M}(I)$ denotes the space of nonnegative measures over the interval $I \subset \mathbbm{R}$. Moreover, to represent a probability measure, $(p_0,f_0)$ must also satisfy the normalization condition: \begin{eqnarray} \int_0^\infty p_0(x) \, \dd x + \int_{-\epsilon}^0 f_0(t) \, \dd t =1 \, . \nonumber \end{eqnarray} With this in mind, the time-changed version of the above initial conditions is specified as follows: \begin{definition}\label{def:initCond_int} Given normalized initial conditions $(p_0,f_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([-\epsilon,0))$, the initial conditions for the time-changed problem are defined by $(q_0,g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$ such that \begin{eqnarray} q_0=p_0 \quad \mathrm{and} \quad g_0 = \frac{\dd G_0}{\dd \sigma} \quad \mathrm{with} \quad G_0=(\mathrm{id} - \nu \Psi_0)/\lambda\, , \nonumber \end{eqnarray} where the function $\Psi_0$ and the number $\xi_0$ are given by: \begin{eqnarray} \Psi_0(\sigma) &=& \inf \left\{ t \geq 0 \, \bigg \vert \, \nu t + \lambda \int_0^t f_0(s) \, \mathrm{d}s > \sigma \right\} \, , \nonumber\\ \xi_0&=& -\nu \epsilon -\lambda \int_{-\epsilon}^0 f_0(t) \, \dd t < 0 \, . \nonumber \end{eqnarray} \end{definition} Equipped with the above notion of initial conditions, we are in a position to state the fixed-point problem characterizing possibly explosive dPMF dynamics. This fixed-point problem will be most conveniently formulated in term of the inverse time change $\Psi=\Phi^{-1}$, which can generically be assumed to be a continuous, nondecreasing function. Given an inverse time change $\Psi$, the time change $\Phi$ can be recovered as the right-continuous inverse of $\Psi$. In the time-changed picture, blowups happen if the inverse time change $\Psi$ becomes locally flat and a synchronous event happens if $\Psi$ remains flat for a finite amount of time. Informally, flat sections of $\Psi$ unfold blowups by freezing time in the original coordinate $t$, while allowing time to pass in the time-changed coordinate $\sigma$. Such unfolding of blowups in the time-changed picture will allow for the following characterization of inverse time change $\Psi$, which remains valid for explosive dPMF dynamics. \begin{theorem}\label{th:fixedpoint_int} Given time-changed initial conditions $(q_0, g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$, the inverse time change $\Psi$ satisfies the fixed-point problem \begin{eqnarray}\forall \; \sigma \geq 0 \, , \quad \Psi(\sigma) = \left\{ \begin{array}{ccc} \left( \sigma - \lambda \int_0^\sigma g_0(\xi) \dd \xi \right) / \nu & \quad \mathrm{if} & -\xi_0 \leq \sigma < 0 \, , \vspace{5pt} \nonumber\\ \sup_{0 \leq \xi \leq \sigma} \big( \xi - \lambda G[\eta](\xi)\big) / \nu & \quad \mathrm{if} & \sigma \geq 0 \, . \end{array} \right. \end{eqnarray} where $G[\eta]$ is the smooth cumulative flux associated to a linear diffusion dynamics with time-inhomogeneous backward-delay function $\eta: \mathbbm{R}^+ \to \mathbbm{R}^+$. 
Given a backward-delay function $\eta$, the cumulative flux $G$ is given as the unique solution to the quasi-renewal equation \begin{eqnarray}\label{eq:DuHamel_int} G(\sigma) = \int_0^\infty H(\sigma,x) q_0(x) \, \dd x + \int_0^\sigma H(\sigma-\tau,\Lambda) \, \dd G(\tau-\eta(\tau)) \, , \end{eqnarray} where by convention we set $G(\sigma)=G_0(\sigma)=\int_0^\sigma g_0(\xi) \dd \xi$ if $-\xi_0 \leq \sigma < 0$. Moreover, the integration kernel featured in \eqref{eq:DuHamel_int} is specified as $\sigma \mapsto H(\sigma,x) = \Prob{\tau_x \leq \sigma} $, where $\tau_x$ is the first-passage time to zero of a Wiener process started in $x>0$ and with negative unit drift. Finally, the fixed-point nature of the problem follows from the definition of the backward-delay function $\eta$ as the $\Psi$-dependent time-wrapped version of the constant delay $\epsilon$: \begin{eqnarray}\eta(\sigma) = \sigma - \Phi(\Psi(\sigma) -\epsilon) \quad \mathrm{with} \quad \Phi(t)= \inf \left\{ \sigma \geq \xi_0 \, \big \vert \,\Psi(\sigma) > t \right\} \, , \nonumber \end{eqnarray} for which we consistently have $\eta(0)=- \Phi(-\epsilon)=\xi_0$. \end{theorem} We will show that the time-changed formulation of dPMF dynamics yields the existence and uniqueness of dPMF dynamics under an additional assumption about the initial conditions. That assumption bears on the distribution of active processes at starting time and states that $q_0$ is a locally smooth near zero with $q_0(0)=0$ and $\partial_x q_0(0) /2<1/\lambda$. In view of \eqref{eq:cPMF}, such an additional assumption precludes a blowup from happening instantaneously. In turn, this will allow us to show the following existence and uniqueness result: \begin{theorem} Given time-changed initial conditions $(q_0,g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$ such that $q_0(0)=0$ and $\partial_x q_0(0) /2<1/\lambda$, the fixed-point problem defined in Theorem \ref{th:fixedpoint_int} admits a unique smooth solution $\Psi$ on $[0,S_1]$, where \begin{eqnarray} S_1=\inf \lbrace \sigma>0 \, \vert \, \Psi'(\sigma) \leq 0 \rbrace > 0 \, . \nonumber \end{eqnarray} Moreover, if $S_1<\infty$ and $\Psi''(S_1^-)<0$, this solution can be uniquely continued on $[S_1,S_1+\lambda \pi_1)$ as a constant, where $\pi_1$ solves the self-consistent equation: \begin{eqnarray}\label{eq:defjump_int} \pi_1 = \inf \left\{ p \geq 0 \, \bigg \vert \, p > \int_{0}^\infty H(\lambda p, x ) q(S_1,x ) \, \dd x \right\} \, . \end{eqnarray} \end{theorem} The above result indicates how the time-changed process $Y_\sigma$ resolves a blowup episode by alternating two types of dynamics. Before blowups, the dynamics $Y_\sigma$ is that of an absorbed linear diffusion with resets. These resets occur with time-inhomogeneous delays, which depends on the inverse time change $\Psi$. At the blowup onset $S_1$, $\Psi$ becomes locally flat, indicating that the original time $t=\Psi(\sigma)$ freezes, thereby stalling resets. As a result, after the blowup onset in $S_1$, the dynamics of $Y_\sigma$ remains that of an absorbed linear diffusion but without resets. Such a dynamics persists until a self-consistent blowup exit condition is met in $S_1+\lambda \pi_1$. This condition follows from \eqref{eq:defjump_int} and states that it must take $\lambda \pi_1$ time-changed units for a fraction $\pi_1$ of processes to inactivate during a blowup episode. 
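
For illustration, the blowup-exit condition \eqref{eq:defjump_int} can be solved numerically once a density of active processes at the blowup onset is prescribed. The sketch below uses the closed-form expression of the first-passage kernel $H$ featured in Theorem \ref{th:fixedpoint_int} (made explicit later in the text); the interaction parameter and the profile standing in for $q(S_1,\cdot)$ are hypothetical and serve only to exhibit a blowup of physical size $0<\pi_1<1$.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def H(sigma, x):
    """First-passage CDF to zero of a unit-negative-drift Wiener process started at x > 0."""
    if sigma <= 0.0:
        return np.zeros_like(x)
    s = np.sqrt(2.0 * sigma)
    return 0.5 * (erfc((x - sigma) / s) + np.exp(2.0 * x) * erfc((x + sigma) / s))

# Hypothetical ingredients: interaction parameter and a density q(S_1, .) of active
# processes at the blowup onset, concentrated near the absorbing boundary.
lam = 3.0
x = np.linspace(1e-4, 6.0, 4000)
dx = x[1] - x[0]
q_S1 = x * np.exp(-x / 0.2)
q_S1 *= 0.9 / (np.sum(q_S1) * dx)          # suppose 90% of the unit mass is still active

def excess(p):
    # p - int H(lam p, x) q(S_1, x) dx: the blowup size pi_1 is its first positive zero.
    return p - np.sum(H(lam * p, x) * q_S1) * dx

p_grid = np.linspace(1e-4, 1.0, 2000)
vals = np.array([excess(p) for p in p_grid])
pi_1 = p_grid[np.argmax(vals > 0)]          # first grid point where p exceeds the integral
print("approximate blowup size pi_1:", pi_1)
\end{verbatim}
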
Finally, note that the generic condition that $\lim_{\sigma \to S_1} \partial^2_\sigma \Psi(\sigma)<0$, which we refer to the full-blowup condition, implies that the blowup is marked for the original dynamics $X_t=Y_{\Phi(t)}$ at $T_1=\Psi(S_1)$ by a well-characterized rate divergence: $f(t) \sim_{t \to T_1^-} 1/\sqrt{T_1-t}$. In principle, dPMF dynamics could be continued past a blowup episode, and possibly even extended to the whole half-line $\mathbbm{R}^+$. Showing this in our framework would require to check that $(1)$ the so-called $(i)$ nonexplosive exit conditions $\partial_\sigma q(S_1+\lambda \pi_1,0)/2<1/\lambda$ and $(ii)$ full-blowup condition $\partial^2_\sigma \Psi(S_1^-)<0$ are constitutively satisfied and that $(2)$ blowup times do not have an accumulation point. This program is beyond the scope of this work and is the topic of another manuscript \cite{TTLS}, where we show that these conditions hold for large enough interaction parameter $\lambda$. Here, we only state the main result of \cite{TTLS} about the existence of global explosive dPMF dynamics: \begin{theorem} For large enough $\lambda>\Lambda$, there exists explosive dPMF dynamics defined over the whole half-line $\mathbbm{R}^+$, with a countable infinity of blowups. These blowups occurs at consecutive times $T_k$ with size $\pi_k$, $k \in \mathbbm{N}$, and are such that $\pi_k$ and $T_{k+1}-T_k$ are both bounded away from zero. \end{theorem} \subsection{Methodology} Overall, the main interest of our approach lies in our ability to resolve a singular dynamics by mapping it onto a regular dynamics, but via possibly discontinuous change of time. Such an approach avoids resorting to convergence arguments in sample-path spaces equipped with the Skorokhod topology. Our approach proceeds in four steps: First, we infer the interacting-particle systems approximating the dPMF dynamics by modifying the systems known to approximate cMF dynamics \cite{Delarue:2015,Delarue:2015b}. This involves considering noisy synaptic interactions, whereby spiking updates in downstream neurons are i.i.d. following a normal law with mean and variance equal to $w_N=\lambda/N$, where $N$ denotes the number of neurons. Conjecturing propagation of chaos \cite{Sznitman:1989} in the infinite size limit $N \to \infty$ allows us to justify the form of the PDE problem associated to dPMF dynamics, which is only well-posed for nonexplosive dynamics. In order to extend this PDE characterization to explosive dPMF dynamics, we must give a weak formulation to the associated PDE problem. Due to the mean-field nature of dPMF dynamics, this weak formulation involves considering the cumulative flux $F$ as an auxiliary unknown function. Second, we define the linear, time-inhomogeneous PDE problem associated to the process $Y_\sigma$ obtained by time change of nonexplosive dPMF dynamics: $X_t=Y_{\Phi(t)}$. The hypothesis of nonexplosive dynamics is necessary to ensure that the time change $\Phi$ introduced in \eqref{eq:sigmaTime_int} is smooth. However, considerations from renewal analysis show that the obtained time-changed PDE problem is actually unconditionally well-posed, independent on the assumption of smoothness of $\Phi$. By this, we mean that for any choice of nondecreasing function $\Phi$, the latter PDE problem has well-behaved solutions $(\sigma,x) \mapsto q[\Phi](\sigma,x)$, in the sense that by contrast with $F=G \circ \Phi$, the associated cumulative flux $G$ is always a smooth function of time. 
Therefore, this time-changed picture provides us with a natural framework to define a notion of explosive dPMF dynamics, with possibly many blowups. Third, we show that among solutions parametrized by increasing functions $\Phi$, candidate solutions of the form $(t,x) \mapsto p(t,x)=q[\Phi](\Phi(t),x)$ are also weak solutions for dPMF dynamics if and only if the time change $\Phi$ satisfies the self-consistent equation \eqref{eq:selfcons_int}. Technically, showing this point relies on the substitution formula for nonsmooth changes of variable \cite{Falkner:2012} as well as on the Vol'pert superposition principle \cite{Volpert:1967,DalMaso:1991}. This result justifies reducing the analysis of dPMF dynamics to the study of a delayed, nonlinear, integral equation defining the fixed-point problem of Theorem \ref{th:fixedpoint_int}. Crucially, because it bears on possibly discontinuous increasing time changes $\Phi$, this approach fully captures explosive dPMF dynamics. Fourth, we show that the fixed-point problem of Theorem \ref{th:fixedpoint_int} admits local solutions for a class of initial conditions that exclude instantaneous blowup. Under such initial conditions, we establish the existence of initial smooth dPMF dynamics via a contraction argument. This contraction argument relies on the fact that for nonzero refractory period $\epsilon>0$, the quasi-renewal equation \eqref{eq:DuHamel_int} loses its renewal character at small enough timescales. Then, repeated application of the Banach fixed-point theorem allows one to specify a smooth solution $\Phi$ up to the first putative blowup time $T_1$. In the time-changed coordinate $\sigma$, the onset of a blowup episode at $S_1=\Phi(T_1)$ corresponds to withholding resets, which leads to a natural self-consistent equation for blowup sizes $\pi_1$. Such blowups are certain in the large interaction regime $\lambda \geq \Lambda$ and can be shown to have physical size, in the sense that they must correspond to a finite fraction $0< \pi_1<1$. \subsection{Structure} In Section \ref{sec:modelDef}, we introduce the delayed Poissonian (dPMF) dynamics as the conjectured mean-field limit of a certain interacting-particle system and define its associated weak PDE formulation. In Section \ref{sec:timeChange}, we show that in the absence of blowups, dPMF dynamics are fully determined by a time change that maps the original time-homogeneous nonlinear dynamics onto a time-inhomogeneous linear dynamics. In Section \ref{sec:fixedPoint}, we exploit the weak formulation of dPMF dynamics to show that the proposed time-changed formulation remains valid in the presence of blowups, exhibiting the fixed-point problem that characterizes admissible time changes. In Section \ref{sec:local}, we show that the fixed-point problem admits local solutions with blowups and resolve these blowups analytically. \section{The delayed Poisson-McKean-Vlasov dynamics}\label{sec:modelDef} In this section, we justify the consideration of dPMF dynamics to study the emergence and persistence of blowups in mean-field neural models. Conjecturing that propagation of chaos holds in the infinite-size limit, we justify the McKean-Vlasov equations defining dPMF dynamics in the absence of blowups. We then leverage these equations to elaborate a weak formulation that allows for the consideration of explosive dPMF dynamics. \subsection{Finite-size stochastic model} We start by defining the finite-size version of the dPMF dynamics in terms of a particle system whose dynamics is unconditionally well-posed.
This particle system consists of a network of $N$ interacting processes $X_{N,i,t}$, $1\leq i \leq N$, whose interaction dynamics is as follows: $(i)$ Whenever a process $X_{N,i,t}$ hits the spiking boundary at zero, it instantaneously enters an inactive refractory state. $(ii)$ At the same time, all the other active processes $X_{N,j,t}$ (which are not in the inactive refractory state) are respectively decreased by amounts $w_{N,j,t}$, which are independently drawn from a normal law with mean and variance equal to $\lambda/N$. $(iii)$ After an inactive (refractory) period of duration $\epsilon>0$, the process $X_{N,i,t}$ restarts its autonomous stochastic dynamics from the reset state $\Lambda>0$. $(iv)$ In between spiking/interaction times, the autonomous dynamics of active processes follow independent drifted Wiener processes with negative drift $-\nu$. Correspondingly, an initial condition for the network is given by specifying the starting values of the active processes, i.e. $X_{N,i,0}>0$ if $i$ is active, and the last hitting time of the inactive processes, i.e., $-\epsilon \leq \rho_{N,i,0} \leq 0$, if $i$ is inactive. The above dynamics can be conveniently recapitulated in terms of the stochastic differential equations governing the interacting processes $X_{N,i,t}$, $1\leq i \leq N$. For $1\leq i \leq N$, these equations takes the general form \begin{eqnarray}\label{eq:stochEq} X_{N,i,t} = X_{N,i,0} - \int_0^t \mathbbm{1}_{ \{ X_{N,i,s^-} >0 \} } \dd Z_{N,i,s} + \Lambda M_{N,i,t-\epsilon} \, , \end{eqnarray} where $Z_{N,i,t}$, $1\leq i \leq N$, denote continuous-time driving processes with Poisson-like attributes and where $M_{N,i,t}$, $1\leq i \leq N$, are increasing processes counting the number of times that $X_{N,i,t}$ hits the threshold $\Lambda$ before $t$. The driving processes $Z_{N,i,t}$ are specified in term of cumulative drift functions $\Phi_{N,i}$ according to \begin{eqnarray} Z_{N,i,t} = \Phi_{N,i}(t) + W_{i,\Phi_{N,i}(t)} \, , \nonumber \end{eqnarray} where $W_{i,t}$, $1\leq i \leq N$, are independent Wiener processes. Thus-defined, the driving processes $Z_{N,i,t}$ exhibit Poisson-like attributes in the sense that they have constitutive unit Fano factor. By contrast, classical particle-system approaches consider driving processes with variable Fano Factor of the form $\Phi_{N,i}(t) + W_{i,t}$. The assumption of a constant Fano factor is the key to making an analytical treatment of blowups possible in the mean-field limit. As generic cumulative functions, the functions $\Phi_{N,i}$ are only assumed to be right-continuous with left limits, which we refer to as being \emph{c\`{a}dl\`{a}g} following classical probabilistic conventions. These \emph{c\`{a}dl\`{a}g} cumulative functions $\Phi_{N,i}$ are naturally defined in terms of the counting processes $M_{N,j,t}$ as \begin{eqnarray} \Phi_{N,i}(t) = \nu t + \frac{\lambda}{N} \sum_{j\neq i} M_{N,j,t} \nonumber \, , \end{eqnarray} showing that the jump discontinuities in $\Phi_{N,i}$ model interneuronal interactions. In turn, the counting processes $M_{N,j,t}$ are defined as \begin{eqnarray} M_{N,j,t} = \sum_{n>0} \mathbbm{1}_{[0,t]}(\rho_{N,j,n}) \, , \nonumber \end{eqnarray} where $\rho_{N,i,n}$, $n \geq 0$, denote the successive first-passage times of $X_{N,i,t}$ to the zero spiking threshold. These times are formally defined for all $n\geq 0$ by \begin{eqnarray} \rho_{N,i,n+1} = \inf \left\{ t>r_{N,i,n} \, \big \vert \, X_{N,i,t} \leq 0 \right\} \, . 
\nonumber \end{eqnarray} where by convention, we set $r_{N,i,0} = 0$ for processes that are active in zero with $X_{N,i,0}>0$ and where $r_{N,i,n} = \rho_{N,i,n}+\epsilon$, $n\geq 1$, denote the successive delayed reset times. Thus defined, the particle-system dynamics is self exciting: every spiking event of a neuron $i$ hastens the spiking of other neurons $j \neq i$ by bringing their states closer to the zero threshold boundary. Moreover, the particle-system dynamics allows for synchronous spiking as whenever neuron $i$ spikes due to its autonomous dynamics, we generically have that $\Prob{X_{N,j,t} \in (0,w_{N,j,t}]}>0$. If the process $X_{N,i,t}$ first hits zero at time $\rho$, \eqref{eq:stochEq} implies that $X_{N,i,t}$ remains in zero for all $t$ in $(\rho, \rho+\epsilon)$, until it receives an instantaneous kick that enforces a reset in $\Lambda$ at time $r=\rho+\epsilon$. Thus, \eqref{eq:stochEq} formally identifies the refractory state with zero. However, it will prove more convenient to consider the inactive state as an isolated inactive state away from zero. The reason for this is that such a consideration avoid modeling inactive processes via Dirac-delta mass in zero, so that regular absorbing boundary conditions in zero can be enforced. Mathematically, the benefit of including an inactive period $\epsilon$ is to ensure the uniqueness of the particle-system dynamics during spiking avalanches, thereby ensuring that the overall dynamics is well-posed. Spiking avalanches occurs when the spiking of a neuron triggers the instantaneous spiking of other neurons. Neurons that engage in a spiking avalanche can be sorted out according to a generation number. Generation $0$ contains the lone triggering neuron which is driven to the absorbing boundary by its autonomous dynamics. Generation $1$ comprises all those neurons that spike due to interactions with the triggering neuron alone. In general, generation $k>1$, comprises all the neurons that spike from interacting with the neurons of the previous generations alone. In the absence of a post-spiking inactive period ($\epsilon=0$), it is ambiguous whether the neurons from previous generations are impacted by the spiking of neurons from the following generations. However, in the presence of an inactive period ($\epsilon>0$), neurons from previous generations are unresponsive to neurons from following generations due to post-spiking transient inactivation. Accordingly, as a variation on \cite{Delarue:2015b}, we resolve the ambiguity of spiking avalanche in the absence of inactive period by only considering the so-called ``physical dynamics'', obtained from delayed dynamics in the limit $\epsilon \to 0^+$. These ``physical dynamics'' assume that independent of their generation number, every neuron engaging in a spiking avalanche at time $t$ spikes only once and resets in $\Lambda$ at $t^+$. We conclude by noting that the delayed dynamics introduced here differ from those considered in \cite{Delarue:2015b}, where the delay bears on the interactions rather than the resets. This distinction is important as by contrast with reset-delayed dynamics, interaction-delayed dynamics are not prone to explosions. \subsection{Mean-field dynamics under propagation of chaos} The particle-system dynamics introduced above primarily differs from the classically considered one by its Poisson-like attributes. 
In classically defined particle systems, the jump discontinuities of the driving inputs have fixed size $\lambda/N$ instead of being i.i.d according to a normal law with mean and variance equal to $\lambda/N$. In \cite{Delarue:2015b}, Delarue {\it et al.} show that the property of propagation of chaos holds in the infinite-size limit of classically defined particle systems. This property establishes that in the infinite-size limit, a representative process $X_t=\lim_{N \to \infty} X_{N,i,t}$ follows a mean-field dynamics satisfying the PDE problem \eqref{eq:classicalMV} and \eqref{eq:classicalFlux} originally introduced in \cite{Caceres:2011,Carrillo:2013}. This particle-system-based approach automatically yields the existence of---possibly explosive---solutions to the PDE problem \eqref{eq:classicalMV} and \eqref{eq:classicalFlux}. Here, by contrast with \cite{Delarue:2015b} and in line with \cite{Caceres:2011,Carrillo:2013}, we only conjecture propagation of chaos to motivate the form of the PDE problem defining a novel mean-field dynamics that is prone to blowup. We then consider these dynamics on their own merit, independent of the conjecture of propagation of chaos. The propagation of chaos states that for exchangeable initial conditions, the processes $X_{N,i,t}$, $1 \leq i \leq N$, become i.i.d. in the limit of infinite-size networks $N \to \infty$, so that each individual process follows a mean-field dynamics. We refer to such a mean-field dynamics as a cMF dynamics for the classical model and a dPMF dynamics for the Poisson-like model. For both models, the mean-field interaction governing the dynamics of a representative process $X_t$ is mediated by a deterministic cumulative drift $\Phi =\lim_{N \to \infty} \Phi_{N,i}$. Formally, this deterministic drift is defined as $\Phi(t) = \nu t + \lambda \Exp{ M_t}$, where the process $M_t$ counts the successive first-passage times of the representative process $X_t$ to the zero spiking threshold: \begin{eqnarray}\label{eq:MFM} M_t = \sum_{n>0} \mathbbm{1}_{[0,t]}(\rho_n) \, , \quad \mathrm{with} \quad \rho_{n+1} = \inf \left\{ t>r_n = \rho_n+\epsilon \, \big \vert \, X_{t} \leq 0 \right\} \, . \end{eqnarray} In the following, we will denote the increasing function $\Exp{ M_t}$ by $F(t)$. In the context of the associated PDE problem, we will refer to $F$ as the cumulative flux function through the zero absorbing boundary. Observe that by definition, the function $F$ is an increasing \emph{c\`adl\`ag} function. This allows one to define the instantaneous firing rate $f$ in the distribution sense as the Radon-Nikodym derivative of $F$ with respect to the Lebesgue measure $f = \dd F /\dd t$. Correspondingly, synchronous events whereby a finite fraction of processes spike simultaneously are marked by Dirac-delta mass in $f$. By contrast with cMF models, the deterministic cumulative drift $\Phi(t)=\nu t +\lambda F(t)$ constitutively impacts neurons with Poissonian attributes in dPMF models, i.e., via a process $Z_t = \Phi(t)+W_{\Phi(t)}$, where $W$ is a driving Wiener process. Accordingly, the stochastic dPMF dynamics of a representative process is given by \begin{eqnarray}\label{eq:stochEqMF} X_t = X_0 - \int_0^t \mathbbm{1}_{ \{ X_{s^-} >0 \} } \dd Z_s + \Lambda M_{t-\epsilon} \, . \end{eqnarray} The above equation fully defines dPMF dynamics. Because of the self-interaction terms, dPMF dynamics are prone to blowups for large enough interaction coupling and/or for initial conditions that are concentrated near the boundary. 
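
To illustrate how the Poisson-like attributes enter the representative dynamics \eqref{eq:stochEqMF}, the following Monte Carlo sketch simulates a population of representative processes when the cumulative flux $F$ is prescribed rather than determined self-consistently; the linear stand-in for $F$ and all parameter values are hypothetical. The point is merely that the increments of $\Phi(t)=\nu t+\lambda F(t)$ set both the drift and the variance of the drive, and that inactivated processes resume from $\Lambda$ only after the refractory period $\epsilon$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters and a prescribed cumulative flux F (in the true
# mean-field dynamics F is determined self-consistently; here it is an input).
nu, lam, Lam, eps = 1.0, 0.5, 1.0, 0.1
F = lambda t: 0.3 * t                      # linear stand-in for E[M_t]
dt, T, n_paths = 1e-3, 4.0, 10000

t_grid = np.arange(0.0, T + dt, dt)
Phi = nu * t_grid + lam * F(t_grid)        # time change Phi(t) = nu t + lam F(t)
dPhi = np.diff(Phi)                        # its increments on each Euler step

X = np.full(n_paths, Lam)                  # representative processes, started at Lambda
refrac_until = np.full(n_paths, -np.inf)   # times at which refractory periods end
n_resets = 0

for k in range(dPhi.size):
    t = t_grid[k]
    active = t >= refrac_until
    # Poisson-like drive: drift and variance both given by the increment of Phi.
    noise = np.sqrt(dPhi[k]) * rng.standard_normal(active.sum())
    X[active] = X[active] - dPhi[k] + noise
    hit = active & (X <= 0.0)
    n_resets += hit.sum()
    refrac_until[hit] = t + eps            # inactive for a duration eps, ...
    X[hit] = Lam                           # ... then resume from the reset value Lambda

print("mean number of resets per path over [0, T]:", n_resets / n_paths)
\end{verbatim}
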
Actually, just as for cMF models, we will see that the cumulative drift $\Phi$ can exhibit $(i)$ singular blowups, corresponding to a divergence of the reset rate $f$ in finite time and $(ii)$ jump discontinuities whereby a finite fraction of the processes spike at the same time. Our goal is to characterize analytically the emergence of these blowups. This will require first defining the PDE problem associated to dPMF dynamics in the absence of blowups. \subsection{McKean-Vlasov equations under smoothness assumptions}\label{subsec:MKV} For weak interaction, i.e., $\lambda<\Lambda$, we expect dPMF dynamics to be nonexplosive for initial conditions far enough from the spiking threshold, e.g., $p(0,x)=\delta_{x_0}(x)$ with sufficiently large $x_0>0$. This motivates defining the PDE problem associated to dPMF dynamics under strong regularity assumptions. Specifically, let us assume that $t \mapsto F(t) = \Exp{M_t}$ is smooth on $[0,T)$ for some $T>0$. Then, $f(t) = F'(t)$ represents the nonnegative, smooth, mean-field rate of inactivation. Under such regularity assumptions, a representative process $X_t$ satisfying \eqref{eq:stochEqMF} admits a probability density $(t,x) \mapsto p(t,x)$ which solves the Fokker-Planck equation \begin{eqnarray}\label{eq:FP} \partial_t p = (\nu+\lambda f(t)) \left( \partial_x p + \partial^2_{x} p/2 \right) + \mathbbm{1}_{\{ t>\epsilon \} }f(t-\epsilon) \delta_{\Lambda} \, , \end{eqnarray} with absorbing boundary condition $p(t,0) = 0$. The latter absorbing condition ensures that the process becomes inactive upon reaching zero. The Dirac-delta source term models the reset in $\Lambda$ of newly activated processes, which happens at time $t$ with delayed rate $f(t-\epsilon)$. To be consistent, the mean-field dynamics specified by \eqref{eq:FP} needs to conserve the total probability. This conservation requirement implies that \begin{eqnarray}\label{eq:cons1} \partial_t \left( \int_{0}^\infty \! p(t,x) \, \dd x \right) =\mathbbm{1}_{\{t>\epsilon \}}f(t-\epsilon) - f(t) \, . \end{eqnarray} Using \eqref{eq:FP}, we can evaluate the left-hand side above as \begin{eqnarray} \partial_t \left( \int_{0}^\infty \! p(t,x) \, \dd x \right) &=& \int_{0}^\infty \big( \nu + \lambda f(t) \big) \left( \partial_x p(t,x) + \partial^2_{x} p(t,x) /2 \right) \, \dd x \nonumber \\ && + \int_{0}^\infty \mathbbm{1}_{\{t>\epsilon \}}f(t-\epsilon) \delta_{\Lambda}(x) \, \dd x \, . \nonumber \end{eqnarray} Performing integration by parts with absorbing boundary condition in zero then yields \begin{eqnarray} \partial_t \left( \int_{0}^\infty \! p(t,x) \, \dd x \right) = -\big(\nu \! + \! \lambda f(t)\big) \partial_x p(t,0)/2 + f(t-\epsilon) \, , \nonumber \end{eqnarray} which together with \eqref{eq:cons1} imposes the self-consistent conservation condition $f(t) = (\nu \! + \! \lambda f(t)) \partial_x p(t,0) /2$, ultimately yielding: \begin{eqnarray} f(t) = \frac{ \nu \partial_x p(t,0)}{2 - \lambda \partial_x p(t,0)} \, . \nonumber \end{eqnarray} The Fokker-Planck equation \eqref{eq:FP} and the above conservation condition fully specify the mean-field dynamics. As the coefficients of the Fokker-Planck equation depend on its solution via a boundary flux term, the mean-field dynamics is actually a nonlinear Markov evolution of the McKean-Vlasov type. Finally, observe that to avoid blowups, we have only considered initial conditions of the form $p_0(x)=\delta_{x_0}(x)$, with an initially empty inactive state.
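
For concreteness, the smooth regime described in this subsection can be explored with a crude explicit finite-difference scheme; the sketch below uses hypothetical parameters, a mollified stand-in for the Dirac initial condition $\delta_{x_0}$, and makes no claim of numerical accuracy. It only illustrates how the boundary slope feeds back into the effective drift of \eqref{eq:FP} and, after the delay $\epsilon$, into the reset source at $\Lambda$.
\begin{verbatim}
import numpy as np

# Hypothetical discretization of the smooth (pre-blowup) regime: explicit Euler in
# time, one-sided difference for the drift, centered difference for the diffusion.
nu, lam, Lam, eps, x0 = 1.0, 0.5, 1.0, 0.1, 2.0
dx, dt, T = 0.02, 2e-4, 1.0
x = np.arange(0.0, 8.0, dx)
j_reset = int(round(Lam / dx))

# Mollified stand-in for the Dirac initial condition delta_{x0}.
p = np.exp(-((x - x0) ** 2) / (2 * 0.05 ** 2))
p[0] = 0.0
p /= np.sum(p) * dx

n_steps = int(T / dt)
f_hist = np.zeros(n_steps + 1)               # f(t) on the time grid; f = 0 for t < 0

for n in range(n_steps):
    slope = p[1] / dx                        # \partial_x p(t,0), using p(t,0) = 0
    f = nu * slope / (2.0 - lam * slope)     # self-consistent boundary flux
    f_hist[n] = f

    dpdx = (np.roll(p, -1) - p) / dx                      # drift term (upwind)
    d2pdx2 = (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx ** 2
    p = p + dt * (nu + lam * f) * (dpdx + 0.5 * d2pdx2)

    # Delayed reset source: rate f(t - eps) deposited in the cell containing Lambda.
    n_del = n - int(round(eps / dt))
    if n_del >= 0:
        p[j_reset] += dt * f_hist[n_del] / dx

    p[0] = 0.0                                # absorbing boundary
    p[-1] = 0.0                               # far-field truncation

print("boundary slope and rate at final time:", p[1] / dx, f_hist[n_steps - 1])
\end{verbatim}
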
However, the PDE problem defined above can be considered for more generic initial conditions, at the possible cost of not having any regular solutions. Owing to the delayed nature of the dynamics, these generic initial conditions are naturally specified by \begin{eqnarray} p(0,x) = p_0(x) \quad \mathrm{and} \quad f(t) = f_0(t) \, , \quad -\epsilon \leq t< 0 \, , \nonumber \end{eqnarray} with $(p_0,f_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([-\epsilon,0))$, where $(p_0,f_0)$ satisfies the normalization condition \begin{eqnarray}\label{eq:normCond} \int_{-\epsilon}^0 f_0(s) \, \dd s = 1-\int_{0}^\infty p_0(x) \, \dd x \, . \end{eqnarray} Given this notion of initial conditions, we define the McKean-Vlasov PDE problem associated to smooth dPMF dynamics as: \begin{definition}\label{def:delayedPDE} Given normalized initial conditions $(p_0,f_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([-\epsilon,0))$, the PDE problem associated to a dPMF dynamics consists in finding the density function $(t,x) \mapsto p(t,x)= \dd \Prob{0<X_t<x} / \dd x$ solving \begin{eqnarray}\label{eq:delayedPDE} \partial_t p &=& \big(\nu + \lambda f(t) \big) \left( \partial_x p + \partial^2_{x} p /2 \right) + f(t-\epsilon) \delta_{\Lambda} \, , \end{eqnarray} on $[0,T) \times [0, \infty)$ for some (possibly infinite) $T>0$ and with absorbing and conservation conditions given by \begin{eqnarray}\label{eq:delayedPDEcond} p(t,0) = 0 \quad \mathrm{and} \quad f(t) = \frac{\nu \partial_x p(t,0)}{2 - \lambda \partial_x p(t,0)} \, . \end{eqnarray} \end{definition} The challenge posed by the emergence of blowups is to make sense of \eqref{eq:delayedPDE} for an instantaneous flux $f$ exhibiting a finite-time divergence, possibly followed by a Dirac-delta mass, corresponding to a jump discontinuity in $F$. Such singularities present themselves whenever an initially smooth dynamics is such that $t \mapsto \partial_x p(t,0)/2$ reaches $1/\lambda$ in finite time. As intuition suggests, this blowup criterion will generally be met for sufficiently large interaction parameters. In particular, we will see that given initial conditions of the form $p_0=\delta_{x_0}$ with $x_0>0$, there is a constant $C_{x_0}$ which only depends on $x_0$ such that a blowup occurs in finite time for all $\lambda>C_{x_0}$. \subsection{Weak formulation for explosive dPMF dynamics} In this section, our goal is to propose a weak formulation of the PDE problem \ref{def:delayedPDE} that is amenable to capturing explosive dPMF dynamics. This formulation bears on candidate density functions $(t,x) \mapsto p(t,x)$ in the space of distributions defined as the dual of $C_{0,\infty}([-\epsilon,T) \times \mathbbm{R})$, the set of compactly supported, smooth functions $u:[-\epsilon,T)\times \mathbbm{R} \rightarrow \mathbbm{R}$. We derive the announced weak formulation by first considering nonexplosive solutions of the PDE problem \ref{def:delayedPDE}.
For all nonexplosive solutions $(t,x) \mapsto p(t,x)$ and all test functions $u$ in $C_{0,\infty}([-\epsilon,T) \times \mathbbm{R})$, we must have \begin{eqnarray*} 0&=&\int_0^T \int_{0}^\infty \left[ \big(\nu + \lambda f(t) \big) \mathcal{L}[p](t,x) + f(t-\epsilon) \delta_{\Lambda} - \partial_t p(t,x) \right] u(t,x) \, \dd x\, \dd t \,, \\ &=&\int_0^T \int_{0}^\infty \big[ \big(\nu + \lambda f(t) \big) \mathcal{L}[p](t,x) - \partial_t p(t,x) \big] u(t,x) \, \dd x \, \dd t \\ && \hspace{200pt} + \int_0^T f(t-\epsilon) u(t,\Lambda) \, \dd t \, , \end{eqnarray*} where $\mathcal{L}$ denotes the operator $\mathcal{L}=\partial_x + \partial^2_{x}/2 $ for brevity and $\mathcal{L}^\dagger=-\partial_x + \partial^2_{x}/2$ denotes its formal adjoint. Taking into account the absorbing boundary condition, integration by parts with respect to space yields \begin{eqnarray*} \int_{0}^\infty \mathcal{L}[p](t,x) u(t,x) \, \dd x = \int_{0}^\infty p(t,x) \mathcal{L}^\dagger [u](t,x)\, \dd x - \frac{1}{2}\partial_x p(t,0) u(t,0) \, , \end{eqnarray*} so that integration by parts with respect to time produces \begin{eqnarray*} \int_{0}^\infty \big[ p(T,x)u(T,x)-p(0,x)u(0,x) \big] \, \dd x &=& \\ && \hspace{-120pt}\int_0^T\int_{0}^\infty p(t,x) \big[ \big(\nu + \lambda f(t) \big) \mathcal{L}^\dagger [u](t,x) + \partial_t u(t,x) \big] \, \dd x \, \dd t \nonumber\\ && \hspace{-120pt} + \int_0^T \big[ f(t-\epsilon)u(t,\Lambda)-\big(\nu + \lambda f(t) \big) \partial_x p(t,0) u(t,0)/2 \big] \, \dd t \, . \end{eqnarray*} Remembering the flux conservation condition $\partial_x p(t,0)=2f(t)/(\nu+\lambda f(t))$, we obtain the following weak characterization for nonexplosive solutions \begin{eqnarray*} \int_{0}^\infty \big[ p(T,x)u(T,x)-p(0,x)u(0,x) \big] \, \dd x &=& \\ && \hspace{-120pt}\int_0^T\int_{0}^\infty p(t,x) \big[ \big(\nu + \lambda f(t) \big) \mathcal{L}^\dagger [u](t,x) + \partial_t u(t,x) \big] \, \dd x \, \dd t \nonumber\\ && \hspace{-120pt} + \int_0^T \big[ f(t-\epsilon)u(t,\Lambda)- f(t) u(t,0) \big] \, \dd t \, . \end{eqnarray*} The above characterization involves the instantaneous flux $f$ as an unknown, which can be safely assumed to be a nonnegative integrable function. With that in mind, one can see that the proposed characterization derived for nonexplosive solutions is well-posed for any candidate density function in the space of integrable distributions.
This leads to defining the notion of weak solution for the dPMF dynamics in the presence of blowups as follows: \begin{definition}\label{def:weakPDE} Given normalized initial conditions $(p_0,f_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([-\epsilon,0))$, the density function $(t,x)\mapsto p(t,x)$ is a weak solution of the dPMF dynamics if and only if there is a bounded nondecreasing \emph{c\`adl\`ag} function $F:[-\epsilon,T) \to \mathbbm{R}$ with $\dd F/dt = f_0$ on $[-\epsilon,0)$ such that for all $u$ in $C_{0,\infty}([-\epsilon,T) \times \mathbbm{R})$, we have \begin{eqnarray}\label{eq:weakPDE} \int_{0}^\infty \big[ p(T,x)u(T,x)-p(0,x)u(0,x) \big] \, \dd x &=& \\ &&\hspace{-140pt} \int_0^T\int_{0}^\infty p(t,x) \big[\nu \mathcal{L}^\dagger [u](t,x) + \partial_t u(t,x) \big] \, \dd x \, \dd t \nonumber\\ && \hspace{-140pt}+ \: \lambda \int_0^T\int_{0}^\infty p(t^-,x) \mathcal{L}^\dagger [u](t,x)\, \dd x \, \dd F(t) \nonumber\\ && \hspace{-140pt}+ \int_0^T u(t,\Lambda) \, \dd F(t-\epsilon) - \int_0^T u(t,0) \, \dd F(t) \, .\nonumber\ \end{eqnarray} \end{definition} Clearly, all nonexplosive solutions $(t,x)\mapsto p(t,x)$ of the PDE problem \ref{def:delayedPDE} are weak solutions of the PDE problem \ref{def:weakPDE} for $F$ equal to the cumulative flux integrating $f$ as defined in \eqref{eq:delayedPDEcond}. It is also clear that by contrast with the PDE problem \ref{def:delayedPDE}, the definition of weak solutions allows for discontinuous function $F$. In that respect, observe that we enforce that $F$ is \emph{c\`adl\`ag} to be consistent with the definition of the counting process $M_t$ given in \eqref{eq:MFM}. When choosing $F$ to be \emph{c\`adl\`ag}, it is then necessary to specify the type of continuity of the integrand for integrals with respect to $F$ as the integrator to be well-defined. In view of the predictable integrand in stochastic equation \eqref{eq:stochEqMF}, we consistently impose that the integrand be left-continuous whenever $F$ features as the integrator. Intuitively, we expect the function $F$ featuring in Definition \ref{def:weakPDE} to be uniquely related to a weak solution $(t,x) \mapsto p(t,x)$, just as for nonexplosive solutions. This fact is established by the following proposition: \begin{proposition}\label{prop:uniqueF} There is a unique nondecreasing \emph{c\`adl\`ag} function $F$ such that the density function $(t,x)\mapsto p(t,x)$ is a weak solution of the dPMF dynamics. \end{proposition} \begin{proof} Observe that for integrable density functions, the defining property of weak solutions also holds for smooth test functions with bounded derivatives of all orders. Then, specifying \eqref{eq:weakPDE} for $u=1$ yields \begin{eqnarray}\label{eq:u1} \int_{0}^\infty p(T,x)\, \dd x-\int_{0}^\infty p(0,x) \, \dd x = \big( F(T-\epsilon) -F (-\epsilon) \big) - \big( F(T) -F (0) \big) \, . \end{eqnarray} Consider two functions $F_1$ and $F_2$ such that $p$ is a weak solution of the dPMF dynamics. The initial conditions impose that we have \begin{eqnarray*} F_1(0)-F_1(-\epsilon)=F_2(0)-F_2(-\epsilon)=\int_{-\epsilon}^0 f_0(t) \, \dd t \, . \end{eqnarray*} Therefore, specifying \eqref{eq:u1} for $F_1$ and $F_2$ and forming the difference yields \begin{eqnarray*} F_1(t)-F_1(t-\epsilon)=F_2(t)-F_2(t-\epsilon) \, , \end{eqnarray*} so that, $F_1-F_2$ is an $\epsilon$-periodic function. As $F_1 = F_2$ on the interval $[-\epsilon,0)$, we necessarily have $F_1=F_2$ for all $t \leq 0$ by $\epsilon$-periodicity. 
\end{proof} In the following, our strategy will be to use the above notion of weak solutions to screen candidate explosive solutions defined via time change for \emph{bona fide} dPMF dynamics. \section{Linearization via implicitly defined time change}\label{sec:timeChange} In this section, we show that under certain regularity assumptions, dPMF dynamics can be turned into noninteracting linear dynamics via a time change. We then interpret these dynamics probabilistically via renewal analysis to establish that they are constitutively well-posed, independent of the time-change function. Such a realization provides the basis to define explosive dPMF dynamics in the time-changed picture. \subsection{Conditionally linear dynamics} Informally, dPMF dynamics admit blowups at those times $T$ for which the instantaneous inactivation flux $f$ diverges: $\lim_{t \to T^-} f(t) =\infty$. Characterizing such blowup times analytically entails studying the PDE problem \ref{def:delayedPDE} with drift, diffusion, and reset coefficients that are all allowed to diverge locally. In general, this is a hard problem that cannot be tackled analytically. However, for drift and diffusion coefficients with Poisson-like attributes, the problem is tractable thanks to the availability of a regularized time-changed formulation. Not surprisingly, the function $\Phi$ operating this time change can be guessed as the integral function of the drift: \begin{eqnarray}\label{eq:Phi} t \mapsto \Phi(t) = \nu t+ \lambda F(t) \, . \end{eqnarray} This approach suggests considering solutions to the PDE problem \ref{def:delayedPDE} as parametrized by a time change $\Phi$, which shall be viewed as the fundamental unknown of the problem. In light of \eqref{eq:Phi}, we shall look for solution time changes $\Phi$ in the following class of functions: \begin{definition}\label{def:Phi} We define the class of valid time changes $\mathcal{T}$ as the set of \emph{c\`adl\`ag} functions $\Phi:[-\epsilon, \infty)\to[\xi_0, \infty)$, such that their difference quotients are lower bounded by $\nu$: for all $y,x \geq -\epsilon$, $x \neq y$, we have \begin{eqnarray*} w_\Phi(y,x)=\frac{\Phi(y)-\Phi(x)}{y-x} \geq \nu \, . \end{eqnarray*} \end{definition} In general, to be a valid time change, we only require a function $\Phi: \mathbbm{R}^+ \to \mathbbm{R}^+$ to be a nondecreasing \emph{c\`{a}dl\`{a}g} function. This means that time changes $\Phi$ must exclude time-reversal points at which the changed time $\sigma = \Phi(t)$ would start flowing backward when the original time $t$ keeps moving forward. Here, the time change Definition \ref{def:Phi} additionally imposes that $\Phi$ has no flat region as we have $\nu>0$. As a result, specifying the inverse time change $\Phi^{-1}: \mathbbm{R}^+ \to \mathbbm{R}^+$ as the right-continuous generalized inverse of $\Phi$ actually yields a continuous function $\Psi$. It is then clear that $\Phi$ is the right-continuous inverse of $\Psi$. \begin{definition}\label{def:Psi} Given a time change $\Phi$ in $\mathcal{T}$, the inverse time change $\Phi^{-1}:[\xi_0,\infty) \to [-\epsilon,\infty)$ is defined as the continuous function \begin{eqnarray*} \sigma \mapsto \Psi(\sigma)=\Phi^{-1}(\sigma) = \inf \left\{ t \geq 0 \, \vert \, \Phi(t) > \sigma \right\} \, . \end{eqnarray*} \end{definition} Importantly, valid time changes include functions $\Phi$ with discontinuous jumps---or equivalently flat regions for $\Psi$.
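
To make the correspondence between jump discontinuities of $\Phi$ and flat regions of $\Psi$ concrete, the following sketch evaluates the right-continuous generalized inverse of Definition \ref{def:Psi} on a grid, for a hypothetical time change with a single jump (not taken from the paper), and recovers $\Phi$ as the right-continuous inverse of $\Psi$.
\begin{verbatim}
import numpy as np

# Hypothetical time change with a single jump at t = 1 (illustration only).
nu, lam = 1.0, 2.0
t = np.linspace(-0.1, 3.0, 100001)        # original-time grid
F = 0.5 * (t >= 1.0)                      # cumulative flux with one jump of size 0.5
Phi = nu * t + lam * F                    # cadlag, difference quotients >= nu

def Psi(sigma):
    """Right-continuous generalized inverse on the grid: inf{ t | Phi(t) > sigma }."""
    i = np.searchsorted(Phi, sigma, side="right")
    return t[np.minimum(i, t.size - 1)]

def Phi_of(time, sigma_grid):
    """Recover Phi as the right-continuous inverse of Psi."""
    psi_vals = Psi(sigma_grid)
    i = np.searchsorted(psi_vals, time, side="right")
    return sigma_grid[min(i, sigma_grid.size - 1)]

sigma = np.linspace(Phi[0], Phi[-1], 100001)
flat = np.isclose(Psi(sigma), 1.0, atol=1e-4)   # Psi is flat at the value t = 1
print("flat region of Psi:", sigma[flat].min(), "to", sigma[flat].max())  # about [1, 2]
print("Phi(2) recovered:", Phi_of(2.0, sigma), "expected:", nu * 2.0 + lam * 0.5)
\end{verbatim}
The jump of $\Phi$ of size $\lambda \times 0.5$ at $t=1$ indeed reappears as a flat section of $\Psi$ of the same length, which is the mechanism used below to unfold synchronous events.
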
Such discontinuities will correspond to the occurrence of synchronous events, at those times for which inactivation on the absorbing boundary has finite probability. The key to unlocking these synchronous events is that the time change $\Phi$ maps an eventually singular, interacting dynamics onto a constitutively regular, noninteracting one. When unfolded along the new time coordinate $\sigma=\Phi(t)$, this regular dynamics will only depend on $\Phi$ via the time wrapping of the refractory period $\epsilon$. Such time wrapping is captured by the so-called backward-delay function, which is defined as follows: \begin{definition}\label{def:Eta} Given the time change $\Phi$ in $\mathcal{T}$, we define the corresponding backward-delay function $\eta:[0,\infty) \to \mathbbm{R}^+$ by \begin{eqnarray*} \eta(\sigma) = \sigma - \Phi \left( \Psi(\sigma) - \epsilon \right) \, , \quad \sigma \geq 0 \, . \end{eqnarray*} We denote the set of backward-delay functions $\lbrace \eta[\Phi]\rbrace_{\Phi \in \mathcal{T}}$ by $\mathcal{W}$. \end{definition} As $w_\Phi \geq \nu$ for all $\Phi$ in $\mathcal{T}$, it is clear that for all $\eta$ in $\mathcal{W}$, we actually have $\eta \geq \nu \epsilon$, so that all delays are bounded away from zero. Time-wrapped delay functions $\eta$ in $\mathcal{W}$ will serve to parametrize the time-changed dynamics obtained via $\Phi$ in $\mathcal{T}$. These time-changed dynamics will be those of a modified Wiener process $Y_\sigma$ with negative unit drift, inactivation on the zero boundary, and reset in $\Lambda$ after a refractory period specified by $\eta$. Consequently, we define time-changed dynamics as the processes $Y_\sigma$ that solve the following stochastic evolution: \begin{definition}\label{def:Ysigma} Denoting the canonical Wiener process by $W_\sigma$, we define the time-changed processes $Y_\sigma$ as solutions to the stochastic evolution \begin{eqnarray}\label{eq:Ysigma} Y_\sigma = -\sigma+ \int_0^\sigma \mathbbm{1}_{\{Y_{\xi^-}>0\}}\, \dd W_\xi + \Lambda N_{\sigma-\eta(\sigma)} \, ,\quad \mathrm{with} \quad N_\sigma = \sum_{n>0} \mathbbm{1}_{[\xi_0,\sigma]}(\xi_n) \, , \end{eqnarray} where the process $N_\sigma$ counts the successive first-passage times $\xi_n$ of the process $Y_\sigma$ to the absorbing boundary: \begin{eqnarray*} \xi_{n+1} = \inf \left\{ \sigma >0 \, \big \vert \, \sigma-\eta(\sigma) > \xi_n, Y_\sigma \leq 0 \right\} \, . \end{eqnarray*} \end{definition} A time-changed process $Y_\sigma$ is uniquely specified by imposing an elementary initial condition, which takes one of two forms: either the process is active with $Y_0=x>0$ and $N_0=0$, or the process has entered its refractory period at some earlier time $\xi$ so that $Y_0=0$ and $N_\sigma=\mathbbm{1}_{\sigma \geq \xi}$ for $\xi_0 \leq \sigma < 0$. Generic initial conditions are given by considering that $(x,\xi)$ is sampled from some probability distribution on $\{ (0,\infty) \times \{ 0 \} \} \cup \{ \{ 0 \} \times [\xi_0,0) \}$. This amounts to choosing a normalized pair of distributions $(q_0,g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$. The ensuing dynamics is well-defined as long as the backward-delay function $\eta \geq \nu \epsilon$ is locally bounded, which is always the case for valid time changes $\Phi$. Of particular interest is the fact that such dynamics can accommodate jump discontinuities in $\eta$.
This is perhaps best seen by considering the so-called backward-time function $\mathbbm{R}^+ \to [\xi_0,\infty )$, $\sigma \mapsto \sigma-\eta(\sigma)$, featured in the time-delayed counting process of \eqref{eq:Ysigma}. When unambiguous, we will denote this backward-time function by $\xi$ and refer to it as the ``function $\xi$'' to differentiate from when $\xi$ plays the role of a real variable. By construction, the function $\xi$ satisfies $\xi(\sigma) =\sigma-\eta(\sigma)= \Phi \left( \Psi(\sigma) - \epsilon \right)$, and is thus a nondecreasing \emph{c\`adl\`ag} function, possibly admitting discontinuities and flat regions. Specifically, denoting by $\mathcal{D}_\Phi$ the countable set of discontinuous time of $\Phi$ in $[-\epsilon,\infty)$, the function $\xi$ has discontinuities on the countable set \begin{eqnarray*} \mathcal{D}_\xi = \cup_{t \in \mathcal{D}_\Phi} \{ \inf \Psi^{-1}(\{ t+\epsilon \}) \} = \cup_{t \in \mathcal{D}_\Phi} \{ \inf \{\sigma \, \vert \, \Psi(\sigma)=t+\epsilon \} \} \, , \end{eqnarray*} whereas it is flat on the countable disjoint union of discontinuity intervals of $\Phi$: \begin{eqnarray*} \mathcal{J}_\xi = \mathcal{J}_\Psi = \cup_{t \in \mathcal{D}_\Phi} \{ \Psi^{-1}(\{ t \}) \} = \cup_{t \in \mathcal{D}_\Phi} [\Phi(t^-), \Phi(t)] \, . \end{eqnarray*} The discontinuities and flat regions of the function $\xi$ will play the central part in explaining the occurrence of synchronous events in the original dPMF dynamics from analyzing the dynamics of $Y_\sigma$. In this perspective, it is worth completing the time-changed picture of the dPMF dynamics by stating the PDE problem attached to the dynamics of $Y_\sigma$: \begin{definition}\label{def:qPDE} Given a backward-delay function $\eta$ in $\mathcal{W}$ and some normalized initial conditions $(q_0,g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$, $\xi_0=-\eta(0)$, the density function $(\sigma,x) \mapsto q(\sigma,x)=\dd \Prob{0<Y_\sigma \leq x}/\dd x$ solves the time-changed PDE problem \begin{eqnarray}\label{eq:qPDE} \partial_\sigma q &=& \partial_x q +\frac{1}{2} \partial^2_{x} q + \frac{\dd}{\dd \sigma} [G(\sigma-\eta(\sigma))] \delta_{\Lambda} \, , \end{eqnarray} with absorbing and conservation conditions respectively given by \begin{eqnarray}\label{eq:abscons} q(\sigma,0)=0 \quad \mathrm{and} \quad g(\sigma )= \partial_\sigma G(\sigma)=\partial_x q(\sigma ,0) /2 \, . \end{eqnarray} \end{definition} We refer to the time-changed PDE problem \ref{def:qPDE} as a regularized one because the emergence of synchrony will only involves discontinuities in the backward-delay function rather than diverging drift and diffusion coefficients. This is obvious from the fact that \eqref{eq:qPDE} features constant unit drift and diffusion coefficients, so that all the interactions present in the original problem will be mediated by $\eta$ in the time-changed dynamics. The boundary condition \eqref{eq:abscons} identifies $g$ as the absorbing boundary flux of $Y_{\sigma}$, i.e., as its instantaneous inactivation rate. Because of the nonhomogeneity of $\eta$, the reset rate with which $Y_{\sigma}$ activates is generally distinct from the inactivation rate, as shown by the prefactor of the Dirac-delta source term in \eqref{eq:qPDE}. Technically, this prefactor is defined as the Radon-Nikodym derivative of the measure specified by the cumulative function $\sigma \mapsto G(\sigma-\eta(\sigma))$ with respect to the Lebesgue measure on $[0,\infty)$. 
This definition is justified by the fact that for all $\Phi$ in $\mathcal{T}$, $\sigma \mapsto \sigma-\eta(\sigma)=\Phi \left( \Psi(\sigma) - \epsilon \right)$ is a nondecreasing function and that $G$ is defined as a cumulative flux function. Moreover, this definition allows for possibly discontinuous backward-delay functions $\eta$, as we will see that $G$ is uniquely determined as smooth function in the next section. \subsection{Time-inhomogeneous renewal process} The cumulative flux $G$ is instrumental in specifying the dynamics of the time-changed process $Y_\sigma$, whose density function solves the PDE problem \ref{def:qPDE}. Given generic initial conditions, it is clear that $G$ shall only depend on the backward-delay functions $\eta$. In the next section, we will make this $\eta$-dependence explicit by adapting results from elementary renewal analysis. As a preliminary to this objective, we devote this section to exhibiting the renewal character of the time-changed dynamics. In this perspective, let us introduce $\lbrace \tau_k \rbrace_{k \geq 1}$, the increasing sequence of reset times to be distinguished from the sequence of inactivation times $\{ \xi_k \}_{k \geq 0}$, with the convention that $\tau_0=0$. These newly-introduced reset times can also be defined in terms of the backward-delay function $\eta$ as: \begin{proposition}\label{prop:tauDef} Given a backward-delay function $\eta$ in $\mathcal{W}$, the reset times $\lbrace \tau_k \rbrace_{k \geq 1}$ of the time-changed dynamics $Y_\sigma$ satisfy \begin{eqnarray}\label{eq:tauDef} \tau_k=\tau(\xi_k) \, , \quad \mathrm{with} \quad \tau(\xi) = \inf \{ \sigma > 0 \, \vert \, \xi(\sigma) = \sigma-\eta(\sigma) \geq \xi \} \, , \end{eqnarray} where the forward function $\tau$ is the left-continuous generalized inverse of the nondecreasing function $\xi=\mathrm{id}-\eta$. \end{proposition} Again, just as for the function $\xi$, we will refer to the ``function $\tau$'' when $\tau$ designates the forward function defined in \eqref{eq:tauDef} rather than a real variable. \begin{proof}[Proof of Proposition \ref{prop:tauDef}] This follows from the fact that the delayed process $N_{\sigma-\eta(\sigma)}$ involved in \eqref{eq:Ysigma} results from the composition of the $\emph{c\`adl\`ag}$ counting process $N_t$ with the $\emph{c\`adl\`ag}$ nondecreasing function $\xi=\mathrm{id}-\eta$. To see why, suppose that $Y_\sigma$ inactivates in $\xi_1$, i.e., that $N_\sigma$ has a jump discontinuity in $\xi_1$. Then, $Y_\sigma$ remains in zero for $\sigma>\xi_1$ until the first discontinuity time of the reset counting process $N_{\xi(\sigma)}$. If $\xi^{-1}(\{\xi_1 \})=\{\tau_1 \}$ is a singleton, we set $\tau(\xi_1)=\tau_1$. Otherwise, $\xi^{-1}(\{\xi_1 \})$ is an interval including its left endpoint denoted by $\tau_1$. By right-continuity of $N_\sigma$, the first discontinuity time of the composed process $N_{\xi(\sigma)}$ must be $\tau_1$, which is defined as: \begin{eqnarray*} \tau_1 = \inf \{ \sigma > 0 \, \vert \, \xi(\sigma) = \xi_1 \} = \inf \{ \sigma > 0 \, \vert \, \xi(\sigma) \geq \xi_1 \} \, . \end{eqnarray*} This justifies defining the function $\tau$ as the left-continuous generalized inverse of $\xi$. \end{proof} By definition \eqref{eq:tauDef}, for all $k\geq 1$, $\tau_k$ satisfies $\tau_k-\xi_k \geq \eta(\tau_k)$ with equality if $\xi_k$ is a continuity point of $\tau$. Otherwise, we can only say that $\tau_k-\xi_k \geq \eta(\tau_k)$. 
Thus the refractory period of $Y_\sigma$ does not necessarily coincide with the backward-delay function $\eta$ at the reset time. However, if $\eta$ is uniformly bounded by $\Vert \eta \Vert_{0,\infty}$ on $\mathbbm{R}^+$, we have \begin{eqnarray}\label{eq:tauIneq} \tau(\xi) \leq \inf \Big \{ \sigma > 0 \, \Big \vert \, \sigma- \sup_{ s \geq 0 }\eta(s) \geq \xi \Big \} = \xi + \Vert \eta \Vert_{0,\infty} \, , \end{eqnarray} so that, in general, we have $\eta(\tau_k) \leq \tau_k-\xi_k \leq \Vert \eta \Vert_{0, \infty}$. Clarifying the possible continuity issues of the time-changed dynamics motivates introducing one more delay function, the so-called forward-delay function defined by $\gamma=\tau-\mathrm{id}$. By contrast with the backward-delay function, $\gamma$ allows us to consider the refractory period as a function of the inactivation time: $\tau_k-\xi_k=\tau(\xi_k)-\xi_k=\gamma(\xi_k)$. Backward and forward delay functions are naturally related via the following properties: \begin{proposition} $(i)$ For all backward delay functions $\eta$ in $\mathcal{W}$, the forward-delay function $\gamma$ is specified by: \begin{eqnarray}\label{eq:charGamma} \forall \; \xi \geq \xi_0 \, , \quad \gamma(\xi) = \inf \{ \sigma > 0 \, \vert \, \sigma \geq \eta(\sigma+\xi)\} \geq \nu \epsilon \, . \end{eqnarray} $(ii)$ Given two backward-delay functions $\eta_a$ and $\eta_b$ in $\mathcal{W}$ with $\eta_a \geq \eta_b$, their corresponding forward-delay functions $\gamma_a$ and $\gamma_b$ satisfy $\gamma_a \geq \gamma_b$. \end{proposition} \begin{proof} $(i)$ By definition of the forward function $\tau$ in \eqref{eq:tauDef}, for all $\xi \geq \xi_0$: \begin{eqnarray*} \gamma(\xi) = \tau(\xi) - \xi = \inf \{ \sigma > \xi \, \vert \, \sigma-\eta(\sigma) \geq \xi \} - \xi = \inf \{ \sigma > 0 \, \vert \, \sigma \geq \eta(\sigma+\xi)\} \, . \end{eqnarray*} In particular, we have $\gamma(\xi) \geq \inf_{\sigma \geq 0} \eta(\sigma) \geq \nu \epsilon$. $(ii)$ If $\eta_a \geq \eta_b$, we have $\{ \sigma > 0 \, \vert \,\sigma \geq \eta_a(\sigma+\xi) \} \subset \{ \sigma > 0 \, \vert \,\sigma \geq \eta_b(\sigma+\xi) \}$ so that by the characterization given in $(i)$, we have $\gamma_a \geq \gamma_b$. \end{proof} Notice that characterization \eqref{eq:charGamma} together with \eqref{eq:tauIneq} implies that $\Vert \eta \Vert_{0,\infty}=\Vert \gamma \Vert_{0,\infty}$.\\ It is now straightforward to exhibit the renewal character of the dynamics of $Y_\sigma$. Unless stated otherwise, we assume the initial condition $p_{\sigma_0}=\delta_x$, $x>0$, for simplicity. Given a backward-delay function $\eta$ in $\mathcal{W}$, the sequences of times $\lbrace \xi_k \rbrace_{k \geq 0}$ and $\lbrace \tau_k \rbrace_{k \geq 0}$ are interwoven, i.e., $\xi_0<\tau_0=0<\xi_1 < \tau_1 <\xi_2 < \tau_2 \ldots$. The refractory periods $\{ \tau_k-\xi_k \}_{k \geq 1}$ are determined by the forward-delay function $\tau_k-\xi_k=\gamma(\xi_k)$, which was precisely introduced to that end. In between consecutive reset and inactivation times, the dynamics of the time-changed process $Y_\sigma$ is simply that of a Wiener process with unit negative drift. Thus, the random variables $\{ \xi_{k+1}-\tau_{k} \}_{k \geq 0}$ are independently distributed
according to $\Prob{\xi_{k+1}-\tau_{k} \leq \sigma}=H(\sigma,\Lambda)$ for all $k\geq1$ and to $\Prob{ \tau_1 \leq \sigma \, \vert \, Y_0=x } =H(\sigma,x)$ otherwise, where $H$ denotes the first-passage cumulative distribution~\cite{Karatzas} \begin{eqnarray*}H(\sigma,x) = \frac{1}{2} \left( \mathrm{Erfc} \left( \frac{x-\sigma}{\sqrt{2 \sigma}}\right) + e^{2 x} \mathrm{Erfc} \left( \frac{x + \sigma}{\sqrt{2 \sigma}}\right) \right) \, . \end{eqnarray*} By convention, we set $H(\sigma,x)=0$ for $\sigma<0$, so that $H$ admits the density function \begin{eqnarray*}h(\sigma,x) = \partial_\sigma H(\sigma,x) = \mathbbm{1}_{\{\sigma \geq 0 \}}\frac{xe^{-(x -\sigma)^2/2 \sigma}}{\sqrt{2 \pi \sigma^3}} \, . \end{eqnarray*} In turn, the inter-inactivation epochs $\{ \xi_{k+1}-\xi_k \}_{k \geq 1}$ are independently distributed according to time-inhomogeneous distributions: \begin{eqnarray*} \Prob{\xi_{k+1} \leq \sigma \, \vert \, \xi_k}&=&\bar{H}_\Lambda(\sigma,\xi_k) = H \big(\sigma-\tau(\xi_k) , \Lambda \big)= H \big(\sigma-\xi_k-\gamma(\xi_k) , \Lambda \big) \, , \end{eqnarray*} This shows that the sequence $\{ \xi_k \}_{k \geq 0}$ constitutes a time-inhomogenous renewal process. Recognizing the renewal character of the time-changed dynamics $Y_\sigma$ suggests that its associated cumulative flux $G$ satisfies a renewal-type integral equation. In order to establish this equation in the next section, we will need the following result, which shows that forward and backward functions $\xi$ and $\tau$ are well-behaved inverse functions of one another. \begin{proposition}\label{prop:Peta} Given a backward-delay function $\eta$ in $\mathcal{W}$, for all $\sigma>0$, we have \begin{eqnarray*} \{ \xi > \xi_0 \, \vert \, \tau(\xi) = \xi+\gamma(\xi) \leq \sigma \} = \{ \xi > \xi_0 \, \vert \, \xi \leq \xi(\sigma) = \sigma-\eta(\sigma) \} \, . \end{eqnarray*} \end{proposition} \begin{proof} In order to prove the proposed set identity, we use the following characterization: \begin{eqnarray}\label{eq:charSet} \{ \xi' > \xi_0 \, \vert \, \tau(\xi') \leq \sigma \} &=& \{ \xi' > \xi_0 \, \vert \, \inf \{ \tau'> \xi' \, \vert \, \xi(\tau') \geq \xi' \} \leq \sigma \} \, , \\ &=& {\displaystyle \cap}_{n \geq 1}\left\{ \xi' > \xi_0 \, \vert \, \exists \, \tau'> \xi' \, , \; \xi(\tau') \geq \xi' , \tau' \leq \sigma+1/n \right\} \, , \\ &=& {\displaystyle \cap}_{n \geq 1}\left\{ \xi' > \xi_0 \, \vert \, \exists \, \tau'> \xi' \, , \; \xi(\tau') \geq \xi' , \sigma \leq \tau' \leq \sigma+1/n \right\} \, , \end{eqnarray} where the last equality follows from the fact that $\xi = \mathrm{id}-\eta$ is nondecreasing. Consider $\xi' > \xi_0$ such that $\xi' \leq \xi(\sigma)$, then for all $n \geq 1$, take any $\tau'$ such that $\sigma \leq \tau' \leq \sigma+1/n$, we have $\xi' \leq \xi(\sigma) = \sigma-\eta(\sigma) < \sigma \leq \tau'$ and $\xi(\tau') \geq \xi(\sigma) \geq \xi'$. Thus $ \{ \xi' > \xi_0 \, \vert \, \xi' \leq \xi(\sigma) \} \subset \{ \xi' > \xi_0 \, \vert \, \tau(\xi') \leq \sigma \}$. Reciprocally, consider $\xi' > \xi_0$ such that $\tau(\xi') \leq \sigma$. Then by characterization \eqref{eq:charSet}, there is a sequence $\{ \tau_n \}_{n\geq 1}$ such that $\xi(\tau_n) \geq \xi'$ and $\sigma \leq \tau_n \leq \sigma+1/n$. Suppose that $\xi'> \xi(\sigma)$, we then have \begin{eqnarray*} \xi(\sigma)< \xi' \leq \liminf_{n \to \infty} \xi(\tau_n) = \lim_{\tau \to \sigma^+} \xi(\tau) \, , \end{eqnarray*} which contradicts the right continuity of $\xi$. 
Thus we must have $\xi' \leq \xi(\sigma)$, which means that $\{ \xi' > \xi_0 \, \vert \, \tau(\xi') \leq \sigma \} \subset \{ \xi' > \xi_0 \, \vert \, \xi' \leq \xi(\sigma) \}$. \end{proof} A direct consequence of the above proposition is that $\Prob{\tau(\xi) \leq \sigma } = \Prob{\xi \leq \sigma-\eta(\sigma)}$. This result will feature prominently in establishing a renewal-type equation for the cumulative flux function $G$. \subsection{Quasi-renewal equation} Here, our goal is to adapt elementary results from renewal analysis to characterize the cumulative flux $G$ as the unique (smooth) solution of a renewal-type equation. For the elementary initial condition $q_0=\delta_x$, this renewal-type equation can be deduced from the representation of $G$ as a probabilistic series \begin{eqnarray}\label{eq:Ftx} G( \sigma) = \Exp{\sum_{k=1}^{\infty} \mathbbm{1}_{ \left\{ \xi_k < \sigma \right\} } \, \bigg \vert \, Y_0=x } = \sum_{k=1}^{\infty} \Prob{ \xi_k \leq \sigma \, \vert \, Y_0=x} \, . \end{eqnarray} Observe that by nonnegativity of the forward-delay function $\gamma$, each of the probabilities involved in the series is upper bounded by its counterpart in the compactly converging series representation without delay: $\Prob{ \xi_k \leq \sigma \, \vert \, Y_0=x}\leq H(\sigma, x+(k-1)\Lambda)$. This justifies the validity of the series representation \eqref{eq:Ftx}. Moreover, by divisibility of the first-passage distribution, one can check that \begin{eqnarray*} \sum_{k=1}^\infty H(\sigma, x+(k-1)\Lambda) = H(\sigma, x) + \int_0^\sigma H(\sigma-\tau, \Lambda) \, \dd \bigg[ \sum_{k=1}^\infty H(\tau, x+(k-1)\Lambda) \bigg] \, , \end{eqnarray*} yielding the classical renewal integral equation satisfied by $G$ in the absence of delays. We extend this result for nonzero delays in the following proposition: \begin{proposition}\label{prop:Renewal} Given a backward-delay function $\eta$ in $\mathcal{W}$, the cumulative flux function $G$ associated to the PDE problem \ref{def:qPDE} is the unique solution of the renewal-type equation: \begin{eqnarray}\label{eq:DuHamel} G(\sigma) = \int_0^\infty H(\sigma,x) q_0(x) \, \dd x + \int_0^\sigma H(\sigma-\tau,\Lambda) \, \dd G(\tau-\eta(\tau)) \, . \end{eqnarray} \end{proposition} \begin{proof} It is enough to show the result for the elementary initial condition $q_0=\delta_x$. Let us consider $\xi_{k+1}$, the $(k+1)$-th inactivation time of $Y_\sigma$, which is necessarily preceded by the $k$-th reset time: $\xi_{k+1} > \tau_k > 0$. Conditioning on $\tau_k$ for $k\geq 1$ yields \begin{eqnarray*} \Prob{ \xi_{k+1} \leq \sigma \, \vert \, Y_0=x} = \Exp{\Prob{\xi_{k+1} \leq \sigma \, \vert \, \tau_k } \, \vert \, Y_0=x} = \Exp{H(\sigma - \tau_k, \Lambda)\, \vert \, Y_0=x} \, , \end{eqnarray*} where the last equality uses the fact that $\{ \xi_{k+1}-\tau_k \}_{k \geq 1}$ are i.i.d. according to the distribution $H(\cdot, \Lambda)$. Thus, singling out the first term, the series representation of $G$ given in \eqref{eq:Ftx} reads \begin{eqnarray} G( \sigma ) &=& \Prob{ \xi_1 \leq \sigma \, \vert \, Y_0=x} + \sum_{k=2}^{\infty} \Exp{H(\sigma - \tau_{k-1}, \Lambda)\, \vert \, Y_0=x} \, , \nonumber\\ &=& H(\sigma,x)\label{eq:Gsx} + \int_0^\sigma H(\sigma - \tau, \Lambda) \,\sum_{k=1}^{\infty} \dd \Prob{\tau_k \leq \tau \, \vert \, Y_0=x} \, . \end{eqnarray} We conclude by expressing the series appearing as an integrator function above in terms of $G$.
Invoking Proposition \ref{prop:Peta}, we have the compact convergence \begin{eqnarray*} \sum_{k=1}^{\infty} \Prob{ \tau_k \leq \sigma \, \vert \, Y_0=x} = \sum_{k=1}^{\infty} \Prob{ \xi_k \leq \sigma -\eta(\sigma) \, \vert \, Y_0=x} = G(\sigma-\eta(\sigma)) \, , \end{eqnarray*} which leads to \eqref{eq:DuHamel} upon substitution in \eqref{eq:Gsx}. To show uniqueness, suppose $G_1$ and $G_2$ both solve \eqref{eq:DuHamel}. Then, $G_1$ and $G_2$ are necessarily smooth functions with derivatives $g_1$ and $g_2$ satisfying \begin{eqnarray*} g_1(\sigma)-g_2(\sigma) = \int_0^\sigma h(\sigma-\tau,\Lambda) \dd \big[ G_1(\xi(\tau)) -G_2(\xi(\tau)) \big] \, . \end{eqnarray*} Introducing the function $\tau(\xi)$ allows one to perform the generalized change of variable $\xi=\xi(\tau)$. For small enough $\sigma >0$, such a change of variable yields: \begin{eqnarray*} \vert g_1(\sigma)-g_2(\sigma) \vert &=& \bigg \vert \int_0^\sigma h(\sigma-\tau(\xi),\Lambda) \dd \big[ G_1(\xi) -G_2(\xi) \big] \bigg \vert \, , \\ &\leq& \int_0^\sigma h(\sigma-\tau(\xi),\Lambda) \big \vert g_1(\xi) -g_2(\xi) \big \vert \, \dd \xi \, , \\ &\leq& \left( \int_0^\sigma h(\sigma-\tau(\xi),\Lambda) \dd \xi \right) \sup_{0 \leq \xi \leq \sigma} \vert g_1(\xi)-g_2(\xi) \vert \, , \\ &\leq& \sigma \Vert h(\cdot,\Lambda) \Vert_\infty \sup_{0 \leq \xi \leq \sigma} \vert g_1(\xi)-g_2(\xi) \vert \, , \end{eqnarray*} where $\Vert h(\cdot,\Lambda) \Vert_\infty$ denotes the finite infinity norm of $\sigma \mapsto h(\sigma,\Lambda)$. This inequality establishes that $g_1=g_2$ on any interval $[0,\sigma)$ with $\sigma< 1/\Vert h(\cdot,\Lambda) \Vert_\infty$. This local uniqueness result transfers to $G_1$ and $G_2$ by virtue of $G_1(0)=G_2(0)=0$. Finally, global uniqueness can be recovered by standard methods of continuation. \end{proof} The above result makes it clear that as a solution to \eqref{eq:DuHamel}, the cumulative flux function $G$ inherits all the regularity properties of $\sigma \mapsto H( \sigma, \Lambda)$, i.e., $G$ is a smooth function for $\sigma>0$. Moreover, with $G$ specified as a solution to \eqref{eq:DuHamel}, the full solution of the inhomogeneous PDE \eqref{eq:qPDE} can be expressed in terms of the corresponding homogeneous solutions by Duhamel's principle. These homogeneous solutions are known in closed form~\cite{Karatzas}: \begin{eqnarray*}\kappa(\sigma,y,x) = \frac{e^{-\frac{(y - x + \sigma)^2}{2\sigma}}}{\sqrt{2 \pi \sigma}} \left(1-e^{ -\frac{2 x y}{\sigma}} \right) \, . \end{eqnarray*} Thus, Proposition \ref{prop:Renewal} admits the following corollary: \begin{corollary} Given a backward-delay function $\eta$ in $\mathcal{W}$ and normalized initial conditions $(q_0, g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times \mathcal{M}([\xi_0,0))$, there is a unique solution density function $(\sigma,x) \mapsto q(\sigma,x)=\dd \Prob{0<Y_\sigma \leq x} / \dd x$ to the PDE problem \ref{def:qPDE} which admits the integral representation \begin{eqnarray*}q(\sigma,y) = \int_0^\infty \kappa(\sigma,y,x) q_0(x) \, \dd x + \int_0^\sigma \kappa(\sigma-\tau,y,\Lambda) \, \dd G(\tau-\eta(\tau)) \, , \end{eqnarray*} where $G$ is the solution to the renewal-type equation \eqref{eq:DuHamel}. \end{corollary} The above renewal analysis has allowed us to justify the existence, uniqueness, and regularity of time-changed dynamics assuming the backward-delay function $\eta$ known. However, $\eta$ is actually an unknown of the problem for being ultimately defined in terms of the time change $\Phi$ via Definition \ref{def:Eta}.
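For illustration purposes, and assuming the backward-delay function $\eta$ known, the renewal-type equation \eqref{eq:DuHamel} also lends itself to direct numerical evaluation by forward marching in the time-changed variable, since the Stieltjes integrand only involves $G$ at the strictly earlier times $\tau-\eta(\tau)\leq \tau-\nu\epsilon$. The following minimal Python sketch, which is not part of the analysis, illustrates such a scheme under simplifying assumptions: a constant delay $\eta\equiv\epsilon$, a Dirac initial condition $q_0=\delta_{x_0}$, a vanishing initial flux $g_0=0$, and illustrative values for the grid step, the horizon, the reset site $\Lambda$, and $x_0$.
\begin{verbatim}
# Minimal numerical sketch (not the authors' code): forward-marching
# discretization of the renewal-type equation for G discussed above,
#   G(s) = int_0^inf H(s,x) q0(x) dx + int_0^s H(s-t, Lam) dG(t - eta(t)),
# under simplifying assumptions: constant backward delay eta = eps,
# Dirac initial condition q0 = delta_{x0}, vanishing initial flux g0 = 0.
# The grid step dt, horizon S, reset site Lam and x0 are illustrative.
import math

def H(s, x):
    """First-passage CDF to 0 of a unit-negative-drift Wiener process from x."""
    if s <= 0.0:
        return 0.0
    r = math.sqrt(2.0 * s)
    return 0.5 * (math.erfc((x - s) / r)
                  + math.exp(2.0 * x) * math.erfc((x + s) / r))

def solve_renewal(S=5.0, dt=1e-2, Lam=1.0, x0=0.5, eps=0.2):
    n = int(S / dt)
    sig = [i * dt for i in range(n + 1)]
    G = [0.0] * (n + 1)
    xi = lambda t: t - eps            # backward-time function for a constant delay

    def G_at(t):                      # linear interpolation of G at off-grid times
        if t <= 0.0:
            return 0.0                # no resets before time 0 (g0 = 0)
        j = min(int(t / dt), n - 1)
        w = (t - sig[j]) / dt
        return (1.0 - w) * G[j] + w * G[j + 1]

    for i in range(1, n + 1):
        s = sig[i]
        acc = H(s, x0)                # contribution of the initial condition
        for j in range(1, i + 1):     # Stieltjes sum over the reset increments
            dG = G_at(xi(sig[j])) - G_at(xi(sig[j - 1]))
            acc += H(s - 0.5 * (sig[j] + sig[j - 1]), Lam) * dG
        G[i] = acc
    return sig, G

sig, G = solve_renewal()
print("G(%.1f) ~ %.3f" % (sig[-1], G[-1]))
\end{verbatim}
Because the delay is bounded below by $\nu\epsilon>0$, each value $G(\sigma_i)$ in this sketch only depends on previously computed grid values, which is the discrete counterpart of the causal structure exploited in the uniqueness argument above.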
We devote the next section to exhibiting under which conditions $\eta[\Phi]$ parametrizes an admissible dPMF dynamics. \section{Fixed-point problem}\label{sec:fixedPoint} In this section, we establish our main result, Theorem \ref{th:fixedpoint_int}, by showing that dPMF dynamics are equivalent to certain constitutively well-posed time-changed dynamics. To do so, for any candidate time change $\Phi$ in $\mathcal{T}$, we consider the time-changed dynamics $Y_\sigma$ that is uniquely defined by the backward delay function $\eta[\Phi]$ in $\mathcal{W}$. Then, we look for possibly explosive, weak solutions to our original PDE problem \ref{def:delayedPDE} under the form $(t,x) \mapsto q[\Phi](\Phi(t),x)$, where $q[\Phi]$ is the density function of the process $Y_\sigma$. This leads to exhibiting a natural condition on $\Phi$ to parametrize such a weak solution, as stated in the following proposition: \begin{proposition}\label{prop:qPDE} Given a time change $\Phi$ in $\mathcal{T}$, let $q$ be the solution of the time-changed PDE problem \ref{def:qPDE} associated with $\eta[\Phi]$ in $\mathcal{W}$ on $\mathbbm{R}^+ \times \mathbbm{R}^+$. Then, $p(t,x)=q(\Phi(t),x)$, with $\Phi$ a valid time change in $\mathcal{T}$, weakly solves the PDE problem \ref{def:delayedPDE} on $\mathbbm{R}^+ \times \mathbbm{R}^+$ if and only if \begin{eqnarray}\label{eq:condEq} \forall \; t \geq 0 \, , \quad \Phi(t)=\nu t+\lambda G(\Phi(t)) \, . \end{eqnarray} In particular, the cumulative flux function of the dPMF dynamics is given by $F=G \circ \Phi$. \end{proposition} \begin{proof} We proceed in two steps: $(i)$ we characterize the time-changed dynamics associated with $\eta[\Phi]$ as a solution to a weak PDE problem, $(ii)$ we show that this weak formulation is equivalent to that of Definition \ref{def:weakPDE} if and only if the proposed criterion holds. $(i)$ As a solution to the PDE problem \ref{def:qPDE}, $q$ is also a weak solution in the usual sense: for all $S>0$ and for all test functions $v$ in $C_{0,\infty}([-\xi_0,S)\times \mathbbm{R})$, we have \begin{eqnarray*} \int_0^S \int_0^\infty \left[ \mathcal{L}[q](\sigma,x) + \frac{\dd}{\dd \sigma} [G(\sigma-\eta(\sigma))] \delta_{\Lambda} - \partial_\sigma q(\sigma,x) \right] v(\sigma,x) \, \dd \sigma \dd x = 0 \, . \end{eqnarray*} In turn, performing integration by parts in the distributional sense yields \begin{eqnarray*} \int_{0}^\infty \big[ q(S,x)v(S,x)-q(0,x)v(0,x) \big] \, \dd x &=& \\ &&\hspace{-140pt} \int_0^S\int_{0}^\infty q(\sigma,x) \big[\mathcal{L}^\dagger[v](\sigma,x)+ \partial_\sigma v(\sigma,x) \big] \, \dd x \, \dd \sigma \nonumber\\ && \hspace{-140pt}+ \int_0^S v(\sigma,\Lambda) \, \dd G(\sigma-\eta(\sigma)) - \int_0^S v(\sigma,0) \, \dd G(\sigma) \, . \end{eqnarray*} Our goal is to perform the change of variable $\sigma=\Phi(t)$ to recover the weak form of solutions from Definition \ref{def:weakPDE}. Given any nondecreasing functions $M$ and $N$ with $N$ right-continuous, for any bounded Borel measurable function $f$, the substitution formula reads \cite{Falkner:2012} \begin{eqnarray*} \int_a^b f(x)\, \dd (N\circ M)(x) = \int_{M(a)}^{M(b)} f(X(y))\, \dd N(y) \, , \end{eqnarray*} where $X$ is the left-continuous generalized inverse of $M$: $X(y)=\inf \{ x \in [a,b] \vert y \leq M(x) \}$. Moreover, if $N$ is continuous, $X$ can be any generalized inverse of $M$.
Thus, we have \begin{eqnarray*} \int_0^S v(\sigma,\Lambda) \, \dd G(\sigma-\eta(\sigma)) &=& \int_0^S v(\sigma,\Lambda) \, \dd (G \circ \Phi)(\Psi(\sigma)-\epsilon) \, , \\ &=& \int_0^S v(\Phi(t^-),\Lambda) \, \dd (G \circ \Phi)(t-\epsilon) \, . \end{eqnarray*} Therefore, defining $u(t,x)=v(\Phi(t),x)$ and $T=\Psi(S)$, we have \begin{eqnarray*} \int_{0}^\infty \big[ q(\Phi(T),x)u(T,x)-q(\Phi(0),x)u(0,x) \big] \, \dd x &=& \\ && \hspace{-240pt} \int_0^T \int_{0}^\infty q(\Phi(t^-),x) \big[\mathcal{L}^\dagger[u](t^-,x)+ \partial_\sigma v(\Phi(t^-),x) \big] \, \dd x \, \dd \Phi(t) \nonumber\\ && \hspace{-240pt}+ \int_0^T u(t^-,\Lambda) \, \dd (G \circ \Phi)(t-\epsilon) - \int_0^T u(t^-,0) \, \dd (G \circ \Phi)(t) \, . \end{eqnarray*} Given a smooth test function $v$, for all $x\geq 0$, the function $t \mapsto u(t,x)=v(\Phi(t),x)$ is a function with bounded variation. Accordingly, we can apply the generalized chain rule involving Vol'pert superposition principle \cite{Volpert:1967,DalMaso:1991} to obtain \begin{eqnarray*} \frac{\dd u}{\dd t}(\cdot ,x)= \frac{\dd}{\dd t}[v(\Phi(\cdot),x)] = \partial_\sigma\hat{v}(\Phi,x)(\cdot) \frac{\dd \Phi}{\dd t} \, , \end{eqnarray*} where $\partial_\sigma\hat{v}$ denotes the average superposition of $\partial_\sigma v$: \begin{eqnarray*} \partial_\sigma \hat{v}(\Phi,x)(t)=\int_0^1 \partial_\sigma v\big(\Phi(t^-)+z(\Phi(t)-\Phi(t^-))\big) \, \dd z \, . \end{eqnarray*} Now, we can always restrict our choice of test functions $v$ to these functions in $C_{0,\infty}([-\epsilon,\infty)\times \mathbbm{R})$ such that for all $t$ in the discontinuity set $\mathcal{D}_{\Phi}$, we have \begin{eqnarray*} v(\Phi(t^-))=v(\Phi(t)) \quad \mathrm{and} \quad \partial_\sigma v(\Phi(t^-))=\partial_\sigma v(\Phi(t)) = \partial_\sigma \hat{v}(\Phi,x)(t) \, , \end{eqnarray*} so that $t \mapsto u(x,t)$ is continuously differentiable. This means in particular that for all test functions $u$ in $C_{0,\infty}([-\epsilon,S)\times \mathbbm{R})$, we have \begin{eqnarray}\label{eq:uWeak} \int_{0}^\infty \big[ q(\Phi(T),x)u(T,x)-q(\Phi(0),x)u(0,x) \big] \, \dd x &=& \\ && \hspace{-240pt} \int_0^T \int_{0}^\infty q(\Phi(t^-),x) \mathcal{L}^\dagger[u](t,x) \, \dd \Phi(t) + \int_0^T \int_{0}^\infty q(\Phi(t^-),x) \partial_t u(t,x) \, \dd x \, \dd t \nonumber\\ && \hspace{-240pt}+ \int_0^T u(t,\Lambda) \, \dd (G \circ \Phi)(t-\epsilon) - \int_0^T u(t,0)\, \dd (G \circ \Phi)(t) \nonumber\, . \end{eqnarray} $(ii)$ Let us now prove the proposition. If $\Phi(t)=\nu t+\lambda G(\Phi(t))$, we directly see that substituting $ \dd \Phi(t)=\nu \, \dd t + \lambda \, \dd (G\circ \Phi)(t)$ in the above equation shows that $(t,x) \mapsto p(t,x)=q(\Phi(t),x)$ is a weak solution of PDE problem \ref{def:delayedPDE} for $F=G \circ \Phi$. Reciprocally, let $(t,x) \mapsto p(t,x)=q(\Phi(t),x) $ be a weak solution of the PDE problem \ref{def:delayedPDE}. Specifying the defining property of $p$ as a weak solution for $F$ with $u(x,t)=1$ yields \begin{eqnarray*} \int_0^T \int_{0}^\infty \partial_t p(t,x) u(t)\, \dd x \, \dd t = \int_0^T u(t) \, \dd F(t-\epsilon) - \int_0^T u(t) dF(t) \, , \end{eqnarray*} whereas specifying \eqref{eq:uWeak} for $u(x,t)=1$ yields an alternative expression for the same quantity \begin{eqnarray*} \int_0^T \int_{0}^\infty \partial_t q(\Phi(t),x) u(t)\, \dd x \, \dd t = \int_0^T u(t) \, \dd (G \circ \Phi)(t-\epsilon) - \int_0^T u(t) \, \dd (G \circ \Phi)(t) \, . 
\end{eqnarray*} Moreover, for $\Psi$ being the right-continuous generalized inverse of the strictly increasing function $\Phi$, we have $\Psi \circ \Phi=\mathrm{id}$ on $\mathbbm{R}^+$ and the initial condition $G=F\circ \Psi$ on $(-\eta(\epsilon),0]$ implies that $G \circ \Phi=F$ on $[-\epsilon,0)$. Then, by the same reasoning as in Proposition \ref{prop:uniqueF}, $F(t)=(G \circ \Phi)(t)$ for all $t \geq 0$. In turn, subtracting \eqref{eq:uWeak} with $q(\Phi(t),x)$ identified to $p(t,x)$ from equation \eqref{eq:weakPDE} with $F$ identified to $G \circ \Phi$ yields \begin{eqnarray*} \int_0^T \int_{0}^\infty p(t^-,x) \mathcal{L}^\dagger[u](t,x)\, \dd x \, \dd \Phi(t) = \int_0^T \int_{0}^\infty p(t^-,x) \mathcal{L}^\dagger[u](t,x)\, \dd x \, \dd \big[\nu t +\lambda F(t)\big] \, . \end{eqnarray*} It is clear that the above equality holds for all smooth functions such that $\mathcal{L}^\dagger[u]$ remains bounded on $[0,T)\times (0,\infty)$. This observation allows us to specify the above equation for test functions of the form $u(t,x)=x w(t)$ with $w$ in $C_{0,\infty}([0,T))$, so that we obtain \begin{eqnarray*} \int_0^T(1-r(t)) w(t) \, \dd \big[\nu t +\lambda F(t)-\Phi(t) \big] = 0 \, , \end{eqnarray*} where $r(t)$ is the probability of being in the refractory state at $t$. Since, for all locally bounded backward-delay functions $\eta$, the probability $r(t)$ remains bounded away from one, and both $\Phi$ and $F$ are \emph{c\`adl\`ag} functions, this implies that \begin{eqnarray*} \forall t \geq 0 \, , \quad \Phi(t) = \nu t +\lambda F(t) = \nu t+\lambda G(\Phi(t)) \, , \end{eqnarray*} which concludes the proof. \end{proof} Proposition \ref{prop:qPDE} shows that the existence and uniqueness of dPMF dynamics amount to the existence and uniqueness of a time change $\Phi$ in $\mathcal{T}$ solving \eqref{eq:condEq}. In this light, \eqref{eq:condEq} appears as a self-consistent relation rather than a mere definition as in \eqref{eq:Phi}. In fact, \eqref{eq:condEq} defines a fixed-point problem satisfied by those time changes $\Phi$ that parametrize dPMF dynamics. The fixed-point nature of \eqref{eq:condEq} follows from the fact that the cumulative flux function $G$ involved in \eqref{eq:condEq} is functionally dependent on $\Phi$ via $\eta$. To account for blowups, solutions $\Phi$ to \eqref{eq:condEq} are allowed to be discontinuous, which leads to possible degeneracy issues. Indeed, as the cumulative flux $G$ must be smooth, $\Phi$ can only become discontinuous if \eqref{eq:condEq} is degenerate in the sense that $\sigma = \nu t + \lambda G(\sigma)$ admits multiple solutions $\sigma$ for some times $t$. These times at which \eqref{eq:condEq} becomes degenerate will actually mark the occurrence of blowups in the original dPMF dynamics. To avoid dealing with such degeneracies, it is actually desirable to reformulate the fixed-point problem in terms of the inverse time change. This is because, by contrast with the possibly discontinuous increasing function $\Phi$, the inverse time change $\Psi=\Phi^{-1}$ is defined as a continuous nondecreasing function. Such a formulation yields our main result stated in Theorem \ref{th:fixedpoint_int}, which directly follows from the following proposition: \begin{proposition}\label{def:fixedPoint} The inverse time-change $\Psi$ parametrizes a dPMF dynamics if and only if it solves the fixed-point problem \begin{eqnarray}\label{eq:fixedPoint} \forall \; \sigma \geq 0 \, , \quad \Psi(\sigma) = \sup_{0 \leq \xi \leq \sigma} \big( \xi - \lambda G[\Psi](\xi)\big) / \nu \, .
\end{eqnarray} \end{proposition} \begin{proof} Consider $\Phi$ in $\mathcal{T}$ solving \eqref{eq:condEq}. Then, for all $\sigma \geq 0$, writing $\sigma=\Phi(t)$ and $t=\Psi(\sigma)$ in \eqref{eq:condEq}, we have \begin{eqnarray*} \Psi(\sigma) = \big( \sigma - \lambda G[\Psi](\sigma)\big) / \nu = \sup_{0 \leq \xi \leq \sigma} \big( \xi - \lambda G[\Psi](\xi)\big) / \nu \, , \end{eqnarray*} where the last equality follows from $\Psi=\Phi^{-1}$ being \emph{c\`adl\`ag} increasing by definition of $\mathcal{T}$. Reciprocally, consider $\Psi$ solving the fixed-point problem \eqref{eq:fixedPoint}. Then, $\Psi$ is necessarily a nonnegative, \emph{c\`adl\`ag}, nondecreasing function for being defined as a running maximum function with $\Psi(0)=0$. Moreover, the function $\sigma \mapsto \eta(\sigma)= \sigma - \Psi^{-1}(\Psi(\sigma)-\epsilon)$ is a nonnegative, \emph{c\`adl\`ag} function so that $G$ is a well-defined cumulative function satisfying the renewal-type equation \eqref{eq:DuHamel}. In particular $G$ is a nondecreasing smooth function so that $w_\Psi$ satisfies $w_\Psi \leq 1/\nu$. This shows that $\Phi=\Psi^{-1}$ belongs to $\mathcal{T}$. Finally, we conclude by observing that \begin{eqnarray*} \Phi(t) &=& \inf \Big \{ \sigma \geq 0 \, \Big \vert \, \Psi(\sigma) > t \Big \} \, , \\ &=& \inf \Big \{ \sigma \geq 0 \, \Big \vert \,\sup_{0 \leq \xi \leq \sigma} \big( \xi - \lambda G[\Psi](\xi)\big) / \nu > t \Big \} \, , \\ &=& \inf \Big \{ \sigma \geq 0 \, \Big \vert \, \big( \sigma - \lambda G[\Psi](\sigma)\big) / \nu > t \Big \} \, . \end{eqnarray*} Thus, by continuity of $G$, we have \begin{eqnarray*} \Phi(t) = \inf \Big \{ \sigma \geq 0 \, \Big \vert \, \big( \sigma - \lambda G[\Psi](\sigma)\big) / \nu = t \Big \} \, , \end{eqnarray*} so that when $\Phi(t)$ exists, it necessarily satisfies \eqref{eq:condEq}. \end{proof} The above proposition proves our main result stated in the introduction as Theorem \ref{th:fixedpoint_int}. The next section establishes its practical usefulness by showing that the corresponding fixed-point problem admits solutions parametrizing explosive dPMF dynamics. \section{Local blowup solutions}\label{sec:local} In this section, we establish that for large enough interaction parameters, the fixed-point problem \ref{def:fixedPoint} locally admits solutions with analytically well-defined blowups. To show this, we first define a contracting, regularized fixed-point map $\mathcal{F}_\delta$ over an appropriately chosen Banach space of candidate functions. We then show that $\Psi = \lim_{\delta \to 0^+} \Psi_\delta$, where $\Psi_\delta$ uniquely solves the fixed-point equation $\Psi=\mathcal{F}_\delta[\Psi]$, locally defines the unique maximal smooth inverse time change up to the first putative blowup. Finally, we analytically resolve the ensuing blowup episode by interpreting blowup in the time-changed picture as linear dynamics with absorption but without reset. \subsection{Regularized fixed-point problem} The direct resolution of the dPMF fixed-point problem \ref{def:fixedPoint} is not possible by standard analysis when allowing for discontinuous time changes. To remedy this point, we consider a set of regularized fixed-point problems which approximate the original one, but for which delay functions will be continuous.
These approximate problems are defined on the following restricted space of candidate solutions: \begin{definition} Given a real $\delta$ such that $0<\delta<1/\nu$ and for all $\xi>0$, we restrict the candidate space for inverse time changes to \begin{eqnarray*} \mathcal{C}_\delta([\xi_0,\xi])) = \left\{ \Psi \in C_0([\xi_0, \xi]) \, \Bigg \vert \, \begin{array}{ccc} \Psi(\xi)=\Psi_0(\xi) & , & \xi_0 \leq \xi \leq 0 \vspace{5pt} \\ \delta \leq w_\Psi(y,x) \leq 1/\nu & , & 0 \leq x , y \leq \xi \end{array} \right\} \, , \end{eqnarray*} where $w_\Psi$ is the difference quotient $w_\Psi(y,x) = (\Psi(y) - \Psi(x))/(y-x)$. \end{definition} For all $\xi> 0$, the candidate space $\mathcal{C}_\delta([\xi_0,\xi]))$ is a Banach space with respect to the uniform norm, denoted by $\Vert \cdot \Vert_{\xi_0,\xi}$. Every candidate functions $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$ can naturally serve as an inverse time change, i.e., $\Psi^{-1}$ belongs to $\mathcal{T}$ . Moreover, choosing $\delta>0$ enforces that every $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$ is a strictly increasing, continuous function on $\mathbbm{R}^+$, so that $\Psi^{-1}$ is also a strictly increasing, continuous function on $\mathbbm{R}^+$ with difference quotient satisfying $\nu \leq w_{\Psi^{-1}}(y,x) \leq 1/\delta$. As a result, the forward function $\tau[\Psi]$ is continuous in $C([\xi_0,\infty))$ and so is the backward function $\xi[\Psi]$ in $C([0,\infty))$, being defined as: \begin{eqnarray*} \tau[\Psi](\xi) = \Psi^{-1}\big(\Psi(\xi)+\epsilon \big) \quad \mathrm{and} \quad \xi[\Psi](\sigma) = \Psi^{-1}\big(\Psi(\sigma)-\epsilon \big) \, . \end{eqnarray*} In the absence of discontinuities, we have the equivalence $\tau=\tau[\Psi](\xi) \Leftrightarrow \xi[\Psi](\tau)=\xi$, which implies that $\gamma[\Psi](\xi)=\eta[\Psi](\tau)$. The key reason to introduce the space $\mathcal{C}_\delta([\xi_0,\xi]))$ is the following uniform Lipschitz property: \begin{proposition}\label{prop:Lipsc} For all $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$, the map $\Psi \mapsto \xi[\Psi]$ are $1/\delta$-Lipschitz with respect to the uniform norm on $\mathcal{C}_\delta([\xi_0,\infty))$. \end{proposition} \begin{proof} Every function $\Psi$ in $\mathcal{C}_\delta([\xi_0,\xi]))$ admits a continuous inverse function $\Psi^{-1}$ on $\mathbbm{R}^+$ with bounded difference quotient such that $\nu \leq w_{\Psi^{-1}}\leq 1/\delta$. Therefore, for all $\sigma>0$: \begin{eqnarray*} \vert \xi[\Psi_a](\sigma)-\xi[\Psi_b](\sigma) \vert &=& \vert \Psi^{-1}_a(\Psi_a(\sigma) - \epsilon) - \Psi^{-1}_b(\Psi_b(\sigma) - \epsilon) \vert \, \\ & \leq & \vert \Psi_a(\sigma)-\Psi_b(\sigma) \vert /\delta \, . \end{eqnarray*} \end{proof} Our goal is to formulate the dPMF fixed-point problem \ref{def:fixedPoint} on the candidate Banach space $\mathcal{C}_\delta([\xi_0,\infty))$. However, it turns out that the natural fixed-point map \begin{eqnarray*} \Psi \mapsto \Big\{ \sigma \mapsto \sup_{\xi_0 \leq \xi \leq \sigma}\big(\xi-\lambda G[\Psi](\xi)\big) / \nu \Big\} \, . \end{eqnarray*} does not stabilize $\mathcal{C}_\delta([\xi_0,\infty))$ in the sense that it can produce functions with difference quotient below $\delta$. 
To define a fixed-point map that stabilizes $\mathcal{C}_\delta([\xi_0,\infty))$, we need to introduce the function $\mathcal{S}_\delta: \mathcal{C}_{-\infty}([\xi_0,\infty)) \rightarrow \mathcal{C}_\delta([\xi_0,\infty))$ defined by \begin{eqnarray*} \mathcal{S}_\delta[\psi](\sigma) = \sup_{0 \leq s \leq \sigma} \left\{ \psi(s)-\delta s \right\} + \delta \sigma \, . \end{eqnarray*} The stabilizing role of $\mathcal{S}_\delta$ follows from noticing that for all $0 \leq x \leq y$, we have \begin{eqnarray*} \mathcal{S}_\delta[\psi](y)-\mathcal{S}_\delta[\psi](x) = \! \sup_{0 \leq s \leq y} \left\{ \psi(s)-\delta s \right\} - \! \sup_{0 \leq s \leq x} \left\{ \psi(s)-\delta s \right\} + \delta (y-x) \geq \delta (y-x) \, . \end{eqnarray*} This shows that $w_{\mathcal{S}_\delta[\psi]} \geq \delta$, so that $\mathcal{S}_\delta$ stabilizes $\mathcal{C}_\delta([\xi_0,\infty))$ from below. In turn, we can show that composing the natural fixed-point map of Definition \ref{def:fixedPoint} with $\mathcal{S}_\delta$ defines a well-posed map on $\mathcal{C}_\delta([\xi_0,\infty))$. With these conventions, the regularized fixed-point map is specified as follows. \begin{proposition}\label{def:fixedPoint2} Given $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$ and initial conditions $(q_0,g_0)$, setting \begin{eqnarray*}\mathcal{F}_\delta [\Psi] &=& \mathcal{S}_\delta [\psi] \quad \mathrm{with} \quad \psi (\sigma)=\left\{ \begin{array}{ccc} \displaystyle \frac{1}{\nu}\big( \sigma - \lambda G[\Psi](\sigma) \big) &\: \mathrm{if} \: & \sigma \geq 0 \, , \\ \Psi_0(\sigma) & \: \mathrm{if} \: & \xi_0 \leq \sigma < 0 \, . \end{array} \right. \end{eqnarray*} defines a map from $\mathcal{C}_\delta([\xi_0,\infty))$ to $\mathcal{C}_\delta([\xi_0,\infty))$ such that $\psi$ is a smooth function on $(0,\infty)$. \end{proposition} \begin{proof} Let us check that for all $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$, $\mathcal{F}_\delta [\Psi]$ is also in $\mathcal{C}_\delta([\xi_0,\infty))$. Given $\Psi$ in $\mathcal{C}_\delta([\xi_0,\infty))$, $\gamma[\Psi]$ is a positive, continuous, bounded function, which represents a forward-delay function compatible with the initial condition $ g_0$ on $[\xi_0,0)$. In turn, we can interpret $G[\Psi]$ as the associated cumulative flux, which is a smooth function on $(0,\infty)$ for satisfying the renewal-type equation \eqref{eq:DuHamel}. In particular, $G[\Psi]$ is continuously differentiable on $C((0,\infty))$, with positive derivative denoted $g[\Psi]$. Then, the definition of $\psi$ in terms of $G[\Psi]$ implies that $\psi$ belongs to $\mathcal{C}_{-\infty}([\xi_0,\infty))$, so that $\Psi=\mathcal{S}_\delta[\Psi]$ belongs to $\mathcal{C}_{\delta}([\xi_0,\infty))$. \end{proof} \subsection{Contraction argument} We establish in the following proposition that the mapping $\mathcal{F}_\delta$ from Proposition \ref{def:fixedPoint2} is a contraction on the Banach spaces $\mathcal{C}_{\delta}([\xi_0,\sigma))$ for small enough $\sigma>0$. The proof will rely on the fact that for $\sigma>0$ smaller than the time-wrapped refractory periods, the mapping $\mathcal{F}_\delta$ loses its renewal character. Then, the Lipschitz continuity of the mapping $\Psi \mapsto \xi[\Psi]$ on $\mathcal{C}_\delta([\xi_0,\sigma])$ and the vanishing behavior of the first-passage kernel $H$ for small time will directly yield the result. 
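Before stating the contraction property, let us illustrate the stabilizing action of $\mathcal{S}_\delta$ on a discretized candidate function. The following minimal Python sketch, which is not part of the analysis and uses an illustrative grid and parameters, implements $\mathcal{S}_\delta$ as a running maximum and checks that the output has difference quotient at least $\delta$.
\begin{verbatim}
# Minimal sketch (illustrative grid and parameters, not the authors' code):
# the operator S_delta[psi](s) = sup_{0<=u<=s} {psi(u) - delta*u} + delta*s,
# implemented as a running maximum.  Its output has difference quotient
# at least delta, which is the stabilization property used in the text.
def S_delta(psi_vals, grid, delta):
    out, running_max = [], float("-inf")
    for s, p in zip(grid, psi_vals):
        running_max = max(running_max, p - delta * s)  # running sup of psi - delta*id
        out.append(running_max + delta * s)
    return out

# Usage: a candidate that decreases on part of the grid is lifted so that
# its increments are at least delta times the grid step.
grid = [0.1 * k for k in range(50)]
psi = [min(s, 2.0 - s) for s in grid]                  # increases, then decreases
reg = S_delta(psi, grid, delta=0.05)
assert all(b - a >= 0.05 * 0.1 - 1e-12 for a, b in zip(reg, reg[1:]))
\end{verbatim}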
\begin{proposition}\label{prop:fixedPoint} For small enough $\sigma>0$, the map $\mathcal{F}_\delta$ is a contraction on $\mathcal{C}_{\delta}([\xi_0,\sigma))$ for the uniform norm denoted by $\Vert \cdot \Vert_{0,\sigma}$. \end{proposition} \begin{proof} We proceed in two steps: $(i)$ we justify that $\mathcal{F}_\delta$ induces a mapping $\mathcal{C}_\delta([\xi_0,\sigma)) \rightarrow \mathcal{C}_\delta([\xi_0,\sigma))$ that loses its renewal character for small enough $\sigma>0$; $(ii)$ we show that for small enough $\sigma>0$, $\mathcal{F}_\delta$ is a contraction for the uniform norm $\Vert \cdot \Vert_{0,\sigma}$, i.e., for all $\Psi_a,\Psi_b$ in $\mathcal{C}_{\delta}([\xi_0,\sigma))$, $\big \Vert \mathcal{F}_\delta[\Psi_a] - \mathcal{F}_\delta[\Psi_b] \big \Vert_{0,\sigma} \leq K \Vert \Psi_a - \Psi_b\Vert_{0,\sigma}$ with $K<1$. $(i)$ The fact that $\mathcal{F}_\delta[\Psi]$ belongs to $\mathcal{C}_{\delta}([\xi_0,\infty))$ follows from Proposition \ref{def:fixedPoint2}. Let us show that $\mathcal{F}_\delta[\Psi]$ loses its renewal character when considered on $[0,\Psi^{-1}(\epsilon))$. As $\delta \leq w_\Psi \leq 1/\nu$ and $\Psi(0) =0$, we must have $ \nu \epsilon \leq \Psi^{-1}(\epsilon) \leq \epsilon/\delta$. Moreover, $\Psi^{-1}(\epsilon)$ is also defined as the time-changed refractory period after zero: \begin{eqnarray*} \Psi^{-1}(\epsilon)= \Psi^{-1} \left(\Psi(0)+\epsilon \right) = \gamma[\Psi](0) \, . \end{eqnarray*} This implies that on $[0,\Psi^{-1}(\epsilon))$, no more than one first-hitting time may occur in the inhomogeneous renewal processes determined by $\gamma[\Psi]$. Correspondingly, over the time interval $[0, \Psi^{-1}(\epsilon)] \supset [0,\nu \epsilon]$, the inhomogeneous renewal-type equation \eqref{eq:DuHamel} loses its renewal character to read \begin{eqnarray}\label{eq:nhomNotRenew} G[\Psi](\sigma) &=& \int_{0}^\infty H(\sigma,x) q_0(x) \, \dd x + \int_0^\sigma H(\sigma-\tau,\Lambda) \, \dd G_0(\xi[\Psi](\tau)) \, . \end{eqnarray} As the initial conditions prescribe $\Psi$ to coincide with $\Psi_0$ on $[\xi_0,0]$, this means that for all $0 \leq \tau \leq \nu \epsilon$, we have \begin{eqnarray*} \xi[\Psi](\tau) = \Phi_0 \left( \Psi(\tau) - \epsilon \right) \, , \quad 0 \leq \tau \leq \nu \epsilon \, , \end{eqnarray*} so that remembering that $F_0 = G_0 \circ \Phi_0$ allows one to write \eqref{eq:nhomNotRenew} as \begin{eqnarray*}G[\Psi](\sigma) &=& \int_{0}^\infty H(\sigma,x) q_0(x) \, \dd x + \int_0^\sigma H(\sigma-\tau,\Lambda) \, \dd F_0(\Psi(\tau)-\epsilon) \, . \end{eqnarray*} This shows that specifying $\mathcal{F}_\delta [\Psi]$ on $[0, \sigma)$ only requires knowledge of $\Psi$ on $[0,\sigma)$ for $\sigma \leq \nu \epsilon$. Therefore, $\mathcal{F}_\delta$ is a mapping $\mathcal{C}_{\delta}([\xi_0,\sigma)) \rightarrow \mathcal{C}_{\delta}([\xi_0,\sigma))$ for all $0 < \sigma \leq \nu \epsilon$. $(ii)$ Let us now consider two functions $\Psi_a$ and $\Psi_b$ in $\mathcal{C}_\delta([\xi_0,\infty))$. For ease of notation, indexing by $a$ and $b$ will indicate, throughout the proof, quantities associated with $\Psi_a$ and $\Psi_b$, respectively. For instance, we write $G_a=G[\Psi_a]$ and $G_b=G[\Psi_b]$. By $(i)$, both cumulative functions $G_{a}$ and $G_{b}$ satisfy a nonrenewal equation \eqref{eq:nhomNotRenew} with identical regular initial conditions on the interval $ [0, \nu \epsilon ]$.
As a result, the integral terms arising from $q_0$ in \eqref{eq:nhomNotRenew} are identical for both $G_{a}$ and $G_{b}$, which allows one to write $\Delta G(\sigma) = G_{a}(\sigma)-G_{b}(\sigma)$ for all $0 \leq \sigma \leq \nu \epsilon$ as \begin{eqnarray*} \Delta G(\sigma) &=& \int_{0}^{\sigma } H(\sigma-\tau,\Lambda) \, \dd \big[ F_0 (\Psi_a(\tau) -\epsilon)- F_0 (\Psi_b(\tau) -\epsilon) \big] \, , \end{eqnarray*} where $F_0=G_0 \circ \Phi_0$ denotes the initial conditions for the cumulative flux of the original process $X_t$ in $\mathcal{M}([-\epsilon,0))$. With no loss of generality, let us assume that $t_a=\Psi_a(\sigma) \leq \Psi_b(\sigma)=t_b$. Then, performing the change of variables $t=\Psi_a(\tau)$ and $t=\Psi_b(\tau)$, we have \begin{eqnarray} \int_0^\sigma H(\sigma-\tau, \Lambda) \!\!\!\!\!\! && \dd \! \left[ F_0\big(\Psi_a(\tau)-\epsilon\big)-F_0\big(\Psi_b(\tau)-\epsilon \big) \right] \nonumber\\ &=& \int_0^{t_a} H(\sigma-\Phi_a(t), \Lambda) \, \dd F_0\big(t-\epsilon\big) - \int_0^{t_b} H(\sigma-\Phi_b(t), \Lambda) \, \dd F_0\big(t-\epsilon\big) \, , \nonumber \\ &=& \int_0^{t_a} \big[ H(\sigma-\Phi_a(t), \Lambda) -H(\sigma-\Phi_b(t), \Lambda) \big] \, \dd F_0\big(t-\epsilon\big) \, \nonumber \\ && - \int_{t_a}^{t_b} H(\sigma-\Phi_b(t), \Lambda) \, \dd F_0\big(t-\epsilon\big) \, . \label{eq:ineqBanach} \end{eqnarray} In turn, performing the change of variables $\tau=\Phi_a(t)$ in the first integral term of the equation above, denoted by $I_1$, yields \begin{eqnarray*} I_1(\sigma) & \leq & \int_0^{t_a} \big \vert H(\sigma-\Phi_a(t), \Lambda) - \, H(\sigma-\Phi_b(t), \Lambda) \big \vert \, \dd F_0\big(t-\epsilon\big) \, , \\ &=& \int_0^{\sigma} \big \vert H(\sigma-\tau, \Lambda) -H(\sigma-\Phi_b(\Psi_a(\tau)), \Lambda) \big \vert \, \dd F_0\big(\Psi_a(\tau)-\epsilon\big) \, , \\ &\leq& \Vert h( \cdot , \Lambda) \Vert_{0,\sigma} \int_0^{\sigma} \vert \tau - \Phi_b(\Psi_a(\tau)) \vert \, \dd F_0\big(\Psi_a(\tau)-\epsilon\big) \, , \end{eqnarray*} where the last inequality follows from the fact that $H( \cdot, \Lambda)$ is $\Vert h( \cdot , \Lambda) \Vert_{0,\sigma}$-Lipschitz on $[0,\sigma]$. Then, utilizing that $\Phi_b$ is necessarily $1/\delta$-Lipschitz for $\Psi_b$ in $\mathcal{C}_\delta([\xi_0,\infty))$, we have \begin{eqnarray*} I_1(\sigma) &\leq& \Vert h( \cdot , \Lambda) \Vert_{0,\sigma} \int_0^{\sigma} \vert \Phi_b(\Psi_b(\tau)) - \Phi_b(\Psi_a(\tau)) \vert \, \dd F_0\big(\Psi_a(\tau)-\epsilon\big) \, , \\ &\leq& \frac{ \Vert h( \cdot , \Lambda) \Vert_{0,\sigma}}{ \delta} \int_0^{\sigma} \vert\Psi_b(\tau) - \Psi_a(\tau) \vert \, \dd F_0\big(\Psi_a(\tau)-\epsilon\big) \, , \\ &\leq& \frac{ \Vert h( \cdot , \Lambda) \Vert_{0,\sigma}}{ \delta} \left[ F_0(t_a-\epsilon) - F_0(-\epsilon) \right] \Vert \Psi_a -\Psi_b \Vert_{0,\sigma} \end{eqnarray*} where the last inequality follows from the fact that $F_0$ is an increasing function.
The last integral term in \eqref{eq:ineqBanach}, denoted by $I_2$, can be bounded via a similar argument as \begin{eqnarray*} I_2(\sigma) &=& \int_{t_a}^{t_b} H(\sigma-\Phi_b(t), \Lambda) \, \dd F_0\big(t-\epsilon\big) \nonumber\\ &\leq& H(\sigma-\Phi_b(t_a), \Lambda) \, \left[ F_0(t_b-\epsilon) - F_0(t_a-\epsilon) \right] \, , \\ &\leq& \Vert h( \cdot , \Lambda) \Vert_{0,\sigma} \vert \Phi_b(\Psi_b(\sigma)) - \Phi_b(\Psi_a(\sigma)) \vert \, \left[ F_0(t_b-\epsilon) - F_0(t_a-\epsilon) \right] \, , \\ &\leq& \frac{ \Vert h( \cdot , \Lambda) \Vert_{0,\sigma}}{\delta} \left[ F_0(t_b-\epsilon) - F_0(t_a-\epsilon) \right] \, \Vert \Psi_a -\Psi_b \Vert_{0,\sigma} \, , \end{eqnarray*} Since by conservation of probability $F_0(t_b-\epsilon) - F_0(-\epsilon) \leq 1$, we have for all $0 \leq \sigma \leq \nu \epsilon$ \begin{eqnarray*} \vert \Delta G(\sigma) \vert = \big \vert I_1(\sigma) - I_2(\sigma) \big \vert \leq \frac{ \Vert h( \cdot , \Lambda) \Vert_{0,\sigma}}{ \delta} \Vert \Psi_a -\Psi_b \Vert_{0,\sigma} \, . \end{eqnarray*} Moreover, for all real valued functions $\psi_a$ and $\psi_b$ over $\mathbbm{R}^+$, we have \begin{eqnarray*} \big \vert \mathcal{S}_\delta[\psi_a](\sigma) - \mathcal{S}_\delta[\psi_b](\sigma) \big \vert &=& \Big \vert \sup_{0 \leq s \leq \sigma} \{ \psi_a(s) - \delta s \} - \sup_{0 \leq s \leq \sigma} \{ \psi_b(s) - \delta s \} \Big \vert \, , \\ &\leq& \Big \vert \sup_{0 \leq s \leq \sigma} \{ \psi_a(s) - \psi_b(s) \} \Big \vert \, , \\ &\leq& \sup_{0 \leq s \leq \sigma} \big \vert \psi_a(s) - \psi_b(s) \big \vert = \Vert \psi_a - \psi_b \Vert_{0,\sigma}\, , \end{eqnarray*} so that we have the inequality \begin{eqnarray*} \big \vert \mathcal{F}_\delta[\Psi_b](\sigma)-\mathcal{F}_\delta[\Psi_a](\sigma) \big \vert &=& \big \vert \mathcal{S}_\delta[(\mathrm{id}-\lambda G_b)/\nu](\sigma)-\mathcal{S}_\delta[(\mathrm{id}-\lambda G_a)/\nu](\sigma) \big \vert \, , \\ & \leq & \frac{\lambda}{\nu} \big \Vert G_b- G_a \big \Vert_{0,\sigma} \, , \\ &\leq & \frac{ \lambda \Vert h( \cdot , \Lambda) \Vert_{0,\sigma}}{ \nu \delta} \Vert \Psi_a- \Psi_b \Vert_{0,\sigma} \, . \end{eqnarray*} We conclude by noticing that $\lim_{\sigma \to 0} \Vert h(\cdot, \Lambda) \Vert_{0,\sigma} = 0$, so that for small enough $\sigma>0$, $K= \lambda \, \Vert h(\cdot, \Lambda) \Vert_{0,\sigma}/ (\nu \delta)<1$, which shows that the map $\mathcal{F}_\delta$ is a contraction on the space $\mathcal{C}_\delta([\xi_0,\sigma))$ for the uniform norm $\Vert \cdot \Vert_{0,\sigma}$. \end{proof} By the Banach fixed-point theorem, Proposition \ref{prop:fixedPoint} implies the existence of a unique local solution to the regularized fixed-point problem of Proposition \ref{def:fixedPoint2}. The following corollary shows that for all $\delta>0$, such a local solution can be maximally extended to the whole real half-line $\mathbbm{R}^+$. \begin{corollary}\label{cor:globSol} For all finite refractory periods $\epsilon>0$ and for all parameters $0<\delta<1/\nu$, there is a unique global solution $\Psi_\delta$ in $\mathcal{C}_\delta([\xi_0,\infty))$ to the fixed-point problem $\Psi=\mathcal{F}_\delta[\Psi]$. \end{corollary} \begin{proof} For fixed $\delta>0$, a local solution can be continued unconditionally as $\mathcal{F}_\delta$ is locally contracting irrespective of the initial conditions.
More specifically, assuming the solution is defined up to $\tau>0$, the continuation process past $\tau$ consists in applying Proposition \ref{prop:fixedPoint} with $\{ g(\sigma) \}_{\xi(\tau) \leq \sigma \leq \tau}$ serving as initial flux condition, and with the spatial part of the initial condition naturally given by \begin{eqnarray*} q(\tau,y) = \int_0^\infty \kappa(\tau,y,x) \, q_0(x) \, \dd x + \int_0^\tau \kappa(\tau-\sigma,y,\Lambda) \, \dd G(\xi(\sigma)) \, . \end{eqnarray*} \end{proof} We are now in a position to exhibit a local solution to the dPMF fixed-point problem \ref{def:fixedPoint} by considering the function $\Psi=\lim_{\delta \to 0^+} \Psi_\delta$. For this solution to be uniquely defined on a nonzero interval $[0,S_1)$, we require that the initial conditions are such that instantaneous blowups are excluded. Specifically, we assume that the density $q_0$ is locally smooth near zero with $q_0(0)=0$ and $\partial_x q_0(0) /2<1/\lambda$. This amounts to imposing that all solutions $\Psi_\delta$ are such that ${\Psi'_0}(0^+) = \big(1-\lambda \partial_x q_0(0) /2\big)/\nu$ is bounded away from zero, so that for all $0<\delta_1<\delta_2<\Psi'_0(0^+)$, $\Psi_{\delta_1}(\sigma)=\Psi_{\delta_2}(\sigma)$ for small enough $\sigma>0$. In turn, such a local solution $\Psi$ can be uniquely continued up to the first time $\Psi'$ becomes zero, therefore giving a criterion to maximally define $S_1$. This leads to the following theorem: \begin{theorem}\label{th:smooth} For all normalized initial conditions $(q_0,g_0)$ in $\mathcal{M}(\mathbbm{R}^+) \times\mathcal{M}([\xi_0,0))$ such that $q_0(0)=0$ and $\partial_x q_0(0) /2<1/\lambda$, there is a unique smooth solution to the fixed-point problem $\Psi=\mathcal{F}_0[\Psi]$ up to the possibly infinite time \begin{eqnarray}\label{eq:defS0} S_1 = \inf \left\{ \sigma>0 \, \big \vert \, \Psi'(\sigma) \leq 0 \right\} = \inf \left\{ \sigma>0 \, \big \vert \, g[\Psi](\sigma) \geq 1/\lambda \right\} > 0 \, . \end{eqnarray} \end{theorem} \begin{proof} Suppose there exists a solution $\Psi$ to the fixed-point problem $\Psi=\mathcal{F}_0[\Psi]$. Then, the smooth function $\psi=( \mathrm{id} - \lambda G[\Psi] ) / \nu $ in $\mathcal{C}_0([\xi_0,\infty))$ is such that $\Psi=\mathcal{S}_0[\psi]$ with $\psi'_0(0^+)=\Psi'_0(0^+)=(1-\lambda \partial_x q_0(0) /2) /\nu>0$. Let us then introduce the positive time \begin{eqnarray*} S_{1,\delta}=\inf \left\{ \sigma >0 \, \big \vert \, \psi'(\sigma) \leq \delta \right\} >0\, , \end{eqnarray*} so that $\Psi=\psi$ is necessarily smooth on $[0,S_{1,\delta})$. For all $\delta>0$, $\Psi$ is also determined as the solution of the regularized fixed-point problem $\Psi=\mathcal{F}_\delta[\Psi]$ on the possibly infinite time interval $[0,S_{1,\delta})$. Indeed, on $[0,S_{1,\delta})$ we have \begin{eqnarray*} \mathcal{F}_\delta [\Psi](\sigma) = \mathcal{S}_\delta \left[\big( \sigma - \lambda G[\Psi](\sigma) \big) / \nu \right] = \mathcal{S}_\delta \left[\psi \right] = \psi = \Psi \, , \quad 0 \leq \sigma \leq S_{1,\delta}\, . \end{eqnarray*} By Corollary \ref{cor:globSol}, there is a unique solution $\Psi_\delta$ to $\Psi=\mathcal{F}_\delta[\Psi]$ in $\mathcal{C}_\delta([\xi_0,\infty)) \subset \mathcal{C}_0([\xi_0,\infty))$.
Thus, the time $S_{1,\delta}$ is equivalently defined as \begin{eqnarray*} S_{1,\delta}=\inf \left\{ \sigma >0 \, \big \vert \, \psi'_\delta(\sigma) \leq \delta \right\} >0\, , \quad \mathrm{with} \quad \psi_\delta(\sigma) = \big( \sigma - \lambda G[\Psi_\delta](\sigma) \big) / \nu \, , \end{eqnarray*} and $\Psi=\mathcal{F}_0[\Psi]$ admits $\Psi_\delta$ as unique smooth solution on $[0,S_{1,\delta})$. Moreover, for all $\delta_2>\delta_1>0$, since we have $\Psi_{\delta_1}=\Psi_{\delta_2}$ on $[0,S_{1,\delta_2})$ and since $S_{1,\delta}$ is a decreasing function of $\delta$, the solution to $\Psi=\mathcal{F}_0[\Psi]$ can be uniquely extended to a smooth function on $[0,S_1)$, with $S_1 = \lim_{\delta \to 0} S_{1,\delta}$. Finally, it remains to check that $S_1$ is also defined as \eqref{eq:defS0}. For all $0 \leq \sigma < S_1$, there is $\delta>0$ such that $S_{1,\delta}>\sigma$, so that we have $\psi'(\sigma)=\psi'_\delta(\sigma) > \psi'_\delta(S_{1,\delta}) = \delta>0$. There is nothing more to show if $S_1=\infty$. If $S_1<\infty$, as a bounded increasing function on $[0,S_1)$, the solution $\psi$ admits a left limit in $S_1$. Thus $\psi$ can be extended by continuity to $[0,S_1]$ with: \begin{eqnarray*} \psi'(S_1) = \lim_{\delta \to 0^+}\psi'(S_{1,\delta}) =\lim_{\delta \to 0^+} \psi'_\delta(S_{1,\delta}) = \lim_{\delta \to 0^+} \delta = 0 \, . \end{eqnarray*} \end{proof} \subsection{Analytical characterization of full blowups} Assuming that the dynamics starting at time zero is initially smooth with $q_0(0)=0$ and $\partial_x q_0(0)/2<1/\lambda$, the first blowup time $T_1$ occurs when the instantaneous flux $f$, or equivalently $\Phi'=\nu+ \lambda f$, first diverges, i.e., \begin{eqnarray*} T_1 &=& \inf \left\{ T>0 \, \Big \vert \, \lim_{t \to T^-} \Phi'(t) = \infty \right\} \, ,\\ &=& \inf \left\{ \Psi(S)>0 \, \Big \vert \, \lim_{\sigma \to S^-} \Psi'(\sigma) = 0 \right\} = \Psi(S_1) > 0 \, . \end{eqnarray*} Thus, in the time-changed picture, the blowup condition corresponds precisely to the definition of $S_1$, the terminal point of the interval over which Theorem \ref{th:smooth} guarantees the existence of smooth, increasing solutions. This justifies defining the following blowup conditions with respect to the time-changed dynamics: \begin{definition}\label{def:blowup} A blowup occurs if $T_1 = \Psi(S_1)<\infty$, which is equivalent to \begin{eqnarray*}S_1 = \inf \big\{ \sigma>0 \, \big \vert \, g(\sigma) \geq 1/\lambda \big\} < \infty \, , \end{eqnarray*} where $g = \partial_\sigma G$ is the instantaneous inactivation flux for the time-changed process $Y_\sigma$. \end{definition} As stated at the end of Section \ref{subsec:MKV}, the emergence of blowup clearly depends on the initial conditions. However, for large enough interaction parameter, blowups will generically occur in finite time. To see this, consider for instance initial conditions of the form $(q_0,g_0)=(\delta_{x_0},0)$. For such initial conditions, the fixed-point problem $\Psi=\mathcal{F}_\delta[\Psi]$ admits an initially smooth solution $\Psi$ with instantaneous flux $g$ satisfying \begin{eqnarray*} g(\sigma) = h(\sigma, x_0) + \int_0^\sigma h(\sigma-\tau, \Lambda) \, \dd G(\xi(\tau)) \geq h(\sigma, x_0) \, , \end{eqnarray*} where $h(\sigma,x)= \partial_\sigma H(\sigma, x) \geq 0$ represents the first-passage density to zero of a Wiener process started in $x$ with negative unit drift: \begin{eqnarray}\label{eq:kFPT} h(\sigma,x) = \frac{x}{\sqrt{2 \pi \sigma^3}} \exp \left(-\frac{(x-\sigma)^2}{2 \sigma} \right)\, .
\end{eqnarray} Moreover, we have $\sup_{\sigma \geq 0} h(\sigma, x_0) \geq h(x_0, x_0) = 1 /\sqrt{2 \pi x_0}$. This implies that the blowup condition $g(\sigma) \geq 1/\lambda$ is satisfied in finite time whenever $\lambda> C_{x_0}=\sqrt{2 \pi x_0}$. From now on, let us consider that the blowup condition is first met in $S_1<\infty$. On $[0,S_1]$, the inverse time change $\Psi$ is a smooth function with $\lim_{\sigma \to S_1^-} \Psi'(\sigma) = 0$. Thus, the diverging behavior of $f = (\Phi'-\nu)/\lambda$ is determined by the order of the first nonzero left-derivative of $\Psi$ in $S_1$, which is always larger than or equal to two. In all generality, this order depends on the initial conditions. However, for generic initial conditions, we expect that \begin{eqnarray*} \inf \left\{ n \geq 1 \, \Big \vert \, \lim_{\sigma \to S_1^-} \Psi^{(n)}(\sigma) \neq 0 \right\} = 2 \, . \end{eqnarray*} Moreover, given that we necessarily have $\Psi'>0$ on $[0,S_1)$, the smooth extension of $\Psi$ must attain a local maximum in $S_1$, so that for $n=2$ the criterion $\lim_{\sigma \to S_1^-} \Psi^{(n)}(\sigma) \neq 0$ is actually equivalent to $\lim_{\sigma \to S_1^-} \Psi^{(n)}(\sigma) < 0$. The above observations lead us to introduce an additional condition for blowups, which we refer to as the full-blowup condition: \begin{definition}\label{def:strictblowup} The blowup time $T_1 = \Psi(S_1)$ satisfies the full-blowup condition if $\lim_{\sigma \to S_1^-}\partial_\sigma g(\sigma)>0$. \end{definition} The definition of the full-blowup condition naturally follows from the fact that $\Psi'=(1-\lambda g) / \nu$ on $[0,S_1]$, so that $\lim_{\sigma \to S_1^-} \Psi''(\sigma)=-(\lambda/\nu)\lim_{\sigma \to S_1^-} \partial_\sigma g(\sigma)$. It is straightforward to check that full blowups are marked by a H\"older singularity with exponent $1/2$ for the time change $\Phi$: \begin{proposition} Under the full-blowup condition, the flux density $f$ diverges in $T_1$ as \begin{eqnarray*} f(t) \stackrel[t \to T_1^-]{}{\sim} \frac{ 1}{ \lambda\sqrt{2a_1(T_1-t)}} \, , \quad \mathrm{with} \quad a_1 = (\lambda/\nu) \partial_\sigma g(S_1) \, . \end{eqnarray*} \end{proposition} \begin{proof} The generic blowup condition implies that the inverse time change $\Psi$ admits a zero left derivative when $\sigma \to S_1^-$. The full-blowup condition further ensures that $\Psi$ behaves locally quadratically in the left vicinity of $S_1$. Specifically, we have \begin{eqnarray*} \Psi(\sigma) = T_1 - a_1(\sigma-S_1)^2/2+o\big((\sigma-S_1)^2\big) \, , \quad \sigma < S_1 \, , \end{eqnarray*} where the quadratic coefficient is given by \begin{eqnarray*} a_1=-\lim_{\sigma \to S_1^-}\partial^2_\sigma \Psi(\sigma) = (\lambda/\nu) \partial_\sigma g(S_1) >0 \, . \end{eqnarray*} Thus, for $t<T_1$, just before blowup, the time change $\Phi$ behaves as \begin{eqnarray*} \Phi(t) = S_1 - \sqrt{2(T_1-t)/a_1} + o\big(\sqrt{T_1-t}\big) \, . \end{eqnarray*} In turn, this implies a blowup divergence as the reciprocal of a square root: \begin{eqnarray*} f(t) = \frac{\nu g \left(\Phi (t)\right)}{1 - \lambda g \left(\Phi (t)\right)} \stackrel[t \to T_1^-]{}{\sim} \frac{\nu/\lambda}{\lambda \partial_\sigma g(S_1)\big(S_1-\Phi(t)\big)} = \frac{ 1}{ \lambda\sqrt{2a_1(T_1-t)}} \, . \end{eqnarray*} \end{proof}
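For illustration purposes, the sufficient blowup criterion $\lambda > C_{x_0}=\sqrt{2 \pi x_0}$ is straightforward to probe numerically through the lower bound $g \geq h(\cdot,x_0)$. The following minimal Python sketch, which is not part of the analysis and uses illustrative values of $x_0$ and $\lambda$, evaluates the first-passage density \eqref{eq:kFPT} on a grid and checks the threshold.
\begin{verbatim}
# Minimal numerical sketch (illustrative x0 and lam, not taken from the text):
# evaluates the first-passage density h of a unit-negative-drift Wiener process
# and checks the sufficient blowup criterion lam > C_{x0} = sqrt(2*pi*x0),
# which ensures that h(., x0), and hence g, exceeds 1/lam at some finite time.
import math

def h(s, x):
    if s <= 0.0:
        return 0.0
    return x / math.sqrt(2.0 * math.pi * s ** 3) * math.exp(-(x - s) ** 2 / (2.0 * s))

x0, lam = 0.5, 2.5
C_x0 = math.sqrt(2.0 * math.pi * x0)
peak = max(h(1e-3 * k, x0) for k in range(1, 5001))
print("sup_s h(s, x0) ~ %.4f  >=  1/C_x0 = %.4f" % (peak, 1.0 / C_x0))
print("blowup guaranteed for lam = %.2f:" % lam, lam > C_x0)
\end{verbatim}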
A synchronization event occurs in $T_1$ if the time change $\Phi$ exhibits a jump discontinuity in $T_1$ after a blowup. Such a discontinuity corresponds to the inverse time change $\Psi$ being flat on a non-empty interval $[S_1,U_1)$, with $S_1=\Phi(T_1^-)<U_1=\Phi(T_1)$. By smoothness of $\Psi$ on $[0,S_1]$, every synchronization event is triggered by a blowup but in all generality, a blowup need not trigger a synchronization event, which corresponds to the marginal case $S_1=\Phi(T_1^-)=U_1=\Phi(T_1)$. However, under the full-blowup condition, a blowup always triggers a synchronization event, i.e., $S_1=\Phi(T_1^-)<U_1=\Phi(T_1)$. \begin{theorem}\label{th:jump} Suppose $S_1$ is a full blowup time for the time-changed dynamics with normalized initial conditions $(q_0,g_0)$ such that $q_0(0)=0$ and $\partial_x q_0(0)/2<1/\lambda$. Then, the solution to the fixed-point problem $\Psi=\mathcal{F}_0[\Psi]$ can be uniquely extended as a constant function on $[S_1,U_1]$ with $U_1=S_1+\lambda \pi_1$ where $\pi_1$ satisfies $0<\pi_1<\Vert q(S_1, \cdot) \Vert_1 \leq 1$ and is defined as \begin{eqnarray}\label{eq:defjump} \pi_1 = \inf \left\{ p \geq 0 \, \bigg \vert \, p > \int_{0}^\infty H(\lambda p, x ) q(S_1,x ) \, \dd x \right\} \, . \end{eqnarray} \end{theorem} \begin{proof} The proof proceeds in two steps: $(i)$ We show that if it is possible to locally extend a solution $\Psi$ past a full blowup in $S_1$, such an extension is uniquely determined on the maximal interval $[S_1,U_1]$, where $\Psi$ is constant. $(ii)$ We show that under full-blowup conditions, it is always possible to extend a solution past $S_1$, i.e., $S_1<U_1$. $(i)$ Consider the smooth solution $\Psi$ of the fixed-point problem $\Psi=\mathcal{F}_\delta[\Psi]$ on $[0,S_1]$. Suppose there exists $U>S_1$ such that the solution $\Psi$ can be extended on $[0,U]$. Then on $[0,U]$, there is a smooth function $\psi$ such that $\Psi=\mathcal{S}_0[\psi]$ with $\psi=(\mathrm{id}-\lambda G[\Psi])/\nu$. Moreover, the full-blowup condition entails that $ \psi'(S_1) = \lim_{\sigma \to S_1^-} \Psi'(\sigma)=0$ and $\psi''(S_1) = \lim_{\sigma \to S_1^-}\Psi''(\sigma)<0$. Thus, $\psi$ must be locally decreasing for $\sigma>S_1$, so that \begin{eqnarray*}U^\dagger = \sup \left\{\sigma \in [0,U] \, \big \vert \, \psi(\sigma) \leq \psi(S_1)=\Psi(S_1) \right\} >S_1 \, . \end{eqnarray*} Then, for all $0 \leq \sigma \leq U^\dagger$, we have $\psi(\sigma) \leq \Psi(S_1)$ so that $\Psi=\mathcal{S}_0[\psi]$ implies that $\Psi(\sigma)=\Psi(S_1)$ on $[S_1,U^\dagger]$. This shows that under the full blowup condition, the smooth solution $\Psi$ can only be continued locally as a constant function, if it is possible at all. Suppose that $\Psi$ is constant for all $S_1 \leq \sigma \leq U_1$, for some real value $U_1>S_1$. Then the backward function $\xi(\sigma)=\sigma-\eta(\sigma)=\Phi(\Psi(\sigma)-\epsilon)=\Phi(\Psi(S_1)-\epsilon)$ is also constant. Consequently, the integral term in the renewal-type equation \eqref{eq:DuHamel} attached to the fixed-point problem \ref{def:fixedPoint} does not increase on $[S_1, U_1]$ and for all $S_1 \leq \sigma \leq U_1$, we must have $\Psi=\mathcal{S}_0[\psi]$ where: \begin{eqnarray*} \psi(\sigma)=\big(\sigma-\lambda G(\sigma) \big)/\nu \quad \mathrm{with} \quad G(\sigma) = G(S_1) + \int_0^\infty H(\sigma-S_1,x) q(S_1,x) \, \dd x \, . \end{eqnarray*} Thus the auxiliary function $\psi$ is uniquely determined on $[0,U_1]$. For this determination to be consistent, we must have that for all $0 \leq \sigma \leq U_1$, $\psi(\sigma) \leq \Psi(S_1)$, which is equivalent to \begin{eqnarray*} \nu\big(\psi(\sigma)- \Psi(S_1) \big) &=& \sigma-S_1-\lambda \big(G(\sigma) - G(S_1) \big) \, , \\ &=& \sigma-S_1 - \lambda \int_0^\infty H(\sigma-S_1,x) q(S_1,x) \, \dd x \leq 0 \, .
\end{eqnarray*} This shows that the maximal possible value $U_1$ ensuring that $\Psi$ is constant on $[S_1,U_1]$ is: \begin{eqnarray*} U_1&=& \inf \left\{\sigma>S_1 \, \big \vert \, \psi(\sigma) > \Psi(S_1) \right\} = S_1+\lambda \inf \left\{p>0 \, \bigg \vert \, p - \int_0^\infty H(\lambda p,x) q(S_1,x) \, dx > 0 \right\} = S_1+\lambda \pi_1 \, . \end{eqnarray*} $(ii)$ To conclude, it remains to show that $S_1<U_1$ under the full-blowup condition, so that $\Psi$ can be maximally continued as a constant function on a nonempty interval $[S_1,U_1)$. To that end, we will actually show that the number $\pi_1$, as defined in \eqref{eq:defjump}, is such that $0<\pi_1< \Vert q(S_1, \cdot) \Vert_1$ under the full-blowup condition. Let us consider the smooth function $\zeta$ defined on $[0,\infty)$ by \begin{eqnarray*} \zeta(p) = p- \int_{0}^\infty H(\lambda p, x) q(S_1,x) \, \dd x \, . \end{eqnarray*} Our first goal is to prove that $\pi_1$ exists as a root of the function $\zeta$ and is such that $0<\pi_1 \leq \Vert q(S_1, \cdot) \Vert_1 \leq 1$. To show this, observe that $\zeta$ is a smooth function on $[0,\infty)$ such that for all $p>\Vert q(S_1, \cdot)\Vert_1$, we have \begin{eqnarray*} \zeta(p) \geq p - \Vert H(\lambda p, \cdot ) \Vert_\infty \int_0^\infty q(S_1, x) \, \dd x \geq p - \Vert q(S_1, \cdot)\Vert_1 > 0 \, . \end{eqnarray*} Thus, to establish the existence of a root in $(0,\Vert q(S_1, \cdot)\Vert_1)$, it is enough to show that $\zeta$ takes negative values in $(0,\Vert q(S_1, \cdot)\Vert_1)$. In fact, we will show that $\zeta$ is negative in the vicinity of zero under the full-blowup condition. As $\zeta$ satisfies $\lim_{p \to 0^+} \zeta(p)=0$, we first consider the asymptotic behavior of its derivative function given by \begin{eqnarray*} \zeta'(p) = 1 - \lambda \int_{0}^\infty h(\lambda p, x) q(S_1, x) \, \dd x \, , \end{eqnarray*} where $h$ is defined as the first-passage density in \eqref{eq:kFPT}. The limit behavior of $\zeta'$ is \begin{eqnarray*} \lim_{p \to 0^+} \zeta'(p) = 1 - \lambda \lim_{p \to 0^+} \int_{0}^\infty h(\lambda p, x) q(S_1,x) \, \dd x = 1 - \lambda \partial_x q(S_1,0)/2 \, , \end{eqnarray*} where the last equality follows from the absorbing boundary condition at zero and the asymptotic property of the first-passage density $h$, namely $\lim_{t \to 0} h(t, x) = \delta(x)-\delta'(x)/2$ in the sense of distributions, as given by Proposition \ref{app:Asympt1}. From there, under the blowup condition, we have \begin{eqnarray*} \lim_{p \to 0} \zeta'(p) = 1 - \lambda \partial_x q(S_1,0)/2 = 1 - \lambda g(S_1) = 0 \, . \end{eqnarray*} Next, we evaluate the limit of the second derivative $\zeta''$. To do so, we utilize the asymptotic result obtained in Proposition \ref{app:Asympt2}. To apply this result, we use the facts that $(1)$ $x \mapsto q(S_1,x)$ admits a locally bounded fourth derivative in zero with $q(S_1,0)=0$ and that $(2)$ injecting $q(\sigma,0)=0$ for all $\sigma>0$ in \eqref{eq:qPDE} yields $\partial_x q(S_1,0)+\partial^2_x q(S_1,0)/2=0$. The asymptotic result from Proposition \ref{app:Asympt2} implies that \begin{eqnarray*} \lim_{p \to 0^+} \zeta''(p) &=& - \lambda^2 \lim_{p \to 0^+} \int_{0}^\infty \partial_\sigma h(\lambda p, x) q(S_1,x) \, \dd x \, , \\ &=& -\frac{\lambda^2}{2} \left( \partial^2_x q(S_1, 0) + \partial^3_x q(S_1, 0)/2 \right) \, .
\end{eqnarray*} Differentiating \eqref{eq:qPDE} with respect to $x$ below the reset site, i.e., for $x <\Lambda$, we get \begin{eqnarray*} \partial_x \partial_\sigma q = \partial^2_x q+ \partial^3_x q/2 \, , \end{eqnarray*} so that evaluating the above relation at $(S_1, 0)$ yields: \begin{eqnarray*} \lim_{p \to 0^+} \zeta''(p) = -\lambda^2 \partial_x \partial_\sigma q (S_1, 0) / 2 \, . \end{eqnarray*} Finally, permuting the order of the partial derivatives in the cross-derivative term yields $\partial_\sigma \partial_x q(S_1,0)/2 = \partial_\sigma g(S_1)$ so that, under the full-blowup condition, we get \begin{eqnarray*} \lim_{p \to 0} \zeta''(p) =- \lambda^2 \partial_\sigma g(S_1) < 0 \, . \end{eqnarray*} This implies that, as a root of $\zeta$, $\pi_1$ exists and is such that $0<\pi_1 \leq \Vert q(S_1,\cdot) \Vert_1 \leq 1$. \end{proof} The above theorem has a direct interpretation in terms of the original dPMF dynamics. In the event of a full blowup, the dynamics of the time-changed process $Y$ cannot unfold smoothly after the blowup time $S_1$ as it would imply that $\Psi'=(1-\lambda g) / \nu<0$ on some nonempty interval to the right of $S_1$. In other words, as a decreasing function, $\Psi$ would implement a time reversal past $S_1$, which is not physically admissible. Physical solutions resolve this conundrum by freezing the clock for the original time at $T_1=\Psi(S_1)$, while letting the clock for the changed time run past $S_1$. In the time-changed picture, this corresponds to stalling the reset of inactive processes, while letting active processes inactivate according to their linear, noninteracting dynamics. Such a non-reset dynamics continues in the time-changed picture until the original clock can start running again, which happens at changed time $U_1=S_1+\lambda \pi_1$. Incidentally, the number $0<\pi_1<1$ is the fraction of processes that synchronously inactivate at time $T_1$, which is marked by a discontinuity of size $\lambda \pi_1$ in the time change $\Phi=\Psi^{-1}$. We summarize the above discussion by stating the following corollary. \begin{corollary}\label{cor:jump} Under the full-blowup condition at time $T_1=\Psi(S_1)$, a synchronization event occurs with size $0<\big(\Phi(T_1)-\Phi(T_1^-)\big)/\lambda=\pi_1<1$. \end{corollary} Observe that the definitions of the generic blowup trigger time $S_1$ and of the blowup exit time $U_1=S_1+\lambda \pi_1$ are rather imprecise with respect to the behavior of $\Psi$ in the immediate vicinity of $S_1$ and $U_1$. These imprecisions are the source of the difficulties in extending the existence of a solution over the whole real half-line $\mathbbm{R}^+$. To exhibit such a solution using our prior results, we need to check that the auxiliary function $\psi= (\mathrm{id}-\lambda G )/\nu$ is such that $\psi'(U_1)>0$ at the blowup exit time $U_1$, so that we can invoke Theorem \ref{th:smooth} to continue the solution over some nonempty interval $[U_1,S_2)$, where $S_2$ is the next putative blowup time where $\lim_{\sigma \to S_2^-} \Psi'(\sigma)=0$. Then, if $S_2<\infty$, invoking Theorem \ref{th:jump} to further continue the solution via blowup resolution necessitates checking the full-blowup condition: $\lim_{\sigma \to S_2^-} \Psi''(\sigma)<0$. Assuming that all these conditions hold ad infinitum, exhibiting a solution over the whole real half-line $\mathbbm{R}^+$ finally requires excluding the occurrence of accumulation points, whereby an infinite number of vanishingly small blowups occur in finite time.
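To make the characterization \eqref{eq:defjump} of the synchronization size concrete, the following numerical sketch locates $\pi_1$ as the first positive root of $\zeta(p)=p-\int_0^\infty H(\lambda p,x)q(S_1,x)\,\dd x$. It is only an illustration and rests on two assumptions that are not part of the derivation above: $H(s,x)$ is taken to be the first-passage (inverse-Gaussian) distribution function associated with the kernel $h$ of Appendix \ref{appA}, and $q(S_1,\cdot)$ is a hypothetical toy density vanishing at the absorbing boundary. The toy density is chosen slightly supercritical, $\lambda\,\partial_x q(S_1,0)/2>1$, so that $\zeta$ is visibly negative near $p=0$; at an exact full blowup this quantity equals $1$ and the negativity comes from the second-order term instead.
\begin{verbatim}
# Illustrative sketch only: locate the synchronization size pi_1 as the first
# positive root of zeta(p) = p - int_0^inf H(lambda*p, x) q(S1, x) dx.
# Assumptions (not taken from the paper): H(s, x) is the first-passage distribution
# function matching the kernel h(s, x) = x e^{-(x-s)^2/(2s)} / sqrt(2 pi s^3), and
# q(S1, .) is a hypothetical toy density with q(S1, 0) = 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

lam = 3.0  # hypothetical interaction parameter

def H(s, x):
    """First-passage probability by (changed) time s, starting at distance x > 0."""
    if s <= 0.0:
        return 0.0
    rs = np.sqrt(s)
    # exp(2x)*Phi(-(s+x)/sqrt(s)) computed through logcdf to avoid overflow at large x
    return norm.cdf((s - x) / rs) + np.exp(2.0 * x + norm.logcdf(-(s + x) / rs))

def q_S1(x):
    """Toy post-blowup density with total mass 0.8 and q(S1, 0) = 0."""
    return 0.8 * x * np.exp(-x)

def zeta(p):
    integral, _ = quad(lambda x: H(lam * p, x) * q_S1(x), 0.0, np.inf, limit=200)
    return p - integral

# zeta is negative near 0 (here lambda * d_x q(S1,0) / 2 = 1.2 > 1, a slightly
# supercritical toy) and positive for p > ||q(S1,.)||_1 = 0.8, so the first
# root lies strictly between 0 and 0.8.
p_grid = np.linspace(1e-3, 0.8, 101)
vals = np.array([zeta(p) for p in p_grid])
i = int(np.argmax((vals[:-1] < 0) & (vals[1:] >= 0)))  # first sign change
pi_1 = brentq(zeta, p_grid[i], p_grid[i + 1])
print(f"synchronization size pi_1 ~ {pi_1:.3f}, jump of Phi ~ {lam * pi_1:.3f}")
\end{verbatim}
With these toy choices the scan locates a root strictly between $0$ and $\Vert q(S_1,\cdot)\Vert_1$, in line with Theorem \ref{th:jump}.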
The main result of \cite{TTLS} is that all these checks and requirements are met for a sufficiently large interaction parameter $\lambda$ and a sufficiently small refractory period $\epsilon>0$. Establishing this result relies on a detailed analysis of the time-changed dynamics and is beyond the scope of this work, which is mainly concerned with introducing the time-changed picture to characterize mean-field dynamics with blowups. \section*{Acknowledgements} The authors would like to thank the anonymous referees, the Associate Editor, and the Editor for their constructive comments that improved the quality of this paper. The first author was supported by an Alfred P. Sloan Research Fellowship FG-2017-9554 and a CRCNS award DMS-2113213 from the National Science Foundation. The second author was supported in part by a grant from the Center for Theoretical and Computational Neuroscience at the University of Texas at Austin. \begin{appendix} \section{Asymptotic behavior of $h(\sigma, \cdot)$ and $\partial_\sigma h(\sigma, \cdot)$ when $\sigma \to 0^+$}\label{appA} This appendix comprises two useful results about the short-time asymptotics of the first-passage kernel $(\sigma,x) \mapsto h(\sigma, x)$ and its time-derivative $(\sigma,x) \mapsto \partial_\sigma h(\sigma, x)$. The first result follows from classical work in~\cite{Roz84}, whereas the second result requires an original analysis. \begin{proposition}\label{app:Asympt1} Consider a continuous function $q:\mathbbm{R} \to \mathbbm{R}$ and suppose moreover that $q$ is continuously differentiable on $[0,\delta]$ for some $\delta>0$. Then \begin{eqnarray} \lim_{\sigma \to 0^+} \int_0^\infty h(\sigma,x) q(x) \, \dd x &=& q(0) + \partial_x q(0)/2 \, . \end{eqnarray} \end{proposition} \begin{proof} After a simple change of variable $y=x-\sigma$, one can check that \begin{eqnarray} \int_{0}^\infty h(\sigma,x) q(x) \, dx = \int_{0}^\infty \frac{ye^{-\frac{y^2}{2\sigma}}}{\sqrt{2 \pi \sigma^3}} q(y+\sigma) \, dy +\int_{0}^\infty \frac{e^{-\frac{(x-\sigma)^2}{2\sigma}}}{\sqrt{2 \pi \sigma}} q(x) \, dx \end{eqnarray} where we recognize the drifted heat kernel in the last integral term. The asymptotic behavior of the first integral term follows the analysis in~\cite{Roz84}, which shows that \begin{eqnarray}\label{eq:heatAsympt} \lim_{\sigma \to 0+} \int_{0}^\infty \frac{xe^{-\frac{x^2}{2\sigma}}}{\sqrt{2 \pi \sigma^3}} q(x) \, dx = \partial_x q(0)/2 \, , \end{eqnarray} for all functions $q$ with locally bounded derivative in zero. Thus, we have \begin{eqnarray}\label{eq:h1Asympt} \lim_{\sigma \to 0+} h(\sigma,x) = \delta_0(x) - \delta'_0(x)/2 \end{eqnarray} in the distribution sense, which yields the announced limit. \end{proof} \begin{proposition}\label{app:Asympt2} Consider a continuous function $q:\mathbbm{R} \to \mathbbm{R}$ with nonnegative values and polynomial growth on $\mathbbm{R}^+$. Suppose moreover that $q$ is four times continuously differentiable on $[0,\delta]$ for some $\delta>0$, with $q(0)=0$ and $\partial_x q(0)+\partial^2_x q(0)/2=0$. Then \begin{eqnarray} \lim_{\sigma \to 0^+} \int_0^\infty \partial_\sigma h(\sigma,x) q(x) \, \dd x &=& \frac{1}{2} \left( \partial^2_x q(0) + \partial^3_x q(0)/2\right)\, . \end{eqnarray} \end{proposition} \begin{proof} As $q:\mathbbm{R} \to \mathbbm{R}$ has polynomial growth, there is an integer $d>0$ and a real $K>0$ such that $q(x) \leq K (1+x^d)$ on $\mathbbm{R}^+$.
For all real $\delta>0$ and all integers $m, n \geq 0$, we have \begin{eqnarray} \lim_{\sigma \to 0^+} \frac{1}{\sigma^m}\int_\delta^\infty e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n q(x) \, \dd x = 0 \, . \end{eqnarray} This follows from the fact that $x \mapsto e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n$ is decreasing for $x>\left(\sigma+\sqrt{\sigma(4n+\sigma)}\right)$. Then for $0 \leq \sigma<\delta^2/(n+\delta)$, we have: \begin{eqnarray} \sup_{x\geq \delta} e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n = e^{-\frac{(\delta-\sigma)^2}{2\sigma}} \delta^n \, . \end{eqnarray} This allows one to write for $0 \leq \sigma<\delta^2/(n+d+2+\delta)$ \begin{eqnarray} \int_\delta^\infty e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n q(x) \, \dd x &\leq& \int_\delta^\infty e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n\big(1+x^{d+2} \big) \frac{K(1+x^d)}{1+x^{d+2}} \, \dd x \, , \\ &\leq& \, e^{-\frac{(\delta-\sigma)^2}{2\sigma}}\delta^n K \big(1+\delta^{d+2} \big) \int_\delta^\infty \frac{1+x^d}{1+x^{d+2}} \, \dd x \, , \\ &\leq& K_{d,\delta} \, \delta^n e^{-\frac{(\delta-\sigma)^2}{2\sigma}} \, , \end{eqnarray} where the constant $K_{d,\delta}$ only depends on $d$ via the Gamma function: \begin{eqnarray} K_{d,\delta} &=& K \big(1+\delta^{d+2} \big) \int_0^\infty \frac{1+x^d}{1+x^{d+2}} \, \dd x \, , \\ &=& K \big(1+\delta^{d+2} \big) \left(2 \Gamma \left[\frac{1+d}{2+d} \right] \Gamma \left[\frac{3+d}{2+d} \right] \right) < \infty \, . \end{eqnarray} We conclude by observing that \begin{eqnarray} 0 \leq \frac{1}{\sigma^m}\int_\epsilon^\infty e^{-\frac{(x-\sigma)^2}{2\sigma}}x^n q(x) \, \dd x \leq K_{d,\delta} \, \delta^n e^{-\frac{(\delta-\sigma)^2}{2\sigma}}/\sigma^m \xrightarrow{\sigma \to 0^+} 0 \end{eqnarray} The above observation implies that if $q$ is a function with polynomial growth, then for all $\delta>0$ we have \begin{eqnarray}\label{eq:remainBound} \lim_{\sigma \to 0^+} \int_\delta^\infty \vert \partial_\sigma h(\sigma,x) \vert q(x) \, \dd x = 0 \, , \end{eqnarray} so that if the limits at stake exist, we have \begin{eqnarray} \lim_{\sigma \to 0^+} \int_0^\infty \partial_\sigma h(\sigma,x) q(x) \, \dd x &=& \lim_{\sigma \to 0^+} \int_0^\delta \partial_\sigma h(\sigma,x) q(x) \, \dd x \, . \end{eqnarray} Moreover, if there exists $\delta>0$ such that $\vert \partial^4_x q \vert \leq B_\delta < \infty$ on $[0,\delta]$ with $q(0)=0$, we have \begin{eqnarray} \Bigg \vert \int_0^\delta \partial_\sigma h(\sigma,x) \left( q(x) - \sum_{n=1}^3 \frac{\partial^n_x q(0)}{n!} x^n \right) \, \dd x \Bigg \vert \leq B_\delta \int_0^\delta \vert \partial_\sigma h(\sigma,x) \vert x^4 \, \dd x \, . \end{eqnarray} Let us then introduce the integrals \begin{eqnarray} I_n(\sigma) &=& \int_0^\infty \partial_\sigma h(\sigma,x) x^n \, \dd x , \quad 1 \leq n\leq 3 \, ,\\ J_4(\sigma) &=& \int_0^\infty \vert \partial_\sigma h(\sigma,x) \vert x^4 \, \dd x , \end{eqnarray} The latter integral can be evaluated in closed form as \begin{eqnarray} \lefteqn{J_4(\sigma) = e^{-\sigma/2}(36+\sigma(61+\sigma(16+\sigma))) \sqrt{\frac{\sigma}{2 \pi }} + } \\ && \hspace{40pt} \sigma(5+\sigma)(15+ \sigma(12+\sigma)) \left(1+\mathrm{Erf} \left(\sqrt{\frac{\sigma}{2}} \right) \right) \, . \end{eqnarray} The above expression shows that $\lim_{\sigma \to 0^+} J_4(\sigma)=0$ so that \begin{eqnarray} \lim_{\sigma \to 0^+} \int_0^\delta \vert \partial_\sigma h(\sigma,x) \vert x^4 \, \dd x = \lim_{\sigma \to 0^+} J_4(\sigma) - \lim_{\sigma \to 0^+} \int_\delta^\infty \vert \partial_\sigma h(\sigma,x) \vert x^4 \, \dd x = 0 \, . 
\end{eqnarray} This shows that if the limits at stake exist, we must have \begin{eqnarray} \lim_{\sigma \to 0^+} \int_0^\delta \partial_\sigma h(\sigma,x) q(x) \, \dd x &=& \lim_{\sigma \to 0^+} \sum_{n=1}^3 \frac{\partial^n_x q(0)}{n!} \int_0^\delta \partial_\sigma h(\sigma,x) x^n \, \dd x \, , \\ &=& \lim_{\sigma \to 0^+} \sum_{n=1}^3 \frac{\partial^n_x q(0)}{n!} I_n(\sigma) \, , \end{eqnarray} where the last equality follows from \eqref{eq:remainBound}. For $n=1,2,3$, we find that \begin{eqnarray} I_1(\sigma) = \frac{e^{-\sigma/2}}{\sqrt{2 \pi \sigma}} + \frac{1}{2} \left(1+\mathrm{Erf} \left(\sqrt{\frac{\sigma}{2}} \right) \right) \, , \end{eqnarray} \begin{eqnarray} I_2(\sigma) = \frac{e^{-\sigma/2}(1+2 \sigma)}{\sqrt{2 \pi \sigma}} + \left( \frac{3}{2} + \sigma\right) \left(1+\mathrm{Erf} \left(\sqrt{\frac{\sigma}{2}} \right) \right) \, , \end{eqnarray} \begin{eqnarray} I_3(\sigma) = 3 \left(e^{-\sigma/2}(3+\sigma) \sqrt{\frac{\sigma}{2 \pi }} + \left(\frac{1}{2}+\sigma \left(2+\frac{\sigma}{2} \right) \right)\left(1+\mathrm{Erf} \left(\sqrt{\frac{\sigma}{2}} \right) \right) \right) \, , \end{eqnarray} where one can observe that $I_1(\sigma)$ and $I_2(\sigma)$ diverge when $\sigma \to 0^+$. Such diverging behaviors cancel out under the assumption that $\partial_x q(0)+\partial^2_x q(0)/2=0$, as we then have \begin{eqnarray} \sum_{n=1}^3 \frac{\partial^n_x q(0)}{n!} I_n(\sigma) = \frac{\partial^2_x q(0)}{2} (I_2(\sigma)-I_1(\sigma)) + \frac{\partial^2_x q(0)}{6} I_3(\sigma) \, , \end{eqnarray} with $\lim_{\sigma \to 0^+} I_2(\sigma)-I_1(\sigma) = 1$ and $\lim_{\sigma \to 0^+} I_3(\sigma)=3/2$. \end{proof} \end{appendix} \bibliographystyle{imsart-number} \begin{thebibliography}{32} \bibitem{Amari:1975aa} \begin{barticle}[author] \bauthor{\bsnm{Amari},~\bfnm{Shun-Ichi}\binits{S.-I.}} (\byear{1975}). \btitle{Homogeneous nets of neuron-like elements}. \bjournal{Biological Cybernetics} \bvolume{17} \bpages{211--220}. \bdoi{10.1007/BF00339367} \end{barticle} \endbibitem \bibitem{Billingsley:2013} \begin{bbook}[author] \bauthor{\bsnm{Billingsley},~\bfnm{Patrick}\binits{P.}} (\byear{2013}). \btitle{Convergence of probability measures}. \bpublisher{John Wiley \& Sons}. \end{bbook} \endbibitem \bibitem{Brette:2015} \begin{barticle}[author] \bauthor{\bsnm{Brette},~\bfnm{Romain}\binits{R.}} (\byear{2015}). \btitle{Philosophy of the spike: rate-based vs. spike-based theories of the brain}. \bjournal{Frontiers in systems neuroscience} \bvolume{9} \bpages{151}. \end{barticle} \endbibitem \bibitem{Brunel:2000aa} \begin{barticle}[author] \bauthor{\bsnm{Brunel},~\bfnm{Nicolas}\binits{N.}} (\byear{2000}). \btitle{Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons}. \bjournal{Journal of Computational Neuroscience} \bvolume{8} \bpages{183--208}. \bdoi{10.1023/A:1008925309027} \end{barticle} \endbibitem \bibitem{Brunel:1999aa} \begin{barticle}[author] \bauthor{\bsnm{Brunel},~\bfnm{Nicolas}\binits{N.}} \AND \bauthor{\bsnm{Hakim},~\bfnm{Vincent}\binits{V.}} (\byear{1999}). \btitle{Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates}. \bjournal{Neural Computation} \bvolume{11} \bpages{1621--1671}. \bdoi{10.1162/089976699300016179} \end{barticle} \endbibitem \bibitem{Caceres:2011} \begin{barticle}[author] \bauthor{\bsnm{C{\'a}ceres},~\bfnm{Mar{\'\i}a~J}\binits{M.~J.}}, \bauthor{\bsnm{Carrillo},~\bfnm{Jos{\'e}~A}\binits{J.~A.}} \AND \bauthor{\bsnm{Perthame},~\bfnm{Beno{\^\i}t}\binits{B.}} (\byear{2011}). 
\btitle{Analysis of nonlinear noisy integrate \& fire neuron models: blow-up and steady states}. \bjournal{The Journal of Mathematical Neuroscience} \bvolume{1} \bpages{7}. \end{barticle} \endbibitem \bibitem{Roz84} \begin{bbook}[author] \bauthor{\bsnm{Cannon},~\bfnm{John~Rozier}\binits{J.~R.}} (\byear{1984}). \btitle{The one-dimensional heat equation}. \bseries{Encyclopedia of Mathematics and its Applications} \bvolume{23}. \bpublisher{Addison-Wesley Publishing Company Advanced Book Program}, \baddress{Reading, MA}. \bnote{With a foreword by Felix E. Browder}. \bmrnumber{MR747979 (86b:35073)} \end{bbook} \endbibitem \bibitem{Carrillo:2013} \begin{barticle}[author] \bauthor{\bsnm{Carrillo},~\bfnm{Jos{\'e}~A}\binits{J.~A.}}, \bauthor{\bsnm{Gonz{\'a}lez},~\bfnm{Mar{\'\i}a d~M}\binits{M.~d.~M.}}, \bauthor{\bsnm{Gualdani},~\bfnm{Maria~P}\binits{M.~P.}} \AND \bauthor{\bsnm{Schonbek},~\bfnm{Maria~E}\binits{M.~E.}} (\byear{2013}). \btitle{Classical solutions for a nonlinear Fokker-Planck equation arising in computational neuroscience}. \bjournal{Communications in Partial Differential Equations} \bvolume{38} \bpages{385--409}. \end{barticle} \endbibitem \bibitem{Carrillo:2015} \begin{barticle}[author] \bauthor{\bsnm{Carrillo},~\bfnm{Jos{\'e}~Antonio}\binits{J.~A.}}, \bauthor{\bsnm{Perthame},~\bfnm{Beno{\^\i}t}\binits{B.}}, \bauthor{\bsnm{Salort},~\bfnm{Delphine}\binits{D.}} \AND \bauthor{\bsnm{Smets},~\bfnm{Didier}\binits{D.}} (\byear{2015}). \btitle{Qualitative properties of solutions for the noisy integrate and fire model in computational neuroscience}. \bjournal{Nonlinearity} \bvolume{28} \bpages{3365}. \end{barticle} \endbibitem \bibitem{DalMaso:1991} \begin{barticle}[author] \bauthor{\bsnm{Dal~Maso},~\bfnm{Gianni}\binits{G.}} \AND \bauthor{\bsnm{Rampazzo},~\bfnm{Franco}\binits{F.}} (\byear{1991}). \btitle{On systems of ordinary differential equations with measures as controls}. \bjournal{Differential and Integral equations} \bvolume{4} \bpages{739--765}. \end{barticle} \endbibitem \bibitem{Delarue:2015b} \begin{barticle}[author] \bauthor{\bsnm{Delarue},~\bfnm{F.}\binits{F.}}, \bauthor{\bsnm{Inglis},~\bfnm{J.}\binits{J.}}, \bauthor{\bsnm{Rubenthaler},~\bfnm{S.}\binits{S.}} \AND \bauthor{\bsnm{Tanr\'e},~\bfnm{E.}\binits{E.}} (\byear{2015}). \btitle{Particle systems with a singular mean-field self-excitation. Application to neuronal networks}. \bjournal{Stochastic Processes and their Applications} \bvolume{125} \bpages{2451 - 2492}. \bdoi{https://doi.org/10.1016/j.spa.2015.01.007} \end{barticle} \endbibitem \bibitem{Delarue:2015} \begin{barticle}[author] \bauthor{\bsnm{Delarue},~\bfnm{Fran{\c{c}}ois}\binits{F.}}, \bauthor{\bsnm{Inglis},~\bfnm{James}\binits{J.}}, \bauthor{\bsnm{Rubenthaler},~\bfnm{Sylvain}\binits{S.}}, \bauthor{\bsnm{Tanr{\'e}},~\bfnm{Etienne}\binits{E.}} \betal{et~al.} (\byear{2015}). \btitle{Global solvability of a networked integrate-and-fire model of McKean--Vlasov type}. \bjournal{The Annals of Applied Probability} \bvolume{25} \bpages{2096--2133}. \end{barticle} \endbibitem \bibitem{Falkner:2012} \begin{barticle}[author] \bauthor{\bsnm{Falkner},~\bfnm{Neil}\binits{N.}} \AND \bauthor{\bsnm{Teschl},~\bfnm{Gerald}\binits{G.}} (\byear{2012}). \btitle{On the substitution rule for Lebesgue--Stieltjes integrals}. \bjournal{Expositiones Mathematicae} \bvolume{30} \bpages{412-418}. 
\bdoi{https://doi.org/10.1016/j.exmath.2012.09.002} \end{barticle} \endbibitem \bibitem{Faugeras:2009} \begin{barticle}[author] \bauthor{\bsnm{Faugeras},~\bfnm{Olivier}\binits{O.}}, \bauthor{\bsnm{Touboul},~\bfnm{Jonathan}\binits{J.}} \AND \bauthor{\bsnm{Cessac},~\bfnm{Bruno}\binits{B.}} (\byear{2009}). \btitle{A constructive mean-field analysis of multi population neural networks with random synaptic weights and stochastic inputs}. \bjournal{Frontiers in Computational Neuroscience} \bvolume{3} \bpages{1}. \bdoi{10.3389/neuro.10.001.2009} \end{barticle} \endbibitem \bibitem{Hambly:2019} \begin{barticle}[author] \bauthor{\bsnm{Hambly},~\bfnm{Ben}\binits{B.}}, \bauthor{\bsnm{Ledger},~\bfnm{Sean}\binits{S.}}, \bauthor{\bsnm{S{\o}jmark},~\bfnm{Andreas}\binits{A.}} \betal{et~al.} (\byear{2019}). \btitle{A {M}cKean--{V}lasov equation with positive feedback and blow-ups}. \bjournal{The Annals of Applied Probability} \bvolume{29} \bpages{2338--2373}. \end{barticle} \endbibitem \bibitem{Karatzas} \begin{bbook}[author] \bauthor{\bsnm{Karatzas},~\bfnm{Ioannis}\binits{I.}} \AND \bauthor{\bsnm{Shreve},~\bfnm{Steven~E.}\binits{S.~E.}} (\byear{1991}). \btitle{Brownian motion and stochastic calculus}, \bedition{second} ed. \bseries{Graduate Texts in Mathematics} \bvolume{113}. \bpublisher{Springer-Verlag}, \baddress{New York}. \bmrnumber{MR1121940 (92h:60127)} \end{bbook} \endbibitem \bibitem{Kasabov:2010} \begin{barticle}[author] \bauthor{\bsnm{Kasabov},~\bfnm{Nikola}\binits{N.}} (\byear{2010}). \btitle{To spike or not to spike: A probabilistic spiking neuron model}. \bjournal{Neural Networks} \bvolume{23} \bpages{16--19}. \end{barticle} \endbibitem \bibitem{Knight:1972} \begin{barticle}[author] \bauthor{\bsnm{Knight},~\bfnm{Bruce~W}\binits{B.~W.}} (\byear{1972}). \btitle{The relationship between the firing rate of a single neuron and the level of activity in a population of neurons: Experimental evidence for resonant enhancement in the population response}. \bjournal{The Journal of general physiology} \bvolume{59} \bpages{767--778}. \end{barticle} \endbibitem \bibitem{Lapicque:1907} \begin{barticle}[author] \bauthor{\bsnm{Lapicque},~\bfnm{Louis}\binits{L.}} (\byear{1907}). \btitle{Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarization}. \bjournal{Journal de Physiologie et de Pathologie Generalej} \bvolume{9} \bpages{620--635}. \end{barticle} \endbibitem \bibitem{Liggett:1985} \begin{bbook}[author] \bauthor{\bsnm{Liggett},~\bfnm{Thomas~M.}\binits{T.~M.}} (\byear{1985}). \btitle{Interacting particle systems}. \bseries{Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]} \bvolume{276}. \bpublisher{Springer-Verlag, New York}. \bdoi{10.1007/978-1-4613-8542-4} \bmrnumber{776231} \end{bbook} \endbibitem \bibitem{Nadtochiy:2020} \begin{barticle}[author] \bauthor{\bsnm{Nadtochiy},~\bfnm{Sergey}\binits{S.}} \AND \bauthor{\bsnm{Shkolnikov},~\bfnm{Mykhaylo}\binits{M.}} (\byear{2020}). \btitle{Mean field systems on networks, with singular interaction through hitting times}. \bjournal{The Annals of Probability} \bvolume{48} \bpages{1520--1556}. \end{barticle} \endbibitem \bibitem{Nadtochiy:2019} \begin{barticle}[author] \bauthor{\bsnm{Nadtochiy},~\bfnm{Sergey}\binits{S.}}, \bauthor{\bsnm{Shkolnikov},~\bfnm{Mykhaylo}\binits{M.}} \betal{et~al.} (\byear{2019}). \btitle{Particle systems with singular interaction through hitting times: application in systemic risk modeling}. 
\bjournal{The Annals of Applied Probability} \bvolume{29} \bpages{89--129}. \end{barticle} \endbibitem \bibitem{Panzeri:2010} \begin{barticle}[author] \bauthor{\bsnm{Panzeri},~\bfnm{Stefano}\binits{S.}}, \bauthor{\bsnm{Brunel},~\bfnm{Nicolas}\binits{N.}}, \bauthor{\bsnm{Logothetis},~\bfnm{Nikos~K}\binits{N.~K.}} \AND \bauthor{\bsnm{Kayser},~\bfnm{Christoph}\binits{C.}} (\byear{2010}). \btitle{Sensory neural codes using multiplexed temporal scales}. \bjournal{Trends in neurosciences} \bvolume{33} \bpages{111--120}. \end{barticle} \endbibitem \bibitem{Renart:2004} \begin{barticle}[author] \bauthor{\bsnm{Renart},~\bfnm{Alfonso}\binits{A.}}, \bauthor{\bsnm{Brunel},~\bfnm{Nicolas}\binits{N.}} \AND \bauthor{\bsnm{Wang},~\bfnm{Xiao-Jing}\binits{X.-J.}} (\byear{2004}). \btitle{Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks}. \bjournal{Computational neuroscience: A comprehensive approach} \bpages{431--490}. \end{barticle} \endbibitem \bibitem{Sznitman:1989} \begin{bincollection}[author] \bauthor{\bsnm{Sznitman},~\bfnm{Alain-Sol}\binits{A.-S.}} (\byear{1991}). \btitle{Topics in propagation of chaos}. In \bbooktitle{\'Ecole d'\'Et\'e de {P}robabilit\'es de {S}aint-{F}lour {XIX}---1989}. \bseries{Lecture Notes in Math.} \bvolume{1464} \bpages{165--251}. \bpublisher{Springer, Berlin}. \bdoi{10.1007/BFb0085169} \bmrnumber{1108185} \end{bincollection} \endbibitem \bibitem{Taillefumier:2014fk} \begin{barticle}[author] \bauthor{\bsnm{Taillefumier},~\bfnm{Thibaud}\binits{T.}} \AND \bauthor{\bsnm{Magnasco},~\bfnm{Marcelo}\binits{M.}} (\byear{2014}). \btitle{A Transition to Sharp Timing in Stochastic Leaky Integrate-and-Fire Neurons Driven by Frozen Noisy Input}. \bjournal{Neural Computation} \bvolume{26} \bpages{819--859}. \bdoi{10.1162/NECO{\_}a{\_}00577} \end{barticle} \endbibitem \bibitem{Taillefumier:2013uq} \begin{barticle}[author] \bauthor{\bsnm{Taillefumier},~\bfnm{Thibaud}\binits{T.}} \AND \bauthor{\bsnm{Magnasco},~\bfnm{Marcelo~O.}\binits{M.~O.}} (\byear{2013}). \btitle{A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding}. \bjournal{Proceedings of the National Academy of Sciences}. \end{barticle} \endbibitem \bibitem{TTLS} \begin{barticle}[author] \bauthor{\bsnm{Taillefumier},~\bfnm{Thibaud}\binits{T.}} \AND \bauthor{\bsnm{Sadun},~\bfnm{Lorenzo}\binits{L.}} (\byear{2022}). \btitle{Global solutions with infinitely many blowups in a mean-field neural network}. \end{barticle} \endbibitem \bibitem{Taillefumier:2012uq} \begin{barticle}[author] \bauthor{\bsnm{Taillefumier},~\bfnm{Thibaud}\binits{T.}}, \bauthor{\bsnm{Touboul},~\bfnm{Jonathan}\binits{J.}} \AND \bauthor{\bsnm{Magnasco},~\bfnm{Marcelo}\binits{M.}} (\byear{2012}). \btitle{Exact Event-Driven Implementation for Recurrent Networks of Stochastic Perfect Integrate-and-Fire Neurons}. \bjournal{Neural Computation} \bvolume{24} \bpages{3145--3180}. \bdoi{10.1162/NECO{\_}a{\_}00346} \end{barticle} \endbibitem \bibitem{Faugeras:2011} \begin{barticle}[author] \bauthor{\bsnm{Touboul},~\bfnm{Jonathan}\binits{J.}} \AND \bauthor{\bsnm{Faugeras},~\bfnm{Olivier}\binits{O.}} (\byear{2011}). \btitle{A Markovian event-based framework for stochastic spiking neural networks}. \bjournal{Journal of Computational Neuroscience} \bpages{1-23}. \bnote{10.1007/s10827-011-0327-y}. 
\end{barticle} \endbibitem \bibitem{Touboul:2014} \begin{barticle}[author] \bauthor{\bsnm{Touboul},~\bfnm{Jonathan}\binits{J.}} \betal{et~al.} (\byear{2014}). \btitle{Propagation of chaos in neural fields}. \bjournal{The Annals of Applied Probability} \bvolume{24} \bpages{1298--1328}. \end{barticle} \endbibitem \bibitem{Volpert:1967} \begin{barticle}[author] \bauthor{\bsnm{Vol'pert},~\bfnm{Aizik~Isaakovich}\binits{A.~I.}} (\byear{1967}). \btitle{The spaces and quasilinear equations}. \bjournal{Mathematics of the USSR-Sbornik} \bvolume{2} \bpages{225}. \end{barticle} \endbibitem \end{thebibliography} \end{document}
2205.07111v2
http://arxiv.org/abs/2205.07111v2
Asymptotic value of the multidimensional Bohr radius
\documentclass[12pt]{amsart} \usepackage{amsmath} \usepackage{amssymb, amsfonts, amsthm} \usepackage{mathtools} \usepackage{amssymb} \usepackage{ifthen} \usepackage{graphicx} \usepackage{float} \usepackage{caption} \usepackage{subcaption} \usepackage{cite} \usepackage{amsfonts} \usepackage{amscd} \usepackage{amsxtra} \usepackage{color} \usepackage{bm} \newcommand{\green}{\color{green}} \newcommand{\red}{\color{red}} \newcommand{\blue}{\color{blue}} \newcommand{\A}{{\mathcal A}} \newcommand{\IC}{{\mathbb C}} \newcommand{\ID}{{\mathbb D}} \newcommand{\uhp}{{\mathbb H}} \newcommand{\PP}{{\mathcal P}} \newcommand{\N}{{\mathbb N}} \newcommand{\R}{{\mathbb R}} \newcommand{\es}{{\mathcal S}} \newcommand{\Schur}{{\mathscr{S}}} \newcommand{\T}{{\mathbb T}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\spl}{{\mathcal{SP}}} \renewcommand{\Im}{{\,\operatorname{Im}\,}} \renewcommand{\Re}{{\,\operatorname{Re}\,}} \addtolength{\textwidth}{4cm} \addtolength{\hoffset}{-2cm} \addtolength{\textheight}{2cm} \addtolength{\voffset}{-1cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{problem}[theorem]{Problem} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{solution}[theorem]{Solution} \newtheorem{notation}[theorem]{Notation} \renewcommand{\theequation}{\thesection.\arabic{equation}} \numberwithin{equation}{section} \def\be{\begin{equation}} \def\ee{\end{equation}} \newcounter{alphabet} \newcounter{tmp} \newenvironment{Thm}[1][]{\refstepcounter{alphabet} \bigskip \noindent {\bf Theorem \Alph{alphabet}} \ifthenelse{\equal{#1}{}}{}{ (#1)} {\bf .} \itshape}{\vskip 8pt} \newcommand{\Reff}[1]{\setcounter{tmp}{\ref{#1}}\Alph{tmp}} \newenvironment{Lem}[1][]{\refstepcounter{alphabet} \bigskip \noindent {\bf Lemma \Alph{alphabet}} {\bf .} \itshape}{\vskip 8pt} \newenvironment{Core}[1][]{\refstepcounter{alphabet} \bigskip \noindent {\bf Corollary \Alph{alphabet}} {\bf .} \itshape}{\vskip 8pt} \newenvironment{Conj}[1][]{\refstepcounter{alphabet} \bigskip \noindent {\bf Conjecture \Alph{alphabet}} {\bf .} \itshape}{\vskip 8pt} \begin{document} \title[Asymptotic value of the multidimensional Bohr radius] {Asymptotic value of the multidimensional Bohr radius} \author[Vibhuti Arora]{Vibhuti Arora} \address{Vibhuti Arora, Department of Mathematics, National Institute of Technology Calicut, Kerala 673 601, India.} \email{[email protected], [email protected]} \author[Shankey Kumar]{Shankey Kumar} \address{Shankey Kumar, Department of Mathematics, Indian Institute of Technology Madras, Chennai, 600036, India.} \email{[email protected]} \author{Saminathan Ponnusamy} \address{ Saminathan Ponnusamy, Department of Mathematics, Indian Institute of Technology Madras, Chennai, 600036, India.} \address{Lomonosov Moscow State University, Moscow Center of Fundamental and Applied Mathematics, Moscow, Russia.} \email{[email protected]} \keywords{Bohr phenomena, Holomorphic mappings, Subordination, Banach spaces, Power series, Homogeneous polynomials, univalent, convex functions.} \subjclass[2020]{Primary: 32A05, 30A10; Secondary: 
30C45, 30C80, 32A17, 32A30} \begin{abstract} This article determines the exact asymptotic value of the Bohr radii and of the arithmetic Bohr radii for holomorphic functions defined on the unit ball of the $\ell_p^n$ space and taking values in a simply connected domain of $\mathbb{C}$. Moreover, we investigate the sharp Bohr radius for four distinct classes of holomorphic functions. These functions map a bounded balanced domain $G$ of a complex Banach space $X$ into one of the following domains: the right half-plane, the slit domain, the punctured unit disk, and the exterior of the closed unit disk. \end{abstract} \maketitle \section{Introduction and preliminaries} The classical theorem of Harald Bohr \cite{Bohr14}, proved a century ago, states that if the power series $\sum_{k=0}^{\infty}a_kz^k$ converges for $|z|<1$ and its sum is bounded by $1$ in modulus, then the majorant series $\sum_{k=0}^{\infty}|a_k|\, |z|^k$ is bounded by $1$ for $|z|\leq 1/3$, and the constant $1/3$ is optimal. Although this result looks simple, it has not only motivated many mathematicians but also generated intensive research activity devoted to analogues of this result in various settings--which we call Bohr phenomena in modern language. This topic is interesting in its own right from the point of view of analysis. In fact, Bohr's idea concerning power series was revived with great interest in the nineties, owing to extensions to holomorphic functions of several complex variables and to more abstract settings. For example, in 1997, Boas and Khavinson \cite{BK97} defined the $n$-dimensional Bohr radius for the family of holomorphic functions bounded by $1$ on the unit polydisk. This investigation led to Bohr-type questions in different settings. In a series of papers, Aizenberg \cite{A00,A2005,A07}, Aizenberg et al. \cite{AAD}, Defant and Frerick \cite{DF}, and Djakov and Ramanujan \cite{DjaRaman-2000} have established further results on Bohr phenomena for multidimensional power series. Several other aspects and generalizations of Bohr's inequality may be found in the literature. In particular, \cite[Section 6.4]{KM07} on Bohr-type theorems is useful for investigating new inequalities for holomorphic functions of several complex variables and, more importantly, for solutions of partial differential equations. Let $X$ be a complex Banach space. For a domain $G\subset X$ and a domain $D\subset \mathbb{C}$, let $H(G,D)$ be the set of holomorphic mappings from $G$ into $D$. Every $f\in H(G,D)$ can be expanded, in a neighbourhood of any given $x_0 \in G$, into the series \begin{equation}\label{eqf} f(x)=\sum_{m=0}^{\infty}\cfrac{1}{m!}\,D^m f(x_0)\big(\underbrace{(x-x_0),(x-x_0),\dots,(x-x_0)}_{m\text{-times}}\big), \end{equation} where $D^mf(x_0)$, $m\in \mathbb{N}$, denotes the $m$-th Fr\'{e}chet derivative of $f$ at $x_0$, which is a bounded symmetric $m$-linear mapping from $\prod_{i=1}^m X$ to $\mathbb{C}$. For convenience, we may write $D^m f(x_0)\big((x-x_0)^m\big)$ instead of $$ D^m f(x_0)\big(\underbrace{(x-x_0),(x-x_0),\dots,(x-x_0)}_{m\text{-times}}\big). $$ It is understood that $D^0 f(x_0)\big((x-x_0)^0\big)=f(x_0)$. For additional information on this topic, we refer the reader to the book of Graham and Kohr \cite{GK}. A domain $G\subset X$ is said to be {\em balanced} if $zG\subset G$ for all $z\in \overline{\mathbb{D}}$, where $\mathbb{D}:=\{z\in \mathbb{C}: |z|<1\}$.
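Before introducing the higher-dimensional Bohr radius, it may be instructive to see the extremal role of the constant $1/3$ in Bohr's theorem numerically. The short sketch below is only an illustration added here (the test family $f_a(z)=(a-z)/(1-az)$ and the truncation length are ad hoc choices): it evaluates the truncated majorant series of $f_a$ and confirms that the Bohr inequality holds at $r=1/3$ for every sampled $a\in(0,1)$, while it already fails at $r=0.34$ once $a$ is close enough to $1$.
\begin{verbatim}
# Illustrative numerical check of the classical Bohr radius 1/3 (not from the paper):
# for the disc automorphisms f_a(z) = (a - z)/(1 - a z), |f_a| < 1 on the unit disc,
# the majorant series sum_k |a_k| r^k stays <= 1 at r = 1/3 but exceeds 1 for any
# fixed r > 1/3 once a is close enough to 1. Truncation length K is an ad hoc choice.
import numpy as np

def majorant(a: float, r: float, K: int = 4000) -> float:
    # Taylor coefficients of f_a: a_0 = a, a_k = -(1 - a**2) * a**(k-1) for k >= 1
    k = np.arange(1, K + 1)
    coeffs = np.abs(-(1.0 - a**2) * a ** (k - 1))
    return a + float(np.sum(coeffs * r**k))

a_values = np.linspace(0.0, 0.999, 1000)
for r in (1.0 / 3.0, 0.34):
    worst = max(majorant(a, r) for a in a_values)
    verdict = 'holds' if worst <= 1 + 1e-9 else 'fails'
    print(f"r = {r:.4f}: max_a majorant = {worst:.5f}  -> Bohr inequality {verdict}")
\end{verbatim}
This is precisely the classical family used to show that the constant $1/3$ cannot be improved.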
Given a balanced domain $G$, we denote the higher dimensional Bohr radius by $K^G_X(D)$ which is the largest non-negative number $r$ with the property that the inequality \begin{equation*}\label{eqbi} \sum_{m=1}^{\infty}\bigg|\cfrac{1}{m!}\,D^m f(0)\big(x^m\big)\bigg|\le d(f(0),\partial D) \end{equation*} holds for all $x\in rG$ and for all holomorphic functions $f \in H(G,D)$ with the expansion \eqref{eqf} about the origin. Here $d(f(0),\partial D)$ denotes the Euclidean distance between $f(0)$ and the boundary $\partial D$ of the domain $D$. The definition of $K^G_X(D)$ is introduced by Hamada et al. \cite{HHK} (see also \cite{BD21}). In this setting, Bohr's remarkable result may be restated as $K^\mathbb{D}_\mathbb{C}(\mathbb{D})=1/3 $. A relevant application of this outcome in the field of Banach algebras was provided by Dixon in \cite{D95}. After this, Bohr's result gained immense popularity among mathematicians. To see the progress on this topic, readers are encouraged to explore a comprehensive survey articles \cite{Abu-M16}, \cite[Chapter 8]{GarMasRoss-2018}, and the monograph \cite{DGMS19}. Additionally, for the latest developments pertaining to the Bohr phenomenon in the realm of complex variables, the references \cite{BDK04,BB04,Kaypon18,KM07,LLP2020} serve as valuable resources. The result that $K_{\mathbb{C}^n}^G(\mathbb{D})\ge 1/3$ was proved by Aizenberg \cite{A00} for any balanced domain $G$ in $\mathbb{C}^n$, and in the same paper he gave the optimal result $K^G_{\mathbb{C}^n}(\mathbb{D})= 1/3$ by assuming $G$ is convex. In 2009, Hamada et al. \cite[Corollary 3.2]{HHK} discussed a more general result by proving that $K^G_X(\mathbb{D})=1/3$ for any bounded balanced domain $G$ which is a subset of complex Banach space $X$. Also, by taking a restriction on $f\in H(G,\mathbb{D})$ such that $f(0)=0$ and $G\subset \mathbb{C}^n$, Liu and Ponnusamy \cite{LP21} improved the quantity $K^G_{\mathbb{C}^n}(\mathbb{D})\geq 1/\sqrt{2}$ and obtained a sharp radius $K^G_{\mathbb{C}^n}(\mathbb{D})=1/\sqrt{2}$ if $G$ is convex. Moreover, Bhowmik and Das \cite{BD21} calculated the following quantities: \begin{equation}\label{eqconn} \inf\{K^G_X(\Omega):\Omega\subset \mathbb{C} \mbox{ is simply connected}\}=3-2\sqrt{2} \end{equation} and \begin{equation}\label{eqconvex} \inf\{K^G_X(\Omega):\Omega\subset \mathbb{C} \mbox{ is convex}\}=\frac{1}{3}. \end{equation} In higher dimensions, there have been numerous significant and intriguing results (see for instance \cite{A00,A07}). To ensure clarity and facilitate understanding, let us fix a set of notations before we proceed any further. Let $\mathbb{H}:=\{z\in \mathbb{C}:{\rm Re}\,z>0\}$ be the right half-plane, $ T:=\{z\in \mathbb{C}:|\arg z|<\pi\} $ be the slit region, $\mathbb{D}_0:=\{z\in \mathbb{C}:0<|z|<1\}$ be the punctured unit disk, and $\overline{\mathbb{D}}^c:=\{z\in \mathbb{C}:|z|>1\}$ be the exterior of the closed unit disk $\overline{\mathbb{D}}$. The spherical distance between two complex numbers of the extended complex plane is given by \begin{equation}\label{SD} \lambda (z,w)= \begin{cases}\vspace{0.1cm} \displaystyle\frac{|z-w|}{\sqrt{1+|z|^2}\sqrt{1+|w|^2}}& \text{if $z,w\in \mathbb{C}$,}\\\vspace{0.2cm} \displaystyle\frac{1}{\sqrt{1+|z|^2}} & \text{if $w=\infty$}. 
\end{cases} \end{equation} In \cite{AbuAli}, Abu-Muhanna and Ali observed that if $f(z)=\sum_{n=0}^{\infty}a_nz^n \in H(\mathbb{D},\overline{\mathbb{D}}^c)$ then \begin{equation}\label{EUD} \lambda\bigg(\sum_{n=0}^{\infty}|a_nz^n|, |a_0|\bigg)\le \lambda \left(f(0),\partial \overline{\mathbb{D}}^c\right) \end{equation} for all $|z|\le 1/3$, where the constant $1/3$ is sharp. In 2013, Abu-Muhanna and Ali \cite{Abu4} obtained $K_\mathbb{C}^\mathbb{D}(\mathbb{H})=1/3$ and $K_\mathbb{C}^\mathbb{D}(T)=(\sqrt{2}-1)/(\sqrt{2}+1)$. Further, Abu-Muhanna et al. \cite{AAN17} determined $K_\mathbb{C}^\mathbb{D}(\mathbb{D}_0)=1/3$. Therefore, it is natural to consider analogues of these results in a general setting. The following domains are examples of balanced domains, which we require in our further discussion: $$ B_{\ell_p^n}:=\{z=(z_1,\dots,z_n)\in\mathbb{C}^n:\,\|z\|_p<1\}, \,\ 1\leq p \leq \infty, $$ where the $p$-norm is given by $\|z\|_p=\left(\sum_{k=1}^{n}|z_k|^p\right)^{1/p}$ (with the obvious modification for $p=\infty$). Let $\mathbb{N}_0=\mathbb{N}\cup\{0\}$, $|\alpha|:=\alpha_1 +\cdots +\alpha_n$ and $z^\alpha:={z_1}^{\alpha_1}\cdots {z_n}^{\alpha_n}$ for $\alpha=(\alpha_1, \dots,\alpha_n)\in \mathbb{N}_0^n$ (where $0^0$ is interpreted as $1$). Let $\Omega$ be a simply connected domain in $\mathbb{C}$. Then every $f\in H(B_{{\ell}_p^n},\Omega)$, $1\leq p\leq \infty$, can be expressed as $$ f(z)=c_0+\sum_{|\alpha|\in \mathbb{N}} c_\alpha(f) z^\alpha, \quad \mbox{for }z\in B_{{\ell}_p^n}. $$ It is clear from our previous discussion that if we replace $X$ by $\mathbb{C}^n$ and $G$ by $B_{\ell_p^n}$, then $$ \sum_{|\alpha|= m} c_\alpha(f) z^\alpha = \cfrac{1}{m!}\,D^m f(0)\big(z^m\big), \quad \mbox{for } z\in B_{{\ell}_p^n}. $$ Unless otherwise stated, we assume throughout the article that $p\in [1,\infty]$. We are now ready to give the following definition of the Bohr radius for functions in $H(B_{{\ell}_p^n},\Omega)$. \begin{definition} For $1\leq p\leq \infty$, the Bohr radius $K_n^p(\Omega)$ is the supremum of all $r\in[0,1)$ such that \begin{equation}\label{eqBS} \sup_{z\in rB_{\ell_p^n}} \sum_{m=1}^{\infty}\sum_{|\alpha|=m} |c_\alpha(f) z^\alpha| \leq d(f(0),\partial \Omega) \end{equation} holds for all holomorphic functions $f \in H(B_{{\ell}_p^n},\Omega)$. \end{definition} \begin{remark}\label{HoPoRa} Let $f\in H(B_{{\ell}_p^n},\Omega)$. Consider $r>0$ such that $$ \sum_{|\alpha|=m} |c_\alpha(f) (rz)^\alpha| \leq d(f(0),\partial \Omega) $$ for all $m\in \mathbb{N}$ and all $z\in B_{\ell_p^n}$. Then it follows easily that \begin{align*} \sum_{m=1}^{\infty} \sum_{|\alpha|=m} \left|c_\alpha(f) \left(\frac{r}{2}z\right)^\alpha\right| &= \sum_{m=1}^{\infty} \frac{1}{2^m} \sum_{|\alpha|=m} \Big|c_\alpha(f) \left(rz\right)^\alpha\Big|\\ &\leq d(f(0),\partial \Omega)\sum_{m=1}^{\infty} \frac{1}{2^m}=d(f(0),\partial \Omega) \end{align*} showing that $r/2 \leq K_n^p(\Omega)$. \end{remark} The above multidimensional generalizations of Bohr's theorem have garnered significant interest in recent years. The initial formulation of this generalization was presented by Boas and Khavinson \cite{B2000,BK97}. This generalization, in a more general setting, was studied in \cite{AAD} in the spirit of functional analysis. Later, Defant et al. \cite{DFOOS11} obtained the optimal asymptotic estimate for $K_n^\infty(\mathbb{D})$ by using the fact that the Bohnenblust--Hille inequality is indeed hypercontractive. In the same year, the exact asymptotic value of $K_n^p(\mathbb{D})$ was given by Defant and Frerick \cite{DF11}.
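To see how the inequality \eqref{eqBS} can be probed in practice, the following sketch (an illustration added here, not part of the article) tests it on the bidisc $B_{\ell_\infty^2}$ for the single family $f_a(z)=(a-z_1z_2)/(1-az_1z_2)\in H(B_{\ell_\infty^2},\mathbb{D})$, for which $f_a(0)=a$, $d(f_a(0),\partial\mathbb{D})=1-a$, and the left-hand side of \eqref{eqBS} equals $(1-a^2)r^2/(1-ar^2)$. Since only one family is tested, the experiment yields the upper bound $K_2^\infty(\mathbb{D})\le 1/\sqrt{3}$ as $a\to1$, not the exact value.
\begin{verbatim}
# Illustrative sketch (not from the article): probing the Bohr inequality (eqBS)
# with a single test family on the bidisc (p = infinity, n = 2, Omega = D).
# For f_a(z) = (a - z1*z2)/(1 - a*z1*z2) the majorant sum over r*B equals
# sum_{k>=1} (1-a^2) a^(k-1) (r^2)^k, and the largest admissible r is 1/sqrt(1+2a).
import numpy as np

def bohr_sum(a: float, r: float, K: int = 2000) -> float:
    """Truncated majorant sum of f_a over r*B_{l_inf^2}."""
    k = np.arange(1, K + 1)
    return float(np.sum((1.0 - a**2) * a ** (k - 1) * (r**2) ** k))

def largest_valid_r(a: float) -> float:
    """Bisection for the largest r with bohr_sum(a, r) <= 1 - a = d(f_a(0), bd D)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bohr_sum(a, mid) <= 1.0 - a else (lo, mid)
    return lo

for a in (0.5, 0.9, 0.99, 0.999):
    print(f"a = {a}: inequality holds up to r ~ {largest_valid_r(a):.4f} "
          f"(closed form 1/sqrt(1+2a) = {1/np.sqrt(1+2*a):.4f})")
\end{verbatim}
Letting $a\to 1$ reproduces the bound $r\le 1/\sqrt{1+2a}\to 1/\sqrt{3}$ for this particular family.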
In 2014, Bayart et al. \cite{BPS14} proved that $\lim_{n\to \infty}K_n^\infty(\mathbb{D})\sqrt{n}/(\sqrt{\log n})=1$. In the recent articles \cite{BCGMMS21,KM23} the lower bound of $K_n^p(\mathbb{D})$ was improved over the previously known lower bounds. Before we proceed further, define \begin{equation}\label{DBSCD} \tilde{K}_n^p:= \inf\{K_n^p(\Omega):\Omega\subset \mathbb{C} \mbox{ is simply connected}\} . \end{equation} Recently, in \cite{BD21}, authors estimate that $\lim_{n\to \infty}\tilde{K}_n^\infty\sqrt{n}/(\sqrt{\log n})=1$. The article is organized as follows. Our primary objective of this paper is to obtain the exact asymptotic bounds of the Bohr radius $\tilde{K}_n^p$, and to establish a lower bound of $K_n^p(\Omega)$. Our next objective is to study the arithmetic Bohr radii (definition is given in the next section) for the functions in $H(B_{{\ell}_p^n},\Omega)$, and to establish sharp bounds of $K^G_X(\mathbb{H})$, $K^G_X(\mathbb{D}_0)$, and $K^G_X(T)$, where $G$ represents a bounded balanced domain within a Banach space $X$. Additionally, we also determine a specific radius for which the inequality \eqref{EUD} holds for functions belonging to the class $H(G,\overline{\mathbb{D}}^c)$. \section{Main Results} Before stating the main results, we need to introduce some notation and definitions, and then present necessary preliminary tools. \subsection{Preliminaries on $m$-homogeneous polynomial} Let $\mathcal{P}(\prescript{m}{}{\ell_p^n})$ denote the space of all $m$-homogeneous polynomials defined on the $\ell_p^n$ space. Each polynomial $P\in \mathcal{P}(\prescript{m}{}{\ell_p^n})$ can be written in the form $$ P(z_1,\dots,z_n)=\sum_{\alpha\in\Lambda(m,n)} a_\alpha z^\alpha, $$ where $\Lambda(m,n):=\{\alpha\in\mathbb{N}_0^n:\,|\alpha|=m\}$ and $a_\alpha \in X$. Equivalently, we can express it in another form as $$ P(z_1,\dots,z_n)=\sum_{{\bf j}\in\mathcal{J}(m,n)} c_{\bf j} z_{\bf j}, $$ where $\mathcal{J}(m,n):=\{{\bf j}=(j_1,\dots,j_m):\,1\leq j_1\leq\dots \leq j_m\leq n\}$, $z_{\bf j}:=z_{j_1}\cdots z_{j_m}$, and $c_{\bf j}\in X$. Then the elements $(z^\alpha)_{\alpha\in\Lambda(m,n)}$ or $(z_{\bf j})_{{\bf j}\in\mathcal{J}(m,n)}$ are referred to as the monomials. Note that the coefficients $c_{\bf j}$ and $a_\alpha$ are related by the relation $c_{\bf j}=a_\alpha$ with ${\bf j}=(1,\overset{\alpha_1}{\dots},1,\dots,n,\overset{\alpha_n}{\dots},n)$. More precisely there is a one-to-one correspondence between two index sets $\Lambda(m,n)$ and $\mathcal{J}(m,n)$. The following result due to Bayart et al. \cite[Theorem 3.2]{BDS19} plays a key role in the proof of Theorem \ref{maintheorem}. \begin{Thm}\label{imtheorem} {\rm(\cite[Theorem 3.2]{BDS19})} Let $m\geq1$ and $P\in \mathcal{P}(\prescript{m}{}{\ell_p^n})$ be of the form $P(z)=\sum_{{\bf j}\in\mathcal{J}(m,n)} c_{\bf j} z_{\bf j}$. Then, for every $u\in \ell_p^n$, we have $$ \sum_{{\bf j}\in \mathcal{J}(m,n)} |c_{\bf j}| \, |u_{\bf j}| \leq C(m,p) |\mathcal{J}(m-1,n)|^{1-\frac{1}{\min\{p,2\}}} \|u\|_p^m \,\|P\|_{\mathcal{P}(\prescript{m}{}{\ell_p^n})}, $$ where $$ C(m,p)\leq \begin{cases}\vspace{0.1cm} em e^{(m-1)/p} & \text{if $1\leq p \leq 2$,}\\\vspace{0.2cm} em 2^{(m-1)/2} & \text{if $2\leq p \leq \infty$}. 
\end{cases} $$ \end{Thm} \subsection{The notion of subordination connected with $f\in H(B_{{\ell}_p^n},\Omega)$} In order to proceed to state a couple of main results, we require the following well-known definitions: \begin{enumerate} \item[(i)] {\bf Subordination:} Suppose $f$ and $g$ are analytic functions in $\mathbb{D}$. We say that $g$ is {\em subordinate} to $f$, written by $g\prec f$, or $g(z) \prec f(z)$, if there is an analytic function $w:\,\mathbb{D}\rightarrow \mathbb{D}$ with $w(0)=0$ and such that $g=f\circ w$. \item[(ii)] {\bf Starlike functions:} A domain $D$ in $\IC$ is called starlike with respect to $0$ (or simply starlike) if $tw\in D$ whenever $w\in D$ and $t\in [0,1]$. A univalent function $f$ is said to be starlike in $\ID$ if it maps $\mathbb{D}$ onto a domain that starlike and $f(0)=0$. Analytically, this holds if and only if $f'(0)\neq 0$ and ${\rm Re}\, (zf'(z)/f(z))>0$ for $z\in \ID$. \item[(iii)] {\bf Convex functions:} A domain $D$ in $\IC$ is called convex if $(1-t)w_1+tw_2\in D$ whenever $w_1, w_2\in D$ and $t\in [0,1]$. An analytic function is said to be convex if it maps $\mathbb{D}$ univalently onto a convex domain. It is well-known that $f$ is convex if and only if $g=zf'$ is starlike. \end{enumerate} The following well-known result of Rogosinski (see \cite[p. 195, Theorem 6.4]{Duren83}) on subordination serves as a useful tool to prove Theorem \ref{maintheorem}. \begin{Lem}\label{lemmasub} {\rm \cite[Theorem 6.4]{Duren83}} Let $f(z)=\sum_{n=1}^{\infty}a_nz^n$ and $g(z)=\sum_{n=1}^{\infty}b_nz^n$ be two analytic functions in $\mathbb{D}$ such that $f\prec g$. Then \begin{itemize} \item[(i)] $|a_n|\le n|b_1|$ if $g$ is starlike univalent in $\mathbb{D}$, \item[(ii)] $|a_n|\le |b_1|$ if $g$ is convex univalent in $\mathbb{D}$. \end{itemize} \end{Lem} The inequality in Lemma B(i) continues to hold even if $g$ is just univalent in $\mathbb{D}$. This fact follows from the work of de Branges \cite{DeB1} in 1985. Also, we use the following well-known results: if $g$ is the univalent mapping from $\mathbb{D}$ onto $\Omega$, then (cf. \cite[Corollary 1.4]{Pomm92}) \begin{equation}\label{USEQ} |g'(0)|\geq d(g(0),\partial\Omega)\geq \frac{1}{4} |g'(0)|, \end{equation} and similarly, if $g$ is convex also, then \begin{equation}\label{CSEQ} |g'(0)|\geq d(g(0),\partial\Omega)\geq \frac{1}{2} |g'(0)|. \end{equation} The proofs of Theorems \ref{maintheorem} and \ref{KA-1} which used the inequalities \eqref{subord} and \eqref{subord-2} require some discussion. Now, we begin the discussion by considering $f\in H(B_{{\ell}_p^n},\Omega)$. For a fixed $z\in B_{{\ell}_p^n}$, consider the function $F$ on $\mathbb{D}$ defined by \begin{equation*} F(y):=f(zy),\quad y\in \mathbb{D}. \end{equation*} Then $F:\mathbb{D}\to \Omega$ is holomorphic and \begin{equation*} F(y)=f(0)+\sum_{m=1}^{\infty}\Bigg(\sum_{|\alpha|=m} c_\alpha(f) z^\alpha \Bigg) y^m. \end{equation*} Therefore, $F \prec g$ in $\mathbb{D}$, where $g$ is the univalent mapping from $\mathbb{D}$ onto $\Omega$ satisfying $g(0)=F(0)=f(0)$. Then, using Lemma B(i) and the inequality \eqref{USEQ}, we have \begin{equation}\label{subord} \bigg|\sum_{|\alpha|=m} c_\alpha(f) z^\alpha\bigg|\leq 4md(f(0),\partial \Omega), \quad \mbox{for each $z\in B_{{\ell}_p^n}$}. \end{equation} If $\Omega$ is a convex domain, then Lemma B(ii) and the inequality \eqref{CSEQ} provide \begin{equation}\label{subord-2} \bigg|\sum_{|\alpha|=m} c_\alpha(f) z^\alpha\bigg|\leq 2 d(f(0),\partial \Omega), \quad \mbox{for each $z\in B_{{\ell}_p^n}$}. 
\end{equation} \subsection{Asymptotic behaviour of the Bohr radius} Regarding the multi-dimensional Bohr radius, significant contributions have been made by a number of authors (see \cite{A00, B2000, BK97, DF11, DT89}). Their results and ideas reach up to the following optimal result for $ K_n^p(\mathbb{D})$. \begin{Thm}\label{MDBR} There exists a constant $D\geq 1$ such that for each $1\leq p \leq \infty$ and all $n$ $$ \frac{1}{D} \Bigg(\frac{\log n}{n}\Bigg)^{1-\frac{1}{\min\{p,2\}}}\leq K_n^p(\mathbb{D}) \leq D \Bigg(\frac{\log n}{n}\Bigg)^{1-\frac{1}{\min\{p,2\}}}. $$ \end{Thm} \begin{remark} Note that, in Theorem C, the lower bound of $K_n^p(\mathbb{D}) $ was established by Defant and Frerick \cite{DF11}, while the upper bound for the case $p=\infty$ was obtained by Boas and Khavinson \cite{BK97}. For the remaining values of $p$, the upper bound was determined by Boas \cite{B2000}. \end{remark} Now we are in a position to state and prove our first main result which gives bounds of the Bohr radii $\tilde{K}_n^p$ defined by \eqref{DBSCD}. In fact, in the case of simply connected domain, we show that the left hand side inequality holds with $B$ in place of $1/D$, where $B_0=1/(8e^{5})\leq B\leq 1 $. \begin{theorem} \label{maintheorem} There are positive constants $B$ and $D$ such that for each $1\leq p \leq \infty$ and all $n$ $$ B \Bigg(\frac{\log n}{n}\Bigg)^{1-\frac{1}{\min\{p,2\}}}\leq \tilde{K}_n^p \leq D \Bigg(\frac{\log n}{n}\Bigg)^{1-\frac{1}{\min\{p,2\}}}. $$ \end{theorem} \begin{proof} We begin the discussion with the following simple observation from \eqref{DBSCD}: $\tilde{K}_n^p\leq K_n^p(\mathbb{D})$. Then, from Theorem C, we have $$ K_n^p(\mathbb{D})\leq D \Bigg(\cfrac{\log n}{n} \Bigg)^{1-\frac{1}{\min\{p,2\}}} $$ for some $D\geq 1$. Now, we proceed to obtain a lower estimate. To do this, first we fix $P\in \mathcal{P}(\prescript{m}{}{\ell_p^n})$, where $P(z)=\sum_{{\bf j}\in\mathcal{J}(m,n)} c_{\bf j} z_{\bf j}$, and $u \in \ell_p^n$. Then, by Theorem A and the inequality \eqref{subord}, we obtain $$ \sum_{{\bf j}\in \mathcal{J}(m,n)} |c_{\bf j} (P)|\, |u_{\bf j}| \leq 4 m \, C(m,p) |\mathcal{J}(m-1,n)|^{1-\frac{1}{\min\{p,2\}}} \|u\|_p^m d(f(0),\partial \Omega). $$ If we use the fact that the number of elements in the index set $\mathcal{J}(m-1,n)$ is $$ \frac{(n+m-2)!}{(n-1)!(m-1)!}, $$ then we get $$ \sum_{{\bf j}\in \mathcal{J}(m,n)} |c_{\bf j} (P)| \, |u_{\bf j}| \leq 4m \,C(m,p) \Bigg(\frac{(n+m-2)!}{(n-1)!(m-1)!}\Bigg)^{1-\frac{1}{\min\{p,2\}}} \|u\|_p^m \ d(f(0),\partial \Omega). $$ It is not difficult to observe that $$ \frac{(n+m-2)!}{(n-1)!(m-1)!} \leq \frac{(n+m)^{m-1}}{(m-1)!} \leq \frac{e^m}{m^{m-1}} (n+m)^{m-1}=e^m \bigg(1+\frac{n}{m} \bigg)^{m-1}, $$ which leads to $$ \sum_{{\bf j}\in \mathcal{J}(m,n)} |c_{\bf j} (P)| \, |u_{\bf j}|\leq 4m\, C(m,p)\ \bigg(1+\frac{n}{m} \bigg)^{(m-1)\Big(1-\frac{1}{\min\{p,2\}}\Big)} e^{m\left(1-\frac{1}{\min\{p,2\}}\right)} \ \|u\|_p^m \ d(f(0),\partial \Omega). $$ A simple observation shows that \begin{equation*} \bigg(1+\frac{n}{m} \bigg)^{\frac{m-1}{m}} \leq 2^{\frac{m-1}{m}} \max \bigg\{1, \bigg(\frac{n}{m} \bigg)^{\frac{m-1}{m}} \bigg\} \leq 2 \max \bigg\{1, \frac{m^{1/m}n}{m n^{1/m}} \bigg\}. \end{equation*} Note that $x\mapsto xn^{1/x}$ is decreasing from $x=0$ to $x=\log n$ and increasing thereafter. 
Thus, $$ \bigg(1+\frac{n}{m} \bigg)^{\frac{m-1}{m}} \leq \frac{2n}{\log n} $$ so that $$ \sum_{{\bf j}\in \mathcal{J}(m,n)} |c_{\bf j} (P)| \, |u_{\bf j}|\leq 4m \,C(m,p)\ \bigg(\frac{2n}{\log n} \bigg)^{m\Big(1-\frac{1}{\min\{p,2\}}\Big)} e^{m-\frac{m}{\min\{p,2\}}} \ \|u\|_p^m \ d(f(0),\partial \Omega). $$ Finally, using Remark \ref{HoPoRa} we obtain $$ K_n^p(\Omega)\geq B \Bigg(\cfrac{\log n}{n} \Bigg)^{1-\frac{1}{\min\{p,2\}}} $$ for some $B>0$. Indeed, it can be noted that $B_0=1/(8e^{5})\leq B\leq 1 $. This concludes the proof. \end{proof} \subsection{Asymptotic behaviour of the arithmetic Bohr radius} The arithmetic Bohr radius was for the first time introduced and studied by Defant et al. in \cite{DMP08}. The concept of the arithmetic Bohr radius plays an important role in studying the upper bound of the multi-dimensional Bohr radius and is also used as a main tool to derive upper inclusions for domains of convergence (see \cite{DMP09}). \begin{definition} The arithmetic Bohr radius $A_n^p(\Omega)$, $1\leq p \leq \infty$, is defined as \begin{align*} A_n^p(\Omega):= \sup\bigg\{&\frac{1}{n}\sum_{i=1}^{n}r_i, r\in \mathbb{R}_{\geq 0}^n:\sum_{m=1}^{\infty}\sum_{|\alpha|=m} |c_\alpha(f)| r^\alpha \leq d(f(0),\partial \Omega) \\ &\mbox{ for all } f\in H(B_{{\ell}_p^n},\Omega)\bigg\}, \end{align*} where $\mathbb{R}_{\geq 0}^n=\{r=(r_1,\dots,r_n)\in \mathbb{R}^n:\, r_i\geq 0, 1\leq i \leq n \}$. Also, we set $$ \tilde{A}_n^p=\inf\{A_n^p(\Omega):\Omega\subset \mathbb{C} \mbox{ is simply connected}\}. $$ \end{definition} \begin{theorem}\label{LSABR} Let $2\leq p\leq \infty$. Then we have $$ \limsup_{n\to \infty} A_n^p(\mathbb{D}) \frac{\displaystyle n^{ \big(\frac{1}{2}+\frac{1}{\max\{p,2\}}\big)}}{ \sqrt{\log n}}\leq1. $$ \end{theorem} \begin{proof} To achieve our goal, we use the Kahane-Salem-Zygmund inequality (see \cite{B2000,BK97,DMP08}) which states that for $n,m\geq 2$, there exist coefficients $(c_\alpha)_{|\alpha|=m}$ with $|c_\alpha|=m!/\alpha!$ for all $\alpha$ such that $$ \sup_{z\in B_{\ell_p^n}} \bigg|\sum_{|\alpha|=m} c_\alpha z^\alpha\bigg| \leq \sqrt{32m\log(6m)} n^{ \frac{1}{2}+\big(\frac{1}{2}-\frac{1}{\max\{p,2\}}\big)m} \sqrt{m!}=:L. $$ Let $r=(r_1,\dots,r_n) \in \mathbb{R}_{\geq 0}^n$ such that $$ \sum_{m=1}^{\infty} \sum_{|\alpha|=m} |a_\alpha(f)|r^{\alpha}\leq d(f(0),\partial\mathbb{D}) \quad \mbox{for all $f\in H(B_{{\ell}_p^n},\mathbb{D})$}. $$ Then $$ \frac{1}{L}\bigg(\sum_{i=1}^{n}r_i\bigg)^m=\frac{1}{L}\sum_{|\alpha|=m} \frac{m!}{\alpha!} r^\alpha=\frac{1}{L}\sum_{|\alpha|=m} |c_\alpha| r^\alpha \leq 1. $$ This gives that $$ \bigg(\sum_{i=1}^{n}r_i\bigg)^m\leq \sqrt{32m \log(6m)} n^{ \frac{1}{2}+\big(\frac{1}{2}-\frac{1}{\max\{p,2\}}\big)m} \sqrt{m!}. $$ Consequently, $$ \frac{1}{n}\sum_{i=1}^{n}r_i \leq \big(32mn \log(6m)\big)^{\frac{1}{2m}} \frac{(m!)^{\frac{1}{2m}}}{n^{ \big(\frac{1}{2}+\frac{1}{\max\{p,2\}}\big)}}. $$ Using Stirling's formula $\left(m!\leq \sqrt{2\pi m}e^{\frac{1}{12m}}m^me^{-m}\right)$, one obtains \begin{equation*} \frac{1}{n}\sum_{i=1}^{n}r_i \leq \big(32mn \log(6m)\big)^{\frac{1}{2m}} \left( \sqrt{2\pi m}e^{\frac{1}{12m}}e^{-m}\right)^{\frac{1}{2m}} \frac{\sqrt{m}}{n^{ \big(\frac{1}{2}+\frac{1}{\max\{p,2\}}\big)}}. 
\end{equation*} Letting $m=\lfloor \log(n) \rfloor$, we have \begin{equation*} \left(\frac{\displaystyle n^{ \big(\frac{1}{2}+\frac{1}{\max\{p,2\}}\big)}}{ \sqrt{\log n}}\right)\frac{1}{n}\sum_{i=1}^{n}r_i \leq \left(32 \sqrt{2\pi }\lfloor \log(n) \rfloor^{\frac{3}{2}}e^{\frac{1}{12\lfloor \log(n) \rfloor}}\log(6\lfloor \log(n) \rfloor)\right)^{\frac{1}{2\lfloor \log(n) \rfloor}} \frac{n^{\frac{1}{2\lfloor \log(n) \rfloor}}}{\sqrt{e}}. \end{equation*} It is easy to see that $$ \lim_{n\to \infty} \left(32 \sqrt{2\pi }\lfloor \log(n) \rfloor^{\frac{3}{2}}e^{\frac{1}{12\lfloor \log(n) \rfloor}}\log(6\lfloor \log(n) \rfloor)\right)^{\frac{1}{2\lfloor \log(n) \rfloor}}=1. $$ Further, the observation $$ \frac{1}{2} \leq {\frac{\log n}{2\lfloor \log(n) \rfloor}} \leq {\frac{\lfloor \log n \rfloor+1}{2\lfloor \log(n) \rfloor}} $$ gives that $$ \lim_{n\to \infty} n^{\frac{1}{2\lfloor \log(n) \rfloor}}=\sqrt{e}. $$ Both of the above observations conclude the result. \end{proof} The proof of the next lemma follows exactly as in \cite[Lemma 4.3]{DMP08}. So, we skip the details here. \begin{lemma}\label{AKlemma} For $n\in \mathbb{N}$ and $1\leq p\leq \infty$, we have $$ A_n^p(\Omega)\geq \frac{K_n^p(\Omega)}{n^{1/p}}. $$ \end{lemma} For our further discussion we need the following result which is obtained in \cite[Theorem 4.1]{DMP08} and \cite[Remark 2]{DF11}. \begin{Thm}\label{ABR} There is an absolute constant $D\geq 1$ such that for each $1\leq p\leq \infty$ and each $n$ $$ \frac{1}{D} \frac{ \big( \log n\big)^{1-(1/\min\{p,2\})}}{\displaystyle n^{\frac{1}{2}+(1/\max\{p,2\})}}\leq A_n^p(\mathbb{D}) \leq D \frac{ \big( \log n\big)^{1-(1/\min\{p,2\})}}{\displaystyle n^{\frac{1}{2}+(1/\max\{p,2\})}}. $$ \end{Thm} The lower bound in the following result can simply be obtained using Theorem \ref{maintheorem} and Lemma \ref{AKlemma}. The upper bound follows from Theorem D and the observation that $$ \tilde{A}_n^p\leq A_n^p(\mathbb{D}). $$ \begin{theorem}\label{ABRS} There are positive constants $B$ and $D$ such that for each $1\leq p \leq \infty$ and all $n$ $$ B \frac{ \big( \log n\big)^{1-(1/\min\{p,2\})}}{\displaystyle n^{\frac{1}{2}+(1/\max\{p,2\})}}\leq \tilde{A}_n^p \leq D \frac{ \big( \log n\big)^{1-(1/\min\{p,2\})}}{\displaystyle n^{\frac{1}{2}+(1/\max\{p,2\})}}. $$ \end{theorem} \begin{remark} For instance $B$ and $D$ in Theorem \ref{ABRS} can be taken from Theorems \ref{maintheorem} and D, respectively. \end{remark} The next result gives the asymptotic behaviour of $\tilde{A}_n^\infty$. \begin{theorem} $\lim\limits_{n\to \infty} \tilde{A}_n^\infty \sqrt{n/\log(n)}=1$. \end{theorem} \begin{proof} It is already obtained in \cite[Theorem 2]{BD21} that for every $\epsilon \in (0,1/2)$ $$ K_n^\infty(\Omega)\geq (1-2 \epsilon) \sqrt{\frac{\log n}{n}}, $$ for large enough $n$. By Lemma \ref{AKlemma}, for every $\epsilon \in (0,1/2)$, we have $$ A_n^\infty(\Omega)\geq (1-2 \epsilon) \sqrt{\frac{\log n}{n}} $$ for large enough $n$. The upper estimation can be obtained using Theorem \ref{LSABR} and the fact that $ \tilde{A}_n^\infty\leq A_n^\infty(\mathbb{D})$. This completes the proof. \end{proof} \subsection{Bounds on $K_{n}^p(\Omega)$ and $A_{n}^p(\Omega)$ through Sidon constant} We denote $\partial B_{\ell_p^n}=\{z\in \mathbb{C}^n:\|z\|_p =1\}$. 
Also, we define the norms $\|.\|_\infty$ and $\|.\|_1$ on the elements of the space $\mathcal{P}(\prescript{m}{}{\ell_p^n})$ as
$$
\|P\|_\infty:= \sup_{z\in \partial B_{\ell_p^n}}|P(z)| \ \mbox{ and } \ \|P\|_1:=\sup_{z\in \partial B_{\ell_p^n}} \sum_{|\alpha|=m} |c_\alpha|\,|z^\alpha|,
$$
respectively. It is not hard to observe that $\|P\|_\infty \leq \|P\|_1$.
\begin{definition}\label{sidon}
For each pair $n,m\in \mathbb{N}$ and each $1\leq p\leq \infty$, we define the constant $S_p(m,n)$ as
\begin{align*}
S_p(m,n)=\inf\{M>0:\|P\|_1\leq M\|P\|_\infty \text{ for all } P\in \mathcal{P}(\prescript{m}{}{\ell_p^n})\}.
\end{align*}
For the case $p=\infty$, the constant $S_p(m,n)$ is known as the Sidon constant and is denoted by $S(m,n)$.
\end{definition}
From the above definition, it is clear that $S_p(m,n)\geq 1$ for every $m,n$ and $p$. Moreover, if $P(z)=\sum_{j=1}^{n}c_j z_j$, then choose $t_{z_j}\in [0,2\pi]$ such that $|c_j z_j|=c_j z_j e^{it_{z_j}}$ for $j=1,\dots,n$. We obtain
\begin{equation*}
\sum_{j=1}^{n}|c_j z_j|= \sum_{j=1}^{n}c_j z_j e^{it_{z_j}} \leq \|P\|_\infty,
\end{equation*}
since $(z_1e^{it_{z_1}},\dots,z_ne^{it_{z_n}})\in \partial B_{\ell_p^n}$ whenever $z\in \partial B_{\ell_p^n}$, which shows that $S_p(1,n)= 1$ for every $n$ and $p$.
\begin{theorem}\label{KA-1}
Let $n\in\mathbb{N}\backslash \{1\}$ and $1\leq p\leq \infty$. If $\Omega$ is a simply connected domain and $H_{n}^p \in(0,1)$ is the solution of the following equation
\begin{align} \label{eqsd}
x+\sum_{m=2}^{\infty}m S_p(m,n) x^m= \frac{1}{4},
\end{align}
then
\begin{equation}\label{lbmbr}
K_{n}^p(\Omega) \geq H_{n}^p \quad \mbox{and} \quad A_{n}^p(\Omega)\geq \frac{H_{n}^p}{n^{1/p}}.
\end{equation}
If $\Omega$ is convex, then the number $H_{n}^p$ in \eqref{lbmbr} can be replaced by the number which is the solution of the following equation
\begin{align*}
x+\sum_{m=2}^{\infty} S_p(m,n) x^m= \frac{1}{2}.
\end{align*}
\end{theorem}
\begin{proof}
Assume that $\Omega$ is a simply connected domain. Now, using Definition \ref{sidon} and inequality \eqref{subord}, we have
$$
\sum_{|\alpha|=m} |c_\alpha(f) (rz)^\alpha| \leq 4mS_p(m,n)r^md(f(0),\partial \Omega)
$$
and thus,
$$
\sum_{m=1}^{\infty}\sum_{|\alpha|=m} |c_\alpha(f) (rz)^\alpha| \leq 4d(f(0),\partial \Omega)\sum_{m=1}^{\infty}mS_p(m,n)r^m.
$$
We have already observed that $S_p(1,n)=1$. Then equation \eqref{eqsd} shows that
$$
\sum_{m=1}^{\infty}\sum_{|\alpha|=m} |c_\alpha(f) (rz)^\alpha| \leq d(f(0),\partial \Omega) \quad \mbox{for every $r\leq H_{n}^p$}.
$$
It is easy to observe that $h(x):=x+\sum_{m=2}^{\infty}m S_p(m,n) x^m$ is an increasing function of $x$ for $x\geq 0$, with $h(0)=0<1/4$ and $h(1/2)\geq 1/2>1/4$. This shows that $ H_{n}^p \in (0,1)$. Using the above approach and the inequality \eqref{subord-2} we can obtain the desired result when $\Omega$ is convex. Finally, the lower bound of $A_{n}^p(\Omega)$ can be obtained easily using Lemma \ref{AKlemma} and the lower bound of $K_{n}^p(\Omega)$. This completes the proof.
\end{proof}
\subsection{Multidimensional Bohr's radius for Half-plane and Convex domains}
Let $\mathcal{P}$ denote the class of all analytic functions $p$ with $p(0)=1$ having positive real part in $\mathbb{D}$. The following fundamental result, known as the Carath\'{e}odory Lemma, is well known (cf. \cite[Corollary 2.3]{Pomm75} and \cite[p. 41]{Duren83}).
\begin{Lem} \label{lemmap}
For every $p\in \mathcal{P}$ of the form $p(z)=1+c_1z+c_2z^2+\cdots$, we have $|c_n|\le 2$ for $n\ge 1$.
\end{Lem}
We are now ready to state and prove our initial result concerning the multidimensional Bohr phenomenon for functions belonging to the class $H(G,\mathbb{H})$.
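Before doing so, we include a brief numerical illustration (a Python sketch given only for the reader's orientation; it is not part of the formal argument, and the truncation length is an arbitrary choice). By Lemma~E the coefficients along any fixed direction are bounded by $2\,{\rm Re}f(0)$, so the majorant series at radius $r$ is at most $2\,{\rm Re}f(0)\sum_{m\geq 1}r^m$, which stays below ${\rm Re}f(0)=d(f(0),\partial \mathbb{H})$ precisely for $r\leq 1/3$.
\begin{verbatim}
# Sketch: the geometric majorant 2*Re(f(0))*sum_{m>=1} r^m crosses the
# distance Re(f(0)) (normalised to 1 here) exactly at r = 1/3.
def majorant_bound(r, re_f0=1.0, terms=200):
    # truncated geometric series; "terms" is an arbitrary cutoff
    return sum(2.0 * re_f0 * r ** m for m in range(1, terms + 1))

for r in (0.30, 1.0 / 3.0, 0.34):
    print(r, round(majorant_bound(r), 4), majorant_bound(r) <= 1.0 + 1e-9)
\end{verbatim}
The extremal function $L(z)=(1+z)/(1-z)$ used below attains the bound $|c_m|=2$ for every $m$, which is why the constant $1/3$ cannot be improved.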
\begin{theorem}\label{ThmH} Let $G$ be a bounded balanced domain and $f\in H(G,\mathbb{H})$ be of the form \eqref{eqf} in a neighbourhood of the origin. Then $K^G_X(\mathbb{H})=1/3$. \end{theorem} \begin{proof} Using \eqref{eqconvex}, we can easily establish that $K^G_X(\mathbb{H})\geq 1/3$. However, we provide an alternative proof for the lower bound of $K^G_X(\mathbb{H})$ here. Note that ${\rm Re}f(0)>0$, for $f\in H(G,\mathbb{H})$. For any fixed $y\in G$, we introduce $F$ by \begin{equation*} F(z):=f(zy),\quad z\in \mathbb{D}. \end{equation*} Then $F:\mathbb{D}\to \mathbb{H}$ is holomorphic, $F(0)=f(0)$ and \begin{equation}\label{eqHF} F(z)=f(0)+\sum_{m=1}^{\infty}\cfrac{1}{m!}\,D^m f(0)\big(y^m\big)z^m. \end{equation} Define $g\in \mathcal{P}$ by \begin{equation*} g(z)=\cfrac{F(z)-i{\rm Im}(f(0))}{{\rm Re}(f(0))}=1+\cfrac{1}{{\rm Re}(f(0))}\sum_{m=1}^{\infty}\cfrac{1}{m!}\,D^m f(0)\big(y^m\big)z^m. \end{equation*} By Lemma~E, we obtain \begin{equation*} \bigg| \cfrac{\,D^m f(0)\big(y^m\big)}{m!}\bigg| \le 2 {\rm Re}(f(0)) \end{equation*} for $m\ge 1$. Thus, for $x\in (1/3)G$, we have by the last inequality that \begin{equation*} \sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f(0)\big(x^m\big)}{m!}\bigg|=\sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f(0)\big(y^m\big)}{m!}\bigg| \bigg(\frac{1}{3}\bigg)^m\le {\rm Re}(f(0)) =d(f(0),\partial \mathbb{H}). \end{equation*} In order to prove that the constant $1/3$ is optimal, we use the technique given in \cite{HHK}. For each $1/3<r_0<1$, there exists a $c\in(0,1)$ and $v\in \partial G$ such that $cr_0>1/3$ and $c\sup_{x\in \partial G}||x||<||v||$. Next, we consider the function $f_0$ on $G$ by $$ f_0(x)=L\bigg(\cfrac{c\psi_v(x)}{||v||}\bigg), $$ where $L(z)=(1+z)/(1-z),\mbox{ for }z\in \mathbb{D}$, $\psi_v$ is a bounded linear functional on $X$ with $\psi_v(v)=||v||$ and $||\psi_v||=1$. Clearly, $c\psi_v(x)/||v||\in\mathbb{D}$ and $f_0\in H(G,\mathbb{H})$. Choosing $x=r_0v$ gives \begin{equation*} f_0(r_0v)=\cfrac{1+cr_0}{1-cr_0}=1+2\sum_{n=1}^{\infty}{(cr_0)}^n=f_0(0)+\sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f_0(0)\big(x^m\big)}{m!}\bigg| >2=1+d(f_0(0),\partial\mathbb{H}). \end{equation*} The proof of the theorem is complete. \end{proof} It is now appropriate to remark that when $X=\mathbb{C}$ and $G=\mathbb{D}$, Theorem \ref{ThmH} coincides with \cite[Theorem 2.1]{Abu4}. Furthermore, a subsequent result can be derived from Theorem \ref{ThmH}, which pertains to holomorphic functions defined on a bounded balanced domain and taking values in a convex domain. \begin{corollary} Let $G$ be a bounded balanced domain and $f\in H(G,C)$, where $C$ is a convex domain, be of the form \eqref{eqf} in a neighbourhood of the origin. Then $K^G_X(C)=1/3$. \end{corollary} \begin{proof} This corollary has been proved by Bhowmik and Das \cite[Theorem 1]{BD21}. It might be appropriate to indicate an alternate proof as a consequence of the previous theorem. Let $u\in \partial C$ be nearest to $f(0)$. Further, let $T_u$ be the tangent line at $u$, and $H_u$ the half-plane containing $C$. Then $f\in H(G,H_u)$. Now choose $t$ real so that $(H_u-u)e^{it}=\mathbb{H}$, the right half-plane and let $K=(C-u)e^{it}\subset\mathbb{H}$. Hence $$ g(z)=(f(z)-u)e^{it}\in H(G,K)\subset H(G, \mathbb{H}) $$ and $$ g(0)=(f(0)-u)e^{it}=|f(0)-u|=d(f(0),\partial C). $$ Applying Theorem \ref{ThmH} to $g$ provides the desired conclusion, including sharpness part. 
\end{proof} \subsection{Multidimensional Bohr's radius for the punctured unit disk $\mathbb{D}_0$} Our next objective is to give multidimensional analogue of \cite[Theorem 2.1]{AAN17} for the class $H(G,\mathbb{D}_0)$. \begin{theorem} Let $G$ be a bounded balanced domain and $f\in H(G,\mathbb{D}_0)$ be of the form \eqref{eqf} in a neighbourhood of the origin. Then $K^G_X(\mathbb{D}_0)=1/3$. \end{theorem} \begin{proof} We have $H(G,\mathbb{D}_0)\subset H(G,\mathbb{D})$ and thus, the inequality $K^G_X(\mathbb{D}_0)\geq1/3$ directly follows from \cite[Corollary 3.2]{HHK}. To prove $K^G_X(\mathbb{D}_0)\leq1/3$, as before, consider for any $r_0\in (1/3,1)$ a number $0<c<1$ and $v\in \partial G$ such that $cr_0>1/3$ and $c\sup_{x\in \partial G}||x||<||v||$. Next, we introduce \begin{equation*} H_t(z)=\exp \bigg(-t \cfrac{1+z}{1-z}\bigg)=\cfrac{1}{e^t}+\cfrac{1}{e^t}\sum_{n=1}^{\infty}\bigg[\sum_{m=1}^{n}\cfrac{(-2t)^m}{m!}{n-1 \choose m-1}\bigg]z^n,\quad t>0, \end{equation*} which is in $H(\mathbb{D},\mathbb{D}_0)$. Also, let $f_1\in H(G,\mathbb{D}_0)$ be defined by $$ f_1(x)=H_t\bigg(\cfrac{c\psi_v(x)}{||v||}\bigg), $$ where $\psi_v$ is a bounded linear functional on $X$ with $\psi_v(v)=||v||$ and $||\psi_v||=1$. Thus, for $x=r_0v$, we get $$ f_1(r_0v)=H_t(cr_0)=\sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f_1(0)\big(x^m\big)}{m!}\bigg|>1 $$ by using the argument given in \cite[Theorem 2.1]{AAN17}. This concludes the proof. \end{proof} \subsection{Multidimensional Bohr's radius for slit mapping $T$} We now state and prove our next result which provides the sharp Bohr inequality for functions belonging to the class $H(G,T)$. \begin{theorem} Let $G$ be a bounded balanced domain and $f\in H(G,T)$ be of the form \eqref{eqf} in a neighbourhood of the origin with $f(0)>0$. Then $K^G_X(T)=3-2\sqrt{2}$. \end{theorem} \begin{proof} By applying \eqref{eqconn}, we can readily show that $K^G_X(T)\geq 3-2\sqrt{2}$. Nevertheless, we present an alternative proof for the lower bound of $K^G_X(T)$. Suppose that $f\in H(G,T)$. Now, we consider the function $F$ as in the proof of Theorem \ref{ThmH} which is in $H(\mathbb{D},T)$ having series expansion \eqref{eqHF}. Since $F\in H(\mathbb{D},T)$, we may write the given condition as $ F\prec g $, where $$ g(z)=f(0)\bigg(\cfrac{1+z}{1-z}\bigg)^2=f(0)+4f(0)\sum_{n=1}^{\infty}nz^n. $$ According to Lemma B(i), we have \begin{equation*} \bigg| \cfrac{\,D^m f(0)\big(y^m\big)}{m!}\bigg| \le 4f(0)m ~\mbox{ for $m\ge 1$}. \end{equation*} This gives, for $z\in \mathbb{D}$ and $y\in G$, that \begin{equation*} \sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f(0)\big(y^m\big)}{m!}\bigg| |z|^m\le 4f(0)\sum_{m=1}^{\infty}m |z|^m=4f(0)\cfrac{|z|}{(1-|z|)^2} \end{equation*} which is less than or equal to $f(0)$ for all $|z|\le 3-2\sqrt{2}\approx0.17157$. The inequality $K^G_X(T)\geq3-2\sqrt{2}$ holds. Next we prove that $K^G_X(T)\leq3-2\sqrt{2}$. Note that, for any $r_0\in (3-2\sqrt{2},1)$, there exists a $c\in (0,1)$ such that $cr_0>3-2\sqrt{2}$. Also there exists a $v\in \partial G$ such that $$ c\sup \{||x||:x\in \partial G\}<||v||. $$ Next, we consider a function $f_2$ on $G$ defined by $$ f_2(x)=U\bigg(\cfrac{c\psi_v(x)}{||v||}\bigg), $$ where $$ U(z)=\bigg(\cfrac{1+z}{1-z}\bigg)^2,\quad z\in \mathbb{D} $$ and, $\psi_v$ is a bounded linear functional on $X$ with $\psi_v(v)=||v||$ and $||\psi_v||=1$. Clearly $f_2\in H(G,T)$. 
Then, for $x=r_0v$, we have
\begin{equation*}
f_2(r_0v) =f_2(0)+\sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f_2(0)\big(x^m\big)}{m!}\bigg|=\bigg(\cfrac{1+cr_0}{1-cr_0}\bigg)^2=1+4\sum_{n=1}^{\infty}n(cr_0)^n>2.
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Multidimensional Bohr's radius for exterior of the closed unit disk $\overline{\mathbb{D}}^c$}
The following lemma by Abu-Muhanna and Ali \cite{AbuAli} will be required to establish our next result.
\begin{Lem}\label{lemma2.7}
Let $f\in H(\mathbb{D},\overline{\mathbb{D}}^c)$. Then $f\prec W$, where
\begin{equation*}
W(z)=\exp\bigg(\cfrac{1+\phi(z)}{1-\phi(z)}\bigg),
\end{equation*}
with
\begin{equation*}
\phi(z)=\cfrac{z+b}{1+\overline{b}z} \ \mbox{ and } \ b=\cfrac{\log f(0)-1}{\log f(0)+1}.
\end{equation*}
\end{Lem}
The following theorem extends a result of Abu-Muhanna and Ali \cite[Theorem 2.1]{AbuAli} to higher dimensions.
\begin{theorem}
Let $G$ be a bounded balanced domain and $f\in H(G,\overline{\mathbb{D}}^c)$ be of the form \eqref{eqf} in a neighbourhood of the origin. Then
\begin{equation}\label{eqDc}
\lambda \bigg(\sum_{m=1}^{\infty}\bigg| \cfrac{\,D^m f(0)\big(x^m\big)}{m!}\bigg|, |f(0)|\bigg)\le \lambda(f(0),\partial \overline{\mathbb{D}}^c) \quad \mbox{for } x\in (1/3)G,
\end{equation}
where $\lambda$ is defined in \eqref{SD}. This result is sharp.
\end{theorem}
\begin{proof}
Let $f\in H(G,\overline{\mathbb{D}}^c)$. Then we can construct a function $F$ as in the proof of Theorem \ref{ThmH} which lies in $H(\mathbb{D},\overline{\mathbb{D}}^c)$ and has the series expansion \eqref{eqHF}. Now, by using Lemma F, we have $F\prec W$. It is easy to observe that $W(z)$ defined in Lemma F can be written as
$$
W(z)=F(0)\exp \bigg(\frac{z\log |F(0)|^2}{1-z}\bigg).
$$
Hence, from \cite[Equation 2.7]{AbuAli}, we obtain
$$
\sum_{m=0}^{\infty}\bigg|\cfrac{1}{m!}\,D^m f(0)\big(y^m\big)\bigg||z|^m \leq |F(0)|^\frac{1+|z|}{1-|z|}=|f(0)|^\frac{1+|z|}{1-|z|} \quad \mbox{for } y\in G,\ z\in \mathbb{D},
$$
which, in particular, gives
$$
\sum_{m=0}^{\infty}\bigg|\cfrac{1}{m!}\,D^m f(0)\big(x^m\big)\bigg| \leq |f(0)|^2 \quad \mbox{for } x\in (1/3)G.
$$
Then a simple computation shows that
$$
\lambda \bigg(\sum_{m=0}^{\infty}\bigg|\cfrac{1}{m!}\,D^m f(0)\big(x^m\big)\bigg|,|f(0)|\bigg)\leq \lambda(|f(0)|,|f(0)|^2)\leq \lambda(|f(0)|,1)
$$
holds for $x\in (1/3)G$. Finally, we prove that inequality \eqref{eqDc} does not hold for $x\in r_0G$, where $r_0\in (1/3,1)$. We know that there exists a $c\in (0,1)$ and $v\in \partial G$ such that $cr_0>1/3$ and
$$
c\sup \{||x||:x\in \partial G\}<||v||.
$$
Now, we consider $f_3$ on $G$ defined by
$$
f_3(x)=W\bigg(\cfrac{c\psi_v(x)}{||v||}\bigg),
$$
where $\psi_v$ is a bounded linear functional on $X$ with $\psi_v(v)=||v||$ and $||\psi_v||=1$. Thus,
$$
f_3(r_0v)=W(cr_0)=|f(0)|\exp \bigg(\frac{cr_0}{1-cr_0}\log |f(0)|^2\bigg)=|f(0)|^{(1+cr_0)/(1-cr_0)}.
$$
Also, a simple computation gives that
$$
\frac{\lambda(|f(0)|,|f(0)|^{(1+cr_0)/(1-cr_0)})}{\lambda(|f(0)|,1)}= \frac{\sqrt{2}|f(0)|(|f(0)|^{(2cr_0)/(1-cr_0)}-1)}{(|f(0)|-1)\sqrt{1+|f(0)|^{2(1+cr_0)/(1-cr_0)}}}\to \frac{2cr_0}{1-cr_0}
$$
as $|f(0)|\to 1$. Since $2x/(1-x)>1$ for $x>1/3$, we see that
$$
\lambda \bigg(\sum_{m=0}^{\infty}\bigg|\cfrac{1}{m!}\,D^m f_3(0)\big(x^m\big)\bigg|,|f(0)|\bigg)= \lambda\big(|f(0)|,|f(0)|^{(1+cr_0)/(1-cr_0)}\big)> \lambda(|f(0)|,1)
$$
for $x=r_0v$, provided $|f(0)|$ is sufficiently close to $1$. This concludes the proof of the theorem.
\end{proof}
\noindent
\subsection*{Acknowledgement.} The work of the first author is supported by SERB-SRG, SRG/2023/001938, and the work of the second author is supported by the Institute Post Doctoral Fellowship of IIT Madras, India. He thanks IIT Madras for providing an excellent research facility.
\subsection*{Conflict of Interests} The authors declare that there is no conflict of interest regarding the publication of this paper.
\subsection*{Data Availability Statement} The authors declare that this research is purely theoretical and does not involve any data.
\begin{thebibliography}{99}
\bibitem{AbuAli} { Y. Abu-Muhanna and R. M. Ali}, Bohr's phenomenon for analytic functions into the exterior of a compact convex body, {\em J. Math. Anal. Appl.} \textbf{379} (2011), 512--517.
\bibitem{Abu4} {Y. Abu-Muhanna and R. M. Ali}, Bohr's phenomenon for analytic functions and the hyperbolic metric, {\em Math. Nachr.} \textbf{286}(11-12) (2013), 1059--1065.
\bibitem{AAN17} {Y. Abu-Muhanna, R. M. Ali}, and {Z. C. Ng}, Bohr radius for the punctured disk, {\em Math. Nachr.} \textbf{290} (2017), 2434--2443.
\bibitem{Abu-M16} {Y. Abu-Muhanna, R. M. Ali}, and {S. Ponnusamy}, On the Bohr inequality. In ``Progress in Approximation Theory and Applicable Complex Analysis'' (Edited by N. K. Govil et al.), Springer Optimization and Its Applications, {\bf 117} (2016), 265--295.
\bibitem{A00} {L. Aizenberg}, Multidimensional analogues of Bohr's theorem on power series, {\em Proc. Amer. Math. Soc.} {\bf 128}(4) (2000), 1147--1155.
\bibitem{A2005} L. Aizenberg, Generalization of Carath\'eodory's inequality and the Bohr radius for multidimensional power series, {\em Oper. Theory: Adv. Appl.}, \textbf{158} (2005), 87--94.
\bibitem{A07} {L. Aizenberg}, Generalization of results about the Bohr radius for power series, {\em Stud. Math.} {\bf 180} (2007), 161--168.
\bibitem{AAD} L. Aizenberg, A. Aytuna, and P. Djakov, An abstract approach to Bohr's phenomenon, {\em Proc. Amer. Math. Soc.} {\bf 128}(9) (2000), 2611--2619.
\bibitem{BDS19} F. Bayart, A. Defant, and S. Schl\"{u}ters, Monomial convergence for holomorphic functions on $\ell_r$, {\em J. Anal. Math.} {\bf 138}(1) (2019), 107--134.
\bibitem{BPS14} F. Bayart, D. Pellegrino, and J. B. Seoane-Sep\'{u}lveda, The Bohr radius of the $n$-dimensional polydisk is equivalent to $(\log n)/n$, {\em Adv. Math.} {\bf 264} (2014), 726--746.
\bibitem{BDK04} C. B\'{e}n\'{e}teau, A. Dahlner, and D. Khavinson, Remarks on the Bohr phenomenon, {\em Comput. Methods Funct. Theory} {\bf 4}(1) (2004), 1--19.
\bibitem{BCGMMS21} L. Bernal-Gonz\'{a}lez, H.J. Cabana, D. Garc\'{i}a, M. Maestre, G.A. Mu\~{n}oz-Fern\'{a}ndez, and J.B. Seoane-Sep\'{u}lveda, A new approach towards estimating the $n$-dimensional Bohr radius, {\em Rev. R. Acad. Cienc. Exactas F\'{i}s. Nat. Ser. A Mat. RACSAM} {\bf 115} (2021), Article number: 44.
\bibitem{BD21} {B. Bhowmik and N. Das}, Bohr radius and its asymptotic value for holomorphic functions in higher dimensions, {\em C. R. Math. Acad. Sci. Paris}, {\bf 359} (2021), 911--918.
\bibitem{B2000} H.P. Boas, Majorant series, {\em J. Korean Math. Soc.} {\bf 37} (2000), 321--337.
\bibitem{BK97} {H. P. Boas and D. Khavinson}, Bohr's power series theorem in several variables, {\em Proc. Amer. Math. Soc.} {\bf 125} (1997), 2975--2979.
\bibitem{Bohr14} {H. Bohr}, A theorem concerning power series, {\em Proc. London Math. Soc.} {\bf 13}(2) (1914), 1--5.
\bibitem{BB04} E. Bombieri and J. Bourgain, A remark on Bohr's inequality, {\em Int. Math. Res.
Not.} {\bf 80} (2004), 4307--4330. \bibitem{DeB1} L. de Branges, A proof of the Bieberbach conjecture, {\em Acta Math.} \textbf{154} (1985), 137--152. \bibitem{DF} A. Defant and L. Frerick, A logarithmic lower bound for multi-dimensional Bohr radii, {\em Israel J. Math.} \textbf{152}(1) (2006), 17--28. \bibitem{DF11} A. Defant and L. Frerick, The Bohr radius of the unit ball of $\ell_p^n$, {\em J. Reine Angew. Math.} {\bf 660} (2011), 131--147. \bibitem{DFOOS11} A. Defant, L. Frerick, J. Ortega-Cerd\'{a}, M. Ouna\"{i}es, and K. Seip, The Bohnenblust-Hille inequality for homogeneous polynomials is hypercontractive, {\em Ann. of Math.} {\bf 174}(2) (2011), 485--497. \bibitem{DGMS19} A. Defant, D. Garc\'{i}a, M. Maestre, and P. Sevilla-Peris, {\em Dirichlet series and holomorphic functions in high dimensions}, {\bf37}, Cambridge University Press, Cambridge, 2019. xxvii+680 pp. \bibitem{DMP08} A. Defant, M. Maestre, and C. Prengel, The arithmetic Bohr radius, {\em Q. J. Math.} {\bf 59}(2) (2008), 189--205. \bibitem{DMP09} A. Defant, M. Maestre, and C. Prengel, Domains of convergence for monomial expansions of holomorphic functions in infinitely many variables, {\em J. Reine Angew. Math.} {\bf 634} (2009), 13--49. \bibitem{DT89} S. Dineen and R. M. Timoney, Absolute bases, tensor products and a theorem of Bohr, {\em Studia Math.} {\bf 94} (1989), 227--234. \bibitem{D95} P. G. Dixon, Banach algebras satisfying the non-unital von Neumann inequality, {\em Bull. Lond. Math. Soc.} {\bf 27} (1995), 359--362 \bibitem{DjaRaman-2000} P.~B.~Djakov and M.~S.~Ramanujan, A remark on Bohr's theorems and its generalizations, {\em J. Analysis} \textbf{8} (2000), 65--77. \bibitem{Duren83} {P. L. Duren}, Univalent Functions Springer-Verlag, New York, (1983). \bibitem{GarMasRoss-2018} S.~R.~Garcia, J.~ Mashreghi and W.~T.~Ross, Finite Blaschke products and their connections, Springer, Cham, 2018. \bibitem{GK} I. Graham and G. Kohr, Geometric function theory in one and higher dimensions, Pure and Applied Mathematics, Marcel Dekker, 255, Marcel Dekker, 2003. \bibitem{HHK}{H. Hamada, T. Honda, and G. Kohr}, {Bohr's theorem for holomorphic mappings with the values in homogeneous balls}, {\em Israel J. Math.} {\bf 173} (2009), 177--187. \bibitem{Kaypon18} {I. R. Kayumov and S. Ponnusamy}, On a powered Bohr inequality, {\em Ann. Acad. Sci. Fenn. Math.} {\bf 44} (2019), 301--310. \bibitem{KM07} G. Kresin and V. Maz'ya, Sharp real-part Theorems. A unified approach. \textit{Lect. Notes in Math.,} 1903, Springer-Verlag, Berlin-Heidelberg-New York, 2007. \bibitem{KM23}{S. Kumar and R. Manna}, Revisit of multi-dimensional Bohr radius, {\em J. Math. Anal. Appl.} {\bf523}(1) (2023), Paper No. 127023. \bibitem{LLP2020} G. Liu, Z. Liu, and S. Ponnusamy, Refined Bohr inequality for bounded analytic functions, {\em Bull. Sci. Math.} {\bf 173} (2021), Paper No. 103054, 20 pp. \bibitem{LP21} M. S. Liu and S. Ponnusamy, {Multidimensional analogues of refined Bohr's inequality}, {\em Proc. Amer. Math. Soc.} {\bf 149} (2021), 2133--2146. \bibitem{Pomm75} {Ch. Pommerenke}, {\em Univalent functions}, Vandenhoeck \& Ruprecht, Germany, 1975. \bibitem{Pomm92} {Ch. Pommerenke}, {\em Boundary Behaviour of Conformal Maps}, Springer-Verlag Berlin Heidelberg, New York, 1992. \end{thebibliography} \end{document}
2205.07876v4
http://arxiv.org/abs/2205.07876v4
Almost everywhere and norm convergence of Approximate Identity and Fejér means of trigonometric and Vilenkin systems
\documentclass[11pt]{amsart} \usepackage{amssymb} \usepackage{amsmath} \usepackage[active]{srcltx} \usepackage{t1enc} \usepackage[latin2]{inputenc} \usepackage{verbatim} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage[mathcal]{eucal} \usepackage{enumerate} \usepackage[centertags]{amsmath} \usepackage{graphicx} \graphicspath{ {./images/} } \setcounter{MaxMatrixCols}{10} \newtheorem{theorem}{Theorem} \newtheorem{conjecture}{Conjecture} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{problem}{Problem} \begin{document} \author{N. Nadirashvili, G. Tephnadze and G. Tutberidze} \title[Approximate Identity and Fej\'er means]{Almost everywhere and norm convergence of Approximate Identity and Fej\'er means of trigonometric and Vilenkin systems} \address{N. Nadirashvili, The University of Georgia, School of science and technology, 77a Merab Kostava St, Tbilisi 0128, Georgia.} \email{[email protected] \ \ \ \ [email protected] } \address {G. Tephnadze, The University of Georgia, School of Science and Technology, 77a Merab Kostava St, Tbilisi, 0128, Georgia.} \email{[email protected]} \address{G. Tutberidze, The University of Georgia, School of science and technology, 77a Merab Kostava St, Tbilisi 0128, Georgia.} \email{[email protected], \ \ \ \ [email protected]} \thanks{The research was supported by Shota Rustaveli National Science Foundation grant FR-19-676.} \date{} \maketitle \begin{abstract} In this paper we investigate very general approximation kernels with special properties, called approximate identities, which cover a whole class of summability methods, and we prove almost everywhere and norm convergence of the corresponding means with respect to the trigonometric system. These investigations can also be used to obtain norm convergence of Fej\'er means with respect to the Vilenkin system, but they are not useful for studying a.e. convergence in this case, because of some special properties of the kernels of the Vilenkin-Fej\'er means. Despite these differences, we give alternative methods to prove almost everywhere convergence of Fej\'er means with respect to the Vilenkin systems. \end{abstract} \date{} \textbf{2020 Mathematics Subject Classification.} 42C10. \noindent \textbf{Key words and phrases:} Vilenkin system, trigonometric system, Fej\'er means, almost everywhere convergence, norm convergence.
\section{Introduction}
Let us define the Fourier coefficients, partial sums, Fej\'er means and kernels of any integrable function with respect to the Vilenkin and trigonometric systems in the usual manner:
\begin{eqnarray*}
\widehat{f}^w(k) &:&=\int f\overline{w}_{k}d\mu \text{\thinspace \qquad\ \ \ \ }\left(k\in \mathbb{N}, \ \ w=\psi \ \ \text{or} \ \ w=T\right), \\
S^w_{n}f &:&=\sum_{k=0}^{n-1}\widehat{f}^w\left( k\right) w_{k}\ \text{ \qquad\ \ }\left(n\in \mathbb{N}_{+},\text{ }S^w_{0}f:=0, \ \ w=\psi \ \ \text{or} \ \ w=T \right), \\
\sigma^w _{n}f &:&=\frac{1}{n}\sum_{k=0}^{n-1}S^w_{k}f\text{ \qquad\ \ \ \ \ } \left( \text{ }n\in \mathbb{N}_{+}, \ \ w=\psi \ \ \text{or} \ \ w=T\text{ }\right), \\
K^w_{n} &:&=\frac{1}{n}\overset{n-1}{\underset{k=0}{\sum }}D^w_{k}\text{ \qquad\ \ \ \ \ \ \ \thinspace }\left( \text{ }n\in \mathbb{N}_{+},\text{ }\ \ w=\psi \ \ \text{or} \ \ w=T \right),
\end{eqnarray*}
where $\mathbb{N}_{+}$ denotes the set of positive integers, $\mathbb{N}:= \mathbb{N}_{+}\cup \{0\},$ and $D^w_k$ denotes the corresponding Dirichlet kernel. It is well-known (for details see e.g. the books \cite{AVD}, \cite{Garsia} and \cite{Tor1}) that the Fej\'er means $$\sigma^w_nf \ \left( w=\psi \ \text{or} \ w=T\right),$$ where $\sigma_n^\psi$ and $\sigma_n^T$ are the Vilenkin-Fej\'er and trigonometric-Fej\'er means, respectively, converge to the function $f$ in $L_p$ norm, that is,
\begin{equation*}
\left\Vert \sigma^w_nf-f\right\Vert_p\to 0, \ \ \text{as} \ \ n\to\infty, \ \ \left( w=\psi \ \text{or} \ w=T\right)
\end{equation*}
for any $f\in L_p$ where $1\leq p< \infty.$ Moreover (see e.g. \cite{BN} and \cite{BNT}), if we consider the maximal operator of the Fej\'er means with respect to the Vilenkin and trigonometric systems defined by
\begin{eqnarray*}
\sigma ^{\ast,w}f&:=&\sup_{n\in\mathbb{N}}\left\vert \sigma^w_nf\right\vert \ \left( w=\psi \ \text{or} \ w=T\right),
\end{eqnarray*}
then the weak type inequality
\begin{equation*}
\mu \left( \sigma ^{*,w}f>\lambda \right) \leq \frac{c}{\lambda }\left\| f\right\| _{1},\text{ \qquad }\left(f\in L_1, \ \ \lambda >0\right)
\end{equation*}
was proved by Zygmund \cite{Zy} for the trigonometric series, by Schipp \cite{Sc} for the Walsh series and by P\'al and Simon \cite{PS} (see also \cite{PTW}, \cite{tepthesis1}, \cite{We1,We2}) for bounded Vilenkin systems. It follows that the Fej\'er means of any integrable function with respect to the trigonometric and Vilenkin systems converge a.e. to this function.
In the books \cite{Garsia}, \cite{MS1} and \cite{tepthesis}, very general approximation kernels with special properties, called approximate identities, were investigated; they cover a whole class of summability methods, such as the Fej\'er means. In this paper we investigate these more general summability methods and prove norm and a.e. convergence of the corresponding means with respect to the trigonometric system. Such investigations can also be used to obtain norm convergence of Fej\'er means with respect to the Vilenkin system, but these methods are not useful for studying a.e. convergence in this case, because of some special properties of the kernels of the Vilenkin-Fej\'er means. Despite these different properties, we give alternative methods to prove almost everywhere convergence of Fej\'er means with respect to the Vilenkin systems.
This paper is organized as follows: in order not to disturb our discussion later on, some definitions and notation are presented in Sections 2 and 3.
Moreover, for the proofs of the main results we need some auxiliary lemmas, some of which are new and of independent interest. These results are also presented in Sections 2 and 3. The main results with proofs are given in Sections 4 and 5.
\section{Fej\'er means with respect to the Vilenkin systems}
Let $m:=(m_{0},m_{1},\dots)$ denote a sequence of positive integers not less than 2. Denote by
\begin{equation*}
Z_{m_{k}}:=\{0,1,\dots,m_{k}-1\}
\end{equation*}
the additive group of integers modulo $m_{k}.$ Define the group $G_{m}$ as the complete direct product of the groups $Z_{m_{j}}$, equipped with the product of the discrete topologies of the $Z_{m_{j}}$'s. In this paper we discuss bounded Vilenkin groups only, that is, $\sup_{n\in \mathbb{N}}m_{n}<\infty .$ The direct product $\mu $ of the measures
\begin{equation*}
\mu _{k}\left( \{j\}\right):=1/m_{k}\text{ \qquad }(j\in Z_{m_{k}})
\end{equation*}
is the Haar measure on $G_{m}$ with $\mu \left( G_{m}\right) =1.$ The elements of $G_{m}$ are represented by the sequences
\begin{equation*}
x:=(x_{0},x_{1},\dots,x_{k},\dots)\qquad \left( \text{ }x_{k}\in Z_{m_{k}}\right).
\end{equation*}
It is easy to give a base for the neighbourhoods of $G_{m}$, namely
\begin{equation*}
I_{0}\left( x\right):=G_{m}, \ \ I_{n}(x):=\{y\in G_{m}\mid y_{0}=x_{0},\dots,y_{n-1}=x_{n-1}\}\text{ }(x\in G_{m},\text{ }n\in \mathbb{N}).
\end{equation*}
Denote $I_{n}:=I_{n}\left( 0\right) $ for $n\in \mathbb{N}$ and $\overline{I_{n}}:=G_{m}\backslash I_{n}.$ Let $ e_{n}:=\left( 0,\dots,0,x_{n}=1,0,\dots\right) \in G_{m}\qquad \left( n\in \mathbb{N}\right). $ If we define the so-called generalized number system based on $m$ in the following way:
\begin{equation*}
M_{0}:=1,\text{ \qquad }M_{k+1}:=m_{k}M_{k\text{ }},\ \qquad (k\in \mathbb{N})
\end{equation*}
then every $n\in \mathbb{N}$ can be uniquely expressed as $n=\sum_{j=0}^{\infty }n_{j}M_{j}$, where $n_{j}\in Z_{m_{j}}$ $~(j\in \mathbb{N})$ and only a finite number of the $n_{j}$'s differ from zero. Let $\left\vert n\right\vert :=\max $ $\{j\in \mathbb{N};$ $n_{j}\neq 0\}.$ If we define
\begin{equation*}
I_N^{k,l}:=\left\{ \begin{array}{l}I_N(0,\ldots,0,x_k\neq 0,0,...,0,x_l\neq 0,x_{l+1},\ldots ,x_{N-1},\ldots),\\ \text{for} \qquad k<l<N,\\ I_N(0,\ldots,0,x_k\neq 0,x_{k+1}=0,\ldots,x_{N-1}=0,x_N,\ldots ), \\ \text{for } \qquad l=N, \end{array}\right.
\end{equation*}
then
\begin{equation}\label{2}
\overline{I_{N}}=\left( \overset{N-2}{\underset{k=0}{\bigcup }}\overset{N-1}{\underset{l=k+1}{\bigcup }}I_{N}^{k,l}\right) \bigcup \left( \underset{k=0}{\bigcup\limits^{N-1}}I_{N}^{k,N}\right) .
\end{equation}
Next, we introduce on $G_{m}$ an orthonormal system which is called the Vilenkin system. First define the complex-valued functions $r_{k}\left( x\right) :G_{m}\rightarrow \mathbb{C},$ the generalized Rademacher functions, as
\begin{equation*}
r_{k}\left( x\right):=\exp \left( 2\pi\imath x_{k}/m_{k}\right) \text{ \qquad }\left( \imath^{2}=-1,\text{ }x\in G_{m},\text{ }k\in \mathbb{N}\right).
\end{equation*}
Now define the Vilenkin system $\psi:=(\psi _{n}:n\in \mathbb{N})$ on $G_{m} $ as:
\begin{equation*}
\psi _{n}\left( x\right):=\prod_{k=0}^{\infty }r_{k}^{n_{k}}\left( x\right) \text{ \qquad }\left( n\in \mathbb{N}\right).
\end{equation*}
By a Vilenkin polynomial we mean a finite linear combination of Vilenkin functions. We denote the collection of Vilenkin polynomials by $\mathcal{P}$.
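For the reader's convenience, the following small Python sketch (an illustration only; the particular bounded sequence $m$ and the truncation of $x$ to finitely many coordinates are assumptions of the example) computes the digits $n_j$ of an integer in the generalized number system and evaluates $\psi_n(x)$ directly from the definitions above.
\begin{verbatim}
import cmath

# Sketch: generalized number system and Vilenkin functions psi_n;
# the sequence m and the truncation of x are arbitrary example choices.
m = [2, 3, 2, 4, 2, 3, 2, 4]           # a bounded generating sequence m_0, m_1, ...

def digits(n):
    """Digits n_0, n_1, ... of n, so that n = sum_j n_j*M_j (M_0=1, M_{k+1}=m_k*M_k)."""
    d = []
    for mk in m:
        d.append(n % mk)
        n //= mk
    return d

def psi(n, x):
    """psi_n(x) = prod_k r_k(x)^{n_k} with r_k(x) = exp(2*pi*i*x_k/m_k)."""
    value = 1.0 + 0.0j
    for k, nk in enumerate(digits(n)):
        value *= cmath.exp(2j * cmath.pi * x[k] / m[k]) ** nk
    return value

x = [1, 2, 0, 3, 0, 0, 0, 0]            # an element of G_m, truncated to 8 coordinates
print(digits(11))                        # [1, 2, 1, 0, 0, 0, 0, 0]:  11 = 1*1 + 2*2 + 1*6
print(abs(psi(11, x)))                   # 1.0 up to rounding (the psi_n are unimodular)
\end{verbatim}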
The Vilenkin system is orthonormal and complete in $L_{2}\left( G_{m}\right) \,$ (for details see e.g. \cite{AVD, sws, Vi}). In particular, this system is called the Walsh--Paley system if $m\equiv 2$ (for details see \cite{gol} and \cite{sws}). Recall that (for details see e.g. \cite{AVD}, \cite{gat1} and \cite{gat}) if $n>t,$ $t,n\in \mathbb{N},$ then
\begin{equation}\label{star1}
K^\psi_{M_n}\left(x\right)=\left\{ \begin{array}{ll} \frac{M_t}{1-r_t\left(x\right) },& x\in I_t\backslash I_{t+1},\qquad x-x_te_t\in I_n, \\ \frac{M_n+1}{2}, & x\in I_n, \\ 0, & \text{otherwise} \end{array} \right.
\end{equation}
and
\begin{equation} \label{star2}
n\left\vert K^\psi_n\right\vert\leq c\sum_{l=0}^{\left\vert n\right\vert }M_l \left\vert K^\psi_{M_l} \right\vert.
\end{equation}
By using these two properties of the Fej\'er kernels we obtain the following:
\begin{lemma}\label{lemma7kn}
For any $n,N\in \mathbb{N_+}$, we have that
\begin{eqnarray}
\label{fn40}
&&\int_{G_m} K^\psi_n (x)d\mu(x)=1,\\
&& \label{fn4}
\sup_{n\in\mathbb{N}}\int_{G_m}\left\vert K^\psi_n(x)\right\vert d\mu(x)\leq c<\infty,\\
&& \label{fn400}
\int_{\overline{I_N}}\left\vert K^\psi_n(x)\right\vert d\mu (x)\rightarrow 0, \ \ \text{as} \ \ n\rightarrow \infty, \ \ \text{for any} \ \ N\in \mathbb{N_+},
\end{eqnarray}
where $c$ is an absolute constant.
\end{lemma}
\begin{proof}
According to the orthonormality of the Vilenkin system we immediately get the proof of \eqref{fn40}. It is easy to prove that
$$\int_{G_m}\left\vert K^\psi_{M_n}(x)\right\vert d\mu(x)\leq c<\infty.$$
By combining \eqref{star1} and \eqref{star2} we can conclude that
\begin{eqnarray*}
\int_{G_m}\left\vert K^\psi_n\left(x\right)\right\vert d\mu\left(x\right) &\leq& \frac{c}{n}\sum_{l=0}^{\left\vert n\right\vert} M_{l}\int_{G_{m}}\left\vert K^\psi_{M_l}\left(x\right)\right\vert d\mu \left(x\right) \leq \frac{c}{n}\sum_{l=0}^{\left\vert n\right\vert}M_l\leq C<\infty,
\end{eqnarray*}
so also \eqref{fn4} is proved. Let $x\in I_N^{k,l}, \ \ k=0,\dots,N-2, \ \ l=k+1,\dots ,N-1.$ By using again \eqref{star1} and \eqref{star2} we get that
\begin{eqnarray}
\label{kn}
\left\vert K^\psi_n\left(x\right)\right\vert \leq \frac{c}{n}\sum_{s=0}^{l}M_s\left\vert K^\psi_{M_s}\left(x\right)\right\vert \leq \frac{c}{n}\sum_{s=0}^{l}M_sM_k\leq \frac{cM_lM_k}{n}.
\end{eqnarray}
Let $x\in I_N^{k,N}$, where $x\in I_{q+1}^{k,q}$, for some $N\leq q <\vert n\vert,$ i.e.,
$$x=\left(x_0=0,\ldots,x_{k-1}=0,x_k\neq 0,\ldots,x_{N-1}=0,x_q\neq 0,x_{q+1}=0,\ldots,x_{\left\vert n\right\vert-1},\ldots\right), $$
then
\begin{eqnarray}
\label{11110T1}
\left\vert K^\psi_n\left(x\right)\right\vert \leq\frac{c}{n}\underset{i=0}{\overset{q-1}{\sum}} M_iM_k\leq\frac{cM_kM_q}{n}.
\end{eqnarray}
Let $x\in I_{\vert n\vert}^{k,\vert n\vert}\subset I_N^{k,N}$, i.e.,
$$x=\left(x_0=0,\ldots,x_{k-1}=0,x_k\neq 0,x_{k+1}=0,\ldots,x_N=0,\ldots, x_{\left\vert n\right\vert-1}=0,\ldots\right),$$
then
\begin{eqnarray}
\label{11111T1}
&&\left\vert K^\psi_n\left(x\right)\right\vert \leq \frac{c}{n}\overset{\left\vert n\right\vert-1} {\underset{i=0}{\sum }}M_iM_k \leq \frac{cM_kM_{\left\vert n\right\vert}}{n}.
\end{eqnarray} If we combine \eqref{11110T1} and \eqref{11111T1} we can conclude that \begin{eqnarray}\label{11110T111} \int_{I_{N}^{k,N}}\left\vert K^\psi_n \right\vert d\mu &=&\overset{\vert n\vert-1}{\underset{q=N}{\sum }} \int_{I_{q+1}^{k,q}}\left\vert K^\psi_n \right\vert d\mu +\int_{I_{\vert n\vert}^{k,\vert n\vert}}\left\vert K^\psi_n \right\vert d\mu \\ \notag &\leq&\overset{\vert n\vert-1}{\underset{q=N}{\sum }}\frac{cM_k}{n}+\frac{cM_k}{n}\\ &\leq& \frac{c(\vert n\vert-N)M_k}{M_{\vert n \vert}}. \end{eqnarray} Hence, if we apply \eqref{2}, \eqref{kn} and \eqref{11110T111} we find that \begin{eqnarray*} &&\int_{\overline{I_N}}\left\vert K^\psi_n\right\vert d\mu\\ &=&\overset{N-2}{\underset{k=0}{\sum }}\overset{N-1}{\underset{l=k+1}{\sum }} \sum\limits_{x_{j}=0,j\in\{l+1,...,N-1\}}^{m_{j-1}}\int_{I_{N}^{k,l}}\left\vert K^\psi_n \right\vert d\mu +\overset{N-1}{\underset{k=0}{\sum}} \int_{I_{N}^{k,N}}\left\vert K^\psi_n\right\vert d\mu\\ &\leq& c\overset{N-2}{\underset{k=0}{\sum }}\overset{N-1}{\underset{l=k+1}{\sum }}\frac{m_{l+1}...m_{N-1}}{M_{N}} \frac{cM_lM_k}{n} +c\overset{N-1}{\underset{k=0}{\sum }}(\vert n\vert-N)M_k\frac{1}{M_{\vert n \vert}}\\ &:=&I+II. \end{eqnarray*} It is evident that \begin{eqnarray*} I&=&\overset{N-2}{\underset{k=0}{\sum }}\overset{N-1}{\underset{l=k+1}{\sum }} \frac{M_k}{M_{\vert n \vert}} \leq c\overset{N-2}{\underset{k=0}{\sum }} \frac{(N-k)M_k}{M_{\vert n \vert}}\\ &\leq& c\overset{N-2}{\underset{k=0}{\sum }} \frac{\vert n \vert-k}{2^{\vert n \vert-k}} = c\overset{N-2}{\underset{k=0}{\sum }} \frac{\vert n \vert-k}{2^{(\vert n \vert-k)/2}}\frac{1}{2^{(\vert n \vert-k)/2}}\\ &\leq& \frac{c}{2^{(\vert n \vert-N)/2}}\overset{N-2}{\underset{k=0}{\sum }} \frac{\vert n \vert-k}{2^{(\vert n \vert-k)/2}}\leq \frac{C}{2^{(\vert n \vert-N)/2}}\to 0, \ \ \text{as} \ \ n\to \infty. \end{eqnarray*} Analogously, we see that \begin{equation*} II\leq \frac{c(\vert n\vert-N)}{2^{\vert n \vert-N}}\to 0, \ \ \text{as} \ \ n\to \infty, \end{equation*} so also \eqref{fn400} holds and the proof is complete. \end{proof} The next lemma is very important to prove almost everywhere convergence of Vilenkin-Fej\'er means: \begin{lemma}\label{lemma7kncc} Let $n\in \mathbb{N}.$ Then \begin{eqnarray*} && \int_{\overline{I_N}}\sup_{n>M_N}\left\vert K^\psi_n\right\vert d\mu\leq C<\infty,\notag \end{eqnarray*} where $C$ is an absolute constant. \end{lemma} \begin{proof} Let $n>M_N$ and $x\in I_N^{k,l}, \ \ k=0,\dots,N-2, \ \ l=k+1,\dots ,N-1.$ By using \eqref{kn} in the proof of Lemma \ref{lemma7kn} we get that \begin{eqnarray} \label{knmax1} \sup_{n>M_N}\left\vert K^\psi_n\left(x\right)\right\vert\leq \frac{cM_lM_k}{M_N}. \end{eqnarray} Let $n>M_N$ and $x\in I_N^{k,N}$. Then, by using \eqref{star1} we find that $ \left\vert K^\psi_n\left(x\right)\right\vert \leq cM_k$ so that \begin{eqnarray} \label{knmax2} \sup_{n>M_N}\left\vert K^\psi_n (x)\right\vert \leq cM_k. 
\end{eqnarray}
Hence, if we apply \eqref{2} we get that
\begin{eqnarray} \label{knmax3}
&&\int_{\overline{I_N}}\sup_{n>M_N}\left\vert K^\psi_n\right\vert d\mu\\ \notag
&=&\overset{N-2}{\underset{k=0}{\sum }}\overset{N-1}{\underset{l=k+1}{\sum }} \sum\limits_{x_{j}=0,j\in\{l+1,...,N-1\}}^{m_{j-1}}\int_{I_{N}^{k,l}}\sup_{n>M_N}\left\vert K^\psi_n \right\vert d\mu \\ \notag
&+&\overset{N-1}{\underset{k=0}{\sum }}\int_{I_{N}^{k,N}}\sup_{n>M_N}\left\vert K^\psi_n \right\vert d\mu\\ \notag
&\leq& c\overset{N-2}{\underset{k=0}{\sum }}\overset{N-1}{\underset{l=k+1}{\sum }}\frac{m_{l+1}...m_{N-1}}{M_{N}} \frac{M_lM_k}{M_N}+c\overset{N-1}{\underset{k=0}{\sum }}\frac{M_k}{M_N}\\ \notag
&\leq& c\overset{N-2}{\underset{k=0}{\sum }} \frac{(N-k)M_k}{M_N}+c\leq C<\infty.
\end{eqnarray}
The proof is complete.
\end{proof}
\section{Fej\'er means with respect to the trigonometric system}
If we consider the Fej\'{e}r kernels with respect to the trigonometric system $\{(1/2\pi){e^{inx}},n=0,\pm1,\pm 2,...\},$ for $x\in[-\pi,\pi]$ we have that $K^T_n (x)\geq 0$ and
$$
K^T_n(x)=\frac{1}{n}\left(\frac{\sin((n x)/2)}{\sin(x/2)}\right)^2.
$$
Moreover, the Fej\'er kernel $K^T_n$ $(n\in \mathbb{N_+})$ with respect to the trigonometric system has the upper envelope
\begin{eqnarray}\label{trkn}
0\leq K^T_n(x)\leq \min\left(n,\pi^2{(n \vert x\vert^2)}^{-1}\right).
\end{eqnarray}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.96]{1.jpg}}
\caption{Fej\'{e}r kernel and the upper envelope $\min(n,\pi^2{(n \vert x\vert^2)}^{-1})$}
\label{f1}
\end{figure}
It also follows that the Fej\'er kernels have a common integrable upper envelope:
$$\sup_{n\in\mathbb{N}}K^T_n(x)\leq \pi^2 \vert x\vert^{-2}.$$
\begin{lemma}\label{lemma7kn1}
Let $n\in \mathbb{N_+}.$ Then we have that
\begin{eqnarray}
\label{fn401}
&&\int_{[-\pi,\pi]}\left\vert K^T_n(x)\right\vert d\mu(x)=\int_{[-\pi,\pi]} K^T_n (x)d\mu(x)=1,\\
&& \label{fn4001}
\int_{[-\pi,\pi] \backslash [-\varepsilon,\varepsilon]}\left\vert K^T_n(x)\right\vert d\mu (x)\rightarrow 0, \ \ \text{as} \ \ n\rightarrow \infty, \ \ \text{for any} \ \ \varepsilon>0.
\end{eqnarray}
Moreover,
\begin{eqnarray}\label{1.73app4}
\lim_{n\to \infty}\sup_{[-\pi,\pi] \backslash [-\varepsilon,\varepsilon]}\left\vert K^T_n(x)\right\vert =0, \ \ \text{for any} \ \ \varepsilon>0.
\end{eqnarray}
\end{lemma}
\begin{proof}
According to the property $K^T_n (x)\geq 0$ and the orthonormality of the trigonometric system we immediately get the proof of \eqref{fn401}. On the other hand, \eqref{fn4001} and \eqref{1.73app4} follow from the estimate \eqref{trkn}, so we leave out the details.
\end{proof}
\section{Approximate Identity}\label{s2,5}
The properties established in Lemma \ref{lemma7kn} and Lemma \ref{lemma7kn1} ensure that the kernels of the Fej\'er means $\{K^w_N\}_{N=1}^\infty \ \left( w=\psi \ \text{or} \ w=T\right),$ with respect to the Vilenkin and trigonometric systems form what is called an approximate identity.
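The two facts just used for the trigonometric kernels can also be checked numerically. The following Python sketch (an illustration only, not a substitute for the classical estimates; the grid sizes and the choice $n=20$ are arbitrary) verifies that $K^T_n$ integrates to $1$ against the normalised measure $dx/(2\pi)$ and is dominated by the envelope above.
\begin{verbatim}
import math

# Sketch: K_n(x) = (1/n)*(sin(n*x/2)/sin(x/2))**2 integrates to 1 against
# dx/(2*pi) and satisfies K_n(x) <= min(n, pi^2/(n*x^2)).
def fejer(n, x):
    if abs(math.sin(x / 2.0)) < 1e-12:
        return float(n)                  # value at x = 0
    return (math.sin(n * x / 2.0) / math.sin(x / 2.0)) ** 2 / n

n, steps = 20, 200000
h = 2.0 * math.pi / steps
integral = sum(fejer(n, -math.pi + (j + 0.5) * h) for j in range(steps)) * h / (2.0 * math.pi)
print(round(integral, 6))                # 1.0

worst = max(fejer(n, 0.001 * k) - min(n, math.pi ** 2 / (n * (0.001 * k) ** 2))
            for k in range(1, 3001))
print(worst <= 1e-9)                     # True: the envelope bound holds on (0, 3]
\end{verbatim}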
To unify the proofs for the trigonometric and Vilenkin systems we agree that $I$ denotes $G_m$ or $[-\pi,\pi]$ and that $I_N$ denotes $I_N(0)$ or $[-1/ {2^N},1/{2^N}],$ for $N\in \mathbb{N_+}.$
\begin{definition}
The family $\{\Phi_n \}^{\infty}_{n=1}\subset L_{\infty}(I)$ forms an approximate identity provided that
\begin{eqnarray*}
&(A1)& \ \ \ \int_{I}\Phi_n(x)d\mu(x)=1, \label{1.71app}\\
&(A2)& \ \ \ \sup_{n\in \mathbb{N}}\int_{I}\left\vert \Phi_n(x)\right\vert d\mu(x)<\infty,\label{1.72app} \\
&(A3)& \ \ \ \int_{I \backslash I_N}\left\vert \Phi_n(x)\right\vert d\mu (x)\rightarrow 0, \ \ \text{as} \ \ n\rightarrow \infty, \ \ \text{for any} \ \ N\in \mathbb{N_+}. \label{1.73app}
\end{eqnarray*}
\end{definition}
The term ``approximate identity'' is used because of the fact that $\Phi_n\ast f \to f \ \text{ as } \ n\to \infty$ in any reasonable sense. Next, we prove an important result, which will be used to obtain norm convergence of some well-known and quite general summability methods:
\begin{theorem} \label{theoremconv}
Let $f\in L_p(I),$ where $1\leq p< \infty$ and the family $\{\Phi_n \}^{\infty}_{n=1}\subset L_{\infty}(I)$ forms an approximate identity. Then
$$\Vert \Phi_n\ast f - f \Vert_p \to 0 \ \ \text{ as } \ \ n\to \infty.$$
\end{theorem}
\begin{proof}
Let $\varepsilon>0.$ By the continuity of translation in the $L_p$ norm and condition $(A2)$, we can choose $N$ so large that
$$\sup_{t\in I_N}\Vert f(x-t)-f(x)\Vert_p\sup_{n\in \mathbb{N}}\Vert\Phi_n\Vert_1<\varepsilon/2.$$
Now, applying Minkowski's integral inequality together with conditions $(A1)$ and $(A3)$, we find that
\begin{eqnarray*}
\Vert\Phi_n\ast f-f\Vert_p &=&\left\Vert \int_{I}\Phi_n(t)(f(x-t)-f(x))d\mu(t)\right\Vert_p\\
&\leq&\int_{I}\left\vert \Phi_n(t)\right\vert \left\Vert f(x-t)-f(x)\right\Vert_p d\mu(t)\\
&=&\int_{I_N}\left\vert \Phi_n(t)\right\vert \left\Vert f(x-t)-f(x)\right\Vert_p d\mu(t)\\
&+&\int_{I\setminus I_N}\left\vert \Phi_n(t)\right\vert \left\Vert f(x-t)-f(x)\right\Vert_p d\mu(t)\\
&\leq&\sup_{t\in I_N}\Vert f(x-t)-f(x)\Vert_p\sup_{n\in \mathbb{N}}\left\Vert\Phi_n\right\Vert_1\\
&+&\sup_{t\in I}\left\Vert f(x-t)-f(x)\right\Vert_p\int_{I\setminus I_N}\left\vert \Phi_n(t)\right\vert d\mu(t) < \varepsilon/2+\varepsilon/2=\varepsilon
\end{eqnarray*}
for all sufficiently large $n$, since $\sup_{t\in I}\left\Vert f(x-t)-f(x)\right\Vert_p\leq 2\Vert f\Vert_p$ and the last integral tends to $0$ by $(A3)$. The proof is complete.
\end{proof}
According to Lemma \ref{lemma7kn} and Lemma \ref{lemma7kn1} we immediately get that the following result holds true:
\begin{corollary}
Let $f\in L_p(I),$ where $1\leq p< \infty.$ Then
\begin{equation*}
\left\Vert \sigma^w_nf-f\right\Vert_p\to 0, \ \ \text{as} \ \ n\to\infty, \text{ }\ \ \left( w=\psi \ \ \text{or} \ \ w=T \right),
\end{equation*}
where $\sigma_n^\psi$ and $\sigma_n^T$ are the Vilenkin-Fej\'er and trigonometric-Fej\'er means, respectively.
\end{corollary}
\begin{theorem}\label{th2}
Suppose that $f\in L_1(I)$ and that the family $\{\Phi_n \}^{\infty}_{n=1}\subset L_{\infty}(I)$ forms an approximate identity. In addition, assume that the condition
\begin{eqnarray}\label{1.73app40}
&(A4)& \ \ \ \lim_{n\to \infty}\sup_{I \backslash I_N}\left\vert \Phi_n(x)\right\vert =0, \ \ \text{for any} \ \ N\in \mathbb{N_+},
\end{eqnarray}
holds.
a) If the function $f$ is continuous at $t_0$, then
$$\Phi_n\ast f(t_0)\to f(t_0) \ \ \text{ as } \ \ n\to \infty.$$
b) If the functions $\{\Phi_n \}^{\infty}_{n=1}$ are even and the left and right limits $f(t_0-0)$ and $f(t_0+0)$ exist and are finite, then
$$\Phi_n\ast f(t_0) \to L, \ \ \text{ as } \ \ n\to \infty,$$
where
\begin{eqnarray}\label{L}
L:=\frac{f(t_0+0)+f(t_0-0)}{2}.
\end{eqnarray}
\end{theorem}
\begin{proof}
It is evident that
\begin{eqnarray*}
\left\vert \Phi_n\ast f(t_0)-f(t_0)\right\vert &=& \left\vert\int_{I}\Phi_n(t)(f(t_0-t)-f(t_0))d\mu(t)\right\vert\\
&\leq&\left\vert\int_{I_N} \Phi_n(t)(f(t_0-t)-f(t_0)) d\mu(t)\right\vert\\
&+&\left\vert\int_{I\setminus I_N} \Phi_n(t)f(t_0-t) d\mu(t)\right\vert+\left\vert\int_{I\setminus I_N} \Phi_n(t)f(t_0) d\mu(t)\right\vert\\
&=:&I+II+III.
\end{eqnarray*}
Let $f$ be continuous at $t_0$. For any $\varepsilon>0$ there exists $N$ such that
$$I \leq \sup_{t\in I_N}\vert f(t_0+t)-f(t_0)\vert \sup_{n\in \mathbb{N}}\Vert\Phi_n\Vert_1< \varepsilon/2.$$
Using condition (A4), we get that
$$II \leq \sup_{t\in I \backslash I_N}\left\vert \Phi_n(t)\right\vert \Vert f\Vert_1 \to 0, \ \ \text{as} \ \ n\to\infty.$$
We conclude from (A3) that
$$III \leq \vert f(t_0)\vert \int_{I\setminus I_N} \vert \Phi_n(t)\vert d\mu(t)\to 0, \ \ \text{as} \ \ n\to\infty.$$
Thus part a) is proved.
Since the functions $\{\Phi_n \}^{\infty}_{n=1}$ are even, for the proof of part b) we first note that
\begin{eqnarray*}
&&(\Phi_n\ast f)(t_0)-L\\
&=&\int_{I} \Phi_n(t)\left(\frac{f(t_0-t)+f(t_0+t)}{2}-\frac{f(t_0-0)+f(t_0+0)}{2}\right) d\mu(t).
\end{eqnarray*}
Hence, if we use part a), we immediately get the proof of part b), so the proof is complete.
\end{proof}
\begin{corollary}
Let $f\in L_1[-\pi,\pi].$ Then the following statements hold true:
a) If the function $f$ is continuous at $t_0$, then
$$\sigma^T_n f(t_0)\to f(t_0) \ \ \text{ as } \ \ n\to \infty.$$
b) If the left and right limits $f(t_0-0)$ and $f(t_0+0)$ exist and are finite, then
$$\sigma^T_n f(t_0) \to L\ \ \text{ as } \ \ n\to \infty,$$
where $L$ is defined by \eqref{L}.
\end{corollary}
\begin{remark} \label{prop:divkn}
Condition (A4) and the estimate \eqref{trkn} do not hold for the Vilenkin-Fej\'er kernels. Indeed, by using \eqref{star1}, for any $ k\in\mathbb{N }_+$ and for the element $e_0$, which belongs to $I_n{(e_{0})} \subset G_m \backslash I_n$ for every $n\in\mathbb{N}_+$, we get that
\begin{eqnarray*}
\vert K^\psi_{M_k}(e_0)\vert=\left\vert \frac{M_0}{1-r_0\left(e_0\right) }\right\vert =\left\vert \frac{M_0}{1-\exp\left(2\pi \imath /m_{0}\right) }\right\vert=\frac{1}{2\sin(\pi /m_{0})} \geq \frac{1}{2},
\end{eqnarray*}
so that
\begin{eqnarray*}
\lim_{k\to \infty}\sup_{x\in G_m \backslash I_n}\left\vert K^\psi_{M_k}(x)\right\vert &\geq&\lim_{k\to \infty}\left\vert K^\psi_{M_k}(e_0)\right\vert\\
&\geq& \frac{1}{2}>0, \ \text{for any} \ n\in \mathbb{N_+}.
\end{eqnarray*}
Hence (A4) and \eqref{trkn} are not true for the Fej\'er kernels with respect to the Vilenkin system. However, in some publications one can find that some researchers use such an estimate (for details see \cite{IV}). Moreover, for any $x\in I_{k-1}\backslash I_{k}$ with $x_{k-1}=1$ we have
$$\vert K^\psi_{M_k}(x)\vert= \left\vert \frac{M_{k-1}}{1-\exp\left(2\pi \imath /m_{k-1}\right) }\right\vert=\frac{M_{k-1}}{2\sin(\pi /m_{k-1})} \geq \frac{M_k}{2\pi}$$
and it follows that the Fej\'er kernels with respect to the Vilenkin system do not have one integrable upper envelope. In particular, the following lower estimate holds:
$$\sup_{n\in\mathbb{N}}\vert K^\psi_n(x)\vert\geq (2\pi \lambda \vert x\vert)^{-1}, \ \ \ \text{where} \ \ \ \lambda:=\sup_{n\in\mathbb{N}}m_n.$$
\end{remark}
This remark shows that there is an essential difference between the Vilenkin-Fej\'er kernels and the Fej\'er kernels with respect to the trigonometric system. Moreover, Theorem \ref{th2} is useless to prove almost everywhere convergence of Vilenkin-Fej\'er means.
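In the simplest Walsh--Paley case $m\equiv 2$ the failure of (A4) can also be observed numerically. The following short Python sketch (an illustration only; it evaluates the kernels directly from the definition rather than from \eqref{star1}) computes $K^{\psi}_{2^s}(e_0)$ and returns $1/2$ for every $s$, in accordance with the computation above.
\begin{verbatim}
# Sketch for the Walsh--Paley case m = (2,2,2,...): the Fejer kernel
# K_n = (1/n) * sum_{k<n} D_k, evaluated at e_0 (binary coordinates
# x_0 = 1, x_i = 0 for i >= 1), equals 1/2 for n = 2^s, so the kernels
# do not tend to 0 off a fixed neighbourhood of 0.
def walsh(j, xbits):
    """w_j(x) = (-1)^{sum_i j_i * x_i} for the binary digits j_i of j."""
    s, i = 0, 0
    while j:
        s += (j & 1) * xbits[i] if i < len(xbits) else 0
        j >>= 1
        i += 1
    return -1 if s % 2 else 1

def fejer_kernel(n, xbits):
    dirichlet = [sum(walsh(j, xbits) for j in range(k)) for k in range(n)]
    return sum(dirichlet) / n

e0 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]      # the element e_0 of the dyadic group
for s in range(1, 8):
    print(2 ** s, fejer_kernel(2 ** s, e0))   # prints 0.5 for every s
\end{verbatim}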
\section{Almost Everywhere Convergence of Vilenkin-Fej\'er Means}
The next theorem is very important for studying almost everywhere convergence of the Vilenkin-Fej\'er means:
\begin{theorem}\label{weaktype1}
Suppose that the $\sigma$-sublinear operator $V$ is bounded from $L_{p_1}$ to $L_{p_{1}}$ for some $1<p_1\leq \infty $ and
\begin{equation*}
\int\limits_{\overline{I}}\left\vert Vf\right\vert d\mu \leq C\left\Vert f\right\Vert_1
\end{equation*}
for every $f\in L_1(G_m)$ and every Vilenkin interval $I\subset G_m$ which satisfy
\begin{equation}\label{atom}
\text{supp} f\subset I,\text{ \ \ \ \ \ \ }\int_{G_m}fd\mu =0.
\end{equation}
Then the operator $V$ is of weak-type $\left( 1,1\right) $, i.e.,
\begin{equation*}
\underset{y>0}{\sup }y\mu \left(\left\{ Vf>y\right\}\right) \leq C_1\left\Vert f\right\Vert_1.
\end{equation*}
\end{theorem}
\begin{theorem}\label{atom0}
Let $f\in L_1(G_m).$ Then
\begin{equation*}
\underset{y>0}{\sup}y\mu\left\{\sigma^{*,\psi}f>y\right\} \leq c\left\Vert f\right\Vert_1.
\end{equation*}
\end{theorem}
\begin{proof}
By Theorem \ref{weaktype1} we obtain that the proof will be complete if we show that
\begin{equation*}
\int\limits_{\overline{I}}\left\vert \sigma^{*,\psi}f\right\vert d\mu \leq c\Vert f\Vert_1,
\end{equation*}
for every function $f$ which satisfies the conditions in \eqref{atom}, where $I$ denotes the support of the function $f.$ Without loss of generality we may assume that $f$ is a function with support $I$ and $\mu \left( I\right) =1/M_{N}.$ We may assume that $I=I_N.$ It is easy to see that
\begin{equation*}
\sigma^{\psi}_nf(x) =\underset{I_N}{\int} K^{\psi}_n(x-t)f(t) d\mu\left(t\right)=0, \ \ \ \text{ for } \ \ \ n\leq M_{N}.
\end{equation*}
Therefore, we can suppose that $n>M_{N}.$ Hence,
\begin{eqnarray*}
&&\left\vert \sigma^{*,\psi}f(x)\right\vert \\
&\leq& \sup_{n\leq M_N}\left\vert\underset{I_N}{\int} K^{\psi}_n(x-t)f(t) d\mu\left(t\right)\right\vert + \sup_{n>M_N}\left\vert\underset{I_N}{\int} K^{\psi}_n(x-t)f(t) d\mu\left(t\right)\right\vert\\
&=&\sup_{n>M_N}\left\vert\underset{I_N}{\int} K^{\psi}_n(x-t)f(t) d\mu\left(t\right)\right\vert.
\end{eqnarray*}
Let $t\in I_N$ and $x\in \overline{I_N}.$ Then $x-t\in \overline{I_N}$ and if we apply Lemma \ref{lemma7kncc} we get that
\begin{eqnarray*}
\int\limits_{\overline{I_N}}\left\vert \sigma^{*,\psi}f(x)\right\vert d\mu (x) &\leq&\int\limits_{\overline{I_N}}{\sup_{n>M_N}}\underset{I_N}{\int}\left\vert K^{\psi}_{n}\left(x-t\right)f(t) \right\vert d\mu\left(t\right)d\mu\left(x\right)\\
&\leq&\int\limits_{\overline{I_N}}\underset{I_N}{\int}{\sup_{n>M_N}}\left\vert K^{\psi}_{n}\left(x-t\right)f(t) \right\vert d\mu\left(t\right)d\mu\left(x\right)\\
&\leq& \int\limits_{I_N}\underset{\overline{I_N}}{\int}{\sup_{n>M_N}}\left\vert K^{\psi}_n\left(x-t\right)f(t) \right\vert d\mu\left(x\right)d\mu\left(t\right)\\
&\leq& \int\limits_{I_N}\left\vert f(t) \right\vert d\mu\left(t\right)\underset{\overline{I_N}}{\int}{\sup_{n>M_N}}\left\vert K^{\psi}_n\left(x-t\right)\right\vert d\mu\left(x\right)\\
&\leq& \int\limits_{I_N}\left\vert f(t) \right\vert d\mu\left(t\right)\underset{\overline{I_N}}{\int}{\sup_{n>M_N}}\left\vert K^{\psi}_n\left(x\right)\right\vert d\mu\left(x\right)\\
&=& \left\Vert f \right\Vert_1 \underset{\overline{I_N}}{\int}{\sup_{n>M_N}}\left\vert K^{\psi}_n\left(x\right)\right\vert d\mu\left(x\right)\\
&\leq& c\left\Vert f \right\Vert_1.
\end{eqnarray*}
The proof is complete.
\end{proof}
\begin{theorem}\label{theoremae0}
Let $f\in L_{1}(G_m)$.
Then
\begin{equation*}
\sigma_n^\psi f\rightarrow f\text{ \ \ a.e., \ \ as \ \ } n\rightarrow \infty.
\end{equation*}
\end{theorem}
\begin{proof}
Since
\begin{equation*}
S^\psi_{n}P=P\text{ \ for every }P\in \mathcal{P}\text{ and every sufficiently large }n,
\end{equation*}
according to the regularity of the Fej\'er means we obtain that
\begin{equation*}
\sigma^\psi_{n}P\rightarrow P\ \ \ \ \ \text{a.e.,} \text{ \ \ \ \ as \ \ \ }\ n\rightarrow \infty,
\end{equation*}
and the set $\mathcal{P}$ of Vilenkin polynomials is dense in the space $L_1$ (for details see e.g. \cite{AVD}). On the other hand, by using Theorem \ref{atom0} we obtain that the maximal operator $\sigma^{\ast,\psi}$ is bounded from the space $L_1$ to the space $\text{weak-}L_{1},$ that is,
\begin{equation*}
\sup_{y>0}y \mu \left\{x\in G_m:\left\vert \sigma^{\ast,\psi} f\left(x\right)\right\vert >y \right\} \leq c\left\Vert f\right\Vert_1.
\end{equation*}
The usual density argument (see Marcinkiewicz and Zygmund \cite{MA}) then implies the almost everywhere convergence of the Fej\'er means,
\begin{equation*}
\sigma^\psi_nf\rightarrow f\text{ \ \ a.e., \ \ as \ \ }n\rightarrow \infty.
\end{equation*}
The proof is complete.
\end{proof}
\textbf{Acknowledgment: }The authors would like to thank the referee for helpful suggestions.
\begin{thebibliography}{99}
\bibitem{AVD} G. Agaev, N. Vilenkin, G. Dzahafarly and A. Rubinstein, Multiplicative systems of functions and harmonic analysis on zero-dimensional groups, Baku, Ehim, 1981.
\bibitem{Garsia} A. Garsia, Topics in almost everywhere convergence, Lectures in Advanced Mathematics 4, Rand McNally, 1970.
\bibitem{gat1} G. G\'at, Investigations of certain operators with respect to the Vilenkin system, Acta Math. Hung., 61 (1993), 131-149.
\bibitem{gat} G. G\'at, Ces\`{a}ro means of integrable functions with respect to unbounded Vilenkin systems, J. Approx. Theory, 124, no. 1, (2003), 25-43.
\bibitem{BN} I. Blahota, K. Nagy, Approximation by $\Theta$-means of Walsh-Fourier series, Anal. Math. 44 (1) (2018) 57-71.
\bibitem{BNT} I. Blahota, K. Nagy, G. Tephnadze, Approximation by Marcinkiewicz $\Theta$-means of double Walsh-Fourier series, Math. Inequal. Appl., 22, 3 (2019) 837-853.
\bibitem{gol} B. I. Golubov, A. V. Efimov and V. A. Skvortsov, Walsh series and transforms. Theory and applications, Mathematics and its Applications (Soviet Series), 64, Kluwer Academic Publishers Group, Dordrecht, 1991.
\bibitem{IV} T. Iofina, On the degree of approximation by means of Fourier-Vilenkin series in H\"older and $L_p$ norms, East J. Approx., 15, no. 2, (2009) 143-158.
\bibitem{MA} J. Marcinkiewicz and A. Zygmund, On the summability of double Fourier series, Fund. Math. 32 (1939), 122-132.
\bibitem{MS1} C. Muscalu and W. Schlag, Classical and multilinear harmonic analysis, Cambridge University Press, Vol. 1, 2013.
\bibitem{PS} J. P\'al and P. Simon, On a generalization of the concept of derivate, Acta Math. Hung., 29 (1977), 155-164.
\bibitem{PTW} L. E. Persson, G. Tephnadze, P. Wall, On the maximal operators of Vilenkin-N\"orlund means, J. Fourier Anal. Appl., 21, 1 (2015), 76-94.
\bibitem{Sc} F. Schipp, Certain rearrangements of series in the Walsh system, Mat. Zametki, 18 (1975), 193-201.
\bibitem{sws} F. Schipp, W. R. Wade, P. Simon and J. P\'al, Walsh series. An introduction to dyadic harmonic analysis, Adam Hilger, Ltd., Bristol, 1990.
\bibitem{tepthesis} L. E. Persson, G. Tephnadze and F. Weisz, Martingale Hardy Spaces and Summability of Vilenkin-Fourier Series, book manuscript, Springer, 2022, (to appear).
\bibitem{tepthesis1} G.
Tephnadze, Martingale Hardy Spaces and Summability of the One Dimensional Vilenkin-Fourier Series, PhD thesis, Oct. 2015 (ISSN 1402-1544). \bibitem{Tor1} A. Torchinsky, Real-variable methods in harmonic analysis, Academic Press, 1986. \bibitem{Vi} N. Ya. Vilenkin, On a class of complete orthonormal systems, Izv. Akad. Nauk. U.S.S.R., Ser. Mat., 11 (1947), 363-400. \bibitem{We1} F. Weisz, Martingale Hardy spaces and their applications in Fourier Analysis, Springer, Berlin-Heidelberg-New York, 1994. \bibitem{We2} F. Weisz, Ces\`{a}ro summability of one and two-dimensional Fourier series, Anal. Math., 5 (1996), 353-367. \bibitem{Zy} A. Zygmund, Trigonometric Series, Vol. 1, Cambridge Univ. Press, 1959. \end{thebibliography} \end{document}
2205.07025v1
http://arxiv.org/abs/2205.07025v1
Minimal-Perimeter Lattice Animals and the Constant-Isomer Conjecture
\documentclass[preprint]{elsarticle} \usepackage{microtype} \usepackage{amsmath} \usepackage{color} \usepackage{pifont} \usepackage{enumitem} \usepackage{amssymb} \usepackage{tikz} \usepackage{subcaption} \usepackage{polylib} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{observation}[theorem]{Observation} \newenvironment{proof}{{\noindent \em Proof:~}}{\hfill{\hfill$\Box$}} \graphicspath{{./graphics/}} \polypath{./graphics/} \newcommand{\figref}[1]{\figurename~\ref{#1}} \newcommand{\figrefs}[1]{Figures~\ref{#1}} \newcommand*{\myeqref}[1]{Equation~\eqref{#1}} \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\hex}{\mathcal{H}} \newcommand{\squ}{\mathcal{S}} \newcommand{\set}[1]{\left\lbrace#1\right\rbrace} \newcommand{\bra}[1]{\left[#1\right]} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand\polylog{{\rm polylog}} \newcommand{\perim}[1]{\mathcal{P}(#1)} \newcommand{\border}[1]{\mathcal{B}(#1)} \newcommand{\minp}{\epsilon} \newcommand{\minn}{\minp^{-1}} \newcommand{\psize}{e_P(Q)} \newcommand{\bsize}{e_B(Q)} \newcommand{\poly}{Q} \newcommand{\lattice}{\mathcal{L}} \newcommand{\hexagon}{\drawpolyhex[scale=0.6]{single_hex.txt}} \newcommand{\myqed}{} \newcommand{\comment}[1]{\relax} \captionsetup{compatibility=false} \captionsetup[subfigure]{justification=centering} \newcommand{\caplabel}[1]{#1} \newif\ifusestandalone \usestandalonetrue \ifusestandalone \usepackage{standalone} \fi \begin{document} \title{Minimal-Perimeter Lattice Animals and the Constant-Isomer Conjecture} \author[tech1]{Gill~Barequet\corref{cor}} \ead{[email protected]} \author[tech1]{Gil~Ben-Shachar} \ead{[email protected]} \cortext[cor]{Corresponding author} \address[tech1]{ Center for Graphics and Geometric Computing, Dept.\ of Computer Science, \\ The Technion---Israel Inst.\ of Technology, Haifa 3200003, Israel. } \begin{abstract} We consider minimal-perimeter lattice animals, providing a set of conditions which are sufficient for a lattice to have the property that inflating all minimal-perimeter animals of a certain size yields (without repetitions) all minimal-perimeter animals of a new, larger size. We demonstrate this result on the two-dimensional square and hexagonal lattices. In addition, we characterize the sizes of minimal-perimeter animals on these lattices that are not created by inflating members of another set of minimal-perimeter animals. \end{abstract} \begin{keyword} Lattice animals, benzenoid hydrocarbon isomers. \end{keyword} \maketitle \begin{center} \fbox{\includegraphics[scale=0.79]{graphics/quote.png}} \vspace{-0.5mm} \\ \begin{minipage}{4.7in} \footnotesize Cyvin S.J., Cyvin B.N., Brunvoll J. (1993) Enumeration of benzenoid chemical isomers with a study of constant-isomer series. In: \emph{Computer Chemistry}, part of \emph{Topics in Current Chemistry} book series, vol.\ 166. Springer, Berlin, Heidelberg (p.~117). \end{minipage} \end{center} \section{Introduction} An \emph{animal} on a $d$-dimensional lattice is a connected set of lattice cells, where connectivity is through ($d{-}1$)-dimensional faces of the cells. Specifically, on the planar square lattice, connectivity of cells is through edges. Two animals are considered identical if one can be obtained from the other by \emph{translation} only, without rotations or flipping. (Such animals are called ``fixed'' animals, as opposed to ``free'' animals.)
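As a concrete illustration of these objects, fixed animals on the square lattice can be enumerated naively for small sizes by growing them one cell at a time and normalising translation; the following Python sketch (an illustration only, and not the counting algorithms of Redelmeier or Jensen discussed below) reproduces the well-known counts $1, 2, 6, 19, 63$.
\begin{verbatim}
# Naive enumeration of fixed polyominoes: edge-connected cell sets on the
# square lattice, identified up to translation only (no rotations/reflections).
def canonical(cells):
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def grow(animals):
    """All fixed animals obtained by attaching one cell to an animal of the set."""
    bigger = set()
    for a in animals:
        for (x, y) in a:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb not in a:
                    bigger.add(canonical(a | {nb}))
    return bigger

animals = {frozenset({(0, 0)})}
for n in range(1, 6):
    print(n, len(animals))               # 1, 2, 6, 19, 63
    animals = grow(animals)
\end{verbatim}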
Lattice animals attracted interest in the literature as combinatorial objects~\cite{eden1961two} and as a computational model in statistical physics and chemistry~\cite{temperley1956combinatorial}. (In these areas, one usually considers \emph{site} animals, that is, clusters of lattice vertices, hence, the graphs considered there are the \emph{dual} of our graphs.) In this paper, we consider lattices in two dimensions, specifically, the hexagonal, triangular, and square lattices, where animals are called polyhexes, polyiamonds, and polyominoes, respectively. We show the application of our results to the square and hexagonal lattices, and explain how to extend the latter result to the triangular lattice. Examples of such animals are shown in Figure~\ref{fig:examples}. \begin{figure} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \drawpoly[scale = 0.75]{exmpSqr.txt} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \drawpolyhex[scale = 0.75]{exmpHex.txt} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \drawpolyiamond[scale = 0.70]{exmpTri.txt} \end{subfigure} \caption{An example of a polyomino, a polyhex, and a polyiamond.} \label{fig:examples} \end{figure} Let $A^\lattice(n)$ denote the number of lattice animals of size~$n$, that is, animals composed of $n$ cells, on a lattice~$\lattice$. A major research problem in the study of lattices is understanding the nature of~$A^\lattice(n)$, either by finding a formula for it as a function of~$n$, or by evaluating it for specific values of~$n$. These problems are still open to date for any nontrivial lattice. Redelmeier~\cite{redelmeier1981counting} introduced the first algorithm for counting all polyominoes of a given size, with no polyomino being generated more than once. Later, Mertens~\cite{Mertens1990} showed that Redelmeier's algorithm can be utilized for any lattice. The first algorithm for counting lattice animals without generating all of them was introduced by Jensen~\cite{jensen2000statistics}. Using his method, the numbers of animals on the 2-dimensional square, hexagonal, and triangular lattices were computed up to sizes~56, 46, and~75, respectively. An important measure of lattice animals is the size of their \emph{perimeter} (sometimes called ``site perimeter''). The perimeter of a lattice animal is defined as the set of empty cells adjacent to the animal cells. This definition is motivated by percolation models in statistical physics. In such discrete models, the plane or space is made of small cells (squares or cubes, respectively), and quanta of material or energy ``jump'' from a cell to a neighboring cell with some probability. Thus, the perimeter of a cluster determines where units of material or energy can move to, and guides the statistical model of the flow. \begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \drawpoly[scale = 0.45]{exmpQ.txt} \caption{Q} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \drawpoly[scale = 0.45]{exmpIQ.txt} \caption{I(Q)} \end{subfigure} \caption{A polyomino~$\poly$ and its inflated polyomino~$I(\poly)$. Polyomino cells are colored gray, while perimeter cells are colored white.} \label{fig:exmp} \end{figure} Asinowski et al.~\cite{asinowski2017enumerating,asinowski2018polycubes} provided formulae for polyominoes and polycubes with perimeter size close to the maximum possible. On the other extreme reside animals with the \emph{minimum} possible perimeter size for their area.
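As a concrete illustration of the (site) perimeter just defined, the following Python sketch (ours, for illustration only; the set-of-cells representation is an assumption of the sketch) computes the perimeter of a polyomino as the set of empty cells that are edge-adjacent to its cells, as well as the inflated polyomino obtained by adding them, as in Figure~\ref{fig:exmp}.
\begin{verbatim}
# Site perimeter on the square lattice: the set of empty cells that
# are edge-adjacent to at least one cell of the animal.
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(cells):
    cells = set(cells)
    return {n for c in cells for n in neighbors(c) if n not in cells}

def inflate(cells):
    # I(Q) = Q together with all its perimeter cells (cf. Figure 2)
    return set(cells) | perimeter(cells)

# The singleton cell has 4 perimeter cells, the domino has 6:
assert len(perimeter({(0, 0)})) == 4
assert len(perimeter({(0, 0), (1, 0)})) == 6
assert len(inflate({(0, 0)})) == 5
\end{verbatim}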
The study of polyominoes of a minimal perimeter dates back to Wang and Wang~\cite{wang1977discrete}, who identified an infinite sequence of cells on the square lattice, the first~$n$ of which (for any~$n$) form a minimal-perimeter polyomino. Later, Altshuler et al.~\cite{altshuler2006}, and independently Sieben~\cite{sieben2008polyominoes}, studied the closely-related problem of the \emph{maximum} area of a polyomino with~$p$ perimeter cells, and provided a closed formula for the minimum possible perimeter of $n$-cell polyominoes. Minimal-perimeter animals were also studied on other lattices. For animals on the triangular lattice (polyiamonds), the main result is due to F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}, who characterized all the polyiamonds with maximum area for their perimeter, and provided a formula for the minimum perimeter of a polyiamond of size~$n$. Similar results were given by Vainsencher and Bruckstein~\cite{VainsencherB08} for the hexagonal lattice. In this paper, we study an interesting property of minimal-perimeter animals, which relates to the notion of the \emph{inflation} operation. Simply put, inflating an animal is adding to it all its perimeter cells (see Figure~\ref{fig:exmp}). We provide a set of conditions (for a given lattice) under which inflating all minimal-perimeter animals of some size yields, in a bijective manner, all minimal-perimeter animals of some larger size. While this paper discusses some combinatorial properties of minimal-perimeter polyominoes, another algorithmic question emerges from these properties, namely, ``how many minimal-perimeter polyominoes are there of a given size?'' This question is addressed in detail in a companion paper~\cite{barequet2020algorithms}. The paper is organized as follows. In Section~\ref{sec:main}, we provide some definitions and prove our main theorem. In Sections~\ref{sec:polyominoes} and~\ref{sec:polyhexes}, we show how the results of Section~\ref{sec:main} apply to polyominoes and polyhexes, respectively. Then, in Section~\ref{sec:polyiamonds} we explain how the same result also applies to the regular triangular lattice. We end in Section~\ref{sec:conclusion} with some concluding remarks. \subsection{Polyhexes as Molecules} In addition to the research of minimal-perimeter animals in the literature on combinatorics, there has been much more intensive research of minimal-perimeter polyhexes in the literature on organic chemistry, in the context of the structure of families of molecules. For example, a significant amount of work dealt with molecules called \emph{benzenoid hydrocarbons}. It is a well-known fact that molecules made of carbon atoms are structured as shapes on the hexagonal lattice. Benzenoid hydrocarbons are made of carbon and hydrogen atoms only. In such a molecule, the carbon atoms are arranged as a polyhex, and the hydrogen atoms are arranged around the carbon atoms.
\begin{figure} \centering \begin{tabular}{ccc} \raisebox{0.39\height}{\includegraphics[scale=0.13]{Naphtaline.png}} & ~~~ & \includegraphics[scale=0.13]{Circumnaphtaline.png} \\ (a) Naphthalene ($C_{10}H_8$) & & (b) Circumnaphthalene ($C_{32}H_{14}$) \end{tabular} \caption{Naphthalene and its circumscribed version.} \label{fig:naphthalene} \end{figure} Figure~\ref{fig:naphthalene}(a) shows a schematic drawing of the molecule of Naphthalene (with formula~$C_{10}H_8$), the simplest benzenoid hydrocarbon, which is made of ten carbon atoms and eight hydrogen atoms, while Figure~\ref{fig:naphthalene}(b) shows Circumnaphthalene (molecular formula~$C_{32}H_{14}$). There exist different configurations of atoms for the same molecular formula, which are called \emph{isomers} of the same formula. In the field of organic chemistry, a major goal is to enumerate all the different isomers of a given formula. Note that the carbon and hydrogen atoms are modeled by lattice \emph{vertices} and not by cells of the lattice, but as we explain below, the number of hydrogen atoms coincides with the number of perimeter cells of the polyhexes under discussion. Indeed, the hydrogen atoms lie on lattice vertices that do not belong to the polyhex formed by the carbon atoms (which also lie on lattice vertices), but are connected to them by lattice edges. In minimal-perimeter polyhexes, each perimeter cell contains exactly two such hydrogen vertices, and every hydrogen vertex is shared by exactly two perimeter cells. (This has nothing to do with the fact that a single cell of the polyhex might be neighboring several---five, in the case of Naphthalene---``empty'' cells.) Therefore, the number of hydrogen atoms in a molecule of a benzenoid hydrocarbon is identical to the size of the perimeter of the imaginary polyhex.\footnote{ In order to model atoms as lattice cells, one might switch to the dual of the hexagonal lattice, that is, to the regular triangular lattice, but this will not serve our purpose. } In a series of papers (culminating in Reference~\cite{dias1987handbook}), Dias provided the basic theory for the enumeration of benzenoid hydrocarbons. A comprehensive review of the subject was given by Brunvoll and Cyvin~\cite{brunvoll1990we}. Several other works~\cite{harary1976extremal,cyvin1991series,dias2010} also dealt with the properties and enumeration of such isomers. The analogue of what we call the ``inflation'' operation is called \emph{circumscribing} in the literature on chemistry. A circumscribed version of a benzenoid hydrocarbon molecule~$M$ is created by adding to~$M$ an outer layer of hexagonal ``carbon cells,'' that is, not only the hydrogen atoms (of~$M$) adjacent to the carbon atoms now turn into carbon atoms, but also new carbon atoms are added at all other ``free'' vertices of these cells so as to ``close'' them. In addition, hydrogen atoms are put at all free lattice vertices that are connected by edges to the new carbon atoms. This process is visualized well in Figure~\ref{fig:naphthalene}. In the literature on chemistry, it is well known that circumscribing all isomers of a given molecular formula yields, in a bijective manner, all isomers that correspond to another molecular formula. (The sequences of molecular formulae that have the same number of isomers created by circumscribing are known as \emph{constant-isomer series}.) Although this fact is well known, to the best of our knowledge, no rigorous proof of it was ever given.
As mentioned above, we show that inflation induces a bijection between sets of minimal-perimeter animals on the square, hexagonal, and in a sense, also on the triangular lattice. By this, we prove the long-observed (but never proven) phenomenon of ``constant-isomer series,'' that is, that circumscribing isomers of benzenoid hydrocarbon molecules (in our terminology, inflating minimal-perimeter polyhexes) yields all the isomers of a larger molecule. \section{Minimal-Perimeter Animals} \label{sec:main} Throughout this section, we consider animals on some specific lattice~$\lattice$. Our main result consists of a set of conditions on minimal-perimeter animals on~$\lattice$, which is sufficient for inflation to induce a bijection between sets of minimal-perimeter animals on~$\lattice$. \subsection{Preliminaries} \label{subsec:preliminaries} \begin{figure} \centering \begin{tabular}{ccccc} \drawpolyhex[scale=0.5]{exmp_hex.txt} & & \drawpolyhex[scale=0.5]{exmp_hex_I.txt} & & \drawpolyhex[scale=0.5]{exmp_hex_D.txt}\\ $Q$ & & $I(Q)$ & & $D(Q)$ \end{tabular} \caption{A polyhex~$\poly$, its inflated polyhex~$I(\poly)$, and its deflated polyhex~$D(\poly)$. The gray cells belong to~$\poly$, the white cells are its perimeter, and its border cells are marked with a pattern of dots.} \label{fig:exmp_poly} \end{figure} Let~$\poly$ be an animal on~$\lattice$. Recall that the \emph{perimeter} of~$\poly$, denoted by~$\perim{\poly}$, is the set of all empty lattice cells that are neighbors of at least one cell of~$\poly$. Similarly, the \emph{border} of~$\poly$, denoted by~$\border{\poly}$, is the set of cells of~$\poly$ that are neighbors of at least one empty cell. The \emph{inflated} version of~$\poly$ is defined as~$I(\poly) := \poly \cup \perim{\poly}$. Similarly, the \emph{deflated} version of~$\poly$ is defined as~$D(\poly) := \poly \backslash \border{\poly}$. These operations are demonstrated in Figure~\ref{fig:exmp_poly}. Denote by~$\minp(n)$ the minimum possible size of the perimeter of an $n$-cell animal on~$\lattice$, and by~$M_n$ the set of all minimal-perimeter $n$-cell animals on~$\lattice$. \subsection{A Bijection} \begin{theorem} \label{thm:main} Consider the following set of conditions. \begin{enumerate}[label=(\arabic*)] \item The function~$\minp(n)$ is weakly-monotone increasing. \item There exists some constant~$c^* = c^*(\lattice)$, for which, for any minimal-perimeter animal~$\poly$, we have that~$\abs{\perim{\poly}} = \abs{\border{\poly}} + c^*$ and~$\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}}+c^*$. \item If~$\poly$ is a minimal-perimeter animal of size $n+\minp(n)$, then~$D(\poly)$ is a valid (connected) animal. \end{enumerate} If all the above conditions hold for~$\lattice$, then~$\abs{M_n} = \abs{M_{n+\minp(n)}}$. If these conditions are violated for only a finite number of animal sizes, then the claim holds for all sizes greater than some lattice-dependent threshold size~$n_0$. \myqed \end{theorem} \begin{proof} We begin by proving that inflation preserves perimeter minimality. \begin{lemma} \label{lemma:minimal-inflating} If~$\poly$ is a minimal-perimeter animal, then~$I(\poly)$ is a minimal-perimeter animal as well. \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter animal. Assume to the contrary that~$I(\poly)$ is not a minimal-perimeter animal; then, there exists an animal~$\poly'$ such that $\abs{\poly'} = \abs{I(\poly)}$ and~$\abs{\perim{\poly'}} < \abs{\perim{I(\poly)}}$, and, without loss of generality, $\poly'$ is a minimal-perimeter animal (of its size).
By the second premise of Theorem~\ref{thm:main}, we know that $\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}} + c^*$, thus, $\abs{\perim{\poly'}} < \abs{\perim{\poly}}+c^*$, and since~$\poly'$ is a minimal-perimeter animal, we also know by the same premise that~$\abs{\perim{\poly'}} = \abs{\border{\poly'}}+c^*$, and, hence, that~$\abs{\border{\poly'}} < \abs{\perim{\poly}}$. Consider now the animal~$D(\poly')$. Recall that $\abs{\poly'} = \abs{I(\poly)}=\abs{\poly}+\abs{\perim{\poly}}$, thus, the size of~$D(\poly')$ is at least~$\abs{\poly}+1$, and~$\abs{\perim{D(\poly')}} < \abs{\perim{\poly}} = \minp(n)$ (since the perimeter of~$D(\poly')$ is a subset of the border of~$\poly'$). This is a contradiction to the first premise, which states that the sequence~$\minp(n)$ is monotone increasing. Hence, the animal~$\poly'$ cannot exist, and~$I(\poly)$ is a minimal-perimeter animal. \myqed \end{proof} We now proceed to demonstrate the effect of repeated inflation on the size of minimal-perimeter animals. \begin{lemma} \label{lemma:pnc_size} The minimum perimeter size of animals of size~$n+k\minp(n)+c^*k(k-1)/2$ (for~$n > 1$ and any $k \in \N$) is~$\minp(n)+c^*k$. \end{lemma} \begin{proof} We repeatedly inflate a minimal-perimeter animal~$\poly$, whose initial size is~$n$. The size of the perimeter of~$\poly$ is~$\minp(n)$, thus, inflating it creates a new animal of size~$n+\minp(n)$, and the size of the border of~$I(\poly)$ is~$\minp(n)$, thus, the size of the perimeter of~$I(\poly)$ is~$\minp(n) + c^*$. Continuing the inflation of the animal, the $k$th inflation will increase the size of the animal by $\minp(n) + (k-1)c^*$ and will increase the size of the perimeter by~$c^*$. Summing up these quantities yields the claim. \myqed \end{proof} Next, we prove that inflation preserves distinctness, that is, inflating two different minimal-perimeter animals (of equal or different sizes) always produces two different new animals. (Note that this is not true for non-minimal-perimeter animals.) \begin{lemma} \label{lemma:different_inflating} Let~$\poly_1,\poly_2$ be two different minimal-perimeter animals. Then, regardless of whether or not~$\poly_1,\poly_2$ have the same size, the animals~$I(\poly_1)$ and~$I(\poly_2)$ are different as well. \end{lemma} \begin{proof} Assume to the contrary that $\poly = I(\poly_1) = I(\poly_2)$, that is, $\poly = \poly_1 \cup \perim{\poly_1} = \poly_2 \cup \perim{\poly_2}$. In addition, since $\poly_1 \neq \poly_2$, and since a cell cannot belong simultaneously to both an animal and to its perimeter, this means that~$\perim{\poly_1} \neq \perim{\poly_2}$. The border of~$\poly$ is a subset of both~$\perim{\poly_1}$ and~$\perim{\poly_2}$, that is, $\border{\poly} \subset \perim{\poly_1} \cap \perim{\poly_2}$. Since~$\perim{\poly_1} \neq \perim{\poly_2}$, we obtain that either~$\abs{\border{\poly}} < \abs{\perim{\poly_1}}$ or~$\abs{\border{\poly}} < \abs{\perim{\poly_2}}$; assume without loss of generality the former case. Now consider the animal~$D(\poly)$. Its size is~$\abs{\poly}-\abs{\border{\poly}}$. The size of~$\poly$ is~$\abs{\poly_1}+\abs{\perim{\poly_1}}$, thus, $\abs{D(\poly)} > \abs{\poly_1}$, and since the perimeter of~$D(\poly)$ is a subset of the border of~$\poly$, we conclude that~$\abs{\perim{D(\poly)}} < \abs{\perim{\poly_1}}$. However, $\poly_1$ is a minimal-perimeter animal, and this is a contradiction to the first premise of the theorem, which states that~$\minp(n)$ is monotone increasing.
\myqed \end{proof} To complete the cycle, we also prove that for any minimal-perimeter animal~$\poly \in M_{n+\minp(n)}$, there is a minimal-perimeter source in~$M_n$, that is, an animal~$\poly'$ whose inflation yields~$\poly$. Specifically, this animal is $D(\poly)$. \begin{lemma} \label{lemma:deflating} For any~$\poly \in M_{n+\minp(n)}$, we also have that~$I(D(\poly)) = \poly$. \end{lemma} \begin{proof} Since~$\poly \in M_{n+\minp(n)}$, we have by Lemma~\ref{lemma:pnc_size} that~$\abs{\perim{\poly}} = \minp(n)+c^*$. Combining this with the equality~$\abs{\perim{\poly}} = \abs{\border{\poly}}+c^*$, we obtain that~$\abs{\border{\poly}} = \minp(n)$, thus, $\abs{D(\poly)} = n$ and $\abs{\perim{D(\poly)}} \geq \minp(n)$. Since the perimeter of~$D(\poly)$ is a subset of the border of~$\poly$, and~$\abs{\border{\poly}} = \minp(n)$, we conclude that the perimeter of~$D(\poly)$ and the border of~$\poly$ are the same set of cells, and, hence, $I(D(\poly)) = \poly$. \myqed \end{proof} Let us now wrap up the proof of the main theorem. In Lemma~\ref{lemma:minimal-inflating} we have shown that for any minimal-perimeter animal~$\poly \in M_n$, we have that~$I(\poly) \in M_{n+\minp(n)}$. In addition, Lemma~\ref{lemma:different_inflating} states that the inflation of two different minimal-perimeter animals results in two other different minimal-perimeter animals. Combining the two lemmata, we obtain that~$\abs{M_n} \leq \abs{M_{n+\minp(n)}}$. On the other hand, in Lemma~\ref{lemma:deflating} we have shown that if~$\poly \in M_{n+\minp(n)}$, then~$I(D(\poly)) = \poly$, and, thus, for any animal in~$M_{n+\minp(n)}$, there is a unique source in~$M_n$ (specifically, $D(\poly)$), whose inflation yields~$\poly$. Hence, $\abs{M_n} \geq \abs{M_{n+\minp(n)}}$. Combining the two relations, we conclude that~$\abs{M_n} = \abs{M_{n+\minp(n)}}$. \myqed \end{proof} \subsection{Inflation Chains} Theorem~\ref{thm:main} implies that there exist infinite chains of sets of minimal-perimeter animals, each set obtained by inflating all members of the previous set, while the cardinalities of all sets in a chain are equal. Obviously, there are sets of minimal-perimeter animals that are not created by the inflation of any other sets. We call the size of animals in such sets an \emph{inflation-chain root}. Using the definitions and proofs in the previous section, we are able to characterize which sizes can be inflation-chain roots. Then, using one more condition, which holds in the lattices we consider, we determine which values are the actual inflation-chain roots. To this aim, we define the pseudo-inverse function \[ \minp^{-1}(p) = \min\set{n \in \N \mid \minp(n) = p}. \] Since~$\minp(n)$ is a monotone-increasing discrete function, it is a step function, and the value of~$\minp^{-1}(p)$ is the first point in each step. \begin{theorem} \label{thm:root-candidates} Let~$\lattice$ be a lattice satisfying the premises of Theorem~\ref{thm:main}. Then, all inflation-chain roots are either~$\minp^{-1}(p)$ or~$\minp^{-1}(p)-1$, for some $p \in \N$. \end{theorem} \begin{proof} Recall that~$\minp(n)$ is a step function, where each step represents all animal sizes for which the minimal perimeter is~$p$. Let us denote the start and end of the step representing the perimeter~$p$ by~$n_b^p$ and~$n_e^p$, respectively. Formally, $n_b^p = \minp^{-1}(p)$ and~$n_e^p = \minp^{-1}(p+1)-1$. 
For each size~$n$ of animals in the step~$\bra{n_b^p,n_e^p}$, inflating a minimal-perimeter animal of size~$n$ results in an animal of size~$n{+}p$, and by~Lemma~\ref{lemma:pnc_size}, the perimeter of the inflated animal is~$p{+}c^*$. Thus, the inflation of animals of all sizes in the step of perimeter~$p$ yields animals that appear in the step of perimeter~$p{+}c^*$. In addition, they appear in a \emph{consecutive} portion of the step, specifically, the range~$\bra{n_b^p+p,n_e^p+p}$. Similarly, the step~$\bra{n_b^{p+1},n_e^{p+1}}$ is mapped by inflation to the range~$\bra{n_b^{p+1}+p+1,n_e^{p+1}+p+1}$, which is a portion of the step of perimeter~$p{+}c^*{+}1$. Note that the former range ends at~$n_e^p+p = n_b^{p+1}+p-1$, while the latter range starts at~$n_b^{p+1}+p+1$, thus, there is exactly one size of animals, specifically,~$n_b^{p+1}+p$, which is not covered by either of the image ranges~$\bra{n_b^p+p,n_e^p+p}$ and~$\bra{n_b^{p+1}+p+1,n_e^{p+1}+p+1}$. These two ranges lie in steps of two different perimeter sizes, namely, $p{+}c^*$ and~$p{+}c^*{+}1$. Hence, the size~$n_b^{p+1}+p$, being sandwiched between the two image ranges, must be either the end of the first step, $n_e^{p+c^*}$, or the beginning of the second step, $n_b^{p+c^*+1}$. This concludes the proof. \myqed \end{proof} The arguments of the proof of Theorem~\ref{thm:root-candidates} are visualized in Figure~\ref{fig:minpH_roots} for the case of polyhexes. In fact, as we show below (see Theorem~\ref{thm:root-conditioned}), only the second option exists, but in order to prove this, we also need a maximality-preservation property of the inflation operation. Here is another perspective on the above result. Note that minimal-perimeter animals, with size corresponding to $n_e^{p}$ (for some~$p \in \N$), are the largest animals with perimeter~$p$. Intuitively, animals with the largest size, for a certain perimeter size, tend to be ``spherical'' (``round'' in two dimensions), and inflating them makes them even more spherical. Therefore, one might expect that for a general lattice, the inflation operation will preserve the property of animals being the largest for a given perimeter. In fact, this has been proven rigorously for the square lattice~\cite{altshuler2006,sieben2008polyominoes} and for the hexagonal lattice~\cite{VainsencherB08,fulep2010polyiamonds}. However, this also means that inflating a minimal-perimeter animal of size~$n_e^p$ yields a minimal-perimeter animal of size~$n_e^{p+c^*}$, and, thus, $n_e^p$ cannot be an inflation-chain root. We summarize this discussion in the following theorem. \begin{theorem} \label{thm:root-conditioned} Let~$\lattice$ be a lattice for which the three premises of Theorem~\ref{thm:main} are satisfied, and, in addition, the following condition holds. \begin{enumerate}[label=(\arabic*)] \setcounter{enumi}{3} \item The inflation operation preserves the property of having a maximum size for a given perimeter. \end{enumerate} Then, the inflation-chain roots are precisely the sizes~$\minp^{-1}(p)$, for all $p \in \N$. \myqed \end{theorem} \subsection{Convergence of Inflation Chains} We now discuss the structure of inflated animals, and show that under a certain condition, inflating repeatedly \emph{any} animal (or actually, any set, possibly disconnected, of lattice cells) ends up in a minimal-perimeter animal after a finite number of inflation steps. Let~$I^k(Q)$ ($k>0$) denote the result of applying the inflation operator~$I(\cdot)$ repeatedly~$k$ times, starting from the animal~$Q$.
Equivalently, \[ I^k(Q) = Q \cup \set{c \mid \mbox{Dist}(c,Q) \leq k}, \] where~$\mbox{Dist}(c,Q)$ is the lattice distance from a cell~$c$ to the animal~$Q$. For brevity, we will use the notation $Q^k = I^k(Q)$. Let us define the function \( \phi(Q) = \minn(\abs{\perim{Q}}) - \abs{Q} \) and explain its meaning. When~$\phi(Q) \geq 0$, it counts the cells that should be added to~$Q$, with no change to its perimeter, in order to make it a minimal-perimeter animal. In particular, if~$\phi(Q) = 0$, then~$Q$ is a minimal-perimeter animal. Otherwise, if~$\phi(Q) < 0$, then~$Q$ is also a minimal-perimeter animal, and $\abs{\phi(Q)}$ cells can be removed from~$Q$ while still keeping the result a minimal-perimeter animal and without changing its perimeter. \begin{lemma} \label{lemma:jumps-p-1} For any value of~$p$, we have that~$\minn(p+c^*)-\minn(p) = p-1$. \end{lemma} \begin{proof} Let~$Q$ be a minimal-perimeter animal with area~$n_b^p = \minn(p)$. The area of~$I(Q)$ is~$n_b^p+p$, thus, by Lemmas~\ref{lemma:minimal-inflating} and~\ref{lemma:pnc_size}, $\abs{\perim{I(Q)}} = p+c^*$. The area~$n_b^{p+c^*}$ is an inflation-chain root, hence, the area of~$I(Q)$ cannot be~$n_b^{p+c^*}$. Except~$n_b^{p+c^*}$, animals of all other areas in the range~$\bra{n_b^{p+c^*},n_e^{p+c^*}}$ are created by inflating minimal-perimeter animals with perimeter~$p$. The animal~$Q$ is of area~$n_b^p$, the minimum area in its step, hence, the area of~$I(Q)$ must be the minimum area in $\bra{n_b^{p+c^*},n_e^{p+c^*}}$ which is not an inflation-chain root. Hence, the area of~$I(Q)$ is~$n_b^{p+c^*}+1$. We now equate the two expressions for the area of $I(Q)$: $n_b^p+p = n_b^{p+c^*}+1$. That is, $n_b^{p+c^*}-n_b^{p} = p-1$. The claim follows. \end{proof} Using Lemma~\ref{lemma:jumps-p-1}, we can deduce the following result. \begin{lemma} \label{lem:conv-step} If~$\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$, then~$\phi(I(Q)) = \phi(Q)-1$. \end{lemma} \begin{proof} \begin{align*} \phi(I(Q)) &= \minn(\abs{\perim{I(Q)}}) - \abs{I(Q)} \\ &= \minn(\abs{\perim{Q}}+c^*) - (\abs{Q} + \abs{\perim{Q}}) \\ &= \minn(\abs{\perim{Q}}) + \abs{\perim{Q}} -1 - \abs{Q} - \abs{\perim{Q}} \\ &= \minn(\abs{\perim{Q}}) -\abs{Q} - 1 \\ &= \phi(Q) - 1. \end{align*} \end{proof} Lemma~\ref{lem:conv-step} tells us that inflating an animal, $Q$, which satisfies $\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$, reduces $\phi(Q)$ by $1$. In other words, $I(Q)$ is ``closer'' than~$Q$ to being a minimal-perimeter animal. This result is stated more formally in the following theorem. \begin{theorem} \label{thm:convergence} Let~$\lattice$ be a lattice for which the four premises of Theorems~\ref{thm:main} and~\ref{thm:root-conditioned} are satisfied, and, in addition, the following condition holds. \begin{enumerate}[label=(\arabic*)] \setcounter{enumi}{4} \item For every animal~$Q$, there exists some finite number~$k_0 = k_0(Q)$, such that for every $k>k_0$, we have that~$\abs{\perim{Q^{k+1}}} = \abs{\perim{Q^{k}}} + c^*$. \end{enumerate} Then, after a finite number of inflation steps, any animal becomes a minimal-perimeter animal. \end{theorem} \begin{proof} The claim follows from Lemma~\ref{lem:conv-step}. After~$k_0$ inflation operations, the premise of this lemma holds. Then, any additional inflation step will reduce~$\phi(Q)$ by~$1$ until~$\phi(Q)$ is nullified, which is precisely when the animal becomes a minimal-perimeter animal. (Any additional inflation steps would add superfluous cells, in the sense that they can be removed while keeping the animal a minimal-perimeter animal.)
\end{proof} \section{Polyominoes} \label{sec:polyominoes} Throughout this section, we consider the two-dimensional square lattice~$\squ$, and show that the premises of Theorem~\ref{thm:main} hold for this lattice. The lattice-specific notation ($M_n$, $\minp(n)$, and~$c^*$) in this section refer to~$\squ$. \subsection{Premise 1: Monotonicity} The function~$\minp^\squ(n)$, that gives the minimum possible size of the perimeter of a polyomino of size~$n$, is known to be weakly-monotone increasing. This fact was proved independently by Altshuler et al.~\cite{altshuler2006} and by Sieben~\cite{sieben2008polyominoes}. The latter reference also provides the following explicit formula. \begin{theorem} \label{thm:minp_sqr} \textup{\cite[Thm.~$5.3$]{sieben2008polyominoes}} $\minp^\squ(n) = \ceil{\sqrt{8n-4} \,}+2$. \myqed \end{theorem} \subsection{Premise 2: Constant Inflation} The second premise is apparently the hardest to show. We will prove that it holds for~$\squ$ by analyzing the patterns which may appear on the border of minimal-perimeter polyominoes. Asinowski et al.~\cite{asinowski2017enumerating} defined the \emph{excess} of a perimeter cell as the number of adjacent occupied cell minus one, and the total \emph{perimeter excess} of an animal~$\poly$, $e_P(\poly)$, as the sum of excesses over all perimeter cells of~$\poly$. We extend this definition to border cells, and, in a similar manner, define the \emph{excess} of a border cell as the number of adjacent empty cells minus one, and the \emph{border excess} of~$\poly$, $e_B(\poly)$, as the sum of excesses over all border cells of~$\poly$. First, we establish a connection between the size of the perimeter of a polyomino to the size of its border. The following formula is universal for all lattice animals. \begin{lemma} \label{lemma:pebe} For every animal~$\poly$, we have that \begin{equation} \label{eq:pebe} \abs{\perim{\poly}} + e_P(\poly) = \abs{\border{\poly}} + e_B(\poly). \end{equation} \end{lemma} \begin{proof} Consider the (one or more) rectilinear polygons bounding the animal~$\poly$. The two sides of the equation are equal to the total length of the polygon(s) in terms of lattice edges. Indeed, this length can be computed by iterating over either the border or the perimeter cells of~$\poly$. In both cases, each cell contributes one edge plus its excess to the total length. The claim follows. \myqed \end{proof} \begin{figure} \centering \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs1.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs2.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs3.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs4.txt} \caption{} \end{subfigure} \qquad \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps1.txt} \addtocounter{subfigure}{18} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps2.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps3.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps4.txt} \caption{} \end{subfigure} \caption{All possible patterns of cells, up to symmetries, with positive excess. The gray cells are polyomino cells, while the white cells are perimeter cells. 
The centers of the ``crosses'' are the subject cells, and the patterns show the immediate neighbors of these cells. Patterns~(a--d) exhibit excess border cells, while Patterns~(w--z) exhibit excess perimeter cells. } \label{fig:patterns_sqr} \end{figure} \begin{figure} \centering \drawpoly{patterns.txt} \caption{A~sample polyomino with marked patterns.} \label{fig:patterns-exmp} \end{figure} Let~$\#\square$ be the number of excess cells of a certain type in a polyomino, where~`$\square$' is one of the symbols~$a$--$d$ and~$w$--$z$, as classified in Figure~\ref{fig:patterns_sqr}. Figure~\ref{fig:patterns-exmp} depicts a polyomino which includes cells of all these types. Counting~$e_P(Q)$ and~$e_B(Q)$ as functions of the different patterns of excess cells, we see that \( e_B(Q) = \#a + 2\#b + 3\#c + \#d \) and \( e_P(Q) = \#w + 2\#x + 3\#y + \#z. \) Substituting~$e_B$ and~$e_P$ in \myeqref{eq:pebe}, we obtain that \[ \psize = \bsize + \#a + 2\#b+3\#c + \#d - \#w - 2\#x-3\#y - \#z. \] Since Pattern~(c) is a singleton cell, we can ignore it in the general formula. Thus, we have that \[ \psize = \bsize + \#a + 2\#b + \#d - \#w - 2\#x-3\#y - \#z. \] We now simplify the equation above, first by eliminating the hole pattern, namely, Pattern~(y). \begin{lemma} \label{lemma:no-holes-sqr} Any minimal-perimeter polyomino is simply connected (that is, it does not contain holes). \end{lemma} \begin{proof} The sequence~$\minp(n)$ is weakly-monotone increasing.\footnote{ In the sequel, we simply say ``monotone increasing.'' } Assume that there exists a minimal-perimeter polyomino~$\poly$ with a hole. Consider the polyomino~$\poly'$ that is obtained by filling this hole. The area of~$\poly'$ is clearly larger than that of~$\poly$, however, the perimeter size of~$\poly'$ is smaller than that of~$\poly$ since we eliminated the perimeter cells inside the hole but did not introduce new perimeter cells. This is a contradiction to~$\minp(n)$ being monotone increasing. \myqed \end{proof} Next, we continue to eliminate terms from the equation by showing some invariant related to the turns of the boundary of a minimal-perimeter polyomino. \begin{lemma} \label{lemma:sum_of_turns} For a simply connected polyomino, we have that \( \#a +2\#b -\#w -2\#x = 4. \) \end{lemma} \begin{proof} The boundary of a polyomino without holes is a simple polygon, thus, the sum of its internal angles is $(v-2)\pi$, where~$v$ is the complexity (number of vertices) of the polygon. Note that Pattern~(a) (resp.,~(b)) adds one (resp., two) $\pi/2$-vertex to the polygon. Similarly, Pattern~(w) (resp.~(x)) adds one (resp., two) $3\pi/2$-vertex. All other patterns do not involve vertices. Let~$L = \#a+2\#b$ and~$R = \#w+2\#x$. Then, the sum of angles of the boundary polygon implies that $L \cdot \pi/2 + R \cdot 3\pi/2 = (L+R-2) \cdot \pi$, that is, $L-R = 4$. The claim follows. \myqed \end{proof} Finally, we show that Patterns~(d) and~(z) cannot exist in a minimal-perimeter polyomino. We define a \emph{bridge} as a cell whose removal renders the polyomino disconnected. Similarly, a perimeter bridge is a perimeter cell that neighbors two or more connected components of the complement of the polyomino. Observe that minimal-perimeter polyominoes do not contain any bridges, \emph{i.e.}, cells of Patterns~(d) or~(z). This is stated in the following lemma. \begin{lemma} \label{lemma:no-bridges-sqr} A minimal-perimeter polyomino does not contain any bridge cells. \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter polyomino. 
For the sake of contradiction, assume first that there is a cell~$f \in \perim{\poly}$ that fits Pattern~(z). Assume without loss of generality that the two adjacent polyomino cells are to the left and to the right of~$f$. These two cells must be connected, thus, the area below (or above)~$f$ must form a cavity in the polyomino shape. Let~$\poly'$ be the polyomino obtained by adding~$f$ to~$\poly$ and filling the cavity. Figures~\ref{fig:no_z+d}(a,b) illustrate this situation. The cell directly above~$f$ becomes a perimeter cell, the cell~$f$ ceases to be a perimeter cell, and at least one perimeter cell in the area filled below~$f$ is eliminated, thus,~$\abs{\perim{\poly'}} < \abs{\perim{\poly}}$ and~$\abs{\poly'} > \abs{\poly}$, which is a contradiction to the sequence~$\minp(n)$ being monotone increasing. Therefore, polyomino~$\poly$ does not contain perimeter cells that fit Pattern~(z). \begin{figure} \centering \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z1.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z2.txt} \caption{$\poly'$} \end{subfigure} \qquad \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_d1.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5]{no_d2.txt} \caption{$\poly'$} \end{subfigure} \caption{Forbidden patterns for the proof of Lemma~\ref{lemma:no-bridges-sqr}.} \label{fig:no_z+d} \end{figure} Now assume for contradiction that~$\poly$ contains a cell~$f$ that forms Pattern~(d). Let~$\poly'$ be the polyomino obtained from~$\poly$ by removing~$f$ (this will break~$\poly$ into two separate pieces) and then shifting to the left the piece on the right (this will unite the two pieces into a new polyomino). Figures~\ref{fig:no_z+d}(c,d) demonstrate this situation. This operation is always valid since~$\poly$ is of minimal perimeter, hence, by Lemma~\ref{lemma:no-holes-sqr}, it is simply connected, and thus, removing~$f$ breaks~$\poly$ into two separate polyominoes with a gap of one cell in between. Shifting to the left the piece on the right will not create a collision since this would mean that the two pieces were touching, which is not the case. On the other hand, the shift will eliminate the gap that was created by the removal of~$f$, hence, the two pieces will now form a new connected polyomino. The area of~$\poly'$ is one less than the area of~$\poly$, and the perimeter of~$\poly'$ is smaller by at least two than the perimeter of~$\poly$, since the perimeter cells below and above~$f$ cease to be part of the perimeter, and connecting the two parts does not create new perimeter cells. From the formula of~$\minp(n)$, we know that $\minp(n)-\minp(n-1) \leq 1$ for~$n \geq 3$. However, $\abs{\poly} - \abs{\poly'} = 1$ and~$\abs{\perim{\poly}} - \abs{\perim{\poly'}} \geq 2$, hence, $\poly$ is not a minimal-perimeter polyomino, which contradicts our assumption. Therefore, there are no cells in~$\poly$ that fit Pattern~(d). This completes the proof. \myqed \end{proof} We are now ready to wrap up the proof of the constant-inflation theorem. \begin{theorem} \label{theorem:pb4} (Stepping Theorem) For any minimal-perimeter polyomino~$\poly$ (except the singleton cell), we have that $\psize=\bsize+4.$ \end{theorem} \begin{proof} Combining the formula above with Lemmas~\ref{lemma:no-holes-sqr} and~\ref{lemma:sum_of_turns}, we obtain that~$\psize=\bsize+4+\#d-\#z$.
By Lemma~\ref{lemma:no-bridges-sqr}, we know that $\#d = \#z = 0$. The claim follows at once. \myqed \end{proof} \subsection{Premise 3: Deflation Resistance} \begin{lemma} \label{lemma:def_valid} Let~$\poly$ be a minimal-perimeter polyomino of area~$n+\minp(n)$ (for $n \geq 3$). Then, $D(\poly)$ is a valid (connected) polyomino. \end{lemma} \begin{proof} Assume to the contrary that~$D(\poly)$ is not connected, so that it is composed of at least two connected parts. Assume first that~$D(\poly)$ is composed of exactly two parts, $\poly_1$ and~$\poly_2$. Define the \emph{joint perimeter} of the two parts, $\perim{\poly_1,\poly_2}$, to be~$\perim{\poly_1} \cup \perim{\poly_2}$. Since~$\poly$ is a minimal-perimeter polyomino of area $n+\minp(n)$, we know, by Lemma~\ref{lemma:pnc_size} and Theorem~\ref{theorem:pb4}, that its perimeter size is~$\minp(n)+4$ and its border size is~$\minp(n)$. Thus, the size of~$D(\poly)$ is exactly~$n$ regardless of whether or not~$D(\poly)$ is connected. Since deflating~$\poly$ results in~$\poly_1 \cup \poly_2$, the polyomino~$\poly$ must have an (either horizontal, vertical, or diagonal) ``bridge'' of border cells which disappears in the deflation. The width of the bridge is at most~2, thus, $\abs{\perim{\poly_1} \cap \perim{\poly_2}} \leq 2$. Hence, $\abs{\perim{\poly_1}} + \abs{\perim{\poly_2}} - 2 \leq \abs{\perim{\poly_1,\poly_2}}$. Since~$\perim{\poly_1,\poly_2}$ is a subset of~$\border{\poly}$, we have that $\abs{\perim{\poly_1,\poly_2}} \leq \minp(n)$. Therefore, \begin{equation} \label{eq:def_valid_1} \minp(\abs{\poly_1}) + \minp(\abs{\poly_2}) - 2 \leq \minp(n). \end{equation} Recall that~$\abs{\poly_1} + \abs{\poly_2} = n$. It is easy to observe that~$\minp(\abs{\poly_1})+\minp(\abs{\poly_2})$ is minimized when~$\abs{\poly_1}=1$ and $\abs{\poly_2} = n-1$ (or vice versa); see Figure~\ref{fig:minp_plot}. \begin{figure} \centering \includegraphics[scale=0.4]{minp_s.pdf} \caption{The function~$\minp(n)$.} \label{fig:minp_plot} \end{figure} Had $\minp(n)$ been $2+\sqrt{8n-4}$ (without rounding up), this would be obvious. But since $\minp(n) = \left\lceil 2+\sqrt{8n-4} \, \right\rceil$, it is a step function (with an infinite number of intervals), where the gap between all successive steps is exactly~1, except the gap between the two leftmost steps which is~2. This guarantees that despite the rounding, the minimum of~$\minp(\abs{\poly_1})+\minp(\abs{\poly_2})$ occurs as claimed. Substituting this into \myeqref{eq:def_valid_1}, and using the fact that~$\minp(1)=4$, we see that $\minp(n-1) + 2 \leq \minp(n)$. However, we know~\cite{sieben2008polyominoes} that $\minp(n) - \minp(n-1) \leq 1$ for $n\geq 3$, which is a contradiction. Thus, the deflated version of~$\poly$ cannot split into two parts unless it splits into two singleton cells, which is indeed the case for a minimal-perimeter polyomino of size~8, specifically, \( D( \!\! \raisebox{-2.25mm}{\drawpoly[scale=0.3]{diag8.txt}} \!\! ) = \!\! \raisebox{-1.5mm}{\drawpoly[scale=0.3]{diag2.txt}} \). The same method can be used for showing that~$D(\poly)$ cannot be composed of more than two parts. Note that this proof does not hold for polyominoes whose area is not of the form~$n+\minp(n)$, but it suffices for the purpose of Theorem~\ref{thm:main}. \myqed \end{proof} As mentioned earlier, it was already proven elsewhere~\cite{altshuler2006,sieben2008polyominoes} that Premise~4 (the additional condition of Theorem~\ref{thm:root-conditioned}) is fulfilled for the square lattice. Therefore, we proceed to show that Premise~5 holds.
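Before doing so, we remark that the facts about~$\minp(n)$ used above are easy to check numerically from the closed formula of Theorem~\ref{thm:minp_sqr}. The following small Python sketch (ours, for illustration only, and not part of any proof) evaluates $\minp(n)$ exactly, verifies the step-gap behavior used in the proof of Lemma~\ref{lemma:def_valid}, and confirms Lemma~\ref{lemma:jumps-p-1} with $c^*=4$ for a range of perimeter values.
\begin{verbatim}
import math

def minp(n):
    # minimum perimeter of an n-cell polyomino: ceil(sqrt(8n-4)) + 2,
    # computed exactly via integer square roots
    return math.isqrt(8*n - 5) + 3

def minn(p):
    # pseudo-inverse: smallest n with minp(n) = p (None if p is not attained)
    n = 1
    while minp(n) < p:
        n += 1
    return n if minp(n) == p else None

# gaps between consecutive values: exactly 1 (or 0 within a step),
# except the gap between the two leftmost steps, which is 2
gaps = [minp(n) - minp(n - 1) for n in range(2, 2000)]
assert gaps[0] == 2 and all(g in (0, 1) for g in gaps[1:])

# the "jumps" lemma with c* = 4:  minn(p + 4) - minn(p) = p - 1
for p in range(4, 60):
    if minn(p) is not None and minn(p + 4) is not None:
        assert minn(p + 4) - minn(p) == p - 1
\end{verbatim}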
\subsection{Premise 5: Convergence to a Minimal-Perimeter Polyomino} In this subsection, we show that starting from any polyomino~$P$, and applying repeatedly some finite number of inflation steps, we obtain a polyomino $Q = Q(P)$, for which $\abs{\perim{I(Q)}} = \abs{\perim{Q}} + 4$. Let~$R(Q)$ denote the \emph{diameter} of~$Q$, \emph{i.e.}, the maximal horizontal or vertical distance ($L^\infty$) between two cells of~$Q$. The following lemma shows that some geometric features of a polyomino disappear after inflating it enough times. \begin{lemma} \label{lem:no-hdz} For any~$k \geq R(Q)$, the polyomino~$Q^k$ does not contain any (i)~holes; (ii)~cells of Type~(d); or (iii)~patterns of Type~(z). \end{lemma} \begin{proof} \begin{itemize} \item[(i)] Let~$Q$ be a polyomino, and assume that~$Q^k$ contains a hole. Consider a cell~$c$ inside the hole, and let~$c_u$ be the cell of~$Q^k$ that lies immediately above it. (Note that since~$c_u$ belongs to the border of~$Q^k$, it is not a cell of~$Q$.) Any cell that resides (not necessarily directly) below~$c$ is closer to~$c$ than to~$c_u$. Since $c_u \in Q^k$, it ($c_u$) is closer than~$c$ to~$Q$, thus, there must be a cell of~$Q$ (not necessarily directly) above~$c$, otherwise~$c_u$ would not belong to~$Q^k$. The same holds for cells below, to the right, and to the left of~$c$, thus,~$c$ resides within the axis-aligned bounding box of the extreme cells of~$Q$, and after~$R(Q)$ steps,~$c$ will be occupied, and any hole will be eliminated. \item[(ii)] Assume that there exists a polyomino~$Q$, for which the polyomino~$Q^k$ contains a cell~$c$ of Type~(d). Without loss of generality, assume that the neighbors of~$c$ reside to its left and to its right, and denote them by $c_\ell,c_r$, respectively. Denote by~$c_o$ one of the cells whose inflation created~$c_\ell$, \emph{i.e.}, a cell which belongs to~$Q$ and is at distance at most~$k$ from~$c_\ell$. In addition, denote by $c_u,c_d$ the adjacent perimeter cells which lie immediately above and below~$c$, respectively. The cell~$c_d$ is not occupied, thus, its distance from~$c_o$ is at least~$k+1$, which means that~$c_o$ lies in the same row as~$c_\ell$. Assume for contradiction that~$c_o$ lies in a row below~$c_\ell$. Then, the distance between~$c_o$ and~$c_d$ is at most~$k$, hence~$c_d$ belongs to~$Q^k$. The same holds for~$c_u$; thus, cell~$c_o$ must lie in the same row as~$c_\ell$. Similar considerations show that~$c_o$ must lie to the left of~$c_\ell$, otherwise~$c_d$ and~$c_u$ would be occupied. In the same manner, one of the cells that originated~$c_r$ must lie in the same row as~$c_r$ on its right. Hence, any cell of Type~(d) has cells of~$Q$ to its right and to its left, and thus, it lies inside the axis-aligned bounding box of~$Q$, which will necessarily be filled with polyomino cells after~$R(Q)$ inflation steps. \item[(iii)] Let~$c$ be a Type-(z) perimeter cell of~$Q^k$. Assume, without loss of generality, that the polyomino cells adjacent to it are to its left and to its right, and denote them by~$c_\ell$ and~$c_r$, respectively. Let~$c_o$ denote a cell whose repeated inflation has added~$c_\ell$ to $Q^k$. (Note that~$c_o$ might not be unique.) This cell must lie to the left of~$c$, otherwise, it would be closer to~$c$ than to~$c_\ell$, and~$c$ would not be a perimeter cell. In addition, $c_o$ must lie in the same row as~$c_\ell$, for otherwise, by the same considerations as above, one of the cells above or below~$c$ would be occupied.
The same holds for~$c_r$ (but to its right), thus, cells of Type~(z) must reside between two original cells of~$Q$, \emph{i.e.}, inside the bounding box of~$Q$, and after~$R(Q)$ inflation steps, all cells inside this box will become polyomino cells. \end{itemize} \end{proof} We can now conclude that applying~$R(Q)$ inflation steps to a polyomino~$Q$ eliminates all holes and bridges, and, thus, the polyomino~$Q^k$ will obey the equation $\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$. \begin{lemma} \label{lem:conv-pb4} Let~$Q$ be a polyomino, and let~$k = R(Q)$. We have that $\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$. \end{lemma} \begin{proof} This follows at once from Lemma~\ref{lem:no-hdz} and the derivation in the proof of Theorem~\ref{theorem:pb4}. \end{proof} \section{Polyhexes} \label{sec:polyhexes} In this section, we show that the premises of Theorem~\ref{thm:main} hold for the two-dimensional hexagonal lattice~$\hex$. The roadmap followed in this section is similar to the one used in Section~\ref{sec:polyominoes}. Throughout this section, all the lattice-specific notations refer to~$\hex$. \subsection{Premise 1: Monotonicity} The first premise has been proven for~$\hex$ independently by Vainsencher and Bruckstein~\cite{VainsencherB08} and by F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}. We will use the latter, stronger result, which also includes a formula for~$\minp(n)$. \begin{theorem} \label{thm:minp_hex} \textup{\cite[Thm.~$5.12$]{fulep2010polyiamonds}} $\minp(n) = \ceil{\sqrt{12n-3}\,}+3$. \myqed \end{theorem} Clearly, the function~$\minp(n)$ is weakly-monotone increasing. \subsection{Premise 2: Constant Inflation} To show that the second premise holds, we analyze the different patterns that may appear in the border and perimeter of minimal-perimeter polyhexes. We can classify every border or perimeter cell by one of exactly~24 patterns, distinguished by the number and positions of their adjacent occupied cells. The~24 possible patterns are shown in Figure~\ref{fig:patterns_hex}.
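Before turning to the pattern analysis, we note that, much as in the square case, the formula of Theorem~\ref{thm:minp_hex} can be checked numerically. The following Python sketch (ours, for illustration only; it anticipates the value $c^*=6$ derived later in this section) evaluates the formula exactly, verifies weak monotonicity together with the unit-step property $\minp(n)-\minp(n-1)\leq 1$ for $n \geq 3$ used further below, and checks Lemma~\ref{lemma:jumps-p-1} on the hexagonal lattice.
\begin{verbatim}
import math

def minp_hex(n):
    # minimum perimeter of an n-cell polyhex: ceil(sqrt(12n-3)) + 3
    return math.isqrt(12*n - 4) + 4

# weak monotonicity, and unit steps for n >= 3
vals = [minp_hex(n) for n in range(1, 3000)]
assert all(b >= a for a, b in zip(vals, vals[1:]))
assert all(b - a <= 1 for a, b in zip(vals[1:], vals[2:]))

def minn_hex(p):
    # pseudo-inverse: smallest n with minp_hex(n) = p (None if not attained)
    n = 1
    while minp_hex(n) < p:
        n += 1
    return n if minp_hex(n) == p else None

# the "jumps" lemma with c* = 6:  minn(p + 6) - minn(p) = p - 1
for p in range(6, 60):
    if minn_hex(p) is not None and minn_hex(p + 6) is not None:
        assert minn_hex(p + 6) - minn_hex(p) == p - 1
\end{verbatim}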
\begin{figure} \centering \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b0.txt} \caption{} \label{fig:b0} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b1.txt} \caption{} \label{fig:b1} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b2.txt} \caption{} \label{fig:b2} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b3.txt} \caption{} \label{fig:b3} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b4.txt} \caption{} \label{fig:b4} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b5.txt} \caption{} \label{fig:b5} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b6.txt} \caption{} \label{fig:b6} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b7.txt} \caption{} \label{fig:b7} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b8.txt} \caption{} \label{fig:b8} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b9.txt} \caption{} \label{fig:b9} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b10.txt} \caption{} \label{fig:b10} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b11.txt} \caption{} \label{fig:b11} \end{subfigure} \medskip \\ \begin{subfigure}[t]{0.075\textwidth} \centering \addtocounter{subfigure}{2} \drawpolyhex[scale=0.65]{p0.txt} \caption{} \label{fig:p0} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p1.txt} \caption{} \label{fig:p1} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p2.txt} \caption{} \label{fig:p2} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p3.txt} \caption{} \label{fig:p3} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p4.txt} \caption{} \label{fig:p4} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p5.txt} \caption{} \label{fig:p5} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p6.txt} \caption{} \label{fig:p6} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p7.txt} \caption{} \label{fig:p7} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p8.txt} \caption{} \label{fig:p8} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p9.txt} \caption{} \label{fig:p9} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p10.txt} \caption{} \label{fig:p10} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p11.txt} \caption{} \label{fig:p11} \end{subfigure} \caption{All possible patterns (up to symmetries) of border (first row) and perimeter (second row) cells. The gray cells are polyhex cells, while the white cells are perimeter cells. Each subfigure shows a cell in the middle, and the possible pattern of cells surrounding it.} \label{fig:patterns_hex} \end{figure} Let us recall the equation subject of Lemma~\ref{lemma:pebe}. \[ \abs{\perim{\poly}} + e_P(\poly) = \abs{\border{\poly}} + e_B(\poly). 
\] Our first goal is to express the excess of a polyhex~$\poly$ as a function of the numbers of cells of~$\poly$ of each pattern. We denote the number of cells of a specific pattern in~$\poly$ by $\#\hexagon$, where `$\hexagon$' is one of the~22 patterns listed in Figure~\ref{fig:patterns_hex}. The excess (either border or perimeter excess) of Pattern~$\hexagon$ is denoted by $e(\hexagon)$. (For simplicity, we omit the dependency on~$\poly$ in the notations of~$\#\hexagon$ and~$e(\hexagon)$. This should be understood from the context.) The border excess can be expressed as~$e_B(\poly) = \sum_{\hexagon \in \{a,\dots,l\}} e(\hexagon)\#\hexagon$, and, similarly, the perimeter excess can be expressed as~$e_P(\poly) = \sum_{\hexagon \in \{o,\dots,z\}} e(\hexagon)\#\hexagon$. By plugging these equations into \myeqref{eq:pebe}, we obtain that \begin{equation} \label{eq:all-patterns} \abs{\perim{\poly}} + \sum_{\hexagon \in \{o,\dots,z\}} e(\hexagon)\#\hexagon = \abs{\border{\poly}} + \sum_{\hexagon \in \{a,\dots,l\}} e(\hexagon)\#\hexagon~. \end{equation} The next step of proving the second premise is showing that minimal-perimeter polyhexes cannot contain some of the~22 patterns. This will simplify \myeqref{eq:all-patterns}. \begin{lemma} \label{lemma:no-holes_hex} (Analogous to Lemma~\ref{lemma:no-holes-sqr}.) A minimal-perimeter polyhex does not contains holes. \end{lemma} \begin{proof} Assume to the contrary that there exists a minimal-perimeter polyhex~$\poly$ that contains one or more holes, and let~$\poly'$ be the polyhex obtained by filling one of the holes in~$\poly$. Clearly, $|\poly'| > |\poly|$, and by filling the hole we eliminated some perimeter cells and did not create new perimeter cells. Hence, $\abs{\perim{\poly'}} < \abs{\perim{\poly}}$. This contradicts the fact that~$\minp(n)$ is monotone increasing, as implied by Theorem~\ref{thm:minp_hex}. \myqed \end{proof} Another important observation is that minimal-perimeter polyhexes tend to be ``compact.'' We formalize this observation in the following lemma. Recall the definition of a bridge from Section~\ref{sec:polyominoes}: A \emph{bridge} is a cell whose removal unites two holes or renders the polyhex disconnected (specifically, Patterns~(b), (d), (e), (g), (h), (j), and~(k)). Similarly, a \emph{perimeter bridge} is an empty cell whose addition to the polyhex creates a hole in it (specifically, Patterns~(p), (r), (s), (u), (v), (x), and~(y)). \begin{lemma} \label{lemma:bridges} (Analogous to Lemma~\ref{lemma:no-bridges-sqr}.) Minimal-perimeter polyhexes contain neither bridges nor perimeter bridges. \myqed \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter polyhex, and assume first that it contains a bridge cell~$f$. By Lemma~\ref{lemma:no-holes_hex}, since~$\poly$ does not contain holes, the removal of~$f$ from~$\poly$ will break it into two or three disconnected polyhexes. We can connect these parts by translating one of them towards the other(s) by one cell. (In case of Pattern~(h), the polyhex is broken into three parts, but then translating any of them towards the removed cell would make the polyhex connected again.) Locally, this will eliminate at least two perimeter cells created by the bridge. (This can be verified by exhaustively checking all the relevant patterns.) The size of the new polyhex, $\poly'$, is one less than that of~$\poly$, while the perimeter of~$\poly'$ is smaller by at least two than that of~$\poly$. 
However, Theorem~\ref{thm:minp_hex} implies that~$\minp(n)-\minp(n-1) \leq 1$ for all $n \geq 3$, which is a contradiction to~$\poly$ being a minimal-perimeter polyhex. Assume now that~$\poly$ contains a perimeter bridge. Filling the bridge will not increase the perimeter. (It might create one additional perimeter cell, which will be canceled out with the eliminated (perimeter) bridge cell.) In addition, it will create a hole in the polyhex. Then, filling the hole will create a polyhex with a larger size and a smaller perimeter, which is a contradiction to~$\minp(n)$ being monotone increasing. \myqed \end{proof} As a consequence of Lemma~\ref{lemma:no-holes_hex}, Pattern~(o) cannot appear in any minimal-perimeter polyhex. In addition, Lemma~\ref{lemma:bridges} tells us that the Border Patterns~(b), (d), (e), (g), (h), (j), and~(k), as well as the Perimeter Patterns~(p), (r), (s), (u), (v), (x), and~(y) cannot appear in any minimal-perimeter polyhex. (Note that patterns~(b) and~(p) are not bridges by themselves, but the adjacent cell is a bridge, that is, the cells above the central cells in~\drawpolyhex[scale=0.3]{b1.txt} and~\drawpolyhex[scale=0.3]{p1.txt} are bridges.) Finally, Pattern~(a) appears only in the singleton cell (the unique polyhex of size~1), which can be disregarded. Ignoring all these patterns, we obtain that \begin{equation} \label{eq:pb321} \abs{\perim{\poly}} + 3\#q + 2\#t + \#w = \abs{\border{\poly}} + 3\#c + 2\#f + \#i. \end{equation} Note that Patterns~(l) and~(z) have excess~0, and, hence, although they may appear in minimal-perimeter polyhexes, they do not contribute to the equation. Consider a polyhex which contains only the six feasible patterns that contribute to the excess (those that appear in \myeqref{eq:pb321}). Let~$\xi$ denote the single polygon bounding the polyhex. We now count the number of vertices and the sum of internal angles of~$\xi$ as functions of the numbers of appearances of the different patterns. In order to calculate the number of vertices of~$\xi$, we first determine the number of vertices contributed by each pattern. In order to avoid multiple counting of a vertex, we associate each vertex to a single pattern. Note that each vertex of~$\xi$ is surrounded by three (either occupied or empty) cells, out of which one is empty and two are occupied, or vice versa. We call the cell, whose type (empty or occupied) appears once (among the surrounding three cells), the ``representative'' cell, and count only these representatives. Thus, each vertex is counted exactly once. For example, out of the six vertices surrounding Pattern~(c), five vertices belong to the bounding polygon, but the representative cell of only three of them is the cell at the center of this pattern, thus, by our scheme, Pattern~(c) contributes three vertices, each having a~$2\pi/3$ angle. Similarly, only two of the four vertices in the configuration of Pattern~(t) are represented by the cell at the center of this pattern. In this case, each vertex is the head of a $4\pi/3$ angle. To conclude, the total number of vertices of~$\xi$ is \[ 3\#c+2\#f+\#i+3\#q+2\#t+\#w, \] and the sum of internal angles is \begin{equation} \label{eq:sum-1} (3\#c+2\#f+\#i)2\pi/3 + (3\#q+2\#t+\#w)4\pi/3. \end{equation} On the other hand, it is known that the sum of internal angles is equal to \begin{equation} \label{eq:sum-2} (3\#c+2\#f+\#i+3\#q+2\#t+\#w-2)\pi.
\end{equation} Equating the terms in Formulae~\eqref{eq:sum-1} and~\eqref{eq:sum-2}, we obtain that \begin{equation} \label{eq:sum-3} 3\#c+2\#f+\#i = 3\#q+2\#t+\#w + 6. \end{equation} Plugging this into \myeqref{eq:pb321}, we conclude that~$\abs{\perim{\poly}} = \abs{\border{\poly}} + 6$, as required. We also need to show that the second part of the second premise holds, that is, that if~$\poly$ is a minimal-perimeter polyhex, then $\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}} + 6$. To this aim, note that $\border{I(\poly)} \subset \perim{\poly}$, thus, it is sufficient to show that~$\abs{\perim{I(\poly)}} \leq \abs{\border{I(\poly)}} + 6$. Obviously, \myeqref{eq:all-patterns} holds for the polyhex~$I(\poly)$, hence, in order to prove the relation, we only need to prove the following lemma. \begin{lemma} \label{lemma:inf-no-bridges} If~$\poly$ is a minimal-perimeter polyhex, then~$I(\poly)$ does not contain any bridge. \myqed \end{lemma} \begin{proof} Assume to the contrary that~$I(\poly)$ contains a bridge. Then, the cell that makes the bridge must have been created in the inflation process. However, any cell~$c \in I(\poly) \backslash \poly$ must have a neighboring cell~$c' \in \poly$. All the cells adjacent to~$c'$ must also be part of~$I(\poly)$, thus, cell~$c$ must have three consecutive neighbors around it, namely, $c'$ and the two cells neighboring both~$c$ and~$c'$. The only bridge pattern that fits this requirement is Pattern~(j). However, this means that there must have been a gap of two cells in~$\poly$ that caused the creation of~$c$ during the inflation of~$\poly$. Consequently, by filling the gap and the hole it created, we will obtain (see Figure~\ref{fig:no-j}) a larger polyhex with a smaller perimeter, which contradicts the fact that~$\poly$ is a minimal-perimeter polyhex. \myqed \end{proof} \begin{figure} \centering \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj1.txt} \caption{$Q$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj2.txt} \caption{$I(Q)$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj3.txt} \caption{$Q'$} \end{subfigure} \caption{The construction in Lemma~\ref{lemma:inf-no-bridges} which shows that~$I(\poly)$ cannot contain a cell of Pattern~$(j)$. Assuming that it does, by filling the hole in it, we obtain~$\poly'$ which contradicts the perimeter-minimality of~$\poly$. (The marked cells in~$\poly'$ are those added to~$\poly$.)} \label{fig:no-j} \end{figure} \subsection{Premise 3: Deflation Resistance} We now show that deflating a minimal-perimeter polyhex results in another (smaller) valid polyhex. The intuition behind this condition is that a minimal-perimeter polyhex is ``compact,'' having a shape which does not become disconnected by deflation. \begin{lemma} \label{lemma:def-valid-polyhex} For any minimal-perimeter polyhex~$\poly$, the shape~$D(\poly)$ is also a valid (connected) polyhex. \myqed \end{lemma} \begin{proof} The proof of this lemma is very similar to the first part of the proof of Lemma~\ref{lemma:bridges}. Consider a minimal-perimeter polyhex~$\poly$. In order for~$D(\poly)$ to be disconnected, $\poly$ must contain a bridge of either a single cell or two adjacent cells. A 1-cell bridge cannot be part of~$\poly$ by Lemma~\ref{lemma:bridges}. The polyhex~$\poly$ can neither contain a 2-cell bridge. 
\begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_1.txt} \caption{} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_2.txt} \caption{} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_3.txt} \caption{} \end{subfigure} \caption{An example for the construction in the proof of Lemma~\ref{lemma:bridges}. The two-cell bridge is colored in red in (a). Then, in (b), the bridge is removed, and, in (c), the two parts are ``glued'' together.} \label{fig:two-cell-bridge} \end{figure} Assume to the contrary that it does, as is shown in Figure~\ref{fig:two-cell-bridge}(a). Then, removing the bridge (see Figure~\ref{fig:two-cell-bridge}(b)), and then connecting the two pieces (by translating one of them towards the other by one cell along a direction which makes a $60^{\circ}$ angle with the bridge), creates (Figure~\ref{fig:two-cell-bridge}(c)) a polyhex whose size is smaller by two than that of the original polyhex, and whose perimeter is smaller by at least two (since the perimeter cells adjacent to the bridge disappear). The new polyhex is valid, that is, the translation by one cell of one part towards the other does not make any cells overlap, otherwise there is a hole in the original polyhex, which is impossible for a minimal-perimeter polyhex by Lemma~\ref{lemma:no-holes_hex}. However, we reached a contradiction since for a minimal-perimeter polyhex of size~$n \geq 7$, we have that~$\minp(n) - \minp(n-2) \leq 1$. Finally, it is easy to observe by a tedious inspection that the deflation of any polyhex of size less than~7 results in the empty polyhex. \myqed \end{proof} In conclusion, we have shown that all the premises of Theorem~\ref{thm:main} are satisfied for the hexagonal lattice, and, therefore, inflating a set of all the minimal-perimeter polyhexes of a certain size yields another set of minimal-perimeter polyhexes of another, larger, size. This result is demonstrated in Figure~\ref{fig:hex_corrolary}. \begin{figure} \centering \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_1.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_2.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_3.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_4.txt} \end{subfigure} \medskip \\ \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_1.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_2.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_3.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_4.txt} \end{subfigure} \caption{A demonstration of Theorem~\ref{thm:main} for polyhexes. The top row contains all polyhexes in~$M_9$ (minimal-perimeter polyhexes of size~9), while the bottom row contains their inflated versions, all the members of~$M_{23}$.} \label{fig:hex_corrolary} \end{figure} We also characterized inflation-chain roots of polyhexes. As is mentioned above, the premises of Theorems~\ref{thm:main} and~\ref{thm:root-conditioned} are satisfied for polyhexes~\cite{VainsencherB08,sieben2008polyominoes}, and, thus, the inflation-chain roots are those who have the minimum size for a given minimal-perimeter size. 
An easy consequence of Theorem~\ref{thm:minp_hex} is that the formula~$\floor{\frac{(p-4)^2}{12}+\frac{5}{4}}$ generates all these inflation-chain roots. This result is demonstrated in Figure~\ref{fig:minpH_roots}. \begin{figure} \centering \includestandalone[scale=0.40]{minpH_roots} \vspace{-0.25cm} \caption{The relation between the minimum perimeter of polyhexes, $\minp(n)$, and the inflation-chain roots. The points represent the minimum perimeter of a polyhex of size~$n$, and sizes which are inflation-chain roots are colored in red. The arrows show the mapping between sizes of minimal-perimeter polyhexes (induced by the inflation operation) and demonstrate the proof of Theorem~\ref{thm:root-candidates}.} \label{fig:minpH_roots} \end{figure} As in the case of polyominoes, and as was mentioned earlier, it was already proven elsewhere~\cite{VainsencherB08,fulep2010polyiamonds} that Premise~4 (roots of inflation chains) is fulfilled for the hexagonal lattice. Therefore, we proceed to showing that Premise~5 holds. \subsection{Premise 5: Convergence to a Minimum-Perimeter Polyhex} Similarly to polyominoes, we now show that starting from a polyhex~$\poly$ and applying repeatedly a finite number, $k$, of inflation steps, we obtain a polyhex $\poly^k=I^k(\poly)$, for which $\abs{\perim{I(\poly^k)}} = \abs{\perim{\poly^k}} + 6$. Let~$R(\poly)$ denote the \emph{diameter} of~$\poly$, \emph{i.e.}, the maximal distance between two cells of~$\poly$ when projected onto one of the three main axes. As in the case of polyominoes, some geometric features of~$\poly$ will disappear after $R(\poly)$ inflation steps. \begin{lemma} \label{lem:no-hdz-hex} (Analogous to Lemma~\ref{lem:no-hdz}.) For any $k > R(Q)$, the polyhex~$Q^k$ does not contain any (i)~holes; (ii)~polyhex bridge cells; or (iii)~perimeter bridge cells. \end{lemma} \begin{proof} \begin{itemize} \item[(i)] The proof is identical to the proof for polyominoes. \item[(ii)] After~$R(Q)$ inflation steps, the obtained polyhex is clearly connected. If at this point there exists a bridge cell, then it must have been created in the last inflation step, since after further steps, this cell would cease being a bridge cell. If, during the inflation step that eliminates the mentioned bridge, another bridge is created, then its removal will not render the polyhex disconnected (since it was already connected before applying the inflation step), thus, it must have created a hole in the polyhex, in contradiction to the previous clause. \item[(iii)] We present here a version of the analogous proof for polyominoes, adapted for polyhexes. Let~$c$ be a perimeter bridge cell of~$Q^k$. Assume, without loss of generality, that two of the polyhex cells adjacent to it are above and below it, and denote them by~$c_1$ and~$c_2$, respectively. The cell whose inflation resulted in adding~$c_1$ to the polyhex, denoted by~$c_o$, must reside above~$c$, otherwise, it would be closer to~$c$ than to~$c_1$, and~$c$ would not be a perimeter cell. The same holds for~$c_2$ (below $c$), thus, any perimeter bridge cell must reside between two original cells of~$Q$. Hence, after~$R(Q)$ inflation steps, all such cells will become polyhex cells. \end{itemize} \end{proof} \begin{lemma} \label{lem:conv-pb4-hex} (Analogous to Lemma~\ref{lem:conv-pb4}.) After~$k = R(Q)$ inflation steps, the polyhex~$Q^k$ will obey $\abs{\perim{Q^k}} = \abs{\border{Q^k}} +6$. \end{lemma} \begin{proof} This follows at once from Lemma~\ref{lem:no-hdz-hex} and \myeqref{eq:sum-3}.
\end{proof} \section{Polyiamonds} \label{sec:polyiamonds} Polyiamonds are sets of edge-connected triangles on the regular triangular lattice. Unlike the square and the hexagonal lattice, in which all cells are identical in shape and in their role, the triangular lattice has two types of cells, which are seen as a left and a right pointing arrows (\drawpolyiamond[scale=0.4]{t2_diam.txt},\drawpolyiamond[scale=0.4]{t1_diam.txt}). Due to this complication, inflating a minimal-perimeter polyiamond does not necessarily result in a minimal-perimeter polyiamond. Indeed, the second premise of Theorem~\ref{thm:main} does not hold for polyiamonds. This fact is not surprising, since inflating minimal-perimeter polyiamonds creates ``jaggy'' polyiamonds whose perimeter is not minimal. Figures~\ref{fig:exmp_diamond}(a,b) illustrate this phenomenon. \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond_I.txt} \caption{$I(\poly)$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond_II.txt} \caption{$\poly'$} \end{subfigure} \caption{An example of inflating polyiamonds. The polyiamond~$\poly$ is of a minimum perimeter, however, its inflated version, $I(\poly)$ is not of a minimum perimeter. The polyiamond~$\poly'$, obtained by adding to~$\poly$ all the cells sharing a \emph{vertex} with $\poly$, is a minimal-perimeter polyiamond.} \label{fig:exmp_diamond} \end{figure} However, we can fix this situation in the triangular lattice by modifying the definition of the perimeter of a polyiamond so that it it would include all cells that share a \emph{vertex} (instead of an edge) of the boundary of the polyiamond. Under the new definition, Theorem~\ref{thm:main} holds. The reason for this is surprisingly simple: The modified definition merely mimics the inflation of animals on the graph dual to that of the triangular lattice. (Recall that graph duality maps vertices to faces (cells), and vice versa, and edges to edges.) However, the dual of the triangular lattice is the hexagonal lattice, for which we have already shown in Section~\ref{sec:polyhexes} that all the premises of Theorem~\ref{thm:main} hold. Thus, applying the modified inflation operator in the triangular lattice induces a bijection between sets of minimal-perimeter polyiamonds. This relation is demonstrated in Figure~\ref{fig:exmp_diamond}. \comment{ \section{Polycubes} \label{sec:polycubes} In this section we consider animals in the high dimension square lattice, namely polycubes. Empirically, it seems that inflating all the minimal-perimeter polycubes of a given size the result is all the minimal-perimeter polycubes of some larger size. We can not say it definitively since we are not aware of any algorithm which generates all the minimal-perimeter polycubes other then generating all the polycubes and checking which ones have minimal-perimeter. Since the number of polycubes grows rapidly with the size we can not produce all the minimal with size greater than some relatively small value (in the 3D case, we only know the number of polycubes with size up to $19$). For all the values we did check it seems that the inflation operation does induce a bijection between sets of minimal-perimeter polycubes. However, we can not prove this using Theorem~\ref{thm:main} since the second condition does not hold. 
Even more than that, we can show that Theorem~\ref{thm:main} probably apply only to two dimensional lattices. A conclusion from Lemma~\ref{lemma:pnc_size} is that for a lattice $\lattice$, satisfying the conditions of Theorem~\ref{thm:main} it holds that $\minp_\lattice(n) = \Theta(\sqrt{n})$. It is reasonable to assume that in a $d$-dimensional lattice $\lattice_d$, the relation between the size of a minimal-perimeter animal and its perimeter is roughly as the relation between a $d$-dimensional sphere and its surface area, thus, we can assume that $\minp^{\lattice_d}(n) = \Theta(n^{\frac{d-1}{d}})$, and thus Theorem~\ref{thm:main} does not hold for high dimensional lattices. Proving this relation in high dimensions remains an open problem, and probably another technique should be utilized in order to prove (or disprove) this property in high dimensions. } \section{Conclusion} \label{sec:conclusion} In this paper, we show that the inflation operation induces a bijection between sets of minimal-perimeter animals on any lattice which satisfies three conditions. We demonstrate this result on three planar lattices: the square and hexagonal lattices, and also the triangular lattice (with a modified definition of the perimeter). The most important contribution of this paper is the application of our result to polyhexes. Specifically, we prove that the number of isomers of benzenoid hydrocarbons remains unchanged under circumscribing, a phenomenon which was observed in the literature of chemistry more than~30 years ago but had never been proven until now. However, we do not believe that this set of conditions is necessary. Empirically, it seems that by inflating all the minimal-perimeter polycubes (animals on the 3-dimensional cubical lattice) of a given size, we obtain all the minimal-perimeter polycubes of some larger size. However, the second premise of Theorem~\ref{thm:main} does not hold for this lattice. Moreover, we believe that as stated, Theorem~\ref{thm:main} applies only to 2-dimensional lattices! A simple conclusion from Lemma~\ref{lemma:pnc_size} is that if the premises of Theorem~\ref{thm:main} hold for animals on a lattice~$\lattice$, then~$\minp_\lattice(n) = \Theta(\sqrt{n})$. We find it reasonable to assume that for a $d$-dimensional lattice $\lattice_d$, the relation between the size of a minimal-perimeter animal and its perimeter is roughly the relation between a $d$-dimensional sphere and its surface area. Hence, we conjecture that $\minp^{\lattice_d}(n) = \Theta(n^{1-1/d})$, and, thus, Theorem~\ref{thm:main} is not suitable for higher dimensions.
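Let us sketch informally, using the notation of Lemma~\ref{lemma:pnc_size}, why the $\Theta(\sqrt{n})$ bound follows; this short calculation is only an illustration of the argument, not a formal proof. Starting from a fixed minimal-perimeter animal of size~$n_0$ and perimeter~$p_0 = \minp_\lattice(n_0)$, after~$k$ inflation steps Lemma~\ref{lemma:pnc_size} yields sizes and minimum perimeters
\[
n_k = n_0 + k p_0 + c^*\frac{k(k-1)}{2} = \Theta(k^2)
\qquad\text{and}\qquad
\minp_\lattice(n_k) = p_0 + c^* k = \Theta(k),
\]
so that $\minp_\lattice(n_k) = \Theta(\sqrt{n_k})$ along this sequence of sizes; since $\minp_\lattice(n)$ is monotone increasing and every size~$n$ lies between two consecutive such sizes, the same asymptotic bound holds for all~$n$.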
\bibliographystyle{abbrv} \bibliography{references} \end{document}
*************************************** LLNCS version ***********************************
\newif \ifShowComments \ShowCommentstrue
\documentclass{llncs}
\usepackage{microtype} \usepackage{amsmath} \usepackage{color} \usepackage{cite} \usepackage{pifont} \usepackage{enumitem} \usepackage{amssymb} \usepackage{tikz} \usepackage{subcaption} \usepackage{polylib}
\newif\ifusestandalone \usestandalonetrue
\ifusestandalone \usepackage{standalone} \graphicspath{{./graphics/}} \polypath{./graphics/} \fi
\newcommand*{\gref}[1]{Figure~\ref{#1}} \newcommand*{\grefs}[1]{Figures~\ref{#1}} \newcommand*{\myeqref}[1]{Equation~\eqref{#1}}
\newcommand{\abs}[1]{\left|#1\right|} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\hex}{\mathcal{H}} \newcommand{\squ}{\mathcal{S}} \newcommand{\set}[1]{\left\lbrace#1\right\rbrace} \newcommand{\bra}[1]{\left[#1\right]} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand\polylog{{\rm polylog}} \newcommand{\perim}[1]{\mathcal{P}(#1)} \newcommand{\border}[1]{\mathcal{B}(#1)} \newcommand{\minp}{\epsilon} \newcommand{\minn}{\minp^{-1}} \newcommand{\psize}{e_P(Q)} \newcommand{\bsize}{e_B(Q)} \newcommand{\poly}{Q} \newcommand{\lattice}{\mathcal{L}} \newcommand{\hexagon}{\drawpolyhex[scale=0.6]{single_hex.txt}} \newcommand{\myqed}{\hfill $\Box$} \newcommand{\comment}[1]{\relax} \captionsetup{compatibility=false} \captionsetup[subfigure]{justification=centering} \newtheorem{observation}[theorem]{Observation} \newcommand{\caplabel}[1]{#1}
\ifShowComments \newcommand\red[1]{{\color{red}#1}} \newcommand\blue[1]{{\color{blue}#1}} \newcommand\purple[1]{{\color{purple}#1}} \newcommand\toreview[1]{{\bfseries\color{olive}#1}} \newcommand{\xnote}[3]{#1{[#2: \textbf{#3}]}} \def\gil#1{{\xnote{\blue}{GBS says}{#1}}} \def\gill#1{{\xnote{\red}{GB says}{#1}}} \else \newcommand\toreview[1]{#1} \def\gil#1{} \def\gill#1{} \fi
\begin{document} \frontmatter \pagestyle{headings} \addtocmark{On Minimal-Perimeter Lattice Animals} \mainmatter \title{Minimal-Perimeter Lattice Animals and the Constant-Isomer Conjecture} \date{} \author{ Gill Barequet \quad $\bullet$ \quad Gil Ben-Shachar } \authorrunning{Barequet, Ben-Shachar, Bui, and Osegueda} \tocauthor{ Gill Barequet (Technion, Haifa), Gil Ben-Shachar (Technion, Haifa) } \institute{ Dept.\ of Computer Science, The Technion---Israel Inst.\ of Technology, \\ Haifa~3200003, Israel. E-mail: \texttt{\{barequet,gilbe\}@cs.technion.ac.il} } \maketitle \begin{abstract} We consider minimal-perimeter lattice animals, providing a set of conditions which are sufficient for a lattice to have the property that inflating all minimal-perimeter animals of a certain size yields (without repetitions) all minimal-perimeter animals of a new, larger size. We demonstrate this result on the two-dimensional square and hexagonal lattices. In addition, we characterize the sizes of minimal-perimeter animals on these lattices that are not created by inflating members of another set of minimal-perimeter animals. \end{abstract} \mbox{} \vspace{-12mm} \\ \begin{center} \fbox{\includegraphics[scale=0.8]{graphics/quote.png}} \vspace{2mm} \\ \begin{minipage}{5in} \footnotesize Cyvin S.J., Cyvin B.N., Brunvoll J. (1993) Enumeration of benzenoid chemical isomers with a study of constant-isomer series. In: \emph{Computer Chemistry}, part of \emph{Topics in Current Chemistry} book series, vol.\ 166. Springer, Berlin, Heidelberg.
(p.~117) \end{minipage} \end{center} \section{Introduction} An \emph{animal} on a $d$-dimensional lattice is a connected set of lattice cells, where connectivity is through ($d{-}1$)-dimensional faces of the cells. Specifically, on the planar square lattice, connectivity of cells is through edges. Two animals are considered identical if one can be obtained from the other by \emph{translation} only, without rotations or flipping. (Such animals are called ``fixed'' animals, as opposed to ``free'' animals.) Lattice animals attracted interest in the literature as combinatorial objects~\cite{eden1961two} and as a computational model in statistical physics and chemistry~\cite{temperley1956combinatorial}. (In these areas, one usually considers \emph{site} animals, that is, clusters of lattice vertices, hence, the graphs considered there are the \emph{dual} of our graphs.) In this paper, we consider lattices in two dimensions, specifically, the hexagonal, triangular, and square lattices, where animals are called polyhexes, polyiamonds, and polyominoes, respectively. We show the application of our results to the square and hexagonal lattices, and explain how to extend the latter to the triangular lattice. An example of such animals is shown in figure~\ref{fig:examples}. \begin{figure} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \drawpoly[scale = 0.75]{exmpSqr.txt} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \drawpolyhex[scale = 0.75]{exmpHex.txt} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \centering \drawpolyiamond[scale = 0.70]{exmpTri.txt} \end{subfigure} \caption{An example of a polyomino, a polyhex, and a polyiamond.} \label{fig:examples} \end{figure} Let $A^\lattice(n)$ denote the number of lattice animals of size~$n$, that is, animals composed of $n$ cells, on a lattice~$\lattice$. A major research problem in the study of lattices is understanding the nature of~$A^\lattice(n)$, either by finding a formula for it as a function of~$n$, or by evaluating it for specific values of~$n$. These problems are to this date still open for any nontrivial lattice. Redelmeier~\cite{redelmeier1981counting} introduced the first algorithm for counting all polyominoes of a given size, with no polyomino being generated more than once. Later, Mertens~\cite{Mertens1990} showed that Redelmeier's algorithm can be utilized for any lattice. The first algorithm for counting lattice animals without generating all of them was introduced by Jensen~\cite{jensen2000statistics}. Using his method, the number of animals on the 2-dimensional square, hexagonal, and triangular lattices were computed up to size~56, 46, and~75, respectively. An important measure of lattice animals is the size of their \emph{perimeter} (sometimes called ``site perimeter''). The perimeter of a lattice animal is defined as the set of empty cells adjacent to the animal cells. This definition is motivated by percolation models in statistical physics. In such discrete models, the plane or space is made of small cells (squares or cubes, respectively), and quanta of material or energy ``jump'' from a cell to a neighboring cell with some probability. Thus, the perimeter of a cluster determines where units of material or energy can move to, and guide the statistical model of the flow. 
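To make this definition concrete, the following short Python sketch (ours, included only as an illustration; the coordinate representation and the function names are not taken from any existing code) computes the perimeter of a polyomino given as a set of integer cell coordinates.
\begin{verbatim}
# A polyomino is a set of (x, y) integer cells; connectivity is through edges.
def neighbors(cell):
    x, y = cell
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def perimeter(animal):
    # The set of empty cells adjacent to at least one animal cell.
    return {c for cell in animal for c in neighbors(cell)} - animal

# Example: the L-shaped tromino has a perimeter of 7 cells.
tromino = {(0, 0), (1, 0), (0, 1)}
assert len(perimeter(tromino)) == 7
\end{verbatim}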
\begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \drawpoly[scale = 0.45]{exmpQ.txt} \caption{$Q$} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \drawpoly[scale = 0.45]{exmpIQ.txt} \caption{$I(Q)$} \end{subfigure} \caption{A polyomino~$\poly$ and its inflated polyomino~$I(\poly)$. Polyomino cells are colored gray, while perimeter cells are colored white.} \label{fig:exmp} \end{figure} Asinowski et al.~\cite{asinowski2017enumerating,asinowski2018polycubes} provided formulae for polyominoes and polycubes with perimeter size close to the maximum possible. On the other extreme reside animals with the \emph{minimum} possible perimeter size for their area. The study of polyominoes of minimal perimeter dates back to Wang and Wang~\cite{wang1977discrete}, who identified an infinite sequence of cells on the square lattice, the first~$n$ of which (for any~$n$) form a minimal-perimeter polyomino. Later, Altshuler et al.~\cite{altshuler2006}, and independently Sieben~\cite{sieben2008polyominoes}, studied the closely-related problem of the \emph{maximum} area of a polyomino with~$p$ perimeter cells, and provided a closed formula for the minimum possible perimeter of $n$-cell polyominoes. Minimal-perimeter animals were also studied on other lattices. For animals on the triangular lattice (polyiamonds), the main result is due to F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}, who characterized all the polyiamonds with maximum area for their perimeter, and provided a formula for the minimum perimeter of a polyiamond of size~$n$. Similar results were given by Vainsencher and Bruckstein~\cite{VainsencherB08} for the hexagonal lattice. In this paper, we study an interesting property of minimal-perimeter animals, which relates to the notion of the \emph{inflation} operation. Simply put, inflating an animal is adding to it all its perimeter cells (see Figure~\ref{fig:exmp}). We provide a set of conditions (for a given lattice) which, if they hold, guarantee that inflating all minimal-perimeter animals of some size yields, in a bijective manner, all minimal-perimeter animals of some larger size. While this paper discusses some combinatorial properties of minimal-perimeter polyominoes, another algorithmic question emerges from these properties, namely, ``how many minimal-perimeter polyominoes are there of a given size?'' This question is addressed in detail in a companion paper~\cite{barequet2020algorithms}. The paper is organized as follows. In Section~\ref{sec:main}, we provide some definitions and prove our main theorem. In Sections~\ref{sec:polyominoes} and~\ref{sec:polyhexes}, we show the application of Section~\ref{sec:main} to polyominoes and polyhexes, respectively. Then, in Section~\ref{sec:polyiamonds} we explain how the same result also applies to the regular triangular lattice. We end in Section~\ref{sec:conclusion} with some concluding remarks. \subsection{Polyhexes as Molecules} In addition to research of minimal-perimeter animals in the literature on combinatorics, there has been much more intensive research of minimal-perimeter polyhexes in the literature on organic chemistry, in the context of the structure of families of molecules. For example, a significant amount of work dealt with molecules called \emph{benzenoid hydrocarbons}. It is a known natural fact that molecules made of carbon atoms are structured as shapes on the hexagonal lattice. Benzenoid hydrocarbons are made of carbon and hydrogen atoms only.
In such a molecule, the carbon atoms are arranged as a polyhex, and the hydrogen atoms are arranged around the carbons atoms. \begin{figure} \centering \begin{tabular}{ccc} \raisebox{0.39\height}{\includegraphics[scale=0.13]{Naphtaline.png}} & ~~~ & \includegraphics[scale=0.13]{Circumnaphtaline.png} \\ (a) Naphthalene ($C_{10} H_8$) & & (b) Circumnaphtaline ($C_{32} H_{14})$ \end{tabular} \caption{Naphthalene and its circumscribed version.} \label{fig:naphthalene} \end{figure} Figure~\ref{fig:naphthalene}(a) shows a schematic drawing of the molecule of Naphthalene (with formula~$C_{10}H_8$), the simplest benzenoid hydrocarbon, which is made of ten carbon atoms and eight hydrogen atoms, while Figure~\ref{fig:naphthalene}(b) shows Circumnaphthalene (molecular formula~$C_{32}H_{14}$). There exist different configurations of atoms for the same molecular formula, which are called \emph{isomers} of the same formula. In the field of organic chemistry, a major goal is to enumerate all the different isomers of a given formula. Note that the carbon and hydrogen atoms are modeled by lattice \emph{vertices} and not by cells of the lattice, but as we explain below, the numbers of hydrogen atoms identifies with the number of perimeter cells of the polyhexes under discussion. Indeed, the hydrogen atoms lie on lattice vertices that do not belong to the polyhex formed by the carbon atoms (which also lie on lattice vertices), but are connected to them by lattice edges. In minimal-perimeter polyhexes, each perimeter cell contains exactly two such hydrogen vertices, and every hydrogen vertex is shared by exactly two perimeter cells. (This has nothing to do with the fact that a single cell of the polyhex might be neighboring several---five, in the case of Naphthalene---``empty'' cells.) Therefore, the number of hydrogen atoms in a molecule of a benzenoid hydrocarbon is identical to the size of the perimeter of the imaginary polyhex.\footnote{ In order to model atoms as lattice cells, one might switch to the dual of the hexagonal lattice, that is, to the regular triangular lattice, but this will not serve our purpose. } In a series of papers (culminated in Reference~\cite{dias1987handbook}), Dias provided the basic theory for the enumeration of benzenoid hydrocarbons. A comprehensive review of the subject was given by Brubvoll and Cyvin~\cite{brunvoll1990we}. Several other works~\cite{harary1976extremal,cyvin1991series,dias2010} also dealt with the properties and enumeration of such isomers. The analogue of what we call the ``inflation'' operation is called \emph{circumscribing} in the literature on chemistry. A circumscribed version of a benzenoid hydrocarbon molecule~$M$ is created by adding to~$M$ an outer layer of hexagonal ``carbon cells,'' that is, not only the hydrogen atoms (of~$M$) adjacent to the carbon atoms now turn into carbon atoms, but also new carbon atoms are added at all other ``free'' vertices of these cells so as to ``close'' them. In addition, hydrogen atoms are put at all free lattice vertices that are connected by edges to the new carbon atoms. This process is visualized well in Figure~\ref{fig:naphthalene}. In the literature on chemistry, it is well known that circumscribing all isomers of a given molecular formula yields, in a bijective manner, all isomers that correspond to another molecular formula. (The sequences of molecular formulae that have the same number of isomers created by circumscribing are known as \emph{constant-isomer series}.) 
Although this fact is well known, to the best of our knowledge, no rigorous proof of it was ever given. As mentioned above, we show that inflation induces a bijection between sets of minimal-perimeter animals on the square, hexagonal, and in a sense, also on the triangular lattice. By this, we prove the long-observed (but never proven) phenomenon of ``constant-isomer series,'' that is, that circumscribing isomers of benzenoid hydrocarbon molecules (in our terminology, inflating minimum-perimeter polyhexes) yields all the isomers of a larger molecule. \section{Minimal-Perimeter Animals} \label{sec:main} Throughout this section, we consider animals on some specific lattice~$\lattice$. Our main result consists of a set of conditions on minimal-perimeter animals on~$\lattice$, which is sufficient for satisfying a bijection between sets of minimal-perimeter animals on~$\lattice$. \subsection{Preliminaries} \label{subsec:preliminaries} \begin{figure} \centering \begin{tabular}{ccccc} \drawpolyhex[scale=0.5]{exmp_hex.txt} & & \drawpolyhex[scale=0.5]{exmp_hex_I.txt} & & \drawpolyhex[scale=0.5]{exmp_hex_D.txt}\\ $Q$ & & $I(Q)$ & & $D(Q)$ \end{tabular} \caption{A polyhex~$\poly$, its inflated polyhex~$I(\poly)$, and its deflated polyhex~$D(\poly)$. The gray cells belong to~$\poly$, the white cells are its perimeter, and its border cells are marked with a pattern of dots.} \label{fig:exmp_poly} \end{figure} Let~$\poly$ be an animal on~$\lattice$. Recall that the \emph{perimeter} of~$\poly$, denoted by~$\perim{\poly}$, is the set of all empty lattice cells that are neighbors of at least one cell of~$\poly$. Similarly, the \emph{border} of~$\poly$, denoted by~$\border{\poly}$, is the set of cells of~$\poly$ that are neighbors of at least one empty cell. The \emph{inflated} version of~$\poly$ is defined as~$I(\poly) := \poly \cup \perim{\poly}$. Similarly, the \emph{deflated} version of~$\poly$ is defined as~$D(\poly) := \poly \backslash \border{\poly}$. These operations are demonstrated in Figure~\ref{fig:exmp_poly}. Denote by~$\minp(n)$ the minimum possible size of the perimeter of an $n$-cell animal on~$\lattice$, and by~$M_n$ the set of all minimal-perimeter $n$-cell animals on~$\lattice$. \subsection{A Bijection} \begin{theorem} \label{thm:main} Consider the following set of conditions. \begin{enumerate}[label=(\arabic*)] \item The function~$\minp(n)$ is weakly-monotone increasing. \item There exists some constant~$c^* = c^*(\lattice)$, for which, for any minimal-perimeter animal~$\poly$, we have that~$\abs{\perim{\poly}} = \abs{\border{\poly}} + c^*$ and~$\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}}+c^*$. \item If~$\poly$ is a minimal-perimeter animal of size $n+\minp(n)$, then~$D(\poly)$ is a valid (connected) animal. \end{enumerate} If all the above conditions hold for~$\lattice$, then~$\abs{M_n} = \abs{M_{n+\minp(n)}}$. If these conditions are not satisfied for only a finite amount of sizes of animals, then the claim holds for all sizes greater than some lattice-dependent nominal size~$n_0$. \myqed \end{theorem} \begin{proof} We begin with proving that inflation preserves perimeter minimality. \begin{lemma} \label{lemma:minimal-inflating} If~$\poly$ is a minimal-perimeter animal, then~$I(\poly)$ is a minimal-perimeter animal as well. \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter animal. 
Assume to the contrary that~$I(\poly)$ is not a minimal-perimeter animal; then, there exists an animal~$\poly'$ such that $\abs{\poly'} = \abs{I(\poly)}$, and~$\abs{\perim{\poly'}} < \abs{\perim{I(\poly)}}$. By the second premise of Theorem~\ref{thm:main}, we know that $\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}} + c^*$, thus, $\abs{\perim{\poly'}} < \abs{\perim{\poly}}+c^*$, and since~$\poly'$ is a minimal-perimeter animal, we also know by the same premise that~$\abs{\perim{\poly'}} = \abs{\border{\poly'}}+c^*$, and, hence, that~$\abs{\border{\poly'}} < \abs{\perim{\poly}}$. Consider now the animal~$D(\poly')$. Recall that $\abs{\poly'} = \abs{I(\poly)}=\abs{\poly}+\abs{\perim{\poly}}$, thus, the size of~$D(\poly')$ is at least~$\abs{\poly}+1$, and~$\abs{\perim{D(\poly')}} < \abs{\perim{\poly}} = \minp(n)$ (since the perimeter of~$D(\poly')$ is a subset of the border of~$\poly'$). This is a contradiction to the first premise, which states that the sequence~$\minp(n)$ is monotone increasing. Hence, the animal~$\poly'$ cannot exist, and~$I(\poly)$ is a minimal-perimeter animal. \myqed \end{proof} We now proceed to demonstrating the effect of repeated inflation on the size of minimal-perimeter animals. \begin{lemma} \label{lemma:pnc_size} The minimum perimeter size of animals of size~$n+k\minp(n)+c^*k(k-1)/2$ (for~$n > 1$ and any $k \in \N$) is~$\minp(n)+c^*k$. \end{lemma} \begin{proof} We repeatedly inflate a minimal-perimeter animal~$\poly$, whose initial size is~$n$. The size of the perimeter of~$\poly$ is~$\minp(n)$, thus, inflating it creates a new animal of size~$n+\minp(n)$, and the size of the border of~$I(\poly)$ is~$\minp(n)$, thus, by the second premise, the size of the perimeter of~$I(\poly)$ is~$\minp(n) + c^*$. Continuing the inflation of the animal, the $k$th inflation will increase the size of the animal by $\minp(n) + (k-1)c^*$ and will increase the size of the perimeter by~$c^*$. Summing up these quantities yields the claim. \myqed \end{proof} Next, we prove that inflation preserves difference, that is, inflating two different minimal-perimeter animals (of equal or different sizes) always produces two different new animals. (Note that this is not true for non-minimal-perimeter animals.) \begin{lemma} \label{lemma:different_inflating} Let~$\poly_1,\poly_2$ be two different minimal-perimeter animals. Then, regardless of whether or not~$\poly_1,\poly_2$ have the same size, the animals~$I(\poly_1)$ and~$I(\poly_2)$ are different as well. \end{lemma} \begin{proof} Assume to the contrary that $\poly = I(\poly_1) = I(\poly_2)$, that is, $\poly = \poly_1 \cup \perim{\poly_1} = \poly_2 \cup \perim{\poly_2}$. In addition, since $\poly_1 \neq \poly_2$, and since a cell cannot belong simultaneously to both an animal and to its perimeter, this means that~$\perim{\poly_1} \neq \perim{\poly_2}$. The border of~$\poly$ is a subset of both~$\perim{\poly_1}$ and~$\perim{\poly_2}$, that is, $\border{\poly} \subset \perim{\poly_1} \cap \perim{\poly_2}$. Since~$\perim{\poly_1} \neq \perim{\poly_2}$, we obtain that either~$\abs{\border{\poly}} < \abs{\perim{\poly_1}}$ or~$\abs{\border{\poly}} < \abs{\perim{\poly_2}}$; assume without loss of generality the former case. Now consider the animal~$D(\poly)$. Its size is~$\abs{\poly}-\abs{\border{\poly}}$. The size of~$\poly$ is~$\abs{\poly_1}+\abs{\perim{\poly_1}}$, thus, $\abs{D(\poly)} > \abs{\poly_1}$, and since the perimeter of~$D(\poly)$ is a subset of the border of~$\poly$, we conclude that~$\abs{\perim{D(\poly)}} < \abs{\perim{\poly_1}}$.
However, $\poly_1$ is a minimal-perimeter animal, which is a contradiction to the first premise of the theorem, which states that~$\minp(n)$ is monotone increasing. \myqed \end{proof} To complete the cycle, we also prove that for any minimal-perimeter animal~$\poly \in M_{n+\minp(n)}$, there is a minimal-perimeter source in~$M_n$, that is, an animal~$\poly'$ whose inflation yields~$\poly$. Specifically, this animal is $D(\poly)$. \begin{lemma} \label{lemma:deflating} For any~$\poly \in M_{n+\minp(n)}$, we also have that~$I(D(\poly)) = \poly$. \end{lemma} \begin{proof} Since~$\poly \in M_{n+\minp(n)}$, we have by Lemma~\ref{lemma:pnc_size} that~$\abs{\perim{\poly}} = \minp(n)+c^*$. Combining this with the equality~$\abs{\perim{\poly}} = \abs{\border{\poly}}+c^*$, we obtain that~$\abs{\border{\poly}} = \minp(n)$, thus, $\abs{D(\poly)} = n$ and $\abs{\perim{D(\poly)}} \geq \minp(n)$. Since the perimeter of~$D(\poly)$ is a subset of the border of~$\poly$, and~$\abs{\border{\poly}} = \minp(n)$, we conclude that the perimeter of~$D(\poly)$ and the border of~$\poly$ are the same set of cells, and, hence, $I(D(\poly)) = \poly$. \myqed \end{proof} Let us now wrap up the proof of the main theorem. In Lemma~\ref{lemma:minimal-inflating} we have shown that for any minimal-perimeter animal~$\poly \in M_n$, we have that~$I(\poly) \in M_{n+\minp(n)}$. In addition, Lemma~\ref{lemma:different_inflating} states that the inflation of two different minimal-perimeter animals results in two other different minimal-perimeter animals. Combining the two lemmata, we obtain that~$\abs{M_n} \leq \abs{M_{n+\minp(n)}}$. On the other hand, in Lemma~\ref{lemma:deflating} we have shown that if~$\poly \in M_{n+\minp(n)}$, then~$I(D(\poly)) = \poly$, and, thus, for any animal in~$M_{n+\minp(n)}$, there is a unique source in~$M_n$ (specifically, $D(\poly)$), whose inflation yields~$\poly$. Hence, $\abs{M_n} \geq \abs{M_{n+\minp(n)}}$. Combining the two relations, we conclude that~$\abs{M_n} = \abs{M_{n+\minp(n)}}$. \myqed \end{proof} \subsection{Inflation Chains} Theorem~\ref{thm:main} implies that there exist infinite chains of sets of minimal-perimeter animals, each set obtained by inflating all members of the previous set, while the cardinalities of all sets in a chain are equal. Obviously, there are sets of minimal-perimeter animals that are not created by the inflation of any other sets. We call the size of animals in such sets an \emph{inflation-chain root}. Using the definitions and proofs in the previous section, we are able to characterize which sizes can be inflation-chain roots. Then, using one more condition, which holds in the lattices we consider, we determine which values are the actual inflation-chain roots. To this aim, we define the pseudo-inverse function \[ \minp^{-1}(p) = \min\set{n \in \N \mid \minp(n) = p}. \] Since~$\minp(n)$ is a monotone-increasing discrete function, it is a step function, and the value of~$\minp^{-1}(p)$ is the first point in each step. \begin{theorem} \label{thm:root-candidates} Let~$\lattice$ be a lattice satisfying the premises of Theorem~\ref{thm:main}. Then, all inflation-chain roots are either~$\minp^{-1}(p)$ or~$\minp^{-1}(p)-1$, for some $p \in \N$. \end{theorem} \begin{proof} Recall that~$\minp(n)$ is a step function, where each step represents all animal sizes for which the minimal perimeter is~$p$. Let us denote the start and end of the step representing the perimeter~$p$ by~$n_b^p$ and~$n_e^p$, respectively. 
Formally, $n_b^p = \minp^{-1}(p)$ and~$n_e^p = \minp^{-1}(p+1)-1$. For each size~$n$ of animals in the step~$\bra{n_b^p,n_e^p}$, inflating a minimal-perimeter animal of size~$n$ results in an animal of size~$n{+}p$, and by~Lemma~\ref{lemma:pnc_size}, the perimeter of the inflated animal is~$p{+}c^*$. Thus, the inflation of animals of all sizes in the step of perimeter~$p$ yields animals that appear in the step of perimeter~$p{+}c^*$. In addition, they appear in a \emph{consecutive} portion of the step, specifically, the range~$\bra{n_b^p+p,n_e^p+p}$. Similarly, the step~$\bra{n_b^{p+1},n_e^{p+1}}$ is mapped by inflation to the range~$\bra{n_b^{p+1}+p+1,n_e^{p+1}+p+1}$, which is a portion of the step of~$p{+}1$. Note that the former range ends at~$n_e^p+p = n_b^{p+1}+p-1$, while the latter range starts at~$n_b^{p+1}+p+1$, thus, there is exactly one size of animals, specifically,~$n_b^{p+1}+p$, which is not covered by inflating animals in the ranges~$\bra{n_b^p+p,n_e^p+p}$ and~$\bra{n_b^{p+1},n_e^{p+1}}$. These two ranges represent two different perimeter sizes. Hence, the size~$n_b^{p+1}+p$ must be either the end of the first step, $n_e^{p+c^*}$, or the beginning of the second step, $n_b^{p+c^*+1}$. This concludes the proof. \myqed \end{proof} The arguments of the proof of Theorem~\ref{thm:root-candidates} are visualized in Figure~\ref{fig:minpH_roots} for the case of polyhexes. In fact, as we show below (see Theorem~\ref{thm:root-conditioned}), only the second option exists, but in order to prove this, we also need a maximality-conservation property of the inflation operation. Here is another perspective for the above result. Note that minimal-perimeter animals, with size corresponding to $n_e^{p}$ (for some~$p \in \N$), are the largest animals with perimeter~$p$. Intuitively, animals with the largest size, for a certain perimeter size, tend to be ``spherical'' (``round'' in two dimensions), and inflating them makes them even more spherical. Therefore, one might expect that for a general lattice, the inflation operation will preserve the property of animals being the largest for a given perimeter. In fact, this has been proven rigorously for the square lattice~\cite{altshuler2006,sieben2008polyominoes} and for the hexagonal lattice~\cite{VainsencherB08,fulep2010polyiamonds}. However, this also means that inflating a minimal-perimeter animal of size~$n_e^p$ yields a minimal-perimeter animal of size~$n_e^{p+c^*}$, and, thus, $n_e^p$ cannot be an inflation-chain root. We summarize this discussion in the following theorem. \begin{theorem} \label{thm:root-conditioned} Let~$\lattice$ be a lattice for which the three premises of Theorem~\ref{thm:main} are satisfied, and, in addition, the following condition holds. \begin{enumerate}[label=(\arabic*)] \setcounter{enumi}{3} \item The inflation operation preserves the property of having a maximum size for a given perimeter. \end{enumerate} Then, the inflation-chain roots are precisely~$(\minp_\lattice)^{-1}(p)$, for all $p \in \N$. \myqed \end{theorem} \subsection{Convergence of Inflation Chains} We now discuss the structure of inflated animals, and show that under a certain condition, inflating repeatedly \emph{any} animal (or actually, any set, possibly disconnected, of lattice cells) ends up in a minimal-perimeter animal after a finite number of inflation steps. Let~$I^k(Q)$ ($k>0$) denote the result of applying repeatedly~$k$ times the inflating operator~$I(\cdot)$, starting from the animal~$Q$. 
Equivalently, \[ I^k(Q) = Q \cup \set{c \mid \mbox{Dist}(c,Q) \leq k}, \] where~$\mbox{Dist}(c,Q)$ is the lattice distance from a cell~$c$ to the animal~$Q$. For brevity, we will use the notation $Q^k = I^k(Q)$. Let us define the function \( \phi(Q) = \minn(\abs{\perim{Q}}) - \abs{Q} \) and explain its meaning. When~$\phi(Q) \geq 0$, it counts the cells that should be added to~$Q$, with no change to its perimeter, in order to make it a minimal-perimeter animal. In particular, if~$\phi(Q) = 0$, then~$Q$ is a minimal-perimeter animal. Otherwise, if~$\phi(Q) < 0$, then~$Q$ is also a minimal-perimeter animal, and $\abs{\phi(Q)}$ cells can be removed from~$Q$ while still keeping the result a minimal-perimeter animal and without changing its perimeter. \begin{lemma} \label{lemma:jumps-p-1} For any value of~$p$, we have that~$\minn(p+c^*)-\minn(p) = p-1$. \end{lemma} \begin{proof} Let~$Q$ be a minimal-perimeter animal with area~$n_b^p = \minn(p)$. The area of~$I(Q)$ is~$n_b^p+p$, thus, by Theorem~\ref{thm:main}, $\abs{\perim{I(Q)}} = p+c^*$. The area~$n_b^{p+c^*}$ is an inflation-chain root, hence, the area of~$I(Q)$ cannot be~$n_b^{p+c^*}$. Except~$n_b^{p+c^*}$, animals of all other areas in the range~$[n_b^{p+c^*},\dots,n_e^{p+c^*}]$ are created by inflating minimal-perimeter animals with perimeter~$p$. The animal~$Q$ is of area~$n_b^p$, hence, the area of~$I(Q)$ must be the minimum area in $\bra{n_b^{p+c^*},n_e^{p+c^*}}$ which is not an inflation-chain root. Hence, the area of~$I(Q)$ is~$n_b^{p+c^*}+1$. We now equate the two expressions for the area of $I(Q)$: $n_b^p+p = n_b^{p+c^*}+1$. That is, $n_b^{p+c^*}-n_b^{p} = p-1$. The claim follows. \end{proof} Using Lemma~\ref{lemma:jumps-p-1}, we can deduce the following result. \begin{lemma} \label{lem:conv-step} If~$\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$, then~$\phi(I(Q)) = \phi(Q)-1$. \end{lemma} \begin{proof} \begin{align*} \phi(I(Q)) &= \minn(\abs{\perim{I(Q)}}) - \abs{I(Q)} \\ &= \minn(\abs{\perim{Q}}+c^*) - (\abs{Q} + \abs{\perim{Q}}) \\ &= \minn(\abs{\perim{Q}}) + \abs{\perim{Q}} -1 - \abs{Q} - \abs{\perim{Q}} \\ &= \minn(\abs{\perim{Q}}) -\abs{Q} - 1 \\ &= \phi(Q) - 1. \end{align*} \end{proof} Lemma~\ref{lem:conv-step} tells us that inflating an animal~$Q$ that satisfies $\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$ reduces $\phi(Q)$ by $1$. In other words, $I(Q)$ is ``closer'' than~$Q$ to being a minimal-perimeter animal. This result is stated more formally in the following theorem. \begin{theorem} \label{thm:convergence} Let~$\lattice$ be a lattice for which the four premises of Theorems~\ref{thm:main} and~\ref{thm:root-conditioned} are satisfied, and, in addition, the following condition holds. \begin{enumerate}[label=(\arabic*)] \setcounter{enumi}{4} \item For every animal~$Q$, there exists some finite number~$k_0 = k_0(Q)$, such that for every $k>k_0$, we have that~$\abs{\perim{Q^{k+1}}} = \abs{\perim{Q^{k}}} + c^*$. \end{enumerate} Then, after a finite number of inflation steps, any animal becomes a minimal-perimeter animal. \end{theorem} \begin{proof} The claim follows from Lemma~\ref{lem:conv-step}. After~$k_0$ inflation operations, the premise of this lemma holds. Then, any additional inflation step will reduce~$\phi(Q)$ by~$1$ until~$\phi(Q)$ is nullified, which is precisely when the animal becomes a minimal-perimeter animal. (Any additional inflation steps would add superfluous cells, in the sense that they can be removed while keeping the animal a minimal-perimeter animal.)
\end{proof} \section{Polyominoes} \label{sec:polyominoes} Throughout this section, we consider the two-dimensional square lattice~$\squ$, and show that the premises of Theorem~\ref{thm:main} hold for this lattice. The lattice-specific notation ($M_n$, $\minp(n)$, and~$c^*$) in this section refer to~$\squ$. \subsection{Premise 1: Monotonicity} The function~$\minp^\squ(n)$, that gives the minimum possible size of the perimeter of a polyomino of size~$n$, is known to be weakly-monotone increasing. This fact was proved independently by Altshuler et al.~\cite{altshuler2006} and by Sieben~\cite{sieben2008polyominoes}. The latter reference also provides the following explicit formula. \begin{theorem} \label{thm:minp_sqr} \textup{\cite[Thm.~$5.3$]{sieben2008polyominoes}} $\minp^\squ(n) = \ceil{\sqrt{8n-4} \,}+2$. \myqed \end{theorem} \subsection{Premise 2: Constant Inflation} The second premise is apparently the hardest to show. We will prove that it holds for~$\squ$ by analyzing the patterns which may appear on the border of minimal-perimeter polyominoes. Asinowski et al.~\cite{asinowski2017enumerating} defined the \emph{excess} of a perimeter cell as the number of adjacent occupied cell minus one, and the total \emph{perimeter excess} of an animal~$\poly$, $e_P(\poly)$, as the sum of excesses over all perimeter cells of~$\poly$. We extend this definition to border cells, and, in a similar manner, define the \emph{excess} of a border cell as the number of adjacent empty cells minus one, and the \emph{border excess} of~$\poly$, $e_B(\poly)$, as the sum of excesses over all border cells of~$\poly$. First, we establish a connection between the size of the perimeter of a polyomino to the size of its border. The following formula is universal for all lattice animals. \begin{lemma} \label{lemma:pebe} For every animal~$\poly$, we have that \begin{equation} \label{eq:pebe} \abs{\perim{\poly}} + e_P(\poly) = \abs{\border{\poly}} + e_B(\poly). \end{equation} \end{lemma} \begin{proof} Consider the (one or more) rectilinear polygons bounding the animal~$\poly$. The two sides of the equation are equal to the total length of the polygon(s) in terms of lattice edges. Indeed, this length can be computed by iterating over either the border or the perimeter cells of~$\poly$. In both cases, each cell contributes one edge plus its excess to the total length. The claim follows. \myqed \end{proof} \begin{figure} \centering \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs1.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs2.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs3.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{bs4.txt} \caption{} \end{subfigure} \qquad \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps1.txt} \addtocounter{subfigure}{18} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps2.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps3.txt} \caption{} \end{subfigure} \begin{subfigure}[t]{0.1\textwidth} \centering \drawpoly[scale=0.5]{ps4.txt} \caption{} \end{subfigure} \caption{All possible patterns of cells, up to symmetries, with positive excess. The gray cells are polyomino cells, while the white cells are perimeter cells. 
The centers of the ``crosses'' are the subject cells, and the patterns show the immediate neighbors of these cells. Patterns~(a--d) exhibit excess border cells, while Patterns~(w--z) exhibit excess perimeter cells. } \label{fig:patterns_sqr} \end{figure} \begin{figure} \centering \drawpoly{patterns.txt} \caption{A~sample polyomino with marked patterns.} \label{fig:patterns-exmp} \end{figure} Let~$\#\square$ be the number of excess cells of a certain type in a polyomino, where~`$\square$' is one of the symbols~$a$--$d$ and~$w$--$z$, as classified in Figure~\ref{fig:patterns_sqr}. Figure~\ref{fig:patterns-exmp} depicts a polyomino which includes cells of all these types. Counting~$e_P(Q)$ and~$e_B(Q)$ as functions of the different patterns of excess cells, we see that \( e_B(Q) = \#a + 2\#b + 3\#c + \#d \) and \( e_P(Q) = \#w + 2\#x + 3\#y + \#z. \) Substituting~$e_B$ and~$e_P$ in \myeqref{eq:pebe}, we obtain that \[ \psize = \bsize + \#a + 2\#b+3\#c + \#d - \#w - 2\#x-3\#y - \#z. \] Since Pattern~(c) is a singleton cell, we can ignore it in the general formula. Thus, we have that \[ \psize = \bsize + \#a + 2\#b + \#d - \#w - 2\#x-3\#y - \#z. \] We now simplify the equation above, first by eliminating the hole pattern, namely, Pattern~(y). \begin{lemma} \label{lemma:no-holes-sqr} Any minimal-perimeter polyomino is simply connected (that is, it does not contain holes). \end{lemma} \begin{proof} The sequence~$\minp(n)$ is weakly-monotone increasing.\footnote{ In the sequel, we simply say ``monotone increasing.'' } Assume that there exists a minimal-perimeter polyomino~$\poly$ with a hole. Consider the polyomino~$\poly'$ that is obtained by filling this hole. The area of~$\poly'$ is clearly larger than that of~$\poly$, however, the perimeter size of~$\poly'$ is smaller than that of~$\poly$ since we eliminated the perimeter cells inside the hole but did not introduce new perimeter cells. This is a contradiction to~$\minp(n)$ being monotone increasing. \myqed \end{proof} Next, we continue to eliminate terms from the equation by showing some invariant related to the turns of the boundary of a minimal-perimeter polyomino. \begin{lemma} \label{lemma:sum_of_turns} For a simply connected polyomino, we have that \( \#a +2\#b -\#w -2\#x = 4. \) \end{lemma} \begin{proof} The boundary of a polyomino without holes is a simple polygon, thus, the sum of its internal angles is $(v-2)\pi$, where~$v$ is the complexity (number of vertices) of the polygon. Note that Pattern~(a) (resp.,~(b)) adds one (resp., two) $\pi/2$-vertex to the polygon. Similarly, Pattern~(w) (resp.~(x)) adds one (resp., two) $3\pi/2$-vertex. All other patterns do not involve vertices. Let~$L = \#a+2\#b$ and~$R = \#w+2\#x$. Then, the sum of angles of the boundary polygon implies that $L \cdot \pi/2 + R \cdot 3\pi/2 = (L+R-2) \cdot \pi$, that is, $L-R = 4$. The claim follows. \myqed \end{proof} Finally, we show that Patterns~(d) and~(z) cannot exist in a minimal-perimeter polyomino. We define a \emph{bridge} as a cell whose removal renders the polyomino disconnected. Similarly, a perimeter bridge is a perimeter cell that neighbors two or more connected components of the complement of the polyomino. Observe that minimal-perimeter polyominoes do not contain any bridges, \emph{i.e.}, cells of Patterns~(d) or~(z). This is stated in the following lemma. \begin{lemma} \label{lemma:no-bridges-sqr} A minimal-perimeter polyomino does not contain any bridge cells. \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter polyomino. 
For the sake of contradiction, assume first that there is a cell~$f \in \perim{\poly}$ as part of Pattern~(z). Assume without loss of generality that the two adjacent polyomino cells are to the left and to the right of~$f$. These two cells must be connected, thus, the area below (or above)~$f$ must form a cavity in the polyomino shape. Let, then, $\poly'$ be the polyomino obtained by adding~$f$ to~$\poly$ and filling the cavity. \grefs{fig:no_z+d}(a,b) illustrate this situation. The cell directly above~$f$ becomes a perimeter cell, the cell~$f$ ceases to be a perimeter cell, and at least one perimeter cell in the area filled below~$f$ is eliminated, thus,~$\abs{\perim{\poly'}} < \abs{\perim{\poly}}$ and~$\abs{\poly'} > \abs{\poly}$, which is a contradiction to the sequence~$\minp(n)$ being monotone increasing. Therefore, the polyomino~$\poly$ does not contain perimeter cells that fit Pattern~(z). \begin{figure} \centering \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z1.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z2.txt} \caption{$\poly'$} \end{subfigure} \qquad \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_d1.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpoly[scale=0.5]{no_d2.txt} \caption{$\poly'$} \end{subfigure} \caption{Forbidden patterns for the proof of Theorem~\ref{theorem:pb4}.} \label{fig:no_z+d} \end{figure} Now assume for contradiction that~$\poly$ contains a cell~$f$ that forms Pattern~(d). Let~$\poly'$ be the polyomino obtained from~$\poly$ by removing~$f$ (this breaks the polyomino into two separate pieces) and then shifting to the left the piece on the right (this unites the two pieces into a new polyomino). \grefs{fig:no_z+d}(c,d) demonstrate this situation. This operation is always valid since~$\poly$ is of minimal perimeter, hence, by Lemma~\ref{lemma:no-holes-sqr}, it is simply connected, and thus, removing~$f$ breaks~$\poly$ into two separate polyominoes with a gap of one cell in between. Shifting to the left the piece on the right will not create a collision since this would mean that the two pieces were touching, which is not the case. On the other hand, the shift will eliminate the gap that was created by the removal of~$f$, hence, the two pieces will now form a new connected polyomino. The area of~$\poly'$ is one less than the area of~$\poly$, and the perimeter of~$\poly'$ is smaller by at least two than the perimeter of~$\poly$, since the perimeter cells below and above~$f$ cease to be part of the perimeter, and connecting the two parts does not create new perimeter cells. From the formula of~$\minp(n)$, we know that $\minp(n)-\minp(n-1) \leq 1$ for~$n \geq 3$. However, $\abs{\poly} - \abs{\poly'} = 1$ and~$\abs{\perim{\poly}} - \abs{\perim{\poly'}} \geq 2$, hence, $\poly$ is not a minimal-perimeter polyomino, which contradicts our assumption. Therefore, there are no cells in~$\poly$ that fit Pattern~(d). This completes the proof. \myqed \end{proof} We are now ready to wrap up the proof of the constant-inflation theorem. \begin{theorem} \label{theorem:pb4} (Stepping Theorem) For any minimal-perimeter polyomino~$\poly$ (except the singleton cell), we have that $\psize=\bsize+4.$ \end{theorem} \begin{proof} Lemma~\ref{lemma:sum_of_turns} tells us that~$\psize=\bsize+4+\#d-\#z$.
By Lemma~\ref{lemma:no-bridges-sqr}, we know that $\#d = \#z = 0$. The claim follows at once. \myqed \end{proof} \subsection{Premise 3: Deflation Resistance} \begin{lemma} \label{lemma:def_valid} Let~$\poly$ be a minimal-perimeter polyomino of area~$n+\minp(n)$ (for $n \geq 3$). Then, $D(\poly)$ is a valid (connected) polyomino. \end{lemma} \begin{proof} Assume to the contrary that~$D(\poly)$ is not connected, so that it is composed of at least two connected parts. Assume first that~$D(\poly)$ is composed of exactly two parts, $\poly_1$ and~$\poly_2$. Define the \emph{joint perimeter} of the two parts, $\perim{\poly_1,\poly_2}$, to be~$\perim{\poly_1} \cup \perim{\poly_2}$. Since~$\poly$ is a minimal-perimeter polyomino of area $n+\minp(n)$, we know by Theorem~\ref{theorem:pb4} that its perimeter size is~$\minp(n)+4$ and its border size is~$\minp(n)$. Thus, the size of~$D(\poly)$ is exactly~$n$ regardless of whether or not~$D(\poly)$ is connected. Since deflating~$\poly$ results in~$\poly_1 \cup \poly_2$, the polyomino~$\poly$ must have an (either horizontal, vertical, or diagonal) ``bridge'' of border cells which disappears by the deflation. The width of the bridge is at most~2, thus, $\abs{\perim{\poly_1} \cap \perim{\poly_2}} \leq 2$. Hence, $\abs{\perim{\poly_1}} + \abs{\perim{\poly_2}} - 2 \leq \abs{\perim{\poly_1,\poly_2}}$. Since~$\perim{\poly_1,\poly_2}$ is a subset of~$\border{\poly}$, we have that $\abs{\perim{\poly_1,\poly_2}} \leq \minp(n)$. Therefore, \begin{equation} \label{eq:def_valid_1} \minp(\abs{\poly_1}) + \minp(\abs{\poly_2}) - 2 \leq \minp(n). \end{equation} Recall that~$\abs{\poly_1} + \abs{\poly_2} = n$. It is easy to observe that~$\minp(\abs{\poly_1})+\minp(\abs{\poly_2})$ is minimized when~$\abs{\poly_1}=1$ and $\abs{\poly_2} = n-1$ (or vice versa); see Figure~\ref{fig:minp_plot}. \begin{figure} \centering \includegraphics[scale=0.4]{minp_s.eps} \caption{The function~$\minp(n)$.} \label{fig:minp_plot} \end{figure} Had~$\minp(n)$ been $2+\sqrt{8n-4}$ (without rounding up), this would be obvious. But since $\minp(n) = \left\lceil 2+\sqrt{8n-4} \, \right\rceil$, it is a step function (with an infinite number of intervals), where the gap between all successive steps is exactly~1, except the gap between the two leftmost steps which is~2. This guarantees that despite the rounding, the minimum of~$\minp(\abs{\poly_1})+\minp(\abs{\poly_2})$ occurs as claimed. Substituting this into \myeqref{eq:def_valid_1}, and using the fact that~$\minp(1)=4$, we see that $\minp(n-1) + 2 \leq \minp(n)$. However, we know~\cite{sieben2008polyominoes} that $\minp(n) - \minp(n-1) \leq 1$ for $n\geq 3$, which is a contradiction. Thus, the deflated version of~$\poly$ cannot split into two parts unless it splits into two singleton cells, which is indeed the case for a minimal-perimeter polyomino of size~8, specifically, \( D( \!\! \raisebox{-2.25mm}{\drawpoly[scale=0.3]{diag8.txt}} \!\! ) = \!\! \raisebox{-1.5mm}{\drawpoly[scale=0.3]{diag2.txt}} \). The same method can be used for showing that~$D(\poly)$ cannot be composed of more than two parts. Note that this proof does not hold for polyominoes whose area is not of the form~$n+\minp(n)$, but it suffices for the use in Theorem~\ref{thm:main}. \myqed \end{proof} As mentioned earlier, it was already proven elsewhere~\cite{altshuler2006,sieben2008polyominoes} that Premise~4 (roots of inflation chains) is fulfilled for the square lattice. Therefore, we proceed to showing that Premise~5 holds.
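Before doing so, we include a short illustrative sketch (ours, not part of the original argument) that checks the Stepping Theorem and the deflation premise on a concrete example. It assumes the standard notions used throughout: the perimeter consists of the empty cells that are edge-adjacent to the polyomino, the border consists of the polyomino cells that are edge-adjacent to an empty cell, and deflation removes the border. The diamond-shaped polyomino below serves only as an example; the code itself verifies that it attains $\minp(13)=12$.
\begin{verbatim}
from math import ceil, sqrt

def neighbors(c):
    x, y = c
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(Q):            # empty cells edge-adjacent to Q
    return {v for c in Q for v in neighbors(c)} - Q

def border(Q):               # cells of Q edge-adjacent to an empty cell
    return {c for c in Q if any(v not in Q for v in neighbors(c))}

def connected(Q):            # simple DFS connectivity test
    Q, seen, stack = set(Q), set(), [next(iter(Q))]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(v for v in neighbors(c) if v in Q)
    return seen == Q

def minp(n):                 # cited closed form for the square lattice
    return ceil(2 + sqrt(8 * n - 4))

# diamond (L1-ball) of radius 2: 13 cells
Q = {(x, y) for x in range(-2, 3) for y in range(-2, 3) if abs(x) + abs(y) <= 2}
P, B = perimeter(Q), border(Q)

assert len(P) == minp(len(Q))    # the diamond is a minimal-perimeter polyomino
assert len(P) == len(B) + 4      # Stepping Theorem: |perimeter| = |border| + 4
D = Q - B                        # deflation
assert connected(D) and len(D) + minp(len(D)) == len(Q)   # Premise 3 setting
\end{verbatim}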
\subsection{Premise 5: Convergence to a Minimum-Perimeter Polyomino} In this section, we show that starting from any polyomino~$P$, and repeatedly applying some finite number of inflation steps, we obtain a polyomino $Q = Q(P)$, for which $\abs{\perim{I(Q)}} = \abs{\perim{Q}} + 4$. Let~$R(Q)$ denote the \emph{diameter} of~$Q$, \emph{i.e.}, the maximal horizontal or vertical distance ($L^\infty$) between two cells of~$Q$. The following lemma shows that some geometric features of a polyomino disappear after inflating it enough times. \begin{lemma} \label{lem:no-hdz} For any~$k > R(Q)$, the polyomino~$Q^k$ does not contain any (i)~holes; (ii)~cells of Type~(d); or (iii)~patterns of Type~(z). \end{lemma} \begin{proof} \begin{itemize} \item[(i)] Let~$Q$ be a polyomino, and assume that~$Q^k$ contains a hole. Consider a cell~$c$ inside the hole, and let~$c_u$ be the cell of~$Q^k$ that lies immediately above it. (Note that since~$c_u$ belongs to the border of~$Q^k$, it is not a cell of~$Q$.) Any cell that resides (not necessarily directly) below~$c$ is closer to~$c$ than to~$c_u$. Since $c_u \in Q^k$, it ($c_u$) is closer than~$c$ to~$Q$, thus, there must be a cell of~$Q$ (not necessarily directly) above~$c$, otherwise~$c_u$ would not belong to~$Q^k$. The same holds for cells below, to the right, and to the left of~$c$, thus,~$c$ resides within the axis-aligned bounding box of the extreme cells of~$Q$, and after~$R(Q)$ steps,~$c$ will be occupied, and any hole will be eliminated. \item[(ii)] Assume that there exists a polyomino~$Q$, for which the polyomino~$Q^k$ contains a cell~$c$ of Type~(d). Without loss of generality, assume that the neighbors of~$c$ reside to its left and to its right, and denote them by $c_\ell,c_r$, respectively. Denote by~$c_o$ one of the cells whose inflation created~$c_\ell$, \emph{i.e.}, a cell which belongs to~$Q$ and is at distance at most~$k$ from~$c_\ell$. In addition, denote by $c_u,c_d$ the adjacent perimeter cells which lie immediately above and below~$c$, respectively. The cell~$c_d$ is not occupied, thus, its distance from~$c_o$ is at least~$k+1$, which means that~$c_o$ lies in the same row as~$c_\ell$. Assume for contradiction that~$c_o$ lies in a row below~$c_\ell$. Then, the distance between~$c_o$ and~$c_d$ is at most~$k$, hence~$c_d$ belongs to~$Q^k$. The same holds for~$c_u$; thus, cell~$c_o$ must lie in the same row as~$c_\ell$. Similar considerations show that~$c_o$ must lie to the left of~$c_\ell$, otherwise~$c_d$ and~$c_u$ would be occupied. In the same manner, one of the cells that originated~$c_r$ must lie in the same row as~$c_r$ on its right. Hence, any cell of Type~(d) has cells of~$Q$ to its right and to its left, and thus, it lies inside the axis-aligned bounding box of~$Q$, which will necessarily be filled with polyomino cells after~$R(Q)$ inflation steps. \item[(iii)] Let~$c$ be a Type-(z) perimeter cell of~$Q^k$. Assume, without loss of generality, that the polyomino cells adjacent to it are to its left and to its right, and denote them by~$c_\ell$ and~$c_r$, respectively. Let~$c_o$ denote a cell whose repeated inflation has added~$c_\ell$ to $Q^k$. (Note that~$c_o$ might not be unique.) This cell must lie to the left of~$c$, otherwise, it would be closer to~$c$ than to~$c_\ell$, and~$c$ would not be a perimeter cell. In addition, $c_o$ must lie in the same row as~$c_\ell$, for otherwise, by the same considerations as above, one of the cells above or below~$c$ would be occupied.
The same holds for~$c_r$ (but to its right), thus, cells of Type~(z) must reside between two original cells of~$Q$, \emph{i.e.}, inside the bounding box of~$Q$, and after~$R(Q)$ inflation steps, all cells inside this box will become polyomino cells. \end{itemize} \end{proof} We can now conclude that inflating a polyomino~$Q$ $R(Q)$ times eliminates all holes and bridges, and, thus, the polyomino~$Q^k$ will obey the equation $\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$. \begin{lemma} \label{lem:conv-pb4} Let~$Q$ be a polyomino, and let~$k = R(Q)$. We have that $\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$. \end{lemma} \begin{proof} This follows at once from Lemma~\ref{lem:no-hdz} and Theorem~\ref{theorem:pb4}. \end{proof} \section{Polyhexes} \label{sec:polyhexes} In this section, we show that the premises of Theorem~\ref{thm:main} hold for the two-dimensional hexagonal lattice~$\hex$. The roadmap followed here is similar to the one used in Section~\ref{sec:polyominoes}. Throughout this section, all the lattice-specific notations refer to~$\hex$. \subsection{Premise 1: Monotonicity} The first premise has been proven for~$\hex$ independently by Vainsencher and Bruckstein~\cite{VainsencherB08} and by F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}. We will use the latter, stronger version, which also includes a formula for $\minp(n)$. \begin{theorem} \label{thm:minp_hex} \textup{\cite[Thm.~$5.12$]{fulep2010polyiamonds}} $\minp(n) = \ceil{\sqrt{12n-3}\,}+3$. \myqed \end{theorem} Clearly, the function~$\minp(n)$ is weakly-monotone increasing. \subsection{Premise 2: Constant Inflation} To show that the second premise holds, we analyze the different patterns that may appear in the border and perimeter of minimal-perimeter polyhexes. We can classify every border or perimeter cell by one of exactly~24 patterns, distinguished by the number and positions of their adjacent occupied cells. The~24 possible patterns are shown in Figure~\ref{fig:patterns_hex}.
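As an aside, the two closed-form expressions, $\minp(n)=\ceil{2+\sqrt{8n-4}\,}$ for the square lattice and $\minp(n)=\ceil{\sqrt{12n-3}\,}+3$ for the hexagonal lattice, can be checked numerically. The following sketch (ours, for illustration only) verifies, over a range of sizes, weak monotonicity, the gaps of at most one unit between successive values for $n\ge 3$ that are used in the proofs, and the consistency relation $\minp(n+\minp(n))=\minp(n)+c$ (with $c=4$ for the square lattice and $c=6$ for the hexagonal lattice), which reflects the constant-inflation property.
\begin{verbatim}
from math import ceil, sqrt

def minp_square(n):            # minimal perimeter on the square lattice
    return ceil(2 + sqrt(8 * n - 4))

def minp_hex(n):               # minimal perimeter on the hexagonal lattice
    return ceil(sqrt(12 * n - 3)) + 3

for minp, c in ((minp_square, 4), (minp_hex, 6)):
    vals = [minp(n) for n in range(1, 5001)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))          # monotone
    assert all(b - a <= 1 for a, b in zip(vals[1:], vals[2:]))  # gaps, n >= 3
    assert all(minp(n + minp(n)) == minp(n) + c for n in range(1, 1000))
\end{verbatim}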
\begin{figure} \centering \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b0.txt} \caption{} \label{fig:b0} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b1.txt} \caption{} \label{fig:b1} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b2.txt} \caption{} \label{fig:b2} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b3.txt} \caption{} \label{fig:b3} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b4.txt} \caption{} \label{fig:b4} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b5.txt} \caption{} \label{fig:b5} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b6.txt} \caption{} \label{fig:b6} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b7.txt} \caption{} \label{fig:b7} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b8.txt} \caption{} \label{fig:b8} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b9.txt} \caption{} \label{fig:b9} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b10.txt} \caption{} \label{fig:b10} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{b11.txt} \caption{} \label{fig:b11} \end{subfigure} \medskip \\ \begin{subfigure}[t]{0.075\textwidth} \centering \addtocounter{subfigure}{2} \drawpolyhex[scale=0.65]{p0.txt} \caption{} \label{fig:p0} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p1.txt} \caption{} \label{fig:p1} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p2.txt} \caption{} \label{fig:p2} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p3.txt} \caption{} \label{fig:p3} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p4.txt} \caption{} \label{fig:p4} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p5.txt} \caption{} \label{fig:p5} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p6.txt} \caption{} \label{fig:p6} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p7.txt} \caption{} \label{fig:p7} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p8.txt} \caption{} \label{fig:p8} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p9.txt} \caption{} \label{fig:p9} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p10.txt} \caption{} \label{fig:p10} \end{subfigure} \begin{subfigure}[t]{0.075\textwidth} \centering \drawpolyhex[scale=0.65]{p11.txt} \caption{} \label{fig:p11} \end{subfigure} \caption{All possible patterns (up to symmetries) of border (first row) and perimeter (second row) cells. The gray cells are polyhex cells, while the white cells are perimeter cells. Each subfigure shows a cell in the middle, and the possible pattern of cells surrounding it.} \label{fig:patterns_hex} \end{figure} Let us recall the equation subject of Lemma~\ref{lemma:pebe}. \[ \abs{\perim{\poly}} + e_P(\poly) = \abs{\border{\poly}} + e_B(\poly). 
\] Our first goal is to express the excess of a polyhex~$\poly$ as a function of the numbers of cells of~$\poly$ of each pattern. We denote the number of cells of a specific pattern in~$\poly$ by $\#\hexagon$, where `$\hexagon$' is one of the~24 patterns listed in Figure~\ref{fig:patterns_hex}. The excess (either border or perimeter excess) of Pattern~$\hexagon$ is denoted by $e(\hexagon)$. (For simplicity, we omit the dependency on~$\poly$ in the notations of~$\#\hexagon$ and~$e(\hexagon)$. This should be understood from the context.) The border excess can be expressed as~$e_B(\poly) = \sum_{\hexagon \in \{a,\dots,l\}} e(\hexagon)\#\hexagon$, and, similarly, the perimeter excess can be expressed as~$e_P(\poly) = \sum_{\hexagon \in \{o,\dots,z\}} e(\hexagon)\#\hexagon$. By plugging these equations into \myeqref{eq:pebe}, we obtain that \begin{equation} \label{eq:all-patterns} \abs{\perim{\poly}} + \sum_{\hexagon \in \{o,\dots,z\}} e(\hexagon)\#\hexagon = \abs{\border{\poly}} + \sum_{\hexagon \in \{a,\dots,l\}} e(\hexagon)\#\hexagon~. \end{equation} The next step of proving the second premise is showing that minimal-perimeter polyhexes cannot contain some of the~24 patterns. This will simplify \myeqref{eq:all-patterns}. \begin{lemma} \label{lemma:no-holes_hex} (Analogous to Lemma~\ref{lemma:no-holes-sqr}.) A minimal-perimeter polyhex does not contain holes. \end{lemma} \begin{proof} Assume to the contrary that there exists a minimal-perimeter polyhex~$\poly$ that contains one or more holes, and let~$\poly'$ be the polyhex obtained by filling one of the holes in~$\poly$. Clearly, $|\poly'| > |\poly|$, and by filling the hole we eliminated some perimeter cells and did not create new perimeter cells. Hence, $\abs{\perim{\poly'}} < \abs{\perim{\poly}}$. This contradicts the fact that~$\minp(n)$ is monotone increasing, as implied by Theorem~\ref{thm:minp_hex}. \myqed \end{proof} Another important observation is that minimal-perimeter polyhexes tend to be ``compact.'' We formalize this observation in the following lemma. Recall the definition of a bridge from Section~\ref{sec:polyominoes}: A \emph{bridge} is a cell whose removal unites two holes or renders the polyhex disconnected (specifically, Patterns~(b), (d), (e), (g), (h), (j), and~(k)). Similarly, a \emph{perimeter bridge} is an empty cell whose addition to the polyhex creates a hole in it (specifically, Patterns~(p), (r), (s), (u), (v), (x), and~(y)). \begin{lemma} \label{lemma:bridges} (Analogous to Lemma~\ref{lemma:no-bridges-sqr}.) Minimal-perimeter polyhexes contain neither bridges nor perimeter bridges. \myqed \end{lemma} \begin{proof} Let~$\poly$ be a minimal-perimeter polyhex, and assume first that it contains a bridge cell~$f$. By Lemma~\ref{lemma:no-holes_hex}, since~$\poly$ does not contain holes, the removal of~$f$ from~$\poly$ will break it into two or three disconnected polyhexes. We can connect these parts by translating one of them towards the other(s) by one cell. (In the case of Pattern~(h), the polyhex is broken into three parts, but then translating any of them towards the removed cell would make the polyhex connected again.) Locally, this will eliminate at least two perimeter cells created by the bridge. (This can be verified by exhaustively checking all the relevant patterns.) The size of the new polyhex, $\poly'$, is one less than that of~$\poly$, while the perimeter of~$\poly'$ is smaller by at least two than that of~$\poly$.
However, Theorem~\ref{thm:minp_hex} implies that~$\minp(n)-\minp(n-1) \leq 1$ for all $n \geq 3$, which is a contradiction to~$\poly$ being a minimal-perimeter polyhex. Assume now that~$\poly$ contains a perimeter bridge. Filling the bridge will not increase the perimeter. (It might create one additional perimeter cell, which will be canceled out by the eliminated (perimeter) bridge cell.) In addition, it will create a hole in the polyhex. Then, filling the hole will create a polyhex with a larger size and a smaller perimeter, which is a contradiction to~$\minp(n)$ being monotone increasing. \myqed \end{proof} As a consequence of Lemma~\ref{lemma:no-holes_hex}, Pattern~(o) cannot appear in any minimal-perimeter polyhex. In addition, Lemma~\ref{lemma:bridges} tells us that the Border Patterns~(b), (d), (e), (g), (h), (j), and~(k), as well as the Perimeter Patterns~(p), (r), (s), (u), (v), (x), and~(y) cannot appear in any minimal-perimeter polyhex. (Note that patterns~(b) and~(p) are not bridges by themselves, but the adjacent cell is a bridge, that is, the cells above the central cells in~\drawpolyhex[scale=0.3]{b1.txt} and~\drawpolyhex[scale=0.3]{p1.txt} are bridges.) Finally, Pattern~(a) appears only in the singleton cell (the unique polyhex of size~1), which can be disregarded. Ignoring all these patterns, we obtain that \begin{equation} \label{eq:pb321} \abs{\perim{\poly}} + 3\#q + 2\#t + \#w = \abs{\border{\poly}} + 3\#c + 2\#f + \#i. \end{equation} Note that Patterns~(l) and~(z) have excess~0, and, hence, although they may appear in minimal-perimeter polyhexes, they do not contribute to the equation. Consider a polyhex which contains only the six feasible patterns that contribute to the excess (those that appear in \myeqref{eq:pb321}). Let~$\xi$ denote the single polygon bounding the polyhex. We now count the number of vertices and the sum of internal angles of~$\xi$ as functions of the numbers of appearances of the different patterns. In order to calculate the number of vertices of~$\xi$, we first determine the number of vertices contributed by each pattern. In order to avoid counting a vertex more than once, we associate each vertex with a single pattern. Note that each vertex of~$\xi$ is surrounded by three (either occupied or empty) cells, out of which one is empty and two are occupied, or vice versa. We call the cell whose type (empty or occupied) appears once (among the surrounding three cells) the ``representative'' cell, and count only these representatives. Thus, each vertex is counted exactly once. For example, out of the six vertices surrounding Pattern~(c), five vertices belong to the bounding polygon, but the representative cell of only three of them is the cell at the center of this pattern, thus, by our scheme, Pattern~(c) contributes three vertices, each having a~$2\pi/3$ angle. Similarly, only two of the four vertices in the configuration of Pattern~(t) are represented by the cell at the center of this pattern. In this case, each vertex is the apex of a $4\pi/3$ angle. To conclude, the total number of vertices of~$\xi$ is \[ 3\#c+2\#f+\#i+3\#q+2\#t+\#w, \] and the sum of internal angles is \begin{equation} \label{eq:sum-1} (3\#c+2\#f+\#i)2\pi/3 + (3\#q+2\#t+\#w)4\pi/3. \end{equation} On the other hand, it is known that the sum of internal angles is equal to \begin{equation} \label{eq:sum-2} (3\#c+2\#f+\#i+3\#q+2\#t+\#w-2)\pi.
\end{equation} Equating the terms in Formulae~\eqref{eq:sum-1} and~\eqref{eq:sum-2}, we obtain that \begin{equation} \label{eq:sum-3} 3\#c+2\#f+\#i = 3\#q+2\#t+\#w + 6. \end{equation} Plugging this into \myeqref{eq:pb321}, we conclude that~$\abs{\perim{\poly}} = \abs{\border{\poly}} + 6$, as required. We also need to show that the second part of the second premise holds, that is, that if~$\poly$ is a minimal-perimeter polyhex, then $\abs{\perim{I(\poly)}} \leq \abs{\perim{\poly}} + 6$. To this aim, note that $\border{I(\poly)} \subset \perim{\poly}$, thus, it is sufficient to show that~$\abs{\perim{I(\poly)}} \leq \abs{\border{I(\poly)}} + 6$. Obviously, \myeqref{eq:all-patterns} holds for the polyhex~$I(\poly)$, hence, in order to prove the relation, we only need to prove the following lemma. \begin{lemma} \label{lemma:inf-no-bridges} If~$\poly$ is a minimal-perimeter polyhex, then~$I(\poly)$ does not contain any bridge. \myqed \end{lemma} \begin{proof} Assume to the contrary that~$I(\poly)$ contains a bridge. Then, the cell that makes the bridge must have been created in the inflation process. However, any cell~$c \in I(\poly) \backslash \poly$ must have a neighboring cell~$c' \in \poly$. All the cells adjacent to~$c'$ must also be part of~$I(\poly)$, thus, cell~$c$ must have three consecutive neighbors around it, namely, $c'$ and the two cells neighboring both~$c$ and~$c'$. The only bridge pattern that fits this requirement is Pattern~(j). However, this means that there must have been a gap of two cells in~$\poly$ that caused the creation of~$c$ during the inflation of~$\poly$. Consequently, by filling the gap and the hole it created, we will obtain (see Figure~\ref{fig:no-j}) a larger polyhex with a smaller perimeter, which contradicts the fact that~$\poly$ is a minimal-perimeter polyhex. \myqed \end{proof} \begin{figure} \centering \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj1.txt} \caption{$Q$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj2.txt} \caption{$I(Q)$} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \centering \drawpolyhex[scale=0.5]{noj3.txt} \caption{$Q'$} \end{subfigure} \caption{The construction in Lemma~\ref{lemma:inf-no-bridges} which shows that~$I(\poly)$ cannot contain a cell of Pattern~$(j)$. Assuming that it does, by filling the hole in it, we obtain~$\poly'$ which contradicts the perimeter-minimality of~$\poly$. (The marked cells in~$\poly'$ are those added to~$\poly$.)} \label{fig:no-j} \end{figure} \subsection{Premise 3: Deflation Resistance} We now show that deflating a minimal-perimeter polyhex results in another (smaller) valid polyhex. The intuition behind this condition is that a minimal-perimeter polyhex is ``compact,'' having a shape which does not become disconnected by deflation. \begin{lemma} \label{lemma:def-valid-polyhex} For any minimal-perimeter polyhex~$\poly$, the shape~$D(\poly)$ is also a valid (connected) polyhex. \myqed \end{lemma} \begin{proof} The proof of this lemma is very similar to the first part of the proof of Lemma~\ref{lemma:bridges}. Consider a minimal-perimeter polyhex~$\poly$. In order for~$D(\poly)$ to be disconnected, $\poly$ must contain a bridge of either a single cell or two adjacent cells. A 1-cell bridge cannot be part of~$\poly$ by Lemma~\ref{lemma:bridges}. The polyhex~$\poly$ can neither contain a 2-cell bridge. 
\begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_1.txt} \caption{} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_2.txt} \caption{} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyhex[scale=0.45]{hex_bridge_3.txt} \caption{} \end{subfigure} \caption{An example for the construction in the proof of Lemma~\ref{lemma:bridges}. The two-cell bridge is colored in red in (a). Then, in (b), the bridge is removed, and, in (c), the two parts are ``glued'' together.} \label{fig:two-cell-bridge} \end{figure} Assume to the contrary that it does, as is shown in Figure~\ref{fig:two-cell-bridge}(a). Then, removing the bridge (see Figure~\ref{fig:two-cell-bridge}(b)), and then connecting the two pieces (by translating one of them towards the other by one cell along a direction which makes a $60^{\circ}$ angle with the bridge), creates (Figure~\ref{fig:two-cell-bridge}(c)) a polyhex whose size is smaller by two than that of the original polyhex, and whose perimeter is smaller by at least two (since the perimeter cells adjacent to the bridge disappear). The new polyhex is valid, that is, the translation by one cell of one part towards the other does not make any cells overlap, otherwise there is a hole in the original polyhex, which is impossible for a minimal-perimeter polyhex by Lemma~\ref{lemma:no-holes_hex}. However, we reached a contradiction since for a minimal-perimeter polyhex of size~$n \geq 7$, we have that~$\minp(n) - \minp(n-2) \leq 1$. Finally, it is easy to observe by a tedious inspection that the deflation of any polyhex of size less than~7 results in the empty polyhex. \myqed \end{proof} In conclusion, we have shown that all the premises of Theorem~\ref{thm:main} are satisfied for the hexagonal lattice, and, therefore, inflating a set of all the minimal-perimeter polyhexes of a certain size yields another set of minimal-perimeter polyhexes of another, larger, size. This result is demonstrated in Figure~\ref{fig:hex_corrolary}. \begin{figure} \centering \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_1.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_2.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_3.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm9_4.txt} \end{subfigure} \medskip \\ \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_1.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_2.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_3.txt} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \drawpolyhex[scale=0.45]{hm23_4.txt} \end{subfigure} \caption{A demonstration of Theorem~\ref{thm:main} for polyhexes. The top row contains all polyhexes in~$M_9$ (minimal-perimeter polyhexes of size~9), while the bottom row contains their inflated versions, all the members of~$M_{23}$.} \label{fig:hex_corrolary} \end{figure} We also characterized inflation-chain roots of polyhexes. As is mentioned above, the premises of Theorems~\ref{thm:main} and~\ref{thm:root-conditioned} are satisfied for polyhexes~\cite{VainsencherB08,sieben2008polyominoes}, and, thus, the inflation-chain roots are those who have the minimum size for a given minimal-perimeter size. 
An easy consequence of Theorem~\ref{thm:minp_hex} is that the formula~$\floor{\frac{(p-4)^2}{12}+\frac{5}{4}}$ generates all these inflation-chain roots. This result is demonstrated in Figure~\ref{fig:minpH_roots}. \begin{figure} \centering \includestandalone[scale=0.40]{minpH_roots} \vspace{-0.25cm} \caption{The relation between the minimum perimeter of polyhexes, $\minp(n)$, and the inflation-chain roots. The points represent the minimum perimeter of a polyhex of size~$n$, and sizes which are inflation-chain roots are colored in red. The arrows show the mapping between sizes of minimal-perimeter polyhexes (induced by the inflation operation) and demonstrate the proof of Theorem~\ref{thm:root-candidates}.} \label{fig:minpH_roots} \end{figure} As in the case of polyominoes, and as was mentioned earlier, it was already proven elsewhere~\cite{VainsencherB08,fulep2010polyiamonds} that Premise~4 (roots of inflation chains) is fulfilled for the hexagonal lattice. Therefore, we proceed to showing that Premise~5 holds. \subsection{Premise 5: Convergence to a Minimum-Perimeter Polyhex} Similarly to polyominoes, we now show that starting from a polyhex~$\poly$ and repeatedly applying a finite number, $k$, of inflation steps, we obtain a polyhex $\poly^k=I^k(\poly)$, for which $\abs{\perim{I(\poly^k)}} = \abs{\perim{\poly^k}} + 6$. Let~$R(\poly)$ denote the \emph{diameter} of~$\poly$, \emph{i.e.}, the maximal distance between two cells of~$\poly$ when projected onto one of the three main axes. As in the case of polyominoes, some geometric features of~$\poly$ will disappear after $R(\poly)$ inflation steps. \begin{lemma} \label{lem:no-hdz-hex} (Analogous to Lemma~\ref{lem:no-hdz}.) For any $k > R(Q)$, the polyhex~$Q^k$ does not contain any (i)~holes; (ii)~polyhex bridge cells; or (iii)~perimeter bridge cells. \end{lemma} \begin{proof} \begin{itemize} \item[(i)] The proof is identical to the proof for polyominoes. \item[(ii)] After~$R(Q)$ inflation steps, the obtained polyhex is clearly connected. If at this point there exists a bridge cell, then it must have been created in the last inflation step, since after further steps this cell would cease to be a bridge cell. If, during the inflation step that eliminates the mentioned bridge, another bridge is created, then its removal will not render the polyhex disconnected (since the polyhex was already connected before applying this inflation step); thus, it must have created a hole in the polyhex, in contradiction to the previous clause. \item[(iii)] We will present here a version of the analogous proof for polyominoes, adapted to polyhexes. Let~$c$ be a perimeter bridge cell of~$Q^k$. Assume, without loss of generality, that two of the polyhex cells adjacent to it are above and below it, and denote them by~$c_1$ and~$c_2$, respectively. The cell whose inflation resulted in adding~$c_1$ to the polyhex~$Q^k$, denoted by~$c_o$, must reside above~$c$, otherwise, it would be closer to~$c$ than to~$c_1$, and~$c$ would not be a perimeter cell. The same holds for~$c_2$ (below $c$), thus, any perimeter bridge cell must reside between two original cells of~$Q$. Hence, after~$R(Q)$ inflation steps, all such cells will become polyhex cells. \end{itemize} \end{proof} \begin{lemma} \label{lem:conv-pb4-hex} (Analogous to Lemma~\ref{lem:conv-pb4}.) After~$k = R(Q)$ inflation steps, the polyhex~$Q^k$ will obey $\abs{\perim{Q^k}} = \abs{\border{Q^k}} +6$. \end{lemma} \begin{proof} This follows at once from Lemma~\ref{lem:no-hdz-hex} and Equation~\ref{eq:sum-3}.
\end{proof} \section{Polyiamonds} \label{sec:polyiamonds} Polyiamonds are sets of edge-connected triangles on the regular triangular lattice. Unlike the square and the hexagonal lattice, in which all cells are identical in shape and in their role, the triangular lattice has two types of cells, which are seen as a left and a right pointing arrows (\drawpolyiamond[scale=0.4]{t2_diam.txt},\drawpolyiamond[scale=0.4]{t1_diam.txt}). Due to this complication, inflating a minimal-perimeter polyiamond does not necessarily result in a minimal-perimeter polyiamond. Indeed, the second premise of Theorem~\ref{thm:main} does not hold for polyiamonds. This fact is not surprising, since inflating minimal-perimeter polyiamonds creates ``jaggy'' polyiamonds whose perimeter is not minimal. Figures~\ref{fig:exmp_diamond}(a,b) illustrate this phenomenon. \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond.txt} \caption{$\poly$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond_I.txt} \caption{$I(\poly)$} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \drawpolyiamond[scale=0.4]{exmp_diamond_II.txt} \caption{$\poly'$} \end{subfigure} \caption{An example of inflating polyiamonds. The polyiamond~$\poly$ is of a minimum perimeter, however, its inflated version, $I(\poly)$ is not of a minimum perimeter. The polyiamond~$\poly'$, obtained by adding to~$\poly$ all the cells sharing a \emph{vertex} with $\poly$, is a minimal-perimeter polyiamond.} \label{fig:exmp_diamond} \end{figure} However, we can fix this situation in the triangular lattice by modifying the definition of the perimeter of a polyiamond so that it it would include all cells that share a \emph{vertex} (instead of an edge) of the boundary of the polyiamond. Under the new definition, Theorem~\ref{thm:main} holds. The reason for this is surprisingly simple: The modified definition merely mimics the inflation of animals on the graph dual to that of the triangular lattice. (Recall that graph duality maps vertices to faces (cells), and vice versa, and edges to edges.) However, the dual of the triangular lattice is the hexagonal lattice, for which we have already shown in Section~\ref{sec:polyhexes} that all the premises of Theorem~\ref{thm:main} hold. Thus, applying the modified inflation operator in the triangular lattice induces a bijection between sets of minimal-perimeter polyiamonds. This relation is demonstrated in Figure~\ref{fig:exmp_diamond}. \comment{ \section{Polycubes} \label{sec:polycubes} In this section we consider animals in the high dimension square lattice, namely polycubes. Empirically, it seems that inflating all the minimal-perimeter polycubes of a given size the result is all the minimal-perimeter polycubes of some larger size. We can not say it definitively since we are not aware of any algorithm which generates all the minimal-perimeter polycubes other then generating all the polycubes and checking which ones have minimal-perimeter. Since the number of polycubes grows rapidly with the size we can not produce all the minimal with size greater than some relatively small value (in the 3D case, we only know the number of polycubes with size up to $19$). For all the values we did check it seems that the inflation operation does induce a bijection between sets of minimal-perimeter polycubes. However, we can not prove this using Theorem~\ref{thm:main} since the second condition does not hold. 
Even more than that, we can show that Theorem~\ref{thm:main} probably apply only to two dimensional lattices. A conclusion from Lemma~\ref{lemma:pnc_size} is that for a lattice $\lattice$, satisfying the conditions of Theorem~\ref{thm:main} it holds that $\minp_\lattice(n) = \Theta(\sqrt{n})$. It is reasonable to assume that in a $d$-dimensional lattice $\lattice_d$, the relation between the size of a minimal-perimeter animal and its perimeter is roughly as the relation between a $d$-dimensional sphere and its surface area, thus, we can assume that $\minp^{\lattice_d}(n) = \Theta(n^{\frac{d-1}{d}})$, and thus Theorem~\ref{thm:main} does not hold for high dimensional lattices. Proving this relation in high dimensions remains an open problem, and probably another technique should be utilized in order to prove (or disprove) this property in high dimensions. } \section{Conclusion} \label{sec:conclusion} In this paper, we show that the inflation operation induces a bijection between sets of minimal-perimeter animals on any lattice which satisfies three conditions. We demonstrate this result on three planar lattices: the square, hexagonal, and also the triangular (with a modified definition of the perimeter). The most important contribution of this paper is the application of our result to polyhexes. Specifically, the phenomenon of the number of isomers of a benzenoid hydrocarbons remaining unchanged under circumscribing, which was observed in the literature of chemistry more than~30 years ago but has never been proven till now. However, we do not believe that this set of conditions is necessary. Empirically, it seems that by inflating all the minimal-perimeter polycubes (animals on the 3-dimensional cubical lattice) of a given size, we obtain all the minimal-perimeter polycubes of some larger size. However, the second premise of Theorem~\ref{thm:main} does not hold for this lattice. Moreover, we believe that as stated, Theorem~\ref{thm:main} applies only to 2-dimensional lattices! A simple conclusion from Lemma~\ref{lemma:pnc_size} is that if the premises of Theorem~\ref{thm:main} hold for animals on a lattice~$\lattice$, then~$\minp_\lattice(n) = \Theta(\sqrt{n})$. We find it is reasonable to assume that for a $d$-dimensional lattice $\lattice_d$, the relation between the size of a minimal-perimeter animal and its perimeter is roughly equal to the relation between a $d$-dimensional sphere and its surface area. Hence, we conjecture that $\minp^{\lattice_d}(n) = \Theta(n^{1-1/d})$, and, thus, Theorem~\ref{thm:main} is not suitable for higher dimensions. \bibliographystyle{splncs03} \bibliography{references} \end{document} \documentclass[]{standalone} \usepackage{tikz} \usepackage{polylib} \polypath{./graphics/} \graphicspath{{./graphics/}} \begin{document} \begin{tikzpicture}[anchor=center] \pgfdeclarelayer{bg} \pgfsetlayers{bg,main} \draw[-,opacity=0] (0,0)--(30,10); \begin{pgfonlayer}{bg} \includegraphics[scale = 1]{minp_h.pdf} \end{pgfonlayer} \draw [->, ultra thick] (13.65,5.55) to[bend left=60] ++(6.95,1.1); \draw [->, ultra thick] (13.9,5.55) to[bend left=60] ++(6.95,1.1); \draw [->, ultra thick] (14.15,5.55) to[bend left=60] ++(6.95,1.1); \draw [->, ultra thick] (14.4,5.55) to[bend left=60] ++(6.95,1.1); \end{tikzpicture} \end{document}
2205.07010v1
http://arxiv.org/abs/2205.07010v1
Inverse of $α$-Hermitian Adjacency Matrix of a Unicyclic Bipartite Graph
\documentclass[12pt]{article} \usepackage{listings} \usepackage{amsmath,amssymb} \usepackage{subcaption} \usepackage{graphicx} \usepackage{tikz} \usepackage{structuralanalysis} \usepackage{siunitx} \usepackage{enumerate} \usepackage{mathtools} \usepackage{epic} \usepackage{float} \usepackage{mathtools} \usepackage{authblk} \usepackage{blindtext} \usepackage[numbers]{natbib} \bibliographystyle{vancouver} \usepackage{enumitem} \usepackage{geometry} \usepackage[hang,flushmargin]{footmisc} \newcommand{\qed}{\hfill \mbox{\raggedright \rule{.07in}{.1in}}} \newenvironment{proof}{\vspace{1ex}\noindent{\bf Proof}\hspace{0.5em}} {\hfill\qed\vspace{1ex}} \newtheorem{theorem}{Theorem} \newtheorem{example}{Example} \newtheorem{proposition}{Proposition} \newtheorem{observation}{Observation} \newtheorem{definition}{Definition} \newtheorem{lemma}{Lemma} \newtheorem{note}{Note} \newtheorem{remark}{Remark} \newtheorem{corollary}{Corollary} \newenvironment{pfof}[1]{\vspace{1ex}\noindent{\bf Proof of #1}\hspace{0.5em}} {\hfill\qed\vspace{1ex}} \usepackage{graphicx}\DeclareGraphicsRule{.bmp}{bmp}{}{} \lstset{basicstyle=\tiny, keywordstyle=\color{black}\bfseries\underbar, identifierstyle=, commentstyle=\color{white}, stringstyle=\ttfamily, showstringspaces=false} \providecommand{\keywords}[1]{\textbf{\textit{keywords:}} #1} \date{} \begin{document} \title{Inverse of $\alpha$-Hermitian Adjacency Matrix of a Unicyclic Bipartite Graph} \author{Mohammad Abudayah \thanks{School of Basic Sciences and Humanities, German Jordanian University, [email protected] }, Omar Alomari \thanks{School of Basic Sciences and Humanities, German Jordanian University, [email protected]}, Omar AbuGhneim \thanks{Department of Mathematics, Faculty of Science, The University of Jordan, [email protected]} } \maketitle \begin{abstract} Let $X$ be bipartite mixed graph and for a unit complex number $\alpha$, $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. If $X$ has a unique perfect matching, then $H_\alpha$ has a hermitian inverse $H_\alpha^{-1}$. In this paper we give a full description of the entries of $H_\alpha^{-1}$ in terms of the paths between the vertices. Furthermore, for $\alpha$ equals the primitive third root of unity $\gamma$ and for a unicyclic bipartite graph $X$ with unique perfect matching, we characterize when $H_\gamma^{-1}$ is $\pm 1$ diagonally similar to $\gamma$-hermitian adjacency matrix of a mixed graph. Through our work, we have provided a new construction for the $\pm 1$ diagonal matrix. \end{abstract} \keywords{ Mixed graphs; $\alpha$-Hrmitian adjacency matrix; Inverse matrix; Bipartite mixed graphs; Unicyclic bipartite mixed graphs; Perfect matching} \section{\normalsize Introduction} A partially directed graph $X$ is called a mixed graph, the undirected edges in $X$ are called digons and the directed edges are called arcs. Formally, a mixed graph $X$ is a set of vertices $V(X)$ together with a set of undirected edges $E_0(D)$ and a set of directed edges $E_1(X)$. For an arc $xy \in E_1(X)$, $x$(resp. $y$) is called initial (resp. terminal) vertex. The graph obtained from the mixed graph $X$ after stripping out the orientation of its arcs is called the underlying graph of $X$ and is denoted by $\Gamma(X)$.\\ A collection of digons and arcs of a mixed graph $X$ is called a perfect matching if they are vertex disjoint and cover $V(X)$. In other words, perfect matching of a mixed graph is just a perfect matching of its underlying graph. In general, a mixed graph may have more than one perfect matching. 
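To make these notions concrete, the following small sketch (ours; the graph and all identifiers are chosen only for illustration) stores a mixed graph as a set of digons $E_0$ and a set of arcs $E_1$, forms its underlying graph, and enumerates its perfect matchings by brute force. The $4$-cycle used here indeed has two perfect matchings.
\begin{verbatim}
from itertools import combinations

V  = [0, 1, 2, 3]
E0 = {(0, 1)}                      # digons (undirected edges)
E1 = {(1, 2), (3, 2), (0, 3)}      # arcs, written as (initial, terminal)

underlying = {frozenset(e) for e in E0 | E1}   # Gamma(X): forget orientations

def perfect_matchings(vertices, edges):
    # brute force: k pairwise-disjoint edges covering every vertex
    k = len(vertices) // 2
    for cand in combinations(edges, k):
        if len(set().union(*cand)) == len(vertices):
            yield set(cand)

print(list(perfect_matchings(V, underlying)))   # two matchings: {01,23} and {12,03}
\end{verbatim}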
We denote the class of bipartite mixed graphs with a unique perfect matching by $\mathcal{H}$. In this class of mixed graphs, the unique perfect matching will be denoted by $\mathcal{M}$. For a mixed graph $X\in \mathcal{H}$, an arc $e$ (resp. digon) in $\mathcal{M}$ is called a matching arc (resp. a matching digon) in $X$. If $D$ is a mixed subgraph of $X$, then the mixed graph $X\backslash D$ is the induced mixed graph over $V(X)\backslash V(D)$.\\ Studying a graph or a digraph structure through properties of a matrix associated with it is an old and rich area of research. For undirected graphs, the most popular and widely investigated matrix in the literature is the adjacency matrix. The adjacency matrix of a graph is symmetric, and thus diagonalizable and all of its eigenvalues are real. On the other hand, the adjacency matrix of directed graphs and mixed graphs is not symmetric and its eigenvalues are not all real. Consequently, dealing with such a matrix is very challenging. Many researchers have recently proposed other adjacency matrices for digraphs. For instance, in \cite{Irena}, the author investigated the spectrum of $AA^T$, where $A$ is the traditional adjacency matrix of a digraph. The author called this the non-negative spectrum of the digraph. In \cite{OMT1}, the authors proved that the non-negative spectrum is totally controlled by a vertex partition called the common out-neighbor partition. The authors of \cite{BMI} and of \cite{LIU2015182} (independently) proposed a new adjacency matrix of mixed graphs as follows: For a mixed graph $X$, the hermitian adjacency matrix of $X$ is a $|V|\times |V|$ matrix $H(X)=[h_{uv}]$, where \[h_{uv} = \left\{ \begin{array}{ll} 1 & \text{if } uv \in E_0(X),\\ i & \text{if } uv \in E_1(X), \\ -i & \text{if } vu \in E_1(X),\\ 0 & \text{otherwise}. \end{array} \right. \] This matrix has many nice properties: it has a real spectrum, and an interlacing theorem holds. Besides investigating basic properties of this hermitian adjacency matrix, the authors proved many interesting properties of the spectrum of $H$. This motivated Mohar in \cite{Mohar2019ANK} to extend the previously proposed adjacency matrix. The new kind of hermitian adjacency matrices, called $\alpha$-hermitian adjacency matrices of mixed graphs, is defined as follows: Let $X$ be a mixed graph and $\alpha$ be the primitive $n^{th}$ root of unity $e^{\frac{2\pi}{n}i}$. Then the $\alpha$-hermitian adjacency matrix of $X$ is a $|V|\times |V|$ matrix $H_{\alpha}(X)=[h_{uv}]$, where \[h_{uv} = \left\{ \begin{array}{ll} 1 & \text{if } uv \in E_0(X),\\ \alpha & \text{if } uv \in E_1(X), \\ \overline{\alpha} & \text{if } vu \in E_1(X),\\ 0 & \text{otherwise}. \end{array} \right. \] Clearly, the new kind of hermitian adjacency matrices of mixed graphs is a natural generalization of the old one for mixed graphs and even for undirected graphs. As we mentioned before, these adjacency matrices ($H_i(X)$ and $H_\alpha(X)$) are hermitian and have interesting properties. This paved the way to a fascinating research topic that is much needed nowadays.\\ For simplicity, when dealing with a single mixed graph $X$, we write $H_\alpha$ instead of $H_\alpha(X)$. \\\\ The smallest positive eigenvalue of a graph plays an important role in quantum chemistry. Motivated by this application, Godsil in \cite{God} investigated the inverse of the adjacency matrix of a bipartite graph.
He proved that if $T$ is a tree with a perfect matching and $A(T)$ is its adjacency matrix, then $A(T)$ is invertible and there is a $\{1,-1\}$ diagonal matrix $D$ such that $DA^{-1}D$ is an adjacency matrix of another graph. Many of the problems mentioned in \cite{God} are still open. Further research that continued Godsil's work appeared after this paper; see \cite{Pavlkov}, \cite{McLeman2014GraphI}, and \cite{Akbari2007OnUG}.\\ In this paper, we study the inverse of the $\alpha$-hermitian adjacency matrix $H_\alpha$ of a unicyclic bipartite mixed graph $X$ with a unique perfect matching. Since undirected graphs can be considered a special case of mixed graphs, the outcomes of this paper are broader than the work done previously in this area. We examine the inverse of $\alpha$-hermitian adjacency matrices of bipartite mixed graphs and unicyclic bipartite mixed graphs. Also, for $\alpha=\gamma$, the primitive third root of unity, we answer the traditional question of when $H_\alpha^{-1}$ is $\{\pm 1\}$ diagonally similar to an $\alpha$-hermitian adjacency matrix of a mixed graph. To be more precise, for a unicyclic bipartite mixed graph $X$ with a unique perfect matching, we give a full characterization of when there is a $\{\pm 1\}$ diagonal matrix $D$ such that $DH_\gamma^{-1}D$ is a $\gamma$-hermitian adjacency matrix of a mixed graph. Furthermore, through our work we introduce a construction of such a diagonal matrix $D$. In order to do this, we need the following definitions and theorems: \begin{definition}\citep{Abudayah2} Let $X$ be a mixed graph and $H_\alpha=[h_{uv}]$ be its $\alpha$-hermitian adjacency matrix. \begin{itemize} \item $X$ is called an elementary mixed graph if for every component $X'$ of $X$, $\Gamma(X')$ is either an edge or a cycle $C_k$ (for some $k\ge 3$). \item For an elementary mixed graph $X$, the rank of $X$ is defined as $r(X)=n-c,$ where $n=|V(X)|$ and $c$ is the number of its components. The co-rank of $X$ is defined as $s(X)=m-r(X)$, where $m=|E_0(X)\cup E_1(X)|$. \item For a mixed walk $W$ in $X$, where $\Gamma(W)=r_1,r_2,\dots r_k$, the value $h_\alpha(W)$ is defined as $$h_\alpha(W)=h_{r_1r_2}h_{r_2r_3}h_{r_3r_4}\dots h_{r_{k-1}r_k}\in \{\alpha^n\}_{n\in \mathbb{Z}}$$ \end{itemize} \end{definition} Recall that a bijective function $\eta$ from a set $V$ to itself is called a permutation. The set of all permutations of a set $V$, denoted by $S_V$, together with function composition, forms a group. Finally, recall that for $\eta \in S_V$, $\eta$ can be written as a composition of transpositions. In fact, the number of transpositions is not unique, but its parity is; it cannot be odd in one decomposition and even in another. Now, we define $sgn(\eta)$ as $(-1)^k$, where $k$ is the number of transpositions when $\eta$ is decomposed as a product of transpositions. The following theorem is a well-known classical result in linear algebra. \begin{theorem} \label{exp} If $A=[a_{ij}]$ is an $n\times n$ matrix, then $$det(A)=\displaystyle \sum_{\eta \in S_n } sgn(\eta) a_{1,\eta(1)}a_{2,\eta(2)}a_{3,\eta(3)}\dots a_{n,\eta(n)} $$ \end{theorem} \section{Inverse of $\alpha$-hermitian adjacency matrix of a bipartite mixed graph} In this section, we investigate the invertibility of the $\alpha$-hermitian adjacency matrix of a bipartite mixed graph $X$. Then we find a formula for the entries of its inverse based on elementary mixed subgraphs. This will lead to a formula for the entries based on the type of the paths between vertices.
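Before developing the general formulas, we give a small numerical illustration (ours, not part of the original text). For the bipartite mixed graph on the vertices $0,1,2,3$ whose digons $01$ and $23$ form the unique perfect matching and whose only arc is $1\rightarrow 2$, the sketch below builds $H_\alpha$, confirms that it is hermitian and non-singular, and checks that $(H_\alpha^{-1})_{0,3}=-\alpha$, the value predicted by the path formula proved later in this section (Theorem \ref{Thm2}).
\begin{verbatim}
import numpy as np

alpha = np.exp(2j * np.pi / 5)        # any unit complex number
H = np.zeros((4, 4), dtype=complex)
for u, v in [(0, 1), (2, 3)]:         # digons (the unique perfect matching)
    H[u, v] = H[v, u] = 1
for u, v in [(1, 2)]:                 # arcs, written as (initial, terminal)
    H[u, v], H[v, u] = alpha, np.conj(alpha)

assert np.allclose(H, H.conj().T)            # H_alpha is hermitian
assert abs(np.linalg.det(H)) > 1e-9          # non-singular for graphs in the class H
Hinv = np.linalg.inv(H)
assert np.allclose(np.diag(Hinv), 0)         # zero diagonal, as shown in this section
assert np.isclose(Hinv[0, 3], -alpha)        # matches the co-augmenting path 0-1-2-3
\end{verbatim}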
Using Theorem \ref{exp}, authors in \cite{Abudayah2} proved the following theorem. \begin{theorem}(Determinant expansion for $H_{\alpha}$) \cite{Abudayah2} \label{Determinant} Let $X$ be a mixed graph and $H_\alpha$ its $\alpha$-hermitian adjacency matrix, then $$ det( H_{\alpha}) = \sum_{X'} (-1)^{r(X')}2^{s(X')}Re \left(\prod_C h_{\alpha} ( \vec{C} )\right) $$ where the sum ranges over all spanning elementary mixed subgraphs $X'$ of $X$, the product ranges over all mixed cycles $C$ in $X'$, and $\vec{C}$ is any mixed closed walk traversing $C$. \end{theorem} Now, let $X\in \mathcal{H}$ and $\mathcal{M}$ is the unique perfect matching in $X$. Then since $X$ is bipartite graph, $X$ contains no odd cycles. Now, let $C_k$ be a cycle in $X$, then if $C_k \cap \mathcal{M}$ is a perfect matching of $C_k$ then, $\mathcal{M} \Delta C_k= \mathcal{M}\backslash C_k \cup C_k \backslash \mathcal{M}$ is another perfect matching in $X$ which is a contradiction. Therefore there is at least one vertex of $C_k$ that is matched by a matching edge not in $C_k$. This means if $X\in \mathcal{H}$, then $X$ has exactly one spanning elementary mixed subgraph that consist of only $K_2$ components. Therefore, Using the above discussion together with Theorem \ref{Determinant} we get the following theorem. \begin{theorem}\label{Inv} If $X\in \mathcal{H}$ and $H_\alpha$ is its $\alpha$-hermitian adjacency matrix then $H_\alpha$ is non singular. \end{theorem} Now, Let $X$ be a mixed graph and $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. Then, for invertible $H_\alpha$, the following theorem finds a formula for the entries of $H_\alpha^{-1}$ based on elementary mixed subgraphs and paths between vertices. The proof can be found in \cite{invtree}. \begin{theorem}\label{Thm1} Let $X$ be a mixed graph, $H_\alpha$ be its $\alpha$-hermitian adjacency matrix and for $i \neq j$, $\rho_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{ is a mixed path from the vertex } i \text{ to the vertex } j \}$. If $\det(H_\alpha) \ne 0$, then \begin{align*} [H_\alpha^{-1}]_{ij} =&\\ & \frac{1}{\det(H_\alpha)}\displaystyle \sum_{P_{i \to j}\in \rho_{i \to j}} (-1)^{|E(P_{i \to j})|} \text{ } h_\alpha (P_{i \to j}) \sum_{X'} (-1)^{r(X')} 2^{s(X')} Re \left( \prod_C h_\alpha (\vec{C})\right) \end{align*} where the second sum ranges over all spanning elementary mixed subgraphs $X'$ of $X\backslash P_{i \to j}$, the product is being taken over all mixed cycles $C$ in $X'$ and $\vec{C}$ is any mixed closed walk traversing $C$. \end{theorem} This theorem describes how to find the non diagonal entries of $H_\alpha^{-1}$. In fact, the diagonal entries may or may not equal to zero. To observe this, lets consider the following example: \begin{example} Consider the mixed graph $X$ shown in Figure \ref{fig:A} and let $\alpha=e^{\frac{\pi}{5}i}$. The mixed graph $X$ has a unique perfect matching, say $M$, and this matching consists of the set of unbroken arcs and digons. Further $M$ is the unique spanning elementary mixed subgraph of $X$. Therefore, using Theorem \ref{Determinant} \[ det[H_\alpha]= (-1)^{8-4}2^{4-4}=1 \] So, $H_\alpha$ is invertible. To calculate $[H_\alpha^{-1}]_{ii}$, we observe that \[ [H_\alpha^{-1}]_{ii}= \frac{det((H_\alpha)_{(i,i)})}{det(H_\alpha)}=det((H_\alpha)_{(i,i)}). \] Where $(H_\alpha)_{(i,i)}$ is the matrix obtained from $H_\alpha$ by deleting the $i^{th}$ row and $i^{th}$ column, which is exactly the $\alpha$-hermitian adjacency matrix of $X\backslash \{i\}$. 
Applying this on the mixed graph, one can deduce that the diagonal entries of $H_\alpha^{-1}$ are all zeros except the entry $(H_\alpha^{-1})_{11}$. In fact it can be easily seen that the mixed graph $X \backslash \{1\}$ has only one spanning elementary mixed subgraph. Therefore, \[ [H_\alpha^{-1}]_{11}=det((H_\alpha)_{(1,1)})=(-1)^{7-2}2^{6-5}Re(\alpha)=-2Re(\alpha). \] \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Ex1-1.eps} \caption{Mixed Graph $X$ where $H_\alpha^{-1}$ has nonzero diagonal entry} \label{fig:A} \end{figure} \end{example} The following theorem shows that if $X$ is a bipartite mixed graph with unique perfect matching, then the diagonal entries of $H_\alpha^{-1}$ should be all zeros. \begin{theorem} Let $X \in \mathcal{H}$ and $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. Then, for every vertex $i \in V(X)$, $(H_\alpha^{-1})_{ii} =0$. \end{theorem} \begin{proof} Observing that $X$ is a bipartite mixed graph with a unique perfect matching, and using Theorem \ref{Inv}, we have $H_\alpha$ is invertable. Furthermore, \[ (H_\alpha^{-1})_{ii} = \frac{\det((H_\alpha)_{(i,i)})}{\det(H_\alpha)} \] Note that $(H_\alpha)_{(i,i)}$ is the $\alpha$-hermitian adjacency matrix of the mixed graph $X\backslash \{i\}$. However $X$ has a unique perfect matching, therefore $X\backslash \{i\}$ has an odd number of vertices. Hence $X\backslash \{i\}$ has neither a perfect matching nor an elementary mixed subgraph and thus $\det((H_\alpha)_{(i,i)})=0$. \end{proof}\\ Now, we investigate the non diagonal entries of the inverse of the $\alpha$-hermitian adjacency matrix of a bipartite mixed graph, $X \in \mathcal{H}$. In order to do that we need to characterize the structure of the mixed graph $X \backslash P$ for every mixed path $P$ in $X$. To this end, consider the following theorems: \begin{theorem}\cite{clark1991first}\label{clark} Let $M$ and $M'$ be two matchings in a graph $G$. Let $H$ be the subgraph of $G$ induced by the set of edges $$M \Delta M'=(M\backslash M') \cup (M' \backslash M).$$ Then, the components of $H$ are either cycles of even number of vertices whose edges alternate in $M$ and $M'$ or a path whose edges alternate in $M$ and $M'$ and end vertices unsaturated in one of the two matchings. \end{theorem} \begin{corollary} \label{c1} For any graph $G$, if $G$ has a unique perfect matching then $G$ does not contain alternating cycle. \end{corollary} \begin{definition} Let $X$ be a mixed graph with unique perfect matching. A path $P$ between two vertices $u$ and $v$ in $X$ is called co-augmenting path if the edges of the underlying path of $P$ alternates between matching edges and non-matching edges where both first and last edges of $P$ are matching edges. \end{definition} \begin{corollary} \label{c2} Let $G$ be a bipartite graph with unique perfect matching $\mathcal{M}$, $u$ and $v$ are two vertices of $G$. If $P_{uv}$ is a co-augmenting path between $u$ and $v$, then $G \backslash P_{uv}$ is a bipartite graph with unique perfect matching $\mathcal{M}\backslash P_{uv}$. \end{corollary} \begin{proof} The part that $\mathcal{M}\backslash P_{uv}$ is being a perfect matching of $G \backslash P_{uv}$ is obvious. Suppose that $M' \ne \mathcal{M}\backslash P_{uv}$ is another perfect matching of $G \backslash P_{uv}$. Using Theorem \ref{clark}, $G \backslash P_{uv}$ consists of an alternating cycles or an alternating paths, where its edges alternate between $\mathcal{M}\backslash P_{uv}$ and $M'$. 
If all $G \backslash P_{uv}$ components are paths, then $G \backslash P_{uv}$ has exactly one perfect matching, which is a contradiction. Therefore, $G \backslash P_{uv}$ contains an alternating cycle say $C$. Since $P_{uv}$ is a co-augmenting path, we have $M' \cup (P_{uv} \cap \mathcal{M})$ is a perfect matching of $G$. Therefore $G$ has more than one perfect matching, which is a contradiction. \end{proof}\\ \begin{theorem}\label{nco} Let $G$ be a bipartite graph with unique perfect matching $\mathcal{M}$, $u$ and $v$ are two vertices of $G$. If $P_{uv}$ is not a co-augmenting path between $u$ and $v$, then $G \backslash P_{uv}$ does not have a perfect matching. \end{theorem} \begin{proof} Since $G$ has a perfect matching, then $G$ has even number of vertices. Therefore, when $P_{uv}$ has an odd number of vertices, $G \backslash P_{uv}$ does not have a perfect matching.\\ Suppose that $P_{uv}$ has an even number of vertices. Then, $P_{uv}$ has a perfect matching $M$. Therefore if $G \backslash P_{uv}$ has a perfect matching $M'$, then $M \cup M'$ will form a new perfect matching of $G$. This contradicts the fact that $G$ has a unique perfect matching. \end{proof}\\ Now, we are ready to give a formula for the entries of the inverse of $\alpha$-hermitian adjacency matrix of bipartite mixed graph $X$ that has a unique perfect matching. This characterizing is based on the co-augmenting paths between vertices of $X$. \begin{theorem}\label{Thm2} Let $X$ be a bipartite mixed graph with unique perfect matching $\mathcal{M}$, $H_\alpha$ be its $\alpha$-hermitian adjacency matrix and $$\Im_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{\small{ is a co-augmenting mixed path from the vertex }} i \text{ to the vertex } j \}$$ Then \[ (H_\alpha^{-1})_{ij}= \left\{ \begin{array}{ll} \displaystyle \sum_{P_{i\to j} \in \Im_{i\to j}} (-1)^{\frac{|E(P_{i \to j})|-1}{2}} h_\alpha(P_{i \to j}) & \text{if } i\ne j \\ 0 & \text{ if } i =j \end{array} \right. \] \end{theorem} \begin{proof} Using Theorem \ref{Thm1}, $${ [H_{\alpha}^{-1}]_{ij} = \frac{1}{\det(H_\alpha)} \sum_{P_{i \rightarrow j} \in \rho_{i \rightarrow j}} \left[ (-1)^{|E(P_{i \rightarrow j})|} h_\alpha(P_{i \rightarrow j}) \sum_{X'} (-1)^{r(X')} 2^{s(X')} Re (\prod_C h_{\alpha} ( \vec{C} )) \right ]} $$ where the second sum ranges over all spanning elementary mixed subgraphs of $X \backslash P_{i \rightarrow j}$. The product is being taken over all mixed cycles $C$ of $X'$ and $\vec{C}$ is any mixed closed walk traversing $C$. \\ First, using Theorem \ref{nco} we observe that if $P_{i \rightarrow j}$ is not a co-augmenting path then $X \backslash P_{i\to j}$ does not have a perfect matching. Therefore, the term corresponds to $P_{i\to j}$ contributes zero. Thus we only care about the co-augmenting paths. According to Corollary \ref{c2}, for any co-augmenting path $P_{i\to j}$ from the vertex $i$ to the vertex $j$ we get $X \backslash P_{i\to j}$ has a unique perfect matching, namely $\mathcal{M}\cap E( X \backslash P_{i\to j})$. Using Corollary \ref{c1}, $X \backslash P_{i\to j}$ does not contain an alternating cycle. Thus $X \backslash P_{i\to j}$ contains only one spanning elementary mixed subgraph which is $\mathcal{M} \backslash P_{i\to j}$. So, $$ [H_{\alpha}^{-1}]_{ij} = \frac{1}{\det(H_\alpha)} \sum_{P_{i \to j} \in \Im_{i\to j}} (-1)^{|E(P_{i \to j})|} h_\alpha(P_{i \to j}) (-1)^{V(X\backslash P_{i \to j})-k} $$ where $k$ is the number of components of the spanning elementary mixed subgraph of $X \backslash P_{i\rightarrow j}$. 
Observing that $| V(X \backslash P_{i\rightarrow j})|=n-(|E(P_{i \rightarrow j})|+1)$, $k=\frac{n-(|E(P_{i\rightarrow j})|+1)}{2}$ and $\det(H_\alpha) = (-1)^\frac{n}{2}$, we get the result. \end{proof}\\ \section{Inverse of the $\gamma$-hermitian adjacency matrix of a unicyclic bipartite mixed graph} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{inverse.eps} \caption{Unicyclic bipartite mixed graph with a unique perfect matching and $4$ pegs } \label{fig:D} \end{figure} Let $\gamma$ be the primitive third root of unity $e^{\frac{2\pi}{3}i}$. By Theorem \ref{Thm2}, the value $h_\alpha(P_{i\to j})\in \{\alpha^k\}_{k=1}^n$ plays a central role in finding the entries of $H_\alpha^{-1}$, and since the third root of unity satisfies $\gamma^k \in \{1,\gamma, \overline{\gamma}\}$ for every integer $k$, we focus our study in this section on $\alpha=\gamma$. The property that $\alpha^k \in \{\pm1, \pm \alpha, \pm \overline{\alpha}\}$ does not hold in general. To illustrate, consider the mixed graph shown in Figure \ref{fig:D} and let $\alpha=e^{\frac{\pi}{5}i}$. Using Theorem \ref{Thm2} we get $(H_{\alpha}^{-1})_{05}=e^{\frac{3\pi}{5}i}$, which is not from the set $\{\pm 1, \pm \alpha, \pm \overline{\alpha}\}$.\\ In this section, we are going to answer the classical question of whether the inverse of the $\gamma$-hermitian adjacency matrix of a unicyclic bipartite mixed graph is $\{1,-1\}$ diagonally similar to the $\gamma$-hermitian adjacency matrix of another mixed graph or not. Consider the mixed graph shown in Figure \ref{fig:D}. Then the entries of $H_\gamma^{-1}$ are obviously from the set $\{0,\pm 1, \pm \gamma, \pm \overline{\gamma}\}$.\\ Another thing we should bear in mind is the existence of a $\{1,-1\}$ diagonal matrix $D$ such that $DH_\gamma^{-1} D$ is the $\gamma$-hermitian adjacency matrix of another mixed graph. In the mixed graph $X$ in Figure \ref{fig:D}, suppose that $D=diag\{d_{0},d_{1},\dots,d_{9}\}$ is a $\{1,-1\}$ diagonal matrix with the property that $DH_\gamma^{-1} D$ has all entries from the set $\{0, 1, \gamma, \overline{\gamma}\}$. Then, \\ \[ \begin{array}{l} d_0d_5=1 \\ d_0d_9=-1 \\ d_9d_7=-1 \\ d_5d_7=-1 \end{array} \] which is impossible. Therefore, such a diagonal matrix $D$ does not exist. To discuss the existence of the diagonal matrix $D$ further, let $G$ be a bipartite graph with a unique perfect matching. Define $X_G$ to be the mixed graph obtained from $G$ by orienting all non-matching edges. Clearly, for $\alpha \ne 1$ and $\alpha \ne -1$, changing the orientation of the non-matching edges will change the $\alpha$-hermitian adjacency matrix. For now, let us restrict our study to $\alpha=-1$. Using Theorem \ref{Thm2}, one can easily get the following observation. \begin{observation}\label{corr1} Let $G$ be a bipartite graph with unique perfect matching $\mathcal{M}$, let $H_{-1}$ be the $-1$-hermitian adjacency matrix of $X_G$ and $$\Im_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{ is a co-augmenting mixed path from the vertex } i \text{ to the vertex } j \text{ in } X_G \}.$$ One can use Theorem \ref{Thm2} to get \[ (H_{-1}^{-1})_{ij}= \left\{ \begin{array}{ll} \displaystyle \vert \Im_{i\to j} \vert & \text{if } i\ne j \\ 0 & \text{ if } i =j \end{array} \right. \] \end{observation} So, the question we need to answer now is when $A(G)$ and $H_{-1}(X_G)$ are diagonally similar.
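As a quick, concrete illustration (ours, not part of the original argument), the following short Python sketch compares $A(G)$ and $H_{-1}(X_G)$ for the path $G=P_4$ with vertex set $\{1,2,3,4\}$, perfect matching $\{12,34\}$ and the single non-matching edge $23$ oriented. The vertex labeling, the hand-chosen diagonal sign pattern, and the convention that a digon contributes $1$ while the oriented non-matching edge contributes $\alpha=-1$ in both directions (since $\overline{-1}=-1$) are assumptions made only for this example.
\begin{verbatim}
# Illustrative check on G = P_4 (vertices 1-2-3-4, matching {12, 34}):
# compare A(G) with H_{-1}(X_G) and inspect the entries of the inverse.
import numpy as np

A = np.array([[0, 1, 0, 0],        # adjacency matrix of P_4
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

H = A.copy()                       # H_{-1}(X_G): the oriented non-matching
H[1, 2] = H[2, 1] = -1             # edge 23 now carries the value -1

D = np.diag([1, 1, -1, -1])        # +/-1 diagonal matrix chosen by hand; it
                                   # equals the matrix D_u built below for u = 1

print(np.allclose(D @ A @ D, H))   # True: A(G) and H_{-1}(X_G) are
                                   # {1,-1} diagonally similar here
Hinv = np.linalg.inv(H)
print(np.diag(Hinv))               # all zeros, as proved above
print(Hinv[0, 3])                  # 1.0, the number of co-augmenting paths
                                   # from vertex 1 to vertex 4
\end{verbatim}
The output reproduces, on this small example, the count of co-augmenting paths given by Observation \ref{corr1} and the diagonal similarity that Theorem \ref{her} below establishes in general.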
To this end, let $G$ be a bipartite graph with a unique perfect matching and $u\in V(G)$. Then, for a walk $W=u=r_1,r_2,r_3,\dots,r_k$ in $G$, define a function that assigns the value $f_W(j)$ to the $j^{th}$ vertex of $W$ as follows: \[f_W(1)=1\] and \[ f_W(j+1)= \left\{ \begin{array}{ll} -f_W(j) & \text{if } r_jr_{j+1} \text{ is an unmatching edge in } G \\ f_W(j) & \text{if } r_jr_{j+1} \text{ is a matching edge in } G \end{array} \right. \] See Figure \ref{fig:E}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{store.eps} \caption{The values of $f_W$ where $W$ is the closed walk starting from $0$ } \label{fig:E} \end{figure} Since any closed walk from a vertex $u$ to itself consists of pairs of identical paths and cycles, we get the following remark. \begin{remark} Let $G$ be a bipartite graph with a unique perfect matching and let $F(u)=\{f_W(u): W \text{ is a closed walk in } G \text{ starting at } u\}.$ Then, $\vert F(u) \vert=1$ if and only if the number of unmatching edges in each cycle of $G$ is even. \end{remark} Finally, let $G$ be a bipartite graph with a unique perfect matching and suppose that each cycle of $G$ has an even number of unmatched edges. For a vertex $u\in V(G)$, define the function $w:V(G) \to \{1,-1\}$ by \[ w(v)=f_W(v), \text{ where } W \text{ is a path from } u \text{ to } v. \] \begin{definition} Suppose that $G$ is a bipartite graph with a unique perfect matching and every cycle of $G$ has an even number of unmatched edges. Suppose further that $V(G)=\{v_1,v_2,\dots,v_n\}$ and $u\in V(G)$. Define the matrix $D_u$ by $D_u=diag\{w(v_1),w(v_2),\dots,w(v_n)\}$. \end{definition} \begin{theorem}\label{her} Suppose $G$ is a bipartite graph with a unique perfect matching and every cycle of $G$ has an even number of unmatched edges. Then for every $u \in V(G)$, we get $D_uA(G)D_u=H_{-1}(X_G)$. \end{theorem} \begin{proof} Note that, for $x,y \in V(G)$, we have $(D_uA(G)D_u)_{xy}=d_xa_{xy}d_y$. Using the definition of $D_u$, we get \[ d_xa_{xy}d_y= \left\{ \begin{array}{ll} -1 & \text{if } xy \text{ is an unmatching edge in } G \\ 1 & \text{if } xy \text{ is a matching edge in } G\\ 0 & \text{ otherwise } \end{array} \right. \] Therefore, $(D_uA(G)D_u)_{xy}=(H_{-1})_{xy}$. \end{proof}\\ Now we are ready to discuss the inverse of the $\gamma$-hermitian adjacency matrix of a unicyclic mixed graph. Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching. An arc or digon of $X$ is called a peg if it is a matching arc or digon and is incident to a vertex of the cycle in $X$. Since $X$ is a unicyclic bipartite mixed graph with a unique perfect matching, $X$ has at least one peg; otherwise the cycle in $X$ would be an alternating cycle, and thus $X$ would have more than one perfect matching, which contradicts the assumption. Since each vertex of the cycle is incident to a matching edge and $|V(X)|$ is even, $X$ must contain at least two pegs. The following theorem will deal with unicyclic bipartite mixed graphs $X\in \mathcal{H}$ with more than two pegs. \begin{theorem}\label{peg} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching. If $X$ has more than two pegs, then between any two vertices of $X$ there is at most one co-augmenting path. \end{theorem} \begin{proof} Let $\rho_1, \rho_2$ and $\rho_3$ be three pegs in $X$, let $u,v \in V(X)$, let $C$ be the unique cycle in $X$, and suppose there are two co-augmenting paths between $u$ and $v$, say $P$ and $P'$. Since $X$ is unicyclic, we have $V(C) \subset V(P) \cup V(P')$.\\ Case 1: $E(P) \cup E(P')$ does not contain any of the pegs.
Let $w$ be the vertex of the cycle of $X$ incident to $\rho_1$. Since the matching edge at $w$ is the peg $\rho_1$, which does not belong to $E(P) \cup E(P')$, the vertex $w$ is not matched by an edge of $P$ or $P'$, which means that one of $P$ or $P'$ is not a co-augmenting path. This contradicts the assumption.\\ Case 2: $E(P) \cup E(P')$ contains pegs. Then $E(P) \cup E(P')$ contains at most two of the pegs, so we may suppose that $\rho_1$ is not one of them; let $w$ be the vertex of the cycle of $X$ incident to $\rho_1$. Then $w$ belongs to either $P$ or $P'$, and again, since $\rho_1$ is the matching edge at $w$, the vertex $w$ is not matched by an edge of $P$ or $P'$, which means that one of $P$ or $P'$ is not a co-augmenting path. This contradicts the assumption. \end{proof} \begin{corollary}\label{p1} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching. If $X$ has more than two pegs, then \begin{enumerate} \item $ (H_\alpha^{-1})_{ij}= \left\{ \begin{array}{ll} (-1)^{\frac{|E(P_{i \to j})|-1}{2}} h_\alpha(P_{i \to j}) & \text{if } P_{i\rightarrow j} \text{ is a co-augmenting path from } i \text{ to } j \\ 0 & \text{ Otherwise } \end{array} \right. $ \item If the cycle of $X$ contains an even number of unmatching edges, then for any vertex $u\in V(X)$, $D_uH^{-1}_\gamma(X)D_u$ is the $\gamma$-hermitian adjacency matrix of a mixed graph. \end{enumerate} \end{corollary} \begin{proof} Part one is obvious using Theorem \ref{Thm2} together with Theorem \ref{peg}.\\ For part two, we observe that $\gamma^k\in \{1,\gamma,\overline{\gamma}\}$ for every integer $k$. Therefore all nonzero entries of $H^{-1}_\gamma(X)$ are from the set $\{\pm 1,\pm \gamma,\pm \overline{\gamma}\}$. Moreover, the negative signs in $A(\Gamma(X))^{-1}$ and in $H_\gamma^{-1}$ appear at the same positions. This means that $D_uH_\gamma^{-1}D_u$ is the $\gamma$-hermitian adjacency matrix of a mixed graph if and only if $D_uA(\Gamma(X))D_u$ is the adjacency matrix of a graph. Finally, Theorem \ref{her} ends the proof. \end{proof} Now we will take care of unicyclic graphs with exactly two pegs. Using the same technique as in the proof of Theorem \ref{peg}, one can show the following: \begin{theorem}\label{peg2} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and exactly two pegs $\rho_1$ and $\rho_2$. Then for any two vertices $u$ and $v$ of $X$, if there are two co-augmenting paths from the vertex $u$ to the vertex $v$, then $\rho_1$ and $\rho_2$ are edges of the two paths. \end{theorem} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and exactly two pegs, and let $uv$ and $u'v'$ be the two pegs of $X$, where $v$ and $v'$ are vertices of the cycle of $X$. We denote the two paths between $v$ and $v'$ by $\mathcal{F}_{v\rightarrow v'}$ and $\mathcal{F}_{v\rightarrow v'}^c$. \begin{theorem}\label{two pegs} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and exactly two pegs and let $C$ be the cycle of $X$. If there are two co-augmenting paths between the vertex $x$ and the vertex $y$, then \[ (H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\alpha(P_{x \to v}) h_\alpha(P_{y \to v'}) }{h_\alpha(\mathcal{F}_{v \to v'})} \left[ (-1)^{m+1} h_\alpha(C)+1 \right] \] where $\mathcal{F}_{v \to v'}$ is chosen to be the part of the path $P_{x \to y}$ in the cycle $C$ and $C$ is of size $2m$.
\end{theorem} \begin{proof} Suppose that $P_{x \to y}$ and $Q_{x \to y}$ are the two co-augmenting paths between the vertices $x$ and $y$. Using Theorem \ref{Thm2}, we have \[ (H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} h_\alpha(P_{x \to y}) + (-1)^{\frac{|E(Q_{x \to y})|-1}{2}} h_\alpha(Q_{x \to y}) \] Now, using Theorem \ref{peg2}, $P_{x \to y}$ (resp. $Q_{x \to y}$) can be divided into three parts $P_{x \to v}$, $\mathcal{F}_{v \to v'}$ and $P_{v' \to y}$ (resp. $Q_{x \to v}=P_{x \to v}$, $\mathcal{F}_{v \to v'}^c$ and $Q_{v' \to y}=P_{v' \to y}$).\\ Observing that the number of unmatched edges in $\mathcal{F}_{v \to v'}$ is $k_1=\frac{|E(\mathcal{F}_{v \to v'})|+1}{2}$ and the number of unmatched edges in $\mathcal{F}_{v \to v'}^c$ is $k_2=m-\frac{|E(\mathcal{F}_{v \to v'})|+1}{2}+1$, we get \[ (H_\alpha^{-1})_{xy}=(-1)^k h_\alpha(P_{x \to v}) h_\alpha(P_{v' \to y}) \left( (-1)^{k_1} h_\alpha(\mathcal{F}_{v \to v'}) + (-1)^{k_2} h_\alpha(\mathcal{F}_{v \to v'}^c) \right) \] where $k=\frac{|E(P_{x \to v})|+|E(P_{v' \to y})|}{2}-1$. Note here that $\overline{ h_\alpha(\mathcal{F}_{v \to v'})}\, h_\alpha(\mathcal{F}_{v \to v'}^c) = h_\alpha(C)$; therefore, \[ (H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\alpha(P_{x \to v}) h_\alpha(P_{y \to v'}) }{h_\alpha(\mathcal{F}_{v \to v'})} \left[ (-1)^{m+1} h_\alpha(C)+1 \right] \] \end{proof} \begin{theorem}\label{NH} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and $H_\gamma$ be its $\gamma$-hermitian adjacency matrix. If $X$ has exactly two pegs, then $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. \end{theorem} \begin{proof} Let $xx'$ and $yy'$ be the two pegs of $X$, where $x'$ and $y'$ are vertices of the cycle $C$ of $X$. Then, using Theorem \ref{two pegs}, we have \[(H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to x'}) h_\gamma(P_{y \to y'}) }{h_\gamma(\mathcal{F}_{x' \to y'})} \left[ (-1)^{m+1} h_\gamma(C)+1 \right] \] where $\mathcal{F}_{x' \to y'}$ is chosen to be the part of the path $P_{x \to y}$ in the cycle $C$ and $C$ is of size $2m$. Suppose that $D=diag\{d_v:v\in V(X)\}$ is a $\{\pm 1\}$ diagonal matrix with the property that $DH_\gamma^{-1}D$ is the $\gamma$-hermitian adjacency matrix of a mixed graph. \begin{itemize} \item Case 1: Suppose $m$ is even, say $m=2r$.\\ Observe that $(-1)^{m+1}h_\gamma(C)+1=1-h_\gamma(C)$. If $h_\gamma(C) \in \{\gamma, \gamma^2\}$, then $1-h_\gamma(C) \notin \{0, \pm 1, \pm \gamma, \pm \gamma^2\}$ and so $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. Thus we only need to discuss the case when $h_\gamma(C)=1$. To this end, suppose that $h_\gamma(C)=1$. Then $(H_\gamma^{-1})_{xy}=0$. Since the length of $C$ is $4r$, we have that the number of unmatching edges (resp. matching edges) in $C$ is $\frac{4r+2}{2}$ (resp. $\frac{4r-2}{2}$). Since the number of unmatching edges in $C$ is odd, there is a co-augmenting path $\mathcal{F}_{x \to y}$ from $x$ to $y$ that contains an odd number of unmatching edges and another co-augmenting path $\mathcal{F}^c_{x \to y}$ with an even number of unmatching edges. Now, let $o'o$ (resp. $e'e$) be any matching edge in the path $\mathcal{F}_{x \to y}$ (resp. $\mathcal{F}^c_{x \to y}$). Then, without loss of generality, we may assume that there is a co-augmenting path between $x$ and $e$ and between $x$ and $o$ (and hence there is a co-augmenting path between $y$ and $o'$ and between $y$ and $e'$).
Now, if $d_xd_y=1$, then \begin{itemize} \item $(DH_\gamma^{-1}D)_{xo}=(-1)^kd_xh_\gamma(P_{x \to o})d_o$ \item $(DH_\gamma^{-1}D)_{yo'}=(-1)^{k'}d_yh_\gamma(P_{y \to o'})d_{o'}$ \end{itemize} Since $k+k'$ is an odd number, we have $d_od_{o'}=-1$. This contradicts the fact that for every matching edge $gg'$, $d_gd_{g'}=1$.\\ The case when $d_xd_y=-1$ is similar to the above case, but considering the path $\mathcal{F}^c_{x \to y}$ instead of $\mathcal{F}_{x \to y}$ and the vertex $e$ instead of $o$. \item Case 2: Suppose $m$ is odd, say $m=2r+1$. Then \[ (H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to x'}) h_\gamma(P_{y \to y'}) }{h_\gamma(\mathcal{F}_{x' \to y'})} \left[ h_\gamma(C)+1 \right] . \] Therefore, \[(H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to x'}) h_\gamma(P_{y \to y'}) }{h_\gamma(\mathcal{F}_{x' \to y'})}\left\{ \begin{array}{lll} -\gamma & \text{if } h_\gamma(C)=\gamma^2 \\ -\gamma^2 & \text{if } h_\gamma(C)=\gamma\\ 2 & \text{if } h_\gamma(C)=1 \end{array} \right. \] Obviously, when $h_\gamma(C)=1$, $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. Thus, the cases we need to discuss here are when $h_\gamma(C)=\gamma$ and $h_\gamma(C)=\gamma^2$.\\ Since $m$ is odd, $C$ contains an even number of unmatched edges. Therefore, either both paths between $x$ and $y$, $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$, contain an odd number of unmatching edges or both of them contain an even number of unmatching edges. \\ To this end, suppose that both of the paths $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$ contain an odd number of unmatched edges. Then, $(H_\gamma^{-1})_{xy}\in \{\gamma^i\}_{i=0}^2$, which means $d_xd_y=1$. Finally, let $v'v$ be any matching edge in $\mathcal{F}_{x\to y}$ such that $P_{x \to v}$ and $P_{v' \to y}$ are co-augmenting paths; then obviously $d_vd_{v'}=1$. But one of the co-augmenting paths $P_{x \to v}$ and $P_{v' \to y}$ must contain an odd number of unmatching edges and the other one an even number of unmatched edges. This means that $d_xd_vd_{v'}d_y=-1$, which contradicts the fact that $d_vd_{v'}=1$.\\ In the other case, when both $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$ contain an even number of unmatching edges, one can easily deduce that $d_xd_y=-1$, and using the same technique we get another contradiction. \end{itemize} \end{proof} Note that Corollary \ref{p1} and Theorem \ref{NH} give a full characterization of the unicyclic bipartite mixed graphs with a unique perfect matching for which the inverse of the $\gamma$-hermitian adjacency matrix is $\{\pm 1\}$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. We summarize this characterization in the following theorem. \begin{theorem} Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and $H_\gamma$ its $\gamma$-hermitian adjacency matrix. Then, $H_\gamma^{-1}$ is $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph if and only if $X$ has more than two pegs and the cycle of $X$ contains an even number of unmatching edges. \end{theorem} \section*{Acknowledgment} The authors wish to acknowledge the support by the Deanship of Scientific Research at German Jordanian University. \bibliography{InverseUni6} \end{document}
2205.06956v1
http://arxiv.org/abs/2205.06956v1
The multi-robber damage number of a graph
\documentclass[12pt]{article} \usepackage{amsmath,amssymb,amsthm, amsfonts} \usepackage{hyperref} \usepackage{graphicx} \usepackage{array, tabulary} \usepackage{url} \usepackage[mathlines]{lineno} \usepackage{dsfont} \usepackage{color} \usepackage{subcaption} \usepackage{enumitem} \definecolor{red}{rgb}{1,0,0} \def\red{\color{red}} \definecolor{blue}{rgb}{0,0,1} \def\blu{\color{blue}} \definecolor{green}{rgb}{0,.6,0} \def\gre{\color{green}} \usepackage{float} \usepackage{tikz} \setlength{\textheight}{8.8in} \setlength{\textwidth}{6.5in} \voffset = -14mm \hoffset = -10mm \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{obs}[thm]{Observation} \newtheorem{alg}[thm]{Algorithm} \newtheorem{prob}[thm]{Problem} \newtheorem{quest}[thm]{Question} \theoremstyle{definition} \newtheorem{rem}[thm]{Remark} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{definition} \newtheorem{ex}[thm]{Example} \def\clq#1{K^{(#1)}} \def\mtx#1{\begin{bmatrix} #1 \end{bmatrix}} \def\ord#1{| #1 |} \def\sdg#1{\mathop{\ooalign{$\overline{#1}$\cr$\mathring{#1}$}}} \newcommand{\R}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\C}{\mathbb{C}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\D}{\Gamma} \newcommand{\G}{\mathcal{G}} \newcommand{\F}{\mathcal{F}} \newcommand{\sym}{\mathcal{S}} \newcommand{\SG}{\mathcal{S}(G)} \newcommand{\QG}{\mathcal{Q}(\Gamma)} \newcommand{\K}{{\cal K}} \newcommand{\Y}{{\cal Y}} \newcommand{\A}{\mathcal{A}} \newcommand{\ba}{{\bf a}} \newcommand{\bb}{{\bf b}} \newcommand{\bc}{{\bf c}} \newcommand{\be}{{\bf e}} \newcommand{\bz}{{\bf z}} \newcommand{\by}{{\bf y}} \newcommand{\bx}{{\bf x}} \newcommand{\bv}{{\bf v}} \newcommand{\bw}{{\bf w}} \newcommand{\bu}{{\bf u}} \newcommand{\Rnn}{\R^{n\times n}} \newcommand{\Rn}{\R^{n}} \newcommand{\Znn}{\Z^{n\times n}} \newcommand{\Zn}{\Z^{n}} \newcommand{\Fnn}{F^{n\times n}} \newcommand{\Fmn}{F^{m\times n}} \newcommand{\Fn}{F^{n}} \newcommand{\mr}{\operatorname{mr}} \newcommand{\mrp}{\operatorname{mr}_+} \newcommand{\mrF}{\operatorname{mr}^F} \newcommand{\mrFG}{\operatorname{mr}^F(G)} \newcommand{\M}{\operatorname{M}} \newcommand{\MF}{\operatorname{M}^F} \newcommand{\Mp}{\operatorname{M}_+} \newcommand{\Z}{\operatorname{Z}} \newcommand{\Zo}{\operatorname{Z}_o} \newcommand{\din}{\delta_i} \newcommand{\dout}{\delta_o} \newcommand{\doD}{\delta_o(\D)} \newcommand{\dD}{\delta(\D)} \newcommand{\PC}{\mathcal{P}} \newcommand{\tri}{\operatorname{tri}} \newcommand{\charF}{\operatorname{char}} \newcommand{\spec}{\operatorname{spec}} \newcommand{\range}{\operatorname{range}} \newcommand{\nul}{\operatorname{null}} \newcommand{\amr}{\operatorname{avemr}} \newcommand{\Exp}{\operatorname{E}} \newcommand{\cc}{\operatorname{cc}} \newcommand{\Gc}{\overline{G}} \newcommand{\Gd}{G^d} \newcommand{\Zmm}{\lfloor \operatorname{Z}\rfloor} \newcommand{\tw}{\operatorname{tw}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\rb}{\text{rb}} \newcommand{\diam}{\text{diam}} \newcommand{\n}{\{1,\dots,n \}} \newcommand{\x}{\times} \newcommand{\wh}{\widehat} \newcommand{\wt}{\widetilde} \newcommand{\bit}{\begin{itemize}} \newcommand{\eit}{\end{itemize}} \newcommand{\ben}{\begin{enumerate}} \newcommand{\een}{\end{enumerate}} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\bpf}{\begin{proof}} 
\newcommand{\epf}{\end{proof}\ms} \newcommand{\bmt}{\begin{bmatrix}} \newcommand{\emt}{\end{bmatrix}} \newcommand{\ms}{\medskip} \newcommand{\noin}{\noindent} \newcommand{\cp}{\, \Box\,} \newcommand{\lc}{\left\lceil} \newcommand{\rc}{\right\rceil} \newcommand{\lf}{\left\lfloor} \newcommand{\rf}{\right\rfloor} \newcommand{\du}{\,\dot{\cup}\,} \newcommand{\noi}{\noindent} \newcommand{\ceil}[1]{\lc #1 \rc} \newcommand{\beqs}{\begin{equation*}} \newcommand{\eeqs}{\end{equation*}} \newcommand{\beas}{\begin{eqnarray*}} \newcommand{\eeas}{\end{eqnarray*}} \newcommand{\up}[1]{^{(#1)}} \newcommand{\upc}[1]{^{[#1]}} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\calf}{\mathcal{F}} \newcommand{\calm}{\mathcal{M}} \newcommand{\zf}{\operatorname{\lfloor \operatorname{Z} \rfloor}} \newcommand{\Zf}{\zf} \newcommand{\zpf}{\floor{\operatorname{Z}_{+}}} \newcommand{\zp}{\operatorname{Z}_{+}} \renewcommand{\H}{\operatorname{H}} \newcommand{\pd}{\operatorname{PD}} \newcommand{\pt}{\operatorname{pt}} \newcommand{\ptp}{\operatorname{pt}_{+}} \newcommand{\ptzf}{\operatorname{pt_{\zf}}} \newcommand{\ptzpf}{\operatorname{pt}_{\zpf}} \newcommand{\ptz}{\operatorname{pt_{\Z}}} \newcommand{\pdpt}{\operatorname{pt}_{\gamma_P}} \newcommand{\pth}{\operatorname{pt}_{\H}} \newcommand{\throt}{\operatorname{th}} \newcommand{\thz}{\operatorname{th_{\Z}}} \newcommand{\thzf}{\operatorname{th_{\zf}}} \newcommand{\thzpf}{\operatorname{th_{\zpf}}} \newcommand{\thpd}{\operatorname{th}_{\gamma_P}} \newcommand{\thp}{\operatorname{th}_{+}} \newcommand{\thh}{\operatorname{th}_{\H}} \newcommand{\thhs}{\operatorname{th}_{\H}^*} \newcommand{\thr}[1]{\operatorname{th}(#1)} \newcommand{\kh}{k_{\H}} \newcommand{\thc}{\operatorname{th}_c} \newcommand{\thd}{\operatorname{th}_d} \newcommand{\capt}{\operatorname{capt}} \newcommand{\dmg}{\operatorname{dmg}} \newcommand{\rad}{\operatorname{rad}} \newcommand{\srg}{\operatorname{SRG}} \newcommand{\cart}{\, \square \,} \newcommand{\ol}{\overline} \newcommand{\mc}{\mathcal} \newcommand{\rev}{\operatorname{rev}} \newcommand{\josh}[1]{{\bf \color{blue} Josh: #1 }} \newcommand{\meghan}[1]{{\bf \color{purple} Meghan: #1}} \newcommand{\carolyn}[1]{{\bf \color{red} Carolyn: #1}} \newcommand{\todo}[1]{{\bf \color{green} TO DO: #1}} \title{The multi-robber damage number of a graph} \author{Joshua Carlson \thanks{Department of Mathematics and Computer Science, Drake University, Des Moines, IA, USA ([email protected])} \and Meghan Halloran \thanks{Department of Mathematics and Statistics, Williams College, Williamstown, MA, USA ([email protected])} \and Carolyn Reinhart \thanks{Department of Mathematics and Statistics, Swarthmore College, Swarthmore, PA, USA ([email protected])}} \date{\today} \begin{document} \maketitle \begin{abstract} In many variants of the game of Cops and Robbers on graphs, multiple cops play against a single robber. In 2019, Cox and Sanaei introduced a variant of the game that gives the robber a more active role than simply evading the cop. In their version, the robber tries to damage as many vertices as possible and the cop attempts to minimize this damage. While the damage variant was originally studied with one cop and one robber, it was later extended to play with multiple cops by Carlson et.~al in 2021. We take a different approach by studying the damage variant with multiple robbers against one cop. Specifically, we introduce the $s$-robber damage number of a graph and obtain a variety of bounds on this parameter. 
Applying these bounds, we determine the $s$-robber damage number for a variety of graph families and characterize graphs with extreme $2$-robber damage number. \end{abstract} \noi {\bf Keywords} Cops and Robbers, Damage number \noi{\bf AMS subject classification} 05C57, 05C15, 05C50 \section{Introduction} Cops and Robbers is a perfect information pursuit-evasion game played on simple graphs that was introduced in \cite{NW83, Q78}. Originally, the game was played with two players (cop and robber) that move from vertex to vertex by traversing the edges of the graph. The game is initialized in round $0$ when (starting with the cop) both players choose an initial vertex to occupy. Then, each subsequent round consists of a turn for the cop followed by a turn for the robber where each player has the opportunity to (but is not required to) move to a neighboring vertex on their turn. Of course, if the cop ever occupies the same vertex as the robber, the robber is said to be \emph{captured} and the game ends in victory for the cop. Alternatively, if the robber has a strategy to avoid capture forever, the robber wins the game. In \cite{AF84}, the authors consider a version of the game with more players. Specifically, a team of $k$ cops plays against a single robber. In this version, each round consists of a turn for the team of cops followed by a turn for the robber where on the cops' turn, each cop has the opportunity to move. As in the original game, in round $0$, each cop chooses their initial position before the robber's position is initialized. This multi-cop version of the game leads to the main parameter of interest in the study of cops and robbers. The \emph{cop number} of a graph $G$, denoted $c(G)$, is the smallest number of cops required for the cop team to guarantee capture of the robber on $G$. There are many variations of cops and robbers that have been studied in which it is interesting to consider multiple players on the cop team (see \cite{AF84, BMPP16, BPPR17, FHMP16}). Other variants slightly alter the objectives of the players. One such version, introduced in \cite{CS19}, states that if a vertex $v$ is occupied by the robber at the end of a given round and the robber is not caught in the following round, then $v$ becomes \emph{damaged}. In this version of the game, rather than trying to capture the robber, the cop is trying to minimize the number of damaged vertices. Additionally, the robber plays optimally by damaging as many vertices as possible. The damage variation of cops and robbers leads to another parameter of interest. The \emph{damage number} of a graph $G$, denoted $\dmg(G)$, is the minimum number of vertices damaged over all games played on $G$ where the robber plays optimally. Although the damage variant was introduced with a single cop and robber, in \cite{CEGPRS21}, the authors extended the idea of damage to games played with $k$ cops against one robber. Specifically, they introduce the \emph{$k$-damage number} of a graph $G$, denoted $\dmg_k(G)$, which is defined analogously to $\dmg(G)$. Note that when the goal of the cops is simply to capture the robber, there is no reason to add players to the robber team because a strategy of the cop team to capture one robber is sufficient for repeatedly capturing additional robbers. However, in the damage variant, it is the robber who is the more active player since their goal is to damage as many vertices as possible. This creates a somewhat rare situation where it becomes interesting to play with multiple robbers and one cop.
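To make the damage rule concrete before generalizing it, the short sketch below (ours, not taken from \cite{CS19} or \cite{CEGPRS21}) plays one hand-scripted game with one cop and two robbers on the path with vertex set $\{0,1,2,3,4\}$ and records which vertices become damaged. The move scripts and the exact timing convention for captures are illustrative assumptions rather than optimal strategies.
\begin{verbatim}
# One scripted game of the damage variant: 1 cop, 2 robbers, path 0-1-2-3-4.
# A vertex a robber occupies at the end of a round becomes damaged unless
# that robber is captured on the cop's move of the following round.
edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
def adjacent(u, v):
    return (u, v) in edges or (v, u) in edges

cop = 2                               # round 0: the cop chooses first ...
robbers = {"R1": 0, "R2": 4}          # ... then the robbers choose
damaged = set()

# scripted moves for rounds 1-3: (cop's new vertex, robbers' new vertices)
script = [(1, {"R1": 0, "R2": 3}),
          (0, {"R1": 0, "R2": 4}),
          (0, {"R2": 3})]

for cop_to, robber_to in script:
    assert cop_to == cop or adjacent(cop, cop_to)        # cop moves (or passes)
    last = dict(robbers)                                 # end-of-round positions
    cop = cop_to
    for r in [r for r, v in robbers.items() if v == cop]:
        del robbers[r]                                   # capture on the cop's turn
    for r, v in last.items():
        if r in robbers:                                 # not captured this round:
            damaged.add(v)                               # last round's vertex is damaged
    for r in list(robbers):                              # now the surviving robbers move
        to = robber_to.get(r, robbers[r])
        assert to == robbers[r] or adjacent(robbers[r], to)
        robbers[r] = to

print(sorted(damaged))                # [0, 3, 4]: three damaged vertices
\end{verbatim}
In this particular run the robbers damage three vertices, which happens to match the value $n-2$ obtained for paths in Section~\ref{sec:srobberFamilies}; of course, a single scripted game only illustrates the rules and does not by itself determine the damage number.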
We now generalize the damage number in a new way with the following definition. \begin{defn} Suppose $G$ is a simple graph. The \emph{$s$-robber damage number} of $G$, denoted $\dmg(G;s)$, is the minimum number of vertices damaged in $G$ over all games played on $G$ where $s$ robbers play optimally against one cop. Note that optimal play for the robbers is still to damage as many vertices as possible. \end{defn} The $s$-robber damage number is the main focus of this paper. All graphs we consider are finite, undirected, and simple. We adhere to most of the graph theoretic and Cops and Robbers notation found in \cite{Diestel} and \cite{CRbook} respectively. In Section \ref{sec:generalBounds}, we establish some general bounds on $\dmg(G;s)$ in terms of the number of vertices and the number of robbers. We focus on $\dmg(G;2)$ in Section \ref{subsec:2generalBounds}, providing an upper for graphs with maximum degree at least three. Then, in Section \ref{sec:srobberFamilies}, we determine $\dmg(G;s)$ for various graph families, including paths, cycles, and stars. Finally, in Section \ref{sec:extreme2robber}, we characterize the graphs with extreme values of $\dmg(G;2)$. Interestingly, we show that threshold graphs are exactly the graphs with $\dmg(G;2)=1$. \section{General results on the $s$-robber damage number}\label{sec:generalBounds} We begin by establishing bounds on the $s$-robber damage number. Throughout this section, we find upper bounds by describing a cop strategy which limits damage to some number of vertices and we find lower bounds by describing a robber strategy for which some number of vertices are always damaged. First, we find a general lower bound for all graphs on $n$ vertices. \begin{prop}\label{prop:damageAtLeastSMinus1} Suppose $G$ is a graph on $n$ vertices. If $s\leq n-1$, then $\dmg(G; s) \geq s-1$ and if $s\geq n$, then $\dmg(G; s) \geq n-2$. \end{prop} \begin{proof} Let the cop start on any vertex $v$. If $s\leq n-1$, place all of the robbers on separate vertices in $V(G) \setminus \{v\}$. The cop can only capture at most 1 robber in the first round, therefore at least $s-1$ vertices will be damaged. If $s\geq n$, then place at least one robber on each vertex of $V(G) \setminus \{v\}$. In the first round, if the cop moves to capture a robber, they can prevent damage to at most one vertex in $V(G) \setminus \{v\}$. The only other vertex which will not be damaged in the first round is $v$. Therefore, at least $n-2$ vertices will be damaged. \end{proof} We now provide a lower bound for all graphs on $n\geq 2$ vertices with at least one edge. Note that we later compute the $s$-robber damage number of the empty graph in Proposition \ref{prop:Empty}. \begin{prop}\label{prop:damageAtMostNMinus2} Suppose $G$ is a graph on $n \geq 2$ vertices with at least 1 edge. Then $\dmg(G; s) \leq n-2$ for each $s \geq 1$. \end{prop} \begin{proof} Consider a cop strategy where the cop starts on a vertex $v$ with positive degree and toggles between $v$ and one of its neighbors $u$. If the robber moves to $u$ or $v$, the cop either captures the robber immediately or moves to capture the robber in the following round. Since the cop can prevent at least two vertices from being damaged, $\dmg(G; s) \leq n-2$. \end{proof} The combination of Propositions \ref{prop:damageAtLeastSMinus1} and \ref{prop:damageAtMostNMinus2} yields an immediate corollary in the case where the number of robbers is at least the number of vertices. 
\begin{cor} Suppose $G$ is a graph on $n \geq 2$ vertices with at least 1 edge. If $s\geq n$, then $\dmg(G; s) = n-2$. \end{cor} Since we are considering graphs which are not necessarily connected, it is useful to compute the $s$-robber damage number of the disjoint union of graphs. In the case of a graph with two disjoint components, we can compute the $s$-robber damage number as follows. \begin{prop} For $s \geq 1$ and graphs $G$ and $H$, let $\ell = \max\{\dmg(G;s-1) + |H|, \dmg(G;s)\}$ and $r = \max\{\dmg(H;s-1) + |G|, \dmg(H;s)\}$. Then, $\dmg(G \cup H; s) = \min \{ \ell, r\}$. \end{prop} \begin{proof} Suppose the cop starts on $G$. If $\dmg(G; s) > \dmg(G;s-1) + |H|$, then the robbers' strategy will be to all start on $G$ and damage $\dmg(G; s)$ vertices. Otherwise, at least one robber should start on $H$. However, since the cop is not on $H$, one robber in $H$ is enough to damage all $|H|$ vertices. So the remaining $s-1$ robbers should choose to start on $G$ and $\dmg(G;s-1) + |H|$ vertices will be damaged. Therefore, if the cop starts on $G$, $\ell$ vertices are damaged. Similarly, if the cop starts on $H$, $r$ vertices are damaged. Since the cop is playing optimally, the cop will start on whichever graph will yield the least damage. Therefore, $\dmg(G \cup H; s) = \min \{\ell,r\}$. \end{proof} Finally, we consider graphs containing cut vertices and determine upper and lower bounds in terms of $s$ and the number of connected components which result from removing a cut vertex. \begin{prop} For a graph $G$, if there exists a vertex $v\in V(G)$ such that $G-v$ has $k \geq 1$ non-trivial connected components, then $\dmg(G;s)\geq \min\{2k-2,2s-2\}$ for all $s\geq 1$. \end{prop} \begin{proof} Let $v \in V(G)$ such that $G-v$ has $k$ non-trivial components. Label the components $C_1,\dots, C_k$. Observe that for vertices $v_i$ and $v_j$ which are in different non-trivial components, $\dist(v_i,v_j)\geq 2$. If $s\geq k$, at least one robber can start in each of the $k$ non-trivial components. If the cop captures a robber in $C_i$ on round 1, it will be at least round 3 before a robber in $C_j$ for $i\not=j$ is captured. Since component $C_j$ is non-trivial, the robber(s) in this component can damage vertices on both rounds 1 and 2. So two or more vertices are damaged in every component except for the component in which the cop captured a robber in round 1. Thus, $\dmg(G;s)\geq 2k-2$. If $s<k$, then each robber starts on a different connected component, say $C_1,\dots, C_s$. Using the same strategy as in the previous case, all the robbers except for the one captured first can damage at least two vertices. Thus, $\dmg(G;s)\geq 2s-2$. \end{proof} \begin{prop} \label{damage at most n-d} If there exists a vertex $v \in V(G)$ such that $G-v$ has $k\geq 1$ connected components, then $\dmg(G; s) \leq \min\{n-k+s-2, n-2\}$ for all $s\geq 1$. \end{prop} \begin{proof} Let $v \in V(G)$ such that $G-v$ has $k$ components. First, assume $s\leq k$, label $s$ of the components $C_1,\dots,C_s$, and let $C$ denote the union of the remaining components. Note that $|C| \geq k-s$. Suppose the cop starts on $v$ and suppose one robber starts on each of the components $C_1,\dots,C_s$. Choose a neighbor of $v$ in $C_1$ and call this vertex $w$. Let the cop protect the edge $vw$ by moving between $v$ and $w$. This implies that the cop can protect all of the vertices in $C$ in addition to $v$ and $w$. Therefore, the cop can protect at least $k-s+2$ vertices, so $\dmg(G; s) \leq n-k+s-2$.
If $s > k$, then $\dmg(G;s) \leq n-2$ by Proposition \ref{prop:damageAtMostNMinus2}. \end{proof} \subsection{A bound on the $2$-robber damage number}\label{subsec:2generalBounds} We now turn our focus to the case where $s=2$. In the next result, we consider graphs which contain a vertex of degree at least three and show that in this case, the bound from Proposition \ref{prop:damageAtMostNMinus2} can be improved from $n-2$ to $n-3$. \begin{prop} \label{prop:maxDegreeThree} For a graph $G$ on $n$ vertices, if $\Delta(G)\geq 3$, then $\dmg(G; 2) \leq n-3$. \end{prop} \begin{proof} Consider a graph $G$ with $\Delta(G)\geq 3$ and let $v$ be a vertex with at least 3 neighbors $x, y, z \in V(G)$. Let the cop's strategy be to start on $v$ and try to protect $x, y, z$. This implies that the robbers can move freely on the other vertices, but the cop only reacts when one or both robbers move to $x, y, z$ or $v$. Therefore, we only need to consider the subgraph induced by these 4 vertices, which we call $N$. Let the robbers be $R_1$ and $R_2$, and first suppose at most one robber ever moves to a vertex in $N$. If a robber moves to $N$, the cop can clearly capture them, so no vertices in $N$ are damaged. Next, suppose both robbers move to $N$ at some point during the game. If the robbers move to $N$ in non-consecutive rounds, it is clear that the cop can capture the first robber and then return to $v$. When the second robber moves to $N$ the cop can capture them too, thus protecting all $4$ vertices in $N$. Suppose the robbers show up in consecutive rounds. Without loss of generality, let $R_1$ move to $x$. In the next round, the cop will move from $v$ to $x$ to capture $R_1$ and $R_2$ will move to a vertex in $N$. If $R_2$ moved to $v$, then the cop can move back to $v$ and capture in the next round, so no vertices of $N$ are damaged. Otherwise, $R_2$ moved to $y$ or $z$, without loss of generality, say $y$. After capturing $R_1$, the cop will move back to $v$, protecting $x, z$ and $v$ and $R_2$ will damage $y$. No matter where $R_2$ moves next, the cop can still protect $x, z$ and $v$ from becoming damaged. Finally, suppose both robbers move to $N$ in the same round. In this case, the cop's strategy depends on the edges between $x, y,$ and $z$. First, suppose there are no edges between $x, y,\text{ or } z$. The cop can follow a similar strategy to the previous one. Without loss of generality, let $R_1$ move to $x$ and let $R_2$ move to $y$. The cop will move to $x$ in the next round to capture $R_1$ and $R_2$ will damage $y$. Next, $R_2$ can either move to $v$ or leave $N$ and the cop will return to $v$. From here it is clear that $R_2$ will not damage another vertex in the next round and if $R_2$ ever re-enters $N$ it is clear that the cop can capture them. Therefore the cop has prevented $v, x,$ and $z$ from being damaged. Next, suppose there exists one edge within $\{x, y, z\}$ and without loss of generality we'll assume the edge is between $x$ and $y$. If $R_1$ and $R_2$ move to $x$ and $y$, then the cop will move to $x$ to capture $R_1$. At this point, $R_2$ has damaged $y$ and can either move to $x$, $v$ (in either case, the cop can capture), or leave $N$. So it is clear that the cop can prevent $v, x,$ and $z$ from being damaged. If one robber moves to a vertex on the edge $xy$ and one robber moves to $z$, the cop will have a different strategy. Suppose $R_1$ moves to $z$ and $R_2$ moves to $y$. The cop will move to $y$, capturing $R_2$, and $R_1$ will damage $z$.
From here, the cop can return to $v$ and protect $v, x$ and $y$ for the rest of the game. Now, suppose there exist two edges within $\{x, y, z\}$. Without loss of generality, we'll let the edges be $xz$ and $yz$. First, suppose one robber moves to $z$ and the other moves to $x$ or $y$. We'll let $R_1$ move to $z$ and $R_2$ move to $x$. The cop can move to $z$ to capture $R_1$ and $R_2$ will damage $x$. From here, the cop can protect the vertices neighboring $x$ within $N$. This implies that $R_2$ cannot damage any more vertices within $N$. Next, suppose neither robber moves to $z$ at first. We'll let $R_1$ move to $x$ and $R_2$ move to $y$. The cop will move to $x$ to capture $R_1$ and $R_2$ will damage $y$. From here, the cop will be able to protect the neighbors of $y$ within $N$ ($z$ and $v$), therefore preventing $R_2$ from damaging any more vertices within $N$. Finally, suppose there exists an edge between each pair of neighbors of $v$ in $N$. This implies that $N$ is $K_4$, so the cop can capture one robber each round, and only one vertex will be damaged within $N$. We have shown that for all cases, the cop can prevent at least 3 vertices from being damaged; therefore $\dmg(G; 2) \leq n-3$. \end{proof} Next, it is natural to ask whether Proposition \ref{prop:maxDegreeThree} can be generalized for all $s$ and $n \geq 1$. The most obvious generalization would be: if $\Delta(G) \geq s+1$, is $\dmg(G; s) \leq n-s-1$? We can use Proposition \ref{prop:damageAtLeastSMinus1} to answer this question negatively in the following way. Note that if $n < 2s$, then $n-s-1 < s-1$. Thus, by Proposition \ref{prop:damageAtLeastSMinus1}, $\dmg(G; s) \geq s-1 > n-s-1$. Therefore, it is possible to have a graph on $n < 2s$ vertices with $\Delta(G) \geq s+1$ such that $\dmg(G; s) > n-s-1$. An example of this is illustrated in Figure \ref{fig:wheelOn5Vertices}. \begin{figure}[h] \begin{center} \scalebox{.8}{\includegraphics{wheel-on-5-vertices.pdf}}\\ \caption{The wheel $W_4$ on $5$ vertices has $\dmg(W_4; s) > n-s-1$ for $s \in \{3, 4\}$. An initial placement with 1 cop (in blue) and 3 robbers (in red) is shown above.}\label{fig:wheelOn5Vertices} \end{center} \end{figure} We now consider another possible generalization. The following conjecture maintains the upper bound of $n-3$, but generalizes the condition on the maximum degree that is required. \begin{conj}\label{conj:maxdeg} In a graph $G$, if $\Delta(G)\geq\binom{s}{2}+2$, then $\dmg(G; s) \leq n-3$ for all $s \geq 2$. \end{conj} \section{The $s$-robber damage number of graph families}\label{sec:srobberFamilies} In this section, we determine the $s$-robber damage number for certain graph families. We begin by considering the empty graph $\overline{K_n}$ and the complete graph $K_n$ on $n$ vertices. \begin{prop}\label{prop:Empty} For $n\geq 1$, $\dmg (\overline{K_n}; s) = \min\{s, n-1\}$ for all $s\geq 1$. \end{prop} \begin{proof} Let $1 \leq s \leq n-1$ and suppose the cop starts on vertex $v \in V(G)$. The robbers can each start on distinct vertices in $V(G) \setminus \{v\}$ and the cop can only protect $v$. Thus, $s$ vertices are damaged. If $s > n-1$, let the $s$ robbers start on the $n-1$ vertices not occupied by the cop. Therefore, $n-1$ vertices are damaged. \end{proof} \begin{prop} For $n \geq 4$, $\dmg(K_n; s) = \min\{\frac{s(s-1)}{2}, n-2\}$ for all $s\geq 1$. \end{prop} \begin{proof} First, note that by Proposition \ref{prop:damageAtMostNMinus2}, $\dmg(K_n; s) \leq n-2$.
Next, we assume $\frac{s(s-1)}{2}\leq n-2$ and show that there exists a cop strategy such that $\dmg(K_n; s) \leq \frac{s(s-1)}{2}$. Since every vertex in $K_n$ is a dominating vertex, the cop can capture a new robber each round until all of the robbers have been caught. Since $\binom{s}{2} \leq n-2$, in the first round, $s-1$ vertices will be damaged and as the cop continues to capture robbers, $s-2, s-3, \dots$ vertices will be damaged each round. Therefore, if there are enough vertices in the graph, the robbers can damage at most $(s-1) + (s-2) + \dots + 1 = {s \choose 2} = \frac{s(s-1)}{2}$ vertices. Thus, the cop should use this strategy when $\frac{s(s-1)}{2} \leq n-2$ and use the strategy from Proposition \ref{prop:damageAtMostNMinus2} otherwise. This implies that $\dmg(K_n; s) \leq \min\{\frac{s(s-1)}{2}, n-2\}$. Next, we will give a strategy for the robbers such that no matter what the cop does, the robbers can damage at least $\min\{\frac{s(s-1)}{2}, n-2\}$ vertices. Let the robbers start on as many vertices as possible, but not the vertex that the cop starts on. If ${s \choose 2} \leq n-2$, all of the robbers can start on distinct vertices and it is clear that the cop can only capture one robber in the first round. This implies that after the first round, $s-1$ vertices are damaged and $s-1$ robbers remain uncaught. Suppose the robbers try to damage as many vertices as possible by moving to different undamaged vertices each round. Thus, the robbers can damage $(s-1) + (s-2) + \dots + 1 = \frac{s(s-1)}{2}$ vertices, no matter what the cop does. Now, suppose ${s \choose 2} > n-2$. This implies that at some point in the game, the number of undamaged vertices, $k$, is less than the number of remaining robbers. Assuming the cop has been playing optimally up to this point, the cop will be occupying one of these undamaged vertices. Therefore, by moving to the undamaged vertices, the robbers can damage at least $k-2$ vertices in the next round. This leaves 2 vertices undamaged, which implies that the robbers can damage at least $n-2$ vertices. Therefore, we have established that $\dmg(K_n; s) = \min \{\frac{s(s-1)}{2}, n-2\}$. \end{proof} We next consider the path graph on $n$ vertices, $P_n$, and show that for any number of robbers $s$, the $s$-robber damage number is $n-2$. \begin{thm}\label{thm:path} For $n, s \geq 2$, $\dmg(P_n; s) = n-2$. \end{thm} \begin{proof} By Proposition \ref{prop:damageAtMostNMinus2}, we have that $\dmg(P_n; s) \leq n-2$. To show $\dmg(P_n; s) \geq n-2$, we argue that for any cop strategy, the robbers are able to damage $n-2$ vertices. For $s> 2$, the robbers can form two non-empty groups such that every robber in each group acts as a single robber. Thus, it is sufficient to prove the result for $s=2$. Let the two robbers be called $R_1$ and $R_2$. If $n=2$, it is clear that the cop can protect the two vertices and therefore the robbers are not able to damage any vertices. So, $n-2 = 2-2 = 0$ vertices can be damaged. Next, let $n > 2$. If the cop starts on a leaf, the robbers can start on the vertex which is distance two away from this leaf. On each round, the robbers can move towards the other end of the path and will not be captured until they reach the end. Therefore, the robbers can damage $n-2$ vertices. Now, suppose the cop starts on a neighbor of a leaf. If $n=3$, the only neighbor of a leaf is the middle vertex and a robber can start on each leaf.
Since the cop can only capture one of the robbers in the first round, it is clear that at least one vertex will be damaged and $n-2 = 3-2 =1$. If $n > 3$, place $R_1$ on the leaf neighboring the cop and place $R_2$ on the vertex of distance two from the cop. If the cop passes during the first round, $R_1$ will damage the leaf and $R_2$ can move to the other end of the path, damaging $n-3$ vertices. Therefore, $n-3+1 = n-2$ vertices are damaged. If the cop captures $R_1$ in the first round, then $R_2$ can move towards the cop in the first round and then move back towards the other end of the path, damaging $n-2$ vertices. If the cop moves towards $R_2$ in the first round, $R_2$ will move to the other end of the path, damaging $n-3$ vertices on the way. Since $R_1$ will at least damage one vertex (the leaf), at least $n-3+1 = n-2$ vertices are damaged. Finally, suppose the cop starts on a vertex which is distance at least two from both leaves. It is clear in this case that $n\geq 5$. Consider the cop's initial vertex and the two vertices to its left and right. We label these vertices $v_1,...,v_5$, left to right, so the cop starts on $v_3$. Let $R_1$ start on $v_1$ and $R_2$ start on $v_5$. Let $x$ and $y$ be the number of vertices in $P_n$ to the left of $v_1$ and to the right of $v_5$, respectively. Without loss of generality, suppose $x \leq y$ (note that $x$ or $y$ could be zero). If the cop moves to $v_2$ in the first round, then the robbers will both move to the left as well and $R_2$ will damage $v_4$. Similarly, if the cop moves to $v_4$ in the first round, then the robbers will both move to the right as well and $R_1$ will damage $v_2$. After this happens, $R_1$ can move left during every turn and $R_2$ can move right during every turn (until they reach a leaf), damaging each vertex on their path. It is clear that $v_3$ and the vertex the cop moves to in the first round are the only undamaged vertices. Therefore, $n-2$ vertices will be damaged. If the cop doesn't move first, then the robbers must move first (otherwise, if neither player moves, only two vertices are damaged). It is obvious that $R_1$ can damage $x+1$ vertices without being caught. As $R_1$ is damaging those vertices, $R_2$ can stay exactly two vertices to the right of the cop, whenever possible. If $R_2$ is ever captured, this strategy ensures capture will occur on the right leaf. Capturing $R_2$ on that vertex will take the cop at least $2+y$ rounds. In order to prevent damage to all of the vertices, the cop must then move back to $v_3$. Note that the cop requires at least $2(2+y) = 4 + 2y$ rounds to capture $R_2$ and return to $v_3$. However, in at most $2x+1$ rounds, $R_1$ can move left, damaging the left side of the path, and then return to $v_2$. Since $x \leq y$, it's clear that $2x + 1 < 2y + 4$, which means $R_1$ can damage $v_2$. Overall, $R_1$ can damage at least $x+2$ vertices and $R_2$ can damage $y+1$ vertices and therefore, at least $n-2$ vertices will be damaged. Otherwise, assume that $R_2$ is not captured. If the cop ever moves to the left of $v_3$ towards $R_1$, then $R_2$ can damage $v_4$, $v_5$ and the $y$ vertices to the right $v_5$ without being caught. It is clear that $v_2$ and $v_3$ are the only undamaged vertices, so $n-2$ vertices can be damaged. Next, suppose the cop never moves to the left of $v_3$. If the cop is to the right of $v_3$ when $R_1$ returns to $v_1$, it's clear that $R_1$ can damage $v_2$. 
At this point, $R_2$ can damage any remaining vertices on the right side of the path, so $x+2+y+1=n-2$ vertices can be damaged. If the cop is on $v_3$ when $R_1$ returns to $v_1$, $R_2$ is on $v_5$. If the cop moves to either $v_2$ or $v_4$, then the robbers can act as if the cop did this in round one, and damage $n-2$ vertices as in that case. If the cop passes, $R_1$ can move to $v_2$ and $R_2$ can stay on $v_5$. If the cop doesn't capture $R_1$, then $v_2$ will be damaged and $R_2$ can damage $v_5$ and $y$ more vertices without being caught, so $n-2$ vertices are damaged. On the other hand, if the cop moves to $v_2$ to capture $R_1$, then $R_2$ can move to $v_4$ and then move back down the right end of the path without getting caught. Therefore $n-2$ vertices are damaged. We have shown that at least $n-2$ vertices are damaged regardless of what strategy the cop uses, so $\dmg(P_n; s) = n-2$. \end{proof} Next, we show that $n-2$ is also the $s$-robber damage number for the cycle graph $C_n$ on $n$ vertices, employing a similar technique to Theorem \ref{thm:path}. \begin{thm}\label{thm:cycle} For $n \geq 3$ and $s \geq 2, \dmg(C_n; s) = n-2$. \end{thm} \begin{proof} By Proposition \ref{prop:damageAtMostNMinus2}, we have that $\dmg(C_n; s) \leq n-2$. To show $\dmg(C_n; s) \geq n-2$, we argue that for any cop strategy, the robbers are able to damage $n-2$ vertices. As in the proof of Theorem \ref{thm:path}, for $s> 2$, the robbers can form two non-empty groups such that every robber in each group acts as a single robber. Thus, it sufficient to prove the result for $s=2$. Let the two robbers be called $R_1$ and $R_2$. If $n=3$, the robbers can start on the two vertices that the cop does not start on. In the first round, the cop can only capture one robber therefore one vertex will be damaged. Thus, damage is at least one. If $n = 4$, let $R_1$ start next to the cop and let $R_2$ start on the vertex of distance two from the cop. In the first round, the cop will capture $R_1$. Then $R_2$ can move to its neighbor that will be a distance of two away from the cop. This implies that $R_2$ can damage its starting vertex and a second vertex. Thus, at least two vertices are damaged. If $n\geq 5$, suppose the cop starts on an arbitrary vertex $v_3$ and label the four closest vertices to $v_3$ as $v_1, v_2, v_4, v_5$, clockwise. Let the robbers, $R_1$ and $R_2$, start on vertices $v_1$ and $v_5$, respectively. Suppose there are $z=n-5$ vertices left unlabeled (note it is possible that $z=0$). Split up the $z$ vertices into two sets, $X$ and $Y$, as follows. Let $X$ be the set of $\lceil \frac{n-5}{2} \rceil$ vertices, starting from the unlabeled neighbor of $v_1$ and moving counterclockwise. Similarly, let $Y$ be the set of $\lceil \frac{n-5}{2} \rceil$ vertices, starting from the unlabeled neighbor of $v_5$ and moving clockwise. Note that if $n$ is even, $X$ and $Y$ will both contain the vertex which is farthest away from $v_3$. Suppose the cop moves to $v_2$ in the first round. Then, $R_1$ will move in the same direction away from the cop and $R_2$ will move to $v_4$. At this point, $R_1$ and $R_2$ are guaranteed to damage $n-2$ vertices. This is because no matter what the cop does, $R_1$ and $R_2$ can move towards each other (and away from the cop), and damage the $z$ additional vertices without being caught. This implies that $z$ vertices plus $v_1, v_4,\text{ and } v_5$ are damaged, so $n-5 + 3 = n-2$ vertices are damaged. 
If the cop moves to $v_4$ in the first round, then the robbers can simply follow the same strategy with their roles reversed. If the cop passes on the first round, we can use a technique similar to the one in the proof of Theorem \ref{thm:path}. Let $R_1$ move counterclockwise, damaging the vertices in $X$, while $R_2$ stays a distance of two away from the cop. Using this strategy, it is clear that $R_2$ will not be captured. If the cop ever moves from $v_3$ to $v_2$, then we know that $R_2$ can damage $v_4$. Afterward, $R_2$ can move clockwise until the robbers have together damaged all remaining vertices. In this case, the robbers damage at least $z+3=n-2$ vertices. If the cop never moves from $v_3$ to $v_2$, then the cop could only move to a vertex in $X$ by moving clockwise through $Y$. During this process, $R_2$ will stay a distance of two away from the cop and damage all of the vertices in $Y$, as well as $v_5$. It will take at least $\lceil \frac{n-5}{2} \rceil + 2$ rounds for the cop to enter $X$. However, $R_1$ can damage $v_1$ and all of the vertices in $X$ in $\lceil \frac{n-5}{2} \rceil + 1$ rounds. Then, $R_1$ can move clockwise back to $v_2$ without being captured, since the cop will always be at least distance two away. Thus, $n-2$ vertices are damaged. If the cop never enters $X$, the cop will only ever move between the vertices in $Y \cup \{v_3, v_4, v_5\}$. This means that $R_1$ can damage $v_1$, $v_2$, and the vertices in $X$, since the cop will never enter these vertices. Meanwhile, $R_2$ can start moving clockwise on every turn while remaining at least distance two from the cop at all times. Using this strategy, $R_2$ can damage $v_5$ and the vertices in $Y$. Therefore, $n-2$ vertices are damaged. We have shown that the robbers can damage at least $n-2$ vertices no matter what strategy the cop uses, so $\dmg(C_n; s) = n-2$. \end{proof} Finally, we show that a similar technique to Theorem \ref{thm:path} can be used to compute the $s$-robber damage number of a spider graph. \begin{thm}\label{thm:star} Suppose $G$ is a spider graph with $\ell \geq 3$ legs of lengths $k_1\geq k_2\geq \dots\geq k_{\ell}$. If $2 \leq s\leq \ell$, $\displaystyle \dmg(G; s) =\left(\sum_{i=1}^s k_i\right) -1$ and if $s > \ell$, $\dmg(G; s) =n-2$ . \end{thm} \begin{proof} Let the vertex in the center of the spider be $c$. If $s > \ell$, the fact that $\dmg(G;s) \leq n - 2$ follows from Proposition \ref{prop:damageAtMostNMinus2}. If $2 \leq s\leq \ell$, suppose the cop starts on $c$ and remains there unless a robber moves to a neighbor of $c$. In this case, the cop's strategy will be to capture the robber and return back to $c$. This implies that if the robbers start on the $s$ longest legs, the cop can protect all of the other legs, as well as one vertex in a leg that contains a robber. Therefore, the cop can protect $n - \left(\sum_{i=1}^s k_i\right) + 1$ vertices and $\dmg(G; s) \leq \left(\sum_{i=1}^s k_i\right) -1$. If $s >l$, the robbers can behave as $\ell$ robbers which implies $\dmg(G; s)\geq \dmg(G; \ell)$. Since $(\sum_{i=1}^{\ell} k_i) -1 = n-2$, it is sufficient to assume $2 \leq s\leq \ell$ and provide a strategy for the robbers such that they can always damage at least $\left(\sum_{i=1}^s k_i\right) -1$ vertices for every cop strategy. We first consider the case where $k_i\geq 2$ for all $1\leq i\leq s$. Let $v_i$ be the vertex adjacent to $c$ in the leg of length $k_i$ for $1\leq i\leq \ell$, and let $u_i$ be the vertex adjacent to $v_i$ which is not $c$ for $1\leq i\leq s$. 
Call the $s$ robbers $R_1,R_2,\dots, R_s$. Suppose the cop starts on $c$ and let $R_i$ place on $u_i$ for each $1\leq i\leq s$. If the cop moves in round one to $v_j$ for some $s+1\leq j\leq \ell$, each robber $R_i$ can move to $v_i$ and damage it. Then, regardless of what the cop does next, $R_i$ can move to the leaf in their leg without being captured. Thus, damage is at least $\left(\sum_{i=1}^s k_i\right)$. If the cop moves in round one to $v_j$ for some $1\leq j\leq s$, then $R_j$ will move towards the leaf in their leg and all the other robbers $R_i$ can move to $v_i$. On each subsequent round, regardless of what the cop does, each robber can move towards the leaf in their leg without being captured. Thus, at least $\left(\sum_{i=1}^s k_i\right)-1$ vertices are damaged. If the cop passes during round 1, let $R_s$ move towards the leaf in its leg. While the cop remains on $c$, the other robbers should not move. If the cop ever moves from $c$ to $v_j$ for some $1\leq j\leq \ell$, all robbers $R_i$ for $i\not=s,j$ should move to $v_i$. In every round after this, each $R_i$ should move towards the leaf in their leg, damaging $k_i$ vertices. If $s\leq j\leq \ell$, then the robbers $R_1,\dots, R_{s-1}$ damage $\sum_{i=1}^{s-1} k_i$ vertices and $R_s$ damages $k_s-1$ vertices, so at least $\left(\sum_{i=1}^s k_i\right)-1$ vertices are damaged. If $1\leq j\leq s-1$, then $R_j$ should maintain a distance of two from the cop as long as they share a leg, or until $R_j$ is forced to the leaf of their leg and captured. If $R_j$ is captured, the cop will take at least $2k_j+1$ rounds to capture $R_j$ and return to the center (since the cop passed in the first round). However, $R_s$ can move to the end of their leg and back to $v_s$ in only $2k_s-1$ rounds. Since $k_s\leq k_j$, $R_s$ can damage every vertex in its leg, including $v_s$, without being captured. Each remaining robber $R_i$ for $i\not=s,j$ also damages $k_i$ vertices and $R_j$ damages $k_j-1$ vertices. Therefore, at least $\left(\sum_{i=1}^s k_i\right)-1$ are damaged. Next, assume the cop does not capture $R_j$. Since $R_j$ can always maintain a distance of two from the cop, if the cop ever moves into another leg, then $R_j$ can damage $v_j$. After damaging $v_j$, $R_j$ can stop following the cop and move to the leaf in their leg, damaging $k_j$ vertices. Since all other robbers also damaged all of the vertices in their legs (except for $R_s$, which damaged at least $k_s-1$ vertices), damage is at least $\left(\sum_{i=1}^s k_i\right)-1$. If the cop never leaves the leg containing $R_j$, then $R_j$ can maintain a distance of two from the cop until $R_s$ moves from the leaf in their leg and back to $v_s$. Since the cop is on the leg with $R_j$, it follows that $R_s$ can damage $v_s$ without being captured. After this, $R_j$ can move to the leaf in their leg, damaging $k_j-1$ vertices ($v_j$ will not be damaged). Since all other robbers damaged all of the vertices in their legs, damage is at least $\left(\sum_{i=1}^s k_i\right)-1$. If the cop starts on one of the $\ell-s$ shortest legs, let $R_i$ place on $v_i$ for $1\leq i\leq s$. Regardless of what the cop does, each robber can move towards the end of their leg on each turn, and will not be caught before they damage every vertex in their leg. Therefore, damage is at least $\sum_{i=1}^s k_i$. Next, let the cop start on one of the $s$ longest legs; specifically, suppose the cop starts on a vertex on the leg of length $k_j$ for some $1\leq j\leq s$. 
Choose another leg of length $k_t$ for some $1\leq t\leq s$ and $t\not=j$, and consider the path $P$ on $k_j+k_t+1$ vertices formed by the two legs and the center vertex. Place two robbers on $P$ in the optimal starting positions relative to the cop for a path on $k_j+k_t+1$ vertices. All other robbers $R_i$ for $1\leq i\leq s$ and $i\not=j,t$ should place on $v_i$. Regardless of what the cop does, each $R_i$ can move towards the end of their leg during each round, damaging all $k_i$ vertices in their leg. Meanwhile, as long as the cop remains on $P$, $R_j$ and $R_t$ should follow the strategy for a path of that length, as outlined in the proof of Theorem \ref{thm:path}. If the cop never leaves $P$, the damage on the path is at least $k_j+k_t+1-2$ and total damage is at least $\left(\sum_{i=1}^s k_i\right)-1$. Now assume that at some point, the cop leaves $P$ and enters another leg. Consider what strategy each robber was employing on the previous turn, when the cop was necessarily on $c$. If neither robber was attempting to remain two vertices away from the cop, then each robber can continue employing their current strategies from the proof of Theorem \ref{thm:path} and they will be able to damage their parts of the path, damaging at least $k_j+k_t-1$ vertices together. Now suppose one of the robbers was attempting to remain two vertices away from the cop on $P$. Without loss of generality, let this robber be $R_t$. Note that in this case, neither robber will have been captured. While the cop is on $c$ or in another leg of $G$, both robbers should act as if the cop is on $c$. Then, $R_t$ is necessarily on $u_t$ and will remain on this vertex as long as the cop does not move to $v_j$ or $v_t$, or until $R_j$ damages all vertices on the other leg in $P$, whichever happens first. If the cop moves to $v_j$ or $v_t$, the robbers continue playing their strategy outlined in Theorem \ref{thm:path} until they damage $k_j+k_t-1$ vertices. If $R_j$ damages all the vertices on their side of $c$ first, then $R_t$ can now move to the leaf on the other side of $c$ in $P$. In this case, the two robbers still damage $k_j+k_t-1$ vertices. Therefore, all $s$ robbers together damage at least $\left(\sum_{i=1}^s k_i\right)-1$ vertices. Finally, we consider the case where $k_p=1$ for some $1\leq p\leq s$ and note this implies that $k_i=1$ for all $p\leq i\leq \ell$. Note if $p=1$, all legs have length one. If the cop starts on $c$ and the robbers all start on $v_1,\cdots, v_s$, the cop can capture at most one robber on the first round, so at least $s-1=\left(\sum_{i=1}^s k_i\right)-1$ vertices are damaged. If the cop does not start on $c$, the robbers can start on at least $s-1$ of the vertices $v_1,\cdots, v_s$ and the cop cannot capture a robber on the first round. Thus, at least $s-1=\left(\sum_{i=1}^s k_i\right)-1$ vertices are damaged. Now, assume $p \geq 2$, so there exists at least one leg of length at least two. In this case, if the cop starts on a vertex other than $c$, the argument follows as in the case where $k_i\geq 2$ for each $1 \leq i \leq s$. If the cop starts on $c$, let $R_i$ place on $u_i$ for each $1\leq i\leq p-1$ and let $R_i$ place on $v_i$ for each $p\leq i\leq s$. If the cop moves in the first round to a leg of length one (which may or may not contain a robber), the vertex in that leg is not damaged. However, all robbers $R_i$ not contained in that leg can then damage $v_i$ in at most two rounds (moving to do so if necessary) as well as any remaining vertices in their respective legs.
So in this case, damage is at least $\left(\sum_{i=1}^s k_i\right)-1$. If the cop moves in the first round to a leg of length at least two, the argument proceeds the same as the $k_i\geq 2$ case. If the cop does not move in the first round, then all robbers $R_i$ for $p\leq i\leq s$ damage the vertex in their leg since they are not captured in this round. Let $R_{p-1}$, the robber on the shortest leg with length at least 2, move towards the leaf in their leg while all robbers $R_j$ such that $1\leq j\leq p-2$ (if such robbers exist) remain still. From here, the argument again follows as in the $k_i\geq 2$ case. We have shown that for each cop strategy, the $s$ robbers can damage at least $\left(\sum_{i=1}^s k_i\right)-1$ vertices, obtaining the desired result.\end{proof} \section{Graphs with extreme $2$-robber damage number}\label{sec:extreme2robber} We now turn our focus to the $2$-robber damage number and consider graphs for which this number is high and low. First, we note that by Proposition \ref{prop:Empty}, $\dmg(\overline{K_1},2)=0=n-1$, $\dmg(\overline{K_2},2)=1=n-1$, $\dmg(\overline{K_3},2)=2=n-1$, and $\dmg(\overline{K_n},2)=2<n-1$ for all $n\geq 4$. We also know that $\dmg(P_2,2)=0=n-2$ by Theorem \ref{thm:path}. Having considered all graphs on one and two vertices and all graphs on three or more vertices without an edge, we can now apply our general bounds from Section \ref{sec:generalBounds} to all other graphs. Proposition \ref{prop:damageAtLeastSMinus1} shows that for every graph $G$ on at least $3$ vertices, $\dmg(G,2)\geq 1$ and Proposition \ref{prop:damageAtMostNMinus2} shows that $\dmg(G,2)\leq n-2$ for every graph on at least $2$ vertices and at least $1$ edge. In this section, we provide a complete characterization of the graphs which achieve these bounds, starting with $\dmg(G,2)=n-2$. \begin{thm}\label{thm:CharN-2} Suppose $G$ is a graph on $n$ vertices, \[\mathcal{G'} = \{\overline{K_4}, K_2 \cup K_1 \cup K_1, K_2 \cup K_2 \cup K_1, K_2 \cup K_2 \cup K_2, K_2 \cup K_1, K_2 \cup K_2\},\] and $\mathcal{G} = \{P_n ~|~ n \geq 2\} \cup \{C_n \ | \ n \geq 3\} \cup \mathcal{G'}$. Then, $\dmg(G; 2) = n-2$ if and only if $G \in \mathcal{G}$. \end{thm} \begin{proof} By Proposition \ref{prop:maxDegreeThree}, if a graph $G$ has $\dmg(G;2) = n-2$, then $\Delta(G) \leq 2$. Therefore, it suffices to determine which graphs with $\Delta(G) \leq 2$ satisfy $\dmg(G;2) = n-2$. Observe that if $\Delta(G) \leq 2$, then every component of $G$ is a path or cycle. We can divide these graphs into cases based on the number of components in $G$. First, consider the case where $G$ has at least five components. This implies that there are at least three components that do not contain a robber and therefore, at least three vertices remain undamaged. Thus, any graph with five or more components must not satisfy $\dmg(G; 2) = n-2$. Next, suppose $G$ has four components and there exists a component with at least two vertices (i.e., $G \ncong \overline{K_4}$). If the cop starts on the component with at least two vertices and protects an edge, they prevent two vertices from being damaged. Also, since there are four components and only three players, there is always at least one empty component which implies that at least one additional vertex will be undamaged. Therefore, $\dmg(G; 2) \leq n-3$. Finally, the only other graph to consider with four components is $\overline{K_4}$. By Proposition \ref{prop:Empty}, $\dmg(\overline{K_4};2)=2=n-2$. 
Now, suppose $G$ has three components and at least one component contains three or more vertices. Assume the cop starts on a component that contains at least three vertices. We again show that no matter what the robbers do, the cop can prevent damage to at least three vertices. First, suppose the robbers start on separate components and not the one with the cop. This implies that the cop can protect their entire component and therefore, at least three vertices are protected. Next, suppose one robber starts on the same component as the cop and the other robber starts on another component. This implies that the third component remains undamaged and hence, at least one vertex in that component is protected. We also know that the cop can protect an edge in their component and therefore, at least two more vertices are protected. Thus, at least three vertices in total remain undamaged. Finally, suppose both robbers start on the component with the cop. Again, we know the cop can protect an edge, preventing two vertices from being damaged in their component. Since there are two more components, we know at least four vertices can be protected in total. This implies that $\dmg(G; 2) \leq n-3$, regardless of the robbers' strategy. The only remaining graphs to consider with three components have at most two vertices in each component. This includes four graphs: $K_2 \cup K_1 \cup K_1, K_2 \cup K_2 \cup K_1, K_2 \cup K_2 \cup K_2, \overline{K_3}$. Consider the first three graphs, each of which contains $K_2$ as a component. In this case, it is optimal for the cop to start on $K_2$. Then, it is clear that the cop can protect two vertices and the robbers can always damage the remaining $n-2$ vertices by starting on separate components. Therefore, $\dmg(G; 2) = n-2$ for these three graphs. Clearly, if $G \cong \overline{K_3}$, $\dmg(G; 2) = 2 = n-1$. Next, suppose $G$ has two components and one component contains at least four vertices. We know that this component, $G_1$, must be a path or cycle on four or more vertices. Let the cop start on $G_1$. First, note that if both robbers start on $G_1$, at most $n-3$ vertices are damaged because $\dmg(G_1; 2) = |G_1|-2 < n-2$. Next, let one robber start on $G_1$ and let the other robber start on the other component, $G_2$. Since $G_1$ is a path or a cycle and $|G_1| \geq 4$, we know that $\dmg(G_1) = \lfloor\frac{|G_1|-1}{2}\rfloor \leq \frac{|G_1|-1}{2}$ \cite{CS19}. Note that if $\frac{|G_1|-1}{2} > |G_1|-3$, then $|G_1| < 5$. This implies that if $|G_1| \geq 5$, $\lfloor\frac{|G_1|-1}{2}\rfloor \leq \frac{|G_1|-1}{2} \leq |G_1|-3$. Furthermore, if $|G_1|=4$, then $\lfloor \frac{|G_1|-1}{2} \rfloor = 1 = |G_1|-3$. Therefore in this case, $\dmg(G_1) \leq |G_1|-3$ because $|G_1| \geq 4$. This implies that $\dmg(G; 2) \leq |G_2| + |G_1| - 3 = n-3$. Now, we consider graphs $G$ with two components where both components contain fewer than four vertices. Suppose one of the components of $G$ has at least three vertices. This implies that one component (say $G_1$) is either $C_3$ or $P_3$. Let the cop start on $G_1$. First, let one robber start on $G_1$ and let the other robber start on the other component, $G_2$. We know that the damage number of $P_3$ and $C_3$ is zero and therefore, at least three vertices are protected. If both robbers start on $G_1$, then two vertices are protected in the first component (since $\dmg(G_1; 2) = 1$) and at least one vertex remains undamaged from the second component. 
Finally, if both robbers start on $G_2$, it is clear that $|G_1| = 3$ vertices are protected. Thus, $\dmg(G; 2) \leq n-3$. Next, suppose both components of $G$ contain at most two vertices. This means $G$ is either $K_2 \cup K_1$, $K_2 \cup K_2$ or $\overline{K_2}$. For the first two graphs, we know the cop can protect an edge and therefore protect a $K_2$ which means the robbers can only damage the vertices in the other component. This implies that $\dmg(K_2 \cup K_1; 2) = 1 = n-2$ and $\dmg(K_2 \cup K_2; 2) = 2 = n-2$. If $G = \overline{K_2}$, then $\dmg(G; 2) = 1 = n-1$. Finally, we consider graphs $G$ with one component. In this case, $G$ is either $P_n$ or $C_n$. From Theorem \ref{thm:path}, we know that for $n \geq 2$, $\dmg(P_n; 2) = n-2$ and from Theorem \ref{thm:cycle}, we know that for $n \geq 3$, $\dmg(C_n; 2) = n-2$. \end{proof} Turning our attention to graphs with $2$-robber damage number equal to one, we first characterize such graphs on at most four vertices. \begin{prop} Let $G$ be a graph on $n\leq 4$ vertices. Then, $\dmg(G,2)=1$ if and only if $G$ is not one of the graphs in $\{P_2,P_4,C_4,\overline{K_1},\overline{K_3},\overline{K_4},K_2\cup K_2, K_2\cup K_1 \cup K_1\}$. \end{prop} \begin{proof} By Theorem \ref{thm:CharN-2}, $\dmg(P_2,2)=0$, $\dmg(P_4,2)=2$, $\dmg(C_4,2)=2$, $\dmg(\overline{K_4},2)=2$, $\dmg(K_2\cup K_2,2)=2$, and $\dmg(K_2\cup K_1 \cup K_1,2)=2$. By Proposition \ref{prop:Empty}, $\dmg(\overline{K_1},2)=0$ and $\dmg(\overline{K_3},2)=2$. Let $G$ be a graph on four vertices which is not $P_4,C_4,\overline{K_4},K_2\cup K_2,$ or $K_2\cup K_1 \cup K_1$. Then, by Proposition \ref{prop:damageAtMostNMinus2} and Theorem \ref{thm:CharN-2}, $\dmg(G,2)\leq 1$. By Proposition \ref{prop:damageAtLeastSMinus1}, $\dmg(G,2)\geq 1$, so $\dmg(G,2)=1$. Let $G$ be a graph on three vertices which is not $\overline{K_3}$. Then, by Proposition \ref{prop:damageAtMostNMinus2}, $\dmg(G,2)\leq 1$ and by Proposition \ref{prop:damageAtLeastSMinus1}, $\dmg(G,2)\geq 1$, so $\dmg(G,2)=1$. Finally, the only graph on two vertices which is not $P_2$ is $\overline{K_2}$, and by Proposition \ref{prop:Empty}, $\dmg(\overline{K_2},2)=1$. \end{proof} Finally, we show that a graph $G$ on five or more vertices has $\dmg(G,2)=1$ if and only if it is a threshold graph. A graph is a \emph{threshold graph} if it can be constructed from a single vertex by repeated additions of a single isolated vertex or a single dominating vertex. Alternatively, it is well known that threshold graphs can be characterized by their forbidden subgraphs: a graph is threshold if and only if no four of its vertices contain $P_4$, $C_4$, or $K_2\cup K_2$ as an induced subgraph. Both of these characterizations are crucial in the following argument. \begin{thm} Let $G$ be a graph on $n\geq 5$ vertices. Then, $\dmg(G,2)=1$ if and only if $G$ is a threshold graph with no more than one isolated vertex. \end{thm} \begin{proof} We will prove the contrapositive. First, assume $G$ is not a threshold graph. Therefore, $G$ contains at least one of the following as an induced subgraph on four vertices: $P_4$, $C_4$, or $K_2\cup K_2$. Let $A$ be the induced subgraph of $G$ which is isomorphic to one of these graphs and label the vertices of $A$ as shown in Figure \ref{fig:ForbiddenSub}. \begin{figure}[h!] 
\centering \includegraphics{ForbiddenSub.pdf} \caption{Vertex labelings for $P_4$, $C_4$, and $K_2\cup K_2$, respectively} \label{fig:ForbiddenSub} \end{figure} If the cop's initial placement is $v_1$, let the robbers initially place on $v_3$ and $v_4$. If $A$ is isomorphic to $P_4$ or $K_2\cup K_2$, the robbers evade capture during the first round. Thus, at least two vertices are damaged. If $A$ is isomorphic to $C_4$, the robber on $v_4$ is captured in the first round, but the robber on $v_3$ can damage $v_3$ and move to $v_2$ in this round. In the second round, the cop cannot capture the remaining robber since $v_2v_4\not\in E(G)$. Thus, the robber also damages $v_2$, so at least two vertices are damaged. If the cop's initial placement is $v_4$, at least two vertices are damaged by a similar argument. If the cop's initial placement is a vertex $v$ other than $v_1$ and $v_4$, let the robbers initially place on $v_1$ and $v_4$. If $v$ is not adjacent to at least one of $v_1$ or $v_4$, then both robbers evade capture and damage their initial vertices in the first round. Thus, at least two vertices are damaged. If $v$ is adjacent to at least one of $v_1$ or $v_4$, without loss of generality, assume $v$ is adjacent to $v_1$. Then, during the first round, the cop can capture the robber on $v_1$. The robber on $v_4$ can damage $v_4$ and move to $v_3$ during round one. Since $v_1v_3\not\in E(G)$, the cop cannot capture the remaining robber during round two. Thus, the robber also damages $v_3$ and at least two vertices are damaged. Since the robbers can damage at least two vertices for any cop strategy, it is clear that $\dmg(G,2)\geq 2$. Next, assume that $G$ is a threshold graph with no more than one isolated vertex. Since threshold graphs are formed by repeated additions of a single isolated vertex or a single dominating vertex, a threshold graph with no isolated vertices has a dominating vertex and a threshold graph with one isolated vertex has a vertex of degree $n-2$. In either case, let $v$ be this vertex of high degree ($n-1$ or $n-2$). Assume for contradiction that $\dmg(G,2)\geq 2$ and consider the case where the cop initially places on $v$. If there is an isolated vertex in $G$, then the robbers must not place there, since the robber on the isolated vertex would only damage one vertex and the other robber would be captured in the first round, contradicting $\dmg(G,2)\geq 2$. Therefore, the robbers both initially place on vertices adjacent to $v$. If the robbers initially place on the same vertex, they will both be captured in the first round, again contradicting $\dmg(G,2)\geq 2$. So, the robbers must initially place on two distinct vertices, $v_1$ and $v_4$. Since the cop can always capture one robber during round one and $\dmg(G,2)\geq 2$, the remaining robber must be able to damage their initial vertex and move to a neighboring vertex. In order for this robber to damage a second vertex, they need to move to a neighboring vertex which is not adjacent to the cop's location. Thus, it must be that $v_1$ has a neighbor which is not adjacent to $v_4$ and $v_4$ has a neighbor which is not adjacent to $v_1$. Let $v_2$ and $v_3$ be such neighbors of $v_1$ and $v_4$, respectively. Since $v_1v_3,v_2v_4\not\in E(G)$ and $v_1v_2,v_3v_4\in E(G)$, the induced subgraph of $G$ on vertices $\{v_1,v_2,v_3,v_4\}$ is one of the graphs in Figure \ref{fig:ForbiddenSub}. However, since $G$ is a threshold graph, it does not contain any of these graphs as induced subgraphs.
Therefore, it must be that $\dmg(G,2)=1$, as desired. \end{proof} \section{Concluding Remarks} Many of the results in this paper focus on damage in the most general case with $s$ robbers. However, there are a few results that we proved for the $s=2$ case that we were unable to generalize. For the first such result, Proposition \ref{prop:maxDegreeThree}, we demonstrate that one potential generalization does not hold and in Conjecture \ref{conj:maxdeg}, we provide another generalization. In Section \ref{sec:extreme2robber}, we characterize the graphs with the highest and lowest $2$-robber damage number. In general, we know by Propositions \ref{prop:damageAtLeastSMinus1} and \ref{prop:damageAtMostNMinus2} that for a non-empty graph $G$ on $n\geq 2$ vertices, the maximum $s$-robber damage number is $n-2$ and the minimum is $s-1$. By Theorems \ref{thm:path}, \ref{thm:cycle}, and \ref{thm:star}, we already know that $\dmg(G;s)=n-2$ for paths, cycles, and spiders with no more than $s$ legs. However, this is not a complete characterization. It is worth noting that Proposition \ref{prop:maxDegreeThree} is used in the proof of Theorem \ref{thm:CharN-2}, which characterizes the graphs whose $2$-robber damage number is $n-2$. Therefore, generalizing Proposition \ref{prop:maxDegreeThree}, possibly by proving Conjecture \ref{conj:maxdeg}, may lead to a characterization of graphs with $\dmg(G;s)=n-2$. \begin{thebibliography}{100} \bibitem{AF84} M. Aigner, M. Fromme, A game of cops and robbers, {\em Discrete Appl. Math.}, 8 (1984), 1--11. \bibitem{BMPP16} A. Bonato, D. Mitsche, X. P\'erez-Gim\'enez, P. Pra\l at, A probabilistic version of the game of Zombies and Survivors on graphs, {\em Theor. Comput. Sci.}, 655 (2016), 2--14. \bibitem{CRbook} A.~Bonato, R.J.~Nowakowski, {\em The game of Cops and Robbers on graphs}, American Mathematical Society, Providence, 2011. \bibitem{BPPR17} A.~Bonato, X.~P\'erez-Gim\'enez, P.~Pra\l at, B.~Reiniger, The game of overprescribed Cops and Robbers played on graphs, {\em Graphs Combin.}, 57 (2017), 801--815. \bibitem{CEGPRS21} J. Carlson, R. Eagleton, J. Geneson, J. Petrucci, C. Reinhart, P. Sen, The damage throttling number of a graph, {\em Australas. J. Combin.}, 80 (2021), 361--385. \bibitem{CS19} D. Cox, A. Sanaei, The damage number of a graph, {\em Australas. J. Combin.}, 75 (2019), 1--16. \bibitem{Diestel} R.~Diestel, \emph{Graph Theory}, fifth ed., Springer, Berlin, 2017. \bibitem{FHMP16} S.L. Fitzpatrick, J. Howell, M.E. Messinger, D.A. Pike, A deterministic version of the game of zombies and survivors on graphs, \emph{Discrete Appl. Math.}, 213 (2016), 1--12. \bibitem{NW83} R.J.~Nowakowski, P.~Winkler, Vertex-to-vertex pursuit in a graph, {\em Discrete Math.}, 43 (1983), 235--239. \bibitem{Q78} A.~Quilliot, Jeux et points fixes sur les graphes, {\em Th\`ese de 3\`eme cycle}, Universit\'e de Paris VI (1978) [in French], 131--145. \end{thebibliography} \end{document}
2205.06850v2
http://arxiv.org/abs/2205.06850v2
Nonlocal nonlinear diffusion equations. Smoothing effects, Green functions, and functional inequalities
\documentclass[a4paper, 11pt]{amsart} \usepackage[square, numbers, comma]{natbib} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \usepackage{amsmath,amsthm,amssymb,amssymb,esint,verbatim,tabularx,graphicx} \usepackage{fancyhdr} \usepackage{enumerate} \usepackage{amsmath} \usepackage{fancyhdr} \usepackage{epic} \usepackage{pgf,tikz} \usetikzlibrary{positioning} \usetikzlibrary{arrows.meta} \usetikzlibrary{calc} \usepackage[utf8]{inputenc} \usepackage{color} \usepackage{hyperref} \usepackage{verbatim} \usepackage{pdfpages} \usepackage{mathrsfs} \usepackage{cancel} \hypersetup{urlcolor=blue, colorlinks=true} \addtolength{\hoffset}{-1.5cm}\addtolength{\textwidth}{3cm} \addtolength{\voffset}{-.5cm}\addtolength{\textheight}{1cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{question}{Open question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}{Definition}[section] \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{example}{Example}[section] \numberwithin{equation}{section} \newcommand{\R}{\ensuremath{\mathbb{R}}} \newcommand{\N}{\ensuremath{\mathbb{N}}} \newcommand{\E}{\ensuremath{\mathcal{E}}} \newcommand{\Z}{\ensuremath{\mathbb{Z}}} \newcommand{\J}{\ensuremath{\mathbb{J}}} \newcommand{\Levy}{\ensuremath{\mathcal{L}}} \newcommand{\Operator}{\ensuremath{\mathfrak{L}}} \newcommand{\Hvar}{\ensuremath{\gamma}} \newcommand{\dual}{\ensuremath{\gamma}} \newcommand{\Dual}{\ensuremath{\Gamma}} \newcommand{\Levymu}{\ensuremath{\mathcal{L}^\mu}} \newcommand{\Levynu}{\ensuremath{\mathcal{L}^\nu}} \newcommand{\Levynux}{\ensuremath{\mathcal{L}^{\nu_{\Dx}}}} \newcommand{\Levyd}{\ensuremath{\overline{\mathcal{L}}}} \newcommand{\Lvar}{L_{\varphi}} \newcommand{\dif}{\mathrm{d}} \newcommand{\ra}{\rightarrow} \newcommand{\alp}{\beta} \newcommand{\veps}{\varepsilon} \newcommand{\Div}{\mbox{div}} \newcommand{\hv}{\widehat{v}} \newcommand{\hpsi}{\widehat{\psi}} \newcommand{\uu}{\hat{u}} \newcommand{\vv}{\hat{v}} \newcommand{\B}{B_\varepsilon^\mu} \newcommand{\Bn}{B_{\varepsilon_n}^\mu} \newcommand{\W}{\mathcal{W}} \newcommand{\V}{\mathcal{V}} \newcommand{\As}{{A}} \newcommand{\Bs}{{B}} \newcommand{\Cs}{{C}} \newcommand{\Ss}{{S}} \newcommand{\Gs}{{G}} \newcommand{\dd}{\,\mathrm{d}} \newcommand{\diver}{\mathrm{div}} \newcommand{\dell}{\partial} \newcommand{\indikator}{\mathbf{1}_{|z|\leq 1}} \newcommand{\indik}{\mathbf{1}} \newcommand{\qqquad}{\qquad\quad} \newcommand{\e}{\textup{e}} \DeclareMathOperator*{\esssup}{ess \, sup} \DeclareMathOperator*{\essinf}{ess \, inf} \DeclareMathOperator*{\esslim}{ess\,lim} \DeclareMathOperator{\sgn}{\textup{sign}} \DeclareMathOperator{\Ent}{E} \DeclareMathOperator{\Lip}{Lip} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator*{\cO}{\mathit{O}} \DeclareMathOperator*{\co}{\mathit{o}} \DeclareMathOperator*{\sinc}{sinc} \newcommand{\Ux}{\overline{U}} \newcommand{\Ut}{\widetilde{U}} \newcommand{\Vt}{\widetilde{V}} \newcommand{\Uxt}{\widetilde{\overline{U}}} \newcommand{\Grid}{\mathcal{G}_h} \newcommand{\GridT}{\mathcal{T}_{\Delta t}^T} \newcommand{\RN}{\mathbb{R}^N} \definecolor{darkblue}{rgb}{0.05, .05, .65} \definecolor{darkgreen}{rgb}{0.1, .65, .1} \definecolor{darkred}{rgb}{0.8,0,0} \begin{document} \title[Smoothing effects, Green functions, and functional inequalities]{Nonlocal nonlinear diffusion equations.\\ Smoothing effects, Green functions, \\ and functional inequalities} \author[M.~Bonforte]{Matteo Bonforte} 
\address[M. Bonforte]{Departamento de Matem\'aticas\\ Universidad Aut\'onoma de Madrid (UAM)\\ Campus de Cantoblanco, 28049 Madrid, Spain} \email[]{matteo.bonforte\@@{}uam.es} \urladdr{http://verso.mat.uam.es/~matteo.bonforte/} \author[J.~Endal]{J\o rgen Endal} \address[J.~Endal]{Department of Mathematical Sciences\\ Norwegian University of Science and Technology (NTNU)\\ N-7491, Trondheim, Norway\\ and\\ Departamento de Matem\'aticas\\ Universidad Aut\'onoma de Madrid (UAM)\\ Campus de Cantoblanco, 28049 Madrid, Spain} \email[]{jorgen.endal\@@{}ntnu.no} \urladdr{https://folk.ntnu.no/jorgeen/} \keywords{Nonlinear degenerate parabolic equations, boundedness estimates, $L^1$--$L^\infty$-smoothing effects, Green functions, heat kernels, Gagliardo-Nirenberg-Sobolev inequalities, Moser iteration, porous medium, Laplacian, fractional Laplacian, nonlocal operators, existence} \subjclass[2010]{35K55, 35K65, 35A01, 35B45, 35R09, 35R11. } \begin{abstract} We establish boundedness estimates for solutions of generalized porous medium equations of the form $$ \partial_t u+(-\mathfrak{L})[u^m]=0\quad\quad\text{in $\mathbb{R}^N\times(0,T)$}, $$ where $m\geq1$ and $-\mathfrak{L}$ is a linear, symmetric, and nonnegative operator. The wide class of operators we consider includes, but is not limited to, L\'evy operators. Our quantitative estimates take the form of precise $L^1$--$L^\infty$-smoothing effects and absolute bounds, and their proofs are based on the interplay between a dual formulation of the problem and estimates on the Green function of $-\mathfrak{L}$ and $I-\mathfrak{L}$. In the linear case $m=1$, it is well-known that the $L^1$--$L^\infty$-smoothing effect, or ultracontractivity, is equivalent to Nash inequalities. This is also equivalent to heat kernel estimates, which imply the Green function estimates that represent a key ingredient in our techniques. We establish a similar scenario in the nonlinear setting $m>1$. First, we can show that operators for which ultracontractivity holds, also provide $L^1$--$L^\infty$-smoothing effects in the nonlinear case. The converse implication is not true in general. A counterexample is given by $0$-order L\'evy operators like $-\mathfrak{L}=I-J\ast$. They do not regularize when $m=1$, but we show that surprisingly enough they do so when $m>1$, due to the convex nonlinearity. This reveals a striking property of nonlinear equations: the nonlinearity allows for better regularizing properties, almost independently of the linear operator. Finally, we show that smoothing effects, both linear and nonlinear, imply families of inequalities of Gagliardo-Nirenberg-Sobolev type, and we explore equivalences both in the linear and nonlinear settings through the application of the Moser iteration. \end{abstract} \maketitle \tableofcontents \section{Introduction and main results} In this paper, we consider solutions of generalized porous medium equations \cite{Vaz17}: \begin{equation}\label{GPME}\tag{\textup{GPME}} \begin{cases} \dell_tu+(-\Operator)[u^m]=0 \qquad\qquad&\text{in $Q_T:=\R^N\times(0,T)$,}\\ u(\cdot,0)=u_0 \qquad\qquad&\text{on $\R^N$,} \end{cases} \end{equation} where $m\geq1$, $T>0$, $0\leq u_0\in L^1(\R^N)$, and the operator $-\Operator$ is at least linear, symmetric, nonnegative\footnote{And moreover, densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$. Basically, we need the comparison principle and $L^p$-decay to hold for solutions of \eqref{GPME}. 
We refer the reader to Appendix \ref{sec:mAccretiveDirichlet} for further information. Note that the terminology ``Dirichlet operator'' appears in the literature also as ``sub-Markovian operator''. This property is expressing the fact that the operator has to be order preserving.}, and includes L\'evy operators\footnote{That is, operators which are nonnegative at any global nonnegative maximum (usually called the positive maximum principle), see e.g. \cite{Cou64}. When $c>0$, there is (strong) absorption in \eqref{GPME}.} defined for $\psi\in C_\textup{c}^\infty(\R^N)$ and $c\geq 0$ as \begin{equation}\label{def:LevyOperators} c\psi(x)-\sum_{i,j=1}^{N}a_{ij}\dell_{x_ix_j}^2\psi(x)-\underbrace{P.V.\int_{\R^N\setminus\{0\} } \big(\psi(x+z)-\psi(x)\big) \dd\mu(z)}_{=:\Levy^\mu[\psi](x)} \end{equation} where the real matrix $[a_{ij}]_{i,j=1,\ldots,N}$ is nonnegative and symmetric, $P.V.$ is the Cauchy principal value, and: \begin{align} &\label{muas}\tag{$\textup{H}_{\mu}$} \mu \text{ is a nonnegative symmetric Radon measure on }\R^N\setminus\{0\} \text{ satisfying} \nonumber\\ &\int_{|z|\leq1}|z|^2\dd \mu(z)+\int_{|z|>1}1\dd \mu(z)<\infty\nonumber. \end{align} Important examples are the Laplacian, the fractional Laplacian, sum of onedimensional fractional Laplacians, and so-called convolution type or $0$-order operators given as $-\Operator=I-J\ast$ where $J\geq 0$ satisfies $\|J\|_{L^1(\R^N)}=1$. Boundedness estimates are the first step on the way to further regularity properties. This was exploited in e.g. \cite{AtCa10} (cf. Theorem 2.2 in \cite{DTEnVa21}), \cite[Section 7]{DPQuRoVa14}, \cite{BoVa14}, \cite[Theorem 1.2]{DPQuRo16}, \cite[Theorem 1.2]{DPQuRoVa17}, \cite[Theorem 1.1]{DPQuRo18}, and \cite[Theorem 1.2]{BrLiSt21}. It is also an important estimate in obtaining uniqueness in $L^1$ for very weak solutions of \eqref{GPME}, see e.g. \cite{BrCr79, DTEnJa17a, DTEnJa17b}. We will therefore focus on such estimates in this paper. It is well-known since the works of B\'enilan \cite{Ben78} and V\'eron \cite{Ver79} that the parabolic equation $\dell_tu-\Delta[\varphi(u)]=0$ enjoys $L^1$--$L^\infty$-smoothing when $\varphi\in C^1(\R)$ and $\varphi'(r)\geq C|r|^{m-1}$ (see also \cite[Theorem 8.2]{DPQuRoVa17} in the case of the fractional Laplacian $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$, and \cite{Vaz07} in the standard Laplacian case). Let us therefore fix $\varphi(r)=|r|^{m-1}r$. In the linear case ($m=1$), the standard heat equation and the fractional heat equation still enjoy $L^1$--$L^\infty$-smoothing \cite{BaPeSoVa14, BoSiVa17}, but there are cases in which the operator is too weak to ensure such estimates. This can e.g. be seen for the convolution type operators $-\Operator=I-J\ast$ (cf. \cite[Theorem 1.4 and Lemma 1.6]{A-VMaRoT-M10}), where the solutions are as smooth as the initial data. Hence, when the nonlinearity cannot help, the operator needs to be strong enough to provide bounded solutions. One of our main concerns is therefore the following question: $$ \textit{Which operators $\Operator$ produce bounded solutions of \eqref{GPME}?} $$ To provide an answer to this intriguing question we will extend the so-called Green function method to a wide class of operators. Such a method was developed in a series of papers \cite{BoVa15, BoVa16, BoFiR-O17, BoFiVa18a, BoFiVa18b, BeBoGaGr20, BeBoGrMu21, BoIbIs22} both for operators on bounded domains and on manifolds, including the Euclidean space $\R^N$. 
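As a guiding model, recall the standard example (the constant $c_{N,\alpha}$ and the surface measure $\omega_{N-1}$ of the unit sphere are used only in this aside): for $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2)$ and $N>\alpha$, the inverse operator is the Riesz potential, whose kernel is $c_{N,\alpha}|x|^{-(N-\alpha)}$. A one-line computation in polar coordinates then gives estimates of precisely the type we will assume below:
\begin{equation*}
\int_{B_R(0)}c_{N,\alpha}|x|^{-(N-\alpha)}\dd x=c_{N,\alpha}\,\omega_{N-1}\int_0^R r^{\alpha-1}\dd r=\frac{c_{N,\alpha}\,\omega_{N-1}}{\alpha}R^{\alpha}
\qquad\text{and}\qquad
c_{N,\alpha}|x|^{-(N-\alpha)}\leq c_{N,\alpha}R^{-(N-\alpha)}\quad\text{for $|x|\geq R$.}
\end{equation*}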
The key tool is having at our disposal good estimates for the kernel of $(-\Operator)^{-1}$, i.e. the Green function $\mathbb{G}_{-\Operator}$. Now, applying the inverse operator on each side of the PDE in \eqref{GPME} yields the so-called dual equation $$ 0=(-\Operator)^{-1}[\dell_tu]+(-\Operator)^{-1}[(-\Operator)[u^m]]=\mathbb{G}_{-\Operator}\ast\dell_tu+u^m. $$ Another essential ingredient is the so-called B\'enilan-Crandall (time-monotonicity) estimate $$ \dell_tu(\cdot,t)\geq-\frac{u(\cdot,t)}{(m-1)t}\qquad\text{in $\mathcal{D}'(\R^N)$,} $$ which is a weak version of the fact that the map $t\mapsto t^{\frac{1}{m-1}}u(\cdot,t)$ is nondecreasing. This is well-known to be a consequence of the time-scaling and comparison principle for \eqref{GPME}, cf. \cite{BeCr81b, Vaz07}.\footnote{The estimate is purely nonlinear since it degenerates when $m=1$. However, the stronger Aronson-B\'enilan estimate \cite{ArBe79} does hold for the linear case as well, but it relies on the operator itself having space-scaling. We refer the reader e.g. to \cite[Lemma 6.1]{BoSiVa17} and \cite[p. 1270]{DPQuRoVa12}. Thus, the Green function method can indeed hold for particular linear cases.} A combination of the above equations then gives the so-called fundamental upper bound or ``almost representation formula'' \begin{equation}\label{eq:IntroAlmostRepFormula} u^m(x_0,t)\leq\frac{1}{(m-1)t}\mathbb{G}_{-\Operator}(\cdot-x_0)\ast_x u(\cdot,t). \end{equation} The latter name is justified in the sense that the bound is similar to the one given by the representation formula (convolution with the heat kernel) in the linear case $m=1$, where the Green function $\mathbb{G}_{-\Operator}(\cdot-x_0)$ is replaced by the heat kernel $\mathbb{H}_{-\Operator}(\cdot-x_0)$ corresponding to the operator. In both cases, the boundedness estimates follow directly by applying various properties of $\mathbb{G}_{-\Operator}(\cdot-x_0)$ and $\mathbb{H}_{-\Operator}(\cdot-x_0)$. Further details on the proofs can be found in Section \ref{sec:Proofs}. Our method allows us to recover the well-known $L^1$--$L^\infty$-smoothing result, cf. Theorem \ref{thm:L1ToLinfinitySmoothing2} and Figure \ref{fig:ImplicationOfOperatorsAndSobolev}, \begin{equation*} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-N\theta_{\alpha}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}\qquad\text{for all $t>0$} \end{equation*} where $\alpha\in(0,2]$ and $\theta_{\alpha}:=(\alpha+N(m-1))^{-1}$, which is valid for the Laplacian $(-\Operator)=(-\Delta)$ (corresponding to $\alpha=2$) \cite{Vaz06, Vaz07, DBGiVe12} and for the fractional Laplacian $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$ \cite{DPQuRoVa12} (see also \cite{CoHa16, CoHa21} with $p=2$). An immediate consequence of our approach is that also L\'evy operators $\Operator=\Levy^\mu$ with $\mu$ comparable to the measure of the fractional Laplacian enjoy the same estimate, see Lemma \ref{lem:GreenComparableWithFractionalLaplacian}. It is also interesting to note that we are able to treat operators whose Green functions have different power behaviours. Solutions of \eqref{GPME} with such operators satisfy \begin{equation*} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-N\theta_{\alpha}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}+t^{-N\theta_{2}}\|u_0\|_{L^1(\R^N)}^{2\theta_{2}}\qquad\text{for a.e. $t>0$,} \end{equation*} see Theorem \ref{thm:L1ToLinfinitySmoothing3}.
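Before turning to the examples, we record in passing the formal one-line computation behind \eqref{eq:IntroAlmostRepFormula}; the rigorous argument in Section \ref{sec:Proofs} uses the weak dual formulation instead. Combining the dual equation with the B\'enilan-Crandall estimate and the nonnegativity of $\mathbb{G}_{-\Operator}$ gives
\begin{equation*}
u^m(x_0,t)=-\,\mathbb{G}_{-\Operator}(\cdot-x_0)\ast_x\dell_tu(\cdot,t)\leq\mathbb{G}_{-\Operator}(\cdot-x_0)\ast_x\frac{u(\cdot,t)}{(m-1)t}=\frac{1}{(m-1)t}\,\mathbb{G}_{-\Operator}(\cdot-x_0)\ast_x u(\cdot,t).
\end{equation*}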
The examples treated in Section \ref{sec:CombinationsOfAssumptionG_1} are $-\Operator=(-\Delta)+(-\Delta)^{\frac{\alpha}{2}}$, relativistic Schr\"odinger type operators $-\Operator=(\kappa^2I-\Delta)^\frac{\alpha}{2}-\kappa^{\alpha} I$ with $\kappa>0$, and $\Operator$ being the generator of a finite range isotropically symmetric $\alpha$-stable process in $\R^N$ with jumps of size larger than $1$ removed. Finally, if $\mathbb{G}_{-\Operator}\in L^1(\R^N)$ in \eqref{eq:IntroAlmostRepFormula}, then we immediately obtain the following absolute bound (cf. Theorem \ref{thm:AbsBounds} and Figure \ref{fig:ImplicationOfOperatorsAndWeaker}): \begin{equation}\label{eq:IntroAbsoluteBound} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-1/(m-1)}\qquad\text{for a.e. $t>0$.} \end{equation} All operators on the form $I-\Operator$ provide such an estimate (Lemma \ref{lem:GreenResolventIntegrable}), and also the operator $-\Operator=(I-\Delta)^{\frac{\alpha}{2}}$ corresponding to the Bessel potential (Lemma \ref{lem:OperatorWithBesselPotential})\footnote{This operator can be written as $I-\Levy^\mu$ for $\mu$ satisfying \eqref{muas}, i.e, on the form \eqref{def:LevyOperators}.}. They are furthermore examples of operators which have better boundedness properties in the nonlinear case than in the linear, see Remark \ref{rem:OperatorWithBesselPotential}. The Green function method requires the existence of an inverse $(-\Operator)^{-1}$ with a kernel $\mathbb{G}_{-\Operator}$ satisfying suitable estimates. This of course puts a restriction on the class of operators we are able to treat. To remedy this fact, we also develop another approach which consists in considering $\mathbb{G}_{I-\Operator}$ instead, i.e., the Green function associated with the resolvent operator $I-\Operator$. In this case, the inverse always exists, and $\mathbb{G}_{I-\Operator}$ is at least as good as $\mathbb{G}_{-\Operator}$. By rewriting the PDE in \eqref{GPME} to $\dell_tu+(I-\Operator)[u^m]=u^m$, applying $(I-\Operator)^{-1}$, and using the time-monotonicity estimate (associated with $-\Operator$), we obtain the following fundamental upper bound: \begin{equation}\label{eq:IntroAlmostRepFormula2} u^m(x_0,t)\leq\bigg(\frac{1}{(m-1)t}+\|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1}\bigg)\mathbb{G}_{I-\Operator}(\cdot-x_0)\ast_x u(\cdot,t). \end{equation} Hence, we see that we have to pay the price of treating an equation with the reaction term $u^m$, which we then have to reabsorb to be able to obtain good estimates in this case. However, note that we can split the estimation of \eqref{eq:IntroAlmostRepFormula2} into two cases: $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1}\leq \frac{1}{(m-1)t} \qquad\text{and}\qquad \|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1}> \frac{1}{(m-1)t}. $$ In the first case, we already have the estimate $\|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-1/(m-1)}$, while in the other \begin{equation*}u^m(x_0,t)\leq 2\|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1}\mathbb{G}_{I-\Operator}(\cdot-x_0)\ast_x u(\cdot,t), \end{equation*} from which we can deduce $\|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim \|u_0\|_{L^1(\R^N)}$ as long as $\mathbb{G}_{I-\Operator}\in L^p(\R^N)$ with $p\in(1,\infty)$. Hence, the fundamental upper bound \eqref{eq:IntroAlmostRepFormula2} yields \begin{equation}\label{eq:IntroResolventBoundednessEstimate} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-1/(m-1)}+\|u_0\|_{L^1(\R^N)}\qquad\text{for a.e. $t>0$,} \end{equation} see Theorem \ref{thm:L1ToLinfinitySmoothing} and Figure \ref{fig:ImplicationOfOperatorsAndNonSobolev}. 
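Let us sketch the second case formally (with constants that are not those of Theorem \ref{thm:L1ToLinfinitySmoothing}): taking the supremum over $x_0$ in the last display and applying H\"older's inequality together with the interpolation $\|u\|_{L^{p'}(\R^N)}\leq\|u\|_{L^\infty(\R^N)}^{1/p}\|u\|_{L^1(\R^N)}^{1/p'}$, $p'=\frac{p}{p-1}$, we get
\begin{equation*}
\|u(\cdot,t)\|_{L^\infty(\R^N)}^{m}\leq2\|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1}\|\mathbb{G}_{I-\Operator}\|_{L^p(\R^N)}\|u(\cdot,t)\|_{L^{p'}(\R^N)}\leq2\|\mathbb{G}_{I-\Operator}\|_{L^p(\R^N)}\|u(\cdot,t)\|_{L^\infty(\R^N)}^{m-1+\frac{1}{p}}\|u(\cdot,t)\|_{L^1(\R^N)}^{\frac{1}{p'}},
\end{equation*}
and hence, provided $\|u(\cdot,t)\|_{L^\infty(\R^N)}$ is finite (a point which the rigorous proof has to justify), $\|u(\cdot,t)\|_{L^\infty(\R^N)}\leq\big(2\|\mathbb{G}_{I-\Operator}\|_{L^p(\R^N)}\big)^{p'}\|u(\cdot,t)\|_{L^1(\R^N)}$; the $L^1$-decay then concludes.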
The operator $-\Operator=\sum_{i=1}^N(-\dell_{x_ix_i}^2)^{\frac{\alpha}{2}}$ provides an important example in this case since $\mathbb{G}_{-\Operator}=\infty$ (for some values of $\alpha$), while $\mathbb{G}_{I-\Operator}\in L^p(\R^N)$. We refer to Lemma \ref{lem:GreenAnisotropicLaplacians} and Remark \ref{ref:GreenAnisotropicLaplacians} for further information. Indeed, this is the first time the Green function method is able to treat this operator. We end this part by also mentioning that L\'evy operators $\Operator=\Levy^\mu$ with $\mu$ such that, for $\alpha\in(0,2)$ and constants $C_1,C_2, C_3>0$, \begin{equation}\label{eq:IntroBestSmoothingExample} \frac{C_1}{|z|^{N+\alpha}}\mathbf{1}_{|z|\leq 1}\leq\frac{\dd \mu}{\dd z}(z)\leq \frac{C_2}{|z|^{N+\alpha}}\mathbf{1}_{|z|\leq 1}\qquad\text{and}\qquad \frac{\dd \mu}{\dd z}(z)\leq C_3\mathbf{1}_{|z|>1}, \end{equation} fall into this case (Lemma \ref{lem:OperatorWithDerivativesAtZero}). The latter fits with the ``usual impression'' in the PDE community regarding the least assumptions expected on nonlocal operators which would produce bounded solutions of \eqref{GPME}. Nevertheless, we were not able to find such a result other places in the literature. An alternative to the Green function method is the nowadays standard Moser iteration \cite{Mos64, Mos67}, which requires the quadratic form associated to the operator to satisfy Gagliardo-Nirenberg-Sobolev (GNS) and Stroock-Varopoulos inequalities. In the case $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$, we refer to \cite{DPQuRoVa12}. We devote Section \ref{sec:SmoothingAndGNS} to a further discussion on the connections between Green function estimates, heat kernel estimates, and functional inequalities like GNS. In the linear case $m=1$, it is well-known that $L^1$--$L^\infty$-smoothing is equivalent with Nash inequalities (a subfamily of GNS) \cite{Nas58}, and moreover, equivalent with on-diagonal heat kernel $\mathbb{H}_{-\Operator}$ estimates. We present those connections in Theorem \ref{thm:LinearEquivalences}, where we also include---maybe the less-known---\emph{equivalence} with Sobolev inequalities. Since we are interested in Green function estimates, we finally prove that the bound $\mathbb{G}_{-\Operator}\lesssim |x|^{-(N-\alpha)}$ implies the Sobolev inequality. If the Green function exists, it is given by $$ \mathbb{G}_{-\Operator}(x)=\int_0^\infty\mathbb{H}_{-\Operator}(x,t)\dd t. $$ Hence, off-diagonal heat kernel bounds is needed to give estimates on the Green function. In other words, we need more information on $\mathbb{H}_{-\Operator}$ than what the previous equivalences give us. The linear panorama is more or less settled, and we move on to the nonlinear case $m>1$. Again, $L^1$--$L^\infty$-smoothing is equivalent with a family of GNS inequalities which is now subcritical since $m>1$. The latter is somehow interesting in the sense that we need a weaker inequality, compared to the linear case, in order to prove $L^1$--$L^\infty$-smoothing through the Moser iteration. However, this inequality is still equivalent with the Sobolev inequality by \cite{BaCoLeS-C95}. This is in contrast to the absolute bound which is equivalent to the Poincar\'e inequality! In general, the latter inequality can only give $L^q$--$L^p$-smoothing estimates through an iteration approach \cite{Gri10, GrMuPo13}, and somehow the Green function method then provides an improvement here (since we indeed reach $L^\infty$-estimates, cf. \eqref{eq:IntroAbsoluteBound}). 
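Returning for a moment to the role of off-diagonal bounds, the following elementary computation may be helpful; it assumes, for illustration only, that $\mathbb{H}_{-\Operator}$ satisfies the standard two-sided fractional heat kernel bound $\mathbb{H}_{-\Operator}(x,t)\lesssim\min\{t^{-N/\alpha},\,t\,|x|^{-(N+\alpha)}\}$ with $\alpha\in(0,2)$ and $N>\alpha$. Splitting the time integral at $t=|x|^{\alpha}$,
\begin{equation*}
\mathbb{G}_{-\Operator}(x)=\int_0^\infty\mathbb{H}_{-\Operator}(x,t)\dd t\lesssim\int_0^{|x|^{\alpha}}\frac{t}{|x|^{N+\alpha}}\dd t+\int_{|x|^{\alpha}}^{\infty}t^{-\frac{N}{\alpha}}\dd t=\Big(\frac{1}{2}+\frac{\alpha}{N-\alpha}\Big)|x|^{-(N-\alpha)},
\end{equation*}
which is exactly the Green function bound discussed above; note that the on-diagonal bound alone only controls the large-time part of the integral.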
See Figures \ref{fig:ImplicationInLinearCase} and \ref{fig:ImplicationInNonlinearCase} for the various connections. The resolvent approach also offers further interesting insight. Since $$ \mathbb{G}_{I-\Operator}(x)=\int_0^\infty\mathbb{H}_{I-\Operator}(x,t)\dd t=\int_0^\infty\textup{e}^{-t}\mathbb{H}_{-\Operator}(x,t)\dd t, $$ even poor on-diagonal heat kernel bounds for $\mathbb{H}_{-\Operator}$ will give $\mathbb{G}_{I-\Operator}\in L^p(\R^N)$. This has at least two consequences: (i) Such estimates for $\mathbb{H}_{-\Operator}$ imply both Green function bounds and also GNS inequalities, which furthermore imply that solutions of \eqref{GPME} (with $m>1$) are bounded whenever the Green function method and/or Moser iteration go through. (ii) If the operator is such that solutions of \eqref{GPME} with $m=1$ are bounded, then also solutions of \eqref{GPME} with $m>1$ are bounded (Theorem \ref{thm:OverviewBoundedness}). The last item corresponds to the ``usual impression'' in the PDE community, but again we were not able to find a good reference for such a statement. The first item provides a clear connection between the Green function method and the Moser iteration, but there are some rather simple on-diagonal bounds for which the algebra of the Moser iteration is hard to work out, while the Green function approach is more straightforward. Consider for example $\mathbb{H}_{-\Operator}(x,t)\lesssim t^{-N/\alpha}\textup{e}^t$ which corresponds to the L\'evy operator $\Levy^\mu$ with $\mu$ satisfying \eqref{eq:IntroBestSmoothingExample}. It is clear that the linear case has the estimate $ \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-N/\alpha}\textup{e}^t\|u_0\|_{L^1(\R^N)}$, for a.e. $t>0$, while the nonlinear case, which is less clear a priori, is in fact easily handled with the Green function method. We have then reached our final task: $$ \textit{Can the nonlinear case provide bounded solutions in cases when the linear cannot?} $$ The question has parallels to other equations for which regularizing effects only happen when the nonlinearity is strong enough. Take e.g. the scalar conservation law $\dell_tu+\diver[f(u)]=0$. If $f(r)=r$, we are in the setting of the transport equation, and the solutions are as smooth as the initial data. Hence, the operator itself is not able to provide smoothing estimates. In the mentioned case, $f$ needs to be so-called genuinely nonlinear to provide regularizing effects. A sufficient condition is $f''(r)>0$ when $N=1$, and $f:\R\to\R^N$ defined as $f(r)=(r^2/2,r^3/3,\ldots,r^{N+1}/(N+1))$ when $N>1$. $L^1$--$L^\infty$-smoothing can then be found in \cite{SeSi19}, while other regularizing properties can be found in e.g. \cite{CrOtWe08}. In this context, we also mention \cite{AbBe98} which treats e.g. $\dell_tu+\diver[f(u)]-\Delta[u^m]=0$. Under some conditions on $f$, it is proven that properties like boundedness hold whenever they hold for $\dell_tu-\Delta[u^m]=0$. We found the answer to the above question by looking at operators which were too weak to provide boundedness estimates by themselves: the family $-\Operator=I-J\ast$, mentioned earlier. Basically, the porous medium nonlinearity is so strong that we were even able to prove that solutions of \eqref{GPME} with those operators are bounded as in \eqref{eq:IntroResolventBoundednessEstimate}. Theorem \ref{thm:L1ToLinfinitySmoothing0} provides the rigorous statement, and what is interesting to note is that the proof resembles the Green function method based on the resolvent operator.
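For readers who wish to experiment with the $0$-order case numerically, the following is a minimal sketch (not part of the paper): an explicit Euler discretization of $\dell_tu=J\ast u^m-u^m$ on a periodic one-dimensional grid, with an illustrative choice of $J$, initial datum, and parameters. No claim is made that it reproduces the estimates above; it is only meant as a toy experiment.
\begin{verbatim}
# Toy sketch: explicit Euler for du/dt = J*u^m - u^m on a periodic 1-D grid.
# The kernel J, the initial datum, and all parameters are illustrative only.
import numpy as np

def step(u, J_hat, m, dt, dx):
    """One Euler step; the periodic convolution J*u^m is computed via the FFT."""
    w = u**m
    conv = np.real(np.fft.ifft(np.fft.fft(w) * J_hat)) * dx
    return u + dt * (conv - w)

def run(n=256, L=20.0, m=2, dt=1e-3, T=1.0):
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = L / n
    J = 0.5 * (np.abs(x) <= 1.0)             # J >= 0 with ||J||_{L^1} = 1
    J_hat = np.fft.fft(np.fft.ifftshift(J))  # discrete Fourier symbol of J
    u = np.maximum(1.0 - np.abs(x), 0.0)     # nonnegative, integrable initial datum
    for _ in range(int(T / dt)):
        u = step(u, J_hat, m, dt, dx)
    return x, u, dx

if __name__ == "__main__":
    x, u, dx = run()
    print("mass:", u.sum() * dx, "sup norm:", u.max())
\end{verbatim}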
\subsubsection*{Notation} Derivatives are denoted by $'$, $\frac{\dd}{\dd t}$, and $\dell_{x_i}$. We use standard notation for $L^p$, $W^{p,q}$, and $C_\textup{b}$. Moreover, $C_\textup{c}^\infty$ is the space of smooth functions with compact support, $C_\textup{b}^\infty$ the space of smooth functions with bounded derivatives of all orders, and $C([0,T];L_\textup{loc}^p(\R^N))$ the space of measurable functions $\psi:[0,T]\to L_\textup{loc}^p(\R^N)$ such that $\psi(t)\in L_\textup{loc}^p(\R^N)$ for all $t\in[0,T]$, $\sup_{t\in[0,T]}\|\psi(t)\|_{L^p(K)}<\infty$, and $\|\psi(t)-\psi(s)\|_{L^p(K)}\to0$ when $t\to s$ for all compact $K\subset \R^N$ and $t,s\in [0,T]$. In a similar way we also define $C([0,T];L^p(\R^N))$. Note that the notion of $\R^N\times(0,T)\ni(x,t)\mapsto \psi(x,t)\in C([0,T];L_\textup{loc}^p(\R^N))$ is a subtle one. In fact, we mean that $\psi$ has an a.e.-version which is continuous $[0,T]\to L^p(\R^N)$. Let $f,g$ be positive functions. The notation $f\lesssim g$ or $f\gtrsim g$ translates to $f\leq Cg$ or $f\geq Cg$ for some constant $C>0$. Hence, $f\eqsim g$ is exactly that $f\lesssim g$ and $f\gtrsim g$ hold simultaneously. For $\alpha\in(0,2]$ and $p\in[1,\infty)$, the quantity $(\alpha p+N(m-1))^{-1}$ will either be denoted by $\theta_p$ or $\theta_\alpha$, when there is no ambiguity. Finally, the following Young inequality is repeatedly used throughout the paper: \begin{equation}\label{eq:Young} ab\leq \frac{1}{\vartheta}a^\vartheta+\frac{\vartheta-1}{\vartheta}b^{\frac{\vartheta}{\vartheta-1}},\qquad\text{where $a,b>0$ and $\vartheta>1$.} \end{equation} \section{Assumptions and weak dual solutions} \label{sec:AssGreenFunction} The spatial dimension is fixed to be $N\geq 3$ \footnote{The cases $N=1$ and $N=2$ are different and could fall out of our general setting. An example is the fractional Laplacian, where the condition $N>\alpha$ plays an essential role in the form of the Green function. In this case we could consider $N\geq 1$, under the extra condition $\alpha\in(0,1)$. Also, the Green function of the standard Laplacian is sign-changing when $N=2$, and it falls out of our setting as it fails to satisfy assumption \eqref{Gas} below. Since we are dealing with Sobolev-type inequalities and their connection with smoothing effects, it is worth recalling that those inequalities tend to be different in dimension 1 and 2: For instance, functions of $H^1(\mathbb{R})$ are automatically bounded.} , and the assumptions on the data $(u_0,m)$ are: \begin{align} &0\leq u_0\in L^1(\R^N).\footnotemark \tag{$\textup{H}_{u_0}$}& \label{u_0as}\\ &\text{The nonlinearity is $r\mapsto r^m$ for some fixed $m>1$}. \tag{$\textup{H}_m$}& \label{phias} \end{align} \footnotetext{For the purpose of boundedness results, there is no loss of generality in assuming nonnegative initial data. First, for sign-changing solutions, the nonlinearity $u^m$ has to be replaced by $|u|^{m-1}u$. As a consequence, $-u$ is a solution of (GPME) whenever $u$ is. Second, consider the sign-changing solution $u$ with initial data $u_0$, and also the two other nonnegative solutions $u^+$ and $u^-$ corresponding respectively to the initial data $u_0^+=\max\{u_0,0\}$ and $u_0^-=-\min\{u_0,0\}$. By the comparison principle, $u_0\leq u_0^+$ implies $u\leq u^+$ and $-u_0\leq u_0^-$ implies $-u\leq u^-$. We can combine the inequalities to obtain $-u^-\leq u\leq u^+$, that is $|u|\leq \max\{u^+,u^-\} \le u^++u^-$. 
Also, being nonnegative, $u^+$ and $u^-$ satisfy (some form of) smoothing effect estimate, that we can sum up to obtain the same estimate for $|u|$, since $u_0^+$ and $u_0^-$ have disjoint support.} We will make repeated use of the Green functions (or fundamental solutions or potential kernels) $\mathbb{G}_{-\Operator}^{x_0}$ and $\mathbb{G}_{I-\Operator}^{x_0}$ of the nonnegative operator $-\Operator$ and the positive operator $I-\Operator$. A crucial assumption throughout the paper is therefore: \begin{align} \label{Gas} \tag{$\textup{H}_\mathbb{G}$} &\text{For the operator $A$, there exists a function $\mathbb{G}_{A}^{x_0}\in L_\textup{loc}^1(\R^N)$ such that:}\nonumber\\ &\text{$0\leq\mathbb{G}_{A}^{x_0}=\mathbb{G}_{A}^0(\cdot-x_0)=\mathbb{G}_{A}^0(x_0-\cdot)$ a.e. in $\R^N$ and $A[\mathbb{G}_{A}^{x_0}]=\delta_{x_0}$ in $\mathcal{D}'(\R^N)$.}\nonumber \end{align} \begin{remark} The assumption $\mathbb{G}_{A}^{x_0}=\mathbb{G}_{A}^0(\cdot-x_0)$ (possibly) excludes $x$-dependent operators. To include $x$-dependent operators, one would instead need $\mathbb{G}_{A}^{x_0}=\mathbb{G}_{A}^0(\cdot,x_0)$ and $\mathbb{G}_{A}^0(\cdot,x_0)$ continuous in $\R^N\setminus\{x_0\}$. In this case, $A^{-1}[f]$ cannot be written as a convolution, but other than that, the proofs go through as before. \end{remark} Appendix \ref{sec:InverseOfLinearmAccretiveDirichlet} provides a guide for checking \eqref{Gas} for specific operators. Let us just mention that when $A$ is of the form \eqref{def:LevyOperators} with a measure $\mu$ satisfying \eqref{muas}, then $\mathbb{G}_{A}^{x_0}$ satisfy the above under (possibly) some additional properties on the heat kernel associated with $A$. Moreover, for each such operator $A$, we have $A^{-1}$ defined as $$ A^{-1}[f](y):=\int_{\R^N}\mathbb{G}_{A}^{y}f=\int_{\R^N}\mathbb{G}_{A}^{0}(\cdot-y)f=\mathbb{G}_{A}^{0}\ast f(y)=\mathbb{G}_{A}^{y}\ast f, $$ whenever that integral is convergent. The Green functions that will be used in this paper satisfy (with $C_p,K_1,K_2,K_3, C_1>0$ all independent of $x_0$) one of the following additional assumptions: \begin{align} &\label{G_1}\tag{$\textup{G}_{1}$} \text{For all $R>0$, some $x_0\in\R^N$, and some $\alpha\in(0,2]$,}\nonumber\\ &\begin{cases} \int_{B_R(x_0)}\mathbb{G}_{-\mathfrak{L}}^{x_0}(x)\dd x\leq K_1 R^\alpha, &\\ \text{and for a.e. $x\in\R^N\setminus B_R(x_0)$, }\mathbb{G}_{-\mathfrak{L}}^{x_0}(x)\leq K_2 R^{-(N-\alpha)}. & \end{cases}\nonumber\\ &\label{G_1'}\tag{$\textup{G}_{1}'$} \text{For all $R>0$, some $x_0\in\R^N$, and some $\alpha\in(0,2]$,}\nonumber\\ &\begin{cases} \int_{B_R(x_0)}\mathbb{G}_{-\mathfrak{L}}^{x_0}(x)\dd x\leq K_1 R^\alpha, &\\ \text{and for a.e. $x\in\R^N\setminus B_R(x_0)$, }\mathbb{G}_{-\mathfrak{L}}^{x_0}(x)\leq \max\{K_3,K_2R^{-(N-\alpha)}\}.& \end{cases}\nonumber\\ &\label{G_2}\tag{$\textup{G}_{2}$} \text{For some $x_0\in \R^N$,}\nonumber\\ &\|\mathbb{G}_{-\mathfrak{L}}^{x_0}\|_{L^1(\R^N)}=\|\mathbb{G}_{-\mathfrak{L}}^{0}\|_{L^1(\R^N)}\leq C_{1}<\infty.\nonumber\\ &\label{G_3}\tag{$\textup{G}_{3}$} \text{For some $x_0\in \R^N$ and some $p\in(1,\infty)$,}\nonumber\\ &\|\mathbb{G}_{I-\mathfrak{L}}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I-\mathfrak{L}}^{0}\|_{L^p(\R^N)}\leq C_{p}<\infty.\nonumber \end{align} \begin{remark}\label{rem:G_3p=1} \begin{enumerate}[{\rm (a)}] \item Note that there is no ambiguity in assumption \eqref{G_1'}. Indeed, we cannot consider Green functions which are merely bounded around $x_0$ since this would contradict the integrability condition. 
\item We can view assumption \eqref{G_3} in two ways: (i) We think of $-\Operator\mapsto I-\Operator$ in \eqref{GPME}, i.e., $c=1$ in \eqref{def:LevyOperators}. (ii) We think of $-\Operator$ in \eqref{GPME}, but we want to use the Green function of the resolvent of that operator, i.e., $c=0$ in \eqref{def:LevyOperators}. In both cases, if $-\Operator$ is such that the corresponding heat equation gives $L^1$-decay, then $\|\mathbb{G}_{I-\Operator}^{0}\|_{L^1(\R^N)}\leq 1$ (see Lemma \ref{lem:GreenResolventIntegrable}). Hence, if we consider item (i), we are actually in the case \eqref{G_2}. \item For now, we just remark that the fractional Laplacian/Laplacian $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2]$ satisfy \eqref{Gas} and \eqref{G_1}--\eqref{G_3}, while $-\Operator$ of the full form \eqref{def:LevyOperators} (with $c>0$) satisfies \eqref{Gas} and \eqref{G_2}. Other important examples can be found in Section \ref{sec:GreenAndHeat}, where heat kernel bounds and Fourier methods are used to obtain Green function bounds. \end{enumerate} \end{remark} If we apply the inverse $(-\Operator)^{-1}$ on each side of the PDE in \eqref{GPME}, we get $$ 0=(-\Operator)^{-1}[\dell_tu]+(-\Operator)^{-1}[(-\Operator)[u^m]]=\dell_t\big(\mathbb{G}_{-\Operator}^{x_0}\ast_xu)+u^m. $$ We thus define a suitable class of solutions as the following: \begin{definition}[Weak dual solution]\label{def:WeakDualSolution} We say that a nonnegative measurable function $u$ is a \emph{weak dual solution} of \eqref{GPME}~if: \begin{enumerate}[{\rm (i)}] \item $u\in C([0,T]; L^1(\R^N))$ and $u^m\in L^1((0,T);L_\textup{loc}^1(\R^N))$. \item For a.e. $0<\tau_1\leq\tau_2\leq T$, and all $\psi\in C_\textup{c}^1([\tau_1,\tau_2];L_\textup{c}^\infty(\R^N))$, \begin{equation}\label{def:eqWeakDualSolution} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{\R^N}\big((-\Operator)^{-1}[u]\dell_t\psi-u^m\psi\big)\dd x\dd t\\ &=\int_{\R^N}(-\Operator)^{-1}[u(\cdot,\tau_2)](x)\psi(x,\tau_2)\dd x-\int_{\R^N}(-\Operator)^{-1}[u(\cdot,\tau_1)](x)\psi(x,\tau_1)\dd x. \end{split} \end{equation} \item $u(\cdot,0)=u_0$ a.e. in $\R^N$. \end{enumerate} \end{definition} \begin{remark}\label{rem:WeakDualResolvent} \begin{enumerate}[{\rm (a)}] \item We need to argue that $(-\Operator)^{-1}[u]\in C([0,T];L_\textup{loc}^1(\R^N))$ in order to make sense of the above definition. By using \eqref{G_1} and \eqref{G_1'}, we have $$ (-\Operator)^{-1}[\mathbf{1}_{B_r(x_0)}](x)=\int_{B_r(x_0)}\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\leq C $$ which implies that $$ \int_{\R^N}(-\Operator)^{-1}[u(\cdot,t)](x)\mathbf{1}_{B_r(x_0)}(x)\dd x\leq C\|u(\cdot,t)\|_{L^1(\R^N)} $$ for all $r>0$ and all $x_0\in\R^N$. This makes us able to complete the argument as in Remark 2.1 in \cite{BeBoGrMu21}. In the case of \eqref{G_2}, we get the stronger $$ \int_{\R^N}(-\Operator)^{-1}[u(\cdot,t)](x)\dd x\leq \|\mathbb{G}_{-\Operator}^{x_0}\|_{L^1(\R^N)}\|u(\cdot,t)\|_{L^1(\R^N)}, $$ and hence, $(-\Operator)^{-1}[u]\in C([0,T];L^1(\R^N))$. \item Later, we will also use the the weak dual formulation for $$ \dell_tu+(I-\Operator)[u^m]=u^m \qquad\Longleftrightarrow\qquad \dell_tu-\Operator[u^m]=0. $$ Part (ii) of the above definition then looks like \begin{equation*} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{\R^N}\big((I-\Operator)^{-1}[u]\dell_t\psi-u^m\psi+u^m(I-\Operator)^{-1}[\psi]\big)\dd x\dd t\\ &=\int_{\R^N}(I-\Operator)^{-1}[u(\cdot,\tau_2)](x)\psi(x,\tau_2)\dd x-\int_{\R^N}(I-\Operator)^{-1}[u(\cdot,\tau_1)](x)\psi(x,\tau_1)\dd x. 
\end{split} \end{equation*} We again need $(I-\Operator)^{-1}[u]\in C([0,T];L_\textup{loc}^1(\R^N))$. Since $$ \int_{\R^N}(I-\Operator)^{-1}[u(\cdot,t)](x)\dd x\leq \|\mathbb{G}_{I-\Operator}^0\|_{L^1(\R^N)}\|u(\cdot,t)\|_{L^1(\R^N)}, $$ which is finite by Remark \ref{rem:G_3p=1}, we get the stronger $(I-\Operator)^{-1}[u]\in C([0,T];L^1(\R^N))$. \item Regarding uniqueness and very weak solutions. In many cases, weak dual solutions are very weak in the sense of \cite{DTEnJa17a, DTEnJa17b}. For instance this happens when $C_\textup{c}^\infty\subset \textup{dom}(-\Operator)$. A simple, and yet technical, proof follows by approximating $\Operator[\phi]$ by a sequence $\psi_n$ of admissible test functions in \eqref{def:eqWeakDualSolution}. As a consequence, we can use the results of \cite{DTEnJa17a, DTEnJa17b} to conclude existence and uniqueness of weak dual solutions in $L^1(\R^N)$ since we will show that they are a priori bounded. A general existence result for our purposes can be found in Proposition \ref{prop:APriori}. \end{enumerate} \end{remark} \section{Statements of main boundedness results} We present some explicit estimates regarding instantaneous boundedness which rely on the assumptions \eqref{G_1}--\eqref{G_3}. All of our results originate from what is often referred to as fundamental upper bounds, see Theorem \ref{thm:FundamentalOpRe} in Section \ref{sec:Proofs}. These bounds provides an ``almost representation formula'' similar to the one given by convolution in the linear case ($m=1$). \begin{figure}[h!] \centering \begin{tikzpicture} \color{black} \node[draw, minimum width=2cm, minimum height=1.2cm, ] (l1) at (0,0){$\sum_{i=1}^N(-\partial_{x_ix_i}^2)^{\frac{\alpha}{2}}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, below=0.5cm of l1 ] (l2){$(-\Delta)^{\frac{\alpha}{2}}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, below=0.5cm of l2 ] (l3){$\begin{array}{cc}\Levy^\mu\text{ such that}\\|z|^{-(N+\alpha)}\lesssim\frac{\dd \mu}{\dd z}\lesssim|z|^{-(N+\alpha)}\end{array}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=2cm of l2 ] (c1){$\|u(\cdot,t)\|_{L^\infty}\lesssim t^{-N\theta_{\alpha}}\|u_0\|_{L^1}^{\alpha\theta_\alpha}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=1cm of c1 ] (r1){$\text{Sobolev}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (c1.north east)!0.5!(c1.east) $ ) -- ( $ (r1.north west)!0.5!(r1.west) $ ) node[midway,above]{$$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r1.south west)!0.5!(r1.west) $ ) -- ( $ (c1.south east)!0.5!(c1.east) $ ) node[midway,below]{$$}; \draw[-{Implies},double,line width=0.7pt] (l1.south east) |- (c1.west) node[midway,below]{$$}; \draw[-{Implies},double,line width=0.7pt] (l2.east) -- (c1.west) node[midway,below]{$$}; \draw[-{Implies},double,line width=0.7pt] (l3.north east) |- (c1.west) node[midway,below]{$$}; \normalcolor \end{tikzpicture} \caption{Operators that fall into the setting of Theorem \ref{thm:L1ToLinfinitySmoothing2}, see Section \ref{sec:GreenAndHeat}. Note that the operator $\sum_{i=1}^N(-\partial_{x_ix_i}^2)^{\frac{\alpha}{2}}$ actually enjoys Theorem \ref{thm:L1ToLinfinitySmoothing}, but after a scaling argument, we can deduce the better estimate above (Remark \ref{ref:GreenAnisotropicLaplacians}). 
According to Section \ref{sec:SmoothingAndGNS} they should furthermore enjoy a Sobolev inequality.} \label{fig:ImplicationOfOperatorsAndSobolev} \end{figure} \subsection{\texorpdfstring{$L^1$}{L1}--\texorpdfstring{$L^\infty$}{Linfty}-smoothing} We start with the assumptions \eqref{G_1} and \eqref{G_1'} which impose the most structure. In effect, we deduce well-known results. \begin{theorem}[$L^1$--$L^\infty$-smoothing]\label{thm:L1ToLinfinitySmoothing2} Assume \eqref{u_0as}--\eqref{Gas}, and let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0$. \begin{enumerate}[{\rm (a)}] \item If \eqref{G_1} holds, then $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq \frac{C(m,\alpha,N)}{t^{N\theta_{\alpha}}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}\qquad\text{for a.e. $t>0$}, $$ where $\theta_{\alpha}:=(\alpha+N(m-1))^{-1}$, $C(m):=2^{\frac{m}{m-1}}$, and $$ C(m,\alpha,N):=2^{\frac{1}{m}}C(m)^{N\theta_{\alpha}}\Big(\frac{m}{m-1}\Big)^{\alpha \theta_{\alpha}}K_1^{(N-\alpha)\theta_{\alpha}}K_2^{\alpha \theta_{\alpha}}. $$ \item If \eqref{G_1'} holds, then $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq \begin{cases} \frac{C(m,\alpha,N)}{t^{N\theta_{\alpha}}}\|u_{0}\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}} \qquad\qquad&\text{if $0<t\leq t_{0}$ a.e.,}\\ \frac{\tilde{C}(m)}{t^{\frac{1}{m}}}\|u_{0}\|_{L^1(\R^N)}^{\frac{1}{m}} \qquad\qquad&\text{if $t>t_{0}$ a.e.,} \end{cases} $$ where $\tilde{C}(m):=(2m(m-1)^{-1}C(m)K_3)^{1/m}$ and $$ t_{0}:= 2^m\Big(\frac{m}{m-1}\Big)^{-(m-1)}C(m)K_1^mK_2^{\frac{\alpha m}{m-1}}K_3^{-(\frac{\alpha m}{m-1}+(m-1))}\|u_{0}\|_{L^1(\R^N)}^{-(m-1)}. $$ \end{enumerate} \end{theorem} \begin{remark}\label{rem:L1ToLinfinitySmoothing2} \begin{enumerate}[{\rm (a)}] \item In this case, we can also get local smoothing estimates, see e.g. Proposition \ref{LocalpreCollectingResults2}. \item Note that the estimate in Theorem \ref{thm:L1ToLinfinitySmoothing2}(a) is invariant under time- {\em and} space-scaling. Consider e.g. \eqref{GPME} with $-\Operator=(-\Delta)^{\frac{\alpha}{2}}$. If $u$ solves \eqref{GPME}, then $$ u_{\kappa,\Xi,\Lambda}(x,t):=\kappa u(\Xi x,\Lambda t) \qquad\text{for all $\kappa,\Xi,\Lambda>0$} $$ also solves \eqref{GPME} as long as $\kappa^{m-1}\Xi^\alpha=\Lambda$. By inserting $u_{\kappa,\Xi,\Lambda}$ into Theorem \ref{thm:L1ToLinfinitySmoothing2}(b), we see that the estimate remains the same since $\|u_{\kappa,\Xi,\Lambda}(\cdot,0)\|_{L^1(\R^N)}= \kappa \Xi^{-N}\|u(\cdot,0)\|_{L^1(\R^N)}$. \item In a similar way, the second part of the estimate in Theorem \ref{thm:L1ToLinfinitySmoothing2}(b) is invariant under time-scaling (see Lemma \ref{lem:ScalingNonlinearity} below). Even if that estimate might seem a bit unusual, it has appeared in the literature before, see e.g. Theorem 2.7 in \cite{BeBoGrMu21}. \item Observe also that the constant in front of both estimates blows up as $m\to1^+$. \item As expected, $$ \Big(\frac{1}{t}\Big)^{N\theta_{\alpha}}\leq \Big(\frac{1}{t}\Big)^{\frac{1}{m}} \qquad\text{for a.e. $t> t_0$} $$ since the first estimate requires more assumptions at infinity. \end{enumerate} \end{remark} We also include smoothing effects when \eqref{G_1} holds simultaneously for different $\alpha\in(0,2]$. \begin{theorem}[$L^1$--$L^\infty$-smoothing]\label{thm:L1ToLinfinitySmoothing3} Assume \eqref{u_0as}--\eqref{Gas}, and let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0$. 
If \eqref{G_1} holds with $\alpha\in(0,2)$ when $0<R\leq1$ and with $\alpha=2$ when $R>1$, then: $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq \tilde{C}(m)\begin{cases} t^{-N\theta_{\alpha}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}&\qquad\text{if $0<t\leq \|u_0\|_{L^1(\R^N)}^{-(m-1)}$ a.e.,}\\ t^{-N\theta_{2}}\|u_0\|_{L^1(\R^N)}^{2\theta_{2}}&\qquad\text{if $t> \|u_0\|_{L^1(\R^N)}^{-(m-1)}$ a.e.,} \end{cases} $$ where $\theta_{\alpha}=(\alpha+N(m-1))^{-1}$ (defined for $\alpha\in (0,2]$) and $$ \tilde{C}(m):=2\Big((C(m)K_1)^{\frac{m}{m-1}}+\frac{m}{m-1}C(m)K_2\Big)^{\frac{1}{m}}. $$ \end{theorem} \begin{remark} \begin{enumerate}[{\rm (a)}] \item Note that $t=\|u_0\|_{L^1(\R^N)}^{-(m-1)}$ gives the bound $\tilde{C}(m)\|u_0\|_{L^1(\R^N)}$ in both cases. \item We can of course combine other behaviours in a similar way, and as a rule of thumb one can say that $0<R\leq 1$ gives small time behaviour while $R>1$ gives large time behaviour. \end{enumerate} \end{remark} \begin{figure}[h!] \centering \begin{tikzpicture} \color{black} \node[draw, minimum width=2cm, minimum height=1.2cm, ] (l1) at (0,0){$I-J\ast$}; \node[draw, minimum width=2cm, minimum height=1.2cm, below=2.0cm of l1 ] (l2){$\begin{array}{ccc}\Levy^\mu\text{ such that}\\|z|^{-(N+\alpha)}\mathbf{1}_{|z|\leq1}\lesssim\frac{\dd \mu}{\dd z}\lesssim|z|^{-(N+\alpha)}\mathbf{1}_{|z|\leq1}\\\quad\quad\quad\,\,\,\,\,\frac{\dd \mu}{\dd z}\lesssim \mathbf{1}_{|z|>1}\end{array}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, ] (c1) at (4,-1.6){$\begin{array}{ll}\|u(\cdot,t)\|_{L^\infty}\\\lesssim t^{-1/(m-1)}+\|u_0\|_{L^1}\end{array}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=1cm of c1 ] (r1){$\begin{array}{cc}\text{Weaker than}\\\text{Sobolev}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] (c1.east) -- (r1.west) node[midway,above]{$$}; \draw[-{Implies},double,line width=0.7pt] (l1.south) |- (c1.west) node[midway,below]{$$}; \draw[-{Implies},double,line width=0.7pt] (l2.north) |- (c1.west) node[midway,below]{$$}; \normalcolor \end{tikzpicture} \caption{Operators that fall into the setting of Theorem \ref{thm:L1ToLinfinitySmoothing}, see Section \ref{sec:GreenAndHeat}. It is clear that not all of these operators enjoy a Sobolev inequality since e.g. $I-J\ast$ does not produce bounded solutions in the linear case. The general statement is therefore that they enjoy a functional inequality weaker than the Sobolev.} \label{fig:ImplicationOfOperatorsAndNonSobolev} \end{figure} When we use the test function $\mathbb{G}_{I-\Operator}^{x_0}$, we lack ``structure'' in the sense that we do not assume, a priori, any precise behaviour of the Green function at zero nor at infinity. Still, we arrive at: \begin{theorem}[$L^1$--$L^\infty$-smoothing]\label{thm:L1ToLinfinitySmoothing} Assume \eqref{u_0as}--\eqref{Gas}, and let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0$. If \eqref{G_3} holds, then \begin{equation*} \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2(m-1)^{-\frac{1}{m-1}}t^{-\frac{1}{m-1}} &\qquad\text{if $0<t\leq t_0$ a.e.,}\\ 2C(m)^{1-\frac{1}{p}}C_p^{1-\frac{1}{p}}\|u_0\|_{L^1(\R^N)} &\qquad\text{if $t>t_0$ a.e.,} \end{cases} \end{equation*} where $$ t_0:=\frac{1}{m-1}\Big(C(m)^{1-\frac{1}{p}}C_p^{1-\frac{1}{p}}\Big)^{-(m-1)}\|u_0\|_{L^1(\R^N)}^{-(m-1)} $$ and $C(m):=2(1+m)^{\frac{m}{m-1}}$. \end{theorem} \begin{remark} \begin{enumerate}[{\rm (a)}] \item The time-scaling (see Lemma \ref{lem:ScalingNonlinearity} below) ensures that the above estimate is of an invariant form. 
\item Due to the ``linear structure'' of the fundamental upper bound in Theorem \ref{thm:FundamentalOpRe}(b), we cannot improve Theorem \ref{thm:L1ToLinfinitySmoothing} even if we strengthen assumption \eqref{G_3} in the spirit of \eqref{G_1} or \eqref{G_1'}. \item We would also like to refer the reader to \cite{Sac85, GrMePu21}. The settings there are, respectively, bounded domains and Riemannian manifolds, but (some of) the results have a flavour of the above estimate. \end{enumerate} \end{remark} \subsection{Absolute bounds} We also include the especially well-behaved case when $\mathbb{G}_{-\Operator}^{x_0}$ is integrable. \begin{theorem}[Absolute bounds]\label{thm:AbsBounds} Assume \eqref{u_0as}--\eqref{Gas}, and let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0$. If \eqref{G_2} holds, then $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq \tilde{C}(m)t^{-1/(m-1)}\qquad\text{for a.e. $t>0$}, $$ where $\tilde{C}(m):=(2^{\frac{m}{m-1}}C_1)^{1/(m-1)}$. \end{theorem} \begin{remark} The above estimate immediately enjoys time-scaling (see Lemma \ref{lem:ScalingNonlinearity} below). \end{remark} \begin{figure}[h!] \centering \begin{tikzpicture} \color{black} \node[draw, minimum width=2cm, minimum height=1.2cm, ] (l1) at (0,0){$I-\Operator$}; \node[draw, minimum width=2cm, minimum height=1.2cm, below=2.0cm of l1 ] (l2){$(I-\Delta)^{\frac{\alpha}{2}}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, ] (c1) at (4,-1.6){$\|u(\cdot,t)\|_{L^\infty}\lesssim t^{-1/(m-1)}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=1cm of c1 ] (r1){$\text{Poincar\'e}$}; \draw[-{Implies},double,line width=0.7pt] (c1.east) -- (r1.west) node[midway,above]{$$}; \draw[-{Implies},double,line width=0.7pt] (l1.south) |- (c1.west) node[midway,below]{$$}; \draw[-{Implies},double,line width=0.7pt] (l2.north) |- (c1.west) node[midway,below]{$$}; \normalcolor \end{tikzpicture} \caption{Operators that fall into the setting of Theorem \ref{thm:AbsBounds}, see Section \ref{sec:GreenAndHeat}. According to Section \ref{sec:SmoothingAndGNS} they should furthermore enjoy a Poincar\'e inequality. Note that the Poincar\'e inequality is not strong enough to imply absolute bounds (only $L^q$--$L^p$-smoothing).} \label{fig:ImplicationOfOperatorsAndWeaker} \end{figure} \subsection{Linear implies nonlinear} Since on-diagonal heat kernel bounds give $L^1$--$L^\infty$-smoothing in the linear case, we are able to transfer such estimates to the nonlinear setting \eqref{GPME} by using $\mathbb{G}_{I-\Operator}$ as a test function, see the proof in Section \ref{sec:ProofsLinearImpliesNonlinear}. \begin{theorem}[Linear implies nonlinear]\label{thm:OverviewBoundedness} Assume $p\in(1,\infty)$ and \eqref{u_0as}--\eqref{Gas}, and let $u$ be a weak dual solution of \eqref{GPME} with $m\geq1$ and initial data $u_0$. If the operator $-\Operator$ is such that $\mathbb{H}_{-\Operator}^{x_0}$ satisfies $$ 0\leq\mathbb{H}_{-\Operator}^{x_0}(x,t)\leq C(t)\qquad\text{with}\qquad \int_0^\infty\e^{-t}C(t)^{\frac{p-1}{p}}\dd t <\infty, $$ then $u$ is bounded on $\R^N\times[\tau,\infty)$, for a.e. $\tau>0$. \end{theorem} \begin{remark}\label{rem.thm:OverviewBoundedness} \begin{enumerate}[{\rm (a)}] \item It is not possible to obtain that nonlinear implies linear: in Section \ref{sec:Boundedness0Order} we construct an operator for which a linear boundedness estimate does not hold, but a nonlinear one does. \item In the case of e.g.
$-\Operator=-\Delta$, the on-diagonal heat kernel bound gives $$ |u(x,t)|\leq \int_{\R^N}|u_0(x-y)|\mathbb{H}_{-\Delta}^{x_0}(y,t)\dd y\lesssim t^{-N/2}\|u_0\|_{L^1(\R^N)}. $$ \item By e.g. \cite[Theorem 8.16]{LiLo01}, the $L^1$--$L^\infty$-smoothing for linear operators is characterized by the Nash inequality when $C(t)\eqsim t^{-\gamma/(1-\gamma)}$ for some $\gamma\in(0,1)$. Hence, $p$ needs to be restricted to $(1,\gamma/(2\gamma-1))$. That is, given $-\Operator$ for which $C(t)$ is power-like, we can find $p$ in that interval, and then the result transfers to the nonlinear setting ($m>1$) in the sense that $u$ solving \eqref{GPME} is bounded. See also the discussion in Section \ref{sec:TheWell-KnownLinearCase}. \end{enumerate} \end{remark} \section{Proofs in the Green function setting} \label{sec:Proofs} Starting from the fundamental upper bound, already mentioned in the introduction, \begin{equation*}u^m(x_0,t)\leq\frac{1}{(m-1)t}\mathbb{G}_{-\Operator}(\cdot-x_0)\ast_x u(\cdot,t)=\frac{1}{(m-1)t}\int_{\R^N}\mathbb{G}_{-\Operator}(x-x_0)u(x,t)\dd x, \end{equation*} one can indeed deduce $L^1$--$L^\infty$-smoothing estimates. To motivate the proofs, let us provide the formal computations assuming \eqref{G_1}. Split the integral over $\R^N$ into $B_R(x_0)$ and $\R^N\setminus B_R(x_0)$, then estimate each part: $$ u^m(x_0,t)\leq \|u(\cdot,t)\|_{L^\infty(\R^N)}\frac{1}{(m-1)t}K_1R^\alpha+\|u(\cdot,t)\|_{L^1(\R^N)}\frac{1}{(m-1)t}K_2R^{-(N-\alpha)}. $$ The Young inequality \eqref{eq:Young} with $\vartheta=m$ applied to the first term yields the bound \begin{equation*} \begin{split} \frac{1}{m}\|u(\cdot,t)\|_{L^\infty(\R^N)}^m+\frac{m-1}{m}\Big(\frac{1}{(m-1)t}K_1R^\alpha\Big)^{\frac{m}{m-1}}. \end{split} \end{equation*} By taking the supremum, with respect to $x_0\in \R^N$, on each side of the above inequality and using the $L^1$-decay of solutions, we get, for some constant $C>0$, \begin{equation*} \begin{split} \|u(\cdot,t)\|_{L^\infty(\R^N)}^m&\leq \frac{1}{2}C^m\frac{R^{\frac{\alpha m}{m-1}}}{t^{\frac{m}{m-1}}}\Big(1+\frac{t^{\frac{1}{m-1}}\|u_0\|_{L^1(\R^N)}}{R^{\frac{1}{(m-1)\theta_\alpha}}}\Big). \end{split} \end{equation*} We then have $$ R=\big(t^{\frac{1}{m-1}}\|u_0\|_{L^1(\R^N)}\big)^{(m-1)\theta_\alpha} \quad\Longrightarrow\quad \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq \frac{C}{t^{N\theta_{\alpha}}}\|u_{0}\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}. $$ \subsection{Properties of weak dual solutions} Our rigorous justification of the above computations starts with collecting some a priori results for weak dual solutions of \eqref{GPME}, which will be a consequence of the existence theory---its proof is postponed to Appendix \ref{sec:ExistenceAPriori}. Note that the proof requires the operator $-\Operator$ to be in a specific class, and that this particular class is at least one of the requirements to ensure \eqref{Gas}. A thorough discussion can be found in Appendix \ref{sec:InverseOfLinearmAccretiveDirichlet}. \begin{proposition}[Existence and a priori results]\label{prop:APriori} Assume $0\leq u_0\in (L^1\cap L^\infty)(\R^N)$, $-\Operator$ is densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$, \eqref{phias}--\eqref{Gas}, and \eqref{G_1}--\eqref{G_3}. \begin{enumerate}[{\rm (a)}] \item There exists a weak dual solution $u$ of \eqref{GPME} such that $$ 0\leq u\in (L^1\cap L^\infty)(Q_T)\cap C([0,T]; L^1(\R^N)). $$ \item Let $u,v$ be weak dual solutions of \eqref{GPME} with initial data $u_0,v_0\in (L^1\cap L^\infty)(\R^N)$.
Then: \begin{enumerate}[{\rm (i)}] \item \textup{(Comparison)} If $u_0\leq v_0$ a.e. in $\R^N$, then $u\leq v$ a.e. in $Q_T$. \item \textup{($L^p$-decay)} $\|u(\cdot,\tau_2)\|_{L^p(\R^N)}\leq \|u(\cdot,\tau_1)\|_{L^p(\R^N)}$ for all $p\in[1,\infty]$ and a.e. $0\leq \tau_1\leq \tau_2\leq T$. \end{enumerate} \end{enumerate} \end{proposition} \begin{remark}\label{rem:APrioriCasem1} \begin{enumerate}[{\rm (a)}] \item If $u_0\in L^1(\R^N)$, then Proposition \ref{prop:APriori}(b)(i)--(ii) hold also when $m=1$ by approximation, and then also for $u_0\in TV(\R^N)$. \item We provide no general uniqueness proof. However, the constructed solutions are unique by definition since they satisfy the comparison principle. \end{enumerate} \end{remark} The following scaling property holds independently of the operator $\Operator$: \begin{lemma}[Time-scaling]\label{lem:ScalingNonlinearity} Assume \eqref{phias} and $\Lambda>0$. If $(x,t)\mapsto u(x,t)$ solves \eqref{GPME} on $\R^N\times(0,T)$, then $$ (x,t)\mapsto u_{\Lambda}(x,t):=\Lambda^{\frac{1}{m-1}}u(x,\Lambda t) $$ solves \eqref{GPME} on $\R^N\times(0,T/\Lambda)$. \end{lemma} \begin{proof} Note that $$ \dell_t u_\Lambda(x,t)=\Lambda^{\frac{m}{m-1}}\dell_t u(x,\Lambda t)\quad\text{and}\quad \Operator[u_\Lambda^m(\cdot,t)](x)=\Lambda^{\frac{m}{m-1}}\Operator[u^m(\cdot,\Lambda t)](x), $$ and the proof is finished. \end{proof} Our proofs rely heavily on: \begin{proposition}[Time-monotonicity, Theorem 4 in \cite{CrPi82}]\label{prop:Monotonicity} If $u$ is a solution of \eqref{GPME} with initial data $u_0$, then the function $0<t\mapsto t^{\frac{m}{m-1}}u^m(\cdot,t)$ is nondecreasing for a.e. $x\in \R^N$. \end{proposition} \begin{remark} Let us address the issue of defining exactly what we mean by the mapping ``$0<t\mapsto t^{\frac{m}{m-1}}u^m(\cdot,t)$ is nondecreasing for a.e. $x\in \R^N$''. Of course, if functions are continuous (in space and time) this is not an issue, but a priori $u$ is merely integrable, hence some remarks are in order. We shall make systematic use of the following properties throughout the paper. Indeed, for functions $f\in L^\infty(Q_T)\subset L_\textup{loc}^1(Q_T)$, we have that: \begin{enumerate}[{\rm (i)}] \item If $(x_0,t_0)$ is a Lebesgue point for $f$, then it is a Lebesgue point for $f^m$. This follows from the fact that $f$ is essentially bounded. \item If $(x_0,t_0)$ is a Lebesgue point for $f^m$, then it is also a Lebesgue point for $f$. This easily follows from the Jensen inequality and from the fact that $m>1$, so that the power nonlinearity $f^m$ is convex. \item We can always choose a version of $f$ which is bounded at every point, and such that all points are Lebesgue points. Indeed, we know that the set of ``non-Lebesgue points'' has measure zero, as well as the set where the function is not bounded, hence we can just redefine the function at those points as: letting $Q_R(x,t)=B_R(x)\times [t-R,t+R]\subset Q_T$ we define \[ f(x,t):=\lim_{R\to 0} \frac{1}{|Q_R(x,t)|}\int_{Q_R(x,t)}f(y,\tau)\dd y \dd \tau , \] noticing that the latter integral is always finite, since $f\in L^\infty(Q_T)$. Hence in this case we have a version of $f$ for which all points are Lebesgue points. \item Moreover, if $f\in C([0,T];L^1(\R^N))$, we can choose a version such that $f:[0,T]\to L^1(\R^N)$ is a continuous mapping, so that for all $t\in[0,T]$, $t\mapsto f(\cdot,t)\in L^1(\R^N)$.
\end{enumerate} Throughout the paper we will therefore fix a version of a solution $u \in L^\infty(Q_T)\cap C([0,T];L^1(\R^N))$ to \eqref{GPME} such that all the above properties hold. These properties provide a precise meaning to the statements that we will often use in the proofs, in particular when we use a solution $u$ evaluated at a Lebesgue point $(x,t)$, or at a Lebesgue point with respect to one variable, for instance a point $(\cdot,t)$ for a.e. $t>0$ or a point $(x,\cdot)$ for a.e. $x\in\R^N$. This is clear in view of the fact that we can redefine $u \in L^\infty(Q_T)\cap C([0,T];L^1(\R^N))$ so that \emph{all} space-time points are Lebesgue points, as explained above. This will happen for instance in the proof of Theorem \ref{thm:L1ToLinfinitySmoothing}, and also in the fundamental upper bounds of Theorem \ref{thm:FundamentalOpRe} and its consequences. Sometimes, one can also extend the validity of statements that hold ``for a.e. $t>0$'' to statements that hold ``for all $t>0$''. \end{remark} Since \eqref{GPME} enjoys time-scaling, the framework provided in \cite{CrPi82} simplifies. In our setting, the proof only relies on the comparison principle. We thus include the argument, which we originally learned from Prof. Juan Luis V\'azquez (see also \cite{BeCr81b, Vaz06}). \begin{proof}[Proof of Proposition \ref{prop:Monotonicity}] For all $\Lambda\geq 1$, we have $\Lambda^{\frac{1}{m-1}}u_0\geq u_0$ a.e. in $\R^N$. Lemma \ref{lem:ScalingNonlinearity} gives that $u_\Lambda$ solves \eqref{GPME} with initial data $\Lambda^{\frac{1}{m-1}}u_0$, and then the comparison principle (Proposition \ref{prop:APriori}(b)(i)) implies $u_{\Lambda}\geq u$ a.e. in $\R^N\times(0,T/\Lambda)$ for all $\Lambda\geq1$. For any fixed $t>0$, choose $$ \Lambda:=\frac{t+h}{t}=1+\frac{h}{t}\qquad\text{for all $h\geq0$.} $$ Then $$ u(x,t)\leq u_{\Lambda}(x,t)=\Lambda^{\frac{1}{m-1}}u(x,\Lambda t)=\Big(\frac{t+h}{t}\Big)^{\frac{1}{m-1}}u(x,t+h). $$ We conclude by noting that $r\mapsto r^m$ is increasing. \end{proof} \subsection{Reduction argument} Throughout, we fix $\tau_*, T>0$ such that $0<\tau_*<T$, and let $\tau\in(\tau_*,T]$. We also consider the following sequence of approximations $\{u_{0,n}\}_{n\in\N}$ satisfying \begin{equation}\label{eq:PropApproxu_0} \begin{cases} 0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)\text{ such that}\\ u_{0,n}\to u_0\text{ in $L^1(\R^N)$ and}\\ u_{0,n}\to u_0\text{ a.e. in $\R^N$ monotonically from below as $n\to \infty$.} \end{cases} \end{equation} When we take $u_{0,n}$ as initial data in \eqref{GPME}, we denote the corresponding solutions by $u_n$, and they satisfy Proposition \ref{prop:APriori}, Lemma \ref{lem:ScalingNonlinearity} and Proposition \ref{prop:Monotonicity}. \subsection{Fundamental upper bounds} This section is devoted to proving: \begin{theorem}[Fundamental upper bounds]\label{thm:FundamentalOpRe} Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, and \eqref{Gas}. If $u_n$ is a weak dual solution of \eqref{GPME} with initial data $u_{0,n}$, then: \begin{enumerate}[{\rm (a)}] \item Under assumptions \eqref{G_1}--\eqref{G_2}, for a.e. $\tau_*>0$ and all Lebesgue points $x_0\in\R^N$, \begin{equation*}u_n^m(x_0,\tau_*)\leq C(m)\frac{1}{\tau_*}\int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x, \end{equation*} where $C(m):=2^{\frac{m}{m-1}}$. \item Under assumption \eqref{G_3}, for a.e.
$\tau_*>0$ and all Lebesgue points $x_0\in \R^N$, \begin{equation*}u_n^m(x_0,\tau_*)\leq C(m)\lambda \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x \end{equation*} where $$ C(m):=2(1+m)^{\frac{m}{m-1}} \qquad\text{and}\qquad \lambda:=\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{m-1}>\frac{1}{(m-1)\tau_*}. $$ \end{enumerate} \end{theorem} \begin{remark} \begin{enumerate}[{\rm (a)}] \item Theorem \ref{thm:FundamentalOpRe}(a) corresponds to equation (5.3) in \cite{BoVa15}. \item Theorem \ref{thm:FundamentalOpRe}(b) somehow corresponds to the mentioned equation (5.3) as well, however, the inequality has a ``linear structure'' due to the presence of $\lambda$. \end{enumerate} \end{remark} We begin by choosing the test function in the weak dual formulation. \begin{lemma}\label{lem:InverseBounded} Assume \eqref{Gas}. Then there is a constant $C$ depending on the Green function such that: \begin{enumerate}[{\rm (a)}] \item If \eqref{G_1} or \eqref{G_1'} holds, then $$ \|(-\Operator)^{-1}[\psi]\|_{L^\infty(\R^N)}\leq C(\|\psi\|_{L^\infty(\R^N)}+\|\psi\|_{L^1(\R^N)})\qquad\text{for all $\psi\in (L^1\cap L^\infty)(\R^N)$.} $$ \item If \eqref{G_2} or \eqref{G_3} holds, then, for $A=-\Operator$ or $A=I-\Operator$ respectively, $$ \|A^{-1}[\psi]\|_{L^1(\R^N)}\leq C\|\psi\|_{L^1(\R^N)}\qquad\text{for all $\psi\in L^1(\R^N)$.} $$ \end{enumerate} \end{lemma} \begin{proof} \noindent(a) To incorporate the assumptions \eqref{G_1}--\eqref{G_1'}, we split the integral over the sets $B_R(x)$ and $\R^N\setminus B_R(x)$ and change the variable $x-y\mapsto y$ to obtain \begin{equation*} \begin{split} |(-\Operator)^{-1}[\psi](x)|&\leq \int_{\R^N}\mathbb{G}_{-\Operator}^{0}(x-y)|\psi(y)|\dd y\\ &=\int_{B_R(0)}\mathbb{G}_{-\Operator}^{0}(y)|\psi(x-y)|\dd y+\int_{\R^N\setminus B_R(0)}\mathbb{G}_{-\Operator}^{0}(y)|\psi(x-y)|\dd y\\ &\leq \|\psi\|_{L^\infty(\R^N)}\int_{B_R(0)}\mathbb{G}_{-\Operator}^{0}(y)\dd y+C\|\psi\|_{L^1(\R^N)}. \end{split} \end{equation*} The bound in $L^\infty$ then follows. \medskip \noindent(b) We simply use that $\mathbb{G}_{A}^{0}\in L^1(\R^N)$, see Remark \ref{rem:G_3p=1}. \end{proof} \begin{proposition}\label{prop:ChoosingTestFunction} Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, \eqref{Gas}, and \eqref{G_1}--\eqref{G_2}. Let $0\leq \psi\in L_\textup{c}^\infty(\R^N)$ and $0\leq \Theta\in C_\textup{b}^1([\tau_*,\tau])$. If $u_n$ is a weak dual solution of \eqref{GPME} with initial data $u_{0,n}$, then, for a.e. $\tau\in(\tau_*,T]$, \begin{equation}\label{eq:TestFunctionInSpaceAndTime} \begin{split} &\int_{\tau_*}^\tau\Theta(t)\int_{\R^N}u_n^m(x,t)\psi(x)\dd x\dd t= \int_{\tau_*}^\tau\Theta'(t)\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,t)](x)\psi(x)\dd x\dd t\\ &\quad+\Theta(\tau_*)\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau_*)](x)\psi(x)\dd x-\Theta(\tau)\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau)](x)\psi(x)\dd x. \end{split} \end{equation} \end{proposition} \begin{proof} Define $\dual(x,t):=\Theta(t)\psi(x)$. Consider a sequence $\{\psi_k\}_{k\in\N}\subset C_\text{c}^\infty(\R^N\times[\tau_*,\tau])$ (i.e., a mollifying sequence) such that $\psi_k\to\dual$ and $\dell_t\psi_k\to\dell_t\dual$ a.e. as $k\to\infty$. 
By Definition \ref{def:WeakDualSolution} (with $\tau_1:=\tau_*$, $\tau_2:=\tau$, and $\psi=\psi_k$), \begin{equation*} \begin{split} &\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau)](x)\psi_k(x,\tau)\dd x\\ &=\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau_*)]\psi_k(x,\tau_*)\dd x+\int_{\tau_*}^{\tau}\int_{\R^N}\big((-\Operator)^{-1}[u_n]\dell_t\psi_k-u_n^m\psi_k\big)\dd x\dd t \end{split} \end{equation*} holds for a.e. $\tau\in(\tau_*,T]$. Since $u_n\in L^\infty(\R^N\times[\tau_*,\tau])$, $(-\Operator)^{-1}[u_n]\in C([\tau_*,\tau];L_\textup{loc}^1(\R^N))$, and $\psi_k$ is compactly supported, we can take the limit in the above equality to get the result. \end{proof} \begin{corollary}[Limit estimate 1]\label{cor:LimitEstimate1} Under the assumptions of Proposition \ref{prop:ChoosingTestFunction}, let $\psi$ approximate $\delta_{x_0}$ and choose $\Theta\equiv1$. Then \begin{equation*} \int_{\tau_*}^\tau u_n^m(x_0,t)\dd t= \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x-\int_{\R^N}u_n(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x \end{equation*} for all Lebesgue points $x_0\in \R^N$. \end{corollary} \begin{proof} Since we choose $\Theta\equiv1$, equation \eqref{eq:TestFunctionInSpaceAndTime} becomes \begin{equation*} \begin{split} &\int_{\tau_*}^\tau\int_{\R^N}u_n^m(x,t)\psi(x)\dd x\dd t\\ &=\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau_*)](x)\psi(x)\dd x-\int_{\R^N}(-\Operator)^{-1}[u_n(\cdot,\tau)](x)\psi(x)\dd x\\ &=\int_{\R^N}u_n(x,\tau_*)(-\Operator)^{-1}[\psi](x)\dd x-\int_{\R^N}u_n(x,\tau)(-\Operator)^{-1}[\psi](x)\dd x. \end{split} \end{equation*} Now, fix $x_0\in\R^N$ and choose $$ \psi_k^{(x_0)}=\frac{\mathbf{1}_{B_{1/k}(x_0)}}{|B_{1/k}(x_0)|}\in L_\textup{c}^\infty(\R^N) $$ as a test function in the above equality. Since $u_n^m(\cdot,t)\in L_\textup{loc}^1(\R^N)$, and by the definition of a Lebesgue point, $$ \int_{\tau_*}^\tau\int_{\R^N}u_n^m(x,t)\psi_k^{(x_0)}(x)\dd x\dd t\to\int_{\tau_*}^\tau u_n^m(x_0,t)\dd t\qquad\text{as $k\to\infty$.} $$ For the remaining two terms, the argument is a bit more involved, but let us start with the simplest case, in which $\mathbb{G}_{-\Operator}^{x_0}$ satisfies \eqref{G_2}. Since $\mathbb{G}_{-\Operator}^{x_0}$ is symmetric and integrable, we get \begin{equation*} \begin{split} &\bigg|\int_{\R^N}u_n(x,\tau)(-\Operator)^{-1}[\psi_k^{(x_0)}](x)\dd x-\int_{\R^N}u_n(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\bigg|\\ &=\bigg|\int_{\R^N}u_n(x,\tau)\bigg(\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}\mathbb{G}_{-\Operator}^{x}(y)\dd y-\mathbb{G}_{-\Operator}^{x_0}(x)\bigg)\dd x\bigg|\\ &\leq\|u_n(\cdot,\tau)\|_{L^\infty(\R^N)}\int_{\R^N}\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}|\mathbb{G}_{-\Operator}^{0}(y-x)-\mathbb{G}_{-\Operator}^{0}(x_0-x)|\dd y\dd x\\ &=\|u_n(\cdot,\tau)\|_{L^\infty(\R^N)}\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}\int_{\R^N}|\mathbb{G}_{-\Operator}^{0}(z)-\mathbb{G}_{-\Operator}^{0}(z+(x_0-y))|\dd z\dd y\\ &\leq \|u_n(\cdot,\tau)\|_{L^\infty(\R^N)}\sup_{|x_0-y|\leq 1/k}\|\mathbb{G}_{-\Operator}^{0}-\mathbb{G}_{-\Operator}^{0}(\cdot+(x_0-y))\|_{L^1(\R^N)}, \end{split} \end{equation*} which goes to zero as $k\to\infty$ by the continuity of the $L^1$-translation. In the case of \eqref{G_1} and \eqref{G_1'}, we still have that $\mathbb{G}_{-\Operator}^0\in L_\textup{loc}^1(\R^N)$, and hence, by the Lebesgue differentiation theorem, $$ (-\Operator)^{-1}[\psi_k^{(x_0)}](x)=\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}\mathbb{G}_{-\Operator}^{x}(y)\dd y\to \mathbb{G}_{-\Operator}^{x}(x_0)=\mathbb{G}_{-\Operator}^{x_0}(x) $$ for a.e. $x\in \R^N$ as $k\to\infty$.
However, we cannot simply apply the Lebesgue dominated convergence theorem since the $L^\infty$-bound of $(-\Operator)^{-1}[\psi_k^{(x_0)}]$ depends on $\|\psi_k^{(x_0)}\|_{L^\infty}\lesssim k^N$ coming from the estimate in $B_R(x_0)$ by Lemma \ref{lem:InverseBounded}. We therefore split the integral over the sets $B_{R}(x_0)$ and $\R^N\setminus B_{R}(x_0)$: \begin{equation*} \begin{split} &\bigg|\int_{\R^N}u_n(x,\tau)(-\Operator)^{-1}[\psi_k^{(x_0)}](x)\dd x-\int_{\R^N}u_n(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\bigg|\\ &\leq\bigg|\int_{B_{R}(x_0)}u_n(x,\tau)(-\Operator)^{-1}[\psi_k^{(x_0)}](x)\dd x-\int_{B_{R}(x_0)}u_n(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\bigg|\\ &\quad+\bigg|\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau)(-\Operator)^{-1}[\psi_k^{(x_0)}](x)\dd x-\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\bigg|\\ &=:I_1+I_2. \end{split} \end{equation*} The integral $I_1$ can be handled more or less as for \eqref{G_2}. Indeed, since $|x-x_0|\leq R$ and $|x-y|\leq (3/2)R$ (in the latter we assume that $k\geq 2/R$), we estimate $I_1$ as \begin{equation*} \begin{split} I_1&\leq \|u_n(\cdot,\tau)\|_{L^\infty(\R^N)}\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}\int_{B_{(3/2)R}(0)}|\mathbb{G}_{-\Operator}^{0}(z)-\mathbb{G}_{-\Operator}^{0}(z+(x_0-y))|\dd z\dd y\\ &\leq \|u_n(\cdot,\tau)\|_{L^\infty(\R^N)}\times\\ &\quad\times\sup_{|x_0-y|\leq 1/k}\big\|\big(\mathbb{G}_{-\Operator}^{0}\mathbf{1}_{B_{(5/2)R}(0)}\big)-\big(\mathbb{G}_{-\Operator}^{0}\mathbf{1}_{B_{(5/2)R}(0)}\big)(\cdot+(x_0-y))\big\|_{L^1(\R^N)}, \end{split} \end{equation*} which goes to zero as $k\to\infty$ by the continuity of the $L^1$-translation. To estimate $I_2$, we consider \begin{equation*} \begin{split} I_2&\leq \int_{\R^N\setminus B_{R}(x_0)}|u_n(x,\tau)|\big|(-\Operator)^{-1}[\psi_k^{(x_0)}](x)-\mathbb{G}_{-\Operator}^{x_0}(x)\big|\dd x. \end{split} \end{equation*} For $x\in\R^N\setminus B_{R}(x_0)$, both $(-\Operator)^{-1}[\psi_k^{(x_0)}](x)=\frac{1}{|B_{1/k}(x_0)|}\int_{B_{1/k}(x_0)}\mathbb{G}_{-\Operator}^{x}(y)\dd y$ and $\mathbb{G}_{-\Operator}^{x_0}(x)$ are uniformly bounded in $k$ by \eqref{G_1} and \eqref{G_1'}. The conclusion then follows by the Lebesgue dominated convergence theorem. \end{proof} In the case of $\mathbb{G}_{I-\Operator}^{x_0}$, we note that we can obtain a result similar to Proposition \ref{prop:ChoosingTestFunction} (see also Remark \ref{rem:WeakDualResolvent}(b)): \begin{equation}\label{eq:TestFunctionInSpaceAndTimeResolvent} \begin{split} &\int_{\tau_*}^\tau\Theta(t)\int_{\R^N}u_n^m(x,t)\psi(x)\dd x\dd t\\ &= \int_{\tau_*}^\tau\int_{\R^N}\big(\Theta'(t)+u_n^{m-1}(x,t)\Theta(t)\big)u_n(x,t)(I-\Operator)^{-1}[\psi](x)\dd x\dd t\\ &\quad+\Theta(\tau_*)\int_{\R^N}(I-\Operator)^{-1}[u_n(\cdot,\tau_*)](x)\psi(x)\dd x\\ &\quad-\Theta(\tau)\int_{\R^N}(I-\Operator)^{-1}[u_n(\cdot,\tau)](x)\psi(x)\dd x. \end{split} \end{equation} To find a suitable $\Theta$, we need to fix $\tau_*>0$ and $T(\lambda):=\tau_*+\frac{m}{\lambda(m-1)}>\tau_*$. The latter is denoted by $T$ from now on. \begin{lemma}\label{lem:TimeTest} Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$ and \eqref{phias}. Let $u_n$ be a weak dual solution of \eqref{GPME} with initial data $u_{0,n}$, $t\in[\tau_*,T]$, and define $$ \Theta(t):=(T-t)^{\frac{m}{m-1}}. $$ Then $0\leq \Theta\in C_\textup{b}^1([\tau_*,T])$ and it satisfies $$ \Theta'(t)+u_n^{m-1}(x,t)\Theta(t)\leq 0\qquad\text{for a.e. $t\in [\tau_*,T]$ and a.e. $x\in\R^N$.} $$ \end{lemma} \begin{remark} \begin{enumerate}[{\rm (a)}] \item In particular, the choice $\tau=T=\tau_*+\frac{m}{\lambda(m-1)}$ will be used throughout the rest of the paper.
\item The exponent $\frac{m}{m-1}$ is chosen to match the one of the time-monotonicity (Proposition \ref{prop:Monotonicity}). \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:TimeTest}] A direct computation gives $$ \Theta'(t)=\frac{m}{m-1}(T-t)^{\frac{m}{m-1}-1}(-1)=-\frac{m}{(m-1)(T-t)}\Theta(t). $$ By Proposition \ref{prop:APriori}(b)(ii) with $p=\infty$, $$ 0\leq u_n^{m-1}(x,t)\leq \|u_n\|_{L^\infty(\R^N\times(\tau_*,T))}^{m-1}\leq \lambda $$ and then $$ \Theta'(t)+u_n^{m-1}(x,t)\Theta(t)\leq \Big(\lambda-\frac{m}{(m-1)(T-t)}\Big)\Theta(t)\leq 0, $$ where, in the last inequality, we used that $\lambda$ is such that $$ \lambda\leq\frac{m}{(m-1)(T-t)}\qquad\text{for all $t\in [\tau_*,T]$.} $$ This finishes the proof. \end{proof} By following the proof of Corollary \ref{cor:LimitEstimate1} in the case of assumption \eqref{G_2}, we get: \begin{corollary}[Limit estimate 2]\label{cor:LimitEstimate2} Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, \eqref{Gas}, and \eqref{G_3}. Let $u_n$ be a weak dual solution of \eqref{GPME} with initial data $u_{0,n}$, let $\psi$ approximate $\delta_{x_0}$, and choose $\Theta$ as in Lemma \ref{lem:TimeTest}. Then \begin{equation*} \int_{\tau_*}^{T}\Theta(t) u_n^{m}(x_0,t)\dd t\leq \Theta(\tau_*)\int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x \end{equation*} for all Lebesgue points $x_0\in \R^N$. \end{corollary} \begin{remark} We note that Corollaries \ref{cor:LimitEstimate1} and \ref{cor:LimitEstimate2} reveal that another proper functional setting is the one where $$ \int_{\R^N}u_n(x,t)\mathbb{G}^{x_0}(x)\dd x<\infty\qquad\text{for a.e. $t>0$.} $$ \end{remark} We are ready to prove the fundamental upper bounds. \begin{proof}[Proof of Theorem \ref{thm:FundamentalOpRe}] We begin the proof by noting the following consequence of Proposition \ref{prop:Monotonicity}: For a.e. $t\in[\tau_*,\tau]$ and all Lebesgue points $x_0\in\R^N$, \begin{equation}\label{eq:Monotonicity} \tau_*^{\frac{m}{m-1}}u_n^m(x_0,\tau_*)\leq t^{\frac{m}{m-1}}u_n^m(x_0,t)\leq \tau^{\frac{m}{m-1}}u_n^m(x_0,\tau). \end{equation} \smallskip \noindent\text{(a)} For a.e. $\tau\geq\tau_*>0$, we combine Corollary \ref{cor:LimitEstimate1} and \eqref{eq:Monotonicity} to get \begin{equation*} \begin{split} \tau_*^{\frac{m}{m-1}}u_n^m(x_0,\tau_*)\int_{\tau_*}^{\tau}\frac{1}{t^{\frac{m}{m-1}}}\dd t&\leq \int_{\tau_*}^{\tau}\frac{1}{t^{\frac{m}{m-1}}} t^{\frac{m}{m-1}}u_n^{m}(x_0,t)\dd t\\ &\leq \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x, \end{split} \end{equation*} or \begin{equation*} \begin{split} u_n^m(x_0,\tau_*)&\leq\frac{1}{\tau_*^{\frac{m}{m-1}}}\Bigg(\int_{\tau_*}^{\tau}\frac{1}{t^{\frac{m}{m-1}}}\dd t\Bigg)^{-1} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\\ &=\frac{1}{(m-1)\tau_*^{\frac{m}{m-1}}}\Bigg(\bigg(\frac{1}{\tau_*}\bigg)^{\frac{1}{m-1}}-\bigg(\frac{1}{\tau}\bigg)^{\frac{1}{m-1}}\Bigg)^{-1} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x. \end{split} \end{equation*} Note that $t\mapsto t^{-\frac{1}{m-1}}$ is convex when $m>1$, and hence, $$ \bigg(\frac{1}{\tau_*}\bigg)^{\frac{1}{m-1}}-\bigg(\frac{1}{\tau}\bigg)^{\frac{1}{m-1}}\geq \frac{1}{m-1}\bigg(\frac{1}{\tau}\bigg)^{\frac{m}{m-1}}\big(\tau-\tau_*\big). $$ Consequently, \begin{equation*} u_n^m(x_0,\tau_*)\leq\bigg(\frac{\tau}{\tau_*}\bigg)^{\frac{m}{m-1}}\frac{1}{\tau-\tau_*}\int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x. \end{equation*} We conclude by choosing $\tau=2\tau_*$.
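Indeed, with this choice the prefactor becomes $$ \Big(\frac{2\tau_*}{\tau_*}\Big)^{\frac{m}{m-1}}\frac{1}{2\tau_*-\tau_*}=\frac{2^{\frac{m}{m-1}}}{\tau_*}=\frac{C(m)}{\tau_*}, $$ which is exactly the constant $C(m)=2^{\frac{m}{m-1}}$ claimed in part (a).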
\medskip \noindent\text{(b)} For some fixed $T>\tau_*$, we combine Corollary \ref{cor:LimitEstimate2} and \eqref{eq:Monotonicity} to get \begin{equation*} \begin{split} \tau_*^{\frac{m}{m-1}}u_n^m(x_0,\tau_*)\int_{\tau_*}^{T}\frac{\Theta(t)}{t^{\frac{m}{m-1}}}\dd t&\leq \int_{\tau_*}^{T}\frac{\Theta(t)}{t^{\frac{m}{m-1}}} t^{\frac{m}{m-1}}u_n^{m}(x_0,t)\dd t\\ &\leq \int_{\R^N}u_n(x,\tau_*)\Theta(\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x, \end{split} \end{equation*} or \begin{equation*} \begin{split} u_n^m(x_0,\tau_*)&\leq\frac{(T-\tau_*)^{\frac{m}{m-1}}}{\tau_*^{\frac{m}{m-1}}}\Bigg(\int_{\tau_*}^{T}\frac{(T-t)^{\frac{m}{m-1}}}{t^{\frac{m}{m-1}}}\dd t\Bigg)^{-1} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\\ &=\Big(\frac{T-\tau_*}{\tau_*}\Big)^{\frac{m}{m-1}}\Bigg(\int_{\tau_*}^{T}\Big(\frac{T-t}{t}\Big)^{\frac{m}{m-1}}\dd t\Bigg)^{-1} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\\ &\leq \Big(1+\frac{m}{m-1}\Big)\left(\frac{T}{\tau_*}\right)^{\frac{m}{m-1}}\frac{1}{T-\tau_*} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x. \end{split} \end{equation*} The last step follows from the estimate \begin{equation*} \begin{split} \Big(\frac{T-\tau_*}{\tau_*}\Big)^{\frac{m}{m-1}}\Bigg(\int_{\tau_*}^{T}\Big(\frac{T-t}{t}\Big)^{\frac{m}{m-1}}\dd t\Bigg)^{-1} \le \Big(1+\frac{m}{m-1}\Big)\left(\frac{T}{\tau_*}\right)^{\frac{m}{m-1}}\frac{1}{T-\tau_*} \end{split} \end{equation*} which can be proven as follows: \begin{equation*} \begin{split} &\Big(\frac{\tau_*}{T-\tau_*}\Big)^{\frac{m}{m-1}} \int_{\tau_*}^{T}\Big(\frac{T-t}{t}\Big)^{\frac{m}{m-1}}\dd t \ge \Big(\frac{\tau_*}{T-\tau_*}\Big)^{\frac{m}{m-1}} \int_{\tau_*}^{T}\Big(\frac{T-t}{T}\Big)^{\frac{m}{m-1}}\dd t\\ &= \Big(\frac{\tau_*}{T}\Big)^{\frac{m}{m-1}} \frac{1}{(T-\tau_*)^{\frac{m}{m-1}}} \int_{\tau_*}^{T}\Big( T-t \Big)^{\frac{m}{m-1}}\dd t= \Big(1+\frac{m}{m-1}\Big)^{-1}\Big(\frac{\tau_*}{T}\Big)^{\frac{m}{m-1}} \frac{(T-\tau_*)^{\frac{m}{m-1}+1}}{(T-\tau_*)^{\frac{m}{m-1}}} . \end{split} \end{equation*} We thus have \begin{equation*} \begin{split} u_n^m(x_0,\tau_*) &\leq \Big(1+\frac{m}{m-1}\Big)\left(\frac{T}{\tau_*}\right)^{\frac{m}{m-1}}\frac{1}{T-\tau_*} \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x. \end{split} \end{equation*} The choice $T=\tau_*+\frac{m}{\lambda(m-1)}$ and the assumption $\lambda^{-1}<(m-1)\tau_*$ readily give \begin{equation*} \begin{split} u_n^m(x_0,\tau_*) &\leq\Big(1+\frac{m-1}{m}\Big)\left(1+\frac{m}{\lambda(m-1)\tau_*}\right)^{\frac{m}{m-1}}\lambda \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\\ &\leq \Big(2-\frac{1}{m}\Big)(1+m)^{\frac{m}{m-1}}\lambda \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\\ &\leq 2(1+m)^{\frac{m}{m-1}}\lambda \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x, \end{split} \end{equation*} which is the desired result. \end{proof} \subsection{Boundedness under \eqref{G_3}} Recall that the fundamental upper bound (Theorem \ref{thm:FundamentalOpRe}(b)) was only valid when $\lambda>((m-1)\tau_*)^{-1}$. Hence, we need to combine that case with $\lambda\leq((m-1)\tau_*)^{-1}$ to reach a final conclusion. Under the latter assumption, however, we immediately have $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \Big(\frac{1}{(m-1)\tau_*}\Big)^{\frac{1}{m-1}}\qquad \text{for a.e. $\tau_*>0$.} $$ Let us therefore continue with $\lambda>((m-1)\tau_*)^{-1}$.
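In the next two lemmas, the Young inequality \eqref{eq:Young} is applied with different exponents $\vartheta$; in all of these applications it is used in the form $$ ab\leq\frac{1}{\vartheta}a^{\vartheta}+\frac{\vartheta-1}{\vartheta}b^{\frac{\vartheta}{\vartheta-1}}\qquad\text{for all $a,b\geq0$ and $\vartheta>1$.} $$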
\begin{lemma}[$L^q$--$L^\infty$-smoothing]\label{lem:LqLinftySmoothing} Let $p,q\in(1,\infty)$ be such that $\frac{1}{p}+\frac{1}{q}=1$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe} and \eqref{G_3}, we have that $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq C(m)C_p\|u_n(\cdot,\tau_*)\|_{L^q(\R^N)}\qquad\text{for a.e. $\tau_*>0$.} $$ \end{lemma} \begin{proof} By Theorem \ref{thm:FundamentalOpRe}(b), we get $$ u_n^m(x_0,\tau_*)\leq C(m)\lambda\int_{\R^N} u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x. $$ Now, take the essential supremum over $x_0\in\R^N$ on both sides and use the Young inequality \eqref{eq:Young} with $\vartheta=m/(m-1)>1$ to get $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^m\leq \frac{m-1}{m}\lambda^{\frac{m}{m-1}}+\frac{1}{m}\bigg(C(m)\esssup_{x_0\in\R^N}\int_{\R^N} u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\bigg)^{m}, $$ or, since $\lambda=\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{m-1}$, $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq C(m)\esssup_{x_0\in\R^N}\int_{\R^N} u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x. $$ By the H\"older inequality and assumption \eqref{G_3}, \begin{equation*} \int_{\R^N} u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x\leq \|u_n(\cdot,\tau_*)\|_{L^q(\R^N)}\|\mathbb{G}_{I-\Operator}^{0}\|_{L^p(\R^N)}, \end{equation*} and the result follows. \end{proof} So, $u_n(\cdot,t)$ is in fact bounded whenever $u_n(\cdot,t)\in L^q$ for some $q\in(1,\infty)$. We exploit this in the next result. \begin{lemma}[$L^1$--$L^\infty$-smoothing] Let $p,q\in(1,\infty)$ be such that $\frac{1}{p}+\frac{1}{q}=1$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe} and \eqref{G_3}, we have that $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq C(m)^qC_p^q\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}\qquad\text{for a.e. $\tau_*>0$.} $$ \end{lemma} \begin{proof} We use the interpolation inequality $\|u_n(\cdot,\tau_*)\|_{L^q(\R^N)}\leq\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{\frac{q-1}{q}}\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}^{\frac{1}{q}}$ in the proof of Lemma \ref{lem:LqLinftySmoothing} to get \begin{equation*} \begin{split} \lambda\int_{\R^N} u_n(x,\tau_*)\mathbb{G}_{I-\Operator}^{x_0}(x)\dd x &\leq \lambda\|u_n(\cdot,\tau_*)\|_{L^q(\R^N)}\|\mathbb{G}_{I-\Operator}^{0}\|_{L^p(\R^N)}\\ &\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{\frac{(m-1)q+q-1}{q}}\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}^{\frac{1}{q}}\|\mathbb{G}_{I-\Operator}^{0}\|_{L^p(\R^N)}. \end{split} \end{equation*} Now, the Young inequality \eqref{eq:Young} with $\vartheta=mq>1$ gives \begin{equation*} \begin{split} &\|u_n(\cdot,\tau_*)\|_{L^\infty}^{\frac{mq-1}{q}}C(m)\|u_n(\cdot,\tau_*)\|_{L^1}^{\frac{1}{q}}\|\mathbb{G}_{I-\Operator}^{x_0}\|_{L^p}\\ &\leq \frac{mq-1}{mq}\|u_n(\cdot,\tau_*)\|_{L^\infty}^m+\frac{1}{mq}C(m)^{mq}\|u_n(\cdot,\tau_*)\|_{L^1}^m\|\mathbb{G}_{I-\Operator}^{x_0}\|_{L^p}^{mq}. \end{split} \end{equation*} Combining the above yields \[ \|u_n(\cdot,\tau_*)\|_{L^\infty}^m\leq C(m)^{mq}C_p^{mq}\|u_n(\cdot,\tau_*)\|_{L^1}^m. \qedhere \] \end{proof} We sum up the results in the following proposition: \begin{proposition}[Smoothing effects]\label{prop:preCollectingResults} Let $p,q\in(1,\infty)$ be such that $\frac{1}{p}+\frac{1}{q}=1$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe} and \eqref{G_3}, we have that: \begin{enumerate}[{\rm (a)}] \item \textup{($L^q$--$L^\infty$-smoothing)} $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq\Big(\frac{1}{(m-1)\tau_*}\Big)^{\frac{1}{m-1}}+C(m)C_p\|u_n(\cdot,\tau_*)\|_{L^q(\R^N)}\qquad\text{for a.e.
$\tau_*>0$.} $$ \item \textup{($L^1$--$L^\infty$-smoothing)} $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \Big(\frac{1}{(m-1)\tau_*}\Big)^{\frac{1}{m-1}}+C(m)^qC_p^q\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}\qquad\text{for a.e. $\tau_*>0$.} $$ \end{enumerate} \end{proposition} The above results are not invariant under time-scaling (Lemma \ref{lem:ScalingNonlinearity}). We thus rewrite them in a proper form: \begin{proposition}[Scaling-invariant smoothing effects]\label{prop:preCollectingResultsScaled} Let $p,q\in(1,\infty)$ be such that $\frac{1}{p}+\frac{1}{q}=1$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe} and \eqref{G_3}, we have that: \begin{enumerate}[{\rm (a)}] \item \textup{($L^q$--$L^\infty$-smoothing)} \begin{equation*} \|u_n(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2((m-1)t)^{-\frac{1}{m-1}} \qquad\qquad&\text{if $0<t\leq t_{0,n}$ a.e.,}\\ 2C(m)C_p\|u_{0,n}\|_{L^q(\R^N)} \qquad\qquad&\text{if $t>t_{0,n}$ a.e.,} \end{cases} \end{equation*} where $$ t_{0,n}:=\frac{1}{m-1}\Big(C(m)C_p\|u_{0,n}\|_{L^q(\R^N)}\Big)^{-(m-1)}. $$ \item \textup{($L^1$--$L^\infty$-smoothing)} \begin{equation*} \|u_n(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2((m-1)t)^{-\frac{1}{m-1}} \qquad\qquad&\text{if $0<t\leq t_{0,n}$ a.e.,}\\ 2C(m)^{q}C_p^{q}\|u_{0,n}\|_{L^1(\R^N)} \qquad\qquad&\text{if $t>t_{0,n}$ a.e.,} \end{cases} \end{equation*} where $$ t_{0,n}:=\frac{1}{m-1}\Big(C(m)^{q}C_p^{q}\|u_{0,n}\|_{L^1(\R^N)}\Big)^{-(m-1)}. $$ \end{enumerate} \end{proposition} \begin{proof} We only provide a proof for part (a) since part (b) is similar. Proposition \ref{prop:preCollectingResults}(a) gives $$ \|u_n(\cdot,\tau_*)\|_{L^\infty}\leq \Big(\frac{1}{(m-1)\tau_*}\Big)^{\frac{1}{m-1}}+C(m)C_p\|u_n(\cdot,\tau_*)\|_{L^q}\qquad \text{for a.e. $\tau_*>0$,} $$ but this result is not respecting the time-scaling (Lemma \ref{lem:ScalingNonlinearity}): $$ \Lambda^{\frac{1}{m-1}}\|u_n(\cdot,\Lambda\tau_*)\|_{L^\infty}\leq \Lambda^{\frac{1}{m-1}}\Big(\frac{1}{(m-1)\Lambda\tau_*}\Big)^{\frac{1}{m-1}}+\Lambda^{\frac{1}{m-1}}C(m)C_p\|u_n(\cdot,\Lambda\tau_*)\|_{L^q}. $$ By Proposition \ref{prop:APriori}(b)(ii) with $p=q$, we can optimize by requiring that $$ \Big(\frac{1}{(m-1)\Lambda\tau_*}\Big)^{\frac{1}{m-1}}=C(m)C_p\|u_{0,n}\|_{L^q},\quad\text{or}\quad\Lambda\tau_*=\frac{1}{m-1}\Big(\frac{1}{C(m)C_p\|u_{0,n}\|_{L^q}}\Big)^{m-1}=:t_{0,n}. $$ We obtain that $$ \|u_n(\cdot,t_{0,n})\|_{L^\infty}\leq 2C(m)C_p\|u_{0,n}\|_{L^q}. $$ Now, if $0<t\leq t_{0,n}$, we use time-monotonicity (Lemma \ref{prop:Monotonicity}) $$ u_n(\cdot,t)\leq \Big(\frac{t_{0,n}}{t}\Big)^{\frac{1}{m-1}}u_n(\cdot,t_{0,n}) $$ to get \begin{equation*} \begin{split} \|u_n(\cdot,t)\|_{L^\infty} &\leq \Big(\frac{t_{0,n}}{t}\Big)^{\frac{1}{m-1}}\|u_n(\cdot,t_{0,n})\|_{L^\infty}\leq \Big(\frac{t_{0,n}}{t}\Big)^{\frac{1}{m-1}}2C(m)C_p\|u_{0,n}\|_{L^q}=2\Big(\frac{1}{(m-1)t}\Big)^{\frac{1}{m-1}}. \end{split} \end{equation*} And, if $t>t_{0,n}$, we use Proposition \ref{prop:APriori}(b)(ii) with $p=\infty$ $$ \|u_n(\cdot,t)\|_{L^\infty}\leq \|u_n(\cdot,t_{0,n})\|_{L^\infty} $$ to get \[ \|u_n(\cdot,t)\|_{L^\infty}\leq \|u_n(\cdot,t_{0,n})\|_{L^\infty}\leq 2C(m)C_p\|u_{0,n}\|_{L^q}. \qedhere \] \end{proof} \begin{proof}[Proof of Theorem \ref{thm:L1ToLinfinitySmoothing}] By the proof of Proposition \ref{prop:APriori}, we know that for every $u_{0,n}$, there is a unique mild solution $u_n$ enjoying comparison and $L^p$-decay, and this solution is moreover a weak dual solution in the sense of Definition \ref{def:WeakDualSolution}. 
As a consequence, the $L^1$--$L^\infty$-smoothing of Proposition \ref{prop:preCollectingResultsScaled}(b) holds for $u_n$. By construction, we have that $0\leq u_{0,n}\leq u_{0,n+1}\leq u_0$ a.e. in $\R^N$ for all $n\in\N$ (see \eqref{eq:PropApproxu_0}), so that Proposition \ref{prop:APriori}(b)(i) yields $$ 0\leq u_{n}\leq u_{n+1}\qquad\text{a.e. in $Q_T$ for all $n\in\N$.} $$ By monotonicity, the pointwise limit of $\{u_n\}_{n\in\N}$ always exists (possibly being $+\infty$ on a set of measure zero), and we then define our candidate limit solution as $$ u(x,t):=\liminf_{n\to\infty} u_n(x,t). $$ Moreover, by the Fatou lemma and Proposition \ref{prop:APriori}(b)(ii), we immediately have that $$ \|u(\cdot,t)\|_{L^1(\R^N)}\leq \liminf_{n\to\infty}\|u_n(\cdot,t)\|_{L^1(\R^N)}\leq \|u_{0,n}\|_{L^1(\R^N)}\leq \|u_{0}\|_{L^1(\R^N)}. $$ As a consequence, the set $\{(x,t)\in Q_T \,:\, u(x,t)=\infty\}$ has measure zero, so that the convergence above holds a.e. in $Q_T$, and $0\leq u_n\leq u$ a.e. in $Q_T$. Note that, for a.e. $x\in \R^N$ and a.e. $t>0$, we have \[ u(x,t)=\liminf_{n\to\infty} u_n(x,t)\le \liminf_{n\to\infty} \|u_n( \cdot,t )\|_{L^\infty(\R^N)}\,. \] As a consequence, $u$ inherits from $u_n$ the mentioned $L^1$--$L^\infty$-smoothing effect of Proposition \ref{prop:preCollectingResultsScaled}(b) since \eqref{eq:PropApproxu_0} gives $$ \|u_{0,n}\|_{L^1(\R^N)}\to\|u_0\|_{L^1(\R^N)}\quad\text{and}\quad t_{0,n}\to t_0\qquad\text{as $n\to\infty$}. $$ It remains to check that the constructed limit $u$ is indeed a weak dual solution. To that end, note that the regularity assumptions $u^m\in L^1((0,T);L_\textup{loc}^1(\R^N))$ and $u\in L^\infty((0,T);L^1(\R^N))$ are straightforward consequences of the fact that $u\in L^1(Q_T)\cap L^\infty(\R^N\times[\tau,T))$ for a.e. $\tau>0$. Moreover, $0\leq u_n\leq u_{n+1}\leq u$ a.e. and $u_n\to u$ a.e. as $n\to\infty$ yield that a simple use of the monotone convergence theorem ensures that parts (ii) and (iii) of Definition \ref{def:WeakDualSolution} are true for $u$ (the limit integrals are all finite). It only remains to prove that $u\in C([0,T]; L^1(\R^N))$. Let us begin with $t\in (0,T)$. We shall use that, for all $n\in\N$, $u_n\in C([0,T]; L^1(\R^N))$ satisfies the following time-monotonicity estimate (cf. Proposition \ref{prop:Monotonicity}): $$ t_0^{\frac{1}{m-1}}u_n(x,t_0)\leq t_1^{\frac{1}{m-1}}u_n(x,t_1), \qquad\text{for a.e. $t_1\geq t_0>0$ and a.e. $x\in\R^N$,} $$ which can be rearranged to $$ u_n(x,t_1)-u_n(x,t_0)\geq\Big(\frac{t_0}{t_1}\Big)^{\frac{1}{m-1}}u_n(x,t_0)-u_n(x,t_0)= -\Big(1-\Big(\frac{t_0}{t_1}\Big)^{\frac{1}{m-1}}\Big)u_n(x,t_0). $$ Now, recall that $|f|=2f^-+f=2(-\min\{f,0\})+f$, so that \begin{equation*} \begin{split} \|u_n(\cdot,t_1)-u_n(\cdot,t_0)&\|_{L^1(\R^N)}=2\int_{\R^N}\big(u_n(x,t_1)-u_n(x,t_0)\big)^-\dd x+\int_{\R^N}\big(u_n(x,t_1)-u_n(x,t_0)\big)\dd x\\ &\leq 2\Big(1-\Big(\frac{t_0}{t_1}\Big)^{\frac{1}{m-1}}\Big)\|u_n(\cdot,t_0)\|_{L^1(\R^N)}+\|u_n(\cdot,t_1)\|_{L^1(\R^N)}-\|u_n(\cdot,t_0)\|_{L^1(\R^N)}\\ &\leq \frac{2\|u_0\|_{L^1(\R^N)}}{t_1^{\frac{1}{m-1}}}\big(t_1^{\frac{1}{m-1}}-t_0^{\frac{1}{m-1}}\big), \end{split} \end{equation*} where we used that $u_n\geq 0$, the $L^1$-decay of $u_n$, and $u_{0,n}\leq u_0$. Changing the roles of $t_0$ and $t_1$ reveals that the estimate also holds when $0<t_1\leq t_0$, i.e., $$ \|u_n(\cdot,t_1)-u_n(\cdot,t_0)\|_{L^1(\R^N)}\leq \frac{2\|u_0\|_{L^1(\R^N)}}{t_1^{\frac{1}{m-1}}}\big|t_1^{\frac{1}{m-1}}-t_0^{\frac{1}{m-1}}\big|. 
$$ Moreover, since $u_n(\cdot,t)\to u(\cdot,t)$ a.e. in $\R^N$ for a.e. $t>0$, we get that $|u_n(\cdot,t_1)-u_n(\cdot,t_0)|\to |u(\cdot,t_1)-u(\cdot,t_0)|$. Then a simple application of the Fatou lemma yields $$ \|u(\cdot,t_1)-u(\cdot,t_0)\|_{L^1(\R^N)}\leq \liminf_{n\to\infty}\|u_n(\cdot,t_1)-u_n(\cdot,t_0)\|_{L^1(\R^N)}\leq \frac{2\|u_0\|_{L^1(\R^N)}}{t_1^{\frac{1}{m-1}}}\big|t_1^{\frac{1}{m-1}}-t_0^{\frac{1}{m-1}}\big|, $$ so that $u\in C((0,T];L^1(\R^N))$. The continuity at $t=0$ is a consequence of the triangle inequality. Indeed, for a.e. $t\in(0,T]$, \begin{equation*} \begin{split} \|u(\cdot,t)-u_0\|_{L^1(\R^N)}&\leq \|u(\cdot,t)-u_n(\cdot,t)\|_{L^1(\R^N)}+\|u_n(\cdot,t)-u_{0,n}\|_{L^1(\R^N)}\\ &\quad+\|u_{0,n}-u_0\|_{L^1(\R^N)}. \end{split} \end{equation*} The last two terms go to zero as $n\to\infty$ since $u_n\in C([0,T]; L^1(\R^N))$ and by the assumption \eqref{eq:PropApproxu_0} on $u_{0,n}$. Finally, the first term goes to zero by the Lebesgue dominated convergence theorem, noting that $|u(\cdot,t)-u_n(\cdot,t)|\leq 2|u(\cdot,t)|\in L^1(\R^N)$. \end{proof} \begin{remark} Note that we never used any of the particular assumptions on the Green function \eqref{G_1}--\eqref{G_3} here. This will be important later when we want to repeat the above argument in slightly different settings. \end{remark} \subsection{Boundedness under \eqref{G_1} and \eqref{G_1'}} \begin{proposition}[Smoothing effects]\label{preCollectingResults2} Under the assumptions of Theorem \ref{thm:FundamentalOpRe}, we have that: \begin{enumerate}[{\rm (a)}] \item If \eqref{G_1} holds, then $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \frac{C(m,\alpha,N)}{\tau_*^{N\theta}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta}\qquad\text{for a.e. $\tau_*>0$}, $$ where $\theta:=(\alpha+N(m-1))^{-1}$ and $$ C(m,\alpha,N):=2^{\frac{1}{m}}C(m)^{N\theta}\Big(\frac{m}{m-1}\Big)^{\alpha \theta}K_1^{(N-\alpha)\theta}K_2^{\alpha \theta}. $$ \item If \eqref{G_1'} holds, then \begin{equation*} \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq\begin{cases} \frac{C(m,\alpha,N)}{\tau_*^{N\theta}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta} \qquad\qquad&\text{if $0<\tau_*\leq t_{0,n}$ a.e.,}\\ \Big(\frac{\tilde{C}(m)}{(m-1)\tau_*}\Big)^{\frac{1}{m}}\|u_{0,n}\|_{L^1(\R^N)}^{\frac{1}{m}} \qquad\qquad&\text{if $\tau_*>t_{0,n}$ a.e.,} \end{cases} \end{equation*} where $\tilde{C}(m):=2mC(m)K_3$ and $$ t_{0,n}:= 2^m\Big(\frac{m}{m-1}\Big)^{-(m-1)\theta}C(m)K_1^mK_2^{\frac{\alpha m}{m-1}}K_3^{-(\frac{\alpha m}{m-1}+(m-1))}\|u_{0,n}\|_{L^1(\R^N)}^{-(m-1)}. $$ \end{enumerate} \end{proposition} The proof is based on the following intermediate results: \begin{proposition}[Local smoothing effects]\label{LocalpreCollectingResults1} Under the assumptions of Theorem \ref{thm:FundamentalOpRe}, and assuming that, for all $\rho>0$ and all $\alpha\in(0,2]$, $$ \int_{B_\rho(x_0)}\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\leq K_1 \rho^\alpha \qquad\text{and}\qquad \int_{\R^N\setminus B_{\rho}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x<\infty , $$ we get, for a.e. $\tau_*>0$, a.e. $z\in\R^N$, and all $0<\bar{R}<R<2\bar{R}$, \begin{equation*} \begin{split} &\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m\\ &\leq AR^{\frac{\alpha m}{m-1}}+\esssup_{x_0\in B_{R}(z)}\frac{m}{m-1}\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x, \end{split} \end{equation*} where $$ A:=\Big(\frac{2C(m)}{\tau_*}K_1\Big)^{\frac{m}{m-1}}.
$$ \end{proposition} \begin{remark} \begin{enumerate}[{\rm (a)}] \item By Corollary \ref{cor:LimitEstimate1}, $$ \int_{\R^N}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\leq \int_{\R^N}u_{0,n}(x)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x. $$ \item The above assumptions are analogous to the space $L_{\alpha}(\R^N)$ discussed in \cite{KuMiSi18}. \end{enumerate} \end{remark} \begin{proof}[Proof of Proposition \ref{LocalpreCollectingResults1}] Fix $0<\bar{R}< R<2\bar{R}$. We split the integral in Theorem \ref{thm:FundamentalOpRe}(a) and use assumption \eqref{G_1} to obtain \begin{equation*} \begin{split} &u_n^m(x_0,\tau_*)\\ &\leq \frac{C(m)}{\tau_*}\int_{B_{\bar{R}}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x+\frac{C(m)}{\tau_*}\int_{B_{R}(x_0)\setminus B_{\bar{R}}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\\ &\quad+\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\\ &\leq \Big(\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(x_0))}+\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{R}(x_0)\setminus B_{\bar{R}}(x_0))}\Big)\frac{C(m)}{\tau_*}K_1R^\alpha\\ &\quad+\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x. \end{split} \end{equation*} The Young inequality \eqref{eq:Young} with $\vartheta=m$ applied to the first term yields \begin{equation*} \begin{split} \frac{1}{2m}\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(x_0))}^m&+\frac{1}{2m}\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{R}(x_0)\setminus B_{\bar{R}}(x_0))}^m\\ &+\frac{2^{1+\frac{1}{m-1}}(m-1)}{m}\Big(\frac{C(m)}{\tau_*}K_1R^\alpha\Big)^{\frac{m}{m-1}}. \end{split} \end{equation*} By taking the supremum on each side with respect to $x_0\in B_{\bar{R}}(z)$ and using that $$ \esssup_{x_0\in B_{\bar{R}}(z)}\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(x_0))}\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{2\bar{R}}(z))}\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{3\bar{R}}(z))} $$ and $$ \esssup_{x_0\in B_{\bar{R}}(z)}\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{R}(x_0)\setminus B_{\bar{R}}(x_0))}\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{R+\bar{R}}(z))}\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{3\bar{R}}(z))}, $$ we get \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m&\leq \frac{1}{m}\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{3\bar{R}}(z))}^m+\frac{m-1}{m}AR^{\frac{\alpha m}{m-1}}\\ &\quad+\esssup_{x_0\in B_{R}(z)}\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x. \end{split} \end{equation*} To conclude, we absorb the term $\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{3\bar{R}}(z))}^m$ due to a classical lemma (cf. Lemma \ref{lem:DeGiorgi}). \end{proof} \begin{proposition}[Local smoothing effects 2]\label{LocalpreCollectingResults2} Under the assumptions of Theorem \ref{thm:FundamentalOpRe}, and for all $z\in \R^N$ and all $\bar{R}>0$ small enough, we have that: \begin{enumerate}[{\rm (a)}] \item If \eqref{G_1} holds, then $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}\leq \frac{C(m,\alpha,N)}{\tau_*^{N\theta}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta}\qquad\text{for a.e. $\tau_*>0$}, $$ where $\theta:=(\alpha+N(m-1))^{-1}$ and $$ C(m,\alpha,N):=2^{\frac{1}{m}+(N-\alpha)\theta}C(m)^{N\theta}\Big(\frac{m}{m-1}\Big)^{\alpha \theta}K_1^{(N-\alpha)\theta}K_2^{\alpha \theta}. 
$$ \item If \eqref{G_1'} holds, then $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}\leq \begin{cases} \frac{C(m,\alpha,N)}{\tau_*^{N\theta}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta} \qquad\qquad&\text{if $0<\tau_*\leq t_{0,n}$ a.e.,}\\ \Big(\frac{\tilde{C}(m)}{(m-1)\tau_*}\Big)^{\frac{1}{m}}\|u_{0,n}\|_{L^1(\R^N)}^{\frac{1}{m}} \qquad\qquad&\text{if $\tau_*>t_{0,n}$ a.e.,} \end{cases} $$ where $\tilde{C}(m):=2mC(m)K_3$ and $$ t_{0,n}:= 2^m\Big(\frac{m}{m-1}\Big)^{-(m-1)}C(m)K_1^mK_2^{\frac{\alpha m}{m-1}}K_3^{-(\frac{\alpha m}{m-1}+(m-1))}\|u_{0,n}\|_{L^1(\R^N)}^{-(m-1)}. $$ \end{enumerate} \end{proposition} \begin{proof} \noindent(a) Recall that we fixed $0<\bar{R}< R<2\bar{R}$. Now, estimate $$ \esssup_{x_0\in B_{R}(z)}\frac{m}{m-1}\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x $$ in Proposition \ref{LocalpreCollectingResults1} by using \eqref{G_1} to get $$ \frac{m}{m-1}\frac{C(m)}{\tau_*}K_2R^{-(N-\alpha)}\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}=:BK_2R^{-(N-\alpha)}. $$ Optimizing in $R$ gives $$ R=\Big(\frac{BK_2}{A}\Big)^{(m-1)\theta}, $$ and \begin{equation*} \begin{split} &\|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m\\ &\leq 2A^{1-\alpha m\theta}(BK_2)^{\alpha m\theta}\\ &=2^{1+(N-\alpha)m\theta}C(m)^{Nm\theta}\Big(\frac{m}{m-1}\Big)^{\alpha m\theta}K_1^{(N-\alpha)m\theta}K_2^{\alpha m \theta}\frac{1}{\tau_*^{Nm\theta}}\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}^{\alpha m\theta}. \end{split} \end{equation*} \smallskip \noindent(b) According to assumption \eqref{G_1'}, we have power-like behaviour of the Green function when $$ 0<R\leq \Big(\frac{K_2}{K_3}\Big)^{\frac{1}{N-\alpha}}, $$ and power-like behaviour around $x=x_0$ and constant around $x\to\infty$ when $$ R> \Big(\frac{K_2}{K_3}\Big)^{\frac{1}{N-\alpha}}. $$ Let us then first consider the case of small $R$. Following part (a), we have $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m\leq AR^{\frac{\alpha m}{m-1}}+BK_2R^{-(N-\alpha)}. $$ Optimizing in $R$ gives $$ R=\Big(\frac{BK_2}{A}\Big)^{(m-1)\theta}, $$ and also the $L^1$--$L^\infty$-smoothing of part (a). However, this can only hold when $$ \Big(\frac{BK_2}{A}\Big)^{(m-1)\theta}\leq \Big(\frac{K_2}{K_3}\Big)^{\frac{1}{N-\alpha}} \qquad\Longleftrightarrow\qquad \tau_*\leq t_{0,n}. $$ Now, we turn our attention to the case of big $R$. Following part (a), we instead have $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m\leq AR^{\frac{\alpha m}{m-1}}+BK_3. $$ Optimizing in $R$ gives $$ R=\Big(\frac{BK_3}{A}\Big)^{\frac{m-1}{\alpha m}}, $$ and \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty(B_{\bar{R}}(z))}^m&\leq 2BK_3=\frac{2m}{m-1}C(m)K_3\frac{1}{\tau_*}\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}. \end{split} \end{equation*} Similarly as in the case of small $R$, the above estimate can now only hold when $\tau_*>t_{0,n}$. \end{proof} \begin{proof}[Proof of Proposition \ref{preCollectingResults2}] We simply take the supremum over $z\in \R^N$. \end{proof} The proof of Theorem \ref{thm:L1ToLinfinitySmoothing2} follows as for Theorem \ref{thm:L1ToLinfinitySmoothing}. 
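\begin{remark} Let us briefly verify that the choice of $R$ in the proof of Proposition \ref{LocalpreCollectingResults2}(a) indeed balances the two terms: with $R=\big(\frac{BK_2}{A}\big)^{(m-1)\theta}$ we have $$ AR^{\frac{\alpha m}{m-1}}=A^{1-\alpha m\theta}(BK_2)^{\alpha m\theta}=BK_2R^{-(N-\alpha)}, $$ since $\alpha m\theta+(N-\alpha)(m-1)\theta=\big(\alpha+N(m-1)\big)\theta=1$. \end{remark}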
\subsection{Boundedness under combinations of \eqref{G_1}} \begin{proposition}[Combined smoothing effects]\label{preCollectingResults3} Under the assumptions of Theorem \ref{thm:FundamentalOpRe}, we have that: If \eqref{G_1} holds with $\alpha\in(0,2)$ when $0<R\leq1$ and with $\alpha=2$ when $R>1$, then: $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \tilde{C}(m)\begin{cases} \tau_*^{-N\theta_{\alpha}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}&\qquad\text{if $0<\tau_*\leq \|u_{0,n}\|_{L^1(\R^N)}^{-(m-1)}$ a.e.,}\\ \tau_*^{-N\theta_{2}}\|u_{0,n}\|_{L^1(\R^N)}^{2\theta_{2}}&\qquad\text{if $\tau_*> \|u_{0,n}\|_{L^1(\R^N)}^{-(m-1)}$ a.e.,} \end{cases} $$ where $\theta_{\alpha}=(\alpha+N(m-1))^{-1}$ (defined for $\alpha\in (0,2]$) and $$ \tilde{C}(m):=2\Big((C(m)K_1)^{\frac{m}{m-1}}+\frac{m}{m-1}C(m)K_2\Big)^{\frac{1}{m}}. $$ \end{proposition} \begin{proof} Fix $0<R\leq 1$ (to be determined). We split the integral in Theorem \ref{thm:FundamentalOpRe}(a) and use assumption \eqref{G_1} to obtain \begin{equation*} \begin{split} &u_n^m(x_0,\tau_*)\\ &\leq \frac{C(m)}{\tau_*}\int_{B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x+\frac{C(m)}{\tau_*}\int_{\R^N\setminus B_{R}(x_0)}u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\\ &\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\frac{C(m)}{\tau_*}K_1R^\alpha+\|u_n(\cdot,\tau_*)\|_{L^1(\R^N)}\frac{C(m)}{\tau_*}K_2R^{-(N-\alpha)}. \end{split} \end{equation*} We then proceed as in the beginning of Section \ref{sec:Proofs} to obtain $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \frac{\tilde{C}(m)}{\tau_*^{N\theta_{\alpha}}}\|u_{0,n}\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}} $$ as long as $$ \big(\tau_*^{\frac{1}{m-1}}\|u_0\|_{L^1(\R^N)}\big)^{(m-1)\theta_\alpha}\leq 1\qquad\Longleftrightarrow\qquad \tau_*\leq\|u_0\|_{L^1(\R^N)}^{-(m-1)}. $$ Now, fix $R>1$ (to be determined). By simply repeating the above calculations (replacing $\alpha$ by $2$), the choice $$ R=\big(\tau_*^{\frac{1}{m-1}}\|u_0\|_{L^1(\R^N)}\big)^{(m-1)\theta_2} $$ gives $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \frac{\tilde{C}(m)}{\tau_*^{N\theta_{2}}}\|u_{0,n}\|_{L^1(\R^N)}^{2\theta_{2}} $$ as long as \[ \big(\tau_*^{\frac{1}{m-1}}\|u_0\|_{L^1(\R^N)}\big)^{(m-1)\theta_2}> 1\qquad\Longleftrightarrow\qquad \tau_*>\|u_0\|_{L^1(\R^N)}^{-(m-1)}. \qedhere \] \end{proof} The proof of Theorem \ref{thm:L1ToLinfinitySmoothing3} follows as for Theorem \ref{thm:L1ToLinfinitySmoothing}. \subsection{Boundedness under \eqref{G_2}} \begin{proposition}[Absolute bounds]\label{prop:AbsBounds} Under the assumptions of Theorem \ref{thm:FundamentalOpRe} and \eqref{G_2}, we have that $$ \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}\leq \Big(\frac{C(m)C_1}{\tau_*}\Big)^{\frac{1}{m-1}}\qquad\text{for a.e. $\tau_*>0$.} $$ \end{proposition} \begin{proof} By Theorem \ref{thm:FundamentalOpRe}(a), we get $$ u_n^m(x_0,\tau_*)\leq C(m)\frac{1}{\tau_*}\int u_n(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\leq \frac{C(m)}{\tau_*}\|u_n(\cdot,\tau_*)\|_{L^\infty}\|\mathbb{G}_{-\Operator}^{0}\|_{L^1}. $$ Now, the Young inequality \eqref{eq:Young} with $\vartheta=m$ gives $$ \Big(1-\frac{1}{m}\Big)\|u_n(\cdot,\tau_*)\|_{L^\infty}^{m}\leq \frac{m-1}{m}\Big(\frac{C(m)C_1}{\tau_*}\Big)^{\frac{m}{m-1}}, $$ and hence, the result follows. \end{proof} The proof of Theorem \ref{thm:AbsBounds} follows as for Theorem \ref{thm:L1ToLinfinitySmoothing}. 
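\begin{remark}
To make the last step in the proof of Proposition \ref{prop:AbsBounds} explicit: since $1-\frac{1}{m}=\frac{m-1}{m}$, the final displayed inequality is equivalent to
$$
\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{m}\leq \Big(\frac{C(m)C_1}{\tau_*}\Big)^{\frac{m}{m-1}},
$$
and taking the $m$-th root yields the claimed absolute bound, which does not depend on $u_{0,n}$.
\end{remark}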
\subsection{Linear implies nonlinear} \label{sec:ProofsLinearImpliesNonlinear}
\begin{proof}[Proof of Theorem \ref{thm:OverviewBoundedness}]
Linear smoothing effects hold due to Theorem \ref{thm:LinearEquivalences} below. As for the nonlinear case, to be in the setting of Theorem \ref{thm:L1ToLinfinitySmoothing}, we only need to check that \eqref{G_3} holds. By Proposition \ref{prop:TheInverseOperatorA-1} and the Minkowski inequality for integrals (cf. Theorem 2.4 in \cite{LiLo01}),
\begin{equation}\label{eq:LpNormOfGreenOfResolvent}
\begin{split}
\|\mathbb{G}_{I-\Operator}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I-\Operator}^{0}\|_{L^p(\R^N)} &=\bigg(\int_{\R^N}\bigg(\int_{0}^\infty\textup{e}^{-t} \mathbb{H}_{-\Operator}^{0}(x,t)\dd t\bigg)^p\dd x\bigg)^{\frac{1}{p}}\\
&\leq \int_{0}^\infty\bigg( \int_{\R^N}\Big(\textup{e}^{-t}\mathbb{H}_{-\Operator}^{0}(x,t)\Big)^p\dd x\bigg)^{\frac{1}{p}}\dd t\\
&=\int_{0}^\infty\textup{e}^{-t}\bigg( \int_{\R^N}\Big(\mathbb{H}_{-\Operator}^{0}(x,t)\Big)^p\dd x\bigg)^{\frac{1}{p}}\dd t\\
&=\int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Operator}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t.
\end{split}
\end{equation}
Finally,
$$
\|\mathbb{H}_{-\Operator}^{0}(\cdot,t)\|_{L^p(\R^N)}\leq \|\mathbb{H}_{-\Operator}^{0}(\cdot,t)\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\|\mathbb{H}_{-\Operator}^{0}(\cdot,t)\|_{L^1(\R^N)}^{\frac{1}{p}}\leq \|\mathbb{H}_{-\Operator}^{0}(\cdot,t)\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\leq C(t)^{\frac{p-1}{p}}
$$
completes the proof.
\end{proof}
\section{Boundedness results for \texorpdfstring{$0$}{0}-order operators}
\label{sec:Boundedness0Order}
We need some assumptions regarding $0$-order or nonsingular operators, i.e., operators of the form $-\Operator=-\Levy^{\mu}$ with $\dd \mu=J\dd z$ where:
\begin{align}
&\textup{$J\geq0$ a.e. on $\R^N$, symmetric, and $\|J\|_{L^1(\R^N)}=1$.} \tag{$\textup{J}_{1}$}& \label{J_1}\\
&\|J\|_{L^p(\R^N)}\leq C_{J,p}<\infty \textup{ for some $p\in(1,\infty]$.}\nonumber \tag{$\textup{J}_{2}$}& \label{J_2}
\end{align}
That is, we consider convolution-type operators $-\Operator=I-J\ast$ (which we will denote by $-\Levy^{J}$). The nonlinear equation \eqref{GPME} with such operators has been studied in e.g. \cite{DFFeLi08}. Assumption \eqref{J_2} ensures that $J$ is far from being concentrated. Therefore, we cannot consider discrete measures $\mu$, and thus operators like the discrete Laplacian are excluded. The smoothing takes the following form:
\begin{theorem}[$L^1$--$L^\infty$-smoothing]\label{thm:L1ToLinfinitySmoothing0}
Assume \eqref{u_0as}, \eqref{phias}, and $q=p/(p-1)\in[1,\infty)$, and let $u$ be a very weak solution of \eqref{GPME} with initial data $u_0$. If \eqref{J_1} and \eqref{J_2} hold, then
\begin{equation*}
\|u(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2(mqC(m)^{\frac{m}{m-1}})^{\frac{1}{m}}t^{-\frac{1}{m-1}} \qquad\qquad&\text{if $0<t\leq t_{0}$ a.e.,}\\ 2\Big(\frac{mC(m)}{m-1}C_{J,p}\Big)^q\|u_{0}\|_{L^1(\R^N)} \qquad\qquad&\text{if $t>t_{0}$ a.e.,} \end{cases}
\end{equation*}
where $C(m):=2^{\frac{m}{m-1}}$ and
$$
t_{0}:=(mq)^{\frac{m-1}{m}}\Big(\frac{m}{m-1}C_{J,p}\Big)^{-q(m-1)}C(m)^{1-q(m-1)}\|u_{0}\|_{L^1(\R^N)}^{-(m-1)}.
$$
\end{theorem}
\begin{remark}
The time-scaling (Lemma \ref{lem:ScalingNonlinearity}) ensures that the above estimate is of the proper form.
\end{remark}
The $0$-order or nonsingular operators allow for a particularly simple approach.
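\begin{remark}
For instance, the normalized indicator kernel $J=\mathbf{1}_{B_1}/|B_1|$ is nonnegative, symmetric, and of unit mass, and satisfies $\|J\|_{L^p(\R^N)}=|B_1|^{-\frac{p-1}{p}}$ for every $p\in(1,\infty]$. Hence \eqref{J_1} and \eqref{J_2} hold with $C_{J,p}=|B_1|^{-\frac{p-1}{p}}$, and Theorem \ref{thm:L1ToLinfinitySmoothing0} applies to the corresponding convolution operator $-\Levy^{J}=I-J\ast$.
\end{remark}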
In contrast to the general singular L\'evy operators in this paper, which map $W^{2,p}(\R^N)$ to $L^p(\R^N)$, $0$-order operators $-\Operator=-\Levy^{J}$ are well-defined for mere $L^p(\R^N)$-functions:
$$
-\Levy^{J}:L^p(\R^N)\rightarrow L^p(\R^N) \qquad\text{for all $p\in[1,\infty]$.}
$$
We then define very weak solutions:
\begin{definition}[Very weak solution]\label{def:VeryWeak}
Assume $-\Operator=-\Levy^{J}$. We say that a nonnegative measurable function $u$ is a \emph{very weak solution} of \eqref{GPME}~if:
\begin{enumerate}[{\rm (i)}]
\item $u\in L^1(Q_T)\cap C([0,T]; L_{\textup{loc}}^1(\R^N))$ and $u^m\in L^1(Q_T)$.
\item For a.e. $0<\tau_1\leq\tau_2\leq T$, and all $\psi\in C_\textup{c}^\infty(\R^N\times[\tau_1,\tau_2])$,
\begin{equation*}
\begin{split}
&\int_{\tau_1}^{\tau_2}\int_{\R^N}\big(u\dell_t\psi-(-\Levy^{J})[u^m]\psi\big)\dd x\dd t=\int_{\R^N}u(x,\tau_2)\psi(x,\tau_2)\dd x-\int_{\R^N}u(x,\tau_1)\psi(x,\tau_1)\dd x.
\end{split}
\end{equation*}
\item $u(\cdot,0)=u_0$ a.e. in $\R^N$.
\end{enumerate}
\end{definition}
\begin{remark}
For general L\'evy operators, see \eqref{def:LevyOperators}, we need to put the operator on the test function instead.
\end{remark}
We collect some known a priori results for \eqref{GPME} which will be useful in the proofs, see e.g. Theorem 2.3 in \cite{DTEnJa17b}.
\begin{lemma}[Known a priori results]\label{lem:APriori0}
Assume $0\leq u_0\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, and $-\Operator=-\Levy^{J}$.
\begin{enumerate}[{\rm (a)}]
\item There exists a unique very weak solution $u$ of \eqref{GPME} with initial data $u_0$ such that
$$
0\leq u\in (L^1\cap L^\infty)(Q_T)\cap C([0,T]; L_\textup{loc}^1(\R^N)).
$$
\item Let $u,v$ be very weak solutions of \eqref{GPME} with initial data $u_0,v_0\in (L^1\cap L^\infty)(\R^N)$. Then:
\begin{enumerate}[{\rm (i)}]
\item \textup{(Comparison)} If $u_0\leq v_0$ a.e. in $\R^N$, then $u\leq v$ a.e. in $Q_T$.
\item \textup{($L^p$-decay)} $\|u(\cdot,\tau_2)\|_{L^p(\R^N)}\leq \|u(\cdot,\tau_1)\|_{L^p(\R^N)}$ for all $p\in[1,\infty]$ and a.e. $0\leq \tau_1\leq \tau_2\leq T$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{remark}\label{rem:APrioriCasem10}
If $u_0\in L^1(\R^N)$, then Lemma \ref{lem:APriori0}(b)(i)--(ii) hold also when $m=1$ by approximation, and then also for $u_0\in TV(\R^N)$.
\end{remark}
Again, we fix $\tau_*, T>0$ such that $0<\tau_*<T$, and let $\tau\in(\tau_*,T]$. We also consider the following sequence of approximations $\{u_{0,n}\}_{n\in\N}$ satisfying
\begin{equation*}\begin{cases}
0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)\text{ such that}\\
u_{0,n}\to u_0\text{ in $L^1(\R^N)$, and}\\
u_{0,n}\to u_0\text{ a.e. in $\R^N$ monotonically from below.}
\end{cases}
\end{equation*}
When we take $u_{0,n}$ as initial data in \eqref{GPME}, we denote the corresponding solutions by $u_n$, and they satisfy Lemmas \ref{lem:APriori0}, \ref{lem:ScalingNonlinearity} and Proposition \ref{prop:Monotonicity}.
\begin{proposition}\label{prop:ChoosingTestFunction2}
Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, $-\Operator=-\Levy^{J}$, and \eqref{J_1}. Let $0\leq v\in L^1(\R^N)$ and $0\leq \Theta\in C_\textup{b}^1([\tau_*,\tau])$. If $u_n$ is a very weak solution of \eqref{GPME} with initial data $u_{0,n}$, then, for a.e.
$\tau\in(\tau_*,T]$,
\begin{equation}\label{eq:TestFunctionInSpaceAndTime2}
\begin{split}
&\int_{\tau_*}^\tau\Theta(t)\int_{\R^N}(-\Levy^{J})[u_n^m(\cdot,t)](x)v(x)\dd x\dd t= \int_{\tau_*}^\tau\Theta'(t)\int_{\R^N}u_n(x,t)v(x)\dd x\dd t\\
&\quad+\Theta(\tau_*)\int_{\R^N}u_n(x,\tau_*)v(x)\dd x-\Theta(\tau)\int_{\R^N}u_n(x,\tau)v(x)\dd x.
\end{split}
\end{equation}
\end{proposition}
\begin{corollary}[Limit estimate 3]\label{cor:LimitEstimate3}
Under the assumptions of Proposition \ref{prop:ChoosingTestFunction2}, let
$$
v_R(x):=\frac{\mathbf{1}_{B(x_0,R)}(x)}{|B(x_0,R)|}\qquad \text{with $R>0$}
$$
approximate $\delta_{x_0}$``$=\mathbb{G}_{I}^{x_0}$'' and choose $\Theta\equiv1$. Then
\begin{equation*}
\int_{\tau_*}^\tau(-\Levy^{J})[u_n^m(\cdot,t)](x_0)\dd t=u_n(x_0,\tau_*)-u_n(x_0,\tau)
\end{equation*}
for all Lebesgue points $x_0\in \R^N$.
\end{corollary}
\begin{proof}
Simply apply the Lebesgue differentiation theorem as $R\to0^+$ in \eqref{eq:TestFunctionInSpaceAndTime2}.
\end{proof}
\begin{theorem}[Fundamental upper bound]\label{thm:FundamentalOpRe0}
Assume $0\leq u_{0,n}\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, $-\Operator=-\Levy^{J}$, and \eqref{J_1}. If $u_n$ is a very weak solution of \eqref{GPME} with initial data $u_{0,n}$, then, for a.e. $\tau_*>0$ and all Lebesgue points $x_0\in \R^N$,
\begin{equation*}
\begin{split}
u_n^m(x_0,\tau_*)\leq C(m)\bigg( \frac{u_n(x_0,\tau_*)}{\tau_*}+\frac{1}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t\bigg),
\end{split}
\end{equation*}
where $J^{x_0}(x)=J(x-x_0)$ and $C(m)=2^{\frac{m}{m-1}}$.
\end{theorem}
\begin{remark}
This is a completely new result, but we see that $J^{x_0}$ plays the role of $\mathbb{G}_{-\Operator}^{x_0}$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:FundamentalOpRe0}]
We begin the proof by noting the following consequence of Proposition \ref{prop:Monotonicity}: For a.e. $t\in[\tau_*,\tau]$ and all Lebesgue points $x_0\in\R^N$,
\begin{equation}\label{eq:Monotonicity0}
\tau_*^{\frac{m}{m-1}}u_n^m(x_0,\tau_*)\leq t^{\frac{m}{m-1}}u_n^m(x_0,t)\leq \tau^{\frac{m}{m-1}}u_n^m(x_0,\tau).
\end{equation}
We rearrange the result in Corollary \ref{cor:LimitEstimate3}:
\begin{equation*}
\begin{split}
\int_{\tau_*}^\tau u_n^m(x_0,t)\dd t&=u_n(x_0,\tau_*)-u_n(x_0,\tau)+\int_{\tau_*}^\tau\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t\\
&\leq u_n(x_0,\tau_*)+\int_{\tau_*}^\tau\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t.
\end{split}
\end{equation*}
Arguing by time-monotonicity \eqref{eq:Monotonicity0}, as in the proof of Theorem \ref{thm:FundamentalOpRe}(a), leads to
\begin{equation*}
\begin{split}
u_n^m(x_0,\tau_*)\leq \bigg(\frac{\tau}{\tau_*}\bigg)^{\frac{m}{m-1}}\frac{1}{\tau-\tau_*}\bigg(u_n(x_0,\tau_*)+\int_{\tau_*}^\tau\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t\bigg).
\end{split}
\end{equation*}
Choose $\tau=2\tau_*$ to obtain, for all $\tau_*>0$,
\begin{equation*}
\begin{split}
u_n^m(x_0,\tau_*)\leq 2^{\frac{m}{m-1}}\bigg( \frac{u_n(x_0,\tau_*)}{\tau_*}+\frac{1}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t\bigg).
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\begin{proposition}[Smoothing effects]\label{prop:preCollectingResults2}
Assume $q=p/(p-1)\in[1,\infty)$ and $r\in(1,m]$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe0} and \eqref{J_2}, we have that:
\begin{enumerate}[{\rm (a)}]
\item \textup{($L^r$--$L^\infty$-smoothing)} For a.e.
$\tau_*>0$, \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}&\leq\Big(\frac{q(m-1)^{\frac{2m-1}{m-1}}}{r-1}\Big)^{\frac{1}{m}}\Big(\frac{C(m)}{(m-1)}\Big)^{\frac{1}{m-1}}\tau_*^{-\frac{1}{m-1}}\\ &\quad+\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\|u_n(\cdot,\tau_*)\|_{L^r(\R^N)} \end{split} \end{equation*} where $C(m)=2^{\frac{m}{m-1}}$. \item \textup{($L^1$--$L^\infty$-smoothing)} For a.e. $\tau_*>0$, \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)} &\leq (mqC(m)^{\frac{m}{m-1}})^{\frac{1}{m}}\tau_*^{-\frac{1}{m-1}}+\Big(\frac{mC(m)}{m-1}C_{J,p}\Big)^q\|u_n(\cdot,\tau_*)\|_{L^{1}(\R^N)}. \end{split} \end{equation*} \end{enumerate} \end{proposition} \begin{proof} By Theorem \ref{thm:FundamentalOpRe0}, $$ u_n^m(x_0,\tau_*)\leq \frac{C(m)}{\tau_*}u_n(x_0,\tau_*)+\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^m(x,t)J^{x_0}(x)\dd x\dd t=:I+II. $$ \smallskip \noindent(a) We use Lemma \ref{lem:APriori0}(b)(ii) with $p=\infty$ to get $$ II\leq \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^{m-r}\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^r(x,t)J^{x_0}(x)\dd x\dd t. $$ By the Young inequality \eqref{eq:Young} with $\vartheta=\frac{m}{r}$ and $\vartheta=m$, we estimate $$ II\leq \frac{m-r}{m}\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^m+\frac{r}{m}\bigg(\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^r(x,t)J^{x_0}(x)\dd x\dd t\bigg)^{\frac{m}{r}} $$ and $$ I\leq \frac{1}{m}u_n^m(x_0,\tau_*)+\frac{m-1}{m}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}. $$ Collecting the terms yields \begin{equation*} \begin{split} u_n^m(x_0,\tau_*)&\leq \frac{m}{m-1}\frac{m-r}{m}\|u_n(\cdot,\tau_*)\|_{L^\infty}^m+\frac{m}{m-1}\frac{m-1}{m}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}\\ &\quad+\frac{m}{m-1}\frac{r}{m}\bigg(\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^r(x,t)J^{x_0}(x)\dd x\dd t\bigg)^{\frac{m}{r}}\\ &= \frac{m-r}{m-1}\|u_n(\cdot,\tau_*)\|_{L^\infty}^m+\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}\\ &\quad+\frac{r}{m-1}\bigg(\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^r(x,t)J^{x_0}(x)\dd x\dd t\bigg)^{\frac{m}{r}}. \end{split} \end{equation*} Since $$ \frac{m-r}{m-1}=\frac{m-1+1-r}{m-1}=1-\frac{r-1}{m-1}, $$ we can only absorb the $L^\infty$-norm on the left-hand side when $r>1$. Indeed, \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^m&\leq \frac{m-1}{r-1}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}\\ &\quad+\frac{r}{r-1}\esssup_{x_0\in\R^N}\bigg(\frac{C(m)}{\tau_*}\int_{\tau_*}^{2\tau_*}\int_{\R^N}u_n^r(x,t)J^{x_0}(x)\dd x\dd t\bigg)^{\frac{m}{r}}, \end{split} \end{equation*} and since $\|J^{x_0}\|_{L^p(\R^N)}=\|J\|_{L^p(\R^N)}$, we obtain by the H\"older inequality and Lemma \ref{lem:APriori0}(b)(ii) with $p=q$ that \begin{equation*} \begin{split} \|u_n(\cdot,\tau_*)\|_{L^\infty}^m&\leq \frac{m-1}{r-1}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}+\frac{rC(m)^{\frac{m}{r}}}{r-1}\|J\|_{L^p}^{\frac{m}{r}}\|u_n^r(\cdot,\tau_*)\|_{L^{q}}^{\frac{m}{r}}\\ &\leq \frac{m-1}{r-1}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}+\frac{rC(m)^{\frac{m}{r}}}{r-1}\|J\|_{L^p}^{\frac{m}{r}}\|u_n(\cdot,\tau_*)\|_{L^{\infty}}^{\frac{m(q-1)}{q}}\|u_n(\cdot,\tau_*)\|_{L^{r}}^{\frac{m}{q}}. 
\end{split}
\end{equation*}
Apply the Young inequality \eqref{eq:Young} with $\vartheta=q$ to obtain
\begin{equation*}
\begin{split}
\|u_n(\cdot,\tau_*)\|_{L^\infty}^m&\leq\frac{m-1}{r-1}\Big(\frac{C(m)}{\tau_*}\Big)^{\frac{m}{m-1}}+\frac{q-1}{q}\|u_n(\cdot,\tau_*)\|_{L^{\infty}}^{m}\\
&\quad+\frac{r^qC(m)^{\frac{mq}{r}}}{q(r-1)^q}\|J\|_{L^p}^{\frac{mq}{r}}\|u_n(\cdot,\tau_*)\|_{L^{r}}^{m},
\end{split}
\end{equation*}
or,
\begin{equation}\label{eq:JLrLinftyEstimate}
\begin{split}
\|u_n(\cdot,\tau_*)\|_{L^\infty(\R^N)}^m &\leq\frac{q(m-1)^{1+\frac{m}{m-1}}}{r-1}\Big(\frac{C(m)}{(m-1)\tau_*}\Big)^{\frac{m}{m-1}}\\
&\quad+\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}\|J\|_{L^p(\R^N)}^{\frac{m}{r}}\Big)^q\|u_n(\cdot,\tau_*)\|_{L^{r}(\R^N)}^{m}.
\end{split}
\end{equation}
Finally, since $m>1$, $x\mapsto x^{\frac{1}{m}}$ is concave and sub-additive on $[0,\infty)$, and we conclude.
\smallskip
\noindent(b) The H\"older inequality yields
$$
\|u_n(\cdot,\tau_*)\|_{L^m}^m\leq \|u_n(\cdot,\tau_*)\|_{L^\infty}^{m-1}\|u_n(\cdot,\tau_*)\|_{L^1}.
$$
Inserting this estimate into \eqref{eq:JLrLinftyEstimate} with $r=m$, and then applying the Young inequality \eqref{eq:Young} with $\vartheta=m$, gives
\begin{equation*}
\begin{split}
\|u_n(\cdot,\tau_*)\|_{L^\infty}^m &\leq\frac{q(m-1)^{1+\frac{m}{m-1}}}{m-1}\Big(\frac{C(m)}{(m-1)\tau_*}\Big)^{\frac{m}{m-1}}\\
&\quad+\frac{m-1}{m}\|u_n(\cdot,\tau_*)\|_{L^{\infty}}^{m}+\frac{1}{m}\Big(\Big(\frac{mC(m)^{\frac{m}{m}}}{m-1}\|J\|_{L^p}^{\frac{m}{m}}\Big)^q\|u_n(\cdot,\tau_*)\|_{L^{1}}\Big)^{m},
\end{split}
\end{equation*}
or
\begin{equation*}
\begin{split}
\|u_n(\cdot,\tau_*)\|_{L^\infty}^m &\leq m\frac{q(m-1)^{1+\frac{m}{m-1}}}{m-1}\Big(\frac{C(m)}{(m-1)\tau_*}\Big)^{\frac{m}{m-1}}+\Big(\Big(\frac{mC(m)}{m-1}\|J\|_{L^p}\Big)^q\|u_n(\cdot,\tau_*)\|_{L^{1}}\Big)^{m},
\end{split}
\end{equation*}
which concludes the proof since $x\mapsto x^{\frac{1}{m}}$ is sub-additive.
\end{proof}
The above results are not invariant under time-scaling (Lemma \ref{lem:ScalingNonlinearity}). We thus rewrite them in a proper form:
\begin{proposition}[Scaling-invariant smoothing effects]\label{prop:preCollectingResultsScaled2}
Assume $q=p/(p-1)\in[1,\infty)$ and $r\in(1,m]$. Under the assumptions of Theorem \ref{thm:FundamentalOpRe0} and \eqref{J_2}, we have that:
\begin{enumerate}[{\rm (a)}]
\item \textup{($L^r$--$L^\infty$-smoothing)}
\begin{equation*}
\|u_n(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2\Big(\frac{q(m-1)^{\frac{2m-1}{m-1}}}{r-1}\Big)^{\frac{1}{m}}\Big(\frac{C(m)}{(m-1)}\Big)^{\frac{1}{m-1}}t^{-\frac{1}{m-1}} \qquad\qquad&\text{if $0<t\leq t_{0,n}$ a.e.,}\\ 2\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\|u_{0,n}\|_{L^r(\R^N)} \qquad\qquad&\text{if $t>t_{0,n}$ a.e.,} \end{cases}
\end{equation*}
where
$$
t_{0,n}:=C(m)^{\frac{r-q(m-1)}{r}}C_{J,p}^{-\frac{q(m-1)}{r}}\Big(\frac{q(r-1)^{q-1}(m-1)}{r^q}\Big)^{\frac{m-1}{m}}\|u_{0,n}\|_{L^r(\R^N)}^{-(m-1)}.
$$
\item \textup{($L^1$--$L^\infty$-smoothing)}
\begin{equation*}
\|u_n(\cdot,t)\|_{L^\infty(\R^N)}\leq\begin{cases} 2(mqC(m)^{\frac{m}{m-1}})^{\frac{1}{m}}t^{-\frac{1}{m-1}} \qquad\qquad&\text{if $0<t\leq t_{0,n}$ a.e.,}\\ 2\Big(\frac{mC(m)}{m-1}C_{J,p}\Big)^q\|u_{0,n}\|_{L^1(\R^N)} \qquad\qquad&\text{if $t>t_{0,n}$ a.e.,} \end{cases}
\end{equation*}
where
$$
t_{0,n}:=(mq)^{\frac{m-1}{m}}\Big(\frac{m}{m-1}C_{J,p}\Big)^{-q(m-1)}C(m)^{1-q(m-1)}\|u_{0,n}\|_{L^1(\R^N)}^{-(m-1)}.
$$
\end{enumerate}
\end{proposition}
\begin{proof}
We only provide a proof for part (a) since (b) is similar.
Proposition \ref{prop:preCollectingResults2}(a) gives
\begin{equation*}
\begin{split}
\|u_n(\cdot,\tau_*)\|_{L^\infty}&\leq\Big(\frac{q(m-1)^{\frac{2m-1}{m-1}}C(m)^{\frac{m}{m-1}}}{r-1}\Big)^{\frac{1}{m}}\Big(\frac{1}{(m-1)\tau_*}\Big)^{\frac{1}{m-1}}+\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\|u_n(\cdot,\tau_*)\|_{L^r},
\end{split}
\end{equation*}
but this result does not respect the time-scaling (Lemma \ref{lem:ScalingNonlinearity}):
\begin{equation*}
\begin{split}
\Lambda^{\frac{1}{m-1}}\|u_n(\cdot,\Lambda\tau_*)\|_{L^\infty} &\leq \Big(\frac{q(m-1)^{\frac{2m-1}{m-1}}C(m)^{\frac{m}{m-1}}}{r-1}\Big)^{\frac{1}{m}}\Lambda^{\frac{1}{m-1}}\Big(\frac{1}{(m-1)\Lambda\tau_*}\Big)^{\frac{1}{m-1}}\\
&\quad+\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\Lambda^{\frac{1}{m-1}}\|u_n(\cdot,\Lambda\tau_*)\|_{L^r}.
\end{split}
\end{equation*}
By Lemma \ref{lem:APriori0}(b)(ii) with $p=r$, we can optimize by requiring that
\begin{equation*}
\begin{split}
&\Big(\frac{q(m-1)^{\frac{2m-1}{m-1}}C(m)^{\frac{m}{m-1}}}{r-1}\Big)^{\frac{1}{m}}\Big(\frac{1}{(m-1)\Lambda\tau_*}\Big)^{\frac{1}{m-1}}=\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\|u_{0,n}\|_{L^r},
\end{split}
\end{equation*}
or
\begin{equation*}
\begin{split}
\Lambda\tau_*=C(m)^{\frac{r-q(m-1)}{r}}C_{J,p}^{-\frac{q(m-1)}{r}}\Big(\frac{q(r-1)^{q-1}(m-1)}{r^q\|u_{0,n}\|_{L^r}^m}\Big)^{\frac{m-1}{m}}=:t_{0,n}.
\end{split}
\end{equation*}
We obtain that
$$
\|u_n(\cdot,t_{0,n})\|_{L^\infty}\leq 2\Big(\frac{rC(m)^{\frac{m}{r}}}{r-1}C_{J,p}^{\frac{m}{r}}\Big)^{\frac{q}{m}}\|u_{0,n}\|_{L^r}.
$$
To finish, we follow the proof of Proposition \ref{prop:preCollectingResultsScaled}.
\end{proof}
The proof of Theorem \ref{thm:L1ToLinfinitySmoothing0} follows as for Theorem \ref{thm:L1ToLinfinitySmoothing}, except that here we verify that the limit is a very weak solution in the sense of Definition \ref{def:VeryWeak}.
\section{Smoothing effects VS Gagliardo-Nirenberg-Sobolev inequalities}
\label{sec:SmoothingAndGNS}
In this section we investigate the connections between the validity of smoothing effects for solutions to diffusion equations and the validity of suitable functional inequalities of Gagliardo-Nirenberg-Sobolev (GNS) type, together with some limiting cases, and their dual counterparts, the Hardy-Littlewood-Sobolev (HLS) type inequalities. As already mentioned, it is well-known since the celebrated work of Nash \cite{Nas58} that the ultracontractive estimate for solutions of the heat equation, i.e. \eqref{GPME} with $m=1$, is equivalent to a special GNS inequality. There is an extensive literature on this by now classical topic, and theorems analogous to Theorem \ref{thm:LinearEquivalences} below can be found in analysis textbooks, e.g., \cite{LiLo01, S-Co02}. In the nonlinear setting much less is known; a first result in this direction was given in \cite{BoGr05b}, where, adapting the Gross method to the nonlinear setting, logarithmic Sobolev (LS) inequalities (of Euclidean type) were shown to imply $L^1$--$L^\infty$-smoothing effects for porous medium-type equations (also on Riemannian manifolds). Indeed, LS inequalities are limiting cases of GNS inequalities; hence it is shown there how GNS inequalities imply smoothing effects. Later, the equivalence between GNS and smoothing effects was established in \cite{BoGrVa08, Gri10, GrMuPo13}, see also \cite{CoHa16}.
In the nonlinear case, the Nash method does not work, and the classical alternative is provided by the celebrated Moser iteration, which was first introduced for linear parabolic equations \cite{Mos64,Mos67} and then extended by various authors to the nonlinear setting, see \cite{HePi85,DaKe07,JuLi09,BoVa10,GrMuPu15,NgVa18,BoSi19,JiXi19,BoDoNaSi20,JiXi22,LiLi22}. Another classical possibility is the DeGiorgi method, which can be adapted to the nonlinear setting. It also shows how functional inequalities imply regularity properties of solutions, see for instance \cite{DBe93, DBGiVe12}. Once GNS inequalities imply smoothing, it is often possible to prove the converse implication, establishing equivalence, cf. Theorem \ref{thm:GNSEquivalentL1Linfty}.

In the pioneering paper \cite{BaCoLeS-C95}, see also \cite{S-Co02}, the equivalences of different Sobolev, GNS, Nash, LS, and Poincar\'e inequalities are established. We recall some of the precise results in Lemma \ref{lem:NashEquivalentGNS}. Roughly speaking, the idea is that all functional inequalities that can be true with a suitable quadratic form are equivalent: We will analyze mainly two classes that we call Sobolev or Poincar\'e, since they are equivalent respectively to $L^q$--$L^\infty$- or to $L^q$--$L^p$-smoothing effects for the associated linear equation, i.e., \eqref{GPME} with $m=1$. We add other equivalences and implications related to \eqref{GPME} with $m>1$, which is the main purpose of this section, see Figure \ref{fig:ImplicationInNonlinearCase} below. Let us also mention that a more direct proof of the equivalence between Nash and LS can be obtained by the methods of \cite{BoDoSc20} combined with the 4-norm inequality of \cite{BoGr05a}.

We want to emphasize that sometimes the nonlinear diffusion enjoys smoothing while the linear counterpart does not. The nonlinear smoothing must then be equivalent to a functional inequality that has to be weaker than any GNS (or any other functional inequality equivalent to Sobolev); otherwise it would imply smoothing in the linear case. We provided explicit examples of this phenomenon in Section \ref{sec:Boundedness0Order}. This allowed us to conclude that, while linear smoothing implies nonlinear smoothing, the converse is not true in general, see Theorem \ref{thm:OverviewBoundedness} and Remark \ref{rem.thm:OverviewBoundedness}. The crucial ingredient to prove the smoothing for the nonlinear equation (when the linear one does not smooth) is the Green function method, developed in the previous sections. So far, the panorama of implications does not include Green functions, only heat kernels. It can, however, be shown using the Legendre transform that Sobolev and HLS are equivalent, see Lemma \ref{lem:SAndHLS}. Dual norms indeed involve Green functions, and an upper bound on the Green function implies the HLS, hence a Sobolev inequality. The Green function method thus replaces the use of Sobolev inequalities and iterations \`a la Moser or \`a la DeGiorgi with simpler integral estimates, and provides a solid alternative to those methods. Moreover, having estimates on the Green function at our disposal seems to be more versatile, in the sense that the method surprisingly works even when the linear counterpart does not smooth. The latter must indicate that the Green function estimate cannot always provide strong functional inequalities which would imply smoothing in the linear case.
In many examples, the Green function estimates necessary for the method to work are derived from heat kernel bounds, or via the Fourier transform, see Section \ref{sec:GreenAndHeat}. As we shall explain below, in the nonlocal case, GNS is not sufficient to prove smoothing effects via Moser iterations. One also needs the Stroock-Varopoulos inequality \cite{Str84,Var85}, which replaces the Sobolev chain rule in the local case.

One of the merits of this paper is that having Green function estimates---and then also boundedness estimates---allows one to prove GNS inequalities (with the quadratic form already adjusted to the operator) which then have many other applications. In the nonlocal case, proving GNS inequalities is not an easy task for general quadratic forms, see \cite{DyFr12,BeFr14,ChKa20}. We provide here a PDE proof of many functional inequalities that may be of independent interest. We also prove the validity of some weak GNS inequalities, as a consequence of the nonlinear smoothing. See Section \ref{sec:GreenAndHeat} for a rich list of examples of operators included in our theory.

The question of optimal or explicit constants in fractional GNS-type inequalities is mostly unexplored. In the local case, the sharp classical Nash inequality has been proven in \cite{CaLo93, BoDoSc20} by different methods. Sharp GNS inequalities have been proven in \cite{DPDo02} by entropy methods and nonlinear flows, and by mass transportation techniques in \cite{C-ENaVi04}. Quantitative and constructive stability for GNS has recently been proven in \cite{BoDoNaSi20}, to which we refer the reader for thorough historical and bibliographical information, also on Sobolev and related inequalities. We refrain from a thorough discussion here. As for functional inequalities related to nonlocal operators or fractional Sobolev spaces, to the best of our knowledge only a few contributions are present in the literature: optimal fractional Sobolev inequalities are discussed in \cite{CotTa04}, while optimal fractional GNS inequalities in \cite{BeFr14}. Fractional Hardy inequalities are studied in \cite{DyFr12}. Improved Sobolev inequalities have been studied in \cite{PaPi14} by means of concentration-compactness methods. We apologize in advance in case we are missing important contributions in these directions, but in this paper we do not address the question of optimal inequalities; we just establish their validity with a (computable) constant.

Throughout this section, $C>0$ is a constant (that might change) which depends on $N$, $\alpha$, $m$, and the underlying Green function, but not on any norm of $u$ or $u_0$. We will use the notation
$$
\mathcal{Q}_{-\Operator}[f,g]:=\int f(-\Operator)[g]\qquad\text{and}\qquad \mathcal{Q}_{-\Operator}[f]:=\mathcal{Q}_{-\Operator}[f,f],
$$
and we will write, for $q>0$,
$$
\bigg(\int_{\R^N}|f(x)|^{q}\dd x\bigg)^{\frac{1}{q}}=\|f\|_{L^{q}(\R^N)}
$$
even though it is not a proper norm when $q\in (0,1)$.
\subsection{The well-known linear case (\texorpdfstring{$m=1$}{m equal 1})}
\label{sec:TheWell-KnownLinearCase}
We state and prove the following form of the Nash-type theorem \cite{Nas58}, adapted to our setting.
\begin{theorem}[Linear equivalences]\label{thm:LinearEquivalences}
Assume $\alpha\in (0,2]$ and $2^*=2N/(N-\alpha)$. The following statements are equivalent:
\begin{enumerate}[{\rm (a)}]
\item \textup{($L^1$--$L^\infty$-smoothing)} Let $u$ be a solution of \eqref{GPME} with $m=1$ and initial data $u_0\in L^1(\R^N)$, then
$$
\|u(\cdot, t)\|_{L^\infty(\R^N)}\leq Ct^{-\frac{N}{\alpha}}\|u_0\|_{L^1(\R^N)}.
$$
\item \textup{(Sobolev)} For all $f\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|f\|_{L^{2^*}(\R^N)}\leq C\mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}}.
$$
\item \textup{(Nash)} For all $f\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|f\|_{L^{2}(\R^N)}\leq C\|f\|_{L^{1}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\qquad\text{where}\qquad \vartheta:=\frac{1}{2}\frac{2^*-2}{2^*-1}.
$$
\item \textup{(On-diagonal heat kernel bounds)} The heat kernel $\mathbb{H}_{-\Operator}^{x_0}$ satisfies
$$
0\leq \mathbb{H}_{-\Operator}^{x_0}(x,t)\leq C t^{-\frac{N}{\alpha}}.
$$
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem:LinearEquivalences}
The case $\alpha=2$ is well-known, and we refer the reader to e.g. Lemma 2.1.2 and Theorem 2.4.6 in \cite{Dav89} (see also \cite{LiLo01}). In the context of L\'evy operators $\Operator=\Levy^\mu$ with an absolutely continuous measure $\mu$, it is worth mentioning that as long as
$$
\frac{\dd \mu}{\dd z}(z)\gtrsim \frac{1}{|z|^{N+\alpha}}
$$
in \eqref{muas}, we are in the case $\alpha\in(0,2)$, cf. \cite[Proposition 2.6]{GrHuLa14}. One can also replace $N/\alpha$ by $\sum_{i=1}^N(\alpha_i)^{-1}$ as in the Sobolev inequality corresponding to the sum of one-dimensional fractional Laplacians \cite[Theorem 2.4]{ChKa20}. Examples of (some of the above) equivalences in the nonpower case can be found in e.g. Proposition 3 and Lemma 5 in \cite{KnSc12}. We also refer to \cite{BrDP18}, which explores various equivalences between Nash inequalities and $L^q$--$L^p$-smoothing estimates for L\'evy operators (see also \cite{CaKuSt87}).
\end{remark}
\begin{figure}[h!]
\centering
\begin{tikzpicture} \color{black} \node[draw, minimum width=2cm, minimum height=1.2cm, ] (l1) at (0,0){$\begin{array}{ll}\mathbb{G}_{-\Operator}^{x_0}(x)\\\lesssim |x-x_0|^{-(N-\alpha)}\end{array}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=2cm of l1 ] (r1){$\text{Sobolev inequality}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, below=1cm of r1 ] (r2) {$L^1\text{--}L^\infty\text{-smoothing}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, right=2cm of r2 ] (rr2) {$\text{Nash inequality}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, below=1cm of r2 ] (r3) {$\begin{array}{cc}\text{On-diagonal}\\\text{heat kernel bounds}\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, left=2.1cm of r3 ] (l2) {$\begin{array}{cc}\text{Off-diagonal}\\\text{heat kernel bounds}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (l1.north east)!0.5!(l1.east) $ ) -- ( $ (r1.north west)!0.5!(r1.west) $ ) node[midway,above]{$\text{Via HLS}$}; \draw[-{Implies},double,line width=0.7pt] (r1.east) -| (rr2.north) node[pos=0.25,above]{$L^p\text{-interp.}$}; \draw[-{Implies},double,line width=0.7pt] (r2.north) -- (r1.south) node[midway,left]{$\text{Via HLS}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r2.south west)!0.5!(r2.south) $ ) -- ( $ (r3.north west)!0.56!(r3.north) $ ) node[midway,left]{$\mathbb{H}_{-\Operator}^{x_0}(\cdot,0)=\delta_{x_0}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r2.south east)!0.5!(r2.east) $ ) -- ( $ (rr2.south west)!0.5!(rr2.west) $ ) node[midway,above]{$\text{Energy est.}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (rr2.north west)!0.5!(rr2.west) $ ) -- ( $ (r2.north east)!0.5!(r2.east) $ ) node[midway,above]{$\text{Dual }L^\infty$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r3.north east)!0.56!(r3.north) $ ) -- ( $ (r2.south east)!0.5!(r2.south) $ )
node[midway,right]{$\begin{array}{cc}\text{Representation}\\\text{formula}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] (l2.east) -- (r3.west) node[midway,above]{$\text{Definition}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (l2.north east)!1.1!(l2.north) $ ) -- ( $ (l1.south east)!1.389!(l1.south) $ ) node[midway,left]{$\text{Integration}$}; \normalcolor \end{tikzpicture} \caption{Implications in the linear case. Note that off-diagonal heat kernel bounds provide the strongest information unless we know how to deduce those bounds from the on-diagonal ones (like in \cite[Section 3]{Dav89} and \cite[Theorem 3.25]{CaKuSt87}). In the latter case, any piece of information is equivalent.} \label{fig:ImplicationInLinearCase} \end{figure} The proof is divided into several independent results. By interpolation in $L^p$, we immediately have: \begin{lemma}[Sobolev implies Nash]\label{lem:SobolevImplications} Assume $\alpha\in(0,2]$ and $2^*=2N/(N-\alpha)$. If $$ \|f\|_{L^{2^*}(\R^N)}\leq C\mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}}, $$ then $$ \|f\|_{L^{2}(\R^N)}\leq C\|f\|_{L^{1}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\qquad\text{where}\qquad \vartheta:=\frac{1}{2}\frac{2^*-2}{2^*-1}. $$ \end{lemma} \begin{lemma}[$L^1$--$L^\infty$-smoothing VS Nash inequality]\label{lem:NashL1Linfty} Under the assumptions of Theorem \ref{thm:LinearEquivalences}, the following are equivalent: \begin{enumerate}[{\rm (a)}] \item \textup{($L^1$--$L^\infty$-smoothing)} $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Ct^{-\frac{N}{\alpha}}\|u_0\|_{L^1(\R^N)}. $$ \item \textup{(Nash)} $$ \|f\|_{L^{2}(\R^N)}\leq C\|f\|_{L^{1}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\qquad\text{where}\qquad \vartheta:=\frac{1}{2}\frac{2^*-2}{2^*-1}. $$ \end{enumerate} \end{lemma} \begin{proof} Follows by Theorem 8.16 in \cite{LiLo01}\footnote{We warn the reader about a small typo in the remark after Theorem 8.16 in \cite{LiLo01}: $(f,Lf)=\int fLf$ is indeed $\|\nabla f\|_{L^2}^2$ when $L=-\Delta$.} (see also Section 4.1 in \cite{S-Co02}). There, the Nash inequality is equivalent with an $L^1$--$L^\infty$-smoothing effect. An intermediate step is the $L^1$--$L^2$-smoothing effect, which can be extended to $L^\infty$ by the Nash duality trick: \begin{equation*} \begin{split} \|u(t)\|_{L^\infty}&=\sup_{\|\phi\|_{L^1}=1}\bigg|\int u(t)\phi\bigg|=\sup_{\|\phi\|_{L^1}=1}\bigg|\int S_t[u_0]\phi\bigg|=\sup_{\|\phi\|_{L^1}=1}\bigg|\int S_{\frac{t}{2}}[S_{\frac{t}{2}}[u_0]]\phi\bigg|\\ &=\sup_{\|\phi\|_{L^1}=1}\bigg|\int S_{\frac{t}{2}}[u_0]S_{\frac{t}{2}}[\phi]\bigg|\leq \sup_{\|\phi\|_{L^1}=1}\|S_{\frac{t}{2}}[u_0]\|_{L^2}\|S_{\frac{t}{2}}[\phi]\|_{L^2}.\\ \end{split} \end{equation*} Here we used that the semigroup $S_t$ is self-adjoint, and the Cauchy-Schwarz inequality. \end{proof} \begin{remark}\label{rem:NashAndGNSAndSmoothing} To obtain the $L^1$--$L^\infty$-smoothing in the nonlinear case ($m>1$), the Nash inequality is usually replaced by the Gagliardo-Nirenberg-Sobolev inequality: $$ \|f\|_{L^{p}(\R^N)}\leq C\|f\|_{L^{q}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}, $$ where $$ 2\leq p<2^*,\qquad 1\leq q<p,\qquad \vartheta:=\frac{q}{p}\frac{2^*-p}{2^*-q}. $$ Then the Moser iteration can be used to obtain the desired result. In the next section, we show that we indeed need less than the above inequality to perform all the necessary steps. 
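Note also that the formal choice $p=2$ and $q=1$ in the above inequality gives $\vartheta=\frac{1}{2}\frac{2^*-2}{2^*-1}$, i.e., precisely the Nash inequality of Theorem \ref{thm:LinearEquivalences}(c).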
\end{remark}
\begin{lemma}[$L^1$--$L^\infty$-smoothing and heat kernel bounds]\label{lem:HeatL1Linfty}
Under the assumptions of Theorem \ref{thm:LinearEquivalences}, the following are equivalent:
\begin{enumerate}[{\rm (a)}]
\item \textup{($L^1$--$L^\infty$-smoothing)}
$$
\|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Ct^{-\frac{N}{\alpha}}\|u_0\|_{L^1(\R^N)}.
$$
\item \textup{(On-diagonal heat kernel bounds)}
$$
0\leq \mathbb{H}_{-\Operator}^{x_0}(x,t)\leq C t^{-\frac{N}{\alpha}}.
$$
\end{enumerate}
\end{lemma}
\begin{proof}
(a)$\Longrightarrow$(b). We apply Theorem 2.6.20 in \cite{Jac01}. Formally, $\mathbb{H}_{-\Operator}^{x_0}$ solves \eqref{GPME} with $m=1$ and $\delta_{x_0}$ as initial data. Hence, by an approximation argument and the lower semicontinuity of the $L^\infty$-norm, we arrive at part (b).
\medskip
\noindent(b)$\Longrightarrow$(a). Since $C_\textup{c}^\infty(\R^N)$-initial data produce solutions that satisfy the representation formula $u(x,t)=\mathbb{H}_{-\Operator}^{x_0}(\cdot,t)\ast u_0(x)$ and
$$
|u(x,t)|\leq \mathbb{H}_{-\Operator}^{x_0}(\cdot,t)\ast |u_0|(x)\leq C t^{-\frac{N}{\alpha}}\|u_0\|_{L^1(\R^N)},
$$
we can again do an approximation argument to show (a).
\end{proof}
It remains to prove that the Nash inequality implies the Sobolev inequality. For $C_\textup{c}^\infty$-functions, such a result can be found in \cite{BaCoLeS-C95}. However, for semigroups in $L^2$, we will consider an indirect path through the inverse of the square root of the operator. Legendre duality allows us to establish equivalences between Sobolev and Hardy-Littlewood-Sobolev (HLS) inequalities:
\begin{lemma}[Sobolev VS HLS]\label{lem:SAndHLS}
Assume $\alpha\in(0,2]$, $2^*=2N/(N-\alpha)$, and $(2^*)':=2^*/(2^*-1)=2N/(N+\alpha)$. The following inequalities are equivalent:
\begin{enumerate}[{\rm (a)}]
\item \textup{(Sobolev)} For all $f\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|f\|_{L^{2^*}(\R^N)}\leq C\|(-\Operator)^{\frac{1}{2}}[f]\|_{L^2(\R^N)}.
$$
\item \textup{(HLS)} For all $g\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|(-\Operator)^{-\frac{1}{2}}[g]\|_{L^2(\R^N)}\leq C\|g\|_{L^{(2^*)'}(\R^N)}.
$$
\end{enumerate}
\end{lemma}
The proof is based on the Legendre transform, see e.g. Proposition 7.4 in \cite{BoSiVa15}, which we learned from Lieb \cite{Lie83}. Lemma \ref{lem:NashL1Linfty} already established that the Nash inequality implies the $L^1$--$L^\infty$-smoothing. The next lemma then finishes the proof of Theorem \ref{thm:LinearEquivalences}.
\begin{lemma}[$L^1$--$L^\infty$-smoothing VS HLS]
Assume $\alpha\in (0,2]$ and $(2^*)':=2^*/(2^*-1)=2N/(N+\alpha)$. Then the following are equivalent:
\begin{enumerate}[{\rm (a)}]
\item \textup{($L^1$--$L^\infty$-smoothing)} Let $u$ be a solution of \eqref{GPME} with $m=1$ and initial data $u_0\in L^1(\R^N)$, then
$$
\|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Ct^{-\frac{N}{\alpha}}\|u_0\|_{L^1(\R^N)}.
$$
\item \textup{(HLS)} For all $g\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|(-\Operator)^{-\frac{1}{2}}[g]\|_{L^2(\R^N)}\leq C\|g\|_{L^{(2^*)'}(\R^N)}.
$$
\end{enumerate}
\end{lemma}
\begin{proof}
(a)$\Longrightarrow$(b). We apply Theorem II.2.7 in \cite{VaSa-CoCo92} with $\zeta=\gamma=1$ and $p=(2^*)'$.
\medskip
\noindent(b)$\Longrightarrow$(a). Follows by Lemmas \ref{lem:SAndHLS}, \ref{lem:SobolevImplications}, and \ref{lem:NashL1Linfty}.
\end{proof}
Finally, we relate Green function estimates to all of the above equivalences.
\begin{lemma}[Green VS HLS]\label{lem:GreenFunctionGivesSobolev}
Assume \eqref{Gas}, $\alpha\in (0,2]$, and $(2^*)'=2^*/(2^*-1)=2N/(N+\alpha)$. If
$$
0\leq \mathbb{G}_{-\Operator}^{x_0}(x)\leq C|x-x_0|^{-(N-\alpha)},
$$
then, for all $g\in L^1(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$,
$$
\|(-\Operator)^{-\frac{1}{2}}[g]\|_{L^2(\R^N)}\leq C\|g\|_{L^{(2^*)'}(\R^N)}.
$$
\end{lemma}
\begin{remark}
The above assumption on the Green function is stronger than \eqref{G_1}. We refer the reader to \cite{KaKiLe21} for a discussion on the validity of such an upper bound.
\end{remark}
\begin{proof}[Proof of Lemma \ref{lem:GreenFunctionGivesSobolev}]
This is essentially Theorem 7.5 in \cite{BoSiVa15}, which we restate here for completeness. A direct calculation gives
\begin{equation*}
\begin{split}
&\|(-\Operator)^{-\frac{1}{2}}[f]\|_{L^2(\R^N)}^2=\int_{\R^N}(-\Operator)^{-\frac{1}{2}}[f](-\Operator)^{-\frac{1}{2}}[f]\dd x=\int_{\R^N}f(-\Operator)^{-1}[f]\dd x\\
&=\int_{\R^N}f(x)\bigg(\int_{\R^N}\mathbb{G}_{-\Operator}^{x}(y)f(y)\dd y\bigg)\dd x\leq C\int_{\R^N}f(x)\bigg(\int_{\R^N}|x-y|^{-(N-\alpha)}f(y)\dd y\bigg)\dd x\\
&=C\|(-\Delta)^{-\frac{\alpha}{4}}[f]\|_{L^2(\R^N)}^2.
\end{split}
\end{equation*}
The classical Hardy-Littlewood-Sobolev inequality
$$
\|(-\Delta)^{-\frac{\alpha}{4}}[f]\|_{L^2(\R^N)}\leq C\|f\|_{L^{(2^*)'}(\R^N)}
$$
then provides the result.
\end{proof}
\subsection{The nonlinear case (\texorpdfstring{$m>1$}{m greater 1})}
\label{sec:EquivalencesGNSHomogeneous}
While in the linear case the Nash method works perfectly, in the nonlinear case it simply does not work since the ``nonlinear heat semigroup'' is not symmetric. On the other hand, the Moser iteration, which provides an alternative proof in the linear case, can be adapted to work also in the nonlinear case, and it shows how to prove smoothing effects from GNS inequalities in the nonlinear setting. However, GNS inequalities are not sufficient to perform the Moser iteration in the nonlocal setting; another ingredient is needed: the so-called Stroock-Varopoulos inequalities. Let us briefly explain how this works.
\begin{figure}[h!]
\centering
\begin{tikzpicture} \color{black} \node[draw, minimum width=2cm, minimum height=1.2cm, ] (l1) at (0,0){$\begin{array}{cc}\text{Hom. nonlinear}\\L^1\text{--}L^\infty\text{-smoothing}\end{array}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=2cm of l1 ] (r1){$\text{GNS inequality}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, right=2cm of r1 ] (rr1){$\text{Sobolev inequality}$}; \node[draw, minimum width=2cm, minimum height=1.2cm, below=1cm of rr1 ] (rr2){$\begin{array}{ll}\mathbb{G}_{-\Operator}^{x_0}(x)\\\lesssim |x-x_0|^{-(N-\alpha)}\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, below=1cm of rr2 ] (rr3) {$\begin{array}{cc}\text{Off-diagonal}\\\text{heat kernel bounds}\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, below=1cm of r1 ] (r2) {$\begin{array}{cc}\text{Hom. linear}\\L^1\text{--}L^\infty\text{-smoothing}\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, below=1cm of r2 ] (r3) {$\begin{array}{cc}\text{On-diagonal}\\\text{heat kernel bounds}\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, left=2cm of r3 ] (l3) {$\begin{array}{cc}\eqref{G_3}\\\mathbb{G}_{I-\Operator}^{x_0}\in L^p(\R^N)\end{array}$}; \node [draw, minimum width=2cm, minimum height=1.2cm, above=1cm of l3 ] (l2) {$\begin{array}{cc}\text{Nonhom.
nonlinear}\\L^1\text{--}L^\infty\text{-smoothing}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (l1.north east)!0.5!(l1.east) $ ) -- ( $ (r1.north west)!0.5!(r1.west) $ ) node[midway,above]{$\text{\cite{BaCoLeS-C95}}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r1.south west)!0.5!(r1.west) $ ) -- ( $ (l1.south east)!0.5!(l1.east) $ ) node[midway,below]{$\begin{array}{cc}\text{S.-V.}\\\text{and Moser}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r1.north east)!0.5!(r1.east) $ ) -- ( $ (rr1.north west)!0.5!(rr1.west) $ ) node[midway,above]{$\text{\cite{BaCoLeS-C95}}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (rr1.south west)!0.5!(rr1.west) $ ) -- ( $ (r1.south east)!0.5!(r1.east) $ ) node[midway,below]{$\begin{array}{cc}L^p\text{-}\\\text{interp.}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] (rr2.north) -- (rr1.south) node[midway,right]{$\text{Via HLS}$}; \draw[-{Implies},double,line width=0.7pt] (rr3.north) -- (rr2.south) node[midway,right]{$\text{Integration}$}; \draw[-{Implies},double,line width=0.7pt] (rr3.west) -- (r3.east) node[midway,below]{$\text{Def.}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r1.south west)!0.355!(r1.south) $ ) -- ( $ (r2.north west)!0.5!(r2.north) $ ) node[midway,right]{$\text{Nash}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r2.north east)!0.5!(r2.north) $ ) -- ( $ (r1.south east)!0.355!(r1.south) $ ) node[midway,right]{$\text{\cite{BaCoLeS-C95}}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r2.south west)!0.545!(r2.south) $ ) -- ( $ (r3.north west)!0.56!(r3.north) $ ) node[midway,left]{$\mathbb{H}_{-\Operator}^{x_0}(\cdot,0)=\delta_{x_0}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (r3.north east)!0.515!(r3.north) $ ) -- ( $ (r2.south east)!0.5!(r2.south) $ ) node[midway,right]{$\begin{array}{cc}\text{Representation}\\\text{formula}\end{array}$}; \draw[-{Implies},double,line width=0.7pt] (r3.west) -- (l3.east) node[midway,below]{$\text{Integration}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (l3.north east)!1.0!(l3.north) $ ) -- ( $ (l2.south east)!0.88!(l2.south) $ ) node[midway,left]{$\text{Theorem \ref{thm:L1ToLinfinitySmoothing}}$}; \draw[-{Implies},double,line width=0.7pt] ( $ (l2.north east)!0.88!(l2.north) $ ) -- ( $ (l1.south east)!1.145!(l1.south) $ ) node[midway,left]{$\begin{array}{cc}\text{Homogeneous}\\\text{operator}\end{array}$}; \normalcolor \end{tikzpicture} \caption{Implications in the nonlinear case. Note that still the off-diagonal heat kernel bounds provide the strongest piece of information. However, we also see that because of \eqref{G_3}, on-diagonal heat kernel bounds ensure a closed loop in the nonlinear case, assuming that \cite{BaCoLeS-C95} applies.} \label{fig:ImplicationInNonlinearCase} \end{figure} Assume that there exists $2^*\ge 2$ such that the Sobolev-Poincar\'e type inequality holds ($2^*=2$ being the Poincar\'e case) \begin{equation*}\label{Sobolev.Moser} \|f\|^2_{L^{2^*}(\R^N)}\leq C\int_{\R^N} f (-\Operator) f\dd x =C\mathcal{Q}_{-\Operator}[f]=C\|(-\Operator)^{\frac{1}{2}}[u]\|^2_{L^2(\R^N)}, \end{equation*} where the last equality is true whenever the operator $-\Operator$ has an extension to $L^2(\R^N)$. By simple interpolation of $L^p$-norms, for $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$, \begin{equation*}\|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\vartheta(2^*). 
\end{equation*}
When dealing with energy estimates in the local case, a calculus equality allows us to do the Moser iteration:
\[
\int_{\R^N} u^{p-1} (-\Delta)[ u^m]\dd x = \int_{\R^N} \nabla u^{p-1} \cdot\nabla u^m \dd x = \frac{4m(p-1)}{(p+m-1)^2}\int_{\R^N} \big|\nabla u^{\frac{p+m-1}{2}} \big|^2 \dd x.
\]
However, we just need an inequality, which in the nonlocal case has been proven by Stroock and Varopoulos \cite{Str84,Var85} (cf. \cite[Proposition 4.11]{BrDP18} or \cite[Lemma 4.10]{DTEnJa18a}): For the same constant as above,
\begin{equation*}\label{str-var.Moser}
\int_{\R^N} u^{p-1}(-\Operator)[u^m] \dd x \gtrsim \int_{\R^N} u^{\frac{p+m-1}{2}}(-\Operator)[u^{\frac{p+m-1}{2}}]\dd x \eqsim \big\|(-\Operator)^{\frac{1}{2}}[u^{\frac{p+m-1}{2}}]\big\|^2_{L^2(\R^N)}.
\end{equation*}
Combining the two above inequalities, one gets
\begin{equation}\label{M}\tag{M}
\int u^{p-1}(-\Operator)[u^m]\ge \frac{4m(p-1)}{(p+m-1)^2}\mathcal{Q}_{-\Operator}[u^{\frac{p+m-1}{2}}] \ge\frac{4m(p-1)}{C^2(p+m-1)^2}\frac{\|u\|_{L^{\tilde{p}\frac{p+m-1}{2}}}^{\frac{2}{1-\vartheta}\frac{p+m-1}{2}}} {\|u\|_{L^{\tilde{q}\frac{p+m-1}{2}}}^{\frac{2\vartheta}{1-\vartheta}\frac{p+m-1}{2}}}.
\end{equation}
The above condition is the key to proving the following:
\begin{theorem}[Green functions satisfying \eqref{G_1}]\label{thm:GNSEquivalentL1Linfty}
Assume \eqref{phias}. Then the following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item \textup{($L^1$--$L^\infty$-smoothing)} Let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0\in L^1(\R^N)$, then
$$
\|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Ct^{-N\theta_1}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_1},
$$
where $\theta_1=(\alpha+N(m-1))^{-1}$.
\item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$, and for all $f\in L^{\tilde{q}}(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$ we have
$$
\|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2^*-\tilde{p}}{2^*-\tilde{q}}.
$$
\end{enumerate}
\end{theorem}
When we have Green functions satisfying \eqref{G_1} at our disposal, then, by Theorem \ref{thm:L1ToLinfinitySmoothing2}(a), we have the above nonlinear smoothing effect. This turns out to be equivalent to a family of subcritical Gagliardo-Nirenberg-Sobolev inequalities, which in turn are equivalent to the Sobolev inequality, see Lemma \ref{lem:NashEquivalentGNS} below. In order to show that subcritical GNS imply nonlinear smoothing, we will perform a Moser iteration. We refer to \cite[Section 3]{BoIbIs22} for a more detailed exposition of the Moser iteration in the fast diffusion case $0<m<1$, in the context of bounded domains. It also contains a detailed discussion about the Green function method versus the Moser iteration.

When we have integrable Green functions \eqref{G_2}, we can obtain absolute bounds (i.e. independent of the initial datum), as in the case of bounded domains \cite{BoVa15,BoVa16,BoFiVa18a}. Such bounds imply weak GNS inequalities, which are equivalent to Poincar\'e inequalities (Lemma \ref{lem:NashEquivalentGNS}). We notice that it is not possible (to the best of our knowledge) to prove the converse implication via the Moser iteration. In fact, the constant simply blows up in the limit $p\to\infty$. A similar discussion can be found in \cite{Gri10, GrMuPo13}.
However we have seen in Theorem \ref{thm:AbsBounds} a simple proof of the absolute bounds with the Green function method, so that we can conclude that integrable Green functions imply Poincar\'e-type inequalities as follows: \begin{proposition}[Green functions satisfying \eqref{G_2}]\label{abs.VS.poinc} Assume \eqref{phias}. Given the following statements: \begin{enumerate}[{\rm (i)}] \item \textup{(Absolute bound)} Let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0$, then $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Ct^{-1/(m-1)}. $$ \item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$, and for all $f\in L^{\tilde{q}}(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$ we have $$ \|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2-\tilde{p}}{2-\tilde{q}}. $$ \end{enumerate} Then (i)$\Longrightarrow$(ii). \end{proposition} In order to prove the above theorem and proposition, we need a few results that we prefer to state and prove separately since they have their own interest. We shall start with the fact that (any) GNS is equivalent to some $L^q$--$L^p$-smoothing. This can be directly seen by the Stroock-Varopoulos inequality. \begin{proposition}[$L^q$--$L^p$-smoothing VS subcritical GNS]\label{prop:LqLpsmoothing} Assume \eqref{phias}. \begin{enumerate}[{\rm (a)}] \item \textup{(Green functions satisfying \eqref{G_1})} The following statements are equivalent: \begin{enumerate}[{\rm (i)}] \item \textup{($L^q$--$L^{p}$-smoothing)} For $p\in[1+m,\infty)$ and $q\in[1,p)$, let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0\in L^q(\R^N)$, then $$ \|u(\cdot,t)\|_{L^p(\R^N)}\leq C\bigg(\frac{(p+m-1)^2}{4m(m-1)(p-1)}\frac{1}{t}\bigg)^{\frac{N(p-q)\theta_{q}}{p}}\|u_0\|_{L^{q}(\R^N)}^{\frac{q}{p}\frac{\theta_{q}}{\theta_p}}, $$ where $\theta_r:=(\alpha r+N(m-1))^{-1}$ and $C>0$ is independent of $p,q$. \item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$, and for all $f\in L^{\tilde{q}}(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$ we have $$ \|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2^*-\tilde{p}}{2^*-\tilde{q}}. $$ \end{enumerate} \item \textup{(Green functions satisfying \eqref{G_2})} The following statements are equivalent: \begin{enumerate}[{\rm (i)}] \item \textup{($L^q$--$L^{p}$-smoothing)} For $p\in[1+m,\infty)$ and $q\in[1,p)$, let $u$ be a weak dual solution of \eqref{GPME} with initial data $u_0\in L^q(\R^N)$, then $$ \|u(\cdot,t)\|_{L^p(\R^N)}\leq C\bigg(\frac{(p+m-1)^2}{4m(m-1)(p-1)}\frac{1}{t}\bigg)^{\frac{p-q}{p(m-1)}}\|u_0\|_{L^{q}(\R^N)}^{\frac{q}{p}}, $$ where $C>0$ is independent of $p,q$. \item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$, and for all $f\in L^{\tilde{q}}(\R^N)\cap {\rm dom}(\mathcal{Q}_{-\Operator})$ we have $$ \|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2-\tilde{p}}{2-\tilde{q}}. $$ \end{enumerate} \end{enumerate} \end{proposition} \begin{remark} \begin{enumerate}[{\rm (a)}] \item Let us make some comments on part (a). 
First of all, note that $\vartheta$ is nothing but the standard quantity appearing in interpolation between $L^p$-norms with $\tilde{q}<\tilde{p}<2^*$. We also see that, formally letting $m\to1^{+}$, we get $\tilde{p}=2$ and $\tilde{q}=1$ in the above subcritical GNS inequality, and we recover the critical Nash inequality (see Section \ref{sec:TheWell-KnownLinearCase}). In our case $m>1$, and that is why we call it subcritical. Note, however, that the standard GNS inequality (see Remark \ref{rem:NashAndGNSAndSmoothing}) is not included as a special case here.
\item The proof reveals that GNS inequalities always imply $L^q$--$L^{p}$-smoothing effects, but the opposite implication requires further assumptions. Actually, the equivalence which is always true is the one between $L^1$--$L^{m+1}$-smoothing effects and subcritical Nash inequalities. In fact, even operators only yielding boundedness estimates in the form of Theorem \ref{thm:L1ToLinfinitySmoothing} (see also Theorem \ref{thm:L1ToLinfinitySmoothing0}) still enjoy the latter equivalence.
\end{enumerate}
\end{remark}
To provide a proof, we need:
\begin{lemma}[\cite{BaCoLeS-C95, S-Co02}]\label{lem:NashEquivalentGNS}
Assume \eqref{phias} and $f\in C_\textup{c}^\infty(\R^N)$. Then:
\begin{enumerate}[{\rm (a)}]
\item \textup{(Sobolev)} The following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item \textup{(Sobolev)}
$$
\|f\|_{L^{2^*}(\R^N)}\leq C\mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}}.
$$
\item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$,
$$
\|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2^*-\tilde{p}}{2^*-\tilde{q}}.
$$
\end{enumerate}
\item \textup{(Poincar\'e)} The following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item \textup{(Poincar\'e)}
$$
\|f\|_{L^{2}(\R^N)}\leq C\mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}}.
$$
\item \textup{(Subcritical GNS)} For $\tilde{p}\in [(1+m)/m,2)$ and $\tilde{q}\in[1/m,\tilde{p})$,
$$
\|f\|_{L^{\tilde{p}}(\R^N)}\leq C\|f\|_{L^{\tilde{q}}(\R^N)}^\vartheta \mathcal{Q}_{-\Operator}[f]^{\frac{1}{2}(1-\vartheta)}\quad\text{where}\quad \vartheta:=\frac{\tilde{q}}{\tilde{p}}\frac{2-\tilde{p}}{2-\tilde{q}}.
$$
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{remark}
\begin{enumerate}[{\rm (a)}]
\item In both cases, we simply check that $q$ in \cite{BaCoLeS-C95} is respectively given by $2^*$ and $2$. It is also worth noting that ``any'' family of Sobolev/Poincar\'e-type inequalities is equivalent with the Sobolev/Poincar\'e inequality. As a consequence, subcritical GNS inequalities are equivalent with subcritical Nash inequalities, and then also equivalent with the standard Nash and GNS inequalities, respectively. However, the subcritical ones might be easier to prove in the nonlinear setting.
\item The case $q=2$ is somewhat curious since it yields a Poincar\'e inequality in $\R^N$. Such an inequality is not fulfilled in e.g. the case $\Operator=\Delta$, as the spectrum is merely nonnegative. Hence, it provides the intuition that absolute bounds hold if the spectrum of the operator is positive (e.g. as in the case $-\Operator=(I-\Delta)^{\frac{\alpha}{2}}$).
\end{enumerate}
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop:LqLpsmoothing}]
\noindent(a) (i)$\Longrightarrow$(ii). By the particular choices $1=q<p=m+1$, we immediately have the corresponding $L^1$--$L^{m+1}$-smoothing effect.
Then, direct and formal\footnote{We have decided to present the differential version of these estimates because the main idea is easier to follow. This is rigorous for strong solutions, for instance when $\partial_tu\in L^1$. It often happens that bounded weak solutions are strong, possibly under some additional assumptions on $\Operator$, see \cite{DPQuRoVa12}. These formal computations can be justified rigorously in several different ways. One possibility is to use Steklov averages and Gr\"onwall-type inequalities. It is beyond the scope of this paper to justify these computations, but we remark that they can be shown to hold for the mild solutions constructed in Appendix \ref{sec:ExistenceAPriori}, through standard approximations. The energy computations, namely the ones involving the $L^{m+1}$ norm, are always true for weak energy solutions (of which mild solutions are a subclass).} computations show that \begin{equation}\label{eq:decayOfLpNorms} \frac{\dd}{\dd t}\int u^{m+1}=\int \dell_t(u^{m+1})=(m+1)\int u^{m}\dell_tu=-(m+1)\int u^{m}(-\Operator)[u^m], \end{equation} and also, since $\Operator$ is symmetric and $u$ solves \eqref{GPME}, \begin{equation}\label{eq:energyDecay} \begin{split} \frac{\dd}{\dd t}\int u^m(-\Operator)[u^m]&=2\int\dell_t(u^m)(-\Operator)[u^m]=2m\int u^{m-1}\dell_tu(-\Operator)[u^m]\\ &=-2m\int u^{m-1}\Big((-\Operator)[u^m]\Big)^2\leq 0. \end{split} \end{equation} Hence, by \eqref{eq:energyDecay}, we obtain in \eqref{eq:decayOfLpNorms} that \begin{equation*} \frac{\dd}{\dd t}\int u^{m+1}\geq -(m+1)\int u_0^m(-\Operator)[u_0^m]=-(m+1)\mathcal{Q}_{-\Operator}[u_0^m]. \end{equation*} We then integrate over $(0,T)$ and use the $L^1$--$L^{m+1}$-smoothing effect to get \begin{equation*} \begin{split} -(m+1)\mathcal{Q}_{-\Operator}[u_0^m]T &\leq \|u(T)\|_{L^{m+1}}^{m+1}-\|u_0\|_{L^{m+1}}^{m+1}\leq CT^{-N((m+1)-1)\theta_1}\|u_0\|_{L^1}^{\frac{\theta_1}{\theta_{m+1}}}-\|u_0\|_{L^{m+1}}^{m+1}, \end{split} \end{equation*} or, by taking $f:=u_0^m$, $$ \|f\|_{L^{\frac{m+1}{m}}}^{\frac{m+1}{m}}\leq F(T):=C\|f\|_{L^{\frac{1}{m}}}^{\frac{1}{m}\frac{\theta_1}{\theta_{m+1}}}T^{-N((m+1)-1)\theta_1}+(m+1)\mathcal{Q}_{-\Operator}[f]T. $$ The inequality is still valid if we infimize $F$ over $T>0$, and this gives the subcritical Nash inequality, i.e., $\tilde{p}=(1+m)/m$ and $\tilde{q}=1/m$ in the stated subcritical GNS. Since the subcritical Nash inequality is a subfamily of the subcritical GNS, it is equivalent with the Sobolev inequality and then equivalent with GNS by Lemma \ref{lem:NashEquivalentGNS}. \smallskip \noindent(a) (ii)$\Longrightarrow$(i). Note that (ii) with $f=u^{\frac{p+m-1}{2}}$ and the Stroock-Varopoulos inequality gives \eqref{M}. Direct and formal\footnote{Again, it is beyond the scope of this paper to justify these computations, but they hold for e.g. the mild solutions constructed in Appendix \ref{sec:ExistenceAPriori}, possibly under some additional assumptions on $\Operator$.
See also the previous footnote.} calculations and the $L^p$-decay (Proposition \ref{prop:APriori}(b)(ii)) give \begin{equation}\label{eq:Lp-Lq.diff.ineq} \begin{split} \frac{\dd}{\dd t}\int u^p&=\int \dell_t(u^p)=p\int u^{p-1}\dell_tu=-p\int u^{p-1}(-\Operator)[u^m]\\ &\le -\frac{4mp(p-1)}{C^2(p+m-1)^2}\frac{\|u\|_{L^{\tilde{p}\frac{p+m-1}{2}}}^{\frac{2}{1-\vartheta}\frac{p+m-1}{2}}} {\|u_0\|_{L^{\tilde{q}\frac{p+m-1}{2}}}^{\frac{2\vartheta}{1-\vartheta}\frac{p+m-1}{2}}} = -\frac{4mp(p-1)}{C^2(p+m-1)^2}\frac{\|u\|_{L^{p}}^{1+\sigma}} {\|u_0\|_{L^{q}}^{\frac{2(2^*-\tilde{p})}{2^*(\tilde{p}-\tilde{q})}q}}, \end{split} \end{equation} where we have chosen \[ \tilde{p}:=\frac{2p}{p+m-1},\qquad \tilde{q}:= \frac{2q}{p+m-1}\qquad\mbox{and}\qquad \sigma:=\frac{2^*(2-\tilde{p})+\tilde{q}(2^*-2)}{2^*(\tilde{p}-\tilde{q})}, \] and this choice is consistent with our assumptions. We also have that $$ \frac{2}{1-\vartheta}\frac{p+m-1}{2}=\frac{2(2^*-\tilde{q})}{2^*(\tilde{p}-\tilde{q})}p \qquad\text{and}\qquad \frac{2\vartheta}{1-\vartheta}\frac{p+m-1}{2}=\frac{2(2^*-\tilde{p})}{2^*(\tilde{p}-\tilde{q})}q, $$ so that, integrating the differential inequality, we get (i). The proof of part (a) is concluded. \smallskip \noindent(b) (i)$\Longrightarrow$(ii). Follows by an argument similar to the one in (a) (i)$\Longrightarrow$(ii), except that $$ F(T):=C\|f\|_{L^{\frac{1}{m}}}^{\frac{1}{m}}T^{-\frac{m}{m-1}}+(m+1)\mathcal{Q}_{-\Operator}[f]T. $$ \smallskip \noindent(b) (ii)$\Longrightarrow$(i). We argue exactly as in (a) (ii)$\Longrightarrow$(i), but now $$ \frac{2}{1-\vartheta}\frac{p+m-1}{2}=\frac{2-\tilde{q}}{\tilde{p}-\tilde{q}}p, \qquad \frac{2\vartheta}{1-\vartheta}\frac{p+m-1}{2}=\frac{2-\tilde{p}}{\tilde{p}-\tilde{q}}q, $$ $$ \frac{1}{\sigma}=\frac{p-q}{m-1},\qquad\text{and}\qquad \frac{2-\tilde{p}}{\tilde{p}-\tilde{q}}\frac{1}{\sigma}=1. $$ This yields the desired estimate. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:GNSEquivalentL1Linfty}] \noindent(ii)$\Longrightarrow$(i). In what follows, we will just sketch the essential parts of the proof, in order to focus on the main ideas. The proof can moreover be made rigorous by standard approximation techniques. Let us first remark that it is enough to prove the following $L^{m+1}$--$L^\infty$-smoothing effect: \begin{equation}\label{Moser.m+1} \|u(t)\|_{L^\infty(\R^N)}\leq C t^{-N\theta_{m+1}}\|u_0\|_{L^{m+1}(\R^N)}^{\alpha(m+1)\theta_{m+1}}. \end{equation} Indeed, the claimed $L^1$--$L^\infty$-smoothing effect is then deduced by applying Lemma \ref{lem:SmoothingImplications} of Appendix \ref{sec:SmoothingImplications}. In order to prove the smoothing effect \eqref{Moser.m+1}, we will iterate, ``Moser style'', the $L^q$--$L^{p}$-smoothing effects of Proposition \ref{prop:LqLpsmoothing}(a)(i): Let us define $p_0=m+1$ and $p_k=2^kp_0$ for each $k\geq 1$, and $t_k$ such that $t_k-t_{k-1}=\frac{t-t_0}{2^k}$, so that the inequality in Proposition \ref{prop:LqLpsmoothing}(a)(i) becomes \begin{align*}\label{Moser} \|u(t_k)\|_{L^{p_k}}\leq I_{k}^{\frac{N(p_k-p_{k-1})}{p_k}\theta_{k-1}}\|u(t_{k-1})\|_{L^{p_{k-1}}}^{\frac{p_{k-1}\,\theta_{k-1}}{p_k\,\theta_k}} \qquad\mbox{with}\qquad I_k \eqsim \frac{p_k}{t_k-t_{k-1}}\eqsim 4^k \end{align*} where $\theta_k:=\theta_{p_k}=(\alpha p_k+ N(m-1))^{-1}$. More precisely, we can estimate $I_k$ uniformly as follows: \[ I_k:= C \frac{(p_k+m-1)^2}{4m(m-1)(p_k-1)}\frac{1}{t_k-t_{k-1}} \le 4^k\frac{C}{t-t_0}, \] for some constant $C>0$ that depends only on $m,N$.
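Before carrying out the iteration, the following small Python sketch (an aside, not part of the proof; the sample values of $m$, $N$, and $\alpha$, as well as all variable names, are our own) may help to keep track of the exponents: it checks numerically that $p_k\theta_k\to1/\alpha$, that the accumulated exponent of $(t-t_0)^{-1}$ tends to $N\theta_{p_0}$, and that the exponent of $\|u(t_0)\|_{L^{p_0}}$ tends to $\alpha p_0\theta_{p_0}$, in agreement with \eqref{Moser.m+1}.
\begin{verbatim}
# Sanity check of the Moser-type iteration exponents (illustration only;
# m, N, alpha below are arbitrary sample values, not taken from the paper).
m, N, alpha = 2.0, 3.0, 1.0

def theta(p):
    # theta_p = 1/(alpha*p + N*(m-1)), matching the definition in the proof
    return 1.0 / (alpha * p + N * (m - 1.0))

p0 = m + 1.0
acc = 0.0               # telescoping sum of N*(theta_{j-1} - theta_j)/alpha
p_prev, p_k = p0, p0
for k in range(1, 60):
    p_k = 2.0**k * p0
    acc += N * (theta(p_prev) - theta(p_k)) / alpha
    p_prev = p_k

print(p_k * theta(p_k), 1.0 / alpha)               # -> 1/alpha
print(acc / (p_k * theta(p_k)), N * theta(p0))     # -> N*theta_{p_0}
print(p0 * theta(p0) / (p_k * theta(p_k)),
      alpha * p0 * theta(p0))                      # -> alpha*p_0*theta_{p_0}
\end{verbatim}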
Then we iterate \begin{align*} \|u(t_k)\|_{L^{p_k}}&\leq I_{k}^{\frac{N(p_k-p_{k-1})}{p_k}\theta_{k-1}} \|u(t_{k-1})\|_{L^{p_{k-1}}}^{\frac{p_{k-1}\,\theta_{k-1}}{p_k\,\theta_k}}\\ &\le I_k^{\frac{N(p_k-p_{k-1})}{p_k}\theta_{k-1}}I_{k-1}^{\frac{N(p_{k-1}-p_{k-2})}{\cancel{p_{k-1}}}\theta_{k-2}\frac{\cancel{p_{k-1}}\theta_{k-1}}{p_k\theta_k}} \|u(t_{k-2})\|_{L^{p_{k-2}}}^{\frac{p_{k-2}\theta_{k-2}}{\cancel{p_{k-1}}\cancel{\theta_{k-1}}}\frac{\cancel{p_{k-1}}\,\cancel{\theta_{k-1}}}{p_k\,\theta_k}}\\ &\;\;\vdots\\ &\le \prod_{j=1}^{k} I_j^{\frac{N(p_j-p_{j-1})}{p_k}\frac{\theta_j\theta_{j-1}}{\theta_k}}\;\|u(t_0)\|_{L^{p_0}}^{\frac{p_0\,\theta_{p_0}}{p_k\theta_{k}}} \le\left[\prod_{j=1}^{k}\bigg(4^j\frac{\overline{c}}{t-t_0}\bigg)^{\frac{N(\theta_{j-1}-\theta_{j})}{\alpha}}\right]^{\frac{1}{p_k\theta_{k}}} \!\!\!\!\!\! \|u(t_0)\|_{L^{p_0}}^{\frac{p_0\,\theta_{p_0}}{p_k\theta_{k}}}. \end{align*} Finally, letting $k\to \infty$, it is easy to see that \[ \prod_{j=1}^{k}\bigg(4^j\frac{C}{t-t_0}\bigg)^{\frac{N(\theta_{j-1}-\theta_{j})}{\alpha}} \le 2^\frac{N}{\alpha^2p_0\,p_k\theta_{k}} \left(\frac{C}{t-t_0}\right)^{\frac{N(\theta_{p_0}-\theta_{k})}{\alpha}} \] so that, using the lower semicontinuity of the $L^\infty$ norm, we get \begin{align*} \|u(t)\|_{L^\infty}&\leq\lim\limits_{k\rightarrow\infty}\|u(t_k)\|_{L^{p_k}}\nonumber\\ &\leq\lim_{k\rightarrow\infty}\left(2^\frac{N}{\alpha^2{p_0}\,p_k\theta_{k}} \left(\frac{C}{t-t_0}\right)^{\frac{N(\theta_{p_0}-\theta_{k})}{\alpha}}\right)^{\frac{1}{p_k\theta_{k}}} \|u(t_0)\|_{L^{p_0}}^{\frac{p_0\,\theta_{p_0}}{p_k\theta_{k}}}\nonumber \leq C \frac{\|u(t_0)\|_{L^{p_0}}^{\alpha p_0\,\theta_{p_0}}}{(t-t_0)^{N\theta_{p_0}}}\,. \end{align*} This proves the desired inequality \eqref{Moser.m+1} and concludes the proof of (ii)$\Longrightarrow$(i). \smallskip \noindent (i)$\Longrightarrow$(ii). Follows by Theorem \ref{thm:SmoothingImplications} of Appendix \ref{sec:SmoothingImplications}, which states that $L^1$--$L^\infty$ imply $L^p$--$L^q$-smoothing effects, which in turn imply subcritical GNS, by Proposition \ref{prop:LqLpsmoothing}(a). This concludes the proof. \end{proof} \begin{remark}This proof holds for all $m\ge 1$, so in particular it also shows that subcritical GNS imply smoothing also in the linear case $m=1$, providing an alternative proof of the implication (b)$\Longrightarrow$(a) or (c)$\Longrightarrow$(a) in Theorem \ref{thm:LinearEquivalences}. When $m\in (0,1)$, which corresponds to the fast diffusion case, the same proof works as well, but we need to require further integrability on the initial datum in order to perform the iteration, as thoroughly explained in e.g. \cite[Section 3]{BoIbIs22} (and also \cite{DaKe07,BoVa10, BoSi19} for the local case). \end{remark} Finally, we have: \begin{proof}[Proof of Proposition \ref{abs.VS.poinc}] By Lemma \ref{lem:SmoothingImplications} (with $\gamma=0$) of Appendix \ref{sec:SmoothingImplications}, item (i) implies Proposition \ref{prop:LqLpsmoothing}(b)(i), and hence, item (ii) holds. \end{proof} \section{Various examples} \label{sec:GreenAndHeat} This section is devoted to study the operators whose Green functions satisfy assumptions \eqref{G_1}--\eqref{G_3}, and hence, which smoothing effects are satisfied by such operators. 
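Before doing so, and purely as an illustrative warm-up for the representation formulas recalled below (the operator, the dimension, and all numerical values in the following Python snippet are our own sample choices and play no role in the analysis), let us check numerically that one recovers the classical Newtonian behaviour of the Green function of $-\Delta$ in $\R^3$ by integrating its Gaussian heat kernel in time:
\begin{verbatim}
# Recover the Green function of -Delta in R^3 from its heat kernel via
#   G(x) = \int_0^infty H(x,t) dt,
# and compare with the Newtonian potential 1/(4*pi*|x|).
# Illustration only: the grid and the test radius are arbitrary choices.
import numpy as np

def heat_kernel(r, t):
    # Gaussian heat kernel of -Delta in R^3, evaluated at distance r
    return (4.0 * np.pi * t) ** (-1.5) * np.exp(-r**2 / (4.0 * t))

r = 0.7                                  # |x - x_0|
t = np.logspace(-8.0, 8.0, 400000)       # log-spaced time grid
f = heat_kernel(r, t)
G_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoidal rule
G_exact = 1.0 / (4.0 * np.pi * r)
print(G_numeric, G_exact)                # the two numbers agree closely
\end{verbatim}
The same recipe applies, at least formally, whenever the representation of Proposition \ref{prop:GreenFormulas} below is available.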
As a consequence of Proposition \ref{prop:TheInverseOperatorA-1}, we have: \begin{proposition}\label{prop:GreenFormulas} Assume that the operator $-\Operator$ is linear, symmetric, nonnegative, and moreover, densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$. Then \eqref{Gas} holds, and the Green functions of $-\Operator$ and $I-\Operator$ are respectively given by $$ \mathbb{G}_{-\Operator}^{x_0}(x)=\int_{0}^\infty \mathbb{H}_{-\Operator}^{x_0}(x,t)\dd t $$ and $$ \mathbb{G}_{I-\Operator}^{x_0}(x)=\int_{0}^\infty \e^{-t}\mathbb{H}_{-\Operator}^{x_0}(x,t)\dd t, $$ where $\mathbb{H}_{-\Operator}^{x_0}$ is the corresponding heat kernel of $-\Operator$. \end{proposition} Let us illustrate these formulas through Fourier analysis. Denote by $\sigma_{-\Operator}$ the Fourier symbol of the operator $-\Operator$. Then the heat kernel can be expressed as $$ \mathbb{H}_{-\Operator}^{x_0}(x,t)=\mathcal{F}^{-1}\big[\e^{-\sigma_{-\Operator}(\cdot)t}\big](x-x_0)=\int_{\R^N}\e^{-\sigma_{-\Operator}(\xi)t}\e^{2\pi\textup{i}(x-x_0)\cdot\xi}\dd \xi, $$ and \begin{equation*} \begin{split} \mathbb{G}_{-\Operator}^{x_0}(x)&=\int_0^\infty \mathbb{H}_{-\Operator}^{x_0}(x,t)\dd t=\int_{\R^N}\bigg(\int_0^\infty\e^{-\sigma_{-\Operator}(\xi)t}\dd t\bigg)\e^{2\pi\textup{i}(x-x_0)\cdot\xi}\dd \xi\\ &=\int_{\R^N}\frac{1}{\sigma_{-\Operator}(\xi)}\e^{2\pi\textup{i}(x-x_0)\cdot\xi}\dd \xi=\mathcal{F}^{-1}\Big[\frac{1}{\sigma_{-\Operator}(\xi)}\Big](x-x_0). \end{split} \end{equation*} We also refer to the well-written book \cite{BoByKuRySoVo09}, which provides many examples of Green functions and a good introduction to potential theory. In the examples that follow, we will need \begin{equation}\label{GammaFunction} \int_{0}^{\infty}\e^{-tr}r^\vartheta\dd r=\frac{\Gamma(\vartheta+1)}{t^{\vartheta+1}}<\infty \qquad\text{whenever $\vartheta>-1$}, \end{equation} where $\Gamma$ is the gamma function. \subsection{On the assumption \eqref{G_1}} As demonstrated in Theorem \ref{thm:L1ToLinfinitySmoothing2}, assumption \eqref{G_1} leads to the estimate \begin{equation*} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-N\theta_{\alpha}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}\qquad\text{for a.e. $t>0$} \end{equation*} for weak dual solutions of \eqref{GPME} with initial data $u_0$. Let us provide some concrete examples of operators $-\Operator$ in \eqref{GPME} whose Green functions satisfy \eqref{G_1}. \begin{lemma}\label{lem:FractionaLaplaceG_2'} The fractional Laplacian/Laplacian $(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2]$ has a Green function which satisfies \eqref{G_1}. \end{lemma} \begin{remark} Let us mention that heat kernel estimates for the Laplacian and the fractional Laplacian date back to Fourier \cite[Chapter IX Section II]{Fou55} (see also \cite[Section 2.3]{Eva10}) and Blumenthal and Getoor \cite{BlGe60}, respectively. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:FractionaLaplaceG_2'}] Assume $\alpha\in(0,2)$. By Lemma 2 in Chapter V.1 in \cite{Ste70}, $$ \mathbb{G}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x)=\mathscr{F}^{-1}\big[|\cdot|^{-\alpha}\big](x-x_0)\eqsim |x-x_0|^{-(N-\alpha)}. $$ Now, $$ \int_{B_R(x_0)}\mathbb{G}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x)\dd x\eqsim \int_0^{R}r^{-(N-\alpha)}r^{N-1}\dd r\eqsim R^{\alpha}. $$ Moreover, for any $x\in \R^N\setminus B_R(x_0)$, $$ \mathbb{G}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x)\lesssim R^{-(N-\alpha)}, $$ and the result follows. Assume $\alpha=2$. The result is classical and can e.g. be found in \cite[Section 1.1.8]{Dav89}.
We get that $\mathbb{G}_{-\Delta}^{x_0}(x)\eqsim |x-x_0|^{-(N-2)}$, which satisfies \eqref{G_1} with $\alpha=2$. \end{proof} \begin{corollary} Any operator $\Operator$ whose Green function satisfies $$ \mathbb{G}_{-\Operator}^{x_0}(x)\lesssim |x-x_0|^{-(N-\alpha)} \qquad\text{for some $\alpha\in(0,2]$} $$ will fulfil \eqref{G_1}. \end{corollary} \begin{remark} By Lemma \ref{lem:GreenFunctionGivesSobolev}, the above assumption on the Green function implies that the corresponding operator satisfies the Sobolev inequality. Again, we also refer to \cite{KaKiLe21} for a further discussion. \end{remark} \begin{lemma}\label{lem:GreenForUniformlyElliptic} Assume that the real matrix $[a_{ij}]_{i,j=1,\ldots,N}$ is nonnegative and symmetric and $\Operator=\sum_{i,j=1}^{N}a_{ij}\dell_{x_ix_j}^2$. Given the following statements: \begin{enumerate}[{\rm (i)}] \item There exist constants $C,c>0$ such that $$ c|y|^2\leq \sum_{i,j=1}^Na_{ij}y_iy_j\leq C|y|^2. $$ \item There exist constants $C,c>0$ such that $$ \mathbb{H}_{-\Operator}^{x_0}(x,t)\leq ct^{-\frac{N}{2}}\textup{exp}\Big(-C\frac{|x-x_0|^2}{t}\Big). $$ \item There exists a constant $C>0$ such that $$ \mathbb{G}_{-\Operator}^{x_0}(x)\leq C|x-x_0|^{-(N-2)}. $$ \end{enumerate} We have (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii). \end{lemma} \begin{remark} \begin{enumerate}[{\rm (a)}] \item The heat kernel bound is (up to constants) the same as for the regular Laplacian. This is not surprising in the constant coefficient case since the operator is, up to a linear change of variables, the Laplacian. The more interesting case is of course when the coefficients are $(x,t)$-dependent, see also \cite[Section 2.9]{Jac02} and the classical \cite{Aro68}. For a similar result in the fractional setting, we refer to \cite{KaWe21}. \item The upper bound in statement (i) might seem superfluous, but the constant inside the exponential function in (ii) depends on it. \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:GreenForUniformlyElliptic}] \noindent(i)$\Longrightarrow$(ii). Follows by \cite[Corollary 3.2.8]{Dav89} (see also \cite{Aro68}). \smallskip \noindent(ii)$\Longrightarrow$(iii). Follows by \cite[Theorem 3.1.1]{Dav89} (see also \cite{Aro68}). \end{proof} The nonlocal counterpart is, roughly speaking, the case in which the L\'evy measure is comparable to that of the fractional Laplacian. \begin{lemma}\label{lem:GreenComparableWithFractionalLaplacian} Assume $\Operator=\Levy^{\mu}$ and \eqref{muas}. Given the following statements, for $\alpha\in(0,2)$: \begin{enumerate}[{\rm (i)}] \item There exist constants $C,c>0$ such that $$ \frac{c}{|z|^{N+\alpha}}\leq \frac{\dd \mu}{\dd z}(z)\leq \frac{C}{|z|^{N+\alpha}}. $$ \item There exists a constant $C>0$ such that $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq C\min\big\{t^{-\frac{N}{\alpha}},t|x-x_0|^{-(N+\alpha)}\big\}. $$ \item There exists a constant $C>0$ such that $$ \mathbb{G}_{-\Levy^{\mu}}^{x_0}(x)\leq C|x-x_0|^{-(N-\alpha)}. $$ \end{enumerate} We have (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii). \end{lemma} \begin{remark} We can also slightly weaken the assumption on the lower bound: There exist constants $C,c>0$ such that $$ c\veps^{-\alpha}\leq \int_{|z|>\veps}\dd\mu(z)\quad\text{$\forall\,\veps>0$} \qquad\text{and}\qquad \frac{\dd \mu}{\dd z}(z)\leq \frac{C}{|z|^{N+\alpha}}. $$ The estimates on the heat kernel and Green function still hold \cite[Theorem 2]{Szt10b} with $f(x,y)=f(|y-x|)=\dd \mu/\dd z$, see also the recent \cite{KaKiLe21}.
\end{remark} \begin{proof}[Proof of Lemma \ref{lem:GreenComparableWithFractionalLaplacian}] \noindent(i)$\Longrightarrow$(ii). Follows by \cite[Theorem 1.2]{ChKu08} with $\rho(x,y)=|x-y|$, $V(r)=r^N$, $\gamma_1=\gamma_2=0$, $\psi(r)=1$, and $\phi_1(r)=r^\alpha$. \smallskip \noindent(ii)$\Longrightarrow$(iii). By direct computations, \begin{equation*} \begin{split} \mathbb{G}_{-\Levy^{\mu}}^{x_0}(x)&=\int_0^\infty \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\dd t\\ &\lesssim\int_0^{|x-x_0|^\alpha} t|x-x_0|^{-(N+\alpha)}\dd t+\int_{|x-x_0|^\alpha}^{\infty}t^{-\frac{N}{\alpha}}\dd t\\ &\lesssim|x-x_0|^{-(N-\alpha)}. \qedhere \end{split} \end{equation*} \end{proof} \subsection{Combinations of assumption \eqref{G_1}}\label{sec:CombinationsOfAssumptionG_1} Sometimes the Green function has different power behaviours at zero and at infinity. As demonstrated in Theorem \ref{thm:L1ToLinfinitySmoothing3}, such a case leads to the estimate $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-N\theta_{\alpha}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta_{\alpha}}+ t^{-N\theta_{2}}\|u_0\|_{L^1(\R^N)}^{2\theta_{2}}\qquad\text{for a.e. $t>0$} $$ for weak dual solutions of \eqref{GPME} with initial data $u_0$. Let us provide some concrete examples of operators $-\Operator$ in \eqref{GPME} whose Green functions satisfy combinations of \eqref{G_1}. We start with one of the most basic operators giving such an estimate: \begin{lemma}\label{lem:SumOfLapAndFracLap} Assume $\alpha\in(0,2)$ and $-\Operator=(-\Delta)+(-\Delta)^{\frac{\alpha}{2}}=:(-\Delta)+(-\Levy^\mu)$. Given the following statements: \begin{enumerate}[{\rm (i)}] \item For some constant $C>0$, we consider $$ \frac{\dd \mu}{\dd z}(z)=\frac{C}{|z|^{N+\alpha}}. $$ \item There exists a constant $C>0$ such that $$ \mathbb{H}_{-\Operator}^{x_0}(x,t)\leq C \begin{cases} f(x-x_0,t)+\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x,t)\qquad\qquad&\text{if $0<t<|x-x_0|^2\leq 1$,}\\ \mathbb{H}_{-\Delta}^{x_0}(x,t)\qquad\qquad&\text{if $|x-x_0|^2<t<|x-x_0|^\alpha\leq 1,$}\\ \mathbb{H}_{-\Delta}^{x_0}(x,t)\qquad\qquad&\text{if $|x-x_0|^\alpha\leq t\leq 1$,}\\ \mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x,t)\qquad\qquad&\text{if $t\geq1$ or $|x-x_0|\geq 1$,} \end{cases} $$ where $f(x-x_0,t):=(4\pi t)^{-N/2}\textup{exp}(-|x-x_0|^2/(16t))$. \item There exists a constant $C>0$ such that $$ \mathbb{G}_{-\Operator}^{x_0}(x)\leq C \begin{cases} |x-x_0|^{-(N-2)}\qquad\qquad&\text{if $|x|\leq 1$,}\\ |x-x_0|^{-(N-\alpha)}\qquad\qquad&\text{if $|x|>1$.}\\ \end{cases} $$ \end{enumerate} We have (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii). \end{lemma} \begin{remark}\label{rem:BigSmallR} Note that when $0<R\leq 1$, we get $$ \int_{B_R(x_0)}\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\lesssim R^{2},\qquad \mathbb{G}_{-\Operator}^{x_0}(x)\lesssim R^{-(N-2)}\quad\text{for $x\in \R^N\setminus B_R(x_0)$,} $$ and when $R>1$, $$ \int_{B_R(x_0)}\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\lesssim R^{\alpha},\qquad \mathbb{G}_{-\Operator}^{x_0}(x)\lesssim R^{-(N-\alpha)}\quad\text{for $x\in \R^N\setminus B_R(x_0)$.} $$ Hence, we are in the setting of Theorem \ref{thm:L1ToLinfinitySmoothing3} although in this example the small time behaviour is governed by the Laplacian and the large time by the fractional Laplacian. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:SumOfLapAndFracLap}] \noindent(i)$\Longrightarrow$(ii). Follows by \cite[Theorem 2.13]{SoVo07}. \smallskip \noindent(ii)$\Longrightarrow$(iii). Assume $|x|\leq 1$. 
Then \begin{equation*} \begin{split} \mathbb{G}_{-\Operator}^{0}(x)&=\int_0^\infty\mathbb{H}_{-\Operator}^{0}(x,t)\dd t=\bigg(\int_0^{|x|^2}+\int_{|x|^2}^{|x|^\alpha}+\int_{|x|^\alpha}^1+\int_1^\infty\bigg)\mathbb{H}_{-\Operator}^{0}(x,t)\dd t. \end{split} \end{equation*} Let us start with the integral involving $f(x,t)$. The change of variables $|x|^2/(16t)\mapsto \tau$ gives \begin{equation*} \begin{split} \int_0^{|x|^2}f(x,t)\dd t\eqsim\int_0^{|x|^2}t^{-N/2}\textup{e}^{\frac{-|x|^2}{16t}}\dd t=\Big(\frac{|x|^2}{16}\Big)^{-\frac{N}{2}+1}\int_{\frac{1}{16}}^{\infty}\tau^{\frac{N}{2}-2}\textup{e}^{-\tau}\dd \tau\eqsim |x|^{-(N-2)}, \end{split} \end{equation*} where we estimated the final integral by \eqref{GammaFunction}. The integrals involving $\mathbb{H}_{-\Delta}^{0}$ can be estimated in a similar way. It remains to estimate the contribution from $\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}$: \begin{equation*} \begin{split} &\bigg(\int_0^{|x|^2}+\int_1^\infty\bigg)\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(x,t)\dd t\lesssim \bigg(\int_0^{|x|^2}+\int_1^\infty\bigg)\min\{t^{-N/\alpha},t|x|^{-N-\alpha}\}\dd t\\ &=\int_0^{|x|^2}t|x|^{-N-\alpha}\dd t+\int_1^\infty t^{-N/\alpha}\dd t\eqsim|x|^{-(N-2)+2-\alpha}+1. \end{split} \end{equation*} Since $|x|\leq 1$, we have $|x|^{-(N-2)+2-\alpha}\leq |x|^{-(N-2)}$ and $1\leq |x|^{-(N-2)}$. Assume $|x|>1$. Then \begin{equation*} \begin{split} \mathbb{G}_{-\Operator}^{0}(x)&=\int_0^\infty\mathbb{H}_{-\Operator}^{0}(x,t)\dd t\lesssim\int_0^\infty\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(x,t)\dd t\lesssim|x|^{-(N-\alpha)}. \end{split} \end{equation*} We combine the results to complete the proof. \end{proof} Let us now consider relativistic Schr\"odinger-type operators of the form \begin{equation}\label{eq:RelativisticSchrodinger} (\kappa^2I-\Delta)^\frac{\alpha}{2}-\kappa^{\alpha} I\qquad\text{with $\kappa>0$ and $\alpha\in(0,2)$.} \end{equation} These are L\'evy operators \cite[Lemma 2]{Ryz02} (see also \cite{GrRy08} and \cite[Appendix B]{FaFe15}), i.e., they can be written on the form $\Levy^\mu$, see \eqref{def:LevyOperators}, with a measure satisfying \eqref{muas}. \begin{lemma}\label{lem:GreenRelativisticSchrodinger} Assume $-\Operator$ is given by \eqref{eq:RelativisticSchrodinger}. Given the following statements, for $\alpha\in(0,2)$: \begin{enumerate}[{\rm (i)}] \item For some constant $C>0$, we consider $$ \frac{\dd \mu}{\dd z}(z)=\frac{C}{|z|^{\frac{N+\alpha}{2}}}K_{\frac{N+\alpha}{2}}(\kappa|z|), $$ where $K_a$ is the modified Bessel function of the second kind with index $a\in\R$. \item There exists a constant $C>0$ such that, for all $\gamma>2-\alpha$, $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq C\min\bigg\{t^{-\frac{N}{\alpha}},\frac{t}{|x-x_0|^{N+\alpha}(1+|x-x_0|)^{\gamma}}\bigg\}\qquad\text{if $0<t<1$,} $$ and $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq C\min\bigg\{t^{-\frac{N}{2}},\frac{t}{|x-x_0|^{N+\alpha}(1+|x-x_0|)^{\gamma}}\bigg\}\qquad\text{if $t>1$.} $$ \item There exists a constant $C>0$ such that $$ \mathbb{G}_{-\Levy^{\mu}}^{x_0}(x)\leq C\big(|x-x_0|^{-(N-\alpha)}+|x-x_0|^{-(N-2)}\big). $$ \end{enumerate} We have (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii).
\end{lemma} \begin{remark} It is interesting to note that $$ \frac{\dd \mu}{\dd z}(z)\eqsim\frac{1}{|z|^{N+\alpha}}\quad\text{as $|z|\to0$}\qquad\text{and}\qquad \frac{\dd \mu}{\dd z}(z)\eqsim\frac{1}{|z|^{\frac{N+\alpha+1}{2}}}\e^{-\kappa|z|}\quad\text{as $|z|\to\infty$.} $$ \end{remark} \begin{proof}[Proof of Lemma \ref{lem:GreenRelativisticSchrodinger}] \noindent(i)$\Longrightarrow$(ii). Follows by \cite[Section 5]{Szt10a}. \smallskip \noindent(ii)$\Longrightarrow$(iii). Since $$ \mathbb{G}_{-\Levy^{\mu}}^{0}(x)=\int_0^\infty \mathbb{H}_{-\Levy^{\mu}}^{0}(x,t)\dd t, $$ we will have to consider three cases $$ {\rm (I)}\qquad \frac{t}{|x|^{N+\alpha}(1+|x|)^\gamma}>t\qquad\Longleftrightarrow\qquad |x|^{N+\alpha}(1+|x|)^\gamma<1, $$ $$ {\rm (II)}\qquad \frac{t}{|x|^{N+\alpha}(1+|x|)^\gamma}=t\qquad\Longleftrightarrow\qquad |x|^{N+\alpha}(1+|x|)^\gamma=1, $$ and $$ {\rm (III)}\qquad \frac{t}{|x|^{N+\alpha}(1+|x|)^\gamma}<t\qquad\Longleftrightarrow\qquad |x|^{N+\alpha}(1+|x|)^\gamma>1. $$ In the case of (I), we have three different behaviours: \begin{equation*} \begin{split} \mathbb{G}_{-\Levy^{\mu}}^{0}(x)&\eqsim\int_0^{|x|^\alpha(1+|x|)^{\gamma\frac{\alpha}{N+\alpha}}}\frac{t}{|x|^{N+\alpha}(1+|x|)^\gamma}\dd t+\int_{|x|^\alpha(1+|x|)^{\gamma\frac{\alpha}{N+\alpha}}}^1t^{-\frac{N}{\alpha}}\dd t+\int_1^\infty t^{-\frac{N}{2}}\dd t\\ &=\frac{1}{2}|x|^{-(N-\alpha)}(1+|x|)^{-\gamma\frac{N-\alpha}{N+\alpha}}+\frac{\alpha}{N-\alpha}|x|^{-(N-\alpha)}(1+|x|)^{-\gamma\frac{N-\alpha}{N+\alpha}}-\frac{\alpha}{N-\alpha}+\frac{2}{N-2}\\ &= \frac{1}{2}\frac{N+\alpha}{N-\alpha}|x|^{-(N-\alpha)}(1+|x|)^{-\gamma\frac{N-\alpha}{N+\alpha}}+\frac{N(2-\alpha)}{(N-2)(N-\alpha)}\\ &\leq \Big(\frac{1}{2}\frac{N+\alpha}{N-\alpha}+\frac{N(2-\alpha)}{(N-2)(N-\alpha)}\Big)|x|^{-(N-\alpha)}\qquad\text{in $\{x \,:\, |x|^{N+\alpha}(1+|x|)^\gamma<1\}$,} \end{split} \end{equation*} where we used $1+|x|\geq 1$ to get $$ \frac{1}{(1+|x|)^{\gamma\frac{N-\alpha}{N+\alpha}}}\leq 1. $$ In the case of (II), we have two different behaviours: \begin{equation*} \begin{split} \mathbb{G}_{-\Levy^{\mu}}^{0}(x)&\eqsim\int_0^1t\dd t +\int_1^\infty t^{-\frac{N}{2}}\dd t=\frac{1}{2}+\frac{2}{N-2}=\frac{1}{2}\frac{N+2}{N-2}. \end{split} \end{equation*} In the case of (III), we have two different behaviours: \begin{equation*} \begin{split} \mathbb{G}_{-\Levy^{\mu}}^{0}(x)&\eqsim\int_0^{|x|^{(N+\alpha)\frac{2}{N+2}}(1+|x|)^{\gamma\frac{2}{N+2}}}\frac{t}{|x|^{N+\alpha}(1+|x|)^\gamma}\dd t+\int_{|x|^{(N+\alpha)\frac{2}{N+2}}(1+|x|)^{\gamma\frac{2}{N+2}}}^\infty t^{-\frac{N}{2}}\dd t\\ &=\frac{1}{2}|x|^{-(N-2)}\Big(\frac{|x|^{2-\alpha}}{(1+|x|)^\gamma}\Big)^{\frac{N-2}{N+2}}+\frac{2}{N-2}|x|^{-(N-2)}\Big(\frac{|x|^{2-\alpha}}{(1+|x|)^\gamma}\Big)^{\frac{N-2}{N+2}}\\ &=\frac{1}{2}\frac{N+2}{N-2}|x|^{-(N-2)}\Big(\frac{|x|^{2-\alpha}}{(1+|x|)^\gamma}\Big)^{\frac{N-2}{N+2}}. \end{split} \end{equation*} By the assumption $\gamma>2-\alpha$, we get \begin{equation*} \begin{split} \mathbb{G}_{-\Levy^{\mu}}^{0}(x)\lesssim |x|^{-(N-2)} \qquad\text{in $\{x \,:\, |x|^{N+\alpha}(1+|x|)^\gamma>1\}$.} \end{split} \end{equation*} We then collect the three cases in one estimate to complete the proof. \end{proof} Finally, we also consider the generator of a finite range isotropically symmetric $\alpha$-stable process in $\R^N$ with jumps of size larger than $1$ removed. \begin{lemma}\label{lem:GreenFiniteRange} Assume $\Operator=\Levy^\mu$ with a measure $\mu$ satisfying \eqref{muas}.
Given the following statements, for $\alpha\in(0,2)$: \begin{enumerate}[{\rm (i)}] \item There exists a constant $C>0$ such that $$ \frac{\dd \mu}{\dd z}(z)=\frac{C}{|z|^{N+\alpha}}\mathbf{1}_{|z|\leq 1}. $$ \item There exist constants $C,c>0$ and $0<C_*, R_*<1$ such that, $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq C \min\big\{t^{-\frac{N}{\alpha}},t|x-x_0|^{-(N+\alpha)}\big\}, \qquad\text{$0<t<R_*^\alpha$, $|x-x_0|\leq R_*$,} $$ $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq C \exp\Big(-c|x-x_0|\log\Big(\frac{|x-x_0|}{t}\Big)\Big), \qquad\text{$|x-x_0|\geq\max\{t/C_*,R_*\}$,} $$ and $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq Ct^{-\frac{N}{2}}\exp\Big(-c\frac{|x-x_0|^2}{t}\Big), \qquad\text{$t>R_*^\alpha$, $|x-x_0|\leq t/C_*$.} $$ \item There exists a constant $C>0$ such that $$ \mathbb{G}_{-\Levy^{\mu}}^{x_0}(x)\leq C\big(|x-x_0|^{-(N-\alpha)}+|x-x_0|^{-(N-2)}\big). $$ \end{enumerate} We have (i)$\Longrightarrow$(ii)$\Longrightarrow$(iii). \end{lemma} \begin{remark} We are again in the setting of Theorem \ref{thm:L1ToLinfinitySmoothing3} by Remark \ref{rem:BigSmallR}. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:GreenFiniteRange}] \noindent(i)$\Longrightarrow$(ii). Follows by \cite[Proposition 2.1 and Theorem 2.3]{ChKiKu08}. \smallskip \noindent(ii)$\Longrightarrow$(iii). Follows by the proof of \cite[Theorem 4.7]{ChKiKu08}. \end{proof} \subsection{On the assumption \eqref{G_2}} If the Green function decays fast enough at infinity, the function itself will not only be $L_\textup{loc}^1$ but indeed $L^1$, see \eqref{G_2}. As demonstrated in Theorem \ref{thm:AbsBounds}, such a case leads to the estimate $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-1/(m-1)}\qquad\text{for a.e. $t>0$}, $$ for weak dual solutions of \eqref{GPME} with initial data $u_0$. Let us provide some concrete examples of operators $-\Operator$ in \eqref{GPME} whose Green functions satisfy \eqref{G_2}. Actually, any operator of the form $I-\Operator$ has a Green function satisfying \eqref{G_2}. In what follows, we will explain this result, and illustrate it with other examples as well. \begin{lemma}\label{lem:GreenResolventIntegrable} Under the assumptions of Proposition \ref{prop:GreenFormulas}, $$ \|\mathbb{G}_{I-\Operator}^{x_0}\|_{L^1(\R^N)}=\|\mathbb{G}_{I-\Operator}^{0}\|_{L^1(\R^N)}\leq 1. $$ Hence, the operator $I-\Operator$ has a Green function which satisfies \eqref{G_2}. \end{lemma} \begin{proof}Assumption \eqref{Gas}, an application of the Tonelli lemma, and the fact that, by Remark \ref{rem:APrioriCasem1} (i.e., decay of $L^1$-norm), $\int \mathbb{H}_{-\Operator}^{x_0}(\cdot,t)=\int \mathbb{H}_{-\Operator}^{0}(\cdot,t)\leq 1$ for every fixed $t>0$ concludes the proof. \end{proof} \begin{remark}\label{rem:G4NotWhenConservationOfMass} We immediately see that the presence of the identity operator is crucial. In fact, if $-\Operator$ is such that the corresponding heat equation preserves mass, then $\|\mathbb{G}_{-\Operator}^{x_0}\|_{L^1(\R^N)}=\infty$ (cf. Proposition \ref{prop:GreenFormulas}). Examples of mass preserving operators are L\'evy operators \eqref{def:LevyOperators} with $c=0$. \end{remark} Let us begin by considering the extreme case $-\Operator=I$ for which the PDE in \eqref{GPME} reads \begin{equation*}\dell_tu=-u^m. \end{equation*} For any function $t\mapsto Y(t)$, that equation is an ODE of the form $$ Y'(t)= -Y(t)^{1+(m-1)} \quad\Longrightarrow\quad Y(t)\leq \Big(\frac{1}{(m-1)t}\Big)^{\frac{1}{m-1}}. 
$$ Indeed, separating variables gives $Y(t)=\big(Y(0)^{1-m}+(m-1)t\big)^{-\frac{1}{m-1}}$ for every nonnegative solution, and the bound follows by discarding the term $Y(0)^{1-m}\geq0$. Hence, by the comparison principle for \eqref{GPME} with $-\Operator=I$ (where we take $Y(0)=\infty$), we get the absolute bound $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Y(t)\leq \Big(\frac{1}{(m-1)t}\Big)^{\frac{1}{m-1}}. $$ See also Section III.C in \cite{Ver79}. The above is also contained in the following lemma: \begin{lemma}\label{eq:G4Identity} The identity operator $-\Operator=I$ has a Green function which satisfies \eqref{G_2}, i.e., $$ \|\mathbb{G}_{I}^{x_0}\|_{L^1(\R^N)}=\|\mathbb{G}_{I}^{0}\|_{L^1(\R^N)}\leq C_1<\infty. $$ \end{lemma} \begin{proof} We obtain $$ \frac{\dd }{\dd t}\int_{\R^N}\mathbb{H}_I^{x_0}(x,t)\dd x=-\int_{\R^N}\mathbb{H}_I^{x_0}(x,t)\dd x, $$ i.e., $\int \mathbb{H}_I^{x_0}(\cdot,t)=\e^{-t}$ for all $t>0$. By the definition of the Green function (cf. Proposition \ref{prop:GreenFormulas}), we conclude. \end{proof} \begin{remark} \begin{enumerate}[{\rm (a)}] \item The proof also demonstrates that the operator $-\Operator=I$ does not conserve mass. Indeed, in the corresponding heat equation, the mass decays in time. \item Moreover, it provides a trivial example of an operator which yields boundedness in the nonlinear case $(m>1)$, but not in the linear case ($m=1$). \end{enumerate} \end{remark} Of course, we can reapply the same strategy of comparison with $Y(t)$ for \eqref{GPME} with $-\Operator\mapsto I-\Operator$ to get $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq Y(t)\leq \Big(\frac{1}{(m-1)t}\Big)^{\frac{1}{m-1}} $$ independently of $\Operator$! \begin{remark}\label{rem:SmoothingLinearAbsorption} When $m=1$, we need to adopt another strategy (since $Y(t)=Y(0)\e^{-t}$), but recall that if we define $$ u(x,t):=\textup{e}^{-t}v(x,t), $$ where $v$ solves \eqref{GPME} with $m=1$, then $u$ solves \begin{equation*} \begin{cases} \dell_tu-\Operator[u]+u=0 \qquad\qquad&\text{in}\qquad \R^N\times(0,T],\\ u(\cdot,0)=u_0 \qquad\qquad&\text{on}\qquad \R^N. \end{cases} \end{equation*} Hence, in this case, $L^1$--$L^\infty$-smoothing follows as long as it holds for $v$, i.e., as long as $\Operator$ is strong enough to provide it. \end{remark} When $\kappa=1$ in \eqref{eq:RelativisticSchrodinger}, we get the L\'evy operator $$ -\Levy^{\mu_\text{RS}}:=(I-\Delta)^\frac{\alpha}{2}-I, $$ i.e., $(I-\Delta)^{\frac{\alpha}{2}}$ is of the form $(I-\Levy^{\mu_\text{RS}})$. Hence: \begin{lemma}\label{lem:OperatorWithBesselPotential} The operator $-\Operator=(I-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2)$ has a Green function which satisfies \eqref{G_2}, i.e., $$ \|\mathbb{G}_{(I-\Delta)^{\frac{\alpha}{2}}}^{x_0}\|_{L^1(\R^N)}=\|\mathbb{G}_{(I-\Delta)^{\frac{\alpha}{2}}}^{0}\|_{L^1(\R^N)}\leq C_1<\infty. $$ \end{lemma} \begin{remark}\label{rem:OperatorWithBesselPotential} The operator $-\Operator=(I-\Delta)^{\frac{\alpha}{2}}$ with $\alpha=1$ appears e.g. in \cite{BaDaQu11} for the linear equation \eqref{GPME} with $m=1$. Since the mentioned operator is of the form $(I-\Levy^{\mu_\text{RS}})$, $L^1$--$L^\infty$-smoothing holds whenever it holds for $\Levy^{\mu_\text{RS}}$ (see Remark \ref{rem:SmoothingLinearAbsorption}). The heat kernel bounds of Lemma \ref{lem:GreenRelativisticSchrodinger} then provide the result through Theorem \ref{thm:LinearEquivalences}. In the nonlinear case ($m>1$), however, the above lemma ensures that \eqref{G_2} holds and we deduce absolute bounds.
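Let us also record an elementary observation (a heuristic cross-check on our part, not needed for the proof below): in terms of the Fourier symbols used in this section, $$ \sigma_{(I-\Delta)^{\frac{\alpha}{2}}}(\xi)\geq 1\qquad\text{for all $\xi\in\R^N$,} $$ so the spectrum of $(I-\Delta)^{\frac{\alpha}{2}}$ is bounded away from zero, in line with the spectral intuition for absolute bounds discussed in the remark following Lemma \ref{lem:NashEquivalentGNS}.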
\end{remark} \begin{proof}[Proof of Lemma \ref{lem:OperatorWithBesselPotential}] Note that $$ \mathbb{G}_{(I-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x)=\mathscr{F}^{-1}\big[(1+|\cdot|^2)^{-\frac{\alpha}{2}}\big](x-x_0), $$ i.e., the Bessel potential. The result then follows by Proposition 2 in Chapter V.3 in \cite{Ste70}. Alternatively, we can simply note that $\mathbb{G}_{(I-\Delta)^{\alpha/2}}^{x_0}=\mathbb{G}_{I-\Levy^{\mu_\text{RS}}}^{x_0}$ and $-\Levy^{\mu_\text{RS}}$ is such that the heat equation has $L^1$-decay, cf. Lemma \ref{lem:GreenResolventIntegrable}. \end{proof} Again, the operator $(I-\Delta)^{\frac{\alpha}{2}}$ related to the Bessel potential does not conserve mass, precisely because $(I-\Delta)^\frac{\alpha}{2}-I$ does. Moreover, for the latter operator, assumption \eqref{G_2} cannot hold (cf. Remark \ref{rem:G4NotWhenConservationOfMass}) and we can write the PDE in \eqref{GPME} as $$ \dell_t u+(I-\Delta)^\frac{\alpha}{2}[u^m]=u^m. $$ This is in contrast with the operators $I-\Operator$ of Lemma \ref{lem:GreenResolventIntegrable}, for which the PDE in \eqref{GPME} reads $$ \dell_tu-\Operator[u^m]=-u^m. $$ Taking $-\Operator=(I-\Delta)^\frac{\alpha}{2}$ in the latter, we see that assumption \eqref{G_2} relies on either the (strong) absorption term being present or the operator itself being \emph{positive}, or, of course, both. \subsection{On the assumption \eqref{G_3}} By Corollary \ref{cor:GreenNonnegative}, $\mathbb{G}_{I-\Operator}^{x_0}$ has at least as good integrability properties as $\mathbb{G}_{-\Operator}^{x_0}$. It is, moreover, always defined for descent operators $-\Operator$, see the discussion in Section \ref{sec:InverseOfLinearmAccretiveDirichlet}. It should therefore come as no surprise that assumption \eqref{G_3} is quite general; however, as shown in Theorem \ref{thm:L1ToLinfinitySmoothing}, it only provides a rather poor smoothing estimate: \begin{equation}\label{eq:PoorSmoothing} \|u(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-\frac{1}{m-1}} +\|u_0\|_{L^1(\R^N)}\qquad\text{for a.e. $t>0$,} \end{equation} for weak dual solutions of \eqref{GPME} with initial data $u_0$. Let us provide some concrete examples of operators $-\Operator$ in \eqref{GPME} whose Green functions satisfy \eqref{G_3}, and let us also see how to improve the above estimate. To continue, we advise the reader to recall \eqref{eq:LpNormOfGreenOfResolvent}. \begin{lemma}\label{lem:GreenOfResolventOfFractionalLaplacian} The fractional Laplacian/Laplacian $(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2]$ has a Green function which satisfies \eqref{G_3}, i.e., $$ \|\mathbb{G}_{I+(-\Delta)^{\frac{\alpha}{2}}}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I+(-\Delta)^{\frac{\alpha}{2}}}^{0}\|_{L^p(\R^N)}\leq C_{p}<\infty $$ for some $p\in(1,N/(N-\alpha))$.
\end{lemma} \begin{proof} We use \eqref{GammaFunction} with $\vartheta:=\frac{N}{\alpha}-1$ to obtain \begin{equation}\label{eq:BoundOnHeatKernelFractionalLaplacian} \begin{split} &\|\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(\cdot,t)\|_{L^\infty(\R^N)}=\|\mathscr{F}^{-1}[\e^{-t|\xi|^\alpha}]\|_{L^\infty(\R^N)}\leq \int_{\R^N}\e^{-t|\xi|^\alpha}\dd \xi\\ &\eqsim\int_0^{\infty}\e^{-tr^\alpha}r^{N-1}\dd r=\frac{1}{\alpha}\int_0^{\infty}\e^{-tr}r^{\vartheta}\dd r=\frac{1}{\alpha}t^{-\frac{N}{\alpha}}\Gamma\Big(\frac{N}{\alpha}\Big), \end{split} \end{equation} and hence, by Remark \ref{rem:APrioriCasem1} (i.e., $L^1$-decay for $m=1$), \begin{equation*} \begin{split} \|\mathbb{G}_{I+(-\Delta)^{\frac{\alpha}{2}}}^{0}\|_{L^p(\R^N)}&\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t\\ &\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(\cdot,t)\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\|\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(\cdot,t)\|_{L^1(\R^N)}^{\frac{1}{p}}\dd t\\ &\lesssim \int_0^\infty \e^{-t}t^{-\frac{N}{\alpha}\frac{p-1}{p}}\dd t=\tau^{-\frac{N}{\alpha}\frac{p-1}{p}+1}\int_0^\infty \e^{-\tau r}r^{-\frac{N}{\alpha}\frac{p-1}{p}}\dd r, \end{split} \end{equation*} which is finite if $p<N/(N-\alpha)$ due to \eqref{GammaFunction} again. \end{proof} The fractional Laplacian, however, satisfies our strongest assumption \eqref{G_1} as well. We will therefore consider an operator which satisfies \eqref{G_3}, but for which it is not possible to verify \eqref{G_1} or \eqref{G_1'}. To that end, consider the sum of onedimensional fractional Laplacians: \begin{equation}\label{eq:SumFracLap} -\Operator=\sum_{i=1}^N(-\dell_{x_ix_i}^2)^{\frac{\alpha_i}{2}}\qquad\text{with $\alpha_i\in(0,2)$.} \end{equation} It can be written on the form $\Levy^\mu$ with $\mu$, for some constant $C>0$, given by $$ \dd \mu(z)=C\sum_{i=1}^N\frac{1}{|z_i|^{1+\alpha_i}}\dd z_i\prod_{j\neq i}\dd \delta_0(z_j). $$ This measure satisfies \eqref{muas} since each onedimensional fractional Laplacian measure does, and we have: \begin{lemma}\label{lem:GreenAnisotropicLaplacians} Assume $\Operator$ is given by \eqref{eq:SumFracLap} and $$ \sum_{i=1}^N\frac{1}{\alpha_i}>1\qquad\text{where $\alpha_i\in(0,2)$}. $$ Then $$ \|\mathbb{G}_{I-\Levy^\mu}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I-\Levy^\mu}^{0}\|_{L^p(\R^N)}\leq C_{p}<\infty $$ for some $$ p\in\Big(1,\frac{\sum_{i=1}^N\frac{1}{\alpha_i}}{\sum_{i=1}^N\frac{1}{\alpha_i}-1}\Big). $$ \end{lemma} \begin{remark}\label{ref:GreenAnisotropicLaplacians} \begin{enumerate}[{\rm (a)}] \item Note that if $\alpha_i=\alpha$ for all $i\in\{1,\ldots,N\}$, then $$ \sum_{i=1}^N\frac{1}{\alpha_i}>1\qquad\Longrightarrow\qquad \frac{N}{\alpha}>1. $$ We thus recover the condition of Lemma \ref{lem:GreenOfResolventOfFractionalLaplacian}. \item Various extensions within the framework of anisotropic fractional Laplacians can be found in \cite{BoSz07, Szt11, Xu13, BoSzKn20}. 
\item By \cite[Section 3]{BaCh06}, \begin{equation}\label{eq:ProductOf1DFracLap} \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)=\prod_{i=1}^N\mathbb{H}_{(-\dell_{x_ix_i}^2)^{\frac{\alpha_i}{2}}}^{x_0}(x,t)\leq C\prod_{i=1}^N\rho_i^{x_{0,i}}(x_i,t), \end{equation} where $$ \rho_i^{x_{0,i}}(x_i,t)=\min\big\{t^{-\frac{1}{\alpha_i}},t|x_i-x_{0,i}|^{-(1+\alpha_i)}\big\}.\footnotemark $$ \footnotetext{Optimal bounds when $\alpha_i=\alpha$ can be found in \cite{KaKiKu19}.} However, the example stated at the end of \cite{BoSz05} shows that for $\alpha_i=\alpha$ with $\alpha\leq (N-1)/2<N$, small times (hence all times), and the choice $x=\xi e_1$ with $\xi>0$, yields $\mathbb{G}_{-\Levy^\mu}^{0}(x,t)=\infty$. We thus conclude that at least in this case, it is not possible to verify the second parts of \eqref{G_1} or \eqref{G_1'}. \item Solutions of \eqref{GPME} with $-\Operator$ defined by \eqref{eq:SumFracLap} satisfy \eqref{eq:PoorSmoothing}. Once this estimate is established, we can, moreover, use the scaling of the operator to get it on an invariant form. We restrict to the case $\alpha_i=\alpha$ for all $i\in\{1,\ldots,N\}$. As in Remark \ref{rem:L1ToLinfinitySmoothing2}, if $u$ solves \eqref{GPME}, then $$ u_{\kappa,\Xi,\Lambda}(x,t):=\kappa u(\Xi x,\Lambda t) \qquad\text{for all $\kappa,\Xi,\Lambda>0$} $$ also solves \eqref{GPME} as long as $\kappa^{m-1}\Xi^\alpha=\Lambda$. This means that $$ \|u_{\kappa,\Xi,\Lambda}(\cdot,t)\|_{L^\infty(\R^N)}\lesssim t^{-\frac{1}{m-1}}+\|u_{\kappa,\Xi,\Lambda}(\cdot,0)\|_{L^1(\R^N)} $$ or $$ \|u(\Xi\cdot,\Lambda t)\|_{L^\infty(\R^N)}\lesssim \Xi^{\frac{\alpha}{m-1}}(\Lambda t)^{-\frac{1}{m-1}}+\Xi^{-N}\|u(\cdot,0)\|_{L^1(\R^N)}. $$ The choice $\Xi=\|u_0\|_{L^1(\R^N)}^{(m-1)\theta}(\Lambda t)^\theta$ and $\Lambda t\mapsto t$, then gives $$ \|u(\cdot,t)\|_{L^\infty(\R^N)}\leq\frac{C}{t^{N\theta}}\|u_0\|_{L^1(\R^N)}^{\alpha\theta}\qquad\text{for a.e. $t>0$}, $$ where $\theta=(\alpha+N(m-1))^{-1}$ and $C$ now depending on $C_p$ instead of $K_1$ and $K_2$.\footnote{In the case $\alpha_i=\alpha$ for all $i\in\{1,\ldots,N\}$, the bilinear form of the operator \eqref{eq:SumFracLap} is comparable to the bilinear form of the fractional Laplacian, and one could instead use the Sobolev inequality for the latter operator (see e.g. \cite{DPQuRi22}) together with a Moser iteration to obtain the $L^1$--$L^\infty$-smoothing.} This in turn implies the corresponding Nash inequality (see Section \ref{sec:EquivalencesGNSHomogeneous}). Note that, even in the case $\alpha_i\neq \alpha_j$, one can deduce the Sobolev inequality, from which the Nash inequality follows, by scratch \cite[Theorem 2.4]{ChKa20}. This will then ensure the $L^1$--$L^\infty$-smoothing estimate both in the linear and nonlinear case. \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:GreenAnisotropicLaplacians}] Recall that the heat kernel is given by \eqref{eq:ProductOf1DFracLap}. 
Since we are considering a L\'evy operator, it provides $L^1$-decay, and by \eqref{eq:BoundOnHeatKernelFractionalLaplacian} with $N=1$, we get \begin{equation*} \begin{split} &\|\mathbb{G}_{I-\Levy^\mu}^{0}\|_{L^p(\R^N)}\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t\\ &\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^1(\R^N)}^{\frac{1}{p}}\dd t= \int_{0}^\infty\textup{e}^{-t}\bigg\|\prod_{i=1}^N \mathbb{H}_{(-\dell_{x_ix_i}^2)^{\alpha_i/2}}^{0}(\cdot_i,t)\bigg\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\dd t\\ &\leq \int_{0}^\infty\textup{e}^{-t}\prod_{i=1}^N\| \mathbb{H}_{(-\dell_{x_ix_i}^2)^{\alpha_i/2}}^{0}(\cdot_i,t)\|_{L^\infty(\R)}^{\frac{p-1}{p}}\dd t\lesssim \int_{0}^\infty\textup{e}^{-t}\prod_{i=1}^N\bigg(\frac{1}{\alpha_i}t^{-\frac{1}{\alpha_i}}\Gamma\Big(\frac{1}{\alpha_i}\Big)\bigg)^{\frac{p-1}{p}}\dd t\\ &\eqsim \int_{0}^\infty\textup{e}^{-t}t^{-\frac{p-1}{p}\sum_{i=1}^n\frac{1}{\alpha_i}}\dd t. \end{split} \end{equation*} Again, by \eqref{GammaFunction}, the result follows. \end{proof} Note that we have exploited the fact that $\mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)$ indeed has the on-diagonal upper bound $t^{-\sum_{i=1}^N\frac{1}{\alpha_i}}$, see e.g. Corollary 3.2 in \cite{Xu13}. Since the off-diagonal bound cannot give a useful Green function estimate (in all cases), we resort to our assumption \eqref{G_3}. We refer the reader to Remark \ref{rem:LinearEquivalences} which provides various examples of on-diagonal bounds. Let us turn our attention to another interesting example where only a useful on-diagonal bound can be deduced: \begin{lemma}\label{lem:OperatorWithDerivativesAtZero} Assume $\Operator=\Levy^\mu$ with a measure $\mu$ satisfying \eqref{muas} and, for $\alpha\in(0,2)$ and constants $C_1,C_2, C_3>0$, $$ \frac{C_1}{|z|^{N+\alpha}}\mathbf{1}_{|z|\leq 1}\leq\frac{\dd \mu}{\dd z}(z)\leq \frac{C_2}{|z|^{N+\alpha}}\mathbf{1}_{|z|\leq 1}\qquad\text{and}\qquad \frac{\dd \mu}{\dd z}(z)\leq C_3\mathbf{1}_{|z|>1}. $$ Then, there exists a constant $C>0$ such that $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)\leq Ct^{-\frac{N}{\alpha}}\e^{t}, $$ and, moreover, $$ \|\mathbb{G}_{I-\Levy^\mu}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I-\Levy^\mu}^{0}\|_{L^p(\R^N)}\leq C_{p}<\infty $$ for some $p\in(1,N/(N-\alpha))$. \end{lemma} \begin{proof} The estimate on the heat kernel follows by the beginning of Section 2 in \cite{ChKiKu09} with $V(r)=r^N$ and $\phi(r)=r^\alpha$, and since the heat kernel is proven to be H\"older continuous in Section 3 of the same reference (so that the exceptional set is empty). Note that the proof uses that $\mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)=\e^{t}\mathbb{H}_{I-\Levy^{\mu}}^{x_0}(x,t)$. We then get \begin{equation*} \begin{split} \|\mathbb{G}_{I-\Levy^\mu}^{0}\|_{L^p(\R^N)}&\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t\\ &\leq \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^\infty(\R^N)}^{\frac{p-1}{p}}\|\mathbb{H}_{-\Levy^\mu}^{0}(\cdot,t)\|_{L^1(\R^N)}^{\frac{1}{p}}\dd t\\ &\eqsim \int_{0}^\infty\textup{e}^{-t}\big(t^{-\frac{N}{\alpha}}\e^{t}\big)^{\frac{p-1}{p}}\dd t=\int_{0}^\infty\textup{e}^{-\frac{t}{p}}t^{-\frac{N}{\alpha}\frac{p-1}{p}}\dd t. \end{split} \end{equation*} Again, by \eqref{GammaFunction}, the result follows. 
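To spell out this last step (an elementary computation which we include only for convenience), applying \eqref{GammaFunction} with $\vartheta=-\frac{N}{\alpha}\frac{p-1}{p}$ and $t=\frac{1}{p}$ (and with the integration variable relabelled) yields $$ \int_{0}^\infty\textup{e}^{-\frac{t}{p}}t^{-\frac{N}{\alpha}\frac{p-1}{p}}\dd t=\Gamma\Big(1-\frac{N}{\alpha}\frac{p-1}{p}\Big)\,p^{1-\frac{N}{\alpha}\frac{p-1}{p}}, $$ which is finite if and only if $\frac{N}{\alpha}\frac{p-1}{p}<1$, that is, if and only if $p<\frac{N}{N-\alpha}$.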
\end{proof} \subsection{A nonexample of our theory} We will now consider a L\'evy operator which does not satisfy any of \eqref{G_1}--\eqref{G_3}. Consider the generator of a subordinate Brownian motion with Fourier symbol $\phi(|\xi|^2)$ where $\phi(\lambda):=\log(1+\lambda^{\frac{\alpha}{2}})$. This process is known as a rotationally invariant \emph{geometric} $\alpha$-stable process, see Section 5 in \cite{BoByKuRySoVo09}. \begin{lemma}\label{lem:GeometricAlphaStableGreenResolvent} Assume $\Operator=\Levy^\mu$ with a measure $\mu$ satisfying \eqref{muas}. If $\Levy^\mu$ has Fourier symbol given by $$ \log(1+|\xi|^\alpha), $$ then the heat kernel is given by $$ \mathbb{H}_{-\Levy^{\mu}}^{x_0}(x,t)=\frac{1}{\Gamma(t)}\int_0^\infty \mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{x_0}(x,s)s^{t-1}\e^{-s}\dd s. $$ Moreover, $$ \|\mathbb{G}_{I-\Levy^{\mu}}^{x_0}\|_{L^1(\R^N)}=\|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^1(\R^N)}\leq C_{1}<\infty. $$ \end{lemma} \begin{remark} The operator is indeed a L\'evy operator, therefore it is not surprising that it provides $L^1$-decay. It is, moreover, worth noting that Theorems 5.45 and 5.46 in \cite{BoByKuRySoVo09} establish that the density of the L\'evy measure corresponding to the rotationally invariant geometric $\alpha$-stable process satisfies $$ \frac{\dd\mu}{\dd z}(z)\eqsim \frac{1}{|z|^N}\quad\text{as $|z|\to0$}\qquad\text{and}\qquad\frac{\dd\mu}{\dd z}(z)\eqsim \frac{1}{|z|^{N+\alpha}}\quad\text{as $|z|\to\infty$}. $$ In fact, $$ \frac{\dd\mu}{\dd z}(z)=\int_0^\infty \mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(z,s)s^{-1}\e^{-s}\dd s, $$ see equation (5.69) in \cite{BoByKuRySoVo09}. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:GeometricAlphaStableGreenResolvent}] The formula for the heat kernel is given by equation (5.68) in \cite{BoByKuRySoVo09}. Moreover, since $(-\Delta)^{\frac{\alpha}{2}}$ actually provides conservation of mass, we get \begin{equation*} \begin{split} \|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^1(\R^N)}&= \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^{\mu}}^{0}(\cdot,t)\|_{L^1(\R^N)}\dd t\\ &= \int_{0}^\infty\textup{e}^{-t}\frac{1}{\Gamma(t)}\int_0^\infty \|\mathbb{H}_{(-\Delta)^{\frac{\alpha}{2}}}^{0}(\cdot,s)\|_{L^1(\R^N)}s^{t-1}\e^{-s}\dd s\dd t\\ &= \int_{0}^\infty\textup{e}^{-t}\frac{1}{\Gamma(t)}\int_0^\infty s^{t-1}\e^{-s}\dd s\dd t=\int_{0}^\infty\textup{e}^{-t}\frac{\Gamma(t)}{\Gamma(t)}\dd t=1. \qedhere \end{split} \end{equation*} \end{proof} The proof also demonstrates that \eqref{G_2} cannot hold, and moreover, neither can \eqref{G_1}, \eqref{G_1'}, and \eqref{G_3}: \begin{lemma}\label{lem:GeometricAlphaStableGreenResolvent2} Assume $\Operator=\Levy^\mu$ with a measure $\mu$ satisfying \eqref{muas}. If $\Levy^\mu$ has Fourier symbol given by $$ \log(1+|\xi|^\alpha), $$ then there is some $R>0$ such that $$ \int_{B_R(x_0)}\mathbb{G}_{-\Levy^{\mu}}^{x_0}(x)\dd x> CR^{\alpha} \qquad\text{for all $\alpha\in(0,2)$ and all $C>0$,} $$ and $$ \|\mathbb{G}_{I-\Levy^{\mu}}^{x_0}\|_{L^p(\R^N)}=\|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^p(\R^N)}=\infty\qquad\text{for all $p>1$.} $$ \end{lemma} \begin{proof} By Theorem 5.35 in \cite{BoByKuRySoVo09}, $$ \mathbb{G}_{-\Levy^{\mu}}^{0}(x)\eqsim |x-x_0|^{-N}\big(-\log(|x|^2))^{-2}\eqsim|x-x_0|^{-N}\big(\log(|x|))^{-2}\qquad\text{as $|x|\to0$.} $$ Then for small enough $0<R<<1$, $$ \int_{B_R(0)}\mathbb{G}_{-\Levy^{\mu}}^{0}(x)\dd x\eqsim \int_0^Rr^{-1}(\log(r))^{-2}\dd r\eqsim\Big(\log\Big(\frac{1}{R}\Big)\Big)^{-1}. 
$$ Now, the statement $$ \Big(\log\Big(\frac{1}{R}\Big)\Big)^{-1}> CR^{\alpha}\qquad\text{for some $0<R<<1$} $$ is equivalent to $$ \frac{1}{R}< \textup{exp}\Big(C\Big(\frac{1}{R}\Big)^{\alpha}\Big)\qquad\text{for some $0<R<<1$,} $$ which is clearly true for all $\alpha\in(0,2)$ and all $C>0$. We already know that $$ \|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^p(\R^N)}= \int_{0}^\infty\textup{e}^{-t}\|\mathbb{H}_{-\Levy^{\mu}}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t. $$ By Theorem 5.5.2 in \cite{BoByKuRySoVo09}, for all $0<t\leq \min\{1,N/(2\alpha)\}$, \begin{equation*} \begin{split} \|\mathbb{H}_{-\Levy^{\mu}}^{0}(\cdot,t)\|_{L^p(\R^N)}^p&\geq Ct^p\bigg(\int_{|x|<1}|x|^{-p(N-t\alpha)}\dd x+\int_{|x|>1}|x|^{-p(N+\alpha)}\dd x\bigg)\\ &\gtrsim t^p\int_0^1r^{N-1-p(N-t\alpha)}\dd r. \end{split} \end{equation*} If $1<p\leq 2$, then $$ \frac{N}{\alpha}\frac{p-1}{p}\leq \min\Big\{1,\frac{N}{2\alpha}\Big\}, $$ and \begin{equation*} \begin{split} \|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^p(\R^N)}&\geq\int_0^{\frac{N}{\alpha}\frac{p-1}{p}}\textup{e}^{-t}\|\mathbb{H}_{-\Levy^{\mu}}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t\\ &\gtrsim \int_0^{\frac{N}{\alpha}\frac{p-1}{p}}\textup{e}^{-t}t\bigg(t^p\lim_{\xi\to0^+}\int_\xi^1r^{N-1-p(N-t\alpha)}\dd r\bigg)^\frac{1}{p}\dd t\\ &=\int_0^{\frac{N}{\alpha}\frac{p-1}{p}}\textup{e}^{-t}t^2\bigg(\frac{1}{p\alpha}\frac{1}{\frac{N}{\alpha}\frac{(p-1)}{p}- t}\bigg)^{\frac{1}{p}}\bigg(\lim_{\xi\to0^+}\frac{1}{\xi^{p\alpha(\frac{N}{\alpha}\frac{p-1}{p}-t)}}-1\bigg)^{\frac{1}{p}}\dd t\\ &\geq \int_0^{\frac{N}{\alpha}\frac{p-1}{p}}\textup{e}^{-t}t^2\bigg(\frac{1}{N(p-1)}\bigg)^{\frac{1}{p}}\bigg(\lim_{\xi\to0^+}\frac{1}{\xi^{p\alpha(\frac{N}{\alpha}\frac{p-1}{p}-t)}}-1\bigg)^{\frac{1}{p}}\dd t=\infty. \end{split} \end{equation*} If $p>2$, then $$ \frac{N}{\alpha}\frac{p-1}{p}>\frac{N}{\alpha}\frac{1}{2}, $$ and we simply consider $$ \|\mathbb{G}_{I-\Levy^{\mu}}^{0}\|_{L^p(\R^N)}\geq\int_0^{\min\{1,N/(2\alpha)\}}\textup{e}^{-t}\|\mathbb{H}_{-\Levy^{\mu}}^{0}(\cdot,t)\|_{L^p(\R^N)}\dd t $$ to reach the same conclusion. \end{proof} \section*{Acknowledgements} J. Endal has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement no. 839749 ``Novel techniques for quantitative behavior of convection-diffusion equations (techFRONT)'', and from the Research Council of Norway under the MSCA-TOPP-UT grant agreement no. 312021. M. Bonforte was partially supported by the Projects MTM2017-85757-P and PID2020-113596GB-I00 (Spanish Ministry of Science and Innovation). M. Bonforte moreover acknowledges financial support from the Spanish Ministry of Science and Innovation, through the ``Severo Ochoa Programme for Centres of Excellence in R\&D'' (CEX2019-000904-S) and by the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement no. 777822. We would like to thank Nikita Simonov for fruitful discussions, and the anonymous referee for a thorough reading and insightful suggestions that helped us improve the paper. {\bf Conflict of interest statement. }On behalf of all authors, the corresponding author states that there is no conflict of interest. {\bf Data Availability Statements. }All data generated or analysed during this study are included in this published article. 
\appendix \section{Technical lemmas} Implicitly, we use the following in the Moser iteration: \begin{lemma}\label{lem:LIAsLimitOfLp} Assume $K(p)>0$ is such that $\lim_{p\to\infty}K(p)<\infty$, $p_0\geq1$, and $$ \psi\in L^p(\R^N) \qquad\text{and}\qquad \|\psi\|_{L^p(\R^N)}\leq K(p) \quad\text{for all $p\in[p_0,\infty)$.} $$ Then $\psi\in L^\infty(\R^N)$, and moreover, $$ \|\psi\|_{L^\infty(\R^N)}\leq \lim_{p\to\infty}K(p). $$ \end{lemma} \begin{proof} Define $K(\infty):=\lim_{p\to\infty}K(p)$, and consider $$ \Psi:=|\psi|\mathbf{1}_{\{|\psi|\leq K(\infty)+1\}}+(K(\infty)+1)\mathbf{1}_{\{|\psi|>K(\infty)+1\}}=\min\{|\psi|,K(\infty)+1\} $$ from which it follows that $\Psi\leq K(\infty)+1$ and $\Psi\leq |\psi|$. Then $\Psi\in L^\infty(\R^N)$ and $\|\Psi\|_{L^p}\leq \|\psi\|_{L^p}\leq K(p)$ for all $p\in[p_0,\infty)$, and hence, $\|\Psi\|_{L^\infty}=\lim_{p\to\infty}\|\Psi\|_{L^p}\leq K(\infty)$. But then $\min\{|\psi|,K(\infty)+1\}\leq K(\infty)$ which implies $\|\psi\|_{L^\infty}\leq K(\infty)$. \end{proof} The next lemma is classical, but we state and prove it for completeness. \begin{lemma}[A DeGiorgi-type lemma]\label{lem:DeGiorgi} Assume that $z\in \R^N$, and that $f\in L^\infty(B_{3R}(z))$ with $R>0$ fixed. If, for any $R\leq\rho<\bar{\rho}\leq 3R$, some $\delta\in(0,1)$, and some $M>0$ independent of $\rho, \bar{\rho}$, we have that $$ \|f\|_{L^\infty(B_{\rho}(z))}\leq \delta \|f\|_{L^\infty(B_{\bar{\rho}}(z))}+M, $$ then $$ \|f\|_{L^\infty(B_{\rho}(z))}\leq \frac{1}{1-\delta}M. $$ \end{lemma} \begin{proof} We follow the proof of Lemma 1.2 in Chapter 4 of \cite{HaLi97}. Fix $\rho\geq R$. For some $0<\eta<1$ we consider the sequence $\{\rho_i\}$ defined recursively by $$ \rho_0=\rho \qquad\text{and}\qquad \rho_{i+1}:=\rho_i+(1-\eta)\eta^{i}\rho. $$ Note that $\rho_\infty=2\rho$. Since $2\rho=\rho_\infty>\ldots>\rho_1>\rho_0=\rho$, \begin{equation*} \begin{split} \|f\|_{L^\infty(B_{\rho}(z))}&=\|f\|_{L^\infty(B_{\rho_0}(z))}\leq \delta \|f\|_{L^\infty(B_{\rho_1}(z))}+M\leq \delta^2\|f\|_{L^\infty(B_{\rho_2}(z))}+(1+\delta)M\\ &\leq \ldots\leq \delta^k\|f\|_{L^\infty(B_{\rho_k}(z))}+M\sum_{i=0}^{k-1}\delta^{i}. \end{split} \end{equation*} The conclusion follows by letting $k\to\infty$. \end{proof} \section{\texorpdfstring{$L^1$}{L1}--\texorpdfstring{$L^\infty$}{Linfty}-smoothing controls \texorpdfstring{$L^q$}{Lq}--\texorpdfstring{$L^p$}{Lp}-smoothing}\label{sec:SmoothingImplications} Throughout this section, $F>0$ is some nonincreasing function and $C>0$ is some constant (which might change from line to line) not depending on any norm of $u$. \begin{theorem}\label{thm:SmoothingImplications} Assume that $0\leq u_0=u(\cdot,0)\in (L^1\cap L^\infty)(\R^N)$, and that, for a.e. $t_1> t_0\geq0$, $$ \|u(t_1)\|_{L^\infty}\leq F(t_1-t_0)\|u(t_0)\|_{L^q}^\gamma\qquad\text{for some $0\leq \gamma<1$ and $q\in[1,\infty)$.} $$ Then, for all $1\leq q\leq p\leq \infty$, $$ \|u(t_1)\|_{L^{p}}\leq G(t_1-t_0)H(\|u(t_0)\|_{L^{q}}), $$ where $G,H\geq0$ are functions depending on $F, \gamma, p, q$ and $\gamma, p, q$, respectively, which has to be determined in each case. \end{theorem} The proof is a consequence of several results in this section. We start by investigating immediate consequences of $L^q$--$L^\infty$-smoothing effects through Young and H\"older inequalities, in addition to a DeGiorgi type lemma. \begin{lemma}\label{lem:SmoothingImplications} Assume that $0\leq u_0=u(\cdot,0)\in (L^1\cap L^\infty)(\R^N)$, and that, for a.e. 
$t_1> t_0\geq0$, $$ \|u(t_1)\|_{L^\infty}\leq F(t_1-t_0)\|u(t_0)\|_{L^q}^\gamma\qquad\text{for some $0\leq \gamma<1$ and $q\in[1,\infty)$.} $$ Then: \begin{enumerate}[{\rm (a)}] \item \textup{($L^{\leq q}$--$L^\infty$-smoothing)} $$ \|u(t_1)\|_{L^\infty}\leq CF(t_1-t_0)^{\frac{q}{(1-\gamma)q+\gamma r}}\|u(t_0)\|_{L^r}^{\frac{\gamma r}{(1-\gamma)q+\gamma r}}\qquad\text{for $r\in[1,q]$.} $$ \item \textup{($L^{\leq q}$--$L^{> q}$-smoothing)} $$ \|u(t_1)\|_{L^p}\leq CF(t_1-t_0)^{\frac{q}{p}\frac{p-r}{(1-\gamma)q+\gamma r}}\|u(t_0)\|_{L^r}^{\frac{r}{p}\frac{(1-\gamma)q+\gamma p}{(1-\gamma)q+\gamma r}}\qquad\text{for $p\in(r,\infty)$ and $r\in[1,q]$.} $$ \end{enumerate} \end{lemma} \begin{remark} The homogeneous smoothing estimates (cf. Theorem \ref{thm:L1ToLinfinitySmoothing2}(a) and Theorem \ref{thm:AbsBounds}) can respectively be recovered by choosing $q=1=r$ and: \begin{enumerate}[{\rm (i)}] \item If $\gamma=\alpha\theta$, then $$ \frac{\gamma(p-1)+1}{p}=\frac{1}{p}\frac{\theta_1}{\theta_p}. $$ \item If $\gamma=0$, then $$ \frac{\gamma(p-1)+1}{p}=\frac{1}{p}. $$ \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:SmoothingImplications}] \noindent (a) Since $q\geq r\geq 1$, we have $$ \|u(t_0)\|_{L^q}^{\gamma}\leq \|u(t_0)\|_{L^\infty}^{\frac{\gamma(q-r)}{q}}\|u(t_0)\|_{L^r}^{\frac{\gamma r}{q}}. $$ Applying the Young inequality \eqref{eq:Young} with $\vartheta=\frac{q}{\gamma(q-r)}>1$ yields $$ \|u(t_1)\|_{L^\infty}\leq \frac{\gamma(q-r)}{q}\|u(t_0)\|_{L^\infty}+CF(t_1-t_0)^{\frac{q}{(1-\gamma)q+\gamma r}}\|u(t_0)\|_{L^r}^{\frac{\gamma r}{(1-\gamma)q+\gamma r}}. $$ Since $\gamma(q-r)/q<1$, we can reabsorb the $L^\infty$-norm by a variant of a classical lemma due to DeGiorgi (cf. Lemma \ref{lem:DeGiorgi}). \smallskip \noindent (b) When $p\geq r\geq 1$, we use Proposition \ref{prop:APriori}(b)(ii) to get $$ \|u(t_1)\|_{L^p}\leq \|u(t_1)\|_{L^\infty}^{\frac{p-r}{p}}\|u(t_1)\|_{L^r}^{\frac{r}{p}}\leq \|u(t_1)\|_{L^\infty}^{\frac{p-r}{p}}\|u(t_0)\|_{L^r}^{\frac{r}{p}}. $$ Now, part (a) yields \begin{equation*} \begin{split} \|u(t_1)\|_{L^p}&\leq CF(t_1-t_0)^{\frac{q}{(1-\gamma)q+\gamma r}\frac{p-r}{p}}\|u(t_0)\|_{L^r}^{\frac{\gamma r}{(1-\gamma)q+\gamma r}\frac{p-r}{p}}\|u(t_0)\|_{L^r}^{\frac{r}{p}}\\ &=CF(t_1-t_0)^{\frac{q}{p}\frac{p-r}{(1-\gamma)q+\gamma r}}\|u(t_0)\|_{L^r}^{\frac{r}{p}\frac{(1-\gamma)q+\gamma p}{(1-\gamma)q+\gamma r}}. \end{split} \end{equation*} The proof is complete. \end{proof} As we saw, the easy consequences of smoothing effects, is to loose integrability on the right-hand side. Now, we instead want to gain integrability, i.e., we want $L^1$--$L^\infty$ to $L^{\geq 1}$--$L^\infty$. To gain integrability, however, requires a refined technique. In bounded domains $|\Omega|<\infty$, this is usually accomplished by the fact that $L^{\tilde{q}}\subseteq L^q$ with $1\leq q\leq \tilde{q}\leq \infty$, i.e., by the H\"older inequality $$ \|u\|_{L^q(\Omega)}^q=\int_{\Omega}|u|^q\dd x\leq \bigg(\int_{\Omega}\big(|u|^q\big)^{\frac{\tilde{q}}{q}}\dd x\bigg)^{\frac{q}{\tilde{q}}}\bigg(\int_{\Omega}\big(1\big)^{\frac{\tilde{q}}{\tilde{q}-q}}\dd x\bigg)^{\frac{\tilde{q}-q}{\tilde{q}}}=\|u\|_{L^{\tilde{q}}(\Omega)}^q|\Omega|^{\frac{\tilde{q}-q}{\tilde{q}}}. $$ So, while such a statement is trivial in bounded domains, the story is quite different in $\R^N$. The reason for this can be seen by the following estimate: $$ \|u\|_{L^q}^q=\int_{B_R(0)}|u|^q+\int_{\R^N\setminus B_R(0)}|u|^q. 
As we saw, the easy consequence of smoothing effects is to lose integrability on the right-hand side. Now, we instead want to gain integrability, i.e., to pass from $L^1$--$L^\infty$ to $L^{\geq 1}$--$L^\infty$. Gaining integrability, however, requires a more refined technique. In bounded domains $|\Omega|<\infty$, this is usually accomplished by the fact that $L^{\tilde{q}}\subseteq L^q$ with $1\leq q\leq \tilde{q}\leq \infty$, i.e., by the H\"older inequality $$ \|u\|_{L^q(\Omega)}^q=\int_{\Omega}|u|^q\dd x\leq \bigg(\int_{\Omega}\big(|u|^q\big)^{\frac{\tilde{q}}{q}}\dd x\bigg)^{\frac{q}{\tilde{q}}}\bigg(\int_{\Omega}\big(1\big)^{\frac{\tilde{q}}{\tilde{q}-q}}\dd x\bigg)^{\frac{\tilde{q}-q}{\tilde{q}}}=\|u\|_{L^{\tilde{q}}(\Omega)}^q|\Omega|^{\frac{\tilde{q}-q}{\tilde{q}}}. $$ So, while such a statement is trivial in bounded domains, the story is quite different in $\R^N$. The reason for this can be seen by the following estimate: $$ \|u\|_{L^q}^q=\int_{B_R(0)}|u|^q+\int_{\R^N\setminus B_R(0)}|u|^q. $$ On the small ball, we use that $L^{\tilde{q}}\subseteq L^q$, but what to do on the complementary set? In fact, there the ``natural'' ordering is $$ \frac{1}{(1+|x|)^{\tilde{q}}}\leq \frac{1}{(1+|x|)^q}, $$ i.e., the opposite of the one on the small ball. Hence, $$ \int_{\R^N\setminus B_R(0)}|u|^q\leq C_R\bigg(\int_{\R^N\setminus B_R(0)}|u|^{\tilde{q}}\bigg)^{\frac{q}{\tilde{q}}} $$ cannot be true for all functions, and must be a property of the equation itself. We therefore rely on a nice idea taken from Section 3.1 in \cite{Vaz06}: Consider \eqref{GPME} with the nonlinearity $$ \varphi: r\mapsto (r+\veps)^m-\veps^m \qquad\text{for some $\veps>0$ and some $m>1$.} $$ In the case of L\'evy operators \eqref{def:LevyOperators} with $c=0$, this equation has been studied in \cite{DTEnJa17a}, and in the more general setting of possibly $x$-dependent $\mathfrak{m}$-accretive operators, this equation has been studied in detail in e.g. \cite[Appendix B]{BoFiR-O17} and \cite{DPQuRo16}. For example, existence, uniqueness, and the comparison principle hold for sign-changing solutions. In our setting, we also have: \begin{lemma}\label{lem:EquivalenceOfL1LinftySmoothing} Assume \eqref{phias} and $\veps>0$. Let $u$ be a weak dual solution of \eqref{GPME} with initial data $0\leq u_0\in L^1(\R^N)$, and $v$ be a weak dual solution of \eqref{GPME} with nonlinearity $\varphi$ and initial data $0\leq v_0\in L^1(\R^N)$. Under suitable assumptions on the associated Green function: \smallskip \noindent{\rm (i)} For a.e. $t_1> t_0\geq0$, $$ \|u(t_1)\|_{L^\infty}\leq F(t_1-t_0)\|u(t_0)\|_{L^1}^\gamma\qquad\text{for some $0\leq \gamma<1$.} $$ \smallskip If, moreover, $u(x,t)\leq v(x,t)+\veps$ for a.e. $(x,t)\in Q_T$, then: \smallskip \noindent{\rm (ii)} For a.e. $t_1> t_0\geq0$, $$ \|v(t_1)\|_{L^\infty}\leq F(t_1-t_0)\|v(t_0)\|_{L^1}^\gamma+C\veps, $$ with the same $F, \gamma$ as given above. \end{lemma} \begin{proof} \noindent(i) This already holds, see Theorems \ref{thm:L1ToLinfinitySmoothing2} and \ref{thm:AbsBounds}. \smallskip \noindent(ii) Even though the nonlinearity $\varphi(r)$ is different from $r^m$, we will see that we can repeat the steps leading up to Theorems \ref{thm:L1ToLinfinitySmoothing2} and \ref{thm:AbsBounds} when we in addition know that $u\leq v+\veps$. Recall that by Lemma \ref{prop:Monotonicity}, we have $$ t\mapsto t^{\frac{m}{m-1}}u^m(\cdot,t)\qquad\text{is nondecreasing for a.e. $x\in \R^N$,} $$ and since $u\leq v+\veps$, we get the following time-monotonicity for $v+\veps$: $$ t\mapsto t^{\frac{m}{m-1}}(v(\cdot,t)+\veps)^m\qquad\text{is nondecreasing for a.e. $x\in \R^N$.} $$ Then \begin{equation*} \begin{split} &\int_{\tau_*}^\tau (v(x_0,t)+\veps)^m\dd t-\veps^m(\tau-\tau_*)= \int_{\tau_*}^\tau \varphi(v(x_0,t))\dd t\\ &= \int_{\R^N}v(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x-\int_{\R^N}v(x,\tau)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x\leq \int_{\R^N}v(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x, \end{split} \end{equation*} or \begin{equation*} \begin{split} \int_{\tau_*}^\tau (v(x_0,t)+\veps)^m\dd t\leq \int_{\R^N}v(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x+\veps^m(\tau-\tau_*). \end{split} \end{equation*} By the time-monotonicity, $$ (v(x_0,\tau_*)+\veps)^m\leq \frac{C(m)}{\tau_*}\int_{\R^N}v(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x+C(m)\veps^m. $$ Since $r\mapsto r^m$ is superadditive on $\R_+$, we also get $$ v^m(x_0,\tau_*)\leq \frac{C(m)}{\tau_*}\int_{\R^N}v(x,\tau_*)\mathbb{G}_{-\Operator}^{x_0}(x)\dd x+(C(m)-1)\veps^m. $$ Hence, we get the stated $L^1$--$L^\infty$-smoothing for $v$ by simply following the proof for $u$, and using that $r\mapsto r^{\frac{1}{m}}$ is subadditive on $\R_+$. \end{proof}
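For completeness, the superadditivity of $r\mapsto r^m$ and the subadditivity of $r\mapsto r^{\frac{1}{m}}$ on $\R_+$ used above amount to the elementary inequalities $$ (a+b)^m\geq a^m+b^m \qquad\text{and}\qquad (a+b)^{\frac{1}{m}}\leq a^{\frac{1}{m}}+b^{\frac{1}{m}} \qquad\text{for all $a,b\geq0$ and $m>1$.} $$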
\begin{proposition}\label{prop:SmoothingImplications} Assume that $0\leq u_0=u(\cdot,0)\in (L^1\cap L^p)(\R^N)$ for some $p\in(1,\infty)$, and that, for a.e. $t_1> t_0\geq0$, $$ \|u(t_1)\|_{L^\infty}\leq F(t_1-t_0)\|u(t_0)\|_{L^1}^{\gamma}\qquad\text{for some $\gamma\in [0,1)$.} $$ Moreover, if the comparison principle holds for sign-changing weak dual solutions of \eqref{GPME} with the nonlinearity $\varphi$, then $$ \|u(t_1)\|_{L^\infty}\leq CF(t_1-t_0)^{\frac{1}{\gamma(p-1)+1}}\|u(t_0)\|_{L^p}^{\frac{\gamma p}{\gamma(p-1)+1}}. $$ \end{proposition} \begin{remark} The comparison principle indeed holds for sign-changing weak dual solutions of \eqref{GPME} with the nonlinearity $\varphi$: We simply repeat the existence proof in this setting. \end{remark} \begin{remark} Let us check that we indeed recover the different homogeneous cases: \begin{enumerate}[{\rm (i)}] \item If $\gamma_1=\gamma=\alpha\theta$, $F(t)=Ct^{-\gamma_2}$, and $\gamma_2=N\theta$, then $$ \frac{\gamma_1 p}{\gamma_1(p-1)+1}=\alpha p\theta_p\qquad\text{and}\qquad \frac{\gamma_2}{\gamma_1(p-1)+1}=N\theta_p. $$ \item If $\gamma_1=\gamma=0$, $F(t)=Ct^{-\gamma_2}$, and $\gamma_2=1/(m-1)$, then $$ \frac{\gamma_1 p}{\gamma_1(p-1)+1}=0\qquad\text{and}\qquad \frac{\gamma_2}{\gamma_1(p-1)+1}=\frac{1}{m-1}. $$ \end{enumerate} \end{remark} \begin{proof}[Proof of Proposition \ref{prop:SmoothingImplications}] Consider the function $v_\veps:=u-\veps$ with $\veps>0$, where $u$ solves \eqref{GPME} with initial data $u_0\geq0$. Note that $v_\veps$ also solves \eqref{GPME} with $\varphi$ as nonlinearity (we have subtracted the term $\veps^m$ for normalization purposes), and sign-changing initial data $u_0-\veps$. Now, consider the solution $\tilde{v}_\veps$ of \eqref{GPME} with nonlinearity $\varphi$ and initial data $(u_0-\veps)_+$. Due to the H\"older inequality, \begin{equation*} \begin{split} \int (u_0-\veps)_+=\int_{\{u_0>\veps\}}(u_0-\veps)\leq \int_{\{u_0>\veps\}}u_0=\int u_0\mathbf{1}_{\{u_0>\veps\}}\leq \|u_0\|_{L^p}|\{u_0>\veps\}|^{\frac{p-1}{p}}. \end{split} \end{equation*} Moreover, for any $f\geq0$, $$ \|f\|_{L^p}^p=\int_{\{0\leq f\leq \veps\}}|f|^p+\int_{\{f>\veps\}}|f|^p\geq \int_{\{f>\veps\}}f^p\geq \veps^p|\{f>\veps\}|. $$ Hence, $$ \int (u_0-\veps)_+\leq \frac{\|u_0\|_{L^p}^p}{\veps^{p-1}}. $$ In particular, $(u_0-\veps)_+\in L^1$ as long as $u_0\in L^p$. The comparison principle for sign-changing solutions of \eqref{GPME} with nonlinearity $\varphi$ then gives $$ u_0(x)-\veps\leq (u_0(x)-\veps)_+\qquad\implies\qquad v_\veps(x,t)\leq \tilde{v}_\veps(x,t), $$ from which we conclude that $$ u(x,t)\leq \tilde{v}_\veps(x,t)+\veps \qquad\implies\qquad \|u(t)\|_{L^\infty}\leq \|\tilde{v}_\veps(t)\|_{L^\infty}+\veps. $$ We are then in the setting of Lemma \ref{lem:EquivalenceOfL1LinftySmoothing}, and, for all $\veps>0$, $$ \|u(t)\|_{L^\infty}\leq \|\tilde{v}_\veps(t)\|_{L^\infty}+\veps\leq F(t)\|(u_0-\veps)_+\|_{L^1}^{\gamma}+c\veps\leq F(t)\|u_0\|_{L^p}^{\gamma p}\veps^{-\gamma(p-1)}+c\veps. $$ To conclude, we minimize over $\veps>0$. \end{proof}
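For the reader's convenience, the minimization over $\veps>0$ in the last step can be made explicit: balancing the two terms by choosing $$ \veps=\big(F(t)\|u_0\|_{L^p}^{\gamma p}\big)^{\frac{1}{\gamma(p-1)+1}} $$ yields $$ \|u(t)\|_{L^\infty}\leq CF(t)^{\frac{1}{\gamma(p-1)+1}}\|u_0\|_{L^p}^{\frac{\gamma p}{\gamma(p-1)+1}}, $$ which is precisely the estimate claimed in Proposition \ref{prop:SmoothingImplications}.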
\section{Densely defined, \texorpdfstring{$\mathfrak{m}$}{m}-accretive, and Dirichlet in \texorpdfstring{$L^1(\R^N)$}{L1RN}} \label{sec:mAccretiveDirichlet} The existence proof for weak dual solutions is based on the concept of \emph{mild} solutions, which again relies on finding a.e.-solutions (!) of the corresponding elliptic problem $$ \forall \lambda>0\qquad u+\lambda A[u^m]=f\qquad\text{in $\R^N$.} $$ We will therefore study so-called $\mathfrak{m}$-accretive operators $A$. \subsection{The setting of abstract solutions} The Laplacian $(-\Delta)$ (as well as $r\mapsto (-\Delta)[r^m]$) is a well-known example of an operator which is $\mathfrak{m}$\emph{-accretive} in $L^1(\R^N)$ \cite[Section 10.3.2]{Vaz07}. Now, we want to establish that any \emph{symmetric, nonlocal} and \emph{constant coefficient} L\'evy operator $$ (-\Levy^\mu)[\psi]=-P.V.\int_{\R^N\setminus\{0\} } \big(\psi(x+z)-\psi(x)\big) \dd\mu(z) $$ is also $\mathfrak{m}$-accretive in $L^1(\R^N)$. Since that operator is moreover \emph{Dirichlet}, we get, by Propositions 1 and 2 in \cite{CrPi82}, that $r\mapsto (-\Levy^\mu)[r^m]$ is also $\mathfrak{m}$-accretive and Dirichlet in $L^1(\R^N)$. Indeed, such a result should be well-known, but we did not manage to find a useful reference for it. Throughout this section, we stick to the usual notation $A:=(-\Levy^\mu)$. \begin{theorem}\label{thm:LevyOperatorsmAccretive} Assume \eqref{muas}. Then the linear operator $A: D(A)\subset L^1(\R^N)\to L^1(\R^N)$ satisfies: \begin{enumerate}[{\rm (i)}] \item\label{thm:LevyOperatorsmAccretive1} $\overline{D(A)}^{\|\cdot\|_{L^1(\R^N)}}=L^1(\R^N)$. \item\label{thm:LevyOperatorsmAccretive2} $A$ is accretive in $L^1(\R^N)$. \item\label{thm:LevyOperatorsmAccretive3} $R(I+\lambda A)=L^1(\R^N)$ for all $\lambda>0$. \item\label{thm:LevyOperatorsmAccretive4} If $f\in L^1(\R^N)$ and $a,b\in \R$ such that $a\leq f\leq b$ a.e., then $a\leq (I+\lambda A)^{-1}f\leq b$ a.e. \end{enumerate} That is, the linear operator $A$ is densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$. \end{theorem} \begin{remark}\label{rem:LinearResolventDensityAndRange} \begin{enumerate}[{\rm (a)}] \item Since $D(A)\subset L^1(\R^N)$, we define $$ D(A):=\{\psi\in L^1(\R^N) \,:\, \widetilde{A}[\psi]\in L^1(\R^N)\} $$ where $\widetilde{A}$ is the extension of $A$ to $L^1(\R^N)$ (see \eqref{def:DistrOperatorA} below). Moreover, in our case it is well-known that $$ A: C_\textup{c}^\infty(\R^N)\subset L^1(\R^N)\to L^p(\R^N)\qquad\text{for all $p\in[1,\infty]$}. $$ Hence, $C_\textup{c}^\infty(\R^N)\subset D(A)$, and thus, we have already proven Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive1}. However, we need to make sure that when we define the extension $\widetilde{A}$ (see \eqref{def:DistrOperatorA} below), we have $$ \widetilde{A} |_{C_\textup{c}^\infty(\R^N)}=A\qquad\text{a.e. in $\R^N$.} $$ \item In our $\R^N$-case, the numbers $a,b$ in Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive4} have the natural restrictions $a\leq b$, $a\leq 0$, and $b\geq 0$. \end{enumerate} \end{remark} \begin{corollary}[{{\cite[Propositions 1 and 2]{CrPi82}}}]\label{cor:PorousMediummAccretive} Assume \eqref{muas} and \eqref{phias}. Then the nonlinear operator $r\mapsto Ar^m: D(Ar^m)\subset L^1(\R^N)\to L^1(\R^N)$ is densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$. In particular, for all $f\in L^1(\R^N)$ such that $a\leq f\leq b$, there exists a unique $u\in L^1(\R^N)$ which satisfies $a\leq u\leq b$, the comparison principle, the $L^p$-decay estimate, and $$ u+\lambda \widetilde{A}[u^m]=f\qquad\text{a.e. in $\R^N$.} $$ \end{corollary} Let us start by proving the range condition. To do so, consider \begin{equation}\label{eq:LinearResolvent} \forall \lambda>0\qquad u+\lambda A[u]=f \qquad\text{in $\R^N$}.
\end{equation} \begin{definition}[Very weak solutions]\label{def:LinearResolventVeryWeak} Assume \eqref{muas}. We say that $u\in L_\textup{loc}^1(\R^N)$ is a \emph{very weak solution} of \eqref{eq:LinearResolvent} with right-hand side $f\in L_\textup{loc}^1(\R^N)$ if \begin{equation*} \begin{split} \int_{\R^N} u\psi\dd x+\lambda\int_{\R^N}uA[\psi]\dd x=\int_{\R^N} f\psi\dd x\qquad\text{for all $\psi\in C_\textup{c}^\infty(\R^N)$.} \end{split} \end{equation*} \end{definition} We need the following result (take $u\mapsto\lambda u$ for all $\lambda>0$ and choose $\veps=1/\lambda$): \begin{lemma}[{{\cite[Theorem 3.1]{DTEnJa17a}}}]\label{lem:LinearResolventWellposedness} Assume \eqref{muas}. \begin{enumerate}[{\rm (a)}] \item If $f\in C_\textup{b}^\infty(\R^N)$, then there exists a unique classical solution $u\in C_\textup{b}^\infty(\R^N)$ of \eqref{eq:LinearResolvent}. Moreover, for each multiindex $\alpha\in \mathbb{N}^N$, $$ \|D^\alpha u\|_{L^\infty(\R^N)}\leq \|D^\alpha f\|_{L^\infty(\R^N)}. $$ \item If $f\in L^\infty(\R^N)$, then there exists a unique classical solution $u\in L^\infty(\R^N)$ of \eqref{eq:LinearResolvent}. Moreover, $$ \|u\|_{L^\infty(\R^N)}\leq \|f\|_{L^\infty(\R^N)}. $$ \item If $f\in L^1(\R^N)$, then there exists a unique classical solution $u\in L^1(\R^N)$ of \eqref{eq:LinearResolvent}. Moreover, $$ \|u\|_{L^1(\R^N)}\leq \|f\|_{L^1(\R^N)}. $$ \end{enumerate} \end{lemma} \begin{remark}\label{rem:LinearResolventComparison} Theorem 6.15 in \cite{DTEnJa17a} gives comparison (or $T$-contraction) as well: If $f\in L_\textup{loc}^1(\R^N)$ such that $(f)^+\in L^1(\R^N)$ and $u\in L_\textup{loc}^1(\R^N)$ is a very weak solution of \eqref{eq:LinearResolvent}, then $$ \int_{\R^N}(u)^+\dd x\leq \int_{\R^N}(f)^+\dd x. $$ \end{remark} \begin{lemma}\label{lem:OperatorEstimateInLp} Assume $\eqref{muas}$, $p\in(1,\infty]$, $f\in (L^1\cap L^\infty)(\R^N)$. Let $u$ be the very weak solution of \eqref{eq:LinearResolvent} with right-hand side $f$. Then $$ \lambda\|\widetilde{A}[u]\|_{L^p(\R^N)}\leq 2\|f\|_{L^p(\R^N)}, $$ where the extension $\widetilde{A}:D(A)\cap L^p(\R^N)\to L^p(\R^N)$ satisfies \begin{equation}\label{def:DistrOperatorA} \int_{\R^N}\widetilde{A}[u]\psi\dd x=\int_{\R^N}uA[\psi]\dd x\qquad\text{for all $\psi\in C_\textup{c}^\infty(\R^N)$.} \end{equation} \end{lemma} The proof follows after an immediate consequence. \begin{corollary}\label{cor:LinearResolventAE} Assume $\eqref{muas}$, $p\in(1,\infty]$, $f\in (L^1\cap L^\infty)(\R^N)$. Then the equation \eqref{eq:LinearResolvent} (with $A\mapsto \widetilde{A}$) holds a.e. in $\R^N$. \end{corollary} \begin{proof}[Proof of Lemma \ref{lem:OperatorEstimateInLp}] Definition \ref{def:LinearResolventVeryWeak} gives \begin{equation*} \begin{split} \lambda\int_{\R^N}\widetilde{A}[u]\psi\dd x=\lambda\int_{\R^N}uA[\psi]\dd x=\int_{\R^N} (f-u)\psi\dd x. \end{split} \end{equation*} Now, take $q\in[1,\infty)$ such that $p^{-1}+q^{-1}=1$. We recall Theorem 2.14 in \cite{LiLo01}, $$ \|\phi\|_{L^p(\R^N)}=\sup_{\|\psi\|_{L^q(\R^N)}\leq 1}\bigg|\int_{\R^N}\phi(x)\psi(x)\dd x\bigg|, $$ to obtain, by the H\"older inequality, \begin{equation*} \begin{split} \lambda\|\widetilde{A}[u]\|_{L^p(\R^N)}&=\lambda\sup_{\|\psi\|_{L^q(\R^N)}\leq 1}\bigg|\int_{\R^N}\widetilde{A}[u]\psi\dd x\bigg|=\sup_{\|\psi\|_{L^q(\R^N)}\leq 1}\bigg|\int_{\R^N} (f-u)\psi\dd x\bigg|\\ &\leq \sup_{\|\psi\|_{L^q(\R^N)}\leq 1}\bigg\{\|f-u\|_{L^p(\R^N)}\|\psi\|_{L^q(\R^N)}\bigg\}\leq \|f-u\|_{L^p(\R^N)}\\ &\leq 2\|f\|_{L^p(\R^N)}. 
\end{split} \end{equation*} The proof is complete. \end{proof} Another consequence is that we can also extend $\widetilde{A}$ to $L^1(\R^N)$, and then make sense of the equation in $L^1(\R^N)$. To do so, we follow \cite{BrSt73}. \begin{corollary}\label{cor:OperatorEstimateInL1} Assume $\eqref{muas}$ and $f\in (L^1\cap L^\infty)(\R^N)$, and let $u$ be the very weak solution of \eqref{eq:LinearResolvent} with right-hand side $f$. Then $$ \lambda\|\widetilde{A}[u]\|_{L^1(\R^N)}\leq 2\|f\|_{L^1(\R^N)}. $$ \end{corollary} \begin{proof} Since $f\in (L^1\cap L^\infty)(\R^N)\subset L^1(\R^N)$, Lemma \ref{lem:LinearResolventWellposedness}(c) yields $$ \|u\|_{L^1(\R^N)}\leq \|f\|_{L^1(\R^N)}. $$ Moreover, by Corollary \ref{cor:LinearResolventAE}, equation \eqref{eq:LinearResolvent} holds pointwise, i.e., $$ \|f-\lambda\widetilde{A}[u]\|_{L^1(\R^N)}\leq \|f\|_{L^1(\R^N)}. $$ The reverse triangle inequality then provides the result. \end{proof} We are now ready to prove Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive3}. \begin{proposition}\label{eq:LinearResolventRange} Assume \eqref{muas}. For all $f\in L^1(\R^N)$, there exists a very weak solution $u\in L^1(\R^N)$ of \eqref{eq:LinearResolvent} such that, for all $\lambda>0$, $$ u+\lambda\widetilde{A}[u]=f\qquad\text{a.e. in $\R^N$,} $$ where $\widetilde{A}$ is the extension to $L^1(\R^N)$ of $A$ defined through the relation \eqref{def:DistrOperatorA}. \end{proposition} \begin{proof} Take $\{f_n\}_{n\in\N}\subset (L^1\cap L^\infty)(\R^N)$ such that $f_n\to f$ in $L^1(\R^N)$ as $n\to\infty$, and let $u_n$ be the corresponding very weak solutions of \eqref{eq:LinearResolvent} with right-hand sides $f_n$. By Lemma \ref{lem:LinearResolventWellposedness}(c), $$ \|u_n-u_m\|_{L^1(\R^N)}\leq \|f_n-f_m\|_{L^1(\R^N)}. $$ Hence, $\{u_n\}_{n\in\N}$ is Cauchy in $L^1(\R^N)$ and there exists a $u\in L^1(\R^N)$ such that $u_n\to u$ in $L^1(\R^N)$. In a similar way, through Corollary \ref{cor:OperatorEstimateInL1}, $\{\widetilde{A}[u_n]\}_{n\in\N}$ is Cauchy in $L^1(\R^N)$ and there exists a $U\in L^1(\R^N)$ such that $\widetilde{A}[u_n]\to U$ in $L^1(\R^N)$. The definition of $\widetilde{A}$ \eqref{def:DistrOperatorA} then yields $$ \int_{\R^N}\widetilde{A}[u_n]\psi\dd x=\int_{\R^N}u_nA[\psi]\dd x\qquad\text{for all $\psi\in C_\textup{c}^\infty(\R^N)$}. $$ Moreover, since $\psi,A[\psi]\in L^\infty(\R^N)$, the $L^1$-convergence gives $U=\widetilde{A}[u]$. Finally, we take the $L^1$-limit in Definition \ref{def:LinearResolventVeryWeak} and use the fact that all the terms of the equation are elements in $L^1\subset L_\textup{loc}^1$. \end{proof} \begin{remark} In the literature, the property $$ u_n\to u\quad\text{in $L^1(\R^N)$}\qquad\implies\qquad \widetilde{A}[u_n]\to \widetilde{A}[u] \quad\text{in $L^1(\R^N)$} $$ is referred to as the operator $A$ being \emph{closed} in $L^1(\R^N)$. Here it automatically follows by the symmetry of the operator and the fact that $A:C_\textup{c}^\infty(\R^N)\to L^\infty(\R^N)$. \end{remark} Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive2} is a consequence of the $L^1$-contraction obtained in Lemma \ref{lem:LinearResolventWellposedness}(c) and Proposition \ref{eq:LinearResolventRange} since then \begin{equation}\label{eq:LinearResolventAccretive} \|(I+\lambda \widetilde{A})[u]\|_{L^1(\R^N)}=\|f\|_{L^1(\R^N)}\geq \|u\|_{L^1(\R^N)}.
\end{equation} We are then in the setting of the classical result: \begin{proposition}[Hille-Yosida/Lumer-Phillips {{\cite[Theorem 4.1.33]{Jac01}}}]\label{prop:LumerPhillips} A linear operator $(A,D(A))$ on $L^1(\R^N)$ is the generator of a strongly continuous contraction semigroup $(T_t)_{t\geq 0}$ on $L^1(\R^N)$ if and only if $A$ satisfies Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive1}--\eqref{thm:LevyOperatorsmAccretive3}. \end{proposition} We, moreover, have that our operators are \emph{maximal accretive}, i.e.: \begin{proposition}[{{\cite[Proposition 8.3]{BeCrPa01}}}]\label{prop:ExtensionOfmAccretiveIsTheSame} If $A$ is $\mathfrak{m}$-accretive, then $\widetilde{A}=A$. \end{proposition} \begin{remark} Hence, a posteriori, $L^1(\R^N)\ni u\mapsto A[u]$ can be identified in a unique way as a limit point in $L^1(\R^N)$. See also Theorem 4.1.40 in \cite{Jac01}. \end{remark} Our next task is Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive4}, which follows by: \begin{proposition}\label{eq:LinearResolventDirichlet} Assume \eqref{muas} and $a,b\in\R$ such that $a\leq b$, $a\leq 0$, and $b\geq 0$. For all $f\in L^1(\R^N)$ such that $$ a\leq f\leq b\qquad\text{a.e.,} $$ the unique very weak solution $u\in L^1(\R^N)$ of \eqref{eq:LinearResolvent} satisfies $$ a\leq u\leq b\qquad\text{a.e.} $$ \end{proposition} \begin{proof} By Remark \ref{rem:LinearResolventComparison}, the $T$-contraction holds for $L_\textup{loc}^1$-very weak solutions of \eqref{eq:LinearResolvent}. On one hand, $ f\leq b$ yields $(f-b)^+=0\in L^1(\R^N)$ and then the $T$-contraction gives $u\leq b$. On the other hand, $ a\leq f$ yields $(a-f)^+=0\in L^1(\R^N)$ and then the $T$-contraction gives $a\leq u$. Here, we used that $a,b$ are very weak solutions with $a,b$ as right-hand side, and that bounded very weak solutions are unique. \end{proof} We have then proven that our operator $A$ satisfies Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive1}--\eqref{thm:LevyOperatorsmAccretive4}, which is exactly the setting of \cite{CrPi82}. Finally, let us recall why the above works for our operator $A=(-\Levy^\mu)$. To deduce the $L^1$-contraction---or rather the $T$-contraction---needed to obtain \eqref{eq:LinearResolventAccretive}, we employed a more fundamental result: \begin{lemma}\label{eq:LinearResolventPositiveOperator} Assume \eqref{muas}. For all $u\in C_\textup{c}^\infty(\R^N)$, $$ \int_{\R^N}\widetilde{A}[u]\sgn^+(u)\dd x\geq 0. $$ \end{lemma} \begin{remark} This implies the condition stated as Corollary A.13 in \cite{A-VMaRoT-M10} or in Proposition 4.6.12 in \cite{Jac01} (see also \cite{Sch01} for $p=1$) from which it follows that $A$ is accretive. \end{remark} \begin{proof} Remark \ref{rem:LinearResolventDensityAndRange}(a) gives $\widetilde{A}=A$. By a convex inequality, $$ A[u]\sgn^+(u)\geq A[(u)^+] \qquad\textup{a.e. in $\R^N$.} $$ Now, multiply each side by a smooth cut-off function $\mathcal{X}_{R}\in C_\textup{c}^\infty(\R^N)$ satisfying $0\leq \mathcal{X}_R\leq 1$ and $\mathcal{X}_R\to1$ pointwise as $R\to\infty$, integrate, use symmetry, and that $A[\mathcal{X}_R]\to0$ pointwise as $R\to\infty$. \end{proof} \begin{corollary} Assume \eqref{muas}. For all $u\in C_\textup{c}^\infty(\R^N)$, $$ \int_{\R^N}\widetilde{A}[u]\sgn^+(u-1)\dd x\geq 0. $$ \end{corollary} \begin{remark} Operators satisfying the above are called Dirichlet operators, see Definition 4.6.7 in \cite{Jac01} (and also \cite{Sch01} for $p=1$). 
Moreover, according to Proposition 4.6.9 in \cite{Jac01}, Dirichlet operators imply Theorem \ref{thm:LevyOperatorsmAccretive}\eqref{thm:LevyOperatorsmAccretive4} for both the semigroup and the resolvent of the semigroup generated by the operator, i.e., the semigroup and the resolvent of the semigroup are \emph{sub-Markovian}. \end{remark} \begin{proof} The result can be found as Corollary 3.2 in \cite{Sch01}, which actually provides an equivalence between the two conditions in our setting. Let us include a proof for completeness. Again, Remark \ref{rem:LinearResolventDensityAndRange}(a) gives $\widetilde{A}=A$. Now, take $u-\mathcal{X}_R\in C_\textup{c}^\infty(\R^N)$ in Lemma \ref{eq:LinearResolventPositiveOperator}. Since $\sgn^+(u-1)\leq \sgn^+(u-\mathcal{X}_R)$, we have that \begin{equation*} \begin{split} &\int_{\R^N}A[u]\sgn^+(u-\mathcal{X}_R)\dd x\\ &\geq \int_{\R^N}A[\mathcal{X}_R]\sgn^+(u-\mathcal{X}_R)\dd x\geq \int_{\R^N}A[\mathcal{X}_R]\sgn^+(u-1)\dd x. \end{split} \end{equation*} Moreover, $$ \int_{\R^N}\sgn^+(u-1)\dd x=\int_{\{u>1\}}1\dd x< \int_{\{u>1\}}u\dd x\leq \int_{\R^N}|u|\dd x, $$ which means that we can, again, use that $A[\mathcal{X}_R]\to0$ pointwise as $R\to\infty$ on the right-hand side, while on the left-hand side we simply use the Lebesgue dominated convergence theorem. \end{proof} \begin{remark}\label{rem:ExtensionToAnyMonotoneOperatorAndResolvent} Lemma \ref{lem:LinearResolventWellposedness} is also true for the general operator (possibly local, nonlocal, or a combination) $\mathfrak{L}^{\sigma,\mu}$ defined by \eqref{def:LevyOperators} with $c=0$, see \cite{DTEnJa17b}. Moreover, if $u$ solves $$ \veps u+(-\mathfrak{L}^{\sigma,\mu})[u]=f\qquad\text{in $\R^N$ for all $\veps>0$,} $$ then $u$ solves $$ (\veps-1) u+(I-\mathfrak{L}^{\sigma,\mu})[u]=f\qquad\text{in $\R^N$ for all $\veps>1$.} $$ Now, take $u\mapsto\lambda u$ and choose $\veps=1+1/\lambda$ to obtain that $u$ solves $$ u+\lambda(I-\mathfrak{L}^{\sigma,\mu})[u]=f\qquad\text{in $\R^N$ for all $\lambda>0$.} $$ Hence, $(-\mathfrak{L}^{\sigma,\mu})$ and $(I-\mathfrak{L}^{\sigma,\mu})$ are also $\mathfrak{m}$-accretive. The latter is exactly \eqref{def:LevyOperators} with $c=1$. To prove the Dirichlet property, we used that $A[\textup{const}]=0$ in Proposition \ref{eq:LinearResolventDirichlet}. This is of course true for $(-\mathfrak{L}^{\sigma,\mu})$, while for $(I-\mathfrak{L}^{\sigma,\mu})$, we have that $f=\textup{const}$ gives $u=\textup{const}/(1+\lambda)$ as a solution. Arguing as before, however, we still have that $f\geq 0$ (resp. $\leq 0$) implies $u\geq 0$ (resp. $\leq 0$). According to Remark \ref{rem:LinearResolventDensityAndRange}, it remains to check the case $b>0$, $a<0$, and $a\leq f\leq b$: We readily get $a/(1+\lambda)\leq u\leq b/(1+\lambda)$, i.e., $a\leq u\leq b$ since $\lambda>0$. We then conclude that both $(-\mathfrak{L}^{\sigma,\mu})$ and $(I-\mathfrak{L}^{\sigma,\mu})$ satisfy Theorem \ref{thm:LevyOperatorsmAccretive}, Corollary \ref{cor:PorousMediummAccretive}, and Proposition \ref{prop:ExtensionOfmAccretiveIsTheSame}. So, indeed, the whole class of \emph{symmetric} L\'evy operators with \emph{constant coefficients} is within the above framework. \end{remark} \subsection{The setting of very weak solutions} We will now provide an a priori different approach to the one developed in the previous subsection.
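Before giving the precise definition, let us sketch, purely formally and assuming enough regularity to justify the manipulations, the link with the a.e. formulation: if $u+\lambda A[u^m]=f$ holds a.e. in $\R^N$, then multiplying by $\psi\in C_\textup{c}^\infty(\R^N)$, integrating over $\R^N$, and using the symmetry of $A$ (equivalently, the defining relation \eqref{def:DistrOperatorA} for its extension $\widetilde{A}$) gives $$ \int_{\R^N} u\psi\dd x+\lambda\int_{\R^N}u^mA[\psi]\dd x=\int_{\R^N} f\psi\dd x, $$ which is exactly the identity appearing in the definition below.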
Very weak solutions of \begin{equation}\label{eq:NonLinearResolvent} \forall \lambda>0\qquad u+\lambda A[u^m]=f \qquad\text{in $\R^N$} \end{equation} can be given as: \begin{definition}[Very weak solutions]\label{def:NonLinearResolventVeryWeak} Assume \eqref{muas}. We say that $u\in L_\textup{loc}^1(\R^N)$ is a \emph{very weak solution} of \eqref{eq:NonLinearResolvent} with right-hand side $f\in L_\textup{loc}^1(\R^N)$ if $u^m\in L_\textup{loc}^1(\R^N)$ and \begin{equation*} \begin{split} \int_{\R^N} u\psi\dd x+\lambda\int_{\R^N}u^mA[\psi]\dd x=\int_{\R^N} f\psi\dd x\qquad\text{for all $\psi\in C_\textup{c}^\infty(\R^N)$.} \end{split} \end{equation*} \end{definition} We collect uniqueness from \cite[Theorem 3.2]{DTEnJa17b}, and a priori estimates from \cite[Remark 5.10]{DTEnJa19}. \begin{theorem}\label{thm:NonlinearResolventVeryWeakAPriori} Assume $0\leq f\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, \eqref{muas}, and $A=(-\Operator^{\sigma,\mu})$. \begin{enumerate}[{\rm (a)}] \item There exists a unique very weak solution $0\leq u\in (L^1\cap L^\infty)(\R^N)$ of \eqref{eq:NonLinearResolvent} with right-hand side $f$. \item Let $u,v$ be two very weak solutions of \eqref{eq:NonLinearResolvent} with respective right-hand sides $f,g$. Then: \begin{enumerate}[{\rm (i)}] \item \textup{(Comparison)} If $f\leq g$, then $u\leq v$. \item \textup{($L^p$-decay)} $\|u\|_{L^p(\R^N)}\leq \|f\|_{L^p(\R^N)}$ for all $p\in[1,\infty]$. \end{enumerate} \end{enumerate} \end{theorem} \subsection{Comparison between abstract and very weak solutions} If $f\in L^1(\R^N)$ only, it is hard to see how to construct very weak solutions of \eqref{eq:NonLinearResolvent}, but as in the abstract setting we could require that $a\leq f\leq b$ for $a,b\in\R$. We in any case have: \begin{lemma}\label{lem:equivalenceAbstractVeryWeak} Assume $0\leq f\in (L^1\cap L^\infty)(\R^N)$, \eqref{phias}, and \eqref{muas}. If the operator $A$ has an extension $\widetilde{A}$ to $L^1(\R^N)$, then very weak and a.e.-solutions of \eqref{eq:NonLinearResolvent} coincide. \end{lemma} \begin{remark} Here we discover the advantage of $\mathfrak{m}$-accretive operators: By Proposition \ref{prop:ExtensionOfmAccretiveIsTheSame}, $\widetilde{A}=A$, and thus we obtain an a.e.-equation involving the operator itself! In addition, the abstract setting does not see the difference between $A=(-\mathfrak{L}^{\sigma,\mu})$ and $A=(I-\mathfrak{L}^{\sigma,\mu})$ since they are both $\mathfrak{m}$-accretive. \end{remark} \begin{proof} By Corollary \ref{cor:PorousMediummAccretive}, we have a.e.-solutions of \eqref{eq:NonLinearResolvent} (with $A\mapsto \widetilde{A}$). Multiplying by $\psi\in C_\textup{c}^\infty(\R^N)$, integrating over $\R^N$, and using the definition of the extension of the operator \eqref{def:DistrOperatorA}, shows that those a.e.-solutions are actually very weak solutions in the sense of Definition \ref{def:NonLinearResolventVeryWeak}. However, we can also start with very weak solutions by Theorem \ref{thm:NonlinearResolventVeryWeakAPriori}, use the $L^1$-extension of the operator \eqref{def:DistrOperatorA}, and see that the equation (with $A\mapsto \widetilde{A}$) actually holds a.e. Hence, the equivalence between the elliptic problems is settled. 
\end{proof} \section{The inverse of a densely defined, \texorpdfstring{$\mathfrak{m}$}{m}-accretive, Dirichlet operator} \label{sec:InverseOfLinearmAccretiveDirichlet} This section is devoted to showing that a densely defined, $\mathfrak{m}$-accretive, Dirichlet operator in $L^1(\R^N)$ has an inverse such that \eqref{Gas} holds, under (possibly) some additional assumptions on the heat kernel associated with the operator. Consider a strongly continuous contraction semigroup $(T_t)_{t\geq0}$ in $L^1(\R^N)$ which is moreover sub-Markovian. The discussion before Definition 3.5.17 in \cite{Jac02} gives that the resolvent $(R_\lambda)_{\lambda>0}$ of the semigroup is well-defined for functions in $L^1(\R^N)$, i.e., $$ R_\lambda [\psi]:=\int_0^\infty\e^{-\lambda t}T_t[\psi]\dd t<\infty\qquad\text{for all $\psi\in L^1(\R^N)$.} $$ Moreover, the Green (or potential) operator $G$ associated with $(T_t)_{t\geq0}$ is defined as $$ G[\psi]:=\lim_{\lambda\to0^+}R_\lambda\psi\qquad\text{for all $0\leq \psi\in L^1(\R^N)$.} $$ \begin{definition}[Transient]\label{def:SemigroupTransient} A strongly continuous contraction semigroup $(T_t)_{t\geq0}$ in $L^1(\R^N)$ which is moreover sub-Markovian is called \emph{transient} if $$ G[\psi](x)=\int_0^\infty T_t[\psi](x)\dd t<\infty \qquad\text{for all $0\leq \psi\in L^1(\R^N)$.} $$ \end{definition} By Proposition \ref{prop:LumerPhillips}, Theorem \ref{thm:LevyOperatorsmAccretive}, and Remark \ref{rem:ExtensionToAnyMonotoneOperatorAndResolvent}, the operators $(-\mathfrak{L}^{\sigma,\mu})$ and $(I-\mathfrak{L}^{\sigma,\mu})$ generate the respective strongly continuous contraction semigroups $(T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$ and $(T_t^{I-\Operator^{\sigma,\mu}})_{t\geq0}$ in $L^1(\R^N)$ which are moreover sub-Markovian. Note that by uniqueness of strongly continuous semigroups (cf. Corollary 4.1.35 in \cite{Jac01}), $(T_t^{I-\Operator^{\sigma,\mu}})_{t\geq0}=(\e^{-t}T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$. An immediate consequence of Example 3.5.30 in \cite{Jac02} is: \begin{lemma} The semigroup $(\e^{-t}T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$ associated with $(I-\mathfrak{L}^{\sigma,\mu})$, which satisfies Proposition \ref{prop:LumerPhillips} and Theorem \ref{thm:LevyOperatorsmAccretive}, is transient. \end{lemma} For the operator $(-\mathfrak{L}^{\sigma,\mu})$, we note that the semigroup defined as $$ T_t[\psi]:=\int_{\R^N}\psi(x-y)\mathbb{H}_t(y)\dd y \qquad\textup{for all $\psi\in L^1(\R^N)$} $$ is a strongly continuous sub-Markovian contraction semigroup in $L^1(\R^N)$ with $(-\mathfrak{L}^{\sigma,\mu})$ as generator. This can easily be seen since $(-\mathfrak{L}^{\sigma,\mu})$ admits a symmetric and positive heat kernel satisfying $\mathbb{H}_t\in L^1(\R^N)$ due to the fact that the corresponding heat equation enjoys mass conservation, $L^1$-decay, the comparison principle, and has solutions in $C([0,T];L^1(\R^N))$. By Corollary 4.1.35 in \cite{Jac01}, again, this semigroup must coincide with $(T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$. Moreover: \begin{lemma}[{{\cite[Theorem 3.5.51]{Jac02}}}]\label{lem:OperatorTransient} The semigroup $(T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$ associated with $(-\mathfrak{L}^{\sigma,\mu})$, which satisfies Proposition \ref{prop:LumerPhillips} and Theorem \ref{thm:LevyOperatorsmAccretive}, is transient if and only if, for all compact $K\subset \R^N$, $$ \int_0^\infty\int_K\mathbb{H}_t(x)\dd x\dd t<\infty. $$ \end{lemma}
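For orientation only, consider the model case of the fractional Laplacian $A=(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in(0,2)$ (a standard computation, stated here merely as an illustration and not needed in what follows): when $N>\alpha$, one has $$ \int_0^\infty \mathbb{H}_t(x)\dd t=c_{N,\alpha}|x|^{\alpha-N} \qquad\text{for some constant $c_{N,\alpha}>0$,} $$ which is locally integrable, so that the above condition holds and the semigroup is transient; when $N\leq \alpha$, the time integral diverges for every $x$, and transience fails.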
$$ \end{lemma} At this point $G$ is a good candidate for $A^{-1}$, but we need additional properties like e.g. $G:L^p\to D(A)$. The rigorous answer can be found in Proposition 3.5.79 in \cite{Jac02}. We simply check that $$ \lim_{t\to\infty}T_t[\psi]\to0\qquad\text{for all $\psi\in L^1(\R^N)$.} $$ For convolution semigroups with a symmetric positive kernel $\mathbb{H}_t$, we get $$ |T_t[\psi](x)|\leq \int_{\R^N}|\psi(x-y)|\mathbb{H}_t(y)\dd y, $$ and hence, this is really a condition on the kernel by the Lebesgue dominated convergence theorem. Note that we can immediately conclude for $I-\Operator^{\sigma,\mu}$ since $T_t^{I-\Operator^{\sigma,\mu}}[\psi]=\e^{-t}T_t^{-\Operator^{\sigma,\mu}}[\psi]$, $\|T_t^{-\Operator^{\sigma,\mu}}[\psi]\|_{L^1(\R^N)}\leq \|\psi\|_{L^1(\R^N)}$, and hence, we obtain the stronger property $$ \e^{-t}T_t^{-\Operator^{\sigma,\mu}}[\psi]\to0 \qquad\text{in $L^1(\R^N)$ as $t\to\infty$.} $$ The above can then be summarized as: \begin{proposition}[The inverse operator $A^{-1}$]\label{prop:TheInverseOperatorA-1} \begin{enumerate}[{\rm (a)}] \item The semigroup $(\e^{-t}T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$ associated with $(I-\mathfrak{L}^{\sigma,\mu})$, which satisfies Proposition \ref{prop:LumerPhillips} and Theorem \ref{thm:LevyOperatorsmAccretive}, has an inverse operator given, for all $0\leq \psi\in L^1(\R^N)$, by $$ (I-\mathfrak{L}^{\sigma,\mu})^{-1}[\psi]=\int_0^\infty \e^{-t}T_t^{-\Operator^{\sigma,\mu}}[\psi]\dd t, $$ and $(I-\mathfrak{L}^{\sigma,\mu})^{-1}[(I-\mathfrak{L}^{\sigma,\mu})[\psi]]=\psi$. \item Assume that the semigroup $(T_t^{-\Operator^{\sigma,\mu}})_{t\geq0}$ associated with $(-\mathfrak{L}^{\sigma,\mu})$, which satisfies Proposition \ref{prop:LumerPhillips} and Theorem \ref{thm:LevyOperatorsmAccretive}, has a symmetric and positive kernel $\mathbb{H}_t$ satisfying, for all compact $K\subset \R^N$, $$ \int_0^\infty\int_K\mathbb{H}_t(x)\dd x\dd t<\infty\qquad\text{and}\qquad\lim_{t\to\infty}\mathbb{H}_t(x)\to0\quad\text{for a.e. $x\in \R^N$.} $$ Then $(-\mathfrak{L}^{\sigma,\mu})$ has an inverse operator given, for all $0\leq \psi\in L^1(\R^N)$, by $$ (-\mathfrak{L}^{\sigma,\mu})^{-1}[\psi]=\int_0^\infty T_t^{-\Operator^{\sigma,\mu}}[\psi]\dd t=\int_{\R^N}\bigg(\int_0^\infty \mathbb{H}_t(y)\dd t\bigg)\psi(x-y)\dd y $$ and $(-\mathfrak{L}^{\sigma,\mu})^{-1}[(-\mathfrak{L}^{\sigma,\mu})[\psi]]=\psi$. \end{enumerate} \end{proposition} \begin{remark} \begin{enumerate}[{\rm (a)}] \item We immediately see that, in the setting of part (b) above, $$ \mathbb{G}_{-\mathfrak{L}^{\sigma,\mu}}^{x_0}(x):=\int_0^\infty \mathbb{H}_t^{x_0}(x)\dd t. $$ Moreover, if we apply the setting of part (b) to part (a), then we also have that $$ \mathbb{G}_{I-\mathfrak{L}^{\sigma,\mu}}^{x_0}(x):=\int_0^\infty\e^{-t} \mathbb{H}_t^{x_0}(x)\dd t. $$ Of course, the latter is well-defined for a larger class of kernels than the former. \item We can then check that all the examples considered in Section \ref{sec:GreenAndHeat} have inverses. \end{enumerate} \end{remark} The positivity assumption of \eqref{Gas} is then very natural since it is related to the fact that the operator ensures the comparison principle. \begin{corollary}\label{cor:GreenResolventIntegrable} Under the assumptions of Proposition \ref{prop:TheInverseOperatorA-1}, $$ \mathbb{G}_{-\Operator}^{x_0}(x), \mathbb{G}_{I-\Operator}^{x_0}(x)\geq0 \qquad\text{for a.e. 
$x\in\R^N$.} $$ \end{corollary} \begin{proof} Since the comparison principle holds for solutions of the heat equation, $\mathbb{H}_{-\Operator}^{x_0}(x,t)\geq0$. \end{proof} In fact, we only need to assume that the resolvent is nonnegative since $\e^{-t}\leq 1$: \begin{corollary}\label{cor:GreenNonnegative} Under the assumptions of Proposition \ref{prop:TheInverseOperatorA-1}, $$ 0\leq \mathbb{G}_{I-\Operator}^{x_0}(x)\leq \mathbb{G}_{-\Operator}^{x_0}(x) \qquad\text{for a.e. $x\in\R^N$.} $$ \end{corollary} \section{Existence and a priori results for weak dual solutions} \label{sec:ExistenceAPriori} Although this is not the main point of the paper, we will illustrate that our assumptions \eqref{G_1}--\eqref{G_3} do not lead to an empty theory. Let us therefore prove Proposition \ref{prop:APriori}. To do so, we rely on the theory of abstract solutions for the corresponding elliptic problem of \eqref{GPME}. Recall Lemma \ref{lem:InverseBounded}, and due to \eqref{Gas}, we also have: \begin{lemma} Consider $A:=(-\Operator)$ or $A:=(I-\Operator)$. Then $A[\mathbb{G}_{A}^{y}]=\delta_{y}$ in $\mathcal{D}'(\R^N)$, $$ A^{-1}[\psi]=\int_{\R^N}\mathbb{G}_{A}^{0}(x-y)\psi(y)\dd y, $$ and $A[A^{-1}[\psi]]=\psi$ for all $\psi\in C_\textup{c}^\infty(\R^N)$. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:APriori}] \noindent(a) Consider a uniform grid in time such that $0=t_0<t_1<\cdots<t_J=T$. Let $\mathbb{J}:=\{1,\ldots,J\}$, and denote the time steps by $\Delta t=t_{j}-t_{j-1}$ for all $j\in \mathbb{J}$. The piecewise constant time interpolant $u_{\Delta t}$ is, for $(x,t)\in Q_T$, given as $$ u_{\Delta t}(x,t):=u_{j}(x)\quad\text{where $t\in(t_{j-1},t_j]$ for all $j\in\mathbb{J}$,} $$ and $u_{\Delta t}(x,0):=u_0(x)$. Now, each $u_j$ is defined recursively as the solution of the following elliptic equation: \begin{equation*} u_j+\Delta tA[u_j^m]=u_{j-1}\qquad\text{in $\R^N$,} \end{equation*} which of course is another way of expressing \eqref{eq:NonLinearResolvent}. Since $A$ is densely defined, $\mathfrak{m}$-accretive, and Dirichlet in $L^1(\R^N)$, the above equation has an a.e.-solution (cf. Theorem \ref{thm:LevyOperatorsmAccretive} and Proposition \ref{prop:ExtensionOfmAccretiveIsTheSame}). Then rewriting the equation, multiplying by $A^{-1}[\psi(\cdot,t_{j-1})]$ with $\psi\in C_\textup{c}^\infty(\R^N\times(0,T))$, integrating over $\R^N$, using $$ \int_{\R^N} A[u_{j}^m]A^{-1}[\psi(\cdot,t_{j-1})]=\int_{\R^N} u_{j}^mA[A^{-1}[\psi(\cdot,t_{j-1})]]=\int_{\R^N} u_{j}^m\psi(\cdot,t_{j-1}), $$ and summing over $j$, we obtain that $$ \sum_{j\in \mathbb{J}}\int_{\R^N}\frac{u_j-u_{j-1}}{\Delta t}A^{-1}[\psi(t_{j-1})]\dd x\Delta t+\sum_{j\in \mathbb{J}}\int_{\R^N}u_{j}^m\psi(t_{j-1})\dd x\Delta t=0. $$ We now perform summation by parts, use the symmetry of $A^{-1}$, and use that $\psi$ has compact support in $(0,T)$ (so that, for $\Delta t$ small enough, $\psi(\cdot,t_{j-1})$ vanishes unless $n\leq j\leq m$ for some $n,m\in \mathbb{J}$) to obtain $$ -\sum_{j=n}^{m-1}\int_{\R^N}A^{-1}[u_{j}]\frac{\psi(t_j)-\psi(t_{j-1})}{\Delta t}\dd x\Delta t+\sum_{j=n}^{m-1}\int_{\R^N}u_{j}^m\psi(t_{j-1})\dd x\Delta t=0. $$ At this point, we can follow the proof of Proposition 5.2 in \cite{BeBoGrMu21} since, in our case, we have that $A^{-1}[u_{j}]\in C([0,T];L_\textup{loc}^1(\R^N))\cap L^\infty(Q_T)$ and e.g. $$ |u_j^m-u^m(t_{j})|\leq 2\|u_0\|_{L^\infty(\R^N)}^{m-1}|u_j-u(t_{j})|.
$$ That is, $u\in (L^1\cap L^\infty)(Q_T)\cap C([0,T];L^1(\R^N))$ satisfies $$ \int_0^T\int_{\R^N}\big(A^{-1}[u]\dell_t\psi-u^m\psi\big)\dd x\dd t=0\qquad\text{for all $\psi\in C_\textup{c}^\infty(Q_T)$.} $$ Assume $0< \tau_1\leq\tau_2\leq T$, and choose $\psi(x,t)\mapsto \theta_n(t)\psi(x,t)$ where the new $\psi$ is in $C_\textup{c}^1([\tau_1,\tau_2];L_\textup{c}^\infty(\R^N))$ and $\theta_n$ is an approximation of the square pulse with support in $[\tau_1,\tau_2]$. The above expression is still well-defined for this choice, and moreover, since e.g. $A^{-1}[u]\in C([0,T];L_\textup{loc}^1(\R^N))$, we can take the limit as $n\to\infty$. This concludes that $u$ is a weak dual solution according to Definition \ref{def:WeakDualSolution}. \medskip \noindent(b) The comparison principle and the $L^p$-decay are immediately inherited from the elliptic problem, see e.g. the proofs of Propositions 5.1 and 5.2 in \cite{BeBoGrMu21}. \end{proof} \begin{remark} We have in fact shown that mild/integral solutions (i.e., the limit points of the time-discretized problem) of \eqref{GPME} are weak dual solutions. \end{remark} \begin{thebibliography}{100} \bibitem{AbBe98} C.~Abourjaily and P.~Benilan. \newblock Symmetrization of quasi-linear parabolic problems. \newblock {\em Rev. Un. Mat. Argentina}, 41(1):1--13, 1998. \newblock Dedicated to the memory of Julio E. Bouillet. \bibitem{A-VMaRoT-M10} F.~Andreu-Vaillo, J.~M. Maz{\'o}n, J.~D. Rossi, and J.~J. Toledo-Melero. \newblock {\em Nonlocal diffusion problems}, volume 165 of {\em Mathematical Surveys and Monographs}. \newblock American Mathematical Society, Providence, RI; Real Sociedad Matem\'atica Espa\~nola, Madrid, 2010. \bibitem{Aro68} D.~G. Aronson. \newblock Non-negative solutions of linear parabolic equations. \newblock {\em Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3)}, 22:607--694, 1968. \bibitem{ArBe79} D.~G. Aronson and P.~B\'{e}nilan. \newblock R\'{e}gularit\'{e} des solutions de l'\'{e}quation des milieux poreux dans {${\bf R}^{N}$}. \newblock {\em C. R. Acad. Sci. Paris S\'{e}r. A-B}, 288(2):A103--A105, 1979. \bibitem{AtCa10} I.~Athanasopoulos and L.~A. Caffarelli. \newblock Continuity of the temperature in boundary heat control problems. \newblock {\em Adv. Math.}, 224(1):293--315, 2010. \bibitem{BaDaQu11} D.~Babusci, G.~Dattoli, and M.~Quattromini. \newblock Relativistic equations with fractional and pseudo-differential operators. \newblock {\em Phys. Rev. A}, 83:0621109, 2011. \bibitem{BaCoLeS-C95} D.~Bakry, T.~Coulhon, M.~Ledoux, and L.~Saloff-Coste. \newblock Sobolev inequalities in disguise. \newblock {\em Indiana Univ. Math. J.}, 44(4):1033--1074, 1995. \bibitem{BaPeSoVa14} B.~Barrios, I.~Peral, F.~Soria, and E.~Valdinoci. \newblock A {W}idder's type theorem for the heat equation with nonlocal diffusion. \newblock {\em Arch. Ration. Mech. Anal.}, 213(2):629--650, 2014. \bibitem{BaCh06} R.~F. Bass and Z.-Q. Chen. \newblock Systems of equations driven by stable processes. \newblock {\em Probab. Theory Related Fields}, 134(2):175--214, 2006. \bibitem{BeFr14} J.~Bellazzini, R.~L. Frank, and N.~Visciglia. \newblock Maximizers for {G}agliardo-{N}irenberg inequalities and related non-local problems. \newblock {\em Math. Ann.}, 360(3-4):653--673, 2014. \bibitem{Ben78} P.~B\'{e}nilan. \newblock Op\'erateurs accr\'etifs et semigroups dans les \'espaces ${L}^p$ $(1\leq p\leq \infty)$. \newblock In {\em Functional analysis and numerical analysis: Japan-France Seminar (Tokyo and Kyoto, 1976)}, pages 15--53. Japan Soc. Promotion Sci., Tokyo, 1978. 
\bibitem{BeCrPa01} P.~B\'enilan, M.~Crandall, and A.~Pazy. \newblock Nonlinear evolution equations governed by accretive operators. \newblock Book manuscript, Besan\c{c}on, 2001. \bibitem{BeCr81b} P.~B\'{e}nilan and M.~G. Crandall. \newblock Regularizing effects of homogeneous evolution equations. \newblock In {\em Contributions to analysis and geometry ({B}altimore, {M}d., 1980)}, pages 23--39. Johns Hopkins Univ. Press, Baltimore, Md., 1981. \bibitem{BeBoGaGr20} E.~Berchio, M.~Bonforte, D.~Ganguly, and G.~Grillo. \newblock The fractional porous medium equation on the hyperbolic space. \newblock {\em Calc. Var. Partial Differential Equations}, 59(5):Paper No. 169, 36, 2020. \bibitem{BeBoGrMu21} E.~Berchio, M.~Bonforte, G.~Grillo, and M.~Muratori. \newblock The fractional porous medium equation on noncompact {R}iemannian manifolds. \newblock Preprint, arXiv:2109.10732v1 [math.AP], 2021. \bibitem{BlGe60} R.~M. Blumenthal and R.~K. Getoor. \newblock Some theorems on stable processes. \newblock {\em Trans. Amer. Math. Soc.}, 95:263--273, 1960. \bibitem{BoByKuRySoVo09} K.~Bogdan, T.~Byczkowski, T.~Kulczycki, M.~Ryznar, R.~Song, and Z.~Vondra\v{c}ek. \newblock {\em Potential analysis of stable processes and its extensions}, volume 1980 of {\em Lecture Notes in Mathematics}. \newblock Springer-Verlag, Berlin, 2009. \newblock Edited by Piotr Graczyk and Andrzej Stos. \bibitem{BoSz05} K.~Bogdan and P.~Sztonyk. \newblock Harnack's inequality for stable {L}\'{e}vy processes. \newblock {\em Potential Anal.}, 22(2):133--150, 2005. \bibitem{BoSz07} K.~Bogdan and P.~Sztonyk. \newblock Estimates of the potential kernel and {H}arnack's inequality for the anisotropic fractional {L}aplacian. \newblock {\em Studia Math.}, 181(2):101--123, 2007. \bibitem{BoSzKn20} K.~Bogdan, P.~Sztonyk, and V.~Knopova. \newblock Heat kernel of anisotropic nonlocal operators. \newblock {\em Doc. Math.}, 25:1--54, 2020. \bibitem{BoDoNaSi20} M.~Bonforte, J.~Dolbeault, B.~Nazaret, and N.~Simonov. \newblock Stability in {G}agliardo-{N}irenberg-{S}obolev inequalities. {F}lows, regularity and the entropy method. \newblock To \emph{Appear in Memoirs AMS}, preprint available at https://arxiv.org/abs/2007.03674, 2020. \bibitem{BoFiR-O17} M.~Bonforte, A.~Figalli, and X.~Ros-Oton. \newblock Infinite speed of propagation and regularity of solutions to the fractional porous medium equation in general domains. \newblock {\em Comm. Pure Appl. Math.}, 70(8):1472--1508, 2017. \bibitem{BoFiVa18b} M.~Bonforte, A.~Figalli, and J.~L. V\'azquez. \newblock Sharp boundary behaviour of solutions to semilinear nonlocal elliptic equations. \newblock {\em Calc. Var. Partial Differential Equations}, 57(2):57:57, 2018. \bibitem{BoFiVa18a} M.~Bonforte, A.~Figalli, and J.~L. V\'azquez. \newblock Sharp global estimates for local and nonlocal porous medium-type equations in bounded domains. \newblock {\em Anal. PDE}, 11(4):945--982, 2018. \bibitem{BoGr05b} M.~Bonforte and G.~Grillo. \newblock Asymptotics of the porous media equation via {S}obolev inequalities. \newblock {\em J. Funct. Anal.}, 225(1):33--62, 2005. \bibitem{BoGr05a} M.~Bonforte and G.~Grillo. \newblock Direct and reverse {G}agliardo-{N}irenberg inequalities from logarithmic {S}obolev inequalities. \newblock {\em Bull. Pol. Acad. Sci. Math.}, 53(3):323--336, 2005. \bibitem{BoGrVa08} M.~Bonforte, G.~Grillo, and J.~L. Vazquez. \newblock Fast diffusion flow on manifolds of nonpositive curvature. \newblock {\em J. Evol. Equ.}, 8(1):99--128, 2008. 
\bibitem{BoIbIs22} M.~Bonforte, P.~Ibarrondo, and M.~Ispizua. \newblock The {C}auchy-{D}irichlet {P}roblem for {S}ingular {N}onlocal {D}iffusions on {B}ounded {D}omains. \newblock Preprint, arXiv:2203.12545v1 [math.AP], 2022. \bibitem{BoSi19} M.~Bonforte and N.~Simonov. \newblock Quantitative a priori estimates for fast diffusion equations with {C}affarelli-{K}ohn-{N}irenberg weights. {H}arnack inequalities and {H}\"{o}lder continuity. \newblock {\em Adv. Math.}, 345:1075--1161, 2019. \bibitem{BoSiVa15} M.~Bonforte, Y.~Sire, and J.~L. V{\'a}zquez. \newblock Existence, uniqueness and asymptotic behaviour for fractional porous medium equations on bounded domains. \newblock {\em Discrete Contin. Dyn. Syst.}, 35(12):5725--5767, 2015. \bibitem{BoSiVa17} M.~Bonforte, Y.~Sire, and J.~L. V\'azquez. \newblock Optimal existence and uniqueness theory for the fractional heat equation. \newblock {\em Nonlinear Anal.}, 153:142--168, 2017. \bibitem{BoVa10} M.~Bonforte and J.~L. V\'{a}zquez. \newblock Positivity, local smoothing, and {H}arnack inequalities for very fast diffusion equations. \newblock {\em Adv. Math.}, 223(2):529--578, 2010. \bibitem{BoVa14} M.~Bonforte and J.~L. V{\'a}zquez. \newblock Quantitative local and global a priori estimates for fractional nonlinear diffusion equations. \newblock {\em Adv. Math.}, 250:242--284, 2014. \bibitem{BoVa15} M.~Bonforte and J.~L. V{\'a}zquez. \newblock A priori estimates for fractional nonlinear degenerate diffusion equations on bounded domains. \newblock {\em Arch. Ration. Mech. Anal.}, 218(1):317--362, 2015. \bibitem{BoVa16} M.~Bonforte and J.~L. V\'azquez. \newblock Fractional nonlinear degenerate diffusion equations on bounded domains part {I}. {E}xistence, uniqueness and upper bounds. \newblock {\em Nonlinear Anal.}, 131:363--398, 2016. \bibitem{BoDoSc20} E.~Bouin, J.~Dolbeault, and C.~Schmeiser. \newblock A variational proof of {N}ash's inequality. \newblock {\em Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl.}, 31(1):211--223, 2020. \bibitem{BrDP18} C.~Br\"andle and A.~de~Pablo. \newblock Nonlocal heat equations: {R}egularizing effect, decay estimates and {N}ash inequalities. \newblock {\em Commun. Pure Appl. Anal.}, 17(3):1161--1178, 2018. \bibitem{BrLiSt21} L.~Brasco, E.~Lindgren, and M.~Str\"{o}mqvist. \newblock Continuity of solutions to a nonlinear fractional diffusion equation. \newblock {\em J. Evol. Equ.}, 21(4):4319--4381, 2021. \bibitem{BrCr79} H.~Br{\'e}zis and M.~G. Crandall. \newblock Uniqueness of solutions of the initial-value problem for {$u_{t}-\Delta \varphi (u)=0$}. \newblock {\em J. Math. Pures Appl. (9)}, 58(2):153--163, 1979. \bibitem{BrSt73} H.~Br\'{e}zis and W.~A. Strauss. \newblock Semi-linear second-order elliptic equations in {$L^{1}$}. \newblock {\em J. Math. Soc. Japan}, 25:565--590, 1973. \bibitem{CaKuSt87} E.~A. Carlen, S.~Kusuoka, and D.~W. Stroock. \newblock Upper bounds for symmetric {M}arkov transition functions. \newblock {\em Ann. Inst. H. Poincar\'{e} Probab. Statist.}, 23(2, suppl.):245--287, 1987. \bibitem{CaLo93} E.~A. Carlen and M.~Loss. \newblock Sharp constant in {N}ash's inequality. \newblock {\em Internat. Math. Res. Notices}, 7:213--215, 1993. \bibitem{ChKa20} J.~Chaker and M.~Kassmann. \newblock Nonlocal operators with singular anisotropic kernels. \newblock {\em Comm. Partial Differential Equations}, 45(1):1--31, 2020. \bibitem{ChKiKu08} Z.-Q. Chen, P.~Kim, and T.~Kumagai. \newblock Weighted {P}oincar\'{e} inequality and heat kernel estimates for finite range jump processes. \newblock {\em Math. 
Ann.}, 342(4):833--883, 2008. \bibitem{ChKiKu09} Z.-Q. Chen, P.~Kim, and T.~Kumagai. \newblock On heat kernel estimates and parabolic {H}arnack inequality for jump processes on metric measure spaces. \newblock {\em Acta Math. Sin. (Engl. Ser.)}, 25(7):1067--1086, 2009. \bibitem{ChKu08} Z.-Q. Chen and T.~Kumagai. \newblock Heat kernel estimates for jump processes of mixed types on metric measure spaces. \newblock {\em Probab. Theory Related Fields}, 140(1-2):277--317, 2008. \bibitem{CoHa21} T.~A. Collier and D.~Hauer. \newblock A doubly nonlinear evolution problem involving the fractional $p$-{L}aplacian. \newblock Preprint, arXiv:2110.13401v1 [math.AP], 2021. \bibitem{C-ENaVi04} D.~Cordero-Erausquin, B.~Nazaret, and C.~Villani. \newblock A mass-transportation approach to sharp {S}obolev and {G}agliardo-{N}irenberg inequalities. \newblock {\em Adv. Math.}, 182(2):307--332, 2004. \bibitem{CotTa04} A.~Cotsiolis and N.~K. Tavoularis. \newblock Best constants for {S}obolev inequalities for higher order fractional derivatives. \newblock {\em J. Math. Anal. Appl.}, 295(1):225--236, 2004. \bibitem{CoHa16} T.~Coulhon and D.~Hauer. \newblock Functional inequalities and regularising properties of nonlinear semigroups -- theory and application. \newblock To appear in \emph{BCAM SpringerBriefs in Mathematics}, preprint available at http://arxiv.org/abs/1604.08737, 2016. \bibitem{Cou64} P.~Courr\`ege. \newblock G\'en\'erateur infinit\'esimal d'un semi-groupe de convolution sur {$R^{n}$}, et formule de {L}\'evy-{K}hinchine. \newblock {\em Bull. Sci. Math. (2)}, 88:3--30, 1964. \bibitem{CrPi82} M.~Crandall and M.~Pierre. \newblock Regularizing effects for {$u_{t}+A\varphi (u)=0$} in {$L^{1}$}. \newblock {\em J. Funct. Anal.}, 45(2):194--212, 1982. \bibitem{CrOtWe08} G.~Crippa, F.~Otto, and M.~Westdickenberg. \newblock Regularizing effect of nonlinearity in multidimensional scalar conservation laws. \newblock In {\em Transport equations and multi-{D} hyperbolic conservation laws}, volume~5 of {\em Lect. Notes Unione Mat. Ital.}, pages 77--128. Springer, Berlin, 2008. \bibitem{DaKe07} P.~Daskalopoulos and C.~E. Kenig. \newblock {\em Degenerate diffusions. Initial value problems and local regularity theory}, volume~1 of {\em EMS Tracts in Mathematics}. \newblock European Mathematical Society (EMS), Z\"{u}rich, 2007. \bibitem{Dav89} E.~B. Davies. \newblock {\em Heat kernels and spectral theory}, volume~92 of {\em Cambridge Tracts in Mathematics}. \newblock Cambridge University Press, Cambridge, 1989. \bibitem{DPQuRi22} A.~de~Pablo, F.~Quir\'{o}s, and A.~Ritorto. \newblock Extremals in {H}ardy-{L}ittlewood-{S}obolev inequalities for stable processes. \newblock {\em J. Math. Anal. Appl.}, 507(1):Paper No. 125742, 18, 2022. \bibitem{DPQuRo16} A.~de~Pablo, F.~Quir{\'o}s, and A.~Rodr{\'{\i}}guez. \newblock Nonlocal filtration equations with rough kernels. \newblock {\em Nonlinear Anal.}, 137:402--425, 2016. \bibitem{DPQuRo18} A.~de~Pablo, F.~Quir\'{o}s, and A.~Rodr\'{\i}guez. \newblock Regularity theory for singular nonlocal diffusion equations. \newblock {\em Calc. Var. Partial Differential Equations}, 57(5):Paper No. 136, 14, 2018. \bibitem{DPQuRoVa12} A.~de~Pablo, F.~Quir{\'o}s, A.~Rodr{\'{\i}}guez, and J.~L. V{\'a}zquez. \newblock A general fractional porous medium equation. \newblock {\em Comm. Pure Appl. Math.}, 65(9):1242--1284, 2012. \bibitem{DPQuRoVa14} A.~de~Pablo, F.~Quir{\'o}s, A.~Rodr{\'{\i}}guez, and J.~L. V{\'a}zquez. \newblock Classical solutions for a logarithmic fractional diffusion equation. 
\end{thebibliography} \end{document}
2205.06788v2
http://arxiv.org/abs/2205.06788v2
Partitioning through projections: strong SDP bounds for large graph partition problems
\documentclass[11p]{article} \usepackage[a4paper, total={6in, 9in}]{geometry} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{amsthm} \usepackage{comment} \usepackage{bm,amsfonts,nicefrac,latexsym,amsmath,amsfonts,amsbsy,amscd,amsxtra,amsgen,amsopn,bbm,amssymb} \newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{remark}{Remark} \usepackage{lastpage} \usepackage{fancyhdr} \usepackage{mathrsfs} \usepackage[usenames,dvipsnames]{xcolor} \usepackage{graphicx} \usepackage{enumitem} \usepackage{listings} \usepackage{float} \usepackage[normalem]{ulem} \usepackage{textpos} \usepackage[numbers]{natbib} \usepackage[ruled,vlined,linesnumbered]{algorithm2e} \usepackage{algpseudocode} \usepackage{algorithm2e} \usepackage{tabularx,longtable,multirow,caption}\usepackage[english]{babel} \usepackage{dsfont} \usepackage{epstopdf} \usepackage{array} \usepackage[pdftex,hypertexnames=false,colorlinks]{hyperref} \usepackage[colorinlistoftodos]{todonotes} \allowdisplaybreaks \hypersetup{pdftitle={}, pdfsubject={}, pdfauthor={}, pdfkeywords={}, pdfstartview=FitH, pdfpagemode={UseOutlines}, bookmarksnumbered=true, bookmarksopen=true, colorlinks, citecolor=black, filecolor=black, linkcolor=black, urlcolor=black} \usepackage{booktabs} \usepackage{pgfplots} \usepackage{pgfplotstable} \usepackage{pdflscape} \usepackage{rotating} \usepackage{siunitx} \usepackage{dcolumn} \usepackage{scalerel} \usepackage{tikz} \usetikzlibrary{svg.path} \pgfplotstableset{ begin table=\begin{longtable}, end table=\end{longtable}, } \pgfplotsset{compat=1.17} \renewcommand*{\rmdefault}{bch} \renewcommand*{\ttdefault}{cmtt} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \allowdisplaybreaks \definecolor{orcidlogocol}{HTML}{A6CE39} \tikzset{ orcidlogo/.pic={ ll[orcidlogocol] svg{M256,128c0,70.7-57.3,128-128,128C57.3,256,0,198.7,0,128C0,57.3,57.3,0,128,0C198.7,0,256,57.3,256,128z}; ll[white] svg{M86.3,186.2H70.9V79.1h15.4v48.4V186.2z} svg{M108.9,79.1h41.6c39.6,0,57,28.3,57,53.6c0,27.5-21.5,53.6-56.8,53.6h-41.8V79.1z M124.3,172.4h24.5c34.9,0,42.9-26.5,42.9-39.7c0-21.5-13.7-39.7-43.7-39.7h-23.7V172.4z} svg{M88.7,56.8c0,5.5-4.5,10.1-10.1,10.1c-5.6,0-10.1-4.6-10.1-10.1c0-5.6,4.5-10.1,10.1-10.1C84.2,46.7,88.7,51.3,88.7,56.8z}; } } \newcommand\orcidicon[1]{\href{https://orcid.org/#1}{\mbox{\scalerel*{ \begin{tikzpicture}[yscale=-1,transform shape] \pic{orcidlogo}; \end{tikzpicture} }{|}}}} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, linkbordercolor={0 0 1} } \renewcommand\lstlistingname{Algorithm} \renewcommand\lstlistlistingname{Algorithms} \def\lstlistingautorefname{Alg.} \newcommand{\R}{\mathbb{R}} \newcommand{\sn}{{\mathcal S}^n} \newcommand{\inprod}[2]{{\langle #1,#2 \rangle}} \newcommand{\trace}{\textrm{tr}} \newcommand{\rank}{\textrm{rank}} \newcommand{\diag}{\textrm{diag}} \newcommand{\Diag}{\textrm{Diag}} \newcommand{\vect}{\textrm{vec}} \newcommand{\Col}{\textrm{Col}} \newcommand{\Nul}{\textrm{Nul}} \newcommand{\Conv}{\textrm{Conv}} \newcommand{\epsadmm}{\varepsilon_\textrm{ADMM}} \newcommand{\epsproj}{\varepsilon_\textrm{proj}} \newcommand{\inner}{\text{inner}} \newcommand{\arrow}{\text{arrow}} \newcounter{rowcntr}[table] \renewcommand{\therowcntr}{\thetable.\arabic{rowcntr}} \newcolumntype{N}{>{\refstepcounter{rowcntr}\therowcntr}c} \AtBeginEnvironment{tabular}{\setcounter{rowcntr}{0}} \newcommand{\todofrank}[1]{\todo[author=Frank,color=blue!20]{#1}} 
\newcommand{\todoinlinefrank}[1]{\todo[inline,author=Frank,color=blue!20]{#1}} \newcommand{\todoangelika}[1]{\todo[author=Angelika,color=red!20]{#1}} \newcommand{\todoinlineangelika}[1]{\todo[inline,author=Angelika,color=red!20]{#1}} \newcommand{\todoshudian}[1]{\todo[author=Shudian,color=green!40]{#1}} \newcommand{\todoinlineshudian}[1]{\todo[inline,author=Shudian,color=green!40]{#1}} \providecommand{\keywords}[1] { \small \textbf{\textit{Keywords---}} #1 } \usepackage[multiple]{footmisc} \makeatletter \newcommand\footnoteref[1]{\protected@xdef\@thefnmark{\ref{#1}}\@footnotemark} \makeatother \title{Partitioning through projections: strong SDP bounds for large graph partition problems} \makeatletter \newcommand\newtag[2]{#1\def\@currentlabel{#1}\label{#2}} \makeatother \date{ } \begin{document} \author{ Frank de Meijer\footnote{Tilburg University, Department of Econometrics \& Operations Research, CentER, 5000 LE Tilburg, \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}}~\footnote{Corresponding Author: \href{mailto:[email protected]}{[email protected]}}~\orcidicon{0000-0002-1910-0699} \, \,\, \and Renata Sotirov$^*$\orcidicon{0000-0002-3298-7255 } \and Angelika Wiegele\footnote{Institut f\"ur Mathematik, Alpen-Adria-Universit\"at Klagenfurt, Universit\"atstra{\ss}e 65-67, 9020 Klagenfurt, \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}}~~\footnote{This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement MINOA No 764759.}~\orcidicon{0000-0003-1670-7951} \and Shudian Zhao$^{\ddagger\,\S}$\orcidicon{0000-0001-6352-0968} } \maketitle \begin{abstract} The graph partition problem (GPP) aims at clustering the vertex set of a graph into a fixed number of disjoint subsets of given sizes such that the sum of weights of edges joining different sets is minimized. This paper investigates the quality of doubly nonnegative (DNN) relaxations, i.e., relaxations having matrix variables that are both positive semidefinite and nonnegative, strengthened by \textcolor{black}{additional} polyhedral cuts for two variations of the GPP: the $k$-equipartition and the graph bisection problem. After reducing the size of the relaxations by facial reduction, we solve them by a cutting-plane algorithm that combines an augmented Lagrangian method with Dykstra’s projection algorithm. Since many components of our algorithm are general, the algorithm is suitable for solving various DNN relaxations with a large number of cutting planes. We are the first to show the power of DNN relaxations with additional cutting planes for the GPP on large benchmark instances up to 1,024 vertices. Computational results show impressive improvements in strengthened DNN bounds. \\ \keywords{graph partition problems, semidefinite programming, cutting planes, Dykstra's projection algorithm, augmented Lagrangian method} \end{abstract} \section{Introduction} The graph partition problem (GPP) is the problem of partitioning the vertex set of a graph into a fixed number of subsets, say $k$, of given sizes such that the sum of weights of edges joining different sets is minimized. In the case that all sets are of equal sizes we refer to the resulting GPP as the $k$-equipartition problem ($k$-EP). The case of the graph partition problem with $k=2$ is known as the graph bisection problem (GBP). In the GBP the sizes of two subsets might differ. 
The special case of the GBP where both subsets have the same size is known in the literature as the equicut problem, see e.g.~\cite{KarischRendlClausen}. The graph partition problem is known to be NP-hard \cite{Garey1976SomeSN}. It is a fundamental problem that is extensively studied, mostly due to its applications in numerous fields, including VLSI design, social networks, floor \textcolor{black}{planning}, data mining, air traffic, image processing, image segmentation, parallel computing and telecommunication, see e.g., the book~\cite{Bichot} and the references therein. Recent studies in quantum circuit design \cite{quantum} also relate to the general graph partition problem. Furthermore, the GPP is used to compute bounds for the bandwidth problem~\cite{RENDL2021105422}. For an overview of recent advances in graph partitioning, we refer the reader to~\cite{Beluc}. There exist bounding approaches for the GPP that are valid for all variations of the GPP. We list some of them in this paragraph. Donath and Hoffman~\cite{Donath1973LowerBF} derive an eigenvalue bound for the graph partition problem. That bound is improved by Rendl and Wolkowicz~\cite{RendlWolkowicz}. Wolkowicz and Zhao~\cite{wokowicz-zhao1999} derive semidefinite programming (SDP) relaxations for the graph partition problem that are based on the so-called vector lifting approach. That is, relaxations in~\cite{wokowicz-zhao1999} have matrix variables of order $nk+1$, where $n$ is the number of vertices in the graph. An SDP relaxation based on the so-called matrix lifting approach is derived in \cite{sotirov2014gpp}, resulting in a compact relaxation for the GPP having matrix variables of order $n$. The relaxation from \cite{sotirov2014gpp} is a doubly nonnegative (DNN) relaxation, which is an SDP relaxation over the set of nonnegative matrices. In general, vector lifting relaxations provide stronger bounds than matrix lifting relaxations, {\color{black}see e.g.,~\cite{Olga,sotirov2014gpp}.} For the $k$-equipartition problem, Karisch and Rendl~\cite{Karisch98semidefinite}, among others, show how to reformulate the Donath-Hoffman and the Rendl-Wolkowicz relaxations as semidefinite programming problems. Karisch and Rendl also present several SDP relaxations with increasing complexity for the $k$-EP with matrix variables of order $n$ that dominate relaxations from~\cite{Donath1973LowerBF,RendlWolkowicz}. The strongest SDP relaxation from~\cite{Karisch98semidefinite} is a DNN relaxation with additional triangle and independent set inequalities. That relaxation provides the best known SDP bounds for the $k$-EP. However, it is difficult to compute those bounds for general graphs with more than 300 vertices when using interior point methods. In~\cite{Dam2015SemidefinitePA}, the authors compute bounds from~\cite{Karisch98semidefinite} for highly symmetric graphs by aggregating the triangle and independent set inequalities. Their approach results in significantly reduced computational effort for solving SDP relaxations with \textcolor{black}{a large number} of cutting planes for highly symmetric graphs. De~Klerk et al.~\cite{Klerk2012OnSP} exploit the fact that the $k$-EP is a special case of the quadratic assignment problem (QAP) to derive a DNN relaxation for the $k$-EP from a DNN relaxation for the QAP. The size of the resulting relaxation is reduced by exploiting \textcolor{black}{symmetry}. 
Numerical results show that the relaxation from~\cite{Klerk2012OnSP} does not dominate the strongest relaxation from \cite{Karisch98semidefinite}. There exist several SDP relaxations for the graph bisection problem. Two equivalent SDP relaxations for the GBP whose matrix variables have order $n$ are given in \cite{KarischRendlClausen,sotirov2014gpp}. The strongest matrix lifting DNN relaxation with a matrix variable of order $n$ is derived in \cite{Sotirov18}. That relaxation is equivalent to the vector lifting DNN relaxation from~\cite{wokowicz-zhao1999}, which has matrix variables of order $2n + 1$. For \textcolor{black}{comparisons} of the above mentioned SDP relaxations for the $k$-EP and the GBP, see e.g.,~\cite{sotirov2012sdp} and \cite{sotirov2014gpp}, respectively. Finally, we list below exact methods for solving the graph partition problem and its variants. Karisch et al.~\cite{KarischRendlClausen} develop a branch-and-bound algorithm that is based on a cutting-plane approach. In particular, the algorithm combines semidefinite programming and polyhedral constraints, i.e., triangle inequalities, to solve instances of the GBP. The algorithm from~\cite{KarischRendlClausen} solves problem instances with up to 90 vertices. Hager et al.~\cite{Hager2013AnEA} present a branch-and-bound algorithm for the graph bisection problem that exploits continuous quadratic programming formulations of the problem. The use of SDP bounds leads to the best performance of the algorithm from~\cite{Hager2013AnEA}. Armbruster et al.~\cite{HelmbergEtAl} evaluate the strength of a branch-and-cut framework on large and sparse instances, using linear and SDP relaxations of the GBP. The numerical results in~\cite{HelmbergEtAl} show that in the majority of the cases the semidefinite programming approach outperforms the linear one. Since prior to~\cite{HelmbergEtAl} it was widely believed that SDP relaxations are useful only for small and dense instances, the results of~\cite{HelmbergEtAl} influenced recent solver developments. As observed by \cite{Beluc}, all mentioned exact approaches typically solve only very small problems with very large running times. The aim of this paper is to further investigate the quality of DNN relaxations on large graphs, that are strengthened by polyhedral cuts and solved by a first order method. \subsection*{Main results and outline} Doubly nonnegative relaxations are known to provide superior bounds for various optimization problems. Although additional cutting planes further improve DNN relaxations, it is extremely difficult to compute the resulting bounds already for relaxations with matrix variables of order 300 via interior point methods. We design an efficient algorithm for computing DNN bounds with a huge number of additional cutting planes and show the power of the resulting bounds for two variations of the graph partition problem. We conduct a study for the $k$-equipartition problem and the graph bisection problem. Although there exists a DNN relaxation for the GPP~\cite{wokowicz-zhao1999} that is suitable for both problems, we study the problems separately. Namely, the $k$-EP allows various equivalent SDP relaxations having different sizes of matrix variables due to the problem's invariance under permutations of the subsets. 
Since one can solve DNN relaxations with smaller matrix variables more efficiently than those with larger matrix variables, we consider the matrix lifting DNN relaxation for the $k$-EP from~\cite{Karisch98semidefinite} that is strengthened by the triangle and independent set inequalities. On the other hand, the vector lifting DNN relaxation for the GBP from~\cite{wokowicz-zhao1999} is known to dominate matrix lifting DNN relaxations for the same problem. Therefore, we consider the vector lifting DNN relaxation for the GBP and strengthen it by adding boolean quadric polytope (BQP) inequalities. Prior to solving the DNN relaxations, we use facial reduction to obtain equivalent smaller dimensional relaxations that are strictly feasible. The approach we use for the GBP is based on the dimension of the underlying polytope. {\color{black}Although strict feasibility of an SDP is not required for our solver, it makes the procedure more efficient.} {\color{black} To solve the DNN relaxations with additional polyhedral inequalities, we design a cutting-plane algorithm called the cutting-plane ADMM (CP--ADMM) algorithm. Our algorithm combines the Alternating Direction Method of Multipliers (ADMM) with Dykstra’s projection algorithm. The ADMM exploits the natural splitting of the relaxations that arises from the facial reduction. Dykstra's cyclic projection algorithm finds projections onto polyhedra induced by the violated cuts. Since facial reduction eliminates redundant constraints and projects a relaxation onto a smaller dimensional space, the projections in the CP--ADMM are easier and faster. To further improve the efficiency of the CP--ADMM, we cluster non-overlapping cuts, which allows us to perform the projections in each cluster simultaneously. The efficiency of the algorithm is also due to the exploitation of warm starts, as well as the use of efficient separation routines. Since we present the various components of the CP--ADMM in a general way, the algorithm is suitable for solving various DNN relaxations incorporating additional cutting planes. } Our numerical results show that the CP--ADMM algorithm computes strong GPP bounds for graphs with up to 1,024 vertices by adding at most 50,000 cuts in less than two hours. Since our algorithm does not require much memory, we are able to compute strong bounds for even larger graphs than presented here. Numerical results also show that the additional cutting planes significantly improve the DNN relaxations, and that the resulting bounds can close gaps for instances with up to 500 vertices. The paper is structured as follows. In Section~\ref{section:problem} we introduce the graph partition problem. Section~\ref{sect:equipartition} presents DNN relaxations for the $k$-EP. In Section~\ref{sect:bisection} we present DNN relaxations for the GBP and show how to apply facial reduction by exploiting the dimension of the bisection polytope. Our cutting-plane augmented Lagrangian algorithm is introduced in Section~\ref{sect:admm}. The main ingredients of the algorithm are given in Section~\ref{sect:ADMMbasic} and Section~\ref{sect:Dykstra}. In particular, Section~\ref{sect:ADMMbasic} explains the steps of the ADMM and Section~\ref{sect:Dykstra} introduces Dykstra’s projection algorithm and its semi-parallelized version. The CP--ADMM algorithm is outlined in Section~\ref{sect:CPADMM}. Section~\ref{sect:cuts} considers various families of cutting planes that are used to strengthen DNN relaxations for the $k$-EP and GBP.
Numerical results are given in Section~\ref{sect:numerics}. \section*{Notation} The set of $n\times n$ real symmetric matrices is denoted by ${\mathcal S}^n$. The space of symmetric matrices is considered with the trace inner product, which for any $X, Y \in {\mathcal S}^{n}$ is defined as $\inprod{X}{Y}:= \trace (XY)$. The associated norm is the Frobenius norm $\| X\|_F := \sqrt{\trace (XX)}$. The cone of symmetric positive semidefinite matrices of order $n$ is defined as ${\mathcal S}_+^n :=\{X \in {\mathcal S}^n: X\succeq \mathbf{0} \}$. We use $[n]$ to denote the set of integers $\{1,\dots,n\}$. The $\vect(\cdot)$ operator maps an $n \times m$ matrix to a vector of length $nm$ by stacking the columns of the matrix on top of one another. The Kronecker product $A \otimes B$ of matrices $A \in {\R}^{p \times q}$ and $B\in {\R}^{r\times s}$ is defined as the $pr \times qs$ matrix composed of $pq$ blocks of size $r\times s$, with block $ij$ given by $a_{ij}B$, $i \in [p]$, $j \in [q]$. We use the following properties of the Kronecker product and the $\vect$ operator: \begin{align} \vect (AXB)= (B^\top \otimes A)\vect(X) \label{kron:property1}\\ \trace(AB) = \vect(A^\top)^\top \vect(B). \label{kron:property2} \end{align} The Hadamard product of two matrices $X=(x_{ij})$ and $Y=(y_{ij})$ of the same size is denoted by $X\circ Y$ and is defined as $(X\circ Y)_{ij} := x_{ij}y_{ij}$. We denote by ${\mathbf 1}_n$ the vector of all ones of length $n$, and define ${\mathbf J}_n: = {\mathbf 1}_n {\mathbf 1}_n^\top$. The all zero vector of length $n$ is denoted by ${\mathbf 0}_n$. We use ${\mathbf I}_n$ to denote the identity matrix of order $n$, while its $i$-th column is given by ${\mathbf u}_i$. In case that the dimension of ${\mathbf 1}_n$, ${\mathbf 0}_n$, ${\mathbf J}_n$ and $\mathbf{I}_n$ is clear from the context, we omit the subscript. The operator $\textrm{diag}\colon \R^{n\times n} \to \R^n$ maps a square matrix to a vector consisting of its diagonal elements. Its adjoint operator is denoted by $\textrm{Diag}\colon \R^n \to \R^{n\times n}$. The rank of matrix ${X}$ is denoted by $\mbox{rank}({X})$. \section{The graph partition problem} \label{section:problem} Let $G=(V,E)$ be an undirected graph with vertex set $V$, $|V|=n$, and edge set $E$. Let $w\colon E \rightarrow \mathbb{R}$ be a weight function on the edges. Let $k$ be a given integer such that $2\leq k \leq n-1$. The graph partition problem is to partition the vertex set of $G$ into $k$ disjoint sets $S_1$, \ldots, $S_k$ of specified sizes $m_1\geq \cdots \geq m_k \geq 1$, $\sum_{j=1}^k m_j=n$ such that the total weight of edges joining different sets $S_j$ is minimized. If $k=2$, then we refer to the corresponding graph partition problem as the graph bisection problem. If $m_1=\cdots = m_k = n/k$, then the resulting GPP is known as the $k$-equipartition problem. Let $A_w$ denote the weighted adjacency matrix of $G$ with respect to $w$ and ${\mathbf m}=(m_1,\ldots,m_k)^\top$. For a given partition of $G$ into $k$ subsets, let $P=(P_{ij}) \in \{0,1\}^{n\times k}$ be the partition matrix defined as follows: \begin{align} \label{Y} P_{ij} := \begin{cases} 1,& i\in S_j \\ 0, & \mbox{otherwise} \end{cases}\quad i \in [n], ~j\in [k]. \end{align} Thus, the $j$-th column of $P$ is the characteristic vector of $S_j$. 
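\par\medskip\noindent As a small sanity check of the notation above, the following listing is a minimal sketch in Python with NumPy (the instance with $n=5$, $k=2$ and ${\mathbf m}=(3,2)^\top$ is hypothetical and serves only as an illustration); it builds a partition matrix $P$ as in \eqref{Y} and numerically verifies the identities \eqref{kron:property1} and \eqref{kron:property2}.
\begin{lstlisting}[language=Python]
import numpy as np

# Hypothetical toy instance: n = 5 vertices, k = 2 sets of sizes m = (3, 2).
n, k = 5, 2
m = np.array([3, 2])

# Partition matrix P as defined above: S_1 = {1,2,3}, S_2 = {4,5}.
P = np.zeros((n, k))
P[[0, 1, 2], 0] = 1.0
P[[3, 4], 1] = 1.0
assert np.allclose(P @ np.ones(k), np.ones(n))   # every vertex in exactly one set
assert np.allclose(P.T @ np.ones(n), m)          # set sizes equal m

# vec() stacks columns, i.e. flattening in Fortran (column-major) order.
vec = lambda M: M.flatten(order="F")
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))   # vec(AXB) = (B^T kron A) vec(X)
C = rng.standard_normal((3, 4))
D = rng.standard_normal((4, 3))
assert np.allclose(np.trace(C @ D), vec(C.T) @ vec(D))         # tr(AB) = vec(A^T)^T vec(B)
print("notation checks passed")
\end{lstlisting}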
{\color{black}The total weight of the partition}, i.e., the sum of weights of edges that join different sets equals $$ \frac{1}{2}\trace \left( A_w({\mathbf J}_n-PP^\top) \right) =\frac{1}{2} \trace (LPP^\top), $$ where {$L:= \textrm{Diag}(A_w{\mathbf 1}_n) -A_w$} is the weighted Laplacian matrix of $G$. The GPP can be formulated as the following binary optimization problem: \begin{subequations}\label{binary} \begin{align} \min_{P}~& \frac{1}{2} \langle L,PP^\top \rangle \label{binary-object} \\ (GPP) \qquad \textrm{s.t.}~& ~P{\mathbf 1}_k = {\mathbf 1}_n, \label{binary-a}\\ & P^\top {\mathbf 1}_n = {\mathbf m}, \label{binary-b}\\ & P_{ij} \in \{ 0, 1 \}\quad \forall i \in [n], ~j\in [k], \label{binary-d} \end{align} \end{subequations} where $P \in \mathbb{R}^{n\times k}$. {\color{black}Note that the objective function \eqref{binary-object} is quadratic.} The constraints \eqref{binary-a} ensure that each vertex must be in exactly one subset. The cardinality constraints \eqref{binary-b} take care that the number of vertices in subset $S_j$ is $m_j$ for $j\in [k]$. \medskip\medskip Let us briefly consider the polytope induced by all feasible partitions of $G$. Let $F_n^k({\mathbf m})$ be the set of all characteristic vectors representing a partition of $n$ vertices into $k$ disjoint sets corresponding to the cardinalities in ${\mathbf m}$. In other words, $F_n^k({\mathbf m})$ contains binary vectors of the form $\vect(P)$, where $P$ is a partition matrix, see \eqref{Y}: \begin{align} \label{Def:SetF} F_n^k({\mathbf m}) = \left\{ x \in \{0,1\}^{kn} \, : \, \, \begin{pmatrix} \mathbf{I}_k \otimes {\mathbf 1}_n^\top \\ {\mathbf 1}_k^\top \otimes \mathbf{I}_n \end{pmatrix} x = \begin{pmatrix} {\mathbf m} \\ {\mathbf 1}_n \end{pmatrix} \right\}. \end{align} We now define $\Conv(F_n^k({\mathbf m}))$ as the \textit{$k$-partition polytope}. Since the constraint matrix defining $F_n^k({\mathbf m})$ is totally unimodular, this polytope can be explicitly written as follows: \begin{align} \label{Def:GPPpolytope} \Conv(F_n^k({\mathbf m})) = \left\{ x \in \mathbb{R}^{kn} \, : \, \, \begin{pmatrix} \mathbf{I}_k \otimes {\mathbf 1}_n^\top \\ {\mathbf 1}_k^\top \otimes \mathbf{I}_n \end{pmatrix} x = \begin{pmatrix} {\mathbf m} \\ {\mathbf 1}_n \end{pmatrix}, \, x \geq {\mathbf 0} \right\}. \end{align} \textcolor{black}{The $k$-partition polytope can be seen as a special case of a transportation polytope, see e.g., \cite{DulmageMendelsohn}. We now derive the dimension of the $k$-partition polytope, which will be exploited in Section~\ref{sect:bisection}. The following result is implied by the dimension of the transportation polytope \cite{DulmageMendelsohn}. However, we add a proof for completeness.} \begin{theorem} \label{Thm:DimConvF} The dimension of $\Conv(F_n^k({\mathbf m}))$ equals $(k-1)(n-1)$. \end{theorem} \begin{proof} Let $B := \begin{pmatrix} \mathbf{I}_k \otimes {\mathbf 1}_n^\top \\ {\mathbf 1}_k^\top \otimes \mathbf{I}_n \end{pmatrix}.$ Since for all $i \in [kn]$ there exists a partition such that $x_i = 1$, we know that $\dim(\Conv(F_n^k({\mathbf m}))) = \dim(\Nul(B))$. Let $b_1, \ldots, b_{kn}$ denote the columns of $B$. It is obvious that the set $\{b_1, \ldots, b_n, b_{n+1},b_{2n+1},\ldots,$ $b_{(k-1)n+1}\}$ is linearly independent. From this it follows that $\rank ( B) \geq n +k - 1$. 
Next, we define for all $l = 1, \ldots, k-1$ and $i = 2, \ldots, n$ a vector $w^{l,i} \in \mathbb{R}^{kn}$ as follows: \begin{align*} (w^{l,i})_j = \begin{cases} +1 & \text{if $j = 1$ or $j = l \cdot n+i$,} \\ -1 & \text{if $j = i$ or $j = l\cdot n+1$,} \\ 0 & \text{otherwise.} \end{cases} \end{align*} One can verify that $Bw^{l,i} = \bold{0}$ for all $l = 1, \ldots, k-1$ and $i = 2, \ldots, n$. Moreover, since $w^{l,i}$ is the only vector that has a nonzero entry on position $ln + i$ among all defined vectors, the set $\{w^{l,i} \, : \, \, l = 1, \ldots, k-1, i = 2, \ldots, n\}$ is a linearly independent set. This proves that $\dim(\Nul(B)) \geq (k-1)(n-1)$. Since $\rank(B) + \dim(\Nul(B)) = kn$, we conclude that $\dim(\Nul(B)) = (k-1)(n - 1)$. \end{proof} \subsection{DNN relaxations for the $k$-equipartition problem} \label{sect:equipartition} There exist several ways to obtain semidefinite programming relaxations for the $k$-EP. Namely, to obtain an SDP relaxation for the $k$-EP one can linearize the objective function of $(GPP)$ by introducing a matrix variable of order $n$, which results in a matrix lifting relaxation, see e.g.,~\cite{Karisch98semidefinite,sotirov2014gpp}. Another approach is to linearize the objective function by lifting the problem in the space of $(nk+1)\times (nk+1)$ matrices, which results in a vector lifting relaxation, see~\cite{wokowicz-zhao1999}. We call a DNN relaxation basic if it does not contain additional cutting planes such as triangle inequalities, etc. It is proven in \cite{sotirov2012sdp} that the basic matrix and vector lifting DNN relaxations for the $k$-EP are equivalent. A more elegant proof of the same result can be found in Kuryatnikova et al.~\cite{Olga}. Since one can solve the basic matrix lifting relaxation from~\cite{Karisch98semidefinite} more efficiently than the equivalent vector lifting relaxation from~\cite{wokowicz-zhao1999}, we develop our algorithm for the matrix lifting relaxation for the $k$-EP. To linearize the objective from $(GPP)$ we replace $PP^\top$ by a matrix variable $Y\in \sn$. From \eqref{binary-a} it follows that $Y_{ii} = \sum_{j=1}^{k} P_{ij}^2 = \sum_{j=1}^{k} P_{ij}= 1$ for all $i\in [n]$. From \eqref{binary-a}--\eqref{binary-b} we have $ Y {\mathbf 1}_n = PP^\top {\mathbf 1}_n = \frac{n}{k}P {\mathbf 1}_k = \frac{n}{k} {\mathbf 1}_n$. After putting those constraints together, adding $Y \geq {\mathbf 0}$ and $Y \succeq {\mathbf 0}$, we arrive at the following DNN relaxation introduced by Karisch and Rendl~\cite{Karisch98semidefinite}: \begin{equation} \label{eq:relax_dnn} (DNN_{EP}) \qquad \begin{aligned} \min_{Y}~ &~~ \frac{1}{2} \langle L,Y \rangle\\ \textrm{s.t.}~ & \textrm{diag}(Y) = {\mathbf 1}_n,\\ & Y {\mathbf 1}_n = \frac{n}{k} {\mathbf 1}_n, \\ & Y \succeq \mathbf{0}, \quad Y \geq \mathbf{0}. \end{aligned} \end{equation} We refer to $(DNN_{EP})$ as the basic matrix lifting relaxation. We show below that the nonnegativity constraints in $(DNN_{EP})$ are redundant for the equicut problem. \begin{lemma} \label{redundantnonegative} Let $k=2$ and $Y\in \sn_+$ be such that $\textrm{diag}(Y) = {\mathbf 1}_n$ and $Y{\mathbf 1}_n = \frac{n}{2} {\mathbf 1}_n$. Then $Y \geq {\mathbf 0}$. \end{lemma} \begin{proof} From $Y {\mathbf 1}_n = \frac{n}{2} {\mathbf 1}_n$ it follows that ${\mathbf 1}_n$ is an eigenvector of $Y$ corresponding to the eigenvalue $n/2$. 
Then, the eigenvalue decomposition of $Y$ is $$Y = \frac{1}{2} {\mathbf J}_n+ \sum_{i=2}^n \lambda_i v_iv_i^\top,$$ where $v_i$ is the eigenvector of $Y$ corresponding to the eigenvalue $\lambda_i$ for $i=2,\ldots,n$. Moreover, eigenvectors $v_i$ are orthogonal to ${\mathbf 1}_n$. Thus $2Y -{\mathbf J}_n = 2\sum_{i=2}^n \lambda_i v_iv_i^\top \succeq {\mathbf 0}$. Now, let $Z:= 2Y -{\mathbf J}_n $. From $\mbox{diag}(Y)= {\mathbf 1}_n$ it follows that $\mbox{diag}(Z) = {\mathbf 1}_n$. Since $Z\succeq {\mathbf 0}$ we have that $-1\leq Z_{ij}\leq 1$ for all $i,j\in [n]$, which implies that $Y_{ij}\geq 0$ for all $i,j \in [n]$. \end{proof} {\color{black}For a different proof of Lemma~\ref{redundantnonegative} see e.g., Theorem 4.3 in \cite{Karisch98semidefinite}. } The relaxation $(DNN_{EP})$ can be further strengthened by adding triangle and independent set inequalities, see Section~\ref{subsect:triangle} and \ref{subsect:clique}, respectively. This strengthened relaxation is proposed in~\cite{Karisch98semidefinite} and currently provides the strongest SDP bounds for the $k$-EP. As proposed in~\cite{Karisch98semidefinite}, {\color{black}one can eliminate $Y {\mathbf 1}_n = \frac{n}{k} {\mathbf 1}_n$ in \eqref{eq:relax_dnn}} and project the relaxation onto a smaller dimensional space by exploiting the following result. \begin{lemma}[\cite{Karisch98semidefinite}] \label{lemmaParameter} Let $V\in \mathbb{R}^{n\times (n-1)}$ be such that $V^\top {\mathbf 1}_n =\mathbf{0}$ and $\mbox{rank}(V)=n-1$. Then \begin{align*} \bigg \{ Y \in \sn: \diag(Y) = {\mathbf 1}_n, Y {\mathbf 1}_n=\frac{n}{k} {\mathbf 1}_n \bigg \} = \left \{ \frac{1}{k}{\mathbf J}_n +VRV^\top : R\in {\mathcal S}^{n-1},~ {\diag}(VRV^\top) = \frac{k-1}{k} {\mathbf 1}_n \right \}. \end{align*} \end{lemma} The matrix $V$ in Lemma~\ref{lemmaParameter} can be any matrix whose columns form a basis for ${\mathbf 1}_n^\perp$, e.g., \begin{align}\label{Vp} V = \begin{pmatrix} {\mathbf I}_{n-1} \\[1ex] -{\mathbf 1}^\top_{n-1}\end{pmatrix}. \end{align} We use the result of Lemma~\ref{lemmaParameter} and replace $Y$ by $\frac{1}{k}{\mathbf J}_n+VRV^\top$ in $(DNN_{EP})$, which leads to the following equivalent relaxation: \begin{equation}\label{sdp-p} \begin{aligned} \min_R~ & ~~\langle L_{EP},V R V^\top\rangle \\ \textrm{s.t.}~& \textrm{diag}(V R V^\top) = \frac{k-1}{k}{\mathbf 1}_n,\\ & V R V^\top \geq -\frac{1}{k} {\mathbf J}_n, \quad R \succeq \mathbf{0}, \end{aligned} \end{equation} where $R\in {\mathcal S}^{n-1}$. Here, we exploit $\langle L, {\mathbf J}_n \rangle=0$ to rewrite the objective, and define \begin{align}\label{def:Leq} L_{EP}:=\frac{1}{2}L. \end{align} It is not difficult to verify that the matrix $$\hat{R} = \frac{n(k-1)}{k(n-1)}\mathbf{I}_{n-1} - \frac{(k-1)}{k(n-1)}{\mathbf J}_{n-1} $$ is feasible for \eqref{sdp-p}, see also \cite{Karisch98semidefinite}. The matrix $\hat{R}$ has two distinct eigenvalues, namely $\frac{n(k-1)}{k(n-1)}$ with multiplicity $n-2$ and $\frac{(k-1)}{k(n-1)}$ with multiplicity one. This implies that $\hat{R} \succ 0$. Also, $$ \frac{1}{k} {\mathbf J}_n + V \hat{R} V^\top = \frac{n(k-1)}{k(n-1)} \mathbf{I}_n + \frac{(n-k)}{k(n-1)} {\mathbf J}_n> \mathbf{0}, $$ and thus $V\hat{R}V^\top > -\frac{1}{k}{\mathbf J}_n$. This shows that $\hat{R}$ is a Slater feasible point of \eqref{sdp-p}.
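\par\medskip\noindent The feasibility of $\hat{R}$ is also easy to confirm numerically. The listing below is a minimal sketch in Python with NumPy (the sizes $n=8$ and $k=4$ are a hypothetical example); it constructs $V$ as in \eqref{Vp}, forms $\hat{R}$, and checks that $\hat{R}\succ \mathbf{0}$, that $\diag(V\hat{R}V^\top)=\frac{k-1}{k}{\mathbf 1}_n$, and that $V\hat{R}V^\top > -\frac{1}{k}{\mathbf J}_n$ holds entrywise.
\begin{lstlisting}[language=Python]
import numpy as np

# Hypothetical sizes for the k-equipartition problem (n divisible by k).
n, k = 8, 4

# V from the choice above: its columns span the orthogonal complement of 1_n.
V = np.vstack([np.eye(n - 1), -np.ones((1, n - 1))])

# Candidate Slater point R_hat = a*I - b*J.
a = n * (k - 1) / (k * (n - 1))
b = (k - 1) / (k * (n - 1))
R_hat = a * np.eye(n - 1) - b * np.ones((n - 1, n - 1))

Y = np.ones((n, n)) / k + V @ R_hat @ V.T                 # Y = J/k + V R_hat V^T
assert np.min(np.linalg.eigvalsh(R_hat)) > 1e-12          # R_hat is positive definite
assert np.allclose(np.diag(V @ R_hat @ V.T), (k - 1) / k) # diagonal constraint
assert np.all(V @ R_hat @ V.T > -1.0 / k)                 # strict entrywise inequality
assert np.allclose(np.diag(Y), 1.0) and np.allclose(Y @ np.ones(n), n / k)
print("R_hat is a Slater feasible point")
\end{lstlisting}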
For future reference, we define the following sets: \begin{align} \mathcal{R}_{EP} & :=\left \{R \in {\mathcal S}^{n-1} : R\succeq \mathbf{0} \right \}, \label{setR}\\ \mathcal{X}_{EP} & := \bigg \{ X \in {\mathcal S}^n :~\textrm{diag}({X}) = \frac{k-1}{k} {\mathbf 1}_n, ~ -\frac{1}{k} {\mathbf J}_n \leq {X} \leq \frac{k-1}{k} {\mathbf J}_n ~ \bigg \}.\label{setX} \end{align} Now, we rewrite the DNN relaxation \eqref{sdp-p} as follows: \begin{align}\label{sdp-p2} \min \left \{ \langle L_{EP},{X}\rangle: ~{X} = V R V^\top,~R \in \mathcal{R}_{EP},~{X} \in \mathcal{X}_{EP} \right \}. \end{align} Note that $\mathcal{X}_{EP}$ also contains constraints that are redundant for \eqref{sdp-p}. \textcolor{black}{These constraints speed up the convergence of our algorithm, as explained in Section~\ref{sect:ADMMbasic}. In the same section, it becomes clear that the inclusion of redundant constraints should not complicate the structure of $\mathcal{X}_{EP}$ too much. Whether or not to include a redundant constraint, is determined by an empirical trade-off between these measures.} \subsection{DNN relaxations for the bisection problem} \label{sect:bisection} For the graph bisection problem there exist both vector and matrix lifting SDP relaxations. The matrix lifting relaxations derived in \cite{KarischRendlClausen,sotirov2014gpp} are equivalent and have matrix variables of order $n$. A vector lifting SDP relaxation for the GBP is derived by Wolkowicz and Zhao \cite{wokowicz-zhao1999} and has a matrix variable of order $2n+1$. The DNN relaxation from~\cite{wokowicz-zhao1999} dominates the basic matrix lifting DNN relaxations, i.e., DNN relaxations without additional cutting planes, see~\cite{sotirov2014gpp} for a proof. In~\cite{Sotirov18} a matrix lifting DNN relaxation with additional cutting planes is derived for the GBP that is equivalent to the DNN relaxation from~\cite{wokowicz-zhao1999}. Although the relaxation from \cite{Sotirov18} has a matrix variable of order $n$, we work with the vector lifting DNN relaxation because it has a more appropriate structure for our ADMM approach. In this section we present the vector lifting DNN relaxation from~\cite{wokowicz-zhao1999} and show how to obtain its facially reduced equivalent version by using properties of the bisection polytope. As a byproduct, we also study properties of the feasible set of the DNN relaxation, see Theorem~\ref{Thm:nullspaceZW}. Let ${\mathbf m}=(m_1,m_2)^\top$ such that $m_1+ m_2=n$ be given. To derive a vector lifting SDP relaxation for the GBP we linearize the objective from $(GPP)$ by lifting variables into ${\mathcal S}^{2n+1}$. In particular, let $P\in \{0,1\}^{n\times 2}$ be a partition matrix and $x=\vect(P)$. We use \eqref{kron:property1} and \eqref{kron:property2} to rewrite the objective as follows: $$ \trace (LPP^\top) = \vect (P)^\top (\mathbf{I}_2 \otimes L) \vect(P) = x^\top (\mathbf{I}_2 \otimes L)x = \langle \mathbf{I}_2 \otimes L, xx^\top \rangle. $$ Now, we replace $xx^\top$ by a matrix variable $\hat{X} \in {\mathcal S}^{2n}$. The constraint $\hat{X}=xx^\top$ can be weakened to $\hat{X}-xx^\top \succeq \mathbf{0}$, which is equivalent to $X := \begin{pmatrix} 1 & x^\top \\ x & \hat{X} \end{pmatrix} \succeq \mathbf{0} $ by the well-known Schur Complement lemma. 
In the sequel, we use the following block notation for matrices in ${\mathcal S}^{2n+1}$: $$ {X} = \begin{pmatrix} 1 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \\ \end{pmatrix}, $$ where $x^1$ (resp., $x^2$) corresponds to the first (resp., second) column in $P$, and $X^{ij}$ corresponds to $x^i (x^j)^\top$ for $i,j=1,2$. Now, from ${\mathbf 1}_n^\top x^i=m_i$ ($i=1,2$) it follows that $\trace (X^{ii})=m_i$, $\trace ({\mathbf J}_n X^{ii})=m_i^2$ and $\trace({\mathbf J}_n(X^{12}+X^{21})) = 2m_{1}m_{2}$. From $x^1 \circ x^2 = \mathbf{0}$ it follows that $\textrm{diag}(X^{12})=\mathbf{0}$. The above derivation results in the following vector lifting SDP relaxation for the GBP~\cite{wokowicz-zhao1999}: \begin{align}\label{eq:ZW} ({SDP}_{BP}) & \qquad \begin{aligned} \min_X ~~&~ \frac{1}{2} \langle L, X^{11}+X^{22} \rangle \\[1ex] \textrm{s.t.}~~&~ \trace (X^{ii}) = m_{i}, ~ \trace~ ({\mathbf J}_n X^{ii}) = m_{i}^{2}, ~~i=1,2, \\[1ex] &\textrm{diag}(X^{12})=\mathbf{0},~ \textrm{diag}(X^{21})=\mathbf{0}, ~ \trace \left({\mathbf J}_n(X^{12}+X^{21})\right) = 2m_{1}m_{2}, \\[1ex] & X= \begin{pmatrix} 1 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \\ \end{pmatrix} \succeq \mathbf{0}, ~~x^i=\textrm{diag}(X^{ii}), ~~ i=1,2, \end{aligned} \intertext{where $X\in {\mathcal S}^{2n+1}$. By imposing nonnegativity constraints on the matrix variable in $({SDP}_{BP})$, we obtain the following DNN relaxation:} ({DNN}_{BP}) & \qquad (SDP_{BP}) ~~\&~~ X \geq \mathbf{0}.\label{DNNBP} \end{align} The relaxation $(DNN_{BP})$ can be further strengthened by additional cutting planes. We propose adding the boolean quadric polytope inequalities, see Section~\ref{subsect:bqp}. The zero pattern on off-diagonal blocks in \eqref{eq:ZW} can be written using a linear operator $\mathcal{G}_{\mathcal J} (\cdot)$, known as the Gangster operator, see~\cite{wokowicz-zhao1999}. The operator ${\mathcal G}_{\mathcal J}\colon {\mathcal S}^{2n+1} \to {\mathcal S}^{2n+1}$ is defined as $$ \mathcal{G}_{\mathcal J} (X)=\left \{ \begin{array}{ll} {X}_{ij} & \mbox{ if } (i,j)\in {\mathcal J}, \\ 0 & \mbox{ otherwise,} \end{array}\right . $$ where \begin{align} \label{Def:J} {\mathcal J} = \left \{ (i,j) ~:~ i=(p-1)n+q+1, ~j=(r-1)n+q+1, ~q\in [n],~p,r\in \{1,2\},~p\neq r \right \}. \end{align} The constraints $\diag(X^{12}) = \textrm{diag}(X^{21})=\mathbf{0}$ are given by $\mathcal{G}_{J}\left( X \right) =\mathbf{0}$. We now show how to project the SDP relaxation \eqref{eq:ZW} onto a smaller dimensional space in order to obtain an equivalent strictly feasible relaxation by facial reduction. Although such reduction is performed for the general graph partitioning problem in \cite{wokowicz-zhao1999}, our approach differs by relying on the polytope of all bisections. We first apply facial reduction to the relaxation $(SDP_{BP})$, after which we derive the facially reduced equivalent of $(DNN_{BP})$. We start by deriving two properties that hold for all feasible solutions of $(SDP_{BP})$. \begin{theorem} \label{Thm:nullspaceZW} Let $X = \begin{pmatrix} 1 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \\ \end{pmatrix}$ with $\hat{X} = \begin{pmatrix} X^{11} & X^{12} \\ X^{21} & X^{22} \\ \end{pmatrix}$ and $x = \begin{pmatrix} x^1 \\ x^2 \end{pmatrix}$ be feasible for $(SDP_{BP})$. 
Then, \begin{enumerate}[label=(\roman*)] \item $a_i^\top \left( \hat{X} - xx^\top \right)a_i = 0$ where $a_i = {\mathbf u}_i \otimes {\mathbf 1}_n$, ${\mathbf u}_i \in \mathbb{R}^2$, $i \in [2]$; \item $b_i^\top \left( \hat{X} - xx^\top \right)b_i = 0$ where $b_i = {\mathbf 1}_2 \otimes {\mathbf u}_i$, ${\mathbf u}_i \in \mathbb{R}^n$, $i \in [n] $. \end{enumerate} \end{theorem} \begin{proof} $(i)$ Without loss of generality we take $i = 1$. Then $a_1 = {\mathbf u}_1\otimes {\mathbf 1}_n$, which yields \begin{align*} a_1^\top \left( \hat{X} - xx^\top \right) a_1 & = {\mathbf 1}_n^\top X^{11} {\mathbf 1}_n - {\mathbf 1}_n^\top x^1(x^1)^\top {\mathbf 1}_n = \trace({\mathbf J}_n X^{11}) - \trace(X^{11})^2 = m_1^2 - m_1^2 = 0, \end{align*} using the constraints of \eqref{eq:ZW}. The proof for $i = 2$ is similar. $(ii)$ First, we show that any feasible solution to \eqref{eq:ZW} satisfies $\diag(X^{11}) + \diag(X^{22}) = {\mathbf 1}_n$. For all $i \in [n]$ we define $v^i \in \mathbb{R}^{2n}$ as \begin{align*} \left( v^i \right)_j := \begin{cases} -1 & \text{if $j = i$ or $j = n+i$,} \\ 0 & \text{otherwise.} \end{cases} \end{align*} From $\hat{X} - xx^\top \succeq {\mathbf 0}$, we have \begin{align*} \begin{pmatrix} 1 \\ v^i \end{pmatrix}^\top \begin{pmatrix} 1 & x^\top \\ x &\hat{X} \end{pmatrix}\begin{pmatrix} 1 \\ v^i \end{pmatrix} \geq 0 \quad \Longleftrightarrow \quad 1 - X_{ii}^{11} - X_{ii}^{22} \geq 0 \quad \Longleftrightarrow \quad X_{ii}^{11} +X_{ii}^{22} \leq 1, \end{align*} where we used the fact that $\diag(\hat{X}) = x$. Since \begin{align*} n = m_1 + m_2 = \trace(X^{11}) + \trace(X^{22}) = \sum_{i = 1}^n \left( X_{ii}^{11} +X_{ii}^{22} \right), \end{align*} and the latter summation consists of $n$ elements bounded above by $1$, we must have $X_{ii}^{11} +X_{ii}^{22} = 1$ for all $i \in [n]$. Now, for $b_i = {\mathbf 1}_2 \otimes {\mathbf u}_i$, $i\in [n]$ we have \begin{align*} b_i^\top \left( \hat{X} - xx^\top \right) b_i & = X_{ii}^{11} + X_{ii}^{12} + X_{ii}^{21} + X_{ii}^{22} - \left( X_{ii}^{11} + X_{ii}^{22} \right)^2. \end{align*} Since $\diag(X^{12}) = \diag(X^{21}) = \mathbf{0}$ and $\diag(X^{11}) + \diag(X^{22}) = {\mathbf 1}_n$, it follows that $b_i^\top \big( \hat{X} - xx^\top \big) b_i = 1 - 1^2 = 0$ for all $i\in[n]$. \end{proof} We can exploit the properties stated in Theorem~\ref{Thm:nullspaceZW} to identify vectors in the null space of all feasible solutions of $(SDP_{BP})$. In order to do so, we use the following result. \begin{lemma}[\cite{RendlSotirov}] \label{Lem:Eigenvector} Let $X \in \mathcal{S}^l$, $x \in \mathbb{R}^l$ and $a \in \mathbb{R}^l$ be such that $X - xx^\top \succeq \mathbf{0}$, $a^\top x = t$ for some $t \in \mathbb{R}$, and $a^\top \left( X - xx^\top \right) a = 0$. Then $[ -t, a^\top ]^\top$ is an eigenvector of $\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix}$ with respect to eigenvalue 0. \end{lemma} It follows from the constraints of \eqref{eq:ZW} that $a_i^\top x = m_i$ for $i \in [2]$ and $b_i^\top x = 1$ for $i \in [n]$, where $a_i$ and $b_i$ are defined as in Theorem \ref{Thm:nullspaceZW}. As a result, Theorem~\ref{Thm:nullspaceZW} and Lemma~\ref{Lem:Eigenvector} imply that \begin{align*} \begin{pmatrix} -m_i \\ {\mathbf u}_i \otimes {\mathbf 1}_n \end{pmatrix}, \, i \in [2], \quad \text{and} \quad \begin{pmatrix} -1 \\ {\mathbf 1}_2 \otimes {\mathbf u}_i \end{pmatrix}, \, i \in [n], \end{align*} are eigenvectors of $\begin{pmatrix} 1 & x^\top \\ x & \hat{X} \end{pmatrix}$ with respect to eigenvalue 0. 
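\par\medskip\noindent These null space properties can be verified on any concrete bisection. The following minimal sketch in Python with NumPy (the instance $n=6$, ${\mathbf m}=(4,2)^\top$ is hypothetical) lifts a feasible bisection to the rank-one matrix $\begin{pmatrix} 1 & x^\top \\ x & xx^\top \end{pmatrix}$, which is feasible for $(SDP_{BP})$, and confirms that the $n+2$ vectors above belong to its null space.
\begin{lstlisting}[language=Python]
import numpy as np

# Hypothetical bisection instance: n = 6 vertices, part sizes m = (4, 2).
n = 6
m = np.array([4.0, 2.0])

# A feasible partition matrix P and its vectorization x = vec(P).
P = np.zeros((n, 2))
P[:4, 0] = 1.0
P[4:, 1] = 1.0
x = P.flatten(order="F")

# Rank-one lifted matrix X = [1 x^T; x xx^T], a feasible point of (SDP_BP).
v = np.concatenate(([1.0], x))
X = np.outer(v, v)

# The n + 2 vectors: (-m_i, u_i kron 1_n), i = 1,2, and (-1, 1_2 kron u_i), i = 1,...,n.
vecs = [np.concatenate(([-m[i]], np.kron(np.eye(2)[i], np.ones(n)))) for i in range(2)]
vecs += [np.concatenate(([-1.0], np.kron(np.ones(2), np.eye(n)[i]))) for i in range(n)]

for w in vecs:
    assert np.allclose(X @ w, 0.0)   # eigenvector with eigenvalue 0
print("all", len(vecs), "vectors lie in the null space of X")
\end{lstlisting}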
Now, let us define the matrix $T\in \mathbb{R}^{(n+2) \times (2n + 1)}$ as follows: \begin{align*} T := \begin{pmatrix} - {\mathbf m} & \mathbf{I}_2 \otimes {\mathbf 1}_n^\top \\ - {\mathbf 1}_n & {\mathbf 1}_2^\top \otimes \mathbf{I}_n \end{pmatrix}. \end{align*} Moreover, let $\mathcal{V} = \Nul(T)$. Any $a \in \mathcal{V}$ defines an element $aa^\top$ exposing the feasible set of $(SDP_{BP})$. It follows from the facial geometry of the cone of positive semidefinite matrices that the feasible set of $(SDP_{BP})$ is contained in \begin{align*} S_\mathcal{V} := \left\{ X \in \mathcal{S}^{2n+1}_+ \, : \, \, \Col(X) \subseteq \mathcal{V} \right\}, \end{align*} which is a face of $\mathcal{S}^{2n+1}_+$. It remains to prove that this is actually the minimal face of $\mathcal{S}_+^{2n+1}$ containing the feasible set of $(SDP_{BP})$. For that purpose, we consider the underlying bisection polytope $\Conv(F_n^2({\mathbf m}))$, see \eqref{Def:GPPpolytope}. It follows from Theorem~\ref{Thm:DimConvF} that $\dim(\Conv(F_n^2({\mathbf m}))) = n -1$. Besides, observe that $T$ is constructed as the constraint matrix defining $\Conv(F_n^2({\mathbf m}))$ augmented with an additional column. Since this additional column does not increase its rank, we have $\rank(T) = n+1$, which implies that $\dim(\mathcal{V}) = n$. Let $V \in \mathbb{R}^{(2n + 1) \times n}$ be a matrix whose columns form a basis for $\mathcal{V}$. Then the face $S_\mathcal{V}$ can be equivalently written as: \begin{align} S_\mathcal{V} = V \mathcal{S}_+^{n} V^\top. \end{align} To show that $S_\mathcal{V}$ is the minimal face containing the feasible set of $(SDP_{BP})$ we apply a result by Tun\c{c}el \cite{Tuncel}. \begin{theorem}[\cite{Tuncel}] \label{Thm:Tuncel} Given $F \subseteq \mathbb{R}^{l}$, let $\mathcal{F} := \left\{ \begin{pmatrix} 1 & x^\top \\ x & \hat{X} \end{pmatrix} \in \mathcal{S}_+^{l+1} \, : \right. \left. \, \, \mathcal{A} \left( \begin{pmatrix} 1 & x^\top \\ x & \hat{X} \end{pmatrix} \right) = \mathbf{0} \right\}$, with $\mathcal{A}\colon \mathcal{S}^{l+1} \to \mathbb{R}^p$ a linear transformation, be a relaxation of the lifted polyhedron \begin{align*} \Conv \left\{ \begin{pmatrix} 1 \\ x \end{pmatrix}\begin{pmatrix} 1 \\ x \end{pmatrix}^\top \, : \, \, x \in F \right\}. \end{align*} Suppose that $\mathcal{F} \subseteq V \mathcal{S}^d_+ V^\top$ for some full rank matrix $V \in \mathbb{R}^{(l+1)\times d}$. If $\dim(\Conv(F)) = d-1$, then there exists some $R \succ \mathbf{0}$ such that $VRV^\top \in \mathcal{F}$. \end{theorem} Based on Theorem~\ref{Thm:Tuncel}, we can now show the minimality of the face $S_\mathcal{V}$ for both $(SDP_{BP})$ and $(DNN_{BP})$. \begin{theorem} \label{Thm:minimalface} The set $S_\mathcal{V}$ is the minimal face of $\mathcal{S}^{2n+1}_+$ containing the feasible set of $(SDP_{BP})$. If $m_1,m_2 \geq 2$, then $S_\mathcal{V}$ is also the minimal face of $\mathcal{S}^{2n+1}_+$ containing the feasible set of $(DNN_{BP})$. \end{theorem} \begin{proof} The feasible region of $(SDP_{BP})$ can be written in the form of $\mathcal{F}$ in the statement of Theorem~\ref{Thm:Tuncel}. To show minimality for $(SDP_{BP})$, it suffices to show that there exists a matrix $R \in \mathcal{S}^n_+$, $R \succ \mathbf{0}$ such that $VRV^\top$ is feasible for $(SDP_{BP})$. Since $\dim(\Conv(F_n^2({\mathbf m}))) = n-1$, it immediately follows from Theorem~\ref{Thm:Tuncel} that such matrix, say $R_1$, exists. 
To prove the second statement, it suffices to show that there exists an $R \in \mathcal{S}^n_+$ such that $R \succ \mathbf{0}$ and $(VRV^\top)_{ij} > 0$ for all $(i,j) \in {\mathcal J}_C$, where ${\mathcal J}_C = ([2n+1] \times [2n+1])\setminus {\mathcal J}$, see \eqref{Def:J}. Since $m_1, m_2 \geq 2$, it follows that for any $(i,j) \in {\mathcal J}_C$ there exists a bisection $x^{ij}$ such that \begin{align*} \left( \begin{pmatrix} 1 \\ x^{ij} \end{pmatrix}\begin{pmatrix} 1 \\ x^{ij} \end{pmatrix}^\top \right)_{ij} > 0. \end{align*} Let $R^{ij} \in \mathcal{S}^n_+$ denote the matrix such that $VR^{ij}V^\top = \begin{pmatrix} 1 \\ x^{ij} \end{pmatrix}\begin{pmatrix} 1 \\ x^{ij} \end{pmatrix}^\top$, which consists by construction of $S_\mathcal{V}$. Now, let $R_2$ be any positive convex combination of the elements in $\{R^{ij} \, : \, \, (i,j) \in {\mathcal J}_C\}$. By construction, $R_2 \succeq \mathbf{0}$, while $(VR_2V^\top)_{ij} > 0$ for all $(i,j) \in {\mathcal J}_C$. Finally, any positive convex combination of $R_1$ and $R_2$ provides a matrix $R$ with the desired properties. \end{proof} The result of Theorem~\ref{Thm:minimalface} can be exploited to derive strictly feasible equivalent versions of $(SDP_{BP})$ and $(DNN_{BP})$. We focus here only on the DNN relaxation $(DNN_{BP})$. Theorem~\ref{Thm:minimalface} allows us to replace $X$ by $VRV^\top$ in $(DNN_{BP})$. One can take the following matrix for $V$: \begin{align} \label{V2n} V = \begin{pmatrix} 1 & \mathbf{0} \\ \frac{1}{n}{\mathbf m} \otimes \mathbf{1}_n & V_2\otimes V_n \end{pmatrix}, \quad \text{where } V_p = \begin{pmatrix} {\mathbf I}_{p-1} \\ -{\mathbf 1}_{p-1}^\top \end{pmatrix} \text{ for } p = 2, n. \end{align} Because of the structure of $V$, most of the constraints in $(DNN_{BP})$ become redundant. One can easily verify that the resulting relaxation in lower dimensional space is as follows, see e.g.,~\cite{wokowicz-zhao1999}: \begin{equation}\label{ZWslater} \quad \begin{aligned} \min_R ~& ~~\trace ( L_{BP} {V}R{V}^\top) \\ \textrm{s.t.} ~&~~ {\mathcal G}_{\mathcal J}({V}R{V}^\top) = \mathbf{0}, \\ &~~ ({V}R{V}^\top)_{1,1}=1,\\ &~~ {V}R{V}^\top \geq \mathbf{0},~ R\in {\mathcal S}_+^{n}, \end{aligned} \end{equation} where \begin{align}\label{def:Lbp} L_{BP} := \frac{1}{2} \begin{pmatrix} 0 & \mathbf{0}^\top\\ \mathbf{0} & \mathbf{I}_2 \otimes L \end{pmatrix}, \end{align} and $L$ is the weighted Laplacian matrix of $G$. Let us now define the following sets: \begin{align} \mathcal{R}_{BP} & :=\left \{R \in {\mathcal S}^{n} : R\succeq \mathbf{0} \right \}, \label{setRGB} \\ \mathcal{X}_{BP} &:= \left \{ X \in {\mathcal S}^{2n+1} : ~~ \begin{aligned} & {\mathcal G}_{\mathcal J}({X}) = \mathbf{0}, ~{X}_{1,1}=1,~\trace(X^{ii})=m_i,~ i\in [2], \\ & \diag(X^{11})+\diag(X^{22})={\mathbf 1}_n,~ X {\mathbf u}_1 =\diag(X),\\ & \mathbf{0} \leq {X} \leq {\mathbf J} \end{aligned} \right\}. \label{setXGB} \end{align} Now, we are ready to rewrite the facially reduced DNN relaxation \eqref{ZWslater} as follows: \begin{equation} \label{sdp-bp2} \min \left \{\langle L_{BP},X\rangle: ~X = V R V^\top,~R \in \mathcal{R}_{BP},~X \in \mathcal{X}_{BP} \right \}. \end{equation} Note that $\mathcal{X}_{BP}$ also contains constraints that are redundant for \eqref{ZWslater}. \section{A cutting plane augmented Lagrangian algorithm} \label{sect:admm} {\color{black} SDP has proven effective for modeling optimization problems and providing strong bounds. 
It is well-known that SDP solvers based on interior point methods might have considerable memory demands already for medium-scale problems. Recently, promising alternatives for solving large-scale SDP relaxations have been investigated. We refer the interested reader to \cite{BurerVandenbussche,MaiEtAl2, PovhEtAl,SunTohetAl,WenEtAl,YurtseverEtAl} for algorithms based on alternating direction augmented Lagrangian methods for solving SDPs. For efficient approaches \textcolor{black}{to} solving DNN relaxations, see also e.g.,~\cite{HuSotirov,{HaoSotWolk},Li2021ASC,oliveira2018admm,wiegele-zhao2022,zhao2022}. To the best of our knowledge only \cite{deMeijerSotirov} incorporates an augmented Lagrangian method into a cutting-plane framework. The authors of \cite{deMeijerSotirov} consider only one type of cutting planes. Here, we incorporate various types of cutting planes into one framework and use a more efficient version of the ADMM than the one used in \cite{deMeijerSotirov}. } In Section~\ref{sect:ADMMbasic} we describe variants of the ADMM that are used within our cutting-plane algorithm. Section~\ref{sect:Dykstra} presents Dykstra's cyclic projection algorithm that is used for projections onto polyhedra induced by the violated cuts. Section \ref{sect:CPADMM} presents our cutting plane augmented Lagrangian algorithm. \subsection{The Alternating Direction Method of Multipliers } \label{sect:ADMMbasic} The ADMM is a first-order method from the 1970s that is developed for solving convex optimization problems. This method decomposes an optimization problem into several subproblems that are easier to solve than the original problem. There exist several variants of the ADMM for solving SDPs. We consider here a variant of the ADMM that resembles variants from \cite{HuSotirov,oliveira2018admm}, where we additionally consider an adaptive stepsize term proposed by Lorenz and Tran-Dinh~\cite{lorenz2018non} when solving the $k$-EP. In order to describe the ADMM scheme for solving SDP relaxations for both problems, the $k$-equipartition problem \eqref{sdp-p2} and the graph bisection problem \eqref{sdp-bp2}, we introduce the following unified notation: For the $k$-EP, define $\bar{L}:=L_{EP}$, $\mathcal{R}:=\mathcal{R}_{EP}$ and $\mathcal{X}:=\mathcal{X}_{EP}$ (see resp.,~\eqref{def:Leq}, \eqref{setR}, \eqref{setX}), and for the GBP define $\bar{L}:=L_{BP}$, $\mathcal{R}:=\mathcal{R}_{BP}$ and $\mathcal{X}:=\mathcal{X}_{BP}$ (see resp.,~\eqref{def:Lbp}, \eqref{setRGB}, \eqref{setXGB}). Let $Z$ denote the Lagrange multiplier for the constraint $X=VRV^\top$. Then, the augmented Lagrangian function of \eqref{sdp-p2} and \eqref{sdp-bp2} w.r.t.~the constraint $X=VRV^\top$ for a penalty parameter $\sigma$ is as follows: \begin{equation} \label{eq:Lagrangianfunction} \mathcal{L}_{\sigma}({X},R,Z )= \langle \bar{L},{X}\rangle + \langle Z,{X}-{V}R{V}^\top\rangle + \frac{\sigma}{2} \|{X}-VRV^\top\|^2_F. \end{equation} In each iteration, the ADMM minimizes $\mathcal{L}_{\sigma}({X},R,Z )$ subject to $X\in \mathcal{X}$ and $R\in \mathcal{R}$ and updates $Z$ via a stepsize update. The ADMM update scheme requires a matrix $V$ that has orthonormal columns that can be obtained by applying a QR-decomposition to \eqref{Vp} for the $k$-EP and to \eqref{V2n} for the GBP. Let $(R^p,X^p,Z^p)$ denote the $p$-th iterate of the ADMM. 
The next iterate $(R^{p+1},X^{p+1},Z^{p+1})$ is obtained as follows: \begin{subequations}\label{ADMMall1} \begin{align} R^{p+1} =& \argmin_{R\in \mathcal{R}}\mathcal{L}_{\sigma^p}(R,{X}^p,Z^p),\label{R_sub}\\ X^{p+1} =& \argmin_{X\in \mathcal{X}}\mathcal{L}_{\sigma^p}(R^{p+1},{X},Z^p), \label{X_sub}\\ Z^{p+1} =& Z^p + \gamma \cdot \sigma^{p} \cdot (X^{p+1}-VR^{p+1}V^\top), \label{Z_sub} \end{align} \end{subequations} where $\gamma \in \left(0, \frac{1 + \sqrt{5}}{2} \right )$ is a parameter for updating the dual multiplier $Z^p$, see e.g., \cite{WenEtAl}. There exist different ways for dealing with the stepsize term $\gamma \cdot \sigma^p$. One possibility is to keep $\sigma^p$ and $\gamma$ fixed during the algorithm. In this approach, $\sigma^p$ depends on the problem data and $\gamma$ has a value larger than one. This is known in the literature as the ADMM with larger stepsize, as originally proposed by~\cite{FortinGlowinski}. An alternative is the ADMM with adaptive stepsize term as introduced in~\cite{lorenz2018non}. In that case $\gamma = 1$ and the parameter $\sigma^{p}$ is updated as follows: \begin{equation}\label{updateParam} \sigma^{p+1} := (1-\omega^{p+1})\sigma^p + \omega^{p+1} \mathcal{P}_{[\sigma_{\min},\sigma_{\max}] }\frac{\|Z^{p+1}\|_F}{\|X^{p+1}\|_F}, \end{equation} where $\omega^{p+1}:= 2^{-p/100}$ is the weight, $\sigma_{\min}$ and $\sigma_{\max}$ are the box bounds for $\sigma^p$, and $\mathcal{P}_{[\sigma_{\min},\sigma_{\max}]}$ is the projection onto $[\sigma_{\min},\sigma_{\max}]$. Recall that we added redundant constraints for the SDP relaxations \eqref{sdp-p2} and \eqref{sdp-bp2} to the set $\mathcal{X}$. Those constraints are, though, not redundant in the subproblem \eqref{X_sub}. They are included to \textcolor{black}{speed up} the convergence of the ADMM algorithm in practice, see e.g.,~\cite{HuSotirov,oliveira2018admm,deMeijerSotirov}. One can solve the $R$-subproblem~\eqref{R_sub} as follows: \begin{align*} R^{p+1} = &\argmin_{R\in \mathcal{R}}\mathcal{L}_{\sigma^p}(R,{X}^p,Z^p) = \argmin_{R\in \mathcal{R}} ~\langle Z^p,-VRV^\top\rangle + \frac{\sigma^p}{2} \left \|{X}^p-VRV^\top \right \|^2_F,\\ =& \argmin_{R\in \mathcal{R}} ~\left \| V^\top \left( {X}^p+ \frac{1}{\sigma^p} Z^p \right )V- R \right \|_F^2 = \mathcal{P}_{\succeq \mathbf{0}} \left ( V^\top \left( {X}^p + \frac{1}{\sigma^p} Z^p \right )V \right ), \end{align*} where $\mathcal{P}_{\succeq \mathbf{0}} (\cdot)$ denotes the orthogonal projection onto the cone of positive semidefinite matrices. The $X$-subproblem \eqref{X_sub} can be solved as follows: \begin{align*} X^{p+1} = &\argmin_{{X}\in \mathcal{X}}\mathcal{L}_{\sigma^p}(R^{p+1},{X},Z^p)= \argmin_{{X}\in \mathcal{X}} ~\langle \bar{L}+ Z^p,{X}\rangle + \frac{\sigma^p}{2} \left \|{X}-VR^{p+1}V^\top \right \|^2_F,\\ =& \argmin_{{X}\in \mathcal{X}} \left \|{X}- \left (VR^{p+1}V^\top- \frac{1}{\sigma^p}\left (\bar{L}+Z^p \right ) \right )\right \|^2_F = \mathcal{P}_{\mathcal{X}}\left (VR^{p+1}V^\top- \frac{1}{\sigma^p} \left (\bar{L}+Z^p \right) \right ), \end{align*} where $ \mathcal{P}_{\mathcal{X}}(\cdot)$ denotes the orthogonal projection onto the polyhedral set $\mathcal{X}$. In Appendix~\ref{App:projectorX} we show how this projection can be performed explicitly. \medskip {\color{black}The performance of the ADMM greatly depends on the stepsize term.} Our preliminary tests show that for the $k$-EP the updating scheme \eqref{ADMMall1}--\eqref{updateParam} with adaptive stepsize term outperforms the ADMM with larger stepsize. 
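\par\medskip\noindent For concreteness, one iteration of the update scheme \eqref{ADMMall1}, using the closed-form solutions of \eqref{R_sub} and \eqref{X_sub} derived above together with the adaptive rule \eqref{updateParam}, could be implemented along the following lines. This is a minimal sketch in Python with NumPy, not the implementation used for our computational results: it assumes that $V$ has orthonormal columns, that the projection $\mathcal{P}_{\mathcal{X}}$ onto the polyhedral set $\mathcal{X}$ is supplied by the caller, and all function and parameter names are purely illustrative. Setting $\omega=0$ keeps the penalty parameter $\sigma$ fixed.
\begin{lstlisting}[language=Python]
import numpy as np

def proj_psd(M):
    """Orthogonal projection onto the cone of positive semidefinite matrices."""
    M = (M + M.T) / 2.0
    lam, U = np.linalg.eigh(M)
    return (U * np.maximum(lam, 0.0)) @ U.T

def admm_step(R, X, Z, sigma, V, L_bar, proj_X,
              gamma=1.0, omega=1.0, sigma_box=(1e-3, 1e3)):
    """One (R, X, Z, sigma)-update; V is assumed to have orthonormal columns
    and proj_X is the orthogonal projection onto the polyhedral set X."""
    # R-subproblem: project V^T (X + Z/sigma) V onto the PSD cone.
    R = proj_psd(V.T @ (X + Z / sigma) @ V)
    VRV = V @ R @ V.T
    # X-subproblem: project V R V^T - (L_bar + Z)/sigma onto the set X.
    X = proj_X(VRV - (L_bar + Z) / sigma)
    # Dual update with stepsize gamma * sigma.
    Z = Z + gamma * sigma * (X - VRV)
    # Adaptive penalty update; at iteration p one would pass omega = 2**(-p/100),
    # while omega = 0 leaves sigma unchanged (fixed-penalty variant).
    ratio = np.clip(np.linalg.norm(Z) / max(np.linalg.norm(X), 1e-12),
                    sigma_box[0], sigma_box[1])
    sigma = (1.0 - omega) * sigma + omega * ratio
    return R, X, Z, sigma
\end{lstlisting}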
Based on these observations, we initialize the ADMM algorithm \eqref{ADMMall1} for the $k$-EP by
\begin{align}\label{ADMM:initialEP} R^0 = \mathbf{0}, \qquad {X}^0=\frac{k-1}{k}\mathbf{I}_n, \qquad Z^0=\mathbf{0},\qquad \sigma^0= \left \lceil \frac{n}{k}\right \rceil, \qquad \gamma = 1, \end{align}
and for the GBP we set
\begin{align}\label{ADMM:initialGBP} R^0 = \mathbf{0}, \qquad X^0 = {\mathbf u}_1 {\mathbf u}_1^\top, \qquad Z^0=\mathbf{0}, \qquad {\sigma^0 = \left \lceil \left (\frac{2n}{m_1} \right )^2 \right \rceil, \qquad \gamma = 1.608}. \end{align}

\subsection{Clustered Dykstra's projection algorithm} \label{sect:Dykstra}

Both DNN problems, \eqref{sdp-p2} and \eqref{sdp-bp2}, can be strengthened by adding valid cutting planes, see Section~\ref{sect:cuts}. Since these cutting planes are polyhedral, it is natural to add them to the set $\mathcal{X}$. This addition will, however, spoil the simple structure of $\mathcal{X}$. As a result, finding the explicit projection onto the new polyhedral set becomes a difficult task, even after the addition of a single cut. In \cite{deMeijerSotirov} this issue is resolved by splitting the polyhedral set into subsets and using iterative projections based on Dykstra's algorithm~\cite{BoyleDykstra, Dykstra}. This algorithm finds the projection onto the intersection of a finite number of polyhedral sets, assuming that the projection onto each of the separate sets is known. Although many algorithms for finding such a projection exist in the literature, the recent study~\cite{BauschkeKoch} shows superior behaviour of Dykstra's cyclic projection algorithm. In this section we briefly present Dykstra's algorithm and show how to implement it efficiently by clustering non-overlapping cuts. As in the previous section, we present a generic version of the algorithm that can be applied to both the $k$-EP and the GBP.

\medskip

Let us assume that $\mathcal{T}$ is an index set of cutting planes on the primal variable $X$ in the ADMM scheme, i.e., every $t \in \mathcal{T}$ corresponds to a single cut. Also, for each $t \in \mathcal{T}$ let $\mathcal{H}_t$ be a polyhedron that is induced by the cut $t$. \textcolor{black}{One can think of $\mathcal{H}_t$ as the halfspace induced by the cut $t$, to which additional constraints are added as long as the projection onto $\mathcal{H}_t$ remains efficient.} In Section~\ref{sect:cuts} we show what the set $\mathcal{T}$ and the polyhedra $\mathcal{H}_t$ look like for cuts related to both the $k$-EP and the GBP, and present the projectors onto the sets $\mathcal{H}_t$. When adding the cuts in $\mathcal{T}$ to the relaxation, the polyhedral set $\mathcal{X}$ has to be replaced by
\begin{align}\label{XTset} \mathcal{X}_\mathcal{T} := \mathcal{X} \cap \left( \bigcap_{t \in \mathcal{T}} \mathcal{H}_t \right). \end{align}
The $X$-subproblem of the ADMM scheme \eqref{X_sub} then asks for the projection onto $\mathcal{X}_\mathcal{T}$.
That is, for a given matrix $M$, one wants to solve the following best approximation problem: \begin{align} \label{ProjectionProblem} \min_{\hat{M}} ~& \Vert \hat{M} - M \Vert_F^2 \quad \text{ s.t. } \quad \hat{M} \in \mathcal{X}_\mathcal{T}. \end{align} Since the structure of $\mathcal{X}_\mathcal{T}$ is too complex to perform the projection in one step, the idea behind Dykstra's algorithm is to use iterative projections. Let $\mathcal{P}_{\mathcal{H}_t}( \cdot )$ denote the projection onto $\mathcal{H}_t$ for each $t \in \mathcal{T}$. Also, we assume that $\mathcal{P}_\mathcal{X}( \cdot )$ is known. Dykstra's algorithm starts by initializing the so-called normal matrices $N^0_\mathcal{X} = \mathbf{0}$ and $N^0_{t} = \mathbf{0}$ for all $t\in \mathcal{T}$. These normal matrices have the same size as the primal variable $X$ in the ADMM scheme. Moreover, we initialize $X^0 = M$. The algorithm iterates for $q \geq 1$ as follows: \begin{align} \label{AlgCyclicDykstra} \tag{CycDyk} \begin{aligned} \begin{aligned} X^q & := \mathcal{P}_\mathcal{X} \left( X^{q-1} + N_\mathcal{X}^{q-1} \right) \\ N_{\mathcal{X}}^q & := X^{q-1} + N_\mathcal{X}^{q-1} - X^q \end{aligned} \,\, \quad \, & \\ \left. \begin{aligned} L_{t} & := X^q + N_{t}^{q-1} \\ X^q & := \mathcal{P}_{\mathcal{H}_t} \left( L_{t} \right) \\ N^q_{t} & := L_{t} - X^q \end{aligned} \quad \right\} & \quad \text{for all } t \in \mathcal{T} \end{aligned} \end{align} Since the polyhedra are considered in a cyclic order, the iterative scheme (\ref{AlgCyclicDykstra}) is also known in the literature as Dykstra's cyclic projection algorithm. The sequence $\{X^q\}_{q\geq 1}$ strongly converges to the solution of the best approximation problem (\ref{ProjectionProblem}), see e.g.,~\cite{BoyleDykstra,Bichot,GaffkeMathar}. We perform several actions to implement the algorithm \eqref{AlgCyclicDykstra} as \textcolor{black}{efficiently} as possible. First, we can reduce the number of iterations needed to converge by adding some of the constraints of $\mathcal{X}$ also to the sets $\mathcal{H}_t$. This brings the sets $\mathcal{H}_t$ closer to the intersection $\mathcal{X}_\mathcal{T}$, leading to faster convergence. A restriction on this addition is that we should still be able to find the explicit projection onto $\mathcal{H}_t$. In Section~\ref{subsect:bqp} we show how some of the constraints from the DNN relaxation of the bisection problem are added to the polyhedra $\mathcal{H}_t$, while keeping the structure of the polyhedra sufficiently simple. Second, as observed in \cite{deMeijerSotirov}, it is possible to partly parallelize the algorithm \eqref{AlgCyclicDykstra}. The cuts in $\mathcal{T}$ are often very sparse. This implies that the projection onto $\mathcal{H}_t$ only involves a small number of entries, while the other entries are kept fixed. This property can be exploited by clustering non-overlapping cuts. Suppose two cuts $t_1, t_2 \in \mathcal{T}$ are such that no entry in the matrix variable of the $X$-subproblem is perturbed by both $\mathcal{P}_{\mathcal{H}_{t_1}} ( \cdot)$ and $\mathcal{P}_{\mathcal{H}_{t_2}} ( \cdot )$. Then, we can project onto both cuts simultaneously. This idea can be generalized by creating clusters of non-overlapping cuts. Suppose we cluster the set $\mathcal{T}$ into $r$ clusters $C_i$, $i\in [r]$ such that $C_1\cup \ldots \cup C_r = \mathcal{T}$, $C_i\cap C_j = \emptyset$ for $i \neq j$, $i,j\in [r]$, and all cuts in $C_i$, $i \in [r]$ are non-overlapping. 
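Under such a clustering, one sweep of \eqref{AlgCyclicDykstra} can be sketched as follows. This is a minimal Python/NumPy illustration; \texttt{proj\_X} and the per-cut projectors are placeholders, and all names are illustrative. Since the cuts within one cluster are non-overlapping, their projections act on disjoint entries and may be carried out in parallel; in the sketch they are simply applied one after another.
\begin{verbatim}
import numpy as np

def dykstra_clustered(M, proj_X, clusters, max_iter=1000, tol=1e-6):
    """Dykstra's cyclic projection of M onto the intersection of the
    polyhedral set X (projector proj_X) and the polyhedra H_t, whose
    projectors are given in 'clusters' (a list of lists; the cuts within
    one cluster are pairwise non-overlapping)."""
    X = M.copy()
    N_X = np.zeros_like(M)                                 # normal matrix for X
    N = [[np.zeros_like(M) for _ in c] for c in clusters]  # normal matrices for the cuts
    for _ in range(max_iter):
        X_prev = X.copy()
        Y = X + N_X                    # step for the set X
        X = proj_X(Y)
        N_X = Y - X
        for i, cluster in enumerate(clusters):
            for t, P_t in enumerate(cluster):   # one Dykstra step per cut
                L_t = X + N[i][t]
                X = P_t(L_t)
                N[i][t] = L_t - X
        if np.linalg.norm(X - X_prev, 'fro') < tol:
            break
    return X
\end{verbatim}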
Then, an iterate of (\ref{AlgCyclicDykstra}) is performed in $r+1$ consecutive steps, instead of $|\mathcal{T}|+1$ steps. To cluster the cuts, we proceed as follows. We denote by $H$ an undirected graph in which each vertex represents a cutting plane indexed by an element from $\mathcal{T}$. Two vertices in $H$ are connected by an edge if and only if the corresponding cuts overlap. Clustering $\mathcal{T}$ into non-overlapping sets corresponds to clustering the vertices of $H$ into independent sets. Therefore, clustering $\mathcal{T}$ into the smallest number of non-overlapping sets reduces to finding a minimum coloring of $H$. Since the graph coloring problem is NP-hard, we use an efficient heuristic algorithm from~\cite{GalinierHao} to find a near-optimal coloring.

\subsection{The cutting-plane ADMM} \label{sect:CPADMM}

In this section we put all elements of our cutting-plane algorithm together. In particular, we combine the ADMM from Section~\ref{sect:ADMMbasic} and the clustered implementation of Dykstra's projection algorithm from Section~\ref{sect:Dykstra} into a cutting-plane ADMM-based algorithm. We refer to this algorithm as the cutting-plane ADMM. Algorithm~\ref{Alg:CP_ADMM} provides the pseudo-code of our algorithm. {\color{black} Since the CP--ADMM algorithm solves a two-block separable convex problem, it is guaranteed to converge, see e.g.,~\cite{Boyd2011DistributedOA,WenEtAl}.} The stopping criteria and input parameters are specified in Section~\ref{sec:stopp} and Section~\ref{sect:numerics}, respectively.

The CP--ADMM algorithm is designed to solve DNN relaxations for the GPP with additional cutting planes. In particular, Algorithm~\ref{Alg:CP_ADMM} can solve the DNN relaxation for the $k$-EP, see~\eqref{sdp-p2}, strengthened by the triangle inequalities \eqref{ineq:trianglehat} and the independent set inequalities~\eqref{ineq:cliquehat}. Similarly, Algorithm~\ref{Alg:CP_ADMM} also solves the DNN relaxation for the GBP, see~\eqref{sdp-bp2}, with additional BQP inequalities~\eqref{BQP3}.

Let us outline the main steps of the CP--ADMM. Initially, the set $\mathcal{T}$ is empty and the algorithm solves the basic DNN relaxation, i.e., the DNN relaxation without additional cutting planes, using the ADMM as described in Section~\ref{sect:ADMMbasic}. After one of the stopping criteria of the inner while-loop is satisfied, see Section~\ref{sec:stopp}, a valid lower bound is computed based on the current approximate solution, see Section~\ref{sect:lowerbound}. Then, the algorithm identifies violated cuts and adds the $numCuts$ most violated ones to $\mathcal T$. To increase performance, the cuts induced by the tuples in $\mathcal T$ are clustered by using a heuristic for the graph coloring problem from~\cite{GalinierHao}. The procedure is repeated, where the projection onto ${\mathcal X}_{\mathcal T}$, see \eqref{XTset}, is performed by the semi-parallelized version of Dykstra's projection algorithm, see Section~\ref{sect:Dykstra}. The outer while-loop stops whenever one of the global stopping criteria is met. The CP--ADMM can be extended to solve various DNN relaxations with a large number of additional cutting planes. We remark that computing such strong bounds was not possible until now, even for medium-sized problems and a limited number of cutting planes.
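The clustering step in the outline above can be illustrated by a simple greedy first-fit grouping of non-overlapping cuts; our implementation uses the graph coloring heuristic from~\cite{GalinierHao} instead. In the sketch below (Python, illustrative names), each cut is represented by the set of matrix entries it perturbs, and two cuts overlap if these sets intersect.
\begin{verbatim}
def cluster_cuts(cut_supports):
    """Greedily group cuts into clusters of pairwise non-overlapping cuts.
    cut_supports[t] is the set of matrix entries perturbed by cut t.
    Returns a list of clusters, each a list of cut indices."""
    clusters = []   # each entry: [joint support of the cluster, list of cut indices]
    for t, support in enumerate(cut_supports):
        for entry in clusters:
            if entry[0].isdisjoint(support):   # no overlap with this cluster
                entry[0] |= support
                entry[1].append(t)
                break
        else:
            clusters.append([set(support), [t]])
    return [members for _, members in clusters]
\end{verbatim}
Each returned cluster corresponds to one step of the clustered Dykstra sweep sketched earlier.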
\begin{algorithm}[h] \footnotesize \caption{\textsc{CP--ADMM} for the GPP}\label{Alg:CP_ADMM} \SetAlgoLined \KwData{{weighted Laplacian matrix $\bar{L}$}, $m_1\geq\ldots \geq m_k$;} \KwIn{$UB$, $\epsadmm$, $\epsproj$, $numCuts$, $maxOuterLoops$, $maxTime$;} \KwOut{valid lower bound {$lb(R^p,Z^{p})$} \;} Initialization: Set $(R^0,X^0,Z^0)$, $\sigma^0$ and $\gamma$ by using \eqref{ADMM:initialEP} or \eqref{ADMM:initialGBP}. Set $p=0$, ${\mathcal{T}}=\emptyset$\; {Obtain $V$ by applying a QR-decomposition to \eqref{Vp} for the $k$-EP and to \eqref{V2n} for the GBP}\; \While{stopping criteria not met}{ \While{stopping criteria not met}{ $R^{p+1} = \mathcal{P}_{\succeq \mathbf{0}} \left ( V^\top \left( {X}^p + \frac{1}{\sigma^p} Z^p \right )V \right )$\; ${X}^{p+1} = \mathcal{P}_{\mathcal{X}_{\mathcal{T}}}\left (VR^{p+1}V^\top- \frac{1}{\sigma^p} \left (\bar{L}+Z^p \right) \right )$ by solving \eqref{ProjectionProblem} using (\ref{AlgCyclicDykstra})\; $ Z^{p+1} = Z^p + \gamma \cdot \sigma^{p} \cdot (X^{p+1}-VR^{p+1}V^\top)$\; {If the adaptive stepsize term is used, update $\sigma^{p+1}$ by using \eqref{updateParam}\;} $p \gets p+1$\; } Compute a valid lower bound {$lb(R^p,Z^{p})$} by using \eqref{safeLB} \; Identify the violated inequalities and add the $numCuts$ most violated cuts to $\mathcal{T}$\; Cluster the cuts in $\mathcal{T}$\; } \end{algorithm}

\subsubsection{Valid lower bounds} \label{sect:lowerbound}

There exist several ways to obtain valid lower bounds when stopping iterative algorithms early, see e.g.,~\cite{oliveira2018admm,Li2021ASC}. We compute valid lower bounds by exploiting the approach from \cite{Li2021ASC}. We use our unified notation for the $k$-EP and the GBP to derive the Lagrangian dual problem for both problems. Let $\mathcal{L}(X,R,Z) := \mathcal{L}_0(X,R,Z)$, see~\eqref{eq:Lagrangianfunction}, denote the Lagrangian function of \eqref{sdp-p2} and \eqref{sdp-bp2} with respect to dualizing $X = VRV^\top$. Then, the Lagrangian dual of \eqref{sdp-p2} and \eqref{sdp-bp2} is
\begin{equation} \label{eq:lagrangiandual2} \begin{aligned} \max_{Z\in {\mathcal S}^{q}}\min_{{X}\in \mathcal{X}_{\mathcal T},R\in \mathcal{R}} \mathcal{L} ({X},R,Z) & = \max_{Z\in {\mathcal S}^q} \left \{ \min_{{X}\in \mathcal{X}_{\mathcal T}} \langle \bar{L}+Z,{X}\rangle - \trace(R)\lambda_{\max} \left (V^\top Z V \right) \right \},\\ \end{aligned} \end{equation}
where $\lambda_{\max}(V^\top Z V)$ is the largest eigenvalue of $V^\top Z V$, and $q$ is the appropriate order of the positive semidefinite cone. In~\eqref{eq:lagrangiandual2} we exploit the well-known Rayleigh principle. It follows from~\eqref{eq:lagrangiandual2} that for any $Z\in {\mathcal S}^q$ one can obtain a valid lower bound by computing:
\begin{align} \label{safeLB} lb(R,Z)=\min_{{X}\in \mathcal{X}_{\mathcal T}} \langle \bar{L}+Z,{X}\rangle - \trace(R)\lambda_{\max}(V^\top Z V). \end{align}
Since the minimization problem in \eqref{safeLB} is a linear programming problem, the computation of valid lower bounds is efficient.

\subsubsection{Stopping criteria for the CP--ADMM algorithm} \label{sec:stopp}

We use different stopping criteria for the inner and outer while-loops in Algorithm \ref{Alg:CP_ADMM}. The following measure is used as one of the stopping criteria for the inner while-loop:
\begin{align*} \max \left \{ \frac{\|{X}^p - VR^p V ^\top \|_F}{1 + \|{X}^p\|_F}, \sigma \frac{\|{X}^{p+1}-{X}^p\|_F}{1 + \|{Z}^p\|_F} \right \} < \epsadmm, \end{align*}
where $\epsadmm$ is the prescribed tolerance.
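Both the stopping measure above and the safe bound \eqref{safeLB} are cheap to evaluate from the current iterates. The following minimal Python/NumPy sketch illustrates this; \texttt{lp\_min} is a placeholder for an LP solver returning $\min_{X \in \mathcal{X}_{\mathcal{T}}} \langle C, X\rangle$ (we use Mosek for this step), and all names are illustrative.
\begin{verbatim}
import numpy as np

def inner_stop_measure(X, X_prev, R, Z, V, sigma):
    """Residual measure used as a stopping criterion for the inner while-loop."""
    primal = np.linalg.norm(X - V @ R @ V.T, 'fro') / (1 + np.linalg.norm(X, 'fro'))
    dual = sigma * np.linalg.norm(X - X_prev, 'fro') / (1 + np.linalg.norm(Z, 'fro'))
    return max(primal, dual)

def valid_lower_bound(Lbar, R, Z, V, lp_min):
    """Safe lower bound lb(R,Z); lp_min(C) is a placeholder returning
    min <C, X> over the polyhedral set X_T (a linear program)."""
    lam_max = np.linalg.eigvalsh(V.T @ Z @ V)[-1]   # largest eigenvalue
    return lp_min(Lbar + Z) - np.trace(R) * lam_max
\end{verbatim}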
We also stop the inner while-loop when $maxTime$ is reached. Dykstra's projection algorithm (\ref{AlgCyclicDykstra}) stops when $\|X^{q+1}-X^q \|_F< \epsproj $ for a given input parameter $\epsproj$. We consider the following types of stopping criteria for the outer while-loop:
\begin{itemize} \item The algorithm stops if the gap between a valid lower bound, rounded up to the closest integer, and a given upper bound $UB$ is closed. \item The algorithm stops if the improvement of the lower bound between two consecutive outer loops is less than the prescribed threshold of $0.001$. \item The algorithm stops if the number of new cuts to be added in the next outer loop is small, i.e., less than $0.25n$. \item The algorithm stops if the maximum number of outer loops $maxOuterLoops$ is reached. \item The algorithm stops immediately if the maximum computation time $maxTime$ is reached. \end{itemize}
We specify the values of the input parameters in Section~\ref{sect:numerics}.

\subsubsection{Efficient ingredients of the CP--ADMM algorithm}

Algorithm~\ref{Alg:CP_ADMM} is efficient due to the following ingredients:
\begin{enumerate} \item Warm starts. After identifying new cuts, we start the new ADMM iterations from the last obtained triple $(R^p,X^p,Z^p)$. Observe that no such warm start strategy is available for interior point methods. \item Scaling of data. It is known that the performance of a first-order method can be improved by an appropriate scaling of the data. Therefore, we scale the objective by a scalar $S\in \R$ that depends on the problem and its size. Namely, for the $k$-EP we set $S=1/{\| L\|_F}$ for $n \leq 400$, $S={k}/{\| L\|_F}$ for $ 400< n \leq 800$, and $S={n}/{(k \| L\|_F )}$ otherwise. For the GBP we use $S=1$. The values for $S$ are obtained by extensive numerical tests. \item Clustering. A crucial ingredient for improving the performance of Dykstra's projection algorithm is the clustering of cuts, see Section~\ref{sect:Dykstra}. \item Separation. We introduce a probabilistic separation method for independent set inequalities, see Algorithm~\ref{alg:separate_clique_prob} in Section~\ref{sect:cuts}. \end{enumerate}
\textcolor{black}{When we run the CP--ADMM without any cutting planes (i.e., $\mathcal{T} = \emptyset$), the bottleneck of the code is the projection onto the positive semidefinite cone. Once we start adding cuts, Dykstra's algorithm takes over the major part of the computation time. }

\section{Valid cutting planes, their projectors and separators}\label{sect:cuts}

In this section we consider various families of cutting planes that strengthen the DNN relaxations for the $k$-EP and the GBP. In view of adding them to the cutting-plane augmented Lagrangian algorithm of Section~\ref{sect:admm}, we present for each cut type a polyhedral set $\mathcal{H}_t$ induced by the cut (and, possibly, a subset of the constraints from the corresponding DNN relaxation). We show how to explicitly project a matrix onto these polyhedral sets. The efficient separation of these cut types is also considered. In total we consider three types of cutting planes: two for the $k$-EP and one for the GBP.

\subsection{Triangle inequalities for the $k$-EP} \label{subsect:triangle}

Let us consider the relaxation $(DNN_{EP})$ for the $k$-equipartition problem, see \eqref{eq:relax_dnn}.
\textcolor{black}{Marcotorchino~\cite{Marcotorchino} as well as Gr\"otschel and Wakabayashi~\cite{groetschel1989cuttingplane}} observe that \textcolor{black}{the linear relaxation of the $k$-equipartition problem} can be strengthened by adding the triangle inequalities: \begin{align} \label{ineq:triangle} Y_{ij} + Y_{il} \leq 1 + Y_{jl} \quad \text{for all triples } (i,j,l), i \neq j, j \neq l, i \neq l. \end{align} For a given triple $(i,j,l)$ of distinct vertices, the triangle constraint \eqref{ineq:triangle} ensures that if $i$ and $j$ are in the same set of the partition and so are $i$ and $l$, then also $j$ and $l$ have to belong to the same set of the partition. \textcolor{black}{Karisch and Rendl~\cite{Karisch98semidefinite} use these inequalities to strengthen $(DNN_{EP})$.} To obtain the equivalent facially reduced relaxation \eqref{sdp-p}, we apply the linear transformation $X = Y - \frac{1}{k}{\mathbf J}_n$, see Section~\ref{sect:equipartition}. As we apply our cutting-plane algorithm on this latter relaxation, we also perform this transformation on the triangle inequalities. The transformed cuts are as follows: \begin{align}\label{ineq:trianglehat} X_{ij} + X_{il} \leq \frac{k-1}{k} + X_{jl} \quad \text{for all triples } (i,j,l), i\neq j, j \neq l, i \neq l. \end{align} Observe that there exist $3\binom{n}{3}$ triangle inequalities. \medskip \\ To incorporate the cutting planes \eqref{ineq:trianglehat} into our cutting-plane augmented Lagrangian algorithm, we define for each cut a polyhedral set that is induced by the cut. For each triple $(i,j,l)$ we define the polyhedron $\mathcal{H}_{ijl}^\Delta \subseteq \mathcal{S}^n$ as follows: \begin{align} \label{polyhedra:triangle} \mathcal{H}_{ijl}^\Delta := \left\{ X \in \mathcal{S}^n \, : \, \,X_{ij} + X_{il} \leq \frac{k-1}{k} + X_{jl} \right\}. \end{align} Let $\mathcal{P}_{\mathcal{H}^\Delta_{ijl}}\colon \mathcal{S}^n \to \mathcal{S}^n$ denote the operator that projects a matrix in $\mathcal{S}^n$ onto $\mathcal{H}^\Delta_{ijl}$. As the idea behind Dykstra's cyclic projection algorithm suggests, this projector can be characterized by a closed form expression. \begin{lemma} \label{Lem:triangle} Let $M \in \mathcal{S}^n$ and let $\hat{M} := \mathcal{P}_{\mathcal{H}^\Delta_{ijl}}(M)$. If $M \in \mathcal{H}^\Delta_{ijl}$, then $\hat{M} = M$. If $M \notin \mathcal{H}^\Delta_{ijl}$, then $\hat{M}$ is such that \begin{align*} \hat{M}_{pq} = \begin{cases} \frac{2}{3} M_{ij} - \frac{1}{3} M_{il} + \frac{1}{3} M_{jl} + \frac{1}{3} - \frac{1}{3k} & \text{if $(p,q) \in \{(i,j), (j,i)\}$,} \\ -\frac{1}{3}M_{ij} + \frac{2}{3} M_{il} + \frac{1}{3} M_{jl} + \frac{1}{3} - \frac{1}{3k} & \text{if $(p,q) \in \{(i,l), (l,i)\}$,} \\ \frac{1}{3}M_{ij} + \frac{1}{3}M_{il} + \frac{2}{3} M_{jl} - \frac{1}{3} + \frac{1}{3k} & \text{if $(p,q) \in \{(j,l), (l,j)\}$,} \\ M_{pq} & \text{otherwise.} \end{cases} \end{align*} \end{lemma} \begin{proof} See Appendix~\ref{App:prooftriangle}. \end{proof} Identifying the most violated inequalities of the form \eqref{ineq:trianglehat} can be done by a complete enumeration. This separation can be done in $O(n^3)$. 
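The closed-form projector of Lemma~\ref{Lem:triangle} and the enumeration-based separation translate into a few lines of code. The following is a minimal Python/NumPy sketch using $0$-based indices; all names are illustrative.
\begin{verbatim}
import numpy as np

def project_triangle(M, i, j, l, k):
    """Projection of a symmetric matrix M onto the polyhedron induced by
    the (transformed) triangle inequality for the triple (i, j, l)."""
    viol = M[i, j] + M[i, l] - M[j, l] - (k - 1) / k
    if viol <= 0:
        return M                       # M already satisfies the cut
    Mh = M.copy()
    shift = viol / 3.0                 # equals lambda/4 in the KKT system
    Mh[i, j] = Mh[j, i] = M[i, j] - shift
    Mh[i, l] = Mh[l, i] = M[i, l] - shift
    Mh[j, l] = Mh[l, j] = M[j, l] + shift
    return Mh

def separate_triangles(X, k, num_cuts):
    """Enumerate the triangle inequalities and return the num_cuts most
    violated triples (i, j, l)."""
    n = X.shape[0]
    rhs = (k - 1) / k
    cuts = []
    for i in range(n):
        for j in range(n):
            for l in range(j + 1, n):
                if i == j or i == l:
                    continue
                v = X[i, j] + X[i, l] - X[j, l] - rhs
                if v > 0:
                    cuts.append((v, (i, j, l)))
    cuts.sort(reverse=True)
    return [t for _, t in cuts[:num_cuts]]
\end{verbatim}
In the separation routine, the apex $i$ ranges over all vertices while $j < l$, so each of the $3\binom{n}{3}$ inequalities is enumerated exactly once.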
\subsection{Independent set inequalities for the $k$-EP} \label{subsect:clique} \textcolor{black}{Chopra and Rao~\cite{chopra1993partition} introduced a further type of inequalities} that are valid for \textcolor{black}{the linear relaxation of the $k$-equipartition problem}, namely \begin{align} \label{ineq:clique} \sum_{i, j \in I, i < j} Y_{ij} \geq 1 \quad \text{for all $I \subseteq V$ with $|I| = k+1$}, \end{align} which are known as the independent set inequalities. \textcolor{black}{These inequalities have also been used for the SDP relaxation in the work of Karisch and Rendl~\cite{Karisch98semidefinite}.} The intuition behind these constraints is that for all subsets of $k + 1$ nodes, there must always be two nodes that are in the same set of the partition. Thus, the graph with adjacency matrix $Y$ has no independent set of size $k+1$. Using the linear transformation $X = Y - \frac{1}{k}{\mathbf J}_n$, we obtain the following equivalent inequalities that are valid for the facially reduced relaxation \eqref{sdp-p}: \begin{align} \label{ineq:cliquehat} \sum_{i, j \in I, i < j} X_{ij} \geq \frac{1 - k}{2} \quad \text{for all $I$ with $|I| = k+1$}. \end{align} Observe that there are $\binom{n}{k + 1}$ independent set inequalities. Let us define for each set $I \subseteq V$ with $|I| = k+1$ a polyhedral set $\mathcal{H}_{I}^{IS} \subseteq \mathcal{S}^n$ that is induced by the cut, i.e., \begin{align}\label{polyhedra:cut} \mathcal{H}_I^{IS} := \left\{ X \in \mathcal{S}^n \, : \, \, \sum_{i, j \in I, i < j} X_{ij} \geq \frac{1 - k}{2} \right\}. \end{align} We let $\mathcal{P}_{\mathcal{H}^{IS}_I}\colon \mathcal{S}^n \to \mathcal{S}^n$ denote the projector onto the polyhedron $\mathcal{H}^{IS}_I$. This projection can be performed explicitly, as shown by the following result. \begin{lemma} \label{Lem:clique} Let $M \in \mathcal{S}^n$ and let $\hat{M} := \mathcal{P}_{\mathcal{H}_I^{IS}}(M)$. If $M \in \mathcal{H}^{IS}_I$, then $\hat{M} = M$. If $M \notin \mathcal{H}^{IS}_I$, then $\hat{M}$ is such that \begin{align*} \hat{M}_{pq} = \begin{cases} M_{pq} - \frac{k-1}{k(k+1)} - \frac{2}{k(k+1)} \sum_{i< j, i,j \in I}M_{ij}, & \text{if $p, q \in I$, $p \neq q$,} \\ M_{pq}, & \text{otherwise.} \end{cases} \end{align*} \end{lemma} \begin{proof} See Appendix~\ref{App:proofclique}. \end{proof} In order to find the most violated inequalities of type~\eqref{ineq:cliquehat}, we need a separator for independent set inequalities. It is known that exact separation of these inequalities is NP-hard~\cite{eisenblaetter}. Complete enumeration leads to a running time of $O(n^{k+1})$, which is computationally tractable only for $k = 2$ and $k = 3$. For larger $k$, we apply a combination of two separation heuristics to identify violated inequalities. First, we apply the deterministic separation heuristic from \citet{anjos2013solving}. This method efficiently generates at most $n$ inequalities, which turn out to be effective as numerical experiments in~\cite{anjos2013solving} suggest. On top of the heuristic from~\cite{anjos2013solving}, we also introduce a probabilistic independent set inequality separation heuristic. Although this algorithm relies on the same idea as the deterministic heuristic from~\cite{anjos2013solving}, the greedy selection of a new vertex to add in the set $C$ is randomized with probabilities inversely proportional to their values in the current solution matrix $X$. A pseudo-code of this heuristic is given in Algorithm~\ref{alg:separate_clique_prob}. 
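For completeness, the closed-form projector of Lemma~\ref{Lem:clique} can also be implemented directly; a minimal Python/NumPy sketch ($0$-based indices, illustrative names) reads as follows.
\begin{verbatim}
import numpy as np

def project_independent_set(M, I, k):
    """Projection of a symmetric matrix M onto the polyhedron induced by the
    (transformed) independent set inequality for the index set I, |I| = k+1."""
    I = list(I)
    sub = M[np.ix_(I, I)]
    S = (sub.sum() - np.trace(sub)) / 2.0      # sum of M_ij over pairs i<j in I
    if S >= (1 - k) / 2.0:
        return M                               # inequality already satisfied
    Mh = M.copy()
    shift = (k - 1) / (k * (k + 1)) + 2.0 * S / (k * (k + 1))
    for p in I:
        for q in I:
            if p != q:
                Mh[p, q] = M[p, q] - shift
    return Mh
\end{verbatim}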
In Algorithm~\ref{alg:separate_clique_prob}, the parameter $N_R$ corresponds to the number of repetitions, while $\varepsilon > 0$ is a sensitivity parameter. Low values of $\varepsilon$ lead to very sensitive behaviour with respect to differences in the current solution $X$, while the selection approaches a uniform distribution as $\varepsilon$ is increased. The advantage of this randomization is that the combination of both heuristics can yield more than $n$ violated independent set inequalities.

\begin{algorithm}[H] \footnotesize \caption{Probabilistic separation method for independent set inequalities}\label{alg:separate_clique_prob} \SetAlgoLined \KwData{the number of partitions $k$, the size of the graph $n$;} \KwIn{ output matrix $X$ from the ADMM, number of repetitions $N_R$, sensitivity parameter $\varepsilon > 0$;} \KwOut{a collection of violated distinct independent set inequalities $\mathcal{C}$, its violation vector $v$\;} Initialization: $Y = X + \frac{1}{k}\mathbf{J}_n$, $\mathcal{C} = \emptyset$\; \For{ $r \in [N_R] $} { Choose vertex $u$ uniformly at random from $[n]$\; $C \leftarrow \{ u \} $\; $S \leftarrow [n] \setminus \{u \}$\; \For{$l \in [k]$}{ Define $p_i := \sum_{j \in C}{Y_{ij}} + \varepsilon$ for all $i \in S$\; Define $q_i := \frac{(1/p_i)}{\sum_{i \in S}(1/p_i)}$ for all $i \in S$\; Randomly select vertex $i \in S$ according to the probability mass function $\{q_i\}_{i \in S}$\; $C \leftarrow C \cup \{ i\}, \,\, S \leftarrow S \setminus \{i\}$\; } \If{$C \notin \mathcal{C}$}{ $\mathcal{C}\leftarrow \mathcal{C} \cup \{ C \}$ \;} } \end{algorithm}

\subsection{BQP inequalities for the GBP} \label{subsect:bqp}

We now consider the relaxation $(DNN_{BP})$, see \eqref{DNNBP}. The relaxation $(DNN_{BP})$ can be further strengthened by adding the following inequalities:
\begin{eqnarray} 0 \leq X_{ij} \leq X_{ii} \label{BQP1} \\[1ex] X_{ii} + X_{jj} \leq 1 + X_{ij} \label{BQP2} \\[1ex] X_{il} + X_{jl} \leq X_{ll} + X_{ij} \label{BQP3} \\[1ex] X_{ii} + X_{jj} + X_{ll} \leq X_{ij} + X_{il} + X_{jl} +1, \label{BQP4} \end{eqnarray}
where $X=(X_{ij})\in {\mathcal S}^{2n+1}$ and $ 1\leq i,j,l \leq 2n$, $i\neq j$, $i\neq l$, $j\neq l$. The inequalities \eqref{BQP1}--\eqref{BQP4} are facet-defining inequalities of the Boolean quadric polytope~\cite{Padberg}. Wolkowicz and Zhao~\cite{wokowicz-zhao1999} prove that the inequalities \eqref{BQP1} and \eqref{BQP2} are already implied by the constraints in \eqref{eq:ZW}. Moreover, preliminary numerical results show that the inequalities~\eqref{BQP3} yield larger improvements in the bounds when added to the DNN relaxation than the inequalities~\eqref{BQP4}. Therefore, we consider only the constraints~\eqref{BQP3} within our algorithm.

In contrast to the SDP relaxation of the $k$-EP, the polyhedral set $\mathcal{X}_{BP}$ is a subset of the lifted space $\mathcal{S}^{2n+1}$. As a result, the polyhedron induced by a BQP cut of the form \eqref{BQP3} is also a subset of $\mathcal{S}^{2n+1}$. For each triple $(i,j,l)$ with $ 2\leq i,j,l \leq 2n +1$, $i\neq j$, $i\neq l$, $j\neq l$, we define the following polyhedron:
\begin{align}\label{polyhedra:bqp} \mathcal{H}^{BQP}_{ijl} := \left\{ \begin{pmatrix} 1 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \end{pmatrix} \in \mathcal{S}^{2n+1} : \begin{aligned} X = \begin{pmatrix} X^{11} & X^{12} \\ X^{21} & X^{22} \end{pmatrix}, \, X_{il} + X_{jl} \leq X_{ll} + X_{ij} \\ \diag(X^{11}) = x^1, \diag(X^{22}) = x^2, x^1 + x^2 = {\mathbf 1}_n \end{aligned} \right\}.
\end{align} The polyhedron $\mathcal{H}_{ijl}^{BQP}$ is not only induced by the BQP cut, it also contains a subset of the constraints of the relaxation~\eqref{DNNBP}. This idea is inspired by the approach in~\cite{deMeijerSotirov}, where the inclusion of additional constraints in each polyhedron in Dykstra's algorithm speeds up the convergence. Since the structure of $\mathcal{H}_{ijl}^{BQP}$ must remain simple enough to project onto it via a closed form expression, it is impractical to add all constraints from~\eqref{DNNBP}. The set $\mathcal{H}_{ijl}^{BQP}$ is chosen such that we are still able to project onto it explicitly. Let $\mathcal{P}_{\mathcal{H}^{BQP}_{ijl}}: \mathcal{S}^{2n+1} \rightarrow \mathcal{S}^{2n+1}$ denote the projector onto $\mathcal{H}^{BQP}_{ijl}$. Given that the matrix that is projected already satisfies the constraints $\diag(X^{11}) = x^1, \diag(X^{22}) = x^2$ and $x^1 + x^2 = \mathbf{1}_n$, which is always the case in our implementation, this projector is specified by the result below. \begin{lemma} \label{Lem:bqp} Let $M = \begin{pmatrix} 1 & \diag(M^{11})^\top & \diag (M^{22})^\top \\ \diag(M^{11}) & M^{11} & M^{12} \\ \diag (M^{22}) & M^{21} & M^{22} \end{pmatrix} \in \mathcal{S}^{2n+1}$ be such that $\diag(M^{11})+\diag (M^{22})=\mathbf{1}_n$ and let $\hat{M} := \mathcal{P}_{\mathcal{H}^{BQP}_{ijl}}(M)$. If $M_{il} + M_{jl} \leq M_{ij} + \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2}$, then \begin{align*} \hat{M}_{pq} = \begin{cases} \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2} & \text{if $(p,q) \in \{(l,l), (1,l), (l,1)\},$} \\ -\frac{1}{6}M_{ll} - \frac{1}{3}M_{1l} + \frac{1}{6}M_{l^*l^*} + \frac{1}{3}M_{1l^*} + \frac{1}{2}& \text{if $(p,q) \in \{(l^*,l^*), (1,l^*),(l^*,1)\},$} \\ M_{pq} & \text{otherwise.} \end{cases} \end{align*} Otherwise, $\hat{M}$ is such that \begin{align*} \footnotesize \hat{M}_{pq} = \begin{cases} \frac{7}{10} M_{il} - \frac{3}{10} M_{jl} + \frac{3}{10} M_{ij} + \frac{1}{20}M_{ll} + \frac{1}{10}M_{1l} - \frac{1}{20}M_{l^*l^*} - \frac{1}{10}M_{1l^*} + \frac{3}{20} & \text{if $(p,q) \in \{(i,l), (l,i)\}$,} \\ -\frac{3}{10} M_{il} + \frac{7}{10} M_{jl} + \frac{3}{10} M_{ij} + \frac{1}{20}M_{ll} + \frac{1}{10}M_{1l} - \frac{1}{20}M_{l^*l^*} - \frac{1}{10}M_{1l^*} + \frac{3}{20} & \text{if $(p,q) \in \{(j,l), (l,j)\}$,} \\ \frac{3}{10} M_{il} + \frac{3}{10} M_{jl} + \frac{7}{10} M_{ij} - \frac{1}{20}M_{ll} - \frac{1}{10}M_{1l} + \frac{1}{20}M_{l^*l^*} + \frac{1}{10}M_{1l^*} - \frac{3}{20} & \text{if $(p,q) \in \{(i,j), (j,i)\}$,} \\ \frac{1}{10} M_{il} + \frac{1}{10} M_{jl} - \frac{1}{10} M_{ij} + \frac{3}{20}M_{ll} + \frac{1}{10}M_{1l} - \frac{3}{20}M_{l^*l^*} - \frac{3}{10}M_{1l^*} + \frac{9}{20} & \text{if $(p,q) \in \begin{aligned}[t] \{ &(l,l), (1,l), \\&(l,1)\}\end{aligned} $} \\ -\frac{1}{10} M_{il} - \frac{1}{10} M_{jl} + \frac{1}{10} M_{ij} - \frac{3}{20}M_{ll} - \frac{3}{10}M_{1l} + \frac{3}{20}M_{l^*l^*} + \frac{3}{10}M_{1l^*} + \frac{11}{20} & \text{if $(p,q) \in \begin{aligned}[t] \{&(l^*,l^*), (1,l^*),\\ &(l^*,1)\} \end{aligned}$} \\ M_{pq} & \text{otherwise.} \end{cases} \end{align*} where $l^*$ is obtained from $l$ by $l^* := 2 + (l+ n - 2) \bmod 2n$. \end{lemma} \begin{proof} See Appendix~\ref{App:proofbqp}. \end{proof} Separating the BQP inequalities \eqref{BQP3} can be done in $O(n^3)$ by complete enumeration. 
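For illustration, the enumeration-based separation of the inequalities \eqref{BQP3} can be sketched as follows (Python, $0$-based indices; the argument \texttt{indices} selects the rows and columns of the matrix on which the cuts act, and all names are illustrative).
\begin{verbatim}
from itertools import permutations

def separate_bqp(X, indices, num_cuts):
    """Enumerate the BQP inequalities X_il + X_jl <= X_ll + X_ij over all
    triples of distinct indices and return the num_cuts most violated
    triples (i, j, l)."""
    cuts = []
    for i, j, l in permutations(indices, 3):
        if i < j:                      # the inequality is symmetric in i and j
            v = X[i, l] + X[j, l] - X[l, l] - X[i, j]
            if v > 0:
                cuts.append((v, (i, j, l)))
    cuts.sort(reverse=True)
    return [t for _, t in cuts[:num_cuts]]
\end{verbatim}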
\section{Numerical results}\label{sect:numerics} \textcolor{black}{We implemented our algorithm CP--ADMM in Matlab. For efficiency some separation routines have been coded in~C.} In order to evaluate the quality of the bounds and the run times to compute these bounds, we test our algorithms on various instances from the literature. All experiments were run on an Intel Xeon, E5-1620, 3.70~GHz with 32~GB memory. To compute valid lower bounds after each run of the inner while-loops we use Mosek~\cite{Mosek}. Note that the computation of a valid bound after the inner while-loops is necessary for checking the stopping criteria. We now describe the data sets used in our evaluation. Most of these instances were also considered in~\cite{armbruster2007} and~\cite{HelmbergEtAl}. \begin{itemize} \item $G_{|V|,|V|p}$ and $U_{|V|,|V|{\pi d^2}}$: randomly generated graphs by Johnson et al.~\cite{johnson1989optimization}. \begin{itemize} \item $G_{|V|,|V|p}$: graphs $G = (V,E)$, with $|V| \in \{124, 250, 500, 1000\}$ and four individual edge probabilities $p$. These probabilities were chosen depending on $|V|$, so that the average expected degree of each node was approximately $|V| p \in \{2.5, 5, 10, 20\}$. \item $U_{|V|,|V|{\pi d^2}}$: graphs $G=(V,E)$, with $|V| \in \{500,1000\}$ with distance value $d$ such that $|V|{\pi d^2} \in \{ 5, 10, 20, 40\}$. To form such a graph $G=(V,E)$, one chooses $2|V|$ independent numbers uniformly from the interval $(0,1)$ and views them as coordinates of $|V|$ nodes on the unit square. An edge is inserted between two vertices if and only if their Euclidean distance is at most $d$. \end{itemize} \item Mesh graphs from~\cite{de1993graph,ferreira1998}: Instances from finite element meshes; all nonzero edge weights are equal to one. Graph names begin with `mesh', followed by the number of vertices and the number of edges. \item KKT graphs: These instances originate from nested bisection approaches for solving sparse symmetric linear systems. Each instance consists of a graph that represents the support structure of a sparse symmetric linear system, for details see~\cite{helmberg2004cutting}. \item Toroidal 2D- and 3D-grid graphs arise in physics when computing ground states for Ising spinglasses, see e.g.,~\cite{helmberg2004cutting}. They are generated using the \texttt{rudy} graph generator~\cite{rudygenerator}: \begin{itemize} \item spinglass2pm\_$n_r$: A toroidal 2D-grid for a spinglass model with weights $\{+1,-1\}$. The grid has size $n_r \times n_r $, i.e., $|V| = n_r^2$. The percentage of edges with negative weights is $50~\%$. \item spinglass3pm\_$n_r$: A toroidal 3D-grid for a spinglass model with weights $\{+1,-1\}$. The grid has size $n_r \times n_r \times n_r$, i.e., $|V| =n_r^3$. The percentage of edges with negative weights is $50~\%$. \end{itemize} \end{itemize} \subsection{Numerical results for the $k$-EP} In the CP--ADMM, see Algorithm~\ref{Alg:CP_ADMM}, we input an upper bound $UB$ and the parameters $maxTime$, $numCuts$, $maxOuterloops$, $\epsproj$, and $\epsadmm$. We also require the bounds $\sigma_{\min}$, and $\sigma_{\max}$ for the adaptive stepsize term. The setting of these parameters is as follows: \begin{itemize} \item As an upper bound we input the values we obtained by heuristics or the optimal solution given in the literature. \item The maximal number of cuts added in each outer while-loop, $numCuts$, is $3n$ for graphs with $n\leq 300$ and $5n$ when $n>300$. 
These values are determined by preliminary tests, see also Appendix~\ref{App:numericalresults}. \item The maximal number of outer while-loops is~30 for instances with $n\leq 300$, and~10 when $n>300$. \item The precision for Dykstra's projection algorithm $\epsproj$ is set to $10^{-4}$. \item The inner precision $\epsadmm$ is $10^{-4}$ in the last iteration and $10^{-3}$ in all previous loops. \item The maximum computation time $maxTime$ is set to 2~hours. \item The bounds $\sigma_{\min}$ and $ \sigma_{\max}$ for $\sigma^p$ are $10^{-5}$ and $ 10^3$, respectively. \end{itemize}

In each outer while-loop we separate at most $numCuts$ violated inequalities. We experimented with two strategies: One strategy is to first add violated triangle inequalities and, in case fewer than $numCuts$ violated triangle inequalities are found, to add violated independent set inequalities. The other strategy is to mix the two kinds of cuts and search for violated triangle and independent set inequalities together. The experiments showed that the latter strategy obtains better results, i.e., better bounds within the same time. Therefore, in the final setting we search for the most violated inequalities among both the triangle and the independent set inequalities. The separation of triangle inequalities is done by complete enumeration. Searching for independent set inequalities is also done by complete enumeration if $k\in \{2,3\}$. For $k \in \{4,5\}$ we apply the heuristic from~\cite{anjos2013solving} and Algorithm~\ref{alg:separate_clique_prob}, as explained in Section~\ref{subsect:clique}.

\bigskip

In Table~\ref{table:compareAll} we compare the eigenvalue lower bound by Donath and Hoffman~\cite{Donath1973LowerBF} (denoted by DH) with the bounds obtained from the DNN relaxation and from the DNN relaxation with additional cuts, on selected graphs from our testbed. We do not include other bounds from the literature, such as the bounds from \cite{Hager2013AnEA}, as they are weaker than the DH bound. The DH bound is {\color{black} obtained by solving the corresponding SDP relaxation using Mosek~\cite{Mosek}.} The numbers in the table show that the DNN bound significantly improves over the DH bound. {\color{black}Moreover, our ADMM algorithm requires less time and memory to compute the DNN bound than Mosek needs to compute the DH bound.} Adding cuts to the DNN relaxation gives an even more substantial improvement. Hence, including triangle and independent set inequalities is much stronger than including non-negativity constraints only. As the DH bound is not competitive with the DNN bound and far from the DNN+cuts bound, we do not include it in the subsequent presentation of the numerical results.

Computing an optimal $k$-equipartition using commercial solvers is out of reach unless the graphs are extremely sparse. For instances in Table~\ref{table:compareAll}, Gurobi {\color{black} (with the default settings)} solves the small and very sparse instances within a few seconds, but obtains a gap of more than 40~\% after 2 hours for larger, but still reasonably sparse, graphs. E.g., for $G_{250,5}$, a graph on 250 vertices with density 2~\%, the gap after two hours is more than 40~\%. For $G_{250,20}$, a graph on 250~vertices and density 8~\%, the gap is even more than 80~\% after two hours. This limited power of commercial solvers to tackle the $k$-EP is also observed in \cite{Beluc}. Hence, we omit the comparison to Gurobi or other LP-based solvers in the tables.

\begin{table}[htp!]
\centering \footnotesize
\begin{tabular}{rrrr|rrr}
\hline
Graph & $n$ & $k$ & ub & DH & DNN & DNN + Cuts\\
\hline
mesh.70.120   & 70  & 2 & 7   & 1.93   & 2.91   & 6.02 \\
KKT.lowt01    & 82  & 2 & 13  & 2.47   & 4.88   & 12.43 \\
mesh.148.265  & 148 & 4 & 22  & 5.46   & 8.13   & 21.23 \\
$G_{124,2.5}$ & 124 & 2 & 13  & 4.59   & 7.29   & 12.01 \\
$G_{124,10}$  & 124 & 2 & 178 & 138.24 & 152.86 & 170.88 \\
$G_{124,20}$  & 124 & 2 & 449 & 403.08 & 418.67 & 439.96 \\
$G_{250,2.5}$ & 250 & 2 & 29  & 10.99  & 15.16  & 28.30 \\
$G_{250,5}$   & 250 & 2 & 114 & 70.21  & 81.52  & 105.00 \\
$G_{250,10}$  & 250 & 2 & 357 & 280.25 & 303.02 & 330.40 \\
\hline
\end{tabular}
\caption{Comparison of different lower bounds} \label{table:compareAll} \end{table}

In Tables~\ref{tab:hager_table9_tol=10-3_mixcuts=1} to~\ref{tab:LargeGraph_GU_5n_k=5_mixcuts=1_tol=10-3} we give the details of our numerical results. In the first four columns we list the name of the instance, the number of vertices $n$, the partition number $k$ and an upper bound. The upper bounds are obtained by heuristics or are the optimal values given in the literature. In columns~5 and~6 the lower bound (rounded up) and the computation time for solving the DNN relaxation are given. Finally, in the remaining columns we display the results when adding cuts to the DNN relaxation: we report the rounded-up lower bounds, the computation time and the improvement (in \%) of the rounded-up lower bounds with respect to the DNN relaxation without cuts. We decided to report the improvement with respect to the rounded values, since otherwise the percentages are misleadingly large for small bound values. E.g., for instance $U_{1000,5}$ and $k=4$, the value of the DNN bound is 0.17 and the DNN+cuts bound is 2.45, giving an improvement of 1,341.2~\%. The rounded-up values are 1 and 3, respectively, giving a 200~\% improvement, which reflects the situation much better. In columns~10 and~11 we list the number of triangle cuts and independent set cuts present when stopping the algorithm. In the last two columns, the number of iterations of the ADMM and the number of outer while-loops are reported.

As can be observed in all tables, the bounds improve drastically when adding triangle and independent set inequalities to the DNN relaxation, while the time for computing these bounds is still reasonable. {\color{black}We remark here that we can stop the CP--ADMM algorithm at any time and still provide a valid lower bound.} The results show that the CP--ADMM selects many more triangle inequalities than independent set inequalities when computing bounds for the $k$-equipartition problem. Recall that we search for triangle and independent set inequalities together. Hence, there are more triangle inequalities with a large violation, which indicates that the triangle inequalities contribute more to the strength of the bound than the independent set inequalities. We discuss the results in more detail in the subsequent sections.

\subsubsection{Detailed results for $k=2$}

In Tables~\ref{tab:hager_table9_tol=10-3_mixcuts=1} and~\ref{tab:LargeGraph_GU_5n_mixcuts=1_tol10-3} we report results for $k=2$. Table~\ref{tab:hager_table9_tol=10-3_mixcuts=1} includes graphs with up to 274~vertices; the DNN bound for these graphs can be computed within a few seconds. After adding in total between roughly 500 and 16,000 triangle inequalities and an additional 300 to roughly 7,000 independent set inequalities, the bound improves by between 4.55~\% and 300~\%. In several cases, the bound closes the gap to the best known upper bound.
Otherwise, the algorithm stops because the improvement of the bounds in consecutive outer while-loops is too small. The time for computing these bounds ranges from a few seconds to 17~minutes.

In Table~\ref{tab:LargeGraph_GU_5n_mixcuts=1_tol10-3} we consider larger graphs with 500 and 1,000 vertices. The DNN bound can be computed for these graphs within 12~minutes. After adding triangle and independent set inequalities, the bound improves by up to 200~\%, with running times ranging from 40~seconds up to 2~hours. On the $G$ graphs one observes that the improvement of the bound when adding cuts gets more significant as the graphs get sparser. Note that for all these instances we stop because the maximum number of outer while-loops is reached or the improvement of the bounds in consecutive outer while-loops is too small. We give further results for graphs from the literature in Table~\ref{tab:PlanarGraph_3n_mixcuts=1_tol=10-3} in Appendix~\ref{App:numericalresults}. For most of these we prove optimality of the best found $k$-equipartition, confirming the high quality of our lower bounds.

\subsubsection{Detailed results for $k>2$}

{\color{black}In Tables~\ref{tab:hager_table9DiffK_tol=10-3_mixcuts=1} to~\ref{tab:LargeGraph_GU_5n_k=5_mixcuts=1_tol=10-3} we report results for $k>2$.} As in the case $k=2$, the bounds can be significantly improved while the time for obtaining the bounds is still reasonable. However, we close the gap for fewer instances than for $k=2$. The improvement for the larger graphs after adding cuts to the DNN relaxation is up to 200~\% for $k=4$ and up to 300~\% for $k=5$. For smaller graphs the CP--ADMM stops because the improvement of the lower bound compared to the previous iteration is below the threshold of~0.001, see Section~\ref{sec:stopp}. The largest improvement in the DNN+cuts bound w.r.t.~the DNN bound is 500~\%. Again, we observe that for graphs with more than 250 vertices, the algorithm typically stops because the maximum number of outer loops is reached.

\subsection{Numerical results for bisection}

In Algorithm~\ref{Alg:CP_ADMM}, we input an upper bound $UB$ and the parameters $numCuts$, $maxOuterLoops$, $\epsproj$, $\epsadmm$, and $maxTime$. The setting of these parameters is as follows.
\begin{itemize} \item As an upper bound we input the values we obtained by heuristics. \item The maximal number of cuts added in each outer while-loop, $numCuts$, is $3n$ for graphs with $n\leq 300$ and $5n$ when $n>300$. \item The maximal number of outer while-loops is~30 for instances with $n\leq 300$, and~10 when $n>300$. \item The precision for Dykstra's projection algorithm $\epsproj$ is set to $10^{-6}$. \item The inner precision $\epsadmm$ is $10^{-5}$ in the first and last inner while-loop, and $10^{-4}$ in all other inner while-loops. \item The maximum computation time $maxTime$ is set to 2~hours. \end{itemize}
The parameters differ from those for the $k$-equipartition problem since for the bisection problem the matrix in the SDP is of order $2n+1$, i.e., more than twice the size of the matrix for the $k$-equipartition problem. Violated BQP inequalities~\eqref{BQP3} are found by an enumeration search. We take $m_1 = \lceil np \rceil$, where $n$ is the number of vertices in the graph and $p$ is chosen from $\{ 0.6, 0.65, 0.7 \}$. The results are given in Table~\ref{tab:TableHager_finaltol=0.001_bisection} for smaller graphs and in Table~\ref{tab:Table_largeG_Spinglass2pm_bisection} for larger graphs.
As for the $k$-equipartition problem, we observe a significant improvement of the bound after adding inequalities. For graphs with up to 274 vertices, see Table~\ref{tab:TableHager_finaltol=0.001_bisection}, the improvement of the DNN+cuts bound over the DNN bound ranges between 2.23~\% and 200~\% after adding up to 16,000 BQP inequalities. For five of the graphs in Table~\ref{tab:TableHager_finaltol=0.001_bisection} the algorithm stopped because the gap was closed; for four graphs the improvement of the lower bound was only minor, and for another four the number of violated cuts found was too small. For only one graph did the algorithm stop due to the time limit.

For the larger graphs we typically stop because of the time limit of 2~hours, see Table~\ref{tab:Table_largeG_Spinglass2pm_bisection}. For those graphs the DNN+cuts bound improves over the DNN bound by between 0.44~\% and 17.32~\% after adding between 8,000 and 24,200 BQP inequalities. One can observe that the results are somewhat weaker than for the $k$-equipartition problem. Note that the nature of the cuts added to the DNN relaxations for the $k$-EP and the GBP differs, and that the matrix variables in the GBP relaxation are more than twice as large as those for the $k$-EP on the same graph. \textcolor{black}{For large graphs, we are able to add roughly up to 50,000 cuts for the $k$-EP and 24,000 cuts for the GBP within a time span of 2 hours.}

\section{Conclusions}

This study investigates and expands the limits of computing strong DNN bounds with additional cutting planes for large graph partition problems. Due to memory requirements, state-of-the-art interior point methods are only capable of solving medium-size SDPs and are not suitable for handling a large number of polyhedral cuts. We overcome both difficulties by utilizing a first-order method within a cutting-plane framework.

Our approach focuses on two variations of the graph partition problem: the $k$-equipartition problem and the graph bisection problem. We first derive DNN relaxations for both problems, see~\eqref{eq:relax_dnn} and \eqref{DNNBP}, and then apply facial reduction to obtain strictly feasible equivalent relaxations, see~\eqref{sdp-p} and \eqref{ZWslater}, respectively. To prove the minimality of the face of the DNN cone containing the feasible set of \eqref{DNNBP}, we exploit the dimension of the bisection polytope, see Theorem~\ref{Thm:DimConvF}. After facial reduction, both relaxations admit a natural splitting of the feasible set into a polyhedral part and a positive semidefinite part. Moreover, both relaxations can be further strengthened by several types of cutting planes.

To solve both relaxations, we use an ADMM update scheme, see~\eqref{ADMMall1}, that is incorporated into a cutting-plane framework, leading to the so-called CP--ADMM, see Algorithm~\ref{Alg:CP_ADMM}. The cutting planes are handled in the polyhedral subproblem by exploiting a semi-parallelized version of Dykstra's cyclic projection algorithm, see Section~\ref{sect:Dykstra}. The CP--ADMM benefits from warm starts whenever new cuts are added, provides valid lower bounds even after solving with low precision, and can be implemented efficiently. Particular ingredients of the CP--ADMM are the projectors onto the polyhedra induced by the cutting planes.
Projection operators for three types of cutting planes that are effective for the graph partition problem, i.e., the triangle, independent set and BQP inequalities, are derived in Lemma~\ref{Lem:triangle}, \ref{Lem:clique} and \ref{Lem:bqp}, respectively. Numerical experiments show that using our CP--ADMM algorithm we are able to produce high-quality bounds for graphs up to 1,024~vertices. We experimented with several graph types from the literature. For structured graphs of medium size and the $2$-EP we often close the gap in a few seconds or at most a couple of minutes. For bisection problems on those graphs, we also close the gaps in many cases. For larger graphs, we are able to add polyhedral cuts roughly up to 50,000 for $k$-EP and 24,000 for the GBP within 2 hours, which results in strong lower bounds. Our results provide benchmarks for solving medium and large scale graph partition problems. \medskip This research can be extended in several directions. Motivated by the optimistic results for the $k$-EP and the GBP, we expect that strong bounds from DNN relaxations with additional cutting planes can be obtained for other graph partition problems, such as the vertex separator problem and the maximum cut problem. Moreover, since the major ingredients of our algorithm are presented generally, establishing an approach for solving general DNN relaxations with additional cutting planes is an interesting future research direction. Our results also provide new perspectives on solving large-scale optimization problems to optimality by using SDPs. \subsection*{Acknowledgements} We would like to thank William Hager for providing us instances from the paper~\cite{Hager2013AnEA}. \begin{landscape} \input{hager_table9_tol=10-3_mixcuts=1} \input{LargeGraph_GU_5n_mixcuts=1_tol=10-3} \input{hager_table9DiffK_tol=10-3_mixcuts=1} \input{LargeSparseSpinglass2pm_DiffK_DNNtol=0.0001_finaltol=0.0001} \input{LargeSparseSpinglass3pm_DiffK_DNNtol=0.0001_finaltol=0.0001} \input{LargeGraph_GU_5n_k=4_mixcuts=1_tol=10-3} \input{LargeGraph_GU_5n_k=5_mixcuts=1_tol=10-3} \input{TableHager_finaltol=0.001_bisection} \input{Table_largeG_Spinglass2pm_bisection} \end{landscape} \clearpage \bibliographystyle{plainnat} \bibliography{mybib} \appendix \section{Projection onto polyhedral sets} \label{App:projectorX} One of the ingredients of the CP--ADMM is the orthogonal projection onto a polyhedral set. More precisely, the $X$-subproblem in~\eqref{X_sub} involves a projection onto the set $\mathcal{X}$, where $\mathcal{X}$ is given in~\eqref{setX} and \eqref{setXGB} for the $k$-EP and GBP, respectively. We focus here on the projector for the GBP. The projector for the $k$-EP, which has a simpler structure, can be obtained similarly. Recall from~\eqref{setXGB} that the polyhedral set $\mathcal{X}_{BP}$ looks as follows: \begin{align*} \mathcal{X}_{BP} &= \left \{ X = \begin{pmatrix} x^0 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \end{pmatrix} \in {\mathcal S}^{2n+1} : ~~ \begin{aligned} & {\mathcal G}_{\mathcal J}({X}) = \mathbf{0}, ~{X}_{1,1}=1,~\trace(X^{ii})=m_i,~ i\in [2], \\ & \diag(X^{11})+\diag(X^{22})={\mathbf 1}_n,~ X {\mathbf u}_1 =\diag(X),\\ & \mathbf{0} \leq {X} \leq {\mathbf J} \end{aligned} \right\}. \end{align*} Let $\mathcal{P}_{\mathcal{X}_{BP}} : \mathcal{S}^{2n+1} \to \mathcal{S}^{2n+1}$ denote the projection onto $\mathcal{X}_{BP}$. 
Observe that each constraint that defines $\mathcal{X}_{BP}$ either acts on the diagonal, first row, and first column of the matrix, or on the remaining entries. In the latter case, an entry $X_{ij}$ is either bounded by 0 and 1 or equals 0 if $(i,j) \in \mathcal{J}$. These projections are very simple and are given by the operators $T_{\inner}$ and $T_{\text{box}}$ in Table~\ref{TableOperators}. Next, we focus on the entries on the diagonal, first row\textcolor{black}{,} and first column of the orthogonal projection. Suppose $Y = \mathcal{P}_{\mathcal{X}_{BP}}(X)$ and let $y_1 := \diag(Y^{11})$ and $y_2 := \diag(Y^{22})$. Then $y_1$ and $y_2$ can be obtained via the following optimization problem: \begin{align} \label{eq:projectionDiag} \begin{aligned} \min_{y_1,y_2 \in \mathbb{R}^n} \quad & (y_1 - \diag(X^{11}))^\top (y_1 - \diag(X^{11})) +2(y_1 - x^1)^\top (y_1 - x^1) \\ & + (y_2 - \diag(X^{22}))^\top (y_2 - \diag(X^{22})) +2(y_2 - x^2)^\top (y_2 - x^2) \\ \text{s.t.} \quad & \mathbf{1}_n^\top y_1 = m_1,~ \mathbf{1}_n^\top y_2 = m_2,~ y_1 + y_2 = \mathbf{1}_n,~ y_1 \geq \mathbf{0}_n,~ y_2 \geq \mathbf{0}_n. \end{aligned} \end{align} Using basic algebra, one can show that the optimal $y_1$ to~\eqref{eq:projectionDiag} is attained by the minimizer of the following optimization problem: \begin{align} \label{eq:projectionDiag_rewritten} \begin{aligned} \min_{y_1 \in \mathbb{R}^n} \quad & \left\Vert y_1 - \left( \frac{1}{6} (\diag(X^{11}) - \diag(X^{22})) + \frac{1}{3}(x^1 - x^2) + \frac{1}{2}\mathbf{1}_n\right)\right\Vert_2^2 \\ \text{s.t.} \quad & \mathbf{1}_n^\top y_1 = m_1,~ \mathbf{0}_n \leq y_1 \leq \mathbf{1}_n, \end{aligned} \end{align} while the corresponding optimal $y_2$ to~\eqref{eq:projectionDiag} is $y_2 = \bold{1}_n - y_1$. Observe that \eqref{eq:projectionDiag_rewritten} is equivalent to a projection onto the capped simplex $\bar{\Delta}(m_1) = \{ y \in \mathbb{R}^n \, : \, \, \mathbf{1}_n^\top y = m_1,~ \mathbf{0}_n \leq y \leq \mathbf{1}_n\}$. The projection onto $\bar{\Delta}(m_1)$ we denote by $\mathcal{P}_{\bar{\Delta}(m_1)} : \mathbb{R}^n \to \mathbb{R}^n$, which can be performed efficiently, see~\cite{AngEtAl}. We define the operator $T_{\arrow}$, see Table~\ref{TableOperators}, to embed the optimal $y_1$ and $y_2$ in the space $\mathcal{S}^{2n+1}$. \begin{table}[H] \centering \footnotesize \begin{tabular}{@{}ccll@{}} \toprule \multicolumn{3}{c}{Operator} & Description \\ \midrule $T_{\inner}$ & : & $\mathcal{S}^{2n+1} \to \mathcal{S}^{2n+1}$ & \begin{tabular}[c]{@{}l@{}} $T_{\inner}\left( X \right)_{ij} = 0$ if $i = 1$ or $j = 1$ or $i = j$ or $(i,j) \in \mathcal{J}$ \\ and $T_{\inner}(X)_{ij} = X_{ij}$ otherwise. \end{tabular}\\ [0.5cm] $T_{\text{box}}$ & : & $\mathcal{S}^{2n+1} \to \mathcal{S}^{2n+1}$ & $T_{\text{box}}(X)_{ij} = \min( \max( X_{ij}, 0), 1)$ for all $(i,j)$. \\[0.3cm] $T_\arrow$ & : & $\mathbb{R}^n \to \mathcal{S}^{2n+1}$ & $T_\arrow(x) = \begin{pmatrix} 1 & x^\top & (\mathbf{1}_n - x)^\top \\ x & \Diag(x) & \mathbf{0} \\ \mathbf{1}_n - x & \mathbf{0} & \Diag(\mathbf{1}_n - x) \end{pmatrix}$. \\ \bottomrule \end{tabular} \caption{Overview of operators and their definitions. 
\label{TableOperators}} \end{table} \noindent Now, the projector $\mathcal{P}_{\mathcal{X}_{BP}}$ can be written out explicitly as follows: \begin{minipage}{0.95\textwidth} \begin{flalign*} \mathcal{P}_{\mathcal{X}_{BP}}\left( \begin{pmatrix} x^0 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \end{pmatrix} \right) = T_{\text{box}} \left( T_{\inner} \left( \begin{pmatrix} x^0 & (x^1)^\top & (x^2)^\top \\ x^1 & X^{11} & X^{12} \\ x^2 & X^{21} & X^{22} \end{pmatrix} \right) \right) && \end{flalign*} \begin{flalign*} && + T_{\arrow}\left( \mathcal{P}_{\bar{\Delta}(m_1)}\left( \frac{1}{6} (\diag(X^{11}) - \diag(X^{22})) + \frac{1}{3}(x^1 - x^2) + \frac{1}{2}\mathbf{1}_n\right) \right). \end{flalign*} \end{minipage} \section{Proofs for Cutting Plane Projectors} \subsection{Proof of Lemma \ref{Lem:triangle}} \label{App:prooftriangle} The first statement is trivial. Now assume $M \notin \mathcal{H}^\Delta_{ijl}$, i.e., $M_{ij} + M_{il} > \frac{k-1}{k} + M_{jl}$. The projection of $M$ onto $\mathcal{H}^\Delta_{ijl}$ is the solution to $\min_{\hat{M} \in \mathcal{S}^n}\left\{ \Vert \hat{M} - M \Vert_F^2 \, : \, \, \hat{M} \in \mathcal{H}^\Delta_{ijl}\right\}$. Since the inequality that describes $\mathcal{H}^\Delta_{ijl}$ only involves the submatrix induced by indices $i, j$ and $l$, we can restrict ourselves to the following convex optimization problem: \begin{align*} \min_{\alpha, \beta, \gamma} \quad & 2(\alpha - M_{ij})^2 + 2(\beta - M_{il})^2 + 2(\gamma - M_{jl})^2 \\ \text{s.t.} \quad & \alpha + \beta \leq \frac{k-1}{k} + \gamma. \end{align*} Let $\lambda \geq 0$ denote the Lagrange multiplier for the inequality, then the KKT conditions imply the following system: \begin{align*} \begin{cases} 4(\alpha - M_{ij}) + \lambda = 0 \\ 4(\beta - M_{il}) + \lambda = 0 \\ 4(\gamma - M_{jl}) - \lambda = 0 \\ \lambda (\alpha + \beta - \frac{k-1}{k} - \gamma) = 0 \\ \alpha + \beta \leq \frac{k-1}{k} + \gamma \\ \lambda \geq 0. \end{cases} \end{align*} By complementarity, we have either $\lambda = 0$ or $\alpha + \beta - \frac{k-1}{k} - \gamma = 0$. The first case leads to the solution $\hat{M} = M$ that is a KKT-point if $M \in \mathcal{H}^\Delta_{ijl}$. If $M \notin \mathcal{H}^\Delta_{ijl}$, then $\alpha + \beta - \frac{k-1}{k} - \gamma = 0$. It follows from the first three conditions of the above system that we have \begin{align*} \alpha = M_{ij} - \frac{1}{4}\lambda, \quad \beta = M_{il} - \frac{1}{4} \lambda, \quad \gamma = M_{jl} + \frac{1}{4}\lambda. \end{align*} Substitution into $\alpha + \beta - \frac{k-1}{k} - \gamma = 0$ yields \begin{align*} 0 & = M_{ij} - \frac{1}{4}\lambda + M_{il} - \frac{1}{4}\lambda - \frac{k-1}{k} - M_{jl} - \frac{1}{4}\lambda \\ \Longleftrightarrow \quad \lambda & = \frac{4}{3}\left( M_{ij} + M_{il} - M_{jl} - \frac{k-1}{k} \right). 
\end{align*} Using this expression for the Lagrange multiplier gives the optimal values of $\alpha, \beta$ and $\gamma$: \begin{align*} \alpha & = M_{ij} - \frac{1}{4}\cdot\frac{4}{3}\left( M_{ij} + M_{il} - M_{jl} - \frac{k-1}{k} \right) = \frac{2}{3} M_{ij} - \frac{1}{3} M_{il} + \frac{1}{3} M_{jl} + \frac{1}{3} - \frac{1}{3k}, \\ \beta & = M_{il} - \frac{1}{4}\cdot \frac{4}{3}\left( M_{ij} + M_{il} - M_{jl} - \frac{k-1}{k} \right) = -\frac{1}{3}M_{ij} + \frac{2}{3} M_{il} + \frac{1}{3} M_{jl} + \frac{1}{3} - \frac{1}{3k}, \\ \gamma & = M_{jl} + \frac{1}{4} \cdot \frac{4}{3}\left( M_{ij} + M_{il} - M_{jl} - \frac{k-1}{k} \right) = \frac{1}{3}M_{ij} + \frac{1}{3}M_{il} + \frac{2}{3} M_{jl} - \frac{1}{3} + \frac{1}{3k}. \end{align*} Setting $\hat{M}_{ij} = \hat{M}_{ji}= \alpha$, $\hat{M}_{il} = \hat{M}_{li} =\beta$ and $\hat{M}_{jl} = \hat{M}_{lj} = \gamma$ gives the desired result. \qedsymbol \subsection{Proof of Lemma \ref{Lem:clique}} \label{App:proofclique} We use a technique similar to the one used in the proof of Lemma \ref{Lem:triangle}. The statement is trivial if $M \in \mathcal{H}^{IS}_I$. If $M \notin \mathcal{H}^{IS}_I$, it suffices to consider the following optimization problem: \begin{align*} \min_{z} \quad & \sum_{i, j \in I, i < j} 2\left( z_{ij} - M_{ij} \right)^2 \\ \text{s.t.} \quad & \sum_{i, j \in I, i < j} z_{ij} \geq \frac{1 - k}{2}. \end{align*} Since this problem is convex, we restrict ourselves to solutions satisfying the KKT conditions. Let $\lambda \geq 0$ denote the Lagrange multiplier for the inequality. The KKT conditions read as follows: \begin{align*} \begin{cases} 4(z_{ij} - M_{ij} ) - \lambda = 0 \qquad \forall i, j \in I, ~i < j \\ \lambda \left( \frac{1 - k}{2} - \sum_{i, j \in I, i < j} z_{ij} \right) = 0 \\ \sum_{i,j \in I, i < j} z_{ij} \geq \frac{1 - k}{2} \\ \lambda \geq 0. \end{cases} \end{align*} Complementarity implies that either $\lambda = 0$ or $\frac{1 - k}{2} - \sum_{i, j \in I, i < j} z_{ij} = 0$. The former case leads to the solution $z_{ij} = M_{ij}$ for all $i,j \in I, i < j$, which is a KKT point only if $M \in \mathcal{H}^{IS}_I$. Since this is not the case, we have $\sum_{i, j \in I, i < j} z_{ij} = \frac{1 - k}{2}$. It follows from the first equation of the system that $z_{ij} = M_{ij} + \frac{1}{4}\lambda$ for all $i, j \in I, i < j$. Substitution into $\sum_{i, j \in I, i < j} z_{ij} = \frac{1 - k}{2}$ yields \begin{align*} \sum_{i,j\in I, i < j} \left( M_{ij} + \frac{1}{4}\lambda \right) = \frac{1 - k}{2} \quad & \Longleftrightarrow \quad \binom{k+1}{2} \frac{1}{4} \lambda + \sum_{i,j \in I, i < j} M_{ij} = \frac{1 - k}{2} \\ & \Longleftrightarrow \quad \frac{k(k+1)}{2 \cdot 4} \lambda = \frac{1 - k}{2} - \sum_{i,j \in I, i < j} M_{ij} \\ & \Longleftrightarrow \quad \lambda = \frac{8}{k(k+1)} \left( \frac{1 - k}{2} - \sum_{i,j \in I, i < j} M_{ij} \right). \end{align*} Hence, for all $p,q \in I$, $p < q$, we have \begin{align*} z_{pq} = M_{pq} + \frac{1}{4}\lambda = M_{pq} - \frac{k-1}{k(k+1)} - \frac{2}{k(k+1)} \sum_{i,j \in I, i < j}M_{ij}. \end{align*} Setting $\hat{M}_{pq} = z_{pq}$ for all $p, q \in I, p < q$, $\hat{M}_{pq} = z_{qp}$ for all $p,q \in I, p > q$, and $\hat{M}_{pq} = M_{pq}$ otherwise leads to the desired result. \qedsymbol
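For illustration of this formula, take $k = 2$ and $I = \{i,j,l\}$, so that $\binom{k+1}{2} = 3$ pairs of indices are involved, and let $M_{ij} = M_{il} = M_{jl} = -\frac{1}{2}$. Then $\sum_{p,q \in I, p < q} M_{pq} = -\frac{3}{2} < \frac{1-k}{2} = -\frac{1}{2}$, so $M \notin \mathcal{H}^{IS}_I$, and the formula above gives \begin{align*} z_{pq} = -\frac{1}{2} - \frac{k-1}{k(k+1)} - \frac{2}{k(k+1)} \left( -\frac{3}{2} \right) = -\frac{1}{2} - \frac{1}{6} + \frac{1}{2} = -\frac{1}{6} \end{align*} for every pair, so that $\sum_{p,q \in I, p < q} z_{pq} = \frac{1-k}{2}$, i.e., the projected matrix satisfies the independent set inequality with equality, as expected.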
\subsection{Proof of Lemma \ref{Lem:bqp}} \label{App:proofbqp} The projection of $M$ onto $\mathcal{H}_{ijl}^{BQP}$ is the solution to $\min_{\hat{M} \in \mathcal{S}^{2n+1}}\left\{ \Vert \hat{M} - M \Vert_F^2 \, : \, \, \hat{M} \in \mathcal{H}_{ijl}^{BQP}\right\}$. The inequality describing $\mathcal{H}_{ijl}^{BQP}$ only involves the pairs $(i,l), (j,l), (i,j)$ and $(l,l)$. Since $\diag(X^{11}) = x^1, \diag(X^{22}) = x^2$ and $x^1 + x^2 = {\mathbf 1}_n$ must be satisfied by $\hat{M}$, any change in $(l,l)$ also has an effect on the pairs $(1,l)$, $(l^*, l^*)$ and $(1,l^*)$, where $l^*$ is the index corresponding to $l$ in the diagonal block not containing $l$. Taking these pairs into account, we can restrict ourselves to the following convex optimization problem: \begin{align*} \min_{\alpha, \beta, \gamma, \mu} \quad & \begin{aligned}[t] & 2(\alpha - M_{il})^2 + 2(\beta - M_{jl})^2 + 2(\gamma - M_{ij})^2 + (\mu - M_{ll})^2 + 2(\mu - M_{1l})^2 \\ &\quad + (1 - \mu - M_{l^*l^*})^2 + 2(1 - \mu - M_{1l^*})^2 \end{aligned} \\ \text{s.t.} \quad & \alpha + \beta \leq \gamma + \mu. \end{align*} Let $\lambda \geq 0$ denote the Lagrange multiplier for the inequality; then the KKT conditions imply the following system: \begin{align*} \begin{cases} 4(\alpha - M_{il}) + \lambda = 0 \\ 4(\beta - M_{jl}) + \lambda = 0 \\ 4(\gamma - M_{ij}) - \lambda = 0 \\ 2(\mu - M_{ll}) + 4(\mu - M_{1l}) +2 (\mu - 1 + M_{l^*l^*}) + 4(\mu - 1 + M_{1l^*}) - \lambda = 0 \\ \lambda (\alpha + \beta - \gamma - \mu) = 0 \\ \alpha + \beta \leq \gamma + \mu \\ \lambda \geq 0. \end{cases} \end{align*} Complementarity implies that either $\alpha + \beta = \gamma + \mu$ or $\lambda = 0$. The latter case leads to the KKT point $(\alpha, \beta, \gamma, \mu) = \left(M_{il}, M_{jl}, M_{ij}, \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2} \right)$, which is optimal if $M_{il} + M_{jl} \leq M_{ij} + \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2}$. Now assume that $\lambda \neq 0$. Then $\alpha + \beta = \gamma + \mu$. The first four equalities of the KKT system can be rewritten as \begin{align*} \alpha &= M_{il} - \frac{1}{4}\lambda, \quad \beta = M_{jl} - \frac{1}{4}\lambda, \quad \gamma = M_{ij} + \frac{1}{4}\lambda, \\ \mu &= \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2} + \frac{1}{12}\lambda. \end{align*} Substitution into $\alpha + \beta = \gamma + \mu$ yields \begin{align*} M_{il} & - \frac{1}{4}\lambda + M_{jl} - \frac{1}{4}\lambda = M_{ij} + \frac{1}{4}\lambda + \frac{1}{6}M_{ll} + \frac{1}{3}M_{1l} - \frac{1}{6}M_{l^*l^*} - \frac{1}{3}M_{1l^*} + \frac{1}{2} + \frac{1}{12}\lambda \\ \Longleftrightarrow \quad \lambda & = \frac{12}{10}\left( M_{il} + M_{jl} - M_{ij} - \frac{1}{6}M_{ll} - \frac{1}{3}M_{1l} + \frac{1}{6}M_{l^*l^*} + \frac{1}{3}M_{1l^*} - \frac{1}{2}\right).
\end{align*} Substitution of this expression for the Lagrange multiplier into the remaining equalities provides the optimal values for $\alpha, \beta, \gamma$ and $\mu$: \begin{align*} \alpha & = \frac{7}{10} M_{il} - \frac{3}{10} M_{jl} + \frac{3}{10} M_{ij} + \frac{1}{20}M_{ll} + \frac{1}{10}M_{1l} - \frac{1}{20}M_{l^*l^*} - \frac{1}{10}M_{1l^*} + \frac{3}{20} \\ \beta & = -\frac{3}{10} M_{il} + \frac{7}{10} M_{jl} + \frac{3}{10} M_{ij} + \frac{1}{20}M_{ll} + \frac{1}{10}M_{1l} - \frac{1}{20}M_{l^*l^*} - \frac{1}{10}M_{1l^*} + \frac{3}{20} \\ \gamma & = \frac{3}{10} M_{il} + \frac{3}{10} M_{jl} + \frac{7}{10} M_{ij} - \frac{1}{20}M_{ll} - \frac{1}{10}M_{1l} + \frac{1}{20}M_{l^*l^*} + \frac{1}{10}M_{1l^*} - \frac{3}{20} \\ \mu & = \frac{1}{10} M_{il} + \frac{1}{10} M_{jl} - \frac{1}{10} M_{ij} + \frac{3}{20}M_{ll} + \frac{3}{10}M_{1l} - \frac{3}{20}M_{l^*l^*} - \frac{3}{10}M_{1l^*} + \frac{9}{20}. \end{align*} One readily checks that these values satisfy $\alpha + \beta = \gamma + \mu$. Setting $\hat{M}_{il} = \hat{M}_{li} = \alpha$, $\hat{M}_{jl} = \hat{M}_{lj} = \beta$, $\hat{M}_{ij} = \hat{M}_{ji} = \gamma$, $\hat{M}_{ll} = \hat{M}_{1l} = \hat{M}_{l1} = \mu$ and $\hat{M}_{l^*l^*} = \hat{M}_{1l^*} = \hat{M}_{l^*1}= 1 - \mu$ gives the final result. \qedsymbol \section{Additional numerical results}\label{App:numericalresults} In this section we report additional numerical results. Table~\ref{tab:LargeGraph_GU_3n_mixcuts=1_tol=10-3} evaluates the quality of the DNN relaxation \eqref{sdp-p2} with additional cuts for large graphs, obtained by adding at most $3n$ cuts in each outer while-loop of Algorithm \ref{Alg:CP_ADMM}. In Table~\ref{tab:LargeGraph_GU_5n_mixcuts=1_tol10-3} in Section~\ref{sect:numerics} we report the results obtained when the number of cuts added in each outer while-loop is at most $5n$. Our numerical results show that the lower bounds can improve significantly when more cuts are added. Therefore, our final choice for large graphs is to add at most $5n$ cuts per outer while-loop. \medskip We furthermore give in Table~\ref{tab:PlanarGraph_3n_mixcuts=1_tol=10-3} additional numerical results for the DNN relaxation \eqref{sdp-p2} with additional cuts and $k=2$ for (rather small) instances from the literature. All these instances have been considered in~\cite{Hager2013AnEA}. The first group of instances consists of the grid graphs of Brunetta et al.~\cite{brunetta1997branch}. These graphs are as follows. \begin{itemize} \item Planar grid instances: To represent instances of equicut on planar grid graphs, we assign a weight between 1 and 10, drawn from a uniform distribution, to the edges of an $h \times k$ planar grid, and weight 0 to the other edges. The names of these graphs consist of the grid size followed by the letter `g'. \item Toroidal grid instances: The same as the planar grid instances, but for toroidal grids. The names of these graphs consist of the grid size followed by the letter `t'. \item Mixed grid instances: These are dense instances in which all edges have a nonzero weight. The edges of a planar grid receive uniformly generated weights between 10 and 100, and all other edges receive uniformly generated weights between 1 and 10. The names of these graphs consist of the grid size followed by the letter `m'. \end{itemize} The second group of instances consists of randomly generated graphs from~\cite{Hager2013AnEA}: for a fixed density, the edges are assigned integer weights uniformly drawn from $[1,10]$. The names of these graphs begin with `v', `t', `q', `c', and `s'.
Table~\ref{tab:PlanarGraph_3n_mixcuts=1_tol=10-3} also includes results on instances constructed from de~Bruijn networks by~\cite{Hager2013AnEA}, whose original data arise in applications related to parallel computer architecture \cite{collins1992vlsi,feldmann1997better}. The names of these graphs begin with `db'. Finally, we test some instances derived from finite element meshes from~\cite{Hager2013AnEA}; the names of these graphs begin with `m'. Table~\ref{tab:PlanarGraph_3n_mixcuts=1_tol=10-3} contains 64 instances, and we prove optimality for 53 of them. The longest time required to compute a lower bound is 2.5 minutes, and the time required to compute the upper bounds is negligible. \begin{landscape} \input{LargeGraph_GU_3n_mixcuts=1_tol=10-3} \input{PlanarGraph_3n_mixcuts=1_tol=10-3} \end{landscape} \end{document} \pgfplotstableread[col sep=comma]{Hager_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph} }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind.
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Instances considered in~\cite[Table~9]{Hager2013AnEA}, $k=2$} \label{tab:hager_table9_tol=10-3_mixcuts=1}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparse5n_datafile3.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph}, }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp. , }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Large $G$ and $U$ graphs from~\cite{johnson1989optimization}, $k=2$}\label{tab:LargeGraph_GU_5n_mixcuts=1_tol10-3}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+Cuts}\\ }, after row={ & & & & (rounded) &\multicolumn{1}{c|}{(\si{\second})} & (rounded)& \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{Hager_datafile2.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, skip rows between index={0}{1}, skip rows between index={7}{8}, columns/graph_string/.style={string type, column name={graph}, }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name={lb}, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every row no 2/.style={before row={\midrule}}, every row no 7/.style={before row={\midrule}}, every row no 13/.style={before row={\midrule}}, every head row/.style={ before row={ \caption{Instances considered in~\cite{HelmbergEtAl}, $k \in \{3,4,5,6 \}$}\label{tab:hager_table9DiffK_tol=10-3_mixcuts=1}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} &\multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule}}, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparseSpinglass2pm_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph/.style={string type, column name={graph} }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every row no 5/.style={before row={\midrule}}, every row no 12/.style={before row={\midrule}}, every head row/.style={ before row={ \caption{Two-dimensional spinglass graphs, $k\in\{3,4,5\}$}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} &(rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparseSpinglass3pm_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph/.style={string type, column name={graph} }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, column type=r, columns addvline/.list={3,5}, every row no 1/.style={before row={\midrule}}, every row no 3/.style={before row={\midrule}}, every head row/.style={ before row={ \caption{Three-dimensional spinglass graphs, $k\in\{3,4,5\}$}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparse5n_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph}, }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Large $G$ and $U$ graphs from~\cite{johnson1989optimization}, $k=4$} \label{tab:LargeGraph_GU_5n_k=4_mixcuts=1_tol=10-3}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparse5n_datafile2.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph}, }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Large $G$ and $U$ graphs from~\cite{johnson1989optimization}, $k=5$} \label{tab:LargeGraph_GU_5n_k=5_mixcuts=1_tol=10-3}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rouneded) &\multicolumn{1}{c|}{(\si{\second})} & (rouneded)& \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%}& & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{hagerBisection_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,m1,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalBQPTriCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph} }, columns/n/.style={int detect, column name={$n$}, }, columns/m1/.style={int detect, column name={$m_1$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalBQPTriCuts/.style={int detect, column name=\# BQP cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Instances considered in~\cite[Table~9]{Hager2013AnEA}, $m = (m_1,n-m_1)$}\label{tab:TableHager_finaltol=0.001_bisection} \\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{6}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})}& \multicolumn{1}{c}{\%} & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{spinglass2pmBisection_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,m1,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalBQPTriCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph} }, columns/n/.style={int detect, column name={$n$}, }, columns/m1/.style={int detect, column name={$m_1$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, 
columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalBQPTriCuts/.style={int detect, column name=\# BQP cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Large $G$ graphs from~\cite{johnson1989optimization} and two-dimensional spinglass graphs, $m = (m_1,n-m_1)$}\label{tab:Table_largeG_Spinglass2pm_bisection} \\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{6}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{largeGraphSparse3n_datafile1.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph_string,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph_string/.style={string type, column name={graph}, }, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb , }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., dec sep align, }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, columns addvline/.list={3,5}, column type=r, every head row/.style={ before row={ \caption{Large $G$ and $U$ graphs from~\cite{johnson1989optimization} (Maxineq=$3n$) } \label{tab:LargeGraph_GU_3n_mixcuts=1_tol=10-3}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} \pgfplotstableread[col sep=comma]{Hager_datafile3.tex}\data \pgfplotstabletypeset[ font=\footnotesize, multicolumn names, columns={graph,n,k,UB,round_DNN_lb,DNN_clocktime,round_cuts_lb,cuts_clocktime,imp_lb_round,TotalTriCuts,TotalCliqueCuts, iter, cut_add_occasions_total}, fixed zerofill, fixed, columns/graph/.style={verb string type, column name={graph}, }, skip rows between index={64}{67}, columns/n/.style={int detect, column name={$n$}, }, columns/k/.style={int detect, column name={$k$}, }, columns/UB/.style={int detect, column name={ub}, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_DNN_lb/.style={int detect, column name=lb, }, columns/DNN_clocktime/.style={precision=2, column name=clocktime, assign column name/.style={ /pgfplots/table/column name={\multicolumn{1}{c|}{##1}}} }, columns/round_cuts_lb/.style={int detect, column name=lb, }, columns/cuts_clocktime/.style={precision=2, column name=clocktime, }, columns/imp_lb_round/.style={preproc/expr={100*##1}, column name=imp., }, columns/TotalTriCuts/.style={int detect, column name=\# tri. cuts, }, columns/TotalCliqueCuts/.style={int detect, column name=\# ind. 
set cuts, }, columns/iter/.style={int detect, column name=\# iter., }, columns/cut_add_occasions_total/.style={int detect, column name=\# outer loops, }, columns addvline/.style={ every col no #1/.style={ column type/.add={}{|}} }, column type=r, columns addvline/.list={3,5}, every head row/.style={ before row={ \caption{Graphs considered in the paper of Hager, Phan and Zhang~\cite{Hager2013AnEA} (Maxineq=$3n$) }\label{tab:PlanarGraph_3n_mixcuts=1_tol=10-3}\\ \toprule & & & & \multicolumn{2}{c|}{DNN}& \multicolumn{7}{c}{DNN+cuts}\\ }, after row={ & & & & (rounded) & \multicolumn{1}{c|}{(\si{\second})} & (rounded) & \multicolumn{1}{c}{(\si{\second})} & \multicolumn{1}{c}{\%} & & & & \endfirsthead \midrule} }, every last row/.style={after row=\bottomrule} ]{\data} n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 500,2,49,24.8999946365116,25,266.625,45.8307645,44.3788001371166,45,4609.6875,1425.01329,0.782281513910154,0.8,24549,451,85,59,10,1444,4,0.014777867563098,G500.005, \(G_{500,2.5}\), 500,2,218,155.578188913135,156,133.03125,22.596964,196.610031556718,197,2144.078125,592.328893,0.263737757395367,0.262820512820513,24356,644,98,7,10,1188,4,0.011339431611836,G500.01, \(G_{500,5}\), 500,2,626,512.133506337669,513,94.234375,15.9009213,553.434944993851,554,567.765625,125.1503322,0.080645843603426,0.079922027290448,13069,713,58,4,8,678,2,0.016211681405011,G500.02, \(G_{500,10}\), 500,2,1744,1565.591086041,1566,86.75,14.7806008,1612.88758266649,1613,192.015625,42.9080801,0.030209993559105,0.030012771392082,9705,1076,18,5,6,520,3,0.013178998854968,G500.04, \(G_{500,20}\), 1000,2,102,44.2938767040396,45,2091.5,421.7976519,73.3326588074606,74,21443.890625,7206.8587673,0.655593600385235,0.644444444444444,44815,185,118,7,9,1731,5,2.63270815171919,G1000.0025, \(G_{1000,2.5}\), 1000,2,451,306.240421509122,307,1009,204.6536909,378.977854701301,379,6977.609375,2183.88006,0.237517414695736,0.234527687296417,49054,946,152,9,10,1822,4,1.85333436577108,G1000.005, \(G_{1000,5}\), 1000,2,1367,1112.75608620417,1113,742.9375,152.3545105,1178.94455268326,1179,1947.53125,523.8239187,0.059481558716854,0.059299191374663,25413,1272,46,5,7,1251,2,2.48296320694699,G1000.01, \(G_{1000,10}\), 1000,2,3389,3006.9639862136,3007,683.25,140.6909266,3078.70867766825,3079,1311.65625,350.83682,0.023859511382107,0.023944130362488,19195,1813,25,7,6,1078,2,2.01714740526152,G1000.02, \(G_{1000,20}\), 500,2,2,0.268496303387567,1,429.359375,74.4340328,1.459128417606,2,8257.21875,2528.18655,4.43444508991913,1,25000,0,98,0,10,3062,1,0.00981228859779,U500.05, \(U_{500,5}\), 500,2,26,7.19052052652705,8,228.125,39.5222584,22.4796727665861,23,2455.3125,708.106613,2.12629283007465,1.875,17824,7176,67,51,10,1742,4,0.007632283104375,U500.10, \(U_{500,10}\), 500,2,178,55.2543976988495,56,379.75,66.2750358,152.107251661071,153,3004.84375,838.7393735,1.75285331115351,1.73214285714286,14817,10183,66,50,10,2638,4,0.008116341101737,U500.20, \(U_{500,20}\), 500,2,412,162.799455136723,163,550.9375,97.4653066,378.901427004252,379,5467.296875,1562.3088117,1.32741213222146,1.32515337423313,17444,7556,120,74,10,3024,4,0.005710119061765,U500.40, \(U_{500,40}\), 1000,2,1,-0.370648806768082,0,3470.1875,725.581516,-0.162284216340162,0,21411.796875,7200.3132442,0.562161773147958,0,50000,0,81,0,10,6237,5,1.83344306914098,U1000.05, \(U_{1000,5}\), 
1000,2,39,7.58691904119982,8,3265.640625,674.9015202,23.8982739838007,24,8886.578125,3086.5555022,2.14993132970368,2,43836,6164,62,43,10,3357,4,1.29514306884821,U1000.10, \(U_{1000,10}\), 1000,2,222,50.5734086572461,51,1037.5625,213.6737431,135.032392820123,136,11479,3877.1344094,1.67002751851838,1.66666666666667,42215,7785,95,68,10,3016,4,1.08032041849132,U1000.20, \(U_{1000,20}\), 1000,2,737,239.384802540148,240,2724.609375,561.0779492,639.201849376332,640,14500.015625,4665.7659204,1.67018558652707,1.66666666666667,40190,9810,197,72,10,3199,4,0.928907459903531,U1000.40, \(U_{1000,40}\), n,m1,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalBQPTriCuts,numClustBQPTri,cut_add_occasions_total,iter,stopflag,graph,graph_string, 124,75,12,6.00642816304654,7,77.515625,11.2857066,9.99644990405172,10,32349.90625,7077.1141511,0.664291927364263,0.428571428571429,7399,145,21,29486,3,G124.02,\(G_{124,2.5}\), 124,87,49,41.2371079908197,42,97.3125,14.2263061,48.0661872016825,49,3741.984375,616.3535168,0.165605192594569,0.166666666666667,4054,90,13,15861,1,G124.04,\(G_{124,5}\), 124,81,159,140.609002763814,141,83.609375,12.2619936,149.956147199745,150,935.734375,154.1276256,0.066476144857042,0.063829787234043,2429,46,8,12633,3,G124.08,\(G_{124,10}\), 124,75,414,398.248633598769,399,87.796875,12.9318929,411.211761499398,412,2268.96875,373.6047025,0.032550338675335,0.032581453634085,2716,44,8,19067,2,G124.16,\(G_{124,20}\), 250,175,20,11.3882646396351,12,1382.921875,202.6349095,15.6684808984494,16,26338.515625,7200.5195386,0.375844467463256,0.333333333333333,15750,170,21,38639,2,G250.01,\(G_{250,2.5}\), 250,163,101,73.3205562828957,74,887.140625,129.1125872,89.3366023109624,90,8040.28125,1710.966304,0.218438686775253,0.216216216216216,9702,90,15,23906,2,G250.02,\(G_{250,5}\), 250,150,343,288.330175081409,289,560.078125,81.5147244,309.60193682261,310,2529.9375,441.6484811,0.073775704312584,0.072664359861592,5454,57,9,16145,3,G250.04,\(G_{250,10}\), 250,175,673,626.881446757844,627,648.375,94.4700855,640.737330431435,641,2821.40625,473.6166743,0.022102877258933,0.022328548644338,4416,50,6,17655,2,G250.08,\(G_{250,20}\), 138,90,6,1.4025974834764,2,146.6875,21.5133146,5.10612560009927,6,13521.75,2647.7289652,2.64047822718425,2,7452,138,17,24424,1,mesh.138.232,mesh.138.232, 148,89,9,3.16952953947826,4,248.734375,36.9647048,7.19596938332492,8,14423.40625,3074.6816306,1.270358832027,1,7104,151,16,31656,3,mesh.148.265,mesh.148.265, 274,192,10,1.43059505963921,2,1752.203125,259.5492886,4.81725003859408,5,26217.5625,7200.5225075,2.36730509876706,1.5,15618,184,19,43682,5,mesh.274.469,mesh.274.469, 70,46,6,3.01047001318663,4,28.953125,4.5362411,5.03572934069033,6,1620.703125,374.2029683,0.672738581893375,0.5,1470,59,6,12431,1,mesh.70.120,mesh.70.120, 74,45,7,4.1092711648911,5,26.3125,4.0171084,6.11423610566219,7,139.34375,28.0147167,0.487912542229182,0.4,1110,48,4,7695,1,mesh.74.129,mesh.74.129, 82,58,13,9.31634941774573,10,23.1875,3.5715143,12.1924663910204,13,2016.625,371.4944873,0.308717164235628,0.3,984,56,3,12164,1,KKT_lowt01,KKT\_lowt01, n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 
500,4,96,42.6876558538663,43,121.453125,19.9132499,69.67360411377,70,5007.03125,1491.2907212,0.632172175307198,0.627906976744186,24966,34,91,4,10,1419,4,0.018761424620692,G500.005, \(G_{500,2.5}\), 500,4,375,252.420406489872,253,60.609375,10.7848161,296.031131013927,297,3073.3125,867.973167,0.172770201627121,0.173913043478261,23646,23,180,2,10,1063,4,0.016072308262771,G500.01, \(G_{500,5}\), 500,4,1016,805.858848841064,806,41.046875,6.8117214,833.254051934914,834,279.125,70.8079464,0.033995039122854,0.034739454094293,8737,36,57,2,5,380,3,0.023394473258823,G500.02, \(G_{500,10}\), 500,4,2753,2417.37933963608,2418,33.65625,5.6084872,2433.33202150294,2434,92.734375,27.9326744,0.006599163650195,0.006617038875103,5232,44,19,1,4,254,3,0.019922971794042,G500.04, \(G_{500,20}\), 1000,4,200,73.5391131443667,74,866.78125,174.1145391,116.220389425989,117,21393.75,7204.3080057,0.580388781651928,0.581081081081081,49998,2,129,1,10,1519,5,0.847423054052297,G1000.0025, \(G_{1000,2.5}\), 1000,4,767,484.854031979177,485,406,81.8150069,569.686166826029,570,10987.75,3446.472681,0.174964276362861,0.175257731958763,50000,0,230,0,10,1340,4,0.626346186628485,G1000.005, \(G_{1000,5}\), 1000,4,2219,1731.69658444303,1732,261.9375,52.4609123,1773.06805636616,1774,1134.578125,295.2940082,0.023890716361514,0.024249422632795,16558,5,46,1,4,636,2,0.893895621521943,G1000.01, \(G_{1000,10}\), 1000,4,5422,4619.74083994534,4620,223.6875,45.2290173,4639.61108724888,4640,539.4375,132.933149,0.00430116060445,0.004329004329004,8229,12,25,1,3,414,2,0.764350812050851,G1000.02, \(G_{1000,20}\), 500,4,22,1.05156914897908,2,298.125,50.0211797,5.35013268820948,6,5396.046875,1596.1992031,4.08776117424487,2,23854,1146,66,25,10,2966,4,0.014264785281892,U500.05, \(U_{500,5}\), 500,4,115,24.0707782004054,25,268.65625,46.0211691,53.3345003189793,54,3947.46875,1208.5142455,1.21573643672563,1.16,24672,328,94,17,10,1560,4,0.012651533416153,U500.10, \(U_{500,10}\), 500,4,358,161.122543990459,162,408.828125,68.6722388,303.685626171525,304,5306.984375,1502.7325753,0.884811514579288,0.876543209876543,24228,772,127,11,10,2061,4,0.012350204855651,U500.20, \(U_{500,20}\), 500,4,1020,548.877957642629,549,560.71875,94.5553227,926.569934392695,927,5675.28125,1644.2687833,0.688116495645429,0.688524590163934,23946,1054,82,8,10,2546,4,0.01046645707278,U500.40, \(U_{500,40}\), 1000,4,11,0.16844183920266,1,3190.6875,649.013279,2.44609330288518,3,15007.3125,5195.1424953,13.5218866907656,2,50000,0,70,0,10,4565,4,0.622084237819619,U1000.05, \(U_{1000,5}\), 1000,4,182,22.1144250346585,23,1008.84375,218.2294309,50.8001335637699,51,8727.328125,2929.2376132,1.29714919036575,1.21739130434783,49820,180,107,11,10,2081,4,0.505453381364461,U1000.10, \(U_{1000,10}\), 1000,4,735,159.511158938809,160,2016.390625,433.6054624,361.072671740729,362,15114.796875,4847.5399016,1.26362013882203,1.2625,49447,553,68,8,10,2832,4,0.472246799785497,U1000.20, \(U_{1000,20}\), 1000,4,1596,670.640083004918,671,2561.375,534.8845554,1341.68013667791,1342,17488.875,5502.6131224,1.00059640137566,1,49100,900,106,7,10,3147,4,0.429883979375792,U1000.40, \(U_{1000,40}\), n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph 729,3,-769,-902.290043930694,-902,464.875,98.1200456,-821.29763623073,-821,18295.09375,6438.3545525,0.089763162349806,0.08980044345898,20349,12456,44,78,9,2177,2,0.009958700652403,spinglass3pm\_9 
512,4,-562,-643.692903205767,-643,169.40625,29.3125109,-590.246516793447,-590,8629.921875,2352.3418632,0.083030877218224,0.082426127527216,17633,287,48,17,7,2355,2,0.019200813095998,spinglass3pm\_8 1000,4,-1058,-1258.4608671532,-1258,1326.765625,265.9521073,-1152.2622856222,-1152,22255.859375,7202.4855742,0.08438767092634,0.084260731319555,24790,210,40,10,5,2523,5,0.845751147512099,spinglass3pm\_10 1000,5,-1068,-1263.77432917812,-1263,1079.484375,207.4967178,-1152.81563486836,-1152,23527.765625,7202.2619191,0.087799452598409,0.087885985748219,30000,0,41,0,6,2482,5,0.633068540455538,spinglass3pm\_10 n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph 20,2,6,2.67284608409418,3,0.125,0.1311543,5.9292229914497,6,1.171875,1.0921636,1.21831815409569,1.24479817064864,23,97,8,8,2,367,1,0.085988718406895,10x2g 30,2,19,11.0337154881111,12,0.375,0.0390259,18.2553026384787,19,8.90625,4.3352751,0.654501845561353,0.721994736992505,281,169,26,8,5,581,1,0.051117224634652,5x6g 32,2,8,2.95150392107908,3,0.84375,0.1048865,7.59506911103735,8,4.9375,1.9107654,1.5732878268583,1.71048259257442,193,191,12,10,4,341,1,0.077126377265991,2x16g 36,2,6,2.40619220696752,3,1.46875,0.1864142,5.27092404061025,6,3.09375,1.3509799,1.19056649977813,1.49356638369372,146,178,18,8,3,251,1,0.063159633389723,18x2g 38,2,6,1.3604409271326,2,1.21875,0.1583066,3.22256614238834,4,69.578125,29.4518185,1.36876594794935,1.94022321750552,692,993,24,18,16,1184,2,0.04359250428961,2x19g 40,2,18,11.2749463903653,12,0.625,0.0872578,17.1379356341833,18,1.796875,0.8046491,0.520001518484208,0.596459918903154,91,149,10,14,2,254,1,0.053792153496746,5x8g 42,2,10,3.60034402168058,4,0.875,0.1105669,9.16690935685816,10,13.296875,6.5553865,1.54612039895543,1.77751235431446,443,439,36,18,7,511,1,0.049680646363694,3x14g 50,2,22,11.156841404806,12,1.625,0.2046318,21.4999081263198,22,7.421875,3.3131126,0.927060477623922,0.971884263813513,460,290,21,13,5,479,1,0.041482727643,5x10g 60,2,28,13.4511158533319,14,0.84375,0.1517521,27.0525727567161,28,51.8125,17.602385,1.0111768459726,1.08161168971452,1117,1223,33,21,13,696,1,0.027105189093022,6x10g 70,2,23,10.912006976557,11,1.84375,0.2663967,22.3604446008209,23,52.8125,16.9632565,1.04915966868967,1.10776991340021,1081,1229,27,17,11,792,1,0.031402956224273,7x10g 20,2,28,24.4538929113356,25,0,0.0094051,27.0187856319213,28,0.203125,0.1457136,0.104886887739528,0.145011966050632,44,16,7,3,1,135,1,0.103623224684692,4x5t 24,2,9,4.42087143426146,5,0.109375,0.1121888,8.72848177957561,9,1.65625,1.3354971,0.97438037033388,1.03579772310287,134,82,8,6,3,373,1,0.102373846896697,12x2t 30,2,31,22.647197566313,23,0.25,0.023198,30.6803681615845,31,2.8125,1.0318173,0.354709255825126,0.368822783005679,93,87,19,8,2,202,1,0.056357650360659,6x5t 40,2,33,20.2710136041984,21,0.359375,0.0511059,32.0606774959231,33,3.296875,1.8134599,0.581602090646464,0.62794030157255,348,132,13,8,4,277,1,0.059385502105219,8x5t 46,2,9,1.97739334452204,2,2.421875,0.3130555,6.16254068175961,7,64.984375,17.0004204,2.11649712933021,2.54001393773882,1094,1083,35,16,18,634,3,0.050893298180796,23x2t 48,2,24,10.9881549864818,11,1.359375,0.1822733,23.6632275301234,24,11.125,4.4086229,1.15352145644425,1.18417013861982,539,325,17,12,6,446,1,0.040243355424343,4x12t 
50,2,33,16.5228931915164,17,0.625,0.0915555,32.351670778018,33,6.890625,3.8967932,0.957990674092647,0.997228912485114,513,237,15,13,5,348,1,0.040136938947935,5x10t 60,2,43,19.465582601454,20,0.53125,0.0835619,41.9970871828879,42,176.171875,126.5141345,1.15750476329185,1.15765440264104,1412,928,27,22,13,1537,2,0.018280015389665,10x6t 70,2,45,20.3035959247191,21,0.5625,0.0990192,44.2191455781885,45,22.421875,8.1274417,1.17789724254475,1.21635616502856,1120,350,24,14,7,505,1,0.029199655573273,7x10t 80,2,43,23.453124690994,24,0.921875,0.1482302,42.9916163467829,43,210.46875,62.0063968,0.833086930343729,0.833444394576216,2263,2150,43,31,19,890,1,0.019493728915133,10x8t 20,2,118,111.951416329426,112,0.03125,0.0052936,117.882216849025,118,0.140625,0.1383075,0.05297655638538,0.054028648041179,44,16,4,2,1,139,1,0.046102988978044,2x10m 30,2,270,258.142277957366,259,0.25,0.0326936,269.72473894688,270,1.53125,0.5973748,0.044868516235171,0.045934831506339,144,36,16,3,2,219,1,0.031485122843803,6x5m 34,2,316,296.148636436583,297,0.125,0.0280952,312.180479488128,313,28.90625,18.7615395,0.054134448310986,0.05690170910858,432,486,16,14,9,1203,2,0.023986073591854,2x17m 40,2,436,423.770216212679,424,0.125,0.0238187,435.137674282675,436,2.0625,0.9097578,0.026824580008452,0.028859469871718,156,84,9,6,2,200,1,0.022893632115746,10x4m 50,2,670,653.963565846674,654,0.375,0.0507522,669.999579557542,670,5.140625,1.9024189,0.024521264713129,0.024521907627322,346,104,28,6,3,258,1,0.013914863545566,5x10m 52,2,721,694.46112607877,695,0.375,0.0510391,720.542576108751,721,6.890625,4.4051324,0.037556385880443,0.038215060461455,459,477,18,16,6,349,1,0.013709740558404,4x13m 52,2,721,694.46112607877,695,0.3125,0.0510203,720.465661913168,721,8.1875,4.6662872,0.037445632099281,0.038215060461455,440,496,14,16,6,398,1,0.015157280975915,13x4m 54,2,792,761.808404846158,762,0.453125,0.068726,791.32275654331,792,4.390625,2.1326703,0.038742486312044,0.03963148077887,410,238,17,9,4,299,1,0.014405041462556,9x6m 60,2,954,931.768614444553,932,0.640625,0.0978824,953.999530543089,954,3.8125,1.6839438,0.023858837649075,0.023859341483293,460,80,10,4,3,330,1,0.010191728286828,10x6m 70,2,1288,1258.2582145509,1259,0.984375,0.1393563,1287.85225560726,1288,10.640625,3.364636,0.023519847288995,0.02363726706105,624,216,40,7,4,372,1,0.01026426811802,10x7m 20,2,401,392.008780021161,393,0.015625,0.0152589,400.723190231697,401,0.515625,0.4987321,0.022230140381207,0.022936271933383,80,40,8,5,2,164,1,0.030176787882504,v000 20,2,21,18.322201605002,19,0.015625,0.0199022,20.7002222923156,21,0.765625,0.7396305,0.129789025280914,0.146150471036568,57,63,14,9,2,190,1,0.083808060712381,v090 30,2,900,877.921003322586,878,0.125,0.0259051,899.326862272091,900,3.34375,1.6734806,0.024382443145217,0.025149183803388,284,166,10,8,5,289,1,0.022118575086574,t000 30,2,397,370.048476968143,371,0.375,0.0575638,396.38483568396,397,6.3125,3.4901874,0.071170023267206,0.072832411722579,343,197,14,7,6,449,1,0.025881875330817,t050 30,2,24,20.7049407079921,21,0.5,0.0611332,23.2507514766938,24,0.9375,0.3608227,0.122956679983103,0.159143623663506,65,25,16,7,1,154,1,0.060177572513007,t090 40,2,1606,1577.62891134949,1578,0.25,0.032122,1605.35263473613,1606,2.515625,0.9238068,0.017573032027491,0.017983372671741,329,151,9,7,4,247,1,0.012364094469135,q000 40,2,1425,1386.49734296736,1387,0.125,0.0276375,1423.45002807775,1424,9.703125,4.4008749,0.026651825405813,0.027048488208695,404,196,10,7,5,812,2,0.013793966001942,q010 
40,2,1238,1202.41873154927,1203,0.125,0.0238153,1237.55363916628,1238,8.421875,4.2541252,0.029220193178245,0.02959141230683,487,233,11,7,6,657,1,0.014362358281109,q020 40,2,1056,1015.96166328898,1016,0.125,0.0286181,1054.15837978474,1055,5.5,2.8267443,0.037596612033672,0.038425009645188,491,229,10,9,6,421,2,0.017475673275349,q030 40,2,199,168.600724647361,169,0.25,0.0356406,198.397247162767,199,9.15625,5.3148878,0.176728318206979,0.18030334932558,521,319,14,9,7,542,1,0.023530482484515,q080 40,2,63,50.6596396093662,51,0.75,0.1018958,62.3026394099793,63,3.8125,1.7929286,0.229827923972449,0.243593529006319,315,165,16,12,4,369,1,0.044079714141856,q090 50,2,1658,1592.51770272699,1593,0.375,0.0462944,1654.07744822054,1655,10.65625,4.8776922,0.038655611418408,0.039234915358251,725,325,13,8,7,529,2,0.012449063590604,c030 50,2,603,555.597247250563,556,0.5,0.0629193,599.287350054157,600,17.984375,7.1066283,0.078636283782541,0.07991895742675,817,315,15,8,9,694,2,0.018039302055415,c070 50,2,368,327.842568747166,328,0.4375,0.0847229,366.560218934566,367,38.71875,22.4801056,0.118098300459753,0.119439740246278,967,353,22,8,9,1124,2,0.0126353756493,c080 50,2,122,101.92751533877,102,0.625,0.0822683,121.013803411125,122,4.609375,1.4513728,0.187253540017326,0.196929009743016,421,179,14,7,4,348,1,0.033554898904334,c090 52,2,123,96.4090441842344,97,0.375,0.0575233,122.030492089961,123,7.84375,2.7065141,0.265757721410088,0.275813913941011,543,237,18,16,5,381,1,0.027223488872713,c290 54,2,160,134.979252197953,135,0.484375,0.071639,159.074347792995,160,6.734375,2.2761716,0.178509624276963,0.185367361239733,577,233,16,8,5,371,1,0.028716261435144,c490 56,2,177,149.244940172924,150,0.609375,0.10597,176.335299595836,177,7.953125,2.7943056,0.181516099584505,0.185969854622323,623,217,16,11,5,415,1,0.020341047856054,c690 58,2,227,185.338431627877,186,0.828125,0.1313311,222.963538698376,223,29.640625,13.6947466,0.203007583154922,0.203204311385024,1219,347,34,8,9,861,2,0.020903552417171,c890 60,2,238,200.546601592166,201,0.546875,0.0867663,237.318602269457,238,15.65625,5.9759416,0.183358882101981,0.186756584806157,886,374,19,10,7,539,1,0.017661521572038,s090 8,2,4,3.82235974827321,4,0,0.0035759,3.81050029456092,4,0.015625,0.0042158,0.003102652417174,0.046473975090137,0,0,0,0,0,69,1,0.169996442566161,db3 32,2,10,6.88423214944217,7,0.25,0.0177057,9.2578448987146,10,3.234375,1.4052376,0.344789759808545,0.452594825816602,194,94,9,6,3,349,1,0.062540758515761,db5 64,2,18,10.2250661365467,11,0.609375,0.1047288,17.0209214106277,18,15.5625,5.523415,0.664627023759883,0.760379811692751,1020,324,16,7,7,518,1,0.027969793510574,db6 128,2,30,15.1395595458436,16,2.9375,0.3963775,29.9993245456659,30,36.31,149.44,0.981518977142362,0.98156359233292,3650,958,41,11,12,910,2,0.005257474511962,db7 32,2,6,3.67623676967517,4,0.21875,0.0424353,5.13928765832103,6,1.484375,0.510789,0.397975152393444,0.632103799595627,138,54,10,5,2,163,1,0.072572693264283,m4 54,2,2,0.742544987556784,1,0.859375,0.1086551,1.1910397586013,2,0.765625,0.2616575,0.603996765933618,1.69343949998323,132,30,10,4,1,181,1,0.049143299817487,ma 60,2,3,1.20114793811338,2,0.90625,0.1429149,2.01141935848888,3,0.84375,0.4010052,0.674580869404128,1.49761074785845,90,90,7,6,1,171,1,0.044423837485729,me 70,2,7,2.91000531699042,3,1.203125,0.18982,6.02177500742118,7,6.03125,2.3871086,1.06933470954926,1.40549388660208,475,365,25,9,4,351,1,0.037892476255351,m6 
74,2,4,1.09640761042306,2,1.5625,0.227905,3.09706618050823,4,14.671875,5.1721484,1.82473976928452,2.64827821512162,626,706,15,17,6,446,1,0.033448988142347,mb 74,2,6,2.27539468805532,3,1.09375,0.1612585,5.12884210916483,6,5.328125,2.2162087,1.25404503934578,1.63690516265024,420,468,15,15,4,342,1,0.036898064247179,mc 80,2,4,1.10068499029115,2,1.625,0.2423111,3.0727039563779,4,14.6875,4.7930092,1.79162883429992,2.63410061487432,567,873,12,12,6,448,1,0.035348933165545,md 90,2,4,1.03304309285502,2,1.96875,0.3079641,3.08158800251849,4,15.078125,5.1463764,1.98301980220584,2.87205531663273,915,705,14,12,6,437,1,0.031403190841836,mf 100,2,4,0.886285566199276,1,5.203125,0.7774793,3.03748458253596,4,216.75,47.3883813,2.42720754842238,3.51321803327284,2461,2639,32,22,17,922,1,0.018730182702582,m1 148,2,9,2.6862060306022,3,16.59375,2.3271692,6.98909493653794,7,283.375,49.8457697,1.60184619381973,1.60590584648145,2639,3146,41,22,14,864,3,0.010834330218625,m8 32,2,6,3.05233404130947,4,0.125,0.0215716,5.12299385807708,6,4.359375,1.8385747,0.678385716878905,0.965708837498652,248,136,18,8,4,365,1,0.063212204611372,se5 64,2,9,4.71711068572707,5,0.953125,0.1424689,8.27652429691017,9,10.546875,4.8270481,0.754574960887201,0.907947597505397,688,464,13,10,6,539,1,0.039387587733945,se6 128,2,16,7.17773505630431,8,3.921875,0.5813273,15.0231258213131,16,122.640625,25.8324598,1.09301760283254,1.2291154346728,3626,598,23,10,11,857,1,0.016047344297096,se7 n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph 324,3,-256,-291.178601529681,-291,79.25,11.4042802,-270.397541918309,-270,3643.0625,1095.0991726,0.072734058816208,0.072164948453608,5274,7686,28,228,8,1797,2,0.008171313664088,spinglass2pm\_18 441,3,-352,-395.508367217053,-395,212.1875,34.6270265,-370.148108347892,-370,4791.34375,1204.8967205,0.06449513924709,0.063291139240506,3280,5540,21,118,4,2786,2,0.026643095479211,spinglass2pm\_21 576,3,-453,-512.168196967489,-512,361.71875,67.5348028,-474.761268817398,-474,6824.671875,1962.9361564,0.074522778246444,0.07421875,3662,7858,23,178,4,2178,2,0.025166115568664,spinglass2pm\_24 729,3,-580,-656.335813442165,-656,579.71875,114.6197084,-607.871903886472,-607,9897.625,3169.0449126,0.075168553096962,0.07469512195122,7289,7291,21,191,4,2587,2,0.021365845025206,spinglass2pm\_27 900,3,-723,-809.707870922529,-809,1103.375,219.8444316,-748.800132442712,-748,20625.359375,6713.4085852,0.076210042088665,0.07540173053152,7922,10078,30,147,4,3442,2,1.85964190051141,spinglass2pm\_30 324,4,-263,-293.618696178475,-293,64.359375,9.6263322,-271.481419431818,-271,1755.3125,297.8498605,0.077034250450885,0.075085324232082,6417,63,51,13,4,1003,2,0.010712137133537,spinglass2pm\_18 400,4,-326,-361.441105561873,-361,91.796875,14.4523586,-333.524483401064,-333,837.171875,164.2482503,0.078688077045527,0.077562326869806,7821,179,50,24,4,487,2,0.011960535007509,spinglass2pm\_20 484,4,-399,-441.131105886212,-441,178.15625,29.4700338,-408.067757238916,-408,1441.421875,408.025362,0.075104896127543,0.074829931972789,16686,254,125,20,7,535,2,0.045520319053496,spinglass2pm\_22 576,4,-465,-516.648211568463,-516,257.65625,47.8307068,-475.335636535165,-475,2510.703125,668.2482574,0.080612321180839,0.079457364341085,16769,511,80,26,6,740,2,0.03720117047107,spinglass2pm\_24 
676,4,-551,-611.352804788407,-611,282.78125,55.2733323,-562.777399546412,-562,3054.71875,897.1684478,0.080727207599037,0.080196399345336,19628,652,70,37,6,687,2,0.036634135416251,spinglass2pm\_26 784,4,-635,-709.926903782666,-709,491.890625,97.8833255,-656.1843818751,-656,10223.4375,3102.6320889,0.07596120599928,0.07475317348378,19350,250,49,55,5,1500,2,0.027967391566658,spinglass2pm\_28 1024,4,-823,-922.875353984384,-922,858.609375,170.9051593,-842.642117405745,-842,12901.484375,3978.2176919,0.087634103170289,0.086767895878525,15198,162,38,27,3,1572,2,1.84243002153364,spinglass2pm\_32 400,5,-327,-362.324505631084,-362,91.328125,14.3141306,-333.201448431096,-333,1200.296875,225.3089327,0.080934370089066,0.080110497237569,8000,0,41,0,4,695,2,0.011472741548971,spinglass2pm\_20 625,5,-507,-566.994211555681,-566,338.5625,64.7077875,-515.712370921972,-515,4680.3125,1221.1160643,0.091701485651895,0.090106007067138,12500,0,33,0,4,1150,2,0.043060285896647,spinglass2pm\_25 900,5,-736,-818.707920949541,-818,765.515625,151.5312092,-751.934110989312,-751,7011.875,2110.8321028,0.082700947696968,0.081907090464548,22500,0,57,0,5,1116,2,1.28164037422537,spinglass2pm\_30 n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 45,3,1408,1260.925072706,1261,2.4375,0.0395015,1407.65226165996,1408,82.328125,60.7316745,0.116364716770265,0.116574147501983,372,438,13,48,6,1426,1,0.0240627545932,cb.45.98,cb.45.98, 69,3,10,3.60774014739462,4,1.203125,0.0418025,8.59740316941194,9,1631.703125,1093.3881143,1.38304390509407,1.25,1375,4421,21,109,28,1266,2,0.023513454141646,Mesh.69.212,mesh.69.212, 138,3,13,1.48106029608232,2,1.9375,0.2962077,11.353413702298,12,3642.34375,1019.3264416,6.665733618226,5,3862,8558,28,108,30,1877,4,0.012796939168501,mesh.138.232 ,mesh.138.232, 124,4,23,12.755552970779,13,1.203125,0.181248,20.2527124193492,21,974.875,215.180769,0.587756521865029,0.615384615384615,5124,456,53,14,15,1741,2,0.010713864398436,G124.02, \(G_{124,2.5}\), 124,4,104,79.1993802471889,80,0.65625,0.0919775,93.600885306963,94,251.8125,53.8556599,0.181838608014679,0.175,3160,188,64,7,9,1003,2,0.010065393087738,G124.04, \(G_{124,5}\), 124,4,290,250.082497435608,251,0.609375,0.0851899,260.760270175558,261,25.90625,5.5379865,0.04269700138731,0.039840637450199,1519,130,21,6,6,325,2,0.013978696815355,G124.08, \(G_{124,10}\), 124,4,720,659.717087146369,660,0.40625,0.0659026,669.297658240917,670,11.96875,3.0386311,0.014522241853685,0.015151515151515,949,221,15,7,4,234,2,0.012121793719145,G124.16, \(G_{124,20}\), 128,4,30,14.896245442461,15,1.984375,0.2703313,26.3925422893492,27,1064.984375,235.3620665,0.771758017232892,0.8,4974,402,32,11,14,2047,2,0.007349175551335,se7,se7, 148,4,22,8.13242024394805,9,1.453125,0.2041026,21.2315806588939,22,2309.265625,101.94,1.61073333915496,1.44444444444444,3382,458,51,12,10,1197,2,0.009396227830062,mesh.148.265,mesh.148.265, 70,5,20,11.2084123567966,12,0.1875,0.0365033,17.6307221964897,18,126.609375,97.035168,0.572990146619536,0.5,1401,279,26,16,8,1570,2,0.01992801972218,mesh.70.120 ,mesh.70.120, 115,5,104,81.7640562910578,82,4.34375,0.6181537,100.066804263274,101,989.25,208.2342546,0.223848336328415,0.231707317073171,2255,310,16,19,19,2193,2,0.007790624415977,KKT_putt01,KKT.putt01, 
250,5,58,26.2292122746384,27,4.640625,0.6659508,49.2069810577526,50,3771.796875,839.0372587,0.87603731833502,0.851851851851852,12682,68,58,12,17,2047,2,0.005143941462304,G250.01, \(G_{250,2.5}\), 250,5,209,146.808076998499,147,2.78125,0.3737951,171.630538986793,172,525.859375,97.9800455,0.16908103760904,0.170068027210884,6747,3,111,1,9,726,2,0.006928743184008,G250.02, \(G_{250,5}\), 250,5,629,522.301143965622,523,2,0.2791562,539.671115787788,540,55.609375,11.7411799,0.033256622205119,0.032504780114723,2605,5,33,2,5,257,3,0.00969323518559,G250.04, \(G_{250,10}\), 250,5,1418,1262.60665615399,1263,1.953125,0.2524076,1276.16963629761,1277,17.984375,4.8958988,0.010742047079757,0.011084718923199,1545,8,12,1,3,176,3,0.008315929654591,G250.08, \(G_{250,25}\), 138,6,24,8.20108672001771,9,0.734375,0.1186149,20.6493432385146,21,1618.59375,332.9038984,1.51787890354975,1.33333333333333,5974,650,31,16,16,1605,2,0.012560300273065,mesh.138.232,mesh.138.232, n,m1,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalBQPTriCuts,numClustBQP,cut_add_occasions_total,iter,stopflag,graph,graph_string, 324,195,-222,-245.449421593437,-245,1707.3125,252.0151955,-227.492522413111,-227,27086.84375,7200.074992,0.073159264600224,0.073469387755102,12960,127,8,35081,5,spinglass2pm\_18,spinglass2pm\_18, 400,280,-258,-267.923988831801,-267,4889.78125,735.5972417,-262.569332388747,-262,31207.78125,7200.3941737,0.019985729782544,0.0187265917603,14000,218,7,47002,2,spinglass2pm\_20,spinglass2pm\_20, 441,287,-294,-318.319164273528,-318,4341.46875,667.6976549,-304.395181141394,-304,30236.5625,7200.2525878,0.043742208119677,0.044025157232704,15435,157,7,41725,5,spinglass2pm\_21,spinglass2pm\_21, 484,291,-336,-369.955486084447,-369,5483.625,831.9145873,-347.038390103933,-347,31932.65625,7200.6400448,0.061945549782394,0.059620596205962,24200,146,10,34795,5,spinglass2pm\_22,spinglass2pm\_22, 500,350,173,126.446926310574,127,5250.546875,804.2226209,148.685401834865,149,30331.296875,7200.6297621,0.175872013445946,0.173228346456693,20000,143,10,32734,5,G500.01,\(G_{500,5}\), 500,325,563,462.159519658154,463,3438.828125,518.0066466,489.032608843123,490,15208.46875,3101.6578603,0.058146782749052,0.058315334773218,11361,67,8,22746,2,G500.02,\(G_{500,10}\), 500,300,1682,1496.26372329061,1497,2841.40625,426.1252629,1533.13991563716,1534,12208.15625,2405.5408881,0.024645516544002,0.024716098864396,9646,60,6,21665,2,G500.04,\(G_{500,20}\), 1000,650,387,259.204673099794,260,31575.375,5048.1200272,270.527590050291,271,42869.125,7200.384752,0.043683305609761,0.042307692307692,8000,44,2,26970,5,G1000.005,\(G_{1000,5}\), 1000,600,1304,1047.76028757542,1048,21267.375,3400.2076212,1073.93102439008,1074,40751.265625,7200.4867623,0.024977790363885,0.024809160305344,16000,52,4,22496,5,G1000.01,\(G_{1000,10}\), 1000,700,2844,2502.91563892708,2503,30919.890625,4952.7372983,2513.2697661136,2514,42535.34375,7200.6638274,0.004136826277917,0.004394726328406,12000,70,3,26158,2,G1000.02,\(G_{1000,20}\), n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 124,2,13,7.28532052225482,8,9,1.2521323,12.0128311074534,13,160.84375,31.5573908,0.648909072807054,0.625,3298,1166,41,18,12,848,1,0.018322034333498,G124.02,\(G_{124,2.5}\), 
124,2,63,46.7102970636413,47,4.703125,0.6530695,61.338334386335,62,341.140625,72.2986922,0.313165152916143,0.319148936170213,4354,482,41,9,13,1325,2,0.011571402817698,G124.04,\(G_{124,5}\), 124,2,178,152.857252115462,153,2.84375,0.4007673,170.877609333514,171,48.921875,9.8563522,0.11789010314303,0.117647058823529,2663,432,19,6,9,570,2,0.015559870167661,G124.08,\(G_{124,10}\), 124,2,449,418.672727417812,419,2.6875,0.3876029,439.95858648967,440,34.0625,6.6688111,0.050841284081578,0.050119331742244,2266,509,13,7,8,485,2,0.011687184131925,G124.16,\(G_{124,20}\), 250,2,29,15.1588063084243,16,35.171875,4.8282195,28.3040857716572,29,4019.859375,968.0517114,0.867171147633676,0.8125,15865,2135,87,23,24,2068,1,0.005257466832045,G250.01, \(G_{250,2.5}\), 250,2,114,81.5222883170643,82,18.921875,2.6655993,104.999755400908,105,874.734375,174.1071799,0.28798832280729,0.280487804878049,10063,374,92,5,14,1184,2,0.008698751175913,G250.02,\(G_{250,5}\), 250,2,357,303.016218855567,304,15.09375,2.102401,330.402579084339,331,164.03125,27.7361504,0.090379189378724,0.088815789473684,5352,415,42,5,9,575,2,0.011940455580504,G250.04,\(G_{250,10}\), 250,2,828,746.265550639492,747,12.734375,1.7741714,780.743686114921,781,54.390625,10.594845,0.046200893831806,0.045515394912985,4516,617,13,5,7,484,2,0.010102480305792,G250.08,\(G_{250,20}\), 70,2,7,2.91000531699043,3,1.25,0.1934524,6.01817711868984,7,6.921875,2.4224597,1.06809832392813,1.33333333333333,474,366,25,9,4,351,1,0.037905525988941,mesh.70.120 ,mesh.70.120 , 74,2,8,3.55540264499326,4,1.453125,0.1955245,7.14849283239675,8,10.53125,3.7442161,1.01060007717081,1,575,313,20,12,4,518,1,0.024743678047277,mesh.74.129 ,mesh.74.129, 138,2,8,1.54350478454276,2,10.25,1.4192829,7.10614672053748,8,618.53125,124.3833485,3.60390326722736,3,5298,3810,29,19,22,1135,1,0.017955827117338,mesh.138.232 ,mesh.138.232, 148,2,7,2.68620603060218,3,16.0625,2.250933,6.15932747322803,7,36.0625,6.1472518,1.29294678183984,1.33333333333333,1064,1156,27,16,5,481,1,0.019345394243569,mesh.148.265 ,mesh.148.265, 274,2,7,1.14442361066238,2,96.796875,13.6903877,6.09040916433291,7,3815.75,1000.6020062,4.32181362529549,2.5,14038,7334,38,33,26,1926,1,0.005738000467411,mesh.274.469 ,mesh.274.469, 82,2,13,4.88131927305449,5,1.46875,0.2155911,12.4266512501266,13,20.4375,6.2271306,1.54575670121053,1.6,515,715,19,16,5,357,1,0.019983245569072,KKT_lowt01,KKT.lowt01, n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 500,2,49,24.8999946365116,25,266.625,45.8307645,40.8254010692756,41,4089.671875,1094.0907784,0.639574693297809,0.64,14593,407,73,59,10,1560,4,0.018891198031821,G500.005,\(G_{500,2.5}\), 500,2,218,155.578188913135,156,133.03125,22.596964,195.655814023588,196,904.40625,222.6049035,0.257604394230542,0.256410256410256,14597,403,38,7,10,1116,4,0.012867692662404,G500.01, \(G_{500,5}\), 500,2,626,512.133506337669,513,94.234375,15.9009213,553.547033930157,554,413.796875,91.7493828,0.080864710236675,0.079922027290448,10989,464,47,4,9,743,2,0.016219556710487,G500.02, \(G_{500,10}\), 500,2,1744,1565.591086041,1566,86.75,14.7806008,1612.85929173751,1613,161.90625,35.0262408,0.030191923113231,0.030012771392082,8075,743,18,5,7,547,3,0.013176409604152,G500.04, \(G_{500,20}\), 
1000,2,102,44.2938767040396,45,2091.5,421.7976519,71.2339344732021,72,22271.984375,7206.5005709,0.608211783971159,0.6,29796,204,118,95,10,1979,5,2.93550439838905,G1000.0025, \(G_{1000,2.5}\), 1000,2,451,306.240421509122,307,1009,204.6536909,377.486827140908,378,4701.4375,1400.9069684,0.232648600993595,0.231270358306189,29432,568,54,8,10,1975,4,2.07464583408249,G1000.005, \(G_{1000,5}\), 1000,2,1367,1112.75608620417,1113,742.9375,152.3545105,1178.93522377057,1179,1881.546875,495.8046354,0.059473175107179,0.059299191374663,21285,799,41,4,9,1329,2,2.48280248134331,G1000.01, \(G_{1000,10}\), 1000,2,3389,3006.9639862136,3007,683.25,140.6909266,3078.42218408485,3079,1383.421875,374.942909,0.023764234689499,0.023944130362488,15471,1242,25,6,6,1208,2,2.02000776204759,G1000.02, \(G_{1000,20}\), 500,2,2,0.268496303387567,1,429.359375,74.4340328,1.30949355484287,2,4836.6875,1391.3841535,3.87713811445907,1,15000,0,45,0,10,3136,1,0.010312142302437,U500.05, \(U_{500,5}\), 500,2,26,7.19052052652705,8,228.125,39.5222584,18.5932309870202,19,1408.28125,393.37937,1.58579763710104,1.375,9921,5079,59,41,10,1770,4,0.007891370063054,U500.10, \(U_{500,10}\), 500,2,178,55.2543976988495,56,379.75,66.2750358,134.607460122003,135,2040.015625,565.4107938,1.43614021196372,1.41071428571429,10439,4561,60,39,10,2782,4,0.008989963481828,U500.20, \(U_{500,20}\), 500,2,412,162.799455136723,163,550.9375,97.4653066,338.378649393828,339,3484.4375,1004.6492281,1.07849988877204,1.07975460122699,8739,6261,66,71,10,3326,4,0.006856359327158,U500.40, \(U_{500,40}\), 1000,2,1,-0.370648806768082,0,3470.1875,725.581516,-0.2627636719341,0,13898.921875,4498.2202233,0.291071043165361,0,30000,0,58,0,10,6792,4,1.94344304155328,U1000.05, \(U_{1000,5}\), 1000,2,39,7.58691904119982,8,3265.640625,674.9015202,21.4706793587232,22,7291.171875,2284.0963077,1.82996025687494,1.75,25014,4986,63,46,10,3583,4,1.31098518555584,U1000.10, \(U_{1000,10}\), 1000,2,222,50.5734086572461,51,1037.5625,213.6737431,132.297883177871,133,5928.359375,1840.4606781,1.61595741102801,1.6078431372549,21093,8907,67,72,10,2827,4,1.02976016296414,U1000.20, \(U_{1000,20}\), 1000,2,737,239.384802540148,240,2724.609375,561.0779492,570.11914703267,571,11695.96875,3685.7005806,1.38160125865573,1.37916666666667,21429,8571,206,103,10,3310,4,1.00908562262322,U1000.40, \(U_{1000,40}\), n,k,UB,DNN_lb,round_DNN_lb,DNN_cputime,DNN_clocktime,cuts_lb,round_cuts_lb,cuts_cputime,cuts_clocktime,imp_lb,imp_lb_round,TotalTriCuts,TotalCliqueCuts,numClustTri,numClustCliq,cut_add_occasions_total,iter,stopflag,sigma,graph,graph_string, 500,5,108,48.0187194230034,49,101.828125,17.4713512,77.2031464551006,78,4709.203125,1366.4696294,0.607771872777523,0.591836734693878,24973,27,85,11,10,1416,4,0.02089969834316,G500.005, \(G_{500,2.5}\), 500,5,403,278.160552752331,279,51.046875,8.7709247,318.848720946983,319,2331.265625,625.0907441,0.146275838870942,0.14336917562724,19998,2,234,1,8,924,2,0.021164395432579,G500.01, \(G_{500,5}\), 500,5,1113,879.394660407565,880,32.796875,5.5605928,898.251491096982,899,222.21875,57.7423299,0.021442967007188,0.021590909090909,6387,0,59,0,4,304,3,0.030899332299748,G500.02, \(G_{500,10}\), 500,5,2987,2617.06078169391,2618,26.5,4.4731032,2623.49612295622,2624,68.984375,22.9317584,0.002458995720437,0.002291825821238,2868,1,18,1,3,207,3,0.026219845812912,G500.04, \(G_{500,20}\), 1000,5,228,82.105152438675,83,753.78125,157.9229249,132.558711993473,133,21844.203125,7207.6395598,0.614499310411513,0.602409638554217,50000,0,123,0,10,1514,5,0.57222499452949,G1000.0025, 
\(G_{1000,2.5}\), 1000,5,854,531.176113596579,532,331.703125,71.1344118,611.66684398689,612,9673.203125,2758.9802691,0.151533038346379,0.150375939849624,45000,0,262,0,9,1140,2,0.514698724265668,G1000.005, \(G_{1000,5}\), 1000,5,2449,1880.46382990491,1881,206.015625,40.9094262,1907.65069482967,1908,831.265625,205.347577,0.014457531430496,0.014354066985646,11499,0,43,0,3,510,2,0.757529271654812,G1000.01, \(G_{1000,10}\), 1000,5,5904,4989.72222565542,4990,171.03125,35.0489147,4996.34731659005,4997,309.53125,66.8661241,0.001327747444651,0.001402805611222,4215,0,31,0,2,285,3,0.651347328508832,G1000.02, \(G_{1000,20}\), 500,5,21,2.09883985525338,3,203.96875,34.3601209,7.84934885316621,8,4309.703125,1391.0097217,2.7398512485454,1.66666666666667,24772,228,71,7,10,1770,4,0.01910761777319,U500.05, \(U_{500,5}\), 500,5,98,34.6255438531403,35,282.03125,48.8784312,71.3475010661459,72,4470.28125,1310.7587054,1.06054528323821,1.05714285714286,24744,256,130,5,10,1812,4,0.016730711221404,U500.10, \(U_{500,10}\), 500,5,439,215.48643151787,216,395.796875,67.6081903,372.230561988345,373,4458.796875,1302.05874,0.727396752391236,0.726851851851852,24638,362,89,9,10,1911,4,0.015829238576651,U500.20, \(U_{500,20}\), 500,5,1230,770.710382371067,771,500.296875,85.3656381,1164.09274224706,1165,6419.46875,1807.1768759,0.510415285526275,0.511024643320363,24090,910,177,9,10,2499,4,0.011659874830473,U500.40, \(U_{500,40}\), 1000,5,29,0.284714317141351,1,2565.890625,529.2836963,3.02809747972145,4,15277.71875,5104.2754681,9.63556448486608,3,50000,0,57,0,10,3908,4,0.472145604913791,U1000.05, \(U_{1000,5}\), 1000,5,220,33.6943972068554,34,847.84375,175.3289423,75.9172935373199,76,9689.453125,3285.8318054,1.25311327195591,1.23529411764706,49879,121,85,5,10,2058,4,0.431918005099387,U1000.10, \(U_{1000,10}\), 1000,5,716,215.261715172515,216,1994.140625,411.0729014,468.694085573128,469,14944.390625,4581.5154252,1.17732208069376,1.1712962962963,49336,664,81,7,10,2706,4,0.415694089206987,U1000.20, \(U_{1000,20}\), 1000,5,1836,886.454545399997,887,2427.984375,497.6850373,1608.82308624678,1609,16748.265625,4864.6187702,0.814896313178502,0.813979706877114,49010,990,146,8,10,2625,4,0.308837723006811,U1000.40, \(U_{1000,40}\),
2205.06715v2
http://arxiv.org/abs/2205.06715v2
Binomial ideals attached to finite collections of cells
\documentclass[12pt]{amsart} \usepackage{amscd,amsmath,amsthm,amssymb} \usepackage{color} \usepackage{stmaryrd} \usepackage{tikz} \def\NZQ{\mathbb} \def\NN{{\NZQ N}} \def\QQ{{\NZQ Q}} \def\ZZ{{\NZQ Z}} \def\RR{{\NZQ R}} \def\CC{{\NZQ C}} \def\AA{{\NZQ A}} \def\PP{{\NZQ P}} \def\DD{{\NZQ D}} \def\FF{{\NZQ F}} \def\GG{{\NZQ G}} \def\HH{{\NZQ H}} \def\EE{{\NZQ E}} \def\KK{{\NZQ K}} \def\MM{{\NZQ M}} \def\frk{\mathfrak} \def\aa{{\frk a}} \def\pp{{\frk p}} \def\Pp{{\frk P}} \def\qq{{\frk q}} \def\Qq{{\frk Q}} \def\mm{{\frk m}} \def\Mm{{\frk M}} \def\nn{{\frk n}} \def\MI{{\mathcal I}} \def\MJ{{\mathcal J}} \def\ML{{\mathcal L}} \def\MR{{\mathcal R}} \def\MG{{\mathcal G}} \def\Ic{{\mathcal I}} \def\Jc{{\mathcal J}} \def\Lc{{\mathcal L}} \def\G{{\mathcal G}} \def\F{{\mathcal F}} \def\C{{\mathcal C}} \def\D{{\mathcal D}} \def\Bc{{\mathcal B}} \def\P{{\mathcal P}} \def\Pc{{\mathcal P}} \def\Mc{{\mathcal M}} \def\Ac{{\mathcal A}} \def\Oc{{\mathcal O}} \def\Tc{{\mathcal T}} \def\Cc{{\mathcal C}} \def\ab{{\mathbf a}} \def\bb{{\mathbf b}} \def\xb{{\mathbf x}} \def\yb{{\mathbf y}} \def\zb{{\mathbf z}} \def\cb{{\mathbf c}} \def\db{{\mathbf d}} \def\eb{{\mathbf e}} \def\vb{{\mathbf v}} \def\wb{{\mathbf w}} \def\eb{{\mathbf e}} \def\opn#1#2{\def#1{\operatorname{#2}}} \opn\chara{char} \opn\length{\ell} \opn\pd{pd} \opn\rk{rk} \opn\projdim{proj\,dim} \opn\injdim{inj\,dim} \opn\rank{rank} \opn\depth{depth} \opn\grade{grade} \opn\height{height} \opn\embdim{emb\,dim} \opn\codim{codim} \def\OO{{\mathcal O}} \opn\Tr{Tr} \opn\bigrank{big\,rank} \opn\superheight{superheight}\opn\lcm{lcm} \opn\trdeg{tr\,deg} \opn\reg{reg} \opn\lreg{lreg} \opn\ini{in} \opn\lpd{lpd} \opn\size{size} \opn\sdepth{sdepth} \opn\link{link}\opn\fdepth{fdepth}\opn\lex{lex} \opn\tr{tr} \opn\type{type} \opn\gap{gap} \opn\diam{diam} \opn\Mod{Mod} \opn\Jac{Jac} \opn\bigheight{bigheight} \opn\div{div} \opn\Div{Div} \opn\cl{cl} \opn\Cl{Cl} \opn\Spec{Spec} \opn\Supp{Supp} \opn\supp{supp} \opn\Sing{Sing} \opn\Ass{Ass} \opn\Min{Min}\opn\Mon{Mon} \opn\Ann{Ann} \opn\Rad{Rad} \opn\Soc{Soc} \opn\Im{Im} \opn\Ker{Ker} \opn\Coker{Coker} \opn\Am{Am} \opn\Hom{Hom} \opn\Tor{Tor} \opn\Ext{Ext} \opn\End{End} \opn\Aut{Aut} \opn\id{id} \def\Frob{{\mathcal F}} \opn\nat{nat} \opn\pff{pf} \opn\Pf{Pf} \opn\GL{GL} \opn\SL{SL} \opn\mod{mod} \opn\ord{ord} \opn\Gin{Gin} \opn\Hilb{Hilb}\opn\sort{sort} \opn\PF{PF}\opn\Ap{Ap} \opn\dist{dist} \opn\aff{aff} \opn\relint{relint} \opn\st{st} \opn\lk{lk} \opn\cn{cn} \opn\core{core} \opn\vol{vol} \opn\inp{inp} \opn\nilpot{nilpot} \opn\link{link} \opn\star{star}\opn\lex{lex}\opn\set{set} \opn\width{wd} \opn\Fr{F} \opn\QF{QF} \opn\G{G} \opn\type{type}\opn\res{res} \opn\conv{conv} \opn\gr{gr} \def\Rees{{\mathcal R}} \def\poly#1#2#3{#1[#2_1,\dots,#2_{#3}]} \def\pot#1#2{#1[\kern-0.28ex[#2]\kern-0.28ex]} \def\Pot#1#2#3{\pot{#1}{#2_1,\dots,#2_{#3}}} \def\konv#1#2{#1\langle #2\rangle} \def\Konv#1#2#3{\konv{#1}{#2_1,\dots,#2_{#3}}} \opn\dirlim{\underrightarrow{\lim}} \opn\inivlim{\underleftarrow{\lim}} \let\union=\cup \let\sect=\cap \let\dirsum=\oplus \let\tensor=\otimes \let\iso=\cong \let\Union=\bigcup \let\Sect=\bigcap \let\Dirsum=\bigoplus \let\Tensor=\bigotimes \def\Cc{{\mathcal C}} \def\ab{{\mathbf a}} \def\bb{{\mathbf b}} \def\xb{{\mathbf x}} \def\yb{{\mathbf y}} \def\zb{{\mathbf z}} \def\cb{{\mathbf c}} \def\db{{\mathbf d}} \def\eb{{\mathbf e}} \def\mb{{\mathbf m}} \let\to=\rightarrow \let\To=\longrightarrow \def\Implies{\ifmmode\Longrightarrow \else } \def\implies{\ifmmode\Rightarrow \else } \def\iff{\ifmmode\Longleftrightarrow \else 
} \let\gets=\leftarrow \let\Gets=\longleftarrow \let\followsfrom=\Leftarrow \let\Followsfrom=\Longleftarrow \let\:=\colon \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Remarks}[Theorem]{Remarks} \newtheorem{Example}[Theorem]{Example} \newtheorem{Examples}[Theorem]{Examples} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Problem}[Theorem]{} \newtheorem{Conjecture}[Theorem]{Conjecture} \newtheorem{Question}[Theorem]{Question} \newtheorem{Rules}[Theorem]{Rules} \let\epsilon\varepsilon \let\kappa=\varkappa \textwidth=15cm \textheight=22cm \topmargin=0.5cm \oddsidemargin=0.5cm \evensidemargin=0.5cm \pagestyle{plain} } nalhyphendemerits=0} \opn\dis{dis} \def\pnt{{\raise0.5mm\hbox{\large\bf.}}} \def\lpnt{{\hbox{\large\bf.}}} \opn\Lex{Lex} \def\Coh#1#2{H_{\mm}^{#1}(#2)} \def\hchst#1{for all $u\in G(I)$ there exists $i\notin {#1}$ such that $\nu_i(u) > a_i \geq 0$} \begin{document} \title {Binomial ideals attached to finite collections of cells} \author {J\"urgen Herzog} \address{J\"urgen Herzog, Fachbereich Mathematik, Universit\"at Duisburg-Essen, Campus Essen, 45117 Essen, Germany} \email{[email protected]} \author {Takayuki Hibi} \address{Takayuki Hibi, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan} \email{[email protected]} \author {Somayeh Moradi} \address{Somayeh Moradi, Department of Mathematics, School of Science, Ilam University, P.O.Box 69315-516, Ilam, Iran} \email{[email protected]} \thanks{The second author was partially supported by JSPS KAKENHI 19H00637. The third author is supported by the Alexander von Humboldt Foundation} \begin{abstract} We consider the ideal of inner $2$-minors $I_\Pc$ of a finite set of cells $\Pc$, which we call the cell ideal of $\Pc$. A nice interpretation for the height of an unmixed ideal $I_\Pc$, in terms of the number of cells of $\Pc$ is given. Moreover, the coordinate rings of cell ideals with isolated singularities are determined. \end{abstract} \subjclass[2010]{13F20, 05E40.} \keywords{cell ideal, height, isolated singularity} \maketitle \setcounter{tocdepth}{1} \section*{Introduction} Combinatorial descriptions of height of polyomino ideals have been studied in several works. Qureshi~\cite{Q} proved that for a convex polyominoe $\Pc$ the height of the polyomino ideal $I_\Pc$ is the number of cells of $\Pc$. Herzog and Madani [14] extended this result to simple polyominoes, which by definition are the polyominoes with no holes, see \cite{HM} and \cite{ASS}. Such polyomino ideals are in particular prime. However, not all polyomino ideals are prime ideals and it is still an open problem to identify the polyominoes $\Pc$ for which $I_\Pc$ is a prime ideal. In \cite{DN} the same description for height in terms of the number of cells of $\Pc$ was proved for closed path polyominoes. In this paper we consider more generally cell ideals, i.e., ideals of inner $2$-minors which are attached to finite collections of cells. When any two cells of $\Pc$ are connected in $\Pc$, this ideal is just the polyomino ideal. In Theorem~\ref{heightprime} it is shown that $\height I_\Pc\leq c\leq \bigheight I_\Pc$, where $c$ is the number of cells of $\Pc$. In particular, if $I_\Pc$ is an unmixed ideal, then $\height I_\Pc= c$. 
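As a minimal illustration of these statements: if $\Pc$ consists of a single cell, then $I_\Pc$ is a principal ideal, generated by the single inner $2$-minor of that cell; this binomial is irreducible, so $I_\Pc$ is a prime (hence unmixed) ideal and $\height I_\Pc=1=c$.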
To this aim we use Lemma~\ref{linearalgebra} which determines the height of an unmixed binomial ideal $I\subset S$ in terms of the dimension of the $\QQ$-vector space spanned by the set of integer vectors $\{\vb-\wb\in \QQ^n\:\; \xb^\vb-\xb^\wb\in I\}$. In the next section of this paper it is shown that when $K$ is a perfect field, and $\Pc$ is a finite set of cells such that $I_\Pc\subset S$ is a prime ideal, then the ring $S/I_\Pc$ has an isolated singularity if and only if $\Pc$ is an inner interval. \section{On the height of cell ideals} Consider $(\ZZ^2,\leq )$ as a partially ordered set with $(i,j)\leq (i',j')$ if $i\leq i'$ and $j\leq j'$. Let $\ab,\bb\in \ZZ^2$. Then the set $[\ab,\bb]=\{\cb\in \ZZ^2: \ab\leq \cb\leq \bb\}$ is called an \textit{interval}. The interval with $\ab =(i,j)$ and $\bb = (i',j')$ is called \textit{proper}, if $i<i'$ and $j<j'$. A \textit{cell} is an interval of the form $[\ab,\bb]$, where $\bb=\ab+(1,1)$. The cell $C=[\ab,\ab+(1,1)]$ consists of the elements $\ab, \ab+(0,1),\ab+(1,0)$ and $\ab+(1,1)$, which are called the \emph{vertices} of $C$. We denote the set of vertices of $C$ by $V(C)$. The intervals $[\ab,\ab+(1,0)]$, $[\ab+(1,0),\ab+(1,1)]$, $[\ab+(0,1),\ab+(1,1)]$ and $[\ab,\ab+(0,1)]$ are called the \emph{edges} of $C$, denoted $E(C)$. Each edge consists of two elements, called the \emph{corners of the edge}. Let $[\ab, \bb]$ be a proper interval in $\ZZ^2$. A cell $C = [\ab', \bb']$ of $\ZZ^2$ is called a cell of $[\ab, \bb]$ if $\ab \leq \ab'$ and $\bb' \leq \bb$. Let $\Pc$ be a finite collection of cells of $\ZZ^2$. The vertex set of $\Pc$ is $V(\Pc) = \Union_{C \in \Pc} V(C)$ and the edge set of $\Pc$ is $E(\Pc) = \Union_{C \in \Pc} E(C)$. Let $C$ and $D$ be two cells of $\Pc$. Then $C$ and $D$ are {\em connected} in $\Pc$, if there is a sequence of cells of $\Pc$ of the form $C = C_1, \ldots, C_m = D$ for which $C_i \sect C_{i+1}$ is an edge of $C_i$ for $i = 1, \ldots, m - 1$. A {\em polyomino} is a finite collection $\Pc$ of cells of $\ZZ^2$ for which any two cells of $\Pc$ are connected in $\Pc$. Let $\Pc$ be a polyomino, and let $S = K[\{x_\ab\}_{\ab \in V(\Pc)}]$ be the polynomial ring in $|V(\Pc)|$ variables over a field $K$. A proper interval $[\ab, \bb]$ of $\ZZ^2$ is called an {\em inner interval} of $\Pc$, if each cell of $[\ab, \bb]$ belongs to $\Pc$. Now, for each inner interval $[\ab, \bb]$ of $\Pc$, one introduces the binomial $f_{\ab,\bb} = x_{\ab}x_{\bb}-x_{\cb}x_{\db}$, where $\cb$ and $\db$ are the anti-diagonal corners of $[\ab, \bb]$. The binomial $f_{\ab,\bb}$ is called an {\em inner $2$-minor} of $\Pc$. The {\em polyomino ideal} of $\Pc$ is the binomial ideal $I_\Pc$ which is generated by the inner $2$-minors of $\Pc$. Furthermore, we write $K[\Pc]$ for the quotient ring $S/I_\Pc$. \medskip The main result of this section is \begin{Theorem} \label{heightprime} Let $\Pc$ be a finite set of cells, and let $c$ be the number of cells of $\Pc$. Then $\height I_\Pc\leq c\leq \bigheight I_\Pc$. In particular, if $I_\Pc$ is an unmixed ideal, then $\height I_\Pc= c$. \end{Theorem} For the proof of this theorem we use the following lemma. We refer to $\bigheight I$ as to the maximal height of an associated prime ideal of $I$. An ideal $I$ is called {\em unmixed} if all $P\in \Ass(I)$ have the same height. \begin{Lemma} \label{linearalgebra} Let $I\subset S$ be a binomial ideal, and let $V_I$ be the $\QQ$-vector space spanned by the set of integer vectors $\{\vb-\wb\in \QQ^n\:\; \xb^\vb-\xb^\wb\in I\}$. 
Then \[ \height I\leq \dim_Q V_I \leq \bigheight I. \] In particular, $\height I= \dim_Q V_I$, if $I$ is unmixed. \end{Lemma} \begin{proof} Let $\xb=x_1\cdots x_n$. Then $S_\xb=K[x_1^\pm,\ldots, x_n^\pm]$ is the Laurent polynomial ring, and we have $\height I\leq \height IS_\xb$. Hence, for the first inequality it suffices to show that $\height IS_\xb\leq \dim_\QQ V_I$. Note that \[ IS_\xb =(1-\xb^{\vb}\:\; \vb\in V_I). \] We observe that \[ (1-\xb^{\vb_2})-(1-\xb^{\vb_1})=(\xb^{\vb_2}-\xb^{\vb_1})= \xb^{\vb_2}(1-\xb^{\vb_1-\vb_2}). \] This shows that with $1-\xb^{\vb_1}$ and $1-\xb^{\vb_2}$, also $(1-\xb^{\vb_1-\vb_2})\in S_\xb$, since $\xb^{\vb_2}$ is a unit in $S_{\xb}$. Similarly, one sees that $(1-\xb^{\vb_1+\vb_2})\in S_\xb$. Hence the integer vectors $\vb$, which span $V_I$, form an abelian subgroup $G$ of $\ZZ^n$. Any abelian subgroup of $\ZZ^n$ is free. Let $\vb_1,\ldots,\vb_r$ be a basis of $G$. Then this basis is also a $\QQ$-basis of $V_I$, and \[ IS_\xb =(1-\xb^{\vb_1},\ldots, 1-\xb^{\vb_r}). \] Now, we apply Krull's generalized principle ideal theorem, to deduce that $\height IS_\xb \leq r=\dim_\QQ V_I$, as desired. For the second inequality we notice that $\height IS_\xb\leq \bigheight I$. Thus it suffices to show that $\height (1-\xb^{\vb_1},\ldots, 1-\xb^{\vb_r})=\dim_\QQ V$. Observe that $S_\xb$ can be identified with the group ring $K[\ZZ^n]$, whose $K$-basis consists of all monomials $\xb^{\ab}$ with $\ab\in \ZZ^n$. By the elementary divisor theorem there exists a basis $\eb_1,\ldots, \eb_n$ of $\ZZ^n$ and positive integers $a_1,\ldots,a_r$ such that $\vb_i=a_i\eb_i$ for $i=1,\ldots,r$. In these coordinates \[ IS_\xb= (1-x_1^{a_1},\ldots, 1-x_r^{a_r})S_\xb. \] Now, consider the ideal $J=(1-x_1^{a_1},\ldots, 1- x_r^{a_r})S_r$, where $S_r=K[x_1,\ldots,x_r]$. Let $R=S_r/J$. Since $\dim R=0$, it follows that $R[x_{r+1},\ldots,x_n]$ is Cohen-Macaulay of dimension $n-r$, and since $R[x_{r+1},\ldots,x_n]\iso S/JS$, this implies that $JS$ is an unmixed ideal of height $r$. Because $JS$ is unmixed, we then have \[ r=\height JS=\height JS_\xb= \height IS_\xb, \] as desired. \end{proof} {\em Proof of Theorem~\ref{heightprime}}. Note that $V_{I_\Pc}$ is a subspace of the $\QQ$-vector space $W:=\QQ^{V(\Pc)}$. We denote by $\vb_\ab \in W$ the vector, whose $\ab$'s component is $1$, while its other components are $0$. The set of vectors $\{\vb_{\ab}\:\; \ab\in V(\Pc)\}$ is the canonical basis of $W$. For each inner interval $[\ab,\bb]$ of $\Pc$ with anti-diagonals $\cb$ and $\db$ we define the vector \[ \vb_{[\ab,\bb]}= \vb_\ab+\vb_\bb-\vb_\cb-\vb_\db. \] It follows from the definition of $V_{I_\Pc}$ that the vectors $\vb_{[\ab,\bb]}$ span $V_{I_\Pc}$. If $C=[\ab,\bb]$ is a cell of $\Pc$, then we write $\vb_C$ for the vector $\vb_{[\ab,\bb]}$ and claim that the vectors $\vb_C$ form a $\QQ$-basis of $V_{I_\Pc}$. Together with Theorem~\ref{linearalgebra} this claim implies the desired conclusion. If $[\ab, \bb]$ is an arbitrary inner interval of $\Pc$, then it is readily seen that \[ \vb_{[\ab,\bb]}= \sum_C \vb_C, \] where the sum is taken over all cells in $[\ab, \bb]$. This shows that the vectors $\vb_C$ generate $V_{I_\Pc}$. It remains to be shown that the set of vectors $\vb_C$ with $C$ a cell of $\Pc$ is linearly independent. For this purpose we choose any total order on $\ZZ^2$, extending the partial order $\leq $ on $\ZZ^2$ which is defined by componentwise comparison. We set $\vb_{\ab}\leq \vb_{\bb}$ when $\ab\leq \bb$. 
Then for any cell $C=[\ab, \bb]$, the leading vector in the expression of $\vb_C$ is $\vb_\bb$. Since the leading vectors of all the vectors $\vb_C$ are pairwise distinct, it follows that the vectors $\vb_C$ are linearly independent. \qed \section{The coordinate ring of cell ideals with isolated singularity} Let $I=(f_1, \ldots,f_m)$ be an ideal in $S$, and let \[ A=(\partial f_i/\partial x_j)_{i=1,\ldots,m \atop j=1,\ldots,n} \] be the corresponding Jacobian matrix. Let $h$ be the height of $I$. The \textit{Jacobian ideal} of the ring $R=S/I$ is the ideal $J\subset R$ generated by the $h\times h$-minors of $A$. When $K$ is a perfect field, the ideal $J$ defines the singular locus of $R$. In other words, $R_P$ is not regular for $P\in \Spec(R)$ if and only if $J\subseteq P$, see ~\cite[Corollary~16.20]{Ei}. In the following result we investigate when the ring $K[\Pc]$ has an isolated singularity. \begin{Theorem} \label{isolated} Let $K$ be a perfect field, and let $\Pc$ be a finite set of cells such that $I_\Pc\subset S$ is a prime ideal. Then $S/I_\Pc$ has an isolated singularity if and only if $\Pc$ is an inner interval. \end{Theorem} \begin{proof} We set $u_\ab=x_\ab\mod I_\Pc$ for all $\ab\in V(\Pc)$. Let $J\subset R$ be the Jacobian ideal of $R=S/I_\Pc$. By ~\cite[Corollary~16.20]{Ei} the assumption on $K$ guarantees that the $K$-algebra $R$ has an isolated singularity if and only $\dim R/J=0$. The latter is the case if and only if suitable powers of the $K$-algebra generators $u_\ab$ of $R$ belong to $J$. Let us first assume that $\P$ is the inner interval $[\ab,\bb]$. The desired result follows in this case from known results about determinantal ideals, and the fact that ideal of $2$-minors of an inner interval is a determinantal ideal. Conversely, assume that $R=S/I_\Pc$ has an isolated singularity. Let $h$ denotes the number of cells in $\Pc$. Since the entries of the Jacobian matrix are of the form $0$ or $\pm x_{\ab}$, it follows that the Jacobian ideal is generated by monomials of degree $h$ formed by the generators $u_\ab$ of $R$. Thus, if $R$ has an isolated singularity, it is required that for each $\ab\in V(\Pc)$, there exists $k\geq h$ such that $u_{\ab}^k\in J$, and this is the case if and only if $u_{\ab}^h\in J$. Let $\ab\in V(\Pc)$. Then $\pm x_{\ab}$ appears as an entry of the Jacobian matrix, if and only if there exists $\bb\in V(\Pc)$ such that $\ab$ and $\bb$ are the diagonal or anti-diagonal corners of an inner interval $D$ of $\Pc$. Let $\Bc_\ab$ be the set of such elements $\bb$. Thus, if $u_\ab^h$ appears as a monomial generator of the Jacobian ideal $J$, then there should exists at least $h$ such elements $\bb$ so that $\ab$ and $\bb$ are the diagonal or anti-diagonal corners of an inner interval of $\Pc$. Hence, $h\leq |\Bc_\ab|$. For each $\bb\in \Bc_\ab$ there exists a unique cell $C_\bb\subseteq D$ for which $\bb$ is a corner of $D$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.9]{p002.eps} \end{center} \caption{Inside cell} \label{insidecell} \end{figure} It follows that $|\Bc_\ab|\leq h$ (which is the number of cells of $\Pc$). Thus we have shown that $|\Bc_\ab|=h$ for all $\ab\in V(\Pc)$. Assume that $\Pc$ is not an interval. We claim that in this case there exists $\ab \in \Pc$ such that $|\Bc_\ab|<h$, which then leads to a contradiction. 
Proof of the claim: choose $\ab\in V(\Pc)$, and take the subset $\{\bb_1,\ldots \bb_r\}$ of the elements in $\Bc_\ab$ for which the inner interval $I_j$ with corners $\ab$ and $\bb_j$ (as displayed in Figure~\ref{bi}) is maximal in the sense that if $\bb\in\Bc_\ab$, then the inner interval with corners $\ab$ and $\bb$ is contained in one of the intervals $I_j$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.9]{p003.eps} \end{center} \caption{} \label{bi} \end{figure} Since $|\Bc_\ab|=h$ and since the cells $C_\bb$ are pairwise distinct, and since they are cells of $\Union_{j=1}^rI_j$, it follows that $\Union_{j=1}^rI_j$ contains $h$ cells. By Theorem~\ref{heightprime}, $\Pc$ has exactly $h$ cells. Hence we see that $\Pc$ is equal to the set of the cells of $\Union_{j=1}^rI_j$. Let $[\cb,\db]$ be the smallest interval containing $\Pc$, see Figure~\ref{bounded}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.9]{p001.eps} \end{center} \caption{} \label{bounded} \end{figure} Since we assume that $\Pc$ is not an interval, it follows that not all corners of $[\cb,\db]$ belong to $V(\Pc)$. We may assume that $\cb\not\in V(\Pc)$, and in order to simplify our discussion we may further assume that $\cb=(0,0)$. Let $\bb$ be the smallest element on the $x$-axis and $\bb'$ be the smalest element on the $y$-axis which belongs to $V(\Pc)$. In our picture these are the elements $\bb=\bb_1$ and $\bb'=\bb_5$. Then $\bb'+(1,1)\not \in \Bc_\bb$, which implies that $ |\Bc_\bb|<h$. This proves the claim and completes the proof of the theorem. \end{proof} \begin{thebibliography}{} \bibitem{DN} R. Dinu, F. Navarra, Non-simple polyominoes of K\"onig type, arXiv:2210.12665, 2022. \bibitem{Ei} D.~Eisenbud, Commutative Algebra with a View towards Algebraic Geometry. GTM 150, Springer, 1994. \bibitem{HM} J.~Herzog, S.S. Madani, The coordinate ring of a simple polyomino. Illinois J. Math. 58 (2014), 981--995. \bibitem{Q} A.A. Qureshi, Ideals generated by $2$-minors, collections of cells and stack polyominoes. J. Algebra 357 (2012), 279--303. \bibitem{ASS} A.A. Qureshi, T. Shibuta, A.~Shikama, Simple polyominoes are prime. J. Commut. Algebra 9 (2017), 413--422. \end{thebibliography}{} \end{document}
2205.06656v2
http://arxiv.org/abs/2205.06656v2
Dynamic boundary conditions for time dependent fractional operators on extension domains
\NeedsTeXFormat{LaTeX2e} \documentclass[12pt,intlimits]{article} \usepackage[a4paper,top=2cm,bottom=2cm,left=2cm,right=2cm,bindingoffset=5mm]{geometry} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \textwidth15.5cm \textheight23cm \oddsidemargin0cm \def\baselinestretch{1.2} \usepackage{graphicx} \usepackage{epsfig} \usepackage{hyperref} \hypersetup{colorlinks=true,linkcolor=blue} \setlength{\emergencystretch}{20pt} \tolerance=2000 \vfuzz2pt \newtheorem{theo}{Theorem}[section] \newtheorem{lem}[theo]{Lemma} \newtheorem{cor}[theo]{Corollary} \newtheorem{prop}[theo]{Proposition} \newtheorem{con}[theo]{Conclusions} \theoremstyle{definition} \newtheorem{defi}[theo]{Definition} \theoremstyle{remark} \newtheorem{remark}[theo]{Remark} \newcommand{\be}{\begin{eqnarray}} \newcommand{\ee}{\end{eqnarray}} \newcommand{\bes}{\begin{eqnarray*}} \newcommand{\ees}{\end{eqnarray*}} \newcommand{\bi}{\begin{itemize}} \newcommand{\ei}{\end{itemize}} \newcommand{\ben}{\begin{enumerate}} \newcommand{\een}{\end{enumerate}} \def\pa{\partial} \newcommand{\Ep}{\mathcal{E}^{(p)}} \newcommand{\Lp}{\mathcal{L}^{(p)}} \newcommand{\Hdf}{\mathcal{H}^{D_f}} \newcommand{\La}{\mathcal{L}} \newcommand{\D}{\mathcal{D}} \newcommand{\E}{\mathcal{E}} \newcommand{\B}{\mathcal{B}^{s,t}_\Omega} \newcommand{\Bn}{\mathcal{B}^{s,t}_{\Omega_n}} \newcommand{\BT}{\mathcal{B}^{s,t}_{\mathcal{T}}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\Hcal}{\mathcal{H}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\C}{\mathbb{C}} \newcommand{\X}{\mathbb{X}} \newcommand{\G}{\mathcal{G}} \newcommand{\T}{\mathcal{T}} \newcommand{\Ncal}{\mathcal{N}_{2-2s}^K} \newcommand{\e}{\mbox {e}} \newcommand{\de}{\mathrm {d}} \def\Ext{{\hbox{\rm Ext}}} \def\einschr{\hbox{\kern1pt\vrule height 6pt\vrule width6pt height 0.4pt depth0pt\kern1pt}} \newcommand{\p}{\mathbf{P}} \newcommand {\no} {\noindent} \newcommand{\EQR}[1]{{\normalfont (\ref{#1})}} \newcommand{\enne}{{(n)}} \newcommand{\Lm}{\mathfrak{L}} \DeclareMathOperator{\dive}{div} \DeclareMathOperator{\trac}{Tr} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\curl}{curl} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\capac}{cap} \DeclareMathOperator{\diam}{diam} \newcommand{\uu}{u|_{\partial \Omega}} \newcommand{\vv}{v|_{\partial \Omega}} \newcommand{\ww}{w_k|_{\partial \Omega}} \newcommand{\ff}{f|_{\partial \Omega}} \newcommand{\UU}{U|_{\partial \Omega}} \newcommand{\psipsi}{\psi|_{\partial \Omega}} \newcommand{\bfu}{\mathbf{u}} \newcommand{\bfv}{\mathbf{v}} \newcommand{\bfw}{\mathbf{w}} \newcommand{\bff}{\mathbf{f}} \newcommand{\bfg}{\mathbf{g}} \newcommand{\bfh}{\mathbf{h}} \newcommand{\bfU}{\mathbf{U}} \newcommand{\bfF}{\mathbf{F}} \let\theorem\theo \let\definition\defi \let\proposition\prop \let\lemma\lem \def\theequation{\thesection.\arabic{equation}} \title{\bf Dynamic boundary conditions for time dependent fractional operators on extension domains } \date{} \begin{document} \maketitle \centerline{\scshape Simone Creo and Maria Rosaria Lancia} \medskip {\footnotesize \centerline{Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sapienza Universit\`{a} di Roma, } \centerline{via Antonio Scarpa 16, 00161 Roma, Italy.} \centerline{E-mail: [email protected],\quad [email protected]} } \vspace{1cm} \begin{abstract} \noindent We consider a parabolic semilinear non-autonomous problem $(\tilde P)$ for a fractional time dependent operator $\mathcal{B}^{s,t}_\Omega$ with Wentzell-type boundary conditions in a possibly non-smooth domain 
$\Omega\subset\mathbb{R}^N$. We prove existence and uniqueness of the mild solution of the associated semilinear abstract Cauchy problem $(P)$ via an evolution family $U(t,\tau)$. We then prove that the mild solution of the abstract problem $(P)$ actually solves problem $(\tilde P)$ via a generalized fractional Green formula. \end{abstract} \medskip \noindent\textbf{Keywords:} Extension domains, fractional operators, non-autonomous energy forms, evolution operators, semilinear parabolic equations, ultracontractivity.\\ \noindent{\textbf{2010 Mathematics Subject Classification:} Primary: 35R11, 47D06. Secondary: 35K90, 35K58, 28A80.} \bigskip \section*{Introduction} \setcounter{equation}{0} In this paper we consider a parabolic semilinear boundary value problem with dynamic boundary conditions for a generalized time dependent fractional operator in an extension domain $\Omega\subset\R^N$ having as boundary a $d$-set (we refer the reader to Section \ref{geometria} for the definitions). Problems of this type are also known as Wentzell-type problems. The problem is formally stated as follows: \begin{equation}\notag (\tilde P)\begin{cases} \frac{\partial u}{\partial t}(t,x)+\B u(t,x)=J(u(t,x)) &\text{in $[0,T]\times\Omega$,}\\[2mm] \frac{\partial u}{\partial t}(t,x)+C_s\Ncal u(t,x)+b(t,x)u(t,x)+\Theta^t_\alpha (u(t,x))=J(u(t,x))\, &\text{on $[0,T]\times\partial\Omega$},\\[2mm] u(0,x)=\phi(x) &\text{in $\overline\Omega$}, \end{cases} \end{equation} where $0<s<1$, $\alpha$ is defined in \eqref{definizione alpha}, $\B$ and $\Theta^t_\alpha$ denote generalized time dependent fractional operators on $\Omega$ and $\partial\Omega$ (see \eqref{fracreglap} and \eqref{nonlocal-op.}) respectively, $T$ is a fixed positive number, $b$ is a suitable function depending also on $t$ which satisfies hypotheses \eqref{ipotesi b}, $\Ncal$ is the fractional conormal derivative defined in Theorem \ref{greenf}, $\phi$ is a given datum in a suitable functional space and $J$ is a mapping from $L^{2p}(\Omega,m)$ to $L^2(\Omega,m)$, for $p>1$, locally Lipschitz on bounded sets in $L^{2p}(\Omega,m)$ (see condition \eqref{LIPJ}), where $m$ is the measure defined in \eqref{defmisura}. We remark that $\B$ is a time dependent generalization of the regional fractional Laplacian $(-\Delta)^s_\Omega$ and $\Theta^t_\alpha$ plays the role of a regional fractional Laplacian of order $\alpha\in (0,1)$ on $\partial\Omega$ (see Section \ref{sezgreen}). We approach this problem by proving that there exists a unique evolution family associated with the non-autonomous energy form $E[t,u]$ defined in \eqref{frattale}. More precisely, after introducing the energy form $E[t,u]$, we consider the following abstract Cauchy problem $(P)$ (see also \eqref{eq:5.1}): \begin{equation}\notag(P)\begin{cases} \frac{\partial u(t)}{\partial t}=A(t)u(t)+J(u(t))\quad\text{for $t\in[0,T]$},\\ u(0)=\phi, \end{cases} \end{equation} where $A(t)\colon D(A(t))\subset L^2(\Omega,m)\to L^2(\Omega,m)$ is the family of operators associated to $E[t,u]$. Crucial tools for proving existence and uniqueness of the (mild) solution of the non-autonomous abstract Cauchy problem $(P)$ are a fractional version of the Nash inequality on $L^2(\Omega,m)$, which in turn allows us to prove the ultracontractivity of the evolution family $U(t,\tau)$ (see Theorem \ref{ultracontr}), and a contraction argument in suitable Banach spaces. 
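For the reader's orientation we sketch, in an informal way, the notion underlying the contraction argument: denoting by $U(t,\tau)$ the evolution family associated with $E[t,u]$, a mild solution of $(P)$ is understood, as is customary, as a solution of the variation of constants formula \begin{equation}\notag u(t)=U(t,0)\phi+\int_0^t U(t,\tau)J(u(\tau))\,\de\tau,\qquad t\in[0,T], \end{equation} and the fixed point argument is carried out on the map defined by the right-hand side in a suitable Banach space of functions of $t$; the precise setting is given in Section \ref{sec5}.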
A generalized fractional Green formula, proved in Section \ref{sezgreen}, then allows us to deduce that the mild solution of problem $(P)$ actually solves problem $(\tilde P)$ in a suitable weak sense, see Theorem \ref{esistfrattale}. The literature on boundary value problems with dynamic boundary conditions in smooth domains is huge: we refer to \cite{AP-NAsurv,F-G-G-Ro,goldstein} and the references listed therein. On the contrary, the study of Wentzell problems in extension domains (in particular with fractal boundaries and/or interfaces) is more recent; among others, we refer to \cite{La-Ve3,JEE,CLNpar,JEEfraz,creoZAA}. The study of autonomous semilinear problems in extension domains with Wentzell-type boundary conditions for fractional operators is a rather recent topic. We refer to \cite{JEEfraz} for the linear case and to \cite{CLVmosco,CLNODEA,CLVtrasm} for the case $p\geq 2$. The literature on fractional operators is vast, since such operators provide a mathematical description of so-called anomalous diffusion, a topic which also appears in finance and probability. We refer to the papers \cite{Ab-Th,Ja,MeMiMo,schneider,CSF,valdinoci,mandelbrot}, which deal with models describing such diffusion. On the other hand, considering the corresponding non-autonomous problems allows one to tackle more realistic models, and it is indeed a challenging task. To our knowledge, the first results on non-autonomous semilinear Wentzell problems for the Laplace operator in irregular domains are contained in \cite{LVnonaut}. When investigating semilinear problems, both autonomous and non-autonomous, the functional setting is given by an interpolation space between the domain of the generator $A(t)$ and $L^2(\Omega,m)$, or by the domain of a fractional power of $A(t)$. In the case of extension domains, possibly with fractal boundary, the domain of $A(t)$ is unknown. Our aim here is to extend to the fractional non-autonomous case the ideas and methods of \cite{LVnonaut} and \cite{Daners} under suitable hypotheses on $J(u)$. In order to use a fixed point argument in Banach spaces, a crucial tool is to prove suitable mapping properties for $J(u)$, which in turn rely heavily on the ultracontractivity of the evolution family. We stress that the techniques used in \cite{LVnonaut} to prove the ultracontractivity property cannot be applied to the present case, since the non-autonomous form $E[t,u]$ is nonlocal. Here the ultracontractivity property is obtained by an abstract argument which relies heavily on a fractional Nash inequality on $L^2(\Omega,m)$. When giving the strong interpretation of problem $(P)$, this functional setting allows us to prove that the unknown $u$ satisfies a dynamic boundary condition on $\partial\Omega$, whereas this was not possible in \cite{LVnonaut}, due to the presence of the \lq\lq fractal Laplacian'' on the boundary. \medskip The paper is organized as follows.
In Section \ref{preliminari} we introduce the geometry and the functional setting and we recall important general results, such as trace theorems, Sobolev-type embeddings for extension domains and the Nash inequality (see Proposition \ref{Nash}).\\ In Section \ref{sezgreen} we introduce the time dependent operator $\B$ which governs the diffusion in the bulk and we introduce the notion of fractional conormal derivative $\Ncal$ via a generalized fractional Green formula (see Theorem \ref{greenf}).\\ In Section \ref{sec3} we introduce the nonlocal operator $\Theta^t_\alpha$ acting on $\partial\Omega$ and the non-autonomous energy form $E[t,u]$, we prove its properties and we show that there exists a unique evolution family $U(t,\tau)$ associated to $E[t,u]$.\\ In Section \ref{sec4} we prove some regularity properties of the evolution family, in particular its ultracontractivity (see Theorem \ref{ultracontr}).\\ In Section \ref{sec5} we consider the abstract Cauchy problem $(P)$ and we prove that it admits a unique local (mild) solution. We then prove that the unique solution is also global in time under suitable assumptions on the initial datum. Finally, we prove that the unique mild solution of $(P)$ solves problem $(\tilde P)$ in a suitable weak sense. \section{Preliminaries}\label{preliminari} \setcounter{equation}{0} \subsection{Functional spaces}\label{spazi funzionali} Let $\G$ (resp. $\mathcal{S}$) be an open (resp. closed) set of $\R^N$. By $L^p(\G)$, for $p\geq 1$, we denote the Lebesgue space with respect to the Lebesgue measure $\de\La_N$, which will be understood from the context whenever this does not create ambiguity. By $L^p(\partial\G)$ we denote the Lebesgue space on $\partial\G$ with respect to a Hausdorff measure $\mu$ supported on $\partial \G$. By $\D(\G)$ we denote the space of infinitely differentiable functions with compact support on $\G$. By $C(\mathcal{S})$ we denote the space of continuous functions on $\mathcal{S}$ and by $C^{0,\vartheta}(\mathcal{S})$ we denote the space of H\"older continuous functions on $\mathcal{S}$ of order $0<\vartheta<1$.\\ By $H^s(\G)$, where $0<s<1$, we denote the fractional Sobolev space of exponent $s$. Endowed with the norm \begin{equation*} \|u\|^2_{H^s(\G)}=\|u\|^2_{L^2(\G)}+\iint_{\G\times\G} \frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y), \end{equation*} it becomes a Banach space. We denote by $|u|_{H^s(\G)}$ the seminorm associated to $\|u\|_{H^s(\G)}$ and by $(u,v)_{H^s(\G)}$ the scalar product induced by the $H^s$-norm. Moreover, we set \begin{equation}\notag (u,v)_s:=\iint_{\G\times\G}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y). \end{equation} In the following we will denote by $|A|$ the Lebesgue measure of a subset $A\subset\R^N$. For $f\in H^{s}(\G)$, we define the trace operator $\gamma_0$ as \begin{equation}\notag \gamma_0f(x):=\lim_{r\to 0}{1\over|B(x,r)\cap\G|}\int_{B(x,r)\cap\G}f(y)\,\de\La_N(y) \end{equation} at every point $x\in \overline{\G}$ where the limit exists. The above limit exists at quasi every $x\in \overline{\G}$ with respect to the $(s,2)$-capacity (see Definition 2.2.4 and Theorem 6.2.1, page 159, in \cite{AdHei}). From now on, we denote the trace operator simply by $f|_{\G}$; sometimes we will omit the trace symbol and the interpretation will be left to the context. Moreover, we denote by $\Lm(X\to Y)$ the space of linear and continuous operators from a Banach space $X$ to a Banach space $Y$. If $X=Y$, we simply denote this space by $\Lm(X)$.
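We also observe, as an elementary illustration of the definition of $\gamma_0$ (stated only for orientation), that if $f$ is continuous on $\overline{\G}$, then the limit defining $\gamma_0 f(x)$ exists at every $x\in\overline{\G}$ and $\gamma_0 f(x)=f(x)$; in this case the trace operator reduces to the usual restriction.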
\noindent Throughout the paper, $C$ denotes possibly different constants. We give the dependence of constants on some parameters in parentheses. \subsection{$(\varepsilon,\delta)$ domains and trace theorems}\label{geometria} We recall the definition of $(\varepsilon,\delta)$ domains. For details see \cite{Jones}. \begin{definition}\label{defepsdelta} Let $\mathcal{F}\subset\R^N$ be open and connected. For $x\in\mathcal{F}$, let $\displaystyle d(x):=\inf_{y\in\mathcal{F}^c}|x-y|$. We say that $\mathcal{F}$ is an $(\varepsilon,\delta)$ domain if, whenever $x,y\in\mathcal{F}$ with $|x-y|<\delta$, there exists a rectifiable arc $\gamma\in\mathcal{F}$ of length $\ell(\gamma)$ joining $x$ to $y$ such that \begin{center} $\displaystyle\ell(\gamma)\leq\frac{1}{\varepsilon}|x-y|\quad$ and\quad $\displaystyle d(z)\geq\frac{\varepsilon|x-z||y-z|}{|x-y|}$ for every $z\in\gamma$. \end{center} \end{definition} \medskip \noindent We now recall the definition of $d$-set, referring to \cite{JoWa} for a complete discussion. \begin{definition}\label{dset} A closed nonempty set $\mathcal{S}\subset\R^N$ is a $d$-set (for $0<d\leq N$) if there exist a Borel measure $\mu$ with $\supp\mu=\mathcal{S}$ and two positive constants $c_1$ and $c_2$ such that \begin{equation}\label{defindset} c_1r^{d}\leq \mu(B(x,r)\cap\mathcal{S})\leq c_2 r^{d}\quad\text{for every }x \in\mathcal{S}.\end{equation} The measure $\mu$ is called a $d$-measure. \end{definition} \medskip \noindent In this paper, we consider two particular classes of $(\varepsilon,\delta)$ domains $\Omega\subset\R^N$. More precisely, $\Omega$ can be a $(\varepsilon,\delta)$ domain having as boundary either a $d$-set or an arbitrary closed set in the sense of \cite{jonsson91}. For the sake of simplicity, from now on we restrict ourselves to the case in which $\partial\Omega$ is a $d$-set. We suppose that $\Omega$ can be approximated by a sequence $\{\Omega_n\}$ of domains such that, for every $n\in\N$, \begin{equation*} (\Hcal)\begin{cases} \Omega_n\text{ is bounded and Lipschitz;}\\[2mm] \Omega_n\subseteq \Omega_{n+1};\\[2mm] \Omega=\displaystyle\bigcup_{n=1}^\infty \Omega_n. \end{cases} \end{equation*} \noindent The reader is referred to \cite{JEEfraz} and \cite{CLVmosco} for examples of such domains. \bigskip We recall the definition of Besov space specialized to our case. For generalities on Besov spaces, we refer to \cite{JoWa}. \begin{definition} Let $\mathcal{F}$ be a $d$-set with respect to a $d$-measure $\mu$ and $0<\alpha<1$. ${B^{2,2}_\alpha(\mathcal{F})}$ is the space of functions for which the following norm is finite, $$ \|u\|^2_{B^{2,2}_\alpha(\mathcal{F})}=\|u\|^2_{L^2(\mathcal{F})}+\iint_{|x-y|<1}\frac{|u(x)-u(y)|^2}{|x-y|^{d+2\alpha}}\,\de\mu(x)\,\de\mu(y). $$ \end{definition} \noindent In the following, we will denote the dual of the Besov space $B^{2,2}_\alpha(\mathcal{F})$ with $(B^{2,2}_\alpha(\mathcal{F}))'$; we point out that this space coincides with the space $B^{2,2}_{-\alpha}(\mathcal{F})$ (see \cite{JoWa2}). From now on, let \begin{equation}\label{definizione alpha} \alpha:=s-\frac{N-d}{2}\in (0,1). \end{equation} We now state the trace theorem for functions in $H^s(\Omega)$, where $\Omega$ is a bounded $(\varepsilon,\delta)$ domain with boundary $\partial\Omega$ a $d$-set. For the proof, we refer to \cite[Theorem 1, Chapter VII]{JoWa}. \begin{prop}\label{teotraccia} Let $\frac{N-d}{2}<s<1$ and $\alpha$ be as in \eqref{definizione alpha}. 
$B^{2,2}_\alpha(\partial\Omega)$ is the trace space of $H^{s}(\Omega)$ in the following sense: \begin{enumerate}\item[(i)] $\gamma_0$ is a continuous linear operator from $H^s(\Omega)$ to $B^{2,2}_\alpha(\partial\Omega)$; \item[(ii)] there exists a continuous linear operator $\Ext$ from $B^{2,2}_\alpha(\partial\Omega)$ to $H^{s}(\Omega)$ such that $\gamma_0\circ \Ext$ is the identity operator in $B^{2,2}_\alpha(\partial\Omega)$. \end{enumerate} \end{prop} \noindent We point out that, if $\Omega\subset\R^N$ is a Lipschitz domain, its boundary $\partial\Omega$ is a $(N-1)$-set. Hence, the trace space of $H^s(\Omega)$ is $B^{2,2}_{s-\frac{1}{2}}(\partial\Omega)$, and the latter space coincides with $H^{s-\frac{1}{2}}(\partial\Omega)$. The following result provides us with an equivalent norm on $H^s(\Omega)$. The proof can be achieved by adapting the proof of \cite[Theorem 2.3]{warmaCPAA}. \begin{theorem}\label{equivalenza norme} Let $\Omega\subset\R^N$ be a $(\varepsilon,\delta)$ domain having as boundary a $d$-set, and let $\frac{N-d}{2}<s<1$. Then there exists a positive constant $C=C(\Omega,N,s,d)$ such that for every $u\in H^s(\Omega)$ \begin{equation}\label{norma eq} \int_\Omega |u|^2\,\de\La_N\leq C\left(\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega} \frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)+\int_{\partial\Omega}|u|^2\,\de\mu\right). \end{equation} \end{theorem} \noindent Here, $C_{N,s}$ is the positive constant defined in Section \ref{sezgreen}. Hence, from Theorem \ref{equivalenza norme} and Proposition \ref{teotraccia}, the following norm is equivalent to the \lq\lq usual" $H^s(\Omega)$-norm: \begin{equation}\notag |||u|||^2_{H^s(\Omega)}:=\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega} \frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)+\int_{\partial\Omega}|u|^2\,\de\mu. \end{equation} \noindent Finally, we recall the following important extension property which holds for $(\varepsilon,\delta)$ domains having as boundary a $d$-set. For details, we refer to Theorem 1, page 103 and Theorem 3, page 155 in \cite {JoWa}. \begin{theorem}\label{teo estensione} Let $0<s<1$. There exists a linear extension operator $\mathcal{E}{\rm xt}\colon\,H^s(\Omega)\to\,H^{s}(\R^N)$ such that \begin{equation}\label{R-3d} \|\mathcal{E}{\rm xt}\, w\|^2_{H^{s}(\R^N)}\leq C\|w\|^2_{H^{s}(\Omega)}, \end{equation} where $C$ is a positive constant depending on $s$, where $\mathcal{E}{\rm xt}\, w=w$ on $\Omega$. \end{theorem} \medskip \noindent Domains $\Omega$ satisfying property \eqref{R-3d} are the so-called \emph{$H^s$-extension domains}. \subsection{Sobolev embeddings and the Nash inequality}\label{sobolev embed} We now recall some important Sobolev-type embeddings for the fractional Sobolev space $H^s(\Omega)$ where $\Omega$ is a $H^{s}$-extension domain with boundary a $d$-set, see \cite[Theorem 6.7]{hitch} and \cite[Lemma 1, p. 214]{JoWa} respectively. \noindent We set $$2^*:=\frac{2N}{N-2s}\quad\text{and}\quad\bar{2}:=\frac{2d}{N-2s}.$$ \begin{theorem}\label{immsobpstar} Let $s\in(0,1)$ such that $2s<N$. Let $\Omega\subseteq\R^N$ be a $H^s$-extension domain. Then $H^s(\Omega)$ is continuously embedded in $L^q(\Omega)$ for every $q\in [1,2^*]$, i.e. there exists a positive constant $C_1=C_1(N,s,\Omega)$ such that, for every $u\in H^{s}(\Omega)$, \begin{equation}\label{sobolev p star} \|u\|_{L^q(\Omega)}\leq C_1\|u\|_{H^s(\Omega)}. \end{equation} \end{theorem} \begin{theorem}\label{immsobpbar} Let $s\in(0,1)$ be such that $N-d<2s<N$. 
Let $\Omega\subseteq\R^N$ be a $H^s$-extension domain having as boundary $\partial\Omega$ a $d$-set, for $0<d\leq N$. Then $H^s(\Omega)$ is continuously embedded in $L^q(\partial\Omega)$ for every $q\in [1,\bar{2}\,]$, i.e. there exists a positive constant $C_2=C_2(N,s,d,\Omega)$ such that, for every $u\in H^s(\Omega)$, \begin{equation}\label{sobolev p bar} \|u\|_{L^q(\partial\Omega)}\leq C_2\|u\|_{H^s(\Omega)}. \end{equation} \end{theorem} \noindent We point out that $2^*\geq\bar{2}\geq 2$. For $1\leq p\leq\infty$, we denote by $L^p(\Omega,m)$ the Lebesgue space with respect to the measure \begin{equation}\label{defmisura} \de m=\de\La_N+\de\mu, \end{equation} where $\mu$ is the $d$-measure supported on $\partial\Omega$. For $p\in[1,\infty)$, we endow $L^p(\Omega,m)$ with the following norm: \begin{equation*} \|u\|^p_{L^p(\Omega,m)}=\|u\|^p_{L^p(\Omega)}+\|\uu\|^p_{L^p(\partial\Omega)}. \end{equation*} For $p=\infty$, we endow $L^\infty(\Omega,m)$ with the following norm $$\|u\|_{L^\infty(\Omega,m)}:=\max\left\{\|u\|_{L^\infty(\Omega)},\|\uu\|_{L^\infty(\partial\Omega)}\right\}.$$ With these definitions, $L^p(\Omega,m)$ becomes a Banach space for every $1\leq p\leq\infty$. We now prove a version of the well known Nash inequality adapted to our setting. \begin{prop}\label{Nash} Let $u\in H^s(\Omega)$. Then there exists a positive constant $\bar{C}=\bar{C}(N,s,d,\Omega)$ such that the following Nash inequality holds, \begin{equation}\label{Nashineq} \|u\|_{L^2(\Omega,m)}^{2+\frac{4}{\lambda}}\leq\bar{C}\|u\|^2_{H^s(\Omega)}\|u\|_{L^1(\Omega,m)}^{\frac{4}{\lambda}}, \end{equation} where $\lambda=\frac{2d}{d-N+2s}$. \end{prop} \begin{proof} We adapt the proof of Proposition 4.5 in \cite{Daners} to our context. We set $\lambda=\frac{2d}{d-N+2s}$. From interpolation inequalities (see e.g. \cite[Section 7.1]{giltrud}), we have that \begin{equation}\label{interpolazione} \|u\|_{L^2(\omega)}\leq\|u\|_{L^{\bar{2}}(\omega)}^{1-\mu}\|u\|_{L^1(\omega)}^\mu, \end{equation} with $\mu=\frac{d-N+2s}{2d-N+2s}=1-\frac{d}{2d-N+2s}$ and $\omega$ is either $\Omega$ or $\partial\Omega$. Hence, from Theorems \ref{immsobpstar} and \ref{immsobpbar} we obtain \begin{equation}\label{somma1} \|u\|_{L^2(\Omega,m)}\leq C_3\|u\|_{H^s(\Omega)}^{\frac{d}{2d-N+2s}}\|u\|_{L^1(\Omega,m)}^{1-\frac{d}{2d-N+2s}}, \end{equation} where $C_3=C_3(N,s,d,\Omega)=\max\{C_1,C_2\}$ and $C_1$ and $C_2$ are the constants appearing in \eqref{sobolev p star} and \eqref{sobolev p bar} respectively. Therefore, since $\frac{2d-N+2s}{d}=\frac{1}{1-\mu}=1+\frac{2}{\lambda}$, from Theorem \ref{equivalenza norme} we have that there exists a positive constant $\bar{C}$ depending on $N$, $s$, $d$ and $\Omega$ such that \begin{equation}\notag\|u\|_{L^2(\Omega,m)}^{1+\frac{2}{\lambda}}\leq\bar{C}\|u\|_{H^s(\Omega)}\|u\|_{L^1(\Omega,m)}^{\frac{2}{\lambda}}, \end{equation} i.e., \eqref{Nashineq} holds. \end{proof} \section{The time dependent generalized regional fractional Laplacian}\label{sezgreen} \setcounter{equation}{0} From now on, let $T>0$ be fixed. We introduce a suitable measurable function $K\colon [0,T]\times\Omega\times\Omega\to\R$ such that $K(t,\cdot,\cdot)$ is symmetric for every $t\in[0,T]$ and there exist two constants $0<k_1<k_2$ such that $k_1\leq K(t,x,y)\leq k_2$ for a.e. 
$t\in[0,T]$ and $x,y\in\Omega$.\\ For $u,v\in H^s(\Omega)$, we set \begin{equation}\notag |u|_{s,K}:=\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}K(t,x,y)\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y) \end{equation} and \begin{equation}\notag (u,v)_{s,K}:=\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}K(t,x,y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y). \end{equation} We point out that $|u|_{s,1}=|u|^2_{H^s(\Omega)}$ and $(u,v)_{s,1}=(u,v)_s$. We introduce now a time dependent generalization of the regional fractional Laplacian $(-\Delta)^s_\Omega$. For the definition of regional fractional Laplacian, among the others we refer to \cite{BBC,chenkum,guan1,guan2,guan3}. \noindent Let $s\in(0,1)$. For every fixed $t\in[0,T]$, we define the operator $\B$ acting on $H^s(\Omega)$ in the following way: \begin{equation}\label{fracreglap} \begin{split} \B u(t,x) &=C_{N,s}{\rm P.V.}\int_\Omega K(t,x,y) \frac{u(t,x)-u(t,y)}{|x-y|^{N+2s}}\,\de\La_N(y)\\[2mm] &=C_{N,s}\lim_{\epsilon\to 0^+}\int_{\{y\in\Omega\,:\,|x-y|>\epsilon\}} K(t,x,y)\frac{u(t,x)-u(t,y)}{|x-y|^{N+2s}}\,\de\La_N(y). \end{split} \end{equation} The positive constant $C_{N,s}$ is defined by \begin{equation}\notag C_{N,s}=\frac{s2^{2s}\Gamma(\frac{N+2s}{2})}{\pi^\frac{N}{2}\Gamma(1-s)}, \end{equation} where $\Gamma$ is the Euler function. If $K\equiv 1$ on $[0,T]\times\Omega\times\Omega$, the operator $\B$ reduces to the usual regional fractional Laplacian $(-\Delta)^s_\Omega$.\\ We now introduce the notion of \emph{fractional conormal derivative} on $(\varepsilon,\delta)$ domains having as boundary a $d$-set and satisfying hypotheses $(\Hcal)$ in Section \ref{geometria}. We will generalize the notion of fractional normal derivative on irregular sets, which was introduced in \cite{JEEfraz} (see also \cite{CLVmosco,CLNODEA} for the nonlinear case). \noindent We define the space \begin{equation*} V(\B,\Omega):=\{u\in H^s(\Omega)\,:\,\B u\in L^{2}(\Omega)\,\,\text{in the sense of distributions}\}, \end{equation*} which is a Banach space equipped with the norm \begin{equation}\notag \|u\|^2_{V(\B,\Omega)}:=\|u\|^2_{H^s(\Omega)}+\|\B u\|^2_{L^{2}(\Omega)}. \end{equation} \noindent We define the fractional conormal derivative on Lipschitz domains. \begin{definition}\label{derivatafrazlip} Let $\T\subset\R^N$ be a Lipschitz domain. Let $u\in V(\BT,\T):=\{u\in H^s(\T)\,:\,\BT u\in L^{2}(\T)\,\,\text{in the sense of distributions}\}$. We say that $u$ has a weak fractional conormal derivative in $(H^{s-\frac{1}{2}}(\partial\T))'$ if there exists $g\in (H^{s-\frac{1}{2}}(\partial\T))'$ such that \begin{align} &\left\langle g,\vv\right\rangle_{(H^{s-\frac{1}{2}}(\partial\T))', H^{s-\frac{1}{2}}(\T)} =-\int_{\T} \BT u\,v\,\de\La_N \label{fracgreenlip}\\[2mm] &+\frac{C_{N,s}}{2}\iint_{\T\times\T}K(t,x,y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)\notag \end{align} for every $v\in H^s(\T)$. In this case, $g$ is uniquely determined and we call $C_{s}\Ncal u:=g$ the weak fractional conormal derivative of $u$, where \begin{equation*} C_s:=\frac{C_{1,s}}{2s(2s-1)}\int_0^\infty\frac{|z-1|^{1-2s}-(z\vee 1)^{1-2s}}{z^{2-2s}}\,\de z. \end{equation*} \end{definition} \medskip \noindent We point out that, if $K(t,x,y)\equiv1$, we recover the definition of fractional normal derivative on Lipschitz sets introduced in \cite{JEEfraz}. Moreover, if in addition to that we let $s\to 1^-$ in \eqref{fracgreenlip}, we obtain the Green formula for Lipschitz domains \cite{bregil}. 
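For orientation, we recall that, for sufficiently smooth $u$ and $v$ on a Lipschitz domain $\T$, the classical formula alluded to reads \begin{equation}\notag \int_{\partial\T}\frac{\partial u}{\partial\nu}\,v\,\de\sigma=\int_{\T}\Delta u\,v\,\de\La_N+\int_{\T}\nabla u\cdot\nabla v\,\de\La_N, \end{equation} where $\nu$ denotes the outward unit normal and $\sigma$ the surface measure on $\partial\T$; formula \eqref{fracgreenlip} can thus be read as its nonlocal counterpart, with $C_s\Ncal u$ playing the role of $\frac{\partial u}{\partial\nu}$ and the double integral playing the role of the Dirichlet form $\int_\T\nabla u\cdot\nabla v\,\de\La_N$.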
\medskip \begin{theorem}[Generalized fractional Green formula]\label{greenf} There exists a bounded linear operator $\Ncal$ from $V(\B,\Omega)$ to $(B^{2,2}_{\alpha}(\partial\Omega))'$.\\ The following generalized Green formula holds for every $u\in V(\B,\Omega)$ and $v\in H^s(\Omega)$, \begin{equation}\label{fracgreen} \begin{split} &C_{s}\left\langle \Ncal u,\vv\right\rangle_{(B^{2,2}_{\alpha}(\partial\Omega))', B^{2,2}_{\alpha}(\partial\Omega)} =-\int_\Omega \B u\,v\,\de\La_N\\[4mm] &+\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}K(t,x,y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y). \end{split} \end{equation} \end{theorem} \begin{proof} We adapt to our setting the proof of Theorem 2.2 in \cite{CLNODEA}, which we recall for the sake of completeness. For $u\in V(\B,\Omega)$ and $v\in H^s(\Omega)$, we define \begin{equation}\notag \langle l(u),v\rangle:=-\int_\Omega \B u\,v\,\de\La_N +\frac{C_{N,s}}{2}(u,v)_{s,K}. \end{equation} \noindent From H\"older's inequality, the trace theorem and the hypotheses on $K(t,x,y)$, we get \begin{align} |\left\langle l(u),v\rangle\right|&\leq \|\B u\|_{L^{2}(\Omega)}\|v\|_{L^2(\Omega)}+k_2\frac{C_{N,s}}{2}\|u\|_{H^s(\Omega)}\|v\|_{H^{s}(\Omega)}\notag\\ &\leq C\,\|u\|_{V(\B,\Omega)}\|v\|_{H^s(\Omega)}\leq C\,\|u\|_{V(\B,\Omega)}\|v\|_{B^{2,2}_{\alpha}(\partial\Omega)}.\label{eqstima} \end{align} We prove that the operator $l(u)$ is independent from the choice of $v$ and it is an element of $(B^{2,2}_{\alpha}(\partial\Omega))'$. From Proposition \ref{teotraccia}, for every $v\in B^{2,2}_{\alpha}(\partial\Omega)$ there exists a function $\tilde w:=\Ext\,v\in H^s(\Omega)$ such that \begin{equation}\label{eq1} \|\tilde w\|_{H^{s}(\Omega)}\leq C\|v\|_{B^{2,2}_{\alpha}(\partial\Omega)} \end{equation} and $\tilde w|_{\partial\Omega}=v$ $\mu$-almost everywhere. From \eqref{fracgreen} we have that \begin{equation}\notag C_{s}\left\langle \Ncal u,v\right\rangle_{(B^{2,2}_{\alpha}(\partial\Omega))', B^{2,2}_{\alpha}(\partial\Omega)}=\langle l(u),\tilde w\rangle. \end{equation} The conclusion follows from \eqref{eqstima} and \eqref{eq1}. We now recall that $\Omega$ is approximated by a sequence of Lipschitz domains $\Omega_n$, for $n\in\N$, satisfying conditions $(\Hcal)$ in Section \ref{geometria}. From \eqref{fracgreenlip} we have that \begin{align} &C_{s}\left\langle \Ncal u,\vv\right\rangle_{(H^{s-\frac{1}{2}}(\partial\Omega_n))', H^{s-\frac{1}{2}}(\partial\Omega_n)} =-\int_\Omega\chi_{\Omega_n}\Bn u\,v\,\de\La_N \notag\\[2mm]&+\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}\chi_{\Omega_n}(x)\chi_{\Omega_n}(y)K(t,x,y) \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y).\notag \end{align} From the dominated convergence theorem, we have \begin{align*}\label{E:Lipschitz} &\lim_{n\to\infty} C_{s}\left\langle \Ncal u,\vv\right\rangle_{(H^{s-\frac{1}{2}}(\partial\Omega_n))', H^{s-\frac{1}{2}}(\partial\Omega_n)}=\lim_{n\to\infty}\left(-\int_{\Omega_n} \Bn u\,v\,\de\La_N\right.\\[2mm]\notag &+\left.\frac{C_{N,s}}{2}\iint_{\Omega_n\times\Omega_n}K(t,x,y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)\right)\notag\\ &=-\int_\Omega \B u\,v\,\de\La_N +\frac{C_{N,s}}{2}(u,v)_{s,K}=\left\langle l(u),v\right\rangle\notag \end{align*} for every $u\in V(\B,\Omega)$ and $v\in H^{s}(\Omega)$. 
Hence, we define the fractional conormal derivative on $\Omega$ as \begin{equation}\notag \langle C_{s}\Ncal u,\vv\rangle_{(B^{2,2}_{\alpha}(\partial\Omega))', B^{2,2}_{\alpha}(\partial\Omega)}:=-\int_\Omega \B u\,v\,\de\La_N+\frac{C_{N,s}}{2}(u,v)_{s,K}. \end{equation} \end{proof} \begin{remark} As in the Lipschitz case, when $K(t,x,y)\equiv 1$, we recover the notion of fractional normal derivative on an irregular set introduced in \cite{JEEfraz}. Moreover, from \cite[Remark 3.1]{JEEfraz}, when $s\to 1^-$ and $K(t,x,y)\equiv1$ in \eqref{fracgreen}, we recover the Green formula proved in \cite{LaVe2} for fractal domains.\end{remark} \section{The non-autonomous energy form and the evolution family}\label{sec3} \setcounter{equation}{0} From now on, let us suppose that $s\in (0,1)$ is such that $N-d<2s<N$. Let $b\colon(0,T)\times\partial\Omega\to\R$ be a function such that \begin{equation}\label{ipotesi b} \begin{cases} b\in L^\infty([0,T]\times\partial\Omega),\\ \inf b(t,P)>b_0>0\quad \text{for every }(t,P)\in [0,T]\times\partial\Omega,\\ \text{there exists }\eta\in(\frac{1}{2}, 1): |b(t,P)-b(\tau,P)|\leq c |t-\tau|^\eta\quad\text{for every }P\in\partial\Omega. \end{cases} \end{equation} \noindent Let $\zeta\colon[0,T]\times\partial\Omega\times\partial\Omega\to\R$ be such that $\zeta(t,\cdot,\cdot)$ is symmetric for every fixed $t\in[0,T]$ and $\zeta_1\leq\zeta(t,x,y)\leq\zeta_2$ for suitable constants $0<\zeta_1<\zeta_2$ and for a.e $(t,x,y)\in[0,T]\times\partial\Omega\times\partial\Omega$. We now introduce a bounded linear operator $\Theta^t_{\alpha}\colon B^{2,2}_\alpha (\partial\Omega)\to (B^{2,2}_\alpha (\partial\Omega))'$ defined by \begin{equation}\label{nonlocal-op.} \langle\Theta^t_{\alpha}(u),v\rangle:=\iint_{\partial\Omega\times\partial\Omega}\zeta(t,x,y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{d+2\alpha}}\,\de\mu(x)\,\de\mu(y), \end{equation} where $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $(B^{2,2}_\alpha (\partial\Omega))'$ and $B^{2,2}_\alpha (\partial\Omega)$ and $\alpha$ is defined in \eqref{definizione alpha}. From our hypotheses on $\zeta$, this nonlocal term on $\partial\Omega$ is equivalent to the seminorm of $B^{2,2}_\alpha (\partial\Omega)$; moreover, we point out that, if $\zeta\equiv 1$, it can be regarded as a regional fractional Laplacian of order $\alpha$ on the boundary. \medskip \noindent We suppose that the kernels $K(t,x,y)$ and $\zeta(t,x,y)$ appearing in the nonlocal terms in the bulk and on the boundary respectively are H\"older continuous with respect to $t$. More precisely, we suppose that there exists $\eta\in(\frac{1}{2},1)$ such that \begin{equation}\label{holderianita nuclei} |K(t,x,y)-K(\tau,x,y)|\leq C|t-\tau|^\eta\quad\text{and}\quad|\zeta(t,x,y)-\zeta(\tau,x,y)|\leq C|t-\tau|^\eta \end{equation} for a.e. $x,y\in\Omega$ and a.e. $x,y\in\partial\Omega$ respectively. Obviously, one can take different H\"older exponents in \eqref{ipotesi b} and \eqref{holderianita nuclei}. For the sake of simplicity, we suppose that the third condition in \eqref{ipotesi b} and \eqref{holderianita nuclei} hold for the same exponent $\eta\in(\frac{1}{2},1)$. 
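\medskip \noindent The nonlocal boundary term \eqref{nonlocal-op.} can be made concrete by discretising the double integral over $\partial\Omega\times\partial\Omega$. In the Python sketch below (again an illustration only), the $d$-measure $\mu$ is assumed to be approximated by quadrature nodes $P_i$ on $\partial\Omega$ with weights $w_i$; the nodes, the weights, the value of $\alpha$ from \eqref{definizione alpha} and the kernel $\zeta$ are inputs supplied by the user, and no convergence claim is attached to this approximation.
\begin{verbatim}
import numpy as np

def theta_form(t, P, w, u, v, zeta, d, alpha):
    # Discrete analogue of <Theta^t_alpha(u), v>:
    #   sum_{i != j} zeta(t, P_i, P_j) * (u_i - u_j) * (v_i - v_j)
    #                / |P_i - P_j|^(d + 2*alpha) * w_i * w_j,
    # where the diagonal i = j is skipped (the kernel is singular at x = y).
    P = np.asarray(P, dtype=float)
    total = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if i == j:
                continue
            r = np.linalg.norm(P[i] - P[j])
            total += (zeta(t, P[i], P[j]) * (u[i] - u[j]) * (v[i] - v[j])
                      / r**(d + 2 * alpha) * w[i] * w[j])
    return total

# Toy example: nodes on the unit circle (a 1-set) with arc-length weights,
# the constant kernel zeta = 1 and a hypothetical value alpha = 0.3.
M = 64
angles = 2 * np.pi * np.arange(M) / M
P = np.stack([np.cos(angles), np.sin(angles)], axis=1)
w = np.full(M, 2 * np.pi / M)
u = np.cos(angles)
print(theta_form(0.0, P, w, u, u, lambda t, x, y: 1.0, d=1, alpha=0.3))
\end{verbatim}
\noindent The quantity computed here mimics $\langle\Theta^t_\alpha(u),u\rangle$, which under the hypotheses on $\zeta$ is nonnegative and comparable to the squared $B^{2,2}_\alpha(\partial\Omega)$-seminorm of the trace, as remarked above.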
\medskip \noindent For every $u\in H:=L^2(\Omega,m)$, we introduce the following energy form, with effective domain $D(E)=[0,T]\times H^s(\Omega)$, \begin{equation}\label{frattale} E[t,u]:= \begin{cases} \,\displaystyle\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}K(t,x,y)\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)\\[2mm] +\displaystyle\int_{\partial\Omega} b(t,P)|\uu|^2\,\de\mu+\langle\Theta^t_{\alpha}(\uu),\uu\rangle&\text{if}\,\,u\in D(E),\\[2mm] +\infty &\text{if}\,\,u\in H\setminus D(E), \end{cases} \end{equation} \noindent We remark that, if $u\in H^s(\Omega)$, from \eqref{definizione alpha} and Proposition \ref{teotraccia}, its trace $\uu$ is well-defined. We now prove some properties of the form $E$. \begin{prop}\label{coer} For every $t\in[0,T]$, the form $E[t,u]$ is continuous and coercive on $H^s(\Omega)$. \end{prop} \begin{proof} We start by proving the continuity of $E$. Since $b\in L^\infty([0,T]\times\partial\Omega)$ and $K$ and $\zeta$ are bounded from above, for every $t\in[0,T]$ we have \begin{equation}\notag \begin{split} &E[t,u]\leq k_2|u|^2_{H^s(\Omega)}+\|b\|_{L^\infty([0,T]\times\partial\Omega)}\|u\|^2_{L^2(\partial\Omega)}+\zeta_2\iint_{\partial\Omega\times\partial\Omega}\frac{(u(x)-u(y))^2}{|x-y|^{d+2\alpha}}\,\de\mu(x)\,\de\mu(y)\\[2mm] &\leq k_2|u|^2_{H^s(\Omega)}+\max\{\|b\|_{L^\infty([0,T]\times\partial\Omega)},\zeta_2\}\|u\|^2_{B^{2,2}_\alpha(\partial\Omega)}\\[2mm] &\leq\max\{k_2,C\|b\|_{L^\infty([0,T]\times\partial\Omega)},C\zeta_2\}\,\|u\|^2_{H^s(\Omega)}, \end{split} \end{equation} where the last inequality follows from the trace theorem.\\ We now prove the coercivity. By using again the hypotheses on $b$, $K$ and $\zeta$, for every $t\in[0,T]$ we have \begin{equation}\notag \begin{split} &E[t,u]\geq k_1|u|^2_{H^s(\Omega)}+b_0\|u\|^2_{L^2(\partial\Omega)}+\langle\Theta^t_\alpha (u),u\rangle\\[2mm] &\geq\min\{k_1,b_0\}\left(|u|^2_{H^s(\Omega)}+\|u\|^2_{L^2(\partial\Omega)}\right)\geq\beta\|u\|^2_{H^s(\Omega)}, \end{split} \end{equation} for a suitable constant $\beta>0$, where the last inequality follows from Theorem \ref{equivalenza norme}. \end{proof} \begin{prop}\label{chiusura} For every $t\in[0,T]$, $E[t,u]$ is closed on $L^2(\Omega,m)$. \end{prop} \begin{proof} For every fixed $t\in[0,T]$, we have to prove that for every sequence $\{u_k\}\subseteq H^s(\Omega)$ such that \begin{equation}\label{cauchyseq} E[t,u_k-u_j]+\|u_k-u_j\|_{L^2(\Omega,m)}\to 0\quad\text{for}\,\,k,j\to +\infty, \end{equation} there exists $u\in H^s(\Omega)$ such that \begin{center} $\displaystyle E[t,u_k-u]+\|u_k-u\|_{L^2(\Omega,m)}\to 0\quad$ for $k\to +\infty$. \end{center} This means that we should prove that \begin{equation}\label{tesi chiusura} \begin{split} &|u_k-u|_{s,K}+\int_{\partial\Omega}b(t,P)|u_k-u|^2\,\de\mu\\[2mm] &+\langle\Theta^t_{\alpha}(u_k-u),u_k-u\rangle+\|u_k-u\|_{L^2(\Omega,m)}\to 0\quad\text{for }k\to +\infty. \end{split} \end{equation} We point out that \eqref{cauchyseq} infers that $\{u_k\}$ is a Cauchy sequence in $L^2(\Omega,m)$ and, since $L^2(\Omega,m)$ is a Banach space, there exists $u\in L^2(\Omega,m)$ such that \begin{center} $\|u_k-u\|_{L^2(\Omega,m)}\xrightarrow[k\to +\infty]{} 0$. \end{center} Moreover, since $|u_k-u_j|_{s,K}+\|u_k-u_j\|_{L^2(\Omega)}$ is equivalent to the $H^s(\Omega)$-norm of $u_k-u_j$, \eqref{cauchyseq} implies that $\{u_k\}$ is a Cauchy sequence also in $H^s(\Omega)$. Since $H^s(\Omega)$ is a Banach space, then also $\|u_k-u\|^2_{H^s(\Omega)}\to 0$ when $k\to+\infty$. 
Hence, since $|u_k-u|_{s,K}\leq k_2|u_k-u|^2_{H^s(\Omega)}$, the first term on the left-hand side of \eqref{tesi chiusura} vanishes as $k\to+\infty$. \noindent From hypotheses \eqref{ipotesi b}, for every fixed $t\in [0,T]$ the thesis follows from \cite[Proposition 4.1]{JEEfraz} for the second term on the left-hand side of \eqref{tesi chiusura}. As to the term $\langle\Theta^t_{\alpha}(u_k-u),u_k-u\rangle$, we point out that from the hypotheses on $\zeta$ and the trace theorem we have \begin{equation}\notag \begin{split} \langle\Theta^t_\alpha (u_k-u),u_k-u\rangle&=\iint_{\partial\Omega\times\partial\Omega}\zeta(t,x,y)\frac{|u_k(x)-u(x)-(u_k(y)-u(y))|^2}{|x-y|^{d+2\alpha}}\,\de\mu(x)\,\de\mu(y)\\[2mm] &\leq\zeta_2\|u_k-u\|^2_{B^{2,2}_\alpha (\partial\Omega)}\leq\,C\|u_k-u\|^2_{H^s(\Omega)}, \end{split} \end{equation} and the last term tends to 0 when $k\to +\infty$ because $u_k\to u$ in $H^s(\Omega)$. \end{proof} \begin{theorem}\label{dirform} For every $t\in[0,T]$, $E[t,u]$ is Markovian, hence it is a Dirichlet form on $L^2(\Omega,m)$. \end{theorem} \medskip \noindent The proof follows by adapting the one of Theorem 3.4 in \cite{nostroMMAS}, see also \cite[Lemma 2.7]{warma2015}. \bigskip \noindent By $E(t,u,v)$ we denote the corresponding bilinear form \begin{equation}\label{frattalebilineare} E(t,u,v)=(u,v)_{s,K}+\int_{\partial\Omega} b(t,P) \uu\,\vv \, \de\mu+\langle\Theta^t_{\alpha}(\uu),\vv\rangle \end{equation} defined on $[0,T]\times H^s(\Omega)\times H^s(\Omega)$. \begin{theorem} For every $u,v\in H^s(\Omega)$ and for every $t\in [0,T]$, $E(t,u,v)$ is a closed symmetric bilinear form on $L^2(\Omega,m)$. Then there exists a unique selfadjoint non-positive operator $A(t)$ on $L^2(\Omega,m)$ such that \begin{equation}\label{corr} E(t,u,v)=(-A(t)u,v)_{L^2(\Omega,m)}\quad\text{for every }u\in D(A(t)),\,v\in H^s(\Omega), \end{equation} where $D(A(t))\subset H^s(\Omega)$ is the domain of $A(t)$ and it is dense in $L^2(\Omega,m)$. \end{theorem} \medskip \noindent For the proof we refer to Theorem 2.1, Chapter 6 in \cite{kato}. \begin{prop}\label{propertiesenergyform} For every $t\in[0,T]$, the form $E(t,u,v)$ has the square root property, i.e. $D(A(t))^{\frac{1}{2}}=H^s(\Omega)$. Moreover, there exists a constant $C>0$ such that, for every $\eta\in (\frac{1}{2},1)$, \begin{equation}\label{holderianita forma in t} |E(t,u,v)-E(\tau,u,v)|\leq C|t-\tau|^\eta\|u\|_{H^s(\Omega)}\|v\|_{H^s(\Omega)}, \quad 0\leq\tau,t\leq T. \end{equation} \end{prop} \begin{proof} The square root property follows since the form is symmetric and bounded. As to \eqref{holderianita forma in t}, we have that \begin{equation}\notag \begin{split} &|E(t,u,v)-E(\tau,u,v)|\\[2mm] &\leq\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega}|K(t,x,y)-K(\tau,x,y)|\frac{|u(x)-u(y)||v(x)-v(y)|}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y)\\[2mm] &+\int_{\partial\Omega} |b(t,P)-b(\tau,P)|\,|u(P)|\,|v(P)|\,\de\mu\\[2mm] &+\iint_{\partial\Omega\times\partial\Omega}|\zeta(t,x,y)-\zeta(\tau,x,y)|\frac{|u(x)-u(y)||v(x)-v(y)|}{|x-y|^{d+2\alpha}}\,\de\mu(x)\de\mu(y)\\[2mm] &\leq C|t-\tau|^\eta\|u\|_{H^s(\Omega)}\|v\|_{H^s(\Omega)}, \end{split} \end{equation} where the last inequality follows from the hypotheses \eqref{ipotesi b} on $b$, from \eqref{holderianita nuclei} and the trace theorem. 
\end{proof} \begin{prop}\label{propertiesoperatorA} For every $t \in [0,T]$ and $\tau\geq 0$, $A(t)\colon\D(A(t))\to L^2(\Omega,m)$ is the generator of a semigroup $e^{\tau A(t)}$ on $L^2(\Omega,m)$ which is strongly continuous, contractive and analytic with angle $\omega_{A(t)}>0$. \end{prop} \begin{proof} The analyticity follows from the coercivity of $E[t,u]$ (see Theorem 6.2, Chapter 4 in \cite{showalter}). The contraction property follows from Lumer-Phillips Theorem (see Theorem 4.3, Chapter 1 in \cite{pazy}). The strong continuity follows from Theorem 1.3.1 in \cite{fukush} \end{proof} \begin{proposition}\label{propertiesAstructutural} For every $t\in [0,T]$, the operator $A(t)$ satisfies the following \mbox{properties}: \begin{enumerate} \item[1)] the spectrum of $A(t)$ is contained in a sectorial open domain $$\sigma(A(t))\subset \Sigma_\omega=\{\mu\in \C\,:\,|\rm{Arg}\,\mu|<\omega\}$$ for some fixed angle $0<\omega<\frac{\pi}{2}$. The resolvent satisfies the estimate $$\|\left(\mu-A(t)\right)^{-1}\|_{\Lm(L^2(\Omega,m))}\leq\frac{M}{|\mu|}$$ for $M\geq 1$ independent from t and $\mu\notin \Sigma_\omega \cup 0$; moreover, $A(t)$ is invertible and $\|A(t)^{-1}\|\leq M_1$ with $M_1$ independent from $t$; \item[2)] $D(A(t))\subset D(A(\tau))^{\frac{1}{2}}=H^s(\Omega),\quad 0\leq\tau\leq t\leq T$; in particular, $D(A(t))\subset D(A(\tau))^{\nu}$ for every $\nu$ such that $0<\nu\leq \frac{1}{2}$; \item[3)] $A(t)^{-1}$ is H\"older continuous in $t$ in the sense of Yagi, i.e., \begin{equation}\label{HolderA(t)-1} \left\|A(t)^{\frac{1}{2}} \left(A(t)^{-1}-A(\tau)^{-1}\right)\right\|_{\Lm(L^2(\Omega,m))}\leq C|t-\tau|^\eta, \end{equation} with some fixed exponent $\eta\in \left(\frac{1}{2},1\right]$ and $C>0$. \end{enumerate} \end{proposition} \begin{proof} The first two properties follow from Propositions \ref{propertiesenergyform} and \ref{propertiesoperatorA}. In order to prove the H\"older continuity, one can proceed as in \cite[Chapter 3, section 7.1]{Yagi} Let $\mathcal{A}(t)\colon H^s(\Omega)\to H^{-s}(\Omega)$ denote the sectorial operator with angle $\omega_{\mathcal{A}(t)}\leq \omega_A <\frac{\pi}{2}$ associated with $E(t,u,v)$, $$E(t,u,v)=-\langle\mathcal{A}(t)u, v\rangle_{H^{-s}(\Omega),H^s(\Omega)},$$ with $u\in D(\mathcal{A}(t))=H^s(\Omega)$. Let $\phi\in H^{-s}(\Omega)$ and $u\in H^s(\Omega)$. We have that \begin{equation}\notag \begin{split} &\langle\mathcal{A}(t)[\mathcal{A}(t)^{-1}-\mathcal{A}(\tau)^{-1}]\phi,u\rangle_{H^{-s}(\Omega),H^s(\Omega)}\\ =&-\langle[\mathcal{A}(t)-\mathcal{A}(\tau)]\mathcal{A}(\tau)^{-1}\phi,u\rangle_{H^{-s}(\Omega),H^s(\Omega)}\\ =&E(t,\mathcal{A}(\tau)^{-1}\phi, u)-E(\tau,\mathcal{A}(\tau)^{-1}\phi, u). \end{split} \end{equation} From \eqref{holderianita forma in t}, we obtain \begin{equation}\notag \|\mathcal{A}(t)[\mathcal{A}(t)^{-1}-\mathcal{A}(\tau)^{-1}]\phi\|_{H^{-s}(\Omega)}\leq C|t-\tau|^\eta\|\mathcal{A}(\tau)^{-1}\|_{\Lm(H^{-s}(\Omega)\to H^{s}(\Omega))}\|\phi\|_{H^{-s}(\Omega)}. \end{equation} From \cite[page 190]{Laasri}, we have that $\|\mathcal{A}(\tau)^{-1}\|_{\Lm(H^{-s}(\Omega)\to H^{s}(\Omega))}\leq C$ for a suitable positive constant $C$. Hence, we conclude that $$\|\mathcal{A}(t)[\mathcal{A}(t)^{-1}-\mathcal{A}(\tau)^{-1}]\phi\|_{H^{-s}(\Omega)}\leq C|t-\tau|^\eta\|\phi\|_{H^{-s}(\Omega)}.$$ In order to prove condition \eqref{HolderA(t)-1}, we note that for every $z\in H^s(\Omega)$ it holds that $A^{\frac{1}{2}}(\cdot)z= \mathcal{A}^{-\frac{1}{2}}(\cdot)\mathcal{A}(\cdot)z$. 
Therefore, we have \begin{equation}\notag \begin{split} (A(t)^{\frac{1}{2}}[A(t)^{-1}-A(\tau)^{-1}]\phi,u)_{L^2(\Omega,m)}= E\left(t,A(\tau)^{-1}\phi,({\mathcal A}(t)^{-\frac{1}{2}})'u\right)\\ -E\left(\tau,A(\tau)^{-1}\phi,({\mathcal A}(t)^{-\frac{1}{2}})'u\right). \end{split} \end{equation} Since adjoint operators have the same norm, taking into account \cite[page 190]{Laasri}, condition \eqref{HolderA(t)-1} holds. \end{proof} \medskip \noindent From the above results we deduce the following. \begin{theo}\label{theorem1} For every $t\in [0,T]$, let $A(t)\colon D(A(t))\to L^2(\Omega,m)$ be the linear unbounded operator defined in \eqref{corr}. Then there exists a unique family of evolution operators $U(t,\tau)\in \Lm(L^2(\Omega,m))$ such that \begin{enumerate}\label{propU} \item[1)] $U(\tau,\tau)=\rm{Id},\quad 0\leq\tau\leq T$; \item[2)] $U(t,\tau)U(\tau,\sigma)=U(t,\sigma),\quad 0\leq\sigma\leq\tau\leq t\leq T$; \item[3)] for every $0\leq\tau\leq t\leq T$ one has \begin{equation}\label{estimatesU} \|U(t,\tau)\|_{\Lm(L^2(\Omega,m))}\leq 1, \end{equation} and $U(t,\tau)$ for $0\leq\tau\leq t<T$ is a strongly contractive family on $L^2(\Omega,m)$; \item[4)] the map $t\mapsto U(t,\tau)$ is differentiable in $(\tau,T]$ with values in $\Lm(L^2(\Omega,m))$ and $\frac{\partial U(t,\tau)}{\partial t}= A(t)U(t,\tau)$; \item[5)] $A(t)U(t,\tau)$ is a $\Lm(L^2(\Omega,m))$-valued continuous function for $0\leq\tau<t\leq T$. Moreover, there exists a constant $C>0$ such that \begin{equation}\label{estimate AU} \|A(t)U(t,\tau)\|_{\Lm(L^2(\Omega,m))}\leq\frac{C}{t-\tau},\quad 0\leq\tau<t<T. \end{equation} \end{enumerate} \end{theo} \noindent For the proof, see Section 5.3 in \cite{Yagi}. We now consider the abstract Cauchy problem \begin{equation}\label{cauchyproblem for U} \begin{cases} \frac{\partial u(t)}{\partial t}=A(t)u(t)\quad\text{for $t\in(0,T]$},\\ u(0)=u_0, \end{cases} \end{equation} with $u_0\in L^2(\Omega,m)$. \begin{theo} For every $u_0\in L^2(\Omega,m)$ there exists a unique $u\in C([0,T]; L^2(\Omega,m))\cap C^1((0,T]; L^2(\Omega,m))$, such that $A(t)u\in C((0,T];L^2(\Omega,m))$ and $$\|u(t)\|_{L^2(\Omega,m)}+t\left\|\frac{\partial u(t)}{\partial t}\right\|_{L^2(\Omega,m)}+t\|A(t)u(t)\|_{L^2(\Omega,m)}\leq C\|u_0\|_{L^2(\Omega,m)},$$ for $0<t\leq T$, where $C$ is a positive constant. Moreover, one has $u(t)=U(t,0)u_0$. \end{theo} \noindent For the proof, see Theorem 3.9 in \cite{Yagi}. \section{Ultracontractivity property}\label{sec4} \setcounter{equation}{0} In this section we investigate the regularity of the evolution family $U(t,\tau)$. We set $\Xi:=\{(t,\tau)\in(0,T)^2\,:\,\tau<t\}$. We recall some useful definitions, specialized to our setting. \begin{definition}\label{proprieta U} An evolution family $\{U(t,\tau)\}_{(t,\tau)\in\Xi}$ on $L^2(\Omega,m)$ is \begin{itemize} \item[1)] \emph{positive preserving} if for every $0\leq u\in L^2(\Omega,m)$ one has $U(t,\tau)u\geq 0$ for every $(t,\tau)\in\Xi$; \item[2)] \emph{$L^p$-contractive}, for $1\leq p\leq+\infty$, if $U(t,\tau)$ maps the set $\{u\in L^2(\Omega,m)\cap L^p(\Omega,m)\,:\,\|u\|_{L^p(\Omega,m)}\leq 1\}$ into itself for every $(t,\tau)\in\Xi$; \item[3)] \emph{completely contractive} if it is both $L^1$-contractive and $L^\infty$-contractive; \item[4)] \emph{sub-Markovian} if it is positive preserving and $L^\infty$-contractive; \item[5)] \emph{Markovian} if it is sub-Markovian and $\|U(t,\tau)\|_{\Lm(L^\infty(\Omega,m))}=1$. 
\end{itemize} \end{definition} \noindent We point out that, since for every $t\in[0,T]$ the energy form $E[t,u]$ is Markovian by Theorem \ref{dirform}, the associated evolution family $U(t,\tau)$ is Markovian. In particular, this implies that the evolution family $U(t,\tau)$ is positive preserving and $L^\infty$-contractive. Moreover, since for every $t\in[0,T]$ the bilinear form $E(t,u,v)$ is symmetric, it follows that $U(t,\tau)$ is also $L^1$-contractive, hence the evolution family is completely contractive. Therefore, by the Riesz-Thorin theorem, $\{U(t,\tau)\}_{(t,\tau)\in\Xi}$ is $L^p$-contractive for every $p\in[1,+\infty]$ and the following result holds. \begin{theo}\label{theorem3} For every $p\in [1,+\infty]$ there exists an operator $U_p(t,\tau)\in\Lm(L^p(\Omega,m))$ such that $$U_p(t,\tau)u_0=U(t,\tau)u_0\quad\text{for every }(t,\tau)\in\Xi\,,\,\text{for every }u_0\in L^p(\Omega,m)\cap L^2(\Omega,m).$$ Moreover, for every $\tau\geq 0$ the map $U_p(\cdot,\tau)$ is strongly continuous from $(\tau,\infty)$ to $\Lm(L^p(\Omega,m))$ for every $t\geq\tau$ and \begin{equation}\label{Stima Contrazione U} \|U_p(t,\tau)\|_{\Lm(L^p(\Omega,m))}\leq 1\quad\text{for every }p\geq 1. \end{equation} \end{theo} \bigskip \noindent We now prove the ultracontractivity of the evolution family $U(t,\tau)$. \begin{theo}\label{ultracontr} The evolution operator $U(t,\tau)$ is ultracontractive, i.e., for every $f\in L^1(\Omega,m)$ and $(t,\tau)\in\Xi$, \begin{equation}\label{stima ultracontr} \|U_1(t,\tau)f(\tau)\|_{L^\infty(\Omega,m)}\leq\left(\frac{\lambda\bar{C}}{2\beta}\right)^\frac{\lambda}{2} (t-\tau)^{-\frac{\lambda}{2}}\|f(\tau)\|_{L^1(\Omega,m)}, \end{equation} where we recall that $\lambda=\frac{2d}{d-N+2s}$, $\bar{C}$ is the positive constant depending on $N$, $s$, $d$ and $\Omega$ appearing in \eqref{Nashineq} and $\beta>0$ is the coercivity constant of $E$. \end{theo} \begin{proof} We adapt to our setting the proof of \cite[Theorem 4.3]{mugnolo}, see also \cite[Proposition 3.8]{arendtelst}. Let $f\in H^s(\Omega)$ and let $\tau\in [0,T)$ be fixed. From \cite[Proposition III.1.2]{showalter} it holds that \begin{equation}\notag \frac{\de}{\de t}\|F(\cdot)\|_{L^2(\Omega,m)}^2=2\left(\frac{\de F(\cdot)}{\de t},F(\cdot)\right)_{L^2(\Omega,m)}\quad\text{for every }F\in H^s(\Omega). \end{equation} \noindent We remark that, if $f\in H^s(\Omega)$, then $f\in L^1(\Omega,m)$. Hence, for every $f\in H^s(\Omega)$ and a.e. $(t,\tau)\in\Xi$, from Theorem \ref{theorem1}, \eqref{corr} and the coercivity of $E[t,u]$ we have that \begin{equation}\notag \begin{split} \frac{\partial}{\partial t}\|U(t,\tau)f\|_{L^2(\Omega,m)}^2&=2\left(\frac{\partial U(t,\tau)f}{\partial t},U(t,\tau)f\right)_{L^2(\Omega,m)}=2\left(A(t)U(t,\tau)f,U(t,\tau)f\right)_{L^2(\Omega,m)}\\[2mm] &=-2E[t,U(t,\tau)f]\leq -2\beta\|U(t,\tau)f\|^2_{H^s(\Omega)}, \end{split} \end{equation} where $\beta$ is the (positive) coercivity constant of $E$. Then, from Nash inequality \eqref{Nashineq}, recalling that $\lambda=\frac{2d}{d-N+2s}$, it follows that \begin{equation}\label{ultracont1} \frac{\partial}{\partial t}\|U(t,\tau)f\|_{L^2(\Omega,m)}^2\leq -\frac{2\beta}{\bar{C}}\|U(t,\tau)f\|_{L^2(\Omega,m)}^{2+\frac{4}{\lambda}}\|U(t,\tau)f\|_{L^1(\Omega,m)}^{-\frac{4}{\lambda}}, \end{equation} where $\bar{C}$ is the positive constant in \eqref{Nashineq} depending on $N$, $s$, $d$ and $\Omega$. 
Therefore, since $U(t,\tau)$ is completely contractive, from \eqref{ultracont1} we have \begin{equation}\label{ultracont2} \begin{split} &\frac{\partial}{\partial t}\left(\|U(t,\tau)f\|_{L^2(\Omega,m)}^2\right)^{-\frac{2}{\lambda}}=-\frac{2}{\lambda}\|U(t,\tau)f\|_{L^2(\Omega,m)}^{-2-\frac{4}{\lambda}}\frac{\partial}{\partial t}\|U(t,\tau)f\|_{L^2(\Omega,m)}^2\\[2mm] &\geq\frac{4\beta}{\lambda\bar{C}}\|U(t,\tau)f\|_{L^1(\Omega,m)}^{-\frac{4}{\lambda}}\geq\frac{4\beta}{\lambda\bar{C}}\|f\|_{L^1(\Omega,m)}^{-\frac{4}{\lambda}}. \end{split} \end{equation} Then, integrating \eqref{ultracont2} between $\tau$ and $t$, we get \begin{equation}\notag \|U(t,\tau)f\|_{L^2(\Omega,m)}^{-\frac{4}{\lambda}}\geq\frac{4\beta}{\lambda\bar{C}}\|f\|_{L^1(\Omega,m)}^{-\frac{4}{\lambda}}(t-\tau), \end{equation} which in turn implies that \begin{equation}\label{ultracont L1-L2} \|U(t,\tau)\|_{\Lm(L^1(\Omega,m)\to L^2(\Omega,m))}\leq\left(\frac{\lambda\bar{C}}{4\beta}\right)^\frac{\lambda}{4}(t-\tau)^{-\frac{\lambda}{4}}. \end{equation} In order to complete the proof, we need to prove an analogous bound by considering $U(t,\tau)$ as an operator from $L^2(\Omega,m)$ to $L^\infty(\Omega,m)$. We point out that, since $E(t,v,u)=E(t,u,v)$ for every $t\in [0,T]$ and $u,v\in H^s(\Omega)$, the evolution operators associated with the two forms coincide with $U(t,\tau)$. Then, from (2.22) in \cite{Daners}, we have that for $p\geq 1$ the adjoint operator $(U_p(t,\tau))^\prime$ is equal to $U_{p^\prime}(T-\tau,T-t)$ for every $0\leq\tau\leq t\leq T$. \noindent We now compute \begin{equation}\label{ultracont L2-Linf} \begin{split} &\|U_2(t,\tau)\|_{\Lm(L^2(\Omega,m)\to L^\infty(\Omega,m))}=\|(U_2(T-\tau,T-t))^\prime\|_{\Lm(L^2(\Omega,m)\to L^\infty(\Omega,m))}\\ &=\|U_2(T-\tau,T-t)\|_{\Lm(L^1(\Omega,m)\to L^2(\Omega,m))}=\|U(T-\tau,T-t)\|_{\Lm(L^1(\Omega,m)\to L^2(\Omega,m))}\\ &\leq\left(\frac{\lambda\bar{C}}{4\beta}\right)^\frac{\lambda}{4}(t-\tau)^{-\frac{\lambda}{4}}, \end{split} \end{equation} where the last inequality follows from \eqref{ultracont L1-L2}. Now, from 2) in Theorem \ref{theorem1}, combining \eqref{ultracont L1-L2} and \eqref{ultracont L2-Linf}, it finally holds that \begin{equation}\label{ultracont L1-Linf} \begin{split} &\|U_1(t,\tau)\|_{\Lm(L^1(\Omega,m)\to L^\infty(\Omega,m))}=\left\|U_1\left(t,\frac{t+\tau}{2}\right)U_1\left(\frac{t+\tau}{2},\tau\right)\right\|_{\Lm(L^1(\Omega,m)\to L^\infty(\Omega,m))}\\[2mm] &\leq\left\|U\left(t,\frac{t+\tau}{2}\right)\right\|_{\Lm(L^2(\Omega,m)\to L^\infty(\Omega,m))}\left\|U_1\left(\frac{t+\tau}{2},\tau\right)\right\|_{\Lm(L^1(\Omega,m)\to L^2(\Omega,m))}\\[2mm] &\leq\left(\frac{\lambda\bar{C}}{2\beta}\right)^\frac{\lambda}{2}(t-\tau)^{-\frac{\lambda}{2}}. 
\end{split} \end{equation} \end{proof} \bigskip \begin{theo}\label{theorem2} Under the hypotheses of Theorem \ref{theorem1}, the evolution operator $U(t,\tau)$ associated with the family $A(t)$ satisfies the following properties, \begin{enumerate} \item[1)] for every $\theta$ such that $0\leq\theta<\eta+\frac{1}{2}$ and $0\leq\tau<t\leq T$, $$\mathcal{R}(U(t,\tau))\subset D(A(t)^\theta);$$ \item[2)] for $0\leq \tau<t\leq T$, $$\left\|A(t)^\theta U(t,\tau)\right\|_{\Lm(L^{2}(\Omega,m))}\leq C_\theta (t-\tau)^{-\theta};$$ \item[3)] for $0<\xi<\gamma<\eta+\frac{1}{2}$, $$\left\|A(t)^{\gamma} U(t,\tau) A(\tau)^{-\xi}\right\|_{\Lm(L^{2}(\Omega,m))} \leq C_\gamma (t-\tau)^{\xi-\gamma};$$ \item[4)] for $\tau>0$ and $0<\xi<1$, $$\left\|[U(t+\tau,t)-U(t,t)] A(t)^{-\xi}\right\|_{\Lm(L^{2}(\Omega,m))}\leq C\tau^\xi.$$ \end{enumerate} \end{theo} \begin{proof} For the proof of properties 1) to 3) we refer to Section 8.1 in \cite{Yagi}. We prove 4). From Theorem \ref{theorem1}, for every $\epsilon>0$ we have \begin{equation}\notag \begin{split} &\left\|[U(t+\tau,t)-U(t+\epsilon,t)] A(t)^{-\xi}\right\|_{\Lm(L^{2}(\Omega,m))}=\left\|\,\,\int_{t+\epsilon}^{t+\tau} \frac{\partial U(\sigma,t)}{\partial\sigma} A(t)^{-\xi}\,\de\sigma\right\|_{\Lm(L^{2}(\Omega,m))}\\[2mm] &=\left\|\,\,\int_{t+\epsilon}^{t+\tau} A(\sigma)U(\sigma,t)A(t)^{-\xi}\,\de\sigma\right\|_{\Lm(L^{2}(\Omega,m))} \leq\int_{t+\epsilon}^{t+\tau}\left\|A(\sigma)U(\sigma,t)A(t)^{-\xi}\right\|_{\Lm(L^{2}(\Omega,m))}\,\de\sigma\\[2mm] &\leq C\int_{t+\epsilon}^{t+\tau}|\sigma-t|^{\xi-1}\,\de\sigma=\frac{C}{\xi}(\tau^\xi-\epsilon^\xi), \end{split} \end{equation} where the last inequality follows by property 3). The conclusion then follows by passing to the limit as $\epsilon\to 0^+$ and taking into account that $U(t,\tau)$ is strongly continuous. \end{proof} \begin{remark} The above properties still hold for the family of evolution operators extended to $L^p(\Omega,m)$. \end{remark} \section{The semilinear problem}\label{sec5} \setcounter{equation}{0} We recall the properties of the abstract inhomogeneous Cauchy problem \begin{equation}\label{nonhomogeneous cauchyproblem for U} \begin{cases} \frac{\partial u(t)}{\partial t}=A(t)u(t)+f(t)\quad\text{for $t\in(0,T]$},\\ u(0)=\phi, \end{cases} \end{equation} where $A(t)$ satisfies Theorem \ref{propertiesAstructutural}, $\phi\in L^2(\Omega,m)$ and $f\in C^{0,\vartheta}([0,T],L^2(\Omega,m))$. \begin{theo}\label{TheoremesistenzayagiF} For every $\phi\in L^2(\Omega,m)$ and $f\in C^{0,\vartheta}([0,T], L^2(\Omega,m))$ there exists a unique $u(t)\in C([0,T]; L^2(\Omega,m))\cap C^1((0,T]; L^2(\Omega,m))$, with $A(t)u\in C((0,T]; L^2(\Omega,m))$, which satisfies \eqref{nonhomogeneous cauchyproblem for U}. Moreover, for $0<t\leq T$, one has $$\|u(t)\|_{L^2(\Omega,m)}+t\left\|\frac{\partial u(t)}{\partial t}\right\|_{L^2(\Omega,m)} + t \|A(t)u(t)\|_{L^2(\Omega,m)}\leq C(\|\phi\|_{L^2(\Omega,m)}+\|f\|_{C^{0,\vartheta}([0,T],L^2(\Omega,m))}),$$ where $C$ is a positive constant depending on the constant in \eqref{estimate AU}. Finally, $$u(t)=U(t,0)\phi+ \int_{0}^{t}U(t,\sigma)f(\sigma)\,\de\sigma.$$ \end{theo} \noindent For the proof, see Theorem 3.9 in \cite{Yagi}. 
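\medskip \noindent Before turning to the semilinear problem, the objects of Theorems \ref{theorem1} and \ref{TheoremesistenzayagiF} can be illustrated on a finite-dimensional toy model. In the Python sketch below (a minimal illustration, not an approximation of $A(t)$ on $L^2(\Omega,m)$), the evolution family of a time-dependent matrix generator is approximated by a product of short-time exponentials and the mild solution is computed from the variation-of-constants formula; the generator, the inhomogeneity and the step numbers are arbitrary choices for the example.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def U(A, t, tau, n=200):
    # Approximate evolution operator of u'(t) = A(t) u(t):
    #   U(t, tau) ~ expm(h * A(s_n)) ... expm(h * A(s_1)),  h = (t - tau)/n,
    # with s_k the midpoint of the k-th subinterval (frozen-coefficient scheme).
    h = (t - tau) / n
    M = np.eye(A(tau).shape[0])
    for k in range(n):
        M = expm(h * A(tau + (k + 0.5) * h)) @ M
    return M

def mild_solution(A, f, phi, t, n=100):
    # u(t) = U(t,0) phi + int_0^t U(t,s) f(s) ds, midpoint rule for the integral.
    h = t / n
    u = U(A, t, 0.0) @ phi
    for k in range(n):
        s = (k + 0.5) * h
        u = u + h * (U(A, t, s) @ f(s))
    return u

# Toy data: a symmetric negative definite generator and a smooth inhomogeneity.
A = lambda t: -np.array([[2.0 + t, 1.0], [1.0, 3.0 + t]])
f = lambda t: np.array([np.sin(t), np.cos(t)])
phi = np.array([1.0, 0.0])
print(mild_solution(A, f, phi, 1.0))
# Sanity checks: U(tau, tau) = Id and the two-parameter (cocycle) property.
print(np.allclose(U(A, 0.2, 0.2), np.eye(2)))
print(np.allclose(U(A, 0.7, 0.2) @ U(A, 0.2, 0.0), U(A, 0.7, 0.0), atol=1e-6))
\end{verbatim}
\noindent The two printed checks correspond to properties 1) and 2) of Theorem \ref{theorem1}, while the contraction estimate \eqref{estimatesU} is reflected in the example by the fact that the chosen generator is negative definite.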
\subsection{Local existence} We now consider the abstract semilinear Cauchy problem \begin{equation}\label{eq:5.1} (P)\begin{cases} \frac{\partial u(t)}{\partial t}=A(t)u(t)+J(u(t))\quad\text{for $t\in[0,T]$},\\ u(0)=\phi, \end{cases} \end{equation} where $A(t)\colon D(A(t))\subset L^2(\Omega,m)\to L^2(\Omega,m)$ is the family of operators associated to the energy form $E[t,u]$ introduced in \eqref{frattale} and $\phi$ is a given function in $L^2(\Omega,m)$. We assume that for every $t\in [0,T]$ $J$ is a mapping from $L^{2p}(\Omega,m)$ to $L^2(\Omega,m)$ for $p>1$ which is locally Lipschitz, i.e., it is Lipschitz on bounded sets in $L^{2p}(\Omega,m)$, \begin{equation}\label{LIPJ} \|J(u)-J(v)\|_{L^2(\Omega,m)}\leq\sl{l(r)}\|u-v\|_{L^{2p}(\Omega,m)} \end{equation} whenever $\|u\|_{L^{2p}(\Omega,m)}\leq r,\|v\|_{L^{2p}(\Omega,m)}\leq r$, where $\sl{l(r)}$ denotes the Lipschitz constant of $J$. We also assume that $J(0)=0$. This assumption is not necessary in all that follows, but it simplifies the calculations (see \cite{weiss1}). In order to prove the local existence theorem, we make the following assumption on the growth of $\sl{l(r)}$ when $r\to+\infty$, \begin{equation}\label{crescita l} \mbox{Let}\; a:=\frac{\lambda}{4}\left(1-\frac{1}{p}\right); \quad \mbox{there exists}\,\,0<b<a\,:\,\sl{l(r)}= {\mathcal{O}}(r^\frac{1-a}{b}),\,r\to+\infty, \end{equation} where $\lambda$ is defined in Proposition \ref{Nash}. We note that $0<a<1$ for $N-2s\leq\frac{d}{2}$ and $p>1$. Let $p>1$. Following the approach in Theorem 2 in \cite{weiss1} and adapting the proof of Theorem 5.1 in \cite{La-Ve3}, we have the following result. \begin{theorem}\label{theoesloc} Let condition \eqref{crescita l} hold. Let $\kappa>0$ be sufficiently small, $\phi\in L^2(\Omega,m)$ and \begin{equation}\label{cnidato} \limsup_{t\to 0^+}\|t^b U(t,0)\phi\|_{L^{2p}(\Omega,m)}<\kappa. \end{equation} Then there exists a $\overline{T}>0$ and a unique mild solution \begin{equation}\notag u\in C([0,\overline{T}],L^2(\Omega,m))\cap C((0,\overline{T}],L^{2p}(\Omega,m)), \end{equation} with $u(0)=\phi$ and $\|t^b u(t)\|_{L^{2p}(\Omega,m)}<2\kappa$, satisfying, for every $t\in [0,\overline{T}]$, \begin{equation}\label{rappint1} u(t)= U(t,0) \phi +\int_0^t U(t,\tau) J(u(\tau))\,\de\tau, \end{equation} with the integral being both an $L^2$-valued and an $L^{2p}$-valued Bochner integral. \end{theorem} \begin{proof} The proof is based on a contraction mapping argument on suitable spaces of continuous functions with values in Banach spaces. We adapt the proof of Theorem 5.1 in \cite{La-Ve3} to this functional setting; for the reader's convenience, we sketch it.\\ Let $Y$ be the complete metric space defined by \begin{equation}\label{spazioY} \begin{split} Y=&\left\{u\in C([0,\overline{T}],L^2(\Omega,m))\cap C((0,\overline{T}], L^{2p}(\Omega,m))\,:\,u(0)=\phi,\right.\\[2mm] &\left.\|t^b u(t)\|_{L^{2p}(\Omega,m)}<2\kappa \mbox{ for every} \;t\in [0,\overline{T}]\right\}, \end{split} \end{equation} equipped with the metric $$d(u,v)=\max\left\{\|u-v\|_{C([0,\overline{T}],L^2(\Omega,m))},\,\sup_{(0,\overline{T}]}t^b \|u(t)-v(t)\|_{L^{2p}(\Omega,m)}\right\}.$$ For $w\in Y$, let $\mathcal{F}w(t)=U(t,0)\phi +\int_0^t U(t,\tau) J(w(\tau))\,\de\tau$. Then obviously $\mathcal{F}w(0)=\phi$ and, by using arguments similar to those used in \cite[proof of Lemma 2.1]{weiss3}, we can prove that, for $w\in Y$, $\mathcal{F}w\in C([0,\overline{T}],L^2(\Omega,m))\cap C((0,\overline{T}],L^{2p}(\Omega,m))$. 
Then, by proceeding as in the proof of Theorem 5.1 of \cite{La-Ve3}, we prove that \begin{equation}\label{tbFu} \limsup_{t\rightarrow 0^+}\|t^b\mathcal{F}w(t)\|_{L^{2p}(\Omega,m)}<2\kappa \;\mbox{ for every}\; t\in [0,\overline{T}]. \end{equation} Hence, $\mathcal{F}\colon Y\to Y$ and, by choosing suitably $\overline{T}$ and $\kappa$, we prove that it is a strict contraction. \end{proof} \begin{remark}\label{remteoesloc} If $J(u)= |u|^{p-1} u$, then $\sl{l(r)}= {\mathcal{O}}(r^{p-1})$ when $r\rightarrow+\infty$. Thus condition \eqref{crescita l} is satisfied for $b=\frac{1}{p-1}-\frac{\lambda}{4p}$ with $p>1+\frac{4}{\lambda}$. \end{remark} We recall that, from Theorem \ref{theorem2}, we have that $\mathcal{R}(U(t,\tau))\subset D(A(t))$ for every $0<\tau\leq t$ and we can prove that the following regularity result holds (see also \cite[Theorem 5.3]{La-Ve3}). \begin{theorem}\label{theoregloc} Let the assumptions of Theorem \ref{theoesloc} hold. \begin{itemize} \item[a)] Let also condition \eqref{crescita l} hold. Then, the solution $u(t)$ can be continuously extended to a maximal interval $(0,T_\phi)$ as a solution of \eqref{rappint1}, until $\|u(t)\|_{L^{2p}(\Omega,m)}<\infty$; \item[b)] one has that $$u\in C([0,T_\phi),L^2(\Omega,m))\cap C((0,T_\phi), L^{2p}(\Omega,m))\cap C^1((0,T_\phi),L^2(\Omega,m)),$$ $$ Au(t)\in\;C((0,T_\phi);\;L^2(\Omega,m))$$ and $u$ satisfies \begin{equation*} \frac{\partial u(t)}{\partial t}=A(t)u(t)+J(u(t)) \quad\mbox{for every}\;\; t \in (0,T_\phi) \end{equation*} and $u(0)=\phi$ (that is, $u$ is a classical solution). \end{itemize} \end{theorem} \begin{proof} For the proof of condition a), we follow \cite[Theorem 2]{weiss1}. From the proof of Theorem \ref{theoesloc}, it turns out that the minimum existence time for the solution to the integral equation is as long as $\|t^b U(t,\tau)\phi\|_{L^{2p}(\Omega,m)}\leq\kappa$ (see also Corollary 2.1. in \cite{weiss1}). \noindent To prove that the mild solution is classical, we use the classical regularity results for linear equations (see Theorem \ref{TheoremesistenzayagiF}) by proving that $J(u)\in C^{0,\vartheta}((0,T],L^2(\Omega,m))$ for any fixed $T<T_\phi$. Taking into account the local Lipschitz continuity of $J(u)$, it is enough to show that $u(t)$ is H\"older continuous from $(\epsilon, T)$ into $L^{2p}(\Omega,m)$ for every $\epsilon>0$. Let $\psi= u(\epsilon)$ and we set $w(t)= U(t,0)\psi +\int_0^t U(t,\tau) J(w(\tau))\,\de\tau$. If we prove that $$w(t)\in C^0([0,T];\;L^{2p}(\Omega,m))\cap C^1([0,T]), L^2(\Omega,m))$$ and $$A(t)w\;\in C([0,T];\;L^2(\Omega,m)),$$ then, as $u(t+\epsilon)= w(t)$ due to the uniqueness of the solution of \eqref{rappint1}, we deduce that $$u(t) \in C^1([\epsilon,T+\epsilon);\;L^2(\Omega,m))\cap C([\epsilon,T+\epsilon), L^{2p}(\Omega,m))$$ and $$A(t)u(t)\in\;C([\epsilon,T+\epsilon);\;L^2(\Omega,m))$$ for every $\epsilon>0$, hence $u(t)$ is a classical solution (see claim b)).\\ Let $\sup_{t\in (0,T)}\|w\|_{L^{2p}(\Omega,m)}\leq r$. Since $U(t,0)$ is differentiable in $(\epsilon, T)$, then it is H\"older continuous for any exponent $\gamma\in (0,1)$. We now prove that $$v(t)=\int_0^t U(t,\tau) J(w(\tau))\,\de\tau$$ is H\"older continuous too. Let $0\leq t\leq t+\sigma\leq T$; then \begin{equation}\notag \begin{split} &v(t+\sigma)-v(t)=\int_{0}^{t+\sigma}U(t+\sigma,\tau)J(w(\tau))\,\de\tau-\int_{0}^{t} U(t,\tau)J(w(\tau))\,\de\tau\\[2mm] &=\int_{0}^{t}(U(t+\sigma,\tau)-U(t,\tau))J(w(\tau))\,\de\tau+\int_{t}^{t+\sigma}U(t+\sigma,\tau)J(w(\tau))\,\de\tau=:v_1(t)+v_2(t). 
\end{split} \end{equation} For the function $v_1$, for $0<\gamma<1$ it holds that \begin{equation}\notag \begin{split} &\|v_1(t)\|_{L^{2p}(\Omega,m)}\leq\int_{0}^{t}\|(U(t+\sigma,t)U(t,\tau)-U(t,\tau)) J(w(\tau))\|_{L^{2p}(\Omega,m)}\,\de\tau\\[2mm] &=\int_{0}^{t}\|(U(t+\sigma,t)-\textrm{Id})A(t)^{-\gamma}A(t)^{\gamma}U(t,\tau) J(w(\tau))\|_{L^{2p}(\Omega,m)}\,\de\tau\\[2mm] &=\int_{0}^{t}\left\|\left(\int_{t}^{t+\sigma}A(\xi)U(\xi,t)A(t)^{-\gamma}\,\de\xi\right) A(t)^{\gamma}U(t,\tau) J(w(\tau))\right\|_{L^{2p}(\Omega,m)}\,\de\tau\\[2mm] &\leq\int_{0}^{t}\left(\int_{t}^{t+\sigma}\left\|A(\xi)U(\xi,t)A(t)^{-\gamma}\right\|_{\Lm(L^{2p}(\Omega,m))}\,\de\xi\right)\cdot\\[2mm] &\cdot\left\|A(t)^{\gamma}U\left(t,\frac{\tau+t}{2}\right)U\left(\frac{\tau+t}{2},\tau\right)J(w(\tau))\right\|_{L^{2p}(\Omega,m)}\,\de\tau. \end{split} \end{equation} From the ultracontractivity of $U(t,\tau)$ and the Riesz-Thorin theorem, one has \begin{equation}\label{RT} \|U(t,\tau)\|_{\Lm(L^2(\Omega,m)\to L^{2p}(\Omega,m))}\leq C\left((t-\tau)^{-\frac{\lambda}{4}}\right)^{1-\frac{1}{p}}, \end{equation} where we recall that $\lambda=\frac{2d}{d-N+2s}$ and $C$ is a positive constant depending on $N$, $s$, $d$, $p$ and $\Omega$.\\ Then, taking into account \eqref{RT} and parts 2) and 3) of Theorem \ref{theorem2}, we have \begin{equation}\notag \begin{split} &\|v_1(t)\|_{L^{2p}(\Omega,m)}\leq C\int_{0}^{t}\left(\int_{t}^{t+\sigma}|\xi-t|^{\gamma-1}\,\de\xi\right)\left\|A^\gamma(t)U\left(t,\frac{\tau+t}{2}\right)\right\|_{\Lm(L^{2p}(\Omega,m))}\cdot\\[2mm] &\cdot\left\|U\left(\frac{\tau+t}{2},\tau\right)J(w(\tau))\right\|_{L^{2p}(\Omega,m)}\de\tau\\[2mm] &\leq \tilde{C}\int_{0}^{t} \frac{\sigma^\gamma}{\gamma}\left(\frac{t-\tau}{2}\right)^{-\gamma} \left(\frac{t-\tau}{2}\right)^{-\frac{\lambda}{4}\left(1-\frac{1}{p}\right)} \|J(w(\tau))\|_{L^2(\Omega,m)}\,\de\tau\\[2mm] &\leq \tilde{C}\int_{0}^{t} \frac{\sigma^\gamma}{\gamma} \left(\frac{t-\tau}{2}\right)^{-\gamma}\left(\frac{t-\tau}{2}\right)^{-a} \sl{l(r)} r\,\de\tau, \end{split} \end{equation} where $\tilde{C}$ is a positive constant depending on the constant in \eqref{RT} and $\gamma$. If we choose $\gamma<1-a$, we obtain $\|v_1(t)\|_{L^{2p}(\Omega,m)}\leq C\sigma^\gamma$, for a suitable positive constant $C$ depending also on $T$ and $r$. As to the function $v_2$, using again \eqref{RT} we have \begin{equation}\notag \begin{split} &\|v_2(t)\|_{L^{2p}(\Omega,m)}\leq \int_t^{t+\sigma}\|U(t+\sigma,\tau)J(w(\tau))\|_{L^{2p}(\Omega,m)}\,\de\tau\\[2mm] &=\int_{t}^{t+\sigma}\|U(t+\sigma,\tau)\|_{\Lm(L^2(\Omega,m)\to L^{2p}(\Omega,m))}\|J(w(\tau))\|_{L^2(\Omega,m)}\,\de\tau\leq \tilde{C}\frac{\sigma^{1-a}}{1-a} \sl{l(r)} r \leq C\sigma^{1-a}, \end{split} \end{equation} for a suitable positive constant $C$ depending on the constant in \eqref{RT}, $r$ and $a$. Therefore, if $\gamma<1-a$, $v(t)$ is H\"older continuous on $[0,T]$ with exponent $\gamma$. \end{proof} \subsection{Global existence} We now give a sufficient condition on the initial datum for obtaining a global solution, by adapting Theorem 3 (b) in \cite{weiss2}. \begin{theorem}\label{theoesglob} Let condition \eqref{crescita l} hold. Let $q:=\frac{2\lambda p}{\lambda+4pb}$, $\phi\in L^q(\Omega,m)$ and $\|\phi\|_{L^q(\Omega,m)}$ be sufficiently small. Then there exists $u\in C([0,\infty), L^q(\Omega,m))$ which is a global solution of \eqref{rappint1}. 
\end{theorem} \begin{proof} Since $q<2p$, as in \eqref{RT} from the ultracontractivity of $U(t,\tau)$ and the Riesz-Thorin theorem it follows that $U(t,\tau)$ is a bounded operator from $L^q(\Omega,m)$ to $L^{2p}(\Omega,m)$ with $$\|U(t,\tau)\|_{\Lm(L^q(\Omega,m)\to L^{2p}(\Omega,m))}\leq M (t-\tau)^{-\frac{\lambda}{2}\left(\frac{1}{q}-\frac{1}{2p}\right)}\equiv M (t-\tau)^{-b},$$ where $M$ is a positive constant depending $N$, $s$, $d$, $p$, $b$ and $\Omega$. Hence, we have that $$ \|t^b U(t,0)\phi\|_{L^{2p}(\Omega,m)}\leq M \|\phi\|_{L^q(\Omega,m)};$$ by choosing $\|\phi\|_{L^q(\Omega,m)}$ sufficiently small, from Theorem \ref{theoesloc} we have that there exists a local solution of \eqref{rappint1} $u \in C([0,T], L^q(\Omega,m))$. Furthermore, from Theorem \ref{theoesloc} we also have that $u \in C((0,T],L^{2p}(\Omega,m))$ and $ \|t^b u(t) \|_{L^{2p}(\Omega,m)}\leq 2M\|\phi\|_{L^q(\Omega,m)}$. From Theorem \ref{theoregloc} a), if we prove that $\|u(t)\|_{L^{2p}(\Omega,m)}$ is bounded for every $t>0$, then $u(t)$ is a global solution. We will prove that $\|t^b u(t)\|_{L^{2p}(\Omega,m)}$ is bounded for every $t>0$, and we will use the notations of the proof of Theorem \ref{theoesloc}.\\ We choose $\Lambda>0$ such that $l(r)\leq\Lambda r^{\frac{1-a}{b}}$ for $r\geq 1$. Then \begin{equation}\notag \begin{split} &\|t^b u(t)\|_{L^{2p}(\Omega,m)}\leq M\|\phi\|_{L^q(\Omega,m)}+ t^b \int_0^t \|U(t,\tau)\|_{\Lm(L^2(\Omega,m)\to L^{2p}(\Omega,m))}\|J(u(\tau))\|_{L^2(\Omega,m)}\,\de\tau\\[2mm] &\leq M\|\phi\|_{L^q(\Omega,m)}+ M\Lambda\left(2M \|\phi\|_{L^q(\Omega,m)}\right)^{\frac{1-a}{b}} t^b \int_0^t (t-\tau)^{-a} \tau^{a-1-b}\|\tau^b u(\tau)\|_{L^{2p}(\Omega,m)}\,\de\tau\\[2mm] &\leq M\|\phi\|_{L^q(\Omega,m)} +M\Lambda\left(2M \|\phi\|_{L^q(\Omega,m)}\right)^{\frac{1-a}{b}}\sup_{t\in [0,T]} \|t^b u(t)\|_{L^{2p}(\Omega,m)} \int_0^1 (1-\tau)^{-a} \tau^{a-1-b}\,\de\tau. \end{split} \end{equation} We point out that the integral on the right-hand side of the above inequality is finite. Let now $f(T)=\sup_{t\in [0,T]} \|t^b u(t)\|_{L^{2p}(\Omega,m)}$. Then $f(T)$ is a continuous nondecreasing function with $f(0)=0$ which satisfies $$f(T)\leq M\|\phi\|_{L^q(\Omega,m)}+\left(2M\|\phi\|_{L^q(\Omega,m)}\right)^{\frac{1-a}{b}} \Lambda BM f(T),$$ where $B:=\int_0^1 (1-\tau)^{-a} \tau^{a-1-b}\,\de\tau>0$. If $M\|\phi\|_{L^q(\Omega,m)}\leq \epsilon$ and $2^{\frac{1-a+b}{b}}\Lambda BM\epsilon^{\frac{1-a}{b}}<1$, then $f(T)$ can never be equal to $2\epsilon$. If it could, we would have $2\epsilon\leq\epsilon+(2\epsilon)^{\frac{1-a+b}{b}}\Lambda BM$, i.e., $\epsilon\leq(2\epsilon)^{\frac{1-a+b}{b}} \Lambda BM$, which is false if $\epsilon>0$ is small enough. This proves that, for $\|\phi\|_{L^q(\Omega,m)}$ sufficiently small, $\|t^b u(t)\|_{L^{2p}(\Omega,m)}$ must remain bounded and the claim follows. \end{proof} \subsection{The strong formulation} We now give a strong formulation of the abstract Cauchy problem $(P)$ in \eqref{eq:5.1}. \begin{theorem}\label{esistfrattale} Let $\alpha$ be as defined in \eqref{definizione alpha} and $s\in (0,1)$ be such that $N-d<2s<N$. Let $u$ be the unique solution of problem $(P)$. Then for every fixed $t\in (0,T]$, one has \begin{equation}\label{pbforte} \begin{cases} \frac{\partial u}{\partial t}(t,x)+\B u(t,x)=J(u(t,x)) &\text{for a.e. $x\in\Omega$,}\\[2mm] \frac{\partial u}{\partial t}+C_s\Ncal u+bu+\Theta^t_\alpha (u)=J(u)\quad &\text{in $(B^{2,2}_\alpha(\partial\Omega))'$},\\[2mm] u(0,x)=\phi(x) &\text{in $L^2(\Omega,m)$}. 
\end{cases} \end{equation} \end{theorem} \begin{proof} For every $t\in (0,T]$, we multiply the first equation of problem $(P)$ by a test function $\varphi\in\D(\Omega)$ and then we integrate on $\Omega$. Then from \eqref{corr} we obtain \begin{equation}\notag \begin{split} \int_\Omega \frac{\partial u}{\partial t}(t,x)\,\varphi(x)\,\de\La_N&=\int_{\Omega} A(t)u(t,x)\,\varphi(x)\,\de\La_N+\int_{\Omega} J(u(t,x))\,\varphi(x)\,\de\La_N\\ &=-E(t,u,\varphi)+\int_{\Omega} J(u(t,x))\,\varphi(x)\,\de\La_N. \end{split} \end{equation} Since $\varphi$ has compact support in $\Omega$, after integrating by parts we get \begin{equation}\label{forte} \frac{\partial u}{\partial t}+\B u=J(u)\quad\text{in $(\D(\Omega))'$}. \end{equation} By density, equation \eqref{forte} holds in $L^2(\Omega)$, so it holds for a.e. $x\in\Omega$. We remark that, since $J(u(t,\cdot))\in L^2(\Omega,m)$, it also follows that, for each fixed $t\in (0,T]$, $u\in V(\B,\Omega)$. Hence, we can apply Green formula \eqref{fracgreen}. We now take the scalar product in $L^2(\Omega,m)$ between the first equation of problem $(P)$ and $\varphi\in H^s(\Omega)$. Hence we get \begin{equation}\label{formvariaz} \left(\frac{\partial u}{\partial t},\varphi\right)_{L^2(\Omega,m)}=(A(t)u,\varphi)_{L^2(\Omega,m)}+(J(u),\varphi)_{L^2(\Omega,m)}. \end{equation} By using again \eqref{corr}, we have that \begin{equation}\notag \begin{split} &\int_\Omega \frac{\partial u}{\partial t}(t,x)\,\varphi\,\de\La_N+\int_{\partial\Omega} \frac{\partial u}{\partial t}(t,x)\,\varphi(x)\,\de\mu\\[2mm] &=-\frac{C_{N,s}}{2}\iint_{\Omega\times\Omega} K(t,x,y)\frac{(u(t,x)-u(t,y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\,\de\La_N(x)\de\La_N(y) \\[2mm] &-\int_{\partial\Omega} b(t,x)\,u(t,x)\,\varphi(x)\,\de\mu-\langle\Theta^t_\alpha(u),\varphi\rangle+\int_{\Omega}J(u(t,x))\,\varphi(x)\,\de\La_N+\int_{\partial\Omega} J(u(t,x))\,\varphi(x)\,\de\mu. \end{split} \end{equation} Using \eqref{fracgreen} and \eqref{forte}, we obtain for every $\varphi\in H^s(\Omega)$ and for each $t\in (0,T]$ \begin{equation}\label{bordo} \begin{split} \int_{\partial\Omega} \frac{\partial u}{\partial t}(t,x)\,\varphi(x)\,\de\mu=&-\left\langle C_s\Ncal u,\varphi\right\rangle-\int_{\partial\Omega} b(t,x)\,u(t,x)\,\varphi(x)\,\de\mu\\[2mm] &-\langle\Theta^t_\alpha(u),\varphi\rangle+\int_{\partial\Omega} J(u(t,x))\,\varphi(x)\,\de\mu. \end{split} \end{equation} Hence the boundary condition holds in $(B^{2,2}_\alpha(\partial\Omega))'$. \end{proof} \vspace{1cm} \noindent {\bf Acknowledgements.} The authors have been supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The authors report there are no competing interests to declare. \begin{thebibliography}{100} \label{Bibliography} \addcontentsline{toc}{chapter}{\textbf{References}} \footnotesize{ \bibitem{Ab-Th} S. Abe, S. Thurner, \emph{Anomalous diffusion in view of Einstein's 1905 theory of Brownian motion}, Physica A, 356 (2005), pp. 403--407. \bibitem{AdHei} D. R. Adams, L. I. Hedberg, \emph{Function Spaces and Potential Theory}, Springer-Verlag, Berlin, 1996. \bibitem{AP-NAsurv} D. E. Apushkinskaya, A. I. Nazarov, {\em A survey of results on nonlinear Venttsel' problems}, Appl. Math., 45 (2000), pp. 69--80. \bibitem{arendtelst} W. Arendt, A. F. M. ter Elst, \emph{Gaussian estimates for second order elliptic operators with boundary conditions}, J. Operator Theory, 38 (1997), pp. 87--130. \bibitem{BBC} K. Bogdan, K. Burdzy, Z.-Q. 
Chen, \emph{Censored stable processes}, Probab. Theory Related Fields, 127 (2003), pp. 89--152. \bibitem{bregil} F. Brezzi, G. Gilardi, \emph{Functional Spaces}, in: Finite Element Handbook (H. Kardestuncer, D. H. Norrie eds.), Part 1, Chapter 2, McGraw-Hill Book Co., New York, 1987, pp. 1.29-1.75. \bibitem{nostroMMAS} M. Cefalo, S. Creo, M. R. Lancia, P. Vernole, \emph{Nonlocal Venttsel' diffusion in fractal-type domains: regularity results and numerical approximation}, Math. Methods Appl. Sci., 42 (2019), pp. 4712--4733. \bibitem{chenkum} Z.-Q. Chen, T. Kumagai, \emph{Heat kernel estimates for stable-like processes on d-sets}, Stoch. Process. Appl., 108 (2003), pp. 27--62. \bibitem{creoZAA} S. Creo, \emph{Singular $p$-homogenization for highly conductive fractal layers}, Z. Anal. Anwend., 40 (2021), pp. 401--424. \bibitem{CLNODEA} S. Creo, M. R. Lancia, \emph{Fractional $(s,p)$-Robin-Venttsel' problems on extension domains}, NoDEA Nonlinear Differential Equations Appl., 28 (2021), Paper no. 31, 33 pp. \bibitem{CLNpar} S. Creo, M. R. Lancia, A. I. Nazarov, \emph{Regularity results for nonlocal evolution Venttsel’ problems}, Fract. Calc. Appl. Anal., 23 (2020), pp. 1416--1430. \bibitem{JEEfraz} S. Creo, M. R. Lancia, P. Vernole, \emph{Convergence of fractional diffusion processes in extension domains}, J. Evol. Equ., 20 (2020), pp. 109--139. \bibitem{CLVmosco} S. Creo, M. R. Lancia, P. Vernole, \emph{M-Convergence of $p$-fractional energies in irregular domains}, J. Convex Anal., 28 (2021), pp. 509--534. \bibitem{CLVtrasm} S. Creo, M. R. Lancia, P. Vernole, \emph{Transmission problems for the fractional $p$-Laplacian across fractal interfaces}, Discrete Contin. Dyn. Syst. Ser. S, 15 (2022), pp. 3621--3644. \bibitem{Daners} D. Daners, {\em Heat kernel estimates for operators with boundary conditions}, Math. Nachr., 217 (2000), pp. 13--41. \bibitem{hitch} E. Di Nezza, G. Palatucci, E. Valdinoci, \emph{Hitchhiker's guide to the fractional Sobolev spaces}, Bull. Sci. Math., 136 (2012), pp. 521--573. \bibitem{F-G-G-Ro} A. Favini, G. R. Goldstein, J. A. Goldstein, S. Romanelli, {\em The heat equation with generalized Wentzell boundary condition}, J. Evol. Equ., 2 (2002), pp. 1--19. \bibitem{fukush} M. Fukushima, Y. Oshima, M. Takeda, \emph{Dirichlet Forms and Symmetric Markov Processes}, Walter de Gruyter and Co., Berlin, 1994. \bibitem{giltrud} D. Gilbarg, N. S. Trudinger, \emph{Elliptic Partial Differential Equations of Second Order}, Second Edition, Springer-Verlag, Berlin, 1983. \bibitem{goldstein} G. R. Goldstein, {\em Derivation and physical interpretation of general boundary conditions}, Adv. Differential Equations, 11 (2006), pp. 457--480. \bibitem{CSF} R. Gorenflo, F. Mainardi, A. Vivoli, \emph{Continuous-time random walk and parametric subordination in fractional diffusion}, Chaos, Solitons, Fractals, 34 (2007), pp. 87--103. \bibitem{guan1} Q. Y. Guan, \emph{Integration by parts formula for regional fractional Laplacian}. Commun. Math. Phys., 266 (2006), pp. 289--329. \bibitem{guan2} Q. Y. Guan, Z. M. Ma, \emph{Boundary problems for fractional Laplacians}, Stoch. Dyn., 5 (2005), pp. 385--424. \bibitem{guan3} Q. Y. Guan, Z. M. Ma, \emph{Reflected symmetric $\alpha$-stable processes and regional fractional Laplacian}, Probab. Theory Related Fields, 134 (2006), pp. 649--694. \bibitem{Ja} M. Jara, \emph{Nonequilibrium scaling limit for a tagged particle in the simple exclusion process with long jumps}, Comm. Pure Appl. Math., 62 (2009), pp. 198--214. \bibitem{Jones} P. W. 
Jones, {\em Quasiconformal mapping and extendability of functions in Sobolev spaces}, Acta Math., 147 (1981), pp. 71--88. \bibitem{jonsson91} A. Jonsson, {\em Besov spaces on closed subsets of $\mathbb{R}^n$}, Trans. Amer. Math. Soc., 341 (1994), pp. 355--370. \bibitem{JoWa} A. Jonsson, H. Wallin, {\em Function Spaces on Subsets of $\mathbb{R}^n$}, Part 1, Math. Reports, vol. 2, Harwood Acad. Publ., London, 1984. \bibitem{JoWa2} A. Jonsson, H. Wallin, {\em The dual of Besov spaces on fractals}, Studia Math., 112 (1995), pp. 285--300. \bibitem{kato} T. Kato, \emph{Perturbation Theory for Linear Operators}, II ed., Springer-Verlag, Berlin-New York, 1976. \bibitem{Laasri} H. Laasri, {\em Regularity properties for evolution families governed by non-autonomous forms}, Arch. Math., 111 (2018), pp. 187--201. \bibitem{mugnolo} H. Laasri, D. Mugnolo, \emph{Ultracontractivity and Gaussian bounds for evolution families associated with nonautonomous forms}, Math. Methods Appl. Sci. 43 (2020), pp. 1409--1436. \bibitem{La-Ve3} M. R. Lancia P. Vernole, {\em Semilinear evolution transmission problems across fractal layers}, Nonlinear Anal. 75 (2012), pp. 4222--4240. \bibitem{LaVe2} M. R. Lancia, P. Vernole, {\em Semilinear fractal problems: approximation and regularity results}, Nonlinear Anal., 80 (2013), pp. 216--232. \bibitem{JEE} M. R. Lancia, P. Vernole, \emph{Venttsel' problems in fractal domains}, J. Evol. Equ., 14 (2014), pp. 681--712. \bibitem{LVnonaut} M. R. Lancia, P. Vernole, \emph{Nonautonomous semilinear Wentzell problems in fractal domains}, J. Evol. Equ., 22 (2022), article number 88, 22 pp. \bibitem{mandelbrot} B. B. Mandelbrot, J. W. Van Ness, \emph{Fractional Brownian motions, fractional noises and applications}, SIAM Rev., 10 (1969), pp. 422--437. \bibitem{MeMiMo} A. Mellet, S. Mischler, C. Mouhot, \emph{Fractional diffusion limit for collisional kinetic equations}, Arch. Ration. Mech. Anal., 199 (2011), pp. 493--525. \bibitem{pazy} A. Pazy, \emph{Semigroup of Linear Operators and Applications to Partial Differential Equations}, Applied Mathematical Sciences, 44, Springer-Verlag, New York, 1983. \bibitem{schneider} W. R. Schneider, \emph{Grey noise}, in: S. Albeverio, G. Casati, U. Cattaneo, D. Merlini, R. Moresi, eds., Stochastic Processes, Physics and Geometry, Teaneck, NJ, USA, World Scientific, 1990, pp. 676--681. \bibitem{showalter} R. E. Showalter, \emph{Monotone Operators in Banach Space and Nonlinear Partial Differential Equations}, Mathematical Surveys and Monographs 49, American Mathematical Society, Providence, RI, 1997. \bibitem{valdinoci} E. Valdinoci, \emph{From the long jump random walk to the fractional Laplacian}, Bol. Soc. Esp. Mat. Apl. SeMA, 49 (2009), pp. 33--44. \bibitem{warmaCPAA} M. Warma, \emph{A fractional Dirichlet-to-Neumann operator on bounded Lipschitz domains}, Commun. Pure Appl. Anal., 14 (2015), pp. 2043--2067. \bibitem{warma2015} M. Warma, \emph{The fractional relative capacity and the fractional Laplacian with Neumann and Robin boundary conditions on open sets}, Potential Anal., 42 (2015), pp. 499--547. \bibitem{weiss3} F. B. Weissler, \emph{Semilinear evolution equations in Banach spaces}, J. Functional Analysis, 32 (1979), pp. 277--296. \bibitem{weiss1} F. B. Weissler, {\em Local existence and nonexistence of semilinear parabolic equations in $L^p$}, Indiana Univ. Math. J., 29 (1980), pp. 79--102. \bibitem{weiss2} F. B. Weissler, {\em Existence and non-existence of global solutions for a semilinear heat equation}, Israel J. Math., 38 (1981), pp. 
29--40. \bibitem{Yagi} A. Yagi, {\em Abstract Parabolic Evolution Equations and Their Applications}, Springer-Verlag, Berlin, 2010. } \end{thebibliography} \end{document}
2205.06652v1
http://arxiv.org/abs/2205.06652v1
Pullback and forward attractors of contractive difference equations
\documentclass{singlecol-new} \usepackage{natbib,stfloats} \usepackage{mathrsfs} \usepackage{eucal,graphicx,float} \usepackage[center]{caption} \usepackage{color,soul} \numberwithin{table}{section} \def\newblock{\hskip .11em plus .33em minus .07em}
\newcommand{\F}{{\mathcal F}} \newcommand{\A}{{\mathcal A}} \newcommand{\B}{{\mathcal B}} \newcommand{\cL}{{\mathcal L}} \newcommand{\K}{{\mathcal K}} \newcommand{\N}{{\mathbb N}} \newcommand{\R}{{\mathbb R}} \newcommand{\Z}{{\mathbb Z}}
\newcommand{\tref}[1]{Thm.~\ref{#1}} \newcommand{\pref}[1]{Prop.~\ref{#1}} \newcommand{\lref}[1]{Lem.~\ref{#1}} \newcommand{\cref}[1]{Cor.~\ref{#1}} \newcommand{\egref}[1]{Example~\ref{#1}} \newcommand{\fall}{\quad\text{for all }} \renewcommand{\d}{\,{\mathrm d}} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\erf}{erf}
\theoremstyle{TH}{ \newtheorem{lemma}{Lemma}[section] \newtheorem{theorem}[lemma]{Theorem} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{claim}[lemma]{Claim} \newtheorem{stheorem}[lemma]{Wrong Theorem} \newtheorem{example}[lemma]{Example} \newtheorem{algorithm}{Algorithm} }
\theoremstyle{THrm}{ \newtheorem{definition}{Definition}[section] \newtheorem{question}{Question}[section] \newtheorem{remark}{Remark} \newtheorem{scheme}{Scheme} }
\theoremstyle{THhit}{ \newtheorem{case}{Case}[section] }
\makeatletter \def\theequation{\arabic{equation}} \makeatother \begin{document} \setcounter{page}{1} \LRH{H. Huynh and A. Kalkan} \RRH{Pullback and forward attractors of contractive difference equations} \subtitle{} \title{Pullback and forward attractors of contractive difference equations} \authorA{Huy Huynh and Abdullah Kalkan{\sf{*}}} \affA{Universit\"at Klagenfurt,\\ Institut f\"ur Mathematik,\\ 9020 Klagenfurt am W\"orthersee, Austria\\ E-mail: [email protected]\\ E-mail: [email protected]\\ {\sf{*}}Corresponding author}
\begin{abstract} The construction of attractors of a dissipative difference equation is usually based on compactness assumptions. In this paper, we replace them with contractivity assumptions under which the pullback and forward attractors are identical. As a consequence, attractors degenerate to unique bounded entire solutions. As an application, we investigate attractors of integrodifference equations which are popular models in theoretical ecology. \end{abstract}
\KEYWORD{Pullback attractor, Forward attractor, Contractive mapping, Dissipative difference equation, Semilinear difference equation, Contractive difference equation, Integrodifference equation}
\begin{bio} Huy Huynh received his MSc in Applied Mathematics at Université de Rennes 1, France in 2018. His research interests include partial difference equations, dynamical systems, fluid dynamics, numerical analysis and their applications in modelling and discretisation. At present, he is a PhD student in Applied Mathematics at Universität Klagenfurt, Austria. \noindent After studying mathematics at the Ludwig-Maximilians-University Munich, Germany, Abdullah Kalkan received his MSc in Mathematics at University of Augsburg, Germany and PhD in Applied Mathematics at Universität Klagenfurt, Austria. His research interests include dynamical systems, numerical analysis, stochastic processes and stability theory of stochastic differential equations. \noindent This research of the authors has been supported by the Austrian Science Fund (FWF) under Grant Number P 30874-N35.
\end{bio} \maketitle \section{Introduction} Different attractor notions for nonautonomous difference equations have received a large amount of attention over recent years, as they reflect the long-term behaviour of a process and consist of bounded entire solutions. Whilst the theory of attraction for autonomous systems is well established, with attractors given by invariant $\omega$-limit sets, a direct generalization to nonautonomous systems is not always appropriate. This leads us to consider nonautonomous sets that are not necessarily invariant. There are two ways to describe attractors, namely \textit{forward attractors} and \textit{pullback attractors} (\cite{Huynh:20, Kloeden:2000, Kloeden:16}). These two concepts are in general independent, although they coincide in the autonomous case (\cite{Kloeden:11}). In order to construct an attractor, one needs dissipativity and compactness properties. In general, however, the latter may not be easy to verify and can thus be replaced by alternative assumptions such as contractivity. Indeed, an example of a contractive difference equation which is not compact will be shown in Subsection \mbox{\ref{excontractive}}. For this reason, our goal here is to investigate the existence and structure of pullback and forward attractors under the assumption of contraction. Furthermore, based on abstract fixed-point theorems, error estimates are provided to numerically approximate attractors. In particular, we discuss two classes of dissipative difference equations and show that forward and pullback attractors degenerate to unique bounded entire solutions: first, semilinear difference equations in Banach spaces and then contractive difference equations in general complete metric spaces. Our abstract theory can be applied to integrodifference equations in Banach spaces of continuous functions over a compact domain. The right-hand side of such equations consists of nonlinear integral operators of Hammerstein type, which in theoretical ecology usually describe the spatial dispersal and evolution of species with nonoverlapping generations (\cite{Jacobsen:15, Kot:1986, Lutscher:19}). On the other hand, as integrodifference equations are infinite-dimensional dynamical systems whose solutions cannot be evaluated exactly, we are interested in their spatial discretisation in order to simulate the dynamical behaviour in finite-dimensional state spaces. Numerical simulations require discretisation, and appropriate schemes replacing the integral operators are provided by common techniques from numerical analysis, e.g., the Nystr\"om method (\mbox{\cite{Atkinson:09}}). As an example, we consider the following realistic problem in which a species lives in a habitat and an integrodifference equation is used to model it. For simplicity, the habitat here can be considered as the compact interval $\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ in one dimension and the initial condition can be chosen as any function in $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$. Now we iterate our equation and consider the growth and distribution of the species in each generation. Depending on favourable or unfavourable natural conditions, growth or decay of the total population can be observed in different seasons. Furthermore, ecologists take certain measures to protect and promote its growth and distribution in order to prevent the species from dying out.
Therefore, we use and compare various supportive measures for the total population, i.e., inhomogeneities in our equation in the last subsection, in order to determine the best possible influence on the population. The contents of this paper are as follows: In Section 2, we first establish the necessary terminology and provide the results for the unique existence and construction of attractors of two classes of dissipative difference equations: semilinear difference equations in Banach spaces and contractive difference equations in complete metric spaces. In addition, the results for the latter are based on a version of the contraction principle for composite mappings. Following that, Section 3 addresses the related notions for integrodifference equations of Hammerstein type and presents some applications of our abstract theory. \paragraph{Notation} Let $\Z$ be the set of all integers and $\N:=\{1,2,\ldots\}$ the positive integers. On a metric space $(U,d)$, $I_U$ is the identity map, and $B_r(x)$ and $\bar{B}_r(x)$ are the open and closed balls, respectively, with center $x\in U$ and radius $r>0$. We write $\dist(x,A):=\inf_{a\in A}d(x,a)$ for the distance of $x$ from a set $A\subseteq U$ and $B_r(A):=\{x\in U:\,\dist(x,A)<r\}$ for its $r$-neighborhood. The Hausdorff semidistance of bounded and closed sets $A,B \subseteq U$ is then defined as \begin{align*} \dist(A,B):=\sup_{a\in A}\inf_{b\in B}d(a,b). \end{align*} A subset $\A \subseteq\Z\times U$ with $t$-fibers defined by $\A(t):=\{u\in U:\,(t,u)\in \A\}$ is called a nonautonomous set. \section{Nonautonomous difference equations} \label{sec:DDE} Let $(U,d)$ be a complete metric space. The paper deals with nonautonomous difference equations of the form \begin{align} \label{deq} u_{t+1}=H_t(u_t) \tag{$\Delta$} \end{align} with right-hand side $H_t:U\to U$. For an \textit{initial time} $\tau \in \Z$, a \textit{forward solution} to \eqref{deq} is a sequence $(u_t)_{\tau\leq t}$ in $U$ satisfying \begin{align} \label{solid} u_{t+1}\equiv H_t(u_t) \end{align} for all $\tau\leq t\in\Z$, while an \textit{entire solution} $(u_t)_{t\in\Z}$ satisfies \eqref{solid} on $\Z$. We define the \textit{general solution} $\varphi:\{(t,\tau,u)\in\Z^2\times U:\,\tau\leq t\}\to U$ to \eqref{deq} by \begin{align*} \varphi(t,\tau,u) := \begin{cases} u,&\tau=t,\\ H_{t-1} \circ \ldots \circ H_{\tau}(u),&\tau<t. \end{cases} \end{align*} By the definition of $\varphi$, the \textit{process property} \begin{align} \label{pp} \varphi(t,s,\varphi(s,\tau,u))=\varphi(t,\tau,u) \end{align} holds for all $\tau \leq s\leq t \in \Z$ and $u\in U$. A solution $u^\ast=(u^\ast_t)_{t\in \Z}$ to \eqref{deq} is called \textit{globally attractive}, if \begin{align*} \lim_{t\rightarrow\infty} d(u^\ast_t,\varphi(t,\tau,u_\tau))=0 \end{align*} holds for all $\tau \in \Z$ and $u_\tau \in U$. A nonautonomous set $\A$ is called \begin{itemize} \item \textit{positively invariant} or \textit{invariant}, if \begin{align*} H_t(\A(t))\subseteq\A(t+1)\textrm{ or } H_t(\A(t))=\A(t+1) \end{align*} holds for all $t\in\Z$, respectively, \item \textit{compact}, if every $t$-fiber $\A(t)$ is compact.
\end{itemize} Moreover, a nonautonomous set $\A$ with the two properties of invariance and compactness is called a \textit{forward attractor} of \eqref{deq}, if \begin{align*} \lim_{t\to\infty}\dist(\varphi(t,\tau,B),\A(t))=0 \quad\text{for all bounded }B\subseteq U \end{align*} and a \textit{pullback attractor} of \eqref{deq}, if \begin{align*} \lim_{\tau\to-\infty}\dist(\varphi(t,\tau,B),\A(t))=0 \quad\text{for all bounded }B\subseteq U. \end{align*} We denote equations \eqref{deq} as $\theta$-periodic for some $\theta\in\N$, if \begin{align} \label{pdeq} H_{t+\theta}=H_t \end{align} holds for all $t\in\Z$ and $\theta$ is consequently denoted the period. In case of $\theta=1$, i.e., $H_{t+1}=H_t=:H$ for each $t\in \Z$, we say \eqref{deq} is autonomous. \subsection{Semilinear difference equations} We consider nonautonomous difference equations \eqref{deq} in Banach spaces $(\tilde{U},\|\cdot\|)$. Let $L_t\in L(\tilde{U})$, $t\in\Z$, be a sequence of bounded linear operators and $K_t:\tilde{U}\to\tilde{U}$, $t\in\Z$, be mappings. We consider \eqref{deq} of the semilinear form \begin{align} \label{semilineardeq} H_t(u):=L_t u + K_t(u). \end{align} The general solution to \eqref{deq} of the form \eqref{semilineardeq} by the variation of constants formula \cite[Theorem 3.1.16, p.~100]{Poetzsche:10} is of the form \begin{align} \label{voc} \varphi(t,\tau,u_\tau)=\Phi(t,\tau)u_\tau+\sum_{s=\tau}^{t-1}\Phi(t,s+1)K_s\left(\varphi(s,\tau,u_\tau)\right) \fall \tau\leq t, \end{align} where the transition operator $\Phi:\{(t,\tau)\in\Z^2:\tau\leq t\}\to L(\tilde{U})$ is defined by \begin{align*} \Phi(t,\tau):=\begin{cases} L_{t-1}\ldots L_\tau, &\tau<t,\\ I_{\tilde{U}}, &\tau=t. \end{cases} \end{align*} The next lemma is helpful for proving the main result in this subsection: \begin{lemma} \label{semilinearlem} Let $H_t: \tilde{U}\to \tilde{U}$ be of the semilinear form \eqref{semilineardeq}. Suppose there exist reals $\kappa_t\geq 0, \alpha_t> 0$ ~for all $t\in\Z$ and $\gamma \geq 1$ such that \begin{gather} \label{semilinearass1} \|K_t(u)-K_t(\bar{u})\|\leq \kappa_t\|u-\bar{u}\| \end{gather} and \begin{gather} \label{semilinearass2} \|\Phi(t,\tau)\|\leq \gamma \prod_{r=\tau}^{t-1} \alpha_r \end{gather} hold for all $\tau\leq t\in\Z$ and $u,\bar{u}\in \tilde{U}$, then the general solution of \eqref{semilineardeq} satisfies the estimate \begin{align} \label{semilinearest} \|\varphi(t,\tau,u)-\varphi(t,\tau,\bar{u})\|\leq \gamma\|u-\bar{u}\| \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r) \end{align} for all $\tau\leq t\in\Z$ and $u,\bar{u}\in \tilde{U}$. \end{lemma} \begin{proof} {Proof} Let $\tau\leq t\in\Z$ and $u,\bar{u}\in \tilde{U}$. The variation of constants formula \eqref{voc} and the assumptions \eqref{semilinearass1}-\eqref{semilinearass2} then imply \begin{align*} \|\varphi(t,\tau,u)-\varphi(t,\tau,\bar{u})\|&\leq \gamma\|u-\bar{u}\| \prod_{r=\tau}^{t-1} \alpha_r + \gamma\sum_{s=\tau}^{t-1}\kappa_s \|\varphi(s,\tau,u)-\varphi(s,\tau,\bar{u})\| \prod_{r=s+1}^{t-1} \alpha_r. \end{align*} This leads to the estimate \begin{align*} \|\varphi(t,\tau,u)-\varphi(t,\tau,\bar{u})\| \dfrac{1}{\prod_{r=\tau}^{t-1}\alpha_r} &\leq \gamma\|u-\bar{u}\| \\ & \qquad + \gamma\sum_{s=\tau}^{t-1}\dfrac{\kappa_s}{\alpha_s} \|\varphi(s,\tau,u)-\varphi(s,\tau,\bar{u})\| \dfrac{1}{\prod_{r=\tau}^{s-1}\alpha_r}. \end{align*} Then the discrete Gr\"onwall inequality from \cite[Proposition A.2.1(a), p.~348]{Poetzsche:10} yields the claimed estimate \eqref{semilinearest}. 
\end{proof} In order to construct pullback and forward attractors of the equation \eqref{deq} in the semilinear form \eqref{semilineardeq}, we assume \eqref{deq} is pullback absorbing, i.e., there is a nonautonomous set $\B \subseteq \Z\times \tilde U $ satisfying \begin{itemize} \item there exists a real $\rho>0$ such that $\B(t) \subseteq \bar B_\rho(0)$ for all $t\in\Z$, \item for all bounded nonautonomous subsets $C\subseteq\Z\times\tilde U$, there exists an $S\in\N$ such that \begin{align*} \varphi(t,t-s,C(t-s))\subseteq \B(t) \fall s\geq S, \end{align*} \item $\B$ is positively invariant. \end{itemize} \begin{theorem} \label{semilinearattractor} Let \eqref{deq} of the form \eqref{semilineardeq} satisfy the assumptions of Lemma \ref{semilinearlem}. If \eqref{deq} has a pullback absorbing set $\B$ and \begin{align}\label{limtau} \lim_{\tau \to -\infty} \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r)=0 \end{align} for a fixed $t\in\Z$, then \eqref{deq} has a unique bounded entire solution $u^\ast=(u^\ast_t)_{t\in\Z}$ and a unique pullback attractor $\A^\ast:=\{(t, u^\ast_t)\in\Z\times \tilde{U}: t\in\Z\}$. If, in addition, \begin{align}\label{limt} \lim_{t \to \infty} \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r)=0 \end{align} holds for a fixed $\tau\in\Z$, then $\A^\ast$ is also the forward attractor of \eqref{deq}. \end{theorem} The following proof is in fact the discrete-time version of the proof of \cite[Theorem 5.4, p.~35]{Kloeden:2021}. \begin{proof} {Proof} Let $\B$ be a pullback absorbing set of \eqref{deq}, $\tau\in\Z$ and $t_n\leq \tau$ a monotone decreasing sequence with $t_n\to-\infty$ as $n\to\infty$ and $b_n\in \B(t_n)$ for all $n\in\N$. \begin{itemize} \item[(I)] Define the sequence \begin{align*} \phi_n:=\varphi(\tau,t_n,b_n) \end{align*} for all $n\in\N$. It is easy to see that $\phi_n\in \B(\tau)$ due to the positive invariance of $\B$. Moreover, $(\phi_n)_{n\in\N}$ is Cauchy, i.e., for all $\epsilon>0$, there exists an $N\in\N$ such that \begin{align*} \|\phi_n-\phi_m \| < \epsilon \fall n,m \geq N. \end{align*} Indeed, w.l.o.g. for $m\geq n$, the process property \eqref{pp} yields \begin{align*} \phi_m =\varphi(\tau,t_m,b_m)=\varphi(\tau,t_n,\varphi(t_n,t_m,b_m)) =\varphi(\tau,t_n,\bar{x}_{n,m}) \end{align*} where $\bar{x}_{n,m}:=\varphi(t_n,t_m,b_m)\in \B(t_n)$ due to the positive invariance of $\B$. Then by the definition of $\phi_n$, the estimate \eqref{semilinearest} and the triangle inequality, one obtains \begin{align*} \|\phi_n-\phi_m\|&=\|\varphi(\tau,t_n, b_n)-\varphi(\tau,t_n,\bar{x}_{n,m})\|\\ &\leq \gamma\|b_n-\bar{x}_{n,m}\| \prod_{r=t_n}^{\tau-1} (\alpha_r+\gamma \kappa_r)\\ &\leq 2 \gamma \rho \prod_{r=t_n}^{\tau-1} (\alpha_r+\gamma \kappa_r). \end{align*} Combining this with the assumption \eqref{limtau} yields $\|\phi_n-\phi_m \| < \epsilon$ for all sufficiently large $n$ and all $m\geq n$. Therefore, $(\phi_n)_{n\in\N}$ is Cauchy and consequently has a unique limit $u^\ast_\tau$ due to the completeness of $\tilde U$. \item[(II)] The sequence $(u^\ast_\tau)_{\tau\in\Z}$ constructed in step (I) satisfies $u^\ast_\tau\in \B(\tau)$ and, by the continuity of $H_\tau$, $u^\ast_{\tau+1}=H_\tau(u^\ast_\tau)$. Hence, $(u^\ast_t)_{t\in\Z}$ is an entire solution to \eqref{deq} and bounded due to $u^\ast_t\in \B(t)\subseteq \bar B_\rho(0)$ for all $t\in\Z$. \item[(III)] The aim of this step is to show the uniqueness of a bounded entire solution $(u^\ast_t)_{t\in\Z}$ to \eqref{deq} in $\B$. Let $(v^\ast_t)_{t\in\Z}$ be another bounded entire solution to \eqref{deq} in $\B$.
By the definition of an entire solution to \eqref{deq}, the estimate \eqref{semilinearest}, the triangle inequality and the assumption \eqref{limtau}, we obtain \begin{align*} \|u^\ast_t-v^\ast_t\|&= \|\varphi(t,\tau,u^\ast_\tau)-\varphi(t,\tau,v^\ast_\tau)\|\\ &\leq \gamma\|u^\ast_\tau-v^\ast_\tau\| \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r)\\ &\leq 2\gamma \rho \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r)\xrightarrow[\tau\to-\infty]{}0. \end{align*} Hence, $u_t^\ast=v_t^\ast$ for all $t\in\Z$, implying the uniqueness of the bounded entire solution $(u^\ast_t)_{t\in\Z}$ to \eqref{deq} in $\B$. Moreover, since a pullback attractor consists of all bounded entire solutions (cf. \cite[Corollary 1.3.4, p.~17]{Poetzsche:10}), the nonautonomous set $\A^\ast=\{(t, u^\ast_t): t\in\Z\}$ is the pullback attractor. \item[(IV)] In this step, additionally suppose the assumption \eqref{limt} holds for one $\tau\in\Z$ and consequently for all $\tau\in\Z$. We want to prove that $\mathcal A^\ast$ is also the forward attractor to \eqref{semilineardeq}. Indeed, for all $u_\tau\in\tilde U$, by the definition of $u^\ast_t$, the estimate \eqref{semilinearest}, the triangle inequality and the assumption \eqref{limt}, one obtains \begin{align*} \|\varphi(t,\tau,u_\tau)-u^\ast_t\|&= \|\varphi(t,\tau,u_\tau)-\varphi(t,\tau,u^\ast_\tau)\|\\ &\leq \gamma\|u_\tau-u^\ast_\tau\| \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r)\\ &\leq 2\gamma\rho \prod_{r=\tau}^{t-1} (\alpha_r+\gamma \kappa_r) \xrightarrow[t\to\infty]{}0. \end{align*} Thus, $\A^\ast$ is also the forward attractor. \end{itemize} The proof is now set and done. \end{proof} Next we regard a special case of the previous results where \mbox{\eqref{deq}} of the form \mbox{\eqref{semilineardeq}} is $\theta$-periodic for some $\theta\in\N$, i.e., \begin{align} \label{psemilinearass} L_{t+\theta}=L_t \textrm{ and } K_{t+\theta}=K_t \end{align} hold for all $t\in\Z$, as follows. \begin{corollary} \label{corpsemilinear} Let \eqref{deq} of the form \eqref{semilineardeq} satisfy the assumptions of Lemma \ref{semilinearlem}. If there exists a $\theta\in\N$ such that the assumptions \eqref{psemilinearass} hold for all $t\in\Z$ and \begin{align*} \prod_{r=0}^{\theta-1} (\alpha_r+\gamma\kappa_r)<1, \end{align*} then the unique bounded entire solution $u^\ast$ to \eqref{deq} is $\theta$-periodic and the nonautonomous set $\A^\ast=\{(t, u^\ast_t)\in\Z\times \tilde{U}: t\in\Z\}$ is the pullback and forward attractor of \eqref{deq}. \end{corollary} \begin{proof} {Proof} The periodicity of $L_t$ and $K_t$ in the assumptions \eqref{psemilinearass} extends to the right-hand side $H_t$ of \eqref{deq} as well as to the constants $\kappa_t$ and $\alpha_t$ in the assumptions \eqref{semilinearass1} and \eqref{semilinearass2}, respectively. This satisfies the assumptions \eqref{limtau} and \eqref{limt} for a fixed $t\in\Z$ and a fixed $\tau\in\Z$, respectively. Hence, Theorem \ref{semilinearattractor} implies \eqref{deq} has a unique bounded entire solution $u^\ast$ and the pullback and forward attractor $\A^\ast$. Moreover, thanks to {\cite[Proposition 1.4.5, p.~22]{Poetzsche:10}}, $\{u^\ast_{t+\theta}\}=\A^\ast(t+\theta)=\A^\ast(t)=\{u_t^\ast\}$ for all $t\in\Z$. This proves the periodicity of $u^\ast$. 
\end{proof} \subsection{Contractive difference equations} In this subsection, we return to nonautonomous difference equations \eqref{deq} in general complete metric spaces $(U,d)$ whose right-hand side $H_t$ satisfies the following standing assumptions: \begin{itemize} \item [(i)] for each $t\in\Z$ there exists a real $\lambda_t\geq 0$ such that \begin{align*} d(H_t(u),H_t(\bar{u}))\leq\lambda_t d(u,\bar{u}) \fall u,\bar{u}\in U, \end{align*} \item [(ii)] there exists a sequence $(\tilde{u}_t)_{t\in\Z}$ such that $\left(H_t(\tilde{u}_t)\right)_{t\in\Z}$ is bounded, i.e., there exist a real $R>0$ and a point $\hat{u}\in U$ such that \begin{align*} H_t(\tilde{u}_t) \in B_R(\hat{u}) \fall t\in\Z. \end{align*} \end{itemize} The following lemma enables us to prove our results in this subsection. It is a variant of the contraction mapping principle {\cite[Theorem 17.1(a), p.~187]{Deimling:1985}} where the iterates $F^T$ of a mapping $F$ are defined recursively via $F^0:=I_X$ and $F^T:=F\circ F^{T-1}$ for all $T\in\Z, T>0$: \begin{lemma} \label{contractionlem} Let $(X,d_\infty)$ be a complete metric space and $F:X\to X$. If there exist a $T\in\N$ and a real $\ell\in[0,1)$ such that \begin{align*} d_\infty(F^T(x_1),F^T(x_2))\leq\ell d_\infty(x_1,x_2) \quad\fall x_1,x_2\in X, \end{align*} then $F$ possesses a unique fixed point $x^\ast\in X$. Moreover, the error estimate \begin{align} \label{errestlem} d_\infty(x^\ast,F^{tT}(x))\leq\dfrac{\ell^t}{1-\ell}d_\infty(x,F^T(x)) \end{align} is satisfied for all $t\in\N$ and $x\in X$. \end{lemma} The Lipschitz condition for the composite mappings and the error estimate in Lemma \ref{contractionlem} applied to \eqref{deq} can be stated as follows. \begin{theorem} \label{thmsolution} Let the assumptions $(i)$-$(ii)$ hold. If there exists a $T\in\N$ such that $\sup_{s\in\Z}d(u,\varphi(s,s-T,u))<\infty$ for all $u\in U$ and \begin{align} \label{ellass} \ell:=\sup_{\tau\in\Z}\prod_{r=\tau}^{\tau+T-1} \lambda_r <1, \end{align} then the following results hold: \begin{itemize} \item [(a)] \eqref{deq} possesses a unique bounded entire solution $u^\ast=(u_t^\ast)_{t\in\Z}$. Moreover, $u^\ast$ is globally attractive and the error estimate \begin{align} \label{errest0} \sup_{s\in\Z}d(u_s^\ast,\varphi(s, s-tT,u)) \leq \dfrac{\ell^t}{1-\ell} \sup_{s\in\Z} d(u,\varphi(s, s-T, u)) \end{align} is satisfied for all $t\in\N$ and $u\in U$, \item [(b)] the nonautonomous set $\A^\ast:=\{(t, u^\ast_t)\in\Z\times U: t\in\Z\}$ is the pullback and forward attractor of \eqref{deq}. \end{itemize} \end{theorem} \begin{proof} {Proof} \ \begin{itemize} \item [(a)] In order to apply Lemma \ref{contractionlem}, let us consider the space $X:=\ell^\infty(U)$ of all bounded sequences $(u_t)_{t\in\Z}$ in $U$ equipped with the metric \begin{align*} d_\infty(u,v):=\sup_{t\in\Z}d(u_t,v_t) \end{align*} and define the mapping \begin{align*} F(u)_t:=H_{t-1}(u_{t-1}) \end{align*} for all $t\in\Z$. Obviously, $\ell^\infty(U)$ is a complete metric space and a sequence $u^\ast=(u_t^\ast)_{t\in\Z}$ is a bounded entire solution to \eqref{deq} if and only if $u^\ast=F(u^\ast)$. We first show that $F(u)$ is bounded for each $u\in\ell^\infty(U)$. By the assumptions (i)-(ii) and the triangle inequality, one obtains \begin{align*} d(H_t(u_t),\hat{u}) \leq d(H_t(u_t),H_t(\tilde{u}_t))+d(H_t(\tilde{u}_t),\hat{u}) \leq \sup_{t\in\Z} \lambda_t \, d_\infty(u,\tilde{u}) +R \end{align*} and passing to the supremum over $t$ yields $d_\infty(F(u),\hat{u}) \leq \sup_{t\in\Z} \lambda_t \, d_\infty(u,\tilde{u}) +R$.
Thus, $F$ maps the bounded sequences into the bounded sequences, i.e.,\ $F:X\to X$ is well-defined. Next, we show by induction \begin{align} \label{FT} F^T(u)_{\tau+T}= \varphi(\tau+T,\tau,u_{\tau}). \end{align} Indeed, the initial case for $T=1$ holds by the definition. We then assume the induction hypothesis \eqref{FT} holds for a particular $T\in\N.$ By the induction hypothesis, one obtains \begin{align*} F^{T+1}(u)_{\tau+T+1}&=[F\circ F^T(u)]_{\tau+T+1}=F(F^T(u))_{\tau+T+1}\\ &=H_{\tau+T}(F^T(u)_{\tau+T})=\varphi(\tau+T+1,\tau,u_\tau), \end{align*} which implies the induction step for $T+1 \in \N$. Hence \eqref{FT} holds for all $T \in \N$. Finally, we compute the Lipschitz constant of $F^T$ for some $T\in\N$: \begin{align*} d(F^T(u)_{\tau +T},F^T(\bar{u})_{\tau +T})&=d(\varphi(\tau+T,\tau,u_\tau),\varphi(\tau+T,\tau,\bar{u}_\tau)) \\ &\leq \lambda_{\tau+T-1} d(\varphi(\tau+T-1,\tau,u_\tau),\varphi(\tau+T-1,\tau,\bar{u}_\tau)) \\ &\leq \lambda_{\tau+T-1} \ldots \lambda_{\tau} d(u_\tau,\bar{u}_\tau) =\left(\prod_{r=\tau}^{\tau+T-1} \lambda_r\right) d(u_\tau,\bar{u}_\tau)\\ &\leq \left(\sup_{\tau\in\Z} \prod_{r=\tau}^{\tau+T-1} \lambda_r\right) d_\infty(u,\bar{u}) = \ell d_\infty(u,\bar{u}) \end{align*} for all $u,\bar{u}\in \ell^\infty(U)$ and passing to the supremum over all $\tau\in\Z$ yields \begin{align*} d_\infty(F^T(u),F^T(\bar{u})) \leq \ell d_\infty(u,\bar{u}). \end{align*} Applying Lemma \ref{contractionlem} with $\ell<1$ then yields that $F$ has a unique fixed point $u^\ast$, which in turn is the unique bounded entire solution of \eqref{deq}. Concerning the error estimate \eqref{errest0}, take a point $\bar{u}\in U$ and consider the constant sequence $u:=(\bar{u})_{t\in\Z}\in\ell^\infty(U)$. By the definition of $d_\infty$ and the error estimate \eqref{errestlem}, one obtains \begin{align*} d(u_s^\ast,\varphi(s,s-tT,u)) &= d(u_s^\ast,F^{tT}(u)_s) \leq d_\infty(u^\ast,F^{tT}(u))\\ &\leq \dfrac{\ell^t}{1-\ell}d_\infty(u,F^T(u)) = \dfrac{\ell^t}{1-\ell} \sup_{s\in\Z}d(u,\varphi(s, s-T, u)) \end{align*} for all $s \in\Z$ and passing to the supremum over all $s \in\Z$ proves \eqref{errest0}. Furthermore, it is not difficult to see that $u^\ast$ is also globally attractive. \item [(b)] First, we verify the following properties: \begin{itemize} \item [$\bullet$] $\A^\ast$ is invariant: $\A^\ast(t+1)=\{H_t(u^\ast_t)\}=H_t\left(\{u^\ast_t\}\right)=H_t\left(\A^\ast(t)\right)$ for all $t\in \Z$, \item [$\bullet$] $\A^\ast$ is compact: $\A^\ast(t)=\{u^\ast_t\}$ is a singleton and thus compact. \end{itemize} Now, let $B\subseteq U$ be bounded and $(\tau,u_\tau) \in \Z \times B $, i.e., there exist a real $R>0$ and a point $u\in U$ so that $u_\tau, u^\ast_\tau \in B_R(u)$. The definition of $\A^\ast$, the triangle inequality and the assumption (i) yield \begin{align*} \inf_{a \in \A^\ast(t)} d(\varphi(t,\tau,u_\tau),a)&= d(\varphi(t,\tau,u_\tau),u_t^\ast)= d(\varphi(t,\tau,u_\tau),\varphi(t,\tau,u_\tau^\ast)) \\ &\leq \left(\prod_{r=\tau}^{t-1}\lambda_r\right)d(u_\tau,u_\tau^\ast) \leq 2R\prod_{r=\tau}^{t-1}\lambda_r \end{align*} for all $u_\tau \in B$ and passing to the supremum over all $u_\tau \in B$ yields \begin{align*} \dist(\varphi(t,\tau,B),\A^\ast(t))\leq 2R\prod_{r=\tau}^{t-1}\lambda_r. \end{align*} Hence, combining this with the limit relations $ \lim_{t \rightarrow \infty}\prod_{r=\tau}^{t-1}\lambda_r=\lim_{\tau \rightarrow -\infty}\prod_{r=\tau}^{t-1}\lambda_r=0$ implies $\A^\ast$ is the pullback and forward attractor of \eqref{deq}. \end{itemize} This completes the proof of the theorem.
\end{proof} We next formulate the corresponding result on periodic attractors. \begin{corollary} \label{patt} If there exists a $\theta\in\N$ such that \eqref{pdeq} holds for all $t\in\Z$ and \begin{align} \label{pprod} \prod_{r=0}^{\theta-1}\lambda_r<1, \end{align} then the unique bounded entire solution $u^\ast$ to \eqref{deq} is $\theta$-periodic and the nonautonomous set $\A^\ast=\{(t, u^\ast_t)\in\Z\times U: t\in\Z\}$ is the pullback and forward attractor of \eqref{deq}. \end{corollary} \begin{proof} {Proof} The assumption \eqref{pdeq} yields that both \eqref{deq} and $\lambda_t$ are $\theta$-periodic, implying that the assumption \eqref{ellass} is satisfied for $T:=\theta$. The rest of the proof is identical to that of Corollary \ref{corpsemilinear}, where Theorem \ref{thmsolution} is used instead. \end{proof} \subsection{Examples of contractive non-compact difference equations} \label{excontractive} In this subsection, we provide an example where a difference equation is contractive but not compact in order to explain the reason behind our alternative assumption. In the Banach space $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ with \begin{align} \label{norm} \|v\|&:=\sup_{|x|\leq\tfrac{L}{2}}|v(x)| \fall v\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right], \end{align} we consider nonautonomous difference equations \eqref{deq} with right-hand side given by \begin{align*} H_t(u)(x):=\dfrac{b_t(x) u(x)}{1+|u(x)|} \end{align*} where $b_t:\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to[0,\infty)$ is continuous. It is obvious that $H_t$ is well-defined in $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, i.e., $H_t:C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$. However, $H_t$ may not be completely continuous. Indeed, suppose that $H_t$ is completely continuous, i.e., $H_t$ maps any bounded subset of $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ into a relatively compact subset of $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$; then $\overline{H_t(B_1(0))}$ is compact. If, in addition, $b_t$ is bounded below by a positive constant, then $\overline{H_t(B_1(0))}$ contains a closed ball $\overline{B}_r(0)$ for some $r>0$. Therefore, by a consequence of Riesz's lemma \cite[Theorem 4, p.~3]{Diestel:1984}, the state space $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ would be finite-dimensional, which does not hold. Hence, the operator $H_t$ is in general not compact. On the other hand, it can be easily seen that $H_t$ is contractive, i.e., $H_t$ satisfies Assumption (i) in Subsection 2.2 with $d(u,\bar{u}):=\|u-\bar{u}\|$ for all $u,\bar{u}\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ and $\lambda_t:=\sup_{|x|\leq \tfrac{L}{2}}b_t(x)$ for all $t\in\Z$. Indeed, letting $u,\bar{u}\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ and using that $z\mapsto\tfrac{z}{1+|z|}$ is $1$-Lipschitz, one obtains \begin{align*} |H_t(u)(x)-H_t(\bar{u})(x)|&=b_t(x)\left|\dfrac{u(x)}{1+|u(x)|}-\dfrac{\bar{u}(x)}{1+|\bar{u}(x)|}\right|\\ &\leq b_t(x) |u(x)-\bar{u}(x)|\leq\lambda_t\|u-\bar{u}\| \end{align*} and passing to the supremum over all $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ then yields the contractivity of $H_t$. In conclusion, one can construct attractors of nonautonomous difference equations under the assumption of contractivity instead of compactness. \section{Applications} \label{sec:App} Now we apply the theory from the previous section to integrodifference equations (IDEs for short).
To be more precise, we study a class of IDEs involving Hammerstein integral operators satisfying a global Lipschitz condition and being defined on the space of continuous functions over a compact domain. \subsection{Integrodifference equations} In this part, we deal with scalar IDEs of Hammerstein type which are equations $\eqref{deq}$ with right-hand side given by \begin{align} \label{hdef} H_t(u):=\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}k_t(\cdot,y)g_t(y,u(y)) \d y+h_t \end{align} under the following standing assumptions: \begin{itemize} \item the kernel $k_t:\left[-\tfrac{L}{2},\tfrac{L}{2}\right]^2\to\R$ is continuous and satisfies \begin{align*} \sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\d y<\infty, \end{align*} \item the growth function $g_t:\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\times\R\to\R$ is such that $g_t$ is bounded and satisfies a global Lipschitz condition for all $t\in\Z$, i.e.,\ there exists a continuous function $\hat{g}_t:\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to[0,\infty)$ such that for all $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ and $z,\bar{z}\in\R$, one has \begin{align*} |g_t(x,z)-g_t(x,\bar{z})|\leq\hat{g}_t(x)|z-\bar{z}| \fall t\in\Z; \end{align*} additionally, $g_t(\cdot,z):\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to\R$ is continuous and $g_t(x,0)=0$ for all $t\in\Z$ and $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, \item the inhomogeneity $h_t\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ satisfies $\sup_{t\in\Z}\|h_t\|<\infty$ where the norm $\|\cdot\|$ is defined in \eqref{norm}. \end{itemize} The right-hand side \eqref{hdef} of \eqref{deq} defines a Hammerstein integral operator whose properties are summarized in the following theorem {\cite[Theorem B.5 and Corollary B.6]{Poetzsche:19}}: \begin{theorem}\label{thmlip} Let $t\in\Z$. The Hammerstein operator $H_t:C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ is well-defined and satisfies the global Lipschitz condition \begin{align} \label{LipHt} \|H_t(u)-H_t(\bar{u})\| \leq \sup_{|x|\leq \tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\hat{g}_t(y) \d y \|u-\bar{u}\| \end{align} for all $u,\bar{u}\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$; one obtains \begin{align*} \lambda_t:=\sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\hat{g}_t(y) \d y \end{align*} as the Lipschitz constant of the Hammerstein operator $H_t$. \end{theorem} Notice that the Lipschitz constant $\lambda_t$ in Theorem \ref{thmlip} is the same as the constant $\lambda_t$ in Assumption (i) in Subsection 2.2. Under the assumptions of Theorem \ref{thmlip}, we obtain from Theorem \ref{thmsolution}: \begin{theorem} \label{thmide} Let \eqref{deq} have the right-hand side \eqref{hdef}. If there exists a $T\in\N$ such that \eqref{ellass} holds, then the following results hold: \begin{itemize} \item [(a)] \eqref{deq} possesses a unique bounded entire solution $u^\ast=(u_t^\ast)_{t\in\Z}$ in $C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$. 
Moreover, $u^\ast$ is globally attractive and the following error estimate \begin{align} \label{errest} \sup_{s\in\Z}\|u_s^\ast-\varphi(s,s-tT,u)\| \leq \dfrac{\ell^t}{1-\ell} \sup_{s\in\Z}\|u-\varphi(s,s-T,u)\| \end{align} is satisfied for all $t\in \N$ and $u \in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, \item [(b)] the nonautonomous set $\A^\ast:=\left\{(t,u^\ast_t)\in \Z\times C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]: t\in\Z\right\}$ is the pullback and forward attractor of \eqref{deq} with right-hand side \eqref{hdef}. \end{itemize} \end{theorem} \begin{proof} {Proof} First, notice that the well-definedness and Lipschitz properties of the Hammerstein type mapping $H_t$ as in \eqref{hdef} are stated in Theorem \ref{thmlip}. Then applying Theorem \ref{thmsolution} with $U:=C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ and the metric $d$ as the norm $\|\cdot\|$ proves both results of the theorem. \end{proof} Next we deal with IDEs being $\theta$-periodic for some $\theta\in\N$, i.e., \begin{align} \label{pide} k_{t+\theta}=k_t, g_{t+\theta}=g_t \textrm{ and } h_{t+\theta}=h_t \end{align} hold for all $t\in\Z$. The periodicity of the pullback and forward attractor of a $\theta$-periodic IDE is stated as follows. \begin{corollary} \label{pattide} Let \eqref{deq} have the right-hand side \eqref{hdef}. If there exists a $\theta\in\N$ such that the assumptions \eqref{pprod} and \eqref{pide} hold for all $t\in\Z$, then the unique bounded entire solution $u^\ast$ to \eqref{deq} is $\theta$-periodic and the nonautonomous set $\A^\ast=\{(t, u^\ast_t)\in\Z\times C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]: t\in\Z\}$ is the pullback and forward attractor of \eqref{deq}. \end{corollary} \begin{proof} {Proof} The argument is verbatim to the one given in the proof of Corollary \ref{patt} where the assumption \eqref{pide} and Theorem \ref{thmide} are used instead. \end{proof} \subsection{Examples} In this subsection, we apply our theoretical results above to IDEs. IDEs are investigated under commonly used kernels $k_t$ and growth functions $g_t$ listed in Tables~\ref{tabkernel} and \ref{tabgrowth}, respectively (cf.~\cite{Lutscher:19}), whose coefficients $a_t$ and $b_t:\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\to[0,\infty)$ satisfy $a_t> 0$ and $\sup_{|x|\leq \tfrac{L}{2}}|b_t(x)|\leq \beta_t$, respectively, for all $t\in\Z$ and some real $\beta_t$ with $\sup_{t\in\Z} \beta_t<\infty$. 
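The closed-form kernel bounds in Table~\ref{tabkernel} can easily be checked numerically. The following minimal Python sketch (not part of the original analysis; it assumes NumPy is available and uses the parameter values $L=6$ and $a=10$ from \egref{example} below) approximates $\sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\d y$ for the Laplace and Gau{\ss} kernels with the trapezoidal rule and compares the result with the corresponding table entries.
\begin{verbatim}
# Illustrative sketch only: check the kernel bounds in the table of kernels.
import math
import numpy as np

L, a = 6.0, 10.0                 # habitat length and kernel parameter
n = 2000                         # number of subintervals of [-L/2, L/2]
h = L / n
y = np.linspace(-L/2, L/2, n + 1)                # quadrature nodes
w = np.full(n + 1, h); w[0] = w[-1] = h / 2      # trapezoidal weights

def sup_int(kernel):
    # sup over x of the quadrature approximation of int |k(x, y)| dy
    return max(float(np.abs(kernel(x, y)) @ w) for x in y)

laplace = lambda x, y: 0.5 * a * np.exp(-a * np.abs(x - y))
gauss   = lambda x, y: a / np.sqrt(np.pi) * np.exp(-a**2 * (x - y)**2)

print(sup_int(laplace), 1 - math.exp(-a * L / 2))   # both approximately 1
print(sup_int(gauss),   math.erf(a * L / 2))        # both approximately 1
\end{verbatim}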
\begin{table}[H] \begin{center} \begin{tabular}{c|c|c|c} & Laplace & Gau{\ss} & tent \\ \hline \rule{0pt}{20pt} $k_t(x,y)$ & $\dfrac{a_t}{2}e^{-a_t|x-y|}$ & $\dfrac{a_t}{\sqrt{\pi}}e^{-a_t^2(x-y)^2}$ & $\max(0,a_t-a_t^2|x-y|)$\\ \hline \rule{0pt}{20pt} $\sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\d y$ & $1-e^{-\tfrac{a_tL}{2}}$ & $\erf\left(\dfrac{a_tL}{2}\right)$ & $a_tL-\dfrac{1}{4}a_t^2L^2$\\ \end{tabular} \caption{Kernels $k_t$ and their upper bounds $\sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\d y$} \label{tabkernel} \vspace{-20pt} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{c|c|c|c} & logistic & Beverton-Holt & Ricker \\ \hline \rule{0pt}{20pt} $g_t(x,z)$ & $\max(0,b_t(x)z(1-z))$ & $\dfrac{b_t(x)z}{1+|z|}$ & $ze^{-b_t(x)|z|}$\\ \hline \rule{0pt}{20pt} $\sup_{|x|\leq\tfrac{L}{2}}|g_t(x,z)|$ & $\dfrac{1}{4}\beta_t$ & $\beta_t$ & $e^{-1}\beta_t$\\ \end{tabular} \caption{Growth functions $g_t$ and their upper bounds $\sup_{|x|\leq\tfrac{L}{2}}|g_t(x,z)|$} \label{tabgrowth} \vspace{-20pt} \end{center} \end{table} In order to construct and numerically approximate the pullback and forward attractor $\A^\ast=\left\{(t,u^\ast_t)\in \Z\times C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]: t\in\Z\right\}$ of an IDE \eqref{deq} from Theorem \ref{thmide}, the Lipschitz constant $\lambda_t$ of the Hammerstein operator $H_t$ from Theorem \ref{thmlip} and the error estimate \eqref{errest} are required. First, for the Lipschitz function $\lambda_t$, one can obtain it from Table~\ref{tabgrowth}. Indeed, notice that all the growth functions $g_t$ from Table~\ref{tabgrowth} possess the same continuous function $\hat{g}_t(x):=\beta_t$ (independent of $x$) and thus \begin{align*} \lambda_t=\beta_t\sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_t(x,y)|\d y. \end{align*} Then, concerning the error estimate \eqref{errest}, we compute $\sup_{s\in\Z}\|u-\varphi(s,s-T,u)\|$ by \begin{align*} \|u-\varphi(s,s-T,u)\|&\leq \|u\|+\|\varphi(s,s-T,u)\|\\ &=\|u\|+\|H_{s-1}\left(\varphi(s-1,s-T,u)\right)\|\\ &\leq \|u\|+\sup_{s\in\Z}\|H_{s-1}\left(\varphi(s-1,s-T,u)\right)\| \end{align*} for all $s\in\Z$ by the triangle inequality and the definition of $\varphi(s,s-T,u)$. Followed by that, we determine the upper bound of $\sup_{s\in\Z}\|H_{s-1}\left(\varphi(s-1,s-T,u)\right)\|$. For each function $u\in C\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, one obtains \begin{align*} |H_s(u)(x)| &\leq \int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_s(x,y)||g_s(y,u(y))|\d y+|h_s(x)|\\ &\leq \sup_{|x|\leq\tfrac{L}{2}}\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}}|k_s(x,y)|\d y\sup_{|x|\leq\tfrac{L}{2}}|g_s(x,u(x))|+\|h\|_\infty\\ &=:l_1(s,u)+\|h\|_\infty \end{align*} for all $s\in\Z$ and $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ where $\|u\|_\infty:=\sup_{t\in\Z}\|u_t\|$ for all $u\in \ell^\infty(C\left[-\tfrac{L}{2},\tfrac{L}{2}\right])$. If $\sup_{s\in\Z}l_1(s,u)<\infty$, then subsequent passing to the supremum over all $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ and $s\in\Z$ yield \begin{align*} \sup_{s\in\Z}\|H_s(u)\| \leq \sup_{s\in\Z}l_1(s,u) + \|h\|_\infty \end{align*} since $l_1(s,u)$ is independent of $x$ and $h_s$ is bounded above in $s\in\Z$. This implies the upper bounds of $\sup_{s\in\Z}\|H_{s-1}\left(\varphi(s-1,s-T,u)\right)\|$ and $\|u-\varphi(s,s-T,u)\|$ for all $s\in\Z$. 
Similarly, passing to the supremum over all $s\in\Z$ results in \begin{align} \label{l2} \sup_{s\in\Z}\|u-\varphi(s,s-T,u)\|\leq \|u\| + \sup_{s\in\Z}l_1\left(s-1,\varphi(s-1,s-T,u)\right) + \|h\|_\infty=:l_2 \end{align} and a computable bound for the right-hand side of the error estimate \eqref{errest} is obtained as a result. Let us now be more precise by illustrating some results using concrete IDEs and their discretisations. In order to avoid the numerical integration of the integral operator and obtain a full discretisation, we work with the Nystr\"om method (cf.~\cite{Atkinson:09}) by replacing the integral with a weighted sum, such that the discretisation of \eqref{deq} leads to a sequence of equations $(\Delta_n)_{n\in\N}$ of the form \begin{align*} u_{t+1}(x)=H^n_t(u_t)(x)=\sum_{i=0}^{n}\omega_i k_t(x,\eta_i)g_t(\eta_i,u_t(\eta_i)) + h_t(x) \tag{$\Delta_n$} \end{align*} for all $t\in\Z$ and $x\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, assuming that a convergent quadrature rule with nodes $\eta_i$ and weights $\omega_i$, $i=0,1,\ldots,n$, is given. In particular, \mbox{\egref{example}} below uses the trapezoidal quadrature rule to approximate integrals with \begin{align*} h:=\dfrac{L}{n}, \eta_i:=-\dfrac{L}{2}+ih \textrm{ and } \omega_i:=\begin{cases} \dfrac{h}{2}, &i\in\{0,n\},\\ h, &\textrm{otherwise} \end{cases} \end{align*} for all $i=0,1,\ldots,n$. Moreover, we can also approximate the total population $\overline{u}_t$ at time $t$ as \begin{align*} \overline{u}_t:=\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}} u_t(y)\d y\approx\sum_{i=0}^{n}\omega_i u_t(\eta_i). \end{align*} On the other hand, for a discretisation, the error estimate $\sup_{s\in\Z}\|u^\ast_s-\varphi(s,s-tT,u)\|$ in \eqref{errest} has to be less than or equal to some real tolerance $tol>0$ for some sufficiently large $t\in\N$. Therefore, choosing for instance $\ell \leq \dfrac{1}{2}$ (which in particular satisfies the assumption \eqref{ellass}), one obtains by \eqref{l2} \begin{align*} \dfrac{\ell^t}{1-\ell} \sup_{s\in\Z}\|u-\varphi(s,s-T,u)\| \leq \dfrac{\left(\tfrac{1}{2}\right)^t}{1-\tfrac{1}{2}} l_2 = 2^{1-t} l_2 < tol, \end{align*} resulting in $t\geq \log_2\left(\dfrac{2l_2}{tol}\right)$. Thus, we need at least \begin{align} \label{fewestiteration} S:=T\log_2\left(\dfrac{2l_2}{tol}\right) \end{align} iterations so that the error bound in \eqref{errest} falls below the tolerance $tol$ and the fibers of the pullback and forward attractor $\A^\ast$ of the IDE \eqref{deq} are thus approximated up to this tolerance. \begin{example} \label{example} Suppose that in a habitat, a species initially lives mostly on the boundary of the habitat and its population gradually decreases towards the middle areas (cf. the choice of the initial solution $u_0$ below). Over the four seasons of each year, the species periodically breeds and dies, and also spreads from one place to another. Due to the natural conditions, the total population decays dramatically in fall and winter before recovering slightly in spring and summer each year, so the species may face extinction after some time. Therefore, ecologists would like to improve the total population by supporting it in each season, comparing various options in order to choose the optimal one. A mathematical model based on an IDE can then help to study the system and to identify the option yielding the largest total population.
For simplicity, we consider the scalar, $\theta$-periodic and inhomogeneous IDE \begin{align} \label{BH} u_{t+1}(x)=H_t(u_t)(x):=\int_{-\tfrac{L}{2}}^{\tfrac{L}{2}} \dfrac{a}{2}e^{-a|x-y|}\dfrac{b_t(y)u_t(y)}{1+|u_t(y)|}\d y+h_t(x) \end{align} for all $(x,t)\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right]\times\Z$, which fits into the realistic problem above and into the framework of \eqref{hdef}, where \begin{itemize} \item the domain $\left[-\tfrac{L}{2},\tfrac{L}{2}\right]$ represents the habitat, \item the kernel $k(x,y):=\dfrac{a}{2}e^{-a|x-y|}$ with $a>0$ represents the dispersal rate of the species between the points $x$ and $y$, \item the periodic growth function $g_t(x,z):=\dfrac{b_t(x)z}{1+|z|}$ represents the natural growth rate of the species at point $x$ and day $t$ starting from the population $z$, while the periodic parameter $b_t(x):=\alpha_t\hat{b}(x)$ represents the natural conditions at point $x$ and day $t$ with \begin{itemize} \item [$\cdot$] $\alpha_t:=C\left(1+\tfrac{1}{2}\sin\left(\tfrac{2\pi t}{\theta}\right)\right)>0$ for all $t\in\Z$, \item [$\cdot$] $\hat{b}(x)\geq 0$ for all $x\in \left[-\tfrac{L}{2},\tfrac{L}{2}\right]$, \item [$\cdot$] $\sup_{|x|\leq \tfrac{L}{2}}|b_t(x)|\leq \beta_t:=\alpha_t \hat{\beta}$ with $\hat{\beta}:=\sup_{|x|\leq \tfrac{L}{2}}|\hat{b}(x)|$ for all $t\in\Z$, \end{itemize} \item the periodic inhomogeneity $h_t(x)$ represents the supported population at point $x$ and day $t$. \end{itemize} With the choice of $C$ given by \begin{align*} C:=\dfrac{1}{\left(1-e^{-\tfrac{aL}{2}}\right)\hat{\beta}}\sqrt[\theta]{\dfrac{1}{2\prod_{r=0}^{\theta-1}\left(1+\tfrac{1}{2}\sin\left(\tfrac{2\pi r}{\theta}\right)\right)}} >0, \end{align*} the assumptions \eqref{ellass} and \eqref{LipHt} hold, where $\lambda_t:=\beta_t\left(1-e^{-aL/2}\right)$ and \begin{align*} \|H_t(u)-H_t(\bar{u})\|\leq \beta_t\left(1-e^{-aL/2}\right) \|u-\bar{u}\| \fall u,\bar{u}\in C(\left[-\tfrac{L}{2},\tfrac{L}{2}\right]). \end{align*} By Theorem \ref{thmide}, for $\ell=\prod_{r=0}^{\theta-1}\lambda_r\leq\dfrac{1}{2}$, \eqref{BH} is contractive and has the $\theta$-periodic pullback and forward attractor $\A^\ast=\{(t,u^\ast_t)\in\Z\times C(\left[-\tfrac{L}{2},\tfrac{L}{2}\right]): t\in\Z\}$ due to Corollary~\ref{pattide}. Furthermore, for the initial solution $u_0\in C(\left[-\tfrac{L}{2},\tfrac{L}{2}\right])$ representing the initial population, the error estimate \eqref{errest} then becomes \begin{align*} \sup_{s\in\Z}\|u_s^\ast-\varphi(s,s-t\theta,u_0)\| \leq 2^{1-t} \left(\|u_0\| + (1-e^{-aL/2})\sup_{s\in\Z}\beta_{s-1} + \|h\|_\infty\right) \end{align*} for all $t\in\N$. By the Nystr\"om method with the trapezoidal quadrature rule, the $t$-fibers $\A^\ast(t)$ are illustrated in Figure \ref{figattractor} for $t\in[0,T']$, $L:=6$, $\theta:=365$ (a year), $tol:=10^{-6}$, $a:=10$, $\hat{b}(x):=2|x|+3$ and \begin{align*} u_0(x):=\begin{cases} 2x^2+0.5, &x\in[-1,1],\\ 2.5, &\textrm{otherwise}. \end{cases} \end{align*} For $T'$, in order to observe the periodicity of $\A^\ast$, we pick $T':=366$ (one year and one day) so that the same behaviour of the system at days 0 and 365 can be displayed.
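To make the preceding construction concrete, the following minimal Python sketch (assuming NumPy; all variable and function names are ours, and it is only meant to illustrate the scheme, not to reproduce Figure~\ref{figattractor} exactly) implements one Nystr\"om step of \eqref{BH} with the trapezoidal rule, chooses $C$ such that $\ell=\prod_{r=0}^{\theta-1}\lambda_r=\tfrac{1}{2}$, iterates for roughly $S$ steps and approximates the total population; the seasonal pattern corresponds to the inhomogeneity $h^1_t$ considered below.
\begin{verbatim}
# Illustrative sketch only (not the authors' code): trapezoidal Nystroem
# discretisation of the Beverton-Holt IDE above with Laplace kernel.
import math
import numpy as np

L, theta, a = 6.0, 365, 10.0
n = 200                                       # subintervals of the habitat
h = L / n
y = np.linspace(-L/2, L/2, n + 1)             # nodes eta_i
w = np.full(n + 1, h); w[0] = w[-1] = h / 2   # trapezoidal weights omega_i

b_hat = 2 * np.abs(y) + 3                     # \hat b at the nodes, sup = 9
decay = 1 - math.exp(-a * L / 2)              # kernel bound 1 - e^{-aL/2}
season = 1 + 0.5 * np.sin(2 * np.pi * np.arange(theta) / theta)
# choose C such that ell = prod_r lambda_r = 1/2 (computed via logarithms)
C = math.exp((math.log(0.5) - np.sum(np.log(decay * 9.0 * season))) / theta)

K = 0.5 * a * np.exp(-a * np.abs(y[:, None] - y[None, :]))   # Laplace kernel

def inhom(t, pattern=(1, 1, 2, 2)):           # seasonal support, here h^1_t
    tm = (t - 1) % theta + 1
    s = math.ceil(4 * tm / theta) - 1         # 0..3 = spring, summer, fall, winter
    return pattern[s] * np.cos(np.pi * y / L)

def step(t, u, pattern=(1, 1, 2, 2)):
    """One Nystroem step u_{t+1} = H_t^n(u_t) at the nodes."""
    b_t = C * season[t % theta] * b_hat
    growth = b_t * u / (1 + np.abs(u))        # Beverton-Holt growth g_t
    return K @ (w * growth) + inhom(t, pattern)

u = np.where(np.abs(y) <= 1, 2 * y**2 + 0.5, 2.5)   # initial population u_0
for t in range(24 * theta):                   # roughly S iterations (24 years)
    u = step(t, u)
print("approximate total population:", float(w @ u))
\end{verbatim}
Here the matrix--vector product realises the weighted sum in $(\Delta_n)$, and the total population is approximated by $\sum_{i=0}^{n}\omega_i u_t(\eta_i)$.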
For the inhomogeneity $h_t(x)$, we consider the four functions $h^i_t(x)$, $i\in\{1,2,3,4\}$, listed in the following table: \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{2.5} \begin{tabular}{c|c|c|c|c} & $t\in \left(\left.0,\dfrac{\theta}{4}\right.\right]$ & $t\in \left(\left.\dfrac{\theta}{4},\dfrac{\theta}{2}\right.\right]$ & $t\in \left(\left.\dfrac{\theta}{2},\dfrac{3\theta}{4}\right.\right]$ & $t\in \left(\left.\dfrac{3\theta}{4},\theta\right.\right]$ \\ \hline $h_t^1(x)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ \\ \hline $h_t^2(x)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ \\ \hline $h_t^3(x)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ \\ \hline $h_t^4(x)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ & $2\cos\left(\dfrac{\pi x}{L}\right)$ & $\cos\left(\dfrac{\pi x}{L}\right)$ \end{tabular} \caption{Inhomogeneities $h_t^i$, $i\in\{1,2,3,4\}$} \label{tabinhom} \vspace{-20pt} \end{center} \end{table} In Table \ref{tabinhom}, the intervals $\left(\left.0,\tfrac{\theta}{4}\right.\right]$, $\left(\left.\tfrac{\theta}{4},\tfrac{\theta}{2}\right.\right]$, $\left(\left.\tfrac{\theta}{2},\tfrac{3\theta}{4}\right.\right]$ and $\left(\left.\tfrac{3\theta}{4},\theta\right.\right]$ represent the four seasons spring, summer, fall and winter, respectively, and each $h_t^i$, $i\in\{1,2,3,4\}$, represents a different support strategy across the seasons. For instance, the support provided by $h_t^1$ in fall and winter is twice that in spring and summer. Moreover, by the definitions of $u_0$, $b_t$ and $h_t$, the iteration bound \eqref{fewestiteration} yields $S=8760$. Finally, in order to determine the optimal choice of the artificial support, we compute the mean total populations $\dfrac{1}{\theta}\sum_{t=0}^{\theta-1} \bar{u}_t$ over a year for the inhomogeneities $h^i_t(x)$, $i\in\{1,2,3,4\}$, from Table \ref{tabinhom} as follows. \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{2} \begin{tabular}{c|c|c|c|c} & $h^1_t(x)$ & $h^2_t(x)$ & $h^3_t(x)$ & $h^4_t(x)$ \\ \hline $\dfrac{1}{\theta}\sum_{t=0}^{\theta-1} \bar{u}_t$ & $7.9640$ & $5.8614$ & $8.0794$ & $10.1816$ \end{tabular} \caption{Mean total populations $\dfrac{1}{\theta}\sum_{t=0}^{\theta-1} \bar{u}_t$ over a year for inhomogeneities $h^i_t(x), i\in\{1,2,3,4\}$} \label{tabmeantotalpop} \vspace{-20pt} \end{center} \end{table} According to Table \ref{tabmeantotalpop}, we conclude that $h^4_t(x)$ is the best option among the four. To interpret this behaviour, observe that if the support provided in spring and fall is larger than that in summer and winter, counteracting the natural conditions, then the species reaches the largest total population after some generations. More precisely, the total population declines significantly in winter and hence requires larger support in spring. Meanwhile, despite reaching a peak in summer, the population is supported more in fall in order to avoid a dramatic decline caused by the natural conditions.
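Continuing the sketch above (and reusing the arrays and functions defined there), the comparison in Table~\ref{tabmeantotalpop} can be reproduced qualitatively by iterating the scheme for each seasonal pattern and averaging the approximate total population over the final simulated year; the resulting values depend on the spatial resolution $n$ and are therefore not expected to coincide with the table to all digits.
\begin{verbatim}
# Sketch continued: mean total population over the final simulated year for
# the four seasonal support patterns (reuses y, w, theta and step from above).
patterns = {1: (1, 1, 2, 2), 2: (1, 2, 1, 2), 3: (2, 2, 1, 1), 4: (2, 1, 2, 1)}

def mean_total_population(pattern, years=24):
    u = np.where(np.abs(y) <= 1, 2 * y**2 + 0.5, 2.5)   # initial population u_0
    totals = []
    for t in range((years + 1) * theta):
        u = step(t, u, pattern)
        if t >= years * theta:                 # record the final year only
            totals.append(float(w @ u))
    return sum(totals) / len(totals)

for i, p in patterns.items():
    print("h^%d: mean total population over a year = %.4f"
          % (i, mean_total_population(p)))
\end{verbatim}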
\begin{figure}[H] \begin{center} \includegraphics[trim=125 20 155 105,clip,width=.75\textwidth]{190620-1b} \includegraphics[trim=125 20 155 105,clip,width=.75\textwidth]{190620-2b} \includegraphics[trim=125 20 155 105,clip,width=.75\textwidth]{190620-3b} \includegraphics[trim=125 20 155 105,clip,width=.75\textwidth]{190620-4b} \caption{The $t$-fibers $\A^\ast(t)$ for $t\in[0,365]$ and $h_t(x)=h^i_t(x)$, $i\in\{1,2,3,4\}$ from the top ($i=1$) to the bottom ($i=4$), respectively.} \label{figattractor} \end{center} \end{figure} \end{example} \section*{Acknowledgments} We are deeply grateful to our supervisor for their support as well as helpful comments and suggestions on the manuscript. We also thank a colleague of ours for polishing the language in some parts. \begin{thebibliography}{99} \bibitem[\protect\citeauthoryear{Atkinson et al.}{2009}]{Atkinson:09} Atkinson, K.E. and Han, W. (2009) `Theoretical Numerical Analysis: A Functional Analysis Framework', Third Edition, \textit{Texts in Applied Mathematics 39}, Springer, New York. \bibitem[\protect\citeauthoryear{Carvalho et al.}{1992}]{Carval:12} Carvalho, A.N., Langa J.A. and Robinson, J.C. (2013) `Attractors for infinite-dimensional nonautonomous dynamical systems', \textit{Applied Mathematical Sciences}, Vol. 182, Springer, Berlin. \bibitem[\protect\citeauthoryear{Deimling}{1985}]{Deimling:1985} Deimling, K. (1985) \textit{Nonlinear functional analysis}, Springer, Berlin, Heidelberg, New York, Tokyo. \bibitem[\protect\citeauthoryear{Diestel}{1984}]{Diestel:1984} Diestel, J. (1984) `Sequences and Series in Banach Spaces', \textit{Graduate Texts in Mathematics 92}, Springer-Verlag, New York, Heidelberg, Berlin. \bibitem[\protect\citeauthoryear{G\"opfert}{2009}]{Goepfert:09} G\"opfert, A., Riedrich, T. und Tammer, C. (2009) \textit{Angewandte Funktionalanalysis}, Vieweg \& Teuber, 1.Auflage. \bibitem[\protect\citeauthoryear{H\"ammerlin et al.}{1991}]{Haemmerlin:1991} H\"ammerlin, G. and Hoffmann, K.H. (1991) `Numerical Mathematics', \textit{Undergraduate Texts in Mathematics}, Vol. 192, Springer, New York. \bibitem[\protect\citeauthoryear{Huynh et al.}{2020}]{Huynh:20} Huynh, H., Kloeden, P.E. and P\"otzsche, C. (2020) `Forward and pullback dynamics of nonautonomous integrodifference equations: Basic constructions', \textit{J. Dyn. Diff. Equat.}. https://doi.org/10.1007/s10884-020-09887-8. \bibitem[\protect\citeauthoryear{Jacobsen et al.}{2015}]{Jacobsen:15} Jacobsen, J., Jin, Y. and Lewis, M.A. (2015) `Integrodifference models for persistence in temporally varying river environments', \textit{J. Math. Biol.}, Vol. 70, pp.~549--590. \bibitem[\protect\citeauthoryear{Kloeden et al.}{2000}]{Kloeden:2000} Kloeden, P.E. (2000) `Pullback attractors in nonautonomous difference equations', \textit{J. Difference Equ. Appl.}, Vol. 6, No. 1, pp.~33--52. \bibitem[\protect\citeauthoryear{Kloeden et al.}{2016}]{Kloeden:16} Kloeden, P.E. and Lorenz, T. (2016) `Construction of nonautonomous forward attractors', \textit{Proc. Am. Math. Soc.}, Vol. 144, pp.~ 259--268. \bibitem[\protect\citeauthoryear{Kloeden et al.}{2011}]{Kloeden:11} Kloeden, P.E. and Rasmussen, M. (2011) `Nonautonomous Dynamical Systems', \textit{Mathematical Surveys and Monographs}, Vol. 176, American Mathematical Society, Rhode Island. \bibitem[\protect\citeauthoryear{Kloeden et al.}{2020}]{Kloeden:2021} Kloeden, P.E. and Yang, M. (2021) `Introduction to Nonautonomous Dynamical Systems and their Attractors', \textit{Interdisciplinary Mathematical Sciences}, Vol. 
21, World Scientific Publishing Co. Inc, Singapore. \bibitem[\protect\citeauthoryear{Kot et al.}{1986}]{Kot:1986} Kot, M. and Schaffer, W. M. (1986) `Discrete-time growth-dispersal models', \textit{Math. Biosci.}, Vol. 80, pp.~109--136. \bibitem[\protect\citeauthoryear{Lutscher}{2019}]{Lutscher:19} Lutscher, F. (2019) `Integrodifference Equations in Spatial Ecology', \textit{Interdisciplinary Applied Mathematics}, Vol. 49, Springer Nature, Switzerland. \bibitem[\protect\citeauthoryear{P{\"o}tzsche}{2010}]{Poetzsche:10} P{\"o}tzsche, C. (2010) `Geometric theory of discrete nonautonomous dynamical systems', \textit{Lect. Notes Math. 2002}, Springer, Berlin. \bibitem[\protect\citeauthoryear{P{\"o}tzsche}{2019}]{Poetzsche:19} P{\"o}tzsche, C. (2019) `Numerical dynamics of integrodifference equations: Basics and discretization errors in a ${C}^0$-setting', \textit{Appl. Math. Comput.}, Vol. 354, pp.~422--443. \end{thebibliography} \end{document}
2205.06285v2
http://arxiv.org/abs/2205.06285v2
Essential holonomy of Cantor actions
\documentclass{amsart} \usepackage{graphicx} \usepackage{amssymb} \usepackage{epstopdf} \usepackage{nicefrac} \usepackage{enumerate} \usepackage{hyperref} \usepackage{float} \usepackage{color} \usepackage{pst-all} \usepackage{tabularx} \DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png} \parskip = 6pt \parindent = 0.0in \hoffset=-.7in \voffset=-.7in \setlength{\textwidth}{6in} \setlength{\textheight}{9in} \newtheorem{theorem}{THEOREM}[section] \newtheorem{thm}{THEOREM}[section] \newtheorem{conj}[thm]{CONJECTURE} \newtheorem{cor}[thm]{COROLLARY} \newtheorem{definition}[thm]{DEFINITION} \newtheorem{defn}[thm]{DEFINITION} \newtheorem{ex}[thm]{EXAMPLE} \newtheorem{hyp}[thm]{HYPOTHESES} \newtheorem{lemma}[thm]{LEMMA} \newtheorem{prob}[thm]{PROBLEM} \newtheorem{prop}[thm]{PROPOSITION} \newtheorem{quest}[thm]{QUESTION} \newtheorem{problem}[thm]{PROBLEM} \newtheorem{remark}[thm]{REMARK} \newtheorem{const}[thm]{CONSTRUCTION} \newcommand{\ds}{\displaystyle} \newcommand{\cN}{{\mathcal N}} \newcommand{\cA}{{\mathcal A}} \newcommand{\cD}{{\mathcal D}} \newcommand{\cE}{{\mathcal E}} \newcommand{\cG}{{\mathcal G}} \newcommand{\cH}{{\mathcal H}} \newcommand{\cI}{{\mathcal I}} \newcommand{\cK}{{\mathcal K}} \newcommand{\cL}{{\mathcal L}} \newcommand{\cM}{{\mathcal M}} \newcommand{\cO}{{\mathcal O}} \newcommand{\cP}{{\mathcal P}} \newcommand{\CO}{{\rm CO}} \newcommand{\cS}{{\mathcal S}} \newcommand{\cT}{{\mathcal T}} \newcommand{\cU}{{\mathcal U}} \newcommand{\cV}{{\mathcal V}} \newcommand{\cW}{{\mathcal W}} \newcommand{\cZ}{{\mathcal Z}} \newcommand{\diam}{{\rm diam}} \newcommand{\dist}{{\rm dist}} \newcommand{\dX}{d_{\fX}} \newcommand{\e}{{\varepsilon}} \newcommand{\fD}{{\mathfrak{D}}} \newcommand{\fE}{{\mathfrak{E}}} \newcommand{\fH}{{\mathfrak{H}}} \newcommand{\fG}{{\mathfrak{G}}} \newcommand{\fK}{{\mathfrak{K}}} \newcommand{\Fix}{{\rm Fix}} \newcommand{\fM}{{\mathfrak{M}}} \newcommand{\fN}{{\mathfrak{N}}} \newcommand{\fS}{{\mathfrak{S}}} \newcommand{\fX}{{\mathfrak{X}}} \newcommand{\fY}{{\mathfrak{Y}}} \newcommand{\fU}{{\mathfrak{U}}} \newcommand{\fV}{{\mathfrak{V}}} \newcommand{\G}{\Gamma} \newcommand{\F}{{\mathcal F}} \newcommand{\Ad}{{\rm Ad}} \newcommand{\Aut}{{\rm Aut}} \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\Germ}{{\rm Germ}} \newcommand{\Id}{{\rm Id}} \newcommand{\Iso}{{\rm Iso}} \newcommand{\mH}{{\mathbb H}} \newcommand{\mR}{{\mathbb R}} \newcommand{\mS}{{\mathbb S}} \newcommand{\mT}{{\mathbb T}} \newcommand{\mZ}{{\mathbb Z}} \newcommand{\oG}{{\mathfrak{G}(\Phi)}} \newcommand{\oPhi}{{\overline{\Phi}}} \newcommand{\oPsi}{{\overline{\Psi}}} \newcommand{\Perm}{{\rm Perm}} \newcommand{\vp}{{\varphi}} \newcommand{\whe}{\widehat{e}} \newcommand{\whg}{\widehat{g}} \newcommand{\whh}{\widehat{h}} \newcommand{\whq}{\widehat{q}} \newcommand{\whgamma}{\widehat{\gamma}} \newcommand{\whGamma}{\widehat{\Gamma}} \newcommand{\whLambda}{\widehat{\Lambda}} \newcommand{\whC}{\widehat{C}} \newcommand{\whG}{\widehat{G}} \newcommand{\whU}{\widehat{U}} \newcommand{\whmZ}{\widehat{\mZ}} \newcommand{\whPhi}{\widehat{\Phi}} \newcommand{\whPsi}{\widehat{\Psi}} \newcommand{\whUpsilon}{\widehat{\Upsilon}} \newcommand{\whrho}{{\widehat{\rho}}} \newcommand{\whtau}{{\widehat{\tau}}} \newcommand{\whcH}{{\widehat{\cH}}} \newcommand{\cR}{{\mathcal R}} \newcommand{\whTheta}{{\widehat{\Theta}}} \newcommand{\ovq}{\overline{q}} \newcommand{\ovh}{\overline{h}} \newcommand{\ova}{\overline{a}} \newcommand{\ovb}{\overline{b}} \newcommand{\ovc}{\overline{c}} \newcommand{\ovg}{\overline{g}} 
\newcommand{\ovG}{\overline{G}} \newcommand{\ovx}{\overline{x}} \newcommand{\ovy}{\overline{y}} \newcommand{\ovz}{\overline{z}} \newcommand{\wha}{\widehat{a}} \newcommand{\whb}{\widehat{b}} \newcommand{\whc}{\widehat{c}} \newcommand{\whx}{\widehat{x}} \newcommand{\why}{\widehat{y}} \newcommand{\whz}{\widehat{z}} \newcommand{\wte}{\widetilde{e}} \newcommand{\wtf}{\widetilde{f}} \newcommand{\wtg}{\widetilde{g}} \newcommand{\wth}{\widetilde{h}} \newcommand{\wtk}{\widetilde{k}} \newcommand{\wto}{\widetilde{\omega}} \newcommand{\wtx}{\widetilde{x}} \newcommand{\wty}{\widetilde{y}} \newcommand{\wtz}{\widetilde{z}} \newcommand{\wtgamma}{\widetilde{\gamma}} \newcommand{\wtM}{{\widetilde{M}}} \newcommand{\fL}{{\mathfrak{L}}} \newcommand{\whalpha}{\widehat{\alpha}} \newcommand{\whtheta}{{\widehat{\theta}}} \newcommand\myeq{\mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny def}}}{=}}} \newcommand\homeo{\mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny top}}}{\approx}}} \newcommand\mor{\mathrel{\stackrel{\makebox[0pt]{\mbox{\normalfont\tiny a}}}{\sim}}} \begin{document} \title{Essential holonomy of Cantor actions} \author{Steven Hurder} \address{Steven Hurder, Department of Mathematics, University of Illinois at Chicago, 322 SEO (m/c 249), 851 S. Morgan Street, Chicago, IL 60607-7045} \email{[email protected]} \author{Olga Lukina} \address{Olga Lukina, Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands} \email{[email protected]} \thanks{Version date: January 20, 2023} \thanks{2020 {\it Mathematics Subject Classification}. Primary: 20E18, 22F10, 37A15, 37B05; Secondary: 57S10.} \thanks{Keywords: Equicontinuous group actions, topologically free, essentially free, holonomy, locally quasi-analytic actions, nilpotent actions, lower central series, commutator subgroup.} \begin{abstract} A group action has essential holonomy if the set of points with non-trivial holonomy has positive measure. If such an action is topologically free, then having essential holonomy is equivalent to the action not being essentially free, which means that the set of points with non-trivial stabilizer has positive measure. In this paper, we investigate the relation between the property of having essential holonomy and structure of the acting group for minimal equicontinuous actions on Cantor sets. We show that if such a group action is locally quasi-analytic and has essential holonomy, then every commutator subgroup in the group lower central series has elements with positive measure set of points with non-trivial holonomy. In particular, this gives a new proof that a minimal equicontinuous Cantor action by a nilpotent group has no essential holonomy. We also show that the property of having essential holonomy is preserved under return equivalence and continuous orbit equivalence of minimal equicontinuous Cantor actions. Finally, we give examples to show that the assumption on the action that it is locally quasi-analytic is necessary. \end{abstract} \maketitle \section{Introduction}\label{sec-intro} We say that $(\fX, \G, \Phi)$ is a \emph{Cantor action} if $\G$ is a countable group, $\fX$ is a Cantor space, and $\Phi \colon \G \times \fX \to \fX$ is an action by homeomorphisms. We assume throughout this paper that, moreover, Cantor actions are minimal and equicontinuous. This implies that a Cantor action $(\fX, \G, \Phi)$ has a unique invariant probability measure $\mu$. 
We recall further basic definitions and constructions for Cantor actions in Section \ref{sec-basics}, as used in the formulation of our results below. The dynamical properties of Cantor actions can have surprisingly subtle aspects, especially for the case where $\G$ is non-abelian, as revealed by the many examples in the literature. One approach to classifying Cantor actions is by their dynamical properties, and to also consider the relation between the algebraic properties of the acting group $\G$ and their dynamics. The main result of this work makes a new contribution to this classification scheme, as it relates the algebraic properties of $\G$ and the dynamics of the action, through the study of the property that an action has \emph{essential holonomy}; see Definition~\ref{def-essentialholonomy} below. First recall some standard notions concerning the fixed-point sets for a Cantor action $(\fX, \G, \Phi)$. We use the notation $g \cdot x = \Phi(g)(x)$, for $g \in \G$ and $x \in \fX$. The set $\fX_g = \{ x \in \fX \mid g \cdot x = x\}$ consists of fixed points for $g$, and the \emph{stabilizer} of a point $x \in \fX$ is the subgroup $\G_x = \{ g \in \G \mid g \cdot x = x\}$. Let \begin{equation}\label{eq-fpset} \fX_\G = \bigcup_{e \ne g \in \G} \ \fX_{g} = \{x \in \fX \mid \G_x \ne \{e\}\} \end{equation} be the set of all points fixed by some non-identity element $g \in \G$. \begin{defn} A minimal Cantor action $(\fX, \G, \Phi)$ with invariant probability measure $\mu$ is: \begin{enumerate} \item \emph{free} if $\fX_{\G}$ is empty. \item \emph{topologically free} if $\fX_\G$ is a meager set in $\fX$. \item \emph{essentially free} if $\mu(\fX_\G) = 0$. \end{enumerate} \end{defn} Recall that a Cantor action $(\fX, \G, \Phi)$ is \emph{effective} if the only element of $\G$ that acts as the identity on $\fX$ is the identity of $\G$. It is elementary to show that if $\G$ is abelian, then every effective Cantor action of $\G$ must be free. The topologically free Cantor actions have an important role in the study of the $C^*$-algebras associated to the actions, as studied for example in \cite{AS1994,BoyleTomiyama1998,Kennedy2020,LeBMB2018,Renault2008}. Kambites, Silva and Steinberg showed in \cite[Theorem~4.3]{KSS2006} that the action of a group generated by finite automata on a rooted tree is topologically free if and only if it is essentially free. Joseph proved in \cite[Corollary 2.4]{Joseph2021} that if $\G$ has countably many subgroups, then a topologically free Cantor action of $\G$ is essentially free. Bergeron and Gaboriau \cite{BergeronGaboriau2004} showed that if $\G$ is a non-amenable group which is a free product of two residually finite groups, then $\G$ admits a Cantor action which is topologically free and not essentially free. Ab\'ert and Elek \cite{AE2007} proved a similar result for finitely generated non-abelian free groups $\Gamma$. Joseph \cite{Joseph2021} proved that any non-amenable surface group admits a continuum of pairwise non-conjugate and non-measurably isomorphic Cantor actions which are topologically free and not essentially free. Examples of effective Cantor actions which are not topologically free include some actions of branch and weakly branch groups on the boundaries of rooted trees \cite{Grigorchuk2011,Nekrashevych2005,SilvaSteinberg2005}, some actions of nilpotent groups \cite{HL2021a}, and actions of topological full groups \cite{CortezMedynets2016,Grigorchuk2011}. 
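To illustrate the distinctions between free, essentially free and topologically free actions, one may consider the action of the infinite dihedral group $\mZ \rtimes \mZ/2\mZ$ on the dyadic odometer, where the generator of $\mZ/2\mZ$ acts by $x \mapsto -x$. This is again a minimal equicontinuous Cantor action. One can check that every element $x \mapsto n - x$ fixes at most one $2$-adic point, so the set $\fX_\G$ is countable, and the action is topologically free and essentially free but not free. The sketch below, which is an illustration chosen by us and not an example from this paper, verifies the corresponding count on the finite quotients $\mZ/2^L\mZ$.
\begin{verbatim}
# Illustration: the infinite dihedral group acting on the dyadic odometer
# by x -> n + eps*x with eps = +1 or -1, viewed on the finite level Z/2^L Z.

L = 12
N = 2 ** L

def fixed_points(n, eps):
    """Points x in Z/2^L Z with n + eps*x == x (mod 2^L)."""
    return [x for x in range(N) if (n + eps * x) % N == x]

# Translations (eps = +1, n != 0) have no fixed points at all.
print(all(len(fixed_points(n, +1)) == 0 for n in range(1, 50)))   # True
# Each flip x -> n - x fixes at most 2 of the 2^L points at level L
# (and a single 2-adic point in the limit), so the fixed-point set has
# vanishing measure: the action is essentially free but not free.
print(max(len(fixed_points(n, -1)) for n in range(50)) / N)       # 2 / 2^L
\end{verbatim}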
The work by Gr\"{o}ger and Lukina \cite{GL2019} introduced a refinement of the notion of essentially free actions, called the \emph{essential holonomy} property. The idea is to consider the dynamics of the action in small neighborhoods of fixed points. In place of the set of points with trivial (resp. non-trivial) stabilizers, one considers the set of points with trivial (resp. non-trivial) holonomy. The analog of an essentially free Cantor action is an action which has \emph{no essential holonomy}. \begin{defn} \label{def-holonomy} Let $(\fX, \G, \Phi)$ be a Cantor action. Say that $x \in \fX$ is a \emph{point of non-trivial holonomy for} $g \in \G$ if $\Phi(g)(x) = x$, and for each open set $U \subset \fX$ with $x \in U$, there exists $y \in U$ such that $\Phi(g)(y) \ne y$. \end{defn} We say that a fixed point $x \in \fX$ is a \emph{point of trivial holonomy} for $g \in \G$, if $x$ is fixed by $\Phi(g)$, and $x$ has an open neighborhood $U_{x,g}$ where every point is fixed by $\Phi(g)$. We say that $x \in \fX$ is a \emph{point of trivial holonomy}, if $x$ is a point of trivial holonomy for all $g \in \G$ with $g \cdot x = x$. We say that $x \in \fX$ is a \emph{point of non-trivial holonomy}, if $x$ is a point of non-trivial holonomy for some $g \in \G$ with $g \cdot x = x$. Let $\fX_g^{hol}$ denote the (possibly empty) subset of points of non-trivial holonomy in $\fX_g$, and set \begin{equation}\label{eq-hfpset} \fX_{\G}^{hol} = \bigcup_{g \in \G} \ \fX_{g}^{hol} \subset \bigcup_{g \in \G} \ \fX_{g} \subset \fX_{\G} \ . \end{equation} The set $\fX_\G^{hol}$ is invariant under the action of $\G$, thus $\fX_\G^{hol}$ has either $\mu$-measure 0 or 1. The following concept was formulated in \cite{GL2019}: \begin{defn}\label{def-essentialholonomy} A Cantor group action $(\fX,\G,\Phi)$ has \emph{essential holonomy} if the set $\fX_\G^{hol}$ of points with non-trivial holonomy has positive measure. Otherwise, it has \emph{no essential holonomy}. \end{defn} Vorobets \cite{Vorobets2012} showed that the standard action of the Grigorchuk group on the boundary of a binary rooted tree has only a countable set of points with non-trivial holonomy, which implies that it has no essential holonomy. Gr\"oger and Lukina proved in \cite{GL2019} that the action of a group generated by finite automata on a rooted tree has no essential holonomy; their proof does not require the Cantor action to be topologically free. They also gave a criterion for when a group action on a rooted tree has no essential holonomy for its boundary Cantor action. The decomposition \eqref{eq-hfpset} of $\fX_{\G}^{hol}$ as a countable union of fixed-point sets implies that an action has essential holonomy if and only if, for at least one $g \in \G$, the set $\fX_g^{hol}$ has positive $\mu$-measure. In particular, this implies that whether or not a Cantor action has essential holonomy is a local property. This remark is the basis for the following invariance result, with details and proof in Section~\ref{sec-invariance}. \begin{prop} \label{prop-inv} \begin{enumerate} \item Suppose that Cantor actions $(\fX_i, \G_i, \Phi_i)$, for $i =1,2$, are continuous orbit equivalent. Then either both have essential holonomy, or both have no essential holonomy. \item Suppose that Cantor actions $(\fX_i, \G_i, \Phi_i)$, for $i =1,2$, are return equivalent. Then either both have essential holonomy, or both have no essential holonomy. 
\end{enumerate} \end{prop} For a Cantor action $(\fX, \G, \Phi)$, the assumption that $\mu$ is a probability measure, and that the action is minimal, implies that $\mu$ is a continuous measure; that is, the measure of any point $x \in \fX$ is zero. Thus, if a Cantor action has essential holonomy, then the intersection of $\fX_g^{hol}$ with the support of $\mu$ must be an uncountable set. This remark is the basis for the result by Joseph in \cite[Corollary 2.4]{Joseph2021} that if $\G$ has the countable subgroup property, then a topologically free Cantor action by $\G$ is essentially free, and hence has no essential holonomy. Our main result below relates the dynamical properties of a Cantor action of a finitely-generated group $\G$, with the lower central series $\G = \gamma_{1}(\G) \supset \gamma_{2}(\G) \supset \cdots \supset \gamma_{n}(\G) \supset \cdots$. \begin{thm}\label{thm-main0} Let $(\fX, \G, \Phi)$ be a locally quasi-analytic Cantor action. If the action has essential holonomy, then for every $n \geq 1$ there exists $\phi_n \in \gamma_{n}(\G)$ such that the action of $\phi_n$ has a positive measure set of points with non-trivial holonomy. \end{thm} The proof of Theorem~\ref{thm-main0} is given in Section~\ref{sec-proof}. We observe the following corollary for $n=2$: \begin{cor}\label{cor-commutator} Let $(\fX, \G, \Phi)$ be a locally quasi-analytic Cantor action. Let $[\G,\G] \subset \G$ be the commutator subgroup of $\G$. If the action has essential holonomy, then there exists $\phi_2 \in [\G,\G]$ such that the action of $\phi_2$ has a positive measure set of points with non-trivial holonomy. \end{cor} It is tempting to apply Corollary~\ref{cor-commutator} inductively, so that the conclusion of Theorem~\ref{thm-main0} applies to the derived series of $\G$. This argument does not go through, though, because while the action of the commutator subgroup $[\G,\G]$ on $\fX$ is again equicontinuous and locally quasi-analytic, it need not be minimal, and the minimality assumption on the action is critical for the proof of Theorem~\ref{thm-main0}. The family of examples constructed in the proof of Theorem \ref{thm-examples} show that the assumption that the Cantor action is locally quasi-analytic is essential for the conclusions of Theorem \ref{thm-main0} and Corollary \ref{cor-commutator}. The actions of groups $\G$ and the commutator subgroups $[\G,\G]$ in Theorem \ref{thm-examples} are minimal and not locally quasi-analytic, and the action of the group $\G$ has essential holonomy, while the action of $[\G,\G]$ has no essential holonomy. The locally quasi-analytic property is a localized form of the topologically free property, as discussed in Section~\ref{sec-basics}. If $\G$ is a Noetherian group, then every Cantor action by $\G$ must be locally quasi-analytic by \cite[Theorem~1.6]{HL2018b}. For a nilpotent group $\G$, the lower central series terminates, and $\G$ is Noetherian, so as a consequence we obtain a novel proof of a result of Joseph which he proved for topologically free Cantor actions in \cite{Joseph2021}: \begin{cor}\label{cor-nilp} Let $(\fX, \G, \Phi)$ be a Cantor action. If $\G$ is a finitely-generated nilpotent group, then $(\fX, \G, \Phi)$ has no essential holonomy. \end{cor} Up until recently, it was an open problem to show that if $\G$ is a finitely-generated amenable group, then every minimal equicontinuous Cantor action of $\G$ has no essential holonomy. 
In the paper \cite{Joseph2023}, Joseph constructs an action of the wreath product of two finitely generated amenable groups which is topologically free and not essentially free. The actions in Theorem \ref{thm-examples} of our paper are actions of infinitely generated amenable groups which are not locally quasi-analytic (and therefore not topologically free) and which have essential holonomy. It would be interesting to obtain a criterion for when an amenable group admits an action with essential holonomy. Theorem~\ref{thm-main0} suggests that the property of having essential holonomy is an interesting invariant of Cantor actions, intrinsically related to the structure of the acting group, to be further explored.
{\bf Acknowledgements:} The paper started in response to the question by Eduardo Scarparo whether an action of an amenable group must have no essential holonomy. The authors also thank Eduardo for bringing to their attention the work by Matthieu Joseph \cite{Joseph2021}.
\section{Cantor actions}\label{sec-basics} We recall some basic properties of Cantor actions. More details can be found in \cite{Auslander1988,CortezPetite2008,CortezMedynets2016,HL2018b,HL2019a,LavrenyukNekrashevych2002}.
\subsection{Basic notions}\label{subsec-basic} Let $(\fX,\G,\Phi)$ denote a topological action $\Phi \colon \G \times \fX \to \fX$. We write $g\cdot x$ for $\Phi(g)(x)$ when appropriate. The orbit of $x \in \fX$ is the subset $\cO(x) = \{g \cdot x \mid g \in \G\}$. The action is \emph{minimal} if for all $x \in \fX$, its orbit $\cO(x)$ is dense in $\fX$. The action is said to be \emph{effective}, or \emph{faithful}, if the homomorphism $\Phi \colon \G \to \Homeo(\fX)$ is injective. An action $(\fX,\G,\Phi)$ is \emph{equicontinuous} with respect to a metric $\dX$ on $\fX$, if for all $\e >0$ there exists $\delta > 0$, such that for all $x , y \in \fX$ and $g \in \G$ we have that $\ds \dX(x,y) < \delta$ implies $\dX(g \cdot x, g \cdot y) < \e$. The property of being equicontinuous is independent of the choice of the metric on $\fX$ which is compatible with the topology of $\fX$. Recall that we assume $(\fX,\G,\Phi)$ is a minimal equicontinuous action. We say that $U \subset \fX$ is \emph{adapted} to the action if $U$ is a {non-empty clopen} subset, and for any $g \in \G$, if $\Phi(g)(U) \cap U \ne \emptyset$ then $\Phi(g)(U) = U$. Recall a basic property of equicontinuous Cantor actions (see \cite[Section~3]{HL2018b}). \begin{prop}\label{prop-adpatedchain} Let $(\fX,\G,\Phi)$ be a Cantor action, and let $\dX$ be an invariant metric on $\fX$ compatible with the topology on $\fX$. Given $x \in \fX$ and $\e > 0$, there exists an adapted clopen set $U \subset \fX$ with $x \in U$ and $\diam(U) < \e$. \end{prop} \begin{cor}\label{cor-subbasis} The adapted clopen sets form a subbasis for the topology of $\fX$. \end{cor} For an adapted set $U$, the set of ``return times'' to $U$, \begin{equation}\label{eq-adapted} \G_U = \left\{g \in \G \mid g \cdot U \cap U \ne \emptyset \right\} \end{equation} is a subgroup of $\G$, called the \emph{stabilizer} of $U$. Then for $g, g' \in \G$ with $g \cdot U \cap g' \cdot U \ne \emptyset$ we have $g^{-1} \, g' \cdot U = U$, hence $g^{-1} \, g' \in \G_U$. Thus, the translates $\{ g \cdot U \mid g \in \G\}$ form a finite clopen partition of $\fX$, and are in 1-1 correspondence with the quotient space $X_U = \G/\G_U$. Then $\G$ acts by permutations of the finite set $X_U$ and so the stabilizer group $\G_U \subset \G$ has finite index.
Note that this implies that if $V \subset U$ is a proper inclusion of adapted sets, then the inclusion $\G_V \subset \G_U$ is also proper. Let $U$ be an adapted set for the action $(\fX,\G,\Phi)$; then the action of $\G_U$ restricts to an action on $U$, so we have a homomorphism $\Phi_U \colon \G_U \to \Homeo(U)$. Let $\cH_{U}$ denote the image of this action. Note that the map $\Phi_U \colon \G_U \to \cH_{U}$ is injective if the action is topologically free.
\subsection{Group chains}\label{sec-gchains} Given a basepoint $x$, by iterating the process in Proposition \ref{prop-adpatedchain} one can always construct the following: \begin{defn}\label{def-adaptednbhds} Let $(\fX,\G,\Phi)$ be a Cantor action. A properly descending chain of clopen sets $\cU = \{U_{\ell} \subset \fX \mid \ell \geq 0\}$ is said to be an \emph{adapted neighborhood basis} at $x \in \fX$ for the action $\Phi$ if each $U_{\ell}$ is adapted to the action $\Phi$, $U_{\ell +1} \subset U_{\ell}$ for all $ \ell \geq 1$, and $\cap \ U_{\ell} = \{x\}$. \end{defn} Let $\G_{\ell} = \G_{U_{\ell}}$ denote the stabilizer group of $U_{\ell}$ given by \eqref{eq-adapted}. Then we obtain a descending chain of finite index subgroups $ \ds \cG^x_{\cU} : \G = \G_0 \supset \G_1 \supset \G_2 \supset \cdots $. Note that each $\G_{\ell}$ has finite index in $\G$, and is not assumed to be a normal subgroup. Also note that while the intersection of the chain $\cU$ is a single point $\{x\}$, the intersection of the stabilizer groups in $\cG^x_{\cU}$ need not be the trivial group. Next, set $X_{\ell} = \G/\G_{\ell}$ and note that $\G$ acts transitively on the left on $X_{\ell}$. The inclusion $\G_{\ell +1} \subset \G_{\ell}$ induces a natural $\G$-invariant quotient map $p_{\ell +1} \colon X_{\ell +1} \to X_{\ell}$. Introduce the inverse limit \begin{equation} \label{eq-invlimspace} X ~ = ~ \varprojlim \ \{p_{\ell +1} \colon X_{\ell +1} \to X_{\ell} \mid \ell > 0\} = \{(x_\ell) = (x_0,x_1,\ldots) \mid p_{\ell+1}(x_{\ell+1}) = x_\ell\} \end{equation} which is a Cantor space with the Tychonoff topology. Thus elements of $X$ are infinite sequences with entries in $X_\ell$, $\ell \geq 0$. The actions of $\G$ on the factors $X_{\ell}$ induce a minimal equicontinuous action, denoted by $\Phi_x \colon \G \times X \to X$, which reads \begin{equation}\label{eq-actioninvlim}(g, (x_\ell)) \mapsto g \cdot (x_\ell) = (g \cdot x_\ell) = (g \cdot x_0, g \cdot x_1,\ldots).\end{equation} For each $\ell \geq 0$, we have the ``partition coding map'' $\Theta_{\ell} \colon \fX \to X_{\ell}$ which is $\G$-equivariant. The maps $\{\Theta_{\ell}\}$ are compatible with the map on quotients in \eqref{eq-invlimspace}, and so define a limit map $\Theta_x \colon \fX \to X$. The fact that the diameters of the clopen sets $\{U_{\ell}\}$ tend to zero implies that $\Theta_x$ is a homeomorphism. This is proved in detail in \cite[Appendix~A]{DHL2016a}. Moreover, $\Theta_x(x) = e_{\infty} = (e\G_\ell) \in X$, the basepoint of the inverse limit \eqref{eq-invlimspace}, where $e\G_\ell = \G_\ell$ is the coset of the identity $e \in \G$. Let $X$ have an ultrametric such that $\G$ acts on $X$ by isometries, for instance, let \begin{equation}\label{dxmetric}d_X\left((x_\ell),(y_\ell) \right) = 2^{-m}, \quad \textrm{ where } m = \max \{\ell \mid x_\ell = y_\ell, \, \ell \geq 0\}. \end{equation} Then let $\dX$ be the ultrametric on $\fX$ induced from $d_X$ by the homeomorphism $\Theta_x$.
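For the simplest chain $\G = \mZ$ and $\G_\ell = 2^\ell \, \mZ$, the action $(X, \G, \Phi_x)$ just constructed is the classical dyadic odometer, and the objects above can be written out explicitly. The following sketch, which is an illustration only and uses our own variable names, spells out the inverse limit \eqref{eq-invlimspace}, the action \eqref{eq-actioninvlim}, the metric \eqref{dxmetric}, and the stabilizers $\G_{U_\ell}$ of the adapted sets in this special case, truncated at a finite level.
\begin{verbatim}
# Illustrative sketch: the odometer model for Gamma = Z with the group chain
# Gamma_l = 2^l Z, truncated at level L.  Here X_l = Z / 2^l Z, the bonding map
# is reduction mod 2^l, and Z acts by translation in every factor.

L = 10

def point(m):
    """The compatible sequence (m mod 2^0, m mod 2^1, ..., m mod 2^L)."""
    return tuple(m % 2 ** l for l in range(L + 1))

def act(n, x):
    """The action (g, (x_l)) -> (g + x_l): translation by n in each factor."""
    return tuple((n + x_l) % 2 ** l for l, x_l in enumerate(x))

def d_X(x, y):
    """d_X((x_l), (y_l)) = 2^(-m), where m = max { l : x_l = y_l }."""
    m = max(l for l in range(len(x)) if x[l] == y[l])
    return 2.0 ** (-m)

x, y = point(12), point(44)          # 44 - 12 = 32 = 2^5: they agree up to level 5
print(d_X(x, y) == 2.0 ** (-5))      # True
print(act(7, x) == point(19))        # True: the action is addition by 7
print(d_X(act(7, x), act(7, y)) == d_X(x, y))   # True: Z acts by isometries

# The adapted set U_l of points agreeing with the basepoint up to level l
# has stabilizer Gamma_{U_l} = 2^l Z, of index 2^l in Z.
l = 4
U_l = {point(m) for m in range(0, 2 ** L, 2 ** l)}
stab = [n for n in range(2 ** L) if {act(n, u) for u in U_l} & U_l]
print(stab == list(range(0, 2 ** L, 2 ** l)))    # True
\end{verbatim}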
The minimal equicontinuous action $(X, \G, \Phi_x)$ is called the \emph{odometer model} centered at $x$ for $(\fX,\G,\Phi)$. The group chain $\cG^x_\cU$ depends on $x$ and $\cU$, and one can introduce an equivalence relation which, for a given group $\G$, identifies the class of group chains with topologically conjugate associated odometer models. We refer the interested reader to \cite{DHL2016a}.
\subsection{Unique ergodic invariant measure} \label{subsec-measure} Given a Cantor action $(\fX,\G,\Phi)$, choose an adapted neighborhood basis $\cU$ and consider the corresponding group chain $\cG_\cU^x$ and the odometer model \eqref{eq-invlimspace}. The group $\G$ acts transitively on the coset space $X_{\ell} = \G/\G_{\ell}$ and we define a $\G$-invariant probability measure $\mu_{\ell}$ on $X_{\ell}$ by giving equal weight to each point (coset) in $X_{\ell}$. Thus one has \begin{align}\label{eq-mubern}\mu_{\ell} (h \G_{\ell}) = \frac{1}{|\G: \G_{\ell}|}, \textrm{ for all }h\G_{\ell} \in X_{\ell} \textrm{ and all }\ell \geq 0,\end{align} where $|\G: \G_{\ell}|$ denotes the index of $\G_{\ell}$ in $\G$. The unique $\G$-invariant measure on the inverse limit $X$ is defined as the limit of the pull-backs of these measures under the projection maps $X \to X_{\ell}$. Then the invariant measure $\mu$ on $\fX$ is the pull-back via the homeomorphism $\Theta_{x} \colon \fX \to X$. Alternatively, consider the closure $E = \overline{\Phi(\G)} \subset \Homeo(\fX)$ in the uniform topology. It is a compact profinite group, called the \emph{Ellis} or \emph{enveloping group} \cite{Auslander1988,Ellis1969}. (If the action $(\fX,\G,\Phi)$ is not assumed to be equicontinuous, then $\overline{\Phi(\G)}$ is only a semi-group.) The group $E$ acts on $\fX$ with isotropy group $E_x = \{g \in E \mid g\cdot x = x\}$ for $x \in \fX$. Then $E_x$ is a closed subgroup of $E$, and we have $\fX = E/E_x$. The profinite group $E$ has a unique Haar measure $\widehat{\mu}$, which is invariant with respect to the action of $E$ on itself. The measure $\widehat{\mu}$ on $E$ pushes down to the measure $\mu$ on $\fX$.
\subsection{Lebesgue density theorem} Let $(\fX,\G,\Phi,\mu)$ be a Cantor action with invariant probability measure $\mu$ and ultrametric $d_\fX$ induced from \eqref{dxmetric}. Denote by $B(x,\epsilon) = \{ y \in \fX \mid d_\fX(x,y) < \epsilon\}$ the open ball with center $x$ of radius $\e >0$. The proof of the \emph{Lebesgue Density Theorem} in the formulation below can be found, for instance, in \cite[Proposition 2.10]{Miller2008}. \begin{theorem} \label{thm-polishlebesgue} Let $\fX$ be a Polish space, and suppose $\fX$ has an ultrametric $d_\fX$ compatible with its topology. Let $\mu$ be a probability measure on $\fX$, and let $A$ be a Borel set of positive measure. Then the \emph{Lebesgue density} of $x$ in $A$, given by \begin{align}\label{eq-lebesgue}\lim_{\epsilon \to 0} \frac{\mu(A \cap B(x,\epsilon))}{\mu(B(x,\epsilon))}\end{align} exists and is equal to $1$ for $\mu$-almost every $x \in A$. \end{theorem} We give an important consequence of the Lebesgue Density Theorem. \begin{lemma}\label{lem-density} Let $(\fX, \G, \Phi,\mu)$ be a Cantor action, with invariant probability measure $\mu$. Assume there exists an element $g \in \G$ for which $\fX_g^{hol}$ has positive $\mu$-measure. Then there exists $x \in \fX_g^{hol}$ such that, for all $0 < \e <1$, there exists an adapted set $U_{\e}$ with $x \in U_{\e}$ and $\displaystyle \mu(U_{\e} \cap \fX_g^{hol}) \geq (1-\e) \cdot \mu(U_{\e})$.
\end{lemma} \proof Since $\fX_g^{hol}$ has positive $\mu$-measure, by the Lebesgue Density Theorem \ref{thm-polishlebesgue} there exists a point $x \in \fX_g^{hol}$ of full Lebesgue density. For this point, choose an adapted neighborhood basis $\cU = \{U_{\ell} \subset \fX \mid \ell \geq 0\}$ at $x$. By the convergence of the limit in Theorem \ref{thm-polishlebesgue} there exists $\ell_{\e}$ so that $\displaystyle \mu(U_{\ell} \cap \fX_g^{hol}) \geq (1-\e) \cdot \mu(U_{\ell})$ for $\ell \geq \ell_{\e}$. Then set $U_{\e} = U_{\ell_{\e}}$. \endproof
\subsection{Locally quasi-analytic}\label{subsec-lqa} The quasi-analytic property for Cantor actions was introduced by {\'A}lvarez L{\'o}pez and Candel in \cite[Definition~9.4]{ALC2009} as a generalization of the notion of a \emph{quasi-analytic action} studied by Haefliger for actions of pseudogroups of real-analytic diffeomorphisms in \cite{Haefliger1985}. The authors introduced a local form of the quasi-analytic property in \cite{HL2018a,HL2018b}: \begin{defn} \cite[Definition~2.1]{HL2018b} \label{def-LQA} A topological action $(\fX,\G,\Phi)$ on a metric Cantor space $\fX$ is \emph{locally quasi-analytic} if there exists $\e > 0$ such that for any non-empty open set $U \subset \fX$ with $\diam (U) < \e$, and for any non-empty open subset $V \subset U $, and elements $g_1 , g_2 \in \G$ \begin{equation}\label{eq-lqa} \text{if the restrictions} ~~ \Phi(g_1)|V = \Phi(g_2)|V, ~ \text{ then}~~ \Phi(g_1)|U = \Phi(g_2)|U. \end{equation} The action is said to be \emph{quasi-analytic} if \eqref{eq-lqa} holds for $U=\fX$. \end{defn} In other words, $(\fX,\G,\Phi)$ is locally quasi-analytic if for every $g \in \G$ and every open set $U$ of diameter less than $\e$, the restriction $\Phi(g)|U$ is determined, among the maps $\{\Phi(h)|U \mid h \in \G\}$, by its restriction to any non-empty open subset $V \subset U$, with $\e > 0$ uniform over $\fX$. We note that an effective action $(\fX,\G,\Phi)$ is topologically free if and only if it is quasi-analytic \cite[Proposition 2.2]{HL2018b}. Recall that a group $\G$ is \emph{Noetherian} \cite{Baer1956} if every increasing chain of subgroups has a maximal element. Equivalently, a group is Noetherian if every subgroup of $\G$ is finitely generated. \begin{thm}\label{thm-noetherian} \cite[Theorem~1.6]{HL2018b} Let $\G$ be a Noetherian group. Then a minimal equicontinuous Cantor action $(\fX,\G,\Phi)$ is locally quasi-analytic. \end{thm} A finitely-generated nilpotent group is Noetherian, so as a corollary we obtain that all Cantor actions by finitely-generated nilpotent groups are locally quasi-analytic. Examples of locally quasi-analytic actions which are not quasi-analytic are easy to construct; see for instance \cite[Example A.4]{HL2018b}.
\section{Invariance}\label{sec-invariance} We recall notions of equivalence for Cantor actions, considered as topological dynamical systems. For each notion considered, we show that the property of having essential holonomy, viewed as a property of the associated measurable dynamical system, is preserved for equivalent actions. The work \cite{GPS2019} gives an excellent comparison of the various notions of equivalence for the case of Cantor actions by $\G = \mZ^n$. First, we recall the most basic equivalence of actions.
\begin{defn} \label{def-isomorphism} Cantor actions $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ are said to be \emph{isomorphic}, or \emph{conjugate}, if there is a homeomorphism $h \colon \fX_1 \to \fX_2$ and a group isomorphism $\Theta \colon \G_1 \to \G_2$ so that \begin{equation}\label{eq-isomorphism} \Phi_1(\gamma) = h^{-1} \circ \Phi_2(\Theta(\gamma)) \circ h \in \Homeo(\fX_2) \ {\rm for \ all} \ \gamma \in \G_1 \ . \end{equation} \end{defn} \begin{prop}\label{prop-isoessholo} Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be isomorphic Cantor actions. Then $(\fX_1, \G_1, \Phi_1)$ has essential holonomy if and only if $(\fX_2, \G_2, \Phi_2)$ has essential holonomy. \end{prop} \proof Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be isomorphic Cantor actions, with map $h \colon \fX_1 \to \fX_2$ and isomorphism $\Theta \colon \G_1 \to \G_2$. Let $\mu_1$ be the unique invariant probability measure for $(\fX_1, \G_1, \Phi_1)$, and $\mu_2$ be the unique invariant probability measure for $(\fX_2, \G_2, \Phi_2)$. Then $h^*(\mu_2)$ is an invariant probability measure for $(\fX_1, \G_1, \Phi_1)$, hence $\mu_1 = h^*(\mu_2)$. Moreover, for $g \in \Gamma_1$ the action $\Phi_1(g)$ has essential holonomy if and only if $\Phi_2(\Theta(g))$ has essential holonomy, so $h(\fX_{\G_1}^{hol}) = \fX_{\G_2}^{hol}$, and the claim follows. \endproof The notion of \emph{return equivalence} is the analog for Cantor actions of Morita equivalence for $C^*$-algebras. This equivalence is weaker than the notion of isomorphism, and is natural when considering the Cantor systems defined by the holonomy actions for matchbox manifolds, as in \cite{HL2018a,HL2018b}. \begin{defn}\label{def-re} Cantor actions $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ are said to be \emph{return equivalent} if there exist non-empty clopen subsets $U_i \subset \fX_i$, for $i=1,2$, such that $U_i$ is adapted to the action $\Phi_i$, and there is a homeomorphism $h \colon U_1 \to U_2$ whose induced homomorphism $h_* \colon \Homeo(U_1) \to \Homeo(U_2)$ restricts to an isomorphism $\Theta \colon \cH_{U_1} \to \cH_{U_2}$. \end{defn} Note that when $U_i = \fX_i$ and both actions are effective, then this definition reduces to the usual notion of isomorphism of the actions, with induced group isomorphism $\Theta \colon \G_1 \cong \cH_{\fX_1} \to \cH_{\fX_2} \cong \G_2$. \begin{prop}\label{prop-REessholo} Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be return equivalent Cantor actions. Then $(\fX_1, \G_1, \Phi_1)$ has essential holonomy if and only if $(\fX_2, \G_2, \Phi_2)$ has essential holonomy. \end{prop} \proof Assume that $(\fX_1, \G_1, \Phi_1)$ has essential holonomy, and let $U_1 \subset \fX_1$ and $U_2 \subset \fX_2$ be clopen sets such that the restricted actions are isomorphic by $\Theta \colon \cH_{U_1} \to \cH_{U_2}$. Let $g \in \G_1$ be such that $\fX_g^{hol}$ has positive $\mu_1$-measure, and thus there exists $x \in \fX_1$ such that $x$ is fixed by $\Phi_1(g)$ and $ \fX_{1,g}^{hol}$ has Lebesgue density 1 at $x$. The action $\Phi_1$ is minimal on $\fX_1$ so there exists $k \in \G_1$ such that $k \cdot x \in U_1$. Then $g' = kgk^{-1} \in \G_1$ has fixed point $x' = kx$ which is a point of Lebesgue density 1 in the set $\fX_{1,g'}^{hol}$. As $x' \in U_1 \cap \Phi_1(g')(U_1)$ and $U_1$ is adapted, we have $U_1 = \Phi_1(g')(U_1)$ and so $g' \in \G_{1,U}$. 
The action of $\G_{1,U_1}$ on $U_1$ is minimal, so the renormalized measure $\mu_1' = \mu(U_1)^{-1} \mu_1 \mid U_1$ is the unique invariant probability measure for the restricted action of $\G_{1,U_1}$ on $U_1$. The set $\fX_{1,g'}^{hol} \cap U_1$ has Lebesgue density 1 at $x'$, hence its image $h(x') \in h(\fX_{1,g'}^{hol} \cap U_1) \subset U_2$ is also a point of Lebesgue density 1 for the action of $\Theta(g')$ on $U_2$, with corresponding renormalized measure $\mu_2'$ on $U_2$. Thus, $(\fX_2, \G_2, \Phi_2)$ has essential holonomy. The converse follows similarly. \endproof The notion of \emph{continuous orbit equivalence} for Cantor actions was introduced in \cite{BoyleTomiyama1998}. It is the analogue for topological dynamics of measurable orbit equivalence for measurable actions, as first introduced by Dye \cite{Dye1959}. Continuous orbit equivalence plays a fundamental role in the classification of group actions on Cantor sets (see for example \cite{Renault2008}). \begin{defn}\label{def-coe} Cantor actions $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ are said to be \emph{continuously orbit equivalent} if there exists a homeomorphism $h \colon \fX_1 \to \fX_2$ and continuous functions \begin{enumerate} \item $\alpha \colon G_1 \times \fX_1 \to G_2$, $h(\Phi_1(g_1)(x_1)) = \Phi_2(\alpha(g_1 , x_1))(h(x_1))$ for all $g_1 \in G_1$ and $x_1 \in \fX_1$; \item $\beta \colon G_2 \times \fX_2 \to G_1$, $h^{-1}(\Phi_2(g_2)(x_2)) = \Phi_1(\beta(g_2, x_2))(h^{-1}(x_2))$ for all $g_2 \in G_2$ and $x_2 \in \fX_2$. \end{enumerate} \end{defn} The homeomorphism $h$ is called a \emph{continuous orbit equivalence} between the two actions. Note that the functions $\alpha$ and $\beta$ are not assumed to satisfy the cocycle property. We have the following result of Cortez and Medynets: \begin{thm} \cite{CortezMedynets2016} \label{thm-CM} Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be topologically free Cantor actions. If the actions $\Phi_1$ and $\Phi_2$ are continuously orbit equivalent, then they are return equivalent. \end{thm} This result was generalized by the authors: \begin{thm}\cite{HL2018b} \label{thm-main1} Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be locally quasi-analytic Cantor actions. If the actions $\Phi_1$ and $\Phi_2$ are continuously orbit equivalent, then they are return equivalent. \end{thm} For the general case (not necessarily locally quasi-analytic) of Cantor actions, we have the following: \begin{prop}\label{prop-COEessholo} Let $(\fX_1, \G_1, \Phi_1)$ and $(\fX_2, \G_2, \Phi_2)$ be continuously orbit equivalent Cantor actions. Then $(\fX_1, \G_1, \Phi_1)$ has essential holonomy if and only if $(\fX_2, \G_2, \Phi_2)$ has essential holonomy. \end{prop} \proof Let $h \colon \fX_1 \to \fX_2$ be the homeomorphism given by Definition~\ref{def-coe}. Let $\mu_2$ be the unique invariant probability measure for $(\fX_2, \G_2, \Phi_2)$, then the pull-back $\widetilde{\mu}_2 = h^*(\mu_2)$ is a probability measure on $\fX_1$. Condition (1) implies $\widetilde{\mu}_2$ is invariant under the action $(\fX_1, \G_1, \Phi_1)$, hence $\mu_1 = \widetilde{\mu}_2$. Assume that $(\fX_1, \G_1, \Phi_1)$ has essential holonomy, and let $g \in \G_1$ be such that $\fX_g^{hol}$ has positive $\mu_1$-measure, so there exists $x \in \fX_1$ such that $\Phi_1(g)(x) = x$ and $ \fX_g^{hol}$ has $\mu_1$-Lebesgue density 1 at $x$. 
Let $\alpha \colon G_1 \times \fX_1 \to G_2$ be the map given by condition (1), so that $h(\Phi_1(g_1)(x_1)) = \Phi_2(\alpha(g_1 , x_1))(h(x_1))$ for all $g_1 \in G_1$ and $x_1 \in \fX_1$. Then for $g_1 =g$ there exists a clopen set $U_1 \subset \fX_1$ so that $x \in U_1$ and the map $y \mapsto \alpha(g, y) \in \G_2$ is continuous for $y \in U_1$. Since $\G_2$ is a discrete space, we can assume that $U_1$ is sufficiently small so that $g_2 = \alpha(g, y) \in \G_2$ is constant for $y \in U_1$. We then have $h(\Phi_1(g)(y)) = \Phi_2(g_2)(h(y))$ for all $y \in U_1$. Set $U_2 = h(U_1)$; then this states that the restriction $h|_{U_1} \colon U_1 \to U_2$ conjugates the action of $\Phi_1(g)$ on $U_1$ with the action of $\Phi_2(g_2)$ on $U_2$. Thus, $\Phi_2(g_2)$ has non-trivial holonomy at the fixed point $z=h(x)$, and $z$ is a point of Lebesgue density 1 for the set of points with non-trivial holonomy of $\Phi_2(g_2)$. In particular, the action $(\fX_2, \G_2, \Phi_2)$ has essential holonomy. The converse implication is proved similarly, using condition (2) of Definition~\ref{def-coe}. \endproof
\section{Dynamics and the lower central series}\label{sec-proof} Theorem \ref{thm-main0} relates the essential holonomy property of a Cantor action $(\fX, \G, \Phi)$ to the lower central series of $\G$. In this section, we show that if a Cantor action of $\G$ is locally quasi-analytic and has essential holonomy, then every term of the lower central series of $\G$ contains elements whose sets of points with non-trivial holonomy have positive measure. First, recall the construction of the lower central series for $\G$. Set $\gamma_1(\G) = \G$, and for $i \geq 1$, let $\gamma_{i+1}(\G) = [\G,\gamma_{i}(\G)]$ be the commutator subgroup, which is a normal subgroup of $\G$. Then for $a \in \gamma_{i}(\G)$ and $b \in \gamma_{j}(\G)$ the commutator $[a,b] \in \gamma_{i+j}(\G)$. Moreover, these subgroups form a descending chain \begin{equation}\label{eq-filtration} \G = \gamma_{1}(\G) \supset \gamma_{2}(\G) \supset \cdots \supset \gamma_{n}(\G) \supset \cdots \ . \end{equation} Note that each quotient group $\gamma_{i}(\G)/\gamma_{i+1}(\G)$ is abelian. The group $\G$ is nilpotent of length $n_{\G}$ if there exists an $n_\G > 0$ such that $\gamma_{n}(\G)$ is the trivial group for $n > n_{\G}$, and $\gamma_{n_{\G}}(\G)$ is non-trivial. It follows that every element in $\gamma_{n_{\G}}(\G)$ commutes with every element of $\G$; that is, $\gamma_{n_{\G}}(\G)$ is contained in the center of $\G$. Denote by $\Phi_n \colon \gamma_{n}(\G) \times \fX \to \fX$ the restriction of the action $\Phi$ to the subgroup $\gamma_{n}(\G)$. \begin{defn}\label{def-turbdepth} A Cantor action $(\fX, \G, \Phi)$, with invariant probability measure $\mu$, is said to have \emph{essential holonomy at depth $n_t$} if the restricted action $(\fX, \gamma_{n_t}(\G), \Phi_{n_t})$ has essential holonomy, but $(\fX, \gamma_{n}(\G), \Phi_n)$ has no essential holonomy for $n > n_t$. The action has \emph{essential holonomy at infinite depth} if for all $n \geq 1$, the restricted action $(\fX, \gamma_n(\G), \Phi_n)$ has essential holonomy. \end{defn} Here is our main technical result, which combined with Theorem \ref{thm-noetherian} implies Theorem~\ref{thm-main0}. \begin{prop}\label{prop-commutators} Let $(\fX, \G, \Phi)$ be a Cantor action which is locally quasi-analytic. If the action has essential holonomy, then it has essential holonomy at infinite depth. \end{prop} \proof Suppose that $(\fX, \G, \Phi)$ has essential holonomy at depth $n_t$.
We show that this leads to a contradiction by localizing the action to a sufficiently small adapted set. Recall that $\mu$ denotes the unique invariant probability measure for the action. The set $\fX_{\gamma_{(n_t)}(\G)}^{hol}$ has positive $\mu$-measure, while the set $\fX_{\gamma_{(n_t +1)}(\G)}^{hol}$ has $\mu$-measure zero. Thus, there exists $g \in \gamma_{n_t}(\G)$ such that $\mu(\fX_{g}^{hol}) > 0$. Moreover, for all $\tau \in \gamma_{(n_t+1)}(\G)$ we have $\mu(\fX_{\tau}^{hol}) = 0$. Let $x \in \fX$ be such that $g \cdot x =x$, and $\fX_g^{hol}$ has Lebesgue density $1$ at $x$. Let $U \subset \fX$ be an adapted clopen set with $x \in U$, and so $g \in \G_U$, and sufficiently small diameter such that the restricted action $\Phi_U \colon \G_U \times U \to U$ is quasi-analytic, and we have $\mu(\fX_g^{hol} \cap U) \geq 3/4 \cdot \mu(U)$, as in Lemma \ref{lem-density}. By the assumption that $x \in \fX_g^{hol}$, there exist $y \in U$ so that $z = g\cdot y \ne y$. Note that $z \ne x$ as $g \cdot x = x$. Let $V \subset U$ with $y \in V$ be a sufficiently small clopen set such that $(g \cdot V) \cap V = \emptyset$. Then choose $k_y \in \G_U$ such that $k_y \cdot x \in V$. Then replace $y$ with $k_y \cdot x$ and we have $g \cdot y \ne y$. Let $\tau = [g, k_y]$ be the commutator. Then $g \in \gamma_{n_t}(\G)$ implies that $\tau \in \gamma_{(n_t+1)}(\G) \cap \G_U$. By definition we have $g \cdot k_y = \tau \cdot k_y \cdot g$. Then for $w \in \fX_g^{hol} \cap U$ calculate \begin{equation}\label{eq-gamma} g \cdot (k_y \cdot w) = \tau \cdot k_y \cdot g \cdot w = \tau \cdot (k_y \cdot w) \ . \end{equation} We use the identity \eqref{eq-gamma} to prove the key observation: \begin{lemma}\label{lem-disjoint} The sets $k_y\cdot (\fX_g^{hol} \cap U)$ and $(\fX_g^{hol} \cap U)$ are $\mu$-a.e. disjoint. \end{lemma} \proof We must show that $\mu\left( k_y\cdot (\fX_g^{hol} \cap U) \cap (\fX_g^{hol} \cap U)\right) = 0$. Suppose that $w \in \fX_g^{hol} \cap U$ satisfies $k_y\cdot w \in \fX_g^{hol} \cap U$. Then $k_y \cdot w$ is a fixed-point for the action of $g$, and so by \eqref{eq-gamma} we have $k_y \cdot w $ is a fixed point for the action of $\tau = [g, k_y]$. As $g \in \gamma_{n_t}(\G)$, we have that $\tau \in \gamma_{(n_t+1)}(\G)$. Then by the definition of the index $n_t$, for $\mu$-almost all $w \in \fX_g^{hol} \cap U$, we have that $\Phi_U(\tau)$ has trivial germinal holonomy at $k_y \cdot w$, as the action of $\Phi(k_y)$ is a measure preserving homeomorphism. Thus, there exists a clopen set $W_{w} \subset U$ with $k_y \cdot w \in W_{w}$ such that $\Phi_U(\tau)$ acts as the identity on $W_{w}$. As we chose $U$ so that the action $\Phi_U$ is quasi-analytic on $U$, this implies that $\Phi_U(\tau)$ acts as the identity on $U$. However, we also have $g \cdot y \ne y$, so using the identity \eqref{eq-gamma} again, we have $\tau \cdot y = \tau \cdot (k_y \cdot x) \ne k_y \cdot x$, and thus $\Phi_U(\tau)$ does not act as the identity on $U$, which is a contradiction. It follows that for $\mu$-almost all $w \in \fX_g^{hol} \cap U$ we have $k_y \cdot w \not\in \fX_g^{hol} \cap U$. Hence $\mu\left( k_y\cdot (\fX_g^{hol} \cap U) \cap (\fX_g^{hol} \cap U)\right) = 0$. \endproof We now complete the proof of Proposition~\ref{prop-commutators}. As $\mu$ is invariant under the action of $\Phi_U$ we have $\mu\left(k_y \cdot (\fX_g^{hol} \cap U) \right) = \mu\left(\fX_g^{hol} \cap U\right)\geq 3/4 \cdot \mu(U)$. 
But then by Lemma~\ref{lem-disjoint} we obtain the contradiction \begin{equation}\label{eq-toomuch} \mu(U) \geq \mu\left( k_y\cdot (\fX_g^{hol} \cap U)\right) + \mu \left(\fX_g^{hol} \cap U\right) \geq (3/4 + 3/4) \mu(U) > \mu(U) \ . \end{equation} Thus, the action $(\fX, \G, \Phi)$ cannot have essential holonomy at finite depth. \endproof
\begin{cor}\label{cor-commutators} Let $(\fX, \G, \Phi)$ be a Cantor action, and suppose that $\G$ is a finitely-generated nilpotent group. Then the set of points with non-trivial holonomy has measure 0; that is, $(\fX, \G, \Phi)$ does not have essential holonomy. \end{cor} \proof By Theorem \ref{thm-noetherian}, the action $(\fX, \G, \Phi)$ is locally quasi-analytic. As $\G$ is nilpotent, the subgroup $\gamma_{n_{\G}}(\G)$ is contained in the center of $\G$. If a central element fixes a point $x \in \fX$, then it fixes every point of the dense orbit of $x$, and hence acts as the identity on $\fX$. Thus the set of points with non-trivial holonomy for the action of $\gamma_{n_{\G}}(\G)$ on $\fX$ is empty, and in particular has measure 0; that is, this restricted action has no essential holonomy. Thus by Proposition~\ref{prop-commutators} the action of $\G$ on $\fX$ has no essential holonomy. \endproof
\section{Commutators and not locally quasi-analytic actions} In this section, we exhibit a family of examples to show that the assumption that an action $(\fX,\G,\Phi)$ is locally quasi-analytic is essential for the conclusions of Theorem \ref{thm-main0} and Corollary \ref{cor-commutator}. \begin{thm}\label{thm-examples} Let $\{n_\ell\}_{\ell \geq 1}$ be a sequence of positive integers with $n_\ell \geq 6$ and $n_{\ell +1} > 2 n_{\ell}$ for $\ell \geq 1$. Let $S_{\ell}$ be a set with $n_\ell$ elements, and let $\fX = \prod_{\ell \geq 1} S_\ell$. Let $A_\ell$ be the alternating group on $n_\ell$ symbols. There exists a countably generated subgroup $\G \subset \prod_{\ell \geq 1} {\rm Sym}(S_\ell)$ with the following properties. \begin{enumerate} \item The lower central series of $\G$ stabilizes, i.e. $\gamma_n(\G) = [\G,\G]$ for $n \geq 2$. \item The minimal equicontinuous action $(\fX,\G,\Phi)$ is not locally quasi-analytic and has essential holonomy, i.e. the set of points with non-trivial holonomy has full measure. \item The induced action $(\fX,[\G,\G],\Phi)$ of the commutator subgroup is minimal, not locally quasi-analytic and it has no essential holonomy. \end{enumerate} \end{thm} At the moment we are not aware of an action of a finitely generated group which exhibits similar properties for the commutator action. The family of examples in Theorem \ref{thm-examples} is an amalgam of the families considered in \cite[Theorem 1.5]{ABLN2022} and \cite[Theorem 6.1]{GL2019}. The idea to use actions of alternating groups comes from \cite[Theorem 1.5]{ABLN2022}, and the construction of an element with a positive measure set of points with non-trivial holonomy is the same as in \cite[Theorem 6.1]{GL2019}. When constructing the commutator subgroup, we have to restrict to using the direct sum of alternating groups instead of the direct product to ensure that the action of the commutator subgroup in our family has no essential holonomy (in fact, no points with non-trivial holonomy at all), and consequently the group we construct is not finitely generated. Merging these constructions yields the actions in Theorem~\ref{thm-examples}. \proof The demonstration of Theorem~\ref{thm-examples} follows from Lemmas~\ref{lem-construction1}, \ref{lem-construction2} and \ref{lem-construction3} below.
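Before giving the lemmas, we record a short numerical illustration, for one admissible choice of the sequence $\{n_\ell\}$, of the two phenomena established below: the elements of the direct sum constructed in Lemma \ref{lem-construction1} fix clopen neighborhoods of their fixed points, while the element $\gamma$ constructed before Lemma \ref{lem-construction2}, which interchanges the last two symbols in every factor, has a fixed-point set of positive measure consisting of points with non-trivial holonomy (Lemma \ref{lem-construction3}). The sketch is not part of the proof, and the sample sequence $n_\ell = 6 \cdot 3^{\ell-1}$ is our choice; it satisfies $n_1 \geq 6$ and $n_{\ell+1} > 2 n_\ell$.
\begin{verbatim}
# Numerical illustration (not part of the proof) for the sample sequence
# n_l = 6 * 3^(l-1).  gamma swaps the last two symbols of each S_l, while an
# element of the direct sum moves only finitely many factors.

import math

def n(l):
    return 6 * 3 ** (l - 1)          # sample sequence, indexed from l = 1

# gamma fixes s = s_1 s_2 ... exactly when every s_l avoids the last two
# symbols of S_l, so mu(Fix(gamma)) = prod_{l >= 1} (n_l - 2) / n_l > 0.
def mu_fix(depth):
    return math.prod((n(l) - 2) / n(l) for l in range(1, depth + 1))

print(mu_fix(5), mu_fix(30), mu_fix(60))     # stabilizes near 0.56 > 0

# Non-trivial holonomy at a fixed point s of gamma: inside the depth-m
# cylinder around s, the points whose symbol at level m+1 lies in the last
# two symbols of S_{m+1} are moved by gamma; their relative measure is
# 2 / n_{m+1} > 0 for every m, so no neighborhood of s is fixed pointwise.
for m in range(1, 6):
    print(m, 2 / n(m + 1))

# By contrast, an element of the direct sum supported on levels 1..m fixes
# the whole depth-m cylinder around any of its fixed points.
\end{verbatim}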
\begin{lemma}\label{lem-construction1} Consider the direct sum \begin{equation}\label{eq-defG} G = \bigoplus_{\ell \geq 1} A_\ell = \{g = (g_1,g_2,\ldots ) \mid g_\ell \in A_\ell, \quad g_\ell = e_\ell \textrm{ for all but a finite set of }\ell \} ~ , \end{equation} where $e_\ell$ is the identity element in $A_\ell$. Then the component-wise action of $G$ on $\fX$ is minimal, equicontinuous, not locally quasi-analytic and has no essential holonomy. \end{lemma} \proof The group $G$ acts on the direct product space $\fX$ minimally, since the action of every $A_\ell$ on $S_\ell$ is transitive. Define the metric on $\fX$ by setting $d_{\fX}(x,y) = 1$ for all $x=(x_i),y=(y_i) \in \fX$ if $x_1 \ne y_1$, and otherwise $$d_{\fX}(x,y) = 2^{-K}, \quad K= \max \{k \geq 1 \mid x_\ell = y_\ell \textrm{ for all } 1 \leq \ell \leq k\}.$$ Since $G$ acts on each component of $\fX$ by permutations, its action is equicontinuous. If $g \in G$ has a fixed point, then this point has a clopen neighborhood entirely fixed by the action of $g$, since $g$ acts non-trivially on at most a finite number of factors in the direct product $\fX$. Thus the action of $G$ on $\fX$ has no essential holonomy. We show that the action of $G$ is not locally quasi-analytic. Label the elements in $S_\ell$ by $\{1,2,3, \ldots, n_\ell\}$, and let $a_\ell = (123) \in A_\ell$, i.e. $a_\ell$ fixes all symbols except $1$, $2$ and $3$. Let $g \in G$ be an element such that, for some $m \geq 1$, we have $g_\ell = a_\ell$ for $\ell$ in a non-empty subset of $\{1, \ldots, m\}$, and $g_\ell = e_\ell$ otherwise. Then $g$ fixes every point in the clopen set $V = \prod_{1 \leq \ell \leq m} \{4,\ldots,n_\ell\} \times \prod_{\ell > m} S_\ell$. It is clear that the restriction $g|V$ has multiple extensions to larger sets; for instance, take $\widetilde g$ which acts non-trivially on the same factors as $g$ by the permutation $b_\ell = (132)$. \endproof Recall that the sequence $\{n_\ell\}_{\ell \geq 1}$ satisfies $n_{\ell+1} > 2 n_{\ell}$ for $\ell \geq 1$; we now realize $G$ as the commutator subgroup of a group $\G$ whose action has essential holonomy. For $\ell \geq 1$, define a permutation $\gamma_\ell = (n_\ell-1\, n_\ell)$, i.e. $\gamma_\ell$ fixes every symbol in $S_\ell$ except the last two, and let $\gamma = (\gamma_1, \gamma_2,\ldots)$ be an element in the \emph{direct product} $\prod_{\ell \geq 1} {\rm Sym}(S_\ell)$ of the symmetric groups; note that each $\gamma_\ell$ is an odd permutation, so $\gamma$ does not lie in $\prod_{\ell \geq 1} A_\ell$. Define \begin{equation}\label{eq-defGamma} \G = \langle \gamma, G \rangle \subset \prod_{\ell \geq 1} {\rm Sym}(S_\ell) \ . \end{equation} \begin{lemma}\label{lem-construction2} The commutators satisfy $ [G,G] = [\Gamma, G] = [\Gamma, \Gamma] = G$. \end{lemma} \proof For each $\ell \geq 1$, we have $n_{\ell} \geq 5$, hence the alternating group $A_{\ell}$ is simple and thus perfect, so $G$ is also perfect; that is, $G = [G,G]$. Next we show that $[\Gamma, \Gamma] = G$. Note that the restriction $\gamma|S_\ell = \gamma_\ell$ is an odd permutation, and $\gamma$ acts on the components of $\fX = \prod_{\ell \geq 1} S_\ell$ independently, so for any $g \in G$ the commutator $[\gamma, g]$ restricts to an even permutation $[\gamma_\ell, g_\ell]$ of $S_\ell$ for every $\ell$, and this restriction is trivial for all but finitely many $\ell$, hence $[\gamma, g] \in G$. Since $G$ is normal in $\G$ and $\G = \langle \gamma \rangle \, G$, the same computation shows that every commutator of elements of $\G$ lies in $G$. Thus, $G = [G,G] \subset [\G, G] \subset [\G,\G] \subset G$. \endproof Using the metric $d_\fX$, defined above, it is convenient to think of the finite products $\prod_{1 \leq \ell \leq k}S_\ell$ as sets of vertices of a rooted tree at level $k \geq 1$, where the root is a single vertex at level $0$ (it is omitted from $\fX$).
In such a tree, each vertex at level $\ell$ is connected to $n_{\ell+1} = |S_{\ell+1}|$ vertices at level $\ell+1$, and so one can think of vertices in $\prod_{1 \leq \ell \leq k}S_\ell$ as labeled by finite words $s_1 \cdots s_k$, where $s_\ell \in S_\ell$. The element $\gamma \in \prod_{\ell \geq 1} {\rm Sym}(S_\ell)$ defined above acts on the tree so that, for a given $s_\ell \in S_\ell$, $\gamma \cdot s_\ell = \gamma_\ell \cdot s_\ell$, and this action depends only on the symbol $s_\ell$ in the finite word $s_1 \cdots s_k$. An element of $\fX$ is an infinite sequence $s_1 s_2 \cdots$, where each truncated word $s_1 \cdots s_k$ corresponds to a vertex in $\prod_{1 \leq \ell \leq k}S_\ell$. Open balls of diameter $2^{-K}$ in the metric $d_\fX$ are the sets of infinite words in $\fX$ which coincide for their first $K$ symbols. The measure of each such open ball is $$\frac{1}{|\prod_{1 \leq \ell \leq K}S_\ell|} = \frac{1}{n_1 \cdots n_K}.$$ We now show that $\gamma$ has a positive measure set of points with non-trivial holonomy by explicitly computing the Lebesgue density of this set at each point. The argument is the same as in \cite[Theorem 6.1]{GL2019} and we give it here for completeness and for the convenience of the reader. \begin{lemma}\label{lem-construction3} Let ${\rm Fix}(\gamma)$ be the set of fixed points of the element $\gamma \in \prod_{\ell \geq 1} {\rm Sym}(S_\ell)$ defined above. Then every point in ${\rm Fix}(\gamma)$ has non-trivial holonomy, and $\mu({\rm Fix}(\gamma)) > 0$. \end{lemma} \proof First note that $\gamma$ fixes an infinite path $s = s_1 s_2 \cdots \in \fX$ if and only if for all $\ell \geq 1$ we have $s_\ell \ne n_\ell$ and $s_\ell \ne n_\ell - 1$. We claim that each such point has non-trivial holonomy. Indeed, let $V$ be an open neighborhood of $s$. Then there is an $m_V \geq 1$ such that $$V \cap \left(\prod_{1 \leq \ell \leq m_V} \{s_\ell\} \times \prod_{ \ell > m_V} S_\ell \right) =\prod_{1 \leq \ell \leq m_V}\{ s_\ell \} \times \prod_{ \ell > m_V} S_\ell,$$ and so $V$ contains points which are moved by the action of $\gamma$; for instance, any point of this cylinder whose symbol at level $m_V + 1$ equals $n_{m_V+1}$ is moved by $\gamma$. For each clopen ball $U_\ell$ around a fixed point $s$, the action of $\gamma$ permutes two clopen balls in $U_\ell$ consisting of sequences starting with $s_1 \cdots s_\ell (n_{\ell+1}-1)$ and $s_1 \cdots s_\ell n_{\ell+1}$, which have the total measure $2/(n_1 \cdots n_{\ell+1})$. Each of the remaining $n_{\ell+1} - 2$ clopen balls determined by words of length $\ell+1$ contains $2$ subsets of sequences starting with $s_1 \cdots s_{\ell+1} (n_{\ell+2}-1)$ and $s_1 \cdots s_{\ell+1} n_{\ell+2}$ permuted by the action, whose total measure is $2(n_{\ell+1} - 2)/(n_1 \cdots n_{\ell+1} n_{\ell+2})$. Continuing by induction, we compute the measure of the complement of the set ${\rm Fix}(\gamma)$ in $U_\ell$ and estimate it from above: $$\mu(U_\ell - {\rm Fix}(\gamma)) = \frac{1}{n_1 \cdots n_\ell} \left( \frac{2}{n_{\ell+1}} + \frac{2(n_{\ell+1} - 2)}{n_{\ell+1} n_{\ell+2}} + \frac{2(n_{\ell+1}-2)( n_{\ell+2} - 2)}{n_{\ell+1} n_{\ell+2} n_{\ell+3}} + \cdots \right) <\frac{1}{n_1 \cdots n_\ell} \sum_{i \geq 1} \frac{2}{n_{\ell+i}}.
$$ Since we assume that $n_{j+1} > 2 n_{j}$ for all $j \geq 1$, we have $n_{\ell+i} \geq 2^{i-1} n_{\ell+1}$ for all $i \geq 1$, and we obtain that $$\mu(U_\ell - {\rm Fix}(\gamma)) < \frac{1}{n_1 \cdots n_\ell} \frac{4}{n_{\ell+1}},$$ and so $$\mu(U_\ell \cap {\rm Fix}(\gamma)) > \frac{1}{n_1\cdots n_\ell} - \frac{4}{n_1 \cdots n_{\ell+1}}.$$ It follows that for every point in ${\rm Fix}(\gamma)$ the Lebesgue density is $1$, namely \begin{equation} 1 = \lim_{\ell \to \infty} \left(1 - \frac{4}{n_{\ell+1}} \right) \leq \lim_{\ell \to \infty} \frac{\mu (U_\ell \cap {\rm Fix}(\gamma))}{\mu(U_\ell)} \leq 1 \ . \end{equation} Thus $\gamma$ has a positive measure set of points with non-trivial holonomy. \endproof We have shown that the action of $\G$ on $\fX$ has essential holonomy, while the action of its commutator subgroup $[\G,\G] = G$ has no essential holonomy, which proves the assertions of Theorem~\ref{thm-examples}. \endproof
\begin{thebibliography}{10} \bibitem{AE2007} {M.~Ab\'{e}rt and G.~Elek}, \newblock {\it Non-abelian free groups admit non-essentially free actions on rooted trees}, \newblock {preprint}; {arXiv:0707.0970}. \bibitem{ABLN2022} {J.~\'{A}lvarez L\'{o}pez, R.~Barral Lijo, O.~Lukina, and H.~Nozawa}, \newblock {\it Wild {C}antor actions}, \newblock {\bf J. Math. Soc. Japan}, 74:447--472, 2022. \bibitem{ALC2009} {J.~{\'A}lvarez L{\'o}pez and A.~Candel}, \newblock {\it Equicontinuous foliated spaces}, \newblock {\bf Math. Z.}, 263:725--774, 2009. \bibitem{AS1994} {R.J.~Archbold and J.~Spielberg}, \newblock {\it Topologically free actions and ideals in discrete {$C^*$}-dynamical systems}, \newblock {\bf Proc. Edinburgh Math. Soc. (2)}, 37:119--124, 1994. \bibitem{Auslander1988} {J.~Auslander}, \newblock {\bf Minimal flows and their extensions}, \newblock {North-Holland Mathematics Studies}, Vol. 153, {North-Holland Publishing Co., Amsterdam}, 1988. \bibitem{Baer1956} {R.~Baer}, \newblock {\it Noethersche {G}ruppen}, \newblock {\bf Math. Z.}, 66:269--288, 1956. \bibitem{BergeronGaboriau2004} {N.~Bergeron and D.~Gaboriau}, \newblock {\it Asymptotique des nombres de {B}etti, invariants {$l^2$} et laminations}, \newblock {\bf Comment. Math. Helv.}, 79(2):362--395, 2004. \bibitem{BoyleTomiyama1998} {M.~Boyle and J.~Tomiyama}, \newblock {\it Bounded topological orbit equivalence and {$C^*$}-algebras}, \newblock {\bf J. Math. Soc. Japan}, 50:317--329, 1998. \bibitem{CortezPetite2008} {M.-I.~Cortez and S.~Petite}, \newblock {\it $G$-odometers and their almost one-to-one extensions}, \newblock {\bf J. London Math. Soc.}, 78(2):1--20, 2008. \bibitem{CortezMedynets2016} {M.I.~Cortez and K.~Medynets}, \newblock {\it Orbit equivalence rigidity of equicontinuous systems}, \newblock {\bf J. Lond. Math. Soc. (2)}, 94:545--556, 2016. \bibitem{Dye1959} {H.~Dye}, \newblock {\it On groups of measure preserving transformations}, \newblock {\bf Amer. J. Math.}, 81:119--159, 1959. \bibitem{DHL2016a} {J.~Dyer, S.~Hurder and O.~Lukina}, \newblock {\it The discriminant invariant of Cantor group actions}, \newblock {\bf Topology Appl.}, 208: 64--92, 2016. \bibitem{Ellis1969} {R.~Ellis}, \newblock {\bf Lectures on topological dynamics}, \newblock {W. A. Benjamin, Inc., New York}, 1969. \bibitem{GPS2019} {T.~Giordano, I.~Putnam and C.~Skau}, \newblock {\it {$\mathbb{Z}^d$}-odometers and cohomology}, \newblock {\bf Groups Geom. Dyn.}, 13:909--938, 2019. \bibitem{Grigorchuk2011} {R.~Grigorchuk}, {\it Some Topics in the Dynamics of Group Actions on Rooted Trees}, {\bf Proc. Steklov Institute of Math.}, 273: 64--175, 2011.
\bibitem{GL2019} {M.~Gr\"oger and O.~Lukina}, \newblock {\it Measures and regularity of group Cantor actions}, \newblock {\bf Discrete Contin. Dynam. Sys. Ser. A.}, 41:2001--2029, 2021. \bibitem{Haefliger1985} {A.~Haefliger}, \newblock {\it Pseudogroups of local isometries}, in Differential Geometry (Santiago de Compostela, 1984), edited by L.A. Cordero, \newblock {\bf Res. Notes in Math.}, 131:174--197, Boston, 1985. \bibitem{HL2018a} {S.~Hurder and O.~Lukina}, \newblock {\it Wild solenoids}, \newblock {\bf Transactions A.M.S.}, 371:4493--4533, 2019; {arXiv:1702.03032}. \bibitem{HL2018b} {S.~Hurder and O.~Lukina}, \newblock {\it Orbit equivalence and classification of weak solenoids}, \newblock {\bf Indiana Univ. Math. J.}, 69:2339--2363, 2020; {arXiv:1803.02098}. \bibitem{HL2019a} {S.~Hurder and O.~Lukina}, \newblock {\it Limit group invariants for non-free Cantor actions}, \newblock {\bf Ergodic Theory Dynam. Systems}, 41:1751--1794, 2021; {arXiv:1904.11072}. \bibitem{HL2021a} {S.~Hurder and O.~Lukina}, \newblock {\it Nilpotent Cantor actions}, \newblock {\bf Proceedings A.M.S.}, 150:289--304, 2022. \bibitem{Joseph2021} {M.~Joseph}, \newblock{\it Continuum of allosteric actions for non-amenable surface groups}, \newblock {arXiv:2110.01068}. \bibitem{Joseph2023} {M.~Joseph}, \newblock{\it Wreath products, allostery and amenability}, \newblock {arXiv:2301.07616}. \bibitem{Kennedy2020} {M.~Kennedy}, \newblock {\it An intrinsic characterization of {$C^*$}-simplicity}, \newblock {\bf Ann. Sci. \'{E}c. Norm. Sup\'{e}r. (4)}, 53:1105--1119, 2020. \bibitem{KSS2006} {M.~Kambites, P.~V.~Silva and B.~Steinberg}, \newblock{\it The spectra of lamplighter groups and {C}ayley machines}, \newblock{\bf Geom. Dedicata}, 120:193--227, 2006. \bibitem{LavrenyukNekrashevych2002} {Y.~Lavreniuk and V.~Nekrashevych}, \newblock {\it Rigidity of branch groups acting on rooted trees}, \newblock {\bf Geom. Dedicata}, 89:159--179, 2002. \bibitem{LeBMB2018} {A.~Le Boudec and N.~Matte Bon}, \newblock {\it Subgroup dynamics and {$C^*$}-simplicity of groups of homeomorphisms}, \newblock {\bf Ann. Sci. \'{E}c. Norm. Sup\'{e}r. (4)}, 51:557--602, 2018. \bibitem{Miller2008} {B.~Miller}, \newblock {\it The existence of measures of a given cocycle, I: atomless, ergodic $\sigma$-finite measures}, \newblock {\bf Ergodic Theory Dynam. Systems}, 28:1599--1613, 2008. \bibitem{Nekrashevych2005} {V. Nekrashevych}, {\bf Self-similar groups}, Mathematical Surveys and Monographs, 117, American Mathematical Society, Providence, RI, 2005. \bibitem{Renault2008} {J.~Renault}, \newblock {\it Cartan subalgebras in {$C^*$}-algebras}, \newblock {\bf Irish Math. Soc. Bull.}, 61:29--63, 2008. \bibitem{SilvaSteinberg2005} {P.~Silva and B.~Steinberg}, \newblock{\it On a class of automata groups generalizing lamplighter groups}, \newblock{\bf Internat. J. Algebra Comput.}, 15:1213--1234, 2005. \bibitem{Vorobets2012} {Ya.~Vorobets}, {\it Notes on the {S}chreier graphs of the {G}rigorchuk group}, in {\bf Dynamical systems and group actions}, {Contemp. Math.}, {567}:{221--248}, 2012. \end{thebibliography} \end{document}
2205.06223v1
http://arxiv.org/abs/2205.06223v1
Record-Setters in the Stern Sequence
\pdfoutput=1 \documentclass[12pt]{article} \usepackage{lineno} \usepackage[usenames]{color} \usepackage[colorlinks=true, linkcolor=webgreen, filecolor=webbrown, citecolor=webgreen]{hyperref} \definecolor{webgreen}{rgb}{0,.5,0} \definecolor{webbrown}{rgb}{.6,0,0} \newcommand{\seqnum}[1]{\href{https://oeis.org/#1}{\rm \underline{#1}}} \usepackage{amsmath, amssymb, amscd, amsthm, amsfonts} \usepackage{mathtools} \usepackage{tabto} \usepackage{tabularx} \usepackage[makeroom]{cancel} \usepackage{fullpage} \usepackage{float} \usepackage{longtable} \usepackage[tableposition=below]{caption} \captionsetup[longtable]{skip=1em} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \newtheorem{definition}{Definition} \newtheorem{observation}[theorem]{Observation} \newcommand{\INFIX}{\geq_{\rm inf}} \newcommand{\SUFFIX}{\geq_{\rm suff}} \newcommand{\PREFIX}{\geq_{\rm pref}} \newcommand{\VMAT}{\begin{bmatrix} 1 & 0 \end{bmatrix}} \newcommand{\WMAT}{\begin{bmatrix} 1 \\ 0 \end{bmatrix} } \newcommand{\ZMAT}{\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} } \newcommand{\IMAT}{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} } \definecolor{green}{RGB}{0,127,0} \definecolor{red}{RGB}{200,0,0} \begin{document} \title{Record-Setters in the Stern Sequence} \author{Ali Keramatipour\\ School of Electrical and Computer Engineering\\ University of Tehran\\ Tehran\\ Iran\\ \href{mailto:[email protected]}{\tt [email protected]} \\ \and Jeffrey Shallit\\ School of Computer Science\\ University of Waterloo\\ Waterloo, ON N2L 3G1 \\ Canada\\ \href{mailto:[email protected]}{\tt [email protected]}} \maketitle \begin{abstract} Stern's diatomic series, denoted by $(a(n))_{n \geq 0}$, is defined by the recurrence relations $a(2n) = a(n)$ and $a(2n + 1) = a(n) + a(n + 1)$ for $n \geq 1$, and initial values $a(0) = 0$ and $a(1) = 1$. A record-setter for a sequence $(s(n))_{n \geq 0}$ is an index $v$ such that $s(i) < s(v)$ holds for all $i < v$. In this paper, we give a complete description of the record-setters for the Stern sequence. \end{abstract} \section{Introduction}\label{section-introduction} Stern's sequence $(a(n))_{n \geq 0}$, defined by the recurrence relations $$ a(2n) = a(n), \quad a(2n+1) = a(n)+a(n+1),$$ for $n \geq 0$, and initial values $a(0) = 0$, $a(1) = 1$, has been studied for over 150 years. It was introduced by Stern in 1858 \cite{Stern:1858}, and later studied by Lucas \cite{Lucas:1878}, Lehmer \cite{Lehmer:1929}, and many others. For a survey of the Stern sequence and its amazing properties, see the papers of Urbiha \cite{Urbiha:2001} and Northshield \cite{Northshield:2010}. It is an example of a $2$-regular sequence \cite[Example 7]{Allouche&Shallit:1992}. The first few values of this sequence are given in Table~\ref{tab1}; it is sequence \seqnum{A002487} in the {\it On-Line Encyclopedia of Integer Sequences} (OEIS)\cite{Sloane:2022}. \begin{table}[H] \begin{center} \begin{tabular}{c|cccccccccccccccc} $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15\\ \hline $a(n)$ & 0 & 1 & 1 & 2 & 1 & 3 & 2 & 3 & 1 & 4 & 3 & 5 & 2 & 5 & 3 & 4 \end{tabular} \end{center} \caption{First few values of the Stern sequence.} \label{tab1} \end{table} The sequence $a(n)$ rises and falls in a rather complicated way; see Figure~\ref{fig1}. 
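The recurrence lends itself to direct computation. The short Python sketch below, included only as an illustration, computes $a(n)$ and lists its record-setters; it reproduces the values in Tables~\ref{tab1} and \ref{tab2}.
\begin{verbatim}
# Illustrative sketch: compute Stern's sequence a(n) from the recurrence
# a(2n) = a(n), a(2n+1) = a(n) + a(n+1), and list its record-setters.

from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    if n < 2:
        return n
    return a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)

print([a(n) for n in range(16)])
# [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5, 2, 5, 3, 4]   (Table 1)

record_setters, best = [], -1
for n in range(200):
    if a(n) > best:
        best = a(n)
        record_setters.append(n)
print(record_setters)
# [0, 1, 3, 5, 9, 11, 19, 21, 35, 37, 43, 69, 73, 75, 83, 85, 139, 147, ...]   (Table 2)
\end{verbatim}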
\begin{figure}[htb] \begin{center} \includegraphics[width=6.5in]{sternchart3.png} \end{center} \caption{Stern's sequence and its running maximum for $0\leq n \leq 1200$.} \label{fig1} \end{figure} For this reason, several authors have been interested in understanding the local maxima of $(a(n))_{n \geq 0}$. This is easiest to determine when one restricts one's attention to numbers with $i$ bits; that is, to the interval $[2^{i-1}, 2^{i})$. Lucas \cite{Lucas:1878} observed without proof that $\max_{2^{i-1} \leq n < 2^i} a(n) = F_{i+1}$, where $F_n$ is the $n$th Fibonacci number, defined as usual by $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$, and proofs were later supplied by Lehmer \cite{Lehmer:1929} and Lind \cite{Lind:1969}. The second- and third-largest values in the same interval, $[2^{i-1}, 2^{i})$, were determined by Lansing \cite{Lansing:2014}, and more general results for these intervals were obtained by Paulin \cite{Paulin:2017}. On the other hand, Coons and Tyler \cite{Coons&Tyler:2014} showed that $$ \limsup_{n \rightarrow \infty} \frac{a(n)}{n^{\log_2 \varphi}} = \frac{\varphi^{\log_2 3}}{\sqrt{5}},$$ where $\varphi = (1+\sqrt{5})/2$ is the golden ratio. This gives the maximum order of growth of Stern's sequence. Later, Defant \cite{Defant:2016} generalized their result to the analogue of Stern's sequence in all integer bases $b \geq 2$. In this paper, we are concerned with the positions of the ``running maxima'' or ``record-setters'' of the Stern sequence overall, not restricted to subintervals of the form $[2^{i-1}, 2^i)$. These are the indices $v$ such that $a(j) < a(v)$ for all $j < v$. The first few record-setters and their values are given in Table~\ref{tab2}. \begin{table}[H] \begin{center} \begin{tabular}{c|cccccccccccccccccc} $i$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\ \hline $v_i$ & 0 & 1 & 3 & 5 & 9 & 11 & 19 & 21 & 35 & 37 & 43 & 69& 73 & 75 & 83 & 85 & 139 & 147 \\ $a(v_i)$ & 0 & 1 & 2 & 3 & 4 & 5 & 7 & 8 & 9 & 11 & 13 & 14 & 15 & 18 & 19 & 21 & 23 &26 \end{tabular} \end{center} \caption{First few record-setters for the Stern sequence.} \label{tab2} \end{table} The sequence of record-setters $(v_i)_{i \geq 1}$ is sequence \seqnum{A212288} in the OEIS, and the sequence $(a(v_i))_{i \geq 1}$ is sequence \seqnum{A212289} in the OEIS. In this paper, we provide a complete description of the record-setters for the Stern sequence. To state the theorem, we need to use a standard notation for repetitions of strings: for a string $x$, the expression $x^i$ means $\overbrace{xx\cdots x}^i$. Thus, there is a possibility for confusion between ordinary powers of integers and powers of strings, but hopefully the context will make our meaning clear. \begin{theorem} \label{mainTheorem} The $k$-bit record-setters, for $k < 12$, are given in Table~\ref{tab3}. For $k \geq 12$, the $k$-bit record-setters of the Stern sequence, listed in increasing order, have the following representation in base $2$: \begin{itemize} \item $k$ even, $k = 2n$: $$\begin{cases} 100\, (10)^a\, 0\, (10)^{n-3-a}\, 11, & \text{ for } 0 \leq a \leq n-3; \\ (10)^{b}\, 0\, (10)^{n-b-1} \, 1, & \text{ for } 1 \leq b \leq \lfloor n/2 \rfloor; \\ (10)^{n-1}\, 11. \end{cases}$$ \item $k$ odd, $k=2n+1$: $$ \begin{cases} 10 00\, (10)^{n-2}\, 1 ; \\ 100100\, (10)^{n-4}\, 011; \\ 100\, (10)^b\, 0\, (10)^{n-2-b} \, 1, & \text{ for } 1 \leq b \leq \lceil n/2 \rceil - 1; \\ (10)^{a+1}\, 0\, (10)^{n-2-a}\, 11, & \text{ for } 0 \leq a \leq n-2;\\ (10)^{n}\, 1. 
\end{cases} $$ \end{itemize} In particular, for $k \geq 12$, the number of $k$-bit record-setters is $\lfloor 3k/4 \rfloor - (-1)^k$. \end{theorem} In this paper, we prove the correctness of the classification above by ruling out many cases and then trying to find the set of record-setters. Our approach is to interpret numbers as binary strings. In Section \ref{basics}, we will introduce and provide some basic lemmas regarding this approach. To find the set of record-setters, we exclude many candidates and prove they do not belong to the set of record-setters in Section \ref{search_space}. In Section \ref{limit1001000}, we rule out more candidates by using some calculations based on Fibonacci numbers. Finally, in Sections \ref{final_even} and \ref{final_odd}, we finish the classification of record-setters and prove Theorem \ref{mainTheorem}. {\small\begin{center} \begin{longtable}[htb]{c|r|r} $k$ & record-setters & numerical \\ & with $k$ bits & values \\ \hline 1 & 1 & 1 \\ 2 & 11 & 3 \\ 3 & 101 & 5 \\ 4 & 1001 & 9 \\ & 1011 & 11 \\ 5 & 10011 & 19 \\ & 10101 & 21 \\ 6 & 100011 & 35 \\ & 100101 & 37 \\ & 101011 & 43 \\ 7 & 1000101 & 69 \\ & 1001001 & 73 \\ & 1001011 & 75 \\ & 1010011 & 83 \\ & 1010101 & 85 \\ 8 & 10001011 & 139 \\ & 10010011 & 147 \\ & 10010101 & 149 \\ & 10100101 & 165 \\ & 10101011 & 171 \\ 9 & 100010101 & 277 \\ & 100100101 & 293 \\ & 100101011 & 299 \\ & 101001011 & 331 \\ & 101010011 & 339 \\ & 101010101 & 341 \\ 10 & 1000101011 & 555 \\ & 1001001011 & 587 \\ & 1001010011 & 595 \\ & 1001010101 & 597 \\ & 1010010101 & 661 \\ & 1010101011 & 683 \\ 11 & 10001010101 & 1109 \\ & 10010010101 & 1173 \\ & 10010100101 & 1189 \\ & 10010101011 & 1195 \\ & 10100101011 & 1323 \\ & 10101001011 & 1355 \\ & 10101010011 & 1363 \\ & 10101010101 & 1365 \\ \caption{$k$-bit record-setters for $k < 12$.} \label{tab3} \end{longtable} \end{center} } \section{Basics}\label{basics} We start off by defining a new sequence $(s(n))_{n \geq 0}$, which is the Stern sequence shifted by one: $s(n) = a(n + 1)$ for $n \geq 0$. Henceforth we will be mainly concerned with $s$ instead of $a$. Let $R$ be the set of record-setters for the sequence $(s(n))_{n \geq 0}$, so that $R = \{ v_i - 1 \, : \, i \geq 1 \}$. A {\it hyperbinary representation\/} of a positive integer $n$ is a summation of powers of $2$, using each power at most twice. The following theorem of Carlitz \cite{Carlitz:1964} provides another way of interpreting the quantity $s(n)$: \begin{theorem} The number of hyperbinary representations of $n$ is $s(n)$. \end{theorem} We now define some notation. We frequently represent integers as strings of digits. If $ x = e_{t-1} e_{t-2} \cdots e_1 e_0$ is a string of digits 0, 1, or 2, then $[x]_2$ denotes the integer $n = \sum_{0 \leq i < t} e_i 2^i$. For example, \begin{equation*} 43 = [101011]_2 = [012211]_2 = [020211]_2 = [021011]_2 = [100211]_2. \label{example43} \end{equation*} By ``breaking the power $2^i$'' or the $(i + 1)$-th bit from the right-hand side, we mean writing $2^i$ as two copies of $2^{i - 1}$. For example, breaking the power $2^1$ into $2^0 + 2^0$ can be thought of as rewriting the string $10$ as $02$. Now we state two helpful but straightforward lemmas: \begin{lemma} \label{breakBits} Let string $x$ be the binary representation of $n \geq 0$, that is $(x)_2 = n$. All proper hyperbinary representations of $n$ can be reached from $x$, only by breaking powers $2^i$, for $0 < i <|x|$. 
\end{lemma} \begin{proof} To prove this, consider a hyperbinary representation string $y = c_{t-1} c_{t-2} \cdots c_1 c_0$ of $n$. We show that $y$ can be reached from $x$ using the following algorithm: Let $i$ be the position of $y$'s leftmost 2. In each round, change bits $c_i := c_i - 2$ and $c_{i+1} := c_{i+1} + 1$. By applying this algorithm, $i$ increases until the number of 2s decrease, while the value $[y]_2$ remains the same. Since $i$ cannot exceed $t - 1$, eventually $y$ would have no 2s. Therefore, string $y$ becomes $x$. By reversing these steps, we can reach the initial value of $y$ from $x$, only by ``breaking" bits. \end{proof} \begin{lemma} \label{breaktwice} Let string $x$ be the binary representation of $n \geq 0$. In the process of reaching a hyperbinary representation from $x$, only by breaking bits, a bit cannot be broken twice. \end{lemma} \begin{proof} Since $2^i > 2^{i-1} + \cdots + 2^0$, and $[2(0)^i]_2$ $>$ $[(2)^{i-1}]_2$, the $(i+1)$-th bit from right cannot be broken twice. \end{proof} For simplicity, we define a new function, $G(x)$, and work with binary and hyperbinary representations henceforward. The argument of $G$ is a string $x$ containing only the digits $\{0,1,2, 3\}$, and its value is the number of different hyperbinary representations reachable from $x$, only by the breaking mechanism we defined above. Thus, for example, Eq.~\eqref{example43} demonstrates that $G(101011) = 5$. Although the digit 3 cannot appear in a proper hyperbinary representation, we use it here to mean that the corresponding bit \textit{must} be broken. Also, from Lemma~\ref{breaktwice}, we know that the digit 4 cannot appear since it must be broken twice. We can conclude from Lemma \ref{breakBits}, for a \textit{binary} string $x$, we have $G(x) = s([x]_2)$. We define $G(\epsilon)= 1$. In what follows, all variables have the domain $\{ 0,1 \}^*$; if we have a need for the digits $2$ and $3$, we write them explicitly. We will later use the following lemma to get rid of 2s and 3s in our hyperbinary representations and get a representation using only $0$s and $1$s: \begin{lemma} \label{remove23} For a binary string $h$, the equalities \begin{itemize} \item[(a)] $G(2h) = G(1h)$, \item[(b)] $G(30h) = G(1h)$, \item[(c)] $G(3(1)^i0h) = G(1h)$, \item[(d)] $G(3(1)^i) = G(3) = 0$ \end{itemize} hold. \end{lemma} \begin{proof} \leavevmode \begin{itemize} \item[(a)] According to Lemma \ref{breaktwice}, we cannot break the leftmost bit twice. Therefore, the number of different hyperbinary representations we can reach from $2h$ and $1h$, i.e. their $G$-value, is the same. \item[(b)] Since 3 cannot appear in a hyperbinary representation, we must break it. This results in a new string $22h$. Due to Lemma \ref{breaktwice}, the first (leftmost) $2$ is useless, and we cannot break it again. Thus, $G(30h) = G(2h) = G(1h)$. \item[(c)] Since we have to break the 3 again, the string $3(1)^i0h$ becomes $23(1)^{i -1}0h$, and $G(3(1)^i0h) = G(3(1)^{i -1}0h)$ . By continuing this we get $G(3(1)^i0h) = G(30h) = G(1h)$. \item[(d)] To calculate $3(1)^i$'s $G$-value, we must count the number of proper hyperbinary representations reachable from $3(1)^i$. The first 3 must be broken, and by breaking 3, we obtain another string of the same format, i.e., $3(1)^{i-1}$. By continuing this, we reach the string $3$, which cannot be broken any further and is not a valid hyperbinary string. 
Therefore $G(3(1)^i) = G(3) = 0$ \end{itemize} \end{proof} We now define two transformations on string $h$, prime and double prime transformations. For a string $h$, we let $h'$ be the string resulting from adding two to its leftmost bit, and then applying Lemma~\ref{remove23} to remove the excessively created 2 or 3. Therefore, string $h'$ is either a {\it binary} string, or it is 3, which is not transformable as the case (d) in Lemma~\ref{remove23}. For example, \begin{itemize} \item[(a)] If $h = 0011$, then we get $2011$, and by applying Lemma~\ref{remove23}, we have $h' =1011$. \item[(b)] If $h = 1011$, then $h' = 111$. \item[(c)] If $h = \epsilon$, then $h$ has no leftmost bit, and $h'$ is undefined. Therefore, we set $\epsilon' = 3$ and $G(\epsilon') = 0$. \item[(d)] If $h = 1$, then $h' = 3$ and $G(h') = 0$. \end{itemize} We let $h''$ be the string resulting from removing all trailing zeroes and decreasing the rightmost bit by 1. For example, \begin{itemize} \item[(a)] If $h = 100\ 100$, then $h'' = 1000$; \item[(b)] If $h = 1011$, then $h'' = 10\ 10$; \item[(c)] If $h = 3$, then $h'' = 2$; \item[(d)] If $h = 0^i$ for $i \geq 0$, then after removing trailing zeros, the string does not have a rightmost bit and is not in the transformation function's domain. Therefore, we set $G(h'') = 0$. \end{itemize} The reason behind defining prime and double prime of strings is to allow dividing a single string into two pieces and calculating the $G$ function for both pieces. This way, we can calculate $G$-values more easily. For example, $h'$ is useful when a bit with the value $2^{|h|}$ is broken, and $h''$ is useful when we want to break $2^0$ and pass it to another string on its right. Lemma~\ref{breaktwice} implies this usefulness as we cannot break a bit twice; thus, we can assume the two pieces are entirely separate after breaking a bit. \section{Ruling out Candidates for Record-Setters}\label{search_space} In this section, by using Lemmas \ref{breakBits} and \ref{remove23}, we try to decrease the search space as much as possible. A useful tool is linear algebra. We now define a certain matrix $\mu(x)$ for a binary string $x$. We set \begin{equation} \mu(x) = \begin{bmatrix} G(x) & G(x'')\\ G(x') & G((x')'') \end{bmatrix} . \end{equation} For example, when $|x|=1$, the values are \begin{align*} &G(1) = 1, && G(1'') = G(0) = 1,\\ &G(1') = G(3) = 0, && G( (1')'') = G(3'') = G(2) = G(1) = 1,\\ &G(0) = 1, && G(0'') = 0,\\ &G(0') = G(2) = 1, && G( (0')'') = G(2'') = G(1) = 1, \end{align*} and the corresponding matrices are \begin{equation*} \mu(1) = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} \text{ and } \mu(0) = \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix}. \end{equation*} In the case where $x = \epsilon$, the values are \begin{align*} &G(\epsilon) = 1, && G(\epsilon'') = 0,\\ &G(\epsilon') = G(3) = 0, && G( (\epsilon')'') = G(3'') = G(2) = G(1) = 1,\\ \end{align*} and the matrix is \begin{equation*} \mu(\epsilon) = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \end{equation*} the identity matrix. \begin{theorem} \label{matrix_linearization} For two binary strings $x$ and $y$, the equation \begin{equation} \mu(xy) = \mu(x)\cdot\mu(y) \end{equation} holds. \end{theorem} \begin{proof} To show this, we prove $\mu(1x) = \mu(1)\cdot\mu(x)$ and $\mu(0x) = \mu(0) \cdot \mu(x)$. The general case for $\mu(xy) = \mu(x)\cdot\mu(y)$ then follows by induction. We first prove the case for $1x$. 
Consider \begin{equation*} \mu(1)\cdot\mu(x) = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} G(x) & G(x'')\\ G(x') & G((x')'') \end{bmatrix} = \begin{bmatrix} G(x) + G(x') & G(x'') + G((x')'')\\ G(x') & G((x')'') \end{bmatrix}, \end{equation*} which must equal \begin{equation*} \mu(1x) = \begin{bmatrix} G(1x) & G((1x)'')\\ G((1x)') & G(((1x)')'') \end{bmatrix}. \end{equation*} We first prove $G(1x) = G(x) + G(x')$. Consider two cases where the first 1 either breaks or not. The number of hyperbinary representations where it does not break equals $G(x)$; if it breaks, then the rest of the string becomes $0x'$, which has $G(x')$ representations. To show $G((1x)'') = G(x'') + G((x')'')$, we use the same approach. The first one either breaks or not, resulting in two different strings, $x$ and $x'$. In both cases, we must apply the double prime transformation to break a $2^0$ in order to pass it to a string on the right side of $1x$. For the equality of the bottom row, the string $(1x)'$ is $3x$; thus, the 3 must be broken, and the rest of the string becomes $x'$. So $\mu(1x) = \mu(1)\cdot\mu(x)$ holds. The case of $0x$ can be shown using similar conclusions. Consider \begin{equation*} \mu(0)\cdot\mu(x) = \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} G(x) & G(x'')\\ G(x') & G((x')'') \end{bmatrix} = \begin{bmatrix} G(x) & G(x'') \\ G(x) + G(x') & G(x'') + G((x')'') \end{bmatrix}, \end{equation*} which must equal \begin{equation*} \mu(0x) = \begin{bmatrix} G(0x) & G((0x)'')\\ G((0x)') & G(((0x)')'') \end{bmatrix} = \begin{bmatrix} G(x) & G(x'')\\ G(2x) & G((2x)'') \end{bmatrix} = \begin{bmatrix} G(x) & G(x'')\\ G(1x) & G((1x)'') \end{bmatrix}. \end{equation*} We have already shown $G(1x) = G(x) + G(x')$ and $G((1x)'') = G(x'') + G((x')'')$. Therefore, the equation $\mu(0x) = \mu(0)\cdot\mu(x)$ holds, and the theorem is proved. \end{proof} This theorem also gives us a helpful tool to compute $G(x)$, $G(x'')$, $G(x')$, and $G((x')''$ as $\mu(x)$ is just a multiplication of $\mu(1)$s and $\mu(0)$s. \begin{lemma} \label{G_linearization} For a string $x$, the equation $G(x) = \VMAT \mu(x) \WMAT $ holds. This multiplication simply returns the top-left value of the $\mu(x)$ matrix. \end{lemma} From Theorem \ref{matrix_linearization} and Lemma \ref{G_linearization} we deduce the following result. \begin{lemma} \label{string-division} For binary strings $x, y$, the equation \begin{equation} G(xy) = G(x)G(y) + G(x'')G(y') \end{equation} holds. \end{lemma} \begin{proof} We have \begin{align*} G(xy) &= \VMAT\mu(xy)\WMAT = \VMAT\mu(x)\mu(y)\WMAT\\ &= \VMAT \begin{bmatrix} G(x)G(y) + G(x'')G(y') & G(x)G(y'') + G(x'')G((y')'')\\ G(x')G(y)+ G((x')'')G(y') & G(x')G(y'') + G((x')'')G((y')'') \end{bmatrix}\WMAT \\ &= G(x)G(y) + G(x'')G(y'). \end{align*} This can also be explained in another way. If we do not break the rightmost bit of $x$, we can assume the two strings are separate and get $G(x)G(y)$ number of hyperbinary representations. In case we break it, then $G(x'')G(y')$ ways exist. \end{proof} In what follows, we always set $v := \VMAT$ and $w := \WMAT$. Here we define three comparators that help us replace substrings (or contiguous subsequences) in order to obtain a new string without decreasing the string's $G$-value. \begin{definition}[Comparators] In this paper, when we state a matrix $M_1$ is greater than or equal to the matrix $M_0$, we mean each entry of $M_1 - M_0$ is non-negative (they both must share the same dimensions). 
\begin{itemize} \item The infix comparator: For two strings $y$ and $t$, the relation $ t \INFIX y$ holds if $\mu(t) \geq \mu(y)$ holds. \item The suffix comparator: For two strings $y$ and $t$, the relation $ t \SUFFIX y$ holds if $ \mu(t)\cdot w \geq \mu(y)\cdot w$ holds. \item The prefix comparator: For two strings $y$ and $t$, the relation $t \PREFIX y$ holds if $ v\cdot\mu(t) \geq v\cdot\mu(y) $ holds. \end{itemize} \end{definition} \begin{lemma} \label{gc_lemma} If $t \INFIX y$ and $t$ represents a smaller number, then no record-setter can contain $y$ as a substring. \end{lemma} \begin{proof} Consider a string $a = xyz$. According to Lemma \ref{G_linearization}, we have \begin{equation*} G(a) = v \cdot \mu(x) \cdot \mu(y) \cdot \mu(z) \cdot w. \end{equation*} Since $ \mu(t) \geq \mu(y)$, and all entries in the matrices are non-negative, the replacement of $y$ with $t$ does not decrease $G(a)$, and also yields a smaller number, that is $[xtz]_2 \leq [xyz]_2$. Therefore, $[xyz]_2 \notin R$. \end{proof} As an example, consider the two strings $111$ and $101$. Then $101 \INFIX 111$ holds, since \begin{equation*} \mu(101) = \begin{bmatrix} 2 & 3\\ 1 & 2 \end{bmatrix} \geq \mu(111) = \begin{bmatrix} 1 & 3\\ 0 & 1 \end{bmatrix} . \end{equation*} \begin{lemma} \label{endLemma} If $t < y$ and $t \SUFFIX y$, then $y$ is not a suffix of a record-setter. \end{lemma} \begin{proof} Consider a string $a = xy$. We have shown $G(a) = v \cdot \mu(x) \cdot \mu(y) \cdot w$. By replacing $y$ with $t$, since $\mu(t) \cdot w \geq \mu(y) \cdot w$, the value $G(a)$ does not decrease, and we obtain a smaller string. \end{proof} \begin{lemma} \label{beginLemma} If $t < x$ and $t \PREFIX x$, then $x$ is not a prefix of a record-setter. \end{lemma} \begin{corollary} \label{lemma111} For $h \in R$, since $101 \INFIX 111$, the string $h$ cannot contain $111$ as a substring. \end{corollary} We have established that a record-setter $h$ cannot contain three consecutive 1s. Now, we plan to prove $h$ cannot have two consecutive 1s, either. We do this in the following lemmas and theorems. The following theorem provides examples of strings whose $G$-values are Fibonacci numbers. \begin{theorem} \label{fibonacci-vals} For $i \geq 0$, the equations \begin{align} G((10)^i) &= F_{2i+1},\label{Fib1st} \\ G((10)^i0) &= F_{2i + 2},\label{Fib2nd}\\ G(1(10)^i) &= F_{2i + 2}, \text{ and}\label{Fib3rd} \\ G(1(10)^i0) &= F_{2i + 3}\label{Fib4th} \end{align} hold. \end{theorem} \begin{proof} We first prove that the following equation holds: \begin{equation} \mu((10)^i) = \begin{bmatrix} F_{2i + 1} & F_{2i}\\ F_{2i} & F_{2i - 1} \end{bmatrix} . \label{mat10} \end{equation} The case for $i = 1$, namely $\mu(10) = \begin{bmatrix} 2 & 1\\ 1 & 1 \end{bmatrix}$, holds. We now use induction: \begin{equation*} \mu((10)^{i + 1}) = \mu((10)^i) \mu(10) = \begin{bmatrix} F_{2i + 1} & F_{2i}\\ F_{2i} & F_{2i - 1} \end{bmatrix} \begin{bmatrix} 2 & 1\\ 1 & 1 \end{bmatrix} = \begin{bmatrix} F_{2i + 3} & F_{2i + 2}\\ F_{2i + 2} & F_{2i + 1} \end{bmatrix}, \end{equation*} and thus we can conclude \eqref{Fib1st}.
For the other equations \eqref{Fib2nd}, \eqref{Fib3rd}, and \eqref{Fib4th}, we proceed similarly: \begin{align*} \mu((10)^i0) = \mu((10)^i)\mu(0) = \begin{bmatrix} F_{2i + 1} & F_{2i}\\ F_{2i} & F_{2i - 1} \end{bmatrix} \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix} = \begin{bmatrix} F_{2i + 2} & F_{2i}\\ F_{2i + 1} & F_{2i - 1} \end{bmatrix};\\ \mu(1(10)^i) = \mu(1)\mu((10)^i) = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} \begin{bmatrix} F_{2i + 1} & F_{2i}\\ F_{2i} & F_{2i - 1} \end{bmatrix} = \begin{bmatrix} F_{2i + 2} & F_{2i + 1}\\ F_{2i} & F_{2i - 1} \end{bmatrix};\\ \mu(1(10)^i0) = \mu(1)\mu((10)^i)\mu(0) = \begin{bmatrix} F_{2i + 2} & F_{2i + 1}\\ F_{2i} & F_{2i - 1} \end{bmatrix} \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix} = \begin{bmatrix} F_{2i + 3} & F_{2i + 1}\\ F_{2i + 1} & F_{2i - 1} \end{bmatrix} . \end{align*} Multiplying these by $v$ and $w$ as in Lemma \ref{G_linearization} confirms the equalities \eqref{Fib1st}--\eqref{Fib4th}. \end{proof} \begin{lemma} \label{lemma1100} If $h \in R$, then $h$ cannot contain a substring of the form $1(10)^{i}0$ for $i>0$. \end{lemma} \begin{proof} To prove this we use Theorem \ref{fibonacci-vals} and the infix-comparator to show $t = (10)^{i+1} \INFIX y = 1(10)^{i}0$: \begin{equation*} \mu(t) = \begin{bmatrix} F_{2i + 3} & F_{2i + 2}\\ F_{2i + 2} & F_{2i + 1} \end{bmatrix} \geq \mu(y) = \begin{bmatrix} F_{2i + 3} & F_{2i + 1}\\ F_{2i + 1} & F_{2i - 1} \end{bmatrix} . \end{equation*} We conclude $t \INFIX y$ for $i \geq 1$. Because of this, a $00$ cannot appear to the right of a $11$, since if it did, it would contain a substring of the form $1(10)^i0$. \end{proof} \begin{lemma} \label{lemma110} If $h \in R$, then $h$ does not end in $1(10)^{i}$ for $i \geq 0$. \end{lemma} \begin{proof} Consider $t = (10)^i0$ and $y = 1(10)^{i}$. Then \begin{equation*} \mu(t) = \begin{bmatrix} F_{2i + 2} & F_{2i}\\ F_{2i + 1} & F_{2i - 1} \end{bmatrix} \quad \mu(y) = \begin{bmatrix} F_{2i + 2} & F_{2i + 1}\\ F_{2i} & F_{2i - 1} \end{bmatrix} . \end{equation*} and \begin{equation*} \mu(t)\WMAT = \begin{bmatrix} F_{2i + 2}\\ F_{2i + 1} \end{bmatrix} \geq \mu(y)\WMAT = \begin{bmatrix} F_{2i + 2}\\ F_{2i} \end{bmatrix} \end{equation*} Hence $t \SUFFIX y$, and $h$ cannot end in $y$. \end{proof} \begin{theorem} A record-setter $h \in R$ cannot contain the substring $11$. \end{theorem} \begin{proof} Suppose it does. Consider the rightmost $11$. Due to Lemma \ref{lemma1100}, there cannot be two consecutive 0s to its right. Therefore, the string must end in $1(10)^i$, which is impossible due to Lemma \ref{lemma110}. \end{proof} Therefore, we have shown that a record-setter $h$ is a concatenation of multiple strings of the form $1(0^i)$, for $i>0$. The next step establishes an upper bound on $i$ and shows that $i \leq 3$. \begin{theorem} \label{only10100} A record-setter $h \in R$ cannot contain the substring $10000$. \end{theorem} \begin{proof} First, we show $h$ cannot begin with $10000$: \begin{equation*} \VMAT \mu(10\ 10) = \begin{bmatrix} 5 & 3 \end{bmatrix} \geq \VMAT \mu(10000) = \begin{bmatrix} 5 & 1 \end{bmatrix} \Longrightarrow 10\ 10 \PREFIX 10000 . 
\end{equation*} Now consider the leftmost $10000$; it has to have a $10$, $100$, or $1000$ on its left: \begin{align*} \mu(1000\ 100) &= \begin{bmatrix} 14 & 5 \\ 11 & 4 \end{bmatrix} \geq \mu(10\ 10000) = \begin{bmatrix} 14 & 3 \\ 9 & 2 \end{bmatrix} &&\Longrightarrow 1000\ 100 \INFIX 10\ 10000; \\ \mu(1000\ 1000) &= \begin{bmatrix} 19 & 5 \\ 15 & 4 \end{bmatrix} \geq \mu(100\ 10000) = \begin{bmatrix} 19 & 4 \\ 14 & 3 \end{bmatrix} &&\Longrightarrow 1000\ 1000 \INFIX 100\ 10000; \\ \mu(100\ 100\ 10) &= \begin{bmatrix} 26 & 15 \\ 19 & 11 \end{bmatrix} \geq \mu(1000\ 10000) = \begin{bmatrix} 24 & 5 \\ 19 & 4 \end{bmatrix} &&\Longrightarrow 100\ 100\ 10 \INFIX 1000\ 10000 . \end{align*} Consequently, the substring $10000$ cannot appear in $h$. \end{proof} \section{Limits on the number of 1000s and 100s}\label{limit1001000} At this point, we have established that a record-setter's binary representation consists of a concatenation of 10s, 100s, and 1000s. The following theorem limits the appearance of 1000 to the beginning of a record-setter: \begin{theorem} \label{begin1000} A record-setter can only have 1000 at its beginning, except in the case $1001000$. \end{theorem} \begin{proof} It is simple to check this condition manually for strings of length $<12$. Now, consider a record-setter $h \in R$, with $|h| \geq 12$. String $h$ must at least have three 1s. To prove $h$ can only have 1000 at its beginning, we use our comparators to show neither \begin{itemize} \item[(a)] \textcolor{blue}{101000}, nor \item[(b)] \textcolor{blue}{1001000}, nor \item[(c)] \textcolor{blue}{10001000} \end{itemize} can appear in $h$. \begin{itemize} \item[(a)] Consider the following comparison: \begin{equation} \label{tenThousand} \mu(100\ 100) = \begin{bmatrix} 11 & 4 \\ 8 & 3 \end{bmatrix} \geq \mu(\textcolor{blue}{10\ 1000}) = \begin{bmatrix} 11 & 3 \\ 7 & 2 \end{bmatrix} \Longrightarrow 100\ 100 \INFIX\textcolor{blue}{10\ 1000}. \end{equation} We can infer that 101000 cannot appear in $h$. \item[(b)] In this case, for every $x < \textcolor{blue}{1001000}$, the equation $\mu(x) < \mu(\textcolor{blue}{1001000})$ holds, and we cannot find a replacement right away. Therefore, we divide this into two cases: \begin{itemize} \item[(b1)] In this case, we consider \textcolor{blue}{1001000} in the middle or at the end, thus it must have a 10, 100, or 1000 immediately on its left: \begin{align} \label{hundredThousand} \begin{alignedat}{3} \mu( 100\ 100\ 100 ) = \begin{bmatrix} 41 & 15 \\ 30 & 11 \end{bmatrix} &\geq \ &\mu( 10\ \textcolor{blue}{1001000} ) & = \begin{bmatrix} 41 & 11 \\ 26 & 7 \end{bmatrix},\\ \mu( 1000\ 10\ 10\ 10 ) = \begin{bmatrix} 60 & 37 \\ 47 & 29 \end{bmatrix} &\geq \ &\mu( 100\ \textcolor{blue}{1001000} ) & = \begin{bmatrix} 56 & 15 \\ 41 & 11 \end{bmatrix},\\ \mu( 10000\ 10\ 10\ 10 ) = \begin{bmatrix} 73 & 45 \\ 60 & 37 \end{bmatrix} &\geq \ &\mu( 1000\ \textcolor{blue}{1001000} ) & = \begin{bmatrix} 71 & 19 \\ 56 & 15 \end{bmatrix}. 
\end{alignedat} \end{align} \item[(b2)] The other case would be for \textcolor{blue}{1001000} to appear at the beginning: \begin{align} \label{thousandLeftHundred} \begin{alignedat}{3} \mu( 1000\ 110\ 10 ) = \begin{bmatrix} 35 & 22 \\ 27 & 17 \end{bmatrix} &\geq &\ \mu( \textcolor{blue}{1001000}\ 10 ) = \begin{bmatrix} 34 & 19 \\ 25 & 14 \end{bmatrix},\\ \mu( 1000\ 10\ 10\ 10 ) = \begin{bmatrix} 60 & 37 \\ 47 & 29 \end{bmatrix} &\geq &\ \mu( \textcolor{blue}{1001000}\ 100 ) = \begin{bmatrix} 53 & 19 \\ 39 & 14 \end{bmatrix},\\ \mu( 100\ 10\ 10\ 100 ) = \begin{bmatrix} 76 & 29 \\ 55 & 21 \end{bmatrix} &\geq &\ \mu( \textcolor{blue}{1001000}\ 1000 ) = \begin{bmatrix} 72 & 19 \\ 53 & 14 \end{bmatrix}. \end{alignedat} \end{align} \end{itemize} Therefore $h$ cannot contain \textcolor{blue}{1001000}. \item[(c)] Just like the previous case, there is no immediate replacement for \textcolor{blue}{10001000}. We divide this into two cases: \begin{itemize} \item[(c1)] There is a prefix replacement for \textcolor{blue}{10001000}: \begin{multline} v. \mu( 10\ 100\ 10 ) = \begin{bmatrix} 19 & 11 \end{bmatrix} \geq v.\mu( \textcolor{blue}{10001000} ) = \begin{bmatrix} 19 & 5 \end{bmatrix}\\ \Longrightarrow 10\ 100\ 10 \PREFIX \textcolor{blue}{10001000}. \end{multline} \item[(c2)] In case \textcolor{blue}{10001000} does not appear at the beginning, there must be a 10, 100, or a 1000 immediately on its left: \begin{align} \label{thousandThousand} \begin{alignedat}{3} \mu( 10\ 10\ 10\ 100 ) = \begin{bmatrix} 55 & 21 \\ 34 & 13 \end{bmatrix} &\geq\ &\mu( 10\ \textcolor{blue}{10001000} ) & = \begin{bmatrix} 53 & 14 \\ 34 & 9 \end{bmatrix},\\ \mu( 100\ 10\ 10\ 100 ) = \begin{bmatrix} 76 & 29 \\ 55 & 21 \end{bmatrix} &\geq\ &\mu( 100\ \textcolor{blue}{10001000} ) &= \begin{bmatrix} 72 & 19 \\ 53 & 14 \end{bmatrix},\\ \text{and }\mu( 1000\ 10\ 10\ 100 ) = \begin{bmatrix} 97 & 37 \\ 76 & 29 \end{bmatrix} &\geq\ &\mu( 1000\ \textcolor{blue}{10001000} ) &= \begin{bmatrix} 91 & 24 \\ 72 & 19 \end{bmatrix}. \end{alignedat} \end{align} \end{itemize} \end{itemize} \end{proof} Considering Theorem \ref{begin1000}, we can easily guess that 1000s do not often appear in record-setters. In fact, they only appear once for each length. We will prove this result later in Lemmas \ref{even1000} and \ref{odd1000}, but for now, let us consider that our strings only consist of 10s and 100s. The plan from here onward is to limit the number of 100s. The next set of theorems and lemmas concerns this limitation. To do this, we calculate the maximum $G$-values for strings with $0, 1, \ldots, 5$ 100s and compare them. Let $h$ be a string; we define the function $\delta(h)$ as the difference between the number of 0s and 1s occurring in $h$. For strings only containing 100s and 10s, the quantity $\delta(h)$ equals the number of 100s in $h$. The following theorem was previously proved in \cite{Lucas:1878}: \begin{theorem} \label{max-val-prime} The maximum $G$-value for strings of length $2n$ $(s(t)$ for $ 2^{2n-1} \leq t < 2^{2n})$ is $F_{2n + 1}$, and it first appears in the record-setter $(10)^n$. The maximum $G$-value for strings of length $2n + 1$ $(s(t)$ for $ 2^{2n} \leq t < 2^{2n + 1})$ is $F_{2n + 2}$, and it first appears in the record-setter $(10)^n0$. \end{theorem} The above theorem represents two sets of strings $(10)^+$ and $(10)^+0$, with $\delta$-values 0 and 1. \begin{lemma} \label{replace10} Consider a string $yz$, where $z$ begins with 1. If $|z| = 2n$ for $n \geq 1$, then $G(y (10)^{2n}) \geq G(yz)$. 
If $|z| = 2n + 1$, then $G(y (10)^{2n}0) \geq G(yz)$. \end{lemma} \begin{proof} Consider the matrix $\mu((10)^n)\WMAT = \begin{bmatrix} F_{2n + 1}\\ F_{2n} \end{bmatrix}$. The suffix matrix for $z$ is $\mu(z)\WMAT = \begin{bmatrix} G(z)\\ G(z') \end{bmatrix}$. Since $F_{2n + 1} \geq G(z)$, and $|z'| < |z|$ (since $z$ begins with 1), the value of $G(z')$ cannot exceed $F_{2n}$. Therefore $(10)^n \SUFFIX z$. For an odd length $2n + 1$, with the same approach, the matrix $\mu((10)^n0)\WMAT = \begin{bmatrix} F_{2n + 2}\\ F_{2n + 1} \end{bmatrix} \geq \mu(z)\WMAT = \begin{bmatrix} G(z)\\ G(z') \end{bmatrix}$, and $z$ can be replaced with $(10)^n0$. \end{proof} To continue our proofs, we need simple lemmas regarding the Fibonacci sequence: \begin{lemma} \label{oddFibZero} The sequence $F_1F_{2n}$, $F_3F_{2n - 2}$, \ldots, $F_{2n-1}F_2$ is strictly decreasing. \end{lemma} \begin{proof} Consider an element of the sequence $F_{2i+1}F_{2n - 2i}$. There are two cases to consider, depending on the relative magnitude of $n$ and $2i$. If $n \geq 2i + 1$, then \begin{align*} F_{2i + 1}F_{2n - 2i} &= F_{2i + 2}F_{2n - 2i} - F_{2i}F_{2n - 2i} = F^2_{n + 1} - F^2_{n - 2i - 1} - F^2_n + F^2_{n - 2i}\\ &= (F^2_{n+1} - F^2_{n}) + (F^2_{n - 2i} - F^2_{n - 2i - 1}). \end{align*} Notice that the first term, namely $(F_{n+1}^2 -F_n^2)$ is a constant, while the second term $F^2_{n - 2i} - F^2_{n - 2i - 1} = F_{n - 2i - 2}F_{n - 2i + 1}$ decreases with an increasing $i$. If $n \leq 2i$, then \begin{equation*} F_{2i + 1}F_{2n - 2i} = (F^2_{n+1} - F^2_{n}) + (F^2_{2i - n} - F^2_{2i + 1 - n}). \end{equation*} The non-constant term is $F^2_{2i - n} - F^2_{2i + 1 - n} = -F_{2i - n - 1}F_{2i + 2 - n}$, which is negative and still decreases. \end{proof} \begin{lemma} \label{evenMult} The sequence $F_0F_{2n}$, $F_2F_{2n - 2}$, \ldots, $F_nF_n$ is strictly increasing. \end{lemma} \begin{proof} For $0 \leq i \leq n/2$, We already know that $F_{2i}F_{2n - 2i} = F^2_n - F^2_{n - 2i}$. Since the sequence $F^2_n$, $F^2_{2n - 2}$, \ldots, $F^2_0$ decreases, the lemma holds. \end{proof} In the next theorem, we calculate the maximum $G$-value obtained by a string $x$ with $\delta(x) = 2$. \begin{lemma} [Strings with two 100s] \label{two100s} The maximum $G$-value for strings with two 100s occurs for $(10)^n0(10)^{n-1}0$ for lengths $l = 4n$, or for $(10)^{n}0(10)^{n}0$ for lengths $l = 4n + 2$, while $l \geq 6$. \end{lemma} \begin{proof} To simplify the statements, we write $\mu(10) = \mu(1)\mu(0)$ as $\mu_{10}$, and $\mu(0)$ as $I_2 + \gamma_0$, where $$I_2 = \IMAT, \text{ and } \gamma_0 = \ZMAT.$$ Consider the string $(10)^i0 (10)^j0(10)^k$, where $i,j \geq 1$ and $k \geq 0$: \begin{align*} G((10)^i0(10)^j0(10)^k) = v\mu^i_{10}\mu(0)\mu^j_{10}\mu(0)\mu^k_{10}w = v\mu^i_{10}(I + \gamma_0)\mu^j_{10}(I + \gamma_0)\mu^{k}_{10}w\\ = v\mu^{i + j + k}_{10}w + v\mu^i_{10}\gamma_0\mu^{j + k}_{10}w + v\mu^{i + j}_{10}\gamma_0\mu^k_{10}w + v\mu^i_{10}\gamma_0\mu^j_{10}\gamma_0\mu^k_{10}w. \end{align*} We now evaluate each summand in terms of Fibonacci numbers. 
\begin{align*} v\mu^{i + j + k}_{10}w &= v\begin{bmatrix} F_{2i + 2j + 2k + 1} & F_{2i + 2j + 2k}\\ F_{2i + 2j + 2k} & F_{2i + 2j + 2k - 1} \end{bmatrix}w = F_{2i + 2j + 2k + 1} \\ v\mu^i_{10}\gamma_0\mu^{j + k}_{10}w &= \begin{bmatrix} F_{2i + 1} & F_{2i} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2j + 2k + 1}\\ F_{2j + 2k} \end{bmatrix} = F_{2i}F_{2j + 2k + 1} \\ v\mu^{i + j}_{10}\gamma_0\mu^k_{10}w &= \begin{bmatrix} F_{2i + 2j + 1} & F_{2i + 2j} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2k + 1}\\ F_{2k} \end{bmatrix} = F_{2i+2j}F_{2k + 1}\\ v\mu^i_{10}\gamma_0\mu^j_{10}\gamma_0\mu^k_{10}w &= \begin{bmatrix} F_{2i + 1} & F_{2i} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2j + 1} & F_{2j}\\ F_{2j} & F_{2j - 1} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2k + 1}\\ F_{2k} \end{bmatrix} = F_{2i}F_{2j}F_{2k + 1} . \end{align*} For a fixed $i$, according to Lemma \ref{oddFibZero}, to maximize the sum above we must take $k = 0$ and replace $j$ by $j + k$. The expression can then be written as \begin{equation*} G((10)^i0(10)^j0) = v\mu^i_{10}I_2\mu^j_{10}\mu(0)w + v\mu^i_{10}\gamma_0\mu^j_{10}\mu(0)w = F_{2i + 2j + 2} + F_{2i}F_{2j + 2}. \end{equation*} In case $l = 4n = 2i + 2j + 2$, to maximize the above expression, according to Lemma \ref{evenMult}, we take $i = n$, $j = n-1$, and the $G$-value is $F_{4n} + F^2_{2n}$. In case $l = 4n + 2$, we take $i = j = n$, and the $G$-value is $F_{4n + 2} + F_{2n}F_{2n + 2} = F_{4n + 2} + F^2_{2n + 1} - 1$. Thus the lemma holds. Also, in general, for any even $l$, the maximum $G$-value is at most $F_{l} + F^2_{l/2}$. \end{proof} \begin{lemma} \label{minValSingle100} Let $x = (10)^i0(10)^{n - i}$ be a string of length $2n + 1$ for $n \geq 1$ and $i\geq 1$ containing a single 100. Then the minimum $G$-value for $x$ is $F_{2n + 1} + F_{2n - 1}$. \end{lemma} \begin{proof} We have \begin{align*} G(x) = G((10)^i0(10)^{n - i}) = v \cdot \mu^i_{10} \cdot (I_2 + \gamma_0) \cdot \mu^{n - i}_{10} \cdot w = F_{2n + 1} + F_{2i}F_{2n-2i+1} \\ \xRightarrow{{\rm Lem.}~\ref{oddFibZero}\ i = 1\ } F_{2n + 1} + F_{2n - 1}. \end{align*} \end{proof} \begin{theorem} \label{three100s} For two strings $x$ and $y$, if $\delta(x) = 3$ and $\delta(y) = 1$ then $G(x) < G(y)$. \end{theorem} \begin{proof} Consider the two strings of the same length below: \begin{center} \begin{tabular}{ll} $x = (10)^i100$ \fbox{$(10)^j0(10)^{k-1-j}0$} \\ $y= 100(10)^i$ \fbox{$(10)^{k}$} . \end{tabular} \end{center} We must prove that, for $i \geq 0$, $j \geq 1$, and $k - 1 - j \geq 1$, the inequality $G(x) \leq G(y)$ holds, where $y$ has the minimum $G$-value among the strings with a single 100 (see Lemma \ref{minValSingle100}). \begin{align*} G(x) &= G((10)^i100)G((10)^j0(10)^{k-1-j}0) + G((10)^i0)G(1(10)^{j-1}0(10)^{k-1-j}0)\\ &\leq F_{2i + 4} (F^2_k + F_{2k}) + F_{2i + 2}F_{2k} = F_{2i + 4} \left(\dfrac{2F_{2k + 1} - F_{2k} - 2}{5} + F_{2k} \right) + F_{2i + 2}F_{2k} .\\ G(y) &= G(100(10)^i)F_{2k + 1} + G(100(10)^{i-1}0)F_{2k} \\ &= (F_{2i+3} + F_{2i + 1})F_{2k + 1} + (F_{2i} + F_{2i + 2})F_{2k} \\ &= (F_{2i+4} - F_{2i})F_{2k + 1} + (F_{2i} + F_{2i + 2})F_{2k}.
\end{align*} We now show $G(y) - G(x) \geq 0$: \begin{multline} G(y) - G(x) \geq (F_{2i+4} - F_{2i})F_{2k + 1} + (F_{2i} + \cancel{F_{2i + 2}})F_{2k} - F_{2i + 4} \left(\dfrac{2F_{2k + 1} + 4F_{2k} - 2}{5} \right) - \cancel{F_{2i + 2}F_{2k}} \\ \begin{aligned} \xRightarrow{\times 5} 5F_{2i + 4}F_{2k + 1} - 5F_{2i}F_{2k - 1} - 2F_{2i + 4}F_{2k + 1} - 4F_{2i + 4}F_{2k} + 2F_{2i + 4} &\\= F_{2i + 4}(3F_{2k + 1} - 4F_{2k} + 2) - 5F_{2i}F_{2k - 1} &\\= F_{2i + 4}(F_{2k - 1} + F_{2k - 3} + 2) - 5F_{2i}F_{2k - 1} &\\= F_{2i + 4}(F_{2k - 3} + 2) + F_{2k - 1}(F_{2i+4} - 5F_{2i}) &\\= F_{2i + 4}(F_{2k - 3} + 2) + F_{2k - 1}(\cancel{5F_{2i}} + 3F_{2i-1} - \cancel{5F_{2i}}) &\geq 0. \end{aligned} \end{multline} \end{proof} Theorem \ref{three100s} can be generalized for all odd number occurrences of 100. To do this, replace the right side of the third 100 occurring in $x$ using Lemma~\ref{replace10}. \begin{lemma} \label{replaceWith10010} Let $i \geq 1$, and let $x$ be a string with $|x| = 2i + 3$ and $\delta(x) = 3$. Then $y = 100(10)^i \SUFFIX x$. \end{lemma} \begin{proof} We have already shown that $G(y) > G(x)$ (Theorem~\ref{three100s}). Also, the inequality $G(y') > G(x')$ holds since $y' = (10)^{i + 1}$, and $G(y')$ is the maximum possible $G$-value for strings of length $2i + 2$. \end{proof} \begin{theorem} \label{noFour100s} Let $n \geq 4$. If $|x| = 2n + 4$ and $\delta(x) = 4$, then $x \notin R$. \end{theorem} \begin{proof} Consider three cases where $x$ begins with a 10, a 10010, or a 100100. If $x$ begins with $10$, due to Lemma \ref{replaceWith10010}, we can replace the right side of the first 100, with $100(10)^*$, and get the string $y$. For example, if $x = $ 10 10 \textcolor{blue}{100} 10 100 100 100 becomes $y = $ 10 10 \textcolor{blue}{100} \textcolor{blue}{100} 10 10 10 10, which has a greater $G$-value. Then consider the strings $a = 10\ 100\ 100\ (10)^i$ and $b = 100\ 10\ (10)^i\ 100$: \begin{align*} \mu(a)\WMAT &= \begin{bmatrix} 30 & 11 \\ 19 & 7 \end{bmatrix} \begin{bmatrix} F_{2i + 1}\\ F_{2i} \end{bmatrix} = \begin{bmatrix} 30F_{2i + 1} + 11F_{2i}\\ 19F_{2i + 1} + 7F_{2i} \end{bmatrix} = \begin{bmatrix} 11F_{2i + 3} + 8F_{2i + 1}\\ 7F_{2i + 3} + 5F_{2i + 1} \end{bmatrix}\\ \mu(b)\WMAT &= \begin{bmatrix} 7 & 4 \\ 5 & 3 \end{bmatrix} \begin{bmatrix} F_{2i + 4}\\ F_{2i + 3} \end{bmatrix} = \begin{bmatrix} 7F_{2i + 4} + 4F_{2i + 3}\\ 5F_{2i + 4} + 3F_{2i + 3} \end{bmatrix}, \end{align*} so $b \SUFFIX a$ for $i \geq 1$. Therefore, by replacing suffix $a$ with $b$, we get a smaller string with a greater $G$-value. So $x \notin R$. Now consider the case where $x$ begins with 10010. Replace the 1st 100's right with $100(10)^{n - 1}$, so that we get $100\ 100\ (10)^{n-1}$. After these replacements, the $G$-value does not decrease, and we also get smaller strings. The only remaining case has $x$ with two 100s at the beginning. We compare $x$ with a string beginning with 1000, which is smaller. Let $x_2$ represent the string $x$'s suffix of length $2n - 2$, with two 100s. The upper bound on $G(x_2)$ and $G(10x_2)$ is achieved using Lemma \ref{two100s}: \begin{equation*} G(x) = G(1001\ 00 x_2) = G(1001)G(00x_2) + G(1000)G(10x_2) \leq 3(F_{2n-2} + F^2_{n - 1}) + 4(F_{2n} + F^2_n) . 
\end{equation*} After rewriting the equation to swap $F^2$s with first order $F$, multiply the equation by 5 to remove the $\dfrac{1}{5}$ factor: \begin{equation*} 3(2F_{2n -1} + 4F_{2n - 2} - 2)+ 4(2F_{2n + 1} + 4F_{2n} + 2) = 8F_{2n + 2} + 14F_{2n} + 6F_{2n - 2} + 2 \end{equation*} We now compare this value with $5G(1000\ (10)^n)$: \begin{align*} 5G(1000\ (10)^n) = 20F_{2n + 1} + 5F_{2n}\\ 20F_{2n + 1} + 5F_{2n} &\geq 8F_{2n + 2} + 14F_{2n} + 6F_{2n - 2} + 2 \\ \rightarrow 12F_{2n + 1} &\geq 17F_{2n} + 6F_{2n - 2} + 2\\ \rightarrow 12F_{2n - 1} &\geq 5F_{2n} + 6F_{2n - 2} + 2 \\ \rightarrow 7F_{2n - 1} &\geq 11F_{2n - 2} + 2 \\ \rightarrow 7F_{2n - 3} &\geq 4F_{2n - 2} + 2 \\ \rightarrow 3F_{2n - 3} &\geq 4F_{2n - 4} + 2 \\ \rightarrow 2F_{2n - 5} &\geq F_{2n - 6} + 2, \end{align*} which holds for $n \geq 4$. Therefore we cannot have four 100s in a record-setter. For six or more 100s, the same proof can be applied by replacing the fourth 100's right with 10s using Theorem~\ref{replace10}. \end{proof} \begin{theorem} \label{even1000} For even lengths $2n + 4$ with $n \geq 0$, only a single record-setter $h$ beginning with 1000 exists. String $h$ is also the first record-setter of length $2n + 4$. \end{theorem} \begin{proof} The only record-setter is $h = 1000\ (10)^n$. Let $x$ be a string with length $|x| = 2n$ containing 100 substrings ($n$ must be $\geq 3$ to be able to contain 100s). Using Lemma \ref{two100s}: \begin{equation*} 5G(1000\ x) \leq 4(5F^2_n + 5F_{2n}) + 5F_{2n} \leq 8F_{2n + 1} + 21F_{2n} + 8 \leq 5F_{2n + 4}. \end{equation*} The above equation holds for $n \geq 5$. For $n = 4$: \begin{equation*} 5G(1000\ x) \leq 4(F^2_4 + F_{8}) + F_{8} = 141 \leq F_{12} = 144. \end{equation*} For $n = 3$: \begin{equation*} G(1000\ 100\ 100) = 52 \leq G(101010100) = 55. \end{equation*} Ergo, the $G$-value cannot exceed $F_{2n + 4}$, which the smaller string $(10)^{n + 1}0$ already has. Let us calculate $G(h)$: \begin{align*} G(1000\ (10)^{n}) = 4F_{2n + 1} + F_{2n} = F_{2n + 2} + 3F_{2n + 1}\\ = F_{2n + 3} + 2F_{2n + 1} > F_{2n + 3} + F_{2n + 2} = F_{2n + 4} . \end{align*} Hence, the string $1000\ (10)^{n}$ is the first record-setter of length $2n + 4$ with a $G$-value greater than $F_{2n + 4}$, which is the maximum (Theorem~\ref{max-val-prime}) generated by the strings of length $2n + 3$. This makes $h$ the first record-setter of length $2n + 4$. \end{proof} \begin{theorem} Let $x$ be a string with length $|x| = 2n + 9$, for $n \geq 3$, and $\delta(x) \geq 5$. Then $x \notin R$. \end{theorem} \begin{proof} Our proof provides smaller strings with greater $G$-values only based on the position of the first five 100s. So for cases where $\delta(x) \geq 7$, replace the right side of the fifth 100 with 10s (Lemma~\ref{replace10}). Therefore consider $\delta(x)$ as 5, and $x = (10)^i0\ (10)^j0\ (10)^k0\ (10)^p0\ (10)^q0\ (10)^r$, with $i,j,k,p,q, \geq 1$ and $r \geq 0$. First, we prove that if $i = 1, j = 1, k = 1$ does not hold, then $x \notin R$. \begin{itemize} \item[(a)] If $i>1$, then smaller string $100(10)^{n + 3}$ has a greater $G$-value as proved in Lemma \ref{three100s}. \item[(b)] If $j > 1$, using the approach as in Theorem~\ref{noFour100s}, we can obtain a smaller string with a greater $G$-value. \item[(c)] If $k > 1$, using Lemma~\ref{replaceWith10010}, by replacing $(10)^k0\ (10)^p0\ (10)^q0\ (10)^r$ with $100\ (10)^{n + 1 - j}$, we obtain $y$ with $G(y) > G(x)$. \end{itemize} Now consider the case where $i = 1$, $j = 1$, $k = 1$. 
Let $x_2$, with $|x_2| = 2n$, be a string with two 100s: \begin{align*} &G(100100100\ x_2) = 41(F^2_n + F_{2n}) + 15F_{2n} \leq 16.4F_{2n + 1} + 47.8F_{2n} + 16.4\\ &G(100010101\ 0(10)^{n-1}0) = 23F_{2n} + 37F_{2n + 1}\\ &23F_{2n} + 37F_{2n + 1} - 16.4F_{2n + 1} - 47.8F_{2n} - 16.4 \geq 20F_{2n + 1} -25F_{2n} - 17 \geq 0 \end{align*} The above equation holds for $n \geq 2$. \end{proof} \begin{theorem} \label{odd1000} For odd lengths $2n + 5$ with $n \geq 1$, only a single record-setter $h$ beginning with 1000 exists. String $h$ is also the first record-setter of length $2n+5$. \end{theorem} \begin{proof} The first record-setter is $h = 1000\ (10)^n0$. Consider another string $1000x$. If $x$ has three or more occurrences of 100, then Lemma \ref{three100s} showed that $1000\ 100\ (10)^{n - 1}$ has a greater $G$-value. Therefore it is enough to consider strings $x$s with a single 100. Suppose $1000x = 1000\ (10)^{n-i}0(10)^i$, with $i \geq 1$: \begin{equation*} G(1000\ (10)^{n-i}0(10)^i) = 4G((10)^{n-i}0(10)^i) + G(1(10)^{n - i - 1}0(10)^i) . \end{equation*} We now evaluate $G((10)^{n-i}0(10)^i)$: \begin{align*} v\mu^{n}_{10}w &= v\begin{bmatrix} F_{2n + 1} & F_{2n}\\ F_{2n} & F_{2n - 1} \end{bmatrix}w = F_{2n + 1} \\ v\mu^{n - i}_{10}\gamma_0\mu^{i}_{10}w &= \begin{bmatrix} F_{2n - 2i + 1} & F_{2n - 2i} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2i + 1}\\ F_{2i} \end{bmatrix} = F_{2n - 2i}F_{2i + 1}\\ \Longrightarrow\ G((10)^{n-i}0(10)^i) &= v\mu^{n - i}_{10}(I_2 + \gamma_0)\mu^{i}_{10}w = F_{2n + 1} + F_{2n - 2i}F_{2i + 1}. \end{align*} Next, we evaluate $G(1(10)^{n - i - 1}0(10)^i)$: \begin{align*} v\mu(1)\mu^{n - 1}_{10}w &= v\begin{bmatrix} F_{2n} & F_{2n - 1}\\ F_{2n - 2} & F_{2n - 3} \end{bmatrix}w = F_{2n} \\ v\mu(1)\mu^{n - i - 1}_{10}\gamma_0\mu^{i}_{10}w &= \begin{bmatrix} F_{2n - 2i} & F_{2n - 2i - 1} \end{bmatrix} \ZMAT \begin{bmatrix} F_{2i + 1}\\ F_{2i} \end{bmatrix} = F_{2n - 2i - 1}F_{2i + 1}\\ \Longrightarrow\ G(1(10)^{n - i - 1}0(10)^i) &= v\mu(1)\mu^{n - i - 1}_{10}(I_2 + \gamma_0)\mu^{i}_{10}w = F_{2n} + F_{2n - 2i - 1}F_{2i + 1}. \end{align*} We can now determine $G(1000\ (10)^{n-i}0(10)^i)$: \begin{align*} G(1000\ (10)^{n-i}0(10)^i) = 4F_{2n + 1} + 4F_{2n - 2i}F_{2i + 1} + F_{2n} + F_{2n - 2i - 1}F_{2i + 1}\\ = 4F_{2n + 1} + F_{2n} + F_{2i + 1}(4F_{2n - 2i} + F_{2n - 2i - 1})\\ = 4F_{2n + 1} + F_{2n}+ F_{2i + 1}(2F_{2n - 2i} + F_{2n - 2i + 2}). \end{align*} To maximize this, we need to make $i$ as small as possible: \begin{equation*} 4F_{2n + 1} + F_{2n}+ F_{3}(2F_{2n - 2} + F_{2n}) = 4F_{2n + 1} + 3F_{2n} + 4F_{2n - 2} < F_{2n + 5}, \end{equation*} which is less than $G((10)^{n + 2}) = F_{2n + 5}$. For $h$ we have \begin{align*} G(1000\ (10)^{n}0) = 4G((10)^{n}0) + G(1(10)^{n - 1}0) = 4F_{2n + 2} + F_{2n + 1} \\= F_{2n + 3} + 3F_{2n + 2} = F_{2n + 4} + 2F_{2n + 2} > F_{2n + 4} + F_{2n + 3} = F_{2n + 5}. \end{align*} Therefore, the string $1000\ (10)^n0$ is the only record-setter beginning with 1000. Also, since string $h$ begins with 1000 instead of 100 or 10, it is the first record-setter of length $2n + 5$. \end{proof} \section{Record-setters of even length} \label{final_even} At this point, we have excluded all the possibilities of a record-setter $x$ with $\delta(x) > 2$, because if the string $x$ contains 1000, by Theorem~\ref{even1000}, the only record-setter has a $\delta$-value of 2. Also, $x$ cannot have more than two 100s according to Theorem~\ref{noFour100s}. We determine the set of records-setters with $\delta(x) = 2$. 
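Before carrying out this final case analysis, we note that the matrix formula of Lemma~\ref{G_linearization} makes it easy to check the classification on small cases by brute force. The following short Python sketch is included only as an illustration and is used nowhere in the arguments; it computes $G(x) = v\,\mu(x)\,w$ by multiplying copies of $\mu(1)$ and $\mu(0)$, lists the binary representations of the $k$-bit elements of $R$, and its output can be compared with Theorem~\ref{eventhm} below (even lengths) and Theorem~\ref{oddthm} (odd lengths).
\begin{verbatim}
# Brute-force sketch (illustration only): G(x) = v mu(x) w, with
# mu(1) = [[1,1],[0,1]], mu(0) = [[1,0],[1,1]]; for binary x, G(x) = s([x]_2).
def G(x):
    a, b, c, d = 1, 0, 0, 1          # mu(empty string) = identity
    for bit in x:
        if bit == '1':               # right-multiply by mu(1)
            a, b, c, d = a, a + b, c, c + d
        else:                        # right-multiply by mu(0)
            a, b, c, d = a + b, b, c + d, d
    return a                         # top-left entry of mu(x)

def record_setters_with_k_bits(k):
    # elements of R whose binary representation has k bits
    best = max(G(format(m, 'b')) for m in range(1, 2 ** (k - 1)))
    out = []
    for m in range(2 ** (k - 1), 2 ** k):
        g = G(format(m, 'b'))
        if g > best:
            out.append(format(m, 'b'))
            best = g
    return out

print(record_setters_with_k_bits(12))   # compare with the even-length case
print(record_setters_with_k_bits(13))   # compare with the odd-length case
\end{verbatim}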
If $\delta(x) = 0$, then $x$ has the maximum $G$-value as shown in Theorem \ref{max-val-prime}. \begin{theorem} \label{evenBeginEnd} Let $x = (10)^i0 (10)^j0 (10)^k$ be a string with two 100s, that is $i,j \geq 1$ and $k \geq 0$. If $i > 1$ and $k > 0$, then $x\notin R$. \end{theorem} \begin{proof} For a fixed $i$, to maximize the $G$-value, we must minimize $k$ and add its value to $j$. \begin{align*} G((10)^i0 (10)^j0 (10)) = F_{2i + 2j + 3} + F_{2i + 2j}F_{3} + F_{2i}F_{2j + 3} + F_{2i}F_{2j}F_{3}\\ = F_{2i + 2j + 3} + 2F_{2i + 2j} + F_{2i}F_{2j + 3} + 2F_{2i}F_{2j}. \end{align*} We now compare this value with a smaller string: $$ G((10)^{i-1}0 (10)^{j+2}0) = F_{2i + 2j + 3} + F_{2i + 2j + 2} + F_{2i - 2}F_{2j + 5} + F_{2i - 2}F_{2j + 4}$$ and \begin{align}\label{equationIJK} \begin{split} &G((10)^{i-1}0 (10)^{j+2}0) - G((10)^i0 (10)^j0 (10)) \\ &= \cancel{F_{2i + 2j + 3}} + F_{2i + 2j + 2} + F_{2i - 2}F_{2j + 6} - \cancel{F_{2i + 2j + 3}} - 2F_{2i + 2j} - F_{2i}(F_{2j + 3} + 2F_{2j}) \\ &=F_{2i + 2j - 1} + F_{2i - 2}F_{2j + 6} - F_{2i}(F_{2j + 3} + 2F_{2j}) \\ &=F_{2i + 2j - 1} + F_{2i - 2}(F_{2j + 6} - F_{2j + 3} - 2F_{2j}) - F_{2i - 1}(F_{2j + 3} + 2F_{2j}) \\ &=F_{2i + 2j - 1} + 2F_{2i - 2}(F_{2j + 3} + F_{2j + 1}) - F_{2i - 1}(F_{2j + 3} + 2F_{2j}) \\ &=F_{2i + 2j - 1} + F_{2j + 3}F_{2i - 4} + 2F_{2i - 2}F_{2j + 1} - 2F_{2i - 1}F_{2j} \geq 0. \end{split} \end{align} Since $i \geq 2$, Equation \eqref{equationIJK} holds, resulting in finding a smaller string with a greater $G$-value. \end{proof} \begin{theorem} The record-setters of even length $2n + 2$, for $n \geq 5$, are as follows: $$\begin{cases} 1000\ (10)^{n - 1},\\ 100\ (10)^{i+1}0\ (10)^{n - i - 2}, &\text{ for } 0 \leq i \leq n - 2, \\ (10)^i0\ (10)^{n - i}0, & \text{ for } 1 < i \leq \lceil\frac{n}{2}\rceil ,\\ (10)^{n + 1}. \end{cases}$$ \label{eventhm} \end{theorem} \begin{proof} According to Theorem \ref{even1000}, the set of record-setters for length $2n + 2$ starts with $1000 (10)^{n - 1}$. Next, we consider the strings beginning with 100 and using the same solution as in Theorem \ref{odd1000}, the $G$-value increases as the 2nd 100 moves to the end. Theorem \ref{evenBeginEnd} proved if a record-setter does not begin with a 100, it must end in one. In Lemmas \ref{two100s} and \ref{evenMult} we showed $G((10)^i0 (10)^{n - i}0) = F_{2n + 2} + F_{2i}F_{2n - 2i + 2}$ and it would increase until $i \leq \lceil\frac{n}{2}\rceil$. In Theorem \ref{max-val-prime}, it was shown that the smallest string with the maximum $G$-value for strings of length $2n+2$ is $(10)^{n + 1}$ and $G((10)^{n + 1}) = F_{2n + 3}$. \end{proof} \section{Record-setters of odd length} \label{final_odd} For strings $x$ of odd length, we have established that if $\delta(x) > 3$, then $x \notin R$. We now look for the final set of record-setters, among the remaining possibilities, with either $\delta(x) = 3$ or $\delta(x) = 1$. We first consider the cases with $\delta$-value three. \begin{theorem}\label{breakTwoEven} Let $x = (10)^c0 (10)^i0 (10)^j0 (10)^k$ be a string with three 100s and $|x|= 2n + 3$. If $c > 0$, or $i > 1$ and $k > 0$, then $x \notin R$. \end{theorem} \begin{proof} As stated in Theorem \ref{three100s}, the $G$-value of $100(10)^{n}$ is greater than $x$. Therefore, if $c > 0$, then $x \notin R$. 
For $c = 1$, we can write $G(x)$ as follows: \begin{equation} \label{breakingOdd} G(1\ 00 (10)^i0 (10)^j0 (10)^k ) = G((10)^i0 (10)^j0 (10)^k) + G((10)^{i+1}0 (10)^j0 (10)^k), \end{equation} which is the sum of $G$-values of two strings with even length and two 100s. Now, in case $i>1$ and $k>0$, according to Theorem \ref{evenMult}, by decreasing $i$ by one and making $k = 0$, we obtain a greater $G$-value. Hence, the theorem holds. \end{proof} \begin{theorem} \label{begin100100} $100100\ (10)^n0$ and $100100\ (10)^{n-1}0\ 10$ are the only record-setters beginning with $100100$. \end{theorem} \begin{proof} The first record-setter begins with a 1000: \begin{equation*} G(1000\ 10\ (10)^n0) = 9F_{2n + 2} + 5F_{2n + 1}. \end{equation*} Now define $x_i = 100\ 100\ (10)^{i}0(10)^{n-i}$ for $1 \leq i \leq n$. We get \begin{align*} G(x_i) &= G(100\ 100\ (10)^{i}0(10)^{n-i})\\ &= 11(F_{2n + 1} + F_{2i}F_{2n - 2i + 1}) + 4(F_{2n} + F_{2i - 1}F_{2n - 2i + 1}) \\ &= 11F_{2n + 1} + 4F_{2n} + F_{2n - 2i + 1}(11F_{2i} + 4F_{2i - 1})\\ &= 11F_{2n + 1} + 4F_{2n} + F_{2n - 2i + 1}(4F_{2i + 2} + 3F_{2i}). \end{align*} As $i$ increases, the value of $G(x_i)$ also increases. Suppose $i = n - 2$. Then \begin{align*} G(x_{n - 2}) = G(100\ 100\ (10)^{n-2}0(10)^{2}) = 11F_{2n + 1} + 4F_{2n} + 5(4F_{2n - 2} + 3F_{2n - 4}). \end{align*} This value is smaller than $9F_{2n + 2} + 5F_{2n + 1}$. Therefore, if $i < n -1$, then $x \notin R$. If $i = n - 1$, then for the string $x_{n - 1}$ we have \begin{align*} G(x_{n - 1}) &= G(100\ 100\ (10)^{n-1}0(10)) = 11F_{2n + 1} + 4F_{2n} + 2(4F_{2n} + 3F_{2n - 2})\\ &= 11F_{2n + 1} + 12F_{2n} + 6F_{2n - 2} > 9F_{2n + 2} + 5F_{2n + 1}. \end{align*} Also, for $i = n$, we know $x_n > x_{n - 1}$. Therefore, the first two record-setters after $1000\ (10)^{n + 1}0$ are $100100\ (10)^{n - 1} 0 10$ followed by $100100\ (10)^n 0$. \end{proof} Putting this all together, we have the following result. \begin{theorem} The record-setters of odd length $2n + 3$, for $n \geq 5$, are: $$\begin{cases} 1000\ (10)^{n - 1}0,\\ 100\ 100\ (10)^{n - 3}0\ 10,\\ 100\ 100\ (10)^{n - 2}0,\\ 100\ (10)^{i}0\ (10)^{n - i - 1}0, &\text{ for } 1 < i \leq \lceil\frac{n-1}{2}\rceil, \\ (10)^{i+1}0 (10)^{n-i}, & \text{ for } 0 \leq i \leq n. \end{cases}$$ \label{oddthm} \end{theorem} \begin{proof} The first three strings were already proven in Theorems~\ref{begin100100} and~\ref{odd1000}. We showed in Eq.~\eqref{breakingOdd} how to break the strings beginning with a 100 into two strings of even lengths. Thus, using Lemmas \ref{two100s} and \ref{evenMult}, for the strings of the form $$ 100 (10)^{i}0 (10)^{n - i - 1}0 $$ for $1 < i \leq \lceil\frac{n-1}{2}\rceil$, the $G$-value increases with increasing $i$. Moreover, Theorem~\ref{three100s} shows that the minimum $G$-value for a string having a single 100 is greater than the maximum with three 100s. So after the strings with three 100s come those with a single 100. Also, due to the Lemma \ref{oddFibZero} while using the same calculations as in Lemma \ref{minValSingle100}, as $i$ increases, we get greater $G$-values until we reach the maximum $F_{2n + 3} = G((10)^{n + 1})$. \end{proof} We can now prove Theorem~\ref{mainTheorem}. \begin{proof}[Proof of Theorem~\ref{mainTheorem}] By combining the results of Theorems~\ref{eventhm} and \ref{oddthm}, and noting that the indices for the sequence $s$ differ by $1$ from the sequence $a$, the result now follows. \end{proof} We can obtain two useful corollaries of the main result. 
The first gives an explicit description of the record-setters, and their $a$-values. \begin{corollary} The record-setters lying in the interval $[2^{k-1}, 2^k)$ for $k \geq 12$ and even are, in increasing order, \begin{itemize} \item $2^{2n-1} + {{2^{2n-2} - 2^{2n-2a-3}+1} \over {3}}$ for $0 \leq a \leq n-3$; \item ${{2^{2n+1} - 2^{2n-2b} - 1} \over {3}}$ for $1 \leq b \leq \lfloor n/2 \rfloor$; and \item $(2^{2n+1} + 1)/3$, \end{itemize} where $k = 2n$. The Stern values of these are, respectively, \begin{itemize} \item $L_{2a+3}F_{2n-2a-3} + L_{2a+1} F_{2n-2a-4}$ \item $F_{2b+2} F_{2n-2b} + F_{2b} F_{2n-2b-1}$ \item $F_{2n+1}$ \end{itemize} where $L_0 = 2$, $L_1 = 1$, and $L_n = L_{n-1} + L_{n-2}$ for $n \geq 2$ are the Lucas numbers. The record-setters for $k\geq 12$ and odd are, in increasing order, \begin{itemize} \item $2^{2n} + {{2^{2n-2} - 1} \over 3} $ \item $2^{2n} + 2^{2n-3} + {{2^{2n-4} -7} \over 3} $ \item $2^{2n} + {{2^{2n-1} - 2^{2n-2b-2} -1} \over 3}$ for $1 \leq b \leq \lceil n/2 \rceil - 1$ \item ${{2^{2n+2} - 2^{2n-2a-1} +1} \over 3}$ for $0 \leq a \leq n-2$; \item $(2^{2n+2} - 1)/3$, \end{itemize} where $k = 2n+1$. The corresponding Stern values are \begin{itemize} \item $F_{2n+1} + F_{2n-4}$ \item $F_{2n+1} + 8 F_{2n-8}$ \item $L_{2b+3} F_{2n-2b-2} + L_{2b+1} F_{2n-2b-3}$ \item $F_{2a+4} F_{2n-2a-1} + F_{2a+2} F_{2n-2a-2}$ \item $F_{2n+2}$ \end{itemize} \end{corollary} \begin{proof} We obtain the record-setters from Theorem~\ref{mainTheorem} and the identities $[(10)^i]_2 = (2^{2i+1}-2)/3$ and $[(10)^i 1]_2 = (2^{2i+2}-1)/3$. We obtain their Stern values from Eq.~\eqref{mat10}. \end{proof} \begin{corollary} The binary representations of the record-setters for the Stern sequence form a context-free language. \end{corollary} \section{Acknowledgments} We thank Colin Defant for conversations about the problem in 2018, and for his suggestions about the manuscript. \bibliographystyle{new2} \bibliography{abbrevs,stern} \end{document}
2205.06199v1
http://arxiv.org/abs/2205.06199v1
Bipartite intrinsically knotted graphs with 23 edges
\documentclass[11pt,a4paper]{amsart} \usepackage{graphicx,multirow,array,amsmath,amssymb,color,adjustbox,kotex} \newcommand{\veq}{\mathrel{\rotatebox{90}{$=$}}} \def\stackbelow#1#2{\underset{\displaystyle\overset{\displaystyle\veq}{#2}}{#1}} \newcommand{\adj}{\! \thicksim \!} \newcommand{\nadj}{\! \nsim \!} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \title{Bipartite intrinsically knotted graphs with 23 edges} \author[H. Kim]{Hyoungjun Kim} \address{College of General Education, Kookmin University, Seoul 02707, Korea} \email{[email protected]} \author[T. Mattman]{Thomas Mattman} \address{Department of Mathematics and Statistics, California State University, Chico, Chico CA 95929-0525, USA} \email{[email protected]} \author[S. Oh]{Seungsang Oh} \address{Department of Mathematics, Korea University, Seoul 02841, Korea} \email{[email protected]} \thanks{The first author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government Ministry of Science and ICT(NRF-2018R1C1B6006692).} \thanks{The third author was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIP) (No. NRF-2017R1A2B2007216).} \begin{document} \maketitle \begin{abstract} A graph is intrinsically knotted if every embedding contains a nontrivially knotted cycle. It is known that intrinsically knotted graphs have at least 21 edges and that there are exactly 14 intrinsically knotted graphs with 21 edges, in which the Heawood graph is the only bipartite graph. The authors showed that there are exactly two graphs with at most 22 edges that are minor minimal bipartite intrinsically knotted: the Heawood graph and Cousin 110 of the $E_9+e$ family. In this paper we show that there are exactly six bipartite intrinsically knotted graphs with 23 edges so that every vertex has degree 3 or more. Four among them contain the Heawood graph and the other two contain Cousin 110 of the $E_9+e$ family. Consequently, there is no minor minimal intrinsically knotted graph with 23 edges that is bipartite. \end{abstract} \section{Introduction}\label{sec:int} A graph is {\it intrinsically knotted} if every embedding of the graph in $\mathbb{R}^3$ contains a non-trivially knotted cycle. We say graph $H$ is a {\it minor} of graph $G$ if $H$ can be obtained from a subgraph of $G$ by contracting edges. A graph $G$ is {\it minor minimal intrinsically knotted} if $G$ is intrinsically knotted but no proper minor is. Robertson and Seymour's~\cite{RS} Graph Minor Theorem implies that there are only finitely many minor minimal intrinsically knotted graphs. While finding the complete list of minor minimal intrinsically knotted graphs remains an open problem, there has been recent progress in understanding the condition for small size. \begin{figure}[h!] \includegraphics[scale=1]{fig1.eps} \caption{$\nabla Y$ and $Y \nabla$ moves} \label{fig:delta} \end{figure} The known examples mainly belong to $\nabla Y$ families. A {\it $\nabla Y$ move} is an exchange operation on a graph that removes all edges of a 3-cycle $abc$ and then adds a new vertex $v$ that is connected to each vertex of the 3-cycle, as shown in Figure~\ref{fig:delta}. The reverse operation is a {\it $Y \nabla$ move}. We say two graphs $G$ and $G'$ are {\it cousins} if $G'$ is obtained from $G$ by a finite sequence of $\nabla Y$ and $Y \nabla$ moves. 
The set of all cousins of $G$ is called the {\it $G$ family}. \begin{figure}[h!] \includegraphics[scale=1]{fig2.eps} \caption{$E_9+e$} \label{fig:e9} \end{figure} Johnson, Kidwell and Michael~\cite{JKM}, and, independently, Mattman~\cite{M}, showed that intrinsically knotted graphs have at least 21 edges. Lee, Kim, Lee and Oh~\cite{LKLO}, and, independently, Barsotti and Mattman~\cite{BM} showed that the complete set of minor minimal intrinsically knotted graphs with 21 edges consists of fourteen graphs: $K_7$ and the 13 graphs obtained from $K_7$ by $\nabla Y$ moves. There are 92 known examples of size 22: 58 in the $K_{3,3,1,1}$ family, 33 in the $E_9+e$ family (Figure~\ref{fig:e9}) and a $4$--regular example due to Schwartz (see \cite{FMMNN}). We are in the process of determining whether or not this is a complete list \cite{KLLMO, KMO2, KMO3}. In the current article, we continue a study of intrinsic knotting of bipartite graphs of small size. A {\em bipartite\/} graph is a graph whose vertices can be divided into two disjoint sets $A$ and $B$ such that every edge connects a vertex in $A$ to one in $B$. In an earlier paper~\cite{KMO}, we proved that the Heawood graph (of size 21) is the only bipartite graph among the minor minimal intrinsically knotted graphs of size 22 or less. We also showed that Cousin 110 of the $E_9+e$ family (of size 22) is the only other graph of 22 or fewer edges that is bipartite and intrinsically knotted and has no proper minor with both properties. We can think of Cousin 110 as being constructed from $K_{5,5}$ through deletion of the edges in a $3$-path. In the current paper, we extend the classification to graphs of size 23. \begin{theorem} \label{thm:main} There are exactly six bipartite intrinsically knotted graphs with 23 edges so that every vertex has degree 3 or more. Two of these are obtained from Cousin 110 of the $E_9+e$ family by adding an edge, the other four from the Heawood graph by adding 2 edges. \end{theorem} The two graphs obtained from Cousin 110 of the $E_9+e$ family are described in Subsection~\ref{subsec:110}. Three of the graphs obtained from the Heawood graph are found in Subsection~\ref{subsec:89} and the last in Subsection~\ref{subsec:106025}. Since a minor minimal intrinsically knotted graph must have minimum degree at least three we have the following. \begin{corollary} There is no minor minimal intrinsically knotted graph with 23 edges that is bipartite. \end{corollary} The remainder of this paper is a proof of Theorem~\ref{thm:main}. In the next section we introduce some terminology. Section~\ref{sec:restoring} reviews the restoring method and introduces the twin restoring method. Section~\ref{sec:g6} treats the case of a vertex of degree 6 or more, Section~\ref{sec:ab5}, the case where both $A$ and $B$ have degree 5 vertices, and Section~\ref{sec:a5}, the case where only $A$ has degree 5 vertices. Finally, we conclude the argument with Section~\ref{sec:g4}, which deals with the remaining cases. \section{Terminology and strategy} \label{sec:term} We use notation and terminology similar to that of our previous paper~\cite{KMO}. Let $G = (A,B,E)$ denote a bipartite graph with 23 edges whose partition has the parts $A$ and $B$ with $E$ denoting the edges of the graph. For distinct vertices $a$ and $b$, let $G \setminus \{ a,b \}$ denote the graph obtained from $G$ by deleting the two vertices $a$ and $b$. Deleting a vertex means removing the vertex, interiors of all edges adjacent to the vertex and remaining isolated vertices. 
Let $G_{a,b}$ denote the graph obtained from $G \setminus \{ a,b \}$ by deleting all degree 1 vertices, and $\widehat{G}_{a,b}=(\widehat{V}_{a,b}, \widehat{E}_{a,b})$ denote the graph obtained from $G_{a,b}$ by contracting edges adjacent to degree 2 vertices, one by one repeatedly, until no degree 2 vertex remains. The degree of $a$, denoted by $\deg(a)$, is the number of edges adjacent to $a$. We say that $a$ is adjacent to $b$, denoted by $a \adj b$, if there is an edge connecting them. If they are not adjacent, we write $a \nadj b$. If $a$ is adjacent to vertices $b, \dots, b'$, then we write $a \adj \{b, \dots, b'\}$. If each of $a, \dots, a'$ is adjacent to all of $b, \dots, b'$, then we similarly write $\{a, \dots, a'\} \adj \{b, \dots, b'\}$. Note that $\sum_{a \in A} \deg(a) = \sum_{b \in B} \deg(b) = 23$ by the definition of bipartition. We need some notation to count the number of edges of $\widehat{G}_{a,b}$. \begin{itemize} \item $V_n(a)=\{c \in V\ |\ a \adj c,\ \deg(c)=n \}$ \item $V_n(a,b)=V_n(a) \cap V_n(b)$ \item $E(a)=\{ e \in E\ |\ e \,\, {\rm is \,\, adjacent \,\, to}\,\, a \}$ \item $G \setminus \{a,b\}$ has $NE(a,b)=|E(a)\cup E(b)|$ fewer edges than $G$ \end{itemize} Furthermore $G \setminus \{a,b\}$ has degree 1 or 2 vertices from $V_3(a) \cup V_3(b)$ and degree 2 vertices from $V_4(a,b)$ as shown in Figure~\ref{fig21}. To derive $\widehat{G}_{a,b}$, we delete and contract the edges related to these vertices. The total number of these edges is the sum of the following two values: \begin{itemize} \item $NV_3(a,b)=|V_3(a) \cup V_3(b)| = |V_3(a)|+|V_3(b)|-|V_3(a,b)|$ \item $NV_4(a,b)=|V_4(a,b)|$ \end{itemize} \begin{figure}[h] \includegraphics{fig3.eps} \caption{Deriving $\widehat{G}_{a,b}$} \label{fig21} \end{figure} To count $|\widehat{E}_{a,b}|$ more precisely, we need to consider the following set. \begin{itemize} \item $V_Y(a,b)$ is the set of removed vertices to derive $\widehat{G}_{a,b}$, that are adjacent to neither $a$ nor $b$; let $NV_Y(a,b)=|V_Y(a,b)|$. \end{itemize} This vertex set has three types as illustrated in Figure~\ref{fig22}. In the figure, a vertex $c$ of $V_Y(a,b)$ has degree 1 or 2, and so must be removed in $\widehat{G}_{a,b}$. Especially, in the rightmost figure, the two vertices $c$ and $c'$ of $V_Y(a,b)$ are removed. \begin{figure}[h] \includegraphics{fig4.eps} \caption{Three types of vertex in $V_Y(a,b)$} \label{fig22} \end{figure} Combining these ideas, we have the following equation for the number of edges of $\widehat{G}_{a,b}$, which is called the {\em count equation\/}: $$ |\widehat{E}_{a,b}| = 23 - NE(a,b) - NV_3(a,b) - NV_4(a,b) - NV_Y(a,b). $$ A graph is called 2-{\em apex\/} if it can be made planar by deleting two or fewer vertices. It is known that if $G$ is 2-apex, then it is not intrinsically knotted~\cite{BBFFHL, OT}. So we check whether or not $\widehat{G}_{a,b}$ is planar. The unique non-planar graph with nine edges is $K_{3,3}$. For non-planar graphs with 10 edges, we consider which graphs could be isomorphic to $\widehat{G}_{a,b}$. Note that $\widehat{G}_{a,b}$ consists of vertices with degree larger than 2. Furthermore, $\widehat{G}_{a,b}$ may have multiple edges. There are exactly three non-planar graphs on 10 edges that satisfy these conditions, shown in Figure~\ref{fig:nonpla}. More precisely, two of the graphs are obtained from $K_{3,3}$ by adding an edge $e_1$ or $e_2$, and the other graph is $K_5$. Thus we have the following proposition, which was mentioned in~\cite{LKLO}. 
\begin{proposition} \label{prop:planar} If $\widehat{G}_{a,b}$ is planar, then $G$ is not intrinsically knotted. Especially, if $\widehat{G}_{a,b}$ satisfies one of the following three conditions, then $\widehat{G}_{a,b}$ is planar, so $G$ is not intrinsically knotted. \begin{itemize} \item[(1)] $|\widehat{E}_{a,b}| \leq 8$, or \item[(2)] $|\widehat{E}_{a,b}|=9$ and $\widehat{G}_{a,b}$ is not isomorphic to $K_{3,3}$. \item[(3)] $|\widehat{E}_{a,b}|=10$ and $\widehat{G}_{a,b}$ is not isomorphic to $K_5$, $K_{3,3}+e_1$ and $K_{3,3}+e_2$. \end{itemize} \end{proposition} \begin{figure}[h] \includegraphics{fig5.eps} \caption{Three non-planar graphs with 10 edges} \label{fig:nonpla} \end{figure} \section{Restoring and twin restoring methods} \label{sec:restoring} In this section we review the restoring method, which we introduced in~\cite{KMO2} and will use frequently in this paper. We also introduce a similar technique that we call the twin restoring method. We will find all candidate bipartite intrinsically knotted graphs with 23 edges. To prove the main theorem, we distinguish several cases according to the combinations of degrees of all vertices and further sub-cases according to connections of some of the 23 edges. Let $G$ be a bipartite graph with 23 edges with some distinct vertices $a$ and $b$. Figure~\ref{fig:rest}(a) gives an example where $a$ and $b$ are $a_1$ and $a_2$. As in the figure, we assume that the degree of every vertex as well as information about certain edges, including all edges incident to $a$ and $b$, is known. First, we examine the number of the edges of the graph $\widehat{G}_{a,b}$. If it has at most eight edges, then it is planar and so $G$ cannot be intrinsically knotted by Proposition~\ref{prop:planar}. Even if it has more edges, $G$ is rarely intrinsically knotted. Especially if it has 9 edges, $\widehat{G}_{a,b}$ must be isomorphic to $K_{3,3}$ in order for $G$ to be intrinsically knotted. In this case, $G_{a,b}$, being a subdivision of $K_{3,3}$, has exactly six vertices with degree 3 and, possibly, additional vertices of degree 2. The {\em restoring method\/} is a way to find candidates for such a $G_{a,b}$ as shown in Figure~\ref{fig:rest}(b) and (c). Finally we recover $G$ from $G_{a,b}$ by restoring the deleted vertices and edges. $$ \stackbelow{\widehat{G}_{a,b}}{K_{3,3}} \ \ \rightarrow \ \ G_{a,b} \ \ \rightarrow \ \ G $$ Sometimes the restoring method applied to $G_{a,b}$ for only one pair of vertices $\{a,b\}$ does not give sufficient information to construct the graph $G$. In this case, we apply the restoring method to two graphs $G_{a,b}$ and $G_{a',b'}$ simultaneously for different pairs of vertices. We call this method the {\em twin restoring method\/}. \subsection{An example of the restoring method with 9 edges} \label{subsec:restoring1} \ As an example, suppose that $A$ consists of one degree 6 vertex, two degree 4 vertices and three degree 3 vertices, and $B$ consists of five degree 4 vertices and one degree 3 vertex with edge information as shown in Figure~\ref{fig:rest}(a). In the figure, the vertices are labeled by $a_1, \dots, a_6, b_1,\dots,b_6$ and the numbers near vertices indicate their degrees. In this case, $G_{a_1,a_2}$ has six degree 3 vertices $a_3,a_4,a_5,a_6,b_4,b_5$ and three degree 2 vertices $b_1,b_2,b_3$. Now we examine the number of the edges $|\widehat{E}_{a_1,a_2}|$ of the graph $\widehat{G}_{a_1,a_2}$. 
Since $NE(a_1,a_2) = 10$, $NV_3(a_1,a_2) = 1$ and $NV_4(a_1,a_2) = 3$, the count equation gives $|\widehat{E}_{a_1,a_2}| = 9$. We now assume that $\widehat{G}_{a_1,a_2}$ is isomorphic to $K_{3,3}$. As the bipartition of $K_{3,3}$, we assign the bipartition $C$ (black vertices) and $D$ (red vertices) to the six degree 3 vertices of $G_{a_1,a_2}$. Since all four vertices $a_3,a_4,a_5,a_6$ have degree 3, $b_4$ is not adjacent to $b_5$ ($b_4 \nadj b_5$) in $\widehat{G}_{a_1,a_2}$. This implies that $b_4$ and $b_5$ should be in the same partition, say $D$. Without loss of generality, the remaining vertex of $D$ is either $a_3$ or $a_4$ (indeed, the three vertices $a_4, a_5, a_6$ play interchangeable roles). Compare Figures~\ref{fig:rest}(b) and (c). In the first case, $C = \{a_4,a_5,a_6\}$. The three edges of $\widehat{G}_{a_1,a_2}$ connecting $a_3$ to $C$ inevitably pass through the three degree 2 vertices $b_1, b_2, b_3$. The three edges of $\widehat{G}_{a_1,a_2}$ incident to $b_4$ (or $b_5$) are directly connected to $C$. This $G_{a_1,a_2}$ is drawn with the solid edges in the figure. By restoring the deleted vertices and dotted edges, we recover $G$. In the second case, $C = \{a_3,a_5,a_6\}$. Then the three edges of $\widehat{G}_{a_1,a_2}$ connecting $a_4$ to $C$ pass through the three degree 2 vertices $b_1, b_2, b_3$. The remaining arguments are similar to the first case. \begin{figure}[h] \includegraphics[scale=1]{fig6.eps} \caption{Restoring method} \label{fig:rest} \end{figure} \subsection{An example of the restoring method with 10 edges} \label{subsec:restoring2} \ Even when $\widehat{G}_{a,b}$ has 10 edges, we can still apply the restoring method. As an example, suppose both $A$ and $B$ consist of two degree 5 vertices, one degree 4 vertex and three degree 3 vertices with edge information and vertex labelling as drawn in Figure~\ref{fig:rest2}(a). In this case, $G_{a_1,a'_1}$ has two degree 4 vertices $a_2$ and $a'_2$, four degree 3 vertices $b_1$, $c_3$, $b'_1$ and $c'_3$, and four degree 2 vertices $c_1$, $c_2$, $c'_1$ and $c'_2$. Then $|\widehat{E}_{a_1,a'_1}| = 10$. We now assume that $\widehat{G}_{a_1,a'_1}$ is isomorphic to one of $K_{3,3}+e_1$ or $K_{3,3}+e_2$. Since $a_2$ and $a'_2$ are mutually adjacent to $b_1$ and $b'_1$ in $\widehat{G}_{a_1,a'_1}$, $a_2$ and $a'_2$ are contained in the same partition $C$ and $b_1$ and $b'_1$ are in $D$. Therefore $\widehat{G}_{a_1,a'_1}$ is isomorphic to $K_{3,3}+e_1$. Without loss of generality, $c_3$ is contained in $C$ and $c'_3$ is contained in $D$. Obviously $b_1$ and $c_3$ are adjacent in $\widehat{G}_{a_1,a'_1}$, passing through $c'_2$, and $a'_2$ and $c'_3$ are adjacent, passing through $c_2$. The remaining connections are drawn in Figure~\ref{fig:rest2}(b). To recover $G$, we restore the deleted vertices and dotted edges, which gives the graph $G$ drawn in Figure~\ref{fig:rest2}(c). \begin{figure}[h] \includegraphics[scale=1]{fig7.eps} \caption{Restoring method with 10 edges} \label{fig:rest2} \end{figure} \subsection{An example of the twin restoring method} \label{subsec:restoring3} \ As an example, suppose that both $A$ and $B$ consist of two degree 4 vertices and five degree 3 vertices with vertex labelling and partial edge information as drawn in Figure~\ref{fig:rest3}(a). In this case, we apply the restoring method to two graphs $G_{b_1,b_2}$ and $G_{b_2,b'_1}$ simultaneously. These two graphs have the bipartitions assigned as in Figure~\ref{fig:rest3}(b) and (c).
By considering the bipartition in $G_{b_1,b_2}$, each of $c'_3, c'_4$ and $c'_5$ must be adjacent to exactly one of $c_3, c_4$ and $c_5$. Furthermore, by considering the bipartition in $G_{b_2,b'_1}$, each of $c_3, c_4$ and $c_5$ must be adjacent to at least one of $c'_3, c'_4$ and $c'_5$. From these two facts, we assume that $c_3 \adj c'_3$, $c_4 \adj c'_4$ and $c_5 \adj c'_5$. Without loss of generality, we further assume that $c_3 \adj b'_2$, $c_4 \adj c'_1$, $c_5 \adj c'_2$. In Figure~\ref{fig:rest3}(d), since $b'_2$ is adjacent to $c_1$ or $c_2$, $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 5-cycle $(b_2 c'_4 c'_1 c'_2 c'_5)$. Since it is not isomorphic to $K_{3,3}$, $G$ is not intrinsically knotted. \begin{figure}[h] \includegraphics[scale=1]{fig8.eps} \caption{Twin restoring method} \label{fig:rest3} \end{figure} \section{$G$ contains a vertex with degree 6 or more} \label{sec:g6} Throughout this paper, we assume that $G$ is a bipartite intrinsically knotted graph with 23 edges. In this section we assume there is a vertex $a$ in $A$ of maximal degree, with $\deg(a) \geq 6$. We conclude that there are no size 23 bipartite intrinsically knotted graphs in this case. Let $a'$ be a vertex in $A \setminus \{a\}$ with maximal degree. Since $G$ has 23 edges and has vertices with degree at least 3, $A$ and $B$ have at most seven vertices. Therefore $\deg(a)$ is 6 or 7, and $\deg(a') \geq 4$. Suppose $B$ has seven vertices. Then we will say that $B$ has a 5333333 or 4433333 degree combination, meaning either a single vertex of degree 5 or two vertices of degree 4, with the remaining vertices all of degree 3. In the first case, by the count equation, $|\widehat{E}_{a,a'}| \leq 8$ in $\widehat{G}_{a,a'}$ since $NE(a,a') \geq 10$ and $|V_3(a)| \geq 5$. By Proposition~\ref{prop:planar}, this contradicts $G$ being intrinsically knotted. In the second case, if $\deg(a) = 7$ then $|V_3(a)|=5$, and so $|\widehat{E}_{a,a'}| \leq 7$. If $\deg(a) = 6$ and $\deg(a') \geq 5$, then $|\widehat{E}_{a,a'}| \leq 8$. So we can assume $\deg(a) = 6$ and $\deg(a') = 4$. Then $A$ consists of $a$, $a'$, one more vertex with degree 4 and three vertices with degree 3. Let $b$ be a vertex in $B$ with degree 4. If $a' \adj b$, $|\widehat{E}_{a,a'}| \leq 8$ since $|V_3(a)|+NV_4(a,a') \geq 5$. Otherwise, $|\widehat{E}_{a,b}| \leq 8$ since $|V_3(a)|+|V_3(b)| \geq 6$. See Figure~\ref{fig:exdeg6}(a) for an example. \begin{figure}[h] \includegraphics[scale=1]{fig9.eps} \caption{Example of the case of $\text{deg}(a)=6$} \label{fig:exdeg6} \end{figure} Now we assume that $B$ has six vertices, and so $a$ has degree 6. If $a'$ has degree 6, then $NE(a,a') = 12$ and $NV_3(a,a') + NV_4(a,a') \geq 3$. So $|\widehat{E}_{a,a'}| \leq 8$. Suppose $\deg(a') = 5$. If $B$ has either at least four vertices with degree 3 or at least five vertices with degree 3 or 4, then $NV_3(a,a') + NV_4(a,a') \geq 4$, and so $|\widehat{E}_{a,a'}| \leq 8$. So we can assume, $B$ has both at most three vertices with degree 3 and at most four vertices with degree 3 or 4, meaning $B$ must have 554333 degree combination. If $a'$ is adjacent to a degree 4 vertex in $B$, $NV_3(a,a') + NV_4(a,a') = 4$, and so $|\widehat{E}_{a,a'}| \leq 8$. Therefore, we assume that $a'$ is not adjacent to a degree 4 vertex in $B$ as in Figure~\ref{fig:exdeg6}(b). Here $b$ is a degree 5 vertex in $B$. Now, $A$'s degree combination is one of 65543, 65444 or 653333. 
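To see why these are the only possibilities: since $\deg(a)=6$ and $\deg(a')=5$, the remaining vertices of $A$ have degrees between 3 and 5 summing to $23-6-5=12$, and
$$ 12 = 5+4+3 = 4+4+4 = 3+3+3+3 $$
are the only ways to write 12 with parts in $\{3,4,5\}$, giving exactly the three combinations above.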
If $A$ has a 65543 or 653333 degree combination, then $|\widehat{E}_{a,b}| \leq 9$ with a degree 4 vertex $a'$ in $\widehat{G}_{a,b}$. Since it is not isomorphic to $K_{3,3}$, $G$ is not intrinsically knotted. If $A$ has a 65444 degree combination, then the two degree 5 vertices in $B$ are adjacent to all vertices in $A$ and the degree 4 vertex in $B$ is adjacent to all vertices except $a'$ as in Figure~\ref{fig:exdeg6}(c). For a degree 4 vertex $a''$ in $A$, $\widehat{G}_{a,a''}$ has at most 9 edges and a degree 4 vertex $a'$. The remaining case is that $\deg(a') = 4$, and so $A$ has a 644333 degree combination. Let $b$ be a vertex in $B$ with maximal degree. If $\deg(b) = 6$, then $|\widehat{E}_{a,b}| \leq 8$. If $\deg(b) = 5$ and $B$ has at least three degree 3 vertices, then $NE(a,b) = 10$ and $NV_3(a,b) \geq 5$, implying $|\widehat{E}_{a,b}| \leq 8$. Now consider the case that either $\deg(b) = 4$ or $\deg(b) = 5$ along with the condition that $B$ has at most two degree 3 vertices. Then $B$ has a 444443 or 544433 degree combination. Assume that $A$ and $B$ have 644333 and 444443 degree combinations, respectively. If $a'$ is not adjacent to the unique degree 3 vertex in $B$, then $|\widehat{E}_{a,a'}| \leq 8$. So the two degree 4 vertices in $A$ are adjacent to the degree 3 vertex in $B$, and without loss of generality we have the connecting combination as in Figure~\ref{fig:rest}(a). The rest of process follows the restoring method, discussed in Subsection~\ref{subsec:restoring1} as an example. Eventually we obtain the two graphs for $G$ shown in Figures~\ref{fig:rest}(b) and (c). In Figure~\ref{fig:rest}(b), $\widehat{G}_{a_1,a_4}$ is planar. In Figure~\ref{fig:rest}(c), $\widehat{G}_{a_1,a_3}$ has at most 9 edges and a 2-cycle $(a_5 a_6)$. Now consider the final case where $B$ has a 544433 degree combination. We label the vertices in descending order of their vertex degree as in Figure~\ref{fig:deg6}(a). If $|V_3(b_1)| =3$ then $|\widehat{E}_{a_1,b_1}| \leq 8$. Without loss of generality, we may assume that $b_1 \nadj a_6$. If $|V_3(a_2)| =0$ (similarly for $a_3$) then $|\widehat{E}_{a_1,a_2}| \leq 8$. Also if $|V_3(a_2)| =|V_3(a_3)|=2$ then $\widehat{G}_{a_1,b_1}$ has at most 9 edges and a 2-cycle $(a_2 a_3)$. Therefore one of them, say $a_2$, is adjacent to exactly one degree 3 vertex in $B$. Without loss of generality, $a_2$ is adjacent to $b_1,b_2,b_3,b_5$. Now $a_3 \adj b_5$, for otherwise, $NV_3(a_1,a_2) + NV_4(a_1,a_2) = 4$ and $NV_Y(a_1,a_2) = 1$, and so $|\widehat{E}_{a_1,a_2}| \leq 8$. Here $V_Y(a_1,a_2)$ includes the degree 3 vertex in $A$, which is adjacent to $b_5$. If $a_3$ is adjacent to $b_2$ (or $b_3$), then $|\widehat{E}_{a_1,b_1}| \leq 9$ with a 3-cycle $(a_2 a_3 b_2)$. So $a_3$ is adjacent to $b_4$ and $b_6$. By applying the restoring method, we construct $G_{a_1,a_2}$ as drawn in Figure~\ref{fig:deg6}(b). Finally we recover $G$ by restoring the deleted vertices and dotted edges. For the graph $G$, $\widehat{G}_{a_1,a_3}$ has 9 edges and a 3-cycle $(a_2 b_2 b_3)$, showing that $G$ is not intrinsically knotted. \begin{figure}[h] \includegraphics[scale=1]{fig10.eps} \caption{The case of $\text{deg}(a_1)=6$ and the restoring method} \label{fig:deg6} \end{figure} \section{Both $A$ and $B$ contain degree 5 vertices} \label{sec:ab5} In this section we assume $G$ has maximal degree 5 and both $A$ and $B$ have degree 5 vertices. We find two graphs for Theorem~\ref{thm:main} in Subsection~\ref{subsec:110} (see Figure~\ref{fig:b320}). 
Both are formed by adding an edge to Cousin 110 of the $E_9+e$ family. Let $A_n$ denote the set of vertices in $A$ with degree $n = 3,4,5$ and $[A] = [|A_5|, |A_4|,|A_3|]$. The possible cases for $[A]$ are $[4,0,1]$, $[3,2,0]$, $[2,1,3]$, $[1,3,2]$ and $[1,0,6]$. Similarly, define $B_n$ and $[B]$. Without loss of generality, we may assume that $|A_5| \geq |B_5|$, and furthermore, if $|A_5| = |B_5|$ then $|A_4| \geq |B_4|$. We distinguish fifteen cases of all possible combinations of $[A]$ and $[B]$, which we treat in the following seven subsections. To simplify the notation, vertices in $A_5$, $A_4$, $A_3$, $B_5$, $B_4$ and $B_3$ are denoted by $\{a_i\}$, $\{b_i\}$, $\{c_i\}$, $\{a'_i\}$, $\{b'_i\}$ and $\{c'_i\}$, respectively. \subsection{$[A]=[4,0,1]$ or $[3,2,0]$, and $[B]=[4,0,1]$ or $[3,2,0]$} \label{subsec:110} \ If both are $[4,0,1]$, the four degree 5 vertices in $A$ must all be adjacent to the unique degree 3 vertex in $B$, which is impossible. Suppose instead that $[A]=[4,0,1]$ or $[3,2,0]$, and $[B]=[3,2,0]$. Both cases are uniquely realized as the two graphs in Figure~\ref{fig:b320}, which are obtained from Cousin 110 of the $E_9+e$ family by adding an edge $l$. Cousin 110 of the $E_9+e$ family is intrinsically knotted~\cite{GMN}. These are the first two bipartite intrinsically knotted graphs of Theorem~\ref{thm:main}. \begin{figure}[h] \includegraphics[scale=1]{fig11.eps} \caption{Two bipartite intrinsically knotted graphs} \label{fig:b320} \end{figure} \subsection{$[A]=[4,0,1]$ or $[3,2,0]$, and $[B]=[2,1,3]$} \ First assume that a degree 3 vertex $c'_1$ in $B$ is adjacent to at most one degree 5 vertex in $A$. In this case, $[A]=[3,2,0]$ and $c'_1$ must be adjacent to one degree 5 vertex $a_1$ and two degree 4 vertices in $A$. Furthermore $a_2$ and $a_3$ are adjacent to all vertices in $B$ except $c'_1$. If $a_1 \adj b'_1$, then $|\widehat{E}_{a_1,a_2}| \leq 9$ and $a_3$ has degree larger than 3 in $\widehat{G}_{a_1,a_2}$. If $a_1 \nadj b'_1$, then $|\widehat{E}_{a_2,b_1}| \leq 10$ and $a_1$ has degree 5 in $\widehat{G}_{a_2,b_1}$ as drawn in Figure~\ref{fig:a51}(a). So every degree 3 vertex in $B$ is adjacent to at least two degree 5 vertices in $A$. Assume that $c'_1 \adj \{a_1, a_2\}$, and furthermore $b'_1 \adj a_1$. If $a_3 \nadj c'_1$, then $\widehat{G}_{a_1,a_3}$ has at most 9 edges and a vertex $a_2$ with degree larger than 3. So every degree 5 vertex in $A$ is adjacent to all degree 3 vertices in $B$, and so $[A]=[3,2,0]$. In this case the three degree 3 vertices of $B$ have all of their edges going to the three degree 5 vertices of $A$, so no degree 4 vertex in $A$ can be adjacent to a degree 3 vertex in $B$; but then each degree 4 vertex of $A$ would need four neighbors among the remaining three vertices of $B$, so no such graph $G$ is possible. \begin{figure}[h] \includegraphics[scale=1]{fig12.eps} \caption{Case of $[A]=[3,2,0]$, and $[B]=[2,1,3]$ or $[1,3,2]$} \label{fig:a51} \end{figure} \subsection{$[A]=[4,0,1]$ or $[3,2,0]$, and $[B]=[1,3,2]$} \ If a degree 5 vertex $a_1$ in $A$ is adjacent to all degree 4 vertices in $B$, then $|\widehat{E}_{a_1,a_2}| \leq 9$ and $a_3$ has degree larger than 3 in $\widehat{G}_{a_1,a_2}$. Suppose instead each degree 5 vertex in $A$ is adjacent to two degree 3 vertices and two among the three degree 4 vertices in $B$. So $[A]=[3,2,0]$ and $G$ is uniquely realized as in Figure~\ref{fig:a51}(b). Since $\widehat{G}_{a_1,b_2}$ has 10 edges with a degree 5 vertex, it is planar. \subsection{$[A]=[2,1,3]$ and $[B]=[2,1,3]$.}\ If a degree 5 vertex $a_1$ is adjacent to all three degree 3 vertices in $B$, then $|\widehat{E}_{a_1,a'_1}| \leq 9$ and $a_2$ has degree larger than 3 in $\widehat{G}_{a_1,a'_1}$.
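To verify this bound with the count equation: if $a_1 \adj a'_1$ then $NE(a_1,a'_1)=9$ and $NV_3(a_1,a'_1) \geq 3+2=5$, since $V_3(a_1)$ contains the three degree 3 vertices of $B$ and at least two neighbors of $a'_1$ in $A$ have degree 3; hence
$$ |\widehat{E}_{a_1,a'_1}| \leq 23 - 9 - 5 = 9. $$
If $a_1 \nadj a'_1$, the bound is even smaller.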
The same argument applies to vertices $a_2, a'_1$ and $a'_2$. Therefore there are vertices $c_1$ and $c'_1$ adjacent to both degree 5 vertices on the other side as in Figure~\ref{fig:rest2}(a). As in the figure, we also have the condition $b_1 \adj c'_1$ (similarly $b'_1 \adj c_1$). For, if $a_2 \adj c'_3$ and $b_1 \nadj c'_1$, $|\widehat{E}_{a_1,a_2}| \leq 8$ because $V_Y(a_1,a_2)$ is not empty. Or, if $a_2 \adj c'_2$ and $b_1$ is not adjacent to both $c'_1$ and $c'_2$, then $b_1 \adj \{ a'_1,a'_2,b'_1,c'_3 \}$. This implies that $\widehat{G}_{a_1,b_1}$ has 10 edges and a degree 5 vertex $a_2$. The rest of process follows the restoring method, as described in Subsection~\ref{subsec:restoring2} as an example. This leads to the graph $G$ as drawn in Figure~\ref{fig:rest2}(c). Then $\widehat{G}_{a'_1,a'_2}$ has at most 9 edges and a 3-cycle $(a_2 c_3 b'_1)$. \subsection{$[A]=[2,1,3]$ and $[B]=[1,3,2]$.}\ If there is a degree 5 vertex $a_1$ which is not adjacent to $a'_1$, $NV_3(a_1,a'_1)=5$, and so $|\widehat{E}_{a_1,a'_1}| =8$. Therefore $a'_1 \adj \{a_1, a_2\}$. First assume that $NV_4(a_1,a_2)=3$. Let $a_1 \adj c'_1$. To avoid $|\widehat{E}_{a_1,a_2}| \leq 8$, $NV_3(a_1,a_2)+NV_Y(a_1,a_2) \leq 1$. This implies that $a_2 \adj c'_1$ and $V_Y(a_1,a_2)$ is empty (so $b_1 \adj c'_1$). Now we apply the restoring method. According to the bipartition choice of $\widehat{G}_{a_1,a_2}$, we construct two graphs for $G_{a_1,a_2}$ as drawn in Figure~\ref{fig:213132}(a) and (b). Finally we recover $G$ by restoring the deleted vertices and dotted edges. In Figure~\ref{fig:213132}(a) and (b), we find planar graphs $\widehat{G}_{a_1,c_1}$ and $\widehat{G}_{a_1,b_1}$, respectively. Assume that $NV_4(a_1,a_2)=2$, which are $b'_1,b'_2$. Obviously, $NV_3(a_1,a_2)=2$. If both $a_1$ and $a_2$ are not adjacent to $b'_3$, then $NV_Y(a_1,a_2) \geq 1$, implying $|\widehat{E}_{a_1,a_2}| \leq 8$. Now we assume that $a_1 \adj b'_3$ and $a_2 \nadj b'_3$, and assume further that $a_1 \adj c'_1$ and $a_2 \adj \{c'_1, c'_2\}$. If $c'_1 \adj c_i$, then $NV_Y(a_1,a_2)=1$, and so $c'_1 \adj b_1$. If $a'_1 \nadj b_1$, then $\widehat{G}_{a_2,a'_1}$ has 9 edges and a degree 4 vertex $a_1$, and so we assume $a'_1 \adj \{b_1,c_1,c_2\}$ as in Figure~\ref{fig:213132}(c). Using the restoring method, we construct $G_{a_1,a_2}$, and then recover $G$. In this graph, $\widehat{G}_{a_1,b_1}$ is planar. Finally we have $NV_4(a_1,a_2)=1$, which is $b'_1$. Then we may assume that $a_1 \adj \{a'_1,b'_1,b'_2,c'_1,c'_2\}$ and $a_2 \adj \{a'_1,b'_1,b'_3,c'_1,c'_2\}$. If $b_1 \adj \{a'_1,b'_1,b'_2,b'_3\}$, then $\widehat{G}_{a_1,b_1}$ has 10 edges and a degree 5 vertex $a_2$. Therefore, assume $b_1 \adj c'_1$. If $a'_1 \nadj b_1$, then $\widehat{G}_{a_1,a'_1}$ has 9 edges and a degree 4 vertex $a_2$. Thus we assume $a'_1 \adj \{b_1,c_1,c_2\}$. Since $\widehat{G}_{a_1,a'_1}$ has 10 edges and exactly two degree 4 vertices $a_2$ and $b'_3$, it is isomorphic to either $K_{3,3}+e_1$ or $K_{3,3}+e_2$. However, $K_{3,3}+e_2$ is not possible for $\widehat{G}_{a_1,a'_1}$. Therefore $\widehat{G}_{a_1,a'_1}$ is isomorphic to $K_{3,3}+e_1$. Using the restoring method, we construct $G_{a_1,a'_1}$ as drawn in Figure~\ref{fig:213132}(d), and then recover $G$. Since $NV_3(a_1,a_2)+NV_4(a_1,a_2)+NV_Y(a_1,a_2)=4$, $\widehat{G}_{a_1,a_2}$ has 9 edges and a 3-cycle $(a'_1 c_1 c_2)$. 
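In terms of the count equation, this last computation reads
$$ |\widehat{E}_{a_1,a_2}| = 23 - NE(a_1,a_2) - 4 = 23 - 10 - 4 = 9, $$
since $a_1$ and $a_2$ are non-adjacent degree 5 vertices of $A$.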
\begin{figure}[h] \includegraphics[scale=1]{fig13.eps} \caption{Case of $[A]=[2,1,3]$ and $[B]=[1,3,2]$} \label{fig:213132} \end{figure} \subsection{$[A]=[1,3,2]$ and $[B]=[1,3,2]$.}\ First assume that $a_1 \nadj a'_1$. If there is a degree 4 vertex $b_1$ in $A$ which is not adjacent to degree 3 vertices in $B$, then $NV_4(a_1,b_1)=3$, implying that $\widehat{G}_{a_1,b_1}$ has 9 edges and a degree 4 vertex $a'_1$. Therefore we may assume that $\{b_1,b_2\} \adj c'_1$, and similarly $\{b'_1,b'_2\} \adj c_1$. Using the restoring method, we construct $G_{a_1,a'_1}$ as in Figure~\ref{fig:132132}(a). After recovering $G$, we have a planar graph $\widehat{G}_{b_1,b_2}$. Now assume that $a_1 \adj a'_1$. We distinguish into three cases. The first case is that $a_1 \adj \{b'_1,b'_2,b'_3,c'_1\}$ and $a'_1 \adj \{b_1,b_2,b_3,c_1\}$. There is a vertex, say $b_1$, among the degree 4 vertices in $A$ such that $b_1 \nadj c'_1$. We may assume either $b_1 \adj \{b'_1,b'_2,b'_3\}$ or $b_1 \adj \{b'_1,b'_2,c'_2\}$. In the former case, using the restoring method, we construct three graphs $G_{a_1,b_1}$ as drawn in Figure~\ref{fig:132132}(b), (c) and (d). In all cases, after recovering $G$, we have planar graphs $\widehat{G}_{b_1,b_2}$. In the latter case, we partially construct $\widehat{G}_{a_1,b_1}$ as in Figure~\ref{fig:132132}(e). In the figure, the bipartition is determined by the connection of $a'_1$ so that $b'_3 \adj \{b_2,b_3,c_1\}$. To be $K_{3,3}+e_1$, the two degree 4 vertices $b_2, b_3$ must be connected through a degree 2 vertex. If this vertex is $b'_1$ (or similarly $b'_2$), then $b'_1 \adj \{b_1,b_2,b_3\}$, which is the same as the former case with exchanging the vertices in $A$ and $B$. This implies that $c_2 \adj \{b'_1,b'_2\}$ and further we may assume $b_2 \adj b'_1$. Now we instead consider $\widehat{G}_{a'_1,b'_1}$ as in Figure~\ref{fig:132132}(f). Then the bipartition is determined by the connection of $a_1$, but the two degree 3 vertices $c'_1, c'_2$ cannot be connected through any degree 2 vertex. The second case is that $a_1 \adj \{b'_1,b'_2,b'_3,c'_1\}$ and $a'_1 \adj \{b_1,b_2,c_1,c_2\}$. If $b_3 \nadj c'_1$, then $b_3 \adj \{b'_1,b'_2,b'_3,c'_2\}$, implying that $\widehat{G}_{a_1,b_3}$ has 9 edges and a degree 4 vertex $a'_1$. Thus we have $b_3 \adj c'_1$. If $c'_1$ is adjacent to a degree 3 vertex in $A$, then $V_Y(a_1,b_3)$ is not empty, implying that $\widehat{G}_{a_1,b_3}$ has 9 edges and a degree 4 vertex $a'_1$. Therefore, $b_1 \adj c'_1$. Now in either case of $b_3 \adj \{b'_1,b'_2,b'_3\}$ or $b_3 \adj \{b'_1,b'_2,c'_2\}$, using the restoring method, we have three graphs $\widehat{G}_{a_1,b_3}$ as in Figure~\ref{fig:132132}(g), (h) and (i). In all cases, after recovering $G$, we have planar graphs $\widehat{G}_{a_1,b_1}$. Finally we consider the third case, where $a_1 \adj \{b'_1,b'_2,c'_1,c'_2\}$ and $a'_1 \adj \{b_1,b_2,c_1,c_2\}$. In this case, we construct $\widehat{G}_{a_1,a'_1}$ which has 10 edges with exactly two degree 4 vertices $b_3, b'_3$, so it must be isomorphic to either $K_{3,3}+e_1$ or $K_{3,3}+e_2$. Then we have four cases as follows; (1) $b_3 \nadj b'_3$, (2) $b_3 \nadj b'_1$ and $b'_3 \nadj b_1$, (3) $b_3 \nadj b'_1$ and $b'_3 \nadj c_1$, and (4) $b_3 \nadj c'_1$ and $b'_3 \nadj c_1$. In the figure, (j) indicates the case (1), (k) and (l) indicate the case (3), (m) and (n) indicate the case (4), and no graph satisfying case (2) can be constructed. 
We find a planar graph $\widehat{G}_{a_1,b_3}$ in (j), and planar graphs $\widehat{G}_{a_1,b_1}$ for the remaining cases. \begin{figure}[h!] \includegraphics{fig14.eps} \caption{Case of $[A]=[1,3,2]$ and $[B]=[1,3,2]$.} \label{fig:132132} \end{figure} \subsection{$[A]$ is one of the five cases, and $[B]=[1,0,6]$} \ First assume that $[A]=[4,0,1]$ or $[3,2,0]$. If $NV_3(a_i,a_j) \geq 5$ for some $i$ and $j$, then $|\widehat{E}_{a_i,a_j}| \leq 8$. Suppose instead that three degree 5 vertices in $A$ are adjacent to the unique degree 5 vertex and the same four degree 3 vertices in $B$. It is not possible to construct such a graph $G$. If $[A]=[2,1,3]$ or $[1,0,6]$, $NV_3(a_1,a'_1) \geq 6$, implying $|\widehat{E}_{a_1,a'_1}| \leq 8$. Finally, assume that $[A]=[1,3,2]$. If $a_1 \nadj a'_1$, $NV_3(a_1,a'_1) = 7$, implying $|\widehat{E}_{a_1,a'_1}| \leq 6$. Now we assume that $a_1 \adj \{a'_1, c'_1, c'_2, c'_3, c'_4\}$. If $b_1 \adj \{c'_5,c'_6\}$ then $NV_3(a_1,b_1) = 6$, implying $|\widehat{E}_{a_1,b_1}| \leq 8$. So none of the degree 4 vertices $b_1, b_2, b_3$ is adjacent to both $c'_5$ and $c'_6$. This means that there are at least three edges connecting $\{c_1,c_2\}$ and $\{c'_5,c'_6\}$. Therefore, one of them, say $c_1$, is adjacent to both $c'_5$ and $c'_6$. Since $NV_3(a_1,c_1) = 6$, $\widehat{G}_{a_1,c_1}$ has 9 edges and a degree 4 vertex among $b_1, b_2$ and $b_3$. \section{Only $A$ contains degree 5 vertices} \label{sec:a5} In this section we assume that only $A$ contains degree 5 vertices and $B$ contains vertices with degree at most 4. In Subsection~\ref{subsec:106025} we find a bipartite intrinsically knotted graph formed by adding two edges to the Heawood graph (see Figure~\ref{fig:106025}(d)). The possible cases for $[A]$ are $[4,0,1]$, $[3,2,0]$, $[2,1,3]$, $[1,3,2]$ and $[1,0,6]$, and for $[B]$, $[0,5,1]$ and $[0,2,5]$. We distinguish ten cases of possible combinations of $[A]$ and $[B]$ in the following six subsections. \subsection{$[A]=[4,0,1]$, $[3,2,0]$ or $[2,1,3]$, and $[B]=[0,5,1]$} \ If two degree 5 vertices $a_1$ and $a_2$ in $A$ satisfy $|V_4(a_1,a_2)| \geq 4$, then $NV_3(a_1,a_2) + NV_4(a_1,a_2)=5$, implying $|\widehat{E}_{a_1,a_2}|=8$. Suppose instead that $a_1 \adj \{b'_1,b'_2,b'_3,b'_4,c'_1\}$ and $a_2 \adj \{b'_1,b'_2,b'_3,b'_5,c'_1\}$. If $[A]=[4,0,1]$ or $[3,2,0]$, $\widehat{G}_{a_1,a_2}$ has at most 9 edges and a vertex $a_3$ with degree larger than 3. On the other hand, if $[A]=[2,1,3]$, then we have $b_1 \adj c'_1$ because, if not, $NV_Y(a_1,a_2)=1$, implying $|\widehat{E}_{a_1,a_2}|=8$. Using the restoring method, we construct two graphs of $G_{a_1,a_2}$ as in Figure~\ref{fig:213051}(a) and (b). By recovering $G$, we find that $\widehat{G}_{a_1,c_1}$ is planar in both cases. \begin{figure}[h!] \includegraphics{fig15.eps} \caption{Case of $[A]=[2,1,3]$ and $[B]=[0,5,1]$.} \label{fig:213051} \end{figure} \subsection{$[A]=[4,0,1]$, $[3,2,0]$ or $[2,1,3]$, and $[B]=[0,2,5]$} \ If a degree 5 vertex $a_1$ in $A$ is adjacent to either both or neither of the two degree 4 vertices in $B$, then $NV_3(a_1,a_2) + NV_4(a_1,a_2) \geq 5$, implying $|\widehat{E}_{a_1,a_2}| \leq 8$. Therefore each degree 5 vertex in $A$ must be adjacent to exactly one degree 4 vertex. If $[A]=[4,0,1]$ or $[3,2,0]$, let $a_1$ and $a_2$ be two degree 5 vertices in $A$ that are adjacent to the same degree 4 vertex in $B$, implying $|\widehat{E}_{a_1,a_2}| \leq 8$. If $[A]=[2,1,3]$, then we may say that $a_1 \nadj b'_1$. In this case $NV_3(a_1,b'_1) \geq 6$, implying $|\widehat{E}_{a_1,b'_1}| \leq 8$.
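Explicitly, $NE(a_1,b'_1) = 5 + 4 = 9$ because $a_1 \nadj b'_1$, so the count equation gives
$$ |\widehat{E}_{a_1,b'_1}| \leq 23 - 9 - 6 = 8. $$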
\subsection{$[A]=[1,3,2]$ and $[B]=[0,5,1]$.}\ First assume that $a_1$ is adjacent to all degree 4 vertices of $B$. Since $\widehat{G}_{a_1,b_1}$ has 10 edges and exactly two degree 4 vertices $b_2$ and $b_3$, it is isomorphic to either $K_{3,3}+e_1$ or $K_{3,3}+e_2$. For the first case as in Figure~\ref{fig:132051}(a), we recover $G \setminus \{ a_1 \}$ instead of $G$ by restoring only $b_1$ and the related edges. In this graph, $\widehat{G}_{a_1,b_3}$ is planar. For the second case as in Figure~\ref{fig:132051}(b), we similarly recover $G \setminus \{ a_1 \}$, and then $\widehat{G}_{a_1,b_2}$ is planar. Now we assume that $a_1 \adj \{b'_1,b'_2,b'_3,b'_4,c '_1\}$. Then we further assume that $c'_1 \nadj b_1$. If $NV_4(a_1,b_1)=4$, then $\widehat{G}_{a_1,b_1}$ has at most 9 edges and a vertex $b_2$ with degree larger than 3. So we may say that $b_1 \adj \{b'_1,b'_2,b'_3,b'_5\}$. Since $\widehat{G}_{a_1,b_1}$ has 10 edges and exactly two degree 4 vertices $b_2$ and $b_3$, it is isomorphic to either $K_{3,3}+e_1$ or $K_{3,3}+e_2$. Using the restoring method, we construct $G_{a_1,b_1}$ as drawn in Figure~\ref{fig:132051}(c), (d), (e), (f), and (g), in which the first three figures correspond to $K_{3,3}+e_1$ and the remaining two figures to $K_{3,3}+e_2$. After recovering $G$, we have planar graphs $\widehat{G}_{a_1,b_2}$ for all five cases. \begin{figure}[h!] \includegraphics{fig16.eps} \caption{Case of $[A]=[1,3,2]$ and $[B]=[0,5,1]$.} \label{fig:132051} \end{figure} \subsection{$[A]=[1,3,2]$ and $[B]=[0,2,5]$.}\ If $a_1$ is adjacent to all five degree 3 vertices in $B$, $NV_3(a_1,b'_1) \geq 6$, implying $|\widehat{E}_{a_1,b'_1}| \leq 8$. Now assume that $a_1 \adj b'_1$ and $a_1 \nadj b'_2$. If $b'_2 \adj \{c_1, c_2\}$, then $NV_3(a_1,b'_2) \geq 6$, implying $|\widehat{E}_{a_1,b'_2}| \leq 8$. Therefore $b'_2$ is adjacent to all three degree 4 vertices in $A$. Similarly if $b'_1 \adj \{c_1, c_2\}$, then $\widehat{G}_{a_1,b'_1}$ has 9 edges and a degree 4 vertex $b'_2$, and so we assume $b'_1 \adj \{b_1,b_2\}$. To avoid $|\widehat{E}_{a_1,b_1}| \leq 8$, $NV_3(a_1,b_1)+NV_Y(a_1,b_1) \leq 4$ because $NV_4(a_1,b_1)=1$. So we may assume that $b_1 \adj \{c'_1,c'_2\}$, and then $c'_1 \adj b_2$ and $c'_2 \adj b_3$. Therefore $\widehat{G}_{a_1,b'_2}$ has 9 edges and a 3-cycle $(b_1 b_2 b'_1)$. Finally we assume that $a_1 \adj \{b'_1,b'_2,c'_1,c'_2,c'_3\}$. If a degree 4 vertex $b_i$ in $A$ is adjacent to at most one among $\{c'_1,c'_2,c'_3\}$, then $|\widehat{E}_{a_1,b_i}| \leq 8$. Therefore $NV_3(a_1,b_i) \geq 2$, and we may assume that $b_1 \adj \{c'_1,c'_2\}$, $b_2 \adj \{c'_1,c'_3\}$ and $b_3 \adj \{c'_2,c'_3\}$. If a degree 4 vertex $b'_i$ in $B$ is adjacent to at least two degree 4 vertices in $A$, say $b_1$ and $b_2$, then $\widehat{G}_{a_1,b_3}$ has 9 edges and a 3-cycle $(b_1 b_2 b'_1)$. Therefore both $b'_1$ and $b'_2$ are adjacent to $c_1$ and $c_2$. We further assume that $b'_1 \adj b_1$, and so $\widehat{G}_{a_1,b_1}$ has 9 edges and a 3-cycle $(c_1 c_2 b'_2)$. \subsection{$[A]=[1,0,6]$ and $[B]=[0,5,1]$.}\ If there is a degree 4 vertex $b'_1$ in $B$ which is not adjacent to $a_1$, then we assume that $b'_1 \adj \{c_1, c_2, c_3, c_4\}$. If both $c_5$ and $c_6$ are adjacent to the same degree 4 vertex, say $b'_2$, then $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a degree 4 vertex $a_1$. Therefore we assume that $c_5 \adj \{b'_2, b'_3, c'_1\}$ and $c_6 \adj \{b'_4, b'_5, c'_1\}$. Now use the restoring method to construct $G_{a_1,b'_1}$ as drawn in Figure~\ref{fig:106051}(a). 
After recovering $G$, we have a planar graph $\widehat{G}_{a_1,c_6}$. Suppose instead that $a_1$ is adjacent to all degree 4 vertices in $B$. If there is a pair of degree 4 vertices, say $b'_1, b'_2$, so that $V_3(b'_1, b'_2)$ is empty, then $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a degree 4 vertex $b'_3$. Furthermore it is impossible that all pairs of degree 4 vertices, for example $b'_1, b'_2$, are such that $V_3(b'_1, b'_2)$ has at least two degree 3 vertices in $A$. Therefore we may assume that $b'_1 \adj \{c_1, c_2, c_3\}$ and $b'_2 \adj \{c_1, c_4, c_5\}$. Using the restoring method, we construct $G_{b'_1,b'_2}$ as drawn in Figure~\ref{fig:106051}(b) and (c). After recovering $G$, we have planar graphs $\widehat{G}_{b'_1,b'_3}$ in both cases. \begin{figure}[h!] \includegraphics{fig17.eps} \caption{Case of $[A]=[1,0,6]$ and $[B]=[0,5,1]$.} \label{fig:106051} \end{figure} \subsection{$[A]=[1,0,6]$ and $[B]=[0,2,5]$.}\label{subsec:106025} \ If there is a degree 4 vertex $b'_1$ in $B$ which is not adjacent to $a_1$, then $NV_3(a_1,b'_1) \geq 7$, implying $|\widehat{E}_{a_1,b'_1}| \leq 7$. Therefore we assume that $a_1 \adj \{b'_1, b'_2, c'_1, c'_2, c'_3\}$. Suppose that $c_1$ is not adjacent to $c'_1, c'_2, c'_3$. Then we distinguish two cases: $c_1 \adj \{b'_1, b'_2, c'_4\}$ or $c_1 \adj \{b'_1, c'_4, c'_5\}$. In the first case, if both $b'_1, b'_2$ are adjacent to a degree 3 vertex $c_2$, then $\widehat{G}_{a_1,c'_4}$ has at most 9 edges and a 3-cycle $(c_2 b'_1 b'_2)$. So we may assume that $b'_1 \adj \{c_2, c_3\}$ and $b'_2 \adj \{c_4, c_5\}$. Now we partially construct $\widehat{G}_{a_1,b_1}$ as in Figure~\ref{fig:106025}(a). In the figure, the bipartition is determined by the connection of $b'_2$, and so $c'_4$ must be adjacent to one of $c_2$ or $c_3$, say $c_2$. Then $\widehat{G}_{a_1,b'_2}$ has 9 edges and a 3-cycle $(c_2 b'_1 c'_4)$. In the second case, if both $c'_4, c'_5$ are adjacent to another degree 3 vertex $c_2$, then $\widehat{G}_{a_1,b'_1}$ has at most 9 edges and a 3-cycle $(c_2 c'_4 c'_5)$. So we may assume that $c'_4 \adj \{c_2, c_3\}$ and $c'_5 \adj \{c_4, c_5\}$. Again, we partially construct $\widehat{G}_{a_1,c'_4}$ as in Figure~\ref{fig:106025}(b). Then $\widehat{G}_{a_1,c'_5}$ has 9 edges and a 3-cycle $(c_2 b'_1 c'_4)$. Suppose instead that $c'_1 \adj \{c_1, c_2\}$, $c'_2 \adj \{c_3, c_4\}$ and $c'_3 \adj \{c_5, c_6\}$. If both $c_1, c_2$ are adjacent to $b'_1$ (similarly for $b'_2, c'_4 c'_5$), then $\widehat{G}_{a_1,b'_2}$ has at most 9 edges and a 3-cycle $(c_1 c_2 b'_1)$. Therefore we may assume that $b'_1 \adj \{c_1, c_3, c_5\}$. Now if $b'_2$ (similarly for $c'_4, c'_5$) is adjacent to at least two among $c_1, c_3, c_5$, say $c_1, c_3$, as in Figure~\ref{fig:106025}(c), then $\widehat{G}_{a_1,c_5}$ has 9 edges and a 3-cycle $(c_1 c_3 b'_2)$. So we may assume that $c_1 \adj b'_2$, $c_3 \adj c'_4$ and $c_5 \adj c'_5$. Now use the restoring method so that we construct $G_{a_1,b'_1}$ as drawn in Figure~\ref{fig:106025}(d). After recovering $G$, we have an intrinsically knotted graph, from which we obtain the Heawood graph by deleting two edges connecting $a_1$ and $\{b'_1, b'_2\}$. \begin{figure}[h!] \includegraphics{fig18.eps} \caption{Case of $[A]=[1,0,6]$ and $[B]=[0,2,5]$.} \label{fig:106025} \end{figure} \section{$G$ only contains vertices with degree 3 or 4} \label{sec:g4} In this section we assume that both $A$ and $B$ only contain vertices with degree 3 or 4. 
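Since the degrees on each side sum to 23, if a part has $x$ vertices of degree 4 and $y$ vertices of degree 3 then $4x + 3y = 23$, whose only solutions in non-negative integers are $(x,y)=(5,1)$ and $(x,y)=(2,5)$.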
Then $[A]$ and $[B]$ are either $[0,5,1]$ or $[0,2,5]$, and we have three cases in the following subsections. In Subsection~\ref{subsec:89} we find three intrinsically knotted graphs, each formed by adding two edges to the Heawood graph; see Figures~\ref{fig:025025}(f), (j), and (o). \subsection{$[A]=[0,5,1]$ and $[B]=[0,5,1]$.}\ First we remark that if $H$ is a graph, allowing multi-edges, that consists of four degree 4 vertices, two degree 3 vertices and eleven edges, and such that there is a degree 4 vertex adjacent to the other three degree 4 vertices as well as a degree 3 vertex, then the graph is non-planar only when it is the graph in Figure~\ref{fig:051051}(a). Without loss of generality, there is a vertex $b_1$ so that $b_1 \adj \{b'_1, b'_2, b'_3, b'_4\}$. As a first case, assume that some $b_2 \adj \{b'_1, b'_2, b'_3, b'_4\}$, and so $b'_5 \adj \{b_3, b_4, b_5, c_1\}$. Using the restoring method, we construct $G_{b_1,b_2}$ so that $\widehat{G}_{b_1,b_2}$ is the $H$ mentioned in the remark above, as shown in Figure~\ref{fig:051051}(b). After recovering $G$, we have a planar graph $\widehat{G}_{b_1,b_3}$. As a second case, assume that $b_2 \adj \{b'_1, b'_2, b'_3, c'_1\}$, and so $b'_5 \adj \{b_3, b_4, b_5, c_1\}$ again. Using the restoring method, we construct $G_{b_1,b_2}$ so that $\widehat{G}_{b_1,b_2}$ is the $H$ mentioned in the remark above, as shown in Figure~\ref{fig:051051}(c), (d) and (e). After recovering $G$, we have planar graphs $\widehat{G}_{b_1,b_3}$ in all cases. If $c_1 \nadj c'_1$, we assume $c'_1 \adj \{b_3,b_4,b_5\}$. Then, to avoid the first case, we may say $b_2 \adj \{b'_1, b'_2, b'_3, b'_5\}$. Furthermore, to avoid the second case, we conclude that $b'_4 \adj \{b_1, b_3, b_4, b_5\}$ and $b'_5 \adj \{b_2, b_3, b_4, b_5\}$. By connecting the remaining edges, we obtain Figure~\ref{fig:051051}(f). After recovering $G$, we have a planar graph $\widehat{G}_{b_1,b_3}$. Otherwise, $c_1 \adj c'_1$, so we may assume $c'_1 \adj \{b_4,b_5, c_1\}$; then, to avoid the first case, we may say $b_2 \adj \{b'_1, b'_2, b'_3, b'_5\}$ and $b_3 \adj \{b'_1, b'_2, b'_4, b'_5\}$. Furthermore, to avoid the second case, we conclude that $\{b_4, b_5 \} \adj \{b'_3, b'_4, b'_5, c'_1\}$. In a similar way, we obtain Figure~\ref{fig:051051}(g). This graph has an embedding which does not have any non-trivially knotted cycles as shown in Figure~\ref{fig:051051}(h). \begin{figure}[h!] \includegraphics{fig19.eps} \caption{Case of $[A]=[0,5,1]$ and $[B]=[0,5,1]$.} \label{fig:051051} \end{figure} \subsection{$[A]=[0,5,1]$ and $[B]=[0,2,5]$.}\ First we remark that $NV_3(b_i,b_j) + NV_4(b_i,b_j) \neq 6$ for any pair $i,j$. For otherwise, $\widehat{G}_{b_i,b_j}$ has 9 edges and at least two degree 4 vertices because $V_3(b_i,b_j)$ has at most one vertex. In this case, $b'_1$ and $b'_2$ have at least two common neighbors in $A$. First assume that there are exactly two common neighbors in $A$, one each of degree 3 and 4: say, $b'_1 \adj \{b_1, b_2, b_3, c_1\}$ and $b'_2 \adj \{b_1, b_4, b_5, c_1\}$. If $NV_3(b_2,b_3) = 5$, then $NV_3(b_2,b_3) + NV_4(b_2,b_3) = 6$. Therefore $V_3(b_2,b_3)$ has at least two vertices; say, $\{ b_2, b_3 \} \adj \{c'_1, c'_2 \}$ and similarly $\{ b_4, b_5 \} \adj \{c'_3, c'_4 \}$. We further assume that $b_1 \adj c'_1$. Now we distinguish three sub-cases. First assume that $b_1 \adj c'_2$. Then $b_4 \adj c'_5$, and so $NV_3(b_1,b_4) + NV_4(b_1,b_4) = 6$. Second, assume that $b_1 \adj c'_3$ (similarly for $b_1 \adj c'_4$). We may assume that $c'_5 \adj b_2$.
Using the restoring method, we construct $G_{b_1,b_2}$ as shown in Figure~\ref{fig:051025}(a). After recovering $G$, we have a planar graph $\widehat{G}_{b_4,b_5}$. Thirdly, we assume that $b_1 \adj c'_5$. If $b_4 \nadj c'_5$, then $b_4 \adj c'_2$ and thus $NV_3(b_1,b_4) + NV_4(b_1,b_4) = 6$. Therefore $b_4 \adj c'_5$, and similarly $b_5 \adj c'_5$. By connecting the remaining edges, we obtain the same graph as Figure~\ref{fig:051025}(a) after relabelling. Without loss of generality, we assume that $\{ b'_1, b'_2 \} \adj \{b_1, b_2 \}$. From the above remark, $NV_3(b_1,b_2) \leq 3$. Now assume that $NV_3(b_1,b_2)=3$; say, $b_1 \adj \{ c'_1, c'_2 \}$ and $b_2 \adj \{ c'_1, c'_3 \}$. If $NV_Y(b_1,b_2) = 1$, then again $\widehat{G}_{b_1,b_2}$ has 9 edges and a degree 4 vertex $b_3$. So assume that $c'_1 \adj b_3$. If $b_4 \nadj c'_2$, then we have $NV_3(b_1,b_4) + NV_4(b_1,b_4) = 6$. Therefore $b_4 \adj c'_2$, and similarly $\{b_4, b_5\} \adj \{c'_2, c'_3\}$. Using the restoring method, we construct $G_{b_1,b_2}$ as shown in Figure~\ref{fig:051025}(b), where $\widehat{G}_{b_1,b_2}$ must be isomorphic to $K_{3,3}+e_2$. Then $\widehat{G}_{b_3,b_5}$ has 9 edges and a degree 4 vertex $b_1$. Finally we assume that $NV_3(b_1,b_2)=2$; say, $\{ b_1, b_2 \} \adj \{ c'_1, c'_2 \}$. There is a degree 4 vertex $b_i$ among $b_3, b_4, b_5$ which is not adjacent to $c'_1, c'_2$. Obviously, $NV_3(b_1,b_i) + NV_4(b_1,b_i) = 6$. \begin{figure}[h!] \includegraphics{fig20.eps} \caption{Case of $[A]=[0,5,1]$ and $[B]=[0,2,5]$.} \label{fig:051025} \end{figure} \subsection{$[A]=[0,2,5]$ and $[B]=[0,2,5]$, and the twin restoring method.}\ \label{subsec:89} If there is a degree 4 vertex $b_i$ in $A$ which is not adjacent to degree 4 vertices in $B$, or vice versa, then $NV_3(b_i,b'_1) \geq 7$, implying $|\widehat{E}_{b_i,b'_1}| \leq 8$. Therefore we assume that $b_1 \adj b'_1$ and $b_2 \adj b'_2$. Now we distinguish three cases: (1) $b_1 \nadj b'_2$ and $b_2 \nadj b'_1$, (2) $b_1 \adj b'_2$ and $b_2 \nadj b'_1$, or (3) $b_1 \adj b'_2$ and $b_2 \adj b'_1$. In case (1), we first assume that $NV_3(b_1,b_2) =3$, say $\{b_1, b_2\} \adj \{ c'_1, c'_2, c'_3\}$. If some two vertices among $c'_1, c'_2, c'_3$ are adjacent to the same degree 3 vertex $c_i$ in $A$, then $\widehat{G}_{b_1,b'_2}$ has 9 edges and a 2-cycle $(b_2 c_i)$. Thus we further assume that $c_1 \adj c'_1$, $c_2 \adj c'_2$, and $c_3 \adj c'_3$. Using the restoring method, we construct two graphs $G_{b_1,b_2}$ as shown in Figure~\ref{fig:025025}(a) and (b). In both cases, after recovering $G$, we have planar graphs $\widehat{G}_{b'_1,c'_3}$. We now assume that $NV_3(b_1,b_2) =4$, say $b_1 \adj \{ c'_1, c'_2, c'_3\}$ and $b_2 \adj \{ c'_1, c'_2, c'_4\}$. By the same argument as above, we further assume that $c_1 \adj c'_1$ and $c_2 \adj c'_2$. If $c_1 \adj c'_3$, then $\widehat{G}_{b_2,b'_1}$ has 9 edges and a 3-cycle $(b_1 c_1 c'_3)$. Therefore $c_1 \nadj c'_3$, and similarly $c_1 \nadj c'_4$, $c_2 \nadj c'_3$, and $c_2 \nadj c'_4$. Using the restoring method, we construct two graphs $G_{b_1,b_2}$ where there is no connection between degree 2 vertices in $A$ and $B$ as drawn in Figure~\ref{fig:025025}(c) and (d). In both cases, after recovering $G$, we have planar graphs $\widehat{G}_{b'_1,c'_3}$. Lastly we have $NV_3(b_1,b_2) = NV_3(b'_1,b'_2) =5$, and we may have the connections shown in Figure~\ref{fig:025025}(e). Here if $c_1 \nadj c'_1$, then there is no proper bipartition for $\widehat{G}_{b_1,b_2}$.
If $c_i \adj \{c'_2, c'_3\}$ (similarly for $\{c'_4, c'_5\}$), then $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 3-cycle $(b_1 c'_2 c'_3)$. Thus all $c_i$, $i=2,3,4,5$, are adjacent to one of $\{c'_2, c'_3\}$ and another one of $\{c'_4, c'_5\}$. Similarly all $c'_i$, $i=2,3,4,5$, are adjacent to one of $\{c_2, c_3\}$ and one of $\{c_4, c_5\}$. Using the restoring method, we construct a graph $G_{b_1,b_2}$ satisfying the above conditions as shown in Figure~\ref{fig:025025}(f). After recovering $G$, we have an intrinsically knotted graph, from which we obtain the Heawood graph by deleting two edges: one connecting $b_1$ and $b'_1$ and the other between $b_2$ and $b'_2$. \begin{figure}[h!] \includegraphics{fig21.eps} \caption{Case of $[A]=[0,2,5]$ and $[B]=[0,2,5]$.} \label{fig:025025} \end{figure} In the remaining cases (2) and (3), we remark that both $\widehat{G}_{b_1,b_2}$ and $\widehat{G}_{b'_1,b'_2}$ have at most 9 edges. It is sufficient to show that $NV_3(b_1,b_2) + NV_4(b_1,b_2) + NV_Y(b_1,b_2) \geq 6$ since $NE(b_1,b_2) = 8$. If $NV_3(b_1,b_2) + NV_4(b_1,b_2) = 5$, then $NV_Y(b_1,b_2) = 1$, so we are done. If $NV_3(b_1,b_2) + NV_4(b_1,b_2) = 4$, then $V_3(b_1,b_2)$ has exactly two vertices, say $c'_1, c'_2$. Let $c'_1 \adj c_1$. If $c_1 \nadj c'_2$, then $NV_Y(b_1,b_2) = 2$. For otherwise, when the remaining edge adjacent to $c_1$ is deleted in $\widehat{G}_{b_1,b_2}$, we can eventually delete one more edge. In case (2), we first assume that $NV_3(b_1,b_2) =3$, there are two degree 3 vertices in $B$ such that $\{b_1, b_2\} \adj \{ c'_1, c'_2\}$. Then $\widehat{G}_{b'_1,b'_2}$ has 9 edges by the remark and a 3-cycle $(b_2 c'_1 c'_2)$. Now assume that $NV_3(b_1,b_2) =5$, say $b_1 \adj \{c'_1, c'_2\}$ and $b_2 \adj \{c'_3, c'_4, c'_5\}$. By considering $\widehat{G}_{b_1,b_2}$, there are no degree 2 vertices in $A$. Therefore we may assume that $b'_1, c_1$ and $c_2$ lie in the same partition, implying $b'_1 \adj \{c_3, c_4, c_5\}$ as in Figure~\ref{fig:rest3}(b). The rest of process follows the twin restoring method, described in Subsection~\ref{subsec:restoring3} as an example. Eventually we conclude that $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 5-cycle. Finally we may assume that $NV_3(b_1,b_2) = NV_3(b'_1,b'_2) = 4$, say $b_2 \adj \{c'_1, c'_2, c'_3\}$, $b_1 \adj \{c'_3, c'_4\}$, $b'_1 \adj \{c_1, c_2, c_3\}$, and $b'_2 \adj \{c_3, c_4\}$ as in Figure~\ref{fig:025025}(g). We remark that each of $c_1, c_2, c_4$ (similarly for $c'_1, c'_2, c'_4$) is adjacent to at most one of $c'_1, c'_2, c'_3$, for otherwise, $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 2 or 3-cycle. If $c_3 \adj c'_3$ or $c_3 \adj c'_4$, then $\widehat{G}_{b_2,b'_1}$ has 9 edges and a 2 or 3-cycle. Assume that $c_3 \adj c'_5$. In this case, $b_1$ and $c_5$ are adjacent in $\widehat{G}_{b_2,b'_1}$, and by the above remark, $c'_3 \adj c_5$ directly. By applying the same remark again, $c_4 \adj c'_4$. Without loss of generality, $c_4 \adj c'_1$ and $c'_4 \adj c_1$, and thus $c_1 \adj c'_5$ and $c'_1 \adj c_5$. By connecting the remaining edges, we obtain the graph as Figure~\ref{fig:025025}(h). Then we have a planar graph $\widehat{G}_{b'_1,c'_3}$. Therefore we may assume that $c_3 \adj c'_1$ and $c'_3 \adj c_1$ as Figure~\ref{fig:025025}(i). Furthermore, we have $c_1 \adj c'_5$ because if $c_1 \adj c'_1, c'_2$ or $c'_4$, then $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 2 or 3-cycle. Similarly we have $c'_1 \adj c_5$. 
By the bipartition for $\widehat{G}_{b'_1,b'_2}$, $c'_2 \adj \{ c_2, c_4\}$, and similarly $c_2 \adj c'_4$. By connecting the remaining edges, we obtain the graph in Figure~\ref{fig:025025}(j). After recovering $G$, we have an intrinsically knotted graph, from which we obtain the Heawood graph by deleting two edges: one connecting $b_1$ and $b'_1$, the other connecting $b_2$ and $b'_2$. In case (3), we first assume that $NV_3(b_1,b_2) =2$; that is, there are two degree 3 vertices in $B$ such that $\{b_1, b_2\} \adj \{ c'_1, c'_2\}$. Then, by the remark, $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 2-cycle $(c'_1 c'_2)$. Now assume that $NV_3(b_1,b_2) =4$, say $b_1 \adj \{c'_1, c'_2\}$ and $b_2 \adj \{c'_3, c'_4\}$. By considering $\widehat{G}_{b_1,b_2}$, there are no degree 2 vertices in $A$. Therefore we may assume that $c_3,c_4,c_5$ lie in the same partition, implying $c'_5 \adj \{c_3, c_4, c_5\}$. Now, we remark that none of the five $c_i$'s can be adjacent to both $c'_1$ and $c'_2$ (and similarly for both $c'_3$ and $c'_4$), for otherwise, $\widehat{G}_{b'_1,b'_2}$ has a 3-cycle $(c_i c'_1 c'_2)$. Without loss of generality, we say that $c_1 \adj \{b'_1, c'_1, c'_3\}$ and $c_2 \adj \{b'_2, c'_2, c'_4\}$. We distinguish two cases: either $b'_1$ and $b'_2$ are adjacent to the same vertex $c_3$, or they are adjacent to different vertices, say $b'_1 \adj c_3$ and $b'_2 \adj c_4$. By connecting the remaining edges, we obtain the graphs in Figure~\ref{fig:025025}(k) and (l), respectively. In both cases, after recovering $G$, we have planar graphs $\widehat{G}_{b_1,c_3}$. Finally assume that $NV_3(b_1,b_2) = NV_3(b'_1,b'_2) = 3$, say $b_1 \adj \{c'_1, c'_2\}$, $b_2 \adj \{c'_1, c'_3\}$, $b'_1 \adj \{c_1, c_2\}$, and $b'_2 \adj \{c_1, c_3\}$. If $c'_1 \adj c_1$, then using the restoring method, we construct a graph $G_{b_1,b_2}$ as drawn in Figure~\ref{fig:025025}(m). After recovering $G$, we have a planar graph $\widehat{G}_{b'_1,b'_2}$, which has 9 edges and a 2-cycle $(c'_2 c'_3)$. If $c'_1 \adj c_2$, we partially construct $\widehat{G}_{b_1,b_2}$ as in Figure~\ref{fig:025025}(n). As in the figure, $c'_4$ and $c'_5$ lie in the same partition, and so do $c_4$ and $c_5$. Then $\widehat{G}_{b'_1,b'_2}$ has 9 edges and a 3-cycle; more precisely, if $c_1$ and $c'_4$ lie in different partitions, then $c_1 \adj c'_4$ (or $c_1 \adj c'_5$), in which case the 3-cycle is $(c_4 c_5 c'_5)$, and otherwise $c_3$ and $c'_4$ lie in different partitions, so that $c_3 \adj \{c'_4, c'_5\}$, in which case the 3-cycle is $(c_4 c'_4 c'_5)$. Therefore we may assume that $c_1 \adj c'_4$ and $c'_1 \adj c_4$. Using the restoring method, we construct a graph $G_{b_1,b_2}$ as drawn in Figure~\ref{fig:025025}(o). After recovering $G$, we have an intrinsically knotted graph, from which we obtain $C_{14}$ by deleting two edges: one connecting $b_1$ and $b'_1$, and another connecting $b_2$ and $b'_2$. \begin{thebibliography}{99} \bibitem{BM} J. Barsotti and T. Mattman, {\em Graphs on 21 edges that are not $2$--apex}, Involve \textbf{9} (2016) 591--621. \bibitem{BBFFHL} P. Blain, G. Bowlin, T. Fleming, J. Foisy, J. Hendricks and J. LaCombe, {\em Some results on intrinsically knotted graphs}, J. Knot Theory Ramifications \textbf{16} (2007) 749--760. \bibitem{FMMNN} E. Flapan, T. Mattman, B. Mellor, R. Naimi and R. Nikkuni, {\em Recent developments in spatial graph theory\/} in Knots, links, spatial graphs, and algebraic invariants, Contemp. Math. \textbf{689} (2017) 81--102. \bibitem{GMN} N. Goldberg, T. Mattman and R.
Naimi, {\em Many, many more intrinsically knotted graphs}, Algebr. Geom. Topol. \textbf{14} (2014) 1801--1823. \bibitem{HAMM} S. Huck, A. Appel, M-A. Manrique and T. Mattman, {\em A sufficient condition for intrinsic knotting of bipartite graphs}, Kobe J. Math. \textbf{27} (2010) 47--57. \bibitem{JKM} B. Johnson, M. Kidwell and T. Michael, {\em Intrinsically knotted graphs have at least $21$ edges}, J. Knot Theory Ramifications \textbf{19} (2010) 1423--1429. \bibitem{KLLMO} H. Kim, H. J. Lee, M. Lee, T. Mattman and S. Oh, {\em A new intrinsically knotted graph with 22 edges}, Topology Appl. \textbf{228} (2017) 303--317. \bibitem{KMO} H. Kim, T. Mattman and S. Oh, {\em Bipartite intrinsically knotted graphs with 22 edges}, J. Graph Theory \textbf{85} (2017) 568--584. \bibitem{KMO2} H. Kim, T. Mattman and S. Oh, {\em More intrinsically knotted graphs with 22 edges and the restoring method}, J. Knot Theory Ramifications \textbf{27} (2018) 1850059. \bibitem{KMO3} H. Kim, T. Mattman and S. Oh, {\em Triangle-free intrinsically knotted graphs with 22 edges and the twin-restoring method}, (in preparation). \bibitem{LKLO} M. Lee, H. Kim, H. J. Lee and S. Oh, {\em Exactly fourteen intrinsically knotted graphs have 21 edges}, Algebr. Geom. Topol. \textbf{15} (2015) 3305--3322. \bibitem{M} T. Mattman, {\em Graphs of 20 edges are 2-apex, hence unknotted}, Algebr. Geom. Topol. \textbf{11} (2011) 691--718. \bibitem{OT} M. Ozawa and Y. Tsutsumi, {\em Primitive spatial graphs and graph minors}, Rev. Mat. Complut. \textbf{20} (2007) 391--406. \bibitem{RS} N. Robertson and P. Seymour, {\em Graph minors XX, Wagner's conjecture}, J. Combin. Theory Ser. B \textbf{92} (2004) 325--357. \end{thebibliography} \end{document}
2205.06174v1
http://arxiv.org/abs/2205.06174v1
Exponential Stability of Large BV Solutions in a Model of Granular flow
\documentclass[11pt]{article} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{graphics,graphicx} \usepackage{epsfig} \usepackage{fullpage} \usepackage{cancel} \usepackage{xcolor} \usepackage{epsfig,psfrag} \usepackage{comment} \providecommand{\keywords}[1] { \small {\textit{Keywords:}} #1 } \providecommand{\subjclass}[1] { { \textit{2020 MSC:}} #1 } \usepackage[margin=2cm]{geometry} \usepackage{helvet} \renewcommand{\familydefault}{\sfdefault} \usepackage{hyperref} \usepackage{titlesec} \setcounter{secnumdepth}{4} \titleformat{\paragraph} {\normalfont\normalsize\bfseries}{\theparagraph}{1em}{} \titlespacing*{\paragraph} {0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex} \newcommand{\email}[1]{\href{mailto:#1}{\tt #1}} \title{Exponential Stability of Large BV Solutions\\ in a Model of Granular flow} \author{ Fabio Ancona \thanks{Dipartimento di Matematica ``Tullio Levi-Civita'', Universit\`a di Padova, Via Trieste 63, 35121 Padova, Italy. \email{[email protected]}} \and Laura Caravenna \thanks{Dipartimento di Matematica ``Tullio Levi-Civita'', Universit\`a di Padova, Via Trieste 63, 35121 Padova, Italy. \email{[email protected]}} \and Cleopatra Christoforou \thanks{Department of Mathematics and Statistics, Univer. of Cyprus, 1678 Nicosia, Cyprus. \email{[email protected]}} } \begin{document} \maketitle \baselineskip=18pt \renewcommand{\footnote}{\endnote} \newcounter{stepnb} \newcounter{substepnb} rststep}{\setcounter{stepnb}{0}} rstsubstep}{\setcounter{substepnb}{0}} \newcommand{\step}[1]{{{\sc \addtocounter{stepnb}{1}\noindent $\circleddash$ Step \arabic{stepnb}:} #1.}} \newcommand{\substep}[1]{\vskip.3\baselineskip{\bf \addtocounter{substepnb}{1} \arabic{stepnb}.\arabic{substepnb}: #1.} } \newcommand{\sgn}{\mathrm{sgn}} \newcommand{\wkarr}{\; \rightharpoonup \;} \def\Weak{\,\,\relbar\joinrel\rightharpoonup\,\,} \def\del{\partial} \font\msym=msbm10 \def\Real{{\mathop{\hbox{\msym \char '122}}}} \def\R{\Real} \def\cS{\mathcal{S}} \def\A{\mathbb A} \def\Z{\mathbb Z} \def\K{\mathbb K} \def\J{\mathbb J} \def\L{\mathbb L} \def\D{\mathbb D} \def\M{\mathbb M} \def\Integers{{\mathop{\hbox{\msym \char '132}}}} \def\Complex{{\mathop{\hbox{\msym\char'103}}}} \def\C{\Complex} \newcommand{\ptpS}[4]{ \frac{ \partial\mathfrak {p} \mathbf {S}_{1} }{\partial #1} \left(#3;#4\right)} \newcommand{\pthS}[4]{ \frac{ \partial\mathfrak {h} \mathbf {S}_{2} }{\partial #1} \left(#3;#4\right)} \newcommand{\ptlls}[2]{ \frac{\partial^{}#2}{\partial #1} } \newcommand{\ptll}[1]{ \frac{\partial^{}}{\partial #1} } \newcommand{\ptl}[2]{ \frac{\partial^{#2}}{\partial #1^{#2}} } \newcommand{\ptltdu}[2]{ \frac{\partial^{3}}{\partial {(#1)^{2}}\partial{#2}} } \newcommand{\ptld}[2]{ \frac{\partial^{2}}{\partial {#1}\partial{#2}} } \newcommand{\ddn}[2]{ \frac{d^{#2}}{d #1^{#2}} } \def\Fdot{\dot F} \def\bU{\bar U} \def\bF{\bar F} \def\bFdot{ \dot {\bar F}} \def\bQ{\bar Q} \def\bZ{\bar Z} \def\bSigma{\bar \Sigma} \def\sC{{\gamma}} \def\sR{{\rho}} \def\bu{\bar u} \def\bv{\bar v} \def\btheta{\bar \theta} \def\bsigma{\bar \sigma} \def\bareta{\bar \eta} \def\bkappa{\bar \kappa} \def\bmu{\bar \mu} \def\br{\bar r} \def\barf{\bar f} \newcommand{\Bild}[2]{\begin{center}\begin{tabular}{l} \setlength{\epsfxsize}{#1} \epsfbox{#2} \end{tabular}\end{center}} \newcommand{\lessim}{\stackrel{<}{\sim}} \newcommand{\gesim}{\stackrel{>}{\sim}} \newcommand{\ignore}[1]{} \newcommand{\oh}{{\textstyle\frac{1}{2}}} \newcommand{\red}[1]{{\color{purple}#1}} \newcommand{\blue}[1]{{{{\color{blue}#1\frown}}}} \newtheorem{lemma}{Lemma} 
\newtheorem{theorem}[lemma]{Theorem} \newtheorem{maintheorem}[lemma]{Main Theorem} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{definition}[lemma]{Definition} \newtheorem{remark}[lemma]{Remark} \newtheorem{remarks}[lemma]{Remarks} \newtheorem{Notation}[lemma]{Notation} \newcommand{\be}[1]{\begin{equation}\label{#1}} \newcommand{\ee}{\end{equation}} \newcommand{\bes}{\begin{equation*}} \newcommand{\ees}{\end{equation*}} \def\ba#1\ea{\begin{align}#1\end{align}} \def\bas#1\eas{\begin{align*}#1\end{align*}} \newcommand{\Suno}[2]{{\mathbf S_{1}\!\left(#1;#2\right)}} \newcommand{\Sdue}[2]{{\mathbf S_{2}\!\left(#1;#2\right)}} \newcommand{\SC}[3]{{\mathbf {S}_{#1}\!\left(#2;#3\right)}} \newcommand{\hSC}[3]{{\mathfrak {h}\mathbf {S}_2\!\left(#2;#3\right)}} \newcommand{\pSC}[3]{{\mathfrak {p}\mathbf {S}_1\!\left(#2;#3\right)}} \newcommand{\dSdue}[2]{{\mathbf {\dot S}_{2}\!\left(#1;#2\right)}} \newcommand{\dSuno}[2]{{\mathbf {\dot S}_{1}\!\left(#1;#2\right)}} \newcommand{\dotSuno}[2]{{\dot \mathbf S_{1}\!\left(#1;#2\right)}} \newcommand{\dotSdue}[2]{{\dot \mathbf S_{2}\!\left(#1;#2\right)}} \newcommand{\PHI}[1]{{\left\lvert#1-1\right\rvert}} \newcommand{\ca}{{\cal A}} \newcommand{\cd}{{\cal D}} \newcommand{\ce}{{\cal E}} \newcommand{\cj}{{\cal J}} \newcommand{\co}{{\cal O}} \newcommand{\cs}{{\cal S}} \newcommand{\cp}{{\cal P}} \newcommand{\cc}{{\cal C}} \newcommand{\cw}{{\cal W}} \newcommand{\ccp}{{\cal P}} \newcommand{\cb}{{\cal B}} \newcommand{\cl}{{\cal L}} \newcommand{\cg}{{\cal G}} \newcommand{\cq}{{\mathcal Q}} \newcommand{\cM}{{\cal M}} \newcommand{\cK}{{\cal K}} \newcommand{\er}{{z}} \newcommand{\dist}{{\mbox{dist}}} \newcommand{\m}{\mbox{\boldmath $ \mu$}} \newcommand{\ebb}{\mbox{\boldmath $\epsilon$}} \newcommand{\oeps}{\overline{\varepsilon}} \newcommand{\kr}{\hbox{Ker}} \newcommand{\qinfo}{\stackrel{\circ}{Q}_{\infty}} \newcommand{\mat}{\hbox{Mat}} \newcommand{\cof}{\hbox{cof}\,} \renewcommand{\det}{\hbox{det}\,} \renewcommand{\del}{\partial} \newcommand{\delt}{\partial_t} \newcommand{\dela}{\partial_{\alpha}} \newcommand{\eps}{\varepsilon} \newcommand{\barQT}{{\overline Q}_T} \newcommand{\esssup}{\operatornamewithlimits{esssup}} \newcommand{\essinf}{\operatornamewithlimits{essinf}} \font\msym=msbm10 \def\Real{{\mathop{\hbox{\msym \char '122}}}} \def\Integers{{\mathop{\hbox{\msym \char '132}}}} \font\smallmsym=msbm7 \def\smr{{\mathop{\hbox{\smallmsym \char '122}}}} \def\torus{{{\text{\rm T}} \kern-.42em {\text{\rm T}}}} \def\T3{\torus^3} \def\div{\hbox{div}\,} \date{} \numberwithin{equation}{subsection} \numberwithin{lemma}{section} \begin{abstract} We consider a $2\times 2$ system of hyperbolic balance laws, in one-space dimension, that describes the evolution of a granular material with slow erosion and deposition. The dynamics is expressed in terms of the thickness of a moving layer on top and of a standing layer at the bottom. The system is linearly degenerate along two straight lines in the phase plane and genuinely nonlinear in the subdomains confined by such lines. In particular, the characteristic speed of the first characteristic family is strictly increasing in the region above the line of linear degeneracy and strictly decreasing in the region below such a line. The non dissipative source term is the product of two quantities that are transported with the two different characteristic speeds. 
The global existence of entropy weak solutions of the Cauchy problem for such a system was established by Amadori and Shen~\cite{AS} for initial data with bounded but possibly large total variation, under the assumption that the initial height of the moving layer be sufficiently small. In this paper we establish the Lipschitz ${\bf L^1}$-continuous dependence of the solutions on the initial data with a Lipschitz constant that grows exponentially in time. The proof of the ${\bf L^1}$-stability of solutions is based on the construction of a Lyapunov like functional equivalent to the ${\bf L^1}$-distance, in the same spirit of the functional introduced by Liu and Yang~\cite{LY} and then developed by Bressan, Liu, Yang~\cite{bly} for systems of conservation laws with genuinely nonlinear or linearly degenerate characteristic fields. \end{abstract} \keywords{Balance laws; granular flow; stability; large $BV$; weakly linearly degenerated system.} \subjclass{35L65; 76T25; 35L45.} \tableofcontents \section{Introduction} \label{intro} We consider a model for the flow of granular material proposed by Hadeler and Kuttler~\cite{HK} where the evolution of a moving layer on top and of a resting layer at the bottom is described by the two balance laws: \be{E:S1system-d} \begin{aligned} h_t&=\div(h\nabla \mathfrak{s})-(1-|\nabla \mathfrak{s}|)h,\\ \mathfrak{s}_t&=(1-|\nabla \mathfrak{s}|)h. \end{aligned} \ee Here, the unknown $h=h(x,t)\in\R$ and $\mathfrak{s}(x,t)\in\R$ represent, respectively, the thickness of the rolling layer and the height of the standing layer, while $t\geq 0$ and $x\in \R^n$ are the time and space variables. The evolution equations~\eqref{E:S1system-d} show that the moving layer slides downhill with speed proportional to the slope of the standing layer in the direction of steepest descent. The model is written in normalised form, assuming that the critical slope is $|\nabla \mathfrak{s}|=1$. This means that, if $|\nabla \mathfrak{s}|>1$, then grains initially at rest are hit by rolling matter of the moving layer and hence they start moving as well. As a consequence the moving layer gets thicker. On the other hand, if $|\nabla \mathfrak{s}|<1$, then rolling grains can be deposited on the standing bed. Hence the moving layer becomes thinner. Typical examples of granular material whose dynamics is described by such models are dry sand and gravel in dunes and heaps, or snow in avalanches. In the one-space dimensional setting, assuming that the thickness of the moving layer and the slope of the resting layer remain non-negative, if we differentiate the second equation of~\eqref{E:S1system-d} with respect to $x\in\R$ and set $p\doteq \mathfrak{s}_x$, we obtain the system of balance laws \be{S1system} \begin{aligned} &h_t-(hp)_x=(p-1)h,\\ &p_t+((p-1)h)_x=0, \end{aligned} \ee with $h\ge 0$ and $p\ge 0$. The purpose of the present paper is to study the well-posedness of the Cauchy problem for~\eqref{S1system}. Observing that the Jacobian matrix of the flux function $F(h, p)= \big(hp, (p-1)h\big)$ associated to~\eqref{S1system} is \bes A(h,p)=\left[\begin{array}{cc} -p & -h\\ p-1 & h\end{array}\right], \ees by a direct computation one finds that the system~\eqref{S1system} is strictly hyperbolic on the domain \begin{equation} \label{Om-def} \Omega\doteq\big\{(h,p) : h\geq 0,\ p>0\big\} \end{equation} and weakly linearly degenerate at the point $(h,p)=(0,1)$. 
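As a quick check of the first assertion, note that the characteristic polynomial of $A(h,p)$ is
\bes
\det\big(A(h,p)-\lambda I\big)=\lambda^2-(h-p)\,\lambda-h\,,
\ees
whose discriminant $(h-p)^2+4h$ equals $p^2>0$ when $h=0$ and is bounded below by $4h>0$ when $h>0$; hence the eigenvalues of $A(h,p)$ are real and distinct at every point of $\Omega$. The structure of the two characteristic fields can be described more precisely as follows.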
Namely, letting $\lambda_1(h,p)<\lambda_2(h,p)$ denote the eigenvalues of $A(h,p)$ when $p>0$, one can verify that the first characteristic family is genuinely nonlinear on each domain $\{(h,p)\, | \, h\geq 0,\, p>1\}$, $\{(h,p)\, | \, h\geq 0,\, 0<p<1\}$, and linearly degenerate on the semiline $\{(h,1)\, | \, h\geq 0 \}$ since, along the rarefaction curves of the first family, $\lambda_1$ is strictly increasing for $p>1$, strictly decreasing for $p<1$ and constant for $p=1$. Moreover, each region $\{(h,p)\, | \, h\geq 0,\, p>1\}$, $\{(h,p)\, | \, h\geq 0,\, 0<p<1\}$ is an invariant domain for solutions of the Riemann problem. Instead, the second characteristic family is genuinely nonlinear for $h>0$ and linearly degenerate along $h=0$. We recall that hyperbolic systems of balance laws generally do not admit smooth solutions and, therefore, weak solutions in the sense of distributions are considered. Moreover, for the sake of uniqueness, an entropy criterion for admissibility is usually added. In~\cite{tplrpnpn} T.P. Liu proposed an admissibility criterion valid for general systems of conservation laws with non genuinely nonlinear characteristic fields. For system~\eqref{S1system}, since the characteristic families enjoy the above properties, this criterion is equivalent to the classical {\it Lax stability condition}: \begin{description} \item[]\hspace{19pt} A shock connecting the left state $(h_\ell,p_\ell)$ and the right state $(h_r,p_r)$, traveling with speed \ $s$, \ is an admissible discontinuity of the $k$-th family if \begin{equation} \label{Lax-adm} \lambda_k\big((h_\ell,p_\ell)\big)\geq s\geq\lambda_k\big((h_r,p_r)\big). \end{equation} \end{description} Thus, throughout the paper, by an {\it entropy-admissible weak solution} of~\eqref{S1system} we shall always mean a standard weak solution, admissible in the sense of Lax. Global existence of classical smooth solutions to~\eqref{S1system} was established for a special class of initial data by Shen~\cite{S}. In the case of more general initial data with bounded but possibly large total variation, the existence of entropy weak solutions globally defined in time was proved by Amadori and Shen~\cite{AS}. In the present paper we tackle the problem of stability of entropy weak solutions to~\eqref{S1system} with respect to the ${\bf L}^1$--topology. For systems without source term and small BV data, the Lipschitz ${\bf L}^1$-continuous dependence of solutions on the initial data was first established by Bressan and collaborators in~\cite{BC,BCP} under the assumption that all characteristic families are genuinely nonlinear (GNL) or linearly degenerate (LD), relying on a (lengthy and technical) homotopy method. This approach requires careful a-priori estimates on a suitably defined weighted norm of the generalized tangent vector to the flow generated by the system of conservation laws. These results were then extended with the same techniques in~\cite{AM} to a class of $2\times 2$ systems with non GNL characteristic fields that does not comprise the convective part of system~\eqref{S1system}. A much simpler, more transparent proof of the ${\bf L}^1$-stability of solutions for conservation laws with GNL or LD characteristic fields was later achieved by a technique introduced by Liu and Yang in~\cite{LY} and then developed in~\cite{bly}. The heart of the matter here is to construct a Lyapunov-like nonlinear functional, equivalent to the ${\bf L}^1$-distance, which is decreasing in time along any pair of solutions.
Extensions of ${\bf L}^1$-stability results to the setting of large BV data were obtained for systems of conservation laws with Temple type characteristic fields, adopting the homotopy approach in~\cite{BG,B1,B2}, and constructing a Lyapunov-like functional in~\cite{CC1}. This latter approach was followed also in~\cite{L1,L2,LT} to prove ${\bf L}^1$-stability of solutions for general systems with GNL or LD characteristic fields, within a special class of initial data with large total variation. In the case of balance laws with GNL or LD characteristic families and small BV data, the ${\bf L}^1$-stability of solutions was first obtained in~\cite{CP} for $2\times 2$ systems via the homotopy method and the a-priori bounds on a weighted distance. Next, this result was established for $N\times N$ systems by producing a Lyapunov-like functional for balance laws with dissipative source~\cite{AG} and non-resonant source~\cite{AGG}. An extension of these results for systems of balance laws of Temple class with large BV data is given in~\cite{CC2}. We remark that all of the above results, with the only exception of~\cite{AM, B1,CC2}, deal with systems having GNL or LD characteristic fields. Unfortunately, for systems such as~\eqref{S1system} that do not fulfill these classical assumptions, the derivation of a-priori bounds on generalized tangent vectors, already hindered by heavy technicalities in the classical Lax setting, is further hampered by the occurrence of richer nonlinear wave phenomena exhibited by such equations. This is mainly due to the presence in the solutions of discontinuous waves of the first characteristic family, shocks or contact discontinuities, which may turn into rarefaction waves (and vice versa) after interactions with waves of the other family. In order to establish the ${\bf L}^1$-stability of solutions for system~\eqref{S1system}, we have thus followed the second approach, introducing in the present paper a Lyapunov-like functional controlling the growth of the ${\bf L}^1$-distance between pairs of approximate solutions with large BV data. Namely, in the same spirit of~\cite{bly}, we explicitly construct a functional $\Phi=\Phi(u,v)$ for piecewise constant functions $u,\,v\in {\bf L}^1(\R;\,\Omega)$, such that: \begin{enumerate} \item[(i)] it is equivalent to the $ {\bf L}^1$-distance. Namely, for every pair of piecewise constant functions $u=(u_h, u_p), v=(v_h,v_p)\in {\bf L}^1(\R;\,\Omega)$, with bounded total variation, there holds \begin{equation} \frac{1}{C}\cdot \big\|u-v\big\|_{\strut {\bf L}^1}\le \Phi(u,v)\le C\cdot \big\|u-v\big\|_{\strut {\bf L}^1} \end{equation} for some constant $C>0$ depending only on the system~\eqref{S1system}, on the total variation of $u,v$, and on the ${\bf L}^\infty$ norm of $u_h, v_h$. \item[(ii)] it is exponentially increasing in time along pairs of approximate solutions of~\eqref{S1system} generated by a front-tracking algorithm combined with an operator splitting scheme with time steps $t_k=k \Delta t$.
Namely, for every couple of such approximate solutions $u(x,t),\, v(x,t),$ the right limits of $u(\,\cdot,t), v(\,\cdot,t)$ at $t_h<t_k$ satisfy \begin{equation} \label{gammaest2} \begin{aligned} \Phi\big(u(\,\cdot,t_k+), v(\,\cdot,t_k+)\big) &\leq \Phi\big(u(\,\cdot,t_h+), v(\,\cdot,t_h+)\big) \big(1+{\mathcal{O}}(1)\Delta t\big)^{(k-h)}+ \\ \noalign{\smallskip} &\qquad + {\mathcal{O}}(1)\cdot \varepsilon\,\Delta t \sum_{i=1}^{k-h} \big(1+{\mathcal{O}}(1)\Delta t\big)^i \qquad\quad\forall~0\leq h<k, \end{aligned} \end{equation} where $\varepsilon$ denotes a small parameter that controls the errors in the wave speeds and the maximum size of rarefaction fronts in $u$ and in $v$. In particular, $\Phi$ is ``almost decreasing'' in time if the only effect of the convective part of~\eqref{S1system} is taken into account: \begin{equation} \label{gammaest1} \Phi\big(u(\,\cdot, \tau_2), v(\,\cdot, \tau_2)\big)\leq \Phi\big(u(\,\cdot, \tau_1), v(\,\cdot, \tau_1)\big)+ {\mathcal{O}}(1)\cdot \varepsilon (\tau_2-\tau_1)\qquad\forall~t_k<\tau_1<\tau_2<t_{k+1}. \end{equation} Here, and throughout the paper, we use the Landau symbol $\co(1)$ to denote a quantity whose absolute value satisfies a uniform bound that depends only on the system~\eqref{S1system}. In particular, this bound does not depend on the front tracking parameter $\varepsilon$, or on the two solutions $u,v$ considered. \end{enumerate} \noindent The value of $\Phi$ is defined as follows. Given two piecewise constant functions $u,\,v\in {\bf L}^1(\R;\,\Omega)$, for each $x\in\R$, connect $u(x)$ with $v(x)$ moving along the Hugoniot curves of the first and second families and let $\eta_i(x)$, $i=1,2$, denote the size of the corresponding $i$-shock in the jump $(u(x), v(x))$. Then, define \begin{equation} \label{Phi-def-1} \Phi\big(u,v)\doteq \sum_{i=1}^2 \int_{-\infty}^\infty W_i(x)\big|\eta_i(x)\big|~dx \cdot \exp(\kappa_\cg\mathcal{B} )\,, \end{equation} where the weights $W_i$ have the following form: \begin{equation} \label{W-def-1} W_i(x)\doteq \exp(\kappa_{i\ca 1}\cdot\mathcal{A}_{i,1}(x)+ \kappa_{i\ca 2}\cdot\mathcal{A}_{i,2}(x))\,, \qquad i=1,2\,, \end{equation} with \begin{equation} \label{W-def-2} \allowdisplaybreaks \begin{aligned} \mathcal{A}_{1,1}(x) &\doteq \Big\{ \text{total}\ \Big[\big[\text{strength}\big]\cdot\big[\text{distance from}~1~\text{of the p-component of the left state}\big] \Big] \\ \noalign{\vspace{-2 pt}} &\hskip 0.6in \text{of $1$-waves in $u$ and in $v$ which approach the $1$-wave $\eta_1(x)$} \Big\} \\ \noalign{\bigskip} \mathcal{A}_{1,2}(x) &\doteq \Big\{ \text{total}\ \big[\text{strength}\big] \ \text{of $2$-waves in $u$ and in $v$ which approach the $1$-wave $\eta_1(x)$} \Big\}\,, \\ \noalign{\bigskip} \mathcal{A}_{2,j}(x) &\doteq \Big\{ \text{total}\ \big[\text{strength}\big] \ \text{of $j$-waves in $u$ and in $v$ which approach the $2$-wave $\eta_2(x)$} \Big\}\,,\quad j=1,\,2 \\ \noalign{\bigskip} \mathcal{B} &\doteq \Big\{ \text{total \big[\text{strength}\big] of $u$ and of $v$} \Big\} + \Big\{\text{\big[\text{wave interaction potential}\big] of $u$ and of $v$} \Big\}\,. \end{aligned} \end{equation} Here, $\kappa_{i\ca j}$, $i,\,j=1,\,2$ and $\kappa_\cg$ denote suitable positive constants depending on the system~\eqref{S1system} that obey Conditions ${\bf \Sigma}$ given in the proof of Proposition~\ref{PropCond}. A precise definition of $W_1, W_2$ is given in Subsection~\ref{Ss:Lyapfunct}. 
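To give a rough indication of why the equivalence in (i) can be expected, observe that each quantity $\mathcal{A}_{i,j}(x)$ and $\mathcal{B}$ is nonnegative and is bounded in terms of the total variation of $u$ and $v$ (and, for $\mathcal{A}_{1,1}$, of the distance of the $p$-components from $1$), so that the weights satisfy $1\le W_i(x)\le W^*$ for a suitable constant $W^*$ depending only on these bounds. On the other hand, since the Hugoniot curves form a local system of coordinates around each state of the compact set where $u,v$ take their values, the sizes $\eta_i(x)$ satisfy
\bes
\frac{1}{C_0}\,\big|u(x)-v(x)\big|\;\le\;\big|\eta_1(x)\big|+\big|\eta_2(x)\big|\;\le\;C_0\,\big|u(x)-v(x)\big|
\ees
for a constant $C_0>0$ depending only on that compact set. Integrating over $x$ yields the two-sided bound in (i); the precise argument is carried out in Subsection~\ref{Ss:Lyapfunct}.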
Observe that the weights $W_i$ are defined similarly to the expression of the weight given in~\cite{bly} for GNL and LD characteristic fields in the sense that here we have the exponential version of them. However, notice that the main novelty of our functional is encoded in the weight $W_1$ and in particular in $\mathcal{A}_{1,1}$, whereas $\mathcal{A}_{1,2}$, $\mathcal{A}_{2,1}$, $\mathcal{A}_{2,2}$ have almost the same expression given in~\cite{bly}. In addition, another difference between the above definition of $\Phi$ and the one given in~\cite{bly} is the presence of the whole Glimm functional of $u$ and $v$ in $\mathcal{B}$, instead of their interaction potential alone that appears as well on the exponent of e. This is due to the fact that, since the first characteristic family is not GNL, we adopt as in~\cite{AS} a definition of wave interaction potential, suited to~\eqref{S1system}, that is in general not decreasing in presence of interactions of 1-waves of different sign (1-shocks with 1-rarefaction waves). Therefore, one needs to exploit the decrease of the total strength of waves due to cancellation in order to control the possible increase of the potential interaction occurring at such interactions. The key ingredient in the definition of $\mathcal{A}_{1,1}$ is the appropriate formulation of {\it approaching wave} of the first family for a given wave $\eta_1(x)$ in the jump $(u(x), v(x))$, which extends to our case the definition given in~\cite{bly} for GNL characteristic fields. Observe that, letting $\gamma\mapsto\mathbf {S}_1(\gamma;\, h_0,p_0)$ be the Rankine-Hugoniot curve of right states of the first family issuing from a given state $(h_0,p_0)\in\Omega$, and denoting $\lambda_1(\gamma;\, h_0,p_0)$ the Rankine-Hugoniot speed of the jump connecting $(h_0,p_0)$ with $\mathbf {S}_1(\gamma;\, h_0,p_0)$, by the properties of system~\eqref{S1system} it follows that $\gamma\mapsto \lambda_1(\gamma;\, h_0,p_0)$ is strictly increasing on $\{p>1\}$, strictly decreasing on $\{0<p<1\}$, and constant along $\{p=1\}$. Therefore, if the size $\eta_1(x)$ is positive, we shall regard as approaching all the $1$-waves present in $v$ which either have left state in the region $\{p>1\}$ and are located on the left of $\eta_1(x)$, or have left state in the region $\{0<p<1\}$ and are located on the right of~$\eta_1(x)$. On the contrary, we regard as approaching to $\eta_1(x)>0$ all the $1$-waves present in $u$ which either have left state in the region $\{p>1\}$ and are located on the right of $\eta_1(x)$, or have left state in the region $\{0<p<1\}$ and are located on the left of $\eta_1(x)$. Similar definition is given in the case where $\eta_1(x)<0$. Observe that, in the definition of the Lyapunov functional given in~\cite{bly}, the weights $W_i$ are expressed only in terms of the strength of the approaching waves. Instead here the terms of $\mathcal{A}_{1,1}$ related to the approaching waves of the first family have the form of the product of the strength of the waves $|\sR_\alpha|$ times the distance from $\{p=1\}$ of the left state of the waves $|p_\alpha-1|$. The presence of the factor $|p_\alpha-1|$ is crucial to guarantee the decreasing property~\eqref{gammaest1} at times of interactions involving a $1$-wave, say of strength $|\sR_\alpha|$, and a $2$-wave crossing $\{p=1\}$ (i.e. connecting two states lying on opposite sides of $\{p=1\}$), say of strength $|\sR_\beta|$. 
In fact, in this case the possible increase of $\mathcal{A}_{1,1}$ turns out to be of order $|p_\beta-1||\sR_\alpha| \approx |\sR_\alpha\sR_\beta|$, and thus it can be controlled by the decrease of $\mathcal{B}$ determined by the corresponding decrease of the interaction potential. Unfortunately, because of the presence of these quadratic terms in the weight $W_1$, we are forced to establish sharp fourth order interaction estimates in order to carry out the analysis of the variation of $\Phi(u(t,\cdot), v(t,\cdot))$. This is achieved by deriving accurate Taylor expansions of the Hugoniot and rarefaction curves of each family, which rely on the specific geometric features of system~\eqref{S1system}. Namely, the characteristic fields of~\eqref{S1system} are ``almost Temple class'' (the rarefaction and Hugoniot curves through the same point are ``almost'' straight lines and have ``almost'' third order tangency at their issuing point) near $\{p=1\}$ for the first family and near $\{h=0\}$ for the second family. The estimate~\eqref{gammaest1} implies the convergence of front-tracking approximate solutions of the homogeneous system \be{S1system-hom} \begin{aligned} &h_t-(hp)_x=0,\\ &p_t+((p-1)h)_x=0, \end{aligned} \ee to a unique limit, depending Lipschitz continuously on the initial data in the ${\bf L}^1$-norm, that defines a semigroup solution operator $\cs_t$, $t\geq 0$, on domains $\mathcal{D}$ of the form\\ \begin{equation} \label{Domain-def1} \begin{aligned} \mathcal{D}(M_0 , \delta_0,\delta_p )=cl\big\{(h,p)\in{\bf L}^1 (\mathbb{R};\mathbb{R}^2):\ \, &h,p \ \ \text{are piecewise constant, } \\ & 0\leq h(x)\leq \delta_0 , \ \ |p(x)-1|<\delta_p\ \ \text{for a.e.}~x, \\ & \text{and}\ \ \mathrm{TotVar}\{(h,p)\} \le M_0, \quad \lVert{h}\rVert_{{\bf L}^{1}}+\lVert{p-1}\rVert_{{\bf L}^{1}}\leq M_0 \big\}, \end{aligned} \end{equation} where $cl$ denotes the ${\bf L}^1$-closure, $\mathrm{TotVar}\{(h,p)\} \doteq \mathrm{TotVar}\{h\}+\mathrm{TotVar}\{p\}$, and $M_0, \delta_0, \delta_p$ are positive constants. For any given initial data $\overline u \doteq (\,\overline h, \overline p\,)\in \mathcal{D}(M_0 , \delta_0, \delta_p)$, the map $u(t,x)\doteq \cs_t\overline u (x)$ provides an entropy weak solution of the Cauchy problem for~\eqref{S1system-hom} with initial condition \be{S1system-data} h(x,0)=\overline h(x)\,,\qquad p(x,0)=\overline p(x) \qquad\quad \text{for a.e.} \ \ x~\in\R\,. \ee Relying on the estimate~\eqref{gammaest2}, we then show that approximate solutions of~\eqref{S1system} generated by a front-tracking algorithm combined with an operator splitting scheme, in turn, converge to a map that defines a Lipschitz continuous semigroup operator $\ccp_t$, $t\geq 0$, on domains of the form~\eqref{Domain-def1}, with a Lipschitz constant that grows exponentially in time. The trajectories of $\ccp$ are entropy weak solutions of the Cauchy problem~\eqref{S1system},~\eqref{S1system-data}. The uniqueness of the limit of approximate solutions to~\eqref{S1system} and of the semigroup operator $\ccp$ is achieved as in~\cite{AG} by deriving the key estimate \begin{equation} \label{uniq-semigr-est-1} \left\|\ccp_\theta \overline u-\cs_\theta \overline u - \theta\cdot \!
\Bigg( \!\begin{aligned} &(\overline p-1) \overline h \\ &\quad\ \ 0 \end{aligned} \,\Bigg) \right\|_{{\bf L}^1} = {\mathcal{O}}(1)\cdot\theta^2\qquad\text{as}\qquad \theta\to 0\,, \end{equation} relating the solution operators of the homogeneous and nonhomogeneous systems, and invoking a general uniqueness result for quasidifferential equations in metric spaces~\cite{Bre2}. We point out that the results established in the present paper provide the first construction of a Lipschitz continuous semigroup of entropy weak solutions for nonlinear hyperbolic systems via a Lyapunov type functional for: \begin{itemize} \item[-] systems with characteristic families that are neither GNL nor LD (nor of Temple class); \item[-] initial data with arbitrarily large total variation. \end{itemize} It remains an open problem to analyze whether the Lipschitz constant of the solution operator $\ccp$ is actually uniformly bounded. We conclude this section by observing that the slow erosion/deposition limit of~\eqref{S1system}, as the height of the moving layer tends to zero, was investigated in~\cite{AS}. The limiting behaviour of the slope of the standing layer provides an entropy weak solution of a scalar integro-differential conservation law. A semigroup of solutions to such a nonlocal conservation law, depending Lipschitz continuously on the initial data, was constructed in~\cite{cgw} as a limit of generalized front tracking approximations, and in~\cite{BS} by a flux splitting method alternating backward Euler approximations with a nonlinear projection operator. Granular models different from the one derived in~\cite{HK} can be found in~\cite{BdG,Du,SH}. An analysis of steady state solutions for~\eqref{S1system} was carried out in~\cite{CC,CCCG}. The paper is organized as follows. In Section~\ref{S2}, we study the properties of system~\eqref{S1system}, describe the construction of the approximate solutions also employed in~\cite{AS}, define the wave size in the two coordinate systems, original or Riemann, and their relation, introduce the Lyapunov functionals, and conclude with the statement of the theorems for the semigroups associated with both the homogeneous and the non-homogeneous systems. Let us note here that the stability functional $\Phi$ that is equivalent to the ${\bf L}^1$ norm between two solutions $u$ and $v$ is denoted by $\Phi_0$ from here on, depending on the type of estimates explored. In Section~\ref{S:basicest}, we present the interaction estimates for the approximate solutions and the variation of the wave size at time steps. The main work of our analysis is in Section~\ref{S4old}, which is divided into four subsections and establishes Theorem~\ref{stability-Phy}. More precisely, after defining the functional $\Phi$ that is equivalent to the ${\bf L}^1$ norm between two solutions $u$ and $v$, we estimate the change of $\Phi$ in the following three situations: in \S~\ref{Ss:interactionTimes} at interaction times, in~\S~\ref{Ss:nointeractiontimesN} between interaction times, and in \S~\ref{Ss:timestep} at time steps. Then in~\S~\ref{S5.4}, we generalize our analysis to treat the functional $\Phi_z$ and conclude the proof of Theorem~\ref{stability-Phy}. Last, in Section~\ref{S6}, we establish the uniqueness of the limit and obtain a Lipschitz continuous evolution operator for the non-homogeneous system, proving Theorem~\ref{exist-nonhom-smgr-1}.
There are many technical steps employed throughout our analysis and for the convenience of the reader, these can be found in the Appendices ~\ref{S:shockReduction}-~\ref{App:C}. They involve the reduction to shock curves in the stability analysis, standard analysis on the wave curves, delicate interaction-type estimates for each characteristic family up to fourth order and a convenient auxiliary lemma. \section{Preliminaries and main results}\label{S2} Let $F(h, p)= \big(hp, (p-1)h\big)$ be the flux function associated to~\eqref{S1system}. Then, the Jacobian matrix \bes \label{jacob} DF(h,p)\doteq A(h,p)=\left[\begin{array}{cc} -p & -h\\ p-1 & h\end{array}\right] \ees has eigenvalues \be{S2 evalues} \lambda_1(h,p)=\frac{h-p-\sqrt{(p-h)^2+4h}}{2},\qquad\lambda_2(h,p)=\frac{h-p+\sqrt{(p-h)^2+4h}}{2} \ee with associated right eigenvectors \be{S2 evectors} {\bf r}_1(h,p)=\left(\begin{array}{c} 1 \\ \noalign{\smallskip} -\dfrac{\lambda_1+1}{\lambda_1}\end{array} \right),\qquad {\bf r}_2(h,p)=\left(\begin{array}{c} -\dfrac{\lambda_2}{\lambda_2+1}\\\noalign{\smallskip}1\end{array} \right)\;. \ee Note that system~\eqref{S1system} is strictly hyperbolic in the domain $$\Omega=\{(h,p):\,h\ge 0,\,p>0\},$$ since, for every $0<p_0<1$, one has \be{strict-hyp} \lambda_1(h,p)\leq -\frac{p_0}{2},\qquad\quad \lambda_2(h,p)\geq 0\qquad\forall~h\geq 0,\, p\geq p_0\,. \ee Moreover, for $p=1$, one has $$\lambda_1(h,1)=-1,\quad\lambda_2(h,1)=h,\quad{\bf r}_1(h,1)=\left(\begin{array}{c} 1\\0\end{array} \right),\qquad {\bf r}_2(h,1)=\left(\begin{array}{c} -\dfrac{h}{h+1}\\\noalign{\smallskip}1\end{array} \right), $$ while, for $h=0$, there holds \begin{equation} \label{lambda10p} \lambda_1(0,p)=-p,\quad\lambda_2(0,p)=0,\quad{\bf r}_1(0,p)=\left(\begin{array}{c} 1\\ \noalign{\smallskip} \dfrac{1-p}{p}\end{array} \right),\qquad {\bf r}_2(0,p)=\left(\begin{array}{c} 0\\1\end{array} \right). \end{equation} Moreover, by direct computations, we find that $$D\lambda_1\, {\bf r}_1=-\frac{2(\lambda_1+1)}{\lambda_2-\lambda_1}\approx\frac{2(p-1)}{p},\quad D\lambda_2\, {\bf r}_2=-\frac{2\lambda_2}{\lambda_2-\lambda_1}\approx-\frac{2h}{p^2}. $$ Therefore, the first characteristic field is genuinely nonlinear on each domain $\{p<1\}$, $\{p>1\}$, and linearly degenerate along the semiline $p=1$, while the quantity $D\lambda_1\, {\bf r}_1$ changes sign across the semiline $p=1$. On the other hand, the second characteristic field is genuinely nonlinear for $h\neq 0$ and linearly degenerate along $h=0$ (see Figure~\ref{S2:fig1}). \begin{figure}[htbp] {\centering \scalebox{1}{\input{characteristic-Riemann} } \par} \caption{{\bf On the left:} The rarefaction curves of the two families, with the arrow pointing in the direction of increasing eigenvalues. {\bf On the right:} The curves of the right states that are connected to the left state $(h_{\ell},p_{\ell})$ by an entropy admissible 1-wave or 2-wave of the homogeneous system~\eqref{S1system-hom}. Here, $R_i, S_i, C_i$ denote rarefaction, Hugoniot and contact discontinuity curves of the $i$-th family, respectively. The three cases depending on the value of $p_\ell$ ($<\!1, =\!1, >\!1$) are indicated. 
\label{S2:fig1}} \end{figure} \subsection{Properties of the Riemann solver and approximate solutions} \label{Ss:Rsolv-appsol} Let $\gamma\mapsto\mathbf {S}_k(\gamma;\, h^{\ell},p^{\ell})$ denote the Hugoniot curve of right states of the $k^{\text{th}}$ family issuing from $(h^{\ell},p^{\ell})$, whose points $(h^{r},p^{r})\doteq \mathbf {S}_k(\gamma;\, h^{\ell},p^{\ell})$ satisfy the Rankine-Hugoniot equations $$F((h^{r},p^{r}))-F((h^{\ell},p^{\ell}))=\lambda \cdot \big((h^{r},p^{r})-(h^{\ell},p^{\ell})\big)$$ for $\lambda=\lambda_k\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ where $\lambda_k\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ denotes the $k$-th eigenvalue of the averaged matrix \begin{equation} A\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)= \int_0^1 A\big(s\, (h^{\ell},p^{\ell}) +(1-s) (h^{r},p^{r})\big) d s\,. \end{equation} We call $\lambda_k\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ the Rankine-Hugoniot speed associated to the left and right states $(h^{\ell},p^{\ell})$, $(h^{r},p^{r})$. The analysis in~\cite{AS} shows that the Hugoniot curves of the \emph{first} and \emph{second family} are given by \begin{equation}\label{Suno} \SC{1}{\sC}{ h^{\ell},p^{\ell}}=\bigg(h^{\ell}+\sC,\,p^{\ell}-\frac{(s_1+1)\,\sC}{s_1}\bigg)= \bigg(h^{\ell}+\sC,\,p^{\ell}-\frac{(p^\ell-1)\,\sC}{h^\ell+\gamma-s_1}\bigg) \qquad\gamma\geq - h^\ell\,, \end{equation} and \begin{equation}\label{Sdue} \SC{2}{\sC}{h^{\ell},p^{\ell}}=\bigg(h^{\ell}-\frac{s_2}{s_2+1}\sC,\,p^{\ell}+\sC\bigg) =\bigg(h^\ell\bigg(1+\frac{\gamma}{\lambda_1(h^\ell,p^\ell+\gamma)-h^\ell}\bigg), \,p^{\ell}+\sC\bigg) \qquad\gamma\geq - p^\ell\,, \end{equation} respectively, where \begin{equation} \label{rh-speed-def} s_k=\lambda_k\big(\gamma;\,h^{\ell},p^{\ell}\big) \doteq \lambda_k\bigg((h^{\ell},p^{\ell}),\, \SC{k}{\sC}{ h^{\ell},p^{\ell}}\bigg),\qquad k=1,2, \end{equation} are the corresponding Rankine-Hugoniot speeds. In fact, one finds that there holds \begin{equation} \label{rhspeed-eigenvalue} s_1 =\lambda_1\big(h^{\ell}+\sC,p^{\ell}\big),\qquad\quad s_2=\lambda_2\big(h^{\ell},p^{\ell}+\sC\big)\,. \end{equation} The shock connecting the left state $(h^{\ell},p^{\ell})$ with the right state $ \mathbf {S}_1(\gamma;\, h^{\ell},p^{\ell})$ satisfies the Lax stability condition~\eqref{Lax-adm} if $\gamma\cdot (p-p^\ell)\leq 0$, while the shock with left state $(h^{\ell},p^{\ell})$ and right state $ \mathbf {S}_2(\gamma;\, h^{\ell},p^{\ell})$ is Lax admissible if $\gamma>0$. We observe that the line $p=1$ separates the domain $\Omega$ into two invariant regions for solutions of the Riemann problem: the quarter $\{h\geq0,\ p>1\}$ and the half-strip $\{h\geq 0, \ 0<p<1\}$. Indeed, the rarefaction and Hugoniot curves of the first family through a point $(h^{\ell},p^{\ell})$, with $p^{\ell}\ne 1$, never meet the line $p=1$, while the rarefaction and Hugoniot curves of the second family through a point $(h^{\ell},p^{\ell})$, with $h^{\ell}>0$, never meet the line $h=0$. On the other hand, the lines $p=1$ and $h=0$ are also invariant regions for solutions of the Riemann problem since they coincide with the rarefaction and Hugoniot curves of the first and second family, respectively, passing through any of their points.
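As a simple illustration of these formulas, consider a left state lying on one of the two degenerate lines. Specializing~\eqref{Suno} and~\eqref{rhspeed-eigenvalue} to $p^{\ell}=1$ we find
\bes
\SC{1}{\sC}{ h^{\ell},1}=\big(h^{\ell}+\sC,\,1\big)\,,\qquad s_1=\lambda_1\big(h^{\ell}+\sC,1\big)=-1\,,
\ees
so the 1-Hugoniot curve through any point of $\{p=1\}$ is the line $p=1$ itself and the corresponding jumps are contact discontinuities traveling with speed $-1$. Similarly, specializing~\eqref{Sdue} to $h^{\ell}=0$ gives
\bes
\SC{2}{\sC}{0,p^{\ell}}=\big(0,\,p^{\ell}+\sC\big)\,,\qquad s_2=\lambda_2\big(0,p^{\ell}+\sC\big)=0\,,
\ees
so the 2-Hugoniot curve through any point of $\{h=0\}$ is the line $h=0$ and the corresponding jumps are stationary contact discontinuities.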
For convenience, we have drawn in Figure~\ref{S2:fig1} (on the right) the elementary curves of right states that are connected to a given left state $(h^{\ell}, p^{\ell})$ by entropy admissible waves of the first family (equivalently called {\bf${\bf 1}$-waves} or {\bf ${\bf h}$-waves}) and of the second family (equivalently called {\bf ${\bf 2}$-waves} or {\bf ${\bf p}$-waves}) of the homogeneous system~\eqref{S1system-hom}. Notice that, although the characteristic field of the first family does not satisfy the classical GNL assumption, no composite waves are present in the solution of a Riemann problem for~\eqref{S1system-hom} since in each invariant region $\{p>1\}$, $\{p<1\}$ the field is GNL. In fact, the general solution of a Riemann problem for~\eqref{S1system-hom} consists of at most one simple wave for each family which can be either a rarefaction or a compressive shock or a contact discontinuity. Global existence of entropy weak solutions to~\eqref{S1system} has been established by Amadori and Shen~\cite{AS} using a front tracking algorithm in conjunction with operator splitting. Their results can be summarized as follows: \begin{theorem}[\cite{AS}] \label{T:AS} For any given $M_{0}, p_{0} > 0$, there exists $\delta_{0} > 0$ small enough such that, if \be{initial-data-bounds} \mathrm{TotVar}\ \bar h+\mathrm{TotVar}\ \bar p\leq M_{0}, \quad \lVert\bar h\rVert_{{\bf L}^{1}}+\lVert\bar p-1\rVert_{{\bf L}^{1}}\leq M_{0}, \quad\lVert \bar h\Vert_{{\bf L}^{\infty}}\leq \delta_{0} ,\quad \bar p\geq p_{0}, \ee hold, then the Cauchy problem~\eqref{S1system},~\eqref{S1system-data} has an entropy weak solution $(h(x,t),\, p(x,t))$ defined for all $t \geq 0$, satisfying \be{unifbvbound} \mathrm{TotVar}\{h(\,\cdot\,,t)\} + \mathrm{TotVar}\{p(\,\cdot\,,t )\}\le M^*_0\,, \qquad \big\|h(\,\cdot\,,t)\big\|_{{\bf L}^{1}}+\big\|p(\,\cdot\,,t )-1\big\|_{{\bf L}^{1}}\le M^*_0\,, \ee and with values in a compact set \be{E:reeeeeeeterete} K=[0,\delta^*_0] \times [p^*_{0}, p^*_1]\,, \ee for some constants $M^*_0, \delta^*_0, p^*_0, p^*_1>0$. \end{theorem} In this article, we treat solutions $(h(x,t),\, p(x,t))$ to~\eqref{S1system},~\eqref{S1system-data}, as established in~\cite{AS} and stated in Theorem~\ref{T:AS}, that have the $p$-component of the initial data close to $1$, i.e. \be{unifbvboundLinftyp}\big\|\bar p-1\big\|_{{\bf L}^{\infty}}\le \delta_p\ee for some constant $\delta_p>0$ sufficiently small. Hence, the solution $(h(x,t),\, p(x,t))$ satisfies in addition to~\eqref{unifbvbound}--\eqref{E:reeeeeeeterete} the bound \be{unifbvboundLinftypsol}\big\|p(\,\cdot\,,t )-1\big\|_{{\bf L}^{\infty}}\le \delta_p^*, \qquad \forall t>0\,.\ee We shall consider here approximate solutions converging to an entropy weak solution of~\eqref{S1system}--\eqref{S1system-data} constructed as in~\cite{AS} by the following operator splitting scheme. Fix a {\bf time step $\bf s=\Delta t>0$} and a {\bf parameter $\varepsilon>0$} that shall control: \vspace{-5pt} \begin{itemize} \item[-] the size of the rarefaction fronts; \vspace{-5pt} \item[-] the errors in speeds of shocks (or contact discontinuities) and rarefaction fronts; \vspace{-5pt} \item[-] the ${\bf L^1}$-distance between the piecewise constant initial data of the front-tracking approximation and the initial data $(\bar h,\bar p)$ in~\eqref{S1system-data}. \end{itemize} Let $t_k\doteq k \Delta t= k\, s$, $k=0,1,2,\dots$.
Then, an $s$-$\varepsilon${\bf -approximate solution} $(h^{s,\varepsilon},p^{s,\varepsilon})$ of~\eqref{S1system} is obtained as follows: on each time interval $[t_{k-1},t_k)$ the function $(h^{s,\varepsilon},p^{s,\varepsilon})$ is an $\varepsilon${\it -front tracking approximate solution} to the homogeneous system of conservation laws ~\eqref{S1system-hom}. We recall that front-tracking solutions of a system of conservation laws are piecewise constant functions with discontinuities occurring along finitely many lines in the $(t,x)$-plane, with interactions involving exactly two incoming fronts. In general, the jumps can be of three types: shocks (or contact discontinuities), rarefaction fronts and non-physical waves travelling at a constant speed faster than all characteristic speeds (see~~\cite{Bre} for systems with GNL or LD characteristic fields and~\cite{AM3} for general systems). In particular, we shall adopt here the simplified version of front tracking algorithm developed in~\cite{BD} for $2\times 2$ systems as~\eqref{S1system-hom}, which does not require the introduction of non physical fronts. Next, at time $t_k$ the function $(h^{s,\varepsilon},p^{s,\varepsilon})$ is updated as follows \be{updated} \left\{\begin{aligned} & h^{s,\varepsilon}(t_k)=h^{s,\varepsilon}(t_k-)+\Delta t[p^{s,\varepsilon}(t_k-)-1]h^{s,\varepsilon}(t_k-)\\ &p^{s,\varepsilon}(t_k)=p^{s,\varepsilon}(t_k-). \end{aligned}\right. \ee to account for the presence of source terms. In this way we construct an approximate solution of~\eqref{S1system} defined for all times. It is shown in~\cite{AS} that, given $M_{0}, p_{0} > 0$, one can choose $\delta_{0} > 0$ sufficiently small so that the following holds. Consider a sequence of piecewise constant initial data $({\overline h}^{\varepsilon_m},{\overline p}^{\varepsilon_m})$ satisfying~\eqref{initial-data-bounds}, with $\varepsilon_m\to 0$ as $m \to \infty$, that converges in ${\bf L}^1$ to the initial data~\eqref{S1system-data} as $m \to \infty$, and let $\{s_m\}_m$ be a sequence of positive numbers converging to zero as $m \to \infty$. Then, the above scheme provides a sequence of approximate solutions $(h^{s_m,\varepsilon_m},p^{s_m,\varepsilon_m})$ taking values in a compact set $K$ as in~\eqref{E:reeeeeeeterete}, and satisfying the a-priori bounds~\eqref{unifbvbound}. A subsequence of $(h^{s_m,\varepsilon_m},p^{s_m,\varepsilon_m})$ converges, as $m \to \infty$, in~${\bf L}^1_{loc}$ to an entropy weak solution of the Cauchy problem~\eqref{S1system},~\eqref{S1system-data}, defined for all times $t>0$. The idea of adopting a time-splitting scheme to handle the effect of source terms was first introduced by Dafermos and Hsiao~\cite{DH}, in combination with the Glimm scheme (see also~\cite{C5, daf16} and references therein). Subsequently, this scheme was implemented in conjunction with the front tracking method~\cite{AG,CP} and with the vanishing viscosity method~\cite{C1,C2}. Alternative methods to generate solutions of balance laws present in the literature are based on a generalisation of the Glimm scheme~\cite{TPL2} or of the front tracking algorithm~\cite{AGG}. All these techniques provide local existence of solutions in presence of general source terms, whereas global existence is achieved either within the class of dissipative source terms~\cite{C5}, or for non-resonant systems (having characteristic speeds bounded away from zero) with source terms sufficiently small in~${\bf L}^1$~\cite{AGG,TPL2}. 
However, although the system of balance laws~\eqref{S1system} does not belong to any of these classes, an a-priori bound on the total variation of its weak solutions valid for all times is established in~\cite{AS}, exploiting the particular geometric features of~\eqref{S1system}. This yields the existence of global solutions provided by Theorem~\ref{T:AS}. \subsection{Wave size notation} \label{Ss:strengthnotation} The sizes of wave fronts of approximate solutions of~\eqref{S1system} are defined as the jumps between the left and right states, which can be measured either with the original thickness and slope variables $(h,p)$, or with the corresponding Riemann coordinates $(H,P)$ associated to the $2\times 2$ system~\eqref{S1system}. Such coordinates are defined as follows~\cite[Definition 1]{AS}. Given any point $(h,p)\in\Omega$, let $(H,0)$ be the point on the $h-$axis connected with $(h,p)$ by a rarefaction curve of the second family and, similarly, let $(0,P)$ be the point on the $p-$axis connected to $(h,p)$ by a rarefaction curve of the first family. Then the functions $(H,P)$ form a coordinate system of Riemann invariants associated to~\eqref{S1system}. So given a wave front with left and right states $(h^{\ell},p^{\ell})$ and $(h^{r},p^{r})$, respectively, let $(H^{\ell},P^{\ell})$ and $(H^{r},P^{r})$ be the corresponding Riemann coordinates. For simplicity, we drop from here on the dependence of the approximate solution on $(s,\varepsilon)$. Then, the wave size of the jump $\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ can be defined either in the original or in the Riemann coordinate systems as follows: \begin{itemize} \item the size of a 1-wave (h-wave) is measured by \bes \sR_{h} = H^{r} - H^{\ell} \qquad \text{or } \qquad \sC_{h} = h^{r} - h^{\ell} \ees in Riemann or original coordinates, respectively. \item the size of a 2-wave (p-wave) is measured by \bes \sR_{p} = P^{r} -P^{\ell} \qquad \text{or } \qquad \sC_{p} = p^{r} - p^{\ell} \ees in Riemann or original coordinates, respectively. \end{itemize} Notice that 1-rarefaction waves have positive size in the region $p>1$ and negative size in the region $p<1$, whereas admissible 1-shocks have negative size in the region $p>1$ and positive size in the region $p<1$. Instead, 2-rarefaction waves always have negative size, whereas admissible 2-shocks always have positive size (see Figure~\ref{S2:fig1}). In particular, recalling~\eqref{Suno}--\eqref{Sdue}, when the right state $(h^r,p^r)$ is connected with the left state $(h^{\ell},p^{\ell})$ via a shock wave of size $\sC$ in original coordinates, then one has: \begin{equation} (h^r,p^r)\doteq\mathbf{S}_1\big(\sC;h^{\ell}, p^{\ell}\big) ,\qquad \sC=\sC_h,\qquad (p-1)\cdot\gamma\leq 0\,, \end{equation} if $\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ is an admissible shock of the first family, and \begin{equation} (h^r,p^r)\doteq\mathbf{S}_2\big(\sC;h^{\ell}, p^{\ell}\big) ,\qquad \sC=\sC_p,\qquad\gamma\geq 0\,, \end{equation} if $\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ is an admissible shock of the second family. \\ With the above notations, the size $\rho$ of the shock $\big((h^{\ell},p^{\ell}),\,(h^{r},p^{r})\big)$ expressed in Riemann coordinates is given by: \bas \rho=\sR_{h}=H\big(\mathbf{S}_1(\sC_{h};h^{\ell}, p^{\ell})\big) -H(h^{\ell}, p^{\ell})\,, && \rho=\sR_{p}=P\big(\mathbf{S}_2(\sC_{p};h^{\ell}, p^{\ell}) \big)-P(h^{\ell}, p^{\ell})\,.
\eas \begin{remark} \label{Riemann coordinates} Since the change of variable $(h,p) \to (H,P)$ is a smooth map, the two ways of measuring the sizes of the wave fronts are equivalent, provided $(h,p)$ lie within the compact set $K$ of Theorem~\ref{T:AS}. Namely, in view of the analysis of~\cite[Section~3]{AS}, one can obtain the following relations among the sizes and the $p$-component in the two different coordinate systems \ba\label{E:equivalncestrengths} &\frac{|\sC_{h}|}{\mu}\leq |\sR_{h}|\leq \mu |\sC_{h}|, &&\frac{|\sC_{p}|}{\mu}\leq |\sR_{p}|\leq \mu|\sC_{p} |, &&\frac{|p^\ell-1|}{\mu}\leq |P(h^\ell,p^\ell)-1|\leq \mu|p^\ell-1|, \ea that hold for some $\mu>1$ and for all $(h^\ell,p^\ell), (h^r,p^r)\in K$. \end{remark} \subsection{Glimm functionals} \label{Ss:glimmfunct} Let $u=u(x,t)\doteq (h^{s,\varepsilon},p^{s,\varepsilon})(x,t)$ be a piecewise constant $s$-$\varepsilon$-approximate solution of~\eqref{S1system} constructed by the procedure described in Subsection~\ref{Ss:Rsolv-appsol}. As customary, a-priori bounds on the total variation of $u(t)\doteq u(\cdot,t)$ outside the time steps are obtained in~\cite{AS} by analyzing suitable wave strength and wave interaction potential defines as follows. At any time $t>0$ where no interaction occurs and different from the time steps $t_k$ where $u$ is updated taking into account the source term, let $\cj_{i}\big(u(t)\big)$ denote a set of indexes $\alpha$ associated to the jumps of the $i$-th family of $u(t)$. Assume that the jump of index $\alpha$ is located at $x_\alpha$, that its strength measured in Riemann coordinates is $|\sR_\alpha|$, and let $P_\alpha^\ell\doteq P(u(x_\alpha-,t))$ denote the left limit of the \linebreak P-Riemann coordinate of $u(t)$ at $x_\alpha$. Set $\cj\big(u(t)\big)\doteq \cj_{1}\big(u(t)\big)\bigcup \cj_{2}\big(u(t)\big)$ to denote the collection of indexes associated to all jumps of $u(t)$. Denote by $k_{\alpha}\in\{1,2\}$ the characteristic family of the jump $\alpha\in \cj\big(u(t)\big)$, so that, in particular, one has $\alpha\in \cj_{k_{\alpha}}\big(u(t)\big)$. Then, we define the {\it total strength} of waves in $u(t)$ as: \be{E:Vdef} V_i\big(u(t)\big)\doteq\!\!\sum_{\alpha\in\cj_i((u(t))}|\sR_{\alpha}|,\qquad i=1,2,\qquad\quad V\big(u(t)\big)=V_1\big(u(t)\big)+V_2\big(u(t)\big)\doteq\sum_{\alpha\in\cj(u(t))}|\sR_{\alpha}|\,, \ee and the {\it interaction potential} as: \be{E:Q} \cq\big(u(t)\big)\doteq\cq_{hh}+\cq_{hp}+\cq_{pp}\;. \ee Here $\cq_{hh}$ is the modified interaction potential of waves of the first family (h-waves) introduced in~\cite{AS}, defined as \be{E:Qhh} \cq_{hh}\doteq\sum_{\stackrel{k_{\alpha}=k_\beta=1}{x_{\alpha}<x_\beta}}\omega_{\alpha,\beta}|\sR_{\alpha}||\sR_\beta|\,, \ee with the weights $\omega_{\alpha,\beta}$ given by \be{E:omega} \omega_{\alpha,\beta}\doteq \begin{cases} \overline\delta\cdot\min\{|P_{\alpha}^{\ell}-1|,|P_{\beta}^{\ell}-1|\} \quad&\text{if}\quad \sR_{\alpha},\sR_\beta \ \ \ \text{are 1-shocks lying on the same side of} \ \ \ $p=1$, \\ \noalign{\medskip} \qquad\qquad 0\qquad &\text{otherwise,} \end{cases} \ee for a suitable constant $\overline\delta>0$ sufficiently small (depending on the bound $M_0$ on the total variation of the initial data in Theorem~\ref{T:AS}). 
Instead, the interaction potentials of waves of both families and of waves of the second family (p-waves) are defined in the standard way as \be{E:Qhp} \cq_{hp}\doteq\sum_{\stackrel{k_{\alpha}=2,k_\beta=1}{ x_{\alpha}<x_\beta}}|\sR_{\alpha}\sR_\beta| \ee \be{E:Qpp} \cq_{pp}\doteq\sum_{(\alpha,\beta)\in Appr_2}|\sR_{\alpha}\sR_\beta| \ee where $Appr_2$ denotes the collection of pairs of indexes of approaching p-waves. We recall that two waves of the same family, located at $x_\alpha, x_\beta$ with $x_{\alpha}<x_\beta$, are defined as {\it approaching} if at least one of them is a shock. Notice that the interaction potentials defined for non GNL systems available in the literature (e.g.~the one in~\cite{AM2}) are suited only for solutions with sufficiently small total variation, whereas the functional~\eqref{E:Q} introduced in~\cite{AS} allows one to establish the existence of solutions with arbitrarily large total variation. In fact, relying on the interaction estimates established in~\cite{AS} and collected here in Section~\ref{S:basicest}, one can show that the {\em{Glimm functional}} \be{S3G} \cg\big(u(t)\big)\doteq V\big(u(t)\big)+\cq\big(u(t)\big) \ee is nonincreasing in any time interval $]t_k, t_{k+1}[$ between two consecutive time steps. Instead, when the solution is updated with the source term, we will exploit other estimates on the variation of the strength of waves which were derived in~\cite{AS} and are collected here in Section~\ref{S:basicest}. Thanks to such estimates, we deduce that at any time step $t_k=k\Delta t = k\, s$ there holds \begin{equation} \label{G-timestep-est1} \cg\big(u(t_k+)\big)\leq \big(1+\co(1)\Delta t\big)\cdot\cg^{-}\big(u(t_k-)\big)\,. \end{equation} Notice that, by definition~\eqref{E:Vdef}, one clearly has \begin{equation} \label{V-totvar-est} V\big(u(t)\big) \leq \co(1) \mathrm{TotVar}\{u(\,\cdot\,,t)\}\qquad\forall~t>0\,. \end{equation} \subsection{Lyapunov functional} \label{Ss:Lyapfunct} Let $u$ and $v:\R\times\R_+\to\R^2$ be two $s$-$\varepsilon$-approximate solutions of~\eqref{S1system}, with values in a compact set $K$ as in~\eqref{E:reeeeeeeterete}, and constructed as in Subsection~\ref{Ss:Rsolv-appsol}. In order to: \vspace{-5pt} \begin{itemize} \item[(I)] provide an a-priori bound on the ${\bf L^1}$-distance between $u$ and $v$ in the spirit of~\cite[\S~8]{Bre}; \item[(II)] derive as in~\cite{AG} an estimate of the type~\eqref{uniq-semigr-est-1} between approximate solutions of the non-homogeneous system~\eqref{S1system} and of the homogeneous system~\eqref{S1system-hom}; \end{itemize} we introduce here a Lyapunov-like functional~$\Phi$ with the properties (i)-(ii) stated in the introduction. Consider a piecewise constant function $\er:\R\to\R^2$ taking values in a given compact set $K'$ related to $K$. Assume also that $\er\in {\bf L^1}$ and \begin{equation} \label{z-bound-1} \mathrm{TotVar}\{\er_i\}\le \sigma\quad\ i=1,2\,, \qquad \forall~t>0\,, \end{equation} for some constant $\sigma>0$. Notice that $\er$ is an arbitrary function with the aforementioned properties, not related to the system~\eqref{S1system}, which is introduced to accomplish~(II). Next, for every $(x,t)\in\R\times\R_+$, we connect $u(x,t)$ with $w(x,t)\doteq v(x,t)+\er(x)$ through the Hugoniot curves of the first and second family. In this way we define implicitly two scalar functions $\eta_{i}:\R\times\R_+\to\R$, $i=1,2$, by the relation \be{E:vu} w(x,t)=\mathbf {S}_2\big(\eta_{2}(x,t);\,\cdot\,\big)\circ\mathbf {S}_1\big(\eta_{1}(x,t);\,u(x,t)\big)\,.
\ee According to this definition, the parameter $\eta_{i}$ can be regarded as the size of the i-shock wave in the jump $\big(u(x,t),\, w(x,t)\big)$, where this size is measured in the original coordinates (cfr.~Subsection~\ref{Ss:strengthnotation}). Then, we clearly have \be{S4:etaabs} \frac{1}{C_0} \big|u(x,t)-w(x,t)\big|\le \sum_i \big|\eta_i(x,t)\big|\le C_0 \big|u(x,t)-w(x,t)\big| \qquad\forall~x,t\,, \ee for some constant $C_0>0$. Setting $u(t)\doteq u(\,\cdot\,,t)$, $v(t)\doteq v(\,\cdot\,,t)$, we now define the functional \be{UpsilonNew} \Phi_\er(u(t),v(t))\doteq \left(\sum_{i=1}^2\int_{-\infty}^\infty|\eta_{i}(x,t)|W_i(x,t)\,dx\right) \cdot e^{\kappa_{\cg} \big[\cg(u(t))+\cg(v(t))\big]} \ee with weights $W_i$ of the form \begin{subequations} \label{S4Wi} \ba \label{S4WiN} W_1(x,t)&\doteq\exp\left(\kappa_{1\ca 1}\ca_{1,1}(x,t)+\kappa_{1\ca 2}\ca_{1,2}(x,t) \right), \\ \label{S4WiN2} W_2(x,t)&\doteq\exp\left(\kappa_{2\ca 1}\ca_{2,1}(x,t)+ \kappa_{2\ca 2}\ca_{2,2}(x,t) \right), \ea \end{subequations} for suitable positive constants $1<\kappa_{i\ca j}<\kappa_\cg$, $i,j=1,2$, to be specified later. Here $\cg$ is the Glimm functional defined in~\eqref{E:Vdef}--\eqref{S3G}, while $\ca_{i,j}(x,t)$ measures the total amount of $j$-waves in $u(t)$ and $v(t)$ which approach the $i$-wave $\eta_{i}$ located at $x$. Since the second characteristic family is GNL for $h>0$ and LD along $h=0$, the expression of $\ca_{2,1}$ and $ \ca_{2,2}$ is the same as the one given in~\cite{bly} for GNL and LD characteristic fields. Instead, because of the properties of the non GNL first characteristic family, the definition of 1-waves approaching $\eta_1$ varies if the left state of such waves lies on the left or on the right of $\{p=1\}$ (see Figure~\ref{figappr0}). \begin{figure}[htbp] \label{figappr0} {\centering \scalebox{.8}{\input{uv.tex} } \par} \caption{\small \emph{Approaching waves} in $v$ towards $ \eta_1({\bf x})>0$ are indicated by the jumps marked with bolded lines. Also, regions $p < 1$, $p > 1$ can only be connected by $2-$waves crossing the line $p=1$. The selected $1-$waves that are located at ${\bf x_\alpha}$ with ${\bf x_\alpha}<{\bf x}$ correspond to $\gamma\to \lambda_1(\gamma;\cdot)$ strictly increasing, i.e. $\{p>1\}$. On the other hand, the selected $1-$waves that are located at ${\bf x_\alpha}$ with ${\bf x_\alpha}>{\bf x}$ correspond to $\gamma\to \lambda_1(\gamma;\cdot)$ strictly decreasing, i.e. $\{p<1\}$.\label{figappr0} } \end{figure} For this reason, it will be necessary to assign a weight to the strength of the 1-waves approaching $\eta_1$ which depends on the distance of their left state from $\{p=1\}$, so to control the possible increase of $\Phi_\er$ at times of interactions (of $u$ or of $v$) involving a 1-wave and a 2-wave crossing $\{p=1\}$. More precisely, the definitions of $\ca_{i,j}$, $i,j=1,2$, are the followings: As in Subsection~\ref{Ss:glimmfunct}, let $\cj_{i}\big(u(t)\big)$ denote a set of indexes $\alpha$ associated to the jumps of the $i$-th family of $u(t)$ located at $x_\alpha$ and let $\cj_{i}\big(v(t)\big)$ denote a similar set of indexes for the jumps of the $i$-th family of $v(t)$. Denote by $p_\alpha^\ell$ the p-component (in original coordinates) of the left state of the jump located at $x_{\alpha}$, and by $\sR_{\alpha}$ the corresponding size of the jump measured in Riemann coordinates. Then define \be{S4A1} \begin{aligned} \ca_{1,1}&(x,t)\doteq\left\{ \!\! 
\begin{array}{ll} \displaystyle\Big[\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=1}{p_\alpha^\ell >1,\,x_{\alpha}<x}} +\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=1}{p_\alpha^\ell <1,\,x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=1}{p_\alpha^\ell >1,\,x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=1}{p_\alpha^\ell <1,\,x_{\alpha}<x}} \Big]\PHI{p_\alpha^\ell}|\sR_{\alpha}| \qquad &\text{if }\quad\eta_{1}(x,t)<0,\\ \noalign{\medskip} \displaystyle\Big[\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=1}{p_\alpha^\ell >1,\,x_{\alpha}<x}} +\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=1}{p_\alpha^\ell <1,\,x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=1}{p_\alpha^\ell >1,\,x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=1}{p_\alpha^\ell <1,\,x_{\alpha}<x}} \Big]\PHI{p_\alpha^\ell}|\sR_{\alpha}| \qquad& \text{if }\quad\eta_{1}(x,t)>0, \end{array} \right.\\ \noalign{\medskip} \ca_{1,2}&(x,t)\doteq\sum_{\stackrel{\alpha\in\cj(u(t))\cup\cj(v(t))}{k_{\alpha}=2,\,x_{\alpha}<x}}|\sR_{\alpha}|\,, \end{aligned} \ee and \begin{subequations} \label{S4A2} \begin{align} \ca_{2,1}&(x,t)\doteq\sum_{\stackrel{\alpha\in\cj(u(t))\cup\cj(v(t))}{k_{\alpha}=1,\,x_{\alpha}>x}}|\sR_{\alpha}|\,, \\ \noalign{\medskip} \ca_{2,2}&(x,t)\doteq\left\{ \begin{array}{ll} \displaystyle\Big[\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=2}{x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=2}{x_{\alpha}<x}} \Big]|\sR_{\alpha}| \qquad &\text{if }\quad\eta_{2}(x,t)<0,\\ \noalign{\medskip} \displaystyle\Big[\sum_{\stackrel{\alpha\in\cj(v(t)),k_{\alpha}=2}{x_{\alpha}>x}} +\sum_{\stackrel{\alpha\in\cj(u(t)),k_{\alpha}=2}{x_{\alpha}<x}} \Big]|\sR_{\alpha}| \qquad&\text{if }\quad\eta_{2}(x,t)>0. \end{array} \right. \end{align} \end{subequations} Notice that, differently from the functional introduced in~\cite{bly}, here the weight in the time variable involves the sum of the whole Glimm functionals of $u$ and $v$, instead of the sum of their interaction potentials $\cq$. This is due to the fact that one needs to exploit the decrease of the functional $V$ in~\eqref{E:Vdef} at interactions of a 1-shock with a 1-rarefaction since, by the definitions~\eqref{E:Q}--\eqref{E:Qhh}, the interaction potential is not decreasing when such interactions occur (cfr.~\eqref{E:decreaseGlimm1s1r} and the analysis in Subsection~\ref{Sss:Interaction1nC}). We point out that: \vspace{-5pt} \begin{itemize} \item[-] the definition of the functional $\Phi_\er$ at~\eqref{UpsilonNew}--\eqref{S4Wi} is given in terms of waves connecting $u(t)$ with $w(t)=v(t)+z$, and of waves in $u(t), v(t)$, which are measured with respect to different coordinate systems. Namely, the size $\eta_i(t)$ of the $i$-waves connecting $u(t)$ with $w(t)$ is measured in original coordinates. Instead, the size $\sR_\alpha$ of the waves in $u(t)$ or in $v(t)$ is measured in Riemann coordinates. Of course, one can also express $\eta_i(t)$ in Riemann coordinates because of~\eqref{E:equivalncestrengths}, but we choose to keep them in original coordinates for technical reasons, since this choice simplifies the computations carried out in Appendix~\ref{S:finerInteractions} and applied in Section~\ref{Ss:nointeractiontimesN}. \item[-] The function $z$ directly affects the definition of the waves $\eta_i$ connecting $u$ with $w\doteq v+z$, while it enters only indirectly in the definition of the weights $W_i$, which are expressed in terms of the waves of $u$ and $v$ and depend on the sign of $\eta_i$.
\end{itemize} We observe also that, thanks to the a-priori BV and ${\bf L}^{\infty}$-bounds established in~\cite{AS}, there exist constants $M^*, P^*>0$ such that, for all $t>0$, there hold \begin{equation} \label{mstar-bound} \cg(u(t))\le M^*\,,\qquad\quad \cg(v(t))\le M^*\,, \end{equation} and \begin{equation} \label{pstar-bound} \big\|P(u(t))-1\big\|_{\bf L^\infty}\le P^*\,, \qquad\quad \big\|P(v(t))-1\big\|_{\bf L^\infty}\le P^*\,. \end{equation} Hence, the functionals $W_i$ in~\eqref{S4Wi} are uniformly bounded by \begin{subequations} \label{W-unif-boundN} \ba &\ca_{1,1}(x,t) \leq M^* \delta_p^* \,, && 1 \leq W_1(x,t) \leq W_1^*\doteq e^{\kappa_{1\ca1}\cdot M^*\delta_p^* + \kappa_{1\ca2}\cdot M^* } \qquad\quad \forall~x,t \\ &&&1 \leq W_2(x,t) \leq W_2^*\doteq e^{ \kappa_{2\ca1}\cdot M^*+ \kappa_{2\ca2}\cdot M^*} \qquad\quad \forall~x,t \,. \ea \end{subequations} Therefore, relying on~\eqref{S4:etaabs},~\eqref{W-unif-boundN}, we deduce that the functional $\Phi_\er$ is equivalent to the ${\bf L^1}$ distance between $u(t)$ and $w(t)=v(t)+z$: \be{S4-PhiL1} \frac{1}{C_{0}}\big\| u(t)-w(t)\big\|_{L^1} \leq\Phi_\er(u(t),v(t)) \leq C_{0}\cdot W^* \cdot \big\| u(t)-w(t)\big\|_{L^1} \qquad\quad \forall~t>0\,. \ee For $z=0$, we automatically have that $\Phi_0(u,v)$ satisfies the corresponding relation between $u$ and $v$. \subsection{Main results} \label{Ss:mainres} In view of~\eqref{S4-PhiL1}, ${\bf L}^1$ stability estimates for approximate solutions $u, v$ of~\eqref{S1system} and~\eqref{S1system-hom} can be established in terms of the functional $\Phi_z$ as stated in the following \begin{theorem} \label{stability-Phy} Given $M_0>0$, there exist constants $\delta_0, \delta_p, \delta_0^*, \delta_p^*, M^*_0>0$, and $C_1, C_2>0$, so that, letting $\Phi_z$ be the functional defined in~\eqref{UpsilonNew}--\eqref{S4A2}, for suitable $\kappa_{i\ca j}>0$,\,$i,\ j=1,\ 2$ and $\kappa_\cg$, the following hold. \begin{itemize} \item[(i)] (Homogeneous case). Let $u$ and $v:\R\times\R_+\to\R^2$ be two $\varepsilon${\it -front tracking approximate solutions} of the homogeneous system~\eqref{S1system-hom}, constructed as in Subsection~\ref{Ss:Rsolv-appsol}, with initial data $u(\,\cdot,0), v(\,\cdot,0)$ satisfying~\eqref{initial-data-bounds} and~\eqref{unifbvboundLinftyp}, and taking values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$. Let $\er : \R \to\R^2$ be a piecewise constant function that takes values in the compact set \be{E:im-z} K'=[(p^*_{0}-1)\cdot\delta_0^*,\,p_1^*\cdot\delta^*_0] \times [p^*_{0}, p^*_1]\,, \ee and satisfies~\eqref{z-bound-1} for \begin{equation} \label{sigma-def} \sigma\doteq (\delta_0^*+p_1^*)\cdot M_0^*, \end{equation} where $p_0^*:=1-\delta_p^*>0$, $p_1^*:=1+\delta_p^*>1$. Then, there holds \begin{equation} \label{Phi0est1} \Phi_{\er}\big(u(\tau_2), v(\tau_2)\big)\leq \Phi_{\er}\big(u(\tau_1), v(\tau_1)\big)+ C_1\cdot \big(\varepsilon + \sigma\big) (\tau_2-\tau_1)\qquad\forall~ \tau_2>\tau_1>0\,. \end{equation} \item[(ii)] (Non-homogeneous case). Let $u$ and $v:\R\times\R_+\to\R^2$ be two $s$-$\varepsilon$-approximate solutions of the non-homogeneous system~\eqref{S1system} constructed as in Subsection~\ref{Ss:Rsolv-appsol}, with initial data $u(\,\cdot,0), v(\,\cdot,0)$ satisfying~\eqref{initial-data-bounds} and~\eqref{unifbvboundLinftyp}, and taking values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$.
Then, letting $t_k\doteq k \Delta t = k\, s$, \, $k \in \mathbb{N}$ be the time steps, there holds \begin{equation} \label{Phi0est2} \Phi_0\big(u(\tau_2), v(\tau_2)\big)\leq \Phi_0\big(u(\tau_1), v(\tau_1)\big)+ C_1\cdot \varepsilon (\tau_2-\tau_1)\qquad\forall~t_k<\tau_1<\tau_2<t_{k+1}\,, \end{equation} and \begin{equation} \label{Phi0est3} \begin{aligned} \Phi_0\big(u(t_k+), v(t_k+)\big) &\leq \Phi_0\big(u(t_h+), v(t_h+)\big) \big(1+C_2\cdot\Delta t\big)^{(k-h)}+ \\ \noalign{\smallskip} &\qquad + C_1\cdot \varepsilon\,\Delta t \sum_{i=1}^{k-h} \big(1+C_2\cdot \Delta t\big)^i \qquad\quad\forall~0\leq h<k, \end{aligned} \end{equation} for all $k,\ h\in\mathbb{N}$. \end{itemize} \end{theorem} The estimate~\eqref{Phi0est3} is precisely the estimate~\eqref{gammaest2} stated in Section~\ref{intro}, with $\Phi_0$ in place of $\Phi$. A proof of Theorem~\ref{stability-Phy} will be established in Section~\ref{S4old}. Relying on Theorem~\ref{stability-Phy}-(i), one can easily derive the existence of a Lipschitz continuous semigroup of solutions of the homogeneous system~\eqref{S1system-hom}. \begin{theorem} \label{exist-hom-smgr} Given $M_0>0$, there exist $\delta_0, \delta_p, \delta^*_0, \delta_p^*, M^*_0, L>0$ and a unique (up to the domain) semigroup map \begin{equation} \label{E:sem} \mathcal{S}: [0,+\infty)\times \mathcal{D}_{0}\to \mathcal{D}^*_{0}, \qquad\qquad (\tau,\overline u)\mapsto \mathcal{S}_\tau \overline u\,, \end{equation} with $\mathcal{D}_{0}\doteq \mathcal{D}(M_0 , \delta_0 , \delta_p)$, $\mathcal{D}^*_{0}\doteq \mathcal{D}(M^*_0, \delta^*_0, \delta_p^*)$ domains defined as in~\eqref{Domain-def1}, which enjoys the following properties: \vspace{-5pt} \begin{itemize} \item[(i)] $\cs_{\tau_2}\big(\cs_{\tau_1}\, \overline u\big)\in \mathcal{D}^*_0\qquad\forall~ \overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0;$ \item[(ii)] $\cs_{0} \,\overline u= \overline u,\quad \cs_{\tau_1+\tau_2}\, u=\cs_{\tau_2}\big(\cs_{\tau_1}\, \overline u\big)\qquad \forall~\overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0;$ \item[(iii)] $\big\Vert \cs_{\tau_2} \overline u - \cs_{\tau_1} \overline v \big\Vert_{{\bf L}^{1}}\leq L\cdot (|\tau_1-\tau_2|+\lVert \overline u-\overline v\rVert_{{\bf L}^{1}})\qquad \forall~\overline u,\,\overline v \in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0;$ \item[(iv)] For any $\overline u\doteq (\overline h, \overline p)\in\mathcal{D}_{0}$, the map $\big(h(x,\tau),p(x,\tau)\big)\doteq \mathcal{S}_\tau\, \overline u(x)$ provides an entropy weak solution of the Cauchy problem~\eqref{S1system-hom}, ~\eqref{S1system-data}. Moreover, $\mathcal{S}_\tau\, \overline u(x)$ coincides with the unique limit of front tracking approximations. \item[(v)] If $\overline u\in \mathcal{D}_{0}$ is piecewise constant, then for $\tau$ sufficiently small $u(\,\cdot\,,\tau)\doteq \cs_\tau\,\overline u$ coincides with the solution of the Cauchy problem~\eqref{S1system-hom},~\eqref{S1system-data} obtained by piecing together the entropy solutions of the Riemann problems determined by the jumps of~$\overline u$. \end{itemize} \end{theorem} \begin{proof} The proof is entirely similar to~\cite[Proof of Theorem~2]{bly}. For sake of clarity, we provide it here. Let $\delta_0, \delta_p, \delta^*_0, \delta_p^*, M^*_0, p^*_0, p_1^*$ be the constants provided by Theorem~\ref{stability-Phy}. 
Given $\overline u\in \mathcal{D}_{0}$, consider a sequence $\{u_m\}_m$ of $\varepsilon_m$-front tracking approximate solutions to~\eqref{S1system-hom} with values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$, with initial data $u_m(0)\in \mathcal{D}_{0}$, and such that \begin{equation} \label{indata-eps-conv-1} \lim_{m\to\infty} \big\|u_m(0)-\overline u\big\|_{\bf L^1}=0\,. \end{equation} Then, assuming that $\{\varepsilon_m\}_m$ is decreasing to zero, relying on~\eqref{S4-PhiL1} with $u=u_m$, $v=u_n$, $\er=0$, and applying~\eqref{Phi0est1} with $\er=0$, $\sigma=0$, we find \begin{equation} \label{lip-est-appr-hom-1} \begin{aligned} \big\|u_m(\tau)-u_n(\tau)\big\|_{\bf L^1} &\leq C_0 \cdot \Phi_0\big(u_m(\tau), u_n(\tau)\big) \\ &\leq C_0 \cdot \Phi_0\big(u_m(0), u_n(0)\big)+ C_0\,C_1\cdot \tau\cdot\varepsilon_m \\ &\leq C_0^2\, W^* \cdot \big\|u_m(0)-u_n(0)\big\|_{\bf L^1}+ C_0\,C_1\cdot \tau\cdot\varepsilon_m\,, \end{aligned} \end{equation} for $m\leq n$ and for all $\tau>0$. Thus, it follows from~\eqref{indata-eps-conv-1}--\eqref{lip-est-appr-hom-1} that $\{u_m(t)\}_m$ is a Cauchy sequence in~${\bf L^1}$ for all $t>0$, and hence it converges to a unique limit \begin{equation} \label{hom-sem-lim-def} \cs_\tau\, \overline u \doteq \lim_{m\to\infty} u_m(\tau)\,. \end{equation} With the same arguments as in the analysis of~\cite{AS}, and by the uniqueness of the limit~\eqref{hom-sem-lim-def}, one then deduces that $\cs_\tau\, \overline u\in \mathcal{D}^*_0$ for all $\tau>0$ (cfr.~Theorem~\ref{T:AS}), and that properties (i)-(ii), (iv) are verified. Next, the Lipschitz continuity property of $\cs_\tau$ is obtained as in~\cite{bly}. Namely, given $\overline u, \overline v\in \mathcal{D}_{0}$, consider two sequences $\{u_m\}_m$, $\{v_m\}_m$ of $\varepsilon_m$-front tracking approximate solutions to~\eqref{S1system-hom} with values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$, with initial data $u_m(0), v_m(0)\in \mathcal{D}_{0}$, and such that \begin{equation} \label{indata-eps-conv-2} \lim_{m\to\infty} \varepsilon_m= 0\,, \qquad\quad \lim_{m\to\infty} \big\|u_m(0)-\overline u\big\|_{\bf L^1}= \lim_{m\to\infty} \big\|v_m(0)-\overline v\big\|_{\bf L^1}=0\,. \end{equation} Again, relying on~\eqref{S4-PhiL1} with $u=u_m$, $v=v_m$, $\er=0$, and applying~\eqref{Phi0est1} with $\er=0$, $\sigma=0$, we derive \begin{equation} \label{lip-est-appr-hom-2} \begin{aligned} \big\|u_m(\tau)-v_m(\tau)\big\|_{\bf L^1} &\leq C_0 \cdot \Phi_0\big(u_m(\tau), v_m(\tau)\big) \\ &\leq C_0^2\, W^* \cdot \big\|u_m(0)-v_m(0)\big\|_{\bf L^1}+ C_0\,C_1\cdot \tau\cdot\varepsilon_m\,. \end{aligned} \end{equation} Taking the limit as $m\to\infty$ in~\eqref{lip-est-appr-hom-2}, and relying on~\eqref{hom-sem-lim-def}--\eqref{indata-eps-conv-2}, we thus obtain \begin{equation} \label{lip-est-appr-hom-3} \big\|\cs_\tau\, \overline u-\cs_\tau\, \overline v\big\|_{\bf L^1} \leq C_0^2\, W^* \cdot \big\|\overline u-\overline v\big\|_{\bf L^1}\,. \end{equation} This yields (iii), since the Lipschitz continuity with respect to time is a standard property enjoyed by limits of front tracking solutions with finite speed of propagation and uniformly bounded total variation (cfr.~\cite[Section 7.4]{Bre2}). Finally, the consistency (v) with the solutions of the Riemann problem and with limits of front tracking approximations, as well as the uniqueness of the semigroup map, can be established by the standard arguments in~\cite{Bre2}, \cite{BB}, which remain valid for solutions with large total variation. This concludes the proof.
\end{proof} \begin{remark} Notice that the image of the map $\cs_t$ in~\eqref{E:sem} is the same for every $t>0$, but the domain~$\mathcal{D}_{0}$ is not positively invariant under the action of $\cs$. This is due to the fact that, although one can establish ${\bf L}^\infty$, ${\bf L}^1$ and BV bounds on the solutions of~\eqref{S1system-hom} which are uniform in time, it turns out that the ${\bf L}^\infty$, ${\bf L}^1$- norms as well as the total variation of the solution (that appear in the definition of the domain~\eqref{Domain-def1}) may well increase in presence of interactions (see the analysis in~\cite[Section 5]{AS}). \end{remark} Employing Theorem~\ref{stability-Phy}-(ii) and Theorem~\ref{exist-hom-smgr}, we can now construct an approximate solution operator for the non homogeneous system~\eqref{S1system} that depends Lipschitz continuously on the initial data, with a Lipschitz constant that grows exponentially in time. \begin{theorem}\label{ThmPS5.4} Given $M_0>0$, there exist $\delta_0, \delta_p, \delta^*_0, \delta^*_p, M^*_0>0$ so that the conclusions of Theorem~\ref{exist-hom-smgr} hold together with the following. For all $s=\Delta t>0$ sufficiently small, setting $t_k\doteq k \Delta t = k\, s$, \, $k \in \mathbb{N}$, \ $g((h,p))=\big(0,(p-1)h\big)$, and letting $\mathcal{D}_{0}\doteq \mathcal{D}(M_0 ,\delta_0, \delta_p )$, $\mathcal{D}^*_{0}\doteq \mathcal{D}(M^*_0, \delta^*_0, \delta^*_p)$ be domains as in~\eqref{Domain-def1}, the map $(\tau,\overline u)\mapsto \mathcal{P}^s_\tau \,\overline u$ given by \be{app-nonhom-flow-def} \begin{aligned} \mathcal{P}^s_{0} \,\overline u &= \overline u\qquad \overline u\in\mathcal{D}_{0}\,, \\ \noalign{\smallskip} \mathcal{P}^s_\tau\,\overline u &= \cs_{\tau-t_k}\cp^s_{t_k}\,\overline u\qquad\forall~\tau\in (t_k, t_{k+1}), \quad k\in\mathbb{N}\,,\quad \overline u\in\mathcal{D}_{0}\,, \\ \noalign{\smallskip} \mathcal{P}^s_{t_k}\,\overline u &= \mathcal{P}^s_{t_k-}\,\overline u + s\cdot g\big( \mathcal{P}^s_{t_k-}\,\overline u\big) \quad \text{with} \quad \displaystyle{\mathcal{P}^s_{t_k-}\,\overline u\doteq \lim_{\ \tau\to t_k-} \!\!\mathcal{P}^s_{\tau}\,\overline u=\cs_s \cp^s_{t_{k-1}}\,\overline u}, \quad k\in\mathbb{N}\,,\quad \overline u\in\mathcal{D}_{0}\,, \end{aligned} \ee is well defined for all $\tau>0$ and takes values in $\mathcal{D}^*_{0}$. Moreover, there exist $L',C_3,C_4>0$ so that the following properties hold. \begin{itemize} \item[(i)] $\mathcal{P}^s_{\tau_2}\big(\mathcal{P}^s_{\tau_1} \,\overline u\big)\in \mathcal{D}^*_0\qquad\forall~ \overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0\,;$ \item[(ii)] $\big\|\cp_{\tau_1}^{s} \cp_{\tau_2}^{s} \,\overline u-\cp_{\tau_1+\tau_2}^{s} \,\overline u\big\|_{L^1}\le C_3\cdot s\qquad\forall~ \overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0\,;$ \item[(iii)] $\big\|\cp_{\tau_1}^s \,\overline u-\cp_{\tau_2}^s \,\overline u\big\|_{L^1} \le L'\cdot |\tau_2-\tau_1|+C_3\cdot s\qquad\forall~ \overline u \in\mathcal{D}_{0},\ \ \forall~\tau_1, \tau_2\geq 0\,;$ \item[(iv)] $\big\|\cp_{\tau}^s \,\overline u-\cp_{\tau}^s \,\overline v\big\|_{L^1} \le L'\cdot e^{C_4\cdot \tau} \cdot\|\overline u - \overline v\|_{\bf L^1} \qquad\forall~\overline u, \overline v\in\mathcal{D}_{0},\ \ \forall~\tau>0\,.$ \end{itemize} \end{theorem} \begin{proof} Given $M_0>0$, let $\delta_0, \delta_p \delta^*_0, \delta_p^*,M^*_0, p^*_0, p_1^*>0$ be constants so that the conclusions of Theorem~\ref{T:AS}, Theorem~\ref{stability-Phy} and Theorem~\ref{exist-hom-smgr} are verified. 
By the analysis in~\cite{AS} it follows that, taking the time step $s$ sufficiently small, the approximate operator $\cp^s$ in~\eqref{app-nonhom-flow-def} is well defined for all $\tau>0$, $\overline u\in \mathcal{D}_0$, and satisfies property (i). Moreover, consider a sequence $\{u^{s,\varepsilon_m}\}_m$ of $s$-$\varepsilon_m$-approximate solutions of~\eqref{S1system} constructed as in Subsection~\ref{Ss:Rsolv-appsol}, with values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$, with initial data $u^{s,\varepsilon_m}(0)\in \mathcal{D}_{0}$, and such that \begin{equation} \label{indata-eps-conv-3} \lim_{m\to\infty} \varepsilon_m= 0\,, \qquad\quad \lim_{m\to\infty} \big\|u^{s,\varepsilon_m}(0)-\overline u\big\|_{\bf L^1}=0\,. \end{equation} Relying on Theorem~\ref{exist-hom-smgr} and on the Lipschitz continuity of the source term $g((h,p))$, by the definition~\eqref{app-nonhom-flow-def} it follows that \begin{equation} \label{nonhom-sem-lim} \cp^s_\tau\, \overline u = \lim_{m\to\infty} u^{s,\varepsilon_m}(\tau) \quad\forall~\tau>0\,. \end{equation} Given $\overline u, \overline v\in \mathcal{D}_0$, consider now two sequences $\{u^{s,\varepsilon_m}\}_m$, $\{v^{s,\varepsilon_m}\}_m$ of $s$-$\varepsilon_m$-approximate solutions of~\eqref{S1system} with values in $[0,\delta^*_0] \times [p^*_{0}, p^*_1]$, with initial data $u^{s,\varepsilon_m}(0), v^{s,\varepsilon_m}(0)\in \mathcal{D}_{0}$, and such that \begin{equation} \label{indata-eps-conv-4} \lim_{m\to\infty} \varepsilon_m= 0\,, \qquad\quad \lim_{m\to\infty} \big\|u^{s,\varepsilon_m}(0)-\overline u\big\|_{\bf L^1}= \lim_{m\to\infty} \big\|v^{s,\varepsilon_m}(0)-\overline v\big\|_{\bf L^1}=0\,. \end{equation} Then, relying on~\eqref{S4-PhiL1} with $u=u^{s,\varepsilon_m}$, $v=v^{s,\varepsilon_m}$, $\er=0$, and applying~\eqref{Phi0est2}--\eqref{Phi0est3}, we derive \begin{equation} \label{lip-est-appr-nonhom-1} \begin{aligned} \big\|u^{s,\varepsilon_m}(\tau)-v^{s,\varepsilon_m}(\tau)\big\|_{\bf L^1} &\leq C_0 \cdot \Phi_0\big(u^{s,\varepsilon_m}(\tau), v^{s,\varepsilon_m}(\tau)\big) \\ &\leq C_0 \cdot e^{C_2\cdot\tau}\cdot \Phi_0\big(u^{s,\varepsilon_m}(0), v^{s,\varepsilon_m}(0)\big)+ C_0\,C_1\cdot e^{C_2\cdot\tau}\cdot \tau\cdot\varepsilon_m \\ &\leq C_0^2\cdot W^* \cdot e^{C_2\cdot\tau}\cdot \big\|u^{s,\varepsilon_m}(0)-v^{s,\varepsilon_m}(0)\big\|_{\bf L^1}+ \frac{C_0\,C_1}{C_2}\cdot (e^{C_2\cdot\tau}-1) \cdot\varepsilon_m\,. \end{aligned} \end{equation} Taking the limit as $m\to\infty$ in~\eqref{lip-est-appr-nonhom-1}, and relying on~\eqref{nonhom-sem-lim}--\eqref{indata-eps-conv-4}, we thus obtain \begin{equation} \label{llip-est-appr-nonhom-2} \big\|\cp^s_\tau\, \overline u-\cp^s_\tau\, \overline v\big\|_{\bf L^1} \leq C_0^2\cdot W^* \cdot e^{C_2\cdot\tau}\cdot \big\|\overline u-\overline v\big\|_{\bf L^1}\,, \end{equation} which proves property (iv). To conclude the proof, observe that properties (ii)-(iii) can be derived with arguments entirely similar to the proofs of~\cite[Proposition 3.2, Proposition 4.1]{AG}, relying on property (iv) and on Theorem~\ref{exist-hom-smgr}. \end{proof} Observe that, given $\overline u\in\mathcal{D}_{0}$, if we consider a sequence $\{s_n\}_n$ decreasing to zero, the limit function $\lim_{n\to\infty} \cp^{s_n}_\tau\,\overline u$ may not be well defined. In fact, the estimates provided by Theorem~\ref{ThmPS5.4} do not guarantee the uniqueness of such a limit.
However, one can show that it is possible to extract a subsequence $\{s_{n_k}\}_k$ so that $\{\cp^{s_{n_k}}_\tau\,\overline u(x)\}_k$ converges, for all $\tau>0$ and a.e. $x\in\R$, to a function $u(x,\tau)$ which is an entropy weak solution of~\eqref{S1system}, ~\eqref{S1system-data}. Next, relying on Theorem~\ref{stability-Phy}-(i), and applying a uniqueness result on quasi-differential equations in metric spaces, we derive as in~\cite{AG} the uniqueness of solutions to the Cauchy problem~\eqref{S1system},~\eqref{S1system-data}. In turn, this implies the convergence of the whole sequence $\{\cp^{s_n}_\tau\,\overline u\}_n$, and thus defines a solution operator $\cp_\tau$ for~\eqref{S1system} as stated in the next theorem, whose proof is given in Section~\ref{S6}. \begin{theorem} \label{exist-nonhom-smgr-1} Given $M_0>0$, there exist $\delta_0, \delta_p, \delta^*_0, \delta_p^*, M^*_0, L>0$ so that the conclusions of Theorem~\ref{exist-hom-smgr} hold together with the following. There exists a map \begin{equation} \label{E:semP} \mathcal{P}: [0,+\infty)\times \mathcal{D}_{0}\to \mathcal{D}^*_0, \qquad\qquad (\tau,\overline u)\mapsto \mathcal{P}_\tau \overline u\,, \end{equation} (with $\mathcal{D}_{0}, \mathcal{D}^*_{0}$ domains as in~\eqref{Domain-def1}), which enjoys the properties: \vspace{-5pt} \begin{itemize} \item[(i)] $\mathcal{P}_{\tau_1}\big(\mathcal{P}_{\tau_2} \,\overline u\big)\in \mathcal{D}^*_0\qquad\forall~ \overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1,\tau_2\geq 0;$ \item[(ii)] $\mathcal{P}_{0} \overline u= \overline u,\quad \mathcal{P}_{\tau_1+\tau_2}\, \overline u=\mathcal{P}_{\tau_2}\big(\mathcal{P}_{\tau_1} \overline u\big)\qquad\forall~ \overline u\in\mathcal{D}_{0},\ \ \forall~\tau_1,\tau_2\geq 0;$ \item[(iii)] $\big\Vert \mathcal{P}_{\tau_1} \overline u - \mathcal{P}_{\tau_2} \overline v \big\Vert_{{\bf L}^{1}}\leq L' \big(e^{C_4\cdot \tau_2}\cdot\lVert \overline u-\overline v\rVert_{{\bf L}^{1}}+(\tau_2-\tau_1)\big) \qquad \forall~\overline u,\,\overline v \in\mathcal{D}_{0},\ \ \forall~\tau_2>\tau_1>0\,,$\\ ($L', C_4>0$ being the constants provided by Theorem~\ref{ThmPS5.4}); \item[(iv)] For any $\overline u\doteq (\overline h, \overline p)\in\mathcal{D}_{0}$, the map $\big(h(x,\tau),p(x,\tau)\big)\doteq \mathcal{P}_\tau \overline u(x)$ provides an entropy weak solution of the Cauchy problem~\eqref{S1system}, ~\eqref{S1system-data}. \end{itemize} \end{theorem} \bigskip \begin{remark} Notice that, although the source term of system~\eqref{S1system} is not dissipative, relying on the global existence result established in~\cite{AS}, we construct an evolution operator whose image $ \mathcal{D}^*_0$ is the same for every time $\tau>0$. \end{remark} \section{Basic interaction estimates} \label{S:basicest} We collect in the next lemma the interaction estimates on the change of strength of the wave fronts of an approximate solution constructed as in Subsection~\ref{Ss:Rsolv-appsol} whenever an interaction between two fronts takes place outside a time step. These estimates were established in~\cite[Lemma~3]{AS} and are sharper than the classical ones for $2\times2$ systems of conservation or balance laws. We also present here a slight refinement of the estimate in~\cite[Lemma~3]{AS} for the case of interactions between two fronts of the second characteristic family. \begin{lemma}[Interaction Estimates] \label{L:stimeinter} Consider two interacting wavefronts, with left, middle and right states $(h^{\ell},p^{\ell})$, $(h^{m},p^{m})$, $(h^{r},p^{r})$ before the interaction.
Then, assuming that the sizes of the incoming fronts and of the two outgoing waves produced by this interaction are measured in Riemann coordinates, the followings hold true: \begin{itemize} \item[1-1] If the incoming fronts are two h-waves of sizes $\sR_{h}$, $\widetilde{\sR}_{h}$, then the sizes $\widehat\sR_{h}$ and $\widehat\sR_{p}$ of the outgoing h-wave and p-wave satisfy \be{E:11interest} \big|\widehat\sR_{h}-\sR_{h}-\widetilde{\sR}_{h}\big|+\big|\widehat\sR_{p}\big|\leq \co(1) \min\big\{|p^{\ell}-1\big|,\,|p^m-1\big|\big\} \big(|\sR_{h}|+|\widetilde{\sR}_{h}|\big)\big|\sR_{h}\widetilde{\sR}_{h}\big|\;. \ee \item[1-2] If the incoming fronts are an h-wave and a p-wave of sizes $ \sR_{h}, \sR_{p}$, respectively, then the sizes $\widehat\sR_{h}$ and $\widehat\sR_{p}$ of the outgoing h-wave and p-wave satisfy \be{E:21interest} \big|\widehat{\sR}_{h} -\sR_{h}\big|+\big|\widehat{\sR}_{p} -\sR_{p}\big|\leq \co(1) h_{\rm max}\cdot \big|\sR_{h}\sR_{p}\big|\;, \ee where $ h_{\text{max}}\doteq \max\{h^l,h^m,h^r\}$. \item[2-2] If the incoming fronts are two p-waves of sizes $\sR_{p}$, $\widetilde\sR_{p}$, then the sizes $\widehat\sR_{h}$ and $\widehat\sR_{p}$ of the outgoing h-wave and p-wave satisfy \be{E:22interest} \big|\widehat\sR_{h}\big|+\big|\widehat\sR_{p}-\sR_{p}-\widetilde\sR_{p}\big|\leq \co(1) h^\ell\cdot \big|\sR_{p}\widetilde\sR_{p}\big| \big(|\sR_{p}|+|\widetilde\sR_{p}|\big) \;. \ee \end{itemize} \end{lemma} \begin{proof} The proofs of~\eqref{E:11interest},~\eqref{E:21interest} can be found in~\cite{AS}. We provide here only a proof of~\eqref{E:22interest} which is a slight refinement of the corresponding estimate established in~\cite[Lemma 3]{AS}. Consider the functional \bes \Psi(h^\ell,p^\ell,\sR_{p},\widetilde\sR_{p}):=(\widehat\sR_h, \ \widehat\sR_p-\sR_p-\widetilde\sR_p), \ees which is smooth in $(h^\ell,p^\ell)$ and twice continuously differentiable w.r.t. $\sR_{p},\widetilde\sR_{p}$, with Lipschitz continuous second derivatives. Observe that \begin{equation} \label{psi-est-1-1-a} \Psi(0,p^{\ell},\sR_{p},\widetilde\sR_{p})= \Psi(h^{\ell},p^{\ell},0,\widetilde\sR_{p})=\Psi(h^{\ell},p^{\ell},\sR_{p},0)=0\qquad\forall~h^\ell\geq 0\,, \end{equation} which implies \begin{equation} \label{psi-est-1-1-a2} \frac{\partial\Psi}{\partial h^{\ell}}(h^{\ell},p^{\ell},0,\widetilde\sR_{p})= \frac{\partial\Psi}{\partial h^{\ell}}(h^{\ell},p^{\ell},\sR_{p},0)=0\,. \end{equation} Moreover, with the same arguments of~\cite[Lemma~3]{AS} one can show that \begin{equation} \nonumber \frac{\partial^2\Psi}{\partial\sR_p\partial{\tilde{\sR}}_p}(h^{\ell},p^{\ell},\sR_p=0,{\widetilde{\sR}}_p=0)=(0,0) \qquad\forall~h^{\ell}\geq 0\,, \end{equation} which in turn implies \begin{equation} \label{psi-est-1-1-b} \frac{\partial^3\Psi}{\partial\sR_p\partial{\tilde{\sR}}_p\partial h^{\ell}}(h^{\ell},p^{\ell},\sR_p=0,{\widetilde{\sR}}_p=0)=(0,0) \qquad\forall~h^{\ell}\geq 0\,. \end{equation} Hence, using~\eqref{psi-est-1-1-a} we find \be{psi-est-1-1-h} \Psi(h^\ell,p^\ell,\sR_{p},\widetilde\sR_{p}) =\int_0^{h_\ell} \frac{\partial \Psi}{\partial h} (h,p^\ell,\sR_{p},\widetilde\sR_{p})\,dh\,. \ee On the other hand, relying on~\eqref{psi-est-1-1-a2},~\eqref{psi-est-1-1-b}, and invoking~\cite[Lemma~2.5]{Bre}, we derive \be{psi-est-1-1-hh} \frac{\partial\Psi}{\partial h^{\ell}}(h^{\ell},p^{\ell},\sR_{p},\widetilde\sR_{p}) \leq \co(1) \big|\sR_{p}\widetilde\sR_{p}\big| \big(|\sR_{p}|+|\widetilde\sR_{p}|\big) \qquad\forall~h^{\ell}\geq 0\,. 
\ee Combining together~\eqref{psi-est-1-1-h},~\eqref{psi-est-1-1-hh}, we recover the estimate~\eqref{E:22interest}. \end{proof} \begin{remark} Notice that, thanks to the relations~\eqref{E:equivalncestrengths}, the interaction estimates provided by the above lemma relative to 1-1 and 2-2 interactions remain valid if we measure the sizes of the incoming fronts and of the outgoing waves in the original coordinates instead of in the Riemann ones. Instead, for the 1-2 interaction, the $h_{\rm max}$ factor would be missing in the right-hand side of~\eqref{E:21interest} if the size of waves is measured in the original coordinates. \end{remark} Observe that, thanks to the a-priori ${\bf L}^{\infty}$-bounds established in~\cite{AS}, given $M_0>0$ and any $\delta_0^*>0$ and $\delta_p^*\in(0,1)$, there exist $\delta_0>0$ and $\delta_p\in (0,1)$ such that for an approximate solution $u=(h^{s,\varepsilon},p^{s,\varepsilon})$ constructed as in Subsection~\ref{Ss:Rsolv-appsol}, with initial data that satisfy~\eqref{initial-data-bounds}, one has \be{linf-h-bound} \|h^{s,\varepsilon}(t)\|_{{\bf L}^{\infty}}\le \delta_0^*\qquad \forall\,\, t>0\,. \ee \be{linf-pnew-bound} \|p^{s,\varepsilon}(t)-1\|_{{\bf L}^{\infty}}\le \delta_p^*\qquad \forall\,\, t>0\,. \ee Hence, relying on the estimates stated in Lemma~\ref{L:stimeinter}, it is shown in~\cite{AS} that one can choose $\overline\delta>0$ in~\eqref{E:omega} and $\delta_0^*>0$ in~\eqref{linf-h-bound} sufficiently small so that the Glimm functional defined in~\eqref{S3G} is strictly decreasing at any interaction occurring between time steps. Namely, at any time $t>0$ where an interaction takes place, the variation $\Delta\cg(t)\doteq \cg\big(u(t+)\big)-\cg\big(u(t-)\big)$ of the functional $\cg$ satisfies the following bounds. \begin{enumerate} \item[(i)] Consider an interaction between two 1-shocks with sizes $\sR_{\alpha}, \sR_\beta$. Notice that, by the properties of the rarefaction and Hugoniot curves of system~\eqref{S1system} recalled in Subsection~\ref{Ss:Rsolv-appsol}, such shocks must have the same sign and lie on the same side with respect to $p=1$. Then, we have \be{E:decreaseGlimm1s1s} \Delta\cg(t)\le-\frac{\omega_{\alpha,\beta}}{4}\cdot |\sR_{\alpha}\sR_\beta| \ee if we assume that $\delta_0^*$ and $\dfrac{\delta_0^*}{\overline\delta}$ are sufficiently small. \item[(ii)] At interactions of a $1$-shock of size $\sR_{\alpha}$ with a $1$-rarefaction of size $\sR_\beta$, we have a cancellation in the waves and the functional $V$ is strictly decreasing. Then, we have \be{E:decreaseGlimm1s1r} \Delta\cg(t)\le-\min\big\{|\sR_{\alpha}|,\,|\sR_{\beta}|\big\}\,, \ee if we assume that $\delta_0^*$ is sufficiently small. \item[(iii)] At interactions between fronts of different families or between two $2$-fronts, we have \be{E:decreaseGlimm2r} \Delta\cg\le- \frac{1}{4}\cdot|\sR_{\alpha}\sR_\beta| \ee if we assume that $\delta_0^*$ and ${\overline\delta}$ are sufficiently small. \end{enumerate} Instead, the bound~\eqref{G-timestep-est1} on the variation of the functional $\cg$ occurring at time steps is based on the following lemma established in~\cite[Lemma 1]{AS}. \begin{lemma}[Time Step Estimates] Consider a wavefront located at a point $x$ at a time step $t_k$, with left state $(h^{\ell},p^{\ell})$ and right state $(h^r,p^r)$ before the time step.
After updating the approximate solution at time~$t_k$ according to~\eqref{updated}, the solution of the Riemann problem determined by the jump at $(x,t_k)$ will consist of two waves of the first and second characteristic families, say of sizes $\sR_{h}^+$ and $\sR_{p}^+$ respectively, measured in Riemann coordinates. Then, the following hold true: \begin{itemize} \item[1] If the front connecting $(h^{\ell},p^{\ell})$ to $(h^r,p^r)$ is of the first family with size $\sR_{h}$, then we have \be{E:timestepest1} \sR_{h}^+=\sR_{h}+\co(1)\Delta t\cdot |p^{\ell}-1|\cdot|\sR_{h}|\,, \ee \be{E:timestepest2} \sR_{p}^+=\co(1)\Delta t\cdot|p^{\ell}-1|\cdot|\sR_{h}|\,. \ee \item[2] If the front connecting $(h^{\ell},p^{\ell})$ to $(h^r,p^r)$ is of the second family with size $\sR_{p}$, then we have \be{E:timestepest3} \sR_{h}^+=\co(1)\Delta t \cdot h^{\ell} \cdot|\sR_{p}|\,, \ee \be{E:timestepest4} \sR_{p}^+=\sR_{p}+\co(1)\Delta t \cdot h^{\ell}\cdot|\sR_{p}|\,. \ee \end{itemize} \end{lemma} \section{\texorpdfstring{${\bf L^{1}}$}{L1}-stability estimates - Proof of Theorem~\ref{stability-Phy}} \label{S4old} Consider two $s$-$\varepsilon$-approximate solutions $u, v$ of the non-homogeneous system~\eqref{S1system} constructed as in Subsection~\ref{Ss:Rsolv-appsol}, with initial data $(\overline h_u,\overline p_u)\doteq u(\,\cdot,0), (\overline h_v,\overline p_v)\doteq v(\,\cdot,0)$ satisfying~\eqref{initial-data-bounds}. The heart of the matter in establishing Theorem~\ref{stability-Phy} is to control the change in time of the functional~$\Phi_z$ defined in~\eqref{UpsilonNew}--\eqref{S4A2}, evaluated along $(u(t), v(t))$. This is accomplished in the following subsections by first analyzing the variation of $\Phi_z(u(t),v(t))$ when $\er\equiv 0$. Namely, assuming that $\delta_0^*$ in~\eqref{linf-h-bound} is sufficiently small, we will analyze the change of $\Phi_0(u(t),v(t))$ at three different classes of times: \begin{itemize} \item[\S~\ref{Ss:interactionTimes}:] at times where two fronts of $u$ or $v$ interact, we show that $t\mapsto\Phi_0(u(t),v(t))$ does not increase; \item[\S~\ref{Ss:nointeractiontimesN}:] at times between interactions, the function $t\mapsto\Phi_0(u(t),v(t))$ is Lipschitz continuous and we prove that there holds \begin{equation} \label{est-phi0-decr} \ddn{t}{}\Phi_0(u(t),v(t))\leq C_1\eps \;, \end{equation} where $C_1>0$ is a constant independent of $s$ and $\varepsilon$; \item[\S~\ref{Ss:timestep}:] at time steps $t_{k}$, we prove that \bas \Phi_0(u(t_k+),v(t_k+))&\le (1+C_2\, \Delta t) \,\Phi_0(u(t_k-),v(t_k-))\,, \eas where $C_2>0$ is again a constant independent of $s$ and $\varepsilon$. \end{itemize} The analysis of each class of times is performed in Subsections \S~\ref{Ss:interactionTimes}--\S~\ref{Ss:timestep}. Integrating~\eqref{est-phi0-decr} between two interaction times, and combining these three results of~\S~\ref{Ss:interactionTimes}--\S~\ref{Ss:timestep}, we thus establish Theorem~\ref{stability-Phy}-(ii).
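For the reader's convenience, we sketch how these three estimates combine to yield~\eqref{Phi0est3} (a direct iteration, under the bounds just listed). Integrating~\eqref{est-phi0-decr} on any interval free of interactions and time steps, and using the monotonicity of $\Phi_0$ at interaction times, one obtains~\eqref{Phi0est2}; in particular,
\bes
\Phi_0\big(u(t_{k+1}-),v(t_{k+1}-)\big)\le \Phi_0\big(u(t_k+),v(t_k+)\big)+C_1\,\eps\,\Delta t\,.
\ees
Applying the bound at the time step $t_{k+1}$ then gives
\bes
\Phi_0\big(u(t_{k+1}+),v(t_{k+1}+)\big)\le \big(1+C_2\,\Delta t\big)\Big[\Phi_0\big(u(t_k+),v(t_k+)\big)+C_1\,\eps\,\Delta t\Big]\,,
\ees
and iterating this one-step inequality from $t_h$ to $t_k$ produces precisely~\eqref{Phi0est3}.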
Next, we shall consider two $\varepsilon$-approximate solutions $u$ and $v$ of the homogeneous system~\eqref{S1system-hom} with initial data satisfying~\eqref{initial-data-bounds}, and we will analyze the variation of $\Phi_\er(u(t),v(t))$ when $z\ne 0$ and~\eqref{z-bound-1} holds: \begin{itemize} \item[\S~\ref{S5.4}:] performing an analysis similar to the one in~\S~\ref{Ss:interactionTimes}-\S~\ref{Ss:nointeractiontimesN}, we show that \bas \Phi_\er(u(t_2),v(t_2))-\Phi_\er(u(t_1),v(t_1))&\le C_1 \cdot (t_2-t_1) [\eps+\sigma]\;, \eas \end{itemize} for all $t_2\ge t_1\ge 0$, where $C_1>0$ is a constant independent of $s$ and $\varepsilon$; this establishes Theorem~\ref{stability-Phy}-(i). \subsection{Analysis at interaction times} \label{Ss:interactionTimes} In this section, we consider an interaction between waves of the approximate solution $v$ or $u$ occurring at time $t=\tau$ and show that the functional $\Phi_0(u(t),v(t))$, given in~\eqref{UpsilonNew}, does not increase across the interaction for appropriate constants in the weights $W_i$, i.e. we prove that \begin{equation} \label{phiest0} \Phi_0(u(\tau{+}),v(\tau{+}))\le \Phi_0(u(\tau{-}),v(\tau{-})). \end{equation} In the following lemma, we provide a condition relating the constants in the weights $W_i$ to the coefficient $\kappa_{\cg}$ of the Glimm functionals of $u$ and $v$. In preparation for this, we introduce the notation for the changes $\Delta$ across an interaction occurring at $t=\tau$: \begin{align*} &\Delta \cg(\tau) :=\cg(u(\tau{+}))-\cg(u(\tau{-}))+\cg(v(\tau{+}))-\cg(v(\tau{-})) \ ,\\ &\Delta W_i(\tau):=W_{i}(\tau{+},x)- W_{i}( \tau{-}, x)\ ,\\ &\Delta \ca_{i,j}(\tau):= \ca_{i,j}(\tau{+},x)- \ca_{i,j}( \tau{-}, x) \ ,\qquad i,\,j=1,\,2\ . \end{align*} The next lemma states a sufficient condition that implies~\eqref{phiest0}. \begin{lemma}\label{lemma4.1} Let $t=\tau$ be an interaction time for either $v$ or $u$. If \be{E:constraint_interNew} \kappa_{\cg}\ge \kappa_{i\ca1} \frac{ \Delta \ca_{i,1}}{|\Delta\cg|}+\kappa_{i\ca2}\frac{\Delta \ca_{i,2}}{|\Delta\cg|} \qquad\text{for $i=1,2$} \ee then~\eqref{phiest0} holds true. \end{lemma} \begin{proof} First, we note that across an interaction the Glimm functional is not increasing, hence $$|\Delta\cg(\tau) |=\cg(u(\tau-))+\cg(v(\tau-))-\cg(u(\tau+))-\cg(v(\tau+))>0.$$ Then, using the $\Delta$ notation, we have the identity \begin{align} W_i(\tau+,x) \cdot & e^{\kappa_{\cg} \big[\cg(u(\tau+))+\cg(v(\tau+))\big]} - W_i(\tau-,x) \cdot e^{\kappa_{\cg} \big[\cg(u(\tau-))+\cg(v(\tau-))\big]}\nonumber\\ =& \left(\Delta W_i (\tau)e^{-\kappa_{\cg} |\Delta\cg(\tau)|}+W_i(\tau-,x) (e^{-\kappa_{\cg} |\Delta\cg(\tau)|}-1) \right)\cdot e^{\kappa_{\cg} \big[\cg(u(\tau-))+\cg(v(\tau-))\big]}. \end{align} Using that \[ \Delta W_i(\tau)=\left( e^{ \kappa_{i\ca1}\Delta \ca_{i,1}+\kappa_{i\ca2}\Delta \ca_{i,2}}-1\right) W_i(\tau-) \] we get \begin{align} W_i(\tau+,x) \cdot & e^{\kappa_{\cg} \big[\cg(u(\tau+))+\cg(v(\tau+))\big]} - W_i(\tau-,x) \cdot e^{\kappa_{\cg} \big[\cg(u(\tau-))+\cg(v(\tau-))\big]} \nonumber\\ \le & \left( e^{-\kappa_{\cg}|\Delta\cg|} e^{ \kappa_{i\ca1}\Delta \ca_{i,1}+\kappa_{i\ca2}\Delta \ca_{i,2}}-1 \right) \cdot W_i(\tau-,x) e^{\kappa_{\cg} \big[\cg(u(\tau-))+\cg(v(\tau-))\big]}\le0 \end{align} under condition~\eqref{E:constraint_interNew}. This implies~\eqref{phiest0} immediately, because the map \bes \R^{+}\ni t\mapsto |\eta_{i}(\cdot,t)|\in L^{1}(\R) \ees is continuous. The proof is complete.
\end{proof} The aim is to show that there exists $\kappa_{\cg}>0$ large enough, for sufficiently small $\delta_0>0$, such that~\eqref{E:constraint_interNew} holds true at all interaction times $\tau$. Our strategy is to examine all cases of interactions and prove that in each case \ba\label{E:rel} \Delta \ca_{i,j} \le a |\Delta\cg| \qquad \text{for $i,j=1,2$} \ea where the factor $a>0$ depends on $\delta_0^*$, $M_0^*$, $\bar\delta$ and the coefficients $\kappa_{i\ca j}$, for $i,j=1,2$. From this, the conclusion from the analysis in the following subsections is that~\eqref{E:constraint_interNew} holds true as long as \begin{equation}\label{S4.1conditionkappa}\kappa_{\cg}> 2a\max_{i,j=1,2} \kappa_{i\ca j} \ . \end{equation} From here and on, we consider an interaction between waves of the approximate solution $v$ and omit the analysis relative to interactions of waves of $u$ because it is entirely similar. We devote the rest of the section to the proof of~\eqref{E:constraint_interNew} and hence~\eqref{phiest0}. Therefore, we analyze separately in each subsection the different type of interactions occurring between two wave-fronts of $v$ using that $\|h\|_\infty$ and $\delta_0^*$ are sufficiently small according to Theorem~\ref{T:AS}. We recall that by~\eqref{linf-h-bound} the ${\bf L^\infty}$-bound on $h$-component of the approximate solutions takes value in $[0,\delta_0^*]$, and that the $p$-component of the approximate solutions takes values in the interval $[p_0^*, p_1^*]$. Throughout the section, we denote by $\sR_{\alpha}$ and $\sR_{\beta}$ the strengths (in Riemann coordinates) of the incoming waves of $v$ before their interaction takes place at $t=\tau$, with $\sR_{\alpha}$ located on the left of $\sR_{\beta}$. We also let $p_{\alpha}^{\ell}$, $p_{\beta}^{\ell}$ denote the $p$-components (in the original coordinates) of their left states. For the convenience of the reader, we note that in the following subsection we use that \begin{enumerate} \item[$\circ$] for $1$-waves, it holds: $k_\alpha=1$, $|\sR_{\alpha}|<\mu\, \delta_0^*< M_0^* $,\ $0<h_\alpha<\delta_0^*$\ and $p_0^*<1-\delta_p^*<p_\alpha<1+\delta_p^*<p_1^*$ \item[$\circ$] for $2$-waves, it holds: $k_\alpha=2$, $|\sR_{\alpha}|< M_0^* $, $0<h_\alpha<\delta_0^*$ and $p_0^*<1-\delta_p^*<p_\alpha<1+\delta_p^*<p_1^*$\;. \end{enumerate} \subsubsection{Case of $1-1$ interaction without cancellation} \label{Sss:Interaction1nC} Assume that $\sR_{\alpha}$, $\sR_{\beta}$ are the sizes of two interacting $1$-waves of $v$ that have the same sign. Thus $\sR_{\alpha}$, $\sR_{\beta}$ are two shocks of the first family lying on the same side of $\{p=1\}$. We denote by $\sR_{h}'$, $\sR_{p}'$, the sizes (in Riemann coordinates) of the outgoing $1$-wave and $2$-wave, respectively, after the interaction. Notice that the left states of $\sR_{h}'$ and $\sR_{\alpha}$ are the same. Therefore, if we denote by ${p'}^{\ell}$ the $p$-component (in original coordinates) of the left state of $\sR_{h}'$, one has ${p'}^{\ell}=p_{\alpha}^{\ell}$. By~\eqref{E:decreaseGlimm1s1s} the Glimm functional is decreasing across this type of interaction with the bound $$ \cg(v(\tau{+}))\leq \cg(v(\tau{-}))-\frac{\omega_{\alpha,\beta}}{4}|\sR_{\alpha}\sR_\beta|, \qquad \cg(u(\tau{+}))=\cg(u(\tau{-})), $$ hence, \be{E:orerggrgeege} \Delta\cg=-\frac{\omega_{\alpha,\beta}}{4}|\sR_{\alpha}\sR_\beta|<0\;. \ee Next, we note that at points $x$ where neither $u(\tau)$ nor $v(\tau)$ admits a jump, one has $\eta_i(x,\tau-)=\eta_i(x,\tau+)$, $i=1,2$. 
Then, by the definition of $\ca_{1,1}$ in~\eqref{S4A1} and relying on the interaction estimate~\eqref{E:11interest}, we get that at such points there holds \begin{align*} \ca_{1,1}(x,\tau{+})- \ca_{1,1}(x,\tau{-}) & \leq \Big|\PHI{{p'}^{\ell}}|\sR_{h}'| -\left( \PHI{p_{\alpha}^{\ell}}|\sR_{\alpha}| +\PHI{p_{\beta}^{\ell}}|\sR_\beta| \right)\Big| \end{align*} Considering that in this case ${p'}^{\ell}=p_{\alpha}^{\ell}$ the right hand side can be estimated by \[ \Big|\PHI{{p'}^{\ell}}|\sR_{h}'| - \PHI{p_{\alpha}^{\ell}}|\sR_{\alpha}| -\PHI{p_{\beta}^{\ell}}|\sR_\beta| \Big| \leq \PHI{p_{\alpha}^{\ell}}\big|\sR_{h}'-\sR_{\alpha}-\sR_\beta\big| +\left| p_{\beta}^{\ell} - p_{\alpha}^{\ell}\right||\sR_\beta| \ . \] By~\eqref{E:equivalncestrengths} and recalling definition~\eqref{E:omega} we observe that \be{E:omega-equivalence} \min\big\{|p^{\ell}_\alpha-1\big|,\,|p^\ell_\beta-1\big|\big\}\leq\frac{\mu}{\overline\delta}\cdot\omega_{\alpha,\beta} \ee so that applying the interaction estimate~\eqref{E:11interest} we arrive at \begin{align} \label{E:orerggrgeege223prec} \ca_{1,1}(x,\tau{+})- \ca_{1,1}(x,\tau{-}) \le \co(1)p_1^*\frac{ \,\mu}{\overline\delta}\cdot \omega_{\alpha,\beta} \delta_0^*|\sR_{\alpha}\sR_{\beta}|+|p_{\alpha}^{\ell}-p_{\beta}^{\ell}| |\sR_{\beta}|. \end{align} Next, observe that $p_\beta^\ell=p_\alpha^r=\SC{1}{\sC_\alpha}{(h_\alpha^\ell,p_\alpha^\ell)}$ with $|\sC_\alpha|\leq \mu |\sR_\alpha|$ by~\eqref{E:equivalncestrengths}. To estimate the last term in~\eqref{E:orerggrgeege223prec}, we employ the explicit expression of $\SC{1}{\cdot}{}$ given in~\eqref{E:p1shock} and get \be{pab-est1} \big|p_{\alpha}^{\ell}-p_{\alpha}^{r}\big|=\big|p_{\alpha}^{\ell}-p_{\beta}^{\ell}\big|=\co(1)\frac{\mu}{\overline\delta}\cdot\omega_{\alpha,\beta}|\sR_{\alpha}|\,. \ee since $|p_{\alpha}^{\ell}-1|<\delta_p^*$ and $h-s_{1}(h,p_{\ell})>\frac{1}{2}$. Substituting into~\eqref{E:orerggrgeege223prec}, we get the estimate \begin{align}\label{E:orerggrgeege223a} \ca_{1,1}(x,\tau{+})- \ca_{1,1}(x,\tau{-}) &\leq \co(1) \cdot\omega_{\alpha,\beta}|\sR_{\alpha}\sR_{\beta}|\,. \end{align} Now, the change of $\ca_{1,2}$ given in~\eqref{S4A1} across such an interaction at $\tau$ is estimated \begin{align} \ca_{1,2}(x,\tau{+})- \ca_{1,2}(x,\tau{-}) & \leq |\rho'_{p}| \label{E:orerggrgeege223b} \leq\co(1)\frac{\,\mu}{\overline\delta}\cdot \omega_{\alpha,\beta}\delta_0^*|\sR_{\alpha}\sR_{\beta}| . \end{align} using again~\eqref{E:11interest}. By the definition of $\ca_{2,1}$ and of $\ca_{2,2}$ in~\eqref{S4A2} and estimates~\eqref{E:11interest},~\eqref{E:omega-equivalence}, we also bound the change \ba \label{E:irehgreugrg1} \left|\ca_{2,1}(x,\tau{+})- \ca_{2,1}(x,\tau{-})\right|+ & \left|\ca_{2,2}(x,\tau{+})- \ca_{2,2}(x,\tau{-})\right| \leq |\sR_{p}'|+|\sR_{h}'-\sR_{\alpha}-\sR_{\beta}| \notag \\ &\stackrel{\eqref{E:11interest}}{\leq} \co(1)\cdot\frac{\,\mu}{\overline\delta}\cdot \omega_{\alpha,\beta}\delta_0^*|\sR_{\alpha}\sR_{\beta}|\,. \ea Estimates~\eqref{E:orerggrgeege},~\eqref{E:orerggrgeege223a},~\eqref{E:orerggrgeege223b} and~\eqref{E:irehgreugrg1} with~\eqref{S4Wi} directly prove that \eqref{E:rel} holds with \[a\ge\co(1) (1+\frac{\mu}{\overline\delta} \delta_0^*)\;.\] \subsubsection{Case of $1-1$ interaction with cancellation} \label{Sss:Interaction1C} Here, we consider an interaction between two incoming $1$-waves of the solution $v$ at time $t=\tau$ but with strengths $\sR_{\alpha}$, $\sR_{\beta}$ of opposite sign. 
By~\eqref{E:decreaseGlimm1s1r} the Glimm functional is decreasing across this type of interaction with the bound \[ \cg(v(\tau{+}))\leq \cg(v(\tau{-}))-\min\{|\sR_{\alpha}|,|\sR_{\beta}|\}, \qquad \cg(u(\tau{+}))=\cg(u(\tau{-})). \] Assuming that the strengths satisfy $|\sR_{\alpha}|<|\sR_{\beta}|$, we have \be{E:orerggrgeege30394} \Delta\cg(\tau)\leq-|\sR_{\alpha}|\;. \ee We proceed following the same arguments and notations of \S~\ref{Sss:Interaction1nC}. Firstly, by~\eqref{S4A1} and the interaction estimate~\eqref{E:11interest}, we deduce that: \begin{align} \left|\ca_{1,1}(x,\tau{+})\right.&\left.- \ca_{1,1}(x,\tau{-})\right|+ |\ca_{1,2}(x,\tau{+})- \ca_{1,2}(x,\tau{-})| \notag\\ \notag&\leq |\rho'_{p}|+ \Big|\PHI{{p'}^{\ell}}|\sR_{h}'| - \PHI{p_{\alpha}^{\ell}}|\sR_{\alpha}| -\PHI{p_{\beta}^{\ell}}|\sR_\beta| \Big| \notag \\ &=\co(1) (p_1^*)^2\,\delta_0^*\cdot |\sR_{\alpha}\sR_{\beta}|\notag\\ &=\co(1) (p_1^*)^2\,(\delta_0^*)^2\cdot |\sR_{\alpha}|\,. \label{E:orerggrgeege0293-2} \end{align} Secondly, by~\eqref{S4A2} we deduce that: \begin{align} \notag \left|\ca_{2,1}(x,\tau{+})-\right.&\left. \ca_{2,1}(x,\tau{-})\right|+ \left|\ca_{2,2}(x,\tau{+})- \ca_{2,2}(x,\tau{-})\right| \\ \notag&\leq |\sR'_{p}|+|\sR_{h}'| -|\sR_{\alpha}| -|\sR_\beta| \notag \\ & \leq |\sR'_{p}|+|\sR_{h}' -\sR_{\alpha} -\sR_\beta| \notag \\ &=\co(1) p_1^*\delta_0^*\cdot |\sR_{\alpha}\sR_{\beta}|\notag\\ &=\co(1) p_1^*(\delta_0^*)^2\cdot |\sR_{\alpha}|\,. \label{E:orerggrgeege2029} \end{align} Estimates~\eqref{E:orerggrgeege30394},~\eqref{E:orerggrgeege0293-2} and~\eqref{E:orerggrgeege2029} directly prove that \eqref{E:rel} holds, actually with $a$ small satisfying with \[a\ge\co(1) p_1^*(\delta_0^*)^2\;.\] \subsubsection{Case of 2-2 interaction} \label{Sss:Interaction22} Now, we assume that $\sR_{\alpha}$, $\sR_{\beta}$ are the strengths of two interacting $2$-waves of $v$. By~\eqref{E:decreaseGlimm2r}, the Glimm functional is decreasing across such an interaction with the bound: \[ \cg(v(\tau{+}))\leq \cg(v(\tau{-}))-\frac{1}{4}|\sR_{\alpha}\sR_\beta|, \qquad \cg(u(\tau{+}))=\cg(u(\tau{-})), \] hence, \be{E:orerggrgeege2029-1} \Delta\cg(\tau)\leq -\frac{1}{4}|\sR_{\alpha}\sR_\beta|\;. \ee By the definition of the functionals $\ca_{i,j}$, given at~\eqref{S4A1}--\eqref{S4A2} and relying on the interaction estimates~\eqref{E:22interest}, it holds \begin{align}\label{E:orerggrgeege2029-2a} \left|\ca_{1,1}(x,\tau{+})- \ca_{1,1}(x,\tau{-})\right| &\le p_1^*\cdot |\sR'_{h}|\notag\\ &\le \co(1)\delta_0^* M_0^*\cdot |\sR_{\alpha}\sR_{\beta}| \end{align} and \begin{align} \left| \ca_{1,2}(x,\tau{+})- \ca_{1,2}(x,\tau{-}) \right| &\le |\sR_{p}' -\sR_{\alpha} -\sR_\beta| \notag \\ & \le \co(1)\delta_0^*\,M_0^*\cdot |\sR_{\alpha}\sR_{\beta}| \label{E:orerggrgeege2029-2} \end{align} while \begin{align} \left|\ca_{2,1}(x,\tau{+})- \ca_{2,1}(x,\tau{-})\right|+& \left|\ca_{2,2}(x,\tau{+})- \ca_{2,2}(x,\tau{-})\right|\notag\\ &\leq \big( |\sR'_{h}|+|\sR_{p}' -\sR_{\alpha} -\sR_\beta|\big)\qquad\qquad \notag \\ & \leq\co(1)\delta_0^*\,M_0^*\cdot |\sR_{\alpha}\sR_{\beta}|\,. \label{E:orerggrgeege2029-2b} \end{align} Estimates~\eqref{E:orerggrgeege2029-1} and~\eqref{E:orerggrgeege2029-2a}--~\eqref{E:orerggrgeege2029-2b} directly prove that \eqref{E:rel} holds, actually with $a$ small being \[ a\ge\co(1)\delta_0^* M_0^* (1+p_1^*)\;. \] \subsubsection{Case of 2-1 interaction} \label{Sss:Interaction21sameRegion} Here, we consider the case that two incoming waves of different families interact. 
The strength $\sR_{\alpha}$ corresponds to the 2-wave located at $x_\alpha$, while the strength $\sR_{\beta}$ corresponds to the 1-wave located at $x_\beta$. We observe that the left state of the outgoing 1-wave $\rho'_h$ is the same as the left state of the incoming 2-wave $\sR_\alpha$. Hence, if we denote by ${p'}^{\ell}$ the $p$-component of the left state of $\sR_{h}'$, then ${p'}^{\ell}=p_{\alpha}^{\ell}$. Since the $p$-component of the solution $v$ attains values in $[p_0^*,p_1^*]$, with $p_0^*<1<p_1^*$, there are the following cases: (a) The $p$-components of the left states $p_{\alpha}^{\ell}$, $p_{\beta}^{\ell}$ of the incoming waves belong to the same interval $[p_0^*,1]$ or $[1, p_1^*]$, i.e. they both lie in the same region $\{p>1\}$ or $\{p< 1\}$ of the $h$-$p$ plane. (b) The $p$-components of the left states $p_{\alpha}^{\ell}$, $p_{\beta}^{\ell}$ of the incoming waves belong to different regions. $\bullet$ Case (a) with states not crossing $\{p=1\}$. In this case, we first note that the decrease~\eqref{E:decreaseGlimm2r} of the Glimm functional is the same as for the $2-2$ interaction in the previous subsection, see~\eqref{E:orerggrgeege2029-1}. Next, relying on the interaction estimates~\eqref{E:21interest}, we obtain again estimates similar to~\eqref{E:orerggrgeege2029-2}--\eqref{E:orerggrgeege2029-2b} on the variation of $\ca_{i,j}$ around $\tau$ and show that \[ \Delta \ca_{i,j} \leq \co(1)\delta_0^*\, \cdot |\sR_{\alpha}\sR_{\beta}| \;. \] Hence, the same conclusion~\eqref{E:rel} holds here as well, actually with $a$ small, i.e. $a=\co(1)\delta_0^*$. $\bullet$ Case (b) with states crossing $\{p=1\}$. In this case, the left states of the incoming waves do not lie in the same region $\{p<1\}$ or $\{p>1\}$. In other words, we assume that the left state of $\sR_\beta$ lies in $\{p<1\}$ and the left state of $\rho'_h$ lies in $\{p>1\}$, or vice versa. Then, one can deal with the variation of the functionals $\ca_{1,2}(x,t)$, $\ca_{2,1}(x,t)$, $\ca_{2,2}(x,t)$, $\cg(u(t))$ and $\cg(v(t))$ across the interaction time $t=\tau$ precisely as in \S~\ref{Sss:Interaction22}, so that there holds~\eqref{E:orerggrgeege2029-1} and \be{E:orerggrgeege2029-4b} \begin{split} &\ca_{1,2}(x,\tau{+})- \ca_{1,2}(x,\tau{-})\leq \co(1)\delta_0^*\, \cdot |\sR_{\alpha}\sR_{\beta}| \\ & \left|\ca_{2,1}(x,\tau{+})- \ca_{2,1}(x,\tau{-})\right|+ \left|\ca_{2,2}(x,\tau{+})- \ca_{2,2}(x,\tau{-})\right| \leq \co(1) \delta_0^*\, \cdot |\sR_{\alpha}\sR_{\beta}|\,. \end{split} \ee Instead, the variation of the functional $\ca_{1,1}(x,t)$ needs a different treatment. Due to the change of region with respect to $\{p=1\}$ of the left state of the incoming and outgoing 1-wave, either the 1-wave located at $x_\beta$ is moving towards a 1-wave $\eta_1(x,\tau-)$, $x\neq x_\beta$, before the interaction and it is moving away from $\eta_1(x,\tau+)$ after the interaction, or vice versa. This behaviour is precisely determined by the fact that the first characteristic family is not genuinely nonlinear since we have $D\lambda_1\, {\bf r}_1<0$ on $\{p<1\}$ and $D\lambda_1\, {\bf r}_1>0$ on $\{p>1\}$. Hence, to estimate the variation of the functional $\ca_{1,1}$, we proceed by studying two subcases. To fix the ideas, consider a point $x<x_\beta$ at which $\eta_1(x,\tau-)=\eta_1(x,\tau+)>0$. If the 1-wave at $x_\beta$ is approaching $\eta_1(x,\tau\pm)$ before the interaction and it is moving away from~$\eta_1(x,\tau\pm)$ after the interaction, i.e.
if $p_{\beta}^{\ell}< 1< p_\alpha^{\ell}$, then one has \be{S4.A11<0} \ca_{1,1}(x,\tau{+})-\ca_{1,1}(x,\tau{-})=-\PHI{p_{\beta}^{\ell}}|\sR_{\beta}| <0 \ee and the functional $\ca_{1,1}$ decreases. Instead, if the 1-wave at $x_{\beta}$ approaches $\eta_1(x,\tau\pm)$ after the interaction, but it was not approaching $\eta_1(x,\tau\pm)$ before the interaction, i.e. if $p_{\alpha}^{\ell}< 1< p_\beta^{\ell}$, then, relying on~\eqref{E:21interest}, one has \ba\label{E:orerggrgeege2029-4f} \ca_{1,1}(x,\tau{+})-\ca_{1,1}(x,\tau{-})&= |p_{\beta}^{\ell}-1||\sR_{h}'|\notag\\ &= |p_{\beta}^{\ell}-1|\left(|\sR_\beta|+\co(1)\delta_0^*\cdot |\sR_{\alpha}\sR_{\beta}|\right). \ea Since $|p_{\beta}^{\ell}-1|\leq |p_{\beta}^{\ell}-p_{\alpha}^{\ell}|$, and because $$ (h_\beta^\ell,p_\beta^\ell)=\mathbf{S}_2\big(\sR_\alpha; (h_\alpha^\ell,p_\alpha^\ell)\big), $$ recalling~\eqref{Sdue},~\eqref{E:equivalncestrengths}, we get $|p_{\beta}^{\ell}-1|\leq\mu|\sR_{\alpha}|$. We thus conclude from~\eqref{E:orerggrgeege2029-4f} that \be{E:orerggrgeege2029-4X} \ca_{1,1}(x,\tau{+})- \ca_{1,1}(x,\tau{-})\leq \mu\big(1+\co(1)\delta_0^*\big)|\sR_{\alpha}\sR_{\beta}|\;. \ee \begin{figure}[htbp] {\centering \scalebox{1}{\input{2-1interactiontalk.tex} } \par} \caption{{\bf On the left:} The $2-1$ interaction in \S~\ref{Sss:Interaction21sameRegion} in Case (a). {\bf On the right:} The $2-1$ interaction in \S~\ref{Sss:Interaction21sameRegion}\label{S4:fig1} in Case (b).} \end{figure} Estimates~\eqref{E:orerggrgeege2029-1},~\eqref{E:orerggrgeege2029-4b},~\eqref{S4.A11<0} and~\eqref{E:orerggrgeege2029-4X} directly prove that \eqref{E:rel} holds with \[ a\ge\co(1)\left[(\delta_0^*+1)\mu+\delta_0^*\right]\;. \] Combining all the results of \S~\ref{Sss:Interaction1nC}--\S~\ref{Sss:Interaction21sameRegion}, estimate~\eqref{E:rel} holds with \[ a=\co(1)\max\left\{ (1+\frac{\mu}{\overline\delta} \delta_0^*) , p_1^*(\delta_0^*)^2, \delta_0^* M_0^* (1+p_1^*),\left[(\delta_0^*+1)\mu+\delta_0^*\right]\right\}\;. \] By Lemma~\ref{lemma4.1}, this immediately yields that, under the restriction~\eqref{S4.1conditionkappa} on the size of $\kappa_{\cg}$, the functional $\Phi_0$ is non-increasing at interaction times. \subsection{Analysis at times between interactions} \label{Ss:nointeractiontimesN} When there is no interaction at time $t$, a short direct computation yields \begin{equation} \label{E:der-phi0-00N} \ddn{t}{}\Phi_0(u(t),v(t))= \left[\sum_{\alpha\in \cj} \sum_{i=1}^{2} \big\{ \left|\eta_{i}\left(x_{\alpha}{-},t\right)\right| W_{i}(x_{\alpha}{-},t)-\left|\eta_{i}\left(x_{\alpha}{+},t\right)\right| W_{i}(x_{\alpha}{+},t)\big\} \dot x_{\alpha}\right] \cdot e^{\kappa_{\cg} \big[\cg(u(t))+\cg(v(t))\big]} \;, \end{equation} where $\cj$ denotes the set of indexes $\alpha$ associated to the jumps in $u(t)$ and $v(t)$. As in \S~\ref{S2}, we denote by $x_\alpha$ the location of the $\alpha$-jump, and let $\sC_\alpha$ denote its size, measured in original coordinates (cfr.~\S~\ref{Ss:strengthnotation}). Since $u(t), v(t)\in L^{1}(\R)$, we may assume that the piecewise constant maps $u(x,t), v(x,t)$ vanish when $x\to \pm \infty$, which implies that also $\eta_i(x,t)=0$ when $x\to \pm \infty$.
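For later use, we record an elementary observation (with $f_i$ introduced here only as a shorthand). Set $f_i(x,t)\doteq W_i(x,t)\,|\eta_i(x,t)|\,\lambda_i(x,t)$, where $\lambda_i(x,t)$ denotes the speed of the $i$-wave $\eta_i(x,t)$ in the decomposition~\eqref{E:vu} of the jump $\big(u(x,t),v(x,t)\big)$ (cfr.~\eqref{S5.2.1_introd} below). Since $u(t)$, $v(t)$ are piecewise constant, the map $x\mapsto f_i(x,t)$ is constant between consecutive jump points and vanishes for $|x|$ large, so that
\bes
\sum_{\alpha\in \cj}\Big(f_i(x_\alpha+,t)-f_i(x_\alpha-,t)\Big)=0\,,\qquad i=1,2\,.
\ees
Adding this null term to~\eqref{E:der-phi0-00N} yields the rewriting below.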
Hence, following~\cite[(8.18)-(8.19)]{Bre} we rewrite~\eqref{E:der-phi0-00N} in the equivalent form \begin{equation} \label{der-phi-sum} \ddn{t}{}\Phi_0(u(t),v(t))= \left[\sum_{\alpha\in \cj} \sum_{i=1}^{2} E_{\alpha,i}\right] \cdot e^{\kappa_{\cg} \big[\cg(u(t))+\cg(v(t))\big]} \end{equation} where \be{E:error} E_{\alpha,i} \doteq W_{i}^{\alpha,r}|\eta_{i}^{\alpha,r}|(\lambda_{i}^{\alpha,r}-\dot x_{\alpha})- W_{i}^{\alpha,\ell}|\eta_{i}^{\alpha,\ell}|(\lambda_{i}^{\alpha,\ell}-\dot x_{\alpha}) \qquad \alpha\in \cj \ . \ee Here, and throughout the following, we adopt the notation \begin{subequations} \label{GE:definitionslralpha} \begin{equation} \label{W-eta-lambda-rl-def} \begin{gathered} W_{i}^{\alpha,\ell}\doteq W_i(x_{\alpha}{-},t),\qquad W_{i}^{\alpha,r}\doteq W_i(x_{\alpha}{+},t),\qquad \eta_{i}^{\alpha,\ell}\doteq \eta_i(x_{\alpha}{-},t),\qquad \eta_{i}^{\alpha,r}\doteq \eta_i(x_{\alpha}{+},t). \end{gathered} \end{equation} Similarly, we set \be{S5.2.1 intro} u^\alpha\doteq (u^\alpha_{1},u^\alpha_{2})\doteq u(x_{\alpha},t)\ ,\qquad v^{\alpha,\ell}\doteq (v^{\alpha,\ell}_{1},v^{\alpha,\ell}_{2})\doteq v(x_{\alpha}-,t) \ , \qquad v^{\alpha,r}\doteq (v^{\alpha,r}_{1},v^{\alpha,r}_{2})\doteq v(x_{\alpha}+,t) \ . \ee Then, recalling~\eqref{E:vu} with $\er=0$, one has \begin{equation} \label{S5.2.1 intro-a} \omega^{\alpha,\ell/r}\doteq \mathbf{S}_1\big(\eta_1^{\alpha,\ell / r}; u^\alpha\big),\qquad v^{\alpha,\ell/r}= \mathbf{S}_2\big(\eta_2^{\alpha,\ell / r}; \omega^{\alpha,\ell/r}\big), \end{equation} and we set \begin{equation} \label{S5.2.1_introd} \lambda_{1}^{\alpha,\ell / r}\doteq \lambda_{1}\big(u^\alpha,\omega^{\alpha,\ell / r}\big), \qquad \lambda_{2}^{\alpha,\ell / r}\doteq\lambda_{2}\big(\omega^{\alpha,\ell / r}, v^{\alpha,\ell / r}\big), \end{equation} \end{subequations} where $\lambda_i(u,w)$ denotes the speed of the $i$-wave $\eta_i$ connecting the left state $u$ with the right state $w$. This means that $\lambda_1(u^\alpha,\omega^{\alpha,\ell / r})$ is the Rankine-Hugoniot speed of the $1$-shock connecting the left state $u^\alpha$ with the right state $\omega^{\alpha,\ell/r}$, while $\lambda_2(\omega^{\alpha,\ell / r}, v^{\alpha,\ell/r})$ is the Rankine-Hugoniot speed of the $2$-shock connecting the left state $\omega^{\alpha,\ell/r}$ with the right state $v^{\alpha,\ell / r}$ (see \S~\ref{Ss:Rsolv-appsol}). The goal of this section is to prove that, choosing the coefficients $\kappa_{i\ca j}$, $i,\,j=1,2$, in~\eqref{S4Wi} appropriately, together with the sizes $\delta_0^*$ and $\delta_p^*$ of the domain, there holds \be{E:mainest} E_{\alpha,1}+E_{\alpha,2} \leq \co(1) \varepsilon |\sC_{\alpha}|\,,\quad\text{for every }\alpha\in \cj\,. \ee Actually, the selection of these parameters is performed to obey the \emph{Conditions $(\Sigma)$} stated in the proof of Proposition~\ref{PropCond}. Next, summing~\eqref{E:mainest} over all jumps $\alpha\in \cj$, we derive the general estimate~\eqref{est-phi0-decr} from~\eqref{der-phi-sum}, relying on~\eqref{unifbvbound},~\eqref{V-totvar-est} and on the fact that $\cg(u(t))<M^*$ and $\cg(v(t))<M^*$ for all times. We will establish the basic estimate~\eqref{E:mainest} assuming that the term $E_{\alpha,i}$ in~\eqref{E:error} always refers to a jump in $v(t)$ at $x_\alpha$ that connects two states along an Hugoniot curve.
This means that, when the jump in $v(t)$ is actually a rarefaction front, we shall replace it with a {\it rarefaction shock} (cfr.~\cite[\S~5.2]{Bre}) of the same size, connecting two states along an Hugoniot curve, and travelling with the corresponding Hugoniot speed. Following an argument similar to that in~\cite[\S~8.2]{Bre}, one can show that, because of the second order tangency of Hugoniot and rarefaction curves, this reduction produces an error of size $\co(1) \varepsilon |\sC_{\alpha}|$. We shall discuss this reduction in Appendix~\ref{S:shockReduction}. The estimate~\eqref{E:mainest} in the case when $E_{\alpha,i}$ refers to a jump of $u(t)$ rather than of $v(t)$ is entirely analogous. The structure of this section is the following: \begin{itemize} \item[\S~\ref{Ss:estimPhys1wBis}] We derive~\eqref{E:mainest} when the wave of $v(t)$ at $x_{\alpha}$ is a (compressive or rarefaction) shock of the first family. \item[\S~\ref{Ss:estimPhys2wNN}] We derive~\eqref{E:mainest} when the wave of $v(t)$ at $x_{\alpha}$ is a (compressive or rarefaction) shock of the second family. \end{itemize} The analysis of sections \S~\ref{Ss:estimPhys1wBis}-\S~\ref{Ss:estimPhys2wNN} relies on refined interaction-type estimates that are obtained in Appendix~\ref{S:finerInteractions}: they involve the waves $\eta_i^{\alpha,\ell / r}$ connecting $u(t)$ with $v(t)$ at $x_\alpha$, the wave speeds $\lambda_{2}^{\alpha,\ell / r}$ given at~\eqref{S5.2.1_introd} and the speed $\dot x_{\alpha}$ of the wave in $v$. \subsubsection{Waves of the first family} \label{Ss:estimPhys1wBis} In this section, we derive the estimate~\eqref{E:mainest} on the sum of the errors $E_{\alpha,1}+E_{\alpha,2}$ defined in~\eqref{E:error} when the wave of $v(t)$ present at $x_{\alpha}$ belongs to the first family, i.e.~$k_{\alpha}=1$. To this end, we shall first provide an estimate of $E_{\alpha,1}$ and $E_{\alpha,2}$ separately, and then we will combine them to derive~\eqref{E:mainest}. We shall adopt the notation given in~\eqref{GE:definitionslralpha}, dropping the superscript $\alpha$, and we will let $\sC_\alpha, \sR_\alpha$ denote the size of the wave located at $x_\alpha$, measured in the original and Riemann coordinates, respectively (see \S~\ref{Ss:strengthnotation}). Recall that, by~\eqref{E:reeeeeeeterete} in Theorem~\ref{T:AS}, the solutions~$u$, $v$, and the intermediate value $\omega$ defined in~\eqref{S5.2.1 intro-a}, take values in the compact set \begin{subequations} \label{bound-eta1-gamma-eta1+gamma-pNN} \ba &\text{$K={[0,\delta_0^*]\times [p_0^*,p_1^*]}$} \ea with the parameters satisfying \ba\label{:setKbd-1bound} 0<\delta_0^*<1-\delta_p^*< p_0^*<1<p_1^*<1+\delta_p^* \ea for $\delta_0^*>0$ and $\delta_p^*>0$ sufficiently small. For convenience, we assume that both $\delta_0^*$ and $\delta_p^*$ are less than $\frac{1}{2}$. Having these restrictions on $K$, we obtain the following conditions on the variables \ba\label{:cvetagamma-1bound} v_1^{\alpha,\ell}, \omega_1^{\alpha,\ell}, |\eta_1^{\alpha,\ell}|, |\sC_\alpha|, |\eta_1^{\alpha,\ell}+\sC_\alpha| \le { \delta_0^*},\quad |v_2^{\alpha,\ell}-1|,|v_2^{\alpha,\ell}-1-\eta_2^{\alpha,\ell}|, |p_\alpha-1|\le \delta_p^* \,,\quad |\eta_2^{\alpha,\ell}|\le 2\delta_p^*. \ea Furthermore, we also require \ba\label{:constrNewCoeff0} \mathfrak K\doteq\kappa_{1\ca1}\mu\delta_0^*\delta_p^*<\tfrac{1}{4} \, .
\ea and \ba\label{E:constrNewCoeff} &\left(1-e^{\mathfrak K}\frac{\kappa_{1\ca1}}{\mu^2}\delta_0^*\delta_p^* \right) > \frac{1}{2}\;. \ea These additional conditions~\eqref{:constrNewCoeff0}--\eqref{E:constrNewCoeff} are imposed in order for the decay of the waves of the first family to dominate the possible increase of the waves of the second family and also to control error terms that arise while establishing~\eqref{E:mainest}. \end{subequations} \begin{figure} \centering \hfill \includegraphics[width=.30\linewidth, height=.30\linewidth]{shockCurves1} \hfill \includegraphics[width=.30\linewidth, height=.30\linewidth]{shockCurves2} \hspace{1in} \caption{{\bf Left}: Hugoniot curves of the first family through $(0,p)$ \textemdash left to right are entropy admissible. {\bf Right}: Hugoniot curves of the second family through $(h,0)$ \textemdash up to down are entropy admissible.} \label{fig:shockCurves} \end{figure} \paragraph{Estimate of $E_{\alpha,1}$ for waves of the first family} \label{SubEalpha1-1} The analysis is divided into three cases according to the signs of $\eta_{1}^{\alpha,r}$, $\eta_{1}^{\alpha,\ell}$ and to the type of the wave at $x_\alpha$: a) treating the case when $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have the same sign, b) treating the case when $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite sign and the wave at $x_{\alpha}$ is an entropy admissible $1$-shock, c) treating the case when $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite sign and the wave at $x_{\alpha}$ is a $1$-rarefaction shock. \noindent\textbf{Case a)} Suppose that $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have the same sign. Observe that by definitions~\eqref{S4Wi}--\eqref{S4A1} one has $$W_{1}^{\alpha,r}=W_{1}^{\alpha,\ell} e^{\mathfrak D}$$ where $ \mathfrak D$ is given by \ba\label{E:rguiregregregrergeNN} \mathfrak D=\sgn\left( (v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\right)\cdot \kappa_{1\ca1}\PHI{{v_{2}^{\alpha,\ell}}}|\sR_{\alpha}|\;. \ea By bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN},\,\eqref{E:equivalncestrengths}, it holds \ba \left|\mathfrak D\right|\leq \mathfrak K<\tfrac14 \label{E:boundD}\;. \ea We thus estimate $E_{\alpha,1} $ in~\eqref{E:error} as follows: \begin{align*} E_{\alpha,1} &= (W_{1}^{\alpha,r}-W_{1}^{\alpha,\ell})|\eta_{1}^{\alpha,\ell}|(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha}) +W_{1}^{\alpha,r}\left[|\eta_{1}^{\alpha,r}|(\lambda_{1}^{\alpha,r}-\dot x_{\alpha})-|\eta_{1}^{\alpha,\ell}|(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha})\right] \notag \\ &\stackrel{\eqref{E:rguiregregregrergeNN}}{=} W_{1}^{\alpha,r}\left(1-e^{-\mathfrak D}\right)(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha}) |\eta_{1}^{\alpha,\ell} |+W_{1}^{\alpha,r}\sgn{(\eta_{1}^{\alpha,\ell})}\left[\eta_{1}^{\alpha,r}(\lambda_{1}^{\alpha,r}-\dot x_{\alpha})-\eta_{1}^{\alpha,\ell}(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha})\right] \ .
\end{align*} Using~\eqref{E:stimaetarbasefull1}, we approximate $\eta_{1}^{\alpha,\ell}+\sC_\alpha $ by $\eta_{1}^{\alpha,r} $ having some error terms \[ \eta_{1}^{\alpha,\ell}+\sC_\alpha =\eta_{1}^{\alpha,r} (1+\co(1)\delta_0^*)+\co(1)\delta_0^*|\eta_2^{\alpha,\ell}| \] and then, we combine with~\eqref{E:stima11lfull} to estimate the range of the difference in speeds to get \[ \frac{ (v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r} }{ v_{2}^{\alpha,\ell}} \left(1-\co(1) \delta_0^*\right)-\co(1)|\eta_2^{\alpha,\ell}| \leq \dot x_{\alpha}- \lambda_{1}^{\alpha,\ell} \leq \frac{ (v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r} }{ v_{2}^{\alpha,\ell}} \left(1+\co(1) \delta_0^*\right)+ \co(1)|\eta_2^{\alpha,\ell}| \,. \] Next, we recall~\eqref{E:rguiregregregrergeNN} and observe that \[ \left(1-e^{-\mathfrak D}\right) (v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r}>0\;, \] since $\eta_{1}^{\alpha,\ell}\eta_1^{\alpha,r}>0$. Combining these estimates together with ~\eqref{E:fvgabfafvNewfull} and bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN}, we arrive at \ba E_{\alpha,1} \leq W_{1}^{\alpha,r}\cdot&\left(\frac{-1+\co(1) \delta_0^*}{p_1^*}\cdot \left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right| + \co(1) \left|\left(1-e^{-\mathfrak D}\right)\eta_{1}^{\alpha,\ell}\eta_2^{\alpha,\ell} \right|\notag \right. \\ &\qquad\qquad\qquad\qquad\left. +\co(1) \delta_0^* \left\lvert(v_{2}^{\alpha,\ell}-1)^2 \eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\sC_{ \alpha} \right\rvert+\co(1)|\eta_2^{\alpha,\ell} \sC_{{{\alpha}}}| \right) \ \label{E:stimaintermedia} \ea using that $|v_1^{\alpha,\ell}+\sC_{{{\alpha}}}|\le \co(1)\delta_0^*$. Next, we use the Maclaurin expansion to estimate $\left|e^{-\mathfrak D}-1\right|$ \bas 0\leq e^{-\mathfrak D}-1+\sgn\left( (v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\right)\cdot \kappa_{1\ca1}\PHI{{v_{2}^{\alpha,\ell}}}|\sR_{\alpha}| \leq e^{\mathfrak K}\kappa_{1\ca1}^2 \PHI{{v_{2}^{\alpha,\ell}}}^2 \sR^2_{\alpha} \ , \eas and control the error terms in~\eqref{E:stimaintermedia}. Indeed, by estimates~\eqref{E:equivalncestrengths}\,,~\eqref{E:boundD} we get the upper bound \ba\label{E:upperestE} \left|\left(1-e^{-\mathfrak D}\right)\eta_1^{\alpha,\ell}\right|\leq \left(1+e^{\mathfrak K}\mathfrak K\right) \mathfrak K \left|\sC_\alpha\right| \leq4 \mathfrak K\left|\sC_\alpha\right| \,, \ea while by~\eqref{E:constrNewCoeff}, we get the lower bound \ba\label{E:newlowerbound} {\left|1-e^{-\mathfrak D}\right|}&\ge|\mathfrak D|- e^{\mathfrak K}\kappa_{1\ca1}^2 \PHI{{v_{2}^{\alpha,\ell}}}^2 |\sR_{\alpha}|^2 \notag\\ &\ge \left(1-e^{\mathfrak K}\frac{\kappa_{1\ca1}}{\mu^2}\delta_0^* \delta_p^*\right)\cdot \kappa_{1\ca1}\PHI{{v_{2}^{\alpha,\ell}}}|\sC_{\alpha}| \notag\\ &> \frac{\kappa_{1\ca1} }{2}\PHI{{v_{2}^{\alpha,\ell}}}|\sC_{\alpha}| \;. \ea Substituting estimates~\eqref{E:upperestE} and~\eqref{E:newlowerbound} into~\eqref{E:stimaintermedia} , we obtain \begin{align}\label{E:sanfownFNN} E_{\alpha,1} \leq W_{1}^{r}\cdot&\left(\left(\frac{-1+\co(1) \delta_0^*(1+\frac{2}{\kappa_{1\ca1}})}{ p_1^*} \right)\cdot \left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right| +\co(1) \left(1+ 4 \mathfrak K \right) \cdot\left| \sC_{\alpha} \eta_{2}^{\alpha,\ell}\right| \right) \ . \end{align} \noindent\textbf{Case b)} Suppose that $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite sign and that the wave at $x_{\alpha}$ is an entropy admissible $1$-shock. 
Due to the geometry of the shock curves (see~Figure~\ref{S2:fig1}), this implies that \be{S4:C1b} \sgn(v_{2}^{\alpha,\ell}-1)\,\eta_1^{\alpha,r}<0< \sgn(v_{2}^{\alpha,\ell}-1)\,\eta_{1}^{\alpha,\ell} \ . \ee We also note that the interaction estimates~\eqref{E:stimaetarbasefull1}, together with~\eqref{bound-eta1-gamma-eta1+gamma-pNN}, imply the bound \be{etarl-strenght-opposite-signNN} |\eta_{1}^{\alpha,r}|+|\eta_{1}^{\alpha,\ell}|=|\eta_{1}^{\alpha,r}-\eta_{1}^{\alpha,\ell}| \leq \mathcal{O}(1)|\sC_{\alpha}| \ . \ee Applying~\eqref{E:stima11full} and bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN}, we can estimate the error $E_{\alpha,1} $ in~\eqref{E:error}: \bas E_{\alpha,1} &\stackrel{}{=} W_{1}^{\alpha,r} |\eta_1^{\alpha,r}|\left[- \frac{(v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,\ell} }{v_{2}^{\alpha,\ell}} (1+\co(1) \delta_0^*p_1^*)+ \co(1)|\eta_{2}^{\alpha,\ell} |\right] \notag\\ &\qquad\qquad- W_{1}^{\alpha,\ell}|\eta_1^{\alpha,\ell}| \left[- \frac{(v_{2}^{\alpha,\ell}-1) (\eta_{1}^{\alpha,\ell}+\sC_{\alpha})}{v_{2}^{\alpha,\ell}} (1+\co(1) \delta_0^* p_1^*)+ \co(1)|\eta_{2}^{\alpha,\ell} |\right] \notag \eas If one now considers~\eqref{E:stimaetarbasefull1}--\eqref{etarl-strenght-opposite-signNN} and bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN},~\eqref{W-unif-boundN}, one arrives at \ba\label{E:49nvbhqbdfdNN} E_{\alpha,1} &\stackrel{}{\le} \frac{-1+\co(1) \delta_0^* }{p_1^*} (W_{1}^{\alpha,r}+W_{1}^{\alpha,\ell} ) \cdot {\left|(v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r}\eta_{1}^{\alpha,\ell}\right|} + \co(1) W_{1}^{*}\cdot |\sC_\alpha\eta_{2}^{\alpha,\ell}| \,. \ea By~\eqref{E:stimaetarbasefull1}, we note that in the above, it holds \[ \sgn\left((v_{2}^{\alpha,\ell}-1)\,(\eta_1^{\alpha,\ell}+\sC_\alpha)\right)=\sgn\left((v_{2}^{\alpha,\ell}-1)\,\eta_1^{\alpha,r}\right)<0 \] for small $\delta_0^*$ and, as in Case a), $\eta_{1}^{\alpha,\ell}+\sC_{\alpha}$ is approximated by $\eta_{1}^{\alpha,r}$ to establish~\eqref{E:49nvbhqbdfdNN}. \noindent\textbf{Case c)} Suppose that $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite sign and that the wave at $x_{\alpha}$ is a 1-rarefaction shock (see the discussion at the beginning of \S~\ref{Ss:nointeractiontimesN}). Then, by construction one has $|\sC_{\alpha}|\leq\varepsilon$ (see \S~\ref{Ss:Rsolv-appsol}). By the estimate on velocities~\eqref{E:stima11lfull} and since~\eqref{etarl-strenght-opposite-signNN} remains valid, we deduce that \be{wavespeed-opposite-sign-epsNN} \begin{aligned} |\lambda_1^{\alpha,\ell}-\dot{x}_\alpha| &\leq \co(1) \left(\delta_p^*\cdot |\eta_1^{\alpha,\ell}+\sC_{\alpha}| + |\eta_2^{\alpha,\ell}|\right) \leq \co(1) \left(\delta_p^*|\sC_{\alpha} |+ |\eta_2^{\alpha,\ell}|\right) \leq \co(1) \left(\delta_p^*\varepsilon + |\eta_2^{\alpha,\ell}|\right) \;, \end{aligned} \ee and similarly, from~\eqref{E:stima11full2}, \be{wavespeed-opposite-sign-epsNN2} \begin{aligned} |\lambda_1^{\alpha,r}-\dot{x}_\alpha| &\leq \co(1) \left(\delta_p^*\cdot |\eta_1^{\alpha,\ell}| + |\eta_2^{\alpha,\ell}|\right) \leq \co(1) \left(\delta_p^*\varepsilon + |\eta_2^{\alpha,\ell}|\right) \;.
\end{aligned} \ee Using~\eqref{etarl-strenght-opposite-signNN} and \eqref{wavespeed-opposite-sign-epsNN}--\eqref{wavespeed-opposite-sign-epsNN2}, we can now derive the following estimate for $E_{\alpha,1} $ in~\eqref{E:error}: \begin{align}\label{E:aefNN} E_{\alpha,1} &\stackrel{\eqref{E:error}}{=} (W_{1}^{\alpha,r}-W_{1}^{\alpha,\ell})|\eta_{1}^{\alpha,\ell}|(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha}) +W_{1}^{\alpha,r}\left[|\eta_{1}^{\alpha,r}|(\lambda_{1}^{\alpha,r}-\dot x_{\alpha})-|\eta_{1}^{\alpha,\ell}|(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha})\right] \notag \\ &\leq \left|W_{1}^{\alpha,r}-W_{1}^{\alpha,\ell}\right| |\eta_{1}^{\alpha,\ell}| \co(1) \left(\delta_p^* \varepsilon + |\eta_2^{\alpha,\ell}|\right) +\co(1)W_{1}^{\alpha,r}\left[|\eta_{1}^{\alpha,r}|+|\eta_{1}^{\alpha,\ell}|\right] \left(\delta_p^*\varepsilon + |\eta_2^{\alpha,\ell}|\right)\notag\\ &\stackrel{\eqref{etarl-strenght-opposite-signNN}}{\leq} \co(1) W_{1}^{*} \left( \delta_p^* \varepsilon |\sC_\alpha|+ |\eta_2^{\alpha,\ell}\sC_\alpha| \right)\; . \end{align} All cases for $E_{\alpha,1}$ for waves of the first family have been investigated at this point. \paragraph{Estimate of $E_{\alpha,2}$ for waves of the first family}\label{SubEalpha1-2} Let us first point out that the Hugoniot curves of the same family cannot cross each other; see Figure~\ref{fig:shockCurves}. As a consequence, due to the geometric properties of such curves, the components $\eta_{2}^{\alpha,\ell},\,\eta_{2}^{\alpha,r}$ must have the same sign and thus, by definitions~\eqref{S4WiN2} and~\eqref{S4A2}, there holds \bes W_{2}^{\alpha,\ell}=W_{2}^{\alpha,r} e^{\kappa_{2\ca 1}|\sR_{\alpha}|}\;. \ees We thus rewrite the error term in~\eqref{E:error} as \bas E_{\alpha,2} &= (W_{2}^{\alpha,r}-W_{2}^{\alpha,\ell})(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\big|\eta_{2}^{\alpha,\ell} \big| +W_{2}^{\alpha,r}\left[|\eta_{2}^{\alpha,r}|(\lambda_{2}^{\alpha,r}-\dot x_{\alpha})-|\eta_{2}^{\alpha,\ell}|(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\right] \\ &= W_{2}^{\alpha,r}\left[(1-e^{\kappa_{2\ca 1}|\sR_{\alpha}|})(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\big|\eta_{2}^{\alpha,\ell} \big| + \left(\eta_{2}^{\alpha,r}(\lambda_{2}^{\alpha,r}-\dot x_{\alpha})-\eta_{2}^{\alpha,\ell}(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\right) \sgn(\eta_{2}^{\alpha,\ell})\right] \;. \eas Next, we observe that by convexity of the exponential and recalling bounds~\eqref{E:equivalncestrengths}, one has \[ 1-e^{\kappa_{2\ca 1}|\sR_{\alpha}|}\leq-\kappa_{2\ca 1}|\sR_{\alpha}|\leq-\frac{\kappa_{2\ca 1}}\mu|\sC_{\alpha}|\,. \] Also, by definition, we have $\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha}=v_{2}^{\alpha,\ell} +\co(1)\delta_{0}^*\ge\frac{p_0^*}{2}$, using bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN} for small $\delta_{0}^*$. In view of the above analysis, we can estimate the term $E_{\alpha,2} $ as: \ba\label{E:fpNIVqbjkqsfNN} E_{\alpha,2} & \stackrel{\eqref{E:fvgabfafvNewfull2}}{\leq}W_{2}^{\alpha,r}\cdot\left[\left( - \kappa_{2\ca1} \frac{p_{0}^{*}}{2\mu} + \co(1) \right)\big|\eta_{2}^{\alpha,\ell}\sC_{\alpha}\big| +\co(1) \cdot |(v_{2}^{\alpha,\ell}-1)^{ 2}\eta_{1}^{\alpha,r} \eta_{1}^{\alpha,\ell}\sC_{\alpha} | \right]\;, \ea using estimate~\eqref{E:fvgabfafvNewfull2}, bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN} and noting again that~\eqref{E:stimaetarbasefull1} allowed us to replace $\eta_{1}^{\alpha,\ell}+\sC_\alpha $ with $\eta_{1}^{\alpha,r} $, with some error as in \S~\ref{SubEalpha1-1}.
For the case a) in \S~\ref{SubEalpha1-1} that both $\eta_1^{\alpha,\ell}$ and $\eta_1^{\alpha,r}$ have the same sign, estimate~\eqref{E:fpNIVqbjkqsfNN} further reduces to \ba E_{\alpha,2} &\label{E:fpNIVqbjkqsfNN2} \stackrel{\eqref{E:newlowerbound}}{\leq} W_{2}^{\alpha,r}\cdot\left[\left( - \kappa_{2\ca1} \frac{p_{0}^{*}}{2\mu} + \co(1) \right)\big|\eta_{2}^{\alpha,\ell}\sC_{\alpha}\big| + \co(1) \frac{2}{\kappa_{1\ca1}}\left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right| \right]\;. \ea \paragraph{Derivation of~\eqref{E:mainest} for waves of the first family}\label{P4.2.1.3} We conclude here the proof of estimate~\eqref{E:mainest} for a wave of $v(t)$ at $x_{\alpha}$ belonging to the first family. In view of the above analysis, we combine now the estimates of $E_{\alpha,1}$ and $E_{\alpha,2}$ distinguishing the three cases studied to estimate $E_{\alpha,1}$ in \S~\ref{SubEalpha1-1}--~\ref{SubEalpha1-2}. \vskip\baselineskip \noindent\textbf{Case a)} Recall that in this case both $\eta_{1}^{\alpha,\ell}, \eta_{1}^{\alpha,r}$ are assumed to have the same sign. Adding together~\eqref{E:sanfownFNN} and~\eqref{E:fpNIVqbjkqsfNN2} yields \be{4.26-1aNN} \begin{aligned} E_{\alpha,1} +E_{\alpha,2} &\leq \left( \frac{-1+\co(1) \delta_0^*}{ p_1^*} \cdot W_1^{\alpha,r}+\co(1) \cdot\frac{1}{\kappa_{1\ca1}} W_2^*\right) \left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right| \\ &+\left( - \kappa_{2\ca1}\,\frac{W_2^{\alpha,r} p_0^* }{2\mu}+\co(1) ( 1+\mathfrak K )W_1^*+ \co(1)W_2^{\alpha,r} \right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \end{aligned} \ee using~\eqref{W-unif-boundN} and taking $\kappa_{1\ca1}\ge 1$. Taking $\delta_0^*$ small enough and using that $W_i^{\alpha,r}\ge 1$, we can further bound this sum as \be{4.26-1aNN2} \begin{aligned} E_{\alpha,1} +E_{\alpha,2} \leq & \left( \frac{-1+\co(1) \delta_0^*}{ p_1^*} +\co(1) \cdot\frac{1}{\kappa_{1\ca1}} W_2^*\right) \left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right| \\ &+\left( - \kappa_{2\ca1}\,\frac{p_0^* }{4\mu}+\co(1) W_1^*\right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| +\left( - \kappa_{2\ca1}\,\frac{ p_0^* }{4\mu}+ \co(1)\right) W_2^{\alpha,r} |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \\ =:& I_1 \left|\left(1-e^{-\mathfrak D}\right)(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right|+ \left(J_1 +J_2 W_2^{\alpha,r}\right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \;. \end{aligned} \ee \vskip\baselineskip \noindent\textbf{Case b)} In this case, $\eta_{1}^{\alpha,\ell}$ and $\eta_{1}^{\alpha,r}$ have opposite sign and the wave at $x_{\alpha}$ is an entropy admissible 1-shock. 
Summing up~\eqref{E:49nvbhqbdfdNN},~\eqref{E:fpNIVqbjkqsfNN}, and relying on~\eqref{bound-eta1-gamma-eta1+gamma-pNN} and~\eqref{W-unif-boundN}, it follows \be{4.26-1aNN3} \begin{aligned} E_{\alpha,1} +E_{\alpha,2} &\leq \left(\frac{-1+\co(1) \delta_0^* }{p_1^*} W_{1}^{\alpha,r} +\co(1)W_2^{*}\delta_0^*p_1^* \right)\cdot {\left|(v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r}\eta_{1}^{\alpha,\ell}\right|} \\ & +\left(- \kappa_{2\ca1}\,\frac{W_2^{\alpha,r} p_0^* }{2\mu}+\co(1)\,\left(W_1^*+W_2^{\alpha,r}\right)\right) \cdot|\eta_{2}^{\alpha,\ell}\sC_{\alpha}|\;. \end{aligned} \ee As before, this further reduces to \be{4.26-1aNN4} \begin{aligned} E_{\alpha,1} +E_{\alpha,2} \leq & \left( \frac{-1+\co(1) \delta_0^* }{p_1^*} +\co(1) \cdot W_2^* \delta_0^*p_1^*\right) \cdot {\left|(v_{2}^{\alpha,\ell}-1) \eta_{1}^{\alpha,r}\eta_{1}^{\alpha,\ell}\right|} \\ &+\left( - \kappa_{2\ca1}\,\frac{p_0^* }{4\mu}+\co(1) W_1^*\right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| +\left( - \kappa_{2\ca1}\,\frac{ p_0^* }{4\mu}+ \co(1)\right) W_2^{\alpha,r} |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \\ =:& I_2 \left|(v_{2}^{\alpha,\ell}-1)\eta_{1}^{\alpha,\ell}\eta_{1}^{\alpha,r}\right|+ \left(J_1 +J_2 W_2^{\alpha,r}\right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \;. \end{aligned} \ee \vskip\baselineskip \noindent\textbf{Case c)} In this last case, $\eta_{1}^{\alpha,\ell}$ and $\eta_{1}^{\alpha,r}$ have opposite sign and the wave at $x_{\alpha}$ is a 1-rarefaction shock. By~\eqref{E:aefNN},~\eqref{E:fpNIVqbjkqsfNN}, and relying on~\eqref{bound-eta1-gamma-eta1+gamma-pNN},~\eqref{etarl-strenght-opposite-signNN} and~\eqref{W-unif-boundN}, one thus has \be{4.26-1aNN5} \begin{aligned} E_{\alpha,1}+E_{\alpha,2} \leq & \left(- \kappa_{2\ca1}\,\frac{W_2^{\alpha,r} p_0^* }{2\mu}+\co(1)\,\left(W_1^* + W_2^{\alpha,r}\right)\right) \cdot|\eta_{2}^{\alpha,\ell}\sC_{\alpha}| \\ &+\co(1) (W_1^* +W_2^* \delta_p^* \delta_0^*) \delta_p^* \cdot \varepsilon |\sC_{\alpha} | \,\\ \le & \left(J_1 +J_2 W_2^{\alpha,r}\right) |\eta_{2}^{\alpha,\ell}\sC_{\alpha}| + \co(1) (W_1^* +W_2^* \delta_p^* \delta_0^*) \delta_p^* \cdot \varepsilon |\sC_{\alpha} | \;, \end{aligned} \ee using $|\sC_{\alpha}|\le\varepsilon$. \vskip\baselineskip The next proposition guarantees the existence of suitable parameters $\kappa_{i\ca j}$, with $i,j=1,2$, and $\delta_p^*$, $\delta_0^*$ such that~\eqref{E:mainest} holds true. \begin{proposition}\label{PropCond} There exist coefficients $\kappa_{i\ca j}$, with $i,j=1,2$, of $W_1$ and $W_2$ in~\eqref{S4Wi} and positive constants $\delta_p^*$, $\delta_0^*$ such that estimate~\eqref{E:mainest} holds. \end{proposition} \begin{proof} Let $\co(1)$ denote the maximum of all constants $\co(1)$ appearing in estimates~\eqref{4.26-1aNN2},~\eqref{4.26-1aNN4} and~\eqref{4.26-1aNN5}. Then, we select the values $\kappa_{i\ca j}$ and $\delta_p^*$, $\delta_0^*$ in such a way that the terms $I_1$, $I_2$, $J_1$ and $J_2$ appearing in~\eqref{4.26-1aNN2},~\eqref{4.26-1aNN4} and~\eqref{4.26-1aNN5} are all negative. This is accomplished following the next steps under the so-called \emph{\bf Conditions ($\Sigma$)}: \begin{enumerate} \item[Step 1.] First, fix the positive constants $\kappa_{1\ca 2}$ and $\kappa_{2\ca 2}$. \item[Step 2.] Next, choose $\kappa_{2\ca 1}$ large enough so that \[- \kappa_{2\ca1}\,\frac{p_0^* }{4\mu}+\co(1) e^{2 \kappa_{1\ca 2}M^*}\le 0.\label{Condition1}\tag{$\Sigma_1$}\] \item[Step 3.]
Now, select $\kappa_{1\ca 1}\ge 1$ large enough so that \[ - \,\frac{1}{2p_1^*}+ \frac{\co(1)}{\kappa_{1\ca 1}} e^{(\kappa_{2\ca1}+\kappa_{2\ca2})M^*} <0\,.\label{Condition2}\tag{$\Sigma_2$}\] \item[Step 4.] Next, let $\delta_p^*$ be a small positive constant that satisfies \[0<\delta_p^*<\min\left\{\frac{1}{2},\,\frac{\kappa_{2\ca1}}{\kappa_{1\ca1}}\right\}\,.\label{Condition3}\tag{$\Sigma_3$}\] Then, combining with the previous step, we immediately have \[ J_2:= - \kappa_{2\ca1}\,\frac{ p_0^* }{4\mu}+ \co(1)\le J_1:= - \kappa_{2\ca1}\,\frac{p_0^* }{4\mu}+\co(1) W_1^*<- \kappa_{2\ca1}\,\frac{p_0^* }{4\mu}+\co(1) e^{2 \kappa_{1\ca 2}M^*}< 0 \] recalling that $W_1^*= \co(1)e^{ (\kappa_{1\ca1} \delta_p^*+ \kappa_{1\ca2})\cdot M^*}$ from~\eqref{W-unif-boundN}. \item[Step 5.] Last, choose $\delta_0^*$ small enough, depending on $\kappa_{i\ca j}$ and $\delta_p^*$, so that \[ -1+\co(1)\delta_0^*<-\frac{1}{2} ,\quad\text{and}\quad-\frac{1}{ 2p_1^*} +\co(1) \cdot W_2^* \delta_0^*p_1^*< 0 \label{Condition4}\tag{$\Sigma_4$} \] and so that both conditions~\eqref{:constrNewCoeff0}--\eqref{E:constrNewCoeff} remain valid. The size of $\delta_0^*$ may be further reduced according to other requirements of the analysis in this paper. Then, we immediately deduce from \eqref{Condition2} and~\eqref{Condition4} that $I_1<0 $ and $I_2<0$. \end{enumerate} Having that the terms $I_1$, $I_2$, $J_1$ and $J_2$ are all negative and combining with~\eqref{4.26-1aNN2},~\eqref{4.26-1aNN4} and~\eqref{4.26-1aNN5}, the proof is complete. However, for completeness, we could add a last step in \emph{\bf Conditions ($\Sigma$)}: Having Steps 1--5, we proceed to \begin{enumerate} \item[Step 6.] From \S~\ref{Ss:interactionTimes}, the parameter $\kappa_\cg$ can be chosen to satisfy \begin{equation}\label{S4.1conditionkappa2}\kappa_{\cg}> 2a\max_{i,j=1,2} \kappa_{i\ca j}\tag{$\Sigma_5$} \end{equation} where $a$ is determined in \S~\ref{Ss:interactionTimes} and may depend on $\delta_0^*$. \end{enumerate} \end{proof} More precisely, under \emph{\bf Conditions ($\Sigma$)}, we get \be{4.26-1aNN6} \begin{aligned} E_{\alpha,1}+E_{\alpha,2} \leq & \co(1) (W_1^* +W_2^* \delta_p^* \delta_0^*) \delta_p^* \cdot \varepsilon |\sC_{\alpha} | \;, \end{aligned} \ee which yields estimate~\eqref{E:mainest} for a wave of $v(t)$ at $x_{\alpha}$ belonging to the first family. Let us make some comments:\\ $\bullet$ According to the above steps, the coefficient $\kappa_{\cg}$ is selected so as to obey~\eqref{S4.1conditionkappa} after one completes Step 5 above, hence in Step 6. It should be pointed out that $\kappa_{\cg}$ is not involved in the constants $\co(1)$ in Steps 1--5 of \emph{\bf Conditions ($\Sigma$)}.\\ $\bullet$ One can see that the smallness of the factor $\delta_p^*$ is crucial in order to balance the positive contribution of $E_{\alpha,2}$ with the negative part of $E_{\alpha,1}$, i.e.~to obtain $I_1<0$, and at the same time to balance the positive contribution of $E_{\alpha,1}$ with the negative part of $E_{\alpha,2}$, i.e.~to obtain $J_1<0$ in Case a). In addition, for the same reason, we define the weights $W_i$ as exponentials of linear combinations of the $\ca_{i,j}$, recall~\eqref{S4Wi}, and not as linear functions of the $\ca_{i,j}$, which is the standard choice in this kind of analysis in the literature so far. Indeed, the exponential allows us to better compensate the gain and the loss.
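For the reader's orientation, the order in which the parameters are fixed under \emph{\bf Conditions ($\Sigma$)} can be recorded as a short routine, each choice depending only on the quantities fixed in the previous steps. The following Python-style sketch is purely illustrative and is not part of the analysis: \texttt{C} stands for the generic constant $\co(1)$, \texttt{W2} is a rough stand-in for the uniform bound $W_2^*$, \texttt{mu}, \texttt{p0}, \texttt{p1}, \texttt{M} stand for $\mu$, $p_0^*$, $p_1^*$, $M^*$, and all numerical values are placeholders.
\begin{verbatim}
from math import exp

def choose_parameters(C, mu, p0, p1, M):
    # Step 1: fix kappa_{1A2} and kappa_{2A2} once and for all.
    k12, k22 = 1.0, 1.0
    # Step 2: kappa_{2A1} large enough for (Sigma_1).
    k21 = 4.0 * mu * C * exp(2.0 * k12 * M) / p0 + 1.0
    # Step 3: kappa_{1A1} >= 1 large enough for (Sigma_2).
    k11 = max(1.0, 2.0 * p1 * C * exp((k21 + k22) * M) + 1.0)
    # Step 4: delta_p^* below 1/2 and below kappa_{2A1}/kappa_{1A1}, cf. (Sigma_3).
    dp = 0.5 * min(0.5, k21 / k11)
    # Step 5: shrink delta_0^* until (Sigma_4) and the two smallness
    # constraints on frak K = kappa_{1A1}*mu*delta_0*delta_p hold.
    W2 = C * exp((k21 + k22) * M)      # rough stand-in for W_2^*
    d0 = 0.5
    while (C * d0 >= 0.5
           or C * W2 * d0 * p1 >= 1.0 / (2.0 * p1)
           or k11 * mu * d0 * dp >= 0.25
           or 1.0 - exp(k11 * mu * d0 * dp) * (k11 / mu**2) * d0 * dp <= 0.5):
        d0 *= 0.5
    # Step 6: kappa_G as in (Sigma_5); the interaction constant a = a(delta_0^*)
    # from the analysis at interaction times is replaced by a placeholder.
    a = C
    kG = 2.0 * a * max(k11, k12, k21, k22) + 1.0
    return k11, k12, k21, k22, dp, d0, kG

# illustrative call: choose_parameters(C=2.0, mu=2.0, p0=0.9, p1=1.1, M=1.0)
\end{verbatim}
The sketch only records the structure of the quantifiers: $\kappa_{1\ca2}$, $\kappa_{2\ca2}$ are fixed first, then $\kappa_{2\ca1}$, $\kappa_{1\ca1}$, $\delta_p^*$, $\delta_0^*$ and finally $\kappa_{\cg}$.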
\subsubsection{Waves of the second family} \label{Ss:estimPhys2wNN} Here, we derive estimate~\eqref{E:mainest} on the sum of the errors $E_{\alpha,1}+E_{\alpha,2}$ defined in~\eqref{E:error} when the wave of $v(t)$ present at $x_{\alpha}$ belongs to the second family, i.e.~$k_{\alpha}=2$. The strategy is similar to the previous subsection: we will first provide a separate estimate of $E_{\alpha,1}$, $E_{\alpha,2}$, and then we will combine such estimates to establish~\eqref{E:mainest}. We shall adopt the notation given in~\eqref{GE:definitionslralpha}, dropping the superscript $\alpha$, and we will let $\sC_\alpha, \sR_\alpha$ denote the size of the wave located at $x_\alpha$, measured in the original and Riemann coordinates, respectively (see \S~\ref{Ss:strengthnotation}). Having the restrictions on the set $K$ given in \eqref{:setKbd-1bound}, we have \ba\label{bound-eta1-gamma-eta1+gamma-pNN2} v_1^{\alpha,\ell}, \omega_1^{\alpha,\ell}, |\eta_1^{\alpha,\ell}| \le { \delta_0^*},\qquad |v_2^{\alpha,\ell}-1|,|v_2^{\alpha,\ell}-1-\eta_2^{\alpha,\ell}| \le \delta_p^* \,, \qquad |\sC_\alpha|, |\eta_2^{\alpha,\ell}+\sC_\alpha|, |\eta_2^{\alpha,\ell}| \le 2\delta_p^*. \ea \paragraph{Estimate of $E_{\alpha,1}$ for waves of the second family}\label{P:rergqokngrqongrqqgrwgqrqgg} Again, we divide the analysis into two cases according to the sign of $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$: a) treating the case when $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have the same sign, which is the most relevant case, while b) treating the case when $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite sign. \noindent \textbf{Case a)} Assume that $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have the same sign. Hence, by~\eqref{S4WiN}--\eqref{S4A1}, one has the relation \bes W_{1}^{\alpha,r}=W_{1}^{\alpha,\ell} \exp\left({\kappa_{1\ca 2}|\sR_{\alpha}|}\right)\;. \ees Observe that by strict hyperbolicity~\eqref{strict-hyp} there holds $\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha}<-p_0^*/2$.
Then, relying on~\eqref{E:sqFNDKAVDUAall} and on the uniform bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN2} and~\eqref{E:equivalncestrengths}, it follows \ba E_{\alpha,1} &= W_{1}^{\alpha,r}\left[(1-e^{- \kappa_{1\ca 2}|\sR_{\alpha}|})\big(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha}\big) \big|\eta_{1}^{\alpha,\ell}\big|+\left(\eta_{1}^{\alpha,r}(\lambda_{1}^{\alpha,r}-\dot x_{\alpha})-\eta_{1}^{\alpha,\ell}(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha})\right)\sgn(\eta_1^\ell) \right] \notag \\ & { \leq} W_{1}^{\alpha,r}\left[- (1-e^{- \kappa_{1\ca 2}|\sR_{\alpha}|})\frac{p_0^*}{2} \big|\eta_{1}^{\alpha,\ell}\big| +\left|\eta_{1}^{\alpha,r}(\lambda_{1}^{\alpha,r}-\dot x_{\alpha})-\eta_{1}^{\alpha,\ell}(\lambda_{1}^{\alpha,\ell}-\dot x_{\alpha})\right| \right]\notag \\ & { \leq} W_{1}^{\alpha,r}\left[- \frac{\kappa_{1\ca 2}\,p_0^*}{2\mu} \cdot e^{-2\kappa_{1\ca 2}\mu \delta_p^* }\cdot \big|\eta_{1}^{\alpha,\ell}\sC_{\alpha}\big| +\co(1)\left| (\eta_{2}^{\alpha,\ell}+\sC_{{\alpha}})\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \right]\notag \ea Now, from~\eqref{E:primsfwewgrqgerqpall2}, we have the relation\be{etaell+g=r} \eta_{2}^{\alpha,\ell}+\sC_{{\alpha}}=\eta_{2}^{\alpha,r}(1\pm\co(1)\delta_0^2) \ee and employing this, we arrive at the estimate \ba E_{\alpha,1} &{ \leq} W_{1}^{\alpha,r}\left[- \frac{\kappa_{1\ca 2}\,p_0^*}{2\mu} \cdot e^{-2\kappa_{1\ca 2}\mu \delta_p^* }\cdot \big|\eta_{1}^{\alpha,\ell}\sC_{\alpha}\big| +\co(1)\left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \right]. \label{E:efavadvNn} \ea \noindent \textbf{Case b)} Assume that $\eta_{1}^{\alpha,r}$ and $\eta_{1}^{\alpha,\ell}$ have opposite signs. Then using the interaction-type estimate~\eqref{E:primsfwewgrqgerqpall}, we have \be{E:funfenfenfeNn} |\eta_{1}^{\alpha,r}|+|\eta_{1}^{\alpha,\ell}| = |\eta_{1}^{\alpha,r}-\eta_{1}^{\alpha,\ell}| \leq \co(1) \left| \left(\eta_{2}^{\alpha,\ell}+\sC_{{\alpha}}\right)\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \ . \ee Since the speeds $\lambda_{1}^{\alpha,\ell}$, $\lambda_{1}^{\alpha,r}$, $\dot x_{\alpha}$ are uniformly bounded, expression~\eqref{E:error} of $E_{\alpha,1}$ can be estimated as follows: \be{E:boundq1changesignNN} \begin{split} E_{\alpha,1}\leq& \co(1)W_1^*\cdot(|\eta_{1}^{\alpha,r}|+|\eta_{1}^{\alpha,\ell}| ) \\ \leq & \co(1) W_1^*\left| \left(\eta_{2}^{\alpha,\ell}+\sC_{{\alpha}}\right)\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \\ \leq& \co(1) W_1^*\cdot\left|\eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \ . \end{split} \ee using again bounds~\eqref{W-unif-boundN} and~\eqref{etaell+g=r}. We thus conclude that in both cases \be{E:boundq1changesignNN2} \begin{split} E_{\alpha,1} \leq& \co(1) W_1^*\cdot\left|\eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \ . 
\end{split} \ee \paragraph{Estimate of $E_{\alpha,2}$ for waves of the second family}\label{per:rggqrgrqgrqgrqgrq} In this subsection, we estimate the error term $E_{\alpha,2}$ for waves of the second family dividing the analysis again into three cases according to the signs of $\eta_{2}^{\alpha,r}$, $\eta_{2}^{\alpha,\ell}$ and to the type of the wave at $x_\alpha$ as follows: a) treating the case when $\eta_{2}^{\alpha,r}$ and $\eta_{2}^{\alpha,\ell}$ have the same sign, b) treating the case when $\eta_{2}^{\alpha,r}$ and $\eta_{2}^{\alpha,\ell}$ have opposite signs and the wave at $x_{\alpha}$ is an entropy admissible $2$-shock, c) treating the case when $\eta_{2}^{\alpha,r}$ and $\eta_{2}^{\alpha,\ell}$ have opposite signs and the wave at $x_{\alpha}$ is a $2$-rarefaction shock. \noindent\textbf{Case 2a)} We consider the case in which $\eta_{2}^{\alpha,r}$ and $\eta_{2}^{\alpha,\ell}$ have the same sign. Then, by~\eqref{S4WiN} one has the identity \be{E:rguiregregregrerge2} W_{2}^{\alpha,r} =W_{2}^{\alpha,\ell} e^{- \kappa_{2\ca2}|\sR_{\alpha}| \sgn(\eta_{2}^{\alpha,\ell})}\;, \ee since $\ca_{2,1}^{\alpha,\ell}=\ca_{2,1}^{\alpha,r}$ and $\ca_{2,2}^{\alpha,\ell}=\ca_{2,2}^{\alpha,r}+|\sR_{\alpha}| \sgn(\eta_{2}^{\alpha,\ell})$. Thanks to the estimates in Lemma~\ref{L:stimegenerali2}, the uniform boundedness of $W_2^{\alpha,r}$, the compact domain in bounds~\eqref{bound-eta1-gamma-eta1+gamma-pNN} from Theorem~\ref{T:AS} and the equivalence relation~\eqref{E:equivalncestrengths}, we estimate \begin{align} E_{\alpha,2}= & (W_{2}^{\alpha,r}-W_{2}^{\alpha,\ell})|\eta_{2}^{\alpha,\ell}|(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha}) +W_{2}^{\alpha,r}\left[|\eta_{2}^{\alpha,r}|(\lambda_{2}^{\alpha,r}-\dot x_{\alpha})-|\eta_{2}^{\alpha,\ell}|(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\right] \notag \\ \stackrel{\eqref{E:rguiregregregrerge2}}{=} & W_{2}^{\alpha,r}\left[ (1-e^{\kappa_{2\ca2} \sgn(\eta_{2}^{\alpha,\ell})\left|\sR_{\alpha}\right|})| \eta_{2}^{\alpha,\ell}|(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})+ \sgn(\eta_2^{\alpha,\ell})\left(\eta_{2}^{\alpha,r}(\lambda_{2}^{\alpha,r}-\dot x_{\alpha})-\eta_{2}^{\alpha,\ell}(\lambda_{2}^{\alpha,\ell}-\dot x_{\alpha})\right)\right]\notag\,.
\end{align} Next, using~\eqref{bound-eta1-gamma-eta1+gamma-pNN2} and~\eqref{E:8.50all}, we write $$ \lambda_{2}^{\alpha,\ell}-\dot x_{\alpha}= \frac{v_{1}^{\alpha,\ell} (\eta_2^{\alpha,\ell}+\sC_\alpha) }{(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)}(1+\co(1)|v_1^\ell |) $$ since $0<(1-\delta_p^*)^2<(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)\le(1+\delta_p^*)^2<4$ for $\delta_p^*<\frac{1}{2}$ by~\eqref{:cvetagamma-1bound} and also using that $v_2^{\alpha,\ell}+\gamma_\alpha=v_2^{\alpha,r}\in(1-\delta_p^*,1+\delta_p^*)$ Hence, combining with~\eqref{E:sqFNDKAVDUAall}, we deduce \begin{align} E_{\alpha,2} \le &W_{2}^{\alpha,r}\Big[ (1-e^{\kappa_{2\ca2} \sgn(\eta_{2}^{\alpha,\ell})\left|\sR_{\alpha}\right|}) |\eta_{2}^{\alpha,\ell}| \frac{v_{1}^{\alpha,\ell} (\eta_2^{\alpha,\ell}+\sC_\alpha) }{(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)}(1+\co(1)\delta_0^* ) \notag \\ &\qquad\qquad\qquad\qquad\qquad\qquad +\co(1) \left| \left(\eta_{2}^{\alpha,\ell}+\sC_{{\alpha}}\right)\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \Big] \notag \end{align} Now, from~\eqref{etaell+g=r}, we note that $$\sgn(\eta_{2}^{\alpha,\ell}+\sC_{{\alpha}})=\sgn(\eta_{2}^{\alpha,r})$$ and hence by~\eqref{E:equivalncestrengths} and~\eqref{bound-eta1-gamma-eta1+gamma-pNN2}, we get the estimate $$ (1-e^{\kappa_{2\ca2} \sgn(\eta_{2}^{\alpha,\ell})\left|\sR_{\alpha}\right|}) |\eta_{2}^{\alpha,\ell}| (\eta_2^{\alpha,\ell}+\sC_\alpha)\le -\kappa_{2\ca2} e^{- 2\kappa_{2\ca 2}\mu \delta_p^* } \left|\sR_{\alpha}\right| \eta_{2}^{\alpha,\ell}\cdot (\eta_2^{\alpha,\ell}+\sC_\alpha) $$ thus from~\eqref{etaell+g=r}, we deduce \begin{align}\label{E:case2.2.aNew} E_{\alpha,2} {\le} &W_{2}^{\alpha,r}\left[ -\kappa_{2\ca2} e^{- 2\kappa_{2\ca 2}\mu \delta_p^* } \left|\sR_{\alpha}\right| \frac{v_{1}^{\alpha,\ell} \eta_{2}^{\alpha,\ell}\eta_2^{\alpha,r} }{(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)}(1+\co(1)\delta_0^* ) +\co(1) \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| (v_{1}^{\alpha,\ell})^{2} \right] \notag \\ {\le} &W_{2}^{\alpha,r}\kappa_{2\ca2} e^{-2\kappa_{2\ca 2}\mu \delta_p^*}\frac{ (-1+\co(1)\delta_0^* ) }{ ({p_{1}^*)}^2\mu} \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| v_{1}^{\alpha,\ell} +\co(1) W_2^*\delta_0^*\left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| v_{1}^{\alpha,\ell} \ .\nonumber\\ {\le} & 1\cdot \kappa_{2\ca2} e^{-2\kappa_{2\ca 2}\mu \delta_p^*}\frac{ (-\frac{1}{2}) }{ ({p_{1}^*)}^2\mu} \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| v_{1}^{\alpha,\ell} +\co(1) W_2^*\delta_0^*\left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| v_{1}^{\alpha,\ell} \ . \end{align} Here, we also employed bounds $0\le (p_0^*)^2\le(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)\le (p_{1}^*)^2$, $W_{2}^{\alpha,r}\ge 1$ and take $\delta_0^*$ small enough so that $-1+\co(1)\delta_0^* <-\frac{1}{2}$. \textbf{Case 2b)} In this case, we assume that $\eta_{2}^{\alpha,r}$ and $\eta_{2}^{\alpha,\ell}$ have opposite signs and the $\alpha$-wave is an admissible $2-$shock. This is equivalent to $\eta_{2}^{\alpha,\ell}<0<\eta_{2}^{\alpha,r}$. By interaction-type estimate~\eqref{E:primsfwewgrqgerqpall2}, we have \bes |\eta_{2}^{\alpha,r}|+|\eta_{2}^{\alpha,\ell}| = |\eta_{2}^{\alpha,r}-\eta_{2}^{\alpha,\ell}| \in \left(\frac{1}{2}|\sC_{\alpha}|, 2|\sC_{\alpha}|\right)\;. 
\ees using the smallness of the component $v_1^{\alpha,\ell}$, namely $0<v_1^{\alpha,\ell}<\delta_0^*$. Now, by estimates~\eqref{E:sstimaetarbase2all}, the error term $E_{\alpha,2}$ defined in~\eqref{E:error} can be estimated as \begin{align} E_{\alpha,2} {\le} & W_{2}^{\alpha,r} |\eta_{2}^{\alpha,r}|\left[ \frac{v_1^{\alpha,\ell}\eta_2^{\alpha,\ell}}{ (v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)} +\co(1) |\eta_2^{\alpha,\ell} |(v_1^{\alpha,\ell})^2 \right] \notag\\ &-W_{2}^{\alpha,\ell} |\eta_{2}^{\alpha,\ell} | \left[ \frac{v_1^{\alpha,\ell}(\eta_2^{\alpha,\ell}+\sC_\alpha)}{ (v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)} -\co(1) |\eta_2^{\alpha,\ell}+\sC_\alpha |(v_1^{\alpha,\ell})^2 \right]\notag\;. \end{align} Next, recalling that $$0<(p_0^*)^2\le(v_2^{\alpha,\ell}-\eta_2^{\alpha,\ell})(v_2^{\alpha,\ell}+\sC_\alpha)\le (p_{1}^*)^2$$ given by the compact domain~\eqref{bound-eta1-gamma-eta1+gamma-pNN}, expression~\eqref{etaell+g=r} and the sign relations $$ \sgn(\eta_2^{\alpha,\ell}+\sC_\alpha)=\sgn(\eta_2^{\alpha,r})=+1,\qquad \eta_{2}^{\alpha,\ell}<0<\eta_{2}^{\alpha,r}\;, $$ we arrive at \begin{align}\label{E:case22b} E_{\alpha,2} {\le} &(W_{2}^{\alpha,r}+W_{2}^{\alpha,\ell}) \cdot \left(-\frac{1}{(p_1^*)^2}+\co(1)\delta_0^*\right)\cdot \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell} \right| v_{1}^{\alpha,\ell} \notag \\ {\le} & 2\left(-\frac{1}{(p_1^*)^2}+\co(1)\delta_0^*\right)\cdot \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell} \right| v_{1}^{\alpha,\ell}\;, \end{align} choosing $\delta^*_0$ small enough so that $-\frac{1}{(p_1^*)^2}+\co(1)\delta_0^* <0$ and using $W_i^{\alpha,r/\ell}\ge 1$, by definition. \textbf{Case 2c)} In the last case, $\eta_{2}^{\alpha,r}$ and $ \eta_{2}^{\alpha,\ell}$ have opposite signs and the $\alpha$-wave is a $2-$rarefaction, which is replaced with a jump of the same size connecting states along a Hugoniot curve. This means that $\eta_{2}^{\alpha,r}\leq0\leq \eta_{2}^{\alpha,\ell}$ and $0<\sC_{\alpha}\leq\varepsilon$. As in the previous case, by the interaction-type estimate~\eqref{E:primsfwewgrqgerqpall2} and for small enough $\delta_0^*$, we have \be{E:gregggfgg} |\eta_{2}^{\alpha,r}|+|\eta_{2}^{\alpha,\ell}| = |\eta_{2}^{\alpha,r}-\eta_{2}^{\alpha,\ell}| \leq 2|\sC_{\alpha}|\leq 2\varepsilon \ . \ee Substituting this into~\eqref{E:sstimaetarbase2all} yields \[ |\lambda_{2}^{\alpha,r}-\lambda_{2}^{\alpha,\ell}|\leq|\dot x_{\alpha}-\lambda_{2}^{\alpha,\ell}|+ |\dot x_{\alpha}-\lambda_{2}^{\alpha,r}| \leq \co(1) (|\eta_{2}^{\alpha,\ell}|+|\eta_{2}^{\alpha,\ell}+\sC_\alpha| )( v_{1}^{\alpha,\ell})^2 \leq \co(1) \varepsilon (v_{1}^{\alpha,\ell} )^2\;. \] In view of the above, the error $E_{\alpha,2}$ defined in~\eqref{E:error} can be estimated as \begin{align}\label{E:case2.2.c} E_{\alpha,2}&\le W_2^{\alpha,r} |\eta_2^{\alpha,r}-\eta_2^{\alpha,\ell}| |\lambda_2^{\alpha,r}-\dot{x}_\alpha| +(W_2^{\alpha,r}+W_2^{\alpha,\ell}) |\eta_2^{\alpha,\ell}| |\lambda_2^{\alpha,r}-\dot{x}_\alpha| + W_2^{\alpha,\ell} |\eta_2^{\alpha,\ell}| |\lambda_2^{\alpha,r}-\lambda_2^{\alpha,\ell}| \notag\\ &\le \co(1) W_2^{*} \; \varepsilon (v_{1}^{\alpha,\ell})^2 |\sC_{\alpha}|\;, \end{align} recalling~\eqref{W-unif-boundN}. \paragraph{Derivation of~\eqref{E:mainest} for 2-waves} We combine now estimate~\eqref{E:boundq1changesignNN2} for $E_{\alpha,1}$ with the three cases presented in Paragraph~\ref{per:rggqrgrqgrqgrqgrq} for $E_{\alpha,2}$ to establish~\eqref{E:mainest} when the wave at $x_{\alpha}$ is a 2-wave.
\noindent {\bf(Case 2a)} In this case, we have $\eta_{2}^{\alpha,r}\,\cdot\eta_{2}^{\alpha,\ell}\ge 0$. Adding~\eqref{E:boundq1changesignNN2} and~\eqref{E:case2.2.aNew}, we get \be{E:case2.2.anew} E_{\alpha,1} +E_{\alpha,2} \leq \left(-\kappa_{2\ca2}e^{-2\kappa_{2\ca 2}\mu \delta_p^*}\frac{ 1}{2 ({p_{1}^*)}^2\mu} +(W_1^{*}+W_{2}^{*})\co(1)\delta_0^*\right) \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell}\sC_{{\alpha}} \right| v_{1}^{\alpha,\ell}\,, \ee since $W_{2}^{\alpha,r}\ge 1$. \noindent {\bf(Case 2b)} Here, we recall that $\eta_{2}^{\alpha,\ell}<0<\eta_{2}^{\alpha,r}$. Summing up~\eqref{E:boundq1changesignNN2} and~\eqref{E:case22b}, we obtain \be{E:case2.2.bnew} E_{\alpha,1} +E_{\alpha,2} \leq \left(-\frac{2}{(p_1^*)^2}+\co(1)\delta_0^*(W_1^*+1)\right)\cdot \left| \eta_{2}^{\alpha,r}\eta_{2}^{\alpha,\ell} \right| v_{1}^{\alpha,\ell}\;. \ee \noindent {\bf(Case 2c)} In the last case, we deal with $\eta_{2}^{\alpha,r}<0<\eta_{2}^{\alpha,\ell}$. Combining estimates~\eqref{E:boundq1changesignNN2} and~\eqref{E:case2.2.c} and taking into account~\eqref{E:gregggfgg}, we arrive at \begin{align}\label{E:case2.2.cnew} E_{\alpha,1}+E_{\alpha,2} &\leq \co(1) ( W_1^{*}+W_2^{*}) \; \varepsilon (v_{1}^{\alpha,\ell})^2 |\sC_{\alpha}|\ . \end{align} In view of the above estimates, we thus deduce that~\eqref{E:mainest} holds by the smallness of $\delta_0^*$. Indeed, in Step $5$ of Conditions $(\Sigma)$ (see Proposition~\ref{PropCond} in Paragraph~\ref{P4.2.1.3}), one can further shrink, if needed, $\delta_0^*$ in~\eqref{Condition4}, depending on $\delta_p^*$ and $\kappa_{2\ca 2}$, which are already fixed from the previous steps, so that the coefficients in estimates~\eqref{E:case2.2.anew}--\eqref{E:case2.2.bnew} are negative. \subsection{Analysis at time steps} \label{Ss:timestep} The aim in this section is to estimate the change of the functional $\Phi_0(u(t),v(t))$ across a time step $t=t_k$, when $u$ and $v$ are approximate solutions to~\eqref{S1system} constructed via the front-tracking algorithm in conjunction with the operator splitting method. Throughout this section, we denote by $u$ and $v$ the piecewise constant approximate solutions to the non-homogeneous system~\eqref{S1system} as described in Section~\ref{Ss:Rsolv-appsol}, with the update of the states given at~\eqref{updated} and the source denoted by $g(\theta)=((\theta_2-1)\theta_1,0)$ where $\theta$ stands for a generic state $\theta=(\theta_1,\theta_2) $. Fix an index $k$ and set $t_k=k\Delta t$. For our convenience, we use the following notation \[ u^+(x)\doteq u(x,t_k+) , \ u^-(x)\doteq u(x,t_k-) , \ \eta_i^+(x)\doteq\eta_i(x,t_k+) , \ \eta_i^-(x)\doteq\eta_i(x,t_k-) \] for the values before and after the update at the time step $t_k$. Similarly, we use \[ v^{\pm}(x) ,\ W_i^\pm(x) , \ \ca_i^\pm(x) , \ Q(u)^{\pm} , \ Q(v)^{\pm} , \ \cg^{\pm}(u) , \ \cg^{\pm}(v) \text{ etc.} \] for the corresponding values as $t\to t_k\pm$. Using this notation, we have \begin{equation} \label{S:etapm-def} v^{\pm}(x)=\SC{2}{\eta_2^{\pm}}{ \SC{1}{\eta_1^{\pm}}{u^{\pm}(x)}} \end{equation} and \begin{equation} \label{S:vupm-def} v^{+}(x)=v^{-}(x)+\Delta t g(v^-(x)),\qquad u^+(x)=u^{-}(x)+\Delta t g(u^-(x)) \;. \end{equation} \begin{figure}[htbp] {\centering \scalebox{1}{\input{timesttalk.tex} } \par} \caption{The shock curves connecting the states of $u$ with $v$ before and after a time step of size $\Delta t$.
\label{S4.5 :fig1}} \end{figure} The following lemma provides an estimate on the change of the strengths $\eta=(\eta_1,\eta_2)$ across the time step $t_k$. \begin{lemma}\label{S:ts-lemma} Let $u(t,x)$ and $v(t,x)$ be two approximate solutions to~\eqref{S1system} in $[0,\infty)\times\mathbb{R}$. Then there exists $s=\Delta t>0$ such that \be{S:ts-eta12} |\eta_1^+-\eta_1^-|+|\eta_2^+-\eta_2^-|\le \co(1)\,\Delta t \left( |\eta_1^-|+|\eta_2^-| \right) \ee and \be{S-tsGbd} \cg^{+}(u)\le (1+\co(1)\Delta t)\cg^{-}(u),\qquad \cg^{+}(v)\le (1+\co(1)\Delta t)\cg^{-}(v)\;, \ee at every time step $t_k=k\Delta t$. \end{lemma} \begin{proof} Given the state $u^-(x)=(u_1^-,u_2^-)$, the time step $\Delta t$ and the strengths $\eta^-=(\eta_1^-,\eta_2^-)$, one can determine $\eta^+=(\eta_1^+,\eta_2^+)$ relying on~\eqref{S:etapm-def}--\eqref{S:vupm-def}. Hence, by considering the independent variables $(u_1^-,u_2^-,\eta_1^-,\eta_2^-,\Delta t)$, we define the functional \bes \Psi^{*}(u_1^-,u_2^-,\eta_1^-,\eta_2^-,\Delta t)\doteq (\eta_1^+-\eta_1^-,\eta_2^+-\eta_2^-)\;. \ees It is easy to verify that \bes \Psi^{*}(u_1^-,u_2^-,\eta_1^-,\eta_2^-,0)=\Psi^{*}(u_1^-,u_2^-,0,0,\Delta t)=(0,0)\;. \ees Hence, by Lemma~\ref{E:rewgwgeq}, we arrive immediately at~\eqref{S:ts-eta12}. Relying on estimate~\eqref{S:ts-eta12} and a proof similar to that of Lemmas $2.1$ and $2.2$ in~\cite{AG}, we obtain~\eqref{S-tsGbd}. Let us point out that the derivation of~\eqref{S-tsGbd} can be established following the aforementioned work in~\cite{AG}, since it relies only on the smallness of $\Delta t$, while the lack of genuine nonlinearity or linear degeneracy and the size of the initial data are not relevant at this point. One should only check the terms in $\cq$ with the weights $\omega_{\alpha,\beta}$ that are not present in~\cite{AG}. Indeed, one can follow the proof of estimate (2.14) in~\cite{AG} to show $$ \omega_{\alpha,\beta}^+-\omega_{\alpha,\beta}^-=\co(1)\Delta t,\qquad {\mathcal{Q}_{hh}}^+-{\mathcal{Q}_{hh}}^-= \co(1) \Delta t {(V^-)}^2 $$ for each approximate solution $u$ and $v$. Combining the definition~\eqref{S3G} of the Glimm functional $\mathcal{G}$ and Lemma $2.2$ in~\cite{AG}, the proof of~\eqref{S-tsGbd} is complete. \end{proof} The aim now is to estimate the change of the functional $\Phi_0(u,v)$ across $t=t_k$. \begin{lemma} Let $u$ and $v$ be two approximate solutions to~\eqref{S1system}. Then there exists $s=\Delta t>0$ such that \ba\label{S:ts-Phi2} \Phi_0(u(t_k+),v(t_k+))-\Phi_0(u(t_k-),v(t_k-))\le\co(1) \Delta t \Phi_0(u(t_k-),v(t_k-))\;, \ea for every $k=0,1,2,\dots$. \end{lemma} \begin{proof}
By definition, \be{S:ts-Phinew} \Phi_0(u^+,v^+)-\Phi_0(u^-,v^-)=\sum_{i=1}^2\int_{-\infty}^\infty\left[|\eta_i^+(x)|W_i^+ \cdot e^{\kappa_{\cg} \cg^+} - |\eta_i^-(x)|W_i^- \cdot e^{\kappa_{\cg} \cg^-} \right] dx \ee where $ \cg^{\pm}:=\cg^{\pm}(u)+\cg^{\pm}(v)$ and we expand the above integrand as \ba\label{S4.3term} |\eta_i^+(x)|W_i^+(x) \cdot e^{\kappa_{\cg} \cg^+}- |\eta_i^-(x)|W_i^-(x) \cdot e^{\kappa_{\cg} \cg^-} & =e_i^-(x) \left[|\eta_i^+(x)| e^{\delta_i}-|\eta_i^-(x)|\right]\nonumber\\ & =e_i^-(x) \left[|\eta_i^-(x)| (e^{\delta_i}-1)+(|\eta_i^+(x)| -|\eta_i^-(x)| ) e^{\delta_i}\right] \ea where $$ e_i^-(x):= \exp\left(\kappa_{i\ca 1}\ca_{i,1}^-(x)+\kappa_{i\ca 2}\ca_{i,2}^-(x) +\kappa_{\cg} \cg^-\right) $$ $$ e^{\delta_i}:=\exp\left(\delta_i\right),\qquad \delta_i(x):=\kappa_{i\ca 1}\Delta\ca_{i,1}(x)+\kappa_{i\ca 2}\Delta\ca_{i,2}(x) +\kappa_{\cg} \Delta\cg $$ for $i=1,2$ and as usual $\Delta\cdot$ denotes the change across $t_k$, i.e. $\Delta\ca_{i,1}(x)=\ca_{i,1}^+(x)-\ca_{i,1}^-(x)$. Now the aim is to estimate the term in~\eqref{S4.3term}. Using that the functional $\mathcal{G}$ is uniformly bounded for all times for both solutions $u,\,v\in\mathcal{D}_0^*$ by a positive constant, we first observe that \be{S:ts-Abdnew} e_i^-(x)\le\co(1),\qquad \ca_{i,j}^{\pm}(x)\le \co(1)\cg^\pm= \co(1)(\cg^{\pm}(u)+\cg^{\pm}(v))\le \co(1)\qquad i,\,j=1,2 \;, \ee and then we examine two possible cases: either (i) $\eta_i^-(x)\, \eta_i^+(x)\le 0$ or (ii) $\eta_i^-(x)\, \eta_i^+(x)> 0$ for each family $i=1,2$. \noindent {\bf Case (i)} If $\eta_i^-(x)\, \eta_i^+(x)\le 0$, then by Lemma~\ref{S:ts-lemma} we get \bes |\eta_i^-(x)|+|\eta_i^+(x)|=|\eta_i^-(x)-\eta_i^+(x)|\le\co(1) \Delta t (|\eta_1^-(x)|+|\eta_2^-(x)|). \ees Combining~\eqref{S:ts-Abdnew} with the above, we arrive at \ba\label{S:ts-I1new} |\eta_i^+(x)|W_i^+(x) \cdot e^{\kappa_{\cg} \cg^+}- |\eta_i^-(x)|W_i^-(x)\cdot e^{\kappa_{\cg} \cg^-}&\le \co(1) (|\eta_i^+(x)| +|\eta_i^-(x)|)\nonumber\\ &\le \co(1) \Delta t(|\eta_1^-(x)|+|\eta_2^-(x)|)\;. \ea \noindent {\bf Case (ii)} If $\eta_i^-(x)\, \eta_i^+(x) > 0$, then we claim that \be{S:ts-Abd2new} \Delta \ca_{i,j}(x)=\ca_{i,j}^{+}(x)-\ca_{i,j}^{-}(x)\le \co(1)\Delta t ,\qquad j=1,\,2\;. \ee The proof of the above claim follows by a simpler version of the argument establishing (4.37) in~\cite[Section 4]{AG}, taking into account the presence of the factors $|p-1|$ in $\ca_{1,1}$. The argument here is slightly simpler since non-physical fronts do not appear in our approximate scheme. For the convenience of the reader, we present the argument. Let $\sR_{\alpha} ^{-}$ be a wave of either $u$ or $v$ at the point $x_{\alpha}$ before the time step $t_k$ and $\sR_{\alpha,j}^{+}$ a wave of the $j$-family at the same location $x_{\alpha}$ after the time step $t_k$. We say that $\sR_{\alpha,j}^{+}$ is a newly generated wave of the $j$-family at the point $x_{\alpha}$ if $\sR_{\alpha} ^{-}$ is not present in $\ca_{i,j}^{-}$, but $\sR_{\alpha,j}^{+}$ is present in $\ca_{i,j}^{+}$. Now let $\sR_{\alpha,j}^{+}$ be a wave front present in $\ca_{i,j}^+(x)$; then there are two possibilities: \begin{enumerate} \item[(a)] $\sR_{\alpha,j}^{+}$ is not a newly generated wave. Hence $\sR_{\alpha} ^{-}$ is present in $\ca_{i,j}^-(x)$ and therefore, \bes |\sR_{\alpha,j}^{+}-\sR_{\alpha}^-|\le \co(1)\Delta t|\sR_{\alpha}^-| \ees by~\eqref{E:timestepest1},~\eqref{E:timestepest4}. Observe that both $\sR_{\alpha,j}^{+}$ and $\sR_{\alpha} ^{-}$ are waves of the same family.
Furthermore, if $j=1$ then the $p$ components of the left state of both fronts, $\sR_{\alpha,1}^{+}$ and $\sR_{\alpha} ^{-}$, are equal using the update~\eqref{updated} at $t=t_k$. Hence, \bes |p_{\alpha,1}^{l,+}-1| |\sR_{\alpha,1}^{+}|-|p_{\alpha}^{l,-}-1| | \sR_{\alpha}^-|= |p_{\alpha}^{l,-}-1| ( |\sR_{\alpha,1}^{+}|-| \sR_{\alpha}^-| ) \le \co(1)\Delta t|\sR_{\alpha}^-|\;. \ees These are terms that may appear in $\Delta\ca_{1,1}$. \item[(b)] $\sR_{\alpha,j}^{+}$ is a newly generated wave. So now, \bes |\sR_{\alpha,j}^{+}|=|\sR_{\alpha,j}^{+}-0|\le \co(1)\Delta t|\sR_{\alpha}^-| \ees by~\eqref{E:timestepest2},~\eqref{E:timestepest3}. If $j=1$, we also have $|p^{l,+}_\alpha-1| |\sR_{\alpha,1}^{+}|\le \co(1)\Delta t|\sR_{\alpha}^-|$ since the factor $|p-1|$ is uniformly bounded. \end{enumerate} Now bound~\eqref{S:ts-Abd2new} follows immediately by Cases (a)-(b) for both $i=1,2$ and it implies $e^{\delta_i}-1\le\delta_i \le\co(1)\Delta t$. Having~\eqref{S:ts-Abdnew},~\eqref{S:ts-Abd2new} and~\eqref{S:ts-eta12}, we get \ba\label{S:ts-I2new} |\eta_i^+(x)|W_i^+(x) \cdot e^{\kappa_{\cg} \cg^+}- |\eta_i^-(x)|W_i^-(x)\cdot e^{\kappa_{\cg} \cg^-} &\le\co(1)\left[|\eta_i^-(x)| {\delta_i}+|\eta_i^+(x) -\eta_i^-(x) | e^{\delta_i}\right]\nonumber\\ &\le\co(1) \,\Delta t (|\eta_1^-(x)|+|\eta_2^-(x)|)\;. \ea Combining~\eqref{S4.3term},\,\eqref{S:ts-I1new},~\eqref{S:ts-I2new} with~\eqref{S:ts-Phinew} , we arrive at \ba \Phi_0(u(t_k+),v(t_k+))-\Phi_0(u(t_k-),v(t_k-))&\le\co(1) \Delta t \int_{-\infty}^\infty (|\eta_1^-(x)|+|\eta_2^-(x)|) dx\notag\\ &\le\co(1) \Delta t \Phi_0(u(t_k-),v(t_k-))\;, \ea for sufficiently small $s=\Delta t$, where we use that $W_i\ge 1$. The proof is complete. \end{proof} \subsection{Stability of the functional $\Phi_z$ - Proof of Theorem~\ref{stability-Phy}-(i)} \label{S5.4} In order to complete the proof of Theorem~\ref{stability-Phy} it remains to establish the statement (i). Let $u$, $v$ be two approximate solutions of the homogeneous system~\eqref{S1system-hom} constructed as described in subsection~\ref{Ss:Rsolv-appsol}. We shall extend here the estimate~\eqref{Phi0est2} on $\Phi_0(u,v)$ to the general functional $\Phi_z(u,v)$, when $z\neq 0$ is an arbitrary piecewise constant function, that takes values in the compact set~\eqref{E:im-z}, and satisfies~\eqref{z-bound-1} with $\sigma$ as in~\eqref{sigma-def}. Recall that the function $z$ affects the definition of $\Phi_z(u,v)$ in~\eqref{E:vu},\,\eqref{UpsilonNew}, both through the value of the waves $\eta_{i}$, $i=1,2$, that connect $u$ and $v+z$ via~\eqref{E:vu}, and through the weights $W_i$, which depend on the sign of $\eta_i$. We first observe that, since $u, v$ are both approximate solutions to the homogeneous system~\eqref{S1system-hom}, the map $t\mapsto \Phi_\er(u(t),v(t))$ is continuous in $L^1$ except at times of interaction of the waves of $v$ or $u$. Moreover, as in \S~\ref{Ss:interactionTimes}, one can verify that at interaction times for $u$ or $v$, the map $t\mapsto\Phi_\er(u(t),v(t))$ is decreasing. Indeed, this follows immediately by the observation that, at an interaction time $t=\tau$, one has $\eta_i(x,\tau+)=\eta_i(x,\tau-)$ for all points $x$ where none of $u(\tau)$, $v(\tau)$ or $\er$ has a jump. Hence, at such points~$x$ the value of $W_i(\tau\pm,x)$ depends only on the strengths and left states of the waves in $u$ and $v$ and is not affected by the presence of $\er$. 
Therefore, employing the same analysis as in \S~\ref{Ss:interactionTimes}, we deduce \begin{equation} \label{phi-est-z} \Phi_\er(u(\tau{+}),v(\tau{+}))\le \Phi_\er(u(\tau{-}),v(\tau{-})). \end{equation} Next, away from interaction times of $v$ or $u$, relying also on~\eqref{unifbvbound},~\eqref{V-totvar-est},~\eqref{sigma-def}, we will prove that \begin{equation} \label{E:phier-est21} \ddn{t}{}\Phi_\er(u(t),v(t))\le\co(1)\big(\eps V(u(t))+(\eps+\sigma) V(v(t))+\sigma\big)\le\co(1)(\eps+\sigma)\,. \end{equation} Integrating~\eqref{E:phier-est21} between two interaction times and relying on~\eqref{phi-est-z}, we thus derive the estimate~\eqref{Phi0est1}, proving Theorem~\ref{stability-Phy}-(i). In order to establish the first estimate in~\eqref{E:phier-est21}, we write as in \S~\ref{Ss:nointeractiontimesN} the derivative \begin{equation} \label{E:phier-est21new} \ddn{t}{}\Phi_\er(u(t),v(t))=\left( \sum_{\alpha\in \cj} \sum_{i=1}^{2} E_{\alpha,i}^\er\right) e^{\kappa_{\cg} (\cg(u)+\cg(v)) } \end{equation} where \be{E:errorz} E_{\alpha,i}^\er \doteq W_{i}^{\alpha,r}|\eta_{i}^{\alpha,r}|(\lambda_{i}^{\alpha,r}-\dot x_{\alpha})- W_{i}^{\alpha,\ell}|\eta_{i}^{\alpha,\ell}|(\lambda_{i}^{\alpha,\ell}-\dot x_{\alpha}) \qquad \alpha\in \cj \ . \ee Here $\cj=\cj(u)\cup\cj(v)\cup\cj(\er)$ and the other quantities $\eta_i^{\alpha,\ell/r}$, $W_i^{\alpha,\ell/r}$, $\lambda_i^{\alpha,\ell/r}$ are defined as in \S~\ref{Ss:nointeractiontimesN}, taking into account that here $u$ and $v+\er$ are connected via~\eqref{E:vu}. The goal here is to prove that \begin{align} \label{E:mainestz-z1} E_{\alpha,1}^\er+E_{\alpha,2}^\er & \leq \co(1) \varepsilon |\sC_{\alpha}|\,, \qquad \forall \alpha\in \cj(u)\\ \label{E:mainestz-z2} E_{\alpha,1}^\er+E_{\alpha,2}^\er & \leq \co(1) |z(x_\alpha+)-z(x_\alpha-) |\,, \qquad \forall \alpha\in \cj(\er)\\ \label{E:mainestz-z3} E_{\alpha,1}^\er+E_{\alpha,2}^\er & \leq \co(1) \left[\varepsilon+\sigma\right] |\sC_{\alpha}|\,, \qquad \forall \alpha\in \cj(v) \end{align} where $\sC_\alpha$ is the strength of the $\alpha$-wave at $x_\alpha$. Estimate~\eqref{E:mainestz-z1} follows immediately from the analysis in \S~\ref{Ss:nointeractiontimesN} exchanging $v$ with $v+\er$ in the process of proving~\eqref{E:mainest}. Hence, it remains to prove~\eqref{E:mainestz-z2} and~\eqref{E:mainestz-z3}. Given $\alpha\in \cj(\er)$, we have for $i=1,2$ \be{S5.4 etadiffz} |\eta_i^{\alpha,r}-\eta_i^{\alpha,\ell} |\le\co(1) |z(x_\alpha+)-z(x_\alpha-) |,\qquad | \eta_i^{\alpha,r}\lambda_i^{\alpha,r}-\eta_i^{\alpha,\ell} \lambda_i^{\alpha,\ell} |\le\co(1) |z(x_\alpha+)-z(x_\alpha-) | \ee by Lipschitz continuity. Now, if $\eta_i^{\alpha,\ell}\,\eta_i^{\alpha,r}\le 0$, then \be{S5.4 etadiffz2} E_{\alpha,i}^\er\le\co(1)\left( |\eta_i^{\alpha,r}|+|\eta_i^{\alpha,\ell} |\right)=\co(1)|\eta_i^{\alpha,r}-\eta_i^{\alpha,\ell} | \le\co(1) |z(x_\alpha+)-z(x_\alpha-) | \ee from~\eqref{E:errorz} and~\eqref{S5.4 etadiffz}.
On the other hand, if $\eta_i^{\alpha,\ell}\,\eta_i^{\alpha,r}\ge 0$, then using that $W_i^{\alpha, r}=W_i^{\alpha,\ell}$ when $\alpha\in \cj(\er)$, we write \begin{align}\label{S5.4 etadiffz3} E_{\alpha,i}^\er&= W_{i}^{\alpha,r}\left[ ( |\eta_i^{\alpha,\ell} |-|\eta_i^{\alpha,r} |) \dot x_{\alpha} + |\eta_i^{\alpha,r} | \lambda_{i}^{\alpha,r}- |\eta_i^{\alpha,\ell}| \lambda_{i}^{\alpha,\ell}\right]\nonumber\\ &\le\co(1)\left[ \hat{\lambda}|\eta_i^{\alpha,r}-\eta_i^{\alpha,\ell} |+ | \eta_i^{\alpha,r}\lambda_i^{\alpha,r}-\eta_i^{\alpha,\ell} \lambda_i^{\alpha,\ell} |\right]\nonumber\\ & \le\co(1) |z(x_\alpha+)-z(x_\alpha-) | \end{align} from~\eqref{S5.4 etadiffz} again. Thus, the error estimate~\eqref{E:mainestz-z2} follows from~\eqref{S5.4 etadiffz2} and~\eqref{S5.4 etadiffz3}. Last, let $\alpha\in \cj(v)$. Using notation similar to \S~\ref{Ss:nointeractiontimesN}, we consider a jump in $v$ at $x_\alpha$ of the family $k_\alpha$ connecting $v^{\alpha,\ell}=v(x_\alpha-)$ with $v^{\alpha,r}=v(x_\alpha+)$ and which is of strength $\sC_\alpha$. As before, we assume that this jump is along a shock and denote the quantities \ba &v^{\alpha,r}=\SC{k_{\alpha}}{\sC_{\alpha}}{v^{\alpha,\ell}} \ , && \dot x_{\alpha}=\lambda_{k_{\alpha}}\left(v^{\alpha,\ell},v^{\alpha,r}\right) \ , \\ &\omega^{\alpha,\ell / r} =\SC{1}{\eta_{1}^{\alpha,\ell / r}}{u } \ , &&v^{\alpha,\ell / r}+z =\SC{2}{\eta_{2}^{\alpha,\ell / r}}{\omega^{\alpha,\ell / r} } \ , \\ &\lambda_{1}^{\alpha,\ell / r}=\lambda_{1}\left(u,\omega^{\alpha,\ell / r}\right) \ , &&\lambda_{2}^{\alpha,\ell / r}=\lambda_{2}\left(\omega^{\alpha,\ell / r}, v^{\alpha,\ell / r}+z\right) \ . \ea and in addition to those, we also define \ba &\tilde{v}:=\SC{k_{\alpha}}{\sC_{\alpha}}{v^{\alpha,\ell}+z} \ , && \tilde{s}:=\lambda_{k_{\alpha}}\left(v^{\alpha,\ell}+z,\tilde{v} \right) \ . \ea The definition of the intermediate state $\tilde{v}$ prompts the connection of $u$ with $\tilde{v}$ via the waves $\tilde{\eta}_i$, i.e. $$ \tilde{v}(t,x)=\SC{2}{\tilde{\eta}_{2}(t,x)}{\cdot}\circ{ \SC{1}{\tilde{\eta}_{1}(t,x)}{u(t,x)}} \ , $$ and hence, we obtain the intermediate quantities, that are the speeds $\tilde{\lambda}_i$, the weights $\widetilde{W}_i$ and the corresponding errors \be{E:errorztil} \widetilde{E}_{\alpha,i}^z=\widetilde{W}_i|\tilde{\eta}_i|(\tilde{\lambda}_i-\tilde{s})-W_{\alpha,i}^{\ell}|\eta_i^{\alpha,\ell}|(\lambda_i^{\alpha,\ell}-\tilde{s})\;. \ee By exchanging the roles of $(v^{\alpha,\ell}, v^{\alpha,r}, \dot{x}_\alpha, \eta^{\alpha,r}_i,\lambda_i^{\alpha,r},W_i^{\alpha,r})$ with $(v^{\alpha,\ell}+z, \tilde{v}, \tilde{s}, \tilde{\eta}_i,\tilde{\lambda}_i,\widetilde{W}_i)$ in \S~\ref{Ss:nointeractiontimesN} and in particular at~\eqref{E:mainest}, we immediately deduce that \be{Ez:mainest} |\widetilde{E}_{\alpha,i}^z|\le\co(1)|\sC_\alpha|\eps. \ee Next, we note that if $\er=0$, these intermediate quantities become \be{Ez:mainesv1} \tilde{v}=v^{\alpha,r},\quad \tilde{\eta}_i=\eta_i^{\alpha,r},\quad \tilde{\lambda}_i=\lambda_i^{\alpha,r},\quad \tilde{s}=\dot x_\alpha\;. \ee On the other hand, when $\sC_\alpha=0$, we have \be{Ez:mainesv2} v^{\alpha,r}=v^{\alpha,\ell},\quad \tilde{\eta}_i=\eta_i^{\alpha,r}=\eta_i^{\alpha,\ell},\quad \tilde{\lambda}_i=\lambda_i^{\alpha,r}=\lambda_i^{\alpha,\ell}\;. 
\ee Combining the expressions~\eqref{E:errorz} and~\eqref{E:errorztil} together with the values~\eqref{Ez:mainesv1}--\eqref{Ez:mainesv2}, we arrive at \be{Ez:mainest2} |\tilde{E}_{\alpha,i}^z-E_{\alpha,i}^z|\le\co(1)|\sC_\alpha|\cdot[\|\er\|_\infty+\eps] \ee using the same strategy as in~\cite[p.1002]{AG}. Just note that the term $\er$ here is denoted by $\omega$ in~\cite{AG}. By~\eqref{Ez:mainest} and~\eqref{Ez:mainest2} and since $\|\er\|_\infty\le\sigma$, we obtain~\eqref{E:mainestz-z3}.
Having now the bounds~\eqref{E:mainestz-z1}--\eqref{E:mainestz-z3}, which hold at times at which no interaction in $u$ or $v$ occurs, and combining them with~\eqref{E:phier-est21new}, we arrive at~\eqref{E:phier-est21} immediately. As already mentioned, due to the non-increase of $\Phi_z(u,v)$ at interaction times, see~\eqref{phi-est-z}, we obtain~\eqref{Phi0est1} by integrating~\eqref{E:phier-est21} over $[t_1,t_2]$. The proof of Theorem~\ref{stability-Phy}-(i) is now complete. \qed
\begin{remark} Letting $\eps\to0+$ in~\eqref{Phi0est1} and combining with the equivalence relation~\eqref{S4-PhiL1}, we deduce \be{} \|\cs_{t_2}\bar u-\cs_{t_2}\bar v-z(t_2)\|_{L^1}\le 2C_0^2\, W^* \| \cs_{t_1}\bar u-\cs_{t_1}\bar v-z(t_1)\|_{L^1}+\co(1)(t_2-t_1)\sigma\;, \ee for $0<t_1<t_2$. \end{remark}
\section{A Lipschitz continuous evolution operator - Proof of Theorem~\ref{exist-nonhom-smgr-1}}\label{S6}
In this section, we conclude the proof of Theorem~\ref{exist-nonhom-smgr-1}. First, we consider the limit $u(t)=\cp_t\bar u$ of the approximate flow $\cp^s_t$ and prove that this limit is unique, independent of the subsequence, by using a uniqueness result on quasi differential equations in metric spaces. To begin with, by Theorem~\ref{T:AS} of Amadori and Shen, we have the following: \begin{proposition}\label{S6Prop1} For the constants of Theorem~\ref{ThmPS5.4} and given $\bar u\in\mathcal{D}_0$, there exists a subsequence $\{s_m\}$, $m\in\mathbb{N}$, such that the functions $\cp^{s_m}_t \bar u \in \mathcal{D}_0^*$ converge in $L^1(\R)$ as $m\to\infty$ for any $t>0$ to an entropic weak solution $u(t,\cdot)=(h(t,\cdot),p(t,\cdot))$ of the system~\eqref{S1system} with data $\bar u=(\bar h,\bar p)$; moreover, the map $t\mapsto u(t,\cdot)$ is Lipschitz continuous in the $L^1$ norm. \end{proposition}
Next, we aim to prove the uniqueness of the solution $u(t,x)=(h(t,x),p(t,x))$. This result implies that the whole sequence $\{\cp^s_t\}_s$ converges to $u(t,\cdot)$ as $s=\Delta t\to0+$. To establish uniqueness, we follow a similar line to~\cite{AG}: we apply the uniqueness result on quasi differential equations in metric spaces established in~\cite{Bre2} in the case of the entropic weak solution $u$ of Proposition~\ref{S6Prop1}.
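Let us also recall, loosely and in the spirit of~\cite{Bre2}, the notion of solution to a quasi differential equation in a metric space that is relevant here: a Lipschitz continuous curve $t\mapsto u(t)$ in $L^1$ is a solution generated by the curve $\gamma$ in~\eqref{S6ThmBreeqv} below provided that, at (almost) every time $t$,
\[
\lim_{\theta\to0+}\frac{1}{\theta}\,\big\|u(t+\theta)-\cs_\theta u(t)-\theta\, g(u(t))\big\|_{L^1}=0\,.
\]
This is exactly the tangency property that will be established for our limit $u$ in Theorem~\ref{S6Thm1} below.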
For the convenience of the reader, we state this uniqueness result:
\begin{theorem}\label{S6ThmBre} Suppose that, given a quasi-differential equation \be{S6ThmBreeq} \frac{du(t)}{dt}={\bf v}(u(t)) \ee there exists a Lipschitz semigroup $\cp^*:E_0\times[0,\infty)\to E_0^*$, where $(E_0,d_0)$ and $(E_0^*,d_0^*)$ are two metric spaces, which enjoys the following properties for all $\bar u,\bar v\in E_0$ and all $t_1,t_2\ge 0$: \begin{enumerate} \item[(a)] $\cp_0^*\bar u=\bar u$, \quad $\cp^*_{t_1}\cp_{t_2}^*\bar u=\cp_{t_1+t_2}^* \bar u$, \item[(b)] $\exists L>0$ such that $d_0^*(\cp_{t_1}^*\bar u,\cp_{t_2}^*\bar v)\le L\left( |t_1-t_2|+d_0(\bar u,\bar v) \right)$, \item[(c)] every trajectory $t\mapsto \cp_t^*\bar u$ provides a solution to the generalized Cauchy problem~\eqref{S6ThmBreeq} with initial data $u(0)=\bar u$. \end{enumerate} Then for every initial data $\bar u\in E_0$, the Cauchy problem~\eqref{S6ThmBreeq} with initial data $u(0)=\bar u$ has the unique solution $u(t)=\cp_t^*\bar u$. \end{theorem}
We apply the above uniqueness result with $E_0=\mathcal{D}_0$, $E_0^*=\mathcal{D}_0^*$ and with both distances $d_0=d_0^*$ induced by the $ L^{1}$ norm. Moreover, we consider ${\bf v} (u)$ to be the generalized tangent vector of the curve \be{S6ThmBreeqv} \gamma(\theta)=\cs_\theta u+\theta g(u),\qquad \theta\ge 0, \ee with $\cs$ the semigroup associated with the homogeneous system~\eqref{S1system-hom}. The aim now is to prove that the solution $u(t,\cdot)$ obtained as the limit of $\{\cp^{s_m}_t\bar u\}_m$ is also a solution to~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv} and that this equation admits a Lipschitz flow of solutions. As already mentioned, this is the strategy in~\cite{AG} for general systems of balance laws with small total variation, which we also adopt here. However, we cannot quote the analysis in~\cite[\S6]{AG} since our stability functional and the metric spaces are different.
\begin{theorem}\label{S6Thm1} The curve $t\mapsto u(t,\cdot)$, where $u(t,\cdot)$ is the entropy weak solution obtained in Proposition~\ref{S6Prop1}, satisfies the generalized differential equation~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv} with initial data $u(0)=\bar u\in\mathcal{D}_0$ in the metric space $L^1(\R,\R^2)$ and there exists $\theta_0\in(0,1)$ so that it holds \be{} \|u(t+\theta)-\cs_\theta u(t)-\theta g(u(t))\|_{L^1}\le \co(1)\theta^2,\qquad 0<\theta<\theta_0\;, \ee where $\cs$ is the semigroup of the homogeneous system~\eqref{S1system-hom} and $g$ the source term of the inhomogeneous system~\eqref{S1system}. Moreover, the generalized differential equation~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv} admits a Lipschitz semigroup of solutions. \end{theorem}
\begin{proof} Let $\bar u\in\mathcal{D}_0$ and consider $u(t)\in\mathcal{D}_0^*$, the limit of $\{\cp^{s_m}_t\bar u\}$ for $t>0$ as $m\to\infty$ obtained in Proposition~\ref{S6Prop1}. By Theorem~\ref{ThmPS5.4}, note that $\cp_{t'}^{s_m}(\cp_t^{s_m}\bar u)\in \mathcal{D}_0^*$ for all $t,t'>0$. We first claim that there exists $\theta_0\in(0,1)$ such that for all $\theta\le \theta_0$ and all $s>0$ satisfying $s<\theta^2$, we have \be{S6:claim1} \|\cp_\theta^s u-\cs_\theta u-\theta g(u)\|_{L^1}\le \co(1)\theta^2 \ee for all $u\in\mathcal{D}_0^*$.
To prove this, consider the approximate solution $u_\eps(\tau)$ to $\cp^s_\tau u$ of the non-homogeneous system~\eqref{S1system} and the approximate solution $v_\eps(\tau)$ to $\cs_\tau u$ of the homogeneous system~\eqref{S1system-hom}, starting out from the same initial data $ u$ and for a time interval of length $\theta>0$. Let $v_\eps(0,x)=u_\eps(0,x)$ and take $\omega(\tau,x):=u_\eps(0,x)$. Note that $u_\eps(\tau)$ is discontinuous in $L^1$ at the time steps $t_k=k s$, while $v_\eps(\tau)$ is continuous. On the other hand, $\omega$ is independent of time $\tau$ and discontinuous along vertical lines. In what follows, three limits are taken in the following order: first $\eps\to0+$, next $s\to0+$ and last $\theta\to0+$.
To start with, fix $\theta_0$, to be determined later, take $\theta<\theta_0$, choose $k_0$ so that $k_0s\le\theta<(k_0+1)s$ and consider $u_\eps$, $v_\eps$ and $\omega$ for $(\tau,x)\in[0,\theta]\times\mathbb{R}$. Also, define the quantities \be{} \varphi_k^+=\Phi_{ks g(\omega)}(u_\eps(ks),v_\eps(ks)) \ee \be{} \varphi_k^-=\Phi_{(k-1)s g(\omega)}(u_\eps(ks-),v_\eps(ks)) \ee for $k=1,\dots,k_0$. The aim is to estimate $\varphi_{k_0}^+$, and hence, we estimate the differences $\varphi_k^+-\varphi_k^-$ and $\varphi_k^--\varphi_{k-1}^+$ for $k=1,\dots, k_0$ and proceed by induction.
First, we consider \be{} \varphi_k^--\varphi_{k-1}^+=\Phi_{(k-1)s g(\omega)}(u_\eps(ks-),v_\eps(ks))- \Phi_{(k-1)s g(\omega)}(u_\eps((k-1)s),v_\eps((k-1)s)) \ee and observe that the functions $u_\eps$, $v_\eps$, $(k-1) sg(\omega)$ within the time strip $(t_{k-1},t_{k})$ play the role of the functions $u$, $v$ and $z$, respectively, of Theorem~\ref{stability-Phy}-(i). Hence, if $\theta_0$ is sufficiently small, we get \be{} \varphi_k^--\varphi_{k-1}^+\le \co(1) s[\eps+(k-1) s] \le \co(1) s[\eps+\theta] \ee since $\text{TotVar}\{ g(\omega)\}\le \co(1)$. By definition of $\Phi_\er$, the other difference can be rewritten as \be{} \varphi_k^+-\varphi_k^-=\sum_{i=1}^2 \int_{-\infty}^\infty \left[ |\eta_i^{+}(x) |W_i^+(x)-|\eta_i^-(x)| W_i^-(x) \right] dx \ee with $\eta_i^+$ connecting $u_\eps(ks)$ with $v_\eps(ks)+ks g(\omega)$ and $\eta_i^-$ connecting $u_\eps(ks-)$ with $v_\eps(ks)+(k-1)s g(\omega)$. Then using Lemma~\ref{S:ts-lemma} and a bound similar to~\eqref{S:ts-Abd2new}, we can reproduce the work in~\cite[p. 1011]{AG} and get \be{} \varphi_k^+-\varphi_k^-\le\co(1) (\eps+s)\varphi_k^-+\co(1) s h_\eps \ee with $h_\eps=\max_{k=1,\dots,k_0}\|\omega-u_\eps(ks-)\|_{L^1}$. Proceeding as in~\cite[p. 1012]{AG}, by induction and the $L^1$ equivalence of the functional $\Phi_z$, \[ \|u_\eps(k_0 s)-v_\eps(k_0 s)-k_0 s g(\omega)\|_{L^1}\le\co(1)\varphi_{k_0}^+\le\co(1)s(\eps+\theta+h_\eps+C(s+\eps)(\theta+\eps)) \frac{[1+C(s+\eps)]^{k_0}-1}{s+\eps} \] since $\varphi_0^+=0$. Letting $\eps\to0+$ and using property (iii) of Theorem~\ref{ThmPS5.4} and Theorems~\ref{exist-hom-smgr}--\ref{ThmPS5.4}, we arrive at~\eqref{S6:claim1} for $s<\theta^2$ and $\theta_0$ sufficiently small.
Using now the subsequence $\{\cp^{s_m}_t \bar u \}$ of Proposition~\ref{S6Prop1} converging to $u=u(t)$, properties (ii), (iv) of Theorem~\ref{ThmPS5.4}, and estimate~\eqref{S6:claim1} with $u=u(t)\in\mathcal{D}_0^*$ and letting $m\to\infty$, we get \be{} \frac{\|u(t+\theta)-\cs_\theta u(t)-\theta g(u(t))\|_{L^1}}{\theta}\le \co(1)\theta,\qquad 0<\theta<\theta_0\;.
\ee Taking the limit as $\theta\to0+$, this immediately implies that $u(t,\cdot)$ satisfies the generalized differential equation~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv} with initial data $u(0)=\bar u$. Thus, to apply Theorem~\ref{S6ThmBre} and conclude uniqueness and continuous dependence, it remains to prove the existence of a Lipschitz semigroup for~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv}. Then, arguing as in~\cite[p. 1013]{AG} and combining~\eqref{S6:claim1} with properties (ii)--(iv) of Theorem~\ref{ThmPS5.4}, we get that $\{\cp^{s_m}_t \bar u \}$ converges pointwise to a Lipschitz semigroup $\cp:[0,\infty)\times\mathcal{D}_0 \to \mathcal{D}_0^*$ enjoying the properties of Theorem~\ref{exist-nonhom-smgr-1}, whose trajectories satisfy the generalized differential equation~\eqref{S6ThmBreeq}--\eqref{S6ThmBreeqv}. This immediately gives the existence. Hence, applying Theorem~\ref{S6ThmBre} to $\cp^*=\cp$, the proof is complete. \end{proof}
In view of the above analysis, Theorems~\ref{exist-hom-smgr} and~\ref{exist-nonhom-smgr-1} follow immediately. At the same time, the Lipschitz semigroup $u(t)=\cp_t\bar u$ obtained in Theorem~\ref{S6Thm1} satisfies the properties (i)--(iv) of Theorem~\ref{exist-nonhom-smgr-1}, as indicated in the proof of Theorem~\ref{S6Thm1}, by letting $s\to0+$ in Theorem~\ref{ThmPS5.4}-(iii)-(iv).
\newpage \appendix
\section{Reduction to shock curves in the stability analysis} \label{S:shockReduction}
In \S~\ref{Ss:nointeractiontimesN} we proved estimates for the stability analysis relative to times which are neither interaction times nor time steps. There, the right states and velocities were computed with the correct strength but along shock curves instead of rarefaction curves, even across a rarefaction discontinuity at $x_{\alpha}$ of the front-tracking approximation. The reason for this reduction is the same as in the reduction argument of \cite[\S~8.2, pp.~161--162]{Bre} and we briefly recall it here, referring the reader to~\cite{Bre} for the full explanations together with the computations. Such values, corresponding to approximate right states across rarefactions, should more correctly be denoted by $\eta_{i}^{\diamond},\lambda_{i}^{\diamond}, \dot x_{\alpha}^{\diamond}$ instead of $\eta_{i}^{r},\lambda_{i}^{r}, \dot x_{\alpha}^{r}$, as we do with an abuse of notation.
The goal was to prove~\eqref{E:mainest}: each correct error $E_{\alpha,i} $ can be written as in \cite[(8.44)]{Bre}, just by adding and subtracting equal terms, as \[ E_{\alpha,i} = E_{\alpha,i}' +E_{\alpha,i}'' +E_{\alpha,i} ''' \quad,\quad E_{\alpha,i}' \doteq W_{i}^{\alpha,r}|\eta_{i}^{\alpha,\diamond}|(\lambda_{i}^{\alpha,\diamond}-\dot x_{\alpha}^{\diamond})- W_{i}^{\alpha,\ell}|\eta_{i}^{\alpha,\ell}|(\lambda_{i}^{\alpha,\ell}-\dot x_{\alpha}^{\diamond}) \ . \] The error term $E_{\alpha,i}' $ is precisely what we estimate in \S~\ref{Ss:nointeractiontimesN}, where, as mentioned, with an abuse of notation we briefly write $\eta_{i}^{r},\lambda_{i}^{r}, \dot x_{\alpha}^{r}$ in place of $\eta_{i}^{\diamond},\lambda_{i}^{\diamond}, \dot x_{\alpha}^{\diamond}$.
The error term $E_{\alpha,i}''$ is estimated by \[ E_{\alpha,i}'' \doteq W_{i}^{\alpha,r}|\eta_{i}^{\alpha,\diamond}|(\lambda_{i}^{\alpha,r}-\lambda_{i}^{\alpha,\diamond})+ W_{i}^{\alpha,r}\left(|\eta_{i}^{\alpha,r}|-|\eta_{i}^{\alpha,\diamond}|\right)(\lambda_{i}^{\alpha,r}-\lambda_{i}^{\alpha,\diamond}) \leq\co(1) |\sC_{\alpha}|^{3} \leq \co(1) \varepsilon |\sC_{\alpha}| \] just due to the second-order tangency between shock curves and rarefaction curves, which yields errors of order $\co(1) |\sC_{\alpha}|^{3}$, see \cite[(8.43)]{Bre}. The error term $E_{\alpha,i}'''$, even with our functional~\eqref{S4Wi}--\eqref{S3G}--\eqref{S4A1}--\eqref{S4A2}, is estimated similarly to \cite[(8.46)]{Bre} by \[ E_{\alpha,i}''' \doteq (\dot x_{\alpha}^{\diamond}- \dot x_{\alpha} ) \left\{W_{i}^{\alpha,r}\left(|\eta_{i}^{\alpha,r}|-|\eta_{i}^{\alpha,\ell}|\right) + \left( W_{i}^{\alpha,r}- W_{i}^{\alpha,\ell}\right) |\eta_{i}^{\alpha,\ell}| \right\} \leq \co(1) \varepsilon |\sC_{\alpha}| \ . \] We stress, concerning the second addend within brackets, that by standard interaction estimates, $|\eta_{i}^{\alpha,\ell}|\leq \co(1) |\sC_{\alpha}|$ in case $\eta_{i}^{\alpha,\ell}$ and $\eta_{i}^{\alpha,r}$ have different signs. This follows immediately from~\eqref{E:stimaetarbasefull1}--\eqref{E:stimaetarbasefull} in Corollary~\ref{C:stimegenerali} and~\eqref{E:primsfwewgrqgerqpall}--\eqref{E:primsfwewgrqgerqpall2} in Lemma~\ref{L:stimegenerali2}. On the other hand, if $\eta_{i}^{\alpha,\ell}$ and $\eta_{i}^{\alpha,r}$ have the same sign, it holds $| W_{i}^{\alpha,r}- W_{i}^{\alpha,\ell}|\leq \co(1) |\sC_{\alpha}|$ just by definition of $\ca_{i,j}$.
\section{Analysis of shock curves}\label{AppB}
\subsection{Analysis of 1-shock curves} \label{S:1shocks}
By algebraic manipulations of the Rankine-Hugoniot equations, it is proven in \S~2 of~\cite{AS} that the 1-shock curve $\SC{1}{h-h_{\ell}}{h_{\ell},p_{\ell}}$ through the point $(h_{\ell},p_{\ell})$ can be written in the form: \begin{equation} \label{E:p1shock} p=p_{\ell}-\frac{s_{1}(h,p_{\ell})+1}{s_{1}(h,p_{\ell})}(h-h_{\ell})~\equiv p_{\ell}- \frac{ p_{\ell}-1}{ h-s_{1}(h,p_{\ell}) }(h-h_{\ell}) \end{equation} and this wave, connecting $(h_\ell,p_\ell)$ on the left with $(h,p)$ on the right, has strength $h-h_\ell$ in Cartesian coordinates. Here, $s_{1}$ is the Rankine-Hugoniot speed of the 1-shock, which is strictly negative and has the following expression: \begin{equation} \label{E:p1shock2} s_{1}=s_{1}(h,p_{\ell})\equiv\lambda_{1}(h,p_{\ell})\equiv -\frac{1}{2}\left[ p_{\ell}-h+\sqrt{(p_{\ell}-h)^{2}+4h}\right]\;. \end{equation} By~\eqref{E:p1shock}, we have \bas \frac{p-p_{\ell}}{h-h_{\ell}}=-\frac{s_{1}(h,p_{\ell})+1}{s_{1}(h,p_{\ell})} \;, \eas so that if we exchange $(h,p)$ and $(h_{\ell},p_{\ell})$ the left-hand side remains the same. As a consequence, \be{E:symmetry1HC} s_{1}(h,p_{\ell})=s_{1}(h_{\ell},p) \qquad \text{whenever }p=\SC{1}{h-h_{\ell}}{h_{\ell},p_{\ell}}\ . \ee Moreover, for small $h$, one has the expansion \bes s_{1}(h,p_{\ell})= -p_{\ell }+\frac{p_{\ell}-1}{p_{\ell}}h+\mathcal{O}(h^{2}) \ees and for $p_{\ell}=1$ one has $s_{1}(h,1)\equiv-1$. Finally, one can compute \ba \label{E:coeff1s} \frac{s_{1}(h,p_{\ell})+1}{s_{1}(h,p_{\ell})} &=\frac{2}{s_{1}(h,p_{\ell})}\cdot\frac{ p_{\ell}-1}{ p_{\ell}-h-2-\sqrt{(p_{\ell}-h)^{2}+4h} } = \frac{ p_{\ell}-1}{ h-s_{1}(h,p_{\ell}) } \\&=\frac{p_{\ell}-1}{p_{\ell}}-\frac{p_{\ell}-1}{p_{\ell}^{3}}h+o(h)\;.
\notag \ea Thanks to this explicit expression, one can compute the derivatives of the Rankine-Hugoniot speed when the 1-Hugoniot curve is parametrized by $h$: \be{E:s1derh1} \ptlls{h}{ s_{1}}(h,p_{\ell})= \frac{1}{2} \left(\frac{-h+p_{\ell}-2}{\sqrt{(p_{\ell}-h)^{2}+4 h}}+1\right) = \frac{ p_{\ell}-1 }{r(h,p_{\ell})(1+s_{2}(h,p_{\ell}))} \quad = \quad \frac{p_{\ell}-1}{p_{\ell}}-2\frac{p_{\ell}-1}{ p_{\ell} ^{3}} h+o(h)\;, \ee \be{E:ddhs1} \ddn{h}{2} s_{1}(h,p_{\ell})=- \frac{2 (p_{\ell}-1)}{\left((p_{\ell}-h)^{2}+4 h \right)^{3/2}}\;. \ee We also have that \be{E:s1derp1} \ptlls{p}{ s_{1}}(h,p_{\ell})=\frac{1}{2} \left(\frac{h-p_{\ell}}{\sqrt{(h-p_{\ell})^2+4 h}}-1\right) = \frac{s_{1}(h,p_{\ell})}{\sqrt{(h-p_{\ell})^2+4 h}} \quad = \quad -1+\frac{h}{p_{\ell}^{2}}+o(h)\;, \ee \bes \ptl{p}{2} s_{1}(h,p_{\ell})=-\frac{2 h}{\left((p_{\ell}-h)^{2}+4 h\right)^{3/2}}\;. \ees In the above, we have set \bes r(h,p):=\sqrt{(p-h)^{2}+4 h} \quad =\quad p+\frac{(2-p)\,h}{p}+o(h)\quad\text{for $h$ small}.\ees Finally, we notice that if $p$ is defined by~\eqref{E:p1shock} then \[ \ptlls{p_\ell}p=1- \frac{h-h_{\ell}}{ h-s_{1}(h,p_{\ell}) }- \frac{(p_{\ell}-1)(h-h_{\ell})}{ ( h-s_{1}(h,p_{\ell}))^2 }\frac{s_{1}(h,p_{\ell})}{\sqrt{(h-p_{\ell})^2+4 h}}\;, \] so that \ba\label{contiderivataS1} &\ptlls{p_\ell}p{\Big|_{p_\ell=1}}=1- \frac{h-h_{\ell}}{ 1+ h }=\frac{1+h_\ell}{1+ h } &&\text{while} &&\ptlls{h}p{\Big|_{p_\ell=1}}=\ptlls{h_\ell}p{\Big|_{p_\ell=1}}=0 \ . \ea It also holds that \ba\label{contiderivataS1eta0} &\ptlls{p_\ell}p{\Big|_{h_\ell=h}}=1 \ , &&\ptlls{h_\ell}p{\Big|_{h_\ell=h}}=0 &&\text{while} &&\ptlls{h}p{\Big|_{h_\ell=h}}=- \frac{ p_{\ell}-1}{ h-s_{1}(h,p_{\ell}) } \ . \ea When $0\leq h\leq \delta_0^*$ and $p_0^*\leq p\leq p_1^*$, with $p_0^*\leq 1\leq p_1^*$, the following bound is also useful: \bes p_{0}^*+h\leq\quad h-\lambda_{1}(h,p_{\ell})\equiv \frac{1}{2}\left[ p_{\ell}+h+\sqrt{(p_{\ell}-h)^{2}+4h} \right]\quad\leq p_{1}^*+h\;. \ees We also compute and estimate $\ptlls{h}p$ where $p$ is defined by~\eqref{E:p1shock}: \ba \ptlls{h}p&=-( p_{\ell}-1)\left( \frac{1}{ h-s_{1}(h,p_{\ell}) }-\frac{h-h_{\ell}}{ (h-s_{1}(h,p_{\ell}))^2 }\left(1-\ptlls{h}{s_{1}}(h,p_{\ell})\right) \right)\notag \\ &\stackrel{\eqref{E:s1derh1}}{=}-\frac{p_{\ell}-1}{ h-s_{1}(h,p_{\ell}) }\left(1 -\frac{h-h_{\ell}}{ h-s_{1}(h,p_{\ell}) }\cdot\frac{1}{2} \left(\frac{h-p_{\ell}+2+\sqrt{(p_{\ell}-h)^{2}+4 h}}{\sqrt{(p_{\ell}-h)^{2}+4 h}}\right) \right)\notag\\ &\stackrel{\eqref{E:p1shock2}}{=}-\frac{p_{\ell}-1}{ h-s_{1}(h,p_{\ell}) }\left(1 -\frac{h-h_{\ell}}{ \sqrt{(p_{\ell}-h)^{2}+4 h} }\cdot\ \left(1+\frac{1-p_{\ell} }{h-s_{1}(h,p_{\ell}) } \right) \right)\ . \label{E:computationder} \ea For $0\leq h\leq \delta_0^*<p_0^*\leq p\leq p_1^*$, the following estimates are also useful: \ba\label{stimadaldalvassos1} &\max\{p_{\ell}-h;2\sqrt h\}\leq\quad\sqrt{(p_{\ell}-h)^{2}+4h}\quad \leq\sqrt{(p_{1}^*-h)^{2}+4hp_{1}^*} =p_{1}^*+h\\ &p_{0}^*\leq p_{0}^*+h\leq\quad h-\lambda_{1}(h,p_{\ell})\equiv \frac{1}{2}\left[ p_{\ell}+h+\sqrt{(p_{\ell}-h)^{2}+4h} \right]\quad\leq p_{1}^*+h \ea Using now that $p_0^*=1-\delta_p^*$ and choosing $\delta_0^*$ and $\delta_p^*$ small enough so that $1-\delta_p^*-\delta_0^*>2\sqrt{\delta_0^*}$, we have $\max\{p_{\ell}-h;2\sqrt h\}\ge1-\delta_p^*-\delta_0^*$. This holds true for instance if we use the rough estimate that both $\delta_0^*$ and $\delta_p^*$ are less than $1/9$, in addition to Conditions ($\Sigma$) in Proposition~\ref{PropCond}.
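As a quick sanity check on this smallness requirement, note that if, say, $\delta_0^*\le \tfrac19$ and $\delta_p^*\le \tfrac19$, then
\[
1-\delta_p^*-\delta_0^*\;\ge\;\frac{7}{9}\;>\;\frac{2}{3}\;=\;2\sqrt{\tfrac19}\;\ge\;2\sqrt{\delta_0^*}\,,
\]
so the inequality $1-\delta_p^*-\delta_0^*>2\sqrt{\delta_0^*}$ is indeed satisfied.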
Combining the estimates~\eqref{stimadaldalvassos1} and the above lower bound on $\max\{p_{\ell}-h;2\sqrt h\}$ together with~\eqref{E:computationder}, we arrive at \ba\label{esotedegaaggargrawgragar} \left\lvert \ptlls{h}p\right\rvert&\leq \frac{1}{p_{0}^*}\left(1+\frac{|h-h_\ell|}{\max\{p_{\ell}-h;2\sqrt h\}}\cdot\left(1+\frac{\delta_p^* }{p_{0}^*}\right)\right) \cdot \left\lvert p_{\ell}-1\right\rvert\notag\\ & \leq \frac{1}{p_{0}^*} \left(1+\frac{\delta_0^*}{(1-\delta_p^*-\delta_0^*)p_{0}^*}\right) \cdot \left\lvert p_{\ell}-1\right\rvert\notag\\ & \leq \frac{1+\delta_0^*}{(p_{0}^*)^2(1-\delta_p^*-\delta_0^*)}\cdot \left\lvert p_{\ell}-1\right\rvert \ea where the coefficient of $ \left\lvert p_{\ell}-1\right\rvert$ in the above bound is a positive finite constant.
\subsection{Analysis of 2-shock curves} \label{S:2shocks}
In \S~2 of~\cite{AS} it is shown from the Rankine-Hugoniot equations that the 2-shock curve~$\SC{2}{p-p_{\ell}}{h_{\ell},p_{\ell}}$ through the point $(h_{\ell},p_{\ell})$ has the following form: \be{E:h2shock} h=h_{\ell}-\frac{s_{2}(h_{\ell},p)}{s_{2}(h_{\ell},p)+1}(p-p_{\ell})\quad\equiv\quad h_{\ell}+\frac{h_{\ell}}{\lambda_{1}(h_{\ell},p)-h_{\ell}}(p-p_{\ell}) \quad\equiv\quad \left(1+\frac{p-p_{\ell}}{\lambda_{1}(h_{\ell},p)-h_{\ell}}\right)h_{\ell} \ , \ee with the corresponding $2$-wave of strength $p-p_\ell$ in Cartesian coordinates. Here, $s_{2}$ is the Rankine-Hugoniot speed of the 2-shock, which is nonnegative and has the following expression: \be{E:s2proph} s_{2}=s_{2}(h_{\ell},p)\equiv\lambda_{2}(h_{\ell},p)~\equiv\frac{1}{2}\left[-(p-h_{\ell})+\sqrt{(p-h_{\ell})^{2}+4h_{\ell}}\right]\equiv-\frac{h_{\ell}}{\lambda_{1}(h_{\ell},p)} \ . \ee We know that $h=\SC{2}{p-p_{\ell}}{h_{\ell},p_{\ell}}$ is equivalent to $h_{\ell}=\SC{2}{p_{\ell}-p }{h,p }$ and from~\eqref{E:h2shock}, we have \bas \frac{p-p_{\ell}}{h-h_{\ell}}=-\frac{s_{2}(h,p_{\ell})+1}{s_{2}(h,p_{\ell})} \eas so that if we exchange $(h,p)$ and $(h_{\ell},p_{\ell})$ the left-hand side remains the same: as a consequence \be{E:symmetry2HC} s_{2}(h,p_{\ell})=s_{2}(h_{\ell},p) \qquad \text{whenever }h=\SC{2}{p-p_{\ell}}{h_{\ell},p_{\ell}}\ . \ee Moreover, for small $h_{\ell}$ one has \bes s_{2}(h_{\ell},p)= \frac{h_{\ell}}{p}+\frac{p-1}{p^{3}}{h_{\ell}^{2}}+\frac{(p-1)(p-2)}{p^{5}}h_{\ell}^{3}+\mathcal{O}(h_{\ell}^{4}) \ . \ees At $p=1$, one has that $s_{2}(h_{\ell},1)\equiv h_{\ell}$, while on the line $h_{\ell}=0$ it holds $s_{2}(0,p )\equiv0$. At $h_{\ell}=0$, we also have \be{E:jacobianS2} \ptl{s}{}\SC{2}{s}{0,p_{\ell}}=\left(\begin{array}{c}0 \\1\end{array}\right) \qquad J_{h_{\ell},p_{\ell}}\SC{2}{s}{0,p_{\ell}}= \left(\begin{array}{cc} 1-\frac{s}{p_{\ell}+s} & 0\\0&1\end{array} \right) \ . \ee Thanks to this explicit expression, one can compute again the derivatives of the Rankine-Hugoniot speed when the 2-Hugoniot curve is parametrized by $p$: \ba \label{E:ders2p1} \ptlls{p}{s_{2}}(h_{\ell},p)&=\frac{1}{2} \left(\frac{p-h_{\ell}}{\sqrt{(h_{\ell}-p)^2+4 h_{\ell}}}-1\right) = \frac{h_{\ell}}{s_{1}(h_{\ell},p)\cdot\sqrt{(h_{\ell}-p)^2+4 h_{\ell}}} \\ \notag &= -\frac{h_{\ell}}{p^{2}}+\frac{ 3-2 p }{ p^4 }h_{\ell}^{2}+ o(h_{\ell}^{2}) \ , \ea \bas \ptl{p}{2} s_{2}(h_{\ell},p)=\frac{2 h_{\ell}}{\left((h_{\ell}-p)^2+4 h_{\ell}\right)^{3/2}} \ . \eas We also have that \bas \ptlls{h}{{s_{2}}}(h_{\ell},p)= \frac{1}{2} \left(\frac{h_{\ell}-p+2}{\sqrt{(h_{\ell}-p)^2+4 h_{\ell}}}+1\right)= \frac{\lambda_{2}(h_{\ell},p)+1}{\sqrt{(h_{\ell}-p)^2+4 h_{\ell}}} \ , \eas \bas \ptl{h}{2} s_{2}(h_{\ell},p)= \frac{2 (p-1)}{\left((h_{\ell}-p)^2+4 h_{\ell}\right)^{3/2}}\;.
\eas In particular, when $h_\ell=0$, we get \be{AppB.3s2h} \ptlls{h}{{s_{2}}}(0,p)= \frac{ 1}{ p } \ . \ee Finally, we notice that if $h$ is defined by~\eqref{E:h2shock} then \[ \ptlls{h_\ell}h=1+\frac{p-p_{\ell}}{ \lambda_{1}(h_{\ell},p ) -h_\ell}- \frac{(p-p_{\ell})h_{\ell}}{ ( \lambda_{1}(h_{\ell},p) -h_\ell)^2 } \cdot \frac{1}{2} \left(\frac{-h_{\ell}+p-2}{\sqrt{(p -h_{\ell})^{2}+4 h_{\ell}}}-1\right) \] so that \be{contiderivataS2} \ptlls{h_\ell}h{\Big|_{h_\ell=0}}=1- \frac{p-p_{\ell}}{ p }=\frac{p_\ell}{p } \qquad \text{while}\qquad \ptlls{p}h{\Big|_{h_\ell=0}}=0 \ . \ee
\section{Finer interaction-type estimates} \label{S:finerInteractions}
We consider the vector states $\underline v^{\alpha,\ell}$, $\underline v^{\alpha,r}$, $\underline u^\alpha$, $\underline \omega^{\alpha,\ell}$, $\underline \omega^{\alpha,r}$ in $K=[0,\delta^*_0] \times [p^*_{0}, p^*_1]$, as already defined in~\S~\ref{Ss:nointeractiontimesN}, which are related as follows: \begin{align*} &\underline v^{\alpha,r}=\mathbf{S}_{k_\alpha}\big(\sC_\alpha; \underline v^{\alpha,\ell}\big), && \underline \omega^{\alpha,\ell}\doteq \mathbf{S}_1\big(\eta_1^{\alpha,\ell }; \underline u^\alpha\big), && \underline v^{\alpha,\ell}= \mathbf{S}_2\big(\eta_2^{\alpha,\ell };\underline \omega^{\alpha,\ell}\big) \ , \\ &&& \underline \omega^{\alpha,r}\doteq \mathbf{S}_1\big(\eta_1^{\alpha, r};\underline u^\alpha \big), && \underline v^{\alpha,r}= \mathbf{S}_2\big(\eta_2^{\alpha, r};\underline \omega^{\alpha,r}\big) \end{align*} and we prove interaction-type estimates on the wave sizes $\eta_1^{\alpha,r}$, $\eta_2^{\alpha,r}$ and on their speeds. More precisely: \begin{itemize} \item[$\bullet$] in \S~\ref{Ss:estimates1}, we derive auxiliary estimates when the jump of $v(t)$ at $x_{\alpha}$ is a $1$-wave, and \item[$\bullet$] in \S~\ref{Ss:estimates2}, we derive auxiliary estimates when the jump of $v(t)$ at $x_{\alpha}$ is a $2$-wave. \end{itemize} Let us recall that we only consider Hugoniot curves, regardless of the admissibility criteria, and in a sense one could say that these are perturbed interaction estimates. The estimates established in this appendix are fundamental for the error analysis derived in \S~\ref{Ss:nointeractiontimesN}. For simplicity, as there is no ambiguity, we omit the index $\alpha$ for the remainder of the present appendix.
\begin{figure}[htbp] \label{fig:setting} {\centering \scalebox{1}{\input{nointeractiontalk.tex} } \par} \caption{ {\bf Left}: The jump $\gamma_\alpha$ at $x_{\alpha}$ is along the first Hugoniot curve: $\underline v^{\alpha,r}=S_1(\sC_\alpha; \underline v^{\alpha,\ell})$. {\bf Right}: The jump $\gamma_\alpha$ at $x_{\alpha}$ is along the second Hugoniot curve $\underline v^{\alpha,r}=S_2(\sC_\alpha; \underline v^{\alpha,\ell})$. \label{S4:fig2}} \end{figure}
\subsection{Case of a $1$-wave} \label{Ss:estimates1}
Let $\gamma$ be a wave of the first family joining the states $\underline v^{\ell}$ and $\underline v^{r}$ of $v$, i.e. \[\underline v^{r}=\mathbf{S}_{1}\big(\sC;\underline v^{\ell}\big)\] where the Hugoniot curve $\SC{1}{\cdot}{\cdot}$ is given in~\eqref{E:p1shock}, while $\SC{2}{\cdot}{\cdot}$ is given in~\eqref{E:h2shock}. The states $\underline v^{\ell}$ and $\underline v^{r}$ are related to a third state $\underline u$ by \[\underline v^\ell=\SC{2}{\eta_2^\ell}{\SC{1}{\eta_1^\ell}{\underline u}} \qquad \underline v^r=\SC{2}{\eta_2^r}{\SC{1}{\eta_1^r}{\underline u}},\] via the waves $\eta^{\ell/r}=(\eta_1^{\ell/r}, \eta_2^{\ell/r})$, respectively.
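To orient the reader before entering the estimates, it is useful to keep in mind the degenerate configuration in which $\underline u$, $\underline v^{\ell}$ and $\underline v^{r}$ all lie on the line $p=1$: by~\eqref{E:p1shock} the $1$-Hugoniot curve through any such point is the horizontal line $p=1$, so that in this case
\[
\eta_2^{\ell}=\eta_2^{r}=0 \qquad\text{and}\qquad \eta_1^{r}=\eta_1^{\ell}+\sC\,,
\]
and essentially all the quantities estimated below vanish. The estimates of this subsection quantify the deviation from this situation in terms of $|v_2^{\ell}-1|$ and of the wave sizes.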
We will also need the speed $\dot x$ connecting the states $\underline v^\ell$ and $\underline v^r$, that is \be{C1xdot}\dot x:=\lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right) \ee and the corresponding ones connecting $\underline u$ with $\underline v^\ell$ and $\underline v^r$ are: \be{C1lambdaell} \lambda_1^\ell:=\lambda_1\left(u_1+\eta_1^\ell,u_2\right),\qquad \lambda_2^\ell:=\lambda_2(u_1+\eta_1^\ell,v_2^\ell)\ee \be{C1lambdar}\lambda_1^r:=\lambda_1\left(u_1+\eta_{1}^{r}, u_2\right),\qquad \lambda_2^r=\lambda_2(u_1+\eta_1^r, v_2^r )\,,\ee respectively. The aim is to establish auxiliary estimates on the wave strengths $\eta_1^r$, $\eta_1^\ell$, $\eta_2^r$, $\eta_2^\ell$, on the waves velocities and on a commutator among waves and velocities. First, in Lemma~\ref{L:sstimaetarbase} we deal with the simpler case when $\eta_2^\ell=0$ and afterwards, we prove the general case in Corollary~\ref{C:stimegenerali}. \newcommand{\evi}[1]{{\color{purple}#1}} \begin{lemma} \label{L:sstimaetarbase} \begin{subequations} \label{E:sstimaetarbase} Let $\underline v^{\ell}$, $\underline v^{r}$, $\underline u$, $\underline \omega^{\ell}$, $\underline \omega^{r}$, $\gamma$, $\eta^{\ell}$ and $\eta^r$ as denoted above and $\gamma$ be a wave of the first family, i.e. $\underline v^{r}=\mathbf{S}_{1}\big(\sC; \underline v^{\ell}\big)$. Suppose that $\eta_2^\ell=0$, so that $\underline v^\ell=\SC{1}{\eta_1^\ell}{\underline u}$. Then \begin{align} \label{E:stimaetarbasepartial1} &|\eta_{1}^{r}-\eta_{1}^{\ell}-\sC_{{{}}}| \leq \co(1)( v^{\ell}_{1}+\gamma) (v_{2}^{\ell}-1)^{2} \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert \ , \\ &\label{E:stimaetarbasepartial} |\eta_{2}^{r}| \leq \co(1) (v_{2}^{\ell}-1)^{2} \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert\ . \end{align} \end{subequations} Moreover, the speed $\dot x$ given in~\eqref{C1xdot} satisfies \begin{subequations} \label{E:stima11part} \begin{align} \label{E:stima11lpart} &\left\lvert\dot x_{}-\lambda_{1}^{\ell}-\frac{(v_{2}^{\ell}-1) (\eta_{1}^{\ell}+\sC_{})}{v_2^\ell}\right\rvert\leq \co(1)\delta_0^*\cdot \left\lvert (v_{2}^{\ell}-1) (\eta_{1}^{\ell}+\sC_{})\right\rvert \\ \label{E:stima11rpart} &\left\lvert\dot x_{}-\lambda_{1}^{r}- \frac{(v_{2}^{\ell}-1) \eta_{1}^{\ell} }{v_2^\ell} \right\rvert\leq \co(1) \delta_0^* \cdot \left\lvert(v_{2}^{\ell}-1) \eta_{1}^{\ell}\right\rvert \end{align} \end{subequations} while the following commutators satisfy the bounds \begin{subequations} \label{E:fvgabfafvGroup} \begin{align} \label{E:fvgabfafvNew} &\left\lvert\eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{})\right\rvert\leq \co(1)(\sC+v_{1}^{\ell}) (v_{2}^{\ell}-1)^2 \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert \;, \\ &\left\lvert \eta_{2}^{r}(\lambda_{2}^{r}-\dot x_{})-\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{ })\right\rvert\leq \co(1) (v_{2}^{\ell}-1)^{2} \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert \;, \end{align} where $\lambda_i^r$, $\lambda_i^\ell$ correspond to the speeds of the $i$-family that are given in~\eqref{C1lambdaell}--\eqref{C1lambdar}. 
\end{subequations} \end{lemma}
\begin{proof} Suppose that $\eta_2^\ell=0$. Then we have $u_1+\eta_1^\ell=v_1^\ell$, and therefore the speeds reduce to \[ \lambda_1^\ell:=\lambda_1\left(v_1^\ell,u_2\right),\qquad \lambda_2^\ell=\lambda_2(\underline v ^\ell)\] \[\lambda_1^r=\lambda_1\left(v_1^\ell-\eta_{1}^{\ell}+\eta_{1}^{r}, u_2\right),\qquad \lambda_2^r=\lambda_2(v_1^\ell-\eta_{1}^{\ell}+\eta_1^r, v_2^r )\,,\] and recall that \[\dot x:=\lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right)\;. \] We now prove separately each estimate in the following steps.\\
{\sc Step 1}. Proof of~\eqref{E:stimaetarbasepartial}. \\ We consider $\eta_{1}^{r}$ and $\eta_{2}^{r}$ as smooth functions of the independent variables $\underline v^\ell=(v_{1}^{\ell},v_{2}^{\ell}), \eta_{1}^{\ell},\sC{}$ written in the form: \bas & \eta_{1}^{r}=\eta_{1}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell}, \sC{}) \ , & \eta_{2}^{r}= \eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell}, \sC{}) \ . \eas Since $\underline w=\SC{i}{t}{\underline z}$ is equivalent to $\underline z=\SC{i}{-t}{\underline w}$, the functions $ \eta_{1}^{r}$ and $ \eta_{2}^{r}$ are implicitly defined also by the identity \be{implicitdefinitionetar1} \SC{2}{-\eta_2^r}{\SC{1}{\gamma}{\underline v^{ \ell} }}\equiv\SC{1}{\eta_1^r}{\SC{1}{-\eta_1^\ell}{ \underline v^\ell}}\ . \ee We remark that although in the context of this model we consider $h\geq0$, the above functions are analytic in the larger domain \ba \label{E:domaknfknfgrjfwff1} v_{1}^{\ell}, v_{1}^{\ell}-\eta_{1}^{\ell}, v_{1}^{\ell}+\sC\in[-\delta^*_0,\delta^*_0]\qquad v_{2}^{\ell}\in[p^*_{0}, p^*_1] \ . \ea Now, we observe that $\eta_2^r=\eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\sC)$ satisfies the following vanishing conditions: \begin{itemize} \item[(i)] $\eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},0)=0$. Indeed, if $\sC=0$, then $\eta_{2}^{r}\equiv\eta_{2}^{\ell}$, which is zero by assumption. \item[(ii)] $\eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,0,\sC)=0$. Indeed, when $\eta_{1}^{\ell}=0$, we have $\underline v^\ell\equiv\underline u$ and thus $\underline v^{r}=\SC{1}{\sC}{ \underline u}{}$ and $\eta_{2}^{r}=0$. \item[(iii)] $\eta_{2}^{r}(v_{1}^{\ell}, 1 ,\eta_{1}^{\ell},\sC)=0$. This holds true because if $v_{2}^{\ell}=1$, then $\underline v^{r}$, $\underline v^{\ell}$ and $\underline u$ are connected along the $1$-Hugoniot curve, which is the horizontal line $p=1$, and we have $\eta_{2}^{r}\equiv\eta_{2}^{\ell}=0$ and $\eta_{1}^{r}\equiv\eta_{1}^{\ell}+\sC$. \item[(iv)] $\eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\tau,-\tau)=0$ for $\tau\in\mathbb{R}$. This follows if $\sC+\eta_{1}^{\ell}=0$ since in this case $\underline v^{r}=\mathbf{S}_{1}\big(\sC; \underline v^{\ell}\big)=\mathbf{S}_{1}\left(- \eta_{1}^{\ell}; \mathbf{S}_{1}\left( \eta_{1}^{\ell};\underline u\right)\right)$, which implies $\underline u\equiv \underline v^r$ and $\eta_{1}^{r}\equiv\eta_{2}^{r}\equiv \sC+\eta_{1}^{\ell}\equiv0 $. \end{itemize} In addition to these vanishing conditions, we claim that \be{E:claimvanishingderivative} \ptlls{ {v_{2}^{\ell}}}{ } \eta_{2}^{r} \left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell},\sC{}\right) \Big|_{v_2^\ell=1}=0 \ee holds true as well. Before we prove the claim, we show that we can obtain~\eqref{E:stimaetarbasepartial}.
Indeed, using Lemma~\ref{E:rewgwgeq} in Appendix~\ref{App:C}, we can express $\eta_2^r$ as \begin{equation} \label{psi1est-case1} \begin{aligned} \eta_2^r\left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell},\sC \right) &=\int_1^{v_2^\ell}\int_1^p \frac{\del^2 \eta_2^r}{\del\tilde p^2}\left(v_{1}^{\ell},\tilde{p},\eta_1^{\ell},\gamma\right) d\tilde p\,dp \;. \end{aligned} \end{equation} On the other hand, relying on the vanishing conditions (i)-(ii) and (iv) and applying Lemma 2.6 of~\cite{Bre}, we get \bas \left|\frac{\del^2\eta_2^r}{\del\tilde p^2}\left(v_{1}^{\ell},\tilde{p},\eta_1^{\ell},\gamma\right)\right|= \co(1) |\eta_1^\ell| |\sC| |\eta_1^{\ell} +\sC|\;. \eas Thus, from~\eqref{psi1est-case1} it follows immediately estimate~\eqref{E:stimaetarbasepartial}. In view of the above, it remains to prove claim~\eqref{E:claimvanishingderivative}. To accomplish this, we write explicitly the second component of~\eqref{implicitdefinitionetar1}. Denoting by $\pSC{1}{\cdot}{\cdot}$ the second component of the Hugoniot curve $\SC{1}{\cdot}{\cdot}$ recalled in~\eqref{E:p1shock}, and recalling that we are using Cartesian coordinates, we get \[ \pSC{1}{\gamma}{\underline v^{ \ell} }-\eta_2^r=\pSC{1}{\eta_1^r}{v_1^\ell-\eta_1^\ell,\pSC{1}{-\eta_1^\ell}{ \underline v^\ell}} \ . \] Now, we take the derivative $\ptll{v_2^\ell}$ of this equality and by the chain rule, we arrive at \be{accessrior030r2ke2} \ptpS{v_2^\ell}{1}{\gamma}{\underline v^{\ell} }-\ptlls{v_2^\ell}{\eta_2^r}\left(\underline v^\ell , \eta_{1}^{\ell},\sC{}\right) =\ptpS{\eta_1^r}{1}{\eta_1^r}{\underline u } \ptlls{v_2^\ell}{\eta_1^r}\left(\underline v^\ell , \eta_{1}^{\ell},\sC{}\right) +\ptpS{u_2} {1}{\eta_1^r}{\underline u } \ptpS{v_2^\ell} {1}{-\eta_1^\ell}{ \underline v^{ \ell} }\ . \ee Next, using~\eqref{contiderivataS1}, we evaluate all terms for $v_2^\ell=u_2=1$ and hence, $\eta_2^r=0$ and $\eta_1^r=\gamma+\eta_1^\ell$ to get \bas &\ptpS{v_2^\ell}{1}{\gamma}{v_1^{ \ell},v_2^{ \ell} }\Big|_{v_2^\ell=1}=\frac{1+v_1^{ \ell}}{1+v_1^r} \ , &&\ptpS{u_2}{1}{\eta_1^r=\gamma+\eta_1^\ell}{ u_1,u_2 }\Big|_{u_2=1}= \frac{1+u_1}{1+v_1^r} \ , \\ &\ptpS{v_2^\ell}{1}{-\eta_1^\ell}{ v_1^{ \ell},v_2^\ell }\Big|_{v_2^\ell=1}=\frac{1+v_1^\ell}{1+u_1} \ , &&\ptpS{\eta_1^r}{1}{\eta_1^r}{ u_1,u_2 }\Big|_{u_2=1} =0 \ . \eas Substituting these into~\eqref{accessrior030r2ke2} we compute \bas{} \ptlls{v_2^\ell}{\eta_2^r}\left( v_1^\ell,v_2^\ell , \eta_{1}^{\ell},\sC{}\right)\Big|_{v_2^\ell=1} &= \ptpS{v_2^\ell}{1}{\gamma}{v_1^{ \ell},v_2^{ \ell} }\Big|_{v_2^\ell=1} -\ptpS{\eta_1^r}{1}{\eta_1^r}{\underline u } \Big|_{u_2=1} \ptlls{v_2^\ell}{\eta_1^r}\left( v_1^\ell,v_2^\ell , \eta_{1}^{\ell},\sC{}\right)\Big|_{v_2^\ell=1} \\ &\qquad\qquad\qquad\qquad\qquad -\ptpS{u_2}{1}{\eta_1^r}{\underline u } \Big|_{u_2=1} \ptpS{v_2^\ell}{1}{-\eta_1^\ell}{ \underline v^{ \ell} }\Big|_{v_2^\ell=1}\\ & =\frac{1+v_1^{ \ell}}{1+v_1^r}-0-\frac{\cancel{1+u_1}}{1+v_1^r}\cdot \frac{1+v_1^\ell}{\cancel{1+u_1}} =0 \ . \eas and claim~\eqref{E:claimvanishingderivative} follows. This concludes the proof of~\eqref{E:stimaetarbasepartial}. \noindent {\sc Step 2}. Proof of~\eqref{E:stimaetarbasepartial1}. \\ Using the same notation as in the first step, we first recall that in the Cartesian coordinates $\eta_1^r$ represent the difference in the first components of the connected states. This means that $\eta_1^r$ is the difference of the $h$-component that is $\SC{2}{-\eta_2^r}{\underline v^r}-\underline u$. 
Denoting by $\hSC{1}{\cdot}{\cdot}$ the first component of the Hugoniot curve and using expression~\eqref{E:h2shock}, we get \[ \eta_1^r=\hSC{2}{-\eta_2^r}{\underline v^r}-v_1^r+(v_1^r- u_1)\\ =\frac{-v_1^r\eta_2^r}{\lambda_{1}(v_1^r,v_2^r-\eta_2^r)-v_1^r} + (v_1^r-v_1^\ell)+(v_1^\ell-u_1) \] Since $\sC=v_1^r-v_1^\ell$ and $\eta_1^\ell=v_1^\ell-u_1$, we obtain \[ |\eta_1^r-\eta_1^\ell -\sC|\stackrel{\eqref{stimadaldalvassos1}}{\leq} \frac{v_1^r|\eta_2^r|}{p^*_{0}}\stackrel{\eqref{E:stimaetarbasepartial}}{\leq}\co(1)(v_1^\ell+\gamma)|v^{ \ell}_{2}-1|^{2} | \eta_{1}^{\ell}+ \sC_{{{}}}||\eta_{1}^{\ell} \sC_{{{}}}|\ .\] as claimed in~\eqref{E:stimaetarbasepartial1}. \noindent {\sc Step 3}. Proof of~\eqref{E:stima11part}. \\ We define the auxiliary functions of the independent variables $v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC{}$ as follows \bas &\Psi_{1,\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC{}\right)=\dot x_{}-\lambda_{1}^{\ell}\equiv\lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right)-\lambda_1\left(v_1^\ell,u_2\right) \ , \\ &\Psi_{1,r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell} ,\sC{})=\dot x_{}-\lambda_{1}^{r}\equiv \lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right)-\lambda_1\left(v_1^\ell-\eta_{1}^{\ell}+\eta_{1}^{r}, u_2\right) \ . \eas and by evaluating each one of them and their derivatives at particular points of $ \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC{}\right)$, we prove~\eqref{E:stima11part}. To begin with, we note that the functional $\Psi_{1,\ell}$ satisfies the following identities \bas \Psi_{1,\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, s,-s\right)=0,\quad \Psi_{1,\ell} \left(v_{1}^{\ell},1, \eta_{1}^{\ell} ,\sC\right)=0,\qquad\forall s\in\mathbb{R} \eas Indeed, if $\sC+\eta_{1}^{\ell} =0$, then $\underline u=\underline v^r$ and $\dot x=\lambda_1^r$ since $\eta_2^\ell=0$ by assumption and using the symmetry~\eqref{E:symmetry1HC}. On the other hand, if $v_{2}^{\ell}=1$, then $u_2=1$ and $\dot x=\lambda_1^\ell=-1$. Having now these two vanishing conditions, we can express $\Psi_{1,\ell}$ as \be{Psi1lApC} \Psi_{1,\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC\right)= \int_{-\eta_{1}^{\ell} }^\sC \frac{\partial\Psi_{1,\ell} }{\partial\sC} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\tau\right)d\tau\;. \ee Next, we show that \bas \ptlls{\sC_{}}{\Psi_{1,\ell} }\left(0,v_{2}^{\ell}, 0 ,0\right)=\frac{v_2^\ell-1}{v_2^\ell}\ . \eas This is a direct computation of the explicit expression of $\Psi_{1,\ell}$. Since $u_2$ is given by \bas u_2=v_2^\ell+\frac{(v_2^\ell-1)\eta_1^\ell}{v_1^\ell-\eta_1^\ell-\lambda_1(v_1^\ell-\eta_1^\ell,v_2^\ell)} \eas from $\underline u=\SC{1}{-\eta_1^\ell}{\underline v^\ell}$ computed using~\eqref{E:p1shock} and it is independent of $\sC_{}$, we get \bas \ptlls{\sC_{}}{\Psi_{1,\ell}}\left(0,v_{2}^{\ell}, 0 ,0\right)&=\ptll{\sC_{}} \left(\lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right)-\lambda_1\left(v_1^\ell,v_2^\ell+u_2\right) \right)\Big|_{\sC_{}=v_1^\ell=0} \stackrel{\eqref{E:s1derh1}}{=}\frac{v_2^\ell-1}{v_2^\ell}\ . \eas Combining now with~\eqref{Psi1lApC}, we get \be{} \Psi_{1,\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC\right)= (\eta_1^\ell+\sC) \frac{\partial\Psi_{1,\ell} }{\partial\sC} \left(0,v_{2}^{\ell}, 0 ,0\right)+\int_{-\eta_{1}^{\ell} }^\sC\int_1^{v_{2}^{\ell}} \left[\frac{\partial^2\Psi_{1,\ell} }{\partial\sC\partial v_2} \left(v_{1}^{\ell},v_{2}, \eta_{1}^{\ell} ,\tau\right)- \frac{\partial^2\Psi_{1,\ell} }{\partial\sC\partial v_2} \left( 0 ,v_{2}, 0,0 \right)\right] dv_2\,d\tau\;. 
\ee Then~\eqref{E:stima11lpart} follows since \bas \frac{\partial^2\Psi_{1,\ell} }{\partial\sC\partial v_2} \left(v_{1}^{\ell},v_{2}, \eta_{1}^{\ell} ,\tau\right)- \frac{\partial^2\Psi_{1,\ell} }{\partial\sC\partial v_2} \left( 0 ,v_{2}, 0,0 \right)=\co(1)(|v_1^\ell |+|\eta_1^\ell|+|\sC|)=\co(1)\delta_0^* \eas in the domain~\eqref{E:domaknfknfgrjfwff1}. We proceed in a similar way with $\Psi_{1,r}$, that is again analytic in~\eqref{E:domaknfknfgrjfwff1} and it vanishes as follows \bas \Psi_{1,r} \left(v_{1}^{\ell},v_{2}^{\ell}, 0,\sC\right)=0,\quad \Psi_{1,r} \left(v_{1}^{\ell},1, \eta_{1}^{\ell} ,\sC\right)=0. \eas Indeed, if $\eta_1^\ell=0$, then $\eta_1^r=\sC$ and $\eta_2^r=0$ since by assumption $\eta_2^\ell=0$. Hence $\dot x=\lambda_1^r$. On the other hand, if $v_2^\ell=1$, then $u_2=1$ and $\dot x=\lambda_1^r=-1$. Thus the above two vanishing conditions hold true. Hence, we have the following expression \be{Psi1rApC} \Psi_{1,r} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC\right)= \int_{0}^{\eta_1^\ell} \frac{\partial\Psi_{1,r} }{\partial\eta_1^\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1} ,\sC\right)d\eta_1\;. \ee In order to find the leading coefficient in~\eqref{E:stima11rpart}, we now compute the derivative \bas \ptlls{\eta_1^\ell}{\Psi_{1,r}} \left(0,v_{2}^{\ell}, 0 ,0\right)=\ptll{\eta_1^\ell}\left(\lambda_1\left(v_1^\ell+\sC_{},v_2^\ell\right)-\lambda_1\left(v_1^\ell-\eta_{1}^{\ell}+\eta_{1}^{r}, u_2\right)\right)\Big|_{\sC_{}=v_1^\ell=\eta_1^\ell= 0}\ . \eas However, here not only $u_2$, but also $\eta_1^r$ depends on $\eta_1^\ell$. Nevertheless, by~\eqref{E:stimaetarbasepartial1}, \bas \ptlls{\eta_1^\ell}{\eta_1^r}\Big|_{\sC_{}= 0}=1 \qquad\Rightarrow\qquad \ptll{\eta_1^\ell}\left(v_1^\ell-\eta_{1}^{\ell}+\eta_{1}^{r}\right)\Big|_{\sC_{}= 0}=0\ . \eas As this factor vanishes, by the explicit expression of $u_2$ and by~\eqref{E:s1derp1} we thus get, \bas \ptlls{\eta_1^\ell}{\Psi_{1,r}} \left(0,v_{2}^{\ell}, 0 ,0\right) \equiv -\ptll{\eta_1^\ell}\left(\lambda_1\left(v_1^\ell-\eta_{1}^{\ell}+\eta_{1}^{r}, u_2\right)\right)\Big|_{\sC_{}=v_1^\ell=\eta_1^\ell= 0} =-\left(0-1\cdot \frac{v_2^\ell-1}{v_2^\ell}\right) =\frac{v_2^\ell-1}{v_2^\ell} \eas since $\ptll{\eta_1^\ell}u_2=\frac{v_2^\ell-1}{v_2^\ell}$ when $v_1^\ell=\eta_1^\ell= 0$. Substituting into~\eqref{Psi1rApC}, we have \be{} \Psi_{1,r} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC\right)= \frac{\eta_1^\ell (v_2^\ell-1)}{v_2^\ell}+ \int_{0}^{\eta_1^\ell} \frac{\partial\Psi_{1,r} }{\partial\eta_1^\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^\ell ,\sC\right)-\ptlls{\eta_1^\ell}{\Psi_{1,r}} \left(0,v_{2}^{\ell}, 0 ,0\right)d\eta_1\;. \ee with \[ \frac{\partial\Psi_{1,r} }{\partial\eta_1^\ell} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^\ell ,\sC\right)-\ptlls{\eta_1^\ell}{\Psi_{1,r}} \left(0,v_{2}^{\ell}, 0 ,0\right)=\co(1) |v_2^\ell-1| (|v_1^\ell |+|\eta_1^\ell|+|\sC|)=\co(1)\delta_0^*|v_2^\ell-1| \;. \] This proves the desired estimate~\eqref{E:stima11rpart}. \noindent {\sc Step 4}. Proof of~\eqref{E:fvgabfafvGroup}. 
\\ Similarly to the previous step, we define the commutator functions of the independent variables $v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC{}$ as follows \bas &\widehat\Psi_{1,1} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,\sC{}\right) :=\eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{}) \;, \\ &\widehat\Psi_{1,2}\left(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell} ,\sC{}\right) :=\eta_{2}^{r}(\lambda_{2}^{r}-\dot x_{} ) -\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{})\ , \eas which are analytic in the domain~\eqref{E:domaknfknfgrjfwff1}. These satisfy the following conditions: \be{ApCvan111} \widehat\Psi_{1,i} \left(v_{1}^{\ell},1, \eta_{1}^{\ell} ,\sC{}\right) =0,\quad \widehat\Psi_{1,i} \left(v_{1}^{\ell},v_{2}^{\ell}, 0 ,\sC{}\right)=0,\quad\widehat\Psi_{1,i} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,0\right)=0\,\quad\widehat\Psi_{1,i} \left(v_{1}^{\ell},v_{2}^{\ell}, s,-s\right)=0 \ee for $s\in\mathbb{R}$ and $i=1,\,2$. Indeed, recall that $\eta_{2}^{\ell}=0$ by assumption, then \begin{itemize} \item[(i)] if $v_{2}^{\ell}=1$, we have $\lambda_{1}^{r}=\dot x_{}=\lambda_{1}^{\ell}=-1$, $\eta_{2}^{r}=0$ and $v_{2}^{r}=v_{2}^{\ell}=u_{2} =1$, \item[(ii)] if $\eta_{1}^{\ell}=0$, then $\underline u=\underline v^\ell$. Hence, $\underline v^r=\SC{1}{\sC_{}}{\underline u}$ so that $ \lambda_{1}^{r}=\dot x_{}$ and $\eta_{2}^{r}=0$, \item[(iii)] if $\sC=0$, then $\underline v^r=\underline v^\ell$ and thus $\eta_{1}^{r}=\eta_{1}^{\ell}$, $\lambda_{1}^{r}=\lambda_{1}^{\ell}$ and $\eta_{2}^{r}=\eta_{2}^{\ell}=0$, \item[(iv)] if $\sC=-\eta_{1}^{\ell}$, then $\underline v^r=\underline u$ and hence $\eta_{1}^{r}=\eta_{2}^{r}=0$ and $\lambda_{1}^{\ell}=\dot x_{}$ by~\eqref{E:symmetry1HC}. \end{itemize} All the above cases imply immediately that $\widehat\Psi_{1,i}=0$ for $i=1,\,2$. For $i=1$, it also holds true that \be{ApCvan112}\widehat\Psi_{1,1} \left(s,v_{2}^{\ell}, \eta_{1}^{\ell} ,-s\right)=0,\qquad \forall s\in\mathbb{R}\;. \ee To check this, assume that $\sC=-v_{1}^{\ell}$ then necessarily $v_1^r=u_1+\eta_1^r=0$ and this yields $\eta_1^r=\eta_1^\ell-v_{1}^{\ell}$, $ \lambda_{1}^{r}=-u_2$, $\dot x_{}=-v_2^\ell$. So \be{E:afqwrfggggggwggrw000} \widehat\Psi_{1,1} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,-v_{1}^{\ell}\right)= \eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{}) =-(\eta_1^\ell-v_{1}^{\ell}) (u_2-v_2^\ell)-\eta_{1}^{\ell} (\lambda_{1}^{\ell}+v_2^\ell) \ . \ee Since $\underline v^\ell= \SC{1}{\eta_1^\ell}{\underline u}$, it holds $\underline u= \SC{1}{-\eta_1^\ell}{\underline v^\ell}$ and by definition~\eqref{E:p1shock}, we get \[ u_2-v_2^\ell=\frac{ v_2^\ell-1}{v_1^\ell-\eta_1^\ell-\lambda_{1}^{\ell}}\eta_1^\ell \ . \] We thus deduce \be{E:afqwrfggggggwggrw} \widehat\Psi_{1,1} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell} ,-v_{1}^{\ell}\right) =-\frac{\eta_{1}^{\ell}}{v_1^\ell-\eta_1^\ell-\lambda_{1}^{\ell}} \left(( v_2^\ell-1)(\eta_1^\ell-v_{1}^{\ell})+(\lambda_{1}^{\ell}+v_2^\ell) (v_1^\ell-\eta_1^\ell-\lambda_{1}^{\ell}) \right)\ . 
\ee Now, by~\eqref{E:symmetry1HC}, the speed $\lambda_{1}^{\ell}$ is \[ \lambda_{1}^{\ell}\equiv -\frac{1}{2}\left[ v_2^\ell-v_1^\ell+\eta_1^\ell+\sqrt{(v_2^\ell-v_1^\ell+\eta_1^\ell)^{2}+4(v_1^\ell-\eta_1^\ell)}\right] \] so that \[ (\lambda_{1}^{\ell}+v_2^\ell) (v_1^\ell-\eta_1^\ell-\lambda_{1}^{\ell})=\frac{1}{4}(v_2^\ell+v_1^\ell-\eta_1^\ell)^2-\frac{1}{4}\left((v_2^\ell-v_1^\ell+\eta_1^\ell)^{2}+4(v_1^\ell-\eta_1^\ell)\right) \equiv (v_2^\ell-1)(v_1^\ell-\eta_1^\ell)\ . \] This immediately yields that the term within the parenthesis in~\eqref{E:afqwrfggggggwggrw} is zero and hence,~\eqref{E:afqwrfggggggwggrw000} holds true. Next, we prove that \be{ApCvan113} \ptlls{v_2^\ell}{\widehat\Psi_{1,1}} \left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell} ,\sC{}\right)\Big|_{v_2^\ell=1}=\ptlls{v_2^\ell}{\widehat\Psi_{1,2}} \left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell} ,\sC{}\right)\Big|_{v_2^\ell=1}=0 \ . \ee To show these, we use~\eqref{E:claimvanishingderivative} and get $ \ptll{v_2^\ell}\eta_{2}^{r} =\eta_{2}^{r}=0$ when $v_2^\ell=1$. Then it follows easily \[ \ptlls{v_2^\ell}{\widehat\Psi_{1,2}} \left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell} ,\sC{}\right)\Big|_{v_2^\ell=1}= \left(\ptlls{v_2^\ell}{\eta_{2}^{r}}\right)\Big|_{v_2^\ell=1}(\lambda_{2}^{r}-\dot x_{} )\Big|_{v_2^\ell=1}+\eta_{2}^{r}\Big|_{v_2^\ell=1}\ptll{v_2^\ell}\left(\lambda_{2}^{r}-\dot x_{} \right)\Big|_{v_2^\ell=1} =0 \] Similarly $\ptll{v_2^\ell}\left(\eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})\right)=(\eta_1^\ell+\sC_{})\ptll{v_2^\ell}\left(\lambda_{1}^{r}-\dot x_{}\right)$ when $v_2^\ell=1$ since by~\eqref{E:stimaetarbasepartial1} we know that $ \ptll{v_2^\ell}\eta_{2}^{r}=0$. Moreover, we evaluate \bas \ptll{v_2^\ell}\left(\dot x_{}-\lambda_{1}^{\ell}\right)\Big|_{v_2^\ell=1}&=\ptlls{v_2^\ell}{\lambda_1}\left(v_1^\ell+\sC_{},v_2^\ell\right)\Big|_{v_2^\ell=1}-\ptlls{v_2^\ell}{\lambda_1}\left(v_1^\ell,u_2\right)\Big|_{v_2^\ell=1} \\ &\stackrel{\eqref{E:s1derp1}}{=}\frac{-1}{1+v_1^\ell+\sC_{}}-\frac{-1}{1+v_1^\ell} \cdot \left(1+\frac{\eta_1^\ell}{1+v_1^\ell-\eta_1^\ell}\right) =\frac{\sC_{}+\eta_1^\ell}{(1+v_1^\ell-\eta_1^\ell)(1+v_1^\ell+\sC_{})}\;, \\ \ptll{v_2^\ell}\left(\lambda_{1}^{r}-\dot x_{}\right) \Big|_{v_2^\ell=1} &\stackrel{\eqref{E:s1derp1}}{=}\frac{-1}{1+v_1^\ell+\sC_{}} \cdot \left(1+\frac{\eta_1^\ell}{1+v_1^\ell-\eta_1^\ell}\right)-\frac{-1}{1+v_1^\ell+\sC_{}} =\frac{ -\eta_1^\ell}{(1+v_1^\ell-\eta_1^\ell)(1+v_1^\ell+\sC_{})}\;. \eas Using these values, we are now able to compute \[ \ptlls{v_2^\ell}{\widehat\Psi_{1,1}} \left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell} ,\sC{}\right)\Big|_{v_2^\ell=1}=(\eta_1^\ell+\sC_{})\cdot \frac{ -\eta_1^\ell}{(1+v_1^\ell-\eta_1^\ell)(1+v_1^\ell+\sC_{})}+\eta_1^\ell\cdot\frac{\sC_{}+\eta_1^\ell}{(1+v_1^\ell-\eta_1^\ell)(1+v_1^\ell+\sC_{})}=0\ . \] Now, estimates~\eqref{E:fvgabfafvGroup} follow immediately combining~\eqref{ApCvan111},~\eqref{ApCvan112} and~\eqref{ApCvan113} with Lemma~\ref{E:rewgwgeq} in Appendix~\ref{App:C}. \end{proof} Now, we extend the previous result to the general case when $\eta_2^\ell$ is not necessarily zero. \begin{corollary} \label{C:stimegenerali} Suppose $\underline v^\ell=\SC{2}{\eta_2^\ell}{\SC{1}{\eta_1^\ell}{\underline u}}$, and $\underline v^r=\SC{2}{\eta_2^r}{\SC{1}{\eta_1^r}{\underline u}}$ and let $\gamma$ be a wave of the first family, i.e. $\underline v^{r}=\mathbf{S}_{1}\big(\sC; \underline v^{\ell}\big)$. 
Then \begin{subequations} \begin{align} \label{E:stimaetarbasefull1} |\eta_{1}^{r}-\eta_{1}^{\ell}-\sC_{{{}}}| &\leq \co(1) ( v^{\ell}_{1}+\gamma)|v^{ \ell}_{2}-1|^{2} | \eta_{1}^{\ell}+ \sC_{{{}}}||\eta_{1}^{\ell} \sC_{{{}}}| +\co(1)|\eta_2^\ell \sC_{{{}}}| \ , \\ \label{E:stimaetarbasefull} |\eta_{2}^{r}-\eta_2^\ell| &\leq \co(1) |v^{ \ell}_{2}-1|^{2} | \eta_{1}^{\ell}+ \sC_{{{}}}||\eta_{1}^{\ell} \sC_{{{}}} |+\co(1) |\eta_2^\ell \sC_{{{}}}| \ . \end{align} \end{subequations} Moreover, the speed $\dot x$ given in~\eqref{C1xdot} satisfies \begin{subequations} \label{E:stima11full} \begin{align} \label{E:stima11lfull} &\left\lvert\dot x_{}-\lambda_{1}^{\ell}-\frac{(v_{2}^{\ell}-1) (\eta_{1}^{\ell}+\sC_{})}{v_2^\ell}\right\rvert\leq \co(1)\delta_0^*\cdot \left\lvert (v_{2}^{\ell}-1) (\eta_{1}^{\ell}+\sC_{})\right\rvert+\co(1)\left\lvert \eta_2\right\rvert \\ \label{E:stima11full2} &\left\lvert\dot x_{}-\lambda_{1}^{r}- \frac{(v_{2}^{\ell}-1) \eta_{1}^{\ell} }{v_2^\ell} \right\rvert\leq \co(1) \delta_0^* \cdot \left\lvert(v_{2}^{\ell}-1) \eta_{1}^{\ell}\right\rvert+\co(1)\left\lvert \eta_2\right\rvert\;, \end{align} \end{subequations} while the commutators satisfy the estimates \begin{subequations} \label{E:fvgabfafvGroupfull} \begin{align} \label{E:fvgabfafvNewfull} &\left\lvert\eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{})\right\rvert\leq \co(1)(\sC+v_{1}^{\ell}) (v_{2}^{\ell}-1)^2 \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert+\co(1)|\eta_2^\ell \sC_{{{}}}| \;, \\ &\label{E:fvgabfafvNewfull2} \left\lvert \eta_{2}^{r}(\lambda_{2}^{r}-\dot x_{})-\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{ })\right\rvert\leq \co(1) (v_{2}^{\ell}-1)^{2} \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert +\co(1)|\eta_2^\ell \sC_{{{}}}| \;, \end{align} \end{subequations} where $\lambda_i^r$, $\lambda_i^\ell$ correspond to the speeds of the $i$-family that are given in~\eqref{C1lambdaell}--\eqref{C1lambdar}. \end{corollary} \begin{proof} The estimates can be derived using the following general argument: Given a smooth function $\Psi(\underline x, \eta_2^\ell)$, one can write \[ \Psi(\underline x, \eta_2^\ell)=\Psi(\underline x, 0)+\int_0^{\eta_2^\ell}\ptlls{\eta_2^\ell}{\Psi}(\underline x,\xi)\,d\xi \] and then use Lemma~\ref{L:sstimaetarbase} to control the term $\Psi(\underline x, 0)$ and an estimate on $\ptlls{\eta_2^\ell}{\Psi}$ to conclude \[ \left|\Psi(\underline x, \eta_2^\ell)- \Psi(\underline x, 0)\right|\leq \max_{0\leq \xi\,\sgn\eta_2^\ell\leq |\eta_2^\ell|}\ptlls{\eta_2^\ell}{\Psi}(\underline x,\xi) \cdot \left\lvert\eta_2^\ell\right\rvert \ . \] We apply this argument to prove~\eqref{E:stimaetarbasefull1}. 
Set \[\Psi(v_{1}^{\ell}, v_{2}^{\ell} , \eta_{1}^{\ell} , \sC{},\eta_2^\ell):=\eta_{1}^{r}(v_{1}^{\ell}, v_{2}^{\ell} , \eta_{1}^{\ell} , \sC{},\eta_2^\ell)-\eta_{1}^{\ell}-\sC_{{{}}},\] and recall from Lemma~\ref{L:sstimaetarbase} that \[|\Psi(v_{1}^{\ell}, v_{2}^{\ell} , \eta_{1}^{\ell} , \sC{},0)|\le \co(1)( v^{\ell}_{1}+\gamma) (v_{2}^{\ell}-1)^{2} \left\lvert(\eta_{1}^{\ell}+\sC_{ }) \eta_{1}^{\ell}\sC_{ } \right\rvert \ . \] We need to prove that \[\left\lvert\ptlls{\eta_2^\ell} {\eta_{1}^{r}}(v_{1}^{\ell}, v_{2}^{\ell} , \eta_{1}^{\ell} , \sC{}, \xi)\right\rvert\leq \co(1)|\sC_{}|\ \qquad\text{for $v_{1}^{\ell} , \ v_{1}^{\ell} -\eta_{1}^{\ell} ,\ v_{1}^{\ell} +\sC{}\in [0,\delta^*_0] $ and $v_{2}^{\ell},v_{2}^{\ell}+\xi\in [p^*_{0}, p^*_1]$} .\] When $\sC_{}=0$, of course, $\eta_{1}^{r}=\eta_{1}^{\ell}$ for all $\eta_2^\ell$, and thus $\ptll{\eta_2^\ell} \eta_{1}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell} , 0,\xi)=0$. Therefore, there exists $\tilde{\sC}$, satisfying $\tilde{\sC}\in(0,\sC)$ for $\sC>0$ or $\tilde{\sC}\in(\sC,0)$ for $\sC<0$, so that \[ \ptlls{\eta_2^\ell}{ \Psi}=\ptlls{\eta_2^\ell}{ \eta_{1}^{r}}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell}, \sC{},\ \xi)=\cancel{\ptlls{\eta_2^\ell} {\eta_{1}^{r}}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell}, 0, \xi)}+ \frac{\partial^2 {\eta_{1}^{r}} }{\partial\eta_2^\ell\partial\sC} (v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\tilde{\sC}, \xi)\cdot \gamma \ . \] We conclude the proof of~\eqref{E:stimaetarbasefull1} since $\partial^2_{\eta_2^\ell\sC_{}} \eta_{1}^{r}$ is bounded on the given compact domain.
The proofs of~\eqref{E:stimaetarbasefull} and of~\eqref{E:fvgabfafvGroupfull} are completely analogous: for $\sC_{}=0$, the following functions vanish because the left and right states coincide: \[ \eta_{2}^{r}-\eta_2^\ell\ , \qquad \eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{})\ , \qquad \eta_{2}^{r}( \lambda_{2}^{r}-\dot x_{})-\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{}) \ . \] In a similar manner, we also treat estimate~\eqref{E:stima11full} since the derivatives $\ptll{\eta_2^\ell}{(\dot x_{}-\lambda_{1}^{\ell})}$, $\ptll{\eta_2^\ell}{(\dot x_{}-\lambda_{1}^{r})}$ are continuous and hence bounded on the given compact domain. \end{proof}
\subsection{Case of a $2$--wave}\label{Ss:estimates2}
Let now $\sC$ be a wave of the second family joining the states $\underline v^{\ell}$ and $\underline v^{r}$, that is, \[\underline v^{r}=\mathbf{S}_{2}\big(\sC;\underline v^{\ell}\big)\ ,\] and recall that the states $\underline v^{\ell}$ and $\underline v^{r}$ are related to a third state $\underline u$ via \[\underline v^\ell=\SC{2}{\eta_2^\ell}{\SC{1}{\eta_1^\ell}{\underline u}} \qquad \underline v^r=\SC{2}{\eta_2^r}{\SC{1}{\eta_1^r}{\underline u}}\ ,\] where the expression of the Hugoniot curve $\SC{1}{\cdot}{\cdot}$ is recalled in~\eqref{E:p1shock}, while that of the Hugoniot curve $\SC{2}{\cdot}{\cdot}$ is recalled in~\eqref{E:h2shock}. We will also need the speed $\dot x$ connecting the states $\underline v^\ell$ and $\underline v^r$, that is \be{C2xdot}\dot x:=\lambda_2\left(v_1^\ell,v_2^\ell+\sC_{}\right) \ee and the corresponding ones connecting $\underline u$ with $\underline v^\ell$ and $\underline v^r$ are: \be{C2lambdaell}\lambda_1^\ell:=\lambda_1\left(u_1+\eta_1^\ell,u_2\right),\qquad \lambda_2^\ell:=\lambda_2(u_1+\eta_1^\ell,v_2^\ell)\ee \be{C2lambdar}\lambda_1^r:=\lambda_1\left(u_1+\eta_{1}^{r}, u_2\right),\qquad \lambda_2^r=\lambda_2(u_1+\eta_1^r, v_2^r )\,,\ee respectively.
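In analogy with the $1$-wave case, it is useful to keep in mind the degenerate configuration $v_1^{\ell}=0$: since by~\eqref{E:h2shock} the $2$-Hugoniot curve through a state with $h=0$ is the vertical line $h=0$, in this case one finds exactly
\[
\eta_1^{r}=\eta_1^{\ell} \qquad\text{and}\qquad \eta_2^{r}=\eta_2^{\ell}+\sC\,.
\]
Accordingly, all the estimates of Lemma~\ref{L:stimegenerali2} below carry the weight $(v_1^{\ell})^{2}$, which measures the deviation from this configuration.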
In the rest of the section, we work as in the previous subsection and derive auxiliary estimates on the wave strengths $\eta_1^r$, $\eta_1^\ell$, $\eta_2^r$, $\eta_2^\ell$, on the waves velocities and on a commutator among waves and velocities. These are established in the next lemma. \begin{lemma} \label{L:stimegenerali2} \begin{subequations}\label{E:8.49improved2Nall} Let $\underline v^{\ell}$, $\underline v^{r}$, $\underline u$, $\underline \omega^{\ell}$, $\underline \omega^{r}$, $\gamma$, $\eta^{\ell}$ and $\eta^r$ be as denoted above and $\gamma$ be a wave of the second family, i.e. $\underline v^{r}=\mathbf{S}_{2}\big(\sC; \underline v^{\ell}\big)$. Then \begin{align} \label{E:primsfwewgrqgerqpall} |\eta_{1}^{r}-\eta_{1}^\ell | \leq & \co(1) \left| \left(\eta_{2}^{\ell}+\sC_{{}}\right)\eta_{2}^{\ell}\sC_{{}} \right| (v_{1}^{\ell})^{2} \ , \\ |\eta_{2}^{r}-\eta_{2}^{\ell}-\sC_{{}}| \leq &\co(1) \left| \ \left(v_2^\ell-\eta_2^\ell-1\right) \right| \left| \left(\eta_{2}^{\ell}+\sC_{{}}\right) \eta_{2}^{\ell}\sC_{{}} \right| (v_{1}^{\ell})^{2} \ . \label{E:primsfwewgrqgerqpall2} \end{align} \end{subequations} Moreover, the speed $\dot x$ given in~\eqref{C2xdot} satisfies \begin{subequations} \label{E:sstimaetarbase2all} \ba \label{E:8.50all} &\left|\dot x_{{}}-\lambda_{2}^{\ell} + \frac{ v_{1}^{\ell}\cdot(\eta_{2}^{\ell}+\sC_{{}})}{(v_{2}^{\ell}-\eta_{2}^{\ell})(v_{2}^{\ell}+\sC_{{}}) } \right| \leq \co(1) |\eta_{2}^{\ell}+\sC_{{}}|(v_{1}^{\ell})^{2} \ , \\ \label{E:8.51all} &\left|\dot x_{{}}-\lambda_{2}^{r} + \frac{ v_{1}^{\ell}\cdot\eta_{2}^{\ell} }{(v_{2}^{\ell}-\eta_{2}^{\ell})(v_{2}^{\ell}+\sC_{{}}) } \right| \leq \co(1 ) |\eta_{2}^{\ell}| (v_{1}^{\ell})^{2} \ , \ea \end{subequations} while the following estimate of the commutators holds true \ba \label{E:sqFNDKAVDUAall} &|\eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{{}})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{{}})|+ |\eta_{2}^{r}(\lambda_{2}^{r}-\dot x_{{}})-\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{{}})| \leq \co(1) \left| \left(\eta_{2}^{\ell}+\sC_{{}}\right)\eta_{2}^{\ell}\sC_{{}} \right| (v_{1}^{\ell})^{2} \ , \ea where $\lambda_i^r$, $\lambda_i^\ell$ correspond to the speeds of the $i$-family that are given in~\eqref{C2lambdaell}--\eqref{C2lambdar}. \end{lemma} \begin{proof} We now prove separately each estimate in the following steps.\\ {\sc Step 1}. Proof of~\eqref{E:primsfwewgrqgerqpall}. \\ We consider $ \eta_{1}^{r}$ and $ \eta_{2}^{r}$ as smooth functions of the independent variables $\underline v^\ell=(v_{1}^{\ell},v_{2}^{\ell}), \eta_{1}^{\ell},\eta_{2}^{\ell},\sC{}$ given implicitly via the relation \be{implicitdefinitionetar} \SC{2}{-\eta_2^r}{\SC{2}{\gamma}{\underline v^{ \ell} }}\equiv\SC{1}{\eta_1^r}{\SC{1}{-\eta_1^\ell}{\SC{2}{-\eta_2^\ell}{ \underline v^\ell}}}\ , \ee since $\underline w=\SC{i}{t}{\underline z}$ is equivalent to $\underline z=\SC{i}{-t}{\underline w}$. We now study the functions \bas & \Psi_{1}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\eta_{2}^{\ell}, \sC{}) :=\eta_{1}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\eta_{2}^{\ell}, \sC{}) -\eta_1^\ell\ , & \Psi_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\eta_{2}^{\ell}, \sC{}) := \eta_{2}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}) -\left(\eta_2^\ell+\sC\right)\ , \eas that are analytic in the larger domain \ba\label{E:domaknfknfgrjfwff} v_{1}^{\ell} \ , \ v_{1}^{\ell} -\eta_1^\ell\in[-\delta^*_0,\delta^*_0]\qquad v_{2}^{\ell}\ ,\ v_{2}^{\ell}-\eta_{2}^{\ell}\ , \ v_{2}^{\ell}+\sC\in[p^*_{0}, p^*_1] \ . 
\ea First, we observe that $\Psi_i^r$, for $i=1,\,2$, satisfy the following vanishing conditions:
\begin{itemize}
\item[(i)] $ \Psi_{i}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\eta_{2}^{\ell}, 0{})=0$. Indeed, if $ \sC=0 $, we have $\underline v^\ell\equiv\underline v^r$ and therefore $\eta_{1}^{r}\equiv\eta_{1}^{\ell}$ and $\eta_{2}^{r}\equiv\eta_{2}^{\ell}=\eta_{2}^{\ell}+\sC$.
\item[(ii)] $ \Psi_{i}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},0, \sC{})=0$. This is true because if $ \eta_{2}^{\ell}=0 $, then it holds $\underline v^{r}=\SC{2}{\sC}{ \SC{1}{\eta_1^\ell}{ \underline u}}{}$. This implies $\eta_1^r=\eta_1^\ell$ and $\eta_2^r=\sC$.
\item[(iii)] $ \Psi_{i}^{r}(0,v_{2}^{\ell} ,\eta_{1}^{\ell},\eta_{2}^{\ell}, \sC{})=0$. In this case we use that the $2$-Hugoniot curve is vertical at $h=0$. So if $ v_{1}^{\ell}=0 $, then $\eta_{2}^{r}\equiv\eta_{2}^{\ell}+\sC$ and $\eta_{1}^{r}\equiv\eta_{1}^{\ell}$.
\item[(iv)] $ \Psi_{i}^{r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_{1}^{\ell},\tau, -\tau)=0$ for $\tau\in\mathbb{R}$. Since $\underline v^\ell=\SC{2}{\eta_2^\ell}{ \SC{1}{\eta_1^\ell}{ \underline u}}{}$, if $ \sC= -\eta_{2}^{\ell} $ then $$\underline v^r=\SC{2}{-\eta_2^\ell}{\SC{2}{\eta_2^\ell}{ \SC{1}{\eta_1^\ell}{ \underline u}}{}}\equiv \SC{1}{\eta_1^\ell}{ \underline u}$$ which implies $\eta_{1}^{r}=\eta_{1}^{\ell}$ and $\eta_{2}^{r}\equiv \sC+\eta_{2}^{\ell}\equiv0 $.
\end{itemize}
In all the above cases, therefore, $\Psi_i^r=0$. Next, we show that \be{E:claimvanishingderivative2} \ptlls{ {v_{1}^{\ell}}} {\Psi_{1}^{r}}\left(v_{1}^{\ell},v_2^\ell, \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}\right)\Big|_{v_{1}^{\ell}=0}=0\ . \ee To begin with, we write explicitly the first component of~\eqref{implicitdefinitionetar}, which is \[{\hSC{2}{-\eta_2^r}{\hSC{2}{\sC}{\underline v^\ell},v_2^\ell-\sC }} = \hSC{2}{-\eta_2^\ell}{ \underline v^\ell}-\eta_1^\ell+\eta_1^r\ , \] where $\hSC{2}{\cdot}{\cdot}$ denotes the first component of the Hugoniot curve $\SC{2}{\cdot}{\cdot}$ given in~\eqref{E:h2shock} and we recall that we use Cartesian coordinates. Differentiating the above identity with respect to $v_1^\ell$, we obtain \bas \ptlls{ {v_{1}^{\ell}}}{\eta_{1}^{r}}\left(v_1^\ell, v_2^\ell , \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}\right) =-\pthS{v_{1}^{\ell}}{2}{-\eta_2^\ell}{\underline v^\ell} &-\pthS{\eta_2^r}{2}{-\eta_2^r}{\underline v^r } \ptlls{v_1^\ell}{\eta_2^r}\left(\underline v^\ell ,\eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}\right) \\ & +\pthS{v_1^r} {1}{-\eta_2^r}{\underline v^r } \pthS{v_1^\ell} {1}{\sC}{ \underline v^{ \ell} }\ . \eas Here, if $v_1^\ell=0$, then $v_1^r=0$, $\eta_1^r=\eta_1^\ell=-u_1$, $\eta_2^r=\gamma+\eta_2^\ell$ and $v_2^r=v_2^\ell+\sC$. Therefore, by~\eqref{contiderivataS2}, we compute \bas \ptlls{ {v_{1}^{\ell}}}{\eta_{1}^{r}}\left( v_1^\ell,v_2^\ell , \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}\right)\Big|_{v_1^\ell=0} &=-\frac{v_2^\ell}{v_2^\ell-\eta_2^\ell}+ 0\cdot \ptlls{v_1^\ell}{\eta_2^r}\left(\underline v^\ell , \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC{}\right) + \frac{\cancel{v_2^r}}{v_2^r-\eta_2^r}\cdot \frac{v_2^\ell}{\cancel{v_2^\ell+\gamma}} \\ &=-\frac{v_2^\ell}{v_2^\ell-\eta_2^\ell}+\frac{v_2^\ell}{v_2^r-\eta_2^r}=0\ , \eas and~\eqref{E:claimvanishingderivative2} follows immediately.
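For the reader's convenience, the final cancellation can be spelled out: at $v_1^\ell=0$ one has $v_2^r=v_2^\ell+\gamma$ and $\eta_2^r=\gamma+\eta_2^\ell$, hence
\[
v_2^r-\eta_2^r=\left(v_2^\ell+\gamma\right)-\left(\gamma+\eta_2^\ell\right)=v_2^\ell-\eta_2^\ell\ ,
\]
so the two fractions in the last line coincide and their difference vanishes.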
In view of the vanishing conditions (i)-(iv) and~\eqref{E:claimvanishingderivative2}, estimate~\eqref{E:primsfwewgrqgerqpall} is established using Lemma~\ref{E:rewgwgeq} in Appendix~\ref{App:C} and repeating an argument similar to the one in Step 1 of the proof of Lemma~\ref{L:sstimaetarbase}.
\noindent {\sc Step 2}. Proof of~\eqref{E:primsfwewgrqgerqpall2}. \\ Here, we write explicitly the second component of the implicit relation~\eqref{implicitdefinitionetar} using, as before, $\pSC{1}{\cdot}{\cdot}$ for the second component of the Hugoniot curve $\SC{1}{\cdot}{\cdot}$ given in~\eqref{E:p1shock} and $\hSC{2}{\cdot}{\cdot}$ for the first component of the Hugoniot curve $\SC{2}{\cdot}{\cdot}$ given in~\eqref{E:h2shock}. Using Cartesian coordinates, we have \be{impllicitsecond} v_2^\ell+\gamma-\eta_2^r= {\pSC{1}{\eta_1^r}{\underline u}} \qquad\text{where $\underline u=\SC{1}{-\eta_1^\ell}{\SC{2}{-\eta_2^\ell}{ \underline v^\ell}}$.} \ee and also $ v_2^\ell=\pSC{1}{\eta_1^\ell}{\underline u}+\eta_2^\ell$. Hence we obtain \[ \eta_2^\ell+\gamma-\eta_2^r = \pSC{1}{\eta_1^r}{\underline u}-\pSC{1}{\eta_1^\ell}{\underline u}\ . \] Since, by~\eqref{esotedegaaggargrawgragar}, the derivative of $\pSC{1}{\eta}{}$ with respect to the strength $\eta$ is bounded, we apply the mean value theorem to get immediately \bas \left|\eta_2^\ell+\gamma-\eta_2^r\right|\stackrel{\eqref{esotedegaaggargrawgragar}}{\leq} \frac{1+\delta_0^*}{(p_{0}^*)^2(1-\delta_p^*-\delta_0^*)} \left| u_2-1\right| \left|(\eta_1^r-\eta_1^\ell)\right| =\co(1)\left| v_2^\ell-\eta_2^\ell-1\right| \left| \eta_1^r-\eta_1^\ell\right| \ , \eas where we used that $|u_2-1|\leq (1+\delta_0^*/p_0^*) |v_2^\ell-\eta_2^\ell-1|$ by definition~\eqref{E:p1shock} of $\SC{1}{\cdot}{\cdot}$ because $\underline u$ is as in \eqref{impllicitsecond}. We also note that from the calculations at the end of Appendix~\ref{AppB}, we know that $1-\delta_p^*-\delta_0^*>\frac{7}{9}$ if $\delta_0^*<\frac{1}{9} $ and $\delta_p^*<\frac{1}{9} $. This is to assure the reader that the coefficient in~\eqref{esotedegaaggargrawgragar} remains uniformly bounded. Substituting now~\eqref{E:primsfwewgrqgerqpall} into this inequality, estimate~\eqref{E:primsfwewgrqgerqpall2} is proven.
\noindent {\sc Step 3}. Proof of~\eqref{E:sstimaetarbase2all}. \\ Here, we proceed as in Lemma~\ref{L:sstimaetarbase} and define the auxiliary functions of the independent variables $v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell}, \eta_{2}^{\ell} ,\sC{}$ as follows: \bas &\Psi_{2,\ell} \left(v_{1}^{\ell},v_{2}^{\ell},\eta_1^\ell, \eta_{2}^{\ell} ,\sC{}\right)=\dot x_{}-\lambda_{2}^{\ell}\equiv \lambda_2\left(v_1^\ell,v_2^\ell+\sC_{}\right)-\lambda_2\left(\omega_1^\ell,v_2^\ell\right) \ , \\ &\Psi_{2,r}(v_{1}^{\ell},v_{2}^{\ell} ,\eta_1^\ell, \eta_{2}^{\ell} ,\sC{})=\dot x_{}-\lambda_{2}^{r}\equiv \lambda_2\left(v_1^\ell,v_2^\ell+\sC_{}\right)-\lambda_2\left(u_1+\eta_{1}^{r}, v_2^\ell+\sC\right) \ , \eas where $\omega_1^\ell=u_1+\eta_1^\ell $, $v_2^r=v_2^\ell+\sC$ and \ba\label{E:expressionu1} u_1=\hSC{}{-\eta_2^\ell}{\SC{1}{-\eta_1^\ell}{\underline v^\ell}}\ , && \omega_1^\ell&=\hSC{}{-\eta_2^\ell}{\underline v^\ell} \ , && v_1^r=\hSC{}{\sC}{\underline v^\ell}\ . \ea It is useful to recall the implicit relations~\eqref{implicitdefinitionetar} and the Hugoniot curve $\SC{1}{\cdot}{\cdot}$ given in~\eqref{E:p1shock}, while $\SC{2}{\cdot}{\cdot}$ is given in~\eqref{E:h2shock}.\\
\noindent {\sc Step 3A}.
We note that $\Psi_{2,\ell}$ is analytic in the domain~\eqref{E:domaknfknfgrjfwff} and it satisfies:
\begin{itemize}
\item $\Psi_{2,\ell} \left(0,v_{2}^{\ell},\eta_1^\ell, \eta_{2}^{\ell} ,\sC{}\right)=0$. If $ v_{1}^{\ell}=0 $, then $\omega_1^\ell=0$, because the $2$-Hugoniot curve is vertical at $h=0$, and hence $\dot x_{}=\lambda_{2}^{\ell}=0$;
\item $\Psi_{2,\ell} \left(v_{1}^{\ell},v_{2}^{\ell},\eta_1^\ell, \tau ,-\tau\right)=0$, for all $\tau\in\mathbb{R}$. Indeed if $ \sC= -\eta_{2}^{\ell} $, then $\underline v^r=\underline \omega^\ell$ and the identity follows using the symmetry condition~\eqref{E:symmetry2HC}.
\end{itemize}
Now, from relation~\eqref{E:h2shock}, we write the first component $w_1^\ell$ $$w_1^\ell=v_1^\ell-\frac{v_1^\ell \eta_2^\ell}{\lambda_1(v_1^\ell,v_2^\ell-\eta_2^\ell)-v_1^\ell}$$ and compute its derivative \be{E:aboveexpressionw1ell} \ptlls{v_1^\ell}{w_1^\ell}\Big|_{v_1^\ell=0}=\frac{v_2^\ell}{v_2^\ell-\eta_2^\ell}\;. \ee Moreover, we calculate the derivative of the speeds for $v_1^\ell=0$ to find \[ \ptll{v_1^\ell}\dot x\Big|_{v_1^\ell=0}= \ptll{v_1^\ell} \left(\lambda_2\left(v_1^\ell,v_2^\ell+\sC_{}\right) \right)\Big|_{v_1^\ell=0}\stackrel{\eqref{AppB.3s2h}}{=}\frac{1}{v_2^\ell+\sC}\;, \] and \[ \ptll{v_1^\ell} \lambda_{2}^{\ell}\Big|_{v_1^\ell=0}=\ptll{w_1^\ell} \lambda_2\left(w_1^\ell,v_2^\ell\right) \cdot \ptlls{v_1^\ell}{w_1^\ell} \Big|_{v_1^\ell=0}\stackrel{\eqref{E:aboveexpressionw1ell}}{=}\frac{1}{v_2^\ell-\eta_2^\ell}\;. \] Then \be{E:vrkjfenjkafnjkavnjkvnkfv} \ptlls{v_1^\ell}{\Psi_{2,\ell}}\left(v_1^\ell,v_{2}^{\ell},\eta_{1}^{\ell} ,\eta_{2}^{\ell} ,\sC{}\right)\Big|_{v_1^\ell=0}=\frac{1}{v_2^\ell+\sC} - \frac{1}{v_2^{\ell}-\eta_2^\ell} = -\frac{ \eta_{2}^{\ell}+\sC_{{}} }{\left(v_2^\ell+\sC\right)\left(v_2^{\ell}-\eta_2^\ell\right)}\ . \ee This value together with the two vanishing conditions of $\Psi_{2,\ell}$ yields estimate~\eqref{E:8.50all} immediately.
\noindent {\sc Step 3B}. Next, we check that $\Psi_{2, r}$ satisfies the following conditions:
\begin{itemize}
\item $\Psi_{2,r} \left(0,v_{2}^{\ell},\eta_1^\ell, \eta_{2}^{\ell} ,\sC{}\right)=0$. Here, if $v_1^\ell=0$ then we have $v_1^r=u_1+\eta_1^r=0$, since $\SC{2}{\cdot}{0,p}{}$ is vertical for $p>0$, and thus $\dot x_{}=\lambda_{2}^{r}=0$;
\item $\Psi_{2,r} \left(v_1^\ell,v_{2}^{\ell},\eta_1^\ell, 0 ,\sC{}\right)=0$. For $\eta_2^\ell=0$, we observe that $ \underline v^r\equiv\SC{2}{\eta_2^r}{\SC{1}{\eta_1^r}{\underline u}}\equiv\SC{2}{\sC}{\SC{1}{\eta_1^\ell}{\underline u}}$, so $\eta_1^r=\eta_1^\ell$ and hence $v_1^\ell=w_1^\ell=u_1+\eta_1^\ell=u_1+\eta_1^r$.
\end{itemize}
as well as the property \ba\label{E:rvonveianriqbrbqbqbqb} \ptlls{v_1^\ell}{\Psi_{2,r} }\left(v_1^\ell,v_{2}^{\ell},\eta_{1}^{\ell} ,\eta_{2}^{\ell} ,\sC{}\right)\Big|_{v_1^\ell=0} =- \frac{ \eta_{2}^{\ell} }{(v_{2}^{\ell}-\eta_{2}^{\ell})(v_{2}^{\ell}+\sC_{{}}) }\ , \ea in the domain~\eqref{E:domaknfknfgrjfwff}. Indeed, we apply the chain rule in the expression of $\Psi_{2, r}$ and get \bas \ptlls{v_1^\ell}{\Psi_{2,r}}\left(v_1^\ell,v_{2}^{\ell},\eta_{1}^{\ell} ,\eta_{2}^{\ell} ,\sC{}\right)\Big|_{v_1^\ell=0}&=\ptll{v_1^\ell} \left(\lambda_2\left(v_1^\ell,v_2^\ell+\sC_{}\right)-\lambda_2\left(u_1+\eta_{1}^{r}, v_2^\ell+\sC\right) \right)\Big|_{v_1^\ell=0} \\ &\stackrel{\eqref{AppB.3s2h}}{=}\frac{1}{v_2^\ell+\sC}-\frac{1}{v_2^\ell+\sC}\cdot\ptll{v_1^\ell}{\left(u_1+\eta_{1}^{r}\right)} \Big|_{v_1^\ell=0} \ .
\eas Now, using that $\ptll{v_1^\ell}{\eta_{1}^{r}}=0$ at $v_1^\ell=0$ by~\eqref{E:primsfwewgrqgerqpall} and the identity $u_1=w_1^\ell-\eta_1^\ell$, we compute \[\ptll{v_1^\ell}{\left(u_1+\eta_{1}^{r}\right)} \Big|_{v_1^\ell=0}=\ptll{v_1^\ell}{u_1} \Big|_{v_1^\ell=0}=\ptlls{v_1^\ell}{w_1^\ell}\Big|_{v_1^\ell=0}=\frac{v_2^\ell}{v_2^\ell-\eta_2^\ell}\;, \] which follows from~\eqref{E:aboveexpressionw1ell}. Substituting this above, we get~\eqref{E:rvonveianriqbrbqbqbqb}. As before,~\eqref{E:rvonveianriqbrbqbqbqb} and the vanishing conditions of ${\Psi_{2,r}}$ imply~\eqref{E:8.51all}.
\noindent {\sc Step 4}. Proof of~\eqref{E:sqFNDKAVDUAall}. \\ We define in the domain~\eqref{E:domaknfknfgrjfwff} the auxiliary functions $\widehat{\Psi}_{2,1}$ and $\widehat{\Psi}_{2,2}$ as follows: \bes \widehat{\Psi}_{2,1}(v_{1}^{\ell},v_{2}^{\ell}, {}\eta_{1}^{\ell},\eta_{2}^{\ell},\sC_{\alpha}) = \eta_{1}^{r}( \lambda_{1}^{r}-\dot x_{\alpha})-\eta_{1}^{\ell}(\lambda_{1}^{\ell}-\dot x_{\alpha}) \ , \ees \bes \widehat{\Psi}_{2,2}(v_{1}^{\ell},v_{2}^{\ell},{}\eta_{1}^{\ell},\eta_{2}^{\ell},\sC_{\alpha})=\eta_{2}^{r}(\lambda_{2}^{r}-\dot x_{\alpha})-\eta_{2}^{\ell}(\lambda_{2}^{\ell}-\dot x_{\alpha}) \ . \ees We observe that $\widehat{\Psi}_{2,i}$, for $i=1,\,2$, is analytic and vanishes in the following cases:
\begin{itemize}
\item[(i)] $\widehat{\Psi}_{2,i}(v_{1}^{\ell},v_{2}^{\ell},{}\eta_{1}^{\ell},\eta_{2}^{\ell},0)=0$. We can see this as follows: if $\sC=0$, then $\underline v^\ell\equiv \underline v^r$ and thus, by definition, $\eta_{1}^{r}=\eta_{1}^{\ell}$, $\eta_{2}^{r}=\eta_{2}^{\ell}$ and $\lambda_{1}^{r}=\lambda_{1}^{\ell}$, $\lambda_{2}^{r}=\lambda_{2}^{\ell}$;
\item[(ii)] $\widehat{\Psi}_{2,i}(v_{1}^{\ell},v_{2}^{\ell},{}\eta_{1}^{\ell},0,\sC_{\alpha})=0$. Here, if $\eta_2^\ell=0$, then $\underline v^{r}=\SC{2}{\sC}{ \SC{1}{\eta_1^\ell}{ \underline u}}{}$ since $\sC$ is a $2$-wave and therefore $\eta_1^r=\eta_1^\ell$, $\eta_2^r=\sC$ and $\lambda_1^r=\lambda_1^\ell$, $\lambda_2^r= \dot{x}_\alpha$;
\item[(iii)] $\widehat{\Psi}_{2,i}(0,v_{2}^{\ell},{}\eta_{1}^{\ell},\eta_{2}^{\ell},\sC_{\alpha})=0$. In this case, if $v_{1}^{\ell}=0$, then $w_1^\ell=v_1^r=0$ and hence $\lambda_1^r=\lambda_1^\ell\equiv u_2$ and $\lambda_2^r=\dot{x}_\alpha=\lambda_2^\ell\equiv0$;
\item[(iv)] $\widehat{\Psi}_{2,i}(v_{1}^{\ell},v_{2}^{\ell},{}\eta_{1}^{\ell},\tau,-\tau)=0$ for $\tau\in\mathbb{R}$. In the last case, if $\eta_2^\ell=-\sC$, then $\underline v^r=\SC{1}{\eta_1^\ell}{\underline u}$ and therefore we get $\eta_1^r=\eta_1^\ell$, $\lambda_{1}^{r}=\lambda_{1}^{\ell}$ and $\eta_2^r=0$, $\lambda_2^r=\dot{x}_\alpha$.
\end{itemize}
We can now conclude~\eqref{E:sqFNDKAVDUAall} in the same way as the previous estimates, provided we show \bas & \ptlls{ {v_{1}^{\ell}}}{\widehat{\Psi}_{2,1}} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC_{}\right)\Big|_{v_{1}^{\ell}=0}=\ptlls{ {v_{1}^{\ell}}}{\widehat{\Psi}_{2,2}} \left(v_{1}^{\ell},v_{2}^{\ell}, \eta_{1}^{\ell}, \eta_{2}^{\ell},\sC_{}\right)\Big|_{v_{1}^{\ell}=0}= 0 \ .
\eas To justify this identity, we differentiate $\widehat{\Psi}_{2,2}$ and use the values~\eqref{E:vrkjfenjkafnjkavnjkvnkfv}--\eqref{E:rvonveianriqbrbqbqbqb} for $v_{1}^{\ell}=0$, to obtain \bas \ptlls{ {v_{1}^{\ell}}}{\widehat{\Psi}_{2,2}} \left(v_{1}^{\ell}, v_{2}^{\ell},\eta_{1}^{\ell},\eta_{2}^{\ell},\sC{}\right)\Big|_{v_{1}^{\ell}=0}&= \left[\eta_{2}^{r}\ptl{ {v_{1}^{\ell}}}{}( \lambda_{2}^{r}-\dot x_{\alpha})-\eta_{2}^{\ell}\ptl{ {v_{1}^{\ell}}}{}(\lambda_{2}^{\ell}-\dot x_{\alpha})\right]_{v_{1}^{\ell}=0} +\frac{\partial\eta_{2}^{r}}{ \partial v_{1}^{\ell}} \Big|_{v_{1}^{\ell}=0}\cdot ( \lambda_{2}^{r}-\dot x_{\alpha}) \Big|_{v_{1}^{\ell}=0} \\ &= \eta_2^r\, \Big|_{v_{1}^{\ell}=0}\cdot \left(-\frac{ \eta_{2}^{\ell} }{ (v_{2}^{\ell}-\eta_{2}^{\ell})(v_{2}^{\ell}+\sC) }\right)\, -\eta_2^\ell\cdot \left(-\frac{ \eta_{2}^{\ell} +\sC{}}{(v_2^\ell-\eta_2^\ell)(v_2^\ell+\sC)}\right)+0=0\;, \eas since $\eta_2^r=\eta_2^\ell+\sC$ and $\lambda_2^\ell=\lambda_2^r=\dot{x}_\alpha=0$ when $v_1^\ell=0$, and from~\eqref{E:primsfwewgrqgerqpall2} we know that the derivative $\partial_{v_1^\ell}\eta_{2}^{r}$ vanishes at $v_1^\ell=0$. On the other hand, the derivative of $\widehat{\Psi}_{2,1}$ is \bas \ptlls{ {v_{1}^{\ell}}}{\widehat{\Psi}_{2,1}} \left(v_{1}^{\ell}, v_{2}^{\ell},\eta_{1}^{\ell},\eta_{2}^{\ell},\sC{}\right)\Big|_{v_{1}^{\ell}=0}&= \left[\eta_{1}^{r}\ptl{ {v_{1}^{\ell}}}{}( \lambda_{1}^{r}-\dot x_{\alpha})-\eta_{1}^{\ell}\ptl{ {v_{1}^{\ell}}}{}(\lambda_{1}^{\ell}-\dot x_{\alpha})\right]_{v_{1}^{\ell}=0} +0=\eta_{1}^{\ell}\left[\ptl{ {v_{1}^{\ell}}}{}( \lambda_{1}^{r}- \lambda_{1}^{\ell}) \right]_{v_{1}^{\ell}=0} \eas since $\eta_1^\ell=\eta_1^r$ and $\ptll{v_{1}^{\ell}}{\eta_1^\ell}=\ptll{v_{1}^{\ell}}{\eta_1^r}=0$ for $v_1^\ell=0$ by~\eqref{E:primsfwewgrqgerqpall}. Now \[ \ptll{v_{1}^{\ell}}{ \lambda_1^r}\Big|_{v_{1}^{\ell}=0}=\ptl{ {v_{1}^{\ell}}}{}\lambda_1(u_1+\eta_1^r,u_2)\Big|_{v_{1}^{\ell}=0}\stackrel{\eqref{E:s1derh1}}{=}\left(\frac{u_2-1}{u_2} \right) \frac{\partial u_1}{ \partial v_{1}^{\ell}}\Big|_{v_{1}^{\ell}=0} \] and \[ \ptll{v_{1}^{\ell}}{ \lambda_1^\ell}\Big|_{v_{1}^{\ell}=0}=\ptl{ {v_{1}^{\ell}}}{}\lambda_1(u_1+\eta_1^\ell,u_2)\Big|_{v_{1}^{\ell}=0}\stackrel{\eqref{E:s1derh1}}{=}\left(\frac{u_2-1}{u_2} \right) \frac{\partial u_1}{ \partial v_{1}^{\ell}}\Big|_{v_{1}^{\ell}=0}\ . \] Thus, we deduce \bas \ptlls{ {v_{1}^{\ell}}}{\widehat{\Psi}_{2,1}} \left(v_{1}^{\ell}, v_{2}^{\ell},\eta_{1}^{\ell},\eta_{2}^{\ell},\sC{}\right)\Big|_{v_{1}^{\ell}=0} &= \eta_{1}^{\ell}\frac{\partial u_1}{ \partial v_{1}^{\ell}}\Big|_{v_{1}^{\ell}=0} \left[\frac{u_2-1}{u_2} -\frac{u_2-1}{u_2} \right]=0\;. \eas The proof is now complete. \end{proof}
\section{Auxiliary lemma}\label{App:C}
\begin{lemma} \label{E:rewgwgeq} Let $J\subset\{1,\dots,m\}$ be a set of indices. Suppose that, for each $j\in J$, $F:\R^{m}\to\R$ is $(k_{j}+1)$-times differentiable in the variable $x_{j}$ with $\ptl{x_{j}}{k_{j}+1}$ derivative continuous on $\R^{m}$ and \[ F\Big|_{x_{j}=0} =\ptl{x_{j}}{} F\Big|_{x_{j}=0} =\dots= \ptl{x_{j}}{k_{j}} F\Big|_{x_{j}=0} =0 \ . \] Then there is a constant $C=C(M)$ such that, for $(x_{1},\dots,x_{m})\in[-M,M]^{m}$, \[ |F(x_{1},\dots,x_{m})| \leq C\prod_{j\in J} |x_{j}|^{k_{j}+1} \ . \] \end{lemma}
\begin{proof} Given a vector $\underline{x}=(x_{1},\dots,x_{m})$, consider the projected vector $\tau_{j}(s)[\underline{x}]:=\underline{x}+(s-x_{j})\hat{\mathrm{e}}_{j}$, which has the same components as $\underline{x}$ except for the $j$-th one, which is set equal to $s$.
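To fix ideas, for $m=3$ and $j=2$ this reads $\tau_{2}(s)[(x_{1},x_{2},x_{3})]=(x_{1},s,x_{3})$, so that $\tau_{2}(0)[\underline{x}]$ simply replaces the second coordinate of $\underline{x}$ by zero. A model case for the statement of the lemma is $F(x_{1},x_{2})=\sin(x_{1})\,x_{2}^{2}$ with $J=\{1,2\}$, $k_{1}=0$ and $k_{2}=1$, for which indeed $|F(x_{1},x_{2})|\leq |x_{1}|\,|x_{2}|^{2}$.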
By the smoothness of $F$, and since \[ F\left(\tau_{j}(0)[\underline{x}]\right)=\ptl{x_{j}}{h} F\left(\tau_{j}(0)[\underline{x}]\right) =0 \qquad\text{for $h=1,\dots,k_{j}$}, \quad j\in J \] one has that for every $j\in J$ the function $F$ can be written as \bas F(\underline{x}) &=\int_{0}^{ x_{j}} \ptl{x_{j}}{} F\left(\tau_{j}(s_{1})[\underline{x}]\right) \, ds_{1}=\dots \\ &=\int_{0}^{x_j}\int_0^{s_1}\int_0^{s_2}\dots\int_0^{s_{k_j}} \ptl{x_{j}}{k_{j}+1} F\left(\tau_{j}(s_{k_j+1})[\underline{x}]\right) ds_{k_{j}+1}\dots\, ds_{3}\,ds_{2} ds_{1} \\ &=\ptl{x_{j}}{k_{j}+1} F\left(\tau_{j}(\hat s)[\underline{x}]\right) \cdot \frac{x_{j}^{k_{j}+1}}{(k_{j}+1)!} \qquad \text{for some $\hat s\in(0,x_{j})$} \eas and therefore \[ \left| F(\underline{x}) \right| \leq \max_{[\![\tau_{j}(0)[\underline{x}],\underline{x}]\!]}\left| \ptl{x_{j}}{k_{j}+1} F \right| \cdot \left|x_{j}\right|^{k_{j}+1} \leq \max_{[-M,M]^{m}}\left| \ptl{x_{j}}{k_{j}+1} F \right| \cdot \left|x_{j}\right|^{k_{j}+1} \qquad\text{for $j\in J$, $\underline{x}\in[-M,M]^{m}$.} \] For the same reason, for each $j\in J$ the function \[ F_{j}(\underline{x})=\frac{ F(\underline{x})}{x_{j}^{k_{j}+1}} \] is continuous and satisfies the hypotheses of the lemma in each index $i\in J\setminus\{j\}$. Applying the same argument as above we conclude that \[ \left| F_{j}(\underline{x}) \right|\leq \max_{[-M,M]^{m}}\left| \ptl{x_{i}}{k_{i}+1} F_{j} \right| \cdot \left|x_{i}\right|^{k_{i}+1} \qquad\text{for $i\in J\setminus\{j\}$, $\underline{x}\in[-M,M]^{m}$} \] and therefore \[ \left| F(\underline{x}) \right|\leq \max_{[-M,M]^{m}}\left| \ptl{x_{i}}{k_{i}+1} F_{j} \right| \cdot \left|x_{i}\right|^{k_{i}+1}\cdot \left|x_{j}\right|^{k_{j}+1} \qquad\text{for $i,j\in J $, $i\neq j$, $\underline{x}\in[-M,M]^{m}$.} \] Repeating the argument recursively for the indices in $J\setminus\{i,j\}$, we deduce the result. \end{proof}
\section*{Acknowledgement}
Fabio Ancona and Laura Caravenna are partially supported by the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM), and by the PRIN 2020 ``Nonlinear evolution PDEs, fluid dynamics and transport equations: theoretical foundations and applications''. Christoforou was partially supported by the Internal grant SBLawsMechGeom \#21036 from the University of Cyprus.
\end{document}
2205.06122v1
http://arxiv.org/abs/2205.06122v1
The average genus of a 2-bridge knot is asymptotically linear
\documentclass[11pt]{amsart} \usepackage{fullpage} \usepackage{color} \usepackage{pstricks,pst-node,pst-plot} \usepackage{graphicx,psfrag} \usepackage{color} \usepackage{tikz} \usepackage{pgffor} \usepackage{hyperref} \usepackage{todonotes} \usepackage{subfigure} \usepackage{verbatim} \usepackage{bm} \usepackage{multirow} \usepackage{perpage} \allowdisplaybreaks \MakePerPage{footnote} \newtheorem{problem}{Problem} \newtheorem{claim}{Claim} \newtheorem{theorem}{Theorem}[section] \newtheorem*{theorem-non}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{noname}[theorem]{} \newtheorem{sublemma}[theorem]{Sublemma} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{construction}[theorem]{Construction} \newtheorem{alternatedefinition}[theorem]{Alternate Definition} \newtheorem{assumption}[theorem]{Assumption} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{property}[theorem]{Property} \newtheorem{question}[theorem]{Question} \newtheorem{note}[theorem]{Note} \newtheorem{fact}[theorem]{Fact} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \numberwithin{equation}{section} \newcommand{\ba}{\backslash} \newcommand{\utf}{uniform time function} \definecolor{gray}{rgb}{.5,.5,.5} \def\gray{\color{gray}} \definecolor{black}{rgb}{0,0,0} \def\black{\color{black}} \definecolor{blue}{rgb}{0,0,1} \def\blue{\color{blue}} \definecolor{red}{rgb}{1,0,0} \def\red{\color{red}} \definecolor{green}{rgb}{0,1,0} \def\green{\color{green}} \definecolor{yellow}{rgb}{1,1,.4} \def\yellow{\color{yellow}} \newrgbcolor{purple}{.5 0 .5} \newrgbcolor{black}{0 0 0} \newrgbcolor{white}{1 1 1} \newrgbcolor{gold}{.5 .5 .2} \newrgbcolor{darkgreen}{0 .5 0} \newrgbcolor{gray}{.5 .5 .5} \newrgbcolor{lightgray}{.75 .75 .75} \newrgbcolor{lightred}{.75 0 0} \DeclareMathOperator{\parity}{par} \newcommand{\parityi}{\parity i} \DeclareMathOperator{\sgn}{sgn} \newcommand{\sgni}{\sgn i} \DeclareMathOperator{\pos}{pos} \newcommand{\posi}{\pos i} \newcommand{\Plus}{\mathord{\begin{tikzpicture}[baseline=0ex, line width=1, scale=0.13] \draw (1,0) -- (1,2); \draw (0,1) -- (2,1); \end{tikzpicture}}} \newcommand{\Minus}{\mathord{\begin{tikzpicture}[baseline=0ex, line width=1, scale=0.13] \draw (0,1) -- (2,1); \end{tikzpicture}}} \newcommand{\crossneg}{ \begin{tikzpicture}[baseline=-2] \draw[white,line width=1.5pt,double=black,double distance=.5pt] (0,-0.1) -- (0.3,0.2); \draw[white,line width=1.5pt,double=black,double distance=.5pt] (0,0.2) -- (0.3,-0.1); \end{tikzpicture}} \newcommand{\crosspos}{ \begin{tikzpicture}[baseline=-2] \draw[white,line width=1.5pt,double=black,double distance=.5pt] (0,0.2) -- (0.3,-0.1); \draw[white,line width=1.5pt,double=black,double distance=.5pt] (0,-0.1) -- (0.3,0.2); \end{tikzpicture}} \begin{document} \title{The average genus of a 2-bridge knot is asymptotically linear} \author{Moshe Cohen} \address{Mathematics Department, State University of New York at New Paltz, New Paltz, NY 12561} \email{[email protected]} \author{Adam M. Lowrance} \address{Department of Mathematics and Statistics, Vassar College, Poughkeepsie, NY 12604} \email{[email protected]} \thanks{The second author was supported by NSF grant DMS-1811344.} \begin{abstract} Experimental work suggests that the Seifert genus of a knot grows linearly with respect to the crossing number of the knot. 
In this article, we use a billiard table model for $2$-bridge or rational knots to show that the average genus of a $2$-bridge knot with crossing number $c$ asymptotically approaches $c/4+1/12$. \end{abstract} \maketitle
\section{Introduction}
The Seifert genus $g(K)$ of a knot $K$ in $S^3$ is the minimum genus of any oriented surface embedded in $S^3$ whose boundary is the knot $K$. Dunfield et al. \cite{Dun:knots} presented experimental data that suggests the Seifert genus of a knot grows linearly with respect to crossing number. Using a billiard table model for $2$-bridge knots developed by Koseleff and Pecker \cite{KosPec3, KosPec4}, Cohen \cite{Coh:lower} gave a lower bound on the average genus of a $2$-bridge knot. In this paper, we compute the average genus $\overline{g}_c$ of $2$-bridge knots with crossing number $c$ and show that $\overline{g}_c$ is asymptotically linear with respect to $c$. Let $\mathcal{K}_c$ be the set of unoriented $2$-bridge knots with $c$ crossings, where only one of a knot and its mirror image is in the set. For example, $|\mathcal{K}_3|=1$, and $\mathcal{K}_3$ contains exactly one of the right-handed and left-handed trefoils. Define the average genus $\overline{g}_c$ by \begin{equation} \label{eq:avgenus} \overline{g}_c = \frac{\sum_{K\in\mathcal{K}_c} g(K)}{|\mathcal{K}_c|}. \end{equation} Since the genus of a knot and the genus of its mirror image are the same, $\overline{g}_c$ is independent of the choice of each knot or its mirror image as elements in $\mathcal{K}_c$.
\begin{theorem} \label{thm:mainformula} Let $c\geq 3$. The average genus $\overline{g}_c$ of a $2$-bridge knot with crossing number $c$ is \[\overline{g}_c = \frac{c}{4} + \frac{1}{12} + \varepsilon(c),\] where \[\varepsilon (c) = \begin{cases} \displaystyle\frac{2^{\frac{c-4}{2}} - 4}{12(2^{c-3}+2^{\frac{c-4}{2}})} & \text{if } c\equiv 0\text{ mod }4,\\ \displaystyle \frac{1}{3\cdot 2^{\frac{c-3}{2}}} & \text{if } c\equiv 1\text{ mod }4,\\ \displaystyle \frac{2^{\frac{c-4}{2}}+3c-11}{12(2^{c-3}+2^{\frac{c-4}{2}}-1)}& \text{if } c\equiv 2\text{ mod }4, \text{ and}\\ \displaystyle \frac{2^{\frac{c+1}{2}}+11-3c}{12(2^{c-3}+2^{\frac{c-3}{2}}+1)} & \text{if } c\equiv 3\text{ mod }4. \end{cases}\] Since $\varepsilon(c)\to 0$ as $c\to \infty$, the average genus $\overline{g}_c$ approaches $\frac{c}{4}+\frac{1}{12}$ as $c \to \infty$. \end{theorem}
Suzuki and Tran \cite{SuzukiTran} independently proved this formula for $\overline{g}_c$. Ray and Diao \cite{RayDiao} expressed $\overline{g}_c$ using sums of products of certain binomial coefficients. Baader, Kjuchukova, Lewark, Misev, and Ray \cite{BKLMR} previously showed that if $c$ is sufficiently large, then $\frac{c}{4} \leq \overline{g}_c$. The proof of Theorem \ref{thm:mainformula} uses the Chebyshev billiard table model for knot diagrams of Koseleff and Pecker \cite{KosPec3,KosPec4} as presented by Cohen and Krishnan \cite{CoKr} and with Even-Zohar \cite{CoEZKr}. This model yields an explicit enumeration of the elements of $\mathcal{K}_c$ as well as an alternating diagram in the format of Figure \ref{fig:alternating} for each element of $\mathcal{K}_c$. Murasugi \cite{Mur:genus} and Crowell \cite{Cro:genus} proved that the genus of an alternating knot is the genus of the surface obtained by applying Seifert's algorithm \cite{Sei} to an alternating diagram of the knot.
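Before outlining the proof, it may help to check the formula of Theorem \ref{thm:mainformula} in the smallest cases. For $c=3$, which is $3$ mod $4$, we get $\varepsilon(3)=\frac{2^{2}+11-9}{12(2^{0}+2^{0}+1)}=\frac{1}{6}$, so $\overline{g}_3=\frac{3}{4}+\frac{1}{12}+\frac{1}{6}=1$, the genus of the trefoil, which is the only element of $\mathcal{K}_3$. For $c=4$, which is $0$ mod $4$, we get $\varepsilon(4)=\frac{2^{0}-4}{12(2^{1}+2^{0})}=-\frac{1}{12}$, so $\overline{g}_4=1+\frac{1}{12}-\frac{1}{12}=1$, matching the genus of the figure-eight knot, the only $2$-bridge knot with four crossings.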
The proof of Theorem \ref{thm:mainformula} proceeds by applying Seifert's algorithm to the alternating diagrams obtained from our explicit enumeration of $\mathcal{K}_c$ and averaging the genera of those surfaces. This paper is organized as follows. In Section \ref{sec:background}, we recall how the Chebyshev billiard table model for $2$-bridge knot diagrams can be used to describe the set $\mathcal{K}_c$ of $2$-bridge knots. In Section \ref{sec:recursions}, we find recursive formulas that allow us to count the total number of Seifert circles among all $2$-bridge knots with crossing number $c$. Finally, in Section \ref{sec:formulas}, we find a closed formula for the number of Seifert circles among all $2$-bridge knots and use it to prove Theorem \ref{thm:mainformula}.
\section{Background} \label{sec:background}
The average genus of $2$-bridge knots with crossing number $c$ is the quotient of the sum of the genera of all $2$-bridge knots with crossing number $c$ and the number of $2$-bridge knots with crossing number $c$. Ernst and Sumners \cite{ErnSum} proved formulas for the number $|\mathcal{K}_c|$ of $2$-bridge knots with $c$ crossings.
\begin{theorem}[Ernst-Sumners \cite{ErnSum}, Theorem 5] \label{thm:ernstsumners} The number $|\mathcal{K}_c|$ of 2-bridge knots with $c$ crossings where chiral pairs are \emph{not} counted separately is given by \[ |\mathcal{K}_c| = \begin{cases} \frac{1}{3}(2^{c-3}+2^{\frac{c-4}{2}}) & \text{ for } 4 \leq c\equiv 0 \text{ mod }4,\\ \frac{1}{3}(2^{c-3}+2^{\frac{c-3}{2}}) & \text{ for } 5\leq c\equiv 1 \text{ mod }4, \\ \frac{1}{3}(2^{c-3}+2^{\frac{c-4}{2}}-1) & \text{ for } 6 \leq c\equiv 2 \text{ mod }4, \text{ and}\\ \frac{1}{3}(2^{c-3}+2^{\frac{c-3}{2}}+1) & \text{ for } 3\leq c\equiv 3 \text{ mod }4. \end{cases} \] \end{theorem}
A billiard table diagram of a knot is constructed as follows. Let $a$ and $b$ be relatively prime positive integers with $a<b$, and consider an $a\times b$ grid. Draw a sequence of line segments along diagonals of the grid as follows. Start at the bottom left corner of the grid with a line segment that bisects the right angle of the grid. Extend that line segment until it reaches an outer edge of the grid, and then start a new segment that is reflected $90^\circ$. Continue in this fashion until a line segment ends in a corner of the grid. Connecting the beginning of the first line segment with the end of the last line segment results in a piecewise linear closed curve in the plane with only double-point self-intersections. If each such double-point self-intersection is replaced by a crossing, then one obtains a \emph{billiard table diagram} of a knot. See Figure \ref{fig:billiard}.
\begin{figure}[h] \begin{tikzpicture}[scale=.6] \draw[dashed, white!50!black] (0,0) rectangle (8,3); \foreach \x in {1,...,7} {\draw[dashed, white!50!black] (\x,0) -- (\x,3);} \foreach \x in {1,2} {\draw[dashed, white!50!black] (0,\x) -- (8, \x);} \foreach \x in {0,2,4} {\draw[thick] (\x,0) -- (\x+3,3); \draw[thick] (\x+1,3) -- (\x+4,0);} \draw[thick] (1,3) -- (0,2) -- (2,0); \draw[thick] (6,0) -- (8,2) -- (7,3); \draw[thick, ->] (0,0) -- (1.5,1.5); \begin{scope}[xshift = 12 cm] \draw[dashed, white!50!black] (0,0) rectangle (8,3); \foreach \x in {1,...,7} {\draw[dashed, white!50!black] (\x,0) -- (\x,3);} \foreach \x in {1,2} {\draw[dashed, white!50!black] (0,\x) -- (8, \x);} \draw[thick] (0,0) -- (1.8,1.8); \draw[thick] (2.2, 2.2) -- (3,3) -- (3.8,2.2); \draw[thick] (4.2,1.8) -- (6,0) -- (8,2) -- (7,3) -- (6.2,2.2); \draw[thick] (5.8,1.8) -- (5.2,1.2); \draw[thick] (4.8,0.8) -- (4,0) -- (3.2,0.8); \draw[thick] (2.8,1.2) -- (1,3) -- (0,2) -- (0.8,1.2); \draw[thick] (1.2,0.8) -- (2,0) -- (5,3) -- (6.8,1.2); \draw[thick] (7.2, 0.8) -- (8,0); \draw[thick, ->] (0,0) -- (1.5,1.5); \end{scope} \end{tikzpicture} \caption{A billiard table projection and a billiard table diagram of a knot on a $3\times 8$ grid. The diagram corresponds to the word $+-++ -{}-+$. We do not draw the arc connecting the ends but understand it to be present.} \label{fig:billiard} \end{figure} Billiard table diagrams on a $3\times b$ grid have bridge number either one or two, that is, such a knot is either the unknot or a $2$-bridge knot. In a $3\times b$ billiard table diagram, there is one crossing on each vertical grid line except the first and the last. A string of length $b-1$ in the symbols $\{+,-\}$ determines a $2$-bridge knot or the unknot, as follows. A crossing corresponding to a $+$ looks like $\tikz[baseline=.6ex, scale = .4]{ \draw (0,0) -- (1,1); \draw (0,1) -- (.3,.7); \draw (.7,.3) -- (1,0); } ~$, and a crossing corresponding to a $-$ looks like $\tikz[baseline=.6ex, scale = .4]{ \draw (0,0) -- (.3,.3); \draw (.7,.7) -- (1,1); \draw (0,1) -- (1,0); } ~$. Figure \ref{fig:billiard} shows an example. A given $2$-bridge knot has infinitely many descriptions as strings of various lengths in the symbols $\{+,-\}$. Cohen, Krishnan, and Evan-Zohar's work \cite{CoKr, CoEZKr} lets us describe $2$-bridge knots in this manner but with more control on the number of strings representing a given $2$-bridge knot. \begin{definition} Define the \emph{partially double-counted set $T(c)$ of $2$-bridge words with crossing number $c$} as follows. Each word in $T(c)$ is a word in the symbols $\{+,-\}$. If $c$ is odd, then a word $w$ is in $T(c)$ if and only if it is of the form \[ (+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4}\ldots(-)^{\varepsilon_{c-1}}(+)^{\varepsilon_c}, \] where $\varepsilon_i\in\{1,2\}$ for $i\in\{1,\ldots,c\}$, $\varepsilon_1=\varepsilon_c=1$, and the length of the word $\ell=\sum_{i=1}^{c}\varepsilon_i \equiv 1$ mod $3$. Similarly, if $c$ is even, then a word $w$ is in $T(c)$ if and only if it is of the form \[(+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4}\ldots(+)^{\varepsilon_{c-1}}(-)^{\varepsilon_c},\] where $\varepsilon_i\in\{1,2\}$ for $i\in\{1,\ldots,c\}$, $\varepsilon_1=\varepsilon_c=1$, and the length of the word $\ell=\sum_{i=1}^{c}\varepsilon_i \equiv 1$ mod $3$. 
\end{definition} The set $T(c)$ is described as partially double-counted because every $2$-bridge knot is represented by exactly one or two words in $T(c)$, as described in Theorem \ref{thm:list} below. Although the billiard table diagram associated with $w$ has $\ell$ crossings, there is an alternating diagram associated with $w$ that has $c$ crossings, and hence we use the $T(c)$ notation. The \emph{reverse} $r(w)$ of a word $w$ of length $\ell$ is a word whose $i$th entry is the $(\ell - i +1)$st entry of $w$; in other words, $r(w)$ is just $w$ backwards. The \emph{reverse mirror} $\overline{r}(w)$ of a word $w$ of length $\ell$ is the word of length $\ell$ where each entry disagrees with the corresponding entry of $r(w)$; in other words, $\overline{r}(w)$ is obtained from $w$ by reversing the order and then changing every $+$ to a $-$ and vice versa. \begin{definition} The subset $T_p(c)\subset T(c)$ of \emph{words of palindromic type} consists of words $w\in T(c)$ such that $w=r(w)$ when $c$ is odd and $w=\overline{r}(w)$ when $c$ is even. \end{definition} \noindent For example, the word $w=+ -{}-+$ is the only word in $T_p(3)$, and the word $w=+ - + -$ is the only word in $T_p(4)$. The following theorem says exactly which $2$-bridge knots are represented by two words in $T(c)$ and which $2$-bridge knots are represented by only one word in $T(c)$. The theorem is based on work by Schubert \cite{Sch} and Koseleff and Pecker \cite{KosPec4}. The version of the theorem we state below comes from Lemma 2.1 and Assumption 2.2 in \cite{Coh:lower}. \begin{theorem} \label{thm:list} Let $c\geq 3$. Every $2$-bridge knot is represented by a word in $T(c)$. If a $2$-bridge knot $K$ is represented by a word $w$ of palindromic type, that is, a word in $T_p(c)$, then $w$ is the only word in $T(c)$ that represents $K$. If a $2$-bridge knot $K$ is represented by a word $w$ that is not in $T_p(c)$, then there are exactly two words in $T(c)$ that represent $K$, namely $w$ and $r(w)$ when $c$ is odd or $w$ and $\overline{r}(w)$ when $c$ is even. \end{theorem} A billiard table diagram associated with a word $w$ in $T(c)$ is not necessarily alternating; however the billiard table diagram associated with $w$ can be transformed into an alternating diagram $D$ of the same knot as follows. A \emph{run} in $w$ is a subword of $w$ consisting of all the same symbols (either all $+$ or all $-$) that is not properly contained in a single-symbol subword of longer length. By construction, if $w\in T(c)$, then it is made up of $c$ runs all of length one or two. The run $+$ is replaced by $\sigma_1$, the run $++$ is replaced by $\sigma_2^{-1}$, the run $-$ is replaced by $\sigma_2^{-1}$ and the run $-{}-$ is replaced by $\sigma_1$, as summarized by pictures in Table \ref{tab:wtoD}. The left side of the diagram has a strand entering from the bottom left and a cap on the top left. If the last term is $\sigma_1$, then the right side of the diagram has a strand exiting to the bottom right and a cap to the top right, and if the last term is $\sigma_2^{-1}$, then the right side of the diagram has a strand exiting to the top right and a cap on the bottom right. See Figure \ref{fig:alternating} for an example. Theorem 2.4 and its proof in \cite{Coh:lower} explain this correspondence. 
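As a sanity check on Theorem \ref{thm:list}, the sets $T(c)$ and $T_p(c)$ can be enumerated directly from the definitions above. The following Python sketch is included only as an illustration; the helper names \texttt{words} and \texttt{mirror\_reverse} are ours. It prints $|T(c)|$, $|T_p(c)|$, and $\bigl(|T(c)|+|T_p(c)|\bigr)/2$, and for the small values of $c$ we checked the last of these agrees with the count $|\mathcal{K}_c|$ of Theorem \ref{thm:ernstsumners}.
\begin{verbatim}
# Enumerate T(c) and T_p(c) from the definitions and check that
# (|T(c)| + |T_p(c)|)/2 matches the Ernst-Sumners count |K_c|.
from itertools import product

def words(c):
    # runs of length 1 or 2, first and last runs single, signs
    # alternating starting with +, total length = 1 (mod 3)
    out = []
    for middle in product((1, 2), repeat=c - 2):
        eps = (1,) + middle + (1,)
        if sum(eps) % 3 != 1:
            continue
        out.append("".join(("+" if i % 2 == 0 else "-") * e
                           for i, e in enumerate(eps)))
    return out

def mirror_reverse(w):
    return "".join("+" if s == "-" else "-" for s in reversed(w))

for c in range(3, 11):
    T = words(c)
    Tp = [w for w in T
          if (w == w[::-1] if c % 2 == 1 else w == mirror_reverse(w))]
    print(c, len(T), len(Tp), (len(T) + len(Tp)) // 2)
\end{verbatim}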
\begin{center} \begin{table}[h] \begin{tabular}{|c||c|c|c|c|} \hline &&&&\\ Run in billiard table diagram word $w$ & $(+)^1$ & $(+)^2$ & $(-)^1$ & $(-)^2$ \\ &&&&\\ \hline &&&&\\ Crossing in alternating diagram $D$ & $\sigma_1$ & $\sigma_2^{-1}$ & $\sigma_2^{-1}$ & $\sigma_1$ \\ &&&&\\ && $\crossneg$ & $\crossneg$ &\\ &$\crosspos$ &&& $\crosspos$ \\ &&&&\\ \hline \end{tabular} \caption{Transforming a billiard table diagram into an alternating diagram, as seen in \cite[Table 1]{Coh:lower}.} \label{tab:wtoD} \end{table} \end{center} \begin{figure}[h] \begin{tikzpicture}[scale=.6] \draw[dashed, white!50!black] (0,0) rectangle (8,3); \foreach \x in {1,...,7} {\draw[dashed, white!50!black] (\x,0) -- (\x,3);} \foreach \x in {1,2} {\draw[dashed, white!50!black] (0,\x) -- (8, \x);} \draw[thick] (0,0) -- (1.8,1.8); \draw[thick] (2.2, 2.2) -- (3,3) -- (3.8,2.2); \draw[thick] (4.2,1.8) -- (6,0) -- (8,2) -- (7,3) -- (6.2,2.2); \draw[thick] (5.8,1.8) -- (5.2,1.2); \draw[thick] (4.8,0.8) -- (4,0) -- (3.2,0.8); \draw[thick] (2.8,1.2) -- (1,3) -- (0,2) -- (0.8,1.2); \draw[thick] (1.2,0.8) -- (2,0) -- (5,3) -- (6.8,1.2); \draw[thick] (7.2, 0.8) -- (8,0); \draw[thick, ->] (0,0) -- (1.5,1.5); \begin{scope}[xshift=12cm, thick, rounded corners = 2mm] \draw[->] (0,0) -- (1.5,1.5); \draw (0,0) -- (1.8,1.8); \draw (2.2,2.2) -- (3,3) -- (4.8,1.2); \draw (5.2,0.8) -- (6,0) -- (8,2) -- (7,3) -- (5,3) -- (4.2,2.2); \draw (3.8,1.8) -- (3,1) -- (1,3) -- (0,2) -- (0.8,1.2); \draw (1.2,0.8) -- (2,0) -- (4,0) -- (6,2) -- (6.8,1.2); \draw (7.2,0.8) -- (8,0); \end{scope} \end{tikzpicture} \caption{The billiard table diagram knot corresponding to the word $+-++ -{}-+$ has alternating diagram $\sigma_1\sigma_2^{-2}\sigma_1^2$. } \label{fig:alternating} \end{figure} Murasugi \cite{Mur:genus} and Crowell \cite{Cro:genus} proved that the genus of an alternating knot $K$ is the genus of the Seifert surface obtained from Seifert's algorithm on an alternating diagram of $K$. Therefore, the average genus $\overline{g}_c$ is \[ \overline{g}_c = \frac{1}{2}\left(1 + c - \overline{s}_c \right),\] where $\overline{s}_c$ is the average number of Seifert circles in the alternating diagrams of all $2$-bridge knots with crossing number $c$. In Section \ref{sec:recursions}, we find recursive formulas for the total number of Seifert circles in the alternating diagrams associated with words in $T(c)$ and $T_p(c)$, named $s(c)$ and $s_p(c)$, respectively. Theorem \ref{thm:list} implies that \begin{equation} \label{eq:avseifert} \overline{s}_c = \frac{s(c) + s_p(c)}{2|\mathcal{K}_c|}. \end{equation} Seifert's algorithm uses the orientation of a knot diagram to construct a Seifert surface. Lemma 3.3 in \cite{Coh:lower} keeps track of the orientations of the crossings in the alternating diagram $D$ associated with a word $w$ in $T(c)$. See also Property 7.1 in \cite{Co:3-bridge}. \begin{lemma} \label{lem:or1} \cite[Lemma 3.3]{Coh:lower} The following conventions determine the orientation of every crossing in the alternating diagram $D$ associated with a word $w$ in $T(c)$. \begin{enumerate} \item Two of the three strands in $D$ are oriented to the right. \item If either a single $+$ or a single $-$ appears in a position congruent to $1$ modulo $3$ in $w$, then it corresponds to a single crossing in the alternating diagram $D$ that is horizontally-oriented. 
\item If either a double $++$ or a double $-{}-$ appears in two positions congruent to $2$ and $3$ modulo $3$ in $w$, then they correspond to a single crossing in the alternating diagram $D$ that is horizontally-oriented. \item The remaining crossings in $D$ are vertically-oriented. \end{enumerate} \end{lemma} \section{Recursive formulas for Seifert circles} \label{sec:recursions} In this section, we find recursive formulas for the total number of Seifert circles in the alternating diagrams associated with words in $T(c)$ and $T_p(c)$. The section is split between the general case, where we deal with $T(c)$, and the palindromic case, where we deal with $T_p(c)$. \subsection{General case} \label{subsec:general} In order to develop the recursive formulas for the total number of Seifert circles of alternating diagrams coming from $T(c)$, we partition $T(c)$ into four subsets. The final run of each word $w$ in $T(c)$ is fixed by construction; if $c$ is odd, then $w$ ends in a single $+$, and if $c$ is even, then $w$ ends in a single $-$. Suppose below that $c$ is odd; the even case is similar. The two penultimate runs in a word in $T(c)$ must be exactly one of the following cases: \begin{itemize} \item[(1)] a single + followed by a single -, \item[(2)] a double ++ followed by a double -{}-, \item[(3)] a single + followed by a double -{}-, or \item[(4)] a double ++ followed by a single -. \end{itemize} These four cases form a partition of $T(c)$. The Jacobsthal sequence \href{https://oeis.org/A001045}{A001045} \cite{OEIS1045} is an integer sequence satisfying the recurrence relation $J(n) = J(n-1) + 2J(n-2)$ with initial values $J(0)=0$ and $J(1)=1$. The closed formula for the $n$th Jacobsthal number is $J(n)=\frac{2^n - (-1)^n}{3}$. We use the Jacobsthal sequence to find a formula for the number of words in $T(c)$. \begin{proposition} \label{prop:countterms} The number $t(c) = \frac{2^{c-2} - (-1)^c}{3}$ is the Jacobsthal number $J(c-2)$ and satisfies the recursive formula $t(c)=t(c-1)+2t(c-2)$. \end{proposition} \begin{proof} The base cases of $t(3)=t(4)=1$ hold because $T(3) =\{+-{}-+\}$ and $T(4) = \{+-+-\}$. Next, we show that $t(c)$ satisfies the recursive formula above. The penultimate two runs in cases 3 and 4 are of length three, which is convenient for our model, and so they can be removed without changing the length requirement modulo 3. Removing either $+-{}-$ or $++-$ also does not affect the parity of the number of crossings. The final $+$ after these subwords can still be appended to the shorter words after the removal. What is left after removal in each of these cases is the set $T(c-2)$, and so cases 3 and 4 combine to contribute $2t(c-2)$ words. In case 1, the final three runs $+-+$ can be replaced by $++-$, preserving the length of the word and reducing the number of crossings by one. In case 2, the final three runs $++-{}-+$ can be replaced by $+-$ without changing the length requirement modulo 3. In this case, the number of crossings is reduced by one. These two cases partition $T(c-1)$. In case 1, the penultimate run is a double, and in case 2, it is a single. Thus these two cases together contribute $t(c-1)$ words. Therefore $t(c) = t(c-1) + 2t(c-2)$. Since $t$ satisfies the Jacobsthal recurrence relation and $t(3)=t(4)=J(1)=J(2)=1$, it follows that $t(c) = J(c-2)= \frac{2^{c-2} - (-1)^c}{3}$. \end{proof} The replacements used in the proof of Proposition \ref{prop:countterms} are summarized in the list following the sketch below.
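As a quick numerical check of Proposition \ref{prop:countterms} (again purely illustrative and not part of the argument), the following Python sketch iterates the recursion $t(c)=t(c-1)+2t(c-2)$ from the base cases $t(3)=t(4)=1$ and compares the result with the closed form $\frac{2^{c-2}-(-1)^c}{3}$; the two agree, and for small $c$ the values also match a direct enumeration of $T(c)$ as in the sketch in Section \ref{sec:background}.
\begin{verbatim}
# Check Proposition (countterms): the recursion for t(c) against the
# closed Jacobsthal form (2^(c-2) - (-1)^c)/3.
def t_rec(cmax):
    t = {3: 1, 4: 1}
    for c in range(5, cmax + 1):
        t[c] = t[c - 1] + 2 * t[c - 2]
    return t

def t_closed(c):
    return (2**(c - 2) - (-1)**c) // 3

t = t_rec(20)
assert all(t[c] == t_closed(c) for c in range(3, 21))
print([t[c] for c in range(3, 11)])  # [1, 1, 3, 5, 11, 21, 43, 85]
\end{verbatim}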
\begin{itemize} \item[(1)] The final string $+-+$ is replaced by $++-$, obtaining a new word with $c-1$ crossings. \item[(2)] The final string $++-{}-+$ is replaced by $+-$, obtaining a new word with $c-1$ crossings. \item[(3)] The final string $+-{}-+$ is replaced by $+$, obtaining a new word with $c-2$ crossings. \item[(4)] The final string $++-+$ is replaced by $+$, obtaining a new word with $c-2$ crossings. \end{itemize} \begin{example} \label{ex:c6countterms} Table \ref{tab:c456} shows the sets $T(4)$, $T(5)$, and $T(6)$. Subwords of words in $T(6)$ in parentheses are replaced according to the proof of Proposition \ref{prop:countterms} to obtain the words on the left in either $T(4)$ or $T(5)$. We see that $t(6) = t(5) + 2t(4)$. \end{example} \begin{center} \begin{table}[h] \begin{tabular}{|c|c||c|c|} \hline $T(4)$ & $+-+()-$ & $+-+(-++)-$ & \\ \cline{1-2} $T(4)$ & $+-+()-$ & $+-+(-{}-+)-$ & \\ \cline{1-2} \multirow{3}{*}{$T(5)$} & $+-{}-++(-)+$ & $+-{}-++(-{}-++)-$ & $T(6)$\\ & $+-++(-{}-)+$ & $+-++(-+)-$ & \\ & $+-{}-+(-{}-)+$ & $+-{}-+(-+)-$ & \\ \hline \end{tabular} \caption{The sets $T(4)$, $T(5)$, and $T(6)$ with the subwords in the parentheses replaced as in the proof of Proposition \ref{prop:countterms}.} \label{tab:c456} \end{table} \end{center} \begin{example} \label{ex:c7countterms} Table \ref{tab:c567} shows the sets $T(5)$, $T(6)$, and $T(7)$. Subwords of words in $T(7)$ in parentheses are replaced according to the proof of Proposition \ref{prop:countterms} to obtain the words on the left in either $T(5)$ or $T(6)$. We see that $t(7) = t(6) + 2t(5)$. \end{example} \begin{center} \begin{table}[h] \begin{tabular}{|c|c||c|c|} \hline & $+-{}-++-()+$ & $+-{}-++-(+--)+$ & \\ $T(5)$ & $+-++-{}-()+$ & $+-++-{}-(+--)+$ & \\ & $+-{}-+-{}-()+$ & $+-{}-+-{}-(+--)+$ & \\ \cline{1-2} & $+-{}-++-()+$ & $+-{}-++-(++-)+$ & \\ $T(5)$ & $+-++-{}-()+$ & $+-++-{}-(++-)+$ & \\ & $+-{}-+-{}-()+$ & $+-{}-+-{}-(++-)+$ & $T(7)$ \\ \cline{1-2} & $+-+-{}-(+)-$ & $+-+-{}-(++--)+$ & \\ & $+-++-(+)-$ & $+-++-(++--)+$ & \\ $T(6)$ & $+-{}-+-(+)-$ & $+-{}-+-(++--)+$ & \\ & $+-+-(++)-$ & $+-+-(+-)+$ & \\ & $+-{}-++-{}-(++)-$ & $+-{}-++-{}-(+-)+$ & \\ \hline \end{tabular} \caption{The sets $T(5)$, $T(6)$, and $T(7)$ with the subwords in the parentheses replaced as in the proof of Proposition \ref{prop:countterms}.} \label{tab:c567} \end{table} \end{center} Let $s(c)$ be the total number of Seifert circles obtained when Seifert's algorithm is applied to the alternating diagrams associated to words in $T(c)$. For brevity, we say that $s(c)$ is the total number of Seifert circles from $T(c)$. In order to find a recursive formula for $s(c)$, we develop recursive formulas for sizes of the subsets in the partition of $T(c)$ defined by the four cases above. \begin{lemma} \label{lem:countcases} Let $t_1(c)$, $t_2(c)$, $t_3(c)$, and $t_4(c)$ be the number of words in cases 1, 2, 3, and 4, respectively, for crossing number $c$. Then \[t_1(c)=2t(c-3),~t_2(c)=t(c-2),~\text{and}~t_3(c)=t_4(c)=t(c-2).\] \end{lemma} \begin{proof} The last result $t_3(c)=t_4(c)=t(c-2)$ appears in the proof of Proposition \ref{prop:countterms} above. We now consider the other cases. Without loss of generality, suppose $c$ is odd. In case 2, the final three runs are $++-{}-+$, and we can obtain a word with crossing number $c-1$ by replacing this string with $+-$, as described in Proposition \ref{prop:countterms} above. 
If the $(c-3)$rd run is a double $-{}-$, then the string $-{}-++-{}-$ in positions $c-3$ through $c-1$ can be removed without affecting the required length modulo 3, with the final single $+$ becoming a final single $-$. The number of such words is $t(c-3)$. If the $(c-3)$rd run is a single $-$, then $-++-{}-+$ is replaced with the string $-+-$. This is case 1 for $c-1$ crossings, and so the number of these words is $t_1(c-1)$. Therefore $t_2(c) = t(c-3)+t_1(c-1)$. In case 1, the final three runs are $+-+$ and we can reduce this to a word with crossing number $c-1$ by replacing this string with $++-$, as described in Proposition \ref{prop:countterms} above. If the $(c-3)$rd run is a single $-$, then first perform the replacement move, yielding the string $-++-$, and then remove the penultimate two runs without affecting the required length modulo 3, keeping the final single $-$. The number of these words is $t(c-3)$. If the $(c-3)$rd run is a double $-{}-$, then after performing the replacement move, the final three runs are $-{}-++-$. This is case 2 for $c-1$ crossings, and so the number of these words is $t_2(c-1)$. Therefore $t_1(c)=t(c-3)+t_2(c-1)$. We prove that $t_1(c)=2t(c-3)$ and that $t_2(c)=t(c-2)$ by induction. For the base cases, Example \ref{ex:c6countterms} implies that $t_2(5)=1$ and $t_1(6)=2$, and $t(3)=1$ because $T(3)=\{+--+\}$. Our inductive hypothesis is that $t_1(c-1)=2t(c-4)$ and $t_2(c-1)=t(c-3)$. We then have that \[t_1(c) = t(c-3) + t_2(c-1) = 2t(c-3)\] and \[t_2(c)=t(c-3)+t_1(c-1) = t(c-3) + 2t(c-4) = t(c-2).\] \end{proof} We are now ready to prove our recursive formula for $s(c)$, the total number of Seifert circles from $T(c)$. Throughout the proof, we refer to Table \ref{tab:Seifert} below. \begin{table}[h] \begin{tabular}{|c|c||c|c|c|} \hline Case & Crossing & String & Alternating & Seifert State \\ & Number & & Diagram& \\ \hline \hline 1 & $c$ & $+-+$ & \begin{tikzpicture}[scale=.5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw (0,0) -- (1.3, 1.3); \draw (0,1) -- (.3,.7); \draw (.7,.3) -- (1,0) -- (2,0) -- (3,1) -- (2,2) -- (1.7,1.7); \draw (0,2) -- (1,2) -- (2.3,.7); \draw (2.7,.3) -- (3,0); \draw[->] (.5, .5) -- (.1,.1); \draw[->] (.7,.3) -- (.9,.1); \draw[->] (2.5, .5) -- (2.9,.9); \draw[->] (2.7,.3) -- (2.9,.1); \draw[->] (1.5, 1.5) -- (1.9,1.1); \draw[->] (1.3,1.3) -- (1.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw[->] (0,1) -- (.4,.5) -- (0,0); \draw[->] (0,2) -- (1,2) -- (1.4,1.5) -- (.6,.5) -- (1,0) -- (2,0) -- (2.5,.4) -- (3,0); \draw[->] (2,1) -- (2.5,.6) -- (3,1) -- (2,2) -- (1.6,1.5) -- (2,1); \end{tikzpicture} \\ \hline 1 & $c-1$ & $++-$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (2.2,2.2); \draw (0,0) -- (1,0) -- (2,1) -- (1.7,1.3); \draw (1.3,1.7) -- (1,2) -- (0,1); \draw (0,2) -- (0.3,1.7); \draw (.7,1.3) -- (1,1) -- (2,2); \draw[->] (0.5,1.5) -- (.9,1.9); \draw[->] (.7,1.3) -- (.9,1.1); \draw[->] (1.5,1.5) -- (1.9,1.9); \draw[->] (1.7, 1.3) -- (1.9,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (2.2,2.2); \draw[->] (0,2) -- (.5,1.6) -- (1,2) -- (1.5,1.6) -- (2,2); \draw[->] (0,1) -- (.5, 1.4) -- (1,1) -- (1.5,1.4) -- (2,1) -- (1,0) -- (0,0); \end{tikzpicture} \\ \hline\hline 2A & $c$ & $-++-{}-+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (3.2,2.2); \draw (-1,0) -- (1,0) 
-- (2,1) -- (2.3,.7); \draw (2.7,.3) -- (3,0); \draw (-1,2) -- (0,1) -- (.3,1.3); \draw (-.3,1.7) -- (0,2) -- (1.3,.7); \draw (-1,1) -- (-.7,1.3); \draw (1.7,.3) -- (2,0) -- (3,1) -- (2,2) -- (1,2) -- (.7,1.7); \draw[->] (.3,1.3) -- (.1,1.1); \draw[->] (.5,1.5) -- (.9,1.1); \draw[->] (1.5,.5) -- (1.9,.9); \draw[->] (1.7,.3) -- (1.9,.1); \draw[->] (2.5,.5) -- (2.9,.9); \draw[->] (2.7,.3) -- (2.9,.1); \draw[->] (-.5,1.5) -- (-.9,1.9); \draw[->] (-.3,1.7) -- (-.1,1.9); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (3.2,2.2); \draw[->] (0,2) arc (90:-270:.4cm and .5cm); \draw[->] (-1,0) -- (1,0) -- (1.5,.4) -- (2,0) -- (2.5,.4) -- (3,0); \draw[->] (1.5,2) -- (1,2) -- (.6,1.5) -- (1,1) -- (1.5,.6) -- (2,1) -- (2.5,.6) -- (3,1) -- (2,2) -- (1.5,2); \draw[->] (-1,1) -- (-.6,1.5) -- (-1,2); \end{tikzpicture} \\ \hline 2A & $c-1$ & $-+-$ & \begin{tikzpicture} [scale = .4, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (2.2,2.2); \draw (-1,0) -- (0,0) -- (1.3,1.3); \draw (1.7,1.7)--(2,2); \draw (-1,2) -- (0.3,0.7); \draw (0.7,0.3) -- (1,0) -- (2,1) -- (1,2) -- (0,2) -- (-.3,1.7); \draw (-1,1) -- (-.7,1.3); \draw[->] (-.3,1.7) -- (-.1,1.9); \draw[->] (-.5,1.5) -- (-.9,1.9); \draw[->] (0.5,0.5) -- (0.9, 0.9); \draw[->] (0.3,0.7) -- (0.1,0.9); \draw[->] (1.5,1.5) -- (1.9,1.1); \draw[->] (1.7, 1.7) -- (1.9, 1.9); \end{tikzpicture} & \begin{tikzpicture} [scale = .4, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (2.2,2.2); \draw[->] (-1,0) --(0,0) -- (.4,.5) -- (0,1) -- (-.4,1.5) -- (0,2)-- (1,2) --(1.5,1.6) -- (2,2); \draw[->] (1,1) -- (1.5,1.4) -- (2,1) -- (1,0) -- (0.6,0.5) -- (1,1); \draw[->] (-1,1) -- (-.6,1.5) -- (-1,2); \end{tikzpicture} \\ \hline \hline 2B & $c$ & $-{}-++-{}-+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (3.2,2.2); \draw (-.3,.3) -- (0,0) -- (1,0) -- (2,1) -- (2.3,.7); \draw (2.7,.3) -- (3,0); \draw (-1,0) -- (.3,1.3); \draw (-1,2) -- (0,2) -- (1.3,.7); \draw (1.7,.3) -- (2,0) -- (3,1) -- (2,2) -- (1,2) -- (.7,1.7); \draw (-1,1) -- (-.7,.7); \draw[->] (.3,1.3) -- (.1,1.1); \draw[->] (.5,1.5) -- (.9,1.1); \draw[->] (1.5,.5) -- (1.9,.9); \draw[->] (1.7,.3) -- (1.9,.1); \draw[->] (2.5,.5) -- (2.9,.9); \draw[->] (2.7,.3) -- (2.9,.1); \draw[->] (-.5,.5) -- (-1,0); \draw[->] (-.3,.3) -- (-.1,.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (3.2,2.2); \draw[->] (-1,2) -- (0,2) -- (.4,1.5) -- (0,1) -- (-.4,.5) -- (0,0) -- (1,0) -- (1.5,.4) -- (2,0) -- (2.5,.4) -- (3,0); \draw[->] (1.5,2) -- (1,2) -- (.6,1.5) -- (1,1) -- (1.5,.6) -- (2,1) -- (2.5,.6) -- (3,1) -- (2,2) -- (1.5,2); \draw[->] (-1,1) -- (-.6,.5) -- (-1,0); \end{tikzpicture} \\ \hline 2B & $c-1$ & $-{}-+-$ & \begin{tikzpicture} [scale = .4, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (2.2,2.2); \draw (-.3,.3) -- (0,0) -- (1.3,1.3); \draw (1.7,1.7)--(2,2); \draw (-1,0) -- (0,1) -- (0.3,0.7); \draw (-1,1) -- (-.7,.7); \draw (0.7,0.3) -- (1,0) -- (2,1) -- (1,2) -- (0,2) -- (-1,2); \draw[->] (0.5,0.5) -- (0.9, 0.9); \draw[->] (0.3,0.7) -- (0.1,0.9); \draw[->] (1.5,1.5) -- (1.9,1.1); \draw[->] (1.7, 1.7) -- (1.9, 1.9); \draw[->] (-.5,.5) -- (-.9,.1); \draw[->] (-.3,.3) -- (-.1,.1); \end{tikzpicture} & \begin{tikzpicture} [scale = .4, rounded corners = 1mm] \draw[white] (-1.2,-.2) rectangle (2.2,2.2); \draw[->] (0,1) arc (90:450:.4cm and .5cm); \draw[->] (-1,1) -- (-.6,.5) -- (-1,0); \draw[->] (-1,2) -- (1,2) --(1.5,1.6) -- 
(2,2); \draw[->] (1,1) -- (1.5,1.4) -- (2,1) -- (1,0) -- (0.6,0.5) -- (1,1); \end{tikzpicture} \\ \hline \hline 3 & $c$ & $+-{}-+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw (0,0) -- (1,1) -- (1.3,.7); \draw (0,1) -- (0.3,0.7); \draw (0.7,0.3) -- (1,0) -- (2,1) -- (2.3,0.7); \draw (1.7,0.3) -- (2,0) -- (3,1) -- (2,2) -- (0,2); \draw (2.7,0.3) -- (3,0); \draw[->] (0.5, 0.5) -- (0.9, 0.9); \draw[->] (0.7,0.3) -- (0.9,0.1); \draw[->] (1.5, 0.5) -- (1.9,0.9); \draw[->] (1.7,0.3) -- (1.9, 0.1); \draw[->] (2.5,0.5) -- (2.9,0.9); \draw[->] (2.7,0.3) -- (2.9, 0.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw[->] (0,0) -- (.5,.4) -- (1,0) -- (1.5,.4) -- (2,0) -- (2.5,.4) -- (3,0); \draw[->] (0,1) -- (.5,.6) -- (1,1) -- (1.5,.6) -- (2,1) -- (2.5,.6) -- (3,1) -- (2,2) -- (0,2); \end{tikzpicture} \\ \hline 3 & $c-2$ & $+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (1.2,2.2); \draw (0,1) -- (.3,.7); \draw (.7,.3) -- (1,0); \draw (0,0) -- (1,1) -- (0,2); \draw[->] (.5,.5) -- (.9,.9); \draw[->] (.7,.3) -- (.9,.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (1.2,2.2); \draw[->] (0,0) -- (.5,.4) -- (1,0); \draw[->] (0,1) -- (.5,.6) -- (1,1) -- (0,2); \end{tikzpicture} \\ \hline \hline 4 & $c$ & $++-+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw (0,0) -- (2,0) -- (3,1) -- (2,2) -- (1.7,1.7); \draw (1.3,1.3) -- (1,1) -- (0,2); \draw (0,1) -- (.3,1.3); \draw (.7,1.7) -- (1,2) -- (2.3,.7); \draw (2.7,0.3) -- (3,0); \draw[->] (0.5, 1.5) -- (0.1, 1.9); \draw[->] (0.7,1.7) -- (0.9,1.9); \draw[->] (1.5, 1.5) -- (1.9,1.1); \draw[->] (1.3,1.3) -- (1.1, 1.1); \draw[->] (2.5,0.5) -- (2.9,0.9); \draw[->] (2.7,0.3) -- (2.9, 0.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (3.2,2.2); \draw[->] (0,0) -- (2,0) -- (2.5,.4) -- (3,0); \draw[->] (0,1) -- (.4,1.5) -- (0,2); \draw[->] (1,2) arc (90:-270:.4 cm and .5cm); \draw[->] (2,1) -- (2.5,.6) -- (3,1) -- (2,2) -- (1.6,1.5) -- (2,1); \end{tikzpicture} \\ \hline 4 & $c-2$ & $+$ & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (1.2,2.2); \draw (0,1) -- (.3,.7); \draw (.7,.3) -- (1,0); \draw (0,0) -- (1,1) -- (0,2); \draw[->] (.5,.5) -- (.9,.9); \draw[->] (.7,.3) -- (.9,.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .5, rounded corners = 1mm] \draw[white] (-.2,-.2) rectangle (1.2,2.2); \draw[->] (0,0) -- (.5,.4) -- (1,0); \draw[->] (0,1) -- (.5,.6) -- (1,1) -- (0,2); \end{tikzpicture} \\ \hline \end{tabular} \caption{Alternating diagrams and Seifert states corresponding to the cases in the proof of Theorem \ref{thm:Seifertrecursion}.} \label{tab:Seifert} \end{table} \begin{theorem} \label{thm:Seifertrecursion} Let $s(c)$ be the total number of Seifert circles obtained when Seifert's algorithm is applied to the alternating $2$-bridge diagrams associated with words in $T(c)$. Then $s(c)$ satisfies the recursion $s(c)= s(c-1) + 2s(c-2) + 3t(c-2)$. \end{theorem} \begin{proof} Following the ideas from earlier in this section, we consider the contributions to $s(c)$ from each of the four cases, calling these $s_1(c)$, $s_2(c)$, $s_3(c)$, and $s_4(c)$ so that $s(c)=s_1(c)+s_2(c)+s_3(c)+s_4(c)$. 
Refer to Table \ref{tab:Seifert} for pictures of each of the cases, where the orientations of the crossings are determined by Lemma \ref{lem:or1}. In case 3, the final string $+-{}-+$ in a word with crossing number $c$ is replaced by $+$ in a new word with crossing number $c-2$. The partial Seifert states in the last column of Table \ref{tab:Seifert} before and after the replacement will have the same number of components when completed. Therefore $s_3(c) = s(c-2)$, the total number of Seifert circles from $T(c-2)$. In case 4, the final string $++-+$ in a word with crossing number $c$ is replaced by $+$ in a new word with crossing number $c-2$. When the partial Seifert states in the last column of Table \ref{tab:Seifert} are completed, the state before the replacement will have two more components than the state after the replacement. Thus $s_4(c)=s(c-2)+2t(c-2)$, the total number of Seifert circles from $T(c-2)$ and additionally counting two circles for each element in $T(c-2)$. In case 1, the final string $+-+$ in a word with crossing number $c$ is replaced by a $++-$ in a new word with crossing number $c-1$. When the partial Seifert states in the last column of Table \ref{tab:Seifert} are completed, the state before the replacement will have one more component than the state after the replacement. Thus $s_1(c)$ is equal to the sum of the total number of Seifert circles in words in $T(c-1)$ that end with $++-$ and $t_1(c)$, the number of words in case 1. The subset of $T(c-1)$ consisting of words ending with $++-$ can be partitioned into the subset of words ending in $-++-$ (case 3 for $c-1$ crossings) and the subset of words ending in $-{}-++-$ (case 2 for $c-1$ crossings). Thus the total number of Seifert circles is \[s_1(c) = s_2(c-1) + s_3(c-1) + t_1(c) = s_2(c-1)+s_3(c-1)+2t(c-3).\] In case 2, the final string $++ -{}-+$ in a word $w\in T(c)$ is replaced by $+-$, obtaining a diagram with $c-1$ crossings. The $(c-3)$rd run in $w$ is either a single $-$ or a double $-{}-$; we name these cases $2A$ and $2B$, respectively. So in case $2A$, the final string $-++-{}-+$ in $w$ is replaced with $-+-$, and in case $2B$, the final string $-{}-++-{}-+$ in $w$ is replaced with $-{}-+-$. Let $s_{2A}(c)$ and $s_{2B}(c)$ be the number of Seifert circles coming from words in $T(c)$ in cases $2A$ and $2B$, respectively. In case $2A$, Table \ref{tab:Seifert} shows that the Seifert state before the replacement has one more component than the Seifert state after the replacement. Because the replacement words end with $-+-$, the set of replacement words for case $2A$ is case 1 for $c-1$ crossings. Therefore $s_{2A}(c) = s_1(c-1) + t_1(c-1)$. In case $2B$, Table \ref{tab:Seifert} shows that the Seifert state before the replacement has one fewer component than the Seifert state after the replacement. Because the replacement words end with $-{}-+-$, the set of replacement words is case 4 for $c-1$ crossings. Thus $s_{2B}(c) = s_4(c-1) - t_4(c-1)$. Lemma \ref{lem:countcases} implies that $t_1(c-1) = 2t(c-4)$ and $t_4(c-1)=t(c-3)$. Therefore, \begin{align*} s_2(c) = & \; s_{2A}(c) + s_{2B}(c)\\ = & \; [s_1(c-1) + t_1(c-1)] + [s_4(c-1) - t_4(c-1)]\\ = & \; s_1(c-1) + s_4(c-1) -t(c-3) + 2t(c-4) . \end{align*} Hence, we have \begin{align*} s(c) = & \; s_1(c)+s_2(c)+s_3(c)+s_4(c)\\ = & \; [s_2(c-1) + s_3(c-1) + 2t(c-3)] + [s_1(c-1) + s_4(c-1) -t(c-3) + 2t(c-4)]\\ & \;+ s(c-2) + s(c-2)+ 2t(c-2)\\ = &\; \sum_{i=1}^4 s_i(c-1) + 2s(c-2) + [t(c-3) + 2t(c-4)] + 2t(c-2)\\ = & \; s(c-1) + 2s(c-2) + 3t(c-2). 
\end{align*} \end{proof} \subsection{Palindromic case} \label{subsec:palindromic} Recall that $T_p(c)$ is the set of strings in $\{+,-\}$ of palindromic type for crossing number $c$. Alternatively we may abuse notation by using $T_p(c)$ to refer to the set of the corresponding alternating knot diagrams. Let $t_p(c)$ be the number of elements in the set $T_p(c)$. Theorem \ref{thm:list} states that all $2$-bridge knots are counted twice in $T(c)$ \emph{except} for those represented by words of palindromic type in $T_p(c)$, which are counted only once. For odd $c$, such words are indeed palindromes; for even $c$, the words need to be read backwards and then have all $+$'s changed to $-$'s and vice versa. Equation \ref{eq:avseifert} states that the average number of Seifert circles in an alternating diagram of a $2$-bridge knot with crossing number $c$ is $\overline{s}_c = \frac{s(c) + s_p(c)}{2|\mathcal{K}_c|}$. In this subsection we mirror the previous subsection to obtain a recursive formula for $s_p(c)$. In the discussion below, we consider separately the cases of odd $c$ and even $c$; so let us define $c=2i+1$ and $c=2i$ in these cases, respectively. Let $T_{po}(i)$ and $T_{pe}(i)$ be the respective sets, and let $t_{po}(i)$ and $t_{pe}(i)$ be the number of elements in $T_{po}(i)$ and $T_{pe}(i)$, respectively. \begin{proposition} \label{prop:numberpalindromic} The number $t_p(c)$ of words of palindromic type in $T_p(c)$ satisfies the recursion $t_p(c)=t_p(c-2)+2t_p(c-4)$. Moreover, \[t_p(c) = \begin{cases} J\left(\frac{c-2}{2}\right) = \frac{2^{(c-2)/2} - (-1)^{(c-2)/2}}{3} & \text{if $c$ is even and}\\ J\left(\frac{c-1}{2}\right) = \frac{2^{(c-1)/2} - (-1)^{(c-1)/2}}{3} & \text{if $c$ is odd,}\\ \end{cases} \] where $J(n)$ is the $n$th Jacobsthal number. \end{proposition} When restricted to a fixed parity of $c$, this recursion follows the same pattern as the recursion $t(c)=t(c-1)+2t(c-2)$ for $t(c)$. \begin{proof} We proceed by induction on $c$. The base cases $t_p(3)=t_p(4)=1$ and $t_p(5)=t_p(6)=1$ are satisfied by the proof of Proposition \ref{prop:countterms} and Table \ref{tab:c456}, respectively. Consider separately the number of terms $t_{pe}(i)$ and $t_{po}(i)$ for $c=2i$ and $c=2i+1$, respectively, with the goal of showing the recursion mentioned above. Suppose that $c=2i$ is even, and let $w\in T_{pe}(i)$. Since $w=\overline{r}(w)$, the $i$th and $(i+1)$st runs must have the same length but be opposite symbols, and the $(i-1)$st and $(i+2)$nd runs must have the same length but be opposite symbols. Without loss of generality, assume $i$ is even; then the $(i-1)$st run is a single $+$ or a double $++$, and the $i$th run is a single $-$ or a double $-{}-$. Then the $(i-1)$st and $i$th runs must be exactly one of the following cases: \begin{itemize} \item[(1$_{pe}$)] a single $+$ followed by a single $-$, \item[(2$_{pe}$)] a double $++$ followed by a double $-{}-$, \item[(3$_{pe}$)] a single $+$ followed by a double $-{}-$, or \item[(4$_{pe}$)] a double $++$ followed by a single $-$. \end{itemize} If we replace the center four runs $+-+-$ in case 1$_{pe}$ with $++-{}-$, then two crossings can be removed without changing the length. If we replace the center four runs $++-{}-++-{}-$ in case 2$_{pe}$ with $+-$, then two crossings can be removed without changing the length requirement modulo 3. Furthermore, in both cases this does not affect the parity of the number of crossings, and we are left with $c-2$ crossings.
These two cases partition $T_p(c-2)$, the subset of $T(c-2)$ consisting of words of palindromic type with crossing number $c-2$. In case 2$_{pe}$, the $i$th run is a single, and in case 1$_{pe}$, it is a double. Thus these two cases together contribute $t_p(c-2)$ words. The strings $+-{}-++-$ and $++-+-{}-$ in positions $i-1$ through $i+2$ in cases 3$_{pe}$ and 4$_{pe}$ each have length six, which is convenient for our model. If these six symbols are removed, then the length requirement modulo 3 remains satisfied. What is left after removal in each case is the set $T_p(c-4)$, and so cases 3$_{pe}$ and 4$_{pe}$ contribute $2t_p(c-4)$ words. Hence if $c$ is even, then $t_p(c)=t_p(c-2) + 2t_p(c-4)$. Since $t_p(4)=t_p(6)=1$ and $t_p(c)=t_p(c-2) + 2t_p(c-4)$ when $c$ is even, the sequence $t_p(2n+2)$ for $n=1,2,\dots$ is the Jacobsthal sequence. Thus, if $c$ is even, then \[t_p(c) = J\left(\frac{c-2}{2}\right) = \frac{2^{(c-2)/2} - (-1)^{(c-2)/2}}{3}.\] Now suppose $c=2i+1$ is odd, and let $w\in T_{po}(i)$. Since $c=2i+1$ is odd, the $(i+1)$st run is in the middle of the word, and since $w=r(w)$, the $i$th run and the $(i+2)$nd run are the same length and consist of the same symbol. Without loss of generality, assume $i$ is odd; thus the $(i+1)$st run is a single $-$ or a double $-{}-$. Then the $i$th through $(i+2)$nd runs must be exactly one of the following cases: \begin{itemize} \item[(1$_{po}$)] a single $+$ followed by a double $-{}-$ followed by a single $+$, \item[(2$_{po}$)] a double $++$ followed by a single $-$ followed by a double $++$, \item[(3$_{po}$)] a single $+$ followed by a single $-$ followed by a single $+$, or \item[(4$_{po}$)] a double $++$ followed by a double $-{}-$ followed by a double $++$. \end{itemize} If we replace the string $+--+$ in case 1$_{po}$ with a single $+$ or if we replace the string $++-++$ in case 2$_{po}$ with a double $++$, then two crossings can be removed without changing the length requirement modulo 3. Furthermore this does not affect the parity of the number of crossings, and we are left with $c-2$ crossings. These two cases partition $T_p(c-2)$, the subset of words of palindromic type with crossing number $c-2$. In case 1$_{po}$, the middle run is a single, and in case 2$_{po}$, it is a double. Thus these two cases together contribute $t_p(c-2)$ words. In case $3_{po}$, the $i$th through $(i+2)$nd runs are $+-+$. There are two possibilities for the $(i-1)$st through the $(i+3)$rd runs: either $ - + - + -$ or $-{}- + - + -{}-$. The string $ - + - + -$ can be replaced with $-{}-$, and the string $-{}- + - + -{}-$ can be replaced with $-$. These replacements respect the length condition modulo 3 and result in words of palindromic type with crossing number $c-4$ in $T_p(c-4)$. In the first replacement, the middle run is a double $-{}-$, and in the second replacement, the middle run is a single $-$; therefore, these two subcases partition $T_p(c-4)$ and contribute $t_p(c-4)$ words. In case $4_{po}$, the $i$th through $(i+2)$nd runs are $++-{}-++$. There are two possibilities for the $(i-1)$st through the $(i+3)$rd runs: either $-++-{}-++-$ or $-{}- ++ -{}- ++ -{}-$. The string $-++-{}-++-$ can be replaced with $-{}-$, and the string $-{}- ++ -{}- ++ -{}-$ can be replaced with $-$. These replacements respect the length condition modulo 3 and result in words of palindromic type with crossing number $c-4$ in $T_p(c-4)$.
In the first replacement, the middle run is a double $-{}-$, and in the second replacement, the middle run is a single $-$; therefore, these two subcases partition $T_p(c-4)$ and contribute $t_p(c-4)$ words. Thus when $c$ is odd, $t_p(c) = t_p(c-2)+2t_p(c-4)$. Since $t_p(3)=t_p(5)=1$ and $t_p(c) = t_p(c-2)+2t_p(c-4)$ when $c$ is odd, the sequence $t_p(2n+1)$ for $n=1,2,\dots$ is the Jacobsthal sequence. Thus, if $c$ is odd, then \[t_p(c) = J\left(\frac{c-1}{2}\right) = \frac{2^{(c-1)/2} - (-1)^{(c-1)/2}}{3}.\] \end{proof} \begin{example} \label{ex:c9counttermsp} Table \ref{tab:c579p} shows the words of palindromic type in $T_p(5)$, $T_p(7)$, and $T_p(9)$. Note that for $c=9$, we have even $i$, which is opposite the discussion in the proof above. Subwords of words in $T_p(9)$ in parentheses are replaced according to the proof of Proposition \ref{prop:numberpalindromic} to obtain the words on the left in either $T_p(5)$ or $T_p(7)$. We see that $t_p(9) = t_p(7) + 2t_p(5)$. \end{example} \begin{center} \begin{table}[h] \begin{tabular}{|c|c||c|c|} \hline $T_p(5)$ & $+-{}-(+)-{}-+$ & $+-{}-(++-{}-++-{}-++)-{}-+$ & \\ \cline{1-2} $T_p(5)$ & $+-{}-(+)-{}-+$ & $+-{}-(++-+-++)-{}-+$ & \\ \cline{1-2} \multirow{3}{*}{$T_p(7)$} & $+-+(-)+-+$ & $+-+(-++-)+-+$ & $T_p(9)$\\ & $+-++(-{}-)++-+$ & $+-++(-{}-+-{}-)++-+$ & \\ & $+-{}-+(-{}-)+-{}-+$ & $+-{}-+(-{}-+-{}-)+-{}-+$ & \\ \hline \end{tabular} \caption{The sets $T_p(5)$, $T_p(7)$ and $T_p(9)$ with the subwords in parentheses replaced as in the proof of Proposition \ref{prop:numberpalindromic}.} \label{tab:c579p} \end{table} \end{center} \begin{example} \label{ex:c10counttermsp} Table \ref{tab:c6810p} shows the words of palindromic type in $T_p(6)$, $T_p(8)$, and $T_p(10)$. Note that for $c=10$, we have odd $i$, which is opposite the discussion in the proof above. Subwords of words in $T_p(10)$ in parentheses are replaced according to the proof of Proposition \ref{prop:numberpalindromic} to obtain the words on the left in either $T_p(6)$ or $T_p(8)$. We see that $t_p(10) = t_p(8) + 2t_p(6)$. \end{example} \begin{center} \begin{table}[h] \begin{tabular}{|c|c||c|c|} \hline $T_p(6)$ & $+-{}-++()-{}-++-$ & $+-{}-++(-++-{}-+)-{}-++-$ & \\ \cline{1-2} $T_p(6)$ & $+-{}-++()-{}-++-$ & $+-{}-++(--+-++)-{}-++-$ & \\ \cline{1-2} \multirow{3}{*}{$T_p(8)$} & $+-+(--++)-+-$ & $+-+(-+-+)-+-$ & $T_p(10)$\\ & $+-++(-+)-{}-+-$ & $+-++(--++-{}-++)-{}-+-$ & \\ & $+-{}-+(-+)-++-$ & $+-{}-+(--++-{}-++)-++-$ & \\ \hline \end{tabular} \caption{The sets $T_p(6)$, $T_p(8)$, and $T_p(10)$ with the subwords in parentheses replaced as in the proof of Proposition \ref{prop:numberpalindromic}.} \label{tab:c6810p} \end{table} \end{center} We are now ready to prove the recursive formula for $s_p(c)$, the total number of Seifert circles from $T_p(c)$. \begin{theorem} \label{thm:Seifertrecursionpalindrome} Let $s_p(c)$ be the total number of Seifert circles over all 2-bridge knots of palindromic type with crossing number $c$ for all knots appearing in $T_p(c)$. Then $s_p(c)$ satisfies the recursion $s_p(c)= s_p(c-2) + 2s_p(c-4) + 6t_p(c-4)$. \end{theorem} \begin{proof} As in the proof of Proposition \ref{prop:numberpalindromic}, we consider separately the cases for even $c=2i$ and odd $c=2i+1$ crossing number, with notation $s_{pe}(i)=s_p(2i)$ and $s_{po}(i)=s_p(2i+1)$. Suppose $c=2i$ is even. 
In the same spirit as Lemma \ref{lem:countcases}, define $t_{pe1}(i)$, $t_{pe2}(i)$, $t_{pe3}(i)$, and $t_{pe4}(i)$ to be the number of words in cases $1_{pe}$, $2_{pe}$, $3_{pe}$, and $4_{pe}$, respectively. Similarly, as in the proof of Theorem \ref{thm:Seifertrecursion}, define $s_{pe1}(i)$, $s_{pe2}(i)$, $s_{pe3}(i)$, and $s_{pe4}(i)$ to be the number of Seifert circles coming from words in cases $1_{pe}$, $2_{pe}$, $3_{pe}$, and $4_{pe}$, respectively. Then $s_{pe}(i)=s_{pe1}(i)+s_{pe2}(i)+s_{pe3}(i)+s_{pe4}(i)$. Refer to Table \ref{tab:SeifertPalindromeEven} for pictures of each of the cases, where the orientations of the crossings are determined by Lemma \ref{lem:or1}. In case 1$_{pe}$, the center string $+-+-$ in a word with crossing number $c$ is replaced by $++-{}-$ in a new word with crossing number $c-2$, and in case $2_{pe}$, the center string $++-{}-++-{}-$ in a word with crossing number $c$ is replaced by $+-$ in a new word with crossing number $c-2$. Lemma \ref{lem:or1} and the first four rows in Table \ref{tab:SeifertPalindromeEven} imply that the only changes caused by these replacements are the removal of two horizontally-oriented crossings. The Seifert states before and after the replacements have the same number of components. Since the center strings $+-$ and $++-{}-$ partition $T_{pe}(i-1)$, it follows that $s_{pe1}(i)+s_{pe2}(i)=s_{pe}(i-1)$. As in the odd palindromic case of the proof of Proposition \ref{prop:numberpalindromic} above, we split cases 3$_{pe}$ and 4$_{pe}$ into two subcases called $A$ and $B$ depending on whether the ($i-2$)nd run is a single $-$ or a double $-{}-$, respectively. In case 3A$_{pe}$, the center string $-+-{}-++-+$ in a word with crossing number $c$ is replaced by $-+$ in a new word with crossing number $c-4$. Lemma \ref{lem:or1} and the fifth and sixth rows in Table \ref{tab:SeifertPalindromeEven} imply that the Seifert state after the replacement has four fewer components than the Seifert state before the replacement. So in order to count $s_{pe3A}(i)$, we need to count the number of words in this case. The center string in the new word with crossing number $c-4$ is $-+$. The cases that have such a center word are 1$_{pe}$ and 4$_{pe}$ for crossing number $c-4$. Thus $s_{pe3A}(i)=(s_{pe1}(i-2)+s_{pe4}(i-2))+4(t_{pe1}(i-2)+t_{pe4}(i-2))$. In case 3B$_{pe}$, the center string $-{}-+-{}-++-++$ in a word with crossing number $c$ is replaced by $-{}-++$ in a new word with crossing number $c-4$. Lemma \ref{lem:or1} and the seventh and eighth rows in Table \ref{tab:SeifertPalindromeEven} imply that the Seifert state after the replacement has two fewer components than the Seifert state before the replacement. So in order to count $s_{pe3B}(i)$, we need to count the number of words in this case. The center string in the new word with crossing number $c-4$ is $-{}-++$. The cases that have such a center word are 2$_{pe}$ and 3$_{pe}$ for crossing number $c-4$. Thus $s_{pe3B}(i)=(s_{pe2}(i-2)+s_{pe3}(i-2))+2(t_{pe2}(i-2)+t_{pe3}(i-2))$. In case 4A$_{pe}$, the center string $-++-+-{}-+$ in a word with crossing number $c$ is replaced by $-+$ in a new word with crossing number $c-4$. Lemma \ref{lem:or1} and the ninth and tenth rows in Table \ref{tab:SeifertPalindromeEven} imply that the Seifert state after the replacement has two fewer components than the Seifert state before the replacement. By an argument similar to that for case 3A$_{pe}$, we get $s_{pe4A}(i)=(s_{pe1}(i-2)+s_{pe4}(i-2))+2(t_{pe1}(i-2)+t_{pe4}(i-2))$.
In case 4B$_{pe}$, the center string $-{}-++-+-{}-++$ in a word with crossing number $c$ is replaced by $-{}-++$ in a new word with crossing number $c-4$. Lemma \ref{lem:or1} and the last two rows in Table \ref{tab:SeifertPalindromeEven} imply that the Seifert state after the replacement has four fewer components than the Seifert state before the replacement. By an argument similar to that for case 3B$_{pe}$, we get $s_{pe4B}(i)=(s_{pe2}(i-2)+s_{pe3}(i-2))+4(t_{pe2}(i-2)+t_{pe3}(i-2))$. Thus \begin{align*} s_{pe3}(i) + s_{pe4}(i) = & \; s_{pe3A}(i) + s_{pe4B}(i) + s_{pe3B}(i) + s_{pe4A}(i) \\ = & \; (s_{pe1}(i-2)+s_{pe4}(i-2))+4(t_{pe1}(i-2)+t_{pe4}(i-2)) \\ & \; + (s_{pe2}(i-2)+s_{pe3}(i-2))+4(t_{pe2}(i-2)+t_{pe3}(i-2))\\ & \; + (s_{pe2}(i-2)+s_{pe3}(i-2))+2(t_{pe2}(i-2)+t_{pe3}(i-2))\\ & \; + (s_{pe1}(i-2)+s_{pe4}(i-2))+2(t_{pe1}(i-2)+t_{pe4}(i-2))\\ = & \; 2\sum_{j=1}^4 s_{pej}(i-2) + 6 \sum_{j=1}^4 t_{pej}(i-2)\\ = & \; 2s_{pe}(i-2) + 6 t_{pe}(i-2). \end{align*} Concluding the even length case, we have \[s_{pe}(i) = \sum_{j=1}^4 s_{pej}(i) = s_{pe}(i-1) + 2s_{pe}(i-2) + 6 t_{pe}(i-2).\] When $c=2i+1$ is odd, one can prove that $s_{po}(i) = s_{po}(i-1) + 2s_{po}(i-2) + 6 t_{po}(i-2)$ in a similar fashion. The interested reader can work out the details from Table \ref{tab:SeifertPalindromeOdd}. Since $s_{pe}(i)=s_p(2i)$ and $s_{po}(i)=s_p(2i+1)$, it follows that \[s_p(c) = s_p(c-2) + 2s_p(c-4)+6t_p(c-4).\] \end{proof} \begin{table} \begin{tabular}{|c|c||c|c|c|} \hline Case & Crossing & String & Alternating Diagram & Seifert state \\ & Number & & & \\ \hline \hline 1$_{pe}$ & $c$ & \tiny{$+-+-$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (10.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (8,1) node[rotate = 180]{$\overline{R}$}; \draw (7,-.5) rectangle (9,2.5); \draw (0,0) -- (1,0); \draw (9,2) -- (10,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (3,1) -- (3.3,.7); \draw (3,2) -- (4,2) -- (5.3,.7); \draw (4.7,1.7) -- (5,2) -- (6,2) -- (7,1); \draw (3.7,.3) -- (4,0) -- (5,0) -- (6.3,1.3); \draw (6.7,1.7) -- (7,2); \draw (5.7,.3) -- (6,0) -- (7,0); \end{scope} \draw[->] (3.5,.5) -- (3.9,.9); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.1,1.9); \draw[->] (4.7,1.7) -- (4.9,1.9); \draw[->] (5.5,.5) -- (5.9,.9); \draw[->] (5.3,.7) -- (5.1,.9); \draw[->] (6.5,1.5) -- (6.9,1.1); \draw[->] (6.7,1.7) -- (6.9,1.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (10.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (7,-.5) rectangle (9,2.5); \draw (0,0) -- (1,0); \draw (9,2) -- (10,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (3.5,.4) -- (4,0) -- (5,0) -- (5.4,.5) -- (5,1) -- (4.6,1.5) -- (5,2) -- (6,2) -- (6.5,1.6) -- (7,2); \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.5,.6) -- (3,1); \draw[->] (7,0) -- (6,0) -- (5.6,.5) -- (6,1) -- (6.5,1.4) -- (7,1); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners=1mm] (3,2) -- (2.6,1.5) -- (3,1); \draw[densely dashed] (7,2) -- (9,2); \draw[densely dashed, rounded corners=1mm] (7,1) -- (7.4,.5) -- (7,0); \end{tikzpicture} \\ \hline 1$_{pe}$ & $c-2$ & \tiny{$++ -{}-$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5,1); \draw
(3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,2); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (5,0) -- (4.6,.5) -- (5,1); \draw[->] (3,0) -- (4,0) -- (4.4,.5) -- (4,1) -- (3.6,1.5) -- (4,2) -- (5,2); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners=1mm] (3,2) -- (2.6,1.5) -- (3,1); \draw[densely dashed] (5,2) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,.5) -- (5,0); \end{tikzpicture} \\ \hline \hline 2$_{pe}$ & $c$ & \tiny{$++-{}-++-{}-$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (10.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (8,1) node[rotate = 180]{$\overline{R}$}; \draw (7,-.5) rectangle (9,2.5); \draw (0,0) -- (1,0); \draw (9,2) -- (10,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5.3,1.3); \draw (5.7,1.7) -- (6,2) --(7,2); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,2) -- (6.3,.7); \draw (6.7,.3) -- (7,0); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0) -- (6,0) -- (7,1); \end{scope} \draw[->] (3.5,1.5) -- (3.9,1.1); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.1,.1); \draw[->] (4.7,.3) -- (4.9,.1); \draw[->] (5.5,1.5) -- (5.9,1.1); \draw[->] (5.3,1.3) -- (5.1,1.1); \draw[->] (6.5,.5) -- (6.9,.9); \draw[->] (6.7,.3) -- (6.9,.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (10.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (7,-.5) rectangle (9,2.5); \draw (0,0) -- (1,0); \draw (9,2) -- (10,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (3.5,1.6) -- (4,2) -- (5,2) -- (5.4,1.5) -- (5,1) -- (4.6,.5) -- (5,0) -- (6,0) -- (6.5,.4) -- (7,0); \draw[->] (3,1) -- (3.5,1.4) -- (4,1) -- (4.4,.5) -- (4,0) -- (3,0); \draw[->] (7,2) -- (6,2) -- (5.6,1.5) -- (6,1) -- (6.5,.6) -- (7,1); \end{scope} \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners=1mm] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (7,0) -- (9,2); \draw[densely dashed, rounded corners=1mm] (7,2) -- (7.4,1.5) -- (7,1); \end{tikzpicture} \\ \hline 2$_{pe}$ & $c-2$ & \tiny{$+-$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2); \draw (3,1) -- (3.3,.7); \draw (3,2) -- (4,2) -- (5,1); \draw (3.7,.3) -- (4,0) -- (5,0); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (5,2) -- (4.6,1.5) -- (5,1); \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.6,.5) -- (4,0) -- (5,0); \end{scope} \draw[densely 
dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners=1mm] (3,0) -- (2.6,.5) -- (3,1); \draw[densely dashed] (5,0) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,1.5) -- (5,2); \end{tikzpicture} \\ \hline \hline 3A$_{pe}$ & $c$ & \tiny{$-+--++-+$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (10,1) node[rotate = 180]{$\overline{R}$}; \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5,1) -- (5.3,.7); \draw (5.7,.3) -- (6,0) -- (8,0) -- (9,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (6,2) -- (7,1) -- (7.3,1.3); \draw (7.7,1.7) -- (8,2) -- (9,2); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0) -- (6.3,1.3); \draw (6.7,1.7) -- (7,2) -- (8.3,.7); \draw (8.7,.3) -- (9,0); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \draw[->] (5.5,.5) -- (5.1,.1); \draw[->] (5.7,.3) -- (5.9,.1); \draw[->] (6.5,1.5) --(6.9,1.1); \draw[->] (6.3,1.3) -- (6.1,1.1); \draw[->] (7.5,1.5) -- (7.1,1.9); \draw[->] (7.7,1.7) -- (7.9,1.9); \draw[->] (8.5,.5) -- (8.9,.9); \draw[->] (8.3,.7) -- (8.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (4,0) -- (4.4,.5) -- (4,1) -- (3.6,1.5) -- (4,2) -- (6,2) -- (6.4,1.5) -- (6,1) --(5.6,.5) -- (6,0) -- (8,0) -- (8.4,.5) -- (8,1) -- (7.6,1.5) -- (8,2) -- (9,2); \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (9,0) -- (8.6,.5) -- (9,1); \draw[->] (5,1) arc (90:-270:.4cm and .5cm); \draw[->] (7,2) arc (90:450:.4cm and .5cm); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners =1mm] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (9,2) -- (11,2); \draw[densely dashed, rounded corners =1mm] (9,1) -- (9.4,.5) -- (9,0); \end{tikzpicture} \\ \hline 3A$_{pe}$ & $c-4$ & \tiny{$-+$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,2); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (5,0) -- (4.6,.5) -- (5,1); \draw[->] (3,0) -- (4,0) -- (4.4,.5) -- (4,1) -- (3.6,1.5) -- (4,2) -- (5,2); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners=1mm] (3,2) -- (2.6,1.5) -- (3,1); \draw[densely dashed] (5,2) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,.5) -- (5,0); \end{tikzpicture} \\ \hline \hline 3B$_{pe}$ & $c$ & \tiny{$--+--++-++$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); 
\draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (10,1) node[rotate = 180]{$\overline{R}$}; \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,1) -- (4.3,.7); \draw (3.7,.3) -- (4,0) -- (5,1) -- (5.3,.7); \draw (5.7,.3) -- (6,0) -- (9,0); \draw (3,1) -- (3.3,.7); \draw (7.7,1.7) -- (8,2) -- (9,1); \draw (3,2) -- (6,2) -- (7,1) -- (7.3,1.3); \draw (4.7,.3) -- (5,0) -- (6.3,1.3); \draw (6.7,1.7) -- (7,2) -- (8,1) -- (8.3,1.3); \draw (8.7,1.7) -- (9,2); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \draw[->] (5.5,.5) -- (5.1,.1); \draw[->] (5.7,.3) -- (5.9,.1); \draw[->] (6.5,1.5) --(6.9,1.1); \draw[->] (6.3,1.3) -- (6.1,1.1); \draw[->] (7.5,1.5) -- (7.1,1.9); \draw[->] (7.7,1.7) -- (7.9,1.9); \draw[->] (8.5,1.5) -- (8.9,1.1); \draw[->] (8.3,1.3) -- (8.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (6,2) -- (6.4,1.5) -- (6,1) -- (5.6,.5) -- (6,0) -- (9,0); \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (9,2) -- (8.6,1.5) -- (9,1); \draw[->] (5,1) arc (90:-270:.4cm and .5cm); \draw[->] (4,1) arc (90:450:.4cm and .5cm); \draw[->] (7,2) arc (90:450:.4cm and .5cm); \draw[->] (8,2) arc (90:-270:.4cm and .5cm); \end{scope} \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners =1mm] (3,0) -- (2.6,.5) -- (3,1); \draw[densely dashed] (9,0) -- (11,2); \draw[densely dashed, rounded corners =1mm] (9,1) -- (9.4,1.5) -- (9,2); \end{tikzpicture} \\ \hline 3B$_{pe}$ & $c-4$ & \tiny{$--++$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2); \draw (3,1) -- (3.3,.7); \draw (3,2) -- (4,2) -- (5,1); \draw (3.7,.3) -- (4,0) -- (5,0); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (5,2) -- (4.6,1.5) -- (5,1); \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.6,.5) -- (4,0) -- (5,0); \end{scope} \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners=1mm] (3,0) -- (2.6,.5) -- (3,1); \draw[densely dashed] (5,0) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,1.5) -- (5,2); \end{tikzpicture} \\ \hline \hline 4A$_{pe}$ & $c$ & \tiny{$-++-+--+$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (10,1) node[rotate = 180]{$\overline{R}$}; \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (6,0) -- (7,1) -- (7.3,.7); \draw (7.7,.3) -- (8,0) -- (9,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,1) -- 
(5.3,1.3); \draw (5.7,1.7) -- (6,2) -- (9,2); \draw (3,2) -- (4,1) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2) -- (6.3,.7); \draw (6.7,.3) -- (7,0) -- (8,1) -- (8.3,.7); \draw (8.7,.3) -- (9,0); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \draw[->] (5.5,1.5) -- (5.1,1.9); \draw[->] (5.7,1.7) -- (5.9,1.9); \draw[->] (6.5,.5) --(6.9,.9); \draw[->] (6.3,.7) -- (6.1,.9); \draw[->] (7.5,.5) -- (7.1,.1); \draw[->] (7.7,.3) -- (7.9,.1); \draw[->] (8.5,.5) -- (8.9,.9); \draw[->] (8.3,.7) -- (8.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (6,0) -- (6.4,.5) -- (6,1) -- (5.6,1.5) -- (6,2) -- (9,2); \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (9,0) -- (8.6,.5) -- (9,1); \draw[->] (5,2) arc (90:450:.4cm and .5cm); \draw[->] (4,2) arc (90:-270:.4cm and .5cm); \draw[->] (7,1) arc (90:-270:.4cm and .5cm); \draw[->] (8,1) arc (90:450:.4cm and .5cm); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners =1mm] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (9,2) -- (11,2); \draw[densely dashed, rounded corners =1mm] (9,1) -- (9.4,.5) -- (9,0); \end{tikzpicture} \\ \hline 4A$_{pe}$ & $c-4$ & \tiny{$-+$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,2); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (5,0) -- (4.6,.5) -- (5,1); \draw[->] (3,0) -- (4,0) -- (4.4,.5) -- (4,1) -- (3.6,1.5) -- (4,2) -- (5,2); \end{scope} \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed, rounded corners=1mm] (3,2) -- (2.6,1.5) -- (3,1); \draw[densely dashed] (5,2) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,.5) -- (5,0); \end{tikzpicture} \\ \hline \hline 4B$_{pe}$ & $c$ &\tiny{$--++-+--++$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (10,1) node[rotate = 180]{$\overline{R}$}; \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2) -- (6.3,.7); \draw (6.7,.3) -- (7,0) -- (8.3,1.3); \draw (8.7,1.7) -- (9,2); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0) -- (6,0) -- (7,1) -- (7.3,.7); \draw (7.7,.3) -- (8,0) -- (9,0); \draw (3,2) -- (4,2) -- (5,1) -- (5.3,1.3); \draw (5.7,1.7) -- (6,2) -- (8,2) -- (9,1); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \draw[->] (5.5,1.5) -- (5.1,1.9); 
\draw[->] (5.7,1.7) -- (5.9,1.9); \draw[->] (6.5,.5) --(6.9,.9); \draw[->] (6.3,.7) -- (6.1,.9); \draw[->] (7.5,.5) -- (7.1,.1); \draw[->] (7.7,.3) -- (7.9,.1); \draw[->] (8.5,1.5) -- (8.9,1.1); \draw[->] (8.3,1.3) -- (8.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (12.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (9,-.5) rectangle (11,2.5); \draw (0,0) -- (1,0); \draw (11,2) -- (12,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.6,.5) -- (4,0) -- (6,0) -- (6.4,.5) -- (6,1) -- (5.6,1.5) -- (6,2) -- (8,2) -- (8.4,1.5) -- (8,1) -- (7.6,.5) -- (8,0) -- (9,0); \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (9,2) -- (8.6,1.5) -- (9,1); \draw[->] (5,2) arc (90:450:.4cm and .5cm); \draw[->] (7,1) arc (90:-270:.4cm and .5cm); \end{scope} \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners =1mm] (3,0) -- (2.6,.5) -- (3,1); \draw[densely dashed] (9,0) -- (11,2); \draw[densely dashed, rounded corners =1mm] (9,1) -- (9.4,1.5) -- (9,2); \end{tikzpicture} \\ \hline 4B$_{pe}$ & $c-4$ & \tiny{$--++$} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (2,1) node{$R$}; \draw (6,1) node[rotate = 180]{$\overline{R}$}; \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2); \draw (3,1) -- (3.3,.7); \draw (3,2) -- (4,2) -- (5,1); \draw (3.7,.3) -- (4,0) -- (5,0); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale=.4] \draw[white] (-.2,-.7) rectangle (8.2,2.7); \draw (1,-.5) rectangle (3,2.5); \draw (5,-.5) rectangle (7,2.5); \draw (0,0) -- (1,0); \draw (7,2) -- (8,2); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (5,2) -- (4.6,1.5) -- (5,1); \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.6,.5) -- (4,0) -- (5,0); \end{scope} \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed, rounded corners=1mm] (3,0) -- (2.6,.5) -- (3,1); \draw[densely dashed] (5,0) -- (7,2); \draw[densely dashed, rounded corners=1mm] (5,1) -- (5.4,1.5) -- (5,2); \end{tikzpicture} \\ \hline \end{tabular} \caption{Alternating diagrams and Seifert states corresponding to the even palindromic cases in the proof of Theorem \ref{thm:Seifertrecursionpalindrome}.} \label{tab:SeifertPalindromeEven} \end{table} \begin{table} \begin{tabular}{|c|c||c|c|c|} \hline Case & Crossing & String & Alternating Diagram & Seifert state \\ & Number & & & \\ \hline \hline 1$_{po}$ & $c$ & \tiny{$+-{}-+$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (9.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (6,-.5) rectangle (8,2.5); \draw (2,1) node{$R$}; \draw (7,1) node{$\reflectbox{R}$}; \draw (8,0) -- (9,0); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,1) -- (4.3,.7); \draw (4.7,.3) -- (5,0) -- (6,1); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0) -- (5,1) --(5.3,.7); \draw (5.7,.3) -- (6,0); \draw (3,2) -- (6,2); \end{scope} \draw[->] (3.5,.5) -- (3.9,.9); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.7,.3) -- (4.9,.1); \draw[->] (5.5,.5) -- (5.9,.9); \draw[->] (5.7,.3) -- (5.9,.1); \draw[->] (4,2) -- (3.2,2); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (9.2,2.7); \draw (0,0) -- 
(1,0); \draw (1,-.5) rectangle (3,2.5); \draw (6,-.5) rectangle (8,2.5); \draw (8,0) -- (9,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (3.5,.4) -- (4,0) -- (4.5,.4) -- (5,0) -- (5.5,.4) -- (6,0); \draw[->] (3,1) -- (3.5,.6) -- (4,1) -- (4.5,.6) -- (5,1) -- (5.5,.6) -- (6,1); \draw[->] (6,2) -- (3,2); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (6,1) -- (6.4,1.5) -- (6,2); \draw[densely dashed] (6,0) -- (8,0); \end{scope} \end{tikzpicture} \\ \hline 1$_{po}$ & $c-1$ & \tiny{$+$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,0) -- (4,1); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0); \draw (3,2) -- (4,2); \draw[->] (3.5,.5) -- (3.9,.9); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4,2) -- (3.2,2); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (3.5,.4) -- (4,0); \draw[->] (3,1) -- (3.5,.6) -- (4,1); \draw[->] (4,2) -- (3,2); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (4,1) -- (4.4,1.5) -- (4,2); \draw[densely dashed] (4,0) -- (6,0); \end{scope} \end{tikzpicture} \\ \hline \hline 2$_{po}$ & $c$ & \tiny{$++-++$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (9.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (6,-.5) rectangle (8,2.5); \draw (2,1) node{$R$}; \draw (7,1) node{$\reflectbox{R}$}; \draw (8,0) -- (9,0); \begin{scope}[rounded corners = 1mm] \draw (3,2) -- (4,1) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2) -- (6,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,1) -- (5.3,1.3); \draw (5.7,1.7) -- (6,2); \draw (3,0) -- (6,0); \end{scope} \draw[->] (3.5,1.5) -- (3.9,1.1); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.7,1.7) -- (4.9,1.9); \draw[->] (5.5,1.5) -- (5.9,1.1); \draw[->] (5.7,1.7) -- (5.9,1.9); \draw[->] (4,0) -- (3.2,0); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (9.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (6,-.5) rectangle (8,2.5); \draw (8,0) -- (9,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (3.5,1.6) -- (4,2) -- (4.5,1.6) -- (5,2) -- (5.5,1.6) -- (6,2); \draw[->] (3,1) -- (3.5,1.4) -- (4,1) -- (4.5,1.4) -- (5,1) -- (5.5,1.4) -- (6,1); \draw[->] (6,0) -- (3,0); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (6,1) -- (6.4,.5) -- (6,0); \draw[densely dashed] (6,2) -- (8,0); \end{scope} \end{tikzpicture} \\ \hline 2$_{po}$ & $c-1$ & \tiny{$++$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,2) -- (4,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2); \draw (3,0) -- (4,0); \draw[->] (3.5,1.5) -- (3.9,1.1); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4,0) -- (3.2,0); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); 
\draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (3.5,1.6) -- (4,2); \draw[->] (3,1) -- (3.5,1.4) -- (4,1); \draw[->] (4,0) -- (3,0); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (4,1) -- (4.4,.5) -- (4,0); \draw[densely dashed] (4,2) -- (6,0); \end{scope} \end{tikzpicture} \\ \hline \hline 3A$_{po}$ & $c$ & \tiny{$-+-+-$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (2,1) node{$R$}; \draw (9,1) node{$\reflectbox{R}$}; \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,0) -- (5.3,1.3); \draw (5.7,1.7) -- (6,2) -- (7,2) -- (8,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5,2) -- (6.3,.7); \draw (6.7,.3) -- (7,0) -- (8,0); \draw (3,2) -- (4.3,.7); \draw (4.7,.3) -- (5,0) -- (6,0) -- (7.3,1.3); \draw (7.7,1.7) -- (8,2); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \draw[->] (5.5,1.5) -- (5.9,1.1); \draw[->] (5.7,1.7) -- (5.9,1.9); \draw[->] (6.5,.5) -- (6.1,.1); \draw[->] (6.7,.3) -- (6.9,.1); \draw[->] (7.5,1.5) -- (7.9,1.1); \draw[->] (7.3,1.3) -- (7.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (8,2) -- (7.6,1.5) -- (8,1); \draw[->] (5.5,0) -- (5,0) -- (4.6,.5) -- (5,1) -- (5.5,1.4) -- (6,1) -- (6.4,.5) -- (6,0) -- (5.5,0); \draw[->] (3,0) --(4,0) -- (4.4,.5) -- (4,1) -- (3.6,1.5) -- (4,2) -- (5,2) -- (5.5,1.6) -- (6,2) -- (7,2) -- (7.4,1.5) -- (7,1) -- (6.6,.5) -- (7,0) -- (8,0); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (8,1) -- (8.4,1.5) -- (8,2); \draw[densely dashed] (8,0) -- (10,0); \end{scope} \end{tikzpicture} \\ \hline 3A$_{po}$ & $c-4$ &\tiny{$--$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,0) -- (4,1); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0); \draw (3,2) -- (4,2); \draw[->] (3.5,.5) -- (3.9,.9); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4,2) -- (3.2,2); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (3.5,.4) -- (4,0); \draw[->] (3,1) -- (3.5,.6) -- (4,1); \draw[->] (4,2) -- (3,2); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (4,1) -- (4.4,1.5) -- (4,2); \draw[densely dashed] (4,0) -- (6,0); \end{scope} \end{tikzpicture} \\ \hline \hline 3B$_{po}$ & $c$ & \tiny{$-{}-+-+-{}-$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (2,1) node{$R$}; \draw (9,1) node{$\reflectbox{R}$}; \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4,1) -- (4.3,.7); \draw (4.7,.3) -- 
(5,0) -- (6,0) -- (7,1) -- (7.3,.7); \draw (7.7,.3) -- (8,0); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0) -- (5.3,1.3); \draw (5.7,1.7) -- (6,2) -- (8,2); \draw (3,2) -- (5,2) -- (6.3,.7); \draw (6.7,.3) -- (7,0) -- (8,1); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,.5) -- (4.9,.9); \draw[->] (4.3,.7) -- (4.1,.9); \draw[->] (5.5,1.5) -- (5.9,1.1); \draw[->] (5.7,1.7) -- (5.9,1.9); \draw[->] (6.5,.5) -- (6.1,.1); \draw[->] (6.7,.3) -- (6.9,.1); \draw[->] (7.5,.5) -- (7.9,.9); \draw[->] (7.3,.7) -- (7.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (5,2) -- (5.5,1.6) -- (6,2) -- (8,2); \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (8,0) -- (7.6,.5) -- (8,1); \draw[->] (4,1) arc (90:450:.4cm and .5cm); \draw[->] (7,1) arc (90:450:.4cm and .5cm); \draw[->] (5.5,0) -- (5,0) -- (4.6,.5) -- (5,1) --(5.5,1.4) -- (6,1) -- (6.4,.5) -- (6,0) -- (5.5,0); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (8,1) -- (8.4,.5) -- (8,0); \draw[densely dashed] (8,2) -- (10,0); \end{scope} \end{tikzpicture} \\ \hline 3B$_{po}$ & $c-4$ & \tiny{$-$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,2) -- (4,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2); \draw (3,0) -- (4,0); \draw[->] (3.5,1.5) -- (3.9,1.1); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4,0) -- (3.2,0); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (3.5,1.6) -- (4,2); \draw[->] (3,1) -- (3.5,1.4) -- (4,1); \draw[->] (4,0) -- (3,0); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (4,1) -- (4.4,.5) -- (4,0); \draw[densely dashed] (4,2) -- (6,0); \end{scope} \end{tikzpicture} \\ \hline \hline 4A$_{po}$ & $c$ & \tiny{$-++-{}-++-$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (2,1) node{$R$}; \draw (9,1) node{$\reflectbox{R}$}; \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw (3,0) --(5,0) -- (6.3,1.3); \draw (6.7,1.7) -- (7,2) --(8,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2) -- (5.3,.7); \draw (5.7,.3) -- (6,0) -- (8,0); \draw (3,2) -- (4,1) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2) -- (6,2) -- (7,1) -- (7.3,1.3); \draw (7.7,1.7) -- (8,2); \end{scope} \draw[->] (3.5,1.5) -- (3.1,1.9); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \draw[->] (5.5,.5) -- (5.9,.9); \draw[->] (5.7,.3) -- (5.9,.1); \draw[->] (6.5,1.5) -- (6.1,1.9); \draw[->] (6.7,1.7) -- (6.9,1.9); \draw[->] (7.5,1.5) -- (7.9,1.1); \draw[->] (7.3,1.3) -- (7.1,1.1); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (10,0) -- (11,0); \begin{scope}[rounded corners 
= 1mm] \draw[->] (3,0) -- (5,0) -- (5.5,0.4) -- (6,0) -- (8,0); \draw[->] (3,1) -- (3.4,1.5) -- (3,2); \draw[->] (8,2) -- (7.6,1.5) -- (8,1); \draw[->] (5.5,2) -- (5,2) -- (4.6,1.5) -- (5,1) -- (5.5,.6) -- (6,1) -- (6.4,1.5) -- (6,2) -- (5.5,2); \draw[->] (4,2) arc (90:-270:.4cm and .5cm); \draw[->] (7,2) arc (90:-270:.4cm and .5cm); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (8,1) -- (8.4,1.5) -- (8,2); \draw[densely dashed] (8,0) -- (10,0); \end{scope} \end{tikzpicture} \\ \hline 4A$_{po}$ & $c-4$ & \tiny{$--$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,0) -- (4,1); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0); \draw (3,2) -- (4,2); \draw[->] (3.5,.5) -- (3.9,.9); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4,2) -- (3.2,2); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,0) -- (3.5,.4) -- (4,0); \draw[->] (3,1) -- (3.5,.6) -- (4,1); \draw[->] (4,2) -- (3,2); \draw[densely dashed] (1,0) -- (3,0); \draw[densely dashed] (3,1) -- (2.6,1.5) -- (3,2); \draw[densely dashed] (4,1) -- (4.4,1.5) -- (4,2); \draw[densely dashed] (4,0) -- (6,0); \end{scope} \end{tikzpicture}\\ \hline \hline 4B$_{po}$ & $c$ & \tiny{$-{}-++-{}-++-{}-$} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (2,1) node{$R$}; \draw (9,1) node{$\reflectbox{R}$}; \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw (3,0) -- (4.3,1.3); \draw (4.7,1.7) -- (5,2) -- (6,2) -- (7.3,.7); \draw (7.7,.3) -- (8,0); \draw (3,1) -- (3.3,.7); \draw (3.7,.3) -- (4,0) -- (5,0) -- (6.3,1.3); \draw (6.7,1.7) -- (7,2) -- (8,2); \draw (3,2) -- (4,2) -- (5.3,.7); \draw (5.7,.3) -- (6,0) -- (7,0) -- (8,1); \end{scope} \draw[->] (3.5,.5) -- (3.1,.1); \draw[->] (3.7,.3) -- (3.9,.1); \draw[->] (4.5,1.5) -- (4.9,1.1); \draw[->] (4.3,1.3) -- (4.1,1.1); \draw[->] (5.5,.5) -- (5.9,.9); \draw[->] (5.7,.3) -- (5.9,.1); \draw[->] (6.5,1.5) -- (6.1,1.9); \draw[->] (6.7,1.7) -- (6.9,1.9); \draw[->] (7.5,.5) -- (7.9,.9); \draw[->] (7.3,.7) -- (7.1,.9); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (11.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (8,-.5) rectangle (10,2.5); \draw (10,0) -- (11,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (4,2) -- (4.4,1.5) -- (4,1) -- (3.6,.5) -- (4,0) -- (5,0) -- (5.5,.4) -- (6,0) --(7,0) -- (7.4,.5) -- (7,1) -- (6.6,1.5) -- (7,2) -- (8,2); \draw[->] (3,1) -- (3.4,.5) -- (3,0); \draw[->] (8,0) -- (7.6,.5) -- (8,1); \draw[->] (5.5,2) -- (5,2) -- (4.6,1.5) -- (5,1) --(5.5,.6) -- (6,1) -- (6.4,1.5) -- (6,2) -- (5.5,2); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (8,1) -- (8.4,.5) -- (8,0); \draw[densely dashed] (8,2) -- (10,0); \end{scope} \end{tikzpicture} \\ \hline 4B$_{po}$ & $c-4$ & \tiny{$-$} &\begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (2,1) node{$R$}; \draw (5,1) 
node{$\reflectbox{R}$}; \draw (6,0) -- (7,0); \draw (3,2) -- (4,1); \draw (3,1) -- (3.3,1.3); \draw (3.7,1.7) -- (4,2); \draw (3,0) -- (4,0); \draw[->] (3.5,1.5) -- (3.9,1.1); \draw[->] (3.7,1.7) -- (3.9,1.9); \draw[->] (4,0) -- (3.2,0); \end{tikzpicture} & \begin{tikzpicture}[scale = .4] \draw[white] (-.2,-.7) -- (7.2,2.7); \draw (0,0) -- (1,0); \draw (1,-.5) rectangle (3,2.5); \draw (4,-.5) rectangle (6,2.5); \draw (6,0) -- (7,0); \begin{scope}[rounded corners = 1mm] \draw[->] (3,2) -- (3.5,1.6) -- (4,2); \draw[->] (3,1) -- (3.5,1.4) -- (4,1); \draw[->] (4,0) -- (3,0); \draw[densely dashed] (1,0) -- (3,2); \draw[densely dashed] (3,1) -- (2.6,.5) -- (3,0); \draw[densely dashed] (4,1) -- (4.4,.5) -- (4,0); \draw[densely dashed] (4,2) -- (6,0); \end{scope} \end{tikzpicture} \\ \hline \end{tabular} \caption{Alternating diagrams and Seifert states corresponding to the odd palindromic cases in the proof of Theorem \ref{thm:Seifertrecursionpalindrome}.} \label{tab:SeifertPalindromeOdd} \end{table} \section{Seifert circles and average genus} \label{sec:formulas} In Section \ref{sec:recursions}, we find recursive formulas for the total number of Seifert circles $s(c)$ and $s_p(c)$ coming from the alternating diagrams associated to words in $T(c)$ and $T_p(c)$, respectively. In this section, we find closed formulas for $s(c)$ and $s_p(c)$, and then use those formulas to prove Theorem \ref{thm:mainformula}. The total number $s(c)$ of Seifert circles in the alternating diagrams coming from words in $T(c)$ is given by the following theorem. \begin{theorem} \label{thm:s(c)} Let $c\geq 3$. The number $s(c)$ of Seifert circles in the alternating diagrams with crossing number $c$ coming from words in $T(c)$ can be expressed as \[ s(c) = \frac{(3c+5)2^{c-3}+(-1)^c (5-3c)}{9}.\] \end{theorem} \begin{proof} Recall that $s(c)$ satisfies the recurrence relation $s(c) = s(c-1) + 2s(c-2) + 3t(c-2)$ with initial conditions $s(3)=2$ and $s(4)=3$ and that $3t(c-2) = 2^{c-4}-(-1)^{c-4}$. Proceed by induction. The base cases of $s(3)=2$ and $s(4)=3$ can be shown by direct computation. The recurrence relation is satisfied because \begin{align*} & s(c-1) + 2s(c-2) + 3t(c-2)\\ = & \; \frac{[3(c-1)+5]2^{(c-1)-3}+(-1)^{c-1}[5-3(c-1)]}{9} \\ & \; + 2\left(\frac{[3(c-2)+5]2^{(c-2)-3} + (-1)^{c-2}[5-3(c-2)]}{9}\right) + 2^{c-4} - (-1)^{c-4} \\ = & \; \frac{(3c+2)2^{c-4} + (-1)^c(3c-8)+(3c-1)2^{c-4} + (-1)^c(22-6c) + 9\cdot 2^{c-4} - 9 (-1)^c}{9}\\ = & \; \frac{(6c+10)2^{c-4} +(-1)^c[(3c-8) +(22-6c) -9]}{9}\\ = & \; \frac{(3c+5)2^{c-3}+(-1)^c (5-3c)}{9}. \end{align*} \end{proof} The total number $s_p(c)$ of Seifert circles in the alternating diagrams coming from words of palindromic type in $T_p(c)$ is given by the following theorem. \begin{theorem} \label{thm:sp(c)} Let $c\geq 3$. The number $s_p(c)$ of Seifert circles in the alternating diagrams coming from words of palindromic type in $T_p(c)$ can be expressed as \[s_p(c) = \begin{cases}\displaystyle \frac{(3c+1)2^{(c-3)/2} + (-1)^{(c-1)/2}(1-3c)}{9} & \text{if $c$ is odd,}\\ \displaystyle \frac{(3c+4)2^{(c-4)/2} + (-1)^{(c-2)/2}(1-3c)}{9} & \text{if $c$ is even.} \end{cases}\] \end{theorem} \begin{proof} Recall that $s_p(c)$ satisfies the recurrence relation $s_p(c) = s_p(c-2) + 2s_p(c-4) + 6t_p(c-4)$ with initial conditions $s_p(3)=2,$ $s_p(4)=3$, $s_p(5)=2$, and $s_p(6) = 3$ . Proceed by induction. One may verify the initial conditions by direct computation. 
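For instance, evaluating the claimed formula at the initial values gives
\[
s_p(3)=\frac{10\cdot 2^{0}+(-1)^{1}(1-9)}{9}=2,\qquad
s_p(5)=\frac{16\cdot 2^{1}+(-1)^{2}(1-15)}{9}=2,
\]
\[
s_p(4)=\frac{16\cdot 2^{0}+(-1)^{1}(1-12)}{9}=3,\qquad
s_p(6)=\frac{22\cdot 2^{1}+(-1)^{2}(1-18)}{9}=3,
\]
in agreement with the initial conditions above.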
Since the recursion relation for $s_p(c)$ either involves only odd-indexed terms or only even-indexed terms, we handle each case separately. Suppose $c$ is odd. Then Proposition \ref{prop:numberpalindromic} implies that $t_p(c-4) = J(\frac{c-5}{2}) = \frac{2^{(c-5)/2} - (-1)^{(c-5)/2}}{3}$. Thus \begin{align*} & \;s_p(c-2) + 2s_p(c-4) + 6t_p(c-4)\\ = & \; \frac{ (3(c-2)+1) 2^{ ((c-2)-3)/2 } + (-1)^{ ((c-2)-1)/2 } (1-3(c-2)) } { 9 }\\ & \; + 2\left(\frac{(3(c-4)+1)2^{((c-4)-3)/2} + (-1)^{((c-4)-1)/2}(1-3(c-4))}{9}\right) + 6\left(\frac{2^{(c-5)/2} - (-1)^{(c-5)/2}}{3}\right)\\ = &\; \frac{ (3c-5) 2^{(c-5)/2} + (-1)^{(c-3)/2}(7-3c)}{9}\\ & \; + \frac{(3c-11) 2^{(c-5)/2} +(-1)^{(c-5)/2}(26-6c)}{9} + \frac{18 \cdot 2^{(c-5)/2} -(-1)^{(c-5)/2} \cdot 18}{9}\\ = & \; \frac{(6c+2)2^{(c-5)/2} + (-1)^{(c-1)/2}((3c-7) + (26-6c) -18)}{9}\\ = & \; \frac{(3c+1)2^{(c-3)/2} + (-1)^{(c-1)/2}(1-3c)}{9}. \end{align*} Suppose $c$ is even. Then Proposition \ref{prop:numberpalindromic} implies $t_p(c-4) = J(\frac{c-6}{2})= \frac{2^{(c-6)/2} - (-1)^{(c-6)/2}}{3}$. Thus \begin{align*} & \; s_p(c-2) + 2s_p(c-4) + 6t_p(c-4) \\ = & \; \frac{ (3(c-2)+4)2^{((c-2)-4)/2} + (-1)^{((c-2)-2)/2}(1-3(c-2))}{9}\\ & \; + 2\left( \frac{ (3(c-4)+4)2^{((c-4)-4)/2} + (-1)^{((c-4)-2)/2}(1-3(c-4))}{9} \right) + 6\left(\frac{2^{(c-6)/2} - (-1)^{(c-6)/2}}{3}\right)\\ = & \; \frac{(3c-2) 2^{(c-6)/2} + (-1)^{(c-4)/2}(7-3c)}{9} \\ & + \; \frac{(3c-8)2^{(c-6)/2} + (-1)^{(c-6)/2} (26-6c)}{9} + \frac{18\cdot 2^{(c-6)/2} - (-1)^{(c-6)/2}\cdot 18}{9}\\ = & \; \frac{ (6c+8)2^{(c-6)/2} + (-1)^{(c-2)/2} ((3c-7) + (26-6c) -18)}{9}\\ = & \; \frac{(3c+4)2^{(c-4)/2} + (-1)^{(c-2)/2}(1-3c)}{9}. \end{align*} \end{proof} Although the proofs of Theorems \ref{thm:s(c)} and \ref{thm:sp(c)} are straightforward, finding the formulas for $s(c)$ and $s_p(c)$ involved combining several closed formulas found in the Online Encyclopedia of Integer Sequences \cite{OEIS}. We use the formulas for $|\mathcal{K}_c|$, $s(c)$, and $s_p(c)$ in Theorems \ref{thm:ernstsumners}, \ref{thm:s(c)}, and \ref{thm:sp(c)}, respectively, to prove Theorem \ref{thm:mainformula}. \begin{proof}[Proof of Theorem \ref{thm:mainformula}] If $K$ is an alternating knot, then Murasugi \cite{Mur:genus} and Crowell \cite{Cro:genus} showed that its genus is $g(K) = \frac{1}{2}(1+c(K)-s(K))$, where $c(K)$ and $s(K)$ are the crossing number and the number of components in the Seifert state of a reduced alternating diagram of $K$. Theorem \ref{thm:list} implies that \[\sum_{K\in\mathcal{K}_c} s(K) = \frac{1}{2}(s(c) + s_p(c)).\] As in Equation (\ref{eq:avgenus}), the average genus $\overline{g}_c$ satisfies \[ \overline{g}_c = \frac{\sum_{K\in\mathcal{K}_c} g(K)}{|\mathcal{K}_c|} = \frac{\sum_{K\in\mathcal{K}_c} (1 + c - s(K))}{2|\mathcal{K}_c|} = \frac{1}{2} + \frac{c}{2} - \frac{s(c) + s_p(c)}{4|\mathcal{K}_c|}.\] Theorems \ref{thm:ernstsumners}, \ref{thm:s(c)} and \ref{thm:sp(c)} contain expressions for $|\mathcal{K}_c|$, $s(c)$, and $s_p(c)$ that depend on $c$ mod $4$. If $c\equiv 0$ mod $4$, then \begin{align*} \frac{s(c) + s_p(c)}{4|\mathcal{K}_c|} = & \; \frac{(3c+5)2^{c-3}+(5-3c) + (3c+4)2^{(c-4)/2} +(3c-1)}{12(2^{c-3}+2^{(c-4)/2})}\\ = & \; \frac{(3c+5)2^{c-3}+ (3c+5)2^{(c-4)/2} - 2^{(c-4)/2} + 4}{12(2^{c-3}+2^{(c-4)/2})}\\ = & \; \frac{(3c+5)(2^{c-3}+2^{(c-4)/2})}{12(2^{c-3}+2^{(c-4)/2})} + \frac{4 - 2^{(c-4)/2}}{12(2^{c-3}+2^{(c-4)/2})}\\ = & \; \frac{c}{4} + \frac{5}{12} + \frac{4 - 2^{(c-4)/2}}{12(2^{c-3}+2^{(c-4)/2})}.
\end{align*} When $c\equiv 0$ mod $4$, the average genus is \[\overline{g}_c = \frac{c}{4} + \frac{1}{12} + \frac{2^{(c-4)/2}-4 }{12(2^{c-3}+2^{(c-4)/2})}.\] The cases where $c\equiv 1$, $2$, or $3$ mod $4$ are similar. \end{proof} \bibliographystyle{amsalpha} \begin{thebibliography}{BKLMR19} \bibitem[BKLMR19]{BKLMR} Sebastian Baader, Alexandra Kjuchukova, Lukas Lewark, Filip Misev, and Arunima Ray, \emph{Average four-genus of two-bridge knots}, arXiv:1902.05721. To appear in Proc. Amer. Math. Soc., 2019. \bibitem[CEZK18]{CoEZKr} Moshe Cohen, Chaim Even-Zohar, and Sunder~Ram Krishnan, \emph{Crossing numbers of random two-bridge knots}, Topology Appl. \textbf{247} (2018), 100--114. \bibitem[CK15]{CoKr} Moshe Cohen and Sunder~Ram Krishnan, \emph{Random knots using {C}hebyshev billiard table diagrams}, Topol. Appl. \textbf{194} (2015), 4--21. \bibitem[Coh21a]{Co:3-bridge} Moshe Cohen, \emph{The {J}ones polynomials of three-bridge knots via {C}hebyshev knots and billiard table diagrams}, J. Knot Theory Ramifications \textbf{30} (2021), no.~13, 29pp. \bibitem[Coh21b]{Coh:lower} \bysame, \emph{A lower bound on the average genus of a 2-bridge knot}, arXiv:2108.00563, 2021. \bibitem[Cro59]{Cro:genus} Richard Crowell, \emph{Genus of alternating link types}, Ann. of Math. (2) \textbf{69} (1959), 258--275. \bibitem[Dun14]{Dun:knots} Nathan Dunfield, \emph{Random knots: a preliminary report}, Slides for the talk available at http:// dunfield.info/preprints, 2014. \bibitem[ES87]{ErnSum} Claus Ernst and De~Witt Sumners, \emph{The growth of the number of prime knots}, Math. Proc. Cambridge Philos. Soc. \textbf{102} (1987), no.~2, 303--315. \bibitem[KP11a]{KosPec4} P.-V. Koseleff and D.~Pecker, \emph{Chebyshev diagrams for two-bridge knots}, Geom. Dedicata \textbf{150} (2011), 405--425. \bibitem[KP11b]{KosPec3} \bysame, \emph{Chebyshev knots}, J. Knot Theory Ramifications \textbf{20} (2011), no.~4, 575--593. \bibitem[Mur58]{Mur:genus} Kunio Murasugi, \emph{On the genus of the alternating knot. {I}, {II}}, J. Math. Soc. Japan \textbf{10} (1958), 94--105, 235--248. \bibitem[OEI22a]{OEIS} \emph{The {O}n-{L}ine {E}ncyclopedia of {I}nteger {S}equences}, http://oeis.org, 2022. \bibitem[OEI22b]{OEIS1045} \emph{The {O}n-{L}ine {E}ncyclopedia of {I}nteger {S}equences}, http://oeis.org, 2022, Sequence A001045. \bibitem[RD22]{RayDiao} Dawn Ray and Yuanan Diao, \emph{The average genus of oriented rational links with a given crossing number}, arXiv:2204.12538, 2022. \bibitem[Sch56]{Sch} Horst Schubert, \emph{Knoten mit zwei {B}r\"ucken}, Math. Z. \textbf{65} (1956), 133--170. \bibitem[Sei35]{Sei} H.~Seifert, \emph{\"{U}ber das {G}eschlecht von {K}noten}, Math. Ann. \textbf{110} (1935), no.~1, 571--592. \bibitem[ST22]{SuzukiTran} Masaaki Suzuki and Anh~T. Tran, \emph{Genera and crossing numbers of $2$-bridge knots}, arXiv:2204.09238, 2022. \end{thebibliography} \end{document}
2205.06007v2
http://arxiv.org/abs/2205.06007v2
Compact Embeddings, Eigenvalue Problems, and subelliptic Brezis-Nirenberg equations involving singularity on stratified Lie groups
\documentclass[12pt, reqno]{amsart} \usepackage{amssymb} \usepackage{graphicx} \usepackage{amssymb} \usepackage{amsthm} \usepackage{epsf} \usepackage{amsbsy,amsmath} \usepackage{mathtools} \usepackage{mathrsfs} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{enumitem} \usepackage{eucal} \usepackage{graphics,mathrsfs} \usepackage{amsthm} \usepackage{secdot} \usepackage{esint} \usepackage{hyperref} \usepackage{varwidth} \usepackage{tasks} \addtolength{\topmargin}{-15mm} \addtolength{\textheight}{30mm} \addtolength{\oddsidemargin}{-15mm} \addtolength{\evensidemargin}{-15mm} \addtolength{\textwidth}{30mm} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{cor}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition}\newtheorem{exmp}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{ack}{Acknowledgment\ignorespaces} \renewcommand{\theack}{} \allowdisplaybreaks \newcommand{\NaturalNumber}{\mathbb N} \newcommand{\RealNumber}{\mathbb R} \newcommand{\Liealg}{\mathfrak g} \usepackage{xcolor} \renewcommand{\d}{\:\! \mathrm{d}} \long\def\symbolfootnote[#1]#2{\begingroup \def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup} \numberwithin{equation}{section} \begin{document} \title[A subelliptic nonlocal equations on stratified Lie groups] { Compact Embeddings, Eigenvalue Problems, and subelliptic Brezis-Nirenberg equations involving singularity on stratified Lie groups} \author{Sekhar Ghosh, Vishvesh Kumar and Michael Ruzhansky} \address[ Sekhar Ghosh]{Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Ghent, Belgium\newline and \newline Statistics and Mathematics Unit, Indian Statistical Institute Bangalore, Bengaluru, 560059, India} \email{[email protected] / [email protected]} \address[Vishvesh Kumar]{Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Ghent, Belgium} \email{[email protected] / [email protected]} \address[Michael Ruzhansky]{Department of Mathematics: Analysis, Logic and Discrete Mathematics, Ghent University, Ghent, Belgium\newline and \newline School of Mathematical Sciences, Queen Mary University of London, United Kingdom} \email{[email protected]} \thanks{{\em 2020 Mathematics Subject Classification: } 35R03, 35H20, 35P30, 22E30, 35R11, 35J75} \keywords{Stratified Lie groups; Heisenberg group; Fractional $p$-sub-Laplacian; Brezis-Nirenberg equations, Sobolev-Rellich-Kondrachov type embeddings; Eigenvalue problems; Nehari manifold; Singularity.} \maketitle \begin{abstract} The purpose of this paper is twofold: first we study an eigenvalue problem for the fractional $p$-sub-Laplacian over the fractional Folland-Stein-Sobolev spaces on stratified Lie groups. We apply variational methods to investigate the eigenvalue problems. We conclude the positivity of the first eigenfunction via the strong minimum principle for the fractional $p$-sub-Laplacian. Moreover, we deduce that the first eigenvalue is simple and isolated. Secondly, utilising established properties, we prove the existence of at least two weak solutions via the Nehari manifold technique to a class of subelliptic singular problems associated with the fractional $p$-sub-Laplacian on stratified Lie groups. We also investigate the boundedness of positive weak solutions to the considered problem via the Moser iteration technique. 
The results obtained here are also new even for the case $p=2$ with $\mathbb{G}$ being the Heisenberg group. \end{abstract} \tableofcontents \section{Introduction} The study of nonlocal elliptic partial differential equations (PDEs) and developments of the corresponding tools have been well explored in the Euclidean setting during the last few decades. Apart from the mathematical point of view, the theory of PDEs associated with nonlocal (or fractional) operators witnessed vast applications in different fields of applied sciences. We list a few (in fact a tiny fraction of them) of such applications involving fractional models like the L\'{e}vy processes in probability theory, in finance, image processing, in anomalous equations, porous medium equations, Cahn-Hilliard equations and Allen-Cahn equations, etc. Interested readers may refer to \cite{ASS16, AMRT10, LPGSGZMCMAK20, TC03} and the references therein. These models have been one of the primary context to study nonlocal PDEs both theoretically and numerically. One of the most important tools to study PDEs over bounded domains is the embeddings of Sobolev spaces into Lebesgue spaces. It says, ``If $\Omega\subset\mathbb{R}^N$ is open, then for $0<s<1<p<\infty$ with $N>ps$, the fractional Sobolev space $W^{s,p}(\Omega)$ is continuously embedded into $L^{q}(\Omega)$ for all $q\in[1,Np/(N-ps)]$. In addition, if $\Omega$ is bounded and is an extension domain, then the embedding is compact for all $q\in[1,Np/(N-ps))$." The compact embedding plays a crucial role for obtaining the existence of solutions of some PDEs. We refer the readers to see \cite{NPV12} for a well presented study of the fractional Sobolev spaces and the properties of the fractional $p$-Laplacian and its applications to PDEs. One can also consult \cite{B11, E10} for the theory and tools developed for the classical Sobolev spaces. The Sobolev spaces (also known as Folland-Stein spaces) on stratified Lie groups were first considered by Folland \cite{F75} and then several further properties have been obtained in the book by Folland and Stein \cite{FS82}. The reader may refer to several monographs devoted to the study of such spaces and the corresponding subelliptic operators \cite{BLU07, FR16, RS19}. For Sobolev embeddings of Folland-Stein spaces over bounded domains of stratified Lie groups, we refer to \cite{CDG93}. Recently, the fractional Sobolev type inequality and the corresponding Sobolev embeddings were investigated in \cite{AM18} for weighted fractional Sobolev spaces on the Heisenberg group $\mathbb{H}^N$, whereas in \cite{KD20}, the authors established the fractional Sobolev type inequalities on stratified Lie groups (or homogeneous Carnot groups). In \cite {AM18}, the authors established the compact embeddings of Sobolev spaces $W_0^{s,p,\alpha}(\Omega)$ into Lebesgue spaces $L^p(\Omega)$ over a bounded extension domain $\Omega\subset\mathbb{H}^N$. 
We recall here the definition of an extension domain: A domain $\Omega \subset {\mathbb{G}}$ is said to be an extension domain of $W^{s,p}_0(\Omega)$ (see Section \ref{s2} for the definition) if for every $f \in W^{s,p}_0 (\Omega)$ there exists $\tilde{f} \in W^{s,p}_0({\mathbb{G}})$ such that $\tilde{f}|_\Omega=f$ and $\|\tilde{f}\|_{W^{s,p}_0({\mathbb{G}})}\leq C_{Q,s,p, \Omega} \|f\|_{W^{s,p}_0(\Omega)},$ where $C_{Q,s,p, \Omega}$ is a positive constant depending only on $Q, s, p, \Omega.$ The extension property of a domain plays a crucial role in establishing such compact embeddings of the Sobolev spaces into Lebesgue spaces ({\it cf.} Theorem 2.4, Lemma 5.1 in \cite{NPV12}). Recently, Zhou \cite{Z15} studied the characterizations of $(s,p)$-extension domains and embedding domains for the fractional Sobolev space on $\mathbb{R}^N$. To the best of our knowledge, no such characterization is available for an arbitrary bounded domain in the case of stratified Lie groups. In fact, because of the existence of characteristic points, the problem of finding classes of extension domains in stratified Lie groups is highly non-trivial and there are essentially no examples for step $3$ and higher (see \cite{CG98}). Thus, to overcome this issue, we will work with the fractional Sobolev space $X_0^{s,p}(\Omega)$ with vanishing trace (see Section \ref{s2} for the definition). We first state the following embedding result for the fractional Sobolev space $X_0^{s,p}(\Omega).$ \begin{theorem}\label{l-3} Let $\mathbb{G}$ be a stratified Lie group of homogeneous dimension $Q.$ Let $0<s<1<p<\infty$ and $Q>sp.$ Let $\Omega\subset\mathbb{G}$ be an open subset. Then the fractional Sobolev space $X_0^{s, p}(\Omega)$ is continuously embedded in $L^r(\Omega)$ for $p\leq r\leq p_s^*$, where $p_s^*:=\frac{Qp}{Q-sp}$, that is, there exists a positive constant $C=C(Q,s,p, \Omega)$ such that for all $u\in X_0^{s, p}(\Omega)$, we have $$\|u\|_{L^r(\Omega)}\leq C \|u\|_{X_0^{s,p}(\Omega)}.$$ Moreover, if $\Omega$ is bounded, then the embedding \begin{align} X_0^{s,p}(\Omega) \hookrightarrow L^r(\Omega) \end{align} is continuous for all $r\in[1,p_s^*]$ and is compact for all $r\in[1,p_s^*)$. \end{theorem} It was pointed out to us by the referee of this paper that there may be a relation between Theorem \ref{l-3} and the results in the recent paper \cite{BK22} combined with \cite{KYZ11} dealing with fractional Sobolev spaces defined on metric measure spaces satisfying various conditions (typically, but not always, a Poincar\'e inequality and a doubling condition); see also \cite{Pio00} and \cite{GS22}. One such example of a metric measure space is a stratified Lie group. However, it is not completely clear how the result in \cite{BK22} applies to our spaces $X_0^{s,p}$, since the definition of this space is different. Therefore, for the benefit of readers, we include a simple and direct proof of the embedding theorems in Appendix A (Section \ref{appA}), which makes use of group structures, such as group translations and a regularisation process via group convolutions and dilations, available in this particular setting of stratified Lie groups. We follow the ideas of \cite{NPV12} to establish the continuous embedding, whereas the compact embedding will be proved based on an idea originating in \cite{GL92}. We also refer to \cite{AYY22, AWYY23} for embedding results on function spaces defined on spaces of homogeneous type.
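To fix ideas, consider the illustrative choice (used only as an example) $\mathbb{G}=\mathbb{H}^1$, the first Heisenberg group, whose homogeneous dimension is $Q=4$ (see Section \ref{s2}), together with $s=\frac{1}{2}$ and $p=2$. Then $Q>sp$ and Theorem \ref{l-3} applies with the critical exponent
\begin{equation*}
p_s^{*}=\frac{Qp}{Q-sp}=\frac{4\cdot 2}{4-1}=\frac{8}{3},
\end{equation*}
so that, for a bounded domain $\Omega\subset\mathbb{H}^1$, the embedding $X_0^{1/2,2}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous for every $r\in[1,8/3]$ and compact for every $r\in[1,8/3)$.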
In this paper, we now aim to apply Theorem \ref{l-3} to study the nonlinear Dirichlet eigenvalue problem on stratified Lie groups. The earliest known study of Dirichlet eigenvalue problems involving the $p$-Laplacian on $\mathbb{R}^N$ is due to Lindqvist \cite{L90}, where the author investigated the simplicity and isolatedness of the first eigenvalue of the following problem: \begin{align}\label{p-lind} \Delta_{p} u&+\nu|u|^{p-2} u=0,~\text{in}~\Omega,\nonumber\\ u&=0~\text{ on }~\partial\Omega. \end{align} Lindqvist further showed that the first eigenfunction of the problem \eqref{p-lind} is strictly positive on an arbitrary bounded domain $\Omega$. This study is directly related to the corresponding Rayleigh quotient, given by the following expression: \begin{equation} \mathcal{R}(u):=\cfrac{\int_{\Omega}{|\nabla u|^p}dx}{\int_{\Omega}|u(x)|^p dx},~u\in C_c^{\infty}(\Omega). \end{equation} The nonlocal counterpart of the above problem \eqref{p-lind} was explored by Lindgren and Lindqvist \cite{LL14}, and Franzina and Palatucci \cite{FP14}. Since then, this topic has received extensive attention. For instance, we cite \cite{L05, AA87, BP16, LL14, FP14}, to mention just a few contributions to the development of the eigenvalue problem. To the best of our knowledge, the study of eigenvalue problems in the subelliptic setting is very limited in the literature. The earliest traces of such studies are due to \cite{MS78, FP81}. Thereafter, there has been some progress in this direction involving the $p$-sub-Laplacian on the Heisenberg group; for instance, see \cite{HL08, WPH09}. Recently, there has been a surge of interest in the study of eigenvalue problems involving subelliptic operators on stratified Lie groups. We refer to \cite{CC21, HL08, FL10} and the references therein. In this paper, we study the following nonlinear nonlocal Dirichlet eigenvalue problem involving the fractional $p$-sub-Laplacian on stratified Lie groups: \begin{align} \label{pro1.3} (-\Delta_{p,{\mathbb{G}}})^s u&=\nu|u|^{p-2} u,~\text{in}~\Omega,\nonumber\\ u&=0~\text{ in }~{\mathbb{G}}\setminus\Omega. \end{align} In this direction, we first establish the existence of a minimizer for the Rayleigh quotient, namely, the existence of the first eigenfunction. Then, similarly to the classical case, we prove some important properties of the first eigenfunction and the first eigenvalue of the problem \eqref{pro1.3}, which are listed in the following result. \begin{theorem}\label{ev-mainthm} Let $0<s<1<p<\infty$ and let $\Omega\subset{\mathbb{G}}$ be a bounded domain of a stratified Lie group $\mathbb{G}$ of homogeneous dimension $Q$. Then for $Q>sp$, we have the following properties. \begin{enumerate}[label=(\roman*)] \item The first eigenfunction of the problem \eqref{pro1.3} is strictly positive. \item The first eigenvalue $\lambda_1$ of the problem \eqref{pro1.3} is simple and the corresponding eigenfunction $\phi_1$ is the only eigenfunction of constant sign, that is, if $u$ is an eigenfunction associated to an eigenvalue $\nu>\lambda_1(\Omega)$, then $u$ must be sign-changing. \item The first eigenvalue $\lambda_1$ of the problem \eqref{pro1.3} is isolated. \end{enumerate} \end{theorem} Among the key ingredients to prove Theorem \ref{ev-mainthm} are a strong minimum principle (Theorem \ref{min p}) and logarithmic estimates (Lemma \ref{k-log-lemma}).
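Here, in analogy with the local Rayleigh quotient recalled above, the first eigenvalue $\lambda_1(\Omega)$ appearing in Theorem \ref{ev-mainthm} is the one obtained by minimizing the nonlocal Rayleigh quotient studied in Section \ref{sec4} (see Section \ref{s2} for the notation), namely
\begin{equation*}
\lambda_1(\Omega)=\inf_{u\in C_c^{\infty}(\Omega)\setminus\{0\}}
\cfrac{\iint_{\mathbb{G}\times\mathbb{G}}\frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}}\,dxdy}{\int_{\Omega}|u(x)|^p\, dx},
\end{equation*}
with minimizers of this quotient providing the first eigenfunctions.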
Now, as a combined application of Theorem \ref{l-3} and Theorem \ref{ev-mainthm}, we turn our attention to the following problem involving the fractional $p$-sub-Laplacian on the stratified Lie group ${\mathbb{G}}$: \begin{align}\label{problem} \left(-\Delta_{p, {\mathbb{G}}}\right)^s u&=\frac{\lambda f(x)}{u^{\delta}}+g(x)u^{q} \text { in } \Omega, \nonumber\\ u&>0 \text { in } \Omega,\\ u&=0 \text { in } {\mathbb{G}}\setminus\Omega, \nonumber \end{align} where $\Omega$ is a bounded domain in ${\mathbb{G}}$, $\lambda>0$, $1<p<Q$, $0<s, \delta<1<p<q+1<p_s^{*}.$ Here $Q$ denotes the homogeneous dimension of ${\mathbb{G}},$ $p_s^{*}:=\frac{Q p}{Q-sp}$ denotes the critical Sobolev exponent, and $\left(-\Delta_{p, {\mathbb{G}}}\right)^s$ is the fractional $p$-sub-Laplacian ({\it cf.} Section \ref{s2}). The weight functions $f,g\in L^{\infty}(\Omega)$ are strictly positive. Problems of the type \eqref{problem} are usually referred to as Brezis-Nirenberg type problems \cite{BN83} in the literature. Before we briefly recall some studies done in the Euclidean case, let us first discuss the motivation to consider Brezis-Nirenberg type problems in the setting of stratified Lie groups. The primary motivation to investigate the Brezis-Nirenberg problem in the classical Euclidean setting (i.e., $p=2$ and $s=1$) comes from the fact that it resembles variational problems in differential geometry and physics. One such celebrated example is the {\it Yamabe problem} on Riemannian manifolds. There are many other examples that are directly related to the Brezis-Nirenberg problem; for example, the existence of extremal functions for functional inequalities and the existence of non-minimal solutions for Yang-Mills functionals and $H$-systems (see \cite{BN83}). The pioneering investigation of the {\it CR Yamabe problem} was started by Jerison and Lee in their seminal work \cite{JL}. It is well known that the Heisenberg group (the simplest example of a stratified Lie group) plays the same role in CR geometry as the Euclidean space does in conformal geometry. Naturally, the analysis on stratified Lie groups proved to be a fundamental tool in the resolution of the CR Yamabe problem. Therefore, a great deal of interest has been shown in studying subelliptic PDEs on stratified Lie groups. Recently, several researchers have considered the fractional CR Yamabe problem and problems around it; see \cite{GQ13,CW17,FMMT15, KMW17, K20, CHY21} and references therein. The aforementioned developments naturally encourage one to study the Brezis-Nirenberg type problem \eqref{problem} on stratified Lie groups. Apart from this, it is also worth mentioning that the investigation of problems of type \eqref{problem} is closely related to the existence of best constants in functional inequalities; see, e.g., \cite{GU21} and references therein. On the other hand, it was noted in the celebrated paper \cite{RS76} by Rothschild and Stein that nilpotent Lie groups play an important role in deriving sharp subelliptic estimates for differential operators on manifolds. In view of the Rothschild-Stein lifting theorem, a general H\"ormander sum of squares of vector fields on a manifold can be approximated by a sub-Laplacian on some stratified Lie group (see also \cite{F77} and \cite{Roth83}). This makes it crucial to study partial differential equations on stratified Lie groups and has led to several interesting and promising works amalgamating Lie group theory with the analysis of partial differential equations.
Moreover, in recent decades, there has been a rapidly growing interest in sub-Laplacians on stratified Lie groups, because these operators appear not only in theoretical settings (see, e.g., Gromov \cite{Gro} or Danielli, Garofalo and Nhieu \cite{DGN07} for general expositions from different points of view), but also in applied settings such as mathematical models of crystal materials and human vision (see \cite{C98} and \cite{CGS04}). It is almost impossible to list all such studies dealing with the existence, multiplicity and regularity of solutions, but we will mention some of the pivotal studies that motivated us to consider the problem \eqref{problem} in the subelliptic setting of stratified Lie groups. These studies are primarily divided into two cases, namely, $\lambda=0$ and $\lambda>0$. For $\lambda>0$, $g=0$, $p=2$, $s=1$ and $\delta>0$, that is, for the purely singular problem involving the Laplacian, the study was initiated by Crandall et al. \cite{CRT77}, motivated by a pseudo-plastic model on a bounded domain $\Omega\subset\mathbb{R}^N$ with the Dirichlet boundary condition. In the same setting, Lazer and McKenna \cite{LM91} observed that one can expect a $W_0^{1,2}(\Omega)$ solution if and only if $0<\delta<3.$ Later, in \cite{YD13} it was proved that the exponent $\delta=3$ proposed in \cite{LM91} is optimal for obtaining a $W_0^{1,2}(\Omega)$ solution. The nonlocal counterparts of these types of PDEs were studied by Canino et al. \cite{CMSS17} for all $p\in(1,\infty)$ and $s\in(0,1)$. For further references on the study of purely singular problems we refer to \cite{BO09, CMSS17} and the references therein. It is noteworthy that the problem \eqref{problem} with $g=0$ always possesses a unique solution for $\lambda,\delta>0$. The study of the multiplicity and regularity of solutions was widely considered for $\lambda\geq0$; see \cite{AF17,BS15, DP97,HSS08,MS16} and the references therein. We now turn to the study of the existence of solutions to PDEs associated with subelliptic operators. In \cite{ALT20}, the authors established the existence of solutions to the following problem involving the sub-Laplacian on the Heisenberg group: \begin{align} -\Delta_{\mathbb{H}^N} u&=\frac{\lambda f(x)}{u^{\delta}} \text { in } \Omega, \nonumber\\ u&>0 \text { in } \Omega,\\ u&=0 \text { on } \partial\Omega. \nonumber \end{align} They applied a fixed point argument and a weak convergence method to deduce the existence of solutions. The nonlinear extension of the aforementioned problem, that is, the singular problem with the $p$-sub-Laplacian, was investigated in \cite{GU21} in the setting of stratified Lie groups for $0<\delta<1$. In both of these studies, the authors used the weak convergence method to establish the existence of a solution. In \cite{WW18}, the authors considered a subelliptic problem associated with the sub-Laplacian on the Heisenberg group with a mixed singular and power type nonlinearity. They established the existence of a solution using the moving plane method. The authors of \cite{YY12} extended this to Carnot groups. In \cite{H20}, Han studied existence and nonexistence results for positive solutions to an integral type Brezis-Nirenberg problem on the Heisenberg group. Ruzhansky et al. \cite{RST22} established the global existence and blow-up of positive solutions to a nonlinear porous medium equation over stratified Lie groups. In \cite{BPV22} the authors characterised the existence of a unique positive weak solution for subelliptic Dirichlet problems.
A few more results dealing with the existence and multiplicity of solutions on the Heisenberg group and stratified Lie groups can be found in \cite{P19, PT22, BFP20a, BR17, FBR17, L19, L07,PT21,L15,FF15} and the references listed therein. Finally, we cite \cite{CKKSS22, EGKSS22} and references therein for the study of the non-homogeneous fractional $p$-Laplacian on metric measure spaces. The study of the existence and multiplicity of weak solutions mainly uses variational tools, such as the mountain pass theorem, Nehari manifold techniques, etc. In this study we employ the Nehari manifold method \cite{ST10, HSS04} to establish the multiplicity of solutions to the problem \eqref{problem}. The result is stated as follows. \begin{theorem}\label{main-thm} Let $\Omega$ be a bounded domain of a stratified Lie group ${\mathbb{G}}$ of homogeneous dimension $Q$, and let $0<s, \delta<1<p<q+1<p_s^{*}:=\frac{Qp}{Q-ps}$, $Q>ps$. Then there exists $\Lambda>0$ such that for all $\lambda \in\left(0, \Lambda\right)$ the problem \eqref{problem} admits at least two non-negative solutions in $X_0^{s,p}(\Omega)$. \end{theorem} Let us make a few more comments on the results of this paper before concluding the introduction. In this paper, our main focus is to study the subelliptic eigenvalue problem and the Brezis-Nirenberg type problem on stratified Lie groups. However, we emphasise that the proofs and statements of Theorem \ref{ev-mainthm} and Theorem \ref{main-thm} can easily be adapted, with suitable modifications, to the case of metric measure spaces, at least when the metric measure space is doubling and satisfies a Poincar\'e inequality. The paper is organized as follows: In the next section we present the basics of analysis on stratified Lie groups along with the function spaces defined on them. In Section \ref{sec4}, we study the fractional $p$-sub-Laplacian eigenvalue problem on stratified Lie groups. The existence of (at least) two solutions of the nonlocal singular problem via the Nehari manifold technique is analysed in Section \ref{sec5}. The last section is devoted to showing the boundedness of solutions by employing the Moser iteration, followed by a comparison principle. \section{Preliminaries: Stratified Lie groups and Sobolev spaces}\label{s2} This section is devoted to recapitulating some basic notations and concepts related to stratified Lie groups and the fractional Sobolev spaces defined on them. There are many ways to introduce the notion of stratified Lie groups; for instance, one may refer to the books and monographs \cite{FS82,BLU07,FR16,RS19}. In his seminal paper \cite{F75}, Folland extensively investigated the properties of function spaces on these groups. The monograph \cite{FR16} deals with the theory associated with higher order invariant operators, namely, Rockland operators on graded Lie groups. For precise studies and properties of stratified Lie groups, we refer to \cite{GL92, F75, FS82, BLU07, FR16}. \begin{definition}\label{d1} A Lie group $\mathbb{G}$ (on $\mathbb{R}^N$) is said to be homogeneous if, for each $\lambda>0$, there exists an automorphism $D_{\lambda}:\mathbb{G} \rightarrow\mathbb{G}$ defined by $D_{\lambda}(x)=(\lambda^{r_1}x_1, \lambda^{r_2}x_2,..., \lambda^{r_N}x_N)$ for $r_i>0,\,\forall\, i=1,2,...,N$. The map $D_\lambda$ is called a {\it dilation} on $\mathbb{G}$. \end{definition} For simplicity, we sometimes prefer to use the notation $\lambda x$ to denote the dilation $D_{\lambda}x$. Note that if $\lambda x$ is a dilation, then $\lambda^r x$ is also a dilation.
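To illustrate Definition \ref{d1} with two simple abelian examples (recorded here only for orientation): the Euclidean space $(\mathbb{R}^N,+)$ with the isotropic dilations $D_{\lambda}(x)=\lambda x$, corresponding to $r_1=\cdots=r_N=1$, is a homogeneous Lie group, and so is $(\mathbb{R}^2,+)$ with the anisotropic dilations
$$D_{\lambda}(x_1,x_2)=(\lambda x_1,\lambda^{2}x_2),\qquad (x_1,x_2)\in\mathbb{R}^2,$$
for which $r_1=1$ and $r_2=2$.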
The number $Q=r_1+r_2+...+r_N$ is called the homogeneous dimension of the homogeneous Lie group $\mathbb{G}$ and the natural number $N$ represents the topological dimension of $\mathbb{G}.$ The Haar measure on $\mathbb{G}$ is denoted by $dx$ and it is nothing but the usual Lebesgue measure on $\mathbb{R}^N.$ \begin{definition}\label{d-carnot} A homogeneous Lie group $\mathbb{G} = (\mathbb{R}^N, \circ)$ is called a stratified Lie group (or a homogeneous Carnot group) if the following two conditions are fulfilled: \begin{enumerate}[label=(\roman*)] \item For some natural numbers $N_1, N_2, \ldots, N_k$ with $N_1 + N_2+ ... + N_k = N$, the decomposition $\mathbb{R}^N = \mathbb{R}^{N_1} \times \mathbb{R}^{N_2} \times ... \times \mathbb{R}^{N_k}$ holds, and for each $\lambda > 0$ there exists a dilation of the form $D_{\lambda}(x)= (\lambda x^{(1)}, \lambda^2 x^{(2)},..., \lambda^{k}x^{(k)})$ which is an automorphism of the group $\mathbb{G}$. Here $x^{(i)}\in \mathbb{R}^{N_i}$ for $i = 1,2, ..., k$. \item With $N_1$ the same as in the above decomposition of $\mathbb{R}^N$, let $X_1, ..., X_{N_1}$ be the left-invariant vector fields on $\mathbb{G}$ such that $X_j(0) = \frac{\partial}{\partial x_j}|_0$ for $j = 1, ..., N_1$. Then the H\"{o}rmander condition $\mathrm{rank}\big(\mathrm{Lie}\{X_1, ..., X_{N_1} \}(x)\big) = N$ holds for every $x \in \mathbb{R}^{N}$. In other words, the Lie algebra corresponding to the Lie group $\mathbb{G}$ is spanned by the iterated commutators of $X_1, ..., X_{N_1}$. \end{enumerate} \end{definition} Here $k$ is called the step of the homogeneous Carnot group. Note that, in the case of stratified Lie groups, the homogeneous dimension becomes $Q=\sum_{i=1}^{k}iN_i$. Furthermore, the left-invariant vector fields $X_j$ satisfy the divergence theorem and they can be written explicitly as \begin{equation}\label{d-vec fld} X_i=\frac{\partial}{\partial x_i^{(1)}} + \sum_{j=2}^{k}\sum_{l=1}^{N_j} a^{(j)}_{i,l}(x^{(1)}, x^{(2)}, ..., x^{(j-1)})\frac{\partial}{\partial x_l^{(j)}}. \end{equation} For simplicity, we set $n=N_1$ in the above Definition \ref{d-carnot}. An absolutely continuous curve $\gamma:[0,1]\rightarrow\mathbb{G}$ is said to be admissible if there exist functions $c_i:[0,1]\rightarrow\mathbb{R}$, for $i=1,2,...,n$, such that $${\dot{\gamma}(t)}=\sum_{i=1}^{n}c_i(t)X_i(\gamma(t))~\text{and}~ \sum_{i=1}^{n}c_i(t)^2\leq1.$$ Observe that the functions $c_i$ may not be unique as the vector fields $X_i$ may not be linearly independent. For any $x,y\in\mathbb{G}$ the Carnot-Carath\'{e}odory distance is defined as $$\rho_{cc}(x,y)=\inf\{l>0: ~\text{there exists an admissible}~ \gamma:[0,l]\rightarrow\mathbb{G} ~\text{with}~ \gamma(0)=x ~\text{\&}~ \gamma(l)=y \},$$ and we set $\rho_{cc}(x,y)=+\infty$ if no such curve exists. This $\rho_{cc}$ is not a metric in general, but the H\"{o}rmander condition for the vector fields $X_1,X_2,...,X_{N_1}$ ensures that $\rho_{cc}$ is a metric. The space $(\mathbb{G}, \rho_{cc})$ is known as a Carnot-Carath\'{e}odory space. Let us now define a quasi-norm on the homogeneous Carnot group $\mathbb{G}$. \begin{definition}\label{d-quasi-norm} A continuous function $|\cdot|: \mathbb{G} \rightarrow \mathbb{R}^{+}$ is said to be a homogeneous quasi-norm on a homogeneous Lie group $\mathbb{G}$ if it satisfies the following conditions: \begin{enumerate}[label=(\roman*)] \item (definiteness): $|x| = 0$ if and only if $x = 0$. \item (symmetry): $|x^{-1}| = |x|$ for all $x \in\mathbb{G}$, and \item ($1$-homogeneity): $|\lambda x| = \lambda |x|$ for all $x \in\mathbb{G}$ and $\lambda>0$.
\end{enumerate} \end{definition} An example of a quasi-norm on $\mathbb{G}$ is the norm defined as $d(x):=\rho_{cc}(x, 0),\,\, x\in \mathbb{G}$, where $\rho_{cc}$ is the Carnot-Carath\'{e}odory distance related to the H\"ormander vector fields on $\mathbb{G}.$ It is known that all homogeneous quasi-norms are equivalent on $\mathbb{G}.$ In this paper we will work with a left-invariant homogeneous distance $d(x, y):=|y^{-1} \circ x|$ for all $x, y \in \mathbb{G}$, induced by a homogeneous quasi-norm of $\mathbb{G}.$ The sub-Laplacian (or horizontal Laplacian) on $\mathbb{G}$ is defined as \begin{equation}\label{d-sub-lap} \mathcal{L}:=X_{1}^{2}+\cdots+X_{N_1}^{2}. \end{equation} The horizontal gradient on $\mathbb{G}$ is defined as \begin{equation}\label{d-h-grad} \nabla_{\mathbb{G}}:=\left(X_{1}, X_{2}, \cdots, X_{N_1}\right). \end{equation} The horizontal divergence on $\mathbb{G}$ is defined by \begin{equation}\label{d-h-div} \operatorname{div}_{\mathbb{G}} v:=\nabla_{\mathbb{G}} \cdot v. \end{equation} For $p \in(1,+\infty)$, we define the $p$-sub-Laplacian on the stratified Lie group $\mathbb{G}$ as \begin{equation}\label{d-p-sub} \Delta_{\mathbb{G},p} u:=\operatorname{div}_{\mathbb{G}}\left(\left|\nabla_{\mathbb{G}} u\right|^{p-2} \nabla_{\mathbb{G}} u\right). \end{equation} Let $\Omega$ be a Haar measurable subset of $\mathbb{G}$. Then $\mu(D_{\lambda}(\Omega))=\lambda^{Q}\mu(\Omega)$, where $\mu(\Omega)$ is the Haar measure of $\Omega$. The quasi-ball of radius $r$ centered at $x\in\mathbb{G}$ with respect to the quasi-norm $|\cdot|$ is defined as \begin{equation}\label{d-ball} B(x, r)=\left\{y \in \mathbb{G}: \left|y^{-1} \circ x\right|<r\right\}. \end{equation} Observe that $B(x, r)$ can be obtained by the left-translation by $x$ of the ball $B(0, r)$. Furthermore, $B(0, r)$ is the image under the dilation $D_{r}$ of $B(0,1)$. Thus, we have $\mu(B(x,r))=r^{Q}\mu(B(0,1))$ for all $x\in \mathbb{G}$ and $r>0$. We are now in a position to define the notion of fractional Sobolev-Folland-Stein type spaces related to our study. Let $\Omega \subset {\mathbb{G}}$ be an open subset. Then for $0<s<1< p<\infty$, the fractional Sobolev space $W^{s,p}(\Omega)$ on stratified groups is defined as \begin{equation} W^{s,p}(\Omega)=\{u\in L^{p}(\Omega): [u]_{s, p,\Omega}<\infty\}, \end{equation} endowed with the norm \begin{equation} \|u\|_{W^{s,p}(\Omega)}=\|u\|_{L^p(\Omega)}+[u]_{s,p,\Omega}, \end{equation} where $[u]_{s, p,\Omega}$ denotes the Gagliardo semi-norm defined by \begin{equation} [u]_{s, p,\Omega}:=\left(\int_{\Omega} \int_{\Omega} \frac{|u(x)-u(y)|^{p}}{\left|y^{-1} x\right|^{Q+ps}} dxdy\right)^{\frac{1}{p}}. \end{equation} Observe that for all $\phi \in C_c^{\infty}(\Omega)$, we have $[\phi]_{s, p,\Omega}<\infty$. We define the space $W_0^{s,p}(\Omega)$ as the closure of $C_c^{\infty}(\Omega)$ with respect to the norm $\|u\|_{W^{s,p}(\Omega)}$. We would like to point out that $W_0^{s,p}(\mathbb{G})=W^{s,p}(\mathbb{G})$. Now for an open bounded subset $\Omega \subset {\mathbb{G}}$, define the space $X_0^{s,p}(\Omega)$ as the closure of $C_c^{\infty}(\Omega)$ with respect to the norm $\|u\|_{L^p(\Omega)}+[u]_{s, p,\mathbb{G}}$. Note that the spaces $X_0^{s,p}(\Omega)$ and $W_0^{s,p}(\Omega)$ are different even in the Euclidean case unless $\Omega$ is an extension domain (see \cite{NPV12}). \begin{lemma}\label{l-2} The space $X_0^{s,p}(\Omega)$ is a reflexive Banach space for $1< p<\infty$.
\end{lemma} The space $X_0^{s,p}(\Omega)$ can be equivalently defined with respect to the homogeneous norm $[u]_{s, p,\mathbb{G}}$. Indeed, for $u\in C_c^{\infty}(\Omega)$ and $B_r\subset {\mathbb{G}}\setminus\Omega$, we have \begin{equation} \left|u(x)\right|^{p}= {|y^{-1}x|^{Q+ps}} \frac{\left|u(x)-u(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} \end{equation} for all $x\in \Omega$ and $y\in B_r$. Integrating first with respect to $y$ and then with respect to $x$, we obtain, \begin{equation} \int_{\Omega}\left|u(x)\right|^{p}dx\leq \frac{diam(\Omega\cup B_r)^{Q+ps}}{|B_r|} \int_\Omega\int_{B_r} \frac{\left|u(x)-u(y)\right|^{p}}{|y^{-1}x|^{Q+ps}}dxdy. \end{equation} Now define $$C=C(Q,s,p,\Omega)=\inf\Big\{\frac{diam(\Omega\cup B)^{Q+ps}}{|B|}: B\subset {\mathbb{G}}\setminus\Omega~\text{is a ball}\Big\}.$$ Then we have the following Poincar\'{e} type inequality, \begin{equation}\label{poin} \|u\|_{L^p(\Omega)}^p\leq C[u]_{s, p,\mathbb{G}}^p. \end{equation} This confirms that the space $X_0^{s,p}(\Omega)$ can be defined as a closure of $C_c^{\infty}(\Omega)$ with respect to the homogeneous norm $[u]_{s,p, \mathbb{G}}$. That is $$\|u\|_{X_0^{s,p}(\Omega)}\cong[u]_{s, p,\mathbb{G}}~\text{for all}~u\in X_0^{s,p}(\Omega).$$ Moreover, the construction of the space $X_0^{s,p}(\Omega)$ suggests that by assigning $u=0$ in ${\mathbb{G}}\setminus\Omega$ for $u\in X_0^{s,p}(\Omega)$, we conclude that the inclusion map $i:X_0^{s,p}(\Omega)\rightarrow {W}^{s,p}(\mathbb{G})$ is well-defined and continuous. Note that, in general $X_0^{s,p}(\Omega)\subset\{u\in W^{s, p}({\mathbb{G}}): u=0~\text{in}~{\mathbb{G}}\setminus\Omega \}$. From the equivalence of the norms and being the closure of $C_c^{\infty}(\Omega)$ with respect to the norm $\|\cdot\|_{L^p(\Omega)}+ [u]_{s, p,\mathbb{G}}$, we can represent $X_0^{s,p}(\Omega)$ as follows: $$X_0^{s,p}(\Omega)=\{u\in W^{s, p}({\mathbb{G}}): u=0~\text{in}~{\mathbb{G}}\setminus\Omega \},$$ whenever $\Omega$ is bounded with at least continuous boundary $\partial\Omega$. For $u\in X_0^{s,p}(\Omega)$, \begin{align*} [u]_{s, p,\mathbb{G}}&=\iint_{\mathbb{G} \times \mathbb{G}} \frac{|u(x)-u(y)|^{p}}{\left|y^{-1} x\right|^{Q+p s}} dxdy=\iint_{\mathbb{G} \times \mathbb{G}\setminus (\Omega^c\times\Omega^c)} \frac{|u(x)-u(y)|^{p}}{\left|y^{-1} x\right|^{Q+p s}} dxdy. \end{align*} We conclude this section with the following two definitions. For $s\in(0,1)$ and $p\in(1,\infty)$, we define the fractional $p$-sub-Laplacian as \begin{equation} \left(-\Delta_{p,{\mathbb{G}}}\right)^s u(x):=C_{Q,s, p}\,\, P.V. \int_{{\mathbb{G}} } \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{\left|y^{-1} x\right|^{Q+p s}} dy, \quad x \in {\mathbb{G}}. \end{equation} For any $\varphi\in X_0^{s,p}(\Omega)$, we have \begin{equation} \langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u,\varphi\rangle= \iint_{\mathbb{G} \times \mathbb{G}} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{\left|y^{-1} x\right|^{Q+p s}} dxdy. \end{equation} The simplest example of a stratified Lie group is the Heisenberg group $\mathbb{H}^N$ with the underlying manifold $\mathbb{R}^{2N+1}:=\mathbb{R}^N\times\mathbb{R}^N\times\mathbb{R}$ for $N\in \mathbb{N}$. 
For $(x, y, t),(x', y', t')\in \mathbb{H}^N$ the multiplication in $\mathbb{H}^N$ is given by
\begin{equation*}
(x, y, t) \circ(x', y', t')=(x+x', y+y', t+t'+2 (\langle x',y\rangle-\langle x,y'\rangle)),
\end{equation*}
where $(x, y, t), (x', y', t') \in \mathbb{R}^{N}\times \mathbb{R}^{N}\times \mathbb{R}$ and $\langle\cdot,\cdot\rangle$ represents the inner product on $\mathbb{R}^N$. The homogeneous structure of the Heisenberg group $\mathbb{H}^N$ is provided by the following dilation, for $\lambda>0,$
\begin{equation*}
D_{\lambda}(x,y,t)=(\lambda x, \lambda y, \lambda^2 t).
\end{equation*}
The homogeneous dimension $Q$ of $\mathbb{H}^N$ is given by $Q=N+N+2=2N+2$, while the topological dimension of $\mathbb{H}^N$ is $2N+1.$ The left-invariant vector fields $\{X_i,Y_i\}_{i=1}^N$ defined below, together with $T$, form a basis for the Lie algebra corresponding to the Heisenberg group $\mathbb{H}^N$:
\begin{align}
X_i&=\frac{\partial}{\partial x_i}+2y_i\frac{\partial}{\partial t}, \quad Y_i=\frac{\partial}{\partial y_i}-2x_i\frac{\partial}{\partial t}~\text{and}~ T=\frac{\partial}{\partial t}, ~\text{for}~i=1, 2,..., N.
\end{align}
It is easy to see that $[X_i,Y_i]=-4T$ for $i=1,2,...,N$ (indeed, the mixed second-order derivatives cancel and $[X_i,Y_i]f=X_i(Y_if)-Y_i(X_if)=-2\partial_t f-2\partial_t f=-4Tf$) and
$$[X_i,X_j]=[Y_i,Y_j]=[X_i,Y_j]=[X_i,T]=[Y_j,T]=0$$
for all $i\neq j$, and these vector fields satisfy the H\"{o}rmander rank condition. Consequently, the sub-Laplacian on $\mathbb{H}^N$ is given by
$$\mathfrak{L}_{\mathbb{H}^N}:=\sum_{i=1}^N (X_i^2+Y_i^2).$$
\section{Fractional $p$-sub-Laplacian eigenvalue problem on stratified Lie groups} \label{sec4}
This section is devoted to the study of eigenvalue problems associated to the fractional $p$-sub-Laplacian on stratified Lie groups. Let us consider the following PDE on a stratified Lie group ${\mathbb{G}}$:
\begin{align}\label{l-eigen}
\left(-\Delta_{p,{\mathbb{G}}}\right)^{s} u&=\nu|u|^{p-2} u,~\text{in}~\Omega,\nonumber\\
u&=0~\text{ in }~{\mathbb{G}}\setminus\Omega,
\end{align}
where $\nu\in\mathbb{R}$ and $\Omega$ is a bounded domain in ${\mathbb{G}}$. The problem \eqref{l-eigen} is usually referred to as the fractional $p$-sub-Laplacian (or $(s,p)$-sub-Laplacian) eigenvalue problem.
\begin{definition}
We say that $u\in X_0^{s,p}(\Omega)$ is a weak solution to \eqref{l-eigen} if, for each $\phi\in C_c^{\infty}(\Omega),$ we have
\begin{equation}\label{ev-soln}
\langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u,\phi\rangle=\nu\int_{\Omega} |u|^{p-2}u\phi dx.
\end{equation}
A nontrivial solution to \eqref{ev-soln} is known as an $(s,p)$-sub-Laplacian eigenfunction corresponding to the $(s,p)$-sub-Laplacian eigenvalue $\nu$.
\end{definition}
Such eigenfunctions are directly related to the following minimization problem for the Rayleigh quotient $\mathcal{R}$ defined by
\begin{equation}\label{ev-raleigh}
\mathcal{R}(u):=\cfrac{\iint_{\mathbb{G}\times\mathbb{G}}\frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy}{\int_{\Omega}|u(x)|^p dx},~u\in C_c^{\infty}(\Omega).
\end{equation}
Observe that a minimizer for the Rayleigh quotient does not change its sign. This follows immediately from the strict triangle inequality
\begin{equation*}
|u(x)-u(y)|>||u(x)|-|u(y)||~\text{whenever}~ u(x)u(y)<0.
\end{equation*}
Consider the space $\mathcal{S}$ defined as
\begin{equation}
\mathcal{S}=\{u\in X_0^{s,p}(\Omega): \|u\|_p=1\}.
\end{equation}
Then the eigenfunctions of \eqref{l-eigen} are the minimizers of the following energy functional on $\mathcal{S}$:
\begin{equation}\label{ev-energy}
I(u)=\iint_{\mathbb{G}\times \mathbb{G}}\frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy.
\end{equation} In particular, the eigenfunctions of the problem \eqref{l-eigen} coincides with the critical points of $I$ on the space $\mathcal{S}$. We define the first eigenvalue or the least eigenvalue $\lambda_1(\Omega)$ over $\Omega$ as \begin{align}\label{ev-1} \lambda_1(\Omega)=&\inf\{\mathcal{R}(\phi): \phi \in C_c^{\infty}(\Omega)\}\\ &\text{or}\nonumber\\ \lambda_1(\Omega)=&\inf\{I(u): u \in \mathcal{S}\}. \end{align} Recall the Sobolev inequality \eqref{embed-cont} which is given by $$\left(\int_{\Omega}|u(x)|^{p^*} dx \right)^{\frac{1}{p_s^*}} \leq C(Q, p, s) \left( \int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}} dx\,dy \right)^{\frac{1}{p}}.$$ From the H\"older inequality with the exponent $\frac{p^*_s}{p}$ and $\frac{p_s^*}{p_s^*-p}$ we obtain the following inequality which assures that the first eigenvalue $\lambda_1(\Omega)$ of \eqref{l-eigen} is positive: \begin{align}\label{dadaineq} C(Q, p, s)^{-p}\,|\Omega|^{-\frac{ps}{Q}} \int_{\Omega} |u(x)|^p\,dx \leq \int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}} dx\,dy. \end{align} Thus, by definition all eigenvalues of \eqref{l-eigen} are positive. The weak solution of \eqref{l-eigen} corresponding to $\nu=\lambda_1$ is called the first eigenfunction of \eqref{l-eigen}. We now state the following existence result for the problem \eqref{l-eigen}. \begin{theorem}\label{ev-thm1} Let $0<s<1<p<\infty$ and let $\Omega$ be a bounded domain of a stratified Lie group $\mathbb{G}$ of homogeneous dimension $Q$. Then for $Q>ps$, there exists a nonnegative minimizer $\phi_1$ of \eqref{ev-energy} in $X_0^{s,p}(\Omega)$ and $\phi_1$ is a weak solution to the problem \eqref{l-eigen} for $\nu=\lambda_1(\Omega)$. Moreover, $\phi_1\in L^{\infty}(\Omega)$. Furthermore, there exists $C=C(Q, p,s)$ such that $\lambda_1(\Omega) \geq C |\Omega|^{-\frac{ps}{Q}}.$ \end{theorem} \begin{proof} The proof for existence is straightforward from the direct method of the calculus of variations. Suppose $\{u_n\}$ is a minimizing sequence for $I$. Then, by the Sobolev inequality, we have that $\{u_n\}$ is bounded in $X_0^{s,p}(\Omega)$. Thanks to the reflexivity of $X_0^{s,p}(\Omega)$, we get $\phi_1\in X_0^{s,p}(\Omega)$ such that up to a subsequence $u_n\rightharpoonup \phi_1$ weakly in $X_0^{s,p}(\Omega)$ and therefore, $u_n\rightarrow \phi_1$ strongly in $(X_0^{s,p}(\Omega))^{'} :=X_0^{-s,p'}(\Omega)$. Thus, Theorem \ref{l-3} implies that $u_n\rightarrow \phi_1$ strongly in $L^p(\Omega)$ and $u_n\rightarrow \phi_1$ a.e. in $\Omega$ and $u_n\rightarrow \phi_1$ strongly in $L^{p'}(\Omega)$, where $p'=\frac{p}{p-1}.$ To prove the strong convergence, we show that $\|u_n\|_{X_0^{s,p}(\Omega)}\rightarrow\|\phi_1\|_{X_0^{s,p}(\Omega)}$. The weak convergence implies that \begin{equation} \label{4.7eq} \langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u_n- \left(-\Delta_{p,{\mathbb{G}}}\right)^s \phi_1, u_n-\phi_1\rangle\rightarrow0. \end{equation} We will use the following inequality from \eqref{append}: \begin{equation} \label{4.8eq} \langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u_1- \left(-\Delta_{p,{\mathbb{G}}}\right)^s u_2, u_1-u_2\rangle\geq C_p\begin{cases} [u_1-u_2]_{s,p}^p,&\text{if}~p\geq2\\ \frac{[u_1-u_2]_{s,p}^2}{\left([u_1]_{s,p}^p+[u_2]_{s,p}^p\right)^{\frac{2-p}{p}}},&\text{if}~1<p<2. 
\end{cases}
\end{equation}
Thus, by combining the two inequalities \eqref{4.7eq} and \eqref{4.8eq}, we obtain $\|u_n\|_{X_0^{s,p}(\Omega)}\rightarrow\|\phi_1\|_{X_0^{s,p}(\Omega)}$ and therefore, by using the uniform convexity, we conclude that $u_n\rightarrow \phi_1$ strongly in $X_0^{s,p}(\Omega)$. In addition to this, we also observe that $I(|\phi_1|)=I(\phi_1)$, and thus the minimizer can be chosen to be nonnegative. Indeed, we have
\begin{align*}
\lambda_1(\Omega)&=\inf_{u\in \mathcal{S}}\int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy\\
&\leq \int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{||\phi_1(x)|-|\phi_1(y)||^p}{|y^{-1}x|^{Q+ps}}dxdy\\
&\leq \int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|\phi_1(x)-\phi_1(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy\\
&=\lambda_1(\Omega).
\end{align*}
Thus, $|\phi_1|$ also minimizes $I$ over $\mathcal{S}$. Therefore, we may conclude that the first eigenfunction of \eqref{l-eigen} can be chosen to be non-negative. By taking $\lambda=0,$ $g(x)=\nu$ and $q=p$ in the problem \eqref{problem} and from Lemma \ref{bounded} (see Section \ref{sec6}), we deduce that every solution of the eigenvalue problem \eqref{ev-1} is uniformly bounded.
\end{proof}
\begin{theorem}\label{sign-changing}
Let $0<s<1<p<\infty.$ Assume that $\Omega\subset{\mathbb{G}}$ is a bounded domain of a stratified Lie group $\mathbb{G}$. Let $v\in X_0^{s,p}(\Omega)$ solve \eqref{l-eigen}, assume that $v>0$, and let $\nu$ be the eigenvalue corresponding to $v$. Then we have
\begin{equation}
\nu=\lambda_1(\Omega),
\end{equation}
where $\lambda_1(\Omega)=\inf \{I(\phi): \phi \in \mathcal{S}\}$. In particular, any eigenfunction corresponding to an eigenvalue $\nu>\lambda_1(\Omega)$ must be sign-changing.
\end{theorem}
\begin{proof}
For every nonnegative $u,v\in X_0^{s,p}(\Omega)$, we claim that
\begin{equation}\label{ev-z}
I(z(t))\leq(1-t)I(v)+tI(u),~\forall\, t\in[0,1],
\end{equation}
where $z(t)= \left((1-t)v^p(x) + tu^p(x) \right)^{1/p}, ~\forall\,t\in[0,1].$ Let us first establish the above inequality. The estimate follows immediately by considering the $\ell_p$-norm of $z(t)$ over $\mathbb{R}^2$. Observe that
$$z(t)=\left\|\left(t^{\frac{1}{p}} u,(1-t)^{\frac{1}{p}} v\right)\right\|_{\ell_{p}}.$$
For any $x, y \in{\mathbb{G}}$, we first put
$$a=\left(t^{1 / p} u(y),(1-t)^{1 / p} v(y)\right)$$
and
$$b=\left(t^{1 / p} u(x),(1-t)^{1 / p} v(x)\right)$$
in the following triangle inequality
$$|\|a\|_{\ell_p}-\|b\|_{\ell_p}|\leq\|a-b\|_{\ell_p}$$
and then divide it by the fractional $p$-kernel $|y^{-1}x|^{Q+ps}$ on both sides, followed by integration, to obtain the desired inequality \eqref{ev-z}.
We now proceed to prove the main claim of this theorem. Suppose that $v \in X_0^{s, p}(\Omega)$ with $v>0$ in $\Omega$ is a weak solution of \eqref{l-eigen}. Further, by normalizing if necessary, we may assume that $\|v\|_p=1$. Suppose that $u \in X_0^{s, p}(\Omega)$ is a minimizer for the problem \eqref{ev-1}, in other words,
\begin{equation*}
\lambda_1(\Omega)=\min \left\{I(u): u \in X_0^{s, p}(\Omega), \int_{\Omega}|u(x)|^{p} dx=1\right\}
\end{equation*}
is attained at $u.$ Define $u_{\epsilon}=u+\epsilon$, $v_{\epsilon}=v+\epsilon$ and, for all $x \in \Omega$,
\begin{equation*}
z(t,\epsilon)(x)=\left(t u_{\epsilon}(x)^{p}+(1-t) v_{\epsilon}(x)^{p}\right)^{\frac{1}{p}}, ~t \in[0,1].
\end{equation*}
Thanks to the inequality \eqref{ev-z}, the map $t \mapsto z(t,\epsilon)$ defines a curve in $X_0^{s, p}(\Omega)$ along which the energy $I$ is convex, as the pointwise inequality recorded below shows.
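For the reader's convenience, we record the pointwise inequality underlying this convexity; it is a direct consequence of the $\ell_p$ triangle inequality used above, where $u$ and $v$ are extended by zero outside $\Omega$ and $z(t,\epsilon)$ is understood through the same formula on all of $\mathbb{G}$. For a.e. $x,y\in\mathbb{G}$ and every $t\in[0,1]$,
\begin{align*}
\left|z(t,\epsilon)(x)-z(t,\epsilon)(y)\right|^{p}&\leq t\,|u_{\epsilon}(x)-u_{\epsilon}(y)|^{p}+(1-t)\,|v_{\epsilon}(x)-v_{\epsilon}(y)|^{p}\\
&= t\,|u(x)-u(y)|^{p}+(1-t)\,|v(x)-v(y)|^{p},
\end{align*}
where the last equality holds because $u_{\epsilon}-u$ and $v_{\epsilon}-v$ are constants; dividing by $|y^{-1}x|^{Q+ps}$ and integrating over $\mathbb{G}\times\mathbb{G}$ yields the convexity estimate below.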
Thus we have \begin{align}\label{ev-est1} \iint_{\mathbb{G}\times\mathbb{G}} &\frac{\left|z(t,\epsilon)(x)-z(t,\epsilon)(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dxdy-\iint_{\mathbb{G}\times\mathbb{G}} \frac{|v(x)-v(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy\nonumber\\ & \leq t\left(\iint_{\mathbb{G}\times \mathbb{G}} \frac{|u(x)-u(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy -\iint_{\mathbb{G} \times \mathbb{G}} \frac{|v(x)-v(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy\right)\nonumber\\ &=t\left(\lambda_1(\Omega)-\nu\right),~\forall\,t\in[0,1] ~\text{and}~ \forall\, \epsilon\ll 1. \end{align} Now, using the convexity of $\tau \mapsto|\tau|^{p},$ that is, $(|a|^p-|b|^p \geq p|b|^{p-2} b (a-b))$, we estimate the left hand side of \eqref{ev-est1} from below as follows: \begin{align}\label{ev-est2} \iint_{\mathbb{G}\times\mathbb{G}} &\frac{\left|z(t,\epsilon)(x)-z(t,\epsilon)(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dxdy-\iint_{\mathbb{G} \times \mathbb{G}} \frac{|v(x)-v(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy\nonumber\\ &\geq p\iint_{\mathbb{G} \times \mathbb{G}} \frac{|v(x)-v(y)|^{p-2}(v(y)-v(x))}{|y^{-1}x|^{Q+ps}}\nonumber\\ &\quad \quad\times\left(z(t,\epsilon)(y)-z(t,\epsilon)(x)-(v(y)-v(x))\right) dxdy, \end{align} for all\,$t\in[0,1]$ and for all $ \epsilon\ll 1.$ Observe that, the fact $u, v \in X_0^{s, p}(\Omega)$ implies that $$z(t,\epsilon)\in X_0^{s, p}(\Omega)~\text{and}~v(y)-v(x)=v_{\epsilon}(y)-v_{\epsilon}(x).$$ Thus, on testing \eqref{ev-soln} with $\phi=(z(t,\epsilon)-v_{\epsilon})$ corresponding to the eigenfunction $v$, we get, for all $\epsilon\ll 1$, \begin{align}\label{ev-est3} \iint_{\mathbb{G} \times \mathbb{G}} \frac{|v(x)-v(y)|^{p-2}(v(y)-v(x))}{|y^{-1}x|^{Q+ps}}\left(z(t,\epsilon)(y)-z(t,\epsilon)(x)-\left(v_{\epsilon}(y)-v_{\epsilon}(x)\right)\right)dxdy\nonumber\\ =\nu \int_{\Omega} v(\tau)^{p-1}\left(z(t,\epsilon)(\tau)-v_\epsilon(\tau)\right)d\tau. \end{align} Therefore, from \eqref{ev-est1}, \eqref{ev-est2} and \eqref{ev-est3}, we obtain \begin{equation}\label{ev-est4} \nu \int_{\Omega} v(\tau)^{p-1} ({z(t,\epsilon)(\tau)-v_{\epsilon}(\tau)}) d\tau\leq t(\lambda_1(\Omega)-\nu), \end{equation} for all\,$t\in[0,1]$ and for all $ \epsilon\ll 1.$ Now, by the concavity $\tau \mapsto |\tau|^{\frac{1}{p}}$ and by recalling that $z(t,\epsilon)(x)=\left(t u_{\epsilon}(x)^{p}+(1-t) v_{\epsilon}(x)^{p}\right)^{\frac{1}{p}}$ we get the following point-wise boundedness a.e. in $\Omega$ \begin{equation} v(\tau)^{p-1} ({z(t,\epsilon)(\tau)-v_{\epsilon}(\tau)})\geq t\, v(\tau)^{p-1}\left(u_{\epsilon}(\tau)-v_{\epsilon}(\tau)\right) \end{equation} and $$v(\tau)^{p-1}\left(u_{\epsilon}(\tau)-v_{\epsilon}(\tau)\right)\in L^1(\Omega).$$ Therefore, from Fatou's lemma we obtain \begin{align} \nu \int_{\Omega}\left(\frac{v(\tau)}{v_{\epsilon}(\tau)}\right)^{p-1}((u_{\epsilon}(\tau))^{p}-(v_{\epsilon}(\tau))^{p})d\tau & \leq \nu\liminf_{t \rightarrow 0^{+}} \int_{\Omega} v(\tau)^{p-1} \frac{z(t,\epsilon)(\tau)-v_{\epsilon}(\tau)}{t}d\tau\nonumber\\&\leq \lambda_1(\Omega)-\nu \end{align} for sufficiently small $\epsilon>0.$ Finally, recalling that $v>0$ and applying the Lebesgue dominated convergence theorem and then passing the limit $\epsilon \rightarrow 0^{+}$, we get \begin{equation} 0 \leq \lambda_1(\Omega)-\nu. \end{equation} Since, $\lambda_1(\Omega)$ is the least eigenvalue and $\lambda_1(\Omega)\geq\nu$, we conclude that $\lambda_1(\Omega)=\nu$. Hence, the proof is complete. \end{proof} \begin{lemma}\label{l-simple} Let $0<s<1<p<\infty$ and let $\Omega\subset{\mathbb{G}}$ be a bounded domain. 
Suppose that $u$ and $v$ are two positive eigenfunctions corresponding to $\lambda_1(\Omega)$. Then $u=cv$ for some $c>0$, that is, $u$ and $v$ are proportional. In other words, the first eigenvalue $\lambda_1(\Omega)$ is simple.
\end{lemma}
\begin{proof}
Let $u,v\in X_0^{s, p}(\Omega)$ be such that $\|u\|_p=\|v\|_p=1$ and $u,v \geq 0$. Recall the inequality \eqref{ev-z} for $t=1/2$. Then, we have
\begin{equation}\label{simple}
I\left( \left( \frac{v^p + u^p}{2}\right)^{1/p} \right)\leq\frac{I(v)+I(u)}{2}.
\end{equation}
Observe that $w=\left( \frac{v^p + u^p}{2}\right)^{1/p}\in\mathcal{S}$. Consider the convex function
$$B(l,m)=\left|l^{\frac{1}{p}}-m^{\frac{1}{p}}\right|^p~\text{for all}~l>0,m>0.$$
Recall from \cite[Lemma 13]{LL14} that
$$B\Big(\frac{l_1+l_2}{2}, \frac{m_1+m_2}{2}\Big)\leq \frac{1}{2}B(l_1,m_1) + \frac{1}{2}B(l_2,m_2)$$
and equality holds only if $l_1m_2=l_2m_1$. Thus, using the fact that $u,v,w\in\mathcal{S}$ and \eqref{simple}, we obtain
\begin{align*}
\lambda_1(\Omega)&\leq \int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|w(x)-w(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy\\
&\leq \frac{1}{2}\int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|u(x)-u(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy + \frac{1}{2}\int_{\mathbb{G}} \int_{ \mathbb{G}} \frac{|v(x)-v(y)|^p}{|y^{-1}x|^{Q+ps}}dxdy=\lambda_1(\Omega).
\end{align*}
Therefore, the inequalities become equalities and thus, by the equality condition for $B$, we get
$$u(x)v(y)=v(x)u(y)~\text{for a.e.}~x,y\in\Omega.$$
This implies
\begin{equation*}
\frac{u(y)}{v(y)}=\frac{u(x)}{v(x)}=:c.
\end{equation*}
Hence, $u=c v$ a.e. in $\Omega$.
\end{proof}
\noindent Consider the problem
\begin{align}\label{p-kuusi}
\left(-\Delta_{p,{\mathbb{G}}}\right)^s u=0~\text{in}~\Omega\nonumber\\
u=0~\text{in}~{\mathbb{G}}\setminus\Omega.
\end{align}
We say that a function $u\in X_0^{s,p}(\Omega)$ is a weak subsolution (respectively, weak supersolution) of \eqref{p-kuusi} if, for every nonnegative $\phi \in X_0^{s,p}(\Omega)$, we have
\begin{equation}\label{k-subsup}
\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|y^{-1}x|^{Q+ps}}dxdy\leq 0 \quad (\text{respectively, } \geq 0).
\end{equation}
A function $u\in X_0^{s,p}(\Omega)$ is a weak solution of \eqref{p-kuusi} if it is both a weak subsolution and a weak supersolution of \eqref{p-kuusi}. In particular, for every $\phi \in X_0^{s,p}(\Omega)$, $u$ satisfies
\begin{equation}\label{k-sol}
\int_{\mathbb{G}}\int_{\mathbb{G}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|y^{-1}x|^{Q+ps}}dxdy= 0.
\end{equation}
We define the nonlocal tail of a function $v\in X_0^{s,p}({\Omega})$ in a quasi-ball $B_R(x_0)\subset{\mathbb{G}}$ by
\begin{equation}
Tail(v,x_0,R)=\left[R^{sp}\int_{{\mathbb{G}}\setminus{B_R(x_0)}}\frac{|v(x)|^{p-1}}{|x_0^{-1}x|^{Q+ps}}dx\right]^{\frac{1}{p-1}}.
\end{equation}
Clearly, by the H\"{o}lder inequality, $Tail(v,x_0,R)$ is finite for any $v\in L^r({\mathbb{G}})$ with $r\geq p-1$ and any $R>0$. For the definitions of the nonlocal tail in the Euclidean space and the Heisenberg group, we refer to \cite{DKP14} and \cite{PP22}, respectively. We state the following comparison principle for the fractional $p$-sub-Laplacian on stratified groups. We refer to \cite{CZ18, CCHY18} for the strong maximum principle for the subelliptic $p$-Laplacian for families of H\"ormander vector fields and to \cite{RS18, RS20, RY22} for a comparison principle for higher order invariant hypoelliptic operators on graded Lie groups.
\begin{lemma} \label{ev-weakcomp} Let $\lambda>0$, $0< s<1<p<\infty$ and $u, v\in X_0^{s,p}(\Omega)$.
Suppose that $$(-\Delta_{p,{\mathbb{G}}})^sv\geq(-\Delta_{p, {\mathbb{G}}})^su$$ weakly with $v=u=0$ in ${\mathbb{G}}\setminus\Omega$. Then $v\geq u$ in ${\mathbb{G}}.$ \end{lemma} \begin{proof} It immediately follows from the proof of Lemma \ref{weakcomp} later on with $\lambda=0$. \end{proof} The next aim is to establish a minimum principle for the problem \eqref{p-kuusi}. Prior to that we will prove the following logarithmic estimate which will be used to prove the minimum principle. \begin{lemma}\label{k-log-lemma} Let $0<s<1<p<\infty$ and let $u\in X_0^{s,p}(\Omega)$ be a weak supersolution of \eqref{p-kuusi} such that $u\geq0$ in $B_R:=B_R(x_0)\subset\Omega$. Then for any $B_r:=B_r(x_0)\subset B_{\frac{R}{2}}(x_0)$ and for any $d>0$, the following estimate holds: \begin{equation}\label{k-log-ineq} \int_{B_r}\int_{B_r}\left|\log\frac{u(x)+d}{u(y)+d}\right|^p\frac{dxdy}{|y^{-1}x|^{Q+ps}}\leq Cr^{Q-ps}\left(d^{1-p}\left(\frac{r}{R}\right)^{sp}[Tail(u_{-},x_0,R)]^{p-1}+1\right), \end{equation} where $C=C(N,p,s)$, $u_{-}$ is the negative part of $u$. \end{lemma} \begin{proof} We follow the idea from \cite{DKP16} which is proved for the Euclidean case. Let us first prove an inequality similar to Lemma 3.1 of \cite{DKP16}. Let $p\geq1$ and $\epsilon\in(0,1]$. Then for any $a,b\in\mathbb{R}$, we have \begin{equation} |a|\leq (|b|+|a-b|). \end{equation} Now, using this triangle inequality and the convexity of $t^p$, we obtain \begin{align}\label{k-ineq} |a|^p\leq (|b|+|a-b|)^p&= (1+\epsilon)^p\left[\frac{1}{1+\epsilon}|b|+\frac{\epsilon}{1+\epsilon}\frac{|a-b|}{\epsilon}\right]^p\nonumber\\ &\leq (1+\epsilon)^{p-1}|b|^p+\left(\frac{1+\epsilon}{\epsilon}\right)^{p-1}|a-b|^p\nonumber\\ &\leq |b|^p +c_p\epsilon|b|^p+c^p(1+c_p\epsilon)\epsilon^{1-p}|a-b|^p, \end{align} where $c_p=(p-1)\Gamma(\max\{1,p-2\})$ is obtained by iterating the last term of the following estimate $$(1+\epsilon)^{p-1}=1+(p-1)\int_{1}^{1+\epsilon}t^{p-2}dt\leq1+\epsilon(p-1)\max\{1, (1+\epsilon)^{p-2}\}.$$ We will now proceed to prove the main estimate of this lemma. Let $d>0$ and $\eta \in C_c^{\infty}(\mathbb{G})$ be such that \begin{equation} 0\leq\eta\leq1,~~~\eta\equiv1~\text{in}~B_r,~~~\eta\equiv0~\text{in}~\mathbb{G}\setminus B_{2r}~~~~\text{and}~|\nabla_H \eta|<Cr^{-1}. \end{equation} Since $u(x)\geq0$ for all $x\in supp(\eta)$, $\psi=(u+d)^{1-p}\eta^p$ is a well-defined test function for \eqref{k-sol}. Thus, we get \begin{align}\label{k-3.5} &\int_{B_{2r}} \int_{B_{2r}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\left[\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}-\frac{\eta^{p}(y)}{(u(y)+d)^{p-1}}\right]dxdy\nonumber\\ &+2\int_{\mathbb{G}\setminus B_{2r}} \int_{B_{2r}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}dxdy=0. \end{align} We will estimate both the terms individually. Set \begin{align} I_1&=\int_{B_{2r}} \int_{B_{2r}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\left[\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}-\frac{\eta^{p}(y)}{(u(y)+d)^{p-1}}\right]dxdy\label{k-I1}\\ I_2&=2\int_{\mathbb{G} \setminus B_{2r}} \int_{B_{2r}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}dxdy.\label{k-I2} \end{align} We will first estimate $I_1$. Let us assume that $u(x)>u(y)$. Observe that $u(y)\geq0$ for all $y\in B_{2r}\subset B_R$ using the support of $\eta$. 
Then on choosing \begin{align} \label{corona} a=\eta(x), b=\eta(y)~\text{and}~\epsilon=l\frac{u(x)-u(y)}{u(x)+d}\in(0,1)~\text{with}~l\in(0,1) \end{align} in the inequality \eqref{k-ineq}, it can be estimated that \begin{align}\label{k-3.6} &\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\left[\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}-\frac{\eta^{p}(y)}{(u(y)+d)^{p-1}}\right]\nonumber\\ & \leq \frac{(u(x)-u(y))^{p-1}}{(u(x)+d)^{p-1}}\frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}\left[1+c_pl \frac{u(x)-u(y)}{u(x)+d}-\left(\frac{u(x)+d}{u(y)+d}\right)^{p-1}\right]\nonumber\\ &+c_pl^{1-p} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}}\nonumber\\ &= \left(\frac{u(x)-u(y)}{u(x)+d}\right)^{p} \frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}\left[\frac{1-\left(\frac{u(y)+d}{u(x)+d}\right)^{1-p}}{1-\frac{u(y)+d}{u(x)+d}}+c_pl\right] +c_pl^{1-p} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}}\nonumber\\ &:=J_1 +c_pl^{1-p} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}}. \end{align} We now aim to estimate $J_1$. Consider the following function \begin{equation*} h(t):=\frac{1-t^{1-p}}{1-t}=-\frac{p-1}{1-t} \int_{t}^{1} \tau^{-p} d\tau, \quad \forall t \in(0,1). \end{equation*} Since, the function $h_1(t)=\frac{1}{1-t} \int_{t}^{1} \tau^{-p} d\tau$ is decreasing in $t\in(0,1)$, we have $h$ is increasing in $t\in(0,1)$. Thus, we have \begin{equation*} h(t) \leq-(p-1),~\forall\, t\in(0,1). \end{equation*} {\bf{Case-1:}} $0<t\leq\frac{1}{2}$.\\ In this case, \begin{equation*} h(t) \leq-\frac{p-1}{2^{p}} \frac{t^{1-p}}{1-t}. \end{equation*} For $t=\frac{u(y)+d}{u(x)+d} \in(0,1 / 2]$, i.e. for $u(y)+d \leq \frac{u(x)+d}{2}$, we get \begin{equation}\label{k-3.7} J_{1} \leq \left(c_pl-\frac{p-1}{2^{p}}\right)\left[\frac{u(x)-u(y)}{u(y)+d}\right]^{p-1}\frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}, \end{equation} since $$(u(x)-u(y))\left(\frac{(u(y)+d)^{p-1}}{(u(x)+d)^{p}} \right)=\left(\frac{u(y)+d}{u(x)+d}\right)^{p-1} - \left(\frac{u(y)+d}{u(x)+d}\right)^{p}\leq 1.$$ On choosing $l$ as \begin{equation}\label{k-3.8} l=\frac{p-1}{2^{p+1} c_p} \left( =\frac{1}{2^{p+1} \Gamma (\text{max} \{1, p-2\})}<1\right), \end{equation} we obtain \begin{equation*} J_{1} \leq-\frac{p-1}{2^{p+1}} \left[\frac{u(x)-u(y)}{u(y)+d}\right]^{p-1}\frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}. \end{equation*} {\bf{Case-2:}} $\frac{1}{2}<t<1$.\\ Again choosing, $t=\frac{u(y)+d}{u(x)+d} \in(1 / 2,1)$, i.e. $u(y)+d>\frac{u(x)+d}{2}$, we obtain \begin{align}\label{k-3.9} J_{1} &\leq [c_pl-(p-1)]\left[\frac{u(x)-u(y)}{u(x)+d}\right]^{p} \frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}\nonumber\\ &- \frac{\left(2^{p+1}-1\right)(p-1)}{2^{p+1}}\left[\frac{u(x)-u(y)}{u(x)+d}\right]^{p} \frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}} \end{align} for the choice of $l$ as in \eqref{k-3.8}. We note that, for $2(u(y)+d)<u(x)+d$, we have \begin{equation}\label{k-3.10} \left[\log \left(\frac{u(x)+d}{u(y)+d}\right)\right]^{p} \leq c_p\left[\frac{u(x)-u(y)}{u(y)+d}\right]^{p-1}, \end{equation} and, for $2(u(y)+d) \geq u(x)+d,$ we derive \begin{equation}\label{k-3.11} \left[\log \left(\frac{u(x)+d}{u(y)+d}\right)\right]^{p}=\left[\log \left(1+\frac{u(x)-u(y)}{u(y)+d}\right)\right]^{p} \leq 2^{p}\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{p}, \end{equation} by using $u(x)>u(y)$ and $\log (1+x)\leq x, ~ \forall x\geq 0$. 
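For completeness, we spell out the elementary step behind \eqref{k-3.11}: since $2(u(y)+d)\geq u(x)+d$, we have
\begin{equation*}
\log \left(1+\frac{u(x)-u(y)}{u(y)+d}\right)\leq \frac{u(x)-u(y)}{u(y)+d}\leq \frac{2\left(u(x)-u(y)\right)}{u(x)+d},
\end{equation*}
and raising this chain of inequalities to the power $p$ yields \eqref{k-3.11}.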
Thus, from the estimates \eqref{k-3.6}, \eqref{k-3.7}, \eqref{k-3.9}, \eqref{k-3.10} and \eqref{k-3.11}, we obtain \begin{align*} &\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|y^{-1}x|^{Q+ps}}\left[\frac{\eta^{p}(x)}{(u(x)+d)^{p-1}}-\frac{\eta^{p}(y)}{(u(y)+d)^{p-1}}\right]\\ & \leq-\frac{1}{c_p}\left[\log \left(\frac{u(x)+d}{u(y)+d}\right)\right]^{p} \frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}+c_pl^{1-p} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}}. \end{align*} This is true also for $u(y)>u(x)$ by exchanging $x$ and $y$. The case $u(x)=u(y)$ holds trivially. Thus, we can estimate $I_1$ in \eqref{k-I1} as \begin{align}\label{k-3.12} I_{1} \leq &-\frac{1}{c(p)} \int_{B_{2 r}} \int_{B_{2 r}} \left|\log \left(\frac{u(x)+d}{u(y)+d}\right)\right|^{p} \frac{ \eta^{p}(y)}{|y^{-1}x|^{Q+ps}}dxdy\nonumber \\ &+c(p) \int_{B_{2 r}} \int_{B_{2 r}} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy, \end{align} for some constant $c(p)$ depending on the choice of $l$. We will now estimate the term $I_{2}$ in \eqref{k-I2}. Observe that $u(y)\geq 0$ for all $y \in B_{R}$. Thus, using $(u(x)-u(y))_{+}\leq u(x)$, we get \begin{equation}\label{4.40} \frac{(u(x)-u(y))_{+}^{p-1}}{(d+u(x))^{p-1}} \leq 1,~\forall\, x \in B_{2 r}, \,y \in B_{R}. \end{equation} On the other hand for $y \in \Omega\setminus B_{R}$, we have \begin{equation}\label{4.41} (u(x)-u(y))_{+}^{p-1} \leq 2^{p-1}\left[u^{p-1}(x)+(u(y))_{-}^{p-1}\right],~\forall\, x\in B_{2r}. \end{equation} Then using the inequalities \eqref{4.40} and \eqref{4.41}, we obtain \begin{align}\label{k-3.13} I_{2} \leq & 2 \int_{B_{R} \setminus B_{2 r}} \int_{B_{2 r}} (u(x)-u(y))_{+}^{p-1}(d+u(x))^{1-p} \frac{ \eta^{p}(x)}{|y^{-1}x|^{Q+ps}}dxdy\nonumber \\ &+2 \int_{\mathbb{G} \setminus B_{R}} \int_{B_{2 r}} (u(x)-u(y))_{+}^{p-1}(d+u(x))^{1-p} \frac{ \eta^{p}(x)}{|y^{-1}x|^{Q+ps}}dxdy\nonumber \\ \leq & C(p)\int_{\mathbb{G}\setminus B_{2 r}} \int_{B_{2 r}} \frac{\eta^{p}(x)}{|y^{-1}x|^{Q+ps}}dxdy+C'(p)d^{1-p} \int_{\mathbb{G} \setminus B_{R}} \int_{B_{2 r}} \frac{(u(y))_{-}^{p-1}}{|y^{-1}x|^{Q+ps}} dxdy\nonumber\\ &\leq C(p) \sup _{x \in B_{2r}} r^{Q} \int_{\mathbb{G}\setminus B_{2 r}} \frac{dy}{|y^{-1}x|^{Q+ps}}+C'(p)d^{1-p}\left|B_{r}\right| \int_{\mathbb{G} \setminus B_{R}} \frac{(u(y))_{-}^{p-1}}{\left|y^{-1}x_{0}\right|^{Q+ps}}dy\nonumber\\ &\leq C(p) r^{Q-ps} + C'(p)d^{1-p} \frac{r^{Q}}{R^{s p}}\left[\operatorname{Tail}\left(u_{-} ; x_{0}, R\right)\right]^{p-1}\nonumber\\ &\leq C(p) \int_{B_{2 r}} \int_{B_{2 r}} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy + C(p) r^{Q-ps} + C'(p)d^{1-p} \frac{r^{Q}}{R^{s p}}\left[\operatorname{Tail}\left(u_{-} ; x_{0}, R\right)\right]^{p-1}, \end{align} for some constants $C(p), C'(p)$ depending on $p$. Therefore, by using \eqref{k-3.12} and \eqref{k-3.13} in \eqref{k-3.5}, we get \begin{align}\label{k-3.16} \int_{B_{2 r}} \int_{B_{2 r}} &\left|\log \left(\frac{u(x)+d}{u(y)+d}\right)\right|^{p} \frac{\eta^{p}(y)}{|y^{-1}x|^{Q+ps}} dxdy\nonumber \\ \leq & C \int_{B_{2 r}} \int_{B_{2 r}} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy\nonumber \\ &+C d^{1-p} r^{Q} R^{-ps}\left[\operatorname{Tail}\left(u_{-} ; x_{0}, R\right)\right]^{p-1}+C r^{Q-ps}. \end{align} Again, by using $|\nabla_H\eta|\leq Cr^{-1}$, we have \begin{align}\label{k-3.17} \int_{B_{2 r}} \int_{B_{2 r}} \frac{|\eta(x)-\eta(y)|^{p}}{|y^{-1}x|^{Q+ps}}dxdy & \leq Cr^{-p} \int_{B_{2 r}} \int_{B_{2 r}}|y^{-1}x|^{-Q+p(1-s)}dxdy \leq \frac{C}{p(1-s)} r^{-s p}\left|B_{2 r}\right|. 
\end{align} Therefore, the logarithmic estimate \eqref{k-log-ineq} follows from \eqref{k-3.16} and \eqref{k-3.17}. \end{proof} We have now all the ingredients to state the following strong minimum principle. \begin{theorem}[Strong minimum principle]\label{min p} Let $0<s<1<p<\infty$ and let $\Omega \subset {\mathbb{G}}$ be an open, connected and bounded subset of a stratified Lie group $\mathbb{G}$. Assume that $u \in X_0^{s, p}(\Omega)$ is a weak supersolution of \eqref{p-kuusi} such that $u \geq 0$ in $\Omega$ and $u \not\equiv 0$ in $\Omega.$ Then $u>0$ a.e. in $\Omega$. \end{theorem} \begin{proof} Suppose for a moment that $u>0$ a.e. in $K$ for every connected and compact subset of $\Omega$. Since $\Omega$ is connected and $u\not\equiv0$ in $\Omega$, there exists a sequence of compact and connected sets $K_j\subset\Omega$ such that \begin{equation*} \left|\Omega \backslash K_{j}\right|<\frac{1}{j}~\text { and }~ u \not \equiv 0 ~\text { in }~ K_{j}. \end{equation*} Thus $u>0$ a.e. in $K_j$ for all $j$. Now passing to the limit as $j\rightarrow\infty$, we get that $u>0$ a.e. $\Omega$. Thus it enough to prove the result stated in the lemma for compact and connected subsets of $\Omega$. Since $K \subset \Omega$ is compact and connected, then there exists $r>0$ such that $K \subset\{x \in \Omega: \operatorname{dist}_{cc}(x, \partial \Omega)>2 r\}$. Again, using the compactness, there exist $x_i\in K$, $i=1,2,...,k,$ such that the quasi-balls $B_{r / 2}\left(x_{1}\right), \ldots B_{r / 2}\left(x_{k}\right)$ cover $K$ and \begin{equation}\label{bf-A4} \left|B_{r / 2}\left(x_{i}\right) \cap B_{r / 2}\left(x_{i+1}\right)\right|>0, \quad i=1, \ldots, k-1. \end{equation} Suppose that $u$ vanishes on a subset of $K$ with positive measure. Then with the help of \eqref{bf-A4}, we conclude that there exists $i \in\{1, \ldots, k-1\}$ such that \begin{equation*} |Z|:=|\left\{x \in B_{r / 2}\left(x_{i}\right): u(x)=0\right\}|>0. \end{equation*} For $d>0$ and $x \in B_{r / 2}\left(x_{i}\right)$, define \begin{equation*} F_d(x)=\log \left(1+\frac{u(x)}{d}\right). \end{equation*} Observe that for every $x\in Z$ we have \begin{equation*} F_d(x)=0. \end{equation*} Thus for every $x \in B_{r/2}\left(x_{i}\right)$ and $y\in Z$ with $x\neq y$ we get \begin{equation*} \left|F_d(x)\right|^{p}=\frac{\left|F_d(x)-F_d(y)\right|^{p}}{|y^{-1}x|^{Q+ps}}|y^{-1}x|^{Q+ps} . \end{equation*} Integrating with respect to $y \in Z$, we get \begin{equation*} |Z|\left|F_d(x)\right|^{p} \leq\left(\max _{x, y \in B_{r / 2}\left(x_{i}\right)}|y^{-1}x|^{Q+ps}\right) \int_{B_{r / 2}\left(x_{i}\right)} \frac{\left|F_d(x)-F_d(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} d y. \end{equation*} Again integrating with respect to $x \in B_{r / 2}\left(x_{i}\right)$ we deduce the following local Poincar\'{e} inequality: \begin{equation}\label{bf-A5} \int_{B_{r/2}\left(x_{i}\right)}\left|F_{d}\right|^{p}dx\leq \frac{r^{Q+ps}}{|Z|} \int_{B_{r/2}\left(x_{i}\right)} \int_{B_{r/2}\left(x_{i}\right)} \frac{\left|F_{d}(x)-F_{d}(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dx dy. \end{equation} Observe that \begin{equation*} \left|\log \left(\frac{d+u(x)}{d+u(y)}\right)\right|^{p}=\left|F_d(x)-F_d(y)\right|^{p}. \end{equation*} Plugging the logarithmic estimate \eqref{k-log-ineq} into the above Poincar\'{e} inequality \eqref{bf-A5} by using the fact that $u_{-}=0$ (hence $Tail(u_{-},x_i,R)=0$), we get \begin{equation}\label{bf-A6} \int_{B_{r/2}\left(x_{i}\right)}\left|\log \left(1+\frac{u(x)}{d}\right)\right|^{p} dx \leq C \frac{r^{2Q}}{|Z|}. 
\end{equation} Now taking limit $d\rightarrow0$ in \eqref{bf-A6}, we obtain $u=0$ a.e. in $B_{r/2}\left(x_{i}\right).$ Thanks to \eqref{bf-A4}, by repeating this arguments in the quasi-balls $B_{r / 2}\left(x_{i-1}\right)$ and $B_{r / 2}\left(x_{i+1}\right)$ and so on we obtain that $u\equiv0$ a.e. on $K$. This is a contradiction and hence $u>0$ a.e. in $K$. This completes the proof. \end{proof} \begin{lemma}\label{ll-t17} Let $0<s<1<p<\infty.$ Assume that $\Omega\subset{\mathbb{G}}$ is a bounded domain. Let $u$ be an eigenfunction of \eqref{l-eigen} corresponding to $\nu\neq\lambda_1(\Omega)$. Then we have $\nu(\Omega)>\lambda_1(\Omega_{+})$ and $\nu(\Omega)>\lambda_1(\Omega_{-})$, where $\Omega_{+}=\{u>0\}$ and $\Omega_{-}=\{u<0\}$. In particular, \begin{equation}\label{ll-ineq} \nu \geq C(N, p, s)\left|\Omega_{+}\right|^{-\frac{ps}{Q}} \text { and } \nu \geq C(Q, p,s) \left|\Omega_{-}\right|^{-\frac{ps}{Q}}. \end{equation} \end{lemma} \begin{proof} Since $\nu\neq\lambda_1(\Omega)$, then $u$ must be sign-changing. On testing the equation \eqref{ev-soln} with $\phi=u_{+}$ we obtain \begin{align*} \nu \int_{\Omega_{+}}\left|u_{+}\right|^{p} dx &\geq \iint_{\mathbb{G} \times \mathbb{G}} \frac{\left|u_{+}(x)-u_{+}(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dxdy +2^{p/2} \iint_{\mathbb{G}\times\mathbb{G}} \frac{\left(u_{+}(y) u_{-}(x)\right)^{\frac{p}{2}}}{|y^{-1}x|^{Q+ps}} dxdy. \end{align*} Dividing both sides by $\int_{\Omega_{+}}\left|u_{+}(x)\right|^{p} dx$, we have \begin{align*} \nu &\geq \lambda_{1}\left(\Omega_{+}\right) + 2^{p / 2} \frac{\iint_{\mathbb{G}\times\mathbb{G}} \frac{\left(u_{+}(y) u_{-}(x)\right)^{\frac{p}{2}}}{|y^{-1}x|^{Q+ps}} d x d y}{\int_{\Omega_{+}}\left|u_{+}(x)\right|^{p} dx}. \end{align*} Therefore, we get $\nu>\lambda_{1}\left(\Omega_{+}\right)$. Inequality \eqref{dadaineq} yields that \begin{align} \nu \int_{\Omega_{+}}\left|u_{+}\right|^{p} dx &\geq \iint_{\mathbb{G} \times \mathbb{G}} \frac{\left|u_{+}(x)-u_{+}(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dxdy \geq C |\Omega_+|^{-\frac{ps}{Q}} \int_{\Omega_+} |u_+(x)|^p dx \end{align} and dividing by $\int_{\Omega_+} |u_+(x)|^p dx$ we deduce $\nu \geq C(N,p,s)\left|\Omega_{+}\right|^{-\frac{ps}{Q}}.$ Similarly, we can deduce $\nu>\lambda_{1}\left(\Omega_{-}\right)$ and $\nu \geq C\left|\Omega_{-}\right|^{-\frac{ps}{Q}}$. This completes the proof. \end{proof} \begin{lemma}\label{isolated} Let $0<s<1<p<\infty.$ Assume that $\Omega\subset{\mathbb{G}}$ is bounded. Then the first eigenvalue $\lambda_1(\Omega)$ of \eqref{l-eigen} is isolated. \end{lemma} \begin{proof} We will prove it by contradiction. Let $\{\nu_{k}\}$ be a sequence of eigenvalues converging to $\lambda_{1}$ such that $\nu_{k} \neq \lambda_{1}$. Suppose that $u_{k}$ is the eigenfunction corresponding to $\nu_k$. Without loss of generality, we may assume that $\|u_{k}\|_p=1.$ Then we have \begin{equation*} \nu_{k}=\int_{\Omega\times\Omega} \frac{\left|u_{k}(x)-u_{k}(y)\right|^{p}}{|y^{-1}x|^{Q+ps}} dxdy. \end{equation*} By Theorem \ref{l-3}, there exists $u \in$ $X_0^{s, p}(\Omega)$ such that, up to a subsequence \begin{equation*} u_{k} \rightarrow u ~\text {strongly in }~ L^{p}\left(\Omega\right)~\text {and}~u_{k}(x)\rightarrow u(x) ~\text {point-wise a.e. in }~\Omega. \end{equation*} Then by applying Fatou's lemma, we get \begin{equation*} \frac{\iint_{\mathbb{G}\times\mathbb{G}} \frac{|u(y)-u(x)|^{p}}{|y^{-1}x|^{Q+ps}} dxdy}{\int_{\Omega}|u(x)|^{p} dx} \leq \lim _{k\rightarrow \infty} \nu_{k}=\lambda_1(\Omega). 
\end{equation*}
Hence we can conclude that $u$ is a first eigenfunction. Since $u$ minimizes the Rayleigh quotient \eqref{ev-raleigh}, it cannot change sign. Thus either $u>0$ in $\Omega$ or $u<0$ in $\Omega$. Thanks to Theorem \ref{sign-changing}, we conclude that $u_{k}$ must change sign in $\Omega$, since $\nu_{k}>\lambda_1(\Omega)$. Therefore, the sets $\Omega^{k}_{\pm}$ are nonempty and have positive measure, where
\begin{equation*}
\Omega^{k}_{+}=\left\{x\in\Omega: u_{k}(x)>0\right\} \text { and } \Omega^{k}_{-}=\left\{x\in\Omega: u_{k}(x)<0\right\}.
\end{equation*}
From the estimate \eqref{ll-ineq}, we have
\begin{equation*}
\nu_{k} \geq \lambda_{1}\left(\Omega^{k}_{+}\right) \geq C\left|\Omega^{k}_{+}\right|^{-\frac{ps}{Q}}~\text{and}~ \nu_{k} \geq \lambda_{1}\left(\Omega^{k}_{-}\right) \geq C\left|\Omega^{k}_{-}\right|^{-\frac{ps}{Q}}.
\end{equation*}
This implies that
\begin{equation*}
|\Omega_{\pm}|=|\limsup_{k\rightarrow\infty} \Omega^{k}_{\pm}|>0.
\end{equation*}
Therefore, letting $k\rightarrow\infty$, we get that $u \geq 0$ in $\Omega_{+}$ and $u \leq 0$ in $\Omega_{-}$. This contradicts the fact that $u$ is a first eigenfunction.
\end{proof}
{{\it{Proof of Theorem \ref{ev-mainthm}:}}} The proof immediately follows from Lemma \ref{l-simple}, Theorem \ref{sign-changing} and Lemma \ref{isolated}.
\section{Nehari Manifold, weak formulation and multiplicity result}\label{sec5}
In this section, we use the results established in the previous two sections to study the existence and multiplicity of weak solutions to the nonlocal singular subelliptic problem \eqref{problem}. We employ the Nehari manifold technique to establish the multiplicity of solutions. The following subsection is devoted to defining the notions of weak solution, fibering maps and the Nehari manifold, and to some preliminary results.
\subsection{Weak solution and geometry of Nehari manifold}
Let us now present the notion of a positive weak solution to the problem \eqref{problem}.
\begin{definition}\label{d-weak}
We say that $u \in X_0^{s,p}(\Omega)$ is a positive weak solution of \eqref{problem} if $u> 0$ on $\Omega$ (i.e. $\operatorname{essinf}_{K} u\geq C_K>0$ for all compact subsets $K\subset \Omega$) and
\begin{equation}
\langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u,\psi\rangle-\lambda\int_{\Omega} f(x)u^{-\delta}\psi dx-\int_{\Omega} g(x)u^{q}\psi d x=0
\end{equation}
for all $\psi\in C_c^{\infty}(\Omega)$.
\end{definition}
The energy functional $I_{\lambda}: X_0^{s,p}(\Omega)\rightarrow \mathbb{R}$ associated with the problem \eqref{problem} is defined as
\begin{equation}\label{d-energy}
I_{\lambda}(u)=\frac{1}{p} \|u\|_{X_0^{s,p}(\Omega)}^{p}-\frac{\lambda}{1-\delta} \int_{\Omega} f(x)|u|^{1-\delta} dx-\frac{1}{q+1} \int_{\Omega} g(x)|u|^{q+1} d x.
\end{equation}
We note here that, due to the presence of the singular exponent $\delta\in(0,1)$, the functional $I_{\lambda}$ is not Fr\'echet differentiable. Also, it is not bounded from below in $X_0^{s,p}(\Omega)$, as $q>p-1$. The method of Nehari manifold plays an important role in extracting critical points of this type of energy functional. We define the Nehari manifold $\mathcal{N}_{\lambda}$ for $\lambda>0$ as
\begin{equation}\label{d-nehari}
\mathcal{N}_{\lambda}:=\left\{u \in X_0^{s,p}(\Omega)\setminus\{0\}:\left\langle I_{\lambda}^{\prime}(u), u\right\rangle=0\right\}.
\end{equation}
We set
\begin{equation}\label{d-min}
c_{\lambda}=\inf \left\{I_{\lambda}(u): u \in \mathcal{N}_{\lambda}\right\}.
\end{equation} It is obvious that $u \in \mathcal{N}_{\lambda}$ if and only if \begin{equation}\label{d-nehari1} \|u\|_{X_0^{s,p}(\Omega)}^{p}-\lambda \int_{\Omega} f(x)|u|^{1-\delta} dx-\int_{\Omega} g(x)|u|^{q+1} dx=0. \end{equation} In the next result we establish the coerciveness and boundedness of the functional $I_\lambda.$ \begin{lemma}\label{l-coercive} For each $\lambda>0$, the energy $I_{\lambda}$ is coercive and bounded from below on $\mathcal{N}_{\lambda}$. \end{lemma} \begin{proof} By referring to the equations \eqref{d-energy} and \eqref{d-nehari1}, we obtain \begin{align} I_{\lambda}(u) &=\frac{1}{p} \|u\|_{X_0^{s,p}(\Omega)}^{p}-\frac{\lambda}{1-\delta} \int_{\Omega} f(x)|u|^{1-\delta} d x-\frac{1}{q+1} \int_{\Omega} g(x)|u|^{q+1} dx\nonumber \\ &=\left(\frac{1}{p}-\frac{1}{q+1}\right)\|u\|_{X_0^{s,p}(\Omega)}^{p}-\lambda\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\int_{\Omega} f(x)|u|^{1-\delta} d x\nonumber\\ & \geq \left(\frac{1}{p}-\frac{1}{q+1}\right) \|u\|_{X_0^{s,p}(\Omega)}^{p}-c\lambda\|f\|_{\infty}\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\|u\|_{X_0^{s,p}(\Omega)}^{1-\delta}.\label{eq3.6} \end{align} Since $0<1-\delta<1$ and $q+1>p>1$, we conclude that that $I_{\lambda}$ is coercive and bounded from below on $\mathcal{N}_{\lambda}$. \end{proof} Now, we prove the following lemma proceeding as in the proof given in \cite{HSS08}. \begin{lemma}\label{l-hirano} For every non-negative $u\in X_0^{s,p}(\Omega)$ there exists a non-negative, increasing sequence $\{u_n\}$ in $X_0^{s,p}(\Omega)$ with each $u_n$ having compact support in $\Omega$ such that $u_n\rightarrow u$ strongly in $X_0^{s,p}(\Omega)$. \end{lemma} \begin{proof} Take $u\in X_0^{s,p}(\Omega)$ and $u\geq0$. By invoking the density of $C_c^{\infty}(\Omega)$ in $X_0^{s,p}(\Omega)$, we can choose a sequence $\{v_n\}\subset C_c^{\infty}(\Omega)$ converging strongly to $u$ in $X_0^{s,p}(\Omega)$ such that $v_n\geq0$ for all $n\in\mathbb{N}$. We now construct another sequence $\{w_n\}$ by $w_n=\min\{v_n, u\}$. Then $w_n\rightarrow u$ strongly in $X_0^{s,p}(\Omega)$. Let $\epsilon>0$. Choose $n_1>0$ such that $\|w_{n_1}-u\|<\epsilon$, then $\|\max\{u_1, w_n\}-u\|\rightarrow0$, where $u_1:=w_{n_1}$. Again choose, $n_2$ such that $\|\max\{u_1, w_{n_2}\}-u\|<\frac{\epsilon}{2}$, then for $u_2:=\max\{u_1,w_{n_2}\}$ we have $\|\max\{u_2, w_n\}-u\|\rightarrow0$. Continuing in this way, set $u_k=\max\{u_{k-1},w_{n_k}\}$. Note that each $u_k$ is compactly supported and $\|u_k-u\|\leq\frac{\epsilon}{k}$. Thus we can deduce that $\|u_n-u\|\rightarrow0$ and this is the desired sequence. \end{proof} For each $u \in X_0^{s,p}(\Omega)$, the fiber map $\phi_{u}: (0,\infty) \rightarrow \mathbb{R}$ is defined by $\phi_{u}(t)=I_{\lambda}(t u)$. This fibering map is an important tool to extract the critical points of the energy functional $I_{\lambda}$ which was first coined by Dr\'abek and Pohozaev \cite{DP97}. Clearly, for $t>0$, we have \begin{align} \label{fiber-1} &\phi_{u}(t)= \frac{t^{p}}{p}\|u\|^{p}-\lambda \frac{ t^{1-\delta}}{1-\delta} \int_{\Omega}f(x)|u|^{1-\delta} dx-\frac{t^{q+1}}{q+1} \int_{\Omega}g(x)|u|^{q+1} dx, \end{align} \begin{align} \label{fiber-2} &\phi_{u}^{\prime}(t)=t^{p-1}\|u\|^{p}-\lambda t^{-\delta} \int_{\Omega}f(x)|u|^{1-\delta} d x-t^{q} \int_{\Omega}g(x)|u|^{q+1} d x, \end{align} and \begin{align}\label{fiber-3} &\phi_{u}^{\prime \prime}(t)=(p-1) t^{p-2}\|u\|^{p}+\delta \lambda t^{-\delta-1} \int_{\Omega}f(x)|u|^{1-\delta} d x-q t^{q-1} \int_{\Omega}g(x)|u|^{q+1} d x. 
\end{align} Observe that $\phi_{u}^{\prime}(t)=\frac{1}{t}\langle I_{\lambda}'(tu), tu\rangle$. Thus $\phi_{u}^{\prime}(t)=0$ if and only if $tu\in \mathcal{N}_{\lambda}$ for some $t>0$ and $u$ is a critical point of $I_{\lambda}$ if and only if $\phi_{u}^{\prime}(1)=0$. Thus it is natural to split $\mathcal{N}_{\lambda}$ into three essential subsets corresponding to local minima, local maxima and points of inflexion. For this purpose, we define the following three sets \begin{align} \label{n-plus} \mathcal{N}_{\lambda}^{+}&=\left\{u \in \mathcal{N}_{\lambda}: \phi_{u}^{\prime}(1)=0, \phi_{u}^{\prime \prime}(1)>0\right\} \nonumber \\&=\left\{t_{0} u \in \mathcal{N}_{\lambda}: t_{0}>0, \phi_{u}^{\prime}\left(t_{0}\right)=0, \phi_{u}^{\prime \prime}\left(t_{0}\right)>0\right\}, \end{align} \begin{align} \label{n-minus} \mathcal{N}_{\lambda}^{-}&=\left\{u \in \mathcal{N}_{\lambda}: \phi_{u}^{\prime}(1)=0, \phi_{u}^{\prime \prime}(1)<0\right\}\nonumber\\ &=\left\{t_{0} u \in \mathcal{N}_{\lambda}: t_{0}>0, \phi_{u}^{\prime}\left(t_{0}\right)=0, \phi_{u}^{\prime \prime}\left(t_{0}\right)<0\right\}, \end{align} and \begin{align} \mathcal{N}_{\lambda}^{0}&=\left\{u \in \mathcal{N}_{\lambda}: \phi_{u}^{\prime}(1)=0, \phi_{u}^{\prime \prime}(1)=0\right\}.\label{n-0} \end{align} Therefore, it is enough to find two members $u\in \mathcal{N}_{\lambda}^{+}\setminus\mathcal{N}_{\lambda}^{0}$ and $v\in \mathcal{N}_{\lambda}^{-}\setminus\mathcal{N}_{\lambda}^{0}$ to establish our result. It is easy to see that only members of the sets $\mathcal{N}_{\lambda}^{\pm}\setminus\mathcal{N}_{\lambda}^{0}$ are critical points of the energy functional $I_{\lambda}$. We first introduce the following quantity \begin{align}\label{lambda-1} \Lambda_{1}= \sup_{u \in X_0^{s,p}(\Omega)} \Big\{\lambda>0: \phi_{u}(t) ~&\text {(ref. \eqref{fiber-1}) has two critical points in }~(0, \infty) \nonumber\Big\}. \end{align} \begin{proposition}\label{lambda-finite} Under the assumptions on the problem \eqref{problem}, we have $0<\Lambda_1<\infty$. \end{proposition} To prove Proposition \ref{lambda-finite} we first prove the following result which ensure that $\Lambda_1>0.$ We first define the function $m_{u}: \mathbb{R}^{+} \rightarrow \mathbb{R}$ by \begin{equation}\label{fn-m} m_{u}(t)=t^{p-1+\delta} \|u\|_{X_0^{s,p}(\Omega)}^p-t^{q+\delta} \int_{\Omega} g(x)|u|^{q+1} dx. \end{equation} The function $m_u$ will play a crucial role to find a $\lambda_*>0$ in the following lemma. \begin{lemma}\label{n-nonempty} Under the assumptions on the problem \eqref{problem}, there exists $\lambda_*>0$ such that, for every $0<\lambda<\lambda_*,$ we have $\mathcal{N}_{\lambda}^{\pm}\neq\emptyset$, i.e., there exist unique $t_{1}$ and $t_{2}$ in $(0, \infty)$ with $t_{1}<t_{2}$ such that $t_{1} u \in \mathcal{N}_{\lambda}^{+}$and $t_{2} u \in \mathcal{N}_{\lambda}^{-}$. Moreover, for any $\lambda \in\left(0, \Lambda_{1}\right)$, we have $\mathcal{N}_{\lambda}^{0}=\emptyset$. Furthermore, $\sup\limits_{u \in \mathcal{N}_{\lambda}^{+}}\|u\|_{X_0^{s,p}(\Omega)}<\infty$ and $\inf\limits_{v \in \mathcal{N}_{\lambda}^{-}}\|v\|_{X_0^{s,p}(\Omega)}>0$. \end{lemma} \begin{proof} Using \eqref{fiber-2} and \eqref{fn-m} we first deduce that, for $t>0$, we have \begin{equation}\label{rel-m-phi} \phi_{u}^{\prime}(t)=t^{-\delta}\left(m_{u}(t)-\lambda \int_{\Omega} f(x)|u|^{1-\delta} d x\right). \end{equation} This implies that $\phi_{u}^{\prime}(t)=0$ if and only if $m_{u}(t)-\lambda \int_{\Omega} f(x)|u|^{1-\delta} dx=0$. 
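Before analysing $m_{u}$ further, we record its derivative, obtained by differentiating \eqref{fn-m} directly:
\begin{equation*}
m_{u}'(t)=(p-1+\delta)\,t^{p-2+\delta}\,\|u\|_{X_0^{s,p}(\Omega)}^{p}-(q+\delta)\,t^{q+\delta-1}\int_{\Omega} g(x)|u|^{q+1}\,dx,\qquad t>0.
\end{equation*}
In particular, for $u\neq0$ (assuming, as in the setting of \eqref{problem}, that $\int_{\Omega}g(x)|u|^{q+1}dx>0$), the equation $m_{u}'(t)=0$ has exactly one positive root, which is the value $t_{\max}$ identified below.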
Referring to \eqref{fn-m} and $q>p-1$, we note that for $u\ne0$, $m_{u}(0)=0$ and $\lim _{t \rightarrow \infty} m_{u}(t)=-\infty$. Thus, one can verify that the function $m_{u}(t)$ attains its maximum at $t=t_{\max}$ given by \begin{equation}\label{t-max} t_{\max }=\left[\frac{(p-1+\delta)\|u\|_{X_0^{s,p}(\Omega)}^{p}}{(q+\delta) \int_{\Omega}g(x)|u|^{q+1}dx}\right]^{\frac{1}{q+1-p}}. \end{equation} The value of $m_u$ at $t=t_{\max}$ is given by \begin{equation}\label{m-max} m_{u}\left(t_{\max }\right)=\left(\frac{q+2-p}{p-1+\delta}\right)\left(\frac{p-1+\delta}{q+\delta}\right)^{\frac{\delta+q}{q+1-p}}\frac{\|u\|_{X_0^{s,p}(\Omega)}^{\frac{p(q+\delta)}{q+1-p}}}{\left(\int_{\Omega}g(x)|u|^{q+1}dx\right)^{\frac{p-1+\delta}{q+1-p}}}. \end{equation} In addition, by using the fact that $\lim_{t\rightarrow0^+}m_u'(t)>0$, we conclude that $m_u$ is increasing function on $(0, t_{\max })$ and is decreasing function on $(t_{\max}, \infty)$. Indeed, we have \begin{align}\label{lambda-choice} \frac{m_{u}(t_{\max})}{\int_{\Omega}f(x)|u|^{1-\delta}dx} &=\left(\frac{q+2-p}{p-1+\delta}\right)\left(\frac{p-1+\delta}{q+\delta}\right)^{\frac{\delta+q}{\delta+1-p}}\frac{\|u\|_{X_0^{s,p}(\Omega)}^{\frac{p(q+\delta)}{\delta+1-p}}}{\left(\int_{\Omega}g|u|^{q+1}dx\right)^{\frac{p-1+\delta}{q+1-p}}\left(\int_{\Omega}f|u|^{1-\delta}dx\right)}\nonumber\\ &\geq \left(\frac{q+2-p}{p-1+\delta}\right)\left(\frac{p-1+\delta}{q+\delta}\right)^{\frac{\delta+q}{\delta+1-p}} \frac{S_{q+1}^{-\frac{p-1+\delta}{q+1-p}}{S_{1-\delta}^{-1}}}{\|f\|_{\infty}\|g\|_{\infty}^{\frac{p-1+\delta}{q+1-p}}}, \end{align} where $S_{\alpha}=\sup\{\|u\|_{\alpha}^{\alpha}: u\in X_0^{s,p}(\Omega), \|u\|_{X_0^{s,p}(\Omega)}=1\}$ for $\alpha\geq0$, i.e. $\int_{\Omega}|u|^{\alpha}dx\leq S_{\alpha}\|u\|_{X_0^{s,p}(\Omega)}^{\alpha}.$ Now we set \begin{equation}\label{lambda-max} \lambda_*= \left(\frac{q+2-p}{p-1+\delta}\right)\left(\frac{p-1+\delta}{q+\delta}\right)^{\frac{\delta+q}{\delta+1-p}}\frac{S_{q+1}^{-\frac{p-1+\delta}{q+1-p}}{S_{1-\delta}^{-1}}}{\|f\|_{\infty}\|g\|_{\infty}^{\frac{p-1+\delta}{q+1-p}}}. \end{equation} Then, for every $\lambda\in (0,\lambda_*),$ we have \begin{equation} 0<\lambda \int_{\Omega} f(x)|u|^{1-\delta} d x\leq m_{u}\left(t_{\max }\right). \end{equation} Thus, there exist $t_1$ and $t_2$ with $0<t_{1}<t_{\max }<t_{2}$ such that \begin{equation} m_{u}(t_1)=m_{u}(t_2)=\lambda \int_{\Omega} f(x)|u|^{1-\delta} d x. \end{equation} Therefore, we deduce that $\phi_u$ decreasing on the set $(0,t_1)$, increasing on $(t_1, t_2)$ and again decreasing on $(t_2, \infty)$. So, $\phi_u$ has a local maxima at $t=t_2$ and a local minima at $t=t_1$ such that $t_{2} u \in \mathcal{N}_{\lambda}^{-}$ and $t_{1} u \in \mathcal{N}_{\lambda}^{+}$. In particular, we have \begin{equation} I_{\lambda}(t_1)(u)=\min_{0\leq t\leq t_{\max}}I_{\lambda}(u)~~~\text{and}~~~I_{\lambda}(t_2)(u)=\max_{t\geq0}I_{\lambda}(u). \end{equation} We now intend to prove that $\mathcal{N}_{\lambda}^{0}=\emptyset.$ For a moment, we suppose that $u \not \equiv 0$ and $u \in \mathcal{N}_{\lambda}^{0}$, then $u \in \mathcal{N}_{\lambda}$. Therefore, by using the definition of the fibering map $\phi_{u}(t)$, we see that $t=1$ is a critical point. Now, the above arguments imply that the critical points of $\phi_{u}$ are corresponding to a local minima or a local maxima. Thus, we get either $u \in \mathcal{N}_{\lambda}^{+}$ or $u \in \mathcal{N}_{\lambda}^{-}$. 
This contradicts the fact that $u \in \mathcal{N}_{\lambda}^{0}$ and therefore we conclude that $\mathcal{N}_{\lambda}^{0}=\emptyset.$
Finally, we assume that $u \in \mathcal{N}_{\lambda}^{+}$. From \eqref{fiber-3}, \eqref{d-nehari1} and $\phi_{u}^{''}(1)>0$, we get
$$(q+1-p)\|u\|_{X_0^{s,p}(\Omega)}^{p}\leq \lambda(q+\delta) c_1\|f\|_{\infty}\|u\|_{X_0^{s,p}(\Omega)}^{1-\delta},$$
which implies that
\begin{align}\label{l-est1}
\|u\|_{X_0^{s,p}(\Omega)} &\leq\left(\frac{\lambda(q+\delta) c_1\|f\|_{\infty}}{q+1-p}\right)^{\frac{1}{p-1+\delta}}.
\end{align}
Similarly, for $v \in \mathcal{N}_{\lambda}^{-}$, from \eqref{fiber-3}, \eqref{d-nehari1} and the fact that $\phi_{v}^{''}(1)<0$, we obtain
$$(p-1+\delta)\|v\|_{X_0^{s,p}(\Omega)}^{p}\leq (q+\delta) c_2\|g\|_{\infty}\|v\|_{X_0^{s,p}(\Omega)}^{q+1},$$
which eventually gives
\begin{align}\label{l-est2}
\|v\|_{X_0^{s,p}(\Omega)} &\geq\left(\frac{p-1+\delta}{(q+\delta) c_2\|g\|_{\infty}}\right)^{\frac{1}{q+1-p}}.
\end{align}
From \eqref{l-est1} and \eqref{l-est2}, we conclude that $\sup\limits_{u \in \mathcal{N}_{\lambda}^{+}}\|u\|_{X_0^{s,p}(\Omega)}<\infty$ and $\inf\limits_{v \in \mathcal{N}_{\lambda}^{-}}\|v\|_{X_0^{s,p}(\Omega)}>0$.
\end{proof}
\begin{lemma}\label{l-crit}
Let $u$ be a local minimizer for $I_{\lambda}$ on $\mathcal{N}_{\lambda}^{-}$ or $\mathcal{N}_{\lambda}^{+}$ such that $u \notin \mathcal{N}_{\lambda}^{0}$. Then $u$ is a critical point of $I_{\lambda}$.
\end{lemma}
\begin{proof}
We first introduce the functional $J_{\lambda}(u)=\langle I_{\lambda}'(u), u\rangle$. Then, one can easily verify that $\mathcal{N}_{\lambda}=J_{\lambda}^{-1}(0)\setminus\{0\}$ and, using \eqref{d-nehari1},
\begin{align*}
\left\langle J_{\lambda}'(u), u\right\rangle &=p \|u\|_{X_0^{s,p}(\Omega)}^p-\lambda (1-\delta) \int_{\Omega} f(x)|u|^{1-\delta} d x-(q+1) \int_{\Omega} g(x)|u|^{q+1} dx \\
&=(p-1+\delta)\|u\|_{X_0^{s,p}(\Omega)}^{p}-(q+\delta) \int_{\Omega} g(x)|u|^{q+1} dx,~~\forall~u \in \mathcal{N}_{\lambda}.
\end{align*}
Since $u$ is a local minimizer for $I_{\lambda}$ on $\mathcal{N}_{\lambda}$, it minimizes $I_{\lambda}$ locally subject to the constraint
\begin{equation}
J_{\lambda}(u)=\langle I_{\lambda}'(u), u\rangle=0.
\end{equation}
Therefore, the method of Lagrange multipliers guarantees the existence of a constant $\kappa \in \mathbb{R}$ such that
$$I_{\lambda}'(u)=\kappa J_{\lambda}'(u).$$
Thus, since $u\in\mathcal{N}_{\lambda}$, we obtain
$$ 0=\langle I_{\lambda}'(u), u\rangle=\kappa\langle J_{\lambda}'(u), u\rangle=\kappa\,\phi_u^{''}(1). $$
Therefore, we conclude that $\kappa=0$, as $u \notin \mathcal{N}_{\lambda}^{0}$ implies $\phi_u^{''}(1)\neq0$. Hence, $I_{\lambda}'(u)=0$, that is, $u$ is a critical point of $I_{\lambda}$.
\end{proof}
\subsection{Existence of minimizers on $\mathcal{N}_{\lambda}^{+}$ and $\mathcal{N}_{\lambda}^{-}$}
In this subsection, we prove that $I_{\lambda}$ attains its infimum on each of $\mathcal{N}_{\lambda}^{+}$ and $\mathcal{N}_{\lambda}^{-}$, at some $u_{\lambda}\in\mathcal{N}_{\lambda}^{+}$ and $v_{\lambda}\in\mathcal{N}_{\lambda}^{-}$, respectively. Also, we show that these minimizers are solutions of \eqref{problem} and $u_{\lambda}\neq v_{\lambda}$. We have the following lemma.
\begin{lemma}\label{soln1}
For all $\lambda \in\left(0, \Lambda_{1}\right)$, there exists $u_{\lambda} \in \mathcal{N}_{\lambda}^{+}$ such that $I_{\lambda}\left(u_{\lambda}\right)=\inf I_{\lambda}\left(\mathcal{N}_{\lambda}^{+}\right)$. Moreover, $u_{\lambda}$ is a non-negative weak solution to the problem \eqref{problem}.
\end{lemma} \begin{proof} Since the functional $I_{\lambda}$ is bounded below on $\mathcal{N}_{\lambda}$ (hence bounded below on $\mathcal{N}_{\lambda}^{+}$), there exists a sequence $\{u_n\}\subset \mathcal{N}_{\lambda}^{+}$ such that $I_{\lambda}(u_n) \rightarrow \inf I_{\lambda}(\mathcal{N}_{\lambda}^{+})$ as $n\rightarrow +\infty$. Moreover, by using the coercivity of $I_{\lambda}$ and Lemma \ref{n-nonempty}, we have that $\{u_n\}$ is bounded in $X_0^{s,p}(\Omega)$ and hence by the reflexiveness of $X_0^{s,p}(\Omega)$, there exists $u_{\lambda}\in X_0^{s,p}(\Omega)$ such that $u_n\rightharpoonup u_{\lambda}$ weakly in $X_0^{s,p}(\Omega)$. Thus, by the compact embedding (ref. Theorem \ref{l-3}), we get $u_n\rightarrow u_{\lambda}$ strongly in $L^r(\Omega)$ for $1\leq r<p_s^*$ and $u_n\rightarrow u_{\lambda}$ pointwise a.e. in $\Omega$. Our aim is to show $u_n\rightarrow u_{\lambda}$ strongly in $X_0^{s,p}(\Omega)$. Prior to that we prove that $\inf I_{\lambda}(\mathcal{N}_{\lambda}^{+})<0$. Indeed, for $w\in \mathcal{N}_{\lambda}^{+}$, the fiber map $\phi$ has a local minima in $\mathcal{N}_{\lambda}^{+}$ and $\phi^{''}(1)>0$. Thus, from \eqref{fiber-3}, we get \begin{equation}\label{e1} \left(\frac{p-1+\delta}{\delta+q}\right)\left\|w\right\|_{X_0^{s,p}(\Omega)}^{p}>\int_{\Omega}\left|w\right|^{q+1} d x . \end{equation} The above inequality \eqref{e1} with the fact that $q>p-1$ retrieves the required claim. In fact, we have \begin{align*} I_{\lambda}\left(w\right) &=\left(\frac{1}{p}-\frac{1}{1-\delta}\right)\left\|w\right\|_{X_0^{s,p}(\Omega)}^{p}+\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right) \int_{\Omega}\left|w\right|^{q+1} dx\\ &\leq\frac{(1-\delta-p)}{p(1-\delta)}\left\|w\right\|_{X_0^{s,p}(\Omega)}^{p}+\frac{(p-1+\delta)}{(q+1)(1-\delta)}\left\|w\right\|_{X_0^{s,p}(\Omega)}^p\\ &=\left(-\frac{1}{p}+\frac{1}{q+1}\right)\left(\frac{p-1+\delta}{1-\delta}\right)\left\|w\right\|_{X_0^{s,p}(\Omega)}^{p}\\ &=\left(\frac{p-(q+1)}{p(q+1)}\right)\left(\frac{p-1+\delta}{1-\delta}\right)\left\|w\right\|_{X_0^{s,p}(\Omega)}^{p}\\ &<0. \end{align*} We now prove the strong convergence by contradiction. Suppose the strong convergence $u_n\rightarrow u_{\lambda}$ in $X_0^{s,p}(\Omega)$ fails. Then we have \begin{equation} \|u_{\lambda}\|_{X_0^{s,p}(\Omega)}<\lim\inf\limits_{n\rightarrow\infty}\|u_n\|_{X_0^{s,p}(\Omega)}. \end{equation} Further, by the compact embedding (see Theorem \ref{l-3}), we have \begin{align} \int_{\Omega}g(x)|u_{\lambda}|^{q+1}dx=\lim\inf\limits_{n\rightarrow\infty}\int_{\Omega}g(x)|u_n|^{q+1} dx\label{est1}\\ \int_{\Omega}f(x)|u_{\lambda}|^{1-\delta}dx=\lim\inf\limits_{n\rightarrow\infty}\int_{\Omega}f(x)|u_n|^{1-\delta} dx\label{est2}. \end{align} Since $\{u_n\}\subset \mathcal{N}_{\lambda}^{+}$ then $\phi'(1)=\langle I_{\lambda}'(u_n), u_n\rangle=0.$ Thus, we get from \eqref{eq3.6} that \begin{align} I_{\lambda}(u_n)&\geq \left(\frac{1}{p}-\frac{1}{q+1}\right) \|u_n\|_{X_0^{s,p}(\Omega)}^{p}-c\lambda\|f\|_{\infty}\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\|u_n\|_{X_0^{s,p}(\Omega)}^{1-\delta}. 
\end{align} Therefore, passing to the limit as $n\rightarrow\infty,$ we deduce \begin{align} \inf I_{\lambda}(\mathcal{N}_{\lambda}^{+})&\geq \lim\limits_{n\rightarrow\infty} \left(\frac{1}{p}-\frac{1}{q+1}\right) \|u_n\|_{X_0^{s,p}(\Omega)}^{p}-\lim\limits_{n\rightarrow\infty}c\lambda\|f\|_{\infty}\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\|u_n\|_{X_0^{s,p}(\Omega)}^{1-\delta}\nonumber\\ &>\left(\frac{1}{p}-\frac{1}{q+1}\right) \|u_{\lambda}\|_{X_0^{s,p}(\Omega)}^{p}-c\lambda\|f\|_{\infty}\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\|u_{\lambda}\|_{X_0^{s,p}(\Omega)}^{1-\delta}\nonumber\\ &>0, \end{align} which is impossible since $\inf I_{\lambda}(\mathcal{N}_{\lambda}^{+})<0$. Thus, $u_n\rightarrow u_{\lambda}$ strongly in $X_0^{s,p}(\Omega)$. Moreover, $\phi_{u_{\lambda}}^{''}(1)>0$ for all $\lambda\in(0,\Lambda_1)$. Hence, we have $u_{\lambda}\in \mathcal{N}_{\lambda}^{+}$ and $I_{\lambda}(u_{\lambda})=\inf I_{\lambda}(\mathcal{N}_{\lambda}^{+})$. Since $I_{\lambda}(u_{\lambda})=I_{\lambda}(|u_{\lambda}|)$, we may assume that $u_{\lambda}$ is non-negative. Finally, by Lemma \ref{l-crit}, we deduce that $u_{\lambda}$ is a critical point of $I_{\lambda}$ and hence a weak solution to the problem \eqref{problem}. \end{proof} The next lemma guarantees the existence of a minimizer in $\mathcal{N}_{\lambda}^{-}$. \begin{lemma}\label{soln2} For all $\lambda \in\left(0, \Lambda_{1}\right)$, there exists $v_{\lambda} \in \mathcal{N}_{\lambda}^{-}$ such that $I_{\lambda}\left(v_{\lambda}\right)=\inf I_{\lambda}\left(\mathcal{N}_{\lambda}^{-}\right)$. Moreover, $v_{\lambda}$ is a non-negative weak solution to the problem \eqref{problem}. \end{lemma} \begin{proof} Proceeding as in the proof of Lemma \ref{soln1}, we may assume that there exists a sequence $\{v_n\}\subset \mathcal{N}_{\lambda}^{-}$ such that $I_{\lambda}(v_n) \rightarrow \inf I_{\lambda}(\mathcal{N}_{\lambda}^{-})$ as $n\rightarrow +\infty$ and there exists $v_{\lambda}\in X_0^{s,p}(\Omega)$ such that $v_n\rightharpoonup v_{\lambda}$ weakly in $X_0^{s,p}(\Omega)$. Therefore, the compact embedding (see Theorem \ref{l-3}) guarantees that $v_n\rightarrow v_{\lambda}$ strongly in $L^r(\Omega)$ for $1\leq r<p_s^*$ and $v_n\rightarrow v_{\lambda}$ pointwise a.e. in $\Omega$. Let us first prove that $\inf I_{\lambda}(\mathcal{N}_{\lambda}^{-})>0$. Let $z\in \mathcal{N}_{\lambda}$. Then, using \eqref{eq3.6}, we get \begin{align} I_{\lambda}(z) &\geq \left(\frac{1}{p}-\frac{1}{q+1}\right) \|z\|_{X_0^{s,p}(\Omega)}^{p}-c\lambda\|f\|_{\infty}\left(\frac{1}{1-\delta}-\frac{1}{q+1}\right)\|z\|_{X_0^{s,p}(\Omega)}^{1-\delta}\nonumber\\ &=\|z\|_{X_0^{s,p}(\Omega)}^{1-\delta}\left[\left(\frac{q+1-p}{p(q+1)}\right)\|z\|_{X_0^{s,p}(\Omega)}^{p-1+\delta}-c\lambda\|f\|_{\infty}\left(\frac{q+\delta}{(1-\delta)(q+1)}\right)\right].\label{eq-n-} \end{align} Now, for any $\lambda<\frac{(q+1-p)(1-\delta)}{cp\|f\|_{\infty}}$ in \eqref{eq-n-}, we get $I_{\lambda}(z)>0$. Since $\mathcal{N}_{\lambda}^{+}\cap \mathcal{N}_{\lambda}^{-}=\emptyset$ and $\mathcal{N}_{\lambda}^{+}\cup \mathcal{N}_{\lambda}^{-}=\mathcal{N}_{\lambda}$ (see Lemma \ref{n-nonempty}), we must then have $z\in \mathcal{N}_{\lambda}^{-}$. Again, since $1-\delta< 1<p<q+1$, for $z\in \mathcal{N}_{\lambda}^{-}$ there exists $t>0$ such that $\phi_{z}'(t)=0$ and $\phi_{z}''(t)<0$, which implies $tz\in \mathcal{N}_{\lambda}^{-}$. The same is true for $v_{\lambda}$. We are now in a position to prove the strong convergence.
Suppose the strong convergence $v_n\rightarrow v_{\lambda}$ in $X_0^{s,p}(\Omega)$ fails. Then, proceeding as in Lemma \ref{soln1}, we obtain \begin{align} I_{\lambda}(tv_{\lambda})&< \lim\limits_{n\rightarrow\infty}I_{\lambda}(tv_n)\leq \lim\limits_{n\rightarrow\infty}I_{\lambda}(v_n)=\inf I_{\lambda}\left(\mathcal{N}_{\lambda}^{-}\right). \end{align} This gives $I_{\lambda}(tv_{\lambda})<\inf I_{\lambda}\left(\mathcal{N}_{\lambda}^{-}\right)$, which is a contradiction since $tv_{\lambda}\in \mathcal{N}_{\lambda}^{-}$. Thus, $v_n\rightarrow v_{\lambda}$ strongly in $X_0^{s,p}(\Omega)$ and $I_{\lambda}(v_{\lambda})=\inf I_{\lambda}(\mathcal{N}_{\lambda}^{-})$. Since $I_{\lambda}(v_{\lambda})=I_{\lambda}(|v_{\lambda}|)$, we may assume that $v_{\lambda}$ is non-negative. Finally, by Lemma \ref{l-crit}, we deduce that $v_{\lambda}$ is a critical point of $I_{\lambda}$ and hence a weak solution to the problem \eqref{problem}. \end{proof} {{\bf{Proof of Proposition \ref{lambda-finite}:}}} Clearly, from Lemma \ref{n-nonempty}, we get $\Lambda_1 >0$. We will prove the finiteness of $\Lambda_1$ by contradiction. Suppose $\Lambda_1= +\infty$. Let $\lambda_1$ be the first eigenvalue of the problem \eqref{p-kuusi} and let $\phi_1$ be the corresponding first eigenfunction. Choose $\bar{\lambda}>0$ such that \begin{equation}\label{lambda-finite-1} \frac{\bar{\lambda}f(x)}{t^{\delta}}+g(x)t^q>(\lambda_1+\epsilon)t^{p-1} \end{equation} for all $t \in (0,\infty)$, $x\in\Omega$ and for some $\epsilon \in(0,1).$ Recall the weak solution $u_{\lambda}\in\mathcal{N}_{\lambda}^{+}$ obtained in Lemma \ref{soln1}. Then, for the above choice of $\bar{\lambda}$, the function $\bar{u}:=u_{\bar{\lambda}}\in X_0^{s,p}(\Omega)$ is a weak supersolution to \begin{align}\label{lambda-fin-2} (-\Delta_{p,{\mathbb{G}}})^su &=(\lambda_1+\epsilon)|u|^{p-2}u~\text{in}~\Omega,\nonumber\\ u&=0~\text{in}~{\mathbb{G}}\setminus\Omega. \end{align} Then we can choose $r>0$ such that $\underline{u}=r\phi_1$ becomes a subsolution to the problem \eqref{lambda-fin-2}. Now, by using the boundedness of $\phi_1$, we can choose a smaller $r>0$ (this choice is possible since $r\phi_1$ is a subsolution) such that $\underline{u}\leq\bar{u}$. Now set $w_0=r\phi_1$ and define, for $k\geq1$, $w_k\in X_0^{s,p}(\Omega)$ as the solution of $$(-\Delta_{p,{\mathbb{G}}})^sw_k =(\lambda_1+\epsilon)|w_{k-1}|^{p-2}w_{k-1}~\text{in}~\Omega.$$ From Lemma \ref{ev-weakcomp}, for all $x\in\Omega$ we have $$r\phi_1=w_0\leq w_1\leq\cdots\leq w_k\leq\cdots\leq u_{\bar{\lambda}}.$$ This shows that $\{w_k\}$ is bounded in $X_0^{s,p}(\Omega)$ and hence, by reflexivity, we conclude that, up to a subsequence, $w_k\rightharpoonup w$ weakly in $X_0^{s,p}(\Omega)$ for some $w\in X_0^{s,p}(\Omega)$. Thus $w$ is a weak solution to \eqref{lambda-fin-2}. Since $\lambda_1+\epsilon>\lambda_1$, we arrive at a contradiction with the fact that $\lambda_1$ is simple and isolated. Hence, $\Lambda_1<\infty$.\\ Having developed all the necessary tools, we are now ready to prove our main result. \\ {\it Proof of Theorem \ref{main-thm}:} Set $\Lambda=\min\{\lambda_*, \Lambda_1\}$. Then, using the fact that $\mathcal{N}_{\lambda}^{+}\cap \mathcal{N}_{\lambda}^{-}=\emptyset$ and $\mathcal{N}_{\lambda}^{+}\cup \mathcal{N}_{\lambda}^{-}=\mathcal{N}_{\lambda}$, together with Lemma \ref{soln1} and Lemma \ref{soln2}, we obtain two solutions $u_{\lambda}\neq v_{\lambda}$ in $X_0^{s,p}(\Omega)$. In other words, the problem \eqref{problem} has at least two non-negative solutions for every $\lambda\in(0,\Lambda)$.
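\begin{remark}
For the reader's convenience, we briefly summarize, as an informal sketch in the notation used above (and not as an additional claim), the fibering-map picture behind Lemma \ref{soln1} and Lemma \ref{soln2}. For $u\in X_0^{s,p}(\Omega)\setminus\{0\}$, the fiber map $t\mapsto \phi_u(t)=I_{\lambda}(tu)$, $t>0$, reads
\begin{align*}
\phi_u(t)=\frac{t^{p}}{p}\,\|u\|_{X_0^{s,p}(\Omega)}^{p}-\frac{\lambda\, t^{1-\delta}}{1-\delta}\int_{\Omega} f(x)|u|^{1-\delta}\,dx-\frac{t^{q+1}}{q+1}\int_{\Omega} g(x)|u|^{q+1}\,dx,
\end{align*}
so that $tu\in\mathcal{N}_{\lambda}$ if and only if $\phi_u'(t)=0$, while $tu\in\mathcal{N}_{\lambda}^{+}$ (respectively $tu\in\mathcal{N}_{\lambda}^{-}$) corresponds to $\phi_u''(t)>0$ (respectively $\phi_u''(t)<0$). Since $1-\delta<1<p<q+1$, for small $\lambda>0$ the map $\phi_u$ typically admits two critical values $0<t_1<t_2$: a local minimum at $t_1$, producing points of $\mathcal{N}_{\lambda}^{+}$ where $I_{\lambda}<0$, and a local maximum at $t_2$, producing points of $\mathcal{N}_{\lambda}^{-}$ where $I_{\lambda}>0$. In particular, the solutions $u_{\lambda}$ and $v_{\lambda}$ obtained above are necessarily distinct.
\end{remark}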
\section{Regularity results for the obtained solutions} \label{sec6} In this section we prove that every nonnegative solution to the problem \eqref{problem} belongs to $L^{\infty}(\Omega)$. Let us begin with the following weak comparison principle. \begin{lemma}[Weak Comparison Principle]\label{weakcomp} Let $\lambda>0$, $0<\delta<1$, $0<s<1<p<\infty$ and $u, v\in X_0^{s,p}(\Omega)$. Suppose that $$(-\Delta_{p,{\mathbb{G}}})^sv-\frac{\lambda f(x)}{v^{\delta}}\geq(-\Delta_{p, {\mathbb{G}}})^su-\frac{\lambda f(x)}{u^{\delta}}$$ weakly with $v=u=0$ in ${\mathbb{G}}\setminus\Omega$. Then $v\geq u$ in ${\mathbb{G}}.$ \end{lemma} \begin{proof} It follows from the statement of the lemma that \begin{align}\label{3compprinci} \langle(-\Delta_{p,{\mathbb{G}}})^sv,\phi\rangle-\int_{\Omega}\frac{\lambda f(x)\phi}{v^{\delta}}\, dx&\geq\langle(-\Delta_{p,{\mathbb{G}}})^su,\phi\rangle-\int_{\Omega}\frac{\lambda f(x)\phi}{u^{\delta}}\, dx, \end{align} for all non-negative $\phi\in X_0^{s,p}(\Omega)$. Recall the identity \begin{align}\label{comp principle} |b|^{p-2}b-|a|^{p-2}a&=(p-1)(b-a)\int_0^1|a+t(b-a)|^{p-2} dt \end{align} and define \begin{align} &Q(x,y)=\int_0^1|(u(x)-u(y))+t((v(x)-v(y))-(u(x)-u(y)))|^{p-2} dt. \end{align} Then, choosing $a=v(x)-v(y)$ and $b=u(x)-u(y)$, we have \begin{align} &|u(x)-u(y)|^{p-2}(u(x)-u(y))-|v(x)-v(y)|^{p-2}(v(x)-v(y))\nonumber\\ &=(p-1)\{(u(x)-v(x))-(u(y)-v(y))\}Q(x,y). \end{align} Set $\psi=u-v=(u-v)_{+}-(u-v)_{-}$, where $(u-v)_{\pm}=\max\{\pm(u-v),0\}$. Then, for $\phi=(u-v)_{+}=\psi^{+}$ we obtain \begin{align}\label{3negativity} [\psi(x)-\psi(y)][\phi(x)-\phi(y)]&\geq(\psi^{+}(x)-\psi^{+}(y))^2\geq0. \end{align} Therefore, testing \eqref{3compprinci} with $\phi=(u-v)_{+}$ and using \eqref{3negativity}, we get \begin{align*} 0&\geq\int_{\Omega}\lambda f(x)(u-v)_{+}\left[\frac{1}{u^{\delta}}-\frac{1}{v^{\delta}}\right] dx\\ &\geq \langle(-\Delta_{p,{\mathbb{G}}})^su-(-\Delta_{p,{\mathbb{G}}})^sv,(u-v)_{+}\rangle\\ &\geq(p-1)\iint_{\mathbb{G}\times \mathbb{G}}\frac{Q(x,y)(\psi^{+}(x)-\psi^{+}(y))^2}{|y^{-1}x|^{Q+ps}}dxdy \geq0. \end{align*} Hence the last double integral vanishes, which yields $(u-v)_{+}\equiv0$, that is, $v\geq u$ a.e. in ${\mathbb{G}}$. \end{proof} \begin{remark} It is worth noting that the result of Lemma \ref{weakcomp} also holds for more general nonlocal operators of subelliptic type on homogeneous Lie groups. \end{remark} We recall the following three results from \cite{BP16} which will be useful for establishing the subsequent results. \begin{proposition}[\cite{BP16}]\label{beta convex} For every $\beta>0$ and $1\leq p<\infty$ we have the following inequality $$\left(\frac{1}{\beta}\right)^{\frac{1}{p}}\left(\frac{p+\beta-1}{p}\right)\geq 1.$$ \end{proposition} \begin{proposition}[\cite{BP16}]\label{l infty 1} Let $1<p<\infty$, let $f: \mathbb{R}\rightarrow \mathbb{R}$ be a $C^{1}$ convex function and set $J_{p}(t):=|t|^{p-2}t$. Then, the following inequality \begin{equation}\label{bdd est1 remark} J_{p}(a-b)\big[AJ_{p}(f'(a))-BJ_{p}(f'(b))\big]\geq J_{p}(f(a)-f(b))\,(A-B), \end{equation} holds for every $a, b\in \mathbb{R}$ and every $A, B\geq 0.$ \end{proposition} \begin{proposition}[\cite{BP16}]\label{l infty 2} Let $1<p<\infty$ and let $h:\mathbb{R}\rightarrow \mathbb{R}$ be an increasing function. Define $$G(t)=\int_{0}^{t}h'(\tau)^{\frac{1}{p}}\,d\tau,\quad t\in \mathbb{R}.$$ Then, we have \begin{equation}\label{bdd est2} J_{p}(a-b)(h(a)-h(b))\geq|G(a)-G(b)|^{p}. \end{equation} \end{proposition} \noindent The next lemma establishes the boundedness of solutions of the problem \eqref{problem}. We will employ a Moser type iteration argument to establish this result.
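To indicate how Proposition \ref{l infty 2} enters the iteration below, we record a model computation (a sketch only, for $a,b\geq0$; in the actual proof the truncation $u_k+\eta$ plays the role of $t+\eta$). Taking $h(t)=(t+\eta)^{\beta}$ with $\beta>0$ and $\eta>0$, we have $h'(\tau)=\beta(\tau+\eta)^{\beta-1}$, so that
\begin{align*}
G(t)=\int_{0}^{t}h'(\tau)^{\frac{1}{p}}\,d\tau=\beta^{\frac{1}{p}}\,\frac{p}{\beta+p-1}\left[(t+\eta)^{\frac{\beta+p-1}{p}}-\eta^{\frac{\beta+p-1}{p}}\right],
\end{align*}
and hence \eqref{bdd est2} gives
\begin{align*}
J_{p}(a-b)\bigl(h(a)-h(b)\bigr)\geq \frac{\beta\, p^{p}}{(\beta+p-1)^{p}}\,\Bigl|(a+\eta)^{\frac{\beta+p-1}{p}}-(b+\eta)^{\frac{\beta+p-1}{p}}\Bigr|^{p}.
\end{align*}
This explains the appearance of the constant $\frac{(\beta+p-1)^{p}}{\beta\, p^{p}}$ in \eqref{bound est 4} below.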
\begin{lemma}\label{bounded} Let $u\in X_0^{s,p}(\Omega)$ be a nonnegative weak solution to the problem \eqref{problem}. Then $u\in L^{\infty}({\Omega}).$ \end{lemma} \begin{proof} Let $\epsilon>0$ be given. Consider the smooth, Lipschitz function $g_{\epsilon}(t)=(\epsilon^2+t^2)^{\frac{1}{2}}$, which is convex and satisfies $g_{\epsilon}(t)\rightarrow|t|$ as $\epsilon \rightarrow0$. In addition, we also have $|g'_{\epsilon}(t)|\leq1.$ For each non-negative $\psi\in C_c^{\infty}(\Omega)$, we test the weak formulation \eqref{d-weak} with the test function $\varphi=|g'_{\epsilon}(u)|^{p-2}g'_{\epsilon}(u)\psi$ to obtain the following estimate \begin{align}\label{bound est 2.0} \langle(-\Delta_{p,{\mathbb{G}}})^sg_{\epsilon}(u),\psi\rangle &\leq\int_\Omega\left(\left|\frac{\lambda f(x)}{{u^{\delta}}}+g(x)u^q\right|\right)|g_{\epsilon}'(u)|^{p-1}\psi dx, \end{align} for every non-negative $\psi\in C_c^{\infty}(\Omega).$ This is immediate from Proposition \ref{l infty 1} by setting $a=u(x), b=u(y), A=\psi(x)$ and $B=\psi(y)$. Thanks to Fatou's Lemma, by passing to the limit $\epsilon\rightarrow0,$ we deduce \begin{align}\label{bound est 2} \langle(-\Delta_{p,{\mathbb{G}}})^s(|u|),\psi\rangle&\leq\int_\Omega\left(\left|\frac{\lambda{f(x)}}{{u^{\delta}}}+g(x)u^q\right|\right)\psi dx. \end{align} A standard density argument guarantees that \eqref{bound est 2} also holds for every non-negative $\psi\in X_0^{s,p}(\Omega)$. For each $k>0$, consider $u_k=\min\{(u-1)^+, k\}\in X_0^{s,p}(\Omega)$. Then, for fixed $\beta>0$ and $\eta>0,$ by testing \eqref{bound est 2} with the test function $\psi=(u_k+\eta)^{\beta}-\eta^{\beta}$ we get \begin{align*} \iint_{\mathbb{G}\times \mathbb{G}}&\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)((u_k(x)+\eta)^{\beta}-(u_k(y)+\eta)^{\beta})}{|y^{-1}x|^{Q+ps}}dxdy\\ &\qquad\leq\int_\Omega\left|\frac{\lambda{f(x)}}{{u^{\delta}}}+g(x)u^q\right|((u_k+\eta)^{\beta}-\eta^{\beta}) dx. \end{align*} We apply Proposition \ref{l infty 2} with $h(u)=(u_k+\eta)^{\beta}$ to deduce the following estimate: \begin{align}\label{bound est 4} &\iint_{\mathbb{G}\times \mathbb{G}}\cfrac{|((u_k(x)+\eta)^{\frac{\beta+p-1}{p}} -(u_k(y)+\eta)^{\frac{\beta+p-1}{p}})|^{p}}{|y^{-1}x|^{Q+ps}}dxdy\nonumber\\ &\leq \cfrac{(\beta+p-1)^{p}}{{\beta}p^{p}} \nonumber \\& \quad\times \iint_{\mathbb{G}\times \mathbb{G}}\cfrac{||u(x)|-|u(y)||^{p-2}(|u(x)|-|u(y)|)((u_k(x)+\eta)^{\beta}-(u_k(y)+\eta)^{\beta})}{|y^{-1}x|^{Q+ps}}dxdy\nonumber\\ &\leq\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\int_{\Omega}\left(\left|\frac{\lambda{f(x)}}{{u^{\delta}}}\right|+|g(x)u^q|\right)\left((u_k+\eta)^{\beta}-\eta^{\beta}\right) dx\nonumber\\ &=\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}} \nonumber \\& \times\left[\int_{\{u\geq1\}}\lambda|f(x)||u|^{-\delta}\left((u_k+\eta)^{\beta}-\eta^{\beta}\right) dx+\int_{\{u\geq1\}}|g(x)||u|^{q}\left((u_k+\eta)^{\beta}-\eta^{\beta}\right) dx\right]\nonumber\\ &\leq\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\left[\int_{\{u\geq1\}}\left(\lambda|f(x)|+|g(x)||u|^{q}\right)\left((u_k+\eta)^{\beta}-\eta^{\beta}\right) dx\right]\nonumber\\ &\leq{2C(\lambda, \|f\|_{\infty},\|g\|_{\infty})}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\left[\int_{\Omega}|u|^{q}\left((u_k+\eta)^{\beta}-\eta^{\beta}\right) dx\right]\nonumber\\ &\leq{C'}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right){\|u\|^{q}_{p_s^*}}\|(u_k+\eta)^{\beta}\|_{\kappa}, \end{align} where $\kappa=\frac{p_s^*}{p_s^*-q}$; in the last step we have used H\"older's inequality with the exponents $\frac{p_s^*}{q}$ and $\kappa$.
By recalling the fractional Sobolev inequality for the fractional $p$-sub-Laplacian \eqref{embed-cont}, we obtain \begin{align}\label{bound est 5} &\iint_{\mathbb{G}\times \mathbb{G}}\cfrac{|((u_k(x)+\eta)^{\frac{\beta+p-1}{p}} -(u_k(y)+\eta)^{\frac{\beta+p-1}{p}})|^{p}}{|y^{-1}x|^{Q+ps}}dxdy\geq{C}\left\|(u_k+\eta)^{\frac{\beta+p-1}{p}}-\eta^{\frac{\beta+p-1}{p}}\right\|_{p_{s}^*}^{p} \end{align} for some $C>0$. By using the triangle inequality together with $(u_k+\eta)^{\beta+p-1}\geq\eta^{p-1}(u_k+\eta)^{\beta},$ we have \begin{align}\label{bound est 6} \left[\int_{\Omega}\left((u_k+\eta)^{\frac{\beta+p-1}{p}} -\eta^{\frac{\beta+p-1}{p}}\right)^{p_s^*}dx\right]^{\cfrac{p}{p_s^*}}\geq\left(\frac{\eta}{2}\right)^{p-1}&\left[\int_{\Omega}(u_k+\eta)^{\frac{p_s^*\beta}{p}}dx\right]^{\cfrac{p}{p_s^*}}-\eta^{\beta+p-1}|\Omega|^{\cfrac{p}{p_s^*}}. \end{align} Thus, plugging \eqref{bound est 6} into \eqref{bound est 5} and using \eqref{bound est 4}, we obtain \begin{align}\label{bdd1} \left\|(u_k+\eta)^{\frac{\beta}{p}}\right\|^{p}_{p_s^*} \leq{C'}\left[C\left(\frac{2}{\eta}\right)^{p-1}\left(\cfrac{(\beta+p-1)^{p}}{\beta{p}^{p}}\right)\|u\|_{p_s^*}^{q}\|(u_k+\eta)^{\beta}\|_{\kappa}+\eta^{\beta}|\Omega|^{\cfrac{p}{p_s^*}}\right]. \end{align} Now, Proposition \ref{beta convex} and the estimates \eqref{bound est 4} and \eqref{bdd1} imply that \begin{align}\label{bound est 7} \left\|(u_k+\eta)^{\frac{\beta}{p}}\right\|^{p}_{p_s^*} &\leq{C'}\left[\frac{1}{\beta}\left(\cfrac{\beta+p-1}{p}\right)^{p}\left\|(u_k+\eta)^{\beta}\right\|_{\kappa}\left(\frac{C\|u\|_{p_s^*}^{q}}{\eta^{p-1}}+|\Omega|^{\cfrac{p}{p_s^*}-\cfrac{1}{\kappa}} \right)\right]. \end{align} \noindent We are now in a position to employ a Moser type bootstrap argument to establish our claim. For this, choose $\eta>0$ such that $\eta^{p-1}=C\|u\|_{p_s^*}^{q}\left(|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{-1}$. We observe that for $\beta\geq1$, we have $\beta^{p}\geq\left(\frac{\beta+p-1}{p}\right)^{p}.$ Let us now rewrite the estimate \eqref{bound est 7} by plugging in $\chi=\cfrac{p_s^*}{p\kappa}>1$ and $\tau=\beta\kappa$ as follows: \begin{align}\label{bound est 8} \left\|(u_k+\eta)\right\|_{\chi\tau}\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{\frac{\kappa}{\tau}}\left(\frac{\tau}{\kappa}\right)^{(p-1)\frac{\kappa}{\tau}}\left\|(u_k+\eta)\right\|_{\tau}. \end{align} We perform $m$ iterations of \eqref{bound est 8} with $\tau_0=\kappa$ and $\tau_{m+1}=\chi\tau_m=\chi^{m+1}\kappa$ to get (enlarging $C$ if necessary so that $C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\geq1$) \begin{align}\label{bound est 9} \left\|(u_k+\eta)\right\|_{\tau_{m+1}}&\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{\left(\sum\limits_{i=0}^{m}\frac{\kappa}{\tau_i}\right)}\left(\prod\limits_{i=0}^{m}\left(\frac{\tau_i}{\kappa}\right)^{\frac{\kappa}{\tau_i}}\right)^{p-1}\left\|(u_k+\eta)\right\|_{\kappa}\nonumber\\ &\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{\frac{\chi}{\chi-1}}\left(\chi^{\frac{\chi}{(\chi-1)^2}}\right)^{p-1}\left\|(u_k+\eta)\right\|_{\kappa}. \end{align} Now, taking the limit as $m\rightarrow\infty$, we obtain \begin{equation}\label{bound est 10} \left\|u_k\right\|_{\infty}\leq\left(C|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{\frac{\chi}{\chi-1}}\left(C'\chi^{\frac{\chi}{(\chi-1)^2}}\right)^{p-1}\left\|(u_k+\eta)\right\|_{\kappa}.
\end{equation} Finally, we use $u_k\leq(u-1)^+$ in \eqref{bound est 10} combined with the triangle inequality and pass to the limit $k\rightarrow\infty$ to obtain \begin{equation}\label{bound est 11} \left\|(u-1)^+\right\|_{\infty}=\lim_{k\rightarrow\infty}\left\|u_k\right\|_{\infty}\leq{C}\left(\chi^{\frac{\chi}{(\chi-1)^2}}\right)^{p-1}\left(|\Omega|^{\frac{p}{p_s^*}-\frac{1}{\kappa}}\right)^{\frac{\chi}{\chi-1}}\left(\left\|(u-1)^+\right\|_{\kappa}+\eta|\Omega|^{\frac{1}{\kappa}}\right). \end{equation} Therefore, we have $u\in L^{\infty}({\Omega})$, which completes the proof. \end{proof} \section{Appendix A: Sobolev-Rellich-Kondrachov type embedding on stratified Lie groups} \label{appA} The purpose of this section is to prove the continuity and compactness of the Sobolev embedding for $X^{s,p}_0(\Omega)$, where $\Omega$ is an open subset of a stratified Lie group $\mathbb{G}.$ We follow the ideas of \cite{NPV12} to establish the continuous embedding, whereas the compact embedding will be proved following an idea originating in \cite{GL92}. Recently, a similar embedding result was obtained for Rockland operators on graded Lie groups \cite{RTY20}. The embedding results for the fractional Sobolev space $X_0^{s,p}(\Omega)$ over $\mathbb{R}^N$ can be found in \cite{DGV20, FSV15}. We note here that in \cite{AM18} the authors studied weighted compact embeddings for the fractional Sobolev spaces on bounded extension domains of the Heisenberg group using an approach similar to \cite{NPV12}. Recently, the fractional Sobolev inequality on stratified Lie groups was established in \cite[Theorem 2]{KD20} (see \cite{KRS20} for fractional logarithmic inequalities on homogeneous Lie groups). Motivated by the above-mentioned investigations, we prove the continuous and compact embeddings of $X_0^{s, p}(\Omega)$ into the Lebesgue space $L^r(\Omega)$ for an appropriate range of $r\geq1$. We now state the embedding result for the space $X_0^{s, p}(\Omega)$ on stratified Lie groups. \begin{theorem} \label{l-3App} Let $\mathbb{G}$ be a stratified Lie group of homogeneous dimension $Q$, and let $\Omega\subset\mathbb{G}$ be an open set. Let $0<s<1<p<\infty$ and $Q>sp.$ Then the fractional Sobolev space $X_0^{s, p}(\Omega)$ is continuously embedded in $L^r(\Omega)$ for $p\leq r\leq p_s^*:=\frac{Qp}{Q-sp}$, that is, there exists a positive constant $C=C(Q,s,p, \Omega)$ such that for all $u\in X_0^{s, p}(\Omega)$, we have $$\|u\|_{L^r(\Omega)}\leq C \|u\|_{X_0^{s,p}(\Omega)}.$$ Moreover, if $\Omega$ is bounded, then the embedding \begin{align} X_0^{s,p}(\Omega) \hookrightarrow L^r(\Omega) \end{align} is continuous for all $r\in[1,p_s^*]$ and is compact for all $r\in[1,p_s^*)$. \end{theorem} \begin{proof} Let us recall the fractional Sobolev inequality on stratified Lie groups \cite{KD20}, given by \begin{equation}\label{fullfractionalsobo} \|u\|_{L^{p_s^*}(\mathbb{G})}\leq C \|u\|_{W^{s,p}(\mathbb{G})}. \end{equation} Thus, the space $W^{s,p}(\mathbb{G})$ is continuously embedded in $L^{p_s^*}(\mathbb{G})$. Let $r\in(p,p_s^*)$ be such that $\frac{1}{r}=\frac{\theta}{p}+\frac{1-\theta}{p_s^*}$ for some $\theta\in(0,1)$.
Then, by the interpolation inequality for Lebesgue spaces, we have $$\|u\|_{L^{r}(\mathbb{G})}\leq \|u\|_{L^{p}(\mathbb{G})}^{\theta}\|u\|_{L^{p_s^*}(\mathbb{G})}^{1-\theta}.$$ Therefore, using Young's inequality with the exponents $\frac{1}{\theta}$ and $\frac{1}{1-\theta}$, we obtain \begin{align*} \|u\|_{L^{r}(\mathbb{G})}\leq& \|u\|_{L^{p}(\mathbb{G})} + \|u\|_{L^{p_s^*}(\mathbb{G})}\\ \leq& \|u\|_{L^{p}(\mathbb{G})} + C\|u\|_{W^{s,p}(\mathbb{G})}. \end{align*} Thus, we get that the space $W^{s,p}(\mathbb{G})$ is continuously embedded in $L^{r}(\mathbb{G})$ for all $r\in[p,p_s^*]$. Let $\Omega$ be an open subset of $\mathbb{G}$. Then, for each $u\in X_0^{s,p}(\Omega)$, we have from \eqref{fullfractionalsobo}, as $u=0$ in $\mathbb{G}\setminus \Omega,$ that \begin{equation}\label{embed-cont} \|u\|_{L^{p_s^*}(\Omega)}\leq C \|u\|_{X_0^{s,p}(\Omega)}. \end{equation} Thus the space $X_0^{s,p}(\Omega)$ is continuously embedded in $L^{p_s^*}(\Omega)$. Proceeding as above we conclude that the embedding $X_0^{s,p}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous for all $r\in[p,p_s^*]$. That is, for all $u\in X_0^{s,p}(\Omega)$ there exists a $C=C(Q,p,s,\Omega)>0$ such that \begin{equation}\label{embed} \|u\|_{L^{r}(\Omega)}\leq C \|u\|_{X_0^{s,p}(\Omega)}~\text{for all}~p\leq r\leq p_s^*. \end{equation} In particular, if $\Omega$ is bounded, that is, $|\Omega|<\infty$, then applying the H\"{o}lder inequality to \eqref{embed} we get the continuous embedding for all $r\in[1,p_s^*]$. This concludes the proof of the first part of the theorem. Now, we choose $\eta \in C^\infty_c(\mathbb{G})$ such that $\text{supp}\, \eta \subset \overline{B}_1(0),$ $0\leq \eta \leq 1 $ and $\|\eta\|_{L^1(\mathbb{G})}=1.$ For each $\epsilon>0$ and $f \in L^1_{\text{loc}}(\mathbb{G})$, let us define $$\eta_\epsilon(x)=\frac{1}{\epsilon^Q} \eta(\epsilon^{-1}x)$$ and \begin{align} T_\epsilon f(x):= \int_{\mathbb{G}} \eta_\epsilon(z)\, f(z^{-1}x)\, dz. \end{align} Before proving the compactness of the embedding, we first prove the following lemma. \begin{lemma} \label{lemma3.2} Let $\Omega$ be an open bounded subset of $\mathbb{G}.$ Then, for $1\leq r<\infty,$ the set $\mathcal{F} \subset L^r(\Omega)$ is relatively compact in $L^r(\Omega)$ if and only if $\mathcal{F}$ is bounded and $\|T_\epsilon f-f\|_{L^r(\Omega)} \rightarrow 0$ uniformly in $f \in \mathcal{F}$ as $\epsilon \rightarrow 0.$ \end{lemma} \begin{proof} Suppose that $\mathcal{F}$ is relatively compact in $L^r(\Omega);$ then $\mathcal{F}$ is clearly bounded. We agree to extend any function in $L^r(\Omega)$ to $L^r(\mathbb{G})$ by setting it equal to zero outside $\Omega.$ Let $R>0$; by relative compactness there exist $f_1,f_2,\ldots,f_l \in \mathcal{F}$ such that $\mathcal{F} \subset \cup_{j=1}^l B_R(f_j) \subset L^r(\Omega).$ Then, for every $f\in\mathcal{F}$ and every $j\in\{1,\ldots,l\}$, we have \begin{align} \|f-T_\epsilon f\|_{L^r(\Omega)} \leq \|f- f_j\|_{L^r(\Omega)}+ \|f_j- T_\epsilon f_j\|_{L^r(\Omega)}+ \|T_\epsilon f_j-T_\epsilon f\|_{L^r(\Omega)}.
\end{align} Given $f \in \mathcal{F}$, we may choose $j$ such that $\|f-f_j\|_{L^r(\Omega)}<R$. Since $\|T_\epsilon f_j-T_\epsilon f\|_{L^r(\Omega)}\leq\|f_j-f\|_{L^r(\Omega)}$ and $T_\epsilon f_j \rightarrow f_j$ in $L^r(\Omega)$ as $\epsilon \rightarrow 0$ for each $j\in\{1,\ldots,l\}$, we obtain $\limsup\limits_{\epsilon\rightarrow0}\,\sup\limits_{f\in\mathcal{F}}\|T_\epsilon f-f\|_{L^r(\Omega)}\leq 2R.$ As $R>0$ was arbitrary, we conclude that $\|T_\epsilon f-f\|_{L^r(\Omega)} \rightarrow 0$ uniformly in $f \in \mathcal{F}$ as $\epsilon \rightarrow 0.$ Conversely, we assume that $\mathcal{F}$ is bounded and $\|T_\epsilon f-f\|_{L^r(\Omega)} \rightarrow 0$ uniformly in $f \in \mathcal{F}$ as $\epsilon \rightarrow 0.$ Let $(f_n)$ be a sequence in $\mathcal{F}$, which is bounded by assumption. Thanks to the Banach--Alaoglu theorem we can extract a subsequence (again denoted by $(f_n)$) such that $f_n \rightharpoonup f$ weakly in $L^r(\Omega).$ We now aim to prove strong convergence. For that we first observe that \begin{align} \label{eqq36d} \|f_n- f\|_{L^r(\Omega)} \leq \|f_n- T_\epsilon f_n\|_{L^r(\Omega)}+ \|T_\epsilon f_n- T_\epsilon f\|_{L^r(\Omega)}+ \|T_\epsilon f-f\|_{L^r(\Omega)}. \end{align} It follows from the weak convergence $f_n \rightharpoonup f$ that, for all $x \in \mathbb{G}$ and every fixed $\epsilon>0,$ we have $T_\epsilon (f_n-f)(x) \rightarrow 0$ as $n \rightarrow \infty.$ Again, by Young's inequality for convolutions we have \begin{align} \|T_\epsilon(f_n-f)\|^r_{L^r(\Omega)}\leq \|\eta_\epsilon\|_{L^1(\mathbb{G})}^r \|f_n-f\|_{L^r(\Omega)}^r <\infty, \end{align} and, since for fixed $\epsilon$ the functions $T_\epsilon(f_n-f)$ are uniformly bounded and supported in a fixed compact set, the Lebesgue dominated convergence theorem gives \begin{align} \int_{\mathbb{G}} |T_\epsilon (f_n-f)(x)|^r dx \rightarrow 0\quad \text{as}~n \rightarrow \infty. \end{align} Thus, choosing first $\epsilon$ small and then $n$ large, all three terms on the right-hand side of \eqref{eqq36d} can be made arbitrarily small, using the assumption that $\|T_\epsilon f-f\|_{L^r(\Omega)} \rightarrow 0$ uniformly in $f \in \mathcal{F}$ as $\epsilon \rightarrow 0.$ Thus, $f_n \rightarrow f$ strongly in $L^r(\Omega).$ Hence, $\mathcal{F}$ is relatively compact in $L^r(\Omega)$. \end{proof} Now, we continue the proof of Theorem \ref{l-3App}. We emphasise that by assigning $f=0$ in $\mathbb{G} \setminus \Omega$ we have $f \in W^{s, p}(\mathbb{G})$ for every $f \in X_0^{s,p}(\Omega).$ Now, with the help of Lemma \ref{lemma3.2}, we prove that a bounded set $\mathcal{F}\subset X^{s,p}_0(\Omega)$ is relatively compact in $L^r(\Omega).$ Recall that $|B_R(x)|=R^Q|B_1(0)|$ (see \cite[p. 140]{FR16}). Therefore, the boundedness of $\mathcal{F}$ in $L^r(\Omega)$ is immediate from the fractional Gagliardo-Nirenberg inequality \cite[Theorem 4.4.1]{RS19}, \begin{align} \label{gag} \|f\|_{L^r(\mathbb{G})} \leq C [f]_{s,p}^b \|f\|_{L^q(\mathbb{G})}^{1-b}, \end{align} where $p>1, q\geq 1, r>0, b \in (0,1]$ satisfy $\frac{1}{r}=b\left(\frac{1}{p}-\frac{s}{Q} \right)+\frac{1-b}{q}.$ Setting $$K_\epsilon:=T_\epsilon f-f~\text{for all}~f \in \mathcal{F},$$ we get from the fractional Gagliardo-Nirenberg inequality \eqref{gag}, as $f \in X_0^{s, p}(\Omega)$ and thus $K_\epsilon(x)=0$ for all $x \in \mathbb{G}\backslash \Omega$, that \begin{align} \|K_\epsilon\|_{L^r(\Omega)} \leq C [K_\epsilon]_{s,p}^b \|K_\epsilon\|_{L^q(\Omega)}^{1-b} , \end{align} where $\frac{1}{r}=b\left(\frac{1}{p}-\frac{s}{Q} \right)+\frac{1-b}{q}.$ Thus, it is sufficient to show that \begin{align} \label{eq310} [K_\epsilon]_{s,p} \leq \|T_\epsilon f-f\|_{X_0^{s,p}(\Omega)} \rightarrow 0. \end{align} This means that \begin{align} \label{eq311} \lim_{\epsilon \rightarrow 0} \int_{\mathbb{G}} \int_{\mathbb{G}} \frac{|(T_\epsilon f-f)(x)-(T_\epsilon f-f)(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy = 0.
\end{align} Using $\text{supp}(\eta_\epsilon) \subset B_\epsilon(0),$ the H\"older inequality, and Tonelli's and Fubini's theorems, we obtain \begin{align} \label{eq312} &\int_{\mathbb{G}} \int_{\mathbb{G}} \frac{|(T_\epsilon f-f)(x)-(T_\epsilon f-f)(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy \\&=\int_{\mathbb{G}} \int_{\mathbb{G}} \frac{1}{|y^{-1}x|^{Q+ps}} \Big| \int_{\mathbb{G}} \eta_\epsilon(z)\left(f(z^{-1}x)-f(z^{-1}y) \right) dz-f(x)+f(y) \Big|^p dx dy \nonumber \\&=\int_{\mathbb{G}} \int_{\mathbb{G}} \frac{1}{|y^{-1}x|^{Q+ps}} \Big| \epsilon^{-Q} \int_{B_\epsilon(0)} \eta(\epsilon^{-1} z)\left(f(z^{-1}x)-f(z^{-1}y) \right) dz-f(x)+f(y) \Big|^p dx dy \nonumber \\&=\int_{\mathbb{G}} \int_{\mathbb{G}} \frac{1}{|y^{-1}x|^{Q+ps}} \Big| \int_{B_1(0)} \eta( z')\left(f((\epsilon z')^{-1}x)-f((\epsilon z')^{-1}y) -f(x)+f(y)\right) dz' \Big|^p dx dy \nonumber \\&\leq |B_1(0)|^{p-1}\int_{\mathbb{G}} \int_{\mathbb{G}} \left(\int_{B_1(0)} \eta^p(z) \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y) -f(x)+f(y)|^p}{|y^{-1}x|^{Q+ps}} dz \right) dx dy \nonumber \\&= |B_1(0)|^{p-1}\int_{B_1(0)}\int_{\mathbb{G} \times \mathbb{G}} \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y) -f(x)+f(y)|^p}{|y^{-1}x|^{Q+ps}} \eta^p(z) dx\, dy\, dz.\nonumber \end{align} Now, considering the Lie group $\mathbb{G} \times \mathbb{G}$ with the Haar measure $dx\, dy$ and using the continuity of translations on $L^p(\mathbb{G} \times \mathbb{G})$ (see \cite[Theorem 20.15]{HR79}), we obtain, for $v \in L^p(\mathbb{G} \times \mathbb{G})$ and $(z, z) \in \mathbb{G} \times \mathbb{G},$ that \begin{align} \label{eq313} \lim_{\epsilon \rightarrow 0}\int_{\mathbb{G} \times \mathbb{G}} |v((\epsilon z, \epsilon z)^{-1}(x, y))-v(x,y)|^p dx dy =0. \end{align} Now, fix $z \in B_1(0)$ and set $$v(x, y):=\frac{f(x)-f(y)}{|y^{-1}x|^{\frac{Q+ps}{p}}}.$$ Observe that $v \in L^p(\mathbb{G} \times \mathbb{G})$ as $f \in X_0^{s,p}(\Omega).$ Therefore, the property \eqref{eq313} yields \begin{align} \lim_{\epsilon \rightarrow 0}\int_{\mathbb{G} \times \mathbb{G}} \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y)-f(x)+f(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy = 0. \end{align} Thus, \begin{align} \rho_\epsilon(z):=\eta^p(z)\int_{\mathbb{G} \times \mathbb{G}} \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y)-f(x)+f(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy \rightarrow 0 \end{align} as $\epsilon \rightarrow 0.$ Now for a.e. $z \in B_1(0),$ using the fact that $f \in X_0^{s,p}(\Omega)$ we have \begin{align} |\rho_\epsilon(z)|\leq 2^{p-1} \eta^p(z) &\Bigg( \int_{\mathbb{G} \times \mathbb{G}} \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y)|^p}{|y^{-1}x|^{Q+ps}} dx dy \nonumber\\&\quad+\int_{\mathbb{G} \times \mathbb{G}} \frac{|f(x)-f(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy\Bigg)\nonumber \\&= 2^p \eta^p(z) \int_{\mathbb{G} \times \mathbb{G}} \frac{|f(x)-f(y)|^p}{|y^{-1}x|^{Q+ps}} dx dy. \end{align} Observe that the last estimate provides a dominating function, $z\mapsto 2^p \eta^p(z)[f]_{s,p}^p \in L^1(B_1(0))$, which is independent of $\epsilon$. Therefore, by the Lebesgue dominated convergence theorem we conclude that \begin{align} \int_{B_1(0)}\int_{\mathbb{G} \times \mathbb{G}} & \frac{|f((\epsilon z)^{-1}x)-f((\epsilon z)^{-1}y)-f(x)+f(y)|^p}{|y^{-1}x|^{Q+ps}} \eta^p(z) dx\, dy\, dz \nonumber \\&=\int_{B_1(0)} \rho_\epsilon(z) dz \rightarrow 0 \end{align} as $\epsilon \rightarrow 0.$ This fact along with \eqref{eq312} gives \eqref{eq311} and so \eqref{eq310}. Finally, by Lemma \ref{lemma3.2} we conclude that $\mathcal{F}$ is relatively compact in $L^r(\Omega)$.
Thus we conclude that the space $X_0^{s,p}(\Omega)$ is compactly embedded in $L^r(\Omega)$ for all $r\in[1,p_s^*)$. \end{proof} \section{Appendix B} In this section we prove the following important lemma. \begin{lemma}\label{lemma ercole} Let $u_1, u_2 \in X_{0}^{s, p}(\Omega) \setminus\{0\}$. Then there exists a positive constant $C=C_p$, depending only on $p$, such that \begin{equation} \label{append} \langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u_1- \left(-\Delta_{p,{\mathbb{G}}}\right)^s u_2, u_1-u_2\rangle\geq C_p\begin{cases} [u_1-u_2]_{s,p}^p,&\text{if}~p\geq2\\ \frac{[u_1-u_2]_{s,p}^2}{\left([u_1]_{s,p}^p+[u_2]_{s,p}^p\right)^{\frac{2-p}{p}}},&\text{if}~1<p<2. \end{cases} \end{equation} \end{lemma} \begin{proof} Let us recall the well-known Simon inequality \begin{equation}\label{simmon} \left(|a|^{p-2}a-|b|^{p-2}b\right) \cdot(a-b) \geq C(p) \begin{cases}\frac{|a-b|^{2}}{(|a|+|b|)^{2-p}} & \text { if } \quad 1<p<2 \\ |a-b|^{p} & \text { if } \quad p \geq 2,\end{cases} \end{equation} where $a, b \in \mathbb{R}^{N} \setminus\{0\}$ and $C(p)$ is a positive constant depending only on $p$. For simplicity we denote \begin{equation*} w_i(x, y)=u_i(x)-u_i(y), \quad i=1,2. \end{equation*} Therefore, \begin{equation*} \langle\left(-\Delta_{p,{\mathbb{G}}}\right)^s u_1- \left(-\Delta_{p,{\mathbb{G}}}\right)^s u_2, u_1-u_2\rangle=\iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1|^{p-2}w_1-|w_2|^{p-2}w_2}{|y^{-1}x|^{Q+ps}}\left(w_1-w_2\right)dxdy. \end{equation*} Observe that for $p\geq 2$ the inequality \eqref{append} immediately follows from the inequality \eqref{simmon}. Thus we are left to establish the inequality \eqref{append} for the range $1<p<2$. From \eqref{simmon}, we have \begin{equation}\label{simmin est} \left\langle\left(-\Delta_{p, \mathbb{G}}\right)^{s} u_1-\left(-\Delta_{p, \mathbb{G}}\right)^{s} u_2, u_1-u_2\right\rangle \geq C(p) \iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1-w_2|^{2}}{\left(|w_1|+|w_2|\right)^{2-p}|y^{-1}x|^{Q+ps}}dxdy. \end{equation} Now, from H\"older's inequality, we get \begin{align} [u_1-u_2]_{s,p}^{p} &=\iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1-w_2|^{p}}{|y^{-1}x|^{Q+ps}}dxdy\nonumber\\ &=\iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1-w_2|^p}{\left(|w_1|+|w_2|\right)^{\frac{p(2-p)}{2}} |y^{-1}x|^{(Q+ps)\frac{p}{2}}} \frac{\left(|w_1|+|w_2|\right)^{\frac{p(2-p)}{2}}}{|y^{-1}x|^{(Q+ps)\frac{2-p}{2}}}dxdy\leq A^{\frac{p}{2}} B^{\frac{2-p}{2}}, \end{align} where \begin{equation*} A=\iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1-w_2|^{2}}{\left(|w_1|+|w_2|\right)^{2-p}|y^{-1}x|^{Q+ps}}dxdy \end{equation*} and \begin{equation*} B=\iint_{\mathbb{G}\times\mathbb{G}} \frac{\left(|w_1|+|w_2|\right)^p}{|y^{-1}x|^{Q+ps}}dxdy \leq 2^p\iint_{\mathbb{G}\times\mathbb{G}} \frac{|w_1|^p+|w_2|^p}{|y^{-1}x|^{Q+ps}}dxdy =2^p([u_1]_{s,p}^p +[u_2]_{s,p}^p). \end{equation*} From \eqref{simmin est}, we deduce \begin{align} \left\langle\left(-\Delta_{p, \mathbb{G}}\right)^{s} u_1-\left(-\Delta_{p, \mathbb{G}}\right)^{s} u_2, u_1-u_2\right\rangle &\geq C(p)A \nonumber \\ & \geq C(p)\left([u_1-u_2]_{s,p}^p B^{-\frac{2-p}{2}}\right)^{\frac{2}{p}} \nonumber\\ & \geq C(p)[u_1-u_2]_{s,p}^2\left(2^p\left([u_1]_{s,p}^p+[u_2]_{s,p}^p\right)\right)^{-\frac{2-p}{p}}\nonumber \\ &=2^{p-2} C(p) \frac{[u_1-u_2]_{s,p}^2}{\left([u_1]_{s,p}^p+[u_2]_{s,p}^p\right)^{\frac{2-p}{p}}}, \end{align} completing the proof. \end{proof} \section{Conflict of interest statement} On behalf of all authors, the corresponding author states that there is no conflict of interest.
\section{Data availability statement} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \section*{Acknowledgement} The authors are grateful to the reviewer for reading the manuscript carefully, providing several useful comments and suggesting relevant references. SG would like to thank the Ghent Analysis \& PDE centre, Ghent University, Belgium for the support during his research visit. VK and MR are supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations, the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021) and by FWO Senior Research Grant G011522N. MR is also supported by EPSRC grants EP/R003025/2 and EP/V005529/1. \begin{thebibliography}{10} \bibitem{AM18} A.~Adimurthi and A.~Mallick. \newblock A {H}ardy type inequality on fractional order {S}obolev spaces on the {H}eisenberg group. \newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 18(3):917--949, 2018. \bibitem{AWYY23} R. Alvarado, F. Wang, D. Yang and W. Yuan. \newblock Pointwise characterization of Besov and Triebel–Lizorkin spaces on spaces of homogeneous type. \newblock {\em Studia Math.} 268(2):121–166, 2023. \bibitem{AA87} J. G. Azorero and I. P. Alonso. Existence and nonuniqueness for the $p$-Laplacian: nonlinear eigenvalues. {\em Comm. Partial Differential Equations}, 12(12):1389--1430, 1987. \bibitem{ASS16} G. Akagi, G. Schimperna and A. Segatti. \newblock Fractional {C}ahn-{H}illiard, {A}llen-{C}ahn and porous medium equations. \newblock {\em Journal of Differential Equations}, 261(6):2935--2985, 2016. \bibitem{AYY22} R. Alvarado, D. Yang and W. Yuan. \newblock A measure characterization of embedding and extension domains for Sobolev, Triebel-Lizorkin, and Besov spaces on spaces of homogeneous type. {\em J. Funct. Anal.} 283(12):109687, 71 pp., 2022. \bibitem{ALT20} Y. C. An, H. Liu and L. Tian. \newblock The {D}irichlet problem for a sub-elliptic equation with singular nonlinearity on the {H}eisenberg group. \newblock {\em Journal of Mathematical Inequalities}, 14(1):67--82, 2020. \bibitem{AMRT10} F. Andreu, J. M. Mazon, J. D. Rossi and J. J. Toledo. \newblock Nonlocal {D}iffusion {P}roblems. \newblock {\em Mathematical Surveys and Monographs American Mathematical Society, Philadelphia}, Vol. 165, 2010. \bibitem{AF17} G.~Anello and F.~Faraci. \newblock Two solutions for an elliptic problem with two singular terms. \newblock {\em Calculus of Variations and Partial Differential Equations}, 56(4):1--31, 2017. \bibitem{BPV22} S. Biagi, A. Pinamonti and E. Vecchi. \newblock Sublinear Equations Driven by H\"{o}rmander Operators. \newblock {\em The Journal of Geometric Analysis}, 32(4):1--27, 2022. \bibitem{BR17} G. M. Bisci and D. Repovs. \newblock Yamabe type equations on Carnot groups. \newblock {\em Potential Analysis}, 46(2):369--383, 2017. \bibitem{BS15} G.~M. Bisci and R.~Servadei. \newblock A {B}rezis-{N}irenberg splitting approach for nonlocal fractional equations. \newblock {\em Nonlinear Analysis: Theory, Methods {\&} Applications}, 119:341--353, 2015. \bibitem{BK22} J. Bj\"orn and A. Ka\l{}amajska. \newblock Poincar\'e inequalities and compact embeddings from Sobolev type spaces into weighted $L^q$ spaces on metric spaces. \newblock{\em J. Funct. Anal.} 282(11):Paper No. 109421, 47 pp., 2022. \bibitem{BO09} L.~Boccardo and L.~Orsina. \newblock Semilinear elliptic equations with singular nonlinearities. 
\newblock {\em Calculus of Variations and Partial Differential Equations}, 37(3-4):363--380, 2009. \bibitem{BLU07} A.~Bonfiglioli, E.~Lanconelli and F.~Uguzzoni. \newblock {\em Stratified {L}ie {G}roups and {P}otential {T}heory for their {S}ub-{L}aplacians}. \newblock Springer Berlin Heidelberg, 2007. \bibitem{BFP20a} S. Bordoni, R. Filippucci and P. Pucci. Existence of solutions in problems on Heisenberg groups involving Hardy and critical terms. {\em J. Geom. Anal.}, 30(2):1887--1917, 2020. \bibitem{BF14} L.~Brasco and G. Franzina. \newblock Convexity properties of {D}irichlet integrals and {P}icone-type inequalities. \newblock {\em Kodai Math. J.}, 37(3):769--799, 2014. \bibitem{BP16} L.~Brasco and E.~Parini. \newblock The second eigenvalue of the fractional $p$-{L}aplacian. \newblock {\em Advances in Calculus of Variations}, 9(4):323--355, 2016. \bibitem{BS19} L. Brasco and A. Salort. \newblock A note on homogeneous Sobolev spaces of fractional order. \newblock {\em Annali di Matematica Pura ed Applicata (1923-)}, 198(4):1295--1330, 2019. \bibitem{B11} H. Brezis. \newblock Functional Analysis, Sobolev Spaces and Partial Differential Equations, 662 pages \newblock Springer, New York, NY, XIV, 600, 2011. \bibitem{BL83} H.~Brezis and E.~Lieb. \newblock A relation between pointwise convergence of functions and convergence of functionals. \newblock {\em Proceedings of the American Mathematical Society}, 88(3):486, 1983. \bibitem{BN83} H.~Brezis and L.~Nirenberg. \newblock Positive solutions of nonlinear elliptic equations involving critical {S}obolev exponents. \newblock {\em Communications on Pure and Applied Mathematics}, 36(4):437--477, 1983. \bibitem{CMSS17} A.~Canino, L.~Montoro, B.~Sciunzi and M.~Squassina. \newblock Nonlocal problems with singular nonlinearity. \newblock {\em Bulletin des Sciences Math{\'{e}}matiques}, 141(3):223--250, 2017. \bibitem{CDG93} L.~Capogna, D.~Danielli and N.~Garofalo. \newblock An embedding theorem and the {H}arnack inequality for nonlinear subelliptic equations. \newblock {\em Communications in Partial Differential Equations}, 18(9-10):1765--1794, 1993. \bibitem{CG98} L.~Capogna and N.~Garofalo. Boundary behavior of nonnegative solutions of subelliptic equations in NTA domains for Carnot-Carath\'eodory metrics. \newblock {\em J. Fourier Anal. Appl.}, 4(4–5):403–432, 1998. \bibitem{CZ18} L. Capogna and X. Zhou. \newblock Strong comparison principle for $p$-harmonic functions in Carnot-Carath\'eodory spaces. \newblock {\em Proc. Amer. Math. Soc.} 146(10):4265-4247, 2018. \bibitem{CKKSS22} L. Capogna, J. Kline, R. Korte, N. Shanmugalingam and M. Snipes. \newblock Neumann problems for $ p $-harmonic functions, and induced nonlocal operators in metric measure spaces. {\em arXiv preprint}, 2022. arXiv:2204.00571 \bibitem{CC21} H. Chen and H. G. Chen. \newblock Estimates of Dirichlet eigenvalues for a class of sub‐elliptic operators. \newblock {\em Proceedings of the London Mathematical Society}, 122(6):808-847, 2021. \bibitem{CW17} Y.-H. Chen and Y. Wang. \newblock Perturbation of the CR fractional Yamabe problem. {\em Math. Nachr.}, 290(4):534–545, 2017. \bibitem{CCHY18} J.-H. Cheng, H.-L. Chiu, J.-F. Hwang and P. Yang. \newblock Strong maximum principle for mean curvature operators on subRiemannian manifolds. \newblock {\em Math. Ann.} 372(3-4):1393–1435, 2018. \bibitem{CRT77} M.~G. Crandall, P.~H. Rabinowitz and L.~Tartar. \newblock On a {D}irichlet problem with a singular nonlinearity. 
\newblock {\em Communications in Partial Differential Equations}, 2(2):193--222, 1977. \bibitem{C98} D. Christodoulou. \newblock On the geometry and dynamics of crystalline continua. {\em Ann. Inst. H. Poincaré}. 69:335–358, 1998. \bibitem{CGS04} G. Citti, G. Manfredini and A. Sarti. \newblock Neuronal oscillations in the visual cortex: $\Gamma$- convergence to the Riemannian Mumford–Shah functional. {\em SIAM J. Math. Anal.} 35:1394-1419, 2004. \bibitem{DGN07} Danielli, N. Garofalo and D. Nhieu. \newblock Sub-Riemannian calculus on hypersurfaces in Carnot groups. {\em Adv. Math.} 215:292–378, 2007. \bibitem{DGV20} F. del Teso, D. G\'{o}mez-Castro and J. L. V\'{a}zquez. \newblock Estimates on translations and Taylor expansions in fractional Sobolev spaces. \newblock {\em Nonlinear Anal}, 200:1--12, 111995, 2020. \bibitem{DKP14} A. DiCastro, T. Kuusi and G. Palatucci. \newblock Nonlocal Harnack inequalities. {\em J. Funct. Anal.}, 267(6):1807–1836, (2014). \bibitem{DKP16} A. DiCastro, T. Kuusi and G. Palatucci. \newblock Local behavior of fractional $p$-minimizers. \newblock {\em Ann. Inst. H. Poincar\'{e} Anal. Non lin\'{e}aire}, 33(5):1279--1299, 2016. \bibitem{DP97} P. Dr\'{a}bek and S. I. Pohozaev. \newblock Positive solutions for the $p$-{L}aplacian: application of the fibering method. \newblock {\em Proc. Roy. Soc. Edinburgh Sect. A}, 127(4):703--726, 1997. \bibitem{EGKSS22} S. Eriksson-Bique, G. Giovannardi, R. Korte, N. Shanmugalingam and G. Speight. \newblock Regularity of solutions to the fractional Cheeger-Laplacian on domains in metric spaces of bounded geometry. {\em J. Differential Equations} 306:590–632, 2022. \bibitem{E10} L. C. Evans. \newblock Partial differential equations. \newblock Providence, R.I.: AMS, 662 pp., 2010. \bibitem{FMMT15} R. L. Frank, M. del Mar Gonz\`alez, D. D. Monticelli and J. Tan. \newblock An extension problem for the CR fractional Laplacian. {\em Adv. Math.} 270:97--137, 2015. \bibitem{FF15} F. Ferrari and B. Franchi. \newblock Harnack inequality for fractional sub-Laplacians in Carnot groups. \newblock {\em Mathematische Zeitschrift}, 279(1):435--458, 2015. \bibitem{FP81} C. Fefferman and D. H. Phong. \newblock Subelliptic eigenvalue problems. \newblock {\em In Conference on harmonic analysis in honor of Antoni Zygmund}, vol. 1, pp. 590--606. 1981. \bibitem{FSV15} A. Fiscella, R. Servadei and E. Valdinoci. \newblock Density properties for fractional Sobolev spaces. \newblock{\em Ann. Acad. Sci. Fenn. Math.} 40(1):235--253, 2015. \bibitem{FBR17} M. Ferrara, G. M. Bisci and D. Repov$\check{\text{s}}$. \newblock Nonlinear elliptic equations on Carnot groups. \newblock {\em RACSAM}, 111(3):707--718, 2017. \bibitem{FR16} V.~Fischer and M.~Ruzhansky. \newblock {\em Quantization on Nilpotent Lie Groups, Progress in Mathematics, vol. 314}. \newblock Springer International Publishing, Birkh\"{a}user, 2016. \bibitem{F75} G.~B. Folland. \newblock Subelliptic estimates and function spaces on nilpotent {L}ie groups. \newblock {\em Arkiv f\"{o}r Matematik}, 13(1-2):161--207, 1975. \bibitem{F77} G. B. Folland. \newblock On the Rothschild–Stein lifting theorem. {\em Comm. Partial Differential Equations} 2(2):165-191, 1977. \bibitem{FS82} G.~B. Folland and E.~M. Stein. \newblock {\em Hardy Spaces on Homogeneous Groups, Mathematical Notes}. \newblock vol. 28, Princeton University Press, Princeton, N.J., 1982. \bibitem{FL10} R. L. Frank and A. Laptev. \newblock Inequalities between Dirichlet and Neumann eigenvalues on the Heisenberg group. 
\newblock {\em International Mathematics Research Notices}, 2010(15):2889--2902, 2010. \bibitem{FP14} G. Franzina and G. Palatucci. \newblock Fractional $p$-eigenvalues. \newblock {\em Riv. Math. Univ. Parma (N.S.)}, 5(2):373--386, 2014. \bibitem{GU21} P. Garain and A. Ukhlov. \newblock Singular subelliptic equations and {S}obolev inequalities on {C}arnot groups. \newblock {\em Analysis and Mathematical Physics}, 12(2):1--18. \bibitem{GL92} N. Garofalo and E. Lanconelli. \newblock Existence and nonexistence results for semilinear equations on the Heisenberg group. \newblock {\em Indiana Univ. Math. J.}, 41(1):71--98, 1992. \bibitem{GV00} N.~Garofalo and D.~Vassilev. \newblock Regularity near the characteristic set in the non-linear {D}irichlet problem and conformal geometry of {S}ub-{L}aplacians on {C}arnot groups. \newblock {\em Mathematische Annalen}, 318(3):453--516, 2000. \bibitem{Gro} M. Gromov. \newblock Carnot–Carath\'eodory Spaces Seen from Within. Sub-Riemannian Geometry, Progr. Math., vol. 144, Birkh\"auser, Basel, pp. 79–323, 1996. \bibitem{GQ13} M. d. M. Gonz\'alez and J. Qing. \newblock Fractional conformal Laplacians and fractional Yamabe problems. \newblock {\em Anal. PDE} 6(7): 1535–1576, 2013. \bibitem{GS22} P. G\'orka and A. S\l{}abuszewski. \newblock Embeddings of the fractional Sobolev spaces on metric-measure spaces. {\em Nonlinear Anal.} 221:112867 23 pp., 2022. \bibitem{Pio00} P. Haj\l{}asz and P. Koskela. \newblock Sobolev met Poincar\'e, \newblock {\em Mem. Amer. Math. Soc.} 145(688):x+101 pp, 2000. \bibitem{H20} Y. Han. \newblock An integral type Brezis-Nirenberg problem on the Heisenberg group. \newblock {\em Journal of Differential Equations}, 269(5):4544--4565, 2020. \bibitem{HL08} A. M. Hansson and A. Laptev. \newblock Sharp spectral inequalities for the Heisenberg Laplacian. \newblock{\em Groups and analysis}, 354(1):100--115, 2008. \bibitem{HR79} E. Hewitt and K. A. Ross. Abstract harmonic analysis. Vol. I. Structure of topological groups, integration theory, group representations. Second edition. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 115. Springer-Verlag, Berlin-New York, 1979. ix+519 pp. \bibitem{HSS04} N. Hirano, C. Saccon and N. Shioji. \newblock Existence of multiple positive solutions for singular elliptic problems with concave and convex nonlinearities. \newblock {\em Adv. Differential Equations}, 9(1-2):197--220, 2004. \bibitem{HSS08} N.~Hirano, C.~Saccon and N.~Shioji. \newblock Brezis-{N}irenberg type theorems and multiplicity of positive solutions for a singular elliptic problem. \newblock {\em J. Differential Equations}, 245(8):1997--2037, 2008. \bibitem{JL} D. S. Jerison and J. M. Lee. The Yamabe problem on CR manifolds. {\em J. Differential Geom.}, 25 :167-197, 1987. \bibitem{KRS20} A. Kassymov, M. Ruzhansky and D. Suragan. \newblock Fractional logarithmic inequalities and blow-up results with logarithmic nonlinearity on homogeneous groups. \newblock {\em NoDEA Nonlinear Differential Equations Appl.}, 27(1), Paper no. 7, 2020. \bibitem{KD20} A.~Kassymov and D.~Suragan. \newblock Lyapunov-type inequalities for the fractional $p$-sub-{L}aplacian. \newblock {\em Advances in Operator Theory}, 5(2):435--452, 2020. \bibitem{KMW17} S. Kim, M. Musso and J. Wei. \newblock Existence theorems of the fractional Yamabe problem. {\em Analysis \& PDE}, 11(1):75-113, 2017. \bibitem{K20} A. Krist\'aly. \newblock Nodal solutions for the fractional Yamabe problem on Heisenberg groups. 
{\em Proceedings of the Royal Society of Edinburgh Section A: Mathematics}, 150(2):771-788, 2020. \bibitem{KYZ11} P. Koskela, D. Yang, and Y. Zhou. \newblock Pointwise characterizations of Besov and Triebel-Lizorkin spaces and quasiconformal mappings. {\em Adv. Math.} 226(4):3579–3621, 2011. \bibitem{LM91} A.~C. Lazer and P.~J. McKenna. \newblock On a singular nonlinear elliptic boundary value problem. \newblock {\em Proceedings of the American Mathematical Society}, 111(3):721--721, 1991. \bibitem{L05} A. L$\hat{\text{e}}$. \newblock Eigenvalue problems for the p-Laplacian. \newblock {\em Nonlinear Analysis: Theory, Methods \& Applications} 64, no. 5 (2006): 1057--1099. \bibitem{LL14} E. Lindgren and P. Lindqvist. \newblock Fractional eigenvalues. \newblock {\em Cal. Var. Partial Differential Equations}, 49(1-2):795--826, 2014. \bibitem{L90} P. Lindqvist. \newblock On the equation $div\left(|\nabla u|^{p-2}\nabla u\right)+\lambda|u|^{p-2}u=0$. \newblock {\em Proceedings of the American Mathematical Society}, 109(1):157--164, 1990. \bibitem{LPGSGZMCMAK20} A. Lischke, G. Pang, M. Gulian, F. Song, C. Glusa, X. Zheng, Z. Mao, W. Cai, M. M. Meerschaert, M. Ainsworth and G. E. Karniadakis. \newblock What is the fractional {L}aplacian? {A} comparative review with new results. \newblock {\em Journal of Computational Physics}, 404, 109009, 2020. \bibitem{L07} A. Loiudice. \newblock Semilinear subelliptic problems with critical growth on Carnot groups. \newblock {\em Manuscripta Mathematica}, 124(2):247--259, 2007. \bibitem{L15} A. Loiudice. \newblock Critical growth problems with singular nonlinearities on Carnot groups. \newblock {\em Nonlinear Analysis}, 126:415--436, 2015. \bibitem{L19} A. Loiudice. \newblock Optimal decay of $p$-Sobolev extremals on Carnot groups. \newblock {\em Journal of Mathematical Analysis and Applications}, 470(1):619--631, 2019. \bibitem{MS78} A. Menikoff and J. Sj\"ostrand. \newblock On the eigenvalues of a class of hypoelliptic operators. \newblock {\em Math. Ann.}, 235:55--85, 1978. \bibitem{MS16} T.~Mukherjee and K.~Sreenadh. \newblock On {D}irichlet problem for fractional $p$-{L}aplacian with singular non-linearity. \newblock {\em Advances in Nonlinear Analysis}, 8(1):52--72, 2016. \bibitem{NPV12} E.~D. Nezza, G.~Palatucci and E.~Valdinoci. \newblock Hitchhiker's guide to the fractional {S}obolev spaces. \newblock {\em Bulletin des Sciences Math{\'{e}}matiques}, 136(5):521--573, 2012. \bibitem{PP22} G. Palatucci and M. Piccinini. \newblock Nonlocal Harnack inequalities in the Heisenberg group. {\em Calc. Var. Partial Differential Equations}, 61(5):185, 30 pp, 2022. \bibitem{PT22} P. Pucci and L. Temperini. Entire solutions for some critical equations in the Heisenberg group. {\em Opuscula Math.}, 42(2):279--303, 2022. \bibitem{PT21} P. Pucci and L. Temperini, Existence for singular critical exponential ($p,Q$) equations in the Heisenberg group. {\em Adv. Calc. Var.}, 20 pp., 2021. DOI: 10.1515/acv-2020-0028. \bibitem{P19} P. Pucci. Existence and multiplicity results for quasilinear elliptic equations in the Heisenberg group. {\em Opuscula Math.}, 39(2):247--257, 2019. \bibitem{RS76} L. P. Rothschild and E. M. Stein, Hypoelliptic differential operators and nilpotent groups. {\em Acta Math.} 137(3-4):247–320, 1976. \bibitem{Roth83} L. P. Rothschild. \newblock A remark on hypoellipticity of homogeneous invariant differential operators on nilpotent Lie groups. {\em Comm. Partial Differential Equations} 8(15):1679–1682, 1983. \bibitem{RST22} M. Ruzhansky, B. 
Sabitbek and B. Torebek. \newblock Global existence and blow-up of solutions to porous medium equation and pseudo-parabolic equation, I. Stratified groups. \newblock {\em Manuscripta Mathematica}, 19 pp., 2022. DOI: 10.1007/s00229-022-01390-2 \bibitem{RS18} M. Ruzhansky and D. Suragan. \newblock A comparison principle for nonlinear heat Rockland operators on graded groups. \newblock {\em Bull. Lond. Math. Soc.}, 50:753--758, 2018. \bibitem{RS19} M. Ruzhansky and D. Suragan. \newblock Hardy inequalities on homogeneous groups: 100 years of Hardy inequalities, \newblock Progress in Mathematics, Vol. 327, Birkha\"{u}ser, 2019. xvi+588pp. \bibitem{RS20} M. Ruzhansky and D. Suragan. \newblock Green's identities, comparison principle and uniqueness of positive solutions for nonlinear $p$-sub-Laplacian equations on stratified Lie groups. \newblock {\em Potential Analysis}, 53:645--658, 2020. \bibitem{CHY21} H. Chtioui, H. Hajaiej and R. Yacoub. \newblock Concentration phenomena for fractional CR Yamabe type flows. {\em Differential Geom. Appl.} 79:101803, 27 pp., 2021. \bibitem{RTY20} M. Ruzhansky, N. Tokmagambetov and N. Yessirkegenov. \newblock Best constants in Sobolev and Gagliardo-Nirenberg inequalities on graded groups and ground states for higher order nonlinear subelliptic equations. \newblock {\em Calc. Var. Partial Differential Equations}, 59(5):1--23, paper no. 175, 2020. \bibitem{RY22} M. Ruzhansky and N. Yessirkegenov. \newblock A comparison principle for higher order nonlinear hypoelliptic heat operators on graded Lie groups. \newblock {\em Nonlinear Analysis}, 215, Paper No. 112621, 2022. \bibitem{RT16} L. Roncal and S. Thangavelu. \newblock Hardy’s inequality for fractional powers of the sub-{L}aplacian on the {H}eisenberg group. \newblock {\em Adv. Math.} 302:106--158, 2016. \bibitem{ST10} A. Szulkin and T. Weth. \newblock The method of Nehari manifold, \newblock {\em Handbook of nonconvex analysis and applications}, pp. 597--632, Int. Press, Somerville, MA, 2010. \bibitem{TC03} P. Tankov and R. Cont. \newblock Financial Modelling with {J}ump Processes \newblock {\em Chapman and Hall, CRC Financial Mathematics Series}, 2003. \bibitem{WW18} X.~Wang and Y.~Wang. \newblock Subelliptic equations with singular nonlinearities on the {H}eisenberg group. \newblock {\em Boundary Value Problems}, 2018(1), 2018. \bibitem{WD20} X.~ Wang and G. Du. \newblock Properties of solutions to fractional $p$-sub-{L}aplace equations on the {H}eisenberg group. \newblock {\em Bound. Value Probl.}, Paper No. 128, 15 pp, 2020. \bibitem{WN19} X. Wang and P. Niu. \newblock Properties for nonlinear fractional sub-{L}aplace equations on the {H}eisenberg group. {\em J. Partial Differ. Equ.} 32(1):66–76, 2019. \bibitem{WPH09} N. Wei and P. Niu and H. Liu. \newblock Dirichlet Eigenvalue Ratios for the $p$-sub-Laplacian in the Carnot Group. \newblock {\em J. Partial Differ. Equ.}, 22(1):1--10, 2009. \bibitem{YD13} S.~Yijing and Z.~Duanzhi. \newblock The role of the power $3$ for elliptic equations with negative exponents. \newblock {\em Calculus of Variations and Partial Differential Equations}, 49(3-4):909--922, 2013. \bibitem{YY12} Z. Yuan and G. Yuan. \newblock Dirichlet problems for linear and semilinear sub-Laplace equations on Carnot groups. \newblock {\em Journal of Inequalities and Applications}, 2012(1):1-12, 2012. \bibitem{Z15} Y. Zhou. \newblock Fractional Sobolev extension and Imbedding. \newblock {\em Transactions of AMS}, 367(2):959--979, 2015. \end{thebibliography} \end{document}
2205.05894v2
http://arxiv.org/abs/2205.05894v2
Robustness of Stochastic Optimal Control to Approximate Diffusion Models under Several Cost Evaluation Criteria
\documentclass[notitlepage,11pt,reqno]{amsart} \usepackage[foot]{amsaddr} \usepackage{amssymb,nicefrac,bm,upgreek,mathtools,verbatim} \usepackage[final]{hyperref} \usepackage[mathscr]{eucal} \usepackage{dsfont} \usepackage[normalem]{ulem} \usepackage{amsopn} \usepackage[margin=1in]{geometry} \allowdisplaybreaks \raggedbottom } \newtheorem{lemma}{Lemma}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{fact}{Fact}[section] \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{notation}{Notation}[section] \newtheorem{hypothesis}{Hypothesis}[section] \newtheorem{example}{Example}[section] \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \numberwithin{theorem}{section} \numberwithin{equation}{section} \hypersetup{ colorlinks=true, citecolor=mblue, linkcolor=mblue, urlcolor = blue, anchorcolor = blue, frenchlinks=false, pdfborder={0 0 0}, naturalnames=false, hypertexnames=false, breaklinks} \usepackage[capitalize,nameinlink]{cleveref} \usepackage[abbrev,msc-links,nobysame,citation-order]{amsrefs} \crefname{section}{Section}{Sections} \crefname{subsection}{Section}{Sections} \crefname{condition}{Condition}{Conditions} \crefname{hypothesis}{Hypothesis}{Conditions} \crefname{assumption}{Assumption}{Assumptions} \crefname{lemma}{Lemma}{Lemmas} \crefname{fact}{Fact}{Facts} \Crefname{figure}{Figure}{Figures} \crefformat{equation}{\textup{#2(#1)#3}} \crefrangeformat{equation}{\textup{#3(#1)#4--#5(#2)#6}} \crefmultiformat{equation}{\textup{#2(#1)#3}}{ and \textup{#2(#1)#3}} {, \textup{#2(#1)#3}}{, and \textup{#2(#1)#3}} \crefrangemultiformat{equation}{\textup{#3(#1)#4--#5(#2)#6}}{ and \textup{#3(#1)#4--#5(#2)#6}}{, \textup{#3(#1)#4--#5(#2)#6}}{, and \textup{#3(#1)#4--#5(#2)#6}} \Crefformat{equation}{#2Equation~\textup{(#1)}#3} \Crefrangeformat{equation}{Equations~\textup{#3(#1)#4--#5(#2)#6}} \Crefmultiformat{equation}{Equations~\textup{#2(#1)#3}}{ and \textup{#2(#1)#3}} {, \textup{#2(#1)#3}}{, and \textup{#2(#1)#3}} \Crefrangemultiformat{equation}{Equations~\textup{#3(#1)#4--#5(#2)#6}}{ and \textup{#3(#1)#4--#5(#2)#6}}{, \textup{#3(#1)#4--#5(#2)#6}}{, and \textup{#3(#1)#4--#5(#2)#6}} \crefdefaultlabelformat{#2\textup{#1}#3} \newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\lamstr}{\lambda^{\!*}}\newcommand{\rc}{{\mathscr{R}}} \newcommand{\cIn}{\bm1_{\{\abs{z}\le 1\}}} \newcommand{\dd}{\mathfrak{d}} \newcommand{\Uadm}{\mathfrak U} \newcommand{\Act}{\mathbb{U}} \newcommand{\Usm}{\mathfrak U_{\mathsf{sm}}} \newcommand{\bUsm}{\overline{\mathfrak U}_{\mathsf{sm}}} \newcommand{\Usms}{\mathfrak U^*_{\mathsf{sm}}} \newcommand{\Um}{\mathfrak U_{\mathsf{m}}} \newcommand{\sV}{\mathscr{V}} \newcommand{\Inv}{\mathcal{M}} \newcommand{\pV}{\mathrm{V}} \newcommand{\pv}{\mathrm{v}} \newcommand{\cA}{{\mathcal{A}}} \newcommand{\sA}{{\mathscr{A}}} \newcommand{\fB}{{\mathfrak{B}}} \newcommand{\cB}{{\mathcal{B}}} \newcommand{\sB}{{\mathscr{B}}} \newcommand{\cC}{{\mathcal{C}}} \newcommand{\sC}{{\mathscr{C}}} \newcommand{\sE}{{\mathscr{E}}} \newcommand{\sF}{{\mathfrak{F}}} \newcommand{\eom}{{\mathscr{M}}} \newcommand{\cG}{{\mathcal{G}}} \newcommand{\sH}{{\mathscr{H}}} \newcommand{\Hg}{{\mathcal{H}}} \newcommand{\cI}{{\mathcal{I}}} \newcommand{\fI}{{\mathfrak{I}}} \newcommand{\cJ}{{\mathcal{J}}} \newcommand{\sJ}{{\mathscr{J}}} 
\newcommand{\cK}{{\mathcal{K}}} \newcommand{\sK}{{\mathscr{K}}} \newcommand{\Lg}{{\mathcal{L}}} \newcommand{\sL}{{\mathscr{L}}} \newcommand{\Lp}{{L}} \newcommand{\Lpl}{L_{\text{loc}}} \newcommand{\cT}{{\mathcal{T}}} \newcommand{\cN}{{\mathcal{N}}} \newcommand{\cP}{{\mathcal{P}}} \newcommand{\sP}{{\mathscr{P}}} \newcommand{\Lyap}{{\mathcal{V}}} \newcommand{\cX}{{\mathcal{X}}} \newcommand{\RR}{\mathds{R}} \newcommand{\NN}{\mathds{N}} \newcommand{\ZZ}{\mathds{Z}} \newcommand{\RI}{\mathds{R}^{I}} \newcommand{\Rd}{{\mathds{R}^{d}}} \DeclareMathOperator{\Exp}{\mathbb{E}} \DeclareMathOperator{\Prob}{\mathbb{P}} \newcommand{\D}{\mathrm{d}} \newcommand{\E}{\mathrm{e}} \newcommand{\Ind}{\mathds{1}} \newcommand{\cD}{\mathcal{D}} \newcommand{\Sob}{{\mathscr W}} \newcommand{\Sobl}{{\mathscr W}_{\text{loc}}} \newcommand{\df}{:=} \newcommand{\transp}{^{\mathsf{T}}} \DeclareMathOperator*{\osc}{osc} \DeclareMathOperator*{\diag}{diag} \DeclareMathOperator*{\trace}{Tr} \DeclareMathOperator*{\dist}{dist} \DeclareMathOperator*{\diam}{diam} \DeclareMathOperator*{\Argmin}{Arg\,min} \DeclareMathOperator*{\Argmax}{Arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\esssup}{ess\,sup} \DeclareMathOperator*{\essinf}{ess\,inf} \DeclareMathOperator*{\supp}{support} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\sign}{sign} \newcommand{\order}{{\mathscr{O}}} \newcommand{\sorder}{{\mathfrak{o}}} \newcommand{\grad}{\nabla} \newcommand{\uuptau}{{\Breve\uptau}} \newcommand{\cIm}{{\widehat{\mathcal{I}}}} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\babs}[1]{\bigl\lvert#1\bigr\rvert} \newcommand{\Babs}[1]{\Bigl\lvert#1\Bigr\rvert} \newcommand{\babss}[1]{\biggl\lvert#1\biggr\rvert} \newcommand{\bnorm}[1]{\bigl\lVert#1\bigr\rVert} \newcommand{\Bnorm}[1]{\Bigl\lVert#1\Bigr\rVert} \newcommand{\bnormm}[1]{\biggl\lVert#1\biggr\rVert} \newcommand{\compcent}[1]{\vcenter{\hbox{$#1\circ$}}} \newcommand{\comp}{\mathbin{\mathchoice {\compcent\scriptstyle}{\compcent\scriptstyle} {\compcent\scriptscriptstyle}{\compcent\scriptscriptstyle}}} \usepackage{color} \definecolor{dmagenta}{rgb}{.4,.1,.5} \definecolor{dblue}{rgb}{.0,.0,.5} \definecolor{mblue}{rgb}{.0,.0,.7} \definecolor{ddblue}{rgb}{.0,.0,.4} \definecolor{dred}{rgb}{.7,.0,.0} \definecolor{dgreen}{rgb}{.0,.5,.0} \definecolor{Eeom}{rgb}{.0,.0,.5} \newcommand{\lline}{{\noindent\makebox[\linewidth]{\rule{\textwidth}{0.8pt}}}} \newcommand{\ftn}[1]{\footnote{\bfseries\color{dred}#1}} \newcommand{\sy}[1]{{\color{magenta} #1}} \newcommand{\bl}[1]{{\color{dblue}#1}} \newcommand{\rd}[1]{{\color{dred}#1}} \newcommand{\mg}[1]{{\color{dmagenta}#1}} \newcommand{\dg}[1]{{\color{dgreen}#1}} \begin{document} \title[Robustness of Stochastic Optimal Control for Controlled Diffusion Processes] {Robustness of Stochastic Optimal Control to Approximate Diffusion Models under Several Cost Evaluation Criteria} \author[Somnath Pradhan]{Somnath Pradhan$^\dag$} \address{$^\dag$Department of Mathematics and Statistics, Queen's University, Kingston, ON, Canada} \email{[email protected]} \author[Serdar Y\"{u}ksel]{Serdar Y\"{u}ksel$^{\ddag}$} \address{$^\ddag$Department of Mathematics and Statistics, Queen's University, Kingston, ON, Canada} \email{[email protected]} \begin{abstract} In control theory, typically a nominal model is assumed based on which an optimal control is designed and then applied to an actual (true) system. 
This gives rise to the problem of performance loss due to the mismatch between the true model and the assumed model. A robustness problem in this context is to show that the error due to the mismatch between a true model and an assumed model decreases to zero as the assumed model approaches the true model. We study this problem when the state dynamics of the system are governed by controlled diffusion processes. In particular, we will discuss continuity and robustness properties of finite horizon and infinite-horizon $\alpha$-discounted/ergodic optimal control problems for a general class of non-degenerate controlled diffusion processes, as well as for optimal control up to an exit time. Under a general set of assumptions and a convergence criterion on the models, we first establish that the optimal value of the approximate model converges to the optimal value of the true model. We then establish that the error due to mismatch that occurs by application of a control policy, designed for an incorrectly estimated model, to a true model decreases to zero as the incorrect model approaches the true model. We will see that, compared to related results in the discrete-time setup, the continuous-time theory will let us utilize the strong regularity properties of solutions to optimality (HJB) equations, via the theory of uniformly elliptic PDEs, to arrive at strong continuity and robustness properties. \end{abstract} \keywords{Robust control, Controlled diffusions, Hamilton-Jacobi-Bellman equation, Stationary control} \subjclass[2000]{Primary: 93E20, 60J60; secondary: 49J55} \maketitle \section{Introduction} In stochastic control applications, typically only an ideal model is assumed, or learned from available incomplete data, based on which an optimal control is designed and then applied to the actual system. This gives rise to the problem of performance loss due to the mismatch between the actual system and the assumed system. A robustness problem in this context is to show that the error due to mismatch decreases to zero as the assumed system approaches the actual system. With this motivation, in this article, our goal is to study the continuity and robustness properties of finite horizon and infinite horizon discounted/ergodic cost problems for a large class of multidimensional controlled diffusions. We note that the problems of existence, uniqueness and verification of optimality of stationary Markov policies have been studied extensively in literature see e.g., \cite{Bor-book}, \cite{HP09-book} (finite horizon) \cite{BS86}, \cite{BB96} (discounted cost) \cite{AA12}, \cite{AA13}, \cite{BG88I}, \cite{BG90b} (ergodic cost) and references therein. For a book-length exposition of this topic see e.g., \cite{ABG-book}. In more explicit terms, here is the problem that we will study. For a precise statement please see Section \ref{contRobSec}. Suppose that our true model is represented as $(X, c)$ (see, e.g., \cref{E1.1}), where $X$ is the true system model (representing the system evolution model via the drift and diffusion terms) and $c$ is the associated running cost function, and let $(X_n, c_n)$ (see, e.g., \cref{ASE1.1}) be a sequence of approximating models $X_n$ with associated running cost functions $c_n$, such that as $n\to \infty$ approximating models $X_n$ converge to the true model $X$ in some sense to be precisely stated. Suppose that for each choice of control policy $U$ the associated total cost in true/approximating models are $\cJ(c,U),$ $\cJ_n(c_n,U)$ respectively. 
The objective of the controller is to minimize the total cost over all possible admissible policies. If the optimal control policies of the true/approximating models are $v^*$, $v^{*n}$ respectively, the performance loss due to mismatch is given by $|\cJ(c,v^*) - \cJ(c,v^{*n})|$. Thus the robustness problem in this context is to show that $|\cJ(c,v^*) - \cJ(c,v^{*n})| \to 0$ as $n\to \infty$\,. See Section \ref{contRobSec}. In this sense, our paper can be viewed as a continuous-time counterpart of the setting studied in \cite{KY-20}, \cite{KRY-20}. This problem is of major practical importance and, accordingly, there have been many studies. Most of the existing works in this direction are concerned with the discrete time Markov decision process, see for instance \cite{KY-20}, \cite{KRY-20}, \cite{BJP02}, \cite{KV16}, \cite{NG05} \cite{SX15}, and references therein. We should note that the term {\it robustness} has various interpretations, contexts and solution methods. A common approach to robustness in the literature has been to design controllers that work sufficiently well for all possible uncertain systems under some structured constraints, such as $H_\infty$ norm bounded perturbations (see \cite{basbern}, \cite{zhou1996robust}). For such problems, the design for robust controllers has often been developed through a game theoretic formulation where the minimizer is the controller and the maximizer is the uncertainty. In \cite{DJP00}, \cite{jacobson1973optimal} the authors established the connections of this formulation to risk sensitive control. Using Legendre-type transforms, relative entropy constraints came in to the literature to probabilistically model the uncertainties, see e.g. \cite[Eqn. (4)]{dai1996connections} or \cite[Eqns. (2)-(3)]{DJP00}. Here, one selects a nominal system which satisfies a relative entropy bound between the actual measure and the nominal measure, solves a risk sensitive optimal control problem, and this solution value provides an upper bound for the original system performance. Therefore, a common approach in robust stochastic control has been to consider all models which satisfy certain bounds in terms of relative entropy pseudo-distance (or Kullback-Leibler divergence); see e.g. \cite{DJP00,dai1996connections,dupuis2000kernel,boel2002robustness} among others. In order to quantify the uncertainty in the system models, other than the relative entropy pseudo-distance, various other metrics/criterion have also been used in the literature. In \cite{tzortzis2015dynamic}, for discrete time controlled models, the authors have studied a min-max formulation for robust control where the one-stage transition kernel belongs to a ball under the total variation metric for each state action pair. For distributionally robust stochastic optimization problems, it is assumed that the underlying probability measure of the system lies within an ambiguity set and a worst case single-stage optimization is made considering the probability measures in the ambiguity set. To construct ambiguity sets, \cite{blanchet2016}, \cite{esfahani2015} use the Wasserstein metric, \cite{erdogan2005} uses the Prokhorov metric which metrizes the weak topology, \cite{sun2015} uses the total variation distance, and \cite{lam2016} works with relative entropy. For fully observed finite state-action space models with uncertain transition probabilities, the authors in \cite{iyengar2005robust}, \cite{nilim2005robust} have studied robust dynamic programming approaches through a min-max formulation. 
Similar work with model uncertainty includes \cite{oksendal2014forward}, \cite{benavoli2011robust}, \cite{xu_mannor}. In the economics literature, related work has been done in \cite{hansen2001robust}, \cite{gossner2008entropy}. The robustness formulation we study has been considered in \cite{KY-20}, \cite{KRY-20} for discrete-time models, where the authors studied both continuity of value functions as transition kernel models converge, as well as the robustness problem where an optimal control designed for an incorrect approximate model is applied to a true model and the mismatch term is studied. The solution approach is fundamentally different in the continuous-time analysis we present in this paper. In a related study \cite{Dean18}, the author studied the optimal control of systems with unknown dynamics for a linear quadratic regulator setup and proposed an algorithm to learn the system from observed data with quantitative convergence bounds. The author in \cite[Theorem 5.1]{Lan81} considered fully observed discrete-time controlled models, established continuity results for approximate models, and gave a set convergence result for sets of optimal control actions; this set convergence result is inconclusive for robustness without further assumptions on the true system model (for more details see \cite{KY-20}). For fully observed MDPs, \cite{muller1997does} studied continuity of the value function under a general metric defined as the integral probability metric, which captures both the total variation metric and the Kantorovich metric under different setups (and which is not weaker than the metrics leading to weak convergence). A recent study on game problems along a similar theme is presented in \cite{subramanian2021robustness}. For control problems of MDPs with standard Borel spaces, the approximation methods through quantization, which lead to finite models, can be viewed as approximations of transition kernels, but this interpretation requires caution: indeed, \cite{SaYuLi17, arruda2012, arruda2013}, among many others, study approximation methods for MDPs where the convergence of approximate models is satisfied in a particularly constructed fashion. Reference \cite{SaYuLi17} presents a construction for the approximate models through quantizing the actual model with continuous spaces (leading to a finite space model), which allows for continuity and robustness results with only a weak continuity assumption on the true transition kernel which, in turn, leads to the weak convergence of the approximate models. For both fully observed and partially observed models, a detailed analysis of approximation methods for continuous state and action spaces can be found in \cite{SaLiYuSpringer}. The literature on robustness of stochastic optimal control for continuous-time systems seems to be rather limited; see e.g., \cite{GL99}, \cite{LJE15}, \cite{hansen2001robust}\,. In \cite{GL99} the authors have considered the problem of controlling a system whose dynamics are given by a stochastic differential equation (SDE) whose coefficients are known only up to a certain degree of accuracy. For the finite horizon reward maximization problem, using the technique of contractive operators, \cite{GL99} has obtained upper bounds on the performance loss due to mismatch (or, ``robustness index'') and has shown by an example that the robustness index may behave abnormally even if we have the convergence of the value functions.
The associated discounted payoff maximization problem has been studied in \cite{LJE15}, where, using a Lyapunov type stability assumption, the authors have studied the robustness problem via a game theoretic formulation. For controlled diffusion models, the authors in \cite{hansen2001robust} described the links between the max-min expected utility theory and the applications of robust-control theory, in analogy with some of the discrete-time papers noted above that adopt a min-max formulation. Along a further direction, for controlled diffusions, via the Maximum Principle technique, \cite{PDPB02a}, \cite{PDPB02b}, \cite{PDPB02c} have established the robustness of optimal controls for the finite horizon payoff criterion. In a recent comprehensive work \cite{RZ21}, the authors have studied the robustness of feedback relaxed controls for a continuous time stochastic exit time problem. Under sufficient smoothness assumptions on the coefficients (i.e., uniform Lipschitz continuity of the diffusion coefficients and uniform H\"older continuity of the discount factor and payoff function on a fixed bounded domain), they have established that a regularized control problem admits a H\"older continuous optimal feedback control, and they have also shown that both the value function and the feedback control of the regularized control problem are Lipschitz stable with respect to parameter perturbations when the action space is finite. It is known that the optimal control obtained from the HJB equation (i.e., the argmin function) is in general unstable with respect to perturbations of coefficients; in practice, this would result in numerical instability of learning algorithms (as noted in \cite{RZ21}). Stability/continuity of solutions of PDEs with respect to coefficient perturbations is a significant mathematical and practical question in PDE theory (see e.g. \cite{WLS01}, \cite{SI72}). The continuity results established in this paper (see Theorems~\ref{TC1.3}, \ref{ErgoContnuity}, \ref{TErgoOptCont}) will provide sufficient conditions which ensure stability of solutions of semilinear elliptic PDEs (HJB equations) in the whole space $\Rd$. Our robustness results will also be useful for the study of robust optimal investment problems for local volatility models, e.g. as given in \cite[Remark~2.1]{AS08} (also, see \cite{KT12}, \cite{BDD20})\,. When the system noise is not given by a Wiener process, but instead by a general wide bandwidth noise (or, more generally, by discontinuous martingales \cite{LRT00}), the controlled process becomes a non-Markovian process even under stationary Markov policies. The general method of studying optimal control problems for such systems is to find suitable Markovian processes which approximate the non-Markovian process (see \cite{K90}, \cite{KR87}, \cite{KR87a}, \cite{KR88}). For wide bandwidth noise driven controlled systems \cite{K90}, \cite{KR87}, \cite{KR87a}, \cite{KR88}, diffusion approximation techniques were used to study stochastic optimal control problems. The results described in this paper are complementary to the above mentioned works on the diffusion approximation of wide bandwidth noise driven systems. {\bf Contributions and main results.} In the present paper, our aim is to study the continuity and robustness properties for a general class of controlled diffusion processes in $\Rd$ for both infinite horizon discounted/ergodic costs, where the action space is a (general) compact metric space.
As in \cite{KY-20}, \cite{KRY-20}, in order to establish our desired robustness results we will use the continuity result as an intermediate step. For the discounted cost case, we will establish our results following a direct approach (under a relatively weaker set of assumptions on the diffusion coefficients, i.e., locally Lipschitz continuous coefficients). Using the results on existence and uniqueness of solutions of the associated discounted Hamilton Jacobi Bellman (HJB) equation and the complete characterization of (discounted) optimal policies in the space of stationary Markov policies (see \cite[Theorem~3.5.6]{ABG-book}), we first establish the continuity of value functions. Then utilizing this continuity of value functions, we derive a robustness result. The analysis of ergodic cost (or long-run expected average cost) is somewhat more involved. To the best of our knowledge there is no work on continuity and robustness properties of optimal controls for the ergodic cost criterion in the existing literature (for the discrete-time setup, see \cite{KRY-20}). We have studied these ergodic cost problems under two sets of assumptions: In the first case, we assume that our running cost function satisfies a near-monotone type structural assumption (see, eq. \cref{ENearmonot}, Assumption~(A6)), and in the second case we assume Lyapunov type stability assumptions on the dynamics of the system (see Assumption~(A7))\,. One of the major issues in analyzing the robustness of ergodic optimal controls under the near-monotone hypothesis is the non-uniqueness/restricted uniqueness of solutions of the associated HJB equation (see, \cite[Example~3.8.3]{ABG-book}, \cite{AA13}). It is shown in \cite[Example~3.8.3]{ABG-book} that the ergodic HJB equation may admit uncountably many solutions. Considering this, in \cite[Theorem~1.1]{AA13} the author has established the uniqueness of compatible solution pairs (see \cite[Definition~1.1]{AA13}). Exploiting this uniqueness result, under a suitable tightness assumption (on a certain set of invariant measures) we will establish the desired robustness result. Under the Lyapunov type stability assumption it is known that the ergodic HJB equation admits a unique solution in a certain class of functions, also the complete characterization of ergodic optimal control is known (see \cite[Theorem~3.7.11]{ABG-book} and \cite[Theorem~3.7.12]{ABG-book})\,. Utilizing this characterization of optimal controls, we derive the robustness properties of ergodic optimal controls under a Lyapunov stability assumption. We also emphasize the duality between the PDE approach vs. a probabilistic flow approach to study robustness. The PDE approach presents a very general and conclusive, yet concise and unified, approach for several cost criteria (notably, a probabilistic approach via Dynkin's lemma would require separate arguments for discounted infinite-horizon and average cost infinite-horizon criteria) and such a unified approach had not been considered earlier, to our knowledge. Thus, the main results of this article can be roughly described as follows. \begin{itemize} \item[•] {\it For discounted cost criterion:} We establish continuity of value functions and provide sufficient conditions which ensure robustness/stability of optimal controls designed under model uncertainties. 
\item[•] {\it For ergodic cost criterion:} Under two different sets of assumptions ((i) where the running cost is near-monotone or (ii) where a Lyapunov stability condition holds) we establish the continuity of value functions and, exploiting the continuity results, we derive the robustness/stability of ergodic optimal controls designed for approximate models applied to actual systems. \item[•] {\it For finite horizon cost criterion:} Under uniform boundedness assumptions on the drift term and diffusion matrices (of the true and approximating models), we establish continuity of value functions. Then, exploiting the continuity result, we prove the robustness/stability of optimal controls designed under model uncertainties. \item[•] {\it For cost up to an exit time:} Similar to the above criteria, under a mild set of assumptions we first establish the continuity of value functions and then, using the continuity results, we establish the robustness/stability of optimal controls designed under model uncertainties. \end{itemize} We will see that, compared with the discrete-time counterpart of this problem studied in \cite{KY-20} (discounted cost) and \cite{KRY-20} (average cost), where value iteration methods were crucially used, in our analysis here we will develop rather direct arguments, with strong implications, utilizing regularity properties of value functions: in the discrete-time setup, these properties need to be established via tedious arguments, whereas the continuous-time theory allows for the use of regularity properties of solutions to PDEs. Nonetheless, we will see that {\it continuous convergence in control actions} of models and cost functions is a unifying condition for continuity and robustness properties in both the discrete-time setup studied in \cite{KY-20} (discounted cost) and \cite{KRY-20} (average cost) and our current paper. Compared to \cite{RZ21}, in addition to the infinite horizon criteria we study, we note that the perturbations we consider do not involve only coefficient/parameter variations, i.e., we consider functional perturbations, and the action space we consider is uncountable, though we do not establish the Lipschitz property of control policies, unlike \cite{RZ21}. The rest of the paper is organized as follows. Section~\ref{PD} introduces the problem setup and summarizes the notation. Section~\ref{discCostSec} is devoted to the analysis of robustness of optimal controls for the discounted cost criterion. In Section~\ref{secErgodicCost} we provide the analysis of robustness of ergodic optimal controls under two different sets of hypotheses: (i) near-monotonicity and (ii) Lyapunov stability. For the finite horizon cost criterion the robustness problem is analyzed in Section~\ref{Finitecost}. The robustness problem for optimal controls up to an exit time is considered in Section~\ref{exitTimeSection}. \section{Description of the problem}\label{PD} Let $\Act$ be a compact metric space and $\pV=\mathscr{P}(\Act)$ be the space of probability measures on $\Act$ with the topology of weak convergence. Let $$b : \Rd \times \Act \to \Rd, $$ $$ \upsigma : \Rd \to \RR^{d \times d},\, \upsigma = [\upsigma^{ij}(\cdot)]_{1\leq i,j\leq d},$$ be given functions. We consider a stochastic optimal control problem whose state evolves according to a controlled diffusion process given by the solution of the following stochastic differential equation (SDE) \begin{equation}\label{E1.1} \D X_t \,=\, b(X_t,U_t) \D t + \upsigma(X_t) \D W_t\,, \quad X_0=x\in\Rd\,,
\end{equation} where \begin{itemize} \item $W$ is a $d$-dimensional standard Wiener process, defined on a complete probability space $(\Omega, \sF, \mathbb{P})$. \item We extend the drift term $b : \Rd \times \pV \to \Rd$ as follows: \begin{equation*} b (x,\mathrm{v}) = \int_{\Act} b(x,\zeta)\mathrm{v}(\D \zeta), \end{equation*} for $\mathrm{v}\in\pV$. \item $U$ is a $\pV$-valued process satisfying the following non-anticipativity condition: for $s<t\,,$ $W_t - W_s$ is independent of $$\sF_s := \,\,\mbox{the completion of}\,\,\, \sigma(X_0, U_r, W_r : r\leq s)\,\,\,\mbox{relative to} \,\, (\sF, \mathbb{P})\,.$$ \end{itemize} The process $U$ is called an \emph{admissible} control, and the set of all admissible controls is denoted by $\Uadm$ (see \cite{BG90}). To ensure existence and uniqueness of strong solutions of \cref{E1.1}, we impose the following assumptions on the drift $b$ and the diffusion matrix $\upsigma$\,. \begin{itemize} \item[\hypertarget{A1}{{(A1)}}] \emph{Local Lipschitz continuity:\/} The functions $\upsigma\,=\,\bigl[\upsigma^{ij}\bigr]\colon\RR^{d}\to\RR^{d\times d}$ and $b\colon\Rd\times\Act\to\Rd$ are locally Lipschitz continuous in $x$ (uniformly with respect to the control action for $b$). In particular, for some constant $C_{R}>0$ depending on $R>0$, we have \begin{equation*} \abs{b(x,\zeta) - b(y, \zeta)}^2 + \norm{\upsigma(x) - \upsigma(y)}^2 \,\le\, C_{R}\,\abs{x-y}^2 \end{equation*} for all $x,y\in \sB_R$ and $\zeta\in\Act$, where $\norm{\upsigma}\df\sqrt{\trace(\upsigma\upsigma\transp)}$\,. We also assume that $b$ is jointly continuous in $(x,\zeta)$. \medskip \item[\hypertarget{A2}{{(A2)}}] \emph{Affine growth condition:\/} $b$ and $\upsigma$ satisfy a global growth condition of the form \begin{equation*} \sup_{\zeta\in\Act}\, \langle b(x, \zeta),x\rangle^{+} + \norm{\upsigma(x)}^{2} \,\le\,C_0 \bigl(1 + \abs{x}^{2}\bigr) \qquad \forall\, x\in\RR^{d}, \end{equation*} for some constant $C_0>0$. \medskip \item[\hypertarget{A3}{{(A3)}}] \emph{Nondegeneracy:\/} For each $R>0$, it holds that \begin{equation*} \sum_{i,j=1}^{d} a^{ij}(x)z_{i}z_{j} \,\ge\,C^{-1}_{R} \abs{z}^{2} \qquad\forall\, x\in \sB_{R}\,, \end{equation*} and for all $z=(z_{1},\dotsc,z_{d})\transp\in\RR^{d}$, where $a\df \frac{1}{2}\upsigma \upsigma\transp$. \end{itemize} By a Markov control we mean an admissible control of the form $U_t = v(t,X_t)$ for some Borel measurable function $v:\RR_+\times\Rd\to\pV$. The space of all Markov controls is denoted by $\Um$\,. If the function $v$ is independent of $t$, then $U$, or by an abuse of notation $v$ itself, is called a stationary Markov control. The set of all stationary Markov controls is denoted by $\Usm$. From \cite[Section~2.4]{ABG-book}, we have that the set $\Usm$ is metrizable with a compact metric under the following topology: a sequence $v_n\to v$ in $\Usm$ if and only if \begin{equation*} \lim_{n\to\infty}\int_{\Rd}f(x)\int_{\Act}g(x,u)v_{n}(x)(\D u)\D x = \int_{\Rd}f(x)\int_{\Act}g(x,u)v(x)(\D u)\D x \end{equation*} for all $f\in L^1(\Rd)\cap L^2(\Rd)$ and $g\in \cC_b(\Rd\times \Act)$ (for more details, see \cite[Lemma~2.4.1]{ABG-book})\,. It is well known that under the hypotheses \hyperlink{A1}{{(A1)}}--\hyperlink{A3}{{(A3)}}, for any admissible control \cref{E1.1} has a unique strong solution \cite[Theorem~2.2.4]{ABG-book}, and under any stationary Markov strategy \cref{E1.1} has a unique strong solution which is a strong Feller (therefore strong Markov) process \cite[Theorem~2.2.12]{ABG-book}.
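For illustration only, and not used in the analysis that follows, the sketch below (in Python, with all model ingredients being placeholder choices assumed to satisfy \hyperlink{A1}{{(A1)}}--\hyperlink{A3}{{(A3)}}) indicates how a trajectory of \cref{E1.1} under a stationary Markov policy may be generated numerically via an Euler--Maruyama discretization; for simplicity the policy is taken to be precise (non-relaxed).
\begin{verbatim}
# Illustrative Euler-Maruyama simulation of dX_t = b(X_t,U_t) dt + sigma(X_t) dW_t
# under a stationary Markov policy U_t = v(X_t).  b, sigma and v below are
# placeholder examples, not the models studied in the paper.
import numpy as np

def simulate(b, sigma, v, x0, T=1.0, dt=1e-3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = len(x0)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        dw = rng.normal(scale=np.sqrt(dt), size=d)   # Wiener increment over [t, t+dt]
        x = x + b(x, v(x)) * dt + sigma(x) @ dw      # Euler-Maruyama step
        path.append(x.copy())
    return np.array(path)

# Placeholder model: drift pulled toward the chosen action, identity diffusion.
b = lambda x, u: -x + u
sigma = lambda x: np.eye(len(x))
v = lambda x: np.clip(-0.5 * x, -1.0, 1.0)           # a stationary Markov policy
path = simulate(b, sigma, v, x0=[1.0, -1.0])
\end{verbatim}
Any consistent discretization would serve the same illustrative purpose; the sketch is only meant to make the roles of the drift, the diffusion matrix and the policy in \cref{E1.1} concrete.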
\subsection{Cost Criteria} Let $c\colon\Rd\times\Act \to \RR_+$ be the \emph{running cost} function. We assume that \begin{itemize} \item[\hypertarget{A4}{{(A4)}}] The \emph{running cost} $c$ is bounded (i.e., $\|c\|_{\infty} \leq M$ for some positive constant $M$), jointly continuous in $(x, \zeta)$ and locally Lipschitz continuous in its first argument uniformly with respect to $\zeta\in\Act$. \end{itemize} This condition (A4) can also be relaxed to (A4\'), to be presented further below, where the local Lipschitz property is eliminated. We extend $c\colon\Rd\times\pV \to\RR_+$ as follows: for $\pv \in \pV$ \begin{equation*} c(x,\pv) := \int_{\Act}c(x,\zeta)\pv(\D\zeta)\,. \end{equation*} In this article, we consider the problem of minimizing finite horizon, discounted, ergodic and control up to an exit time cost criteria:\\ {\bf Discounted cost criterion.} For $U \in\Uadm$, the associated \emph{$\alpha$-discounted cost} is given by \begin{equation}\label{EDiscost} \cJ_{\alpha}^{U}(x, c) \,\df\, \Exp_x^{U} \left[\int_0^{\infty} e^{-\alpha s} c(X_s, U_s) \D s\right],\quad x\in\Rd\,, \end{equation} where $\alpha > 0$ is the discount factor and $X(\cdot)$ is the solution of \cref{E1.1} corresponding to $U\in\Uadm$ and $\Exp_x^{U}$ is the expectation with respect to the law of the process $X(\cdot)$ with initial condition $x$. The controller tries to minimize \cref{EDiscost} over his/her admissible policies $\Uadm$\,. Thus, a policy $U^{*}\in \Uadm$ is said to be optimal if for all $x\in \Rd$ \begin{equation}\label{OPDcost} \cJ_{\alpha}^{U^*}(x, c) = \inf_{U\in \Uadm}\cJ_{\alpha}^{U}(x, c) \,\,\, (\,=:\, \,\, V_{\alpha}(x))\,, \end{equation} where $V_{\alpha}(x)$ is called the optimal value.\\ {\bf Ergodic cost criterion.} For each $U\in\Uadm$ the associated ergodic cost is defined as \begin{equation}\label{ECost1} \sE_{x}(c, U) = \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{U}\left[\int_0^{T} c(X_s, U_s) \D{s}\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{ECost1Opt} \sE^*(c) \,\df\, \inf_{x\in\Rd}\inf_{U\in \Uadm}\sE_{x}(c, U)\,. \end{equation} Then a control $U^*\in \Uadm$ is said to be optimal if we have \begin{equation}\label{ECost1Opt1} \sE_{x}(c, U^*) = \sE^*(c)\quad \text{for all}\,\,\, x\in \Rd\,. \end{equation}\\ {\bf Finite horizon cost.} For $U\in \Uadm$, the associated \emph{finite horizon cost} is given by \begin{equation}\label{FiniteCost1} \cJ_{T}^U(x, c) = \Exp_x^{U}\left[\int_0^{T} c(X_s, U_s) \D{s} + H(X_T)\right]\,, \end{equation} where $H(\cdot)$ is the terminal cost. The optimal value is defined as \begin{equation}\label{FiniteCost1Opt} \cJ_{T}^*(x, c) \,\df\, \inf_{U\in \Uadm}\cJ_{T}^U(x, c)\,. \end{equation} Thus, a policy $U^*\in \Uadm$ is said to be (finite horizon) optimal if we have \begin{equation}\label{FiniteCost1Opt1} \cJ_{T}^{U^*}(x, c) = \cJ_{T}^*(x, c)\quad \text{for all}\,\,\, x\in \Rd\,. \end{equation}\\ {\bf Control up to an exit time.} This criterion will be presented in Section \ref{exitTimeSection}. Our analysis for this criterion will be immediate given the study involving the above criteria. \\ We define a family of operators $\sL_{\zeta}$ mapping $\cC^2(\Rd)$ to $\cC(\Rd)$ by \begin{equation}\label{E-cI} \sL_{\zeta} f(x) \,\df\, \trace\bigl(a(x)\grad^2 f(x)\bigr) + \,b(x,\zeta)\cdot \grad f(x)\,, \end{equation} for $\zeta\in\Act$, \,\, $f\in \cC^2(\Rd)$\,. For $\pv \in\pV$ we extend $\sL_{\zeta}$ as follows: \begin{equation}\label{EExI} \sL_\pv f(x) \,\df\, \int_{\Act} \sL_{\zeta} f(x)\pv(\D \zeta)\,. 
\end{equation} For $v \in\Usm$, we define \begin{equation}\label{Efixstra} \sL_{v} f(x) \,\df\, \trace(a(x)\grad^2 f(x)) + b(x,v(x))\cdot\grad f(x)\,. \end{equation} We are interested in the robustness of optimal controls under these criteria. To this end, we now introduce our approximating models. \subsection{Approximating Controlled Diffusion Processes} Let $\upsigma_n\,=\,\bigl[\upsigma_n^{ij}\bigr]\colon\RR^{d}\to\RR^{d\times d}$, $b_n\colon\Rd\times\Act\to\Rd$, $c_n\colon\Rd\times\Act\to\RR_+$ be sequences of functions satisfying the following assumptions: \begin{itemize} \item[\hypertarget{A5}{{(A5)}}] \begin{itemize} \item[(i)] as $n\to\infty$ \begin{equation}\label{AproxiE1} \upsigma_n(x)\to \upsigma(x)\quad \text{a.e.}\,\, x\in\Rd\,, \end{equation} \item[(ii)]\emph{Continuous convergence in controls}: for any sequence $\zeta_n\to \zeta$ \begin{equation}\label{AproxiE2} c_n(x,\zeta_n)\to c(x,\zeta)\quad\text{and}\quad b_n(x,\zeta_n)\to b(x,\zeta)\quad \text{a.e.}\,\, x\in\Rd\,. \end{equation} \item[(iii)] for each $n\in\NN$,\, $b_n$ and $\upsigma_n$ satisfy Assumptions (A1)--(A3) and $c_n$ is uniformly bounded (in particular, $\norm{c_n}_{\infty} \leq M$, where $M$ is a positive constant as in (A4)), jointly continuous in $(x, \zeta)$ and locally Lipschitz continuous in its first argument uniformly with respect to $\zeta\in\Act$\,. \end{itemize} \end{itemize} For each $n\in\NN$, let $X_t^n$ be the solution of the following SDE \begin{equation}\label{ASE1.1} \D X_t^n \,=\, b_n(X_t^n,U_t) \D t + \upsigma_n(X_t^n) \D W_t\,, \quad X_0^n=x\in\Rd. \end{equation} Define a family of operators $\sL_{\zeta}^n$ mapping $\cC^2(\Rd)$ to $\cC(\Rd)$ by \begin{equation}\label{E-cIn} \sL_{\zeta}^n f(x) \,\df\, \trace\bigl(a_n(x)\grad^2 f(x)\bigr) + \,b_n(x,\zeta)\cdot \grad f(x)\,, \end{equation} for $\zeta\in\Act$, \,\, $f\in \cC^2(\Rd)$\,. For the approximated model, for each $n\in\NN$ and $U\in\Uadm$ the associated discounted cost is defined as \begin{equation}\label{EApproDiscost} \cJ_{\alpha, n}^{U}(x, c_n) \,\df\, \Exp_x^{U} \left[\int_0^{\infty} e^{-\alpha s} c_n(X_s^n, U_s) \D s\right],\quad x\in\Rd\,, \end{equation} and the optimal value is defined as \begin{equation}\label{EApproOptDisc} V_{\alpha}^n(x) \,\df\, \inf_{U\in\Uadm}\cJ_{\alpha, n}^{U}(x, c_n)\,. \end{equation} For each $n\in\NN$ and $U\in\Uadm$ the associated ergodic cost is defined as \begin{equation}\label{ECostAprox1} \sE_{x}^n(c_n, U) = \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{U}\left[\int_0^{T} c_n(X_s^n, U_s) \D{s}\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{ECost1OptAprox} \sE^{n*}(c_n) \,\df\, \inf_{x\in\Rd}\inf_{U\in \Uadm}\sE_{x}^n(c_n, U)\,. \end{equation} Similarly, for each $n\in\NN$ and $U\in\Uadm$ the associated finite horizon cost is given by \begin{equation}\label{FiniteCost1OptAprox1} \cJ_{T,n}^U(x,c_n) \,\df\, \Exp_x^{U}\left[\int_0^{T} c_n(X_s^n, U_s) \D{s} + H(X_T^n)\right]\,. \end{equation} The optimal value is given by \begin{equation}\label{FiniteCost1Opt1Aprox2} \cJ_{T,n}^*(x, c_n) = \inf_{U\in \Uadm}\cJ_{T,n}^U (x, c_n)\quad \text{for all}\,\,\, x\in \Rd\,, \end{equation} where the state process $X_t^n$ is given by the solution of the SDE \cref{ASE1.1}\,.
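As an informal illustration of the quantities compared in the continuity and robustness questions formulated next (and again not part of the analysis), the following Python sketch estimates, by Monte Carlo with a truncated horizon, the discounted costs \cref{EDiscost} and \cref{EApproDiscost} of one fixed Markov policy under a placeholder true model $(b,\upsigma,c)$ and a placeholder approximating model $(b_n,\upsigma_n,c_n)$; all numerical ingredients are hypothetical and are only meant to mimic the convergence required in \hyperlink{A5}{{(A5)}}.
\begin{verbatim}
# Illustrative Monte Carlo comparison of J_alpha^U(x,c) (true model) and
# J_{alpha,n}^U(x,c_n) (approximate model) under one fixed Markov policy v.
# The horizon is truncated at T; all models and costs are placeholder examples.
import numpy as np

def discounted_cost(b, sigma, c, v, x0, alpha=1.0, T=10.0, dt=1e-2,
                    n_paths=200, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x0)
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        J = 0.0
        for k in range(int(T / dt)):
            u = v(x)
            J += np.exp(-alpha * k * dt) * c(x, u) * dt    # discounted running cost
            dw = rng.normal(scale=np.sqrt(dt), size=d)
            x = x + b(x, u) * dt + sigma(x) @ dw           # Euler-Maruyama step
        total += J
    return total / n_paths

# Placeholder true model (b, sigma, c) and its n-th approximation (b_n, sigma_n, c_n).
n = 10
b = lambda x, u: -x + u
sigma = lambda x: np.eye(len(x))
c = lambda x, u: min(np.dot(x, x) + np.dot(u, u), 10.0)    # bounded running cost
b_n = lambda x, u: -(1.0 + 1.0 / n) * x + u
sigma_n = lambda x: (1.0 + 1.0 / n) * np.eye(len(x))
c_n = lambda x, u: min(np.dot(x, x) + (1.0 + 1.0 / n) * np.dot(u, u), 10.0)

v = lambda x: np.clip(-0.5 * x, -1.0, 1.0)                 # one fixed Markov policy
J_true = discounted_cost(b, sigma, c, v, x0=[1.0, -1.0])
J_approx = discounted_cost(b_n, sigma_n, c_n, v, x0=[1.0, -1.0])
print(abs(J_true - J_approx))   # expected to shrink as n grows
\end{verbatim}
For large $n$ the two estimates are expected to be close; making this intuition precise, for the optimal (rather than a fixed) policy and without horizon truncation, is exactly the content of the continuity and robustness questions below.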
\subsection{Continuity and Robustness Problems}\label{contRobSec} The primary objective of this article will be to address the following problems: \begin{itemize} \item \textbf{Continuity:} If the approximate model \cref{ASE1.1} approaches the true model \cref{E1.1}, does this imply \begin{itemize} \item[•] for discounted cost: $V_{\alpha}^n \to V_{\alpha} ?$ \item[•] for ergodic cost: $\sE^{n*}(c_n) \to \sE^{*}(c) ?$ \item[•] for finite horizon cost: $\cJ_{T,n}^*(x, c_n) \to \cJ_{T}^*(x, c) ?$ \item[•] for cost up to an exit time: $\hat{\cJ}_{e,n}^* \to \hat{\cJ}_{e}^*?$ \,\,\, (for details, see Section~\ref{exitTimeSection}) \end{itemize} \item \textbf{Robustness:} Suppose $v_{n}^*$ is an optimal policy designed for the incorrect model \cref{ASE1.1} for the finite horizon/discounted/ergodic/up-to-an-exit-time cost problem; does this imply \begin{itemize} \item[•] for discounted cost: $\cJ_{\alpha}^{v_n^*}(x, c) \to V_{\alpha} ?$ \item[•] for ergodic cost: $\sE_x(c, v_n^*) \to \sE^*(c) ? $ \item[•] for finite horizon cost: $\cJ_{T}^{v_n^*}(x, c) \to \cJ_{T}^*(x, c) ?$ \item[•] for cost up to an exit time: $\hat{\cJ}_{e}^{v_n^*} \to \hat{\cJ}_{e}^*?$ \,\,\, (for details, see Section~\ref{exitTimeSection}) \end{itemize} as $n\to \infty$\,. \end{itemize} In this article, under a mild set of assumptions, we show that the answers to the above-mentioned questions are affirmative. \begin{example}\label{ERS11Example} \begin{itemize} \item[(i)] Suppose that the noise term is not the (ideal) Brownian motion and, instead of \cref{E1.1}, the state dynamics of the system are governed by the following SDE \begin{equation}\label{ERS1.1} \begin{cases} \D \hat{X}_t^n \,=\, b(\hat{X}_t^n,U_t) \D t + \upsigma(\hat{X}_t^n) \D S_t^{n}\\ \D S_t^{n} \,=\, \hat{b}_n(\hat{X}_t^n) \D t + \hat{\upsigma}_n(\hat{X}_t^n) \D \hat{W}_t\,. \end{cases} \end{equation} Here we are approximating the noise term by an It$\hat{\rm o}$ process $\{S_t^{n}\}$, given by \begin{equation}\label{ERS1.1A} \D S_t^{n} \,=\, \hat{b}_n(\hat{X}_t^n) \D t + \hat{\upsigma}_n(\hat{X}_t^n) \D \hat{W}_t\,, \end{equation} where $\hat{b}_n(\cdot)\to 0$ and $\hat{\upsigma}_n(\cdot)\to I$ as $n\to\infty$. \item[(ii)] Suppose that \cref{E1.1} is approximated by \cref{ASE1.1} where $b_n$ and $\upsigma_n$ consist of polynomials of appropriate dimensions which converge pointwise to $b$ and $\upsigma$ (which are already assumed to be continuous), and where we also have continuous convergence in the control variable $\zeta$. \item[(iii)] Consider a Vasicek interest rate model, given by \begin{equation*} \D r_t \,=\, \theta(\mu - r_t)\D t + \sigma\D W_t\,. \end{equation*} This is a mean-reverting process, where $\theta$ is the rate of reversion, $\mu$ is the long-term mean and $\sigma$ is the volatility. The wealth process corresponding to this interest rate model can be described by \cref{E1.1} (see \cite[Remark~2.1]{AS08}, \cite{KT12}, \cite{DJ07}). Since market models are typically incomplete, the model parameters ($\theta, \mu, \sigma$) are usually learned from market data. This gives rise to the problem of robustness of optimal investment. The same applies to several other interest rate/pricing models \cite{merton1998applications}. \item[(iv)] In the above examples, $c_n$ can be a regularized version of $c$, e.g. obtained by adding, for $\epsilon_n > 0$, the term $\epsilon_n \zeta^T \zeta$ where $\mathbb{U} \subset \mathbb{R}^m$, which would then converge continuously (in the control variable) to $c$ as $\epsilon_n \to 0$.
\end{itemize} In the cases above, the approximating model conditions in (A5) would apply. \end{example} \begin{remark} If we replace $\upsigma(x)$ by $\upsigma(x,\zeta)$, then, in the relaxed control framework, \cref{E1.1} admits a unique strong solution provided $\upsigma(\cdot, v(\cdot))$ is Lipschitz continuous for $v\in \Usm$. However, in general, stationary policies $v\in \Usm$ are just measurable functions. Existence of suitable strong solutions in our setting is not known (see \cite[Remarks~2.3.2]{ABG-book}, \cite{B05Survey})\,. However, under stationary Markov policies one can prove the existence of weak solutions which may not be unique \cite{stroock1997multidimensional}, \cite[Remarks~2.3.2]{ABG-book} (note though that uniqueness is established for $d=1,2$ in \cite[p. 192-194]{stroock1997multidimensional} under some conditions). The existence of a suitable strong solution (which is also a strong Markov process) under stationary Markov policies is essential to obtain stochastic representations of solutions of HJB equations (by applying the It$\hat{\rm o}$-Krylov formula). \end{remark} \subsection*{Notation:} \begin{itemize} \item For any set $A\subset\RR^{d}$, by $\uptau(A)$ we denote the \emph{first exit time} of the process $\{X_{t}\}$ from the set $A\subset\RR^{d}$, defined by \begin{equation*} \uptau(A) \,\df\, \inf\,\{t>0\,\colon X_{t}\not\in A\}\,. \end{equation*} \item $\sB_{r}$ denotes the open ball of radius $r$ in $\RR^{d}$, centered at the origin, and $\sB_{r}^c$ denotes the complement of $\sB_{r}$ in $\Rd$\,. \item $\uptau_{r}$, $\uuptau_{r}$ denote the first exit times from $\sB_{r}$, $\sB_{r}^c$ respectively, i.e., $\uptau_{r}\df \uptau(\sB_{r})$, and $\uuptau_{r}\df \uptau(\sB^{c}_{r})$. \item By $\trace S$ we denote the trace of a square matrix $S$. \item For any domain $\cD\subset\RR^{d}$, the space $\cC^{k}(\cD)$ ($\cC^{\infty}(\cD)$), $k\ge 0$, denotes the class of all real-valued functions on $\cD$ whose partial derivatives up to and including order $k$ (of any order) exist and are continuous. \item $\cC_{\mathrm{c}}^k(\cD)$ denotes the subset of $\cC^{k}(\cD)$, $0\le k\le \infty$, consisting of functions that have compact support. This denotes the space of test functions. \item $\cC_{b}(\Rd)$ denotes the class of bounded continuous functions on $\Rd$\,. \item $\cC^{k}_{0}(\cD)$ denotes the subspace of $\cC^{k}(\cD)$, $0\le k < \infty$, consisting of functions that vanish in $\cD^c$. \item $\cC^{k,r}(\cD)$ denotes the class of functions whose partial derivatives up to order $k$ are H\"older continuous of order $r$. \item $\Lp^{p}(\cD)$, $p\in[1,\infty)$, denotes the Banach space of (equivalence classes of) measurable functions $f$ satisfying $\int_{\cD} \abs{f(x)}^{p}\,\D{x}<\infty$. \item $\Sob^{k,p}(\cD)$, $k\ge0$, $p\ge1$, denotes the standard Sobolev space of functions on $\cD$ whose weak derivatives up to order $k$ are in $\Lp^{p}(\cD)$, equipped with its natural norm (see \cite{Adams})\,. \item If $\mathcal{X}(Q)$ is a space of real-valued functions on $Q$, $\mathcal{X}_{\mathrm{loc}}(Q)$ consists of all functions $f$ such that $f\varphi\in\mathcal{X}(Q)$ for every $\varphi\in\cC_{\mathrm{c}}^{\infty}(Q)$. In a similar fashion, we define $\Sobl^{k, p}(\cD)$. \item For $\mu > 0$, let $e_{\mu}(x) = e^{-\mu\sqrt{1+\abs{x}^2}}$\,, $x\in\Rd$\,. Then $f\in \Lp^{p,\mu}((0, T)\times \Rd)$ if $fe_{\mu} \in \Lp^{p}((0, T)\times \Rd)$\,.
Similarly, $\Sob^{1,2,p,\mu}((0, T)\times \Rd) = \{f\in \Lp^{p,\mu}((0, T)\times \Rd) \mid f, \frac{\partial f}{\partial t}, \frac{\partial f}{\partial x_i}, \frac{\partial^2 f}{\partial x_i \partial x_j}\in \Lp^{p,\mu}((0, T)\times \Rd) \}$ with the natural norm (see \cite{BL84-book}) \begin{align*} \norm{f}_{\Sob^{1,2,p,\mu}} = \norm{\frac{\partial f}{\partial t}}_{\Lp^{p,\mu}((0, T)\times \Rd)} + \norm{f}_{\Lp^{p,\mu}((0, T)\times \Rd)} + & \sum_{i}\norm{\frac{\partial f}{\partial x_i}}_{\Lp^{p,\mu}((0, T)\times \Rd)} \nonumber\\ &+ \sum_{i,j}\norm{\frac{\partial^2 f}{\partial x_i \partial x_j}}_{\Lp^{p,\mu}((0, T)\times \Rd)}\,. \end{align*} \end{itemize} \section{Analysis of Discounted Cost}\label{discCostSec} In this section we analyze the robustness of optimal controls for the discounted cost criterion. From \cite[Theorem~3.5.6]{ABG-book}, we have the following characterization of the optimal $\alpha$-discounted cost $V_{\alpha}$\,. \begin{theorem}\label{TD1.1} Suppose Assumptions (A1)-(A4) hold. Then the optimal discounted cost $V_{\alpha}$ defined in \cref{OPDcost} is the unique solution in $\cC^2(\Rd)\cap\cC_b(\Rd)$ of the HJB equation \begin{equation}\label{OptDHJB} \min_{\zeta \in\Act}\left[\sL_{\zeta}V_{\alpha}(x) + c(x, \zeta)\right] = \alpha V_{\alpha}(x) \,,\quad \text{for all\ }\,\, x\in\Rd\,. \end{equation} Moreover, $v^*\in \Usm$ is an $\alpha$-discounted optimal control if and only if it is a measurable minimizing selector of \cref{OptDHJB}, i.e., \begin{equation}\label{OtpHJBSelc} b(x,v^*(x))\cdot \grad V_{\alpha}(x) + c(x, v^*(x)) = \min_{\zeta\in \Act}\left[ b(x, \zeta)\cdot \grad V_{\alpha}(x) + c(x, \zeta)\right]\quad \text{a.e.}\,\,\, x\in\Rd\,. \end{equation} \end{theorem} \begin{remark} The assumption that the running cost is Lipschitz continuous in its first argument, uniformly with respect to the second, is used to obtain a $\cC^2(\Rd)$ solution of the HJB equation \cref{OptDHJB}. Without this uniform Lipschitz assumption, one can still show that the HJB equation admits a solution, now in $\Sobl^{2, p}(\Rd)$,\, $p\geq d+1$, and all the conclusions of Theorem~\ref{TD1.1} still hold\,. To see this: in view of \cite[Theorem~9.15]{GilTru} and the Schauder fixed point theorem, it can be shown that there exists $\phi_{R}\in \Sob^{2,p}(\sB_R)$ satisfying the Dirichlet problem \begin{align*} \min_{\zeta \in\Act}\left[\sL_{\zeta}\phi_{R}(x) + c(x, \zeta)\right] = \alpha \phi_{R}(x) \,,\quad \text{for all\ }\,\, x\in\sB_R\,,\quad\text{with}\quad \phi_{R} = 0\,\,\, \text{on}\,\,\, \partial{\sB_{R}}\,. \end{align*} Now letting $R\to\infty$ and following \cite[Theorem~3.5.6]{ABG-book}, we arrive at the solution. Hence, one can replace our assumption (A4) by the following (relatively weaker) assumption: \begin{itemize} \item[\hypertarget{A4\'}{{(A4\')}}] The \emph{running cost} $c$ is bounded (i.e., $\|c\|_{\infty} \leq M$ for some positive constant $M$) and jointly continuous in both variables $(x, \zeta)$\,. \end{itemize} All the results of this paper will also hold if we replace (A4) by (A4\')\,. \end{remark} As in Theorem~\ref{TD1.1}, following \cite[Theorem~3.5.6]{ABG-book}, for each approximating model we have the following complete characterization of an optimal policy, which is in the space of stationary Markov policies. \begin{theorem}\label{TD1.2} Suppose (A5)(iii) holds.
Then for each $n\in \NN$, there exists a unique solution $V_{\alpha}^n\in\cC^2(\Rd)\cap\cC_b(\Rd)$ of \begin{equation}\label{APOptDHJB1} \min_{\zeta \in\Act}\left[\sL_{\zeta}^nV_{\alpha}^n(x) + c_n(x, \zeta)\right] = \alpha V_{\alpha}^n(x) \,,\quad \text{for all\ }\,\, x\in\Rd\,. \end{equation} Moreover, we have the following: \begin{itemize} \item[(i)] $V_{\alpha}^n$ is the optimal discounted cost, i.e., \begin{equation*} V_{\alpha}^n(x) = \inf_{U\in \Uadm}\Exp_x^{U} \left[\int_0^{\infty} e^{-\alpha s} c_n(X_s^n, U_s) \D s\right],\quad x\in\Rd\,; \end{equation*} \item[(ii)] $v_n^*\in \Usm$ is an $\alpha$-discounted optimal control if and only if it is a measurable minimizing selector of \cref{APOptDHJB1}, i.e., \begin{equation}\label{OtpHJBSelc1} b_n(x,v_n^*(x))\cdot \grad V_{\alpha}^n(x) + c_n(x, v_n^*(x)) = \min_{\zeta\in \Act}\left[ b_n(x, \zeta)\cdot \grad V_{\alpha}^n(x) + c_n(x, \zeta)\right]\quad \text{a.e.}\,\,\, x\in\Rd\,. \end{equation} \end{itemize} \end{theorem} In the next theorem, we prove that $V_{\alpha}^n(x)$ converges to $V_{\alpha}(x)$ as $n\to \infty$ for all $x\in\Rd$\,. This result will be useful in establishing the robustness of discounted optimal controls. \begin{theorem}\label{TC1.3} Suppose Assumptions (A1)-(A5) hold. Then \begin{equation}\label{EC1.1} \lim_{n\to\infty} V_{\alpha}^n(x) = V_{\alpha}(x) \quad\text{for all}\,\, x\in\Rd\,. \end{equation} \end{theorem} \begin{proof} From \cref{APOptDHJB1} and \cref{OtpHJBSelc1}, for any minimizing selector $v_n^*\in \Usm$, it follows that \begin{equation*} \trace\bigl(a_n(x)\grad^2 V_{\alpha}^n(x)\bigr) + b_n(x,v_n^*(x))\cdot \grad V_{\alpha}^n(x) + c_n(x, v_n^*(x)) = \alpha V_{\alpha}^n(x)\,. \end{equation*} Then, using the standard elliptic PDE estimate as in \cite[Theorem~9.11]{GilTru}, for any $p\geq d+1$ and $R >0$, we deduce that \begin{equation}\label{ETC1.3A} \norm{V_{\alpha}^n}_{\Sob^{2,p}(\sB_R)} \,\le\, \kappa_1\bigl(\norm{V_{\alpha}^n}_{L^p(\sB_{2R})} + \norm{c_n(\cdot, v_n^*(\cdot))}_{L^p(\sB_{2R})}\bigr)\,, \end{equation} where $\kappa_1$ is a positive constant which is independent of $n$\,. Since \begin{equation*} \norm{c_n}_{\infty} \,\df\, \sup_{(x,u)\in\Rd\times\Act} c_n(x,u) \leq M, \quad \text{and}\quad V_{\alpha}^n(x) \leq \frac{\norm{c_n}_{\infty}}{\alpha}\,, \end{equation*} from \cref{ETC1.3A} we get \begin{equation}\label{ETC1.3B} \norm{V_{\alpha}^n}_{\Sob^{2,p}(\sB_R)} \,\le\, \kappa_1 M\bigl(\frac{|\sB_{2R}|^{\frac{1}{p}}}{\alpha} + |\sB_{2R}|^{\frac{1}{p}}\bigr)\,. \end{equation} We know that for $1< p < \infty$, the space $\Sob^{2,p}(\sB_R)$ is reflexive and separable; hence, as a corollary of the Banach-Alaoglu theorem, every bounded sequence in $\Sob^{2,p}(\sB_R)$ has a weakly convergent subsequence (see \cite[Theorem~3.18]{HB-book}). Also, we know that for $p\geq d+1$ the space $\Sob^{2,p}(\sB_R)$ is compactly embedded in $\cC^{1, \beta}(\bar{\sB}_R)$\,, where $\beta < 1 - \frac{d}{p}$ (see \cite[Theorem~A.2.15 (2b)]{ABG-book}), which implies that every weakly convergent sequence in $\Sob^{2,p}(\sB_R)$ will converge strongly in $\cC^{1, \beta}(\bar{\sB}_R)$\,.
Thus, in view of estimate \cref{ETC1.3B}, by a standard diagonalization argument and the Banach-Alaoglu theorem, we can extract a subsequence $\{V_{\alpha}^{n_k}\}$ such that for some $V_{\alpha}^*\in \Sobl^{2,p}(\Rd)$ \begin{equation}\label{ETC1.3BC} \begin{cases} V_{\alpha}^{n_k}\to & V_{\alpha}^*\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V_{\alpha}^{n_k}\to & V_{\alpha}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd) \quad\text{(strongly)}\,. \end{cases} \end{equation} In the following, we will show that $V^*_{\alpha} = V_{\alpha}$. Now, for any compact set $K\subset \Rd$, it is easy to see that \begin{align}\label{ETC1.3C1A} &\max_{x\in K}|\min_{\zeta\in \Act}\{ b_{n_k}(x,\zeta)\cdot \grad V_{\alpha}^{n_k}(x) + c_{n_k}(x, \zeta)\} - \min_{\zeta\in \Act}\{b(x,\zeta)\cdot \grad V_{\alpha}^*(x) + c(x, \zeta)\}|\nonumber\\ &\, \leq \, \max_{x\in K}\max_{\zeta\in \Act}|\{ b_{n_k}(x,\zeta)\cdot \grad V_{\alpha}^{n_k}(x) + c_{n_k}(x, \zeta)\} - \{b(x,\zeta)\cdot \grad V_{\alpha}^*(x) + c(x, \zeta)\}| \nonumber\\ &\, \leq \, \max_{x\in K}\max_{\zeta\in \Act}| b_{n_k}(x,\zeta)\cdot \grad V_{\alpha}^{n_k}(x) -b(x,\zeta)\cdot \grad V_{\alpha}^*(x) | + \max_{x\in K}\max_{\zeta\in \Act}| c_{n_k}(x, \zeta) - c(x, \zeta)|\,. \end{align} Since $c_{n}(x, \cdot)\to c(x, \cdot)$, $b_{n}(x, \cdot)\to b(x, \cdot)$ continuously on the compact set $\Act$ and $V_{\alpha}^{n_k}\to V_{\alpha}^*$ in $\cC^{1, \beta}_{loc}(\Rd)\,,$ for any compact set $K\subset \Rd$, as $k\to \infty$ we deduce that \begin{align}\label{ETC1.3C} \max_{x\in K}|\min_{\zeta\in \Act}\{ b_{n_k}(x,\zeta)\cdot \grad V_{\alpha}^{n_k}(x) + c_{n_k}(x, \zeta)\} - \min_{\zeta\in \Act}\{b(x,\zeta)\cdot \grad V_{\alpha}^*(x) + c(x, \zeta)\}|\to 0\,. \end{align} Thus, multiplying by a test function $\phi\in \cC_{c}^{\infty}(\Rd)$, from \cref{APOptDHJB1}, we obtain \begin{equation*} \int_{\Rd}\trace\bigl(a_{n_k}(x)\grad^2 V_{\alpha}^{n_k}(x)\bigr)\phi(x)\D x + \int_{\Rd} \min_{\zeta\in \Act} \{b_{n_k}(x,\zeta)\cdot \grad V_{\alpha}^{n_k}(x) + c_{n_k}(x, \zeta)\}\phi(x)\D x = \alpha\int_{\Rd} V_{\alpha}^{n_k}(x)\phi(x)\D x\,. \end{equation*} In view of \cref{ETC1.3BC} and \cref{ETC1.3C}, letting $k\to\infty$ it follows that \begin{equation}\label{ETC1.3D} \int_{\Rd}\trace\bigl(a(x)\grad^2 V_{\alpha}^*(x)\bigr)\phi(x)\D x + \int_{\Rd} \min_{\zeta\in \Act} \{b(x,\zeta)\cdot \grad V_{\alpha}^*(x) + c(x, \zeta)\}\phi(x)\D x = \alpha\int_{\Rd} V_{\alpha}^*(x)\phi(x)\D x\,. \end{equation} Since $\phi\in \cC_{c}^{\infty}(\Rd)$ is arbitrary and $V_{\alpha}^*\in \Sobl^{2,p}(\Rd)$, from \cref{ETC1.3D} we deduce that \begin{equation}\label{ETC1.3E} \min_{\zeta \in\Act}\left[\sL_{\zeta}V_{\alpha}^*(x) + c(x, \zeta)\right] = \alpha V_{\alpha}^*(x) \,,\quad \text{a.e.\ }\,\, x\in\Rd\,. \end{equation} Let $\tilde{v}^*\in\Usm$ be a minimizing selector of \cref{ETC1.3E} and $\tilde{X}$ be the solution of the SDE \cref{E1.1} corresponding to $\tilde{v}^*$. Then, applying the It$\hat{\rm o}$-Krylov formula, we obtain the following: \begin{align*} & \Exp_x^{\tilde{v}^*}\left[ e^{-\alpha T} V_{\alpha}^{*}(\tilde{X}_{T})\right] - V_{\alpha}^{*}(x)\nonumber\\ & \,=\,\Exp_x^{\tilde{v}^*}\left[\int_0^{T} e^{-\alpha s}\{\trace\bigl(a(\tilde{X}_s)\grad^2 V_{\alpha}^{*}(\tilde{X}_s)\bigr) + b(\tilde{X}_s, \tilde{v}^*(\tilde{X}_s))\cdot \grad V_{\alpha}^{*}(\tilde{X}_s) - \alpha V_{\alpha}^{*}(\tilde{X}_s)\} \D{s}\right] \,.
\end{align*} Hence, using \cref{ETC1.3E}, we deduce that \begin{align}\label{ETC1.3FC} \Exp_x^{\tilde{v}^*}\left[ e^{-\alpha T} V_{\alpha}^{*}(\tilde{X}_{T})\right] - V_{\alpha}^{*}(x) \,=\,- \Exp_x^{\tilde{v}^*}\left[\int_0^{T} e^{-\alpha s}c(\tilde{X}_s, \tilde{v}^*(\tilde{X}_s))\D{s}\right] \,. \end{align} Since $V_{\alpha}^{*}$ is bounded and $$\Exp_x^{\tilde{v}^*}\left[ e^{-\alpha T} V_{\alpha}^{*}(\tilde{X}_{T})\right] = e^{-\alpha T}\Exp_x^{\tilde{v}^*}\left[ V_{\alpha}^{*}(\tilde{X}_{T})\right],$$ letting $T\to\infty$, it is easy to see that \begin{equation*} \lim_{T\to\infty}\Exp_x^{\tilde{v}^*}\left[ e^{-\alpha T} V_{\alpha}^{*}(\tilde{X}_{T})\right] = 0\,. \end{equation*} Now, letting $T \to \infty$ and using the monotone convergence theorem, from \cref{ETC1.3FC} we obtain \begin{align}\label{ETC1.3FD} V_{\alpha}^{*}(x) \,=\, \Exp_x^{\tilde{v}^*}\left[\int_0^{\infty} e^{-\alpha s}c(\tilde{X}_s, \tilde{v}^*(\tilde{X}_s)) \D{s}\right] \,. \end{align} Again, by a similar argument, applying the It$\hat{\rm o}$-Krylov formula and using \cref{ETC1.3E}, for any $U\in \Uadm$\,, we have \begin{align*} V_{\alpha}^{*}(x) \, \leq\, \Exp_x^{U}\left[\int_0^{\infty} e^{-\alpha s}c(X_s, U_s) \D{s}\right] \,, \end{align*} where $X$ is the solution of \cref{E1.1} corresponding to $U$. This implies \begin{align}\label{ETC1.3FE} V_{\alpha}^{*}(x) \, \leq\,\inf_{U\in\Uadm} \Exp_x^{U}\left[\int_0^{\infty} e^{-\alpha s}c(X_s, U_s) \D{s}\right] \,. \end{align} Thus, from \cref{ETC1.3FD} and \cref{ETC1.3FE}, we deduce that \begin{align}\label{ETC1.3FF} V_{\alpha}^{*}(x) \,= \, \inf_{U\in\Uadm}\Exp_x^{U}\left[\int_0^{\infty} e^{-\alpha s}c(X_s, U_s) \D{s}\right] \,. \end{align} Since both $V_{\alpha}, V_{\alpha}^*$ are continuous functions on $\Rd$, from \cref{OPDcost} and \cref{ETC1.3FF}, it follows that $V_{\alpha}(x) = V_{\alpha}^*(x)$ for all $x\in\Rd$. This completes the proof. \end{proof} Let $\hat{X}^n$ be the solution of the SDE \cref{E1.1} corresponding to $v_n^*$. Then we have \begin{equation}\label{EDiscostRobust} \cJ_{\alpha}^{v_n^*}(x, c) \,=\, \Exp_x^{v_n^*} \left[\int_0^{\infty} e^{-\alpha s} c(\hat{X}^n_s, v_n^*(\hat{X}^n_s)) \D s\right],\quad x\in\Rd\,. \end{equation} Next we prove the robustness result, i.e., we prove that $\cJ_{\alpha}^{v_n^*}(x, c) \to \cJ_{\alpha}^{v^*}(x, c)$ as $n\to \infty$\,, where $v_n^*$ is an optimal control of the approximated model and $v^*$ is an optimal control of the true model. As in \cite{KY-20}, we will use the continuity result above as an intermediate step. \begin{theorem}\label{TC1.4} Suppose Assumptions (A1)-(A5) hold. Then \begin{equation}\label{ETC1.4A} \lim_{n\to\infty} \cJ_{\alpha}^{v_n^*}(x, c) = \cJ_{\alpha}^{v^*}(x, c) \quad\text{for all}\,\, x\in\Rd\,. \end{equation} \end{theorem} \begin{proof} Following the argument in \cite[Theorem~3.5.6]{ABG-book}, one can show that for each $v_n^*\in \Usm$, there exists $V_{\alpha}^{n,*}\in \Sobl^{2,p}(\Rd)\cap \cC_{b}(\Rd)$ satisfying \begin{equation}\label{ETC1.4B} \trace\bigl(a(x)\grad^2 V_{\alpha}^{n,*}(x)\bigr) + b(x,v_n^*(x))\cdot \grad V_{\alpha}^{n,*}(x) + c(x, v_n^*(x)) = \alpha V_{\alpha}^{n,*}(x)\,. \end{equation} Applying the It$\hat{\rm o}$-Krylov formula, we deduce that \begin{align*} & \Exp_x^{v_n^*}\left[ e^{-\alpha T} V_{\alpha}^{n,*}(\hat{X}^n_{T})\right] - V_{\alpha}^{n,*}(x)\nonumber\\ & \,=\,\Exp_x^{v_n^*}\left[\int_0^{T} e^{-\alpha s}\{\trace\bigl(a(\hat{X}^n_s)\grad^2 V_{\alpha}^{n,*}(\hat{X}^n_s)\bigr) + b(\hat{X}^n_s, v_n^*(\hat{X}^n_s))\cdot \grad V_{\alpha}^{n,*}(\hat{X}^n_s) - \alpha V_{\alpha}^{n,*}(\hat{X}^n_s)\} \D{s}\right] \,.
\end{align*} Now using \cref{ETC1.4B}, it follows that \begin{align}\label{ETC1.4C} \Exp_x^{v_n^*}\left[ e^{-\alpha T} V_{\alpha}^{n,*}(\hat{X}^n_{T})\right] - V_{\alpha}^{n,*}(x) \,=\,- \Exp_x^{v_n^*}\left[\int_0^{T} e^{-\alpha s}c(\hat{X}^n_s, v_n^*(\hat{X}^n_s)) \D{s}\right] \,. \end{align} Since $V_{\alpha}^{n,*}$ is bounded and $$\Exp_x^{v_n^*}\left[ e^{-\alpha T} V_{\alpha}^{n,*}(\hat{X}^n_{T})\right] = e^{-\alpha T}\Exp_x^{v_n^*}\left[ V_{\alpha}^{n,*}(\hat{X}^n_{T})\right],$$ letting $T\to\infty$ we deduce that \begin{equation*} \lim_{T\to\infty}\Exp_x^{v_n^*}\left[ e^{-\alpha T} V_{\alpha}^{n,*}(\hat{X}^n_{T})\right] = 0\,. \end{equation*} Thus, from \cref{ETC1.4C}, letting $T \to \infty$ by monotone convergence theorem we obtain \begin{align}\label{ETC1.4D} V_{\alpha}^{n,*}(x) \,=\, \Exp_x^{v_n^*}\left[\int_0^{\infty} e^{-\alpha s}c(\hat{X}^n_s, v_n^*(\hat{X}^n_s)) \D{s}\right] \,=\, \cJ_{\alpha}^{v_n^*}(x, c)\,. \end{align} This implies that $V_{\alpha}^{n,*} \leq \frac{\norm{c}_{\infty}}{\alpha}$. Thus, as in Theorem \ref{TC1.3} (see, \cref{ETC1.3A}, \cref{ETC1.3B}), by standard Sobolev estimate, for any $R>0$ we get $\norm{V_{\alpha}^{n,*}}_{\Sob^{2,p}(\sB_R)} \leq \kappa_2$\,, for some positive constant $\kappa_2$ independent of $n$. Hence, by the Banach-Alaoglu theorem and standard diagonalization argument (as in \cref{ETC1.3BC}), we have there exists $\hat{V}_{\alpha}^*\in \Sobl^{2,p}(\Rd)\cap \cC_{b}(\Rd)$ such that along some sub-sequence $\{V_{\alpha}^{n_{k},*}\}$ \begin{equation}\label{ETC1.4E} \begin{cases} V_{\alpha}^{n_{k},*}\to & \hat{V}_{\alpha}^*\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V_{\alpha}^{n_{k},*}\to & \hat{V}_{\alpha}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd)\quad\text{(strongly)}\,. \end{cases} \end{equation} Since space of stationary Markov strategies $\Usm$ is compact, along some further sub-sequence (without loss of generality denoting by same sequence) we have $v_{n_k}^*\to \hat{v}^*$ in $\Usm$\,. It is easy to see that \begin{align*} b(x,v_{n_k}^*(x))\cdot \grad V_{\alpha}^{n_{k}, *}(x) - b(x,\hat{v}^*(x))\cdot \grad \hat{V}_{\alpha}^*(x) = & b(x,v_{n_k}^*(x))\cdot \grad \left(V_{\alpha}^{n_{k}, *} - \hat{V}_{\alpha}^*\right)(x) \\ & + \left(b(x,v_{n_k}^*(x)) - b(x,\hat{v}^*(x))\right)\cdot \grad \hat{V}_{\alpha}^*(x)\,. \end{align*} Since $V_{\alpha}^{n_k, *}\to \hat{V}_{\alpha}^*$ in $\cC^{1, \beta}_{loc}(\Rd)\,,$ on any compact set $b(x,v_{n_k}^*(x))\cdot \grad \left(V_{\alpha}^{n_{k}, *} - \hat{V}_{\alpha}^*\right)(x)\to 0$ strongly and by the topology of $\Usm$, we have $\left(b(x,v_{n_k}^*(x)) - b(x,\hat{v}^*(x))\right)\cdot \grad \hat{V}_{\alpha}^*(x)\to 0$ weakly. Thus, in view of the topology of $\Usm$, and since $V_{\alpha}^{n_k, *}\to \hat{V}_{\alpha}^*$ in $\cC^{1, \beta}_{loc}(\Rd)\,,$ as $k\to \infty$ we obtain \begin{equation}\label{ETC1.4EA} b(x,v_{n_k}^*(x))\cdot \grad V_{\alpha}^{n_{k}, *}(x) + c(x, v_{n_k}^*(x)) \to b(x,\hat{v}^*(x))\cdot \grad \hat{V}_{\alpha}^*(x) + c(x, \hat{v}^*(x))\quad\text{weakly}\,. \end{equation} Now, multiplying by a test function $\phi\in \cC_{c}^{\infty}(\Rd)$, from \cref{ETC1.4B}, it follows that \begin{align*} \int_{\Rd}\trace\bigl(a(x)\grad^2 V_{\alpha}^{n_{k}, *}(x)\bigr)\phi(x)\D x + \int_{\Rd}\{b(x,v_{n_k}^*(x))\cdot \grad V_{\alpha}^{n_{k}, *}(x) + & c(x, v_{n_k}^*(x))\}\phi(x)\D x \\ &= \alpha\int_{\Rd} V_{\alpha}^{n_{k}, *}(x)\phi(x)\D x\,. 
\end{align*} Hence, using \cref{ETC1.4E}, \cref{ETC1.4EA}, and letting $k\to\infty$ we obtain \begin{equation}\label{ETC1.4EB} \int_{\Rd}\trace\bigl(a(x)\grad^2 \hat{V}_{\alpha}^*(x)\bigr)\phi(x)\D x + \int_{\Rd} \{b(x,\hat{v}^*(x))\cdot \grad \hat{V}_{\alpha}^*(x) + c(x, \hat{v}^*(x))\}\phi(x)\D x = \alpha\int_{\Rd} \hat{V}_{\alpha}^*(x)\phi(x)\D x\,. \end{equation} Since $\phi\in \cC_{c}^{\infty}(\Rd)$ is arbitrary and $\hat{V}_{\alpha}^*\in \Sobl^{2,p}(\Rd)$, from \cref{ETC1.4EB} we deduce that the function $\hat{V}_{\alpha}^*\in \Sobl^{2,p}(\Rd)\cap \cC_{b}(\Rd)$ satisfies \begin{equation}\label{ETC1.4F} \trace\bigl(a(x)\grad^2 \hat{V}_{\alpha}^{*}(x)\bigr) + b(x,\hat{v}^*(x))\cdot \grad \hat{V}_{\alpha}^{*}(x) + c(x, \hat{v}^*(x)) = \alpha \hat{V}_{\alpha}^{*}(x)\,. \end{equation} As earlier, applying the It$\hat{\rm o}$-Krylov formula and using \cref{ETC1.4F}, it follows that \begin{align}\label{ETC1.4G} \hat{V}_{\alpha}^{*}(x) \,=\, \Exp_x^{\hat{v}^*}\left[\int_0^{\infty} e^{-\alpha s}c(\hat{X}_s, \hat{v}^*(\hat{X}_s)) \D{s}\right] \,, \end{align} where $\hat{X}$ is the solution of the SDE \cref{E1.1} corresponding to $\hat{v}^*$\,. Now, we have \begin{equation} |\cJ_{\alpha}^{v_{n_k}^*}(x, c) - \cJ_{\alpha}^{v^*}(x, c)| \leq |\cJ_{\alpha}^{v_{n_k}^*}(x, c) - V_{\alpha}^{n_k}(x)| + |V_{\alpha}^{n_k}(x) - \cJ_{\alpha}^{v^*}(x, c)|\,. \end{equation} From Theorem \ref{TD1.1}, we know that $\cJ_{\alpha}^{v^*}(x, c) = V_{\alpha}(x)$. Thus, from Theorem~\ref{TC1.3}, we deduce that $|V_{\alpha}^{n_k}(x) - \cJ_{\alpha}^{v^*}(x, c)|\to 0$ as $k\to\infty$\,. To complete the proof we have to show that $|\cJ_{\alpha}^{v_{n_k}^*}(x, c) - V_{\alpha}^{n_k}(x)|\to 0$ as $k\to \infty$\,. Also, from Theorem~\ref{TD1.2}, we know that $v_n^*\in \Usm$ is a minimizing selector of the HJB equation \cref{APOptDHJB1} of the approximated model; thus it follows that \begin{align}\label{ETC1.4GA1} \alpha V_{\alpha}^{n_k}(x) &\,=\, \min_{\zeta \in\Act}\left[\sL_{\zeta}^{n_k}V_{\alpha}^{n_k}(x) + c_{n_k}(x, \zeta)\right]\nonumber\\ & \,=\, \trace\bigl(a_{n_k}(x)\grad^2 V_{\alpha}^{n_k}(x)\bigr) + b_{n_k}(x,v_{n_k}^*(x))\cdot \grad V_{\alpha}^{n_k}(x) + c_{n_k}(x, v_{n_k}^*(x))\,,\quad \text{a.e.\ }\,\, x\in\Rd\,. \end{align} Hence, by a standard Sobolev estimate (as in Theorem~\ref{TC1.3}), for each $R>0$ we have $\norm{V_{\alpha}^{n_k}}_{\Sob^{2,p}(\sB_R)} \leq \kappa_3$\,, for some positive constant $\kappa_3$ independent of $k$. Thus, we can extract a further sub-sequence (without loss of generality denoting by same sequence) such that for some $\Tilde{V}_{\alpha}^*\in \Sobl^{2,p}(\Rd)\cap \cC_{b}(\Rd)$ (as in \cref{ETC1.3BC}) we get \begin{equation}\label{ETC1.4H} \begin{cases} V_{\alpha}^{n_k}\to & \Tilde{V}_{\alpha}^*\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V_{\alpha}^{n_k}\to & \Tilde{V}_{\alpha}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd)\quad\text{(strongly)}\,. \end{cases} \end{equation} Following the same steps as in Theorem \ref{TC1.3}, multiplying by a test function and letting $k\to \infty$, from \cref{ETC1.4GA1} we deduce that $\Tilde{V}_{\alpha}^*\in \Sobl^{2,p}(\Rd)\cap \cC_{b}(\Rd)$ satisfies \begin{align}\label{ETC1.4I} \alpha \Tilde{V}_{\alpha}^{*}(x) &\,=\, \min_{\zeta \in\Act}\left[\sL_{\zeta}\Tilde{V}_{\alpha}^{*}(x) + c(x, \zeta)\right]\nonumber\\ & \,=\, \trace\bigl(a(x)\grad^2 \Tilde{V}_{\alpha}^{*}(x)\bigr) + b(x,\hat{v}^*(x))\cdot \grad \Tilde{V}_{\alpha}^{*}(x) + c(x, \hat{v}^*(x))\,.
\end{align} From the continuity results (Theorem~\ref{TC1.3}), it is easy to see that $\Tilde{V}_{\alpha}^{*}(x) = \cJ_{\alpha}^{v^*}(x, c)$ for all $x\in\Rd$. Moreover, applying the It$\hat{\rm o}$-Krylov formula and using \cref{ETC1.4I}, we obtain \begin{align}\label{ETC1.4J} \Tilde{V}_{\alpha}^{*}(x) \,=\, \Exp_x^{\hat{v}^*}\left[\int_0^{\infty} e^{-\alpha s}c(\hat{X}_s, \hat{v}^*(\hat{X}_s)) \D{s}\right] \,. \end{align} Since both $\hat{V}_{\alpha}^{*}$ and $\Tilde{V}_{\alpha}^{*}$ are continuous, from \cref{ETC1.4G} and \cref{ETC1.4J}, it follows that both $\cJ_{\alpha}^{v_{n_k}^*}(x, c)$ (which equals $V_{\alpha}^{n_{k},*}(x)$) and $V_{\alpha}^{n_{k}}(x)$ converge to the same limit. This completes the proof. \end{proof} \begin{remark} Note that in the above, we indirectly also showed the continuity of the value function in the control policy (under the topology defined); uniqueness of the solution to the PDE in the above implies continuity. This result, while it can be obtained from the analysis of Borkar \cite{Bor89} (in a slightly more restrictive setup), is obtained here directly via a careful optimality analysis and will have important consequences for numerical solutions and approximation results for both discounted and average cost optimality. This is studied in detail, together with its implications, in \cite{YukselPradhan}\,. \end{remark} \section{Analysis of Ergodic Cost}\label{secErgodicCost} In this section we study the robustness problem for the ergodic cost criterion. The associated optimal control problem for this cost criterion has been studied extensively in the literature, see e.g., \cite{ABG-book}. For this cost criterion we will study the robustness problem under two sets of assumptions: the first is the so-called near-monotonicity condition on the running cost, which discourages instability, and the second is a Lyapunov stability condition. \subsection{Analysis under a near-monotonicity assumption}\label{NearMonotone} Here we assume that the cost function $c$ satisfies the following near-monotonicity condition: \begin{itemize} \item[\hypertarget{A6}{{(A6)}}] It holds that \begin{equation}\label{ENearmonot} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) > \sE^*(c)\,. \end{equation} \end{itemize} This condition penalizes the escape of probability mass to infinity. Since our running cost $c$ is bounded, it is easy to see that $\sE^*(c) \leq \norm{c}_{\infty}$\,. Recall that a stationary policy $v\in \Usm$ is said to be stable if the associated diffusion process is positive recurrent. It is known that under \cref{ENearmonot}, an optimal control exists in the space of stable stationary Markov controls (see, \cite[Theorem~3.4.5]{ABG-book}). Now from \cite[Theorem~3.6.10]{ABG-book}, we have the following complete characterization of the ergodic optimal control. \begin{theorem}\label{ergodicnearmono1} Suppose that Assumptions (A1)-(A4) and (A6) hold. Then there exists a unique solution pair $(V, \rho)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V(0) = 0$ and $\inf_{\Rd} V > -\infty$ and $\rho \leq \sE^*(c)$, satisfying \begin{equation}\label{EErgonearOpt1A} \rho = \min_{\zeta \in\Act}\left[\sL_{\zeta}V(x) + c(x, \zeta)\right]\,.
\end{equation} Moreover, we have \begin{itemize} \item[(i)]$\rho = \sE^*(c)$ \item[(ii)] a stationary Markov control $v^*\in \Usm$ is an optimal control if and only if it is a minimizing selector of \cref{EErgonearOpt1A}, i.e., if and only if it satisfies \begin{equation}\label{EErgonearOpt1B} \min_{\zeta \in\Act}\left[\sL_{\zeta}V(x) + c(x, \zeta)\right] \,=\, \trace\bigl(a(x)\grad^2 V(x)\bigr) + b(x,v^*(x))\cdot \grad V(x) + c(x, v^*(x))\,,\quad \text{a.e.}\,\, x\in\Rd\,. \end{equation} \end{itemize} \end{theorem} We assume that for the approximated model, for each $n\in \NN$ the running cost function $c_n$ satisfies the near-monotonicity condition \cref{ENearmonot} relative to $\max_{n\in\NN}\sE^{n*}(c_n)$, i.e., \begin{equation}\label{AssumNearApprox1} \max_{n\in\NN}\sE^{n*}(c_n) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in\Act} c_n(x,\zeta)\,. \end{equation} Thus, in view of \cite[Theorem~3.6.10]{ABG-book}, for the approximating model, for each $n\in \NN$ we have the following theorem. \begin{theorem}\label{ergodicnearmono2} Suppose that Assumption (A5)(iii) holds. Then there exists a unique solution pair $(V_n, \rho_n)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V_n(0) = 0$ and $\inf_{\Rd} V_n > -\infty$ and $\rho_n \leq \sE^{n*}(c_n)$, satisfying \begin{equation}\label{EErgonearOpt2A} \rho_n = \min_{\zeta \in\Act}\left[\sL_{\zeta}^{n}V_n(x) + c_n(x, \zeta)\right]\,. \end{equation} Moreover, we have \begin{itemize} \item[(i)]$\rho_n = \sE^{n*}(c_n)$ \item[(ii)] a stationary Markov control $v_n^*\in \Usm$ is an optimal control if and only if it is a minimizing selector of \cref{EErgonearOpt2A}, i.e., if and only if it satisfies \begin{equation}\label{EErgonearOpt2B} \min_{\zeta \in\Act}\left[\sL_{\zeta}^n V_n(x) + c_n(x, \zeta)\right] \,=\, \trace\bigl(a_n(x)\grad^2 V_n(x)\bigr) + b_n(x,v_n^*(x))\cdot \grad V_n(x) + c_n(x, v_n^*(x))\,,\quad \text{a.e.}\,\, x\in\Rd\,. \end{equation} \end{itemize} \end{theorem} In view of the near-monotonicity assumption \cref{AssumNearApprox1}, for any minimizing selector $v_n^*\in\Usm$ of \cref{EErgonearOpt2A}, it is easy to see that outside a compact set $\sL_{v_n^*}^{n}V_n(x) \leq -\epsilon$ for some $\epsilon>0$\,. Since $V_n$ is bounded from below, \cite[Theorem~2.6.10(f)]{ABG-book} asserts that $v_n^*$ is stable. Hence, we deduce that the optimal policies of the approximating models are stable. However, note that the compact set mentioned above may not be applicable uniformly for all $n$, which turns out to be a consequential issue. Now we want to show that as $n\to\infty$ the optimal value of the approximated model $\sE^{n*}(c_n)$ converges to the optimal value of the true model $\sE^{*}(c)$\,. Under the near-monotonicity assumption alone this result may not be true in general, due to the restricted uniqueness/non-uniqueness of the solution of the associated HJB equation (see e.g., \cite{AA12}, \cite{AA13}). As a result of this, in \cite{AA12}, \cite{M97} the authors have shown that for the optimal control problem the policy iteration algorithm (PIA) may fail to converge to the optimal value. In order to ensure convergence of the PIA, in addition to the near-monotonicity assumption a blanket Lyapunov condition is assumed in \cite{M97}\,.
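To fix ideas, we record one simple illustrative family of running costs; this particular choice is given for illustration only and plays no role in the sequel. For any constant $\mathrm{m}>0$ and any continuous $g\colon \Act\to[0,1]$, the bounded cost
\begin{equation*}
c(x,\zeta) \,=\, \mathrm{m}\left(1 - e^{-\norm{x}^2}\right) + \mathrm{m}\, g(\zeta)\, e^{-\norm{x}^2}
\end{equation*}
satisfies $\norm{c}_{\infty} = \mathrm{m}$ and $\liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) = \mathrm{m}$, so that the near-monotonicity condition \cref{ENearmonot} holds precisely when $\sE^*(c) < \mathrm{m}$, i.e., when the optimal ergodic cost is strictly smaller than the value of the cost at infinity. Even for such costs, however, near-monotonicity by itself does not resolve the uniqueness issues described above, which motivates the additional tightness assumption imposed next.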
Accordingly, in this article, to guarantee the convergence $\sE^{n*}(c_n)\to \sE^{*}(c)$, we will assume that \[\Theta \,\df\, \{\eta_{v_n^*}^n : n\in\NN\},\] is tight, where $\eta_{v_n^*}^n$ is the unique invariant measure of the solution $X^n$ of \cref{ASE1.1} corresponding to $v_n^*\in \Usm$ (the optimal policies of the approximated models)\,. One sufficient condition which ensures the required tightness is the following: there exists a pair of nonnegative inf-compact functions $(\Lyap, h)\in \cC^{2}(\Rd)\times\cC(\Rd)$ such that $\sL_{v_n^*}^{n} \Lyap(x) \leq \hat{\kappa}_{0} - h(x)$ for some positive constant $\hat{\kappa}_{0}$ and for all $n\in\NN$ and $x\in \Rd$\,. \begin{theorem}\label{ErgoContnuity} Suppose that Assumptions (A1) - (A6) hold. Also, assume that the set $\Theta$ is tight. Then, we have \begin{equation}\label{EErgoContnuity1} \lim_{n\to\infty} \sE^{n*}(c_n) = \sE^{*}(c)\,. \end{equation} \end{theorem} \begin{proof} From Theorem~\ref{ergodicnearmono2}, we know that for each $n\in\NN$ there exists $(V_n, \rho_n)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V_n(0) = 0$ and $\inf_{\Rd} V_n > -\infty$, satisfying \begin{equation}\label{EErgoContnuity1A} \rho_n = \min_{\zeta \in\Act}\left[\sL_{\zeta}^{n}V_n(x) + c_n(x, \zeta)\right]\,, \end{equation} where $\rho_n = \sE^{n*}(c_n)$\,. Since $\norm{c_n}_{\infty} \leq M$, it follows that $\rho_n = \sE^{n*}(c_n) \leq M$\,. From \cite[Theorem~3.6.6]{ABG-book} (the standard vanishing discount asymptotics), we know that as $\alpha \to 0$ the difference $V_{\alpha}^n(\cdot) - V_{\alpha}^n(0) \to V_{n}(\cdot)$ and $\alpha V_{\alpha}^n(0)\to \rho_n$, where $V_{\alpha}^n$ is the solution of the $\alpha$-discounted HJB equation \cref{APOptDHJB1}\,. Let $$\kappa(\rho_n) \,\df\, \{x\in\Rd\mid \min_{\zeta\in\Act} c_n(x,\zeta) \leq \rho_n\}\,.$$ Since the map $x\to \min_{\zeta\in\Act} c_n(x,\zeta)$ is continuous, it is easy to see that $\kappa(\rho_n)$ is closed and due the near-monotonicity assumption (see, \cref{AssumNearApprox1}), it follows that $\kappa(\rho_n)$ is bounded. Therefore $\kappa(\rho_n)$ is a compact subset of $\Rd$. Since $V_{\alpha}^{n} \leq \cJ_{\alpha,n}^{v_n^*}(x, c_n)$ and $v_n^*$ is stable, from \cite[Lemma~3.6.1]{ABG-book}, we have \begin{equation}\label{EErgoContnuity1C} \inf_{\kappa(\rho_n)} V_{\alpha}^{n} = \inf_{\Rd} V_{\alpha}^{n} \leq \frac{\rho_n}{\alpha}\,. \end{equation} Now for any minimizing selector $\hat{v}_n^*\in \Usm$ of \cref{APOptDHJB1}, we get \begin{equation*} \trace\bigl(a_n(x)\grad^2 V_{\alpha}^n(x)\bigr) + b_n(x,\hat{v}_n^*(x))\cdot \grad V_{\alpha}^n(x) - \alpha V_{\alpha}^n(x) = - c_n(x, \hat{v}_n^*(x))\,. \end{equation*} Since $\norm{c_n}_{\infty} \leq M$ for all $n\in\NN$, from estimate (3.6.9b) of \cite[Lemma~3.6.3]{ABG-book}, it follows that \begin{equation}\label{EErgoContnuity1D} \norm{V_{\alpha}^n - V_{\alpha}^n(0)}_{\Sob^{2,p}(\sB_R)} \leq \Tilde{C}(R,p)\left(1 + \alpha \inf_{\sB_{R_0}}V_{\alpha}^n \right)\,, \end{equation} for all $R> R_0$, where $R_0\in\RR$ is positive number such that $\kappa(\rho_n)\subset\sB_{R_0}$ and $\Tilde{C}(R,p)$ is a positive constant which depends only on $d$ and $R_0$\,. Now combining \cref{EErgoContnuity1C} and \cref{EErgoContnuity1D}, we obtain \begin{equation}\label{EErgoContnuity1E} \norm{V_{\alpha}^n - V_{\alpha}^n(0)}_{\Sob^{2,p}(\sB_R)} \leq \Tilde{C}(R,p)\left(1 + M \right)\,. \end{equation} In view of assumption \cref{AssumNearApprox1}, one can choose $R_0$ independent of $n$. 
Thus \cref{EErgoContnuity1E} implies that \begin{equation}\label{EErgoContnuity1F} \norm{V_n}_{\Sob^{2,p}(\sB_R)} \leq \Tilde{C}(R,p)\left(1 + M \right)\,. \end{equation} Hence, by the Banach-Alaoglu theorem and standard diagonalization argument (as in \cref{ETC1.3BC}), we have there exists $\Tilde{V}\in \Sobl^{2,p}(\Rd)$ such that along a sub-sequence \begin{equation}\label{EErgoContnuity1G} \begin{cases} V_{n_k}\to & \Tilde{V}\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V_{n_k}\to & \Tilde{V}\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd)\quad\text{(strongly)}\,. \end{cases} \end{equation} Again, since $\rho_n \leq M$, along a further sub-sequence (without loss of generality denoting by same sequence), we have $\rho_{n_k}\to \Tilde{\rho}$ as $k\to \infty$\,. Now, as before, multiplying by test function $\phi\in \cC_{c}^{\infty}(\Rd)$, from \cref{EErgoContnuity1A}, we obtain \begin{equation*} \int_{\Rd}\trace\bigl(a_{n_k}(x)\grad^2 V_{n_k}(x)\bigr)\phi(x)\D x + \int_{\Rd} \min_{\zeta\in \Act} \{b_{n_k}(x,\zeta)\cdot \grad V_{n_k}(x) + c_{n_k}(x, \zeta)\}\phi(x)\D x = \int_{\Rd} \rho_{n_k}\phi(x)\D x\,. \end{equation*} By similar argument as in Theorem~\ref{TC1.3}, in view of \cref{EErgoContnuity1G}, letting $k\to\infty$ it follows that \begin{equation}\label{EErgoContnuity1H} \int_{\Rd}\trace\bigl(a(x)\grad^2 \Tilde{V}(x)\bigr)\phi(x)\D x + \int_{\Rd} \min_{\zeta\in \Act} \{b(x,\zeta)\cdot \grad \Tilde{V}(x) + c(x, \zeta)\}\phi(x)\D x = \int_{\Rd} \Tilde{\rho}\phi(x)\D x\,. \end{equation} Since $\phi\in \cC_{c}^{\infty}(\Rd)$ is arbitrary and $\Tilde{V}\in \Sobl^{2,p}(\Rd)$, we deduce that $\Tilde{V}\in \Sobl^{2,p}(\Rd)$ satisfies \begin{equation*}\Tilde{\rho} = \min_{\zeta \in\Act}\left[\sL_{\zeta}\Tilde{V}(x) + c(x, \zeta)\right]\,. \end{equation*} Since $\Usm$ is compact along a further subsequence $v_{n_k}^*\to \tilde{v}^*$ (denoting by the same sequence) in $\Usm$. Repeating the above argument, one can show that the pair $(\Tilde{V}, \Tilde{\rho})$ satisfies \begin{equation*}\Tilde{\rho} = \sL_{\tilde{v}^*}\Tilde{V}(x) + c(x, \tilde{v}^*(x))\,. \end{equation*} As we know $V_n(0) = 0$ for all $n\in\NN$ (see, \cref{EErgoContnuity1A}), it is easy to see that $\Tilde{V}(0) = 0$. Next we show that $\Tilde{V}$ is bounded from below. From estimate (3.6.9a) of \cite[Lemma~3.6.3]{ABG-book}, for each $R> R_0$ we have \begin{equation}\label{EErgoContnuity1I} \left(\osc_{\sB_{2R}} V_{\alpha}^{n}\,\df\, \right) \sup_{x\in \sB_{2R}} V_{\alpha}^{n}(x) - \inf_{x\in \sB_{2R}}V_{\alpha}^{n}(x) \leq \Tilde{C}_1(R)(1 + \alpha\inf_{\sB_{R_0}} V_{\alpha}^{n})\leq \Tilde{C}_1(R)(1 + M)\,, \end{equation} for some constant $\Tilde{C}_1(R) >0$ which depends only on $d$ and $R_0$\,. 
Also, let $\alpha_k$ be a sequence such that $\alpha_k\to 0$ as $k\to \infty$, thus for each $x\in \Rd$ we have \begin{align}\label{EErgoContnuity1IA} V_n(x) &= \lim_{k\to \infty}\left(V_{\alpha_k}^n(x) - V_{\alpha_k}^n(0)\right) \geq \liminf_{k\to \infty} \left(V_{\alpha_k}^n(x) - \inf_{\Rd}V_{\alpha_k}^n(x) + \inf_{\Rd}V_{\alpha_k}^n(x) - V_{\alpha_k}^n(0)\right) \nonumber\\ &\geq - \limsup_{k\to \infty} \left(V_{\alpha_k}^n(0) - \inf_{\Rd}V_{\alpha_k}^n(x)\right) + \liminf _{k\to \infty} \left(V_{\alpha_k}^n(x) - \inf_{\Rd}V_{\alpha_k}^n(x)\right)\nonumber\\ &\geq - \limsup_{k\to \infty} \left(\osc_{\sB_{R_0}} V_{\alpha_k}^n \right);\quad \left(\text{since}\,\,\, \inf_{\sB_{R_0}} V_{\alpha_k}^n = \inf_{\Rd} V_{\alpha_k}^n \right)\,, \end{align} where the last inequality follows form the fact that $V_{\alpha_k}^n(x) - \inf_{\Rd}V_{\alpha_k}^n(x) \geq 0$\,. Hence, in view of estimate \cref{EErgoContnuity1I}, we deduce that \begin{equation}\label{EErgoContnuity1J} V_n(x) \geq - \Tilde{C}_1(R_0)(1 + M)\,. \end{equation} This implies that the limit $\Tilde{V}\geq - \Tilde{C}_1(R_0)(1 + M)$\,. Note that $$\rho_{n_k} = \sE^{{n_k}*}(c_{n_k}) = \int_{\Rd}\int_{\Act} c_{n_k}(x,\zeta)v_{n_k}^*(x)(\D \zeta)\eta_{v_{n_k}^*}^{n_k}(\D x)\,.$$ Since $\Theta$ is tight, from \cite[Lemma~3.2.6]{ABG-book}, we deduce that $\eta_{v_{n_k}^*}^{n_k} \to \eta_{\tilde{v}^*}$ in total variation norm as $k\to\infty$, where $\eta_{\tilde{v}^*}$ is the unique invariant measure of \cref{E1.1} corresponding to $\tilde{v}^*$\,. Thus, by writing \begin{align} &\int_{\Rd}\int_{\Act} c_{n_k}(x,\zeta)v_{n_k}^*(x)(\D \zeta)\eta_{v_{n_k}^*}^{n_k}(\D x)\, -\int_{\Rd}\int_{\Act} c(x,\zeta)\tilde{v}^*(x)(\D \zeta)\eta_{\tilde{v}^*}(\D x) \nonumber \\ & = \bigg(\int_{\Rd}\int_{\Act} c_{n_k}(x,\zeta)v_{n_k}^*(x)(\D \zeta)\eta_{v_{n_k}^*}^{n_k}(\D x) - \int_{\Rd}\int_{\Act} c_{n_k}(x,\zeta)v_{n_k}^*(x)(\D \zeta)\eta_{\tilde{v}^*}(\D x) \bigg) \nonumber \\ &\quad + \bigg(\int_{\Rd}\int_{\Act} c_{n_k}(x,\zeta)v_{n_k}^*(x)(\D \zeta)\eta_{\tilde{v}^*}(\D x) -\int_{\Rd}\int_{\Act} c(x,\zeta)\tilde{v}^*(x)(\D \zeta)\eta_{\tilde{v}^*}(\D x) \bigg) \end{align} and noting that the first term converges to zero by total variation convergence of $\eta_{v_{n_k}^*}^{n_k} \to \eta_{\tilde{v}^*}$ and the second term converging by the convergence in the control topology on $\Usm$ as $\eta_{\tilde{v}^*}$ is fixed; in view of the fact that $c_n \to c$ (continuously over control actions) we conclude that $\Tilde{\rho} = \int_{\Rd}\int_{\Act} c(x,\zeta)\tilde{v}^*(x)(\D \zeta)\eta_{\tilde{v}^*}(\D x)$\,. Therefore, the pair $(\Tilde{V}, \Tilde{\rho})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, which has the properties that $\Tilde{V}(0) = 0$ and $\inf_{\Rd} \Tilde{V} > -\infty$, is a compatible solution (see \cite[Definition~1.1]{AA13}) to \cref{EErgonearOpt1A}. Since solution to the equation \cref{EErgonearOpt1A} is unique (see \cite[Theorem~1.1]{AA13}), it follows that $(\Tilde{V}, \Tilde{\rho}) \equiv (V, \rho)$\,. This completes the proof of the theorem. \end{proof} In the following theorem, we prove existence and uniqueness of solution of a certain Poisson's equation. This will be useful in proving the robustness result. \begin{theorem}\label{NearmonotPoisso} Suppose that Assumptions (A1) - (A4) hold. Let $v\in\Usm$ be a stable control such that \begin{equation}\label{ENearmonotPoisso1} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) > \inf_{x\in\Rd}\sE_x(c, v)\,. 
\end{equation} Then, there exists a unique pair $(V^v, \rho_v)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V^v(0) = 0$ and $\inf_{\Rd} V^v > -\infty$ and $\rho_v \leq \int_{\Rd}\int_{\Act} c(x,\zeta)v(x)(\D \zeta)\eta_{v}(\D x)$, satisfying \begin{equation}\label{EErgonearPoisso1A} \rho_v = \left[\sL_{v}V^v(x) + c(x, v(x))\right]\,. \end{equation} Moreover, we have \begin{itemize} \item[(i)]$\rho_v = \inf_{\Rd}\sE_x(c, v)$\,. \item[(ii)] for all $x\in \Rd$, \begin{equation}\label{EErgonearPoisso1B} V^v(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^v\left[\int_{0}^{\uuptau_{r}} \left( c(X_t, v(X_t)) - \rho_v\right)\D t\right]\,. \end{equation} \end{itemize} \end{theorem} \begin{proof} Since $c$ is bounded, we have $\left(\rho^{v} \,\df\,\right) \int_{\Rd}\int_{\Act} c(x,\zeta)v(x)(\D \zeta)\eta_{v}(\D x)\leq \inf_{\Rd}\sE_x(c, v) \leq \norm{c}_{\infty}$\,. Also, since $\liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) > \rho^{v}$ (see, \cref{ENearmonotPoisso1}), from \cite[Lemma~3.6.1]{ABG-book}, it follows that \begin{equation}\label{EErgonearPoisso1C} \inf_{\kappa(\rho^v)}\cJ_{\alpha}^{v}(x, c) = \inf_{\Rd}\cJ_{\alpha}^{v}(x, c) \leq \frac{\rho^{v}}{\alpha}\,, \end{equation} where $\kappa(\rho^v) \,\df\, \{x\in \Rd\mid \min_{\zeta\in\Act}c(x,\zeta) \leq \rho^v\}$ and $\cJ_{\alpha}^{v}(x, c)$ is the $\alpha$-discounted cost defined as in \cref{EDiscost}. It is known that $\cJ_{\alpha}^{v}(x, c)$ is a solution to the Poisson's equation (see, \cite[Lemma~A.3.7]{ABG-book}) \begin{equation}\label{EErgonearPoisso1D} \sL_{v}\cJ_{\alpha}^{v}(x, c) - \alpha \cJ_{\alpha}^{v}(x, c) = - c(x, v(x))\,. \end{equation} Since $\kappa(\rho^{v})$ is compact, for some $R_0>0$, we have $\kappa(\rho^v)\subset \sB_{R_{0}}$\,. Thus from \cite[Lemma~3.6.3]{ABG-book}, we deduce that for each $R> R_0$ there exist constants $\Tilde{C}_{2}(R), \Tilde{C}_{2}(R, p)$ depending only on $d$ and $R_0$ such that \begin{equation}\label{EErgonearPoisso1E} \osc_{\sB_{2R}} \cJ_{\alpha}^{v}(x, c) \leq \Tilde{C}_{2}(R)\left(1 + \alpha\inf_{\sB_{R_0}}\cJ_{\alpha}^{v}(x, c) \right)\,, \end{equation} \begin{equation}\label{EErgonearPoisso1F} \norm{\cJ_{\alpha}^{v}(\cdot, c) - \cJ_{\alpha}^{v}(0, c)}_{\Sob^{2,p}(\sB_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + \alpha\inf_{\sB_{R_0}}\cJ_{\alpha}^{v}(x, c) \right)\,. \end{equation} Thus, arguing as in \cite[Lemma~3.6.6]{ABG-book}, we deduce that there exists $(V^{v}, \Tilde{\rho}_v)\in \Sobl^{2,p}(\Rd)\times \RR$ such that as $\alpha\to 0$, $\cJ_{\alpha}^{v}(\cdot, c) - \cJ_{\alpha}^{v}(0, c) \to V^{v}(\cdot)$ and $\alpha\cJ_{\alpha}^{v}(0, c) \to \Tilde{\rho}_{v}$, and the pair $(V^{v}, \Tilde{\rho}_v)$ satisfies \begin{equation}\label{EErgonearPoisso1G} \sL_{v}V^{v}(x) + c(x, v(x)) = \Tilde{\rho}_{v}\,. \end{equation} By \cref{EErgonearPoisso1C}, we get $\Tilde{\rho}_{v} \leq \rho^{v}$. Now, in view of estimates \cref{EErgonearPoisso1C} and \cref{EErgonearPoisso1F}, it is easy to see that \begin{equation}\label{EErgonearPoisso1H} \norm{V^{v}}_{\Sob^{2,p}(\sB_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + M \right)\,. \end{equation} Also, arguing as in Theorem~\ref{ErgoContnuity} (see \cref{EErgoContnuity1IA}), from estimate \cref{EErgonearPoisso1E} it follows that \begin{equation}\label{EErgonearPoisso1I} V^{v}\geq -\Tilde{C}_{2}(R_0) \left(1 + M \right)\,.
\end{equation} Now, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1G}, we obtain \begin{align*} \Exp_x^{v}\left[V^{v}\left(X_{T\wedge \uptau_{R}}\right)\right] - V^v(x)\,=\, \Exp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \D{t}\right]\,. \end{align*} This implies \begin{align*} \inf_{y\in\Rd}V^{v}(y) - V^v(x)\,\leq\, \Exp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \D{t}\right]\,. \end{align*}Since $v$ is stable, letting $R\to \infty$, we get \begin{align*} \inf_{y\in\Rd}V^{v}(y) - V^v(x)\,\leq\, \Exp_x^{v}\left[\int_0^{T} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \D{t}\right]\,. \end{align*}Now dividing both sides of the above inequality by $T$ and letting $T\to \infty$, it follows that \begin{align*} \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \D{t}\right] \,\leq\, \Tilde{\rho}_{v}\,. \end{align*} Thus, $\rho^v \leq \Tilde{\rho}_{v}$, which together with the reverse inequality $\Tilde{\rho}_{v} \leq \rho^{v}$ obtained above implies that $\rho^v = \Tilde{\rho}_{v}$\,. The representation \cref{EErgonearPoisso1B} of $V^v$ follows by closely mimicking the argument of \cite[Lemma~3.6.9]{ABG-book}. Therefore, we have a solution pair $(V^v, \rho_v)$ to \cref{EErgonearPoisso1A} satisfying (i) and (ii). Next we want to prove that the solution pair is unique. To this end, let $(\hat{V}^v, \hat{\rho}_v)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $\hat{V}^v(0) = 0$ and $\inf_{\Rd} \hat{V}^v > -\infty$ and $\hat{\rho}_v \leq \int_{\Rd}\int_{\Act} c(x,\zeta)v(x)(\D \zeta)\eta_{v}(\D x)$, satisfying \begin{equation}\label{EErgonearPoisso1J} \hat{\rho}_v = \left[\sL_{v}\hat{V}^v(x) + c(x, v(x))\right]\,. \end{equation} Applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1J}, we obtain \begin{align}\label{EErgonearPoisso1L} \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \D{t}\right] \,\leq\, \hat{\rho}_{v}\,. \end{align} Since $\hat{\rho}_v \leq \inf_{\Rd}\sE_x(c, v)$, from \cref{EErgonearPoisso1L} we obtain $\hat{\rho}_{v} = \rho_{v}$\,. Now, from \cref{EErgonearPoisso1J}, applying the It$\hat{\rm o}$-Krylov formula, we deduce that \begin{align}\label{EErgonearPoisso1N} \hat{V}^v(x)\,=\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \D{t} + \hat{V}^{v}\left(X_{\uuptau_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Since $v$ is stable and $\hat{V}^v$ is bounded from below, for all $x\in \Rd$ we have \begin{equation*} \liminf_{R\to\infty}\Exp_x^{v}\left[\hat{V}^{v}\left(X_{\uptau_{R}}\right)\Ind_{\{\uuptau_{r}\geq \uptau_{R}\}}\right]\geq 0\,. \end{equation*}Hence, letting $R\to\infty$ and using Fatou's lemma in \cref{EErgonearPoisso1N}, it follows that \begin{align*} \hat{V}^v(x)&\,\geq\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \D{t} +\hat{V}^{v}\left(X_{\uuptau_{r}}\right)\right]\nonumber\\ &\,\geq\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \D{t}\right] +\inf_{\sB_r}\hat{V}^{v}\,. \end{align*}Since $\hat{V}^{v}(0) =0$, letting $r\to 0$, we obtain \begin{align}\label{EErgonearPoisso1o} \hat{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \D{t} \right]\,. \end{align} From \cref{EErgonearPoisso1B} and \cref{EErgonearPoisso1o}, it is easy to see that $V^v - \hat{V}^v \leq 0$ in $\Rd$.
On the other hand, by \cref{EErgonearPoisso1A} and \cref{EErgonearPoisso1J} one has $\sL_{v}\left(V^v - \hat{V}^v\right)(x) = 0$ in $\Rd$. Moreover, $V^v - \hat{V}^v \leq 0$ in $\Rd$ and $(V^v - \hat{V}^v)(0) = 0$, so that $V^v - \hat{V}^v$ attains its maximum at an interior point. Hence, applying the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $V^v = \hat{V}^v$. This proves uniqueness. \end{proof} Next we prove the robustness result, i.e., we prove that $\sE_{x}(c, v_{n}^*)\to \rho$ as $n\to \infty$, where $v_{n}^*$ is an optimal ergodic control of the approximated model (see, Theorem~\ref{ergodicnearmono2})\,. In order to establish this result we will assume that $\widehat{\Theta}\df \{\eta_{v_n^*}: n\in \NN\}$ is tight, where $\eta_{v_n^*}$ is the unique invariant measure of \cref{E1.1} corresponding to $v_n^*$\,. \begin{theorem}\label{ErgodNearmonoRobu1} Suppose that Assumptions (A1) - (A6) hold. Also, assume that \begin{equation}\label{ENearmonotApro1} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) > \inf_{x\in\Rd}\sup_{n\in\NN}\sE_x(c, v_n^*)\,, \end{equation} and the sets $\widehat{\Theta}$ and $\Theta$ are tight. Then, we have \begin{equation}\label{EErgoRobust1} \lim_{n\to\infty} \inf_{x\in\Rd}\sE_{x}(c, v_{n}^*) = \sE^{*}(c)\,. \end{equation} \end{theorem} \begin{proof} We shall follow a proof program similar to that of Theorem~\ref{TC1.4} under the discounted setup. Since $c$ is bounded, we have $\left(\rho_{v_{n}^*} \,\df\,\right) \inf_{x\in \Rd}\sE_{x}(c, v_{n}^*) \leq \norm{c}_{\infty}$\,. From our assumption \cref{ENearmonotApro1}, we know that $\liminf_{\norm{x}\to\infty}\inf_{\zeta\in \Act} c(x,\zeta) > \rho_{v_{n}^*}$\,. Hence, from Theorem~\ref{NearmonotPoisso}, there exists a unique pair $(V^{v_{n}^*}, \rho_{v_{n}^*})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V^{v_{n}^*}(0) = 0$ and $\inf_{\Rd} V^{v_{n}^*} > -\infty$, satisfying \begin{equation}\label{ErgodNearmonoRobu1A} \rho_{v_{n}^*} = \left[\sL_{v_{n}^*}V^{v_{n}^*}(x) + c(x, {v_{n}^*}(x))\right], \end{equation} with $\rho_{v_{n}^*} = \inf_{x\in \Rd}\sE_{x}(c, v_{n}^*) = \int_{\Rd}\int_{\Act} c(x,\zeta)v_{n}^*(x)(\D \zeta)\eta_{v_{n}^*}(\D x)$\,. Moreover, in view of assumption \cref{ENearmonotApro1}, from \cref{EErgonearPoisso1H} and \cref{EErgonearPoisso1I}, we have \begin{equation}\label{ErgodNearmonoRobu1B} \norm{V^{v_{n}^*}}_{\Sob^{2,p}(\sB_R)}\leq \kappa_1\quad\text{and}\quad V^{v_{n}^*}(x)\geq - \kappa_2\,\,\, \text{for all}\,\,\, x\in \Rd\,, \end{equation} where $\kappa_1, \kappa_2$ are constants independent of $n\in\NN$\,. Thus, by the Banach-Alaoglu theorem and standard diagonalization argument (as in \cref{ETC1.3BC}), we deduce that there exists $\hat{V}\in \Sobl^{2,p}(\Rd)$ such that along a sub-sequence \begin{equation}\label{ErgodNearmonoRobu1C} \begin{cases} V^{v_{n_k}^*}\to & \hat{V}\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V^{v_{n_k}^*}\to & \hat{V}\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd)\quad\text{(strongly)}\,. \end{cases} \end{equation} Again, since $\rho_{v_{n}^*} \leq M$, along a further sub-sequence (without loss of generality denoting by same sequence), we have $\rho_{v_{n_k}^*}\to \hat{\rho}$ as $k\to \infty$\,. Since $\Usm$ is compact, along a further subsequence (without loss of generality denoting by same sequence) we have $v_{n_k}^* \to \hat{v}^*$ in $\Usm$ as $k\to\infty$\,.
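To make the limiting argument explicit (this is the same weak formulation used in the proofs of Theorems~\ref{TC1.3} and \ref{TC1.4}), testing \cref{ErgodNearmonoRobu1A} (with $n = n_k$) against an arbitrary $\phi\in \cC_{c}^{\infty}(\Rd)$ gives
\begin{equation*}
\int_{\Rd}\trace\bigl(a(x)\grad^2 V^{v_{n_k}^*}(x)\bigr)\phi(x)\D x + \int_{\Rd}\bigl\{b(x,v_{n_k}^*(x))\cdot \grad V^{v_{n_k}^*}(x) + c(x, v_{n_k}^*(x))\bigr\}\phi(x)\D x \,=\, \int_{\Rd} \rho_{v_{n_k}^*}\,\phi(x)\D x\,.
\end{equation*}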
Now, as before, multiplying \cref{ErgodNearmonoRobu1A} by a test function and letting $k\to\infty$, using \cref{ErgodNearmonoRobu1C} and the convergence $v_{n_k}^* \to \hat{v}^*$ in $\Usm$, we deduce that the pair $(\hat{V}, \hat{\rho})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, satisfies \begin{equation}\label{ErgodNearmonoRobu1D} \hat{\rho} = \left[\sL_{\hat{v}^*}\hat{V}(x) + c(x, {\hat{v}^*}(x))\right]\,. \end{equation} Since $V^{v_{n_k}^*}(0) = 0$ for all $k\in \NN$, it is easy to see that $\hat{V}(0) = 0$\,. Also, by \cref{ErgodNearmonoRobu1B}, it follows that $\inf_{\Rd} \hat{V} > -\infty$\,. Hence, using \cref{ENearmonotApro1} and \cref{ErgodNearmonoRobu1D}, we deduce that $\hat{v}^*\in \Usm$ is stable. Since $\widehat{\Theta}$ is tight, in view of \cite[Lemma~3.2.6]{ABG-book}, it is easy to see that $\hat{\rho} = \int_{\Rd}\int_{\Act} c(x,\zeta)\hat{v}^*(x)(\D \zeta)\eta_{\hat{v}^*}(\D x)$\,. Thus, by Theorem~\ref{NearmonotPoisso}, we deduce that $(\hat{V}, \hat{\rho})\equiv (V^{\hat{v}^*}, \rho_{\hat{v}^*})$\,. Note that \begin{equation*} |\rho_{v_{n_k}^*} - \rho| \leq |\rho_{v_{n_k}^*} - \rho_{n_k}| + |\rho_{n_k} - \rho|\,. \end{equation*} Since $|\rho_{n_k} - \rho| \to 0$ as $k\to\infty$ (see, Theorem~\ref{ErgoContnuity}), to complete the proof we have to show that $|\rho_{v_{n_k}^*} - \rho_{n_k}|\to 0$ as $k\to \infty$\,. From Theorem~\ref{ergodicnearmono2}, we know that the pair $(V_{n_k}, \rho_{n_k})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, with $V_{n_k}(0) = 0$, satisfies \begin{equation}\label{ErgodNearmonoRobu1E} \rho_{n_k} = \min_{\zeta \in\Act}\left[\sL_{\zeta}^{n_k}V_{n_k}(x) + c_{n_k}(x, \zeta)\right]\,. \end{equation} For any minimizing selector $v_{n_k}^*\in \Usm$, rewriting \cref{ErgodNearmonoRobu1E}, we get \begin{equation}\label{ErgodNearmonoRobu1F} \rho_{n_k} = \left[\sL_{v_{n_k}^*}^{n_k}V_{n_k}(x) + c_{n_k}(x, v_{n_k}^*(x))\right]\,. \end{equation} Now, in view of estimates \cref{EErgoContnuity1F} and \cref{EErgoContnuity1J}, it follows that \begin{equation}\label{ErgodNearmonoRobu1G} \norm{V_{n_k}}_{\Sob^{2,p}(\sB_R)}\leq \kappa_3\quad\text{and}\quad V_{n_k}(x)\geq - \kappa_4\,\,\, \text{for all}\,\,\, x\in \Rd\,, \end{equation} where $\kappa_3, \kappa_4$ are constants independent of $k\in\NN$\,. Hence, by the Banach-Alaoglu theorem and standard diagonalization argument (see \cref{ETC1.3BC}), there exists $\bar{V}\in \Sobl^{2,p}(\Rd)$ such that along a sub-sequence \begin{equation}\label{ErgodNearmonoRobu1H} \begin{cases} V_{n_k}\to & \bar{V}\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V_{n_k}\to & \bar{V}\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd) \quad\text{(strongly)}\,. \end{cases} \end{equation}Also, $\rho_{n_k} \leq M$ implies that along a further subsequence (without loss of generality denoting by same sequence) $\rho_{n_k} \to \bar{\rho}$. Since $v_{n_k}^* \to \hat{v}^*$ in $\Usm$, multiplying by test functions and letting $k\to\infty$ from \cref{ErgodNearmonoRobu1F}, we obtain that the pair $(\bar{V}, \bar{\rho})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, satisfies \begin{equation}\label{ErgodNearmonoRobu1I} \bar{\rho} = \left[\sL_{\hat{v}^*}\bar{V}(x) + c(x, \hat{v}^*(x))\right]\,. \end{equation} From \cref{ErgodNearmonoRobu1G}, it is easy to see that $\inf_{\Rd}\bar{V} > -\infty$. Also, since $V_{n_k}(0) = 0$ for all $k\in\NN$, we have $\bar{V}(0) = 0$\,. Since $\Theta$ is tight, arguing as in the proof of Theorem~\ref{ErgoContnuity}, we deduce that $\bar{\rho} = \int_{\Rd}\int_{\Act} c(x,\zeta)\hat{v}^*(x)(\D \zeta)\eta_{\hat{v}^*}(\D x)$\,.
Thus, by uniqueness of solution of \cref{ErgodNearmonoRobu1I} (see, Theorem~\ref{NearmonotPoisso}) it follows that $(\bar{V}, \bar{\rho})\equiv (V^{\hat{v}^*}, \rho_{\hat{v}^*})$\,. Since both $\rho_{v_{n_k}^*}$ and $\rho_{n_k}$ converge to the same limit $\rho_{\hat{v}^*}$, we deduce that $|\rho_{v_{n_k}^*} - \rho_{n_k}|\to 0$ as $k\to \infty$\,. This completes the proof of the theorem. \end{proof} \subsection{Analysis under Lyapunov stability}\label{Lyapunov stability} In this section we study the robustness problem for the ergodic cost criterion under a Lyapunov stability assumption. We assume the following Foster-Lyapunov condition on the dynamics. \begin{itemize} \item[\hypertarget{A7}{{(A7)}}] \begin{itemize} \item[(i)] There exist a positive constant $\widehat{C}_0$ and a pair of inf-compact functions $(\Lyap, h)\in \cC^{2}(\Rd)\times\cC(\Rd\times\Act)$ (i.e., the sub-level sets $\{\Lyap\leq k\} \,,\{h\leq k\}$ are compact or empty subsets of $\Rd$ and $\Rd\times\Act$, respectively, for each $k\in\RR$) such that \begin{equation}\label{Lyap1} \sL_{\zeta}\Lyap(x) \leq \widehat{C}_{0} - h(x,\zeta)\quad \text{for all}\,\,\, (x,\zeta)\in \Rd\times \Act\,, \end{equation} where $h$ is locally Lipschitz continuous in its first argument uniformly with respect to the second. \item[(ii)] For each $n\in\NN$\,, we have \begin{equation}\label{Lyap2} \sL_{\zeta}^{n}\Lyap(x) \leq \widehat{C}_{0} - h(x,\zeta)\quad \text{for all}\,\,\, (x,\zeta)\in \Rd\times \Act\,, \end{equation} where the functions $\Lyap, h$ are as in \cref{Lyap1}\,. \end{itemize} \end{itemize} Combining \cite[Theorem~3.7.11]{ABG-book} and \cite[Theorem~3.7.12]{ABG-book}, we have the following complete characterization of the ergodic optimal control. \begin{theorem}\label{TErgoOpt1} Suppose that Assumptions (A1)-(A4) and (A7)(i) hold. Then the ergodic HJB equation \begin{equation}\label{EErgoOpt1A} \rho = \min_{\zeta \in\Act}\left[\sL_{\zeta}V^*(x) + c(x, \zeta)\right] \end{equation} admits a unique solution $(V^*, \rho)\in \cC^2(\Rd)\cap \sorder(\Lyap)\times \RR$ satisfying $V^*(0) = 0$\,. Moreover, we have \begin{itemize} \item[(i)]$\rho = \sE^*(c)$ \item[(ii)] a stationary Markov control $v^*\in \Usm$ is an optimal control (i.e., $\sE_x(c, v^*) = \sE^*(c)$) if and only if it satisfies \begin{equation}\label{EErgoOpt1B} \min_{\zeta \in\Act}\left[\sL_{\zeta}V^*(x) + c(x, \zeta)\right] \,=\, \trace\bigl(a(x)\grad^2 V^*(x)\bigr) + b(x,v^*(x))\cdot \grad V^*(x) + c(x, v^*(x))\,,\quad \text{a.e.}\,\, x\in\Rd\,. \end{equation} \item[(iii)] for any $v^*\in \Usm$ satisfying \cref{EErgoOpt1B}, we have \begin{equation}\label{EErgoOpt1C} V^*(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^{v^*}\left[\int_{0}^{\uuptau_{r}} \left( c(X_t, v^*(X_t)) - \sE^*(c)\right)\D t\right] \quad\text{for all}\,\,\, x\in \Rd\,. \end{equation} \end{itemize} \end{theorem} Again, from \cite[Theorem~3.7.11]{ABG-book} and \cite[Theorem~3.7.12]{ABG-book}, for the approximated model, for each $n\in\NN$, we have the following complete characterization of the optimal control. \begin{theorem}\label{TErgoOptApprox1} Suppose that Assumptions (A5) and (A7)(ii) hold. Then the ergodic HJB equation \begin{equation}\label{TErgoOptApprox1A} \rho_n = \min_{\zeta \in\Act}\left[\sL_{\zeta}^nV^{n*}(x) + c_n(x, \zeta)\right] \end{equation} admits a unique solution $(V^{n*}, \rho_n)\in \cC^2(\Rd)\cap \sorder(\Lyap)\times \RR$ satisfying $V^{n*}(0) = 0$\,.
Moreover, we have \begin{itemize} \item[(i)]$\rho_n = \sE^{n*}(c_n)$ \item[(ii)] a stationary Markov control $v_n^*\in \Usm$ is an optimal control (i.e., $\sE_x^n(c_n, v_n^{*}) = \sE^{n*}(c_n)$) if and only if it satisfies \begin{equation}\label{TErgoOptApprox1B} \min_{\zeta \in\Act}\left[\sL_{\zeta}^n V^{n*}(x) + c_n(x, \zeta)\right] \,=\, \trace\bigl(a_n(x)\grad^2 V^{n*}(x)\bigr) + b_n(x,v_n^*(x))\cdot \grad V^{n*}(x) + c_n(x, v_n^*(x))\,,\quad \text{a.e.}\,\, x\in\Rd\,. \end{equation} \item[(iii)] for any $v_n^*\in \Usm$ satisfying \cref{TErgoOptApprox1B}, we have \begin{equation}\label{TErgoOptApprox1C} V^{n*}(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^{v_n^*}\left[\int_{0}^{\uuptau_{r}} \left( c_n(X_t, v_n^*(X_t)) - \sE^{n*}(c_n)\right)\D t\right] \quad\text{for all}\,\,\, x\in \Rd\,. \end{equation} \end{itemize} \end{theorem} From \cite[Lemma~3.7.8]{ABG-book}, it is easy to see that the functions $V^{n*}, V^{*}$ are bounded from below. Next we show that, under Assumption (A7), the optimal value of the approximated model converges to the optimal value of the true model as $n\to\infty$. \begin{theorem}\label{TErgoOptCont} Suppose that Assumptions (A1)-(A5) and (A7) hold. Then, it follows that \begin{equation}\label{ETErgoOptCont1} \lim_{n\to\infty} \sE^{n*}(c_n) = \sE^{*}(c)\,. \end{equation} \end{theorem} \begin{proof} Since $\|c_n\|_{\infty} \leq M$, we get $\sE^{n*}(c_n) \leq M$\,. Also, \cref{Lyap1} implies that every $v\in\Usm$ is stable and $\inf_{v\in\Usm}\eta_v(\sB_R) > 0$ for any $R>0$ (see, \cite[Lemma~3.3.4]{ABG-book} and \cite[Lemma~3.2.4(b)]{ABG-book}). Thus, from \cite[Theorem~3.7.6]{ABG-book}, there exist constants $\widehat{C}_1, \widehat{C}_2 >0$ depending only on the radius $R>0$ such that for all $\alpha >0$, we have \begin{equation}\label{ETErgoOptCont1A} \|V_\alpha^n(\cdot) - V_\alpha^n(0)\|_{\Sob^{2,p}(\sB_{R})} \leq \widehat{C}_1 \quad \text{and}\,\,\, \sup_{\sB_R}\alpha V_{\alpha}^{n} \leq \widehat{C}_2\,. \end{equation}By the standard vanishing discount argument (see \cite[Lemma~3.7.8]{ABG-book}), as $\alpha\to 0$ we have $V_\alpha^n(\cdot) - V_\alpha^n(0) \to V^{n*}$ and $\alpha V_\alpha^n(0) \to \rho_n$\,. Hence the estimates in \cref{ETErgoOptCont1A} give us $\|V^{n*}\|_{\Sob^{2,p}(\sB_{R})} \leq \widehat{C}_1$\,. Since the constant $\widehat{C}_1$ is independent of $n$, by standard diagonalization argument and the Banach-Alaoglu theorem, we can extract a subsequence $\{V^{n_k*}\}$ such that for some $\widehat{V}^*\in \Sobl^{2,p}(\Rd)$ (as in \cref{ETC1.3BC}) \begin{equation}\label{ETErgoOptCont1B} \begin{cases} V^{n_k*}\to & \widehat{V}^*\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V^{n_k*}\to & \widehat{V}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd) \quad\text{(strongly)}\,. \end{cases} \end{equation} Also, since $\rho_n \leq M$, along a further sub-sequence (without loss of generality denoting by same sequence) we have $\rho_{n_k}\to \widehat{\rho}$ as $k\to \infty$\,. Now multiplying both sides of the equation \cref{TErgoOptApprox1A} by test functions $\phi$, we obtain \begin{equation*} \int_{\Rd}\trace\bigl(a_{n_k}(x)\grad^2 V^{n_k*}(x)\bigr)\phi(x)\D x + \int_{\Rd} \min_{\zeta\in \Act} \{b_{n_k}(x,\zeta)\cdot \grad V^{n_k*}(x) + c_{n_k}(x, \zeta)\}\phi(x)\D x = \int_{\Rd} \rho_{n_k}\phi(x)\D x\,.
\end{equation*} As in Theorem~\ref{TC1.4}, using \cref{ETErgoOptCont1B} and letting $k\to\infty$ it follows that $\widehat{V}^*\in \Sobl^{2,p}(\Rd)$ satisfies \begin{equation}\label{TErgoOptCont1C} \widehat{\rho} = \min_{\zeta \in\Act}\left[\sL_{\zeta}\widehat{V}^*(x) + c(x, \zeta)\right]\,. \end{equation}Rewriting the equation \cref{TErgoOptCont1C}, we have \begin{equation*} \trace\bigl(a(x)\grad^2 \widehat{V}^*(x)\bigr) = f(x)\,,\quad \text{a.e.}\,\, x\in\Rd\,, \end{equation*} where \begin{equation*} f(x) = - \inf_{\zeta\in \Act}\left[b(x,\zeta)\cdot \grad \widehat{V}^*(x) + c(x, \zeta) - \widehat{\rho} \right]\,. \end{equation*} In view of \cref{ETErgoOptCont1B} and assumptions (A1) and (A2), it is easy to see that $f\in \cC^{0, \beta}_{loc}(\Rd)$ where $0< \beta < 1 - \frac{d}{p}$\,. Thus, by elliptic regularity \cite[Theorem~3]{CL89} (also see, \cite[Theorem~9.19]{GilTru}), we obtain $\widehat{V}^*\in \cC^{2}(\Rd)$\,. Next we want to show that $\widehat{V}^*\in \sorder{(\Lyap)}$. Since $\sup_{n}\|c_n\| \leq M$ we have $1 + \tilde{c} \in \sorder{(h)}$, where $\tilde{c} \,\df\, \sup_n c_n$\,. Also, since $h$ is inf-compact for a large enough $r > 0$ we have $\displaystyle{\widehat{C}_0 - \inf_{\zeta\in\Act} h(x, \zeta) \leq - \epsilon}$ for all $x\in \sB_r^c$\,. Let $X_t^n$ be the solution of \cref{ASE1.1} corresponding to $v\in \Usm$\,. Hence, in view of \cref{Lyap2}, by It$\hat{\rm o}$-Krylov formula, for any $v\in \Usm$ and $x\in \sB_r^c\cap\sB_R$ we deduce that \begin{equation*} \Exp_{x}^{v}\left[\Lyap(X_{\uuptau_{r}^n \wedge \uptau_{R}^n}^n)\right] - \Lyap(x) = \Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}^n \wedge \uptau_{R}^n} \sL_{v}^n \Lyap(X_s^n) \D s\right] \leq -\epsilon \Exp_{x}^{v}\left[\uuptau_{r}^n \wedge \uptau_{R}^n\right]\,, \end{equation*} where $\uuptau_{r}^n \df \inf\{t\geq 0: X_t^n\in \sB_r\}$ and $\uptau_{R}^n \df \inf \{t\geq 0: X_t^n\in \sB_R^c\}$\,. Letting $R\to \infty$, by Fatou's lemma we obtain \begin{equation}\label{TErgoOptCont1D} \Exp_{x}^{v}\left[\uuptau_{r}^n\right] \leq \frac{1}{\epsilon} \Lyap(x)\quad \text{for all}\,\,\, x\in \sB_r^c\,\,\,\text{and}\,\,\, n\in\NN\,. \end{equation} Again, by It$\hat{\rm o}$-Krylov formula, for any $v\in \Usm$ and $x\in \sB_r^c\cap\sB_R$ we have \begin{equation*} \Exp_{x}^{v}\left[\Lyap(X_{\uuptau_{r}^n \wedge \uptau_{R}^n}^n)\right] - \Lyap(x) = \Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}^n \wedge \uptau_{R}^n} \sL_{v}^n \Lyap(X_s^n) \D s\right] \leq \Exp_{x}^{v}\left[\int_0^{\uuptau_{r}^n \wedge \uptau_{R}^n} (\widehat{C}_0 - h(X_s^n, v(X_s^n)))\D s \right]\,, \end{equation*} Thus, by Fatou's lemma letting $R\to\infty$ and using \cref{TErgoOptCont1D} we get \begin{equation*} \sup_{n\in\NN}\sup_{v\in\Usm}\Exp_{x}^{v}\left[\int_0^{\uuptau_{r}^n} h(X_s^n, v(X_s^n)) \D s\right] \leq \widehat{M}_1 \Lyap(x)\,, \end{equation*} for some positive constant $\widehat{M}_1$\,. Hence, by arguing as in the proof of \cite[Lemma~3.7.2 (i)]{ABG-book}, we have \begin{equation}\label{TErgoOptCont1E} \sup_{n\in \NN}\sup_{v\in\Usm} \Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}^n}(1 + \tilde{c}(X_s^n, v(X_s^n)))\D s\right]\in \sorder{(\Lyap)}\,. \end{equation} Now, following the proof of \cite[Lemma~3.7.8]{ABG-book} (see, eq.(3.7.47)), it follows that \begin{equation}\label{TErgoOptCont1F} V^{n*}(x) \,\leq\, \sup_{v\in\Usm}\Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}^n} \left( c_n(X_t^n, v(X_t^n)) - \sE^{n*}(c_n)\right)\D t + V^{n*}(X_{\uuptau_{r}})\right]\,. 
\end{equation} We know that for $p \geq d+1$ the space $\Sob^{2,p}(\sB_{R})$ is compactly embedded in $\cC^{1, \beta}(\bar{\sB}_R)$, where $0< \beta < 1 - \frac{d}{p}$\,. Since $\|V^{n*}\|_{\Sob^{2,p}(\sB_{R})} \leq \widehat{C}_1$ for some positive constant $\widehat{C}_1$ which depends only on $R$, we deduce that $\sup_{n\in\NN}\sup_{\sB_r}|V^{n*}| \leq \widehat{M}_2$, where $\widehat{M}_2>0$ is a constant. Also, since $\sE^{n*}(c_n) \leq \|c_n\|_{\infty} \leq M$, from \cref{TErgoOptCont1F} it is easy to see that \begin{equation}\label{TErgoOptCont1H} |V^{n*}(x)| \,\leq\, M\sup_{n\in\NN}\sup_{v\in\Usm}\Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}^n} \left( \tilde{c}(X_t^n, v(X_t^n)) + 1\right)\D t + \sup_{n\in\NN}\sup_{\sB_r}|V^{n*}|\right]\,. \end{equation} Therefore, by combining \cref{ETErgoOptCont1B}, \cref{TErgoOptCont1E} and \cref{TErgoOptCont1H}, we obtain $\widehat{V}^*\in \sorder{(\Lyap)}$\,. Since $(\widehat{V}^*, \widehat{\rho})\in \cC^2(\Rd)\cap \sorder(\Lyap)\times \RR$ satisfies \cref{EErgoOpt1A} with $\widehat{V}^*(0) = 0$, by the uniqueness result of Theorem~\ref{TErgoOpt1}, we deduce that $(\widehat{V}^*, \widehat{\rho}) \equiv (V^*, \rho)$. This completes the proof of the theorem. \end{proof} The next theorem proves the existence of a unique solution to a certain equation in a suitable function space. This result will be very useful in establishing our robustness result. \begin{theorem}\label{TErgoExisPoiss1} Suppose that Assumptions (A1)-(A4) and (A7)(i) hold. Then for each $v\in \Usm$ there exists a unique solution pair $(V^v, \rho^{v})\in \Sobl^{2,p}(\Rd)\cap \sorder(\Lyap)\times \RR$ for any $p >1$ satisfying \begin{equation}\label{TErgoExisPoiss1A} \rho^{v} = \sL_{v}V^v(x) + c(x, v(x))\quad\text{with}\quad V^v(0) = 0\,. \end{equation} Furthermore, we have \begin{itemize} \item[(i)]$\rho^{v} = \sE_x(c, v)$ \item[(ii)] for all $x\in\Rd$, we have \begin{equation}\label{TErgoExisPoiss1B} V^v(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}} \left( c(X_t, v(X_t)) - \sE_x(c, v)\right)\D t\right]\,. \end{equation} \end{itemize} \end{theorem} \begin{proof} Existence of a solution pair $(V^v, \rho^{v})\in \Sobl^{2,p}(\Rd)\cap \sorder(\Lyap)\times \RR$ for any $p >1$ satisfying (i) and (ii) follows from \cite[Lemma~3.7.8]{ABG-book}\,. Now we want to prove the uniqueness of the solutions of \cref{TErgoExisPoiss1A}. Let $(\bar{V}^v, \bar{\rho}^{v})\in \Sobl^{2,p}(\Rd)\cap \sorder(\Lyap)\times \RR$ for any $p >1$ be any other solution pair of \cref{TErgoExisPoiss1A} with $\bar{V}^v(0) = 0$. By the It$\hat{\rm o}$-Krylov formula, for $R>0$ we obtain \begin{align}\label{TErgoExisPoiss1C} \Exp_{x}^{v}\left[\bar{V}^v(X_{T\wedge\uptau_{R}})\right] - \bar{V}^v(x) &= \Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \sL_{v} \bar{V}^v(X_s) \D s\right]\nonumber\\ & = \Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\D s \right]\,. \end{align} Note that \begin{equation*} \int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\D s = \int_{0}^{T\wedge\uptau_{R}} \bar{\rho}^{v}\D s - \int_{0}^{T\wedge\uptau_{R}}c(X_s, v(X_s))\D s\,. \end{equation*} Thus, letting $R\to \infty$ and using the monotone convergence theorem, we get \begin{equation*} \lim_{R\to\infty}\Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\D s \right] = \Exp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\D s \right]\,.
\end{equation*} Since $\bar{V}^v \in \sorder{(\Lyap)}$, in view of \cite[Lemma~3.7.2 (ii)]{ABG-book}, letting $R\to\infty$, we deduce that \begin{align}\label{TErgoExisPoiss1D} \Exp_{x}^{v}\left[\bar{V}^v(X_{T})\right] - \bar{V}^v(x) = \Exp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\D s \right]\,. \end{align} Also, from \cite[Lemma~3.7.2 (ii)]{ABG-book}, we have \begin{equation*} \lim_{T\to\infty}\frac{\Exp_{x}^{v}\left[\bar{V}^v(X_{T})\right]}{T} = 0\,. \end{equation*} Now, dividing both sides of \cref{TErgoExisPoiss1D} by $T$ and letting $T\to\infty$, we obtain \begin{align*} \bar{\rho}^{v} = \limsup_{T\to \infty}\frac{1}{T}\Exp_{x}^{v}\left[\int_{0}^{T} \left(c(X_s, v(X_s))\right)\D s \right]\,. \end{align*}This implies that $\bar{\rho}^{v} = \rho^{v}$\,. Using the fact that $(\bar{V}^v, \bar{\rho}^{v})$ satisfies \cref{TErgoExisPoiss1A}, by the It$\hat{\rm o}$-Krylov formula we have \begin{align}\label{TErgoExisPoiss1E} \bar{V}^v(x)\,=\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \D{t} + \bar{V}^{v}\left(X_{\uuptau_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Also, by the It$\hat{\rm o}$-Krylov formula and using \cref{Lyap1}, it follows that \begin{equation*} \Exp_x^{v}\left[\Lyap\left(X_{\uptau_{R}}\right)\Ind_{\{\uuptau_{r}\geq \uptau_{R}\}}\right]\leq \widehat{C}_0 \Exp_x^{v}\left[\uuptau_{r}\right] + \Lyap(x)\quad \text{for all} \,\,\, r <|x|<R\,. \end{equation*} Since $\bar{V}^v \in \sorder(\Lyap)$, from the above estimate, we get \begin{equation*} \liminf_{R\to\infty}\Exp_x^{v}\left[\bar{V}^{v}\left(X_{\uptau_{R}}\right)\Ind_{\{\uuptau_{r}\geq \uptau_{R}\}}\right] = 0\,. \end{equation*}Thus, letting $R\to\infty$ and using Fatou's lemma in \cref{TErgoExisPoiss1E}, it follows that \begin{align*} \bar{V}^v(x)&\,\geq\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \D{t} +\bar{V}^{v}\left(X_{\uuptau_{r}}\right)\right]\nonumber\\ &\,\geq\, \Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \D{t}\right] +\inf_{\sB_r}\bar{V}^{v}\,. \end{align*}Since $\bar{V}^{v}(0) =0$, letting $r\to 0$, we deduce that \begin{align}\label{TErgoExisPoiss1F} \bar{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\Exp_x^{v}\left[\int_0^{\uuptau_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \D{t} \right]\,. \end{align} Since $\rho^{v} = \bar{\rho}^{v}$, from \cref{TErgoExisPoiss1B} and \cref{TErgoExisPoiss1F}, it is easy to see that $V^{v} - \bar{V}^{v} \leq 0$ in $\Rd$. Also, since $(V^v, \rho^v)$ and $(\bar{V}^v, \bar{\rho}^v)$ are two solution pairs of \cref{TErgoExisPoiss1A}, we have $\sL_{v}\left(V^{v} - \bar{V}^{v}\right)(x) = 0$ in $\Rd$. Moreover, $(V^{v} - \bar{V}^{v})(0) = 0$, so that $V^{v} - \bar{V}^{v}$ attains its maximum at an interior point. Hence, by the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $V^v = \bar{V}^v$. This proves the uniqueness\,. \end{proof} Now we are ready to prove the robustness result, i.e., we want to show that $\sE_{x}(c, v_{n}^*)\to \sE^*(c)$ as $n\to \infty$, where $v_{n}^*$ is an optimal ergodic control of the approximated model (see, Theorem~\ref{TErgoOptApprox1})\,. \begin{theorem}\label{ErgodLyapRobu1} Suppose that Assumptions (A1) - (A5) and (A7) hold. Then, we have \begin{equation}\label{ErgodLyapRobu1A} \lim_{n\to\infty} \inf_{x\in\Rd}\sE_{x}(c, v_{n}^*) = \sE^{*}(c)\,. \end{equation} \end{theorem} \begin{proof} We shall follow a proof program similar to that of Theorem~\ref{TC1.4} under the discounted setup.
From Theorem~\ref{TErgoExisPoiss1}, we know that for each $n\in \NN$ there exists a unique pair $(V^{v_{n}^*}, \rho^{v_{n}^*})\in \Sobl^{2,p}(\Rd)\cap\sorder{(\Lyap)}\times \RR$, \, $1< p < \infty$, with $V^{v_{n}^*}(0) = 0$ satisfying \begin{equation}\label{ErgodLyapRobu1B} \rho^{v_{n}^*} = \left[\sL_{v_{n}^*}V^{v_{n}^*}(x) + c(x, {v_{n}^*}(x))\right]\,. \end{equation} In view of \cref{Lyap1}, it is easy to see that every $v\in\Usm$ is stable and $\inf_{v\in\Usm}\eta_v(\sB_R) > 0$ for any $R>0$ (see, \cite[Lemma~3.3.4]{ABG-book} and \cite[Lemma~3.2.4(b)]{ABG-book}). Thus, from \cite[Theorem~3.7.4]{ABG-book}, it follows that $\norm{V^{v_{n}^*}}_{\Sob^{2,p}(\sB_R)}\leq \hat{\kappa}_1$, where $\hat{\kappa}_1$ is a constant independent of $n\in\NN$\,. Therefore, by the Banach-Alaoglu theorem and standard diagonalization argument (as in \cref{ETC1.3BC}), we deduce that there exists $\tilde{V}\in \Sobl^{2,p}(\Rd)$ such that along a sub-sequence \begin{equation}\label{ErgodLyapRobu1C} \begin{cases} V^{v_{n_k}^*}\to & \tilde{V}\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V^{v_{n_k}^*}\to & \tilde{V}\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd)\quad\text{(strongly)}\,. \end{cases} \end{equation} Again, since $\rho^{v_{n}^*} \leq M$, along a further sub-sequence (without loss of generality denoting by same sequence), we have $\rho^{v_{n_k}^*}\to \tilde{\rho}$ as $k\to \infty$\,. Since $\Usm$ is compact, along a further subsequence (without loss of generality denoting by same sequence) we have $v_{n_k}^* \to \tilde{v}^*$ as $k\to\infty$\,. Now, as in Theorem~\ref{TC1.4}, multiplying by a test function and letting $k\to\infty$, from \cref{ErgodLyapRobu1B}, it is easy to see that $(\tilde{V}, \tilde{\rho})\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, satisfies \begin{equation}\label{ErgodLyapRobu1D} \tilde{\rho} = \left[\sL_{\tilde{v}^*}\tilde{V}(x) + c(x, {\tilde{v}^*}(x))\right]\,. \end{equation} As we know that $V^{v_{n_k}^*}(0) = 0$ for all $k\in \NN$, we deduce that $\tilde{V}(0) = 0$\,. Arguing as in Theorem~\ref{TErgoOptCont} and using the estimate $\norm{V^{v_{n}^*}}_{\Sob^{2,p}(\sB_R)}\leq \hat{\kappa}_1$, we have \begin{equation}\label{ErgodLyapRobu1E} |\tilde{V}(x)| \,\leq\, M\sup_{v\in\Usm}\Exp_{x}^{v}\left[\int_{0}^{\uuptau_{r}} \left( c(X_t, v(X_t)) + 1\right)\D t + \sup_{n\in\NN}\sup_{\sB_r}|V^{v_n^*}|\right] \in \sorder{(\Lyap)}\,. \end{equation} Thus, by uniqueness of solution of \cref{ErgodLyapRobu1D} (see, Theorem~\ref{TErgoExisPoiss1}), we deduce that $(\tilde{V}, \tilde{\rho})\equiv (V^{\tilde{v}^*}, \rho^{\tilde{v}^*})$\,. By the triangle inequality \begin{equation*} |\rho^{v_{n_k}^*} - \rho| \leq |\rho^{v_{n_k}^*} - \rho_{n_k}| + |\rho_{n_k} - \rho|\,. \end{equation*} From Theorem~\ref{TErgoOptCont} (together with Theorem~\ref{TErgoOptApprox1}\,(i)) we have $|\rho_{n_k} - \rho| \to 0$ as $k\to\infty$. Hence, to complete the proof we have to show that $|\rho^{v_{n_k}^*} - \rho_{n_k}|\to 0$ as $k\to \infty$\,. Now, for any minimizing selector $v_{n_k}^*\in \Usm$ of \cref{TErgoOptApprox1A}, we have \begin{equation}\label{ErgodLyapRobu1F} \rho_{n_k} = \left[\sL_{v_{n_k}^*}^{n_k}V^{n_k*}(x) + c_{n_k}(x, v_{n_k}^*(x))\right]\,. \end{equation} In view of the estimate \cref{ETErgoOptCont1A}, we obtain \begin{equation}\label{ErgodLyapRobu1G} \norm{V^{n_k*}}_{\Sob^{2,p}(\sB_R)}\leq \hat{\kappa}\,, \end{equation} where $\hat{\kappa} > 0$ is a constant independent of $k\in\NN$\,.
Hence, by the Banach-Alaoglu theorem and standard diagonalization argument (see \cref{ETC1.3BC}), there exists $\tilde{V}^*\in \Sobl^{2,p}(\Rd)$ such that along a sub-sequence \begin{equation}\label{ErgodLyapRobu1H} \begin{cases} V^{n_k*}\to & \tilde{V}^*\quad \text{in}\quad \Sobl^{2,p}(\Rd)\quad\text{(weakly)}\\ V^{n_k*}\to & \tilde{V}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}(\Rd) \quad\text{(strongly)}\,. \end{cases} \end{equation}Since $\rho_{n_k} \leq M$, along a further subsequence (without loss of generality denoting by same sequence) we have $\rho_{n_k} \to \tilde{\rho}^*$. As we know $v_{n_k}^* \to \tilde{v}^*$ in $\Usm$, multiplying both sides of \cref{ErgodLyapRobu1F} by test functions and letting $k\to\infty$, it follows that $(\tilde{V}^*, \tilde{\rho}^*)\in \Sobl^{2,p}(\Rd)\times \RR$, \, $1< p < \infty$, satisfies \begin{equation}\label{ErgodLyapRobu1I} \tilde{\rho}^* = \left[\sL_{\tilde{v}^*}\tilde{V}^*(x) + c(x, \tilde{v}^*(x))\right]\,. \end{equation} Arguing as in Theorem~\ref{TErgoOptCont}, one can show that $\tilde{V}^*\in \sorder{(\Lyap)}$. Hence, by uniqueness of solution of \cref{ErgodLyapRobu1I} (see, Theorem~\ref{TErgoExisPoiss1}) we deduce that $(\tilde{V}^*, \tilde{\rho}^*)\equiv (V^{\tilde{v}^*}, \rho^{\tilde{v}^*})$\,. Since both $\rho^{v_{n_k}^*}$ and $\rho_{n_k}$ converge to the same limit $\rho^{\tilde{v}^*}$, it follows that $|\rho^{v_{n_k}^*} - \rho_{n_k}|\to 0$ as $k\to \infty$\,. This completes the proof of the theorem. \end{proof} \section{Finite Horizon Cost}\label{Finitecost} In this section we study the robustness problem under a finite horizon criterion\,. We will assume that $a, a_n, b, b_n, c, c_n$ satisfy the following: \begin{itemize} \item[\hypertarget{FN1}{{(FN1)}}] The functions $a, a_n, b, b_n, c, c_n$ satisfy \begin{equation*} \sup_{(x,\zeta)\in \Rd\times \Act}\left[\abs{b(x,\zeta)} + \norm{a(x)} + \sum_{i=1}^{d} \norm{\frac{\partial{a}}{\partial x_i}(x)} + \abs{c(x, \zeta)}\right] \,\le\, \mathrm{K} \end{equation*} and \begin{equation*} \sup_{n\in \NN}\sup_{(x,\zeta)\in \Rd\times \Act}\left[\abs{b_n(x,\zeta)} + \norm{a_n(x)} + \sum_{i=1}^{d} \norm{\frac{\partial{a_n}}{\partial x_i}(x)} + \abs{c_n(x, \zeta)}\right] \,\le\, \mathrm{K} \end{equation*} for some positive constant $\mathrm{K}$\,. Furthermore, $H\in \Sob^{2,p,\mu}(\Rd)\cap \Lp^{\infty}(\Rd)$\,,\,\, $p\ge 2$\,. \end{itemize} From \cite[Theorem~3.3, p. 235]{BL84-book}, the finite horizon optimality equation (or, the HJB equation) \begin{align}\label{EFinitecost1A} &\frac{\partial \psi}{\partial t} + \inf_{\zeta\in \Act}\left[\sL_{\zeta}\psi + c(x, \zeta) \right] = 0 \\ & \psi(T,x) = H(x) \end{align} admits a unique solution $\psi\in \Sob^{1,2,p,\mu}((0, T)\times\Rd)\cap \Lp^{\infty}((0, T)\times\Rd)$, for some $p\ge 2$ and $\mu > 0$. Now, by the It\^{o}-Krylov formula (as in \cite[Theorem~3.5.2]{HP09-book}), there exists an optimal Markov policy, i.e., there exists $v^*\in \Um$ such that $\cJ_{T}^{v^*}(x, c) = \cJ_{T}^*(x, c) = \psi(0,x)$\,. Similarly, for each $n\in\NN$ (for the approximating models) the optimality equation \begin{align}\label{EFinitecost1B} &\frac{\partial \psi_n}{\partial t} + \inf_{\zeta\in \Act}\left[\sL_{\zeta}^n\psi_n + c_n(x, \zeta) \right] = 0 \\ & \psi_n(T,x) = H(x) \end{align} admits a unique solution $\psi_n\in \Sob^{1,2,p,\mu}((0, T)\times\Rd)\cap \Lp^{\infty}((0, T)\times\Rd)$\,,\,\, $p\ge 2$\,.
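As a simple sanity check on \cref{EFinitecost1A} (an elementary special case, recorded only for illustration and not needed for the results below), suppose that neither the drift nor the running cost depends on the control variable, i.e., $b(x,\zeta)\equiv b(x)$ and $c(x,\zeta)\equiv c(x)$. Then the infimum in \cref{EFinitecost1A} is superfluous, and the It\^{o}-Krylov formula yields the Feynman-Kac type representation
\begin{equation*}
\psi(t,x) \,=\, \Exp_{x}\left[\int_{0}^{T-t} c(X_{s})\D s + H(X_{T-t})\right]\,,\qquad (t,x)\in [0,T]\times\Rd\,,
\end{equation*}
where $X$ denotes the corresponding solution of \cref{E1.1}, whose law no longer depends on the control.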
Moreover, by the It\^{o}-Krylov formula (as in \cite[Theorem~3.5.2]{HP09-book}), there exists $v_n^*\in \Um$ such that $\cJ_{T,n}^{v_n^*}(x, c_n) = \cJ_{T,n}^*(x, c_n) = \psi_n(0,x)$\,. The following theorem shows that as the approximating model approaches the true model the optimal value of the approximating model converge to the optimal value of the true model. \begin{theorem}\label{FinitecostThm1} Suppose Assumptions (A1), (A3) and (FN1) hold. Then \begin{equation*} \lim_{n\to \infty} \cJ_{T,n}^*(x, c_n) = \cJ_{T}^*(x, c)\,. \end{equation*} \end{theorem} \begin{proof} For any minimizing selector $v_n^*$ of \cref{EFinitecost1B}, we have \begin{align}\label{EFinitecost1C} &\frac{\partial \psi_n}{\partial t} + \sL_{v_n^*}^n\psi_n + c_n(x, v_n^*(t,x)) = 0 \\ & \psi_n(T,x) = H(x) \end{align} By the It\^{o}-Krylov formula, it follows that \begin{align}\label{EFinitecost1DA} \psi_{n}(t,x) = \Exp_x^{v_n^*}\left[\int_t^{T} c_n(X_s^n, v_n^*(s, X_s^*)) \D{s} + H(X_T^*)\right] \end{align}This implies that \begin{equation}\label{EFinitecost1D} \norm{\psi_n}_{\infty} \leq T\norm{c_n}_{\infty} + \norm{H}_{\infty}\,. \end{equation} Rewriting \cref{EFinitecost1C}, it follows that \begin{align*} &\frac{\partial \psi_n}{\partial t} + \sL_{v_n^*}^n\psi_n + \lambda_0 \psi_n = \lambda_0 \psi_n - c_n(x, v_n^*(t,x)) \nonumber\\ & \psi_n(T,x) = H(x)\,, \end{align*} for some fixed $\lambda_0 >0$\,. Thus, by parabolic pde estimate \cite[eq. (3.8), p. 234]{BL84-book}, we deduce that \begin{equation}\label{EFinitecost1E} \norm{\psi_n}_{\Sob^{1,2,p,\mu}} \leq \hat{\kappa}_1 \norm{\lambda_0 \psi_n - c_n(x, v_n^*(t,x))}_{\Lp^{p,\mu}}\,. \end{equation} Thus, from \cref{EFinitecost1D} and \cref{EFinitecost1E}, we obtain $\norm{\psi_n}_{\Sob^{1,2,p,\mu}} \leq \hat{\kappa}_2$ for some positive constant $\hat{\kappa}_2$ (independent of $n$)\,. Since $\Sob^{1,2,p,\mu}((0, T)\times\Rd)$ is a reflexive Banach space, as a corollary of the Banach-Alaoglu theorem, there exists $\bar{\psi}\in\Sob^{1,2,p,\mu}((0, T)\times\Rd)$ such that along a subsequence (without loss of generality denoting by same sequence) \begin{equation}\label{EFinitecost1F} \begin{cases} \psi_n \to & \bar{\psi}\quad \text{in}\quad \Sob^{1,2,p,\mu}((0, T)\times\Rd)\quad\text{(weakly)}\\ \psi_n \to & \bar{\psi}\quad \text{in}\quad \Sob^{0,1,p,\mu}((0, T)\times\Rd)\quad \text{(strongly)}\,. \end{cases} \end{equation} Now, as in our earlier analysis for the different cost criteria considered, multiplying both sides of the \cref{EFinitecost1B} by test function $\phi\in\cC_c^{\infty}((0, T)\times \Rd)$ and integrating, we get \begin{align}\label{EFinitecost1G} \int_{0}^{T}\int_{\Rd}\frac{\partial \psi_n}{\partial t}\phi(t,x)\D t \D x + \int_{0}^{T}\int_{\Rd}\inf_{\zeta\in \Act}\left[\sL_{\zeta}^n\psi_n + c_n(x, \zeta) \right]\phi(t,x)\D t \D x = 0\,. \end{align} Thus, in view of \cref{EFinitecost1F}, letting $n\to \infty$, from \cref{EFinitecost1G} it follows that (arguing as in \cref{ETC1.3C1A} - \cref{ETC1.3D}) \begin{align*} \int_{0}^{T}\int_{\Rd}\frac{\partial \bar{\psi}}{\partial t}\phi(t,x)\D t \D x + \int_{0}^{T}\int_{\Rd}\inf_{\zeta\in \Act}\left[\sL_{\zeta}\bar{\psi} + c(x, \zeta) \right]\phi(t,x)\D t \D x = 0\,. 
\end{align*} Since $\phi\in\cC_c^{\infty}((0, T)\times \Rd)$ is arbitrary, from the above equation we deduce that $\bar{\psi}\in\Sob^{1,2,p,\mu}((0, T)\times\Rd)$ satisfies \begin{align}\label{EFinitecost1H} &\frac{\partial \bar{\psi}}{\partial t} + \inf_{\zeta\in \Act}\left[\sL_{\zeta}\bar{\psi} + c(x, \zeta) \right] = 0 \nonumber\\ & \bar{\psi}(T,x) = H(x)\,. \end{align} Since $\psi $ is the unique solution of \cref{EFinitecost1H}, we deduce that $\bar{\psi}(0,x) = \psi(0,x) = \cJ_{T}^*(x, c)$. This completes the proof. \end{proof} In the following theorem, we prove the robustness result for the finite horizon cost criterion. \begin{theorem}\label{FinitecostThm2} Suppose Assumptions (A1), (A3) and (FN1) hold. Then for any optimal control $v_n^*$ of the approximating models we have \begin{equation*} \lim_{n\to \infty} \cJ_{T}^{v_n^*}(x, c) = \cJ_{T}^*(x, c)\,. \end{equation*} \end{theorem} \begin{proof} By the triangle inequality we have $$|\cJ_{T}^{v_n^*}(x, c) - \cJ_{T}^*(x, c)| \leq |\cJ_{T}^{v_n^*}(x, c) - \cJ_{T, n}^{v_n^*}(x, c_n)| + |\cJ_{T,n}^{v_n^*}(x, c_n) - \cJ_{T}^*(x, c)|\,.$$ From Theorem~\ref{FinitecostThm1}, it is known that $|\cJ_{T,n}^{v_n^*}(x, c_n) - \cJ_{T}^*(x, c)| \to 0$ as $n\to \infty$\,. Next, we show that $|\cJ_{T}^{v_n^*}(x, c) - \cJ_{T, n}^{v_n^*}(x, c_n)|\to 0$ as $n\to \infty$\,. Since the space $\Um$ is compact (with topology defined as in \cite[Definition~2.2]{YukselPradhan}), along a sub-sequence $v_n^*\to \bar{v}$. From \cite[Theorem~3.3, p. 235]{BL84-book}, we have that for each $n\in\NN$ there exists a unique solution $\bar{\psi}_n\in\Sob^{1,2,p,\mu}((0, T)\times\Rd)\cap \Lp^{\infty}((0, T)\times\Rd)$\,,\,\, $p\ge 2$, to the following Poisson equation \begin{align}\label{EFinitecostThm2A} &\frac{\partial \bar{\psi}_n}{\partial t} + \left[\sL_{v_n^*}\bar{\psi}_n + c(x, v_n^*(t,x)) \right] = 0 \nonumber\\ & \psi_n(T,x) = H(x)\,. \end{align} By It\^{o}-Krylov formula, from \cref{EFinitecostThm2A} it follows that \begin{align}\label{EFinitecostThm2B} \bar{\psi}_{n}(t,x) = \Exp_x^{v_n^*}\left[\int_t^{T} c(X_s, v_n^*(s, X_s)) \D{s} + H(X_T)\right] \end{align} This gives us \begin{equation}\label{EFinitecostThm2C} \norm{\bar{\psi}_{n}}_{\infty} \leq T\norm{c}_{\infty} + \norm{H}_{\infty}\,. \end{equation} Arguing as in Theorem~\ref{FinitecostThm1}, letting $n\to \infty$ from \cref{EFinitecostThm2A}, we deduce that there exists $\hat{\psi}\in\Sob^{1,2,p,\mu}((0, T)\times\Rd)\cap \Lp^{\infty}((0, T)\times\Rd)$\,,\,\, $p\ge 2$, satisfying \begin{align}\label{EFinitecostThm2D} &\frac{\partial \hat{\psi}}{\partial t} + \left[\sL_{\bar{v}}\hat{\psi} + c(x, \bar{v}(t,x)) \right] = 0 \nonumber\\ & \hat{\psi}(T,x) = H(x)\,. \end{align} Now using \cref{EFinitecostThm2D}, by It\^{o}-Krylov formula we deduce that \begin{align}\label{EFinitecostThm2E} \hat{\psi}(t,x) = \Exp_x^{\bar{v}}\left[\int_t^{T} c(X_s, \bar{v}(s, X_s)) \D{s} + H(X_T)\right]\,. \end{align} Moreover, we have \begin{align}\label{EFinitecostThm2F} &\frac{\partial \psi_n}{\partial t} + \sL_{v_n^*}^n\psi_n + c_n(x, v_n^*(t,x)) = 0 \\ & \psi_n(T,x) = H(x)\,. \end{align} Letting $n\to \infty$, as in Theorem~\ref{FinitecostThm1}, we have there exists $\tilde{\psi}\in\Sob^{1,2,p,\mu}((0, T)\times\Rd)\cap \Lp^{\infty}((0, T)\times\Rd)$\,,\,\, $p\ge 2$, satisfying \begin{align}\label{EFinitecostThm2G} &\frac{\partial \tilde{\psi}}{\partial t} + \left[\sL_{\bar{v}}\tilde{\psi} + c(x, \bar{v}(t,x)) \right] = 0 \nonumber\\ & \tilde{\psi}(T,x) = H(x)\,. 
\end{align} By It\^{o}-Krylov formula, from \cref{EFinitecostThm2G}, we obtain \begin{align}\label{EFinitecostThm2H} \tilde{\psi}(t,x) = \Exp_x^{\bar{v}}\left[\int_t^{T} c(X_s, \bar{v}(s, X_s)) \D{s} + H(X_T)\right]\,. \end{align} From \cref{EFinitecostThm2E}and \cref{EFinitecostThm2H}, we deduce that $\cJ_{T}^{v_n^*}(x, c) = \bar{\psi}_{n}(0,x)$ and $\cJ_{T, n}^{v_n^*}(x, c_n) = \psi_n(0,x)$ converge to the same limit\,. This completes the proof. \end{proof} \section{Control up to an Exit Time}\label{exitTimeSection} Before we conclude the paper, let us also briefly note that if one consider an optimal control up to an exit time with the cost given as: \begin{itemize} \item[•]\textit{(in true model:)} for each $U\in\Uadm$ the associated cost is given as \begin{equation*} \hat{\cJ}_{e}^{U}(x) \,\df \, \Exp_x^{U} \left[\int_0^{\tau(O)} e^{-\int_{0}^{t}\delta(X_s, U_s) \D s} c(X_t, U_t) \D t + e^{-\int_{0}^{\tau(O)}\delta(X_s, U_s) \D s}h(X_{\tau(O)})\right],\quad x\in\Rd\,, \end{equation*} \item[•]\textit{(in approximated models:)} for each $n\in \NN$ and $U\in\Uadm$ the associated cost is given as \begin{equation*} \hat{\cJ}_{e,n}^{U}(x) \,\df \, \Exp_x^{U} \left[\int_0^{\tau(O)} e^{-\int_{0}^{t}\delta(X_s, U_s) \D s} c_n(X_t, U_t) \D t + e^{-\int_{0}^{\tau(O)}\delta(X_s, U_s) \D s}h(X_{\tau(O)})\right],\quad x\in\Rd\,, \end{equation*} \end{itemize} where $O\subset \Rd$ is a smooth bounded domain, $\tau(O) \,\df\, \inf\{t \geq 0: X_t\notin O\}$, $\delta(\cdot, \cdot): \bar{O}\times\Act\to [0, \infty)$ is the discount function and $h:\bar{O}\to \RR_+$ is the terminal cost function. In the true model the optimal value is defined as $\hat{\cJ}_{e}^{*}(x)=\inf_{U\in \Uadm}\hat{\cJ}_{e}^{U}(x)$, and in the approximated model the optimal value is defined as $\hat{\cJ}_{e,n}^{*}(x)=\inf_{U\in \Uadm}\hat{\cJ}_{e,n}^{U}(x)$\,. We assume that $\delta\in \cC(\bar{O}\times \Act)$, $h\in\cC(\bar{O})$. As in \cite{RZ21}, \cite[p.229]{B05Survey} the analysis leads to the following HJB equation. \begin{align*} \min_{\zeta \in\Act}\left[\sL_{\zeta}\phi(x) - \delta(x, \zeta) \phi(x) + c(x, \zeta)\right] = 0\,,\quad \text{for all\ }\,\, x\in O\,,\quad\text{with}\quad \phi = h\,\,\, \text{on}\,\,\, \partial{O}\,. \end{align*} By similar argument as in \cite[Theorem~3.5.3]{ABG-book}, \cite[Theorem~3.5.6]{ABG-book} we have that $\hat{\cJ}_{e}^{*}$, $\hat{\cJ}_{e,n}^{*}$ are unique solutions to their respective HJB equations. Existence follows by utilizing the Leray-Schauder fixed point theorem as in \cite[Theorem~3.5.3]{ABG-book} and uniqueness follows by It$\hat{o}$-Krylov formula as in \cite[Theorem~3.5.6]{ABG-book}\,. Using standard elliptic PDE estimates (on bounded domain $O$) and closely mimicking the arguments as in Theorem~\ref{TC1.3}, we have the following continuity result \begin{theorem}\label{TExi1.1} Suppose Assumptions (A1)-(A5) hold. Then \begin{equation*} \lim_{n\to\infty} \hat{\cJ}_{e,n}^{*}(x) = \hat{\cJ}_{e}^{*}(x) \quad\text{for all}\,\, x\in \bar{O}\,. \end{equation*} \end{theorem}For each $n\in\NN$, suppose that $\hat{v}_{e,n}^*\in \Usm$, $\hat{v}_{e}^*\in \Usm$ are optimal controls of the approximated model and true model respectively. Then in view of the the above continuity result, following the steps of the proof of the Theorem~\ref{TC1.4}, we obtain the following robustness result. \begin{theorem}\label{TExi1.2} Suppose Assumptions (A1)-(A5) hold. 
Then \begin{equation*} \lim_{n\to\infty} \hat{\cJ}_{e}^{\hat{v}_{e,n}^*}(x) = \hat{\cJ}_{e}^{\hat{v}_{e}^*}(x) \quad\text{for all}\,\, x\in \bar{O}\,. \end{equation*} \end{theorem} \section{Revisiting Example \ref{ERS11Example}} Consider Example \ref{ERS11Example}(i). \begin{itemize} \item[•]\textbf{For discounted cost:} Let $\hat{v}_n^*$ be a discounted cost optimal control when the system is governed by \cref{ERS1.1} (existence of such a control is ensured by Theorem~\ref{TD1.1}). Then, following Theorem~\ref{TC1.4}, we have \begin{equation}\label{ENoiseAPPDisc} \lim_{n\to\infty} \cJ_{\alpha}^{\hat{v}_n^*}(x, c) = \cJ_{\alpha}^{v^*}(x, c) \quad\text{for all}\,\, x\in\Rd\,. \end{equation} \item[•]\textbf{For ergodic cost:} Let $\hat{v}_n^*$ be an ergodic optimal control when the system is governed by \cref{ERS1.1} (existence is guaranteed by Theorem~\ref{ergodicnearmono2} and Theorem~\ref{TErgoOptApprox1}). Then, arguing as in Theorem~\ref{ErgodNearmonoRobu1} (for the near-monotone case) and Theorem~\ref{ErgodLyapRobu1} (for the stable case), it follows that \begin{equation}\label{ENoiseAPPErgo} \lim_{n\to\infty} \inf_{x\in\Rd}\sE_{x}(c, v_{n}^*) = \sE^{*}(c)\,. \end{equation} \item[•]\textbf{Finite horizon cost:} For each $n\in\NN$, let $\hat{v}_n^*$ be a finite horizon optimal control when the system is governed by \cref{ERS1.1}\,. Then, in view of Theorem~\ref{FinitecostThm2}, we have \begin{equation}\label{EFiniteCost} \lim_{n\to\infty} \cJ_{T}^{\hat{v}_{n}^*}(x, c) = \cJ_{T}^{*}(x, c) \quad\text{for all}\,\, x\in \Rd\,. \end{equation} \item[•]\textbf{For cost up to an exit time:} Let $\hat{v}_{e,n}^*$ be an optimal control when the system is governed by \cref{ERS1.1}, for each $n\in \NN$. Then Theorem~\ref{TExi1.2} ensures that \begin{equation}\label{ENoiseExiCost} \lim_{n\to\infty}\hat{\cJ}_{e}^{\hat{v}_{e,n}^*}(x) = \hat{\cJ}_{e}^{\hat{v}_{e}^*}(x) \quad\text{for all}\,\, x\in \bar{O}\,. \end{equation} \end{itemize} \section{Conclusion} In this paper, we studied the continuity of optimal costs and the robustness/stability of optimal control policies designed for an incorrect model and applied to the actual model, for both the discounted and ergodic cost criteria. In our analysis we have crucially used the fact that the actual model is a non-degenerate diffusion model. It would be an interesting problem to investigate whether such results can be proved when the limiting system (the actual system) is a degenerate diffusion system. Also, in our analysis we have assumed that the system noise is given by a Wiener process; it would be interesting to study further noise processes, e.g., when the system noise is a wide-bandwidth process or a more general discontinuous martingale noise (as in \cite{K90}, \cite{KR87}, \cite{KR87a}, \cite{KR88})\,. In the latter case the controlled process may become a non-Markovian process even under stationary Markov policies. Therefore, it is reasonable to find a suitable Markovian approximation of it which maintains the necessary properties of the original system. The analysis of robustness problems in this setting is a direction of research worth pursuing. \bibliography{Somnath_Robustness} \end{document}
2205.05746v1
http://arxiv.org/abs/2205.05746v1
Unisolvent and minimal physical degrees of freedom for the second family of polynomial differential forms
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{comment} \usepackage{tikz-cd} \usepackage{tikz} \usepackage{cite} \usepackage{authblk} \usepackage[margin=1.5in]{geometry} \usetikzlibrary{shapes,positioning,intersections,quotes} \DeclareMathOperator{\Id}{Id} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{remark}{Remark}[section] \newcommand{\R}{\mathbb{R}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\der}{\mathrm{d}} \title{Unisolvent and minimal physical degrees of freedom for the second family of polynomial differential forms} \author[$\dagger$]{Ludovico Bruni Bruno\thanks{email: [email protected]}} \author[$\dagger$]{Enrico Zampa\thanks{email: [email protected]}} \affil[$\dagger$]{Dipartimento di Matematica, Università di Trento, via Sommarive 14, 38123, Povo (TN), Italia} \date{May 2022} \begin{document} \maketitle \begin{abstract} The principal aim of this work is to provide a family of unisolvent and minimal physical degrees of freedom, called weights, for N\'ed\'elec second family of finite elements. Such elements are thought of as differential forms $ \mathcal{P}_r \Lambda^k (T)$ whose coefficients are polynomials of degree $ r $. We confine ourselves in the two dimensional case $ \R^2 $ since it is easy to visualise and offers a neat and elegant treatment; however, we present techniques that can be extended to $ n > 2 $ with some adjustments of technical details. In particular, we use techniques of homological algebra to obtain degrees of freedom for the whole diagram $$ \mathcal{P}_r \Lambda^0 (T) \rightarrow \mathcal{P}_r \Lambda^1 (T) \rightarrow \mathcal{P}_r \Lambda^2 (T), $$ being $ T $ a $2$-simplex of $ \R^2 $. This work pairs its companions recently appeared for N\'ed\'elec first family of finite elements. \end{abstract} \section{Introduction} Degrees of freedom are one of the main ingredients of a finite element triple as defined by Ciarlet \cite{Ciarlet}. For standard polynomial Lagrange elements over simplices, the classical degrees of freedom are evaluations on the principal lattice $L_r(T)$ of top-dimensional simplices $T$ of the triangulation. These degrees of freedom have a clear physical meaning: if $u_h$ is the numerical solution, then degrees of freedom are just the values of the exact solution at some points of the mesh. On the other side, for the polynomial differential forms families $\mathcal{P}_r^-\Lambda^k$ and $\mathcal{P}_r\Lambda^k$ described in \cite{FEEC}, the standard degrees of freedom are the so called \emph{moments}, that is, integrals against $(d-k)$-forms on $d$-subsimplices, for $d = k,\ldots,n$, where $n = \dim T $ is the dimension of the domain of the problem. These degrees of freedom have some disadvantages, which we aim here to improve: \begin{enumerate} \item they lack an immediate physical interpretation; \item the associated Vandermonde matrix is not well conditioned; \item they are difficult to implement. \end{enumerate} To overcome these issues, another choice of degrees of freedom has been proposed in \cite{BossBook}. It consists in considering integrals over $k$-cells topologically contained in the top dimensional simplices. These degrees of freedom are called \emph{weights} or \emph{physical}, since they have a clear physical interpretation: circulations or fluxes for vector fields ($1$- and $n-1$-forms) and averages for densities ($n$-forms). 
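As a concrete illustration of a weight for $k=1$ (an aside, not part of the constructions in this paper): the circulation of a $1$-form $\omega = P\,dx + Q\,dy$ along a straight edge $s$ from $p$ to $q$ reduces to a one-dimensional integral over the parametrization of $s$. The following Python sketch evaluates it with Gauss--Legendre quadrature for an invented form and edge.
\begin{verbatim}
import numpy as np

# Weight (circulation) of a 1-form  omega = P dx + Q dy  on a straight edge s,
#   w(omega, s) = int_s omega = int_0^1 [P(g(t)) gx' + Q(g(t)) gy'] dt,
# with the affine parametrization g(t) = (1 - t) p + t q.

def weight_on_edge(P, Q, p, q, npts=5):
    t, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    t, w = 0.5 * (t + 1.0), 0.5 * w                # map to [0, 1]
    g = np.outer(1.0 - t, p) + np.outer(t, q)      # quadrature points on s
    dx, dy = q[0] - p[0], q[1] - p[1]              # derivative of g(t)
    return np.sum(w * (P(g[:, 0], g[:, 1]) * dx + Q(g[:, 0], g[:, 1]) * dy))

# Invented example: omega = y dx + x*y dy on the edge from (0,0) to (1,1);
# the exact value is 5/6.
p, q = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(weight_on_edge(lambda x, y: y, lambda x, y: x * y, p, q))
\end{verbatim}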
Moreover, weights are a straightforward generalization of the evaluation-type degrees of freedom for scalar functions (for $k=0$, a $k$-cell is just a point and the integral is just the evaluation). Physical degrees of freedom for the first (or \emph{trimmed}) family $\mathcal{P}_r^-\Lambda^k$, whose features in the framework that we adopt here have been pointed out in several works, such as \cite{Gopa}, \cite{Hipt} and \cite{FEEC}, were studied extensively in \cite{RapettiBossavit}, \cite{ChristiansenRapetti} and more recently in \cite{ENUMATH} and \cite{nostropreprint}. A slightly different point of view is also offered in \cite{Lohi1} and \cite{Lohi2}. On the other side, for the second (or \emph{complete}) family $\mathcal{P}_r\Lambda^k$ the first physical degrees of freedom for the two dimensional case were proposed in \cite{ZAR} only recently, where however unisolvence was not proved, but only checked numerically. In this work, we stick to the two dimensional case and we provide different physical degrees of freedom for the second family and we rigorously prove the unisolvence using cohomological tools. Moreover, we provide numerical evidence of the well-conditioning of the associated Vandermonde matrix and we perform some interpolation tests. We assume that the reader is familiar with standard notions in differential geometry and algebraic topology that are now common in most works on Finite Element Exterior Calculus and Finite Element Systems, such as differential forms, differential complexes, cellular complexes, chains, cochains, cohomology, de Rham maps, and so on. See section 2 of \cite{christiansen2009Foundations} for a concise introduction on these topics. We however recall known and useful facts when setting the notation. The outline of this work is as follows. In section \ref{sect:prel} we introduce basic definitions and tools. We recall known results and state Lemmas that we will use in the subsequent. In section \ref{sect:mainsect} we state the main results concerning the construction of unisolvent and minimal sets. In particular, confining ourselves in the case of $ \mathbb{R}^2 $, we identify a unisolvent and minimal sequence for N\'ed\'elec second family. In section \ref{sect:NumTest} we present some numerical results concerning the generalised Vandermonde matrices associated with the introduced families and the associated interpolators, comparing an example of convergence of a smooth, oscillating form with that of its differential. We summarise conclusions and propose future developments in section \ref{sect:concl}. \section{Physical systems of degrees of freedom} \label{sect:prel} In this section we recall the definition of a physical system of degrees of freedom and some results from \cite{ZAR}. Let $X$ be a compatible finite element system in the sense of Christiansen \cite{ChristiansenConstruction}, \cite{ChristiansenTopics}, \cite{ChristiansenRapetti} over the cellular complex $\mathcal{T}$. In particular, for each $T$ in $\mathcal{T}$ (of any dimension\footnote{In the finite element systems framework, one considers spaces of differential forms and degrees of freedom on cells of each dimensions, not only on top dimensional ones.}), for each $k = 0,1,2,\ldots, \dim T$, the following sequence is exact: \begin{equation*} 0 \rightarrow \R \hookrightarrow X^0(T) \overset{\der}{\rightarrow} X^1(T) \overset{\der}{\rightarrow} \dots \overset{d}{\rightarrow} X^{\dim T}(T) \rightarrow 0. 
\end{equation*} Here the first arrow is the inclusion and $\mathrm{d}$ denotes the exterior derivative. Moreover, for $T$ in $\mathcal{T}$ we denote with $\mathring{X}^k(T)$ the subspace of $X^k(T)$ made of all forms with zero trace on the boundary. \begin{definition} A \emph{system of physical degrees of freedom} (physical sysdofs) $\mathcal{F}$ over $X$ is a choice, for each cell $T$ in $\mathcal{T}$ , for each $k = 0,1,2,\ldots, \dim T$, of a finite set $\mathring{\mathcal{F}}^k(T)\doteq \{s_1,\ldots,s_{\mathring{N}^k(T)}\}$ of non-overlapping $k$-cells. These cells induce functionals \begin{equation} \omega\mapsto w(\omega,s_i)\doteq \int_{s_i}\omega. \end{equation} We call $w(\omega,s_i)$ the \emph{weight} of $\omega$ on $s_i$. \end{definition} The unisolvence of a physical system of degrees of freedom is defined in the obvious way. \begin{definition} A physical sysdofs is said to be \emph{unisolvent} if, for each $T$ in $\mathcal{T}$, for each $k=0,1,2,\ldots, \dim T$, the only form $\omega$ in $\mathring{X}^k(T)$ which satisfies \[ w(\omega,s) = 0, \quad\forall s\in\mathring{\mathcal{F}}^k(T) \] is the zero form. \end{definition} Clearly, a unisolvent physical sysdofs must satisfy the trivial necessary condition: for each $T$ in $\mathcal{T}$, for each $k = 0,1,2,\ldots$, $\mathring{N}^k(T) \geq \dim\mathring{X}^k(T)$ where $\mathring{N}^k(T)$ denotes the cardinality of the set $\mathring{\mathcal{F}}^k(T)$. This motivates the following definition. \begin{definition} A physical sysdofs is \emph{minimal} if, for each $T$ in $\mathcal{T}$, for each $k = 0,1,2,\ldots, \dim T$, the following equality holds: \[ N^k(T) = \dim \mathring{X}^k(T). \] \end{definition} From the properties of compatible finite element system we obtain the following equivalent definition of unisolvence and minimality, which is closer to classical one found in standard books on finite elements \cite{Ciarlet}. For each $S,T$ in $\mathcal{T}$ we write $S\leq T$ is $S$ is a subcell of $T$. Moreover, for $T$ in $\mathcal{T}$ and $k = 0,1,2,\ldots, \dim T$, write \begin{equation} \mathcal{F}^k(T) \doteq \bigcup_{S\leq T}\mathring{\mathcal{F}}^k(S). \end{equation} \begin{lemma} If a physical system of degrees of freedom $\mathcal{F}$ is unisolvent, then, for each top dimensional cell $T$ in $\mathcal{T}$, for each $k = 0,1,2,\ldots, \dim T$, the only form $\omega$ in $X^k(T)$ satisfying \begin{equation} w(\omega,s) = 0,\qquad \forall s\in\mathcal{F}^k(T) \label{eq: alternative unisolvence} \end{equation} is the zero form. Moreover $\mathcal{F}$ is minimal and unisolvent if and only if the above condition holds and $N^k(T) = \dim X^k(T)$, where $N^k(T)$ denotes the cardinality of $\mathcal{F}^k(T)$. \end{lemma} \begin{proof} Assume that $\mathcal{F}$ is unisolvent. Let $\omega\in X^k(T)$ satisfying condition \eqref{eq: alternative unisolvence}. Then let $S$ any $k$-subcell of $T$ and let $\iota_{S,T}:S\to T$. Clearly $\iota_{S,T}^*\omega$ belongs to $X^k(S)$, but since it is a $k$-form on a $k$-cell, its traces on the boundary of $S$ vanish by definition, therefore $\iota_{S,T}^*\omega$ actually belongs to $\mathring{X}^k(S)$. Then, by unisolvence, $\iota_{S,T}^*\omega = 0$. Let now $S$ be a $k+1$-subcell of $T$ and $S'$ be a $k$-cell belonging to the boundary of $S$. Then $\iota_{S',S}^*\iota_{S,T}^*\omega = \iota_{S',T}^*\omega = 0$ by the previous argument. Therefore $\iota_{S,T}^*\omega$ belongs to $\mathring{X}^k(S)$. Again, by unisolvence, $\iota_{S,T}^*\omega = 0$. 
Proceeding in this way, we obtain that $\omega\in\mathring{X}^k(T)$. Finally, unisolvence gives $\omega = 0$. For the stronger statement, see Proposition 2.5 in \cite{ChristiansenRapetti}. \end{proof} From the computational point of view, one may check equivalent an condition for any given top dimensional cell $T$ and any $ k = 0,1,2,\ldots $. We thus define the \emph{generalized Vandermonde matrix} $ V $, whose $(i,j)$-th element is $$ \int_{s_i} \omega_j ,$$ being $ \omega_1, \ldots, \omega_{N^k(T)} $ some basis for $X^k (T) $. We thus have the following. \begin{lemma} A collection of $ k $-cells $ \{ s_1, \ldots, s_{N^k}(T) \} $ is unisolvent and minimal if and only if $V$ is a square full rank matrix. Such a rank does not depend on the basis $ \{ \omega_1, \ldots, \omega_{\dim X^k(T)} \}$ chosen for $ X^k(T) $. \end{lemma} \subsection{A motivation: the scalar case} To fix ideas, let $ T $ be a $2$-simplex, i.e. a non degenerate triangle. Notice that, for $ k = 0 $ and $X^0(T) = \mathcal{P}_r(T)$, the problem of deducing unisolvence and minimality is linked to the problem of deducing if a collection of nodes $ \mathcal{N} $ in $ \R^2 $ is \emph{poised}, which means that the only polynomial vanishing on $ \mathcal{N} $ is the zero polynomial. Explicitly, for a polynomial $ \varphi \in \mathcal{P}_r (\R^2) $ this reads as $$ \varphi (\pmb{x}) = 0 \quad \forall \pmb{x} \in \mathcal{N} \ \Longrightarrow \varphi (\pmb{x}) = 0 \quad \forall \pmb{x} \in \R^2 .$$ This problem is still unsolved in its greatest generality, however several partial results and conjectures have been offered. A possible approach to a complete understanding of the placement of points in $ \R^2 $ consists in studying the number of lines that pass through a fixed number of points of $ \mathcal{N} $. This does not give \emph{all possible unisolvent sets}, but the conjectural result claims these collections are \emph{all unisolvent}, see \cite{GCGM}. This approach is convenient in this framework, since when considering particular collection of points, such as \emph{principal lattices} or \emph{regular lattices} \cite{ChungYao} and some of their subsets, one may reduce the problem. These considerations clearly also extend to greater $ k $, in this context to $ k = 1 $ (that is, to edges) and $ k = 2 $ (that is, to faces). Some numerical results relate these two problems. In particular, for $ k = 1 $ we address the reader to \cite{nostropreprint} and for $ k = 2 $ to \cite{ABRICO}. \subsection{Interpolators and (co)-homological tools} \label{sect:interpdef} For each $T$ in $\mathcal{T}$, for each $k = 0,1,2,\ldots, \dim T$, a physical sysdofs induces an interpolator $\Pi^k(T) :\Lambda^k(T) \to X^k(T)$ by the equations: \begin{equation} w(\omega,s) = w(\Pi^k(T)\omega,s),\qquad\forall s\in\mathcal{F}^k(T). \label{eq:interpolator} \end{equation} The interpolator is well defined if the physical sysdofs is unisolvent. In fact, assume that $\Pi^k\omega$ and $\tilde{\Pi}^k\omega$ are two interpolators which satisfy \eqref{eq:interpolator}. Then \[ w (\Pi^k\omega - \tilde{\Pi}^k\omega) = 0, \] and unisolvence gives $\omega = 0$. We are interested in interpolators that commute with the exterior derivative, that is, such that the following diagram is commutative \begin{equation*} \begin{tikzcd} \Lambda^k(T) \arrow[r, "\der"] \arrow[d, "\Pi^k(T)"] & \Lambda^{k+1}(T) \arrow[d, "\Pi^{k+1}(T)"] \\ X^k(T) \arrow[r, "\der"] & X^{k+1}(T) \end{tikzcd} \end{equation*} In \cite{ZAR} Zampa et al. 
showed that an interpolator induced by a physical sysdofs commutes with the exterior derivative if and only if the union \begin{equation} \mathcal{F}^{\bullet}(T)\doteq \bigcup_{k = 0}^{\dim T}\mathcal{F}^k(T) = \bigcup_{k = 0}^{\dim T}\bigcup_{S\leq T}\mathring{\mathcal{F}}^k(S) \end{equation} is a cellular complex, that is, if and only if the boundary of a cell in $\mathcal{F}^{k+1}(T)$ is a union of cells in $\mathcal{F}^k(T)$. If this is the case, we can consider $k$-chains $C_k(\mathcal{F}^{\bullet}(T))$ and $k$-cochains $C^k(\mathcal{F}^{\bullet}(T))$ over $\R$. Denote with $\delta$ the coboundary operator mapping $k$-cochains to $k+1$-cochains \cite{Hatcher}. It is natural then to consider the de Rham map \cite{Whitney} \begin{equation} \begin{split} \mathfrak{R}^k: X^k(T) & \to C^k(\mathcal{F}^{\bullet}(T)) \\ \omega & \mapsto \left( c\mapsto \int_c\omega \right). \end{split} \label{eq: de Rham map} \end{equation} Stokes Theorem \cite{Lee} implies that the de Rham map commutes with the exterior derivative, that is, is a chain map. We can then arrange everything in a commutative diagram \begin{equation} \small \begin{tikzcd} 0 \arrow[r] & \R \arrow[r, hook] \arrow[d, "\Id"] & X^0(T) \arrow[r, "\der"] \arrow[d, "\mathfrak{R}^0"] & X^1(T) \arrow[r, "\der"] \arrow[d, "\mathfrak{R}^1"] & \dots \arrow[r, "\der"] & X^{\dim T}(T) \arrow[r] \arrow[d, "\mathcal{R}^{\dim T}"] & 0 \\ 0 \arrow[r] & \R \arrow[r, "\psi"] & C^0(\mathcal{F}^{\bullet}(T))) \arrow[r, "\delta"] & C^1(\mathcal{F}^{\bullet}(T)) \arrow[r, "\delta"] & \dots \arrow[r, "\delta"] & C^{\dim T}(\mathcal{F}^{\bullet}(T)) \arrow[r] & 0 \end{tikzcd} \label{eq: diagram} \end{equation} where $\psi$ is the unique map that makes it commutative, sending $1$ to the $0$-cochain $c\mapsto 1$. Notice that the top sequence is exact since $X$ is a compatible finite element system. We can thus give an equivalent characterization of unisolvence and minimality in terms of the de Rham map. \begin{lemma} A physical sysdofs $\mathcal{F}$ is unisolvent (unisolvent and minimal) if and only if, for each $T$ in $\mathcal{T}$ and for each $ k $ the de Rham map \eqref{eq: de Rham map} is injective (an isomorphism of vector spaces). \end{lemma} In \cite{ZAR}, the authors showed that a unisolvent and minimal physical system of degrees of freedom that induces commuting interpolators must satisfy the following condition: the union of all cells in $\mathcal{F}^\bullet(T)$ paves $T$. If this this is the case, the bottom sequence in \eqref{eq: diagram} is exact. \section{Physical degrees of freedom for the second family} \label{sect:mainsect} In this section we will construct a physical sysdofs for the finite element system $X^k(T) = \mathcal{P}_{r-k}\Lambda^k(T)$ with $r\geq 2$ in the two-dimensional case. Since the majority of the following results are general, we claim and prove them in the case of an $n$-simplex $ T $; the specific case of interest here is immediately obtained for $ n = 2 $. We will exploit features of $ \mathbb{R}^2 $ only for the definition of $ \mathcal{F} $ and hence in Theorem \ref{thm:main}. We invite the reader to match the following construction with that for N\'ed\'elec first family \cite{NedelecFirst} given in \cite{ENUMATH}. Recall that spaces $ \mathcal{P}_{r-k}\Lambda^k(T) $ are defined as subspaces of differential $k$-forms $ \Lambda^k (T) $ whose coefficient are polynomials of degree $ \leq r-k $. 
These spaces are sometimes called \emph{complete}, since they are precisely tensor products $ \mathcal{P}_{r-k} (T) \otimes \mathrm{Alt}^k(T) $, being $ \mathcal{P}_{r-k} (T) $ the space of polynomials of degree at most $ r-k $ in $n$ variables defined on $ T $ and $ \mathrm{Alt}^k(T) $ that of linear alternating $k$-forms on (the tangent bundle of) $ T $. This makes it easy the computation of $$ \dim \mathcal{P}_r \Lambda^k (T) = \dim \mathcal{P}_r (T) \cdot \dim \mathrm{Alt}^k(T) = \binom{r + \dim T}{\dim T} \binom{\dim T}{k}.$$ When $ n = 2 $, proxies of this sequence are known as N\'ed\'elec second family \cite{NedelecSecond} and the central space is that of \cite{BDM}. Note that we use the subscript $r-k$ instead of the classical $r$ found in the literature, since the exterior derivative lowers the polynomial degree at each stage of the complex \[ \mathcal{P}_r\Lambda^0(T) \overset{\der}{\rightarrow} \mathcal{P}_{r-1}\Lambda^1(T) \overset{\der}{\rightarrow} \dots\overset{\der}{\rightarrow} \mathcal{P}_{r-\dim T}\Lambda^{\dim T}(T). \] We recall now the definition of \emph{small simplex} from \cite{ChristiansenRapetti}. For $ n = \dim T $ and $r \geq 0 $ let $\mathcal{I}(r,n)$ be the set of multi-indices $\pmb{\alpha} = (\alpha_0,\ldots,\alpha_n)$ with nonnegative components and such that $|\boldsymbol{\alpha} | \doteq \alpha_0 + \ldots + \alpha_n = r$. If $T$ is a simplex of dimension $n$ and vertices $\{\pmb{x}_0,\ldots,\pmb{x}_n\}$, we equip it with barycentric coordinates $\{\lambda_0,\ldots,\lambda_n \}$, i.e. the only (up to permutations) non negative degree $1$ polynomials defined on $ T $ such that $$ \boldsymbol{x} = \sum_{i=0}^n \lambda_i \boldsymbol{x}_i , \qquad \sum_{i=0}^n \lambda_i = 1 , \qquad \forall \boldsymbol{x} \in T.$$ For each $\pmb{\alpha}\in\mathcal{I}(r-1,n)$ we define the \emph{small} $n$-simplex $s^{\pmb{\alpha}}$ as the image of $T$ under the homothety \begin{equation} z_{\pmb{\alpha}} \ : \quad \pmb{x} \ \mapsto \ z_{\pmb{\alpha}}(\pmb{x}) = \frac{1}{r} \sum_{i=0}^n [\lambda_i(\pmb{x}) + \alpha_i]\, \pmb{x}_i\,. \label{eq: homothety} \end{equation} Note that \eqref{eq: homothety} is just the identity for $r = 1$. Small $k$-simplices are just $k$-subsimplices of small $n$-simplices and we denote them with $\Sigma^k_r(T)$. In particular $\Sigma^0_r(T)$ is the principal lattice $L_r(T)$, that is, the set of points with barycentric coordinates \[ \Sigma_r^0 (T) \doteq \frac{1}{r}(\lambda_0 + \alpha_0, \ldots, \lambda_n + \alpha_n ),\qquad \pmb{\alpha}\in\mathcal{I}(r,n). \] If the reader is familiar with weights for N\'ed\'elec first family they might have noted that a slightly different definition of small simplices is usually provided. In particular, the term $ \lambda_i (\boldsymbol{x}) $ in \eqref{eq: homothety} is usually omitted, so that overlappings are avoided. We shall see the reason of such a different choice in the subsequent of this section. For $\pmb{\xi}\in T$, define the affine tranformation \begin{equation} \tau_{\pmb{\xi}} \ : \quad\pmb{x}\mapsto \lambda_0(\pmb{\xi})\pmb{x} + \sum_{i=1}^n\lambda_i(\pmb{\xi})\pmb{x}_i. \label{eq: tau map} \end{equation} Note that the map \eqref{eq: tau map} is invertible if and only if $\lambda_0(\pmb{\xi}) \neq 0$. We define $ T_{\boldsymbol{\xi}} \doteq \tau_{\boldsymbol{\xi}} (T) $ and let $ \tau^*_{\boldsymbol{\xi}} $ denote the pullback with respect to $ \tau_{\boldsymbol{\xi}} $. We have the following. 
\begin{lemma} \label{lem:useful} Let $ \omega \in \mathcal{P}_{r-\dim T} \Lambda^{\dim T} (T)$ be such that $$ \int_{T} \tau_{\boldsymbol{\xi}} \omega = 0, \qquad \forall \boldsymbol{\xi} \in \mathbb{R}^{\dim T}.$$ Then $ \omega = 0 $. \end{lemma} \begin{proof} This is a direct consequence of \cite[Lemma $3.12$]{ChristiansenRapetti}. \end{proof} \begin{theorem} Let $\Gamma = \{\pmb{\xi}_1,\ldots,\pmb{\xi}_{N^2(T)}\}$ be a poised subset of $L_{r}(T)$ such that $\lambda_0(\pmb{\xi}_i) > 0$ for $i=1,\ldots,N^2(T)$. Let $ \omega \in \mathcal{P}_{r-\dim T} \Lambda^{\dim T} (T)$ be such that \[ \int_{\tau_{\pmb{\xi}}(T)}\omega= 0,\qquad \forall \pmb{\xi}\in \Gamma. \] Then $\omega = 0$. \end{theorem} \begin{proof} The map \[ \pmb{\xi}_i \mapsto\int_{\tau_{\pmb{\xi}_i}(T)}\omega \] is a polynomial of degree $r$ in $\dim T$ variables $ \xi_1, \ldots, \xi_{\dim T} $ which vanishes on $\lvert L_r(T)\rvert$ points of a poised set, therefore is zero for each $ \boldsymbol{\xi} \in \mathbb{R}^{\dim (T)}$. It follows from Lemma \ref{lem:useful} that $\omega = 0$. \end{proof} As an example of set $ \Gamma $, we may pick any set satisfying the GC condition \cite{GCCarni} (see also \cite{GCGeneral} and \cite{deBoor} for higher dimensional counterparts). Some explicit examples can be found in \cite{GCr4} and \cite{GCr5} and we offer more in a recursive fashion in the following. We define $\mathcal{F}$ as follows. Let $T$ be a $2$-simplex. For $k = 0$, $\mathcal{F}^0(T)$ is just the principal lattice $L_r(T)$. For $k = 2$ we consider the GC set $\Gamma_r = \{\pmb{\xi}_1,\ldots,\pmb{\xi}_{N^2(T)}\}$, which is a subset of $L_r(T)$ of cardinality $N^2(T) = \dim\mathcal{P}_{r-2}\Lambda^2(T) = \frac{r(r-1)}{2}$. For $i = 1,\ldots, N^2(T)$, define the subset \[ \Gamma_r(i) \doteq \{ \pmb{\xi} \in\ \Gamma \mid \lambda_0(\pmb{\xi}) < \lambda_0 (\pmb{\xi}_i) \} \] We define $\mathcal{F}^2(T)$ as the set $\{s_1,\ldots,s_{N^2(T)}\}$ where \begin{equation} s_i = \overline{ \tau_{\pmb{\xi}_i}(T) \setminus \left(\bigcup_{\pmb{\xi}\in\Gamma_r(i)}\tau_{\pmb{\xi}}(T) \right) }. \label{eq : def case k = 2} \end{equation} The closure is needed to preserve the structure of cell. Finally, we define $\mathcal{F}^1(T)$ as the subset of $\Sigma_r^1(T)$ made of those small $1$-simplices that are on the boundary of cells in $\mathcal{F}^2(T)$. We now propose a possible choice of $\Gamma_r$ for each polynomial degree $r$. We identify each point $x$ of $T$ with the triple $(\lambda_0 (x) , \lambda_1 (x) , \lambda_2( x))$ (e.g. the barycenter is $(1/3,1/3,1/3)$). Let \begin{equation} \Gamma_r = \begin{cases} \{ (1, 0, 0 ) \} \text{ if $r = 2$, }\\ \tau_{\boldsymbol{\zeta}_r}(\Gamma_{r-1}) \cup \Delta_r \text{ if $r > 2$ }, \end{cases} \end{equation} where \begin{align*} \boldsymbol{\zeta}_r & = \begin{cases} \left ( \frac{r -1}{r}, 0, \frac{1}{r} \right ) \text{ if $r$ is odd,}\\ \left ( \frac{r -1}{r}, \frac{1}{r}, 0 \right ) \text{ if $r$ is even, } \end{cases} \\ \Delta_r & = \begin{cases} \left\{\frac{1}{r} ( i, 1 - i, 0 ) \text{ for } i = 1,\ldots,r, \ i\neq \frac{r+1}{2} \right\}, \text{ if $r$ is odd,}\\ \left\{\frac{1}{r} ( i, 0 , 1- i) \text{ for } i = 1,\ldots,r, \ i\neq \frac{r}{2} \right\},\text{ if $r$ is even. 
}\end{cases} \end{align*} For example $\Gamma_3$ is \begin{align*} \Gamma_3 &= \left \{ \left ( \frac{2}{3}, 0, \frac{1}{3} \right ) , \left ( \frac{1}{3}, \frac{2}{3}, 0 \right ) , ( 1, 0, 0 ) \right \}, \end{align*} since $\tau_{( 2/3 , 0 , 1/3 )}$ maps $( 1, 0, 0)$ to $ ( 2/3 , 0 , 1/3 )$ and $\Delta_3 = \{ (1/3, 2/3, 0 ) , (1, 0, 0) \}$. Similarly, $\Gamma_4$ is given by \begin{equation*} \Gamma_4 = \left \{ \left ( \frac{1}{2}, \frac{1}{4}, \frac{1}{4} \right ), \left ( \frac{1}{4}, \frac{3}{4}, 0 \right ), \left ( \frac{3}{4}, \frac{1}{4}, 0 \right ), \left ( \frac{1}{4}, 0, \frac{3}{4} \right ), \left ( \frac{3}{4}, 0, \frac{1}{4}\right ), (1, 0, 0) \right \}. \end{equation*} See Figure \ref{fig:cells} for a depiction of the set $\Gamma_r$ and the resulting cells $\mathcal{F}$ for $r = 2$, $3$ and $4$. \begin{figure} \centering \begin{tikzpicture}[scale = .66] \draw (0,0) -- (4,0) -- (2,4) -- cycle; ll (0,0) circle (2pt); ll (4,0) circle (2pt); ll (2,0) circle (2pt); ll (1,2) circle (2pt); ll (3,2) circle (2pt); \draw (0,0) node [anchor = north east] {$ \boldsymbol{x}_1 $}; \draw (2,4) node [anchor = south] {$ \boldsymbol{x}_0 $}; \draw (4,0) node [anchor = north west] {$ \boldsymbol{x}_2 $}; ll[lightgray] (2,4) circle (2pt); \end{tikzpicture} \begin{tikzpicture}[scale = 0.44] \draw (0,0) -- (6,0) -- (3,6) -- cycle; ll (0,0) circle (3pt); ll (2,0) circle (3pt); ll (4,0) circle (3pt); ll (6,0) circle (3pt); ll (2,4) circle (3pt); ll (2,0) circle (3pt); ll (1,2) circle (3pt); ll (3,2) circle (3pt); ll (3,6) circle (3pt); ll (4,4) circle (3pt); ll (5,2) circle (3pt); \draw (1,2) -- (2,0) -- (4,4); \draw (0,0) node [anchor = north east] {$ \boldsymbol{x}_1 $}; \draw (3,6) node [anchor = south] {$ \boldsymbol{x}_0 $}; \draw (6,0) node [anchor = north west] {$ \boldsymbol{x}_2 $}; ll[lightgray] (3,6) circle (3pt); ll[lightgray] (4,4) circle (3pt); ll[lightgray] (1,2) circle (3pt); \end{tikzpicture} \begin{tikzpicture}[scale = .66] \draw (0,0) -- (4,0) -- (2,4) -- cycle; ll (0,0) circle (2pt); ll (4,0) circle (2pt); ll[lightgray] (2,4) circle (2pt); ll (2,0) circle (2pt); ll (1,2) circle (2pt); ll (3,2) circle (2pt); ll (1,0) circle (2pt); ll (3,0) circle (2pt); ll[lightgray] (0.5, 1) circle (2pt); ll (1.5,1) circle (2pt); ll[lightgray] (1.5,3) circle (2pt); ll[lightgray] (3.5,1) circle (2pt); ll[lightgray] (2.5,3) circle (2pt); ll (2.5,1) circle (2pt); ll[lightgray] (2,2) circle (2pt); \draw (0.5, 1) -- (1,0) -- (2.5, 3); \draw (1.5, 3) -- (3,0) -- (3.5, 1); \draw (0,0) node [anchor = north east] {$ \boldsymbol{x}_1 $}; \draw (2,4) node [anchor = south] {$ \boldsymbol{x}_0 $}; \draw (4,0) node [anchor = north west] {$ \boldsymbol{x}_2 $}; ll[lightgray] (2,4) circle (2pt); ll[lightgray] (0.5, 1) circle (2pt); ll[lightgray] (1.5,3) circle (2pt); ll[lightgray] (3.5,1) circle (2pt); ll[lightgray] (2.5,3) circle (2pt); ll[lightgray] (2,2) circle (2pt); \end{tikzpicture} \caption{Cells of $ \mathcal{F} $ for $ r = 2 $, $ r = 3 $ and $ r = 4 $, left to right. Gray dots represent the set $\Gamma_r$, that is, vertices of the triangles considered as (small) $ 2 $-simplices.} \label{fig:cells} \end{figure} \begin{remark}\label{rmk:hierar} The recursiveness in the definition of $\Gamma_r$ gives a hierarchy on the weights associated with these cells. In fact, as degree $ r $ is increased by one, the associated family $ \mathcal{F} $ is obtained by adding a stripe on one side of the triangle, as shown in Figure \ref{fig:hierarchy}. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale = .75] \draw (0,0) -- (3,0) -- (1.5,3) -- cycle; ll (0,0) circle (2pt); ll (4,0) circle (2pt); ll (2,4) circle (2pt); ll (2,0) circle (2pt); ll (1,2) circle (2pt); ll (3,2) circle (2pt); ll (1,0) circle (2pt); ll (3,0) circle (2pt); ll (0.5, 1) circle (2pt); ll (1.5,1) circle (2pt); ll (1.5,3) circle (2pt); ll (3.5,1) circle (2pt); ll (2.5,3) circle (2pt); ll (2.5,1) circle (2pt); ll (2,2) circle (2pt); \draw (0.5, 1) -- (1,0); \draw (1,0) -- (2,2); \draw[dotted] (3,0) -- (3.5, 1); \draw[dotted] (3,0) -- (4,0) -- (2,4) -- (1.5,3); \draw[dotted] (3,0) -- (3.5,1); \draw[dotted] (2,2) -- (2.5,3); \draw[dashed] (0,0) -- (-1,0) -- (1.5, 5) -- (2,4); \draw[dashed] (0.5,1) -- (0,2); \draw[dashed] (1.5,3) -- (1,4); \draw[dashed] (0,0) -- (-0.5,1); ll (-1, 0) circle (2pt); ll (-0.5,1) circle (2pt); ll (0, 2) circle (2pt); ll (0.5, 3) circle (2pt); ll (1, 4) circle (2pt); ll (1.5,5) circle (2pt); \draw (-1,0) node [anchor = north east] {$ \boldsymbol{x}_1 $}; \draw (1.5,5) node [anchor = south] {$ \boldsymbol{x}_0 $}; \draw (4,0) node [anchor = north west] {$ \boldsymbol{x}_2 $}; \end{tikzpicture} \caption{Cells of $ \mathcal{F} $ for $ r = 5 $. Step from $ r = 3$ to $ r=4 $ are obtained by adding the dotted part, step from $ r =4$ to $ r = 5$ is obtained by adding the dashed part.} \label{fig:hierarchy} \end{figure} \end{remark} Before proving unisolvence, we check that $\Gamma_r$ has the right cardinality. This is immediate from Remark \ref{rmk:hierar}. \begin{lemma} The set $\Gamma_r$ has cardinality $\lvert \Gamma_r \rvert$ equal to the dimension of $\mathcal{P}_{r-2}\Lambda^2(T)$, that is \[ \lvert \Gamma_r \rvert = \frac{r(r-1)}{2} .\] \end{lemma} \begin{proof} We use induction on $r$. The result clearly holds for $r = 2$, see Figure \ref{fig:cells}. For $r>2$ the sets $\tau_{\boldsymbol{\zeta}_r}(\Gamma_{r-1})$ and $\Delta_r$ are disjoint, therefore the cardinality of $\Gamma_r$ is given by \begin{align*} \lvert \Gamma_r \rvert & = \lvert \Gamma_{r-1} \rvert + \lvert \Delta_r \rvert\\ & = \frac{(r-1)(r-2)}{2} + r - 1\\ & = \frac{r ( r - 1 ) }{ 2 }. \end{align*} This concludes the proof. \end{proof} To prove unisolvence of weights here defined we shall work as follows. Consider the sequence \begin{equation} \label{eq:sequence} \mathcal{P}_r \Lambda^0 (T) \xrightarrow{\der} \mathcal{P}_{r-1} \Lambda^1 (T) \xrightarrow{\der} \mathcal{P}_{r-2} \Lambda^2 (T). \end{equation} The first and the last space are isomorphic under the action of the (smooth) Hodge star operator $ \star $ \cite{MarsdenBook}. This rather easy fact induces an interesting consequence, which consists in the fact that techniques adopted to prove unisolvence of the spaces at the extremity of \eqref{eq:sequence} are very close. On the contrary, unisolvence for the central space is obtained without direct computations but just relying on the structure of the sequence \eqref{eq:sequence} itself. We are ready to prove the unisolvence of $\mathcal{F}$. \begin{theorem} \label{thm:main} If the assumptions of Theorem 1 hold, then $\mathcal{F}$ is a unisolvent and minimal physical sysdofs. \end{theorem} \begin{proof} The minimality holds by construction for $k = 0$ and $k = 2$. For $k = 0$, unisolvence is just the standard Lagrange unisolvence on poised sets. For $k = 2$, let $\omega \in\mathcal{P}_{r-2}\Lambda^2(T)$ and assume that $w(\omega,s) = 0$ for each $s\in\mathcal{F}^2(T)$. 
Then, by linearity of the integral, it follows that \[ \int_{\tau_{\pmb{\xi}}(T)}\omega = 0,\quad\forall\pmb{\xi} \in \Gamma. \] Then Theorem 1 implies $\omega = 0$. Finally, for $k = 1$, consider the following diagram: \begin{equation*} \small\begin{tikzcd} 0 \arrow[r] & \R \arrow[r, "\iota", hook] \arrow[d, "\Id"] & \mathcal{P}_r\Lambda^0(T) \arrow[r, "d"] \arrow[d, "\mathfrak{R}^0"] & \mathcal{P}_{r-1}\Lambda^1(T) \arrow[r, "d"] \arrow[d, "\mathfrak{R}^1"] & \mathcal{P}_{r-2}\Lambda^2(T) \arrow[r] \arrow[d, "\mathfrak{R}^2"] & 0 \\ 0 \arrow[r] & \R \arrow[r, "\psi"] & C^0(\mathcal{F}^{\bullet}(T)) \arrow[r, "\delta"] & C^1(\mathcal{F}^{\bullet}(T)) \arrow[r, "\delta"] & C^2(\mathcal{F}^{\bullet}(T)) \arrow[r] & 0 \end{tikzcd} \end{equation*} We already know that the rows are exact and we have just showed that the maps $\mathfrak{R}^0$ and $\mathfrak{R}^2$ are isomorphisms. Then, by the Five Lemma (see section 2.1 of \cite{Hatcher}) it follows that also $\mathfrak{R}^1$ is an isomorphism. In particular minimality holds also for $k=1$. \end{proof} The idea of proving the unisolvence of the intermediate space $k = 1$ using the Five Lemma appeared for the first time in \cite{ZAR}, but it was not exploited since a proof of the unisolvence for the case $k = 2$ was lacking. The problem with the physical sysdofs defined in \cite{ZAR} is that the $2$-cells cannot be written as differences of small $2$-simplices as in \eqref{eq : def case k = 2} and therefore Theorem 1 does not apply. We remark an interesting aspect. For $ k = 1 $, the set involved is the set of small simplices defined in \cite{nostropreprint}, which is a subset of that of \emph{small simplices} introduced by Bossavit in \cite{RapettiBossavit}. Interestingly, for $ k = 2 $ one does not find its $2$-dimensional counterpart, but the set $ \Sigma_r^2 (T) $ defined in \cite{ENUMATH}. To the authors' knowledge, this is the first construction in which those sets appears paired in such a natural fashion. \section{Numerical tests} \label{sect:NumTest} We offer a computational proof of unisolvece exploiting Lemma \ref{eq: alternative unisolvence}. We compute the conditioning number of the Vandermonde matrices of the sequence $$ \mathcal{P}_r \Lambda^0 (T) \rightarrow \mathcal{P}_r \Lambda^1 (T) \rightarrow \mathcal{P}_r \Lambda^2 (T), $$ for $ r-2 = 1, \ldots, 4 .$ These quantities are reported in Table \ref{tab:condVk0} and confirm, up to the considered degree, the theoretical statement proved in Theorem \ref{thm:main}. The basis chosen for such computations is the monomial one, and barycentric coordinates offer a compact way to visualise it. In particular, when $k= 0$, it is defined as $ \boldsymbol{\lambda}^{\boldsymbol{\alpha}} $ with $ |\boldsymbol{\alpha} | = r$. When $ k = 1 $ it is defined as $ \boldsymbol{\lambda}^{\boldsymbol{\alpha}} \mathrm{d} x + \boldsymbol{\lambda}^{\boldsymbol{\beta}} \mathrm{d} y$ with $ |\boldsymbol{\alpha} | = r $ and $ |\boldsymbol{\beta} | = 0 $ and $ |\boldsymbol{\beta} | = r $ and $ |\boldsymbol{\alpha} | = 0 $. Finally, for $k = 2$, such a basis is $ \boldsymbol{\lambda}^{\boldsymbol{\alpha}} \mathrm{d} x \wedge \mathrm{d} y$, again with $ |\boldsymbol{\alpha} | = r $. Results for $ r = 1, \ldots, 6 $ are reported in Table \ref{tab:condVk0}. To improve conditioning numbers Bernstein bases or orthogonal polynomials shall be taken into account. 
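For the case $k=0$ the construction just described is easy to reproduce. The following Python sketch (a toy re-implementation, not the code behind Table~\ref{tab:condVk0}, so the numbers need not coincide with those reported there, which may depend on the ordering and the reference triangle used) assembles the generalized Vandermonde matrix of point evaluations at the principal lattice against the barycentric monomials $\boldsymbol{\lambda}^{\boldsymbol{\alpha}}$ with $|\boldsymbol{\alpha}| = r$, and computes its condition number.
\begin{verbatim}
import numpy as np
from math import comb

# Generalized Vandermonde matrix for k = 0 on the reference triangle: rows are
# evaluations at the principal-lattice points (barycentric coordinates alpha/r),
# columns are the barycentric monomials lambda^alpha with |alpha| = r.
# Toy conditioning check only.

def vandermonde_k0(r):
    alphas = [(r - i - j, i, j) for i in range(r + 1) for j in range(r + 1 - i)]
    assert len(alphas) == comb(r + 2, 2)          # = dim P_r(T)
    pts = [np.array(a, dtype=float) / r for a in alphas]   # principal lattice
    return np.array([[np.prod(lam ** np.array(a)) for a in alphas]
                     for lam in pts])

for r in range(1, 7):
    print(r, np.linalg.cond(vandermonde_k0(r)))
\end{verbatim}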
However, unisolvence is clearly independent from the choice of the basis for $ \mathcal{P}_r \Lambda^k (T) $ and we thus leave this problem of optimisation to further investigations. \begin{table}[h] \centering \begin{tabular}{c|ccc} $ r $ & $ k = 0 $ & $ k = 1$ & $ k = 2 $ \\ \hline $ 1 $ & $ 3.7320 \times 10^0 $ & $ 4.4985 \times 10^0 $ & $ 3.1682 \times 10^1 $ \\ $ 2 $ & $ 3.0969 \times 10^1 $ & $2.3281 \times 10^1 $ & $ 5.2130 \times 10^2 $ \\ $ 3 $ & $ 3.1245 \times 10^2 $ & $ 8.6268 \times 10^1 $ & $ 9.3809 \times 10^3 $ \\ $ 4 $ & $ 3.4290 \times 10^3 $ & $ 5.6267 \times 10^2 $ & $ 1.3525 \times 10^6 $ \\ $ 5 $ & $ 3.9513 \times 10^4 $ & $2.9791 \times 10^3 $ & -- \\ $ 6 $ & $ 4.7004 \times 10^5 $ & -- & -- \\ \end{tabular} \caption{Conditioning number of the Vandermonde matrix for $k=0, 1, 2$.} \label{tab:condVk0} \end{table} \begin{remark} We stress that Table \ref{tab:condVk0} shall be read \emph{diagonally}. In particular, when the degree for $ k = 0 $ is $ r $, the corresponding data for $ k = 1 $ and $ k = 2 $ are, respectively, those associated with $ r - 1 $ and $ r - 2 $. \end{remark} \subsection{Some interpolation tests} In section \ref{sect:interpdef} we have defined how an interpolator can be defined using weights and we have briefly discussed its features. In particular, we showed that under the hypothesis of Theorem \ref{thm:main} such an operator is well defined and commutes with the exterior derivative. We now give an explicit meaning of this fact, using weights to interpolate a $ 0 $-form $ \omega $ and its differential $ \der \omega \in \Lambda^1 (T) $. For ease of the reader we deal with the standard $2$-simplex. This is not restrictive, since one may always reduce to this case by passing to barycentric coordinates. We thus consider a $ 0 $-form $$ \omega = e^{x} \sin(\pi y),$$ whence $$ d \omega = e^{x} \sin (\pi y) \der x + \pi e^x \cos(\pi y) \der y.$$ We interpolate by means of the interpolator \eqref{eq:interpolator} and study the convergence as $ r $ increases. The most informative norm for such a situation is the $0$-norm \cite{Harrison}, which is defined as \begin{equation} \Vert \omega \Vert_0 \doteq \sup_{c \in \mathcal{C}_k (T)} \frac{1}{|c|_0}\left\vert \int_c \omega \right\vert, \end{equation} being $ |c|_0 $ the $k$-th volume of the $k$-simplex $ T $ and $ \mathcal{C}_k (T) \doteq \mathcal{C}_k(\mathcal{F}^{\bullet}(T))$ the set of all possible $k$-chains supported in $ T $. Results are reported in Table \ref{tab:conv}, where a comparison with the corresponding points for $ k = 0 $ is included, and shown in Figure \ref{fig:conv}. \begin{table}[h] \centering \begin{tabular}{c|cc} & $ k = 0 $ & $ k = 1 $ \\ \hline $ r $ & $ \Vert \omega - \Pi \omega \Vert_0 $ & $ \Vert \omega - \Pi \omega \Vert_0 $ \\ \hline $ 1 $ & -- & $ 2.5334 $ \\ $ 2 $ & $ 0.3377 \times 10^0 $ & $ 1.1224 $ \\ $ 3 $ & $ 0.6967 \times 10^{-1} $ & $ 0.4292 $ \\ $ 4 $ & $ 0.1792 \times 10^{-1} $ & $ 0.0782 $ \\ $ 5 $ & $ 0.1600 \times 10^{-2} $ & $ 0.0171 $ \\ $ 6 $ & $ 0.4314 \times 10^{-3} $ & -- \end{tabular} \caption{Trend of $ \omega - \Pi \omega $ with respect to the $0$-norm for the $ 1 $-form $ \omega $ above defined and its potential. 
The $0$-norm of the function for the case $ k = 0 $ is approximately $ 1.7319 $, whereas $ \Vert \omega \Vert_0 \sim 2.5334 $ for the case $ k = 1 $.} \label{tab:conv} \end{table} \begin{figure}[h] \centering \includegraphics[scale = .34]{errork0.eps} \quad \includegraphics[scale = .34]{errork1.eps} \caption{Plot of the convergence: a comparison between the nodal case $ k = 0 $ and the simplicial case $ k = 1 $ in semi-logarithmic scale. Left, the case $ k = 0$; right, the case $ k = 1 $. Notice the degree shift, explained by the sequence.} \label{fig:conv} \end{figure} \section{Conclusions and future directions} \label{sect:concl} In this work we have proposed new physical degrees of freedom for the second family $\mathcal{P}_{r-k}\Lambda^k$ in the two dimensional case. We have rigorously proved their unisolvence and we have shown their effectiveness with an interpolation test. The three dimensional case is trickier. In principle one could use the same technique to construct unisolvent and minimal physical degrees of freedom for the case $k=3$, but unisolvence and minimality of the intermediate spaces in the sequence, that is $k = 1$ and $k=2$, will not follow trivially since the Five Lemma cannot be applied in this situation. This will be the object of future research. \bibliographystyle{plain} \bibliography{bibliografia.bib} \end{document}
2205.05743v1
http://arxiv.org/abs/2205.05743v1
A Model for Birdwatching and other Chronological Sampling Activities
\documentclass[11pt]{article} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathtools} \usepackage{xcolor} \usepackage{ bbold } \usepackage{subfigure} \theoremstyle{theorem} \newtheorem{theorem}{Theorem} \newtheorem{prop}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\Deg}{Deg} \DeclareMathOperator{\supp}{supp} \makeatletter \renewcommand\@biblabel[1]{[#1]} \makeatother \title{A Model for Birdwatching and other \\ Chronological Sampling Activities} \author{Jes\'us ~A. De Loera$^1$, Edgar Jaramillo-Rodriguez$^1$, \\ Deborah Oliveros$^2$, and Antonio J. Torres$^2$} \date{ $^1$Department of Mathematics, University of California, Davis\\ $^2$ Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico\\[2ex] \today } \begin{document} \maketitle \begin{abstract} In many real life situations one has $m$ types of random events happening in chronological order within a time interval and one wishes to predict various milestones about these events or their subsets. An example is birdwatching. Suppose we can observe up to $m$ different types of birds during a season. At any moment a bird of type $i$ is observed with some probability. There are many natural questions a birdwatcher may have: how many observations should one expect to perform before recording all types of birds? Is there a time interval where the researcher is most likely to observe all species? Or, what is the likelihood that several species of birds will be observed at overlapping time intervals? Our paper answers these questions using a new model based on random interval graphs. This model is a natural follow up to the famous coupon collector's problem. \end{abstract} \section{Introduction.}\label{intro} Suppose you are an avid birdwatcher and you are interested in the migratory patterns of different birds passing through your area this winter. Each day you go out to your backyard and keep an eye on the skies; once you see a bird you make a note of the species, day, and time you observed it. You know from prior knowledge that there are $m$ different species of birds that pass over your home every year and you would love to observe at least one representative of each species. Naturally, you begin to wonder: {\em after $n$ observations, how likely is it that I have seen every type of bird?} If we only care that all $m$ types of birds are observed at least once after $n$ observations, we recognize this situation as an example of the famous \emph{coupon collector's problem} (for a comprehensive review of this problem see \cite{Coupon} and references therein). In this old problem a person is trying to collect $m$ types of objects, the coupons, labeled $1,2,\dots ,m$. The coupons arrive one by one as an ordered sequence $X_1,X_2, \ldots$ of independent identically distributed (i.i.d.) random variables taking values in $[m] = \{1,\ldots, m\}$. But a professional birdwatcher is also interested in more nuanced information than the coupon collector. To properly understand interspecies interactions, one not only hopes to observe every bird, but also needs to know which species passed through the area at the same time(s). 
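Before turning to such overlap questions, the classical baseline can be checked by simulation. The following Python sketch is a minimal Monte Carlo experiment under the simplest assumption of i.i.d.\ uniform observations; it estimates the expected number of observations needed to record all $m$ species and compares it with the exact coupon-collector answer $m\sum_{k=1}^{m} 1/k$.
\begin{verbatim}
import random

# Monte Carlo estimate of the expected number of i.i.d. uniform observations
# needed to see all m species at least once (coupon collector's problem);
# the exact expectation is m * (1 + 1/2 + ... + 1/m).

def draws_to_collect(m, rng):
    seen, n = set(), 0
    while len(seen) < m:
        seen.add(rng.randrange(m))   # one uniformly random observation
        n += 1
    return n

m, trials = 10, 20000
rng = random.Random(0)
estimate = sum(draws_to_collect(m, rng) for _ in range(trials)) / trials
exact = m * sum(1.0 / k for k in range(1, m + 1))
print(estimate, exact)               # both close to 29.29 for m = 10
\end{verbatim}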
For example, the birdwatcher might also ask: \begin{itemize} \item \emph{What are the chances that the visits of $k$ types of birds do not overlap at all?} \item \emph{What are the chances that a pair of birds is present on the same time interval?} \item \emph{What are the chances of one bird type overlapping in time with $k$ others?} \item \emph{What are the chances that all the bird types overlap in a time interval?} \end{itemize} We note that very similar situations, where scientists collect or sample time-stamped data that comes in $m$ types or classes and wish to predict overlaps, appear in applications as diverse as archeology, genetics, job scheduling, and paleontology \cite{GOLUMBIC,Fishburn85,pippenger,paleobook}. The purpose of this paper is to present a new \emph{random graph model} to answer the four time-overlap questions above. Our model is very general, but to avoid unnecessary formalism and technicalities, we present clear answers in some natural special cases that directly generalize the coupon collector problem. For the special cases we analyze, the only tools we use are a combination of elementary probability and combinatorial geometry. \subsection{Establishing a general random interval graph model.} In order to answer any of the questions above we need to deal with one key problem: how do we estimate which time(s) each species of bird might be present from a finite number of observations? To do so, we will make some modeling choices which we outline below. The first modeling choice is that our observations are samples from a stochastic process indexed by a real interval $[0,T]$ and taking values in $[m]$. We recall the definition of a stochastic process for the reader (see {\cite{StochProcess}): Let $I$ be a set and let $(\Omega, \mathcal{F}, P)$ be a probability space. Suppose that for each $\alpha \in I$, there is a random variable $Y_\alpha : \Omega \to S \subset \mathbb{R}$ defined on $(\Omega, \mathcal{F}, P)$. Then the function $Y : I \times \Omega \to S$ defined by $Y(\alpha, \omega) = Y_\alpha(\omega)$ is called a \emph{stochastic process} with \emph{indexing set} $I$ and \emph{state space} $S$, and is written $Y = \{Y_\alpha : \alpha \in I\}$. When we conduct an observation at some time $t_0 \in [0,T]$, we are taking a sample of the random variable $Y_{t_0}$. For each $i\in [m]$, the probabilities that $Y_t=i$ give us a function from $[0,T] \to [0,1]$, which we call the \emph{rate function} of $Y$ corresponding to $i$; the name is inspired by the language of Poisson point processes where the density of points in an interval is determined by a \emph{rate} parameter (see \cite{Ross_Stoch}). \begin{definition}[Rate function] Let $Y = \{Y_t: t \in[0,T]\}$ be a stochastic process with indexing set $I = [0,T]$ and state space $S = [m]$. The \emph{rate function} corresponding to label $i\in S$ in this process is the function $f_i : I \to [0,1]$ given by $$f_i(t)=P(Y_t =i)= P(\{\omega: Y(t,\omega)=i\}).$$ \end{definition} Figure \ref{fig:2examples} gives two examples of the rate functions of some hypothetical stochastic processes (we will clarify the meaning of stationary and non-stationary later in this section when we discuss a special case of our model). Observe that at a fixed time $t_0$, the values $f_i(t_0)$ sum to 1 and thus determine the probability density function of $Y_{t_0}$. Therefore, the rate functions describe the change of the probability density functions of the variables $Y_t$ with respect to the indexing variable $t$. 
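To make the sampling model concrete, the following Python sketch (with invented rate functions for $m=3$ species on $[0,T]$, normalized so that they sum to one at each time; purely illustrative) draws $n$ observation times and, at each time $t$, samples the label $Y_t = i$ with probability $f_i(t)$.
\begin{verbatim}
import numpy as np

# Sampling the observation process: at each observation time t the label
# Y_t = i is drawn with probability f_i(t).  The rate functions below are
# invented for illustration only (m = 3 species on [0, T]).

rng = np.random.default_rng(0)
T, m, n = 10.0, 3, 25

def rates(t):
    # invented "presence" profiles, normalized so that sum_i f_i(t) = 1
    f = np.array([max(0.0, 1.0 - t / 6.0),
                  np.exp(-((t - 5.0) ** 2) / 4.0),
                  max(0.0, (t - 4.0) / 6.0)])
    return f / f.sum()

times = np.sort(rng.uniform(0.0, T, n))
labels = np.array([rng.choice(m, p=rates(t)) for t in times])
print(list(zip(np.round(times, 2), labels + 1)))   # (time, observed species)
\end{verbatim}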
Next, note that the set of times where species $i$ might be present is exactly the \emph{support} of the rate function $f_i$. Recall that the support of a function is the subset of its domain on which the function is non-zero; in our case this will be a portion of $[0,T]$. Therefore, \emph{our key problem is to estimate the support of the rate functions from finitely many samples}. \begin{figure}[h] \centering \subfigure[Stationary]{\label{fig:stat_timeline}\includegraphics[width=65mm]{Stat_Timeline.pdf}} \subfigure[Non-Stationary]{\label{fig:timeline}\includegraphics[width=59mm]{Timeline.pdf}} \caption{Two examples of hypothetical rate functions.} {\label{fig:2examples}} \end{figure} We note that the stochastic process $Y$ is defined to take values in $[m]$ due to a modeling choice on our part. Alternatively, one could have $Y$ take values in the power set $2^{[m]}$, so as to allow for multiple species of birds to be observed at the same time. However, choosing $[m]$ rather than $2^{[m]}$ simplifies some calculations and, moreover, is quite reasonable. Rather than registering ``three birds at 6 o'clock,'' our birdwatcher can instead register three sightings: one bird at 6:00:00, a second at 6:00:01, and a third at 6:00:02, for example. This brings us to our next modeling choice: all the rate functions $f_i$ have connected support for each $i \in [m]$. This is reasonable for our motivation; after all, a bird species first seen on a Monday and last seen on a Friday is not likely to suddenly be out of town on Wednesday. The main benefit of this assumption is that now the support of the rate function $f_i$, $\supp(f_i)$, is a sub-interval of $[0,T]$. This fact provides a natural way of approximating the support of $f_i$: given a sequence of observations $Y_{t_1}, Y_{t_2} , \ldots, Y_{t_n}$ with $0 \leq t_1 < t_2 < \ldots < t_n \leq T$, let $I_n(i)$ denote the sub-interval of $[0, T]$ whose endpoints are the first and last times $t_k$ for which $Y_{t_k} = i$. Note that it is possible for $I_n(i)$ to be empty or a singleton. It follows that $I_n(i) \subset \supp(f_i)$, so we can use it to approximate $\supp(f_i)$. We call the interval $I_n(i)$ the \emph{empirical support} of $f_i$, as it is an approximation of $\supp(f_i)$ taken from a random sample. In summary, our model is actually quite simple: given a sequence of observations $Y_{t_1}, Y_{t_2} , \ldots, Y_{t_n}$ we construct $m$ random intervals $I_n(1), \ldots, I_n(m)$ whose endpoints are the first and last times we see the corresponding species. These intervals, known as the \emph{empirical supports}, are approximations of the supports of the rate functions, $f_i$, and satisfy $\supp(f_i) \supset I_n(i)$. The four birdwatching questions above may be expressed in terms of the empirical supports as follows: \begin{itemize} \item \emph{What are the chances that none of the empirical supports $I_n(i)$ intersect?} \item \emph{What are the chances that a particular pair of empirical supports $I_n(i)$ and $I_n(j)$ intersect?} \item \emph{What are the chances that one empirical support, $I_n(i)$, intersects with $k$-many others?} \item \emph{What are the chances that the collection of empirical supports has a non-empty intersection?} \end{itemize} To make these questions even easier to analyze, we will present a combinatorial object: an \emph{interval graph} that records the intersections of the intervals $I_n(i)$ in its edge set. 
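Before the formal definitions below, here is a minimal illustrative sketch (hypothetical Python, not taken from the paper) showing how the empirical supports and their pairwise intersections can be computed from a sequence of labeled observations; the example data are loosely in the spirit of Figure \ref{fig:nerve_example}, though not the exact data shown there.
\begin{verbatim}
from itertools import combinations

def empirical_supports(times, labels):
    """Return {label: (first time, last time)} for each observed label."""
    supports = {}
    for t, lab in zip(times, labels):
        lo, hi = supports.get(lab, (t, t))
        supports[lab] = (min(lo, t), max(hi, t))
    return supports

def interval_graph_edges(supports):
    """Pairs {i, j} whose empirical supports overlap (closed intervals)."""
    edges = []
    for i, j in combinations(sorted(supports), 2):
        (a, b), (c, d) = supports[i], supports[j]
        if max(a, c) <= min(b, d):
            edges.append((i, j))
    return edges

# Small example with eleven observations and four labels.
times = list(range(1, 12))
labels = [1, 2, 1, 3, 2, 3, 1, 3, 4, 3, 4]
supp = empirical_supports(times, labels)
print(supp)                        # empirical supports of each label
print(interval_graph_edges(supp))  # edge set of the interval graph
\end{verbatim}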
\begin{definition} Given a finite collection of $m$ intervals on the real line, its corresponding interval graph, $G(V,E)$, is the simple graph with $m$ vertices, each associated to an interval, such that an edge $\{i,j\}$ is in $E$ if and only if the associated intervals have a nonempty intersection, i.e., they overlap. \end{definition} Figure \ref{fig:nerve_example} demonstrates how we construct the desired interval graph from some observations. Figure \ref{fig:data} shows a sequence of $n=11$ points on the real line, which corresponds to the indexing set $I$ of our random process $Y$. Above each point we have a label, representing a sample from $Y$ at that time. Displayed above the data are the empirical supports $I_n(i)$ for each $i \in [m] = [4]$. Finally, Figure \ref{fig:int_graph} shows the interval graph constructed from these four intervals where each vertex is labeled with the interval it corresponds to. In this example there are no times shared by the species $\{1,2\}$ and the species $\{4\}$, so there are no edges drawn between those nodes. We emphasize that the interval graph constructed in this way will contain up to $m$-many vertices, but may contain fewer if some of the intervals $I_n(i)$ are empty, i.e., if we never see species $i$ in our observations. \begin{figure}[h] \centering \subfigure[Labeled observations and induced intervals]{\label{fig:data}\includegraphics[width=55mm]{data.pdf}} \subfigure[Interval Graph]{\label{fig:int_graph}\includegraphics[width=30mm]{interval_graph.pdf}} \subfigure[Nerve Complex]{\label{fig:nerve}\includegraphics[width=30mm]{nerve.pdf}} \caption{Example observations with their corresponding graph and nerve.} \label{fig:nerve_example} \end{figure} Although the interval graph $G(V,E)$ is constructed using only pairwise intersections, we can further encode all $k$-wise intersections for $k = 2, \ldots, m$ in a \emph{simplicial complex}, which is a way to construct a topological space by gluing \emph{simplices} (generalizations of triangles, tetrahedra, etc). A simplicial complex $K$ must satisfy the two requirements that every face of a simplex in $K$ is also in $K$ and that the non-empty intersection of any two simplices in $K$ is a face of both. (for an introduction to basic topology and simplicial complexes see \cite{ghrist2014elementary,Hatcher}). The construction we need is known as the \emph{nerve complex} (see \cite{kozlovbook}, \cite{tancer}, \cite[p.~197]{matousek2002lectures} and \cite[p.~31]{ghrist2014elementary}). \begin{definition} Let $\mathcal{F} = \{F_1,\ldots,F_m\}$ be a family of convex sets in $\mathbb{R}^d$. The \emph{nerve complex} $\mathcal{N}(\mathcal{F})$ is the abstract simplicial complex whose $k$-facets are the $(k+1)$-subsets $I \subset [m]$ such that $\bigcap_{i\in I} F_i \neq \emptyset$. \end{definition} Figure \ref{fig:nerve} shows the nerve complex constructed from the intervals $I_n(i)$ in Figure \ref{fig:data}. Note the presence of a 2-simplex (triangle) with vertices $\{1, 2, 3\}$ because the corresponding intervals mutually intersect. By construction, the interval graph $G$ is exactly the 1-skeleton of the nerve complex $\mathcal N$ generated by the intervals. In fact, because our intervals lie in a 1-dimensional space, $\mathcal N$ is completely determined by $G$. To see this, suppose we have a collection of intervals $(x_1,y_1), \ldots, (x_k,y_k)$ such that all intervals intersect pairwise. 
It follows that $y_i \geq x_j$ for all $i,j \in [k]$, and so $(\max \{x_1, \ldots,x_k\}, \min\{y_1, \ldots, y_k \})$ $\subseteq \cap_{i=1}^k (x_i,y_i)$. Hence the whole collection has non-empty intersection (this is a special case of Helly's theorem \cite{Barvinok}, which becomes necessary in higher-dimensional investigations). Thus, the $k$-dimensional faces of the nerve complex are precisely the $(k+1)$-cliques of the interval graph. Therefore, going forward we will refer to the nerve complex $\mathcal N$ and the graph $G$ interchangeably depending on the context, but the reader should understand that these are fundamentally the same object as long as the family of convex sets $\mathcal F$ lies in a 1-dimensional space. We stress that in higher dimensions the intersection graph of convex sets \emph{does not} determine the nerve complex (we demonstrate this by an example in the Conclusion). We can now present our random interval graph model in its entirety: \begin{definition}[The Random Interval Graph Model] We let $Y = \{ Y_t : t\in [0,T]\}$ be a stochastic process as above and let $\mathcal{P}=\{ t_1,t_2,...,t_n\}$ be a set of $n$ distinct observation times or sample points in $[0,T]$ with $t_1 < t_2 < \ldots < t_n$. Then let $Y = (Y_1, Y_2, \ldots, Y_n)$ be a random vector whose components $Y_i$ are samples from $Y$ where $Y_i = Y_{t_i}$, so each $Y_i$ takes values in $\{ 1, \ldots, m\}$. For each label $i$ we define the (possibly empty) interval $I_n(i)$ as the convex hull of the points $t_j$ for which $Y_j =i$, i.e., the interval defined by points colored $i$. Explicitly $I_n(i) = \text{Conv}(\{t_j \in \mathcal{P} : Y_j = i\})$, and we refer to $I_n(i)$ as the \emph{empirical support} of label $i$. Furthermore, because it comes from the $n$ observations or samples, we call the nerve complex, $\mathcal N(\{I_n(i): i =1, \ldots m \})$, the \emph{empirical nerve} of $Y$ and denote it $\mathcal N_n(Y)$. \end{definition} Under this random interval graph model our four questions can be rephrased in terms of the random graph $\mathcal N_n(Y)$: \begin{itemize} \item \emph{What is the likelihood that $\mathcal N_n(Y)$ has no edges?} \item \emph{What is the likelihood that a particular edge is present in $\mathcal N_n(Y)$?} \item \emph{What is the likelihood of having a vertex of degree at least $k$ in $\mathcal N_n(Y)$?} \item \emph{What is the likelihood that $\mathcal N_n(Y)$ is the complete graph $K_m$?} \end{itemize} Our original questions have become questions about random graphs! \subsection{The special case this paper analyzes.} We presented a very general model because it best captures the nuances and subtleties of our motivating problem. However, without additional assumptions on the distribution $Y$, the prevalence of pathological cases makes answering the motivating questions above very technical and unintuitive. Therefore, we focus on a special case of this problem where we make two additional assumptions on $Y$, so that our analysis only requires basic combinatorial probability. The first assumption we make is that our observations $Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}$ are mutually independent random variables. Note that we do not claim that all pairs of random variables $Y_s, Y_t$ for $s,t \in [0,T]$ are independent. We only claim this holds for all $s,t \in \{t_1, t_2, \ldots, t_n\}$. The second assumption we make is that the rate functions $f_i$ are constant throughout the interval $[0,T]$. 
In this case, there exist constants $p_1, p_2, \ldots, p_m \geq 0$ such that $\sum_{i=1}^m p_i = 1$ and $f_i(t) = p_i$ for all $t\in [0,T]$ and all $i \in [m]$. We call the special case of our model where both of these assumptions are satisfied the \emph{stationary case}, and all other cases \emph{non-stationary}. Figure \ref{fig:2examples} shows examples of a stationary case, \ref{fig:stat_timeline}, and a non-stationary case, \ref{fig:timeline}. We will also refer to the \emph{uniform case}, which is the extra-special situation where $p_i=\frac{1}{m}$ for all $i\in [m]$. Note Figure \ref{fig:stat_timeline} is stationary but not uniform. Of course, the stationary case is less realistic in many applications. For example, it is not unreasonable to suppose that the presence of a dove at 10 o'clock should influence the presence of another at 10:01, or that the presence of doves might fluctuate according to the season and time of day. However, the stationary case is still rich in content and, importantly, simplifies things so that this analysis requires only college-level tools from probability and combinatorics. Moreover, as we discuss below, the stationary case has a strong connection to the famed coupon collector problem and is of interest as a novel method for generating random interval graphs. The stationary case assumptions directly lead to two important consequences that greatly simplify our analysis. The first is that now the random variables $Y_{t_1} ,\ldots, Y_{t_n}$ are independent and identically distributed (i.i.d.) such that $P(Y_{t_k} = i) =p_i >0$. Note that this is true for any set of distinct observation times $\mathcal P = \{t_1, \ldots, t_n\}$. The second consequence simplifies things further still: though the points $\mathcal{P}$ corresponding to our sampling times have thus far been treated as arbitrary, one can assume without loss of generality that $\mathcal{P} =[n]= \{1,2,\ldots, n\}$ since all sets of $n$ points in $\mathbb{R}$ are combinatorially equivalent, as explained in the following lemma. \begin{lemma} \label{stat_lemma} Let $\mathcal{P} = \{x_1, \ldots, x_n \}$ and $\mathcal{P}' = \{x_1', \ldots, x_n' \}$ be two sets of $n$ distinct points in $\mathbb{R}$ with $x_1 < \ldots < x_n$ and $x_1' < \ldots < x_n'$. Let $Y = (Y_1, \ldots, Y_n)$ and $Y' = (Y_1', \ldots, Y_n')$ be random vectors whose components are i.i.d. random variables taking values in $[m]$ with $P(Y_j = i) = p_i > 0$ and $P(Y^{\prime}_j = i) = p_i > 0$. Then for any abstract simplicial complex $\mathcal{K}$ we have that $P(\mathcal{N}_n(\mathcal P, Y) = \mathcal{K}) = P(\mathcal{N}_n(\mathcal P', Y') = \mathcal{K})$. \end{lemma} \begin{proof} Let $c_1,c_2,\ldots, c_n$ be an arbitrary sequence of labels, so $c_i \in [m]$ for each $i$. Because $Y$ and $Y'$ are identically distributed, we have that $P(\cap_{i=1}^n \{Y_i =c_i\}) = P(\cap_{i=1}^n \{Y_i' =c_i\}).$ Therefore it suffices to show that whenever the two label sequences agree, i.e., $Y_i = Y_i' = c_i$ for all $i =1,\ldots, n$, the two empirical nerves are the same. Consider two empirical supports $I_n(j)$ and $I_n(k)$ of labels $j,k$. Since the points of $\mathcal P$ and $\mathcal P'$ receive the labels in the same order, $I_n(j)$ and $I_n(k)$ intersect if and only if the corresponding empirical supports $I^{\prime}_n(j)$ and $I^{\prime}_n(k)$ intersect, and hence the two empirical nerves contain the edge $\{j,k\}$ in exactly the same cases. This implies that the two nerves have the same edge set. 
Furthermore, as we observed before, by Helly's theorem on the line the empirical nerve is completely determined by its 1-skeleton. Hence both empirical nerves are the same. \end{proof} We now summarize the key assumptions of our model in the stationary case. {\bf Key assumptions for our analysis:} \emph{ In all results that follow let $Y = (Y_1, \ldots, Y_n)$ be a random vector whose components are i.i.d. random variables such that $P(Y_j = i) = p_i >0$ for all $i \in [m]$. As a consequence the rate functions of the underlying stochastic process are constant and each has the entire domain as its support. We denote by $\mathcal{N}_n = \mathcal{N}_n([n], Y)$ the empirical nerve of the random coloring induced by $Y$. We also denote the graph or 1-skeleton of $\mathcal{N}_n$ by the same symbol. When we refer to the uniform case this means the special situation when $p_i=\frac{1}{m}$ for all $i=1,\dots, m$.} \subsection{Context and prior work.} We want to make a few comments to put our work in context and mention prior work: The famous coupon collector problem that inspired us dates back to 1708 when it first appeared in De Moivre's \textit{De Mensura Sortis (On the Measurement of Chance)} \cite{Coupon}. The answer for the coupon collector problem depends on the assumptions we make about the distributions of the $X_i$. Euler and Laplace proved several results when the coupons are equally likely, that is when $P(X_i = k) = \frac{1}{m}$ for every $k\in [m]$. The problem lay dormant until 1954 when H. Von Schelling obtained the expected waiting time when the coupons are not equally likely \cite{Schelling}. More recently, Flajolet et al. introduced a unified framework relating the coupon collector problem to many other random allocation processes \cite{FLAJOLET}. We note that the stationary case of our model has the same assumptions as this famous problem: an observer receives a sequence of i.i.d. random variables taking values in $[m]$. In the language of our model, the coupon collector problem could be posed as, \emph{What is the likelihood that the nerve} $\mathcal{N}_n(Y)$ \emph{will contain exactly $m$ vertices?} Thus, we can consider this model a generalization of the coupon collector problem which seeks to answer more nuanced questions about the arrival of different coupons. Interval graphs have been studied extensively due to their wide applicability in areas as diverse as archeology, genetics, job scheduling, and paleontology \cite{GOLUMBIC,Fishburn85,pippenger,paleobook}. These graphs have the power to model the overlap of spatial or chronological events and allow for some inference of structure. There are also a number of nice characterizations of interval graphs that have been obtained \cite{Lekkeikerker,fulkersongross,gilmore_hoffman,hanlon82}. For example, a graph $G$ is an interval graph if and only if the maximal cliques of $G$ can be linearly ordered in such a way that, for every vertex $x$ of $G$, the maximal cliques containing $x$ occur consecutively in the list. Another remarkable fact about interval graphs is that they are \emph{perfect} and thus the weighted clique and coloring problems are polynomial-time solvable \cite{GOLUMBIC}. Nevertheless, sometimes it may not be immediately clear whether a graph is an interval graph or not. For example, of the graphs in Figure \ref{fig:graph_example} only \ref{fig:graph1} is an interval graph. 
\begin{figure}[h] \centering \subfigure[]{\label{fig:graph1}\includegraphics[width=42mm]{graph1.pdf}} \subfigure[]{\label{fig:graph2}\includegraphics[width=25mm]{graph2.pdf}} \subfigure[]{\label{fig:graph3}\includegraphics[width=25mm]{graph3.pdf}} \caption{It is not obvious which of these graphs are interval.} \label{fig:graph_example} \end{figure} The most popular model for generating random graphs is the Erd\H{o}s-R\'enyi model \cite{erdos-renyi}, but it is insufficient for studying random interval graphs. The reason is that, as pointed out in \cite{cohenetal1979probability}, an Erd\H{o}s-R\'enyi graph is almost certainly \emph{not} an interval graph as the number of vertices goes to infinity. Several other authors have studied various models for generating random \emph{interval graphs} (see \cite{diaconis2013interval, Scheinermanoriginal, Scheinerman2, JusticzScheinermanWinkler, iliopoulos, pippenger} and the many references therein). Perhaps most famously, Scheinerman introduced \cite{Scheinermanoriginal,Scheinerman2}, and others investigated \cite{diaconis2013interval,JusticzScheinermanWinkler,iliopoulos}, a method of generating random interval graphs with a fixed number of intervals $m$: the endpoints of the intervals $\{(x_1, y_1),\dots, (x_m, y_m)\}$ are $2m$ points chosen independently from some fixed continuous probability distribution on the real line. Each pair $(x_i, y_i)$ determines a random interval. This is a very natural and simple random process, but it is different from our random process (see the Appendix). We noted earlier that, because our intervals lie in a 1-dimensional space, the nerve complex is completely determined by the interval graph: its $k$-dimensional faces are exactly the $(k+1)$-cliques of the interval graph. In other words, the nerve complex is precisely the \emph{clique complex} of the interval graph. We also remark that the complement graph of the interval graph $G$ is the graph $H$ of non-overlapping intervals. The graph $H$ is in fact the comparability graph of a partially ordered set, called the \emph{interval order}, where one interval is less than another if the first one is completely to the left of the second one. We can associate to each \emph{independent set} of size $k$ in $H$ (that is, to each family of $k$ pairwise intersecting intervals) a $(k-1)$-dimensional simplex; this yields a simplicial complex, the \emph{independence complex} of the corresponding interval order graph $H$. Observe that this independence complex is the same as the nerve $\mathcal N$ we just defined above. This is all well-known since the independence complex of any graph equals the clique complex of its complement graph, and vice versa (see Chapter 9 in \cite{kozlovbook}). \subsection{Outline of our contributions.} In this paper we answer the four birdwatching questions using the random interval graphs and complexes generated by the stochastic process described above. Here are our results section by section: Section \ref{sec:expectation} presents various results about the expected structure of the random interval graph $\mathcal{N}_n$, including the expected number of edges and the likelihood that the graph has an empty edge set. Section \ref{sec:cliques} presents results regarding the distribution of maximum degree and clique number of the graph $\mathcal{N}_n$, and our results show that the random interval graph asymptotically approximates the complete graph, $K_m$, as the number of samples $n$ grows large. This means the nerve complex is asymptotically an $(m-1)$-dimensional simplex. 
From the results of Section \ref{sec:cliques} one can see that as we sample more and more bird observations it becomes increasingly unlikely that we see any graph other than $K_m$. We investigate the number of samples needed to find $K_m$ with high probability. Section \ref{conclusiones} closes the paper by outlining three natural open questions. We also include an Appendix that contains computer experiments to evaluate the quality of various bounds proved throughout the paper and to show our model is different from earlier models of random interval graphs. \section{Random Interval Graphs and Behavior in Expectation.} \label{sec:expectation} In this section we explore what type of nerve complexes one might expect to find for a fixed number of observations $n$ when the likelihood of observing each label $i$ is a constant $p_i>0$. \begin{prop}\label{Null_small_prop} Under the key assumptions in Section \ref{intro}, the probability that the random graph $\mathcal{N}_n$ is the empty graph $K_k^c$ on $0\leq k \leq m$ vertices (i.e., $k$ vertices and no edges) satisfies $$P(\mathcal{N}_n=K_k^c)\geq p_{*}^n k! \binom{m}{k}\binom{n-1}{k-1},$$ where $p_{*}=\min\{p_1,p_2,\ldots,p_m\}$. Moreover, if $p_i = \frac{1}{m}$ for all $i \in [m]$, then $$P(\mathcal{N}_n=K_k^c)= \frac{k!}{m^n} \binom{m}{k}\binom{n-1}{k-1}.$$ \end{prop} \begin{proof} Note that for $\mathcal{N}_n$ to consist of $k$ isolated vertices, the intervals induced by the coloring must be pairwise disjoint. This occurs if and only if all points of the same color are grouped together. Given $k$ fixed colors it is well known that the disjoint groupings are counted by the number of compositions of $n$ into $k$ parts, $\binom{n-1}{k-1}$. Each composition occurs with probability at least $p_{*}^n$. Finally, considering the $\binom{m}{k}$ different ways to choose these $k$ colors and the $k!$ ways to order them, we have that $$P(\mathcal{N}_n=K_k^c)\geq p_{*}^n k! \binom{m}{k} \binom{n-1}{k-1}.$$ The last statement follows from the same idea; in the uniform case every fixed coloring of the $n$ points occurs with probability $\frac{1}{m^n}$. \end{proof} Next we bound the probability that a particular edge is present in the random interval graph. \begin{theorem}\label{ijedges} Under the key assumptions in Section \ref{intro} and for any pair $\{i,j\}$, $1\leq i < j \leq m$, the probability of event $A_{ij} =\{\{i,j\} \in \mathcal{N}_n \}$, i.e., the event that the edge $\{i,j\}$ is present in the graph $\mathcal{N}_n$, equals $$ P(A_{ij}) = 1-q_{ij}^n -\sum_{k=1}^n \binom{n}{k}\bigg[ \bigg( 2 \sum_{r=1}^{k-1} p_i^r p_j^{k-r} \bigg) +p_i^k +p_j^k \bigg] q_{ij}^{n-k},$$ where $q_{ij} = 1-(p_i +p_j)$.\\ When $p_i = \frac{1}{m}$ for all $i \in [m]$, then $ P(A_{ij}) = 1- \frac{2n(m-1)^{n-1}+(m-2)^n}{m^n}.$ \end{theorem} \begin{proof} We will find the probability of the complement, $A_{ij}^c$, which is the event where the two empirical supports do not intersect, i.e., $A_{ij}^c = \{I_n(i) \cap I_n(j) = \emptyset\}$. Let $C_i = \{\ell : Y_\ell = i, 1 \leq \ell \leq n \}$ and define $C_j$ analogously. Note that $A_{ij}^c$ can be expressed as the disjoint union of three events: \begin{enumerate} \item $\{C_i \text{ and } C_j \text{ are both empty}\}$, \item $\{\text{Exactly one of } C_i \text{ or } C_j \text{ is empty}\}$, \item $\{C_i \text{ and } C_j \text{ are both non-empty but $I_n(i)$ and $I_n(j)$ do not intersect}\}$. \end{enumerate} The probability of the first event is simply $q_{ij}^n$. 
For the second event, assume for now that $C_i$ will be the non-empty set and let $k \in [n]$ be the desired size of $C_i$. There are $\binom{n}{k}$ ways of choosing the locations of the $k$ points in $C_i$. Once these points are chosen, the probability that these points receive label $i$ and that no other point receives label $i$ or label $j$ is exactly $p_i^kq_{ij}^{n-k}$. Summing over all values of $k$ and noting that the argument where $C_j$ is non-empty is analogous, we get that the probability of the second event is exactly $\sum_{k=1}^n \binom{n}{k}(p_i^k +p_j^k)q_{ij}^{n-k}$. Now, note that the third event only occurs if all the points in $C_i$ are to the left of all points in $C_j$ or vice versa; for now assume $C_i$ is to the left. Let $k\in [n]$ be the desired size of $C_i \cup C_j$ and let $r \in [k-1]$ be the desired size of $C_i$. As before there are $\binom{n}{k}$ ways of choosing the locations of the $k$ points in $C_i \cup C_j$. Once these points are fixed, we know $C_i$ has to be the first $r$ many points, $C_j$ has to be the remaining $k-r$ points, and no other point can have label $i$ or label $j$. This occurs with probability $p_i^r p_j^{k-r}q_{ij}^{n-k}$. Finally, summing over all values of $k$ and $r$ and adding a factor of 2 to account for flipping the sides of $C_i$ and $C_j$ we get that the third event occurs with probability $2\sum_{k=1}^n \binom{n}{k} \sum_{r=1}^{k-1}p_i^r p_j^{k-r}q_{ij}^{n-k}$. Since $A_{ij}^c$ is the disjoint union of these three events, $P(A_{ij}^c)$ is equal to the sum of these three probabilities, which gives the desired result. For the uniform case, simply set $p_i=p_j=1/m$ in the general formula and note that \begin{align*} P(A_{ij}) = & 1- (\frac{m-2}{m})^n -\sum_{k=1}^n \binom{n}{k}\bigg[ \bigg( 2 \sum_{r=1}^{k-1} \frac{1}{m^k} \bigg) +\frac{2}{m^k} \bigg] (\frac{m-2}{m})^{n-k}\\ =& 1- (\frac{m-2}{m})^n - \frac{1}{m^n} \sum_{k=1}^n \binom{n}{k}2k(m-2)^{n-k}\\ = & 1- \frac{2n(m-1)^{n-1}+(m-2)^n}{m^n}. \end{align*} The last equality uses the identity $k\binom{n}{k}=n\binom{n-1}{k-1}$ together with the binomial theorem. \end{proof} From this we can compute the expected number of edges in the random interval graph, which is the 1-skeleton of $\mathcal{N}_n$. The proof follows immediately from the above by the linearity of expectation. \begin{corollary} Let $X$ be the random variable equal to the number of edges in $\mathcal{N}_n$, the random interval graph. Under the key assumptions in Section \ref{intro}, $$ \mathbb{E}X = \hskip -.3cm \sum_{1 \leq i < j \leq m} \hskip -.3cm \bigg( 1- q_{ij}^n -\sum_{k=1}^n \binom{n}{k} \bigg[ \bigg( 2 \sum_{r=1}^{k-1} p_i^r p_j^{k-r} \bigg) +p_i^k +p_j^k \bigg] q_{ij}^{n-k} \bigg),$$ where $q_{ij} = 1-(p_i +p_j)$. In the uniform case where $p_i = {{1}\over{m}}$ for all $i\in [m]$, this expectation equals $$ \binom{m}{2}\bigg( 1- \frac{2n(m-1)^{n-1}+(m-2)^n}{m^n}\bigg).$$ \end{corollary} \section{Maximum Degree, Cliques, and Behavior in the Limit.} \label{sec:cliques} In this section we investigate the connectivity of the empirical nerve. First we study the maximum degree and clique number of $\mathcal{N}_n$. Then we show that, as the number of samples $n$ goes to infinity, the probability that $\mathcal{N}_n$ is the $(m-1)$-simplex tends to one. \subsection{Maximum Degree.} The following result is a lower bound on the probability of finding an interval intersecting all others, i.e., that the maximum degree $\Deg(\mathcal{N}_n)$ of $\mathcal{N}_n$ is $m-1$. In our birdwatching story this can be interpreted as the probability of finding a species which overlaps in time with all others. 
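Before setting up the notation for the formal lower bound, we note that this probability is easy to explore empirically; the following is a hypothetical Python sketch (not part of the paper) that estimates $P(\Deg(\mathcal{N}_n)=m-1)$ by simulation in the uniform case.
\begin{verbatim}
import numpy as np
from itertools import combinations

def max_degree_prob(m, n, trials=20000, seed=0):
    """Monte Carlo estimate of P(Deg(N_n) = m-1), uniform case."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        labels = rng.integers(1, m + 1, size=n)
        # empirical supports: first and last position of each label
        supports = {}
        for pos, lab in enumerate(labels):
            lo, hi = supports.get(lab, (pos, pos))
            supports[lab] = (min(lo, pos), max(hi, pos))
        # vertex degrees in the interval graph
        deg = {lab: 0 for lab in supports}
        for i, j in combinations(supports, 2):
            (a, b), (c, d) = supports[i], supports[j]
            if max(a, c) <= min(b, d):
                deg[i] += 1
                deg[j] += 1
        if max(deg.values()) == m - 1:
            hits += 1
    return hits / trials

print(max_degree_prob(m=5, n=30))
\end{verbatim}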
In the following theorem we let $\mathcal{X}_{m,k}^n$ denote the set of weak-compositions of $n$ with length $m$ containing exactly $k$-many non-zero parts \cite[p.~25]{EC1}. Formally, $\mathcal{X}_{m,k}^n =\{(x_1,...,x_m)\in \mathbb Z_{\geq 0}^m: \sum_{i=1}^m x_i=n, |\{x_i: x_i \neq 0\}| =k \}$. \\ Also let $M(x)=\frac{(x_1+x_2+...+x_m)!}{x_1!x_2!...x_m!}\prod_{i=1}^m p_i^{x_i}$ denote the multinomial distribution applied to the vector $x\in\mathcal{X}_{m,k}^n$ considering the associated probabilities $p_1,p_2,...,p_m$. Finally, let $S_n^k$ denote the \emph{Stirling numbers} of the second kind \cite[p.~81]{EC1}. \begin{theorem} \label{thmmaxdegree} Under the key assumptions in Section \ref{intro}, the maximum degree of $\mathcal{N}_n$ satisfies $P(\Deg(\mathcal{N}_n)=m-1)\geq$ $$\max_{r}\{[1- \sum_{k=1}^{m-1} \frac{k^r}{m^r}\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r} \hskip -0.3cm M(x) (m-k)^rp_{*}^r][\hskip -0.2cm \sum\limits_{x\in\mathcal{X}_{m,m}^{n-2r}} \hskip -0.4cm M(x)+\hskip -0.4cm \sum\limits_{x\in\mathcal{X}_{m,m-1}^{n-2r}} \hskip -0.4cm M(x)]\}.$$ Here $p_{*}=\max\{p_1,p_2,\ldots,p_m\}$ and the maximum is taken over integers $1\leq r \leq \frac{n-m}{2}$. Moreover, in the uniform case where $p_i = {{1}\over{m}}$ for all $i\in [m]$, we have that\\ $P(\Deg(\mathcal{N}_n)=m-1)\geq$ $$\max_{r}\{[1-\frac{m!}{m^{2r}} \sum\limits_{k=1}^{m-1} \frac{(m-k)^r}{(m-k)!}S_r^k][\frac{m!}{m^{n-2r}}S_{n-2r}^{m}+\frac{(m-1)!}{(m-1)^{n-2r}}S_{n-2r}^{m-1}]\}.$$ \end{theorem} \begin{proof} Fix some $1\leq r \leq \frac{n-m}{2}$ and consider the sets $L=\{1,2,\ldots,r\}$, $C=\{r+1,r+2,\ldots,n-r\}$ and $R=\{n-r+1,n-r+2,\ldots,n\}$, so that $|L|=|R|=r$ and $|C|=n-2r$. If the following events hold, we can guarantee that $\Deg(\mathcal{N}_n)=m-1$.\\ $A=$ $\{$There exists a chromatic class with points in both $L$ and $R\}$,\\ $B=$ $\{$There exists at least one point with each of the remaining $m-1$ colors in $C\}$. \\ In order to calculate $P(A)$, we will compute the probability of its complement $A^c$, i.e., the event where no color appears in both $L$ and $R$. First we calculate the probability of $L$ being colored with exactly $k$ colors with $1\leq k \leq m-1$. Observe that there are $\binom{m}{k}$ ways to choose these colors and $k^r \sum_{x\in\mathcal{X}_{m,k}^r} M(x)$ ways to color $L$ with them. As there exist $m^r$ different colorings with all the $m$ colors, we have that for a fixed $k$ the probability is $$\frac{1}{m^r}k^r\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r}\hskip -.25cm M(x).$$ In order for $A^c$ to occur, we need that $R$ be colored with only the $(m-k)$ remaining colors. Note that this event is independent of the coloring of $L$ as the two sets are disjoint. There are $(m-k)^r$ different ways of coloring $R$, and each occurs with probability at most $p_{*}^r$, where $p_*=\max\{p_i:1\leq i \leq m\}$. 
Thus, for a fixed $ k $ we have that the probability that no color appears in both $L$ and $R$ is at most $$\left[ \frac{1}{m^r}k^r\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r} \hskip -.25cm M(x)\right] \left[ (m-k)^rp_{*}^r\right].$$ Then, by summing over all $k$ we have that $$P(A^c)\leq \sum_{k=1}^{m-1}\left[ \frac{1}{m^r}k^r\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r} \hskip -.25cm M(x)\right] \left[ (m-k)^rp_{*}^r\right] ,$$ which implies that $$P(A)\geq 1- \sum_{k=1}^{m-1} \frac{1}{m^r}k^r\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r} \hskip -.25cm M(x) (m-k)^rp_{*}^r.$$ To compute $P(B)$, note that the probability of coloring $C$ with $m$ or $m-1$ colors is exactly $$\sum\limits_{x\in\mathcal{X}_{m,m}^{n-2r}} \hskip -.4 cm M(x)+\sum\limits_{x\in\mathcal{X}_{m,m-1}^{n-2r}} \hskip -.5cm M(x).$$ Finally, as $A$ and $B$ are independent events, we have that $P(\Deg(\mathcal{N}_n)=m-1)$ is at least $$[1- \sum_{k=1}^{m-1} \frac{1}{m^r}k^r\binom{m}{k}\sum\limits_{x\in\mathcal{X}_{m,k}^r} \hskip -.3cm M(x) (m-k)^rp_{*}^r][\sum\limits_{x\in\mathcal{X}_{m,m}^{n-2r}} \hskip -.4 cm M(x)+\sum\limits_{x\in\mathcal{X}_{m,m-1}^{n-2r}} \hskip -.5cm M(x)].$$ Maximizing over $r$ gives the desired result. For the uniform case, we just apply $p_{*}=1/m$ and use the formula above together with the fact that $k!/k^nS_n^k=\sum_{x\in\mathcal{X}_{m,k}^n}M(x)$. \end{proof} \subsection{Cliques.} The expected clique number of $\mathcal{N}_n$ is of special interest to us. In our birdwatching story this corresponds to the largest subset of birds whose time intervals all intersect. While we do not compute the expected clique number exactly, the next theorem provides a lower bound on the expected clique number which performs very well in simulations (see the Appendix). \begin{lemma} Under the key assumptions in Section \ref{intro}, the probability that an arbitrary point $x\in [n]$ lies inside the interval of color $i$, $I_n(i)$, is exactly $ 1-q_i^{x}- q_i^{n-x+1} +q_i^{n},$ where $q_i=1-p_i.$ \end{lemma} \begin{proof} Fix an arbitrary $x\in [n]$ and define the event $A = \{x\in I_n(i)\}$. Note that in order for $A$ to occur either $x$ lies between two points with label $i$ or $x$ itself is labeled $i$. Now consider the complementary event, $A^c = \{x \notin I_n(i)\}$. Next define the events $L,R$ where $L=\{$none of the points \emph{less than or equal to} $x$ have label $i\}$ and $R=\{$none of the points \emph{greater than or equal to} $x$ have label $i\}$. Note $A^c = L \cup R$ and $P(L)=q_i^{x}$, $P(R)=q_i^{n-x+1}$ and $P(L\cap R)=q_i^{n}$. Therefore, by the inclusion-exclusion principle we have, $$P(A^c)= P(L)+P(R)-P(L \cap R) = q_i^{x}+ q_i^{n-x+1} -q_i^{n}, $$ and hence $P(A)= 1-q_i^{x}- q_i^{n-x+1} +q_i^{n}.$ \end{proof} \begin{theorem} \label{expectedcliquenum} Let $\omega$ be the random variable equal to the clique number of $\mathcal{N}_n$, i.e., the size of the largest clique in the 1-skeleton of $\mathcal{N}_n$. Under the key assumptions in Section \ref{intro} we have \begin{center} $\mathbb{E }$ $\omega \geq \sum\limits_{i=1}^{m}( 1-q_i^{\lceil \frac{n}{2}\rceil}-q_i^{n-\lceil \frac{n}{2}\rceil +1}+q_i^{n})$ \end{center} where $q_i=1-p_i$. 
Moreover, in the uniform case where $p_i= {{1}\over{m}}$ for all $i\in [m]$, we have that \begin{center} $\mathbb{E }$ $\omega \geq m - \big(\frac{m-1}{m}\big)^{\lceil \frac{n}{2}\rceil} - \big(\frac{m-1}{m}\big)^{n-\lceil \frac{n}{2}\rceil +1} + \big(\frac{m-1}{m}\big)^{n}.$ \end{center}\end{theorem} \begin{proof} By the preceding lemma we know that the probability that $x\in I_n(i)$ for a fixed $x\in [n]$ is exactly $1-q_i^{x}- q_i^{n-x+1} +q_i^{n}.$ To maximize this quantity over $x \in [n]$ we will first minimize $f(x) = q_i^{x}+ q_i^{n-x+1} -q_i^{n}$ over all $x$. Note that $f$ is convex and symmetric about $x^* = \frac{n+1}{2}$, so $f$ is minimized at $x^*$. When $n$ is odd the minimizer $x^*$ is an integer and lies in $[n]$. When $n$ is even, convexity and symmetry imply that $f$ is minimized over $[n]$ at the integers closest to $x^*$, namely $\frac{n}{2}$ and $\frac{n}{2}+1$. In either case $f$ is minimized over $[n]$ at the point $x = \lceil \frac{n}{2} \rceil$. Now, for $i = 1, \ldots, m$ let $X_i$ be the indicator random variable which equals $1$ if $ \lceil \frac{n}{2} \rceil \in I_n(i)$ and is $0$ otherwise and set $X = \sum_{i=1}^m X_i$, so $X$ counts the number of intervals containing the point $ \lceil \frac{n}{2} \rceil$. Note that the intervals containing the point $\lceil \frac{n}{2} \rceil$ pairwise intersect and hence form a clique, so the clique number $\omega \geq X$ and $$\mathbb{E } \text{ } \omega \geq \mathbb{E} X = \sum\limits_{i=1}^{m} \mathbb{E} X_i =\sum\limits_{i=1}^{m}P(X_i=1)=\sum\limits_{i=1}^{m}( 1-q_i^{\lceil \frac{n}{2}\rceil}-q_i^{n-\lceil \frac{n}{2}\rceil +1}+q_i^{n}).$$ The result for the uniform case follows directly by setting $p_i =\frac{1}{m}$ for every $i$. \end{proof} \subsection{Behavior of the nerve complex as the number of samples goes to infinity.} Note that as the number of samples $n$ grows large, Theorem \ref{expectedcliquenum} implies that the expected clique number $\mathbb{E }$ $\omega \to m$. Since $\omega$ only takes values in $\{1, \ldots, m\}$ it follows that the clique number also converges to $m$ in probability. Thus, as $n$ goes to infinity, the probability that the nerve of the observations is the $(m-1)$-simplex denoted by $\Delta_{m-1}$, i.e., that its 1-skeleton is a complete graph, goes to 1. In our birdwatcher analogy, this implies that with sufficiently many observations one is almost sure to find a common interval of time where all $m$ species can be observed. The following theorem provides a quantitative lower bound on this probability. \begin{theorem}\label{probtosimplex} Under the key assumptions in Section \ref{intro}, the probability that $\mathcal{N}_n$ is an $(m-1)$-simplex (or, equivalently, that its 1-skeleton is the complete graph $K_m$) satisfies $$P(\mathcal{N}_n = \Delta_{m-1}) \geq ( \sum\limits_{x \in \mathcal{X}_{m}^{\lfloor\frac{n}{2}\rfloor}} \hskip -.25cm M(x))^2$$ where $\mathcal{X}_m^{\lfloor\frac{n}{2}\rfloor} = \{(x_1,x_2,...,x_m)\in \mathbb{Z}_{\geq 1}^m:\sum_{i=1}^m x_i= \lfloor \frac{n}{2} \rfloor \}.$\\ In the uniform case where $p_i = \frac{1}{m}$ for every $i\in[m]$, this gives that $$ P(\mathcal{N}_n = \Delta_{m-1}) \geq \left( \frac{m!}{m^{\lfloor \frac{n}{2} \rfloor }} S_{\lfloor \frac{n}{2} \rfloor}^{m} \right)^2 $$ where, again, $S_n^k$ denotes the Stirling numbers of the second kind. 
\end{theorem} \begin{proof} For each vector $x\in\mathcal{X}_m^{\lfloor\frac{n}{2}\rfloor}$ the multinomial $M(x)$ computes the probability that, among $\lfloor \frac{n}{2} \rfloor$ given points, there are exactly $x_i$ points with color $i$ for every $1\leq i \leq m$. Therefore, the sum over all the vectors of $\mathcal{X}_m^{\lfloor\frac{n}{2}\rfloor}$ gives us the probability that such a block of points contains at least one point of each color.\\ Now, we consider the events $L=$ $\{$the first $\lfloor \frac{n}{2} \rfloor$ points are colored with exactly $m$ colors$\}$ and $R=$ $\{$the last $\lfloor \frac{n}{2} \rfloor $ points are colored with exactly $m$ colors$\}$. We have $$P(L)=P(R)=\sum\limits_{x \in \mathcal{X}_m^{\lfloor \frac{n}{2}\rfloor}} \hskip -.25cm M(x).$$ Note that if both $L$ and $R$ occur, then every empirical support contains a point among the first $\lfloor \frac{n}{2} \rfloor$ points and a point among the last $\lfloor \frac{n}{2} \rfloor$ points, so all the empirical supports share the central portion of the line and mutually intersect. Then $P(\mathcal{N}_n=\Delta_{m-1})\geq P(L\cap R)$ and, as $L$ and $R$ are independent events (they involve disjoint sets of points), we conclude $$P(\mathcal{N}_n=\Delta_{m-1})\geq P(L\cap R)=P(L)P(R)=( \sum\limits_{x \in \mathcal{X}_m^{\lfloor \frac{n}{2}\rfloor}} \hskip -.25cm M(x))^2.$$ The result for the uniform case follows because $k!/k^nS_n^k=\sum_{x\in\mathcal{X}_{m,k}^n}M(x)$. \end{proof} Theorem \ref{probtosimplex} tells us how likely it is for the empirical nerve of $n$ samples to form the $(m-1)$-simplex for fixed $n$. A related question asks what is the \emph{first} observation $n$ for which this occurs, i.e., if we have a sequence of observations $Y_1, Y_2, \ldots$ what is the least $n$ such that $\mathcal N_n((Y_1, \ldots, Y_n)) = \Delta_{m-1}$? We call this quantity the \emph{waiting time} to form the $(m-1)$-simplex and provide an upper bound on its expectation below. \begin{theorem}\label{waitingtime} Let $X$ be the random variable for the waiting time until $\mathcal N_n = \Delta_{m-1}$, explicitly $X = \inf\{n \in \mathbb N : \mathcal N_n = \Delta_{m-1} \}$. Then, under the key assumptions in Section \ref{intro}, we have $ \mathbb E X \leq 2 \int_0^\infty \Big(1- \prod_{i=1}^m (1-e^{-p_ix}) \Big) dx. $ Moreover, in the uniform case, where $p_i = {{1}\over{m}}$ for all $i\in [m]$, we have that $ \mathbb E X \leq 2m \sum_{i=1}^m \frac{1}{i}. $ \end{theorem} \begin{proof} The results follow directly from the expected waiting time of the classical coupon collector problem. Let $Z$ denote the waiting time until we have observed every label, i.e., $Z$ is the waiting time until we have completed a collection of coupons if each coupon is an i.i.d. random variable that takes value $i$ with probability $p_i$. It is known that $\mathbb E Z = \int_0^\infty \big(1- \prod_{i=1}^m (1-e^{-p_ix}) \big) dx$, and in the uniform case where $p_i = {{1}\over{m}}$ for all $i\in [m]$, $\mathbb E Z = m \sum_{i=1}^m \frac{1}{i}$ (see \cite{Coupon} for several detailed proofs). Now, note that $\mathcal N_n = \Delta_{m-1}$ if we complete a collection, then complete a second collection, disjoint from the first (indeed, in that case every empirical support contains a point from the first collection and a point from the second, so all of the empirical supports mutually intersect). Let $Z_1$ denote the waiting time to complete the first collection, and let $Z_2$ be the additional waiting time to complete a second collection. Then $X \leq Z_1 + Z_2$ and $Z_1, Z_2$ are equal in distribution to $Z$, so $\mathbb E X \leq \mathbb E(Z_1 +Z_2) = 2\mathbb E Z$. \end{proof} \section{Conclusion.} \label{conclusiones} In this paper we introduced a novel random interval graph model. It is well-suited for applications involving the overlap patterns of chronological observations. There are a number of natural, fascinating questions left for the curious reader. 
First, the distribution of birds obviously varies in time as seasonal changes (or other factors such as predators or climate change) affect the species; thus the non-stationary case is better suited for applications. We ask: which of the results can be extended to the non-stationary case, when the key assumptions made here are no longer valid? Second, Hanlon presented in \cite{hanlon82} a characterization of all interval graphs using a unique interval representation. He used this to enumerate all interval graphs. The analysis we presented in Theorem \ref{probtosimplex} indicates that, when we use our stochastic process to generate random intervals on the line, the probability of getting an interval graph other than the complete graph goes to 0 as the number of samples $n$ goes to infinity. A natural challenge is to understand the decay of probabilities for different classes of graphs, for instance, random \emph{interval trees} (see \cite{EckhoffIntervalGraph}). Finally, the story we presented is about data samples indexed by a single parameter, say time. But what happens when geographical coordinates, temperature, humidity, or other parameters are considered to model the distribution of birds? Extending the model to higher dimensions produces new challenges. For example, the random interval graphs are no longer sufficient to capture all the information. Instead, one needs to investigate random simplicial complexes (see \cite{Hogan_Tverberg2020}) because we lose the natural order for the points that we have in the line. This implies that an analogue of Lemma \ref{stat_lemma} is no longer available. For instance, continuing with our birdwatcher's analogy, suppose that colored points in Figure \ref{dib} represent the geographic coordinates of three different types of birds that have been studied. If our birdwatcher is trying to determine the usual habitat and the territorial interactions between them, he/she will face the problem that two very similar data sets may induce different simplicial complexes. \begin{figure}[h!] \begin{center} \includegraphics[scale=.18]{Dib.pdf} \caption{Two data sets of $3$ different bird species with the same order type inducing different simplicial complexes.} \label{dib} \end{center} \end{figure} \noindent {\bf Acknowledgements.} The first and second authors gratefully acknowledge partial support from NSF DMS-grant 1818969. The second author also acknowledges support from the NSF-AGEP supplement. Finally, the third and fourth authors gratefully acknowledge partial support from PAPIIT IG100721 and CONACyT 282280. \vskip 0.8cm \def\cprime{$'$} \begin{thebibliography}{10} \bibitem{Barvinok} Barvinok, A. (2002). {\em A Course in Convexity\/}. {Graduate Studies in Mathematics, Vol. 54.\/} \newblock Providence, RI: American Mathematical Society. \bibitem{cohenetal1979probability} Cohen, J.~E., Koml\'os, J., Mueller, T. (1979). The probability of an interval graph, and why it matters. \newblock In: Ray-Chaudhuri, D.~K., ed. {\em Proceedings of the Symposia in Pure Mathematics\/}, Vol.~34. Providence, RI: American Mathematical Society, pp. 97--115. \bibitem{Hogan_Tverberg2020} De~Loera, J.~A., Hogan, T. (2020). Stochastic {Tverberg} theorems with applications in multiclass logistic regression, separability, and centerpoints of data. \newblock {\em SIAM Jour. on Math. of Data Science\/}. 
2:1151--1166. \newblock {doi.org/10.1137/19M1277102}. \bibitem{diaconis2013interval} Diaconis, P., Holmes, S., Janson, S. (2013). Interval graph limits. \newblock {\em Ann. of Comb.\/} 17(1):27--52. \newblock {doi.org/10.1007/s00026-012-0175-0}. \bibitem{EckhoffIntervalGraph} Eckhoff, J. (1993). Extremal interval graphs. \newblock {\em Jour. Graph Theory\/}. 17(1):117--127. \newblock {doi.org/10.1002/jgt.3190170112}. \bibitem{erdos-renyi} Erd\H{o}s, P., Renyi, A. (1959). On random graphs i. \newblock {\em Publ. Math. Debrecen\/}. 6(18):290--297. \bibitem{Coupon} Ferrante, M., Saltalamacchia, M. (2014). The coupon collector's problem. \newblock {\em MATerials MATematics\/}. 2014(2):35. \bibitem{Fishburn85} Fishburn, P.~C. (1985). {\em Interval orders and interval graphs: a study of partially ordered sets\/}. \newblock New York: Wiley. \bibitem{FLAJOLET} Flajolet, P., Gardy, D., Thimonier, L. (1992). Birthday paradox, coupon collectors, caching algorithms and self-organizing search. \newblock {\em Discrete Appl. Math.\/} 39(3):207--229. \newblock {doi.org/10.1016/0166-218X(92)90177-C}. \bibitem{fulkersongross} Fulkerson, D.~R., Gross, O.~A. (1965). Incidence matrices and interval graphs. \newblock {\em Pacific Jour. Math.\/} 15(3):835--855. \newblock {doi.org/10.2140/PJM.1965.15.835}. \bibitem{ghrist2014elementary} Ghrist, R. (2014). {\em Elementary Applied Topology\/}. \newblock CreateSpace. \bibitem{gilmore_hoffman} Gilmore, P.~C., Hoffman, A.~J. (1964). A characterization of comparability graphs and of interval graphs. \newblock {\em Canadian Jour. of Math.\/} 16:539--548. \newblock {doi.org/10.4153/CJM-1964-055-5}. \bibitem{GOLUMBIC} Golumbic, M.~C. (2004). Interval graphs. \newblock In: Golumbic, M.~C., ed. {\em Algorithmic Graph Theory and Perfect Graphs\/}. Annals of Discrete Mathematics, Vol.~57. {Amsterdam, Netherlands: North Holland\/}, pp. 171 -- 202. \bibitem{paleobook} Hammer, O., Harper, D.~A.~T. (2007). {\em Paleontological Data Analysis\/}. \newblock Oxford, UK: Blackwell Publishing. \bibitem{hanlon82} Hanlon, P. (1982). Counting interval graphs. \newblock {\em Trans. of the Amer. Math. Soc.\/} 272(2):383--426. \newblock {doi.org/10.1090/S0002-9947-1982-0662044-8}. \bibitem{Hatcher} Hatcher, A. (2002). {\em Algebraic Topology\/}. \newblock Cambridge, UK: Cambridge University Press. \bibitem{iliopoulos} Iliopoulos, V. (2017). A study on properties of random interval graphs and {E}rd\"{o}s {R}enyi graph ${\cal G}(n,2/3)$. \newblock {\em Jour. of Discrete Math. Sci. and Cryptography\/}. 20(8):1697--1720. \newblock {doi.org/10.1080/09720529.2016.1184453}. \bibitem{JusticzScheinermanWinkler} Justicz, J., Scheinerman, E.~R., Winkler, P. (1990). Random intervals. \newblock {\em Amer. Math. Monthly\/}. 97(10):881--889. \newblock {doi.org/10.1080/00029890.1990.11995679}. \bibitem{kozlovbook} Kozlov, D.~N. (2008). {\em Combinatorial Algebraic Topology\/}. { Algorithms and Computation in Mathematics, Vol.~21\/}. \newblock Berlin, Germany: Springer. \bibitem{StochProcess} Krylov, N. (2000). {\em Introduction to the Theory of Random Processes\/}. Graduate Studies in Mathematics, Vol.~43. \newblock Providence, RI: American Mathematical Society. \bibitem{Lekkeikerker} Lekkerkerker, C., Boland, J. (1962). Representation of a finite graph by a set of intervals on the real line. \newblock {\em Fundam. Math.\/} 51(1):45--64. \bibitem{matousek2002lectures} Matousek, J. (2002). {\em Lectures on Discrete Geometry\/}. \newblock Graduate Texts in Mathematics, Vol.~212. New York: Springer. 
\bibitem{pippenger} {Pippenger}, N. (1998). {Random interval graphs.} \newblock {\em {Random Struct. Algorithms}\/}. 12(4):361--380. \newblock {doi.org/10.1002/(SICI)1098-2418(199807)12:4$<$361::AID-RSA4$>$3.0.CO;2-R}. \bibitem{Ross_Stoch} Ross, S. (1996). {\em Stochastic processes\/}, \newblock 2nd ed. New York: Wiley. \bibitem{Scheinermanoriginal} {Scheinerman}, E.R. (1988). Random interval graphs. \newblock {\em Combinatorica\/}, 8(4):357--371. \newblock {doi.org/10.1007/BF02189092}. \bibitem{Scheinerman2} {Scheinerman}, E.R. (1990). {An evolution of interval graphs.} \newblock {\em {Discrete Math.}\/} 82(3):287--302. \newblock {doi.org/10.1016/0012-365X(90)90206-W}. \bibitem{Schelling} Schelling, H.V. (1954). Coupon collecting for unequal probabilities. \newblock {\em Amer. Math. Monthly\/}. 61(5):306--311. \newblock {doi.org/10.1080/00029890.1954.11988466}. \bibitem{EC1} Stanley, R. (2011). {\em Enumerative Combinatorics\/}, Vol.~1, 2nd ed. \newblock Cambridge, UK: Cambridge University Press. \bibitem{tancer} Tancer, M. (2013). Intersection patterns of convex sets via simplicial complexes: a survey. \newblock In: Pach, J., ed. {\em Thirty essays on geometric graph theory\/}. New York: Springer, pp. 521--540. \end{thebibliography} \textbf{J.~A. De Loera} is a professor of Mathematics at the University of California, Davis. His main mathematical themes are discrete geometry and combinatorial optimization. He enjoys walking with his dog Bolo and watching coyotes roam the fields near his house. Department of Mathematics, University of California, Davis\\ [email protected] \textbf{E. Jaramillo-Rodriguez} is a PhD candidate in Mathematics at the University of California, Davis. Edgar is writing their thesis on stochastic combinatorial geometry applied to data science and machine learning. Edgar likes spending time outdoors with good friends or good books. Department of Mathematics, University of California, Davis\\ [email protected] \textbf{D. Oliveros} is a professor at the Institute of Mathematics at the National Autonomous University of Mexico UNAM (Campus Juriquilla). Her areas of interest in mathematics are discrete and computational geometry and convexity. She enjoys dancing, gardening and playing with her dogs. Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico\\ [email protected] \textbf{A.~J. Torres} is a doctoral student in Mathematics at the National Autonomous University of Mexico UNAM. His main areas of interest include discrete geometry, data analysis, and combinatorics. He enjoys jogging around the city and hiking the trails near his hometown in Quer\'etaro. Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico\\ [email protected] \vfill\eject \section{Appendix.} \subsection{Experimental Results.} In Theorems \ref{thmmaxdegree}, \ref{expectedcliquenum}, and \ref{probtosimplex} we provided lower bounds on the likelihood of various events occurring given $n$ points and $m$ labels. To study the usefulness of these bounds we ran simulations. For each pair $(m,n)$ we randomly colored $n$ points on the real line using $m$ colors with uniform probability (each color was equally likely) then constructed the induced interval graph. We repeated this process 100 times for each pair $(m,n)$ and plotted the percentage of the simulations where the desired event occurred. We also plotted our lower bounds from the theorems above and found that, in general, our bounds perform well for most values of $m$ and $n$. 
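For concreteness, a minimal version of this kind of simulation might look as follows; this is a hypothetical Python sketch rather than the code actually used to produce the figures. It estimates the probability that the interval graph is the complete graph $K_m$ in the uniform case and, for comparison, evaluates the uniform-case lower bound of Theorem \ref{probtosimplex}, using the fact that $\frac{m!}{m^{N}}S_{N}^{m}$ is the probability that $N$ uniform draws use all $m$ labels.
\begin{verbatim}
import numpy as np
from math import comb, floor

def prob_complete_mc(m, n, trials=20000, seed=1):
    """Monte Carlo estimate of P(N_n = K_m) in the uniform case."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        labels = rng.integers(1, m + 1, size=n)
        first, last = {}, {}
        for pos, lab in enumerate(labels):
            first.setdefault(lab, pos)
            last[lab] = pos
        if len(first) < m:
            continue  # some label was never observed
        hits += all(max(first[i], first[j]) <= min(last[i], last[j])
                    for i in range(1, m + 1) for j in range(i + 1, m + 1))
    return hits / trials

def lower_bound(m, n):
    """Uniform-case bound: (prob. that floor(n/2) draws cover all m labels)^2."""
    N = floor(n / 2)
    cover = sum((-1) ** j * comb(m, j) * (m - j) ** N
                for j in range(m + 1)) / m ** N
    return cover ** 2

for n in (20, 40, 80):
    print(n, prob_complete_mc(4, n), lower_bound(4, n))
\end{verbatim}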
Figure \ref{fig:Delta m-1} compares the bound on the maximum degree obtained in Theorem \ref{thmmaxdegree} with the empirical approximation generated by our simulations. Figure \ref{fig:clique_number} compares the bound on the expected clique number obtained in Theorem \ref{expectedcliquenum} with the empirical approximation generated by our simulations. Figure \ref{fig:complete_bounds} compares the bound on the probability of the nerve being the $(m-1)$-simplex obtained in Theorem \ref{probtosimplex} with the empirical approximation generated by our simulations. Finally, we also compared the probability of obtaining maximum degree $\Deg(\mathcal{N}_n)=m-1$ in Scheinerman's model with the corresponding probability in our model. In \cite{JusticzScheinermanWinkler}, the authors prove, in a clever way, that this probability is exactly $2/3$, and their result does not depend on the number of intervals. On the other hand, in our model this probability depends on both the number of points and the number of colors that we use. We generated $1000$ random instances in each case: for Scheinerman's model (the solid line) we took $1\leq m \leq 50$, while for our model we generated random $m$-colorings for several values of $n$ with $1\leq m \leq n$. The results are displayed in Figure \ref{fig:degree}. \begin{figure}[hbt] \subfigure{\includegraphics[width=3cm, height=3cm]{m=10.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=15.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=20.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=25.pdf}} \\ \subfigure{\includegraphics[width=3cm, height=3cm]{m=30.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=35.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=40.pdf}} \subfigure{\includegraphics[width=3cm, height=3cm]{m=45.pdf}} \caption{Probability of $\Deg(\mathcal{N}_n) = m-1$, simulations compared to bound from Theorem \ref{thmmaxdegree}.} \label{fig:Delta m-1} \end{figure} \begin{figure}[hbt] \includegraphics[width=\textwidth]{clique_number.pdf} \caption{Expected clique number of $\mathcal{N}_n$ with uniform coloring as a function of $n$.} \label{fig:clique_number} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.95\textwidth]{mn_2.pdf} \caption{Probability that $\mathcal{N}_n = \Delta_{m-1}$.} \label{fig:complete_bounds} \end{figure} \begin{figure}[h] \includegraphics[scale=.38]{Degree.pdf} \centering \caption{Comparison between Scheinerman's model and ours with the probability that $\Deg(\mathcal{N}_n) = m-1$.} \label{fig:degree} \end{figure} \end{document}
2205.05736v2
http://arxiv.org/abs/2205.05736v2
Exact solution for the quantum and private capacities of bosonic dephasing channels
\documentclass[aps,twocolumn,tightenlines,superscriptaddress]{revtex4-1} \bibliographystyle{unsrt} \input{Def} \renewcommand{\CC}{\mathscr{C}} \usepackage{newtxtext} \usepackage{newtxmath} \allowdisplaybreaks \newcommand{\deff}[1]{\textbf{\emph{#1}}} \newcommand{\QQ}{Q_{\leftrightarrow}} \newcommand{\ludo}[1]{{\color{blue!75!black}#1}} \newcommand{\mw}[1]{{\color{red!75!black}#1}} \renewcommand{\epsilon}{\varepsilon} \newcommand{\nocontentsline}[3]{} \newcommand{\tocless}[2]{\bgroup\let\addcontentsline=\nocontentsline#1{#2}\egroup} \begin{document} \title{Exact solution for the quantum and private capacities of bosonic dephasing channels} \author{Ludovico Lami} \email{[email protected]} \affiliation{Institut f\"{u}r Theoretische Physik und IQST, Universit\"{a}t Ulm, Albert-Einstein-Allee 11, D-89069 Ulm, Germany} \affiliation{QuSoft, Science Park 123, 1098 XG Amsterdam, the Netherlands} \affiliation{Korteweg--de Vries Institute for Mathematics, University of Amsterdam, Science Park 105-107, 1098 XG Amsterdam, the Netherlands} \affiliation{Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, the Netherlands} \author{Mark M. Wilde} \email{[email protected]} \affiliation{Hearne Institute for Theoretical Physics, Department of Physics and Astronomy, and Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA} \affiliation{School of Electrical and Computer Engineering, Cornell University, Ithaca, New York 14850, USA} \begin{abstract} The capacities of noisy quantum channels capture the ultimate rates of information transmission across quantum communication lines, and the quantum capacity plays a key role in determining the overhead of fault-tolerant quantum computation platforms. In the case of bosonic systems, central to many applications, no closed formulas for these capacities were known for bosonic dephasing channels, a key class of non-Gaussian channels modelling, e.g., noise affecting superconducting circuits or fiber-optic communication channels. Here we provide the first exact calculation of the quantum, private, two-way assisted quantum, and secret-key agreement capacities of all bosonic dephasing channels. We prove that they are equal to the relative entropy of the distribution underlying the channel with respect to the uniform distribution. Our result solves a problem that has been open for over a decade, having been posed originally by~\href{https://doi.org/10.1117/12.870179}{[Jiang \& Chen, \emph{Quantum and Nonlinear Optics} 244, 2010]}. \end{abstract} \date{\today} \maketitle \let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{} One of the great promises of quantum information science is that remarkable tasks can be achieved by encoding information into quantum systems~\cite{book2000mikeandike}. In principle, algorithms executed on quantum computers can factor large integers~\cite{Shor}, simulate complex physical dynamics~\cite{childs2018toward}, solve unstructured search problems with proven speedups~\cite{grover96}, and perform linear-algebraic manipulations on large matrices encoded into quantum systems~\cite{Harrow2009,gilyen2018QSingValTransf}. Additionally, ordinary (\textquotedblleft classical\textquotedblright) information can be transmitted securely over quantum channels by means of quantum key distribution~\cite{XMZLP20}. However, all of these possibilities are hindered in practice because all quantum systems are subject to decoherence~\cite{S05}. 
A very simple decoherence process takes a density operator $\rho = \sum_{n,m} \rho_{nm} \ketbraa{n}{m}$ to $\rho' = \sum_{n,m} \rho_{nm} e^{-\frac{\gamma}{2}(n-m)^2} \ketbraa{n}{m}$, where $\gamma\geq 0$ measures the extent to which the off-diagonal elements are reduced in magnitude. This process is also called dephasing, because it reduces or eliminates relative phases. Decoherence is a ubiquitous phenomenon affecting all quantum physical systems. In fact, in various platforms for quantum computation, experimentalists employ the T2 time as a phenomenological quantity that roughly measures the time that it takes for a coherent superposition to decohere to a probabilistic mixture. Dephasing noise in some cases is considered to be the dominant source of errors affecting quantum information encoded into superconducting systems~\cite{BDKS08}, as well as other platforms~\cite{Taylor2005,OLABLW08}. If those systems are employed to carry out quantum computation, then the errors must be amended by means of error-correcting codes, which typically causes expensive overheads in the amount of physical qubits needed. Not only does dephasing affect quantum computers, but it also affects quantum communication systems. Indeed, temperature fluctuations~\cite{W92} or Kerr non-linearities~\cite{Gordon:90} in a fiber, imprecision in the path length of a fiber~\cite{D98}, or the lack of a common phase reference between sender and receiver~\cite{BDS07} lead to decoherence as well, and this can affect quantum communication and key distribution schemes adversely. Many of the aforementioned forms of decoherence can be unified under a single model, known as the bosonic dephasing channel (BDC)~\cite{JC10,Arqand2020}. The action of such a channel on the density operator $\rho$ of a single-mode bosonic system is given by \bb \NN_{p}(\rho)\coloneqq\int_{-\pi}^{\pi}d\phi\ p(\phi)\ e^{-i\n\, \phi}\rho\, e^{i\n\, \phi},\label{eq:bdc-def} \ee where $p$ is a probability density function on the interval $\left[-\pi,\pi\right]$ and $a^\dag a$ is the photon number operator. Since each unitary operator $e^{-i\n\,\phi}$ realizes a phase shift of the state $\rho$, the action of the channel $\NN_{p}$ is to randomize the phase of this state according to the probability density $p$. Representing $\rho=\sum_{n,m}\rho_{nm}\ketbraa{n}{m}$ in the photon number basis, it is a straightforward calculation to show that \bb \NN_{p}(\rho)=\sum_{n,m}\rho_{nm}(T_p)_{nm}\ketbraa{n}{m}\, , \label{eq:action-bdc-number-basis} \ee where $T_p$ is the infinite matrix with entries \bb (T_p)_{nm} \coloneqq\int_{-\pi}^{\pi}d\phi\ p(\phi)\, e^{-i\phi(n-m)}. \label{eq:toeplitz-bos-deph} \ee This channel thus generalizes the simple dephasing channel considered above. Its action preserves diagonal elements of~$\rho$, but reduces the magnitude of the off-diagonal elements, a key signature of decoherence. As the name suggests, the BDC can be seen as a generalization to bosonic systems of the qudit dephasing channel~\cite{Devetak-Shor}. Of primary interest is understanding the information-processing capabilities of the BDC in~\eqref{eq:bdc-def}. We can do so by means of the formalism of quantum Shannon theory~\cite{H17, W17}, in which we assume that the channel acts many times to affect multiple quantum systems. Not only does this formalism model dephasing that acts on quantum information encoded in a memory, as in superconducting systems, but also dephasing that affects communication systems. 
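As a concrete illustration of~\eqref{eq:action-bdc-number-basis} and~\eqref{eq:toeplitz-bos-deph}, the following short Python sketch (ours, provided for illustration only and not part of the original analysis; it assumes NumPy, and the function names are purely illustrative) builds the truncated matrix $T_p^{(d)}$ by numerical quadrature and applies the channel as an entrywise product in the photon-number basis.
\begin{verbatim}
import numpy as np

def dephasing_matrix(p, d, n_grid=20001):
    # (T_p)_{nm} = integral over [-pi,pi] of p(phi)*exp(-1j*phi*(n-m)), 0 <= n,m < d
    phi = np.linspace(-np.pi, np.pi, n_grid)
    dphi = phi[1] - phi[0]
    js = np.arange(-(d - 1), d)                             # all differences j = n - m
    t = np.exp(-1j * np.outer(js, phi)) @ (p(phi) * dphi)   # Fourier coefficients of p
    n, m = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return t[(n - m) + (d - 1)]                             # Toeplitz fill: (T_p)_{nm} = t_{n-m}

def bdc_output(rho, T):
    # channel action in the photon-number basis: entrywise (Hadamard) product with T_p
    return T * rho

# Example: Gaussian phase noise of variance gamma, wrapped onto [-pi,pi]; for this
# choice the entries reduce to exp(-gamma*(n-m)**2/2), i.e. the simple dephasing
# process described at the beginning of this section.
gamma = 0.2
p = lambda phi: np.exp(-(phi[:, None] + 2*np.pi*np.arange(-20, 21))**2
                       / (2*gamma)).sum(1) / np.sqrt(2*np.pi*gamma)
T = dephasing_matrix(p, d=5)
\end{verbatim}
Up to quadrature error, \texttt{T} has unit diagonal, so the diagonal matrix elements of the input state are preserved while the off-diagonal ones are damped, in line with the discussion above.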
Here, a key quantity of interest is the quantum capacity $Q(\NN_{p})$ of the BDC $\NN_{p}$, which is equal to the largest rate at which quantum information can be faithfully sustained in the presence of dephasing~\cite{W17}. The quantum capacity has been traditionally studied with applications to quantum communication in mind; however, recent evidence~\cite{FMS22} indicates that it is also relevant for understanding the overhead of fault-tolerant quantum computation, i.e., the fundamental ratio of physical to logical qubits to perform quantum computation indefinitely with little chance of error. The private capacity $P(\NN_{p})$ is another operational quantity of interest~\cite{W17}, being the largest rate at which private classical information can be faithfully transmitted over many independent uses of the channel $\NN_{p}$ (Figure~\ref{protocol_Q_fig}). One can also consider both of these capacities in the scenario in which classical processing or classical communication is allowed for free between every channel use~\cite{BDSW96, TGW14IEEE}, and here we denote the respective quantities by $Q_{\leftrightarrow}(\NN_{p})$ and $P_{\!\leftrightarrow}(\NN_{p})$ (Figure~\ref{protocol_Q2_fig}). The secret-key-agreement capacity $P_{\!\leftrightarrow}(\NN_{p})$ is directly related to the rate at which quantum key distribution is possible over the channel~\cite{TGW14IEEE}, and as such, it is a fundamental limit of experimental interest. One can also study strong converse capacities (see e.g.~\cite[Eq.~(9.122)]{H17},~\cite[Definition~9.15]{KW20book}, and~\cite{WTB16}), which sharpen the above operational interpretations by considering error probabilities between zero and one. If the usual capacity is equal to the strong converse capacity, then we say that the strong converse property holds for the channel under consideration, and the implication is that the capacity demarcates a very sharp dividing line between achievable and unachievable rates for communication. We let $Q^{\dag}(\NN_{p})$, $P^{\dag}(\NN_{p})$, $Q_{\leftrightarrow}^{\dag}(\NN_{p})$, and $P_{\!\leftrightarrow}^{\dag}(\NN_{p})$ denote the various strong converse capacities for the communication scenarios mentioned above. Understanding all of the aforementioned capacities is essential for the forthcoming quantum internet~\cite{WEH18}, which will consist of various nodes in a network exchanging quantum and private information using the principles of quantum information science. \begin{figure} \includegraphics[scale=0.16]{Thinner-Q.jpeg} \caption{A depiction of a quantum communication protocol that uses the channel $\NN$ a total of $n$ times to send a quantum system $M$ reliably. The initial state of the protocol is $\Psi_{RM}$, and the final state is $\eta_{RM} \coloneqq \left(\operatorname{id}_R \otimes \left(\mathcal{D}_n\circ \NN^{\otimes n} \circ \mathcal{E}_n\right)_M \right) (\Psi_{RM})$. The encoding and decoding channels ${\cal E}_n$ and ${\cal D}_n$ are operated by the sender Alice and receiver Bob, respectively. The system $M$, initially entangled with a reference system $R$, is encoded via a suitable encoding map $\mathcal{E}_n$, transmitted via $n$ parallel uses of the channel $\NN$, and decoded at the receiving end by a decoding map $\mathcal{D}_n$. The error associated with the transmission is $\e \coloneqq \sup_{\ket{\Psi}} \left(1-\braket{\Psi_{RM}|\eta_{RM}|\Psi_{RM}}\right)$, and the number of transmitted qubits is $\log_2 |M|$, where $|M|$ is the dimension of $M$. 
Thus, the rate of transmitted qubits with $n$ uses of $\NN$ and error $\e$ is given by $\sup_{\mathcal{E}_n,\mathcal{D}_n} (\log_2|M|)/n \eqqcolon \frac1n Q_\e(\NN^{\otimes n})$, with the maximization being over all encoding and decoding operations achieving error at most $\e$. The quantum capacity is then obtained by taking the limit $n\to\infty$ and requiring that $\e$ vanishes in this limit, i.e.\ $Q(\NN)\coloneqq \inf_{\e\in (0,1)}\liminf_{n} \frac1n Q_\e(\NN^{\otimes n})$. The strong converse quantum capacity, instead, is constructed by allowing a nonzero error $\e$ also asymptotically, with the only requirement that it stays bounded away from its maximum value of $1$: in formula, $Q^\dag(\NN)\coloneqq \sup_{\e\in (0,1)}\limsup_{n} \frac1n Q_\e(\NN^{\otimes n})$. The private capacity $P(\NN)$ and the associated strong converse capacity $P^\dag(\NN)$ are defined analogously, with the main differences being that (a)~the transmitted message $M$ is classical, (b)~an eavesdropper Eve is granted access to all environment systems interacting with the input signals of $\NN$, and (c)~the main goal of the protocol is to transmit the message reliably in such a way that Eve does not learn about it. See~\cite{KW20book} for further expositions.} \label{protocol_Q_fig} \end{figure} \begin{figure} \includegraphics[scale=0.16]{Thinner-Q2-new.jpeg} \caption{An LOCC-assisted protocol that involves $n$ uses of the quantum channel $\NN$, assumed to connect two spatially separated laboratories, belonging to Alice and Bob. The upper arrows correspond to quantum registers of Alice and the lower arrows to quantum registers of Bob. Between each channel use and the next, Alice and Bob can implement an arbitrary protocol composed of local operations assisted by classical communication (LOCC). The final output of the protocol is a state $\eta_n$ that should resemble a maximally entangled state $\Phi_K$ of local dimension $K$. The associated error is $\e\coloneqq 1-\braket{\Phi_K|\eta_n|\Phi_K}$, and the rate of entanglement generation with $n$ uses is given by $\sup (\log_2 K)/n \eqqcolon Q_{\leftrightarrow,n,\epsilon}(\mathcal{N})$, with the maximization being over all sequences of LOCC protocols. The assisted quantum capacity of $\NN$ is then defined by taking the limit $n\to\infty$ as $Q_\leftrightarrow(\NN)\coloneqq \inf_{\e\in (0,1)} \liminf_{n} Q_{\leftrightarrow,n,\epsilon}(\mathcal{N})$, with the associated strong converse capacity being $Q^\dag_{\leftrightarrow}(\NN) \coloneqq \sup_{\epsilon \in (0,1)} \limsup_{n} Q_{\leftrightarrow,n,\epsilon}(\NN)$. The assisted private capacity $P_{\!\leftrightarrow}(\NN)$ and its strong converse capacity $P_{\!\leftrightarrow}^\dag(\NN)$ are constructed similarly, with the difference that the target state is a private state instead of a maximally entangled state.} \label{protocol_Q2_fig} \end{figure} We note here that while the quantum capacity~\cite{JC10,Arqand2020} and the assisted quantum capacity~\cite{Arqand2021} of the BDC $\NN_p$ in~\eqref{eq:bdc-def} have been investigated, neither of them has been calculated so far. The determination of the quantum capacity of this channel, in particular, has been an open problem since the publication of~\cite{JC10} in~2010. The main difficulty is that $\NN_p$ is in general a non-Gaussian channel, which makes the techniques in~\cite{holwer,WPG07} inapplicable. 
\medskip \paragraph*{Results.} In this paper, we completely solve all of the aforementioned eight capacities of the BDCs, finding that they all coincide and are given by the following simple expression: \bb \CC(\NN_p) \coloneqq&\ \log_{2}(2\pi)\!-\!h(p) \\ =&\ Q(\NN_{p}) = P(\NN_{p}) = Q_{\leftrightarrow}(\NN_{p}) = P_{\!\leftrightarrow}(\NN_{p}) \\ =&\ Q^{\dag}\!(\NN_{p}) = P^{\dag}\!(\NN_{p}) = Q_{\leftrightarrow}^{\dag} (\NN_{p}) = P_{\!\!\leftrightarrow}^{\dag}(\NN_{p}) , \label{eq:main-result} \ee where \bb h(p)\coloneqq-\int d\phi\ p(\phi)\log_{2}(p(\phi)) \ee is the differential entropy of the probability density $p$. Section~III.B of the Supplementary Information contains a detailed derivation of the above result. We note here that the first expression in~\eqref{eq:main-result} can be written in terms of the relative entropy as \bb \log_{2}(2\pi)-h(p)=D(p\Vert u), \ee where $u$ is the uniform probability density on the interval $\left[-\pi,\pi\right]$, and the relative entropy is defined as \bb D(r\Vert s)\coloneqq\int d\phi\ r(\phi)\log_{2}\!\left(\frac{r(\phi)}{s(\phi)}\right) \ee for general probability densities $r$ and $s$. By invoking basic properties of relative entropy~\cite{vanErven2014}, this rewriting indicates that all of the capacities are strictly positive unless the density $p$ is uniform, which represents a complete dephasing of the channel input state. As Eq.~\eqref{eq:main-result} indicates, there is a remarkable simplification of the capacities for BDCs. The ultimate rate of private communication over these channels is no larger than the ultimate rate for quantum communication. Furthermore, unlimited classical communication between the sender and receiver does not enhance the capacities. Finally, the strong converse property holds, meaning that the rate $D(p\Vert u)$ represents a very sharp dividing line between possible and impossible communication rates. As mentioned in the introduction, since dephasing is a prominent source of noise in both quantum communication and computation, we expect our finding to have practical relevance in both scenarios. Based on the recent findings of~\cite{FMS22}, we expect that $\left[ D(p\Vert u)\right]^{-1}$ can be related to the ultimate overhead (ratio of physical systems to logical qubits) of fault-tolerant quantum computation with superconducting systems, but further work is needed to demonstrate this definitively. Our results can be extended to all multimode BDCs, which act simultaneously on a collection of $m$ bosonic modes with photon number operators $a_j^\dag a_j$ as \bb \NN_p^{(m)}(\rho) \coloneqq \int_{[-\pi,\pi]^m} \hspace{-1ex} d^m \vb{\upphi}\ p(\vb{\upphi})\, e^{-i \sum_j a^\dag_{\!j} a_{\!j}^{\phantom{\dag}} \phi_j} \rho\, e^{i\sum_j a^\dag_{\!j} a_{\!j}^{\phantom{\dag}} \phi_j} , \label{Np_multimode} \ee where $\vb{\upphi} \coloneqq (\phi_1, \ldots, \phi_m)$ and $p$ is a probability density function on the hypercube $[-\pi,\pi]^m$. The eight capacities listed in~\eqref{eq:main-result} are all equal also for the channel $\NN_p^{(m)}$, and we denote them by $\CC\big(\NN_p^{(m)}\big)$. They are given by the formula \bb \CC\big(\NN_p^{(m)}\big) = m\log_2(2\pi) - h(p)\, , \label{multimode_capacities} \ee where \bb h(p) = -\int _{[-\pi,\pi]^m} d^m \vb{\upphi}\ p(\vb{\upphi}) \log_2 (p(\vb{\upphi}))\, , \ee constituting a straightforward generalization of~\eqref{eq:main-result}. 
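The rewriting of~\eqref{eq:main-result} as a relative entropy is elementary but perhaps worth spelling out. Since the uniform density on $[-\pi,\pi]$ is $u(\phi)=\frac{1}{2\pi}$, we have \bb D(p\Vert u) &= \int_{-\pi}^{\pi} d\phi\ p(\phi)\log_{2}\!\left(2\pi\, p(\phi)\right) \\ &= \log_{2}(2\pi)\int_{-\pi}^{\pi} d\phi\ p(\phi) + \int_{-\pi}^{\pi} d\phi\ p(\phi)\log_{2} p(\phi) \\ &= \log_{2}(2\pi) - h(p)\, . \ee The same one-line computation on the hypercube, with the uniform density $(2\pi)^{-m}$ on $[-\pi,\pi]^m$, shows that~\eqref{multimode_capacities} can likewise be written as the relative entropy of $p$ with respect to the uniform density on $[-\pi,\pi]^m$.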
As a special case of~\eqref{Np_multimode}, when $p$ is concentrated on the line $\vb{\upphi} = (\phi,\ldots,\phi)$ and $\phi\in [-\pi,\pi]$ is uniformly distributed, one obtains the completely dephasing channel considered in~\cite{Fanizza2021squeezingenhanced, Z21}. The most paradigmatic example of a BDC is that corresponding to a normal distribution $\widetilde{p}_\gamma(\phi) \coloneqq (2\pi\gamma)^{-1/2} e^{-\phi^2/(2\gamma)}$ of $\phi$ over the whole real line. This is the main example studied in~\cite{JC10,Arqand2020}, and it is based on a physical model discussed in those works. Here, $\gamma>0$ parametrizes the uncertainty of the rotation angle in~\eqref{eq:bdc-def}: the larger $\gamma$, the stronger the dephasing. Since values of $\phi$ that differ modulo $2\pi$ can be identified, we obtain as an effective distribution $p$ on $[-\pi,\pi]$ the \emph{wrapped normal distribution} \bb p_{\gamma}(\phi) \coloneqq \frac{1}{\sqrt{2\pi\gamma}} \sum_{k=-\infty}^{+\infty} e^{-\frac{1}{2\gamma} (\phi+2\pi k)^2} . \label{wrapped_Gaussian} \ee The matrix $T_{p_\gamma}$ obtained by plugging this distribution into~\eqref{eq:toeplitz-bos-deph} has entries $(T_{p_\gamma})_{nm} = e^{-\frac{\gamma}{2}(n-m)^2}$, and therefore the corresponding BDC is the one discussed in the introduction. We find that \bb \CC(\NN_{p_\gamma}) &= \log_2 \varphi( e^{-\gamma} ) + \frac{2}{\ln 2} \sum_{k=1}^\infty \frac{(-1)^{k-1} e^{-\frac{\gamma}{2}(k^2+k)}}{k\left(1-e^{-k\gamma}\right)}\, , \label{capacities_wrapped_Gaussian} \ee where $\varphi(q) \coloneqq \prod_{k=1}^\infty \left(1-q^k\right)$ is the Euler function. See Section~IV.A of the Supplementary Information for details. In the physically relevant limit $\gamma \lesssim 1$, $p_\gamma$ and $\widetilde{p}_\gamma$ are both concentrated around $0$, and their entropies are nearly identical. In this regime \bb \mathscr{C}(\NN_{p_\gamma}) &\approx \frac12 \log_2\frac{2\pi}{e\gamma} \\ &\approx \left(0.604 + \frac12\log_2\frac{1}{\gamma}\right)\, \mathrm{bits}\big/\mathrm{channel\ use}\, , \ee which demarcates the ultimate limitations on quantum and private communication in the presence of small dephasing noise. In the opposite case $\gamma\gg 1$ we obtain that \bb \mathscr{C}(\NN_{p_\gamma}) \approx \frac{e^{-\gamma}}{\ln 2}\, . \ee The above formula generalizes and makes quantitative the claim found in~\cite[\S~VI]{Arqand2020} that the quantum capacity of $\NN_{p_\gamma}$ vanishes exponentially for large $\gamma$. In Figure~\ref{capacities_fig}, we plot the capacity formula~\eqref{capacities_wrapped_Gaussian} as a function of $\gamma$, comparing it with the capacities $\CC(\NN_p)$ obtained for other important probability distributions $p$ on the circle. \begin{figure} \includegraphics[scale=0.95]{capacities.pdf} \caption{The capacities of the BDCs associated with the wrapped normal distribution $p_\gamma$, the von Mises distribution $p_\lambda$, and the wrapped Cauchy distribution $p_\kappa$. The units of the vertical axis are qubits or private bits per channel use, and the horizontal axis features the main parameter governing the various distributions. The wrapped normal distribution is given by~\eqref{wrapped_Gaussian}, it gives a Gaussian-modulated dephasing $(T_{p_\gamma})_{nm} = e^{-\gamma (n-m)^2/2}$, and the capacity of the associated BDC $\NN_{p_\gamma}$ is given by~\eqref{capacities_wrapped_Gaussian}. The von Mises distribution $p_\lambda$ is a better analogue of the normal distribution in the case of a circle. 
It is given by $p_\lambda(\phi) \coloneqq \frac{e^{\cos(\phi)/\lambda}}{2\pi\, I_0(1/\lambda)}$, where $\lambda>0$ is a scale parameter analogous to $\gamma$ above, and $I_k$ is a modified Bessel function of the first kind. The obtained dephasing matrix has entries $(T_{p_\lambda})_{nm} = \tfrac{I_{|n-m|}(1/\lambda)}{I_0(1/\lambda)}$, and the capacities of the BDC $\NN_{p_\lambda}$ can be expressed as $\mathscr{C}(\NN_{p_\lambda}) = \frac{1}{\ln 2} \frac{I_1(1/\lambda)}{\lambda\, I_0(1/\lambda)} - \log_2 I_0(1/\lambda)$. Finally, the wrapped Cauchy distribution defined by $p_\kappa(\phi) \coloneqq \frac{1}{2\pi} \frac{\sinh\sqrt{\kappa}}{\cosh\sqrt{\kappa} -\cos\phi}$ corresponds to a dephasing matrix $(T_{p_\kappa})_{nm} = e^{-\sqrt{\kappa} |n-m|}$; it yields a capacity equal to $\CC(\NN_{p_\kappa}) = - \log_2\!\big(1-e^{-2\sqrt{\kappa}}\big)$.} \label{capacities_fig} \end{figure} \medskip \paragraph*{Discussion.} Our main result represents important progress for quantum information theory, solving the capacities of a physically relevant class of non-Gaussian bosonic channels. While many capacities of bosonic Gaussian channels have been solved in prior work~\cite{holwer,GGLMSY04,WPG07,Giovadd,GLMS03,PLOB,WTB16}, we are not aware of any other class of non-Gaussian channels that represent relevant models of noise in bosonic systems and whose capacity can be computed to yield a nontrivial value (neither zero nor infinite). Our findings have non-trivial implications for the design of quantum error-correcting codes~\cite{Shor95,LB13} that encode and protect quantum information against the deleterious effects of BDCs. In particular, there is no superadditivity effect that occurs, as is the case with other quantum channels such as the depolarizing and dephrasure channels~\cite{DSS98,SS07,LLS18}. Thus, we now know that the random selection schemes of~\cite{ieee2005dev, devetak2005} are optimal designs for BDCs. It would be interesting to design quantum polar codes tailored to BDCs, as these codes are known to be capacity-achieving for certain kinds of finite-dimensional channels~\cite{RDR12,RW14}. As stated previously, another implication of our findings is that classical communication between sender and receiver does not increase the quantum and private capacities of BDCs. Our formula can be seen as a natural generalization to bosonic systems of that given in~\cite{Devetak-Shor,TWW17,PLOB} for the quantum and private capacities of the qudit dephasing channel. However, the similarity of the final formula should not obscure the fact that the techniques used for its derivation are quite different. In particular, a key technical tool employed here is the Szeg\H{o} theorem from asymptotic linear algebra~\cite{Szego1920,SerraCapizzano2002}, in addition to a teleportation~\cite{BBCJPW93} simulation argument that is rather different from those presented previously~\cite{BDSW96,NFC09,WPG07,Alex-Master,PLOB,WTB16}. The collapse that occurs in~\eqref{eq:main-result}, where eight different capacities are shown to coincide, also occurs for the quantum-limited bosonic amplifier channel, as a consequence of the findings of~\cite{WPG07,PLOB,Mark-energy-constrained,WTB16}. It would be interesting to determine other channels of physical interest for which this collapse occurs. 
It is known that this kind of collapse does not occur for the quantum erasure and pure-loss bosonic channels, because classical feedback from receiver to sender can increase the quantum and private capacities of these channels~\cite{erasure,Pirandola2009,PLOB}. Such an increase has long been known to have practical implications for the design of quantum key distribution protocols, as discussed in~\cite{Pirandola2009,PLOB}. Going forward from here, it is of interest to address the capacities of bosonic lossy dephasing channels, in which both loss and dephasing act at the same time. Such channels are modeled as the serial concatenation $\mathcal{L}_{\eta} \circ \NN_p$, where $\mathcal{L}_{\eta}$ is a pure loss channel of transmissivity $\eta \in [0,1]$; they provide realistic noise models for communication and computation, given that both kinds of noises are relevant in these systems~\cite{LXJR22}. Our result here, combined with the main result of~\cite{WPG07} and a data-processing bottlenecking argument, leads to the following upper bound on the quantum and private capacities of the bosonic lossy dephasing channel: \bb Q(\mathcal{L}_{\eta} \circ \NN_p) & \leq P(\mathcal{L}_{\eta} \circ \NN_p) \\ & \leq \min\{P(\mathcal{L}_{\eta}), P(\NN_p)\} \\ & = \min\big\{\big(\log_2(\eta / (1-\eta))\big)_+,\,D(p\Vert u)\big\}\, , \ee where $x_+\coloneqq \max\{x,0\}$. By the same argument, but invoking the results of~\cite{PLOB,WTB16}, the following upper bounds hold for the quantum and private capacities assisted by classical communication: \bb Q_{\leftrightarrow}(\mathcal{L}_{\eta} \circ \NN_p) &\leq Q^\dag_{\leftrightarrow}(\mathcal{L}_{\eta} \circ \NN_p),\ P_{\!\leftrightarrow}(\mathcal{L}_{\eta} \circ \NN_p) \\ & \leq P^\dag_{\!\leftrightarrow}(\mathcal{L}_{\eta} \circ \NN_p) \\ & \leq \min\{\log_2(1 / (1-\eta)),D(p\Vert u)\}. \ee The same data-processing argument can be employed for BDCs composed with other common bosonic Gaussian channels in order to obtain upper bounds on the composed channels' capacities, while using known upper bounds from prior work~\cite{PLOB,WTB16,Sharma2018,Rosati2018,Noh2019,FKG21}. It also remains open to determine the energy-constrained quantum and private capacities of BDCs, as well as their classical-communication-assisted counterparts~\cite{Arqand2020,Arqand2021}. Note that the lower bound in~\eqref{eq:max-mixed-rate} is a legitimate lower bound on the energy-constrained quantum capacity of $\NN_p$ when the mean photon number of the channel input cannot exceed $(d-1)/2$. Also, it is clear that the energy-constrained classical capacity of $\NN_p$ is equal to $g(E)\coloneqq (E+1)\log_2(E+1) - E\log_2 E$, where $E$ is the energy constraint. This identity depends essentially on the fact that Fock states can be perfectly transmitted through any BDC~\cite[\S~3.1]{Fanizza2021squeezingenhanced}. Finally, it is an open question to determine the energy-constrained entanglement-assisted classical capacity of BDCs~\cite{Holevo2004}. In conclusion, in this work we have found an analytic expression for the quantum and private, assisted and unassisted, weak and strong converse capacities of all multimode bosonic dephasing channels, solving a problem that had been open for over a decade. BDCs are among the first non-Gaussian channels for which these capacities are calculated. \medskip \paragraph*{Acknowledgements} --- We thank Stefano Mancini for discussions. LL was partially supported by the Alexander von Humboldt Foundation. 
MMW acknowledges support from the National Science Foundation under grant no.~2014010. \medskip \noindent\textbf{Author contributions} --- Both authors contributed to all aspects of this manuscript and to the writing of the paper. \medskip \noindent\textbf{Supplementary Information} is available for this paper. \medskip \noindent\textbf{Competing interest} --- The authors declare no competing interests. \bibliography{Ref,biblio} \let\addcontentsline\oldaddcontentsline \tocless{\section*}{Methods} In this section, we provide a short overview of the techniques used to prove our main result~\eqref{eq:main-result}. We establish the following two inequalities: \begin{align} Q(\NN_{p}) & \geq D(p\Vert u),\label{eq:lower-bnd-cap}\\ P_{\!\leftrightarrow}^{\dag}(\NN_{p}) & \leq D(p\Vert u).\label{eq:upper-bnd-cap}\end{align} Note that~\eqref{eq:lower-bnd-cap} and~\eqref{eq:upper-bnd-cap} together imply the main result, because $Q(\NN_{p})$ is the smallest among all of the capacities listed and $P_{\!\leftrightarrow}^{\dag}(\NN_{p})$ is the largest. For a precise ordering of the various capacities, see~\cite[Eq.~(5.6)--(5.13)]{WTB16}. To prove~\eqref{eq:lower-bnd-cap}, let us recall that the coherent information of a quantum channel is a lower bound on its quantum capacity~\cite{W17}. Specifically, the following inequality holds for a general channel $\NN$:\bb Q(\NN) \geq \sup_{\rho} \left\{H(\NN(\rho))-H((\operatorname{id}\otimes\NN)(\psi^{\rho})) \right\} ,\label{eq:coh-info-low-bnd}\ee where the von Neumann entropy of a state $\sigma$ is defined as $H(\sigma)\coloneqq -\Tr[\sigma\log_{2}\sigma]$, the optimization is over every state~$\rho$ that can be transmitted into the channel $\NN$, and $\psi^{\rho}$ is a purification of $\rho$ (such that one recovers $\rho$ after a partial trace). We can apply this lower bound to the BDC $\NN_{p}$. For a fixed photon number $d-1$, let us choose $\rho$ to be the maximally mixed state of dimension $d$, i.e., $\rho=\tau_{d}\coloneqq \frac{1}{d}\sum_{n=0}^{d-1}\ketbra{n}$. This state is purified by the maximally entangled state $\Phi_{d} \coloneqq \frac{1}{d}\sum_{n,m=0}^{d-1}\ketbraa{n}{m} \otimes \ketbraa{n}{m}$. To evaluate the first term in~\eqref{eq:coh-info-low-bnd}, consider from~\eqref{eq:action-bdc-number-basis} and~\eqref{eq:toeplitz-bos-deph} that the output state is maximally mixed, i.e., $\NN_p(\tau_{d})=\tau_{d}$, because the input state $\tau_{d}$ has no off-diagonal elements and the diagonal elements of the matrix $T_p$ in~\eqref{eq:toeplitz-bos-deph} are all equal to one. Thus, we find that $H(\NN_p(\tau_d))=\log_{2}d$. For the second term in~\eqref{eq:coh-info-low-bnd}, we again apply~\eqref{eq:action-bdc-number-basis} and~\eqref{eq:toeplitz-bos-deph} to determine that \bb \omega_{p,d} \coloneqq&\ (\operatorname{id}\otimes\NN_p)(\Phi_{d})\\ =&\ \frac{1}{d}\sum_{n,m=0}^{d-1}(T_p)_{nm}\ketbraa{n}{m} \otimes \ketbraa{n}{m}. \label{eq:choi-state-bdc} \ee As the entropy is invariant under the action of an isometry, and the isometry $\ket{n}\rightarrow\ket{n}\ket{n}$ takes the state \bb \frac{T^{(d)}_p}{d} \coloneqq \frac{1}{d} \sum_{n,m=0}^{d-1}(T_p)_{nm}\ketbraa{n}{m} \ee to $\omega_{p,d}$, we find that the entropy $H(\omega_{p,d})$ reduces to\bb H(\omega_{p,d})=H\Big(T^{(d)}_p\!\big/d\Big). \ee By a straightforward calculation, we then find that\bb H\left(\NN_p(\tau_{d})\right) - H(\omega_{p,d}) &=\log_{2}d - H\Big(T^{(d)}_p\!\big/d\Big)\\ &= \frac{1}{d}\Tr \left[T^{(d)}_p \log_2 T^{(d)}_p\right]\,. 
\label{eq:max-mixed-rate} \ee This establishes the value in~\eqref{eq:max-mixed-rate} to be an achievable rate for quantum communication over $\NN_{p}$. Since this lower bound holds for every photon number $d-1\in\mathbb{N}$, we can then take the limit $d\rightarrow\infty$ and apply the Szeg\H{o} theorem~\cite{Szego1920,SerraCapizzano2002} to conclude that the following value is also an achievable rate: \bb &\lim_{d\rightarrow\infty}\frac{1}{d} \Tr \left[T^{(d)}_p \log_2 T^{(d)}_p\right]\\ &\qquad =\frac{1}{2\pi}\int_{-\pi}^{\pi}d\phi\ 2\pi p(\phi)\log_{2}\left( 2\pi p(\phi)\right) \\ &\qquad =D(p\Vert u). \ee Thus, this establishes the lower bound in~\eqref{eq:lower-bnd-cap}. To prove the upper bound in~\eqref{eq:upper-bnd-cap}, we apply a modified teleportation simulation argument. This kind of argument was introduced in~\cite[Section~V]{BDSW96}, for the specific purpose of finding upper bounds on the quantum capacity assisted by classical communication, and it has been employed in a number of works since then~\cite{NFC09,WPG07,Alex-Master,PLOB,WTB16}. Since we are interested in bounding the strong converse secret key agreement capacity $P_{\!\leftrightarrow}^{\dag}(\NN_{p})$, we apply reasoning similar to that given in~\cite{WTB16} (here see also~\cite{private,Horodecki2009}). However, there are some critical differences in our approach here. To begin, let us again consider the state in~\eqref{eq:choi-state-bdc}. As we show in Section~III.B of the Supplementary Information, by performing the standard teleportation protocol~\cite{BBCJPW93} with the state in~\eqref{eq:choi-state-bdc} as the entangled resource state, rather than the maximally entangled state, we can simulate the action of the channel $\NN_p$ on a fixed input state, up to an error that vanishes in the limit as $d\to \infty$. This key insight demonstrates that the state in~\eqref{eq:choi-state-bdc} is approximately equivalent in a resource-theoretic sense to the channel $\NN_p$. In more detail, we can express this observation in terms of the following equality: for every state $\rho$, it holds that \bb \lim_{d \to \infty}\left \Vert (\operatorname{id} \otimes \NN_p)(\rho) - (\operatorname{id} \otimes \NN_{p,d})(\rho) \right\Vert_1 = 0, \label{eq:tp-sim-error} \ee where $\NN_{p,d}(\sigma) \coloneqq \TT(\sigma \otimes \omega_{p,d})$ is the channel resulting from the teleportation simulation. That is, the simulating channel $\NN_{p,d}$ is realized by sending one subsystem of the maximally entangled state $\Phi_d$ through $\NN_p$, which generates $\omega_{p,d}$, and then acting on the input state $\sigma$ and the resource state $\omega_{p,d}$ with the standard teleportation protocol~$\TT$. By invoking the main insight from~\cite{private,Horodecki2009} (as used later in~\cite{TGW14IEEE}), we next note that a protocol for secret key agreement over the channel is equivalent to one for which the goal is to distill a bipartite private state. Such a protocol involves only two parties, and thus the tools of entanglement theory come into play~\cite{private,Horodecki2009}. Now let $\mathcal{P}_{n,\epsilon}$ denote a general, fixed protocol for secret key agreement, involving $n$ uses of the channel $\NN_p$ and achieving an error $\epsilon$ for generating a bipartite private state of rate $R_{n,\epsilon}$ (where the units of $R_{n,\epsilon}$ are secret key bits per channel use). 
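Before analyzing this protocol, we note in passing that the Szeg\H{o} limit invoked in the lower-bound argument above lends itself to a simple numerical check. The following Python sketch (ours, for illustration only; it assumes NumPy) evaluates the finite-$d$ rate $\frac{1}{d}\Tr\big[T^{(d)}_{p_\gamma}\log_2 T^{(d)}_{p_\gamma}\big]$ for the wrapped normal distribution of~\eqref{wrapped_Gaussian}, and compares it with $D(p_\gamma\Vert u)$ obtained by quadrature and with the closed form~\eqref{capacities_wrapped_Gaussian}.
\begin{verbatim}
import numpy as np

gamma = 0.5

def p_gamma(phi, kmax=50):
    # wrapped normal density on [-pi,pi]
    ks = np.arange(-kmax, kmax + 1)
    return np.exp(-(phi[:, None] + 2*np.pi*ks)**2 / (2*gamma)).sum(1) / np.sqrt(2*np.pi*gamma)

# D(p_gamma || u) = log2(2*pi) - h(p_gamma), by quadrature
phi = np.linspace(-np.pi, np.pi, 200001)
pg = p_gamma(phi)
D = np.sum(pg * np.log2(2*np.pi*pg)) * (phi[1] - phi[0])

# finite-d rates (1/d) Tr[T log2 T], with (T_{p_gamma})_{nm} = exp(-gamma*(n-m)**2/2);
# by the Szego theorem these approach D as d grows
for d in (10, 50, 200):
    n = np.arange(d)
    T = np.exp(-gamma * (n[:, None] - n[None, :])**2 / 2)
    lam = np.linalg.eigvalsh(T)
    lam = lam[lam > 1e-15]        # drop numerically vanishing eigenvalues (x*log(x) -> 0)
    print(d, (lam * np.log2(lam)).sum() / d)

# closed-form capacity quoted in the Results section, for comparison
k = np.arange(1, 200)
euler_phi = np.prod(1 - np.exp(-gamma * np.arange(1, 200)))
series = ((-1)**(k - 1) * np.exp(-gamma*(k**2 + k)/2) / (k*(1 - np.exp(-k*gamma)))).sum()
print(D, np.log2(euler_phi) + 2*series/np.log(2))
\end{verbatim}
Up to discretization, truncation, and finite-$d$ effects, all three quantities coincide, consistently with~\eqref{eq:main-result}.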
By using the two aforementioned tools, teleportation simulation and the reduction from secret key agreement to bipartite private distillation, the protocol $\mathcal{P}_{n,\epsilon}$ can be approximately simulated by the action of a single LOCC channel on $n$ copies of the resource state $\omega_{p,d}$. Associated with this simulation are two trace norm errors $\epsilon$ and $\delta_{d}$, the first of which is the error of the original protocol $\mathcal{P}_{n,\epsilon}$ in producing the desired bipartite private state and the second of which is the error of the simulation. We then invoke~\cite[Eq.~(5.37)]{WTB16} to establish the following inequality, which, for the fixed protocol $\mathcal{P}_{n,\epsilon}$, relates the rate $R_{n,\epsilon}$ at which secret key can be distilled to the aforementioned errors and an entanglement measure called sandwiched R\'enyi relative entropy of entanglement: \bb R_{n,\epsilon} \leq \widetilde{E}_{R,\alpha}(\omega_{p,d}) + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1-\delta_{d} - \epsilon}\right), \ee where $\alpha > 1$ and the sandwiched R\'enyi relative entropy of entanglement of a general bipartite state $\rho$ is defined as~\cite{WTB16} \bb \widetilde{E}_{R,\alpha}(\rho) \coloneqq \inf_{\sigma \in \operatorname{SEP}} \frac{2\alpha}{\alpha-1}\log_2\!\left\Vert \rho^{1/2} \sigma^{(1-\alpha)/2\alpha}\right\Vert_{2\alpha}, \ee with SEP denoting the set of separable (unentangled) states. By choosing the separable state to be $(\operatorname{id} \otimes \NN_p)\big(\overline{\Phi}_d\big)$, where $\overline{\Phi}_d \coloneqq \frac{1}{d}\sum_{n=0}^{d-1} \ketbra{n}\otimes \ketbra{n}$, we find that \bb \widetilde{E}_{R,\alpha}(\omega_{p,d}) \leq \frac{1}{\alpha-1}\log_2\frac{1}{d} \Tr\Big[\big(T_p^{(d)}\big)^\alpha\Big] . \ee We refer the reader to Section~III.B of the Supplementary Information for a detailed derivation. Thus, we find that the following rate upper bound holds for the secret key agreement protocol $\mathcal{P}_{n,\epsilon}$ for all $d \in \mathbb{N}$: \begin{multline} R_{n,\epsilon} \leq \frac{1}{\alpha-1}\log_2\frac{1}{d} \Tr\Big[\big(T_p^{(d)}\big)^\alpha\Big] \\ + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1-\delta_{d} - \epsilon}\right), \end{multline} Since this bound holds for all $d\in\mathbb{N}$, we can take the limit $d\to \infty$ and then arrive at the following upper bound: \bb R_{n,\epsilon} & \leq \liminf_{d\to \infty}\Bigg(\frac{1}{\alpha-1}\log_2\frac{1}{d} \Tr\Big[\big(T_p^{(d)}\big)^\alpha\Big] \label{eq:uniform-bnd-skac}\\ & \qquad \qquad \qquad + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1-\delta_{d} - \epsilon}\right) \Bigg) \\ & = D_{\alpha}(p\Vert u) + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1 - \epsilon}\right). \ee In the above, we again applied the Szeg\H{o} theorem~\cite{Szego1920,SerraCapizzano2002} to conclude that \bb \lim_{d\to \infty}\frac{1}{\alpha-1}\log_2\frac{1}{d} \Tr\Big[\big(T_p^{(d)}\big)^\alpha\Big] = D_{\alpha}(p\Vert u)\, . \ee We also used the fact that $\lim_{d\to \infty} \delta_d = 0$, which is a consequence of~\eqref{eq:tp-sim-error}. The bound in the last line only depends on the error $\epsilon$ of the original protocol $\mathcal{P}_{n,\epsilon}$ and the R\'enyi relative entropy \bb D_{\alpha}(p\Vert u)\coloneqq \frac{1}{\alpha-1}\log_2\int_{-\pi}^{\pi} d\phi\, p(\phi)^{\alpha} u(\phi)^{1-\alpha}. 
\ee As such, it is a uniform upper bound, applying to all $n$-round secret-key-agreement protocols that generate a private state of rate $R_{n,\epsilon}$ and with error $\epsilon$. Now noting that the $n$-shot secret key agreement capacity $P_{\!\leftrightarrow}(\NN_p,n,\epsilon)$ is defined as the largest rate $R_{n,\epsilon}$ that can be achieved by using the channel $\NN_p$ a total of $n$ times along with classical communication for free, while allowing for $\epsilon$ error, it follows from the uniform bound in~\eqref{eq:uniform-bnd-skac} that \bb P_{\!\leftrightarrow}(\NN_p,n,\epsilon) \leq D_{\alpha}(p\Vert u) + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1 - \epsilon}\right), \ee holding for all $\alpha >1$. Remembering that the strong converse secret-key-agreement capacity is defined as \bb P_{\!\leftrightarrow}^\dag(\NN_p) \coloneqq \sup_{\epsilon\in(0,1)}\limsup_{n\to \infty} P_{\!\leftrightarrow}(\NN_p,n,\epsilon) \ee we take the limit $n\to \infty$ to find that \bb & P_{\!\leftrightarrow}^\dag(\NN_p) \\ & \leq \sup_{\epsilon\in(0,1)}\limsup_{n\to \infty} \left\{D_{\alpha}(p\Vert u) + \frac{2\alpha}{n(\alpha-1)}\log_2\!\left(\frac{1}{1 - \epsilon}\right) \right\} \\ & =D_{\alpha}(p\Vert u)\, . \ee This upper bound holds for all $\alpha>1$. Thus, we can finally take the $\alpha\to 1$ limit, and use a basic property of R\'enyi relative entropy~\cite{vanErven2014} to conclude the desired upper bound: \bb P_{\!\leftrightarrow}^\dag(\NN_p) \leq \lim_{\alpha \to 1}D_{\alpha}(p\Vert u) =D(p\Vert u)\, . \ee This concludes the proof of the capacity formula~\eqref{eq:main-result} for the BDC. The argument required to establish its multimode generalization~\eqref{multimode_capacities} is very similar, with the only substantial technical difference being the application of the \emph{multi-index} Szeg\H{o} theorem~\cite{SerraCapizzano2002}. See Section~III.C of the Supplementary Information for details. \medskip \noindent\textbf{Data availability} --- No data sets were generated during this study. \clearpage \fakepart{Supplemental Material} \onecolumngrid \begin{center} \vspace*{\baselineskip} {\textbf{\large Supplemental Material}}\\ \end{center} \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thethm}{S\arabic{thm}} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{page}{1} \setcounter{section}{0} \setcounter{equation}{0} \makeatletter \setcounter{secnumdepth}{2} \tableofcontents \newpage \section{Summary of contents} The supplementary material provides detailed arguments for the claims made in the main text. We begin with some background material on quantum states, channels, entropies, continuous-variable systems, capacities, and bosonic dephasing channels. After that, we prove our main claims about the capacities of all bosonic dephasing channels. Finally, we discuss some examples of bosonic dephasing channels of physical interest. \section{Preliminaries, notation, and definitions} \subsection{Quantum states and channels} An arbitrary quantum system is mathematically represented by a separable complex Hilbert space~$\HH$. We start by reviewing a few basic concepts from the theory of operators acting on a Hilbert space $\HH$. An operator $X:\HH\to \HH$ acting on $\HH$ is \deff{bounded} if its \deff{operator norm} $\|X\|_\infty \coloneqq \sup_{\ket{\psi}\in \HH,\, \|\ket{\psi}\|\leq 1} \left\|X\ket{\psi}\right\|$ is finite, i.e., if $\|X\|_\infty<\infty$. 
The Banach space of bounded operators acting on $\HH$ equipped with the norm $\|\cdot\|_\infty$ will sometimes be denoted by $\B(\HH)$. A bounded operator $X\in \B(\HH)$ is \deff{positive semi-definite} if $\braket{\psi|X|\psi}\geq 0$ for all $\ket{\psi}\in \HH$. A bounded operator $T$ for which the series defining $\Tr|T| = \Tr \sqrt{T^\dag T} \eqqcolon \|T\|_1$ converges, i.e., for which $\|T\|_1<\infty$, is said to be of \deff{trace class}. Trace class operators acting on $\HH$ form another Banach space, denoted by $\T(\HH)$, once they are equipped with the trace norm $\| \cdot \|_1$. The set of positive semi-definite trace class operators forms a cone, denoted here by $\T_+(\HH)$. Since trace class operators are compact, the spectral theorem applies~\cite[Theorem~VI.16]{REED}. This means that every $T\in \T(\HH)$ can be decomposed as $T = \sum_{k=0}^\infty t_k \ketbraa{e_k}{f_k}$, where $\|T\|_1 = \sum_k |t_k|<\infty$, and $\{\ket{e_k}\}_k$, $\{\ket{f_k}\}_k$ are orthonormal bases of $\HH$. Quantum states of a system $A$ are described by \deff{density operators}, i.e., positive semi-definite trace class operators with trace $1$, acting on $\HH_A$. The distance between two density operators $\rho,\sigma$ acting on the same Hilbert space can be measured in two different but compatible ways, either with the \deff{trace distance} $\frac12 \|\rho-\sigma\|_1$, endowed with a direct operational interpretation via the Helstrom--Holevo theorem for state discrimination~\cite{HELSTROM, Holevo1976}, or with the \deff{fidelity} $F(\rho,\sigma)\coloneqq \left\|\sqrt{\rho}\sqrt{\sigma}\right\|_1^2$~\cite{Uhlmann-fidelity}. Two fundamental relations known as the Fuchs--van de Graaf inequalities establish the essential equivalence of these two distance measures. They are as follows~\cite{Fuchs1999}: \bb 1-\sqrt{F(\rho,\sigma)} \leq \frac12 \left\|\rho-\sigma\right\|_1 \leq \sqrt{1-F(\rho,\sigma)}\, . \label{Fuchs_vdG} \ee Physical transformations between states of a system $A$ and states of a system $B$ are represented mathematically as \deff{quantum channels}, i.e., completely positive trace-preserving maps $\Lambda:\T(\HH_A)\to \T(\HH_B)$~\cite{Stinespring, Choi, KRAUS}. A linear map $\Lambda:\T(\HH_A) \to \T(\HH_B)$ is \begin{enumerate}[(i)] \item positive, if $\Lambda\left( \T_+(\HH_A) \right) \subseteq \T_+(\HH_B)$; \item completely positive, if $\operatorname{id}_N \otimes \Lambda : \T\big(\C^N\otimes \HH_A\big) \to \T\big(\C^N\otimes \HH_B\big)$ is a positive map for all $N\in \N$, where $\operatorname{id}_N$ represents the identity channel acting on the space of $N\times N$ complex matrices; \item trace preserving, if $\Tr \Lambda (X) = \Tr X$ holds for all trace class $X$. \end{enumerate} \subsection{Entropies and relative entropies} Let $p,q$ be two probability density functions defined on the same measurable space $\pazocal{X}$ with measure $\mu$. For $\alpha\in (0,1)\cup(1,\infty)$, define their \deff{$\boldsymbol{\alpha}$-R\'enyi divergence} by~\cite{vanErven2014} \bb D_\alpha(p\|q) \coloneqq \frac{1}{\alpha-1} \log_2 \int_{\pazocal{X}} d\mu(x)\ p(x)^\alpha\, q(x)^{1-\alpha}\, . \label{Renyi} \ee This definition can be extended to $\alpha\in \{0,1,\infty\}$~\cite[Definition~3]{vanErven2014} by taking suitable limits. For our purposes, it suffices to consider the \deff{Kullback--Leibler} divergence~\cite{Kullback-Leibler} obtained by taking the limit $\alpha\to 1^-$ in~\eqref{Renyi}. It is defined as \bb D_1(p\|q) = D(p\|q) \coloneqq \int_{\pazocal{X}}d\mu(x)\ p(x)\, \log_2 \frac{p(x)}{q(x)}\, .
\label{Kullback--Leibler} \ee The following technical result is important for this paper. \begin{lemma}[{\cite[Theorems~3 and~5]{vanErven2014}}] \label{technical_Renyi_lemma} For all fixed $p,q$, the $\alpha$-R\'enyi divergence is monotonically non-decreasing in $\alpha$. Moreover, $\lim_{\alpha\to 1^-} D_\alpha(p\|q) = D(p\|q)$, and if $D_{\alpha_0}(p\|q)<\infty$ for some $\alpha_0>1$ (and therefore $D_{\alpha}(p\|q)<\infty$ for all $\alpha\in(0, \alpha_0]$) then also \bb \lim_{\alpha\to 1^+} D_\alpha(p\|q) = D(p\|q)\, . \label{limit_1+_Renyi} \ee \end{lemma} As a special case of the above formalism, one can define the \deff{differential R\'{e}nyi entropy} of a probability density function $p$ on $\pazocal{X}$ by setting \bb h_\alpha(p) \coloneqq \frac{1}{1-\alpha} \log_2 \int_{\pazocal{X}} d\mu(x)\ p(x)^\alpha , \ee for all $\alpha\in (0,1)\cup(1,\infty)$. For $\alpha=1$ we obtain the standard \deff{differential entropy}, given by \bb h(p) \coloneqq - \int_{\pazocal{X}}d\mu(x)\ p(x) \log_2 p(x)\, , \label{differential_entropy} \ee whenever the integral is well defined. We now consider entropies and relative entropies between quantum states. For the sake of simplicity we assume throughout this subsection that all quantum systems are finite dimensional. Indeed, in this paper we shall not consider entropies and relative entropies of infinite-dimensional states. The most immediate way to extend~\eqref{Renyi} to the case of two quantum states $\rho,\sigma$ is to define the \deff{Petz--R\'enyi relative entropy}~\cite{PetzRenyi} \bb D_\alpha(\rho\|\sigma) \coloneqq \frac{1}{\alpha-1}\, \log_2 \Tr \rho^{\alpha}\sigma^{1-\alpha} , \label{Petz--Renyi} \ee where as usual $\alpha\in (0,1)\cup (1,\infty)$, and it is conventional to set $D_\alpha(\rho\|\sigma) = +\infty$ whenever $\alpha>1$ and $\supp\rho\not\subseteq \supp\sigma$, where $\supp X$ denotes the \deff{support} of $X$, i.e., the orthogonal complement of its kernel. Although~\eqref{Petz--Renyi} is a sensible definition, it is often helpful to consider an alternative quantity. The \deff{sandwiched $\boldsymbol{\alpha}$-R\'enyi relative entropy} is defined as~\cite{newRenyi, Wilde2014} \bb \widetilde{D}_\alpha(\rho\|\sigma) \coloneqq \frac{2\alpha}{\alpha-1} \log_2 \left\|\sigma^{\frac{1-\alpha}{2\alpha}}\rho^{\frac12} \right\|_{2\alpha} . \label{sandwiched_Renyi} \ee Here, for $\beta>0$ we define the corresponding \deff{Schatten norm} of a matrix $X$ as \bb \|X\|_\beta \coloneqq \left(\Tr \Big[|X|^\beta\Big]\right)^{1/\beta} , \label{Schatten_norm} \ee where $|X|\coloneqq \sqrt{X^\dag X}$. As before, it is understood that $\widetilde{D}_\alpha(\rho\|\sigma) = +\infty$ when $\alpha>1$ and $\supp\rho\not\subseteq \supp\sigma$. Importantly, when $[\rho,\sigma]=0$, i.e., $\rho$ and $\sigma$ commute,~\eqref{Petz--Renyi} and~\eqref{sandwiched_Renyi} coincide, and are equal to the $\alpha$-R\'enyi divergence between the spectra of $\rho$ and $\sigma$. Namely, \bb [\rho,\sigma] = 0 \quad \Longrightarrow\quad \widetilde{D}_\alpha(\rho\|\sigma) = D_\alpha(\rho\|\sigma)\, . \label{commute_Petz--Renyi} \ee Taking the limit as $\alpha\to 1$ of either~\eqref{Petz--Renyi} or~\eqref{sandwiched_Renyi} yields the (Umegaki) \deff{relative entropy}, given by~\cite{Umegaki1962, Lindblad1973, Hiai1991} \bb D(\rho\|\sigma) \coloneqq \lim_{\alpha \to 1} \widetilde{D}_\alpha(\rho\|\sigma) = \lim_{\alpha \to 1} D_\alpha(\rho\|\sigma) = \Tr\left[ \rho (\log_2 \rho - \log_2 \sigma)\right] . 
\label{Umegaki} \ee The final quantity we need to define is the simplest of all, namely, the (von Neumann) \deff{entropy} of a density operator $\rho$: \bb S(\rho) \coloneqq - \Tr \left[ \rho\log_2 \rho\right] . \label{von_Neumann} \ee \subsection{Continuous-variable systems} A single-mode continuous-variable system is mathematically modelled by the Hilbert space $\HH_1\coloneqq L^2(\R)$, which comprises all square-integrable complex-valued functions over $\R$. The operators $x$ and $p \coloneqq -i\frac{d}{d x}$ satisfy the \deff{canonical commutation relation} $[x, p] = i \id$, where $\id$ denotes the identity operator (in this case, acting on $\HH_1$). Introducing the \deff{annihilation} and \deff{creation operators} \bb a \coloneqq \frac{x + i p}{\sqrt{2}}\, ,\qquad a^\dag \coloneqq \frac{x - i p}{\sqrt{2}}\, , \label{a_adag} \ee this can be recast in the form \bb [a, a^\dag] = \id \, . \label{CCR} \ee Creation operators map the \deff{vacuum state} $\ket{0}$ to the \deff{Fock states} \bb \ket{k} \coloneqq \frac{(a^\dag)^k}{\sqrt{k!}}\, \ket{0}\, . \label{Fock} \ee Fock states are eigenvectors of the \deff{photon number} operator $a^\dag a$, which satisfies \bb a^\dag a\,\ket{k} = k \ket{k}\, . \label{Fock_eigenvectors} \ee \subsection{Unassisted capacities of quantum channels} In this section, we briefly define the quantum and private capacities of a quantum channel. We begin with the quantum capacity. An $(|M|,\epsilon)$ code for quantum communication over the channel $\mathcal{N}_{A\to B}$ consists of an encoding channel $\mathcal{E}_{M\to A}$ and a decoding channel $\mathcal{D}_{B \to M}$ such that the channel fidelity of the coding scheme and the identity channel $\operatorname{id}_{M}$ is not smaller than $1-\epsilon$: \bb F( \operatorname{id}_{M}, \mathcal{D}_{B \to M}\circ \mathcal{N}_{A\to B} \circ \mathcal{E}_{M\to A} ) \geq 1-\epsilon , \ee where the channel fidelity of channels $\mathcal{N}_1$ and $\mathcal{N}_2$ is defined as \cite{GLN04} \bb F(\mathcal{N}_1,\mathcal{N}_2) \coloneqq \inf_{\rho} F ((\operatorname{id} \otimes \mathcal{N}_1)(\rho),(\operatorname{id} \otimes \mathcal{N}_2)(\rho)) , \ee with the optimization over every bipartite state $\rho$ and the reference system allowed to be arbitrarily large. See Figure~\ref{protocol_Q_fig} for a depiction of a quantum communication protocol that uses the channel $n$ times. \begin{figure} \includegraphics[scale=0.19]{protocol-Q.jpeg} \caption{A depiction of a quantum communication protocol that uses the channel $n$ times.} \label{protocol_Q_fig} \end{figure} The one-shot quantum capacity $Q_{\epsilon}(\mathcal{N}_{A\to B})$ of the channel $\mathcal{N}_{A\to B}$ is defined as \bb Q_{\epsilon}(\mathcal{N}) \coloneqq \sup_{\mathcal{E},\mathcal{D}} \{\log_2 |M| : \exists (|M|,\epsilon) \text{ quantum communication protocol for } \mathcal{N}_{A\to B}\}, \ee where the optimization is over every encoding channel $\mathcal{E}$ and decoding channel $\mathcal{D}$. The (asymptotic) quantum capacity of $\mathcal{N}_{A\to B}$ is then defined as \bb Q(\mathcal{N}) \coloneqq \inf_{\epsilon \in (0,1)} \liminf_{n \to \infty}\frac{1}{n}Q_{\epsilon}(\mathcal{N}^{\otimes n}), \ee where $\NN^{\otimes n}$ denotes $n$ copies of the channel $\NN$ used in parallel. The strong converse quantum capacity of $\mathcal{N}_{A\to B}$ is defined as \bb Q^\dag(\mathcal{N}) \coloneqq \sup_{\epsilon \in (0,1)} \limsup_{n \to \infty}\frac{1}{n}Q_{\epsilon}(\mathcal{N}^{\otimes n}). 
\ee The above way of defining quantum capacity is standard, by now, in several references on quantum information theory~\cite{KW20book},~\cite[Section~VIII]{BD10}, following the same approach for defining various other capacities in classical and quantum information theory~\cite[Eqs.~(1.6)--(1.7)]{Pol10},~\cite[Section~V-A]{DMHB13},~\cite[Eq.~(1)]{TT2015},~\cite[Eq.~(10)]{Chubb2017}. There are several different ways of defining quantum capacity (see also~\cite{BS98}), but it is known that they lead to the same quantity in the asymptotic limit~\cite{temaconvariazioni}. It is a classic result of quantum information theory that the quantum capacity is equal to the regularized coherent information of the channel~\cite{Schumacher1996,PhysRevA.54.2629,BKN98,L97,capacity2002shor,ieee2005dev}: \bb Q(\NN) =&\ \lim_{n\to\infty} \frac1n\, Q^{(1)}\!\left(\NN^{\otimes n}\right) = \sup_{n\in \N_+} \frac1n\, Q^{(1)}\!\left(\NN^{\otimes n}\right) , \\ Q^{(1)}\left(\NN\right) \coloneqq&\ \sup_{\ket{\Psi}_{AA'}} \Icoh(A\rangle B)_{\nu}\, , \label{LSD} \ee where \bb \nu_{AB} & \coloneqq \big(\operatorname{id}_A\,\otimes\, \NN_{A'\to B}\big)(\Psi_{AA'}) ,\\ \Icoh(A\rangle B)_\rho & \coloneqq S(\rho_B) - S(\rho_{AB})\, . \label{Icoh} \ee This gives us a method for evaluating the quantum capacity of particular channels of interest, including the bosonic dephasing channels. Let us now recall basic definitions related to private capacity. Let $\mathcal{U}^{\mathcal{N}}_{A\to BE}$ be an isometric channel extending the channel $\mathcal{N}_{A\to B}$~\cite{Stinespring}. An $(|M|,\epsilon)$ code for private communication over the channel $\mathcal{N}_{A\to B}$ consists of a set $\{\rho^m_A\}_m$ of encoding states and a decoder, specified as a positive operator-valued measure (POVM) $\{\Lambda^m_B\}_m$. It achieves an error $\epsilon$ if there exists a state $\sigma_E$ of the environment, such that the following inequality holds for every message $m$: \bb F\left( \sum_{m'} \ketbra{m'} \otimes \Tr_B[\Lambda^{m'}_B\mathcal{U}^{\mathcal{N}}_{A\to BE}(\rho^m_A)] , \ketbra{m} \otimes \sigma_E \right) \geq 1-\epsilon . \ee The one-shot private capacity $P_{\epsilon}(\mathcal{N}_{A\to B})$ of the channel $\mathcal{N}_{A\to B}$ is defined as \bb P_{\epsilon}(\mathcal{N}) \coloneqq \sup_{\{\rho^m_A\}_m,\{\Lambda^m_B\}_m} \{\log_2 |M| : \exists (|M|,\epsilon) \text{ private communication protocol for } \mathcal{N}_{A\to B}\}, \ee where the optimization is over every set $\{\rho^m_A\}_m$ of encoding states and decoding POVM $\{\Lambda^m_B\}_m$. The (asymptotic) private capacity of $\mathcal{N}_{A\to B}$ is then defined as \bb P(\mathcal{N}) \coloneqq \inf_{\epsilon \in (0,1)} \liminf_{n \to \infty}\frac{1}{n}P_{\epsilon}(\mathcal{N}^{\otimes n}). \ee The strong converse private capacity of $\mathcal{N}_{A\to B}$ is defined as \bb P^\dag(\mathcal{N}) \coloneqq \sup_{\epsilon \in (0,1)} \limsup_{n \to \infty}\frac{1}{n}P_{\epsilon}(\mathcal{N}^{\otimes n}). \ee The following inequalities are direct consequences of the definitions: \bb Q(\mathcal{N}) & \leq Q^\dag(\mathcal{N}), \\ P(\mathcal{N}) & \leq P^\dag(\mathcal{N}). \ee Less trivially, we also have that~\cite{ieee2005dev} \bb Q(\mathcal{N}) & \leq P(\mathcal{N}). \ee \subsection{Two-way assisted capacities of quantum channels} In this section, we define the quantum and private capacities when the channel of interest is assisted by local operations and classical communication (LOCC). We begin with the LOCC-assisted quantum capacity. 
\begin{figure} \includegraphics[scale=0.22]{protocol-Q2-new.jpeg} \caption{An LOCC-assisted protocol that involves $n$ uses of the quantum channel $\NN$. Its action is described formally in Eq.~\eqref{final_state_protocol_Q2}.} \label{protocol_Q2_fig} \end{figure} An $(n,|M|,\epsilon)$ protocol $\mathcal{P}$ for LOCC-assisted quantum communication consists of a separable state $\sigma_{A'_1 A_1 B'_1}$ (which is understood to be separable with respect to the bipartition $A'_1 A_1:B'_1$), the set $\big\{\mathcal{L}^{(i-1)}_{A'_{i-1}B_{i-1}B'_{i-1} \to A'_{i}A_{i}B'_{i}}\big\}_{i=2}^{n}$ of LOCC channels, and the LOCC channel $\mathcal{L}^{(n)}_{A'_{n}B_{n}B'_{n}\to M_A M_B}$. (See~\cite{LOCC} for the definition of an LOCC channel.) We can also imagine that the state $\sigma_{A'_1 A_1 B'_1}$ is produced by an LOCC preprocessing channel $\mathcal{L}^{(0)}_{A'_0B'_0\to A'_1A_1B'_1}$, with the $A'_0B'_0$ system initialised in a product state. The final state of the protocol is \bb \eta_{M_AM_B} \coloneqq \Big(\mathcal{L}^{(n)}_{A'_{n}B_{n}B'_{n}\to M_A M_B} \circ \mathcal{N}_{A_n \to B_n} &\circ \mathcal{L}^{(n-1)}_{A'_{n-1}B_{n-1}B'_{n-1} \to A'_{n}A_{n}B'_{n}} \circ \cdots\\ &\circ \mathcal{L}^{(1)}_{A'_{1}B_{1}B'_{1} \to A'_{2}A_{2}B'_{2}}\circ \mathcal{N}_{A_1 \to B_1}\Big)\big(\sigma_{A'_1 A_1 B'_1}\big)\, , \label{final_state_protocol_Q2} \ee satisfying \bb F(\eta_{M_A M_B}, \Phi_{M_A M_B}) \geq 1- \epsilon, \ee where $\Phi_{M_A M_B}$ is a maximally entangled state of Schmidt rank $|M|$. Such a protocol is depicted in Figure~\ref{protocol_Q2_fig}. We note here that it suffices for such a protocol to generate the maximally entangled state $\Phi_{M_A M_B}$ because entanglement and quantum communication are equivalent communication resources when classical communication is freely available, due to the teleportation protocol~\cite{teleportation}. The $n$-shot LOCC-assisted quantum capacity of the channel $\mathcal{N}_{A\to B}$ is defined as \bb Q_{\leftrightarrow,n,\epsilon}(\mathcal{N}) \coloneqq \sup_{\mathcal{P}} \left\{\frac{1}{n} \log_2 |M| : \exists (n,|M|,\epsilon) \text{ LOCC-assisted q.~comm.~protocol } \mathcal{P} \text{ for } \mathcal{N}_{A\to B} \right \}, \ee where the optimization is over every LOCC-assisted quantum communication protocol $\mathcal{P}$. The (asymptotic) LOCC-assisted quantum capacity of $\mathcal{N}_{A\to B}$ is then defined as \bb Q_{\leftrightarrow}(\mathcal{N}) \coloneqq \inf_{\epsilon \in (0,1)} \liminf_{n \to \infty}Q_{\leftrightarrow,n,\epsilon}(\mathcal{N}). \ee The strong converse LOCC-assisted quantum capacity of $\mathcal{N}_{A\to B}$ is defined as \bb Q^\dag_{\leftrightarrow}(\mathcal{N}) \coloneqq \sup_{\epsilon \in (0,1)} \limsup_{n \to \infty}Q_{\leftrightarrow,n,\epsilon}(\mathcal{N}). \ee An $(n,|M|,\epsilon)$ protocol $\mathcal{K}$ for secret key agreement over a quantum channel is defined essentially the same as an LOCC-assisted protocol for quantum communication, except that the target final state of the protocol is more general. That is, the final step of the protocol is an LOCC channel $\mathcal{L}^{(n)}_{A'_{n}B_{n}B'_{n}\to M_A M_B S_A S_B}$, where $S_A$ and $S_B$ are extra systems of the sender Alice and the receiver Bob. Let us then denote the final state of the protocol by $\eta_{M_A M_B S_A S_B}$. 
Such a protocol satisfies \bb F(\eta_{M_A M_B S_A S_B},\gamma_{M_A M_B S_A S_B}) \geq 1-\epsilon, \ee where $\gamma_{M_A M_B S_A S_B} $ is a private state of dimension $|M|$~\cite{private,Horodecki2009}, having the form \bb \gamma_{M_A M_B S_A S_B} \coloneqq U_{M_A M_B S_A S_B} (\Phi_{M_A M_B} \otimes \theta_{S_A S_B} )U_{M_A M_B S_A S_B}^\dag. \ee In the above, $U_{M_A M_B S_A S_B}$ is a twisting unitary of the form \bb U_{M_A M_B S_A S_B} = \sum_{i,j} \ketbra{i}_{M_A} \otimes \ketbra{j}_{M_B} \otimes U^{i,j}_{S_A S_B}, \ee with each $U^{i,j}_{S_A S_B}$ a unitary. Also, $\Phi_{M_A M_B}$ is a maximally entangled state of Schmidt rank $|M|$ and $\theta_{S_A S_B}$ is an arbitrary state. The fact that such a protocol is equivalent to the more familiar notion of secret key agreement, involving three parties generating a tripartite secret key state of the form $\frac{1}{|M|} \sum_{m=0}^{|M|-1}\ketbra{m}_{M_A} \otimes \ketbra{m}_{M_B} \otimes \sigma_E$, is the main contribution of~\cite{private,Horodecki2009} (see~\cite{KW20book} for another presentation). The $n$-shot secret-key-agreement capacity of the channel $\mathcal{N}_{A\to B}$ is defined as \bb P_{\!\leftrightarrow,n,\epsilon}(\mathcal{N}) \coloneqq \sup_{\mathcal{K}} \left\{\frac{1}{n} \log_2 |M| : \exists (n,|M|,\epsilon) \text{ secret-key-agreement protocol } \mathcal{K} \text{ for } \mathcal{N}_{A\to B} \right \}, \ee where the optimization is over every secret key agreement protocol $\mathcal{K}$. The (asymptotic) secret key agreement capacity of $\mathcal{N}_{A\to B}$ is then defined as \bb P_{\!\leftrightarrow}(\mathcal{N}) \coloneqq \inf_{\epsilon \in (0,1)} \liminf_{n \to \infty}P_{\!\leftrightarrow,n,\epsilon}(\mathcal{N}). \ee The strong converse secret key agreement capacity of $\mathcal{N}_{A\to B}$ is defined as \bb P^\dag_{\!\leftrightarrow}(\mathcal{N}) \coloneqq \sup_{\epsilon \in (0,1)} \limsup_{n \to \infty}P_{\!\leftrightarrow,n,\epsilon}(\mathcal{N}). \label{strong_converse_secret_key_agreement_capacity} \ee The following inequalities are direct consequences of the definitions: \bb Q_{\leftrightarrow}(\mathcal{N}) & \leq Q_{\leftrightarrow}^\dag(\mathcal{N}) \\ P_{\!\leftrightarrow}(\mathcal{N}) & \leq P_{\!\leftrightarrow}^\dag(\mathcal{N}). \ee Due to the fact that a more general target state is allowed in secret key agreement, the following inequalities hold \bb Q_{\leftrightarrow}(\mathcal{N}) & \leq P_{\!\leftrightarrow}(\mathcal{N})\\ Q^{\dag}_{\leftrightarrow}(\mathcal{N}) & \leq P^{\dag}_{\leftrightarrow}(\mathcal{N}). \ee Finally, due to the fact that classical communication can only enhance capacities, the following inequalities hold: \bb Q(\mathcal{N}) & \leq Q_{\leftrightarrow}(\mathcal{N})\\ Q^{\dag}(\mathcal{N}) & \leq Q^{\dag}_{\leftrightarrow}(\mathcal{N})\\ P(\mathcal{N}) & \leq P_{\!\leftrightarrow}(\mathcal{N})\\ P^{\dag}(\mathcal{N}) & \leq P^{\dag}_{\leftrightarrow}(\mathcal{N}). \ee Thus, to establish the collapse of all of the capacities discussed in this section and the previous one, for the case of bosonic dephasing channels, it suffices to prove the lower bound on $Q(\mathcal{N})$ and the upper bound on $P^{\dag}_{\leftrightarrow}(\mathcal{N})$. 
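To make the logic of this reduction fully explicit, the inequalities collected in this section and in the previous one can be arranged into chains that all start at $Q(\mathcal{N})$ and end at $P^{\dag}_{\!\leftrightarrow}(\mathcal{N})$, and that together contain every capacity considered in this paper: \bb Q(\mathcal{N}) &\leq Q_{\leftrightarrow}(\mathcal{N}) \leq P_{\!\leftrightarrow}(\mathcal{N}) \leq P^{\dag}_{\!\leftrightarrow}(\mathcal{N})\, , \\ Q(\mathcal{N}) &\leq Q^{\dag}(\mathcal{N}) \leq Q^{\dag}_{\leftrightarrow}(\mathcal{N}) \leq P^{\dag}_{\!\leftrightarrow}(\mathcal{N})\, , \\ Q(\mathcal{N}) &\leq P(\mathcal{N}) \leq P^{\dag}(\mathcal{N}) \leq P^{\dag}_{\!\leftrightarrow}(\mathcal{N})\, . \ee Consequently, once the lower bound $Q(\NN_p)\geq D(p\Vert u)$ and the upper bound $P^{\dag}_{\!\leftrightarrow}(\NN_p)\leq D(p\Vert u)$ are in place, every quantity appearing in these chains is squeezed to the common value $D(p\Vert u)$.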
\subsection{Teleportation simulation} The $d$-dimensional quantum teleportation protocol~\cite{teleportation} takes as input a $d$-dimensional quantum state $\rho_{A'}$ of a system $A'$, a $d$-dimensional maximally entangled state \bb \Phi_d^{AB} \coloneqq \ketbra{\Phi_d}_{AB}\, ,\qquad \ket{\Phi_d}_{AB} \coloneqq \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} \ket{k}_A\ket{k}_B\, , \label{Phi_d} \ee and by using only local operations and one-way classical communication from Alice to Bob reproduces the exact same state $\rho$ on the system $B$. To define it rigorously, for $x,z\in \{0,\ldots,d-1\}$ let us introduce the unitary matrices \bb X(x)\,\coloneqq\, \sum_{k=0}^{d-1} \ket{k \oplus x}\!\!\bra{k}\, ,\qquad Z(z)\, \coloneqq\, \sum_{k=0}^{d-1} e^{\frac{2\pi i}{d} zk} \ketbra{k}\, ,\qquad U(x,z)\coloneqq X(x) Z(z)\, , \label{HW} \ee where $\oplus$ denotes sum modulo $d$. Then the teleportation channel $\T^{(d)}_{A'AB\to B}$ is given by \bb \T^{(d)}_{A'AB\to B}(X_{A'AB}) \coloneqq \sum_{x,z=0}^{d-1} U(x,z)_{B} \Tr_{AA'}\!\left[ X_{A'AB}\, U(x,z)_{A'} \Phi_d^{AA'} U(x,z)_{A'}^\dag \right] U(x,z)_{B}^{\dag}\, . \label{telep} \ee The effectiveness of the standard quantum teleportation protocol is expressed by the identity \bb \T^{(d)}_{A'AB\to B}\!\left(\rho_{A'} \otimes \Phi_d^{AB}\right)=\rho_{B}\, , \ee meaning that the same operator $\rho$ is written in the registers $A'$ and $B$ on the left-hand and on the right-hand side, respectively. Some channels can be simulated by the action of the standard teleportation protocol on their Choi states~\cite{BDSW96}, in the sense that \bb \T^{(d)}_{A'AB\to B}\!\left(\rho_{A'} \otimes \Phi_{\mathcal{N}}^{AB}\right)=\mathcal{N}(\rho_{A'})\, , \ee where $\Phi_{\mathcal{N}}^{AB}$ is the Choi state of the channel $\mathcal{N}$. For example, this is the case for all Pauli channels. More generally, other channels can be simulated approximately by the action of the teleportation protocol on their Choi states. This concept was introduced in~\cite{BDSW96} for the explicit purpose of obtaining upper bounds on the LOCC-assisted quantum capacity of a channel in terms of an entanglement measure evaluated on the Choi state. The idea was rediscovered in~\cite{Mul12} for the same purpose, and more recently the same idea was used to bound the secret-key-agreement capacity~\cite{PLOB} and the strong converse secret-key-agreement capacity~\cite{WTB16}. Here we make use of this concept in order to obtain upper bounds on the strong converse secret key agreement capacity of all bosonic dephasing channels. As discussed earlier, it suffices to consider establishing an upper bound on this latter capacity because it is the largest among all the capacities that we consider in this paper. \subsection{Bosonic dephasing channel} \begin{Def} Let $p$ be a probability density function on the interval $[-\pi,\pi]$. The associated \deff{bosonic dephasing channel} is the quantum channel $\NN_p:\T(\HH_1)\to \T(\HH_1)$ acting on a single-mode system and given by \bb \NN_p (\rho) \coloneqq \int_{-\pi}^{\pi} d\phi\ p(\phi)\, e^{-i \n\, \phi} \rho\, e^{i \n\, \phi}\, , \label{Np} \ee where $a^\dag a$ is the photon number operator. \end{Def} The action of the bosonic dephasing channel can be easily described by representing the input operator in the Fock basis. By means of this representation the Hilbert space of a single-mode system, $\HH_1$, becomes equivalent to that of square-summable complex-valued sequences, denoted $\ell^2(\N)$. 
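Both the teleportation identity above and the exact Choi-state simulation of Pauli (Weyl) channels can be verified directly in small dimension. The following minimal Python sketch (purely illustrative; the dimension $d=3$ and the randomly generated Pauli channel are arbitrary choices) implements the channel~\eqref{telep} literally and checks the two identities.
\begin{verbatim}
import numpy as np

d = 3
rng = np.random.default_rng(1)

def weyl_X(x):
    m = np.zeros((d, d))
    for k in range(d):
        m[(k + x) % d, k] = 1          # X(x)|k> = |k + x mod d>
    return m

def weyl_Z(z):
    return np.diag(np.exp(2j * np.pi * z * np.arange(d) / d))

def weyl(x, z):
    return weyl_X(x) @ weyl_Z(z)

phi = np.eye(d).reshape(d * d) / np.sqrt(d)        # |Phi_d> in the A (x) B ordering
Phi = np.outer(phi, phi.conj())

def teleport(rho, omega):
    # teleportation channel of Eq. (telep); tensor factors ordered as A', A, B
    state = np.kron(rho, omega)
    out = np.zeros((d, d), dtype=complex)
    for x in range(d):
        for z in range(d):
            u = weyl(x, z)
            proj = np.kron(u, np.eye(d)) @ Phi @ np.kron(u, np.eye(d)).conj().T
            W = state @ np.kron(proj, np.eye(d))
            red = np.einsum('xyaxyb->ab', W.reshape(d, d, d, d, d, d))  # Tr_{A'A}
            out += u @ red @ u.conj().T
    return out

a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = a @ a.conj().T
rho /= np.trace(rho)

print(np.allclose(teleport(rho, Phi), rho))        # teleportation identity

# a random Pauli (Weyl) channel and its Choi state
q = rng.random((d, d)); q /= q.sum()
def pauli(sigma):
    return sum(q[x, z] * weyl(x, z) @ sigma @ weyl(x, z).conj().T
               for x in range(d) for z in range(d))

choi = np.zeros((d * d, d * d), dtype=complex)
for r in range(d):
    for c in range(d):
        choi[r*d:(r+1)*d, c*d:(c+1)*d] = pauli(Phi[r*d:(r+1)*d, c*d:(c+1)*d])

print(np.allclose(teleport(rho, choi), pauli(rho)))  # Choi-state simulation
\end{verbatim}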
Operators on $\HH_1$ are represented by \deff{infinite matrices}, i.e., operators $S:\ell^2(\N)\to \ell^2(\N)$. Given two such operators $S,T$, which we formally write $S = \sum_{h,k} S_{hk} \ketbraa{h}{k}$ and $T = \sum_{h,k} T_{hk} \ketbraa{h}{k}$, their \deff{Hadamard product} is defined by \bb S\circ T \coloneqq \sum_{h,k} S_{hk} T_{hk} \ketbraa{h}{k}\, . \label{Hadamard_product} \ee One of the fundamental facts concerning the Hadamard product is the \emph{Schur product theorem}~\cite[Theorem~7.5.3]{HJ1}: it states that if $S\geq 0$ and $T\geq 0$ are positive semi-definite, then $S\circ T\geq 0$ is positive semi-definite as well. The theorem is usually stated for matrices, but it is immediately generalisable to the operator case as a consequence of the remark below. \begin{rem} \label{positive_semidefiniteness_truncation_rem} Let $T: \ell^2(\N)\to \ell^2(\N)$ be an infinite matrix. Then $T\geq 0$ if and only if $T^{(d)}\geq 0$ for all $d\in \N_+$, where $T^{(d)}$ is the $d\times d$ top left corner of $T$. This follows from the fact that the linear span of the basis vectors $\ket{k}$, $k\in\N$, is dense in $\ell^2(\N)$. \end{rem} Given an infinite matrix $T$ which represents a bounded operator $T:\ell^2(\N) \to \ell^2(\N)$, we can define the associated \deff{Hadamard channel} as \bb \begin{array}{ccccc} \LL_T & : & \T\left(\ell^2(\N)\right) & \longrightarrow & \T\left(\ell^2(\N)\right) \\[1ex] && S & \longmapsto & S\circ T\, . \end{array} \label{Hadamard_channel} \ee When restricted to finite-dimensional systems, Hadamard channels are examples of so-called `partially coherent direct sum channels'~\cite{Chessa2021}. The following is then easily established. \begin{lemma} Let $T:\ell^2(\N) \to \ell^2(\N)$ be a bounded operator represented by an infinite matrix. Then the Hadamard channel $\LL_T$ defined by~\eqref{Hadamard_channel} is a completely positive and trace preserving map, i.e., a quantum channel, if and only if \begin{enumerate}[(i)] \item $T\geq 0$ as an operator; and \item $T_{kk} = 1$ for all $k\in \N$. \end{enumerate} \end{lemma} \begin{proof} The two conditions are clearly necessary. In fact, if $T_{kk}\neq 1$ for some $k\in \N$, then $\Tr[T\circ \ketbra{k}] = T_{kk} \neq 1 = \Tr \ketbra{k}$; i.e., $\LL_T$ is not trace preserving. Also, if $T \ngeq 0$ then by Remark~\ref{positive_semidefiniteness_truncation_rem} there exists $d\in \N_+$ and some $\ket{\psi}\in \C^d$ such that $\braket{\psi|T^{(d)}|\psi} < 0$. Rewriting $\braket{\psi|T^{(d)}|\psi} = \sum_{h,k=0}^{d-1} \psi_h^* \psi_k T_{hk} = d \braket{+| (T\circ \psi)|+}$, where $\psi\coloneqq \ketbra{\psi}$ and $\ket{+}\coloneqq \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} \ket{k}$, shows that in this case $\LL_T$ would not even be positive, let alone completely positive. Conversely, conditions~(i)--(ii) are sufficient. In fact, on the one hand, by~(ii), for an arbitrary $X$, we have that $\Tr \LL_T(X) = \sum_k T_{kk} X_{kk} = \Tr X$, i.e., $\LL_T$ is trace preserving. On the other hand, if $T\geq 0$ then for all $d\in \N_+$ and for all positive semi-definite bipartite operators $X \geq 0$ acting on $\C^d \otimes \ell^2(\N)$ we have that $\left(I \otimes \LL_T\right) (X) = d \left(\ketbra{+} \otimes T\right) \circ X \geq 0$, where $\ket{+}$ is defined above, and the last inequality follows by the Schur product theorem. Since $d$ is arbitrary, this proves that $\LL_T$ is completely positive. \end{proof} The theory of Hadamard channels we just sketched out is relevant here due to the following simple observation.
\begin{lemma} When both the input and the output density operators are represented in the Fock basis, the bosonic dephasing channel $\NN_p$ acts as the Hadamard channel \begin{align} \NN_p(\rho) =&\ \rho\circ T_p\, , \label{Np_action_Tp} \\ (T_p)_{hk} \coloneqq&\ \int_{-\pi}^{\pi} d\phi\ p(\phi)\, e^{-i\phi (h-k)} \, . \label{Tp} \end{align} \end{lemma} \begin{proof} Due to~\eqref{Fock_eigenvectors}, we have that $\NN_p(\rho) = \sum_{h,k} \rho_{hk} \int_{-\pi}^{\pi}d\phi\ p(\phi)\, e^{-i\phi (h-k)} \ketbraa{h}{k} = \rho\circ T_p$. \end{proof} \section{Capacities of bosonic dephasing channels} \subsection{Infinite Toeplitz matrices and theorems of Szeg\H{o} and Avram--Parter type} Observe that the expression for $(T_p)_{hk}$ only depends on the difference $h-k$. Matrices with this property are named after the mathematician Otto Toeplitz. Formally, a \deff{Toeplitz matrix} of size $d\in \N_+$ is a matrix of the form \bb T = \begin{pmatrix} a_0 & a_{-1} & a_{-2} & \ldots & a_{-d+1} \\ a_1 & a_0 & a_{-1} & \ldots & a_{-d+2} \\ a_2 & a_1 & a_0 & \ldots & a_{-d+3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{d-1} & a_{d-2} & a_{d-3} & \ldots & a_0 \end{pmatrix} , \ee where $a_0,\ldots, a_{d-1}\in \C$. Alternatively, it can be defined to have entries \bb T_{hk} = a_{h-k}\, . \label{Toeplitz_entries} \ee This definition can be formally extended to the case of \deff{infinite Toeplitz matrices}, simply by letting $h,k\in \N$ run over all non-negative integers. Note that the top left corners $T^{(d)}\coloneqq \sum_{h,k=0}^{d-1} T_{hk} \ketbraa{h}{k}$ of an infinite Toeplitz matrix are Toeplitz matrices themselves. In applications one often encounters the case in which the numbers $a_k$ are the Fourier coefficients of an absolutely integrable function $a:[-\pi,\pi]\to \C$, i.e. \bb a_k = \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ a(\phi)\, e^{-ik\phi} . \label{a_k_Fourier} \ee In this paper, we will consider mainly non-negative functions $a:[-\pi,\pi]\to \R_+$. A result due to Szeg\H{o}~\cite{Szego1920, GRENADER} states that the spectrum of the $d\times d$ top left corners $T^{(d)}$ of an infinite Toeplitz matrix converges to the generating function $a:[-\pi,\pi]\to \R$ (for now assumed to be real-valued), in the sense that \bb \lim_{d\to\infty} \frac1d \Tr F\big( T^{(d)}\big) = \lim_{d\to\infty} \frac1d \sum_{j=1}^d F\big(\lambda_j\big(T^{(d)}\big)\big) = \int_{-\pi}^{\pi} \frac{d\phi}{2\pi}\ F\big(a(\phi)\big) \label{Szego} \ee \emph{whenever $a$ and $F:\R\to \R$ are sufficiently well behaved.} Here, $\lambda_j\big(T^{(d)}\big)$ denotes the $j^\text{th}$ eigenvalue of the matrix $T^{(d)}$. The scope and extension of Szeg\H{o}'s result has been expanded over the years by relaxing the conditions to be imposed on $a$ and $F$ so that~\eqref{Szego} holds. At the same time, an analogous class of results, initially conceived by Parter~\cite{Parter1986} and Avram~\cite{Avram1988}, has been developed to deal with the case of complex-valued generating functions $a:[-\pi,\pi]\to \C$. Results of the Avram--Parter type generalize~\eqref{Szego} by stating that \bb \lim_{d\to\infty} \frac1d \Tr F\Big( \big|T^{(d)}\big|\Big) = \lim_{d\to\infty} \frac1d \sum_{j=1}^d F\big(s_j \big(T^{(d)}\big)\big) = \int_{-\pi}^{\pi} \frac{d\phi}{2\pi}\ F\Big(\big| a(\phi)\big|\Big)\, , \label{Avram-Parter} \ee where $s_j\big(T^{(d)}\big)$ is now the $j^\text{th}$ \emph{singular value} of the matrix $T^{(d)}$. 
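As a concrete illustration of~\eqref{Szego}, the following minimal Python sketch (purely illustrative: the generating function $a(\phi)=1+\cos\phi$ and the test function $F(x)=x\log_2 x$ are arbitrary choices, the latter being the one relevant to the entropic calculations later on) builds the truncations $T^{(d)}$ from numerically computed Fourier coefficients~\eqref{a_k_Fourier} and compares the normalized eigenvalue sums with the limiting integral.
\begin{verbatim}
import numpy as np

# illustrative generating function a(phi) >= 0 and test function F
a_fun = lambda phi: 1.0 + np.cos(phi)
F = lambda x: np.where(x > 1e-12, x * np.log2(np.maximum(x, 1e-12)), 0.0)

grid = np.linspace(-np.pi, np.pi, 40001)
a_vals = a_fun(grid)

def fourier_coeff(k):
    # a_k = (1/2pi) int a(phi) exp(-i k phi) dphi, cf. Eq. (a_k_Fourier)
    return np.trapz(a_vals * np.exp(-1j * k * grid), grid) / (2 * np.pi)

rhs = np.trapz(F(a_vals), grid) / (2 * np.pi)     # right-hand side of Eq. (Szego)

for d in (8, 32, 128, 512):
    coeffs = {n: fourier_coeff(n) for n in range(-(d - 1), d)}
    T = np.array([[coeffs[h - k] for k in range(d)] for h in range(d)])
    lam = np.linalg.eigvalsh(T)                   # T is Hermitian since a is real
    lhs = F(lam).sum() / d
    print(f"d = {d:4d}   (1/d) Tr F(T^(d)) = {lhs:.6f}   integral = {rhs:.6f}")
\end{verbatim}
As $d$ grows, the two columns printed by the sketch approach each other, as guaranteed by the Szeg\H{o}-type results discussed here.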
Both Szeg\H{o}'s and Avram--Parter's result have been generalized in successive steps, by Zamarashkin and Tyrtyshnikov~\cite{Zamarashkin1997}, Tilli~\cite{Tilli1998}, Serra-Capizzano~\cite{SerraCapizzano2002}, B\"{o}ttcher, Grudsky, and Maksimenko~\cite{Boettcher2008}, and others. For a detailed account of these developments, we refer the reader to the textbooks~\cite{GRUDSKY, BOETTCHER, GARONI} and especially to the lecture notes by Grudsky~\cite{Grudsky-lectures}. Here we will just need the following lemma, extracted from the work of Serra-Capizzano. \begin{lemma}[(Serra-Capizzano~\cite{SerraCapizzano2002})] \label{Serra-Capizzano_lemma} If $a:[-\pi,\pi]\to \R_+$ is such that \bb \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ a(\phi)^\alpha < \infty \ee for some $\alpha \geq 1$, and moreover $F:\R_+ \to \R$ is continuous and satisfies \bb F(x) = O(x^\alpha) \qquad (x\to \infty)\, , \ee then~\eqref{Szego} holds. \end{lemma} \begin{proof} The original result by Serra-Capizzano~\cite[Theorem~2]{SerraCapizzano2002} states that the identity~\eqref{Avram-Parter} involving singular values holds. However, under the stronger hypotheses that we are making here it can be seen that~\eqref{Szego} and~\eqref{Avram-Parter} are actually equivalent. Since $a$ takes on values in $\R_+$, we only need to check that for each $d$ the singular values and the eigenvalues of $T^{(d)}$ coincide. To this end, it suffices to note that $T^{(d)}$ is a positive semi-definite operator, simply because \bb \sum_{h,k=0}^{d-1} \psi_h^* \psi_k T_{hk} &= \sum_{h,k=0}^{d-1} \psi_h^* \psi_k \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ a(\phi)\, e^{-i(h-k)\phi} \\ &= \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ a(\phi) \sum_{h,k=0}^{d-1} \psi_h^* \psi_k\, e^{-i(h-k)\phi} \\ &= \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ a(\phi) \left|\sumno_{k=0}^{d-1} \psi_k\, e^{ik\phi} \right|^2 \\ &\geq 0 \ee for every $\ket{\psi} \in \C^d$. This can also be seen as a simple consequence of (the easy direction of) Bochner's theorem. \end{proof} \subsection{Proof of main result} Before stating and proving our main result, let us fix some terminology. For an infinite matrix $\tau$ that is also a density operator on $\ell^2(\N)$, the associated \deff{maximally correlated state} $\Omega[\tau]$ on $\ell^2(\N)\otimes \ell^2(\N)$ is defined by \bb \Omega[\tau] \coloneqq \sum_{h,k=0}^\infty \tau_{hk} \ketbraa{h}{k}\otimes \ketbraa{h}{k}\, . \label{maximally_correlated} \ee Maximally correlated states appear naturally in connecting coherence theory~\cite{Aberg2004, Aberg2006, Baumgraz2014, Winter2016, Chitambar-Hsieh-coherence, Regula2018, Fang2018, bound-coherence, GrandTour} (see also the review article~\cite{coherence-review}) with entanglement theory. When seen in this latter context, they are useful because they represent a particularly simple class of entangled states. Before we proceed further, let us recall in passing that the R\'enyi entropy of a probability density defined on the interval $[-\pi,\pi]$ need not be finite; that is, it can be equal to $-\infty$. It is always bounded from above by $\log_2(2\pi)$, due to the non-negativity of relative entropy. Indeed, $h_\alpha(p) \leq \log_2 (2\pi)$ for every probability density $p$ defined on $[-\pi,\pi]$ and for all $\alpha\in (0,1)\cup(1,\infty)$, because $\log_2 (2\pi) - h_\alpha(p) = D_\alpha(p\Vert u) \geq 0$, where $u$ is the uniform probability density on $[-\pi,\pi]$. The same is true for the case $\alpha=1$, where we identify $h_1$ with the standard differential entropy~\eqref{differential_entropy}. 
However, as an example, if we take the probability density to be $p(x) = c |x|^{-\frac{1}{\alpha}}$ for $\alpha > 1$, where $c$ is a normalization factor, then the R\'enyi entropy $h_\alpha(p)$ diverges to $-\infty$. Note that the condition $h_{\alpha}(p) > -\infty$ is equivalent to the condition $\int_{-\pi}^{+\pi}d\phi\ p(\phi)^{\alpha} <\infty$. An analogous reasoning applies to the case $\alpha=1$. Here, $p(x) \propto |x|^{-1}\left(\log_2\frac{2\pi}{|x|}\right)^{-2}$ provides an example of a probability distribution for which $h(p) = -\infty$ (while $p$ itself is integrable). \begin{thm} \label{capacities_thm} Let $p:[-\pi,+\pi]\to \R_+$ be a probability density function with the property that one of its R\'enyi entropies is finite for some $\alpha_0>1$, i.e. \bb \int_{-\pi}^{+\pi}d\phi\ p(\phi)^{\alpha_0} <\infty\, . \label{finite_Renyi_p} \ee Then the two-way assisted quantum capacity, the unassisted quantum capacity, the private capacity, the secret-key capacity, and all of the corresponding strong converse capacities of the associated bosonic dephasing channel $\NN_p$ coincide, and are given by the expression \bb Q(\NN_p) &= Q^\dag(\NN_p) = P(\NN_p) = P^\dag(\NN_p) = \QQ(\NN_p) = \QQ^\dag(\NN_p) = P_{\!\leftrightarrow}(\NN_p) = P_{\!\leftrightarrow}^\dag(\NN_p) \\ &= D(p\|u) = \log_2(2\pi) - h(p) \\ &= \log_2(2\pi) - \int_{-\pi}^{\pi} d\phi\ p(\phi) \log_2\frac{1}{p(\phi)}\, . \ee Here, $D(p\|u)$ denotes the Kullback--Leibler divergence between $p$ and the uniform probability density $u$ over $[-\pi,\pi]$, and $h(p)$ is the differential entropy of $p$. \end{thm} \begin{rem} The condition on the R\'enyi entropy of $p$ is of a purely technical nature. We expect it to be obeyed in all cases of practical interest. For example, it holds true provided that $p$ is bounded on $[-\pi,\pi]$. \end{rem} \begin{rem} By using H\"older's inequality, it can be easily verified that if~\eqref{finite_Renyi_p} holds for $\alpha_0\geq 1$ then it holds for all $\alpha$ such that $1\leq\alpha\leq \alpha_0$. \end{rem} \begin{proof}[Proof of Theorem~\ref{capacities_thm}] The smallest of all eight quantities is the unassisted quantum capacity $Q(\NN_p)$, and the largest is $P_{\!\leftrightarrow}^\dag(\NN_p)$. Therefore, it suffices to prove that \bb Q(\NN_p) \geq D(p\|u)\, ,\qquad P_{\!\leftrightarrow}^\dag(\NN_p) \leq D(p\|u)\, . \label{eq:bounds-on-caps} \ee Note that it is elementary to verify that \bb D(p\|u) = \int_{-\pi}^{+\pi} d\phi\ p(\phi) \log_2\frac{p(\phi)}{1/(2\pi)} = \log_2(2\pi) - \int_{-\pi}^{+\pi} d\phi\ p(\phi) \log_2\frac{1}{p(\phi)} = \log_2(2\pi) - h(p)\, , \ee where the differential entropy $h(p)$ is defined by~\eqref{differential_entropy}. To bound $Q(\NN_p)$ from below, we need an ansatz for a state $\ket{\Psi}_{AA'}$ to plug into~\eqref{LSD}. Letting $A$ and $A'$ be single-mode systems, we can consider the maximally entangled state $\ket{\Phi_d}_{AA'} \coloneqq \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} \ket{k}_A\ket{k}_{A'}$ locally supported on the subspace spanned by the first $d$ Fock states $\ket{k}$ (see~\eqref{Fock}), where $k\in\{0,\ldots, d-1\}$. Let us also define the truncated matrix \bb T_p^{(d)} \coloneqq \Pi_d T_p \Pi_d = \sum_{h,k=0}^{d-1} (T_p)_{hk} \ketbraa{h}{k}\, , \label{truncated_Tp} \ee where \bb \Pi_d\coloneqq \sum_{k=0}^{d-1} \ketbra{k}\,
\label{Pi_d} \ee Then note that \bb \omega_{p,d} \coloneqq&\ \big(I\otimes \NN_p\big)\big(\Phi_d\big) \\ =&\ \frac1d \sum_{h,k=0}^{d-1} \big(I\,\otimes\, \NN_p\big)\big(\ketbraa{hh}{kk}\big) \\ =&\ \frac1d \sum_{h,k=0}^{d-1} (T_p)_{hk} \ketbraa{hh}{kk} \\ =&\ \frac1d\, \Omega\big[ T_p^{(d)} \big]\, , \label{omega_p_d} \ee where $\Omega[\tau]$ is defined by~\eqref{maximally_correlated}. Consider that \begin{align} Q(\NN_p) &\textgeq{(i)} \limsup_{d\to\infty} \Icoh(A\rangle B)_{\big(I\,\otimes\, \NN_p^{A'\to B}\big)\big(\Phi_d^{AA'}\big)} \nonumber\\ &\texteq{(ii)} \limsup_{d\to\infty} \Icoh(A\rangle B)_{\omega_{p,d}} \nonumber\\ &\texteq{(iii)} \limsup_{d\to\infty} \left( \log_2 d - S\left( T_p^{(d)} \big/ d \right) \right) \nonumber\\ &\texteq{(iv)} \limsup_{d\to\infty} \left( \log_2 d + \frac1d \Tr T_p^{(d)}\! \left(-\log_2 d + \log_2 T_p^{(d)}\right) \right) \label{main_lower_bound_eq} \\ &= \limsup_{d\to\infty} \frac1d \Tr T_p^{(d)} \log_2 T_p^{(d)} \nonumber\\ &\texteq{(v)} \int_{-\pi}^{\pi} \frac{d\phi}{2\pi}\ \left(2\pi p(\phi)\right) \log_2 \left(2\pi p(\phi)\right) \nonumber\\ &= \log_2(2\pi) - \int_{-\pi}^{\pi} d\phi\ p(\phi) \log_2 \frac{1}{p(\phi)}\, . \nonumber \end{align} Here: (i)~follows from the LSD theorem~\eqref{LSD}; in~(ii) we introduced the state $\omega_{p,d}$ defined by~\eqref{maximally_correlated}; (iii)~comes from~\eqref{Icoh}, due to the fact that $S(\Omega[\tau]) = S(\tau)$ on the one hand, and \begin{align} \Tr_A \omega_{p,d}^{AB} &= \Tr_A \big(I\otimes \NN_p^{A'\to B}\big)\big(\Phi_d^{AA'}\big) \nonumber\\ &= \frac1d \Tr_A\! \sum_{h,k=0}^{d-1} (T_p)_{hk} \ketbraa{h}{k}_A\otimes \ketbraa{h}{k}_B \nonumber\\ &= \frac1d \sum_{h,k=0}^{d-1} (T_p)_{hk} \delta_{hk} \ketbraa{h}{k}_B \\ &= \frac1d \sum_{k=0}^{d-1} \ketbra{k}_B \nonumber\\ &= \frac{\id_B}{d} \nonumber \end{align} and therefore $S\big( \Tr_A \big(I\,\otimes\, \NN_p^{A'\to B}\big)\big(\Phi_d^{AA'}\big)\big) = \log_2 d$ on the other; in~(iv) we simply substituted the definition~\eqref{von_Neumann} of von Neumann entropy; and finally in~(v) we employed Lemma~\ref{Serra-Capizzano_lemma} with the choice $a(\phi) = 2\pi p(\phi)$. This is possible due to our assumption that~\eqref{finite_Renyi_p} holds for some $\alpha>1$. Note that $F(x) = x\log_2 x$ satisfies $\left|F(x)\right| < x^\alpha$ for all $\alpha>1$ and for all sufficiently large $x\in \R_+$. This concludes the proof of the lower bound on $Q(\NN_p)$ in~\eqref{eq:bounds-on-caps}. We remark in passing that the Szeg\H{o} theorem has been applied before, although with an entirely different scope, in the context of quantum information theory~\cite{Lupo2010}. We now establish the upper bound on $P_{\!\leftrightarrow}^\dag(\NN_p)$ in~\eqref{eq:bounds-on-caps}. We claim that there is a sequence of LOCC protocols that can simulate $\NN_p$ using $\omega_{p,d}$ defined by~\eqref{omega_p_d} as a resource state and with error vanishing as $d\to\infty$. To see why this is the case, let $\rho$ be an arbitrary input state, and consider the $d$-dimensional teleportation protocol~\eqref{telep} on $\rho$ that uses $\omega_{p,d}$ as a resource. In formula, let us define \bb \NN_{p,d}^{A'\to B} (\rho_{A'})\coloneqq \T_{A'AB\to B}^{(d)} \left(\rho_{A'} \otimes \omega_{p,d}^{AB}\right) . \label{simulation_NN_p_d} \ee We see that \begin{align} &\NN_{p,d}^{A'\to B} (\rho_{A'}) \nonumber\\ &\quad \texteq{(vi)} \sum_{x,z=0}^{d-1} X(x)_B Z(z)_B \Tr_{AA'}\! 
\left[ \rho_{A'} \otimes \omega_{p,d}^{AB}\, X(x)_{A'} Z(z)_{A'} \Phi_d^{AA'} Z(z)^\dag_{A'} X(x)^\dag_{A'} \right] Z(z)_B^\dag X(x)_B^\dag \nonumber\\ &\quad \texteq{(vii)} \sum_{x,z=0}^{d-1} \sum_{h,k=0}^{d-1} \frac1d\, (T_p)_{hk}\, X(x)_B Z(z)_B \nonumber\\ &\hspace{15.5ex} \Tr_{AA'}\! \left[ \rho_{A'} \otimes \ketbraa{hh}{kk}_{AB}\, X(x)_{A'} Z(z)_{A'} \Phi_d^{AA'} Z(z)^\dag_{A'} X(x)^\dag_{A'} \right] Z(z)_B^\dag X(x)_B^\dag \nonumber\\ &\quad \texteq{(viii)} \sum_{x,z=0}^{d-1} \sum_{h,k=0}^{d-1} \frac1d\, (T_p)_{hk}\, X(x)_B Z(z)_B \nonumber\\ &\hspace{15.5ex} \left(\Tr_{AA'}\! \left[ \rho_{A'} \otimes \ketbraa{h}{k}_{A}\, Z(z)_{A}^\intercal X(x)_{A}^\intercal \Phi_d^{AA'} X(x)_{A}^* Z(z)_{A}^* \right] \ketbraa{h}{k}_B \right) Z(z)_B^\dag X(x)_B^\dag \nonumber\\ &\quad \texteq{(ix)} \sum_{x,z=0}^{d-1} \sum_{h,k=0}^{d-1} \frac1d\, (T_p)_{hk}\, X(x)_B Z(z)_B \label{main_upper_bound_eq1} \\ &\hspace{15.5ex} \left( e^{\frac{2\pi i}{d} z(k-h)} \Tr_{AA'}\! \left[ \rho_{A'} \otimes \ketbraa{h\oplus x}{k\oplus x}_{A}\, \Phi_d^{AA'} \right] \ketbraa{h}{k}_B \right) Z(z)_B^\dag X(x)_B^\dag \nonumber\\ &\quad \texteq{(x)} \sum_{x,z=0}^{d-1} \sum_{h,k=0}^{d-1} \frac{1}{d^2}\, (T_p)_{hk}\, X(x)_B Z(z)_B \left( e^{\frac{2\pi i}{d} z(k-h)} \rho_{h\oplus x,\, k\oplus x} \ketbraa{h}{k}_B \right) Z(z)_B^\dag X(x)_B^\dag \nonumber\\ &\quad \texteq{(xi)} \sum_{x,z=0}^{d-1} \sum_{h,k=0}^{d-1} \frac{1}{d^2}\, (T_p)_{hk}\, \rho_{h\oplus x,\, k\oplus x} \ketbraa{h\oplus x}{k\oplus x}_B \nonumber\\ &\quad = \sum_{x=0}^{d-1} \sum_{h,k=0}^{d-1} \frac1d\, (T_p)_{hk}\, \rho_{h\oplus x,\, k\oplus x} \ketbraa{h\oplus x}{k\oplus x}_B \nonumber\\ &\quad \texteq{(xii)} \sum_{h,k=0}^{d-1} \left(\frac1d \sumno_{x=0}^{d-1} (T_p)_{h\oplus x,\, k\oplus x} \right) \rho_{hk} \ketbraa{h}{k}_B\, . \nonumber \end{align} In the above derivation, (vi)~follows from~\eqref{telep}, (vii)~from~\eqref{omega_p_d}, (viii)~from the formula \bb M\otimes \id \ket{\Phi_d} = \id \otimes M^\intercal \ket{\Phi_d}\, , \ee valid for the maximally entangled state~\eqref{Phi_d} in any finite dimension, (ix)~and~(xi) from~\eqref{HW}, (x)~from the identity \bb \Tr \left[ M_A \otimes N_{A'} \Phi_d^{AA'} \right] = \frac1d \Tr \left[ MN^\intercal \right] , \ee and finally (xii)~by a simple change of variable $h\oplus x \mapsto h$, $k\oplus x \mapsto k$, once one observes that \bb \sum_{x=0}^{d-1} (T_p)_{hk} \mapsto \sum_{x=0}^{d-1} (T_p)_{h\ominus x,\, k\ominus x} = \sum_{x'=0}^{d-1} (T_p)_{h\ominus (-x'),\, k\ominus (-x')} = \sum_{x'=0}^{d-1} (T_p)_{h\oplus x',\, k\oplus x'}\, , \ee where $x'\coloneqq -x$. The calculation in~\eqref{main_upper_bound_eq1} shows that \bb \braket{h|\NN_{p,d} (\rho) |k} = \left(\frac1d \sumno_{x=0}^{d-1} (T_p)_{h\oplus x,\, k\oplus x}\right) \rho_{hk}\, . \ee We now want to argue that \emph{for fixed $h,k\in \N$} the above quantity converges to $(T_p)_{hk}\, \rho_{hk} = \braket{h|\NN_p(\rho)|k}$ as $d\to\infty$. To this end, note that if $d\geq h,k$ we have that $(T_p)_{h\oplus x,\, k\oplus x} = T_{hk}$ provided that either $x\leq \min\{d-1-h,d-1-k\} = \min\{d-h,d-k\}-1$ or $x\geq \max\{d-h,d-k\}$. Therefore, $(T_p)_{h\oplus x,\, k\oplus x} \neq T_{hk}$ for at most $|h-k|$ values of $x$, out of the $d$ possible ones. We can estimate the remainder terms pretty straightforwardly using the inequality $\left|(T_p)_{hk}\right|\leq 1$, valid for all $h,k\in \N$. 
Doing so yields \bb \left|(T_p)_{hk} - \frac1d \sum_{x=0}^{d-1} (T_p)_{h\oplus x,\, k\oplus x} \right| &\leq \left|(T_p)_{hk} - \frac{d-|h-k|}{d} (T_p)_{hk} \right| + \frac{|h-k|}{d} \\ &\leq \frac{2|h-k|}{d} \tends{}{d\to\infty} 0\, . \label{main_upperbound_eq2} \ee Thus, for all fixed $h,k\in \N$, \bb \braket{h|\NN_{p,d} (\rho) |k} \tends{}{d\to\infty} \braket{h|\NN_{p} (\rho) |k}\qquad \forall\ \rho\, , \label{entrywise_convergence} \ee as claimed. We now argue that this implies the stronger fact that \bb \lim_{d\to\infty} \left\| \left(\left(\NN_{p,d} - \NN_p\right)_{A'\to B} \otimes I_E \right) (\rho_{A'E}) \right\|_1 = 0 \qquad \forall\ \rho_{A'E}\, , \label{strong_convergence} \ee where it is understood that $\rho_{A'E}$ is an arbitrary, but fixed state of a bipartite system $A'E$, with the quantum system $E$ arbitrary. The above identity is usually expressed in words by saying that $\NN_{p,d}$ converges to $\NN_p$ in the \emph{topology of strong convergence}~\cite{Shirokov2008, Shirokov2018}. The arguments that allow to deduce~\eqref{strong_convergence} from~\eqref{entrywise_convergence} are standard: \begin{enumerate}[(a)] \item The linear span of the Fock states $\{\ket{k}\}_{k\in \N}$ is dense in $\HH_1$, and moreover the operators $\NN_{p,d} (\rho), \NN_{p} (\rho)$ are uniformly bounded in trace norm --- since they are states, they all have trace norm $1$. Hence, we see that~\eqref{entrywise_convergence} actually holds also when $\ket{h},\ket{k}$ are replaced by any two fixed vectors $\ket{\psi},\ket{\phi}\in \HH_1$. In formula, \bb \braket{\psi|\NN_{p,d} (\rho) |\phi} \tends{}{d\to\infty} \braket{\psi|\NN_{p} (\rho) |\phi}\qquad \forall\ \rho,\quad \forall\ \ket{\psi},\ket{\phi}\in \HH_1\, . \label{weak_operator_convergence} \ee \item Therefore, by definition $\NN_{p,d} (\rho)$ converges to $\NN_{p} (\rho)$ in the \emph{weak operator topology}. Since the latter object is also a quantum state, an old result due to Davies~\cite[Lemma~4.3]{Davies1969}, which can also be seen as an elementary consequence of the `gentle measurement lemma'~\cite[Lemma~9]{VV1999} (see also~\cite[Lemmata~9.4.1 and~9.4.2]{MARK}), states that in fact there is trace norm convergence, i.e. \bb \lim_{d\to\infty} \left\| \left(\NN_{p,d} - \NN_p\right)(\rho) \right\|_1 = 0\qquad \forall\ \rho\, . \label{strong_convergence_single_system} \ee \item The topology of strong convergence is stable under tensor products with the identity channel~\cite{Shirokov2008} (see also~\cite[Lemma~2]{Shirokov2018}). Therefore,~\eqref{strong_convergence_single_system} and~\eqref{strong_convergence} are in fact equivalent. Since we have proved the former, the latter also follows. \end{enumerate} We are now ready to prove that $P_{\!\leftrightarrow}^\dag(\NN_p)\leq D(p\|u)$. For a fixed positive integer $n\in \N_+$, consider a generic protocol as the one depicted in Figure~\ref{protocol_Q2_fig}, where the channel $\NN$ is now $\NN_p$. Since we are dealing with the secret-key-agreement capacity, the final state $\eta_n$ will approximate a private state $\gamma_n$ containing $\ceil{Rn}$ secret bits~\cite{private, Horodecki2009}. Here, $R$ is an achievable strong converse rate of secret-key agreement, i.e., it satisfies that $R<P_{\!\leftrightarrow}^\dag(\NN_p)$, where the right-hand side is defined by~\eqref{strong_converse_secret_key_agreement_capacity}. Call $\epsilon_n\coloneqq \frac12 \|\eta_n - \gamma_n\|_1$ the corresponding trace norm error, so that \bb \liminf_{n\to\infty} \epsilon_n <1\, . 
\label{limsup_epsilon_n} \ee Imagine now to replace each instance of $\NN_p$ with its simulation $\NN_{p,d}$. This will yield at the output a state $\eta_{n,d}$, in general different from $\eta_n$; however, because of~\eqref{strong_convergence}, and since $n$ here is fixed, we have that the associated error $\delta_{n,d}$ vanishes as $d\to\infty$, i.e. \bb \delta_{n,d} \coloneqq \frac12 \left\|\eta_{n,d} - \eta_d \right\|_1 \tends{}{d\to\infty} 0\, . \label{omega_n_d_approximation} \ee Now, after the above replacement the global protocol can be seen as an LOCC manipulation of $n$ copies of the state $\omega_{p,d}$ that is used to simulate $\NN_{p,d}$ as per~\eqref{simulation_NN_p_d}. By the triangle inequality, the trace distance between the final state $\eta_{n,d}$ and the private state $\gamma_n$ satisfies \bb \frac12 \left\| \eta_{n,d} - \gamma_n\right\|_1 \leq \frac12 \left\| \eta_{n,d} - \eta_n\right\|_1 + \frac12 \left\| \eta_{n} - \gamma_n\right\|_1 \leq \delta_{n,d} + \epsilon_n \, . \ee To apply the results of~\cite{MMMM}, we need to translate the above estimate into one that uses the fidelity instead of the trace distance. Such a translation can be made with the help of the Fuchs--van de Graaf inequalities~\cite{Fuchs1999}, here reported as~\eqref{Fuchs_vdG}. We obtain that $F(\eta_{n,d},\gamma_n)\geq \left(1- \delta_{n,d} - \epsilon_n\right)^2$. We can then use~\cite[Eq.~(5.37)]{MMMM} directly to deduce that \bb \left(1 - \delta_{n,d} - \epsilon_n\right)^2 \leq F(\eta_{n,d},\gamma_n) \leq 2^{- n\frac{\alpha-1}{\alpha} \left( R - \widetilde{E}_{R,\alpha}(\omega_{p,d}) \right)} \label{converse_bound_eq1} \ee for all $1<\alpha\leq \alpha_0$, where \bb \widetilde{E}_{R,\alpha}(\rho_{AB}) \coloneqq \inf_{\sigma\in \SEP_{AB}} \widetilde{D}_\alpha(\rho\|\sigma) \label{sandwiched_REE} \ee is the \deff{sandwiched $\boldsymbol{\alpha}$-R\'enyi relative entropy of entanglement}, and \bb \SEP_{AB} \coloneqq \mathrm{conv}\left\{ \ketbra{\psi}_A \otimes \ketbra{\phi}_B:\, \ket{\psi}_A\in \HH_A,\, \ket{\phi}_B\in \HH_B,\, \braket{\psi|\psi}=1=\braket{\phi|\phi} \right\} \ee is the set of \deff{separable states} over the bipartite quantum system $AB$. We can immediately recast~\eqref{converse_bound_eq1} as \bb R \leq \frac2n \frac{\alpha}{\alpha-1} \log_2\frac{1}{1-\delta_{n,d}-\epsilon_n} + \widetilde{E}_{R,\alpha}(\omega_{p,d}) \label{converse_bound_eq2} \ee Let us now estimate the quantity $\widetilde{E}_{R,\alpha}(\omega_{p,d})$. By taking as an ansatz for a separable state to be plugged into~\eqref{sandwiched_REE} simply $\Omega[\Pi_d/d]$ (see~\eqref{maximally_correlated} and~\eqref{Pi_d}), which is manifestly separable because $\Pi_d$ is diagonal, we conclude that \bb \widetilde{E}_{R,\alpha}(\omega_{p,d}) &\leq \widetilde{D}_\alpha\left( \omega_{p,d}\, \bigg\|\, \Omega\!\left[\frac{\Pi_d}{d}\right]\right) \\ &\texteq{(xiii)} \frac{1}{\alpha-1}\, \log_2 \Tr \omega_{p,d}^\alpha\, \Omega\!\left[\frac{\Pi_d}{d}\right]^{1-\alpha} \\ &\texteq{(xiv)} \frac{1}{\alpha-1}\, \log_2 \frac1d \Tr \left(T_{p}^{(d)}\right)^\alpha \label{converse_bound_eq3} \ee Here, (xiii)~follows from~\eqref{commute_Petz--Renyi}, while in~(xiv) we simply recalled~\eqref{omega_p_d}. Now, applying Lemma~\ref{Serra-Capizzano_lemma} once again with $a(\phi) = 2\pi p(\phi)$, we surmise that \bb \lim_{d\to\infty} \frac1d \Tr \left[\left(T_{p}^{(d)}\right)^\alpha\right] = \int_{-\pi}^{+\pi} \frac{d\phi}{2\pi}\ (2\pi p(\phi))^\alpha = (2\pi)^{\alpha-1} \int_{-\pi}^{+\pi} d\phi\ p(\phi)^\alpha\, . 
\label{converse_bound_eq4} \ee Therefore, from~\eqref{converse_bound_eq3} we deduce that \bb \limsup_{d\to\infty} \widetilde{E}_{R,\alpha}(\omega_{p,d}) \leq \log_2(2\pi) + \frac{1}{\alpha - 1}\log_2 \int_{-\pi}^{+\pi} d\phi\ p(\phi)^\alpha = D_\alpha(p\|u)\, , \label{converse_bound_eq5} \ee where $D_\alpha$ is the $\alpha$-R\'enyi divergence defined by~\eqref{Renyi}. Due to both~\eqref{converse_bound_eq5} and~\eqref{omega_n_d_approximation}, taking the limit $d\to\infty$ in~\eqref{converse_bound_eq2} yields \bb R \leq \frac2n \frac{\alpha}{\alpha-1} \log_2\frac{1}{1-\epsilon_n} + D_\alpha(p\|u)\, . \ee We are now ready to take the limit $n\to\infty$. In light of~\eqref{limsup_epsilon_n}, we obtain that \bb R \leq \liminf_{n\to\infty} \left( \frac2n \frac{\alpha}{\alpha-1} \log_2\frac{1}{1-\epsilon_n} + D_\alpha(p\|u) \right) = D_\alpha(p\|u)\, . \ee The limit as $\alpha\to1^+$ can be computed via Lemma~\ref{technical_Renyi_lemma} (and in particular~\eqref{limit_1+_Renyi}), due to the condition~\eqref{finite_Renyi_p}, which can be rephrased as $D_{\alpha_0}(p\|u)<\infty$. It gives \bb R \leq \liminf_{\alpha\to 1^+} D_\alpha(p\|u) = D(p\|u)\, . \ee Since $R$ was an arbitrary achievable strong converse rate for secret-key agreement, we deduce that \bb P_{\!\leftrightarrow}^\dag(\NN_p) \leq D(p\|u)\, , \ee completing the proof. \end{proof} \subsection{Extension to multimode channels} We will now see how to extend our main result, Theorem~\ref{capacities_thm}, to the case of a multimode bosonic dephasing channel. An $m$-mode quantum system ($m\in \N_+$) is modelled mathematically by the Hilbert space $\HH_m = \HH_1^{\otimes m} = L^2(\R)^{\otimes m} = L^2(\R^m)$. The annihilation and creation operators $a_j,a_j^\dag$ ($j=1,\ldots,m$), defined by \bb a_1 \coloneqq a\otimes \id\otimes \ldots\otimes \id\, ,\quad\ldots ,\quad a_m \coloneqq \id\otimes\ldots \otimes \id\otimes a \ee in terms of the single-mode operators in~\eqref{a_adag}, satisfy the canonical commutation relations \bb \left[a_j,a_k\right] = 0 = \left[a_j^\dag, a_k^\dag\right] ,\qquad \left[a_j,a_k^\dag\right] = \delta_{jk} \id\, . \ee The multimode Fock states $\ket{\vb{k}}$, indexed by $\vb{k} = (k_1,\ldots,k_m)^\intercal\in \N^m$, are given by \bb \ket{\vb{k}} \coloneqq \ket{k_1} \otimes \cdots \otimes \ket{k_m}\, . \ee Now, for a probability density function $p$ on $[-\pi,\pi]^m$, the corresponding \deff{multimode bosonic dephasing channel} is the quantum channel $\NN_p^{(m)}:\T(\HH_m) \to \T(\HH_m)$ defined by \bb \NN_p^{(m)}(\rho) \coloneqq \int_{[-\pi,\pi]^m} d^m \vb{\upphi}\ p(\vb{\upphi})\, e^{-i \sum_j a^\dag_j a_j^{\phantom{\dag}} \phi_j} \rho\, e^{i\sum_j a^\dag_j a_j^{\phantom{\dag}} \phi_j} , \label{Np_multimode} \ee where $j=1,\ldots,m$, and $\vb{\upphi} = (\phi_1,\ldots,\phi_m)^\intercal$. In perfect analogy with Theorem~\ref{capacities_thm}, we can now prove the following. \begin{thm} \label{capacities_multimode_thm} Let $p:[-\pi,+\pi]^m\to \R_+$ be a probability density function with the property that one of its R\'enyi entropies is finite for some $\alpha_0>1$, i.e. \bb \int_{[-\pi,\pi]^m} d^m \vb{\upphi}\ p(\vb{\upphi})^{\alpha_0} <\infty\, . 
\label{finite_Renyi_p_multimode} \ee Then the two-way assisted quantum capacity, the unassisted quantum capacity, the private capacity, the secret-key capacity, and all of the corresponding strong converse capacities of the associated multimode bosonic dephasing channel $\NN_p^{(m)}$ coincide, and are given by the expression \bb Q\big(\NN_p^{(m)}\big) &= Q^\dag\big(\NN_p^{(m)}\big) = P\big(\NN_p^{(m)}\big) = P^\dag\big(\NN_p^{(m)}\big) = \QQ\big(\NN_p^{(m)}\big) = \QQ^\dag\big(\NN_p^{(m)}\big) = P_{\!\leftrightarrow}\big(\NN_p^{(m)}\big) = P_{\!\leftrightarrow}^\dag\big(\NN_p^{(m)}\big) \\ &= D(p\|u) = m\log_2(2\pi) - h(p) \\ &= m\log_2(2\pi) - \int_{[-\pi,\pi]^m} d^m\vb{\upphi}\ p(\vb{\upphi}) \log_2\frac{1}{p(\vb{\upphi})}\, . \ee Here, $D(p\|u)$ denotes the Kullback--Leibler divergence between $p$ and the uniform probability distribution $u$ over $[-\pi,\pi]^m$, and $h(p)$ is the differential entropy of $p$. \end{thm} Rather unsurprisingly, one of the key technical tools that we need to prove the above generalization of Theorem~\ref{capacities_thm} is a multi-index version of the Szeg\H{o} theorem reported here as Lemma~\ref{Serra-Capizzano_lemma}. In fact, the original paper by Serra-Capizzano~\cite{SerraCapizzano2002} deals already with multi-indices, so we can borrow the following result directly from~\cite[Theorem~2]{SerraCapizzano2002} (cf.\ also the proof of Lemma~\ref{Serra-Capizzano_lemma}). A \deff{multi-index infinite Toeplitz matrix} is an operator $T:\ell^2(\N^m) \to \ell^2(\N^m)$ with the property that its matrix entries $T_{\vb{h},\vb{k}}$ (where $\vb{h}=(h_1,\ldots,h_m)^\intercal\in \N^m$ is a multi-index) depend only on the difference $\vb{h}-\vb{k}$, in formula $T_{\vb{h},\vb{k}}= a_{\vb{h}-\vb{k}}$. The case of interest is when \bb a_{\vb{k}} = \int_{[-\pi,\pi]^m} \frac{d^m\vb{\upphi}}{(2\pi)^m}\ a(\vb{\upphi})\, e^{-i\vb{k}\cdot\vb{\upphi}}\, , \ee where $a:[-\pi,\pi]^m\to\R_+$ is a non-negative function, and $\vb{k}\cdot\vb{\upphi} \coloneqq \sum_{j=1}^m k_j\phi_j$. As in the setting of Szeg\H{o}'s theorem, one considers the truncations of $T$ defined for some $\vb{d} = (d_1,\ldots, d_m)^\intercal\in \N^m$ by \bb T^{(\vb{d})}\coloneqq \sum_{h_1,k_1=0}^{d_1-1}\ldots \sum_{h_m,k_m=0}^{d_m-1} T_{\vb{h},\vb{k}}\ketbraa{\vb{h}}{\vb{k}}\, . \label{multiindex_truncation} \ee Note that $T^{(\vb{d})}$ is an operator on a space of dimension \bb D(\vb{d})\coloneqq \prod_{j=1}^m d_j\, . \label{multiindex_total_dim} \ee The multi-index Szeg\H{o} theorem then reads \bb \lim_{\vb{d}\to\infty} \frac{1}{D\big(\vb{d}\big)} \Tr F\big( T^{(\vb{d})}\big) = \lim_{\vb{d}\to\infty} \frac{1}{D(\vb{d})} \sum_{j=1}^{D(\vb{d})} F\big(\lambda_j\big(T^{(\vb{d})}\big)\big) = \int_{[-\pi,\pi]^m} \frac{d\vb{\upphi}}{(2\pi)^m}\ F\big(a(\vb{\upphi})\big)\, , \label{Szego_multiindex} \ee where $F:\R\to\R$, and $\vb{d}\to\infty$ means that $\min_{j=1,\ldots,m} d_j \to\infty$. Conditions on $a$ and $F$ so that~\eqref{Szego_multiindex} holds are as follows. \begin{lemma}[(Serra-Capizzano~\cite{SerraCapizzano2002}, multi-index case)] \label{Serra-Capizzano_multiindex_lemma} If $a:[-\pi,\pi]^m\to \R_+$ is such that \bb \int_{[-\pi,\pi]^m} \frac{d^m\vb{\upphi}}{(2\pi)^m}\ a(\vb{\upphi})^\alpha < \infty \ee for some $\alpha \geq 1$, and moreover $F:\R_+ \to \R$ is continuous and satisfies \bb F(x) = O(x^\alpha) \qquad (x\to \infty)\, , \ee then~\eqref{Szego_multiindex} holds. \end{lemma} The proof of Theorem~\ref{capacities_multimode_thm} closely follows that of Theorem~\ref{capacities_thm}.
Let us briefly summarize the main differences. \begin{proof}[Proof of Theorem~\ref{capacities_multimode_thm}] As an ansatz in the coherent information~\eqref{main_lower_bound_eq}, we use a multimode maximally entangled state, defined by \bb \ket{\Phi_{\vb{d}}}\coloneqq \frac{1}{\sqrt{D(\vb{d})}} \sum_{k_1=0}^{d_1-1}\ldots \sum_{k_m=0}^{d_m-1} \ket{\vb{k}}_A\ket{\vb{k}}_{A'}\, , \ee where $\vb{d}\in \N^m$ is fixed for now. Since \bb \omega_{p,\vb{d}} \coloneqq \left(I\otimes \NN_p^{(m)}\right)(\Phi_{\vb{d}}) = \frac{1}{D(\vb{d})} \, \Omega\big[T_p^{(\vb{d})}\big] \ee is still a maximally correlated state, the derivation in~\eqref{main_lower_bound_eq} is unaffected, provided that one employs Lemma~\ref{Serra-Capizzano_multiindex_lemma} in~(v). As for the converse bound on the strong converse rate, one replaces~\eqref{simulation_NN_p_d} with \bb \big(\NN_{p,\vb{d}}^{(m)}\big)_{A'\to B} (\rho_{A'})\coloneqq \left(\bigotimes\nolimits_{j=1}^m \T_{A_j'A_jB_j\to B_j}^{(d_j)} \right) \left(\rho_{A'} \otimes \omega_{p,\vb{d}}^{AB}\right) , \ee where $A_j$ denotes the $j^{\text{th}}$ mode of $A$, and analogously for $A'$ and $B$. Then~\eqref{main_upper_bound_eq1} becomes \bb \big(\NN_{p,\vb{d}}^{(m)}\big)_{A'\to B} (\rho_{A'}) = \sum_{h_1,k_1=0}^{d_1-1}\ldots \sum_{h_m,k_m=0}^{d_m-1} \left(\frac{1}{D(\vb{d})} \sum_{x_1=0}^{d_1-1}\ldots \sum_{x_m=0}^{d_m-1} (T_p)_{\vb{h}\oplus\vb{x},\, \vb{k}\oplus \vb{x}} \right) \rho_{\vb{h},\vb{k}} \ketbraa{\vb{h}}{\vb{k}}_B\, . \ee We can write an inequality analogous to~\eqref{main_upperbound_eq2} as \bb &\left| (T_p)_{\vb{h},\vb{k}} - \frac{1}{D(\vb{d})}\, \sum_{x_1=0}^{d_1-1}\ldots \sum_{x_m=0}^{d_m-1} (T_p)_{\vb{h}\oplus\vb{x},\, \vb{k}\oplus \vb{x}} \right| \\ &\qquad \leq \left|(T_p)_{\vb{h},\vb{k}} - \frac{\prod_{j=1}^m (d_j-|h_j-k_j|)}{D(\vb{d})}\, (T_p)_{\vb{h},\vb{k}} \right| + \frac{D(\vb{d})-\prod_{j=1}^m (d_j-|h_j-k_j|)}{D(\vb{d})} \\ &\qquad \leq 2\frac{D(\vb{d})-\prod_{j=1}^m (d_j-|h_j-k_j|)}{D(\vb{d})} \tends{}{\vb{d}\to\infty} 0\, . \ee In the exact same way, one uses the above inequality to prove a generalized version of~\eqref{strong_convergence} as \bb \lim_{\vb{d}\to\infty} \left\| \left(\left(\NN_{p,\vb{d}}^{(m)} - \NN_p^{(m)}\right)_{A'\to B} \otimes I_E \right) (\rho_{A'E}) \right\|_1 = 0 \qquad \forall\ \rho_{A'E}\, . \label{strong_convergence_multimode} \ee The combination of~\eqref{converse_bound_eq3} and~\eqref{converse_bound_eq4} now becomes \bb \widetilde{E}_{R,\alpha}(\omega_{p,\vb{d}}) \leq \frac{1}{\alpha-1} \log_2 \frac{1}{D(\vb{d})} \Tr \left[\left(T_p^{(\vb{d})}\right)^\alpha\right] \tends{}{\vb{d}\to\infty} \frac{1}{\alpha-1}\log_2 (2\pi)^{m(\alpha-1)} \int_{[-\pi,\pi]^m} d^m\vb{\upphi}\, p(\vb{\upphi})^\alpha\, , \ee so that we find, precisely as in~\eqref{converse_bound_eq5}, that \bb \limsup_{\vb{d}\to\infty} \widetilde{E}_{R,\alpha}(\omega_{p,\vb{d}}) \leq m\log_2(2\pi) + \frac{1}{\alpha-1}\log_2 \int_{[-\pi,\pi]^m} d^m\vb{\upphi}\, p(\vb{\upphi})^\alpha = D_\alpha(p\|u)\, . \ee The rest of the proof is formally identical. \end{proof} \section{Examples} \subsection{Wrapped normal distribution} The most commonly studied~\cite{Arqand2020, Arqand2021} example of the bosonic dephasing channel is that which yields in~\eqref{Np_action_Tp} a matrix $T_p$ with entries \bb (T_{p_\gamma})_{hk} = e^{-\frac{\gamma}{2}(h-k)^2} , \ee where $\gamma>0$ is a parameter.
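Both these matrix entries and the capacity formula of Theorem~\ref{capacities_thm} are easy to check numerically for this example. The following minimal Python sketch (purely illustrative; $\gamma=0.5$ is an arbitrary choice) uses the wrapped normal density $p_\gamma$ written out in~\eqref{wrapped_Gaussian} in the next paragraph, evaluates the integral~\eqref{Tp} defining the matrix entries, and computes the capacity $\log_2(2\pi)-h(p_\gamma)$ by direct quadrature.
\begin{verbatim}
import numpy as np

gamma = 0.5                      # illustrative dephasing strength
K = 30                           # wrapping terms kept in Eq. (wrapped_Gaussian)
phi = np.linspace(-np.pi, np.pi, 200001)

p = sum(np.exp(-(phi + 2 * np.pi * k) ** 2 / (2 * gamma)) for k in range(-K, K + 1))
p /= np.sqrt(2 * np.pi * gamma)
print("normalization:", np.trapz(p, phi))              # ~ 1

# matrix entries (T_p)_{hk} of Eq. (Tp) depend only on n = h - k
for n in range(5):
    Tn = np.trapz(p * np.exp(-1j * n * phi), phi)
    print(n, round(Tn.real, 8), round(np.exp(-gamma * n ** 2 / 2), 8))

# capacity of Theorem (capacities_thm): log2(2 pi) - h(p_gamma)
h = -np.trapz(p * np.log2(p), phi)
print("capacity:", np.log2(2 * np.pi) - h)
\end{verbatim}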
The probability density function $p:[-\pi,+\pi]\to \R_+$ that gives rise to this matrix is a \deff{wrapped normal distribution}, that is, a normal distribution on $\R$ with variance $\gamma$ `wrapped' around the unit circle. In formula, this is given by \bb p_{\gamma}(\phi) = \frac{1}{\sqrt{2\pi\gamma}}\sum_{k=-\infty}^{+\infty} e^{-\frac{1}{2\gamma} (\phi+2\pi k)^2} . \label{wrapped_Gaussian} \ee \begin{figure}[ht] \begin{tikzpicture}[scale=1] \begin{axis}[axis lines = left, xlabel = $\phi$, xmin=-pi, xmax=pi, ymin=0, ymax=0.7, xtick={-3.14159, -1.5708, 0, 1.5708, 3.14159}, xticklabels={$-\pi$,$-\pi/2$,$0$,$\pi/2$,$\pi$}, yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=5}, scaled y ticks=false, legend style={at={(axis cs:2.7,0.1)}, anchor=south, minimum height=0.5cm}] \addplot[line width=1pt, solid,color=black] table[x=a, y=b, col sep=comma]{distributions.csv}; \addlegendentry{$p_\gamma(\phi)$}; \addplot[line width=1pt, solid,color=Blues5seq5] table[x=a, y=c, col sep=comma]{distributions.csv}; \addlegendentry{$p_\lambda(\phi)$}; \addplot[line width=1pt, solid,color=Reds5seq4] table[x=a, y=d, col sep=comma]{distributions.csv}; \addlegendentry{$p_\kappa(\phi)$}; \end{axis} \end{tikzpicture} \caption{The probability density functions of the wrapped normal~\eqref{wrapped_Gaussian}, von Mises~\eqref{von_Mises}, and wrapped Cauchy distributions~\eqref{wrapped_Cauchy}, plotted as a function of $\phi\in [-\pi,\pi]$ for the case where $\gamma=\lambda=\kappa=0.5$.} \label{distributions_fig} \end{figure} Its differential entropy can be expressed as~\cite[Chapter~3, \S~3.3]{PAPADIMITRIOU} \bb h(p_\gamma) = \frac{1}{\ln 2}\left( - \ln \left(\frac{\varphi(e^{-\gamma})}{2\pi}\right) + 2 \sum_{k=1}^\infty \frac{(-1)^k e^{-\frac{\gamma}{2}(k^2+k)}}{k\left(1-e^{-k\gamma}\right)} \right)\, , \ee where \bb \varphi(q) \coloneqq \prod_{k=1}^\infty \left(1-q^k\right) \ee is the Euler function. Therefore, the capacities of the channel $\NN_{p_\gamma}$ are given by \bb Q(\NN_{p_\gamma}) &= Q^\dag(\NN_{p_\gamma}) = P(\NN_{p_\gamma}) = P^\dag(\NN_{p_\gamma}) = \QQ(\NN_{p_\gamma}) = \QQ^\dag(\NN_{p_\gamma}) = P_\leftrightarrow(\NN_{p_\gamma}) = P_\leftrightarrow^\dag(\NN_{p_\gamma}) \\ &= D(p_\gamma \| u) = \log_2 \varphi( e^{-\gamma} ) + \frac{2}{\ln 2} \sum_{k=1}^\infty \frac{(-1)^{k-1} e^{-\frac{\gamma}{2}(k^2+k)}}{k\left(1-e^{-k\gamma}\right)}\, . \label{capacities_wrapped_Gaussian} \ee It is instructive to obtain asymptotic expansions of the above expressions in the limits $\gamma\ll 1$ (small dephasing) and $\gamma\gg 1$ (large dephasing). \begin{itemize} \item \emph{Small dephasing.} When $\gamma \to 0^+$, the channel $\NN_{p_\gamma}$ approaches the identity over an infinite-dimensional Hilbert space. Therefore, it is intuitive to expect that its capacities will diverge. To determine the asymptotic behavior, it suffices to note that in this limit the entropy of the wrapped normal distribution, which is very concentrated around $0$, is well approximated by that of the corresponding normal variable on the whole $\R$, i.e., $\frac12 \log_2 (2\pi e \gamma)$. Thus \bb Q(\NN_{p_\gamma}) &= Q^\dag(\NN_{p_\gamma}) = P(\NN_{p_\gamma}) = P^\dag(\NN_{p_\gamma}) = \QQ(\NN_{p_\gamma}) = \QQ^\dag(\NN_{p_\gamma}) = P_\leftrightarrow(\NN_{p_\gamma}) = P_\leftrightarrow^\dag(\NN_{p_\gamma}) \\ &\approx \frac12 \log_2\frac{2\pi}{e\gamma}\, . \ee In practice, already for $\gamma\lesssim 1$ the above estimate is within about $1\%$ of the actual value.
\item \emph{Large dephasing.} A straightforward computation using the series representation \bb -\ln \varphi(q) = \sum_{k=1}^\infty \frac{1}{k}\frac{q^k}{1-q^k} \ee yields the expansion \bb \ln \varphi(q) + 2 \sum_{k=1}^\infty \frac{(-1)^{k-1} q^{-\frac{1}{2}(k^2+k)}}{k\left(1-q^k\right)} &= q+\frac{q^2}{2}-\frac{q^3}{3}+\frac{q^4}{4}-\frac{q^5}{5}+\frac{2 q^6}{3}+O\Big(q^7\Big) \\ &= 2q - \ln(1+q) + O(q^6)\, . \ee This can be plugged into~\eqref{capacities_wrapped_Gaussian} to give \bb Q(\NN_{p_\gamma}) &= Q^\dag(\NN_{p_\gamma}) = P(\NN_{p_\gamma}) = P^\dag(\NN_{p_\gamma}) = \QQ(\NN_{p_\gamma}) = \QQ^\dag(\NN_{p_\gamma}) = P_\leftrightarrow(\NN_{p_\gamma}) = P_\leftrightarrow^\dag(\NN_{p_\gamma}) \\ &= \frac{2}{\ln 2}\, e^{-\gamma} - \log_2 \left(1+e^{-\gamma} \right) + O\left( e^{-6\gamma}\right) \\ &= \frac{e^{-\gamma}}{\ln 2} + O\left(e^{-2\gamma}\right) . \ee \end{itemize} Incidentally, the combination of these two regimes yields an excellent approximation of the capacities across the whole range of $\gamma>0$. Namely, \bb Q(\NN_{p_\gamma}) &= Q^\dag(\NN_{p_\gamma}) = P(\NN_{p_\gamma}) = P^\dag(\NN_{p_\gamma}) = \QQ(\NN_{p_\gamma}) = \QQ^\dag(\NN_{p_\gamma}) = P_\leftrightarrow(\NN_{p_\gamma}) = P_\leftrightarrow^\dag(\NN_{p_\gamma}) \\ &\approx \max\left\{ \frac12 \log_2\frac{2\pi}{e\gamma},\ \frac{2}{\ln 2}\, e^{-\gamma} - \log_2 \left(1+e^{-\gamma} \right) \right\} . \ee The maximum absolute difference between the left-hand side and the right-hand side for $\gamma>0$ is less than $4\times 10^{-3}$. \subsection{Von Mises distribution} The \deff{von Mises distribution} on $[-\pi,+\pi]$ is defined by \bb p_\lambda(\phi) = \frac{e^{\frac1\lambda \cos(\phi)}}{2\pi\, I_0(1/\lambda)}\, , \label{von_Mises} \ee where $I_n$ denotes a modified Bessel function of the first kind. Here, $\lambda>0$ is a parameter that plays a role analogous to that $\gamma>0$ played in the case of the wrapped normal. The matrix $T_{p_\lambda}$ obtained in~\eqref{Np_action_Tp} for $p=p_\lambda$ is given by \bb (T_{p_\lambda})_{hk} = \frac{I_{|h-k|}(1/\lambda)}{I_0(1/\lambda)}\, . \ee The differential entropy of $p_\lambda$ can be calculated analytically, yielding~\cite[Chapter~3, Section~3.3]{PAPADIMITRIOU} \bb h(p_\lambda) = \log_2 (2\pi\, I_0(1/\lambda)) - \frac{1}{\ln 2} \frac{I_1(1/\lambda)}{\lambda\, I_0(1/\lambda)}\, . \ee Therefore, the capacities of the corresponding bosonic dephasing channel are given by \bb Q(\NN_{p_\lambda}) &= Q^\dag(\NN_{p_\lambda}) = P(\NN_{p_\lambda}) = P^\dag(\NN_{p_\lambda}) = \QQ(\NN_{p_\lambda}) = \QQ^\dag(\NN_{p_\lambda}) = P_\leftrightarrow(\NN_{p_\lambda}) = P_\leftrightarrow^\dag(\NN_{p_\lambda}) \\ &= \frac{1}{\ln 2} \frac{I_1(1/\lambda)}{\lambda\, I_0(1/\lambda)} - \log_2 I_0(1/\lambda)\, . \ee \subsection{Wrapped Cauchy distribution} Our final example of a probability distribution on the circle, and of the bosonic dephasing channel associated to it, is defined similarly to the wrapped normal distribution, but this time starting from the Cauchy probability density function. Namely, for some parameter $\kappa>0$ we set \bb p_\kappa(\phi) \coloneqq \sum_{k=-\infty}^{+\infty} \frac{\sqrt{\kappa}}{\pi\left(\kappa+\left(\phi+2\pi k\right)^2\right)} = \frac{1}{2\pi} \frac{\sinh(\sqrt{\kappa})}{\cosh(\sqrt{\kappa}) -\cos\phi}. \label{wrapped_Cauchy} \ee For a proof of the second identity, see~\cite[p.~51]{MARDIA}. The matrix $T_{p_\kappa}$ obtained in~\eqref{Np_action_Tp} for $p=p_\kappa$ is given by \bb (T_{p_\kappa})_{hk} = e^{-\sqrt{\kappa}|h-k|}\, . 
\ee The differential entropy of $p_\kappa$ is equal to $\log_2\big(2\pi\big(1-e^{-2\sqrt{\kappa}}\big)\big)$~\cite[Chapter~3, \S~3.3]{PAPADIMITRIOU}, implying that the various capacities of the corresponding bosonic dephasing channel $\NN_{p_\kappa}$ are equal to \bb \CC(\NN_{p_\kappa}) = \log_2\!\left(\frac{1}{1-e^{-2\sqrt{\kappa}}}\right) \, . \label{eq:wrapped-cauchy-cap} \ee \end{document} \usepackage[latin1]{inputenc} \usepackage{amsthm} \usepackage{amssymb} \usepackage{amsmath} \usepackage{braket} \usepackage{dsfont} \usepackage{mathdots} \usepackage{mathtools} \usepackage{enumerate} \usepackage{scalerel} \usepackage{stackengine} \usepackage[colorlinks=true, linkcolor=blue, urlcolor=blue, citecolor=red, anchorcolor=green, backref=page]{hyperref} \usepackage{mathrsfs} \usepackage{upgreek} \newcommand\hmmax{0} \newcommand\bmmax{0} \usepackage{bm} \usepackage{pgfplots} \usepackage{siunitx} \usepackage{centernot} \usepackage{comment} \usepackage{chngcntr} \newtheorem{thm}{Theorem} \newtheorem*{thm*}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem*{prop*}{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem*{lemma*}{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem*{cor*}{Corollary} \newtheorem{cj}[thm]{Conjecture} \newtheorem*{cj*}{Conjecture} \newtheorem{Def}[thm]{Definition} \newtheorem*{Def*}{Definition} \newtheorem{question}[thm]{Question} \newtheorem*{question*}{Question} \newtheorem{problem}[thm]{Problem} \newtheorem*{problem*}{Problem} \theoremstyle{definition} \newtheorem{rem}[thm]{Remark} \newtheorem*{note}{Note} \newtheorem{ex}[thm]{Example} \makeatletter \def\thmhead@plain#1#2#3{ \thmname{#1}\thmnumber{\@ifnotempty{#1}{ }\@upn{#2}} \thmnote{ {\the\thm@notefont#3}}} \let\thmhead\thmhead@plain \makeatother \newcommand{\bb}{\begin{equation}\begin{aligned}\hspace{0pt}} \newcommand{\bbb}{\begin{equation*}\begin{aligned}} \newcommand{\ee}{\end{aligned}\end{equation}} \newcommand{\eee}{\end{aligned}\end{equation*}} \newcommand\floor[1]{\left\lfloor#1\right\rfloor} \newcommand\ceil[1]{\left\lceil#1\right\rceil} \newcommand{\texteq}[1]{\stackrel{\mathclap{\scriptsize \mbox{#1}}}{=}} \renewcommand{\textleq}[1]{\stackrel{\mathclap{\scriptsize \mbox{#1}}}{\leq}} \newcommand{\textl}[1]{\stackrel{\mathclap{\scriptsize \mbox{#1}}}{<}} \newcommand{\textg}[1]{\stackrel{\mathclap{\scriptsize \mbox{#1}}}{>}} \renewcommand{\textgeq}[1]{\stackrel{\mathclap{\scriptsize \mbox{#1}}}{\geq}} \newcommand{\ketbra}[1]{\ket{#1}\!\!\bra{#1}} \newcommand{\ketbraa}[2]{\ket{#1}\!\!\bra{#2}} \newcommand{\sumno}{\sum\nolimits} \newcommand{\e}{\varepsilon} \newcommand{\G}{\mathrm{\scriptscriptstyle G}} \newcommand{\id}{\mathds{1}} \newcommand{\R}{\mathds{R}} \newcommand{\N}{\mathds{N}} \newcommand{\Z}{\mathds{Z}} \newcommand{\C}{\mathds{C}} \newcommand{\E}{\mathds{E}} \newcommand{\PSD}{\mathrm{PSD}} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\cl}{cl} \DeclareMathOperator{\co}{conv} \DeclareMathOperator{\cone}{cone} \DeclareMathOperator{\inter}{int} \DeclareMathOperator{\Span}{span} \DeclareMathAlphabet{\pazocal}{OMS}{zplm}{m}{n} \DeclareMathOperator{\aff}{aff} \DeclareMathOperator{\ext}{ext} \DeclareMathOperator{\pr}{Pr} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\spec}{sp} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\Id}{id} \DeclareMathOperator{\relint}{relint} \DeclareMathOperator{\arcsinh}{arcsinh} \DeclareMathOperator{\erf}{erf} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\diag}{diag} 
\DeclareMathOperator{\ran}{ran} \newcommand{\HH}{\pazocal{H}} \newcommand{\T}{\pazocal{T}} \newcommand{\Tp}{\pazocal{T}_+} \newcommand{\D}{\pazocal{D}} \newcommand{\K}{\pazocal{K}} \newcommand{\CC}{\pazocal{C}} \newcommand{\B}{\pazocal{B}} \newcommand{\Sch}{\pazocal{S}} \newcommand{\Tsa}{\pazocal{T}_{\mathrm{sa}}} \newcommand{\Ksa}{\pazocal{K}_{\mathrm{sa}}} \newcommand{\Bsa}{\pazocal{B}_{\mathrm{sa}}} \newcommand{\Schsa}{\pazocal{S}_{\mathrm{sa}}} \newcommand{\MM}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\TT}{\mathcal{T}} \newcommand{\LL}{\mathcal{L}} \newcommand{\F}{\pazocal{F}} \newcommand{\Icoh}{I_{\mathrm{coh}}} \newcommand{\cptp}{\mathrm{CPTP}} \newcommand{\lo}{\mathrm{LO}} \newcommand{\onelocc}{\mathrm{LOCC}_\to} \newcommand{\locc}{\mathrm{LOCC}} \newcommand{\sep}{\mathrm{SEP}} \newcommand{\ppt}{\mathrm{PPT}} \newcommand{\SEP}{\pazocal{S}} \newcommand{\PPT}{\pazocal{P\!P\!T}} \newcommand{\lsmatrix}{\left(\begin{smallmatrix}} \newcommand{\rsmatrix}{\end{smallmatrix}\right)} \newcommand\smatrix[1]{{ \scriptsize\arraycolsep=0.4\arraycolsep\ensuremath{\begin{pmatrix}#1\end{pmatrix}}}} \stackMath \newcommand\xxrightarrow[2][]{\mathrel{ \setbox2=\hbox{\stackon{\scriptstyle#1}{\scriptstyle#2}} \stackunder[2pt]{ \xrightarrow{\makebox[\dimexpr\wd2\relax]{$\scriptstyle#2$}} }{ \scriptstyle#1\, }}} \newcommand{\tends}[2]{\xxrightarrow[\! #2 \!]{\mathrm{#1}}} \newcommand{\tendsn}[1]{\xxrightarrow[\! n\rightarrow \infty\!]{#1}} \newcommand{\tendsk}[1]{\xxrightarrow[\! k\rightarrow \infty\!]{#1}} \newcommand{\ctends}[3]{\xxrightarrow[\raisebox{#3}{$\scriptstyle #2$}]{\raisebox{-0.7pt}{$\scriptstyle #1$}}} \definecolor{Blues5seq1}{RGB}{239,243,255} \definecolor{Blues5seq2}{RGB}{189,215,231} \definecolor{Blues5seq3}{RGB}{107,174,214} \definecolor{Blues5seq4}{RGB}{49,130,189} \definecolor{Blues5seq5}{RGB}{8,81,156} \definecolor{Greens5seq1}{RGB}{237,248,233} \definecolor{Greens5seq2}{RGB}{186,228,179} \definecolor{Greens5seq3}{RGB}{116,196,118} \definecolor{Greens5seq4}{RGB}{49,163,84} \definecolor{Greens5seq5}{RGB}{0,109,44} \definecolor{Reds5seq1}{RGB}{254,229,217} \definecolor{Reds5seq2}{RGB}{252,174,145} \definecolor{Reds5seq3}{RGB}{251,106,74} \definecolor{Reds5seq4}{RGB}{222,45,38} \definecolor{Reds5seq5}{RGB}{165,15,21} \newcommand{\n}{a^\dag\! a} \newcommand{\vb}[1]{\boldsymbol{\mathbf{#1}}} \newcommand{\fakepart}[1]{ \par\refstepcounter{part} \sectionmark{#1} } \newcommand{\tcr}[1]{{\color{red!85!black}#1}}
2205.05724v1
http://arxiv.org/abs/2205.05724v1
Symmetry of surfaces for linear fractional group
\documentclass[12pt]{amsart} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage[all]{xy} \usepackage{color} \usepackage{verbatim} \usepackage{graphicx} \usepackage{tikz} \usepackage{placeins} \usepackage{float} \usepackage{listings} \usetikzlibrary{matrix} \usetikzlibrary{positioning} \usepackage{empheq} \usepackage{caption} \usepackage{cases}\usepackage{epsfig} \setlength{\textheight}{23cm} \setlength{\textwidth}{16cm} \setlength{\topmargin}{-0.8cm} \setlength{\parskip}{1 em} \hoffset=-1.4cm \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \numberwithin{equation}{section} \baselineskip=15pt \newcommand{\kau}[1]{{\color{blue} {#1} }} \author[Lokenath Kundu, Kaustav Mukherjee]{Lokenath Kundu, Kaustav Mukherjee} \email{[email protected], lokenath$\[email protected]} \address{SRM University, A.P.} \address{Indian Institute of Science Education and Research Bhopal, Madhya Pradesh 462066 } \keywords{Riemann surface, finite group, stable upper genus.} \title[Symmetry of surfaces for linear fractional group] {Symmetry of surfaces for linear fractional group} \date{24/11/21} \begin{document} \begin{abstract} We compute the stable upper genus for the family of finite non-abelian simple groups $PSL_2(\mathbb{F}_p)$ for $p \equiv 3~(mod~4)$. This classification is well-grounded in other branches of mathematics, such as topology, smooth and conformal geometry, and algebraic categories. \end{abstract} \maketitle \section{Introduction} \noindent Let $\Sigma_g$ be a Riemann surface of genus $g\geq 0$. By an action of a finite group $G$ on $\Sigma_g$ we will mean a properly discontinuous, orientation-preserving, faithful action. The collection $\lbrace g \geq 0 \mid G ~\text{acts on}~ \Sigma_g \rbrace$ is known as the spectrum of $G$, denoted by $Sp(G)$. The least element of $Sp(G)$ is denoted by $\mu(G)$ and is known as the minimum genus of the group $G$. An element $g \in Sp(G)$ is said to be the stable upper genus of a given group $G$ if $g+i \in Sp(G)$ for all $i \in \mathbb{N}$. A necessary and sufficient condition for an effective, orientation-preserving action of a group $G$ on a compact, connected, orientable surface $\Sigma_g$ of genus $g$, valid except for finitely many exceptional values of $g$, was proved by Kulkarni in \cite{kulkarni}. In particular, the group $PSL_2(\mathbb{F}_p)$ has the above-mentioned property for odd primes $p \geq 5$. The authors of \cite{ming2,ming1} determined the minimum genus for this family of finite groups. \\ \noindent Any action of a finite group $G$ on a Riemann surface $\Sigma_g$ of genus $g$ gives an orbit space $\Sigma_h := \Sigma_g/G$, also known as an orbifold. We may take this action to be conformal, that is, analytic in some complex structure on $\Sigma_g$: the positive solution of the Nielsen realization problem \cite{niel,eck} implies that if a group $G$ acts topologically on $\Sigma_g$, then it also acts conformally with respect to some complex structure. \\ \noindent The orbit space $\Sigma_h$ is again a Riemann surface, possibly with some marked points, and the quotient map $p:\Sigma_g\rightarrow\Sigma_h$ is a branched covering map. Let $B=\lbrace c_1,c_2,\dots,c_r \rbrace$ be the set of all branch points in $\Sigma_h$ and $A:=p^{-1}(B)$.
Then $p:~\Sigma_g \setminus A ~\rightarrow ~\Sigma_h \setminus B$ is a proper covering. The tuple $(h;m_1,m_2,\dots,m_r)$ is known as the signature of the finite group $G$, where $m_1,m_2,\dots,m_r$ are the orders of the stabilizers of the preimages of the branch points $c_1,c_2,\dots,c_r$ respectively. By the Riemann-Hurwitz formula we have $$ (g-1)=|G|(h-1)+\frac{|G|}{2}\sum_{i=1}^r\left(1-\frac{1}{m_i}\right) \label{R.H.formula}.$$ The signature of a group encodes information about the group action on a Riemann surface and about $Sp(G)$. For more details about Riemann surfaces and signatures of Fuchsian groups we refer to \cite{otto} and \cite{sve} respectively. In \cite{kundu1,kundu2}, with a careful use of the Frobenius theorem and an explicit construction of surface kernel epimorphisms, the author was able to prove the following theorems: \begin{theorem}\label{1}\cite{kundu1} $ ( h;2^{[a_{2}]}, 3^{[a_{3}]}, 4^{[a_{4}]}, 7^{[a_{7}]} ) $ is a signature of $ PSL_2(\mathbb{F}_7) $ if and only if $$ 1+168(h-1)+ 42a_{2} + 56a_{3} + 63a_{4} + 72a_{7} \geq 3 $$ except when the signature is $(1;2)$. \end{theorem} \begin{theorem}\label{2}\cite{kundu1} $ ( h;2^{[a_{2}]}, 3^{[a_{3}]}, 5^{[a_{5}]}, 6^{[a_6]}, 11^{[a_{11}]} ) $ is a signature of $ PSL_2(\mathbb{F}_{11}) $ if and only if $$ 1+660(h-1)+ 165a_{2} + 220a_{3} + 264a_{5} + 275a_6 +300a_{11} \geq 26 .$$ \end{theorem} and the following lemma: \begin{lemma}\label{3}\cite{kundu2} $(h_{\geq ~ 0};~ 2^{[a_2]},~ 3^{[a_3]},~ 4^{[a_4]},~ 5^{[a_5]},~ d^{[a_d]},~ \frac{p-1}{2}^{[a_{\frac{p-1}{2}}]},~ \frac{p+1}{2}^{[a_{\frac{p+1}{2}}]},~ p^{[a_p]})$ is a signature for $PSL_2(\mathbb{F}_p)$ for $p ~ \equiv ~ 3 ~ (mod ~ 4)$ if and only if $$2(h-1)+~\frac{a_2-1}{2}~ + \frac{2a_3-1}{3} + ~ \frac{3a_4}{4} +~ \frac{4a_5}{5} +~ \frac{(d-1)a_d+1}{d} ~+ \frac{a_{\frac{p-1}{2}}(p-3)}{p-1} ~+ \frac{a_{\frac{p+1}{2}}(p-1)}{p+1} $$ $$+\frac{(p-1)a_p}{p} ~ \geq 0 \text{ or }$$ $$20(h-1) ~ + 10[\frac{a_2}{2} ~ +\frac{2a_3}{3} ~+\frac{3a_4}{4} ~+\frac{4a_5}{5} ~+\frac{(d-1)a_d}{d} ~+\frac{(p-3)a_{\frac{p-1}{2}}}{p-1} ~+$$ $$\frac{(p-1)a_{\frac{p+1}{2}}}{p+1} ~+\frac{(p-1)a_p}{p} ] ~ \geq ~ 1 $$ when $p ~ \geq ~ 13$, $p \equiv \pm 1~(mod ~ 5)$, $p ~ \not \equiv ~ \pm ~ 1~(mod ~ 8)$, and $d \geq 15$. Here $$d:=\min\lbrace e~|~e\geq 7 \text{ and either } e\,|\,\tfrac{p-1}{2} \text{ or } e\,|\,\tfrac{p+1}{2} \rbrace.$$ \end{lemma} \noindent Having detailed knowledge of the spectrum of the group $PSL_2(\mathbb{F}_p)$, one would like to address the following question:\\ \noindent \textbf{What is the stable upper genus of each of the groups $PSL_2(\mathbb{F}_p)$ for $p\equiv 3~(mod ~4)$?} In \cite{kundu1}, it was found, using standard programming tools \cite{ipython,pandas,matplotlib,numpy}, that the stable upper genus of the group $PSL_2(\mathbb{F}_7)$ is 399 and the stable upper genus of the group $PSL_2(\mathbb{F}_{11})$ is 3508. Following the approach described in \cite{kundu1}, here we extend these computations to larger primes and determine the stable upper genus of each member of the family of finite groups $PSL_2(\mathbb{F}_p)$ for $p \equiv 3~(mod~4)$. The novelty of this work is the observation that an exponential curve fits the stable upper genus values of $PSL_2(\mathbb{F}_p)$ for $p\equiv 3~(mod~4)$, which has not been seen in earlier cases \cite{kulkarni,kundu1}.
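\noindent To illustrate the kind of computation involved, the following short Python sketch (written for this exposition; it is not the exact code of \cite{kundu1}) uses Theorem~\ref{1} to list the genera up to a chosen bound that are \emph{not} attained by $PSL_2(\mathbb{F}_7)$; consistently with the stable upper genus value $399$ of \cite{kundu1}, no such gaps should appear from $399$ onwards. \lstset{language=Python} \lstset{frame=lines} \lstset{caption={Illustrative search for genera not attained by $PSL_2(\mathbb{F}_7)$}} \lstset{label={illus:psl7}} \lstset{basicstyle=\footnotesize} \begin{lstlisting}
def genus(h, a2, a3, a4, a7):
    # the quantity appearing in Theorem 1 (Riemann-Hurwitz with |G| = 168)
    return 1 + 168*(h - 1) + 42*a2 + 56*a3 + 63*a4 + 72*a7

BOUND = 450  # look at genera up to this value
admissible = set()
for h in range((BOUND + 167)//168 + 1):
    for a2 in range((BOUND + 167)//42 + 1):
        for a3 in range((BOUND + 167)//56 + 1):
            for a4 in range((BOUND + 167)//63 + 1):
                for a7 in range((BOUND + 167)//72 + 1):
                    if (h, a2, a3, a4, a7) == (1, 1, 0, 0, 0):
                        continue  # the excluded signature (1; 2)
                    g = genus(h, a2, a3, a4, a7)
                    if 3 <= g <= BOUND:
                        admissible.add(g)

gaps = [g for g in range(3, BOUND + 1) if g not in admissible]
print("genera up to", BOUND, "not attained by PSL_2(F_7):", gaps)
\end{lstlisting}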
\\ \noindent We now state the main result of this paper:\\ \noindent \begin{theorem} \label{main} The stable upper genus of the group $PSL_2(\mathbb{F}_p)$ can be written in the form \begin{equation} g=a p^b e^{c p}, \label{g_exp} \end{equation} where $a$, $b$ and $c$ are constants discussed in the proof, $g$ represents the stable upper genus of the group $PSL_2(\mathbb{F}_p)$, and $p$ is the respective prime with $p \equiv 3 ~(mod ~4)$. \end{theorem} \noindent Implementing computations with loops over large ranges of $h$ and $a_i$ [\ref{1},\ref{2},\ref{3}] by means of Python code \cite{ipython,pandas,numpy}, we find a set of stable upper genus values of $PSL_2(\mathbb{F}_p)$ for $p\in\{7,11,19,23,31\}$ which we discuss in the following sections. Based on this set of stable upper genus values, we construct the mathematical function described in Eq.~\ref{g_exp}, which captures the variation of the stable upper genus values of $PSL_2(\mathbb{F}_p)$ with respect to $p$. We discuss the detailed comparison of the expression in Eq.~\ref{g_exp} with the dependence of the stable upper genus on $p$ in the proof. To explore the possibility of obtaining a mathematical function describing the stable upper genus as a function of $p$ for the group $PSL_2(\mathbb{F}_p)$, we make use of the curve-fitting technique in Mathematica \cite{mathematica}, using the Fit and Manipulate tools, which provides us with the best fit to the data set of stable upper genus values corresponding to the respective primes $p\in\{7,11,19,23,31\}$. We have specifically considered the function type for the stable upper genus as \begin{equation} g=a p^b \exp[cp], \end{equation} where $a$, $b$ and $c$ are constants obtained from the best fit to the data set and $p$ is a prime with $p\equiv 3~(mod~4)$. This expression subsequently provides an estimate, along with an upper bound, of the stable upper genus of the group $PSL_2(\mathbb{F}_p)$ for general $p\equiv 3~(mod~4)$. \noindent We have organized our paper in the following way. In Section 2 we collect the necessary preliminary results. In most cases, we state the theorems without proof. In Section 3, we prove our main Theorem [\ref{main}]. \section{Preliminaries} \noindent In this section, we collect background material on properly discontinuous actions of a group $G$ on a Riemann surface $\Sigma_g$, the signature of a finite group, the family of groups $PSL_2(\mathbb{F}_p)$ for a prime $p$, and curve fitting, in particular exponential fitting. \noindent We start with the definition of a properly discontinuous action of a finite group on a Riemann surface. \begin{definition}\cite{sve} A finite group $G$ is said to act on a Riemann surface $\Sigma_g$ properly discontinuously if for any $x\in \Sigma_g$ there exists a neighbourhood $U$ of $x$ in $\Sigma_g$ such that $g(U)\cap U\neq\emptyset$ for only finitely many $g\in G$. \end{definition} \subsection{Fuchsian group} A discrete subgroup of $PSL_2(\mathbb{R})$ is known as a Fuchsian group \cite{sve}. \begin{theorem}\cite{sve} A subgroup $\Gamma$ of $PSL_2(\mathbb{R})$ is a Fuchsian group if and only if $\Gamma$ acts on the upper half plane $\mathbb{H}$ properly discontinuously. \end{theorem} \begin{definition} A Fuchsian group $\Gamma$ is said to be a co-compact Fuchsian group if $\mathbb{H}/\Gamma$ is compact. \end{definition} \subsection{Dirichlet Region} Let $\Gamma$ be a Fuchsian group acting on the upper half plane $\mathbb{H}$.
Let $p \in \mathbb{H}$ be a point which is not fixed by any element of $\Gamma \setminus \lbrace id \rbrace.$ The Dirichlet region centered at $p$ for $\Gamma$ is defined as $$D_p(\Gamma)=\lbrace z\in \mathbb{H}~|~\rho(z,p)\leq \rho(z,T(p)) ~ \forall T\in \Gamma \setminus \lbrace id \rbrace \rbrace.$$ \noindent Here $\rho$ is the usual hyperbolic metric. \begin{theorem} The Dirichlet region $D_p(\Gamma)$ is a connected fundamental region for $\Gamma$ if $p$ is not fixed by any element of $\Gamma \setminus \lbrace id \rbrace . $ \end{theorem} \begin{proof} \cite{sve}. \end{proof} \begin{theorem} Any two distinct points that lie in the interior of the Dirichlet region belong to two different $\Gamma$ orbits. \end{theorem} \begin{proof} \cite{sve}. \end{proof} \noindent Two points $w_1,w_2\in \mathbb{H}$ are said to be congruent if they lie in the same $\Gamma$ orbit. Two points in a fundamental region $F$ may be congruent only if they lie on the boundary of $F$. Let $F$ be a Dirichlet region for a Fuchsian group $\Gamma$. We consider the congruent vertices of $F$. Congruence is an equivalence relation on the vertices of $F$, and the equivalence classes are called \textbf{cycles}. Let $w\in \mathbb{H}$ be fixed by an elliptic element $T$ of $\Gamma$; then for any $S\in\Gamma$ the point $Sw$ is fixed by $STS^{-1}$. So if one vertex of a cycle is fixed by an elliptic element, then all the vertices of the cycle are fixed by conjugates of that element. Such cycles are called elliptic cycles, and the vertices of these cycles are known as elliptic vertices. The number of distinct elliptic cycles is the same as the number of non-congruent elliptic points in the Dirichlet region $F$. \\ \noindent Every non-trivial stabilizer of a point in $\mathbb{H}$ is a maximal finite cyclic subgroup of the group $\Gamma$. In this context we have the following theorem. \begin{theorem} Let $\Gamma$ be a Fuchsian group, and $F$ be a Dirichlet region for $\Gamma$. Let $\alpha_1,\alpha_2, \dots, \alpha_n$ be the internal angles at all congruent vertices of $F$. Let $k$ be the order of the stabilizer in $\Gamma$ of one of these vertices. Then $\alpha_1+\alpha_2+\dots+\alpha_n=\frac{2\pi}{k}$. \end{theorem} \begin{proof} \cite{sve}. \end{proof} \begin{definition} The orders of the non-conjugate maximal finite cyclic subgroups of the Fuchsian group $\Gamma$ are known as the periods of $\Gamma$. \end{definition} \subsection{Signature of Fuchsian group} Let a Fuchsian group $\Gamma$ act on $\mathbb{H}$, and suppose the orbit space $\mathbb{H}/\Gamma$ has finite area, i.e. $\mu(\mathbb{H}/\Gamma)<\infty .$ The restriction of the natural projection map $\mathbb{H}\rightarrow \mathbb{H}/\Gamma$ to the Dirichlet region $F$ identifies the congruent points of $F$. So $F/ \Gamma$ is an oriented surface, possibly with some marked points, since the congruent points lie on the boundary of $F$. The marked points correspond to the elliptic cycles and the cusps correspond to the non-congruent vertices at infinity. As a space, $\mathbb{H}/\Gamma$ is known as an orbifold. The number of cusps and the genus of the orbifold determine the topological type of the orbifold. The area of $\mathbb{H}/\Gamma$ is defined as the area of the fundamental region $F$. If one Dirichlet region is compact then all the other Dirichlet regions are compact. If a Fuchsian group has a compact Dirichlet region then the Dirichlet region has finitely many sides and the orbifold is also compact.
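\noindent For later use we record how the area of the orbifold is determined by this data (see \cite{sve}): if $\Gamma$ is a co-compact Fuchsian group with periods $m_1,m_2,\dots,m_r$ and $\mathbb{H}/\Gamma$ is a compact surface of genus $g$ (this data is the signature of $\Gamma$, defined below), then $$\mu(\mathbb{H}/\Gamma)=2\pi\left(2g-2+\sum_{i=1}^{r}\Big(1-\frac{1}{m_i}\Big)\right).$$ Since the hyperbolic area is multiplicative when passing to a subgroup of finite index, this formula immediately yields the Riemann-Hurwitz formula stated in the introduction.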
\\ \noindent If a convex fundamental region for a Fuchsian group $\Gamma$ has finitely many sides then the Fuchsian group is known as a geometrically finite group. \begin{theorem} Let $\Gamma$ be a Fuchsian group. If the orbifold $\mathbb{H}/\Gamma$ has finite area then $\Gamma$ is geometrically finite. \end{theorem} \begin{proof} \cite{sve}. \end{proof} \begin{definition}{\textbf{(Co-compact Fuchsian group)}} A Fuchsian group $\Gamma$ is said to be co-compact if the orbifold $\mathbb{H}/\Gamma$ is a compact topological space. \end{definition} \noindent Let $\Gamma$ be a Fuchsian group and $F$ be a compact Dirichlet region for $\Gamma$. Then the numbers of sides, vertices, and elliptic cycles of $F$ are finite. Let $m_1,m_2,\dots,m_r$ be the finitely many periods of $\Gamma$. Hence the orbifold $\mathbb{H}/\Gamma$ is a compact oriented surface of genus $g$ with $r$ marked points. The tuple $(g;m_1,m_2,\dots,m_r)$ is known as the signature of the Fuchsian group $\Gamma$. \subsection{Signature of finite group} Now we define the signature of a finite group in the sense of Harvey \cite{har}. \begin{lemma}[Harvey condition] \label{Harvey condition} A finite group $G$ acts faithfully on $\Sigma_g$ with signature $\sigma:=(h;m_1,\dots,m_r)$ if and only if it satisfies the following two conditions: \begin{enumerate} \item The \emph{Riemann-Hurwitz formula for the orbit space} holds, i.e. $$\displaystyle \frac{2g-2}{|G|}=2h-2+\sum_{i=1}^{r}\left(1-\frac{1}{m_i}\right), \text{ and }$$ \item There exists a surjective homomorphism $\phi_G:\Gamma(\sigma) \to G$ that preserves the orders of all torsion elements of $\Gamma(\sigma)$. The map $\phi_G$ is also known as a surface-kernel epimorphism. \end{enumerate} \end{lemma} \begin{corollary} Let $Sig(G)$ denote the set of all possible signatures of a finite group $G$; then $Sig(G)$ and $Sp(G)$ are in bijective correspondence via the Harvey condition. \end{corollary} \subsection{The family of finite groups $PSL_2(\mathbb{F}_p)$} Let $p$ be a prime number. The set $$PSL_2(\mathbb{F}_p):=\left\lbrace \begin{pmatrix} a & b \\ c & d \end{pmatrix}~\Big|~ad-bc=1,~a,b,c,d \in \mathbb{F}_p \right\rbrace/ \lbrace\pm I\rbrace$$ forms a group under matrix multiplication. It is a simple linear group generated by two elements, $A=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ of order $2$, and $B=\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ of order $3.$ The order of $AB= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is $p$, i.e. $PSL_2(\mathbb{F}_p)$ is a quotient of the triangle group $$\langle A,B~|~A^2=B^3=(AB)^p=1 \rangle.$$ \begin{theorem} Let $p$ be an odd prime. Let $G:=\langle x,y~|~x^p=y^p=(x^ay^b)^2=1,~ab \equiv 1~(mod~p) \rangle$ be a two-generator group. Then $G$ is isomorphic to $PSL_2(\mathbb{F}_p).$ \end{theorem} \begin{proof} \cite{beetham}. \end{proof} \subsubsection{Maximal subgroups of $PSL_2(\mathbb{F}_p)$} The group $PSL_2(\mathbb{F}_p)$ has $\frac{p(p^2-1)}{2}$ elements. Every element of the group $PSL_2(\mathbb{F}_p)$ has one of the following orders: $p,~2,~3,~4,~5,~d$, or a divisor of either $\frac{p-1}{2}$ or $\frac{p+1}{2}$, where $d$ is defined as $$d= \min \lbrace ~ e~|~ e \geq 7 \text{ and either } e~|~ \frac{p-1}{2} \text{ or } ~ e~|~ \frac{p+1}{2} \rbrace.$$ \noindent A subgroup $H$ of $G$ is said to be a maximal subgroup of $G$ if whenever $K$ is a subgroup with $H \subseteq K \subseteq G,$ then either $K=H$ or $K=G.$ The maximal proper subgroups of $PSL_2(\mathbb{F}_p)$ are the following \cite{sjerve}: \begin{itemize} \item[1.] the dihedral group of order $p-1$ or $p+1$; \item[2.]
a solvable group of order $\frac{p(p-1)}{2}$; \item[3.] $A_4$ if $p \equiv 3,13,27,37 ~ (mod ~ 40)$; \item[4.] $S_4$ if $p \equiv \pm 1 ~ (mod ~ 8)$; \item[5.] $A_5$ if $p \equiv \pm 1 ~ (mod ~ 5)$. \end{itemize} \subsection{Exponential Regression} \begin{definition} Exponential regression is the process of obtaining a mathematical expression for the exponential curve that best fits a set of data. In \cite{exponentialregression}, an exponential regression model has been discussed. As an example, we say that data $\{x,y\}$ fits a linear regression if it can be explained using $y=mx+c$, with $m$ the slope and $c$ the intercept on the $y$-axis. Similarly, if the set of data is best explained by a relation that becomes linear after taking logarithms, for instance \begin{eqnarray} Log[y]&=&mLog[x]+c,\\ Y&=&mX+c, \end{eqnarray} where $Y=Log[y]$ and $X=Log[x]$ with slope $m$ and intercept $c$, then we call it an exponential regression. The above example is the simplest form of exponential regression, with possibilities of significant extension to more complex scenarios. \end{definition} \section{Stable upper genus of $PSL_2(\mathbb{F}_p)$ for $p\equiv 3~(mod~4)$} \noindent In this section we prove our main theorem [\ref{main}] using Python code. \begin{theorem}\label{19} The stable upper genus of the group $PSL_2(\mathbb{F}_{19})$ is 33112. \end{theorem} \begin{proof} We will prove the theorem in two steps. \begin{enumerate} \item[Step 1:] We first prove that $33111 \notin Sp(PSL_2(\mathbb{F}_{19})).$ \\ \noindent From [\ref{3}] we know that $(h;2^{[a_2]},3^{[a_3]},5^{[a_5]},9^{[a_9]},10^{[a_{10}]},19^{[a_{19}]})$ is a signature of $PSL_2(\mathbb{F}_{19})$ if and only if $$3420h-3419+855a_2+1140a_3+1368a_5+1520a_9+1539a_{10}+1620a_{19}\geq 96.$$ \noindent If possible, let $$33111=3420h-3419+855a_2+1140a_3+1368a_5+1520a_9+1539a_{10}+1620a_{19}.$$ \noindent Then the value of $h$ could be at most $11$. Similarly the values of $a_i$ could be at most $43,~ 33,~ 27,~ 25,~24,~23$ for $i= ~ 2,~ 3,~ 5,~ 9,~10,~19$ respectively. So we will consider $$0 ~ \leq ~ h ~ \leq ~11$$ $$0 ~ \leq ~ a_2 ~ \leq ~ 43$$ $$0 ~ \leq ~ a_3 ~ \leq ~ 33$$ $$0 ~ \leq ~ a_5 ~ \leq ~ 27$$ $$0 ~ \leq ~ a_9 ~ \leq ~ 25$$ $$0 ~ \leq ~ a_{10} ~ \leq ~ 24$$ $$0 ~ \leq ~ a_{19} ~ \leq ~ 23.$$ \noindent We execute the following Python code to conclude that $PSL_2(\mathbb{F}_{19})$ cannot act on a compact, connected, orientable surface of genus $33111$ preserving the orientation: the code prints a tuple whenever the value $33111$ is attained, and it produces no output. \lstset{language=Python} \lstset{frame=lines} \lstset{caption={$33111$ is not an admissible genus for $PSL_2(\mathbb{F}_{19})$}} \lstset{label={2nd:code_direct}} \lstset{basicstyle=\footnotesize} \begin{lstlisting}
def func2(h, a2, a3, a5, a9, a10, a19):
    # genus given by the Riemann-Hurwitz formula for this signature
    return 1 + 3420*(h-1) + 855*a2 + 1140*a3 + 1368*a5 + 1520*a9 + 1539*a10 + 1620*a19

for h in range(12):
    for a2 in range(44):
        for a3 in range(34):
            for a5 in range(28):
                for a9 in range(26):
                    for a10 in range(25):
                        for a19 in range(24):
                            sol = func2(h, a2, a3, a5, a9, a10, a19)
                            if sol == 33111:
                                print("wrong", h, a2, a3, a5, a9, a10, a19)
\end{lstlisting} \item[Step 2:] To complete the proof of our claim, we have to find signatures corresponding to the genus values $33112$--$33967$ of $PSL_2(\mathbb{F}_{19})$. We execute the following Python code to compute signatures of $PSL_2(\mathbb{F}_{19})$ corresponding to the genus values $33112$--$33967$.
\lstset{language=Python} \lstset{frame=lines} \lstset{caption={Signatures of $PSL_2(\mathbb{F}_{19})$ corresponding to the genus values $33112$--$33967$}} \lstset{label={3rd:code_direct}} \lstset{basicstyle=\footnotesize} \begin{lstlisting}
def func2(h, a2, a3, a5, a9, a10, a19):
    # genus given by the Riemann-Hurwitz formula for this signature
    return 1 + 3420*(h-1) + 855*a2 + 1140*a3 + 1368*a5 + 1520*a9 + 1539*a10 + 1620*a19

sol_arr = []
const_arr = []
for h in range(12):
    for a2 in range(44):
        for a3 in range(34):
            for a5 in range(28):
                for a9 in range(26):
                    for a10 in range(25):
                        for a19 in range(24):
                            sol = func2(h, a2, a3, a5, a9, a10, a19)
                            if 33112 <= sol <= 33967:
                                sol_arr += [sol]
                                const_arr += [[h, a2, a3, a5, a9, a10, a19]]

signature_dictionary = dict(zip(sol_arr, const_arr))
sort_orders = sorted(signature_dictionary.items(), key=lambda x: x[0])
for i in sort_orders:
    print(i[0], i[1])
\end{lstlisting} \noindent Now we have to prove that $PSL_2(\mathbb{F}_{19})$ can act on every compact, connected, orientable surface of genus $g ~ \geq ~ 33967$ preserving the orientation. Let $g ~ \geq 33967$, and let $\Sigma_{g}$ be a compact, connected, orientable surface of genus $g$. We have $$ g-33112 ~ \equiv ~ s ~ (mod ~855) ~ \text{ where } ~0 ~ \leq ~ s ~ \leq 854.$$ Then $g ~ = ~ l+n\cdot 855$ for some $n\geq 1$, where $ l ~= 33112+ s$. We know a signature corresponding to the genus $l$, since $33112~\leq l~ \leq 33967$; let it be $(h;m_2,~m_3,~m_5,~m_9,m_{10},m_{19})$. Then a signature corresponding to the genus $g$ is $(h;m_2+n,~m_3,~m_5,~m_9,m_{10},m_{19})$. In this way we can find a signature corresponding to any genus $g ~ \geq 33967$. This completes the proof of our claim. \end{enumerate} \end{proof} \begin{theorem}\label{23} The stable upper genus of the group $PSL_2(\mathbb{F}_{23})$ is 297084. \end{theorem} \begin{proof} Similar to Theorem~\ref{19}. \end{proof} \begin{theorem}\label{31} The stable upper genus of the group $PSL_2(\mathbb{F}_{31})$ is 20275804. \end{theorem} \begin{proof} Similar to Theorem~\ref{19}. \end{proof} \textbf{Proof of the main theorem \ref{main}.} \begin{proof} In Theorems~\ref{19}, \ref{23} and \ref{31} we obtained the stable upper genus for $p\in\{19,23,31\}$. Combining these with the previously obtained results described in \cite{kundu1}, we now possess a data set of stable upper genus values $g\in\{399,3508,33112,297084,20275804\}$ corresponding to the primes $p\in\{7,11,19,23,31\}$ for the group $PSL_2(\mathbb{F}_p)$, shown as blue dots in Fig.~\ref{fitting}. We next investigate the dependence of the stable upper genus values $g$ on the prime $p\equiv 3~(mod~4)$, subsequently constructing the mathematical function described in Eq.~\ref{g_exp}, which can be visualized in Fig.~\ref{fitting} \cite{mathematica}. \begin{figure}[htb] \centering \epsfig{file=Fitting_linear_v5.pdf,width= 0.9\linewidth} \caption{ Stable upper genus values (blue dots) of $PSL_2(\mathbb{F}_p)$ for $p$ ranging from $7$ to $31$. The black line indicates the fit to the stable upper genus values. \label{fitting}} \end{figure} In Fig.~\ref{fitting}~(b), we show the log-scaled plot of (a), which provides an indication of the exponential dependence of $g$ on $p$. For the next step, we consider an ansatz of the form of Eq.~\ref{g_exp}. Additionally, Fig.~\ref{fitting} indicates two lines that are linear on the log scale, separating the cases $p\in\{7,11\}$ and $p\in\{19,23,31\}$.
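\noindent The fit of Eq.~\ref{g_exp} can also be cross-checked outside Mathematica: after taking logarithms the model becomes linear in $(\log a,\, b,\, c)$, so an ordinary least-squares computation suffices. The following NumPy sketch is included for illustration only; it is not the code used for the constants quoted below, and a joint fit of all five points need not reproduce the piecewise constants obtained for $p\in\{7,11\}$ and $p\in\{19,23,31\}$ separately. \lstset{language=Python} \lstset{frame=lines} \lstset{caption={Illustrative log-linear least-squares cross-check of the exponential fit}} \lstset{label={fit:crosscheck}} \lstset{basicstyle=\footnotesize} \begin{lstlisting}
import numpy as np

# stable upper genus data: (p, g) for p = 7, 11, 19, 23, 31
p = np.array([7, 11, 19, 23, 31], dtype=float)
g = np.array([399, 3508, 33112, 297084, 20275804], dtype=float)

# model g = a * p**b * exp(c*p), i.e. log g = log a + b*log p + c*p
X = np.column_stack([np.ones_like(p), np.log(p), p])
coef, *_ = np.linalg.lstsq(X, np.log(g), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print("a =", a, "b =", b, "c =", c)
print("fitted g:", a * p**b * np.exp(c * p))
\end{lstlisting}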
The exponential fitting is carried out with the following Mathematica code \lstset{language=Mathematica} \lstset{frame=lines} \lstset{caption={Exponential fitting of $g$ with respect to $p$ for the group $PSL_2(\mathbb{F}_{p})$}} \lstset{label={4th:code_direct}} \lstset{basicstyle=\footnotesize} \begin{lstlisting}
data = {{7, 399}, {11, 3508}, {19, 33112}, {23, 297084}, {31, 20275804}}
lineardata = ListPlot[data, Frame -> True, PlotMarkers -> {\[FilledCircle], 10}, PlotStyle -> Blue, FrameLabel -> {Style["Prime p", Black, 14], Style["Upper stable genus g", Black, 14]}, FrameTicksStyle -> Directive[FontSize -> 14], ImageSize -> Medium, PlotRange -> All]
logdata = ListLogPlot[data, Frame -> True, PlotMarkers -> {\[FilledCircle], 10}, PlotStyle -> Blue, FrameLabel -> {Style["Prime p", Black, 14], Style["Upper stable genus g", Black, 14]}, FrameTicksStyle -> Directive[FontSize -> 14], ImageSize -> Medium, PlotRange -> All]
Manipulate[ Show[{logdata, LogPlot[a*p^b*Exp[c*p], {p, 7, 31}]}], {b, 0.0, 2, 0.5}, {a, 0.5, 4.5, 0.1}, {c, 0.5, 1, 0.01}]
Linearplot = Show[{lineardata, Plot[0.5*x^0.5*Exp[0.51*x], {x, 7, 31}, PlotRange -> All, PlotStyle -> Black], Plot[4.5*x^0.5*Exp[0.5*x], {x, 7, 31}, PlotRange -> All, PlotStyle -> {Blue, Dashed}]}, ImageSize -> Large]
Logplot = Show[{logdata, LogPlot[0.5*x^0.5*Exp[0.51*x], {x, 7, 31}, PlotRange -> All, PlotStyle -> Black], LogPlot[4.5*x^0.5*Exp[0.5*x], {x, 7, 31}, PlotRange -> All, PlotStyle -> {Blue, Dashed}]}, ImageSize -> Large]
\end{lstlisting} This leads us to the values of the constants $\{a\rightarrow 4.5,b\rightarrow0.5,c\rightarrow0.5\}$ for $p\in\{7,11\}$ and $\{a\rightarrow 0.5,b\rightarrow0.5,c\rightarrow0.51\}$ for $p\in\{19,23,31\}$. This provides a general expression for $g$ as a function of $p$ for the higher range of primes $p\equiv 3~(mod~4)$ (the second category), given as \begin{equation} g=0.5p^{0.5}\exp[0.51p]. \label{g_2} \end{equation} This expression essentially captures the variation of the stable upper genus $g$ with respect to $p$, and provides us with a rough estimate for the general case of $p\equiv 3~(mod~4)$, as shown in Table~\ref{table1}. As a prediction of the fitting, an arbitrary prime such as $p=59$ should have stable upper genus close to $g=44907302712962$. \begin{table}[htb] \begin{center} \begin{tabular}{||c| c| c||} \hline Prime $p$ & Stable upper genus $g$ & Exponential fit of $g$ \\ [0.5ex] \hline\hline 7 & 399 & 394 \\ \hline 11 & 3508 & 3651 \\ \hline 19 & 33112 & 35209 \\ \hline 23 & 297084 & 297926 \\ \hline 31 & 20275804 & 20457219 \\ [1ex] \hline \end{tabular} \caption{Comparison of the stable upper genus obtained computationally with the exponential fitting described in Eq.~\ref{g_exp} for the primes $p\in\{7,11,19,23,31\}$.} \label{table1} \end{center} \end{table} \end{proof} \section{Acknowledgement} We acknowledge fruitful discussions with Dr. Manish Kumar Pandey. Author Mukherjee acknowledges financial support from the Ministry of Education, India, through the Prime Minister's Research Fellowship (PMRF) for pursuing a Ph.D. Author Kundu is grateful to the Council of Scientific and Industrial Research (CSIR), India, for partial financial support. \newpage \begin{thebibliography}{20} \bibitem{niel} Kerckhoff, S. The Nielsen realization problem, Ann. Math. 117 (1983), 235--265. \bibitem{eck} Eckmann, B. and M{\"u}ller, H. Plane motion groups and virtual Poincar\'e duality of dimension two, Invent. Math. 69 (1982), 293--310. \bibitem{ming2} Glover, H. and Sjerve, D.
The genus of $PSL_2(q)$, Journal f{\"u}r die reine und angewandte Mathematik 380 (1987), 59--86. \bibitem{ming1} Glover, H. and Sjerve, D. Representing $PSL_2(p)$ on a Riemann surface of least genus, L'Enseignement Math\'ematique 31 (1985), 305--325. \bibitem{oza} {\"O}zaydin, M., Simmons, C. and Taback, J. Surface symmetries and $PSL_2(p)$, Transactions of the American Mathematical Society, Volume 359, Number 5, May 2007, Pages 2243--2268. S 0002-9947(06)04011-6. \bibitem{kulkarni} Kulkarni, R. S. Symmetries of surfaces, Topology 26, No. 2 (1987), 195--203. \bibitem{ipython} P\'erez, F. and Granger, B. E. IPython: A System for Interactive Scientific Computing, Computing in Science \& Engineering 9 (2007), 21--29. \bibitem{pandas} McKinney, W. Data Structures for Statistical Computing in Python, Proceedings of the 9th Python in Science Conference (2010), 51--56. \bibitem{matplotlib} Hunter, J. D. Matplotlib: A 2D Graphics Environment, Computing in Science \& Engineering 9 (2007), 90--95. \bibitem{numpy} Harris, C. R. \textit{et al.}, Array programming with NumPy, Nature 585 (2020), 357--362. \bibitem{otto} Forster, O. Lectures on Riemann Surfaces. Springer-Verlag, New York. \bibitem{sve} Katok, S. Fuchsian Groups. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 1992. \bibitem{har} Harvey, W. J. Cyclic groups of automorphisms of a compact Riemann surface. Quart. J. Math. Oxford Ser. (2) 17 (1966), 86--97. \bibitem{frob} Lando, S. K. and Zvonkin, A. K. Graphs on Surfaces and their Applications. Springer-Verlag, Berlin Heidelberg New York, 2004. \bibitem{kundu1} Kundu, L. (2021). Linear fractional group as Galois group. Journal of Topology and Analysis. DOI: 10.1142/S1793525321500722. \bibitem{kundu2} Kundu, L. Growth of a family of finite simple groups. arXiv:2110.11429v1 (submitted). \bibitem{sjerve} Glover, H. and Sjerve, D. (1985). Representing $PSL_2(\mathbb{F}_p)$ on a Riemann surface of least genus. L'Enseignement Math\'ematique, t. 31, 305--325. http://doi.org/10.5169/seals-54572. \bibitem{beetham} Beetham, M. J. A set of generators and relations for the group $PSL(2,q)$, $q$ odd. Journal of the London Mathematical Society, s2-3(3), 554--557. \bibitem{mathematica} Wolfram Research, Inc., Mathematica, Version 13.0.0, Champaign, IL (2021). \end{document}
2205.05713v4
http://arxiv.org/abs/2205.05713v4
Concise tensors of minimal border rank
\documentclass[11pt]{amsart} \usepackage[USenglish]{babel} \usepackage{amsmath,amsthm,amssymb,amscd} \usepackage{booktabs} \usepackage[T1]{fontenc} \usepackage{url} \usepackage{enumitem} \setlist[enumerate,1]{label=(\arabic*), ref=(\arabic*), itemsep=0em} \usepackage[pdfborder={0 0 0}]{hyperref} \hypersetup{ colorlinks, linkcolor={red!80!black}, citecolor={blue!80!black}, urlcolor={blue!80!black} } \numberwithin{equation}{section} \def\Amat{X} \def\Bmat{Y} \def\Cmat{Z} \newcommand{\acta}{\circ_{\scriptscriptstyle A}} \newcommand{\actb}{\circ_{\scriptscriptstyle B}} \newcommand{\actc}{\circ_{\scriptscriptstyle C}} \newcommand{\otR}{\ot_{\cA}} \newcommand{\alg}[1]{\cA_{111}^{#1}} \usepackage{MnSymbol} \usepackage{tikz} \usetikzlibrary{arrows,shapes.geometric,positioning,decorations.markings, cd} \usepackage[mathscr]{eucal} \usepackage[normalem]{ulem} \usepackage{latexsym,youngtab} \usepackage{multirow} \usepackage{epsfig} \usepackage{parskip} \usepackage[textwidth=16cm, textheight=22cm]{geometry} \usepackage{todonotes} \usepackage{xcolor} \newcommand{\mytodo}[1]{\todo[color=blue!10,bordercolor=blue,size=\footnotesize]{\textbf{TODO: }#1}} \newcommand{\myinfo}[1]{\todo[color=orange!10,bordercolor=black,size=\footnotesize]{\textbf{Info: }#1}} \newcommand{\myintodo}[1]{\todo[inline,color=blue!10,bordercolor=violet,size=\footnotesize]{\textbf{Joa: }#1}} \newcommand{\jjch}[1]{\textcolor{red}{#1}} \newcommand{\jjrm}[1]{\textcolor{blue}{#1}} \setcounter{MaxMatrixCols}{15} \usepackage{color} \input{cortdefs.tex} \def\bt{\bold t} \def\tincompr{\operatorname{incompr}}\def\cb{ b}\def\cf{ f} \def\epr{\bra{epr}} \def\tlker{\operatorname{Lker}}\def\trker{\operatorname{Rker}} \def\texp{\operatorname{exp}} \def\eprx{\frac 1{\sqrt 2}(\bra{00}+\bra{11})} \def\bra#1{|{#1}\rangle}\def\ket#1{\langle {#1}|} \def\braket#1#2{\langle {#1}|{#2}\rangle} \def\ketbra#1#2{ \bra {#1}\ket {#2}} \def\bU{{\bold{U}}} \def\EE{\mathcal{E}} \def\Mn{M_{\langle \nnn \rangle}}\def\Mone{M_{\langle 1\rangle}}\def\Mthree{M_{\langle 3\rangle}} \def\Mnl{M_{\langle \mmm,\nnn,\lll\rangle}} \def\Mnnl{M_{\langle \nnn,\nnn,\lll\rangle}} \def\Mnm{M_{\langle \nnn,\nnn, \mmm\rangle}}\def\Mnw{M_{\langle \nnn,\nnn, \bw\rangle}} \def\Mtwo{M_{\langle 2\rangle}}\def\Mthree{M_{\langle 3\rangle}} \def\cK{{\mathcal K}} \def\lam{\lambda} \def\aa#1#2{a^{#1}_{#2}} \def\bb#1#2{b^{#1}_{#2}} \def\garbagec#1#2{c^{#1}_{#2}} \def\tinf{{\rm inf}} \def\subsmooth{{}_{smooth}} \def\tbrank{{\underline{\bold R}}} \def\trank{{\mathrm {rank}}} \def\len{{\mathrm{length}}} \def\trankc{{ \bold R}} \def\tlker{{\rm Lker}} \def\trker{{\rm Rker}} \def\tlength{{\rm length}} \def\us#1{\s_{#1}^0} \def\uV{{\underline V}} \def\aaa{{\bold a}} \def\ccc{{\bold c}} \def\tbase{{\rm Zeros}} \def\uuu{\bold u} \def\oldet{\ol{GL(W)\cdot [\tdet_n]}} \def\oldetc{\ol{GL_{n^2}\cdot [\tdet_n]}} \def\ogdv{\ol{GL(W)\cdot [v]}} \def\tmult{{\rm mult}} \def\VV{\mathbf{V}} \def\bpi{\hbox{\boldmath$\pi$\unboldmath}} \def\Dual{{\mathcal Dual}}\def\Osc{{\mathcal Osc}} \def\Ideal{{\mathcal I}} \def\bs{\bold s} \def\mmm{\bold m}\def\nnn{\bold n}\def\lll{\bold l} \def\Om{\Omega}\def\Th{\Theta} \def\simgeq{\sim\geq} \def\rig#1{\smash{ \mathop{\longrightarrow} \limits^{#1}}} \def\bS{\bold S} \def\bL{\bold L} \def\bv{\bold v}\def\bw{\bold w} \def\ip{{i'}}\def\jp{{j'}}\def\kp{{k'}} \def\ap{{\alpha '}}\def\bp{{\beta '}}\def\gp{{\gamma '}} \def\tsupp{{\rm supp}} \def\L{\Lambda} \def\BU{\mathbb{U}}\def\BB{\mathbb{B}} \def\bx{{\bold x}}\def\by{{\bold y}}\def\bz{{\bold z}} \def\Ra{\Rightarrow} 
\renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \renewcommand{\g}{\gamma} \renewcommand{\BC}{\mathbb{C}} \renewcommand{\red}[1]{ {\color{red} #1} } \newcommand{\fulges}[1]{ {\color{cyan} #1} } \renewcommand{\d}{\delta} \def\kk{\kappa} \newcommand{\aR}{\uwave{\mathbf{R}}} \newcommand{\bfR}{\mathbf{R}} \renewcommand{\bar}[1]{\overline{#1}} \renewcommand{\hat}[1]{\widehat{#1}} \newcommand{\rk}{\mathrm{rk}} \renewcommand{\emptyset}{\font\cmsy = cmsy11 at 11pt \hbox{\cmsy \char 59} } \renewcommand{\tilde}{\widetilde} \newcommand{\dotitem}{\item[$\cdot$]} \newtheorem{mainthm}{Theorem} \renewcommand{\themainthm}{\Alph{mainthm}} \newcommand{\textfrac}[2]{{\textstyle\frac{#1}{#2}}} \newcommand{\dispsum}{{\displaystyle\sum}} \def\Mlmn{M_{\langle \lll,\mmm,\nnn\rangle}} \usepackage[normalem]{ulem} \begin{document} \author{Joachim Jelisiejew, J. M. Landsberg, and Arpan Pal} \address{Department of Mathematics, Informatics and Mechanics, University of Warsaw, Banacha 2, 02-097, Warsaw, Poland} \email[J. Jelisiejew]{[email protected]} \address{Department of Mathematics, Texas A\&M University, College Station, TX 77843-3368, USA} \email[J.M. Landsberg]{[email protected]} \email[A. Pal]{[email protected]} \title[Concise tensors of minimal border rank]{Concise tensors of minimal border rank} \thanks{Landsberg supported by NSF grants AF-1814254 and AF-2203618. Jelisiejew supported by National Science Centre grant 2018/31/B/ST1/02857.} \keywords{Tensor rank, border rank, secant variety, Segre variety, Quot scheme, spaces of commuting matrices, spaces of bounded rank, smoothable rank, wild tensor, 111-algebra} \subjclass[2010]{68Q15, 15A69, 14L35} \begin{abstract} We determine defining equations for the set of concise tensors of minimal border rank in $\BC^m\ot \BC^m\ot \BC^m$ when $m=5$ and the set of concise minimal border rank $1_*$-generic tensors when $m=5,6$. We solve the classical problem in algebraic complexity theory of classifying minimal border rank tensors in the special case $m=5$. Our proofs utilize two recent developments: the 111-equations defined by Buczy\'{n}ska-Buczy\'{n}ski and results of Jelisiejew-\v{S}ivic on the variety of commuting matrices. We introduce a new algebraic invariant of a concise tensor, its 111-algebra, and exploit it to give a strengthening of Friedland's normal form for $1$-degenerate tensors satisfying Strassen's equations. We use the 111-algebra to characterize wild minimal border rank tensors and classify them in $\BC^5\ot \BC^5\ot \BC^5$. \end{abstract} \maketitle \section{Introduction} This paper is motivated by algebraic complexity theory and the study of secant varieties in algebraic geometry. It takes first steps towards overcoming complexity lower bound barriers first identified in \cite{MR3761737,MR3611482}. It also provides new ``minimal cost'' tensors for Strassen's laser method to upper bound the exponent of matrix multiplication that are not known to be subject to the barriers identified in \cite{MR3388238} and later refined in numerous works, in particular \cite{blser_et_al:LIPIcs:2020:12686} which shows there are barriers for minimal border rank {\it binding} tensors (defined below), as our new tensors are not binding. Let $T\in \BC^m\ot \BC^m\ot \BC^m=A\ot B\ot C$ be a tensor. One says $T$ has {\it rank one} if $T=a\ot b\ot c$ for some nonzero $a\in A$, $b\in B$, $c\in C$, and the {\it rank} of $T$, denoted $\bold R(T)$, is the smallest $r$ such that $T$ may be written as a sum of $r$ rank one tensors. 
The {\it border rank} of $T$, denoted $\ur(T)$, is the smallest $r$ such that $T$ may be written as a limit of a sum of $r$ rank one tensors. In geometric language, the border rank is smallest $r$ such that $T$ belongs to the $r$-th secant variety of the Segre variety, $\s_r(Seg(\pp{m-1}\times \pp{m-1}\times\pp{m-1}))\subseteq \BP (\BC^m\ot \BC^m\ot \BC^m)$. Informally, a tensor $T$ is {\it concise} if it cannot be expressed as a tensor in a smaller ambient space. (See \S\ref{results} for the precise definition.) A concise tensor $T\in \BC^m\ot \BC^m\ot \BC^m $ must have border rank at least $m$, and if the border rank equals $m$, one says that $T$ has {\it minimal border rank}. As stated in \cite{BCS}, tensors of minimal border rank are important for algebraic complexity theory as they are ``an important building stone in the construction of fast matrix multiplication algorithms''. More precisely, tensors of minimal border rank have produced the best upper bound on the exponent of matrix multiplication \cite{MR91i:68058,stothers,williams,LeGall:2014:PTF:2608628.2608664,MR4262465} via Strassen's laser method \cite{MR882307}. Their investigation also has a long history in classical algebraic geometry as the study of secant varieties of Segre varieties. Problem 15.2 of \cite{BCS} asks to classify concise tensors of minimal border rank. This is now understood to be an extremely difficult question. The difficulty manifests itself in two substantially different ways: \begin{itemize} \item {\it Lack of structure.} Previous to this paper, an important class of tensors ({\it $1$-degenerate}, see \S\ref{results}) had no or few known structural properties. In other words, little is known about the geometry of singular loci of secant varieties. \item {\it Complicated geometry.} Under various genericity hypotheses that enable one to avoid the previous difficulty, the classification problem reduces to hard problems in algebraic geometry: for example the classification of minimal border rank {\it binding} tensors (see~\S\ref{results}) is equivalent to classifying smoothable zero-dimensional schemes in affine space~\cite[\S 5.6.2]{MR3729273}, a longstanding and generally viewed as impossible problem in algebraic geometry, which is however solved for $m\leq 6$~\cite{MR576606, MR2459993}. \end{itemize} The main contributions of this paper are as follows: (i) we give equations for the set of concise minimal border rank tensors for $m\leq 5$ and classify them, (ii) we discuss and consolidate the theory of minimal border rank $1_*$-generic tensors, extending their characterization in terms of equations to $m\leq 6$, and (iii) we introduce a new structure associated to a tensor, its {\it 111-algebra}, and investigate new invariants of minimal border rank tensors coming from the 111-algebra. Our contributions allow one to streamline proofs of earlier results. This results from the power of the 111-equations, and the utilization of the ADHM correspondence discussed below. While the second leads to much shorter proofs and enables one to avoid using the classification results of \cite{MR2118458, MR3682743}, there is a price to be paid as the language and machinery of modules and the Quot scheme need to be introduced. This language will be essential in future work, as it provides the only proposed path to overcome the lower bound barriers of \cite{MR3761737,MR3611482}, namely {\it deformation theory}. We emphasize that this paper is the first direct use of deformation theory in the study of tensors. 
Existing results from deformation theory were previously used in \cite{MR3578455}. Contribution (iii) addresses the \emph{lack of structure} and motivates many new open questions, see~\S\ref{sec:questions}. \subsection{Results on tensors of minimal border rank}\label{results} Given $T\in A\ot B\ot C$, we may consider it as a linear map $T_C: C^*\ra A\ot B$. We let $T(C^*)\subseteq A\ot B$ denote its image, and similarly for permuted statements. A tensor $T$ is {\it $A$-concise} if the map $T_A $ is injective, i.e., if it requires all basis vectors in $A$ to write down $T$ in any basis, and $T$ is {\it concise} if it is $A$, $B$, and $C$ concise. A tensor $T\in \BC^\aaa\ot \BC^m\ot \BC^m$ is {\it $1_A$-generic} if $T(A^*)\subseteq B\ot C$ contains an element of rank $m$ and when $\aaa=m$, $T$ is {\it $1$-generic} if it is $1_A$, $1_B$, and $1_C$ generic. Define a tensor $T\in \BC^m\ot \BC^m\ot \BC^m$ to be {\it $1_*$-generic} if it is at least one of $1_A$, $1_B$, or $1_C$-generic, and {\it binding} if it is at least two of $1_A$, $1_B$, or $1_C$-generic. We say $T$ is {\it $1$-degenerate} if it is not $1_*$-generic. Note that if $T$ is $1_A$ generic, it is both $B$ and $C$ concise. In particular, binding tensors are concise. Two classical sets of equations on tensors that vanish on concise tensors of minimal border rank are Strassen's equations and the End-closed equations. These are discussed in \S\ref{strandend}. These equations are sufficient for $m\leq 4$, \cite[Prop. 22]{GSS}, \cite{Strassen505, MR2996364}. In \cite[Thm~1.3]{MR4332674} the following polynomials for minimal border rank were introduced: Let $T\in A\ot B\ot C=\BC^m\ot \BC^m\ot \BC^m$. Consider the map \be\label{111map} (T(A^*)\ot A)\op (T(B^*)\ot B) \op (T(C^*)\ot C)\ra A\ot B\ot C \oplus A\ot B\ot C \ene that sends $(T_1, T_2,T_3)$ to $(T_1 - T_2, T_2 - T_3)$, where the $A$, $B$, $C$ factors of tensors are understood to be in the correct positions, for example $T(A^*)\ot A$ is more precisely written as $A\ot T(A^*)$. If $T$ has border rank at most $m$, then the rank of the above map is at most $3m^2-m$. The resulting equations are called the {\it 111-equations}. Consider the space \be\label{111sp} (T(A^*)\ot A)\cap (T(B^*)\ot B) \cap (T(C^*)\ot C). \ene We call this space the \emph{triple intersection} or the \emph{111-space}. We say that $T$ is \emph{111-abundant} if the inequality \begin{equation}\label{eq:111} {(111\mathrm{-abundance})}\ \ \tdim\big((T(A^*)\ot A)\cap (T(B^*)\ot B) \cap (T(C^*)\ot C)\big)\geq m \end{equation}\stepcounter{equation} holds. If equality holds, we say $T$ is \emph{111-sharp}. When $T$ is concise, 111-abundance is equivalent to requiring that the equations of \cite[Thm 1.3]{MR4332674} are satisfied, i.e., the map \eqref{111map} has rank at most $3m^2-m$. \begin{example}\label{Wstate111} For $T=a_1\ot b_1\ot c_2+ a_1\ot b_2\ot c_1+ a_2\ot b_1\ot c_1\in \BC^2\ot \BC^2\ot \BC^2$, a tangent vector to the Segre variety, also called the $W$-state in the quantum literature, the triple intersection is $\langle T, a_1\ot b_1\ot c_1\rangle$. \end{example} We show that for concise tensors, the 111-equations imply both Strassen's equations and the End-closed equations: \begin{proposition}\label{111iStr+End} Let $T\in \BC^m\ot \BC^m\ot \BC^m$ be concise. If $T$ satisfies the 111-equations then it also satisfies Strassen's equations and the End-closed equations. 
If $T$ is $1_A$ generic, then it satisfies the 111-equations if and only if it satisfies the $A$-Strassen equations and the $A$-End-closed equations. \end{proposition} The first assertion is proved in \S\ref{111impliessectb}. The second assertion is Proposition \ref{1Ageneric111}. In \cite{MR2554725}, and more explicitly in \cite{MR3376667}, equations generalizing Strassen's equations for minimal border rank, called {\it $p=1$ Koszul flattenings} were introduced. (At the time it was not clear they were a generalization, see \cite{GO60survey} for a discussion.). The $p=1$ Koszul flattenings of type 210 are equations that are the size $ m(m-1)+1 $ minors of the map $T_A^{\ww 1}: A\ot B^*\ra \La 2 A\ot C$ given by $a\ot \b\mapsto \sum T^{ijk}\b(b_j) a\ww a_i\ot c_k$. Type 201, 120, etc.~are defined by permuting $A$, $B$ and $C$. Together they are called $p=1$ Koszul flattenings. These equations reappear in border apolarity as the $210$-equations, see \cite{CHLapolar}. \begin{proposition}\label{kyfv111} The $p=1$ Koszul flattenings for minimal border rank and the $111$-equations are independent, in the sense that neither implies the other, even for concise tensors in $\BC^m\ot \BC^m\ot \BC^m$. \end{proposition} Proposition \ref{kyfv111} follows from Example~\ref{ex:111necessary} where the 111-equations are nonzero and the $p=1$ Koszul flattenings are zero and Example~\ref{ex:failureFor7x7} where the reverse situation holds. We extend the characterization of minimal border rank tensors under the hypothesis of $1_*$-genericity to dimension $ m=6$, giving two different characterizations: \begin{theorem}\label{1stargprim} Let $m\leq 6$ and consider the set of tensors in $\BC^m\ot \BC^m\ot \BC^m$ which are $1_*$-generic and concise. The following subsets coincide \begin{enumerate} \item\label{it:1stargprimOne} the zero set of Strassen's equations and the End-closed equations, \item\label{it:1stargprimTwo} 111-abundant tensors, \item\label{it:1stargprimThree} 111-sharp tensors, \item\label{it:1stargprimFour} minimal border rank tensors. \end{enumerate} More precisely, in~\ref{it:1stargprimOne}, if the tensor is $1_A$-generic, only the $A$-Strassen and $A$-End-closed conditions are required. \end{theorem} The equivalence of \ref{it:1stargprimOne},~\ref{it:1stargprimTwo},~\ref{it:1stargprimThree} in Theorem \ref{1stargprim} is proved by Proposition \ref{1Ageneric111}. The equivalence of~\ref{it:1stargprimOne} and~\ref{it:1stargprimFour} is proved in \S\ref{quotreview}. For $1_A$-generic tensors, the $p=1$ Koszul flattenings of type 210 or 201 are equivalent to the $A$-Strassen equations, hence they are implied by the 111-equations in this case. However, the other types are not implied, see Example~\ref{ex:failureFor7x7}. The result fails for $m\geq 7$ by \cite[Prop.~5.3]{MR3682743}, see Example~\ref{ex:failureFor7x7}. This is due to the existence of additional components in the {\it Quot scheme}, which we briefly discuss here. The proof of Theorem \ref{1stargprim} introduces new algebraic tools by reducing the study of $1_A$-generic tensors satisfying the $A$-Strassen equations to {\it deformation theory} in the Quot scheme (a generalization of the Hilbert scheme, see~\cite{jelisiejew2021components}) in two steps. First one reduces to the study of commuting matrices, which implicitly appeared already in \cite{Strassen505}, and was later spelled out in in~\cite{MR3682743}, see~\S\ref{1genreview}. Then one uses the ADHM construction as in \cite{jelisiejew2021components}. 
From this perspective, the tensors satisfying \ref{it:1stargprimOne}-\ref{it:1stargprimThree} correspond to points of the Quot scheme, while tensors satisfying~\ref{it:1stargprimFour} correspond to points in the {\it principal component} of the Quot scheme, see \S\ref{prelimrems} for explanations; the heart of the theorem is that when $m\leq 6$ there is only the principal component. We expect deformation theory to play an important role in future work on tensors. As discussed in \cite{CHLapolar}, at this time deformation theory is the {\it only} proposed path to overcoming the lower bound barriers of \cite{MR3761737,MR3611482}. As another byproduct of this structure, we obtain the following proposition: \begin{proposition}\label{Gorgood} A $1$-generic tensor in $\BC^m\ot \BC^m\ot \BC^m$ with $m\leq 13$ satisfying the $A$-Strassen equations has minimal border rank. A $1_A$ and $1_B$-generic tensor in $\BC^m\ot \BC^m\ot \BC^m$ with $m\leq 7$ satisfying the $A$-Strassen equations has minimal border rank.\end{proposition} Proposition~\ref{Gorgood} is sharp: the first assertion does not hold for higher $m$ by~\cite[Lem.~6.21]{MR1735271} and the second by~\cite{MR2579394}. Previously it was known (although not explicitly stated in the literature) that the $A$-Strassen equations combined with the $A$-End-closed conditions imply minimal border rank for $1$-generic tensors when $m\leq 13$ and binding tensors when $m\leq 7$. This can be extracted from the discussion in \cite[\S 5.6]{MR3729273}. While Strassen's equations and the End-closed equations are nearly useless for $1$-degenerate tensors, this does not occur for the 111-equations, as the following result illustrates: \begin{theorem}\label{concise5} When $m\leq 5$, the set of concise minimal border rank tensors in $\BC^m\ot \BC^m\ot \BC^m$ is the zero set of the $111$-equations. \end{theorem} We emphasize that no other equations, such as Strassen's equations, are necessary. Moreover Strassen's equations, or even their generalization to the $p=1$ Koszul flattenings, and the End-closed equations are not enough to characterize concise minimal border rank tensors in $\BC^5\ot \BC^5\ot \BC^5$, see Example~\ref{ex:111necessary} and \S\ref{111vclass}. By Theorem \ref{1stargprim}, to prove Theorem \ref{concise5} it remains to prove the $1$-degenerate case, which is done in \S\ref{m5sect}. The key difficulty here is the above-mentioned lack of structure. We overcome this problem by providing a new normal form, which follows from the 111-equations, that strengthens Friedland's normal form for corank one $1_A$-degenerate tensors satisfying Strassen's equations \cite[Thm. 3.1]{MR2996364}, see Proposition~\ref{1Aonedegenerate111}. It is possible that Theorem~\ref{concise5} also holds for $m=6$; this will be subject to future work. It is false for $m = 7$, as already Theorem~\ref{1stargprim} fails when $m= 7$. The $1_*$-generic tensors of minimal border rank in $\BC^5\ot\BC^5\ot \BC^5$ are essentially classified in \cite{MR3682743}, following the classification of abelian linear spaces in \cite{MR2118458}. We write ``essentially'', as the list has redundancies and it remains to determine the precise list. Using our normal form, we complete (modulo the redundancies in the $1_*$-generic case) the classification of concise minimal border rank tensors: \begin{theorem}\label{5isom} Up to the action of $\GL_5(\BC)^{\times 3} \rtimes \FS_3$, there are exactly five concise $1$-degenerate, minimal border rank tensors in $\BC^5\ot\BC^5\ot \BC^5$. 
Represented as spaces of matrices, the tensors may be presented as: \begin{align*} T_{\cO_{58}}&= \begin{pmatrix} x_1& &x_2 &x_3 & x_5\\ x_5 & x_1&x_4 &-x_2 & \\ & &x_1 & & \\ & &-x_5 & x_1& \\ & & &x_5 & \end{pmatrix}, \ \ T_{\cO_{57}} = \begin{pmatrix} x_1& &x_2 &x_3 & x_5\\ & x_1&x_4 &-x_2 & \\ & &x_1 & & \\ & & & x_1& \\ & & &x_5 & \end{pmatrix}, \\ T_{\cO_{56}} &= \begin{pmatrix} x_1& &x_2 &x_3 & x_5\\ & x_1 +x_5 & &x_4 & \\ & &x_1 & & \\ & & & x_1& \\ & & &x_5 & \end{pmatrix}, \ \ T_{\cO_{55}}= \begin{pmatrix} x_1& &x_2 &x_3 & x_5\\ & x_1& x_5 &x_4 & \\ & &x_1 & & \\ & & & x_1& \\ & & &x_5 & \end{pmatrix}, \ \ T_{\cO_{54}} = \begin{pmatrix} x_1& &x_2 &x_3 & x_5\\ & x_1& &x_4 & \\ & &x_1 & & \\ & & & x_1& \\ & & &x_5 & \end{pmatrix}. \end{align*} In tensor notation: set $$T_{\mathrm{M1}} = a_1\ot(b_1\ot c_1+b_2\ot c_2+b_3\ot c_3+b_4\ot c_4)+a_2\ot b_3\ot c_1 + a_3\ot b_4\ot c_1+a_4\ot b_4\ot c_2+a_5\ot(b_5\ot c_1+ b_4\ot c_5)$$ and $$T_{\mathrm{M2}} = a_1\ot(b_1\ot c_1+b_2\ot c_2+b_3\ot c_3+b_4\ot c_4)+a_2\ot( b_3\ot c_1-b_4\ot c_2) + a_3\ot b_4\ot c_1+a_4\ot b_3\ot c_2+a_5\ot(b_5\ot c_1+b_4\ot c_5). $$ Then \begin{align*} T_{\cO_{58}}= &T_{\mathrm{M2}} + a_5 \ot (b_1 \ot c_2 - b_3 \ot c_4) \\ T_{\cO_{57}}=&T_{\mathrm{M2}} \\ T_{\cO_{56}}= &T_{\mathrm{M1}} + a_5 \ot b_2 \ot c_2 \\ T_{\cO_{55}}= &T_{\mathrm{M1}} + a_5 \ot b_3 \ot c_2 \\ T_{\cO_{54}}= &T_{\mathrm{M1}}. \end{align*} Moreover, each subsequent tensor lies in the closure of the orbit of previous: $T_{\cO_{58}}\unrhd T_{\cO_{57}}\unrhd T_{\cO_{56}}\unrhd T_{\cO_{55}}\unrhd T_{\cO_{54}}$. \end{theorem} The subscript in the name of each tensor is the dimension of its $\GL(A)\times \GL(B) \times \GL(C)$ orbit in projective space $\mathbb{P}(A\ot B\ot C)$. Recall that $\tdim \s_5(Seg(\pp 4\times\pp 4\times \pp 4))=64$ and that it is the orbit closure of the so-called unit tensor $[\sum_{j=1}^5a_j\ot b_j\ot c_j]$. Among these tensors, $T_{\cO_{58}}$ is (after a change of basis) the unique symmetric tensor on the list (see Example~\ref{ex:symmetricTensor} for its symmetric version). The subgroup of $\GL(A)\times \GL(B) \times \GL(C)$ preserving $T_{\cO_{58}}$ contains a copy of $\GL_2\BC$ while all other stabilizers are solvable. \medskip The {\it smoothable rank} of a tensor $T\in A\ot B\ot C$ is the minimal degree of a smoothable zero dimensional scheme $\Spec(R)\ \subseteq \mathbb{P}A\times \mathbb{P}B\times \mathbb{P}C $ which satisfies the condition $T\in \langle \Spec(R) \rangle$. See, e.g., \cite{MR1481486, MR3724212} for basic definitions regarding zero dimensional schemes. The smoothable rank of a polynomial with respect to the Veronese variety was introduced in \cite{MR2842085} and generalized to points with respect to arbitrary projective varieties in \cite{MR3333949}. It arises because the span of the (scheme theoretic) limit of points may be smaller than the limit of the spans. The smoothable rank lies between rank and border rank. Tensors (or polynomials) whose smoothable rank is larger than their border rank are called {\it wild} in \cite{MR3333949}. The first example of a wild tensor occurs in $\BC^3\ot \BC^3\ot \BC^3$, see \cite[\S 2.3]{MR3333949} and it has minimal border rank. We characterize wild minimal border rank tensors: \begin{theorem}\label{wildthm} The concise minimal border rank tensors that are wild are precisely the concise minimal border rank $1_*$-degenerate tensors. \end{theorem} Thus Theorem \ref{5isom} classifies concise wild minimal border rank tensors in $\BC^5\ot\BC^5\ot\BC^5$. 
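As an aside, the dimension of the triple intersection \eqref{111sp} is easy to compute in examples. The following elementary NumPy sketch (included for illustration only; it plays no role in the proofs) recovers the two-dimensional 111-space of the $W$-state from Example~\ref{Wstate111}. It assumes the slices of $T$ in each direction are linearly independent, as holds for concise tensors, so that the dimension of the triple intersection equals the kernel dimension of the block matrix built below.
\begin{verbatim}
import numpy as np

def triple_intersection_dim(T):
    # T is an m x m x m array.  Build spanning vectors of A(x)T(A*),
    # B(x)T(B*), C(x)T(C*) inside C^{m^3}; under the independence
    # assumption the triple intersection dimension is the kernel
    # dimension of the block matrix [B1 -B2 0; 0 B2 -B3].
    m = T.shape[0]
    def span(axis):
        cols = []
        for slot in range(m):        # basis vector placed in this factor
            for i in range(m):       # slice of T in this factor
                X = np.zeros((m, m, m))
                idx = [slice(None)] * 3
                idx[axis] = slot
                X[tuple(idx)] = np.take(T, i, axis=axis)
                cols.append(X.reshape(-1))
        return np.array(cols).T      # an m^3 x m^2 matrix
    B1, B2, B3 = span(0), span(1), span(2)
    Z = np.zeros_like(B1)
    M = np.block([[B1, -B2, Z], [Z, B2, -B3]])
    return M.shape[1] - np.linalg.matrix_rank(M)

# W-state a1 b1 c2 + a1 b2 c1 + a2 b1 c1 (indices shifted to start at 0)
W = np.zeros((2, 2, 2))
W[0, 0, 1] = W[0, 1, 0] = W[1, 0, 0] = 1
print(triple_intersection_dim(W))    # expected output: 2
\end{verbatim}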
The proof of Theorem \ref{wildthm} utilizes a new algebraic structure arising from the triple intersection that we discuss next. \subsection{The 111-algebra and its uses}\label{111intro} We emphasize that 111-abundance, as defined by~\eqref{eq:111}, is a necessary condition for border rank $m$ only when $T$ is concise. The condition can be defined for arbitrary tensors and we sometimes allow that. \begin{remark}\label{rem:111semicontinuity} The condition~\eqref{eq:111} is not closed: for example it does not hold for the zero tensor. It is however closed in the set of concise tensors as then $T(A^*)$ varies in the Grassmannian, which is compact. \end{remark} For $\Amat\in \tend(A) = A^*\ot A$, let $\Amat\acta T$ denote the corresponding element of $T(A^*)\ot A$. Explicitly, if $\Amat = \alpha\ot a$, then $\Amat \acta T := T(\alpha)\ot a$ and the map $(-)\acta T\colon \tend(A)\to A\ot B\ot C$ is extended linearly. Put differently, $\Amat \acta T = (\Amat \ot \Id_B \ot \Id_C)(T)$. Define the analogous actions of $\tend(B)$ and $\tend(C)$. \begin{definition} Let $T$ be a concise tensor. We say that a triple $(\Amat, \Bmat, \Cmat)\in \tend(A) \times\tend(B)\times \tend(C)$ \emph{is compatible with} $T$ if $\Amat\acta T = \Bmat \actb T = \Cmat \actc T$. The \emph{111-algebra} of $T$ is the set of triples compatible with $T$. We denote this set by $\alg{T}$. \end{definition} The name is justified by the following theorem: \begin{theorem}\label{ref:111algebra:thm} The 111-algebra of a concise tensor $T\in A\ot B\ot C$ is a commutative unital subalgebra of $\tend(A)\times \tend(B) \times \tend(C)$ and its projection to any factor is injective. \end{theorem} Theorem \ref{ref:111algebra:thm} is proved in \S\ref{111algpfsect}. \begin{example} Let $T$ be as in Example \ref{Wstate111}. Then \[ \alg{T}=\langle (\Id,\Id,\Id), (a_1\ot\a_2,b_1\ot \b_2,c_1\ot \g_2)\rangle. \] \end{example} In this language, the triple intersection is $\alg{T}\cdot T$. Once we have an algebra, we may study its modules. The spaces $A,B,C$ are all $\alg{T}$-modules: the algebra $\alg{T}$ acts on them as it projects to $\tend(A)$, $\tend(B)$, and $\tend(C)$. We denote these modules by $\ul{A}$, $\ul{B}$, $\ul{C}$ respectively. Using the 111-algebra, we obtain the following algebraic characterization of \emph{all} 111-abundant tensors as follows: a tensor $T$ is 111-abundant if it comes from a bilinear map $N_1\times N_2\to N_3$ between $m$-dimensional $\cA$-modules, where $\dim \cA \geq m$, $\cA$ is a unital commutative associative algebra and $N_1$, $N_2$, $N_3$ are $\cA$-modules, see Theorem~\ref{ref:111abundantChar:cor}. This enables an algebraic investigation of such tensors and shows how they generalize abelian tensors from~\cite{MR3682743}, see Example~\ref{ex:1AgenericAndModulesTwo}. We emphasize that there are no genericity hypotheses here beyond conciseness, in contrast with the $1_* $-generic case. In particular the characterization applies to \emph{all} concise minimal border rank tensors. In summary, for a concise tensor $T$ we have defined new algebraic invariants: the algebra $\alg{T}$ and its modules $\ul A$, $\ul B$, $\ul C$. There are four consecutive obstructions for a concise tensor to be of minimal border rank: \begin{enumerate} \item\label{it:abundance} the tensor must be 111-abundant. For simplicity of presentation, for the rest of this list we assume that it is 111-sharp (compare~\S\ref{question:strictlyAbundant}). 
We also fix a surjection from a polynomial ring $S=\BC[y_1\hd y_{m-1}]$ onto $\alg{T}$ as follows: fix a basis of $\alg{T}$ with the first basis element equal to $(\Id,\Id,\Id)$ and send $1\in S$ to this element, and the variables of $S$ to the remaining $m-1$ basis elements. In particular $\ul{A}$, $\ul{B}$, $\ul{C}$ become $S$-modules (the conditions below do not depend on the choice of surjection). \item\label{it:cactus} the algebra $\alg{T}$ must be smoothable (Lemma \ref{ref:triplespanalgebra}), \item\label{it:modulesPrincipal} the $S$-modules $\ul A$, $\ul B$, $\ul C$ must lie in the principal component of the Quot scheme, so there exist a sequence of modules $\ul A_{\ep}$ limiting to $ \ul A$ with general $\ul A_{\ep}$ semisimple, and similarly for $\ul B$, $\ul C$ (Lemma \ref{ref:triplespanmodules}), \item\label{it:mapLimit} the surjective module homomorphism $\ul A\ot_{\alg{T}} \ul B\to \ul C$ associated to $T$ as in Theorem~\ref{ref:111abundantChar:cor} must be a limit of module homomorphisms $\ul A_\ep\ot_{\cA_\ep} \ul B_\ep \to \ul C_\ep$ for a choice of smooth algebras $\cA_\ep$ and semisimple modules $\ul A_{\ep}$, $\ul B_{\ep}$, $\ul C_{\ep}$. \end{enumerate} Condition~\ref{it:modulesPrincipal} is shown to be nontrivial in Example~\ref{ex:failureFor7x7}. In the case of $1$-generic tensors, by Theorem \ref{wildthm} above, they have minimal border rank if and only if they have minimal smoothable rank, that is, they are in the span of some zero-dimensional smoothable scheme $\Spec(R)$. Proposition~\ref{ref:cactusRank:prop} remarkably shows that one has an algebra isomorphism $\alg{T}\isom R$. This shows that to determine if a given $1$-generic tensor has minimal smoothable rank it is enough to determine smoothability of its 111-algebra, there is no choice for $R$. This is in contrast with the case of higher smoothable rank, where the choice of $R$ presents the main difficulty. \begin{remark} While throughout we work over $\BC$, our constructions (except for explicit computations regarding classification of tensors and their symmetries) do not use anything about the base field, even the characteristic zero assumption. The only possible nontrivial applications of the complex numbers are in the cited sources, but we expect that our main results, except for Theorem~\ref{5isom}, are valid over most fields. \end{remark} \subsection{Previous work on tensors of minimal border rank in $\BC^m\ot \BC^m\ot \BC^m$}\ When $m=2$ it is classical that all tensors in $\BC^2\ot \BC^2\ot \BC^2$ have border rank at most two. For $m=3$ generators of the ideal of $\s_3(Seg(\pp 2\times\pp 2\times \pp 2))$ are given in \cite{LWsecseg}. For $m=4$ set theoretic equations for $\s_4(Seg(\pp 3\times\pp 3\times \pp 3))$ are given in \cite{MR2996364} and lower degree set-theoretic equations are given in \cite{MR2891138,MR2836258} where in the second reference they also give numerical evidence that these equations generate the ideal. It is still an open problem to prove the known equations generate the ideal. (This is the ``salmon prize problem'' posed by E. Allman in 2007. At the time, not even set-theoretic equations were known). Regarding the problem of classifying concise tensors of minimal border rank: For $m=3$ a complete classification of all tensors of border rank three is given in \cite{MR3239293}. For $m=4$, a classification of all $1_*$-generic concise tensors of border rank four in $\BC^4\ot \BC^4\ot \BC^4$ is given in \cite{MR3682743}. 
When $m=5$, a list of all abelian subspaces of $\tend(\BC^5)$ up to isomorphism is given in \cite{MR2118458}. The equivalence of~\ref{it:1stargprimOne} and~\ref{it:1stargprimFour} in the $m=5$ case of Theorem \ref{1stargprim} follows from the results of \cite{MR3682743}, but is not stated there. The argument proceeds by first using the classification in \cite{MR2202260}, \cite{MR2118458} of spaces of commuting matrices in $\tend(\BC^5)$. There are $15$ isolated examples (up to isomorphism), and examples that potentially depend on parameters. (We write ``potentially'' as further normalization is possible.) Then each case is tested and the tensors passing the End-closed condition are proven to be of minimal border rank using explicit border rank five expressions. We give a new proof of this result that is significantly shorter, and self-contained. Instead of listing all possible tensors, we analyze the possible Hilbert functions of the associated modules in the Quot scheme living in the unique non-principal component. \subsection{Open questions and future directions}\label{sec:questions} \subsubsection{111-abundant, not 111-sharp tensors}\label{question:strictlyAbundant} We do not know any example of a concise tensor $T$ which is 111-abundant and is not 111-sharp, that is, for which the inequality in~\eqref{eq:111} is strict. By Proposition \ref{1Ageneric111} such a tensor would have to be $1$-degenerate, with $T(A^*), T(B^*),T(C^*)$ of bounded (matrix) rank at most $m-2$, and by Theorems \ref{5isom} and \ref{concise5} it would have to occur in dimension greater than $5$. Does there exist such an example?\footnote{After this paper was submitted, A. Conca pointed out an explicit example of a 111-abundant, not 111-sharp tensor when $m=9$. We do not know if such exist when $m=6,7,8$. The example is a generalization of Example~\ref{ex:symmetricTensor}.} \subsubsection{111-abundant $1$-degenerate tensors} The 111-abundant tensors of bounded rank $m-1$ have remarkable properties. What properties do 111-abundant tensors with $T(A^*)$, $T(B^*)$, $T(C^*)$ of bounded rank less than $m-1$ have? \subsubsection{111-abundance v. classical equations}\label{111vclass} A remarkable feature of Theorem~\ref{concise5} is that 111-equations are enough: there is no need for more classical ones, like $p=1$ Koszul flattenings~\cite{MR3376667}. In fact, the $p=1$ Koszul flattenings, together with End-closed condition, are almost sufficient, but not quite: the $111$-equations are only needed to rule out one case, described in Example~\ref{ex:111necessary}. Other necessary closed conditions for minimal border rank are known, e.g., the higher Koszul flattenings of \cite{MR3376667}, the flag condition (see, e.g., \cite{MR3682743}), and the equations of \cite{LMsecb}. We plan to investigate the relations between these and the new conditions introduced in this paper. As mentioned above, the 111-equations in general do not imply the $p=1$ Koszul flattening equations, see Example~\ref{ex:failureFor7x7}. \subsubsection{111-abundance in the symmetric case} Given a concise symmetric tensor $T\in S^3 \BC^m \subseteq \BC^m\ot \BC^m\ot \BC^m$, one classically studies its apolar algebra $\cA = \BC[ x_1, \ldots ,x_m]/\tann(T)$, where $x_1\hd x_m$ are coordinates on the dual space $\BC^{m*}$ and $\tann(T)$ are the polynomials that give zero when contracted with $T$. 
This is a {\it Gorenstein} (see \S\ref{1gsubsect}) zero-dimensional graded algebra with Hilbert function $(1, m,m,1)$, and each such algebra comes from a symmetric tensor. A weaker version of Question~\ref{question:strictlyAbundant} is: does there exist such an algebra with $\tann(T)$ having at least $m$ minimal cubic generators? There are plenty of examples with $m-1$ cubic generators, for example $T=\sum_{i=1}^m x_i^3$ or the $1$-degenerate examples from the series~\cite[\S7]{MR4163534}. \subsubsection{The locus of concise, 111-sharp tensors} There is a natural functor associated to this locus, so we have the machinery of deformation theory at our disposal; in particular, it is a linear algebra calculation to determine the tangent space to this locus at a given point and, in special cases, even its smoothness. This path will be pursued further, and it gives additional motivation for Question~\ref{question:strictlyAbundant}. \subsubsection{111-algebra in the symmetric case} The 111-algebra is an entirely unexpected invariant in the symmetric case as well. How is it computed and how can it be used? \subsubsection{The Segre-Veronese variety} While in this paper we focused on $\BC^m\ot \BC^m\ot \BC^m$, the 111-algebra can be defined for any tensor in $V_1\ot V_2 \ot V_3 \ot \ldots \ot V_q$ and the argument from~\S\ref{111algpfsect} generalizes to show that it is still an algebra whenever $q\geq 3$. It seems worthwhile to investigate it in greater generality. \subsubsection{Strassen's laser method} An important motivation for this project was to find new tensors for Strassen's laser method for bounding the exponent of matrix multiplication. This method has barriers to further progress when using the Coppersmith-Winograd tensors that have so far given the best upper bounds on the exponent of matrix multiplication \cite{MR3388238}. Are any of the new tensors we found in $\BC^5\ot \BC^5\ot \BC^5$ better for the laser method than the big Coppersmith-Winograd tensor $CW_3$? Are any $1$-degenerate minimal border rank tensors useful for the laser method? (At this writing there are no known laser method barriers for $1$-degenerate tensors.) \subsection{Overview} In \S\ref{1genreview} we review properties of binding and, more generally, $1_A$-generic tensors that satisfy the $A$-Strassen equations. In particular we establish a dictionary between properties of modules and such tensors. In \S\ref{111impliessect} we show that $1_A$-generic 111-abundant tensors are exactly the $1_A$-generic tensors that satisfy the $A$-Strassen equations and are $A$-End-closed. We establish a normal form for 111-abundant tensors with $T(A^*)$ corank one that generalizes Friedland's normal form for tensors with $T(A^*)$ corank one that satisfy the $A$-Strassen equations. In \S\ref{111algpfsect} we prove Theorem \ref{ref:111algebra:thm} and illustrate it with several examples. In \S\ref{newobssect} we discuss 111-algebras and their modules, and describe new obstructions for a tensor to be of minimal border rank coming from its 111-algebra. In \S\ref{noconcise} we show that certain classes of tensors are not concise, in order to eliminate them from consideration in this paper. In \S\ref{m5sect} we prove Theorems \ref{concise5} and \ref{5isom}. In \S\ref{quotreview} we prove Theorem \ref{1stargprim} using properties of modules, their Hilbert functions and deformations. In \S\ref{minsmoothsect} we prove Theorem \ref{wildthm}.
\subsection{Definitions/Notation}\label{defs} Throughout this paper we adopt the index ranges \begin{align*} &1\leq i,j,k\leq \aaa,\\ &2\leq s,t,u\leq \aaa-1, \end{align*} and $A,B,C$ denote complex vector spaces respectively of dimension $\aaa, m,m$. Except for~\S\ref{1genreview} we will also have $\aaa =m$. The general linear group of changes of bases in $A$ is denoted $\GL(A)$, the subgroup of elements with determinant one by $\SL(A)$, and their Lie algebras by $\fgl(A)$ and $\fsl(A)$. The dual space to $A$ is denoted $A^*$. For $Z\subseteq A$, $Z^\perp:=\{\a\in A^*\mid \a(x)=0\ \forall x\in Z\}$ is its annihilator, and $\langle Z\rangle\subseteq A$ denotes the span of $Z$. Projective space is $\BP A= (A\backslash \{0\})/\BC^*$. When $A$ is equipped with the additional structure of being a module over some ring, we denote it $\ul A$ to emphasize its module structure. Unital commutative algebras are usually denoted $\cA$ and polynomial algebras are denoted $S$. Vector space homomorphisms (including endomorphisms) between $m$-dimensional vector spaces will be denoted $K_i,X_i,X,Y,Z$, and we use the same letters to denote the corresponding matrices when bases have been chosen. Vector space homomorphisms (including endomorphisms) between $(m-1)$-dimensional vector spaces, and the corresponding matrices, will be denoted $\bx_i,\by,\bz$. We often write $T(A^*)$ as a space of $m\times m$ matrices (i.e., we choose bases). When we do this, the columns index the $B^*$ basis and the rows the $C$ basis, so the matrices live in $\Hom(B^*, C)$. (This convention disagrees with~\cite{MR3682743} where the roles of $B$ and $C$ were reversed.) For $X\in \thom(A,B)$, the symbol $X^\bt$ denotes the induced element of $\thom(B^*,A^*)$, which in bases is just the transpose of the matrix of $X$. The \emph{$A$-Strassen equations} were defined in \cite{Strassen505}. The $B$ and $C$ Strassen equations are defined analogously. Together, we call them \emph{Strassen's equations}. Similarly, the \emph{$A$-End-closed equations} are implicitly defined in \cite{MR0132079}; we state them explicitly in~\eqref{bigenda1gen}. Together with their $B$ and $C$ counterparts they are the End-closed equations. We never work with these equations directly (except when proving Proposition~\ref{111iStr+End}); we only consider the conditions they impose on $1_*$-generic tensors. For a tensor $T\in \BC^m\otimes \BC^m\otimes \BC^m$, we say that $T(A^*)\subseteq B\ot C$ is of \emph{bounded (matrix) rank} $r$ if all matrices in $T(A^*)$ have rank at most $r$, and we drop reference to ``matrix'' when the meaning is clear. If rank $r$ is indeed attained, we also say that $T(A^*)$ is of \emph{corank} $m-r$. \subsection{Acknowledgements} We thank M. Micha{\l}ek for numerous useful discussions, in particular leading to Proposition~\ref{Gorgood}, M. Micha{\l}ek and A. Conner for help with writing down explicit border rank decompositions, and J. Buczy{\'n}ski for many suggestions to improve an earlier draft. Macaulay2 and its {\it VersalDeformation} package~\cite{MR2947667} were used in computations. We thank the anonymous referee for helpful comments. We are very grateful to Fulvio Gesmundo for pointing out a typo in the statement of Theorem~\ref{wildthm} in the previous version.
\section{Dictionaries for $1_*$-generic, binding, and $1$-generic tensors satisfying Strassen's equations for minimal border rank}\label{1genreview} \subsection{Strassen's equations and the End-closed equations for $1_*$-generic tensors}\label{strandend} A $1_*$-generic tensor satisfying Strassen's equations may be reinterpreted in terms of classical objects in matrix theory and then in commutative algebra, which allows one to apply existing results in these areas to their study. Fix a tensor $T\in A\ot B\ot C=\BC^\aaa\ot \BC^m\ot \BC^m$ which is $A$-concise and $1_A$-generic with $\alpha\in A^*$ such that $T(\alpha): B^*\to C $ has full rank. The $1_A$-genericity implies that $T$ is $B$ and $C$-concise. \def\Espace{\cE_{\alpha}(T)} Consider \[ \Espace := T(A^*)T(\a)\inv \subseteq \tend(C). \] This space is $T'(A^*)$ where $T'\in A\ot C^*\ot C$ is a tensor obtained from $T$ using the isomorphism $\Id_A\ot (T(\a)\inv)^{ \bt }\ot \Id_C$. It follows that $T$ is of rank $m$ if and only if the space $\Espace$ is simultaneously diagonalizable and that $T$ is of border rank $m$ if and only if $\Espace$ is a limit of spaces of simultaneously diagonalizable endomorphisms~\cite[Proposition~2.8]{MR3682743} also see~\cite{LMsecb}. Note that $\Id_C = T(\a)T(\a)\inv \in \Espace$. A necessary condition for a subspace $\tilde E\subseteq \tend(C)$ to be a limit of simultaneously diagonalizable spaces of endomorphisms is that the elements of $\tilde E$ pairwise commute. The $A$-Strassen equations \cite[(1.1)]{MR2996364} in the $1_A$-generic case are the translation of this condition to the language of tensors, see, e.g., \cite[\S2.1]{MR3682743}. For the rest of this section, we additionally assume that $T$ satisfies the $A$-Strassen equations, i.e., that $\cE_\a(T)$ is abelian. Another necessary condition on a space to be a limit of simultaneously diagonalizable spaces has been known since 1962 \cite{MR0132079}: the space must be closed under composition of endomorphisms. The corresponding equations on the tensor are the $A$-End-closed equations. \subsection{Reinterpretation as modules}\label{dictsectOne} In this subsection we introduce the language of modules and the ADHM correspondence. This extra structure will have several advantages: it provides more invariants for tensors, it enables us to apply theorems in the commutative algebra literature to the study of tensors, and perhaps most importantly, it will enable us to utilize deformation theory. Let $\tilde E\subseteq \tend(C)$ be a space of endomorphisms that contains $\Id_C$ and consists of pairwise commuting endomorphisms. Fix a decomposition $\tilde E = \langle\Id_C\rangle \oplus E$. A canonical such decomposition is obtained by requiring that the elements of $E$ are traceless. To eliminate ambiguity, we will use this decomposition, although in the proofs we never make use of the fact that $E\subseteq\fsl(C)$. Let $S = \Sym E$ be a polynomial ring in $\dim E = \aaa - 1$ variables. By the ADHM correspondence \cite{MR598562}, as utilized in~\cite[\S3.2]{jelisiejew2021components} we define the \emph{module associated to $E$} to be the $S$-module $\ul{C}$ which is the vector space $C$ with action of $S$ defined as follows: let $e_1\hd e_{\aaa-1}$ be a basis of $E$, write $S=\BC[y_1\hd y_{\aaa-1}]$, define $y_j(c):=e_j(c)$, and extend to an action of the polynomial ring. 
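In concrete examples, the space $\Espace$ and the two necessary conditions described above (pairwise commutativity and closure under composition) are straightforward to test numerically. The following minimal sketch is purely illustrative and not part of the formal development: it assumes the tensor is given by its slices $K_1\hd K_m\in \Hom(B^*,C)$ in chosen bases, and all function names are ours.

\begin{verbatim}
# Illustrative sketch (not from the formal development): given the slices
# K_1,...,K_m of a 1_A-generic tensor and an index a0 with K_{a0} invertible,
# form E_alpha(T) = T(A^*) T(alpha)^{-1} and test pairwise commutativity
# (the A-Strassen condition in this case) and closure under composition
# (the End-closed condition).
import numpy as np

def espace(slices, a0):
    """Return the matrices K_i K_{a0}^{-1} spanning E_alpha(T)."""
    Kinv = np.linalg.inv(slices[a0])
    return [K @ Kinv for K in slices]

def is_abelian(mats, tol=1e-9):
    return all(np.allclose(X @ Y, Y @ X, atol=tol)
               for X in mats for Y in mats)

def in_span(M, mats, tol=1e-7):
    """Check that the matrix M lies in the linear span of mats."""
    basis = np.stack([X.flatten() for X in mats])
    aug = np.vstack([basis, M.flatten()])
    return np.linalg.matrix_rank(aug, tol=tol) == np.linalg.matrix_rank(basis, tol=tol)

def is_end_closed(mats):
    return all(in_span(X @ Y, mats) for X in mats for Y in mats)

# Example: the slices of the structure tensor of C[t]/(t^3) are I, N, N^2,
# with N the nilpotent 3x3 Jordan block; both tests pass.
N = np.diag([1.0, 1.0], k=-1)
slices = [np.eye(3), N, N @ N]
E = espace(slices, 0)
print(is_abelian(E), is_end_closed(E))   # True True
\end{verbatim}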
It follows from~\cite[\S3.4]{jelisiejew2021components} that $\tilde E$ is a limit of simultaneously diagonalizable spaces if and only if $\ul{C}$ is a limit of \emph{semisimple modules}, which, by definition, are $S$-modules of the form $N_1\oplus N_2 \oplus \ldots \oplus N_{ m }$ where $\dim N_{ h } = 1$ for every $ h $. The limit is taken in the {\it Quot scheme}, see~\cite[\S3.2 and Appendix]{jelisiejew2021components} for an introduction, and~\cite[\S5]{MR2222646}, \cite[\S9]{MR1481486} for classical sources. The Quot scheme will not be used until \S\ref{twonew}. Now we give a more explicit description of the construction in the situation relevant for this paper. Let $A$, $B$, $C$ be $\BC$-vector spaces, with $\dim A = \aaa$, $\dim B = \dim C = m$, as above. Let $T\in A\ot B\ot C$ be a concise $1_A$-generic tensor that satisfies Strassen's equations (see~\S\ref{strandend}). To such a $T$ we associated the space $\Espace\subseteq \tend(C)$. The \emph{module associated to $T$} is the module $\ul{C}$ associated to the space $\tilde{E} := \Espace$ using the procedure above. The procedure involves a choice of $\alpha$ and a basis of $E$, so the module associated to $T$ is only defined up to isomorphism. \begin{example}\label{ex:modulesForMinRank} Consider a concise tensor $T\in \BC^m\ot \BC^m\ot \BC^m$ of minimal rank, say $T = \sum_{i=1}^m a_i\ot b_i\ot c_i$ with $\{ a_i\}$, $\{ b_i\}$, $\{ c_i\} $ bases of $A,B,C$ and $\{\a_i\}$ the dual basis of $A^*$ etc.. Set $\alpha = \sum_{i=1}^m \a_i$. Then $\Espace$ is the space of diagonal matrices, so $E = \langle E_{ii} - E_{11}\ |\ i=2,3, \ldots ,m \rangle$ where $E_{ij}=\g_i\ot c_j$. The module $\ul{C}$ decomposes as an $S$-module into $\bigoplus_{i=1}^m \BC c_i$ and thus is semisimple. Every semisimple module is a limit of such. \end{example} If a module $\ul{C}$ is associated to a space $\tilde{E}$, then the space $\tilde{E}$ may be recovered from $\ul{C}$ as the set of the linear endomorphisms corresponding to the actions of elements of $S_{\leq 1}$ on $\ul{C}$. If $\ul{C}$ is associated to a tensor $T$, then the tensor $T$ is recovered from $\ul{C}$ up to isomorphism as the tensor of the bilinear map $S_{\leq 1}\ot \ul C\to \ul C$ coming from the action on the module. \begin{remark} The restriction to $S_{\leq 1}$ may seem unnatural, but observe that if $\tilde E$ is additionally End-closed then for every $s\in S$ there exists an element $s'\in S_{\leq 1}$ such that the actions of $s$ and $s'$ on $\ul{C}$ coincide. \end{remark} Additional conditions on a tensor transform to natural conditions on the associated module. We explain two such additional conditions in the next two subsections. \subsection{Binding tensors and the Hilbert scheme} \label{dictsect} \begin{proposition}\label{ref:moduleVsAlgebra} Let $T\in \BC^m\ot \BC^m\ot \BC^m=A\ot B\ot C$ be concise, $1_A$-generic, and satisfy the $A$-Strassen equations. Let $\ul{C}$ be the $S$-module obtained from $T$ as above. The following conditions are equivalent \begin{enumerate} \item\label{it:One} the tensor $T$ is $1_B$-generic (so it is binding), \item\label{it:Two} there exists an element $c\in \ul C$ such that $S_{\leq 1}c = \ul C$, \item\label{it:Three} the $S$-module $\ul{C}$ is isomorphic to $S/I$ for some ideal $I$ and the space $\Espace$ is End-closed, \item\label{it:ThreePrim} the $S$-module $\ul{C}$ is isomorphic to $S/I$ for some ideal $I$, \item\label{it:Alg} the tensor $T$ is isomorphic to a multiplication tensor in a commutative unital rank $m$ algebra $ \cA $. 
\end{enumerate} \end{proposition} The algebra $\cA$ in \ref{it:Alg} will be obtained from the module $\ul C$ as described in the proof. The equivalence of~\ref{it:One} and~\ref{it:Alg} for minimal border rank tensors was first obtained by Bl\"aser and Lysikov \cite{MR3578455}. \begin{proof} Suppose~\ref{it:One} holds. Recall that $\Espace = T'(A^*)$ where $T'\in A\ot C^*\ot C$ is obtained from $T\in A\ot B\ot C$ by means of $(T(\alpha)\inv)^{ \bt } \colon B\to C^*$. Hence $T'$ is $1_{C^*}$-generic, so there exists an element $c\in (C^*)^* \simeq C$ such that the induced map $A^*\to C$ is bijective. But this map is exactly the multiplication map by $c$, $S_{\leq1}\to \ul C$, so~\ref{it:Two} follows. Let $\varphi\colon S\to \ul C$ be defined by $\varphi(s) = sc$ and let $I = \ker \varphi$. (Note that $\varphi$ depends on our choice of $c$.) Suppose~\ref{it:Two} holds; this means that $\varphi|_{S_{\leq 1}}$ is surjective. Since $\dim S_{\leq 1} = m = \dim C$, this surjectivity implies that we have a vector space direct sum $S = S_{\leq 1} \oplus I$. Now $X\in \Espace\subseteq \tend(C)$ acts on $C$ in the same way as the corresponding linear polynomial $\ul X\in S_{\leq 1}$. Thus a product $XY\in\End(C)$ acts as the product of polynomials $\ul X\ul Y\in S_{\leq 2}$. Since $S = I\oplus S_{\leq 1}$ we may write $\ul X\ul Y = U + \ul Z$, where $U\in I$ and $\ul Z\in S_{\leq 1}$. The actions of $XY,Z\in \End(C)$ on $C$ are identical, so $XY = Z$. This proves~\ref{it:Three}. Property~\ref{it:Three} implies~\ref{it:ThreePrim}. Suppose that~\ref{it:ThreePrim} holds and take an $S$-module isomorphism $\varphi'\colon \ul{C}\to S/I$. Reversing the argument above, we obtain again $S = I\oplus S_{\leq 1}$. Let $ \cA := S/I$. This is a finite algebra of rank $\tdim S_{\leq 1} = m$. The easy, but key observation is that the multiplication in $ \cA $ is induced by the multiplication $S\ot \cA \to \cA $ on the $S$-module $ \cA $. The multiplication maps arising from the $S$-module structure give the following commutative diagram: \[ \begin{tikzcd} S_{\leq 1}\ar[d, hook]\ar[dd, "\psi"', bend right=40] &[-2.5em] \ot &[-2.5em] \ul{C}\ar[d,equal]\ar[r] & \ul{C}\ar[d,equal]\\ S\ar[d,two heads] & \ot & \ul{C}\ar[d,equal]\ar[r] & \ul{C}\ar[d,equal]\\ S/I\ar[d,equal] & \ot & \ul{C}\ar[d, "\varphi'"]\ar[r] & \ul{C}\ar[d,"\varphi'"]\\ S/I & \ot & S/I \ar[r] & S/I \end{tikzcd} \] The direct sum decomposition implies the map $\psi$ is a bijection. Hence the tensor $T$, which is isomorphic to the multiplication map from the first row, is also isomorphic to the multiplication map in the last row. This proves~\ref{it:Alg}. Finally, if~\ref{it:Alg} holds, then $T$ is $1_B$-generic, because the multiplication by $1\in \cA$ from the right is bijective. \end{proof} The structure tensor of a module first appeared in Wojtala~\cite{DBLP:journals/corr/abs-2110-01684}. The statement that binding tensors satisfying Strassen's equations satisfy End-closed conditions was originally proven jointly with M. Micha{\l}ek. A binding tensor is of minimal border rank if and only if $\ul{C}$ is a limit of semisimple modules if and only if $S/I$ is a \emph{smoothable} algebra. For $m\leq 7$ all algebras are smoothable~\cite{MR2579394}. \subsection{$1$-generic tensors}\label{1gsubsect} A $1$-generic tensor satisfying the $A$-Strassen equations is isomorphic to a symmetric tensor by~\cite{MR3682743}. (See \cite{GO60survey} for a short proof.). 
For a commutative unital algebra $\cA$, the multiplication tensor of $\cA$ is $1$-generic if and only if $\cA$ is \emph{Gorenstein}; see~\cite[Prop. 5.6.2.1]{MR3729273}. By definition, an algebra $\cA$ is Gorenstein if $\cA^*=\cA \phi$ for some $\phi\in \cA^*$, or in tensor language, if its structure tensor $T_{\cA}$ is $1$-generic with $T_{\cA}(\phi)\in \cA^*\ot \cA^*$ of full rank. For $m\leq 13$ all Gorenstein algebras are smoothable~\cite{MR3404648}, proving Proposition~\ref{Gorgood}. \subsection{Summary}\label{summarysect} We obtain the following dictionary for tensors in $\BC^\aaa\ot \BC^m\ot \BC^m$ with $\aaa\leq m$: \begin{tabular}[h]{c c c} tensor satisfying $A$-Strassen eqns. & is isomorphic to & multiplication tensor in \\ \toprule $1_A$-generic && module\\ $1_A$- and $1_B$-generic (hence binding and $\aaa=m$) && unital commutative algebra\\ $1$-generic ($\aaa=m$) && Gorenstein algebra \end{tabular} \section{Implications of 111-abundance}\label{111impliessect} For the rest of this article, we restrict to tensors $T\in A\ot B\ot C=\BC^m\ot \BC^m\ot \BC^m$. Recall the notation $X\acta T$ from \S\ref{111intro} and that $\{ a_i\}$ is a basis of $A$. In what follows we allow $\tilde{a}_h$ to be arbitrary elements of $A$. \begin{lemma}\label{111intermsOfMatrices} Let $T = \sum_{h=1}^r \tilde{a}_h\ot K_h$, where $\tilde{a}_h\in A$ and $K_h\in B\ot C$ are viewed as maps $K_h\colon B^*\to C$. Let $\Amat\in \tend(A)$, $\Bmat\in \tend(B)$ and $\Cmat\in \tend(C)$. Then \begin{align*} \Amat\acta T &= \sum_{h=1}^{r} \Amat( \tilde{a}_h) \ot K_h,\\ \Bmat\actb T &= \sum_{h=1}^r \tilde{a}_h\ot (K_h\Bmat^{\bt}),\\ \Cmat\actc T &= \sum_{h=1}^r \tilde{a}_h\ot (\Cmat K_h). \end{align*} If $T$ is concise and $\Omega$ is an element of the triple intersection \eqref{111sp}, then the triple $(\Amat, \Bmat, \Cmat)$ such that $\Omega =\Amat \acta T = \Bmat\actb T = \Cmat \actc T$ is uniquely determined. In this case we call $\Amat$, $\Bmat$, $\Cmat$ \emph{the matrices corresponding to $\Omega$}. \end{lemma} \begin{proof} The first assertion is left to the reader. For the second, it suffices to prove it for $\Amat$. Write $T = \sum_{i=1}^m a_i\ot K_i$. The $K_i$ are linearly independent by conciseness. Suppose $\Amat, \Amat'\in \tend(A)$ are such that $\Amat\acta T = \Amat'\acta T$. Then for $\Amat'' = \Amat - \Amat'$ we have $0 = \Amat''\acta T = \sum_{i=1}^m \Amat''(a_i) \ot K_i$. By linear independence of $K_i$, we have $\Amat''(a_i) = 0$ for every $i$. This means that $\Amat''\in\tend(A)$ is zero on a basis of $A$, hence $\Amat'' = 0$. \end{proof} \subsection{$1_A$-generic case} \begin{proposition}\label{1Ageneric111} Suppose that $T\in \BC^m\ot \BC^m\ot \BC^m=A\ot B\ot C$ is $1_A$-generic with $\alpha\in A^*$ such that $T(\alpha)\in B\ot C$ has full rank. Then $T$ is 111-abundant if and only if the space $\Espace = T(A^*)T(\alpha)\inv\subseteq \tend(C)$ is $m$-dimensional, abelian, and End-closed. Moreover, if these hold, then $T$ is concise and 111-sharp. \end{proposition} \begin{proof} Assume $T$ is 111-abundant. The map $ (T(\alpha)^{-1})^{\bt}\colon B\to C^* $ induces an isomorphism of $T$ with a tensor $T'\in A\ot C^*\ot C$, so we may assume that $T = T'$, $T(\alpha) = \Id_C$ and $B=C^*$. We explicitly describe the tensors $\Omega$ in the triple intersection. We use Lemma~\ref{111intermsOfMatrices} repeatedly. Fix a basis $a_1, \ldots ,a_m$ of $A$ and write $T = \sum_{i=1}^m a_i\ot K_i$ where $K_1 = \Id_C$, but we do not assume the $K_i$ are linearly independent, i.e., that $T$ is $A$-concise.
Let $\Omega = \sum_{i=1}^m a_i\ot \omega_i\in A\ot B\ot C$. Suppose $\Omega = \Bmat^{\bt}\actb T = \Cmat \actc T$ for some $\Bmat\in \tend(C)$ and $\Cmat\in \tend(C)$. The condition $\Omega = \Bmat^{\bt} \actb T$ means that $\omega_i = K_i\Bmat$ for every $i$. The condition $\Omega = \Cmat \actc T$ means that $\omega_i = \Cmat K_i$. For $i=1$ we obtain $\Bmat = \Id_C \cdot \Bmat = \omega_1 = \Cmat \cdot \Id_C = \Cmat$, so $\Bmat = \Cmat$. For other $i$ we obtain $\Cmat K_i = K_i \Cmat$, which means that $\Cmat$ is in the joint commutator of $T(A^*)$. A matrix $\Amat$ such that $\Omega = \Amat \acta T$ exists if and only if $\omega_i\in \langle K_1, \ldots ,K_m\rangle = T(A^*)$ for every $i$. This yields $\Cmat K_i = K_i\Cmat\in T(A^*)$ and in particular $\Cmat = \Cmat\cdot \Id_C\in T(A^*)$. By assumption, we have a space of choices for $\Omega$ of dimension at least $m$. Every $\Omega$ is determined uniquely by an element $\Cmat\in T(A^*)$. Since $\dim T(A^*) \leq m$, we conclude that $\dim T(A^*) = m$, i.e., $T$ is $A$-concise (and thus concise), and for every $\Cmat\in T(A^*)$, the element $\Omega = \Cmat \actc T$ lies in the triple intersection. Thus for every $\Cmat\in T(A^*)$ we have $\Cmat K_i = K_i \Cmat$, which shows that $T(A^*)\subseteq \tend(C)$ is abelian and $\Cmat K_i\in T(A^*)$, which implies that $\Espace$ is End-closed. Moreover, the triple intersection is of dimension $\dim T(A^*) = m$, so $T$ is 111-sharp. Conversely, if $\Espace$ is $m$-dimensional, abelian and End-closed, then reversing the above argument, we see that $\Cmat\actc T$ is in the triple intersection for every $\Cmat\in T(A^*)$. Since $(\Cmat \actc T)(\alpha) = \Cmat$, the map from $T(A^*)$ to the triple intersection is injective, so that $T$ is 111-abundant and the above argument applies to it, proving 111-sharpness and conciseness. \end{proof} \subsection{Corank one $1_A$-degenerate case: statement of the normal form} We next consider the $1_A$-degenerate tensors which are as ``nondegenerate'' as possible: there exists $\a\in A^*$ with $\trank(T(\alpha))=m-1$. \begin{proposition}[characterization of corank one concise tensors that are 111-abundant]\label{1Aonedegenerate111} Let $T = \sum_{i=1}^m a_i \ot K_i$ be a concise tensor which is 111-abundant and not $1_A$-generic. Suppose that $K_1\colon B^*\to C$ has rank $m-1$. Choose decompositions $B^* = {B^*}'\oplus \tker(K_1)=: {B^*}'\oplus \langle \b_m\rangle $ and $C = \tim(K_1)\op \langle c_m\rangle =: C'\oplus \langle c_m\rangle $ and use $K_1$ to identify ${B^*}'$ with $C'$. Then there exist bases of $A,B,C$ such that \be\label{thematrices} K_1 = \begin{pmatrix} \Id_{C'} & 0\\ 0 & 0 \end{pmatrix}, \qquad K_s = \begin{pmatrix} \bx_s & 0\\ 0 & 0 \end{pmatrix} \quad \mbox{for}\ \ 2\leq s\leq m-1, \quad\mbox{and}\quad K_m = \begin{pmatrix} \bx_{m} & w_m\\ u_m & 0 \end{pmatrix} , \ene for some $\bx_2, \ldots ,\bx_m\in \tend(C')$ and $0\neq u_m\in B'\ot c_m\isom {C'}^* $, $0\neq w_m\in \b_m\ot C'\isom C' $ where, setting $\bx_1 := \Id_{C'}$, \begin{enumerate} \item\label{uptohereFriedland} $u_mx^jw_m = 0$ for every $j\geq 0$ and $x\in \langle \bx_1, \ldots ,\bx_m\rangle$, so in particular $u_mw_m = 0$. \item\label{item2} the space $\langle \bx_{1},\bx_{2}, \ldots ,\bx_{m-1}\rangle\subseteq \tEnd( C' )$ is $(m-1)$-dimensional, abelian, and End-closed. \item \label{item3} the space $\langle \bx_2, \ldots ,\bx_{m-1}\rangle$ contains the rank one matrix $w_mu_m$. \item\label{item3b}For all $2\leq s\leq m-1$, $u_m\bx_s = 0$ and $\bx_s w_m = 0$. 
\item \label{item4} For every $s$, there exist vectors $u_s\in {C'}^* $ and $w_s\in C'$, such that \begin{equation}\label{finalpiece} \bx_s \bx_{m} + w_{s}u_m = \bx_{m}\bx_s + w_m u_s\in \langle \bx_2, \ldots ,\bx_{m-1}\rangle. \end{equation} The vector $[u_s,\ w_s^{\bt}]\in \BC^{2(m-1)*}$ is unique up to adding multiples of $[u_m,\ w_m^{\bt}]$. \item \label{Fried2item} For every $j\geq 1$ and $2\leq s\leq m-1$ \begin{equation}\label{Fried2} \bx_s\bx_m^j w_m = 0 {\rm \ and \ }u_m\bx_m^j \bx_s = 0. \end{equation} \end{enumerate} Moreover, the tensor $T$ is 111-sharp. Conversely, any tensor satisfying \eqref{thematrices} and \ref{uptohereFriedland}--\ref{item4} is 111-sharp, concise and not $1_A$-generic, hence satisfies~\ref{Fried2item} as well. Additionally, for any vectors $u^*\in C'$ and $w_m^*\in (C')^* $ with $u_mu^* = 1 = w^*w_m$, we may normalize $\bx_m$ such that for every $2\leq s\leq m-1$ \be\label{five} \bx_mu^* = 0 ,\ w^*\bx_m = 0, \ u_s = w^*\bx_s\bx_m, {\rm\ and \ } w_s = \bx_m\bx_su^*. \ene \end{proposition} \begin{remark}\label{ANFFNF} Atkinson \cite{MR695915} defined a normal form for spaces of corank $m-r$ where one element is $\begin{pmatrix}\Id_r&0\\ 0&0\end{pmatrix}$ and all others of the form $\begin{pmatrix} \bx&W\\ U&0\end{pmatrix}$ and satisfy $U\bx^jW=0$ for every $j\geq 0$. The zero block is clear and the equation follows from expanding out the minors of $\begin{pmatrix}\xi \Id_r+ \bx&W\\ U&0\end{pmatrix}$ with a variable $\xi$. This already implies \eqref{thematrices} and~\ref{uptohereFriedland} except for the zero blocks in the $K_s$ just using bounded rank. Later, Friedland \cite{MR2996364}, assuming corank one, showed that the $A$-Strassen equations are exactly equivalent to having a normal form satisfying \eqref{thematrices}, \ref{uptohereFriedland}, and \ref{Fried2item}. In particular, this shows the 111-equations imply Strassen's equations in the corank one case. \end{remark} \begin{proof} \def\Bmat{Y} \def\Cmat{Z} We use Atkinson normal form, in particular we use $K_1$ to identify ${B^*}'$ with $C'$. Take $(\Bmat, \Cmat)\in \tend(B) \times \tend(C)$ with $0\neq \Bmat \actb T = \Cmat \actc T \in T(A^*)\ot A$, which exist by 111-abundance. Write these elements following the decompositions of $B^*$ and $C$ as in the statement: \[ \Bmat^\bt = \begin{pmatrix} \by & w_{\Bmat}\\ u_{\Bmat} & t_{\Bmat} \end{pmatrix} \qquad \Cmat = \begin{pmatrix} \bz & w_{\Cmat}\\ u_{\Cmat} & t_{\Cmat} \end{pmatrix}, \] with $\by\in \tend((B^*)')$, $\bz\in \tend(C')$ etc. The equality $\Bmat \actb T = \Cmat \actc T\in T(A^*)\ot A$ says $ K_i\Bmat^\bt = \Cmat K_i\in T(A^*) = \langle K_1, \ldots ,K_m\rangle$. When $i = 1$ this is \begin{equation}\label{equalityOne} \begin{pmatrix} \by & w_{\Bmat}\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \bz & 0\\ u_{\Cmat} &0 \end{pmatrix}\in T(A^*), \end{equation} so $w_{\Bmat} = 0$, $u_{\Cmat} = 0$, and $\by = \bz$. For future reference, so far we have \begin{equation}\label{cohPair} \Bmat^\bt = \begin{pmatrix} \bz & 0\\ u_{\Bmat} & t_{\Bmat} \end{pmatrix} \qquad \Cmat = \begin{pmatrix} \bz & w_{\Cmat}\\ 0 & t_{\Cmat} \end{pmatrix}. \end{equation} By~\eqref{equalityOne}, for every $(\Bmat, \Cmat)$ above the matrix $\bz$ belongs to ${B'}\ot C' \cap T(A^*)$. By conciseness, the subspace ${B'}\ot C' \cap T(A^*)$ is proper in $T(A^*)$, so it has dimension less than $m$. 
The triple intersection has dimension at least $m$ as $T$ is 111-abundant, so there exists a pair $(\Bmat, \Cmat)$ as in~\eqref{cohPair} with $\bz = 0$, and $0\neq \Bmat\actb T = \Cmat \actc T$. Take any such pair $(\Bmat_0, \Cmat_0)$. Consider a matrix $X\in T(A^*)$ with the last row nonzero and write it as \[ X = \begin{pmatrix} \bx & w_m\\ u_m & 0 \end{pmatrix} \] where $u_m\neq 0$. The equality \begin{equation}\label{eq:specialMatrix} X \Bmat_0^\bt = \begin{pmatrix} w_mu_{\Bmat_0} & w_mt_{\Bmat_0}\\ 0 & 0 \end{pmatrix} = \Cmat_0 X = \begin{pmatrix} w_{\Cmat_0}u_m & 0 \\ t_{\Cmat_0}u_m & 0 \end{pmatrix} \end{equation} implies $w_mt_{\Bmat_0} = 0$, $0 = t_{\Cmat_0}$ (as $u_m\neq 0$) and $w_{\Cmat_0}u_m = w_mu_{\Bmat_0}$. Observe that $w_{\Cmat_0} \neq 0$ as otherwise $\Cmat_0 = 0$ while we assumed $\Cmat_0\actc T\neq 0$. Since $u_m\neq 0$ and $w_{\Cmat_0}\neq 0$, we have an equality of rank one matrices $w_{\Cmat_0}u_m=w_mu_{\Bmat_0}$. Thus $u_m = \lambda u_{\Bmat_0}$ and $w_m = \lambda w_{\Cmat_0}$ for some nonzero $\lambda\in \BC$. It follows that $w_m\neq 0$, so $t_{\Bmat_0} = 0$. The matrix $X$ was chosen as an arbitrary matrix with nonzero last row, and we have proven that every such matrix yields a vector $[u_m,\ w_m^{\bt}]$ proportional to a fixed nonzero vector $[u_{\Bmat_0},\ w^{\bt}_{\Cmat_0}]$. It follows that we may choose a basis of $A$ such that there is only one such matrix $X$. The same holds if we assume instead that $X$ has last column nonzero. This gives \eqref{thematrices}. Returning to~\eqref{equalityOne}, from $u_Z = 0$ we deduce that $\bz\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$. Now $\Bmat_0$ and $\Cmat_0$ are determined up to scale as \begin{equation}\label{eq:degenerateMats} \Bmat_0^\bt = \begin{pmatrix} 0 & 0\\ u_m & 0 \end{pmatrix} \qquad \Cmat_0 = \begin{pmatrix} 0 & w_m\\ 0 & 0 \end{pmatrix}, \end{equation} so there is only a one-dimensional space of pairs $(\Bmat, \Cmat)$ with $\Bmat\actb T = \Cmat\actc T$ and upper left block zero. The space of possible upper left blocks $\bz$ is $\langle \bx_1, \ldots ,\bx_{m-1}\rangle$, so it is $(m-1)$-dimensional. Since the triple intersection is at least $m$-dimensional, for any matrix $\bz\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$ there exist matrices $\Bmat^\bt$ and $\Cmat$ as in \eqref{cohPair} with this $\bz$ in the top left corner. Consider any matrix as in~\eqref{cohPair} corresponding to an element $\Bmat \actb T = \Cmat \actc T \in T(A^*)\ot A$. For $2\leq s\leq m-1$ we get $\bz \bx_s= \bx_s \bz\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$. Since for any matrix $\bz\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$ a suitable pair $(\Bmat, \Cmat)$ exists, it follows that $\langle \bx_1, \ldots ,\bx_{m-1}\rangle\subseteq \tend(C')$ is abelian and closed under composition, proving \ref{item2}. The coefficient of $a_m$ in $\Bmat \actb T = \Cmat \actc T$ gives \begin{equation}\label{eq:finalFantasy} \begin{pmatrix} \bx_m\bz + w_m u_{\Bmat} & w_m t_{\Bmat}\\ u_m \bz & 0 \end{pmatrix} = \begin{pmatrix} \bz\bx_m + w_{\Cmat} u_m & \bz w_m\\ t_{\Cmat} u_m & 0 \end{pmatrix} = \lambda_{\Bmat} K_m + K_{\Bmat}, \end{equation} where $\lambda_{\Bmat}\in \BC$ and $K_{\Bmat}\in \langle K_1, \ldots ,K_{m-1}\rangle$. It follows that $t_{\Bmat} = \lambda_{\Bmat} = t_{\Cmat}$ and that $\bz w_m = \lambda_{\Bmat} w_m$ as well as $u_m \bz = \lambda_{\Bmat} u_m$.
Iterating over $\bz\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$, we see that $w_m$ is a right eigenvector and $u_m$ a left eigenvector of any matrix from this space, and $u_m,w_m$ have the same eigenvalues for each matrix. We make a $\GL(A)$ coordinate change: for each $s$ we subtract from $\bx_s$ this common eigenvalue times $\bx_1$, so that $\bx_sw_m = 0$ and $u_m\bx_s=0$ for all $2\leq s\leq m-1$, proving \ref{item3b}. Take $\bz\in \langle \bx_2, \ldots ,\bx_{m-1}\rangle$ so that $\bz w_m = 0$ and $u_m\bz = 0$. The top left block of~\eqref{eq:finalFantasy} yields \begin{equation}\label{zpm} \bz \bx_m + w_{\Cmat} u_m = \bx_m \bz + w_m u_{\Bmat} = \lambda_{\Bmat} \bx_m + K_Y. \end{equation} Since $\bz w_m = 0$, the upper right block of \eqref{eq:finalFantasy} implies $\lambda_Y = 0$, and we deduce that \begin{equation}\label{zpmb} \bz \bx_{m} + w_{\Cmat}u_m = \bx_{m}\bz + w_m u_{\Bmat} = K_{Y}\in \langle \bx_2, \ldots ,\bx_{m-1}\rangle. \end{equation} For a pair $(\Bmat, \Cmat)$ with $\bz = \bx_s$, set $w_s := w_{\Cmat}$ and $u_{s} := u_{\Bmat}$. Such a pair is unique up to adding matrices~\eqref{eq:degenerateMats}, hence $[u_{s},\ w_{s}^{\bt}]$ is uniquely determined up to adding multiples of $[u_m,\ w_m^{\bt}]$. With these choices, \eqref{zpmb} proves \ref{item4}. Since $\bx_s$ determines $u_s,w_s$, we see that $T$ is 111-sharp. The matrix~\eqref{eq:specialMatrix} lies in $T(A^*)$, hence $w_mu_m\in \langle \bx_1, \ldots ,\bx_{m-1}\rangle$. Since $ 0= (u_mw_m)u_m =u_m(w_mu_m) $ we deduce that $w_mu_m\in \langle \bx_2, \ldots ,\bx_{m-1}\rangle$, proving \ref{item3}. Conversely, suppose that the space of matrices $K_1, \ldots , K_m$ satisfies \eqref{thematrices} and \ref{uptohereFriedland}--\ref{item4}. Conciseness and $1_A$-degeneracy of $K_1, \ldots ,K_m$ follow by reversing the argument above. That $T$ is 111-sharp follows by constructing the matrices as above. To prove~\ref{Fried2item}, we fix $s$ and use induction to prove that there exist vectors $v_{h}\in {C'}^* $ for $h=1,2, \ldots $ such that for every $j\geq 1$ we have \begin{equation}\label{eq:express} \bx_m^j\bx_s + \sum_{h=0}^{j-1} \bx_m^h w_mv_{ j-h }\in \langle \bx_2, \ldots ,\bx_{m-1}\rangle. \end{equation} The base case $j=1$ follows from~\ref{item4}. To make the step from $j$ to $j+1$ use~\ref{item4} for the element~\eqref{eq:express} of $\langle \bx_2, \ldots ,\bx_{m-1}\rangle$, to obtain \[ \bx_m\left(\bx_m^j\bx_s + \sum_{h=0}^{j-1} \bx_m^h w_mv_{ j-h }\right)+w_mv_{ j+1 } \in \langle \bx_2, \ldots ,\bx_{m-1}\rangle, \] for a vector $v_{ j+1 }\in {C'}^* $. This concludes the induction. For every $j$, by~\ref{item3b}, the expression~\eqref{eq:express} is annihilated by $u_m$: \[ u_m\cdot \left( \bx_m^j\bx_s + \sum_{h=0}^{j-1} \bx_m^h w_mv_{ j-h } \right) = 0. \] By~\ref{uptohereFriedland} we have $u_m\bx_m^h w_m = 0$ for every $h$, so $u_m\bx_m^j\bx_s = 0$ for all $j$. The assertion $\bx_s\bx_m^j w_m = 0$ is proved similarly. This proves~\ref{Fried2item}. Finally, we proceed to the ``Additionally'' part. The main subtlety here is to adjust the bases of $B$ and $C$. Multiply the tuple from the left and right respectively by the matrices \[ \begin{pmatrix} \Id_{C'} & \gamma\\ 0 & 1 \end{pmatrix}\in \GL(C) \qquad \begin{pmatrix} \Id_{{B'}^{ * }} & 0\\ \beta & 1 \end{pmatrix}\in \GL( B^* ) \] and then add $\alpha w_mu_m$ to $\bx_m$. These three coordinate changes do not change $\bx_1$, $\bx_s$, $u_m$, or $w_m$, and they transform $\bx_m$ into $\bx_m' := \bx_m + w_m\beta + \gamma u_m + \alpha w_mu_m$.
Take $(\alpha, \beta, \gamma) := (w^*\bx_mu^*, -w^*\bx_m, -\bx_mu^*)$; then $\bx_m'$ satisfies $w^*\bx_m' =0$ and $\bx_m'u^* = 0$. Multiplying~\eqref{finalpiece} from the left by $w^*$ and from the right by $u^*$ we obtain respectively \begin{align*} w^*\bx_s\bx_m + (w^* w_s)u_m &= u_s,\\ w_s &= \bx_m\bx_su^* + w_m( u_su^*). \end{align*} Multiply the second line by $w^*$ to obtain $w^* w_s = u_su^* $, so \[ [u_s,\ w_s^{\bt}]- w^*(w_s)[u_m, \ w_m^{\bt}] = [w^*\bx_s\bx_m, \ (\bx_m\bx_su^*)^{\bt}]. \] Replace $[u_s,\ w_s^{\bt}]$ by $[u_s,\ w_s^{\bt}]- w^*(w_s)[u_m, \ w_m^{\bt}]$ to obtain $u_s = w^*\bx_s\bx_m$, $w_{s} = \bx_m\bx_su^*$, proving \eqref{five}. \end{proof} \begin{example}\label{ex:111necessary} Consider the space of $4\times 4$ matrices $\bx_1 = \Id_4, \bx_2 = E_{14}, \bx_3 = E_{13}, \bx_4 = E_{34}$. Take $\bx_5 = 0$, $u_m = (0, 0, 0, 1)$ and $w_m = (1, 0, 0, 0)^{\bt}$. The tensor built from this data as in Proposition~\ref{1Aonedegenerate111} does \emph{not} satisfy the 111-condition, since $\bx_3$ and $\bx_4$ do not commute. Hence, it is not of minimal border rank. However, this tensor does satisfy the $A$-End-closed equations (described in \S\ref{strandend}) and Strassen's equations (in all directions), and even the $p=1$ Koszul flattenings. This shows that 111-equations are indispensable in Theorem~\ref{concise5}; they cannot be replaced by these more classical equations. \end{example} \subsection{Proof of Proposition \ref{111iStr+End}} \label{111impliessectb} The $1_A$-generic case is covered by Proposition \ref{1Ageneric111} together with the description of the $A$-Strassen and $A$-End-closed equations for $1_A$-generic tensors, which was given in~\S\ref{strandend}. In the corank one case, Remark \ref{ANFFNF} observed that the 111-equations imply Strassen's equations. The End-closed equations are: Let $\a_1\hd \a_m$ be a basis of $A^*$. Then for all $\a',\a''\in A^*$, \be\label{bigenda1gen} (T(\a')T(\a_1)^{\ww m-1}T(\a'') ) \ww T(\a_1) \ww \cdots \ww T(\a_m) =0\in \La{m+1}(B\ot C). \ene Here, for $Z\in B\ot C$, $Z^{\ww m-1}$ denotes the induced element of $\La{m-1}B\ot \La{m-1}C$, which, up to choice of volume forms (which does not affect the space of equations), is isomorphic to $C^*\ot B^*$, so $(T(\a')T(\a_1)^{\ww m-1}T(\a'') )\in B\ot C$. In bases $Z^{\ww m-1}$ is just the cofactor matrix of $Z$. (Aside: when $T$ is $1_A$-generic these correspond to $\cE_\a(T)$ being closed under composition of endomorphisms.) When $T(\a_1)$ is of corank one, using the normal form~\eqref{thematrices} we see that $T(\a')T(\a_1)^{\ww m-1}T(\a'')$ equals zero unless $\a'=\a''=\a_m$, in which case it equals $w_mu_m$, so the vanishing of~\eqref{bigenda1gen} is implied by Proposition \ref{1Aonedegenerate111}\ref{item3}. Finally, if the corank is greater than one, both Strassen's equations and the End-closed equations are trivial. \qed \section{Proof of Theorem~\ref{ref:111algebra:thm}}\label{111algpfsect} We prove Theorem~\ref{ref:111algebra:thm}: $\alg{T}$ is indeed a unital subalgebra of $\tend(A)\times \tend(B)\times \tend(C)$, which is commutative for $T$ concise. The key point is that the actions are linear with respect to $A$, $B$, and $C$. We have $(\Id, \Id, \Id)\in \alg{T}$ for any $T$. \begin{lemma}[composition and independence of actions]\label{ref:independence:lem} Let $T\in A\ot B\ot C$.
For all $\Amat,\Amat'\in \tend(A)$ and $\Bmat\in \tend(B)$, \begin{align} \label{71}\Amat\acta (\Amat'\acta T) &= (\Amat\Amat')\acta T,\ {\rm and}\\ \label{eq:independence} \Amat\acta (\Bmat\actb T) &= \Bmat\actb (\Amat\acta T). \end{align} The same holds for $(A,B)$ replaced by $(B,C)$ or $(C,A)$. \end{lemma} \begin{proof} Directly from the description in Lemma~\ref{111intermsOfMatrices}. \end{proof} \begin{lemma}[commutativity]\label{ref:commutativity:prop} Let $T\in A\ot B\ot C$ and suppose $(\Amat, \Bmat, \Cmat), (\Amat', \Bmat', \Cmat')\in \alg T$. Then $\Amat\Amat' \acta T = \Amat'\Amat \acta T$ and similarly for the other components. If $T$ is concise, then $\Amat \Amat' = \Amat' \Amat$, $\Bmat\Bmat' = \Bmat' \Bmat$ and $\Cmat \Cmat' = \Cmat'\Cmat$. \end{lemma} \begin{proof} We will make use of compatibility to move the actions to independent positions and~\eqref{eq:independence} to conclude the commutativity, much like one proves that $\pi_2$ in topology is commutative. Concretely, Lemma~\ref{ref:independence:lem} implies \begin{align*} \Amat\Amat' \acta T &= \Amat \acta (\Amat' \acta T) = \Amat \acta (\Bmat'\actb T) = \Bmat'\actb (\Amat \acta T) = \Bmat' \actb (\Cmat \actc T), \ {\rm and}\\ \Amat'\Amat \acta T &= \Amat' \acta (\Amat \acta T) = \Amat' \acta (\Cmat \actc T) = \Cmat \actc (\Amat' \acta T) = \Cmat \actc (\Bmat'\actb T). \end{align*} Finally $\Bmat' \actb (\Cmat \actc T)= \Cmat \actc (\Bmat'\actb T)$ by~\eqref{eq:independence}. If $T$ is concise, then the equation $(\Amat\Amat' - \Amat'\Amat)\acta T = 0$ implies $\Amat\Amat' - \Amat'\Amat=0$ by the description in Lemma~\ref{111intermsOfMatrices}, so $\Amat$ and $\Amat'$ commute. The commutativity of other factors follows similarly. \end{proof} \begin{lemma}[closure under composition]\label{ref:Endclosed:prop} Let $T\in A\ot B\ot C$ and suppose $(\Amat, \Bmat, \Cmat), (\Amat', \Bmat', \Cmat')\in \alg T$. Then $(\Amat\Amat', \Bmat\Bmat', \Cmat\Cmat')\in \alg T$. \end{lemma} \begin{proof} By Lemma~\ref{ref:independence:lem}, \[ \Amat\Amat' \acta T = \Amat \acta (\Amat'\acta T) = \Amat \acta (\Bmat' \actb T) = \Bmat' \actb (\Amat \acta T) = \Bmat'\actb (\Bmat \actb T) = \Bmat'\Bmat \actb T. \] We conclude by applying Lemma~\ref{ref:commutativity:prop} and obtain equality with $\Cmat'\Cmat\actc T$ similarly. \end{proof} \begin{proof}[Proof of Theorem \ref{ref:111algebra:thm}] Commutativity follows from Lemma~\ref{ref:commutativity:prop}, the subalgebra assertion is Lemma~\ref{ref:Endclosed:prop}, and injectivity of projections follows from Lemma~\ref{111intermsOfMatrices} and conciseness. \end{proof} \begin{remark} Theorem~\ref{ref:111algebra:thm} without the commutativity conclusion still holds for a non-concise tensor $T$. An example with a noncommutative 111-algebra is $\sum_{i=1}^r a_i\ot b_i\ot c_i$, where $r \leq m-2$. In this case the 111-algebra contains a copy of $\End(\BC^{m-r})$. \end{remark} \begin{example}\label{ex:tensorAlgebra} If $T$ is a $1_A$-generic 111-abundant tensor, then by Proposition~\ref{1Ageneric111} its 111-algebra is isomorphic to $\Espace$. In particular, if $T$ is the structure tensor of an algebra $\cA$, then $\alg{T}$ is isomorphic to $\cA$. \end{example} \begin{example}\label{ex:symmetricTensor} Consider the symmetric tensor $F\in S^3\BC^5\subseteq \BC^5\ot \BC^5\ot \BC^5$ corresponding to the cubic form $x_3x_1^2 + x_4x_1x_2 + x_5x_2^2$, where, e.g., $x_3x_1^2=2(x_3\ot x_1\ot x_1+ x_1\ot x_3\ot x_1+ x_1\ot x_1\ot x_3)$. This cubic has vanishing Hessian, hence $F$ is $1$-degenerate.
The triple intersection of the corresponding tensor is $\langle F, x_1^3, x_1^2x_2, x_1x_2^2, x_2^3\rangle$ and its 111-algebra is given by the triples $(x,x,x)$ where $$ x\in \langle \Id, x_1\ot \alpha_3, x_2\ot \alpha_3 + x_1\ot \alpha_4, x_2\ot \alpha_4 + x_1\ot \alpha_5, x_2\ot \alpha_5 \rangle, $$ where $\a_j$ is the basis vector dual to $x_j$. Since all compositions of basis elements other than $\Id$ are zero, this 111-algebra is isomorphic to $\BC[\varepsilon_1, \varepsilon_2,\varepsilon_3, \varepsilon_4]/(\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4)^2$. \end{example} \begin{example}\label{ex:1Aonedegenerate111Algebra} Consider a tensor in the normal form of Proposition~\ref{1Aonedegenerate111}. The projection of the 111-algebra to $\tend(B)\times \tend(C)$ can be extracted from the proof. In addition to $(\Id,\Id)$ we have: \begin{align*} &Y_0=\begin{pmatrix}0 & 0 \\ u_m & 0\end{pmatrix}, \ Z_0=\begin{pmatrix} 0 & w_m \\ 0 & 0\end{pmatrix}, \\ &Y_s=\begin{pmatrix}\bx_s& 0 \\ u_s & 0\end{pmatrix}, \ Z_s=\begin{pmatrix} \bx_s& w_s \\ 0 & 0\end{pmatrix}. \end{align*} Theorem~\ref{ref:111algebra:thm} implies for matrices in $\tend(C)$ that \[ \begin{pmatrix} \bx_s\bx_t & \bx_sw_t\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \bx_s & w_s\\ 0 & 0 \end{pmatrix}\cdot \begin{pmatrix} \bx_t & w_t\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \bx_t & w_t\\ 0 & 0 \end{pmatrix}\cdot \begin{pmatrix} \bx_s & w_s\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \bx_t\bx_s & \bx_tw_s\\ 0 & 0 \end{pmatrix} \] which gives $\bx_sw_t = \bx_tw_s$ for any $2\leq s,t\leq m-1$. Considering matrices in $\tend(B)$ we obtain $u_t\bx_s = u_s\bx_t$ for any $2\leq s,t\leq m-1$. (Of course, these identities are also a consequence of Proposition~\ref{1Aonedegenerate111}, but it is difficult to extract them directly from the Proposition.) \end{example} \section{New obstructions to minimal border rank via the 111-algebra}\label{newobssect} In this section we characterize 111-abundant tensors in terms of an algebra equipped with a triple of modules and a module map. We then exploit this extra structure to obtain new obstructions to minimal border rank via deformation theory. \subsection{Characterization of tensors that are 111-abundant}\label{111abcharsect} \begin{definition} A \emph{tri-presented algebra} is a commutative unital subalgebra $\cA \subseteq \tend(A) \times \tend(B) \times \tend(C)$. \end{definition} For any concise tensor $T$ its 111-algebra $\alg{T}$ is a tri-presented algebra. A tri-presented algebra $\cA$ naturally gives an $\cA$-module structure on $A$, $B$, $C$. For every $\cA$-module $N$ the space $N^*$ is also an $\cA$-module via, for any $r\in \cA$, $n\in N$, and $f\in N^*$, $(r\cdot f)(n) := f(rn)$. (This indeed satisfies $r_2\cdot (r_1\cdot f)=(r_2r_1)\cdot f$ because $\cA$ is commutative.) In particular, the spaces $A^*$, $B^*$, $C^*$ are $\cA$-modules. Explicitly, if $r = (\Amat, \Bmat, \Cmat)\in \cA$ and $\alpha\in A^*$, then $r\alpha = \Amat^{\bt}(\alpha)$. There is a canonical surjective map $\pi\colon A^*\ot B^*\to \ul A^* \ot_\cA \ul B^*$, defined by $\pi(\alpha\ot \beta) = \alpha\ot_{\cA} \beta$ and extended linearly. For any homomorphism $\varphi\colon \ul A^*\ot_\cA \ul B^*\to \ul C$ of $\cA$-modules, we obtain a linear map $\varphi\circ\pi\colon A^*\ot B^*\to C$ hence a tensor in $A\ot B\ot C$ which we denote by $T_{\varphi}$. We need the following lemma, whose proof is left to the reader. 
\begin{lemma}[compatibility with flattenings]\label{ref:flattenings:lem} Let $T\in A\ot B\ot C$, $\Amat \in \tend(A)$, $\Cmat\in \tend(C)$ and $\alpha\in A^*$. Consider $T(\alpha): B^*\to C$. Then \begin{align} (\Cmat \actc T)(\alpha) &= \Cmat \cdot T(\alpha),\label{eq:flatOne}\\ T\left(\Amat^{\bt}(\alpha)\right) &= (\Amat \acta T)(\alpha), \label{eq:flatTwo} \end{align} and analogously for the other factors.\qed \end{lemma} \begin{proposition}\label{ex:1AgenericAndModules} Let $T$ be a concise 111-abundant tensor. Then $T$ is $1_A$-generic if and only if the $\alg{T}$-module $\ul{A}^*$ is generated by a single element, i.e., is a cyclic module. More precisely, an element $\alpha\in A^*$ generates the $\alg{T}$-module $\ul{A}^*$ if and only if $T(\alpha)$ has maximal rank. \end{proposition} \begin{proof} Take any $\alpha\in A^*$ and $r = (\Amat, \Bmat, \Cmat)\in \alg{T}$. Using~\eqref{eq:flatOne}-\eqref{eq:flatTwo} we have \begin{equation}\label{eq:kernel} T(r\alpha) = T(\Amat^{\bt}(\alpha)) = (\Amat \acta T)(\alpha) = (\Cmat \actc T)(\alpha) = \Cmat \cdot T(\alpha). \end{equation} Suppose first that $T$ is $1_A$-generic with $T(\alpha)$ of full rank. If $r\neq 0$, then $\Cmat \neq 0$ by the description in Lemma~\ref{111intermsOfMatrices}, so $\Cmat \cdot T(\alpha)$ is nonzero. This shows that the homomorphism $\alg{T} \to \ul A^*$ of $\alg{T}$-modules given by $r\mapsto r\alpha$ is injective. Since $\dim \alg{T} \geq m = \dim A^*$, this homomorphism is an isomorphism and so $\ul A^* \simeq \alg{T}$ as $\alg{T}$-modules. Now suppose that $\ul{A}^*$ is generated by an element $\alpha\in A^*$. This means that for every $\alpha'\in A^*$ there is an $r = (\Amat, \Bmat, \Cmat)\in \alg{T}$ such that $r\alpha = \alpha'$. From~\eqref{eq:kernel} it follows that $\ker T(\alpha) \subseteq \ker T(\alpha')$. This holds for every $\alpha'$, hence $\ker T(\alpha)$ is in the joint kernel of $T(A^*)$. By conciseness this joint kernel is zero, hence $\ker T(\alpha) = 0$ and $T(\alpha)$ has maximal rank. \end{proof} \begin{theorem}\label{ref:normalizationCharacterization:thm} Let $T\in A\ot B\ot C$ and let $\cA$ be a tri-presented algebra. Then $\cA\subseteq \alg{T}$ if and only if the map $T_C^\bt: A^*\ot B^*\to C$ factors through $\pi: A^*\ot B^*\ra \ul A^*\ot_\cA \ul B^*$ and induces an $\cA$-module homomorphism $\varphi\colon \ul A^*\ot_\cA \ul B^*\to \ul C$. If this holds, then $T = T_{\varphi}$. \end{theorem} \begin{proof} By the universal property of the tensor product over $\cA$, the map $T_C^\bt: A^*\ot B^*\ra C$ factors through $\pi$ if and only if the bilinear map $A^*\times B^*\to C$ given by $(\alpha, \beta)\mapsto T(\alpha, \beta)$ is $\cA$-bilinear. That is, for every $r = (\Amat, \Bmat, \Cmat)\in \cA$, $\alpha\in A^*$, and $\beta\in B^*$ one has $T(r\alpha, \beta) = T(\alpha, r \beta)$. By~\eqref{eq:flatTwo}, $T(r\alpha, \beta) = (\Amat \acta T)(\alpha, \beta)$ and $T(\alpha, r\beta) = (\Bmat \actb T)(\alpha, \beta)$. It follows that the factorization exists if and only if for every $r = (\Amat, \Bmat, \Cmat)\in \cA$ we have $\Amat \acta T = \Bmat \actb T$. Suppose that this holds and consider the obtained map $\varphi\colon \ul A^*\ot_\cA \ul B^*\to \ul C$. Thus for $\alpha\in A^*$ and $\beta\in B^*$ we have $\varphi(\alpha\ot_{\cA} \beta) = T(\alpha, \beta)$. The map $\varphi$ is a homomorphism of $\cA$-modules if and only if for every $r = (\Amat, \Bmat, \Cmat)\in \cA$ we have $\varphi(r\alpha\otR \beta) = r\varphi(\alpha\otR \beta)$. 
By~\eqref{eq:flatOne}, $r\varphi(\alpha\otR \beta) = (\Cmat \actc T)(\alpha, \beta)$ and by~\eqref{eq:flatTwo}, $\varphi(r\alpha\otR \beta) = (\Amat \acta T)(\alpha, \beta)$. These are equal for all $\alpha$, $\beta$ if and only if $\Amat \acta T = \Cmat \actc T$. The equality $T = T_{\varphi}$ follows directly from the definition of $T_{\varphi}$. \end{proof} \begin{theorem}[characterization of concise 111-abundant tensors]\label{ref:111abundantChar:cor} A concise tensor that is 111-abundant is isomorphic to a tensor $T_{\varphi}$ associated to a surjective homomorphism of $\cA$-modules \be\label{phimap}\varphi\colon N_1\ot_\cA N_2\to N_3, \ene where $\cA$ is a commutative associative unital algebra, $N_1$, $N_2$, $N_3$ are $\cA$-modules and $\dim N_1 = \dim N_2 = \dim N_3 = m \leq \dim \cA$, and moreover for every nonzero $n_1\in N_1$, $n_2\in N_2$ the maps $\varphi(n_1\otR -)\colon N_2\to N_3$ and $\varphi(-\otR n_2)\colon N_1\to N_3$ are nonzero. Conversely, any such $T_{\varphi}$ is 111-abundant and concise. \end{theorem} The conditions $\varphi(n_1\otR -)\neq0$, $\varphi(-\otR n_2)\neq 0$ for any nonzero $n_1, n_2$ have appeared in the literature. Bergman~\cite{MR2983182} calls $\varphi$ {\it nondegenerate} if they are satisfied. \begin{proof} By Theorem~\ref{ref:normalizationCharacterization:thm}, a concise tensor $T$ that is 111-abundant is isomorphic to $T_{\varphi}$ where $\cA = \alg{T}$, $N_1 =\ul{A}^*$, $N_2 = \ul{B}^*$, $N_3 = \ul{C}$. Since $T$ is concise, the homomorphism $\varphi$ is onto and the restrictions $\varphi(\alpha\otR -)$, $\varphi(-\otR \beta)$ are nonzero for any nonzero $\alpha\in A^*$, $\beta\in B^*$. Conversely, if we take \eqref{phimap} and set $A := N_1^*$, $B:= N_2^*$, $C := N_3$, then $T_{\varphi}$ is concise by the conditions on $\varphi$ and, by Theorem~\ref{ref:normalizationCharacterization:thm}, $\cA \subseteq \alg{T_{\varphi}}$, hence $T_{\varphi}$ is 111-abundant. \end{proof} \begin{example}\label{ex:1AgenericAndModulesTwo} By Proposition~\ref{ex:1AgenericAndModules} we see that for a concise $1_A$-generic tensor $T$ the tensor product $\ul A^*\ot_{\cA} \ul B^*$ simplifies to $\cA\ot_{\cA} \ul B^* \simeq \ul B^*$. The homomorphism $\varphi\colon \ul B^*\to \ul C$ is surjective, hence an isomorphism of $\ul B^*$ and $\ul C$, so the tensor $T_{\varphi}$ becomes the multiplication tensor ${\cA}\ot_{\BC} \ul C\to \ul C$ of the ${\cA}$-module $\ul C$. One can then choose a surjection $S\to {\cA}$ from a polynomial ring such that $S_{\leq 1}$ maps isomorphically onto $\cA$. This shows how the results of this section generalize~\S\ref{dictsectOne}. \end{example} In the setting of Theorem~\ref{ref:111abundantChar:cor}, since $T$ is concise it follows from Lemma~\ref{111intermsOfMatrices} that the projections of $\alg{T}$ to $\tend(A)$, $\tend(B)$, $\tend(C)$ are one-to-one. This translates into the fact that no nonzero element of $\alg{T}$ annihilates $A$, $B$ or $C$. The same is then true for $A^*$, $B^*$, $C^*$. \subsection{Two new obstructions to minimal border rank}\label{twonew} \begin{lemma}\label{ref:triplespanalgebra} Let $T\in \BC^m\ot \BC^m\ot \BC^m$ be concise, 111-sharp and of minimal border rank. Then $\alg{T}$ is smoothable. \end{lemma} \begin{proof} By 111-sharpness, the degeneration $T_\ep\to T$ from a minimal rank tensor induces a family of triple intersection spaces, hence by semicontinuity it is enough to check for $T_\ep$ of \emph{rank} $m$. By Example~\ref{ex:tensorAlgebra} each $T_\ep$ has 111-algebra $\prod_{i=1}^m \BC$.
Thus the 111-algebra of $T$ is the limit of algebras isomorphic to $\prod_{i=1}^m \BC$, hence smoothable. \end{proof} Recall from~\S\ref{1genreview} that for $m\leq 7$ every algebra is smoothable. As in \S\ref{dictsectOne}, view $\alg{T}$ as a quotient of a fixed polynomial ring $S$. Then the $\alg{T}$-modules $\ul A$, $\ul B$, $\ul C$ become $S$-modules. \begin{lemma}\label{ref:triplespanmodules} Let $T\in \BC^m\ot \BC^m\ot \BC^m$ be concise, 111-sharp and of minimal border rank. Then the $S$-modules $\ul A$, $\ul B$, $\ul C$ lie in the principal component of the Quot scheme. \end{lemma} \begin{proof} As in the proof above, the degeneration $T_\ep\to T$ from a minimal rank tensor induces a family of $\alg{T_{\ep}}$ and hence a family of $S$-modules $\ul A_{\ep}$, $\ul B_{\ep}$, $\ul C_{\ep}$. These modules are semisimple when $T_{\ep}$ has minimal \emph{rank}, by Example~\ref{ex:modulesForMinRank}. \end{proof} Already for $m = 4$ there are $S$-modules outside the principal component~\cite[\S6.1]{jelisiejew2021components}, \cite{MR1199042}. \begin{example}\label{ex:failureFor7x7} In~\cite[Example~5.3]{MR3682743} the authors exhibit a $1_A$-generic, End-closed, commuting tuple of seven $7\times 7$-matrices that corresponds to a tensor $T$ of border rank higher than minimal. By Proposition~\ref{1Ageneric111} this tensor is 111-sharp. However, the associated module $\ul{C}$ is \emph{not} in the principal component; in fact it is a smooth point of another (elementary) component. This can be verified using the Bia\l{}ynicki-Birula decomposition, as in~\cite[Proposition~5.5]{jelisiejew2021components}. The proof of non-minimality of border rank in \cite[Example~5.3]{MR3682743} used different methods. We note that the tensor associated to this tuple does \emph{not} satisfy all $p=1$ Koszul flattenings. \end{example} \section{Conditions where tensors of bounded rank fail to be concise}\label{noconcise} \begin{proposition}\label{5notconciseprop} Let $T\in \BC^5\ot \BC^5\ot \BC^5$ be such that the matrices in $T(A^*)$ have the shape \[ \begin{pmatrix} 0 & 0 & 0 & * & *\\ 0 & 0 & 0 & * & *\\ 0 & 0 & 0 & * & *\\ 0 & 0 & 0 & * & *\\ * & * & * & * & * \end{pmatrix}. \] If $T$ is concise, then $T(C^*)$ contains a matrix of rank at least $4$. \end{proposition} \begin{proof} Write the elements of $T(A^*)$ as matrices \[ K_i = \begin{pmatrix} 0 & \star\\ u_i & \star \end{pmatrix}\in \Hom(B^*, C)\quad\mbox{for } i = 1,2, \ldots ,5 \] where $u_i \in \BC^3$. Suppose $T$ is concise. Then the joint kernel of $\langle K_1, \ldots ,K_5\rangle$ is zero, so $u_1, \ldots ,u_5$ span $\BC^3$. After a change of coordinates we may assume $u_1$, $u_2$, $u_3$ are linearly independent while $u_4 = 0$, $u_5 = 0$. Since $K_4\neq 0$, choose a vector $\gamma\in C^*$ such that $\gamma \cdot K_4 \neq 0$. Choose $\xi\in \BC$ such that $(\gamma_5 + \xi \gamma)\cdot K_4 \neq 0$. Note that the matrix of $T(\gamma_5)\colon B^*\ra A$ has as its rows the last rows of $K_1\hd K_5$. We claim that the matrix $T(\gamma_5 + \xi \gamma)\colon B^*\to A$ has rank at least four. Indeed, this matrix can be written as \[ \begin{pmatrix} u_1 & \star & \star\\ u_2 & \star & \star\\ u_3 & \star & \star\\ 0 & \multicolumn{2}{c}{(\gamma_5 + \xi \gamma) \cdot K_4}\\ 0 & \star & \star \end{pmatrix}. \] This concludes the proof. \end{proof} \begin{proposition}\label{5notconcise} Let $T\in A\ot B\ot C$ with $m = 5$ be a concise tensor. Then one of its associated spaces of matrices contains a full rank or corank one matrix.
\end{proposition} \begin{proof} Suppose that $T(A^*)$ is of bounded rank three. We use~\cite[Theorem~A]{MR695915} and its notation, in particular $r = 3$. By this theorem and conciseness, the matrices in the space $T(A^*)$ have the shape \[ \begin{pmatrix} \star & \star & \star\\ \star & \mathcal Y &0\\ \star &0&0 \end{pmatrix} \] where the starred part consists of $p$ rows and $q$ columns, for some $p, q\geq 0$, and $\mathcal Y$ forms a primitive space of bounded rank at most $3 - p - q$. Furthermore, since $r+1 < m$ and $r < 2+2$, by \cite[Theorem~A, ``Moreover''~part]{MR695915} we see that $T(A^*)$ is not primitive itself, hence at least one of $p$, $q$ is positive. If just one is positive, say $p$, then by conciseness $\mathcal{Y}$ spans $5-p$ rows and has bounded rank $3-p$, which again contradicts \cite[Theorem~A, ``Moreover'']{MR695915}. If both are positive, we have $p=q=1$ and $\mathcal Y$ is of bounded rank one, so by~\cite[Lemma~2]{MR621563}, up to coordinate change, after transposing, $T(A^*)$ has the shape as in Proposition~\ref{5notconciseprop}. Proposition~\ref{5notconciseprop} then yields a matrix of rank at least four, i.e., a full rank or corank one matrix, in one of the associated spaces of matrices of $T$. \end{proof} \begin{proposition}\label{1degensimp} In the setting of Proposition \ref{1Aonedegenerate111}, write $T'=a_1\ot \bx_1+\cdots + a_{m-1}\ot \bx_{m-1}\in \BC^{m-1}\ot \BC^{m-1}\ot\BC^{m-1}=: A'\ot {C'}^* \ot C'$, where $\bx_1=\Id_{ C' }$. If $T$ is $1$-degenerate, then $T'$ is $1_{ {C'}^* }$ and $1_{C'}$-degenerate. \end{proposition} \begin{proof} Suppose $T'$ is $1_{ {C'}^*} $-generic, with $T'( c' )$ of rank $m-1$. Then $T( c'+\lambda u^* )$ has rank $m$ for almost all $\lambda\in \BC$, contradicting $1$-degeneracy. The $1_{C'}$-generic case is similar. \end{proof} \begin{corollary}\label{noalgcor} In the setting of Proposition~\ref{1degensimp}, the module $\ul{C'}$ associated to $T'({A'}^*)$ via the ADHM correspondence as in~\S\ref{dictsectOne} cannot be generated by a single element. Similarly, the module $\ul{{C'}^*}$ associated to $(T'({A'}^*))^{\bt}$ cannot be generated by a single element. \end{corollary} \begin{proof} By Proposition~\ref{ref:moduleVsAlgebra} the module $\ul{C'}$ is generated by a single element if and only if $T'$ is $1_{ {C'}^* }$-generic. The claim follows from Proposition~\ref{1degensimp}. The second assertion follows similarly since $T'$ is not $1_{C'}$-generic. \end{proof} \section{Proof of Theorem~\ref{concise5} in the $1$-degenerate case and Theorem \ref{5isom} }\label{m5sect} Throughout this section $T\in \BC^5\ot \BC^5\ot \BC^5$ is a concise $1$-degenerate 111-abundant tensor. We use the notation of Proposition~\ref{1Aonedegenerate111} throughout this section. We begin, in \S\ref{prelim7}, with a few preliminary results. We then, in \S\ref{restrisom7}, prove a variant of the $m=5$ classification result under a more restricted notion of isomorphism, requiring only 111-abundance. Then the $m=5$ classification of corank one 111-abundant tensors follows easily in \S\ref{isom7}, as does the orbit closure containment in \S\ref{orb7}. Finally we give two proofs that these tensors are of minimal border rank in \S\ref{end7}. \subsection{Preliminary results}\label{prelim7} We first classify admissible three-dimensional spaces of $4\times 4$ matrices $\langle\bx_2, \bx_3, \bx_4\rangle \subseteq \tend(\BC^4)$. One could proceed by using the classification \cite[\S3]{MR2118458} of abelian subspaces of $\tend(\BC^4)$ and then impose the additional conditions of Proposition~\ref{1Aonedegenerate111}. We instead utilize ideas from the ADHM correspondence to obtain a short, self-contained proof.
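Although it is not needed for the arguments below, the single-generation condition of Corollary~\ref{noalgcor} is easy to test in a concrete example: when the span $\langle \bx_1,\ldots ,\bx_4\rangle$ is closed under composition, the associated module is generated by a single element if and only if some $v$ satisfies $\langle \bx_1,\ldots ,\bx_4\rangle\cdot v = \BC^4$, equivalently if and only if $\det(\bx_1v \mid \bx_2v\mid \bx_3v\mid \bx_4v)$ is not identically zero as a polynomial in $v$. The following is a minimal computational sketch (illustrative only, not part of the proofs; the helper names are ad hoc, and the sample matrices are the $\bx_s$ of case~\eqref{eq:M1} below).
\begin{verbatim}
# Sketch (illustrative only): test the cyclic-vector condition
# <x1,...,x4>.v = C^4 for an explicit commuting tuple, here the matrices
# x2 = E_{13}, x3 = E_{14}, x4 = E_{24} of case (M1) below.
import sympy as sp

def unit(i, j, n=4):
    return sp.Matrix(n, n, lambda a, b: 1 if (a, b) == (i, j) else 0)

x1, x2, x3, x4 = sp.eye(4), unit(0, 2), unit(0, 3), unit(1, 3)
v = sp.Matrix(sp.symbols('v1:5'))                 # generic vector
M = sp.Matrix.hstack(x1 * v, x2 * v, x3 * v, x4 * v)
print(sp.expand(M.det()))   # identically 0: no cyclic vector exists
\end{verbatim}
Here the determinant vanishes identically (the second and third columns are proportional), so no cyclic vector exists; the analogous computation for the transposed matrices gives the same answer, consistent with Corollary~\ref{noalgcor}.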
\begin{proposition}\label{nodecomposition} Let $\langle \bx_1=\Id_4,\bx_2, \bx_3,\bx_4\rangle \subset \tend(\BC^4)$ be a $4$-dimensional subspace spanned by pairwise commuting matrices. Suppose there exist nonzero subspaces $V, W\subseteq \BC^4$ with $V\oplus W = \BC^4$ which are preserved by $\bx_1, \bx_2, \bx_3, \bx_4$. Then either there exists a vector $v \in \BC^4$ with $\langle \bx_1, \bx_2,\bx_3,\bx_4\rangle \cdot v = \BC^4$ or there exists a vector $v^*\in {\BC^4}^*$ with $\langle\bx_1^{\bt}, \bx_2^{\bt},\bx_3^{\bt},\bx_4^{\bt}\rangle v^* = {\BC^4}^*$. \end{proposition} \begin{proof} For $h=1,2,3,4$ the matrix $\bx_h$ is block diagonal with blocks $\bx_h'\in \tend(V)$ and $\bx_h''\in \tend(W)$. Suppose first that $\dim V = 2 = \dim W$. In this case we will prove that $v$ exists. The matrices $\bx_h'$ commute, and commutative subalgebras of $\tend(\BC^2)$ are at most $2$-dimensional; the maximal ones are, up to a change of basis, spanned by $\Id_{\BC^2}$ and either $\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}$ or $\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}$. In each of the two cases, applying the matrices to the vector $(1, 1)^{\bt}$ yields the space $\BC^2$. Since the space $\langle \bx_1, \bx_2, \bx_3, \bx_4\rangle$ is $4$-dimensional, it is, after a change of basis, a direct sum of two maximal subalgebras as above. Thus applying $\langle \bx_1, \bx_2, \bx_3, \bx_4\rangle$ to the vector $v = (1, 1, 1, 1)^{\bt}$ yields the whole space. Suppose now that $\dim V = 3$. If some $\bx_h'$ has at least two distinct eigenvalues, then consider the generalized eigenspaces $V_1$, $V_2$ associated to them, where we may assume $\dim V_1 = 1$. By commutativity, the subspaces $V_1$, $V_2$ are preserved by the action of every $\bx_h'$, so the matrices $\bx_h$ also preserve the subspaces $W\oplus V_1$ and $V_2$. This reduces us to the previous case. Hence, every $\bx_h'$ has a single eigenvalue. Subtracting multiples of $\bx_1$ from $\bx_s$ for $s=2,3,4$, the $\bx_s'$ become nilpotent, hence up to a change of basis in $V$, they have the form \[ \bx_s' = \begin{pmatrix} 0 & (\bx_{s}')_{12} & (\bx_{s}')_{13}\\ 0 & 0 & (\bx_{s}')_{23}\\ 0 & 0 & 0 \end{pmatrix}. \] The space $\langle \bx_2', \bx_3', \bx_4'\rangle$ cannot be $3$-dimensional, as it would fill the space of strictly upper triangular $3\times3$ matrices, which is non-commutative. So $\langle \bx_2', \bx_3', \bx_4'\rangle$ is $2$-dimensional, and so some nonzero linear combination of the matrices $\bx_2, \bx_3, \bx_4$ vanishes on $V$; it acts on the line $W$ by a scalar, necessarily nonzero since $\bx_2, \bx_3, \bx_4$ are linearly independent, so after rescaling it is the identity on $W$ and zero on $V$. We subdivide into four cases. First, if $(\bx_s')_{12}\neq 0$ for some $s$ and $(\bx_t')_{23}\neq 0$ for some $t\neq s$, then change bases so $(\bx_s')_{23}=0 $ and take $v=(0,p,1,1)^\bt$ such that $p(\bx_s')_{12}+(\bx_s')_{13}\neq 0$. Second, if the above fails and $(\bx_s')_{12}\neq 0$ and $(\bx_s')_{23}\neq 0$ for some $s$, then there must be a $t$ such that $(\bx_t')_{13}\neq 0$ and all other entries are zero, so we may take $v = (0, 0, 1, 1)^{\bt}$. Third, if $(\bx_s')_{12}= 0$ for all $s=2,3,4$, then for dimensional reasons we have \[ \langle \bx_2', \bx_3', \bx_4'\rangle = \begin{pmatrix} 0 & 0 & \star\\ 0 & 0 & \star\\ 0 & 0 & 0 \end{pmatrix} \] and again $v = (0, 0, 1, 1)^{\bt}$ is the required vector. Finally, if $(\bx_s')_{23}= 0$ for all $s=2,3,4$, then arguing as above $v^* = (1, 0, 0, 1)$ is the required vector. \end{proof} \newcommand{\trx}{\chi} We now prove a series of reductions that will lead to the proof of Theorem~\ref{5isom}.
\begin{proposition}\label{isomRough} Let $m = 5$ and $T\in A\ot B\ot C$ be a concise, $1$-degenerate, 111-abundant tensor with $T(A^*)$ of corank one. Then up to $\GL(A)\times \GL(B)\times \GL(C)$ action it has the form as in Proposition~\ref{1Aonedegenerate111} with \begin{equation}\label{eq:uppersquare} \bx_s = \begin{pmatrix} 0 & \trx_s\\ 0 & 0 \end{pmatrix}, \ \ 2\leq s\leq 4, \end{equation} where the blocking is $(2,2)\times (2,2)$. \end{proposition} \begin{proof} We apply Proposition~\ref{1Aonedegenerate111}. It remains to prove the form~\eqref{eq:uppersquare}. By Proposition~\ref{1Aonedegenerate111}\ref{item3b} zero is an eigenvalue of every $\bx_s$. Suppose some $\bx_s$ is not nilpotent, so has at least two different eigenvalues. By commutativity, its generalized eigenspaces are preserved by the action of $\bx_2, \bx_3, \bx_4$, hence yield $V$ and $W$ as in Proposition~\ref{nodecomposition} and a contradiction to Corollary~\ref{noalgcor}. We conclude that every $\bx_s$ is nilpotent. We now prove that the codimension of $\sum_{s=2}^4 \tim \bx_s\subseteq C'$ is at least two. Suppose the codimension is at most one and choose $c\in C'$ such that $\sum_{s=2}^4 \tim \bx_s + \BC c = C'$. Let $\cA\subset \tend(C')$ be the unital subalgebra generated by $\bx_2$, $\bx_3$, $\bx_4$ and let $W = \cA \cdot c$. The above equality can be rewritten as $\langle \bx_2, \bx_3, \bx_4\rangle C' + \BC c = C'$, hence $\langle \bx_2, \bx_3, \bx_4\rangle C' + W = C'$. We repeatedly substitute the last equality into itself, obtaining \[ C' = \langle \bx_2, \bx_3, \bx_4\rangle C' + W = (\langle \bx_2, \bx_3, \bx_4\rangle)^2 C' + W = \ldots = (\langle \bx_2, \bx_3, \bx_4\rangle)^{10}C' + W = W, \] since $\bx_2, \bx_3, \bx_4$ commute and satisfy $\bx_s^4 = 0$. This proves that $C' = \cA\cdot c$, again yielding a contradiction with Corollary~\ref{noalgcor}. Applying the above argument to $\bx_2^{\bt}, \bx_{3}^{\bt}, \bx_4^{\bt}$ proves that joint kernel of $\bx_2, \bx_3, \bx_4$ is at least two-dimensional. We now claim that $\bigcap_{s=2}^4\ker(\bx_s) \subseteq \sum_{s=2}^4 \tim \bx_s$. Suppose not and choose $v\in C'$ that lies in the joint kernel, but not in the image. Let $W \subseteq C'$ be a subspace containing the image and such that $W \oplus \BC v = C'$. Then $\langle \bx_2, \bx_3, \bx_4\rangle W \subseteq \langle \bx_2, \bx_3, \bx_4\rangle C' \subseteq W$, hence $V = \BC v$ and $W$ yield a decomposition as in Proposition~\ref{nodecomposition} and a contradiction. The containment $\bigcap_{s=2}^4\ker(\bx_s) \subseteq \sum_{s=2}^4 \tim \bx_s$ together with the dimension estimates yield the equality $\bigcap_{s=2}^4\ker(\bx_s) = \sum_{s=2}^4 \tim \bx_s$. To obtain the form~\eqref{eq:uppersquare} it remains to choose a basis of $C'$ so that the first two basis vectors span $\bigcap_{s=2}^4\ker(\bx_s)$. \end{proof} \subsection{Classification of 111-abundant tensors under restricted isomorphism}\label{restrisom7} Refining Proposition~\ref{isomRough}, we now prove the following classification. \begin{theorem}\label{7isom} Let $m = 5$. Up to $\GL(A)\times \GL(B) \times \GL(C)$ action and swapping the $B$ and $C$ factors, there are exactly seven concise $1$-degenerate, 111-abundant tensors in $A\ot B\ot C$ with $T(A^*)$ of corank one. 
To describe them explicitly, let $$T_{\mathrm{M1}} = a_1\ot(b_1\ot c_1+b_2\ot c_2+b_3\ot c_3+b_4\ot c_4)+a_2\ot b_3\ot c_1 + a_3\ot b_4\ot c_1+a_4\ot b_4\ot c_2+a_5\ot(b_5\ot c_1+ b_4\ot c_5)$$ and $$T_{\mathrm{M2}} = a_1\ot(b_1\ot c_1+b_2\ot c_2+b_3\ot c_3+b_4\ot c_4)+a_2\ot( b_3\ot c_1-b_4\ot c_2) + a_3\ot b_4\ot c_1+a_4\ot b_3\ot c_2+a_5\ot(b_5\ot c_1+b_4\ot c_5). $$ Then the tensors are \begin{align} &T_{\mathrm{M2}} + a_5 \ot (b_1 \ot c_2 - b_3 \ot c_4)\label{M2s1}\tag{$T_{\cO_{58}}$}\\ &T_{\mathrm{M2}}\label{M2s0}\tag{$T_{\cO_{57}}$}\\ &T_{\mathrm{M1}} + a_5 \ot (b_5 \ot c_2 - b_1 \ot c_2 + b_3 \ot c_3)\label{M1aParams}\tag{$\tilde{T}_{\cO_{57}}$}\\ &T_{\mathrm{M1}} + a_5 \ot b_5 \ot c_2\label{M1aNoParams}\tag{$\tilde{T}_{\cO_{56}}$}\\ &T_{\mathrm{M1}} + a_5 \ot b_2 \ot c_2\label{M1bQ2}\tag{$T_{\cO_{56}}$}\\ &T_{\mathrm{M1}} + a_5 \ot b_3 \ot c_2\label{M1bQ4}\tag{$T_{\cO_{55}}$}\\ &T_{\mathrm{M1}}\label{M1bNoParams}\tag{$T_{\cO_{54}}$} \end{align} \end{theorem} These tensors are pairwise non-isomorphic, as we explain below. For a tensor $T\in A\ot B\ot C$ its annihilator in $\fgl(A) \times \fgl(B) \times \fgl(C)$ is called its \emph{symmetry Lie algebra}. The symmetry Lie algebra intersected with $\fgl(A) \times \fgl(B)$ is called the \emph{AB-part} etc. We list the dimensions of these Lie algebras below. A linear algebra computation (see, e.g., \cite{2019arXiv190909518C}) shows that the dimensions of the symmetry Lie algebras are \[ \begin{matrix} \mbox{case} & \eqref{M2s1} & \eqref{M2s0} &\eqref{M1aParams} & \eqref{M1aNoParams} & \eqref{M1bQ2} & \eqref{M1bQ4} & \eqref{M1bNoParams}\\ \mbox{full} & 16 & 17& 17 & 18 & 18 & 19 & 20\\ \mbox{AB-part} & 5 & 5 & 5 & 5 & 6 & 6 & 6 \\ \mbox{BC-part} & 5 & 6 & 5 & 6 & 5 & 6 & 6 \\ \mbox{CA-part} & 5 & 5 & 6 & 6 & 6 & 6 & 6 \\ \end{matrix} \] \begin{proof}[Proof of Theorem~\ref{7isom}] We utilize Proposition~\ref{isomRough} and its notation. By conciseness, the matrices $\bx_2$, $\bx_3$, $\bx_4$ are linearly independent, hence form a codimension one subspace of $\tend(\BC^2)$. We utilize the perfect pairing on $\tend(\BC^2)$ given by $(A,B)\mapsto \Tr(AB)$, so that $\langle \trx_2, \trx_3, \trx_4\rangle^{\perp} \subseteq\tend(\BC^2)$ is one-dimensional, spanned by a matrix $P$. Conjugation with an invertible $4\times 4$ block diagonal matrix with $2\times 2$ blocks $M$, $N$ maps $\trx_s$ to $M\trx_s N^{-1}$ and $P$ to $NPM^{-1}$. Under such conjugation the orbits are matrices of fixed rank, so after changing bases in $\langle a_2,a_3,a_4\rangle$, we reduce to the cases \begin{align}\tag{M1}\label{eq:M1} P = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}&\qquad \trx_2 = \begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix},\quad \trx_3 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},\quad \trx_4 = \begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}, \ \ and\\ P = \begin{pmatrix}\tag{M2}\label{eq:M2} 1 & 0\\ 0 & 1 \end{pmatrix}&\qquad \trx_2 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix},\quad \trx_3 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},\quad \trx_4 = \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}. \end{align} In both cases the joint right kernel of our matrices is $(*, *, 0, 0)^{\bt}$ while the joint left kernel is $(0, 0, *, *)$, so $w_5 = (w_{5,1}, w_{5,2}, 0, 0)^{\bt}$ and $u_5 = (0,0,u_{5,3},u_{5,4})$. 
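The verifications in the last paragraph are elementary linear algebra. For the reader's convenience we record a short computational sketch (illustrative only, not part of the proof; the helper names are ad hoc) which checks, in both normal forms, that $P$ is trace-orthogonal to $\trx_2, \trx_3, \trx_4$ (hence spans the one-dimensional perpendicular space) and that the joint right and left kernels of $\bx_2, \bx_3, \bx_4$ are $(*, *, 0, 0)^{\bt}$ and $(0, 0, *, *)$, as stated.
\begin{verbatim}
# Sketch (illustrative only): trace-orthogonality of P and joint kernels of
# x2, x3, x4 in the normal forms (M1) and (M2).
import sympy as sp

def E(i, j):  # 2x2 matrix unit
    return sp.Matrix(2, 2, lambda a, b: 1 if (a, b) == (i, j) else 0)

def big(chi):  # embed chi as the upper-right block of the (2,2)x(2,2) blocking
    return sp.Matrix.vstack(sp.Matrix.hstack(sp.zeros(2), chi), sp.zeros(2, 4))

cases = {'M1': (E(0, 1), [E(0, 0), E(0, 1), E(1, 1)]),
         'M2': (sp.eye(2), [sp.diag(1, -1), E(0, 1), E(1, 0)])}
for name, (P, chis) in cases.items():
    assert all((c * P).trace() == 0 for c in chis)   # P lies in the perp
    xs = [big(c) for c in chis]
    right = sp.Matrix.vstack(*xs).nullspace()                # joint right kernel
    left = sp.Matrix.vstack(*[m.T for m in xs]).nullspace()  # joint left kernel
    print(name, [list(k) for k in right], [list(k) for k in left])
\end{verbatim}
In both cases the joint right kernel is spanned by the first two coordinate vectors and the joint left kernel by the last two, as claimed.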
\subsubsection{Case~\eqref{eq:M2}}\label{ssec:M2} In this case there is an involution, namely conjugation with $$\begin{pmatrix} 0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1&0\\ 0&0&1&0&0\\ 0&0&0&0&1\end{pmatrix} \in \GL_5 $$ that preserves $P$, and hence $\langle \bx_2,\bx_3,\bx_4\rangle$, while it swaps $w_{5,1}$ with $w_{5,2}$ and $u_{5,3}$ with $u_{5,4}$. Using this involution and rescaling $c_5$, we assume $w_{5,1} = 1$. The matrix \[ \begin{pmatrix} u_{5,3} & u_{5,4}\\ u_{5,3}w_{5,2} & u_{5,4}w_{5,2} \end{pmatrix} \] belongs to $\langle \trx_2, \trx_3, \trx_4\rangle$ by Proposition~\ref{1Aonedegenerate111}\ref{item3}, so it is traceless, i.e., $u_{5,3} + u_{5,4}w_{5,2} = 0$. This forces $u_{5,4}\neq 0$, since otherwise $u_5 = 0$. Rescaling $b_5$ we assume $u_{5,4} = 1$. The trace is now $u_{5,3} + w_{5,2}$, so $u_{5,3} = -w_{5,2}$. The condition~\eqref{finalpiece} applied for $s=2,3,4$ gives linear conditions on the possible matrices $\bx_5$ and jointly they imply that \begin{equation}\label{eq:M2lastGeneral} \bx_5 = \begin{pmatrix} p_1 & p_2 & * & *\\ p_3 & p_4 & * & *\\ 0 & 0 & p_4 - w_{5,2}(p_1 + p_5) & p_5\\ 0 & 0 & -p_3 - w_{5,2}(p_6 - p_1) & p_6 \end{pmatrix} \end{equation} for arbitrary $p_i\in\BC$ and arbitrary starred entries. Using \eqref{five} with $u^* = (1, 0, 0, 0)^{\bt}$ and $w^* = (0, 0, 0, 1)$, we may change coordinates to assume that the first row and last column of $\bx_5$ are zero, and subtracting a multiple of $\bx_4$ from $\bx_5$ we obtain further that the $(2,3)$ entry of $\bx_5$ is zero, so \[ \bx_5 = \begin{pmatrix} 0 & 0 & 0 & 0\\ p_3 & p_4 & 0 & 0\\ 0 & 0 & p_4 & 0\\ 0 & 0 & -p_3 & 0 \end{pmatrix}. \] Subtracting $p_4\bx_1$ from $\bx_5$ and then adding $p_4$ times the last row (column) to the fourth row (column) we arrive at \begin{equation}\label{eq:M2lastSpecial} \bx_5 = \begin{pmatrix} 0 & 0 & 0 & 0\\ p_3 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & -p_3 & 0 \end{pmatrix} \end{equation} for possibly different values of the parameter $p_3$. Conjugating with the $5\times 5$ block diagonal matrix \[ \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ w_{5,2} & 1 & 0 & 0& 0 \\ 0& 0& 1 & 0& 0\\ 0& 0& w_{5,2} & 1& 0\\ 0& 0& 0& 0& 1 \end{pmatrix} \] does not change $P$, hence $\langle \bx_2, \bx_3, \bx_4\rangle$, and it does not change $\bx_5$ either, but it makes $w_{5,2} = 0$. Thus we arrive at the case when $w_5 = (1, 0, 0, 0)^{\bt}$, $u_5 = (0, 0, 0, 1)$ and $\bx_5$ is as in~\eqref{eq:M2lastSpecial}. There are two subcases: either $p_3 = 0$ or $p_3\neq 0$. In the latter case, conjugation with the diagonal matrix with diagonal $(1,p_3,1,p_3,1)$ does not change $\langle \bx_2, \bx_3, \bx_4\rangle$ and it maps $\bx_5$ to the same matrix but with $p_3 = 1$. In summary, in this case we obtain the types~\eqref{M2s0} and~\eqref{M2s1}. \subsubsection{Case~\eqref{eq:M1}} For every $t\in \BC$ conjugation with $$ \begin{pmatrix} 1 & t&0& 0&0 \\ 0 & 1& 0&0&0 \\ 0&0 &1 & t&0\\ 0&0 &0 & 1&0\\ 0&0 &0 & 0&1 \end{pmatrix} $$ preserves $\langle \bx_2,\bx_3,\bx_4\rangle $ and maps $u_5$ to $(0, 0, u_{5,3}, u_{5,4}-tu_{5,3})$ and $w_5$ to $(w_{5,1}+tw_{5,2}, w_{5,2}, 0, 0)^{\bt}$. Taking $t$ general, we obtain $w_{5,1}, u_{5,4}\neq 0$ and rescaling $b_5, c_5$ we obtain $u_{5,4} = 1 = w_{5,1}$. Since $w_5u_5\in\langle \bx_2, \bx_3, \bx_4\rangle$ and the trace pairing of the corresponding $2\times 2$ block with $P$ equals $w_{5,2}u_{5,3}$, this forces $u_{5,3} = 0$ or $w_{5,2} = 0$.
Using~\eqref{finalpiece} again, we obtain that \begin{equation}\label{eq:M1lastGeneral} \bx_5 = \begin{pmatrix} q_1 & * & * & *\\ w_{5,2}(q_1-q_3) & q_2 & * & *\\ 0 & 0 & q_3 & *\\ 0 & 0 & u_{5,3}(q_4-q_2) & q_4 \end{pmatrix} \end{equation} for arbitrary $q_1, q_2, q_3, q_4\in \BC$ and arbitrary starred entries. We normalize further. Transposing (this is the unique point of the proof where we swap the $B$ and $C$ coordinates) and swapping $1$ with $4$ and $2$ with $3$ rows and columns (which is done by conjugation with an appropriate permutation matrix) does not change the space $\langle \bx_2, \bx_3, \bx_4\rangle$ or $\bx_1$ and it maps $u_5$, $w_5$ to $(0, 0, w_{5,2}, w_{5,1})$, $(u_{5,4}, u_{5,3}, 0, 0)^{\bt}$. Using this operation if necessary, we may assume $u_{5,3} = 0$. By subtracting multiples of $u_5$, $w_5$ and $\bx_2$, $\bx_3$, $\bx_4$ we obtain \begin{equation}\label{eq:M1lastSpecial} \bx_5 = \begin{pmatrix} 0 & 0 & 0 & 0\\ -q_3w_{5,2} & q_2 & q_4 & 0\\ 0 & 0 & q_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} \end{equation} Rescaling the second row and column we reduce to two cases: \begin{align}\tag{M1a}\label{eq:M1a} w_{5,2} & = 1\\ \tag{M1b}\label{eq:M1b} w_{5,2} & = 0 \end{align} \subsubsection*{Case~\eqref{eq:M1a}}\label{sssec:M1a} In this case we have $w_5 = (1, 1, 0, 0)^{\bt}$ and $u_5 = (0, 0, 0, 1)$. We first add $q_4\bx_2$ to $\bx_5$ and subtract $q_4 w_5$ from the fourth column. This sets $q_4=0$ in~\eqref{eq:M1lastSpecial}. Next, we subtract $-q_2\bx_1$ from $\bx_5$ and then add $q_2 u_5$ to the first column and $q_2 w_5$ to the fourth row. This makes $q_2 = 0$ (and changes $q_3$). Finally, if $q_3$ is nonzero, we can rescale $\bx_5$ by $q_3^{-1}$ and rescale the fifth row and column. This yields $q_3 = 1$. In summary, we have two cases: $(q_2, q_3, q_4) = (0, 1, 0)$ and $(q_2, q_3, q_4) = (0, 0, 0)$. These are the types \eqref{M1aNoParams} and~\eqref{M1aParams}. \subsubsection*{Case~\eqref{eq:M1b}}\label{sssec:M1b} In this case we have $w_5 = (1, 0, 0, 0)^{\bt}$ and $u_5 = (0, 0, 0, 1)$. Subtract $-q_3\bx_1$ from $\bx_5$ and then add $q_3 u_5$ to the first column and $q_3 w_5$ to the fourth row. This makes $q_3 = 0$ (and changes $q_2$). Subcase $q_2 = 0$: Then either $q_4 = 0$ and we obtain type~\eqref{M1bNoParams}, or we rescale $\bx_5$ and the fifth row and column to obtain $q_4 = 1$. Here $(q_2, q_3, q_4) = (0, 0, 1)$; this is type \eqref{M1bQ4}. Subcase $q_2 \neq 0$: Then we rescale $\bx_5$ and the fifth row and column to obtain $q_2 = 1$. Subtract $q_4$ times the second column from the third and add $q_4$ times the third row to the second. This does not change $\bx_1$, \ldots , $\bx_4$ and it changes $\bx_5$ by making $q_4 = 0$. Here $(q_2, q_3, q_4) = (1, 0, 0)$; this is type \eqref{M1bQ2}. We have shown that there are at most seven isomorphism types up to $\GL(A)\times \GL(B)\times \GL(C)$ action, while the dimensions of the Lie algebras and restricted Lie algebras show that they are pairwise non-isomorphic. This concludes the proof of Theorem~\ref{7isom}. \end{proof} \subsection{Proof of Theorem~\ref{5isom}}\label{isom7} \begin{proof} We first prove that there are exactly five isomorphism types of concise $1$-degenerate 111-abundant tensors up to the action of $\GL_5(\BC)^{\times 3}\rtimes \FS_3$. By Proposition~\ref{5notconcise}, after possibly permuting $A$, $B$, $C$, the space $T(A^*)$ has corank one. It is enough to prove that in the setup of Theorem~\ref{7isom} the two pairs of tensors with symmetry Lie algebras of the same dimension are isomorphic.
Swapping the $A$ and $C$ coordinates of the tensor in case~\eqref{M1bQ2} and rearranging rows, columns, and matrices gives case~\eqref{M1aNoParams}. Swapping the $A$ and $B$ coordinates of the tensor in case~\eqref{M1aParams} and rearranging rows and columns, we obtain the tensor \[ a_{1}(b_{1}c_{1}+b_{2}c_{2}+b_{3}c_{3}+b_{4}c_{4})+a_{2} b_{3}c_{2} +a_{3}(b_{4} c_{1}+b_{4}c_{2}) +a_{4}(b_{3}c_{1}-b_{4}c_{2}) +a_{5}(b_{3}c_{5}+b_{5}c_{1}+b_{4}c_{5}) \] The space of $2\times 2$ matrices associated to this tensor is perpendicular to $\begin{pmatrix} 1 & 0\\ 1 & -1 \end{pmatrix}$ which has full rank, hence this tensor is isomorphic to one of the~\eqref{eq:M2} cases. The dimension of the symmetry Lie algebra shows that it is isomorphic to~\eqref{M2s0}. This concludes the proof that there are exactly five isomorphism types. \subsection{Proof of the degenerations}\label{orb7} Write $T \unrhd T'$ if $T$ degenerates to $T'$ and $T \simeq T'$ if $T$ and $T'$ lie in the same orbit of $\GL_5(\BC)^{\times 3}\rtimes \FS_3$. The above yields~$\eqref{M1bQ2} \simeq \eqref{M1aNoParams}$ and $\eqref{M1aParams} \simeq \eqref{M2s0}$. Varying the parameters in~\S\ref{ssec:M2}, \S\ref{sssec:M1a}, \S\ref{sssec:M1b} we obtain degenerations which give \[ \eqref{M2s1} \unrhd \eqref{M2s0} \simeq \eqref{M1aParams} \unrhd \eqref{M1aNoParams} \simeq \eqref{M1bQ2} \unrhd \eqref{M1bQ4} \unrhd \eqref{M1bNoParams}, \] which proves the required nesting. For example, in \S\ref{sssec:M1b} we have a two-parameter family of tensors parameterized by $(q_2, q_4)\in \BC^2$. As explained in that subsection, their isomorphism types are \begin{tabular}{c c c c} & $q_2 \neq0$ & $q_2 = 0$, $q_4\neq 0$ & $q_2 = q_4 = 0$\\ & $\eqref{M1bQ2}$ & $\eqref{M1bQ4}$ & $\eqref{M1bNoParams}$ \end{tabular} This exhibits the last two degenerations; the others are similar. To complete the proof, we need to show that these tensors have minimal border rank. By degenerations above, it is enough to show this for~\eqref{M2s1}. We give two proofs. \color{black} \subsection{Two proofs that the tensors have minimal border rank}\label{end7} \subsubsection{Proof one: the tensor \eqref{M2s1} lies in the closure of minimal border rank $1_A$-generic tensors}\label{ex:M2} \def\oldb{p_3} Our first approach is to prove that~\eqref{M2s1} lies in the closure of the locus of $1_A$-generic concise minimal border rank tensors. We do this a bit more generally, for all tensors in the case~\eqref{eq:M2}. By the discussion above every such tensor is isomorphic to one where $\bx_5$ has the form~\eqref{eq:M2lastSpecial} and we will assume that our tensor $T$ has this form for some $\oldb{}\in \BC$. Recall the notation from Proposition \ref{1Aonedegenerate111}. Take $u_2 = 0$, $w_2 = 0$, $u_3 := (0, 0, -\oldb{}, 0)$, $w_3^{\bt} = (0, \oldb{}, 0, 0)$, $u_4 = 0$, $w_4 = 0$. We see that $u_s\bx_m = 0$, $\bx_mw_s = 0$, and $w_{s}u_{t} = w_{t}u_{s}$ for all $s,t$, so for every $ \ep\in \BC^*$ we have a commuting quintuple \[ \Id_5,\quad \begin{pmatrix} \bx_s & w_s\\ u_s\ep & 0 \end{pmatrix}\quad s=2,3,4,\quad\mbox{and}\quad \begin{pmatrix} \bx_5 & w_5\ep^{-1}\\ u_5 & 0 \end{pmatrix} \] We check directly that the tuple is End-closed, hence by~Theorem~\ref{1stargprim} it corresponds to a tensor of minimal border rank. (Here we only use the $m=5$ case of the theorem, which is significantly easier than the $m=6$ case.) 
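These verifications are mechanical. For concreteness, the following sketch (illustrative only, not part of the proof; it uses the normalization $\ep = 1$ and $\oldb{} = 1$, i.e., the data of the case~\eqref{M2s1}) checks both the pairwise commutativity of the displayed quintuple and the End-closed condition.
\begin{verbatim}
# Sketch (illustrative only): verify that the displayed quintuple of 5x5
# matrices pairwise commutes and spans an End-closed space, for ep = 1 and
# p3 = 1 (the case T_{O_58} after the normalizations above).
import sympy as sp

ep, p3 = sp.Integer(1), sp.Integer(1)
Z = sp.zeros
x = {2: sp.Matrix([[0,0,1,0],[0,0,0,-1],[0,0,0,0],[0,0,0,0]]),
     3: sp.Matrix([[0,0,0,1],[0,0,0,0],[0,0,0,0],[0,0,0,0]]),
     4: sp.Matrix([[0,0,0,0],[0,0,1,0],[0,0,0,0],[0,0,0,0]]),
     5: sp.Matrix([[0,0,0,0],[p3,0,0,0],[0,0,0,0],[0,0,-p3,0]])}
w = {2: Z(4,1), 3: sp.Matrix([0,p3,0,0]), 4: Z(4,1), 5: sp.Matrix([1,0,0,0])}
u = {2: Z(1,4), 3: sp.Matrix([[0,0,-p3,0]]), 4: Z(1,4), 5: sp.Matrix([[0,0,0,1]])}

def quint(s):  # the 5x5 matrix of the quintuple attached to index s
    wcol = w[s] / ep if s == 5 else w[s]
    urow = u[s] if s == 5 else ep * u[s]
    return sp.Matrix.vstack(sp.Matrix.hstack(x[s], wcol),
                            sp.Matrix.hstack(urow, Z(1,1)))

tuple5 = [sp.eye(5)] + [quint(s) for s in (2, 3, 4, 5)]
assert all(A*B == B*A for A in tuple5 for B in tuple5)        # commutativity
span = sp.Matrix.hstack(*[m.reshape(25, 1) for m in tuple5])
assert all(sp.Matrix.hstack(span, (A*B).reshape(25, 1)).rank() == span.rank()
           for A in tuple5 for B in tuple5)                   # End-closed
\end{verbatim}
In this illustrative computation both assertions hold; in fact the only nonzero products among the non-identity matrices are, up to sign, again members of the tuple.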
Multiplying the matrices of this tuple from the right by the diagonal matrix with entries $1, 1, 1, 1, t$ and then taking the limit with $t\to 0$ yields the tuple of matrices corresponding to our initial tensor $T$. While we have shown all~\eqref{eq:M2} cases are of minimal border rank, it can be useful for applications to have an explicit border rank decomposition. What follows is one such: \subsubsection{Proof two: explicit proof of minimal border rank for~\eqref{M2s1}} For $t\in \BC^*$, consider the matrices \[\hspace*{-.8cm} B_1=\begin{pmatrix} 0&0&1&1& 0 \\ 0& 0&-1&-1& 0 \\ 0& 0&0&0& 0 \\ 0& 0&0&0& 0 \\ 0& 0&0&0& 0 \end{pmatrix}, \ \ B_2=\begin{pmatrix} 0&0&-1&1& 0 \\ 0& 0&-1&1& 0 \\ 0& 0&0&0& 0 \\ 0& 0&0&0& 0 \\ 0& 0&0&0& 0 \end{pmatrix}, \ \ B_3=\begin{pmatrix} 0&0&0&0& 0 \\ 0& t&1&0& 0 \\ 0& t^2&t&0& 0 \\ 0& 0&0&0& 0 \\ 0& 0&0&0& 0 \end{pmatrix}, B_4=\begin{pmatrix} -t&0&0&1& 0 \\ 0& 0& 0&0& 0 \\ 0&0&0&0& 0 \\ t^2& 0&0&-t& 0 \\ 0& 0&0&0& 0 \end{pmatrix}, \] \[ B_5= (1, -t, 0, -t, t^{2})^{\bt}\cdot (-t, 0, t, 1, t^{2}) = \begin{pmatrix} -t&0&t&1&t^{2}\\ t^{2}&0&-t^{2}&-t&-t^{3}\\ 0&0&0&0&0\\ t^{2}&0&-t^{2}&-t&-t^{3}\\ -t^{3}&0&t^{3}&t^{2}&t^{4} \end{pmatrix} \] The limit at $t\to 0$ of this space of matrices is the required tuple. This concludes the proof of Theorem~\ref{5isom}. \end{proof} \section{Proof (1)=(4) in Theorem \ref{1stargprim}} \label{quotreview} \subsection{Preliminary remarks}\label{prelimrems} Let $T\in A\ot B\ot C=\BC^m\ot \BC^m\ot \BC^m$ be $1_A$-generic and satisfy the $A$-Strassen equations. Let $E \subseteq \fsl(C)$ be the associated $m-1$-dimensional space of commuting traceless matrices as in~\S\ref{dictsectOne}. Let $\ul{C}$ be the associated module and $S$ the associated polynomial ring, as in \S\ref{dictsectOne}. By~\S\ref{dictsectOne} the tensor $T$ has minimal border rank if and only if the space $E$ is a limit of spaces of simultaneously diagonalizable matrices if and only if $\ul{C}$ is a limit of semisimple modules. The \emph{principal component} of the Quot (resp. Hilbert) scheme is the closure of the set of semisimple modules (resp. algebras). Similarly, the \emph{principal component} of the space of commuting matrices is the closure of the space of simultaneously diagonalizable matrices. A tensor $T$ has minimal border rank if and only if $E$ lies in the principal component of the space of commuting matrices if and only if $\ul{C}$ lies in the principal component of the Quot scheme. Write $\tann(\ul C)=\{s\in S\mid s(\ul C)=0\}$. Let $\a_i$ be a basis of $A^*$ with $T(\a_1)$ of full rank and $X_i=T(\a_{i })T(\a_1)\inv\in \tend(C)$, for $1\leq i\leq m$. The algebra of matrices generated by $\Id,X_2\hd X_m$ is isomorphic to $S/\tann(\ul C)$. The End-closed condition in the language of modules becomes the requirement that the algebra of matrices has dimension (at most) $m$. The tensor $T$ is assumed to be $A$-concise, i.e., $\tdim\langle \Id, X_2\hd X_m\rangle=m$, so the algebra is equal to this linear span: $X_iX_j\in \langle \Id=X_1, X_2\hd X_m\rangle$. Our argument proceeds by examining the possible structures of $\ul C$ and $S/\tann(\ul C)$ and, in each case, proving that $\ul{C}$ lies in the principal component. Let $r$ be the minimal number of generators of $\ul{C}$. In this section we introduce the additional index range $$ 2\leq y,z,q\leq m. 
$$ When $S/\tann(\ul C)$ is \emph{local}, i.e., there is a unique maximal ideal $\fm$, we consider the Hilbert function $H_{\ul C}(k):=\tdim (\fm^k\ul C/\fm^{k+1}\ul C)$ and by Nakayama's Lemma $H_{\ul C}(0)=r$. Similarly, we consider the Hilbert function $H_{S/\tann(\ul C)}(k):=\tdim (\fm^k/\fm^{k+1})$. Since the algebra is local, $H_{S/\tann(\ul{C})}(0) = 1$. Observe that if $X_yX_zX_w = 0$ for all $y,z,w$, then $\tann(\ul{C})$ contains $S_{\geq3}$, which implies $S/\tann(\ul{C})$ is local. When $H_{S/\tann(\ul{C})}(1) = k<m-1$, we may work with a polynomial ring in $k$ variables, $\tilde S=\BC[y_1\hd y_k]$. We will use the following results, which significantly restrict the possible structure of $\ul{C}$ and $S/\tann(\ul{C})$. \begin{enumerate}[label=(\roman*)] \item\label{stdfact} For a finite algebra $\cA=\Pi \cA_t$, with the $\cA_t$ local, the algebra $\cA$ can be generated by $q$ elements if and only if $H_{\cA_t}(1)\leq q$ for all $t$. From the geometric perspective, the number of generators needed is the smallest dimension of an affine space the associated scheme can be realized inside, and one just chooses the support of each $\cA_t$ to be a different point of $\BA^{q}$. \item \label{rone} When the module $\ul C$ is generated by a single element (so we are in the Hilbert scheme), and $m\leq 7$, all such modules lie in the principal component \cite{MR2579394}. \item \label{rthree} By \cite[Cor. 4.3]{jelisiejew2021components}, when $m\leq 10$ and the algebra of matrices generated by $\Id, X_2, \ldots ,X_m$ is generated by at most three generators, then the module lies in the principal component. When $S/\tann(\ul{C})$ is local, this happens when $ H_{S/\tann(\ul C)}(1)\leq 3$. \item \label{squarezero} When $m-1\leq 6$, if $X_yX_z=0$ for all $y,z$, then the module lies in the principal component by \cite[Thm. 6.14]{jelisiejew2021components}. This holds when $S/\tann(\ul{C})$ is local with $H_{S/\tann(\ul C)}(2)=0$. \item \label{613} If $X_yX_zX_w=0$ for all $y,z,w$ (i.e., $H_{S/\tann(\ul C)}(3)=0$), $\tdim\sum\tim(X_yX_z)=1$ (i.e., $H_{S/\tann(\ul C)}(2)=1$), and $\tdim \cap_{y,z}\tker(X_yX_z)=m-1$, then $(X_2\hd X_m)$ deforms to a tuple with a matrix having at least two eigenvalues. Explicitly, there is a normal form so that $$ X_y=\begin{pmatrix} 0&0& H_y&*&*\\ 0&0&0&*&*\\ 0&0&0&0& G_y\\ 0&0&0&0& 0\\0&0&0&0& 0\end{pmatrix} $$ where $X_2^2\neq 0$ and all other products are zero. Then $$ Y:=\begin{pmatrix} 0&0&0&0& 0\\ 0&0&0&0& 0\\ 0&0&G_2H_2&0& 0\\ 0&0&0&0& 0\\0&0&0&0& 0\end{pmatrix} $$ commutes with all the $X_i$, and the deformation (to a not necessarily traceless tuple) is $(X_2+\lam Y,X_3\hd X_m)$ by \cite[Lem. 6.13]{jelisiejew2021components}. \end{enumerate} We now show that all End-closed subspaces $\tilde E=\langle \Id, E\rangle $ lie in the principal component when $m=5,6$ by, in each possible case, assuming the space is not in the principal component and obtaining a contradiction. \subsection{Case $m=5$}\label{m51g} \subsubsection{Case: $E$ contains an element with more than one eigenvalue, i.e., $E$ is not nilpotent}\label{m5nonlocal} By \cite[Lem. 3.12]{jelisiejew2021components} this is equivalent to saying the algebra $S/\tann(\ul C)$ is a nontrivial product of algebras $\Pi_t \cA_t$. Since $\tdim (S/\tann(\ul C))=5$, we have for each $t$ that $\tdim(\cA_t)\leq 4$ and thus $H_{\cA_t}(1)\leq 3$.Using \ref{stdfact}, we see $S/\tann(\ul C)$ is generated by at most three elements, a contradiction by \ref{rthree}. 
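In the local cases below, the arguments are organized by the Hilbert functions introduced above, and these are mechanical to compute once the matrices $X_y$ are given explicitly. The following is a minimal computational sketch (illustrative only, not part of the argument; the function name is ad hoc, and it assumes the $X_y$ are commuting and nilpotent so that the loop terminates).
\begin{verbatim}
# Sketch (illustrative only): compute H_{C}(k) = dim(m^k C / m^{k+1} C)
# from explicit commuting nilpotent matrices X_2, ..., X_m acting on C = C^m.
import sympy as sp

def module_hilbert_function(Xs, m):
    dims, layer = [], sp.eye(m)            # columns of `layer` span m^k C
    while layer.rank() > 0:
        dims.append(layer.rank())
        layer = sp.Matrix.hstack(*[X * layer for X in Xs])   # m^{k+1} C
    dims.append(0)
    return [dims[k] - dims[k + 1] for k in range(len(dims) - 1)]

# Example: the 4x4 matrices E_{13}, E_{14}, E_{24} give Hilbert function [2, 2].
E = lambda i, j: sp.Matrix(4, 4, lambda a, b: 1 if (a, b) == (i, j) else 0)
print(module_hilbert_function([E(0, 2), E(0, 3), E(1, 3)], 4))
\end{verbatim}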
\subsubsection{Case: all elements of $E$ are nilpotent} In this case $\tann(\ul{C})$ contains $S_{\geq (m-1)m}$ because any nilpotent $m\times m$ matrix raised to the $m$-th power is zero and we have $m-1$ commuting matrices that we could multiply together. Thus $S/\tann(\ul{C})$ is local and we can speak about Hilbert functions. By \ref{rthree} we assume $H_{S/\tann(C)}(1)\geq 4$, so $H_{S/\tann(\ul C)}(2)=0$. Thus for all $z,w$, $y_zy_w\in \tann(\ul C)$ and we conclude by~\ref{squarezero}. \subsection{Case $m=6$}\label{m61g} For non-local $S/\tann(\ul C)$, arguing as in~\S\ref{m5nonlocal} the only case is $S/\tann(\ul C) \simeq \cA_1\times \cA_2$ with $\dim\cA_1=1$ and $H_{\cA_2}(1) = 4$, $H_{\cA_2}(2) = 0$. Correspondingly the module $\ul{C}$ is a direct sum of modules $\ul{C}_1\oplus \ul{C}_2$, where $\cA_2 \simeq S/\tann(\ul{C}_2)$. By ~\ref{rthree} and ~\ref{squarezero} the module $\ul{C}_2$ lies in the principal component and trivially so does $\ul{C}_1$. Hence $\ul{C}$ lies in the principal component. We are reduced to the case $S/\tann(\ul C)$ is local. By~\ref{rthree} we assume $H_{S/\tann(\ul C)}(1)> 3$. Moreover, if $H_{S/\tann(\ul C)}(1)=5$, we have $H_{S/\tann(\ul C)}(2)=0$ and we conclude by~\ref{squarezero}. Thus the unique Hilbert function $H_{S/\tann(\ul C)}$ left to consider is $(1,4,1)$. \subsubsection{Case $\tdim \sum_{y,z}\tim(X_yX_z)=1$, i.e., $H_{S/\tann(\ul C)}(2)=1$} Since for all $y,z$, $X_yX_z$ lies in the $m$ dimensional space $\langle \Id, X_2\hd X_m\rangle$, we must have $\tdim(\cap_{y,z}\tker(X_yX_z))=m-1$ and thus~\ref{613} applies. Let $\ul C(\lam)$ denote $\ul C$ with this deformed module structure. The assumption that $X_1X_y=X_y X_1=0$ for $2\leq y\leq m$ implies $H_1K_y=0$ and $H_y K_1=0$ which implies that $\ul C(\lam)$ also satisfies the End-closed condition. Since $\ul C(\lam)$ is not supported at a point, it cannot have Hilbert function $(1,4,1)$ so it is in the principal component, and thus so is $\ul C= \ul C(0)$. \subsubsection{Case $\tdim \sum_{y,z}\tim(X_yX_z)>1$} This hypothesis says $H_{\ul C }(2)\geq 2$. Since $H_{S/\tann(\ul{C})}(3) = 0$ also $H_{\ul{C}}(3) = 0$. We have $H_{\ul C}(0)+ H_{\ul C}(1)+H_{\ul C}(2)=6$. If $H_{\ul C}(0)=1$ then~\ref{rone} applies, so assume $H_{\ul C}(0)\geq 2$. If $H_{\ul C}(1)=1$, then a near trivial case of Macaulay's growth bound for modules \cite[Cor. 3.5]{MR1774095}, says $H_{\ul C }(2)<2$, so the Hilbert function $H_{\ul C}$ is $(2,2,2)$, and the minimal number of generators of $\ul{C}$ is $H_{\ul{C}}(0) = 2$. Let $F = Se_1 \oplus Se_2$ be a free $S$-module of rank two. Fix an isomorphism $\ul{C} \simeq F/\cR$, where $\cR$ is the subspace generated by the relations. We briefly recall the apolarity theory for modules from~\cite[\S4.1]{jelisiejew2021components}. Let $\tilde S = \BC[y_1, \ldots ,y_4]$ which we may use instead of $S$ because $H_{S/\tann(\ul{C})}(1) = 4$. Let $\tilde S^* = \bigoplus_j \thom(\tilde S_j,\BC) = :\BC[z_1, \ldots ,z_4]$ be the dual polynomial ring. Let $F^*:=\bigoplus_j \thom(F_j,\BC) = \tilde S^*e_1^* \oplus \tilde S^*e_2^* = \BC[z_1, \ldots ,z_4]e_1^* \oplus \BC[z_1, \ldots ,z_4]e_2^*$. The action of $\tilde S$ on $F^*$ is the usual contraction action. In coordinates it is the ``coefficientless'' differentiation: $y_i^d(z_j^u)=\d_{ij}z_j^{u-d}$ when $u\geq d$ and is zero otherwise. The subspace $\cR^{\perp} \subseteq F^*$ is an $\tilde S$-submodule. Consider a minimal set of generators of $\cR^\perp \subseteq F^*$. 
The assumption $H_{\ul C}(2)=2$ implies there are two generators in degree two, write their leading terms as $\s_{11}e_1^*+\s_{12}e_2^*$ and $\s_{21}e_1^*+\s_{22}e_2^*$, with $\s_{uv}\in \tilde S_2$. Then $\tann(\ul C)\cap \tilde S_{\geq 2}=\langle \s_{11}\hd \s_{22}\rangle^\perp\cap \tilde S_{\geq 2}$. But $H_{\tilde S/\tann(\ul C)}(2)=1$, so all the $\s_{uv}$ must be a multiple of some $\s$ and after changing bases we write the leading terms as $\s e_1^*$, $\s e_2^*$. We see $\langle y_i\s e_1^*+\ldots , y_i\s e_2^*+\ldots, 1\leq i\leq 4\rangle\subseteq \cR^{\perp}$, where $y_i$ acts on $\tilde S^*$ by contraction and the ``\ldots'' are lower order terms. Now $H_{\ul C}(1)=2$ says this is a $2$-dimensional space, i.e., that $\s$ is a square. Change coordinates so $\s=z_1^2$. Thus the generators of $\cR^{\perp}$ include $Q_1:=z_1^2 e_1^*+\ell_{11}e_1^* +\ell_{12}e_2^*, Q_2:=z_1^2 e_2^*+\ell_{21}e_1^*+ \ell_{22}e_2^*$ for some linear forms $\ell_{uv}$. These two generators plus their contractions (by $ y_1,y_1^2$) span a six dimensional space, so these must be all the generators. Our module is thus a degeneration of the module where the $z_1,\ell_{uv}$ are all independent linear forms. Take a basis of the module $\cR^{\perp}\subseteq F^*$ as $Q_1,Q_2,y_1Q_1,y_1Q_2,y_1^2Q_1,y_1^2Q_2$. Then the matrix associated to the action of $y_1$ is $$ \begin{pmatrix} 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{pmatrix} $$ and if we deform our module to a space where the linear forms $z_1,\ell_{uv}$ are all independent and change bases such that $\ell_{11}=y_2^*$, $\ell_{12}=y_3^*$, $\ell_{21}=y_4^*$, $\ell_{22}=y_5^*$, we may write our space of matrices as $$ \begin{pmatrix} 0&0&z_1&0&z_2&z_3\\ 0&0&0&z_1&z_4&z_5\\ 0&0&0&0&z_1&0\\ 0&0&0&0&0&z_1\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{pmatrix} $$ Using \emph{Macaulay2 VersalDeformations} \cite{MR2947667} we find that this tuple is a member of the following family of tuples of commuting matrices parametrized by $\lambda\in \BC$. Their commutativity is straightforward if tedious to verify by hand $$ \begin{pmatrix} 0&\lam^2 z_4&z_1&-\lam z_5&z_2&z_3\\ -\lam z_1&0&-\lam z_4&z_1&z_4&z_5\\ -\lam^3z_4&\lam^2z_1&0&\lam^2z_4&z_1&-\lam z_5\\ 0&0&0&-\lam^2z_5&\lam(z_2-z_4)&\lam z_3+z_1\\ 0&0&0&0&-\lam^2z_5&0\\ 0&0&0&0&0&-\lam^2z_5\end{pmatrix}. $$ Here there are two eigenvalues, each with multiplicity three, so the deformed module is a direct sum of two three dimensional modules, each of which thus has an associated algebra with at most three generators and we conclude by \ref{rthree}. \qed \section{Minimal cactus and smoothable rank}\label{minsmoothsect} For a degree $m$ zero-dimensional subscheme $\Spec(R)$ with an embedding $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)\subseteq \BP(A\ot B\ot C)$, its \emph{span} $\langle \Spec(R)\rangle$ is the zero set of $I_1(\Spec(R))\subseteq A^*\ot B^*\ot C^*$, where $I_1(\Spec(R))$ is the degree one component of the homogeneous ideal $I$ of the embedded $\Spec(R)$. We say that the embedding $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)$ is \emph{nondegenerate} if its span projects surjectively to $\mathbb{P} A $, $\mathbb{P} B $, and $\mathbb{P} C $. For a nondegenerate embedding, the maps $\Spec(R)\to \BP A$, $\Spec(R)\to \BP B$, $\Spec(R) \to \BP C$, induced by projections, are embeddings as well. If $\langle \Spec(R)\rangle$ contains a concise tensor, then the embedding of $\Spec(R)$ is automatically nondegenerate. 
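For orientation, consider the simplest case: if $R = \BC^{\times m}$ is reduced, then an embedding $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)$ is a choice of $m$ distinct points of the Segre variety, $I_1(\Spec(R))$ consists of the linear forms vanishing at these points, and $\langle \Spec(R)\rangle$ is their projective linear span. A tensor lying in such a span is precisely a tensor of rank at most $m$, so for reduced (in particular smoothable) schemes the notions recalled below specialize to the usual rank.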
The \emph{cactus rank} \cite{MR3121848} of $T\in A\ot B\ot C$ is the smallest $r$ such that there exists a degree $r$ zero-dimensional subscheme $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)\subseteq \BP(A\ot B\ot C)$ with $[T]\in \langle Spec(R)\rangle$. (Recall that the smoothable rank has the same definition except that one additionally requires $R$ to be smoothable.) Given a degree $\rho$ zero-dimensional scheme $R$, for each $\varphi\in R^*$, one gets a tensor $T^\varphi\in R^*\ot R^*\ot R^* \simeq \BC^\rho\ot \BC^\rho\ot \BC^\rho$ defined by $T^\varphi(r_1,r_2,r_3):=\varphi(r_1r_2r_3)$. Given any non-degenerate embedding $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)\subseteq \BP(A\ot B\ot C)$, the space of tensors $T^\varphi$ is isomorphic to the space of tensors $\langle \Spec(R)\rangle$ as will be shown in the proof of Proposition~\ref{ref:cactusRank:prop} below. In this section we show that the scheme (resp.~smoothable scheme) $\Spec(R)$ which witnesses that a tensor $T\in A\ot B\ot C$ has minimal cactus (resp.~smoothable) rank is unique, in fact, the algebra $R$ is isomorphic to $\alg{T}$. \begin{proposition}\label{ref:cactusRank:prop} Let $\Spec(R)$ be a degree $m$ zero-dimensional subscheme and let $T\in A\ot B\ot C$. The following are equivalent: \begin{enumerate} \item\label{it:cactusRank:one} There exists a nondegenerate embedding $\Spec(R)\subseteq Seg(\BP A\times \BP B\times\BP C)$ with $T\in \langle \Spec(R)\rangle$, so in particular $T$ has cactus rank at most $m$. \item\label{it:cactusRank:two} there exists $\varphi\in R^*$ such that $T$ is isomorphic to the tensor in $R^*\ot R^*\ot R^*$ given by the trilinear map $(r_1,r_2,r_3)\mapsto \varphi(r_1r_2r_3)$. \end{enumerate} If $T$ is concise and satisfies the above, then it is $1$-generic and has cactus rank $m$. \end{proposition} \begin{proof} We first show~\ref{it:cactusRank:one} implies~\ref{it:cactusRank:two}. An embedding $\Spec(R)\subseteq \mathbb{P} A$ with $\langle \Spec(R)\rangle = \mathbb{P} A$ is induced from an embedding $\Spec(R)\subseteq A$ with $\langle \Spec(R)\rangle = A$, which in turn induces a vector space isomorphism $\tau_a\colon A^*\to R\isom \Sym(A^*)/I_{R,A}$ as follows: let $I_{R,A}$ denote the ideal of $\Spec(R)\subseteq A$, then $\tau_a(\a) := \a\tmod{I_{R,A}}$. Hence, a nondegenerate embedding of $\Spec(R)$ induces a triple of vector space isomorphisms $\tau_a\colon A^*\to R$, $\tau_b\colon B^*\to R$, $\tau_c\colon C^*\to R$. More generally, for each $(s,t,u)$, with $s,t,u\geq 1$, the map $$ \tau_{s,t,u}: S^sA^*\ot S^tB^*\ot S^uC^* \ra S^sA^*\ot S^tB^*\ot S^uC^*/(I_{R,A\ot B\ot C} )_{s,t,u} $$ is a surjection onto $R\isom S^sA^*\ot S^tB^*\ot S^uC^*/(I_{R,A\ot B\ot C} )_{s,t,u}$, and these maps are all compatible with multiplication, in particular $\t_{1,1,1}(\a\ot \b\ot \g)=\t_a(\a)\t_b(\b)\t_c(\g)$. Then $$\langle \Spec(R)\rangle = (\ker \tau_{1,1,1})^{\perp}\subseteq (A^*\ot B^*\ot C^*)^* = A\ot B\ot C. $$ By duality, the space $(\ker \tau_{1,1,1})^{\perp}$ is the image of the map $R^*\to A\ot B\ot C$ defined by requiring that $\varphi\in R^*$ maps to the trilinear form $(\alpha, \beta, \gamma)\mapsto \varphi(\tau_a(\alpha)\tau_b(\beta)\tau_c(\gamma))$. If $T$ is the image of $\varphi$, then it is isomorphic to the trilinear map $(r_1, r_2, r_3)\mapsto \varphi(r_1r_2r_3)$ via $ \tau_a^{\bt}\ot \tau_b^{\bt}\ot \tau_c^{\bt}$, proving~\ref{it:cactusRank:one} implies~\ref{it:cactusRank:two}. 
Assuming~\ref{it:cactusRank:two}, choose vector space isomorphisms $\t_a,\t_b,\t_c$ and define a map $A^*\ot B^*\ot C^* \ra R$ by $\a\ot \b\ot \g\mapsto \t_a(\a)\t_b(\b)\t_c(\g)$. (For readers familiar with border apolarity, the kernel of this map is $I_{111}$.) Then extend it to $S^sA^*\ot S^tB^*\ot S^uC^*$ by $ \t_a(\a_1\cdots \a_s)=\t_a(\a_1)\cdots \t_a(\a_s)$ and similarly. This yields the required nondegenerate embedding of $\Spec(R)$. The tensor $T'$ corresponding to $(\alpha, \beta, \gamma)\mapsto \varphi(\tau_a(\alpha)\tau_b(\beta)\tau_c(\gamma))$ is isomorphic to $T$ and lies in $\langle \Spec(R)\rangle$. This proves~\ref{it:cactusRank:one}. Finally, if $T$ satisfies the above, then it is isomorphic to $(r_1, r_2, r_3)\mapsto \varphi(r_1r_2r_3)$ for some $\varphi$. If $T$ is additionally concise, then for every $r\in R$ there exists an $r'\in R$ such that $\varphi(rr')\neq 0$. Hence the bilinear map $(r_1, r_2)\mapsto \varphi(r_1r_2)$ has full rank. But this map is $T(1_R)$, the contraction of $T$ by $1_R\in R = (R^*)^*$. This shows that $T$ is $1$-generic. It has cactus rank at least $m$ by conciseness and at most $m$ by assumption. \end{proof} In particular, a concise tensor $T\in \BC^m\ot \BC^m\ot \BC^m$ has \emph{minimal smoothable rank} if there exists a smoothable degree $m$ algebra $R$ satisfying the conditions of Proposition~\ref{ref:cactusRank:prop}. \begin{theorem}\label{ref:smoothableRank:thm} Let $T\in \BC^m\ot \BC^m\ot \BC^m$ be a concise tensor. The following are equivalent: \begin{enumerate} \item\label{it:smoothableRankOne} $T$ has minimal smoothable rank, \item\label{it:smoothableRankTwo} $T$ is $1$-generic, 111-sharp and its 111-algebra is smoothable and Gorenstein, \item\label{it:smoothableRankThree} $T$ is $1$-generic, 111-abundant and its 111-algebra is smoothable. \end{enumerate} \end{theorem} We emphasize that in Theorem \ref{ref:smoothableRank:thm} one does not need to find the smoothable scheme to show the tensor has minimal smoothable rank, which makes the theorem effective by reducing the question of determining minimal smoothable rank to proving smoothability of a given algebra. \begin{proof}[Proof of Theorem~\ref{ref:smoothableRank:thm}] Suppose~\ref{it:smoothableRankOne} holds, so there exists a smoothable algebra $R$ and an embedding of it into $Seg(\BP A\times \BP B\times\BP C)$ with $T\in \langle \Spec(R)\rangle$. By Proposition~\ref{ref:cactusRank:prop} $T$ is $1$-generic and isomorphic to the tensor in the vector space $R^*\ot R^*\ot R^*$ given by the trilinear map $(r_1,r_2,r_3)\mapsto \varphi(r_1r_2r_3)$ for some functional $\varphi\in R^*$, in particular $T\in \Hom(R\ot R\ot R, \BC)$. Suppose that there exists a nonzero $r\in R$ such that $\varphi(Rr) = 0$. Then for all $r_1,r_2\in R$, $(r_1, r_2, r)\mapsto 0$, so $T$ is not concise. Hence no such $r$ exists and so $\varphi$ is nondegenerate. This shows that $R$ is Gorenstein. For an element $r\in R$, the multiplication by $r$ on the first position gives a map \[ \mu_{r}^{(1)}\colon\Hom(R\ot R\ot R, \BC)\to \Hom(R\ot R\ot R, \BC) \] and similarly we obtain $\mu_r^{(2)}$ and $\mu_r^{(3)}$. Observe that for $i=1,2,3$ and every $r\in R$ the map corresponding to the tensor $\mu_r^{(i)}(T)$ is the composition of the multiplication $R\ot R\ot R\to R$, the multiplication by $r$ map $R\to R$ and $\varphi\colon R\to \BC$. Therefore $\mu_r^{(1)}(T) = \mu_r^{(2)}(T) = \mu_r^{(3)}(T)$. Moreover, for any nonzero $r$ we have $\mu_r^{(i)}(T)\neq0$ since $\varphi$ is nondegenerate.
This shows that $\langle\mu_r^{(i)}(T)\ |\ r\in R\rangle$ is an $m$-dimensional subspace of $\alg{T}\cdot T\subseteq A\ot B\ot C$. Since $T$ has minimal smoothable rank, it has minimal border rank, so it is 111-abundant and by Proposition~\ref{1Ageneric111} it is 111-sharp, so its 111-algebra is $\langle\mu_r^{(i)}(T)\ |\ r\in R\rangle$, which is isomorphic to $R$. This proves \ref{it:smoothableRankOne} implies~\ref{it:smoothableRankTwo}. That~\ref{it:smoothableRankTwo} implies~\ref{it:smoothableRankThree} is immediate. Suppose~\ref{it:smoothableRankThree} holds and take $R=\alg{T}$. Then $T$ is $111$-sharp by Proposition~\ref{1Ageneric111}, which also implies the tensor $T$ is isomorphic to the multiplication tensor of $R$. The algebra $R$ is Gorenstein as $T$ is $1$-generic (see~\S\ref{summarysect}). Since $R$ is Gorenstein, the $R$-module $R^*$ is isomorphic to $R$. Take one such isomorphism $\Phi\colon R\to R^*$ and let $\varphi = \Phi(1_R)$. Then the composition $R\ot R\ot R\to R\to \BC$ can be rewritten as $R\ot R\to R\to R^*$, where the first map is the multiplication and the second one sends $r$ to $r\varphi$; this second map is equal to $\Phi$. Composing further with $\Phi^{-1}$ we obtain a map $R\ot R\to R\to R^*\to R$ which is simply the multiplication. All this shows that the tensor in $R^*\ot R^*\ot R^*$ associated to $(R,\varphi)$ is isomorphic to the multiplication tensor of $R$, hence to $T$. By Proposition~\ref{ref:cactusRank:prop} and smoothability of $R$ such a tensor has minimal smoothable rank. \end{proof} \begin{remark} There is a version of Theorem~\ref{ref:smoothableRank:thm} without smoothability assumptions: a concise tensor has minimal cactus rank if and only if it is $1$-generic and 111-abundant with Gorenstein 111-algebra. \end{remark} \def\cprime{$'$} \def\cdprime{$''$} \def\Dbar{\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}} \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{MR4262465} Josh Alman and Virginia Vassilevska~Williams, \emph{A refined laser method and faster matrix multiplication}, Proceedings of the 2021 {ACM}-{SIAM} {S}ymposium on {D}iscrete {A}lgorithms ({SODA}), [Society for Industrial and Applied Mathematics (SIAM)], Philadelphia, PA, 2021, pp.~522--539. \MR{4262465} \bibitem{MR3388238} Andris Ambainis, Yuval Filmus, and Fran{\c{c}}ois Le~Gall, \emph{Fast matrix multiplication: limitations of the {C}oppersmith-{W}inograd method (extended abstract)}, S{TOC}'15---{P}roceedings of the 2015 {ACM} {S}ymposium on {T}heory of {C}omputing, ACM, New York, 2015, pp.~585--593. \MR{3388238} \bibitem{MR598562} M.~F. Atiyah, N.~J. Hitchin, V.~G. Drinfel{\cprime}d, and Yu.~I. Manin, \emph{Construction of instantons}, Phys. Lett. A \textbf{65} (1978), no.~3, 185--187. \MR{598562} \bibitem{MR695915} M.~D. Atkinson, \emph{Primitive spaces of matrices of bounded rank. {II}}, J. Austral. Math. Soc. Ser. A \textbf{34} (1983), no.~3, 306--315. \MR{MR695915 (84h:15017)} \bibitem{MR621563} M.~D.
Atkinson and S.~Lloyd, \emph{Primitive spaces of matrices of bounded rank}, J. Austral. Math. Soc. Ser. A \textbf{30} (1980/81), no.~4, 473--482. \MR{621563} \bibitem{MR2836258} Daniel~J. Bates and Luke Oeding, \emph{Toward a salmon conjecture}, Exp. Math. \textbf{20} (2011), no.~3, 358--370. \MR{2836258 (2012i:14056)} \bibitem{MR2983182} George~M. Bergman, \emph{Bilinear maps on {A}rtinian modules}, J. Algebra Appl. \textbf{11} (2012), no.~5, 1250090, 10. \MR{2983182} \bibitem{MR1774095} Cristina Blancafort and Juan Elias, \emph{On the growth of the {H}ilbert function of a module}, Math. Z. \textbf{234} (2000), no.~3, 507--517. \MR{1774095} \bibitem{MR3578455} Markus Bl\"aser and Vladimir Lysikov, \emph{On degeneration of tensors and algebras}, 41st {I}nternational {S}ymposium on {M}athematical {F}oundations of {C}omputer {S}cience, LIPIcs. Leibniz Int. Proc. Inform., vol.~58, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2016, pp.~Art. No. 19, 11. \MR{3578455} \bibitem{blser_et_al:LIPIcs:2020:12686} Markus Bl{\"a}ser and Vladimir Lysikov, \emph{{Slice Rank of Block Tensors and Irreversibility of Structure Tensors of Algebras}}, 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020) (Dagstuhl, Germany) (Javier Esparza and Daniel Kr{\'a}ľ, eds.), Leibniz International Proceedings in Informatics (LIPIcs), vol. 170, Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik, 2020, pp.~17:1--17:15. \bibitem{MR3333949} Weronika Buczy{\'n}ska and Jaros{\l}aw Buczy{\'n}ski, \emph{On differences between the border rank and the smoothable rank of a polynomial}, Glasg. Math. J. \textbf{57} (2015), no.~2, 401--413. \MR{3333949} \bibitem{MR3121848} Weronika Buczy{\'n}ska and Jaros{\l}aw Buczy{\'n}ski, \emph{Secant varieties to high degree {V}eronese reembeddings, catalecticant matrices and smoothable {G}orenstein schemes}, J. Algebraic Geom.\textbf{23} (2014), no.~1, 63--90. \MR{3121848} \bibitem{MR4332674} \bysame, \emph{Apolarity, border rank, and multigraded {H}ilbert scheme}, Duke Math. J. \textbf{170} (2021), no.~16, 3659--3702. \MR{4332674} \bibitem{MR3724212} Jaros{\l}aw Buczy\'{n}ski and Joachim Jelisiejew, \emph{Finite schemes and secant varieties over arbitrary characteristic}, Differential Geom. Appl. \textbf{55} (2017), 13--67. \MR{3724212} \bibitem{MR3239293} Jaros{\l}aw Buczy{\'n}ski and J.~M. Landsberg, \emph{On the third secant variety}, J. Algebraic Combin. \textbf{40} (2014), no.~2, 475--502. \MR{3239293} \bibitem{BCS} Peter B{\"u}rgisser, Michael Clausen, and M.~Amin Shokrollahi, \emph{Algebraic complexity theory}, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 315, Springer-Verlag, Berlin, 1997, With the collaboration of Thomas Lickteig. \MR{99c:68002} \bibitem{MR2579394} Dustin~A. Cartwright, Daniel Erman, Mauricio Velasco, and Bianca Viray, \emph{Hilbert schemes of 8 points}, Algebra Number Theory \textbf{3} (2009), no.~7, 763--795. \MR{2579394} \bibitem{MR3404648} Gianfranco Casnati, Joachim Jelisiejew, and Roberto Notari, \emph{Irreducibility of the {G}orenstein loci of {H}ilbert schemes via ray families}, Algebra Number Theory \textbf{9} (2015), no.~7, 1525--1570. \MR{3404648} \bibitem{2019arXiv190909518C} Austin {Conner}, Fulvio {Gesmundo}, Joseph~M. {Landsberg}, and Emanuele {Ventura}, \emph{{Tensors with maximal symmetries}}, arXiv e-prints (2019), arXiv:1909.09518. \bibitem{CHLapolar} Austin Conner, Alicia Harper, and J.M. 
Landsberg, \emph{New lower bounds for matrix mulitplication and $\tdet_3$}, arXiv:1911.07981. \bibitem{MR91i:68058} Don Coppersmith and Shmuel Winograd, \emph{Matrix multiplication via arithmetic progressions}, J. Symbolic Comput. \textbf{9} (1990), no.~3, 251--280. \MR{91i:68058} \bibitem{MR3761737} Klim Efremenko, Ankit Garg, Rafael Oliveira, and Avi Wigderson, \emph{Barriers for rank methods in arithmetic complexity}, 9th {I}nnovations in {T}heoretical {C}omputer {S}cience, LIPIcs. Leibniz Int. Proc. Inform., vol.~94, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2018, pp.~Art. No. 1, 19. \MR{3761737} \bibitem{MR2222646} Barbara Fantechi, Lothar G\"{o}ttsche, Luc Illusie, Steven~L. Kleiman, Nitin Nitsure, and Angelo Vistoli, \emph{Fundamental algebraic geometry}, Mathematical Surveys and Monographs, vol. 123, American Mathematical Society, Providence, RI, 2005, Grothendieck's FGA explained. \MR{2222646} \bibitem{MR2996364} Shmuel Friedland, \emph{On tensors of border rank {$l$} in {$\Bbb{C}^{m\times n\times l}$}}, Linear Algebra Appl. \textbf{438} (2013), no.~2, 713--737. \MR{2996364} \bibitem{MR2891138} Shmuel Friedland and Elizabeth Gross, \emph{A proof of the set-theoretic version of the salmon conjecture}, J. Algebra \textbf{356} (2012), 374--379. \MR{2891138} \bibitem{MR3611482} Maciej Ga{\l}{\k{a}}zka, \emph{Vector bundles give equations of cactus varieties}, Linear Algebra Appl. \textbf{521} (2017), 254--262. \MR{3611482} \bibitem{GSS} Luis~David Garcia, Michael Stillman, and Bernd Sturmfels, \emph{Algebraic geometry of {B}ayesian networks}, J. Symbolic Comput. \textbf{39} (2005), no.~3-4, 331--355. \MR{MR2168286 (2006g:68242)} \bibitem{MR0132079} Murray Gerstenhaber, \emph{On dominance and varieties of commuting matrices}, Ann. of Math. (2) \textbf{73} (1961), 324--348. \MR{0132079 (24 \#A1926)} \bibitem{MR1199042} Robert~M. Guralnick, \emph{A note on commuting pairs of matrices}, Linear and Multilinear Algebra \textbf{31} (1992), no.~1-4, 71--75. \MR{1199042 (94c:15021)} \bibitem{MR4163534} Hang Huang, Mateusz Micha{\l}ek, and Emanuele Ventura, \emph{Vanishing {H}essian, wild forms and their border {VSP}}, Math. Ann. \textbf{378} (2020), no.~3-4, 1505--1532. \MR{4163534} \bibitem{MR1735271} Anthony Iarrobino and Vassil Kanev, \emph{Power sums, {G}orenstein algebras, and determinantal loci}, Lecture Notes in Mathematics, vol. 1721, Springer-Verlag, Berlin, 1999, Appendix C by Iarrobino and Steven L. Kleiman. \MR{MR1735271 (2001d:14056)} \bibitem{MR2202260} Atanas Iliev and Laurent Manivel, \emph{Varieties of reductions for {${\frak{gl}}_n$}}, Projective varieties with unexpected properties, Walter de Gruyter GmbH \& Co. KG, Berlin, 2005, pp.~287--316. \MR{MR2202260 (2006j:14056)} \bibitem{MR2947667} Nathan~Owen Ilten, \emph{Versal deformations and local {H}ilbert schemes}, J. Softw. Algebra Geom. \textbf{4} (2012), 12--16. \MR{2947667} \bibitem{jelisiejew2021components} Joachim Jelisiejew and Klemen Šivic, \emph{Components and singularities of {Q}uot schemes and varieties of commuting matrices}, 2021. \bibitem{MR3729273} J.~M. Landsberg, \emph{Geometry and complexity theory}, Cambridge Studies in Advanced Mathematics, vol. 169, Cambridge University Press, Cambridge, 2017. \MR{3729273} \bibitem{LMsecb} J.~M. Landsberg and Laurent Manivel, \emph{Generalizations of {S}trassen's equations for secant varieties of {S}egre varieties}, Comm. Algebra \textbf{36} (2008), no.~2, 405--422. \MR{MR2387532} \bibitem{MR3682743} J.~M. 
Landsberg and Mateusz Micha{\l}ek, \emph{Abelian tensors}, J. Math. Pures Appl. (9) \textbf{108} (2017), no.~3, 333--371. \MR{3682743} \bibitem{LWsecseg} J.~M. Landsberg and Jerzy Weyman, \emph{On the ideals and singularities of secant varieties of {S}egre varieties}, Bull. Lond. Math. Soc. \textbf{39} (2007), no.~4, 685--697. \MR{MR2346950} \bibitem{GO60survey} \emph{Secant varieties and the complexity of matrix multiplication}, https://arxiv.org/abs/2208.00857, to appear in Rendiconti dell’Istituto di Matematica dell’Università di Trieste, in the special volume entitled “Proceedings of the conference GO60”. \bibitem{MR3376667} Joseph~M. Landsberg and Giorgio Ottaviani, \emph{New lower bounds for the border rank of matrix multiplication}, Theory Comput. \textbf{11} (2015), 285--298. \MR{3376667} \bibitem{LeGall:2014:PTF:2608628.2608664} Fran\c{c}ois Le~Gall, \emph{Powers of tensors and fast matrix multiplication}, Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (New York, NY, USA), ISSAC '14, ACM, 2014, pp.~296--303. \bibitem{MR576606} Guerino Mazzola, \emph{Generic finite schemes and {H}ochschild cocycles}, Comment. Math. Helv. \textbf{55} (1980), no.~2, 267--293. \MR{576606} \bibitem{MR2554725} Giorgio Ottaviani, \emph{Symplectic bundles on the plane, secant varieties and {L}\"uroth quartics revisited}, Vector bundles and low codimensional subvarieties: state of the art and recent developments, Quad. Mat., vol.~21, Dept. Math., Seconda Univ. Napoli, Caserta, 2007, pp.~315--352. \MR{2554725} \bibitem{MR2459993} Bjorn Poonen, \emph{Isomorphism types of commutative algebras of finite rank over an algebraically closed field}, Computational arithmetic geometry, Contemp. Math., vol. 463, Amer. Math. Soc., Providence, RI, 2008, pp.~111--120. \MR{2459993} \bibitem{MR2842085} Kristian Ranestad and Frank-Olaf Schreyer, \emph{On the rank of a symmetric form}, J. Algebra \textbf{346} (2011), 340--342. \MR{2842085 (2012j:13037)} \bibitem{stothers} A.~Stothers, \emph{On the complexity of matrix multiplication}, PhD thesis, University of Edinburgh, 2010. \bibitem{Strassen505} V.~Strassen, \emph{Rank and optimal computation of generic tensors}, Linear Algebra Appl. \textbf{52/53} (1983), 645--685. \MR{85b:15039} \bibitem{MR882307} \bysame, \emph{Relative bilinear complexity and matrix multiplication}, J. Reine Angew. Math. \textbf{375/376} (1987), 406--443. \MR{MR882307 (88h:11026)} \bibitem{MR1481486} Stein~Arild Str{\o}mme, \emph{Elementary introduction to representable functors and {H}ilbert schemes}, Parameter spaces ({W}arsaw, 1994), Banach Center Publ., vol.~36, Polish Acad. Sci. Inst. Math., Warsaw, 1996, pp.~179--198. \MR{1481486} \bibitem{MR2118458} D.~A. Suprunenko and R.~I. Tyshkevich, \emph{Perestanovochnye matritsy}, second ed., \`Editorial URSS, Moscow, English translation of first edition: Academic Press: New York, 1968, 2003. \MR{2118458 (2006b:16045)} \bibitem{williams} Virginia Williams, \emph{Breaking the {C}oppersimith-{W}inograd barrier}, preprint. \bibitem{DBLP:journals/corr/abs-2110-01684} Maciej Wojtala, \emph{Irreversibility of structure tensors of modules}, Collectanea Mathematica (2022). 
\end{thebibliography} \end{document}
2205.05530v1
http://arxiv.org/abs/2205.05530v1
On the $d$-dimensional algebraic connectivity of graphs
\documentclass[a4paper,11pt]{article} \pagestyle{plain} \usepackage{braket,hyperref} \usepackage{amssymb,mathtools,amsthm,amsmath,bm} \usepackage{tikz} \usepackage{xargs} \usepackage{gensymb} \usepackage{enumitem} \usepackage{verbatim} \usepackage{kbordermatrix} \renewcommand{\kbldelim}{(}\renewcommand{\kbrdelim}{)}\usepackage{thm-restate} \theoremstyle{plain} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{claim}[theorem]{\bf Claim} \newtheorem{conjecture}[theorem]{\bf Conjecture} \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{lemma}[theorem]{\bf Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{\bf Definition} \newcommand{\bigzero}{\mbox{\normalfont\Large\bfseries 0}} \newcommand{\rvline}{\hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep}} \newenvironment{remark}[1][Remark.]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \newenvironment{example}[1][Example.]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \newcommand{\Rea}{{\mathbb R}} \newcommand{\QQ}{{\mathbb Q}} \newcommand{\ZZ}{{\mathbb Z}} \newcommand{\FF}{{\mathbb F}} \DeclareMathOperator{\lk}{lk} \DeclareMathOperator{\cost}{cost} \DeclareMathOperator{\st}{st} \DeclareMathOperator{\lap}{\Delta} \DeclareMathOperator{\id}{\text{Id}} \newcommand{\cobound}[2]{d_{#1}#2} \newcommand{\bound}[2]{d_{#1}^*#2} \DeclareMathOperator{\rank}{\text{rank}} \DeclareMathOperator{\supp}{\text{supp}} \newcommand{\cupdot}{\mathbin{\mathaccent\cdot\cup}} \newcommand{\lfrac}[2]{\left\lfloor\frac{#1}{#2}\right\rfloor} \newcommand{\bv}{\mathbf v} \newcommand{\p}{{\bf p}} \usepackage{authblk} \title{On the $d$-dimensional algebraic connectivity of graphs} \author[1]{Alan Lew\thanks{\href{mailto:[email protected]}{[email protected]}. Alan Lew was partially supported by the Israel Science Foundation grant ISF-2480/20.}} \author[1]{Eran Nevo\thanks{\href{mailto:[email protected]}{[email protected]}. Eran Nevo was partially supported by the Israel Science Foundation grant ISF-2480/20.}} \author[1]{Yuval Peled\thanks{\href{mailto:[email protected]}{[email protected]}.}} \author[1]{Orit E. Raz\thanks{\href{mailto:[email protected]}{[email protected]}.}} \affil[1]{Einstein Institute of Mathematics, Hebrew University, Jerusalem~91904, Israel} \date{} \setcounter{Maxaffil}{0} \renewcommand\Affilfont{\itshape\small} \begin{document} \maketitle \begin{abstract} The $d$-dimensional algebraic connectivity $a_d(G)$ of a graph $G=(V,E)$, introduced by Jord\'an and Tanigawa, is a quantitative measure of the $d$-dimensional rigidity of $G$ that is defined in terms of the eigenvalues of stiffness matrices (which are analogues of the graph Laplacian) associated to mappings of the vertex set $V$ into $\Rea^d$. Here, we analyze the $d$-dimensional algebraic connectivity of complete graphs. In particular, we show that, for $d\geq 3$, $a_d(K_{d+1})=1$, and for $n\geq 2d$, \[ \left\lceil\frac{n}{2d}\right\rceil-2d+1\leq a_d(K_n) \leq \frac{2n}{3(d-1)}+\frac{1}{3}. \] \end{abstract} \section{Introduction} A \emph{$d$-dimensional framework} is a pair $(G,p)$ consisting of a graph $G=(V,E)$ and a mapping of its vertices $p:V\to \Rea^d$. A framework $(G,p)$ is called \emph{rigid} if every continuous motion of the vertices that preserves the lengths of all the edges of $G$, preserves in fact the distance between every two vertices of $G$. 
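As an informal illustration (not needed in the sequel), the $4$-cycle placed as a unit rhombus in $\Rea^2$ is not rigid: opening the rhombus keeps all four edge lengths equal to $1$ while the length of a diagonal changes. The following short Python sketch (the parametrization and vertex labels are ours) traces such a flex numerically:
\begin{verbatim}
import numpy as np

def rhombus(theta):
    # placement of the 4-cycle 0-1-2-3-0 in R^2 with all edge lengths equal to 1
    return np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [1.0 + np.cos(theta), np.sin(theta)],
                     [np.cos(theta), np.sin(theta)]])

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for theta in np.linspace(np.pi / 3, 2 * np.pi / 3, 5):
    p = rhombus(theta)
    lengths = [np.linalg.norm(p[u] - p[v]) for u, v in edges]
    diagonal = np.linalg.norm(p[0] - p[2])   # distance between two opposite vertices
    print(np.round(theta, 3), np.round(lengths, 6), np.round(diagonal, 6))
# the edge lengths stay 1 throughout, while the diagonal varies continuously
\end{verbatim}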
In \cite{AR1}, Asimov and Roth introduced the stricter notion of \emph{infinitesimal rigidity}: For two distinct vertices $u,v\in V$, let $d_{u v}\in \Rea^d$ be defined by \[ d_{u v}=\begin{cases} \frac{p(u)-p(v)}{\|p(u)-p(v)\|} & \text{ if } p(u)\neq p(v),\\ 0 & \text{ otherwise,} \end{cases} \] and let $\bv_{u,v}\in \Rea^{d|V|}$ be defined by \[ \bv_{u,v} ^t= \kbordermatrix{ & & & & u & & & & v & & & \\ & 0 & \ldots & 0 & d_{u v} & 0 & \ldots & 0 & d_{v u} & 0 & \ldots & 0}. \] Equivalently, $ \bv_{u,v}= (1_u-1_v)\otimes d_{u v}, $ where $\{1_u\}_{u\in V}$ is the standard basis of $\Rea^{|V|}$ and $\otimes$ denotes the Kronecker product. The \emph{normalized rigidity matrix} of $(G,p)$, denoted by $R(G,p)$, is the $d|V|\times |E|$ matrix whose columns are the vectors $\bv_{u,v}$ for all $\{u,v\}\in E$. Assume that $|V|>d$. It is known (see \cite{AR1}) that the rank of $R(G,p)$ is at most $d|V|-\binom{d+1}{2}$. The framework $(G,p)$ is called \emph{infinitesimally rigid}\footnote{The usual definition of infinitesimal rigidity is in terms of the unnormalized rigidity matrix, but both definitions are equivalent as scaling of the columns of a matrix does not change its rank.} if the rank of the $R(G,p)$ is exactly $d|V|-\binom{d+1}{2}$. Infinitesimal rigidity is in general stronger than rigidity, but it is known that both notions coincide for ``generic" embeddings (see \cite{AR1}). Note that for $d=1$, and assuming that $p$ is injective, both notions of rigidity coincide with the notion of graph connectivity. Recently Jord\'an and Tanigawa~\cite{jordan2020rigidity} (building on Zhu and Hu~\cite{zhu2013quantitative,zhu2009stiffness} who considered the $2$-dimensional case) introduced the following quantitative measure of rigidity: The \emph{stiffness matrix} $L(G,p)$ is defined by \[ L(G,p)=R(G,p) R(G,p)^t \in \Rea^{d|V|\times d|V|}. \] It is easy to check that the rank of $L(G,p)$ equals the rank of $R(G,p)$. Therefore, the kernel of $L(G,p)$ is of dimension at least $\binom{d+1}{2}$, and equality occurs if and only if the framework $(G,p)$ is infinitesimally rigid. Let $\lambda_i(L(G,p))$ be the $i$-th smallest eigenvalue of $L(G,p)$. The \emph{spectral gap} of $L(G,p)$ is its minimal non-trivial eigenvalue $\lambda_{\binom{d+1}{2}+1}(L(G,p))$. The \emph{$d$-dimensional algebraic connectivity of $G$} is defined by \[ a_d(G)=\sup\left\{ \lambda_{\binom{d+1}{2}+1}(L(G,p)) \middle| \, p: V\to \Rea^d\right\}. \] For $d=1$, $a_1(G)$ is the usual algebraic connectivity (a.k.a. spectral gap) of $G$, introduced by Fiedler in \cite{fiedler1973algebraic}. For general $d$, we always have $a_d(G) \ge 0$ (since $L(G,p)$ is positive semi-definite) and $a_d(G)>0$ if and only if a generic embedding of $G$ in $\Rea^d$ forms a rigid framework. Let $K_n$ be the complete graph on $n$ vertices. In~\cite{jordan2020rigidity}, a lower bound on $a_d(K_n)$ was used to deduce an improved constant in a threshold result for $d$-rigidity~\cite{kiraly2013coherence}: there exists a constant $C_d$ such that if $pn > C_d \log n$ then a graph $G\in G(n,p)$, the Erd\H{o}s-R\'enyi $n$-vertex random graph with edge probability $p$, is asymptotically almost surely $d$-rigid. (For a sharp threshold for $d$-rigidity see our recent~\cite{SharpRigidityThreshold}.) Their estimate on $C_d$ depended on the value of the spectral gap of the stiffness matrix of the regular $d$-simplex graph $K_{d+1}$, denoted $s_d$, which was conjectured to equal $1$ for $d\geq 3$. 
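In coordinates, these objects are straightforward to assemble. The following Python/NumPy sketch (illustrative only; the helper names are ours) builds $R(G,p)$ column by column as $(1_u-1_v)\otimes d_{u v}$, forms $L(G,p)=R(G,p)R(G,p)^t$, and returns the spectral gap $\lambda_{\binom{d+1}{2}+1}(L(G,p))$; by definition, the value obtained for any particular $p$ is a lower bound on $a_d(G)$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rigidity_matrix(edges, p):
    # normalized rigidity matrix R(G,p): one column (1_u - 1_v) (x) d_uv per edge
    n, d = p.shape
    R = np.zeros((d * n, len(edges)))
    for col, (u, v) in enumerate(edges):
        diff = p[u] - p[v]
        norm = np.linalg.norm(diff)
        if norm > 0:                          # d_uv = 0 when p(u) = p(v)
            R[d * u:d * (u + 1), col] = diff / norm
            R[d * v:d * (v + 1), col] = -diff / norm
    return R

def spectral_gap(edges, p):
    # the (binom(d+1,2)+1)-th smallest eigenvalue of L(G,p) = R(G,p) R(G,p)^t
    n, d = p.shape
    R = rigidity_matrix(edges, p)
    eigenvalues = np.sort(np.linalg.eigvalsh(R @ R.T))
    return eigenvalues[d * (d + 1) // 2]      # 0-based index binom(d+1,2)

# example: K_5 with a random embedding into R^3
n, d = 5, 3
edges = list(combinations(range(n), 2))
p = np.random.default_rng(1).standard_normal((n, d))
print(spectral_gap(edges, p))                 # a lower bound on a_3(K_5)
\end{verbatim}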
Motivated by these results, we study in this paper the $d$-dimensional algebraic connectivity of complete graphs. It is well known and easy to check that $a_1(K_n)=n$. For $d=2$, it was shown by Jord\'an and Tanigawa \cite[Theorem 4.4]{jordan2020rigidity}, based on a result by Zhu \cite{zhu2013quantitative}, that $a_2(K_n)=n/2$ for all $n\geq 3$ (see also Proposition \ref{prop:2d}). For $d\geq 3$ the situation is more complicated, and the only previously known result is the lower bound $a_d(K_n)\geq s_d d n/ (2 (d+1)^2)-d$, proved by Jord\'an and Tanigawa in \cite[Thm. 5.2]{jordan2020rigidity}\footnote{The bound as stated in \cite{jordan2020rigidity} is incorrect, the bound stated here is obtained after fixing a typo in \cite[Lemma 5.4]{jordan2020rigidity}.}. Here, we first focus on the case $n=d+1$. Let $p^{\triangle}:V(K_{d+1})\to \Rea^d$ denote the regular simplex embedding (that is, the vertices of $K_{d+1}$ are mapped bijectively to the vertices of a regular simplex in $\Rea^d$), and denote \[s_d=\lambda_{\binom{d+1}{2}+1}(L(K_{d+1},p^{\triangle})). \] We prove the following. \begin{restatable}{theorem}{simplexspectrum}\label{thm:simplex_spectrum} The spectrum of $L(K_{d+1},p^{\triangle})$ is \[ \left\{0^{(d(d+1)/2)},1^{((d+1)(d-2)/2)},\frac{d+1}{2}^{(d)},d+1^{(1)}\right\}. \] \end{restatable} (The superscript $(m)$ indicates multiplicity $m$ of the corresponding eigenvalue, here and throughout the paper.) This settles a conjecture of Jord\'an and Tanigawa \cite[Conj. 1]{jordan2020rigidity}. In particular, we obtain $s_d=1$ for $d\ge 3$. (Note that $s_2=3/2$.) Further, we show that this is the largest possible spectral gap for a framework $(K_{d+1},p)$. That is, \begin{restatable}{theorem}{connectivityofsimplex} \label{thm:simplex} For $d\geq 3$, $a_d(K_{d+1})=1$. \end{restatable} However, for $d\ge 3$, $p^{\triangle}$ is not the only embedding that achieves the maximum value $a_d(K_{d+1})=1$, see Proposition~\ref{prop:nonregular}. Next we consider (balanced) Tur{\'a}n graphs: Let $r,n$ be positive integers such that $r$ divides $n$. Let $V_1,\ldots,V_r$ be pairwise disjoint sets such that $|V_i|=n/r$ for all $i\in[r]$. Let $V=V_1\cup\cdots\cup V_r$ and $E=\cup_{i\neq j\in[r]} \{\{u,v\}:\, u\in V_i, v\in V_j\}$. The graph $T(n,r)=(V,E)$ is called a Tur{\'a}n graph, or the complete balanced $r$-partite graph on $n$ vertices. For $r=d+1$ let $q^{\triangle}: V\to \Rea^d$ denote the mapping into the vertices of a regular $d$-simplex, such that the preimage of each vertex of the simplex equals $V_i$ for a different $i\in [d+1]$. We compute the spectrum of $L(T(n,d+1),q^{\triangle})$: \begin{restatable}{theorem}{simplexrepeated}\label{thm:simplex_repeated} Let $d\geq 2$ and $n\geq d+1$ such that $n$ is divisible by $d+1$. Then, the spectrum of $L(T(n,d+1),q^{\triangle})$ is \[ \left\{ 0^{(d(d+1)/2)},\frac{n}{2(d+1)}^{((n-d-1)(d-1))},\frac{n}{d+1}^{((d-2)(d+1)/2)},\frac{n}{2}^{(n-1)}, n^{(1)} \right\}. \] \end{restatable} In particular, its spectral gap for $n\geq 2(d+1)$ is: \[ \lambda_{\binom{d+1}{2}+1}(L(T(n,d+1),q^{\triangle}))= \frac{n}{2(d+1)}. \] This improves upon the lower bound $\lambda_{\binom{d+1}{2}+1}(L(T(n,d+1),q^{\triangle}))\geq \frac{d n}{2(d+1)^2}$ obtained in~\cite{jordan2020rigidity} (after fixing a typo and plugging $s_d=1$). 
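For small parameters these spectra are easy to confirm numerically. The following Python/NumPy sketch (a sanity check under our own construction of $q^{\triangle}$; it plays no role in the proofs) assembles $L(T(n,d+1),q^{\triangle})$ and counts eigenvalue multiplicities; since $L(G,p)$ depends only on the unit directions $d_{u v}$, the scale of the regular simplex is irrelevant, and taking parts of size one recovers the spectrum of $L(K_{d+1},p^{\triangle})$ from Theorem~\ref{thm:simplex_spectrum}.
\begin{verbatim}
import numpy as np
from collections import Counter

def regular_simplex(d):
    # vertices of a regular d-simplex in R^d: center the standard basis of
    # R^{d+1}, then rewrite it in an orthonormal basis of the hyperplane sum(x)=0
    Q = np.eye(d + 1) - np.full((d + 1, d + 1), 1.0 / (d + 1))
    B = np.linalg.svd(Q)[2][:d]
    return Q @ B.T                    # (d+1) pairwise equidistant points in R^d

def stiffness_spectrum(edges, p):
    # eigenvalues of L(G,p) = R(G,p) R(G,p)^t, R(G,p) as in the previous sketch
    n, d = p.shape
    R = np.zeros((d * n, len(edges)))
    for col, (u, v) in enumerate(edges):
        duv = (p[u] - p[v]) / np.linalg.norm(p[u] - p[v])
        R[d * u:d * (u + 1), col] = duv
        R[d * v:d * (v + 1), col] = -duv
    return np.linalg.eigvalsh(R @ R.T)

d, m = 3, 2                           # T(n, d+1) with parts of size m, n = m(d+1)
n = m * (d + 1)
part = [i // m for i in range(n)]     # vertex i lies in the part V_{part[i]}
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if part[u] != part[v]]
q = regular_simplex(d)[part]          # q^triangle: all of V_i goes to simplex vertex i
print(sorted(Counter(np.round(stiffness_spectrum(edges, q), 8)).items()))
# expected for d = 3, n = 8: eigenvalue 0 with multiplicity 6, n/(2(d+1)) = 1
# with multiplicity 8, n/(d+1) = 2 with multiplicity 2, n/2 = 4 with
# multiplicity 7, and n = 8 with multiplicity 1
\end{verbatim}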
Similarly, for $r=2d$ let $q^{\diamond}:V\to \Rea^d$ denote the mapping into the vertices of a regular $d$-crosspolytope, such that the preimage of each vertex of the crosspolytope equals $V_i$ for a different $i\in [2d]$ (recall that the regular $d$-crosspolytope is defined as the convex hull of the set $\{\pm e_1,\ldots, \pm e_d\}$, where $\{e_1,\ldots,e_d\}$ is the standard basis of $\Rea^d$). We compute the spectrum of $L(T(n,2d),q^{\diamond})$: \begin{restatable}{theorem}{crosspolytopespectrum}\label{thm:crosspolytope_spectrum} Let $d\geq 2$ and $n\geq 2d$ such that $n$ is divisible by $2d$. Then, the spectrum of $L(T(n,2d),q^{\diamond})$ is \[ \left\{ 0^{(d(d+1)/2)},\frac{n}{2d}^{(n(d-1)-d^2)},\frac{n}{d}^{(d(d-1)/2)},\frac{n}{2}^{(n-1)}, n^{(1)} \right\}. \] \end{restatable} In particular, its spectral gap is: \[ \lambda_{\binom{d+1}{2}+1}(L(T(n,2d),q^{\diamond}))= \frac{n}{2d}. \] The proofs of Theorems \ref{thm:simplex_repeated} and \ref{thm:crosspolytope_spectrum} follow by computing the eigenbases of the corresponding stiffness matrices. As a corollary of Theorem \ref{thm:crosspolytope_spectrum}, we obtain the following lower bound on $a_d(K_n)$: \begin{restatable}{theorem}{lowerbound}\label{thm:lower_bound} Let $d\geq 3$ and $n\geq 2d$. If $n$ is divisible by $2d$, then \[ a_d(K_n)\geq \frac{n}{2d}. \] For general $n\geq 2d$, we have $ a_d(K_n)\geq \left\lceil\frac{n}{2d}\right\rceil-2d+1. $ \end{restatable} This improves the previously known lower bound. Finally, we prove an upper bound on the $d$-dimensional algebraic connectivity of the complete graph: \begin{restatable}{theorem}{upperbound}\label{cor:upperbound} Let $d\geq 3$ and $n\geq d+1$. Then, \[ a_d(K_n)\leq \frac{2n}{3(d-1)}+\frac{1}{3}\,. \] \end{restatable} This follows by proving a lower bound on the sum of the $n$ largest eigenvalues of $L(K_n,p)$ for every embedding $p$ into $\Rea^d$ (see Lemma \ref{lem:one_third}). Most of our results rely on analyzing the \emph{lower stiffness matrix} of the framework $(G,p)$, defined by \[ L^{-}(G,p)=R(G,p)^t R(G,p) \in \Rea^{|E|\times |E|}. \] It is easy to check that $\rank(L(G,p))=\rank(L^{-}(G,p))=\rank(R(G,p))$, and that the non-zero eigenvalues of $L(G,p)$ are the same as those of $L^{-}(G,p)$, with the same multiplicities. \textbf{Outline}: In Section~\ref{sec:prelim} we describe $L^{-}(G,p)$ explicitly and apply Cauchy's Interlacing Theorem to relate the spectra of $L(G,p)$ and $L(G\setminus{e},p)$ where $e$ is an edge in $G$. In Section~\ref{sec:spec_simplex} we compute the spectrum of $L(K_{d+1},p^{\triangle})$ and determine the value of $a_d(K_{d+1})$. In Section~\ref{sec:Turan} we find the spectrum and eigenbasis of $L(T(n,d+1),q^{\triangle})$, improving on the estimate in~\cite{jordan2020rigidity} for its spectral gap, and similarly we find the spectrum and eigenbasis of $L(T(n,2d),q^{\diamond})$, concluding a lower bound on $a_d(K_n)$. In Section~\ref{sec:top-n-ev} we lower bound the sum of the largest $n$ eigenvalues of $L(K_n,p)$ for every embedding $p$, and use it to prove an upper bound on $a_d(K_n)$. We conclude in Section~\ref{sec:conclude} with open problems and conjectures on $a_d(G)$ for graphs of interest. \section{The lower stiffness matrix}\label{sec:prelim} We start with the following explicit description of $L^{-}(G,p)$: \begin{lemma}\label{lemma:down_laplacian} Let $G=(V,E)$ be a graph and let $p:V\to \Rea^d$. Let $e_1,e_2\in E$.
Then, \[ L^{-}(G,p)(e_1,e_2)= \begin{cases} 2 & \text{ if } e_1=e_2=\{i,j\} \text{ and } p(i)\neq p(j),\\ \cos(\theta(e_1,e_2)) & \text{ if } |e_1\cap e_2|=1,\\ 0 & \text{ otherwise,} \end{cases} \] where, for $e_1=\{i,j\}$ and $e_2=\{i,k\}$, $\theta(e_1,e_2)$ is the angle between $d_{ij}$ and $d_{ik}$; that is, $\cos(\theta(e_1,e_2))=d_{ij}\cdot d_{ik}$ (note that, by convention, if $d_{ij}=0$ or $d_{ik}=0$, then $\cos(\theta(e_1,e_2))=0$). \end{lemma} \begin{proof} For convenience, we identify the vertex set of $G$ with the set $[n]$. Let $e_1=\{i,j\}$ and $e_2=\{k,l\}$, where $i<j$ and $k<l$. Let $\{1_i\}_{i\in V}$ be the standard basis for $\Rea^{|V|}$, and $\{1_e\}_{e\in E}$ be the standard basis for $\Rea^{|E|}$. Then, \[ R(G,p)1_{e_1}= (1_i-1_j)\otimes d_{ij}, \] and similarly \[ R(G,p)1_{e_2}= (1_k-1_l)\otimes d_{kl}. \] Note that \[ L^{-}(G,p)(e_1,e_2)= (R(G,p) 1_{e_1})^t (R(G,p)1_{e_2})= ((1_i-1_j)\cdot(1_k-1_l))(d_{ij}\cdot d_{kl}). \] If $e_1=e_2$, we obtain \[ L^{-}(G,p)(e_1,e_2)= \|1_i-1_j\|^2 \cdot \|d_{ij}\|^2=\begin{cases} 2 & \text{ if } p(i)\neq p(j),\\ 0 & \text{ otherwise.} \end{cases} \] If $e_1\cap e_2=\emptyset$, then \[ L^{-}(G,p)(e_1,e_2)= ((1_i-1_j)\cdot(1_k-1_l))(d_{ij}\cdot d_{kl})=0\cdot (d_{ij}\cdot d_{kl})=0. \] Finally, assume $|e_1\cap e_2|=1$. Then, either $i=k$, or $i=l$, or $j=k$ or $j=l$. If $i=k$, then \[ L^{-}(G,p)(e_1,e_2)= 1\cdot (d_{ij}\cdot d_{il})= \cos(\theta(e_1,e_2)). \] If $i=l$, then \[ L^{-}(G,p)(e_1,e_2)= (-1)\cdot (d_{ij}\cdot d_{ki})= d_{ij}\cdot d_{ik}= \cos(\theta(e_1,e_2)). \] The other two cases follow similarly. \end{proof} \begin{remark} In \cite{aryankia2021spectral}, the lower stiffness matrix $L^{-}(G,p)$ was studied in the special case where $G=K_3$, the complete graph on three vertices, and $p$ is an embedding of the vertices in $\Rea^2$. \end{remark} \subsection{Interlacing of spectra} We will use the following special case of Cauchy's interlacing theorem: \begin{theorem}[See e.g. \cite{brouwer2011spectra}]\label{thm:cauchy} Let $A$ be a real symmetric matrix of size $n\times n$ and $B$ a principal submatrix of $A$ of size $(n-1)\times(n-1)$. Let $\mu_1\geq \mu_2\geq \cdots\geq \mu_n$ be the eigenvalues of $A$ and $\mu'_1\geq\mu'_2\geq \cdots \geq \mu'_{n-1}$ be the eigenvalues of $B$. Then, for $1\leq i\leq n-1$, we have \[ \mu_i \geq \mu'_i\geq \mu_{i+1}. \] \end{theorem} We obtain the following interlacing result, generalizing a known result for graph Laplacians (see e.g. \cite[Theorem 13.6.2]{godsil2001algebraic}). \begin{theorem}\label{thm:edge_removal_interlacing} Let $G=(V,E)$ be a graph with $|V|=n$, and let $p:V\to \Rea^d$. Let $e\in E$, and let $G\setminus e=(V,E\setminus\{e\})$. Let $\lambda_1\leq \lambda_2\leq \cdots\leq \lambda_{dn}$ be the eigenvalues of $L(G,p)$ and $\lambda'_1\leq \cdots \leq \lambda'_{dn}$ be the eigenvalues of $L(G\setminus e,p)$. Let $\lambda_0=0$. Then, we have \[ \lambda_{i-1} \leq \lambda'_{i} \leq \lambda_{i}, \] for all $1\leq i\leq dn$. \end{theorem} \begin{proof} Let $\mu_1\geq \cdots\geq \mu_{|E|}$ be the eigenvalues of $L^{-}(G,p)$ and $\mu'_1\geq \cdots\geq \mu'_{|E|-1}$ be the eigenvalues of $L^{-}(G\setminus e,p)$. Note that $L^{-}(G\setminus\{e\},p)$ is a principal submatrix of $L^{-}(G,p)$, therefore by Theorem \ref{thm:cauchy}, we have \[ \mu_i \geq \mu'_i \geq \mu_{i+1} \] for $i=1,\ldots,|E|-1$. For $i>|E|-1$, let $\mu'_i=0$, and for $i>|E|$, let $\mu_i=0$. Then, we have \[ \mu_i \geq \mu'_i \geq \mu_{i+1} \] for all $i$.
Since $L(G,p)$ and $L^{-}(G,p)$ have the same positive eigenvalues, we have for all $i=1,\ldots,d n$ \[ \lambda_{dn+1-i}=\mu_{i}, \] and similarly \[ \lambda'_{dn+1-i}=\mu'_{i}. \] Therefore, we have \[ \lambda_{dn+1-i} \geq \lambda'_{dn+1-i} \geq \lambda_{dn-i} \] for all $i=1,\ldots,dn$ (using $\lambda_0=0$). So, for $j=1,\ldots,dn$, we obtain \[ \lambda_{j-1} \leq \lambda'_{j} \leq \lambda_{j}, \] as wanted. \end{proof} As an application of Theorem \ref{thm:edge_removal_interlacing}, we show that restricting attention to maps $p:V\to \Rea^d$ that are embeddings (i.e injective) does not affect the $d$-dimensional algebraic connectivity of a graph $G=(V,E)$. \begin{lemma}\label{lemma:a_d_for_embeddings} Let $G=(V,E)$ and $d\geq 1$. Then, \[ a_d(G)=\sup\left\{ \lambda_{\binom{d+1}{2}+1}(L(G,p)) \middle| \, p: V\to \Rea^d,\, p \text{ is injective}\right\}. \] \end{lemma} \begin{proof} Let \[ \tilde{a}_d(G)=\sup\left\{ \lambda_{\binom{d+1}{2}+1}(L(G,p)) \middle| \, p: V\to \Rea^d,\, p \text{ is injective}\right\}. \] Clearly, $\tilde{a}_d(G)\leq a_d(G)$. In the other direction, let $p:V\to\Rea^d$. We will show that for any $\epsilon>0$ there exists a $p':V\to \Rea^d$ such that $p'$ is injective and \[ \lambda_{\binom{d+1}{2}+1}(L(G,p')) > \lambda_{\binom{d+1}{2}+1}(L(G,p))-\epsilon. \] Let $E'=\{\{u,v\}\in E:\, p(u)\neq p(v)\}$ and let $G'=(V,E')$. Note that $L(G,p)=L(G',p)$, and in particular $\lambda_{\binom{d+1}{2}+1}(L(G,p))=\lambda_{\binom{d+1}{2}+1}(L(G',p))$. By Lemma \ref{lemma:down_laplacian}, for $p'$ in a neighborhood of $p$, the entries of the lower stiffness matrix $L^{-}(G',p')$ are continuous functions of $p'$. Therefore, the spectral gap $\lambda_{\binom{d+1}{2}+1}(L(G',p'))$ is also continuous in $p'$ (as a root of the characteristic polynomial of $L^{-}(G',p'))$. That is, there exists $\delta>0$ such that if $\|p(u)-p'(u)\|<\delta$ for all $u\in V$, then \[ \left|\lambda_{\binom{d+1}{2}+1}(L(G',p'))-\lambda_{\binom{d+1}{2}+1}(L(G',p))\right|<\epsilon. \] Now, let $p':V\to\Rea^d$ be an embedding satisfying $\|p(u)-p'(u)\|< \delta$ for all $u\in V$. Then, we have \[ \lambda_{\binom{d+1}{2}+1}(L(G',p'))> \lambda_{\binom{d+1}{2}+1}(L(G',p))-\epsilon = \lambda_{\binom{d+1}{2}+1}(L(G,p))-\epsilon. \] Finally, by Theorem \ref{thm:edge_removal_interlacing}, we obtain \[ \lambda_{\binom{d+1}{2}+1}(L(G,p')) \geq \lambda_{\binom{d+1}{2}+1}(L(G',p'))> \lambda_{\binom{d+1}{2}+1}(L(G,p))-\epsilon. \] Thus, $\tilde{a}_d(G)\geq a_d(G)$, as wanted. \end{proof} \section{The $d$-dimensional algebraic connectivity of the simplex graph} \label{sec:spec_simplex} It is known (see \cite{jordan2020rigidity}) that $a_1(K_{n})=n$ and $a_2(K_n)=n/2$. In particular, $a_2(K_3)=3/2$. In this section we prove the following: \connectivityofsimplex* \subsection{Lower bound: $a_d(K_{d+1})\ge 1$} Recall that $p^{\triangle}:V\to \Rea^d$ is the embedding that maps each vertex of $K_{d+1}$ to one of the vertices of the regular $d$-dimensional simplex. The lower bound follows from the following result, conjectured in~\cite[Conj.1]{jordan2020rigidity}: \simplexspectrum* \begin{proof} Let $K_{d+1}=(V,E)$, where $V=[d+1]$ and $E=\binom{[d+1]}{2}$. Since the angle between every two intersecting edges of the regular simplex is $60\degree$, we have, by Lemma \ref{lemma:down_laplacian}, \[ L^{-}(K_{d+1},p^{\triangle})(e_1,e_2)= \begin{cases} 2 & \text{ if } e_1=e_2,\\ \frac{1}{2} & \text{ if } |e_1\cap e_2|=1,\\ 0 & \text{ otherwise,} \end{cases} \] for every $e_1,e_2\in E$. 
We can write \[ L^{-}(K_{d+1},p^{\triangle})= I+ \frac{1}{2} Q, \] where $Q\in \Rea^{|E|\times|E|}$ is defined by \[ Q(e_1,e_2)= \begin{cases} 2 & \text{ if } e_1=e_2,\\ 1 & \text{ if } |e_1\cap e_2|=1,\\ 0 & \text{ otherwise,} \end{cases} \] for every $e_1,e_2\in E$. Let $M\in \Rea^{(d+1)\times|E|}$ be the signless incidence matrix of $K_{d+1}$, defined by \[ M(i,e)=\begin{cases} 1 & \text{ if } i\in e,\\ 0 & \text{ otherwise,} \end{cases} \] for $i\in V=[d+1]$ and $e\in E$. Then, we have \[ Q= M^{t} M. \] Let \[ \tilde{Q}= M M^{t}\in \Rea^{(d+1)\times(d+1)}. \] The matrix $\tilde{Q}$ is the signless Laplacian of $K_{d+1}$, namely: \[ \tilde{Q}(i,j)=\begin{cases} d & \text{ if } i=j,\\ 1 & \text{ otherwise.} \end{cases} \] Therefore, $\tilde{Q}=(d-1)I+J$, where $J$ is the all-ones matrix. The spectrum of $J$ is $\{0^{(d)},d+1^{(1)}\}$; therefore, the spectrum of $\tilde{Q}$ is $\{d-1^{(d)},2d^{(1)}\}$. Since the non-zero eigenvalues of $Q$ and $\tilde{Q}$ are the same, the spectrum of $Q$ is $\{0^{((d-2)(d+1)/2)},d-1^{(d)},2d^{(1)}\}$. Thus, the spectrum of $L^{-}(K_{d+1},p^{\triangle})$ is \[ \left\{1^{((d-2)(d+1)/2)},\frac{d+1}{2}^{(d)},d+1^{(1)}\right\}. \] Finally, since the non-zero eigenvalues of $L^{-}(K_{d+1},p^{\triangle})$ and $L(K_{d+1},p^{\triangle})$ are the same, the spectrum of $L(K_{d+1},p^{\triangle})$ is \[ \left\{0^{(d(d+1)/2)},1^{((d-2)(d+1)/2)},\frac{d+1}{2}^{(d)},d+1^{(1)}\right\}, \] as wanted. \end{proof} As a consequence of Theorem \ref{thm:simplex_spectrum}, we obtain $a_d(K_{d+1})\geq 1$ for all $d\geq 3$. We are left to show that $a_d(K_{d+1})\leq 1$. Before doing that, let us remark that for $d\ge 3$ there are embeddings $p\neq p^{\triangle}$ such that $\lambda_{\binom{d+1}{2}+1}(L(K_{d+1},p))=1$: \begin{proposition}\label{prop:nonregular} Let $p_1,p_2,p_3\in \Rea^2$ be the vertices of an equilateral triangle with sides of length $1$ centered at the origin. For $h\geq 0$, let $p_h:[4]\to \Rea^3$ be defined by \[ p_h(i)=\begin{cases} (p_i,0) & \text{ if } i\in[3],\\ (0,h) & \text{ if } i=4. \end{cases} \] Then, the spectrum of $L(K_4,p_h)$ is \[ \left\{0^{(6)},1^{(2)},\left(3-\left(h^2+\frac{1}{3}\right)^{-1}\right)^{(1)},\left(\frac{3}{2}+\frac{1}{2}\left(h^2+\frac{1}{3}\right)^{-1}\right)^{(2)},4^{(1)}\right\}. \] In particular, for $h\geq \frac{1}{\sqrt{6}}$, we have $\lambda_7(L(K_4,p_h))=1$. \end{proposition} \begin{proof} Denote the edges of the tetrahedron formed by the image of $p_h$ by $e_{ij}$, $1\leq i\neq j\leq 4$. Let $\ell$ be the length of the edges $e_{i4}$ (for $i\in[3]$). Note that $\ell=\sqrt{h^2+\frac{1}{3}}$. It is easy to check that for three distinct indices $i,j,k$ \[ \cos(\theta(e_{ij},e_{jk}))=\begin{cases} \frac{1}{2} & \text{ if } i,j,k\in [3],\\ \frac{1}{2\ell} & \text{ if } i,j\in [3], k=4,\\ 1-\frac{1}{2\ell^2} & \text{ if } i,k\in[3], j=4. \end{cases} \] Therefore, by Lemma \ref{lemma:down_laplacian}, we have \[ L^{-}(K_4,p_h)=\begin{pmatrix} 2 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2\ell} & \frac{1}{2\ell} & 0\\[5pt] \frac{1}{2} & 2 & \frac{1}{2} & \frac{1}{2\ell} & 0 & \frac{1}{2\ell}\\[5pt] \frac{1}{2} & \frac{1}{2} & 2 & 0 & \frac{1}{2\ell} & \frac{1}{2\ell}\\[5pt] \frac{1}{2\ell} & \frac{1}{2\ell} & 0 & 2 & 1-\frac{1}{2\ell^2} & 1-\frac{1}{2\ell^2}\\[5pt] \frac{1}{2\ell} & 0 & \frac{1}{2\ell} & 1-\frac{1}{2\ell^2} & 2 & 1-\frac{1}{2\ell^2}\\[5pt] 0 & \frac{1}{2\ell} & \frac{1}{2\ell} & 1-\frac{1}{2\ell^2} & 1-\frac{1}{2\ell^2} & 2 \end{pmatrix}.
\] The spectrum of $L^{-}(K_4,p_h)$ can now be computed with the help of a computer algebra system. We obtain the following eigenvalues: \[ 1^{(2)},\left(3-\ell^{-2}\right)^{(1)},\left(\frac{3}{2}+\frac{1}{2}\ell^{-2}\right)^{(2)},4^{(1)}. \] Since the non-zero eigenvalues of $L^{-}(K_4,p_h)$ are the same as those of $L(K_4,p_h)$, and $\ell^2=h^2+\frac{1}{3}$, the spectrum of $L(K_4,p_h)$ is \[ \left\{ 0^{(6)},1^{(2)},\left(3-\left(h^2+\frac{1}{3}\right)^{-1}\right)^{(1)},\left(\frac{3}{2}+\frac{1}{2}\left(h^2+\frac{1}{3}\right)^{-1}\right)^{(2)},4^{(1)}\right\}, \] as wanted. Finally, note that for $h\geq \frac{1}{\sqrt{6}}$ we obtain \[ 3-(h^2+\frac{1}{3})^{-1}\geq 3-2 =1, \] and therefore $\lambda_7(K_4,p_h)=1$. \end{proof} \subsection{Upper bound: $a_d(K_{d+1})\le 1$} First, we prove the $3$-dimensional case: \begin{proposition} \[ a_3(K_4)= 1. \] \end{proposition} \begin{proof} Let $p:V\to \Rea^3$. Note that, since $|E|=6=dn-\binom{d+1}{2}$ (for $d=3$ and $n=4$), \[ \lambda_7(L(K_4,p))=\lambda_{\binom{3+1}{2}+1}(L(K_4,p)) \] is equal to the minimal eigenvalue of $L^{-}(K_4,p)$. Thus, for every $0\neq x\in \Rea^6$ \[ \lambda_7(L(K_4,p)) \leq \frac{x^t L^{-}(K_4,p) x}{\|x\|^2}. \] Let $e_{1},e_1',e_2,e_2',e_3,e_3'$ be the edges of $K_4$, such that $e_i\cap e_i'=\emptyset$ for all $i$. Let $\ell_1,\ell_1',\ell_2,\ell'_2,\ell_3,\ell'_3$ be the lengths of the images of $e_{1},e_1',e_2,e_2',e_3,e_3'$ (that is, if $e_i=\{u,v\}$, $\ell_i=\|p(u)-p(v)\|$). Assume without loss of generality that $\ell_3^2+\ell_3^{'2} \leq \min\{\ell_1^2+\ell_1^{'2} ,\ell_2^2+\ell_2^{'2}\}$. Let \[ x= \ell_1 1_{e_1}+\ell_1' 1_{e_1'}-\ell_2 1_{e_2}-\ell_2' 1_{e_2'}. \] Then, $\|x\|^2=\ell_1^2+\ell_1^{'2}+\ell_2^2+\ell_2^{'2}$, and \begin{multline*} x^t L^{-}(K_4,p) x =2\|x\|^2 - 2(\ell_1\ell_2\cos(\theta(e_1,e_2))+\ell_1\ell_2'\cos(\theta(e_1,e'_2))\\+\ell_1'\ell_2\cos(\theta(e'_1,e_2)) +\ell_1'\ell_2'\cos(\theta(e'_1,e'_2))). \end{multline*} By the law of cosines, we obtain \[ x^t L^{-}(K_4,p) x =2\|x\|^2 +2(\ell_3^2+\ell_3^{'2}- \ell_1^2-\ell_1^{'2}-\ell_2^2-\ell_2^{'2}) = 2(\ell_3^2+\ell_3^{'2}). \] So, \[ \frac{x^t L^{-}(K_4,p) x}{\|x\|^2}= \frac{2(\ell_3^2+\ell_3^{'2})}{\ell_1^2+\ell_1^{'2}+\ell_2^2+\ell_2^{'2}}\leq 1. \] Therefore, we obtain \[ \lambda_7(L(K_4,p)) \leq 1. \] \end{proof} Finally, we show that the bound for general $d$ follows from the $d=3$ case: \begin{proposition} For all $d\geq 4$, \[ a_d(K_{d+1})\leq a_3(K_4)=1. \] \end{proposition} \begin{proof} Let $K_{d+1}=(V,E)$, where $V=[d+1]$ and $E=\binom{[d+1]}{2}$. We will show that for every $d$, \[ a_d(K_{d+1})\leq a_{d-1}(K_{d}). \] Let $G$ be the graph obtained by adding an isolated vertex $v$ to $K_d$. Then, $G$ is obtained from $K_{d+1}$ by removing the $d$ edges containing $v$. Therefore, for every $p:V\to \Rea^d$, we have by Theorem \ref{thm:edge_removal_interlacing}, \[ \lambda_{\binom{d+1}{2}+1}(L(K_{d+1},p)) \leq \lambda_{\binom{d+1}{2}+d+1}(L(G,p)). \] Let $H$ be an affine hyperplane containing $p(V\setminus\{v\})$. Identify $H$ with $\Rea^{d-1}$, and let $p'=p|_{V\setminus\{v\}}: V\setminus\{v\}\to\Rea^{d-1}$. Note that $L^{-}(G,p)=L^{-}(K_d,p')$; therefore, the non-zero eigenvalues of $L(G,p)$ and $L(K_d,p')$ are the same. Since $L(K_d,p')\in \Rea^{(d-1)d \times (d-1)d}$ and $L(G,p)\in \Rea^{d(d+1)\times d(d+1)}$, this means in particular that \[ \lambda_{\binom{d}{2}+1}(L(K_d,p')) =\lambda_{\binom{d}{2}+2d+1}(L(G,p))=\lambda_{\binom{d+1}{2}+d+1}(L(G,p)). 
\] So, we obtain \[ \lambda_{\binom{d+1}{2}+1}(L(K_{d+1},p)) \leq \lambda_{\binom{d}{2}+1}(L(K_d,p')) \leq a_{d-1}(K_d). \] Since this holds for every $p:V\to \Rea^d$, we obtain \[ a_d(K_{d+1}) \leq a_{d-1}(K_d), \] as wanted. Therefore, the bound \[ a_d(K_{d+1}) \leq a_{3}(K_4)=1 \] follows by induction on $d$. \end{proof} \section{Spectra of Tur{\'a}n graphs $T(n,d+1)$ and $T(n,2d)$}\label{sec:Turan} Recall that $T(n,r)$ is the complete balanced $r$-partite graph with $n$ vertices. For $r=d+1$, we denote by $q^{\triangle}: [n]\to \Rea^d$ the mapping that maps the vertices of each of the $d+1$ parts of $T(n,d+1)$ to one of the vertices of the regular $d$-dimensional simplex. Similarly, we denote by $q^{\diamond}:[n]\to \Rea^d$ the mapping that maps the vertices of each of the $2d$ parts of $T(n,2d)$ to one of the vertices of the regular $d$-dimensional crosspolytope. In this section we determine the spectra of $L(T(n,d+1),q^{\triangle})$ and $L(T(n,2d),q^{\diamond})$. In fact, in each case we provide a basis of $\Rea^{E}$ consisting of eigenvectors of the corresponding lower stiffness matrix. As a consequence, we obtain a lower bound on the $d$-dimensional algebraic connectivity of $K_n$ (Theorem \ref{thm:lower_bound}). We begin with the following result of Jord\'an and Tanigawa: \begin{lemma}[{\cite[Lemma 4.3]{jordan2020rigidity}}]\label{lemma:n_eigenvalue} Let $p:[n]\to \Rea^d$. If $p$ is not constant, then the largest eigenvalue of $L(K_n,p)$ is $n$. \end{lemma} In \cite{jordan2020rigidity} it was shown that $p$ itself (when considered as a vector in $\Rea^{d n}$) is an eigenvector of $L(K_n,p)$ with eigenvalue $n$. The corresponding eigenvector for $L^{-}(K_n,p)$ is $\phi=R(G,p)^{t} p$, which satisfies \[ \phi(\{i,j\})= \|p(i)-p(j)\| \] for each $i\neq j\in[n]$. The following result shows that for mappings $p$ satisfying certain ``spherical symmetry", $n/2$ is an eigenvalue of $L(K_n,p)$ of high multiplicity: \begin{proposition}\label{prop:large_eigenvalues_balanced_p} Let $p:[n]\to\Rea^d$ such that $\|p(i)\|=1$ for all $i\in[n]$ and $\sum_{i=1}^n p(i)=0$. Assume that the image of $p$ is of size at least $3$. Then, $n/2$ is an eigenvalue of $L(K_n,p)$ with multiplicity at least $n-1$. \end{proposition} \begin{proof} Since the non-zero eigenvalues of $L(K_n,p)$ and $L^{-}(K_n,p)$ are the same, it is enough to look at $L^{-}(K_n,p)$. Let $f\in \Rea^n$ such that $\sum_{i=1}^n f_i=0$. For every $i\neq j\in[n]$, let $\ell_{ij}=\|p(i)-p(j)\|$. Let $E=E(K_n)=\{\{i,j\}:\, 1\leq i<j\leq n\}$, and let $\phi_f \in \Rea^E$ be defined by \[ \phi_f(\{i,j\})=(f_i+f_j) \ell_{ij}. \] We will show that $\phi_f$ is an eigenvector of $L^{-}(K_n,p)$ with eigenvalue $n/2$. For $i,j,k\in[n]$, let $\theta_{ijk}$ be the angle between $p(i)-p(j)$ and $p(k)-p(j)$. By the law of cosines, we have \[ \ell_{ij}\ell_{jk}\cos(\theta_{ijk})=\frac{1}{2}\left(\ell_{ij}^2+\ell_{jk}^2-\ell_{ik}^2\right). \] Let $I'\in \Rea^{E\times E}$ be a diagonal matrix with elements $I'_{e,e}=1$ if $e=\{i,j\}$ where $p(i)\neq p(j)$ and $I'_{e,e}=0$ otherwise. Let $A=L^{-}(K_n,p)-2I'$. Let $e=\{i,j\}\in E$. First, assume that $p(i)\neq p(j)$. 
Then, by Lemma \ref{lemma:down_laplacian}, we have \begin{align*} A \phi_f(e)&= \sum_{k\neq i,j} \left(\cos(\theta_{ijk}) \phi_f(\{j,k\}) +\cos(\theta_{jik})\phi_f(\{i,k\})\right) \\ &=\sum_{k\neq i,j} \left(\ell_{jk}\cos(\theta_{ijk})(f_j+f_k) +\ell_{ik}\cos(\theta_{jik})(f_i+f_k)\right) \\ &=\sum_{k\neq i,j} \left(\frac{\ell_{ij}^2+\ell_{jk}^2-\ell_{ik}^2}{2\ell_{ij}}(f_j+f_k) +\frac{\ell_{ij}^2+\ell_{ik}^2-\ell_{jk}^2}{2\ell_{ij}}(f_i+f_k)\right) \\ &=\sum_{k\neq i,j} \ell_{ij} f_k + \sum_{k\neq i,j} (f_i+f_j)\frac{\ell_{ij}}{2} +\frac{f_j-f_i}{2\ell_{ij}} \sum_{k\neq i,j}(\ell_{jk}^2-\ell_{ik}^2). \end{align*} Note that, since $\sum_{k=1}^n f_k=0$, we have \[ \sum_{k\neq i,j} f_k = -(f_i+f_j). \] Also, since $\|p(x)\|=1$ for all $x\in[n]$, for all $x,y\in[n]$ we have \[ \ell_{xy}^2= \|p(x)\|^2+\|p(y)\|^2-2 p(x)\cdot p(y)= 2-2 p(x)\cdot p(y). \] So, since $\sum_{k=1}^n p(k)=0$, we obtain \begin{multline*} \sum_{k\neq i,j} (\ell_{jk}^2-\ell_{ik}^2) = 2(p(i)-p(j))\cdot\sum_{k\neq i,j} p(k) \\ = 2(p(j)-p(i))\cdot(p(i)+p(j))= 2(\|p(j)\|^2-\|p(i)\|^2)=0. \end{multline*} Therefore, we obtain \[ A \phi_f(e)= -(f_i+f_j)\ell_{ij}+\frac{n-2}{2}(f_i+f_j) \ell_{ij}= \frac{n-4}{2} \phi_f(e). \] So, $L^{-}(K_n,p)\phi_{f}(e)=(n/2)\phi_f(e)$. Now, assume $p(i)=p(j)$. Note that $\phi_f(e)=(f_i+f_j)\ell_{i j}=0$. Then, since $\cos(\theta_{i j k})=0$ and $\cos(\theta_{j i k})=0$ for all $k\neq i,j$, we obtain \[ A \phi_f(e)= \sum_{k\neq i,j} \left(\cos(\theta_{ijk}) \phi_f(\{j,k\}) +\cos(\theta_{jik})\phi_f(\{i,k\})\right)=0, \] and therefore $L^{-}(K_n,p)\phi_{f}(e)=0=(n/2)\phi_f(e)$. Thus, $\phi_f$ is an eigenvector of $L^{-}(K_n,p)$ with eigenvalue $n/2$. Finally, we will show that the dimension of the subspace \[ U=\left\{\phi_f:\, f\in \Rea^n, \sum_{i=1}^n f_i=0\right\} \] is $n-1$. Indeed, let $W=\{ f\in \Rea^n:\, \sum_{i=1}^n f_i=0\}$. Clearly $\dim(W)=n-1$. We have $U=\Phi(W)$, where $\Phi\in \Rea^{|E|\times n}$ is defined by \[ \Phi(e,i)= \begin{cases} \ell_{i j} & \text{ if } e=\{i,j\} \text{ for some } j\in[n],\\ 0 & \text{ otherwise.} \end{cases} \] Let $g\in \Rea^n$ such that $\Phi g=0$. Let $i\in[n]$. Since the image of $p$ is of size at least $3$, there exist $j,k\in[n]$ such that $p(i),p(j),p(k)$ are pairwise distinct. Then, we have \begin{align*} 0&=(\Phi g)_{\{i,j\}}= (g_i+g_j)\ell_{i j}, \\ 0&=(\Phi g)_{\{i,k\}}= (g_i+g_k)\ell_{i k}, \\ 0&=(\Phi g)_{\{j,k\}}= (g_j+g_k)\ell_{j k}. \end{align*} We obtain $g_i=-g_j=g_k=-g_i$. That is, $g_i=0$. Therefore, $g=0$. Thus, $\Phi$ has linearly independent columns, and so $\dim(U)=\dim(\Phi(W))= \dim(W)=n-1$, as wanted. Hence, $n/2$ has multiplicity at least $n-1$ as an eigenvalue of $L^{-}(K_n,p)$ (and thus also as an eigenvalue of $L(K_n,p)$). \end{proof} We conjecture that for the mappings considered in Proposition \ref{prop:large_eigenvalues_balanced_p}, $n/2$ is the second largest eigenvalue: \begin{conjecture}\label{conj:largest_eig_spherical} Let $d\geq 3$, and let $p:[n]\to\Rea^d$ such that $\|p(i)\|=1$ for all $i\in[n]$ and $\sum_{i=1}^n p(i)=0$. Assume that the image of $p$ is of size at least $3$. Then, the second largest eigenvalue of $L(K_n,p)$ is $n/2$, and its multiplicity is exactly $n-1$. \end{conjecture} In the case $d=2$ we can say more: \begin{proposition}\label{prop:2d} Let $p:[n]\to\Rea^2$ such that $\|p(i)\|=1$ for all $i\in[n]$ and $\sum_{i=1}^n p(i)=0$. Assume that $p$ is injective. Then, the spectrum of $L(K_n,p)$ is \[ \left\{ 0^{(3)}, \frac{n}{2}^{(2n-4)},n^{(1)}\right\}.
\] \end{proposition} Proposition \ref{prop:2d} extends Zhu's result \cite[Remark 3.4.1]{zhu2013quantitative}, which states that the same conclusion holds under the more restrictive assumption that $p$ maps $[n]$ to the roots of unity of order $n$. In particular, this shows that there are infinitely many embeddings $p:[n]\to \Rea^2$ attaining the supremum $a_2(K_n)=n/2$. For completeness, we include a proof, following the proof of the special case sketched by Zhu in \cite{zhu2013quantitative}. \begin{proof} Let $x,y\in \Rea^{2n}$ be defined by \[ x= (1,0,1,0,\ldots,1,0)^t,\ \ y= (0,1,0,1,\ldots,0,1)^t. \] For $i\in [n]$, let $p_x(i),p_y(i)$ be the two coordinates of $p(i)$, and let \[ p^{\perp}(i)= (-p_y(i),p_x(i))^t. \] Define $r\in \Rea^{2n}$ by \begin{align*} r&= (p_x^{\perp}(1),p_y^{\perp}(1),p_x^{\perp}(2),p_y^{\perp}(2),\ldots,p_x^{\perp}(n),p_y^{\perp}(n))^t \\&= (-p_y(1),p_x(1),-p_y(2),p_x(2),\ldots,-p_y(n),p_x(n))^t. \end{align*} It is easy to check (using the fact that $\sum_{i=1}^n p(i)=0$) that $x,y,r$ belong to the kernel of $L(K_n,p)$, and moreover, form an orthogonal set. We identify the mapping $p$ with the vector \[ p= (p_x(1),p_y(1),\ldots,p_x(n),p_y(n))^t\in \Rea^{2n}. \] By Lemma \ref{lemma:n_eigenvalue}, $p$ is an eigenvector of $L(K_n,p)$ with eigenvalue $n$. We will show that \begin{equation}\label{eq:zhu} L(K_n,p)= \frac{n}{2}I +\frac{1}{2} (p p^t -x x^t -y y^t -r r^t). \end{equation} Then, it immediately follows that every vector in $\Rea^{2n}$ orthogonal to $p,x,y$ and $r$ is an eigenvector of $L(K_n,p)$ with eigenvalue $n/2$, as wanted. We can write $L(K_n,p)$ as an $n\times n$ block matrix (see \cite[Section 4.4]{jordan2020rigidity}), formed by $2\times 2$ blocks \[ [L(K_n,p)]_{i,j}= -d_{i j} d_{i j}^t, \] for $i\neq j\in [n]$, and \[ [L(K_n,p)]_{i,i}=\sum_{j\in[n]\setminus\{i\}} d_{i j} d_{i j}^t \] for $i\in[n]$. It is then easy to check that proving \eqref{eq:zhu} is equivalent to showing that, for all $i\in[n]$, \begin{equation}\label{eq:block_zhu_diag} \sum_{j\in[n]\setminus\{i\}} d_{i j} d_{i j}^t=\frac{1}{2} p(i)p(i)^t- \frac{1}{2}p^{\perp}(i) (p^{\perp}(i))^t + \frac{n-1}{2} I, \end{equation} and for all $i\neq j\in[n]$ \begin{equation}\label{eq:block_zhu} -d_{i j} d_{i j}^t= \frac{1}{2} p(i)p(j)^t- \frac{1}{2}p^{\perp}(i) (p^{\perp}(j))^t - \frac{1}{2} I. \end{equation} First, note that \eqref{eq:block_zhu_diag} follows from \eqref{eq:block_zhu}. Indeed, let $i\in[n]$. By \eqref{eq:block_zhu}, and using the fact that $\sum_{j\in[n]\setminus\{i\}} p(j)=-p(i)$ (and similarly, $\sum_{j\in[n]\setminus\{i\}} p^{\perp}(j)=-p^{\perp}(i)$), we obtain \begin{multline*} \sum_{j\in[n]\setminus\{i\}} d_{i j} d_{i j}^t= \sum_{j\in[n]\setminus\{i\}}\left( \frac{1}{2} I -\frac{1}{2}p(i)p(j)^t+\frac{1}{2}p^{\perp}(i)(p^{\perp}(j))^t\right) \\ =\frac{n-1}{2}I -\frac{1}{2}p(i)\left(\sum_{j\in[n]\setminus\{i\}}p(j)^t\right) +\frac{1}{2}p^{\perp}(i)\left(\sum_{j\in[n]\setminus\{i\}}p^{\perp}(j)^t\right) \\ =\frac{n-1}{2}I +\frac{1}{2}p(i)p(i)^t -\frac{1}{2}p^{\perp}(i)p^{\perp}(i)^t. \end{multline*} So, we are left to show that \eqref{eq:block_zhu} holds for all $i\neq j\in[n]$. Let $i\neq j\in[n]$, and let \[ M= d_{i j} d_{i j}^t+ \frac{1}{2} p(i)p(j)^t- \frac{1}{2}p^{\perp}(i) (p^{\perp}(j))^t - \frac{1}{2} I. \] We will show that $M=0$. We denote the four coordinates of $M$ by $M_{x x}$, $M_{x y}$, $M_{y x}$ and $M_{y y}$.
Since $\|p(i)\|=\|p(j)\|=1$, we have \[ \|p(i)-p(j)\|^2 = 2(1-p(i)\cdot p(j)), \] and therefore \[ d_{i j} d_{i j}^t= \frac{(p(i)-p(j))(p(i)-p(j))^t}{2(1-p(i)\cdot p(j))}. \] Using this and the fact that $p_x(i)^2+p_y(i)^2=p_x(j)^2+p_y(j)^2=1$, we obtain \begin{align*} &M_{x x}= \frac{(p_x(i)-p_x(j))(p_x(i)-p_x(j))}{2(1-p_x(i)p_x(j)-p_y(i)p_y(j))} +\frac{1}{2}p_x(i)p_x(j)-\frac{1}{2}p_y(i)p_y(j)-\frac{1}{2} \\&= \frac{1}{2(1-p_x(i)p_x(j)-p_y(i)p_y(j))}\big( p_x(i)^2-2p_x(i)p_x(j)+p_x(j)^2 +p_x(i)p_x(j)\\&-p_y(i)p_y(j) +p_y(i)^2p_y(j)^2-p_x(i)^2 p_x(j)^2 -1 +p_x(i)p_x(j)+p_y(i)p_y(j) \big) \\ &=\frac{p_x(i)^2 + p_x(j)^2 -p_x(i)^2p_x(j)^2-1 +(1-p_x(i)^2)(1-p_x(j)^2)}{2(1-p_x(i)p_x(j)-p_y(i)p_y(j))}=0. \end{align*} Similarly, $M_{y y}=0$. Finally, we have \begin{align*} M_{x y}&= \frac{(p_x(i)-p_x(j))(p_y(i)-p_y(j))}{2(1-p_x(i)p_x(j)-p_y(i)p_y(j))} +\frac{1}{2}p_x(i)p_y(j)+\frac{1}{2}p_y(i)p_x(j) \\&= \frac{1} {2(1-p_x(i)p_x(j)-p_y(i)p_y(j))} \big( p_x(i)p_y(i)-p_y(i)p_x(j)-p_x(i)p_y(j)\\&+p_x(j)p_y(j) +p_x(i)p_y(j)+p_y(i)p_x(j) -p_x(i)^2 p_x(j)p_y(j)\\ &-p_x(i)p_y(i)p_x(j)^2 -p_x(i)p_y(i)p_y(j)^2 -p_y(i)^2p_x(j)p_y(j) \big) \\ &=\frac{p_x(i)p_y(i)+p_x(j)p_y(j) -p_x(j)p_y(j) -p_x(i)p_y(i)} {2(1-p_x(i)p_x(j)-p_y(i)p_y(j))} =0, \end{align*} and similarly $M_{y x}=0$. So, $M=0$ as wanted. \end{proof} The following very simple Lemma will be useful for finding the additional eigenvectors of $L^{-}(T(n,d+1),q^{\triangle})$ and $L^{-}(T(n,2d),q^{\diamond})$. \begin{lemma}\label{lemma:condition_for_eigenvector} Let $G=(V,E)$ and $p:V\to \Rea^d$ such that $p(i)\neq p(j)$ for all $\{i,j\}\in E$. Let $L^{-}=L^{-}(G,p)$ and $\phi\in \Rea^{E}$. Let $\text{supp}(\phi)\subset E$ be the support of $\phi$. Assume that the following conditions hold: \begin{enumerate} \item For all $e\in E\setminus \supp(\phi)$, $ \cos(\theta(e,e'))=\cos(\theta(e,e'')) $ for every $e',e''\in \supp(\phi)$ such that $|e\cap e'|=|e\cap e''|=1$. \item For all $v\in V$, $ \sum_{e\in E:\, v\in e} \phi(e)=0. $ \item There is $\lambda\in \Rea$ such that for all $e\in\supp(\phi)$ \[ \sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e'))\phi(e')=(\lambda-2)\phi(e). \] \end{enumerate} Then, $\phi$ is an eigenvector of $L^{-}$ with eigenvalue $\lambda$. \end{lemma} \begin{proof} By Lemma \ref{lemma:down_laplacian}, conditions $(1)$ and $(2)$ imply that $L^{-}\phi(e)=0=\phi(e)$ for every $e\in E\setminus \supp(\phi)$, and condition $(3)$ says exactly that $L^{-}\phi(e)=\lambda\phi(e)$ for all $e\in\supp(\phi)$. Therefore, we obtain $L^{-}\phi=\lambda\phi$, as wanted. \end{proof} \subsection{The spectrum of $L(T(n,d+1),q^{\triangle})$} \simplexrepeated* Note that for $n=d+1$ the proof below gives a second proof of Theorem \ref{thm:simplex_spectrum}. \begin{proof} Denote $T(n,d+1)=(V,E)$. First note that, as for all frameworks, $0$ is an eigenvalue of $L(T(n,d+1),q^{\triangle})$ with multiplicity at least $\binom{d+1}{2}=d(d+1)/2$. Moreover, note that $L(T(n,d+1),q^{\triangle})=L(K_n,q^{\triangle})$. Thus, by Lemma \ref{lemma:n_eigenvalue} , $n$ is an eigenvalue of $L(T(n,d+1),q^{\triangle})$. Also, since $\|q^{\triangle}(v)\|=1$ for all $v\in V$ and $\sum_{v\in V} q^{\triangle}(v)=0$, then by Proposition \ref{prop:large_eigenvalues_balanced_p}, $n/2$ is an eigenvalue of $L(T(n,d+1),q^{\triangle})$ with multiplicity at least $n-1$. Therefore, we are left to show that $n/(d+1)$ is an eigenvalue with multiplicity at least $(d-2)(d+1)/2$ and that $n/(2(d+1))$ is an eigenvalue with multiplicity at least $(n-d-1)(d-1)$. 
Since the non-zero eigenvalues of $L(T(n,d+1),q^{\triangle})$ and $L^{-}=L^{-}(T(n,d+1),q^{\triangle})$ are the same, we will find the corresponding eigenvectors for $L^{-}$. Denote by $V_1,\ldots,V_{d+1}$ the sides of $T(n,d+1)$. For every $i\neq j\in[d+1]$, let $E(V_i,V_j)$ be the set of edges with one endpoint in $V_i$ and the other in $V_j$. Let $i,j,k\in [d+1]$ such that $i\neq j,k$. Let $v\in V_i$, $u\in V_j$ and $w\in V_k$. Denote by $\theta_{uvw}$ the angle between $q^{\triangle}(v)-q^{\triangle}(u)$ and $q^{\triangle}(v)-q^{\triangle}(w)$. Then, we have \begin{equation}\label{eq:anglesimplex} \cos(\theta_{uvw})=\begin{cases} \frac{1}{2} & \text{ if } j\neq k,\\ 1 & \text{ if } j=k. \end{cases} \end{equation} \textbf{Eigenvalue $\bm{n/(d+1)}$:} Assume $d\geq 3$. Let $i_1,i_2,i_3,i_4\in[d+1]$ be four distinct integers. Define $ \Phi_{i_1,i_2,i_3,i_4}\in \Rea^{E}$ by \[ \Phi_{i_1,i_2,i_3,i_4}(e)=\begin{cases} 1 & \text{ if } e\in E(V_{i_1},V_{i_2})\cup E(V_{i_3},V_{i_4}),\\ -1 & \text{ if } e\in E(V_{i_2},V_{i_3})\cup E(V_{i_1},V_{i_4}),\\ 0 & \text{ otherwise.} \end{cases} \] Note that for every $e\notin \supp(\Phi_{i_1,i_2,i_3,i_4})$ and every $e'\in \supp(\Phi_{i_1,i_2,i_3,i_4})$ such that $|e\cap e'|=1$, $\cos(\theta(e,e'))=1/2$, and that for all $v\in V$, we have $\sum_{e\in E:\, v\in e} \Phi_{i_1,i_2,i_3,i_4}(e)=0$. Moreover, let $e=\{u,v\}\in\supp(\Phi_{i_1,i_2,i_3,i_4})$. Assume $u\in V_{i_1}$ and $v\in V_{i_2}$ (the other cases are analyzed similarly). Then, using \eqref{eq:anglesimplex}, we obtain \begin{multline*} \sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e'))\Phi_{i_1,i_2,i_3,i_4}(e')=\sum_{w\in V_{i_1}\setminus\{u\}} 1 + \sum_{w\in V_{i_2}\setminus\{v\}} 1 \\-\sum_{w\in V_{i_3}}\frac{1}{2}-\sum_{w\in V_{i_4}}\frac{1}{2}= \frac{n}{d+1}-2= \left(\frac{n}{d+1}-2\right)\Phi_{i_1,i_2,i_3,i_4}(e). \end{multline*} Therefore, by Lemma \ref{lemma:condition_for_eigenvector}, $\Phi_{i_1,i_2,i_3,i_4}$ is an eigenvector of $L^{-}$ with eigenvalue $n/(d+1)$. Let \begin{multline*} I= \left\{ (1,2,3,k),(1,3,2,k) :\, k\in [d+1]\setminus\{1,2,3\}\right\} \\ \cup \left\{ (2,3,j,k) :\, j,k\in[d+1]\setminus\{1,2,3\},j<k \right\}, \end{multline*} and let \[ B= \left\{ \Phi_{i_1,i_2,i_3,i_4}\right\}_{(i_1,i_2,i_3,i_4)\in I}. \] Note that \[ |B|= |I|= 2(d-2)+ \binom{d-2}{2} = \frac{(d-2)(d+1)}{2}. \] We will show that $B$ is a linearly independent set: For each $(i_1,i_2,i_3,i_4)\in I$, let $\alpha_{i_1,i_2,i_3,i_4}\in \Rea$. Assume that \[ \sum_{(i_1,i_2,i_3,i_4)\in I} \alpha_{i_1,i_2,i_3,i_4} \Phi_{i_1,i_2,i_3,i_4}=0. \] Let $j,k\in[d+1]\setminus\{1,2,3\}$ such that $j<k$. Note that for every $u\in V_j$ and $w\in V_k$, $\Phi_{2,3,j,k}$ is the only vector in $B$ containing $\{u,w\}$ in its support. Therefore, we must have $\alpha_{2,3,j,k}=0$. Now, let $k\in [d+1]\setminus\{1,2,3\}$. For every $u\in V_1$, $w\in V_k$, the only vectors in $B$ containing $\{u,w\}$ in their support are $\Phi_{1,2,3,k}$ and $\Phi_{1,3,2,k}$. Therefore, we must have $\alpha_{1,2,3,k}=-\alpha_{1,3,2,k}$. Hence, we obtain a linear relation \[ \sum_{k\in[d+1]\setminus\{1,2,3\}} \alpha_{1,2,3,k} (\Phi_{1,2,3,k}-\Phi_{1,3,2,k})=0. \] Note that $\Phi_{1,2,3,k}-\Phi_{1,3,2,k}=\Phi_{1,2,k,3}$. So, we obtain \[ \sum_{k\in[d+1]\setminus\{1,2,3\}} \alpha_{1,2,3,k} \Phi_{1,2,k,3}=0. \] But note that each vector $\Phi_{1,2,k,3}$ contains a unique edge in its support (for example, any edge $\{u,w\}$ where $u\in V_2$ and $w\in V_k$). 
Therefore, $\{\Phi_{1,2,k,3}\}_{k\in[d+1]\setminus\{1,2,3\}}$ are independent, and hence $\alpha_{1,2,3,k}=\alpha_{1,3,2,k}=0$ for all $k\in[d+1]\setminus\{1,2,3\}$. Thus, $B$ is linearly independent. So, $n/(d+1)$ is an eigenvalue of $L^{-}$ with multiplicity at least $(d-2)(d+1)/2$. \textbf{Eigenvalue $\bm{n/(2(d+1))}$:} Let $i,j,k\in[d+1]$ be three distinct indices, and let $u,v\in V_i$. Define \[ \Psi_{u,v,j,k}= \sum_{w\in V_j} (1_{\{u,w\}}-1_{\{v,w\}})+\sum_{w\in V_k} (1_{\{v,w\}}-1_{\{u,w\}}). \] Let $e=\{x,y\}\notin \supp(\Psi_{u,v,j,k})$. Assume $x\in V_r$ and $y\in V_t$. If $\{r,t\}=\{i,j\}$ or $\{r,t\}=\{i,k\}$, then $\cos(\theta(e,e'))=1$ for all $e'\in \supp(\Psi_{u,v,j,k})$ such that $|e\cap e'|=1$. Otherwise, $\cos(\theta(e,e'))=1/2$ for all $e'\in \supp(\Psi_{u,v,j,k})$ such that $|e\cap e'|=1$. Also, it is easy to check that for every $w\in V$, $\sum_{e\in E:\, w\in e} \Psi_{u,v,j,k}(e)=0$. Finally, let $e=\{x,y\}\in\supp(\Psi_{u,v,j,k})$. Assume $x=u$ and $y\in V_j$ (the other cases are similar). Then, by \eqref{eq:anglesimplex}, we have \begin{align*} &\sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e'))\Psi_{u,v,j,k}(e') \\&=\sum_{w\in V_j\setminus\{y\}} \cos(\theta_{y u w}) - \sum_{w\in V_k}\cos(\theta_{y u w}) - \cos(\theta_{u y v}) \\ &=\sum_{w\in V_j\setminus\{y\}} 1 -\sum_{w\in V_k} \frac{1}{2} -1 = \frac{n}{2(d+1)}-2=\left(\frac{n}{2(d+1)}-2\right)\Psi_{u,v,j,k}(e). \end{align*} Therefore, by Lemma \ref{lemma:condition_for_eigenvector}, $\Psi_{u,v,j,k}$ is an eigenvector of $L^{-}$ with eigenvalue $n/(2(d+1))$. For every $i\in[d+1]$, fix some $j(i)\in [d+1]\setminus\{i\}$ and $u_i\in V_i$, and let \[ J_i=\left\{(u_i,v,j(i),k):\, v\in V_i\setminus\{u_i\}, \, k\in[d+1]\setminus\{i,j(i)\}\right\} \] and \[ B_i= \left\{ \Psi_{u,v,j,k}\right\}_{(u,v,j,k)\in J_i}. \] Let $B=\cup_{i=1}^{d+1} B_i$. Note that \[ |B|=\sum_{i=1}^{d+1}|J_i|= (d+1)\left(\frac{n}{d+1}-1\right)(d-1) = (n-d-1)(d-1). \] We will show that $B$ is a linearly independent set. Let $i\neq i'\in[d+1]$ and let $(u_i,v,j,k)\in J_i$ and $(u_{i'},v',j',k')\in J_{i'}$. We will show that $\Psi_{u_i,v,j,k}$ and $\Psi_{u_{i'},v',j',k'}$ are orthogonal. Indeed, the supports of the two vectors are disjoint unless $i'\in \{j,k\}$ and $i\in \{j',k'\}$. In this case, the intersection of the two supports consists of the four edges $\{u_i,u_{i'}\},\{u_i,v'\},\{v,u_{i'}\},\{v,v'\}$, and it is easy to check that we have $\Psi_{u_i,v,j,k}\cdot\Psi_{u_{i'},v',j',k'}=0$. Therefore, it is enough to check that for each $i\in[d+1]$, $B_i$ is linearly independent. But this follows from the fact that for every $(u_i,v,j(i),k)\in J_i$, there is an edge in the support of $\Psi_{u_i,v,j(i),k}$ that belongs to no other element of $B_i$ (any edge of the form $\{v,w\}$ where $w\in V_k$). So $B$ is linearly independent, and therefore $n/(2(d+1))$ is an eigenvalue of $L^{-}$ with multiplicity at least $(n-d-1)(d-1)$. \end{proof} \begin{remark} In \cite{jordan2020rigidity}, a lower bound $\lambda_{\binom{d+1}{2}+1}(L(T(n,d+1),q^{\triangle}))\geq \frac{2d^2+d}{2(d+1)^3}n$ (for $n$ divisible by $d+1$) was stated, in contradiction to Theorem \ref{thm:simplex_repeated}. After fixing a typo in \cite[Lemma 5.4]{jordan2020rigidity}, the corrected lower bound obtained in \cite{jordan2020rigidity} is \[ \lambda_{\binom{d+1}{2}+1}(L(T(n,d+1),q^{\triangle}))\geq \frac{d n}{2(d+1)^2}. \] From Theorem \ref{thm:simplex_repeated} it follows that in fact, for $n$ divisible by $d+1$, \[ \lambda_{\binom{d+1}{2}+1}(L(T(n,d+1),q^{\triangle}))= \frac{n}{2(d+1)}.
\] \end{remark} \subsection{The spectrum of $L(T(n,2d),q^{\diamond})$} \crosspolytopespectrum* \begin{proof} Denote $T(n,2d)=(V,E)$. First, note that, as for all frameworks, $0$ is an eigenvalue of $L(T(n,2d),q^{\diamond})$ with multiplicity at least $d(d+1)/2$. Note that $L(T(n,2d),q^{\diamond})=L(K_n,q^{\diamond})$. Thus, by Lemma \ref{lemma:n_eigenvalue} , $n$ is an eigenvalue of $L(T(n,2d),q^{\diamond})$. Also, since $\|q^{\diamond}(v)\|=1$ for all $v\in V$ and $\sum_{v\in V} q^{\diamond}(v)=0$, then by Proposition \ref{prop:large_eigenvalues_balanced_p}, $n/2$ is an eigenvalue of $L(T(n,2d),q^{\diamond})$ with multiplicity at least $n-1$. Therefore, we are left to show that $n/d$ is an eigenvalue with multiplicity at least $d(d-1)/2$ and that $n/(2d)$ is an eigenvalue with multiplicity at least $n(d-1)-d^2$. Since the non-zero eigenvalues of $L(T(n,2d),q^{\diamond})$ and $L^{-}=L^{-}(T(n,2d),q^{\diamond})$ are the same, we will find the corresponding eigenvectors for $L^{-}$. Denote the parts of $T(n,2d)$ by $V_{1,1},V_{1,-1},V_{2,1},V_{2,-1}\ldots,V_{d,1},V_{d,-1}$ where, for $i\in[d]$ and $s\in\{1,-1\}$, $q^{\diamond}(V_{i,s})$ is the singleton set consisting of the vector $s e_i$, where $e_i$ is the $i$-th standard vector in $\Rea^d$. For every $i,j\in [d]$ and $x,y\in\{1,-1\}$, let $E(V_{i,x},V_{j,y})$ be the set of edges with one endpoint in $V_{i,x}$ and the other in $V_{j,y}$. Let $u\in V_{i,x}$ and $v\in V_{j,y}$, where $i\neq j\in[d]$ and $x,y\in\{1,-1\}$. Let $w\neq u,v$, and assume $w\in V_{k,z}$ for $k\in [d]$ and $z\in\{1,-1\}$. Let $\theta_{v u w}$ be the angle between $q^{\diamond}(u)-q^{\diamond}(v)$ and $q^{\diamond}(u)-q^{\diamond}(w)$. Then, \begin{equation}\label{eq:angles2d} \cos(\theta_{v u w})=\begin{cases} \frac{1}{2} & \text{ if } k\neq i,j,\\ \frac{1}{\sqrt{2}} & \text{ if } k=i, z=-x,\\ 1 & \text{ if } k=j, z=y,\\ 0 & \text{ otherwise.} \end{cases} \end{equation} \textbf{Eigenvalue $\bm{n/d}$:} Let $i< j \in [d]$. Let $\phi_{ij}\in \Rea^E$ be defined by \[ \phi_{ij}(e)= \begin{cases} 1 & \text{ if } e\in E(V_{i,1},V_{j,1})\cup E(V_{i,-1},V_{j,-1}),\\ -1 & \text{ if } e\in E(V_{i,-1},V_{j,1})\cup E(V_{i,1},V_{j,-1}),\\ 0 & \text{ otherwise}. \end{cases} \] Let $e\notin \supp(\phi_{i j})$. If $e\in E(V_{i,1},V_{i,-1})\cup E(V_{j,1},V_{j,-1})$, then $\cos(\theta(e,e'))=1/\sqrt{2}$ for every $e'\in \supp(\phi_{i j})$ such that $|e\cap e'|=1$. If $e\in E(V_{k,1},V_{k,-1})$ for some $k\in[d]\setminus\{i,j\}$, then there are no edges $e'\in \supp(\phi_{i j})$ such that $|e\cap e'|=1$. If $e\notin \cup_{k=1}^d E(V_{k,1},V_{k,-1})$, then $\cos(\theta(e,e'))=1/2$ for every $e'\in \supp(\phi_{i j})$ such that $|e\cap e'|=1$. Also, note that for all $v\in V$, we have $\sum_{e\in E:\, v\in e} \phi_{i j}(e)=0$. Moreover, let $e=\{u,v\}\in\supp(\phi_{i j})$. Assume $u\in V_{i, 1}$ and $v\in V_{j, 1}$ (the other cases are analyzed similarly). Then, using \eqref{eq:angles2d}, we obtain \begin{align*} \sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e'))\phi_{ i j}(e') &=\sum_{w\in V_{i,1}\setminus\{u\}} 1 + \sum_{w\in V_{j,1}\setminus\{v\}} 1 \\ &= \frac{n}{d}-2 = \left(\frac{n}{d}-2\right)\phi_{i j}(e). \end{align*} So, by Lemma \ref{lemma:condition_for_eigenvector}, $\phi_{ij}$ is an eigenvector of $L^{-}$ with eigenvalue $n/d$. Since the supports of the vectors $\{\phi_{ij}\}_{i<j\in[d]}$ are pairwise disjoint, they form a linearly independent set. Thus, $n/d$ is an eigenvalue of $L^{-}$ with multiplicity at least $d(d-1)/2$. 
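Note that the multiplicities claimed for $n/d$ and $n/(2d)$, together with the eigenvalues $0$, $n$ and $n/2$ accounted for at the beginning of the proof, fill the whole space, since
\[
\frac{d(d+1)}{2}+1+(n-1)+\frac{d(d-1)}{2}+\big(n(d-1)-d^2\big)=dn.
\]
A numerical check analogous to the sketch given for $T(n,d+1)$ applies here as well, with $q^{\diamond}$ placing the parts $V_{i,1}$ and $V_{i,-1}$ at $e_i$ and $-e_i$, respectively.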
\textbf{Eigenvalue $\bm{n/(2d)}$:} Let $i\neq j\in[d]$. Define $\psi_{i j}\in \Rea^E$ by \[ \psi_{ij}(e)= \begin{cases} 1 & \text{ if } e\in E(V_{i,1},V_{j,1})\cup E(V_{i,-1},V_{j,1}),\\ -1 & \text{ if } e\in E(V_{i,1},V_{j,-1})\cup E(V_{i,-1},V_{j,-1}),\\ 0 & \text{ otherwise.} \end{cases} \] For $k\neq i,j$, let $ \Psi_{i j k}= \psi_{i j}-\psi_{k j}$. Let $e\notin \supp(\Psi_{i j k})$. If $e\in \cup_{t=1}^d E(V_{t,1},V_{t,-1})$, then $\cos(\theta(e,e'))=1/\sqrt{2}$ for every $e'\in \supp(\Psi_{i j k})$ such that $|e\cap e'|=1$ (note that if $t\neq i,j,k$ then there are no such edges $e'$). If $e\notin \cup_{t=1}^d E(V_{t,1},V_{t,-1})$, then $\cos(\theta(e,e'))=1/2$ for every $e'\in \supp(\Psi_{i j k})$ such that $|e\cap e'|=1$. Also, note that for all $v\in V$, we have $\sum_{e\in E:\, v\in e} \Psi_{i j k}(e)=0$. Moreover, let $e=\{u,v\}\in\supp(\Psi_{i j k})$. Assume $u\in V_{i, 1}$ and $v\in V_{j, 1}$ (the other cases are analyzed similarly). Then, using \eqref{eq:angles2d}, we obtain \begin{align*} \sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e'))\Psi_{ i j k}(e') &=\sum_{w\in V_{i,1}\setminus\{u\}} 1 + \sum_{w\in V_{j,1}\setminus\{v\}} 1 - \sum_{w\in V_{k,1}\cup V_{k,-1}} \frac{1}{2} \\ &= \frac{n}{2d}-2 = \left(\frac{n}{2d}-2\right)\Psi_{i j k}(e). \end{align*} Therefore, by Lemma \ref{lemma:condition_for_eigenvector}, $\Psi_{i j k}$ is an eigenvector of $L^{-}$ with eigenvalue $n/(2d)$. For each $j\in [d]$, fix $i(j)\in[d]\setminus\{j\}$. Let \[ I_j=\{(i(j),j,k):\, k\in [d]\setminus\{j,i(j)\}\} \] and $I=\cup_{j\in[d]} I_j$. Note that $|I|=d(d-2)$. We will show that $\{\Psi_{ijk}\}_{(i,j,k)\in I}$ is a linearly independent set. First, note that the vectors $\{\psi_{ij}\}_{i\neq j\in[d]}$ form an orthogonal set. Indeed, let $i,j,k,s\in[d]$ such that $i\neq j$ and $k\neq s$. If $\{i,j\}\neq \{k,s\}$ then $\psi_{i j}$ and $\psi_{k s}$ have disjoint supports, and therefore $\psi_{ij}\cdot \psi_{k s}=0$. Otherwise, if $i=s$ and $j=k$ we obtain \begin{multline*} \psi_{ij}\cdot \psi_{ji} = \sum_{e\in E(V_{i,1},V_{j,1})} 1\cdot 1 + \sum_{e\in E(V_{i,1},V_{j,-1})} (-1)\cdot 1 \\ + \sum_{e\in E(V_{i,-1},V_{j,1})} 1\cdot (-1) + \sum_{e\in E(V_{i,-1},V_{j,-1})} (-1)\cdot(-1)=0. \end{multline*} Hence, for $\{i,j,k\}\in\binom{[d]}{3}$ and $\{i',j',k'\}\in\binom{[d]}{3}$ such that $j\neq j'$, we have $\Psi_{i j k}\cdot \Psi_{i' j' k'}=0$. Therefore, we are left to show that for all $j\in[d]$, $\{\Psi_{i j k}\}_{(i,j,k)\in I_j}$ is linearly independent. But this follows from the fact that for every $(i,j,k)\in I_j$ there is an edge in the support of $\Psi_{i j k}$ that is unique to $\Psi_{i j k}$ (for example, any edge $\{u,v\}$ where $u\in V_{j,1}$ and $v\in V_{k,1}$). We now complete the set $\{\Psi_{i j k}\}_{(i,j,k)\in I}$ to an eigenbasis of $n/2d$. Let $i\neq j \in[d]$ and $x\in\{1,-1\}$. Let $u\neq v\in V_{i,x}$. Define $f_{j}^{u,v}\in \Rea^E$ by \[ f_{j}^{u,v}= \sum_{w\in V_{j,1}} (1_{\{u,w\}}-1_{\{v,w\}})+\sum_{w\in V_{j,-1}} (-1_{\{u,w\}}+1_{\{v,w\}}). \] Let $e\notin \supp(f_j^{u,v})$. If $e\in E(V_{i,x},V_{j,1})\cup E(V_{i,x},V_{j,-1})$, then $\cos(\theta(e,e'))=1$ for all $e'\in \supp(f_j^{u,v})$ such that $|e\cap e'|=1$. If $e\in E(V_{i,1},V_{i,-1})\cup E(V_{j,1},V_{j,-1})$, then $\cos(\theta(e,e'))=1/\sqrt{2}$ for all $e'\in \supp(f_j^{u,v})$ such that $|e\cap e'|=1$. If $e\in E(V_{i,-x},V_{j,1})\cup E(V_{i,-x},V_{j,-1})$, then $\cos(\theta(e,e'))=0$ for all $e'\in \supp(f_j^{u,v})$ such that $|e\cap e'|=1$. 
Otherwise, $\cos(\theta(e,e'))=1/2$ for all $e'\in \supp(f_j^{u,v})$ such that $|e\cap e'|=1$. In addition, it is easy to check that for every $w\in V$, $\sum_{e\in E:\, w\in e} f^{u,v}_j(e)=0$. Finally, let $e=\{a,b\}\in\supp(f_j^{u,v})$. Assume $a=u$ and $b\in V_{j,1}$ (the other cases are similar). Then, by \eqref{eq:angles2d}, we have \begin{align*} &\sum_{e'\in E:\, |e\cap e'|=1} \cos(\theta(e,e')) f_j^{u,v}(e') \\&=\sum_{w\in V_{j,1}\setminus\{b\}} \cos(\theta_{b u w}) - \sum_{w\in V_{j,-1}}\cos(\theta_{b u w}) - \cos(\theta_{u b v}) \\ &=\sum_{w\in V_{j,1}\setminus\{b\}} 1 -\sum_{w\in V_{j,-1}} 0 -1 = \frac{n}{2d}-2=\left(\frac{n}{2d}-2\right)f_j^{u,v}(e). \end{align*} Therefore, by Lemma \ref{lemma:condition_for_eigenvector}, $f_j^{u,v}$ is an eigenvector of $L^{-}$ with eigenvalue $n/(2d)$. For each $i\in [d]$ and $x\in\{1,-1\}$, fix some $u(i,x)\in V_{i,x}$. For $j\in[d]\setminus\{i\}$, let \[ J_{i,x,j}=\{ (u(i,x),v,j):\, u(i,x)\neq v\in V_{i,x}\} \] and let $J=\cup_{i\in[d]}\cup_{x\in\{1,-1\}}\cup_{j\in [d]\setminus \{i\}} J_{i,x,j}$. Note that $|J|=2d(d-1)(n/(2d)-1)$. We will show that $\{f_j^{u,v}\}_{(u,v,j)\in J}$ are linearly independent. Let $u\neq v\in V_{i,x}$ and $u'\neq v'\in V_{i',x'}$. Let $j\neq i$ and $j'\neq i'$. Assume $(i,x,j)\neq (i',x',j')$. We will show that $f_{j}^{u,v}$ and $f_{j'}^{u',v'}$ are orthogonal. Indeed, the supports of the two vectors are disjoint unless $i'=j$ and $j'=i$. But it is easy to check that also in this case we have $f_{j}^{u,v}\cdot f_{j'}^{u',v'}=0$. Therefore, it is enough to show that for every $i\neq j\in[d]$ and $x\in\{1,-1\}$, $\{f_j^{u,v}\}_{(u,v,j)\in J_{i,x,j}}$ is linearly independent. But again, this follows from the fact that for every $(u,v,j)\in J_{i,x,j}$ there is an edge in the support of $f_{j}^{u,v}$ that is unique to it (for example, the edge $\{v,w\}$ for any $w\in V_{j,1}$). Finally, note that $\psi_{ij}\cdot f_{k}^{u,v}=0$ for every $i\neq j\in[d]$ and $(u,v,k)\in J$. Therefore, $\Psi_{i j k}\cdot f_{m}^{u,v}=0$ for all $(i,j,k)\in I$ and $(u,v,m)\in J$. Thus, the eigenvectors $\{\Psi_{i j k}\}_{(i,j,k)\in I}\cup \{f_j^{u,v}\}_{(u,v,j)\in J}$ form a linearly independent set. So, $n/(2d)$ is an eigenvalue of $L^{-}$ with multiplicity at least \[ d(d-2)+2d(d-1)(n/(2d)-1)= n(d-1)-d^2, \] as wanted. \end{proof} It was shown in \cite[Lemma 4.5]{jordan2020rigidity} that the removal of a vertex reduces the $d$-dimensional algebraic connectivity of a graph by at most $1$. Hence, as an immediate consequence of Theorem \ref{thm:crosspolytope_spectrum}, we obtain: \lowerbound* \section{The $n$ largest eigenvalues of the complete graph $K_n$}\label{sec:top-n-ev} We proceed to establish a lower bound for the sum of the largest $n$ eigenvalues of $L(K_n,p)$ for all embeddings $p:[n]\to \Rea^d$. As a corollary, we derive an upper bound for $a_d(K_n)$. \begin{lemma}\label{lem:one_third} Let $p:[n]\to\Rea^d$ be injective. Then, \[ \sum_{j=(d-1)n+1}^{d n}\lambda_j(L(K_n,p)) \ge \frac {n^2}3+n\,. \] \end{lemma} \upperbound* \begin{proof}[Proof of Theorem \ref{cor:upperbound}] Let $p:V\to\Rea^d$ be injective. We compute the trace of $L(K_n,p)$ in two ways. By Lemma \ref{lemma:down_laplacian}, all the diagonal entries of $L^{-}(K_n,p)$ equal $2$, since $p$ is injective. Thus, we have \begin{equation}\label{eq:trace_direct} \mathrm{Tr}(L(K_n,p))=\mathrm{Tr}(L^{-}(K_n,p)) = 2\binom n2=n^2-n\,. \end{equation} On the other hand, let $\lambda=\lambda_{\binom{d+1}{2}+1}(L(K_n,p))$.
We deduce from Lemma \ref{lem:one_third} that \begin{align}\nonumber \mathrm{Tr}(L(K_n,p))=&~\sum_{j=1}^{dn}\lambda_j(L(K_n,p)) \\\nonumber =& \sum_{j=\binom{d+1}{2}+1}^{(d-1)n}\lambda_j(L(K_n,p)) +\sum_{j=(d-1)n+1}^{dn}\lambda_j(L(K_n,p)) \\ \ge&~ \left((d-1)n-\binom{d+1}2\right)\lambda+\frac {n^2}3+n. \label{eq:trace_eigs} \end{align} By combining \eqref{eq:trace_direct} and \eqref{eq:trace_eigs}, we derive that \[ \lambda \le \frac{2n^2/3-2n}{(d-1)n-\binom{d+1}2}. \] Finally, we have \begin{align*} & \frac{2n^2/3-2n}{(d-1)n-\binom{d+1}2}-\frac{2n}{3(d-1)} =\frac{n(d(d+1)-6(d-1))}{3(d-1)\left((d-1)n-\binom{d+1}{2}\right)} \\ &\leq \frac{n(d(d+1)-6(d-1))}{3n(d-1)^2} =\frac{d(d+1)-6(d-1)}{3(d-1)^2} \leq \frac{1}{3}. \end{align*} Therefore, we obtain \[ \lambda \le \frac{2n}{3(d-1)}+\frac{1}{3}. \] Thus, by Lemma \ref{lemma:a_d_for_embeddings}, we obtain \[ a_d(K_n) \le \frac{2n}{3(d-1)}+\frac{1}{3}, \] as claimed. \end{proof} To prove Lemma \ref{lem:one_third} we will need the following theorem due to Ky Fan: \begin{theorem}[Ky Fan {\cite[Thm. 1]{fan1949theorem}}]\label{thm:fan} Let $A\in\Rea^{m\times m} $ be a symmetric matrix, and let $\mu_1\geq \mu_2\geq \cdots\geq \mu_m$ be its eigenvalues. Then, for every $k\leq m$, \[ \sum_{i=1}^k \mu_i = \max\left\{ \text{Tr}(AP):\, P\in\mathcal{P}_k\right\} \] where $\mathcal{P}_k$ consists of all orthogonal projection matrices into $k$-dimensional subspaces of $\Rea^m$. \end{theorem} \begin{proof}[Proof of Lemma \ref{lem:one_third}] Let $E=\binom{[n]}{2}$. For $i\in[n]$, let $v^{(i)}\in \Rea^E$ be defined by \[ v^{(i)}_e=\begin{cases} 1 & \text{ if } i\in e,\\ 0 & \text{ otherwise.} \end{cases} \] We claim that the $E\times E$ symmetric matrix $P$ defined by \[ P_{e,e'} =\left\{\begin{matrix} \frac{2}{n-1}&e=e'\\[6pt] \frac{n-3}{(n-1)(n-2)}&|e\cap e'|=1\\[6pt] \frac{-1}{\binom {n-1}2}&e\cap e'=\emptyset \end{matrix}\right.\,, \] is the orthogonal projection on the subspace spanned by $\{v^{(i)} :\, i\in[n]\}$. Indeed, if $e=\{i,j\}$, then the $e$-th row of $P$ is equal to \[ P_{e,\cdot} = \frac{1}{n-1}\left(v^{(i)}+v^{(j)}-\frac{1}{n-2}\sum_{k\ne i,j}v^{(k)}\right)\,. \] Therefore, $Pw=0$ for every $w$ that is orthogonal to all the vectors $v^{(i)},i\in[n]$. In addition, a straightforward computation shows that $Pv^{(i)}=v^{(i)}$ for every $i\in[n]$ since $(v^{(i)})^tv^{(j)}=1$ if $i\ne j$ and $\|v^{(i)}\|^2=n-1$. We apply theorem \ref{thm:fan} for $L^-:=L^-(K_n,p)$ and $P$ to find that \begin{align*} \sum_{i=(d-1)n+1}^{dn}\lambda_{i}(L(K_n,p))= \sum_{i=\binom{n-1}{2}}^{\binom{n}{2}}\lambda_i(L^-) \ge&~ \mathrm{Tr}( L^-P)=\sum_{e,e'}P_{e,e'}L^-_{e,e'}. \end{align*} Recall the precise description of $L^-$ in Lemma \ref{lemma:down_laplacian}. Since $p$ is injective, the contribution of the diagonal terms $e=e'$ to the summation is \begin{equation} \sum_{e\in E}P_{e,e}L^-_{e,e} = \binom{n}2\cdot \frac{2}{n-1}\cdot 2=2n\,. \label{eq:diagonal} \end{equation} In addition, since $L^-_{e,e'}=0$ if $e\cap e'=\emptyset$, the contribution of the non-diagonal terms is \begin{equation}\label{eq:ee'} \sum_{e\ne e'}P_{e,e'}L^-_{e,e'}=\sum_{i=1}^{n}\sum_{j\ne i}\sum_{j'\ne i,j} \frac{n-3}{(n-1)(n-2)}L^-_{\{i,j\},\{i,j'\}}\,. \end{equation} Note that all the $\binom n3$ triples $\{i,j,j'\}$ of vertices satisfy \[ L^-_{\{i,j\},\{i,j'\}}+L^-_{\{j,i\},\{j,j'\}}+L^-_{\{j',i\},\{j',j\}}\ge 1. \] Indeed, since $p$ is injective, this is the sum of the cosines of the angles in the (possibly flat) triangle $p(i),p(j),p(j')$--- which is bounded from below by $1$. 
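(This elementary fact follows, for instance, from the identity $\cos\alpha+\cos\beta+\cos\gamma=1+4\sin\frac{\alpha}{2}\sin\frac{\beta}{2}\sin\frac{\gamma}{2}$, valid whenever $\alpha+\beta+\gamma=\pi$; for the angles of a possibly flat triangle, all three factors on the right-hand side are nonnegative.)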
In addition, note that each term $L^-_{\{i,j\},\{i,j'\}}$ appears twice in \eqref{eq:ee'} since $j,j'$ are ordered. Consequently, \begin{equation} \sum_{e\ne e'}P_{e,e'}L^-_{e,e'}\ge \binom{n}3 \cdot \frac{2(n-3)}{(n-1)(n-2)}=\frac{n^2}{3}-n\,. \label{eq:off_diagonal} \end{equation} Joining \eqref{eq:diagonal} and \eqref{eq:off_diagonal}, we obtain \[ \sum_{i=(d-1)n+1}^{dn}\lambda_{i}(L(K_n,p))\geq \frac {n^2}3+n\,. \] \end{proof} We believe that the bound in Lemma \ref{lem:one_third} can be improved: \begin{conjecture}\label{conj:large_eigenvalues} Let $p:[n]\to\Rea^d$ be injective. Then, the sum of the $n$ largest eigenvalues of $L(K_n,p)$ is at least $\frac{n(n+1)}{2}$. \end{conjecture} Recall that, by Lemma \ref{lemma:n_eigenvalue}, for every non-constant $p:[n]\to \Rea^d$, the largest eigenvalue of $L(K_n,p)$ is $n$. Thus, Conjecture \ref{conj:large_eigenvalues} is equivalent to saying that the average of the next $n-1$ largest eigenvalues is at least $n/2$. \section{Concluding remarks}\label{sec:conclude} \subsection{Complete graphs} We conjecture that the lower bound of Theorem \ref{thm:lower_bound} is essentially tight: \begin{conjecture} Let $n\geq 2d$. Then, \[ \lfrac{n}{2d}\leq a_d(K_n)\leq \frac{n}{2d}. \] \end{conjecture} \subsection{Regular graphs} From some computer calculations, it seems possible that the following generalization of \cite[Conj. 2]{jordan2020rigidity} holds: \begin{conjecture} For $d\geq 1$, \[ \lim_{n\to\infty} \max\{ a_d(G) :\, G \text{ is $2d$-regular on $n$ vertices}\}=0. \] \end{conjecture} \subsection{Repeated points} In line with our analysis of the minimal nontrivial eigenvalue of $L(G,p)$, where $G$ is the Tur{\'a}n graph $T(n,d+1)$ (resp. $T(n,2d)$) and $p=q^{\triangle}$ (resp. $p=q^{\diamond}$) in Theorem~\ref{thm:simplex_repeated} and Theorem~\ref{thm:crosspolytope_spectrum}, we conjecture the following general phenomena regarding the effect of repeated points on the spectral gap. For an injective $p: [n] \to \Rea^d$ and graph $G$ with vertex set $[n]$, denote by $\lambda(G,p)=\lambda_{\binom{d+1}{2}+1}(L(G,p))$. For $k\ge 1$ let $p^{k}: [kn] \to \Rea^d $ be a mapping such that $|(p^{k})^{-1}(p(v))|=k$ for every $v\in [n]$. In words, we put $k$ vertices on each point of the image of $p$. \begin{conjecture} For every injective mapping $p: [n] \to \Rea^d$ and every $k\ge 2$, \[\lambda(K_{kn},p^{k})= \frac{k}{2}\lambda(K_{2n},p^{2}). \] \end{conjecture} We remark that for $k=1$ the assertion fails, as demonstrated by the regular simplex embedding $p^{\triangle}$. \bibliographystyle{abbrv} \bibliography{biblio} \end{document}
2205.05510v1
http://arxiv.org/abs/2205.05510v1
Invariance entropy for uncertain control systems
\documentclass[12pt,a4paper]{amsart} \usepackage{amsfonts} \usepackage[top=35mm, bottom=35mm, left=30mm, right=30mm]{geometry} \usepackage[colorlinks=true,citecolor=blue]{hyperref} \usepackage{mathptmx} \usepackage{eucal} \usepackage{graphicx} \usepackage{mathrsfs} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{tikz,pgfplots,float} \usepackage{xcolor} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{ques}[thm]{Question} \newtheorem{prob}[thm]{Problem} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{exam}[thm]{Example} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \newtheorem{clm}[thm]{Claim} \newcommand{\N}{\mathbb{N}} \newcommand{\calF}{\mathcal{F}} \newcommand{\calA}{\mathcal{A}} \newcommand{\calB}{\mathcal{B}} \newcommand{\calG}{\mathcal{G}} \newcommand{\calM}{\mathcal{M}} \newcommand{\calC}{\mathcal{C}} \newcommand{\calS}{\mathcal{S}} \newcommand{\scrS}{\mathscr{S}} \newcommand{\scrB}{\mathscr{B}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\scrP}{\mathscr{P}} \newcommand{\vp}{\varepsilon} \newcommand{\w}{\omega} \newcommand{\normmm}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator{\Int}{int} \usepackage{tkz-graph} \GraphInit[vstyle = Shade] \usepackage{nicematrix} \begin{document} \title[Invariance entropy for uncertain control systems]{Invariance entropy for uncertain control systems} \author{Xingfu Zhong} \address{School of Mathematics and Statistics, Guangdong University of Foreign Studies\\ Guangzhou, 510006 P. R. China} \email{[email protected]} \author{Yu Huang} \address{School of Mathematics, Sun Yat-Sen University, GuangZhou 510275, P. R. China} \email{[email protected]} \author{Xingfu Zou} \address{Department of Applied Mathematics, University of Western Ontario London, Ontario, N6A 5B7, Canada} \email{[email protected]} \subjclass[2020]{37B40; 93C55} \keywords{Invariance entropy, invariance feedback entropy, topological entropy} \begin{abstract} We introduce a notion of invariance entropy for \emph{uncertain} control systems, which is, roughly speaking, the exponential growth rate of ``branches'' of ``trees'' that are formed by controls and are necessary to achieve invariance of controlled invariant subsets of the state space. This entropy extends the invariance entropy for \emph{deterministic} control systems introduced by Colonius and Kawan (2009). We show that invariance feedback entropy, proposed by Tomar, Rungger, and Zamani (2020), is bounded from below by our invariance entropy. We generalize the formula for the calculation of entropy of invariant partitions obtained by Tomar, Kawan, and Zamani (2020) to quasi-invariant-partitions. Moreover, we also derive lower and upper bounds for entropy of a quasi-invariant-partition by spectral radii of its adjacency matrix and weighted adjacency matrix. With some reasonable assumptions, we obtain explicit formulas for computing invariance entropy for \emph{uncertain} control systems and invariance feedback entropy for finite controlled invariant sets. \end{abstract} \maketitle \section{Introduction} Entropy for a dynamical system is an intrinsic quantity that measures the complexity of the system. 
There are two popular dynamical entropies: one is measure-theoretic entropy and the other is topological entropy. The former was introduced by one of the most influential mathematicians of modern times, Kolmogorov~\cite{Kolmogorov1958A}, and improved by his student Sinai~\cite{Sina1959On}, who brought it to its contemporary form; the latter was proposed via open covers by Adler et al.~\cite{Adler1965} and was redefined by Dinaburg~\cite{Dinaburg1970} and Bowen~\cite{Bowen1971} independently in the language of metric spaces. These two definitions of dynamical entropies resemble the definition of Shannon's entropy~\cite{shannon1948mathematical}. Both of them measure, in some sense, the exponential growth rate of the number of orbits: \begin{itemize} \item measure-theoretic entropy counts the number of typical $n$-orbits, while \item topological entropy counts all distinguishable $n$-orbits. \end{itemize} Note that topological entropy is equal to the supremum of measure-theoretic entropy over all invariant measures (this basic relationship between topological entropy and measure-theoretic entropy is called the variational principle). We refer the reader to references~\cite{Downarowicz2011Entropy,Young2003Entropy,Katok2007Fifty} for more details about the history of dynamical entropies. Invariance (or stabilization) is another important notion, describing a widely needed property of control systems. When a control system involves communication of information, an interesting question is how much information needs to be communicated between the coder and the controller in order to make a given set invariant under information constraints. Early works on this topic are those of Delchamps~\cite{Delchamps1990Stabilizing} and of Wong and Brockett~\cite{Wong-Brockett1999}, which investigated quantized feedback and the influence of restricted communication channels on stabilization, respectively. The seminal work by Nair et al.~\cite{Nair2004Topological} first addressed the problem of data-rate-limited stabilization by introducing topological feedback entropy. This entropy characterizes the smallest average data rate at which a subset of the state space can be made invariant, and it resembles topological entropy in its use of open-cover techniques. Then Colonius and Kawan~\cite{Colonius-Kawan2009} introduced an invariance entropy to describe the exponential growth rate of the number of control functions needed to make a subset of the state space invariant. The definition of this invariance entropy is analogous to that of the topological entropy of Dinaburg~\cite{Dinaburg1970} and Bowen~\cite{Bowen1971}, with distinguishable orbits replaced by different control functions. Colonius et al.~\cite{Colonius2013A} showed that this invariance entropy and the topological feedback entropy are equivalent with some suitable modifications. For a better understanding of invariance entropy, various related notions have been proposed by several groups of researchers from different viewpoints, such as invariance pressure~\cite{Colonius2018PressureA,Colonius2018PressureB, Colonius2019Bounds, Zhong2020Variational,Zhong2018Invariance,Colonius2021Controllability}, measure-theoretic versions of invariance entropy~\cite{Colonius2018Invariance,Colonius2018Metric,Wang2019MeasureInvariance,Wang-Huang2022Inverse}, dimension types of invariance entropy~\cite{Huang2018Caratheodory}, complexity of invariance entropy~\cite{Wang2019dichotomy,Zhong2020Equi,Zhong2022Equi}.
Note that Kawan and Y\"{u}ksel~\cite{Kawan-Yuksel2019Stabilization-entropy} introduced a notion of stabilization entropy, which is a variant of invariance entropy. We refer the reader to the monograph written by Kawan~\cite{Kawan2013} for more details about invariance entropy of \emph{deterministic} control systems and to~\cite{savkin2006analysis,Liberzon2018Entropy,Kawan-Matveev-Pogromsky2021Remote} and the references therein for observability, which is another data-rate-limited task closely related to controlled invariance. In the context of \emph{uncertain} control systems, invariance feedback entropy (IFE) was introduced by Rungger and Zamani~\cite{Rungger2017Invariance} to quantify the state information required by any controller to render a subset of the state space invariant. Later, Tomar et al.~\cite{Tomar2017invariance} and Tomar and Zamani~\cite{Tomar2020Compositional} further investigated the properties of IFE. Recently, Tomar et al.~\cite{Tomar2020Numerical} presented algorithms for the numerical computation of invariance entropy for \emph{deterministic} control systems and of IFE for \emph{uncertain} control systems, respectively. In particular, they showed that the entropy with respect to an invariant partition is equivalent to the maximum mean cycle weight (MMCW) of the weighted graph associated with this partition. Their algorithms allow us to compute upper bounds for invariance entropy of \emph{deterministic} control systems and IFE of \emph{uncertain} control systems. From the above, the following questions arise naturally: \begin{itemize} \item[Q1.] Is there an analogue, for \emph{uncertain} control systems, of the invariance entropy introduced by Colonius and Kawan via controls? Note that the definition of IFE (see Subsection~\ref{subsec:ife}) begins with a cover. If such a version exists, what is the relation between this invariance entropy and IFE for \emph{uncertain} control systems? \item[Q2.] Does the formula for the calculation of the entropy of an invariant partition, obtained by Tomar et al.~\cite{Tomar2020Numerical}, also hold for some ``weak'' invariant partitions? \item[Q3.] Which conditions guarantee the existence of ``generators''? By a generator we mean an invariant cover such that IFE is equal to the entropy of this cover. \end{itemize} In order to answer these questions, we introduce a new notion of invariance entropy via control functions for \emph{uncertain} control systems. Roughly speaking, our invariance entropy for \emph{uncertain} control systems is the exponential growth rate of ``branches'' of ``trees''. Such trees are formed by control functions that are necessary to make the target set invariant. We emphasize that our notion of invariance entropy discussed in the sequel is for \emph{uncertain} control systems, whereas Colonius and Kawan used the term ``invariance entropy'' for \emph{deterministic} control systems. The structure of the paper is as follows. In Section~\ref{sec:invariance-entropy}, we introduce the concept of invariance entropy for \emph{uncertain} control systems, give some basic properties of this invariance entropy, and show that this invariance entropy is less than or equal to the invariance feedback entropy (an answer to Q1, see Theorem~\ref{thm:inv-less-ifb}). It is worth noting that invariance entropy is equal to topological feedback entropy for \emph{deterministic} control systems, see~\cite{Colonius2013A} or~\cite{Kawan2013}.
In Section~\ref{sec:calculation}, we derive some formulas for the calculation of our invariance entropy and IFE. We show that the invariance entropy for a controlled invariant set equals the logarithm of the spectral radius of its admissible matrix under a technical assumption (see Theorem~\ref{thm:symbolic-log-radius}). We also extend the formula for the calculation of entropy of invariant partitions, obtained by Tomar et al.~\cite{Tomar2020Numerical}, to quasi-invariant-partitions; that is, the entropy for a quasi-invariant-partition is equal to the maximal mean weight of this partition (an answer to Q2, see Theorem~\ref{thm:invariant-partition-log-weight}), and show that if the spectral radius of the adjacency matrix of this partition is $1$, then the entropy for this partition equals the logarithm of the spectral radius of its weighted adjacency matrix (see Theorem~\ref{thm:upper-lower-bounds-partition}). Finally, we show that there exist generators for a finite controlled invariant set; i.e., the IFE of a finite controlled invariant set is equal to the entropy of its atom partition (an answer to Q3, see Theorem~\ref{thm:IFE-equi-atom-partitions}). \section{Invariance entropy}\label{sec:invariance-entropy} This section consists of two subsections. The first presents some basic properties of invariance entropy. The second recalls the definition of invariance feedback entropy and shows that invariance entropy is a tight lower bound for invariance feedback entropy. \subsection{Invariance entropy} First let us introduce terminology and notation. (We borrow some terms and notation from~\cite{Colonius-Kawan2009}.) We use $f:A\rightrightarrows B$ to denote a \emph{set-valued map} from $A$ into $B$, whereas $f:A\rightarrow B$ denotes an ordinary map. If $f$ is set-valued, then $f$ is \emph{strict} if for every $a\in A$ we have $f(a)\neq\emptyset$. The composition of $f:A\rightrightarrows B$ and $g:C\rightrightarrows A$ is the set-valued map $f\circ g:C\rightrightarrows B$ defined by $(f\circ g)(x)=f(g(x))$. We call a triple $\Sigma:=(X,U,F)$ a \emph{system} if $X$ and $U$ are nonempty sets and $F: X\times U\rightrightarrows X$ is strict. Recall that $Q\subset X$ is called \emph{controlled invariant} with respect to a system $\Sigma=(X,U,F)$, if for every $x\in Q$ there exists $u\in U$ such that $F(x,u)\subset Q$. Fixing $u\in U$ and $Q\subset X$, let $Q_u=\{x\in Q| F(x,u)\subset Q\}$. A subset $S\subset U^n$ is said to be \emph{an admissible family of length $n$} for $Q$ if \begin{itemize} \item[(a).] $\omega'_0=\omega''_0$ for any $\omega',\omega''\in S$; \item[(b).] there exists $x\in Q$ such that for any $\omega\in S$, \begin{align*} &F(I_\omega^i(x),\omega_i)\subset\bigcup_{\substack{\omega'\in S,\\ \omega'_{[0,i]}=\omega_{[0,i]}}}Q_{\omega'_{i+1}}, \\ &I_{\omega}^{i+1}(x)=F(I_\omega^{i}(x),\omega_i)\cap Q_{\omega_{i+1}}\neq\emptyset,\forall~i=0,1,\ldots,n-2,\\ &I_{\omega}^{n}(x)=F(I_\omega^{n-1}(x),\omega_{n-1})\subset Q, \end{align*} where $I_\omega^{0}(x)=x$. \end{itemize} Let \[AF^n(Q)=\{S\subset U^n: S~\text{is an admissible family for}~Q\}\] and \[AF(Q)=\bigcup_{n=1}^\infty AF^n(Q).\] Given $S\in AF(Q)$, the set of points $x\in Q$ that satisfy condition (b) is denoted by $Q_S$. Let $K\subset Q$ be a nonempty set.
A set $\mathscr{S}\subset U^n$ is called an \emph{$(n,K,Q)$-spanning set} of $(K,Q)$ if \[K\subset \bigcup_{S\subset\mathscr{S}~\text{and}~S\in AF^n(Q)}Q_S.\] We use the notation $r_{inv}(n,K,Q)$ for the minimal cardinality of an $(n,K,Q)$-spanning set, i.e., \[r_{inv}(n,K,Q):=\inf\{\sharp \scrS:\scrS~\text{is an}~(n,K,Q)\text{-spanning set of}~(K,Q)~\},\] where $\sharp\scrS$ denotes the cardinality of $\scrS$. For convenience, we write $r_{inv}(n,Q)$ in place of $r_{inv}(n,Q,Q)$. \begin{defn} Given a pair $(K,Q)$, we define the \emph{invariance entropy} of $(K,Q)$ by \[h_{inv}(K,Q)=h_{inv}(K,Q;\Sigma):=\limsup_{n\to\infty}\frac{\log r_{inv}(n,K,Q)}{n},\] where $\log$ denotes the base-$2$ logarithm. \end{defn} \begin{rem} When $F$ is a single-valued map, the definition of invariance entropy coincides with that of invariance entropy for \emph{deterministic} control systems (see~\cite[Definition 2.2]{Kawan2013}). \end{rem} Recall that a sequence of real numbers $\{a_n\}_{n\geq1}$ is \emph{subadditive} if $a_{n+p}\leq a_n+a_p$ for all $n,p\in\N$. \begin{lem}\label{lem:subadditive}(\cite[Theorem 4.9]{Walters1982} or \cite[Lemma B.3]{Kawan2013}) If $\{a_n\}_{n\geq1}$ is a subadditive sequence, then $\lim_{n\to\infty}\frac{a_n}{n}$ exists and equals $\inf_{n}\frac{a_n}{n}$. \end{lem} The rest of this subsection will generalize some properties of invariance entropy for \emph{deterministic} control systems (see~\cite{Kawan2013}) to \emph{uncertain} control systems, including finiteness, time discretization, finite stability, and invariance under conjugacy. \begin{prop}\label{prop:subadditive-of-invariance-entropy} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$ be a controlled invariant set. Then the following assertions hold: \begin{enumerate} \item The number $r_{inv}(n,Q)$ is either finite for all $n\in\N$ or for none. \item The function $n\mapsto\log r_{inv}(n,Q)$, $\N\to[0,+\infty]$, is subadditive and thus \[h_{inv}(Q)=\lim_{n\to\infty}\frac1{n}\log r_{inv}(n,Q).\] \end{enumerate} \end{prop} \begin{proof} (1) Suppose there exists $N\in\N$ such that $r_{inv}(N,Q)<\infty$. It is easy to check that $r_{inv}(n,Q)\leq r_{inv}(N,Q)$ for every $n<N$. We now show that $r_{inv}(n,Q)< \infty$ for every $n>N$. Given $n\geq N$, pick $k\in\N$ such that $kN>n$. Let $\scrS=\{\omega^1,\ldots,\omega^m\}$ be a minimal $(N,Q)$-spanning set, where $m=r_{inv}(N,Q)$, and let \[\scrS_k=\{\omega\in U^{kN}:~\forall ~0\leq i\leq k-1 ~\exists~ \omega'\in \scrS~\text{s.t.}~\omega_{[Ni,(i+1)N)}=\omega'\}.\] We shall show that $\scrS_k$ is a $(kN,Q)$-spanning set. For every $x\in Q$ there exists $S_x^1\subset \scrS$ such that $\omega'_0=\omega''_0$ for any $\omega',\omega''\in S_x^1$ and for any $\omega\in S_x^1$, \begin{align*} &F(I_\omega^i(x),\omega_i)\subset\bigcup_{\substack{\omega'\in S_x^1,\\ \omega'_{[0,i]}=\omega_{[0,i]}}}Q_{\omega'_{i+1}}, \\ &I_{\omega}^{i+1}(x)=F(I_\omega^{i}(x),\omega_i)\cap Q_{\omega_{i+1}}\neq\emptyset,\forall~i=0,1,\ldots,N-2,\\ &I_{\omega}^{N}(x)=F(I_\omega^{N-1}(x),\omega_{N-1})\subset Q, \end{align*} where $I_\omega^{0}(x)=x$.
For every $y\in I_{\omega}^{N}(x)\subset Q$ there exists $S_{\omega,y}\subset \scrS$ such that $\omega'_0=\omega''_0$ for any $\omega',\omega''\in S_{\omega,y}$ and for any $\bar{\omega}\in S_{\omega,y}$, \begin{align*} &F(I_{\bar{\omega}}^i(y),{\bar{\omega}}_i)\subset\bigcup_{\substack{{\bar{\omega}}'\in S_{\omega,y},\\ {\bar{\omega}}'_{[0,i]}={\bar{\omega}}_{[0,i]}}}Q_{{\bar{\omega}}'_{i+1}}, \\ &I_{{\bar{\omega}}}^{i+1}(y)=F(I_{\bar{\omega}}^{i}(y),{\bar{\omega}}_i)\cap Q_{{\bar{\omega}}_{i+1}}\neq\emptyset,\forall~i=0,1,\ldots,N-2,\\ &I_{{\bar{\omega}}}^{N}(y)=F(I_{\bar{\omega}}^{N-1}(y),{\bar{\omega}}_{N-1})\subset Q, \end{align*} where $I_{\bar{\omega}}^{0}(y)=y$. Let $\bar{S}_x^2=\cup_{\omega\in S_x^1}\cup_{y\in I_{\omega}^{N}(x)}S_{\omega,y}$ and \[S_x^2:=\{\omega\in U^{2N}:\exists \omega'\in S_x^1,\omega''\in\bar{S}_x^2~\text{s.t.}~\omega_{[0,N)}=\omega'~\text{and} ~\omega_{[N,2N)}=\omega''\}\subset \scrS_2. \] Hence we have $x\in Q_{S_x^2}$. Repeating the process $k$ times, we can obtain $S_x^k\subset \scrS_k$ such that $x\in Q_{S_x^k}$. This means that $\scrS_k$ is a $(kN,Q)$-spanning set. (2) By Lemma~\ref{lem:subadditive}, it suffices to prove that \[r_{inv}(n+p,Q)\leq r_{inv}(n,Q)\cdot r_{inv}(p,Q),~\forall~n,p\in\N.\] Assume that $\scrS_1$ is a minimal $(n,Q)$-spanning set and $\scrS_2$ is a minimal $(p,Q)$-spanning set. Let \[\scrS=\{\omega\in U^{n+p}:\exists~\omega'\in \scrS_1,~\omega''\in \scrS_2~\text{s.t.}~\omega_{[0,n)}=\omega'~\text{and}~\omega_{[n,n+p)}=\omega''\}.\] Similar to the proof of (1), we can show that $\scrS$ is an $(n+p,Q)$-spanning set. So \[r_{inv}(n+p,Q)\leq r_{inv}(n,Q)\cdot r_{inv}(p,Q),~\forall~n,p\in\N,\] which completes the proof. \end{proof} \begin{rem} We see from Proposition~\ref{prop:subadditive-of-invariance-entropy} that $r_{inv}(n,Q)$ is finite for some $n$ if and only if $r_{inv}(n,Q)$ is finite for all $n$ if and only if $h_{inv}(Q)$ is finite. \end{rem} \begin{prop}[Time discretization]\label{prop:time-discretization} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$ be a controlled invariant set. If $K\subset Q$ and $m\in\N$, then \[h_{inv}(K,Q)=\limsup_{n\to\infty}\frac{1}{nm}\log r_{inv}(nm,K,Q).\] \end{prop} \begin{proof} It is clear that $h_{inv}(K,Q)\geq\limsup_{n\to\infty}\frac{1}{nm}\log r_{inv}(nm,K,Q)$. We now show the converse inequality. By definition of $h_{inv}(K,Q)$, we can take a sequence $\{q_k\}_{k\geq1}$ such that \[h_{inv}(K,Q)=\lim_{k\to\infty}\frac{1}{q_k}\log r_{inv}(q_k,K,Q).\] For every sufficiently large $k$, there exists $n_k\geq1$ such that $n_km\leq q_k\leq (n_k+1)m$, and $n_k\to\infty$ as $k\to\infty$. Then we have $r_{inv}(q_k,K,Q)\leq r_{inv}((n_k+1)m,K,Q)$. It follows that \[\frac{1}{q_k}\log r_{inv}(q_k,K,Q)\leq \frac{1}{n_km}\log r_{inv}((n_k+1)m,K,Q).\] It follows by a straightforward computation that \begin{align*} h_{inv}(K,Q)\leq \limsup_{n\to\infty} \frac{1}{nm}\log r_{inv}(nm,K,Q). \end{align*} \end{proof} \begin{prop}[Subsets rule or finite stability]\label{prop:subset-rule} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$ be a controlled invariant set. If $K\subset Q$ and $K=\cup_{i=1}^mK_i$, then $h_{inv}(K,Q)=\max\limits_{i=1,\ldots,m}h_{inv}(K_i,Q)$. \end{prop} \begin{proof} Obviously, we have $h_{inv}(K,Q)\geq\max\limits_{i=1,\ldots,m}h_{inv}(K_i,Q)$. To show the converse inequality, note that $r_{inv}(n,K,Q)\leq\sum_{i=1}^mr_{inv}(n,K_i,Q)$.
For every $n$ pick $\hat{K}_{n}\in\{K_1,\ldots,K_m\}$ such that \[r_{inv}(n,\hat{K}_{n},Q)=\max_{i=1,\ldots,m}r_{inv}(n,K_i,Q).\] It immediately follows that \[r_{inv}(n,K,Q)\leq m\cdot r_{inv}(n,\hat{K}_{n},Q).\] Thus \[\log r_{inv}(n,K,Q)\leq\log m+\log r_{inv}(n,\hat{K}_{n},Q).\] Choose $n_k\to\infty$ such that \[\lim_{k\to\infty}\frac{1}{n_k}\log r_{inv}(n_k,K,Q)=\limsup_{n\to\infty}\frac{1}{n}\log r_{inv}(n,K,Q)\] and $\hat{K}_{n_k}=K_j$ for some $j\in\{1,2,\ldots,m\}$ and all $k$. A brief calculation then shows that \begin{align*} h_{inv}(K,Q)&=\lim_{k\to\infty}\frac{1}{n_k}\log r_{inv}(n_k,K,Q)\\ &\leq\limsup_{k\to\infty}\frac{1}{n_k}\big(\log m+\log r_{inv}(n_k,K_{j},Q)\big)\\ &\leq\limsup_{k\to\infty}\frac{1}{n_k}\log r_{inv}(n_k,K_{j},Q)\\ &\leq\limsup_{n\to\infty}\frac{1}{n}\log r_{inv}(n,K_{j},Q)\\ &=h_{inv}(K_j,Q)\leq\max_{i=1,\ldots,m}h_{inv}(K_i,Q). \end{align*} \end{proof} Consider two systems $\Sigma_i= (X_i, U_i, F_i)$, $i=1,2$. Let $\pi:X_1\to X_2$ be a continuous map and $r:U_1\to U_2$ a map. We say $(\pi,r)$ is a \emph{semi-conjugacy from $\Sigma_1$ to $\Sigma_2$} if \[F_2(\pi(x),r(u))\subset\pi(F_1(x,u)),~\forall~x\in X_1,~u\in U_1.\] \begin{prop} Let $\Sigma_1= (X_1, U_1, F_1)$ and $\Sigma_2= (X_2, U_2, F_2)$ be two systems, $Q\subset X_1$ be controlled invariant, and $(\pi,r)$ a semi-conjugacy from $\Sigma_1$ to $\Sigma_2$. Then for any $K\subset Q$, \[h_{inv}(K,Q;\Sigma_1)\geq h_{inv}(\pi(K),\pi(Q);\Sigma_2).\] \end{prop} \begin{proof} Suppose $\scrS\subset U_1^n$ is an $(n,K,Q)$-spanning set of $(K,Q)$. Let \[r(\scrS)=\{r(\w_0)r(\w_1)\cdots r(\w_{n-1}):\w\in\scrS\}.\] Then $r(\scrS)\subset U_2^{n}$. We shall show that $r(\scrS)$ is an $(n,\pi(K),\pi(Q))$-spanning set of $(\pi(K),\pi(Q))$. To this end, fix $y\in\pi(K)$. Thus there exists $x\in K$ with $\pi(x)=y$. Since $\scrS$ is an $(n,K,Q)$-spanning set, there exists $S\subset\scrS$ such that $S\in AF^n(Q)$ and $x\in Q_S$. We will find a subset $\hat{S}\subset r(S)$ such that $\hat{S}\in AF^n(\pi(Q))$ and $y\in \pi(Q)_{\hat{S}}$. To see this, let \[S_{m}=\{s\in U^{m+1}:\exists~\omega\in S~\text{s.t.}~\omega_{[0,m]}=s\},~m\in\{0,1,\ldots,n-1\}.\] Then we have $S_0=\{s_0\}$, where $s_0$ is the common initial symbol for every $\omega\in S$, and \[ F_1(x,s_0)\subset\bigcup_{\substack{s\in S_1}}Q_{s_1},~~ F_1(x,s_0)\cap Q_{s_1}\neq\emptyset,~\forall~s\in S_1. \] Since $(\pi,r)$ is a semi-conjugacy from $\Sigma_1$ to $\Sigma_2$, \[F_2(y,r(s_0))=F_2(\pi(x),r(s_0))\subset\pi(F_1(x,s_0))\subset\pi(Q).\] Let $\hat{S}_0=r(S_0)$. Then $\hat{S}_0\in AF^1(\pi(Q))$ and $y\in \pi(Q)_{\hat{S}_0}$. Let $B_{r(s_0)}=F_2(\pi(x),r(s_0))$. Then for every $z\in B_{r(s_0)}$ there exists $\hat{z}\in F_1(x,s_0)$ such that $\pi(\hat{z})=z$. Denote the set of all these points by $A_{s_0}$. Thus $\pi(A_{s_0})=B_{r(s_0)}$ and \[A_{s_0}\subset F_1(x,s_0)\subset\bigcup_{\substack{s\in S_1}}Q_{s_1}.\] Let \[A[s_0]=\{s\in S_1:A_{s_0}\cap Q_{s_1}\neq\emptyset\}.\] Set $\hat{S}_1=r(A[s_0])$. Thus for any $\hat{s}\in\hat{S}_1$, \[B_{r(s_0)}\subset \cup_{\hat{s}\in\hat{S}_1}\pi(Q)_{\hat{s}_1}~\text{and}~ B_{r(s_0)}\cap\pi(Q)_{\hat{s}_1}\neq\emptyset\] and \[F_2(B_{r(s_0)}\cap\pi(Q)_{\hat{s}_1},\hat{s}_1)\subset\pi(F_1(A_{s_0}\cap Q_{s_1},s_1))\subset\pi(Q).\] Then $\hat{S}_1\in AF^2(\pi(Q))$ and $y\in \pi(Q)_{\hat{S}_1}$. Repeating this process, we can find the desired $\hat{S}\in AF^n(\pi(Q))$ and $y\in \pi(Q)_{\hat{S}}$. Hence, $r(\scrS)$ is an $(n,\pi(K),\pi(Q))$-spanning set of $(\pi(K),\pi(Q))$. This completes the proof.
\end{proof} \subsection{Invariance feedback entropy}\label{subsec:ife} Let us recall the concept of invariance feedback entropy proposed by Tomar et al.~\cite{Tomar2017invariance}. Assume that $\Sigma= (X, U, F)$ is a system and $Q\subset X$ is controlled invariant. A pair $(\mathcal{A},G)$ is called an \emph{invariant cover} of $Q$ if $\mathcal{A}$ is a finite cover of $Q$ and $G$ is a map $G:\mathcal{A}\to U$ such that for every $A\in\mathcal{A}$ we have $F(A,G(A))\subset Q$. Suppose $(\mathcal{A},G)$ is an invariant cover of $Q$; and let $n\in\N$ and $\mathcal{S}\subset\mathcal{A}^n$ be a set of sequences in $\mathcal{A}$. For $\alpha\in \mathcal{S}$ and $t\in [0,n-1)$ we define \[P(\alpha|_{[0,t]}):=\{A\in\mathcal{A}|\exists\hat{\alpha}\in\mathcal{S}~\text{s.t.}~ \hat{\alpha}|_{[0,t]}=\alpha|_{[0,t]}~\text{and}~A=\hat{\alpha}_{t+1}\}.\] The set $P(\alpha|_{[0,t]})$ contains the cover elements $A$ such that the sequence $\alpha|_{[0,t]}A$ can be extended to a sequence in $\mathcal{S}$. If $t=n-1$ then $\alpha|_{[0,n-1]}=\alpha$ and define \[P(\alpha):=\left\{A\in\mathcal{A}|\exists\hat{\alpha}\in\mathcal{S}~\text{s.t.}~ A=\hat{\alpha}_0\right\},\] which is actually independent of $\alpha\in\mathcal{S}$ and corresponds to the ``initial'' cover elements $A$ in $\mathcal{S}$, i.e., there exists $\alpha\in\mathcal{S}$ with $A=\alpha(0)$. A set $\mathcal{S}\subset\mathcal{A}^n$ is called \emph{$(n,Q)$-spanning} in $(\mathcal{A},G)$ if \begin{itemize} \item[(1).] the set $P(\alpha)$ with $\alpha\in\mathcal{S}$ covers $Q$; \item[(2).] for every $\alpha\in\calS$ and $t\in[0,n-1)$, we have \[F(\alpha(t), G(\alpha(t))) \subseteq \bigcup_{A^{\prime} \in P(\alpha|_{[0, t]})} A^{\prime}.\] \end{itemize} The \emph{expansion number} $N(\mathcal{S})$ associated with $\mathcal{S}$ is defined by \[N(\mathcal{S}):=\max _{\alpha \in \mathcal{S}} \prod_{t=0}^{n-1} \sharp P\left(\left.\alpha\right|_{[0,t]}\right).\] Let \[r_{inv}(n,Q,\mathcal{A},G):=\min\{N(\mathcal{S})|\mathcal{S}~\text{is}~(n,Q)\text{-spanning in}~(\mathcal{A},G)\}.\] Since $\log r_{inv}(\cdot,Q,\mathcal{A},G)$ is subadditive (see Lemma 1 in~\cite{Tomar2017invariance}), the following limit exists \[h_{inv}(\mathcal{A},G):=\lim_{n\to\infty}\frac{1}{n}\log r_{inv}(n,Q,\mathcal{A},G),\] and it is called the \emph{entropy} of $(\mathcal{A},G)$. The \emph{invariance feedback entropy} of $Q$ is defined as \[h_{inv}^{fb}(Q):=\inf _{(\mathcal{A}, G)} h_{inv}(\mathcal{A}, G),\] where the infimum is taken over all invariant covers of $Q$. The following theorem states that invariance entropy is bounded above by invariance feedback entropy. \begin{thm}\label{thm:inv-less-ifb} If $\Sigma=(X,U,F)$ is a system and $Q\subset X$ is a controlled invariant set, then \[h_{inv}(Q)\leq h_{inv}^{fb}(Q).\] \end{thm} \begin{proof} Suppose that $(\mathcal{A},G)$ is an invariant cover of $Q$, $n\in\N$, and $\mathcal{S}\subset\mathcal{A}^n$ is $(n,Q)$-spanning in $(\mathcal{A},G)$. Let $\mathscr{S}=\{G(\alpha)|\alpha\in\mathcal{S}\}$, where $G(\alpha):=G(\alpha_0)\cdots G(\alpha_{n-1})$. It is obvious that \[Q\subset \bigcup_{S\subset\mathscr{S}~\text{and}~S\in AF^n(Q)}Q_S.\] Hence, $\mathscr{S}$ is an $(n,Q)$-spanning set of $Q$ and $\sharp\mathscr{S}\leq\sharp\mathcal{S}$. It follows that $r_{inv}(n,Q)\leq\sharp\mathcal{S}$. Applying Lemma 2 in~\cite{Tomar2017invariance}, we have $r_{inv}(n,Q)\leq N(\mathcal{S})$, which implies that $r_{inv}(n,Q)\leq r_{inv}(n,Q,\mathcal{A},G)$. Thus we obtain the desired inequality.
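For concreteness, the quantity $r_{inv}(n,Q,\mathcal{A},G)$ can be evaluated by brute force on very small finite systems, directly from the definitions above. The following Python sketch (assuming Python~3.8+ for \texttt{math.prod}) is only an illustration on a hypothetical three-state system, modeled on the first example below; it enumerates all candidate sets $\mathcal{S}\subset\mathcal{A}^n$ and minimizes the expansion number over the $(n,Q)$-spanning ones.
\begin{verbatim}
from itertools import product, combinations
from math import prod

# uncertain transitions on X = {0,1,2} with U = {a,b}
F = {(0,'a'): {0,1}, (0,'b'): {2}, (1,'a'): {2}, (1,'b'): {1},
     (2,'a'): {2}, (2,'b'): {1}}
Q = {0, 1}
A = (frozenset({0}), frozenset({1}))       # invariant cover of Q
G = {A[0]: 'a', A[1]: 'b'}                 # F(A, G(A)) is contained in Q

def P(prefix, S):
    # cover elements that extend `prefix` to a sequence in S
    return {alpha[len(prefix)] for alpha in S if alpha[:len(prefix)] == prefix}

def is_spanning(S, n):
    if not Q <= set().union(*(alpha[0] for alpha in S)):     # condition (1)
        return False
    for alpha in S:                                          # condition (2)
        for t in range(n - 1):
            post = set().union(*(F[(x, G[alpha[t]])] for x in alpha[t]))
            if not post <= set().union(*P(alpha[:t + 1], S)):
                return False
    return True

def expansion(S, n):
    initial = len({alpha[0] for alpha in S})                 # the t = n-1 factor
    return max(initial * prod(len(P(alpha[:t + 1], S)) for t in range(n - 1))
               for alpha in S)

def r_inv_fb(n):
    seqs = list(product(A, repeat=n))
    return min(expansion(set(S), n)
               for k in range(1, len(seqs) + 1)
               for S in combinations(seqs, k)
               if is_spanning(set(S), n))

print([r_inv_fb(n) for n in (1, 2, 3)])    # expected: [2, 4, 8], i.e. 2^n
\end{verbatim}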
\end{proof} The following two examples illustrate that both $h_{inv}(Q)<h_{inv}^{fb}(Q)$ and $h_{inv}(Q)=h_{inv}^{fb}(Q)$ may be possible. \begin{exam} Let $\Sigma=(X,U,F)$ be a system, where $X=\{0,1,2\}$ and $U=\{a,b\}$. The transition function $F$ is illustrated by \begin{center} \begin{tikzpicture}[scale=0.5] \tikzset{ LabelStyle/.style = {text = black, font = \bfseries,fill=white }, VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}, EdgeStyle/.style = {-latex, bend left} } \SetGraphUnit{5} \Vertex{1} \WE(1){0} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries,dashed}} \EA(1){2} \Edge[label = a](0)(1) \Loop[dist = 4cm, dir = NO, label = a,style={-latex,thick}](0.west) \tikzset{EdgeStyle/.style = {-latex}} \Loop[dist = 4cm,dir=NO,label = b,style={-latex,thick}](1.north) \tikzset{EdgeStyle/.style = {-latex, bend right,dashed}} \Edge[label = b](0)(2) \tikzset{EdgeStyle/.style = {-latex, bend right,dashed}} \Edge[label = b](2)(1) \tikzset{EdgeStyle/.style = {-latex,dashed}} \Edge[label = a](1)(2) \tikzset{EdgeStyle/.style = {bend right,dashed}} \Loop[dist = 4cm,dir=SO,label = a,style={-latex,thick}](2.east) \end{tikzpicture} \end{center} The set of interest is $Q:=\{0,1\}$. Then $h_{inv}(Q)=0$ and $h_{inv}^{fb}(Q)=1$. \end{exam} \begin{proof} Let $\mathscr{S}=\{a^ib^{n-1-i}|i=0,1,\ldots,n-1\}$. It is not difficult to check that $\mathscr{S}$ is an $(n,Q)$-spanning set of $Q$. So $h_{inv}(Q)=0$. Put $\mathcal{A}=\{\{0\},\{1\}\}$, and define $G:\mathcal{A}\to U$ by $G(\{0\})=a$ and $G(\{1\})=b$. We shall show that $h_{inv}^{fb}(Q)=1$. Suppose that $\mathcal{S}\subset\mathcal{A}^{n}$ is $(n,Q)$-spanning in $(\mathcal{A},G)$. Then $\alpha=\underbrace{\{0\}\ldots\{0\}}_{n}\in\mathcal{S}$. This yields that $N(\mathcal{S})=2^n$ and \[r_{inv}(n,Q,\mathcal{A},G)=2^n.\] It then follows that $h(\mathcal{A},G)=1$. Since $(\mathcal{A},G)$ is the only invariant cover of $Q$, we obtain $h_{inv}^{fb}(Q)=1$. \end{proof} \begin{exam}\label{exa:inv-equas-IFE} Let $\Sigma=(X,U,F)$ be a system, where $X=\{0,1,2\}$ and $U=\{a,b\}$. The transition function $F$ is illustrated by \begin{center} \begin{tikzpicture}[scale=0.5] \tikzset{ LabelStyle/.style = {text = black, font = \bfseries,fill=white }, VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}, EdgeStyle/.style = {-latex, bend left} } \SetGraphUnit{5} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries,dashed}} \Vertex{1} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}} \EA(1){2} \WE(1){0} \tikzset{EdgeStyle/.style = {-latex, bend left,dashed}} \Edge[label = b](0)(1) \Edge[label = b](1)(0) \tikzset{EdgeStyle/.style = {bend left}} \Loop[dist = 4cm, dir = NO, label = a,style={-latex,thick}](0.west) \tikzset{EdgeStyle/.style = {-latex, bend left=50}} \Edge[label = a](0)(2) \tikzset{EdgeStyle/.style = {-latex, bend left,dashed}} \Edge[label = a](2)(1) \tikzset{EdgeStyle/.style = {-latex,bend left,dashed}} \Edge[label = a](1)(2) \tikzset{EdgeStyle/.style = {bend right}} \Loop[dist = 4cm,dir=SO,label = b,style={-latex,thick}](2.east) \tikzset{EdgeStyle/.style = {-latex, bend left=50}} \Edge[label = b](2)(0) \end{tikzpicture} \end{center} The set of interest is $Q:=\{0,2\}$. Then $h_{inv}(Q)=h_{inv}^{fb}(Q)=1$. \end{exam} \begin{proof} We see that $h_{inv}^{fb}(Q)=1$ from Example 1 in~\cite{Tomar2017invariance}. We shall show that $h_{inv}(Q)=1$. Suppose that $\mathscr{S}\subset U^n$ is an $(n,Q)$-spanning set. 
Since $Q_a=\{0\}$, $Q_b=\{2\}$, $F(0,a)=F(2,b)=Q=\{0,2\}$, we have $U^n\subset\mathscr{S}$. It follows that $\sharp\mathscr{S}=2^n$. Hence $r_{inv}(n,Q)=2^n$ and $h_{inv}(Q)=1$. \end{proof} \section{Calculations for invariance entropy and IFE}\label{sec:calculation} This section deals with the calculations for invariance entropy and invariance feedback entropy. \subsection{Calculation for entropy of quasi-invariant-partitions} Let $\Sigma=(X,U,F)$ be a system, $Q\subset X$ a controlled invariant set, and $(\calA,G)$ an invariant cover of $Q$. Before going further, we borrow some notations from~\cite{Tomar2017invariance} and introduce some new concepts. For every $A\in\calA$, let $D_\calA(A):=\{A'\in\calA:F(A,G(A))\cap A'\neq\emptyset\}$ and $w_\calA(A):=\log \sharp D_\calA(A)$. When there is no ambiguity, we write $D(A)$ and $w(A)$ instead of $D_\calA(A)$ and $w_\calA(A)$. Given $m\in\N$, a sequence $(A_i)_{i=0}^m$ is called \emph{admissible} for $(\calA,G)$ if $F(A_i,G(A_i))\cap A_{i+1}\neq\emptyset$ for every $0\leq i< m$. Set \begin{align*} W_m(\calA,G):=\{(A_i)_{i=0}^{m-1}|(A_i)_{i=0}^{m-1}~\text{is admissible for}~(\calA,G) \}. \end{align*} A sequence $c=(A_i)_{i=0}^{k-1}$ is called \emph{an irreducible sequence of period $k$} for $(\calA,G)$ if $c^\infty$ is admissible for $(\calA,G)$ and $A_i\neq A_j$ for distinct $i,j$. (By ``$c^{\infty}$'' we mean $ccc\cdots$.) The \emph{period} of $c$ is defined as $k$ (denoted by $l(c)$) and the \emph{mean weight} for $c$ is defined as \[\overline{w}(c):=\frac{1}{k}\sum_{i=0}^{k-1}w(A_i).\] The \emph{maximum mean weight} $\overline{w}^*(\calA,G)$ is defined by $\overline{w}^*(\calA,G):=\max_{c}\overline{w}(c)$, where the maximum is taken over all irreducible periodic sequences for $(\calA,G)$. The \emph{adjacency matrix} $M_{\calA,G}=(M_{AB})$ of $(\calA,G)$ is defined by \[M_{AB}=\left\{ \begin{array}{ll} 1, & F(A,G(A))\cap B\neq\emptyset; \\ 0, & otherwise. \end{array} \right. \] We define the \emph{weighted adjacency matrix} $W_{\calA,G}=(W_{AB})$ with $A, B\in\calA$ of $(\calA,G)$ as \[W_{AB}=\left\{ \begin{array}{ll} \sharp D(A), & F(A,G(A))\cap B\neq\emptyset; \\ 0, & otherwise. \end{array} \right. \] Recall that the $l_\infty$-norm for a $n\times n$ matrix $M$ is defined by $\|M\|_\infty=\max_{1\leq i,j\leq n}|a_{ij}|$. An invariant cover $(\calA,G)$ is said to be a \emph{quasi-invariant-partition} of $Q$ if \begin{equation}\label{eq:quasi-partition-a} A\setminus\bigcup_{B\in\calA,B\neq A}B\neq\emptyset,~\forall A\in\calA \end{equation} and \begin{equation}\label{eq:quasi-parition-b} F(A,G(A))\bigcap(B\setminus \bigcup_{C\in D(A),~C\neq B}C)\neq\emptyset,~\forall A\in\calA, B\in D(A); \end{equation} an \emph{invariant partition} if $\calA$ is a partition of $Q$. Obviously an invariant partition is a quasi-invariant-partition. Tomar et al. in~\cite{Tomar2020Numerical} obtained an interesting result for computing entropy of an invariant partition. Here we generalize this result to quasi-invariant-partitions. \begin{thm}\label{thm:IFE-of-quasi-equals-log-sequences} Suppose that $\Sigma=(X,U,F)$ is a system, $Q\subset X$ is controlled invariant, and $(\calA,G)$ is a quasi-invariant-partition of $Q$. Then \[h_{inv}(\calA,G)=\lim_{m\to\infty}\frac{1}{m}\max_{\alpha\in W_m(\calA,G)}\sum_{i=0}^{m-2}w(\alpha(i)).\] \end{thm} \begin{proof} \textbf{Claim 1.} $W_m(\calA,G)$ is an $(m,Q)$-spanning set of $(\calA,G)$ for $m\geq2$. 
For every $\alpha\in W_m(\calA,G)$, $P(\alpha)=\{\alpha(0):\alpha\in W_m(\calA,G)\}=\calA$, $P(\alpha|_{[0,t]})=D(\alpha(t))$, and \[F(\alpha(t),G(\alpha(t)))\subset \bigcup_{A'\in D(\alpha(t))}A'.\] Hence Claim 1 holds. \textbf{Claim 2.} For any $(m,Q)$-spanning set $\calS$ in $(\calA,G)$, $W_m(\calA,G)\subset \calS$. We prove this claim by induction on $m$. Suppose that $\calS$ is a $(2,Q)$-spanning set of $(\calA,G)$. Then $P(\alpha)=\calA$ for every $\alpha\in\calS$ by formula~\eqref{eq:quasi-partition-a}. Thus $W_2(\calA,G)\subset\calS$ follows from formula~\eqref{eq:quasi-parition-b}. So the claim holds for $m=2$. Assume that the claim holds for $2\leq i\leq m-1$. Let $\calS$ be an $(m,Q)$-spanning set of $(\calA,G)$. Then $\calS'=\{\alpha|_{[0,m-2]}:\alpha\in\calS\}$ is an $(m-1,Q)$-spanning set and $W_{m-1}(\calA,G)\subset\calS'$. Hence $W_m(\calA,G)\subset\calS$ by formula~\eqref{eq:quasi-parition-b}. So the claim holds for every $m\geq2$. Combining Claim 1 with Claim 2 yields $r_{inv}(m,Q,\calA,G)=N(W_m(\calA,G))$. It follows that \begin{align*} h_{inv}(\calA,G)&=\lim_{m\to\infty}\frac{1}{m}\log N(W_m(\calA,G))\\ &=\lim_{m\to\infty}\frac{1}{m}\log \big(\sharp\calA\max_{\alpha\in W_m(\calA,G)}\prod_{i=0}^{m-2}\sharp D(\alpha(i))\big)\\ &=\lim_{m\to\infty}\frac{1}{m}\max_{\alpha\in W_m(\calA,G)}\sum_{i=0}^{m-2}w(\alpha(i)). \end{align*} \end{proof} \begin{thm}\label{thm:invariant-partition-log-weight} Let $\Sigma=(X,U,F)$ be a system, $Q\subset X$ a controlled invariant set, and $(\calA,G)$ a quasi-invariant-partition of $Q$. Then \[h_{inv}(\calA,G)=\overline{w}^*(\calA,G),\] where $\overline{w}^*(\calA,G)$ is the maximum mean weight. \end{thm} \begin{proof} We first show that $h_{inv}(\calA,G)\geq \overline{w}^*(\calA,G)$. Suppose that $c=(A_i)_{i=0}^{k-1}$ is an irreducible sequence of period $k$. For any $m\in\N$, let \[\beta_{c,m}:=\underbrace{A_0\cdots A_{k-1}\cdots A_0\cdots A_{k-1}}_{m}A_0.\] Then $\beta_{c,m}\in W_{mk+1}(\calA,G)$. Utilizing Theorem~\ref{thm:IFE-of-quasi-equals-log-sequences}, we have \begin{align*} h_{inv}(\calA,G)&=\lim_{m\to\infty}\frac{1}{mk+1}\max_{\alpha\in W_{mk+1}(\calA,G)}\sum_{i=0}^{mk-1}w(\alpha(i))\\ &\geq \lim_{m\to\infty}\frac{1}{mk+1}\sum_{i=0}^{mk-1}w(\beta_{c,m}(i))\\ &=\lim_{m\to\infty}\frac{m}{mk+1}\sum_{i=0}^{k-1}w(A_i)\\ &=\frac{1}{k}\sum_{i=0}^{k-1}w(A_i)=\overline{w}(c). \end{align*} The desired inequality immediately follows from the arbitrariness of $c$. We now show the reverse inequality. For any $m\geq\sharp\calA+3$ and $\alpha_1\in W_m(\calA,G)$, we have $\alpha_1(0)\alpha_1(1)\cdots\alpha_1(m-2)\in W_{m-1}(\calA,G)$. Using the pigeonhole principle, we can pick an irreducible sequence $c_1=(A_{1,i})_{i=0}^{k_1-1}$ of period $k_1$ in $(\calA,G)$ and an integer $p_1\in[0,\sharp\calA]$ such that \[\alpha_1(p_1)\alpha_1(p_1+1)\cdots\alpha_1(p_1+k_1)=A_{1,0}A_{1,1}\cdots A_{1,k_1-1}A_{1,0}.\] We thus have \[w(\alpha_1(p_1))+w(\alpha_1(p_1+1))+\cdots+w(\alpha_1(p_1+k_1-1))=k_1\overline{w}(c_1)\leq k_1\overline{w}^*(\calA,G).\] Let \[\alpha_2=\alpha_1(0)\cdots\alpha_1(p_1-1)\alpha_1(p_1+k_1)\cdots\alpha_1(m-2).\] Clearly, $\alpha_2\in W_{m-k_1-1}(\calA,G)$.
Applying the pigeonhole principle repeatedly, we can find a sequence of irreducible sequences of period $\{c_j\}_{j=1}^q$ in $(\calA,G)$, a sequence $\{\alpha_{j+1}\}_{j=1}^{q}$ with $\alpha_{j+1}\in W_{m-\sum_{i=1}^jk_i-1}(\calA,G)$ and two sequence numbers $\{k_j\}_{j=1}^q$ with $l(c_j)=k_j$ and $\{p_j\}_{j=1}^q$ with $p_j\in[0,\sharp\calA]$ such that \[\alpha_j(p_j)\alpha_j(p_j+1)\cdots\alpha_j(p_j+k_j)=A_{j,0}A_{j,1}\cdots A_{j,k_j},\] \[w(\alpha_j(p_j))+w(\alpha_j(p_j+1))+\cdots w(\alpha_j(p_j+k_j-1))=k_j\overline{w}(c_j)\leq k_j\overline{w}^*(\calA,G),\] and $\alpha_{q+1}\in W_{m-\sum_{j=1}^qk_j-1}(\calA,G)$, where $m-\sum_{j=1}^qk_j-1\in[0,\sharp\calA]$. It is convenient to write $L=m-\sum_{j=1}^qk_j-1$. Then \begin{align*} \sum_{i=0}^{m-2}w(\alpha(i)) &=\sum_{j=1}^q\sum_{i=0}^{k_j-1}w(\alpha_j(p_j+i))+\sum_{i=0}^{L-1}w(\alpha_{q+1}(i))\\ &\leq\sum_{j=1}^qk_j\overline{w}^*(\calA,G)+L\max_{A\in\calA}w(A)\\ &=(m-1-L)\overline{w}^*(\calA,G)+L\max_{A\in\calA}w(A)\\ &\leq(m-1)\overline{w}^*(\calA,G)+\sharp\calA\max_{A\in\calA}w(A). \end{align*} Therefore, \begin{align*} h_{inv}(\calA,G)&=\lim_{m\to\infty}\frac{1}{m}\max_{\alpha\in W_m(\calA,G)}\sum_{i=0}^{m-2}w(\alpha(i))\\ &\leq\lim_{m\to\infty}\frac{m-1}{m}\overline{w}^*(\calA,G) +\lim_{m\to\infty}\frac{1}{m}\sharp\calA\max_{A\in\calA}w(A)\\ &=\overline{w}^*(\calA,G). \end{align*} This completes the proof. \end{proof} \begin{rem} Passing $(\calA,G)$ to an invariant partition of $Q$, we recover Theorem 1 in~\cite{Tomar2020Numerical}. \end{rem} \begin{cor}\label{cor:quasi-par-bigger-equal-par} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$ be controlled invariant. For every quasi-invariant-partition $(\calA,G)$ of $Q$, there exists an invariant partition $(\calA',G')$ of $Q$ such that $\sharp\calA=\sharp\calA'$ and $h_{inv}(\calA',G')\leq h_{inv}(\calA,G)$. \end{cor} \begin{proof} Let $\calA=\{A_1,\ldots,A_p\}$. Define sets $A_1',\ldots,A_p'$ by $A_1'=A_1$, $A_j'=A_j\setminus\cup_{i=1}^{j-1}A_i$, for any $2\leq j\leq p$, and $G'(A_j'):=G(A_j)$, $j=1,\ldots,p$. Then $(\calA',G')$ is an invariant partition of $Q$. Suppose $c'=(A_i')_{1}^{q-1}$ is an irreducible periodic sequence in $(\calA',G')$ so that $h_{inv}(\calA',G')=\overline{w}(c')$. Since $A_i'\subset A_i$ for any $1\leq i\leq p$, it follows that $c:=(A_i)_{1}^{q-1}$ is an irreducible periodic sequence in $(\calA,G)$ and $\overline{w}(c')\leq\overline{w}(c)$; hence, we have by Theorem~\ref{thm:invariant-partition-log-weight} $h_{inv}(\calA,G)\geq\overline{w}(c)\geq h_{inv}(\calA',G')$. \end{proof} In the following example, we construct a system that has a quasi-invariant-partition $(\calA_1,G_1)$, where we find two invariant partitions $(\calA_2,G_2)$ and $(\calA_3,G_3)$ such that \[h_{inv}(\calA_2,G_2)= h_{inv}(\calA_1,G_1)~\text{and}~h_{inv}(\calA_3,G_3)< h_{inv}(\calA_1,G_1).\] \begin{exam} Let $\Sigma=(X,U,F)$ be a system, where $X=\{0,1,2,3\}$ and $U=\{a,b\}$. 
The transition function $F$ is illustrated by \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.5] \tikzset{ LabelStyle/.style = {text = black, font = \bfseries,fill=white }, VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}, EdgeStyle/.style = {-latex, bend left} } \SetGraphUnit{5} \Vertex{1} \WE(1){0} \EA(1){2} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries,dashed}} \EA(2){3} \tikzset{EdgeStyle/.style = {-latex, bend left=50}} \Edge[label = a](0)(2) \tikzset{EdgeStyle/.style = {-latex, bend left}} \Edge[label = a](0)(1) \Edge[label = b](2)(0) \tikzset{EdgeStyle/.style = {-latex}} \Edge[label = a](1)(0) \Edge[label = b](1)(2) \tikzset{EdgeStyle/.style = {-latex,dashed}} \Edge[label = a](2)(3) \tikzset{EdgeStyle/.style = {-latex,bend right,dashed}} \Edge[label = b](0.south)(3) \Loop[dist = 4cm,dir=SO,label ={a,b},style={-latex,thick}](3.east) \end{tikzpicture} \caption{}\label{fig:inv-par-less-equal-quasi} \end{figure} The set of interest is $Q:=\{0,1,2\}$. Let $A_{11}=\{0,1\}$, $A_{12}=\{1,2\}$, $A_{21}=\{2\}$, $A_{31}=\{0\}$, $A_{32}=\{1\}$, and $A_{33}=\{2\}$; and set $\calA_1=\{A_{11},A_{12}\}$, $\calA_2=\{A_{11},A_{21}\}$, and $\calA_3=\{A_{31},A_{32},A_{33}\}$. Define $G_1:\calA_1\to U$ by $G_1(A_{11})=a$ and $G_1(A_{12})=b$; $G_2:\calA_2\to U$ by $G_2(A_{11})=a$ and $G_2(A_{21})=b$; and $G_3:\calA_3\to U$ by $G_3(A_{31})=a$, $G_3(A_{32})$=a, and $G_3(A_{33})=b$. Then \[h_{inv}(\calA_1,G_1)=h_{inv}(\calA_2,G_2)=1,~h_{inv}(\calA_3,G_3)=\frac12.\] \end{exam} \begin{proof} By definition, $(\calA_1,G_1)$ is a quasi-invariant-partition, and $(\calA_2,G_2)$ and $(\calA_3,G_3)$ are invariant partitions. We now compute entropy for $(\calA_1,G_1)$, $(\calA_2,G_2)$, and $(\calA_3,G_3)$. From Fig.~\ref{fig:inv-par-less-equal-quasi}, we see $A_{11}=\{0,1\}$ is an irreducible sequence of period $1$ for both $(\calA_1,G_1)$ and $(\calA_2,G_2)$ and $\overline{w}(A_{11})=1$. Since $h_{inv}(\calA_1,G_1)\leq1$ and $h_{inv}(\calA_2,G_2)\leq1$, it follows from Theorem~\ref{thm:invariant-partition-log-weight} that $h_{inv}(\calA_1,G_1)=1$ and $h_{inv}(\calA_2,G_2)=1$. Fig.~\ref{fig:inv-par-less-equal-quasi} also tells us that $(\calA_3,G_3)$ only has irreducible sequences of period $2$: $A_{31}A_{32}$, $A_{31}A_{33}$, $A_{32}A_{31}$, and $A_{33}A_{31}$. Applying Theorem~\ref{thm:invariant-partition-log-weight}, an easy computation shows that $h_{inv}(\calA_3,G_3)=\frac12$. \end{proof} Theorem~\ref{thm:invariant-partition-log-weight} states that the entropy for a quasi-invariant-partition is equal to its maximum mean weight. From a numerical point of view, we shall give lower and upper bounds for entropy of quasi-invariant-partitions. In some special case, we can obtain the entropy for a quasi-invariant-partition by the logarithm of the spectral radius of its weighted adjacency matrix. \begin{thm}\label{thm:upper-lower-bounds-partition} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$. If $(\calA,G)$ is a quasi-invariant-partition of $Q$, then \[\log \rho(W_{\calA,G})-\log \rho(M_{\calA,G})\leq h_{inv}(\calA,G)\leq\min\{\log \|W_{\calA,G}\|_\infty,\log \rho(W_{\calA,G})\},\] where $\rho(W_{\calA,G})$ is the spectral radius of $W_{\calA,G}$ (the maximum of absolute values of its eigenvalues). Particularly, if $\rho(M_{\calA,G})=1$, then $h_{inv}(\calA,G)=\log \rho(W_{\calA,G})$. \end{thm} \begin{proof} We first show the right hand side inequality. It is clear that $h_{inv}(\calA,G)\leq\log \|W_{\calA,G}\|_\infty$. 
Since $(\calA,G)$ is a quasi-invariant-partition, it follows from the proof of Theorem~\ref{thm:IFE-of-quasi-equals-log-sequences} that \[r_{inv}(n,Q,\mathcal{A},G)=\sharp\calA\cdot\max_{\alpha\in W_n(\calA,G)}\prod_{i=0}^{n-2}\sharp D(\alpha_i).\] For any $\alpha\in W_n(\calA,G)$, we have \[\prod_{t=0}^{n-2}\sharp D(\alpha_t)=W_{\alpha_0\alpha_1}\cdot W_{\alpha_1\alpha_2}\cdots W_{\alpha_{n-2}\alpha_{n-1}}\leq \|W_{\calA,G}^{n-1}\|_\infty.\] This implies that \[r_{inv}(n,Q,\mathcal{A},G)\leq\sharp\calA\cdot\|W_{\calA,G}^{n-1}\|_\infty.\] Hence, \begin{align*} h_{inv}(\calA,G)&=\lim_{n\to\infty}\frac{1}{n}\log r_{inv}(n,Q,\calA,G)\\ &\leq\lim_{n\to\infty}\frac{1}{n}\log \sharp\calA\cdot\|W_{\calA,G}^{n-1}\|_\infty\\ &=\lim_{n\to\infty}\frac{n-1}{n}\log \|W_{\calA,G}^{n-1}\|_\infty^{\frac1{n-1}}. \end{align*} Employing Theorem 5.7.10 in~\cite{Horn-Johnson2013}, we obtain $h_{inv}(\calA,Q)\leq\log \rho(W_{\calA,G})$. We now show the left hand side inequality holds. For any $\beta\in W_n(\calA,G)$, we have \[\prod_{t=0}^{n-2}\sharp D(\beta_i)=W_{\beta_0\beta_1}\cdot W_{\beta_1\beta_2}\cdots W_{\beta_{n-2}\beta_{n-1}}\] and \[M_{\beta_0\beta_1}\cdot M_{\beta_1\beta_2}\cdots M_{\beta_{n-2}\beta_{n-1}}=1.\] Then \[W_{\beta_0\beta_{n-1}}\leq M_{\beta_0\beta_{n-1}}\cdot\max_{\alpha\in W_n(\calA,G)}\prod_{i=0}^{n-2}\sharp D(\alpha_i)\leq \|M_{\calA,G}^{n-1}\|_\infty\cdot\max_{\alpha\in W_n(\calA,G)}\prod_{i=0}^{n-2}\sharp D(\alpha_i).\] It follows that \[\|W_{\calA,G}^{n-1}\|_\infty\leq \|M_{\calA,G}^{n-1}\|_\infty\cdot\max_{\alpha\in W_n(\calA,G)}\prod_{i=0}^{n-2}\sharp D(\alpha_i).\] Thus \begin{align*} \lim_{n\to\infty}\frac{1}{n}\log r_{inv}(n,Q,\mathcal{A},G)&\geq\lim_{n\to\infty}\frac{1}{n}\log \sharp\calA\cdot\frac{\|W_{\calA,G}^{n-1}\|_\infty}{\|M_{\calA,G}^{n-1}\|_\infty}\\ &=\log \rho(W_{\calA,G})-\log \rho(M_{\calA,G}). \end{align*} \end{proof} \begin{rem}(1) Since the norm $\|\cdot\|_\infty$ is not spectrally dominant, that is, there exists a matrix $M$ such that $\|M\|_\infty<\rho(M)$, we take the minimum of $\log \|W_{\calA,G}\|_\infty$ and $\log \rho(W_{\calA,G})$ in the right hand side of the inequality in Theorem~\ref{thm:upper-lower-bounds-partition}. See Exa.~\ref{exa:inv-fb-exa}. (2) If $(\calA,G)$ is a quasi-invariant-partition of $Q$, then \begin{itemize} \item[(\romannumeral1)]$\rho(M_{\calA,G})\geq1$, which implies that $\log \rho(M_{\calA,G})\geq0$; \begin{itemize} \item Since for any $A\in\calA$ there exists $B\in\calA$ so that $M_{AB}=1$, we have \[\|M_{\calA,G}^n\|_\infty\geq1,~\forall~n\in\N.\] Then $\rho(M_{\calA,G})\geq1$. \end{itemize} \item[(\romannumeral2)]$\rho(W_{\calA,G})\geq\rho(M_{\calA,G})$, which means that $\log \rho(W_{\calA,G})-\log \rho(M_{\calA,G})\geq0$. \begin{itemize} \item Since $W_{AB}\geq M_{AB}$, $W_{AB}=\sharp D(A)M_{AB}$, and $M_{AB}=1$ for any $A,B\in\calA$, we have \[\|W_{\calA,G}^n\|_\infty\geq \min_{A\in\calA}\{(\sharp D(A))^{n-1}\}\cdot\|M_{\calA,G}^n\|_\infty,~\forall~n\in\N.\] It therefore follows that \[\log \rho(W_{\calA,G})-\log \rho(M_{\calA,G})\geq \min_{A\in\calA}\{w(A)\}.\] \end{itemize} \item[(\romannumeral3)] If $\rho(M_{\calA,G})>1$, then we can use Theorem~\ref{thm:invariant-partition-log-weight} to compute the entropy of $(\calA,G)$. On the other hand, if $\rho(M_{\calA,G})=1$, then the entropy of $(\calA,G)$ is $\log \rho(W_{\calA,G})$. See Exa.~\ref{exa:inv-fb-exa}. 
\end{itemize} \end{rem} \subsection{Calculation of invariance entropy and IFE for some control systems} Let $\Sigma=(X,U,F)$ be a system, $Q\subset X$, and $V\subset U$. We say that $V$ is a \emph{cover} of $Q$ if $Q\subset \cup_{a\in V}Q_a$, where $Q_a=\{x\in Q| F(x,a)\subset Q\}$. The \emph{admissible matrix} $M_{Q,V}=(M_{ab})_{a,b\in V}$ of $Q$ with respect to $V$ is defined by \[M_{ab}=\left\{ \begin{array}{ll} 1, & \exists~x\in Q_a~s.t.~F(x,a)\cap Q_b\neq\emptyset; \\ 0, & otherwise. \end{array} \right. \] Recall that the $l_1$ norm of an $n\times n$ matrix $M$ is $\|M\|_1=\sum_{i,j=1}^n|a_{ij}|$. \begin{prop} Let $\Sigma=(X,U,F)$ be a system, $Q\subset X$ a controlled invariant set, and $V\subset U$ a cover of $Q$. Then $h_{inv}(Q)\leq \log \rho(M_{Q,V})$. \end{prop} \begin{proof} Since $V$ is a cover of $Q$, we can pick an $(n,Q)$-spanning set $\scrS\subset V^{n}$ for every $n\geq2$. For every $u\in\scrS$, we have $M_{u_0u_1}\cdot M_{u_1u_2}\cdots M_{u_{n-2}u_{n-1}}=1$. Thus $r_{inv}(n,Q)\leq\sharp\scrS\leq \|M_{Q,V}^{n-1}\|_1$. This implies that \begin{align*} h_{inv}(Q)&=\limsup_{n\to\infty}\frac{1}{n}\log r_{inv}(n,Q)\\ &\leq\limsup_{n\to\infty}\frac{1}{n}\log \|M_{Q,V}^{n-1}\|_1\\ &=\limsup_{n\to\infty}\frac{n-1}{n}\log \|M_{Q,V}^{n-1}\|_1^{\frac1{n-1}}. \end{align*} Using Gelfand formula~\cite[Corollary 5.6.14]{Horn-Johnson2013}, it follows that $h_{inv}(Q)\leq\log \rho(M_{Q,V})$. \end{proof} \begin{cor} Let $\Sigma=(X,U,F)$ be a system and $Q\subset X$ be controlled invariant. Then \[h_{inv}(Q)\leq\inf_{V\subset U~\text{covers}~Q}\log \rho(M_{Q,V}).\] \end{cor} \begin{thm}\label{thm:symbolic-log-radius} Let $\Sigma=(X,U,F)$ be a system, $Q\subset X$ a controlled invariant set, $V\subset U$ a finite cover of $Q$. If in addition \begin{itemize} \item[(C.1)] $Q_a\cap Q_b=\emptyset$ for distinct $a,b\in V$, \item[(C.2)]there exists $K\subset Q_a$ such that $Q_b\subset F(K,a)$ for every $M_{ab}=1$, \item[(C.3)] $Q_c=\emptyset$ for every $c\in U\setminus V$. \end{itemize} Then $h_{inv}(Q)=\log \rho(M_{Q,V})$. \end{thm} \begin{proof} To simplify the notation, set $M=M_{Q,V}$. It is easy to see from conditions (C.1) and (C.3) that the set $\scrS_2=\{u_0u_1:M_{u_0u_1}=1\}$ is the only $(2,Q)$-spanning set of $Q$. Assume that \[\scrS_n=\{u_0u_1\cdots u_{n-1}:M_{u_0u_1}\cdot M_{u_1u_2}\cdots M_{u_{n-2}u_{n-1}}=1\}\] is the only $(n,Q)$-spanning set of $Q$. We will show that $\scrS_{n+1}=\{u_0u_1\cdots u_{n}:M_{u_0u_1}\cdot M_{u_1u_2}\cdots M_{u_{n-1}u_{n}}=1\}$ is the only $(n+1,Q)$-spanning set of $Q$. Let $\scrS$ be an $(n+1,Q)$-spanning set. Then $\scrS\subset\scrS_{n+1}$ and \[\scrS|_n=\{w\in U^n:\exists u\in\scrS~s.t.~w_i=u_i,i=0,\ldots,n-1\}\] is an $(n,Q)$-spanning set. By assumption, we see that $\scrS_n=\scrS|_n$. For every $u\in\scrS_n$, let $B_u:=\{b\in V| M_{u_{n-1}b}=1\}$ and $\scrS_u:=\{ub|b\in B_u\}$. Condition (C.2) tells us that $\scrS_u\subset\scrS$, and thus $\scrS_{n+1}\subset\scrS$. It immediately follows that $\scrS_{n+1}$ is the only $(n+1,Q)$-spanning set of $Q$ and $r_{inv}(n,Q)=\sharp\scrS_n$ for $n\geq 2$. A standard induction on $n$ then yields \begin{align*} \sharp\scrS_{n}&=\sum_{u_{0} \in V} \sum_{u_{n-1} \in V}\left(M^{n-1}\right)_{u_{0}, u_{n-1}}. \end{align*} Hence $r_{inv}(n,Q)=\sharp\scrS_n=\|M^{n-1}\|_1$, and thus this with Gelfand formula shows that $h_{inv}(Q)=\log \rho(M)$. 
\end{proof} \begin{rem} (1) The formula for invariance entropy in Theorem~\ref{thm:symbolic-log-radius} is completely analogous to that for topological entropy of Markov subshifts (see for example Theorem 3.48 in~\cite{PK2000}). The latter gives us a motivation for the former since a Markov subshift can be described by a binary matrix. (2) Suppose that $V\subset U$ satisfies the conditions of Theorem~\ref{thm:symbolic-log-radius}. Let $\calA_V=\{Q_a:a\in V\}$ and define $G_V:\calA_V\to U$ by $G_V(Q_a)=a$ for every $a\in V$. Then $(\calA_V,G_V)$ is an invariant partition of $Q$. From Theorem~\ref{thm:symbolic-log-radius}, we see $h_{inv}(Q)=\log \rho(M_{Q,V})$. It is natural to wonder if $h_{inv}^{fb}(Q)=h_{inv}(\calA_V,G_V)$. However, the conditions of Theorem~\ref{thm:symbolic-log-radius} are not sufficient for this (see Exa.~\ref{exa:inv-fb-exa}). Nevertheless, the invariance feedback entropy of $Q$ can still be determined from this invariant partition. \end{rem} Under the conditions of Theorem~\ref{thm:symbolic-log-radius}, we say $(\calB,G_\calB)$ is a \emph{refinement} of $(\calA_V,G_V)$ if $\calB$ is a cover of $Q$, every element $B$ of $\calB$ is contained in some element $A$ of $\calA_V$, and $G_\calB(B)=G_V(A)$ for every $B\in\calB$. Let $\scrB(\calA_V,G_V)$ denote the set of all refinements of $(\calA_V,G_V)$. We call $(\calC,G_\calC)\in\scrB(\calA_V,G_V)$ an \emph{atom refinement} of $(\calA_V,G_V)$ if $\calC=\{\{x\}:x\in Q\}$ and $\sharp(F(x,a)\cap Q_b)$ is at most $1$ for every $a,b\in V$ and $x\in Q_a$. Note that if $(\calA_V,G_V)$ has an atom refinement then it is unique. \begin{cor}\label{cor:IFE-equi-partitions} Under the conditions of Theorem~\ref{thm:symbolic-log-radius}, \[h_{inv}^{fb}(Q)=\inf\{h_{inv}(\calB,G_\calB):(\calB,G_\calB)\in\scrB(\calA_V,G_V)\}.\] \end{cor} \begin{proof} Suppose that $(\calB,G_\calB)$ is an invariant cover of $Q$. By (C.3) in Theorem~\ref{thm:symbolic-log-radius}, we have $G_\calB(B)\in V$ for every $B\in\calB$. Since $B\subset \cup_{A\in\calA_V}A$, it follows from (C.1) in Theorem~\ref{thm:symbolic-log-radius} that there exists exactly one element $A\in \calA_V$ such that $B\subset A$. Hence $G_\calB(B)=G_V(A)$, and it follows that $(\calB,G_\calB)$ is a refinement of $(\calA_V,G_V)$. Conversely, every refinement of $(\calA_V,G_V)$ is clearly an invariant cover of $Q$, so the invariant covers of $Q$ are exactly the refinements of $(\calA_V,G_V)$, and the asserted equality follows from the definition of $h_{inv}^{fb}(Q)$. \end{proof} \begin{thm}\label{thm:IFE-equi-atom-partitions} Under the conditions of Theorem~\ref{thm:symbolic-log-radius}, if moreover $\sharp Q$ is finite and $(\calC,G_\calC)$ is the atom refinement of $(\calA_V,G_V)$, then $h_{inv}^{fb}(Q)=h_{inv}(\calC,G_\calC)$. \end{thm} \begin{proof} Since $(\calC,G_\calC)$ is an invariant partition of $Q$, it immediately follows from Theorem~\ref{thm:invariant-partition-log-weight} that $h_{inv}(\calC,G_\calC)=\overline{w}^*(\calC,G_\calC)$. Take an irreducible periodic sequence $c$ in $(\calC,G_\calC)$ such that $\overline{w}^*(\calC,G_\calC)=\overline{w}(c)$. We can without loss of generality assume that $c=(C_i)_{i=0}^{k-1}$, where $k\leq\sharp Q$. Fixing $m\in\N$ and a refinement $(\calB,G_\calB)$ of $(\calA_V,G_V)$, let \[\beta_{c,m}:=\underbrace{C_0\cdots C_{k-1}\cdots C_0\cdots C_{k-1}}_{m}C_0\] and $\calS$ be an $(mk+1,Q)$-spanning set in $(\calB,G_\calB)$. Since $P(\alpha)$ covers $Q$ for any $\alpha\in\calS$ and $\sharp C_0=1$, there exists $\alpha^0\in\calS$ so that $C_0\subset \alpha^0(0)$ and \[F(C_0,G_{\calB}(C_0))\subset F(\alpha^0(0),G_\calB(\alpha^0(0)))\subset \bigcup_{A' \in P\big(\alpha^0|_{[0,0]}\big)} A'.\] Thus there exists $\alpha^1\in\calS$ such that $C_0\subset\alpha^1(0)$ and $C_1\subset \alpha^1(1)$.
Repeating this process, we can find $\alpha^{mk+1}\in\calS$ such that \[C_i\subset\alpha^{mk+1}(jk+i),~j=0,\ldots,m-1, ~i=0,\ldots,k-1,~C_0\subset\alpha^{mk+1}(mk).\] Since $(\calC,G_\calC)$ is the atom refinement, \begin{equation}\label{eq:DC} \sharp D_\calC(\{x\})\leq D^*_x \end{equation} for any $x\in Q$, where \[D^*_x=\min_{\substack{(\calB,G_\calB)\in\scrB(\calA,G)\\A\in\calB\\\{x\}\subset A}}\min\{\sharp\calF:\calF\subset D_\calB(A),~F(A,G_\calB(A))\subset\bigcup_{A'\in\calF}A'\}.\] Replacing $\{x\}$ in (\ref{eq:DC}) by $C_i$ implies that \[\sharp D_\calC(C_i)\leq \sharp P(\alpha^{mk+1}|_{[0,jk+i]})=\sharp P(\alpha^{mk+1}|_{[0,jk+i]}),~j=0,\ldots,m-1, ~i=0,\ldots,k-1.\] Then \[N(\mathcal{S})\geq\prod_{t=0}^{mk} \sharp P(\alpha^{mk+1}|_{[0,t]})\geq\prod_{t=0}^{mk-1} \sharp P(\alpha^{mk+1}|_{[0,t]})\geq\left(\prod_{i=0}^{k-1}\sharp D_\calC(C_i)\right)^m.\] Since $\calS$ is arbitrary, \[r_{inv}(mk+1,Q,\calB,G_\calB)\geq\left(\prod_{i=0}^{k-1}\sharp D_\calC(C_i)\right)^m.\] It follows that \begin{align*} h_{inv}(\calB,G_\calB)&=\lim_{m\to\infty}\frac{1}{mk+1}\log r_{inv}(mk+1,Q,\calB,G_\calB)\\ &\geq \lim_{m\to\infty}\frac{1}{mk+1}\log \left(\prod_{i=0}^{k-1}\sharp D_\calC(C_i)\right)^m\\ &=\overline{w}(c)=h_{inv}(\calC,G_\calC). \end{align*} This together with Corollary~\ref{cor:IFE-equi-partitions} yields the desired equality. \end{proof} \begin{exam}\label{exa:inv-fb-exa} Let $\Sigma=(X,U,F)$ be a system, where $X=\{0,1,2,3,4,5\}$ and $U=\{a,b,c\}$. The transition function $F$ is illustrated by Fig.~\ref{fig:partion-not-sufficient}. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.4] \tikzset{ LabelStyle/.style = {text = black, font = \bfseries,fill=white }, VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}, EdgeStyle/.style = {-latex, bend left} } \SetGraphUnit{5} \Vertex{0} \tikzset{VertexStyle/.style = { fill=white, draw=white,color=white}} \SO(0){6} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}} \SO(6){1} \EA(0){2} \tikzset{VertexStyle/.style = { fill=white, draw=white,color=white}} \SO(2){7} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries}} \SO(7){3} \WE(6){4} \tikzset{VertexStyle/.style = { fill=white, draw=black,circle,font = \bfseries,dashed}} \EA(7){5} \tikzset{EdgeStyle/.style = {-latex}} \Edge[label = a](0)(2) \Edge[label = a](1)(3) \Edge[label = b](2)(1) \Edge[label = b](3)(0) \Edge[label = a](0)(4) \tikzset{EdgeStyle/.style = {-latex, bend left=70}} \tikzset{EdgeStyle/.style = {-latex}} \tikzset{EdgeStyle/.style = {-latex, bend left}} \tikzset{EdgeStyle/.style = {-latex,dashed}} \Edge[label = {b,c}](0)(5) \Edge[style={pos=.25},label = {b,c}](1)(5) \Edge[label = {a,c}](2)(5) \Edge[label = {a,c}](3)(5) \tikzset{EdgeStyle/.style = {-latex,bend right=9,dashed}} \Edge[style={pos=.25},label = {a,b}](4)(5) \tikzset{EdgeStyle/.style = {-latex}} \Loop[dist = 4cm,label = c,style={-latex,thick}](4.west) \Loop[dist = 4cm,dir=EA,label ={a,b,c},style={-latex,thick,dashed}](5.east) \end{tikzpicture} \caption{}\label{fig:partion-not-sufficient} \end{figure} The set of interest is $Q:=\{0,1,2,3,4\}$. Let \begin{align*} \calA_1&=\{A_{10},A_{11},A_{12}\},~A_{10}=\{0,1\},~A_{11}=\{2,3\},~A_{12}=\{4\},\\ \calA_2&=\{A_{20},A_{21},A_{22},A_{23},A_{24}\},~A_{20}=\{0\},~A_{21}=\{1\}, ~A_{22}=\{2\},~A_{23}=\{3\},~A_{24}=\{4\},\\ \calA_3&=\{A_{30},A_{31},A_{32},A_{33}\},~A_{30}=\{0\},~A_{31}=\{1\}, ~A_{32}=\{2,3\},~A_{33}=\{4\}. 
\end{align*} Define $G_i:\calA_i\to U$, $i=1,2,3$ by \begin{align*} G_1(A_{10})&=a,~G_1(A_{11})=b,~G_1(A_{12})=c,\\ G_2(A_{20})&=a,~G_2(A_{21})=a,~G_2(A_{22})=b,~G_2(A_{23})=b,~G_2(A_{24})=c,\\ G_3(A_{30})&=a,~G_3(A_{31})=a,~G_3(A_{32})=b,~G_3(A_{33})=c. \end{align*} Then \begin{align*} h_{inv}(Q)=0,~h_{inv}(\calA_1,G_1)=\frac{1}{2},~h_{inv}(\calA_2,G_2)=\frac{1}{4},~ h_{inv}(\calA_3,G_3)=1,~h_{inv}^{fb}(Q)=\frac{1}{4}. \end{align*} \end{exam} \begin{proof} Clearly $Q_a=\{0,1\}$, $Q_b=\{2,3\}$, $Q_c=\{4\}$, and thus $U$ is a cover of $Q$. It is obvious that conditions (C.1) and (C.3) in Theorem~\ref{thm:symbolic-log-radius} hold. Since $Q_b\subset F(Q_a,a)$, $Q_c\subset F(0,a)$, $Q_a\subset F(Q_b,b)$, and $Q_c\subset F(Q_c,c)$, condition (C.2) in Theorem~\ref{thm:symbolic-log-radius} holds. From Fig.~\ref{fig:partion-not-sufficient}, we have \[M_{Q,U}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] a & b & c & \\ 0 & 1 & 1 & a \\ 1 & 0 & 0 & b \\ 0 & 0 & 1 & c \\ \end{pNiceMatrix}.\] A brief computation shows that $\rho(M_{Q,U})=1$. It then follows from Theorem~\ref{thm:symbolic-log-radius} that $h_{inv}(Q)=0$. We now compute $h_{inv}(\calA_i,G_i)$, $i=1,2,3$. Since \[M_{\calA_1,G_1}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{10} & A_{11} & A_{12} & \\ 0 & 1 & 1 & A_{10} \\ 1 & 0 & 0 & A_{11} \\ 0 & 0 & 1 & A_{12} \\ \end{pNiceMatrix},~~ W_{\calA_1,G_1}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{10} & A_{11} & A_{12} & \\ 0 & 2 & 2 & A_{10} \\ 1 & 0 & 0 & A_{11} \\ 0 & 0 & 1 & A_{12} \\ \end{pNiceMatrix}, \] \[M_{\calA_2,G_2}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{20} & A_{21} & A_{22} & A_{23} & A_{24} & \\ 0 & 0 & 1 & 0 & 1 & A_{20} \\ 0 & 0 & 0 & 1 & 0 & A_{21} \\ 0 & 1 & 0 & 0 & 0 & A_{22} \\ 1 & 0 & 0 & 0 & 0 & A_{23} \\ 0 & 0 & 0 & 0 & 1 & A_{24} \\ \end{pNiceMatrix},~~ W_{\calA_2,G_2}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{20} & A_{21} & A_{22} & A_{23} & A_{24} & \\ 0 & 0 & 2 & 0 & 2 & A_{20} \\ 0 & 0 & 0 & 1 & 0 & A_{21} \\ 0 & 1 & 0 & 0 & 0 & A_{22} \\ 1 & 0 & 0 & 0 & 0 & A_{23} \\ 0 & 0 & 0 & 0 & 1 & A_{24} \\ \end{pNiceMatrix}, \] \[M_{\calA_3,G_3}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{30} & A_{31} & A_{32} & A_{33} & \\ 0 & 0 & 1 & 1 & A_{30} \\ 0 & 0 & 1 & 0 & A_{31} \\ 1 & 1 & 0 & 0 & A_{32} \\ 0 & 0 & 0 & 1 & A_{33} \\ \end{pNiceMatrix},~~ W_{\calA_3,G_3}=\begin{pNiceMatrix}[first-row,last-col,nullify-dots] A_{30} & A_{31} & A_{32} & A_{33} & \\ 0 & 0 & 2 & 2 & A_{30} \\ 0 & 0 & 1 & 0 & A_{31} \\ 2 & 2 & 0 & 0 & A_{32} \\ 0 & 0 & 0 & 1 & A_{33} \\ \end{pNiceMatrix}, \] it follows by a straightforward calculation that \[\rho(M_{\calA_1,G_1})=1,~~\rho(W_{\calA_1,G_1})=\sqrt{2},~~\|W_{\calA_1,G_1}\|_\infty=2,\] \[\rho(M_{\calA_2,G_2})=1,~~\rho(W_{\calA_2,G_2})=\sqrt[4]{2},~~\|W_{\calA_2,G_2}\|_\infty=2,\] \[\rho(M_{\calA_3,G_3})=\sqrt{2},~~\rho(W_{\calA_3,G_3})=\sqrt{6},~~\|W_{\calA_3,G_3}\|_\infty=2.\] It then follows from Theorem~\ref{thm:upper-lower-bounds-partition} that $h_{inv}(\calA_1,G_1)=\frac{1}{2}$ and $h_{inv}(\calA_2,G_2)=\frac{1}{4}$. Noting that $\overline{w}^*(\calA_3,G_3)=\overline{w}(c)$, where $c=A_{30}A_{32}$, we have by Theorem~\ref{thm:invariant-partition-log-weight} $h_{inv}(\calA_3,G_3)=1$. It is not difficult to check that $(A_2,G_2)$ is the atom refinement, and therefore Theorem~\ref{thm:IFE-equi-atom-partitions} asserts that $h_{inv}^{fb}(Q)=\frac14$. \end{proof} \section*{Acknowledgements} The project was supported by National Nature Science Funds of China (No. 
12171492). \begin{thebibliography}{10} \bibitem{Kolmogorov1958A} A.~N. Kolmogorov, ``A new metric invariant of transient dynamical systems and automorphisms in lebesgue spaces,'' {\em Dokl. Akad. Nauk SSSR}, vol.~951, no.~5, pp.~861--864, 1958. \bibitem{Sina1959On} Y.~Sina\u{\i}, ``On the concept of entropy for a dynamic system,'' {\em Dokl. Akad. Nauk SSSR}, pp.~768--771, 1959. \bibitem{Adler1965} R.~L. Adler, A.~G. Konheim, and M.~H. McAndrew, ``Topological entropy,'' {\em Transactions of the American Mathematical Society}, vol.~114, no.~2, pp.~309--319, 1965. \bibitem{Dinaburg1970} E.~I. Dinaburg, ``A correlation between topological entropy and metric entropy,'' {\em Dokl.akad.nauk Sssr}, vol.~190, pp.~19--22, 1970. \bibitem{Bowen1971} R.~Bowen, ``Entropy for group endomorphisms and homogeneous spaces,'' {\em Transactions of the American Mathematical Society}, vol.~153, pp.~401--414, 1971. \bibitem{shannon1948mathematical} C.~E. Shannon, ``A mathematical theory of communication,'' {\em The Bell system technical journal}, vol.~27, no.~3, pp.~379--423, 1948. \bibitem{Downarowicz2011Entropy} T.~Downarowicz, {\em Entropy in dynamical systems}, vol.~18. \newblock Cambridge University Press, 2011. \bibitem{Young2003Entropy} L.-S. Young, ``Entropy in dynamical systems,'' {\em Entropy}, vol.~313, p.~114, 2003. \bibitem{Katok2007Fifty} A.~Katok, ``Fifty years of entropy in dynamics: 1958--2007,'' {\em Journal of Modern Dynamics}, vol.~1, no.~4, p.~545, 2007. \bibitem{Delchamps1990Stabilizing} D.~F. Delchamps, ``Stabilizing a linear system with quantized state feedback,'' {\em IEEE Transactions on Automatic Control}, vol.~35, no.~8, pp.~916--924, 1990. \bibitem{Wong-Brockett1999} W.~S. Wong and R.~W. Brockett, ``Systems with finite communication bandwidth constraints. ii. stabilization with limited information feedback,'' {\em IEEE Transactions on Automatic Control}, vol.~44, pp.~1049--1053, May 1999. \bibitem{Nair2004Topological} G.~N. Nair, R.~J. Evans, I.~M.~Y. Mareels, and W.~Moran, ``Topological feedback entropy and nonlinear stabilization,'' {\em IEEE Transactions on Automatic Control}, vol.~49, no.~9, pp.~1585--1597, 2004. \bibitem{Colonius-Kawan2009} F.~Colonius and C.~Kawan, ``Invariance entropy for control systems,'' {\em SIAM Journal on Control and Optimization}, vol.~48, no.~3, pp.~1701--1721, 2009. \bibitem{Colonius2013A} F.~Colonius, C.~Kawan, and G.~Nair, ``A note on topological feedback entropy and invariance entropy,'' {\em Systems \verb'&' Control Letters}, vol.~62, no.~5, pp.~377--381, 2013. \bibitem{Colonius2018PressureA} F.~Colonius, A.~J. Santana, and J.~A.~N. Cossich, ``Invariance pressure for control systems,'' {\em Journal of Dynamics and Differential Equations}, vol.~31, no.~1, pp.~1--23, 2019. \bibitem{Colonius2018PressureB} F.~Colonius, J.~A.~N. Cossich, and A.~J. Santana, ``Invariance pressure of control sets,'' {\em SIAM Journal on Control and Optimization}, vol.~56, no.~6, pp.~4130--4147, 2018. \bibitem{Colonius2019Bounds} F.~Colonius, J.~A.~N. Cossich, and A.~J. Santana, ``Bounds for invariance pressure,'' {\em Journal of Differential Equations}, vol.~268, no.~12, pp.~7877 -- 7896, 2020. \bibitem{Zhong2020Variational} X.~Zhong, ``Variational principles of invariance pressures on partitions,'' {\em Discrete \verb'&' Continuous Dynamical Systems-A}, vol.~40, no.~1, pp.~491--508, 2020. 
\bibitem{Zhong2018Invariance} X.~Zhong and Y.~Huang, ``Invariance pressure dimensions for control systems,'' {\em Journal of Dynamics and Differential Equations}, vol.~31, no.~4, pp.~2205--2222, 2019. \bibitem{Colonius2021Controllability} F.~Colonius, J.~a. A.~N. Cossich, and A.~J. Santana, ``Controllability properties and invariance pressure for linear discrete-time systems,'' {\em Journal of Dynamics and Differential Equations}, vol.~34, pp.~5--28, 2022. \bibitem{Colonius2018Invariance} F.~Colonius, ``Invariance entropy, quasi-stationary measures and control sets,'' {\em Discrete \verb'&' Continuous Dynamical Systems - A}, vol.~38, no.~4, pp.~2093--2123, 2018. \bibitem{Colonius2018Metric} F.~Colonius, ``Metric invariance entropy and conditionally invariant measures,'' {\em Ergodic Theory and Dynamical Systems}, vol.~38, no.~3, pp.~921--939, 2018. \bibitem{Wang2019MeasureInvariance} T.~Wang, Y.~Huang, and H.~Sun, ``Measure-theoretic invariance entropy for control systems,'' {\em SIAM Journal on Control and Optimization}, vol.~57, no.~1, pp.~310--333, 2019. \bibitem{Wang-Huang2022Inverse} T.~Wang and Y.~Huang, ``Inverse variational principles for control systems,'' {\em Nonlinearity}, vol.~35, pp.~1610--1633, feb 2022. \bibitem{Huang2018Caratheodory} Y.~Huang and X.~Zhong, ``Carath\'{e}odory--{P}esin structures associated with control systems,'' {\em Systems \verb'&' Control Letters}, vol.~112, pp.~36--41, 2018. \bibitem{Wang2019dichotomy} T.~Wang, Y.~Huang, and Z.~Chen, ``Dichotomy theorem for control sets,'' {\em Systems \verb'&' Control Letters}, vol.~129, pp.~10--16, 2019. \bibitem{Zhong2020Equi} X.~Zhong, Z.~Chen, and Y.~Huang, ``Equi-invariability, bounded invariance complexity and {L}-stability for control systems,'' {\em SCIENCE CHINA Mathematics}, vol.~64, no.~10, pp.~2275--2294, 2021. \bibitem{Zhong2022Equi} X.~Zhong and Z.~Chen, ``Equi-invariability and bounded invariance complexity for control systems,'' {\em Journal of Dynamics and Differential Equations}, 2022. \bibitem{Kawan-Yuksel2019Stabilization-entropy} C.~Kawan and S.~Y\"{u}ksel, ``Invariance properties of controlled stochastic nonlinear systems under information constraints,'' {\em IEEE Transactions on Automatic Control}, vol.~66, no.~10, pp.~4514--4529, 2021. \bibitem{Kawan2013} C.~Kawan, ``Invariance entropy for deterministic control systems,'' {\em Lecture Notes in Mathematics}, vol.~2089, 2013. \bibitem{savkin2006analysis} A.~V. Savkin, ``Analysis and synthesis of networked control systems: Topological entropy, observability, robustness and optimal control,'' {\em Automatica}, vol.~42, no.~1, pp.~51--62, 2006. \bibitem{Liberzon2018Entropy} S.~Mitra and D.~Liberzon, ``Entropy and minimal data rates for state estimation and model detection,'' {\em IEEE Transactions on Automatic Control}, vol.~63, no.~10, pp.~3330--3344, 2018. \bibitem{Kawan-Matveev-Pogromsky2021Remote} C.~Kawan, A.~S. Matveev, and A.~Y. Pogromsky, ``Remote state estimation problem: Towards the data-rate limit along the avenue of the second lyapunov method,'' {\em Automatica}, vol.~125, p.~109467, 2021. \bibitem{Rungger2017Invariance} M.~Rungger and M.~Zamani, ``Invariance feedback entropy of nondeterministic control systems,'' in {\em Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control}, pp.~91--100, 2017. \bibitem{Tomar2017invariance} M.~S. 
Tomar, M.~Rungger, and M.~Zamani, ``Invariance feedback entropy of uncertain control systems,'' {\em IEEE Transactions on Automatic Control}, vol.~66, no.~12, pp.~5680--5695, 2021. \bibitem{Tomar2020Compositional} M.~S. Tomar and M.~Zamani, ``Compositional quantification of invariance feedback entropy for networks of uncertain control systems,'' {\em IEEE Control Systems Letters}, vol.~4, no.~4, pp.~827--832, 2020. \bibitem{Tomar2020Numerical} M.~S. Tomar, C.~Kawan, and M.~Zamani, ``Numerical over-approximation of invariance entropy via finite abstractions,'' {\em ArXiv}, vol.~2011.02916v3, 2020. \bibitem{Walters1982} P.~Walters, {\em An introduction to ergodic theory. [Graduate texts in mathematics, Vol. 79]}. \newblock New York: Springer-Verlag, 1982. \bibitem{Horn-Johnson2013} R.~A. Horn and C.~R. Johnson, {\em Matrix Analysis}. \newblock Cambridge: Cambridge University Press, second~ed., 2013. \bibitem{PK2000} P.~Kurka, {\em Topological and symbolic dynamics}. \newblock Paris: Soci\'{e}t\'{e} math\'{e}matique de France, 2003. \end{thebibliography} \end{document}
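(Editorial addition, appended after the paper for reference; not part of the original text.) The following minimal Python sketch, assuming NumPy is available, reproduces the spectral-radius computations used in the final example above: since $\rho(M_{\mathcal{A}_1,G_1})=1$, the upper/lower bound theorem for quasi-invariant-partitions gives $h_{inv}(\mathcal{A}_1,G_1)=\log\rho(W_{\mathcal{A}_1,G_1})=\tfrac12$, and similarly $h_{inv}(\mathcal{A}_2,G_2)=\log\rho(W_{\mathcal{A}_2,G_2})=\tfrac14$ (logarithms to base $2$, consistent with the values reported in the examples).
\begin{verbatim}
# Editorial sketch (assumes numpy): entropy values of the final example via
# spectral radii of the adjacency matrix M and weighted adjacency matrix W.
import numpy as np

def log2_spectral_radius(mat):
    # log (base 2) of the largest eigenvalue modulus of mat
    eigenvalues = np.linalg.eigvals(np.array(mat, dtype=float))
    return float(np.log2(max(abs(eigenvalues))))

M1 = [[0, 1, 1], [1, 0, 0], [0, 0, 1]]   # M for (A_1, G_1)
W1 = [[0, 2, 2], [1, 0, 0], [0, 0, 1]]   # W for (A_1, G_1)
W2 = [[0, 0, 2, 0, 2],                   # W for (A_2, G_2)
      [0, 0, 0, 1, 0],
      [0, 1, 0, 0, 0],
      [1, 0, 0, 0, 0],
      [0, 0, 0, 0, 1]]

print(log2_spectral_radius(M1))  # 0.0  (rho = 1)
print(log2_spectral_radius(W1))  # 0.5  = h_inv(A_1, G_1)
print(log2_spectral_radius(W2))  # 0.25 = h_inv(A_2, G_2)
\end{verbatim}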
2205.05394v1
http://arxiv.org/abs/2205.05394v1
Stability of intersecting families
\documentclass[12pt]{article} \renewcommand{\baselinestretch}{1.0} \topmargin = 0 in \oddsidemargin = 0.25 in \setlength{\textheight}{8.6 in} \setlength{\textwidth}{6 in} \setlength{\topmargin}{-0.8cm} \setlength{\unitlength}{1.0 mm} \usepackage{wrapfig} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{soul} \usepackage{color} \usepackage{amssymb} \usepackage{graphicx} \usepackage{enumerate} \usepackage{amsthm,amscd} \usepackage[all]{xy} \usepackage{ulem} \usepackage{comment} \usepackage{anyfontsize} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\cp{\,\square\,} \def\lg{\rm \langle\,} \def\rg{\rm \,\rangle} \def\sm{\rm \setminus} \def\sbs{\rm \subseteq\,} \def\sps{\rm \supseteq\,} \def\o{\overline} \def\Ga{{\rm \Gamma}} \def\N{{\rm \mathbb{N}}} \def\Z{{\rm \mathbb{Z}}} \def\B{{\rm \mathbb{B}}} \def\D{{\rm \mathbb{D}}} \def\P{{\rm \mathbb{P}}} \def\L{{\rm \mathcal{L}}} \def\Aut{{\rm \textsf{Aut}}} \def\SS{{\rm S}} \def\IG{{\rm \textsf{IG}}} \def\I{{\rm \textsf{IG}}} \def\mod{{\rm mod\,}} \def\ker{{\rm Ker\,}} \def\tr{{\rm tr}} \def\E{{\rm \mathcal {E}}} \def\rr{\rm \rightarrow} \allowdisplaybreaks \usepackage{hyperref} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{quest}[theorem]{Question} \newtheorem{example}[theorem]{Example} \newtheorem{clm}[theorem]{Claim} \newtheorem{fact}[theorem]{Fact} \newtheorem{remark}[theorem]{Remark} \newtheorem{construction}[theorem]{Construction} \newcommand{\pp}{{\it p.}} \newcommand{\de}{\em} \linespread{1} \begin{document} \title{Stability of intersecting families\thanks{This work is supported by NSFC (Grant No. 11931002). E-mail addresses: [email protected] (Yang Huang), [email protected] (Yuejian Peng, corresponding author).}} \author{Yang Huang, Yuejian Peng$^{\dag}$ \\[2ex] {\small School of Mathematics, Hunan University} \\ {\small Changsha, Hunan, 410082, P.R. China } } \maketitle \vspace{-0.5cm} \begin{abstract} The celebrated Erd\H{o}s--Ko--Rado theorem \cite{EKR1961} states that the maximum intersecting $k$-uniform family on $[n]$ is a full star if $n\ge 2k+1$. Furthermore, Hilton-Milner \cite{HM1967} showed that if an intersecting $k$-uniform family on $[n]$ is not a subfamily of a full star, then its maximum size achieves only on a family isomorphic to $HM(n,k):= \Bigl\{G\in {[n] \choose k}: 1\in G, G\cap [2,k+1] \neq \emptyset \Bigr\} \cup \Bigl\{ [2,k+1] \Bigr\} $ if $n>2k$ and $k\ge 4$, and there is one more possibility in the case of $k=3$. Han and Kohayakawa \cite{HK2017} determined the maximum intersecting $k$-uniform family on $[n]$ which is neither a subfamily of a full star nor a subfamily of the extremal family in Hilton-Milner theorm, and they asked what is the next maximum intersecting $k$-uniform family on $[n]$. Kostochka and Mubayi \cite{KM2016} gave the answer for large enough $n$. In this paper, we are going to get rid of the requirement that $n$ is large enough in the result by Kostochka and Mubayi \cite{KM2016} and answer the question of Han and Kohayakawa \cite{HK2017}. \end{abstract} {{\bf Key words:} Intersecting families; Extremal finite sets; Shifting method. 
} {{\bf 2010 Mathematics Subject Classification.} 05D05, 05C65, 05D15.} \section{Introduction} For a positive integer $n$, let $[n]=\{1, 2, \dots, n\}$ and $2^{[n]}$ be the family of all subsets of $[n]$. An $i$-element subset $A\subseteq [n]$ is called an $i$-set. For $0\le k \le n$, let ${[n] \choose k}$ denote the collection of all $k$-sets of $[n]$. A family $\mathcal{F} \subseteq {[n] \choose k}$ is called {\it $k$-uniform}. For a family $\mathcal{F}\subseteq 2^{[n]}$, we say $\mathcal{F}$ is {\it intersecting} if for any two distinct sets $F$ and $F'$ in $\mathcal{F}$ we have $ | F\cap F' |\ge 1$. In this paper, we always consider a $k$-uniform intersecting family on $[n]$. The following celebrated theorem of Erd\H{o}s--Ko--Rado determines the maximum intersecting family. For $x\in [n]$, denote by $\mathcal{F}_x:=\{F\in {[n] \choose k}: x\in F\}$ the {\it full star} centered at $x$. We say $\mathcal{F}$ is {\it EKR} if $\mathcal{F}$ is contained in a full star. \begin{theorem}[Erd\H{o}s--Ko--Rado \cite{EKR1961}]\label{thm1.1} Let $n\ge 2k$ be an integer and $\mathcal{F}$ be a $k$-uniform intersecting family of subsets of $[n]$. Then $$ |\mathcal{F}| \le {n-1 \choose k-1}.$$ Moreover, when $n>2k$, equality holds if and only if $\mathcal{F}$ is a full star. \end{theorem} The theorem of Hilton--Milner determines the maximum size of non-EKR families. \begin{theorem}[Hilton--Milner \cite{HM1967}] \label{thm1.2} Let $k\ge 2$ and $n\ge 2k$ be integers and $\mathcal{F}\subseteq {[n] \choose k}$ be an intersecting family. If $\mathcal{F}$ is not EKR, then $$ |\mathcal{F}| \le {n-1 \choose k-1} -{n-k-1 \choose k-1} +1.$$ Moreover, for $n>2k$ and $k\ge 4$, equality holds if and only if $\mathcal{F}$ is isomorphic to \[ \begin{matrix} HM(n,k):= \Bigl\{G\in {[n] \choose k}: 1\in G, G\cap [2,k+1] \neq \emptyset \Bigr\} \cup \Bigl\{ [2,k+1] \Bigr\}. \end{matrix} \] For the case $k=3$, there is one more possibility, namely \[ \begin{matrix} \mathcal{T}(n,3):=\left\{F\in {[n] \choose 3}: |F\cap [3]|\ge 2 \right\}. \end{matrix} \] \end{theorem} We say a family $\mathcal{F}$ is {\it HM} if it is isomorphic to a subfamily of HM($n, k$). We say that 1 is the {\it center} of HM($n, k$). Let $E\subseteq [n]$ be an $i$-set and $x\in [n]$. We define \[ \mathcal{G}_i:= \left\{ G\in {[n]\choose k}:E\subseteq G\right\} \cup \left\{ G\in {[n]\choose k}: x\in G \,\, \text{and} \,\, G\cap E\neq \emptyset \right\}. \] We call $x$ the {\it center}, and $E$ the {\it core} of $\mathcal{G}_i$ for $i\ge 3$. With a slight tweak, we call $\{x\}\cup E$ the {\it core} of $\mathcal{G}_2$. Note that $\mathcal{G}_k=HM(n, k)$. For a $(k-1)$-set $E$, a point $x\in [n]\setminus E$, and an $i$-set $J \subset [n] \setminus (E\cup\{x\})$, we denote \begin{align*} \mathcal {J}_i:=\left\{G\in {[n]\choose k}: E \subseteq G \,\, \text{and} \,\, G \cap J \neq \emptyset\right\} \cup \left\{G\in {[n]\choose k}:J\cup\{x\}\subseteq G\right\} \\ \cup \left\{G\in {[n]\choose k}:x\in G, G\cap E \neq \emptyset\right\}. \end{align*} We call $x$ the {\it center}, $E$ the {\it kernel}, and $J$ the {\it set of pages}. For two $k$-sets $E_1$ and $E_2 \subseteq [n] $ with $|E_1\cap E_2|=k-2$, and $x\in [n]\setminus (E_1\cup E_2)$, we define $$\mathcal{K}_2:=\{ G\in {[n]\choose k}: x\in G, G\cap E_1\ne \emptyset \,\, \text{and} \,\, G\cap E_2\ne \emptyset \} \cup \{E_1, E_2\},$$ and call $x$ the {\it center} of $\mathcal{K}_2$. In \cite{HK2017}, Han and Kohayakawa obtained the size of a maximum non-EKR, non-HM intersecting family.
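Before stating their result, we record a small illustrative sketch (an editorial addition, not part of the original argument; plain Python, using only \texttt{itertools} and \texttt{math.comb}) that builds $HM(n,k)$ for small parameters and confirms that it is intersecting and attains the bound of Theorem \ref{thm1.2}.
\begin{verbatim}
# Illustration only: HM(n, k) is intersecting and has size
# C(n-1, k-1) - C(n-k-1, k-1) + 1, matching the Hilton--Milner bound.
from itertools import combinations
from math import comb

def hilton_milner(n, k):
    E = frozenset(range(2, k + 2))               # the k-set [2, k+1]
    fam = {frozenset(G) for G in combinations(range(1, n + 1), k)
           if 1 in G and E & set(G)}             # k-sets through 1 meeting E
    fam.add(E)                                   # plus the set [2, k+1] itself
    return fam

n, k = 11, 4
fam = hilton_milner(n, k)
assert all(A & B for A in fam for B in fam)      # pairwise intersecting
assert len(fam) == comb(n - 1, k - 1) - comb(n - k - 1, k - 1) + 1
\end{verbatim}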
\begin{theorem}[Han--Kohayakawa \cite{HK2017}]\label{thm1.4} Suppose $k\ge 3$ and $n\ge 2k+1$ and let $\mathcal{H}$ be an intersecting $k$-uniform family on $[n]$. Furthermore, assume that $\mathcal{H}$ is neither EKR nor HM, if $k=3$, $\mathcal{H} \not\subseteq \mathcal{G}_2 $. Then $$|\mathcal{H}|\le {n-1 \choose k-1} -{n-k-1 \choose k-1} -{n-k-2 \choose k-2}+2.$$ For $ k=4$, equality holds if and only if $\mathcal{H}= \mathcal{J}_{2},\, \mathcal{G}_2$ or $ \mathcal{G}_3$. For every other $k$, equality holds if and only if $\mathcal{H}= \mathcal{J}_2 $. \end{theorem} Han and Kohayakawa \cite{HK2017} proposed the following question. \begin{question}\label{quest1} Let $n\ge 2k+1$. What is the maximum size of an intersecting family $\mathcal{H}$ that is neither EKR nor HM, and $\mathcal{H} \not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\mathcal{H} \not\subseteq \mathcal{G}_2$ and $\mathcal{H} \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$? \end{question} Regarding this question, Kostochka and Mubayi \cite{KM2016} showed that the answer is $|\mathcal{J}_3|$ for sufficiently large $n$. In fact they proved that the maximum size of an intersecting family that is neither EKR, nor HM, nor contained in $\mathcal{J}_i$ for each $i$, $2\le i\le k-1$ (nor in $\mathcal{G}_2,\mathcal{G}_3$ for $k=4$) is $|\mathcal{K}_2|$ for all large enough $n$. In paper \cite{KM2016}, they also established the structure of almost all intersecting 3-uniform families. Sometimes, it is relatively easier to get extremal families under the assumption that $n$ is large enough. For example, Erd\H{o}s matching conjecture \cite{MC1965} states that for a $k$-uniform family $\mathcal{F}$ on finite set $[n]$, $|\mathcal{F}|\le \max\{{k(s+1)-1\choose k}, {n\choose k}-{n-s\choose k}\}$ if there is no $s+1$ pairwise disjoint members of $\mathcal{F}$ and $n\ge(s+1)k$, and it was proved to be true for large enough $n$ in \cite{MC1965}. There has been a lot of recent studies for small n (see \cite{Fra2013, FraX, HLS2012, LM2014}). However, the conjecture is not completely verified for small $n$. Up to now, the best condition on $n$ was given by Frankl in \cite{Fra2013, FT2018} that $n\ge k(2s+1)-s$, for $(s+1)k\le n\le k(2s+1)-s-1$. As mentioned by Han and Kohayakawa in \cite{HK2017}, for $k\ge 4$, the bound in Theorem \ref{thm1.4} can be deduced from Theorem 3 in \cite{HM1967} which was established by Hilton and Milner in 1967. However, family $\mathcal{H}$ in Question \ref{quest1} does not satisfy the hypothesis of Theorem 3 in \cite{HM1967} for $k\ge 4$. This makes Question \ref{quest1} more interesting. In this paper, we answer Question \ref{quest1}. We are going to get rid of the requirement that $n$ is large enough in the result by Kostochka and Mubayi \cite{KM2016}. As in the proofs of Theorem \ref{thm1.1}, Theorem \ref{thm1.2} and Theorem \ref{thm1.4}, we will apply the shifting method. The main difficulty in our proof is to guarantee that we can get a {\it stable} family which is not EKR, not HM, $\not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\not\subseteq \mathcal{G}_2, \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$ after performing a series of shifts to a family which is not EKR, not HM, $\not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\not\subseteq \mathcal{G}_2, \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$. Our main result is as follows. \begin{theorem}\label{main} Let $k\ge 4$ and $\mathcal{H} \subseteq {[n]\choose k}$ be an intersecting family which is neither EKR nor HM. 
Furthermore, $\mathcal{H} \not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\mathcal{H} \not\subseteq \mathcal{G}_2$ and $\mathcal{H} \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$. \\ (i) If $2k+1\le n\le 3k-3$, then $$|\mathcal{H}| \le {n-1 \choose k-1} -2{n-k-1 \choose k-1} +{n-k-3 \choose k-1}+2,$$ Moreover, the equality holds only for $\mathcal{H}= \mathcal{K}_2 $ if $k\ge 5$, and $\mathcal{H}= \mathcal{K}_2 $ or $\mathcal{J}_3$ if $k=4$.\\ (ii) If $n\ge 3k-2$, then $$|\mathcal{H}|\le {n-1 \choose k-1} -{n-k-1 \choose k-1} -{n-k-2 \choose k-2} -{n-k-3 \choose k-3}+3.$$ Moreover, for $k=5$, the equality holds only for $\mathcal{H}= \mathcal{J}_{3}$ or $\mathcal{G}_4$. For every other $k$, equality holds only for $\mathcal{H}= \mathcal{J}_3 $. \end{theorem} In Section \ref{sec2}, we will give the proof of Theorem \ref{main}. The proofs of some crucial lemmas for the proof of Theorem \ref{main} are given in Section \ref{sec3}. \section{Proof of Theorem \ref{main}}\label{sec2} In this section, we always assume that $\mathcal{H}$ is a maximum intersecting family which satisfies the conditions of Theorem \ref{main}, that is, $\mathcal{H}$ is not EKR, not HM, $\mathcal{H} \not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\mathcal{H} \not\subseteq \mathcal{G}_2, \mathcal{H} \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$. By direct calculation, we have the following fact. \begin{fact}\label{fact0} $ (i) $ Suppose that there is $x\in [n]$ such that there are only 2 sets, say, $E_1\,\, \text{and} \,\, E_2 \in \mathcal{H}$ missing $x$. If $|E_1 \cap E_2|=k-i \,\, \text{and} \,\, i\ge 2$, then \begin{align}\label{eq1} |\mathcal{H}| &\le {n-1 \choose k-1} -2{n-k-1 \choose k-1} +{n-k-i-1 \choose k-1}+2 \nonumber\\ &\le {n-1 \choose k-1} -2{n-k-1 \choose k-1} +{n-k-3 \choose k-1}+2. \end{align} The equality in (\ref{eq1}) holds if and only if $|E_1 \cap E_2|=k-2$, that is $\mathcal{H}=\mathcal{K}_2$.\\ $ (ii) $ By the definiton of $\mathcal{J}_i$, we have \begin{equation}\label{eq2} |\mathcal{J}_3|={n-1 \choose k-1} -{n-k-1 \choose k-1} -{n-k-2 \choose k-2} -{n-k-3 \choose k-3}+3. \end{equation} $ (iii) $ Comparing the right hand sides of $($\ref{eq1}$)$ and $($\ref{eq2}$)$, we can see that if $2k+1\le n \le 3k-3$, then $|\mathcal{K}_2|\ge |\mathcal{J}_3|$, the equality holds only for $k=4$; and if $n \ge 3k-2$, then $|\mathcal{K}_2|< |\mathcal{J}_3|$. \end{fact} By Fact \ref{fact0}, we may assume that for any $x$, at least 3 sets in $\mathcal{H}$ do not contain $x$. To show Theorem \ref{main}, it is sufficient to show the following result. \begin{theorem}\label{thm1.5} Let $k\ge 4, n\ge 2k+1$ and $\mathcal{H} \subseteq {[n]\choose k}$ be an intersecting family which is not EKR, not HM and $\mathcal{H} \not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\mathcal{H} \not\subseteq \mathcal{G}_2, \mathcal{H} \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$. Moreover, for any $x\in [n]$, there are at least 3 sets in $\mathcal{H}$ not containing $x$. Then $$|\mathcal{H}|\le {n-1 \choose k-1} -{n-k-1 \choose k-1} -{n-k-2 \choose k-2} -{n-k-3 \choose k-3}+3.$$ Moreover if $k\ne 5$, the equality holds only for $\mathcal{H}=\mathcal{J}_3$; if $k=5$, the equality holds for $\mathcal{H}=\mathcal{J}_3$ or $\mathcal{G}_4$. 
\end{theorem} From now on, we always assume that $\mathcal{H}$ is a maximum intersecting family which satisfies the conditions of Theorem \ref{thm1.5}, that is $\mathcal{H}$ is not EKR, not HM, $\mathcal{H} \not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\mathcal{H} \not\subseteq \mathcal{G}_2, \mathcal{H} \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$ and for any $x\in [n]$, there are at least 3 sets in $\mathcal{H}$ not containing $x$. We first give some definition related to the shifting method. For $x$ and $y \in [n], x<y$, and $F\in \mathcal{F}$, we call the following operation a {\it shift}\,: $$ S_{xy}(F)= \begin{cases} (F\setminus \{y\})\cup \{x\}, & \text{if} \ x \not \in F, y\in F\, \text{and} \,(F\setminus \{y\})\cup \{x\} \not \in \mathcal{F};\\ F,& \text{otherwise}. \end{cases} $$ We say that $F$ is {\it stable} under the shift $S_{xy}$ if $S_{xy}(F)=F$. If $z\in F$ and $z\in S_{xy}(F)$ still, we say that $F$ is {\it stable at $z$} after the shift $S_{xy}$. For a family $\mathcal{F}$, we define $$S_{xy}(\mathcal{F})=\{S_{xy}(F): F\in \mathcal{F}\}.$$ Clearly, $\vert S_{xy}(\mathcal{F})\vert = \vert\mathcal{F}\vert$. We say that $\mathcal{F}$ is {\it stable} if $S_{xy}(\mathcal{F})= \mathcal{F}$ for all $x, y \in [n]$ with $x<y$. An important property shown in \cite{Fra1987shift} is that if $\mathcal{F}$ is intersecting, then $S_{xy}(\mathcal{F})$ is still intersecting. Let us rewrite is as a remark. \begin{remark} \cite{Fra1987shift}\label{shiftpro} If $\mathcal{F}$ is a maximum intersecting family, then $S_{xy}(\mathcal{F})$ is still a maximum intersecting family. \end{remark} This property guarantees that performing shifts repeatedly to a maximum intersecting family will yield a stable maximum intersecting family. The main difficulty we need to overcome is to guarantee that we can get a stable maximum intersecting family with further properties: not EKR, not HM, $\not\subseteq \mathcal{J}_2$ $(\!\!$ in addition $\not\subseteq \mathcal{G}_2, \not\subseteq \mathcal{G}_3$ if $k=4$ $\!\!)$. The following facts and lemmas are for this purpose. \begin{fact} \label{fact 2} The following properties hold. \\ (i) If $S_{xy}(\mathcal{H})$ is EKR $(\!\!$ or HM $\!)$, then $x$ must be the center.\\ (ii) If $S_{xy}(\mathcal{H})\subseteq\mathcal{G}_2$, then the core is $\{ x, x_1, x_2 \}$ for some $x_1, x_2 \in [n] \setminus \{x, y\}$.\\ (iii) If $S_{xy}(\mathcal{H})\subseteq\mathcal{J}_2$, then $x$ is the center.\\ (iv) If $S_{xy}(\mathcal{H})\subseteq\mathcal{G}_3$, then $x$ is the center or $x$ is in the core. \end{fact} \begin{proof} For (i) and (ii), Han and Kohaykawa proved them in \cite{HK2017}. We prove (iii) and (iv) only. For (iii), suppose that $S_{xy}(\mathcal{H})\subseteq\mathcal{J}_2$ at center $z \in [n] \setminus \{x\}$. Since $\mathcal{H}\not\subseteq\mathcal{J}_2$ at $z$, there are at least three sets $E_1, E_2$ and $E_3$ in $\mathcal{H}$ missing $z$, after doing the shift $S_{xy}$, these 3 sets still miss $z$, so $S_{xy} (\mathcal{H})$ is not contained in $\mathcal{J}_2$ center at $z$. For (iv), let $S_{xy}(\mathcal{H})\subseteq\mathcal{G}_3$ at center $x_0$ and core $E=\{x_1, x_2, x_3\}$, and let $B=\{x_0, x_1, x_2, x_3\}$. Since $\mathcal{H}\not \subseteq \mathcal{G}_3$, there is a set $G \in \mathcal{H} $ that satisfies one of the following two cases: (a) $\{y, x_0\} \subseteq G, G \cap E=\emptyset$; (b) $y \in G, x_0 \not \in G, |G \cap E| \in \{1, 2\}$. If (a) holds, then $x\ne x_0$ and $x$ must be in the core, $y\not \in B$. 
If (b) holds, then either $x=x_0$ is the center or $x$ is in the core and $y\not \in B$. \end{proof} \begin{remark}\label{remark2.4} By Fact \ref{fact 2}, by applying shifts $S_{x'y'}$ $(x' < y')$ repeatedly to $\mathcal{H}$, we may reach a family which belongs to one of the following cases.\\ Case 1: a family $\mathcal{H}_1$ such that $S_{xy}(\mathcal{H}_1)$ is EKR with center $x$;\\ Case 2: a family $\mathcal{H}_2$ such that $S_{xy}(\mathcal{H}_2)$ is HM with center $x$;\\ Case 3: a family $\mathcal{H}_3$ such that $S_{xy}(\mathcal{H}_3)\subseteq\mathcal{J}_2$ with center $x$;\\ Case 4: a family $\mathcal{H}_4$ such that $S_{xy}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x, x_1, x_2\}$ for some $x_1, x_2 \in [n]\setminus \{x, y\}$ $($$k=4$ only$)$;\\ Case 5: a family $\mathcal{H}_5$ such that $S_{xy}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center $x$ or $x$ being in the core $($$k=4$ only$)$;\\ Case 6: a stable family $\mathcal{H}_6$ satisfying the conditions of Theorem \ref{thm1.5}, that is, we do not meet Cases 1--5 after performing all shifts. \end{remark} By Remark \ref{shiftpro}, we know that for any shift $S_{xy}$ on $[n]$ we have $|S_{xy}(\mathcal{H})|=|\mathcal{H}|$ and $S_{xy}(\mathcal{H})$ is also intersecting. We hope to get a stable family satisfying the conditions of Theorem \ref{thm1.5} after some shifts, that is, one which is neither EKR, nor HM, nor contained in $\mathcal{J}_2$ (nor in $\mathcal{G}_2$, $\mathcal{G}_3$ if $k=4$). By Fact \ref{fact0}, we can assume that a family $\mathcal{G}$ obtained by performing shifts to $\mathcal{H}$ has the property that for any $x$, at least 3 sets in $\mathcal{G}$ do not contain $x$. What we are going to do is the following: if any of Cases 1--5 occurs, we will not perform $S_{xy}$. Instead we will adjust the shifts as shown in Lemma \ref{lem2.1} to guarantee that the terminating family is a stable family satisfying the conditions of Theorem \ref{thm1.5}. We will prove the following two crucial lemmas in Section \ref{sec3}. \begin{lemma}\label{lem2.1} Let $ i \in [5] $. If we reach $\mathcal{H}_i$ in {\it Case $i$} in Remark \ref{remark2.4}, then there is a set $X_i \subseteq [n]$ with $|X_i| \le 5$ $($when $k\ge 5$, $|X_i| \le 3$ for $i\in [3]$$)$, such that after a series of shifts $S_{x'y'} \, (x'<y'\,\,\text{and}\,\, x', y' \in [n]\setminus X_i)$ to $\mathcal{H}_i$, we can reach a stable family satisfying the conditions of Theorem \ref{thm1.5}. Moreover, for any set $G$ in the final family $\mathcal{G}$, we have $G \cap X_i \ne \emptyset$. \end{lemma} From now on, let $X_i$ be the corresponding sets in Lemma \ref{lem2.1} for $1\le i \le 5$ and $X_6=\emptyset$. For $k\ge 5$ and $i\in \{1, 2, 3, 6\}$, let $Y_i$ be the set of the first $2k-|X_i|$ elements of $[n]\setminus X_i$, and for $k=4$ and $i\in \{1, 2, 3, 4, 5, 6\}$, let $Y_i$ be the set of the first $9-|X_i|$ elements of $[n]\setminus X_i$. Let $Y=Y_i\cup X_i$; then $|Y_i|\ge 2k-4$ and $|Y|=2k$ if $k\ge 5$. If $k=4$ then $|Y|=9$. Let \begin{align*} \mathcal{A}_i:&=\{G\cap Y: G\in \mathcal{G}, |G\cap Y|=i\},\\ \widetilde{\mathcal{A}_i}:&=\{G: G\in \mathcal{G}, |G\cap Y|=i\}. \end{align*} \begin{lemma} \label{lem2.2} Let $\mathcal{G}$ be the final stable family guaranteed by Lemma \ref{lem2.1} satisfying the conditions of Theorem \ref{thm1.5}, and let $X_i$ be inherited from Lemma \ref{lem2.1}.
In other words, $\mathcal{G}$ is stable; $\mathcal{G}$ is neither EKR, nor HM, nor contained in $\mathcal{J}_2$ $($nor in $\mathcal{G}_2$, $\mathcal{G}_3$ if $k=4$$)$; for any $x\in [n]$, there are at least 3 sets in $\mathcal{G}$ not containing $x$; and $G\cap X_i\ne \emptyset$ for any $G\in \mathcal{G}$. Then \\ $(i)$ $\mathcal{A}_1=\emptyset$.\\ $(ii)$ For all $G$ and $G'\in \mathcal{G}$, we have $G\cap G' \cap Y\ne \emptyset$, or equivalently, $\cup_{i=2}^{k}\mathcal{A}_i\cup \mathcal{G}$ is intersecting. \end{lemma} \subsection{Quantitative Part of Theorem \ref{thm1.5}} \begin{lemma}\label{lem2.3} For $k=4$, we have $|\mathcal{A}_1|=0$, $| \mathcal{A}_2 | \le 3, | \mathcal{A}_3 | \le 18$ and $|\mathcal{A}_4 | \le 50$. For $k\ge 5$, we have $$| \mathcal{A}_i | \le {2k-1 \choose i-1}-{k-1 \choose i-1}-{k-2 \choose i-2}-{k-3 \choose i-3}, \,\, 1\le i \le k-1, $$ $$| \mathcal{A}_k | \le {\frac{1}{2}}{2k \choose k}={2k-1 \choose k-1}-{k-1 \choose k-1}-{k-2 \choose k-2}-{k-3 \choose k-3}+ 3.$$ \end{lemma} \begin{proof} By Lemma \ref{lem2.2} (i), we have $|\mathcal{A}_1|=0$. First consider $k=4$. If $|\mathcal{A}_2| \ge 4$, since $\mathcal{A}_2$ is intersecting, it must be a star. Let its center be $x$. Since $\mathcal{A}_2\cup \mathcal{A}_3 \cup \mathcal{A}_4$ is intersecting, $\mathcal{A}_3$ must be a star with center $x$ and there is at most one set in $\mathcal{A}_4$ missing $x$, this implies that $\mathcal{G}$ is EKR or HM, which contradicts the fact that $\mathcal{G}$ is neither EKR nor HM. Suppose that $ | \mathcal{A}_3 | \ge 19$. By Theorem \ref{thm1.4}, $\mathcal{A}_3$ must be EKR, HM or $\mathcal{G}_2$. If $\mathcal{A}_3$ is EKR with center $x$, then since $\mathcal{G}$ is not EKR and $\mathcal{A}_1=\emptyset$, there must exist $G\in \mathcal{G}$, such that either $x\not\in G$ and $G\cap Y\in \mathcal{A}_2$, or $x\not\in G$ and $G\cap Y\in \mathcal{A}_4$. If the former holds, by the intersecting property of $\mathcal{A}_2\cup \mathcal{A}_3$, every set in $\mathcal{A}_3$ must contain at least one of the elements in $G\cap Y$, so $ | \mathcal{A}_3 | \le 13$, a contradiction. Otherwise, the latter holds and $\mathcal{A}_2$ is a star with center $x$, and all sets of $\mathcal{G}$ missing $x$ lie in $Y$ completely. Recall that the number of these sets is at leat 3, say $x\not\in G_1, G_2, G_3\in \mathcal{G}$. Since $\mathcal{G}$ is not $\mathcal{G}_3$, it's impossible that $G_1, G_2, G_3$ form a 3-star (each member contains a fixed 3-set). If any two sets in $G_1, G_2, G_3$ intersect at $3$ vertices, then $G_1, G_2, G_3$ must be a 2-star. Since $\mathcal{A}_3\cup \mathcal{A}_4$ is intersecting, calculating directly the number of triples of $Y$ containing $x$ and intersecting with $G_1, G_2$ and $G_3$, we have $ | \mathcal{A}_3 | \le 16$, a contradiction. Otherwise, there are two members, w.l.o.g., say, $G_1, G_2$, such that $|G_1\cap G_2|=2$. Since $\mathcal{A}_3\cup \mathcal{A}_4$ is intersecting, calculating directly the number of triples of $Y$ containing $x$ and intersecting with $G_1$ and $G_2$, we have $ | \mathcal{A}_3 | \le 17$, also a contradiction. If $\mathcal{A}_3$ is HM with center $x$, let $\{z_1, z_2, z_3\}\in \mathcal{A}_3$. By Theorem \ref{thm1.2}, we have $ | \mathcal{A}_3 | \le 19$, so we may assume $ | \mathcal{A}_3 | = 19$ and $\mathcal{A}_3$ is isomorphic to $HM(9, 3)$. Suppose that there is a set $G$ such that $x\not\in G, G\cap Y\in \mathcal{A}_2$, w.l.o.g., assume $z_1\not\in G$. 
Since $|Y\setminus(\{x, z_1, z_2, z_3\}\cup G)|\ge 3$, there is $a\in Y\setminus(\{x, z_1, z_2, z_3\}\cup G)$ such that $\{x, z_1, a\}\cap G=\emptyset$. By the intersecting property of $\mathcal{A}_3\cup \mathcal{A}_4$, we have $\{x, z_1, a\}\not\in \mathcal{A}_3$, so $ | \mathcal{A}_3 | < 19$, a contradiction. Now we may assume that $\mathcal{A}_2$ is a star with center $x$. Since $\mathcal{G}$ is neither HM nor contained in $\mathcal{G}_3$, there must be a 4-set $G$ in $\mathcal{A}_4$ such that either $x\not\in G$ and $1\le |G\cap \{z_1, z_2, z_3\}|\le 2$, w.l.o.g., assume $z_1\not\in G$ or $x\in G$ and $|G\cap \{z_1, z_2, z_3\}|=0$. But since $\mathcal{A}_3\cup \mathcal{A}_4$ is intersecting, the latter case will not happen. Assume the former holds. Since $|Y\setminus(\{x, z_1, z_2, z_3\}\cup G)|\ge 2$, there is $a\in Y\setminus(\{x, z_1, z_2, z_3\}\cup G)$ such that $\{x, z_1, a\}\cap G=\emptyset$. By the intersecting property of $\mathcal{A}_3\cup \mathcal{A}_4$, we have $\{x, z_1, a\}\not\in \mathcal{A}_3$, so $ | \mathcal{A}_3 | < 19$. At last, assume that $\mathcal{A}_3 \subseteq \mathcal{G}_2$ with core $\{x_1, x_2, x_3\}$. Since $ \mathcal{A}_3$ is intersecting, by calculating the number of triples in $Y$ containing at least 2 vertices in core $\{x_1, x_2, x_3\}$, we have $ | \mathcal{A}_3 | \le 19$, so we may assume that $ | \mathcal{A}_3 | = 19$. Since $\mathcal{G}\not\subseteq\mathcal{G}_2$, there exists a set $G\in\mathcal{G}$ such that $| G\cap \{x_1, x_2, x_3\} |\le 1$. w.l.o.g., let $G\cap \{x_1, x_2\}=\emptyset$. Since $ |Y\setminus (\{x_1, x_2, x_3\}\cup G)|\ge 2 $, we can pick $a\in Y\setminus (\{x_1, x_2, x_3\}\cup G)$ such that $G\cap \{x_1, x_2, a\}=\emptyset$. By the intersecting property of $\mathcal{A}_3\cup \mathcal{G}$, we have $\{x_1, x_2, a\}\not\in \mathcal{A}_3$, hence $ | \mathcal{A}_3 | \le 18$, as desired. So we have proved that $ | \mathcal{A}_3 | \le 18$ for $k=4$. Next, we prove $ | \mathcal{A}_4 | \le 50$. On the contrary, suppose that $ | \mathcal{A}_4 | \ge 51$. By Theorem \ref{thm1.4}, $\mathcal{A}_4$ must be EKR, HM, or contained in $\mathcal{J}_2$, $\mathcal{G}_2$ or $\mathcal{G}_3$. Suppose that $\mathcal{A}_4$ is EKR at $x$. Since $\mathcal{G}$ is not EKR and $\mathcal{A}_1=\emptyset$, there must exist $G\in \mathcal{G}$ such that either $x\not\in G$ and $G\cap Y\in \mathcal{A}_2$ or $x\not\in G$ and $G\cap Y\in \mathcal{A}_3$. If the former holds, since $\mathcal{A}_2\cup \mathcal{A}_4$ is intersecting, by calculating the number of 4-sets in $Y$ containing $x$ and intersecting with $G\cap Y$ directly, we have $ | \mathcal{A}_4 | \le 36$. If the latter holds, since $\mathcal{A}_3\cup \mathcal{A}_4$ is intersecting, by calculating the number of 4-sets in $Y$ containing $x$ and intersecting with $G\cap Y$ directly, we have $ | \mathcal{A}_4 | \le 46$. Suppose that $\mathcal{A}_4$ is HM at $x$. Since $\mathcal{G}$ is not HM at $x$, there exists $G\in \mathcal{G}$ such that either $x\not\in G$ and $G\cap Y\in \mathcal{A}_2$ or $x\not\in G$ and $G\cap Y\in \mathcal{A}_3$, since $\mathcal{A}_4$ is HM at $x$ and $\mathcal{A}_2\cup \mathcal{A}_4$ (or $\mathcal{A}_3\cup \mathcal{A}_4$) is intersecting, by calculating the number of 4-subsets containing $x$ and intersecting with $G\cap Y$, and adding $1$ set not containing $x$, we have $|\mathcal{A}_4|\le 37$ (or $|\mathcal{A}_4|\le 47$). Suppose that $\mathcal{A}_4\subseteq\mathcal{G}_2$ with core $\{x_1, x_2, x_3\}=A$. 
By calculating the number of 4-subsets in $Y$ containing at least 2 of $\{x_1, x_2, x_3\}$, we have $ | \mathcal{A}_4 | \le 51$, so we may assume $ | \mathcal{A}_4 | = 51$. Since $\mathcal{G}\not\subseteq\mathcal{G}_2$, there exists a set $G$ in $\mathcal{G}$ such that $|G\cap A|\le 1, G\cap Y\in \mathcal{A}_2$ or $\mathcal{A}_3$. w.l.o.g., let $G\cap \{x_1, x_2\}=\emptyset$. Since $ |Y\setminus (A\cup G)|\ge 2 $, we can pick $a, b \in Y\setminus (A\cup G)$ such that $(G\cap Y) \cap \{x_1, x_2, a, b\}=\emptyset$. By the intersecting property of $\mathcal{A}_2\cup\mathcal{A}_3\cup \mathcal{A}_4$, we have $\{x_1, x_2, a, b\}\not\in \mathcal{A}_4$. Hence $ | \mathcal{A}_4 | \le 50$, as desired. Suppose that $\mathcal{A}_4\subseteq\mathcal{G}_3$ with core $\{x_1, x_2, x_3\}$ and center $x$. By direct calculation, $ | \mathcal{A}_4 | \le 51$, so we may assume $ | \mathcal{A}_4 | = 51$ and $\mathcal{A}_4=\mathcal{G}_3$. Since $\mathcal{G}\not\subseteq\mathcal{G}_3$, there must be $G\in \mathcal{G}$ and $G\cap Y\in \mathcal{A}_2 \text{ or } \mathcal{A}_3$, such that either $x\not \in G$ and $\{x_1, x_2, x_3\}\not\subseteq G\cap Y$ or $x \in G$ and $\{x_1, x_2, x_3\}\cap (G\cap Y) =\emptyset$. By the intersecting property of $\mathcal{A}_2\cup \mathcal{A}_3 \cup \mathcal{A}_4$, in either case, we have $\mathcal{A}_4\neq\mathcal{G}_3$ and $ | \mathcal{A}_4 | < 51$. At last, suppose that $\mathcal{A}_4\subseteq\mathcal{J}_2$ with center $x$, kernel $\{x_1, x_2, x_3\}$ and the set of pages $\{x_4, x_5\}$. By Theorem 1.4, we may assume $ | \mathcal{A}_4 | = 51$ and $\mathcal{A}_4=\mathcal{J}_2$. Since $\mathcal{A}_2\cup \mathcal{A}_3\cup \mathcal{A}_4$ is intersecting, there is no member in $\mathcal{A}_2 \text{ or } \mathcal{A}_3$ avoiding $x$. And each member in $\mathcal{A}_2$ must interset with $\{x_1, x_2, x_3\}$, each member in $\mathcal{A}_3$ must interset with $\{x_1, x_2, x_3\}$ or contain $\{x_4, x_5\}$, to satisfy these conditions, $G$ must be contained in $\mathcal{J}_2$, a contradiction. So we have proved that $\mathcal{A}_4\le 50$ for $k=4$. Next consider $k\ge 5$. Suppose on the contrary that there exists $i\in \{2, \dots, k-1\}$ such that \begin{equation}\label{eq3} | \mathcal{A}_i | > {2k-1 \choose i-1}-{k-1 \choose i-1}-{k-2 \choose i-2}-{k-3 \choose i-3}. \end{equation} Note that for $i=2$, \begin{equation} {2k-1 \choose i-1} - {k-1\choose i-1} - {k-2\choose i-2} - {k-3 \choose i-3}=k-1\nonumber. \end{equation} If $|\mathcal{A}_2|\ge k \,(\, k\ge 5\, )$, then $\mathcal{A}_2$ is EKR, moreover, since $\mathcal{A}_2\cup \mathcal{G}$ is intersecting, $\mathcal{G}$ must be EKR or HM, a contradiction. Hence $|\mathcal{A}_2|\le k-1$, as desired. Now consider $i\ge 3$. Under the assumption (\ref{eq3}), we claim that \begin{equation}\label{eq4} | \mathcal{A}_i | > {2k-1 \choose i-1}-{2k-i-1 \choose i-1}-{2k-i-2 \choose i-2}+2. \end{equation} Let us explain inequality (\ref{eq4}). We write \begin{equation}\label{5} {2k-i-2 \choose i-2}={2k-i-3 \choose i-2}+{2k-i-3 \choose i-3}. \end{equation} For $k\ge 5$ and $3 \le i \le k-1$, we have \begin{equation}\label{6} {2k-1-i \choose i-1}-{k-1 \choose i-1}={k-1 \choose i-2}+{k \choose i-2}+\dots+{2k-2-i \choose i-2} \ge4, \end{equation} \begin{equation}\label{7} {2k-i-3 \choose i-2}-{k-2 \choose i-2}\ge 0,\,\,\, {2k-i-3 \choose i-3}-{k-3 \choose i-3} \ge 0, \end{equation} Combining (\ref{eq3}), (\ref{5}), (\ref{6}) and (\ref{7}), we obtain (\ref{eq4}). 
Since $\mathcal{A}_i$ is intersecting, we may assume, by Theorem \ref{thm1.4} that $\mathcal{A}_i$ is EKR or HM or for $i=3$, $\mathcal{A}_i\subseteq\mathcal{G}_2$. Case (i): $\mathcal{A}_i$ is EKR or HM at center $x$. In this case $\mathcal{A}_i$ contains at most 1 $i$-set missing $x$. Recall that there are at least three sets missing $x$ in $\mathcal{G}$. Pick three sets $G_1, G_2, G_3\in \mathcal{G}$ missing $x$. Denote \\ \begin{minipage}[b]{0.5\linewidth} \begin{flushleft} \begin{align*} T&=G_1\cap G_2\cap G_3\cap Y, \,t=|T|,\\ T_1&=(G_1\cap Y)\setminus (G_2\cup G_3),\, t_1=|T_1|,\\ T_2&=(G_2\cap Y)\setminus (G_1\cup G_3), \,t_2=|T_2|, \\ T_3&=(G_3\cap Y)\setminus (G_1\cup G_2),\, t_3=|T_3|, \\ T_4&=(G_1\cap G_2\cap Y)\setminus G_3,\, t_4=|T_4|,\\ T_5&=(G_1\cap G_3\cap Y)\setminus G_2, \,t_5=|T_5|,\\ T_6&=(G_2\cap G_3\cap Y)\setminus G_1,\, t_6=|T_6|. \end{align*} \end{flushleft} \end{minipage} \hspace{-1cm} \begin{minipage}[b]{0.5\linewidth} \begin{flushright} \includegraphics[scale=0.39]{Huang5.png} \end{flushright} \end{minipage} Clearly, $t+t_1+t_4+t_5\le k$, $t+t_2+t_4+t_6\le k$, $t+t_3+t_5+t_6\le k$. By Lemma \ref{lem2.2} $\mathcal{A}_i \cup \{G_1\cap Y, G_2\cap Y, G_3\cap Y\}$ is intersecting. Applying Inclusion-Exclusion principle, we have \begin{equation}\label{eq5} \begin{split} \mathcal{A}_i\le{2k-1 \choose i-1}-{2k-1-t-t_1-t_4-t_5 \choose i-1} -{2k-1-t-t_2-t_4-t_6 \choose i-1}\\-{2k-1-t-t_3-t_5-t_6 \choose i-1} +{2k-1-t-t_1-t_2-t_4-t_5-t_6 \choose i-1}\\+{2k-1-t-t_1-t_3-t_4-t_5-t_6 \choose i-1} +{2k-1-t-t_2-t_3-t_4-t_5-t_6 \choose i-1}\\-{2k-1-t-t_1-t_2-t_3-t_4-t_5-t_6 \choose i-1}+c, \end{split} \end{equation} where $c=0$ (if $\mathcal{A}_i$ is EKR) or $1$ (if $\mathcal{A}_i$ is HM). Denote the right side of equality (\ref{eq5}) by $f$. We rewrite it as \begin{equation}\label{eq6} \begin{split} f={2k-1 \choose i-1}-{2k-2-t-t_1-t_4-t_5 \choose i-2}-\dots -{2k-1-t-t_1-t_3-t_4-t_5-t_6 \choose i-2}\\ -{2k-2-t-t_2-t_4-t_6 \choose i-2}-\dots-{2k-1-t-t_1-t_2-t_4-t_5-t_6 \choose i-2}\\ - {2k-2-t-t_3-t_5-t_6 \choose i-2}-\dots-{2k-1-t-t_2-t_3-t_4-t_5-t_6 \choose i-2}\\ -{2k-1-t-t_1-t_2-t_3-t_4-t_5-t_6 \choose i-1}+c. \end{split} \end{equation} We can see that the right side of (\ref{eq6}), consequently (\ref{eq5}) does not decrease as $t+t_1+t_4+t_5, t+t_2+t_4+t_6, t+t_3+t_5+t_6$ increase. Since $t+t_1+t_4+t_5, t+t_2+t_4+t_6, t+t_3+t_5+t_6\le k,$ we can substitute $t+t_1+t_4+t_5=k, t_2+t_4+t_6=k-t, t_3+t_5+t_6=k-t$ into inequality (\ref{eq5}), and this will not decrease $f$. So we have \begin{equation}\label{eq7} \begin{split} |\mathcal{A}_i| &\le {2k-1\choose i-1}-3{k-1\choose i-1}+{t+t_4-1\choose i-1}+{t+t_5-1\choose i-1}+{t+t_6-1\choose i-1}\\ &-{t+t_5-t_2-1 \choose i-1} +c\\ &={2k-1\choose i-1}-3{k-1\choose i-1}+{t+t_4-1\choose i-1}+{t+t_6-1\choose i-1}+{t+t_5-2\choose i-2}\\ &+\dots+{t+t_5-t_2-1 \choose i-2}+c\\ &\triangleq g. \end{split} \end{equation} Clearly, $g$ does not decrease as $t+t_4, t+t_5, t+t_6$ increase and $t+t_4\le k-1, t+t_5\le k-1$ $t+t_6\le k-1$. If $t+t_5-t_2-1\ge k-3$, then \[ \begin{split} |\mathcal{A}_i| &\le {2k-1\choose i-1}-3{k-1\choose i-1}+3{k-2\choose i-1}-{k-3 \choose i-1}+c\\ &={2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3 \choose i-3}+c. \end{split} \] The equality holds only if $t=k-1, t_1=t_2=t_3=1, t_4=t_5=t_6=0$. If $t+t_5-t_2-1\le k-4$ ($\ast$), then $t\le k-2$ since $t=k-1$ implies $t_5=0$ and combining with ($\ast$), we have $t_2\ge2$, so $t+t_2\ge k+1$, a contradiction. 
Since $t+t_4\le k-1, t+t_5\le k-1$ and $t+t_6\le k-1$, by (\ref{eq6}) and (\ref{eq7}), taking $t+t_1+t_4+t_5=k, t+t_2+t_4+t_6=k, t+t_3+t_5+t_6=k$ and $t+t_4=k-1, t+t_5=k-1, t+t_6=k-1$ (this implies that $t=k-2, t_4=t_5=t_6=1$ and $t_1=t_2=t_3=0$) does not decrease $f$. So \[ \begin{split} g & \le {2k-1\choose i-1}-3{k-1\choose i-1}+3{k-2\choose i-1}-{k-2 \choose i-1}+c\\ &={2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3 \choose i-3}-{k-3 \choose i-2}+c\\ &\le {2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3 \choose i-3}-2+c. \end{split} \] So $$ |\mathcal{A}_i| \le {2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3 \choose i-3}+c.$$ To reach $c=1$, there is a set $A$ in $\mathcal{A}_i$ not containing $x$. Let $G_1$ be such that $G_1\cap Y=A$. So $|G_1\cap Y|= i\le k-1$. This implies that $t+t_1+t_4+t_5\le k-1$. In view of (\ref{eq5}) and (\ref{eq6}), $|\mathcal{A}_i|$ strictly decreases as $t+t_1+t_4+t_5$ strictly decreases. So we have \[ |\mathcal{A}_i|\le {2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3 \choose i-3}, \] as desired. Case (ii): For $i=3$, $\mathcal{A}_i\subseteq\mathcal{G}_2$ with core, say $\{x_1, x_2, x_3\}$. By direct calculation, we have $|\mathcal{A}_3|\le 3(2k-3)+1=6k-8$. When $k\ge 5$, we have $$6k-8< {2k-1\choose 2}-{k-1\choose 2}-{k-2\choose 1}-{k-3\choose 0},$$ as desired. \end{proof} \begin{lemma}\label{lem2.8} Let $\mathcal{G}$ be the final stable family as in Lemma \ref{lem2.2}. Then \[ |\mathcal{G}|\le {n-1 \choose k-1} -{n-k-1 \choose k-1} -{n-k-2 \choose k-2} -{n-k-3 \choose k-3}+3. \] \end{lemma} \begin{proof} Note that for any $A\in\mathcal{A}_i$, there are at most ${n-|Y|}\choose k-i$ $k$-sets in $\mathcal{G}$ containing $A$. For $k=4$, we have \begin{align} |\mathcal{G}|\le \sum_{i=1}^{4}|\mathcal{A}_i|{n-9\choose 4-i}.\nonumber \end{align} By Lemma \ref{lem2.3}, \begin{align}\label{eq8} |\mathcal{G}|&\le 3{n-9\choose 2}+18{n-9\choose 1}+50\nonumber\\ &=\frac{3}{2}n^2-\frac{21}{2}n+23\nonumber\\ &={n-1\choose 3}-{n-5\choose 3}-{n-6\choose 2}-{n-7\choose 1}+3. \end{align} For $k\ge5$, we have \begin{align}\label{eq9} |\mathcal{G}|&\le \sum_{i=1}^{k}|\mathcal{A}_i|{n-2k\choose k-i} \nonumber \\ &\!\!\!\!\!\! \!\! \overset{\text{Lemma \ref{lem2.3}}}{\le} 3+\sum_{i=1}^{k} \left( {2k-1\choose i-1}-{k-1\choose i-1}-{k-2\choose i-2}-{k-3\choose i-3} \right){n-2k\choose k-i} \nonumber\\ &={n-1\choose k-1}-{n-k-1\choose k-1}-{n-k-2\choose k-2}-{n-k-3\choose k-3}+3. \end{align} \end{proof} By Lemma \ref{lem2.8}, we have obtained the quantitative part of Theorem \ref{thm1.5}. \subsection{Uniqueness Part of Theorem \ref{thm1.5}} Let $\mathcal{G}$ be a $k$-uniform family such that the equality holds in Lemma \ref{lem2.8} .We first show the structure of $\mathcal{G}$. \begin{theorem}\label{thm2.9} Let $\mathcal{G}$ be a family as in Lemma \ref{lem2.8} such that the equality holds. If $k=5$, then $\mathcal{G}= \mathcal{J}_3$ or $\mathcal{G}_4$; if $k\ne5$, then $\mathcal{G}= \mathcal{J}_3$.\end{theorem} \begin{proof} To make the equalities (\ref{eq8}) and (\ref{eq9}) hold, we must get all the equalities in Lemma \ref{lem2.3}. So $|\mathcal{A}_2|=k-1$. By Lemma \ref{lem2.2}, $\mathcal{A}_2$ is intersecting, so $\mathcal{A}_2$ is a star, say with center $x$ and leaves $\{x_1, x_2, \dots, x_{k-1}\}$, or a triangle on $\{x, y, z\}$ (only for $k=4$). First consider $k=4$. If $\mathcal{A}_2$ is a triangle, then $\mathcal{G}=\mathcal{G}_2$, a contradiction. 
Otherwise, $\mathcal{A}_2$ is a star, this implies that all sets in $\mathcal{G}$ missing $x$ must contain $\{x_1, x_2, x_3\}$, and the number of such sets is at least 3. Then either $\mathcal{G}= \mathcal{G}_3$ or $\mathcal{G}= \mathcal{J}_i, 3\le i\le k-1$. By the assumption that $\mathcal{G}\not\subseteq \mathcal{G}_3$, the former is impossible, and the latter implies $\mathcal{G}= \mathcal{J}_3$. Hence, the equality in (21) holds only if $\mathcal{G}=\mathcal{J}_3$. For $k\ge5$, $\mathcal{A}_2$ must be a star. Similarly, in this condition, we have either $\mathcal{G}=\mathcal{G}_{k-1}$ or $\mathcal{G}= \mathcal{J}_i, 3\le i\le k-1$. In particular, for $k=5$, we can see that the extremal value of $|\mathcal{G}|$ can be achieved by $|\mathcal{G}_4|$ and $|\mathcal{J}_3|$, and for $k>5$, by $|\mathcal{J}_3|$ only. \end{proof} We will use some results in \cite{HK2017}. We say two families $\mathcal{G}$ and $\mathcal{F}$ are \textit{cross-intersecting} if for any $G\in \mathcal{G}$ and $F\in \mathcal{F}$, $G\cap F\neq\emptyset $. We say that a family $\mathcal{F}$ is {\it non-separable} if $\mathcal{F}$ cannot be partitioned into the union of two cross-intersecting non-empty subfamilies. \begin{proposition}$(\cite{HK2017})$\label{prop3.2} Let $r\ge 2$. Let $Z$ be a set of size $m\ge2r+1$ and let $A\subseteq Z$ such that $|A|\in \{r-1, r\}$. Let $\mathcal{B}$ be an $r$-uniform family on $Z$ such that $\mathcal{B}=\{B\subseteq Z: 0<|B\cap A|<|A|\}$. Then $\mathcal{B}$ is non-separable. \end{proposition} \begin{lemma}$(\cite{HK2017})$\label{lem3.2} Let $\mathcal{F}$ be a $k$-uniform intersecting family. If $k\ge 3$ and $S_{xy}(\mathcal{F})\in \{\mathcal{J}_2, \mathcal{G}_{k-1}, \mathcal{G}_2\}$, then $\mathcal{F}$ is isomorphic to $S_{xy}(\mathcal{F}).$ \end{lemma} Combining with Theorem \ref{thm2.9} and Lemma \ref{lem3.2}, the uniqueness part of Theorem \ref{thm1.5} will be completed by showing the following lemma. \begin{lemma}\label{lem3.3} Let $\mathcal{F}$ be a $k$-uniform intersecting family. If $k\ge 4$ and $S_{xy}(\mathcal{F})=\mathcal{J}_3$, then $\mathcal{F}$ is isomorphic to $\mathcal{J}_3$. \end{lemma} \begin{proof} Assume that $S_{xy}(\mathcal{F})=\mathcal{J}_3$ with center $x_0$, kernel $E$ and the set of pages $\{x_1, x_2, x_3\}$. That is \[ \mathcal{J}_3=\{G:\{x_0, x_1, x_2, x_3\}\subseteq G\}\cup\{G:x_0\in G, G\cap E\ne \emptyset\}\cup \{E\cup\{x_1\}, E\cup\{x_2\}, E\cup\{x_3\}\}. \] Define \begin{align*} \mathcal{B}_x&:=\{G\in \mathcal{J}_3: x\in G, y\not\in G, (G\setminus{x})\cup{y}\not\in \mathcal{J}_3\}, \\ \mathcal{C}_x&:=\{G\in \mathcal{B}_x: G\in \mathcal{F}\}, \\ \mathcal{D}_x&:=\{G\in \mathcal{B}_x: G\not\in \mathcal{F}\}, \\ \mathcal{B}' &:=\{G\setminus \{x\}: G\in \mathcal{B}_x\},\\ \mathcal{C}'&:=\{G\setminus \{x\}: G\in \mathcal{C}_x\},\\ \mathcal{D}'&:=\{G\setminus \{x\}: G\in \mathcal{D}_x\}. \end{align*} Then $\mathcal{B}_x=\mathcal{C}_x \sqcup \mathcal{D}_x$ and $\mathcal{B}'=\mathcal{C}' \sqcup \mathcal{D}'$. The definition of $\mathcal{D}_x$ implies that for any $G\in \mathcal{D}_x$, $G\setminus \{x\} \cup \{y\} \in \mathcal{F}$, and the definition of $\mathcal{C}_x$ implies that for any $G\in \mathcal{C}_x$, $G\setminus \{x\} \cup \{y\} \not\in \mathcal{F}$. Clearly, only the sets in $\mathcal{D}_x$ are in $S_{xy}(\mathcal{F})\setminus \mathcal{F} $. If $\mathcal{D}_x=\emptyset$, then $S_{xy}(\mathcal{F})=\mathcal{F}=\mathcal{J}_3$, and if $\mathcal{C}_x=\emptyset$, then $\mathcal{F}$ is still $\mathcal{J}_3$ with center $y$. 
On the other hand, notice that $\mathcal{C}_x$ and $\{ G\setminus \{x\}\cup \{y\}: G\in\mathcal{D}_x\}$ are cross intersecting, so $\mathcal{C}'$ and $\mathcal{D}'$ are cross intersecting. We are going to prove that $\mathcal{B'}$ is non-separable, this means that $\mathcal{C}'=\emptyset$ or $\mathcal{D}'=\emptyset$, and hence $\mathcal{C}_x=\emptyset$ or $\mathcal{D}_x=\emptyset$, we can conclude the proof. So what remains is to show the following claim. \begin{clm} $\mathcal{B'}$ is non-separable. \end{clm} \begin{proof} We say the shift $S_{xy}: \mathcal{F} \to \mathcal{J}_3$ is trivial if $\mathcal{B}_x=\emptyset$. Let $Z:=[n]\setminus\{x, y\}$. If $r=k-1$, then $|Z|\ge 2k+1-2=2r+1$. Let $T_1:=\{x_0\}, \,T_2:=E, \,T_3:=\{x_1, x_2, x_3\}, \,T_4:=[n]\setminus(T_1\cup T_2\cup T_3)$. Since for $x, y \in T_i$ or for $x\in T_i, y\in T_j, i>j$, the shift is trivial, we only need to consider the following three cases. Case (i): $x=x_0$ and $y\in T_2 \cup T_3 \cup T_4$. If $y\in T_3$, let $A=E$, then $\mathcal{B'}=\{B\subseteq Z: 0<|B\cap A|<|A|\}$. By Proposition \ref{prop3.2}, $\mathcal{B'}$ is non-separable. If $y\in T_2\cup T_4$, let $A:=E\setminus \{y\}$, then $|A|\in \{r-1, r\}$. Assume that $\mathcal{B'}$ has a partition $\mathcal{B'}_1\cup \mathcal{B'}_2$ such that $\mathcal{B'}_1$ and $\mathcal{B'}_2$ are cross-intersecting. We now partition $\mathcal{B'}$ into three parts $\mathcal{P}_1\sqcup \mathcal{P}_2\sqcup \mathcal{P}_3$, where \begin{gather*} \mathcal{P}_1:=\{B\subseteq Z: 0<|B\cap A|<|A|\},\\ \mathcal{P}_2:=\{B\in \mathcal{B'}: B\cap A=\emptyset\}=\{T_3\cup F: F\subseteq T_4\setminus \{y\}, |F|=k-4\}, \end{gather*} and \begin{eqnarray*} \mathcal{P}_3:=\{B\in \mathcal{B'}: A\subseteq B\}= \begin{cases} \{A\cup \{z\}: z\in T_4\}, & y\in T_2;\\ \{A\}, & y\in T_4. \end{cases} \end{eqnarray*} Obviously, $\mathcal{P}_1\ne \emptyset$. By Proposition \ref{prop3.2}, $\mathcal{P}_1$ is non-separable. For any $P\in \mathcal{P}_2$, and any $a\in A$, we have $|Z\setminus\{a\}|\ge 2r $, then in $\mathcal{P}_1$ we can always find $P'\subseteq Z\setminus (\{a\}\cup P)$ such that $0<|P'\cap A|<|A|$ and $P\cap P'=\emptyset$. This implies that $P$ and $P'$ must be in the same $\mathcal{B'}_i$ ($i=1$ or $2$)(recall that we assumed that $\mathcal{B'}$ has a partition $\mathcal{B'}_1\cup \mathcal{B'}_2$ such that $\mathcal{B'}_1$ and $\mathcal{B'}_2$ are cross-intersecting), hence $\mathcal{P}_1$ and $\mathcal{P}_2$ are in the same $\mathcal{B'}_i$. For any $P\in \mathcal{P}_3$, we have $|P\cap T_4|\le 1$. Since $|T_4|\ge k-2$, there is a $(k-4)$-set $F\subseteq T_4\setminus \{y\}$, such that $P\cap F=\emptyset$. Note that $P':=F\cup T_3\in \mathcal{P}_2$ and $P'\cap P=\emptyset$, so $\mathcal{P}_2$ and $\mathcal{P}_3$ are in the same $\mathcal{B'}_i$. Hence $\mathcal{B'}=\mathcal{B'}_1$ or $\mathcal{B'}_2$, as desired. Case (ii): $x\in T_2$ and $y\in T_3 \cup T_4$. Let $E_i:=(E\cup \{x_i\})\setminus \{x\}, i=1, 2, 3$. If $y\in T_4$, then \[ \mathcal{B'}=\{E_1, E_2, E_3\}\cup \left\{G\in{[n]\setminus\{x\} \choose k-1}: x_0\in G, G\cap E=\emptyset, |G\cap T_3|\le 2, y\not\in G\right\}. \] Since $|T_4\setminus \{y\}|\ge k-3$, there is $P\in \mathcal{B'}\setminus\{E_1, E_2\}$, such that $P\cap E_1=P\cap E_2=\emptyset$. Hence, $E_1$ and $E_2$ belong to the same part $\mathcal{B'}_i$. Similarly, $E_1$ and $E_3$ belong to the same part. Thus $E_1, E_2$ and $E_3$ are in the same $\mathcal{B'}_i$. 
Moreover, for any $P'\in \mathcal{B'}\setminus\{E_1, E_2, E_3\}$, because $|P'\cap \{x_1, x_2, x_3\}|\le 2$, we have $P'\cap E_1=\emptyset$, or $P'\cap E_2=\emptyset$ or $P'\cap E_3=\emptyset$. Hence, $\mathcal{B'}$ is non-separable, as desired. If $y\in T_3$, w.l.o.g., let $y=x_1$. Then \[ \mathcal{B'}=\{E_2, E_3\}\cup \left\{G\in{[n]\setminus\{x\} \choose k-1}: x_0\in G, G\cap E=\emptyset, |G\cap T_3|\le 1, y\not\in G\right\}. \] Since $|T_4|\ge k-2$, there exists $P\in \mathcal{B'}\setminus \{E_2, E_3\}$ such that $P\cap T_3=\emptyset$, then $P\cap E_2=\emptyset$, and $P\cap E_3=\emptyset$, this implies that $E_2$ and $E_3$ are in the same $\mathcal{B'}_i$. Because $|G\cap T_3|\le 1$ and $G\cap E=\emptyset$, it's not hard to see that each $P\in \mathcal{B'}\setminus \{E_2, E_3\}$ is disjoint from one of $E_2$ and $E_3$. Hence $\mathcal{B'}$ is non-separable. Case (iii): $x\in T_3$ and $y\in T_4$. w.l.o.g., let $x=x_1$. Under this condition, \[ \mathcal{B'}=\{E\}\cup \left\{G\in{[n]\setminus\{x\} \choose k-1}: \{x_0, x_2, x_3\}\subseteq G, G\cap E=\emptyset, y\not\in G\right\}. \] Since $E$ is disjoint from every other set in $\mathcal{B}'\setminus \{E\}$, $\mathcal{B'}$ is non-separable. \end{proof} The proof of Lemma \ref{lem3.3} is complete. \end{proof} \section{Proofs of Lemma \ref{lem2.1} and Lemma \ref{lem2.2}}\label{sec3} \subsection{Proof of Lemma \ref{lem2.1}} We first show the following preliminary results. For a family $\mathcal{F}\subseteq 2^{[n]}$ and $x_1, x_2, x_3\in [n]$, let $d_{\{x_1, x_2\}}$ be the number of sets containing $\{x_1, x_2\}$ in $\mathcal{F}$, and $d_{\{x_1, x_2, x_3\}}$ be the number of sets containing $\{x_1, x_2, x_3\}$ in $\mathcal{F}$. \begin{clm} \label{clm 1} Let $\mathcal{F}\subseteq\mathcal{G}_2$ be a 4-uniform family with core $A$ satisfying $d_{\{x_1, x_2\}} >2n-7$. Then $\{x_1, x_2\} \subseteq A$. \end{clm} \begin{proof} If $\{x_1, x_2\} \subseteq [n]\setminus A$, then a set in $\mathcal{F}$ containing $\{x_1, x_2\}$ must have two elements from $A$, so $d_{(x_1, x_2)}\leq 3$, a contraction. If $|\{x_1, x_2\} \cap A| = 1$, then a set in $\mathcal{F}$ containing $\{x_1, x_2\}$ must have at least one element from $A$, so $d_{(x_1, x_2)}\leq 2n-7$, a contraction again. So $\{x_1, x_2\} \subseteq A$, as desired. \end{proof} \begin{clm}\label{clm 2} Let $\mathcal{F}\subseteq\mathcal{G}_3$ be a 4-uniform family with center $x$ and core $E$ and let $B=\{x\}\cup E$. \\ (i) If $d_{\{x_1, x_2\}}\ge 3n-12$, then $x\in \{x_1, x_2\}$.\\ (ii) If $d_{\{x_1, x_2\}}> 3n-12$, then $\{x_1, x_2\} \subseteq B$ and $x\in \{x_1, x_2\}$. \end{clm} \begin{proof} For (i), assume that $x\not \in \{x_1, x_2\}$. If $\{x_1, x_2\}\cap B= \emptyset$, then the sets containing $\{x_1, x_2\}$ must contain the center $x$ and another vertex from core $E$, so $d_{(x_1, x_2)}\le 3<3n-12$, a contradiction. So $\{x_1, x_2\}\subseteq E$ or $|\{x_1, x_2\} \cap E|=1$. If the former holds, then the sets containing $\{x_1, x_2\}$ must contain the center $x$ or contain the core $E$, so $d_{(x_1, x_2)}\le (n-3)+(n-4)=2n-7<3n-12$, a contradiction. If the latter holds, w.l.o.g., let $\{x_1, x_2\} \cap E=\{x_1\}$, then the sets containing $\{x_1, x_2\}$ must contain the center $x$ or just the set $E\cup \{x_2\} $, so $d_{(x_1, x_2)}\le (n-3)+1<3n-12$, also a contradiction. Hence, $x\in \{x_1, x_2\}$, as desired. For (ii), we have shown that $x\in \{x_1, x_2\}$ by (i), w.l.o.g, let $x_1=x$ be the center. 
If $x_2\not \in E$, then the sets containing $\{x_1, x_2\}$ must intersect with $E$, so $d_{(x_1, x_2)}\le {n-2 \choose 2}-{n-5 \choose 2}=3n-12$, a contradiction to that $d_{\{x_1, x_2\}}> 3n-12$, so $x_2 \in E$, that is $\{x_1, x_2\} \subseteq B$, as desired. \end{proof} \begin{clm}\label{clm00} Fix $n>6$. Let $\mathcal{F}\subseteq\mathcal{G}_3$ be a 4-uniform family with center $x$ and core $E$ and let $B=\{x\}\cup E$. If $d_{\{x_1, x_2, x_3\}}\ge n-3$, then either $\{x_1, x_2, x_3\} \subset B$ or $|\{x_1, x_2, x_3\} \cap B|=2$ with $x\in \{ x_1, x_2, x_3\}$. \end{clm} \begin{proof} Suppose on the contrary that neither $\{x_1, x_2, x_3\} \subset B$ nor $|\{x_1, x_2, x_3\} \cap B|=2$ with $x\in \{ x_1, x_2, x_3\}$. Since $\mathcal{F}\subseteq\mathcal{G}_3$, it's easy to see that if $\{x_1, x_2, x_3\}\subseteq [n]\setminus B$, then $d_{\{x_1, x_2, x_3\}}=0$, so $1\le|\{x_1, x_2, x_3\}\cap B|\le 2$. First consider that $|\{x_1, x_2, x_3\}\cap B|=1$. If $\{x_1, x_2, x_3\}\cap B=\{x\}$, then the sets containing $\{x_1, x_2, x_3\}$ in $\mathcal{F}$ must intersect with $E$, so $d_{\{x_1, x_2, x_3\}}\le3<n-3$, a contradiction. If $|\{x_1, x_2, x_3\}\cap E|=1$, then the set containing $\{x_1, x_2, x_3\}$ in $\mathcal{F}$ must contain $x$, so $d_{\{x_1, x_2, x_3\}}\le1<n-3$, also a contradiction. Hence $|\{x_1, x_2, x_3\}\cap B|=2$. By hypothesis, $|\{x_1, x_2, x_3\}\cap E|=2$, w.l.o.g., let $\{x_1, x_2, x_3\}\cap E=\{x_1, x_2\}$, then $d_{\{x_1, x_2, x_3\}}\le2$ since the possible sets in $\mathcal{F}$ containing $\{x_1, x_2, x_3\}$ are $\{x_1, x_2, x_3\}\cup \{x\}$ and $E\cup \{x_3\}$, a contradiction. \end{proof} {\it Proof of Lemma \ref{lem2.1}}. We first consider that $k\ge 5$. In {\it Case 1}, i.e., $S_{xy}(\mathcal{H}_1)$ is EKR with center $x$, we take $X_1=\{x, y\}$. In {\it Case 2}, since $S_{xy}(\mathcal{H}_2)$ is HM at center $x$, let $E=\{z_1, z_2, \dots ,z_k\}$ be the only member missing $x$, and without loss of generality, we assume $z_1\ne y$, and take $X_2=\{x, y, z_1\}$. In {\it Case 3}, $S_{xy}(\mathcal{H}_3)\subseteq\mathcal{J}_2$ with center $x$, kernal $\{z_1, z_2, \dots ,z_{k-1}\}$. Without loss of generality, we assume $z_1\ne y$, and take $X_3=\{x, y, z_1\}$. We can see that for any set $G\in \mathcal{H}_i$, $G \cap X_i \ne \emptyset$, for $i=1,2,3$. After the shifts $S_{x'y'}$ for all $x'<y', x', y' \in [n]\setminus X_i$ to $\mathcal{H}_i$, the resulting family $\mathcal{H'}_i$ satisfies that for every set $G' \in \mathcal{H'}_i$, $G'\cap X_i\ne \emptyset$. By the maximality of $|\mathcal{H}|$, we may assume that all $k$-sets containing $X_i \,(i=1, 2, 3)$ are in $\mathcal{H}$, so is in $\mathcal{H}_i$. These sets will keep stable after any shift $S_{x'y'}$, so there are at least ${n-3 \choose k-2}$ (or ${n-4 \choose k-3}$) $>2$ sets missing $x'$ in $\mathcal{H}'_i$. Fact \ref{fact 2} (i), (ii) and (iii) implies that $\mathcal{H}'_i$ is neither EKR nor HM nor contained in $\mathcal{J}_2$. We are done for $k\ge 5$. We now assume that $k=4$. We will complete the proof by showing the following Lemmas corresponding to Cases 1-5 in Remark \ref{remark2.4} \begin{lemma}[{\it Case 1}]\label{case1} If we each a 4-uniform family $\mathcal{H}_1$ such that $S_{xy}(\mathcal{H}_1)$ is EKR at $x$, then there is a set $X_1=\{x, y, y', z, w\}$ such that after a series of shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_1)$ to $\mathcal{H}_1$, we will reach a stable family $\mathcal{G}$ satisfying the conditions of Theorem \ref{thm1.5}. 
Moreover, $\{y, y', z, w\}$ or $ \{x, y', z, w\}$ is in $\mathcal{G}$. Furthermore, $G\cap \{x, y\}\ne \emptyset$ for any $G\in \mathcal{G}$. \end{lemma} \begin{proof} Since $S_{xy}( \mathcal{H}_1)$ is EKR, for any $F\in \mathcal{H}_1$, we have $F \cap \{x, y\} \ne \emptyset$. Any set obtained by performing shifts $[n]\setminus \{x, y\}$ to a set in $\mathcal{H}_1$ still contains $x$ or $y$. We will show Claims \ref{clm 3}, \ref{clm 4} and \ref{clm 6} implying Lemma \ref{case1}. \begin{clm}\label{clm 3} Performing shifts in $[n]\setminus \{x, y\}$ to $\mathcal{H}_1$ repeatedly will not reach {\it Cases 1-3} in Remark \ref{remark2.4}. \end{clm} \begin{proof} Since $S_{xy}( \mathcal{H}_1)$ is EKR, for any $G\in \mathcal{H}_1$, we have $G \cap \{x, y\} \ne \emptyset$. By the maximality of $|\mathcal{H}| $ ($|\mathcal{H}_1|$ as well), we have \begin{align}\label{eq10} &\left\{G\in {[n]\choose k}: \{x, y\}\subseteq G \right\}\subseteq \mathcal{H}_1,\nonumber\\ &\vert \left\{ G\in \mathcal{H}_1: \{x, y\}\subseteq G \right\}\vert={n-2\choose 2}. \end{align} All these sets containing $\{x, y\}$ are stable after performing $S_{x'y'}\,\,(x'<y', x', y'\not \in \{x, y\}) $. So there are still at least ${n-3 \choose 2}>2$ sets missing $x'$ after $S_{x'y'}$, so we will not reach {\it Case 1-3}. \end{proof} \begin{clm}\label{clm 4} If performing some shifts in $[n]\setminus \{x, y\}$ repeatedly to $\mathcal{H}_1$ reaches $\mathcal{H}_4$ in {\it Case 4}( $S_{x'y'}(\mathcal{H}_4)\subseteq\mathcal{G}_2$), then there exists $X_1=\{x, y, y', z, w\}$ such that performing shifts in $[n]\setminus X_1$ repeatedly to $\mathcal{H}_4$ will not reach {\it Cases 1-5} as in Remark \ref{remark2.4}, and $\{y, y', z, w\}$ or $ \{x, y', z, w\}$ is in the final stable family $\mathcal{G}$. \end{clm} \begin{proof} Assume that after some shifts in $[n]\setminus \{x, y\}$ to $\mathcal{H}_1$, we get $\mathcal{H}_4$ such that $S_{x'y'}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $A$. Since there are ${n-2 \choose 2}$ sets containing $\{x, y\}$ in $\mathcal{H}_1$ and they are stable (so in $\mathcal{H}_4$), and ${n-2 \choose 2} > 2n-7$ ($n\ge6$), by Fact \ref{fact 2} (ii) and Claim \ref{clm 1}, $A=\{x', x, y\}$. Since $S_{x'y'}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x', x, y\}$, there exists $\{y, y', z_1, w_1\}$ (or $\{x, y', z_2, w_2\}$) in $\mathcal{H}_4$. Let $ X_1:=\{x, y, y', z_1, w_1\}$ (or $X_1:=\{x, y, y', z_2, w_2\}$). Clearly, any set containing $\{x, y\}$ and missing $x''\in [n]\setminus X_1$ are stable after performing shifts in $[n]\setminus X_1$ repeatedly to $\mathcal{H}_4$, so performing shifts $S_{x''y''}, x'', y'' \in [n]\setminus X_1$ to $\mathcal{H}_4$ will not reach {\it Cases 1-3}. If we reach {\it Case 4}, that is we get a family $\mathcal{H'}_4$, such that $S_{x''y''}(\mathcal{H'}_4)\subseteq\mathcal{G}_2$ with core $A'$. By Fact \ref{fact 2} and Claim \ref{clm 1}, we have $A'=\{x'', x, y\}$. However, $\{y, y', z_1, w_1\}$ (or $\{x, y', z_2, w_2\}$) is stable under all the shifts in $[n]\setminus X_1$, so it is still in $S_{x''y''}(\mathcal{H'}_4)$, contradicting that $S_{x''y''}(\mathcal{H'}_4)\subseteq\mathcal{G}_2$ with core $\{x'', x, y\}$. Thus we can not reach {\it Case 4}. Now assume that after some shifts in $[n]\setminus X_1$ to $\mathcal{H}_4$, we get $\mathcal{H}_5$ such that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B$ for some $x''$ and $y''\in [n]\setminus X_1$. 
By Fact \ref{fact 2} (iv), (\ref{eq10}) and Claim \ref{clm 2} (ii), we have $\{x, y, x''\}\subseteq B$. Since there are ${n-2 \choose 2}$ sets which contain $\{x, y\}$ in $\mathcal{H}_1$ (so in $S_{x''y''}(\mathcal{H}_5)$), we have one of the following cases: ($\ast$) $x$ is the center, and $y$ is in the core; ($\ast \ast$) $y$ is the center, and $x$ is in the core. Recall that there exists $\{y, y', z_1, w_1\}$ or $\{x, y', z_2, w_2\}$ in $\mathcal{H}_4$. We will meet one of the following three cases: (a) There is no set $G\in \mathcal{H}_4$ such that $G \cap \{x, y, x'\}=\{x\}$. So there exists $\{y, y', z_1, w_1\}\in \mathcal{H}_4$, and all sets containing $\{x', x\}$ in $S_{x'y'}(\mathcal{H}_4)$ are originally in $\mathcal{H}_4$. Take $X_1:=\{x, y, y', z_1, w_1\}$. By the maximality of $|\mathcal{H}|$ (so is $|\mathcal{H}_4|$), there are ${n-2 \choose 2}$ sets containing $\{x', x\}$ in $\mathcal{H}_4$ (so in $S_{x''y''}(\mathcal{H}_5)$ as well). This implies that $x'\in E$, and $x$ is the center. However, $\{y, y', z_1, w_1\}$ is contained in $S_{x''y''}(\mathcal{H}_5)$, a contraction to that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center $x$ and core $\{y, x', x''\}$. (b) There is no set $G\in \mathcal{H}_4$ such that $G \cap \{x, y, x'\}=\{y\}$. So there exists $\{x, y', z_2, w_2\}\in \mathcal{H}_4$, and all sets containing $\{x', y\}$ in $S_{x'y'}(\mathcal{H}_4)$ are originally in $\mathcal{H}_4$. Take $X_1:=\{x, y, y', z_2, w_2\}$. By the maximality of $|\mathcal{H}|$ (so is $|\mathcal{H}_4|$), there are ${n-2 \choose 2}$ sets containing $\{x', y\}$ in $\mathcal{H}_4$, so in $S_{x''y''}(\mathcal{H}_5)$. This implies that $x'\in E$ and $y$ is the center for $S_{x''y''}(\mathcal{H}_5)$. However, $\{x, y', z_2, w_2\}$ is in $S_{x''y''}(\mathcal{H}_5)$, contradicting to that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ at center $y$ and core $\{x, x', x''\}$. (c) There are both $\{y, y', z_1, w_1\}$ and $\{x, y', z_2, w_2\}$ in $\mathcal{H}_4$. We choose $X_1:=\{x, y, y', z_1, w_1\}$ first. Assume that ($\ast$) happens. Since $\{y, y', z_1, w_1\}$ is still in $S_{x''y''}(\mathcal{H}_5)$, this contradicts that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center $x$ and $\{y, x''\}$ contained in the core. So we assume that ($\ast \ast$) happens. Let $B=\{x, y, x'', u\}$ for some $u$. If $u=x'$, then the existence of $\{y, y', z_1, w_1\}$ makes a contradiction again. Now consider $u\ne x'$. \begin{clm} \label{clm 5} If $u\ne x'$, then $u = y'$. \end{clm} \begin{proof} Assume on the contrary that $u\ne y'$. We have shown that $S_{x''y''}(\mathcal{H}_5)$ can not be contained in $\mathcal{J}_2$ at center $y$, then there are at least 3 sets containing $\{x, u, x''\}$. Although $\{x, x', x'', u\}$ and $\{x, y', x'', u\}$ may be two such sets, there must be $\{x, u, x'', v\} \in S_{x''y''}(\mathcal{H}_5)$ for some $v\in [n]\setminus \{x, y, u, x', y', x''\}$. However, every set in $\mathcal{H}_4$ contains $\{x, y\}$, or $\{x', x\}$, or $\{x', y\}$, or $\{x, y'\}$, or $\{y, y'\}$ by recalling that $S_{x'y'}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x, y, x'\}$, so is every set in $S_{x''y''}(\mathcal{H}_5)$ since $x'', y''\in [n]\setminus \{x, y, y', z_1, w_1\}$, a contradiction. \end{proof} By Claim \ref{clm 5}, we have that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ at center $y$ and core $\{x, x'', y'\}$. This time, we change $X_1$ to $X'_1:=\{x, y, y', z_2, w_2\}$. 
Similar to the lines in the first paragraph of the proof of Claim \ref{clm 4}, we will not reach {\it Cases 1-4} after performing shifts $S_{x'y'}$ in $[n]\setminus X'_1$. If we reach {\it Case 5}, that is, after some shifts in $[n]\setminus X'_1$ to $\mathcal{H}_4$, we get $\mathcal{H'}_5$ such that $S_{x'''y'''}(\mathcal{H'}_5)\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B'$ for some $x''', y'''\in [n]\setminus X'_1$. By the previous analysis, $B'=\{x, y, x''', y'\}$, and we only need to consider the case that $x$ is the center (If $y$ is the center, since $\{y, y', z_2, w_2\}$ is still in $S_{x'''y'''}(\mathcal{H'}_5)$, this contradicts that $S_{x'''y'''}(\mathcal{H'}_5)\subseteq\mathcal{G}_3$ with center $y$ and core $\{x, y', x'''\}$). We have shown that $S_{x''y''}(\mathcal{H}_5)$ can not be contained in $\mathcal{G}_2$ with core $\{x, y, y'\}$, so there is $G\in S_{x''y''}(\mathcal{H}_5)$ such that $G\cap \{x, y\}=\emptyset $ or $G\cap \{x, y'\}=\emptyset$ or $G\cap \{y, y'\}=\emptyset$. Since $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with core $\{x, x'', y' \}$ and center $y$, $G$ must contain $x$ or $y$. If $G\cap \{y, y'\}=\emptyset$, it contradicts that $S_{x''y''}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with core $\{x, x'', y' \}$ and center $y$. So there is $G\in S_{x''y''}(\mathcal{H}_5)$ such that $G\cap \{x, y'\}=\emptyset$. After shifts in $[n]\setminus X'_1$ to $G$, we get $G'$ missing $x$ and $y'$ still. This contradicts that $S_{x'''y'''}(\mathcal{H'}_5)\subseteq\mathcal{G}_3$ with core $\{ y, x''', y'\}$ and center $x$. Hence, we will not reach {\it Case 5}. In summary, we have shown that there exists $X_1$ in the form of $\{x, y, y', z, w\}$ such that performing shifts in $[n]\setminus X_1$ repeatedly to $\mathcal{H}_4$ will not reach {\it Cases 1-5} as in Remark \ref{remark2.4}. Moreover, $\{y, y', z, w\}$ or $ \{x, y', z, w\}$ is in the final stable family $\mathcal{G}$. This completes the proof of Claim \ref{clm 4}. \end{proof} \begin{clm}\label{clm 6} If performing some shifts in $[n]\setminus \{x, y\}$ repeatedly to $\mathcal{H}_1$ does not reach {\it Cases1-4}, but reaches $\mathcal{H}_5$ in {\it Case 5} ($S_{x'y'}(\mathcal{H}_5)\subseteq\mathcal{G}_3$), then there exists $X_1$ in the form of $\{x, y, y', z, w\}$ such that performing shifts in $[n]\setminus X_1$ repeatedly to $\mathcal{H}_4$ will not reach {\it Cases 1-5} as in Remark \ref{remark2.4}. Moreover, $\{y, y', z, w\}$ or $ \{x, y', z, w\}$ is in the final stable family $\mathcal{G}$. \end{clm} \begin{proof} Suppose that we get some $\mathcal{H}_5$ such that $S_{x'y'}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B$. By (\ref{eq10}) and Claim \ref{clm 2}, the center must be $x$ or $y$, and $\{x, y\}\subset B$. By Fact \ref{fact 2} (iv), $X'\in B$ and $y'\not \in B$. Let $B=\{x, y, x', z\}$. We consider the case that $x$ is the center, the proof for $y$ being the center is similar. Since $S_{x'y'}(\mathcal{H}_5)\subseteq\mathcal{G}_3$, and recall that we are under Case 1, every set in $\mathcal{H}_5$ intersects $\{x, y\}$, there exists $\{y, y' , z, w\}$ (or $\{x, y' , z_1, z_2\}$) $\in \mathcal{H}_5$. And by the maximality of $|\mathcal{H}|$ (so is $|\mathcal{H}_5|$), we may assume that all the sets containing $\{x, z\}$ in $S_{x'y'}(\mathcal{H}_5)$ are originally in $\mathcal{H}_5$. Let $X_1:=\{x, y, y', z, w\}$ (or $\{x, y, y' , z_1, z_2\}$). 
Similar to the analysis in the first paragraph of the proof of Claim \ref{clm 4}, for any shifts $S_{x''y''}$ to $\mathcal{H}_5$ in $[n]\setminus X_1$, we won't reach {\it Cases 1-4}. If we reach {\it Case 5} again, then the resulting family $S_{x''y''}(\mathcal{H'}_5)$ ($x''$ and $y'' \in [n]\setminus X_1$) must be contained in $\mathcal{G}_3$ with core $\{y, x'', z\}$ and center $x$. However $\{y, y' , z, w\} \, (\text{or} \,\{x, y' , z_1, z_2\} )$ is still in $S_{x''y''}(\mathcal{H'}_5)$, and misses $x''$ and $x$ (or $\{x'', z, y\}\cap \{x, y', z_1, z_2\}=\emptyset$), contradicting that the family $S_{x''y''}(\mathcal{H'}_5)\subseteq\mathcal{G}_3$ with core $\{y, x'', z\}$ and center $x$. So we will not achieve {\it Case 5}, as desired. \end{proof} By Claims \ref{clm 3}, \ref{clm 4} and \ref{clm 6}, we have shown that if we reach a $4$-uniform family $\mathcal{H}_1$ such that $\mathcal{H}_1$ is EKR, then there exists a set $X_1$ with $|X_1|\le 5 $ and $\{x, y\}\subseteq X_1$ such that performing shifts $S_{x'y'}$ in $[n]\setminus X_1$ repeatedly to $\mathcal{H}_1$ will result in a stable family satisfying the conditions of Lemma \ref{case1}. This completes the proof of Lemma \ref{case1}. \end{proof} \begin{lemma}[{\it Case 2}]\label{case2} If we each a 4-uniform family $\mathcal{H}_2$ such that $S_{xy}(\mathcal{H}_2)$ is HM at $x$, then there is a set $X_2=\{x, y, z_1, z_2, z_3\}$ such that after a series of shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_2)$ to $\mathcal{H}_2$, we will reach a stable family $\mathcal{G}$ satisfying the conditions of Theorem \ref{thm1.5}. Moreover, $\{z_1, z_2, z_3, y\}$ or $\{z_1, z_2, z_3, z'_4\} $ $\in \mathcal{G}$. Furthermore, if $\{z_1, z_2, z_3, y\}\in \mathcal{G}$, then every member in $\mathcal{G}$ contains $x$ or $y$. If $\{z_1, z_2, z_3, z'_4\}\in \mathcal{G}$, then every other member in $\mathcal{G}$ contains $x$ or $y$. \end{lemma} \begin{proof} Note that $S_{xy}(\mathcal{H}_2)$ contains exactly one set, say, $G_0=\{z_1, z_2, z_3, z_4\}$, that misses $x$. W.l.o.g., let $z_1, z_2, z_3\ne y$. Let $X_2:=\{x, y, z_1, z_2, z_3\}$. By the maximality of $|\mathcal{H}_2|$, we may assume $$\left\{ G\in {X \choose 4}: \{x, y\} \subseteq G, G\cap G_0 \ne \emptyset \right\}\subseteq \mathcal{H}_2.$$ If $y\in G_0$, that is, $ y=z_4$, then \begin{align}\label{eq11} \vert\{G\in \mathcal{H}_2: \{x, y\}\}\vert={n-2\choose 2}, \end{align} Otherwise, $y\not \in G_0$. We have \begin{align}\label{eq12} \vert\{G\in \mathcal{H}_2: \{x, y\}\}\vert=4n-18. \end{align} In particular, $\{x, y , z_1, z_2\}$, $\{x, y , z_1, z_3\}$ and $\{x, y , z_2, z_3\}$ are in $\mathcal{H}_2$. Assume that applying shifts in $[n]\setminus X_2$ to $\mathcal{H}_2$, we get $\mathcal{H'}$, such that $S_{x'y'}(\mathcal{H'})$ is EKR or HM or contained in $\mathcal{J}_2$ at center $x'$. However, the three sets $\{x, y , z_1, z_2\}$, $\{x, y , z_1, z_3\}$ and $\{x, y , z_2, z_3\}$ are still in $S_{x'y'}(\mathcal{H'})$ and they miss $x'$, a contradiction. Thus we will not reach {\it Cases 1-3}. Assume we reach {\it Case 4} as in Remark \ref{remark2.4}, i.e., $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_2$ with core $A$. By (\ref{eq11}), (\ref{eq12}), Claim \ref{clm 1} and Fact \ref{fact 2} (ii), we have $A=\{x, y, x'\}$. 
However $\{z_1, z_2, z_3\}\cap \{x, y , x', y'\} = \emptyset$, after a series of shifts of $[n]\setminus X_2$ to $G_0=\{z_1, z_2, z_3, z_4\}$, we get the resulting set $G'_0\in \mathcal{H'}$ satisfying that $|G'_0\cap |\le \{x, y , x', y'\}1$, a contradiction to that $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_2$ with core $\{x, y, x'\}$. Thus we will not reach {\it Case 4}. At last, assume $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_3$ as in Remark \ref{remark2.4} ({\it Case 5}) with center and core forming a 4-set $B$. By Fact \ref{fact 2} (iv), $x'\in B$. By Claim \ref{clm 2} (ii) and (\ref{eq11}), (\ref{eq12}), there are at least $4n-18> 3n-12$ ( $n > 6$ ) sets containing $\{x, y\}$, so $\{x, y, x'\}\subset B$. And if $\{x, y\}\subset E$, then the number of sets containing $\{x, y\}$ in $\mathcal{H'}$ is at most $2n-7$, which is smaller than $4n-18$, this contradicts to (\ref{eq12}). Thus the resulting family can only have center $x$ or center $y$. First assume $y\in G_0$, that is $y=z_4$ and $G_0=\{y, z_1, z_2, z_3\}$. This implies that $\{x, z_1, z_2, z_3\}\in \mathcal{H}_2$. Both $\{x, z_1, z_2, z_3\}$ and $G_0=\{z_1, z_2, z_3, z_4\}$ are stable under shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_2)$, so both of them are in $S_{x'y'}(\mathcal{H'})$. Since $x, x' \not \in G_0$ and $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_3$ with $B \supset \{x, y, x'\}$, $x$ can not be the center. But if $y$ is the center, since $x', y \not \in \{x, z_1, z_2, z_3\}$, also a contradiction. Next assume $y \not \in G_0$. Notice that $\{z_1, z_2, z_3\} \cap \{x, y, x', y'\} = \emptyset$, after a series of shifts of $[n]\setminus X_2$ to $G_0$, the resulting set $G'_0\in S_{x'y'}(\mathcal{H'})$ satisfies that $G'_0 \cap \{x, y\}= \emptyset$, also contradicts that $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_3$ with $B\supset\{x, y, x'\}$, hence we will not reach {\it Case 5}. Notice that if $y\in G_0$, we have $\{x, z_1, z_2, z_3\}\in \mathcal{H}_2$ and $G_0=\{y, z_1, z_2, z_3\}\in \mathcal{H}_2$. Note that $\{z_1, z_2, z_3, y\}$ is stable under shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_2)$, so $G_0=\{z_1, z_2, z_3, y\}\in \mathcal{G}$. In this case, every member in $\mathcal{H}_2$ contains $x$ or $y$, Since every member in $\mathcal{H}_2$ is stable at $x$ and $y$, every member in $\mathcal{G}$ contains $x$ or $y$. If $y\not \in G_0$, then $G'_0=\{z_1, z_2, z_3, z'_4\}\in \mathcal{G}$ for some $z'_4\neq y$, and this is the only set in $\mathcal{G}$ that disjoint from set $\{x, y\}$. \end{proof} \begin{lemma}[{\it Case 3}]\label{case3} If we each a 4-uniform $\mathcal{H}_3$ such that $S_{xy}(\mathcal{H}_3)\subseteq\mathcal{J}_2$ at center $x$, kernel $E$ and the set of pages $J$, then there is a set $X_3=\{x, y, z_1, z_2, z_3\}$ such that after a series of shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_3)$ to $\mathcal{H}_3$, we will reach a stable family $\mathcal{G}$ satisfying the conditions of Theorem \ref{thm1.5} and $G\cap X_3\ne \emptyset$ for any $G\in \mathcal{G}$. Moreover, either $|G\cap X_3|\ge 2$ for any $G\in \mathcal{G}$, or $|G\cap G'|\ge 2$ if $G\cap X_3=\{x\}$ and $G'\cap X_3=\{y\}$. \end{lemma} \begin{proof} We will meet one of the following three cases. Case (a): $y\in E$. In this case, let $E=\{y, z_1, z_2\}, J=\{z_3, z_4\}$ and $X_3:=\{x, y, z_1, z_2, z_3\}$. Case (b): $y\in J$. In this case, let $E=\{z_1, z_2, z_3\}, J=\{y, z_4\}$ and $X_3:=\{x, y, z_1, z_2, z_3\}$. Case (c): $y\in [n]\setminus (E\cup J\cup \{x\})$. 
In this case, let $E=\{z_1, z_2, z_3\}, J=\{z_4, z_5\}$ and $X_3:=\{x, y, z_1, z_2, z_3\}$. In each of the above three cases, by the maximality of $|\mathcal{H}|$ $(|\mathcal{H}_3|$ as well), $\{x, y, z_1, z_2\}$, $\{x, y, z_1, z_3\}$, $\{x, y, z_2, z_3\}$ are in $\mathcal{H}_3$, and they are stable after a series of shifts in $[n]\setminus X_3$, so we will not reach {\it Cases 1-3} after performing shifts in $[n]\setminus X_3$. Assume that applying shifts in $[n]\setminus X_3$ to $\mathcal{H}_3$, we get $\mathcal{H''}$, such that $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_2$ with core $A$. Similarly, by the maximality of $|\mathcal{H}_3|$ and direct computation, we have the following claim: \begin{clm}\label{clm3.7} There are at least ${n-2 \choose 2}, 4n-18, 3n-11$ members that contain $\{x, y\}$ in Cases (a), (b), (c) respectively. \end{clm} Notice that ${n-2 \choose 2}, 4n-18, 3n-11> 2n-7$. By Claim \ref{clm 1}, Claim \ref{clm3.7} and Fact \ref{fact 2} (ii), $A=\{x, y, x'\}$. In Case (a) or (b), we can see that $\{y, z_1, z_2, z_3\}\in \mathcal{H'}, |\{y, z_1, z_2, z_3\}\cap A|=1$, a contradiction. In Case (c), we have $\{z_1, z_2, z_3, z_4\}\in \mathcal{H}_3$, after some shifts in $[n]\setminus X_3$, it becomes $F$ in $\mathcal{H'}$, and $ |F\cap A|\le1$, a contradiction to that $\mathcal{H'}\subseteq\mathcal{G}_2$ with core $\{x, y, x'\}$. Thus we will not reach {\it Case 4} after performing shifts in $[n]\setminus X_3$ repeatedly. At last, we assume that $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq \mathcal{G}_3$ with center and core forming a 4-set $B$. By Claim \ref{clm 2}, Claim \ref{clm3.7} and Fact \ref{fact 2} (iv), we have $\{x, y, x'\}\subseteq B$, and the center of $\mathcal{H'}$ must be $x$ or $y$. In Cases (a) and (b), we have $\{y, z_1, z_2, z_3\} \in \mathcal{H}_3$, so in $\mathcal{H'}$. Since $ x, x'\not \in \{y, z_1, z_2, z_3\}$, $\mathcal{H'}$ can not be contained in $\mathcal{G}_3$ with $B\supset \{x, y, x'\}$ and center $x$. Since $\{x, z_1, z_2, z_3\} \in \mathcal{H}_3$, so in $\mathcal{H'}$ as well. Notice that $ y, x'\not \in \{x, z_1, z_2, z_3\}$, $\mathcal{H'}$ can not be contained in $\mathcal{G}_3$ with $B\supset \{x, y, x'\}$ and center $y$. A contradiction. Now consider Case (c). In this case, $\{z_1, z_2, z_3, z_4\}\in \mathcal{H}_3$. Because it is stable at $\{z_1, z_2, z_3\}$ under any shift in $[n]\setminus X_3$, the resulting set $\{z_1, z_2, z_3, z'_4\}$ does not contain $x$ or $y$. This contradicts that $\mathcal{H'}\subseteq\mathcal{G}_3$ with $B\supset \{x, y, x'\}$ and center $x$ or $y$. If Case (a) or (b) happens, then any 4-set $G\in \mathcal{G}$ satisfies $|G\cap X_3|\ge 2$. If Case (c) happens, since $\{z_1, z_2, z_3, z_4\}$ and $\{z_1, z_2, z_3, z_5\}$ are the only two sets disjoint from $\{x, y\}$ in $S_{xy}(\mathcal{H}_3)$, then every set in $\mathcal{H}_3$ (so in $\mathcal{G}$) missing $x$ and $y$ must contain $\{z_1, z_2, z_3\}$. If $x\in G$, $y\in G'$ and $G\cap \{z_1, z_2, z_3, y\}=G'\cap \{z_1, z_2, z_3, x\}=\emptyset$, let $F, F'$ $\in \mathcal{H}_3$ such that $G$ and $G'$ become their resulting sets in $\mathcal{G}$ after a series of shifts in $[n]\setminus X_3$. By the reason that $S_{xy}(\mathcal{H}_3)\subseteq \mathcal{J}_2$ with center $x$, kernel $\{z_1, z_2, z_3\}$ and the set of pages $\{z_4, z_5\}$, for any set $H\in\mathcal{H}_3$ satisfying that $|H\cap \{x, y\}|=1$ and $H\cap\{z_1, z_2, z_3\}=\emptyset$, we have $\{z_4, z_5\}\subseteq H$. 
So $\{z_4, z_5\}\subseteq F\cap F'$, consequently, $|G\cap G'|\ge 2$. \end{proof} \begin{lemma}[{\it Case 4}]\label{case4} If we reach a 4-uniform $\mathcal{H}_4$ such that $S_{xy}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x, x_1, x_2\}$, then there is a set $X_4=\{x, y, x_1, x_2, x_3\}$ such that after a series of shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_4)$ to $\mathcal{H}_4$, we will reach a stable family $\mathcal{G}$ satisfying the conditions of Theorem \ref{thm1.5}. Moreover, $\{x, y, x_1, x_3\} \in \mathcal{G}$ and $G\cap X_4\ne \emptyset$ for any $G\in \mathcal{G}$. \end{lemma} \begin{proof} Since $S_{xy}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $A$, by Fact \ref{fact 2} (ii), we have that $x\in A$ and $y\not \in A$. Let $A=\{x, x_1, x_2\}$. By the maximality of $|\mathcal{H}_4|$, we may assume \begin{align*} &\left\{ G\in {X \choose 4}: \{x_1, x_2\} \subseteq G \right\}\subseteq \mathcal{H}_4, \\ &\left\{G\in {X \choose 4}: \{x, y\} \subseteq G, G\cap \{x_1, x_2\} \ne \emptyset \right\}\subseteq \mathcal{H}_4. \end{align*} So \begin{align}\label{eq13} &\vert \left\{ G\in \mathcal{H}_4: \{x_1, x_2\} \subseteq G \right\}\vert={n-2 \choose 2}, \\ \label{eq14} &\vert \left\{G\in \mathcal{H}_4: \{x, y\} \subseteq G, G\cap \{x_1, x_2\} \ne \emptyset \right\}\vert=2n-7. \end{align} Choose a set $G=\{x, y, x_1, x_3\}\in \mathcal{H}_4$ and let $X_4:=\{x, y, x_1, x_2, x_3\}$. Since $S_{xy}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x, x_1, x_2\}$, every member in $\mathcal{H}_4$ intersects $X_4$. Every member in $\mathcal{H}_4$ is stable at every element in $X_4$ under shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_4)$. So $\{x, y, x_1, x_3\}$ is in the final stable family $\mathcal{G}$ and $G\cap X_4\ne \emptyset$ for any $G\in \mathcal{G}$. What remains is to show that performing shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_4)$ to $\mathcal{H}_4$ will not reach {\it Cases 1-5} in Remark \ref{remark2.4}. By (\ref{eq13}), for any $x'\in [n]\setminus X_4$, there are at least ${n-3 \choose 2}$ members in $\mathcal{H}_4$ missing $x'$, so we can not reach {\it Cases 1-3}. Assume $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_2$ with core $A'$. By (\ref{eq13}), Fact \ref{fact 2} (ii) and Claim \ref{clm 1}, $A'=\{x', x_1, x_2\}$. Since $G \in \mathcal{H'}$, and $|H\cap A'|=1$, we get a contradiction, hence we will not reach {\it Case 4}. At last, assume $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B$. By Fact \ref{fact 2} (iv), $x'\in B$. By Claim \ref{clm 2} (ii) and (\ref{eq13}), $\{x_1, x_2\}\subseteq B$ and the center must be $x_1$ or $x_2$. That is $\{x_1, x_2, x'\}\subset B$. Since $|B|=4$, $|\{x, y\}\cap B|=0 $ or $1$. If $|\{x, y\}\cap B|=0 $, then the sets containing $\{x, y\}$ in $\mathcal{H'}$ must contain center and one point of core $A'$, so $d_{\{x, y\}}\le 3$. If $|\{x, y\}\cap B|=1$, then the sets containing $\{x, y\}$ in $\mathcal{H'}$ either contain center or contain core $A'$, so $d_{\{x, y\}}\le n-3+1=n-2$. These members containing $\{x, y\}$ in $\mathcal{H}_4$ are also in $\mathcal{H'}$, by (\ref{eq14}), there are at least $2n-7$, a contradiction. Hence we can not reach {\it Case 5}. 
\end{proof} \begin{lemma}[{\it Case 5}]\label{case5} If we reach a 4-uniform $\mathcal{H}_5$ such that $S_{xy}(\mathcal{H}_5)\subseteq\mathcal{G}_3$ with center and core $E$ forming a 4-set $B$, then there is a set $X_5=\{x, y, x_1, x_2, x_3\}$ such that after a series of shifts $S_{x'y'}$ $(x'<y'$ and $x', y'\in [n]\setminus X_5)$ to $\mathcal{H}_5$, we will reach a stable family $\mathcal{G}$ satisfying the conditions of Theorem \ref{thm1.5}. Furthermore, $\vert G\cap X_5\vert\ge 2$ for each $G\in \mathcal{G}$. \end{lemma} \begin{proof} For $S_{xy}(\mathcal{H}_5)$, we will meet one of the following three cases. Case (a): $x$ is the center, $y \in E$, and $E=\{y, x_1, x_2\}$. In this case, we may assume that $ \{y, x_1, x_2, x_3\}\in S_{xy}(\mathcal{H}_5)$ for some $x_3\in [n]\setminus B$. Let $X_5:= \{x, y, x_1, x_2, x_3\}$. Case (b): $x$ is the center, $y \not \in E$, and $E=\{ x_1, x_2, x_3\}$. In this case, let $X_5:= \{x, y, x_1, x_2, x_3\}$. Case (c): $x_1$ is the center, $x \in E$, and $E=\{x, x_2, x_3\}, y\in [n]\setminus B$. In this case, let $X_5:= \{x, y, x_1, x_2, x_3\}$. We first observe that $\vert G\cap X_5\vert\ge 2$ for each $G\in \mathcal{H}_5$ in each case. First we consider Case (a). In this case, a member in $\mathcal{H}_5$ must contain $x$ or $y$. By the maximality of $|\mathcal{H}_5|$, we may assume \begin{align*} \left\{ G\in {X \choose 4}: \{x, y\} \subseteq G \right\}\subseteq \mathcal{H}_5. \end{align*} So \begin{align}\label{eq15} \vert \left\{ G\in \mathcal{H}_5: \{x, y\} \subseteq G \right\}\vert={n-2 \choose 2}. \end{align} Performing $S_{x'y'}$ in $[n]\setminus X_5$ to $\mathcal{H}_5$ will not reach {\it Cases 1-3} since there are at least ${n-3 \choose 2}$ members that containing $\{x, y\}$ and missing $x'$ in $\mathcal{H}_5$ and these sets are stable after $S_{x'y'}$ in $[n]\setminus X_5$ (by (\ref{eq15})). Assume that $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_2$ with core $A$. By (\ref{eq15}), Fact \ref{fact 2} (ii) and Claim \ref{clm 1}, $A=\{x', x, y\}$. Since $S_{xy}(\mathcal{H}_5)$ is not EKR, $\{y, x_1, x_2, x_3\} \in S_{xy}(\mathcal{H}_5)$, $\{x, x_1, x_2, x_3\} \in \mathcal{H}_5$, so in $\mathcal{H'}$. However, $|\{x, x_1, x_2, x_3\}\cap A|=1$, this is a contradiction, hence we will not reach {\it Case 4}. Assume that $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B'$. By (\ref{eq15}), Fact \ref{fact 2} (iv) and Claim \ref{clm 2} (ii), $\{x, y, x'\}\subseteq B'$, and the center is either $x$ or $y$. In either case, the existence of $\{x, x_1, x_2, x_3\}$ and $ \{y, x_1, x_2, x_3\} $ will lead to a contradiction. Hence we will not reach {\it Case 5}. Next we consider Case (b). By the maximality of $|\mathcal{H}_5|$, we may assume that $$\left\{G\in {X \choose 4}: \{x, y\} \subseteq G\,\,\text{and}\,\, G\cap E \ne \emptyset \right\}\subseteq \mathcal{H}_5$$ and $$\left\{G\in {X \choose 4}: \{x_1, x_2, x_3\} \subseteq G \right\}\subseteq \mathcal{H}_5.$$ In particular, $\{x, x_1, x_2, x_3\}\in \mathcal{H}_5$ and $\{y, x_1, x_2, x_3\}\in \mathcal{H}_5$. Computing directly, we have \begin{equation}\label{eq16} \vert\left\{G\in \mathcal{H}_5: \{x, y\} \subseteq G, G\cap E \ne \emptyset \right\}\vert=3n-12 \end{equation} and \begin{equation}\label{eq17} \vert\left\{G\in \mathcal{H}_5: \{x_1, x_2, x_3\} \subseteq G \right\}\vert=n-3. 
\end{equation} Since $\{x, y, x_1, x_2\}, \{x, y, x_1, x_3\}, \{x, y, x_2, x_3\}\in \mathcal{H}_5$ and these sets miss $x'$ and are stable after shifts $S_{x'y'}$ ($x' <y'$ and $x', y' \in [n]\setminus X_5$), we will not reach {\it Cases 1-3}. Assume $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_2$ with core $A$, where $x' <y'$ and $x', y' \in [n]\setminus X_5$. By (\ref{eq16}), Fact \ref{fact 2} (ii) and Claim \ref{clm 1}, we have $A=\{x', x, y\}$. However $\{x, x_1, x_2, x_3\}\in \mathcal{H'}$ and $|\{x, x_1, x_2, x_3\}\cap A|=1$, a contradiction, so we will not reach {\it Case 4}. Assume that $\mathcal{H'}:=S_{x'y'}(\mathcal{H''})\subseteq\mathcal{G}_3$ with center and core forming a 4-set $B'$. By Fact \ref{fact 2} (iv), $x' \in B'$. Equation (\ref{eq16}) and Claim \ref{clm 2} (i) imply that the center must be $x$ or $y$. By Claim \ref{clm00} and (\ref{eq17}), either $\{x_1, x_2, x_3\} \subset B'$ or $|\{x_1, x_2, x_3\} \cap B'|=2$ and one of $\{ x_1, x_2, x_3\}$ is the center. But it's impossible to satisfy both conditions in the previous paragraph and this paragraph, hence we will not reach {\it Case 5}. At last we consider Case (c). By the maximality of $|\mathcal{H}_5|$, we may assume \begin{align*} \left\{G\in {X \choose 4}: \{x_1, x_2\} \subseteq G \right\}\subseteq \mathcal{H}_5 \,\,\,\, \text{and}\,\,\,\, \left\{G\in {X \choose 4}: \{x_1, x_3\} \subseteq G \right\}\subseteq \mathcal{H}_5. \end{align*} By direct computation, \begin{align}\label{eq18} &\vert \left\{ G\in \mathcal{H}_5: \{x_1, x_2\} \subseteq G \right\}\vert={n-2 \choose 2}, \\ \label{eq19} &\vert \left\{G\in \mathcal{H}_5: \{x_1, x_3\} \subseteq G \right\}\vert={n-2 \choose 2}. \end{align} Since there are ${n-3 \choose 2}$ sets containing $\{x_1, x_2\}$ but missing $x'$, after performing $S_{x'y'}$ in $[n]\setminus X_5$ to $\mathcal{H}_5$, we will not reach {\it Case 1-3}. If we reach {\it Cases 4}, that is, after performing shifts in $[n]\setminus X_5$ to $\mathcal{H}_5$ repeatedly, $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_2$ with core $A$. By (\ref{eq18}), (\ref{eq19}), Fact \ref{fact 2} (ii) and Claim \ref{clm 1}, $x', x_1, x_2, x_3\in A$, but $|A|=3$, a contradiction. If we reach {\it Case 5}, that is $S_{x'y'}(\mathcal{H'})\subseteq\mathcal{G}_3$ with the center and the core forming a 4-set $B'$. By (\ref{eq18}), (\ref{eq19}), Fact \ref{fact 2} (iv) and Claim \ref{clm 1}, $B'=\{x_1, x_2, x_3, x'\}$, and $x_1$ is the center. Recall that $\{x, y, x_2, x_3\}\in \mathcal{H}_5$, also in $S_{x'y'}(\mathcal{H'})$, but $\{x, y, x_2, x_3\}\cap \{x_1, x', y'\} = \emptyset$, a contradiction, hence we cannot reach {\it Case 5}. As remarked earlier, $\vert G\cap X_5\vert\ge 2$ for each $G\in \mathcal{H}_5$. Note that performing shifts in $[n]\setminus X_5$ to $\mathcal{H}_5$ keeps this property, so $\vert G\cap X_5\vert\ge 2$ for each $G\in \mathcal{G}$. \end{proof} By Lemmas \ref{case1} to \ref{case5}, we have shown that if one of {\it Case 1-5} happens, then there exists a set $X_i$ with $|X_i| \le 5$ and $\{x, y\} \subseteq X_i$ such that performing shifts in $[n]\setminus X_i$ to $\mathcal{H}_i$ will not result in any case of {\it Case 1-5}, so the final family is a stable family satisfying the conditions in Theorem \ref{thm1.5}. Furthermore, $G \cap X_i \ne \emptyset$ for any set $G$ in the final family. So we complete the proof of Lemma \ref{lem2.1}. \subsection{Proof of Lemma \ref{lem2.2}} \begin{proof} We first consider $k\ge5$. In this case, we have $|X_i|\le 3$ and $|Y_i|\ge 2k-3$. 
\subsection{Proof of Lemma \ref{lem2.2}} \begin{proof} We first consider $k\ge5$. In this case, we have $|X_i|\le 3$ and $|Y_i|\ge 2k-3$. We first prove (ii). Assume on the contrary that there are $G, G'\in \mathcal{G}$ such that $G\cap G' \cap Y= \emptyset$, chosen so that $|G\cap G'|$ is minimum among all pairs of sets in $\mathcal{G}$ not intersecting in $Y$. Clearly $|G\cap G' \cap ([n]\setminus Y)|\ge1$. Note that $| (G\cup G' )\cap Y_i|\le |G\cap Y_i|+|G'\cap Y_i|\le 2k-4$ (since $|G\cap ([n]\setminus Y)|\ge1$ and $|G\cap X_i|\ge1$, we have $|G\cap Y_i|\le k-2$, and the same holds for $G'$). But $|Y_i|\ge 2k-3$, so there exists a point $a\in Y_i\setminus (G\cup G' )$. Pick any point $b\in G\cap G' \cap ([n]\setminus Y)$; then $a<b$. Notice that $\mathcal{G}$ is stable on $[n]\setminus X_i$, so $G'':=(G'\setminus \{b\})\cup \{a\}\in \mathcal{G}$. Then $G\cap G'' \cap Y= \emptyset$ and $|G\cap G''|<|G\cap G'|$, contradicting the minimality of $|G\cap G'|$. For (i), assume on the contrary that $\mathcal{A}_1\ne \emptyset$. Let $\{x\}\in \mathcal{A}_1$; then there is a set $G\in \mathcal{G}$ such that $G\cap Y=\{x\}$. By (ii), for any other set $G'\in \mathcal{G}$ we have $G\cap G' \cap Y\ne \emptyset$, so $x\in G'$. This implies that $\mathcal{G}$ is EKR, a contradiction, so $\mathcal{A}_1=\emptyset$. Next we consider $k=4$. In this case, for $1\le i\le 5$, $|X_i|=5$ and $|Y_i|=9-5=4$, and for $i=6$, $|X_i|=0$ and $|Y_i|=9 $. \begin{clm}\label{clm3.8} If $G$ and $G'$ in $\mathcal{G}$ satisfy $|Y_i\setminus (G\cup G')|\ge |G\cap G'\cap ([n]\setminus Y)|$, then $G\cap G' \cap Y\ne \emptyset$. \end{clm} \begin{proof} If $G\cap G'\cap Y=\emptyset$, then $D:=G\cap G'\cap ([n]\setminus Y)\ne \emptyset$. Since $|Y_i\setminus (G\cup G')|\ge$ $|G\cap G'\cap ([n]\setminus Y)|$, there is a subset $D' \subseteq Y_i\setminus (G\cup G')$ with size $|D'|=|D|$. By the definition of $Y_i$, all numbers in $D'$ are smaller than those in $D$. Since $\mathcal{G}$ is stable on $[n]\setminus X_i$, $F:=(G'\setminus D)\cup D'\in \mathcal{G}$. However, $G \cap F=\emptyset$, a contradiction to the intersecting property of $\mathcal{G}$. So $G\cap G'\cap Y \ne\emptyset$. \end{proof} \begin{clm}\label{clm3.9} $ |\mathcal{A}_1|\le 1$; $\mathcal{A}_2$ and $\mathcal{A}_4$ are intersecting. \end{clm} \begin{proof} Obviously, $\mathcal{A}_4$ is intersecting. Assume that $|\mathcal{A}_1|\ge 2$ and $\{x_1\}, \{x_2\}\in \mathcal{A}_1$. Then there are $G$ and $G'$ in $\widetilde{\mathcal{A}_1}$ such that $G\cap Y=\{x_1\}$ and $G'\cap Y=\{x_2\}$. Since any set in $\mathcal{G}$ intersects with $X_i$ (for $i\in [5]$), $x_1, x_2 \in X_i$. So $1\le |G\cap G'\cap ([n]\setminus Y)|\le 3< 4=|Y_i\setminus (G\cap G')|$. By Claim \ref{clm3.8}, $G\cap G'\cap Y\ne \emptyset$, a contradiction. Hence, $ |\mathcal{A}_1|\le 1$. Let $G$ and $G'$ be in $\widetilde{\mathcal{A}_2}$. Then $|G\cap G'\cap ([n]\setminus Y)|\le 2$. Since $|G\cap X_i|\ge 1$ and $|G'\cap X_i|\ge 1$ (for $i\in [5]$), we have $|Y_i\setminus (G\cup G')|\ge 2$. By Claim \ref{clm3.8}, $G\cap G'\cap Y\ne \emptyset$, that is, $\mathcal{A}_2$ is intersecting, as desired. \end{proof} \begin{clm} $\mathcal{A}_1=\emptyset$. \end{clm} \begin{proof} By Claim \ref{clm3.9}, $\vert\mathcal{A}_1\vert \le 1$. Suppose on the contrary that $\mathcal{A}_1=\{\{x\}\} $ for some $x\in X_i$. For any $G \in \widetilde{\mathcal{A}_1}$ and $G'\in \widetilde{\mathcal{A}_j}$ (for $j=2, 3, 4$), $G$ and $G'$ satisfy the condition of Claim \ref{clm3.8}, so $G\cap G'\cap Y\ne \emptyset$; this implies that $x\in G'$, and hence $\mathcal{G}$ is EKR, a contradiction.
\end{proof} \begin{clm}\label{clm3.10} $\mathcal{A}_2$ and $\mathcal{A}_3$ are cross-intersecting. \end{clm} \begin{proof} Let $G\in \widetilde{\mathcal{A}_2}$ and $G'\in \widetilde{\mathcal{A}_3}$. Then $|G \cap G' \cap ([n] \setminus Y)|\le 1$. Since any set in $\mathcal{G}$ intersects with $X_i$ (for $i \in [5]$), $|Y_i\setminus (G\cup G')|\ge 1$. By Claim \ref{clm3.8}, $G\cap G'\cap Y\ne \emptyset$, that is, $\mathcal{A}_2$ and $\mathcal{A}_3$ are cross-intersecting, as desired. \end{proof} \begin{clm}\label{clm3.11} $\mathcal{A}_3$ is intersecting. \end{clm} \begin{proof} Assume on the contrary that there exist $A$, $A'\in \mathcal{A}_3$ and $G$, $G' \in \widetilde{\mathcal{A}_3}$ such that $G\cap Y=A$, $G'\cap Y=A'$ and $A\cap A'=\emptyset$; in other words, $G\cap G'\cap Y=\emptyset$ and $|G\cap G'\cap ( [n]\setminus Y)|=1$. If $|(G\cup G')\cap Y_i|\le 3$, then by Claim \ref{clm3.8}, $G\cap G'\cap Y\ne\emptyset$, a contradiction. Hence we only need to consider the following case: $|A\cap X_i|=1, |A\cap Y_i|=2, |A'\cap X_i|=1$ and $|A'\cap Y_i|=2$. Now we show the conclusion for each case of Lemma \ref{lem2.1}. All sets below are inherited from the proof of Lemma \ref{lem2.1} for each corresponding case. If we meet {\it Case 1} in Lemma \ref{lem2.1}, then by Lemma \ref{case1}, we have that $X_1=\{x, y, y', z_1, w_1\}$ or $X_1=\{x, y, y', z_2, w_2\}$, and we may assume that $G\cap X_i=\{x\}$ and $G'\cap X_i=\{y\}$. Respectively, $\{y, y', z_1, w_1\}\in \mathcal{G}$ or $\{x, y', z_2, w_2\}\in \mathcal{G}$, which is disjoint from $G$ or $G'$, a contradiction to the intersecting property of $\mathcal{G}$. If we meet {\it Case 2} in Lemma \ref{lem2.1}, then by Lemma \ref{case2}, we have that $X_2=\{x, y, z_1, z_2, z_3\}$, and either $\{z_1, z_2, z_3, y\}\in \mathcal{G}$ or $\{z_1, z_2, z_3, z_4'\}\in \mathcal{G}$ for some $z_4' \neq y$. Furthermore, if $\{z_1, z_2, z_3, y\}\in \mathcal{G}$, then every member in $\mathcal{G}$ contains $x$ or $y$. So we may assume that $G\cap X_i=\{x\}$ and $G'\cap X_i=\{y\}$. Then $\{z_1, z_2, z_3, y\}\cap G=\emptyset$, a contradiction. If $\{z_1, z_2, z_3, z'_4\}\in \mathcal{G}$, then every other member in $\mathcal{G}$ contains $x$ or $y$, and we may assume that $G\cap X_i=\{x\}$ and $G'\cap X_i=\{y\}$. Since $\mathcal{G}$ is stable, we may assume that $z'_4 \in Y_i$. Recall that $|G\cap Y_i|=|G'\cap Y_i|=2$, hence $\{z_1, z_2, z_3, z'_4\}$ must be disjoint from $G$ or $G'$, a contradiction. If we meet {\it Case 3} in Lemma \ref{lem2.1}, then by Lemma \ref{case3}, $|G\cap G'|\ge 2$, a contradiction. If we meet {\it Case 4} in Lemma \ref{lem2.1}, then by Lemma \ref{case4}, we have that $X_4=\{x, y, x_1, x_2, x_3\}$, $\{x, y, x_1, x_3\}\in \mathcal{G}$ and $S_{xy}(\mathcal{H}_4)\subseteq\mathcal{G}_2$ with core $\{x, x_1, x_2\}$. So for every set $F$ in $\mathcal{H}_4$, either $|F\cap \{x, x_1, x_2\}|\ge 2$, or $F\cap \{x, x_1, x_2\}=\{x_1\}$ and $y\in F$, or $F\cap \{x, x_1, x_2\}=\{x_2\}$ and $y\in F$. In all cases, $\vert F\cap X_4\vert\ge 2$. Performing shifts in $[n]\setminus X_4$ will not change these properties, hence every set in $\mathcal{G}$ also has the same properties; in particular, $G$ and $G'$ do. This contradicts $\vert G\cap X_4\vert=\vert G'\cap X_4\vert=1$. If we meet {\it Case 5} in Lemma \ref{lem2.1}, then by Lemma \ref{case5} we have $\vert G\cap X_5\vert\ge 2$ for each $G\in \mathcal{G}$. This contradicts $\vert G\cap X_5\vert=\vert G'\cap X_5\vert =1$.
At last, assume that we do not meet any of Cases 1-5 in Lemma \ref{lem2.1} when we perform shifts repeatedly to $\mathcal{G}$. In this case, $Y=[2k]$. Assume on the contrary that there are $G, G'\in \mathcal{G}$ such that $G\cap G' \cap Y= \emptyset$, chosen so that $|G\cap G'|$ is minimum among all pairs of sets in $\mathcal{G}$ not intersecting in $Y$. Then $|G\cap G' \cap (X\setminus Y)|\ge1$. Consequently, $| (G\cup G' )\cap Y|\le |G\cap Y|+|G'\cap Y|\le 2k-2$ since $|G\cap Y|\le k-1$ and $|G'\cap Y|\le k-1$. So there exists a point $a\in Y\setminus (G\cup G' )$. Pick any point $b\in G\cap G' \cap (X\setminus Y)$. Note that $a<b$; then $G'':=(G'\setminus \{b\})\cup \{a\}\in \mathcal{G}$ since $\mathcal{G}$ is stable. It is easy to see that $G\cap G'' \cap Y= \emptyset$ and $|G\cap G''|<|G\cap G'|$, contradicting the minimality of $|G\cap G'|$. \end{proof} Since $\mathcal{G}$ is intersecting, $\mathcal{A}_2$ and $\mathcal{A}_4$ are cross-intersecting, and $\mathcal{A}_3$ and $\mathcal{A}_4$ are cross-intersecting. Combining this with Claims \ref{clm3.9}, \ref{clm3.10} and \ref{clm3.11}, we have completed the proof of (ii). \end{proof} \section{Concluding remarks} It is natural to ask for the maximum size of a $k$-uniform intersecting family $\mathcal{F}$ with $\tau(\mathcal{F})\ge 3$. For this problem, Frankl \cite{Fra1980} gave an upper bound for sufficiently large $n$. To introduce the result, we need the following construction. \begin{construction} Let $x\in [n]$, $Y\subseteq [n]$ with $|Y|=k$, and $Z\subseteq [n]$ with $|Z|=k-1$, $x\not\in Y\cup Z$, $Z\cap Y=\emptyset$ and $Y_0=\{y_1, y_2\}\subseteq Y$. Define \begin{gather*} \mathcal{G}=\{A\subseteq [n]: x\in A, A\cap Y\ne \emptyset \,\,\text{and}\,\, A\cap Z\ne \emptyset\}\cup\{Y, Z\cup \{y_1\}, Z\cup \{y_2\}, \{x, y_1, y_2 \}\},\\ FP(n, k)=\{F\subseteq [n]: |F|=k, \ \exists\, G\in \mathcal{G} \ \text{such that}\ G\subseteq F\}. \end{gather*} \end{construction} It is easy to see that $FP(n, k)$ is intersecting and $\tau(FP(n, k))=3$. \begin{theorem}[Frankl \cite{Fra1980}] Let $k\ge 3$ and $n$ be sufficiently large integers. Let $\mathcal{H}$ be an $n$-vertex $k$-uniform intersecting family with $\tau(\mathcal{H})\ge 3$. Then $|\mathcal{H}|\le |FP(n, k)|$. Moreover, for $k\ge 4$, the equality holds only for $\mathcal{H}=FP(n, k)$. \end{theorem} It is also interesting to determine the maximum size of a $k$-uniform intersecting family with covering number at least $4$. \section{Acknowledgements} This research is supported by the National Natural Science Foundation of China (Grant No. 11931002). \frenchspacing \begin{thebibliography}{99} \bibitem{EKR1961} P. Erd\H{o}s, C. Ko, R. Rado, Intersection theorems for systems of finite sets, Quart. J. Math. Oxford Ser. (2) 12 (1961) 313--320. \bibitem{MC1965} P. Erd\H{o}s, A problem on independent $r$-tuples, Ann. Univ. Sci. Budapest. E\"{o}tv\"{o}s Sect. Math. 8 (1965) 93--95. \bibitem{Fra1980} P. Frankl, On intersecting families of finite sets, Bull. Austral. Math. Soc. 21 (1980), no. 3, 363--372. \bibitem{Fra1987shift} P. Frankl, The shifting techniques in extremal set theory, in: C. Whitehead (Ed.), Surveys in Combinatorics, LMS Lecture Note Series, vol. 123, Cambridge Univ. Press, 1987, pp. 81--110. \bibitem{Fra2013} P. Frankl, Improved bounds for Erd\H{o}s' matching conjecture, J. Combin. Theory Ser. A 120 (2013) 1068--1072. \bibitem{FT2018} P. Frankl, N. Tokushige, Extremal Problems for Finite Sets, Student Mathematical Library 86, Amer. Math. Soc., Providence, RI, 2018. \bibitem{FraX} P.
Frankl, On the maximum number of edges in a hypergraph with given matching number, arXiv:1205.6847. \bibitem{HK2017} J. Han, Y. Kohayakawa, The maximum size of a non-trivial intersecting uniform family that is not a subfamily of the Hilton--Milner family, Proc. Amer. Math. Soc. 145 (1) (2017) 73--87. \bibitem{HM1967} A.J.W. Hilton, E.C. Milner, Some intersection theorems for systems of finite sets, Q. J. Math. 18 (1967) 369--384. \bibitem{HLS2012} H. Huang, P. Loh, B. Sudakov, The size of a hypergraph and its matching number, Combin. Probab. Comput. 21 (2012) 442--450. \bibitem{KM2016} A. Kostochka, D. Mubayi, The structure of large intersecting families, Proc. Amer. Math. Soc. 145 (2017) 2311--2321. \bibitem{LM2014} T. \L uczak, K. Mieczkowska, On Erd\H{o}s' extremal problem on matchings in hypergraphs, J. Combin. Theory Ser. A 124 (2014) 178--194. \end{thebibliography} \end{document}
2205.05377v1
http://arxiv.org/abs/2205.05377v1
Mathematical theory for electromagnetic scattering resonances and field enhancement in a subwavelength annular gap
\documentclass[a4paper,11pt]{article} \usepackage[margin=1.2in]{geometry} \pdfoutput=1 \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \usepackage[utf8]{inputenc} \usepackage{amsfonts,amssymb, bm,amsthm} \usepackage{graphicx,color,epstopdf} \usepackage[colorlinks, linkcolor=red, anchorcolor=blue, citecolor=green ]{hyperref} \usepackage[inline]{showlabels} \allowdisplaybreaks \newtheorem{myprop}{Proposition}[section] \newtheorem{mylemma}{Lemma}[section] \newtheorem{mytheorem}{Theorem}[section] \newtheorem{mycorollary}{Corollary}[section] \newtheorem{myremark}{Remark}[section] \newtheorem{mydef}{Definition}[section] \def\Xint#1{\mathchoice {\XXint\displaystyle\textstyle{#1}} {\XXint\textstyle\scriptstyle{#1}} {\XXint\scriptstyle\scriptscriptstyle{#1}} {\XXint\scriptscriptstyle\scriptscriptstyle{#1}} \!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$} \vcenter{\hbox{$#2#3$}}\kern-.5\wd0}} \def\dd{''} \def\vek{\varepsilon_k} \def\cint{\Xint-} \def\xint{\Xint\times} \def\tx {\tilde{x}} \def\ty {\tilde{y}} \def\rId {{\rm Id}} \def\tu {\tilde{u}} \def\d{{\rm dist}} \def\Op {{\rm Op}} \def\bb {{\bf b}} \def\bc {{\bf c}} \def\bn {{\bf n}} \def\bN {{\bm N}} \def\bR {{\bf R}} \def\bF {{\bf F}} \def\bG {{\bf G}} \def\bE {{\bf E}} \def\bI {{\bf I}} \def\bS {{\bf S}} \def\bD {{\bf D}} \def\bp {{\bf p}} \def\bP {{\bf P}} \def\bB {{\bf B}} \def\bA {{\bf A}} \def\bC {{\bf C}} \def\bH {{\bf H}} \def\bu {{\bm u}} \def\br {{\bm r}} \def\bl {{\bm l}} \def\hE {\hat{E}} \def\hH {\hat{H}} \def\hbE {\hat{\bE}} \def\hbH {\hat{\bH}} \def\Log {{\rm Log}} \def\Si {{\rm Si}} \def\Ci {{\rm Ci}} \def\lsim {{\lesssim}} \def\esim {{\eqsim}} \def\gsim {{\gtrsim}} \def\rsc{{\rm sc}} \def\rog{{\rm og}} \def\rtot{{\rm tot}} \def\wtd#1{{\widetilde{#1}}} \def\Os{{{\cal O}_{\rm small}}} \def\cl {{\rm curl}} \def\dv {{\rm div}} \def\Cl {{\rm Curl}} \def\Dv {{\rm Div}} \def\cS {{\cal S}} \def\cK {{\cal K}} \def\cKd {{\cal K}'} \def\cT {{\cal T}} \def\hhG {\hat{\hat{G}}} \def\hG {\hat{G}} \def\hu {\hat{u}} \def\hf {\hat{f}} \def\btu {{\hat{\bm u}}} \def\bnuc {{{\bm \nu}^{\bf c}}} \def\bnu {{{\bm \nu}^{\bf c}}} \def\ke{{k_{\epsilon}}} \def\mux{{\mu}_\xi} \def\mue{{\mu}_\eta} \def\muem{{\mu}_{\eta,m}} \def\bi{{\bf i}} \def\cN {{\cal N}} \newcommand{\tcr}{\textcolor{red}} \newcommand{\tck}{\textcolor{black}} \begin{document} \title{Mathematical theory for electromagnetic scattering resonances and field enhancement in a subwavelength annular gap} \author{Junshan Lin$^1$, Wangtao Lu$^2$ and Hai Zhang$^3$} \footnotetext[1]{Department of Mathematics, Auburn University, Auburn, AL, 36849, USA. Email: [email protected]. Junshan Lin is partially supported by the NSF grant DMS-2011148.} \footnotetext[2]{School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China. Email: [email protected]. Wangtao Lu is partially supported by NSF of Zhejiang Province for Distinguished Young Scholars Grant LR21A010001, NSFC Grant 12174310, and a Key Project of Joint Funds For Regional Innovation and Development (U21A20425).} \footnotetext[3]{Department of Mathematics, HKUST, Clear Water Bay, Kowloon, Hong Kong. Email: [email protected]. Hai Zhang is partially supported by Hong Kong RGC grant GRF 16305419 and GRF 16304621.} \maketitle \begin{abstract} This work presents a mathematical theory for electromagnetic scattering resonances in a subwavelength annular hole embedded in a metallic slab, with the annulus width $h\ll1$. 
The model is representative among many 3D subwavelength hole structures, which are able to induce resonant scattering of electromagnetic wave and the so-called extraordinary optical transmission. We develop a multiscale framework for the underlying scattering problem based upon a combination of the integral equation in the exterior domain and the waveguide mode expansion inside the tiny hole. The matching of the electromagnetic field over the hole aperture leads to a sequence of decoupled infinite systems, which are used to set up the resonance conditions for the scattering problem. By performing rigorous analysis for the infinite systems and the resonance conditions, we characterize all the resonances in a bounded domain over the complex plane. It is shown that the resonances are associated with the TE and TEM waveguide modes in the annular hole, and they are close to the real axis with the imaginary parts of order ${\cal O}(h)$. We also investigate the resonant scattering when an incident wave is present. It is proved that the electromagnetic field is amplified with order ${\cal O}(1/h)$ at the resonant frequencies that are associated with the TE modes in the annular hole. On the other hand, one particular resonance associated with the TEM mode can not be excited by a plane wave but can be excited with a near-field electric dipole source, leading to field enhancement of order ${\cal O}(1/h)$. \end{abstract} \section{Introduction} Resonances play a significant role in wave interactions with subwavelength structures, due to their ability to generate unusual physical phenomena that open up a broad possibilities in modern science and technology. One representative type of resonant subwavelength structure is nano-holes perforated in noble metals, such as gold or silver. The device of this sort was first introduced in the seminal work \cite{ebblezwol98}, and tremendous research has been sparked since then in pursuit of more efficient resonant nano-hole devices (cf \cite{garmarebbkui10, rodrigo16} and references therein). The most remarkable phenomenon occurs in these subwavelength devices when an electromagnetic wave is illuminated at the resonant frequencies. The corresponding transmission through the tiny holes exhibit extraordinary large values that can not be explained by the classical diffraction theory developed by Bethe and is called extraordinary optical transmission (EOT) \cite{ebblezwol98}. In addition, the EOT is accompanied by strong localized electromagnetic field enhancement inside the subwavelength holes and in the vicinity of hole apertures \cite{garmarebbkui10}. The capability to trigger EOT and to confine light in deep subwavelength apertures leads to many important applications in biological and chemical sensing, optical lenses, and the design of novel optical devices, etc \cite{blanchard17, cetin2015, huang08, li17plasmonic, rodrigo16}. The main mechanisms for the EOT and field amplification in the subwavelength hole devices are due to resonances. These include scattering resonances induced by the tiny holes and surface plasmonic resonances generated from the metallic materials \cite{garmarebbkui10}. Significant progress has been made on the mathematical studies of resonances for two-dimensional subwavelength slit structures in the past few years. 
In a series of studies, we have established rigorous mathematical theories for a variety of resonances and the induced EOT via the layer potential technique and asymptotic analysis \cite{linshizha20, linzha17, linzha18a, linzha18b, linzhang21}. The layer potential approach with the operator-based Gohberg–Sigal theory was previously used to investigate the resonances in a closely related subwavelength cavity problem \cite{babbontri10, bontri10}. More recently, other mathematical methods were developed to derive the resonances for the two-dimensional slit structures. These include the matched asymptotic method and the Fourier mode matching method \cite{holsch19, zhlu21}. The matched asymptotic expansion techniques have also been applied to construct the solution of the slit scattering problem in \cite{joltor06a, joltor06b, joltor08}. The generalization of the above techniques to the studies of the acoustic wave resonances in three-dimensional subwavelength holes can be found in \cite{fatima21, liazou20, luwanzho21}. We would also like to refer readers to \cite{ammari18, ammari17, ammari16, ammari15} and references therein for the mathematical studies of other type of subwavelength resonances, such as Helmholtz resonators and nanoparticles, etc. In the previous studies of 2D subwavelength hole resonances or 3D acoustic wave resonances, the governing equations are scalar wave equations. The mathematical studies of electromagnetic resonances for the 3D subwavelength holes remains completely open. In this paper, we aim to advance the work in this direction by investigating the electromagnetic scattering resonances for the full vector Maxwell's equations. More specifically, we consider the electromagnetic wave scattering by an annular gap, wherein the gap width is much smaller than the incident wavelength. Figure \ref{fig:problem_geometry} depicts the top view and side view of the structure, in which a coaxial waveguide is perforated through a metal slab of thickness $l$, forming an annular gap on the $x_1x_2$ plane. The annular hole occupies the domain $G^h=R^h\times(-l/2,l/2)$, where \begin{equation} \label{eq:annulus} R^h:=\{(x_1,x_2)\in\mathbb{R}^2:x_1=r\cos\theta,x_2=r\sin\theta, r\in(a,a(1+h)),\theta\in[0,2\pi]\} \end{equation} denotes the cross-sectional annulus on the $x_1x_2$ plane. In the above, $a$ and $a(1+h)$ are the inner and outer radius of the annulus. In the subsequent analysis, for clarity of presentation we shall scale the geometry of the problem such that $a=1$. The resonances when $a\neq1$ are scaled accordingly by replacing the wavenumber $k$ by $ka$ and the thickness $l$ by $l/a$. It is assumed that the gap width is small with $h \ll 1$. The metal region is denoted by $$\Omega_{\rm M}:= \big\{(x_1, x_2, x_3) \in\mathbb{R}^3: (x_1,x_2) \in\mathbb{R}^2\backslash \overline{R^h}, x_3\in(-l/2,l/2) \big\}.$$ In this work, we focus on the resonances induced by the tiny hole and consider the configuration when the metal is a perfect electric conductor. The studies of plasmonic resonances for real metals and their interactions with the hole resonances are avenues of future research. \begin{figure} \centering \includegraphics[width=6.5cm]{annular_gap_xy.pdf} \quad \includegraphics[width=6.5cm]{annular_gap_xz.pdf} \caption{Top view (left) and side view (right) of the subwavelength structure. 
The cylindrical hole $G^h$ is perforated through the metallic slab of thickness $l$, and it forms an annular aperture $R^h$ on the $x_1x_2$ plane with inner and outer radii $a$ and $a(1+h)$, respectively. } \label{fig:problem_geometry} \end{figure} Let $\{ {\bf E}^{\rm inc}, {\bf H}^{\rm inc} \}$ be the incoming time-harmonic electromagnetic plane wave given by \begin{equation} \label{eq:pla:inc} {\bf E}^{\rm inc} = {\bf E}^0e^{\bi k x\cdot d},\quad {\bf H}^{\rm inc} = {\bf H}^0e^{\bi k x\cdot d}, \end{equation} wherein $k$ is the wavenumber, $d\in\mathbb{R}^3$ is the propagation direction, and the electric and magnetic polarization vectors satisfy ${\bf E}^0\perp d$ and ${\bf H}^0=d\times {\bf E}^0$. The total electromagnetic field after the scattering is governed by the following Maxwell's equations: \begin{align} \label{eq:E} \cl\ {\bf E} =& \bi k {\bf H}\quad {\rm in}\; \mathbb{R}^3\backslash\overline{\Omega_{\rm M}},\\ \label{eq:H} \cl\ {\bf H} =& -\bi k {\bf E}\quad {\rm in}\; \mathbb{R}^3\backslash\overline{\Omega_{\rm M}},\\ \label{eq:bc} \nu\times{\bf E} =& 0 \quad{\rm on}\; \partial\Omega_{\rm M}, \end{align} where $\nu$ denotes the outward unit normal pointing to the exterior of $\Omega_{\rm M}$. Let $\{{\bf E}^{\rm ref}, {\bf H}^{\rm ref}\}$ be the reflected field above the metal in the absence of the annular hole, i.e., when $h=0$. The perturbed field generated by the hole $G^h$ when $h>0$ is called the scattered field, denoted by $ {\bf E}^{\rm sc} := {\bf E} - {\bf E}^{\rm inc} - {\bf E}^{\rm ref} $ and $ {\bf H}^{\rm sc} := {\bf H} - {\bf H}^{\rm inc} - {\bf H}^{\rm ref} $. They satisfy the Silver-M\"uller radiation condition (SMC) at infinity above and below the metal (cf. \cite{kirhet15}): \begin{equation} \label{eq:smc} \lim_{\substack{|x_3|>l/2 \\ |x|\to\infty}}({\bf H}^{\rm sc}\times x - |x| {\bf E}^{\rm sc}) = 0. \end{equation} The resonant phenomena for the scattering problem \eqref{eq:E} - \eqref{eq:smc} were reported and studied experimentally and numerically in \cite{baida02, baida04, hulinluoh18, park15, yoo16}. In this paper, we aim to establish a rigorous mathematical theory for the underlying resonant scattering. The goal is to quantitatively characterize the resonances and study the field enhancement at various resonant frequencies. The mathematical theory presented for this representative structure also seeks to lay the foundational framework for establishing an electromagnetic resonant scattering theory for many other 3D subwavelength hole devices to be explored in the future. To this end, we first study the scattering resonances, which lie in the lower half of the complex plane and are the poles of the resolvent associated with the scattering problem. The real and imaginary parts of the scattering resonances represent the resonant frequencies and the reciprocal of the resonant magnitude, respectively. The corresponding nontrivial solutions are called quasi-normal modes \cite{dyatlov19}. Equivalently, we consider the homogeneous problem \eqref{eq:E} - \eqref{eq:smc} when the incident field ${\bf E}^{\rm inc} = {\bf H}^{\rm inc} = { \bf 0 }$. The quasi-normal modes satisfy the radiation condition \eqref{eq:smc} but they grow at infinity. We then study the resonant scattering when the incoming wave attains the resonant frequencies. The main contributions of this paper are as follows: \begin{itemize} \item[(i)] We prove the existence of electromagnetic scattering resonances for the problem \eqref{eq:E} - \eqref{eq:smc} and present a quantitative analysis of the resonances.
The structure of the resonances is much richer than that for a 2D hole analyzed in \cite{holsch19, linzha17, zhlu21}. In more detail, it is shown that the resonances are a sequence of complex numbers that are associated with the TE and TEM waveguide modes in the annular hole. We derive the asymptotic expansion of these resonances. Furthermore, it is demonstrated that the imaginary parts of the resonances are of order ${\cal O}(h)$. The quantitative analysis of the resonances is summarized in Theorems \ref{thm:even:res} and \ref{thm:odd:res}. \item[(ii)] We also analyze the electromagnetic field governed by \eqref{eq:E}-\eqref{eq:smc} when an incident plane wave is present. We prove that the electromagnetic field is amplified by order ${\cal O}(1/h)$ at the resonant frequencies that are associated with the TE modes in the annular hole. A particular resonance associated with the TEM mode cannot be excited by a plane wave. We prove that a near-field electric monopole can be used to excite this resonance to achieve field enhancement of order ${\cal O}(1/h)$. The analysis is provided in Section 5, and it explains the resonant phenomena observed for the tiny annular hole and reported in \cite{baida02, baida04, hulinluoh18, yoo16}. \end{itemize} There are several main challenges in the analysis of resonances, due to the multiscale nature of the problem and the vector form of the mathematical model. In addition, as elaborated in Section 3, the solution inside the tiny hole consists of several types of waveguide modes (TE, TM and TEM modes), which are responsible for the richness of the resonances for the scattering problem. Our multiscale analysis is based upon a combination of the integral equation formulation with the mode matching method. More precisely, the electromagnetic field outside the annular hole (large-scale domain) is represented by the vector layer potentials, and the wave field in the hole (small-scale domain) is expressed as a sum of coaxial waveguide modes, which form a complete basis for the solution space. The matching of the two wave fields for each mode over the annular aperture leads to an infinite system for the expansion coefficients. The main advantage of the mode matching method lies in the natural decoupling of the original system into subsystems with distinct angular momentum in the annulus. Moreover, each individual subsystem can be further reduced to a single nonlinear characteristic equation (resonance condition) by projecting the solution in an infinite-dimensional space onto the dominant resonant modes, and the resonances are the roots of the characteristic equation, which can be analyzed by complex-analysis tools. This is achieved by estimating the contribution from the modes that are orthogonal to the resonant modes in each subsystem, which is accomplished by asymptotic analysis with respect to the small parameter $h$. The main technical parts are presented in Section 4. The rest of the paper is organized as follows. In Section 2, we introduce the necessary function spaces and notation to be used throughout the analysis and decompose the whole scattering problem \eqref{eq:E}-\eqref{eq:smc} into two subproblems. The boundary value problems outside and inside the tiny hole are studied in detail in Section 3. In particular, we express their solutions via integral equations and the mode expansion, respectively. These serve as the starting point for the mode matching framework. Section 4 is devoted to the analysis of scattering resonances.
The details of the mode matching formulation, the reduction to the resonance condition in the form of nonlinear characteristic equations, and the analysis of their roots for the complex-valued resonances will be given. Finally, we study the electromagnetic field enhancement at the resonant frequencies in Section 5, and conclude the paper with some discussions in Section 6. \section{Preliminaries} \subsection{Function spaces and notations} We introduce several Sobolev spaces for scalar and vector valued functions that will be used throughout the paper and refer the readers to \cite{mcl00,colkre13,kirhet15} for more details. Let $\Omega\subset \mathbb{R}^3$ be a bounded Lipschitz domain with the boundary $\Gamma:=\partial\Omega$, and let $\nu(x)$ be the unit outward normal on $\Gamma$. $H^0(\Omega):=L^2(\Omega)$ denotes the set of all square-integrable functions on $\Omega$. Let $H^1(\Omega)=\{f\in L^2(\Omega): \nabla f\in [L^2(\Omega)]^3\}$ and $H^{-1}(\Omega)$ be its dual space. $H^s(\Omega)$ denotes the fractional Sobolev space for $-1<s<1$. Given $\Gamma_1\subset \Gamma$, we define $H^{s}(\Gamma_1)$ by $H^{s}(\Gamma_1)=\{f|_{\Gamma_1}:f\in H^s(\Gamma)\}$ and its dual space by $$ [H^{s}(\Gamma_1)]'=\widetilde{H^{-s}}(\Gamma_1):=\{f\in H^{-s}(\Gamma): {\rm supp}\, f\subset \overline{\Gamma_1}\}.$$ Here $H^s(\Gamma)$ is the Sobolev space over the boundary $\Gamma$. For a vector valued function ${\bf F}(x)=[F_1(x),F_2(x),F_3(x)]^{T}$ with components $F_j\in\mathbb{C}$, $j=1,2,3$, $\cl\ \bF=\nabla \times \bF$ and $\dv\ \bF=\nabla\cdot \bF$ denote the curl and the divergence of ${\bf F}$, respectively. Let $$H(\cl,\Omega):=\{{\bf F}\in [L^2(\Omega)]^3: \cl\ {\bf F}\in [L^2(\Omega)]^3\}.$$ We also define $$ H_t^{s}(\Gamma)=\{\bF\in[H^{s}(\Gamma)]^3: \nu\cdot \bF=0\} \quad \mbox{for } -1/2\leq s\leq 1/2, $$ and $L_t^2(\Gamma)=H_t^{0}(\Gamma)$. Let $\Cl\ F$ and $\Dv\ F$ be the surface curl and the surface divergence of $F$ on $\Gamma$, respectively (cf. Eqs.~(6.37) and (6.41) in \cite{colkre13}). For the planar surfaces $R^h \times \{x_3=\pm l/2\}$ considered in this paper, we have $$ \Dv = \nabla_2\cdot=[\partial_{x_1},\partial_{x_2}]^{T}\cdot \; , \quad \Cl =\cl_2= [\partial_{x_2},-\partial_{x_1}]^{T}\cdot \;. $$ Define $$H^{-1/2}(\Dv, \Gamma)=\{\bF\in H_t^{-1/2}(\Gamma): \Dv\ \bF\in H^{-1/2}(\Gamma) \}$$ and $$ H^{-1/2}(\Cl,\Gamma)=\{\bF\in H_t^{-1/2}(\Gamma):\Cl\ \bF\in H^{-1/2}(\Gamma)\}. $$ By \cite[Thm. 5.26]{kirhet15}, $H^{-1/2}(\Cl,\Gamma) = [H^{-1/2}(\Dv, \Gamma)]'$ where the duality is defined by \begin{equation} \label{eq:bl} \bF(\bG) = \int_{\Gamma} \bF\cdot\bG \, ds(\Gamma) \end{equation} for any $\bF\in H^{-1/2}(\Cl,\Gamma)$ and $\bG\in H^{-1/2}(\Dv,\Gamma)$. From the trace theorem \cite[Thm. 5.24]{kirhet15}, the trace operators $$\gamma_t: H(\cl,\Omega)\to H^{-1/2}(\Dv,\Gamma), \bF\mapsto \nu\times \bF|_{\Gamma}$$ and $$\gamma_T: H(\cl,\Omega)\to H^{-1/2}(\Cl,\Gamma), \bF\mapsto (\nu\times\bF|_{\Gamma})\times \nu$$ are bounded. Given an open domain $\Gamma_1\subset \Gamma$, we define $$ H^{-1/2}(\Cl,\Gamma_1)=\{\bF|_{\Gamma_1}: \bF\in H^{-1/2}(\Cl,\Gamma)\}$$ and its dual space $$\tilde{H}^{-1/2}(\Dv,\Gamma_1)=\{\bF\in H^{-1/2}(\Dv,\Gamma): {\rm supp}\,\bF\subset\overline{\Gamma_1}\}. $$ Finally, for an unbounded Lipschitz domain $\Omega$, we let $$H_{\rm loc}(\cl,\Omega):=\{\bF: \bF|_{\Omega\cap B(0,r)}\in H(\cl,\Omega\cap B(0,r))\textrm{\ for any\ }r>0\}$$ wherein $B(0,r):=\{x:|x|<r\}$.
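As a quick illustration of these planar operators, the following SymPy sketch (ours, not part of the paper) checks symbolically that $\Cl\,(\nabla_2 f)=0$ and $\Dv\,(\cl_2\, f)=0$ for a smooth scalar $f$; these are the elementary identities behind the Helmholtz-type decomposition used in Section 3.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Function('f')(x1, x2)          # an arbitrary smooth scalar field

grad2 = lambda u: sp.Matrix([sp.diff(u, x1), sp.diff(u, x2)])   # \nabla_2 u
curl2 = lambda u: sp.Matrix([sp.diff(u, x2), -sp.diff(u, x1)])  # \cl_2 u (a vector field)
Div   = lambda F: sp.diff(F[0], x1) + sp.diff(F[1], x2)         # surface divergence
Curl  = lambda F: sp.diff(F[0], x2) - sp.diff(F[1], x1)         # surface (scalar) curl

print(sp.simplify(Curl(grad2(f))))    # 0: gradient fields are curl-free
print(sp.simplify(Div(curl2(f))))     # 0: curl_2 fields are divergence-free
\end{verbatim}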
We also introduce the following notations for the problem geometry to be used in the rest of the paper: \begin{itemize} \item[(1)]$\Omega_\pm=\{x\in\mathbb{R}^3\backslash\overline{\Omega_M}:\pm x_3>0\}$: the upper and lower half domain exterior to the metal; \item[(2)] $\mathbb{R}^{3}_+=\{x \in \mathbb{R}^3 : x_3>l/2 \}$: the half space above the metal; \item[(3)] $G^{h}_+ = \{ x \in G_h: x_3>0 \}$: the upper half of the annular hole $G_h$; \item[(4)] $ A^h = \{x: (x_1,x_2)\in R^h, x_3=l/2\}$: the upper annular aperture of $G^{h}_+$; \item[(5)] $\Gamma_b=\{ x: (x_1,x_2)\in R^h, x_3=0\}$: the annulus on the $x_1x_2$ plane or the base of $G^{h}_+$; \item[(6)] $\Gamma^{h}_{+}$: the side boundary of $G^{h}_+$. \end{itemize} In addition, the following sets will be used: \begin{itemize} \item[(1)] ${\cal B} := \{z\in \mathbb{C}: |z|<C_0\}$, where $C_0$ is a fixed positive constant; \item[(2)] $\mathbb{N}^* := \{1, 2, 3, \cdots.\}$; \item[(3)] $(\mathbb{Z}\times\mathbb{N})^*:=(\mathbb{Z}\times\mathbb{N})\backslash\{(0,0)\}$. \end{itemize} Finally, $ A \eqsim B$ implies $c_1 B \leq A \leq c_2B$ for some positive constants $c_1, c_2 $ that are independent of $A$ and $B$. \subsection{Decomposition of the scattering problem} Due to the symmetry of the structure with respect to the $x_1x_2$ plane, the scattering problem (\ref{eq:E})-(\ref{eq:smc}) can be decomposed as the two subproblems as follows: \begin{itemize} \item[(E).] Given the incident field $[\bE^{\rm inc},\bH^{\rm inc}]/2$, solve for $[\bE^{\rm e},\bH^{\rm e}]$ that satisfies \begin{align} \cl\ \bE^{\rm e} =& \, \bi k \bH^{\rm e}\quad{\rm in}\; \Omega_+, \label{eq:problemE_1} \\ \cl\ \bH^{\rm e} =& \, -\bi k \bE^{\rm e}\quad{\rm in}\; \Omega_+, \label{eq:problemE_2}\\ \nu\times {\bE}^{\rm e} =& \, 0\quad{\rm on}\; \partial\Omega_+\backslash\Gamma_b, \label{eq:problemE_3} \\ \nu\times {\bH}^{\rm e} =& \, 0 \quad{\rm on}\; \Gamma_b, \label{eq:problemE_4} \end{align} and the radiation condition (\ref{eq:smc}) for $x_3\ge l/2$ with $[\bE^{\rm sc},\bH^{\rm sc}]=[\bE^{\rm e},\bH^{\rm e}] - [\bE^{\rm inc},\bH^{\rm inc}]/2 - [\bE^{\rm ref},\bH^{\rm ref}]/2$. \item[(O).] Given the incident field $[\bE^{\rm inc},\bH^{\rm inc}]/2$, solve for $[\bE^{\rm o},\bH^{\rm o}]$ that satisfies \begin{align} \cl\ \bE^{\rm o} =& \, \bi k \bH^{\rm o}\quad{\rm in}\; \Omega_+, \label{eq:problemO_1} \\ \cl\ \bH^{\rm o} =& \, -\bi k \bE^{\rm o}\quad{\rm in}\; \Omega_+, \label{eq:problemO_2} \\ \nu\times {\bE}^{\rm o} =& \, 0\quad{\rm on}\; \partial\Omega_+, \label{eq:problemO_3} \end{align} and the radiation condition (\ref{eq:smc}) for $x_3\ge l/2$ with $[\bE^{\rm sc},\bH^{\rm sc}]=[\bE^{\rm o},\bH^{\rm o}] - [\bE^{\rm inc},\bH^{\rm inc}]/2 - [\bE^{\rm ref},\bH^{\rm ref}]/2$. \end{itemize} It is clear that the solution of the scattering problem \eqref{eq:E} - \eqref{eq:smc} can be written as \begin{align} \bE(x) = \left\{ \begin{array}{lc} \bE^{\rm e}(x) + \bE^{\rm o}(x), & x_3\geq 0,\\ \bE^{\rm e,*}(x^*) - \bE^{\rm o,*}(x^*), & x_3<0,\\ \end{array} \right. \quad \bH(x) = (\bi k)^{-1}\cl\ \bE. \end{align} In the above, $*$ denotes the reflection vector with respect to the $x_1x_2$ plane. On the other hand, there holds \begin{align} {\bE}^{\rm e}(x) =& \frac{\bE(x)+\bE^*(x^*)}{2},\quad \bE^{\rm o}(x) = \frac{\bE(x)-\bE^*(x^*)}{2},\quad x_3>0, \\ \bH^{\rm j}(x) =& (\bi k)^{-1}\cl\ \bE^{\rm j},\quad {\rm j}\in\{{\rm e},{\rm o} \}. \end{align} In the rest of the paper, for clarity we shall present the detailed analysis for the resonances for Problem (E) only. 
Problem (O) can be analyzed similarly; thus we will only point out the main differences in the analysis and present the main results directly. To simplify the notation, we shall overload $\bE$ and $\bH$ for $\bE^{\rm e}$ and $\bH^{\rm e}$, respectively. \section{Two auxiliary boundary value problems} In this section, we study the exterior boundary value problem above the metal and the interior boundary value problem in the annular hole. They will serve as the foundation for the mode matching framework and for establishing the resonance condition for the scattering problem (E). The notations introduced in Section 2.1 for the problem geometry are used. \subsection{Scattering problem above the metal} For a given vector valued function $\bF$ on $A^h$, let \begin{align} \tilde{\cal L}_k[\bF](x) =& \cl\ \cl\int_{A^h}\Phi_k(x;y)\bF(y)ds(y),\\ \tilde{\cal M}_k[\bF](x) =& \cl \int_{A^h}\Phi_k(x;y)\bF(y)ds(y), \end{align} be the vector layer potentials for $x\in\mathbb{R}_{+}^3$, where $\Phi_k(x;y)=\frac{e^{\bi k|x-y|}}{4\pi|x-y|}$ for $x\neq y$. Consider the following half-space problem above the metal \begin{align*} ({\rm HSP}):\quad\quad \left\{ \begin{array}{ll} \cl\ \bE = \bi k \bH,\quad&{\rm in}\quad \mathbb{R}_{+}^3,\\ \cl\ \bH = -\bi k \bE,\quad&{\rm in}\quad \mathbb{R}_{+}^3,\\ \nu\times\bE=0,\quad &{\rm on}\quad \{x\in\mathbb{R}^3:x_3=l/2\}\backslash \overline{A^{h}},\\ \nu\times\bE=\bF,\quad &{\rm on}\quad A^{h},\\ \end{array} \right. \end{align*} with the radiation condition (\ref{eq:smc}) in $x_3>l/2$. The following theorem states the well-posedness of the problem. \begin{mytheorem} \label{thm:wp:hsp} For any $k>0$ and any $\bF\in \tilde{H}^{-1/2}(\Dv, A^{h})$, the following two functions \begin{align} \label{eq:EH:hsp} \bE = -2 \tilde{\cal M}_k [\bF],\quad \bH = -\frac{2}{\bi k}\tilde{\cal L}_k[\bF], \end{align} in $H_{\rm loc}(\cl,\mathbb{R}_{+}^3)$ constitute the unique solution to problem (HSP). \begin{proof} For $\bF\equiv 0$, one follows Lemma 5.30 in \cite{kirhet15} to extend $[\bE,\bH]$ to be a function in $[H_{\rm loc}(\cl,\mathbb{R}^3)]^2$, which satisfies the radiation condition (\ref{eq:smc}) in all directions $x/|x|$. Thus, $\bE\equiv \bH\equiv 0$, so that (HSP) has at most one solution for any given $\bF$. One directly verifies that $[\bE,\bH]$ in (\ref{eq:EH:hsp}) is a solution of (HSP) in $[H_{\rm loc}(\cl,\mathbb{R}_{+}^3)]^2$, and hence the unique one. \end{proof} \end{mytheorem} Let ${\cal L}_k[{\bF}]$ be the trace of $\tilde{\cal L}_k[\bF]$ on $A^h$. By Theorem~\ref{thm:wp:hsp} and the open mapping theorem, \begin{align} \label{eq:t2t:0} \nu\times \bH|_{A^h} = \frac{-2}{\bi k}{\cal L}_k[\nu\times \bE|_{A^h}]\in H^{-1/2}(\Dv, A^h), \end{align} and ${\cal L}_k$ is bounded from $\tilde{H}^{-1/2}(\Dv,A^h)$ to $H^{-1/2}(\Dv,A^h)$. Clearly, ${\cal L}_k$ maps the tangential component of $\bE$ to that of $\bH$, so we shall call it the tangential-to-tangential (T2T) map in the sequel. As we shall see in Section 4, the T2T map ${\cal L}_k$ plays a central role in formulating the resonance eigenvalue problem. \subsection{Boundary value problem in the annular hole} Recall that $A^h$ denotes the planar annular aperture. In this section, we first construct a countable basis for the function space $\widetilde{H}^{-1/2}(\Dv,A^h)$ and then express the solution of the boundary value problem in the annular hole $G^{h}_+$ using the basis.
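Before constructing the basis, we note that the kernel $\Phi_k$ entering the potentials $\tilde{\cal L}_k$ and $\tilde{\cal M}_k$ above is completely explicit, so off-aperture evaluations of such layer integrals can be approximated by elementary quadrature. The following toy midpoint-rule computation in Python (ours, purely illustrative and not part of the paper; NumPy is assumed, and the parameter values and test density are arbitrary choices) approximates the scalar single-layer integral $\int_{A^h}\Phi_k(x;y)\phi(y)\,ds(y)$ at an observation point above the aperture. The on-aperture operators used later involve a weakly singular kernel and require singular quadrature, which this naive rule does not provide.
\begin{verbatim}
import numpy as np

k, h, l = 2.0, 0.05, 1.0
a = 1.0                                  # inner radius (scaled to 1)
nr, nt = 8, 400                          # radial / angular quadrature points
r = a + (np.arange(nr) + 0.5) * a * h / nr
t = (np.arange(nt) + 0.5) * 2 * np.pi / nt
R, T = np.meshgrid(r, t, indexing='ij')
Y = np.stack([R * np.cos(T), R * np.sin(T), np.full_like(R, l / 2)], axis=-1)
w = R * (a * h / nr) * (2 * np.pi / nt)  # area weights r dr dtheta
phi = np.cos(T)                          # a smooth test density (angular index 1)
x = np.array([1.0, 0.0, l / 2 + 0.3])    # observation point above the aperture
d = np.linalg.norm(Y - x, axis=-1)
u = np.sum(np.exp(1j * k * d) / (4 * np.pi * d) * phi * w)
print(u)                                 # approximate value of the layer integral at x
\end{verbatim}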
\subsubsection{A complete basis for $\widetilde{H}^{-1/2}(\Dv,A^h)$} As $[L^2(A^h)]^2\cap \widetilde{H}^{-1/2}(\Dv,A^h)$ is dense in $\widetilde{H}^{-1/2}(\Dv,A^h)$, we only need to construct a dense and countable basis of $[L^2(A^h)]^2$. From Eqs.~(1.42)\&(1.55) in Chapter IX of \cite{daulio90}, the Helmholtz decomposition of $[L^2(A^h)]^2$ is given below. \begin{mylemma} \label{lem:helmdecomp} Let $\Delta_2=\nabla_2\cdot\nabla_2$, and \begin{align} \cl_2\ H^1(A^h):=&\{\cl_2 f: f\in H^1(A^h)\},\\ \nabla_2 H_0^1(A^h):=&\{\nabla_2f: f\in H_0^1(A^h)\},\\ \mathbb{H}_2(A^h):=&\{\nabla_2 f: f\in H^1(A^h), \Delta_2 f = 0, f|_{r=a}=C_1, f|_{r=a(1+h)}=C_2; C_1, C_2\in\mathbb{R}\}, \end{align} be three closed subspaces of $[L^2(A^h)]^2$ that are orthogonal to each other in the sense of the $L^2$-inner product. Then, $[L^2(A^h)]^2$ can be decomposed into the direct sum of the above three subspaces, i.e., \begin{equation} [L^2(A^h)]^2 = \cl_2\ H^1(A^h) \oplus \nabla_2 H_0^1(A^h)\oplus\mathbb{H}_2(A^h). \end{equation} \end{mylemma} Now we find a countable basis for each of the three subspaces. It is not hard to see that the one-dimensional space $\mathbb{H}_2(A^h)$ is given by \begin{equation} \label{eq:bs:h2} \mathbb{H}_2(A^h) = {\rm span}\big\{\nabla_2\log(r)\big\}. \end{equation} To characterize $\nabla_2 H_0^1(A^h)$, we consider the following Dirichlet eigenvalue problem \begin{align*} {\rm (DEP):}\quad\quad \left\{ \begin{array}{ll} -\Delta_2\psi = \lambda \psi \quad&{\rm in}\; R^h,\\ \psi = 0 \quad&{\rm on}\; \partial R^h.\\ \end{array} \right. \end{align*} The countable normalized eigenfunctions are (cf. \cite{kutsig84}) \begin{align} \label{eq:phimnD} \psi_{ij}^{D}(r,\theta;h) :=& \left[ C_{ij}^{D} \right]^{-1}\Big[ Y_{|i|}(\beta_{|i|j}^{D})J_{|i|}(\beta_{|i|j}^{D}r)-J_{|i|}(\beta_{|i|j}^{D})Y_{|i|}(\beta_{|i|j}^{D}r) \Big]e^{\bi i\theta}, \end{align} for $(i,j)\in\mathbb{Z}\times \mathbb{N}^*$. The associated eigenvalues are \begin{equation} \label{eq-lambdaD} \lambda_{ij}^{D} = \left( \beta_{|i|j}^{D}\right)^2>0. \end{equation} In the above, $J_i$ and $Y_i$ are the first and second kind Bessel functions of order $i$, $\beta_{|i|j}^{D}$ is the $j$-th positive root (in ascending order) of the following equation: \begin{equation} \label{eq:gov:roots} F^{D}_{|i|}(\beta;h):=Y_{|i|}(\beta)J_{|i|}\left( \beta(1+h) \right) - J_{|i|}(\beta)Y_{|i|}\left( \beta(1+h) \right)=0, \end{equation} and $C_{ij}^{D}>0$ is chosen such that $||\psi_{ij}^{\rm D}||_{L^2(R^h)}=1$. For $0<h\ll1$, the asymptotic analysis of $\lambda_{ij}^{D}$ and $\psi_{ij}^{D}$ is carried out in detail in Appendix A. It is shown in \eqref{eq:asy:betamn} and \eqref{eq:psimnD} that $$\lambda_{ij}^{D}\sim \left(\frac{j\pi}{h}\right)^2 \quad \mbox{and} \quad \psi_{ij}^D \sim \frac{e^{\bi i\theta}}{\sqrt{\pi r h}} \sin\left(\frac{j\pi}{h}(r-1)\right)$$ for $h\ll1$. It follows from \cite[Thm. 4.12]{mcl00} that $\{\psi_{ij}^{D}(\cdot;h)\}_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*}$ constitutes a complete orthonormal basis of $L^2(R^h)$ vanishing on the boundary $\partial R^h$, and they form a dense and countable basis for $H_0^1(R^h)$. Therefore, \begin{equation} \label{eq:bs:gr} \nabla_2 H_0^1(R^h)=\overline{{\rm span}\{\nabla\psi_{ij}^{D}(\cdot;h)\}_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*}}, \end{equation} where the overline denotes the closure.
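To make the asymptotics above concrete, the following short script (ours, not part of the paper; it assumes SciPy is available) locates the first positive root $\beta^{D}_{01}$ of \eqref{eq:gov:roots} for a small gap width and compares it with the leading-order prediction $j\pi/h$. Replacing \texttt{jv}, \texttt{yv} by the derivative routines \texttt{jvp}, \texttt{yvp} gives the analogous check for the Neumann roots considered next.
\begin{verbatim}
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

def FD(beta, m, h):
    # F^D_m(beta; h) = Y_m(beta) J_m(beta(1+h)) - J_m(beta) Y_m(beta(1+h))
    return yv(m, beta) * jv(m, beta * (1 + h)) - jv(m, beta) * yv(m, beta * (1 + h))

h, m, j = 0.01, 0, 1
guess = j * np.pi / h                          # leading-order prediction j*pi/h
beta = brentq(FD, 0.9 * guess, 1.1 * guess, args=(m, h))
print(beta, guess, abs(beta - guess) / guess)  # relative deviation is small for h << 1
\end{verbatim}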
As for the subspace $\cl_2\ H^1(A^h)$, we consider the following Neumann eigenvalue problem \begin{align*} {\rm (NEP):}\quad\quad \left\{ \begin{array}{ll} -\Delta_2\psi = \lambda \psi,\quad&{\rm in}\quad R^h\\ \partial_{\nu}\psi = 0\quad&{\rm on}\quad \partial R^h.\\ \end{array} \right. \end{align*} As shown in \cite{kutsig84}, the countable eigenvalues are $\lambda^{N}_{00}:=0$ and \begin{equation}\label{eq-lambdaN} \lambda_{mn}^{N} = \left( \beta_{|m|n}^{N}\right)^2, \quad (m,n)\in(\mathbb{Z}\times\mathbb{N})^*, \end{equation} where $\beta_{|m|n}^{N}$ is the $n$-th nonnegative root (in ascending order starting from $n=0$) of the equation \begin{equation} \label{eq:gov:roots:n} F_{|m|}^{N}(\beta;h):=Y_{|m|}'(\beta)J_{|m|}'\left( \beta(1+h) \right) - J_{|m|}'(\beta)Y_{|m|}'\left( \beta(1+h) \right)=0. \end{equation} The associated normalized eigenfunctions are $\psi^{N}_{00}:=\frac{1}{\sqrt{\pi h(2+h)}}$ and \begin{align}\label{eq:psimnN} \psi_{mn}^{N}(r,\theta;h) :=& \left[ C_{mn}^{N} \right]^{-1}\Big[ Y_{|m|}'(\beta_{|m|n}^{N})J_{|m|}(\beta_{|m|n}^{N}r)-J_{|m|}'(\beta_{|m|n}^{N})Y_{|m|}(\beta_{|m|n}^{N}r) \Big]e^{\bi m\theta}, \end{align} in which $C_{mn}^{N}>0$ is chosen such that $||\psi_{mn}^{\rm N}||_{L^2(R^h)}=1$. The asymptotic formulas of $\lambda_{mn}^{N}$ and $\psi_{mn}^{N}$ are provided in \eqref{eq:asy:betamn:N} - \eqref{eq:psimnN:n0}. It is important to note that when $h\ll 1$, $$\lambda_{m0}^{N} \sim m^2 \quad \mbox{while} \quad \lambda_{mn}^{N} \sim (\frac{n\pi}{h})^2 \quad \mbox{for} \; n\ge1. $$ The eigenfunctions $$\psi_{m0}^{\rm N}\sim\frac{e^{\bi m\theta}}{\sqrt{\pi h(h+2)}} \quad\mbox{and}\quad \psi_{mn}^{N} \sim \frac{e^{\bi m\theta}}{\sqrt{\pi rh}}\cos\left[ \frac{n\pi}{h}(r-1) \right] \quad \mbox{for} \; n\ge1. $$ $\{\psi_{mn}^{\rm N}\}_{m\in\mathbb{Z},n\geq 0}$ constitutes a complete orthonormal basis of $L^2(R^h)$ (cf. \cite[Thm. 4.12]{mcl00}), and is a dense and countable basis of $H^1(A^h)$. Therefore, \begin{equation} \label{eq:bs:Cl} \cl_2\ H^1(A^h) = \overline{{\rm span}\{\cl_2\ \psi_{mn}^{N}(\cdot;h)\}_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}}. \end{equation} where we have excluded the constant eigenfunction $\psi_{00}^{N}$. To ease the burden of notations in the subsequent analysis, we introduce the rotation operator ${\cal R}$: \begin{equation}\label{eq:R} {\cal R}: f=[f_1,f_2]\mapsto [f_2,-f_1],\quad\forall f\in [L^2(A^h)]^2. \end{equation} It is clear that ${\cal R}[L^2(A^h)]^2=[L^2(A^h)]^2$. Consequently, by virtue of (\ref{eq:bs:h2}), (\ref{eq:bs:gr}) and (\ref{eq:bs:Cl}), we have \begin{align} \label{eq:bs:tot} \widetilde{H}^{-1/2}(\Dv,A^h) =& \overline{{\cal R} [L^2(A^h)]^2\cap\widetilde{H}^{-1/2}(\Dv,A^h)}\nonumber\\ =& \overline{{\rm span}\{{\cal R}\cl_2\ \psi_{mn}^{N}, {\cal R}\nabla \psi_{ij}^{D}, {\cal R}\nabla\log r\}_{(m,n,i,j)\in(\mathbb{Z}\times\mathbb{N})^*\times \mathbb{Z}\times\mathbb{N}^*}}, \end{align} where the norm of $\widetilde{H}^{-1/2}(\Dv,A^h)$ is used for the completion. In the next subsection, we construct the solution in $G^{h}_+$ for $\nu\times \bE|_{A^{h}}$ being one of the basis functions. \subsubsection{Field representation in the annular hole} Given $\bF\in \widetilde{H^{-1/2}}(\Dv,A^h)$, let us consider the boundary value problem \begin{align*} ({\rm AHP}):\quad\quad \left\{ \begin{array}{l} \cl\ \bE = \bi k \bH \quad{\rm in} \; G^{h}_+,\\ \cl\ \bH = -\bi k \bE \quad{\rm in} \; G^{h}_+,\\ \nu\times \bE|_{\Gamma^{h}_{+}} = 0,\\ \nu\times \bH|_{\Gamma_b} = 0,\\ \nu\times \bE|_{A^h} = \bF. \\ \end{array} \right. 
\end{align*} The well-posedness of problem (AHP) is given in the following theorem. \begin{mytheorem} \label{thm:wp:hlp} Assume that $k$ is real and positive and $k\notin\{\sqrt{\lambda_{mn}^N+(2i+1)^2\pi^2/l^2}: (m,i,n)\in\mathbb{Z}^2\times\mathbb{N}\}$. For $0<h\ll 1$, the boundary value problem (AHP) attains a unique solution $[\bE,\bH]\in H(\cl, G^{h}_+)$ that depends continuously on the boundary data $\bF\in \widetilde{H^{-1/2}}(\Dv,A^h)$. \begin{proof} We first address the uniqueness. Let $\bF\equiv 0$. It follows from Lemma 5.30(b) of \cite{kirhet15} that an even reflection of $\bE$ extends $\bE$ and $\bH$ into $G^h$ such that \begin{align*} \cl\ \bE =& \bi k \bH\;{\rm in}\quad G^{h},\\ \cl\ \bH =& -\bi k \bE\;{\rm in}\quad G^{h},\\ \nu\times \bE =& 0\quad{\rm on}\; \partial G^{h}. \end{align*} An odd reflection of $\bE$ w.r.t $x_3=\pm l/2$ extends both $\bE,\bH$ to a larger domain $\Omega$ with $G^h\subset \Omega$ and that $[\bE,\bH]\in [H(\cl,\Omega)]^2$ with $\nu\times \bE=0$ on $\partial\Omega$. It can be shown that $\bE,\bH\in [H^1(G^h)]^3$; see, for instance, \cite[Chapter IX,\S 1]{daulio90}. Thus $E_3\in H^1(G^h)$ satisfies \begin{align*} -\Delta E_3 =& k^2 E_3 \quad{\rm in}\; G^h,\\ E_3 =& 0 \quad{\rm on} \; \Gamma^h,\\ \partial_{\nu} E_3 =& 0 \quad {\rm on} \; A^{h}\cup A^{h}_{-}, \end{align*} where $A^{h}_{-}:=\{x:(x_1,x_2)\in R^h,x_3 = -l/2\}$ is the bottom aperture of $G^h$ and $\Gamma^h$ is the side boundary of $G^h$. In light of \eqref{eq:asy:betamn}, we choose sufficiently small $h$ such that $k^2$ is not an eigenvalue of the above problem. Consequently, $E_3\equiv 0$ in $G^h$. Next, $H_3\in H^1(G^h)$ satisfies \begin{align*} -\Delta H_3 =& k^2 H_3 \quad{\rm in}\; G^h,\\ \partial_{\nu}H_3 =& 0 \quad{\rm on}\; \Gamma^h,\\ H_3 =& 0 \quad {\rm on}\; A^{h}\cup A^{h}_{-}. \end{align*} The boundary value problem attains trivial solution when $k\notin\{\sqrt{\lambda_{mn}^N+(2i+1)^2\pi^2/l^2}: (m,i,n)\in\mathbb{Z}^2\times\mathbb{N}\}$. Therefore, $E_1$ and $E_2$ can be expressed as in the form of \[ \left[ \begin{array}{c} E_1(x)\\ E_2(x)\\ \end{array} \right] = \left[ \begin{array}{c} f_1(x_1,x_2)\\ f_2(x_1,x_2)\\ \end{array} \right]e^{\bi kx_3} + \left[ \begin{array}{c} g_1(x_1,x_2)\\ g_2(x_1,x_2)\\ \end{array} \right]e^{-\bi kx_3}, \] where $f_j$ and $g_j$ ($j=1,2$) are harmonic functions, and \[ \Dv \left[ \begin{array}{c} f_1(x_1,x_2)\\ f_2(x_1,x_2)\\ \end{array} \right] = \Dv \left[ \begin{array}{c} g_1(x_1,x_2)\\ g_2(x_1,x_2)\\ \end{array} \right] = \Cl \left[ \begin{array}{c} f_1(x_1,x_2)\\ f_2(x_1,x_2)\\ \end{array} \right] = \Cl \left[ \begin{array}{c} g_1(x_1,x_2)\\ g_2(x_1,x_2)\\ \end{array} \right] = 0. \] By Lemma~\ref{lem:helmdecomp}, it can be verified that $[f_1,f_2]^{T},[g_1,g_2]^{T}\in\mathbb{H}_2$. Consequently, \begin{align*} E_1 =& \frac{x_1}{x_1^2+x_2^2}(c_1e^{\bi k x_3} + c_2e^{-\bi k x_3}),\quad E_2 = \frac{x_2}{x_1^2+x_2^2}(c_1e^{\bi k x_3} + c_2e^{-\bi k x_3}),\\ H_1 =& \frac{-x_2}{x_1^2+x_2^2}(c_1e^{\bi k x_3} - c_2e^{-\bi k x_3}),\quad H_2 = \frac{x_1}{x_1^2+x_2^2}(c_1e^{\bi k x_3} -c_2e^{-\bi k x_3}), \end{align*} for some constants $c_1$ and $c_2$. The boundary condition $E_1=E_2=0$ on $A^h\cup A^{h}_{-}$ implies \[ c_1 e^{\bi kl/2} + c_2e^{-\bi kl/2} = 0,\quad c_1=c_2. \] Thus a nonzero solution $[\bE,\bH]$ exists if and only if $e^{\bi kl/2}+e^{-\bi kl/2}=2\cos(kl/2)=0$, which is excluded by our assumption. Now the well-posedness follows thanks to Theorem 5.60 in \cite{kirhet15}. 
\end{proof} \end{mytheorem} We now construct special solutions to the problem (AHP), which are called waveguide modes in the annular hole $G^h_{+}$. Denote \begin{equation} \label{eq-s} s_{mn}^N=\sqrt{k^2-\lambda_{mn}^N}, \quad s_{ij}^D = \sqrt{k^2-\lambda_{ij}^D}. \end{equation} Assume that $k\notin\{\sqrt{\lambda_{mn}^N+(2i+1)^2\pi^2/l^2}: (m,i,n)\in\mathbb{Z}^2\times\mathbb{N}\}$. \begin{itemize} \item[1.] Transverse electric (TE) modes. For each $(m,n)\in(\mathbb{Z}\times\mathbb{N})^*$, define \begin{align}\label{eq:E_TE_modes} \bE_{mn}^{TE} =& \left[ \begin{array}{l} (e^{\bi s_{mn}^N x_3} + e^{-\bi s_{mn}^N x_3})\partial_{x_2}\psi_{mn}^N\\ -(e^{\bi s_{mn}^N x_3} + e^{-\bi s_{mn}^N x_3})\partial_{x_1}\psi_{mn}^N\\ 0 \end{array} \right], \end{align} \begin{align}\label{eq:H_TE_modes} \bH_{mn}^{TE} =& \frac{1}{k}\left[ \begin{array}{l} s_{mn}^N(e^{\bi s_{mn}^N x_3} - e^{-\bi s_{mn}^N x_3})\partial_{x_1}\psi_{mn}^N\\ s_{mn}^N(e^{\bi s_{mn}^N x_3} - e^{-\bi s_{mn}^N x_3})\partial_{x_2}\psi_{mn}^N\\ -\bi \lambda_{mn}^{N}(e^{\bi s_{mn}^N x_3} + e^{-\bi s_{mn}^N x_3})\psi_{mn}^{N} \end{array} \right] \end{align} Then, for each $(m,n)$, the pair $[\bE_{mn}^{TE},\bH_{mn}^{TE}]$ is the unique solution of (AHP) with $\bF=[F_1,F_2, 0]^T =[ 2\cos(s_{mn}^Nl/2){\cal R}\cl_2\ \psi_{mn}^N, 0]^T$. These solutions are called transverse electric (TE) modes. \item[2.] Transverse magnetic (TM) modes. For each $(i,j)\in\mathbb{Z}\times\mathbb{N}^*$, define \begin{align}\label{eq:E_TM_modes} \bE_{ij}^{TM} =& \left[ \begin{array}{l} (e^{\bi s_{ij}^D x_3} + e^{-\bi s_{ij}^D x_3})\partial_{x_1}\psi_{ij}^D\\ (e^{\bi s_{ij}^D x_3} + e^{-\bi s_{ij}^D x_3})\partial_{x_2}\psi_{ij}^D\\ \lambda_{ij}^{D}/(\bi s_{ij}^D)(e^{\bi s_{ij}^D x_3} -e^{-\bi s_{ij}^D x_3})\psi_{ij}^{D}\\ \end{array} \right], \end{align} \begin{align}\label{eq:H_TM_modes} \bH_{ij}^{TM} =& \frac{k(e^{\bi s_{ij}^D x_3} - e^{-\bi s_{ij}^D x_3})}{s_{ij}^D}\left[ \begin{array}{l} -\partial_{x_2}\psi_{ij}^D\\ \partial_{x_1}\psi_{ij}^D\\ 0\\ \end{array} \right]. \end{align} Then, for each $(i,j)$, the pair $[\bE_{ij}^{TM},\bH_{ij}^{TM}]$ is the unique solution of (AHP) with $\bF= [2\cos(s_{ij}^Dl/2){\cal R}\nabla_2\psi_{ij}^D,0]^T$. These solutions are called transverse magnetic (TM) modes. \item[3.] Transverse electromagnetic (TEM) mode. Define \begin{align}\label{eq:E_TEM_modes} \bE_{E}^{TEM} =& (e^{\bi k x_3} + e^{-\bi k x_3})\left[ \begin{array}{l} \partial_{x_1}\log r\\ \partial_{x_2}\log r\\ 0\\ \end{array} \right], \end{align} \begin{align}\label{eq:H_TEM_modes} \bH_{E}^{TEM} =& (e^{\bi k x_3} - e^{-\bi k x_3})\left[ \begin{array}{l} -\partial_{x_2}\log r\\ \partial_{x_1}\log r\\ 0\\ \end{array} \right]. \end{align} Then $[\bE_{E}^{TEM},\bH_{E}^{TEM}]$ is the unique solution of (AHP) with $\bF= [2\cos(kl/2){\cal R}\nabla_2\log r,0]^T$. This solution is called the transverse electromagnetic (TEM) mode. \end{itemize} \begin{myremark} For $k\in\{\sqrt{\lambda_{mn}^N+(2i+1)^2\pi^2/l^2}: (m,i,n)\in\mathbb{Z}^2\times\mathbb{N}\}$, we use $\nu\times\bH|_{A^h}=\bF^H=[F_1^H,F_2^H,0]$ as the boundary condition instead, where we choose $[F_1^H,F_2^H]$ from $\Big\{\frac{2\bi s_{mn}^N}{k}\sin(s_{mn}^Nl/2)\nabla_2\psi_{mn}^N, \frac{-2k\bi \sin(s_{ij}^Dl/2)}{s_{ij}^D}\cl_2\psi_{ij}^D, -2\bi\sin(kl/2)\cl_2\log r \Big\}$ for $(m,n)\in(\mathbb{Z}\times\mathbb{N})^*$ and $(i,j)\in\mathbb{Z}\times\mathbb{N}^*$. The above TE, TM and TEM modes can be reproduced as well.
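Independently of which boundary data are prescribed, the mode formulas themselves can be verified directly. For instance, the following SymPy sketch (ours, not part of the paper) checks symbolically that the TEM pair \eqref{eq:E_TEM_modes}--\eqref{eq:H_TEM_modes} satisfies $\cl\,\bE=\bi k\bH$ and $\cl\,\bH=-\bi k\bE$ away from the axis $x_1=x_2=0$; the TE and TM pairs can be checked in the same way once $\psi_{mn}^{N}$ and $\psi_{ij}^{D}$ are inserted.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
k = sp.symbols('k', positive=True)
I = sp.I
logr = sp.log(sp.sqrt(x1**2 + x2**2))

f = sp.exp(I*k*x3) + sp.exp(-I*k*x3)          # even profile in x3
g = sp.exp(I*k*x3) - sp.exp(-I*k*x3)          # odd profile in x3
E = sp.Matrix([f*sp.diff(logr, x1), f*sp.diff(logr, x2), 0])    # E^TEM
H = sp.Matrix([-g*sp.diff(logr, x2), g*sp.diff(logr, x1), 0])   # H^TEM

curl = lambda F: sp.Matrix([sp.diff(F[2], x2) - sp.diff(F[1], x3),
                            sp.diff(F[0], x3) - sp.diff(F[2], x1),
                            sp.diff(F[1], x1) - sp.diff(F[0], x2)])

print(sp.simplify(curl(E) - I*k*H))   # zero vector
print(sp.simplify(curl(H) + I*k*E))   # zero vector
\end{verbatim}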
\end{myremark} \medskip Finally, we use the above waveguide modes to construct solutions to the problem (AHP). Let ${\bf F}\in \widetilde{H^{-1/2}}(\Dv,A^h)$, we expand it as \begin{align} \label{eq:tE:as} \bf F =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}d^{TE}_{mn}(\nu\times \bE_{mn}^{TE}|_{A^h}) + \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} d^{TM}_{ij}(\nu\times \bE_{ij}^{TM}|_{A^h}) \nonumber\\ &+ d^{TEM}(\nu\times \bE_{E}^{TEM}|_{A^h})\in \widetilde{H^{-1/2}}(\Dv,A^h), \end{align} with the Fourier coefficients $\{d_{mn}^{TE},d_{ij}^{TM},d^{TEM}\}$. Then it follows from Theorem~\ref{thm:wp:hlp} that the unique solution of the boundary value problem is \begin{align} \label{eq:rep:bE} \bE =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}d^{TE}_{mn}\bE_{mn}^{TE} + \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} d^{TM}_{ij}\bE_{ij}^{TM}+ d^{TEM}\bE_{E}^{TEM}\in [L^2(G^h)]^3,\\ \label{eq:rep:bH} \bH =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}d^{TE}_{mn}\bH_{mn}^{TE} + \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} d^{TM}_{ij}\bH_{ij}^{TM}+ d^{TEM}\bH_{E}^{TEM}\in [L^2(G^h)]^3, \end{align} where the modes $\bE_{mn}^{TE}$, $\bH_{mn}^{TE}$, $\bE_{ij}^{TM}$, $\bH_{ij}^{TM}$, $\bE_{E}^{TEM}$, $\bE_{H}^{TEM}$ are defined in \eqref{eq:E_TE_modes}-\eqref{eq:H_TEM_modes}. We have \begin{align*} ||\bE||_{[L^2(G^h)]^3}^2 =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}|d^{TE}_{mn}|^2||\bE_{mn}^{TE}||_{[L^2(G^h)]^3}^2 + \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} |d^{TM}_{ij}|^2||\bE_{ij}^{TM}||_{[L^2(G^h)]^3}^2\\ &+ |d^{TEM}|^2||\bE_{E}^{TEM}||_{[L^2(G^h)]^3}^2\\ =&\sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}|d^{TE}_{mn}|^2\frac{\lambda_{mn}^{N}}{|s_{mn}^N|}|s_{mn}^Nl + \sin(s_{mn}^Nl)| +\sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*}|d^{TM}_{ij}|^2\frac{\lambda_{ij}^{D}}{|s_{ij}^D|}|s_{ij}^Dl + \sin(s_{ij}^Dl)|\\ &+\sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*}|d^{TM}_{ij}|^2\frac{[\lambda_{ij}^{D}]^2}{|s_{ij}^D|^3}|s_{ij}^Dl - \sin(s_{ij}^Dl)|+|d^{TEM}|^22\pi \log(1+h)|kl + \sin(kl)|<\infty, \end{align*} and \begin{align*} ||\bH||_{[L^2(G^h)]^3}^2 =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}|d^{TE}_{mn}|^2||\bH_{mn}^{TE}||_{[L^2(G^h)]^3}^2 + \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} |d^{TM}_{ij}|^2||\bH_{ij}^{TM}||_{[L^2(G^h)]^3}^2\\ &+ |d^{TEM}|^2||\bH_{E}^{TEM}||_{[L^2(G^h)]^3}^2\\ =&\sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}|d^{TE}_{mn}|^2\frac{|s_{mn}^N|^2\lambda_{mn}^{N}}{k^2|s_{mn}^N|}|s_{mn}^Nl - \sin(s_{mn}^Nl)|\\ &+\sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}|d^{TE}_{mn}|^2\frac{|\lambda_{mn}^{N}|^2}{k^2|s_{mn}^N|}|s_{mn}^Nl +\sin(s_{mn}^Nl)|\\ &+\sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*}|d^{TM}_{ij}|^2\frac{k^2\lambda_{ij}^{D}}{|s_{ij}^D|^3}|s_{ij}^Dl -\sin(s_{ij}^Dl)|+|d^{TEM}|^22\pi \log(1+h)|kl -\sin(kl)|<+\infty. \end{align*} By Lemmas~\ref{lem:dir:eig} and ~\ref{lem:neu:eig}, $\lambda_{mn}^N,\lambda_{mn}^D\to+\infty$ as $m^2+n^2\to\infty$, so there holds \[ |s^o_{mn}|\eqsim \sqrt{\lambda_{mn}^o}, |s_{mn}^{o}l\pm \sin(s_{mn}^ol)|\eqsim |2\pm\sin(s_{mn}^{o}l)|,\quad{\rm for}\quad o=N,D, \] where $2$ is introduced to ensure that $|2\pm \sin(s_{mn}^o)l)|\geq 1$. In summary, we have the following proposition. \begin{myprop} Let $E, H$ be defined as in (\ref{eq:rep:bE})-(\ref{eq:rep:bH}). 
Then $||\bE||_{[L^2(G^h)]^3}^2<\infty$ and $||\bH||_{[L^2(G^h)]^3}^2<\infty$ if and only if \begin{equation} \label{eq:cond:coef} \left\{ \begin{array}{l} \{c_{mn}^{TE}:=d_{mn}^{TE}(\lambda_{mn}^N)^{3/4}|2+\sin(s_{mn}^Nl)|^{1/2}\}_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}\in \ell^2,\\ \{c_{ij}^{TM}:=d_{ij}^{TM}(\lambda_{ij}^D)^{1/4}|2+\sin(s_{ij}^Dl)|^{1/2}\}_{(i,j)\in \mathbb{Z}\times\mathbb{N}^*}\in \ell^2,\\ |d^{TEM}|<\infty, \end{array} \right. \end{equation} where $\ell^2$ denotes the space of square-summable sequences. On the other hand, for any Fourier coefficients $\{d_{mn}^{TE},d_{ij}^{TM},d^{TEM}\}$ satisfying (\ref{eq:cond:coef}), (\ref{eq:rep:bE}) and (\ref{eq:rep:bH}) provide the unique solution to (AHP) in $H(\cl, G^{h}_+)$ with $\nu\times \bE|_{A^h}\in\widetilde{H^{-1/2}}(\Dv,A^h)$. \end{myprop} \begin{myremark} Unless otherwise stated, here and hereafter, an $\ell^2$ sequence with two indices is arranged in the usual dictionary order. \end{myremark} \begin{myremark} Transforming the sequence $\{d_{mn}^{TE},d_{ij}^{TM}\}$ to an $\ell^2$ sequence $\{c_{mn}^{TE},c_{ij}^{TM}\}$ balances the magnitudes of the TE, TM, and TEM modes in the hole, which is essential in solving the eigenvalue problem formulated as an infinite-dimensional (INF) linear system by the mode matching method in the next section. As we shall see below, such a transformation eases the analysis of the mapping property of the related INF coefficient matrix and the reduction of the INF system into finite-dimensional ones. \end{myremark} \section{Quantitative analysis of scattering resonances} In this section, we quantitatively characterize the resonances for the scattering problem (E) in a bounded domain over the complex plane. These resonances are complex values of $k$ such that the homogeneous problem \eqref{eq:problemE_1}-\eqref{eq:problemE_4} with $\bE^{\rm inc} = \bH^{\rm inc}=0$ attains nontrivial solutions. \subsection{A vectorial mode matching formulation} We first develop a vectorial analogue of the mode matching method originally proposed in \cite{zholu21,luwanzho21} to reformulate the scattering problem (E) with trivial incident field. Before proceeding, we introduce the following bilinear form over $H^{-1/2}(\Dv,A^h)\times\widetilde{H}^{-1/2}(\Dv,A^h)$ (see \cite[p.~306]{kirhet15}): \[ \langle\bF,\bG\rangle := \langle \bF,\nu\times \bG \rangle_{A^h},\] where $\nu\times\bG=-(\bG\times\nu)\in \widetilde{H^{-1/2}}(\Cl,A^h)$, and $\langle \cdot,\cdot \rangle_{A^h}$ represents the duality pair between $H^{-1/2}(\Dv,A^h)$ and $\widetilde{H^{-1/2}}(\Cl,A^h)$. Let ${\cal S}_{k}$ be the following single-layer operator \begin{equation} \label{eq:Sh} {\cal S}_{k}[\phi](x) =\int_{A^{h}} \Phi_k(x;y)\phi(y)\, dS(y),\quad x\in A^{h}. \end{equation} Then ${\cal S}_{k}$ is bounded from $\widetilde{H^{-1/2}}(A^h)$ to $H^{1/2}(A^h)$. The following holds for the T2T map ${\cal L}_k$. \begin{mylemma}[\cite{kirhet15}, Lemma 5.61] For any $\bF,\bG\in \widetilde{H}^{-1/2}(\Dv,A^h)$, \begin{align} \langle{\cal L}_k[\bF],\bG \rangle =& \langle {\cal L}_k[\bG],\bF \rangle,\\ \label{eq:LkSk} \langle{\cal L}_k[\bF],\bG \rangle =& -\langle\Dv~\bG, {\cal S}_k[\Dv~\bF] \rangle_{A^h} + k^2\langle\bG, {\cal S}_k[\bF] \rangle_{A^h}, \end{align} where ${\cal S}_k[\bF]$ is taken componentwise, and it belongs to $H^{-1/2}(\Cl, A^h)$.
\end{mylemma} Now, from the integral equation formulation \eqref{eq:EH:hsp} and the tangential traces of $\bE$ and $\bH$ (\ref{eq:rep:bE}) (\ref{eq:rep:bH}) over the annular aperture $A^h$, when $\bE^{\rm inc} = \bH^{\rm inc}=0$, the homogeneous problem \eqref{eq:problemE_1}-\eqref{eq:problemE_4} can be formulated as the following system over the aperture $A^h$: \begin{align} \label{eq:t2t} \nu\times \bH|_{A^h} =& \frac{-2}{\bi k}{\cal L}_k[\nu\times \bE|_{A^h}],\\ \label{eq:rep:tE} \nu\times \bE|_{A^h} =& \sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}\frac{c^{TE}_{mn} \cdot 2\cos(s_{mn}^Nl/2) }{(\lambda_{mn}^N)^{3/4}|2+\sin(s_{mn}^Nl)|^{1/2}}{\cal R}\cl_2\ \psi_{mn}^N\nonumber\\ &+ \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} \frac{c^{TM}_{ij} \cdot 2\cos(s_{ij}^Dl/2) }{(\lambda_{ij}^{D})^{1/4}|2+\sin(s_{ij}^Dl)|^{1/2}}{\cal R}\nabla_2\psi_{ij}^{D} \nonumber\\ &+ d^{TEM} \cdot 2\cos(kl/2){\cal R}\nabla_2\log r,\\ \label{eq:rep:tH} \nu\times\bH|_{A^h}=&\sum_{(m,n)\in(\mathbb{Z}\times\mathbb{N})^*}\frac{c^{TE}_{mn} \cdot 2\bi s_{mn}^N\sin(s_{mn}^Nl/2) }{k(\lambda_{mn}^N)^{3/4}|2+\sin(s_{mn}^Nl)|^{1/2}}\cl_2\ \psi_{mn}^N\nonumber\\ &+ \sum_{(i,j)\in\mathbb{Z}\times\mathbb{N}^*} \frac{c^{TM}_{ij} \cdot 2\bi k\sin(s_{ij}^Dl/2) }{s_{ij}^D(\lambda_{ij}^{D})^{1/4}|2+\sin(s_{ij}^Dl)|^{1/2}}\nabla_2\psi_{ij}^D\nonumber\\ &+ d^{TEM} \cdot 2\bi \sin(kl/2)\nabla_2\log r. \end{align} At a resonance $k$, there exist nontrivial solutions $\{c^{TE}_{mn}, c^{TM}_{mn}, d^{TEM}\}$ for the above system. Using the completeness of the basis given in (\ref{eq:bs:tot}), the integral equation (\ref{eq:t2t}) is equivalent to the following system: \begin{align} \langle \nu\times \bH|_{A^h}, {\cal R}\cl_2 \overline{\psi_{m'n'}^{N}} \rangle =& \frac{-2}{\bi k} \langle{\cal L}_k[\nu\times \bE|_{A^h}], {\cal R}\cl_2 \overline{\psi_{m'n'}^{N}} \rangle,\quad (m',n')\in(\mathbb{Z}\times\mathbb{N})^*, \label{eq:system1} \\ \langle \nu\times \bH|_{A^h}, {\cal R}\nabla_2 \overline{\psi_{i'j'}^{D}} \rangle =& \frac{-2}{\bi k} \langle{\cal L}_k[\nu\times \bE|_{A^h}], {\cal R}\nabla_2 \overline{\psi_{i'j'}^{D}} \rangle,\quad(i',j')\in\mathbb{Z}\times\mathbb{N}^*, \label{eq:system2} \\ \langle \nu\times \bH|_{A^h}, {\cal R}\nabla_2 \log r \rangle =& \frac{-2}{\bi k} \langle{\cal L}_k[\nu\times \bE|_{A^h}], {\cal R}\nabla_2 \log r \rangle, \label{eq:system3} \end{align} where the overline represents the complex conjugate. 
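Although the system \eqref{eq:system1}-\eqref{eq:system3} will be analyzed asymptotically below, we note that, by \eqref{eq:LkSk}, each pairing $\langle{\cal L}_k[\cdot],\cdot\rangle$ appearing there reduces to weakly singular integrals of the kernel $\Phi_k$ over the aperture $A^h$. The following minimal sketch (Python) indicates how one such pairing $\langle \psi,{\cal S}_k[\phi]\rangle_{A^h}$ could be approximated numerically; the explicit kernel $\Phi_k(x;y)=e^{\bi k|x-y|}/(4\pi|x-y|)$ and the crude treatment of the singular diagonal are assumptions made here for illustration only, not the discretization used in this paper.
\begin{verbatim}
# Sketch: midpoint approximation of the weakly singular pairing
#   <psi, S_k[phi]>_{A^h} = int int Phi_k(x;y) phi(y) psi(x) dS(y) dS(x)
# over the annular aperture A^h = {1 <= |x| <= 1+h} in the plane x3 = 0,
# assuming Phi_k(x;y) = exp(i k |x-y|) / (4 pi |x-y|).
# The singular self-interaction cells are simply dropped; a production code
# would use a dedicated singular quadrature instead.
import numpy as np

def pairing_single_layer(k, h, phi, psi, nr=8, nt=64):
    r = 1.0 + (np.arange(nr) + 0.5) * h / nr        # radial midpoints
    t = (np.arange(nt) + 0.5) * 2.0 * np.pi / nt    # angular midpoints
    R, T = np.meshgrid(r, t, indexing="ij")
    pts = np.stack([R * np.cos(T), R * np.sin(T)], axis=-1).reshape(-1, 2)
    w = (R * (h / nr) * (2.0 * np.pi / nt)).reshape(-1)   # r dr dtheta
    fy = phi(pts[:, 0], pts[:, 1])                  # density values
    gx = psi(pts[:, 0], pts[:, 1])                  # test function values
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, 1.0)                        # placeholder, removed below
    K = np.exp(1j * k * d) / (4.0 * np.pi * d)
    np.fill_diagonal(K, 0.0)                        # drop singular self-terms
    return (w * gx) @ K @ (w * fy)

if __name__ == "__main__":
    val = pairing_single_layer(k=1.0, h=0.05,
                               phi=lambda x, y: np.ones_like(x),
                               psi=lambda x, y: x)
    print(val)
\end{verbatim}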
Using the expansions (\ref{eq:rep:tE}), (\ref{eq:rep:tH}), and the following identities \begin{align} \langle \cl_2 \psi_{mn}^N,{\cal R}\cl_2 \overline{\psi_{m'n'}^{N}} \rangle =& -\lambda_{mn}^{N}\delta_{mm'}\delta_{nn'}, \label{eq:identity1} \\ \langle \nabla_2 \psi_{ij}^D,{\cal R}\nabla_2 \overline{\psi_{i'j'}^{D}} \rangle =& -\lambda_{ij}^{D}\delta_{ii'}\delta_{jj'}, \label{eq:identity2} \\ \langle \nabla_2 \log r,{\cal R}\nabla_2 \log r \rangle =& -2\pi\log(1+h), \label{eq:identity3} \end{align} where $\delta_{m0}$ is the Kronecker delta function, the system \eqref{eq:system1}-\eqref{eq:system3} can be rewritten as an equation of INF matrices and INF vectors: \begin{align} \label{eq:INF:sys} &\left[ \begin{array}{lll} \bS^{TE}\bD^{TE} & & \\ & \bD^{TM} & \\ & & D^{TEM}\\ \end{array} \right]\left[ \begin{array}{l} \bc^{TE}\\ \bc^{TM}\\ d^{TEM}\\ \end{array} \right] \nonumber\\ =& \left[ \begin{array}{lll} \bA^{TE,TE} & \bA^{TE,TM} & \bC^{TE,TEM}\\ \bA^{TM,TE} & \bA^{TM,TM} & \bC^{TM,TEM}\\ \bR^{TEM,TE} & \bR^{TEM,TM} & A^{TEM,TEM}\\ \end{array} \right] \left[ \begin{array}{l} \bc^{TE}\\ \bc^{TM}\\ d^{TEM}\\ \end{array} \right]. \end{align} In the above, the unknown coefficients are given by the two INF column vectors $\bc^{TE} = [c_{m'n'}^{TE}]_{(m',n')\in(\mathbb{Z}\times\mathbb{N})^*}$ and $\bc^{TM} = [c_{i'j'}^{TM}]_{(i',j')\in\mathbb{Z}\times\mathbb{N}^*}$, and a complex number $d^{TEM}$. They represent the Fourier coefficients of the TE, TM, and TEM modes respectively in \eqref{eq:rep:tE} and \eqref{eq:rep:tH}. On the left side of the system, $D^{TEM} =\sin(kl/2)$, and the three INF diagonal matrices are given by \begin{align*} \bS^{TE} = &{\rm Diag}\{s_{mn}^N\},\\ \bD^{TE} =& {\rm Diag }\{\sin(s_{mn}^Nl/2)|2+\sin(s_{mn}^Nl)|^{-1/2}\},\\ \bD^{TM} =& {\rm Diag }\{\sin(s_{ij}^Dl/2)|2+\sin(s_{ij}^Dl)|^{-1/2}\}. \end{align*} The elements in the matrices are obtained from the field representation \eqref{eq:rep:tH} and the identities \eqref{eq:identity1}-\eqref{eq:identity3}. We use the superscripts to denote the contribution of each type of mode to the matrices. On the right side of the system, the four INF matrices are \begin{align} \bA^{TE,TE} =& \left[\frac{-2\cos(s_{mn}^Nl/2)|2+\sin(s_{mn}^Nl)|^{-1/2}}{(\lambda_{m'n'}^{N})^{1/4}(\lambda_{mn}^N)^{3/4}}\langle {\cal L}_k{\cal R}\cl_2\psi_{mn}^{N},{\cal R}\cl_2\overline{\psi_{m'n'}^{N}} \rangle\right],\nonumber\\ \bA^{TE,TM} =& \left[\frac{-2\cos(s_{ij}^Dl/2)|2+\sin(s_{ij}^Dl)|^{-1/2}}{(\lambda_{m'n'}^{N})^{1/4}(\lambda_{ij}^D)^{1/4}}\langle {\cal L}_k{\cal R}\nabla_2\psi_{ij}^{D},{\cal R}\cl_2\overline{\psi_{m'n'}^{N}} \rangle\right],\nonumber\\ \bA^{TM,TE} =& \left[\frac{-2s_{i'j'}^D\cos(s_{mn}^Nl/2)|2+\sin(s_{mn}^Nl)|^{-1/2}}{k^2(\lambda_{i'j'}^D)^{3/4}(\lambda_{mn}^N)^{3/4}}\langle {\cal L}_k{\cal R}\cl_2\psi_{mn}^{N},{\cal R}\nabla_2\overline{\psi_{i'j'}^{D}} \rangle\right],\nonumber\\ \bA^{TM,TM} =& \left[\frac{-2s_{i'j'}^D\cos(s_{ij}^Dl/2)|2+\sin(s_{ij}^Dl)|^{-1/2}}{k^2(\lambda_{i'j'}^D)^{3/4}(\lambda_{ij}^D)^{1/4}}\langle {\cal L}_k{\cal R}\nabla_2\psi_{ij}^{D},{\cal R}\nabla_2\overline{\psi_{i'j'}^{D}} \rangle\right]. 
\nonumber \end{align} The two INF column vectors $\bC^{TE,TEM}$ and $\bC^{TM,TEM}$, and two INF row vectors $\bR^{TEM,TE}$, and $\bR^{TEM,TM}$ are \begin{align} \bC^{TE,TEM} =& \left[\frac{-2\cos(kl/2)}{(\lambda_{m'n'}^{N})^{1/4}}\langle {\cal L}_k{\cal R}\nabla_2\log r,{\cal R}\cl_2\overline{\psi_{m'n'}^{N}} \rangle\right],\nonumber\\ \bC^{TM,TEM} =& \left[\frac{-2s_{i'j'}^D\cos(kl/2)}{k^2(\lambda_{i'j'}^D)^{3/4}}\langle {\cal L}_k{\cal R}\nabla_2\log r,{\cal R}\nabla_2\overline{\psi_{i'j'}^{D}} \rangle\right],\nonumber\\ \bR^{TEM,TE} =& \left[\frac{-\cos(s_{mn}^Nl/2)|2+\sin(s_{mn}^Nl)|^{1/2}}{(\lambda_{mn}^N)^{3/4}\pi k\log(1+h)}\langle {\cal L}_k{\cal R}\cl_2\psi_{mn}^{N},{\cal R}\nabla_2\log r \rangle\right],\nonumber\\ \bR^{TEM,TM} =& \left[\frac{-\cos(s_{ij}^Dl/2)|2+\sin(s_{ij}^Dl)|^{1/2}}{(\lambda_{ij}^D)^{1/4}\pi k\log(1+h)}\langle {\cal L}_k{\cal R}\nabla_2\psi_{ij}^{D},{\cal R}\nabla_2\log r \rangle\right],\nonumber \end{align} and the scalar \begin{align} A^{TEM,TEM} =\frac{-\cos(kl/2)}{\pi k\log(1+h)}\langle {\cal L}_k{\cal R}\nabla_2\log r,{\cal R}\nabla_2\log r \rangle.\nonumber \end{align} The elements in the matrices and vectors are obtained from using the expansion \eqref{eq:rep:tE} for the systems \eqref{eq:system1}-\eqref{eq:system3} and the identities \eqref{eq:identity1}-\eqref{eq:identity3}. Each pair of superscript for the matrix/vector denotes the interaction of two modes after applying the operator ${\cal L}_k$ to one mode. We set the following rules for the indices of the elements of the INF matrices/vectors: \begin{itemize} \item[(1).] $(m,n)$ and $(m',n')$ range over $(\mathbb{Z}\times\mathbb{N})^*$; \item[(2).] $(i,j)$ and $(i',j')$ range over $\mathbb{Z}\times\mathbb{N}^*$; \item[(3).] the index $(m,n)$ or $(i,j)$ is the column index of the matrix, while the prime index $(m',n')$ or $(i',j')$ is the row index of the matrix. \item[(4).] The columns (and rows) of each INF matrix are arranged in the dictionary order. \end{itemize} The product of the block INF matrix and the block INF vector in (\ref{eq:INF:sys}) is well-defined by the usual matrix-vector product. \subsection{Resonances for Problem (E)} We are ready to analyze the resonances for the scattering problem (E), which are the characteristic values of the system (\ref{eq:INF:sys}). We shall follow the avenues described below to derive their asymptotic expansions: \begin{enumerate} \item[(1).] First, we decompose the whole system (\ref{eq:INF:sys}) into a sequence of subsystems \eqref{eq:eig:m} with different angular momentum $m\in\mathbb{Z}$. \item[(2).] We further reduce each subsystem \eqref{eq:eig:m} to a nonlinear characteristic equation \eqref{eq:single} by projecting the solution onto the dominant resonant mode. Such a characteristic equation is called resonance condition. To this end, we estimate the contribution from the modes that are orthogonal to the resonant modes in each subsystem, which is accomplished by the asymptotic analysis of each matrix element with respect to the parameter $h$ and the key estimates are provided in Lemma \ref{lem:dm'm:h}. \item[(3).] Finally, we investigate the resonance condition \eqref{eq:single} and analyze its roots to obtain the asymptotic expansions of resonances. The main results for the resonances are summarized in Theorems~\ref{thm:even:res} and \ref{thm:odd:res}. 
\end{enumerate} \subsubsection{Subsystem for each angular momentum} For a function depending on the angle $\theta$, we use $\Theta(f)$ to denote its angular momentum so that the $\theta$-dependence of the function is given by $e^{\bi \Theta(f)\theta}$. For example, for $\psi_{ij}^D$ and $\psi_{mn}^N$ defined \eqref{eq:phimnD} and \eqref{eq:psimnN}, there holds $\Theta(\psi_{mn}^N)=m$ and $\Theta(\psi_{ij}^D)=i$. On the other hand, $\Theta(\log r)=0$. We have the following orthogonality relation for two basis functions with different momenta. \begin{mylemma} \label{lem:orth:theta} For any $f,g\in\{\psi_{mn}^N,\psi_{ij}^D,\log r\}_{(m,n,i,j)\in(\mathbb{Z}\times\mathbb{N})^*\times \mathbb{Z}\times\mathbb{N}^*}$ with $\Theta(f)\neq \Theta(g)$, there holds \begin{align} \langle {\cal L}_k {\cal R}{\rm Op}_1 [f],{\cal R}{\rm Op}_2 \overline{[g]} \rangle =& 0, \end{align} where ${\rm Op}_j$ represents one of the two operators $\{\cl_2,\nabla_2\}$, for $j=1,2$. \begin{proof} We only show the proof when $\Op_1 =\cl_2$, $\Op_2 = \nabla_2$, $f=\psi_{mn}^N$ and $g=\psi_{ij}^D$ with $m\neq i$. For simplicity, let $f(r,\theta)=f_n(r)e^{\bi m\theta}$ and $g(r',\theta')=g_j(r')e^{\bi i\theta'}$, where both $f_n$ and $g_j$ are real. A direction calculation gives \begin{align*} \nabla_2 f \cdot \cl_2'\bar{g} =& (f_n'(r)e^{\bi m\theta}\hat{r} +\bi m f_n(r)e^{\bi m\theta}\hat{\theta})\cdot (-g_j'(r')e^{-\bi i\theta'}\hat{\theta}'+\bi i f_n(r)e^{-\bi i\theta'}\hat{r}')\\ =& [h^1_{nj}(r,r')\cos(\theta-\theta') + h^2_{nj}(r,r')\sin(\theta-\theta')]e^{\bi m\theta-\bi i\theta'}, \end{align*} where $\hat{\theta}$ and $\hat{r}$ are the polar unit vectors, and $h^{o}_{nj},o=1,2$ are uniquely determined from $f_n$, $g_j$ and their first-order derivatives. Thus by (\ref{eq:LkSk}), \begin{align*} &\langle {\cal L}_k {\cal R}\cl_2 [f],{\cal R}\nabla_2' \overline{[g]} \rangle \\ =& -k^2\langle \nabla_2[f], {\cal S}_k[\cl_2'\overline{g}] \rangle_{A^h}\\ =&-k^2\int_{0}^{2\pi}e^{\bi (m-i)\theta'}d\theta' \int_{0}^{2\pi}d\theta\int_{[a,a(1+h)^2]}\frac{e^{\bi k \sqrt{r^2+r'^2-2rr'\cos\theta}}[h_{nj}^1\cos\theta+h_{nj}^2\sin\theta]}{4\pi|r^2+r'^2-2rr'\cos\theta|} e^{\bi m\theta}drdr'\\ =& \, 0. \end{align*} The proof for the other cases are similar. \end{proof} \end{mylemma} Using the above lemma, the full system (\ref{eq:INF:sys}) can be decoupled into a sequence of subproblems, where the elements in each subsystem attain the same angular dependence $e^{\bi m \theta}.$ More specifically, for each $m\in\mathbb{Z}$, we have \begin{align} \label{eq:eig:m} \left[ \begin{array}{lll} D_m & & \\ & \bI & \\ & & \bI \\ \end{array} \right]\left[ \begin{array}{l} d_m\\ \bc_m^{TE}\\ \bc_m^{TM}\\ \end{array} \right] = \left[ \begin{array}{lll} A_{mm} & \bR_m^{TE} & \bR_m^{TM}\\ \bC_m^{TE} & \bB_m^{TE,TE} & \bB_m^{TE,TM} \\ \bC_m^{TM} & \bB_m^{TM,TE} & \bB_m^{TM,TM} \\ \end{array} \right] \left[ \begin{array}{l} d_m\\ \bc_m^{TE}\\ \bc_m^{TM}\\ \end{array} \right]. \end{align} In the above, the unknown coefficients for each $m$ are \begin{align*} d_m= \left\{ \begin{array}{lc} d^{TEM},& m=0;\\ d_{m0}^{TE},& m\neq 0,\\ \end{array} \right.\quad \bc_m^{TE}=[D_{mn'}^{TE}c_{mn'}^{TE}]_{n'\in\mathbb{N}^*},\quad \bc_m^{TM}=[D_{mj'}^{TM}c_{mj'}^{TM}]_{j'\in\mathbb{N}^*}. \end{align*} ${\bI}$ denotes the INF identity matrix on $\ell^2$ such that $\bI\bc_m^j=\bc_m^j, j\in\{TE,TM\}$. The scalar $D_0=\sin (kl/2)$ and $D_m=s_{m0}^N\sin (s_{m0}^Nl/2)$ for $m\neq 0$. 
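Before specifying the remaining blocks, we record a small numerical check of the angular decoupling that underlies \eqref{eq:eig:m}. The decoupling rests on the observation, used in the proof of Lemma~\ref{lem:orth:theta}, that the kernel depends on the angles only through $\theta-\theta'$, so that testing against a different angular momentum integrates to zero over a period. In the toy computation below (Python; our own illustration, with an arbitrary smooth $2\pi$-periodic kernel standing in for the actual weakly singular kernel) this convolution-type orthogonality is verified numerically.
\begin{verbatim}
# Check of the angular decoupling: for any 2*pi-periodic kernel G depending
# only on theta - theta',
#   I(m, i) = int_0^{2pi} int_0^{2pi} G(theta - theta')
#             * exp(i m theta) * exp(-i i theta') dtheta dtheta'
# vanishes whenever m != i.  A smooth toy kernel replaces the actual
# weakly singular Helmholtz kernel appearing in the proof.
import numpy as np

def angular_pairing(m, i, G, n=512):
    t = np.arange(n) * 2.0 * np.pi / n              # periodic trapezoid rule
    w = 2.0 * np.pi / n
    T, Tp = np.meshgrid(t, t, indexing="ij")
    vals = G(T - Tp) * np.exp(1j * m * T) * np.exp(-1j * i * Tp)
    return vals.sum() * w * w

G = lambda s: np.exp(np.cos(s)) + 0.3 * np.cos(2.0 * s)   # toy kernel
print(abs(angular_pairing(2, 3, G)))   # ~ 0: different momenta decouple
print(abs(angular_pairing(2, 2, G)))   # nonzero: same momentum survives
\end{verbatim}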
The $3\times 3$ block matrices, relating to the $3\times 3$ block matrices on the right-side in (\ref{eq:INF:sys}) for each $m$, are given as follows: \begin{align*} & A_{mm} := \left\{ \begin{array}{lc} A^{TEM,TEM},& m=0;\\ |2+\sin(s_{m0}^Nl)|^{1/2}A_{m0,m0}^{TE,TE},& m\neq 0,\\ \end{array} \right.\\ \\ & \bR_m^{TE}:=[R_{n;m}^{TE}]_{n\in\mathbb{N^*}}=\left\{ \begin{array}{lc} \ \left[R_{mn}^{TEM,TE}(D^{TE}_{mn})^{-1}\right]_{n\in\mathbb{N}^*},& m=0;\\ \\ \ \left[(\lambda_{m0}^{N})^{-3/4}A^{TE,TE}_{m0,mn}(D^{TE}_{mn})^{-1}\right]_{n\in\mathbb{N}^*}, & m\neq 0,\\ \end{array} \right.\\ \\ & \bR_m^{TM}:=\left[R_{j;m}^{TM}\right]_{j\in\mathbb{N^*}}=\left\{ \begin{array}{lc} \ \left[R_{mj}^{TEM,TM}(D^{TM}_{mj})^{-1}\right]_{j\in\mathbb{N}^*},& m=0;\\ \\ \ \left[(\lambda_{m0}^{N})^{-3/4}A^{TE,TM}_{m0,mj}(D^{TM}_{mj})^{-1}\right]_{j\in\mathbb{N}^*}, & m\neq 0,\\ \end{array} \right.\\ \\ & \bC_m^{TE}:=\left[C_{n';m}^{TE}\right]_{n'\in\mathbb{N^*}}=\left\{ \begin{array}{lc} \ \left[(s_{mn'}^{N})^{-1}C_{mn'}^{TE,TEM}\right]_{n'\in\mathbb{N}^*},& m=0;\\ \\ \ \left[(s_{mn'}^{N})^{-1}A^{TE,TE}_{mn',m0}(\lambda_{m0}^N)^{3/4}|2+\sin(s_{m0}^Nl)|^{1/2}\right]_{n'\in\mathbb{N}^*}, & m\neq 0,\\ \end{array} \right.\\ \\ & \bC_m^{TM}:=\left[C_{j';m}^{TM}\right]_{j'\in\mathbb{N^*}}=\left\{ \begin{array}{lc} \ \left[C_{mj'}^{TM,TEM}\right]_{j'\in\mathbb{N}^*},& m=0;\\ \\ \ \left[A^{TM,TE}_{mj',m0}(\lambda_{m0}^N)^{3/4}|2+\sin(s_{m0}^Nl)|^{1/2}\right]_{j'\in\mathbb{N}^*}, & m\neq 0,\\ \end{array} \right. \\ \\ & \bB_m^{TE,TE} := \left[B^{TE,TE}_{n'n;m}=(s^N_{mn'})^{-1}A^{TE,TE}_{mn',mn}(D^{TE}_{mn})^{-1}\right]_{n',n\in\mathbb{N}^*},\\ \\ & \bB_m^{TM,TE} :=\left[B^{TM,TE}_{j'n;m}=A^{TM,TE}_{mj',mn}(D^{TE}_{mn})^{-1}\right]_{j',n\in\mathbb{N}^*},\\ \\ & \bB_m^{TE,TM} := \left[B^{TE,TM}_{n'j;m}=(s^{N}_{mn'})^{-1}A^{TE,TM}_{mn',mj}(D^{TM}_{mj})^{-1}\right]_{n',j\in\mathbb{N}^*},\\ \\ & \bB_m^{TM,TM} :=\left[B^{TM,TM}_{j'j;m}=A^{TM,TM}_{mj',mj}(D^{TM}_{mj})^{-1}\right]_{j',j\in\mathbb{N}^*}. \end{align*} In the above, $D^{TE}_{mn'}$ denotes the $mn'$-th diagonal element of $\bD^{TE}$ in \eqref{eq:INF:sys}, and $A^{TE,TE}_{mn',mn}$ denotes the $mn'$-th row, $mn$-th column element of $\bA^{TE,TE}$ in \eqref{eq:INF:sys}, etc. The square bracket $[\cdot]$ represents an INF matrix, an INF row vector or an INF column vector, with the subscript given by the following: \begin{itemize} \item[1.] The subscript $n\in\mathbb{N}^*$ (or $j\in\mathbb{N}^*$) represents the column index $n$ (or $j$) in a row vector; \item[2.] The subscript $n'\in\mathbb{N}^*$ or $j'\in\mathbb{N}^*$ with a prime represents row index $n'$ (or $j'$) a column vector; \item[3.] The subscript $n',n\in\mathbb{N}^*$ represents the column index $n'$ and row index $n$ for a matrix. \end{itemize} Consequently, to solve for \eqref{eq:INF:sys} it is equivalent to solve for $k\in{\cal B}$ such that, for each $m$, the system (\ref{eq:eig:m}) attains nonzero $\ell^2$-sequences $\{d_m, c_m^{TE},c_m^{TM}\}$. \subsubsection{Characteristic equation and resonance condition} To proceed, we transform each system (\ref{eq:eig:m}) to an equivalent characteristic equation. To this end, we first analyze the matrix elements in (\ref{eq:eig:m}) for $h\ll 1$. Let ${\cal S}_0$ be a single-layer potential over the interval $(0,1)$ given by \begin{align} ({\cal S}_0 [\phi])(r):= 2\int_{0}^{1}\frac{1}{2\pi}\log\frac{1}{|r-r'|} \phi(r')ds(r'), \end{align} wherein the kernel function is the fundamental solution of the 2D Laplacian. 
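For orientation, ${\cal S}_0$ applied to the constant density can be written down explicitly: an elementary integration of the logarithmic kernel gives ${\cal S}_0[1](r)=\pi^{-1}\left(1-r\log r-(1-r)\log(1-r)\right)$ for $r\in(0,1)$. The short sketch below (Python; our own illustrative check, not used in the analysis) compares this closed form with a naive midpoint quadrature of the defining integral.
\begin{verbatim}
# Sanity check of S_0: for the constant density phi = 1,
#   S_0[1](r) = (1/pi) * (1 - r*log(r) - (1-r)*log(1-r)),   0 < r < 1,
# by elementary integration of the logarithmic kernel.  We compare this
# closed form with a naive midpoint quadrature of the defining integral.
import numpy as np

def S0_const_quad(r, M=20000):
    rp = (np.arange(M) + 0.5) / M                   # midpoints of (0,1)
    return np.mean(-np.log(np.abs(r - rp))) / np.pi

def S0_const_exact(r):
    return (1.0 - r * np.log(r) - (1.0 - r) * np.log(1.0 - r)) / np.pi

for r in (0.1, 0.5, 0.9):
    print(r, S0_const_quad(r), S0_const_exact(r))
\end{verbatim}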
It is known that ${\cal S}_0$ is bounded from $\widetilde{H^{-1/2}}(0,1)$ to $H^{1/2}(0,1) = (\widetilde{H^{-1/2}}(0,1))'$ \cite[Lem. 2.1.2]{luwanzho21}. Let
\begin{equation}
\label{eq:phi}
\phi_n(r) =
\begin{cases}
\frac{1}{\sqrt{2}}\cos(n\pi r),& n\in\mathbb{N}^*,\\
1,& n=0.
\end{cases}
\end{equation}
Then, $\{\phi_n\}_{n\in\mathbb{N}}$ forms an orthonormal basis of the space $L^2(0,1)$. We equip $H^{1/2}(0,1)$ with the following norm:
\[ ||f||_{H^{1/2}(0,1)}^2:=\sum_{n=0}^{\infty}(1+n^2)^{1/2}|(f,\phi_n)_{L^2(0,1)}|^2, \]
and $\widetilde{H^{-1/2}}(0,1)$ with the norm
\[ ||f||_{\widetilde{H^{-1/2}}(0,1)}^2:=\sum_{n=0}^{\infty}(1+n^2)^{-1/2}|\langle f,\phi_n\rangle_{(0,1)} |^2, \]
where $\langle \cdot,\cdot \rangle_{(0,1)}$ denotes the duality pairing between $\widetilde{H^{-1/2}}(0,1)$ and $H^{1/2}(0,1)$. The estimates of the matrix elements in \eqref{eq:eig:m} are given in the following lemma:
\begin{mylemma}
\label{lem:dm'm:h}
Let $h\ll 1$ and $k\in{\cal B}$. For each $m\in\mathbb{Z}$, the following hold:
\begin{itemize}
\item[(i).] The element $B_{n'n;m}^{TE,TE}$ in the matrix $\bB_m^{TE,TE}$ attains the following asymptotic expansion:
\begin{align}
\label{eq:n'n:TETE}
B_{n'n;m}^{TE,TE} = -2({\cal S}_0[(n'\pi)^{1/2}\phi_{n'}],(n\pi)^{1/2}\phi_n)_{L^2(0,1)} + {\cal O}(h)\epsilon_{n'n;m}^{TE,TE},
\end{align}
where the INF matrix $\{\epsilon_{n'n;m}^{TE,TE}\}_{n',n=1}^{\infty}:\ell^2\to\ell^2$ is uniformly bounded for $h\ll1$.
\item[(ii).] The element $B_{n'j;m}^{TE,TM}$ in the INF matrix $\bB_m^{TE,TM}$ attains the following asymptotic expansion:
\begin{align}
\label{eq:n'j:TETM}
B_{n'j;m}^{TE,TM} = (1-\delta_{m0}){\cal O}(h)\epsilon_{n'j;m}^{TE,TM},
\end{align}
where the INF matrix $\{\epsilon_{n'j;m}^{TE,TM}\}_{n',j=1}^{\infty}:\ell^2\to\ell^2$ is uniformly bounded for $h\ll1$.
\item[(iii).] The element $B_{j'n;m}^{TM,TE}$ in the INF matrix $\bB_m^{TM,TE}$ attains the following asymptotic expansion:
\begin{align}
\label{eq:j'n:TMTE}
B_{j'n;m}^{TM,TE} = (1-\delta_{m0}){\cal O}(h)\epsilon_{j'n;m}^{TM,TE},
\end{align}
where the INF matrix $\{\epsilon_{j'n;m}^{TM,TE}\}_{j',n=1}^{\infty}:\ell^2\to\ell^2$ is uniformly bounded for $h\ll1$.
\item[(iv).] The element $B_{j'j;m}^{TM,TM}$ in the INF matrix $\bB_m^{TM,TM}$ attains the following asymptotic expansion:
\begin{align}
\label{eq:j'j:TMTM}
B_{j'j;m}^{TM,TM} = -2({\cal S}_0[(j'\pi)^{1/2}\phi_{j'}],(j\pi)^{1/2}\phi_j)_{L^2(0,1)} + {\cal O}(h)\epsilon_{j'j;m}^{TM,TM},
\end{align}
where the INF matrix $\{\epsilon_{j'j;m}^{TM,TM}\}_{j',j=1}^{\infty}:\ell^2\to\ell^2$ is uniformly bounded for $h\ll1$.
\item[(v).] The two INF column vectors $\bC_m^{TE}$ and $\bC_m^{TM}$ are uniformly bounded in $\ell^2$ as $h\to 0^+$. For $m\neq 0$,
\begin{align}
\label{eq:Cn'mTE}
C_{n';m}^{TE} =& -\bi \lambda_{m0}\cos(s_{m0}^Nl/2)h^{1/2}\left[({\cal S}_0[(n'\pi)^{1/2}\phi_{ n' }],\phi_{0})_{L^2(0,1)} + {\cal O}(h\log h)\epsilon_{n';m}^{CTE} \right],\\
\label{eq:Cj'mTM}
C_{j';m}^{TM} =& m\cos(s_{m0}^Nl/2)h^{1/2}\left[({\cal S}_0[\phi_0],(j'\pi)^{1/2}\phi_{j'})_{L^2(0,1)} +{\cal O}(h\log h)\epsilon_{j';m}^{CTM}\right],
\end{align}
and for $m=0$,
\begin{align}
\label{eq:Cn'mTE:m=0}
C_{n';m}^{TE} =& 0,\\
\label{eq:Cj'mTM:m=0}
C_{j';m}^{TM} =& -\sqrt{2\pi}\bi \cos(kl/2)h\left[({\cal S}_0[\phi_0],(j'\pi)^{1/2}\phi_{j'})_{L^2(0,1)} +{\cal O}(h\log h)\epsilon_{j';m}^{CTM}\right],
\end{align}
where the two INF column vectors $\{\epsilon_{n';m}^{CTE}\}$ and $\{\epsilon_{j';m}^{CTM}\}$ are uniformly bounded in $\ell^2$ for $h\ll1$.
\item[(vi).]
The two INF row vectors $\bR_m^{TE}$ and $\bR_m^{TM}$ are uniformly bounded in $\ell^2$ as $h\to 0+$. For $m\neq 0$, \begin{align} \label{eq:RnmTE} R_{n;m}^{TE} =& -\bi h^{1/2}({\cal S}_0[\phi_{0}],(n\pi)^{1/2}\phi_{ n })_{L^2(0,1)} + {\cal O}(h^{3/2} \log h)\{\epsilon_{n;m}^{RTE}\},\\ \label{eq:RjmTM} R_{j;m}^{TM} =& \frac{m}{\lambda_{m0}}k^2h^{1/2}\left[({\cal S}_0[\phi_0],(j\pi)^{1/2}\phi_{j})_{L^2(0,1)} + {\cal O}(h\log h)\{\epsilon_{j;m}^{RTM}\} \right], \end{align} and for $m=0$, \begin{align} \label{eq:RnmTE:m=0} R_{n;m}^{TE} =& 0,\\ \label{eq:RjmTM:m=0} R_{j;m}^{TM} =& \frac{\bi k}{\sqrt{2\pi}} \left[({\cal S}_0[(j\pi)^{1/2}\phi_{j}],\phi_0)_{L^2(0,1)} + {\cal O}(h\log h)\{\epsilon_{j;m}^{RTM}\} \right], \end{align} where the two INF row vectors $\{\epsilon_{n;m}^{RTE}\}$ and $\{\epsilon_{j;m}^{RTM}\}$ in $\ell^2$ are uniformly bounded for $h\ll1$. \item[(vii).] As $h\to 0^+$, for $m\neq 0$, \begin{align} \label{eq:Amm} A_{mm} =& 2\cos(s_{m0}^Nl/2)\lambda_{m0}^N[-h\frac{\log h}{4\pi} + \alpha_m(k)h + \bi \beta_m(k)h]\nonumber\\ &-\frac{2k^2m^2\cos(s_{m0}^Nl/2)}{\lambda_{m0}}[-h\frac{\log h}{4\pi} + \tilde{\alpha}_m(k)h + \bi \tilde{\beta}_m(k)h ] \nonumber\\ &+ \cos(s_{m0}^Nl/2){\cal O}(h^2\log h), \end{align} and for $m=0$, \begin{align} \label{eq:Amm:m=0} A_{mm} =& -\cos(kl/2)k[-h\frac{\log h}{4\pi} + \alpha_1(k)h + \bi \beta_1(k)h + {\cal O}(h^2\log h)], \end{align} where for $m\in\mathbb{Z}$, \begin{align} \label{eq:alpham:k} \alpha_m(k) =&\frac{3}{8\pi}+ \frac{1}{\pi}\int_{0}^{\pi/2}\frac{(\cos(k\sin(\theta))-1)\cos(2m\theta)}{\sin(\theta)}d\theta \nonumber\\ &+ \frac{1}{4\pi}[\log 2 -\gamma - \psi(|m|+1/2)],\\ \label{eq:betam:k} \beta_m(k) = &\frac{1}{\pi}\int_{0}^{\pi/2}\frac{\sin(k\sin(\theta))\cos(2m\theta)}{\sin(\theta)}d\theta=\frac{1}{2}\int_{0}^{k}J_{2m}(t)dt,\\ \label{eq:talpham:k} \tilde{\alpha}_m(k)=&\frac{\alpha_{m+1}(k)+\alpha_{m-1}(k)}{2},\\ \label{eq:tbetam:k} \tilde{\beta}_m(k)=&\frac{\beta_{m+1}(k)+\beta_{m-1}(k)}{2}, \end{align} $\gamma$ is Euler's constant, and $\psi$ denotes the logarithmic derivative of gamma function (c.f. \cite[\S 5.2(i)]{nist10}). \end{itemize} In the above, the prefactors in the ${\cal O}$-notations depend only on ${\cal B}$ and $m$. \begin{proof} Details of the proof are presented in Appendix C. \end{proof} \end{mylemma} Now, we define three INF matrices \begin{align*} {\bB}_m:=\left[ \begin{array}{lll} \bB_m^{TE,TE} & \bB_m^{TE,TM} \\ \bB_m^{TM,TE} & \bB_m^{TM,TM} \\ \end{array} \right],\ {\bP}:=[p_{n'n}]_{n',n\in\mathbb{N}^*},\ {\bP}_2:=\left[ \begin{array}{lll} \bP & \\ & \bP \\ \end{array} \right], \end{align*} which are uniformly bounded from $\ell^2$ to $\ell^2$ for $h\ll 1$, and four INF column/row vectors \begin{align*} \bc_m:=\left[ \begin{array}{l} \bc_m^{TE}\\ \bc_m^{TM} \\ \end{array} \right],\quad \bR_m:=\left[ \begin{array}{ll} \bR_m^{TE} & \bR_m^{TM} \\ \end{array} \right],\quad\bC_m:=\left[ \begin{array}{l} \bC_m^{TE}\\ \bC_m^{TM} \\ \end{array} \right],\quad \bp:=[p_{n'0}]_{n'\in\mathbb{N}^*}, \end{align*} uniformly bounded in $\ell^2$. In the above, the element \[ p_{n'n} =\begin{cases} ({\cal S}_0[(n'\pi)^{1/2}\phi_{n'}],(n\pi)^{1/2}\phi_n)_{L^2(0,1)} & n\in\mathbb{N}^*;\\ ({\cal S}_0[(n'\pi)^{1/2}\phi_{n'}],\phi_0)_{L^2(0,1)} & n=0;\\ \end{cases} \] for $n\in\mathbb{N}$, where $\phi_{n}$ was defined in (\ref{eq:phi}). 
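The entries $p_{n'n}$ and the constant $\bp^{T}(\bI+2\bP)^{-1}\bp$ appearing in the expansions below can also be approximated numerically by truncating $\bP$ and $\bp$. The sketch below (Python) does this with a crude quadrature for ${\cal S}_0$ that treats the singular diagonal cells exactly via $\int_0^w\!\int_0^w-\log|x-y|\,dx\,dy=w^2(3/2-\log w)$; the truncation order and the quadrature are our own choices for illustration and are not part of the analysis.
\begin{verbatim}
# Crude numerical assembly of truncations of the matrix P = [p_{n'n}] and the
# vector p = [p_{n'0}] defined above, with phi_n as in (eq:phi) and S_0 the
# logarithmic single-layer potential on (0,1).  Midpoint quadrature is used,
# except that the singular diagonal cells are integrated exactly via
#   int_0^w int_0^w -log|x - y| dx dy = w^2 * (3/2 - log(w)).
import numpy as np

def phi(n, r):
    return np.ones_like(r) if n == 0 else np.cos(n * np.pi * r) / np.sqrt(2.0)

def wgt(n):
    return 1.0 if n == 0 else np.sqrt(n * np.pi)

def build_K(M=400):
    w = 1.0 / M
    r = (np.arange(M) + 0.5) * w
    D = np.abs(r[:, None] - r[None, :])
    np.fill_diagonal(D, 1.0)               # placeholder, overwritten below
    K = -np.log(D) * w * w
    np.fill_diagonal(K, w * w * (1.5 - np.log(w)))
    return r, K

def p_entry(nprime, n, r, K):
    f = wgt(nprime) * phi(nprime, r)       # density acted on by S_0
    g = wgt(n) * phi(n, r)                 # test function
    return (g @ K @ f) / np.pi             # (S_0[f], g)_{L^2(0,1)}

r, K = build_K()
N = 20                                     # truncation order
P = np.array([[p_entry(i, j, r, K) for j in range(1, N + 1)]
              for i in range(1, N + 1)])
p = np.array([p_entry(i, 0, r, K) for i in range(1, N + 1)])
print(p @ np.linalg.solve(np.eye(N) + 2.0 * P, p))
# A closed-form value of this constant is reported in the remark following
# Theorem thm:even:res.
\end{verbatim}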
Then, equation (\ref{eq:eig:m}) becomes
\begin{align}
\label{eq:eig:m:2}
\left[
\begin{array}{ll}
D_m - A_{mm} & -\bR_m\\
-\bC_m & \bI - \bB_m \\
\end{array}
\right]\left[
\begin{array}{l}
d_m\\
\bc_m\\
\end{array}
\right] = {\bf 0},\quad m\in\mathbb{Z}.
\end{align}
We have the following lemma.
\begin{mylemma}
\label{lem:inv:I-Bm}
For $0<h\ll 1$ and $k\in{\cal B}$, $\bB_m$ is uniformly bounded from $\ell^2$ to $\ell^2$ with
\begin{align}
\label{eq:bounded:Bm}
||\bB_m + 2{\bP}_2|| = {\cal O}(h)\quad \mbox{as} \; h\to 0^+.
\end{align}
Moreover, $\bI-\bB_m$ and $\bI + 2\bP_2$ attain uniformly bounded inverses for $h\ll1$, and there holds
\begin{align}
\label{eq:inv:Bm}
||({\bI}-\bB_m)^{-1} - ({\bI}+2{\bP}_2)^{-1}|| = {\cal O}(h)\quad \mbox{as} \; h\to 0^+.
\end{align}
In the above, the prefactors in the ${\cal O}$-notations depend only on $m$ and ${\cal B}$.
\begin{proof}
The estimate~\eqref{eq:bounded:Bm} follows from Lemma~\ref{lem:dm'm:h} (i)-(iv). Using the Neumann series, we see that (\ref{eq:inv:Bm}) holds if $\bI + 2\bP_2$ is invertible, which is true since ${\cal S}_0$ is positive and bounded below \cite[Cor. 8.13]{mcl00}.
\end{proof}
\end{mylemma}
From the invertibility of the operator $\bI-\bB_m$ in the above lemma, for each $m\in \mathbb{Z}$ the system (\ref{eq:eig:m:2}) can be further reduced to the following single nonlinear equation:
\begin{equation}
\label{eq:single}
\left[D_m(k) - A_{mm}(k)- \bR_m(k)(\bI-\bB_m(k))^{-1}\bC_m(k) \right]d_m = 0,
\end{equation}
where we make the argument $k$ explicit to emphasize the dependence of the equation on $k$. We call \eqref{eq:single} the characteristic equation, which is the resonance condition for the scattering problem \eqref{eq:problemE_1} - \eqref{eq:problemE_4}.
\subsubsection{Asymptotic analysis of resonances}
We are ready to state and prove our first main result for the resonances.
\begin{mytheorem}
\label{thm:even:res}
Assume that $h\ll 1$. Then the resonances for the scattering problem \eqref{eq:problemE_1} - \eqref{eq:problemE_4} in the bounded region $\mathcal{B}$ are given as follows:
\begin{itemize}
\item[(i)] For each given integer $m\neq0$, there exists a finite sequence of resonances in $\mathcal{B}$,
\begin{equation}
\label{eq:even:res}
k^*_{m,2m'}=k_{m,2m'} -\frac{m^2h}{2k_{m,2m'}}- \frac{2\Pi_m(k_{m,2m'},h)}{k_{m,2m'}l} + {\cal O}(h^2\log^2h), \quad m'\in\mathbb{N}^* \; \mbox{is bounded},
\end{equation}
and a near-$|m|$ resonance
\begin{equation}
\label{eq:even:nearm}
k^*_m = |m| - \frac{|m| h}{2} - \frac{|m| h}{l}\left[ (\tilde{\alpha}_m(|m|)-\alpha_m(|m|)) +\bi(\tilde{\beta}_m(|m|)-\beta_m(|m|))\right] + {\cal O}(h^2\log h),
\end{equation}
where $k_{m,2m'} = \sqrt{m^2 + \frac{(2m')^2\pi^2}{l^2}}$,
\begin{align}
\label{eq:Pim}
\Pi_m(k,h) =& \frac{(m^2-k^2)}{2\pi}h\log h +2k^2h(\tilde{\alpha}_m(k) +\bi\tilde{\beta}_m(k)) - 2m^2h(\alpha_m(k)+\bi\beta_m(k))\nonumber\\
&+(m^2-k^2)h(\bp^{T}(\bI+2\bP)^{-1}\bp),
\end{align}
and $\alpha_m$, $\beta_m$, $\tilde{\alpha}_m$ and $\tilde{\beta}_m$ are defined in (\ref{eq:alpham:k})-(\ref{eq:tbetam:k}).
\item[(ii)] For $m = 0$, there exists a finite sequence of resonances in $\mathcal{B}$,
\begin{equation}
\label{eq:even:res:m=0}
k^*_{0,2m'} =k_{0,2m'}-2k_{0,2m'}\Pi_0(k_{0,2m'},h) + {\cal O}(h^2\log^2 h),
\end{equation}
where
\begin{align}
\label{eq:Pi0}
\Pi_0(k,h) = & -h\frac{\log h}{4\pi} + \alpha_1(k)h + \bi \beta_1(k)h - h\bp^{T}(\bI+2\bP)^{-1}\bp.
\end{align} \end{itemize} Moreover, each resonance in \eqref{eq:even:res} - \eqref{eq:even:res:m=0} obtains an imaginary part of order ${\cal O}(h)$. \begin{proof} We obtain the resonances for the scattering problem by solving for the characteristic values satisfying \begin{equation} \label{eq:single:m} D_m(k) =A_{mm}(k)+\bR_m(k)(\bI-\bB_m(k))^{-1}\bC_m(k) \end{equation} for each $m \neq 0$, which reads \begin{equation} \label{eq:tmp2} s_{m0}^N(e^{\bi s_{m0}^N l/2}-e^{-\bi s_{m0}^N l/2}) = -\bi(e^{\bi s_{m0}^N l/2}+e^{-\bi s_{m0}^N l/2})\left[\Pi_m(k,h) + {\cal O}(h^2\log h) \right]. \end{equation} Recall that $s_{m0}^N$ is defined by (\ref{eq-s}) with $\lambda_{m0}^N =(\beta_{m0}^N)^2$ given in (\ref{eq-lambdaN}). Note that $\beta_{m0}^N$ attains asymptotic expansion (\ref{eq:asy:betam2:N}) as $h \to 0$. We first find the resonances that are away from the integer number $m$ as $h \to 0$. More precisely, at such resonances, we have $$ \liminf_{h\to 0}|s_{m0}^N|>0. $$ To proceed, note that $e^{\bi s_{m0}^N l/2}-e^{-\bi s_{m0}^N l/2}={\cal O}(h\log h)$ for $h\ll 1$, since the r.h.s. of (\ref{eq:tmp2}) is ${\cal O}(h\log h)$. Therefore, we have for some $m'\in\mathbb{N}^*$ that \[ \epsilon_{mm'}:=s_{m0}^Nl -2m'\pi =o(1),\quad {\rm as}\quad h\to 0^+, \] and \eqref{eq:tmp2} leads to \[ 1 - e^{\bi \epsilon_{mm'}} = \frac{2\bi \Pi_m(k,h)}{s^N_{m0}+\bi \Pi_m(k,h)} + {\cal O}(h^2\log h). \] By Taylor's expansion of $\log(1-2x/(s_{m0}^N+x))$ at $x=0$ and by $\Pi_m(k,h)={\cal O}(h\log h)$, \begin{align*} \epsilon_{mm'} = -\bi\log\left[1 - \frac{2 \bi \Pi_m(k,h)}{s_{m0}^N+\bi\Pi_m(k,h)} -{\cal O}(h^2\log h)\right] =\frac{-2\Pi_m(k,h)}{s_{m0}^N} + {\cal O}(h^2\log^2 h). \end{align*} Therefore, $\epsilon_{m,2m'}={\cal O}(h\log h)$ so that $s_{m0}^Nl=2m'\pi + {\cal O}(h\log h)$. But by (\ref{eq:asy:betam2:N}), \[ k = \sqrt{(\lambda_{m0}^N)^2 + (s_{m0}^N)^2} = \sqrt{m^2- m^2 h + s_{m0}^2} + {\cal O}(h^2). \] We thus have \[ k = k_{m,2m'} + {\cal O}(h\log h). \] Now, according to the definition of $\Pi_m$ in (\ref{eq:Pim}), we have \[ \Pi_m(k,h) = \Pi_m(k_{m,2m'},h) + {\cal O}(h^2\log^2 h), \] so that \[ \epsilon_{mm'} = -\frac{2l\Pi_m(k_{m,2m'},h)}{(2m')\pi} + {\cal O}(h^2\log^2 h). \] Hence \begin{align*} s^N_{m0} =& \frac{(2m')\pi}{l} -\frac{2\Pi_m(k_{m,2m'},h)}{(2m')\pi} + {\cal O}(h^2\log^2 h), \end{align*} and one obtains the expansion (\ref{eq:even:res}). Therefore, resonances $k$ satisfying (\ref{eq:single:m}) attain the asymptotic expansion (\ref{eq:odd:res}) for $h\ll 1$ for some $m'\in\mathbb{N}^*$. As for the existence of resonances, one notices that when $k$ lies in the region $D_h=\{k\in\mathbb{C}:{\rm Re}(k)>0,|s_{m0}^N(k)l -(2m')\pi|\leq h^{1/2}\}\subset {\cal B}$, the following holds on the boundary of this disk \begin{align*} &\Bigg|\left[(e^{\bi s_{m0}^N l}+1) -(\bi s_{m0}^N)^{-1}(e^{\bi s_{m0}^N l}-1)\left[ \Pi_{m}(k,h) +{\cal O}(h^2\log h)\right] \right] - \left[\bi (s_{m0}^N l - (2m')\pi) \right]\Bigg| \\ &={\cal O}(h)\leq \sqrt{h}=|\bi (s_{m0}^N l - (2m')\pi)|. \end{align*} Rouch\'e's theorem states that there exists a unique root for (\ref{eq:single:m}) in $D_h$. Similarly one can verify the expansion (\ref{eq:even:res:m=0}) for $m=0$. Finally, we solve for resonances that are asymptotically close to the integer $m$ when $h \to 0$. To do so, assume that $s_{m0}^N=o(1)$, as $h\to 0^+$. 
Since $(e^{\bi s_{m0}^Nl /2}+e^{-\bi s_{m0}^N l/2}) = 1 + {\cal O}([s_{m0}^N]^2)$, we have
\begin{align*}
\bi [s_{m0}^N]^2l=& s_{m0}^N(e^{\bi s_{m0}^N l/2}-e^{-\bi s_{m0}^N l/2})+{\cal O}([s_{m0}^N]^4)\\
=& -\bi\left[\Pi_m(k,h) + {\cal O}(h^2\log h) \right] + {\cal O}([s_{m0}^N]^2h\log h) + {\cal O}([s_{m0}^N]^4)\\
=& -\bi \Pi_m(m,h)+ {\cal O}(h^2\log h) + {\cal O}([s_{m0}^N]^2h\log h) + {\cal O}([s_{m0}^N]^4)\\
=&-2m^2h\bi\left[ (\tilde{\alpha}_m(m) +\bi\tilde{\beta}_m(m)) - (\alpha_m(m)+\bi\beta_m(m)) \right]\\
&+ {\cal O}(h^2\log h) + {\cal O}([s_{m0}^N]^2h\log h) + {\cal O}([s_{m0}^N]^4).
\end{align*}
Thus, $[s_{m0}^N]^2= {\cal O}(h)$ and
\begin{align*}
[s_{m0}^N]^2=& -2m^2hl^{-1}\left[ (\tilde{\alpha}_m(m) +\bi\tilde{\beta}_m(m)) - (\alpha_m(m)+\bi\beta_m(m)) \right]+ {\cal O}(h^2\log h),
\end{align*}
which implies (\ref{eq:even:nearm}).
\end{proof}
\end{mytheorem}
\begin{myremark}
In fact, the constant $\bp^{T}(\bI+2\bP)^{-1}\bp$ in the expansions~(\ref{eq:even:res}) and~(\ref{eq:even:res:m=0}) is given explicitly as
\[ \bp^{T}(\bI+2\bP)^{-1}\bp = \frac{1}{2\pi^2} - \frac{1}{\pi^2}\log\left( \frac{\pi}{2} \right). \]
It is obtained in \cite{holsch19} for the 2D slit problem using the matched asymptotic expansion and also numerically verified in \cite{zholu21}.
\end{myremark}
\begin{myremark}
The resonances attain imaginary parts of order ${\cal O}(h)$; thus they are very close to the real axis when $h\ll1$. We point out that $k^*_{m,2m'}$ and $k^*_{m}$ given in \eqref{eq:even:res} and \eqref{eq:even:nearm} are resonances associated with the TE modes in the annular hole. Note that the leading-order term of the resonances $k^*_{m,2m'}$ depends on the metal thickness $l$, while that of the resonances $k^*_{m}$ is independent of $l$. The independence of the latter on the metal thickness is also called the epsilon-near-zero phenomenon \cite{yoo16}. On the other hand, $k^*_{0,2m'}$ given in \eqref{eq:even:res:m=0} are resonances associated with the TEM mode in the annular hole. As discussed in Section 5, the excitation of these two types of resonances is very different.
\end{myremark}
\subsection{Resonances for Problem (O)}
In this section, we characterize the resonances for the scattering problem (O). Due to the similarity between the even problem \eqref{eq:problemE_1} - \eqref{eq:problemE_4} and the odd problem \eqref{eq:problemO_1} - \eqref{eq:problemO_3}, we shall directly state the differences and the final results. By the same vectorial mode matching procedure, we can still obtain the linear system (\ref{eq:INF:sys}) but with the following replacements: on the l.h.s.,
\begin{align*}
\sin(kl/2)\to \cos(kl/2),\quad \sin(s_{mn}^Nl/2)\to \cos(s_{mn}^Nl/2),\quad \sin(s_{ij}^Dl/2)\to \cos(s_{ij}^Dl/2);
\end{align*}
on the r.h.s.,
\begin{align*}
\cos(kl/2)\to -\sin(kl/2),\quad \cos(s_{mn}^Nl/2)\to -\sin(s_{mn}^Nl/2),\quad \cos(s_{ij}^Dl/2)\to -\sin(s_{ij}^Dl/2),
\end{align*}
and the auxiliary coefficients $|2+\sin(s_{mn}^Nl)|$ and $|2+\sin(s_{ij}^Dl)|$ on both sides remain unchanged. With the above minor changes, we obtain the eigenvalue problem (\ref{eq:eig:m:2}) and the characteristic equation (\ref{eq:single:m}) with $D_m$, $A_{mm}$, $\bR_m$, $\bB_m$ and $\bC_m$ changed accordingly. From now on, we shall add the superscript $o$ (or $e$) to all the elements in (\ref{eq:eig:m:2}) to indicate that they are for Problem (O) (or (E)). The asymptotic expansions of the resonances are stated in Theorem~\ref{thm:odd:res} below.
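Before stating the theorem, we note that the coefficients $\alpha_m(k)$ and $\beta_m(k)$ entering $\Pi_m$, and hence the imaginary parts of the resonances, are straightforward to evaluate numerically. The short sketch below (Python with SciPy; an illustrative check of ours, not part of the analysis) verifies the two representations of $\beta_m(k)$ in \eqref{eq:betam:k} against each other.
\begin{verbatim}
# Check of the two representations of beta_m(k) in (eq:betam:k):
#   beta_m(k) = (1/pi) int_0^{pi/2} sin(k sin t) cos(2 m t) / sin(t) dt
#             = (1/2)  int_0^{k}    J_{2m}(s) ds.
# These coefficients control the imaginary parts of the resonances.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def beta_oscillatory(m, k):
    f = lambda t: np.sin(k * np.sin(t)) * np.cos(2 * m * t) / np.sin(t)
    return quad(f, 0.0, np.pi / 2.0)[0] / np.pi

def beta_bessel(m, k):
    return 0.5 * quad(lambda s: jv(2 * m, s), 0.0, k)[0]

for m in (0, 1, 2):
    for k in (0.5, 1.0, 3.0):
        print(m, k, beta_oscillatory(m, k), beta_bessel(m, k))
\end{verbatim}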
\begin{mytheorem}
\label{thm:odd:res}
Assume that $h\ll 1$. Then, for each given integer $m\neq0$, there exists a finite sequence of resonances for Problem (O) in $\mathcal{B}$:
\begin{equation}
\label{eq:odd:res}
k^*_{m,2m'+1}=k_{m,2m'+1} -\frac{m^2h}{2k_{m,2m'+1}}- \frac{2\Pi_m(k_{m,2m'+1},h)}{k_{m,2m'+1}l} + {\cal O}(h^2\log^2h),\quad m'\in\mathbb{N},
\end{equation}
and a finite sequence of resonances when $m=0$:
\begin{equation}
\label{eq:odd:res:m=0}
k^*_{0,2m'+1} =k_{0,2m'+1}-2k_{0,2m'+1}\Pi_0(k_{0,2m'+1},h) + {\cal O}(h^2\log^2 h),\quad m'\in\mathbb{N},
\end{equation}
where $k_{m,2m'+1} = \sqrt{m^2 + \frac{(2m'+1)^2\pi^2}{l^2}}$.
\begin{proof}
For the scattering problem (O), the characteristic equation (\ref{eq:single:m}) becomes
\begin{align}
\label{eq:tmp1}
\bi s_{m0}^N(e^{\bi s_{m0}^N l}+1) =&(e^{\bi s_{m0}^N l}-1)\left[ \Pi_{m}(k,h) +{\cal O}(h^2\log h)\right].
\end{align}
We claim that the trivial solution $k=\sqrt{\lambda_{m0}^N}$ is not a resonance. According to Lemmas~\ref{lem:dm'm:h} (v) and \ref{lem:inv:I-Bm}, $\bC^o_m\equiv 0$ so that $\bc^o_m\equiv 0$. But (\ref{eq:rep:tE}) and (\ref{eq:rep:tH}) then imply $\nu\times \bE^o =\nu\times\bH^o = 0$ on $A^h$ so that $\bE^o=\bH^o\equiv 0$ in the whole space $\mathbb{R}^3\backslash\overline{\Omega_{\rm M}}$. Moreover,
\begin{align*}
\lim_{h\to 0}\frac{2}{l} =& \lim_{h\to 0}\lim_{k\to \sqrt{\lambda_{m0}^N}}\frac{\bi s_{m0}^N(e^{\bi s_{m0}^N l}+1)}{(e^{\bi s_{m0}^N l}-1)} = \lim_{h\to 0}\lim_{k\to \sqrt{\lambda_{m0}^N}}\left[ \Pi_{m}(k,h) +{\cal O}(h^2\log h)\right] = 0,
\end{align*}
which is impossible. Thus, there is no resonance near $\sqrt{\lambda_{m0}^N}$ for Problem (O), which is the main difference compared with Problem (E). The proof of the expansions \eqref{eq:odd:res} and \eqref{eq:odd:res:m=0} follows the same lines as that of Theorem~\ref{thm:even:res}.
\end{proof}
\end{mytheorem}
\section{Electromagnetic field enhancement at resonant frequencies}
In this section, we solve the scattering problem (\ref{eq:E}) - (\ref{eq:smc}) when the incident wave $\{ {\bf E}^{\rm inc}, {\bf H}^{\rm inc} \}$ is present and study the electromagnetic field enhancement.
\subsection{Field enhancement due to the excitation of a TE mode in the annular hole}
Let us first consider the scattering problem when the incident frequency coincides with the real part of the resonance $k_{1,m'}^*$ ($m'\in\mathbb{N}^*$) in \eqref{eq:even:res} or \eqref{eq:odd:res}, or the resonance $k_1^*$ in \eqref{eq:even:nearm}. For conciseness of the presentation, we only show the calculations for normal incidence, for which the polarization vectors in \eqref{eq:pla:inc} are given by ${\bf E}^0=(0,1,0)^{T}$ and ${\bf H}^0=(1,0,0)^{T}$.
\begin{mytheorem}
\label{thm:enhance:te}
For a normal incident wave with the polarization vectors ${\bf E}^0=(0,1,0)^{T}$ and ${\bf H}^0=(1,0,0)^{T}$, the magnitude of the electromagnetic fields $\bE$ and $\bH$ in the hole $G^h$ attains the order ${\cal O}(h^{-1})$ at the resonant frequencies ${\rm Re}(k_{1,m'}^*)$ for each $m'\in\mathbb{N}^*$ or ${\rm Re}(k_1^*)$. Specifically,
\begin{itemize}
\item[(1).]
If $k={\rm Re}(k_{1,m'}^*)$ for an even integer $m'$, the following expansions hold inside the hole $G^{h}_+$: \begin{align} \label{eq:H3e:case1} H_3(x) =& h^{-1}(-1)^{m'/2}\frac{\cos(m'\pi/l x_3)x_1 e^{\bi k_{1,m'}l/2}\bi}{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h),\\ E_1(x) =&h^{-1}(-1)^{m'/2}\frac{k_{1,m'}e^{\bi k_{1,m'}l/2}\cos(m'\pi/l x_3)x_1x_2 }{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h)\\ E_2(x) =&-h^{-1}(-1)^{m'/2}\frac{k_{1,m'}e^{\bi k_{1,m'}l/2}\cos(m'\pi/l x_3)x_2^2 }{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h). \end{align} \item[(2).] If $k={\rm Re}(k_{1,m'}^*)$ for an odd integer $m'$, then in the hole $G^{h}_+$, we have \begin{align} \label{eq:H3o:case1} H_3(x) =& h^{-1}(-1)^{(m'-1)/2}\frac{\sin(m'\pi/l x_3)x_1e^{\bi k_{1,m'}l/2}\bi}{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h),\\ E_1(x) =&h^{-1}(-1)^{(m'-1)/2}\frac{k_{1,m'}e^{\bi k_{1,m'}l/2}\sin(m'\pi/l x_3)x_1x_2 }{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h),\\ E_2(x) =&-h^{-1}(-1)^{(m'-1)/2}\frac{k_{1,m'}e^{\bi k_{1,m'}l/2}\sin(m'\pi/l x_3)x_2^2 }{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h). \end{align} \item[(3).] If $k={\rm Re}(k_1^*)$, then \begin{align} \label{eq:H3e:case2} H_3(x) =&\frac{2\bi h^{-1}e^{\bi l/2}x_1}{\tilde{\beta}_1(1)-\beta_1(1)}+{\cal O}(\log h),\\ E_1(x) =& \frac{2h^{-1}e^{\bi l/2}x_1x_2}{\tilde{\beta}_1(1)-\beta_1(1)}+{\cal O}(\log h),\\ E_2(x) =& -\frac{2h^{-1}e^{\bi l/2} x_2^2}{\tilde{\beta}_1(1)-\beta_1(1)}+{\cal O}(\log h).\end{align} \end{itemize} \end{mytheorem} \begin{proof} For the normal incidence, the reflected field is \[ \bE^{\rm ref} = -\bE^0 e^{\bi k(x_3-l)}, \bH^{\rm ref} = \bH^0 e^{\bi k(x_3-l)} \; \mbox{for} \; x_3>l/2. \] The total field can be decomposed as \begin{align*} \bE(x) = \left\{ \begin{array}{lc} \bE^{\rm e}(x) + \bE^{\rm o}(x), & x_3\geq 0,\\ \bE^{\rm e,*}(x^*) - \bE^{\rm o,*}(x^*), & x_3<0,\\ \end{array} \right. \quad \bH(x) = (\bi k)^{-1}\cl\ \bE, \end{align*} wherein $\{\bE^{\rm e}, \bH^{\rm e}\}$ and $\{\bE^{\rm o}, \bH^{\rm o}\}$ satisfy \eqref{eq:problemE_1}-\eqref{eq:problemE_4} and \eqref{eq:problemO_1}-\eqref{eq:problemO_3} respectively. On the annular aperture $A^h$, using the integral equation (\ref{eq:t2t:0}), it follows that \begin{equation*} \nu\times \bH^{j}|_{A^h} +\frac{2}{\bi k}{\cal L}_k[\nu\times\bE^{j}|_{A^h}] = \nu\times \frac{\bH^{\rm inc}+\bH^{\rm ref}}{2}|_{A^h} = \left[\begin{matrix} 0\\ -e^{\bi kl/2} \\ 0\\ \end{matrix}\right]= \left[\begin{matrix} -e^{\bi kl/2}\nabla_2(r\sin\theta)\\ 0\\ \end{matrix}\right],\quad j=e,o. \end{equation*} Then, using $\sin\theta= (2\bi)^{-1}(e^{\bi \theta}-e^{-\bi\theta})$ and Lemma~\ref{lem:orth:theta}, the mode-matching procedure in Section 4.1 gives rise to only two inhomogeneous INF linear system as shown below: \begin{align} \label{eq:inh:sys} \left[ \begin{array}{ll} D_m^{j} - A^{j}_{mm} & -\bR^{j}_{m}\\ -\bC^{j}_m & \bI - \bB^{j}_{m}\\ \end{array} \right]\left[ \begin{array}{l} d_m^{j}\\ \bc_m^{j}\\ \end{array} \right] = c^{j}\left[ \begin{array}{l} a_m\\ \bb_m \end{array} \right],\quad j=e,o;m=\pm 1. 
\end{align} In the above, $c^e=1$, $c^o=\bi$, $a_m = \frac{ke^{\bi kl/2}}{2\bi\lambda_{m0}^N}\langle \nabla_2 r\sin\theta, {\cal R}\cl_2\overline{\psi_{m0}^N} \rangle$, and \begin{align} \bb_m &= \left[ \begin{matrix} \{\frac{ke^{\bi kl/2}}{2\bi s_{mn'}^N(\lambda_{mn'}^N)^{1/4}}\langle \nabla_2 r\sin\theta, {\cal R}\cl_2\overline{\psi_{mn'}^N} \rangle\}_{n'\in\mathbb{N}^*}\\ \{\frac{e^{\bi kl/2}s_{mj'}^D}{2\bi k(\lambda_{mj'}^D)^{3/4}}\langle \nabla_2 r\sin\theta, {\cal R}\nabla_2\overline{\psi_{mj'}^D} \rangle\}_{j'\in\mathbb{N}^*}\\ \end{matrix} \right]\nonumber\\ &=\left[ \begin{matrix} \{\frac{ke^{\bi kl/2}(\lambda_{mn'}^N)^{3/4}}{2\bi s_{mn'}^N}\langle r\sin\theta, \overline{\psi_{mn'}^N} \rangle\}_{n'\in\mathbb{N}^*}\\ \{0 \}_{j'\in\mathbb{N}^*}\\ \end{matrix} \right], \end{align} where the last equality holds due to Green's identities. We study the enhancement of the electromagnetic field $\{\bE^{e},\bH^{e}\}$ in the hole $G^{h}_+$ first. By Lemma~\ref{lem:psimnN}, integrating by parts gives \[ a_{m} = \frac{ke^{\bi kl/2}}{2\bi}\langle r\cos\theta, \overline{\psi_{m0}^N} \rangle_{A^h} = \frac{ke^{\bi kl/2}}{2\bi}\left[\sqrt{\frac{\pi h}{2}}+{\cal O}(h^{3/2}) \right] \] and \[ \langle r\cos\theta, \overline{\psi_{mn'}^N} \rangle_{A^h} = {\cal O}[(n')^{-2}h^{3/2}], \] so that $||\bb_{m}||_{\ell^2} = {\cal O}(h)$ . Using Lemmas~\ref{lem:inv:I-Bm} and \ref{lem:dm'm:h} (vi), the system (\ref{eq:inh:sys}) can be reduced to the following inhomogeneous equation: \begin{align*} \left[D^e_m(k) - A^e_{mm}(k)- \bR^e_m(k)(\bI-\bB^e_m(k))^{-1}\bC^e_m(k) \right]d^e_m =& a_m + \bR^e_m(\bI-\bB^e_m)^{-1}\bb_m \\ =& \frac{ke^{\bi kl/2}\sqrt{2\pi h}}{4\bi}+{\cal O}(h^{3/2}), \end{align*} for $m=\pm 1$. \bigskip \noindent {\it (1)}. Let $k={\rm Re}(k_{1,m'}^*)$ for an even integer $m'\in\mathbb{N}^*$, in which $k_{1,m'}^*$ is given by \eqref{eq:even:res}. Since $k - k_{1,m'}^*=-{\rm Im}(k_{1,m'}^*)\bi ={\cal O}(h)$, we obtain \begin{align*} D^e_m(k) - D^e_m(k_{1,m'}^*) =& -[D_m^e]'(k_{1,m'}){\rm Im}(k_{1,m'}^*)\bi + {\cal O}(h^2\log h)\\ =&2h(-1)^{m'/2}\left[k_{1,m'}^2\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})\right]\bi + {\cal O}(h^2\log h),\\ A^e_{mm}(k) - A^e_{mm}(k_{m,m'}^*)=&{\cal O}(h^2\log h),\\ \bR^e_m(k)(\bI-\bB^e_m(k))^{-1}\bC^e_m(k)=&\bR^e_m(k_{1,m'}^*)(\bI-\bB^e_m(k_{1,m'}^*))^{-1}\bC^e_m(k_{1,m'}^*)+{\cal O}(h^2\log h). \end{align*} Therefore, \begin{align*} d^e_m =& -\frac{k_{1,m'}(-1)^{m'}e^{\bi k_{1,m'}l/2}\sqrt{2\pi h}+{\cal O}(h^{3/2})}{8 h[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]+{\cal O}(h^2\log h)}\\ =& -\frac{h^{-1/2}k_{1,m'}(-1)^{m'/2}e^{\bi k_{1,m'}l/2}\sqrt{2\pi }}{8 [k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]}(1+{\cal O}(h\log h)) \end{align*} and \begin{align*} ||\bc^e_m||_{\ell^2}=& ||(\bI-\bB^e_m)^{-1}(\bb_m+\bC^e_md^e_m)||_{\ell^2} = {\cal O}(1), \end{align*} for $m=\pm 1$. 
Hence, using the field representation (\ref{eq:rep:bH}), inside the hole $G^{h}_+$, there holds \begin{align*} H_3^e(x) =& \sum_{m\in\{-1,1\}}\sum_{n=1}^{\infty}[d_{mn}^{TE}]^e\frac{-2\bi\lambda_{mn}^N\cos(s_{mn}^Nx_3)}{k} \psi_{mn}^N(r,\theta)\nonumber\\ &+\sum_{m\in\{-1,1\}}[d_{m0}^{TE}]^e\frac{-2\bi\lambda_{m0}^N\cos(s_{m0}^Nx_3)}{k} \psi_{m0}^N(r,\theta)\nonumber\\ =&\sum_{m\in\{-1,1\}}\sum_{n=1}^{\infty}[c_{m;n}^{TE}]^e\frac{-2\bi[\lambda_{mn}^N]^{1/4}\cos(s_{mn}^Nx_3)}{k\sin(s_{mn}^Nl/2)} \psi_{mn}^N(r,\theta)\nonumber\\ &+\sum_{m\in\{-1,1\}}d_{m}^e\frac{-2\bi\lambda_{m0}^N\cos(s_{m0}^Nx_3)}{k} \psi_{m0}^N(r,\theta)\nonumber\\ =&h^{-1}(-1)^{m'/2}\frac{\cos(m'\pi/l x_3)\cos(\theta)e^{\bi k_{1,m'}l/2}\bi}{2[k_{1,m'}^{2}\tilde{\beta}_1(k_{1,m'})-\beta_1(k_{1,m'})]} + {\cal O}(\log h). \end{align*} Similarly, an application of (\ref{eq:rep:bE}) yields the asymptotic for $E_1^e(x)$ and $E_2^e(x)$. \bigskip \noindent{\it (2).} The case when $m'$ is odd can be derived similarly as above. \bigskip \noindent{\it (3).} Let $k={\rm Re}(k_1^*)$. Again, $k-k_1^*=-{\rm Im}(k_1^*)\bi={\cal O}(h)$ so that \begin{align*} D^e_m(k)-D^e_m(k_1^*) =& -[D^e_m]'(k_1^*){\rm Im}(k_1^*)\bi + {\cal O}(h^2)\\ =&\frac{h}{2}(\tilde{\beta}_1(1)-\beta_1(1))\bi + {\cal O}(h^2),\\ A^e_{mm}(k) - A^e_{mm}(k_{1}^*)=&{\cal O}(h^2\log h),\\ \bR^e_m(k)(\bI-\bB^e_m(k))^{-1}\bC^e_m(k)=&\bR^e_m(k_{1}^*)(\bI-\bB^e_m(k_{1}^*))^{-1}\bC^e_m(k_{1}^*)+{\cal O}(h^2\log h), \end{align*} Thus, \begin{align*} d^e_m = \frac{-h^{-1/2}e^{\bi l/2}\sqrt{2\pi }}{2((\tilde{\beta}_1(1)-\beta_1(1))}(1 + {\cal O}(h\log h)), \end{align*} and $||\bc^e_m||_{\ell^2}={\cal O}(1)$. Inside the hole $G^{h}_+$, \begin{align*} \label{eq:H3e:case2} H_3^e(x)=&\sum_{m\in\{-1,1\}}\sum_{n=1}^{\infty}[c_{m;n}^{TE}]^e\frac{-2\bi[\lambda_{mn}^N]^{1/4}\cos(s_{mn}^Nx_3)}{k\sin(s_{mn}^Nl/2)} \psi_{mn}^N(r,\theta)]\nonumber\\ &+\sum_{m\in\{-1,1\}}d_{m}^e\frac{-2\bi\lambda_{m0}^N\cos(s_{m0}^Nx_3)}{k} \psi_{m0}^N(r,\theta)\nonumber\\ =&\frac{2\bi h^{-1}e^{\bi l/2}\cos\theta}{\tilde{\beta}_1(1)-\beta_1(1)}+{\cal O}(\log h), \end{align*} and similarly, the asymptotic expansions for $E_1^e(x)$ and $E_2^e(x)$ can be derived. \end{proof} From \eqref{eq:inh:sys}, we see that a normal incident plane wave can only excite the TE$_{mn}$ modes in the annular hole with $m=\pm1$. To excite higher-order modes with $|m|\ge2$, an oblique incident wave needs to be applied. Without loss of generality, we assume that the incident direction $d=(d_1,0,-d_3)^{T}$ and the electric polarization vector $\bE_0=(0,1,0)^{T}$. Then repeating the above procedure gives \begin{equation}\label{eq:obliq_src} \nu\times \frac{\bH^{\rm inc}+\bH^{\rm ref}}{2}|_{A^h} =\left[ \begin{matrix} 0\\ -d_3e^{-\bi k d_3l/2+\bi kd_1x_1}\\ 0\\ \end{matrix} \right]=\left[ \begin{matrix} d_3e^{-\bi k d_3l/2}\cl_2\frac{e^{\bi kd_1 r\cos\theta}-1}{\bi kd_1}\\ 0\\ \end{matrix} \right]. \end{equation} From the Jacobi-Anger expansion \cite[Eq. (3.89)]{colkre13}, \begin{align} \frac{e^{\bi kd_1 r\cos\theta}-1}{\bi k d_1}=(\bi k d_1)^{-1}\sum_{m=-\infty}^{\infty}[\bi^m J_m(k d_1 r)-\delta_{m0}]e^{\bi m\theta}, \end{align} the expansion contains terms with higher-order angular momentum. Therefore, the enhancement of the electromagnetic field $\{\bE,\bH\}$ can be obtained at the resonant frequencies $k={\rm Re}(k_{m,m'}^*)$ with $|m|\ge 2$. We omit the detailed calculations here. 
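For completeness, the Jacobi--Anger expansion quoted above, which is what brings the higher-order angular momenta into play for an oblique incidence, is easy to verify numerically. The sketch below (Python with SciPy; an illustrative check only) compares a truncation of $e^{\bi z\cos\theta}=\sum_{m=-\infty}^{\infty}\bi^m J_m(z)e^{\bi m\theta}$ with the left-hand side.
\begin{verbatim}
# Numerical check of the Jacobi-Anger expansion used above:
#   exp(i z cos(theta)) = sum_{m=-inf}^{inf} i^m J_m(z) exp(i m theta),
# truncated at |m| <= M.  For M well above |z| the truncation error is
# already at machine precision.
import numpy as np
from scipy.special import jv

def jacobi_anger(z, theta, M=40):
    m = np.arange(-M, M + 1)
    return np.sum((1j ** m) * jv(m, z) * np.exp(1j * m * theta))

z, theta = 3.7, 0.9
lhs = np.exp(1j * z * np.cos(theta))
print(abs(lhs - jacobi_anger(z, theta)))
\end{verbatim}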
\subsection{Field enhancement due to excitation of the TEM mode in the annular hole} In this section, we consider field enhancement at the resonant frequencies $k={\rm Re}(k_{0,2m'}^*)$ in \eqref{eq:even:res:m=0}, which are associated with the TEM mode in the annular hole. When a plane wave impinges on the subwavelength structure, the source term in \eqref{eq:obliq_src} is orthogonal to the resonant mode in the sense that \[ \langle\cl_2\frac{e^{\bi kd_1 r\cos\theta}-1}{\bi kd_1}, {\cal R}\nabla_2\log r \rangle = -\langle\cl_2\frac{e^{\bi kd_1 r\cos\theta}-1}{\bi kd_1}, \nabla_2\log r \rangle_{A^h} = 0. \] This follows from the orthogonality relation $\cl_2 H^1(A^h) \perp \mathbb{H}_2(A^h)$ in Lemma~\ref{lem:helmdecomp}. Thus the mode matching formulation leads to a homogeneous system for $m=0$, which only attains trivial solution for $k\in\mathbb{R}$. This implies that no field amplification could be obtained at the resonant frequencies $k={\rm Re}(k_{0,m'}^*)$. In other words, TEM modes can not be excited by using the plane wave incidence. To excite a TEM mode, we consider a spherical incident wave produced by an electric monopole located at $(0,0, y_3)^{T}$ with $y_3>0$ and it points toward the $x_3$-direction. Namely, $\bE^{\rm inc}$ satisfies \[ \cl\ \cl\ \bE^{\rm inc} - k^2 \bE^{\rm inc}= -\hat{x}_3\delta(0,0,x_3-y_3), \] where $\hat{x}_3 = (0,0,1)^{T}$ is the unit vector in $x_3$-direction. Indeed, it is known that \begin{equation} \label{eq:sph:inc} \bE^{\rm inc} = \left[\hat{x}_3 + k^{-2}\partial_{x_3}\nabla\right]\left(\frac{e^{\bi k\sqrt{r^2+(x_3-y_3)^2}}}{4\pi \sqrt{r^2+(x_3-y_3)^2}}\right). \end{equation} The reflected electric field produced by the perfect conducting metallic slab is given by \[ \bE^{\rm ref} = \left[\hat{x}_3 + k^{-2}\partial_{x_3}\nabla\right]\left(\frac{e^{\bi k\sqrt{r^2+(x_3+y_3)^2}}}{4\pi \sqrt{r^2+(x_3+y_3)^2}}\right). \] We have the following result for the field amplification. \begin{mytheorem} \label{thm-enhance-tem} Let the incident electric field be of the form (\ref{eq:sph:inc}). The magnitude of the total electromagnetic field $\bE$ and $\bH$ attains the order ${\cal O}(h^{-1})$ at frequencies $k={\rm Re}(k_{0,m'}^*)$ for $m'\in\mathbb{N}^*$. Specifically, \begin{itemize} \item[(1).] If $k={\rm Re}(k_{0,m'}^*)$ for an even integer $m'$, the following hold in the annular gap $G_+^h$ : \begin{align} E_1 =& \frac{2(-1)^{m'/2}c(k_{0,m'}, y_3)}{h} \, x_1\, \cos(k_{0,m'}x_3) + {\cal O}(\log h),\\ E_2 =& \frac{2(-1)^{m'/2}c(k_{0,m'}, y_3)}{h} \, x_2 \, \cos(k_{0,m'}x_3) + {\cal O}(\log h),\\ H_1 =& -\frac{2(-1)^{m'/2}\bi c(k_{0,m'}, y_3)}{h} \, x_2 \, \sin( k_{0,m'} x_3) + {\cal O}(\log h),\\ H_2 =& \frac{2(-1)^{m'/2}\bi c(k_{0,m'}, y_3)}{h} \, x_1 \, \sin(k_{0,m'} x_3) + {\cal O}(\log h). \end{align} where $c(k,y) = -\frac{e^{\bi k\sqrt{1+y^2}}(\bi k\sqrt{1+y^2}-1)}{2m'\pi\beta_1(k)(1+y^2)^{3/2}}$. \item[(2).] If $k={\rm Re}(k_{0,m'}^*)$ for an odd integer $m'$, then in the annular gap $G_+^h$, \begin{align} E_1 =& \frac{2(-1)^{(m'-1)/2}c(k_{0,m'}, y_3)}{h} \, x_1\, \sin(k_{0,m'}x_3) + {\cal O}(\log h),\\ E_2 =& \frac{2(-1)^{(m'-1)/2}c(k_{0,m'}, y_3)}{h} \, x_2 \, \sin(k_{0,m'}x_3) + {\cal O}(\log h),\\ H_1 =& -\frac{2(-1)^{(m'-1)/2}\bi c(k_{0,m'}, y_3)}{h} \, x_2 \, \cos( k_{0,m'} x_3) + {\cal O}(\log h),\\ H_2 =& \frac{2(-1)^{(m'-1)/2}\bi c(k_{0,m'}, y_3)}{h} \, x_1 \, \cos(k_{0,m'} x_3) + {\cal O}(\log h). 
\end{align} \end{itemize} \end{mytheorem} \begin{proof} We have on the annular aperture $A^h$, \[ \nu\times \frac{\bH^{\rm inc}+\bH^{\rm ref}}{2}|_{A^h} =F(r,y_3)\left[ \begin{matrix} \nabla_2\log r\\ 0 \end{matrix} \right], \] where \[ F(r,y_3) = \frac{(k\sqrt{r^2+y_3^2}+\bi)r}{k(r^2+y_3^2)^{3/2}}e^{\bi k \sqrt{r^2+y_3^2}}. \] Following our mode matching procedure, we obtain the systems below analogous to (\ref{eq:inh:sys}): \begin{align} \label{eq:sys:TEM:m} \left[ \begin{array}{ll} D_m^{j} - A^{j}_{mm} & -\bR^{j}_{m}\\ -\bC^{j}_m & \bI - \bB^{j}_{m}\\ \end{array} \right]\left[ \begin{array}{l} d_m^{j}\\ \bc_m^{j}\\ \end{array} \right] = c^{j}\left[ \begin{array}{l} a_m\\ \bb_m \end{array} \right], \quad j=e,o, \quad m\in\mathbb{Z}. \end{align} In the above, the source term \begin{equation} \label{eq-am} a_m = \begin{cases} \frac{k\bi }{4\pi \log(1+h)}\langle F(r,y_3)\nabla_2\log r, {\cal R} \nabla_2 \log r \rangle, & m=0;\\ \frac{k\bi }{2\lambda_{m0}^N}\langle F(r,y_3)\nabla_2\log r, {\cal R} \cl\overline{\psi_{m0}^N} \rangle, & m\neq 0, \end{cases} \end{equation} and \[ \bb_m = \left[ \begin{matrix} \{\frac{k\bi }{2s_{mn'}^N(\lambda_{mn'}^N)^{1/4}}\langle F(r,y_3)\nabla_2 \log r, {\cal R}\cl_2\overline{\psi_{mn'}^N} \rangle \}_{n'\in\mathbb{N}^*}\\ \{\frac{\bi s_{mj'}^D}{2k(\lambda_{mj'}^D)^{3/4}}\langle F(r,y_3)\nabla_2 \log r, {\cal R}\nabla_2\overline{\psi_{mj'}^D} \rangle\}_{j'\in\mathbb{N}^*}\\ \end{matrix} \right]. \] We can derive from Lemma~\ref{lem:orth:theta} that $a_m=0$ and $\bb_m=0$ for $m\neq 0$. As such we need only focus on the case $m=0$. We further restrict to the case $j=e$ as the case $j=o$ can be dealt with in a similar manner. A direct calculation shows that \begin{align*} a_0 &=-\frac{\bi }{2\log(1+h)}\int_{1}^{1+h} \frac{(k\sqrt{r^2+y_3^2}+\bi)}{(r^2+y_3^2)^{3/2}}e^{\bi k \sqrt{r^2+y_3^2}}dr \\ & = -\frac{\bi }{2} \frac{(k\sqrt{1+y_3^2}+\bi)}{(1+y_3^2)^{3/2}}e^{\bi k \sqrt{1+y_3^2}} + {\cal O}(kh), \\ ||\bb_0||_{\ell^2}&= {\cal O}(h),\\ \bR^e_0(\bI-\bB^e_0)^{-1}\bb_0 &= {\cal O}(h). \end{align*} At $k={\rm Re}(k_{0,m'}^*)$ for an even integer $m'\in\mathbb{N}^*$, noting that $k-k_{0,2m'}^*=-{\rm Im}(k_{0,2m'}^*)\bi={\cal O}(h)$, we have \begin{align*} D_0^e(k)-D_0^e(k_{0,m'}^*) =& -l/2\cos(k_{0,m'}^*l/2) {\rm Im}(k_{0,m'}^*)\bi + {\cal O}(h^2)\\ =&(-1)^{m'/2}m'\pi\beta_1(k_{0,m'})h\bi + {\cal O}(h^2\log h),\\ A^e_{00}(k)-A^e_{00}(k_{0,m'}^*)=&{\cal O}(h^2\log h),\\ \bR^e_0(k)(\bI-\bB^e_0(k))^{-1}\bC^e_0(k)=&\bR^e_0(k_{0,m'}^*)(\bI-\bB^e_0(k_{0,m'}^*))^{-1}\bC^e_0(k_{0,m'}^*)+{\cal O}(h^2\log h). \end{align*} Hence, from the equation $$\left[D^e_0(k) - A^e_{00}(k)- \bR^e_0(k)(\bI-\bB^e_0(k))^{-1}\bC^e_0(k) \right]d^e_0 = \, a_0 + \bR^e_0(\bI-\bB^e_0)^{-1}\bb_0, $$ we can derive that \begin{align*} d_0^e =& -(-1)^{m'/2}h^{-1}\frac{e^{\bi k_{0,m'}\sqrt{1+y_3^2}}( k_{0,m'}\sqrt{1+y_3^2}+\bi)}{2m'\pi\beta_1(k_{0,m'})(1+y_3^2)^{3/2}}(1+{\cal O}(h\log h)), \end{align*} and by Lemma~\ref{lem:dm'm:h} (v), there holds \begin{align*} ||c_0^e||_{\ell^2}=||(\bI-\bB_0^e)^{-1}(\bb_0 + \bC_0^ed_0^e)||_{\ell^2} = {\cal O}(1). \end{align*} Therefore, using the field representations in (\ref{eq:rep:bE}) and (\ref{eq:rep:bH}), we obtain the desired asymptotic for the electromagnetic fields in the annular gap $G^h$. \end{proof} \begin{myremark} We note that the electromagnetic field in the annular hole is amplified with the order ${\cal O}(h^{-1})$ at the resonant frequency $k={\rm Re}(k_{0,m'}^*)$ for some $m'\in\mathbb{N}^*$. 
Moreover, the wave oscillates along the $x_3$-direction but varies only linearly in the narrow annular cross section.
\end{myremark}
\section{Discussion and conclusion}
In this section, we discuss how the problem geometry and the topology of the subwavelength hole may affect the resonances and the field enhancement for the scattering problem \eqref{eq:E}-\eqref{eq:smc}. First, as pointed out at the beginning, for clarity the analysis is only presented for the inner radius of the annulus $a=1$. If $a\neq 1$, by a change of scale, the roots $k$ of the characteristic equation (\ref{eq:single:m}) and the thickness $l$ are replaced by $ka$ and $l/a$, respectively. Thus the value of $a$ could significantly affect the resonant frequencies given in (\ref{eq:even:res}), (\ref{eq:even:nearm}), (\ref{eq:even:res:m=0}), (\ref{eq:odd:res}), and (\ref{eq:odd:res:m=0}). In practice, one can tune this parameter for applications in different frequency regimes. Moreover, we note that $h$ is the relative width of the gap $G^h$. Thus one can increase the inner radius $a$ while keeping the absolute gap width $d=ah$ invariant. This will further increase the electromagnetic enhancement to the order ${\cal O}(h^{-1}) = d^{-1}{\cal O}(a)$.

Note that we have assumed that the metal thickness $l={\cal O}(1)$ throughout the paper, and the prefactors in the error terms of the resonance formulae (\ref{eq:even:res}), (\ref{eq:even:nearm}), (\ref{eq:even:res:m=0}), (\ref{eq:odd:res}), and (\ref{eq:odd:res:m=0}) depend on $l$. A natural question is how large $l$ can be while our analysis still holds. Comparing the expressions in Theorems~\ref{thm:enhance:te} and \ref{thm-enhance-tem}, we see that the leading terms of the fields due to TE modes do not change significantly as $l$ decreases or increases; however, the leading terms of the fields due to the TEM mode grow like $l^3{\cal O}(h^{-1})$ as $l$ increases, since $\beta_1(k_{0,m'}^*) = {\cal O}(l^{-3})$ as $l\to\infty$. A more delicate analysis can be carried out to quantify the dependence of the resonances on $l$ more precisely, and for $l$ not necessarily of order one. In fact, to ensure the existence of such resonances for $m\neq 0$, it is sufficient to assume that $lh\log h\ll 1$. Let us revisit the characteristic equation (\ref{eq:tmp2}):
\begin{equation*}
s_{m0}^N(e^{\bi s_{m0}^N l/2}-e^{-\bi s_{m0}^N l/2}) = -\bi(e^{\bi s_{m0}^N l/2}+e^{-\bi s_{m0}^N l/2})\left[\Pi_m(k,h) + {\cal O}(h^2\log h) \right].
\end{equation*}
If $lh\log h\ll 1$, there holds
\[ s_{m0}^N l \tan(s_{m0}^N l/2) = -l[\Pi_m(k,h)+{\cal O}(h^2\log h)] = {\cal O}(hl\log h)\ll 1. \]
Thus $s_{m0}^N l\ll 1$ or $s_{m0}^N l/2 - m'\pi\ll 1$ for some $m'\in\mathbb{N}^*$. Then, following lines parallel to the proof of Theorem~\ref{thm:even:res}, it can be shown that
\[ {\rm Im}(k_{m,m'}^*) = {\cal O}(l^{-1}h\log h),\quad m'\in\mathbb{N}^*, \]
where the prefactors no longer depend on $l$. However, the configuration with $l=\infty$ is more subtle, since the problem is no longer posed in an open medium. One needs to define the corresponding scattering problem properly and impose the radiation conditions carefully. The other extreme case is when the metal is infinitely thin, with $l=0$. In such a scenario, the hole no longer supports waveguide modes and a totally different approach needs to be adopted for analyzing the scattering problem. This is beyond the scope of this paper and will be investigated in a separate work.
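Before turning to the topology of the hole, we record a small numerical illustration of the geometric dependence discussed above. Under the rescaling $k\to ka$, $l\to l/a$, the leading-order resonant wavenumbers $k_{m,m'}=\sqrt{m^2+(m'\pi/l)^2}$ of the normalized problem become $\sqrt{(m/a)^2+(m'\pi/l)^2}$ for a general inner radius $a$; the sketch below (Python; our own reading of the rescaling, with the ${\cal O}(h\log h)$ corrections from $\Pi_m$ omitted) tabulates a few of them.
\begin{verbatim}
# Leading-order resonant wavenumbers under the rescaling described above:
# for inner radius a, the roots k and the thickness l of the normalized
# (a = 1) problem are replaced by k*a and l/a, so the leading-order
# resonances k_{m,m'} = sqrt(m^2 + (m'*pi/l)^2) become
# sqrt((m/a)^2 + (m'*pi/l)^2).  The O(h log h) corrections are omitted.
import numpy as np

def leading_resonance(m, mprime, l, a=1.0):
    return np.sqrt((m / a) ** 2 + (mprime * np.pi / l) ** 2)

l = 2.0
for a in (1.0, 2.0, 5.0):
    row = [leading_resonance(m, mp, l, a) for m in (0, 1, 2) for mp in (1, 2)]
    print("a = %.1f:" % a, ["%.3f" % k for k in row])
\end{verbatim}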
We would also like to point out there are no resonances in the region ${\cal B}$ when the narrow annular hole is replaced by a tiny hole with a simply connected cross section, such as a tiny hollow hole with circular cross section. Assume that the radius of the circle is given by $h\ll 1$. We can still apply the multiscale analysis framework in this paper for analyzing the resonances. However, the two eigenvalue problems (DEP) and (NEP) attain the eigenvalues of order ${\cal O}(1/h)$, though the corresponding eigenfunctions can still be used to construct the two function spaces $\cl_2 H^1(A^h)$ and $\nabla_2 H_0^1(A^h)$. In addition, the finite-dimensional space $\mathbb{H}_2(A^h)$ becomes $\{{\bf 0}\}$. Therefore, the mode matching procedure leads to a system analogous to (\ref{eq:eig:m:2}), with its first row and first column removed. This new system possesses only the zero solution for $k\in{\cal B}$ by Lemma~\ref{lem:inv:I-Bm}. In other words, there are no resonances in ${\cal B}$. Finally, we point out several directions along the line of this research. In this work, we focus on the resonances induced by the subwavelength annulus gap. Another type of resonance is related to surface plasmon, and the quantification of its effect on the overall resonant behavior of the structure is still open \cite{garmarebbkui10}. There are some preliminary studies of the plasmonic effect on the resonances in \cite{haftel2006} for the annular hole, but the understanding of the interactions between two types of resonances is far from complete. Another direction is to investigate the resonant scattering in more sophisticated structures, such as an array of annular holes, or the bull's eye structure, etc \cite{garmarebbkui10, yoo16}. The resonant phenomena become richer in those structures. In terms of applications, there are also several topics that need to be explored. For instance, the annular hole structures have been applied for detecting biomolecular events in a label-free and highly sensitive manner from the shifts of resonant transmission peaks \cite{park15}. One fundamental question in such applications is the sensitivity analysis of resonance frequencies, where the goal is to quantify how the transmission peaks shift with respect to the refractive index change or the profile change of the biochemical samples. \appendix \section{Dirichlet eigenvalues and eigenfunctions} In this section, we characterize the asymptotic behavior of the eigenpair $\{\lambda_{mn}^D,\psi_{mn}^D\}_{m\in\mathbb{Z},n\in\mathbb{N}^*}$ of the Dirichlet eigenvalue problem (DEP) for $0<h\ll1$. \begin{mylemma} \label{lem:dir:eig} For $h\ll 1$ and $m\in\mathbb{N}$, the nonzero roots of (\ref{eq:gov:roots}) admit the expansion \begin{equation} \label{eq:asy:betamn} \beta_{mn}^{D}(h)=\frac{n\pi}{h} + \frac{(4m^2-1)h}{8n\pi} + n^{-3}{\cal O}(h^2),\quad {m=0,1,2,\cdots,\quad n=1,2,\cdots,} \end{equation} where the prefactor in the ${\cal O}$-notation depends on $m$ only. \begin{proof} Let $C>0$ be a sufficiently large constant that is independent of $h$. Since $\beta$ and $\beta(1+h)$ are of the same order of magnitude when $|\beta|\geq C$, we apply the formulas in \cite[Sec. 10.21(x)]{nist10} to obtain the asymptotic formula (\ref{eq:asy:betamn}) in the region $(-\infty,-C]\cup[C,\infty)$. The rest is to show that there are no roots in $(-C,C)/\{0\}$. We distinguish two cases: $0<|\beta|\leq c_0$ or $|\beta|\in(c_0,C)$ for some sufficiently small constant $c_0\geq 0$. 
If $0<\beta\leq c_0$, then by the asymptotic behaviors of $J_m$ and $Y_m$ for small arguments \cite[10.7.3 \& 10.7.4]{nist10}, we obtain for any fixed $h>0$, \begin{align*} F_{m}(\beta,h) \eqsim &(\beta/2)^{m}/(\Gamma(m+1))(-\pi^{-1})\Gamma(m)(\beta(1+h)/2)^{-|m|}\nonumber\\ &-(\beta(1+h)/2)^{m}/(\Gamma(m+1))(-\pi^{-1})\Gamma(m)(\beta/2)^{-|m|}\\ =&(-( m\pi )^{-1}) \left[ (1+h)^{-m}-(1+h)^{m}\right]>0,\quad \beta \ll 1. \end{align*} Now, suppose $|\beta|\in(c_0,C)$, so that $F_m$ becomes analytic at $h=0$. Taylor's expansion directly gives rise to \begin{align*} F_{m}(\beta,h) =& (Y_{m}(\beta)J_{m}'(\beta)-J_{m}(\beta)Y_{m}'(\beta))\beta h \\ &+ (Y_{m}(\beta){J_{m}''}(\beta+\xi_h h)-J_{m}(\beta){Y_m''}(\beta+\xi_h h)) (\beta h)^2/2, \end{align*} for some $\xi_h\in (0,1)$ depending on $h$. Since $Y_m(\beta)$ and $J_m(\beta)$ are linearly independent over the interval $[c_0,C]$, the first term is in fact strictly nonzero, so that for $h\ll 1$, $F_m(\beta,h)\neq 0$ for any $|\beta|\in(c_0,C)$, which concludes the proof. \end{proof} \end{mylemma} The following lemma characterizes the asymptotic behavior of $\psi_{mn}^{D}(r,\theta;h)$ for $h\ll 1$. \begin{mylemma} For $h\ll 1$, $r\in[1,1+h]$, $(m,n)\in\mathbb{Z}\times\mathbb{N}^*$, we have \label{lem:psimnD} \begin{align} \label{eq:psimnD} \psi_{mn}^{D}(r,\theta;h) =& \frac{e^{\bi m\theta}}{\sqrt{\pi rh}}\Bigg\{\sin\left[ \frac{n\pi}{h}(r-1) \right] -\frac{\gamma_m^D(r)h}{n\pi}\cos\left[ \frac{n\pi}{h}(r-1) \right]+n^{-2}{\cal O}(h^2)\Bigg\}, \end{align} where the function $\gamma^D_{m}:[1,1+h]\to\mathbb{R}$ is given by \[ \gamma^D_{m}(r) = \frac{(4m^2-1)}{8r}(r-1), \] and the prefactors in the ${\cal O}$-notations depend on $m$ only. \begin{proof} For $h\ll1$, Lemma~\ref{lem:dir:eig} implies that $\beta_{|m|n}^{D}\gg 1$, thus by \cite[10.17.3\&10.17.4]{nist10}, we have \begin{align*} &Y_{|m|}(\beta_{|m|n}^{D})J_{|m|}(\beta_{|m|n}^{D}r)-J_{|m|}(\beta_{|m|n}^{D})Y_{|m|}(\beta_{|m| n}^{D}r)\\ =&\frac{2}{\pi\beta_{|m|n}^{D}\sqrt{r}}\Bigg\{\sin(\frac{n\pi}{h}(1-r))(1+ {\cal O}(n^{-2}h^2))+ \frac{\gamma_m^{D}(r) h}{n\pi }\cos(\frac{n\pi}{h}(1-r))(1+ {\cal O}(n^{-2}h^2)) \Bigg\} \end{align*} where the prefactor in the ${\cal O}$-notation depends only on $m$. Therefore, it can be verified that \[ C_{mn}^{D}= \frac{2h^{1/2}}{\beta_{|m|n}^{D}\pi^{1/2}}\left[1+n^{-2}{\cal O}(h^2) \right], \] and hence (\ref{eq:psimnD}) follows. \end{proof} \end{mylemma} \section{Neumann eigenvalues and eigenfunctions} In this section, we derive the asymptotic expansion of the eigenpairs $\{\lambda_{mn}^N,\psi_{mn}^N\}_{m\in\mathbb{Z},n\in\mathbb{N}}$ for the Neumann eigenvalue problem (NEP) when $0<h\ll1$. \begin{mylemma} \label{lem:neu:eig} For $h\ll 1$, the nonzero roots of (\ref{eq:gov:roots:n}) with sufficiently large magnitude attain the expansion \begin{equation} \label{eq:asy:betamn:N} \beta_{mn}^{N}=\frac{n\pi}{h} + \frac{(4m^2+3)h}{8n\pi(1+h)} + n^{-3}{\cal O}(h^3),\quad {m=0,1,2,\cdots,\quad n=1,2,\cdots,} \end{equation} where the prefactor in the ${\cal O}$-notation depends on $m$. On the other hand, when $m\neq 0$, there exists a unique root close to $m$ satisfying \begin{equation} \label{eq:asy:betam2:N} \beta_{m0}^{N}=m -\frac{mh}{2} + {\cal O}(h^2),\quad {m=1,2,\cdots.} \end{equation} \begin{proof} When $m=0$, \begin{align*} F_0^{N}(\beta;h)=Y_{1}(\beta)J_{1}\left( \beta(1+h) \right) - J_{1}(\beta)Y_{1}\left( \beta(1+h) \right) = F_1^{D}(\beta,h), \end{align*} so that Lemma~\ref{lem:dir:eig} applies. In the following, we assume $m\neq 0$.
Let $C>0$ be a sufficiently large constant that is independent of $h$. Since $\beta$ and $\beta(1+h)$ are of the same order of magnitude when $\beta\geq C$, the asymptotic formula (\ref{eq:asy:betamn:N}) in the region $[C,\infty)$ can be derived using the formulas in \cite[Sec. 10.21(x)]{nist10}. We only need to show that there is only one root in $(0,C)$ and that it is near $m$. We distinguish two cases: $0<\beta\leq c_0$ or $\beta\in(c_0,C)$ for some sufficiently small constant $c_0>0$ independent of $h$. If $0<\beta\leq c_0$, then by the power series representations of $J_m$ and $Y_m$ for small arguments \cite[10.2.2 \& 10.8.1]{nist10}, we obtain for any fixed $h>0$, \begin{align*} F_m^{N}(\beta;h) \eqsim & \frac{m!2^m}{\pi\beta^{m+1}}\frac{\left( \beta(1+h)\right)^{m-1}}{2^m (m-1)!} - \frac{m!2^m}{\pi\beta^{m+1}(1+h)^{m+1}}\frac{\left( \beta\right)^{m-1}}{2^m (m-1)!} \\ \eqsim & \frac{m}{\pi\beta^2}\left[(1+h)^{m-1}-(1+h)^{-m-1} \right]>0,\quad \beta \ll 1. \end{align*} Now suppose $\beta\in(c_0,C)$, so that $F_m^{N}$ is analytic at $h=0$. Taylor's expansion of $F_m^{N}$ at $h=0$ and Bessel's differential equation directly give rise to \begin{align*} F_m^{N}(\beta;h) =& (Y_m'(\beta)J_m''(\beta)-J_m'(\beta)Y_m''(\beta))\beta h + {\cal O}(h^2)\\ =& -(1-m^2\beta^{-2})\left[Y_m'(\beta)J_m(\beta)-J_m'(\beta)Y_m(\beta)\right]\beta h+ {\cal O}(h^2). \end{align*} Since $Y_m(\beta)$ and $J_m(\beta)$ are linearly independent over the interval $[c_0,C]$, the first term is in fact strictly nonzero and dominates the second term if $|\beta - m|\gg h$ for $h\ll 1$, so that $F_m^{N}(\beta;h)\neq 0$ there. On the other hand, the intermediate value theorem implies that there exists a root in $(m-c_0,m+c_0)$ satisfying $\beta-m={\cal O}(h)$. By a similar asymptotic analysis of $F_m^{N}(\beta;h)$ at $\beta=m$, the expansion (\ref{eq:asy:betam2:N}) follows. \end{proof} \end{mylemma} The following lemma characterizes the asymptotic behavior of $\psi_{mn}^{N}(r,\theta;h)$ for $h\ll 1$. \begin{mylemma} \label{lem:psimnN} Let $h\ll 1$, $r\in[1,1+h]$, $m\in\mathbb{Z}$ and $n\in\mathbb{N}$. If $n\geq 1$, then \begin{align} \label{eq:psimnN:n1} \psi_{mn}^{N}(r,\theta;h) =&\frac{e^{\bi m\theta}}{\sqrt{\pi rh}}\Bigg\{\cos\left[ \frac{n\pi}{h}(r-1) \right] +\frac{\gamma^N_{m}(r) h}{n\pi}\sin\left[ \frac{n\pi}{h}(r-1) \right]+{\cal O}(n^{-2}h^2)\Bigg\}, \end{align} where the function $\gamma^N_{m}:[1,1+h]\to\mathbb{R}$ is given by \[ \gamma^N_{m}(r) = \frac{(4m^2+3)h}{8(1+h)} - \frac{(4m^2-1)}{8 r}. \] If $n=0$, \begin{equation} \label{eq:psimnN:n0} \psi_{m0}^{N}(r,\theta;h) = \frac{e^{\bi m\theta}}{\sqrt{\pi h(h+2)}} + {\cal O}(h^{3/2}). \end{equation} In the above, the prefactors in the ${\cal O}$-notations depend on $m$ only. \begin{proof} For $n\ge1$, it follows from Lemma~\ref{lem:neu:eig} that $\beta_{|m|n}^{N}\gg 1$. Thus by \cite[10.17.3, 10.17.4, 10.17.9 \& 10.17.10]{nist10}, we obtain \begin{align*} &Y_{|m|}'(\beta_{|m|n}^{N})J_{|m|}(\beta_{|m|n}^{N}r)-J_{|m|}'(\beta_{|m|n}^{N})Y_{|m|}(\beta_{|m|n}^{N}r)\\ =&\frac{2}{\pi\beta_{|m|n}^{N}}\sqrt{\frac{1}{r}}\left\{ \cos\left[ \frac{n\pi}{h}(r-1) \right](1+{\cal O}(n^{-2}h^2)) +\frac{\gamma^N_{m}(r)h}{n\pi}\sin\left[\frac{n\pi}{h}(r-1) \right](1+{\cal O}(n^{-2}h^2))\right\}, \end{align*} where the prefactor in the ${\cal O}$-notation depends only on $m$. It can be verified that the normalization constant satisfies \begin{align*} C_{mn}^{N} =&\frac{2\sqrt{\pi h} }{\pi\beta_{|m|n}^{N}}(1 + {\cal O}(n^{-2}h^2)), \end{align*} so that (\ref{eq:psimnN:n1}) follows.
If $n=0$, we have \begin{align*} &Y_{|m|}'(\beta_{|m|0}^{N})J_{|m|}(\beta_{|m|0}^{N}r)-J_{|m|}'(\beta_{|m|0}^{N})Y_{|m|}(\beta_{|m|0}^{N}r)\\ =&\left[ Y_{|m|}'(\beta_{|m|0}^{N})J_{|m|}(\beta_{|m|0}^{N})-J_{|m|}'(\beta_{|m|0}^{N})Y_{|m|}(\beta_{|m|0}^{N})\right]\left[1 + {\cal O}(h^2)\right]. \end{align*} Then \begin{align*} C_{m0}^{N} = \sqrt{\pi h(h+2)}\left[ Y_{|m|}'(\beta_{|m|0}^{N})J_{|m|}(\beta_{|m|0}^{N})-J_{|m|}'(\beta_{|m|0}^{N})Y_{|m|}(\beta_{|m|0}^{N})\right]\left[1 + {\cal O}(h^2)\right], \end{align*} and hence (\ref{eq:psimnN:n0}) follows. \end{proof} \end{mylemma} \section{Asymptotic analysis of matrix elements in \eqref{eq:eig:m:2}} To prove Lemma~\ref{lem:dm'm:h}, we need the following two lemmas. \begin{mylemma} \label{prop:S0} For any two functions $f,g\in C_{\rm comp}^{\infty}(\mathbb{R})$ with \[ \max\{||f||_{W_{\infty}^1(\mathbb{R})},||g||_{W_{\infty}^1(\mathbb{R})}\}\leq M, \] for some constant $M>0$, the following three INF matrices \begin{align*} &\{({\cal S}_0[( n' )^{-1/2}f(\cdot)\sin(n'\pi\cdot)],n^{-1/2}g(\cdot)\sin(n\pi\cdot))_{L^2(0,1)}\}_{n,n'=1}^{\infty},\\ &\{({\cal S}_0[( n' )^{1/2}f(\cdot)\cos(n'\pi\cdot)],n^{-1/2}g(\cdot)\sin(n\pi\cdot))_{L^2(0,1)}\}_{n,n'=1}^{\infty},\\ &\{({\cal S}_0[( n' )^{1/2}f(\cdot)\cos(n'\pi\cdot)],n^{1/2}g(\cdot)\cos(n\pi\cdot))_{L^2(0,1)}\}_{n,n'=1}^{\infty}, \end{align*} are bounded operators mapping from $\ell^2$ to $\ell^2$, with norms depending only on $M$ but not on functions $f$ and $g$. \begin{proof} We only give the proof for the first matrix. For any $n\in\mathbb{N}$, \begin{align*} ((n')^{-1/2}\sin(n'\pi\cdot),\phi_n)_{L^2(0,1)} =& \begin{cases} \frac{1}{\sqrt{n'}}\frac{1-(-1)^{n'}}{n'\pi}& n = 0,\\ \frac{1}{2\sqrt{2n'}}\left[ \frac{1-(-1)^{(n'+n)}}{(n'+n)\pi} + \frac{1-(-1)^{(n'-n)}}{(n'-n)\pi} \right] & n\neq n',\\ \frac{1}{2\sqrt{2n'}} \frac{1-(-1)^{(n'+n)}}{2n'\pi} & n = n'. \end{cases} \end{align*} Thus for any $\{a_{n'}\}_{n'=1}^{\infty}\in\ell^2$ and any $N,N'\in\mathbb{N}^*$, we have \begin{align*} &\left|\sum_{n'=1}^{N'}a_{n'}((n')^{-1/2}\sin(n'\pi\cdot),\phi_n)_{L^2(0,1)} \right|\\ \leq &\left(\sum_{n'=1}^{\infty}|a_{n'}|^2 \right)^{1/2}\left(\sum_{n'=1}^{\infty} |((n')^{-1/2}\sin(n'\pi\cdot),\phi_n)_{L^2(0,1)}|^2\right)^{1/2} <\infty \end{align*} for $n\leq N$. We obtain \begin{align*} &\sum_{n=0}^{N}(1+n^2)^{-1/2}\left|\sum_{n'=1}^{N'}(a_{n'}(n')^{-1/2}\sin(n'\pi\cdot),\phi_n)_{L^2(0,1)}\right|^2\\ \leq& \left(\sum_{n'=1}^{\infty}|a_{n'}|^2 \right)\sum_{n=0}^{N}\sum_{n'=1}^{N'}(1+n^2)^{-1/2}\left|((n')^{-1/2}\sin(n'\pi\cdot),\phi_n)_{L^2(0,1)}\right|^2\\ \leq& \left(\sum_{n'=1}^{\infty}|a_{n'}|^2 \right)\Bigg[\sum_{n=0}^{\infty}(1+n^2)^{-1/2}\frac{C}{n^3} +\sum_{0\le n\le N, 1\le \delta n'\le N'}(1+n^2)^{-1/2}\frac{C}{\delta n'^2(\delta n'+n)} \\ &+\sum_{1\le \delta n\le M', 1\le n'\le N'}(1+(\delta n+n')^2)^{-1/2}\frac{C}{\delta n^2n'}\Bigg]\leq C ||\{a_{n'}\}||_{\ell^2}^2, \end{align*} where $C>0$ denotes a generic and sufficiently large constant. The above implies that \[ \phi(r'):=\sum_{n'=1}^{\infty}a_{n'}(n')^{-1/2}\sin(n'\pi r')\in \widetilde{H^{-1/2}}(0,1), \] with $||\phi||_{\widetilde{H^{-1/2}}(0,1)}\le C||\{a_n\}||_{\ell^2}$. Similarly, for any $\{b_n\}_{n=1}^{\infty}\in\ell^2$, \[ \psi(r'):=\sum_{n=1}^{\infty}b_{n}(n)^{-1/2}\sin(n\pi r)\in \widetilde{H^{-1/2}}(0,1), \] with $||\psi||_{\widetilde{H^{-1/2}}(0,1)}\le C||\{b_n\}||_{\ell^2}$. 
Therefore, \begin{align*} &|\sum_{n=1}^{\infty}\sum_{n'=1}^{\infty}a_{n'}({\cal S}_0[( n' )^{-1/2}f(\cdot)\sin(n'\pi\cdot)],n^{-1/2}g(\cdot)\sin(n\pi\cdot))_{L^2(0,1)} b_{n}|\\ =&|\langle {\cal S}_0 f\phi,g\psi \rangle_{(0,1)}| \leq C||f\phi||_{\widetilde{H}^{-1/2}(0,1)}||g\psi||_{\widetilde{H}^{-1/2}(0,1)} \leq C(M)||\{a_{n'}\}_{n'=1}^{\infty}||_{\ell^2}||\{b_{n}\}_{n=1}^{\infty}||_{\ell^2}, \end{align*} where $C(M)$ denotes the dependence of the constant $C$ on $M$ \cite[Thm. 3.20]{mcl00}, implying the boundedness of the first matrix mapping from $\ell^2$ to $\ell^2$. \end{proof} \end{mylemma} Define \begin{align} F_m(r,r'):=\frac{1}{2\pi}\int_{0}^{2\pi} \frac{e^{\bi k\sqrt{r^2+r'^2-2rr'\cos\theta}}}{\sqrt{r^2+r'^2-2rr'\cos\theta}}e^{\bi m\theta}d\theta. \end{align} \begin{mylemma} \label{prop:FourierG} For $k\in{\cal B}$ and $r\neq r'\in(1,1+h)$, we have \begin{align} F_m(r,r')=& \frac{1}{\sqrt{rr'}}\Bigg[\log(1-w^{-2})\left[\frac{(-1)}{4\pi} + (1-w^{-2})f_m(1-w^{-2},\lambda^2) \right]\nonumber\\ &+ g_m(1-w^{-2},\lambda^2) \Bigg], \end{align} where $w=(r^2+r'^2)/(2rr')$, $\lambda=k\sqrt{2rr'}$, and $f_m(t_1,t_2)$ and $g_m(t_1,t_2)$ are analytic for $t_1\in (-1,1)$ and $|t_2|<C$ for some sufficiently large constant $C$. \begin{proof} By Eqs.~(45-48) in \cite{concoh10}, we have \begin{align*} F_m(r,r') =& \frac{1}{2}\left[\hat{\Lambda}_+^m(rr',\lambda,w) + \hat{\Lambda}_-^m(rr',\lambda,w) \right], \end{align*} where \begin{align*} \hat{\Lambda}_+^m(rr',\lambda,w) =& \frac{(-1)^m}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2\sqrt{w^2-1}}{4} \right)^{p}\frac{Q_{m-1/2}^p(w)}{p!\Gamma(p-m+1/2)\Gamma(p+m+1/2)},\\ \hat{\Lambda}_-^m(rr',\lambda,w) =& \frac{(-1)^m}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2\sqrt{w^2-1}}{4} \right)^{p+m+1/2}\frac{Q_{m-1/2}^{p+m+1/2}(w)}{p!\Gamma(p+m+3/2)\Gamma(p+2m+1)}, \end{align*} and $Q_v^\mu$ denotes the associated Legendre function \cite[\S 14.1]{nist10}. By Eqs.~(14.3.7), (15.8.10), and (15.8.12) in~\cite{nist10}, we obtain \begin{align*} \hat{\Lambda}_+^m(rr',\lambda,w) =&\frac{(-1)^m}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2(w^2-1)}{4} \right)^{p}\frac{\sqrt{\pi}e^{\bi p\pi} {}_2F_1(\frac{m+p+1/2}{2},\frac{m+p+3/2}{2};m+1;w^{-2})}{2^{m+1/2}p!w^{m+p+1/2}\Gamma(p-m+1/2)}\\ =&\frac{(-1)^m}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2}{4w^2} \right)^{p}\frac{\sqrt{\pi}e^{\bi p\pi} {}_2F_1(\frac{m-p+1/2}{2},\frac{m-p+3/2}{2};m+1;w^{-2})}{2^{m+1/2}p!w^{m+p+1/2}\Gamma(p-m+1/2)}\\ =&\frac{1}{\sqrt{rr'}}\left[\log(1-w^{-2})\frac{(-1)}{2\pi} + \log(1-w^{-2})(1-w^{-2})f^1_m(1-w^{-2},\lambda^2) + g^1_m(1-w^{-2},\lambda^2) \right] \end{align*} for some analytic functions $f^1_m(t_1,t_2)$ and $g^1_m(t_1,t_2)$, where ${}_2F_1$ denotes the hypergeometric function \cite[\S 15.1]{nist10}. Let $z=\frac{1}{2}\left( 1-\frac{1}{1-w^{-2}}\right)$. 
By Eqs.~(14.3.7), (15.8.19) and (15.8.8) in \cite{nist10}, we have \begin{align*} \hat{\Lambda}_-^m(rr',\lambda,w) =& \frac{1}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2(w^2-1)}{4} \right)^{p+m+1/2}\frac{\sqrt{\pi} e^{\bi (p+1/2)\pi}{}_2F_1(\frac{p+2m+1}{2},\frac{p+2m+2}{2};m+1;w^{-2})}{p!w^{p+2m+1}\Gamma(p+m+3/2)}\\ =&\frac{1}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2(w^2-1)}{4} \right)^{p+m+1/2}(1-2z)^{p+2m+1}(1-z)^{-m}\sqrt{\pi} e^{\bi (p+1/2)\pi}\\ &\frac{{}_2F_1(-p-m,p+m+1;m+1;z)}{p!w^{p+2m+1}\Gamma(p+m+3/2)},\\ =&\frac{1}{\sqrt{rr'}}\sum_{p=0}^{\infty}\left( \frac{\lambda^2}{4} \right)^{p+m+1/2}(1-z)^{-m}\sqrt{\pi} e^{\bi (p+1/2)\pi}\frac{{}_2F_1(-p-m,p+m+1;m+1;z)}{p!(4z(z-1))^{p/2}\Gamma(p+m+3/2)}\\ =&\frac{1}{\sqrt{rr'}}\left[\log(1-w^{-2})(1-w^{-2})f^2_m(w^{-2}-1,\lambda^2) + g^2_m(w^{-2}-1,\lambda^2) \right], \end{align*} for some analytic functions $f^2_m(t_1,t_2)$ and $g^2_m(t_1,t_2)$. The proof is complete by taking $f_m=f^1_m+f^2_m$ and $g_m=g^1_m+g^2_m$. \end{proof} \end{mylemma} \noindent{\bf Proof of Lemma~\ref{lem:dm'm:h}:} (i). It follows from \eqref{eq:LkSk} that \begin{equation*} B^{TE,TE}_{n'n;m} = 2\frac{(1 + {\cal O}[(\lambda_{mn'}^N)^{-1}])}{(\lambda^N_{mn'}\lambda_{mn}^N)^{3/4}}\left[k^2\langle {\cal S}_k \nabla_2\psi_{mn}^N,\nabla_2\bar{\psi_{mn}^N} \rangle_{A^h} - \lambda_{mn}^N\lambda_{mn'}^N \langle {\cal S}_k\psi_{mn}^N,\bar{\psi_{mn}^N} \rangle_{A^h} \right]. \end{equation*} By Lemmas~\ref{lem:psimnN} and \ref{prop:FourierG}, we obtain \begin{align*} &[\lambda_{mn}^N\lambda_{mn'}^N]^{1/4} \langle {\cal S}_k\psi_{mn}^N,\bar{\psi_{mn}^N} \rangle_{A^h}\\ =&(nn')^{1/2}\pi(1+{\cal O}(h^2))\int_{0}^{1}dr\int_{0}^{1}F_m(1+hr,1+hr')\sqrt{1+hr}\sqrt{1+hr'}\\ &\left[\cos(n'\pi r') +\frac{\gamma_{m}(1+hr';h) h}{n'\pi}\sin(n'\pi r')+{\cal O}((n')^{-2}h^2)\right]\\ &\left[\cos(n\pi r) +\frac{\gamma_{m}(1+hr;h) h}{n\pi}\sin(n\pi r)+{\cal O}(n^{-2}h^2)\right]dr', \end{align*} where the prefactors in the ${\cal O}$-notations do not depend on $n,n'$. Using Lemma \ref{prop:FourierG}, it follows that \[ F_m(1+hr,1+hr') = -\frac{\log [h|r-r'|]}{2\pi\sqrt{(1+hr)(1+hr')}} + 2h^2\log [h|r-r'|]\tilde{f}_m(1+hr,1+hr') + \tilde{g}_m(1+hr,1+hr'), \] for some analytic functions $\tilde{f}_m$ and $\tilde{g}_m$. It is straightforward to verify using Lemma~\ref{prop:S0} that \[ [\lambda_{mn}^N\lambda_{mn'}^N]^{1/4} \langle {\cal S}_k\psi_{mn}^N,\bar{\psi_{mn}^N} \rangle_{A^h} = ({\cal S}_0[(n'\pi)^{1/2}\phi_{n'}],(n\pi)^{1/2}\phi_n)_{L^2(0,1)} + {\cal O}(h)\epsilon_{n'n;m}^{TE,TE;1}, \] where the INF matrix $[\epsilon_{n'n;m}^{TE,TE;1}]$ is uniformly bounded for $h\ll1$. One similarly verifies that \[ [\lambda_{mn}^N\lambda_{mn'}^N]^{-3/4} k^2\langle {\cal S}_k\nabla_2\psi_{mn}^N,\nabla_2\bar{\psi_{mn}^N} \rangle_{A^h} = {\cal O}(h)\epsilon_{n'n;m}^{TE,TE;2}, \] where the INF matrix $[\epsilon_{n'n;m}^{TE,TE;2}]$ is uniformly bounded $h\ll1$, and (\ref{eq:n'n:TETE}) follows immediately. \\ \medskip \noindent(ii-vi). The proof follows from similar arguments as in (i). We omit the details here. \\ \medskip \noindent(vii). According to \eqref{eq:LkSk}, \begin{align*} A_{mm} =& \, 2\cos(s_{m0}^Nl/2)\lambda_{m0}^N \langle S_k\psi_{m0}^N,\overline{\psi_{m0}^N} \rangle_{A^h}-\frac{2k^2\cos(s_{m0}^Nl/2)}{\lambda_{m0}}\langle S_k\nabla_2\psi_{m0}^N,\nabla_2\overline{\psi_{m0}^N} \rangle_{A^h}. 
\end{align*} Similar to the derivations in (i), Lemma~\ref{prop:FourierG} implies that \begin{align} \langle S_k\psi_{m0}^N,\overline{\psi_{m0}^N} \rangle_{A^h}=& \, h\int_{0}^{1}\int_{0}^{1}F_m(1+hr,1+hr')\frac{(1+hr)(1+hr')}{2+h}(1 +{\cal O}(h^2))drdr' \nonumber \\ =&-h\frac{\log h}{4\pi} + \alpha_m(k)h + \bi \beta_m(k)h + {\cal O}(h^2\log h) \label{eq:S_psi0_psi} \end{align} for some constants $\alpha_m(k)$ and $\beta_m(k)$, both of which are analytic in ${\cal B}$. By (50) in \cite{concoh10} and (14.8.9) in \cite{nist10}, \begin{align*} F_m(1+hr,1+hr') =& \frac{1}{\pi}\int_{0}^{\pi}\frac{(e^{\bi k\sqrt{h^2(r-r')^2 + 4(1+hr)(1+hr')\sin^2(\theta/2)}}-1)\cos(m\theta)}{\sqrt{h^2(r-r')^2 + 4(1+hr)(1+hr')\sin^2(\theta/2)}}d\theta \\ &+ \frac{Q_{m-1/2}\left[ [(1+hr)^2+(1+hr')^2]/(2(1+hr)(1+hr')) \right]}{2\pi\sqrt{(1+hr)(1+hr')}} \\ =&\frac{1}{\pi}\int_{0}^{\pi}\frac{(e^{\bi k\sin(\theta/2)}-1)\cos(m\theta)}{\sin(\theta/2)}d\theta -\frac{1}{2\pi}\log [h|r-r'|] \\ &+ \frac{1}{2\pi}[\log 2 -\gamma - \psi(m+1/2)] + o(1). \end{align*} Plugging the above into \eqref{eq:S_psi0_psi} and comparing both sides lead to (\ref{eq:alpham:k}) and (\ref{eq:betam:k}); the second equality in \eqref{eq:betam:k} follows from \cite[Eq. (10.9.2)]{nist10}. Similarly one can show that \begin{align*} \langle S_k\nabla_2\psi_{m0}^N,\nabla_2\overline{\psi_{m0}^N} \rangle_{A^h} = m^2[-h\frac{\log h}{4\pi} + \tilde{\alpha}_m(k)h + \bi \tilde{\beta}_m(k)h] + {\cal O}(h^2\log h). \end{align*} Eq.~\eqref{eq:Amm:m=0} for $m=0$ can be verified similarly. \bibliographystyle{plain} \bibliography{wt} \end{document} {\color{blue} The proof is straightforward: for any $f\in C^{\infty}(A^h)$, \begin{align*} \cl_2 f = \partial_r f \hat{\theta} + r^{-1}\partial_\theta f \hat{r},\\ \nabla_2 f = \partial_r f \hat{r} + r^{-1}\partial_\theta f \hat{\theta}, \end{align*} so that \begin{align*} \langle\cl_2 f, \nabla_2\log r \rangle_{A^h} = \int_{1}^{1+h}r^{-1} dr\int_{0}^{2\pi} \partial_\theta f d\theta = 0. \end{align*} }consider the change of resonances in (\ref{eq:even:res}), (\ref{eq:even:nearm}), (\ref{eq:even:res:m=0}), (\ref{eq:odd:res}), and (\ref{eq:odd:res:m=0}) w.r.t $a$. For $m\neq 0$, as $a$ decreases, \[ k_{m,m'} = \sqrt{m^2+ \frac{(m'\pi a)^2}{l^2}} \] moves closer to $m$ so that \[ {\rm Re}(k_{m,m'}^*a) \] becomes closer to $m$, while ${\rm Im}(k_{m,m'}^*a)= a{\cal O}(h)$ becomes smaller. Thus, it can be derived that $k_{m,m'}^*$ becomes larger, and the related EM field for a plane-wave incidence at $k={\rm Re}(k_{m,m'}^*)$ becomes greater in the hole $G^h$. Nevertheless, one cannot expect such a result for the resonance $k_m^*a$ in (\ref{eq:even:nearm}) near $m$, since now \[ {\rm Im}(k_m^* a) = {\cal O}(ha) + {\cal O}(h^3 a^{-1}), \] where the second term can dominate its magnitude as $a$ decreases. Finally, for the TEM resonances $k_{0,m'}^*$ in (\ref{eq:even:res:m=0}) and (\ref{eq:odd:res:m=0}), it can be seen that ${\rm Re}(k_{0,m'}^*a)$ is close to $\frac{2m'\pi a}{l}$ and ${\rm Im}(k_{0,m'}^*a) = a{\cal O}(h)$ so that the leading terms of $k_{0,m'}^*$ is independent of the radius $a$. Thus, it is expected that the related EM field due to the spherical incident wave (\ref{eq:sph:inc}) will not change as $a$ varies.
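As a supplementary numerical sanity check of the Dirichlet-root expansion (\ref{eq:asy:betamn}) of Appendix A (this sketch is ours and is not part of the paper), one may locate the roots of $F_m(\beta,h)=Y_m(\beta)J_m(\beta(1+h))-J_m(\beta)Y_m(\beta(1+h))$ directly and compare them with the two-term approximation:
\begin{verbatim}
# Illustrative check (ours): the roots of
#   F_m(beta, h) = Y_m(beta) J_m(beta(1+h)) - J_m(beta) Y_m(beta(1+h))
# should be close to n*pi/h + (4*m^2 - 1)*h/(8*n*pi) for small h.
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

m, h = 2, 0.05
F = lambda b: yv(m, b) * jv(m, b * (1 + h)) - jv(m, b) * yv(m, b * (1 + h))

for n in range(1, 4):
    approx = n * np.pi / h + (4 * m**2 - 1) * h / (8 * n * np.pi)
    root = brentq(F, approx - 1.0, approx + 1.0)   # bracket the n-th root
    print(n, root, approx)                         # the two should nearly agree
\end{verbatim}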
2205.05347v2
http://arxiv.org/abs/2205.05347v2
Quasipositivity and braid index of pretzel knots
\pdfsuppresswarningpagegroup=1 \documentclass{amsart} \title{Quasipositivity and braid index of pretzel knots} \author{Lukas Lewark} \address{\rule{0pt}{1cm}Faculty of Mathematics, University of Regensburg, 93053 Regensburg, Germany} \email{\myemail{[email protected]}} \urladdr{\url{http://www.lewark.de/lukas/}} \keywords{Pretzel knots, quasipositive knots, Homflypt polynomial, braid index, Jones' conjecture, Khovanov-Rozansky homologies, slice-torus invariants} \subjclass{57K10, 57K14, 57K18} \input{preamble.tex} \begin{document} \thispagestyle{empty} \begin{abstract} \input{abstract} \end{abstract} \maketitle \section{Introduction} \input{intro} \section{The braid index of pretzel knots} \label{sec:br} \input{br} \section{The quasipositivity of pretzel knots} \label{sec:qp} \input{qp} \section{Further properties of pretzel knots} \input{further} \bibliographystyle{myamsalpha} \bibliography{References} \end{document} \usepackage{thmtools,enumerate,graphicx,xcolor,enumitem,afterpage,float,ucs,amsmath,amssymb,amscd,url,mathtools,microtype,tikz-cd,caption,amsthm,wrapfig} \usepackage[latin1]{inputenc} \usepackage{hyperref} \usepackage[nameinlink]{cleveref} \let\cref\Cref \crefname{subsection}{subsection}{subsections} \Crefname{subsection}{Subsection}{Subsections} \Crefname{enumi}{}{} \crefname{equation}{}{} \definecolor{darkblue}{RGB}{0,0,96} \definecolor{gray}{RGB}{127,127,127} \definecolor{darkred}{RGB}{160,0,0} \definecolor{lightyellow}{RGB}{255,255,128} \pdfstringdefDisableCommands{ \def\unskip{}} \hypersetup{colorlinks={true},linkcolor={black},citecolor={black},filecolor={black},urlcolor={black},pdfauthor={\authors},pdftitle={\shorttitle}} \newcommand{\myemail}[1]{\href{mailto:#1}{#1}} \newcommand{\qua}{\hskip 0.4em \ignorespaces} \href{http://arxiv.org/abs/#1}{\tt arXiv:\penalty -100\unskip#1}} \let\arXiv\arxiv \href{http://www.ams.org/mathscinet-getitem?mr=#1}{\tt MR#1}} \def\xox#1{\csname xx#1\endcsname} \let\xxarXiv\arxiv\let\xxMR\MR \let\oldthebibliography\thebibliography \let\endoldthebibliography\endthebibliography \renewenvironment{thebibliography}[1]{ \begin{oldthebibliography}{#1}\small \setlength{\itemsep}{.5ex} \setlength{\parskip}{0em} } { \end{oldthebibliography} } \newcommand{\myqed}{\pushQED{\qed}\qedhere} \declaretheorem{lemma} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{prize}[lemma]{Prize} \newtheorem*{prize*}{Prize} \newtheorem*{theorem*}{Theorem} \newtheorem{question}[lemma]{Question} \newtheorem*{question*}{Question} \theoremstyle{definition} \newtheorem{definition}[lemma]{Definition} \newtheorem*{construction}{Construction} \newtheorem{remark}[lemma]{Remark} \newtheorem{claim}[lemma]{Claim} \newtheorem{example}[lemma]{Example} \hyphenation{dim-en-sional} \hyphenation{man-i-fold} \hyphenation{re-du-ci-ble} \hyphenation{re-du-ci-bles} \hyphenation{geo-me-tric} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Fi}{Fi} \DeclareMathOperator{\mult}{mult} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\coker}{coker} \DeclareMathOperator{\coim}{coim} \DeclareMathOperator{\im}{im} \DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\br}{br} \newcommand{\cC}{\mathcal{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\C}{\mathbb{C}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\vphi}{\Phi} \newcommand{\tensor}{\otimes} 
\newcommand{\End}{\text{End}} \newcommand{\Hom}{\text{Hom}} \newcommand{\Aut}{\text{Aut}} \renewcommand{\det}{\text{det}} \newcommand{\id}{\text{id}} \newcommand{\Hol}{\text{Hol}} \newcommand{\hash}{\#} \renewcommand{\bar}{\overline} \newcommand{\rk}{\operatorname{rk}} \newcommand{\TE}{T\hspace{-1pt}E} \def\sqp{strongly-quasi\-positive} \def\sqn{strongly-quasi\-negative} \def\sqh{strongly-quasi\-homo\-geneous} \def\qp{quasi\-positive} \def\qn{quasi\-negative} \def\qh{quasi\-homo\-geneous} \def\Sqp{Strongly-quasi\-positive} \def\Sqn{Strongly-quasi\-negative} \def\Sqh{Strongly-quasi\-homo\-geneous} \def\Qp{Quasi\-positive} \def\Qn{Quasi\-negative} \def\Qh{Quasi\-homogeneous} \def\d{d} \graphicspath{{figures/}} \captionsetup{width=.8\linewidth, font=small} This short note is about three-stranded pretzel knots that have an even number of crossings in one of the strands. We calculate the braid index of such knots and determine which of them are quasipositive. The main tools are the Morton-Franks-Williams inequalities, and Khovanov-Rozansky concordance homomorphisms. Our protagonists are the $P(p,q,-2r)$ pretzel knots, where $p,q,r$ are integers, $p,q$ are odd and not $\pm 1$, and $r$ is not $0$. See \cref{fig:firstex} for an example. Recently, Boileau, Boyer and Gordon proved that $P(p,q,-2r)$ is a strongly quasipositive knot if and only if all of $p,q,r$ are positive \cite{MR4016557}. The first result of this note gives a similar condition for these pretzel knots to be quasipositive. \begin{theorem} \label{prop:pretzqp} Let $p,q$ be odd integers not equal to $\pm 1$, and $r$ a non-zero integer. Then the $P(p,q,-2r)$ pretzel knot is quasipositive if and only if $p + q\geq 0$ and $r > 0$. \end{theorem} As a second result, which we need to prove the first, we explicitly define a braid $\beta(p,q,-2r)$ on $|r| + 2$ strands with closure $P(p,q,-2r)$, and show that it is \emph{minimal}, i.e.~that it has the minimal number of strands among all braids with that closure. \begin{theorem} \label{prop:pretzbr} Let $p,q$ be odd integers not equal to $\pm 1$, and $r$ a non-zero integer. Then the braid index of $P(p,q,-2r)$ is $|r| + 2$. It is realized by the braid $\beta(p,q,-2r)$. \end{theorem} The braid $\beta(p,q,-2r)$ is defined in \cref{sec:br}. To show its minimality and thus prove \cref{prop:pretzbr}, we partially compute the Homflypt polynomial of $P(p,q,-2r)$, and then rely on the Morton-Franks-Williams inequalities. To prove the `if' direction of \cref{prop:pretzqp}, we simply observe that $\beta(p,q,-2r)$ is quasipositive if $p+q\geq 0$ and $r>0$. To show the `only if' direction, we require the following obstruction to quasipositivity. \begin{restatable}{lemma}{qpobs}\cite[Lemma~3.6]{MR3694648} \label{lemma:qpobs} Let $K$ be a quasipositive knot with braid index $b$. Let $w$ be the writhe of any minimal braid of $K$. Let $\phi$ be any slice-torus invariant. Then \[ 1 + w - b = 2\phi(K).\myqed \] \end{restatable} The proof of \cref{lemma:qpobs} uses Jones' conjecture, shown in \cite{MR3235791,MR3302972}, stating that all minimal braids of a knot $K$ have the same writhe. Comparing \cref{lemma:qpobs} with the previously computed values of the Khovanov-Rozansky $\mathfrak{sl}_3$-slice torus invariant~\cite{lew2} reveals that $P(p,q,-2r)$ is not quasipositive if $p+q<0$ or $r<0$. The proof of \cref{prop:pretzqp} is contained in \cref{sec:qp}. 
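For instance (this example is ours and is added only for illustration), consider the knot $P(5,-3,-2)$ of \cref{fig:firstex}: by \cref{prop:pretzbr} its braid index is $b = 3$, realized by the $3$-strand braid $\beta(5,-3,-2) = a_1^5 a_2 a_1^{-3} a_2$ of \cref{def:beta}, which has writhe $w = 5 + 1 - 3 + 1 = 4$. Since $P(5,-3,-2)$ is quasipositive by \cref{prop:pretzqp}, \cref{lemma:qpobs} forces
\[
\phi(P(5,-3,-2)) \;=\; \tfrac12(1 + w - b) \;=\; \tfrac12(1 + 4 - 3) \;=\; 1
\]
for every slice-torus invariant $\phi$.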
\begin{wrapfigure}[10]{r}{58mm} \centering \raisebox{1mm}[0mm]{\includegraphics[angle=270,scale=.9]{figures/example.pdf}} \captionsetup{width=.9\linewidth} \caption{The $P(5,-3,-2)$ pretzel knot, which is quasipositive, but not strongly quasipositive.} \label{fig:firstex} \end{wrapfigure} The canonical diagram $D(p,q,-2r)$ of the $P(p,q,-2r)$ pretzel consists of two disks connected by three bands with $p,q,-2r$ half-twists, respectively. Note that by convention the signs of $p,q,-2r$ encode the \emph{handedness} of twists. The \emph{signs} of the crossings in the three bands in $D(p,q,-2r)$ are $\sgn(p), \sgn(q), \sgn(r)$, respectively. As further background, let us list some results in a similar vein as \cref{prop:pretzqp}:\medskip \begin{itemize} \setlength\itemsep{.5ex} \item $P(p,q,-2r)$ is strongly quasipositive $\Leftrightarrow$ $P(p,q,-2r)$ is positive\\$\Leftrightarrow$ $D(p,q,-2r)$ is positive $\Leftrightarrow$ $p,q,r$ are all positive \cite{MR4016557}. \item $P(p,q,-2r)$ is alternating $\Leftrightarrow$ $D(p,q,-2r)$ is alternating\\$\Leftrightarrow$ $p,q,-r$ all have the same sign \cite{MR966948}. \item $P(p,q,-2r)$ is quasi-alternating $\Leftrightarrow$ $p,q,-r$ all have the same sign, or\break $\{p+q,p-2r,q-2r\}$ contains a negative and a positive number \cite{greene2}. \item $P(p,q,-2r)$ is fibred $\Leftrightarrow$ $p,q$ are of opposite sign, or $|r| = 1$ \cite{gabai2}. \item Conjecturally, $P(p,q,-2r)$ is slice $\Leftrightarrow$ $p + q = 0$ \cite{MR3402337}. \end{itemize}\medskip \paragraph{\bfseries Acknowledgments.} I warmly thank Peter Feller and Andrew Lobb for the inspiring collaboration on squeezed knots \cite{sqz} and slice-torus invariants \cite{sti}, from which this note originated. I gratefully acknowledge support by the DFG, project no.~412851057. Thanks to Paula Tru\"ol for comments on a first version of this note. \begin{definition}\label{def:beta} Let $p,q$ be odd integers not equal to $\pm 1$, and $r$ a non-zero integer. Let us define the following braids on $|r| + 2$ strands, denoting the standard generators of the braid group by $a_1, \ldots, a_{|r|+1}$: \begin{align*} \gamma_r & = a_3 a_4 \cdots a_{|r|} a_{|r| + 1},\\ \overline{\gamma_r} & = a_{|r| + 1} a_{|r|} \cdots a_4 a_3, \\ \beta(p,q,-2r) & = a_1^p a_2 a_1^q \gamma_r^{-1} a_2 \gamma_r \overline{\gamma_r} \quad\text{if $r > 0$,}\\ \beta(p,q,-2r) & = a_1^p a_2^{-1} a_1^q \overline{\gamma_r} a_2^{-1}\overline{\gamma_r}^{-1} \gamma_r^{-1} \quad\text{if $r < 0$.} \end{align*} \end{definition} It has been shown in \cite{brpr} that the closure of $\beta(p,q,-2r)$ is $P(p,q,-2r)$ (see also \Cref{fig:pretzelbraid} for an example). Note that in the simplest cases $r = \pm 1$, $\gamma_r$ and $\overline{\gamma_r}$ are empty and thus $\beta(p,q,-2) = a_1^p a_2 a_1^q a_2$ and $\beta(p,q,2) = a_1^p a_2^{-1} a_1^q a_2^{-1}$. \begin{figure}[t] \begin{center} \includegraphics[angle=-90,scale=.9]{figures/iso1.pdf}\qquad \includegraphics[angle=-90,scale=.9]{figures/iso2.pdf} \end{center} \caption{On the left, the closure of $\beta(5,-3,-6) = a_1^5 a_2 a_1^{-3} a_4^{-1}a_3^{-1} a_2 a_3 a_4^2 a_3$. On the right, the standard diagram $D(5,-3,-6)$ of the pretzel knot $P(5,-3,-6)$. 
The reader is invited to spot the isotopy between the two diagrams.} \label{fig:pretzelbraid} \end{figure} Let us now prove \cref{prop:pretzbr} by showing that $\beta(p,q,-2r)$ realizes the braid index; for this, we will use the Morton-Franks-Williams inequalities \cite{MR809504,MR896009}: \begin{equation}\label{eq:mfw1} 1 + w(\beta) - k(\beta) \leq e(K) \leq E(K) \leq -1 + w(\beta) + k(\beta), \end{equation} where $\beta$ is a braid with writhe $w(\beta)$, on $k(\beta)$ strands, and closure a knot $K$. Moreover, $e$ and $E$ are respectively the minimum and maximum exponent of $v$ appearing in the Homflypt polynomial $Q_K(v,z) \in \Z[v^{\pm 1}, z^{\pm 1}]$, which is defined by the skein relation $v^{-1} Q_{L_+} - v Q_{L_-} = zQ_{L_0}$ and setting $Q_U = 1$ (where $L_+$ is a link admitting a diagram $D$, such that $L_-$ arises from $D$ by changing a positive crossing to a negative one, and $L_0$ by orientedly resolving that crossing). Now \eqref{eq:mfw1} implies \[2\br(K) \geq E(K) - e(K) + 2,\] where $\br(K)$ denotes the braid index of $K$. For our purposes, it is sufficient to use the specialization $Q'_K(v) \coloneqq Q_K(v,1) \in \Z[v^{\pm 1}]$ of the Homflypt polynomial. Defining $e'$ and $E'$ as minimum and maximum exponent of $v$ appearing in $Q'$, we clearly have $E'\leq E$ and $e'\geq e$, and so \begin{equation} \label{eq:mfw2} 2\br(K) \geq E'(K) - e'(K) + 2. \end{equation} This is the lower bound for the braid index that we will use. Let us now compute $e'$ and $E'$ of $P(p,q,-2r)$. We start by computing $Q'$ of the torus link $T(2,n)$ for all~$n\in \Z$, which is the closure of the two-stranded braid $a_1^n$. One finds \begin{align*} Q'_{T(2,0)} & = v^{-1} - v, \\ Q'_{T(2,1)} & = 1, \\ Q'_{T(2,n+2)} & = v Q'_{T(2,n+1)} + v^2Q'_{T(2,n)} \qquad\text{for all $n \geq 0$}. \end{align*} Denote by $F(n)$ the $n$-th Fibonacci number, i.e.\ $F(0) = 0, F(1) = 1$ and $F(n+2) = F(n+1) + F(n)$ for all $n\geq 0$. It easily follows inductively that for all $n \geq 1$, \begin{equation}\label{eq:homflypttorus} Q'_{T(2,n)} = F(n+1) v^{n-1} - F(n - 1) v^{n+1}. \end{equation} Using $Q'_{T(2,-n)}(v) = (-1)^{n+1}Q'_{T(2,n)}(v^{-1})$, one may extend \eqref{eq:homflypttorus} to all $n$ by setting $F(-n) = (-1)^{n+1} F(n)$ for all positive $n$. Since $F(n) \neq 0$ for all $n\neq 0$, we find $e'(T(2,n)) = n-1$ and $E'(T(2,n))=n+1$ for all $n\neq \pm 1$. Let us proceed to the pretzels, starting with the case $r = 1$. Applying the skein relation to one of the two crossings in the band with 2 twists in $D(p,q,-2)$, and using that $Q'(K\# J) = Q'(K)Q'(J)$ for knots $K,J$ yields \[ Q'_{P(p,q,-2)} = v Q'_{T(2,p+q)} + v^2 Q'_{T(2,p)} Q'_{T(2,q)}. \] In this polynomial, the term with highest exponent of $v$ is $F(p-1)F(q-1)v^{p+q+4}$. The term with lowest exponent of $v$ is $x v^{p+q}$, provided \begin{equation}\label{eq:fibonaccidetail} x \coloneqq F(p+q+1) + F(p+1)F(q+1) \neq 0. \end{equation} Let us show \cref{eq:fibonaccidetail}. We have $F(p+q+1) > 0$, $\sgn F(p+1) = \sgn p, \sgn F(q+1) = \sgn q$. So if $\sgn p = \sgn q$, then $F(p+1)F(q+1) > 0$, and \cref{eq:fibonaccidetail} follows. If $\sgn p \neq \sgn q$, then $|p + 1| > |p + q + 1|$ or $|q + 1| > |p + q + 1|$, and thus $|F(p+1)| > |F(p + q + 1)|$ or $|F(q+1)| > |F(p + q + 1)|$ (using monotonicity of the Fibonacci numbers), and so \eqref{eq:fibonaccidetail} also holds. It follows that $e'(P(p,q,-2)) = p+q$ and $E'(P(p,q,-2)) = p+q+4$, so $\br(P(p,q,-2)) \geq 3 = r + 2$ as desired. 
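The following small computation (ours, not part of the note) encodes \eqref{eq:homflypttorus} together with the displayed formula for $Q'_{P(p,q,-2)}$, and can be used to double-check the exponent bounds; for $(p,q)=(5,-3)$ it returns $Q'_{P(5,-3,-2)} = -6v^2+26v^4-9v^6$, confirming $e'=p+q$, $E'=p+q+4$ and hence $\br(P(5,-3,-2))\geq 3$ via \eqref{eq:mfw2}.
\begin{verbatim}
# Sketch (ours): Q'_{P(p,q,-2)} = v Q'_{T(2,p+q)} + v^2 Q'_{T(2,p)} Q'_{T(2,q)},
# with Laurent polynomials in v stored as {exponent: coefficient}.
from collections import defaultdict

def fib(n):                 # Fibonacci, extended by F(-n) = (-1)^(n+1) F(n)
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def Q_torus(n):             # Q'_{T(2,n)} = F(n+1) v^(n-1) - F(n-1) v^(n+1)
    return {n - 1: fib(n + 1), n + 1: -fib(n - 1)}

def mul(P, Q, shift=0):     # product, optionally multiplied by v^shift
    R = defaultdict(int)
    for i, a in P.items():
        for j, b in Q.items():
            R[i + j + shift] += a * b
    return {k: c for k, c in R.items() if c}

def add(P, Q):
    R = defaultdict(int, P)
    for j, b in Q.items():
        R[j] += b
    return {k: c for k, c in R.items() if c}

p, q = 5, -3
Qp = add(mul(Q_torus(p + q), {1: 1}), mul(Q_torus(p), Q_torus(q), shift=2))
print(Qp)                   # {2: -6, 4: 26, 6: -9}
\end{verbatim}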
Now for the case $r \geq 2$, applying the skein relation to one of the crossings in the band with $2r$ twists in $D(p,q,-2r)$ gives \[ Q'_{P(p,q,-2r)} = v Q'_{T(2,p+q)} + v^{2} Q'_{P(p,q,2-2r)}. \] It follows inductively that $Q'_{P(p,q,-2r)} = A + B$ for \begin{align*} A &= (v + v^{3} + \ldots + v^{2r-1})\cdot Q'_{T(2,p+q)},\\ B &= v^{2r}\cdot Q'_{T(2,p)}\cdot Q'_{T(2,q)} \end{align*} and so \begin{alignat*}{3} e'(A) & = p + q, & E'(A) & = p+q+2r \\ e'(B) & = p+q+2r-2,\quad & E'(B) & = p+q+2r+2. \end{alignat*} Thus $e'(P(p,q,-2r)) = e'(A)$ and $E'(P(p,q,-2r)) = E'(B)$, and it follows that $\br(P(p,q,-2r)) \geq r + 2$. We have shown for all positive $r$ that the braid index of $P(p,q,-2r)$ equals $|r| + 2$. The braid index of the knots $P(p,q,-2r)$ and $P(-p,-q,2r)$ agrees, since they are mirror images of one another. Thus we have completed the proof of \cref{prop:pretzbr}. \myqed As sketched in the introduction, the proof of \cref{prop:pretzqp} relies on \cref{lemma:qpobs}, which we restate below for the reader's convenience. Here, a braid is \emph{quasipositive} if it equals a product of conjugates of the standard generators $a_i$ of the braid group. A knot is \emph{quasipositive} if it is the closure of some quasipositive braid. A \emph{slice-torus invariant} $\phi$ is a homomorphism from the smooth knot concordance group to $\mathbb{R}$, such that $\phi(K) \leq g_4(K)$ holds for all knots $K$ (where $g_4$ denotes the smooth slice genus), and $\phi(T(p,q)) = g_4(T(p,q))$ holds for all positive torus knots $T(p,q)$ \cite{livingston} (see also \cite{lew2,sti}). \qpobs* While the proof of \cref{lemma:qpobs} uses Jones' conjecture, it is worth noting that if the Morton-Franks-Williams inequality \cref{eq:mfw2} is an equality for a knot $K$ (as it is for the pretzel knots we are considering), then the statement of Jones' conjecture for $K$ can be easily deduced directly from \cref{eq:mfw1}. Let us now prove \cref{prop:pretzqp}. We consider four exhaustive and mutually exclusive cases. Note that the writhe $w(\beta(p,q,-2r))$ equals $p + q + r + \sgn r$. We will use the statement of \cref{prop:pretzbr} that $\beta(p,q,-2r)$ is a minimal braid representative for~$P(p,q,-2r)$. \begin{itemize} \setlength\itemsep{1ex} \item Case $p+q\geq 0$ and $r>0$. Then, $\beta(p,q,-2r)$ is clearly quasipositive. This settles the `if' direction of \cref{prop:pretzqp}. The remaining three cases cover the `only if' direction. \item Case $p+q\leq 0$ and $r<0$. Since $w(\beta(p,q,-2r))$ is negative, it follows that $1 + w(\beta(p,q,-2r)) - \br(P(p,q,-2r))$ is also negative. So if $P(p,q,-2r)$ were quasipositive, \cref{lemma:qpobs} would give $\phi(P(p,q,-2r)) < 0$. However, quasipositive knots have non-negative slice-torus invariants \cite{lew2}, so $P(p,q,-2r)$ is not quasipositive. Note that $P(p,q,-2r)$ is in fact quasinegative; so this case also follows from Hayden's theorem \cite{MR3788795} that no non-trivial knot is both quasipositive and quasinegative. \item Case $p + q < 0$ and $r > 0$. If $P(p,q,-2r)$ is quasipositive, then by \cref{lemma:qpobs} any slice-torus invariant $\phi$ satisfies $2\phi(P(p,q,-2r)) = p+q$. However, as computed in \cite{lew2}, the Khovanov-Rozansky $\mathfrak{sl}_3$-slice torus invariant $s_3$ (which takes values in $\tfrac12\mathbb{Z}$) satisfies \[ 2s_3(P(p,q,-2r)) \in \{p+q+2, p + q+1\}, \] contradicting the assumption of quasipositivity of $P(p,q,-2r)$. \item Case $p + q > 0$ and $r < 0$.
If $P(p,q,-2r)$ were quasipositive, then by \cref{lemma:qpobs} any slice-torus invariant $\phi$ would satisfy $2\phi(P(p,q,-2r)) = p + q + 2r - 2$. However, \[ 2s_3(P(p,q,-2r)) \in \{p+q-2, p + q-1\}, \] leading to a contradiction.\myqed \end{itemize} Let us have a quick look at further notions of positivity. A knot is called \emph{braid positive} if it is the closure of a positive braid, i.e.~a braid that can be written as a product of positive powers of the standard generators. Such knots are fibered~\cite{MR520522} and positive. Recalling from the introduction which $P(p,q,-2r)$ pretzel knots are fibered, and which are positive, one sees that for $P(p,q,-2r)$ to be braid positive, it is necessary that $p$ and $q$ are positive and $r = 1$. Since $\beta(p,q,-2) = a_1^p a_2 a_1^q a_2$ is then a positive braid, this condition is also sufficient. A knot is called \emph{strongly quasipositive} if it is the closure of a \emph{strongly quasipositive braid}, i.e.~a braid on $n$ strands that can be written as a product of certain conjugates of the standard generators, namely $b_{ij} \coloneqq (a_{i+1} \cdots a_j)^{-1} a_i (a_{i+1} \cdots a_j)$ for all\break $1 \leq i \leq j < n$. Note that $b_{ii} = a_i$. Now, recall that $P(p,q,-2r)$ is strongly quasipositive if and only if $p, q, r > 0$. On the other hand, one observes that for $p,q,r>0$, the braid $\beta(p,q,-2r) = a_1^p a_2 a_1^q b_{2,r+1} \overline{\gamma_r}$ is strongly quasipositive (a concrete instance is spelled out at the end of this note). In summary we have:\medskip \begin{itemize} \setlength\itemsep{.5ex} \item $P(p,q,-2r)$ braid positive $\Leftrightarrow$ $\beta(p,q,-2r)$ positive braid $\Leftrightarrow$ $p, q > 0$, $r = 1$. \item $P(p,q,-2r)$ str.~quasipos.~$\Leftrightarrow$ $\beta(p,q,-2r)$ str.~quasipos.~$\Leftrightarrow$ $p, q, r >0$. \item $P(p,q,-2r)$ quasipositive $\Leftrightarrow$ $\beta(p,q,-2r)$ quasipositive $\Leftrightarrow$ $p + q\geq 0$, $r > 0$. \end{itemize}\medskip A knot $K$ is called \emph{squeezed} if it appears as a slice of a genus-minimizing oriented connected smooth cobordism between a positive and a negative torus knot \cite{sqz}. The class of squeezed knots includes quasipositive, quasinegative, and alternating knots. This guarantees the squeezedness of $P(p,q,-2r)$ unless $\sgn (p + q) = - \sgn r$ and $\sgn p = -\sgn q$. For pretzels satisfying those equations, let us distinguish two cases. In the first case, namely $\sgn (p - 2r) = - \sgn (q - 2r)$, e.g.~for $P(5,-3,2)$, the Khovanov-Rozansky $\mathfrak{sl}_3$-slice torus invariant $s_3$ is not equal to the Rasmussen invariant \cite{lew2}. Since all slice-torus invariants take the same value on a fixed squeezed knot, the knots in that case are not squeezed. I have not been able to determine the squeezedness of the pretzel knots of the second case, namely those with $\sgn (p - 2r) = \sgn (q - 2r)$, such as~$P(5,-3,4)$.
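As a concrete instance of the strongly quasipositive expression above (this example is not in the original text): for $r = 2$ one has $\gamma_2 = \overline{\gamma_2} = a_3$ and $b_{2,3} = a_3^{-1} a_2 a_3$, so that
\[
\beta(p,q,-4) \;=\; a_1^p\, a_2\, a_1^q\, a_3^{-1} a_2 a_3\, a_3 \;=\; a_1^p\, a_2\, a_1^q\, b_{2,3}\, a_3,
\]
which is visibly strongly quasipositive whenever $p,q > 0$.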
2205.05332v1
http://arxiv.org/abs/2205.05332v1
Spreading speeds and pulsating fronts for a field-road model in a spatially periodic habitat
\documentclass[11pt,a4paper]{article} \usepackage{amsfonts,latexsym,amsmath, amssymb, amscd, amsthm} \usepackage{mathrsfs} \usepackage{bm} \usepackage{appendix} \usepackage{color} \usepackage{geometry} \geometry{a4paper,scale=0.8} \usepackage{cite} \usepackage{cases} \usepackage{ulem} \usepackage[utf8]{inputenc} \usepackage{graphicx,enumerate} \usepackage[T1]{fontenc} \usepackage{authblk} \numberwithin{equation}{section}\newcommand{\blue}{\textcolor{blue}} \newcommand{\gap}{\hspace{1pt}} \newcommand{\eql}{\eqlabel} \newcommand{\nn}{\nonumber} \def\medn{\medskip\noindent} \def\varep{\varepsilon} \def\Re{\mathop{\rm Re}} \def\Im{\mathop{\rm Im}} \DeclareMathOperator{\ch}{ch} \DeclareMathOperator{\sh}{sh} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}{Remark} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newcommand{\red}{\color{red}} \normalem \allowdisplaybreaks \title{Spreading speeds and pulsating fronts for a field-road model in a spatially periodic habitat} \author{ Mingmin Zhang\thanks{[email protected] (M. Zhang).} \\ {\small School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China\\ Aix Marseille Universit\'{e}, CNRS, Centrale Marseille, I2M, Marseille, France } } \date{} \begin{document} \maketitle \begin{abstract} A reaction-diffusion model which is called the field-road model was introduced by Berestycki, Roquejoffre and Rossi \cite{BRR2013-1} to describe biological invasion with fast diffusion on a line. In this paper, we investigate this model in a heterogeneous landscape and establish the existence of the asymptotic spreading speed $c^*$ as well as its coincidence with the minimal wave speed of pulsating fronts along the road. We start with a truncated problem with an imposed Dirichlet boundary condition. We prove the existence of spreading speed $c^*_R$ which coincides with the minimal speed of pulsating fronts for the truncated problem in the direction of the road. The arguments combine the dynamical system method with PDE's approach. Finally, we turn back to the original problem in the half-plane via generalized principal eigenvalue approach as well as an asymptotic method. \vspace{2mm} \noindent {\small{\it Mathematics Subject Classification}: 35B40; 35B53; 35C07; 35K57} \vspace{1mm} \noindent {\small{\it Key words}: KPP equations; Reaction-diffusion equations; Pulsating fronts; Asymptotic spreading speed; Generalized principal eigenvalues.}\\ \end{abstract} \section{Introduction} The goal of this paper is to investigate propagation properties for a field-road model in a spatially periodic environment. Taking into account this heterogeneity in space, we shall establish the existence of the asymptotic spreading speed and its coincidence with the minimal wave speed of pulsating traveling fronts in the direction of the road. In this paper, the line $\{(x,0):~ x\in\mathbb{R}\}$ will be referred to as the road in the plane $\mathbb{R}^2$. The heterogeneity is assumed to appear in $x$-direction. Then by symmetry, we can consider the upper half-plane $\Omega:=\{(x,y)\in\mathbb{R}^2:~ y>0\}$ as the field. Denote by $u(t,x)$ the density of population on the road and by $v(t,x,y)$ the density of population in the field. 
The population in the field is assumed to be governed by a Fisher-KPP equation with diffusivity $d$ and heterogeneous nonlinearity $f(x,v)$, whereas the population on the road is subject to a diffusion equation with diffusivity $D>0$ which is a priori different from $d$. Moreover, there are exchanges of populations between the road and the field in which the parameter $\mu>0$ stands for the rate of individuals on the road going into the field, while the parameter $\nu>0$ represents the rate of individuals passing from the field to the road. Therefore, we are led to the following system: \begin{equation} \label{pb in half-plane} \begin{cases} \partial_t u -D\partial_{xx}u=\nu v(t,x,0)-\mu u, & t>0,\ x\in \mathbb{R},\\ \partial_t v -d\Delta v =f(x,v), &t>0,\ (x,y)\in\Omega,\\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0), &t>0,\ x\in \mathbb{R}. \end{cases} \end{equation} We assume that the reaction term $f(x,v)$ depends on the $x$ variable in a periodic fashion. As a simple example, $f$ may be of the type $f(x,v)=a(x)v(1-v)$ in which the periodic coefficient $a(x)$ can be interpreted as an effective birth rate of the population. In models of biological invasions, the heterogeneity may be a consequence of the presence of highly differentiated zones such as forests, rivers, grasslands, roads, villages, etc., where the species in consideration may tend to reproduce or die with different rates from one place to another. Therefore, it is a fundamental problem to understand how heterogeneity influences the characteristics of front propagation such as front speeds and front profiles. Let us recall the origin of this model and relevant results. The field-road model was first introduced by Berestycki, Roquejoffre and Rossi \cite{BRR2013-1} in 2013 where all parameters are homogeneous. The authors proved that a strong diffusion on the road enhances global invasion in the field. More precisely, denote by $w^*$ the asymptotic spreading speed in the direction of the road for the homogeneous field-road model and by $c_{KPP}:=2\sqrt{df'(0)}$ the spreading speed for the scalar KPP equation $u_t-d u_{xx}=f(u)$, they proved that: if $D\le 2d$, then $w^*=c_{KPP}$; if $D>2d$, then $w^*>c_{KPP}$. Moreover, they showed that the propagation velocity on the road increases indefinitely as $D$ grows to infinity. As a sequel, the same authors introduced in \cite{BRR2013-2} transport and mortality on the road to understand the resulting new effects. Let us point out that the original model was considered in a homogeneous frame, which means that every place in the field is equivalently suitable for the survival of species, whereas this homogeneity assumption is hardly satisfied in natural environments. Therefore, it is of the essence to take into account the heterogeneity of the medium. Later on, it was proved in \cite{BCRR2013-2014,BRR2016-1} that the road enhances the asymptotic speed of propagation in a cone of directions. The paper \cite{BRR2016-2} established the existence of standard traveling fronts for this homogeneous system for $c\ge w^*$. 
Giletti, Monsaingeon and Zhou \cite{GMZ2015} considered this model with spatially periodic exchange coefficients: \begin{equation*} \begin{cases} \partial_t u -D\partial_{xx}u=\nu(x) v(t,x,0)-\mu(x) u, &t>0,\ x\in \mathbb{R}, \\ \partial_t v -d\Delta v =f(v), &t>0,\ (x,y)\in\Omega,\\ -d\partial_y v(t,x,0)=\mu(x) u(t,x)-\nu(x) v(t,x,0), &t>0,\ x\in \mathbb{R}, \end{cases} \end{equation*} where $\mu(x), \nu(x)\in C^{1,r}(\mathbb{R})$ are $L$-periodic in $x$, and $\mu(x),\nu(x) \ge\not\equiv 0$. They recovered the same diffusion threshold $D=2d$ as in \cite{BRR2013-1}. In 2016, Tellini \cite{Tellini2016} studied the homogeneous field-road model in a strip with an imposed Dirichlet boundary condition on the other side of the strip. We note that traveling fronts have only been studied in \cite{BRR2016-2} for the original homogeneous model and in \cite{Dietrich2015} for a truncated problem with an ignition-type nonlinearity. Related results were also obtained in various frameworks. The case of a fractional diffusion on the road was treated in \cite{BCRR2013-2014,BCRR2015}. Nonlocal exchanges were studied in \cite{Pauthier2015,Pauthier2016}. Models with an ignition-type nonlinearity were considered in \cite{Dietrich2015,Dietrich2017}. The field-road model set in an infinite cylinder with fast diffusion on the surface was investigated in \cite{RTV2017}. The case where the field is a cone was studied in \cite{Ducasse2018}. The authors in \cite{BDR2019} discussed the effect of the road on a population in an ecological niche facing climate change, based on the notion of generalized principal eigenvalues for heterogeneous road-field systems developed in \cite{BDR1}. Propagation phenomena for heterogeneous KPP bulk-surface systems in a cylindrical domain were investigated recently in \cite{BGT2020}. The existence of weak solutions to an elliptic problem in bounded and unbounded strips motivated by the field-road model was discussed in \cite{CZ2020}. An interesting but different field-road model, in which the road has a very thin width, was introduced in \cite{LW2017} using so-called effective boundary conditions to study speed enhancement and the asymptotic spreading speed. By contrast with standard periodic reaction-diffusion equations, the mathematical study of \eqref{pb in half-plane} presents the following difficulties. Firstly, the periodicity assumption is imposed only on the $x$ variable and not on $y$, which leads to the noncompactness of the domain; therefore the existence of pulsating fronts cannot easily be obtained by PDE methods. Secondly, due to the heterogeneity of $f$, the situation is much more involved, so that we are not able to derive a precise threshold result of speed enhancement with respect to the different diffusivities on the road and in the field. Thirdly, in terms of the generalized eigenvalue problem in the half-plane, one of the main technical difficulties is to obtain estimates for the generalized principal eigenfunction pair. To the best of our knowledge, there has so far been no result about the existence of generalized traveling fronts for the field-road model in heterogeneous media. The aim of this work is to prove the existence of the asymptotic spreading speed $c^*$ as well as its coincidence with the minimal speed of pulsating traveling fronts along the road for \eqref{pb in half-plane} in a spatially periodic habitat. Our strategy is to study a truncated problem with an imposed zero Dirichlet upper boundary condition as a first step.
Specifically, by applying principal eigenvalue theory and a dynamical systems method, we show the existence of the asymptotic spreading speed $c^*_R$ as well as its coincidence with the minimal speed of pulsating traveling fronts along the road. We further give a variational formula for $c^*_R$ by using the principal eigenvalue of a certain linear elliptic problem. Based on the study of the truncated problem, we eventually go back to the analysis of the original problem in the half-plane by combining the generalized principal eigenvalue approach with an asymptotic method. Let us mention that the results in this paper can also be adapted to the case of periodic exchange coefficients treated in \cite{GMZ2015}. For general reaction-diffusion problems, there have been many remarkable works on spreading properties and pulsating traveling fronts. We refer to \cite{Weinberger2002, BH2002, BHN2005, LZ2007, BHN2010, BHR2005-2, HR2011} and references therein. \section{Hypotheses and main results} Throughout this paper, we assume that $f$: $\mathbb{R}\times\mathbb{R}_+\to \mathbb{R}$ is of class $C^{1,\delta}$ in $(x,v)$ (with $0<\delta<1$) and $C^2$ in $v$, $L$-periodic in $x$, and satisfies the KPP assumption: $$ f(\cdot,0)\equiv 0\equiv f(\cdot,1),~ 0<f(\cdot,v)\le f_v(\cdot,0)v~ \text{for}~v\in(0,1),~ f(\cdot,v)<0~ \text{for}~v\in(1,+\infty).$$ Define $M:=\max_{[0,L]}f_v(x,0)$ and $m:=\min_{[0,L]}f_v(x,0)$. Then $M\ge m>0$. We further assume that $$\forall x\in\mathbb{R},~ v\mapsto \frac{f(x,v)}{v} \ \text{is decreasing in} \ v>0.$$ In what follows, as far as the Cauchy problem is concerned, we always assume that the initial condition $(u_0,v_0)$ is nonnegative, bounded and continuous. We now present the results of this paper. As a first step, we focus on the following truncated problem with an imposed Dirichlet upper boundary condition: \begin{equation} \label{pb in strip} \begin{cases} \partial_t u -D\partial_{xx}u=\nu v(t,x,0)-\mu u, &t>0,\ x\in \mathbb{R}, \\ \partial_t v -d\Delta v =f(x,v), &t>0,\ (x,y)\in\Omega_R,\\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0), &t>0,\ x\in \mathbb{R},\\ v(t,x,R)=0, &t>0,\ x\in \mathbb{R}, \end{cases} \end{equation} in which $\Omega_R:=\{(x,y)\in\mathbb{R}^2:~ 0<y<R\}$ denotes a truncated domain with width $R$ sufficiently large. In fact, the width $R$ of the strip plays a crucial role in the long-time behavior of the corresponding Cauchy problem \eqref{pb in strip}, due to the zero Dirichlet upper boundary condition. A natural explanation, from the biological point of view, is that if the width of the strip is not sufficiently large, the species may eventually go extinct because of the unfavorable Dirichlet condition on the upper boundary. Therefore, we shall give a sufficient condition on $R$ such that the species can persist successfully. Here is our statement. \begin{theorem} \label{thm2.1} If \begin{equation} \label{R-condition} m>\frac{d\pi^2}{4R^2}, \end{equation} then \eqref{pb in strip} admits a unique nontrivial nonnegative stationary solution $(U_R, V_R)$, which is $L$-periodic in $x$. Moreover, if $(u,v)$ is the solution of \eqref{pb in strip} with a nonnegative, bounded and continuous initial datum $(u_0, v_0)\not\equiv(0,0)$, then \begin{equation} \label{large time-truncated} \lim\limits_{t\to +\infty}(u(t,x),v(t,x,y))= (U_R(x), V_R(x,y))~~\text{locally uniformly in}~ (x,y)\in\overline\Omega_R. \end{equation} \end{theorem} \begin{remark} \textnormal{In particular, when the environment is homogeneous, i.e.
$f(x,v)\equiv f(v)$, $R$ should satisfy $4R^2 f'(0)>d\pi^2$, which coincides with the condition in \cite{Tellini2016}. Let $R_*>0$ be such that $m=\frac{d\pi^2}{4R^2_*}$. For any $R>R_0:=2R_*$, \eqref{R-condition} is satisfied and there also holds $m=\frac{d\pi^2}{R^2_0}>\frac{d\pi^2}{R^2}$. \textit{Throughout the paper, as far as the truncated problem is concerned, it is not restrictive to assume that $R>R_0$ (since our concern is to take $R\to+\infty$ to consider \eqref{pb in half-plane}), which will be convenient to prove the positivity of the asymptotic spreading speed $c^*_R$ for problem \eqref{pb in strip}.}} \end{remark} Let $(U_R,V_R)$ be the unique nontrivial nonnegative stationary solution of \eqref{pb in strip} in the sequel. We are now in a position to investigate spreading properties of solutions to \eqref{pb in strip} in $\overline\Omega_R$, which is based on dynamical system method and principal eigenvalue theory. We first consider the following eigenvalue problem in the strip $\overline\Omega_R$: \begin{align} \label{eigen-pb-strip} \begin{cases} -D\phi''+2D\alpha\phi'+(-D\alpha^2+\mu)\phi-\nu\psi(x,0)=\sigma\phi, &x\in\mathbb{R},\\ -d\Delta\psi+2d\alpha\partial_x\psi-(d\alpha^2+f_v(x,0))\psi=\sigma\psi, &(x,y)\in \Omega_R,\\ -d\partial_y\psi(x,0)+\nu\psi(x,0)-\mu\phi=0, \ &x\in\mathbb{R},\\ \psi(x,R)=0, \ &x\in\mathbb{R},\\ \phi, \psi \ \text{are} \ L\text{-periodic with respect to} \ x. \end{cases} \end{align} The compactness of the domain allows us to apply the classical Krein-Rutman theory which provides the existence of the principal eigenvalue $\lambda_R(\alpha)\in\mathbb{R}$ and the associated unique (up to multiplication by some constant) positive principal eigenfunction pair $(P_{\alpha,R}(x),Q_{\alpha,R}(x,y)) \in C^3(\mathbb{R})\times C^3(\overline\Omega_R)$ for each $\alpha\in\mathbb{R}$. \begin{theorem} \label{thm-asp-strip} Let $(U_R,V_R)$ be the unique nontrivial nonnegative stationary solution of \eqref{pb in strip} obtained in Theorem \ref{thm2.1} and let $(u,v)$ be the solution of \eqref{pb in strip} with a nontrivial continuous initial datum $(u_0,v_0)$ with $(0,0)\le (u_0,v_0)\le (U_R,V_R)$ in $\overline\Omega_R$. Then there exists $c^*_R>0$ given by \begin{equation*} c^*_{R}=\inf\limits_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}, \end{equation*} called the asymptotic spreading speed, such that the following statements are valid: \begin{enumerate}[(i)] \item If $(u_0,v_0)$ is compactly supported, then for any $c> c^*_{R}$, there holds \begin{equation*} \lim\limits_{ t\to+\infty}\sup\limits_{|x|\ge ct,\ y\in[0,R]} |(u(t,x),v(t,x,y))|=0. \end{equation*} \item For any $0<c<c^*_R$, there holds \begin{equation*} \lim\limits_{ t\to+\infty}\sup\limits_{|x|\le ct,\ y\in[0,R]}|(u(t,x),v(t,x,y))-(U_R(x),V_R(x,y))|=0. \end{equation*} \end{enumerate} \end{theorem} Before stating the result of pulsating fronts for \eqref{pb in strip}, let us give the definition of pulsating traveling fronts in the strip $\overline\Omega_R$ for clarity. \begin{definition}\label{Def2.3} A rightward pulsating front of \eqref{pb in strip} connecting $(U_R(x), V_R(x,y))$ to $(0,0)$ with effective mean speed $c\in\mathbb{R}_+$ is a time-global classical solution $(u(t,x),v(t,x,y))=(\phi_R(x-ct,x),\psi_R(x-ct,x,y))$ of \eqref{pb in strip} such that the following periodicity property holds: \begin{equation} \label{periodicity-strip} u(t+\frac{k}{c},x)=u(t,x-k),\quad v(t+\frac{k}{c},x,y)=v(t,x-k,y) \quad \forall k\in L\mathbb{Z},~ \forall t\in\mathbb{R}, ~\forall (x,y)\in\overline\Omega_R. 
\end{equation} Moreover, the profile $(\phi_R(s,x),\psi_R(s,x,y))$ satisfies \begin{equation} \label{limit-strip} \begin{cases} \phi_R(-\infty,x)=U_R(x), \ \phi_R(+\infty,x)=0 \ \text{uniformly in} \ x\in\mathbb{R},\\ \psi_R(-\infty,x,y)=V_R(x,y), \ \psi_R(+\infty,x,y)=0 \ \text{uniformly in} \ (x,y)\in\overline\Omega_R, \end{cases} \end{equation} with $(\phi_R(s,x), \psi_R(s,x,y))$ being continuous in $s\in\mathbb{R}$. Similarly, a leftward pulsating front of \eqref{pb in strip} connecting $(0,0)$ to $(U_R(x), V_R(x,y))$ with effective mean speed $c\in\mathbb{R}_+$ is a time-global classical solution $(\tilde u(t,x),\tilde v(t,x,y))=(\phi_R(x+ct,x),\psi_R(x+ct,x,y))$ of \eqref{pb in strip} such that the following periodicity property holds: \begin{equation*} \tilde u(t+\frac{k}{c},x)=\tilde u(t,x+k),\quad \tilde v(t+\frac{k}{c},x,y)=\tilde v(t,x+k,y) \quad \forall k\in L\mathbb{Z},~\forall t\in\mathbb{R}, ~\forall (x,y)\in\overline\Omega_R. \end{equation*} Moreover, the profile $(\phi_R(s,x),\psi_R(s,x,y))$ satisfies \begin{equation*} \begin{cases} \phi_R(-\infty,x)=0, \ \phi_R(+\infty,x)=U_R(x) \ \text{uniformly in} \ x\in\mathbb{R},\\ \psi_R(-\infty,x,y)=0, \ \psi_R(+\infty,x,y)=V_R(x,y) \ \text{uniformly in} \ (x,y)\in\overline\Omega_R, \end{cases} \end{equation*} with $(\phi_R(s,x), \psi_R(s,x,y))$ being continuous in $s\in\mathbb{R}$. \end{definition} \begin{theorem}\label{thm-PTF-in strip} Let $c^*_{R}$ be given as in Theorem \ref{thm-asp-strip}. Then the following statements are valid: \begin{enumerate}[(i)] \item Problem \eqref{pb in strip} admits a rightward pulsating front connecting $(U_R(x), V_R(x,y))$ to $(0,0)$ with wave profile $(\phi_R(s,x),\psi_R(s,x,y))$ being continuous and decreasing in $s$ if and only if $c\ge c^*_{R}$. \item Problem \eqref{pb in strip} admits a leftward pulsating front connecting $(0,0)$ to $(U_R(x), V_R(x,y))$ with wave profile $(\phi_R(s,x),\psi_R(s,x,y))$ being continuous and increasing in $s$ if and only if $c\ge c^*_{R}$. \end{enumerate} \end{theorem} Theorems \ref{thm-asp-strip} and \ref{thm-PTF-in strip} are proved simultaneously. It is worth mentioning that, compared with the homogeneous field-road model \cite{BRR2013-1}, where there exists a unique minimal speed $w^*$ along the road in the left and right directions, here we obtain a striking resemblance. That is, under the spatially periodic assumption and the one-dimensional setting on the road, the KPP minimal wave speeds in the right and left directions are still the same, even though there is no homogeneity in the $x$-direction anymore. However, the asymptotic spreading speeds may differ in general, depending on the direction, in heterogeneous media and/or in higher dimensions.
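For orientation, let us recall how a variational formula of this type reads in the classical scalar setting; this standard computation is recalled only for comparison and plays no role in the sequel. For the homogeneous KPP equation $\partial_t v-d\partial_{xx}v=f(v)$ on the real line (no road and no heterogeneity), plugging the exponential ansatz $v(t,x)=e^{-\alpha(x-ct)}$ into the linearization at $0$ gives $c\alpha=d\alpha^2+f'(0)$, so that the quantity playing the role of $-\lambda_R(\alpha)$ is $d\alpha^2+f'(0)$ and
\begin{equation*}
\inf_{\alpha>0}\frac{d\alpha^2+f'(0)}{\alpha}=2\sqrt{d\,f'(0)},
\end{equation*}
which is the classical Fisher-KPP spreading speed and minimal wave speed.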
Having the principal eigenvalue $\lambda_R(\alpha)$ for the eigenvalue problem \eqref{eigen-pb-strip} in hand, we construct in Section \ref{generalized p-e} the \textit{generalized} principal eigenvalue $\lambda(\alpha)$ by passing to the limit $R\to+\infty$ in $\lambda_R(\alpha)$ for each $\alpha\in\mathbb{R}$, and show that there exists a positive and $L$-periodic (in $x$) solution $(P_\alpha,Q_\alpha)$ of the following generalized eigenvalue problem in the half-plane: \begin{align} \label{generalized ep} \begin{cases} -D P_\alpha''+2D\alpha P_\alpha'+(-D\alpha^2+\mu) P_\alpha-\nu Q_\alpha(x,0)=\lambda(\alpha) P_\alpha, &x\in\mathbb{R},\\ -d\Delta Q_\alpha+2d\alpha\partial_x Q_\alpha-(d\alpha^2+f_v(x,0)) Q_\alpha=\lambda(\alpha) Q_\alpha, &(x,y)\in\Omega,\\ -d\partial_y Q_\alpha(x,0)+\nu Q_\alpha(x,0)-\mu P_\alpha=0, \ &x\in\mathbb{R},\\ P_\alpha, Q_\alpha \ \text{are} \ L\text{-periodic with respect to} \ x. \end{cases} \end{align} We call $(P_\alpha,Q_\alpha)$ the generalized principal eigenfunction pair associated with $\lambda(\alpha)$. We are now in a position to give the spreading result in the half-plane. \begin{theorem}\label{thm-asp-half-plane} Let $(u, v)$ be the solution of \eqref{pb in half-plane} with a nonnegative, bounded and continuous initial datum $(u_0,v_0)\not\equiv (0,0)$. Then there exists $c^*>0$ defined by \begin{equation*} c^*=\inf\limits_{\alpha>0}\frac{-\lambda(\alpha)}{\alpha}, \end{equation*} called the asymptotic spreading speed, such that the following statements are valid: \begin{itemize} \item[(i)] If $(u_0,v_0)$ is compactly supported, then for any $c> c^*$, for any $A>0$, \begin{equation*} \lim\limits_{ t\to+\infty}\sup\limits_{|x|\ge ct,\ 0\le y\le A} |(u(t,x),v(t,x,y))|=0. \end{equation*} \item[(ii)] If $(u_0,v_0)<(\nu/\mu,1)$, then, for any $0<c<c^*$, for any $A>0$, \begin{equation} \label{find lower bound} \lim\limits_{ t\to+\infty}\sup\limits_{|x|\le ct,\ 0\le y\le A} |(u(t,x),v(t,x,y))-(\nu/\mu,1)|=0. \end{equation} \end{itemize} \end{theorem} In the proof of Theorem \ref{thm-asp-half-plane}, the generalized principal eigenfunction pair $(P_\alpha,Q_\alpha)$ of \eqref{generalized ep} associated with $\lambda(\alpha)$ will play a crucial role in getting the upper bound for the spreading result. The lower bound follows from spreading results in the truncated domain via an asymptotic method. Next, we introduce the notion of pulsating fronts for problem \eqref{pb in half-plane} in the half-plane $\Omega$. \begin{definition}\label{Def2.6} A rightward pulsating front of \eqref{pb in half-plane} connecting $(\nu/\mu,1)$ and $(0,0)$ with effective mean speed $c\in\mathbb{R}_+$ is a time-global classical solution $(u(t,x),v(t,x,y))=(\phi(x-ct,x),\psi(x-ct,x,y))$ of \eqref{pb in half-plane} such that the following periodicity property holds: \begin{equation*} u(t+\frac{k}{c},x)=u(t,x-k),\quad v(t+\frac{k}{c},x,y)=v(t,x-k,y) \quad \forall k\in L\mathbb{Z},~ \forall t\in\mathbb{R},~\forall (x,y)\in\overline{\Omega}. \end{equation*} Moreover, the profile $(\phi(s,x),\psi(s,x,y))$ satisfies \begin{align} \label{PTF-limit condition} \begin{cases} \phi(-\infty,x)={\nu}/{\mu}, \ \phi(+\infty,x)=0 \ \text{uniformly in} \ x\in\mathbb{R},\\ \psi(-\infty,x,y)=1, \ \psi(+\infty,x,y)=0 \ \text{uniformly in} \ x\in\mathbb{R}, \ \text{locally uniformly in} \ y\in [0,+\infty), \end{cases} \end{align} with $(\phi(s,x), \psi(s,x,y))$ being continuous in $s\in\mathbb{R}$.
Similarly, a leftward pulsating front of \eqref{pb in half-plane} connecting $(0,0)$ and $(\nu/\mu,1)$ with effective mean speed $c\in\mathbb{R}_+$ is a time-global classical solution $(\tilde u(t,x),\tilde v(t,x,y))=(\phi(x+ct,x),\psi(x+ct,x,y))$ of \eqref{pb in half-plane} such that the following periodicity property holds: \begin{equation*} \tilde u(t+\frac{k}{c},x)=\tilde u(t,x+k),\quad \tilde v(t+\frac{k}{c},x,y)=\tilde v(t,x+k,y) \quad \forall k\in L\mathbb{Z},~ \forall t\in\mathbb{R},~\forall (x,y)\in\overline{\Omega}. \end{equation*} Moreover, the profile $(\phi(s,x),\psi(s,x,y))$ satisfies \begin{align*} \begin{cases} \phi(-\infty,x)=0, \ \phi(+\infty,x)={\nu}/{\mu} \ \text{uniformly in} \ x\in\mathbb{R},\\ \psi(-\infty,x,y)=0, \ \psi(+\infty,x,y)=1 \ \text{uniformly in} \ x\in\mathbb{R}, \ \text{locally uniformly in} \ y\in [0,+\infty), \end{cases} \end{align*} with $(\phi(s,x), \psi(s,x,y))$ being continuous in $s\in\mathbb{R}$. \end{definition} Based on Theorem \ref{thm-PTF-in strip} and an asymptotic method, we can show: \begin{theorem}\label{PTF-in half-plane} Let $c^*$ be defined as in Theorem \ref{thm-asp-half-plane}. Then the following statements are valid: \begin{enumerate}[(i)] \item Problem \eqref{pb in half-plane} admits a rightward pulsating front connecting $(\nu/\mu, 1)$ to $(0,0)$ with wave profile $(\phi(s,x),\psi(s,x,y))$ being continuous and decreasing in $s$ if and only if $c\ge c^*$. \item Problem \eqref{pb in half-plane} admits a leftward pulsating front connecting $(0,0)$ to $(\nu/\mu, 1)$ with wave profile $(\phi(s,x),\psi(s,x,y))$ being continuous and increasing in $s$ if and only if $c\ge c^*$. \end{enumerate} \end{theorem} {\bf Outline of the paper.} The remaining part of this paper is organized as follows. In Section \ref{section3}, we state some preliminary results for problem \eqref{pb in half-plane} in the half-plane and for problem \eqref{pb in strip} in the strip, respectively. We prove Liouville-type results and investigate large time behaviors for problem \eqref{pb in half-plane} in Section 3.1 and for problem \eqref{pb in strip} in Section 3.2, respectively. Section \ref{section4} is dedicated to the proofs of Theorems \ref{thm-asp-strip} and \ref{thm-PTF-in strip}. In particular, the principal eigenvalue problem in $\overline\Omega_R$ is investigated; see Proposition \ref{principal eigenvalue in strip}. In Section \ref{section5}, we give the proofs of Theorems \ref{thm-asp-half-plane} and \ref{PTF-in half-plane}, based on the study of the generalized principal eigenvalue problem and the results derived for truncated problems. \section{Preliminaries}\label{section3} In this section, we state some auxiliary results in the half-plane and in the truncated domain, respectively. Specifically, two versions of comparison principles that will be used extensively throughout this paper and the well-posedness of the Cauchy problems for problem \eqref{pb in half-plane} in the half-plane and for problem \eqref{pb in strip} in the strip will be given below, respectively. Since the results can be proved by slight modifications of the arguments in \cite{BRR2013-1}, we omit the proofs here. We also prove Liouville-type results and large time behavior of solutions to Cauchy problems \eqref{pb in half-plane} and \eqref{pb in strip}, respectively. Finally, we investigate the limiting property of the stationary solution in the strip when the width of the strip goes to infinity. In the sequel, a subsolution (resp.
supersolution) is a couple satisfying the system in the classical sense with $=$ replaced by $\le$ (resp. $\ge$) which is continuous up to time 0. We say that a function is a generalized supersolution (resp. subsolution) if it is the minimum (resp. maximum) of a finite number of supersolutions (resp. subsolutions). \subsection{Preliminary results in the half-plane} \begin{proposition} \label{cp} Let $(\underline{u},\underline{v})$ and $(\overline{u}, \overline{v})$ be respectively a subsolution bounded from above and a supersolution bounded from below of \eqref{pb in half-plane} satisfying $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega$ at $t=0$. Then, either $\underline{u}<\overline{u}$ and $\underline{v}<\overline{v}$ in $\overline\Omega$ for all $t>0$, or there exists $T>0$ such that $(\underline{u},\underline{v})=(\overline{u}, \overline{v})$ in $\overline\Omega$ for $t\le T$. \end{proposition} \begin{proposition} \label{cp--generalized sub} Let $E\subset (0,+\infty)\times\mathbb{R}$ and $F\subset (0,+\infty)\times\Omega$ be two open sets and let $(u_1, v_1)$ and $(u_2,v_2)$ be two subsolutions of \eqref{pb in half-plane} bounded from above, satisfying \begin{equation*} u_1\le u_2 \ \text{on} \ (\partial E)\cap((0,+\infty)\times\mathbb{R}),\qquad v_1\le v_2\ \text{on} \ (\partial F)\cap((0,+\infty)\times\Omega). \end{equation*} If the functions $\underline{u}$, $\underline{v}$ defined by \begin{align*} \underline{u}(t,x)&:= \begin{cases} \max\{u_1(t,x), u_2(t,x)\}\qquad &\text{if}\ (t,x)\in\overline{E}\\ u_2(t,x)\qquad&\text{otherwise} \end{cases}\\ \underline{v}(t,x,y)&:= \begin{cases} \max\{v_1(t,x,y), v_2(t,x,y)\}~ &\text{if}\ (t,x,y)\in\overline{F}\\ v_2(t,x,y)\qquad&\text{otherwise} \end{cases} \end{align*} satisfy \begin{align*} \underline{u}(t,x)>u_2(t,x)\Rightarrow \underline{v}(t,x,0)\ge v_1(t,x,0),\\ \underline{v}(t,x,0)>v_2(t,x,0)\Rightarrow \underline{u}(t,x)\ge u_1(t,x), \end{align*} then, any supersolution $(\overline{u},\overline{v})$ of \eqref{pb in half-plane} bounded from below and such that $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega$ at $t=0$, satisfies $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega$ for all $t>0$. \end{proposition} \begin{proposition} \label{wellposedness-half-plane} The Cauchy problem \eqref{pb in half-plane} with nonnegative, bounded and continuous initial datum $(u_0,v_0)\not\equiv(0,0)$ admits a unique nonnegative classical bounded solution $(u,v)$ for $t\ge 0$ and $(x,y)\in\overline\Omega$. \end{proposition} Now we prove a Liouville-type result for the stationary problem corresponding to \eqref{pb in half-plane} as well as the long time behavior of solutions for the Cauchy problem \eqref{pb in half-plane}. \begin{theorem} \label{liouville} Problem \eqref{pb in half-plane} has a unique positive bounded stationary solution $(U,V)\equiv({\nu}/{\mu}, 1)$. Moreover, let $(u,v)$ be the solution of \eqref{pb in half-plane} with a nonnegative, bounded and continuous initial datum $(u_0, v_0)\not\equiv(0,0)$, then \begin{equation} \label{large time} \lim\limits_{t\to +\infty}(u(t,x),v(t,x,y))= (\nu/\mu, 1)\quad\text{locally uniformly for}\ (x,y)\in\overline{\Omega}. \end{equation} \end{theorem} \begin{proof} Let $(u,v)$ be the solution, given in Proposition \ref{wellposedness-half-plane}, of the Cauchy problem \eqref{pb in half-plane} starting from a nonnegative, bounded and continuous initial datum $(u_0, v_0)\not\equiv(0,0)$. 
We first show the existence of the nontrivial nonnegative and bounded stationary solution of \eqref{pb in half-plane}, by using a sub- and supersolution argument. Take $\rho>0$ large enough such that the principal eigenvalue of $-\Delta$ in $B_\rho\subset\mathbb{R}^2$ with Dirichlet boundary condition is less than $m/(2d)$ (recall that $m=\min_{[0,L]}f_v(x,0)>0$). Then, the associated principal eigenfunction $\varphi_\rho$ satisfies $-d\Delta \varphi_\rho\le m\varphi_\rho/2$ in $B_\rho$. Hence, there exists $\varep_0>0$ such that the function $\varep\varphi_\rho$ satisfies $-d\Delta(\varep\varphi_\rho)\le f(x,\varep\varphi_\rho)$ in $B_\rho$ for all $\varep\in (0,\varep_0]$. Define $\underline V(x,y)=\varep\varphi_\rho(x,y-\rho-1)$ in $B_\rho(0,\rho+1)$ and extend it by 0 outside. The function pair $(0,\underline{V})$ is nonnegative, bounded and continuous. On the other hand, Proposition \ref{cp} implies that $(u,v)$ is positive for $t>0$ and $(x,y)\in\overline\Omega$. Hence, up to decreasing $\varep$ if necessary, we have that $(0, \underline{V})$ is below $(u(1,\cdot),v(1,\cdot,\cdot))$ in $\overline\Omega$. Let $(\underline{u}, \underline{v})$ be the unique bounded classical solution of \eqref{pb in half-plane} with initial condition $(0, \underline{V})$. It then follows from Proposition \ref{cp--generalized sub} that $(\underline{u},\underline{v})$ is increasing in $t$ and $(\underline u(t,x),\underline v(t,x,y))<(u(t+1,x),v(t+1,x,y))$ for $t>0$ and $(x,y)\in\overline\Omega$. By parabolic estimates, $(\underline{u},\underline{v})$ converges locally uniformly in $\overline\Omega$ as $t\to+\infty$ to a stationary solution $(U_1, V_1)$ of \eqref{pb in half-plane} satisfying \begin{equation} \label{lower-0} 0<U_1\le\liminf\limits_{ t\to+\infty}u(t,x),\qquad \underline{V}<V_1\le\liminf\limits_{ t\to+\infty}v(t,x,y). \end{equation} On the other hand, define \begin{equation}\label{super-constant sol} (\overline{U},\overline{V}):=\max\bigg\{\frac{\Vert u_0\Vert_{L^{\infty}(\mathbb{R})}}{\nu},\frac{\Vert v_0\Vert_{L^{\infty}(\overline\Omega)}+1}{\mu}\bigg\}(\nu,\mu). \end{equation} Obviously, $(\overline{U},\overline{V})$ is a supersolution of \eqref{pb in half-plane} and satisfies $(\overline{U},\overline{V})\ge (u_0,v_0)$. Let $(\overline{u},\overline{v})$ be the solution of \eqref{pb in half-plane} with initial datum $(\overline{U}, \overline{V})$, then Proposition \ref{cp} implies that $(\overline{u},\overline{v})$ is non-increasing in $t$. From parabolic estimates, $(\overline{u},\overline{v})$ converges locally uniformly in $\overline\Omega$ as $t\to+\infty$ to a stationary solution $(U_2, V_2)$ of \eqref{pb in half-plane} satisfying \begin{equation} \label{upper-0} \limsup\limits_{ t\to+\infty}u(t,x)\le U_2\le \overline U,\qquad \limsup\limits_{ t\to+\infty}v(t,x,y)\le V_2\le \overline V. \end{equation} Therefore, the existence of nontrivial nonnegative and bounded stationary solutions of \eqref{pb in half-plane} is proved. Let $(U,V)$ be an arbitrary nontrivial nonnegative and bounded stationary solution of \eqref{pb in half-plane}. In the spirit of \cite[Proposition 4.1]{BRR2013-1} and \cite[Lemma 2.5]{GMZ2015}, one can further conclude that $\inf_{\overline\Omega}V>0$ and then $\inf_{\mathbb{R}}U>0$, by using the hypothesis $m:=\min_{[0,L]}f_v(x,0)>0$. Next, we show the uniqueness. Assume that $(U_1, V_1)$ and $(U_2, V_2)$ are two distinct positive bounded stationary solutions of \eqref{pb in half-plane}. 
Then, there is $\varep>0$ such that $U_i\ge\varep$ in $\mathbb{R}$ and $V_i\ge\varep$ in $\overline\Omega$ for $i=1,2$. Therefore, we can define \begin{equation*} \theta^*:=\sup\big\{\theta>0:~ (U_1, V_1)>\theta(U_2, V_2)~~ \text{in} ~\overline\Omega\big\}>0. \end{equation*} Assume that $\theta^*<1$. Set $P:=U_1-\theta^*U_2\ge 0$ in $\mathbb{R}$ and $Q:=V_1-\theta^*V_2\ge 0$ in $\overline\Omega$. From the definition of $\theta^*$, there exists a sequence $(x_n,y_n)_{n\in\mathbb{N}}$ in $\overline\Omega$ such that $P(x_n)\to 0$, or $Q(x_n,y_n)\to 0$ as $n\to+\infty$. Assume that the second case occurs, we claim that $(y_n)_{n\in\mathbb{N}}$ is bounded. Assume by contradiction that $y_n\to+\infty$ as $n\to+\infty$, then, up to extraction of a subsequence, the functions $V_{i,n}(x,y):=V_i(x,y+y_n)$ $(i=1,2)$ converge locally uniformly to positive bounded functions $V_{i,\infty}$ solving $-d\Delta V_{i,\infty}=f(x,V_{i,\infty})$ in $\mathbb{R}^2$, which implies that $V_{i,\infty}\equiv 1$ in $\mathbb{R}^2$, because of the KPP hypothesis on $f$. Then, it follows that $Q(x_n,y_n)\to 1-\theta^*>0$ as $n\to+\infty$, which is a contradiction. Thus, the sequence $(y_n)_{n\in\mathbb{N}}$ must be bounded. We then distinguish two subcases. Assume that, up to a subsequence, $x_n\to\overline x\in\mathbb{R}$ and $y_n\to \overline y\in[0,+\infty)$ as $n\to+\infty$. By continuity, one has $Q\ge 0$ in $\overline\Omega$ and $Q(\overline x, \overline y)=0$. Suppose that $\overline y>0$. Notice that $Q$ satisfies \begin{equation} \label{Q-1} -d\Delta Q=f(x,V_1)-\theta^*f(x,V_2)~~\text{in}~\Omega, \end{equation} Since $f(x,v)/v$ is decreasing in $v>0$ for all $x\in\mathbb{R}$ and since $\theta^*<1$, it follows that $-d\Delta Q>f(x,V_1)-f(x,\theta^*V_2)$ in $\Omega$. Since $f$ is locally Lipschitz continuous in the second variable, there exists a bounded function $b(x,y)$ defined in $\Omega$ such that \begin{equation} \label{Q-eq-half-plane} -d\Delta Q+b Q>0~~\text{in}~\Omega. \end{equation} Since $Q\ge 0$ in $\overline\Omega$ and $Q(\overline x,\overline y)=0$, it follows from the strong maximum principle that $Q\equiv 0$ in $\Omega$, which contradicts the strict inequality in \eqref{Q-eq-half-plane}. Hence, $Q>0$ in $\Omega$. Suppose now that $\overline y=0$, then $Q(\overline x,0)=0$. The Hopf lemma implies that $\partial_y Q(\overline x,0)>0$. Using the boundary condition, one gets $0>-d\partial_y Q(\overline x,0)=\mu P(\overline x)-\nu Q(\overline x,0)=\mu P(\overline x)\ge 0$. This is a contradiction. Therefore, $Q>0$ in $\overline\Omega$. In the general case, let $\overline{x}_n\in [0,L]$ be such that $x_n-\overline{x}_n\in L\mathbb{Z}$, then up to extraction of a subsequence, $\overline{x}_n\to \overline x_{\infty}\in[0,L]$ as $n\to\infty$. Since $y_n$ are bounded, up to extraction of a further subsequence, one gets $y_n\to \overline y_\infty\in[0,+\infty)$ as $n\to+\infty$. Let us set $U_{i,n}(x):=U_i(x+x_n)$ and $V_{i,n}(x,y):=V_i(x+x_n,y+y_n)$ for $i=1,2$. Then, $(U_{i,n}, V_{i,n})$ satisfies \begin{align} \label{U-V-1} \begin{cases} -DU''_{i,n}=\nu V_{i,n}(x,0)-\mu U_{i,n}~~&\text{in}~\mathbb{R},\cr -d\Delta V_{i,n}=f(x+\overline x_n, V_{i,n})~~&\text{in}~\Omega,\cr -d\partial_y V_{i,n}(x,0)=\mu U_{i,n}-\nu V_{i,n}(x,0)~~&\text{in}~\mathbb{R}. 
\end{cases} \end{align} From standard elliptic estimates, it follows that, up to a subsequence, $(U_{i,n}, V_{i,n})$ converges locally uniformly as $n\to+\infty$ to a classical solution $(U_{i,\infty}, V_{i,\infty})$ of \begin{align}\label{U-V-2} \begin{cases} -DU''_{i,\infty}=\nu V_{i,\infty}(x,0)-\mu U_{i,\infty}~~&\text{in}~\mathbb{R},\cr -d\Delta V_{i,\infty}=f(x+\overline x_\infty, V_{i,\infty})~~&\text{in}~\Omega,\cr -d\partial_y V_{i,\infty}(x,0)=\mu U_{i,\infty}-\nu V_{i,\infty}(x,0)~~&\text{in}~\mathbb{R}. \end{cases} \end{align} Moreover, there is $\varep>0$ such that $U_{i,\infty}\ge \varep$ in $\mathbb{R}$ and $V_{i,\infty}\ge\varep$ in $\overline\Omega$. Set $P_\infty:=U_{1,\infty}-\theta^* U_{2,\infty}$ in $\mathbb{R}$, and $Q_\infty:=V_{1,\infty}-\theta^* V_{2,\infty}$ in $\overline\Omega$. Then, $P_\infty\ge 0$ in $\mathbb{R}$, $Q_\infty\ge 0$ in $\overline\Omega$ and $Q_\infty(\overline x_\infty,\overline y_\infty)=0$. Suppose that $\overline y_\infty>0$. Notice that $Q_\infty$ satisfies \begin{equation*} -d\Delta Q_\infty=f(x+\overline x_\infty, V_{1,\infty})-\theta^* f(x+\overline x_\infty, V_{2,\infty})~~~\text{in}~\Omega. \end{equation*} By analogy with the analysis for problem \eqref{Q-1}, one eventually obtains that $Q_\infty>0$ in $\Omega$. One can exclude the case that $\overline y_\infty=0$, by using again the Hopf lemma and the boundary condition. Therefore, $Q_\infty>0$ in $\overline\Omega$. Thus, the case that $Q(x_n,y_n)\to 0$ is ruled out. It is left to discuss the first case that $P(x_n)\to 0$ as $n\to+\infty$. Assume first that, up to a subsequence, $x_n\to\overline x$ as $n\to+\infty$. By continuity, one has $P\ge 0$ in $\mathbb{R}$ and $P(\overline x)=0$. Moreover, $P$ satisfies $-DP''+\mu P=\nu Q(\cdot,0)>0$ in $\mathbb{R}$. The strong maximum principle implies that $P\equiv 0$ in $\mathbb{R}$, which is a contradiction. In the general case, let $\overline{x}_n\in [0,L]$ be such that $x_n-\overline{x}_n\in L\mathbb{Z}$, then up to a subsequence, $\overline{x}_n\to \overline x_{\infty}\in[0,L]$ as $n\to\infty$. Set $U_{i,n}(x):=U_i(x+x_n)$ in $\mathbb{R}$ and $V_{i,n}(x,y):=V_i(x+x_n,y)$ in $\overline\Omega$, for $i=1,2$. Then $(U_{i,n},V_{i,n})$ satisfies \eqref{U-V-1} in $\overline\Omega$. From standard elliptic estimates, it follows that, up to a subsequence, $U_{i,n}$ and $V_{i,n}$ converge as $n\to+\infty$ in $C^2_{loc}$ to $U_{i,\infty}$ and $V_{i,\infty}$, respectively, which satisfy \eqref{U-V-2}. Set $P_\infty:=U_{1,\infty}-\theta^* U_{2,\infty}$ in $\mathbb{R}$ and $Q_\infty:=V_{1,\infty}-\theta^* V_{2,\infty}$ in $\overline\Omega$. Then, $P_\infty\ge 0$ in $\mathbb{R}$ and $P_\infty(0)=0$, $Q_\infty> 0$ in $\overline\Omega$. Moreover, it holds \begin{equation*} -DP''_\infty+\mu P_\infty=\nu Q_\infty(\cdot,0)>0~~\text{in}~\mathbb{R}. \end{equation*} The strong maximum principle then implies that $P_\infty\equiv 0$ in $\mathbb{R}$, which is a contradiction. Hence, the case that $P(x_n)\to 0$ is also impossible. Consequently, $\theta^*\ge 1$, whence $(U_1,V_1)\ge (U_2,V_2)$ in $\overline\Omega$. By interchanging the roles of $(U_1,V_1)$ and $(U_2,V_2)$, one can show that $(U_2,V_2)\ge (U_1,V_1)$ in $\overline\Omega$. The uniqueness is achieved. Furthermore, if $(U,V)$ is a positive bounded stationary solution of \eqref{pb in half-plane}, it is easy to check that any $L$-lattice translation in $x$ of $(U,V)$ is still a positive bounded stationary solution of \eqref{pb in half-plane}. Thus, $(U,V)$ is $L$-periodic in $x$. 
It is straightforward to check that $(\nu/\mu,1)$ satisfies the stationary problem of \eqref{pb in half-plane}. Therefore, $(\nu/\mu,1)$ is the unique positive and bounded stationary solution of \eqref{pb in half-plane}. The large time behavior \eqref{large time} of the solution to the Cauchy problem \eqref{pb in half-plane} then follows immediately from \eqref{lower-0} and \eqref{upper-0}. The proof of Theorem \ref{liouville} is thereby complete. \end{proof} \subsection{Preliminary results in the strip} \label{section3-2} \begin{proposition} \label{cp-strip} Let $(\underline{u},\underline{v})$ and $(\overline{u}, \overline{v})$ be respectively a subsolution bounded from above and a supersolution bounded from below of \eqref{pb in strip} satisfying $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega_R$ at $t=0$. Then, either $\underline{u}<\overline{u}$ and $\underline{v}<\overline{v}$ in $\mathbb{R}\times[0,R)$ and $\partial_y\bar v(t,x,R)<\partial_y\underline v(t,x,R)$ on $\mathbb{R}$ for all $t>0$, or there exists $T>0$ such that $(\underline{u},\underline{v})=(\overline{u}, \overline{v})$ in $\overline\Omega_R$ for $t\le T$. \end{proposition} \begin{proposition} \label{cp--generalized sub-strip} Let $E\subset(0,+\infty)\times \mathbb{R}$ and $F\subset (0,+\infty)\times\Omega_R$ be two open sets and let $(u_1, v_1)$ and $(u_2,v_2)$ be two subsolutions of \eqref{pb in strip} bounded from above, satisfying \begin{equation*} u_1\le u_2 \ \text{on} \ (\partial E)\cap((0,+\infty)\times\mathbb{R}),\qquad v_1\le v_2\ \text{on} \ (\partial F)\cap((0,+\infty)\times\Omega_R). \end{equation*} If the functions $\underline{u}$, $\underline{v}$ defined by \begin{align*} \underline{u}(t,x)&:= \begin{cases} \max\{u_1(t,x), u_2(t,x)\}\qquad &\text{if}\ (t,x)\in\overline{E}\\ u_2(t,x)\qquad\qquad\qquad\qquad&\text{otherwise} \end{cases}\\ \underline{v}(t,x,y)&:= \begin{cases} \max\{v_1(t,x,y), v_2(t,x,y)\}~ &\text{if}\ (t,x,y)\in\overline{F}\\ v_2(t,x,y)\qquad\qquad&\text{otherwise} \end{cases} \end{align*} satisfy \vspace{-3mm} \begin{align*} \underline{u}(t,x)>u_2(t,x)\Rightarrow \underline{v}(t,x,0)\ge v_1(t,x,0),\\ \underline{v}(t,x,0)>v_2(t,x,0)\Rightarrow \underline{u}(t,x)\ge u_1(t,x), \end{align*} then, any supersolution $(\overline{u},\overline{v})$ of \eqref{pb in strip} bounded from below and such that $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega_R$ at $t=0$, satisfies $\underline{u}\le\overline{u}$ and $\underline{v}\le\overline{v}$ in $\overline\Omega_R$ for all $t>0$. \end{proposition} \begin{proposition} \label{wellposedness-strip} The Cauchy problem \eqref{pb in strip} with nonnegative, bounded and continuous initial datum $(u_0,v_0)\not\equiv (0,0)$ admits a unique bounded classical solution $(u,v)$ for all $t\ge 0$ and $(x,y)\in\overline\Omega_R$. Moreover, for any $0<\tau<T$ and for any compact subsets $I\subset\mathbb{R}$ and $H\subset\Omega_R$ with $\overline H\cap\{y=0\}=\overline I$, \begin{align*} \Vert u(t,x)\Vert_{C^{1+\frac{\alpha}{2},2+\alpha}([\tau,T]\times \overline I)}+\Vert v(t,x,y)\Vert_{C^{1+\frac{\alpha}{2},2+\alpha}([\tau,T]\times \overline H)}\le C, \end{align*} where $C$ is a positive constant depending on $\tau$,$ T$, $f$, $\Vert u_0\Vert_{L^\infty(\mathbb{R})}$ and $\Vert v_0\Vert_{L^\infty(\overline\Omega_R)}$. 
\end{proposition} The existence of the solution to the Cauchy problem \eqref{pb in strip} follows from an approximation argument by constructing a sequence of approximate solutions in $[-n,n]\times[0,R]$ for $n$ large enough, which satisfy\footnote{Problem \eqref{pb in strip-cut off} with a nonnegative, continuous and bounded initial function has a unique bounded classical solution defined for all $t>0$, which can be obtained in the spirit of Appendix A in \cite{BRR2013-1} and the strong maximum principle.} \begin{equation} \label{pb in strip-cut off} \begin{cases} \partial_t u -D\partial_{xx}u=\nu v(t,x,0)-\mu u, &t>0,\ x\in [-n,n], \\ \partial_t v -d\Delta v =f(x,v), &t>0,\ (x,y)\in(-n,n)\times(0,R),\\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0), &t>0,\ x\in [-n,n],\\ v(t,x,R)=0, &t>0,\ x\in [-n,n],\\ v(t,\pm n, y)=0, &t>0,\ y\in[0,R], \end{cases} \end{equation} and then passing to the limit $n\to+\infty$ via the Arzel\`{a}-Ascoli theorem. Uniqueness comes from the comparison principle Proposition \ref{cp-strip}. The estimate can be derived by standard parabolic $L^p$ theory (see, e.g., \cite[page 168, Proposition 7.14]{Liberman}) and then the Schauder theory. In the following, we show the continuous dependence of the solutions to the Cauchy problem \eqref{pb in strip} on initial data. \begin{proposition} \label{prop3.8} The solutions of the Cauchy problem \eqref{pb in strip} depend continuously on the initial data. \end{proposition} \begin{proof} Let $(u,v)$ be the solution, given in Proposition \ref{wellposedness-strip}, of \eqref{pb in strip} with nonnegative, bounded and continuous initial datum $(u_0,v_0)\not\equiv (0,0)$. We shall prove that for any $\varep>0$, $T>0$, there is $\delta>0$, depending on $\varep$, $T$ and $(u_0,v_0)$, such that for any nonnegative, bounded and continuous function pair $(\tilde u_0,\tilde v_0)$ satisfying \begin{equation} \label{initial-1} \sup_{x\in\mathbb{R}}|u_0(x)-\tilde u_0(x)|<\frac{\nu}{\mu}\delta,~~\sup_{(x,y)\in\overline\Omega_R}|v_0(x,y)-\tilde v_0(x,y)|<\delta, \end{equation} the solution to \eqref{pb in strip} with initial value $(\tilde u_0,\tilde v_0)$ satisfies \begin{equation}\label{sol-1} \sup_{(t,x)\in[0,T]\times\mathbb{R}}|u(t,x)-\tilde u(t,x)|<\frac{\nu}{\mu}\varep,~~\sup_{(t,x,y)\in[0,T]\times\overline\Omega_R}|v(t,x,y)-\tilde v(t,x,y)|<\varep. \end{equation} Recall that $M=\max_{[0,L]}f_v(x,0)$. Define $(w,z):=(u,v)e^{-Mt}$, then $(w,z)$ satisfies \begin{equation} \label{wz} \begin{cases} \partial_t w -D\partial_{xx}w=\nu z(t,x,0)-(\mu+M) w, &t>0,\ x\in \mathbb{R}, \\ \partial_t z -d\Delta z =g(t,x,z), &t>0,\ (x,y)\in\Omega_R,\\ -d\partial_y z(t,x,0)=\mu w-\nu z(t,x,0), &t>0,\ x\in \mathbb{R},\\ z(t,x,R)=0, &t>0,\ x\in \mathbb{R}, \end{cases} \end{equation} where the function $g(t,x,z):=f(x,e^{Mt}z)e^{-Mt}-Mz$ is non-increasing in $z$. We observe that $(u,v)e^{-Mt}$ and $(\tilde u,\tilde v)e^{-Mt}$ are the solutions of \eqref{wz} with initial functions $(u_0,v_0)$ and $(\tilde u_0,\tilde v_0)$, respectively. 
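For completeness, let us indicate why the function $g$ introduced above is non-increasing in $z$; only the standing assumptions on $f$ are used. For $z>0$ one has
\begin{equation*}
\partial_z g(t,x,z)=f_v(x,e^{Mt}z)-M\le\frac{f(x,e^{Mt}z)}{e^{Mt}z}-M\le f_v(x,0)-M\le 0,
\end{equation*}
where the first inequality follows from the monotonicity of $v\mapsto f(x,v)/v$, the second from the KPP assumption $f(x,v)\le f_v(x,0)v$ (which also holds for $v\ge 1$ since $f(x,v)\le 0$ there), and the last one from the definition of $M$; moreover, $\partial_z g(t,x,0)=f_v(x,0)-M\le 0$.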
Define \begin{equation*} \begin{cases} \underline u(t,x):= \max\big(0,w(t,x)-\frac{\nu}{\mu}\delta(\frac{t}{T}+1)\big),~~~~~~\cr \underline v(t,x,y):=\max\big(0,z(t,x,y)-\delta(\frac{t}{T}+1)\big), \end{cases} \end{equation*} and \begin{equation*} \begin{cases} \overline u(t,x):=\min\big(\frac{\nu}{\mu+M}A, w(t,x)+\frac{\nu}{\mu}\delta(\frac{t}{T}+1)\big),\cr \overline v(t,x,y):=\min\big(A, z(t,x,y)+\delta(\frac{t}{T}+1)\big), \end{cases} \end{equation*} where $A:=\max\Big(1,\Vert u_0 \Vert_{L^{\infty}(\mathbb{R})}+\Vert v_0 \Vert_{L^\infty(\overline\Omega_R)}+\delta, \frac{\mu+M}{\nu}(\Vert u_0 \Vert_{L^{\infty}(\mathbb{R})}+\Vert v_0 \Vert_{L^\infty(\overline\Omega_R)}+\frac{\nu}{\mu}\delta)\Big)$. It can be checked that $(\underline u,\underline v)$ and $(\overline u,\overline v)$ are, respectively, a generalized sub- and a generalized supersolution of \eqref{wz}. Notice that \begin{align*} \underline u(0,x)=\max\Big(0, u_0(x)-\frac{\nu}{\mu}\delta \Big)<&u_0(x)<u_0(x)+\frac{\nu}{\mu}\delta= \overline u(0,x),~~\forall x\in\mathbb{R},\cr \underline v(0,x,y)=\max\Big( 0, v_0(x,y)-\delta\Big)<&v_0(x,y)<v_0(x,y)+\delta= \overline v(0,x,y),~~\forall (x,y)\in\overline\Omega_R. \end{align*} From \eqref{initial-1}, one infers that \begin{align*} \underline u(0,x)=\max\Big(0, u_0(x)-\frac{\nu}{\mu}\delta \Big)<&\tilde u_0(x)<u_0(x)+\frac{\nu}{\mu}\delta= \overline u(0,x),~~\forall x\in\mathbb{R},\cr \underline v(0,x,y)=\max\Big( 0, v_0(x,y)-\delta\Big)<&\tilde v_0(x,y)<v_0(x,y)+\delta= \overline v(0,x,y),~~\forall (x,y)\in\overline\Omega_R. \end{align*} By a comparison argument, it follows that \begin{equation*} (\underline u,\underline v)\le (u,v)e^{-Mt}\le (\overline u,\overline v),~~~~ (\underline u,\underline v)\le (\tilde u,\tilde v)e^{-Mt}\le (\overline u,\overline v), \end{equation*} for all $t\in[0,T]$ and $(x,y)\in\overline\Omega_R$. Thus, \begin{equation*} \sup_{[0,T]\times\mathbb{R}}|u(t,x)-\tilde u(t,x)|\le e^{MT}\sup_{[0,T]\times\mathbb{R}}|\overline u(t,x)-\underline u(t,x)|\le 2e^{MT}\frac{\nu}{\mu}\delta\sup_{[0,T]}(\frac{t}{T}+1)\le 4e^{MT}\frac{\nu}{\mu}\delta, \end{equation*} \begin{equation*} \sup_{[0,T]\times\overline\Omega_R}|v(t,x,y)-\tilde v(t,x,y)|\le e^{MT}\sup_{[0,T]\times\overline\Omega_R}|\overline v(t,x,y)-\underline v(t,x,y)|\le 2e^{MT}\delta\sup_{[0,T]}(\frac{t}{T}+1)\le 4e^{MT}\delta. \end{equation*} By choosing $\delta>0$ so small that $4e^{MT}\delta<\varep$, \eqref{sol-1} is therefore achieved. \end{proof} Next, we prove a Liouville-type result in the strip, provided that the width $R$ is sufficiently large. Namely, for all $R$ large, problem \eqref{pb in strip} admits a unique nonnegative nontrivial stationary solution $(U_R,V_R)$, which is $L$-periodic in $x$. Moreover, we show that $(U_R,V_R)$ is the global attractor for solutions of the Cauchy problem in the strip. \begin{proof}[Proof of Theorem \ref{thm2.1}] The strategy of this proof is similar in spirit to that of Theorem \ref{liouville}. We only sketch the proof of the existence and positivity property of stationary solutions, for which the construction of subsolutions is inspired by \cite[Proposition 3.4]{Tellini2016}. Let $(u,v)$ be the solution to the Cauchy problem \eqref{pb in strip} with nonnegative, bounded and continuous initial value $(u_0,v_0)\not\equiv(0,0)$. 
Set \begin{align*} (\underline{u},\underline{v}):=\begin{cases} \cos(\omega x)\Big(1, \frac{\mu\sin(\beta(R-y))}{d\beta\cos(\beta R)+\nu\sin(\beta R)}\Big) \ &\text{for}\ (x,y)\in (-\frac{\pi}{2\omega},\frac{\pi}{2\omega})\times[0,R],\\ (0,0) &\text{otherwise,} \end{cases} \end{align*} where $\omega$ and $\beta$ are parameters to be chosen later so that $(\underline{u},\underline{v})$ satisfies \begin{equation} \label{beta,w} \begin{cases} -D\underline{u}''\le \nu\underline{v}(x,0)-\mu \underline{u}, & x\in \mathbb{R}, \\ -d\Delta\underline{v} \le \big(m-\delta\big)\underline{v}, &(x,y)\in\Omega_R,\\ -d\partial_y \underline{v}(x,0)=\mu \underline{u}-\nu \underline{v}(x,0), & x\in \mathbb{R},\\ \underline{v}(x,R)=0, & x\in\mathbb{R}, \end{cases} \end{equation} where $\delta>0$ is small enough such that $0< \delta< m=\min_{[0,L]}f_v(x,0)$. A lengthy but straightforward calculation reveals, from the first two relations of \eqref{beta,w}, that $\omega$ and $\beta$ should verify \begin{equation*} \begin{cases} D\omega^2\le -\frac{\mu d\beta \cos(\beta R)}{d\beta\cos(\beta R)+\nu \sin(\beta R)},\\ d\omega^2+d\beta^2\le m-\delta. \end{cases} \end{equation*} Because of \eqref{R-condition}, $\delta>0$ can be chosen sufficiently small such that $d\big(\frac{\pi}{2R}\big)^2<m-\delta$. Then, $\beta$ can be chosen very closely to $\frac{\pi}{2R}$, say $\beta\sim\frac{\pi}{2R}$ and $\frac{\pi}{2R}<\beta< \frac{\pi}{R}$, such that \begin{equation*} \kappa:=\min\bigg\{- \frac{\mu d\beta \cos(\beta R)}{D(d\beta\cos(\beta R)+\nu \sin(\beta R))}, \frac{m-\delta}{d}-\beta^2 \bigg\}>0. \end{equation*} Therefore, $(\underline{u},\underline{v})$ satisfies \eqref{beta,w}, provided $\omega^2\le \kappa$. On the other hand, $u(t,x)>0$ and $v(t,x,y)>0$ for all $t>0$ and $(x,y)\in \mathbb{R}\times [0,R)$, and $\partial_y v(t,x,R)<0$ for all $t>0$ and $x\in\mathbb{R}$, which is a direct consequence of Proposition \ref{cp-strip} and the Hopf lemma. Hence, there is $\varep_0>0$ such that $\varep(\underline{u},\underline{v})\le (u(1,\cdot),v(1,\cdot,\cdot))$ in $\overline\Omega_R$ for all $\varep\in(0,\varep_0]$. It then follows from the same lines as in Theorem \ref{liouville} that there is a nontrivial steady state $(U_1,V_1)$ of \eqref{pb in strip} such that \begin{equation}\label{lower-1} \varep\underline{u}\le U_1\le\liminf\limits_{t\to +\infty}u(t,x),\qquad \varep\underline{v}\le V_1\le\liminf\limits_{ t\to+\infty}v(t,x,y), \end{equation} locally uniformly in $\overline{\Omega}_R$, thanks to Proposition \ref{cp--generalized sub-strip}. On the other hand, by choosing $(\overline U,\overline V)$ as in \eqref{super-constant sol} and by using the same argument as in Theorem \ref{liouville}, it comes that there is a stationary solution $(U_2, V_2)$ of \eqref{pb in strip} satisfying \begin{equation}\label{upper-1} \limsup\limits_{ t\to+\infty}u(t,x)\le U_2\le\overline U,\qquad \limsup\limits_{ t\to+\infty}v(t,x,y)\le V_2\le\overline V, \end{equation} locally uniformly in $\overline{\Omega}_R$. Therefore, the existence part is proved. Moreover, let $(U,V)$ be a nonnegative bounded stationary solution of \eqref{pb in strip}. 
From the analysis above and from the elliptic strong maximum principle, one also deduces that, for any given $\hat x\in\mathbb{R}$ and for all $(x,y)\in(\hat x-\frac{\pi}{2\omega},\hat x+\frac{\pi}{2\omega})\times[0,R)$, \begin{equation*} U(x)>\varep\cos(\omega(x-\hat x))>0,~~V(x,y)>\varep\cos(\omega(x-\hat x))\frac{\mu\sin(\beta(R-y))}{d\beta\cos(\beta R)+\nu\sin(\beta R)}>0,~~\text{for all}~\varep\in(0,\varep_0], \end{equation*} which implies $\inf_\mathbb{R}U>0$ and $\inf_{\mathbb{R}\times[0,R']}V>0$ for every $R'\in(0,R)$. Then, by repeating the uniqueness argument in the proof of Theorem \ref{liouville} and by \eqref{lower-1}--\eqref{upper-1}, the conclusion in Theorem \ref{thm2.1} follows. \end{proof} In the sequel, we show the limiting behavior of the steady state $(U_R,V_R)$ of \eqref{pb in strip} as $R$ goes to infinity, which will play a crucial role in obtaining the existence of pulsating fronts in the half-plane $\Omega$ in Section \ref{section5}. \begin{proposition} \label{prop3.9} The stationary solution $(U_R,V_R)$ of \eqref{pb in strip} satisfies the following properties: \begin{itemize} \item[(i)] $0<U_R<{\nu}/{\mu}$ in $\mathbb{R}$, $0< V_R<1$ in $\mathbb{R}\times [0,R)$; \item[(ii)] the limiting property holds: \begin{equation} \label{truncated to half-plane} (U_R(x),V_R(x,y))\to ({\nu}/{\mu}, 1)\quad \text{as}\ R\to+\infty \end{equation} uniformly in $x\in\mathbb{R}$ and locally uniformly in $y\in[0,+\infty)$. \end{itemize} \end{proposition} \begin{proof} (i) From the proof of Theorem \ref{thm2.1}, it is seen that $U_R>0$ in $\mathbb{R}$ and $V_R>0$ in $\mathbb{R}\times[0,R)$. Notice also that $(\nu/\mu,1)$ is obviously a strict stationary supersolution of \eqref{pb in strip}. Let $(\overline u, \overline v)$ be the unique bounded classical solution of \eqref{pb in strip} with initial condition $(\nu/\mu,1)$. It follows from Proposition \ref{cp-strip} that $(\overline u,\overline v)$ is decreasing in $t$. Since $(\overline u(t,\cdot),\overline v(t,\cdot,\cdot))$ converges to $(U_R,V_R)$ as $t\to+\infty$ locally uniformly in $\overline\Omega_R$ by Theorem \ref{thm2.1}, one has $\overline u(t,x)>U_R(x)$ and $\overline v(t,x,y)> V_R(x,y)$ for all $t\ge 0$ and $(x,y)\in\mathbb{R}\times[0,R)$. Therefore, $U_R<\nu/\mu$ in $\mathbb{R}$ and $V_R<1$ in $\mathbb{R}\times[0,R)$. Statement (i) is then proved. (ii) Now, let us turn to the limiting property. First, we claim that $(U_R,V_R)$ is increasing in $R$. To prove this, we fix $R_1<R_2$. Set $\Omega_i:=\Omega_{R_i}$ and let $(U_i, V_i):=(U_{R_i},V_{R_i})$ ($i=1,2$) denote the unique nontrivial stationary solutions of \eqref{pb in strip} in $\overline\Omega_i$, respectively. One can prove that $U_1<U_2$ in $\mathbb{R}$ and $V_1<V_2$ in $\mathbb{R}\times[0,R_1)$, by noticing that $(U_2,V_2)$ is a strict stationary supersolution of \eqref{pb in strip} in $\Omega_1$ and by a similar argument as in (i). Our claim is thereby proved. Due to the boundedness of $(U_R,V_R)$ in (i), it follows from the monotone convergence theorem and standard elliptic estimates that $(U_R,V_R)$ converges as $R\to+\infty$ locally uniformly in $\overline\Omega$ to a classical solution $(U, V)$ of the following stationary problem: \begin{equation*} \begin{cases} -D\partial_{xx}U=\nu V(x,0)-\mu U, & x\in \mathbb{R}, \\ -d\Delta V =f(x,V), &(x,y)\in\Omega,\\ -d\partial_y V(x,0)=\mu U(x)-\nu V(x,0), & x\in \mathbb{R}. \end{cases} \end{equation*} Owing to Theorem \ref{liouville}, it follows that $(U, V)=({\nu}/{\mu}, 1)$.
Thus, \eqref{truncated to half-plane} is proved. \end{proof} \section{Propagation properties in the strip: Proofs of Theorems \ref{thm-asp-strip} and \ref{thm-PTF-in strip}} \label{section4} This section is dedicated to the existence of the asymptotic spreading speed $c^*_R$ and its coincidence with the minimal wave speed of pulsating fronts for the truncated problem \eqref{pb in strip} along the road. In particular, we will give a variational formula for $c^*_R$ by using the principal eigenvalue of a certain linear eigenvalue problem. As will be shown below, the discussion combines the dynamical system approach for monostable evolution problems developed in \cite{LZ2010} with PDE methods. Let $D:=[0,L]\times[0,R]$ and define the Banach space $$X=\{(u,v)\in C([0,L])\times C(D): v(\cdot,R)=0~\text{in}~[0,L]\}$$ with the norm $\Vert(u,v)\Vert_X=\Vert u\Vert_{C([0,L])}+\Vert v\Vert_{C(D)}$. Equipped with the cone $X^+:=\{(u,v)\in X: u\ge 0~\text{in}~[0,L],~ v\ge 0~\text{in}~D\}$ of componentwise nonnegative pairs, $(X,X^+)$ is an ordered Banach space, and the cone $X^+$ has empty interior. Let $Y$ be a closed subspace of $X$ given by $$Y=\{(u,v)\in C^1([0,L])\times C^1(D): v(\cdot,R)=0~\text{in}~[0,L]\}$$ with the norm $\Vert (u,v)\Vert_Y =\Vert u\Vert_{C^1([0,L])}+\Vert v\Vert_{C^1(D)}$. It is seen that the inclusion map $Y\hookrightarrow X$ is a continuous linear map. Moreover, the cone $Y^+:=Y\cap X^+$ has nonempty interior $\text{Int}(Y^+)$, see e.g. \cite[Corollary 4.2]{Smith1995}, given by \begin{align*} \text{Int}(Y^+)=\{(u,v)\in Y^+: (u,v)>(0,0)~ \text{in}~ [0,L]\times[0,R),~ \partial_y v(\cdot,R)<0~\text{in}~[0,L]\}. \end{align*} We write $(u_1,v_1)\ll (u_2,v_2)$ if $(u_1,v_1),(u_2,v_2)\in Y$ and $(u_2,v_2)-(u_1,v_1)\in\text{Int}(Y^+)$. Set $\mathcal{H}:=L\mathbb{Z}$. We use $\mathcal{C}$ to denote the set of all bounded and continuous function pairs from $\mathcal{H}$ to $X$, and $\mathcal{D}$ to denote the set of all bounded and continuous function pairs from $\mathcal{H}$ to $Y$. Moreover, any element in $X$ ($Y$) can be regarded as a constant function in $\mathcal{C}$ ($\mathcal{D}$). For any $u,v\in\mathcal{C}$, we write $u\ge v$ provided $u(x)\ge v(x)$ for all $x\in\mathcal{H}$, and $u>v$ provided $u\ge v$ but $u\neq v$. For $u,v\in\mathcal{D}$, we write $u\gg v$ provided $u(x)\gg v(x)$ for all $x\in\mathcal{H}$. We equip $\mathcal{C}$ $(\mathcal{D})$ with the compact open topology, i.e., $(u_n,v_n)\to (u,v)$ in $\mathcal{C}$ ($\mathcal{D}$) means that $u_n(x)\to u(x)$ uniformly for $x$ in every compact interval of $\mathbb{R}$ and $v_n(x,y)\to v(x,y)$ uniformly for $(x,y)$ in every compact subset of $\overline\Omega_R$. Define \begin{align*} &\mathbb{C}_0:=\{(u,v)\in C(\mathbb{R})\times C(\overline\Omega_R):~ v(\cdot,R)=0~ \text{in}~\mathbb{R}\},\cr &\mathbb{C}^1_0:=\{(u,v)\in C^1(\mathbb{R})\times C^1(\overline\Omega_R): v(\cdot,R)=0~\text{in}~\mathbb{R}\}. \end{align*} Any continuous and bounded function pair $(u,v)$ in $\mathbb{C}_0$ can be regarded as a function pair $(u(z),v(z))$ in the space $\mathcal{C}$ in the sense that $\big(u(z)(x),v(z)(x,y)\big):=\big(u(x+z),v(x+z,y)\big)$ for all $z\in\mathcal{H}$ and $(x,y)\in D$. In this sense, $(U_R,V_R)\in\mathcal{C}$ and the set $$\mathcal{K}:=\big\{(u,v)\in C(\mathbb{R})\times C(\overline\Omega_R): (0,0)\le (u,v)\le (U_R,V_R)~\text{in}~\overline\Omega_R\big\}$$ is a closed subset of $\mathcal{C}_{(U_R,V_R)}:=\{(u,v)\in\mathcal{C}: (0,0)\le (u,v)\le (U_R,V_R)\}$ and satisfies (K1)--(K5) in \cite{LZ2010}.
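Let us also record, as a consequence of the results of Section \ref{section3-2} together with the Hopf lemma applied to the stationary problem (we state it here only for the reader's convenience), that the stationary solution $(U_R,V_R)$, regarded as an element of $\mathcal{D}$ through the above identification, satisfies $(U_R,V_R)\gg(0,0)$: by Theorem \ref{thm2.1} and Proposition \ref{prop3.9},
\begin{equation*}
U_R>0~\text{in}~\mathbb{R},\qquad V_R>0~\text{in}~\mathbb{R}\times[0,R),\qquad \partial_y V_R(\cdot,R)<0~\text{in}~\mathbb{R},
\end{equation*}
so that every translate $(U_R(z),V_R(z))$, $z\in\mathcal{H}$, belongs to $\text{Int}(Y^+)$.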
Define a family of operators $\{Q_t\}_{t\ge 0}$ on $\mathcal{K}$ by \begin{equation*} Q_t[(u_0,v_0)]:=(u(t,\cdot;u_0), v(t,\cdot,\cdot;v_0))~~\text{for}~ (u_0,v_0)\in \mathcal{K}, \end{equation*} where $(u(t,\cdot;u_0), v(t,\cdot,\cdot;v_0))$ is the solution of system \eqref{pb in strip} with initial datum $(u_0,v_0)\in \mathcal{K}$. It is easily seen that $Q_0[(u_0,v_0)]=(u_0,v_0)$ for all $(u_0,v_0)\in\mathcal{K}$, and $Q_{t}\circ Q_{s}[(u_0,v_0)]=Q_{t+s}[(u_0,v_0)]$ for any $t,s\ge 0$ and $(u_0,v_0)\in\mathcal{K}$. For any given $(u_0,v_0)\in\mathcal{K}$, it can be deduced from Proposition \ref{wellposedness-strip} that $Q_t[(u_0,v_0)]$ is continuous in $t\in[0,+\infty)$ with respect to the compact open topology. Assume that $(u_k,v_k)$ and $(u,v)$ are the unique solutions to \eqref{pb in strip} with initial data $(u_{0k},v_{0k})$ and $(u_0,v_0)$ in $\mathcal{K}$, respectively. Suppose that $(u_{0k},v_{0k})\to(u_0,v_0)$ as $k\to+\infty$ locally uniformly in $\overline\Omega_R$, we claim that $(u_{k},v_{k})\to(u,v)$ as $k\to+\infty$ in $C^{1,2}_{loc}([0,+\infty)\times\overline\Omega_R)$, which will imply that $Q_t[(u_0,v_0)]$ is continuous in $(u_0,v_0)\in\mathcal{K}$ uniformly in $t\in[0,T]$ for any $T>0$. To prove this, we define a smooth cut-off function $\chi^n: \mathbb{R}\mapsto[0,1]$ such that $\chi^n(\cdot)=1$ in $[-n+1,n-1]$ and $\chi^n(\cdot)=0$ in $\mathbb{R}\backslash[-n,n]$. Then, $(\chi^n u_{0k},\chi^n v_{0k})\to (\chi^n u_0,\chi^n v_0)$ as $k\to+\infty$ uniformly in $[-n,n]$. Let $(u^n_k,v^n_k)$ and $(u^n,v^n)$ be the solutions to \eqref{pb in strip-cut off} in $D_1:=[-n,n]\times[0,R]$ with initial data $(\chi^n u_{0k},\chi^n v_{0k})$ and $(\chi^n u_0,\chi^n v_0)$, respectively. One can choose two positive bounded and monotone function sequences $(\underline u^n_{0k},\underline v^n_{0k})$ and $(\bar u^n_{0k},\bar v^n_{0k})$ in the space $\left\{(u,v)\in C^\infty([-n,n])\times C^\infty(D_1): u(\pm n)=0, v(\cdot,R)=0~ \text{in}~ [-n,n], v(\pm n,\cdot)=0~\text{in}~[0,R]\right\}$, such that \begin{align*} (0,0)\le (\underline u^n_{0k},\underline v^n_{0k})&\le (\chi^n u_{0k},\chi^n v_{0k})\le (\bar u^n_{0k},\bar v^n_{0k}),\\ (\underline u^n_{0k},\underline v^n_{0k})\nearrow (\chi^n u_0,\chi^n v_0),&~~(\bar u^n_{0k},\bar v^n_{0k})\searrow (\chi^n u_0,\chi^n v_0)~~\text{uniformly in}~D_1~\text{as}~k\to+\infty. \end{align*} By a comparison argument, it follows that \begin{align*} (\underline u^n_k,\underline v^n_k)\le (\underline u^n_{k+1},\underline v^n_{k+1})\le (u^n_{k+1},v^n_{k+1})\le (\bar u^n_{k+1},\bar v^n_{k+1})\le (\bar u^n_k,\bar v^n_k)~\text{for all}~t>0~\text{and}~(x,y)\in D_1, \end{align*} where $(\underline u^n_k,\underline v^n_k)$ and $(\bar u^n_k,\bar v^n_k)$ are the classical solutions to \eqref{pb in strip-cut off} with initial data $(\underline u^n_{0k},\underline v^n_{0k})$ and $(\bar u^n_{0k},\bar v^n_{0k})$, respectively. From standard parabolic estimates, the functions $(\underline u^n_k,\underline v^n_k)$ and $(\bar u^n_k,\bar v^n_k)$ converge to $(\underline u^n,\underline v^n)$ and $(\bar u^n,\bar v^n)$ as $k\to+\infty$ in $C^{1+\alpha/2,2+\alpha}_{loc}([0,+\infty)\times D_1)$, respectively. Moreover, $(\underline u^n,\underline v^n)$ and $(\bar u^n,\bar v^n)$ are classical solutions to \eqref{pb in strip-cut off}. 
Since \begin{align*} \lim_{ t\to 0,k\to +\infty }(\underline u^n_k (t,\cdot),\underline v^n_k(t,\cdot,\cdot)) = \lim_{ t\to 0,k\to +\infty } (\bar u^n_k (t,\cdot),\bar v^n_k (t,\cdot,\cdot))=(\chi^n u_0,\chi^n v_0), \end{align*} uniformly in $(x,y)\in D_1$, we infer that \begin{align*} \lim_{ t\to 0,k\to +\infty }(u^n_{k}(t,\cdot),v^n_{k}(t,\cdot,\cdot))=(\chi^n u_0,\chi^n v_0)= \lim_{ t\to 0} (u^n(t,\cdot),v^n(t,\cdot,\cdot)), \end{align*} uniformly in $(x,y)\in D_1$. By the uniqueness of the solutions to \eqref{pb in strip-cut off}, it follows that $$(\underline u^n,\underline v^n)=(\bar u^n,\bar v^n)=(u^n,v^n)~\text{for}~t>0~\text{and}~(x,y)\in D_1.$$ Hence, $(u^n_k,v^n_k)\to (u^n,v^n)$ as $k\to+\infty$ in $C^{1+\alpha/2,2+\alpha}([0,T]\times D_1)$ for any $T>0$. By the approximation argument and parabolic estimates, $(u^n_k,v^n_k)$ and $(u^n,v^n)$ converge, respectively, to $(u_k,v_k)$ and $(u,v)$ as $n\to+\infty$ (at least) in $C^{1,2}_{loc}([0,+\infty)\times\overline\Omega_R)$. Consequently, $(u_k,v_k)\to (u,v)$ as $k\to+\infty$ in $C^{1,2}_{loc}([0,+\infty)\times\overline\Omega_R)$. From the observation that for any $t,s\ge 0$ and for $(u_0,v_0), (\tilde u_0,\tilde v_0)\in\mathcal{K}$, \begin{equation*} \big|Q_t[(u_0,v_0)]-Q_s[(\tilde u_0,\tilde v_0)]\big|\le \big|Q_t[(u_0,v_0)]-Q_t[(\tilde u_0,\tilde v_0)]\big|+\big|Q_t[(\tilde u_0,\tilde v_0)]-Q_s[(\tilde u_0,\tilde v_0)]\big|, \end{equation*} it follows that $Q_t[(u_0,v_0)]$ is continuous in $(t,(u_0,v_0))\in[0,T]\times \mathcal{K}$. Note that any $t>0$ can be expressed as $t=mT+t'$ for some $m\in\mathbb{Z}_+$ and $t'\in[0,T)$. Hence, $Q_t[(u_0,v_0)]=(Q_T)^m Q_{t'}[(u_0,v_0)]$. Thus, $Q_t[(u_0,v_0)]$ is continuous in $(t,(u_0,v_0))\in[0,+\infty)\times \mathcal{K}$. Therefore, it follows that $\{Q_t\}_{t\ge 0}$ is a continuous-time semiflow. We claim that $\{Q_t\}_{t\ge 0}$ is subhomogeneous on $\mathcal{K}$ in the sense that $Q_t[\kappa(u_0,v_0)]\ge \kappa Q_t[(u_0,v_0)]$ for all $\kappa\in[0,1]$ and for all $(u_0,v_0)\in\mathcal{K}$. The cases $\kappa=0$ and $\kappa=1$ are trivial. Suppose now that $\kappa\in(0,1)$. Define \begin{equation*} (\overline u,\overline v)=(u(t,\cdot;\kappa u_0), v(t,\cdot,\cdot;\kappa v_0)),~~~(\underline u,\underline v)=\kappa (u(t,\cdot;u_0),v(t,\cdot,\cdot;v_0)). \end{equation*} From Proposition \ref{cp-strip}, it follows that $(\overline u,\overline v)$ and $(\underline u,\underline v)$ belong to $\mathcal{K}$.
Moreover, $(\overline u,\overline v)$ and $(\underline u,\underline v)$ satisfy, respectively, \begin{equation*} \begin{cases} \partial_t\overline u-D\partial_{xx}\overline{u}= \nu\overline{v}(t,x,0)-\mu \overline{u}, &t>0, x\in \mathbb{R}, \\ \partial_t \overline v-d\Delta\overline{v} = f(x,\overline{v}), &t>0, (x,y)\in\Omega_R,\\ -d\partial_y \overline{v}(t,x,0)=\mu \overline{u}-\nu \overline{v}(t,x,0), &t>0, x\in \mathbb{R},\\ \overline{v}(t,x,R)=0, &t>0, x\in\mathbb{R},\\ (\overline u_0,\overline v_0)=\kappa(u_0,v_0), \end{cases} \end{equation*} and \begin{equation*} \begin{cases} \partial_t\underline u-D\partial_{xx}\underline{u}= \nu\underline{v}(t,x,0)-\mu \underline{u}, &t>0, x\in \mathbb{R}, \\ \partial_t \underline v-d\Delta\underline{v} < f(x,\underline{v}), &t>0, (x,y)\in\Omega_R,\\ -d\partial_y \underline{v}(t,x,0)=\mu \underline{u}-\nu \underline{v}(t,x,0), &t>0, x\in \mathbb{R},\\ \underline{v}(t,x,R)=0, &t>0, x\in\mathbb{R},\\ (\underline u_0,\underline v_0)=\kappa(u_0,v_0), \end{cases} \end{equation*} by using the assumption that $f(x,v)/v$ is decreasing in $v>0$ for all $x\in\mathbb{R}$. Proposition \ref{cp-strip} then yields that $\overline u(t,x)\ge \underline u(t,x)$ and $\overline v(t,x,y)\ge \underline v(t,x,y)$ for all $t\ge 0$ and $(x,y)\in\overline\Omega_R$. This proves our claim. By classical parabolic theory, together with Propositions \ref{cp-strip}--\ref{wellposedness-strip} and Theorem \ref{thm2.1}, for each $t>0$, the solution map $Q_t:\mathcal{K}\to\mathcal{K}$ satisfies the following properties: \begin{itemize} \item[(A1)] $Q_t[T_a[(u_0,v_0)]]=T_a[Q_t[(u_0,v_0)]]$ for all $(u_0,v_0)\in\mathcal{K}$ and $a\in \mathcal{H}$, where $T_a$ is a shift operator defined by $T_a[(u(t,x),v(t,x,y))]=(u(t,x-a),v(t,x-a,y))$. \item[(A2)] $Q_t[\mathcal{K}]$ is uniformly bounded and $Q_t:\mathcal{K}\to \mathcal{D}$ is continuous with respect to the compact open topology, due to the analysis above. \item[(A3)] $Q_t:\mathcal{K}\to \mathcal{D}$ is compact with respect to the compact open topology, which follows from Proposition \ref{wellposedness-strip}. \item[(A4)] $Q_t: \mathcal{K}\to \mathcal{K}$ is monotone (order-preserving) in the sense that if $(u_{01},v_{01})$ and $(u_{02},v_{02})$ belong to $ \mathcal{K}$ satisfying $u_{01}\le u_{02}$ in $\mathbb{R}$ and $v_{01}\le v_{02}$ in $\overline{\Omega}_R$, then $u(t,x;u_{01})\le u(t,x;u_{02})$ and $ v(t,x,y;v_{01})\le v(t,x,y;v_{02})$ for all $t>0$ and $(x,y)\in\overline{\Omega}_R$. This follows from Proposition \ref{cp-strip}. \item[(A5)] $Q_t$ admits exactly two fixed points $(0,0)$ and $(U_R,V_R)$ in $Y$. Let $(u(t,x;u_0),v(t,x,y;v_0))$ be the solution of \eqref{pb in strip} with $L$-periodic (in $x$) initial value $(u_0,v_0)\in\mathcal{K}\cap Y$ satisfying $(0,0)\ll(u_0,v_0)\le (U_R,V_R)$, it comes that \begin{equation} \label{long time -periodic initial} \lim_{t\to +\infty} (u(t,x;u_0),v(t,x,y;v_0))= (U_R(x),V_R(x,y))~~\text{uniformly in}~(x,y)\in\overline\Omega_R. \end{equation} Indeed, Theorem \ref{thm2.1} implies that $(U_R,V_R)$ is the unique $L$-periodic positive steady state of \eqref{pb in strip}. Moreover, \eqref{long time -periodic initial} can be achieved by a similar argument to that of Theorem \ref{thm2.1}. \end{itemize} Therefore, $Q_t$ is a subhomogeneous semiflow on $\mathcal{K}$ and satisfies hypotheses (A1)--(A5) in \cite{LZ2010} for any $t>0$. Moreover, it is straightforward to check that assumption (A6) in \cite{LZ2010} is also satisfied. In particular, $Q_1$ satisfies (A1)--(A6) in \cite{LZ2010}. 
By Theorem 3.1 and Proposition 3.2 in \cite{LZ2010}, it then follows that the solution map $Q_1$ admits rightward and leftward spreading speeds $c^*_{R,\pm}$. Furthermore, Theorems 4.1--4.2 in \cite{LZ2010} imply that $Q_1$ has a rightward periodic traveling wave $(\phi_R(x-c,x),\psi_R(x-c,x,y))$ connecting $(U_R,V_R)$ and $(0,0)$ such that $(\phi_R(s,x),\psi_R(s,x,y))$ is non-increasing in $s$ if and only if $c\ge c^*_{R,+}$. Similar results hold for leftward periodic traveling waves with minimal wave speed $c^*_{R,-}$. To obtain the variational formulas for $c^*_{R,\pm}$, we use the linear-operator approach. Let us consider the linearization of the truncated problem \eqref{pb in strip} at its zero solution: \begin{equation} \label{linear pb in strip} \begin{cases} \partial_t u -D\partial_{xx}u=\nu v(t,x,0)-\mu u, &t>0,\ x\in \mathbb{R}, \\ \partial_t v -d\Delta v =f_v(x,0)v, &t>0,\ (x,y)\in\Omega_R,\\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0), &t>0,\ x\in \mathbb{R},\\ v(t,x,R)=0, &t>0,\ x\in \mathbb{R}. \end{cases} \end{equation} Let $\{L(t)\}_{t\ge 0}$ be the linear solution semigroup generated by \eqref{linear pb in strip}, that is, $L(t)[(u_0, v_0)]=(u_t(u_0),v_t(v_0))$, where $(u_t(u_0),v_t(v_0)):=(u(t,\cdot;u_0),v(t,\cdot,\cdot;v_0))$ is the solution of \eqref{linear pb in strip} with initial value $(u_0, v_0)\in \mathcal{D}$. For any given $\alpha\in\mathbb{R}$, substituting $(u(t,x),v(t,x,y))=e^{-\alpha x}(\widetilde{u}(t,x),\widetilde{v}(t,x,y))$ in \eqref{linear pb in strip} yields \begin{equation} \label{modified linear pb in strip} \begin{cases} \partial_t \widetilde u -D\partial_{xx}\widetilde u+2D\alpha\partial_x \widetilde{u}-D\alpha^2\widetilde u=\nu\widetilde v(t,x,0)-\mu\widetilde u, &t>0,\ x\in \mathbb{R}, \\ \partial_t\widetilde v -d\Delta\widetilde v +2d\alpha\partial_x \widetilde{v}-d\alpha^2\widetilde{v} =f_v(x,0)\widetilde v, &t>0,\ (x,y)\in\Omega_R,\\ -d\partial_y\widetilde v(t,x,0)=\mu \widetilde u-\nu\widetilde v(t,x,0), &t>0,\ x\in \mathbb{R},\\ \widetilde v(t,x,R)=0, &t>0,\ x\in \mathbb{R}. \end{cases} \end{equation} Let $\{L_\alpha (t)\}_{t\ge 0}$ be the linear solution semigroup generated by \eqref{modified linear pb in strip}; then one has $L_\alpha(t)[(\widetilde u_0, \widetilde v_0)]=(\widetilde{u}_t(\widetilde u_0),\widetilde{v}_t(\widetilde v_0))$, where $(\widetilde{u}_t(\widetilde u_0),\widetilde{v}_t(\widetilde v_0)):=(\widetilde u(t,\cdot;\widetilde u_0),\widetilde v(t,\cdot,\cdot;\widetilde v_0))$ is the solution of \eqref{modified linear pb in strip} with initial value $(\widetilde u_0, \widetilde v_0)=(u_0,v_0)e^{\alpha x}$. It then follows that, for any $(\widetilde u_0, \widetilde v_0)\in\mathcal{D}$, \begin{equation*} L(t)[e^{-\alpha x}(\widetilde u_0, \widetilde v_0)]=e^{-\alpha x}L_\alpha (t)[(\widetilde u_0, \widetilde v_0)]~~~~\text{for}~ t\ge 0~\text{and}~ (x,y)\in\overline{\Omega}_R. \end{equation*} Substituting $(\tilde u(t,x), \tilde v(t,x,y))=e^{-\sigma t}(p(x),q(x,y))$, with $p,q$ periodic (in $x$), into \eqref{modified linear pb in strip} leads to the following periodic eigenvalue problem: \begin{align} \label{5.15/5.18'} \begin{cases} \mathcal{L}_{1,\alpha}(p, q):=-Dp''+2D\alpha p'+(-D\alpha^2+\mu) p-\nu q(x,0)=\sigma p, &x\in\mathbb{R},\\ \mathcal{L}_{2,\alpha}( p, q):=-d\Delta q+2d\alpha\partial_x q-(d\alpha^2+f_v(x,0)) q=\sigma q, &(x,y)\in\Omega_R,\\ \mathcal{B}( p, q):=-d\partial_y q(x,0)+\nu q(x,0)-\mu p=0, \ &x\in\mathbb{R},\\ q(x,R)=0, \ &x\in\mathbb{R},\\ p, q \ \text{are}~ L\text{-periodic with respect to} \ x.
\end{cases} \end{align} Recall that $M:=\max_{[0,L]}f_v(x,0)$ and $m:=\min_{[0,L]}f_v(x,0)$. We have: \begin{proposition} \label{principal eigenvalue in strip} Set $\zeta(x):=f_v(x,0)$. For all $\alpha\in\mathbb{R}$, the periodic eigenvalue problem \eqref{5.15/5.18'} admits the principal eigenvalue $\lambda_{R,\zeta}(\alpha)\in\mathbb{R}$ with a unique (up to multiplication by some constant) positive periodic (in $x$) eigenfunction pair $(p,q)$ belonging to $ C^{3}(\mathbb{R})\times C^{3}(\overline\Omega_R)$. Moreover, $\lambda_{R,\zeta}(\alpha)$ has the following properties: \begin{enumerate}[(i)] \item For all $\alpha\in\mathbb{R}$, the principal eigenvalue $\lambda_{R,\zeta}(\alpha)$ is equal to \begin{equation} \label{5.16'} \lambda_{R,\zeta}(\alpha)=\max_{( p, q)\in \varSigma}\min\bigg\{\inf_{\mathbb{R}}\frac{\mathcal{L}_{1,\alpha}( p, q)}{ p}, \inf_{\mathbb{R}\times[0,R)}\frac{\mathcal{L}_{2,\alpha}( p, q)}{ q}\bigg\}, \end{equation} where \begin{align*} \varSigma:=&\big\{( p, q)\in C^2(\mathbb{R})\times C^{2}(\overline\Omega_R):~ p>0~\text{in}~\mathbb{R}, q>0~\text{in} \ \mathbb{R}\times[0,R),~~~~~~~~~~~~~\\ &~p,q~\text{are}~L\text{-periodic in}~x,~\mathcal{B}(p, q)= 0 \ \text{in} \ \mathbb{R}, ~ \partial_y q(\cdot,R)<0= q(\cdot,R)~\text{in}~\mathbb{R}\big\}. \end{align*} \item For fixed $R$ and for all $\alpha\in\mathbb{R}$, $\zeta\mapsto\lambda_{R,\zeta}(\alpha)$ is non-increasing in the sense that, if $\zeta_1(x)\le \zeta_2(x)$ for all $x\in\mathbb{R}$, then $\lambda_{R,\zeta_1}(\alpha)\ge\lambda_{R,\zeta_2}(\alpha)$. Moreover, $\lambda_{R,\zeta}(\alpha)$ is continuous with respect to $\zeta$ in the sense that, if $\zeta_n\to \zeta$, then $\lambda_{R,\zeta_n}(\alpha)\to\lambda_{R,\zeta}(\alpha)$. \item For all $\alpha\in\mathbb{R}$, $ \lambda_{R,\zeta}(\alpha)$ is decreasing with respect to $R$. \item For fixed $R$, $\alpha\mapsto\lambda_{R,\zeta}(\alpha)$ is concave in $\mathbb{R}$ and satisfies \begin{align} \label{bound'} \max\Big\{D\alpha^2-\mu,\ d\alpha^2+m-d\frac{\pi^2}{R^2}\Big\}<-\lambda_{R,\zeta}(\alpha)<\max\Big\{D\alpha^2+\nu-\mu+\frac{\mu\nu}{d},\ d(\alpha^2+1)+M\Big\}. \end{align} \end{enumerate} \end{proposition} \begin{proof}[Proof of Proposition \ref{principal eigenvalue in strip}] The proof is divided into six steps. \textit{Step 1. Solving the eigenvalue problem \eqref{5.15/5.18'}.} Set $\Lambda_\zeta(\alpha):=\max\big\{D\alpha^2+\nu-\mu+{\mu\nu}/{d}, d(\alpha^2+1)+M\big\}$. We introduce a Banach space $\mathcal{F}$ of $L$-periodic (in $x$) function pairs $(u,v)$ belonging to $C^{1}(\mathbb{R})\times C^{1}(\overline\Omega_R)$ such that $v(\cdot,R)=0$ in $\mathbb{R}$, equipped with $\Vert(u,v)\Vert_{\mathcal{F}}=\Vert u\Vert_{C^1([0,L])}+\Vert v\Vert_{C^1([0,L]\times[0,R])}$. For any $(g_1, g_2)\in \mathcal{F}$ and $\Lambda\ge \Lambda_{\zeta}(\alpha)$, let us consider the modified problem: \begin{align} \label{5.17'} \begin{cases} \mathcal{L}_{1,\alpha}( p, q)+\Lambda p=g_1, \ &x\in\mathbb{R},\\ \mathcal{L}_{2,\alpha}( p, q)+\Lambda q=g_2, \ &(x,y)\in\Omega_R,\\ \mathcal{B}( p, q)=0, \ \quad &x\in\mathbb{R},\\ q(x,R)=0, \ \quad &x\in\mathbb{R},\\ p, q \ \text{are} \ L\text{-periodic with respect to} \ x. \end{cases} \end{align} First, we construct ordered super- and subsolutions for problem \eqref{5.17'}. Set $(\overline p,\overline q)=K(1,1+\frac{\mu}{d}e^{-y})$.
Choosing $K>0$ large enough (depending only on $\Vert g_1\Vert_{L^\infty(\mathbb{R})}$ and $\Vert g_2\Vert_{L^\infty(\overline\Omega_R)}$ if $g_1$, $g_2$ are positive), it follows that $(\overline p,\overline q)$ is indeed a strict supersolution of \eqref{5.17'}. By linearity of \eqref{5.17'}, up to increasing $K$ (depending only on $\Vert g_1\Vert_{L^\infty(\mathbb{R})}$ and $\Vert g_2\Vert_{L^\infty(\overline\Omega_R)}$ if $g_1$, $g_2$ are negative), $(\underline{ p},\underline{ q}):=-(\overline{ p}, \overline{ q})$ is a negative strict subsolution of \eqref{5.17'}. By monotone iteration method, it is known that the associated evolution problem of \eqref{5.17'} with initial datum $(\overline p,\overline q)$ is uniquely solvable and its solution $(\overline u,\overline v)$ is decreasing in time and is bounded from below by $(\underline p,\underline q)$ and from above by $(\overline p,\overline q)$, respectively. From the monotone convergence theorem as well as elliptic regularity theory up to the boundary, it follows that $(\overline u,\overline v)$ converges as $t\to+\infty$ locally uniformly in $\overline\Omega_R$ to a classical periodic (in $x$) solution $(p,q)\in C^{3}(\mathbb{R})\times C^{3}(\overline\Omega_R)$ of problem \eqref{5.17'}. To prove uniqueness of the solution to \eqref{5.17'}, we first claim that $g_1\ge 0$ in $\mathbb{R}$, $g_2\ge 0$ in $\overline\Omega_R$ implies that $ p\ge 0$ in $\mathbb{R}$, $q\ge 0$ in $\mathbb{R}\times[0,R)$. Indeed, for any fixed nonnegative function pair $(g_1,g_2)\in \mathcal{F}$, let $( p, q)$ be the unique solution to \eqref{5.17'}. One can easily check that, for any $K>0$, $(\underline{ p},\underline{ q})$ defined as above is a strict subsolution of \eqref{5.17'}. Assume that $p$ or $ q$ attains a negative value somewhere in their respective domains. Define \begin{equation*} \theta^*:=\min\big\{\theta>0:~ ( p, q)\ge\theta(\underline{ p},\underline{ q}) \ \text{in} \ \overline\Omega_R\big\}. \end{equation*} Then, $\theta^*\in(0,+\infty)$. The function pair $( p-\theta^*\underline{ p}, q-\theta^*\underline{ q})$ is nonnegative and at least one component attains zero somewhere in $\mathbb{R}\times[0,R)$ by noticing $(q-\theta^*\underline q)(\cdot,R)>0$ in $\mathbb{R}$. Set $(w,z):=( p-\theta^*\underline{ p}, q-\theta^*\underline{ q})$, then it satisfies \begin{align} \label{4.8} \begin{cases} -Dw''+2D\alpha w'+(\Lambda-D\alpha^2+\mu) w-\nu z(x,0)\ge 0, &x\in\mathbb{R},\\ -d\Delta z+2d\alpha\partial_x z+(\Lambda-d\alpha^2-\zeta(x)) z> 0, &(x,y)\in\Omega_R,\\ -d\partial_y z(x,0)+\nu z(x,0)-\mu w > 0, \ &x\in\mathbb{R},\\ z(x,R)>0, \ &x\in\mathbb{R},\\ w, z \ \text{are}~ L\text{-periodic with respect to} \ x. \end{cases} \end{align} Assume first that there is $(x_0,y_0)\in\mathbb{R}\times[0,R)$ such that $z(x_0,y_0)=0$. There are two subcases. Suppose that $(x_0,y_0)\in\Omega_R$, then the strong maximum principle implies that $z\equiv 0$ in $\Omega_R$. This contradicts the strict inequality of $z$ in \eqref{4.8}, whence $z>0$ in $\Omega_R$. Suppose now that $y_0=0$ and $z(x_0,0)=0$, it follows that $\partial_y z(x_0,0)>0$. One then deduces from $-d\partial_y z(x_0,0)+\nu z(x_0,0)-\mu w(x_0)>0$ that $w(x_0)<-(d/\mu)\partial_y z(x_0,0)<0$, which is impossible since $w\ge 0$ in $\mathbb{R}$. Therefore, $z>0$ in $\overline\Omega_R$. It is seen from the first inequality of \eqref{4.8} that \begin{equation} \label{4.9} -Dw''+2D\alpha w'+(\Lambda-D\alpha^2+\mu) w\ge \nu z(\cdot,0)>0 ~~\text{in}~\mathbb{R}. 
\end{equation} Finally, assume that there is $x_0\in\mathbb{R}$ such that $w(x_0)=0$, then the strong maximum principle implies that $w\equiv 0$ in $\mathbb{R}$. This contradicts the strict inequality in \eqref{4.9}. Consequently, $ p\ge 0$ on $\mathbb{R}$ and $ q\ge 0$ in $\overline \Omega_R$. If we further assume that $g_1\not\equiv 0$ in $\mathbb{R}$ or $g_2\not\equiv 0$ in $\mathbb{R}\times[0,R)$, then $p>0$ in $\mathbb{R}$ and $q>0$ in $\mathbb{R}\times[0,R)$. This can be proved by the strong maximum principle and by a similar argument as above. To prove uniqueness, we assume that $( p_1, q_1)$ and $( p_2, q_2)$ are two distinct solutions of \eqref{5.17'}, then $( p_1- p_2, q_1- q_2)$ satisfies \eqref{5.17'} with $g_1=0$ and $g_2=0$. Using the result derived from above, we conclude that $ p_1\equiv p_2$ in $\mathbb{R}$, $ q_1\equiv q_2$ in $\overline\Omega_R$. According to \eqref{5.17'}, one defines an operator $T: \mathcal{F}\to \mathcal{F}$, $ (g_1,g_2)\mapsto ( p, q)=T(g_1,g_2)$. Obviously, the mapping $T$ is linear. Moreover, we notice that the solution $( p, q)$ of \eqref{5.17'} has a global bound which depends only on $\Vert g_1\Vert_{L^\infty(\mathbb{R})}$ and $\Vert g_2\Vert_{L^\infty(\overline\Omega_R)}$. By regularity estimates, $( p, q)=T(g_1, g_2)$ belongs to $C^{3}(\mathbb{R})\times C^{3}(\overline\Omega_R)$, whence $(p,q)\in\mathcal{F}$. Therefore, $T$ is compact. Let $K$ be the cone $K=\left\{(u,v)\in\mathcal{F}:u\ge0~\text{in}~\mathbb{R}, v\ge 0~\text{in}~\overline\Omega_R\right\}$. Its interior $K^\circ=\big\{(u,v)\in \mathcal{F}:u>0~\text{in}~\mathbb{R}, v>0~\text{in}~\mathbb{R}\times[0,R)\big\}\neq \emptyset $ (for instance, $(u,v(y))=(1,1-y/R)$ belongs to $K^\circ$) and $K\cap (-K)={(0,0)}$. By the analysis above, $T(K^\circ)\subset K^\circ$ and $T$ is strongly positive in the sense that, if $(g_1, g_2)\in K\backslash \{(0,0)\}$, then $ p>0$ in $\mathbb{R}$ and $ q>0$ in $\mathbb{R}\times[0,R)$. From the classical Krein-Rutman theory, there exists a unique positive real number $\lambda^*_{R,\zeta}(\alpha)$ and a unique (up to multiplication by constants) function pair $( p, q)\in K^\circ$ such that $\lambda^*_{R,\zeta}(\alpha)T( p, q)=( p, q)$. The principal eigenvalue $\lambda^*_{R,\zeta}(\alpha)$ depends on $R$, $\alpha$ and $\zeta$. Set $\lambda_{R,\zeta}(\alpha):=\lambda^*_{R,\zeta}(\alpha)-\Lambda$, then the function $\lambda_{R,\zeta}(\alpha)$ takes value in $\mathbb{R}$. For each $\alpha\in\mathbb{R}$, $( p, q)$ is the unique (up to multiplication by constants) positive eigenfunction pair of \eqref{5.15/5.18'} associated with $\lambda_{R,\zeta}(\alpha)$. \textit{Step 2. Proof of formula \eqref{5.16'}.} We notice from Step 1 that $( p, q)\in \varSigma$. It then follows that \begin{equation*} \lambda_{R,\zeta}(\alpha)\le\sup_{( p, q)\in \varSigma}\min\bigg\{\inf_{\mathbb{R}}\frac{\mathcal{L}_{1,\alpha}( p, q)}{ p}, \inf_{\mathbb{R}\times[0,R)}\frac{\mathcal{L}_{2,\alpha}( p, q)}{ q}\bigg\}. \end{equation*} To show the reverse inequality, assume by contradiction that there exists $( p_1, q_1)\in \varSigma$ such that \begin{equation*} \lambda_{R,\zeta}(\alpha)<\min\bigg\{\inf_{\mathbb{R}}\frac{\mathcal{L}_{1,\alpha}( p_1, q_1)}{ p_1}, \inf_{\mathbb{R}\times[0,R)}\frac{\mathcal{L}_{2,\alpha}( p_1, q_1)}{ q_1}\bigg\}. \end{equation*} Define \begin{equation*} \theta^*:=\min\big\{\theta>0:~ \theta( p_1, q_1)\ge ( p, q) \ \text{in} \ \mathbb{R}\times[0,R)\big\}. 
\end{equation*} Then, $\theta^*>0$ and $(\theta^* p_1- p, \theta^* q_1- q)$ is nonnegative and two cases may occur, namely, either at least one component attains zero somewhere in $\mathbb{R}\times[0,R)$, or $\theta^* p_1- p>0$ in $\mathbb{R}$, $\theta^* q_1-q>0$ in $[0,R)$ and $\partial_y(\theta^* q_1-q)(x_0,R)=0$ for some $x_0\in\mathbb{R}$. Set $(w,z):=(\theta^* p_1- p, \theta^* q_1- q)$, then $(w,z)$ satisfies \begin{align} \label{4.10} \begin{cases} -Dw''+2D\alpha w'+(-D\alpha^2+\mu-\lambda_{R,\zeta}(\alpha)) w-\nu z(x,0)>0, &x\in\mathbb{R},\\ -d\Delta z+2d\alpha\partial_x z-(d\alpha^2+\zeta(x)+\lambda_{R,\zeta}(\alpha)) z>0, &(x,y)\in\Omega_R,\\ -d\partial_y z(x,0)+\nu z(x,0)-\mu w=0, \ &x\in\mathbb{R},\\ z(x,R)=0, \ &x\in\mathbb{R},\\ w, z \ \text{are}~ L\text{-periodic with respect to} \ x. \end{cases} \end{align} For the first case, assume first that there is $(x_0,y_0)\in\mathbb{R}\times[0,R)$ such that $z(x_0,y_0)=0$. We divide into two subcases. Suppose that $(x_0,y_0)\in\Omega_R$, then the strong maximum principle implies that $z\equiv 0$ in $\Omega_R$. This contradicts the strict inequality of $z$ in \eqref{4.10}, whence $z>0$ in $\Omega_R$. Suppose now that $y_0=0$ and $z(x_0,0)=0$, it follows that $\partial_y z(x_0,0)>0$. One then deduces from $-d\partial_y z(x_0,0)+\nu z(x_0,0)-\mu w(x_0)=0$ that $w(x_0)=-(d/\mu)\partial_y z(x_0,0)<0$, which is impossible since $w\ge 0$ in $\mathbb{R}$. Therefore, $z>0$ in $\mathbb{R}\times[0,R)$. It is seen from the first inequality of \eqref{4.10} that \begin{equation*} -Dw''+2D\alpha w'+(-D\alpha^2+\mu-\lambda_{R,\zeta}(\alpha)) w> \nu z(\cdot,0)>0 ~~\text{in}~\mathbb{R}. \end{equation*} Finally, assume that there is $x_0\in\mathbb{R}$ such that $w(x_0)=0$, then the strong maximum principle implies that $w\equiv 0$ in $\mathbb{R}$, contradicting the strict inequality above. Consequently, one has $w>0$ in $\mathbb{R}$ and $z>0$ in $\mathbb{R}\times[0,R)$. On the other hand, by Hopf lemma it follows that $\partial_y z(\cdot,R)<0$ in $\mathbb{R}$, whence the second case is ruled out. Therefore, \begin{equation*} \lambda_{R,\zeta}(\alpha)\ge \sup_{( p, q)\in \varSigma}\min\bigg\{\inf_{\mathbb{R}}\frac{\mathcal{L}_{1,\alpha}( p, q)}{ p}, \inf_{\mathbb{R}\times[0,R)}\frac{\mathcal{L}_{2,\alpha}( p, q)}{ q}\bigg\}. \end{equation*} Therefore, formula \eqref{5.16'} is proven and the supremum is indeed maximum since \eqref{5.16'} is reached by the function pair $(p,q)\in\Sigma_\alpha$. Therefore, (i) is proved. \vspace{2mm} \textit{Step 3. Monotonicity and continuity of the function $\zeta\mapsto\lambda_{R,\zeta}(\alpha)$ for all $\alpha\in\mathbb{R}$.} For any fixed $R$, if $\zeta_1(x)\le \zeta_2(x)$ in $\mathbb{R}$, formula \eqref{5.16'} together with the definition of the operator $\mathcal{L}_{2,\alpha}$ immediately implies that $\lambda_{\zeta_1}(\alpha)\ge\lambda_{\zeta_2}(\alpha)$ for all $\alpha\in\mathbb{R}$. Assume now that $\zeta_n\to \zeta$ as $n\to +\infty$, we have to show that $\lambda_{R,\zeta_n}(\alpha)\to \lambda_{R,\zeta}(\alpha)$ as $n\to +\infty$. Let $(\lambda_{R,\zeta_n}(\alpha) ;(p_n,q_n))$ be the principal eigenpair of \eqref{5.15/5.18'} with $\zeta$ replaced by $\zeta_n$ satisfying the normalization $\Vert p_n\Vert_{L^{\infty}(\mathbb{R})}=1$. From Step 1, it is seen that $(p_n,q_n)$ belongs to $C^{3}(\mathbb{R})\times C^{3}(\overline\Omega_R)$. 
By elliptic estimates, up to extraction of some subsequence, $(p_n,q_n)$ converges as $n\to +\infty$ uniformly in $\overline\Omega_R$ to a positive function pair $(p,q)\in C^{3}(\mathbb{R})\times C^{3}(\overline\Omega_R)$ which satisfies \eqref{5.15/5.18'} associated with some $\tilde \lambda_{R,\zeta}(\alpha)$ and with normalization $\Vert p\Vert_{L^{\infty}(\mathbb{R})}=1$. By the uniqueness of the principal eigenpair of \eqref{5.15/5.18'}, it follows that $\tilde\lambda_{R,\zeta}(\alpha)=\lambda_{R,\zeta}(\alpha)$. Namely, $\lambda_{R,\zeta_n}(\alpha)\to \lambda_{R,\zeta}(\alpha)$ as $n\to+\infty$. This completes the proof of (ii). \vspace{2mm} \textit{Step 4. Monotonicity of the function $R\mapsto\lambda_{R,\zeta}(\alpha)$ for all $\alpha\in\mathbb{R}$.} Fix $\alpha\in\mathbb{R}$ and choose $R_1>R_2$. Set $\lambda_1=\lambda_{R_1,\zeta}(\alpha)$ and $\lambda_2=\lambda_{R_2,\zeta}(\alpha)$ and let $(\lambda_1;(p_1,q_1))$ and $(\lambda_2;(p_2,q_2))$ be the eigenpairs of \eqref{5.15/5.18'} in $\overline\Omega_{R_1}$ and in $\overline\Omega_{R_2}$, respectively. Define \begin{equation*} \theta^*:=\min\{\theta>0:~ \theta(p_1,q_1)\ge(p_2,q_2) \ \text{in} \ \overline\Omega_{R_2}\}. \end{equation*} Then, $\theta^*>0$ is well-defined. The function pair $(w,z):=( \theta^*p_1-p_2, \theta^*q_1-q_2)$ is nonnegative and at least one component attains zero somewhere in $\mathbb{R}\times[0,R_2)$ by noticing that $q_1|_{y=R_2}>q_2|_{y=R_2}=0$. Moreover, $(w,z)$ satisfies \begin{align} \label{4.11} \begin{cases} -Dw''+2D\alpha w'+(-D\alpha^2+\mu) w-\nu z(x,0)=\theta^*\lambda_1p_1-\lambda_2p_2, &x\in\mathbb{R},\\ -d\Delta z+2d\alpha\partial_x z-(d\alpha^2+\zeta(x)) z=\theta^*\lambda_1q_1-\lambda_2q_2, &(x,y)\in\Omega_{R_2},\\ -d\partial_y z(x,0)+\nu z(x,0)-\mu w=0, \ &x\in\mathbb{R},\\ z(x,R_2)>0, \ &x\in\mathbb{R},\\ w, z \ \text{are}~ L\text{-periodic with respect to} \ x. \end{cases} \end{align} Assume first that there is $x_0\in\mathbb{R}$ such that $w(x_0)=0$. Since $\theta^*p_1(x_0)=p_2(x_0)$, it follows from the first equation in \eqref{4.11} that \begin{align*} -Dw''(x_0)+2D\alpha w'(x_0)+(-D\alpha^2+\mu) w(x_0)-\nu z(x_0,0)=(\lambda_1-\lambda_2)p_2(x_0). \end{align*} Since the function $w$ attains its minimum at $x_0$, one has $w'(x_0)=0$ and $w''(x_0)\ge 0$, whence $(\lambda_1-\lambda_2)p_2(x_0)\le -\nu z(x_0,0)\le 0$, therefore $\lambda_1\le\lambda_2$. Assume now that there is $ (x_0,y_0)\in\mathbb{R}\times[0,R_2)$ such that $z(x_0, y_0)=0$. We distinguish two subcases. Suppose that $y_0\in(0,R_2)$; then a similar analysis of the second equation in \eqref{4.11} as above implies that $\lambda_1\le\lambda_2$. Otherwise, $z>0$ in $\Omega_{R_2}$ and $z(x_0,0)=0$, which leads to $w(x_0)=-(d/\mu)\partial_y z(x_0,0)<0$. This contradicts $w\ge 0$ in $\mathbb{R}$. To sum up, one obtains $\lambda_1\le\lambda_2$. Moreover, $\lambda_1=\lambda_2$ is impossible, otherwise $(p_1,q_1)$ would be a positive multiple of $(p_2,q_2)$, which contradicts $q_1|_{y=R_2}>q_2|_{y=R_2}=0$. As a consequence, $\lambda_1<\lambda_2$, namely, the function $R\mapsto\lambda_{R,\zeta}(\alpha)$ is decreasing. The proof of (iii) is complete. \vspace{2mm} \textit{Step 5. The concavity of the function $\alpha\mapsto\lambda_{R,\zeta}(\alpha)$.} Let $(\lambda_{R,\zeta}(\alpha); (p,q))$ be the principal eigenpair of \eqref{5.15/5.18'}. With the change of functions $( p, q)=e^{\alpha x}( \Phi,\Psi)$ in formula \eqref{5.16'}, one has \begin{align*} \frac{\mathcal{L}_{1, \alpha}( p, q)}{ p}=\frac{-D \Phi''-\nu\Psi(x,0)}{\Phi}+\mu,\quad \frac{\mathcal{L}_{2, \alpha}( p, q)}{ q}=\frac{-d \Delta\Psi}{\Psi}-\zeta(x).
\end{align*} Then, it is immediate to see that \begin{align*} \lambda_{R,\zeta}(\alpha)=\max_{(\Phi,\Psi)\in \Sigma'_\alpha}\min\bigg\{\inf_{\mathbb{R}}\frac{-D\Phi''-\nu\Psi(x,0)}{\Phi}+\mu,\ \inf_{\mathbb{R}\times[0,R)}\frac{-d \Delta\Psi}{\Psi}-\zeta(x)\bigg\}, \end{align*} where $\Sigma'_\alpha:=\left\{(\Phi,\Psi)\in C^2(\mathbb{R})\times C^2(\overline\Omega_R): ~ e^{\alpha x}(\Phi,\Psi)\in\Sigma_\alpha\right\}.$ Let $\alpha_1$, $\alpha_2$ be real numbers and $t\in[0,1]$. Set $\alpha=t\alpha_1+(1-t)\alpha_2$. One has to show that $\lambda_{R,\zeta}(\alpha)\ge t\lambda_{R,\zeta}(\alpha_1)+(1-t)\lambda_{R,\zeta}(\alpha_2)$. Let $(\Phi_1,\Psi_1)$ and $(\Phi_2,\Psi_2)$ be two arbitrarily chosen function pairs in $\Sigma'_{\alpha_1}$ and $\Sigma'_{\alpha_2}$, respectively. Set $(w_1,z_1)=(\ln\Phi_1,\ln\Psi_1)$, $(w_2,z_2)=(\ln\Phi_2,\ln\Psi_2)$, $w=tw_1+(1-t)w_2$, $z=tz_1+(1-t)z_2$ and $(\Phi,\Psi)=(e^w, e^z)$. It follows that $(\Phi,\Psi)\in\Sigma'_\alpha$. Then, it is obvious to see that \begin{equation*} \lambda_{R,\zeta}(\alpha)\ge\min\bigg\{\inf_\mathbb{R}\frac{-D\Phi''-\nu\Psi(x,0)}{\Phi}+\mu, \inf_{\mathbb{R}\times[0,R)}\frac{-d\Delta\Psi}{\Psi}-\zeta(x)\bigg\}. \end{equation*} After some calculations, one has \begin{align*} &\frac{-D\Phi''-\nu\Psi(x,0)}{\Phi}=-Dw''-Dw'^2-\nu e^{z(x,0)-w(x)},\ \ \ \frac{-d\Delta\Psi}{\Psi}=-d\Delta z-d\nabla z\cdot\nabla z. \end{align*} Noticing that $x\mapsto e^x$ is convex, $\nu>0$ and $t(1-t)\ge 0$, it follows that \begin{align*} \frac{-D\Phi''-\nu\Psi(x,0)}{\Phi}+\mu&\ge t\big(-Dw''_1-Dw'^2_1-\nu e^{z_1(x,0)-w_1}\big)\cr &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(1-t)\big(-Dw''_2-Dw'^2_2-\nu e^{z_2(x,0)-w_2}\big)+\mu\cr &\ge t\bigg( \frac{-D\Phi''_1-\nu\Psi_1(x,0)}{\Phi_1}+\mu\bigg)+(1-t)\bigg( \frac{-D\Phi''_2-\nu\Psi_2(x,0)}{\Phi_2}+\mu\bigg). \end{align*} Similarly, \begin{align*} \frac{-d\Delta\Psi}{\Psi}-\zeta(x)\ge t\bigg(\frac{-d\Delta\Psi_1}{\Psi_1}-\zeta(x)\bigg)+(1-t)\bigg(\frac{-d\Delta\Psi_2}{\Psi_2}-\zeta(x)\bigg). \end{align*} Therefore, \begin{align*} \lambda_{R,\zeta}(\alpha)\ge &t\min\bigg\{ \inf \frac{-D\Phi''_1-\nu\Psi_1(x,0)}{\Phi_1}+\mu, \inf \frac{-d\Delta\Psi_1}{\Psi_1}-\zeta(x) \bigg\}\\ &\qquad +(1-t)\min\bigg\{\inf \frac{-D\Phi''_2-\nu\Psi_2(x,0)}{\Phi_2}+\mu, \inf \frac{-d\Delta\Psi_2}{\Psi_2}-\zeta(x)\bigg\}. \end{align*} Since $(\Phi_1,\Psi_1)$ and $(\Phi_2,\Psi_2)$ were arbitrarily chosen, one concludes that $\lambda_{R,\zeta}(\alpha)\ge t\lambda_{R,\zeta}(\alpha_1)+(1-t)\lambda_{R,\zeta}(\alpha_2)$. That is, $\alpha\mapsto\lambda_{R,\zeta}(\alpha)$ is concave in $\mathbb{R}$ and then continuous in $\mathbb{R}$. \textit{Step 6. The upper and lower bounds \eqref{bound'} of $\lambda_{R,\zeta}(\alpha)$.} From Step 1, it follows that $\lambda^*_{R,\zeta}(\alpha)$ is positive, whence it is immediate to see that $\lambda_{R,\zeta}(\alpha)=\lambda^*_{R,\zeta}(\alpha)-\Lambda_\zeta(\alpha)> -\Lambda_\zeta(\alpha)$, namely, \begin{align*} -\lambda_{R,\zeta}(\alpha)< \max\big\{D\alpha^2+\nu-\mu+{\mu\nu}/{d}, d(\alpha^2+1)+M\big\}. \end{align*} It suffices to show that \begin{align*} -\lambda_{R,\zeta}(\alpha)>\max\bigg\{D\alpha^2-\mu, d\alpha^2+m-d\frac{\pi^2}{R^2}\bigg\}. \end{align*} From Step 3 we have that $-\lambda_{R,\zeta}(\alpha)$ is non-decreasing with respect to $\zeta$ for all $\alpha\in\mathbb{R}$, it then follows that $-\lambda_{R,\zeta}(\alpha)\ge -\lambda_{R,m}(\alpha)$ for all $\alpha\in\mathbb{R}$. 
We claim that \begin{align} \label{sub-bound} -\lambda_{R,m}(\alpha)>\max\bigg\{D\alpha^2-\mu, d\alpha^2+m-d\frac{\pi^2}{R^2}\bigg\}. \end{align} Inspired by \cite[Proposition 3.4]{GMZ2015}, we assume by contradiction that $-D\alpha^2+\mu-\lambda_{R,m}(\alpha)\le 0$. Denote by $\big(\lambda_{R,m}(\alpha),(\tilde p,\tilde q)\big)$ the principal eigenpair of the eigenvalue problem \eqref{5.15/5.18'} with $\zeta$ replaced by $m$; then $\big(\lambda_{R,m}(\alpha),(\tilde p,\tilde q)\big)$ satisfies \begin{align} \label{4.4'} \begin{cases} -D\tilde p''+2D\alpha \tilde p'+(-D\alpha^2+\mu)\tilde p-\nu \tilde q(x,0)=\lambda_{R,m}(\alpha) \tilde p, &x\in\mathbb{R},\\ -d\Delta\tilde q+2d\alpha\partial_x \tilde q-(d\alpha^2+m)\tilde q=\lambda_{R,m}(\alpha) \tilde q, &(x,y)\in\Omega_R,\\ -d\partial_y\tilde q(x,0)+\nu \tilde q(x,0)-\mu \tilde p=0, \ &x\in\mathbb{R},\\ \tilde q(x,R)=0, \ &x\in\mathbb{R},\\ \tilde p, \tilde q \ \text{are} \ L\text{-periodic with respect to} \ x. \end{cases} \end{align} Since $\tilde p$ satisfies \begin{equation}\label{z'} -D\tilde p''+2D\alpha \tilde p'+\big(-D\alpha^2+\mu-\lambda_{R,m}(\alpha)\big)\tilde p=\nu \tilde q(\cdot,0)>0~~~\text{in}~\mathbb{R}, \end{equation} one infers that any positive constant is a subsolution of \eqref{z'}. Since $\tilde p$ is $L$-periodic in $x$, one gets that $\tilde p$ is identically equal to its minimum and thus $\tilde p$ is a positive constant in $\mathbb{R}$. Then, $0<\nu \tilde q(\cdot,0)=(-D\alpha^2+\mu-\lambda_{R,m}(\alpha))\tilde p\le 0$ in $\mathbb{R}$. This is a contradiction. Therefore, $-D\alpha^2+\mu-\lambda_{R,m}(\alpha)> 0$. Next, we assume that $\lambda_{R,m}(\alpha)\ge-d\alpha^2-m+d\frac{\pi^2}{R^2}$. We denote $w_R=\frac{\pi}{R}$, then \begin{equation*} w:=\sqrt{\frac{d\alpha^2+m+\lambda_{R,m}(\alpha)}{d}}\ge w_R>0. \end{equation*} Integrating the second equation in \eqref{4.4'} with respect to $x$ over $[0,L]$ and using the $L$-periodicity of $\tilde q$ in $x$, one finds that $\Psi(y):=\int_0^L\tilde q(x,y)\mathrm{d}x$ satisfies $\Psi''(y)+w^2\Psi(y)=0$, with $\Psi(y)>0$ in $[0,R)$, $\Psi(R)=0$. One gets that $\Psi(\cdot)=C\sin(w(R-\cdot))$ in $[0,R]$ for some constant $C>0$. Since $w\ge w_R$, it is easy to see that $[0,R)$ contains at least a half period of $\Psi$, so that $\Psi$ must attain a non-positive value in $[0,R)$, which is impossible. Therefore, $\lambda_{R,m}(\alpha)<-d\alpha^2-m+d\frac{\pi^2}{R^2}$, namely, \eqref{sub-bound} is proved. This completes the proof of (iv). \end{proof} In what follows, we shall give the variational formulas for $c^*_{R,\pm}$ by the linear operators approach. For simplicity of the notation, we write $\lambda_R(\alpha):=\lambda_{R,\zeta}(\alpha)$ in the sequel. We have: \begin{proposition} \label{formula} Let $c^*_{R,+}$ and $c^*_{R,-}$ be the rightward and leftward asymptotic spreading speeds of $Q_1$. Then, \begin{equation*} c^*_{R,+}=\inf\limits_{\alpha>0}\frac{-\lambda_{R}(\alpha)}{\alpha},~~~c^*_{R,-}=\inf\limits_{\alpha>0}\frac{-\lambda_{R}(-\alpha)}{\alpha}. \end{equation*} \end{proposition} \begin{proof} Since $f(x,v)\le f_v(x,0)v$ for all $(x,y)\in\overline{\Omega}_R$ and $v\ge 0$, it follows that, for any $(u_0,v_0)\in \mathcal{K}$, the solution $(u(t,\cdot;u_0), v(t,\cdot,\cdot;v_0))$ of \eqref{pb in strip} is a strict subsolution of \eqref{linear pb in strip} for all $t>0$ and $(x,y)\in\overline{\Omega}_R$. By a comparison argument, this implies that $Q_t[(u_0,v_0)]\le L(t)[(u_0,v_0)]$ for all $t> 0$ and $(u_0,v_0)\in\mathcal{K}$. Letting $t=1$, we have $Q_1[(u_0,v_0)]\le L(1)[(u_0,v_0)]$ for every $(u_0,v_0)\in\mathcal{K}$.
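For the reader's convenience, we record here the elementary observation behind the identification of the principal eigenvalue of the time-one map used in the next paragraph; it is an immediate consequence of the way the eigenvalue problem \eqref{5.15/5.18'} was derived from \eqref{modified linear pb in strip}. If $(p,q)$ denotes the principal eigenfunction pair of \eqref{5.15/5.18'} associated with $\lambda_R(\alpha)$, then $e^{-\lambda_R(\alpha)t}(p(x),q(x,y))$ solves \eqref{modified linear pb in strip} with initial value $(p,q)$, whence
\begin{equation*}
L_\alpha(t)[(p,q)]=e^{-\lambda_R(\alpha)t}(p,q)~~\text{for all}~t\ge 0,\quad\text{and in particular}\quad L_\alpha(1)[(p,q)]=e^{-\lambda_R(\alpha)}(p,q).
\end{equation*}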
Define a linear operator $\mathbb{L}_\alpha$ on $\mathbb{P}=\{(u,v)\in\mathbb{C}^1_0: (u,v)~\text{is}~L\text{-periodic in}~x\}$ associated with $L(1)$ by \begin{align*} \mathbb{L}_\alpha[(u_0,v_0)]:&=e^{\alpha x}\cdot L(1)[e^{-\alpha x}(u_0,v_0)]\\ &=e^{\alpha x}\cdot e^{-\alpha x}L_\alpha(1)[(u_0,v_0)]\\ &=L_\alpha(1)[(u_0,v_0)]~~~~ \text{for every}~ (u_0, v_0)\in \mathbb{P}~\text{and}~(x,y)\in\overline\Omega_R. \end{align*} It then follows that $\mathbb{L}_\alpha=L_\alpha(1)$, and hence, $e^{-\lambda_R(\alpha)}$ is the principal eigenvalue of $\mathbb{L}_\alpha$. Since the function $\alpha\mapsto \ln ( e^{-\lambda_R(\alpha)})=-\lambda_R(\alpha)$ is convex, using similar arguments as in \cite[Theorem 2.5]{Weinberger2002} and \cite[Theorem 3.10(i)]{LZ2007}, we obtain that \begin{equation} \label{7.8} c^*_{R,+}\le \inf\limits_{\alpha>0}\frac{\ln( e^{-\lambda_R(\alpha)})}{\alpha}=\inf\limits_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}. \end{equation} On the other hand, for any given $\varep\in(0,1)$, there exists $\delta>0$ such that $f(x,v)\ge (1-\varep)f_v(x,0)v$ for all $v\in[0,\delta]$ and $(x,y)\in\overline\Omega_R$. By the continuity of the solutions of \eqref{pb in strip} with respect to the initial conditions given in Proposition \ref{prop3.8}, there exists an $L$-periodic (in $x$) positive function pair $(u_1,v_1)\in \text{Int}(\mathbb{P}^+)$ satisfying $u_1\le U_R$ in $\mathbb{R}$ and $v_1\le V_R$ in $\overline\Omega_R$ such that $u(t,x;u_1)\le \nu\delta/\mu$ and $v(t,x,y;v_1)\le \delta$ for all $t\in[0,1]$ and $(x,y)\in\overline\Omega_R$. By Proposition \ref{cp-strip}, one infers that, for all $(u_0,v_0)\in\mathcal{K}_1:=\{(u,v)\in C(\mathbb{R})\times C(\overline\Omega_R): (0,0)\le (u,v)\le (u_1,v_1)~\text{in}~\overline\Omega_R\}$, $$u(t,\cdot;u_0)\le u(t,\cdot;u_1)\le \nu\delta/\mu~~ \text{for all}~ t\in[0,1]~\text{and}~ x\in\mathbb{R},~~~~~$$ $$v(t,\cdot,\cdot;v_0)\le v(t,\cdot,\cdot;v_1)\le \delta~~ \text{for all}~ t\in[0,1]~\text{and}~ (x,y)\in\overline\Omega_R.$$ Thus, for any $(u_0,v_0)\in\mathcal{K}_1$, the solution $(u(t,\cdot;u_0), v(t,\cdot,\cdot; v_0))$ of \eqref{pb in strip} satisfies \begin{align*} \begin{cases} u_t-Du_{xx}=\nu v(t,x,0)-\mu u,\quad &t\in[0,1],\ x\in\mathbb{R}, \\ v_t-d\Delta v\ge (1-\varep)f_v(x,0)v,\quad &t\in[0,1],\ (x,y)\in\Omega_R, \\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0),\quad &t\in[0,1],\ x\in\mathbb{R}, \\ v(t,x,R)=0,\quad &t\in[0,1],\ x\in\mathbb{R}. \end{cases} \end{align*} Let $\{\mathbb{L}^\varep(t)\}_{t\ge 0}$ be the solution semigroup generated by the following linear system: \begin{align*} \begin{cases} u_t-Du_{xx}=\nu v(t,x,0)-\mu u,\quad &t>0,\ x\in\mathbb{R}, \\ v_t-d\Delta v= (1-\varep)f_v(x,0)v,\quad &t>0,\ (x,y)\in\Omega_R, \\ -d\partial_y v(t,x,0)=\mu u-\nu v(t,x,0),\quad &t>0,\ x\in\mathbb{R}, \\ v(t,x,R)=0,\quad &t>0,\ x\in\mathbb{R}. \end{cases} \end{align*} Then, Proposition \ref{cp-strip} implies that $\mathbb{L}^\varep(t)[(u_0,v_0)]\le Q_t[(u_0,v_0)]$ for all $t\in[0,1]$ and $(u_0,v_0)\in \mathcal{K}_1$. In particular, $\mathbb{L}^\varep(1)[(u_0,v_0)]\le Q_1[(u_0,v_0)]$ for all $(u_0,v_0)\in \mathcal{K}_1$. Let $\lambda^\varep_R(\alpha)$ be the principal eigenvalue of the eigenvalue problem \eqref{5.15/5.18'} with $f_v(x,0)$ replaced by $(1-\varep)f_v(x,0)$.
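For the limiting argument below, we also note that, writing $\zeta_\varep:=(1-\varep)f_v(\cdot,0)$ (a notation introduced here only for brevity), one has $\zeta_\varep\to f_v(\cdot,0)$ as $\varep\to 0$, so that Proposition \ref{principal eigenvalue in strip} (ii) gives, for each $\alpha\in\mathbb{R}$,
\begin{equation*}
\lambda^\varep_R(\alpha)=\lambda_{R,\zeta_\varep}(\alpha)\longrightarrow\lambda_{R,f_v(\cdot,0)}(\alpha)=\lambda_R(\alpha)\quad\text{as}~\varep\to 0.
\end{equation*}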
As argued above, the concavity of $\lambda^\varep_R(\alpha)$ and similar arguments as in \cite[Theorem 2.4]{Weinberger2002} and \cite[Theorem 3.10(ii)]{LZ2007} give rise to \begin{equation} \label{7.11} c^*_{R,+}\ge \inf\limits_{\alpha>0}\frac{\ln( e^{-\lambda^\varep_R(\alpha)})}{\alpha}=\inf\limits_{\alpha>0}\frac{-\lambda^\varep_R(\alpha)}{\alpha}~~\text{for all}~ \varep\in (0,1). \end{equation} Combining \eqref{7.8} and \eqref{7.11}, we obtain \begin{equation*} \inf\limits_{\alpha>0}\frac{-\lambda^\varep_R(\alpha)}{\alpha}\le c^*_{R,+}\le \inf\limits_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}~~\text{for all}~ \varep\in (0,1). \end{equation*} Letting $\varep\to 0$, thanks to the continuity of the function $\zeta\mapsto\lambda_{R,\zeta}(\alpha)$ in Proposition \ref{principal eigenvalue in strip} (ii), we then have $$c^*_{R,+}=\inf\limits_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}.$$ By the change of variables $\hat u(t,x):=u(t,-x)$ and $\hat v(t,x,y):=v(t,-x,y)$, it follows that $c^*_{R,-}$ is the rightward asymptotic spreading speed of the resulting system for $(\hat u,\hat v)$. Arguing along the same lines as above, one derives that $$c^*_{R,-}=\inf_{\alpha>0}\frac{-\lambda_R(-\alpha)}{\alpha}.$$ The proposition is therefore proved. \end{proof} \begin{lemma}\label{lemma-same speed} $c^*_{R,+}=c^*_{R,-}>0.$ \end{lemma} \begin{proof} We first prove that $c^*_{R,+}=c^*_{R,-}$. By virtue of the variational formulas obtained above, it is enough to show $\lambda_R(\alpha)=\lambda_R(-\alpha)$. Let $(\lambda_R(\alpha);(p,q))$ be the principal eigenpair of the eigenvalue problem \eqref{5.15/5.18'}, namely, \begin{align} \label{right} \begin{cases} -Dp''+2D\alpha p'+(-D\alpha^2+\mu) p-\nu q(x,0)=\lambda_R(\alpha) p, &x\in\mathbb{R},\\ -d\Delta q+2d\alpha\partial_x q-(d\alpha^2+f_v(x,0)) q=\lambda_R(\alpha) q, &(x,y)\in\Omega_R,\\ -d\partial_y q(x,0)+\nu q(x,0)-\mu p=0, \ &x\in\mathbb{R},\\ q(x,R)=0, \ &x\in\mathbb{R},\\ p, q \ \text{are} \ L\text{-periodic with respect to} \ x, \end{cases} \end{align} and let $(\lambda_R(-\alpha);(\phi,\psi))$ be the principal eigenpair of the eigenvalue problem \eqref{5.15/5.18'} with $\alpha$ replaced by $-\alpha$, that is, \begin{align} \label{left} \begin{cases} -D\phi''-2D\alpha \phi'+(-D\alpha^2+\mu) \phi-\nu \psi(x,0)=\lambda_R(-\alpha) \phi, &x\in\mathbb{R},\\ -d\Delta \psi-2d\alpha\partial_x \psi-(d\alpha^2+f_v(x,0)) \psi=\lambda_R(-\alpha) \psi, &(x,y)\in\Omega_R,\\ -d\partial_y \psi(x,0)+\nu \psi(x,0)-\mu \phi=0, \ &x\in\mathbb{R},\\ \psi(x,R)=0, \ &x\in\mathbb{R},\\ \phi, \psi \ \text{are} \ L\text{-periodic with respect to} \ x. \end{cases} \end{align} We multiply the first equations in \eqref{right} and in \eqref{left} by $\phi$ and $p$, respectively, and then we integrate the two resulting equations over $(0,L)$. By subtraction and by the $L$-periodicity of $p$ and $\phi$, it follows that \begin{equation*} \big[\lambda_R(\alpha)-\lambda_R(-\alpha)\big]\int_0^L p\phi \mathrm{d}x=-\nu\int_0^L\big(q(x,0)\phi-\psi(x,0)p\big) \mathrm{d}x. \end{equation*} Similarly, we multiply the second equations in \eqref{right} and in \eqref{left} by $\psi$ and $q$, respectively. Subtracting the integrals of the two resulting equations over $S=(0,L)\times (0,R)$ and using the boundary conditions together with the periodicity in $x$, one gets \begin{equation*} \big[\lambda_R(\alpha)-\lambda_R(-\alpha)\big]\int_S q\psi \mathrm{d}x \mathrm{d}y=\mu\int_0^L\big(q(x,0)\phi-\psi(x,0)p\big)\mathrm{d}x.
\end{equation*} Therefore, by using the positivity of $(p,q)$ and $(\phi,\psi)$, one has \begin{equation*} \text{sgn}\big(\lambda_R(\alpha)-\lambda_R(-\alpha)\big)=\text{sgn} \Big(\int_0^L\big(q(x,0)\phi-\psi(x,0)p\big)\mathrm{d}x\Big) =-\text{sgn}\Big(\int_0^L\big(q(x,0)\phi-\psi(x,0)p\big)\mathrm{d}x\Big), \end{equation*} which implies that $\lambda_R(\alpha)=\lambda_R(-\alpha)$. Consequently, $c^*_{R,+}=c^*_{R,-}$. From $\lambda_R(\alpha)=\lambda_R(-\alpha)$ and from Proposition \ref{principal eigenvalue in strip} (iv), it is seen that the function $\alpha\mapsto-\lambda_R(\alpha)$ is convex and even in $\mathbb{R}$ and $-\lambda_R(0)\ge m-d\pi^2/(R^2)>0$. Thus, $-\lambda_R(\alpha)>0$ for all $\alpha\in\mathbb{R}$, whence $c^*_{R,+}=c^*_{R,-}> 0$. \end{proof} \begin{proof}[Proofs of Theorems \ref{thm-asp-strip} and \ref{thm-PTF-in strip}] By Theorems 3.4, 4.3 and 4.4 in \cite{LZ2010}, as well as Lemma \ref{lemma-same speed} above, one derives the conclusion of Theorem \ref{thm-asp-strip} with spreading speed $c^*_R$, as well as the existence of rightward (non-increasing in $s$) and leftward (non-decreasing in $s$) periodic traveling waves for problem \eqref{pb in strip} with minimal wave speed $c^*_R$. To complete the proof of Theorem \ref{thm-PTF-in strip}, it remains to show that these periodic traveling fronts are strictly monotone in $s$. For $c\ge c^*_R$, consider a periodic rightward traveling front of \eqref{pb in strip} (the case of leftward waves can be dealt with similarly), written as $(\phi_R(s,x),\psi_R(s,x,y))=(u(\frac{x-s}{c},x),v(\frac{x-s}{c},x,y))$ for all $s\in\mathbb{R}$ and $(x,y)\in\overline\Omega_R$. Notice that $(u(t,x),v(t,x,y))$ satisfies \eqref{pb in strip} and \eqref{periodicity-strip}, and is defined for all $t\in\mathbb{R}$ and $(x,y)\in\overline\Omega_R$. Since $c\ge c^*_R>0$, the function pair $(u,v)$ is non-decreasing in $t\in\mathbb{R}$. Then, for any $\tau>0$, $w(t,x)=u(t+\tau,x)-u(t,x)\ge 0$ and $z(t,x,y)=v(t+\tau,x,y)-v(t,x,y)\ge 0$ for all $t\in\mathbb{R}$ and $(x,y)\in\overline\Omega_R$. The function pair $(w,z)$ is a classical solution to a linear problem in $\mathbb{R}\times\overline\Omega_R$. The strong parabolic maximum principle and the Hopf lemma, as well as the uniqueness of the corresponding Cauchy problem, then imply that $(w,z)$ is either identically $(0,0)$ or positive everywhere in $\mathbb{R}\times[0,R)$. If $(w,z)\equiv(0,0)$, then $(\phi_R(s-c\tau,x),\psi_R(s-c\tau,x,y))=(\phi_R(s,x),\psi_R(s,x,y))$ for all $s\in\mathbb{R}$ and $(x,y)\in\overline\Omega_R$, which contradicts the limit condition \eqref{limit-strip} as $s\to\pm\infty$ due to $c\tau>0$. Therefore, $w>0$ in $\mathbb{R}$ and $z>0$ in $\mathbb{R}\times[0,R)$ for any $\tau>0$. Hence, $(\phi_R(s,x),\psi_R(s,x,y))$ is decreasing in $s$. This completes the proof. \end{proof} \section{Propagation properties in the half-plane: Proofs of Theorems \ref{thm-asp-half-plane} and \ref{PTF-in half-plane}} \label{section5} This section is devoted to propagation properties for problem \eqref{pb in half-plane} in the half-plane. We only present the detailed arguments for the rightward direction along the road, since the leftward direction can be handled similarly.
\subsection{The generalized eigenvalue problem in the half-plane}\label{generalized p-e} Recall from Proposition \ref{principal eigenvalue in strip} that \begin{equation} \label{bound in strip} \max\Big\{D\alpha^2-\mu, d\alpha^2+m-d\frac{\pi^2}{R^2}\Big\}<-\lambda_R(\alpha)< \max\Big\{D\alpha^2+\nu-\mu+\frac{\mu\nu}{d}, d(\alpha^2+1)+M\Big\}, \end{equation} and the function $R\mapsto -\lambda_R(\alpha)$ is increasing. For any fixed $\alpha\in\mathbb{R}$, we can take the limit as follows: \begin{equation} \label{principal-e} \lambda(\alpha):=\lim\limits_{R\to+\infty} \lambda_R(\alpha). \end{equation} It can be deduced from \eqref{bound in strip} that \begin{equation} \label{bound in half-plane} \max\Big\{D\alpha^2-\mu, d\alpha^2+m\Big\}\le -\lambda(\alpha)\le \max\Big\{D\alpha^2+\nu-\mu+\frac{\mu\nu}{d}, d(\alpha^2+1)+M\Big\}. \end{equation} Since the function $\alpha\mapsto -\lambda_R(\alpha)$ is convex and continuous in $\mathbb{R}$ and since the pointwise limit of a convex function is still convex, it follows that the function $\alpha\mapsto -\lambda(\alpha)$ is convex and continuous in $\mathbb{R}$. Furthermore, we have: \begin{theorem} \label{thm 5.1} For any $\alpha\in\mathbb{R}$, let $\lambda(\alpha)$ be defined by \eqref{principal-e}. Then there exists a positive $L$-periodic (in $x$) function pair $(P_\alpha(x), Q_\alpha(x,y))$ associated with $\varLambda=\lambda(\alpha)$ satisfying \begin{align} \label{eigenvalue pb in half-plane} \begin{cases} -DP_\alpha''+2D\alpha P_\alpha'+(-D\alpha^2+\mu) P_\alpha-\nu Q_\alpha(x,0)=\varLambda P_\alpha, &x\in\mathbb{R},\\ -d\Delta Q_\alpha+2d\alpha\partial_x Q_\alpha-(d\alpha^2+f_v(x,0)) Q_\alpha=\varLambda Q_\alpha, &(x,y)\in\Omega,\\ -d\partial_y Q_\alpha(x,0)+\nu Q_\alpha(x,0)-\mu P_\alpha=0, \ &x\in\mathbb{R},\\ P_\alpha, Q_\alpha \ \text{are positive and} \ L\text{-periodic with respect to} \ x, \end{cases} \end{align} and such that, up to some normalization, \begin{equation*} P_\alpha\le 1~\text{in}~\mathbb{R},~~Q_\alpha~\text{is locally bounded in}~\overline\Omega. \end{equation*} We call $\lambda(\alpha)$ the generalized principal eigenvalue of \eqref{eigenvalue pb in half-plane} and $(P_\alpha, Q_\alpha)$ the generalized principal eigenfunction pair associated with $\lambda(\alpha)$. Moreover, problem \eqref{eigenvalue pb in half-plane} admits no positive and $L$-periodic (in $x$) eigenfunction pair for any $\varLambda>\lambda(\alpha)$. \end{theorem} \begin{remark} \textnormal{We point out here that the classical Krein-Rutman theorem cannot be applied anymore due to the noncompactness of the domain. We denote by $(P_R,Q_R):=(P_{\alpha,R},Q_{\alpha,R})$ the principal eigenfunction pair of \eqref{eigen-pb-strip} in $\overline\Omega_R$ associated with the principal eigenvalue $\lambda_R(\alpha)$ for simplicity. As will be shown later, with the technical Lemmas \ref{lemma3.5}--\ref{lemma3.7}, we can show that, up to normalization, $\lim_{R\to +\infty}(P_{R},Q_{R})$ turns out to be the generalized principal eigenfunction pair $(P_\alpha, Q_\alpha)$ of \eqref{eigenvalue pb in half-plane} in $\overline\Omega$ corresponding to the generalized principal eigenvalue $\lambda(\alpha)$. The statements of Lemmas \ref{lemma3.5}--\ref{lemma3.7} are similar to Lemmas 3.5--3.7 in \cite{GMZ2015}, however, our case is much more involved, since the heterogeneous assumption is now set on $f$, this does not allow us to get the nice upper estimate as in Lemma 3.6 of \cite{GMZ2015}. For the sake of completeness, we give the details below. 
} \end{remark} \begin{lemma} \label{lemma3.5} For any $R>R_0$, normalizing with $\Vert P_R(\cdot)\Vert_{L^\infty(\mathbb{R})}=1$, there exists $C_1>0$ (independent of $R$) such that \begin{equation*} \Vert Q_R(\cdot,0)\Vert_{L^\infty(\mathbb{R})}>C_1. \end{equation*} \end{lemma} \begin{proof} If the conclusion is not true, we may assume that there exists a sequence $(R_k)_{k\in\mathbb{N}}$ satisfying $R_k\to+\infty$ such that $\Vert Q_{R_k}(\cdot,0)\Vert_{L^{\infty}(\mathbb{R})}\to 0$ and $\Vert P_{R_k}\Vert_{L^{\infty}(\mathbb{R})}=1$. Since $(P_{R_k},Q_{R_k})$ is $L$-periodic in $x$, we may assume, with no loss of generality, that there exists $x_k\in [0,L]$ such that $P_{R_k}(x_k)=1$. Since $(P_{R_k})_{k\in\mathbb{N}}$ and $(Q_{R_k}(\cdot, 0) )_{k\in\mathbb{N}}$ are uniformly bounded, by the Arzel\`{a}-Ascoli Theorem, up to extraction of a subsequence, one has $P_{R_k}\to P_{\infty}\ge 0$ and $Q_{R_k}(\cdot, 0)\to 0$ as $k\to+\infty$. Moreover, there exists $x_\infty\in [0,L]$ such that, up to a subsequence, $x_k\to x_\infty$ as $k\to+\infty$. Passing to the limit $k\to+\infty$ in the first equation of the eigenvalue problem \eqref{eigen-pb-strip} satisfied by $(P_{R_k},Q_{R_k})$ in $\overline\Omega_{R_k}$ implies \begin{equation*} -DP''_\infty+2D\alpha P'_\infty+(-D\alpha^2+\mu)P_\infty=\lambda(\alpha)P_\infty~~\text{in}~\mathbb{R}. \end{equation*} Moreover, $P_\infty$ is $L$-periodic in $x$ and $P_\infty(x_\infty)=1$. The strong maximum principle implies $P_\infty>0$ in $\mathbb{R}$. Thus, $P_\infty$ is a positive constant. Hence, $\lambda(\alpha)=-D\alpha^2+\mu$. This implies $\lambda_R(\alpha)\ge \lambda(\alpha)= -D\alpha^2+\mu$, which contradicts \eqref{bound in strip}. Consequently, Lemma \ref{lemma3.5} is proved. \end{proof} \begin{lemma} \label{lemma3.6} For any $R>R_0$, assume that $\Vert Q_R(\cdot, 0)\Vert_{L^\infty(\mathbb{R})}=1$; then $Q_R$ is bounded on every compact subset of $\overline\Omega$ by some positive constant independent of $R$. \end{lemma} \begin{proof} For convenience, let us introduce some new notation. For $n>R_0$ large enough, we denote by $(\lambda_n(\alpha);(P_n,Q_n))$ the principal eigenpair of \eqref{eigen-pb-strip} in $\overline\Omega_n=\mathbb{R}\times[0,n]$ with normalization $\Vert Q_n(\cdot, 0)\Vert_{L^\infty(\mathbb{R})}=1$. Then, one has to show that, for any compact set $K\subset\overline\Omega$, there holds \begin{equation}\label{lem3.6-1} \sup\limits_n(\max\limits_{K\cap\overline\Omega_n}Q_n(x,y))<+\infty. \end{equation} To prove this, we first claim that $\Vert P_n\Vert_{L^\infty(\mathbb{R})}\le C_0$ for some constant $C_0>0$. Assume by contradiction that $\Vert P_n\Vert_{L^\infty(\mathbb{R})}$ is unbounded; then we choose a sequence $(P_n)_{n\in\mathbb{N}}$ such that $\Vert P_n\Vert_{L^\infty(\mathbb{R})}\to +\infty$ as $n\to +\infty$. By renormalization, it follows that $\Vert P_n\Vert_{L^\infty(\mathbb{R})}=1$ while $\Vert Q_n(\cdot,0)\Vert_{L^\infty(\mathbb{R})}\to 0$. This contradicts the conclusion of Lemma \ref{lemma3.5}. Our claim is thereby achieved. It then follows from the boundary condition $-d\partial_y Q_n(\cdot, 0)=\mu P_n(\cdot)-\nu Q_n(\cdot,0)$ that $\Vert\partial_y Q_n(\cdot,0)\Vert_{L^\infty(\mathbb{R})}\le (\mu C_0+\nu)/d$. Assume now that \eqref{lem3.6-1} is not true. Then, there exist a compact subset $K\subset\overline\Omega$ and a sequence $(x_n,y_n)_{n\in\mathbb{N}}$ in $K\cap\overline\Omega_n$ so that $Q_n(x_n,y_n)=\max_{K\cap\overline\Omega_n} Q_n>n$. Then we are able to find a larger compact set containing $K$ such that this assumption is still satisfied.
Therefore, without loss of generality we take $K=\overline{B^+_\rho((0,0))}$ with radius $\rho$ large. Therefore, up to extraction of some subsequence, $x_n\to x_\infty\in[-\rho,\rho]$, $y_n\to y_\infty\in [0,+\infty)$ as $n\to +\infty$, thanks to the boundedness of $(y_n)_{n\in\mathbb{N}}$. It follows that either $y_\infty>0$ or $ y_\infty=0$. By setting \begin{equation*} w_n(x,y):=\frac{Q_n(x,y)}{Q_n(x_n,y_n)}~~\text{in}\ K\cap\overline\Omega_n, \end{equation*} one has $0<w_n\le 1$ in $K\cap\overline\Omega_n$ and $w_n(\cdot,0)<\frac{1}{n}$ in $[-\rho,\rho]$ for all $n$ large enough. In particular, $w_n(x_n,y_n)=1$. It can be deduced that the function $w_n$ satisfies \begin{equation*} \begin{cases} -d \Delta w_n+2d\alpha\partial_x w_n-(d\alpha^2+f_v(x,0)+\lambda_n(\alpha))w_n=0,\quad &\text{in}\ K\cap\overline\Omega_n,\\ -d\partial_y w_n(x,0)=\mu\frac{P_n(x)}{Q_n(x_n,y_n)}-\nu w_n(x,0), \quad &\text{in}\ [-\rho,\rho]. \end{cases} \end{equation*} From standard elliptic estimates up to the boundary, the positive function $w_n$ converges, up to extraction of some subsequence, to a classical solution $w_\infty\in [0,1]$ of \begin{equation*} \begin{cases} -d\Delta w_\infty+2d\alpha\partial_x w_\infty-(d\alpha^2+f_v(x,0)+\lambda(\alpha))w_\infty=0, \quad &\text{in}\ K\cap\overline\Omega,\\ -d\partial_y w_\infty(x,0)+\nu w_\infty (x,0)=0, \quad &\text{in}\ [-\rho,\rho]. \end{cases} \end{equation*} Moreover, $w_\infty(\cdot,0)=0$ in $[-\rho,\rho]$ and $w_\infty(x_\infty,y_\infty)=1$. Therefore, the case that $y_\infty=0$ is impossible. Assume now that $y_\infty>0$. By using the Harnack inequality up to the boundary, there exists a point $(x',y')$ in the neighborhood of $(x_\infty,y_\infty)$ belonging to $ (K\cap\overline\Omega)^\circ$ such that $w_\infty(x',y')\ge \frac{1}{2}$. Then, the strong maximum principle implies that $w_\infty>0$ in $(K\cap\overline\Omega)^\circ$. Since $w_\infty(\cdot,0)=0$ in $[-\rho,\rho]$, one infers from the boundary condition that $\partial_y w_\infty(\cdot,0)=0$ in $[-\rho,\rho]$. This is a contradiction with the Hopf lemma. This completes the proof of Lemma \ref{lemma3.6}. \end{proof} \begin{lemma} \label{lemma3.7} For any $R>R_0$, normalizing with $\Vert P_R(\cdot)\Vert_{L^\infty(\mathbb{R})}=1$, there is $C_2>0$ (independent of $R$) such that \begin{equation*} \Vert Q_R(\cdot,0)\Vert_{L^\infty(\mathbb{R})}\le C_2. \end{equation*} \end{lemma} \begin{proof} If the statement is not true, by suitable renormalization we assume that there is a sequence $(R_n)_{n\in\mathbb{N}}$ satisfying $R_n\to +\infty$ such that $\Vert Q_{R_n}(\cdot, 0)\Vert_{L^\infty(\mathbb{R})}=1$ and such that $\Vert P_{R_n}(\cdot)\Vert_{L^\infty(\mathbb{R})}\to 0$. Without loss of generality, we assume that $x_n\in [0,L]$ for all $n\in\mathbb{N}$, such that $Q_{R_n}(x_n,0)=1$. Therefore, there is $x_\infty\in[0,L]$ such that, up to some subsequence, $x_n\to x_\infty$ as $n\to+\infty$. Since $(P_{R_n})_{n\in\mathbb{N}}$ and $(Q_{R_n}(\cdot,0))_{n\in\mathbb{N}}$ are uniformly bounded in $L^\infty(\mathbb{R})$, it follows from Lemma \ref{lemma3.6} and from standard elliptic estimates up to the boundary that the function pair $(P_{R_n},Q_{R_n})$ converges as $n\to+\infty$, up to extraction of some subsequence, locally uniformly in $\overline\Omega$ to $(P_\infty,Q_\infty)$. In particular, $P_\infty\equiv 0$ in $\mathbb{R}$ and $Q_\infty(x_\infty,0)=1$. 
Moreover, $P_\infty$ satisfies \begin{equation*} -DP_\infty''+2D\alpha P_\infty'+(-D\alpha^2+\mu)P_\infty-\nu Q_\infty(\cdot,0)=\lambda(\alpha)P_\infty~~\text{in}~\mathbb{R}. \end{equation*} Then, since $P_\infty\equiv 0$ in $\mathbb{R}$, it is easily derived from the above equation that $Q_\infty(\cdot,0)\equiv 0$ in $\mathbb{R}$, which contradicts $Q_\infty(x_\infty,0)=1$. The proof of this lemma is thereby complete. \end{proof} \begin{proof}[Proof of Theorem \ref{thm 5.1}] By elliptic estimates and Lemmas \ref{lemma3.5}--\ref{lemma3.7}, the eigenfunction pair $(P_R,Q_R)$ converges locally uniformly in $\overline\Omega$ as $R\to+\infty$ to a nonnegative and $L$-periodic (in $x$) function pair $(P_\alpha, Q_\alpha)$ solving the generalized eigenvalue problem \eqref{eigenvalue pb in half-plane} in the half-plane $\overline\Omega$ associated with the generalized principal eigenvalue $\lambda(\alpha)$. Moreover, up to normalization, it follows that $P_\alpha\le 1$ in $\mathbb{R}$ and $Q_\alpha$ is locally bounded in $\overline \Omega$. By the strong maximum principle and the Hopf lemma, we further derive that $(P_\alpha, Q_\alpha)$ is positive in $\overline\Omega$. Assume now that $\varLambda$ corresponds to a positive and $L$-periodic (in $x$) eigenfunction pair $(P,Q)$ such that the generalized eigenvalue problem \eqref{eigenvalue pb in half-plane} is satisfied. By reasoning as in the proof of Proposition \ref{principal eigenvalue in strip} (iii), it follows that $\varLambda<\lambda_R(\alpha)$ for any $R>R_0$, which yields $\varLambda\le \lambda(\alpha)$ by letting $R\to+\infty$. \end{proof} \subsection{Spreading speeds and pulsating fronts in the half-plane} This subsection is devoted to the proofs of Theorems \ref{thm-asp-half-plane} and \ref{PTF-in half-plane}. We start with the variational characterization of the rightward and leftward asymptotic spreading speeds $c^*_\pm$ by using the generalized principal eigenvalue constructed in the preceding subsection. Define \begin{equation*} c^*_+:=\inf_{\alpha>0}\frac{-\lambda(\alpha)}{\alpha},~~~c^*_-:=\inf_{\alpha>0}\frac{-\lambda(-\alpha)}{\alpha}. \end{equation*} Thanks to \eqref{bound in half-plane}, it is seen that $c^*_+\in [2\sqrt{dm},+\infty)$ is well-defined. Moreover, we point out that, from the definitions of $\lambda(\pm\alpha)$ and of $c^*_{\pm}$ and from the property that $\lambda_{R}(\alpha)=\lambda_R(-\alpha)$ for all $\alpha\in\mathbb{R}$ (for any $R>R_0$) shown in the proof of Lemma \ref{lemma-same speed}, it follows that $c^*_+=c^*_-$. In what follows, we denote $c^*:=c^*_+=\inf_{\alpha>0}-\lambda(\alpha)/\alpha>0$. \begin{lemma} \label{lemma-c*} There holds $c^*_R<c^*$ and $c^*_R\to c^*$ as $R\to+\infty$. \end{lemma} \begin{proof} Since the function $R\mapsto-\lambda_R(\alpha)$ is increasing for all $\alpha\in\mathbb{R}$, one has $-\lambda_R(\alpha)<-\lambda(\alpha)$ for all $\alpha\in\mathbb{R}$. This implies \begin{equation*} \frac{-\lambda_R(\alpha)}{\alpha}<\frac{-\lambda(\alpha)}{\alpha} ~~~ \text{for all}\ \alpha>0. \end{equation*} Furthermore, \begin{equation*} \inf\limits_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}<\inf\limits_{\alpha>0}\frac{-\lambda(\alpha)}{\alpha}, \end{equation*} which implies \begin{equation} \label{c*} 0<c^*_R<c^*. \end{equation} It remains to prove that $c^*_R\to c^*$ as $R\to+\infty$.
Since the functions $\alpha\mapsto -\lambda_R(\alpha)$ and $\alpha\mapsto-\lambda(\alpha)$ are convex and continuous in $\mathbb{R}$, one has $\alpha\mapsto-\lambda_R(\alpha)/\alpha$ and $\alpha\mapsto-\lambda(\alpha)/\alpha$ are continuous for all $\alpha\in(0,+\infty)$. Since $-\lambda_R(\alpha)/\alpha$ increasingly converges to $-\lambda(\alpha)/\alpha$ as $R\to+\infty$ for each $\alpha\in(0,+\infty)$, the Dini's Theorem (see, e.g., \cite[Theorem 7.13]{Rudin1976}) implies that \begin{equation*} \frac{-\lambda_R(\alpha)}{\alpha}\to \frac{-\lambda(\alpha)}{\alpha} \quad \text{as}\ R\to+\infty~~\text{uniformly in}~\alpha\in (0,+\infty). \end{equation*} On the other hand, it is seen from \eqref{bound in strip} and \eqref{bound in half-plane} that both $-\lambda_R(\alpha)/\alpha$ and $-\lambda(\alpha)/\alpha$ tend to infinity as $\alpha\to 0^+$ and as $\alpha\to+\infty$. One then concludes that \begin{equation*} \inf_{\alpha>0}\frac{-\lambda_R(\alpha)}{\alpha}\to \inf_{\alpha>0}\frac{-\lambda(\alpha)}{\alpha} \quad \text{as}\ R\to+\infty. \end{equation*} That is, $c^*_R\to c^*$ as $R\to+\infty$. The proof is thereby complete. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-asp-half-plane}] (i) We first construct the upper bound in the rightward propagation. Let $(u,v)$ be the solution of \eqref{pb in half-plane} with nonnegative, bounded, continuous and compactly supported initial condition $(u_0,v_0)\not\equiv(0,0)$. We need to show \begin{equation} \label{find upper bound} \lim\limits_{ t\to+\infty}\sup\limits_{x\ge ct,\ 0\le y\le A} |(u(t,x),v(t,x,y))|=0~~\text{for all}~c> c^*, \end{equation} For any $c>c^*$, choose $c'\in[c^*,c)$ and $\alpha>0$ such that $-\lambda( \alpha)=\alpha c'$. Let $(\lambda(\alpha);(P_\alpha, Q_\alpha))$ be the generalized principal eigenpair of \eqref{eigenvalue pb in half-plane} derived in Theorem \ref{thm 5.1}. Since $(u_0,v_0)$ is compactly supported, %satisfying $0\le u_0\le p<\nu/\mu$ in $\mathbb{R}$ and $0\le v_0\le q<1$ in $\overline\Omega$ for some $(p,q)\in\mathbb{\overline P}$, one infers that, for some $\gamma>0$, $\gamma e^{-\alpha(x-c't)}(P_\alpha(x), Q_\alpha(x,y))$ lies above $(u_0,v_0)$ at time $t=0$. Thanks to the KPP assumption, one further deduces that $\gamma e^{-\alpha(x-c't)}(P_\alpha(x), Q_\alpha(x,y))$ is an exponential supersolution of the Cauchy problem \eqref{pb in half-plane} and $\gamma e^{-\alpha(x-c't)}(P_\alpha(x), Q_\alpha(x,y))\ge (u(t,x),v(t,x,y))$ for all $t\ge 0$ and $(x,y)\in\overline\Omega$ by Proposition \ref{cp}. It follows that, for any $A>0$, \begin{equation*} \sup\limits_{x\ge ct,0\le y\le A} (u(t,x),v(t,x,y))\le \sup\limits_{x\ge ct,0\le y\le A}\gamma e^{-\alpha(c-c')t}(P_\alpha(x),Q_\alpha(x,y)), \end{equation*} whence, by Theorem \ref{thm 5.1} and by passing to the limit $t\to+\infty$, the formula \eqref{find upper bound} is proved. (ii) Let us prove the lower bound \eqref{find lower bound}. Choose any $c\in(0,c^*)$. Let $(u,v)$ be the solution of \eqref{pb in half-plane} with nonnegative, nontrivial, bounded and continuous initial condition $(u_0,v_0)<(\nu/\mu,1)$. Thanks to \eqref{truncated to half-plane}, we know that $(U_B(x), V_B(x,y))$ increasingly converges to $(\nu/\mu,1)$ as $B\to +\infty$ uniformly in $x$ and locally uniformly in $y$. 
Since $(u_0,v_0)<(\nu/\mu,1)$ in $\overline\Omega$, for $B>R_0$ sufficiently large, there is a smooth cut-off function $\chi^B:[0,+\infty)\mapsto[0,1]$ satisfying $\chi^B(\cdot)=1$ in $[0,B-1]$ and $\chi^B(\cdot)=0$ in $[B,+\infty)$, such that $(0,0)\le (u_0,\chi^B v_0)\le (U_B,V_B)$ in $\overline\Omega_B$. Let $(u_B,v_B)$ be the solution to the Cauchy problem \eqref{pb in strip} in $\overline\Omega_B$ with initial datum $(u_0,\chi^B v_0)$ and let $(U_B, V_B)$ be the associated unique nontrivial stationary solution of \eqref{pb in strip}. By Lemma \ref{lemma-c*}, up to increasing $B$, the asymptotic spreading speed $c^*_B$ of the solution $(u_B,v_B)$ to \eqref{pb in strip} in $\overline\Omega_B$ can be very close to $c^*$, say $c^*_B\sim c^*$, such that $c<c^*_B<c^*$. From Theorem \ref{thm-asp-strip}, one derives \begin{equation*} \lim\limits_{t\to+\infty}\inf\limits_{0\le x\le ct,\ y\in [0,B]}(u_B(t,x),v_B(t,x,y))=(U_B(x), V_B(x,y)), \end{equation*} due to $0<c<c^*_B$. Notice that $(u,v)$ is a strict supersolution to problem \eqref{pb in strip} with initial datum $(u_0,\chi^Bv_0)$ in $\overline\Omega_B$, Proposition \ref{cp-strip} yields $(u(t,x),v(t,x,y))>(u_B(t,x),v_B(t,x,y))$ for all $t>0$ and $(x,y)\in\overline\Omega_B$. Thus, for all $0<A\le B$, it follows that \begin{equation*} (U_B(x),V_B(x,y))\le \lim\limits_{t\to+\infty}\inf\limits_{0\le x\le ct, y\in[0,A]}(u(t,x),v(t,x,y))\le ({\nu}/{\mu},1). \end{equation*} Passing to the limit $B\to+\infty$ together with Proposition \ref{prop3.9} (ii) implies that, for any $A>0$, \begin{equation*} \lim\limits_{ t\to+\infty}\inf\limits_{0\le x\le ct,0\le y\le A} (u(t,x),v(t,x,y))=({\nu}/{\mu},1). \end{equation*} The proof of Theorem \ref{thm-asp-half-plane} is thereby complete. \end{proof} Finally, we prove Theorem \ref{PTF-in half-plane} in the right direction, that is, problem \eqref{pb in half-plane} admits rightward pulsating fronts if and only if $c\ge c^*$. The proof is based on an asymptotic method. \begin{proof}[Proof of Theorem \ref{PTF-in half-plane}] Fix $c\ge c^*$, one infers from \eqref{c*} that $c>c^*_R$ for any $R>R_0$. It follows from Theorem \ref{thm-PTF-in strip} that the truncated problem \eqref{pb in strip} admits a rightward pulsating traveling front $(u_R(t,x),v_R(t,x,y))=(\phi_R(x-ct,x),\psi_R(x-ct,x,y))$ with wave speed $c$ in the strip $\overline\Omega_R$ connecting $(U_R,V_R)$ and $(0,0)$. Moreover, the profile $(\phi_R(s,x),\psi_R(s,x,y))$ is decreasing in $s$ and $L$-periodic in $x$. Consider a sequence $(R_n)_{n\in\mathbb{N}}$ such that $R_n\to+\infty$ as $n\to+\infty$. Denote by $(\phi_{R_n}(s,x),\psi_{R_n}(s,x,y))$ the sequence of the rightward pulsating traveling fronts of \eqref{pb in strip} with speed $c$ and by $(U_{R_n},V_{R_n})$ the corresponding nontrivial steady states of \eqref{pb in strip} in the strips $\overline\Omega_{R_n}$. One has \begin{align*} \phi_{R_n}(-\infty,x)=U_{R_n}(x), ~~~~~~~ \phi_{R_n}(+\infty,x)=0,\cr \psi_{R_n}(-\infty,x,y)=V_{R_n}(x,y), \ \psi_{R_n}(+\infty,x,y)=0, \end{align*} uniformly in $(x,y)\in\overline\Omega_{R_n}$. Moreover, it follows from Proposition \ref{prop3.9} that $0<U_{R_n}<{\nu}/{\mu}$ in $\mathbb{R}$, $0<V_{R_n}<1$ in $\mathbb{R}\times[0,R)$. By the limiting property in Proposition \ref{prop3.9} (ii), one can assume, without loss of generality, that $\frac{4\nu}{5\mu}<U_{R_n}(\cdot) <\frac{\nu}{\mu}$ in $\mathbb{R}$ for each $n\in\mathbb{N}$. 
Then due to the monotonicity and continuity of the function $s\mapsto\phi_{R_n}(s,\cdot)$, there is a unique $s_n\in\mathbb{R}$ such that \begin{equation*} \max\limits_{x\in\mathbb{R}}\phi_{R_n}(s_n,\cdot)=\max\limits_{x\in[0,L]}\phi_{R_n}(s_n,\cdot)=\frac{\nu}{2\mu}. \end{equation*} Set $(\phi_n(s,x),\psi_n(s,x,y)):=(\phi_{R_n}(s+s_n,x),\psi_{R_n}(s+s_n,x,y))$. Since \begin{equation*} \big(u_n(\frac{x-s}{c},x),v_n(\frac{x-s}{c},x,y)\big)=(\phi_n(s,x),\psi_n(s,x,y)), \end{equation*} by standard parabolic estimates, the sequence $((u_n, v_n))_{n\in\mathbb{N}}$ converges, up to extraction of a subsequence, locally uniformly to a classical solution $\big(u(\frac{x-s}{c},x), v(\frac{x-s}{c},x,y)\big)=(\phi(s,x),\psi(s,x,y))$ of \eqref{pb in half-plane} satisfying the normalization condition \begin{equation*} \max\limits_{x\in\mathbb{R}}\phi(0,\cdot)=\max\limits_{x\in[0,L]}\phi(0,\cdot)=\frac{\nu}{2\mu}. \end{equation*} Moreover, the profile $(\phi(s,x),\psi(s,x,y))$ is non-increasing in $s$ and $L$-periodic in $x$ such that \begin{align*} \phi(-\infty,x)&={\nu}/{\mu}, ~~~~ \phi(+\infty,x)=0,\cr \psi(-\infty,x,y)&=1, ~~~~~ \psi(+\infty,x,y)=0, \end{align*} uniformly in $x\in\mathbb{R}$ and locally uniformly in $y\in[0,+\infty)$. Now, let us show the monotonicity of $(\phi(s,x),\psi(s,x,y))$ in $s$. Since the pulsating front $(u(t,x),v(t,x,y))=(\phi(x-ct,x),\psi(x-ct,x,y))$ propagates with speed $c\ge c^*>0$, it follows that $u_t\ge 0$ for $t\in\mathbb{R}$ and $x\in\mathbb{R}$, $v_t\ge 0$ for $t\in\mathbb{R}$ and $(x,y)\in\overline\Omega$. Notice also that $(u(t,x), v(t,x,y))$ is a global classical solution of problem \eqref{pb in half-plane}, whence $z=v_t$ is a global classical solution of $z_t=d\Delta z+f_v(x,v)z$ for $t\in\mathbb{R}$ and $(x,y)\in\Omega$ with $z\ge 0$. From the strong parabolic maximum principle, it follows that $z>0$ or $z\equiv 0$ for $t\in\mathbb{R}$ and $(x,y)\in\Omega$. That is, $v_t>0$ or $v_t\equiv 0$ for $t\in\mathbb{R}$ and $(x,y)\in\Omega$. The latter case is impossible, otherwise one would derive from $v_t\equiv0$ that either $v\equiv 0$ or $v\equiv 1$ for $t\in\mathbb{R}$ and $(x,y)\in\Omega$. This is a contradiction with the limiting behavior of the pulsating fronts. Therefore, $v_t>0$ for $t\in\mathbb{R}$ and $(x,y)\in\Omega$ and by continuity $v_t>0$ for $t\in\mathbb{R}$ and $(x,y)\in\overline\Omega$. Likewise, one infers that $u_t>0$ for $t\in\mathbb{R}$ and $x\in\mathbb{R}$. Hence, the rightward traveling fronts $(\phi(s,x),\psi(s,x,y))$ are decreasing in $s$. Assume that there exists a rightward pulsating traveling front $(\phi(x-ct,x),\psi(x-ct,x,y))$ of \eqref{pb in half-plane} with speed $c>0$. Then, one infers from Theorem \ref{thm-asp-half-plane} that, for any $c'\in[0,c^*)$ and for any $B>0$, \begin{equation*} \lim\limits_{ t\to+\infty}\sup\limits_{0<x\le c't, y\in[0,B]}|(\phi(x-ct,x),\psi(x-ct,x,y))-({\nu}/{\mu},1)|=0. \end{equation*} In particular, for any $c'\in[0,c^*)$ and for any $B>0$, taking $x=c't$ and $y\in[0,B]$, there holds \begin{equation*} \lim\limits_{t\to+\infty}\phi((c'-c)t,c't)={\nu}/{\mu},\quad \lim\limits_{t\to+\infty}\psi((c'-c)t,c't,y)=1. \end{equation*} From the limiting condition \eqref{PTF-limit condition}, it follows that $c'<c$ for all $c'\in[0,c^*)$. Consequently, one gets $c^*\le c$. This implies the non-existence of rightward pulsating traveling fronts with speed $0<c<c^*$. 
\end{proof} \section*{Acknowledgments} I would like to express my sincerest gratitude to Professor François Hamel and Professor Xing Liang for their continued guidance, support and encouragement during the preparation of this project. I would like to thank Lei Zhang for many helpful discussions and thank Professor Xuefeng Wang and the anonymous referee for their careful reading and for their valuable and constructive comments which led to an important improvement of this manuscript. \end{document}
2205.05274v1
http://arxiv.org/abs/2205.05274v1
Connected power domination number of product graphs
\documentclass[sn-mathphys]{sn-jnl} \jyear{2022} \theoremstyle{thmstyleone}\newtheorem{theorem}{Theorem}\newtheorem{proposition}[theorem]{Proposition} \theoremstyle{thmstylethree}\newtheorem{example}{Example}\newtheorem{remark}{Remark} \newtheorem{observation}{Observation} \theoremstyle{thmstylethree}\newtheorem{definition}{Definition}\newtheorem{corollary}[theorem]{Corollary} \raggedbottom \begin{document} \title[Connected power domination number of product graphs]{Connected power domination number of product graphs} \author*{ \sur{S. Ganesamurthy}}\email{[email protected]} \author{\sur{J. Jeyaranjani}}\email{[email protected]} \equalcont{These authors contributed equally to this work.} \author{\sur{R. Srimathi}}\email{[email protected]} \equalcont{These authors contributed equally to this work.} \affil*[1]{\orgdiv{Department of Mathematics}, \orgname{Periyar University}, \orgaddress{\city{Salem}, \postcode{636011}, \state{Tamil Nadu}, \country{India}}} \affil[2]{\orgdiv{Department of Computer Science and Engineering}, \orgname{Kalasalingam Academy of Research and Education}, \orgaddress{\street{ Krishnankoil}, \city{Srivilliputhur}, \postcode{626128}, \state{Tamil Nadu}, \country{India}}} \affil[3]{\orgdiv{Department of Mathematics}, \orgname{Idhaya College of Arts and Science for Women}, \orgaddress{\city{Lawspet}, \postcode{605008}, \state{Puducherry}, \country{India}}} \abstract{In this paper, we consider the connected power domination number ($\gamma_{P, c}$) of three standard graph products. The exact value of $\gamma_{P, c}(G\circ H)$ is obtained for any two non-trivial graphs $G$ and $H.$ Further, tight upper bounds are proved for the connected power domination number of the Cartesian product of two graphs $G$ and $H.$ Consequently, the exact value of the connected power domination number of the Cartesian product of some standard graphs is determined. Finally, the connected power domination number of the tensor product of graphs is discussed.} \keywords{Connected power domination number, Power domination number, Product graphs.} \pacs[MSC Classification]{05C38, 05C76, 05C90.} \maketitle \section{Introduction} We only consider non-trivial simple connected graphs of finite order, unless otherwise stated. For a vertex $v\in V(G),$ the \textit{open neighborhood} of $v$ is $N(v)=\{u\,:\,uv\in E(G)\}$ and the \textit{closed neighborhood} of $v$ is $N[v]=\{v\}\cup N(v).$ For a set $A\subset V(G),$ the \textit{open neighborhood of $A$} is $N(A)= \cup_{v\in A} N(v)$ and the \textit{closed neighborhood of $A$} is $N[A]=\cup_{v\in A} N[v].$ The subgraph of the graph $G$ induced by a subset $A$ of the vertices of $G$ is denoted by $\langle A \rangle.$ A vertex $v\in V(G)$ is called a \textit{universal vertex} of $G$ if $v$ is adjacent to every other vertex of the graph $G.$ Let $K_n,\,P_n,\,C_n,\,W_n,\,F_n,$ and $K_{m,\,n},$ respectively, denote the complete graph, path, cycle, wheel, fan, and complete bipartite graph. For $k\geq 3$ and $1\leq m_1\leq m_2\leq \dots\leq m_k,$ the complete multipartite graph with partite sets of sizes $m_1,\,m_2,\,\dots,\,m_k$ is denoted by $K_{m_1,\,m_2,\,\dots,\,m_k}.$ Let $S\subset V(G).$ If $N[S]=V(G),$ then $S$ is called a \textit{dominating set}. If the subgraph induced by a dominating set $S$ is connected, then we say that $S$ is a \textit{connected dominating set}. If a dominating set $S$ satisfies $N(v) \cap S \neq \emptyset$ for each vertex $v\in V(G),$ then we call $S$ a \textit{total dominating set}.
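These domination variants are easy to verify computationally. The following Python sketch is purely illustrative and is not part of the results of this paper; it represents a graph as an adjacency dictionary (a convention of ours) and checks whether a given vertex subset is a dominating, connected dominating, or total dominating set.
\begin{verbatim}
# Illustrative sketch (ours): checking the domination variants above for a
# graph given as an adjacency dictionary {vertex: set_of_neighbours}.

def is_dominating(G, S):
    """N[S] = V(G)."""
    dominated = set(S)
    for v in S:
        dominated |= G[v]
    return dominated == set(G)

def is_total_dominating(G, S):
    """Every vertex of G has a neighbour in S (open neighbourhoods)."""
    return all(G[v] & set(S) for v in G)

def is_connected_dominating(G, S):
    """S dominates G and the induced subgraph <S> is connected."""
    S = set(S)
    if not S or not is_dominating(G, S):
        return False
    seen, stack = set(), [next(iter(S))]
    while stack:                       # depth-first search inside <S>
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(G[v] & S)
    return seen == S

# The star K_{1,3}: its centre 0 is dominating but not total dominating.
K13 = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(is_dominating(K13, {0}), is_total_dominating(K13, {0}))  # True False
\end{verbatim}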
The minimum cardinality of a dominating set (connected dominating set) of $G$ is called the domination number (connected domination number) of $G$ and is denoted by $\gamma(G)$ ($\gamma_c(G)$). \emph{\textbf{Algorithm:}}\cite{dmks22} For the graph $G$ and a set $S\subset V(G),$ let $M(S)$ be the collection of vertices of $G$ monitored by $S.$ The set $M(S)$ is built by the following rules: \begin{enumerate} \item (Domination) \item[] Set $M(S) \leftarrow S\cup N(S).$ \item (Propagation) \item[] As long as there exists $v\in M(S)$ such that $N(v)\cap (V(G)-M(S))=\{w\},$ set $M(S)\leftarrow M(S)\cup \{w\}.$ \end{enumerate} In other words, initially the set $M(S)=N[S],$ and then we repeatedly add to $M(S)$ any vertex $w$ that has a neighbor $v$ in $M(S)$ such that all the other neighbors of $v$ are already in $M(S).$ When no such vertex $w$ exists, the set monitored by $S$ has been constructed. For a subset $S$ of $V(G),$ if $M(S)=V(G),$ then the set $S$ is called a \textit{power dominating set} (PDS). The minimum cardinality of a power dominating set of $G$ is denoted by $\gamma_{p}(G).$ If the subgraph of $G$ induced by the vertices of a PDS $S$ is connected, then the set $S$ is a \textit{connected power dominating set} (CPDS), and its minimum cardinality is denoted by $\gamma_{P,\,c}(G).$ \noindent {\bf \cite{laa428} Color-change rule:} \textit{If $G$ is a graph with each vertex colored either white or black, $u$ is a black vertex of $G,$ and exactly one neighbor $v$ of $u$ is white, then change the color of $v$ to black. Given a coloring of $G,$ the derived coloring is the result of applying the color-change rule until no more changes are possible.} A \textit{zero forcing set} for a graph $G$ is a set $Z\subset V(G)$ such that if initially the vertices in $Z$ are colored black and the remaining vertices are colored white, the entire graph $G$ may be colored black by repeatedly applying the color-change rule. The zero forcing number of $G, Z(G),$ is the minimum cardinality of a zero forcing set. If the subgraph induced by a zero forcing set $Z$ is connected, then we call such a set a \textit{connected zero forcing set} (CZFS) and denote it by $Z_c.$ The connected zero forcing number of $G, Z_c(G),$ is the minimum cardinality of a connected zero forcing set. For a graph $G$ and a set $X \subseteq V(G),$ the set $X_i,\,i>0,$ denotes the collection of all vertices of the graph $G$ monitored by the propagation up to step $i,$ that is, $X_1=N[X]$ (dominating step) and $X_{i+1}=\cup\{N[v]\,:\, v\in X_i$ such that $\vert N[v]\setminus X_i\vert \leq 1\}$ (propagation steps). Similarly, for a connected zero forcing set $Z_c \subseteq V(G)$ and $i\geq 1,$ let $Z_c^i$ denote the collection of all vertices of the graph $G$ whose color changed from white to black at step $i$ (propagation steps). For two graphs $G$ and $H,$ the vertex set of the Cartesian product ($G\square H$), the tensor product $(G\times H),$ and the lexicographic product ($G\circ H$) is $V(G)\times V(H).$ The adjacency relationship between the vertices $u=(a,\,b)$ and $v=(x,\,y)$ in these products is as follows: \begin{itemize} \item Cartesian product: $uv\in E(G\square H)$ if either $a=x$ and $by\in E(H),$ or $b=y$ and $ax\in E(G).$ \item Tensor product: $uv\in E(G\times H)$ if $ax\in E(G)$ and $by\in E(H).$ \item Lexicographic product: $uv\in E(G\circ H)$ if $ax\in E(G),$ or $a=x$ and $by\in E(H).$ \end{itemize} Let $G \ast H$ be any of the three graph products defined above.
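For concreteness, the monitoring rules and the three adjacency conditions above can be checked mechanically; the following Python sketch is illustrative only (the adjacency-dictionary representation and the helper names are ours, not from the cited works). The final two checks confirm, for instance, that a single vertex power-dominates $P_4$ but does not power-dominate $C_4\square K_2.$
\begin{verbatim}
# Illustrative sketch (ours): the monitored set M(S) and the three graph
# products, for graphs given as adjacency dictionaries {vertex: set}.
from itertools import product as pairs

def monitored_set(G, S):
    """Domination step followed by repeated propagation steps."""
    M = set(S)
    for v in S:                       # domination: M(S) <- S U N(S)
        M |= G[v]
    changed = True
    while changed:                    # propagation: a monitored v with a
        changed = False               # unique unmonitored neighbour w forces w
        for v in list(M):
            unmonitored = G[v] - M
            if len(unmonitored) == 1:
                M |= unmonitored
                changed = True
    return M

def is_power_dominating(G, S):
    return monitored_set(G, S) == set(G)

def graph_product(G, H, kind):
    """Cartesian ('box'), tensor ('times'), or lexicographic ('circ')."""
    V = [(a, b) for a in G for b in H]
    P = {v: set() for v in V}
    for (a, b), (x, y) in pairs(V, V):
        if (a, b) == (x, y):
            continue
        g_adj, h_adj = x in G[a], y in H[b]
        if kind == 'box':
            adjacent = (a == x and h_adj) or (b == y and g_adj)
        elif kind == 'times':
            adjacent = g_adj and h_adj
        else:                         # 'circ'
            adjacent = g_adj or (a == x and h_adj)
        if adjacent:
            P[(a, b)].add((x, y))
    return P

P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
K2 = {'a': {'b'}, 'b': {'a'}}
print(is_power_dominating(P4, {0}))                                   # True
print(is_power_dominating(graph_product(C4, K2, 'box'), {(0, 'a')}))  # False
\end{verbatim}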
The subgraph of $G \ast H$ induced by $\{g\}\times V(H)$ ($V(G)\times \{h\}$) is called an $H$-fiber ($G$-fiber) and is denoted by $^gH$ ($G^h$). Notation and definitions which are not presented here can be found in \cite{rbbook,hikbook}. The problem of computing the power domination number of $G$ is NP-hard in general. Complexity results for power domination in graphs are studied in \cite{ajco19,gnr52,hhhh15,lllncs}. Further, upper bounds for the power domination number of graphs are obtained in \cite{zkc306}. Furthermore, the power domination numbers of some standard families of graphs and of product graphs are studied in \cite{bf58,bgpv38,dmks22,dh154,ks13,ks16,skp18,sk11,sk48,vthesis,vvlncs,vvh38}. Recently, Brimkov et al. \cite{bms38} introduced the concept of the connected power domination number of a graph and obtained its exact value for trees, block graphs, and cactus graphs. Further, in \cite{gplncs}, complexity results for split graphs, chain graphs, and chordal graphs are considered. In this paper, we extend the study of the connected power domination number to three standard products. \section{The Lexicographic Product} The exact value of the power domination number of the lexicographic product of graphs was obtained in \cite{dmks22}. In this section, we obtain the exact value of the connected power domination number of $G\circ H;$ here the graph $H$ is not assumed to be connected. \begin{theorem} For any two graphs $G$ and $H,$ \begin{center} $\gamma_{P,c}(G\circ H)= \left\{ \begin{array}{rl} \mbox{$\gamma_c(G);$} & \mbox{ if $\gamma_c(G)\geq 2,$} \\ \mbox{$1;$} & \mbox{if either $\gamma(G)=\gamma(H)=1$ or $\gamma(G)=1$ and $H\cong \overline{K_2},$}\\ \mbox{$2;$} & \mbox{if $\gamma(G)=1$ and $\gamma(H)>1$ with $\vert V(H)\vert\geq 3.$} \end{array}\right.$ \end{center} \end{theorem} \begin{proof} First we complete the proof for the case $\gamma_c(G)\geq 2.$ Let $X$ be a minimum connected dominating set of $G$ and let $u\in V(H).$ Set $S=X\times \{u\}.$ As $X$ is a connected dominating set of $G,$ it is a total dominating set of $G;$ consequently, each vertex of $G$ is a neighbor of some vertex in $X.$ Thus each vertex $(g,\,h)\in V(G\circ H)$ is a neighbour of some vertex in $S.$ Since $\langle S\rangle$ is connected and $S$ monitors each vertex of $G\circ H,$ we get $\gamma_{P,c}(G\circ H)\leq \gamma_c(G).$ Assume that $S$ is a connected power dominating set of $G\circ H$ whose cardinality is strictly less than $\gamma_c(G).$ Then there exists a vertex $u\in V(G)$ such that $(\{u\}\times V(H)) \cap N[S]=\emptyset;$ otherwise, the projection of $S$ onto $G$ would be a connected dominating set of $G$ with fewer than $\gamma_c(G)$ vertices. Hence the vertices in $\{u\}\times V(H)$ must be monitored by the propagation step. Let $A= \{u\}\times V(H).$ Clearly, each vertex in $V(G\circ H)\setminus A$ has either zero or $\vert A\vert$ neighbours in the $^uH$-fiber $\langle A\rangle.$ Therefore propagation into the $^uH$-fiber is not possible, as $\vert V(H)\vert\geq 2.$ Therefore $\gamma_{P,c}(G\circ H)\geq \gamma_c(G).$ Let $\gamma(G)=\gamma(H)=1.$ Then the graphs $G$ and $H$ have universal vertices, namely, $u$ and $v,$ respectively.
Consequently, the vertex $(u,\,v)\in V(G\circ H)$ is a universal vertex of the graph $G\circ H.$ Thus $\gamma_{P,c}(G\circ H)=1.$ Consider $\gamma(G)=1$ and $H\cong \overline{K_2}.$ Let $u$ be a universal vertex of $G$ and let $V(H)=\{x,\,y\}.$ Then the vertex $(u,\,x)\in V(G\circ H)$ dominates all the vertices of the graph $G\circ H$ except $(u,\,y).$ Clearly, the vertex $(u,\,y)$ is monitored by the propagation step, as $(u,\,y)$ is the only unmonitored vertex of $G\circ H.$ Therefore, $\gamma_{P,c}(G\circ H)=1.$ Assume that $\gamma(G)=1$ and $\gamma(H)>1.$ It is easy to observe that $\gamma_{P,c}(G\circ H)\geq 2$ as $\vert V(H)\vert\geq 3$ and $\gamma(H)>1.$ Let $u$ be a universal vertex of the graph $G,$ let $v\in V(G)\setminus\{u\},$ and let $a\in V(H).$ Then the set $\{(u,\,a),\,(v,\,a)\}$ dominates all the vertices of the graph $G\circ H.$ Since $u$ is a universal vertex, $\langle \{(u,\,a),\,(v,\,a)\}\rangle\cong K_2.$ Hence, $\gamma_{P,c}(G\circ H)\leq 2.$ \end{proof} \section{The Cartesian Product} We begin this section by proving a general upper bound for the connected power domination number of $G\square H.$ \begin{theorem} For any two graphs $G$ and $H,$ \begin{center} $\gamma_{P,c}(G \,\square\,H)\leq \min\{\gamma_{P,c}(G)\vert V(H)\vert,\, \gamma_{P,c}(H)\vert V(G)\vert\}.$ \end{center} \end{theorem} \begin{proof} Let $X$ be a CPDS of $G.$ Consider $X'=X\times V(H).$ Clearly, for each vertex $u\in X,$ the $^uH$-fiber is observed as $\{u\}\times V(H)\subseteq X'.$ Also, by our choice of $X',$ for each vertex $v\in N(X),$ the $^vH$-fiber is observed (dominating step). To complete the proof, it is enough to show that if $w\in X_i,$ then $V(^wH)\subseteq X_i'.$ We proceed by induction. The result is true for $i=1.$ Assume that the result holds for some $i>0.$ Let $w\in X_{i+1}.$ If $w\in X_i,$ then $V(^wH)\subseteq X_i'$ by the induction hypothesis. If $w\notin X_i,$ then there exists a vertex $y\in X_i$ which is a neighbour of $w$ such that $\vert N[y]\setminus X_i\vert\leq 1.$ This gives $V(^yH)\subseteq X_i',$ by the induction hypothesis. Hence, for fixed $h\in V(H),\,\vert N[(y,\,h)]\setminus X_i'\vert=\vert N[y]\setminus X_i\vert\leq 1.$ Thus, $N[(y,\,h)]\subseteq X_{i+1}',$ which implies that $(w,\,h)\in X_{i+1}'.$ As this is true for each $h\in V(H),\, V(^wH)\subseteq X_{i+1}'.$ Therefore, $\gamma_{P,c}(G \,\square\,H)\leq \gamma_{P,c}(G)\vert V(H)\vert.$ It is easy to prove that $\gamma_{P,c}(G \,\square\,H)\leq \gamma_{P,c}(H)\vert V(G)\vert$ as the Cartesian product is commutative. \end{proof} From the definitions of CPDS and CZFS, it is clear that if $X\subseteq V(G)$ is a CPDS, then $N[X]$ is a CZFS. From this observation, we prove the following upper bound for $\gamma_{P,c}(G\square H)$ in terms of the product of the connected zero forcing number and the connected domination number. \begin{theorem}\label{upcpdczfs} For any two graphs $G$ and $H,$ \begin{center} $\gamma_{P,c}(G \,\square\,H)\leq \min\{Z_c(G)\gamma_c(H),\, Z_c(H)\gamma_c(G)\}.$ \end{center} \end{theorem} \begin{proof} Let $Z_c$ be a CZFS of $G$ and let $S$ be a connected dominating set of $H.$ Consider $X=Z_c\times S.$ Clearly, for each vertex $u\in Z_c,$ the $^uH$-fiber is observed, as $\{u\}\times S\subseteq X$ and $S$ dominates $H.$ We claim that if $w\in Z_c^i,$ then $V(^wH)\subseteq X_i;$ we proceed by induction. The result is true for $i=0,$ where we set $Z_c^0=Z_c$ and $X_0=N[X].$ Assume that the result holds for some $i\geq 0.$ Let $w\in Z_c^{i+1}.$ If $w\in Z_c^i,$ then $V(^wH)\subseteq X_i$ by the induction hypothesis. If $w\notin Z_c^i,$ then there exists a vertex $y\in Z_c^i$ which is a neighbour of $w$ such that $\vert N[y]\setminus Z_c^i\vert\leq 1.$ This gives $V(^yH)\subseteq X_i,$ by the induction hypothesis.
Hence, for fixed $h\in V(H),\,\vert N[(y,\,h)]\setminus X_i\vert=\vert N[y]\setminus Z_c^i\vert\leq 1.$ Thus, $N[(y,\,h)]\subseteq X_{i+1},$ which implies that $(w,\,h)\in X_{i+1}.$ As this is true for each $h\in V(H),\, V(^wH)\subseteq X_{i+1}.$ Therefore, $\gamma_{P,c}(G \,\square\,H)\leq Z_c(G)\gamma_c(H).$ In a similar way, it is easy to prove that $\gamma_{P,c}(G \,\square\,H)\leq Z_c(H)\gamma_c(G).$ \end{proof} The upper bound in the above theorem is tight if $G$ has a universal vertex and $H\in\{P_n,\,C_n,\,W_n,\,F_n\}.$ Also, if we replace $Z_c$ by $Z$ and $\gamma_c$ by $\gamma$ in the above theorem, then we obtain an upper bound for $\gamma_P(G\square H)$ in terms of the zero forcing number and the domination number. \begin{corollary} For any two graphs $G$ and $H,$ \begin{center} $\gamma_{P}(G \,\square\,H)\leq \min\{Z(G)\gamma(H),\, Z(H)\gamma(G)\}.$ \end{center} \end{corollary} The following corollaries are immediate from Theorem \ref{upcpdczfs} as $Z_c(P_n)=1,$ $Z_c(C_n)=2,$ $Z_c(W_n)=3$ and $Z_c(F_n)=2.$ \begin{corollary} For a graph $G,$ $\gamma_{P,c}(G \,\square\,P_n)\leq \gamma_c(G).$ \end{corollary} \begin{corollary}\label{cpdgboxcn} For a graph $G,$ $\gamma_{P,c}(G \,\square\,C_n)\leq 2\gamma_c(G),$ where $\vert V(G)\vert\geq 3.$ \end{corollary} \begin{corollary}\label{cpdgboxwn} For $n\geq 4$ and a graph $G,\,\gamma_{P,c}(G \,\square\,W_n)\leq 3\gamma_c(G),$ where $\vert V(G)\vert\geq 3.$ \end{corollary} \begin{corollary}\label{cpdgboxfn} For a graph $G,$ $\gamma_{P,c}(G \,\square\,F_n)\leq 2\gamma_c(G),$ where $\vert V(G)\vert\geq 3$ and $n\geq 3.$ \end{corollary} As mentioned earlier, the upper bounds in the above four corollaries are tight if $G$ has a universal vertex. Some of their consequences are listed in the following table. \begin{table}[!h] \begin{center} \begin{tabular}{ l l l } \hline Result & Product graph & $\gamma_{P,c}$ \\\hline Corollary \ref{cpdgboxcn} & $C_m\square K_n,\,m,\,n\geq 3 $& 2 \\ Corollary \ref{cpdgboxcn} & $C_m\square W_n,\,m\geq 3$ and $n\geq 4$ & 2 \\ Corollary \ref{cpdgboxcn} & $C_m\square K_{1,\,n},\,m,\,n\geq 3 $& 2 \\ Corollary \ref{cpdgboxcn} & $C_m\square F_n,\,m,\,n\geq 3 $& 2 \\ Corollary \ref{cpdgboxwn} & $W_m\square W_n,\,m,\,n\geq 4$ & 3 \\ Corollary \ref{cpdgboxwn} & $W_m\square K_{1,\,n},\,m,\,n\geq 4 $& 3 \\ Corollary \ref{cpdgboxwn} & $W_m\square K_n,\,m,\,n\geq 4$ & 3 \\ Corollary \ref{cpdgboxfn} & $F_m\square F_n,\,m,\,n\geq 3$ & 2 \\ Corollary \ref{cpdgboxfn} & $F_m\square K_n,\,m,\,n\geq 3$ & 2\\ Corollary \ref{cpdgboxfn} & $F_m\square K_{1,\,n},\,m,\,n\geq 3$ & 2\\ Corollary \ref{cpdgboxfn} & $F_m\square W_n,\,m\geq 3$ and $n\geq 4$ &2\\\hline \end{tabular} \end{center} \end{table} \begin{observation}\label{O1} For any graph $G,$ $\gamma_p(G)\leq \gamma_{P,c}(G).$ \end{observation} \begin{theorem}\cite{sk11}\label{pdofkmtimeskn} For $2\leq m\leq n,$ $\gamma_p(K_m\square K_n)=m-1.$ \end{theorem} \begin{theorem} For $2\leq m\leq n,$ $\gamma_{P,c}(K_m\square K_n)=m-1.$ \end{theorem} \begin{proof} By Theorem \ref{pdofkmtimeskn} and Observation \ref{O1}, we have $m-1\leq \gamma_{P,c}(K_m\square K_n).$ Let $V(K_m)=\{v_1,\,v_2,\,\dots,\,v_m\}$ and $V(K_n)=\{u_1,\,u_2,\,\dots,\,u_n\}.$ It is easy to observe that the set $S=\{(v_1,\,u_1),\,(v_2,\,u_1),\,\dots,\,(v_{m-1},\,u_1)\}$ is a CPDS of $K_m\square K_n.$ Thus, $\gamma_{P,c}(K_m\square K_n) = m-1$ as $\vert S\vert=m-1.$\end{proof} \begin{theorem}\cite{ks16}\label{pdkmtimesk1,n} For $m,\,n\geq 3,$ $\gamma_{P}(K_m\square K_{1,\,n})=\min\{m-1,\,n-1\}.$ \end{theorem} \begin{theorem} For $m,\,n\geq 3,$
$\gamma_{P,c}(K_m\square K_{1,\,n})=\min\{m-1,\,n\}.$ \end{theorem} \begin{proof} Let $V(K_m)=Z_m$ and $V(K_{1,n})=Z_{n+1},$ where the vertex $0$ is the universal vertex of $K_{1,\,n}.$ Then $V(K_m\square K_{1,\,n})=Z_m\times Z_{n+1}.$ \noindent {\bf Case 1:} $m\leq n+1.$ By Theorem \ref{upcpdczfs}, we have $\gamma_{P,c}(K_m\square K_{1,\,n}) \leq m-1$ as $Z_c(K_m)=m-1$ and $\gamma_c(K_{1,\,n})=1.$ By Theorem \ref{pdkmtimesk1,n} and Observation \ref{O1}, $m-1\leq \gamma_{P,c}(K_m\square K_{1,\,n}).$ Hence, $\gamma_{P,c}(K_m\square K_{1,\,n})= m-1.$ \noindent {\bf Case 2:} $m>n+1.$ Since $\gamma(K_m)=1$ and $Z_c(K_{1,n})=n,$ we have $\gamma_{P,c}(K_m\square K_{1,\,n}) \leq n$ by Theorem \ref{upcpdczfs}. To prove the lower bound, first observe that any minimum CPDS $X$ of $K_m\square K_{1,\,n}$ must contain at least one vertex of the form $(i,\,0)$ for some $i\in Z_m;$ otherwise, by connectedness, all the vertices of $X$ lie in $V(K_m^j)$ for some fixed $j\in Z_{n+1}\setminus \{0\},$ and hence $\vert X \vert >n$ as $m>n+1.$ Suppose there exists a minimum CPDS $X$ of $K_m\square K_{1,\,n}$ with $\vert X \vert \leq n-1.$ Then the vertices of at least three $^iK_{1,\,n}$-fibers and two $K_m^j$-fibers do not belong to $X.$ WLOG let these fibers be indexed by $i\in\{m-3,\,m-2,\,m-1\}$ and $j\in \{n-1,\,n\}.$ Let $A= \{(i,\,j)\,\vert\, i\in\{m-3,\,m-2,\,m-1\}\,\,\mbox{and}\,\,j\in \{n-1,\,n\} \}.$ Since $\vert N(x)\cap A\vert > 1$ for any vertex $x\notin X$ with $x\in N(A)\setminus A,$ propagation cannot observe any vertex of the set $A.$ This contradicts the assumption that $\vert X\vert\leq n-1.$ Thus, $\gamma_{P,c}(K_m\square K_{1,\,n}) \geq n.$ From Cases $1$ and $2,$ we have $\gamma_{P,c}(K_m\square K_{1,\,n})=\min\{m-1,\,n\}.$ \end{proof} \begin{theorem} For $3\leq x\leq y,\,\gamma_{P,\,c}(K_{1,\,x}\square K_{1,\,y})=x.$ \end{theorem} \begin{proof} Let $V(K_{1,\,x})=Z_{x+1}$ and $V(K_{1,\,y})=Z_{y+1},$ where the vertex labeled $0$ is the universal vertex of $K_{1,\,x}$ (respectively, $K_{1,\,y}$). By Theorem \ref{upcpdczfs}, we have $\gamma_{P,c}(K_{1,\,x}\square K_{1,\,y}) \leq x$ as $Z_c(K_{1,\,x})=x$ and $\gamma_c(K_{1,\,y})=1.$ To establish the lower bound, we claim that no set $X\subset V(K_{1,\,x}\square K_{1,\,y})$ of cardinality $x-1$ is a CPDS. Note that any minimum CPDS contains at least one vertex of the form $(0,\,i)$ or $(j,\,0);$ otherwise, the connectedness condition fails. Suppose $X$ is a minimum CPDS of $K_{1,\,x}\square K_{1,\,y}$ of size $x-1.$ Since $\vert X\vert =x-1,$ the vertices of at least two $^iK_{1,\,y}$-fibers and two $K_{1,\,x}^j$-fibers do not belong to $X.$ WLOG let $i\in\{x-1,\,x\}$ and $j\in \{y-1,\,y\}.$ Let $Y=\{(a,\,b): a\in\{x-1,\,x\}\,\,\mbox{and}\,\,b\in\{y-1,\,y\} \}.$ It is clear that the vertices in $Y$ can be monitored only by the propagation step.
But this is not possible, as $\vert N((0,\,b))\cap Y\vert > 1$ and $\vert N((a,\,0))\cap Y\vert > 1.$ This contradicts $\vert X\vert=x-1.$ Hence, $\gamma_{P,\,c}(K_{1,\,x}\square K_{1,\,y})=x.$ \end{proof} \begin{theorem} Let $G$ and $H$ be two graphs of order at least four and let $\gamma(G)=1.$ Then $Z_c(H)=2$ if and only if $\gamma_{P,c}(G \square H)=2.$ \end{theorem} \begin{proof} By hypothesis and Theorem \ref{upcpdczfs}, $\gamma_{P,c}(G \square H)\leq 2.$ Also, $\gamma_{P,c}(G \square H) > 1$ as $Z_c(H)=2.$ Hence $\gamma_{P,c}(G \square H) = 2.$ Conversely, assume that $\gamma(G)=1$ and $\gamma_{P,c}(G\square H)=2.$ By our assumption, it is clear that $H\not\cong P_m.$ Let $v$ be a universal vertex of $G$ and let $X$ be a minimum CPDS of $G\square H.$ If $(a,\,b)$ and $(c,\,d)$ are the vertices in $X,$ then $a=c=v$ and $b\neq d$ as $\langle X \rangle \cong K_2;$ otherwise, if $a\neq c$ and $b=d,$ then the vertices of $G \square H$ cannot be observed by propagation as $H\not\cong P_m.$ Consequently, propagation occurs from one $G$-fiber to another $G$-fiber only if $Z_c(H)\leq 2.$ Since $H\not\cong P_m,$ $Z_c(H) > 1.$ Thus, $Z_c(H)=2.$ \end{proof} \begin{theorem} Let $\gamma(G)=1$ and let $H=G\circ \overline{K_n}.$ For $n,\,m\geq 2,\,\gamma_{P,\,c}(H\square P_m)=2.$ \end{theorem} \begin{proof} It is easy to observe that if $\gamma(G)=1,$ then $\gamma_c(G\circ \overline{K_n})=2$ for every integer $n\geq 2;$ that is, $\gamma_c(H)=2.$ By Theorem \ref{upcpdczfs}, we have $\gamma_{P,\,c}(H\square P_m)\leq 2$ as $Z_c(P_m)=1.$ On the other hand, $\gamma_{P,\,c}(H\square P_m)> 1$ as $\gamma(H)\neq 1.$ Thus, $\gamma_{P,\,c}(H\square P_m)=2.$ \end{proof} \section{The Tensor Product} Throughout this section, for graphs $G$ and $H,$ let $V(G)=\{u_1,\,u_2,\,\dots,\,u_a\}$ and $V(H)=\{v_1,\,v_2,\,\dots,\,v_b\}.$ Let $U_i=\{u_i\}\times V(H)$ and $V_j=V(G)\times \{v_j\}.$ Then $V(G\times H)=\bigcup_{i=1}^{a}U_i=\bigcup_{j=1}^{b}V_j.$ The sets $U_i$ and $V_j$ are called the $i^{th}$ row and $j^{th}$ column of the graph $G\times H,$ respectively. The following theorem was proved for the power domination number of $G\times H,$ but it also holds for the connected power domination number of $G\times H.$ \begin{theorem}\cite{skp18} \label{cpdntp=1} If $\gamma_P(G\times H)=\gamma_{P,\,c}(G\times H)=1,$ then $G$ or $H$ is isomorphic to $K_2.$ \end{theorem} \begin{theorem} Let $G$ and $H$ be two non-bipartite graphs, each with at least two universal vertices. Then $\gamma_{P,\,c}(G\times H)= 2.$ \end{theorem} \begin{proof} Let $\{u_1,\,u_2\}$ and $\{v_1,\,v_2\}$ be universal vertices of the graphs $G$ and $H,$ respectively. Consider the set $X=\{(u_1,\,v_1),\,(u_2,\,v_2)\} \subset V(G\times H).$ Clearly, $\langle X \rangle \cong K_2.$ Since $u_1$ and $v_1$ are universal vertices of the graphs $G$ and $H,$ respectively, the vertex $(u_1,\,v_1)$ dominates the vertices in the set $\bigcup_{i=2}^a(U_i\setminus\{(u_i,\,v_1)\}).$ The vertex $(u_2,\,v_2)$ dominates the vertices in the set $(V_1\setminus\{(u_2,\,v_1)\})\cup\bigcup_{j=3}^b (V_j\setminus \{(u_2,\,v_j)\})$ as $u_2$ and $v_2$ are universal vertices of the graphs $G$ and $H,$ respectively.
Hence, the only unmonitored vertices of the graph $G\times H$ are $(u_1,\,v_2)$ and $(u_2,\,v_1).$ These vertices are monitored by the propagation step as $\vert N((u_1,\,v_2))\setminus X_1\vert =\vert N((u_2,\,v_1))\setminus X_1\vert = 1.$ Thus, $\gamma_{P,\,c}(G\times H)\leq 2.$ By Theorem \ref{cpdntp=1}, we have $\gamma_{P,\,c}(G\times H) \neq 1.$ Therefore, $\gamma_{P,\,c}(G\times H)= 2.$ \end{proof} \begin{corollary}\label{ctp1} \begin{enumerate} \item For $m,\,n\geq 3,\,\gamma_{P,\,c}(K_m\times K_n)=\gamma_{P}(K_m\times K_n)=2.$ \item For $a\geq 1$ and $b\geq 1,\,\gamma_{P,\,c}(K_{1,\,1,\,m_1,\,m_2,\dots,\,m_a}\times K_{1,\,1,\,n_1,\,n_2,\dots,\,n_b})= \gamma_{P}(K_{1,\,1,\,m_1,\,m_2,\dots,\,m_a}\times K_{1,\,1,\,n_1,\,n_2,\dots,\,n_b})=2.$ \end{enumerate} \end{corollary} \begin{theorem}\label{cpdsgtimeskx,y} Let $G$ be a non-bipartite graph. For $2\leq x\leq y,\,\gamma_{P,c}(G\times K_{x,\,y})=\gamma_c(G\times K_2).$ \end{theorem} \begin{proof} Let the bipartition of $K_{x,\,y}$ be $A=\{a_1,\,a_2,\,\dots,\,a_x\}$ and $B=\{b_1,\,b_2,\,\dots,\,b_y\},$ and let $V(G)=\{u_1,\,u_2,\,\dots,\,u_t\}.$ Clearly, $G\times K_{x,\,y}$ is a bipartite graph with bipartition $V_A$ and $V_B,$ where $V_A = V(G) \times A$ and $V_B= V(G) \times B.$ Let $U_i^A=\{u_i\}\times A$ and $U_i^B=\{u_i\}\times B.$ Then $V(G\times K_{x,\,y}) = V_A \cup V_B= \bigcup_{i=1}^t U_i^A\cup \bigcup_{i=1}^t U_i^B.$ Observe that, if $u_iu_j\in E(G),$ then $\langle U_i^A\cup U_j^B\rangle \cong \langle U_j^A\cup U_i^B \rangle\cong K_{x,\,y}.$ Let $X$ be a minimum connected dominating set of $G\times K_2,$ where we identify $V(K_2)$ with $\{a_1,\,b_1\}.$ Now we claim that $X$ is a CPDS of $G\times K_{x,\,y}.$ If $(u_i,\,a_1)$ dominates $(u_j,\,b_1),$ then $(u_i,\,a_1)$ dominates all the vertices in $U_j^B$ as $\langle U_i^A\cup U_j^B\rangle \cong K_{x,\,y}.$ Further, each vertex of $G\times K_2$ is adjacent to at least one of the vertices in $X.$ Consequently, $X$ is a connected dominating set of $G\times K_{x,\,y}$ and hence $X$ is a CPDS of $G\times K_{x,\,y}.$ From this we have $\gamma_{P,c}(G\times K_{x,\,y})\leq \gamma_c(G\times K_2).$ Assume that $X$ is a minimum CPDS of $G\times K_{x,\,y}$ with $\vert X \vert < \gamma_c(G\times K_2).$ Then we can find $i$ or $j$ such that the vertex $(u_i,\,a_1)$ or $(u_j,\,b_1)$ is not dominated by the vertices in $X.$ This implies that all the vertices in $U_i^A$ or $U_j^B$ are monitored only by the propagation step (not the dominating step). But this is not possible as $\vert U_i^A\vert=x\geq 2$ or $\vert U_j^B\vert=y\geq 2.$ Hence, $\gamma_{P,c}(G\times K_{x,\,y})=\gamma_c(G\times K_2).$ \end{proof} In fact, from the proof of the above theorem, it is easy to observe that $\gamma_{P,c}(G\times K_{x,\,y})= \gamma_{c}(G\times K_{x,\,y})$ for $2\leq x\leq y.$ This observation is used in the proof of the following theorem. \begin{theorem} \label{gtimeskmn} Let $G$ be a non-bipartite graph with at least two universal vertices. Then $\gamma_{P,c}(G\times K_{x,\,y})= \left\{ \begin{array}{rl} 1;& \mbox{if $G \cong C_3$ and $x=y=1,$}\\ 2;& \mbox{if $G \not\cong C_3$ and $x=y=1,$}\\ 3;& \mbox{if $x=1$ and $y\geq 2,$}\\ 4;& \mbox{if $x,\,y\geq 2.$} \end{array}\right.$ \end{theorem} \begin{proof} Consider the vertex set of $G\times K_{x,\,y}$ as in Theorem \ref{cpdsgtimeskx,y}.
Let $u_1$ and $u_2$ be two universal vertices of $G.$ First we complete the proof for $x=y=1.$ If $G\cong C_3,$ then $G\times K_2\cong C_6$ and hence $\gamma_{P,\,c}(G\times K_2)=1.$ Now we assume that $G\not\cong C_3.$ Let $X=\{(u_1,\,a_1),\,(u_2,\,b_1)\}.$ The vertices $(u_1,\,a_1)$ and $(u_2,\,b_1)$ dominate the vertices in $V_B\setminus \{(u_1,\,b_1)\}$ and $V_A\setminus \{(u_2,\,a_1)\},$ respectively. The vertices $(u_1,\,b_1)$ and $(u_2,\,a_1)$ are monitored by the propagation step as $\vert N((u_1,\,b_1))\setminus X_1\vert= \vert N((u_2,\,a_1))\setminus X_1\vert=1.$ Hence, $\gamma_{P,\,c}(G\times K_2) \leq 2.$ Since $G$ has two universal vertices, the minimum degree of $G$ is at least two and at least two vertices have degree $t-1,$ where $t=\vert V(G)\vert.$ As a consequence, $\gamma_{P,\,c}(G\times K_2) \neq 1.$ Thus, $\gamma_{P,\,c}(G\times K_2) = 2.$ Now we consider $x=1$ and $y\geq 2.$ For this, let $X=\{(u_1,\,a_1),\,(u_2,\,b_1),\, (u_3,\,a_1)\}.$ The set $X$ dominates all the vertices of $G\times K_{1,\,y}$ except $(u_2,\,a_1).$ This vertex is observed by the propagation step, and hence $\gamma_{P,\,c}(G\times K_{1,\,y})\leq 3.$ To prove the equality, assume that $\gamma_{P,\,c}(G\times K_{1,\,y})=2.$ Then the CPDS contains two vertices, namely, $X=\{(u_i,\,a_1),\,(u_j,\,b_m)\},$ where $i\neq j.$ WLOG we assume that $i=1$ and $j=2,$ as this choice of $i$ and $j$ dominates the maximum number of vertices of $G\times K_{1,\,y}.$ The vertices which are not dominated by the vertices in $X$ are the vertices in $U_1^B$ and the vertex $(u_2,\,a_1).$ Since $\vert U_1^B\vert=y\geq 2,$ the propagation step from any vertex $(u_i,\,a_1)\in V_A$ to the vertices in $U_1^B$ is not possible. This implies that $\gamma_{P,\,c}(G\times K_{1,\,y})\neq 2.$ Thus, $\gamma_{P,\,c}(G\times K_{1,\,y})=3.$ Let $2\leq x\leq y.$ Recall that $\gamma_{P,c}(G\times K_{x,\,y})= \gamma_{c}(G\times K_{x,\,y})$ for $2\leq x\leq y.$ From this, it is enough to find $\gamma_{c}(G\times K_{x,\,y}).$ Let $X=\{(u_1,\,a_1),\,(u_2,\,b_1),\,(u_3,\,a_1),\,(u_1,\,b_1)\}.$ Clearly, the vertices in the set $X$ dominate all the vertices of $G\times K_{x,\,y}$ and $\langle X\rangle \cong P_4,$ and hence $\gamma_{c}(G\times K_{x,\,y})\leq 4.$ Since $G\times K_{x,\,y}$ is bipartite, any connected subgraph induced by three vertices of $G\times K_{x,\,y}$ is isomorphic to $P_3.$ Clearly, the end vertices of such a $P_3$ belong to either $V_A$ or $V_B.$ We assume that the end vertices of the $P_3$ belong to $V_A.$ Then the degree-two vertex belongs to $V_B.$ Let the degree-two vertex be $(u_i,\,b_j).$ Clearly, this vertex does not dominate the vertices in the set $U_i^A.$ Consequently, three vertices do not form a connected dominating set. Therefore, $\gamma_{c}(G\times K_{x,\,y})\geq 4.$ \end{proof} \begin{theorem} \label{gtimesmul} Let $G$ be a graph with at least two universal vertices.
For $k\geq 3$ and $1\leq m_1 \leq m_2 \leq \dots \leq m_k,$ $\gamma_{P,\,c}(G\times K_{m_1,\,m_2,\,\dots,\,m_k})= \left\{ \begin{array}{rl} 2;& \mbox{if $m_1=m_2=1,$}\\ 3;& \mbox{otherwise.} \end{array}\right.$ \end{theorem} \begin{proof} Let $V(G)=\{u_1,\,u_2,\,\dots,\,u_t\}.$ For $1\leq i \leq k,$ let $V_i=\{a_1^i,\, a_2^i,\,\dots,\,a_{m_i}^i\}$ denote the $i^{th}$ partite set of the graph $K_{m_1,\,m_2,\,\dots,\,m_k}$ of size $m_i.$ Let $U_i=\bigcup_{j=1}^k U_i^{V_j},$ where $U_i^{V_j}=\{u_i\}\times V_j.$ Then $V(G\times K_{m_1,\,m_2,\,\dots,\,m_k}) = \bigcup_{i=1}^t U_i= \bigcup_{i=1}^t\bigcup_{j=1}^k U_i^{V_j}.$ Let the universal vertices of $G$ be $u_1$ and $u_2.$ Let $H=G\times K_{m_1,\,m_2,\,\dots,\,m_k}.$ If $m_1=m_2=1,$ then the result follows by Corollary \ref{ctp1}. Now we assume that $m_2\geq 2.$ Consider the set $X=\{(u_1,\,a_1^1),\,(u_2,\,a_1^2),\,(u_3,\,a_1^3)\}.$ The vertices in $V(H)\setminus (U_2^{V_1}\cup U_1^{V_2})$ are dominated by the vertices in $\{(u_1,\,a_1^1),\,(u_2,\,a_1^2)\},$ and the vertices in $U_2^{V_1}\cup U_1^{V_2}$ are dominated by the vertex $(u_3,\,a_1^3).$ Hence $X$ is a CPDS of $H.$ This gives $\gamma_{P,\,c}(H)\leq 3.$ To obtain the reverse inequality, we claim that no set $X$ consisting of two vertices of $H$ is a CPDS of $H.$ By the symmetry of the partite sets, we may take $X=\{(u_i,\,a_1^x),\,(u_j,\,a_1^y)\}.$ For $X$ to be a CPDS, $\langle X\rangle$ must be connected, so $u_iu_j\in E(G)$ and $x\neq y.$ Then $X_1=N[X],$ and clearly the set $X_1$ does not contain any vertex of the set $U_i^{V_y}\cup U_j^{V_x}.$ Since $x\neq y$ and $m_2\geq 2,$ at least one of these two sets, say $U_i^{V_y},$ has at least two vertices. No two vertices of $U_i^{V_y}$ are adjacent, and every vertex outside $U_i^{V_y}$ having a neighbour in $U_i^{V_y}$ is adjacent to all of its vertices, so the propagation step can never monitor a vertex of $U_i^{V_y}.$ Consequently, $\gamma_{P,\,c}(H) >2.$ Hence, $\gamma_{P,\,c}(G\times K_{m_1,\,m_2,\,\dots,\,m_k})= 3.$ \end{proof} \begin{theorem} For $t\geq 3,\,k\geq 3,\,1\leq n_1\leq n_2\leq \dots\leq n_t$ and $1\leq m_1 \leq m_2 \leq \dots \leq m_k,$ we have $\gamma_{P,\,c}(K_{n_1,\,n_2,\,\dots,\,n_t}\times K_{m_1,\,m_2,\,\dots,\,m_k})= \left\{ \begin{array}{rl} 2;& \mbox{if $n_1=n_2=1$ and $m_1=m_2=1,$}\\ 3;& \mbox{otherwise.} \end{array}\right.$ \end{theorem} \begin{proof} Let $U_i=\{u_1^i,\,u_2^i,\,\dots,\,u_{n_i}^i\}$ and $V_j=\{v_1^j,\,v_2^j,\,\dots,\,v_{m_j}^j\},$ respectively, denote the partite sets of the graphs $K_{n_1,\,n_2,\,\dots,\,n_t}$ and $K_{m_1,\,m_2,\,\dots,\,m_k}$ of sizes $n_i$ and $m_j.$ If $n_1=n_2=m_1=m_2=1,$ then the result is immediate by Corollary \ref{ctp1}. Now we assume that $n_2\geq 2.$ Consider the set $X=\{(u_1^1,\,v_1^1),\,(u_1^2,\,v_1^2),\,(u_1^3,\,v_1^3)\}.$ The vertices in $X$ dominate all the vertices of $K_{n_1,\,n_2,\,\dots,\,n_t}\times K_{m_1,\,m_2,\,\dots,\,m_k},$ and hence $\gamma_{P,\,c}(K_{n_1,\,n_2,\,\dots,\,n_t}\times K_{m_1,\,m_2,\,\dots,\,m_k})\leq 3.$ By employing a similar argument as in Theorem \ref{gtimesmul}, we can conclude that $\gamma_{P,\,c}(K_{n_1,\,n_2,\,\dots,\,n_t}\times K_{m_1,\,m_2,\,\dots,\,m_k})\geq 3.$ \end{proof} \bmhead{Acknowledgment} The first author is supported by the Dr. D. S. Kothari Postdoctoral Fellowship, University Grants Commission, Government of India, New Delhi, through Grant No. F.4-2/2006(BSR)/MA/20-21/0067. \begin{thebibliography}{50} \bibitem{ajco19} Aazami, A.: Domination in graphs with bounded propagation: algorithms, formulations and hardness results. J. Comb. Optim. 19(4), 429--456 (2010) \bibitem{laa428} AIM Minimum Rank -- Special Graphs Work Group: Zero forcing sets and the minimum rank of graphs. Linear Algebra Appl. 428(7), 1628--1648 (2008) \bibitem{rbbook} Balakrishnan, R., Ranganathan, K.: A textbook of graph theory.
2nd Edition, Springer Science $\&$ Business Media (2012) \bibitem{bmbaieee} Baldwin, T.L., Mili, L., Boisen Jr., M.B., Adapa, R.: Power system observability with minimal phasor measurement placement. IEEE Trans. Power Syst. 8(2), 707--715 (1993) \bibitem{bf58} Barrera, R., Ferrero, D.: Power domination in cylinders, tori and generalized Petersen graphs. Networks 58(1), 43--49 (2011) \bibitem{bgpv38} Bose, P., Gledel, V., Pennarun, C., Verdonschot, S.: Power domination on triangular grids with triangular and hexagonal shape. J. Comb. Optim. 40(2), 482--500 (2020) \bibitem{bms38} Brimkov, B., Mikesell, D., Smith, L.: Connected power domination in graphs. J. Comb. Optim. 38(1), 292--315 (2019) \bibitem{dmks22} Dorbec, P., Mollard, M., Klav\v{z}ar, S., \v{S}pacapan, S.: Power domination in product graphs. SIAM J. Discrete Math. 22(2), 554--567 (2008) \bibitem{dh154} Dorfling, M., Henning, M.A.: A note on power domination in grid graphs. Discrete Appl. Math. 154(6), 1023--1027 (2006) \bibitem{gplncs} Goyal, P., Panda, B.S.: Hardness results of connected power domination for bipartite graphs and chordal graphs. In: Du, D.Z., Du, D., Wu, C., Xu, D. (eds.) Combinatorial Optimization and Applications. COCOA 2021. LNCS, vol. 13135. Springer, Cham (2021) \bibitem{gnr52} Guo, J., Niedermeier, R., Raible, D.: Improved algorithms and complexity results for power domination in graphs. Algorithmica 52(2), 177--202 (2008) \bibitem{hhhh15} Haynes, T.W., Hedetniemi, S.M., Hedetniemi, S.T., Henning, M.A.: Domination in graphs applied to electric power networks. SIAM J. Discrete Math. 15(4), 519--529 (2002) \bibitem{hikbook} Hammack, R., Imrich, W., Klav\v{z}ar, S.: Handbook of Product Graphs. CRC Press, Taylor and Francis, Boca Raton (2011) \bibitem{ks13} Koh, K.M., Soh, K.W.: Power domination of the Cartesian product of graphs. AKCE Int. J. Graphs Comb. 13(1), 22--30 (2016) \bibitem{ks16} Koh, K.M., Soh, K.W.: On the power domination number of the Cartesian product of graphs. AKCE Int. J. Graphs Comb. 16, 253--257 (2019) \bibitem{lllncs} Liao, C.-S., Lee, D.-T.: Power domination problem in graphs. In: Wang, L. (ed.) COCOON 2005. LNCS, vol. 3595, pp. 818--828. Springer, Heidelberg (2005) \bibitem{skp18} Shahbaznejad, N., Kazemi, A.P., Pelayo, I.M.: Some product graphs with power dominating number at most 2. AKCE Int. J. Graphs Comb. 18(3), 127--131 (2021) \bibitem{sk11} Soh, K.W., Koh, K.M.: A note on power domination problem in diameter two graphs. AKCE Int. J. Graphs Comb. 11(1), 51--55 (2014) \bibitem{sk48} Soh, K.W., Koh, K.M.: Recent results on the power domination numbers of graph products. New Zealand J. Math. 48, 41--53 (2018) \bibitem{vthesis} Varghese, S.: Studies on some generalizations of line graph and the power domination problem in graphs. Ph.D. thesis, Cochin University of Science and Technology, Cochin, India (2011) \bibitem{vvlncs} Varghese, S., Vijayakumar, A.: On the power domination number of graph products. In: Govindarajan, S., Maheshwari, A. (eds.) Algorithms and Discrete Applied Mathematics. CALDAM 2016. LNCS, vol. 9602. Springer (2016) \bibitem{vvh38} Varghese, S., Vijayakumar, A., Hinz, A.M.: Power domination in Kn\"{o}del graphs and Hanoi graphs. Discuss. Math. Graph Theory 38(1), 63--74 (2018) \bibitem{zkc306} Zhao, M., Kang, L., Chang, G.J.: Power domination in graphs. Discrete Math. 306(15), 1812--1816 (2006) \end{thebibliography} \end{document}
2205.05262v1
http://arxiv.org/abs/2205.05262v1
A globally convergent fast iterative shrinkage-thresholding algorithm with a new momentum factor for single and multi-objective convex optimization
\documentclass{article} \usepackage{subfiles} \usepackage{ifthen} \newif\ifpreprint \preprinttrue \newcommand{\TheTitle}{A globally convergent fast iterative shrinkage-thresholding algorithm with a new momentum factor for single and multi-objective convex optimization} \newcommand{\TheAuthors}{H. Tanabe, E. H. Fukuda, and N. Yamashita} \ifpreprint \usepackage[nohyperref,abbrvbib,preprint]{jmlr2e} \else \usepackage[nohyperref,abbrvbib]{jmlr2e} \jmlrheading{}{}{}{}{}{}{Hiroki Tanabe, Ellen H. Fukuda, and Nobuo Yamashita} \ShortHeadings{A globally convergent FISTA}{Tanabe, Fukuda, and Yamashita} rstpageno{1} \title{\TheTitle} \author{\name Hiroki Tanabe \email [email protected] \\ \name Ellen H. Fukuda \email [email protected] \\ \name Nobuo Yamashita \email [email protected] \\ \addr Department of Applied Mathematics and Physics\\ Graduate School of Informatics\\ Kyoto University\\ Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan } \editor{} \usepackage{url} \usepackage{natbib} \usepackage{graphicx} \usepackage{adjustbox} \usepackage{epstopdf} \usepackage{setspace} \usepackage{amsmath,amssymb} \usepackage{mathtools} \usepackage{cases} \usepackage[section]{algorithm} \usepackage[colorlinks=false,allbordercolors={1 1 1}]{hyperref} \usepackage{algpseudocode} \usepackage{derivative} \usepackage{booktabs} \usepackage{tabularray} \usepackage{array} \usepackage{here} \usepackage{empheq} \usepackage{siunitx} \usepackage{csvsimple} \usepackage{subcaption} \usepackage{enumitem} \usepackage[capitalize,nameinlink]{cleveref} \let\etoolboxcsvloop\csvloop \let\csvloop\relax \usepackage{autonum} \let\csvloop\etoolboxcsvloop \def\usebbsetcapital{\def\setcapital##1{\mathbb{##1}}} \def\usebfsetcapital{\def\setcapital##1{\mathbf{##1}}} \def\usebmsetcapital{\def\setcapital##1{\bm{##1}}} \usebfsetcapital \def\setC{\setcapital{C}} \def\setH{\setcapital{H}} \def\setN{\setcapital{N}} \def\setNpos{\setcapital{N}_{\mathord{+}}} \def\setP{\setcapital{P}} \def\setQ{\setcapital{Q}} \def\setR{\setcapital{R}} \def\setRpos{\setcapital{R}_{\mathord{+}}} \def\setZ{\setcapital{Z}} \newcommand{\st}{\mathrm{s.t.}} \newcommand{\level}{\mathcal{L}} \newcommand{\Beta}{\mathrm{B}} \newcommand\condition[1]{\quad \text{#1}} \newcommand\forallcondition[1]{\condition{for all~$#1$}} \newcommand\eqand{\quad \text{and} \quad} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\myliminf}{liminf} \renewcommand{\liminf}{\myliminf} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\interior}{int} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\prox}{\mathbf{prox}} \DeclareMathOperator{\proj}{\mathbf{proj}} \DeclareMathOperator{\envelope}{\mathcal{M}} \DeclareMathOperator{\indicator}{\chi} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \DeclarePairedDelimiter{\set}{\lbrace}{\rbrace} \DeclarePairedDelimiterX{\Set}[2]{\lbrace}{\rbrace}{#1\mathrel{}\delimsize\vert\mathrel{}#2} \DeclarePairedDelimiterX{\innerp}[2]{\langle}{\rangle}{#1, #2} \newcommand{\T}{\top\hspace{-1pt}} \renewcommand{\thesubsubsection}{(\alph{subsubsection})} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \newcommand{\acc}{\mathrm{acc}} \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \newtheorem{assumption}{Assumption}[section] \crefname{equation}{}{} \Crefname{equation}{Eq.}{Eqs.} 
\crefname{enumi}{}{} \crefname{figure}{Figure}{Figures} \crefname{assumption}{Assumption}{Assumptions} \crefname{line}{line}{lines} \newcommand{\crefrangeconjunction}{--} \setlist[enumerate]{ label=(\roman*) } \AtBeginEnvironment{theorem}{ \setlist[enumerate]{ label=(\roman*), ref=Theorem~\thetheorem~(\roman*) } } \AtEndEnvironment{theorem}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{lemma}{ \setlist[enumerate]{ label=(\roman*), ref=Lemma~\thelemma~(\roman*) } } \AtEndEnvironment{lemma}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{proposition}{ \setlist[enumerate]{ label=(\roman*), ref=Proposition~\theproposition~(\roman*) } } \AtEndEnvironment{proposition}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{corollary}{ \setlist[enumerate]{ label=(\roman*), ref=Corollary~\thecorollary~(\roman*) } } \AtEndEnvironment{corollary}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{remark}{ \setlist[enumerate]{ label=(\roman*), ref=Remark~\theremark~(\roman*) } } \AtEndEnvironment{remark}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{definition}{ \setlist[enumerate]{ label=(\roman*), ref=Definition~\thedefinition~(\roman*) } } \AtEndEnvironment{definition}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{assumption}{ \setlist[enumerate]{ label=(\roman*), ref=Assumption~\theassumption~(\roman*) } } \AtEndEnvironment{assumption}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \AtBeginEnvironment{example}{ \setlist[enumerate]{ label=(\roman*), ref=Example~\theexample~(\roman*) } } \AtEndEnvironment{example}{ \setlist[enumerate]{ label=(\roman*), ref=(\roman*) } } \newcounter{subcreftmpcnt} \newcommand\romansubformat[1]{(\roman{#1})} \makeatletter \newcommand\subcref[2][\romansubformat]{ \ifcsname r@#2@cref\endcsname \cref@getcounter {#2}{\mylabel} \setcounter{subcreftmpcnt}{\mylabel} \hyperref[#2]{part~#1{subcreftmpcnt}} } \newcommand\sublabelcref[2][\romansubformat]{ \ifcsname r@#2@cref\endcsname \cref@getcounter {#2}{\mylabel} \setcounter{subcreftmpcnt}{\mylabel} \hyperref[#2]{#1{subcreftmpcnt}} } \newcommand\subCref[2][\romansubformat]{ \ifcsname r@#2@cref\endcsname \cref@getcounter {#2}{\mylabel} \setcounter{subcreftmpcnt}{\mylabel} \hyperref[#2]{Part~#1{subcreftmpcnt}} } \makeatother \begin{document} \maketitle \begin{abstract} Convex-composite optimization, which minimizes an objective function represented by the sum of a differentiable function and a convex one, is widely used in machine learning and signal/image processing. Fast Iterative Shrinkage Thresholding Algorithm (FISTA) is a typical method for solving this problem and has a global convergence rate of~$O(1 / k^2)$. Recently, this has been extended to multi-objective optimization, together with the proof of the~$O(1 / k^2)$ global convergence rate. However, its momentum factor is classical, and the convergence of its iterates has not been proven. In this work, introducing some additional hyperparameters~$(a, b)$, we propose another accelerated proximal gradient method with a general momentum factor, which is new even for the single-objective cases. We show that our proposed method also has a global convergence rate of~$O(1/k^2)$ for any $(a,b)$, and further that the generated sequence of iterates converges to a weak Pareto solution when~$a$ is positive, an essential property for the finite-time manifold identification. 
Moreover, we report numerical results with various~$(a,b)$, showing that some of these choices give better results than the classical momentum factors. \end{abstract} \ifpreprint \else \begin{keywords} Optimization, Multi-objective optimization, Convergence, First-order algorithms, Proximal algorithms \end{keywords} \fi \newboolean{isMain} \setboolean{isMain}{true} \subfile{sections/introduction} \subfile{sections/preliminaries} \subfile{sections/generalization} \subfile{sections/convergence_sequence} \subfile{sections/numerical_experiment} \subfile{sections/conclusion} \section*{Acknowledgements} This work was supported by the Grant-in-Aid for Scientific Research (C) (21K11769 and 19K11840) and Grant-in-Aid for JSPS Fellows (20J21961) from the Japan Society for the Promotion of Science. \bibliography{001_library} \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Introduction} \label{sec: intro} We consider the following convex-composite single~($m = 1$) or multi-objective~($m \ge 2$) optimization problem: \[ \label{eq:MOP} \min_{x \in \setR^n} \quad F(x) ,\] where~$F \colon \setR^n \to (\setR \cup \set*{\infty})^m$ is a vector-valued function with~$F \coloneqq (F_1, \dots, F_m)^\T$. We assume that each component~$F_i \colon \setR^n \to \setR \cup \set*{\infty}$ is given by \[ F_i(x) \coloneqq f_i(x) + g_i(x) \forallcondition{i = 1, \dots, m} \] with convex and continuously differentiable functions~$f_i \colon \setR^n \to \setR, i = 1, \dots, m$ and closed, proper and convex functions~$g_i \colon \setR^n \to \setR \cup \set*{\infty}, i = 1, \dots, m$, and each~$\nabla f_i$ is Lipschitz continuous. As suggested in~\cite{Tanabe2019}, this problem includes many important classes of problems. For example, it can express a convex-constrained problem if each~$g_i$ is the indicator function of a convex set~$S$, i.e., \[ \label{eq:indicator} \indicator_S(x) \coloneqq \begin{dcases} 0, & \text{if } x \in S,\\ \infty, & \text{otherwise} .\end{dcases} \] Multi-objective optimization has many applications in engineering~\citep{Eschenauer1990}, statistics~\citep{Carrizosa1998}, and machine learning (particularly multi-task learning~\citep{Sener2018,Lin2019} and neural architecture search~\citep{Kim2017,Dong2018,Elsken2019}). In the multi-objective case, no single point minimizes all objective functions simultaneously in general. Therefore, we use the concept of \emph{Pareto optimality}. We call a point weakly Pareto optimal if there is no other point where the objective function values are strictly smaller. This generalizes the usual optimality for single-objective problems. In other words, single-objective problems are considered to be included in multi-objective ones. Hence, in the following, unless otherwise noted, we refer to~\cref{eq:MOP} as multi-objective, including the case where~$m = 1$. One of the main strategies for multi-objective problems is the~\emph{scalarization approach}~\citep{Gass1955,Geoffrion1968,Zadeh1963}, which reduces the original multi-objective problem to a parameterized (or weighted) scalar-valued problem. However, it requires an \emph{a priori} selection of parameters (or weights), which might be challenging. Meta-heuristics~\citep{Gandibleux2004} are also popular, but they have no theoretical convergence properties under reasonable assumptions. Many descent methods have been developed in recent years~\citep{Fukuda2014}, overcoming those drawbacks.
They decrease all objective values simultaneously at each iteration, and their global convergence property can be analyzed under reasonable assumptions. For example, the steepest descent method~\citep{Fliege2000,Fliege2019,Desideri2012} converges globally to Pareto solutions for differentiable multi-objective problems. From a practical point of view, its applicability has also been reported in multi-task learning~\citep{Sener2018,Lin2019}. Afterwards, the projected gradient~\citep{Fukuda2013}, Newton's~\citep{Fliege2009,Goncalves2021}, trust-region~\citep{Carrizo2016}, and conjugate gradient methods~\citep{LucambioPerez2018} were also considered. Moreover, the proximal point~\citep{Bonnel2005} and the inertial forward-backward methods~\citep{Bot2018} can solve infinite-dimensional vector optimization problems. For~\cref{eq:MOP}, the proximal gradient method~\citep{Tanabe2019,Tanabe2022} is effective. Using it, the merit function~\citep{Tanabe2022b}, which returns zero at the Pareto solutions and strictly positive values otherwise, converges to zero with rate~$O(1 / k)$ under reasonable assumptions. It is also shown that the generated sequence of iterates converges to a weak Pareto solution~\citep{Bello-Cruz2022}. On the other hand, the accelerated proximal gradient method~\citep{Tanabe2022a}, which generalizes the Fast Iterative Shrinkage Thresholding Algorithm (FISTA)~\citep{Beck2009} for convex-composite single-objective problems, has also been considered, along with a proof of the merit function's~$O(1 / k^2)$ convergence rate. However, the momentum factor used there is classical ($t_1 = 1, t_{k + 1} = \sqrt{t_k^2 + 1 / 4} + 1 / 2$), and the iterates' convergence is not proven. This paper generalizes the associated factor by~$t_1 = 1, t_{k + 1} = \sqrt{t_k^2 - a t_k + b} + 1 / 2$ with hyperparameters~$a \in [0, 1), b \in [a^2 / 4, 1 / 4]$. This is new even in the single-objective context, and it generalizes well-known factors. For example, when~$a = 0$ and~$b = 1 / 4$, it reduces to~$t_1 = 1, t_{k + 1} = \sqrt{t_k^2 + 1 / 4} + 1 / 2$, proposed in~\cite{Nesterov1983,Beck2009}, and when~$b = a^2 / 4$, it gives~$t_k = (1 - a) k / 2 + (1 + a) / 2$, suggested in~\cite{Chambolle2015,Attouch2016,Attouch2018,Su2016}. We show that the merit function converges to zero with rate~$O(1 / k^2)$ for any~$(a, b)$. In addition, we prove the iterates' convergence to a weak Pareto solution when~$a > 0$. As discussed in \cref{sec: convergence sequence}, this suggests that the proposed method might achieve finite-iteration manifold (active set) identification~\citep{Sun2019} without the assumption of strong convexity. Furthermore, we carry out numerical experiments with various~$(a, b)$ and observe that some~$(a, b)$ yield better results than the classical factors. The outline of this paper is as follows. We present some notations and definitions used in this paper in \cref{sec:notations}. \Cref{sec: acc prox} recalls the accelerated proximal gradient method for~\cref{eq:MOP} and its associated results. We generalize the momentum factor and prove that it preserves an~$O(1 / k^2)$ convergence rate in \cref{sec: generalization}, and we demonstrate the convergence of the iterates in \cref{sec: convergence sequence}. Finally, \cref{sec: experiments} provides numerical experiments and compares the numerical performances depending on the hyperparameters. 
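As a quick numerical illustration (ours, not part of the original analysis), the recursion $t_1 = 1$, $t_{k+1} = \sqrt{t_k^2 - a t_k + b} + 1/2$ can be tabulated for a few choices of~$(a, b)$; in particular, for~$b = a^2/4$ the computed values agree with the closed form~$t_k = (1-a)k/2 + (1+a)/2$ mentioned above.
\begin{verbatim}
# Illustrative sketch (ours): the generalized momentum factor
#   t_1 = 1,  t_{k+1} = sqrt(t_k^2 - a*t_k + b) + 1/2,
# with a in [0, 1) and b in [a^2/4, 1/4].
import math

def momentum_sequence(a, b, n):
    """Return t_1, ..., t_n."""
    t = [1.0]
    for _ in range(n - 1):
        t.append(math.sqrt(t[-1] ** 2 - a * t[-1] + b) + 0.5)
    return t

# Classical choice (a, b) = (0, 1/4):
print(momentum_sequence(0.0, 0.25, 5))   # [1.0, 1.618..., 2.193..., 2.749..., 3.294...]

# For b = a^2/4 the recursion collapses to t_k = (1 - a)*k/2 + (1 + a)/2:
a = 0.5
ts = momentum_sequence(a, a ** 2 / 4, 10)
closed = [(1 - a) * k / 2 + (1 + a) / 2 for k in range(1, 11)]
assert max(abs(t - c) for t, c in zip(ts, closed)) < 1e-12
\end{verbatim}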
\ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Conclusion} \label{sec: conclusion} We have generalized the momentum factor of the multi-objective accelerated proximal gradient algorithm~\citep{Tanabe2022a} in a form that is new even in the single-objective context and includes the known FISTA momentum factors~\citep{Beck2009,Chambolle2015}. Furthermore, with the proposed momentum factor, we proved under reasonable assumptions that the algorithm has an~$O(1/k^2)$ convergence rate and that the iterates converge to weak Pareto solutions. Moreover, the numerical results reinforced these theoretical properties and suggested the potential for our new momentum factor to improve the performance. As we mentioned in \cref{sec: convergence sequence}, our proposed method has the potential to achieve finite-time manifold (active set) identification~\citep{Sun2019} without the assumption of strong convexity (or its generalizations such as PL conditions or error bounds~\citep{Karimi2016}). Moreover, we used a single update rule for~$t_k$ at every iteration in this work, but adaptively changing the rule at each iteration is conceivable. It might also be interesting to estimate the Lipschitz constant simultaneously with that change, as in~\cite{Scheinberg2014}. In addition, an extension to an inexact scheme, as in~\cite{Villa2013}, would be significant. Those are issues to be addressed in the future. \ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Preliminaries} \label{sec: preliminaries} \subsection{Definitions and notations} \label{sec:notations} For every natural number~$d$, denote the~$d$-dimensional real space by~$\setR^d$, and define \[ \setRpos^d \coloneqq \Set*{v \in \setR^d}{v_i \ge 0, i = 1, \dots, d} .\] This induces the partial orders: for any~$v^1, v^2 \in \setR^d$,~$v^1 \le v^2$ (alternatively, $v^2 \ge v^1$) if~$v^2 - v^1 \in \setRpos^d$ and~$v^1 < v^2$ (alternatively,~$v^2 > v^1$) if~$v^2 - v^1 \in \interior \setRpos^d$. In other words, $v^1 \le v^2$ and~$v^1 < v^2$ mean that~$v^1_i \le v^2_i$ and~$v^1_i < v^2_i$ for all~$i = 1, \dots, d$, respectively. Furthermore, let~$\innerp*{\cdot}{\cdot}$ be the Euclidean inner product in~$\setR^d$, i.e.,~$\innerp*{v^1}{v^2} \coloneqq \sum_{i = 1}^d {v^1_i v^2_i}$, and let~$\norm*{ \cdot }$ be the Euclidean norm, i.e., $\norm*{v} \coloneqq \sqrt{\innerp*{v}{v}}$. Moreover, we define the $\ell_1$-norm and~$\ell_\infty$-norm by~$\norm*{v}_1 \coloneqq \sum_{i = 1}^{d} \abs*{v_i}$ and~$\norm*{v}_\infty \coloneqq \max_{i = 1, \dots, d} \abs*{v_i}$, respectively. We introduce some concepts used in the problem~\cref{eq:MOP}. Recall that \[ \label{eq:weak Pareto} X^\ast \coloneqq \Set*{x^\ast \in \setR^n}{\text{There does not exist~$x \in \setR^n$ such that~$F(x) < F(x^\ast)$}} \] is the set of \emph{weakly Pareto optimal} solutions for~\cref{eq:MOP}. When~$m = 1$,~$X^\ast$ reduces to the optimal solution set.
Moreover, define the effective domain of~$F$ by \[ \dom F \coloneqq \Set*{x \in \setR^n}{F(x) < \infty} ,\] and write the level set of~$F$ on~$c \in \setR^m$ as \[ \label{eq:level set} \level_F(c) \coloneqq \Set*{x \in \setR^n}{F(x) \le c} .\] Furthermore, we express the image of~$A \subseteq \setR^n$ and the inverse image of~$B \subseteq (\setR \cup \set*{\infty})^m$ under~$F$ as \[ F(A) \coloneqq \Set*{F(x) \in \setR^m}{x \in A} \eqand F^{-1}(B) \coloneqq \Set*{x \in \setR^n}{F(x) \in B}, \] respectively. Finally, let us recall the merit function~$u_0 \colon \setR^n \to \setR \cup \set*{\infty}$ for~\cref{eq:MOP} proposed in~\cite{Tanabe2022b}: \[ \label{eq:u_0} u_0(x) \coloneqq \sup_{z \in \setR^n} \min_{i = 1, \dots, m} [ F_i(x) - F_i(z) ] ,\] which returns zero at optimal solutions and strictly positive values otherwise. The following theorem shows that $u_0$ is a merit function in the Pareto sense. \begin{theorem}~\cite[Theorem 3.1]{Tanabe2022b} \label{thm:merit Pareto} Let~$u_0$ be defined by~\cref{eq:u_0}. Then,~$u_0(x) \ge 0$ for all~$x \in \setR^n$. Moreover,~$x \in \setR^n$ is weakly Pareto optimal for~\cref{eq:MOP} if and only if~$u_0(x) = 0$. \end{theorem} Note that when~$m = 1$, we have \[ u_0(x) = F_1(x) - F_1^\ast ,\] where~$F_1^\ast$ is the optimal objective value. Clearly, this is a merit function for scalar-valued optimization. \subsection{The accelerated proximal gradient method for multi-objective optimization} \label{sec: acc prox} This subsection recalls the accelerated proximal gradient method for~\cref{eq:MOP} proposed in~\cite{Tanabe2022a} and its main results. Recall that each~$F_i$ is the sum of a continuously differentiable function~$f_i$ and a closed, proper, and convex function~$g_i$, and that~$\nabla f_i$ is Lipschitz continuous with Lipschitz constant~$L_i > 0$. Define \[ \label{eq:L} L \coloneqq \max_{i = 1, \dots, m} L_i .\] The method solves the following subproblem at each iteration for given~$x \in \dom F$,~$y \in \setR^n$, and~$\ell \ge L$: \[ \label{eq:acc prox subprob} \min_{z \in \setR^n} \quad \varphi^\acc_\ell(z; x, y) ,\] where \[ \label{eq:varphi acc} \varphi^\acc_\ell(z; x, y) \coloneqq \max_{i = 1, \dots, m} \left[ \innerp*{\nabla f_i(y)}{z - y} + g_i(z) + f_i(y) - F_i(x) \right] + \frac{\ell}{2} \norm*{ z - y }^2 .\] From the strong convexity,~\cref{eq:acc prox subprob} has a unique optimal solution~$p^\acc_\ell(x, y)$, i.e., \[ \label{eq:p theta acc} p^\acc_\ell(x, y) \coloneqq \argmin_{z \in \setR^n} \varphi^\acc_\ell(z; x, y) .\] The following proposition characterizes weak Pareto optimality in terms of the mapping~$p^\acc_\ell$. \begin{proposition}~\cite[Proposition 4.1 (i)]{Tanabe2022a} \label{thm:acc prox termination} Let~$p^\acc_\ell(x, y)$ be defined by~\cref{eq:p theta acc}. Then,~$y \in \setR^n$ is weakly Pareto optimal for~\cref{eq:MOP} if and only if~$p^\acc_\ell(x, y) = y$ for some~$x \in \setR^n$. \end{proposition} This implies that using~$\norm*{p^\acc_\ell(x, y) - y}_\infty < \varepsilon$ for some~$\varepsilon > 0$ is reasonable as the stopping criteria. We state below the accelerated proximal gradient method for~\cref{eq:MOP}. \begin{algorithm}[hbtp] \caption{Accelerated proximal gradient method for~\cref{eq:MOP}} \label{alg:acc-pgm} \begin{algorithmic}[1] \Require Set~$x^0 = y^1 \in \dom F, \ell \ge L, \varepsilon > 0$. 
\Ensure $x^\ast$: A weakly Pareto optimal point \State $k \gets 1$ \State $t_1 \gets 1$ \label{line:t ini} \While{$\norm*{p^\acc_\ell(x^{k - 1}, y^k) - y^k}_\infty \ge \varepsilon$} \State $x^k \gets p^\acc_\ell(x^{k - 1}, y^k)$ \State $t_{k + 1} \gets \sqrt{t_k^2 + 1 / 4} + 1/2$ \label{line:t rr} \State $\gamma_k \gets (t_k - 1) / t_{k + 1}$ \label{line:gamma} \State $y^{k + 1} \gets x^k + \gamma_k (x^k - x^{k - 1})$ \label{line:y} \State $k \gets k + 1$ \EndWhile \end{algorithmic} \end{algorithm} \Cref{alg:acc-pgm} generates~$\set*{x^k}$ such that~$\set*{u_0(x^k)}$ converges to zero with rate~$O(1 / k^2)$ under the following assumption. This assumption is also used to analyze the proximal gradient method without acceleration~\citep{Tanabe2022} and is not particularly strong, as suggested in~\citep[Remark 5.2]{Tanabe2022}. \begin{assumption}~\cite[Assumption 5.1]{Tanabe2022} \label{asm:bound} Let~$X^\ast$ and~$\level_F$ be defined by~\cref{eq:weak Pareto,eq:level set}, respectively. Then, for all~$x \in \level_F(F(x^0))$, there exists~$x^\ast \in X^\ast$ such that~$F(x^\ast) \le F(x)$ and \[ \label{eq:R} R \coloneqq \sup_{F^\ast \in F(X^\ast \cap \level_F(F(x^0)))} \inf_{z \in F^{-1}(\set*{F^\ast})} \norm*{z - x^0}^2 < \infty .\] \end{assumption} \begin{theorem}~\cite[Theorem 5.2]{Tanabe2022a} \label{thm:conv rate} Under \cref{asm:bound}, \cref{alg:acc-pgm} generates~$\set*{x^k}$ such that \[ u_0(x^k) \le \frac{2 \ell R}{(k + 1)^2} \forallcondition{k \ge 1} ,\] where~$R \ge 0$ is given by~\cref{eq:R}, and~$u_0$ is a merit function defined by~\cref{eq:u_0}. \end{theorem} The following corollary shows the global convergence of \cref{alg:acc-pgm}. \begin{corollary}~\cite[Corollary 5.2]{Tanabe2022a} \label{thm:accumulation point} Suppose that \cref{asm:bound} holds. Then, every accumulation point of~$\set*{x^k}$ generated by \cref{alg:acc-pgm} is weakly Pareto optimal for~\cref{eq:MOP}. \end{corollary} \ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Numerical experiments} \label{sec: experiments} This section compares the performance of \cref{alg:acc-pgm general} with various~$a$ and~$b$ against that of \cref{alg:acc-pgm} ($a = 0, b = 1 / 4$) through numerical experiments. We run all experiments in Python 3.9.9 on a machine with a 2.3 GHz Intel Core i7 CPU and 32 GB memory. For each example, we test the 15 hyperparameter pairs obtained by combining~$a = 0, 1 / 6, 1 / 4, 1 / 2, 3 / 4$ with~$b = a^2 / 4, (a^2 + 1) / 8, 1 / 4$, i.e., \[ (a, b) = \left\{ \begin{gathered} (0, 0), (0, 1 / 8), (0, 1 / 4),\\ (1 / 6, 1 / 144), (1 / 6, 37 / 288), (1 / 6, 1 / 4),\\ (1 / 4, 1 / 64), (1 / 4, 17 / 128), (1 / 4, 1 / 4), \\ (1 / 2, 1 / 16), (1 / 2, 5 / 32), (1 / 2, 1 / 4), \\ (3 / 4, 9 / 64), (3 / 4, 25 / 128), (3 / 4, 1 / 4) \end{gathered} \right\}, \] and we set~$\varepsilon = 10^{-5}$ for the stopping criterion.
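For reference, the above hyperparameter grid can be reproduced with a few lines of Python; the following sketch (illustrative only, using exact rational arithmetic) enumerates the 15 pairs and checks that each satisfies~$a \in [0, 1)$ and~$b \in [a^2 / 4, 1 / 4]$.
\begin{verbatim}
from fractions import Fraction as Fr

a_values = [Fr(0), Fr(1, 6), Fr(1, 4), Fr(1, 2), Fr(3, 4)]
pairs = [(a, b) for a in a_values
         for b in (a**2 / 4, (a**2 + 1) / 8, Fr(1, 4))]

for a, b in pairs:
    # each pair lies in the admissible range a in [0, 1), b in [a^2/4, 1/4]
    assert 0 <= a < 1 and a**2 / 4 <= b <= Fr(1, 4)

print(len(pairs))   # 15
print(pairs[4])     # (Fraction(1, 6), Fraction(37, 288))
\end{verbatim}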
\subsection{Artificial test problems (bi-objective and tri-objective)} First, we solve the multi-objective test problems in the form~\cref{eq:MOP} used in~\cite{Tanabe2022a}, which are modifications of problems from~\cite{Jin2001,Fliege2009}, whose objective functions are defined by \begin{align} &f_1(x) = \frac{1}{n} \norm*{x}^2, f_2(x) = \frac{1}{n} \norm*{x - 2}^2, g_1(x) = g_2(x) = 0, \tag{JOS1}\label{eq:JOS1} \\ &f_1(x) = \frac{1}{n} \norm*{x}^2, f_2(x) = \frac{1}{n} \norm*{x - 2}^2, g_1(x) = \frac{1}{n} \norm*{x}_1, g_2(x) = \frac{1}{2n} \norm*{x - 1}_1, \label{eq:JOS1_L1} \tag{JOS1-L1}\\ &\left\{\begin{aligned} f_1(x) &= \frac{1}{n^2} \sum_{i = 1}^{n} i (x_i - i)^4, f_2(x) = \exp \left( \sum_{i = 1}^{n} \frac{x_i}{n} \right) + \norm*{x}^2,\\ f_3(x) &= \frac{1}{n (n + 1)} \sum_{i = 1}^{n} i (n - i + 1) \exp (- x_i), g_1(x) = g_2(x) = g_3(x) = 0, \end{aligned} \right. \label{eq:FDS} \tag{FDS}\\ &\left\{\begin{aligned} f_1(x) &= \frac{1}{n^2} \sum_{i = 1}^{n} i (x_i - i)^4, f_2(x) = \exp \left( \sum_{i = 1}^{n} \frac{x_i}{n} \right) + \norm*{x}^2,\\ f_3(x) &= \frac{1}{n (n + 1)} \sum_{i = 1}^{n} i (n - i + 1) \exp (- x_i), g_1(x) = g_2(x) = g_3(x) = \indicator_{\setRpos^n}(x), \end{aligned} \right.\label{eq:FDS_CONSTRAINED} \tag{FDS-CON} \end{align} where~$x \in \setR^n$,~$n = 50$, and~$\indicator_{\setRpos^n}$ is the indicator function~\cref{eq:indicator} of the nonnegative orthant. We choose~$1000$ initial points, common to all pairs~$(a, b)$, drawn randomly from a uniform distribution between~$\underline{c}$ and~$\overline{c}$, where~$\underline{c} = (-2, \dots, -2)^\T$ and~$\overline{c} = (4, \dots, 4)^\T$ for~\cref{eq:JOS1,eq:JOS1_L1},~$\underline{c} = (-2, \dots, -2)^\T$ and~$\overline{c} = (2, \dots, 2)^\T$ for~\cref{eq:FDS}, and~$\underline{c} = (0, \dots, 0)^\T$ and~$\overline{c} = (2, \dots, 2)^\T$ for~\cref{eq:FDS_CONSTRAINED}. Moreover, we use backtracking for updating~$\ell$, with~$1$ as the initial value of~$\ell$ and~$2$ as the factor by which~$\ell$ is multiplied at each backtracking step (cf.~\cite[Remark~4.1~(v)]{Tanabe2022a}). Furthermore, at each iteration, we transform the subproblem~\cref{eq:acc prox subprob} into its dual as suggested in~\cite{Tanabe2022a} and solve it with the trust-region interior point method~\citep{Byrd1999} using the scientific library SciPy. \Cref{fig:Pareto,tab:Average computational costs} present the experimental results. \Cref{fig:Pareto} plots the solutions only for the cases~$(a, b) = (0, 1 / 4), (3 / 4, 1 / 4)$, but other combinations also yield similar plots, covering a wide range of Pareto solutions. \Cref{tab:Average computational costs} shows that the new momentum factors are competitive with the existing ones ($(a, b) = (0, 1/4)$ or $b = a^2/4$) and outperform them in some cases.
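As a concrete illustration of the problem data only (the solver itself relies on the dual subproblem and SciPy as described above), the smooth parts of~\cref{eq:JOS1} and their gradients, together with an initial point drawn as above, can be sketched as follows.
\begin{verbatim}
import numpy as np

n = 50

def f_jos1(x):
    """Smooth objectives of JOS1: (||x||^2 / n, ||x - 2||^2 / n)."""
    return np.array([x @ x / n, (x - 2) @ (x - 2) / n])

def grad_jos1(x):
    """Gradients stacked row-wise; each gradient is 2/n-Lipschitz."""
    return np.vstack([2 * x / n, 2 * (x - 2) / n])

rng = np.random.default_rng(0)
x0 = rng.uniform(-2.0, 4.0, size=n)     # initial point as described above
print(f_jos1(x0), grad_jos1(x0).shape)  # two objective values, shape (2, 50)
\end{verbatim}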
\begin{figure}[htbp] \centering \begin{minipage}[b]{.49\hsize} \centering \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.66\width} {.8\height} 0 0}, clip, width=\linewidth]{figs/JOS1_ab.pdf} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.66\width} 0 0 {.8\height}}, clip, width=\linewidth]{figs/JOS1_ab.pdf} \end{minipage} \subcaption{\cref{eq:JOS1}} \label{fig:JOS1} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.66\width} {.8\height} 0 0}, clip, width=\linewidth]{figs/JOS1_L1_ab.pdf} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.66\width} 0 0 {.8\height}}, clip, width=\linewidth]{figs/JOS1_L1_ab.pdf} \end{minipage} \subcaption{\cref{eq:JOS1_L1}} \label{fig:JOS1_L1} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.605\width} {.8\height} {.014\width} 0}, clip, width=\linewidth]{figs/FDS_ab.pdf} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.605\width} 0 {.014\width} {.8\height}}, clip, width=\linewidth]{figs/fDS_ab.pdf} \end{minipage} \subcaption{\cref{eq:FDS}} \label{fig:FDS} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.605\width} {.8\height} {.014\width} 0}, clip, width=\linewidth]{figs/FDS_CONSTRAINED_ab.pdf} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \adjincludegraphics[trim={{.605\width} 0 {.014\width} {.8\height}}, clip, width=\linewidth]{figs/FDS_CONSTRAINED_ab.pdf} \end{minipage} \subcaption{\cref{eq:FDS_CONSTRAINED}} \label{fig:FDS_CON} \end{minipage} \caption{Pareto solutions obtained with some~$(a, b)$} \label{fig:Pareto} \end{figure} \begin{table}[htbp] \centering \ra{1.15} \caption{Average computational costs to solve the multi-objective examples} \label{tab:Average computational costs} \begin{minipage}{.49\hsize} \centering \subcaption{\cref{eq:JOS1}} \begin{tabular}{@{}cccc@{}} \toprule $a$ & $b$ & Time [\si{\second}] & Iterations \\ \midrule \csvreader[no head,late after line=\\]{data_complexity/JOS1_ab.csv} {1=\a, 2=\b,3=\totaltime,4=\iterationcounts} { $\a$ & $\b$ & \totaltime & \iterationcounts} \midrule \end{tabular} \end{minipage} \begin{minipage}{.49\hsize} \centering \subcaption{\cref{eq:JOS1_L1}} \begin{tabular}{@{}cccc@{}} \toprule $a$ & $b$ & Time [\si{\second}] & Iterations \\ \midrule \csvreader[no head,late after line=\\]{data_complexity/JOS1_L1_ab.csv} {1=\a, 2=\b,3=\totaltime,4=\iterationcounts} { $\a$ & $\b$ & \totaltime & \iterationcounts} \bottomrule \end{tabular} \end{minipage} \begin{minipage}{.49\hsize} \centering \subcaption{\cref{eq:FDS}} \begin{tabular}{@{}cccc@{}} \toprule $a$ & $b$ & Time [\si{\second}] & Iterations \\ \midrule \csvreader[no head,late after line=\\]{data_complexity/FDS_ab.csv} {1=\a, 2=\b,3=\totaltime,4=\iterationcounts} { $\a$ & $\b$ & \totaltime & \iterationcounts} \bottomrule \end{tabular} \end{minipage} \begin{minipage}{.49\hsize} \centering \subcaption{\cref{eq:FDS_CONSTRAINED}} \begin{tabular}{@{}cccc@{}} \toprule $a$ & $b$ & Time [\si{\second}] & Iterations \\ \midrule \csvreader[no head,late after line=\\]{data_complexity/FDS_CONSTRAINED_ab.csv} {1=\a, 2=\b,3=\totaltime,4=\iterationcounts} { $\a$ & $\b$ & \totaltime & \iterationcounts} \bottomrule \end{tabular} \end{minipage} \end{table} \subsection{Image deblurring (single-objective)} Since 
our proposed momentum factor is also new in the single-objective context, we also tackle deblurring the cameraman test image via a single-objective $\ell_2$-$\ell_1$ minimization, inspired by~\cite{Beck2009}. In detail, as shown in \cref{fig:cameraman}, to a~$256 \times 256$ cameraman test image with each pixel scaled to~$[0,1]$, we generate an observed image by applying a Gaussian blur of size~$9 \times 9$ and standard deviation $4$ and adding zero-mean white Gaussian noise with standard deviation~$10^{-3}$. \begin{figure}[htpb] \centering \begin{minipage}[b]{.45\hsize} \centering \includegraphics[width=0.8\textwidth]{figs/cameraman_original.png} \subcaption{Original} \label{fig:cameraman:original} \end{minipage} \begin{minipage}[b]{.45\hsize} \centering \includegraphics[width=0.8\textwidth]{figs/cameraman_blurred_and_noisy.png} \subcaption{Blurred and noisy} \label{fig:cameraman:blurred_and_noisy} \end{minipage} \caption{Deblurring of the cameraman} \label{fig:cameraman} \end{figure} Letting~$\theta, B,$ and~$W$ be the observed image, the blur matrix, and the inverse of the Haar wavelet transform, respectively, consider the single-objective problem~\cref{eq:MOP} with~$m = 1$ and \[ \label{eq:cam_deblur} \tag{CAM-DEBLUR} f_1(x) \coloneqq \norm*{BWx - \theta}^2 \eqand g_1(x)=\lambda \norm*{x}_1 ,\] where~$\lambda \coloneqq 2 \times 10^{-5}$ is a regularization parameter. Unlike in the previous subsection, we can compute the Lipschitz constant of~$\nabla f_1$ by calculating the eigenvalues of~$(BW)^\T(BW)$ via the two-dimensional cosine transform~\citep{Hansen2006}, so we keep~$\ell$ fixed at this value throughout. Moreover, we use the observed image's wavelet transform as the initial point. \Cref{fig:deblurred} shows the image reconstructed from the obtained solution. The images produced with all hyperparameter pairs are similar, so we present only~$(a, b) = (0, 1 / 4)$ and~$(1 / 2, 1 / 4)$. Moreover, we summarize the numerical performance in \cref{tab:cam_deblur,fig:cameraman_plot}. As in the previous subsection, this example also suggests that our new momentum factors may occasionally improve the algorithm's performance even for single-objective problems.
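For~$m = 1$, the minimizer of~\cref{eq:varphi acc} reduces to the classical proximal gradient step~$\mathrm{prox}_{g_1 / \ell}(y - \nabla f_1(y) / \ell)$, which for~$g_1 = \lambda \norm*{\cdot}_1$ is soft-thresholding. The following minimal Python sketch (illustrative only; \texttt{grad\_f} is an assumed callable that would be assembled from the operators~$B$ and~$W$, which are not implemented here) shows one such step combined with the generalized momentum update~$t_{k + 1} = \sqrt{t_k^2 - a t_k + b} + 1 / 2$ discussed in the next section.
\begin{verbatim}
import numpy as np

lam = 2e-5                               # regularization parameter lambda

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def acc_prox_step(x_prev, y, grad_f, ell, t, a=0.5, b=0.25):
    """One iteration: proximal step at y, then generalized momentum update."""
    x = soft_threshold(y - grad_f(y) / ell, lam / ell)
    t_next = np.sqrt(t**2 - a * t + b) + 0.5   # generalized momentum factor
    gamma = (t - 1.0) / t_next
    y_next = x + gamma * (x - x_prev)
    return x, y_next, t_next

# Dummy usage with a placeholder gradient (stands in for 2 (BW)^T (BW x - theta)).
grad_f = lambda v: v
x, y, t = np.zeros(4), np.zeros(4), 1.0
x, y, t = acc_prox_step(x, y, grad_f, ell=1.0, t=t)
\end{verbatim}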
\begin{figure}[htpb] \centering \begin{minipage}[b]{.49\hsize} \centering \includegraphics[width=\linewidth]{figs/cameraman_deblurred_2.png} \subcaption{$(a, b) = (0, 1 / 4)$} \end{minipage} \begin{minipage}[b]{.49\hsize} \centering \includegraphics[width=\linewidth]{figs/cameraman_deblurred_11.png} \subcaption{$(a, b) = (1 / 2, 1 / 4)$} \end{minipage} \caption{Deblurred images} \label{fig:deblurred} \end{figure} \begin{table}[htbp] \centering \ra{1.15} \caption{Computational costs for the image deblurring} \label{tab:cam_deblur} \begin{tabular}{@{}cccc@{}} \toprule $a$ & $b$ & Total time [\si{\second}] & Iteration counts \\ \midrule \csvreader[no head,late after line=\\]{data_complexity/cameraman_ab.csv} {1=\a, 2=\b,3=\totaltime,4=\iterationcounts} { $\a$ & $\b$ & \totaltime & \iterationcounts} \bottomrule \end{tabular} \end{table} \begin{figure}[htpb] \centering \includegraphics[width=\textwidth]{figs/cameraman_plot.pdf} \caption{Values of~$u_0(x^k) = F_1(x) - F_1(x^\ast)$, where~$x^\ast$ is the optimal solution estimated from the original image} \label{fig:cameraman_plot} \end{figure} \ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Generalization of the momentum factor and convergence rate analysis} \label{sec: generalization} This section generalizes the momentum factor~$\set*{t_k}$ used in \cref{alg:acc-pgm} and shows that the~$O(1 / k^2)$ convergence rate also holds in that case. First, we describe below the algorithm in which we replace \cref{line:t rr} of \cref{alg:acc-pgm} by a formula using given constants~$a \in [0, 1)$ and~$b \in [a^2 / 4, 1 / 4]$: \begin{algorithm}[hbtp] \caption{Accelerated proximal gradient method with general stepsizes for~\cref{eq:MOP}} \label{alg:acc-pgm general} \begin{algorithmic}[1] \Require Set~$x^0 = y^1 \in \dom F, \ell \ge L, \varepsilon > 0, a \in [0, 1), b \in [a^2 / 4, 1 / 4]$. \Ensure $x^\ast$: A weakly Pareto optimal point \State $k \gets 1$ \State $t_1 \gets 1$ \label{line:t ini general} \While{$\norm*{p^\acc_\ell(x^{k - 1}, y^k) - y^k}_\infty \ge \varepsilon$} \State $x^k \gets p^\acc_\ell(x^{k - 1}, y^k)$ \State $t_{k + 1} \gets \sqrt{t_k^2 - a t_k + b} + 1/2$ \label{line:t rr general} \State $\gamma_k \gets (t_k - 1) / t_{k + 1}$ \label{line:gamma general} \State $y^{k + 1} \gets x^k + \gamma_k (x^k - x^{k - 1})$ \label{line:y general} \State $k \gets k + 1$ \EndWhile \end{algorithmic} \end{algorithm} The sequence~$\set*{t_k}$ defined in~\cref{line:t ini general,line:t rr general} of \cref{alg:acc-pgm general} generalizes the well-known momentum factors in single-objective accelerated methods. For example, when~$a = 0$ and~$b = 1 / 4$, it coincides with the one in \cref{alg:acc-pgm} and with the original FISTA factor~\citep{Nesterov1983,Beck2009} ($t_1 = 1$ and~$t_{k + 1} = (1 + \sqrt{1 + 4 t_k^2}) / 2$). Moreover, if~$b = a^2 / 4$, then~$\set*{t_k}$ has the general term~$t_k = (1 - a) k / 2 + (1 + a) / 2$, which corresponds to the one used in~\cite{Chambolle2015,Su2016,Attouch2016,Attouch2018}. This means that our generalization allows finer tuning of the algorithm by varying~$a$ and~$b$. We present below the main theorem of this section. \begin{theorem} \label{thm:main theorem of stepsize section} Let~$\set*{x^k}$ be a sequence generated by \cref{alg:acc-pgm general} and recall that~$u_0$ is given by~\cref{eq:u_0}.
Then, the following two statements hold: \begin{enumerate} \item $F_i(x^k) \le F_i(x^0)$ for all~$i = 1, \dots, m$ and~$k \ge 0$; \label{thm:main theorem of stepsize section:less than x0} \item $u_0(x^k) = O(1 / k^2)$ as $k \to \infty$ under \cref{asm:bound}. \label{thm:main theomrem of stepsize section:O(1/k2)} \end{enumerate} \end{theorem} \subCref{thm:main theorem of stepsize section:less than x0} means that~$\set*{x^k} \subseteq \level_F(F(x^0))$, where~$\level_F$ denotes the level set of~$F$ (cf.~\cref{eq:level set}). Note, however, that the objective functions are generally not monotonically non-increasing. \subCref{thm:main theomrem of stepsize section:O(1/k2)} gives the global convergence rate. Before proving \cref{thm:main theorem of stepsize section}, let us give several lemmas. First, we present some properties of~$\set*{t_k}$ and~$\set*{\gamma_k}$. \begin{lemma} \label{thm:t} Let~$\set*{t_k}$ and~$\set*{\gamma_k}$ be defined by \cref{line:t ini general,line:t rr general,line:gamma general} in \cref{alg:acc-pgm general} for arbitrary~$a \in [0, 1)$ and~$b \in [a^2 / 4, 1 / 4]$. Then, the following inequalities hold for all~$k \ge 1$. \begin{enumerate} \item $t_{k + 1} \ge t_k + \dfrac{1 - a}{2}$ and~$t_k \ge \dfrac{1 - a}{2} k + \dfrac{1 + a}{2}$; \label{thm:t:t geq} \item $t_{k + 1} \le t_k + \dfrac{1 - a + \sqrt{4b - a^2}}{2}$ and~$t_k \le \dfrac{1 - a + \sqrt{4b - a^2}}{2} (k - 1) + 1 \le k$; \label{thm:t:t leq} \item $t_k^2 - t_{k + 1}^2 + t_{k + 1} = a t_k - b + \dfrac{1}{4} \ge a t_k$; \label{thm:t:t over-relax geq} \item $0 \le \gamma_k \le \dfrac{k - 1}{k + 1 / 2}$; \label{thm:t:gamma} \item $1 - \gamma_k^2 \ge \dfrac{1}{t_k}$. \label{thm:t:t moment} \end{enumerate} \end{lemma} \begin{proof} \sublabelcref{thm:t:t geq}: From the definition of~$\set*{t_k}$, we have \[ \label{eq:t cs} t_{k + 1} = \sqrt{t_k^2 - a t_k + b} + \frac{1}{2} = \sqrt{\left( t_k - \frac{a}{2} \right)^2 + \left( b - \frac{a^2}{4} \right)} + \frac{1}{2} .\] Since~$b \ge a^2 / 4$, we get \[ t_{k + 1} \ge \abs*{t_k - \frac{a}{2}} + \frac{1}{2} .\] Since~$t_1 = 1 \ge a / 2$, we can quickly see that~$t_k \ge a / 2$ for any~$k$ by induction. Thus, we have \[ t_{k + 1} \ge t_k + \frac{1 - a}{2} .\] Applying the above inequality recursively, we obtain \[ t_k \ge \frac{1 - a}{2} (k - 1) + t_1 = \frac{1 - a}{2} k + \frac{1 + a}{2} .\] \sublabelcref{thm:t:t leq}: From~\cref{eq:t cs} and the relation~$\sqrt{\alpha + \beta} \le \sqrt{\alpha} + \sqrt{\beta}$ with~$\alpha, \beta \ge 0$, we get the first inequality. Using it recursively, it follows that \[ t_k \le \frac{1 - a + \sqrt{4 b - a^2}}{2} (k - 1) + t_1 = \frac{1 - a + \sqrt{4 b - a^2}}{2} (k - 1) + 1 .\] Since~$a \in [0, 1), b \in [a^2 / 4, 1 / 4]$, we observe that \[ \frac{1 - a + \sqrt{4 b - a^2}}{2} \le \frac{1 - a + \sqrt{1 - a^2}}{2} \le 1 .\] Hence, the above two inequalities lead to the desired result. \sublabelcref{thm:t:t over-relax geq}: An easy computation shows that \[ \begin{split} t_k^2 - t_{k + 1}^2 + t_{k + 1} &= t_k^2 - \left[ \sqrt{t_k^2 - a t_k + b} + \frac{1}{2} \right]^2 + \sqrt{t_k^2 - a t_k + b} + \frac{1}{2} \\ &= a t_k - b + \frac{1}{4} \ge a t_k ,\end{split} \] where the inequality holds since~$b \le 1 / 4$. \sublabelcref{thm:t:gamma}: The first inequality is clear from the definition of~$\gamma_k$ since \subcref{thm:t:t geq} yields~$t_k \ge 1$.
Again, the definition of~$\gamma_k$ and \subcref{thm:t:t geq} give \[ \gamma_k = \frac{t_k - 1}{t_{k + 1}} \le \frac{t_k - 1}{t_k + (1 - a) / 2} = 1 - \frac{3 - a}{2 t_k + 1 - a} .\] Combining with \subcref{thm:t:t leq}, we get \[ \label{eq:gamma} \begin{split} \gamma_k &\le 1 - \frac{3 - a}{\left(1 - a + \sqrt{4 b - a^2} \right)(k - 1) + 3 - a} \\ &= \frac{\left( 1 - a + \sqrt{4 b - a^2} \right) (k - 1)}{\left(1 - a + \sqrt{4 b - a^2} \right)(k - 1) + 3 - a} \\ &= \frac{k - 1}{k - 1 + (3 - a) / \left( 1 - a + \sqrt{4 b - a^2} \right)} .\end{split} \] On the other hand, it follows that \[ \label{eq:min a b} \min_{a \in [0, 1), b \in [a^2 / 4, 1 / 4]} \frac{3 - a}{1 - a + \sqrt{4 b - a^2}} = \min_{a \in [0, 1)} \frac{3 - a}{1 - a + \sqrt{1 - a^2}} = \frac{3}{2} ,\] where the second equality follows from the monotonic non-decreasing property implied by \[ \odv*{\left(\frac{3 - a}{1 - a + \sqrt{1 - a^2}}\right)}{a} = \frac{2 \sqrt{1 - a^2} + 3a - 1}{\left( \sqrt{1 - a^2} - a + 1 \right)^2 \sqrt{1 - a^2}} > 0 \forallcondition{a \in [0, 1)} .\] Combining~\cref{eq:gamma,eq:min a b}, we obtain~$\gamma_k \le (k - 1) / (k + 1 / 2)$. \sublabelcref{thm:t:t moment}: \subCref{thm:t:t geq} implies that~$t_{k + 1} > t_k \ge 1$. Thus, the definition of~$\gamma_k$ implies that \[ 1 - \gamma_k^2 = 1 - \left( \frac{t_k - 1}{t_{k + 1}} \right)^2 \ge 1 - \left( \frac{t_k - 1}{t_k} \right)^2 = \frac{2 t_k - 1}{t_k^2} \ge \frac{2 t_k - t_k}{t_k^2} = \frac{1}{t_k} .\] \end{proof} As in~\cite{Tanabe2022a}, we also introduce~$\sigma_k \colon \setR^n \to \setR \cup \set*{- \infty}$ and~$\rho_k \colon \setR^n \to \setR$ for~$k \ge 0$ as follows, which assist the analysis: \[ \label{eq:sigma rho} \begin{gathered} \sigma_k(z) \coloneqq \min_{i = 1, \dots, m}\left[ F_i(x^k) - F_i(z) \right], \\ \rho_k(z) \coloneqq \norm*{t_{k + 1} x^{k + 1} - (t_{k + 1} - 1) x^k - z}^2 .\end{gathered} \] The following lemma on~$\sigma_k$ is helpful in the subsequent discussions. \begin{lemma}~\cite[Lemma 5.1]{Tanabe2022a} \label{thm:sigma} Let~$\set*{x^k}$ and~$\set*{y^k}$ be sequences generated by \cref{alg:acc-pgm general}. Then, the following inequalities hold for all~$z \in \setR^n$ and~$k \ge 0$: \begin{enumerate} \item \label{thm:sigma:1} $\begin{multlined}[t] \sigma_{k + 1}(z) \le - \frac{\ell}{2} \left( 2 \innerp*{x^{k + 1} - y^{k + 1}}{y^{k + 1} - z} + \norm*{x^{k + 1} - y^{k + 1}}^2 \right)\\ \displaystyle - \frac{\ell - L}{2} \norm*{x^{k + 1} - y^{k + 1}}^2;\end{multlined}$ \item \label{thm:sigma:2}$\begin{multlined}[t] \sigma_k(z) - \sigma_{k + 1}(z) \ge \frac{\ell}{2} \left( 2 \innerp*{x^{k + 1} - y^{k + 1}}{y^{k + 1} - x^k} + \norm*{x^{k + 1} - y^{k + 1}}^2 \right) \\ + \frac{\ell - L}{2} \norm*{x^{k + 1} - y^{k + 1}}^2 .\end{multlined}$ \end{enumerate} \end{lemma} Therefore, from \cref{thm:t:t moment}, we can obtain the following result quickly in the same way as in the proof of~\cite[Corollary 5.1]{Tanabe2022a}. \begin{lemma} \label{thm:sigma k1 k2} Let~$\set*{x^k}$ and~$\set*{y^k}$ be sequences generated by \cref{alg:acc-pgm general}. Then, we have \begin{multline} \sigma_{k_1}(z) - \sigma_{k_2}(z) \\ \ge \frac{\ell}{2} \left( \norm*{x^{k_2} - x^{k_2 - 1}}^2 - \norm*{x^{k_1} - x^{k_1 - 1}}^2 + \sum_{k = k_1}^{k_2 - 1} \frac{1}{t_k} \norm*{x^k - x^{k - 1}}^2 \right) \end{multline} for any~$k_2 \ge k_1 \ge 1$. \end{lemma} We can now show the first part of \cref{thm:main theorem of stepsize section}. 
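In passing, the inequalities in \cref{thm:t} are easy to confirm numerically; the following Python sketch (purely illustrative, with a small floating-point tolerance) checks them along the recursion for a few admissible pairs~$(a, b)$.
\begin{verbatim}
import math

def check_lemma(a, b, K=200, tol=1e-9):
    t = [1.0]                                   # t_1 = 1
    for _ in range(K):
        t.append(math.sqrt(t[-1]**2 - a * t[-1] + b) + 0.5)
    c = (1 - a + math.sqrt(4 * b - a**2)) / 2
    for k in range(1, K):
        tk, tk1 = t[k - 1], t[k]                # t_k and t_{k+1}
        gamma = (tk - 1.0) / tk1
        assert tk + (1 - a) / 2 - tol <= tk1 <= tk + c + tol
        assert (1 - a) / 2 * k + (1 + a) / 2 - tol <= tk <= k + tol
        assert abs(tk**2 - tk1**2 + tk1 - (a * tk - b + 0.25)) <= tol
        assert -tol <= gamma <= (k - 1) / (k + 0.5) + tol
        assert 1.0 - gamma**2 >= 1.0 / tk - tol
    return True

for a, b in [(0.0, 0.25), (0.5, 0.0625), (0.75, 0.25)]:
    assert check_lemma(a, b)
print("all inequalities of the lemma hold on the sampled range")
\end{verbatim}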
\begin{proof}[Proof of \cref{thm:main theorem of stepsize section:less than x0}] From \cref{thm:sigma k1 k2}, we can prove this part with similar arguments used in the proof of~\cite[Theorem~5.1]{Tanabe2022a}. \end{proof} The next step is to prepare the proof of \cref{thm:main theomrem of stepsize section:O(1/k2)}. First, we mention the following relation, used frequently hereafter: \begin{align} &\norm*{v^2 - v^1}^2 + 2 \innerp*{v^2 - v^1}{v^1 - v^3} = \norm*{ v^2 - v^3 }^2 - \norm*{v^1 - v^3}^2 , \label{eq:Pythagoras} \\ &\sum_{s=1}^r \sum_{p=1}^s A_p = \sum_{p=1}^r \sum_{s=p}^r A_p \label{eq:change sum} \end{align} for any vectors~$v^1, v^2, v^3$ and sequence~$\set*{A_p}$. With these, we show the lemma below, which is similar to~\cite[Lemma 5.2]{Tanabe2022a} but more complex due to the generalization of~$\set*{t_k}$. \begin{lemma} \label{thm:key relation} Let~$\set*{x^k}$ and~$\set*{y^k}$ be sequences generated by \cref{alg:acc-pgm general}. Also, let~$\sigma_k$ and~$\rho_k$ be defined by~\cref{eq:sigma rho}. Then, we have \begin{align} \MoveEqLeft \frac{\ell}{2} \norm*{x^0 - z}^2 \\ \ge{}& \frac{1}{1 - a} \left[ t_{k + 1}^2 - a t_{k + 1} + \left( \frac{1}{4} - b \right) k \right] \sigma_{k + 1}(z) \\ &+ \frac{\ell}{2 (1 - a)} \left[ a (t_{k + 1}^2 - t_{k + 1}) + \left( \frac{1}{4} - b \right) k \right] \norm*{x^{k + 1} - x^k}^2 \\ &+ \frac{\ell}{2 (1 - a)} \sum_{p = 1}^{k} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a(t_p - 1)}{t_p} \right] \norm*{x^p - x^{p - 1}}^2 \\ &+ \frac{\ell}{2} \rho_k(z) + \frac{\ell - L}{2} \sum_{p = 1}^{k} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 \end{align} for all~$k \ge 0$ and~$z \in \setR^n$. \end{lemma} \begin{proof} Let~$p \ge 1$ and~$z \in \setR^n$. Recall that \cref{thm:sigma} gives \begin{gather} \begin{multlined} - \sigma_{p + 1}(z) \ge \frac{\ell}{2} \left[ 2 \innerp*{x^{p + 1} - y^{p + 1}}{y^{p + 1} - z} + \norm*{x^{p + 1} - y^{p + 1}}^2 \right] \\ + \frac{\ell - L}{2} \norm*{x^{p + 1} - y^{p + 1}}^2, \end{multlined} \\ \begin{multlined} \sigma_p(z) - \sigma_{p + 1}(z) \ge \frac{\ell}{2} \left[ 2 \innerp*{x^{p + 1} - y^{p + 1}}{y^{p + 1} - x^p} + \norm*{x^{p + 1} - y^{p + 1}}^2 \right] \\ + \frac{\ell - L}{2} \norm*{x^{p + 1} - y^{p + 1}}^2 .\end{multlined} \end{gather} We then multiply the second inequality above by $(t_{p + 1} - 1)$ and add it to the first one: \begin{multline} (t_{p + 1} - 1) \sigma_p(z) - t_{p + 1} \sigma_{p + 1}(z) \\ \ge \frac{\ell}{2} \left[ t_{p + 1} \norm*{x^{p + 1} - y^{p + 1}}^2 + 2 \innerp*{x^{p + 1} - y^{p + 1}}{t_{p + 1} y^{p + 1} - (t_{p + 1} - 1)x^p - z} \right] \\ + \frac{\ell - L}{2} t_{p + 1} \norm*{x^{p + 1} - y^{p + 1}}^2 .\end{multline} Multiplying this inequality by~$t_{p + 1}$ and using the relation~$t_p^2 = t_{p + 1}^2 - t_{p + 1} + (a t_p - b + 1/4)$ (cf. 
\cref{thm:t:t over-relax geq}), we get \begin{multline} t_p^2 \sigma_p(z) - t_{p + 1}^2 \sigma_{p + 1}(z) \ge \frac{\ell}{2} \Bigl[ \norm*{t_{p + 1} (x^{p + 1} - y^{p + 1})}^2 \\ + 2 t_{p + 1} \innerp*{x^{p + 1} - y^{p + 1}}{t_{p + 1} y^{p + 1} - (t_{p + 1} - 1)x^p - z} \Bigr] \\ + \frac{\ell - L}{2} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 + \left( a t_p - b + \frac{1}{4} \right) \sigma_p(z) .\end{multline} Applying~\cref{eq:Pythagoras} to the right-hand side of the last inequality with \[ v^1 \coloneqq t_{p + 1} y^{p + 1}, \quad v^2 \coloneqq t_{p + 1} x^{p + 1}, \quad v^3 \coloneqq (t_{p + 1} - 1) x^p + z ,\] we get \begin{multline} t_p^2 \sigma_p(z) - t_{p + 1}^2 \sigma_{p + 1}(z) \\ \ge \frac{\ell}{2} \left[ \norm*{t_{p + 1} x^{p + 1} - (t_{p + 1} - 1) x^p - z}^2 - \norm*{t_{p + 1} y^{p + 1} - (t_{p + 1} - 1) x^p - z}^2 \right] \\ + \frac{\ell - L}{2} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 + \left( a t_p - b + \frac{1}{4} \right) \sigma_p(z) .\end{multline} Recall that~$\rho_p(z) \coloneqq \norm*{t_{p + 1} x^{p + 1} - (t_{p + 1} - 1) x^p - z}^2$. Then, considering the definition of~$y^{p + 1}$ given in \cref{line:y general} of \cref{alg:acc-pgm general}, we obtain \begin{multline} t_p^2 \sigma_p(z) - t_{p + 1}^2 \sigma_{p + 1}(z) \\ \ge \frac{\ell}{2} \left[ \rho_p(z) - \rho_{p - 1}(z) \right] + \frac{\ell - L}{2} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 + \left( a t_p - b + \frac{1}{4} \right) \sigma_p(z) .\end{multline} Now, let~$k \ge 0$. \Cref{thm:sigma k1 k2} with~$(k_1, k_2) = (p, k + 1)$ implies \begin{multline} t_p^2 \sigma_p(z) - t_{p + 1}^2 \sigma_{p + 1}(z) \ge \frac{\ell}{2} \left[ \rho_p(z) - \rho_{p - 1}(z) \right] \\ + \frac{\ell - L}{2} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 + \left( a t_p - b + \frac{1}{4} \right) \Biggl[ \sigma_{k + 1}(z) \\ + \frac{\ell}{2} \left( \norm*{x^{k + 1} - x^k}^2 - \norm*{x^p - x^{p - 1}}^2 + \sum_{r = p}^k \frac{1}{t_r} \norm*{x^r - x^{r - 1}}^2 \right) \Biggr] .\end{multline} Adding up the above inequality from~$p = 1$ to~$p = k$ and using the facts that~$t_1 = 1$ and~$\rho_0(z) = \norm*{x^1 - z}^2$, we obtain \begin{multline} \label{eq:key relation tmp} \sigma_1(z) - t_{k + 1}^2 \sigma_{k + 1}(z) \\ \ge \frac{\ell}{2} \left[ \rho_{k}(z) - \norm*{x^1 - z}^2 \right] + \frac{\ell - L}{2} \sum_{p = 1}^k t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 \\ + \left( a \sum_{p = 1}^{k} t_p + \left( \frac{1}{4} - b \right) k \right) \left[ \sigma_{k + 1}(z) + \frac{\ell}{2} \norm*{x^{k + 1} - x^k}^2 \right] \\ - \frac{\ell}{2} \sum_{p = 1}^{k} \left( a t_p - b + \frac{1}{4} \right) \norm*{x^p - x^{p - 1}}^2 \\ + \frac{\ell}{2} \sum_{p = 1}^{k} \left( a t_p - b + \frac{1}{4} \right) \sum_{r = p}^{k} \frac{1}{t_r} \norm*{x^r - x^{r - 1}}^2 .\end{multline} Let us write the last two terms of the right-hand side of~\cref{eq:key relation tmp} as~$S_1$ and~$S_2$, respectively.
\Cref{eq:change sum} yields \[ \begin{split} S_2 &= \frac{\ell}{2} \sum_{r = 1}^{k} \sum_{p = 1}^{r} \left( a t_p - b + \frac{1}{4} \right) \frac{1}{t_r} \norm*{x^r - x^{r - 1}}^2 \\ &= \frac{\ell}{2} \sum_{p = 1}^{k} \sum_{r = 1}^{p} \left( a t_r - b + \frac{1}{4} \right) \frac{1}{t_p} \norm*{x^p - x^{p - 1}}^2 .\end{split} \] Hence, it follows that \begin{multline} \label{eq:s_1 + s_2} S_1 + S_2 = \frac{\ell}{2} \sum_{p = 1}^{k} \left[ \frac{1}{t_p} \sum_{r = 1}^{p} \left( a t_r - b + \frac{1}{4} \right) - \left( a t_p - b + \frac{1}{4} \right) \right] \norm*{x^p - x^{p - 1}}^2 \\ = \frac{\ell}{2} \sum_{p = 1}^{k} \frac{1}{t_p} \left[ a \left( \sum_{r = 1}^{p - 1} t_r - t_p^2 + t_p \right) + \left( \frac{1}{4} - b \right) (p - t_p) \right] \norm*{x^p - x^{p - 1}}^2 .\end{multline} Again, $t_1 = 1$ gives \[ \begin{split} - t_p^2 + t_p &= \sum_{r = 1}^{p - 1} ( - t_{r + 1}^2 + t_{r + 1} + t_r^2 - t_r ) = \sum_{r = 1}^{p - 1} \left(- (1 - a) t_r - b + \frac{1}{4} \right) \\ &= - (1 - a) \sum_{r = 1}^{p - 1} t_r + \left( \frac{1}{4} - b \right) (p - 1) ,\end{split} \] where the second equality comes from \cref{thm:t:t over-relax geq}. Thus, we get \[ \label{eq:sum t} \sum_{r = 1}^{p - 1} t_r = \frac{t_p^2 - t_p}{1 - a} + \left( \frac{1}{4} - b \right) \frac{p - 1}{1 - a} .\] Substituting this into~\cref{eq:s_1 + s_2}, it follows that \begin{multline} S_1 + S_2 \\ = \frac{\ell}{2 (1 - a)} \sum_{p = 1}^{k} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a (t_p - 1)}{t_p} \right] \norm*{x^p - x^{p - 1}}^2 .\end{multline} Combining this with~\cref{eq:key relation tmp,eq:sum t}, we have \begin{align} \MoveEqLeft \sigma_1(z) - t_{k + 1}^2 \sigma_{k + 1}(z) \\ \ge{}& \frac{\ell}{2} \left[ \rho_k(z) - \norm*{x^1 - z}^2 \right] + \frac{\ell - L}{2} \sum_{p = 1}^{k} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 \\ &+ \frac{1}{1 - a} \left[ a (t_{k + 1}^2 - t_{k + 1}) + \left( \frac{1}{4} - b \right) k \right] \left[ \sigma_{k + 1}(z) + \frac{\ell}{2} \norm*{x^{k + 1} - x^k}^2 \right] \\ &+ \frac{\ell}{2 (1 - a)} \sum_{p = 1}^{k} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a (t_p - 1)}{t_p} \right] \norm*{x^p - x^{p - 1}}^2 .\end{align} Easy calculations give \begin{align} \MoveEqLeft \sigma_1(z) + \frac{\ell}{2} \norm*{x^1 - z}^2 \\ \ge{}& \frac{1}{1 - a} \left[ t_{k + 1}^2 - a t_{k + 1} + \left( \frac{1}{4} - b \right) k \right] \sigma_{k + 1}(z) \\ &+ \frac{\ell}{2 (1 - a)} \left[ a (t_{k + 1}^2 - t_{k + 1}) + \left( \frac{1}{4} - b \right) k \right] \norm*{x^{k + 1} - x^k}^2 \\ &+ \frac{\ell}{2 (1 - a)} \sum_{p = 1}^{k} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a(t_p - 1)}{t_p} \right] \norm*{x^p - x^{p - 1}}^2 \\ &+ \frac{\ell}{2} \rho_k(z) + \frac{\ell - L}{2} \sum_{p = 1}^{k} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 .\end{align} \Cref{thm:sigma:1} with~$k = 0$ and~$y^1 = x^0$ and~\cref{eq:Pythagoras} with~$(v^1, v^2, v^3) = (x^0, x^1, z)$ lead to \[ \sigma_1(z) \le - \frac{\ell}{2} \left[ \norm*{x^1 - z}^2 - \norm*{x^0 - z}^2 \right] - \frac{\ell - L}{2} \norm*{x^1 - y^1}^2 .\] From the above two inequalities and the fact that~$\ell \ge L$, we can derive the desired inequality.
\end{proof} Let us define the linear function~$P \colon \setR \to \setR$ and quadratic ones~$Q_1 \colon \setR \to \setR$,~$Q_2 \colon \setR \to \setR$, and~$Q_3 \colon \setR \to \setR$ by \[ \label{eq:P Q} \begin{aligned} &P(\alpha) \coloneqq \frac{a^2 (\alpha - 1)}{2},\\ &Q_1(\alpha) \coloneqq \frac{1 - a}{4} \alpha^2 + \left[ 1 - \frac{a}{2} + \frac{1 - 4 b}{4 (1 - a)} \right] \alpha + 1,\\ &Q_2(\alpha) \coloneqq \frac{a (1 - a)}{4} \alpha^2 + \left[ \frac{a}{2} + \frac{1 - 4 b}{4 (1 - a)} \right] \alpha, \\ &Q_3(\alpha) \coloneqq \left( \frac{1 - a}{2} \alpha + 1 \right)^2 .\end{aligned} \] The following lemma provides the key relation to evaluate the convergence rate of \cref{alg:acc-pgm general}. \begin{lemma} \label{thm:acc conv rate} Under \cref{asm:bound}, \cref{alg:acc-pgm general} generates a sequence~$\set*{x^k}$ such that \begin{multline} \frac{\ell R}{2} \ge Q_1(k) u_0(x^{k + 1}) + \frac{\ell}{2} Q_2(k) \norm*{x^{k + 1} - x^k}^2 + \frac{\ell}{2} \sum_{p = 1}^{k} P(p) \norm*{x^p - x^{p - 1}}^2 \\ + \frac{\ell - L}{2} \sum_{p = 1}^{k} Q_3(p) \norm*{x^{p + 1} - y^{p + 1}}^2 \end{multline} for all~$k \ge 0$, where~$R \ge 0$ and~$P, Q_1, Q_2, Q_3 \colon \setR \to \setR$ are given in~\cref{eq:R,eq:P Q}, respectively, and~$u_0$ is a merit function defined by~\cref{eq:u_0}. \end{lemma} \begin{proof} Let~$k \ge 0$. With similar arguments used in the proof of \cref{thm:conv rate} (see~\cite[Theorem 5.2]{Tanabe2022a}), we get \[ \sup_{F^\ast \in F(X^\ast \cap \level_F(F(x^0)))} \inf_{z \in F^{-1}(\set*{F^\ast})} \sigma_{k + 1}(z) = u_0(x^{k + 1}) .\] Since~$\rho_k(z) \ge 0$, \cref{thm:key relation} and the above equality lead to \begin{align} \frac{\ell R}{2} \ge{}& \frac{1}{1 - a} \left[ t_{k + 1}^2 - a t_{k + 1} + \left( \frac{1}{4} - b \right) k \right] u_0(x^{k + 1}) \\ &+ \frac{\ell}{2 (1 - a)} \left[ a (t_{k + 1}^2 - t_{k + 1}) + \left( \frac{1}{4} - b \right) k \right] \norm*{x^{k + 1} - x^k}^2 \\ &+ \frac{\ell}{2 (1 - a)} \sum_{p = 1}^{k} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a(t_p - 1)}{t_p} \right] \norm*{x^p - x^{p - 1}}^2 \\ &+ \frac{\ell - L}{2} \sum_{p = 1}^{k} t_{p + 1}^2 \norm*{x^{p + 1} - y^{p + 1}}^2 .\end{align} We now show that the coefficients of the four terms on the right-hand side can be bounded from below by the polynomials given in~\cref{eq:P Q}. 
First, by using the relation \[ \label{eq:t k+1 geq} t_{k + 1} \ge \frac{1 - a}{2} k + 1 \] obtained from \cref{thm:t:t geq} and~$a \in [0, 1)$, we have \begin{align} \MoveEqLeft \frac{1}{1 - a} \left[ t_{k + 1}^2 - a t_{k + 1} + \left( \frac{1}{4} - b \right) k \right] = \frac{1}{1 - a} \left[ t_{k + 1} (t_{k + 1} - a) + \left( \frac{1}{4} - b \right) k \right] \\ &\ge \frac{1}{1 - a} \left[ \left( \frac{1 - a}{2} k + 1 \right) \left( \frac{1 - a}{2} k + 1 - a \right) + \left( \frac{1}{4} - b \right) k \right] = Q_1(k) .\end{align} Again,~\cref{eq:t k+1 geq} gives \begin{align} \MoveEqLeft \frac{1}{1 - a} \left[ a (t_{k + 1}^2 - t_{k + 1}) + \left( \frac{1}{4} - b \right) k \right] = \frac{a}{1 - a} t_{k + 1} (t_{k + 1} - 1) + \frac{1 - 4 b}{4 (1 - a)} k \\ &\ge \frac{a}{1 - a} \left( \frac{1 - a}{2} k + 1 \right) \left( \frac{1 - a}{2} k \right) + \frac{1 - 4 b}{4 (1 - a)} k = Q_2(k) .\end{align} Moreover, since~$t_p \le p$ (cf.~\cref{thm:t:t leq}),~$t_k \ge 1$ (cf.~\cref{thm:t:t geq}), and~$b \in [a^2 / 4, 1 / 4]$, we obtain \[ \frac{1}{1 - a} \left[ a^2 (t_p - 1) + \left( \frac{1}{4} - b \right) \frac{p - t_p + a (t_p - 1)}{t_p} \right] \ge \frac{a^2}{1 - a} (t_p - 1) \ge P(p) .\] It is also clear from~\cref{eq:t k+1 geq} that \[ t_{p + 1}^2 \ge Q_3(p) .\] Thus, combining the above five inequalities, we get the desired inequality. \end{proof} Then, we can finally prove the main theorem. \begin{proof}[Proof of \cref{thm:main theomrem of stepsize section:O(1/k2)}] This follows from \cref{thm:acc conv rate} since~$Q_1(k) \ge (1 - a) k^2 / 4$, and hence~$u_0(x^{k + 1}) \le \ell R / (2 Q_1(k)) = O(1 / k^2)$ as~$k \to \infty$. \end{proof} \begin{remark} \Cref{thm:acc conv rate} also implies the following claims in addition to \cref{thm:main theomrem of stepsize section:O(1/k2)}: \begin{itemize} \item $O(1 / k^2)$ convergence rate of~$\set*{\norm*{x^{k + 1} - x^k}^2}$ when~$a > 0$; \item the convergence of the series~$\sum_{k} k \norm*{x^{k + 1} - x^k}^2$ when~$a > 0$; \item the convergence of the series~$\sum_{k} k^2 \norm*{x^k - y^k}^2$ when~$\ell > L$. \end{itemize} Note that the second one generalizes~\cite[Corollary~3.2]{Chambolle2015} for single-objective problems. \end{remark} \ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document} \documentclass[../acc_pgm_convergence_main]{subfiles} \newboolean{isMain} \setboolean{isMain}{false} \begin{document} \section{Convergence of the iterates} \label{sec: convergence sequence} While the last section shows that \cref{alg:acc-pgm general} has an~$O(1 / k^2)$ convergence rate like \cref{alg:acc-pgm}, this section proves the following theorem, which is stronger than \cref{thm:accumulation point} for \cref{alg:acc-pgm}: \begin{theorem} \label{thm:main convergence} Let~$\set*{x^k}$ be generated by \cref{alg:acc-pgm general} with~$a > 0$. Then, under \cref{asm:bound}, the following two properties hold: \begin{enumerate} \item \label{thm:main convergence:bound} $\set*{x^k}$ is bounded, and it has an accumulation point; \item \label{thm:main convergence:Pareto} $\set*{x^k}$ converges to a weak Pareto optimum for~\cref{eq:MOP}. \end{enumerate} \end{theorem} The latter claim is also significant in applications. For example, finite-time manifold (active set) identification, which detects the low-dimensional manifold to which the optimal solution belongs, essentially requires only the convergence of the generated sequence to a unique point rather than the strong convexity of the objective functions~\citep{Sun2019}. Again, we will prove \cref{thm:main convergence} after showing some lemmas.
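Before turning to the lemmas, we remark that the iterate behaviour asserted in \cref{thm:main convergence} is easy to observe numerically. The following toy Python sketch (a single-objective quadratic with~$m = 1$ and~$g_1 = 0$; purely illustrative and not part of the analysis) runs the generalized momentum scheme and reports the final step length and the distance to the minimizer.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 20))
Q = M.T @ M + 0.1 * np.eye(20)      # f_1(x) = 0.5 x^T Q x, minimizer x* = 0
L = np.linalg.eigvalsh(Q)[-1]       # Lipschitz constant of grad f_1

def run(a, b, iters=2000):
    x_prev = x = y = np.ones(20)
    t = 1.0
    for _ in range(iters):
        x_new = y - (Q @ y) / L                   # proximal step with g_1 = 0
        t_next = np.sqrt(t**2 - a * t + b) + 0.5  # generalized momentum
        y = x_new + (t - 1.0) / t_next * (x_new - x)
        x_prev, x, t = x, x_new, t_next
    return np.linalg.norm(x - x_prev), np.linalg.norm(x)

for a, b in [(0.0, 0.25), (0.5, 0.25)]:
    step, dist = run(a, b)
    print(f"a={a}: final step {step:.1e}, distance to minimizer {dist:.1e}")
\end{verbatim}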
First, we mention the following result, obvious from \cref{asm:bound,thm:main theorem of stepsize section:less than x0}. \begin{lemma} \label{thm:z exist} Let~$\set*{x^k}$ be generated by \cref{alg:acc-pgm general}. Then, for any~$k \ge 0$, there exists~$z \in X^\ast \cap \level_F(F(x^0))$ (see \cref{eq:weak Pareto,eq:level set} for the definitions of~$X^\ast$ and~$\level_F$) such that \[ \sigma_k(z) \ge 0 \eqand \norm*{z - x^0}^2 \le R ,\] where~$R \ge 0$ is given by~\cref{eq:R}. \end{lemma} The following lemma also contributes strongly to the proof of the main theorem. \begin{lemma} \label{thm:prod} Let~$\set*{\gamma_q}$ be defined by \cref{line:gamma} in \cref{alg:acc-pgm general}. Then, we have \[ \sum_{p = s}^r \prod_{q = s}^p \gamma_q \le 2 ( s - 1) \forallcondition{s, r \ge 1} .\] \end{lemma} \begin{proof} By using \Cref{thm:t:gamma}, we see that \[ \prod_{q = s}^{p} \gamma_q \le \prod_{q = s}^{p} \frac{q - 1}{q + 1 / 2} .\] Let~$\Gamma$ and~$\Beta$ denote the gamma and beta functions defined by \[ \label{eq:gamma and beta} \Gamma(\alpha) \coloneqq \int_{0}^{\infty} \tau^{\alpha - 1} \exp(-\tau) \odif{\tau} \eqand \Beta(\alpha, \beta) \coloneqq \int_{0}^{1} \tau^{\alpha - 1} (1 - \tau)^{\beta - 1} \odif{\tau} ,\] respectively. Applying the well-known properties: \[ \label{eq:gamma and beta properties} \Gamma(\alpha) = (\alpha - 1)!, \quad \Gamma(\alpha + 1) = \alpha \Gamma(\alpha), \eqand B(\alpha, \beta) = \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha + \beta)} .\] we get \[ \prod_{q = s}^{p} \gamma_q \le \frac{\Gamma(p) / \Gamma(s - 1)}{\Gamma(p + 3 / 2) / \Gamma(s + 1 / 2)} = \frac{B(p, 3 / 2)}{B(s - 1, 3 / 2)} .\] This implies \[ \sum_{p = s}^{r} \prod_{q = s}^{p} \gamma_q \le \sum_{p = 1}^{r} B(p, 3 / 2) / B(s - 1, 3 / 2) .\] Then, it follows from the definition~\cref{eq:gamma and beta} of~$\Beta$ that \[ \begin{split} \sum_{p = s}^{r} \prod_{q = s}^{p} \gamma_q &\le \sum_{p = s}^{r} \int_{0}^{1} \tau^{p - 1} (1 - \tau)^{1 / 2} \odif{\tau} / B(s - 1, 3 / 2) \\ &= \int_{0}^{1} \sum_{p = s}^{r} \tau^{p - 1} (1 - \tau)^{1 / 2} \odif{\tau} / B(s - 1, 3 / 2) \\ &= \int_{0}^{1} \frac{\tau^{s - 1} - \tau^r}{1 - \tau} (1 - \tau)^{1 / 2} \odif{\tau} / B(s - 1, 3 / 2) \\ &= \frac{B(s, 1 / 2) - B(r + 1, 1 / 2)}{B(s - 1, 3 / 2)} \le \frac{B(s, 1 / 2)}{B(s - 1, 3 / 2)} .\end{split} \] Using again~\cref{eq:gamma and beta properties}, we conclude that \[ \sum_{p = s}^{r} \prod_{q = s}^{p} \gamma_q \le \frac{\Gamma(s) \Gamma(1 / 2) / \Gamma(s + 1 / 2)}{\Gamma(s - 1) \Gamma(3 / 2) / \Gamma(s + 1 / 2)} = 2 (s - 1) .\] \end{proof} Now, we introduce two functions~$\omega_k \colon \setR^n \to \setR$ and~$\nu_k \colon \setR^n \to \setR$ for any~$k \ge 1$, which will help our analysis, by \begin{align} \omega_k(z) &\coloneqq \max \left( 0, \norm*{x^k - z}^2 - \norm*{x^{k - 1} - z}^2 \right) \label{eq:omega}, \\ \nu_k(z) &\coloneqq \norm*{x^k - z}^2 - \sum_{s = 1}^{k} \omega_s(z) \label{eq:nu} .\end{align} The lemma below describes the properties of~$\omega_k$ and~$\nu_k$. \begin{lemma} \label{thm:omega nu} Let~$\set*{x^k}$ be generated by \cref{alg:acc-pgm general} and recall that~$X^\ast, \level_F, \omega_k$, and~$\nu_k$ are defined by~\cref{eq:weak Pareto,eq:level set,eq:omega,eq:nu}, respectively. Moreover, suppose that \cref{asm:bound} holds and that~$z \in X^\ast \cap \level_F(F(x^0))$ satisfies the statement of \cref{thm:z exist} for some~$k \ge 1$. 
Then, it follows for all~$r = 1, \dots, k$ that \begin{enumerate} \item $\displaystyle \sum_{s = 1}^{r} \omega_s(z) \le \sum_{s = 1}^{r}(6s - 5) \norm*{x^s - x^{s - 1}}^2;$ \label{thm:omega nu:omega} \item $\displaystyle \nu_{r + 1}(z) \le \nu_r(z).$ \label{thm:omega nu:nu} \end{enumerate} \end{lemma} \begin{proof} \sublabelcref{thm:omega nu:omega}: Let~$k \ge p \ge 1$. From the definition of~$y^{p + 1}$ given in \cref{line:y} of \cref{alg:acc-pgm}, we have \[ \begin{split} &\norm*{x^{p + 1} - z}^2 - \norm*{x^p - z}^2 \\ &= - \norm*{x^{p + 1} - x^p}^2 + 2 \innerp*{x^{p + 1} - y^{p + 1}}{x^{p + 1} - z} + 2 \gamma_p \innerp*{x^p - x^{p - 1}}{x^{p + 1} - z} \\ &= - \norm*{x^{p + 1} - x^p}^2 + 2 \innerp*{x^{p + 1} - y^{p + 1}}{y^{p + 1} - z} + 2 \norm*{x^{p + 1} - y^{p + 1}}^2 \\ &\quad + 2 \gamma_p \innerp*{x^p - x^{p - 1}}{x^{p + 1} - z} . \end{split} \] On the other hand, \cref{thm:sigma:1} gives \[ 2 \innerp*{x^{p + 1} - y^{p + 1}}{y^{p + 1} - z} \le - \frac{2}{\ell} \sigma_{p + 1}(z) - \frac{2 \ell - L}{\ell} \norm*{x^{p + 1} - y^{p + 1}}^2 .\] Moreover, \cref{thm:sigma k1 k2} with~$(k_1, k_2) = (p + 1, k + 1)$ implies \[ \begin{split} \MoveEqLeft - \frac{2}{\ell} \sigma_{p + 1}(z) \\ &\le - \frac{2}{\ell} \sigma_{k + 1}(z) - \norm*{x^{k + 1} - x^k}^2 + \norm*{x^{p + 1} - x^p}^2 - \sum_{r = p + 1}^{k} \frac{1}{t_r}\norm*{x^r - x^{r - 1}}^2 \\ &\le \norm*{x^{p + 1} - x^p}^2 ,\end{split} \] where the second inequality comes from the assumption on~$z$. Combining the above three inequalities, we get \begin{multline} \norm*{x^{p + 1} - z}^2 - \norm*{x^p - z}^2 \le \frac{L}{\ell} \norm*{x^{p + 1} - y^{p + 1}}^2 + 2 \gamma_p \innerp*{x^p - x^{p - 1}}{x^{p + 1} - z} \\ \shoveleft = \frac{L}{\ell} \norm*{x^{p + 1} - y^{p + 1}}^2 + \gamma_p \Bigl( \norm*{x^p - z}^2 - \norm*{x^{p - 1} - z}^2 + \norm*{x^p - x^{p - 1}}^2 \\ + 2 \innerp*{x^p - x^{p - 1}}{x^{p + 1} - x^p} \Bigr) .\end{multline} Using the relation~$\norm*{x^{p + 1} - y^{p + 1}}^2 + 2 \gamma_p \innerp*{x^p - x^{p - 1}}{x^{p + 1} - x^p} = \norm*{x^{p + 1} - x^p}^2 + \gamma_p^2 \norm*{x^p - x^{p - 1}}^2$, which holds from the definition of~$y^k$, we have \begin{multline} \norm*{x^{p + 1} - z}^2 - \norm*{x^p - z}^2 \le - \frac{\ell - L}{\ell} \norm*{x^{p + 1} - y^{p + 1}}^2 + \norm*{x^{p + 1} - x^p}^2 \\ + \gamma_p \left( \norm*{x^p - z}^2 - \norm*{x^{p - 1} - z}^2 \right) + ( \gamma_p + \gamma_p^2 ) \norm*{x^p - x^{p - 1}}^2 .\end{multline} Since~$0 \le \gamma_p \le 1$ from \cref{thm:t:gamma} and~$\ell \ge L$, we obtain \begin{multline} \norm*{x^{p + 1} - z}^2 - \norm*{x^p - z}^2 \\ \le \gamma_p \left( \norm*{x^p - z}^2 - \norm*{x^{p - 1} - z}^2 + 2 \norm*{x^p - x^{p - 1}}^2 \right) + \norm*{x^{p + 1} - x^p}^2 \\ \le \gamma_p \left( \omega_p(z) + 2 \norm*{x^p - x^{p - 1}}^2 \right) + \norm*{x^{p + 1} - x^p}^2 ,\end{multline} where the second inequality follows from the definition~\cref{eq:omega} of~$\omega_p$. Since the right-hand side is nonnegative,~\cref{eq:omega} again gives \[ \omega_{p + 1}(z) \le \gamma_p \left( \omega_p(z) + 2 \norm*{x^p - x^{p - 1}}^2 \right) + \norm*{x^{p + 1} - x^p}^2 .\] Let~$s \le k$. 
Applying the above inequality recursively and using~$\gamma_1 = 0$, we get \[ \begin{split} \omega_s(z) &\le 3 \sum_{p = 2}^{s} \prod_{q = p}^{s} \gamma_q \norm*{x^p - x^{p - 1}}^2 + 2 \prod_{q = 1}^{s} \gamma_q \norm*{x^1 - x^0}^2 + \norm*{x^s - x^{s - 1}}^2 \\ &\le 3 \sum_{p = 2}^{s} \prod_{q = p}^{s} \gamma_q \norm*{x^p - x^{p - 1}}^2 + \norm*{x^s - x^{s - 1}}^2 .\end{split} \] Adding up the above inequality from~$s = 1$ to~$s = r \le k$, we have \[ \begin{split} \sum_{s = 1}^r \omega_s(z) &\le 3 \sum_{s = 1}^r \sum_{p = 1}^s \prod_{q = p}^s \gamma_q \norm*{x^p - x^{p - 1}}^2 + \sum_{s = 1}^r \norm*{x^s - x^{s - 1}}^2 \\ &= 3 \sum_{p = 1}^r \sum_{s = p}^r \prod_{q = p}^s \gamma_q \norm*{x^p - x^{p - 1}}^2 + \sum_{s = 1}^r \norm*{x^s - x^{s - 1}}^2 \\ &= \sum_{s = 1}^r \left( 3 \sum_{p = s}^r \prod_{q = s}^p \gamma_q + 1 \right) \norm*{x^s - x^{s - 1}}^2 ,\end{split} \] where the first equality follows from~\cref{eq:change sum}. Thus, \cref{thm:prod} implies \[ \sum_{s = 1}^{r} \omega_s(z) \le \sum_{s = 1}^{r} (6 s - 5) \norm*{x^s - x^{s - 1}}^2 .\] \sublabelcref{thm:omega nu:nu}: \Cref{eq:nu} yields \[ \begin{split} \nu_{r + 1}(z) &= \norm*{x^{r + 1} - z}^2 - \omega_{r + 1}(z) - \sum_{s = 1}^r \omega_s(z) \\ &= \norm*{x^{r + 1} - z}^2 - \max \left( 0, \norm*{x^{r + 1} - z}^2 - \norm*{x^r - z}^2 \right) - \sum_{s = 1}^{r} \omega_s(z) \\ &\le \norm*{x^{r + 1} - z}^2 - \left( \norm*{x^{r + 1} - z}^2 - \norm*{x^r - z}^2 \right) - \sum_{s = 1}^{r} \omega_s(z) \\ &= \norm*{x^r - z}^2 - \sum_{s = 1}^{r} \omega_s(z) = \nu_r(z) ,\end{split} \] where the second and third equalities come from the definitions~\cref{eq:omega,eq:nu} of~$\omega_{r + 1}$ and~$\nu_r$, respectively. \end{proof} Let us now prove the first part of the main theorem. \begin{proof}[\cref{thm:main convergence:bound}] Let~$k \ge 1$ and suppose that~$z \in X^\ast \cap \level_F(F(x^0))$ satisfies the statement of \cref{thm:z exist}, where~$X^\ast$ and~$\level_F$ are given by~\cref{eq:weak Pareto,eq:level set}, respectively. Then, \cref{thm:omega nu:nu} gives \[ \begin{split} \nu_k(z) &\le \nu_1(z) = \norm*{x^1 - z}^2 - \omega_1(z) \\ &= \norm*{x^1 - z}^2 - \max \left( 0, \norm*{x^1 - z}^2 - \norm*{x^0 - z}^2 \right) \\ &\le \norm*{x^1 - z}^2 - \left( \norm*{x^1 - z}^2 - \norm*{x^0 - z}^2 \right) = \norm*{x^0 - z}^2 ,\end{split} \] where the second equality follows from the definition~\cref{eq:omega} of~$\omega_1$. Considering the definition~\cref{eq:nu} of~$\nu_k$, we obtain \[ \norm*{x^k - z}^2 \le \norm*{x^0 - z}^2 + \sum_{s = 1}^{k} \omega_s(z) .\] Taking the square root of both sides and using~\cref{eq:omega}, we get \[ \norm*{x^k - z} \le \sqrt{ \norm*{x^0 - z}^2 + \sum_{s = 1}^{k} (6s - 5) \norm*{x^s - x^{s - 1}}^2 } .\] Applying the reverse triangle inequality~$\norm*{x^k - x^0} - \norm*{x^0 - z} \le \norm*{x^k - z}$ to the left-hand side leads to \begin{align} \norm*{x^k - x^0} &\le \norm*{x^0 - z} + \sqrt{ \norm*{x^0 - z}^2 + \sum_{s = 1}^{k} (6s - 5) \norm*{x^s - x^{s - 1}}^2 } \\ &\le \sqrt{R} + \sqrt{R + \sum_{s = 1}^{k} (6s - 5) \norm*{x^s - x^{s - 1}}^2} ,\end{align} where the second inequality comes from the assumption on~$z$. Moreover, since~$a > 0$, the right-hand side is bounded from above according to \cref{thm:acc conv rate}. This implies that~$\set*{x^k}$ is bounded, and so it has accumulation points. \end{proof} Before proving \cref{thm:main convergence:Pareto}, we show the following lemma. 
\begin{lemma} \label{thm:convergence norm} Let~$\set*{x^k}$ be generated by \cref{alg:acc-pgm general} with~$a > 0$ and suppose that \cref{asm:bound} holds. If~$\bar{z}$ is an accumulation point of~$\set*{x^k}$, then~$\set*{\norm*{x^k - \bar{z}}}$ is convergent. \end{lemma} \begin{proof} Assume that~$\set*{x^{k_j}} \subseteq \set*{x^k}$ converges to~$\bar{z}$. Then, we have~$\sigma_{k_j}(\bar{z}) \to 0$ by the definition~\cref{eq:sigma rho} of~$\sigma_{k_j}$. Therefore, we can regard~$\bar{z}$ as satisfying the statement of \cref{thm:z exist} at~$k = \infty$, and thus the inequalities of \cref{thm:omega nu} hold for any~$r \ge 1$ and~$\bar{z}$. Moreover, since~$a > 0$, \cref{thm:acc conv rate} together with \cref{thm:omega nu:omega} shows that the non-decreasing partial sums~$\sum_{s = 1}^{k} \omega_s(\bar{z})$ are bounded above and hence convergent. This also means that~$\set*{\nu_k(\bar{z})}$ is non-increasing and bounded below, i.e., convergent. Therefore,~$\norm*{x^k - \bar{z}}^2 = \nu_k(\bar{z}) + \sum_{s = 1}^{k} \omega_s(\bar{z})$ converges, and so does~$\set*{\norm*{x^k - \bar{z}}}$. \end{proof} Finally, we finish the proof of the main theorem. \begin{proof}[\cref{thm:main convergence:Pareto}] Suppose that~$\set*{x^{k^1_j}}$ and~$\set*{x^{k^2_j}}$ converge to~$\bar{z}^1$ and~$\bar{z}^2$, respectively. From \cref{thm:convergence norm}, we see that \[ \lim_{j \to \infty} \left( \norm*{x^{k^2_j} - \bar{z}^1}^2 - \norm*{x^{k^2_j} - \bar{z}^2}^2 \right) = \lim_{j \to \infty} \left( \norm*{x^{k^1_j} - \bar{z}^1}^2 - \norm*{x^{k^1_j} - \bar{z}^2}^2 \right) .\] This yields that~$\norm*{\bar{z}^1 - \bar{z}^2}^2 = - \norm*{\bar{z}^1 - \bar{z}^2}^2$, and so~$\bar{z}^1 = \bar{z}^2$. Since~$\set*{x^k}$ is bounded and all its accumulation points coincide, it is convergent. Write~$x^\ast$ for its limit. Since~$\norm*{x^{k + 1} - x^k} \to 0$ (cf.\ \cref{thm:acc conv rate}) and~$0 \le \gamma_k \le 1$, the sequence~$\set*{y^k}$ also converges to~$x^\ast$. Therefore, \cref{thm:acc prox termination} shows that~$x^\ast$ is weakly Pareto optimal for~\cref{eq:MOP}. \end{proof} \ifthenelse{\boolean{isMain}}{ }{ {\bibliographystyle{jorsj} \bibliography{library}} } \end{document}
2205.05247v1
http://arxiv.org/abs/2205.05247v1
On some properties of polycosecant numbers and polycotangent numbers
\documentclass{article} \usepackage{amssymb,amsmath,color,enumerate,ascmac,latexsym,diagbox} \usepackage[all]{xy} \usepackage[abbrev]{amsrefs} \usepackage[mathscr]{eucal} \usepackage[top=30truemm,bottom=30truemm,left=30truemm,right=30truemm]{geometry} \usepackage{mathrsfs} \usepackage{amsthm} \def\objectstyle{\displaystyle} \newcommand{\kg}{{\vspace{0.2in}}} \newcommand{\zp}{\mathbb{Z}_p} \newcommand{\zetp}{\mathbb{Z}_{(p)}} \newcommand{\pzp}[1]{p^{#1}\zp} \newcommand{\qp}{\mathbb{Q}_p} \newcommand{\cp}{\mathbb{C}_p} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\mdq}{\bmod{{(\Q^\times)}^2}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} \newcommand{\di}{\displaystyle} ve}{~~~~~} \newcommand{\ten}{~~~~~~~~~~} ft}{~~~~~~~~~~~~~~~} \newcommand{\twen}{~~~~~~~~~~~~~~~~~~~~} \newcommand{\mm}[1]{\mathop{\mathrm{#1}}} \newcommand{\mb}[1]{\mathbb{#1}} \newcommand{\vpjl}{\mathop{\varprojlim}} \newcommand{\jk}[1]{\mb{Z}/#1 \mb{Z}} \newcommand{\jg}[1]{(\mb{Z}/#1 \mb{Z})^{\times}} \newcommand{\brkt}[1]{\langle #1 \rangle} \newcommand{\nji}[1]{\Q(\sqrt{#1})} \newcommand{\hnji}[1]{h(\nji{#1})} \setcounter{section}{-1} \newcommand{\be}[2]{\beta_{#1}^{(#2)}} \newcommand{\ta}[1]{\tanh^{#1}{(t/2)}} \newcommand{\bber}[2]{B_{#1}^{(#2)}} \newcommand{\cber}[2]{C_{#1}^{#2}} \newcommand{\co}[2]{D_{#1}^{(#2)}} \newcommand{\coti}[2]{\widetilde{D}_{#1}^{(#2)}} \newcommand{\copoly}[4]{\!^{#1}\!D_{#2}^{(#3)}(#4)} \newcommand{\copasym}[4]{\!^{#1}{\mathfrak{D}}_{#2}^{(#3)}(#4)} \newcommand{\cosym}[3]{{\mathfrak{D}}_{#1}^{(#2)}(#3)} \newcommand{\stirf}[2]{\left[#1 \atop #2\right]} \newcommand{\stirs}[2]{\left\{#1 \atop #2\right\}} n}{\hfill \Box} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\dnkij}{d_{n}^{(k)}(i,j)} \newcommand{\ordp}{{\mm{ord}}_{p}} \newcommand{\eulerian}[2] {\renewcommand\arraystretch{0.7}\left\langle\begin{matrix}#1\\#2\end{matrix}\right\rangle} \newcommand{\dv}[2]{\frac{d^{#1}}{{d#2}^{#1}}} \newcommand{\pdv}[2]{\frac{{\partial}^{#1}}{\partial {#2}^{#1}}} \usepackage{graphicx} \usepackage{stmaryrd} \makeatletter \def\mapstofill@{ \arrowfill@{\mapstochar\relbar}\relbar\rightarrow} \newcommand*\xmapsto[2][]{ \ext@arrow 0395\mapstofill@{#1}{#2}} \usepackage[dvipdfmx]{hyperref} \usepackage{xcolor} \hypersetup{ colorlinks=true, citebordercolor=green, linkbordercolor=red, urlbordercolor=cyan, } \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}[theorem]{Conjecture} \makeatletter \def\underbrace@#1#2{\vtop {\m@th \ialign {##\crcr $\hfil #1{#2}\hfil $\crcr \noalign {\kern 3\p@ \nointerlineskip }\upbracefill \crcr \noalign {\kern 3\p@ }}}} \def\overbrace@#1#2{\vbox {\m@th \ialign {##\crcr \noalign {\kern 3\p@ }\downbracefill \crcr \noalign {\kern 3\p@ \nointerlineskip }$\hfil #1 {#2}\hfil $\crcr }}} \def\underbrace#1{ \mathop{\mathchoice{\underbrace@{\displaystyle}{#1}} {\underbrace@{\textstyle}{#1}} {\underbrace@{\scriptstyle}{#1}} {\underbrace@{\scriptscriptstyle}{#1}}}\limits } \def\overbrace#1{ \mathop{\mathchoice{\overbrace@{\displaystyle}{#1}} {\overbrace@{\textstyle}{#1}} {\overbrace@{\scriptstyle}{#1}} {\overbrace@{\scriptscriptstyle}{#1}}}\limits } \makeatother \allowdisplaybreaks 
\hyphenpenalty=1000\relax \exhyphenpenalty=1000\relax \sloppy \begin{document} \title{On some properties of polycosecant numbers and polycotangent numbers} \author{Kyosuke Nishibiro} \date{} \maketitle \setcounter{section}{0} \begin{abstract} Polycosecant numbers and polycotangent numbers are introduced as level two analogues of poly-Bernoulli numbers. It is shown that polycosecant numbers and polycotangent numbers satisfy many formulas similar to those of poly-Bernoulli numbers. However, much is still unknown about polycotangent numbers. For example, the zeta function interpolating them at non-positive integers has not yet been constructed. In this paper, we show some algebraic properties of polycosecant numbers and polycotangent numbers. Also, we generalize the duality formulas for polycosecant numbers, which essentially include those for polycotangent numbers. \end{abstract} \section{Introduction.} For $k\in\Z$, two types of poly-Bernoulli numbers $\{B_n^{(k)}\}$ and $\{C_n^{(k)}\}$ are defined by Kaneko as follows: \begin{align} \frac{\mm{Li}_k(1-e^{-t})}{1-e^{-t}}&=\sum_{n=0}^{\infty}B_{n}^{(k)}\frac{t^n}{n!},\\ \frac{\mm{Li}_k(1-e^{-t})}{e^t-1}&=\sum_{n=0}^{\infty}C_{n}^{(k)}\frac{t^n}{n!}, \end{align} where \begin{align*} {\mm{Li}}_k(z)=\sum_{n=1}^{\infty}\frac{z^n}{n^k}~~(|z|<1) \end{align*} is the polylogarithm function~(see Kaneko\cite{K1}, Arakawa-Kaneko\cite{AK1}, and Arakawa-Ibukiyama-Kaneko\cite{AIK}). Since ${\mm{Li}}_1(z)=-\log{(1-z)}$, $C_n^{(1)}=(-1)^nB_n^{(1)}$ coincides with the ordinary Bernoulli number $B_n$. In \cite{KPT}, as the level two analogue of poly-Bernoulli numbers, polycosecant numbers $\{\co{n}{k}\}$ are defined by Kaneko, Pallewatta and Tsumura as follows: \begin{align} \frac{\mm{A}_{k}(\ta{})}{\sinh{t}} &= \sum_{n=0}^{\infty}\co{n}{k} \frac{t^n}{n!}, \label{genecosecant} \end{align} where \begin{align*} {\mm{A}}_k(z)=2\sum_{n=0}^{\infty}\frac{z^{2n+1}}{(2n+1)^k}={\mm{Li}}_k(z)-{\mm{Li}}_k(-z)~~(|z|<1) \end{align*} is the polylogarithm function of level two. Note that Sasaki first considered \eqref{genecosecant} in \cite{Sasaki}. Since ${\mm{A}}_1(z)=2\tanh^{-1}(z)$, $\co{n}{1}$ coincides with the ordinary cosecant number $D_n$ (see \cite{No}). Also, as their relatives, polycotangent numbers $\{\be{n}{k}\}$ are defined by Kaneko, Komori and Tsumura as follows: \begin{align} \frac{\mm{A}_{k}(\ta{})}{\tanh{t}} &= \sum_{n=0}^{\infty}\be{n}{k}\frac{t^n}{n!}. \end{align} \begin{remark} Because each generating function is an even function, we have $\co{2n-1}{k}=0$ and $\be{2n-1}{k}=0$ for $n\in\Z_{\geq1}$ and $k\in\Z$. \end{remark} A calculation shows that \begin{align} \be{2n}{k}&=\sum_{i=0}^{n} \dbinom{2n}{2i}D_{2i}^{(k)}\label{cotacose},\\ \co{2n}{k}&=\sum_{i=0}^{n} \dbinom{2n}{2i}E_{2n-2i}\be{2i}{k} \end{align} hold, where $E_n$ is the Euler number defined by \[ \frac{1}{\cosh{t}}=\sum_{n=0}^{\infty} E_n\frac{t^n}{n!}. \] It is known that the Bernoulli numbers satisfy the following congruence relations, which are called Kummer congruences. Here, $\varphi$ denotes Euler's totient function. \begin{theorem} Let $p$ be an odd prime number. For $m,n$ and $N\in\Z_{\geq1}$ with $m\equiv n \bmod \varphi(p^N)$ and $(p-1)\nmid n$, we have \[ (1-p^{m-1})\frac{B_{m}}{m}\equiv(1-p^{n-1})\frac{B_{n}}{n} \bmod{p^N}. \] \end{theorem} Sakata proved Kummer-type congruences for poly-Bernoulli numbers. \begin{theorem}[{\cite[Theorem 6.1]{Sa}\label{kusakata}}] Let $p$ be an odd prime number.
For $m,n,$ and $N\in\Z_{\geq1}$ with $m\equiv n \bmod \varphi(p^N)$ and $m, n\geq N$, we have \begin{align*} B_n^{(-k)}&\equiv B_m^{(-k)} \bmod{p^N},\\ C_{n}^{(-k)}&\equiv C_{m}^{(-k)} \bmod{p^N}. \end{align*} \end{theorem} \begin{remark} Kitahara generalized Theorem \ref{kusakata} by using $p$-adic distributions (\cite[Theorem 7]{Ki}), and Katagiri proved the congruence relation for multi-poly-Bernoulli numbers (\cite[Theorem 1.6]{Ka}). \end{remark} Also, Sakata proved the following congruence relation. \begin{theorem}[{\cite[Theorem 6.10]{Sa}}] Let $p$ be an odd prime number. For $k \in\Z_{\geq 0}$ and $n, N\in\Z_{\geq 1}$ with $n\geq N$, we have \begin{align} \sum_{i=0}^{\varphi(p^N)-1} B_{n}^{(-k-i)}\equiv0 \bmod{p^N}. \label{sum} \end{align} \end{theorem} By using the explicit formula for polycosecant numbers, Pallewatta showed that polycosecant numbers satisfy the same type of congruence. \begin{theorem}[{\cite[Theorem 3.12]{Pa}\label{kupa}}] Let $p$ be a prime number. For $k,m,n$ and $N\in\Z_{\geq 1}$ with $2m\equiv 2n \bmod \varphi(p^N)$ and $2m, 2n\geq N$, we have \[ \co{2m}{-2k+1}\equiv\co{2n}{-2k+1}\bmod{p^N}. \] \end{theorem} Also, Kaneko proved that poly-Bernoulli numbers satisfy duality formulas as follows: \begin{theorem}[{\cite[Theorem 2]{K1}}] For $l, m\in\Z_{\geq0}$, we have \begin{align} B_m^{(-l)}&=B_l^{(-m)},\label{dualb}\\ C_m^{(-l-1)}&=C_l^{(-m-1)}.\label{dualc} \end{align} \end{theorem} As analogues of these, it was proved that $\co{2m}{-2l-1}$ and $\be{2m}{-2l}$ satisfy duality formulas (see \cite[Theorem 3.1]{Pa} and \cite{KKT}). \begin{theorem} For $l, m\in\Z_{\geq0}$, we have \begin{align} \co{2m}{-2l-1}&=\co{2l}{-2m-1},\label{dualcose}\\ \be{2m}{-2l}&=\be{2l}{-2m}.\label{dualcota} \end{align} \end{theorem} In Section 2, we review some properties of polycosecant numbers and polycotangent numbers. In Section 3, we prove some algebraic properties of $\co{n}{k}$ and $\be{n}{k}$. In Section 4, we define symmetrized polycosecant numbers and prove their duality formulas. \section{Preliminaries} In this section, we review some properties of $\co{n}{k}$. First of all, we prepare some notation. \begin{definition}[{\cite[Definition 2.2]{AIK}}] For $n,m\in\Z$, the Stirling number of the first kind $\stirf{n}{m}$ and of the second kind $\stirs{n}{m}$ are defined by the following recursion formulas, respectively: \begin{align*} \stirf{0}{0}&=1, \stirf{n}{0}=\stirf{0}{m}=0,\\ \stirf{n}{m}&=\stirf{n-1}{m-1}+(n-1)\stirf{n-1}{m},\\ \stirs{0}{0}&=1, \stirs{n}{0}=\stirs{0}{m}=0,\\ \stirs{n}{m}&=\stirs{n-1}{m-1}+m\stirs{n-1}{m}. \end{align*} \end{definition} \begin{lemma}[{\cite[Proposition 2.6]{AIK}\label{expstir}}] For $m\in\Z_{\geq0}$, we have \[ \frac{(e^t-1)^m}{m!}=\sum_{n=m}^{\infty} \stirs{n}{m}\frac{t^n}{n!}. \] \end{lemma} \begin{lemma}[{\cite[Proposition 2.6 (6)]{AIK}\label{bekistir}}] For $m, n\in\Z_{\geq0}$, we have \[ \stirs{n}{m}=\frac{(-1)^m}{m!}\sum_{l=0}^{m}(-1)^l\binom{m}{l}l^n. \] \end{lemma} \begin{lemma}[{\cite[Section 3]{Pa}\label{tax2}}] For $m\in\Z_{\geq0}$, we have \[ \ta{m}=(-1)^m\sum_{n=m}^{\infty}\sum_{j=m}^{n} (-1)^j\frac{j!}{2^j}\binom{j-1}{m-1}\stirs{n}{j}\frac{t^n}{n!}. \] \end{lemma} Here, we recall some properties of polycosecant and polycotangent numbers. \begin{proposition}[{\cite[Theorem 3.4]{Pa}\label{explicose}}] For $n\in\Z_{\geq0}$ and $k\in\Z$, we have \begin{align} \co{n}{k}=\sum_{i=0}^{\lfloor n/2 \rfloor}\frac{1}{(2i+1)^{k+1}}\sum_{j=2i+1}^{n+1}\frac{(-1)^{j+1}j!}{2^{j-1}}\binom{j-1}{2i}\stirs{n+1}{j}.
\end{align} \end{proposition} The following expressions \eqref{eqsasaki} were noticed by Sasaki. \begin{proposition}\label{sasaki} For $n, k\in\Z_{\geq0}$, we have \begin{align} \co{2n}{-k}=\sum_{i=1}^{\min{\{2n+1,k\}}}\frac{i!(i-1)!}{2^{i-1}}\stirs{k}{i}\stirs{2n+1}{i}.\label{eqsasaki} \end{align} \end{proposition} \begin{proposition}[{\cite[Proposition 3.3]{Pa}\label{recurcose}}] For $n\in\Z_{\geq0}$ and $k\in\Z$, we have \[ \co{n}{k-1}=\sum_{m=0}^{\lfloor n/2\rfloor} \binom{n+1}{2m+1} \co{n-2m}{k}. \] \end{proposition} \begin{lemma}[{\cite[Section 3]{Pa}\label{genfuncose}}] Let $f(t,y)$ be the generating function of $\co{n}{-k}$ such as \[ f(t,y) = \sum_{n=0}^{\infty}\sum_{k=0}^{\infty} \co{n}{-k}\frac{t^n}{n!}\frac{y^k}{k!}. \] Then we have \[ f(t,y)=1+\frac{e^t(e^y-1)}{1+e^t+e^y-e^{t+y}}+\frac{e^{-t}(e^y-1)}{1+e^{-t}+e^y-e^{-t+y}}. \] \end{lemma} It is known that poly-Bernoulli numbers satisfy the following recurrence formulas. \begin{proposition}[{\cite[Theorem 3.2]{OS2}}] For $k\in\Z$ and $m, n\in\Z_{\geq1}$ with $m\geq n$, we have \begin{align*} \sum_{m\geq j\geq i\geq0}^{}&(-1)^i\stirf{m+2}{i+1}B_{n}^{(-k-j)}=0,\\ \sum_{j=0}^{m}&(-1)^j\stirf{m+1}{j+1}C_{n-1}^{(-k-j)}=0. \end{align*} \end{proposition} Also, there are some studies to determine the denominator of poly-Bernoulli numbers. For an odd prime number $p$, we denote by $\ordp$ the $p$-adic valuation and let \[ \zetp=\left\{\frac{a}{b}\in\Q~ \middle| ~\ordp(b)=0\right\}. \] The Clausen von-Staudt theorem is as follows. \begin{theorem}[{\cite[Theorem 3.1]{AIK}\label{theoremvon}}] Let $n$ be 1 or an even positive integer. Then we have \[ B_n+\sum_{\substack{p:\mm{prime}\\ (p-1)\mid n}}\frac{1}{p} \in\Z. \] \end{theorem} This can be generalized to that for poly-Bernoulli numbers as follows. \begin{theorem}[{\cite[Theorem 14.7]{AIK}\label{clausenpb}}] Let $p$ be a prime number. For $k\in\Z_{\geq 2}$ and $n\in\Z_{\geq1}$ satisfying $k+2\leq p\leq n+1$, we have following results. \begin{enumerate} \item When $(p-1) \mid n$, $p^kB_n^{(k)}\in\zetp$ and \[ p^kB_n^{(k)}\equiv-1 \bmod p. \] \item When $(p-1) \nmid n$, $p^{k-1}B_n^{(k)}\in\zetp$ and \[ p^{k-1}B_n^{(k)} \equiv \begin{cases} \displaystyle \frac{1}{p}\stirs{n}{p-1}-\frac{n}{2^k} \bmod{p}& (n\equiv1\bmod{p-1}), \\ \displaystyle \frac{(-1)^{n-1}}{p}\stirs{n}{p-1} \bmod{p} & (otherwise). \end{cases} \] \end{enumerate} \end{theorem} To prove Theorem \ref{clausenpb}, we need the following lemma. \begin{lemma}[{\cite[Lemma 14.8]{AIK}\label{vonlemma}}] For $n$ and $a\in\Z_{\geq1}$, we have \[ \stirs{n}{ap-1} \equiv \begin{cases} \displaystyle \binom{c-1}{a-1} \bmod{p}& (if ~~n=a-1+c(p-1)~~for~some~~c\geq a), \\ 0 \bmod{p} & (otherwise). \end{cases} \] \end{lemma} \section{Algebraic properties} \subsection{Explicit formulas and recurrence formulas} In this subsection, we give some explicit formulas and recurrence formulas of $\co{n}{k}$ and $\be{n}{k}.$ \begin{proposition}\label{explicota} For $n\in\Z_{\geq0}$ and $k\in\Z$, we have \begin{align} \be{2n}{k}=\sum_{j=0}^{2n}\sum_{i=0}^{\lfloor j/2\rfloor} \frac{(-1)^j}{2^j}\frac{j!}{(2i+1)^k}\binom{j+1}{2i+1}\left(\frac{(j+1)(j+2)}{2}\stirs{2n}{j+2}+\stirs{2n+1}{j+1}\right). \end{align} \end{proposition} \begin{proof} We consider the generating function of $\be{n}{k}-\co{n}{k}$ instead of $\be{n}{k}$. 
By using Lemma \ref{tax2} and $\cosh{t}-1=2\sinh^2{\frac{t}{2}}$, we have \begin{align*} \sum_{n=0}^{\infty}\left(\be{n}{k}-\co{n}{k}\right)\frac{t^n}{n!} &=2\sum_{i=0}^{\infty}\frac{\ta{2i+2}}{(2i+1)^k}\\ &=2\sum_{i=0}^{\infty}\frac{1}{(2i+1)^k}\sum_{n=2i+2}^{\infty}\sum_{j=2i+2}^{n}\frac{(-1)^jj!}{2^j}\binom{j-1}{2i+1}\stirs{n}{j}\frac{t^n}{n!}\\ &=\sum_{n=2}^{\infty}\sum_{i=0}^{\lfloor(n-2)/2\rfloor}\sum_{j=2i+2}^{n}\frac{(-1)^jj!}{(2i+1)^k2^{j-1}}\binom{j-1}{2i+1}\stirs{n}{j}\frac{t^n}{n!}. \end{align*} Comparing coefficients on the both sides and using Lemma \ref{explicose}, we get \begin{align*} \be{2n}{k}&=\co{2n}{k}+\sum_{i=0}^{n-1}\sum_{j=2i+2}^{2n}\frac{(-1)^jj!}{(2i+1)^k2^{j-1}}\binom{j-1}{2i+1}\stirs{2n}{j}\\ &=\sum_{j=0}^{2n}\sum_{i=0}^{\lfloor j/2\rfloor}\frac{(-1)^{j}j!}{(2i+1)^{k+1}2^j}\binom{j}{2i}\stirs{2n+1}{j+1}+\sum_{j=0}^{2n-2}\sum_{i=0}^{\lfloor j/2\rfloor}\frac{(-1)^j(j+2)!}{(2i+1)^{k+1}2^{j+1}}\binom{j}{2i}\stirs{2n}{j+2}. \end{align*} Since $\stirs{2n}{2n+1}=\stirs{2n}{2n+2}=0$, we have \[ \be{2n}{k}=\sum_{j=0}^{2n}\sum_{i=0}^{\lfloor j/2\rfloor}\frac{(-1)^{j}j!}{(2i+1)^{k+1}2^j}\binom{j}{2i}\stirs{2n+1}{j+1}+\sum_{j=0}^{2n}\sum_{i=0}^{\lfloor j/2\rfloor}\frac{(-1)^j(j+2)!}{(2i+1)^{k+1}2^{j+1}}\binom{j}{2i}\stirs{2n}{j+2}. \] Hence we obtain Proposition \ref{explicota}. \end{proof} \begin{proposition}\label{stircota} For $n, k\in\Z_{\geq1}$, we have \begin{align*} \be{2n}{-k}&=\sum_{j=0}^{\min{\{2n,k-1\}}}\frac{j!(j+1)!}{2^{j+1}}\stirs{2n}{j}\stirs{k}{j+1}\\&~~~~+\sum_{j=0}^{\min{\{2n-1,k-1\}}}\frac{\{(j+1)!\}^2}{2^{j+1}}\stirs{2n}{j+1}\stirs{k}{j+1}\\ &~~~~+\sum_{j=0}^{\min{\{2n-1,k-1\}}}\frac{(j+1)!(j+2)!}{2^{j+1}}\stirs{2n}{j+2}\stirs{k}{j+1}\\&~~~~+\sum_{j=0}^{\min{\{2n,k-1\}}}\frac{j!(j+1)!}{2^{j+1}}\stirs{2n+1}{j+1}\stirs{k}{j+1}. \end{align*} \end{proposition} \begin{proof} By using Lemma \ref{genfuncose}, we have \begin{align} \sum_{n=0}^{\infty}\sum_{k=1}^{\infty}\be{n}{-k}\frac{x^n}{n!}\frac{y^k}{k!}&=\cosh{x}\left\{\frac{e^x(e^y-1)}{1+e^x+e^y-e^{x+y}}+\frac{e^{-x}(e^y-1)}{1+e^{-x}+e^y-e^{-x+y}}\right\}\nonumber\\ &=\frac{1}{4}\left\{\frac{e^y-1}{1-\frac{1}{2}(e^x-1)(e^y-1)}+\frac{e^y-1}{1-\frac{1}{2}(e^{-x}-1)(e^y-1)}\right\}\label{hitotume}\\ &~~+\frac{1}{4}\left\{e^x\frac{(e^x-1)(e^y-1)}{1-\frac{1}{2}(e^x-1)(e^y-1)}+e^{-x}\frac{(e^{-x}-1)(e^y-1)}{1-\frac{1}{2}(e^{-x}-1)(e^y-1)}\right\}\label{futatume}\\ &~~+\frac{1}{4}\left\{e^x\frac{e^y-1}{1-\frac{1}{2}(e^x-1)(e^y-1)}+e^{-x}\frac{e^y-1}{1-\frac{1}{2}(e^{-x}-1)(e^y-1)}\right\}\label{mittsume}. \end{align} For \eqref{hitotume}, by using Lemma \ref{expstir}, a calculation similar to that in \cite[Theorem 14.14]{AIK} shows that \begin{align*} \frac{1}{4}&\left\{\frac{e^y-1}{1-\frac{1}{2}(e^x-1)(e^y-1)}+\frac{e^y-1}{1-\frac{1}{2}(e^{-x}-1)(e^y-1)}\right\}\\ &=\frac{1}{4}\sum_{j=0}^{\infty}\frac{1}{2^j}(e^x-1)^j(e^y-1)^{j+1}+\frac{1}{4}\sum_{j=0}^{\infty}\frac{1}{2^j}(e^{-x}-1)^j(e^y-1)^{j+1}\\ &=\sum_{n,k=0}^{\infty}\left(\sum_{j=0}^{\min{\{2n,k\}}}\frac{j!(j+1)!}{2^{j+1}}\stirs{2n}{j}\stirs{k+1}{j+1}\right)\frac{x^{2n}}{(2n)!}\frac{y^{k+1}}{(k+1)!}. 
\end{align*} By the same calculations, we have \begin{align*} \sum_{n=0}^{\infty}\sum_{k=1}^{\infty}\be{n}{-k}\frac{x^n}{n!}\frac{y^k}{k!}&=\sum_{n,k=0}^{\infty}\left(\sum_{j=0}^{\min{\{2n,k\}}}\frac{j!(j+1)!}{2^{j+1}}\stirs{2n}{j}\stirs{k+1}{j+1}\right)\frac{x^{2n}}{(2n)!}\frac{y^{k+1}}{(k+1)!}\\ &+\sum_{n,k=0}^{\infty}\left(\sum_{j=0}^{\min{\{2n+1,k\}}}\frac{\{(j+1)!\}^2}{2^{j+1}}\stirs{2n+2}{j+1}\stirs{k+1}{j+1}\right)\frac{x^{2n+2}}{(2n+2)!}\frac{y^{k+1}}{(k+1)!}\\ &+\sum_{n,k=0}^{\infty}\left(\sum_{j=0}^{\min{\{2n+1,k\}}}\frac{(j+1)!(j+2)!}{2^{j+1}}\stirs{2n+2}{j+2}\stirs{k+1}{j+1}\right)\frac{x^{2n+2}}{(2n+2)!}\frac{y^{k+1}}{(k+1)!}\\ &+\sum_{n,k=0}^{\infty}\left(\sum_{j=0}^{\min{\{2n,k\}}}\frac{j!(j+1)!}{2^{j+1}}\stirs{2n+1}{j+1}\stirs{k+1}{j+1}\right)\frac{x^{2n}}{(2n)!}\frac{y^{k+1}}{(k+1)!}. \end{align*} Comparing coefficients on both sides, we obtain Proposition \ref{stircota}. \end{proof} \begin{proposition} For $k\in\Z$ and $m, n\in\Z_{\geq1}$ with $m\geq 2n$, we have \begin{align} \sum_{j=0}^{m}&(-1)^j\stirf{m+1}{j+1}\co{2n}{-k-j}=0\label{cose},\\ \sum_{j=0}^{m}&(-1)^j\stirf{m+1}{j+1}\be{2n}{-k-j}=0\label{cota}. \end{align} \end{proposition} \begin{proof} By Proposition \ref{explicose}, we have \begin{align*} \sum_{j=0}^{m}(-1)^j\stirf{m+1}{j+1}\co{2n}{-k-j}&=\sum_{l=0}^{n}(2l+1)^{k-2}\sum_{i=2l}^{2n}\frac{(-1)^{i+1}(i+1)!}{2^i}\binom{i}{2l}\stirs{2n+1}{i+1}\\ &\quad\times\sum_{j=0}^{m}(-1)^{j+1}\stirf{m+1}{j+1}(2l+1)^{j+1}. \end{align*} For the last sum, we have \begin{align*} \sum_{j=0}^{m}(-1)^{j+1}\stirf{m+1}{j+1}(2l+1)^{j+1}&=(-2l-1)(-2l)\cdots(-2l+m)\\ &=0. \end{align*} Hence we obtain \eqref{cose}. Also, by using \eqref{cotacose}, we have \begin{align*} \sum_{j=0}^{m}(-1)^j\stirf{m+1}{j+1}\be{2n}{-k-j}&=\sum_{l=0}^{n}\binom{2n}{2l}\sum_{j=0}^{m}(-1)^j\stirf{m+1}{j+1}\co{2l}{-k-j}=0. \end{align*} This completes the proof. \end{proof} \begin{remark} In the same way, we have \[ \sum_{j=0}^{m}(-1)^j\stirf{m+1}{j+1}B_{n}^{(-k-j)}=0~~(~k\in\Z, 0\leq n\leq m~). \] \end{remark} \begin{example} For $k=-2, n=2$ and $m=5$, we have $\co{4}{2}=\frac{176}{225}, \co{4}{1}=\frac{7}{15}, \co{4}{0}=0, \co{4}{-1}=1, \co{4}{-2}=16$ and $\co{4}{-3}=121$. Then we can see \begin{align*} \sum_{j=0}^{5}(-1)^j\stirf{6}{j+1}\co{4}{2-j}&=\frac{1408}{15}-\frac{1918}{15}+0-85+240-121\\ &=0. \end{align*} Also, we have $\be{4}{2}=-\frac{199}{225}, \be{4}{1}=-\frac{8}{15}, \be{4}{0}=1, \be{4}{-1}=8, \be{4}{-2}=41$ and $\be{4}{-3}=200$. Then we can see \begin{align*} \sum_{j=0}^{5}(-1)^j\stirf{6}{j+1}\be{4}{2-j}&=-\frac{1592}{15}+\frac{2192}{15}+225-680+615-200\\ &=0. \end{align*} \end{example} \subsection{Congruence formulas} In this subsection, we prove some congruence formulas. \begin{lemma}\label{congruencestir} Let $p$ be a prime number. For $n, m$ and $N\in\Z_{\geq1}$ with $n\equiv m\bmod\varphi(p^N)$ and $m,n\geq N$, and $j\in\Z_{\geq0}$, we have \[ j!\stirs{n}{j}\equiv j!\stirs{m}{j} \bmod{p^N}. \] \end{lemma} \begin{proof} By using Lemma \ref{bekistir} and Euler's theorem, we have \begin{align*} j!\stirs{n}{j}&=\sum_{l=0}^{j}(-1)^{l+j}\binom{j}{l}l^n\\ &\equiv\sum_{l=0}^{j}(-1)^{l+j}\binom{j}{l}l^m \bmod{p^N}\\ &=j!\stirs{m}{j}. \end{align*} Hence we obtain Lemma \ref{congruencestir}. \end{proof} The following theorem is the general form of Kummer type congruences for polycosecant numbers and polycotangent numbers, which is a generalization of Theorem \ref{kupa}. \begin{theorem}\label{kupoly} Let $p$ be an odd prime number.
For $n,m,k$ and $N\in\Z_{\geq 1}$ with $2m\equiv 2n \bmod \varphi(p^N)$ and $2n, 2m\geq N$, we have \begin{align} \co{2m}{-k}&\equiv\co{2n}{-k}\bmod{p^N}\label{kucose},\\ \be{2m}{-k}&\equiv\be{2n}{-k}\bmod{p^N}\label{kucota}. \end{align} \end{theorem} \begin{proof} From Lemma \ref{congruencestir} and Proposition \ref{sasaki}, we have \begin{align*} \co{2n}{-k}&=\sum_{i=1}^{\min{\{2n+1,k\}}}\frac{i!(i-1)!}{2^{i-1}}\stirs{k}{i}\stirs{2n+1}{i}\\ &\equiv \sum_{i=1}^{\min{\{2n+1,k\}}}\frac{i!(i-1)!}{2^{i-1}}\stirs{k}{i}\stirs{2m+1}{i} \bmod{p^N}. \end{align*} Since $\stirs{2m+1}{i}=0$ for $i=2m+2,\ldots, 2n+1$, the right hand side is equal to $\co{2m}{-k}$. Hence we obtain the formula \eqref{kucose}. By the same argument, we obtain the formula \eqref{kucota}. \end{proof} \begin{example} For $p=3, m=2, n=5, k=3$ and $N=2$, we have $\co{4}{-3}=121$ and $\co{10}{-3}=88573$. Then we can see \[ \co{4}{-3}\equiv \co{10}{-3}\equiv 4\bmod9. \] Also, we have $\be{4}{-3}=200$ and $\be{10}{-3}=786944$. Then we can see \[ \be{4}{-3}\equiv\be{10}{-3}\equiv2\bmod9. \] \end{example} \begin{remark} Since \begin{align} \co{2n}{1}&=(2-2^{2n})B_{2n}, \label{coseber}\\ \be{2n}{1}&=2^{2n}B_{2n}, \end{align} we have \begin{align*} (1-p^{2n-1})\frac{\co{2n}{1}}{2n}&\equiv(1-p^{2m-1})\frac{\co{2m}{1}}{2m} \bmod{p^N},\\ (1-p^{2n-1})\frac{\be{2n}{1}}{2n}&\equiv(1-p^{2m-1})\frac{\be{2m}{1}}{2m} \bmod{p^N} \end{align*} for $n, m, N\in\Z_{\geq0}$ with $2n\equiv 2m\bmod {(p-1)p^{N-1}}$ and $(p-1)\nmid 2n$. \end{remark} Moreover, we obtain more accurate evaluations of 2-order of $\co{2n}{-2k}$ and $\be{2n}{-2k-1}$. Note that some of the following results were suggested by private communication from Kaneko. \begin{proposition}\label{twoorder} For $n$ and $k\in\Z_{\geq1}$, we have \[ \co{2n}{-2k}\equiv0\bmod{2^{2n}}. \] \end{proposition} \begin{proof} We prove this result by induction on $k$. For $k$=1, we can obtain that $\co{2n}{-2}=2^{2n}$ by induction on $n$. We assume that $\co{2n}{-2j}\equiv0\pmod{2^{2n}}$ holds for $j\leq k-1$. Then we have \begin{align*} \co{2n}{-2k}&=\sum_{i=0}^{n}\binom{2n+1}{2i+1}\co{2(n-i)}{-2k+1}\\ &=\sum_{i=0}^{n}\binom{2n+1}{2i+1}\sum_{j=0}^{n-i}\binom{2(n-i)+1}{2j+1}\co{2(n-i-j)}{-2(k-1)}\\ &=\sum_{N=0}^{n}\sum_{i=0}^{n-N}\binom{2n+1}{2i+1}\binom{2(n-i)+1}{2N}\co{2N}{-2(k-1)}. \end{align*} Also, a direct calculation shows \[ \sum_{i=0}^{n-N}\binom{2n+1}{2i+1}\binom{2(n-i)+1}{2N}\equiv0 \bmod{2^{2n-2N}}. \] By combining these and the induction hypothesis, we obtain Proposition \ref{twoorder}. \end{proof} \begin{corollary}\label{cortwo} For $n\in\Z_{\geq1}$ and $k\in\Z_{\geq0}$, we have \[ \be{2n}{-2k-1} \equiv0 \bmod{2^{2n-1}}. \] \end{corollary} \begin{proof} By using the formula \eqref{cotacose}, we have \begin{align*} \be{2n}{-2k-1}&=\sum_{i=0}^{n}\binom{2n}{2i}\co{2i}{-2k-1}\\ &=\sum_{i=0}^{n}\sum_{j=0}^{i}\binom{2n}{2(n-i)}\binom{2i+1}{2(i-j)}\co{2(i-j)}{-2k}. \end{align*} Also, a direct calculation shows \begin{align*} \sum_{j=0}^{i}\binom{2n}{2(n-i)}\binom{2i+1}{2(i-j)}&\equiv0 \bmod{2^{2(n-i+j)-1}}. \end{align*} By combining these and Proposition \ref{twoorder}, we obtain Corollary \ref{cortwo}. \end{proof} \begin{remark} When we put $N=1$, we can see that the polycosecant number and the polycotangent number have a period $p-1$ modulo $p$. In \cite[Theorem 4.3]{OS} and \cite[Theorem 6.5 and 6.6]{Sa}, Sakata showed the following proposition. \end{remark} \begin{proposition}\label{periodbc} Let $p$ be an odd prime number. 
For $k\in\Z_{\geq0}$, we have \begin{align} B_{p-1}^{(-k)} &\equiv \begin{cases} 1 \bmod{p}& (k=0~\mm{or}~p-1 \nmid k), \\ 2 \bmod{p} & (k\neq0~\mm{and}~p-1 \mid k), \end{cases}\label{periodb}\\ C_{p-2}^{(-k-1)} &\equiv \begin{cases} 0 \bmod{p}& (p-1 \nmid k), \\ 1 \bmod{p} & (p-1 \mid k). \end{cases}\label{periodc} \end{align} \end{proposition} By duality formulas \eqref{dualb} and \eqref{dualc}, congruences in Proposition \ref{periodbc} can be written as \begin{align*} B_{k}^{(-p+1)}\equiv \begin{cases} 1 \bmod{p}& (k=0~\mm{or}~p-1 \nmid k), \\ 2 \bmod{p} & (k\neq0~\mm{and}~p-1 \mid k), \end{cases}~~~ C_{k}^{(-p+1)}\equiv \begin{cases} 0 \bmod{p}& (p-1 \nmid k), \\ 1 \bmod{p} & (p-1 \mid k). \end{cases} \end{align*} Also, Sakata and Pallewatta proved the following proposition in \cite{Sa} and in \cite{Pa}, respectively. \begin{proposition}\label{lemma} Let $p$ be an odd prime number. For $k\in\Z_{\geq0}$, we have \begin{align*} C_{p-1}^{(-k-1)}&\equiv 1 \bmod p,\\ \co{p-1}{-2k-1}&\equiv 1 \bmod p. \end{align*} \end{proposition} \begin{remark} By using Theorem \ref{kupoly}, we can see that $\co{p-1}{-k}\equiv 1\bmod p$ for $k\in\Z_{\geq1}$. \end{remark} By duality formulas \eqref{dualc} and \eqref{dualcose}, congruences in Proposition \ref{lemma} can be written as \begin{align*} C_{k}^{(-p)}\equiv 1 \bmod p,~~~ \co{2k}{-p}\equiv 1 \bmod p. \end{align*} We show that congruence formulas similar to \eqref{periodb} and \eqref{periodc} hold for $\co{n}{-k}$ and $\be{n}{-k}$. \begin{proposition}\label{sakatatype} Let $p$ be an odd prime number. For $n\in\Z_{\geq0}$, we have \begin{align*} \co{2n}{-p+1} &\equiv \begin{cases} 0 \bmod{p}& (p-1 \nmid 2n), \\ 1 \bmod{p} & (p-1 \mid 2n), \end{cases} \\ \be{2n}{-p+1} &\equiv \begin{cases} 1 \bmod{p}& (2n=0~\mm{or}~ p-1 \nmid 2n), \\ 2 \bmod{p} & (2n\neq0~\mm{and}~p-1 \mid 2n). \end{cases} \end{align*} \end{proposition} \begin{proof} First, we show the case of $\co{2n}{-p+1}$. For $p=3$, we have $\co{2n}{-2}=4^n$, so the proposition holds. For $p\neq3$, we prove the proposition by induction on $n$. When $n=0$, we have $\co{0}{-p+1}=1$. When $n=1$, by using Proposition \ref{lemma}, we have \begin{align*} 3\co{2}{-p+1}&=\co{2}{-p}-\co{0}{-p+1}\\ &\equiv 1-1=0\bmod{p} \end{align*} (because $\co{2}{-p}\equiv 1 \bmod{p}$ was shown in \cite[Theorem 3.13]{Pa}), and we have $\co{2}{-p+1}\equiv 0\bmod p$. The same calculation shows that $\co{2n}{-p+1}\equiv 0\bmod p$ holds for $2n=2, 4, \ldots, p-3$. When $2n=p-1$, we have \begin{align*} \co{p-1}{-p+1}=\sum_{i=1}^{p-1}\frac{i!(i-1)!}{2^{i-1}}\stirs{p-1}{i}\stirs{p}{i}. \end{align*} Since $\stirs{p}{m}\equiv 0 \bmod p$ holds for $m=2,3,\ldots, p-1$, only the term $i=1$ remains and we have $\co{p-1}{-p+1}\equiv1 \bmod{p}$. For $\be{2n}{-p+1}$, since \begin{align*} \be{2n}{-p+1}=\sum_{i=0}^{n}\binom{2n}{2i}\co{2n-2i}{-p+1}, \end{align*} we have \begin{align*} \be{2n}{-p+1}&\equiv \co{0}{-p+1} =1\bmod p~~(2n\neq p-1),\\ \be{p-1}{-p+1}&\equiv \co{0}{-p+1}+\co{p-1}{-p+1}\equiv 2\bmod p. \end{align*} Hence we obtain Proposition \ref{sakatatype}. \end{proof} In addition, $\co{n}{-k}$ and $\be{n}{-k}$ satisfy the congruence formula similar to \eqref{sum}. Here, we denote by $T_{2n+1}$ the tangent number defined by \begin{align*} \tan{t}=\sum_{n=0}^{\infty} T_{2n+1} \frac{t^{2n+1}}{(2n+1)!} \end{align*} (see \cite{No}), and \[ \widetilde{T}_{2n}=\begin{cases} 1&(n=0),\\ (-1)^{n-1}T_{2n+1}&(n\in\Z_{\geq1}). 
\end{cases} \] It is known that \[ T_{2n+1}=\begin{cases} 2^{2n}(2^{2n}-1)\frac{B_{2n}}{2n}&(n\in\Z_{\geq1}),\\ 1&(n=0). \end{cases} \] Also, we have \[ \sum_{n=0}^{\infty} \widetilde{T}_{2n} \frac{t^{2n}}{(2n)!}=1+\tanh^2{t}. \] \begin{proposition} Let $p$ be an odd prime number. For $n \in\Z_{\geq 0}$ and $k, N\in\Z_{\geq 1}$ with $k\geq N$, we have \begin{align} 2^{2n}\sum_{i=0}^{\varphi(p^N)-1} \co{2n}{-k-i}&\equiv (-1)^nT_{2n+1}\varphi(p^N) \bmod{p^N},\label{sumcose}\\ 2^{2n}\sum_{i=0}^{\varphi(p^N)-1} \be{2n}{-k-i}&\equiv \widetilde{T}_{2n}\varphi(p^N) \bmod{p^N}.\label{sumcota} \end{align} Hence we have \begin{align*} \sum_{i=0}^{\varphi(p^N)-1} \co{2n}{-k-i}&\equiv 0 \bmod{p^{N-1}},\\ \sum_{i=0}^{\varphi(p^N)-1} \be{2n}{-k-i}&\equiv 0 \bmod{p^{N-1}}. \end{align*} \end{proposition} \begin{proof} First, we show \eqref{sumcose}. By using Proposition \ref{sasaki} and Lemma \ref{bekistir}, we have \begin{align*} 2^{2n}\sum_{i=0}^{\varphi(p^N)-1} \co{2n}{-k-i}&=\sum_{i=0}^{\varphi(p^N)-1}\sum_{j=0}^{2n} 2^{2n-j}j!\stirs{2n+1}{j+1}\sum_{l=0}^{j+1}(-1)^{l+j+1}\binom{j+1}{l}l^{k+i}\\ &\equiv \varphi(p^N) \sum_{j=0}^{2n} (-1)^j2^{2n-j}(j+1)!\stirs{2n+1}{j+1} \bmod{p^N}. \end{align*} Also, a direct calculation shows that \begin{align*} \sum_{n=0}^{\infty} (-1)^nT_{2n+1}\frac{t^{2n+1}}{(2n+1)!}&=\tanh{t}=\frac{(1-e^{-2t})/2}{1-(1-e^{-2t})/2}\\ &=\sum_{n=1}^{\infty} \left(\sum_{j=1}^{n} (-1)^{j-1}2^{n-j}j!\stirs{n}{j}\right)\frac{t^n}{n!}. \end{align*} Hence we have \begin{align*} (-1)^nT_{2n+1}=\sum_{j=0}^{2n} (-1)^{j}2^{2n-j}(j+1)!\stirs{2n+1}{j+1}. \end{align*} Therefore we obtain \eqref{sumcose}. For \eqref{sumcota}, we have \begin{align*} 2^{2n}\sum_{i=0}^{\varphi(p^N)-1} \be{2n}{-k-i} &=\sum_{j=0}^{n}2^{2n-2j}\binom{2n}{2j} 2^{2j}\sum_{i=0}^{\varphi(p^N)-1}\co{2j}{-k-i}\\ &\equiv \varphi(p^N)\sum_{j=0}^{n}2^{2n-2j}\binom{2n}{2j} (-1)^jT_{2j+1} \bmod{p^N}. \end{align*} Also, a direct calculation shows that \begin{align*} 1+\tanh^2{t}&=\cosh{2t}\dv{}{t}\tanh{t}\\ &=\sum_{n=0}^{\infty} \left(\sum_{j=0}^{n} 2^{2n-2j}\binom{2n}{2j}(-1)^jT_{2j+1}\right)\frac{t^{2n}}{(2n)!}. \end{align*} Hence we have \begin{align*} \widetilde{T}_{2n}=\sum_{j=0}^{n} 2^{2n-2j}\binom{2n}{2j}(-1)^jT_{2j+1}. \end{align*} Therefore we obtain \eqref{sumcota}. \end{proof} \begin{remark} A similar calculation shows that \begin{align*} \sum_{i=0}^{\varphi(p^N)-1} C_n^{(-k-i)}&\equiv (-1)^n\varphi(p^N)\bmod{p^N} \end{align*} for an odd prime number $p$, $n \in\Z_{\geq 0}$ and $k, N\in\Z_{\geq 1}$ with $k\geq N$. \end{remark} \begin{example} For $p=3, n=3, k=3$ and $N=2$, we have $\co{6}{-3}=1093, \co{6}{-4}=12160, \co{6}{-5}=111721, \co{6}{-6}=927424, \co{6}{-7}=7256173, \co{6}{-8}=54726400$ and $T_{7}=272$. Then we can see \[ 2^6\sum_{i=0}^{5}\co{6}{-3-i}\equiv 6\equiv -272\times6\bmod9. \] Also, we have $\be{6}{-3}=3104, \be{6}{-4}=23801, \be{6}{-5}=174752, \be{6}{-6}=1257125, \be{6}{-7}=8948384, \be{6}{-8}=63318641$ and $\widetilde{T}_{6}=272$. Then we can see \[ 2^6\sum_{i=0}^{5}\be{6}{-3-i}\equiv 3\equiv 272\times6\bmod9. \] \end{example} \subsection{Clausen von-Staudt type theorem} In this subsection, we prove the Clausen von-Staudt type theorem. \begin{proposition}\label{1von} Let $p$ be an odd prime number. For $n\in\Z_{\geq1}$, we have \[ \ordp(d(2n))=\ordp(\widehat{\beta}(2n))=\ordp(b(2n)), \] where $b(2n)$, $d(2n)$ and $\widehat{\beta}(2n)$ denote the denominators of $B_{2n}$, $\co{2n}{1}$ and $\be{2n}{1}$, respectively.
\end{proposition} \begin{proof} By \eqref{coseber} and Theorem \ref{theoremvon}, we see that only a prime $p$ with $(p-1)\mid 2n$ possibly appears in the denominator of $\co{2n}{1}$. On the other hand, we have for this $p$, \[ 2-2^{2n}\equiv 1 \bmod p. \] Therefore we have $\ordp(d(2n))=\ordp(b(2n))$. For $\be{2n}{1}$, by considering the generating function of $\be{2n}{1}-\co{2n}{1}$, we have \[ \be{2n}{1}-\co{2n}{1}=2(2^{2n}-1)B_{2n}. \] On the other hand, as stated above, if $(p-1) \nmid 2n$, then \[ 2^{2n}-1\equiv 0 \bmod p. \] Hence we have $\be{2n}{1}-\co{2n}{1}\in\zetp$ and obtain Proposition \ref{1von}. \end{proof} \begin{theorem}\label{vonco} Let $p$ be an odd prime number and $k\geq 2$ be an integer satisfying $k+2\leq p\leq 2n+1$. \begin{enumerate} \item When $(p-1) \mid 2n$, $p^k\co{2n}{k}$and $p^k\be{2n}{k}\in\zetp$. More explicitly, \[ p^k\co{2n}{k}\equiv p^k\be{2n}{k}\equiv-1 \bmod p \] holds. \item When $(p-1) \nmid 2n$, $p^{k-1}\co{2n}{k}$and $p^{k-1}\be{2n}{k}\in\zetp$. More explicitly, \begin{align*} p^{k-1}\co{2n}{k}&\equiv -\frac{1}{p}\stirs{2n}{p-1}+\sum_{\substack{l=p\\l\equiv1\bmod{p-1}}}^{2n}\binom{2n+1}{l}\sum_{j=0}^{\alpha-1}\frac{(-1)^jj!}{2^{j+1}}\stirs{\alpha}{j+1} \bmod{p}, \\ p^{k-1}\be{2n}{k}&\equiv -\frac{1}{p}\stirs{2n}{p-1}+\sum_{\substack{l=p\\l\equiv1\bmod{p-1}}}^{2n}\binom{2n+1}{l}\sum_{j=0}^{\alpha-1}\frac{(-1)^jj!}{2^{j+1}}\stirs{\alpha}{j+1}\\ &~~~+\sum_{j=p}^{\gamma} \frac{(-1)^j}{2^j}\frac{(j+2)!}{p}\stirs{2n}{j+2}\binom{j+1}{p} \bmod p \end{align*} hold, where $\alpha\in\Z$ is the remainder of $2n$ modulo $p-1$ and $\gamma=\min{\left\{2n, 2p-3\right\}}$. \end{enumerate} \end{theorem} \begin{proof} First, we see Theorem \ref{vonco} for $\co{n}{k}$. For simplicity, we put \[ \dnkij=\frac{(-1)^j}{2^j}\frac{j!}{(2i+1)^k}\stirs{2n+1}{j+1}\binom{j+1}{2i+1}~~(~0\leq j\leq 2n, 0\leq i\leq\lfloor j/2\rfloor~). \] By using this, we can write \begin{align}\label{codnkij} \co{n}{k}=\sum_{j=0}^{2n}\sum_{i=0}^{\lfloor j/2\rfloor} \dnkij. \end{align} Furthermore, we write $2i+1$ as $ap^e~(\gcd(a,p)=1, e\geq0)$. Note that \[ \ordp(\dnkij)\geq\ordp\left(\frac{j!}{(2i+1)^k}\right). \] First, we assume $e=0$. Then $\dnkij\in\zetp$ holds and $p^{k-1}\dnkij\equiv0 \bmod p$ by the assumption $k\geq 2$. Next, we assume $e\geq 2$. By the same argument in \cite[$\S$14]{AIK}, we have \begin{align*} \ordp\left(\frac{j!}{(2i+1)^k}\right)&\geq-k+1, \end{align*} so we have $p^{k-1}\dnkij\in\zetp$ and $p^k\dnkij\equiv0\bmod p$. We see that $p^{k-1}\dnkij\not\equiv 0\bmod p$ holds for only the case $e=2, 2i+1=p^2, p=k+2$, and $\ordp(j!)=\ordp((2i)!)$. Namely, the condition $2i+1=p^2$ and $\ordp(j!)=\ordp((2i)!)$ holds only if $j=2i$ holds. On the other hand, the assumption $(p-1)\nmid 2n$ and Lemma \ref{vonlemma} imply \[ \stirs{2n}{j}=\stirs{2n}{p^2-1}\equiv0 \bmod p~~(~\mm{changing}~n~\mm{by}~2n~\mm{and}~a=p~), \] so we have \[ \stirs{2n+1}{j+1}\equiv 0 \bmod p, \] which implies that $p^{k-1}\dnkij\equiv0 \bmod p$ holds. Now, we suppose $e=1~($namely $2i+1=ap)$. If $a\geq 3$, $\ordp\left(\frac{j!}{(2i+1)^k}\right)>-k+1$ since $p^2 \mid (ap-1)!$. The case $a=2$ never happens since $p$ is an odd prime. If $a=1~($namely $2i+1=p)$, then we have \[ \dnkij=\frac{(-1)^jj!}{2^jp^k}\stirs{2n+1}{j+1}\binom{j+1}{p}. \] If $j>2i$ holds, we have $\ordp(\dnkij)\geq -k+1$. If $j=2i$ holds, then we have \[ \dnkij=\frac{(p-1)!}{2^{p-1}p^k}\stirs{2n+1}{p}. \] By using Lemma \ref{vonlemma} and the formula $\stirs{2n+1}{p}=\stirs{2n}{p-1}+p\stirs{2n}{p}$, we get the following results. 
\begin{enumerate} \item If $(p-1) \nmid 2n$, then we have $\stirs{2n+1}{p}\equiv 0 \bmod p$ and \[ p^{k-1}\dnkij\equiv -\frac{1}{p}\stirs{2n+1}{p} \bmod p. \] \item If $(p-1) \mid 2n$, then we have $\stirs{2n}{p-1}\equiv 1 \bmod p$ and \[ p^k\dnkij\equiv -1 \bmod p. \] \end{enumerate} Hence by \eqref{codnkij}, we have the following. \begin{enumerate} \item If $(p-1) \mid 2n$, then we have \[ p^{k}\co{2n}{k}\equiv -1 \bmod p. \] \item If $(p-1) \nmid 2n$, then we have \[ p^{k-1}\co{2n}{k}\equiv -\frac{1}{p}\stirs{2n+1}{p}+\sum_{j=p}^{2n} \frac{(-1)^j}{2^j}\frac{j!}{p}\stirs{2n+1}{j+1}\binom{j+1}{p} \bmod p. \] \end{enumerate} For the former term, we have \begin{align*} -\frac{1}{p}\stirs{2n+1}{p}&=-\frac{1}{p}\stirs{2n}{p-1}-\stirs{2n-1}{p-1}-p\stirs{2n-1}{p}\\ &\equiv -\frac{1}{p}\stirs{2n}{p-1}\bmod p \end{align*} by Lemma \ref{vonlemma}. For the latter sum, by using the formula \[ \stirs{n}{j+1}\binom{j+1}{p}=\sum_{k}^{}\stirs{k}{p}\stirs{n-k}{j+1-p}\binom{n}{k} \] in \cite{GKP}, we have \begin{align*} \sum_{j=p}^{2n} \frac{(-1)^jj!}{2^jp}\stirs{2n+1}{j+1}\binom{j+1}{p}&=\sum_{j=p}^{2n} \frac{(-1)^jj!}{2^jp}\sum_{l=j}^{2n}\stirs{l}{p}\stirs{2n+1-l}{j+1-p}\binom{2n+1}{l}. \end{align*} Also, we have \begin{align*} \stirs{l}{p}=\stirs{l-1}{p-1}+p\stirs{l-1}{p}&\equiv \begin{cases} 1\bmod{p}~(l-1\equiv0 \bmod{p-1}),\\ 0\bmod{p}~(otherwise), \end{cases}\\ \stirs{2n+1-l}{j+1-p}&\equiv\stirs{2n+1-l}{j}\bmod{p}. \end{align*} Hence we have \begin{align*} \sum_{j=p}^{2n} \frac{(-1)^jj!}{2^jp}\sum_{l=j}^{2n}\stirs{l}{p}\stirs{2n+1-l}{j+1-p}\binom{2n+1}{l}&\equiv \sum_{\substack{l=p\\l\equiv1\bmod{p-1}}}^{2n}\binom{2n+1}{l}\sum_{j=0}^{l-p}\frac{(-1)^jj!}{2^{j+1}}\stirs{2n+1-l}{j+1}\\ &\equiv \sum_{\substack{l=p\\l\equiv1\bmod{p-1}}}^{2n}\binom{2n+1}{l}\sum_{j=0}^{\alpha-1}\frac{(-1)^jj!}{2^{j+1}}\stirs{\alpha}{j+1} \bmod p. \end{align*} Hence we obtain the desired result for $\co{n}{k}$. For $\be{n}{k}$, it is sufficient to consider the case $e=1, a=1$ and $j=2i=p-1$. In this case, we have \[ p^k\frac{(-1)^{p-1}j!}{2^{p-1}p^k}\frac{p(p+1)}{2}\stirs{2n}{p+1}\equiv0 \bmod p, \] so \[ p^k\be{2n}{k}\equiv p^k\co{2n}{k}\bmod p \] holds, and we obtain Theorem \ref{vonco}. \end{proof} \begin{remark} By using Proposition \ref{explicose} and Proposition \ref{explicota}, we have \[ \ordp(\co{2n}{k}), ~~\ordp(\be{2n}{k}) \geq 0 \] for an odd prime number $p$ with $2n+2\leq p$. Also, by using Proposition \ref{recurcose}, the formula \eqref{cotacose} and the fact that $\co{0}{k}=1$ for all $k\in\Z$, we have \[ {\mm{ord}}_2(\co{2n}{k}), ~~{\mm{ord}}_2(\be{2n}{k}) \geq 0. \] \end{remark} \section{On symmetrized polycosecant numbers.} In \cite{KST}, symmetrized poly-Bernoulli numbers are defined by Kaneko, Sakurai and Tsumura as follows: \begin{definition}[{\cite[Section2]{KST}}] For $l, m$ and $n\in\Z_{\geq0}$, symmetrized poly-Bernoulli numbers $\{\mathscr{B}_m^{(-l)}(n)\}$ are defined by \[ \mathscr{B}_m^{(-l)}(n)=\sum_{j=0}^{\infty} \stirf{n}{j}B_{m}^{(-l-j)}(n), \] where $B_{m}^{(k)}(x)$ is the poly-Bernoulli polynomial defined by \[ e^{-xt}\frac{\mm{Li}_k(1-e^{-t})}{1-e^{-t}}=\sum_{n=0}^{\infty}B_{n}^{(k)}(x)\frac{t^n}{n!}~(k\in\Z). \] \end{definition} It can be checked that \begin{align} \mathscr{B}_m^{(-l)}(0)=B_m^{(-l)},~~~\mathscr{B}_m^{(-l)}(1)=C_m^{(-l-1)}. \label{casesymber} \end{align} In particular, $\mathscr{B}_m^{(-l)}(n)$ satisfies the following duality formula. 
\begin{theorem}[{\cite[Corollary 2.2]{KST}\label{symber}}] For $l, m$ and $n\in\Z_{\geq0}$, we have \[ \mathscr{B}_m^{(-l)}(n)=\mathscr{B}_l^{(-m)}(n). \] \end{theorem} Theorem \ref{symber} can be regarded as a generalization of duality formulas for $B_m^{(-l)}$ and $C_m^{(-l-1)}$ because of \eqref{casesymber}. This theorem can be shown by considering the following generating function in two variables. \begin{theorem}[{\cite[Theorem 2.1]{KST}}] For $n\in\Z_{\geq0}$, we have \[ \sum_{l=0}^{\infty}\sum_{m=0}^{\infty} \mathscr{B}_m^{(-l)}(n)\frac{x^l}{l!}\frac{y^m}{m!}=\frac{n!e^{x+y}}{(e^x+e^y-e^{x+y})^{n+1}}. \] \end{theorem} The following proposition says that $\mathscr{B}_m^{(-l)}(n)\in\Z_{\geq1}$. \begin{proposition}[{\cite{KST}}] For $l, m$ and $n\in\Z_{\geq0}$, we have \[ \mathscr{B}_m^{(-l)}(n)=\sum_{j=0}^{\min{(l, m)}} n!(j!)^2\binom{j+n}{n}\stirs{l+1}{j+1}\stirs{m+1}{j+1}. \] \end{proposition} Also, there are some studies on combinatorial properties of $\mathscr{B}_m^{(-l)}(n)$ (see \cite{BM}). In this section, we define symmetrized polycosecant numbers $\{\cosym{m}{-k}{n}\}$ and prove duality formulas for them. \begin{definition} For $l, m$ and $n\in\Z_{\geq0}$, we define ~$\copoly{}{m}{-l}{n}$ by \begin{align} \frac{1}{2}(e^t+1)^{1-n}\frac{{\mm{Li}}_{-l}(\ta{})}{\sinh{t}}+\frac{1}{2}(e^{-t}+1)^{1-n}\frac{{\mm{Li}}_{-l}(-\ta{})}{\sinh{(-t)}}&=\sum_{m=0}^{\infty}\copoly{}{m}{-l}{n}\frac{t^m}{m!} \end{align} and symmetrized polycosecant numbers $\cosym{m}{-l}{n}$ by \[ \cosym{m}{-l}{n}=\sum_{j=0}^{n}\stirf{n}{j}\copoly{}{m}{-l-j}{n}. \] \end{definition} Since the generating function is an even function, we have $\cosym{2m+1}{-l}{n}=0$ for $l, m$ and $n\in\Z_{\geq0}$. Also, when $n=1$, $\cosym{m}{-l}{1}=\frac{1}{2}\co{m}{-l-1}$. Hence we can regard $\cosym{m}{-l}{n}$ as a generalization of $\co{m}{-l}$. The duality formula for them is as follows. \begin{theorem}\label{cosym} For $l, m$ and $n\in\Z_{\geq0}$, we have \[ \cosym{2m}{-2l}{n}=\cosym{2l}{-2m}{n}. \] \end{theorem} To prove this theorem, we need the following lemmas. For simplicity, we put \begin{align*} f_{1,n}(t,y)&=\frac{n!e^{t+y}}{(1+e^t+e^y-e^{t+y})^{n+1}},\\ f_{2,n}(t,y)&=\frac{n!e^{-t+y}}{(1+e^{-t}+e^y-e^{-t+y})^{n+1}}.\\ \end{align*} \begin{lemma}\label{fncosym} For $n\in\Z_{\geq0}$, we have \begin{align*} f_{1,n}(t,y)&=(e^t+1)^{-n}\sum_{j=0}^{n}\stirf{n}{j}\pdv{j}{y}f_{1,0}(t,y),\\ f_{2,n}(t,y)&=(e^{-t}+1)^{-n}\sum_{j=0}^{n}\stirf{n}{j}\pdv{j}{y}f_{2,0}(t,y).\\ \end{align*} \end{lemma} \begin{lemma}\label{gencosym} For $n\in\Z_{\geq0}$, we have \begin{align*} f_{1,n}(t,y)+f_{2,n}(t,y)=\sum_{m=0}^{\infty}\sum_{l=0}^{\infty}\copasym{}{m}{-l}{n}\frac{t^m}{m!}\frac{y^l}{l!}. \end{align*} \end{lemma} From Lemmas \ref{fncosym} and \ref{gencosym}, and by $\cosym{2m+1}{-l}{n}=0$, we have \begin{align*} 2\sum_{m=0}^{\infty}\sum_{l=0}^{\infty}\cosym{2m}{-2l}{n}\frac{t^{2m}}{(2m)!}\frac{y^{2l}}{(2l)!}&=f_{1,n}(t,y)+f_{2,n}(t,y)+f_{1,n}(t,-y)+f_{2,n}(t,-y)\\ &=\frac{n!e^{t+y}}{(1+e^t+e^y-e^{t+y})^{n+1}}+\frac{n!e^{-t+y}}{(1+e^{-t}+e^y-e^{-t+y})^{n+1}}\\ &\quad+\frac{n!e^{t-y}}{(1+e^t+e^{-y}-e^{t-y})^{n+1}}+\frac{n!e^{-t-y}}{(1+e^{-t}+e^{-y}-e^{-t-y})^{n+1}}. \end{align*} Hence we obtain Theorem \ref{cosym}. \begin{proof}[Proof of Lemma \ref{fncosym}] We prove the result for $f_{1,n}(t,y)$ by induction on $n$. The case $n=0$ is straightforward. For $n\geq1$, a direct calculation shows that \begin{align*} \pdv{}{y}f_{1,n}(t,y)&=(e^t+1)f_{1,n+1}(t,y)-nf_{1,n}(t,y).
\end{align*} By the induction hypothesis, we have \begin{align*} f_{1,n+1}(t,y)&=(e^t+1)^{-1}n!e^{-ny}e^t\pdv{}{y}\frac{1}{(1+e^{-y}+e^{t-y}-e^t)^{n+1}}\\ &=\frac{(n+1)!e^{t+y}}{(1+e^t+e^y-e^{t+y})^{n+1}}. \end{align*} By the similar argument, we obtain the result for $f_{2,n}(t,y)$. \end{proof} \begin{proof}[Proof of Lemma \ref{gencosym}] Since \begin{align*} \sum_{m=0}^{\infty}\sum_{l=0}^{\infty}\sum_{j=0}^{n}&\stirf{n}{j}\copoly{}{m}{-l-j}{n}\frac{t^m}{m!}\frac{y^l}{l!}\\ =&\frac{1}{2}\sum_{j=0}^{n}\stirf{n}{j}\sum_{l=0}^{\infty}(e^t+1)^{1-n}\frac{{\mm{Li}}_{-l-j}(\ta{})}{\sinh{t}}\frac{y^l}{l!}\\ &+\frac{1}{2}\sum_{j=0}^{n}\stirf{n}{j}\sum_{l=0}^{\infty}(e^{-t}+1)^{1-n}\frac{{\mm{Li}}_{-l-j}(-\ta{})}{\sinh{(-t)}}\frac{y^l}{l!}, \end{align*} we aim to prove that $f_{1,n}(t,y)$ coincides with the former sum and $f_{2,n}(t,y)$ coincides with the latter sum. In particular, since the argument for $f_{2,n}(t,y)$ is the same as that for $f_{1,n}(t,y)$, we show the result for $f_{1,n}(t,y)$. When $n=0$, we have \begin{align} \sum_{l=0}^{\infty}{\mm{Li}}_{-l}(z)\frac{y^l}{l!}=\sum_{l=0}^{\infty}\sum_{m=0}^{\infty}m^lz^m\frac{y^l}{l!}=\frac{e^yz}{1-e^yz}~(|z|<1).\label{genli} \end{align} Hence we have \begin{align*} \sum_{l=0}^{\infty}\frac{1}{2}(e^t+1)\frac{\mm{Li}_{-l}(\ta{})}{\sinh{t}}\frac{y^l}{l!}&=\frac{1}{2}(e^t+1)\frac{1}{\sinh{t}}\frac{e^y\ta{}}{1-e^y\ta{}}\\ &=\frac{e^{t+y}}{1+e^t+e^y-e^{t+y}}\\ &=f_{1,0}(t,y) \end{align*} For $n\geq1$, by using \eqref{genli}, we have \begin{align*} &\frac{1}{2}\sum_{j=0}^{n}\stirf{n}{j}\sum_{l=0}^{\infty}(e^t+1)^{1-n}\frac{{\mm{Li}}_{-l-j}(\ta{})}{\sinh{t}}\frac{y^l}{l!}\\ &=\frac{1}{2}\sum_{j=1}^{n}\stirf{n}{j}\pdv{j-1}{y}\sum_{k=0}^{\infty}(e^t+1)^{1-n}\pdv{}{t}{\mm{Li}}_{-k}(\ta{})\frac{y^k}{k!}\\ &=\frac{1}{2}\sum_{j=1}^{n}\stirf{n}{j}\pdv{j-1}{y}(e^t+1)^{1-n}\times\pdv{}{t}\frac{e^y\ta{}}{1-e^y\ta{}}\\ &=\sum_{j=1}^{n}\stirf{n}{j}\pdv{j-1}{y}(e^t+1)^{1-n}\frac{e^{t+y}}{(1+e^t+e^y-e^{t+y})^2}\\ &=\sum_{j=1}^{n}\stirf{n}{j}\pdv{j}{y}(e^t+1)^{-n}\frac{e^{t+y}}{1+e^t+e^y-e^{t+y}}\\ &=f_{1,n}(t,y). \end{align*} \end{proof} From Lemma \ref{gencosym}, we obtain an explicit formula of $\cosym{m}{-l}{n}$. \begin{proposition}\label{expsym} For $l, m$ and $n\in\Z_{\geq0}$, we have \[ \cosym{2m}{-l}{n}=\frac{n!}{2^{n+1}}\sum_{j=0}^{\min{\{2m, l\}}} \frac{(j!)^2}{2^{j-1}} \binom{j+n}{n}\stirs{2m+1}{j+1}\stirs{l+1}{j+1}. \] Hence we obtain $\frac{2^{n+1}}{n!}\cosym{2m}{-l}{n}\in\Z_{\geq0}$. \end{proposition} \begin{proof} Since we have \begin{align*} \left(\frac{1}{1-z}\right)^{n+1}=\sum_{j=0}^{\infty} \binom{j+n}{n}z^j, \end{align*} a direct calculation shows that \begin{align*} \frac{n!e^{t+y}}{(1+e^t+e^y-e^{t+y})^{n+1}} &=\frac{n!}{2^{n+1}}e^{t+y}\left(\frac{1}{1-\frac{1}{2}(e^t-1)(e^y-1)}\right)^{n+1}\\ &=\frac{n!}{2^{n+1}}e^{t+y}\sum_{j=0}^{\infty} \binom{j+n}{n}\frac{1}{2^j}(e^t-1)^j(e^y-1)^j\\ &=\frac{n!}{2^{n+1}}\sum_{m=j}^{\infty}\sum_{l=j}^{\infty}\left(\sum_{j=0}^{\min{\{m, l\}}} \frac{(j!)^2}{2^{j}}\binom{j+n}{n}\stirs{m+1}{j+1}\stirs{l+1}{j+1}\right)\frac{t^m}{m!}\frac{y^l}{l!}. \end{align*} By Lemma \ref{gencosym}, we obtain Proposition \ref{expsym}. \end{proof} \begin{example} When $n=1$, Theorem \ref{cosym} implies \[ \co{2m}{-2l-1}=\co{2l}{-2m-1}~( l, m\in\Z_{\geq0} ). 
\] \end{example} \begin{example} When $n=0$, from Propositions \ref{sasaki}, \ref{stircota} and \ref{expsym}, we have \begin{align*} 1+\frac{e^{t+y}}{1+e^t+e^y-e^{t+y}}&+\frac{e^{-t+y}}{1+e^{-t}+e^y-e^{-t+y}}+\frac{e^{t-y}}{1+e^t+e^{-y}-e^{t-y}}+\frac{e^{-t-y}}{1+e^{-t}+e^{-y}-e^{-t-y}}\\ &=\sum_{m=0}^{\infty}\sum_{l=0}^{\infty} \left(\be{2m}{-2l}+\co{2m}{-2l}+\co{2l}{-2m}\right)\frac{t^{2m}}{(2m)!}\frac{y^{2l}}{(2l)!}. \end{align*} Hence Theorem \ref{cosym} implies \begin{align*} \be{2m}{-2l}+\co{2m}{-2l}+\co{2l}{-2m}&=\be{2l}{-2m}+\co{2l}{-2m}+\co{2m}{-2l}~(l,m\in\Z_{\geq 0}). \end{align*} Therefore we have \[ \be{2m}{-2l}=\be{2l}{-2m}. \] \end{example} \begin{example} When $n=2$, Theorem \ref{cosym} implies \[ \sum_{j=0}^{2m} \binom{2m}{j}E_{j}(0)\left(\widetilde{D}_{2m-j}^{(-2l-1)}+\widetilde{D}_{2m-j}^{(-2l-2)}\right)=\sum_{j=0}^{2l} \binom{2l}{j}E_{j}(0)\left(\widetilde{D}_{2l-j}^{(-2m-1)}+\widetilde{D}_{2l-j}^{(-2m-2)}\right), \] where \[ \frac{{\mm{Li}}_{-k}(\ta{})}{\sinh{t}}=\sum_{m=0}^{\infty} \widetilde{D}_m^{(-k)}\frac{t^m}{m!}. \] \end{example} \section*{Acknowledgments} The author would like to thank his supervisor Professor Hirofumi Tsumura for his kind advice and helpful comments. The author also thanks Professor Masanobu Kaneko for his important suggestions. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2139. \begin{bibdiv} \begin{biblist} \bib{AIK}{book}{ author={T. Arakawa},author={T. Ibukiyama},author={M. Kaneko. with appendix by Don Zagier}, title={Bernoulli numbers and Zeta Functions}, publisher={Springer Monographs in Mathematics, Springer, Tokyo}, date={2014} } \bib{AK1}{article}{ author={T. Arakawa},author={M. Kaneko}, title={Multiple zeta values, poly-Bernoulli numbers, and related zeta functions}, journal={Nagoya Math. J.}, volume={153}, date={1999}, number={}, pages={189--209}, issn={}, } \bib{BM}{article}{ author={B. B\'enyi},author={T. Matsusaka}, title={On the combinatorics of symmetrized poly-Bernoulli numbers}, journal={Electron. J. Combin}, volume={28}, date={2021}, number={1}, pages={20pp}, issn={}, } \bib{GKP}{book}{ author={R. L. Graham},author={D. E. Knuth},author={O. Patashnik}, title={Concrete mathematics}, publisher={Addison-Wesley}, date={1994}, } \bib{IKT}{article}{ author={K. Imatomi},author={M. Kaneko},author={E. Takeda}, title={Multi-poly-Bernoulli numbers and finite multiple zeta values}, journal={J. Integer Seq.}, volume={17}, date={2014}, number={4}, pages={12pp}, issn={}, } \bib{K1}{article}{ author={M. Kaneko}, title={Poly-Bernoulli numbers}, journal={J. Th\'eor. Nombres Bordeaux}, volume={9}, date={1997}, number={1}, pages={199--206}, issn={}, } \bib{KKT}{article}{ author={M. Kaneko},author={Y. Komori},author={H. Tsumura}, title={On Arakawa-Kaneko zeta-functions associated with $GL_2(\C)$ and their functional relations II}, journal={in preparation}, volume={}, date={}, number={}, pages={}, issn={}, } \bib{KPT}{article}{ author={M. Kaneko},author={M. Pallewatta},author= {H. Tsumura}, title={On polycosecant numbers}, journal={J. Integer Seq.}, volume={23}, date={2020}, number={6}, pages={17pp}, issn={}, } \bib{KST}{article}{ author={M. Kaneko},author={F. Sakurai},author={H. Tsumura}, title={On a duality formula for certain sums of values of poly-Bernoulli polynomials and its application}, journal={J. Th\'eor. Nombres Bordeaux}, volume={30}, date={2018}, number={1}, pages={203--218}, issn={}, } \bib{Ka}{article}{ author={Y.
Katagiri}, title={Kummer-type congruences for multi-poly-Bernoulli numbers}, journal={Commentarii mathematici Universitatis Sancti Pauli}, volume={69}, date={2021}, number={}, pages={75--85}, issn={}, } \bib{Ki}{thesis}{ author={R. Kitahara}, title={On Kummer-type congruences for poly-Bernoulli numbers~(in Japanese)}, language={}, organization={Tohoku University, master thesis~(2012)}, volume={}, date={}, number={}, pages={}, issn={}, } \bib{No}{book}{ author={N. E. N$\mm{\ddot{o}}$rlund}, title={Vorlesungen $\ddot{u}$ber Differenzenrechnung}, publisher={Springer Verlag}, date={1924}, } \bib{Pa}{thesis}{ author={M. Pallewatta}, title={On polycosecant numbers and level Two generalization of Arakawa-Kaneko zeta functions}, language={}, organization={Kyushu University, doctor thesis~(2020)}, volume={}, date={}, number={}, pages={52pp}, issn={}, } \bib{OS}{article}{ author={Y. Ohno},author={M. Sakata}, title={On certain properties of poly-Bernoulli numbers with negative index}, journal={J. School sci. Eng.}, volume={49}, date={2013}, number={}, pages={5--7}, issn={}, } \bib{OS2}{article}{ author={Y. Ohno},author={Y. Sasaki}, title={Recursion formulas for poly-Bernoulli numbers and their applications}, journal={Int. J. Number Theory}, volume={17}, date={2021}, number={1}, pages={175--189}, issn={}, } \bib{Sa}{thesis}{ author={M. Sakata}, title={On p-adic properties of poly-Bernoulli numbers~(in Japanese)}, language={}, organization={Kindai University, master thesis~(2012)}, volume={}, date={}, number={}, pages={}, issn={}, } \bib{Sasaki}{article}{ author={Y. Sasaki}, title={On generalized poly-Bernoulli numbers and related L-functions}, journal={Journal of Number Theory}, volume={132}, date={2012}, number={}, pages={156--170}, issn={}, } \end{biblist} \end{bibdiv} \end{document}
2205.05229v2
http://arxiv.org/abs/2205.05229v2
A new approach for the fractional Laplacian via deep neural networks
\documentclass[11pt, reqno]{amsart} \usepackage{a4wide} \usepackage[english, activeacute]{babel} \usepackage{pifont} \usepackage{amsmath,amsthm,amsxtra} \usepackage{epsfig} \usepackage{amssymb} \usepackage{latexsym} \usepackage{amsfonts} \usepackage{dsfont} \usepackage{hyperref} \pagestyle{headings} \usepackage{xparse} \usepackage{tabularx} \usepackage{multicol} \usepackage{mathrsfs} \usepackage{xcolor} \usepackage{color} \usepackage{hyperref} \usepackage{pgf,tikz} \usepackage{listings} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\T}{\mathbb{T}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\C}{\mathbb{C}} \newcommand{\E}{\mathbb{E}} \newcommand{\PP}{\mathbb{P}} \newcommand{\Com}{\mathbb{C}} \newcommand{\la}{\lambda} \newcommand{\pd}{\partial} \newcommand{\wqs}{{Q}_c} \newcommand{\ys}{y_c} \newcommand{\al}{\alpha} \newcommand{\bt}{\beta} \newcommand{\ve}{\varepsilon} \newcommand{\ga}{\gamma} \newcommand{\de}{\delta} \newcommand{\te}{\theta} \newcommand{\si}{\sigma} \newcommand{\De}{\Delta} \newcommand{\arctanh}{\operatorname{arctanh}} \newcommand{\spawn}{\operatorname{span}} \newcommand{\sech}{\operatorname{sech}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\re}{\operatorname{Re}} \newcommand{\ima}{\operatorname{Im}} \newcommand{\diam}{\operatorname{diam}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\pv}{\operatorname{pv}} \newcommand{\sgn}{\operatorname{sgn}} \newcommand{\vv}[1]{\partial_x^{-1}\partial_y{#1}} \newcommand{\sne}{\operatorname{sn}} \newcommand{\cd}{\operatorname{cd}} \newcommand{\sd}{\operatorname{sd}} \newcommand{\cne}{\operatorname{cn}} \newcommand{\dne}{\operatorname{dn}} \newenvironment{psmallmatrix} {\left(\begin{smallmatrix}} {\end{smallmatrix}\right)} \newtheorem{thm}{Theorem}[section] \newtheorem{teo}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{lema}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{ax}{Axiom} \newtheorem{ass}{Assumptions} \newtheorem*{thma}{{\bf Main Theorem}} \newtheorem*{thmap}{{\bf Theorem A'}} \newtheorem*{thmb}{{\bf Theorem B}} \newtheorem*{thmc}{{\bf Theorem C}} \newtheorem*{cord}{{\bf Corolary D}} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}{Remark}[section] \newtheorem*{notation}{Notation} \newtheorem{Cl}{Claim} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bp}{\begin{proof}} \newcommand{\ep}{\end{proof}} \newcommand{\bel}{\begin{equation}\label} \newcommand{\eeq}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\bee}{\begin{eqnarray*}} \newcommand{\eee}{\end{eqnarray*}} \newcommand{\ben}{\begin{enumerate}} \newcommand{\een}{\end{enumerate}} \newcommand{\nonu}{\nonumber} \newcommand{\bs}{\bigskip} \newcommand{\ms}{\medskip} \newcommand{\header}{ } \newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \usepackage{hyperref} \providecommand{\abs}[1]{\left|#1 \right|} \providecommand{\norm}[1]{\left\| #1 \right\|} \date{} \title{A new approach for the fractional Laplacian via deep neural networks} \author{Nicol\'as Valenzuela} \address{Departamento de Ingenier\'ia Matem\'atica DIM, and CMM UMI 2807-CNRS, Universidad de Chile, Beauchef 851 Torre Norte Piso 5, Santiago Chile} \email{[email protected]} \thanks{N.V. is partially supported by Fondecyt no. 
1191412 and CMM Projects ``Apoyo a Centros de Excelencia'' ACE210010 and Fondo Basal FB210005} \date{\today} \subjclass[2000]{Primary: 35R11, Secondary: 62M45, 68T07} \keywords{Deep Neural Networks, Fractional Laplacian, Approximation} \numberwithin{equation}{section} \begin{document} \maketitle \markboth{Nicol\'as Valenzuela}{A new approach for the fractional Laplacian via deep neural networks} \begin{abstract} \noindent The fractional Laplacian has been strongly studied during past decades, see e.g. \cite{CS}. In this paper we present a different approach for the associated Dirichlet problem, using recent deep learning techniques. In fact, intensively PDEs with a stochastic representation have been understood via neural networks, overcoming the so-called \emph{curse of dimensionality}. Among these equations one can find parabolic ones in $\mathbb{R}^d$, see \cite{Hutz}, and elliptic in a bounded domain $D \subset \mathbb{R}^d$, see \cite{Grohs}. \medskip In this paper we consider the Dirichlet problem for the fractional Laplacian with exponent $\alpha \in (1,2)$. We show that its solution, represented in a stochastic fashion by \cite{AK1}, can be approximated using deep neural networks. We also check that this approximation does not suffer from the curse of dimensionality. The spirit of our proof follows the ideas in \cite{Grohs}, with important variations due to the nonlocal nature of the fractional Laplacian; in particular, the stochastic representation is given by an $\alpha$-stable isotropic L\'evy process, and not given by a standard Brownian motion. \end{abstract} \tableofcontents \section{Introduction}\label{Sect:1} \subsection{Motivation} Deep neural networks (DNNs) have become recent key actors in the study of partial differential equations (PDEs) \cite{WE1,WE2,Hutz}. Among them, deep learning based algorithms have provided new insights for the approximation of certain PDEs, see e.g. \cite{Beck1,Beck2,Grohs2,WE1,WE2}. The corresponding numerical simulations suggest that DNNs overcome the so-called \emph{curse of dimensionality}, in the sense that the number of real parameters that describe the DNN is bounded by a polynomial on the dimension $d$, and on the reciprocal of the accuracy of the approximation. Even better, recent works have theoretically proved that certain PDEs can be approximated by DNNs, overcoming the curse of dimensionality, see e.g. \cite{Berner1,Gonnon,Grohs,Grohs3,Hutz,Jentz1}. \medskip In order to describe the previous results in more detail, we start by considering the classical Dirichlet boundary value Problem in $d$-dimensions over a bounded, convex domain $D \subset \R^d$: \[ \left\{\begin{aligned} -\Delta u(x) &= f(x) \quad x \in D,\\ u(x) &= g(x) \quad x \in \partial D, \end{aligned}\right. \] where $f,g$ are suitable continuous functions. In a recent work, Grohs and Herrmann \cite{Grohs} proved that DNNs overcome the curse of dimensionality in the approximation of solution of the above problem. More precisely, they used stochastic techniques such as the Feynman-Kac formula, the so-called \emph{Walk-on-Spheres} (WoS) processes (defined in Section \ref{Sect:4}) and Monte Carlo simulations in order to show that DNNs approximate the exact solution, with arbitrary precision. 
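\medskip To fix ideas, the following is a minimal sketch, in Python, of a Walk-on-Spheres Monte Carlo estimator in the simplest harmonic case ($f\equiv 0$) on the unit ball: at each step the walk jumps to a point uniformly distributed on the largest sphere contained in the domain, and $g$ is evaluated at the nearest boundary point once the walk is within a small tolerance of the boundary. It is only an illustration of this standard sampling scheme, not the construction of \cite{Grohs}; all function and parameter names below are ours.
\begin{verbatim}
# Illustrative sketch (assumption: D = B(0,1), f = 0, g Lipschitz on the sphere).
import numpy as np

def walk_on_spheres(x0, g, n_paths=10_000, tol=1e-3, seed=None):
    """Estimate u(x0) = E[g(B_{tau_D})] for the unit ball D = B(0,1)."""
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    d = x0.shape[0]
    total = 0.0
    for _ in range(n_paths):
        x = x0.copy()
        r = 1.0 - np.linalg.norm(x)        # distance from x to the boundary sphere
        while r > tol:
            step = rng.standard_normal(d)
            step /= np.linalg.norm(step)   # uniform direction
            x = x + r * step               # uniform point on the sphere of radius r around x
            r = 1.0 - np.linalg.norm(x)
        total += g(x / np.linalg.norm(x))  # evaluate g at the closest boundary point
    return total / n_paths

# g(y) = y_1^2 - y_2^2 is harmonic, so the exact solution is u(x) = x_1^2 - x_2^2.
if __name__ == "__main__":
    x = [0.3, 0.1, 0.0]
    print(walk_on_spheres(x, lambda y: y[0]**2 - y[1]**2, seed=0), x[0]**2 - x[1]**2)
\end{verbatim}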
\medskip The main purpose of this paper is to extend the nice results obtained by Grohs and Herrman in the case of the fractional Laplacian $(-\Delta)^{\alpha/2}$, with $\alpha \in (0,2)$, formally defined in $\R^d$ as \begin{equation}\label{FractLap} -(-\Delta)^{\alpha/2} u(x) = c_{d,\alpha} \lim_{\varepsilon \downarrow 0} \int_{\R^d \setminus B(0,\varepsilon)} \frac{u(y)-u(x)}{|y-x|^{d+\alpha}}dy, \hspace{.5cm} x \in \R^d, \end{equation} where $c_{d,\alpha}=- \frac{2^\alpha \Gamma((d+\alpha)/2)}{\pi^{d/2}\Gamma(-\alpha/2)}$ and $\Gamma(\cdot)$ is the classical Gamma function. We also prove that DNNs overcome the curse of dimensionality in this general setting, a hard problem, specially because of the nonlocal character of the problem. However, some recent findings are key to fully describe the problem here. Indeed, Kyprianou et al. \cite{AK1} showed that the Feynman-Kac formula and the WoS processes are also valid in the nonlocal case. We will deeply rely on these results to reproduce the Grohs and Herrmann program. \subsection{Setting} Let $\alpha \in (0,2)$, $d \in \N$ and $D \subset \R^d$ a bounded domain. Consider the following Dirichlet boundary value problem \begin{equation}\label{eq:1.1} \left\{ \begin{array}{rll} (-\Delta)^{\alpha/2}u(x)= f(x) & x \in D,\\ u(x)= g(x) & x \in D^c. \end{array} \right. \end{equation} Here, $f,g$ are functions that satisfy suitable assumptions. More precisely, we ask for the following: \begin{itemize} \item $g:D^c \to \R$ is a $L_g$-Lipschitz continuous function in $L^{1}_{\alpha}(D^c)$, $L_g >0$, that is to say \begin{equation}\label{Hg0} \int_{D^c} \frac{|g(x)|}{1+|x|^{d+\alpha}}dx < \infty. \tag{Hg-0} \end{equation} \item $f:D \to \R$ is a $L_f$-Lipschitz continuous function, $L_f >0$, such that \begin{equation}\label{Hf0} f \in C^{\alpha + \varepsilon_0}(\overline{D}) \qquad \hbox{for some fixed} \qquad \varepsilon_0>0. \tag{Hf-0} \end{equation} \end{itemize} These assumptions are standard in the literature (see e.g. \cite{AK1}), and are required to give a rigorous sense to the {\bf continuous solution} in $L^1_\alpha(\R^d)$ of \eqref{eq:1.1} in terms of the stochastic representation \begin{equation}\label{u(x)} u(x) = \E_x \left[ g(X_{\sigma_D}) \right] + \E_x \left[ \int_0^{\sigma_D} f(X_s) ds \right], \end{equation} where $(X_t)_{t\geq 0}$ is an $\alpha$-stable isotropic L\'evy process and $\sigma_D$ is the exit time of $D$ for this process. See Theorem \ref{teo:sol} below for full details. \medskip Problem \eqref{eq:1.1} has attracted considerable interest in past decades. Starting from the foundational work by Caffarelli and Silvestre \cite{CS}, the study of fractional problems has always required a great amount of detail and very technical mathematics. The reader can consult the monographs by \cite{Acosta,Bonito,Gulian,AK1,Lischke1}. The work of Kyprianou et al. \cite{AK1} proved that the solution of Problem \eqref{eq:1.1} can be represented with the WoS processes described formally in Section \ref{Sect:4}, namely \begin{equation}\label{u(x)2} u(x) = \E_{x} \left[ g(\rho_N) \right] + \E_{x}\left[\sum_{n=1}^{N} r_n^{\alpha} V_1(0,f(\rho_{n-1} +r_n\cdot))\right], \end{equation} where $\left(\rho_{n}\right)_{n=0}^{N}$ is the WoS process starting at $\rho_0 = x \in D$, $N=\min \{n\in \N: \rho_n \notin D\}$ and $r_n = \dist(\rho_{n-1},\partial D)$. $V_1(0,1(\cdot))$ is defined in Section \ref{Sect:5} and represents the expected occupation of the stable process exiting the unit ball centered at the origin. 
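\medskip For orientation, representation \eqref{u(x)2} naturally suggests an empirical (Monte Carlo) approximation of $u$: if $(\rho^{(m)}_n)_{n}$, $m=1,\dots,M$, denote independent copies of the WoS process started at $x$, with exit indices $N^{(m)}$ and radii $r^{(m)}_n$ (the superscript $(m)$ only labels the copies and is our notation), then
\[
u(x) \approx \frac{1}{M}\sum_{m=1}^{M}\left[ g\big(\rho^{(m)}_{N^{(m)}}\big) + \sum_{n=1}^{N^{(m)}} \big(r^{(m)}_n\big)^{\alpha}\, V_1\big(0,f(\rho^{(m)}_{n-1}+r^{(m)}_n\,\cdot)\big)\right].
\]
This is the kind of Monte Carlo approximation that will be made quantitative, and then realized by deep neural networks, in the rest of the paper.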
\medskip In this paper we propose a new approach to Problem \eqref{eq:1.1} based on deep learning techniques. We will work with the DNNs described formally in Section \ref{Sect:3}. In particular, we work with DNNs with $H \in \N$ hidden layers, one input and one output layer, each layer with dimension $k_{i} \in \N$, $i=0,...,H+1$. For $i=1,...,H+1$, the weights and biases of the DNN are denoted by $W_i \in \R^{k_i \times k_{i-1}}$ and $B_i \in \R^{k_i}$. The DNN is represented by its weights and biases as $\Phi = ((W_1,B_1),...,(W_{H+1},B_{H+1}))$. For $x_0 \in \R^{k_0}$, the realization of the DNN $\Phi$ is defined as \[\mathcal{R}(\Phi)(x_0) = W_{H+1}x_{H} + B_{H+1},\] where $x_i \in \R^{k_i}$, $i=1,...,H$, is defined as \[ x_i = A(W_{i}x_{i-1}+B_{i}). \] Here $A(\cdot)$ is the activation function of the DNN. In our case we work with activation functions of ReLu type ($A(x)=\max\{x,0\}$). The number of parameters used to describe the DNN and the dimension of the layers are defined as \[ \mathcal{P}(\Phi) = \sum_{i=1}^{H+1} k_i (k_{i-1}+1), \quad \mathcal{D}(\Phi) = (k_0,...,k_{H+1}), \] respectively. For the approximation of \eqref{u(x)}, we need to assume some hypotheses on the functions involved. In particular, we suppose that $g$, $\dist(\cdot,\partial D)$, $(\cdot)^{\alpha}$ and $f$ can be approximated by ReLu DNNs $\Phi_g$, $\Phi_{\dist}$, $\Phi_{\alpha}$ and $\Phi_{f}$ that overcome the curse of dimensionality. The full details of the hypotheses can be found in Assumptions \ref{Sup:g}, \ref{Sup:D} and \ref{Sup:f} defined in Section \ref{Sect:6}. Our main result is the following: \begin{thm}\label{Main} Let $\alpha \in (1,2)$, $p,s \in (1,\alpha)$ such that $s < \frac{\alpha}{p}$ and $q \in \left[s,\frac{\alpha}{p}\right)$. Assume that \eqref{Hg0} and \eqref{Hf0} are satisfied. Suppose that for every $\delta_{\alpha}, \delta_{\dist}, \delta_{f}, \delta_{g} \in (0,1)$ there exist ReLu DNNs $\Phi_g$, $\Phi_{\alpha}, \Phi_{\dist}$ and $\Phi_f$ satisfying Assumptions \ref{Sup:g}, \ref{Sup:D} and \ref{Sup:f}, respectively. Then for every $\epsilon \in (0,1)$, there exists a ReLu DNN $\Psi_{\epsilon}$ whose realization $\mathcal{R}(\Psi_{\epsilon}): D \to \R$ is a continuous function such that: \begin{enumerate} \item Proximity in $L^q(D)$: If $u$ is the solution of \eqref{eq:1.1}, then \begin{equation} \left(\int_D \left|u(x) - \left(\mathcal{R}(\Psi_{\epsilon})\right)(x)\right|^q dx\right)^{\frac 1q} \leq \epsilon. \end{equation} \item Bounds: There exist $\widehat{B},\eta>0$ such that \begin{equation} \mathcal{P}(\Psi_{\epsilon}) \leq \widehat{B}|D|^{\eta}d^{\eta} \epsilon^{-\eta}. \end{equation} The constant $\widehat{B}$ depends on $\norm{f}_{L^{\infty}(D)}$, the Lipschitz constants of $g$, $\mathcal{R}(\Phi_f)$ and $\mathcal{R}(\Phi_{\alpha})$, and on $\diam(D)$. \end{enumerate} \end{thm} \subsection{Idea of the proof} In this subsection we sketch the proof of Theorem \ref{Main}. Following \cite{Grohs}, we will work with each of the terms in \eqref{u(x)2} separately, and for each of them we will prove that its value can be approximated by DNNs that do not suffer from the curse of dimensionality. For full details see Propositions \ref{Prop:homo} and \ref{Prop:6p1}. We will sketch the proof for the homogeneous part; the argument for the other term is similar. \medskip The proof is divided into several steps. First, we approximate in expectation the respective term of \eqref{u(x)2} by Monte Carlo simulations.
Due to the nature of the WoS processes, we also need to approximate auxiliary random variables, such as copies of the number of iterations $N$ of the WoS process and copies of the norm of the isotropic $\alpha$-stable process exiting the unit ball centered at the origin, $|X_{\sigma_{B(0,1)}}|$. Next, we find Monte Carlo simulations that approximate the corresponding term of \eqref{u(x)2} pointwise and later in $L^q(D)$, whose copies also satisfy the approximations of $N$ and $|X_{\sigma_{B(0,1)}}|$ mentioned before. These additional terms help us to find ReLu DNNs that approximate the WoS processes, so that we can approximate the Monte Carlo simulation by a ReLu DNN. Next, we use this ReLu DNN to approximate the corresponding term of \eqref{u(x)2}, with suitable choices of the parameters so that the approximation has accuracy $\epsilon \in (0,1)$. Finally, we study the resulting ReLu DNN and prove that the total number of parameters that describes it is at most polynomial in the dimension and in the reciprocal of the accuracy. \subsection{Discussion} As stated before, for the approximation of the solutions of Problem \eqref{eq:1.1} we follow the ideas presented in the work of Grohs and Herrmann \cite{Grohs}, with several changes due to the nonlocal nature of the fractional Laplacian. In particular: \begin{enumerate} \item In the nonlocal problem the boundary condition $g$ is defined on the complement of the domain $D$, while in the local problem $g$ is defined on $\partial D$. This variation changes the way $g$ is approximated by DNNs. Assumption \ref{Sup:g} is classical in the literature for functions defined on unbounded sets (see, e.g., \cite{Hutz}). \item The isotropic $\alpha$-stable process associated with the fractional Laplacian has no second moment; therefore the solution cannot be approximated in $L^{2}(D)$, but only in $L^{q}(D)$ for some suitable $q<2$. \item The process associated with the local case is a Brownian motion, which is continuous, so the norm of the process exiting the unit ball centered at the origin is equal to 1, i.e., $|X_{\sigma_{B(0,1)}}|=1$. In the nonlocal case the isotropic $\alpha$-stable process is a pure jump process, and therefore $|X_{\sigma_{B(0,1)}}|>1$. This is the reason why, in our proof, we approximate the copies of this random variable, so that the copies of $X_{\sigma_{B(0,1)}}$ exit near the domain $D$. \item The last notable difference concerns the sum that appears in \eqref{u(x)2}: the value of $r_n$ is raised to the power $\alpha$, so we need an extra hypothesis for the approximation of the function $(\cdot)^{\alpha}$ by DNNs. In the local case $r_n$ is squared, so it can be approximated by DNNs using Lemma \ref{lem:DNN_mult} stated in Section \ref{Sect:3}. \end{enumerate} \medskip A similar way to represent solutions of Problem \eqref{eq:1.1} is found in the work of Gulian and Pang \cite{Gulian}. Thanks to results from stochastic calculus (see, e.g., \cite{Applebaum,Bertoin}), the processes described in that work and the isotropic $\alpha$-stable processes are similar. In addition, in that paper a Feynman-Kac formula is obtained for the generalized parabolic problem associated with the fractional Laplacian. A possible extension would be to check whether the generalized parabolic problem can be adapted to our setting, i.e., whether the solution of the parabolic case can be approximated by DNNs that overcome the curse of dimensionality. \medskip Finally, just before finishing this work, we learned about the research by Changtao Sheng et al.
\cite{ultimo}, who have presented numerical simulations for Problem \eqref{eq:1.1} using similar Monte Carlo methods. However, our methods are radically different in terms of the main goal, which is here to approximate the solution by DNNs in a rigorous fashion. \section{Preliminaries}\label{Sect:2} \subsection{Notation} Throughout this paper, we shall use the following conventions: \begin{itemize} \item $\N=\{1,2,3,...\}$ will be the set of natural numbers. \item For any $q\geq 1$ and any measure space $(\Omega, \mathcal F, \mu)$, $L^q(\Omega,\mu)$ denotes the Lebesgue space of order $q$ with the measure $\mu$. If $\mu$ is the Lebesgue measure, then the Lebesgue space will be denoted as $L^q(\Omega)$. \end{itemize} \subsection{A quick review of Lévy processes}\label{Sect:2p1} Let us give a brief review of the Lévy processes needed for the proof of the main results. For a detailed account of these processes, see e.g. \cite{Applebaum, Bertoin,AK2,Schilling}. \begin{defn} $L:=(L_{t})_{t \geq 0}$ is a Lévy process in $\R^d$ if it satisfies $L_0 = 0$ and \begin{enumerate} \item[i)] $L$ has independent increments, namely, for all $n \in \N$ and for each $0 \leq t_{1} < ... < t_{n} < \infty $, the random variables $(L_{t_{2}}-L_{t_{1}},...,L_{t_{n}}-L_{t_{n-1}})$ are independent. \item[ii)] $L$ has stationary increments, namely, for all $s \geq 0$, $L_{t+s}-L_{s}$ and $L_{t}$ have the same law. \item[iii)] $L_{t}$ is continuous on the right and has limits from the left for all $t > 0$ (i.e., $(L_{t})_{t \geq 0}$ is \textbf{càdlàg}). \end{enumerate} \end{defn} Examples of Lévy processes include Brownian motion, as well as processes with jumps such as the \emph{Poisson process} and the \emph{compound Poisson process} \cite{Applebaum}. \begin{defn} The Poisson process of intensity $\lambda>0$ is a Lévy process $N$ taking values in $\N \cup \{0\}$ such that each $N(t) \sim \hbox{Poisson}(\lambda t)$; hence we have \[ \PP(N(t) = n) = \frac{(\lambda t)^n}{n!} e^{-\lambda t}. \] \end{defn} \begin{defn} Let $(Z(n))_{n \in \N}$ be a sequence of i.i.d. random variables taking values in $\R^d$ with law $\mu$ and let $N$ be a Poisson process with intensity $\lambda$ that is independent of all the $Z(n)$. The compound Poisson process $Y$ is defined as follows: \[ Y(t) = Z(1) + ... + Z(N(t)), \] for each $t \geq 0$. \end{defn} Another important element is the so-called characteristic exponent of a Lévy process. \begin{defn} Let $(L_t)_{t \geq 0}$ be a Lévy process in $\R^d$. \begin{itemize} \item Its characteristic exponent $\Psi:\R^d \rightarrow \C$ is the continuous function that satisfies $\Psi(0)=0$ and for all $t \geq 0$, \begin{equation}\label{eq:EC} \E\left[e^{i\xi \cdot L_{t}}\right] = e^{-t\Psi(\xi)}, \qquad \xi \in \R^d\setminus \{0\}. \end{equation} \item A Lévy triple is $(b,A,\Pi)$, where $b \in \R^d$, $A \in \R^{d \times d}$ is a positive semi-definite matrix, and $\Pi$ is a Lévy measure in $\R^d$, i.e. \begin{equation} \Pi(\{0\})=0 \quad \hbox{ and } \quad \int_{\R^d} (1 \wedge |z|^2) \Pi(dz) < \infty. \end{equation} \end{itemize} \end{defn} A Lévy process is uniquely determined via its Lévy triple and its characteristic exponent. \begin{teo}[Lévy-Khintchine, \cite{AK2}] Let $(b,A,\Pi)$ be a Lévy triple. Define for each $\xi \in \R^d$ \begin{equation}\label{eq:LK} \Psi(\xi) = ib\cdot \xi + \frac{1}{2} \xi \cdot A\xi + \int_{\R^d} \left( 1 - e^{i\xi\cdot z} + i\xi\cdot z {\bf 1}_{\{|z|< 1\}} \right) \Pi(dz).
\end{equation} If $\Psi$ is the characteristic exponent of a Lévy process with triple $(b,A,\Pi)$ in the sense of \eqref{eq:EC}, then it necessarily satisfies \eqref{eq:LK}. Conversely, given \eqref{eq:LK} there exists a probability space $(\Omega, \mathcal{F},\PP)$ on which a Lévy process is defined having characteristic exponent $\Psi$ in the sense of \eqref{eq:EC}. \end{teo} For the next Theorem we define the \emph{Poisson random measure}. \begin{defn} Let $(S,\mathcal{S},\eta)$ be an arbitrary $\sigma$-finite measure space and $(\Omega,\mathcal{F},\PP)$ a probability space. Let $N: \Omega \times \mathcal{S} \to \N \cup \{0,\infty\}$ be such that $(N(\cdot,A))_{A \in \mathcal{S}}$ is a family of random variables defined on $(\Omega,\mathcal{F},\PP)$. For convenience we suppress the dependency of $N$ on $\omega$. $N$ is called a Poisson random measure on $S$ with intensity $\eta$ if \begin{enumerate} \item[i)] For mutually disjoint $A_1,...,A_n$ in $\mathcal{S}$, the variables $N(A_1),...,N(A_n)$ are independent. \item[ii)] For each $A \in \mathcal{S}$, $N(A) \sim \hbox{Poisson}(\eta(A))$; \item[iii)] $N(\cdot)$ is a measure $\PP$-almost surely. \end{enumerate} \end{defn} \begin{rem} In the next theorem we use $S \subset [0,\infty) \times \R^d$, and the intensity $\eta$ will be defined on the product space. \end{rem} From the Lévy-Khintchine formula, every Lévy process can be decomposed into three components: a Brownian part with drift, the large jumps and the compensated small jumps of the process $L$. \begin{teo}[Lévy-Itô Decomposition]\label{teo:levy_ito} Let $L$ be a Lévy process with triple $(b,A,\Pi)$. Then there exist processes $L^{(1)}$, $L^{(2)}$ and $L^{(3)}$ such that for all $t \geq 0$ \[ L_t = L^{(1)}_t + L^{(2)}_t + L^{(3)}_t, \] where \begin{enumerate} \item $L^{(1)}_t = bt + AB_{t}$ and $B_{t}$ is a standard $d$-dimensional Brownian motion. \item $L^{(2)}_t$ satisfies \[ \displaystyle L^{(2)}_t = \int_{0}^{t}\int_{|z|\geq 1} zN(ds,dz), \] where $N(ds,dz)$ is a Poisson random measure on $[0,\infty)\times\{z \in \R^d: |z| \geq 1\}$ with intensity \[ \displaystyle \Pi(\{z\in \R^d :|z|\geq 1\})dt \times \frac{\Pi(dz)}{\Pi(\{z\in \R^d :|z|\geq 1\})}. \] If $\Pi(\{z\in \R^d :|z|\geq 1\})=0$, then $L^{(2)}$ is the process identically equal to 0. In other words, $L^{(2)}$ is a compound Poisson process. \item The process $L^{(3)}_t$ satisfies \[ \displaystyle L^{(3)}_t = \int_{0}^{t} \int_{|z|<1} z \widetilde{N}(ds,dz), \] where $\widetilde{N}(ds,dz)$ is the compensated Poisson random measure, defined by \[ \displaystyle \widetilde{N}(ds,dz) = N(ds,dz) - ds\Pi(dz), \] with $N(ds,dz)$ the Poisson random measure on $[0,\infty)\times \{z \in \R^d: |z|<1\}$ with intensity \[ \displaystyle ds \times \left.\Pi(dz)\right|_{\{z \in \R^d : |z| < 1\}}. \] \end{enumerate} \end{teo} \subsection{Lévy processes and the Fractional Laplacian} A particular class of Lévy processes is given by the so-called isotropic $\alpha$-stable processes, for $\alpha \in (0,2)$. The following definitions can be found in \cite{AK2} in full detail. \begin{defn}\label{def:iso} Let $\alpha \in (0,2)$. $X := (X_t)_{t \geq 0}$ is an isotropic $\alpha$-stable process if $X$ has a Lévy triple $(0,0,\Pi)$, with \begin{equation}\label{eq:levy_measure} \Pi(dz) = 2^{-\alpha} \pi^{-d/2} \frac{\Gamma((d+\alpha)/2)}{|\Gamma(-\alpha/2)|} \frac{1}{|z|^{\alpha + d}} dz, \hspace{.5cm} z \in \R^d. \end{equation} Recall that $\Gamma$ here is the Gamma function.
\end{defn} \begin{defn}[Equivalent definitions of an isotropic $\alpha$-stable process] $\left.\right.$ \begin{enumerate} \item $X$ is an isotropic $\alpha$-stable process iff \begin{equation}\label{eq:scaling} \hbox{for all }\quad c>0, \quad (cX_{c^{-\alpha}t})_{t \geq 0} \quad \hbox{and} \quad X \quad \hbox{have the same law,} \end{equation} and for all orthogonal transformations $U$ on $\R^d$, \[ (UX_t)_{t\geq 0} \quad \hbox{and} \quad X \quad \hbox{have the same law.} \] \item An isotropic $\alpha$-stable process is a stable process whose characteristic exponent is given by \begin{equation} \Psi(\xi) = |\xi|^\alpha, \hspace{.5cm} \xi \in \R^d. \end{equation} \end{enumerate} \end{defn} \begin{rem} In the first definition, we say that $X$ satisfies the \emph{scaling property} and is \emph{rotationally invariant}, respectively. \end{rem} Note that by Definition \ref{def:iso} and by the Lévy-Itô decomposition (Theorem \ref{teo:levy_ito}), an isotropic $\alpha$-stable process can be decomposed as \begin{equation} X_{t} = \int_0^t \int_{|z| \geq 1} zN(ds,dz) + \int_0^t \int_{|z| < 1} z \widetilde{N}(ds,dz). \end{equation} From this equation, we can conclude that every isotropic $\alpha$-stable process is a pure jump process, whose jumps are determined by the Lévy measure defined in \eqref{eq:levy_measure}. We now state further properties of these processes: \begin{teo} Let $g$ be a locally bounded, submultiplicative function and let $L$ be a Lévy process. Then the following are equivalent: \begin{enumerate} \item $\E[g(L_t)]< \infty$ for some $t > 0$. \item $\E[g(L_t)] < \infty $ for all $t > 0$. \item $\int_{|z|>1} g(z) \Pi(dz) < \infty.$ \end{enumerate} \end{teo} An important consequence of the previous theorem gives necessary and sufficient conditions for the existence of the $p$-th moment of an isotropic $\alpha$-stable process. \begin{cor}\label{cor:Xmoment} Let $X$ be an $\alpha$-stable process and $p>0$. Then the following are equivalent: \begin{enumerate} \item $p < \alpha$. \item $\E[|X_t|^p]< \infty$ for some $t>0$. \item $\E[|X_t|^{p}]< \infty$ for all $t > 0$. \item $\int_{|z|>1} |z|^{p} \Pi(dz) < \infty$. \end{enumerate} \end{cor} \begin{rem} If $\alpha \in (0,1)$, by this corollary $X$ has no first moment. On the other hand, if $\alpha \in (1,2)$ then it has a finite first moment, but no second moment. \end{rem} \subsection{Type $s$ spaces and Monte Carlo Methods.} We now introduce some results that control the difference, in $L^{p}$ norm with $p>1$, between the expectation of a random variable and a Monte Carlo operator associated to this expectation. The following statements are simplified versions of the results of \cite{Hutz2}, to which we refer for more details. Throughout this section we work with real valued Banach spaces. \medskip We start with some concepts related to Banach spaces. The reader can consult \cite{Hutz2,ProbBan} for more details on this topic. \begin{defn} Let $(\Omega, \mathcal{F}, \PP)$ be a probability space, let $J$ be a set, and let $r_j : \Omega \to \{-1,1\}$, $j \in J$, be a family of independent random variables such that for all $j \in J$, \[ \PP(r_j = 1) = \PP(r_j = -1) = \frac{1}{2}. \] Then we say that $(r_j)_{j \in J}$ is a $\PP$-Rademacher family. \end{defn} \begin{defn}\label{def:stype} Let $(r_j)_{j \in \N}$ be a $\PP$-Rademacher family. Let $s \in (0,\infty)$.
A Banach space $(E,\norm{\cdot}_{E})$ is said to be of type $s$ if there is a constant $C$ such that for all finite sequences $(x_j)$ in $E$, \[ \E\left[\norm{\sum_{j}r_j x_j}^s_E\right]^{\frac{1}{s}} \leq C \left(\sum_{j} \norm{x_j}_{E}^{s}\right)^{\frac{1}{s}}. \] The smallest such constant $C$ is called the type $s$-constant of $E$ and it is denoted by $\mathscr{T}_s(E)$. \end{defn} \begin{rem} The existence of a finite constant $C$ in Definition \ref{def:stype} is possible only for $s \leq 2$ (see, e.g. \cite{ProbBan}, Section 9, for more details). \end{rem} \begin{rem} Any Banach space $(E,\norm{\cdot}_E)$ is of type $1$. Moreover, the triangle inequality ensures that $\mathscr{T}_1(E)=1$. \end{rem} \begin{rem} Notice that for all Banach spaces $(E,\norm{\cdot}_E)$, the function $(0,\infty) \ni s\to \mathscr{T}_s(E) \in [0,\infty]$ is non-decreasing. This implies for all $s \in (0,1]$ and all Banach spaces $(E,\norm{\cdot}_E)$ with $E \neq \{0\}$ that $\mathscr{T}_s(E) = 1$. \end{rem} \begin{rem} For all $s \in (0,2]$ and all Hilbert spaces $(H,\langle\cdot,\cdot \rangle_H, \norm{\cdot}_H)$ with $H \neq \{0\}$ it holds that $\mathscr{T}_s(H) = 1$. \end{rem} \begin{defn} Let $(r_j)_{j\in \N}$ be a $\PP$-Rademacher family. Let $q,s \in (0,\infty)$ and let $(E,\norm{\cdot}_E)$ be a Banach space. The $(q,s)$-Kahane-Khintchine constant $\mathscr{K}_{q,s}$ of the space $E$ is the extended real number given by the smallest constant $C$ such that for all finite sequences $(x_j)$ in $E$, \[ \E\left[\norm{\sum_{j}r_j x_j}^q_E\right]^{\frac{1}{q}} \leq C\,\E\left[\norm{\sum_{j}r_j x_j}^s_E\right]^{\frac{1}{s}}. \] \end{defn} \begin{rem} For all $q,s \in (0,\infty)$ it holds that $\mathscr{K}_{q,s} < \infty$. Moreover, if $q\leq s$, Jensen's inequality implies that $\mathscr{K}_{q,s} = 1$. \end{rem} \begin{defn} Let $q,s \in (0,\infty)$ and let $(E,\norm{\cdot}_E)$ be a Banach space. Then we denote by $\Theta_{q,s}(E) \in [0,\infty]$ the extended real number given by \[ \Theta_{q,s}(E) = 2 \mathscr{T}_{s}(E)\mathscr{K}_{q,s}. \] \end{defn} Consider the case $(\R,|\cdot|)$. Notice that $(\R,\langle \cdot,\cdot \rangle_{\R}, |\cdot|)$ is a Hilbert space with the inner product $\langle x,y \rangle_{\R} = xy$. Then it holds for all $s \in (0,2]$ that \[ \mathscr{T}_{s} := \mathscr{T}_{s}(\R) = 1, \] in other words, $(\R,|\cdot|)$ has type $s$ for all $s \in (0,2]$. Moreover, it holds for all $q \in (0,\infty)$, $s \in (0,2]$ that \[ \Theta_{q,s} := \Theta_{q,s}(\R) = 2\mathscr{K}_{q,s} < \infty. \] With this in mind, we state a particular version of Corollary 5.12 in \cite{Hutz2}, replacing the Banach space $(E,\norm{\cdot}_E)$ by $(\R,|\cdot|)$. \begin{cor}\label{cor:MCq} Let $M \in \N$, $s \in [1,2]$, $(\Omega,\mathcal{F},\PP)$ be a probability space, and let $\xi_j \in L^1(\PP,|\cdot|)$, $j \in \{1,...,M\}$, be independent and identically distributed. Then, for all $q \in [s,\infty)$, \begin{equation}\label{eq:MCq} \begin{aligned} \norm{\E[\xi_1] - \frac{1}{M} \sum_{j=1}^{M} \xi_j}_{L^{q}(\Omega,\PP)} &= \frac{1}{M} \E\left[\left|\sum_{j=1}^{M}\xi_j - \E \left[\sum_{j=1}^{M} \xi_j\right]\right|^{q}\right]^{\frac{1}{q}}\\ &\leq \frac{\Theta_{q,s}}{M^{1-\frac{1}{s}}} \E\left[\left|\xi_1 - \E[\xi_1]\right|^{q}\right]^{\frac{1}{q}}. \end{aligned} \end{equation} \end{cor} \begin{rem} The choice of the Banach space $(\R,|\cdot|)$ ensures that $\Theta_{q,s}$ is finite, and for $s>1$ the bound above tends to zero as $M$ grows.
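For instance, if $s=\tfrac{3}{2}$, the factor $M^{-(1-\frac{1}{s})}$ in \eqref{eq:MCq} equals $M^{-1/3}$, so that reaching an error of order $\varepsilon \in (0,1)$ requires a number of samples $M$ of order $\varepsilon^{-3}$; for $s=1$ the factor equals $1$ and the bound does not improve as $M$ grows. This elementary count already suggests why the number of Monte Carlo samples, and hence the size of the networks constructed below, will depend polynomially on the reciprocal of the accuracy.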
\end{rem} \section{Deep Neural Networks}\label{Sect:3} In this section we review recent results on the mathematical analysis of neural networks needed for the proof of the main theorem. For a detailed description, see e.g. \cite{Grohs2,Hutz,Jentz1}. \subsection{Setting} For $d \in \N$ define the (componentwise) ReLU activation function \[ A_d : \R^d \rightarrow \R^d \] by \[ A_d(z)=(\max\{z_1,0\},...,\max\{z_d,0\}), \qquad z=(z_1,...,z_d) \in \R^d. \] Let also \begin{enumerate} \item[(NN1)] $H \in \N$ be the number of hidden layers; \item[(NN2)] $(k_i)_{i=0}^{H+1}$ be a positive integer sequence; \item[(NN3)] $W_{i} \in \R^{k_i \times k_{i-1}}$, $B_{i} \in \R^{k_i}$, for any $i=1,...,H+1$ be the weights and biases, respectively; \item[(NN4)] $x_0 \in \R^{k_0}$, and for $i = 1,...,H$ let \begin{equation}\label{x_i} x_i = A_{k_i}(W_ix_{i-1}+B_i). \end{equation} \end{enumerate} We call \begin{equation}\label{eq:DNN_def} \Phi := (W_i,B_i)_{i=1}^{H+1} \in \prod_{i=1}^{H+1} \left(\R^{k_i \times k_{i-1}} \times \R^{k_i}\right) \end{equation} the DNN associated to the parameters in (NN1)-(NN4). The space of all DNNs in the sense of \eqref{eq:DNN_def} will be denoted by $\textbf{N}$, namely \[ \textbf{N} = \bigcup_{H \in \N} \bigcup_{(k_0,...,k_{H+1})\in \N^{H+2}} \left[\prod_{i=1}^{H+1} \left(\R^{k_i \times k_{i-1}} \times \R^{k_i}\right)\right]. \] Define the realization of the DNN $\Phi \in \textbf{N}$ as \begin{equation}\label{Realization} \mathcal{R}(\Phi)(x_0) = W_{H+1}x_H + B_{H+1}. \end{equation} Notice that $\mathcal{R}(\Phi) \in C(\R^{k_0},\R^{k_{H+1}})$. For any $\Phi \in \textbf{N}$ define \begin{equation}\label{P_D} \mathcal{P}(\Phi) = \sum_{n =1}^{H+1} k_n (k_{n-1}+1), \qquad \mathcal{D}(\Phi) = (k_0,k_1,...,k_{H+1}), \end{equation} and \begin{equation}\label{norma} \vertiii{ \mathcal{D}(\Phi ) } = \max\{ k_0,k_1,...,k_{H+1} \}. \end{equation} The entries of $(W_i,B_i)_{i=1}^{H+1}$ are the weights of the DNN, $\mathcal{P}(\Phi)$ represents the number of parameters used to describe the (always fully connected) DNN, and $\mathcal{D}(\Phi)$ represents the dimensions of the layers of the DNN. Notice that $\Phi \in \textbf{N}$ has $H+2$ layers: $H$ of them hidden, one input and one output layer. \begin{rem} For $\Phi \in \textbf{N}$ one has \[ \vertiii{\mathcal{D}(\Phi)} \leq \mathcal{P}(\Phi) \leq (H+1)\vertiii{\mathcal{D}(\Phi)} (\vertiii{\mathcal{D}(\Phi)}+1). \] Indeed, from the definition of $\vertiii{\cdot}$, \[ \vertiii{\mathcal{D}(\Phi)} \leq \sum_{n=1}^{H+1} k_n \leq \mathcal{P}(\Phi). \] In addition, the definition of $\mathcal{P}(\Phi)$ implies that \[ \mathcal{P}(\Phi) \leq \sum_{n=1}^{H+1} \vertiii{\mathcal{D}(\Phi)}(\vertiii{\mathcal{D}(\Phi)}+1) = (H+1)\vertiii{\mathcal{D}(\Phi)}(\vertiii{\mathcal{D}(\Phi)}+1). \] \end{rem} \begin{rem}\label{rem:CoD} From the previous remark one has \[ \mathcal{P}(\Phi) \leq 2(H+1)\vertiii{\mathcal{D}(\Phi)}^2. \] If $\vertiii{\mathcal{D}(\Phi)}$ grows at most polynomially in both the dimension of the input layer and the reciprocal of the accuracy $\varepsilon$ of the DNN, then so does $\mathcal{P}(\Phi)$. This means that, with the right bound on $\vertiii{\mathcal{D}(\Phi)}$, the DNN $\Phi$ does not suffer the curse of dimensionality, in the sense established in Section \ref{Sect:1}. \end{rem} \subsection{Operations} In this subsection we summarize some operations between DNNs. We start with the definition of some operations on dimension vectors. \begin{defn} Let $\textbf{D} = \bigcup_{H \in \N} \N^{H+2}$.
\begin{enumerate} \item Define $\odot: \textbf{D} \times \textbf{D} \rightarrow \textbf{D}$ such that for all $H_1,H_2 \in \N$, $\alpha = (\alpha_0,...,\alpha_{H_1+1}) \in \N^{H_1+2}$, $\beta = (\beta_0,...,\beta_{H_2+1}) \in \N^{H_2+2}$ it satisfies \begin{equation} \alpha \odot \beta = (\beta_0,\beta_1,...,\beta_{H_2},\beta_{H_2+1}+\alpha_0,\alpha_1,...,\alpha_{H_1+1}) \in \N^{H_1+H_2+3}. \end{equation} \smallskip \item Define $\boxplus: \textbf{D} \times \textbf{D} \rightarrow \textbf{D}$ such that for all $H \in \N$, $\alpha = (\alpha_0,...,\alpha_{H+1}) \in \N^{H+2}$, $\beta = (\beta_0,...,\beta_{H+1}) \in \N^{H+2}$ it satisfies \begin{equation} \alpha \boxplus \beta = (\alpha_0, \alpha_1 + \beta_1,...,\alpha_H + \beta_H, \beta_{H+1}) \in \N^{H+2}. \end{equation} \smallskip \item Define $\mathfrak{n}_n \in \textbf{D}$, $n \in \N$, $n \geq 3$ as \begin{equation} \mathfrak{n}_n = (1,\underbrace{2,\ldots,2}_{(n-2)\hbox{-times}},1) \in \N^n. \end{equation} \end{enumerate} \end{defn} \begin{rem} From these definitions and the norm $\vertiii{\cdot}$ defined in \eqref{norma}, the following bounds are clear: \begin{enumerate} \item For $H_1, H_2 \in \N$, $\alpha \in \N^{H_1 + 2}$ and $\beta \in \N^{H_2 + 2}$, \[ \vertiii{\alpha \odot \beta} \leq \max\{\vertiii{\alpha},\vertiii{\beta},\alpha_0 + \beta_{H_2 + 1}\}. \] \item For $H \in \N$ and $\alpha,\beta \in \N^{H+2}$, \[ \vertiii{\alpha \boxplus \beta} \leq \vertiii{\alpha} + \vertiii{\beta}. \] \item For $n \in \N$, $n \geq 3$, $\vertiii{\mathfrak{n}_n} = 2$. \end{enumerate} \end{rem} Now we state classical operations between DNNs. For full details on the next Lemmas, the reader can consult e.g. \cite{Hutz,Jentz1}. \begin{lema}\label{lem:DNN_id} Let $Id_{\R} : \R \rightarrow \R$ be the identity function on $\R$ and let $H \in \N$. Then $Id_{\R} \in \mathcal{R}\left(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi)=\mathfrak{n}_{H+2}\}\right)$. \end{lema} \begin{rem} A similar result is valid in $\R^d$. Let $Id_{\R^d} : \R^d \to \R^d$ be the identity function on $\R^d$ and let $H \in \N$. Then $Id_{\R^d} \in \mathcal{R}\left(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi)=d\mathfrak{n}_{H+2}\}\right)$. The case $H=3$ is proved in \cite{Jentz1}. \end{rem} \begin{rem} Let $H \in \N$ and let $\Phi \in \textbf{N}$ be such that $\mathcal{R}(\Phi) = Id_{\R^d}$ and $\mathcal{D}(\Phi)=d\mathfrak{n}_{H+2}$. Then by Remark 3.3 we have that $\vertiii{\mathcal{D}(\Phi)} = 2d$. \end{rem} \begin{lema}\label{lem:DNN_comp} Let $d_1,d_2,d_3 \in \N$, $f \in C(\R^{d_2},\R^{d_3})$, $g \in C(\R^{d_1},\R^{d_2})$, $\alpha,\beta \in \textbf{D}$ such that $f \in \mathcal{R}(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \alpha \})$ and $g \in \mathcal{R}(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \beta \})$. Then $(f \circ g) \in \mathcal{R}(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \alpha \odot \beta\})$. \end{lema} \begin{rem} Let $\Phi_f, \Phi_g, \Phi \in \textbf{N}$ be such that $\mathcal{R}(\Phi_f) = f$, $\mathcal{R}(\Phi_g) = g$ and $\mathcal{R}(\Phi) = f \circ g$ as in Lemma \ref{lem:DNN_comp}. Then by Remark 3.3 it follows that \[ \vertiii{\mathcal{D}(\Phi)} \leq \max\{\vertiii{\mathcal{D}(\Phi_f)},\vertiii{\mathcal{D}(\Phi_g)},2d_2\}. \] \end{rem} \begin{lema}\label{lem:DNN_sum} Let $M,H,p,q \in \N$, $h_i \in \R$, $\beta_i \in \textbf{D}$, $f_i \in C(\R^p,\R^q)$, $i=1,...,M$, be such that for all $i=1,...,M$, $\dim(\beta_i) = H+2$ and $f_i \in \mathcal{R}(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \beta_i\})$.
Then \begin{equation} \sum_{i=1}^{M} h_i f_i \in \mathcal{R}\left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \overset{M}{\underset{i=1}{\boxplus}} \beta_i \right\}\right). \end{equation} \end{lema} \begin{rem} For $i=1,...,M$ let $\Phi_i \in \textbf{N}$ be such that $\mathcal{R}(\Phi_i)=f_i$ and let $\Phi \in \textbf{N}$ be such that \[ \mathcal{R}(\Phi) = \sum_{i=1}^{M} h_i f_i. \] It follows from Remark 3.3 that \[ \vertiii{\mathcal{D}(\Phi)} \leq \sum_{i=1}^{M} \vertiii{\mathcal{D}(\Phi_i)}. \] \end{rem} The following Lemma comes from \cite{Elb} and is adapted to our notation. \begin{lema}\label{lem:DNN_para} Let $H,d,d_i \in \N$, $\beta_i \in \textbf{D}$, $f_i \in C(\R^d,\R^{d_i})$, $i=1,2$, be such that for $i=1,2$, $\dim(\beta_i) = H+2$ and $f_i \in \mathcal{R}\left(\left\{ \Phi \in \textbf{N}: \mathcal{D}(\Phi) = \beta_i\right\}\right)$. Then \begin{equation} (f_1,f_2) \in \mathcal{R}\left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = (d,\beta_{1,1}+\beta_{2,1},...,\beta_{1,H+1}+\beta_{2,H+1}) \right\}\right). \end{equation} \end{lema} \begin{rem} Let $\Phi_1, \Phi_2, \Phi \in \textbf{N}$ be such that $\mathcal{R}(\Phi_i) = f_i$, $i=1,2$, and $\mathcal{R}(\Phi) = (f_1,f_2)$. Notice by Lemma \ref{lem:DNN_para} and the definition of the norm $\vertiii{\cdot}$ in \eqref{norma} that \[ \vertiii{\mathcal{D}(\Phi)} \leq \vertiii{\mathcal{D}(\Phi_1)} + \vertiii{\mathcal{D}(\Phi_2)}. \] \end{rem} For the sake of completeness, we state the following lemma with its proof. We continue with the notation from \cite{Hutz}: \begin{lema}\label{lem:DNN_mat} Let $H,p,q,r \in \N$, $M \in \R^{r\times q}$, $\alpha \in \textbf{D}$, $f \in C(\R^p,\R^q)$, such that $\dim(\alpha)=H+2$ and $f \in \mathcal{R}(\{\Phi \in \textbf{N}: \mathcal{D}(\Phi)=\alpha\})$. Then \begin{equation} Mf \in \mathcal{R}\left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi)=(\alpha_0,...,\alpha_H,r)\right\}\right). \end{equation} \end{lema} \begin{proof} Let $H, \alpha_0, ..., \alpha_{H+1} \in \N$ and $\Phi_f \in \textbf{N}$ be such that \[ \alpha = \left(\alpha_0,...,\alpha_{H+1}\right), \qquad \mathcal{R}(\Phi_f) = f, \qquad \hbox{and} \qquad \mathcal{D}(\Phi_f) = \alpha. \] Note that $p = \alpha_0$ and $q = \alpha_{H+1}$. Let $\left((W_1,B_1),...,(W_{H+1},B_{H+1})\right) \in \prod_{n=1}^{H+1} \left(\R^{\alpha_n \times \alpha_{n-1}} \times \R^{\alpha_n}\right)$ satisfy \[ \Phi_f = \left((W_1,B_1),...,(W_{H+1},B_{H+1})\right). \] Let $M \in \R^{r \times \alpha_{H+1}}$ and define \[ \Phi = \left((W_1,B_1),...,(W_{H},B_{H}),(MW_{H+1},MB_{H+1})\right). \] Notice that $(MW_{H+1},MB_{H+1}) \in \R^{r \times \alpha_{H}} \times \R^{r}$, and then $\Phi \in \textbf{N}$. For $y_0 \in \R^{\alpha_0}$, and $y_i$, $i=1,...,H$, defined as in (NN4), we have \[ \left(\mathcal{R}(\Phi)\right)(y_0) = MW_{H+1}y_{H} + MB_{H+1} = M \left(W_{H+1}y_{H}+B_{H+1}\right) = M \left(\mathcal{R}(\Phi_f)\right)(y_0). \] Therefore \[ \mathcal{R}(\Phi) = Mf, \qquad \hbox{and} \qquad \mathcal{D}(\Phi) = \left(\alpha_0,...,\alpha_{H},r\right), \] and the Lemma is proved. \end{proof} \begin{rem} Let $\Phi_f, \Phi \in \textbf{N}$ be such that $\mathcal{R}(\Phi_f) = f$ and $\mathcal{R}(\Phi) = Mf$. From the previous Lemma and the definition of $\vertiii{\cdot}$ it follows that \[ \vertiii{\mathcal{D}(\Phi)} \leq \max\{\vertiii{\mathcal{D}(\Phi_f)},r\}. \] \end{rem} The following Lemma, which concerns the approximation of products of real numbers by ReLu DNNs, is taken from \cite{Grohs}; before stating it, we give a brief schematic illustration of the DNN framework introduced in this section.
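The sketch below is included only for illustration and is not used in the proofs; every name appearing in it is chosen here for convenience. It shows how a DNN of the form \eqref{eq:DNN_def} can be stored as a list of pairs $(W_i,B_i)$, and how the quantities $\mathcal{R}(\Phi)$, $\mathcal{P}(\Phi)$, $\mathcal{D}(\Phi)$ and $\vertiii{\mathcal{D}(\Phi)}$ of \eqref{Realization}, \eqref{P_D} and \eqref{norma} can be evaluated.

\begin{verbatim}
# Schematic illustration only: a DNN Phi = ((W_1,B_1),...,(W_{H+1},B_{H+1}))
# is stored as a Python list of (numpy matrix, numpy vector) pairs.
import numpy as np

def realization(phi, x0):
    """Evaluate R(Phi)(x0): affine maps with ReLU on the hidden layers."""
    x = np.asarray(x0, dtype=float)
    for W, B in phi[:-1]:                 # hidden layers, cf. (NN4)
        x = np.maximum(W @ x + B, 0.0)
    W, B = phi[-1]                        # output layer: affine map only
    return W @ x + B

def num_parameters(phi):                  # P(Phi) = sum_n k_n (k_{n-1} + 1)
    return sum(W.size + B.size for W, B in phi)

def dims(phi):                            # D(Phi) = (k_0, k_1, ..., k_{H+1})
    return [phi[0][0].shape[1]] + [W.shape[0] for W, _ in phi]

# toy example: input dimension 2, one hidden layer of width 4, scalar output
rng = np.random.default_rng(0)
phi = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
       (rng.standard_normal((1, 4)), rng.standard_normal(1))]
print(realization(phi, [1.0, -2.0]))        # R(Phi)(x_0)
print(num_parameters(phi), max(dims(phi)))  # P(Phi) and |||D(Phi)|||
\end{verbatim}

Informally, at the level of such lists the operation $\odot$ corresponds to composing networks and $\boxplus$ to stacking networks in parallel and summing their outputs, in agreement with the Lemmas above.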
\begin{lema}\label{lem:DNN_mult} There exist constants $C_1,C_2,C_3,C_4 > 0$ such that for all $\kappa > 0$ and for all $\delta \in \left(0,\frac{1}{2}\right)$ there exists a ReLu DNN $\Upsilon \in \textbf{N}$, with $\mathcal{R}(\Upsilon) \in C(\R^2,\R)$, such that \begin{equation} \sup_{a,b \in [-\kappa,\kappa]} |ab-\left(\mathcal{R}(\Upsilon)\right)(a,b)| \leq \delta. \end{equation} Moreover, for all $\delta \in \left(0,\frac{1}{2}\right)$, \begin{align} \mathcal{P}(\Upsilon) &\leq C_1 \left( \log_2 \left(\frac{\max\{\kappa,1\}}{\delta}\right)\right) + C_2, \\ \dim(\mathcal{D}(\Upsilon)) &\leq C_3 \left( \log_2 \left(\frac{\max\{\kappa,1\}}{\delta}\right)\right) + C_4. \end{align} \end{lema} \medskip \section{Walk-on-spheres Processes}\label{Sect:4} We start with some key notation that will be used extensively throughout this paper. \medskip Let $(\Omega, \PP, \mathcal{F})$ be a filtered probability space with $\mathcal{F}=\left(\mathcal{F}_t\right)_{t\geq 0}$. Let $\left(X_{t}\right)_{t \geq 0}$ be an isotropic $\alpha$-stable process starting at $X_0$. For $x \in D$ denote by $\PP_x$ the probability measure conditioned on $X_0 = x$ and by $\E_x$ the corresponding expectation. Finally, define for any $B \subset \R^d$ the exit time from the set $B$ as \[ \sigma_B = \inf \{t \geq 0 : X_t \notin B\}. \] Now we introduce the classical WoS process. \begin{defn}[\cite{AK1}]\label{defn:WoS} The Walk-on-Spheres (WoS) process $\rho := (\rho_n)_{n\in \N}$ is defined as follows: \begin{itemize} \item $\rho_0 = x$, $x \in D$; \item given $\rho_{n-1}$, $n\geq 1$, the distribution of $\rho_n$ is chosen according to an independent sample of $X_{\sigma_{B_n}}$ under $\PP_{\rho_{n-1}}$, where $B_n$ is the ball centered at $\rho_{n-1}$ with radius $r_{n} = \dist(\rho_{n-1},\partial D)$. \end{itemize} \end{defn} \begin{rem} Notice by the Markov property that the process $\rho$ can be written as the recurrence \[ \rho_n = \rho_{n-1} + Z_n, \quad n \in \N, \] where $Z_n$ is an independent sample of $X_{\sigma_{B(0,r_n)}}$ under $\PP_0$. \end{rem} From the previous Remark, it is possible to rewrite $\rho_n$ for $n \in \N$ in terms of $x \in D$ and of $n$ independent random variables distributed according to $X_{\sigma_{B(0,1)}}$, as the following Lemma indicates. \begin{lem} The WoS process $\rho := \left(\rho_n\right)_{n \in \N}$ can be defined as follows: \begin{itemize} \item $\rho_0 = x$, $x \in D$; \item for $n \geq 1$, \begin{equation}\label{eq:paseo} \rho_n = \rho_{n-1} + r_n Y_n, \end{equation} where $Y_{n}$ is an independent sample of $X_{\sigma_{B(0,1)}}$ and $r_n = \dist(\rho_{n-1},\partial D)$. \end{itemize} \end{lem} \begin{proof} Note by the scaling property \eqref{eq:scaling} that \[ X_{t} \qquad \hbox{and} \qquad r_n X_{r_n^{-\alpha}t} \] have the same distribution for all $n \in \N$. Therefore \begin{equation}\label{eq:scaling1} \begin{aligned} \sigma_{B(0,r_n)} &= \inf \{t \geq 0 : X_{t} \notin B(0,r_n)\} \\ &= r_n^{\alpha} \inf \{r_n^{-\alpha}t \geq 0: r_n X_{r^{-\alpha}_n t} \notin B(0,r_n)\}\\ &= r_n^{\alpha} \inf \{s \geq 0: r_n X_s \notin B(0,r_n)\}\\ &= r_n^{\alpha} \inf\{s \geq 0: X_s \notin B(0,1) \} = r_n^{\alpha} \sigma_{B(0,1)}. \end{aligned} \end{equation} This equality and the scaling property imply that \begin{equation}\label{eq:scaling2} X_{\sigma_{B(0,1)}} \qquad \hbox{and} \qquad r_n^{-1}X_{r_n^{\alpha}\sigma_{B(0,1)}} \end{equation} are equal in law under $\PP_0$, and then from Remark 4.1, $Z_n$ and $r_n X_{\sigma_{B(0,1)}}$ have the same distribution under $\PP_0$.
We can conclude that for $n\geq 1$, $\rho_n$ can be written as the recurrence \[ \rho_n = \rho_{n-1} + r_n Y_n, \] where $Y_n$ is an independent sample of $X_{\sigma_{B(0,1)}}$. \end{proof} To study the WoS processes, we need some information about the random variable $X_{\sigma_{B(0,1)}}$. The following result gives the density of its distribution. \begin{teo}[Blumenthal, Getoor, Ray, 1961. \cite{Blumenthal1}]\label{teo:Blu} Suppose that $B(0,1)$ is the unit ball centered at the origin and write $\sigma_{B(0,1)} = \inf \{ t > 0 : X_t \notin B(0,1)\}$. Then, \begin{equation} \PP_0\left(X_{\sigma_{B(0,1)}} \in dy\right) = \pi^{-(d/2+1)} \Gamma \left(\frac d2 \right) \sin(\pi\alpha/2) \left|1-|y|^{2}\right|^{-\alpha/2} |y|^{-d} dy, \qquad |y| > 1. \end{equation} \end{teo} Using this result, one can prove a key formula for the moments of $X_{\sigma_{B(0,1)}}$. \begin{cor}\label{cor:Kab} For all $\alpha \in (0,2)$, $\beta \in [0,\alpha)$ we have \begin{equation}\label{eq:Kab} \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right] = \frac{\sin(\pi \alpha /2)}{\pi} \frac{\Gamma\left(1- \frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha-\beta}{2}\right)}{\Gamma\left(1-\frac{\beta}{2}\right)} =: K(\alpha,\beta). \end{equation} \end{cor} \begin{rem} Notice that the value of $K(\alpha,\beta)$ does not depend on the dimension $d$. \end{rem} \begin{rem} The condition $\beta < \alpha $ is necessary due to Corollary \ref{cor:Xmoment}. If $\beta \geq \alpha$ then $\E[|X_t|^{\beta}] = \infty$ for all $t>0$. Moreover, the integral \[ \int_1^{\infty} \frac{r^{\beta-1}}{(r^2-1)^{\alpha/2}}dr, \] obtained in the proof of Corollary \ref{cor:Kab}, does not converge if $\beta \geq \alpha$. \end{rem} \begin{proof}[Proof of Corollary \ref{cor:Kab}] Let $\alpha \in (0,2)$, $\beta \in [0,\alpha)$. Notice by Theorem \ref{teo:Blu} and the definition of the expectation that \begin{equation}\label{eq:E0X} \begin{aligned} \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right] &= \int_{|y|>1} |y|^{\beta} \PP_0 \left(X_{\sigma_{B(0,1)}} \in dy\right) \\ &= \pi^{-(d/2+1)} \Gamma \left(\frac d2 \right) \sin(\pi\alpha/2) \int_{|y|>1} \left|1-|y|^{2}\right|^{-\alpha/2} |y|^{\beta-d} dy. \end{aligned} \end{equation} Using spherical coordinates one has \[ \int_{|y|>1} \left|1-|y|^{2}\right|^{-\alpha/2} |y|^{\beta-d} dy = \int_{\mathbb{S}^{d-1}} \int_{1}^{\infty} \left|1-r^2\right|^{-\alpha/2} r^{\beta-d} r^{d-1} dr dS, \] where $\mathbb{S}^{d-1}$ is the unit $(d-1)$-sphere embedded in $\R^d$, whose surface area satisfies \cite{surface} \[ \left|\mathbb{S}^{d-1}\right| = \frac{2\pi^{d/2}}{\Gamma \left(\frac{d}{2}\right)}, \] and then \[ \int_{|y|>1} \left|1-|y|^{2}\right|^{-\alpha/2} |y|^{\beta-d} dy = \frac{2\pi^{d/2}}{\Gamma \left(\frac{d}{2}\right)} \int_1^{\infty} \frac{r^{\beta-1}}{(r^2-1)^{\alpha/2}} dr. \] Replacing this into \eqref{eq:E0X} gives \[ \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right] = \frac{2}{\pi} \sin(\pi \alpha/2)\int_1^{\infty} \frac{r^{\beta-1}}{(r^2-1)^{\alpha/2}}dr. \] Now we use the change of variables $u = 1/r$, so that \[ \int_1^{\infty} \frac{r^{\beta-1}}{(r^2-1)^{\alpha/2}}dr = \int_{0}^{1} \frac{1}{u^2} u^{1-\beta} \frac{u^{\alpha}}{(1-u^2)^{\alpha/2}} du = \int_0^1 u^{\alpha-\beta-1}(1-u^2)^{-\alpha/2}du. \] Using another change of variables, $t = u^2$, we have \[ \int_0^1 \frac{1}{2t^{\frac{1}{2}}} t^{\frac{\alpha-\beta-1}{2}}(1-t)^{-\frac{\alpha}{2}}dt = \frac{1}{2}\int_0^1 t^{\frac{\alpha-\beta}{2}-1}(1-t)^{1-\frac{\alpha}{2}-1} dt.
\] This implies that \[ \E_0\left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right] = \frac{\sin(\pi \alpha/2)}{\pi} \int_0^1 t^{\frac{\alpha-\beta}{2} - 1}(1-t)^{1-\frac{\alpha}{2} - 1}dt. \] The last integral has the form of the \emph{Beta function}, defined as \[ B(z,w) := \int_0^1 u^{z-1}(1-u)^{w-1}du. \] For full details on the Beta function see, e.g. \cite{beta}. In particular, the Beta function satisfies \[ B(z,w) = \frac{\Gamma(z)\Gamma(w)}{\Gamma(z+w)}. \] Finally, \[ \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right] = \frac{\sin(\pi \alpha/2)}{\pi} B\left(\frac{\alpha-\beta}{2},1-\frac{\alpha}{2}\right) = \frac{\sin(\pi\alpha/2)}{\pi}\frac{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha-\beta}{2}\right)}{\Gamma\left(1-\frac{\beta}{2}\right)}. \] \end{proof} \subsection{Relation between WoS and isotropic $\alpha$-stable process} Recall from Section \ref{Sect:4} that the process $(\rho_n)_{n\geq0}$ is related to a family of random variables distributed according to $X_{\sigma_{B(0,1)}}$. We now want a relation between the processes $(X_t)_{t \geq0}$ and $(\rho_n)_{n\geq 0}$. For this define $\widetilde{r}_1 := \dist(x,\partial D)$, $\widetilde{B}_1 := B(x,\widetilde{r}_1)$, $\tau_1 := \sigma_{\widetilde{B}_1}$ and for all $n \geq 1$ define: \begin{align} \widetilde{r}_{n+1} &:= \hbox{dist}(X_{\mathcal{I}(n)},\partial D), \\ \widetilde{B}_{n+1} &:=B\left(X_{\mathcal{I}(n)},\widetilde{r}_{n+1}\right), \\ \tau_{n+1} &:= \inf\{t \geq 0 : X_{t + \mathcal{I}(n)} \notin \widetilde{B}_{n+1}\}, \label{eq:tau_n} \end{align} where \begin{equation}\label{eq:I_n} \mathcal{I}(n) := \sum_{i=1}^{n} \tau_i, \qquad \mathcal{I}(0) = 0. \end{equation} $\mathcal{I}(n)$ represents the total time the process $X_t$ takes to exit the $n$ balls $\widetilde{B}_{1},...,\widetilde{B}_{n}$. The following Lemma establishes that for all $n \in \N$, $\rho_n$ has the same distribution as the process $(X_t)_{t\geq0}$ after exiting the $n$ balls $\widetilde{B}_{1},...,\widetilde{B}_{n}$, that is, as $X_{\mathcal{I}(n)}$. \begin{lem} For all $n \geq 0$ and $x \in D$, $\rho_n$ and $X_{\mathcal{I}(n)}$ have the same distribution starting at $x$. \end{lem} \begin{proof} Note that under $\PP_{X_{\mathcal{I}(n-1)}}$, $X_{\mathcal{I}(n)}$ has the same distribution as $X_{\sigma_{\widetilde{B}_n}}$. Thus, by the Markov property and the scaling property one has \[ X_{\mathcal{I}(n)} = X_{\mathcal{I}(n-1)} + \widetilde{r}_{n} Y'_n, \] where $Y'_n$ is an independent sample of $X_{\sigma_{B(0,1)}}$ under $\PP_0$. From the two constructions above, together with an induction argument, one has that $\rho_n$ and $X_{\mathcal{I}(n)}$ have the same distribution starting at $x$, for all $n \geq 0$ and $x \in D$. \end{proof} Let \begin{equation}\label{eq:N} N = \min \{n \in \N : \rho_n \notin D\}. \end{equation} This random variable describes the number of balls $\widetilde{B}_{n}$ that the process $(X_t)_{t\geq0}$ exits before exiting the domain $D$. The following Theorem ensures that $N$ is almost surely finite. \begin{teo}[ \cite{AK1}, Theorem 5.4] \label{teo:N} Let $D$ be an open and bounded set. Then for all $x \in D$, there exists a constant $\widetilde{p} = \widetilde{p}(\alpha,d) > 0$ independent of $x$ and $D$, and a random variable $\Gamma$ such that $N \leq \Gamma$ $\PP_x$-a.s., where \begin{equation} \PP_x(\Gamma = k) = (1-\widetilde{p})^{k-1}\widetilde{p}, \hspace{.5cm} k \in \N.
\end{equation} \end{teo} \begin{rem} Although the random variable $\Gamma$ has the same distribution for each $x \in D$, it is not the same random variable for different $x \in D$. \end{rem} \begin{rem} This theorem implies that \[ \PP_x(N>n) \leq \PP_x(\Gamma > n) = (1-\widetilde{p})^n, \qquad n \in \N. \] \end{rem} The definitions of $\mathcal{I}(n)$ and $N$ in \eqref{eq:I_n} and \eqref{eq:N} imply that the total time $(X_t)_{t \geq 0}$ takes to exit the $N$ balls $\widetilde{B}_{1},...,\widetilde{B}_{N}$ is equal to the time $(X_t)_{t\geq0}$ takes to exit $D$. More precisely: \begin{lem}\label{lem:INsigmaD} For $x \in D$, let $X_{t}$ be an isotropic $\alpha$-stable process. Then, a.s. \[ \mathcal{I}(N) = \sigma_D. \] \end{lem} \begin{proof} For the inequality $\geq$, note by the definition of $N$ that \[ X_{\mathcal{I}(N)} \notin D. \] Recall that $\sigma_D$ is the infimum of the times $t\geq 0$ such that $X_t \notin D$, hence \[ \mathcal{I}(N) \geq \sigma_D. \] For $\leq$ suppose by contradiction that $\sigma_D < \mathcal{I}(N)$. If $\sigma_D < \mathcal{I}(N-1)$, then \[ X_{\mathcal{I}(N-1)} \notin D. \] This contradicts the definition of $N$, because $N-1$ is a natural number less than $N$ satisfying the above condition. Therefore $\mathcal{I}(N-1) \leq \sigma_D$, and this implies that there exists $t^* \geq 0$ such that \[ \mathcal{I}(N)>\sigma_D = \mathcal{I}(N-1) + t^*. \] Using the definition of $\mathcal{I}(n)$, $n \in \N$, and the assumption $\sigma_D < \mathcal{I}(N)$, one has \[ t^{*} < \tau_N, \] but \[ X_{\sigma_D} = X_{\mathcal{I}(N-1) + t^*} \notin D, \] therefore, from the definition of $\tau_N$ in \eqref{eq:tau_n}, \[ \tau_N \leq t^*, \] a contradiction. Therefore $\mathcal{I}(N) \leq \sigma_D$ and we can conclude that \[ \mathcal{I}(N) = \sigma_D. \] \end{proof} \begin{rem}\label{rem:igualdades} From the relation between $X_{\mathcal{I}(n)}$ and $\rho_n$ for $n \in \N$, it follows that \[ \E_x \left[\rho_{N}\right] = \E_x \left[X_{\mathcal{I}(N)}\right] = \E_x \left[X_{\sigma_D}\right]. \] \end{rem} Remark \ref{rem:igualdades} gives us a relation between $(X_{t})_{t\geq0}$ and $(\rho_{n})_{n\geq0}$. Figure \ref{fig:1} shows an example of the relation between the WoS and the isotropic $\alpha$-stable processes exiting a bounded domain $D$. \begin{figure}[h] \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1.2,xscale=1.2] \draw (195.24,55.73) .. controls (217.25,95.11) and (227.6,75.21) .. (240.42,68.57) .. controls (253.24,61.92) and (289.55,71.86) .. (274.05,137.07) .. controls (258.54,202.28) and (185.1,157.73) .. (110.07,205.3) .. controls (35.05,252.88) and (19.82,205.49) .. (46.3,117.58) .. controls (72.78,29.67) and (103.97,1.75) .. (133.95,3.47) .. controls (163.94,5.2) and (173.23,16.35) .. (195.24,55.73) -- cycle ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ] (180.02,124.21) .. controls (180.37,124) and (180.82,124.11) .. (181.03,124.46) .. controls (181.24,124.81) and (181.13,125.26) .. (180.78,125.47) .. controls (180.44,125.68) and (179.98,125.57) .. (179.77,125.22) .. controls (179.56,124.87) and (179.67,124.42) .. (180.02,124.21)(151.48,77.1) .. controls (177.85,61.13) and (212.17,69.55) .. (228.14,95.92) .. controls (244.12,122.28) and (235.69,156.61) .. (209.33,172.58) .. controls (182.96,188.56) and (148.64,180.13) .. (132.66,153.77) .. controls (116.69,127.4) and (125.11,93.08) ..
(151.48,77.1) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (106,93.8) -- (95.75,95.1) -- (98.7,97.12) -- (85.84,101.38) -- (87.28,84.36) -- (91.13,103.59) -- (107.97,107.36) -- (99.46,114.97) -- (108.95,115.04) -- (115.34,102.31) -- (121.17,103.62) -- (93,123.15) -- (103.15,123.38) -- (95.98,129.6) -- (112.65,123.47) -- (107.16,121.01) -- (117.44,132.83) -- (122.75,157.29) -- (121.71,160.44) -- (125.11,150.19) -- (107.45,142.5) -- (117.01,151.44) -- (111.79,151.97) -- (125.71,180.92) -- (126.41,167.14) -- (138.39,172.88) -- (135.05,148.33) -- (181.74,125.66) -- (173.3,103.52) -- (164.6,142.37) -- (150.83,112.77) -- (136.51,102.42) -- (159.9,81.48) -- (159.78,86.55) -- (171.83,88.91) -- (182.93,81.87) -- (198.61,84.71) -- (181.32,95.11) -- (185.65,110.7) -- (198.11,106.69) -- (224.57,122.14) -- (227.9,136.14) -- (239.01,137.97) -- (255.25,158.25) -- (295.67,145) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ] (237.53,138.27) .. controls (237.78,138.11) and (238.11,138.2) .. (238.27,138.45) .. controls (238.42,138.71) and (238.34,139.04) .. (238.09,139.19) .. controls (237.83,139.34) and (237.5,139.26) .. (237.35,139.01) .. controls (237.19,138.75) and (237.27,138.42) .. (237.53,138.27)(220.67,110.45) .. controls (236.29,100.98) and (256.63,105.97) .. (266.09,121.59) .. controls (275.55,137.21) and (270.56,157.55) .. (254.94,167.01) .. controls (239.32,176.48) and (218.99,171.49) .. (209.52,155.87) .. controls (200.06,140.25) and (205.05,119.91) .. (220.67,110.45) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ] (122.62,159.2) .. controls (122.88,159.04) and (123.22,159.12) .. (123.38,159.39) .. controls (123.54,159.65) and (123.46,159.99) .. (123.2,160.15) .. controls (122.93,160.31) and (122.59,160.22) .. (122.43,159.96) .. controls (122.27,159.7) and (122.36,159.36) .. (122.62,159.2)(105.24,130.52) .. controls (121.35,120.76) and (142.31,125.91) .. (152.07,142.01) .. controls (161.82,158.11) and (156.68,179.08) .. (140.57,188.83) .. controls (124.47,198.59) and (103.51,193.44) .. (93.75,177.34) .. controls (84,161.24) and (89.14,140.27) .. (105.24,130.52) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ] (105.2,91.8) .. controls (105.56,91.58) and (106.04,91.7) .. (106.26,92.07) .. controls (106.49,92.43) and (106.37,92.91) .. (106,93.14) .. controls (105.63,93.36) and (105.16,93.24) .. (104.93,92.87) .. controls (104.71,92.51) and (104.83,92.03) .. (105.2,91.8)(80.85,51.62) .. controls (103.41,37.95) and (132.78,45.16) .. (146.45,67.72) .. controls (160.12,90.28) and (152.91,119.65) .. (130.35,133.32) .. controls (107.79,146.99) and (78.42,139.78) .. (64.75,117.22) .. controls (51.08,94.66) and (58.29,65.29) .. (80.85,51.62) ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (104.48,90.73) .. controls (105.14,90.31) and (106.02,90.51) .. (106.44,91.17) .. controls (106.86,91.83) and (106.67,92.71) .. (106,93.14) .. controls (105.34,93.56) and (104.46,93.36) .. (104.04,92.7) .. controls (103.62,92.03) and (103.81,91.15) .. (104.48,90.73) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (122.15,158.47) .. controls (122.81,158.05) and (123.69,158.25) .. (124.11,158.91) .. controls (124.53,159.58) and (124.33,160.46) .. (123.67,160.88) .. controls (123.01,161.3) and (122.13,161.1) .. 
(121.71,160.44) .. controls (121.29,159.77) and (121.48,158.89) .. (122.15,158.47) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (180.21,122.59) .. controls (180.88,122.17) and (181.76,122.37) .. (182.18,123.03) .. controls (182.6,123.69) and (182.4,124.57) .. (181.74,125) .. controls (181.07,125.42) and (180.2,125.22) .. (179.77,124.56) .. controls (179.35,123.89) and (179.55,123.01) .. (180.21,122.59) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (237.04,136.86) .. controls (237.71,136.44) and (238.59,136.64) .. (239.01,137.3) .. controls (239.43,137.96) and (239.23,138.84) .. (238.57,139.27) .. controls (237.91,139.69) and (237.03,139.49) .. (236.6,138.83) .. controls (236.18,138.16) and (236.38,137.28) .. (237.04,136.86) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (104.48,90.73) .. controls (105.36,90.17) and (106.52,90.43) .. (107.08,91.31) .. controls (107.64,92.19) and (107.37,93.36) .. (106.5,93.91) .. controls (105.62,94.47) and (104.45,94.21) .. (103.89,93.33) .. controls (103.34,92.45) and (103.6,91.29) .. (104.48,90.73) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (121.42,158.37) .. controls (122.3,157.81) and (123.47,158.07) .. (124.03,158.95) .. controls (124.58,159.83) and (124.32,161) .. (123.44,161.55) .. controls (122.56,162.11) and (121.4,161.85) .. (120.84,160.97) .. controls (120.28,160.09) and (120.55,158.93) .. (121.42,158.37) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (180.21,122.59) .. controls (181.09,122.03) and (182.26,122.29) .. (182.82,123.17) .. controls (183.37,124.05) and (183.11,125.22) .. (182.23,125.77) .. controls (181.35,126.33) and (180.19,126.07) .. (179.63,125.19) .. controls (179.07,124.31) and (179.34,123.15) .. (180.21,122.59) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (237.08,136.93) .. controls (237.96,136.37) and (239.12,136.64) .. (239.68,137.51) .. controls (240.24,138.39) and (239.97,139.56) .. (239.1,140.12) .. controls (238.22,140.67) and (237.05,140.41) .. (236.49,139.53) .. controls (235.94,138.65) and (236.2,137.49) .. (237.08,136.93) -- cycle ; \draw [color={rgb, 255:red, 245; green, 166; blue, 35 } ,draw opacity=1 ][fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=1 ] (294.66,143.41) .. controls (295.54,142.85) and (296.7,143.11) .. (297.26,143.99) .. controls (297.82,144.87) and (297.56,146.03) .. (296.68,146.59) .. controls (295.8,147.15) and (294.63,146.89) .. (294.07,146.01) .. controls (293.52,145.13) and (293.78,143.97) .. 
(294.66,143.41) -- cycle ; \draw (99.83,159.69) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$\rho _{1}$}; \draw (184.82,126.57) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$\rho _{2}$}; \draw (240.91,122.41) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$\rho _{3}$}; \draw (301.85,144.45) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$\rho _{4}$}; \draw (99.17,73.96) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$x$}; \draw (50.46,158.09) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize] {$D$}; \end{tikzpicture} \caption{Illustration of isotropic $\alpha$-stable and WoS processes starting at $x$ and exiting a domain $D$. The blue line represents the $\alpha$-stable process $(X_t)_{t\geq0}$, the orange dots are the WoS process $(\rho_n)_{n\geq 0}$ and the red circles correspond to the balls $B_n$ of Definition \ref{defn:WoS}. In this case $N=4$ and $\rho_4 = X_{\sigma_D}$.}\label{fig:1} \end{figure} \section{Stochastic representation of the Fractional Laplacian}\label{Sect:5} \subsection{Stochastic Representation} Recall Problem \eqref{eq:1.1}. The following theorem gives a stochastic representation of the solution of Problem \eqref{eq:1.1} in terms of the process $(X_t)_{t\geq 0}$. The proof of this Theorem can be found in \cite{AK1}. \begin{teo}[\cite{AK1}, Theorem 6.1]\label{teo:sol} Let $d \geq 2$ and assume that $D$ is a bounded domain in $\R^d$. Additionally, assume \eqref{Hg0} and \eqref{Hf0}. Then there exists a {\bf unique continuous solution} of \eqref{eq:1.1} in $L^1_\alpha(\R^d)$, given by the explicit formula \begin{equation}\label{hipotesis_imp_2} u(x) = \E_x \left[ g(X_{\sigma_D}) \right] + \E_x \left[ \int_0^{\sigma_D} f(X_s) ds \right], \end{equation} valid for any $x \in D$. \end{teo} \medskip The previous representation can be expressed in terms of the WoS process. For this, define the expected occupation measure of the stable process prior to exiting a ball of radius $r>0$ centered at $x \in \R^d$ as follows: \begin{equation}\label{eq:V_r(x)} V_r(x,dy) := \int_0^{\infty} \PP_x\left(X_t \in dy, t < \sigma_{B(x,r)}\right) dt, \qquad x \in \R^d, \quad |y-x|<r, \quad r > 0. \end{equation} We have the following result for $V_1(0,dy)$. \begin{teo}[\cite{AK1}, Theorem 6.2]\label{teo:V0dy} The measure $V_1(0,dy)$ is given, for $|y|<1$, by \begin{equation}\label{eq:V_1(0)} V_1(0,dy) = 2^{-\alpha} \pi^{-d/2} \frac{\Gamma(d/2)}{\Gamma(\alpha/2)^2}|y|^{\alpha-d}\left( \int_0^{|y|^{-2}-1} (u+1)^{-d/2} u^{\alpha/2-1}du \right)dy. \end{equation} \end{teo} Denote $\displaystyle V_r(x,f(\cdot)) = \int_{|y-x|<r}f(y) V_r(x,dy)$ for a bounded measurable function $f$. $V_r(x,f(\cdot))$ is the integral of $f$ with respect to the measure $V_{r}(x,dy)$ over the ball $B(x,r)$. An important property of this quantity is the following scaling identity: for $r>0$ and $x \in \R^d$, \begin{equation}\label{eq:V_prop} V_r(x,f(\cdot)) = r^{\alpha}\, V_{1}(0,f(x+r\cdot)). \end{equation} The proof of this property can be found in \cite{AK1}. The following Lemma uses the WoS process and Theorem \ref{teo:V0dy}. Recall that $(\rho_n)_{n=1,...,N}$ represents the WoS process defined in Section \ref{Sect:4}, and $r_n = \dist(\rho_{n-1},\partial D)$. \begin{lema} [\cite{AK1}, Lemma 6.3] \label{lem:sol2} For $x \in D$, $g \in L_{\alpha}^1(D^c)$ and $f \in C^{\alpha + \varepsilon_0}(\overline{D})$ we have the representation \begin{equation} u(x) = \E_{x} \left[ g(\rho_N) \right] + \E_{x}\left[\sum_{n=1}^{N} r_n^{\alpha} V_1(0,f(\rho_{n-1} +r_n\cdot))\right].
\end{equation} \end{lema} \begin{rem}\label{rem:WOSf} Recall that $\rho_n$ and $X_{\mathcal{I}(n)}$ are equal in law under $\PP_x$ for all $n \in \N$. Therefore we can write \begin{equation}\label{eq:solWOS} u(x) = \E_{x} \left[ g\left(X_{\mathcal{I}(N)}\right) \right] + \E_{x}\left[\sum_{n=1}^{N} r_n^{\alpha} V_1\left(0,f\left(X_{\mathcal{I}(n-1)} +r_n\cdot\right)\right)\right]. \end{equation} \end{rem} \subsection{Equivalent representations of the non-homogeneous solution} Consider again Problem \eqref{eq:1.1}. Recall from Remark \ref{rem:WOSf} that its solution can be written as \begin{equation*} u(x) = \E_{x} \left[ g\left(X_{\mathcal{I}(N)}\right) \right] + \E_{x}\left[\sum_{n=1}^{N} r_n^{\alpha} V_1(0,f(X_{\mathcal{I}(n-1)} +r_n\cdot))\right]. \end{equation*} Notice also that, by the definition of $V_1(0,f(\cdot))$, this quantity can be expressed as the integral of $f$ with respect to the measure $V_1(0,dy)$ on $B(0,1)$. This measure is not necessarily a probability measure, so we are going to normalize it. For this, define for all $d \geq 2$, $d \in \N$, and $\alpha \in (0,2)$, \[ \kappa_{d,\alpha} = \int_{B(0,1)} V_1(0,dy). \] In the following Lemma we prove that $\kappa_{d,\alpha}$ is positive and finite. \begin{lem} For all $d\geq 2$ and $\alpha \in (0,2)$, we have that $0<\kappa_{d,\alpha} <+\infty$. \end{lem} \begin{proof} Notice first from Theorem \ref{teo:V0dy} that \[ \kappa_{d,\alpha} = \widetilde{c}_{d,\alpha} \int_{B(0,1)} |y|^{\alpha-d} \left(\int_{0}^{|y|^{-2}-1} (u+1)^{-d/2}u^{\alpha/2-1}du \right)dy, \] where $\displaystyle \widetilde{c}_{d,\alpha} = 2^{-\alpha}\pi^{-d/2} \frac{\Gamma(d/2)}{\Gamma(\alpha/2)^2}$. Now we work with the interior integral. With the change of variables $\displaystyle u = \frac{1-t}{t}$ and standard properties of the integral, one has \begin{align*} \int_{0}^{|y|^{-2}-1} (u+1)^{-d/2}u^{\alpha/2-1}du &= \int_1^{|y|^2} t^{d/2} \left(\frac{1-t}{t}\right)^{\alpha/2-1}(-t^{-2})dt\\ &= \int_0^1 t^{d/2-\alpha/2-1}(1-t)^{\alpha/2-1}dt-\int_0^{|y|^2}t^{d/2-\alpha/2-1}(1-t)^{\alpha/2-1}dt. \end{align*} For $z,w > 0$ and $x \in [0,1]$, let $B(z,w)$ and $I(x; z,w)$ be the Beta and the Incomplete Beta functions respectively, defined as \begin{align*} B(z,w) &:= \int_0^1 u^{z-1}(1-u)^{w-1}du,\\ I(x;z,w) &:= \frac{1}{B(z,w)} \int_0^{x} u^{z-1}(1-u)^{w-1} du. \end{align*} For further details on these functions the reader can consult \cite{beta}. Notice that $\kappa_{d,\alpha}$ can be written in terms of $B(z,w)$ and $I(x;z,w)$. Indeed, \begin{equation*} \kappa_{d,\alpha} =\widetilde{c}_{d,\alpha} B \left(\frac d2-\frac\alpha2,\frac\alpha2 \right)\int_{B(0,1)} |y|^{\alpha-d}\left( 1-I \left(|y|^2;\frac d2-\frac\alpha2,\frac\alpha2 \right)\right)dy. \end{equation*} Note by the properties of the Beta function that \[ B\left(\frac{d}{2}-\frac{\alpha}{2},\frac{\alpha}{2}\right) = \frac{\Gamma\left(\frac{d}{2}-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)}{\Gamma\left(\frac{d}{2}\right)}. \] The Gamma function is well defined and positive on $(0,\infty)$. Since $d \geq 2 > \alpha$, we then have \[ 0 < B\left(\frac{d}{2}-\frac{\alpha}{2},\frac{\alpha}{2}\right) < + \infty. \] On the other hand, note by the definition of $I(x;z,w)$ that for $x < 1$, \[ 0 \leq I(x;z,w) < \frac{1}{B(z,w)} \int_0^1 u^{z-1} (1-u)^{w-1}du = 1. \] Then for all $|y|^2<1$, \[ 0 < 1 - I \left(|y|^2; \frac{d}{2}-\frac{\alpha}{2}, \frac{\alpha}{2}\right) \leq 1. \] Therefore in $\kappa_{d,\alpha}$ we are integrating the product of two positive functions over a set of positive measure.
This implies that \[ 0 < \kappa_{d,\alpha} \leq \widetilde{c}_{d,\alpha} B\left(\frac{d}{2}-\frac{\alpha}{2},\frac{\alpha}{2}\right) \int_{B(0,1)} |y|^{\alpha - d} dy. \] The last integral can be computed using spherical coordinates, and its value is finite. Finally, we conclude that \[ 0 < \kappa_{d,\alpha} < + \infty. \] \end{proof} Now we can define a probability measure $\mu$ on $B(0,1)$ given by \[ \mu(dy) := \kappa_{d,\alpha}^{-1} V_1(0,dy). \] Therefore, for any bounded measurable function $f$ we have \begin{align} V_1(0,f(X_{\mathcal{I}(n-1)}+r_n\cdot)) &= \int_{B(0,1)} f(X_{\mathcal{I}(n-1)}+r_n y) V_1(0,dy) \notag\\ &= \kappa_{d,\alpha} \int_{B(0,1)} f(X_{\mathcal{I}(n-1)}+r_n y) \mu(dy) \notag\\ &= \kappa_{d,\alpha} \E^{(\mu)}\left[f(X_{\mathcal{I}(n-1)}+r_n \cdot)\right], \end{align} where $\E^{(\mu)}$ corresponds to the expectation with respect to the probability measure $\mu$ on $B(0,1)$. With this representation, we can rewrite the solution of \eqref{eq:1.1} as \begin{equation}\label{eq:solfinal} u(x) = \E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] + \E_x \left[ \sum_{n=1}^{N} r_{n}^\alpha \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right] \right]. \end{equation} From the construction of $\kappa_{d,\alpha}$, the following properties hold. \begin{lem}\label{lem:aux_f} One has \begin{enumerate} \item \[ \E_x \left[ \sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha}\right] = \E_x \left[ \sigma_D \right], \] \item \[ \E_x \left[ \left|\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha} \right|^2\right] \leq \E_x \left[ \sigma_D^2 \right]. \] \end{enumerate} \end{lem} \begin{proof}~{} \begin{enumerate} \item Notice from the definition of $V_1(0,f(\cdot))$, with $f\equiv 1$, that \[ \kappa_{d,\alpha} = V_1\left(0,1\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right). \] It follows from \eqref{eq:V_prop} that \[ r_n^{\alpha}\kappa_{d,\alpha} = V_{r_n} \left(X_{\mathcal{I}(n-1)},1(\cdot)\right). \] Moreover, \[ \begin{aligned} \E_x\left[\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha}\right] = &~{} \E_x\left[\sum_{n=1}^{N} V_{r_n}\left(X_{\mathcal{I}(n-1)},1(\cdot)\right)\right] \\ =&~{} \E_x \left[ \int_0^{\sigma_D} 1(X_s) ds\right] = \E_x \left[\sigma_D \right]. \end{aligned} \] \item From the definition of $V_r(x,f(\cdot))$ with $f \equiv 1$, it follows that \[ V_{r_n}\left(X_{\mathcal{I}(n-1)},1(\cdot)\right) = \E_{X_{\mathcal{I}(n-1)}} \left[\int_{0}^{\sigma_{B(X_{\mathcal{I}(n-1)},r_n)}} 1 (X_t) dt\right] = \E_{X_{\mathcal{I}(n-1)}} \left[\sigma_{B(X_{\mathcal{I}(n-1)},r_n)}\right]. \] By the definition of $\tau_n$, $n \in \N$, in \eqref{eq:tau_n} and the Markov property one has \[ \E_{X_{\mathcal{I}(n-1)}}\left[\sigma_{B(X_{\mathcal{I}(n-1)},r_n)}\right] = \E_0 \left[\tau_{n}\right]. \] Then, Jensen's inequality implies \[ \E_x \left[\left|\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha}\right|^2\right] = \E_x \left[\left|\sum_{n=1}^{N} \E_0 \left[\tau_n\right]\right|^2\right] \leq \E_x \left[ \E_0 \left[\left|\sum_{n=1}^{N} \tau_n\right|^2\right]\right]. \] Finally, by the tower property, \[ \E_x \left[\left|\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha}\right|^2\right] \leq \E_x \left[\mathcal{I}(N)^2\right] = \E_x\left[\sigma_D^2\right]. \] \end{enumerate} \end{proof} \medskip \section{Approximation of solutions of the Fractional Dirichlet problem using DNNs: the boundary data case}\label{Sect:6} As usual, Problem \eqref{eq:1.1} can be decomposed into two subproblems, which will be treated separately. Before doing so, we give a schematic illustration of how the representation \eqref{eq:solfinal} can be simulated in practice (see the sketch below); we then deal first with the homogeneous case.
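The following minimal \texttt{Python}/\texttt{numpy} sketch is included only as an illustration and is not part of the proofs; all function names appearing in it are chosen here for convenience. It simulates the first (homogeneous) term $\E_x[g(\rho_N)]$ of \eqref{eq:solfinal} by a plain Monte Carlo average of the WoS recursion \eqref{eq:paseo} with stopping rule \eqref{eq:N}, for the toy domain $D=B(0,1)$. The exit position $X_{\sigma_{B(0,1)}}$ is sampled exactly: by isotropy its direction is uniform on the sphere, and by Theorem \ref{teo:Blu} together with the change of variables used in the proof of Corollary \ref{cor:Kab}, its radius has the law of $T^{-1/2}$ with $T\sim \mathrm{Beta}(\frac{\alpha}{2},1-\frac{\alpha}{2})$.

\begin{verbatim}
# Illustration only (not part of the proofs): Monte Carlo simulation of the
# homogeneous term  E_x[ g(rho_N) ]  via the Walk-on-Spheres recursion
# rho_n = rho_{n-1} + r_n Y_n,  with D = B(0,1) taken as a toy domain.
import numpy as np
rng = np.random.default_rng(1)

def sample_exit_unit_ball(alpha, d):
    """One sample of X_{sigma_{B(0,1)}} under P_0: uniform direction on the
    sphere (isotropy) and radius R = T**(-1/2), T ~ Beta(alpha/2, 1-alpha/2),
    which reproduces the radial law of the Blumenthal-Getoor-Ray theorem."""
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    r = rng.beta(alpha / 2.0, 1.0 - alpha / 2.0) ** (-0.5)
    return r * u

def wos_exit_point(x, alpha, dist_boundary):
    """Iterate rho_n = rho_{n-1} + r_n Y_n until rho_n leaves D (n = N)."""
    rho = np.array(x, dtype=float)
    while True:
        r = dist_boundary(rho)
        if r <= 0.0:                      # rho is outside D: return rho_N
            return rho
        rho = rho + r * sample_exit_unit_ball(alpha, rho.size)

def u_homogeneous(x, g, alpha, dist_boundary, M=5000):
    """Monte Carlo estimate of u(x) = E_x[ g(rho_N) ] (homogeneous case)."""
    return np.mean([g(wos_exit_point(x, alpha, dist_boundary))
                    for _ in range(M)])

# toy usage: D = B(0,1), exterior datum g(y) = 1/(1 + |y|), alpha = 3/2
dist_D = lambda y: 1.0 - np.linalg.norm(y)  # signed distance (negative outside D)
g = lambda y: 1.0 / (1.0 + np.linalg.norm(y))
print(u_homogeneous(np.array([0.3, 0.2]), g, alpha=1.5, dist_boundary=dist_D))
\end{verbatim}

In the non-homogeneous case one would additionally accumulate along the walk the terms $r_n^{\alpha}\kappa_{d,\alpha}\,\E^{(\mu)}\left[f(\rho_{n-1}+r_n\cdot)\right]$ of \eqref{eq:solfinal}, which requires sampling from the normalized measure $\mu$.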
\subsection{Homogeneous Fractional Laplacian}\label{Sect:6p1} We consider \eqref{eq:1.1} with $f \equiv 0$, namely, \begin{equation} \label{eq:2.1} \left\{ \begin{array}{rll} (-\Delta)^{\alpha/2}u(x)&= 0 & \hbox{ for } x \in D,\\ u(x)&= g(x) & \hbox{ for } x \in D^c. \end{array} \right. \end{equation} Note that under \eqref{Hg0}, one has from \eqref{hipotesis_imp_2} \begin{equation} \label{eq:2.2} u(x) = \E_x \left[ g(X_{\sigma_D}) \right], \hspace{.5cm} x \in D. \end{equation} The main idea of this section is to approximate the solution \eqref{eq:2.2} by a deep neural network with ReLu activation, with an accuracy $\varepsilon>0$. For this we are going to assume that $g$ can be approximated by a ReLu DNN satisfying several hypotheses. These hypotheses are expressed in the following assumption. \medskip Recall that $\vertiii{\cdot}$ represents the maximum of the layer dimensions, introduced in \eqref{norma}, $\mathcal R$ is the realization of a DNN as in \eqref{Realization}, and $\mathcal D$ was introduced in \eqref{P_D}. \begin{ass}\label{Sup:g} Let $d\geq 2$. Let $g:D^c\to \R$ satisfy \eqref{Hg0}. Let $\delta_g \in (0,1)$, $a,b \geq 1$, $p \in (1,\alpha)$ and $B >0$. Then there exists a {\it ReLu DNN} $\Phi_g \in$ \textbf{N} with \begin{enumerate} \item $\mathcal R(\Phi_g):D^c \to \R$ is continuous, and \item The following are satisfied: \begin{align} |g(y)-\left(\mathcal{R}(\Phi_g)\right)(y)| &\leq\delta_g Bd^p(1+|y|)^{p}, \hspace{.5cm} \forall y \in D^c. \tag{Hg-1} \label{H1}\\ |\left(\mathcal{R}(\Phi_g)\right)(y)| &\leq Bd^p(1+|y|)^{p}, \hspace{1.0cm} \forall y \in D^c. \tag{Hg-2} \label{H2}\\ \vertiii{\mathcal{D}(\Phi_g)} &\leq B d^b \delta_g^{-a}, \tag{Hg-3} \label{H3} \end{align} \end{enumerate} \end{ass} \begin{rem} We use the hypotheses presented in \cite{Hutz} for the approximation of functions defined over unbounded sets. \end{rem} In addition to the previous assumptions, we will require \emph{structural properties} related to the domain $D$ itself. \begin{ass}\label{Sup:D} Let $\alpha\in (1,2)$, $a,b \geq 1$ and $B>0$. Suppose that the bounded domain $D$ enjoys the following structure. \begin{enumerate} \item For any $\delta_{\dist} \in (0,1)$, the function $x \mapsto \dist(x,\partial D)$ can be approximated by a ReLu DNN $\Phi_{\dist} \in \textbf{N}$ such that \[ \sup_{x \in D} \left|\dist(x,\partial D) - \left(\mathcal{R}(\Phi_{\dist})\right)(x)\right| \leq \delta_{\dist}, \tag{HD-1} \label{HD-1} \] and \[ \vertiii{\mathcal{D}(\Phi_{\dist})} \leq Bd^b\lceil \log(\delta_{\dist}^{-1}) \rceil^{a}. \tag{HD-2} \label{HD-2} \] \item For all $\delta_\alpha \in (0,1)$ there exists a ReLu DNN $\Phi_{\alpha} \in \textbf{N}$ such that \begin{equation} \sup_{|x|\leq\diam(D)}\left|\left(\mathcal{R}(\Phi_{\alpha})\right)(x) -x^{\alpha}\right| \leq \delta_{\alpha}, \tag{HD-3} \label{HD-3} \end{equation} and \[ \vertiii{\mathcal{D}(\Phi_{\alpha})} \leq Bd^b \delta_{\alpha}^{-a}. \tag{HD-4} \label{HD-4} \] Moreover, $\mathcal{R}(\Phi_{\alpha})$ is an $L_{\alpha}$-Lipschitz function, for some $L_{\alpha}>0$, for $|x|\leq \diam(D)$. \end{enumerate} \end{ass} \begin{rem} Notice that Assumption \eqref{HD-3} is ensured by Hornik's Theorem \cite{Hornik}. Also, \eqref{HD-2} may seem too demanding because of the log term, but this is precisely the situation in the case of a ball, see \cite{Grohs}. \end{rem} In the next proposition we prove the existence of a ReLu DNN that approximates well the solution of the Dirichlet problem without source.
\begin{prop}\label{Prop:homo} Let $\alpha \in (1,2)$, $L_g >0$ and \begin{equation}\label{condiciones} \hbox{ $p,s \in (1,\alpha)$ such that $s < \frac{\alpha}{p}$ \quad and \quad $q \in \left[s,\frac{\alpha}{p}\right)$. } \end{equation} Suppose that the function $g$ satisfies \eqref{Hg0} and Assumption \ref{Sup:g}. Suppose additionally that $D$ satisfies Assumption \ref{Sup:D}. Then for all $\varepsilon \in (0,1)$ there exists a ReLu DNN $\Psi_{1,\varepsilon}$ that satisfies \begin{enumerate} \item Proximity in $L^q(D)$: \begin{equation} \label{eq:2.3} \left(\int_{D} \left|\E_{x}\left[g\left(X_{\mathcal{I}(N)}\right)\right]-\left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^q dx\right)^{\frac{1}{q}} \leq \varepsilon. \end{equation} \item Realization: $\mathcal{R}(\Psi_{1,\varepsilon})$ has the following structure: there exist $M \in \N$, $\overline{N}_{i} \in \N$, and $Y_{i,n}$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$, for $i=1,...,M$, $n=1,...,\overline{N}_i$, such that for all $x \in D$, \begin{equation} \mathcal{R}(\Psi_{1,\varepsilon})(x) = \frac{1}{M} \sum_{i=1}^{M} \left(\mathcal{R}(\Phi_g) \circ \mathcal{R}(\Phi^{i}_{\overline{N}_i})\right)\left(x\right), \end{equation} where $\Phi^{i}_{\overline{N}_i}$ is a ReLu DNN approximating $X^i_{\mathcal{I}(\overline{N}_i)} = X^i_{\mathcal{I}(\overline{N}_i)}(x,Y_{i,1},...,Y_{i,\overline{N}_i})$. \item Bounds: There exists $\widetilde{B}>0$ such that \begin{equation} \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} \leq \widetilde{B}|D|^{\frac{1}{q}\left(2a + ap+\frac{s}{s-1}(1+2a+ap)\right)} d^{b+2ap+2ap^2 + \frac{ps}{s-1}(1+2a + ap) }\varepsilon^{-a-\frac{s}{s-1}(1+2a+ap)}. \end{equation} \end{enumerate} \end{prop} \begin{rem} The hypotheses \eqref{condiciones} are nonempty if $\alpha\in (1,2)$. This requirement is standard in the literature devoted to the Fractional Laplacian, where some proofs are highly dependent on the case $\alpha\in (0,1]$ versus $\alpha\in (1,2)$. \end{rem} \begin{rem} The condition $\alpha \in (1,2)$ is very important. In particular, if $\alpha \in (0,1)$ the hypotheses \eqref{condiciones} are empty; moreover, the processes we are working with do not necessarily have finite expectation, and there is no guarantee on the convergence of the ReLu DNNs. \end{rem} \subsection{Proof of Proposition \ref{Prop:homo}: existence} The proof will be divided into several steps. As explained before, we follow the ideas in \cite{Grohs}, with several changes due to the nonlocal character of the equation under consideration. \medskip Let $s,p$ and $q$ be as in \eqref{condiciones}. \medskip \noindent {\bf Step 1. Preliminaries.} Let $(\rho_n)_{n \in \N}$ be the WoS process introduced in Definition \ref{defn:WoS} starting from $x \in D$. Let also $\mathcal I(N)$ and $N$ be defined as in \eqref{eq:I_n} and \eqref{eq:N}. Recall that, from \eqref{hipotesis_imp_2} with $f\equiv 0$, \begin{equation}\label{igualdades} u(x) = \E_x[g(\rho_{N})] = \E_x[g(X_{\mathcal{I}(N)})]. \end{equation} From the construction of $X_{\mathcal{I}(N)}$, one has that $X_{\mathcal{I}(N)} \in D^c$ and it depends on $N$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$, namely \begin{equation}\label{eq:X_IN} X_{\mathcal{I}(N)} = X_{\mathcal{I}(N)}(x,Y_1,...,Y_N), \end{equation} where each $Y_n$, $n=1,\ldots, N$, is an independent copy of $X_{\sigma_{B(0,1)}}$ under $\PP_0$. \medskip Let $M \in \N$. Consider $M$ copies of $X_{\mathcal{I}(N)}$ starting from $x \in D$, as described in \eqref{eq:X_IN}.
We denote such copies as \begin{equation}\label{eq:Y_n} X^{i}_{\mathcal{I}(N_i)} = X^{i}_{\mathcal{I}(N_i)}(x,Y_{i,1},...,Y_{i,N_i}), \end{equation} with $Y_{i,n}$, $i=1,...,M$, $n = 1,...,N_i$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$, and where each $N_i$ is an i.i.d. copy of $N$. Notice that for each copy, $N_i$ can be different (as a random variable). \medskip With this in mind, and following \cite{Grohs}, we introduce the \emph{Monte Carlo operator} \begin{equation}\label{eq:E_M} E_{M}(x) := \frac{1}{M} \sum_{i=1}^{M} \left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(N_i)}\right), \end{equation} where $\mathcal{R}(\Phi_g)$ denotes the realization as a continuous function of the DNN $\Phi_g \in \textbf{N}$ that approximates $g$ in Assumption \ref{Sup:g}. Notice that $E_{M}(x)$ may not be a DNN in the general case. \medskip Our main objective in the following steps is to obtain suitable bounds on the difference between the expectation of $g(X_{\mathcal I(N)})$ and $E_{M}(x)$, in a certain sense to be determined. Step 2 controls the difference between $\E_x\left[g\left(X_{\mathcal{I}(N)}\right)\right]$ and the intermediate term $\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right]$. Notice that this last term is not necessarily a DNN, because of the quantity $X_{\mathcal{I}(N)}$. \medskip \noindent {\bf Step 2.} Define \[ J_1 := \left|\E_x\left[g\left(X_{\mathcal{I}(N)}\right)\right]-\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right]\right|. \] Notice that by Jensen inequality and hypothesis \eqref{H1} one has \[ \begin{aligned} J_1 \leq \E_x \left[\left|g\left(X_{\mathcal{I}(N)}\right)-\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right|\right] \leq Bd^{p}\delta_g \E_x \left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^{p}\right]. \end{aligned} \] Recall that we have an expression for $\E_0\left[\left|X_{\sigma_{B(0,1)}}\right|^{\beta}\right]$ with $\beta<\alpha$ from Corollary \ref{cor:Kab}. The idea is to find a bound for $\E_x\left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^p\right]$ in terms of \eqref{eq:Kab}. \medskip Let $R>1$ be large enough to have $D \subset B^*:=B(x,R)$. The right hand side of the previous inequality is going to be separated in two terms: the case where $\sigma_D = \sigma_{B^{*}}$, and otherwise. Notice that $\sigma_D> \sigma_{B^*}$ is not possible. We obtain: \[ \begin{aligned} J_1&\leq Bd^{p} \delta_g \Big( \E_x \left[\left( 1 + \left|X_{\mathcal{I}(N)}\right| \right)^{p} {\bf 1}_{\{ \sigma_D = \sigma_{B^*} \} }\right] + \E_x \left[\left( 1 + \left|X_{\mathcal{I}(N)}\right| \right)^{p} {\bf 1}_{\{\sigma_D < \sigma_{B^*} \}}\right] \Big). \end{aligned} \] In the case of the equality, the processes $X_{\mathcal{I}(N)}$ and $X_{\sigma_{B^{*}}}$ are equal on law under $\PP_x$ from Lemma \ref{lem:INsigmaD} and Remark \ref{rem:igualdades}. Then the Markov property and the scaling property of the process (see \eqref{eq:scaling1} and \eqref{eq:scaling2}) can be used to get \[ \E_x \left[\left( 1 + \left|X_{\mathcal{I}(N)}\right| \right)^{p} {\bf 1}_{\{ \sigma_D = \sigma_{B^*} \} }\right] = \E_0 \left[\left(1+ \left|x + RX_{\sigma_{B(0,1)}}\right|\right)^{p}\right] . \] On the other hand note that if $\sigma_D < \sigma_{B^{*}}$ then $X_{\mathcal{I}(N)} \in B^{*} \setminus D$. Therefore \[ \E_x \left[\left( 1 + \left|X_{\mathcal{I}(N)}\right| \right)^{p} {\bf 1}_{\{\sigma_D < \sigma_{B^*} \}}\right] \leq \sup_{y \in B^{*} \setminus D} (1 + |y|)^{p}. 
\] We conclude \[ J_1\leq B d^{p} \delta_g \Bigg( \E_0 \left[\left(1+ \left|x + RX_{\sigma_{B(0,1)}}\right|\right)^{p}\right] + \sup_{y \in B^{*} \setminus D} (1 + |y|)^{p} \Bigg). \] Using the Minkowski inequality and the fact that the sets $D$ and $B^{*}\setminus D$ are bounded, one has \[ \begin{aligned} J_1 &\leq B d^{p} \delta_g \left( \left(1+ |x| + R\E_0\left[\left|X_{\sigma_{B(0,1)}}\right|^{p}\right]^{\frac{1}{p}}\right)^{p} + \sup_{y \in B^{*} \setminus D} (1 + |y|)^{p} \right)\\ &\leq B d^{p} \delta_g \left( \left(K_1 + R \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{p}\right]^{\frac{1}{p}}\right)^{p} + K_2^{p} \right), \end{aligned} \] where $K_1$ and $K_2$ are constants such that for all $x \in D$, $y \in B^{*} \setminus D$ \begin{equation}\label{eq:K1K2} 1 + |x| \leq K_1 \quad \hbox{ and } \quad 1 + |y| \leq K_2. \end{equation} By Corollaries \ref{cor:Xmoment} and \ref{cor:Kab} one has \[ \E_0 \left[\left|X_{\sigma_{B(0,1)}}\right|^{p}\right]^{\frac{1}{p}} = K(\alpha,p)^{\frac{1}{p}} < \infty \quad \Longleftrightarrow \quad p < \alpha. \] Therefore, from the choice of $p$, one has that $J_1$ is finite and bounded as follows: \begin{equation}\label{eq:cotaJ1} J_1 \leq B d^{p} \delta_g \left(\left(K_1 + R K(\alpha,p)^{\frac{1}{p}} \right)^{p} + K_2^{p} \right), \end{equation} with $K(\alpha,p)<+\infty$ defined in \eqref{eq:Kab}. \medskip \noindent {\bf Step 3.} In this step we control the difference between the intermediate term $\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right]$ previously introduced in Step 2, and the Monte Carlo $E_M(x)$ \eqref{eq:E_M}. Define \[ J_2 := \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right]-E_M(x)}_{L^q(\Omega,\PP_x)}. \] In order to bound this term, we are going to use Corollary \ref{cor:MCq}. First of all notice from \eqref{H2} that \[ \E_x\left[|\left(\mathcal{R}(\Phi_g)\right)(X_{\mathcal{I}(N)})|\right] < Bd^p \E_x\left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^p\right]. \] Note by Step 2 that \[ \E_x \left[\left(1 + \left|X_{\mathcal{I}(N)}\right|\right)^p\right] \leq \left(K_1 + R K(\alpha,p)^{\frac{1}{p}}\right)^p + K_2^{p} < +\infty, \] where $K_1$ and $K_2$ are defined in \eqref{eq:K1K2}. Therefore one can conclude that \[ \E_x\left[\left|\left(\mathcal{R}(\Phi_g)\right)(X_{\mathcal{I}(N)})\right|\right]<\infty. \] Then for all $i \in \{1,...,M\}$, $\left(\mathcal{R}(\Phi_g)\right)(X^{i}_{\mathcal{I}(N_i)}) \in L^1(\Omega,\PP_x)$. For $s$ as in \eqref{condiciones}, Corollary \ref{cor:MCq} ensures that for all $q \in [s,\infty)$ (and in particular for all $q$ as in \eqref{condiciones}), one has \begin{equation}\label{cota1_J2} J_2 \leq \frac{\Theta_{q,s}}{M^{1-\frac{1}{s}}} \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)}_{L^{q}(\Omega,\PP_x)}. \end{equation} Now we bound the norm on the right hand side of \eqref{cota1_J2}. 
By Minkowski's inequality one has \[ \begin{aligned} & \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)}_{L^{q}(\Omega,\PP_x)} \\ &\qquad\leq \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right]}_{L^q(\Omega,\PP_x)}+\norm{\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)}_{L^q(\Omega,\PP_x)}\\ &\qquad\leq 2\E_x\left[\left|\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right|^q\right]^{\frac{1}{q}}. \end{aligned} \] Now, using hypothesis \eqref{H2} and the same arguments as in the previous step, we obtain \[ \begin{aligned} & \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)}_{L^{q}(\Omega,\PP_x)} \\ &\qquad\leq 2Bd^{p} \E_x \left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^{pq}\right]^{\frac{1}{q}}\\ &\qquad\leq 2Bd^{p} \left(\E_x \left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^{pq}{\bf 1}_{\{\sigma_D=\sigma_{B^*} \} }\right]^\frac{1}{q} + \E_x \left[\left(1+\left|X_{\mathcal{I}(N)}\right|\right)^{pq}{\bf 1}_{\{\sigma_D<\sigma_{B^*} \}}\right]^{\frac{1}{q}}\right), \end{aligned} \] where we recall that $B^{*}$ is a ball in $\R^d$ centered at $x$ with radius $R>1$ large enough such that $D \subset B^{*}$. Then, using the scaling property of $X$ and Minkowski's inequality, we have \begin{equation} \begin{aligned}\label{cota2_J2} & \norm{\E_x\left[\left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Phi_g)\right)\left(X_{\mathcal{I}(N)}\right)}_{L^{q}(\Omega,\PP_x)} \\ &\hspace{1.6cm}\leq 2Bd^{p} \left( \E_0 \left[ \left(1 + \left|x+RX_{\sigma_{B(0,1)}}\right|\right)^{pq}\right]^{\frac{1}{q}} + \sup_{y \in B^{*}\setminus D} (1 + |y|)^{p} \right)\\ &\hspace{1.6cm}\leq 2Bd^{p} \left( \left(K_1 + R \E_0\left[ \left|X_{\sigma_{B(0,1)}}\right|^{pq} \right]^{\frac{1}{pq}}\right)^{p} + K_2^{p}\right). \end{aligned} \end{equation} Therefore, by \eqref{cota1_J2}, \eqref{cota2_J2} and Corollary \ref{cor:Kab} we have that $J_2$ is finite and bounded as follows: \begin{equation}\label{eq:cotaJ2} J_2 \leq \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}Bd^{p} \left( \left( K_1 + R K(\alpha,pq)^{\frac{1}{pq}}\right)^{p} + K_2^{p}\right). \end{equation} \medskip \noindent {\bf Step 4.} Thanks to Steps 2 and 3, it is now possible to bound the difference \[ \norm{\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x)}_{L^q(\Omega,\PP_x)}. \] Indeed, first notice from Jensen's inequality applied in \eqref{eq:Kab} that for $1<q<\frac{\alpha}{p}$ (see \eqref{condiciones}), \[ K(\alpha,p)^{\frac{1}{p}} \leq K(\alpha,pq)^{\frac{1}{pq}}<+\infty . \] The condition $q < \frac{\alpha}{p}$ is necessary in order to have $K(\alpha,pq)$ finite (see Corollary \ref{cor:Kab}). It follows from \eqref{eq:cotaJ1}, \eqref{eq:cotaJ2} and Minkowski's inequality that \begin{equation}\label{eq:cotaP4_1} \begin{aligned} &\norm{\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x)}_{L^q(\Omega,\PP_x)} \leq J_1 + J_2\\ &\leq \left(\delta_g + \frac{2\Theta_{q,s}}{M^{1- \frac{1}{s}}}\right)Bd^p\left( \left(K_1 + R K(\alpha,pq)^{\frac{1}{pq}}\right)^{p} + K_2^{p}\right). \end{aligned} \end{equation} Define \begin{equation} C := \left(\left(K_1 +RK(\alpha,pq)^{\frac{1}{pq}}\right)^{p} + K_2^p\right) < \infty. \end{equation} Note that the choice of $R$ depends on the starting point $x$ in order to have $D \subset B(x,R)$. If we choose e.g.
$R = 2\diam(D)$, it follows that for all $x \in D$, $D \subset B(x,2\diam(D))$ and then $C$ is uniform w.r.t. $x \in D$. Fubini and \eqref{eq:cotaP4_1} implies that \begin{equation}\label{eq:cotaP4_2} \E_x \left[ \int_{D}\left| \E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x) \right|^q dx \right] \leq \left(\delta_g + \frac{2\Theta_{q,s}}{M^{1- \frac{1}{s}}}\right)^{q}|D|B^qd^{pq}C^q. \end{equation} In the following steps we are going to control two quantities that help us to obtain bounds for the random variables $N_i$ and $|Y_{i,n}|$, for all $i=1,...,M$, $n=1,...,N_i$. Although similar to the steps followed in \cite{Grohs}, here we need additional estimates because of the non continuous nature of the L\'evy jump processes. \medskip \noindent {\bf Step 5.} In order to bound the following expectation \[\E_x \left[\left|\E_x[N] - \frac{1}{M} \sum_{i=1}^{M} N_i\right|^q\right], \] we are going to use Corollary \ref{cor:MCq}. Notice by Theorem \ref{teo:N} that for all $x \in D$ there exists a geometric random variable $\Gamma$ with parameter $\widetilde{p} = \widetilde{p}(\alpha,d) > 0$ such that \[ \E_x\left[|N|\right] \leq \E_x\left[\Gamma\right] = \frac{1}{\widetilde{p}} < \infty, \] and then for all $i \in \{1,...,M\}$, $N_i \in L^1(\Omega,\PP_x)$. For $s$ as in \eqref{condiciones}, Corollary \ref{cor:MCq} implies for all $q$ as in \eqref{condiciones} that \begin{equation}\label{eq:cotaP5_1} \E_x\left[\left|\E_x[N]-\frac{1}{M} \sum_{i=1}^{M} N_i\right|^q\right]\leq \left( \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^q \E_x \left[|N|^{q}\right] \leq \left( \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^q \E_x \left[|N|^{2}\right], \end{equation} where we used that $q<2$ and then $\E_x\left[|\cdot|^q\right]\leq \E_x\left[|\cdot|^2\right]$. Recall that \begin{equation}\label{eq:cotaN^2} \E_x\left[|N|^2\right] \leq \E_x\left[\Gamma^2\right] = \frac{2-\widetilde{p}}{\widetilde{p}^2} < \infty, \end{equation} and therefore, it holds from \eqref{eq:cotaP5_1} and \eqref{eq:cotaN^2} that \begin{equation}\label{eq:cotaP5_2} \E_x\left[\left|\E_x[N]-\frac{1}{M} \sum_{i=1}^{M} N_i\right|^q\right] \leq \left( \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^q \frac{2-\widetilde{p}}{\widetilde{p}^{2}}. \end{equation} \medskip \noindent {\bf Step 6.} Finally, we want to estimate \[ \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M} \sum_{i=1}^{M} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right], \] where $Y_{i,n}$ were introduced in \eqref{eq:Y_n}. As in the previous step, we use the Corollary \ref{cor:MCq}. First of all, it follows from the independence of $\left(Y_{n}\right)_{n=1}^{k}$ and $N$ for fixed $k \in \N$ ($Y_{n}$ and $X$ are independent), and the law of total expectation that \[ \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|\right] &= \sum_{k \geq 1} \E_x \left[\left.\left|\sum_{n=1}^{N}|Y_{n}|\right| ~ \right| ~ N=k\right] \PP_x (N = k)\\ &= \sum_{k \geq 1} \E_0 \left[\left|\sum_{n=1}^{k}|Y_{n}|\right|\right] \PP_x (N = k). \end{aligned} \] Recall that $(Y_{n})_{n=1}^{k}$ are i.i.d. with the same distribution as $X_{\sigma_{B(0,1)}}$. Triangle inequality ensures that \[ \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|\right] &\leq \sum_{k\geq1} \sum_{n=1}^{k} \E_{0}\left[|Y_n|\right] \PP_x(N=k)\\ &= \E_0\left[\left|X_{\sigma_{B(0,1)}}\right|\right] \sum_{k \geq 1} k \PP_x(N=k)\\ &= K(\alpha,1) \E_x[N]. \end{aligned} \] Then for all $i \in \{1,...,M\}$, $\sum_{n=1}^{N_i} |Y_{i,n}| \in L^{1}(\Omega,\PP_x)$. 
Moreover, with similar arguments \[ \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|^q\right] &= \sum_{k \geq 1} \E_x \left[\left.\left|\sum_{n=1}^{N}|Y_{n}|\right|^q \right| ~ N=k\right] \PP_x (N = k)\\ &= \sum_{k \geq 1} \E_0 \left[\left|\sum_{n=1}^{k}|Y_{n}|\right|^q\right] \PP_x (N = k). \end{aligned} \] Recall from the bounds on $q$ in \eqref{condiciones} that $q \in (1,2)$, so the function $|\cdot|^q$ is convex. This implies that, for all $k \in \N$, \[ \left|\sum_{n=1}^{k} \frac{|Y_n|}{k}\right|^q \leq \sum_{n=1}^{k} \frac{|Y_n|^q}{k}. \] Therefore \[ \left|\sum_{n=1}^{k}|Y_n|\right|^q \leq k^{q-1} \sum_{n=1}^{k} |Y_n|^{q}. \] Replacing this in the previous estimate, one has \begin{equation}\label{eq:cotaP6_1} \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|^q\right] &\leq \sum_{k\geq1} \sum_{n=1}^{k} k^{q-1} \E_{0}\left[|Y_n|^q\right] \PP_x(N=k)\\ &= \E_0\left[\left|X_{\sigma_{B(0,1)}}\right|^q\right] \sum_{k \geq 1} k^q \PP_x(N=k)\\ &= K(\alpha,q) \E_x[N^q]. \end{aligned} \end{equation} For $s$ as in \eqref{condiciones}, Corollary \ref{cor:MCq} implies for all $q$ as in \eqref{condiciones} that \[ \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M} \sum_{i=1}^{M} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right] \leq \left( \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^{q} \E_x \left[\left|\sum_{n=1}^{N}|Y_{n}|\right|^q\right]. \] Therefore it follows from \eqref{eq:cotaN^2} and \eqref{eq:cotaP6_1} that \begin{equation}\label{eq:cotaP6_2} \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M} \sum_{i=1}^{M} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right] \leq \left( \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^{q} K(\alpha,q) \frac{2-\widetilde{p}}{\widetilde{p}^2}. \end{equation} \medskip \noindent {\bf Step 7.} Combining the bounds obtained in \eqref{eq:cotaP4_2}, \eqref{eq:cotaP5_2} and \eqref{eq:cotaP6_2}, it holds that \begin{equation} \begin{aligned} &\E_x \left[ \int_{D}\left| \E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x) \right|^q dx + \left|\E_x[N] - \frac{1}{M} \sum_{i=1}^{M} N_i\right|^q \right. \\ & \qquad \left. + \left|\E_x\left[\sum_{n=1}^{N}|Y_n|\right]-\frac{1}{M}\sum_{i=1}^{M}\sum_{n=1}^{N_i} |Y_{i,n}|\right|^q\right] \\ &\leq \left(\delta_g + \frac{2\Theta_{q,s}}{M^{1- \frac{1}{s}}}\right)^{q}|D|B^qd^{pq}C^q + \left(\frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right)^q (1+ K(\alpha,q))\frac{2-\widetilde{p}}{\widetilde{p}^{2}} =: \hbox{error}_g^q. \end{aligned} \end{equation} Using now that, whenever a nonnegative random variable $Z$ satisfies $\E(Z) \leq c<+\infty$, there exists a realization for which $Z \leq c$, we obtain the following result. \begin{lem}\label{lem:sacada} There exist $\overline{N}_{i} \in \N$ and $Y_{i,n}$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$, $i=1,...,M$, $n=1,...,\overline{N}_i$, such that \begin{equation} \begin{aligned} &\int_{D}\left| \E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - \frac{1}{M} \sum_{i=1}^{M} \left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right) \right|^q dx \\ & \quad + \left|\E_x[N] - \frac{1}{M} \sum_{i=1}^{M} \overline{N}_i\right|^q + \left|\E_x\left[\sum_{n=1}^{N}|Y_n|\right]-\frac{1}{M}\sum_{i=1}^{M}\sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right|^q\\ &\qquad \leq \hbox{error}_g^q. \end{aligned} \end{equation} \end{lem} With a slight abuse of notation, we redefine $E_M$ from \eqref{eq:E_M} as \begin{equation}\label{eq:E_M_final} E_M(x) = \frac{1}{M} \sum_{i=1}^{M} \left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right).
\end{equation} \medskip \noindent {\bf Step 8.} We are going to prove now that $X^{i}_{\mathcal{I}(\overline{N}_i)}$ can be approximated by a ReLu DNN. Let $\delta_{\dist} \in (0,1).$ Recall that from \eqref{HD-1} there exists $\Phi_{\dist} \in \textbf{N}$ ReLu DNN such that for all $x \in D$ \[ \left|\left(\mathcal{R}(\Phi_{\dist})\right)(x) - \dist(x,\partial D)\right| \leq \delta_{\dist}. \] Define $\left(\Phi_{i,n}\right)_{i=1,...,M, n=1,...,\overline{N}_i} \in \textbf{N}$ as follows: for $x \in D$ \begin{equation}\label{eq:Phi_i1} \left(\mathcal{R}(\Phi_{i,1})\right)(x) = x + Y_{i,1}\left(\mathcal{R}(\Phi_{\dist})\right)(x), \end{equation} and for all $n = 2,...,\overline{N}_{i}$, $x \in D$ \begin{equation}\label{eq:Phi_in} \left(\mathcal{R}(\Phi_{i,n})\right)(x) = \left(\mathcal{R}(\Phi_{i,n-1})\right)(x) + Y_{i,n}\left(\mathcal{R}(\Phi_{\dist}) \circ \mathcal{R}(\Phi_{i,n-1})\right)(x). \end{equation} In the Section \ref{Sect:6p3} we will see that $\left(\Phi_{i,n}\right)_{i=1,...,M, n=1,...,\overline{N}_i}$ is indeed a ReLu DNN. Note that, for $x \in D$, $i=1,...,M$, \[ \left|X^{i}_{\mathcal{I}(1)}- \left(\mathcal{R}(\Phi_{i,1})\right)(x)\right| \leq |Y_{i,1}| \left|\left(\mathcal{R}(\Phi_{\dist})\right)(x) - \dist(x,\partial D)\right| \leq \delta_{\dist}\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|, \] and for all $n = 2,...,\overline{N}_i$, by triangle inequality \[ \begin{aligned} &\left|X^{i}_{\mathcal{I}(n)}- \left(\mathcal{R}(\Phi_{i,n})\right)(x)\right|\leq \left|X^{i}_{\mathcal{I}(n-1)}- \left(\mathcal{R}(\Phi_{i,n-1})\right)(x)\right|\\ &\qquad+ |Y_{i,n}| \left|\dist\left(X^{i}_{\mathcal{I}(n-1)},\partial D\right) - \dist\left(\left(\mathcal{R}(\Phi_{i,n-1})\right)(x),\partial D\right)\right| \\ &\qquad+ |Y_{i,n}|\left| \dist\left(\left(\mathcal{R}(\Phi_{i,n-1})\right)(x),\partial D\right) -\left(\mathcal{R}(\Phi_{\dist}) \circ \mathcal{R}(\Phi_{i,n-1})\right)(x)\right|. \end{aligned} \] Using the hypothesis on $\Phi_{\dist}$ and the fact that the function $x \to \dist(x,\partial D)$ is 1-Lipschitz one has \[ \left|X^{i}_{\mathcal{I}(n)}- \left(\mathcal{R}(\Phi_{i,n})\right)(x)\right|\leq \left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}| + 1\right)\left|X^{i}_{\mathcal{I}(n-1)}- \left(\mathcal{R}(\Phi_{i,n-1})\right)(x)\right| + \delta_{\dist} \sum_{n=1}^{\overline{N}_i}|Y_{i,n}|. \] By the previous recursion one obtain that for all $i=1,...,M$ \[ \begin{aligned} \left|X^{i}_{\mathcal{I}(\overline{N}_i)}- \left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right|&\leq \left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|\right) \delta_{\dist}\sum_{i=1}^{\overline{N}_i} \left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}| + 1\right)^{i-1} \\ & \leq \left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|\right) \delta_{\dist} \frac{\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|+1\right)^{\overline{N}_i}-1}{\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|+1\right)-1}\\ & \leq \delta_{\dist}\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}| + 1\right)^{\overline{N}_i}. \end{aligned} \] \medskip \noindent {\bf Step 9.} With the ReLu DNNs defined in Step 8, we are able to find a ReLu DNN that approximates $E_M(x)$. Define $\Phi^{i}_{g} \in \textbf{N}$ as follows \begin{equation}\label{eq:DNN_gi} \left(\mathcal{R}(\Phi^{i}_g)\right)(x) = \left(\mathcal{R}(\Phi_{g}) \circ \mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x), \end{equation} valid for $x \in D$. Notice from Lemma \ref{lem:DNN_comp} that $\Phi_g^i$ is indeed a ReLu DNN. For full details see Section \ref{Sect:6p1}. 
By triangle inequality one has \[ \begin{aligned} &\left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)-\left(\mathcal{R}(\Phi^{i}_g)\right)(x)\right| \\ &\qquad \leq \left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right) - g \left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)\right| + \left|g \left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right) - \left(g \circ \mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right| \\ &\qquad \qquad + \left|\left(g \circ \mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x) - \left(\mathcal{R}(\Phi_g^{i})\right)(x)\right|. \end{aligned} \] We use the hypothesis \eqref{H1} and the assumption that $g$ is $L_g$-Lipschitz to obtain \[ \begin{aligned} &\left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)-\left(\mathcal{R}(\Phi^{i}_g)\right)(x)\right|\\ &\quad \leq Bd^p \delta_g \left(\left(1+ \left|X^{i}_{\mathcal{I}(\overline{N}_i)}\right|\right)^p + \left(1+ \left|\left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right|\right)^p\right) + L_g\left|X^{i}_{\mathcal{I}(\overline{N}_i)}- \left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right|. \end{aligned} \] By triangle inequality one has \[ \left|\left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right| \leq \left|X_{\mathcal{I}(\overline{N}_i)}^i - \left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right| + \left|X_{\mathcal{I}(\overline{N}_i)}^i\right|. \] With the previous estimate and using that $(\cdot)^{p}$ is a convex function, we obtain \[ \begin{aligned} &\left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)-\left(\mathcal{R}(\Phi^{i}_g)\right)(x)\right| \\ & \leq Bd^p\delta_g\left(1 + \left|X_{\mathcal{I}(\overline{N}_i)}^i\right|\right)^p + L_g\left|X^{i}_{\mathcal{I}(\overline{N}_i)}- \left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right|\\ &\quad + 2^{p-1}Bd^p \delta_g \left(\left(1 + \left|X_{\mathcal{I}(\overline{N}_i)}^i\right|\right)^p + \left|X_{\mathcal{I}(\overline{N}_i)}^i - \left(\mathcal{R}(\Phi_{i,\overline{N}_i})\right)(x)\right|^p\right). \end{aligned} \] Notice that from \eqref{eq:paseo}, \[ \left|X_{\mathcal{I}(\overline{N}_i)}^i\right| \leq |x| + \diam(D) \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|. \] Therefore, in addition to Step 6, one has \begin{equation}\label{eq:step8g} \begin{aligned} &\left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)-\left(\mathcal{R}(\Phi^{i}_g)\right)(x)\right| \leq 3Bd^p\delta_g \left(1 + |x| + \diam(D)\sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^p\\ &\qquad + L_g \delta_{\dist}\left(1 + \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^{\overline{N}_i}+ 2Bd^p\delta_g \delta_{\dist}^p \left(1 + \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^{p \overline{N}_i}. \end{aligned} \end{equation} Now define for $\varepsilon \in (0,1)$ the ReLu DNN $\Psi_{1,\varepsilon}$ such that it satisfies for $x \in D$ \[ \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x) = \frac{1}{M} \sum_{i=1}^{M} \left(\mathcal{R}(\Phi^{i}_g)\right)(x). \] This is the requested DNN. Section \ref{Sect:6p3} shows that $\Psi_{1,\varepsilon}$ is a ReLu DNN. 
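The composition just defined can be summarized schematically. The following minimal sketch (illustration only, not part of the proof) assembles the realization $\mathcal{R}(\Psi_{1,\varepsilon})$ from frozen samples $Y_{i,n}$ and from surrogate callables \texttt{R\_dist} and \texttt{R\_g} standing in for $\mathcal{R}(\Phi_{\dist})$ and $\mathcal{R}(\Phi_g)$; the surrogates, the dummy Gaussian samples and the chosen sizes are illustrative assumptions and not the networks constructed above.
\begin{verbatim}
import numpy as np

# Schematic only: R_dist and R_g are placeholders for R(Phi_dist) and
# R(Phi_g); Y[i] stands for frozen copies of X_{sigma_{B(0,1)}} and is
# generated here by a dummy Gaussian sampler purely for illustration.

def R_dist(z, radius=1.0):
    # placeholder distance-to-boundary network for D = B(0, radius)
    return max(radius - float(np.linalg.norm(z)), 0.0)

def R_g(y):
    # placeholder exterior datum network
    return float(np.sum(np.abs(y)))

def realization_Psi1(x, Y):
    # R(Psi_{1,eps})(x) = (1/M) * sum_i (R(Phi_g) o R(Phi_{i,Nbar_i}))(x),
    # where R(Phi_{i,n}) = R(Phi_{i,n-1}) + Y_{i,n} * R(Phi_dist)(R(Phi_{i,n-1}))
    total = 0.0
    for Y_i in Y:                       # i = 1, ..., M
        z = np.asarray(x, dtype=float)  # starting point of the i-th copy
        for Y_in in Y_i:                # n = 1, ..., Nbar_i
            z = z + Y_in * R_dist(z)
        total += R_g(z)
    return total / len(Y)

rng = np.random.default_rng(0)
Y = [rng.normal(size=(4, 2)) for _ in range(3)]   # M = 3, Nbar_i = 4, d = 2
print(realization_Psi1(np.zeros(2), Y))
\end{verbatim}
Once the surrogate callables are replaced by actual ReLu networks, the same loop structure mirrors the composition and averaging quantified in Section \ref{Sect:6p3}.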
From the bound obtained in \eqref{eq:step8g}, we have that \[ \begin{aligned} &\left|E_M(x)- \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right| \leq \frac{1}{M} \sum_{i=1}^{M} \left|\left(\mathcal{R}(\Phi_g)\right)\left(X^{i}_{\mathcal{I}(\overline{N}_i)}\right)-\left(\mathcal{R}(\Phi^{i}_g)\right)(x)\right|\\ &\qquad\leq 3Bd^p \delta_g \left(K_1 + \diam(D)\sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^p + L_g \delta_{\dist} \left( 1 + \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^{\sum_{i=1}^{M}\overline{N}_i} \\ &\qquad \qquad+ 2Bd^p\delta_g\delta_{\dist}^p \left(1 + \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right)^{p \sum_{i=1}^{M} \overline{N}_i}. \end{aligned} \] \medskip \noindent {\bf Step 10.} We want to bound $\hbox{error}_g$. Using that $\frac{1}{q} < 1$, one has \[ \begin{aligned} \hbox{error}_g &\leq \left(\delta_g + \frac{2\Theta_{q,s}}{M^{1-\frac{1}{s}}}\right) |D|^{\frac{1}{q}} B d^p C + 2 \frac{\Theta_{q,s}}{M^{1-\frac{1}{s}}} \left(1+K(\alpha,q)^\frac{1}{q}\right) \left(\frac{2-\widetilde{p}}{\widetilde{p}^2}\right)^{\frac{1}{q}}\\ &= 2\frac{\Theta_{q,s}}{M^{1-\frac{1}{s}}} \left(|D|^{\frac{1}{q}}Bd^pC + \left(\frac{2-\widetilde{p}}{\widetilde{p}^2}\right)^{\frac{1}{q}}\left(1+K(\alpha,q)^{\frac{1}{q}}\right)\right) + \delta_g |D|^\frac{1}{q}Bd^{p}C. \end{aligned} \] Denote \begin{equation}\label{eq:C1C2} C_1 = 2\Theta_{q,s} \left(|D|^{\frac{1}{q}}Bd^pC + \left(\frac{2-\widetilde{p}}{\widetilde{p}^2}\right)^{\frac{1}{q}}\left(1+K(\alpha,q)^{\frac{1}{q}}\right)\right), \quad \hbox{and} \quad C_2 = |D|^\frac{1}{q}Bd^{p}C. \end{equation} Note that $C_1$ and $C_2$ are polynomial in the dimension $d$. Then \begin{equation}\label{eq:error_g} \hbox{error}_g \leq \frac{C_1}{M^{1-\frac{1}{s}}} + C_2 \delta_g. \end{equation} In addition, thanks to Lemma \ref{lem:sacada}, one has \[ \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}| \leq M \left( \hbox{error}_g + \E_x \left[\sum_{n=1}^{N} |Y_{n}|\right]\right) \leq M \left( \hbox{error}_g + K(\alpha,1) \frac{1}{\widetilde{p}} \right). \] Define $C_3 := K(\alpha,1)/\widetilde{p}$; then \[ \begin{aligned} \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}| &\leq M^{\frac{1}{s}}C_1 + M\delta_{g}C_2 + MC_3\\ &\leq M^{\frac{1}{s}}C_1 + M(C_2 + C_3). \end{aligned} \] On the other hand, define $C_4 = \frac{1}{\widetilde{p}}$; then \[ \begin{aligned} \sum_{i=1}^{M} \overline{N}_i &\leq M (\hbox{error}_g + \E_x[N]) \leq M^{\frac{1}{s}} C_1 + M\delta_g C_2 + \frac{M}{\widetilde{p}}\\ &\leq M^{\frac{1}{s}} C_1 + M(C_2 + C_4). \end{aligned} \] \medskip \noindent {\bf Step 11.} Using the auxiliary Lemma \ref{lem:sacada} and \eqref{eq:error_g}, it follows that \[ \left(\int_D \left|\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x)\right|^{q}dx\right)^{\frac{1}{q}} \leq \frac{C_1}{M^{1-\frac 1s}} + C_2 \delta_g. \] In addition, from Step 9 and \eqref{eq:error_g} one has \[ \begin{aligned} &\left(\int_D \left|E_M(x) - \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^{q}dx\right)^{\frac{1}{q}}\\ &\qquad\leq 3|D|^{\frac{1}{q}}Bd^p\delta_g \left(K_1 + \diam(D)\left(M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)\right)^p \\ &\qquad \quad+ |D|^{\frac{1}{q}}L_g \delta_{\dist} \left(1 + M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)^{M^{\frac{1}{s}}C_1 + M (C_2 + C_4)} \\ &\qquad \quad + 2|D|^{\frac{1}{q}}Bd^p\delta_g \delta_{\dist}^p \left(1 + M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)^{p\left(M^{\frac{1}{s}}C_1 + M(C_2 + C_4)\right)}.
\end{aligned} \] Therefore, Minkowski's inequality implies that \begin{equation}\label{eq:step11g} \begin{aligned} &\left(\int_D \left|\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^{q}dx\right)^{\frac{1}{q}} \\ &\leq \left(\int_D \left|\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - E_M(x)\right|^{q}dx\right)^{\frac{1}{q}} + \left(\int_D \left|E_M(x) - \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^{q}dx\right)^{\frac{1}{q}}\\ &\leq \frac{C_1}{M^{1-\frac{1}{s}}} + C_2 \delta_g + 3|D|^{\frac{1}{q}}Bd^p\delta_g \left(K_1 + \diam(D)\left(M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)\right)^p \\ &\qquad+ |D|^{\frac{1}{q}}L_g \delta_{\dist} \left(1 + M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)^{M^{\frac{1}{s}}C_1 + M (C_2 + C_4)} \\ &\qquad + 2|D|^{\frac{1}{q}}Bd^p\delta_g \delta_{\dist}^p \left(1 + M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)^{p\left(M^{\frac{1}{s}}C_1 + M(C_2 + C_4)\right)}. \end{aligned} \end{equation} For $\varepsilon \in (0,1)$, choose $M \in \N$ as \[ M = \left\lceil \left(\frac{5C_1}{\varepsilon}\right)^{\frac{s}{s-1}} \right\rceil, \] and $\delta_{\dist} \in (0,1)$ as \[ \delta_{\dist} = \frac{\varepsilon}{5|D|^{\frac{1}{q}}L_g} \left(1 + M^{\frac{1}{s}}C_1 + M (C_2 + C_3)\right)^{-\left(M^{\frac{1}{s}}C_1 + M (C_2 + C_4)\right)}. \] Let \begin{equation}\label{C5} C_5 = \max \left\{ C_2, 3|D|^{\frac{1}{q}}Bd^p\left(K_1 + \diam(D)\left(M^{\frac{1}{s}}C_1 + M(C_2 + C_3)\right)\right)^{p},\frac{2|D|^{\frac{1}{q}}Bd^p}{5^p|D|^{\frac{p}{q}} L_g^p} \right\}, \end{equation} and choose $\delta_g \in (0,1)$ as \[ \delta_g = \frac{\varepsilon }{5C_5}. \] With these choices, each term on the right-hand side of \eqref{eq:step11g} is bounded by $\varepsilon/5$. Then \[ \begin{aligned} &\left(\int_D \left|\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^{q}dx\right)^{\frac{1}{q}} \leq \varepsilon. \end{aligned} \] This allows us to conclude that \eqref{eq:2.2} can be approximated in $L^{q}(D)$ by a DNN $\Psi_{1,\varepsilon}$ with accuracy $\varepsilon \in (0,1)$. \medskip \subsection{Proof of Proposition \ref{Prop:homo}: Quantification of DNNs} \label{Sect:6p3} In this section we will prove that $\Psi_{1,\varepsilon}$ is in fact a ReLu DNN which does not suffer from the curse of dimensionality. \medskip \noindent {\bf Step 12.} We now use the definitions and lemmas of Section \ref{Sect:3} to study $\Psi_{1,\varepsilon}$. Let \[ \beta_{\dist} = \mathcal{D}(\Phi_{\dist}) \quad \hbox{and} \quad H_{\dist} = \dim(\beta_{\dist}) - 2. \] We will verify by induction that for all $i=1,...,M$, $n=1,...,\overline{N}_i$, $\Phi_{i,n}$ (defined in \eqref{eq:Phi_i1} and \eqref{eq:Phi_in}) is a ReLu DNN that satisfies \begin{equation}\label{eq:HI_homo} \mathcal{D}(\Phi_{i,n}) = \overunderset{n}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist}), \end{equation} where \[ \widetilde{\beta}_{\dist} = (\beta_{\dist,0},...,\beta_{\dist,H_{\dist}},d) \in \N^{H_{\dist}+2}. \] If \eqref{eq:HI_homo} is true, then from \eqref{eq:HI_homo} and the definition of the operator $\odot$ it is easy to see that \begin{equation}\label{eq:HI_homo2} \vertiii{\mathcal{D}(\Phi_{i,n}) } \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}, \quad \hbox{and} \quad \dim(\mathcal{D}(\Phi_{i,n})) = (H_{\dist}+1)n+1. \end{equation} For $n=1$, recall the definition of $\Phi_{i,1}$ from \eqref{eq:Phi_i1}.
By Lemma \ref{lem:DNN_mat} one has that \[ Y_{i,1} \mathcal{R}(\Phi_{\dist}) \in \mathcal{R}\left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \widetilde{\beta}_{\dist} \right\}\right). \] By Lemma \ref{lem:DNN_id}, the identity function can be represented by a ReLu DNN with $H_{\dist} + 2$ layers. Therefore, by Lemma \ref{lem:DNN_sum} it follows that $\mathcal{R}(\Phi_{i,1}) \in C(D,\R^d)$ and \[ \mathcal{D}(\Phi_{i,1}) = d\mathfrak{n}_{H_{\dist} + 2} \boxplus \widetilde{\beta}_{\dist}, \qquad \dim(\mathcal{D}(\Phi_{i,1})) = H_{\dist} + 2. \] Moreover \[ \vertiii{\mathcal{D}(\Phi_{i,1}) } \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}. \] Now suppose that \eqref{eq:HI_homo} is valid for some $n=2,...,\overline{N}_i-1$. Recall the definition of $\Phi_{i,n}$ from \eqref{eq:Phi_in}. Notice that $\mathcal{R}(\Phi_{i,n+1})$ can be written as \[ \mathcal{R}(\Phi_{i,n+1}) = \mathcal{R}(\widetilde{\Phi}_{i,n+1}) \circ \mathcal{R}(\Phi_{i,n}), \] where $\widetilde{\Phi}_{i,n} \in \textbf{N}$ is a ReLu DNN that satisfies \[ \left(\mathcal{R}(\widetilde{\Phi}_{i,n})\right)(x) = x + Y_{i,n} \left(\mathcal{R}(\Phi_{\dist})\right)(x). \] By the same arguments as in the case $n=1$, it follows for all $n=2,...,\overline{N}_i$ that \[ \mathcal{D}(\widetilde{\Phi}_{i,n}) = d\mathfrak{n}_{H_{\dist}+2} \boxplus \widetilde{\beta}_{\dist}, \quad \dim(\mathcal{D}(\widetilde{\Phi}_{i,n})) = H_{\dist} + 2, \] and \[ \vertiii{\mathcal{D}(\widetilde{\Phi}_{i,n})} \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}. \] Therefore, from the inductive hypothesis \eqref{eq:HI_homo} and Lemma \ref{lem:DNN_comp}, $\Phi_{i,n+1}$ is a ReLu DNN that satisfies \[ \mathcal{D}(\Phi_{i,n+1}) = (d\mathfrak{n}_{H_{\dist}+2} \boxplus \widetilde{\beta}_{\dist}) \odot \left(\overunderset{n}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist})\right) = \overunderset{n+1}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist}). \] This proves the claim for $\Phi_{i,n}$ for any $i=1,...,M$, $n=1,...,\overline{N}_i$; recall that \eqref{eq:HI_homo2} is valid too. Therefore \[ \mathcal{D}(\Phi_{i,\overline{N}_i}) = \overunderset{\overline{N}_i}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist}). \] Moreover \[ \vertiii{\mathcal{D}(\Phi_{i,\overline{N}_i})} \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}, \quad \hbox{and} \quad \dim(\mathcal{D}(\Phi_{i,\overline{N}_i})) = (H_{\dist}+1)\overline{N}_i+1. \] Let $\beta_g = \mathcal{D}(\Phi_g)$ and $H_g = \dim(\beta_g) - 2$. By Lemma \ref{lem:DNN_comp} one has that \[ \mathcal{D}(\Phi_{g}^{i}) = \beta_g \odot \left(\overunderset{\overline{N}_i}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist})\right), \qquad \dim(\mathcal{D}(\Phi_{g}^{i})) =(H_{\dist}+1) \overline{N}_i + H_g + 2. \] Moreover \[ \vertiii{\mathcal{D}(\Phi_g^{i})} \leq \max\{\vertiii{\mathcal{D}(\Phi_g)},2d+\vertiii{\mathcal{D}(\Phi_{\dist})} \}. \] Recall that the $\overline{N}_i$ are not necessarily the same for $i=1,...,M$. In order to use Lemma \ref{lem:DNN_sum}, we need all the $\Phi_{g}^{i}$, $i=1,...,M$, to have the same number of layers. For any $i=1,...,M$ define \[ H_i = (H_{\dist}+1)\left(\sum_{j=1}^{M} \overline{N}_j - \overline{N}_i\right) - 1. \] By Lemma \ref{lem:DNN_id}, the identity function can be represented by a ReLu DNN with $H_i$ hidden layers. Recall the definition of $\Phi_g^i$ in \eqref{eq:DNN_gi}.
Using Lemma \ref{lem:DNN_comp} we have that \[ \mathcal{D}(\Phi_g^i) = \mathfrak{n}_{H_i + 2} \odot \beta_g \odot \left(\overunderset{\overline{N}_i}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist})\right), \] and \[ \dim(\mathcal{D}(\Phi_{g}^{i})) = (H_{\dist}+1)\sum_{i=1}^{M}\overline{N}_i + H_g + 2. \] Now we use Lemma \ref{lem:DNN_sum} to conclude that \[ \mathcal{D}(\Psi_{1,\varepsilon}) = \overset{M}{\underset{i=1}{\boxplus}} \left(\mathfrak{n}_{H_i+2} \odot \beta_g \odot \left(\overunderset{\overline{N}_i}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist})\right)\right), \] and \[ \dim(\mathcal{D}(\Psi_{1,\varepsilon})) = (H_{\dist}+1)\sum_{i=1}^M \overline{N}_i + H_g + 2. \] In addition \begin{equation}\label{eq:normpsi1eps} \begin{aligned} \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} &\leq \sum_{i=1}^{M} \max\{\vertiii{\mathcal{D}(\Phi_g)},2d+\vertiii{\mathcal{D}(\Phi_{\dist})}\}\\ &\leq M(\vertiii{\mathcal{D}(\Phi_g)} + 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}). \end{aligned} \end{equation} Notice from \eqref{eq:C1C2} that the constants $C_1$ and $C_2$ are bounded by a multiple of $|D|^{\frac 1q} d^p$. Therefore, by the choice of $M$, \begin{equation}\label{eq:Mfinalg} M \leq B_1 |D|^{\frac{s}{q(s-1)}}d^{\frac{ps}{s-1}}\varepsilon^{-\frac{s}{s-1}}, \end{equation} where $B_1>0$ is a generic constant. With \eqref{eq:Mfinalg} and the bounds on $C_1$ and $C_2$, we have that $C_5$ defined in \eqref{C5} is bounded by a multiple of \[ |D|^{\frac{1}{q}\left(1+p+\frac{ps}{s-1}\right)}d^{p+p^2+\frac{p^2s}{s-1}}\varepsilon^{-\frac{ps}{s-1}}. \] By the choice of $\delta_g$ we have, for a generic constant $B_2>0$, that \begin{equation}\label{eq:deltagfinalg} \delta_g^{-a} \leq B_2 |D|^{\frac{a}{q}\left(1+p+\frac{ps}{s-1}\right)}d^{ap+ap^2+\frac{ap^2s}{s-1}}\varepsilon^{-a-\frac{aps}{s-1}}. \end{equation} For $\delta_{\dist}$ we estimate $\log(\delta_{\dist}^{-1})$, as indicated in Assumption \ref{Sup:D}. By the choice of $\delta_{\dist}$ and the properties of the $\log$ function, we have that \[ \log(\delta_{\dist}^{-1}) \leq 5|D|^{\frac 1q}L_g \varepsilon^{-1} +(M^{\frac 1s}C_1 + M(C_2 + C_4))(1 + M^{\frac 1s}C_1 + M(C_2 + C_3)). \] Therefore \begin{equation}\label{eq:deltadistfinalg} \lceil \log(\delta_{\dist}^{-1}) \rceil^{a} \leq B_3 |D|^{\frac{2a}{q}\left(1+\frac{s}{s-1}\right)}d^{2ap\left(1+\frac{s}{s-1}\right)}\varepsilon^{-a-\frac{2as}{s-1}}, \end{equation} where $B_3>0$ is a generic constant. Assumptions \ref{Sup:g} and \ref{Sup:D}, together with \eqref{eq:normpsi1eps}, imply that \[ \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} \leq B_4 d^b M (\delta_g^{-a}+\lceil\log(\delta_{\dist}^{-1})\rceil^{a}), \] where $B_4>0$ is a generic constant. Therefore, from \eqref{eq:Mfinalg}, \eqref{eq:deltagfinalg} and \eqref{eq:deltadistfinalg} we conclude that there exists $\widetilde{B}>0$ such that \[ \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} \leq \widetilde{B}|D|^{\frac{1}{q}\left(2a + ap+\frac{s}{s-1}(1+2a+ap)\right)} d^{b+2ap+ap^2+\frac{ps}{s-1}(1+2a + ap)}\varepsilon^{-a-\frac{s}{s-1}(1+2a+ap)}. \] In view of Remark \ref{rem:CoD}, this implies that $\Psi_{1,\varepsilon}$ overcomes the curse of dimensionality. This completes the proof of Proposition \ref{Prop:homo}.
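As a side illustration of the last bound, and not as part of the proof, the short Python sketch below evaluates its right-hand side for one admissible choice of parameters (here $\alpha = 1.5$ and $a=b=1$, $p=q=s=1.2$, so that $p,s \in (1,\alpha)$, $s<\alpha/p$ and $q \in [s,\alpha/p)$); the values $\widetilde B = |D| = 1$ are illustrative assumptions. Since the exponents are fixed once these parameters are fixed, the bound grows only polynomially in $d$ and $\varepsilon^{-1}$.
\begin{verbatim}
# Illustration only: evaluate the final bound on |||D(Psi_{1,eps})||| with
# Btilde = |D| = 1 and admissible parameters a = b = 1, p = q = s = 1.2.

def size_bound(d, eps, a=1.0, b=1.0, p=1.2, q=1.2, s=1.2, vol_D=1.0, Bt=1.0):
    t = s / (s - 1.0) * (1.0 + 2.0 * a + a * p)   # s/(s-1) * (1 + 2a + ap)
    return (Bt * vol_D ** ((2.0 * a + a * p + t) / q)
            * d ** (b + 2.0 * a * p + a * p ** 2
                    + p * s / (s - 1.0) * (1.0 + 2.0 * a + a * p))
            * eps ** (-(a + t)))

for d in (10, 100, 1000):
    print(d, f"{size_bound(d, eps=0.1):.3e}")
\end{verbatim}
The printout grows like a fixed power of $d$, in contrast with the exponential growth that the curse of dimensionality would produce.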
\section{Approximation of solutions of the Fractional Dirichlet problem using DNNs: the source case}\label{Sect:7} \subsection{Non-homogeneous Fractional Laplacian} In the previous section we proved that the solution \eqref{eq:2.2} of the fractional Dirichlet problem without source can be approximated by a ReLu DNN. In this subsection we focus on the term \begin{equation}\label{eq:7.1} \E_x \left[\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right]\right]. \end{equation} We will prove that \eqref{eq:7.1} can be approximated by a ReLu DNN that does not suffer from the curse of dimensionality. Notice that \eqref{eq:7.1} corresponds to the extra term in the solution \eqref{u(x)} of the fractional Dirichlet problem with source \eqref{eq:1.1}. In order to carry out this approximation, the following assumption will be introduced. \begin{ass}\label{Sup:f} Let $d \geq 2$. Let $f: D \to \R$ be a function satisfying \eqref{Hf0}. Let $\delta_f \in (0,1)$, $a,b\geq 1$ and $B>0$. Then there exists a ReLu DNN $\Phi_f \in \textbf{N}$ with \begin{enumerate} \item $\mathcal{R}(\Phi_f):D \to \R$ is $\widetilde{L}_f$-Lipschitz continuous, $\widetilde{L}_f>0$, and \item The following are satisfied: \begin{align} |f(x) - \left(\mathcal{R}(\Phi_f)\right)(x)| &\leq \delta_f, \qquad x \in D. \tag{Hf-1} \label{H5}\\ \vertiii{\mathcal{D}(\Phi_f)} &\leq Bd^b\delta_f^{-a}. \tag{Hf-2} \label{H6} \end{align} \end{enumerate} \end{ass} \begin{rem}\label{rem:Rf} If $\Phi_f$ satisfies Assumption \ref{Sup:f}, then it holds for all $x \in D$ that \[ |\left(\mathcal{R}(\Phi_f)\right)(x)| \leq |f(x)-\left(\mathcal{R}(\Phi_f)\right)(x)|+|f(x)| \leq \delta_f + \norm{f}_{L^{\infty}(D)}. \] Then \begin{equation}\label{eq:Phif} \norm{\mathcal{R}(\Phi_f)}_{L^{\infty}(D)} \leq \delta_f + \norm{f}_{L^{\infty}(D)}. \end{equation} \end{rem} The main result of this section is the following proposition, which ensures the existence of a ReLu DNN that approximates \eqref{eq:7.1} well. \begin{prop}\label{Prop:6p1} Let $\alpha \in (1,2)$, $L_f>0$ and \begin{equation}\label{condiciones_f} \hbox{ $p,s \in (1,\alpha)$ such that $s < \frac{\alpha}{p}$, \quad and \quad $q \in \left[s, \frac{\alpha}{p} \right)$. } \end{equation} Suppose that $f$ is a function satisfying \eqref{Hf0} and Assumption \ref{Sup:f}. Suppose additionally that $D$ satisfies Assumption \ref{Sup:D}. \medskip Then for all $\widetilde{\varepsilon} \in (0,1)$, there exists a ReLu DNN $\Psi_{2,\widetilde{\varepsilon}}$ such that \begin{enumerate} \item Proximity in $L^q(D)$: \begin{equation}\label{eq:prop_f} \left(\int_{D} \left| \E_{x}\left[\sum_{n=1}^{N} r_n^{\alpha} V_1(0,f(X_{\mathcal{I}(n-1)}+r_n \cdot))\right] - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x) \right|^q dx\right)^{\frac{1}{q}} \leq \widetilde{\varepsilon}.
\end{equation} \item Realization: $\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})$ has the following structure: there exist $M_1, M_2 \in \N$, $\overline{N}_{i} \in \N$, $Y_{i,n}$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$, and $v_{i,j,n}$ i.i.d. copies with law $\mu$ on $B(0,1)$, for $i =1,...,M_1$, $j=1,...,M_2$, $n=1,...,\overline{N}_{i}$, such that for all $x \in D$, \begin{equation} \begin{aligned} \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x) = \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{\overline{N}_i} \kappa_{d,\alpha}\left(\mathcal{R}(\Upsilon)\right)\Big( \left(\mathcal{R}(\Phi_{\alpha}) \circ \mathcal{R}(\Phi_r^{i,n})\right)(x), \left(\mathcal{R}(\Phi_{f}^{i,n})\right)(x)\Big), \end{aligned} \end{equation} where for all $y \in D$ \begin{equation} \left(\mathcal{R}(\Phi_{r}^{i,n})\right)(y)=\left(\mathcal{R}(\Phi_{\dist}) \circ \mathcal{R}(\Phi_{i,n-1})\right)(y), \end{equation} \begin{equation} \left(\mathcal{R}(\Phi_f^{i,n})\right)(y) = \frac{1}{M_2} \sum_{j=1}^{M_2} \left(\mathcal{R}(\Phi_f) \circ \left(\mathcal{R}(\Phi_{i,n}) + v_{i,j,n} \mathcal{R}(\Phi_{r}^{i,n})\right)\right)(y), \end{equation} and $\mathcal{R}(\Phi_{i,n})$ is a ReLu DNN that approximates $X_{\mathcal{I}(n)}^{i}$, for $i=1,...,M_1$, $n = 1,...,\overline{N}_i$. \item Bounds: there exists $\widetilde{B}>0$ such that \begin{equation} \vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})} \leq \widetilde{B} |D|^{\frac{1}{q}\left(1 +2a+\frac{2s}{s-1}(1+a)\right)}d^b\widetilde{\varepsilon}^{-a-\frac{2s}{s-1}(1+a)}. \end{equation} \end{enumerate} \end{prop} \subsection{Proof of Proposition \ref{Prop:6p1}: existence} As in the proof of Proposition \ref{Prop:homo}, this proof will be divided into several steps. Let $s$, $p$ and $q$ be as in \eqref{condiciones_f}. \medskip \noindent {\bf Step 1.} Let $(\rho_n)_{n \in \N}$ be the WoS process starting at $x \in D$. Recall that for all $n=1,...,N$ the process $X_{\mathcal{I}(n)}$ depends on the point $x \in D$ and on $n$ copies of $X_{\sigma_{B(0,1)}}$, namely \begin{equation}\label{eq:X_Inf} X_{\mathcal{I}(n)} = X_{\mathcal{I}(n)}(x,Y_1,...,Y_n), \end{equation} where $Y_{k}$, $k=1,...,n$, are i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$. Let $M_1 \in \N$. Consider $M_1$ copies of $X_{\mathcal{I}(n)}$ starting at $x \in D$, as described in \eqref{eq:X_Inf}. We denote such copies as \[ X^i_{\mathcal{I}(n)} = X_{\mathcal{I}(n)}^i(x,Y_{i,1},...,Y_{i,n}), \] where $Y_{i,k}$, $i=1,...,M_1$, $n=1,...,N_i$, $k=1,...,n$, are i.i.d. copies of $X_{\sigma_{B(0,1)}}$ under $\PP_0$, and each $N_i$ is an i.i.d. copy of $N$. Recall that the $N_i$ are not necessarily the same (as random variables). \medskip Let $M_2 \in \N$. For all $n=1,...,N$, let $(v_{j,n})_{j=1}^{M_2}$ be $M_2$ copies of a random variable $v$ with distribution $\mu$ over $B(0,1)$. For all $n=1,...,N$ and $\chi \in L^2(B(0,1),\mu)$ define the Monte Carlo operator \begin{equation}\label{eq:EM2} E_{M_2}^{n}(\chi(\cdot)) = \frac{1}{M_2} \sum_{j=1}^{M_2} \chi(v_{j,n}), \end{equation} and we will refer to $E_{M_2}$ when the copies of $v$ in \eqref{eq:EM2} do not depend on $n$.
Additionally define the operator \begin{equation}\label{eq:EM1} E_{M_1}(x) = \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{N_i} r_{i,n}^{\alpha} \kappa_{d,\alpha} E_{M_2} \left(\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^i + r_{i,n} \cdot\right)\right), \end{equation} where \begin{equation}\label{eq:rin} r_{i,n} = \dist\left(X_{\mathcal{I}(n-1)}^i,\partial D\right), \end{equation} and $\mathcal{R}(\Phi_f)$ denotes the realization as a Lipschitz continuous function of the DNN $\Phi_f \in \textbf{N}$ that approximates $f$ in Assumption \ref{Sup:f}. Note that $E_{M_1}$ is not necessarily a DNN. \medskip We want to establish suitable bounds on the difference between \eqref{eq:7.1} and the operator $E_{M_1}(x)$. For this, in the next step we work, for all $n=1,...,N$, with the term \[ E_{M_2}^{n}\left(\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right). \] \medskip \noindent {\bf Step 2.} Notice by Remark \ref{rem:Rf} that for all $n=1,...,N$ \begin{equation}\label{eq:Rfnorm} \E^{(\mu)}\left(\left|\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right|\right) \leq \delta_f + \norm{f}_{L^{\infty}(D)}. \end{equation} Then for all $j=1,...,M_2$, $\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n v_{j,n}\right) \in L^1(B(0,1),\mu)$. For $s$ as in \eqref{condiciones_f} it follows from Corollary \ref{cor:MCq} that for all $q$ as in \eqref{condiciones_f} \[ \begin{aligned} &\norm{\E^{(\mu)} \left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right]-E_{M_2} \left(\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right)}_{L^{q}(B(0,1),\mu)}\\ &\qquad \leq \frac{2\Theta_{q,s}}{M_2^{1-\frac{1}{s}}} \E^{(\mu)} \left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)^q\right]^{\frac{1}{q}}. \end{aligned} \] From Remark \ref{rem:Rf} it follows that \[ \E^{(\mu)}\left(\left|\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right|^q\right)^{\frac 1q} \leq \delta_f + \norm{f}_{L^{\infty}(D)}. \] Therefore \begin{align*} & \norm{\E^{(\mu)} \left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right]-E_{M_2} \left(\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right)}_{L^{q}(B(0,1),\mu)}\\ & \qquad \leq \frac{2\Theta_{q,s}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)}{M_2^{1-\frac{1}{s}}}. \end{align*} Then for any $n=1,...,N$ there exist $v_{j,n}$, $j=1,...,M_2$, i.i.d. random variables with distribution $\mu$ such that \[ \begin{aligned} &\left|\E^{(\mu)} \left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n \cdot\right)\right]-\frac{1}{M_2} \sum_{j=1}^{M_2} \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)} + r_n v_{j,n}\right)\right|\\ &\qquad\leq \frac{2\Theta_{q,s}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)}{M_2^{1-\frac{1}{s}}}. \end{aligned} \] We redefine $E_{M_2}^{n}$ with the random variables $v_{j,n}$ found for all $n=1,...,N$.
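The nested structure of \eqref{eq:EM2}--\eqref{eq:EM1} can be summarized by the following schematic sketch (illustration only): \texttt{R\_f} stands in for $\mathcal{R}(\Phi_f)$, the measure $\mu$ is replaced by the uniform distribution on $B(0,1)$ purely for illustration, $\kappa_{d,\alpha}$ is set to $1$, and \texttt{walks[i]} collects pairs $(X^i_{\mathcal{I}(n-1)}, r_{i,n})$ from a previously simulated WoS copy. All of these choices are assumptions made for the sketch and are not objects constructed in the proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_mu_unit_ball(d):
    # stand-in for mu: uniform point of B(0,1) (illustrative assumption)
    v = rng.normal(size=d)
    return v / np.linalg.norm(v) * rng.uniform() ** (1.0 / d)

def E_M2(chi, d, M2):
    # inner Monte Carlo operator, cf. (eq:EM2)
    return sum(chi(sample_mu_unit_ball(d)) for _ in range(M2)) / M2

def E_M1(walks, R_f, alpha, kappa, d, M2):
    # outer operator, cf. (eq:EM1): average over the simulated WoS copies
    total = 0.0
    for steps in walks:                          # one WoS copy per entry
        for X_prev, r in steps:                  # pairs (X_{I(n-1)}^i, r_{i,n})
            total += r**alpha * kappa * E_M2(lambda v: R_f(X_prev + r * v), d, M2)
    return total / len(walks)

# usage with a placeholder R_f and two dummy walks in dimension d = 2
R_f = lambda x: float(np.exp(-np.sum(x**2)))
walks = [[(np.zeros(2), 0.5), (np.array([0.3, 0.1]), 0.2)],
         [(np.array([0.1, -0.2]), 0.4)]]
print(E_M1(walks, R_f, alpha=1.5, kappa=1.0, d=2, M2=100))
\end{verbatim}
The same double loop, with the random draws frozen and every callable replaced by a ReLu network, is what the DNN $\Psi_{2,\widetilde{\varepsilon}}$ realizes in Step 12 below.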
\medskip In the two next steps we control the difference between \eqref{eq:7.1} and $E_{M_1}$ with the intermediate term \[ \E_x \left[\sum_{n=1}^{N}r_n^{\alpha}\kappa_{d,\alpha} E^n_{M_2}\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right], \] \medskip \noindent {\bf Step 3.} Define \[ \begin{aligned} J_3 &= \Bigg\|\E_x\left[\sum_{n=1}^{N}r^{\alpha}_{n} \kappa_{d,\alpha} \E^{(\mu)}\left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right] \right] \\ & \qquad -\E_x\left[\sum_{n=1}^{N}r^{\alpha}_{n} \kappa_{d,\alpha} E_{M_2}^{n}\left(\left(\mathcal{R}(\Phi_f)\right) \left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right) \right]\Bigg\|_{L^q(\Omega,\PP_x)}. \end{aligned} \] From Step 2 we have \[ J_3 \leq \frac{2\Theta_{q,s}}{M_2^{1-\frac 1s}} \left(\delta_f + \norm{f}_{L^{\infty}(D)} \right) \norm{\E_x \left[\sum_{n=1}^{N} r_n^{\alpha}\kappa_{d,\alpha}\right]}_{L^{q}(\Omega,\PP_x)}. \] Using Lemma \ref{lem:aux_f} it follows that \[ J_3 \leq \frac{2\Theta_{q,s}}{M_2^{1-\frac 1s}} \left(\delta_f + \norm{f}_{L^{\infty}(D)} \right) \E_x[\sigma_D]. \] \medskip \noindent {\bf Step 4.} Define \[ J_4 = \norm{\E_x \left[\sum_{n=1}^{N}r_n^{\alpha}\kappa_{d,\alpha} E^n_{M_2}\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right] - E_{M_1}(x)}_{L^{q}(\Omega,\PP_x)}. \] Recall Remark \ref{rem:Rf}. Then \[ \E_x \left[\left|\sum_{n=1}^{N}r_n^{\alpha}\kappa_{d,\alpha} E^n_{M_2}\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right|\right] \leq (\delta_f + \norm{f}_{L^{\infty}(D)})\E_x[\sigma_D]<\infty. \] This implies that \[ \sum_{n=1}^{N}r_{i,n}^{\alpha}\kappa_{d,\alpha} E^n_{M_2}\left(\mathcal{R}(\Phi_f)\right)\left(X^i_{\mathcal{I}(n-1)}+r_{i,n} \cdot\right) \in L^1(\Omega,\PP_x). \] Then using Corollary \ref{cor:MCq} we have for $s$ as in \eqref{condiciones_f} it holds for all $q$ as in \eqref{condiciones_f} that \begin{align} J_4 &\leq \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}} \E_x \left[\left(\sum_{n=1}^{N} r_n^{\alpha}\kappa_{d,\alpha} E^n_{M_2}\left(\left(\mathcal{R}(\phi_f)\right)(X_{\mathcal{I}(n-1)}+r_n\cdot)\right)\right)^q\right]^{\frac{1}{q}}. \end{align} Remark \ref{rem:Rf} implies that \[ J_4 \leq \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right) \E_x \left[\left(\sum_{n=1}^{N}r_{n}^{\alpha} \kappa_{d,\alpha}\right)^q\right]^{\frac{1}{q}} \] By Lemma \ref{lem:aux_f} and Jensen inequality with $(\cdot)^{\frac{2}{q}}$, $q<2,$ we have \[ \E_x\left[\left(\sum_{n=1}^{N}r_n^{\alpha}\kappa_{d,\alpha}\right)^q\right]^{\frac{1}{q}} \leq \E_x\left[\left(\sum_{n=1}^{N}r_n^{\alpha}\kappa_{d,\alpha}\right)^2\right]^{\frac{1}{2}} \leq \E_x [\sigma_D^2]^{\frac{1}{2}}. \] Therefore \begin{align} J_4 \leq \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right) \E_x[\sigma_D^2]^{\frac{1}{2}}. \end{align} \medskip \noindent {\bf Step 5.} With the bounds obtained in Steps 3 and 4, we have by Minkowski inequality that \begin{equation}\label{eq:step5f} \begin{aligned} &\norm{\E_x\left[\sum_{n=1}^{N}r^{\alpha}_{n} \kappa_{d,\alpha} \E^{(\mu)}\left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right] \right]-E_{M_1}(x)}_{L^q(\Omega,\PP_x)} \leq J_3 + J_4\\ &\hspace{3.5cm}\leq 2\Theta_{q,s}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\left(\frac{\E_x[\sigma_D]}{M_2^{1-\frac 1s}}+\frac{\E_x[|\sigma_D|^2]^{\frac{1}{2}}}{M_1^{1-\frac 1s}}\right). 
\end{aligned} \end{equation} Fubini and \eqref{eq:step5f} implies that \begin{equation}\label{eq:cotafP4} \begin{aligned} &\E_x \left[\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right]-E_{M_1}(x)\right|^qdx\right] \\ &\qquad \leq 2^q|D|\Theta_{q,s}^{q} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)^q \left(\frac{\E_x[\sigma_D]}{M_2^{1-\frac 1s}} + \frac{\E_x[\sigma_D^2]^\frac{1}{2}}{M_1^{1-\frac 1s}}\right)^{q} \end{aligned} \end{equation} On the other hand side, from hypothesis \eqref{H5} and Lemma \ref{lem:aux_f} one has \[ \begin{aligned} &\E_x \left[\int_D\left|\E_x\left[\sum_{n=1}^{N}r^{\alpha}_{n} \kappa_{d,\alpha} \E^{(\mu)}\left[\left(\mathcal{R}(\phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right] \right] \right.\right.\\ &\qquad\left.\left.-\E_x\left[\sum_{n=1}^{N}r^{\alpha}_{n} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right] \right]\right|^q dx\right] \leq \delta_f^q|D|\E_x[\sigma_D]^q. \end{aligned} \] Therefore, using that $(\cdot)^q$ is a convex function and \eqref{eq:cotafP4} it follows that \begin{equation}\label{eq:cotafP42} \begin{aligned} &\E_x \left[\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right]-E_{M_1}(x)\right|^qdx\right] \\ &\quad \leq 2^{q-1}\delta_f^q|D|\E_x[\sigma_D]^q+2^{q-1}2^q|D|\Theta_{q,s}^{q} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)^q \left(\frac{\E_x[\sigma_D]}{M_2^{1-\frac 1s}} + \frac{\E_x[\sigma_D^2]^\frac{1}{2}}{M_1^{1-\frac 1s}}\right)^{q}\\ &\quad \leq 2\delta_f^q|D|\E_x[\sigma_D]^q+2^{q+1}|D|\Theta_{q,s}^{q} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)^q \left(\frac{\E_x[\sigma_D]}{M_2^{1-\frac 1s}} + \frac{\E_x[\sigma_D^2]^\frac{1}{2}}{M_1^{1-\frac 1s}}\right)^{q}, \end{aligned} \end{equation} where we use that $2^{q-1}<2$ since $q<2$. \medskip \noindent {\bf Step 6.} In order to bound the following expectation \[\E_x \left[\left|\E_x[N] - \frac{1}{M_1} \sum_{i=1}^{M_1} N_i\right|^q\right], \] we use Corollary \ref{cor:MCq}. Notice by Theorem \ref{teo:N} that for all $x \in D$ there exists a geometric random variable $\Gamma$ with parameter $\widetilde{p} = \widetilde{p}(\alpha,d) > 0$ such that \[ \E_x\left[|N|\right] \leq \E_x\left[\Gamma\right] = \frac{1}{\widetilde{p}} < \infty, \] and then for all $i \in \{1,...,M_1\}$, $N_i \in L^1(\PP_x,|\cdot|)$. For $s$ as in \eqref{condiciones_f}, Corollary \ref{cor:MCq} implies for all $q$ as in \eqref{condiciones_f} that \begin{equation}\label{eq:cotafP5_1} \E_x\left[\left|\E_x[N]-\frac{1}{M_1} \sum_{i=1}^{M_1} N_i\right|^q\right]\leq \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^q \E_x \left[|N|^{q}\right] \leq \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^q \E_x \left[|N|^{2}\right], \end{equation} where we used that $q<2$ and then $\E_x\left[|\cdot|^q\right]\leq \E_x\left[|\cdot|^2\right]$. Recall that \begin{equation}\label{eq:cotafN^2} \E_x\left[|N|^2\right] \leq \E_x\left[\Gamma^2\right] = \frac{2-\widetilde{p}}{\widetilde{p}^2} < \infty, \end{equation} and therefore, it holds from \eqref{eq:cotafP5_1} and \eqref{eq:cotafN^2} that \begin{equation}\label{eq:cotafP5_2} \E_x\left[\left|\E_x[N]-\frac{1}{M_1} \sum_{i=1}^{M_1} N_i\right|^q\right] \leq \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^q \frac{2-\widetilde{p}}{\widetilde{p}^{2}}. 
\end{equation} \medskip \noindent {\bf Step 7.} We want to estimate \[ \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right]. \] As in the previous step, we use Corollary \ref{cor:MCq}. First of all, it follows from the independence of $\left(Y_{n}\right)_{n=1}^{k}$ and $N$ for fixed $k \in \N$ ($Y_{n}$ and $X$ are independent), and the law of total expectation, that \[ \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|\right] &= \sum_{k \geq 1} \E_x \left[\left.\left|\sum_{n=1}^{N}|Y_{n}|\right|\right|N=k\right] \PP_x (N = k)\\ &= \sum_{k \geq 1} \E_0 \left[\left|\sum_{n=1}^{k}|Y_{n}|\right|\right] \PP_x (N = k). \end{aligned} \] Recall that $(Y_{n})_{n=1}^{k}$ are i.i.d. with the same distribution as $X_{\sigma_{B(0,1)}}$. Triangle inequality ensures that \[ \begin{aligned} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|\right] &\leq \sum_{k\geq1} \sum_{n=1}^{k} \E_{0}\left[|Y_n|\right] \PP_x(N=k)\\ &= \E_0\left[\left|X_{\sigma_{B(0,1)}}\right|\right] \sum_{k \geq 1} k \PP_x(N=k)\\ &= K(\alpha,1) \E_x[N], \end{aligned} \] and then for all $i \in \{1,...,M_1\}$, $\sum_{n=1}^{N_i} |Y_{i,n}| \in L^{1}(\Omega,\PP_x)$. Moreover, with similar arguments it holds that \begin{equation}\label{eq:cotafP6_1} \E_x \left[\left|\sum_{n=1}^{N} |Y_{n}|\right|^q\right] \leq K(\alpha,q) \E_x[N^{q}]. \end{equation} For $s$ as in \eqref{condiciones_f}, Corollary \ref{cor:MCq} implies for all $q$ as in \eqref{condiciones_f} that \[ \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right] \leq \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^{q} \E_x \left[\left|\sum_{n=1}^{N}|Y_{n}|\right|^q\right]. \] Therefore it follows from \eqref{eq:cotafN^2} and \eqref{eq:cotafP6_1} that \begin{equation}\label{eq:cotafP6_2} \E_x \left[\left|\E_x \left[\sum_{n=1}^{N} |Y_{n}|\right] - \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{N_i} |Y_{i,n}| \right|^{q} \right] \leq \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^{q} K(\alpha,q) \frac{2-\widetilde{p}}{\widetilde{p}^2}. \end{equation} \medskip \noindent {\bf Step 8.} It follows from \eqref{eq:cotafP42}, \eqref{eq:cotafP5_2} and \eqref{eq:cotafP6_2} that \begin{equation} \begin{aligned} &\E_x \left[\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[\left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right]-E_{M_1}(x)\right|^qdx\right. \\ &\qquad \left.+ \left|\E_x\left[N\right] - \frac{1}{M_1} \sum_{i=1}^{M_1} N_i\right|^q + \left|\E_x\left[\sum_{n=1}^{N}|Y_n|\right]-\frac{1}{M_1}\sum_{i=1}^{M_1}\sum_{n=1}^{N_i} |Y_{i,n}|\right|^q\right]\\ &\leq 2\delta_f^q|D|\E_x[\sigma_D]^q + 2^{q+1}|D|\Theta_{q,s}^{q} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)^q \left(\frac{\E_x[\sigma_D]}{M_2^{1-\frac 1s}} + \frac{\E_x[\sigma_D^2]^\frac{1}{2}}{M_1^{1-\frac 1s}}\right)^{q} \\ & \qquad + \left( \frac{2\Theta_{q,s}}{M_1^{1-\frac{1}{s}}}\right)^{q} (1+K(\alpha,q)) \frac{2-\widetilde{p}}{\widetilde{p}^2} =: \hbox{error}_f^q. \end{aligned} \end{equation} Using now that, whenever a nonnegative random variable $Z$ satisfies $\E(Z)\leq c<\infty$, there exists a realization for which $Z \leq c$, we have the following result. \begin{lem}\label{lem:sacada2} There exist $\overline{N}_i \in \N$, $Y_{i,n}$ i.i.d. copies of $X_{\sigma_{B(0,1)}}$, and $v_{i,j,n}$ i.i.d.
random variables with law $\mu$, $i=1,...,M_1$, $j=1,...,M_2$, $n=1,...,\overline{N}_i$, such that \begin{equation} \begin{aligned} &\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right]-E_{M_1}(x)\right|^qdx \\ &\qquad + \left|\E_x\left[N\right] - \frac{1}{M_1} \sum_{i=1}^{M_1} \overline{N}_i\right|^q + \left|\E_x\left[\sum_{n=1}^{N}|Y_n|\right]-\frac{1}{M_1}\sum_{i=1}^{M_1}\sum_{n=1}^{\overline{N}_i} |Y_{i,n}|\right|^q\\ &\qquad\leq \hbox{error}_f^q. \end{aligned} \end{equation} \end{lem} Here, $E_{M_1}$ will be redefined from \eqref{eq:EM1} according to the copies found in Lemma \ref{lem:sacada2}, with the Monte Carlo operator $E_{M_2}^{i,n}$ defined from the copies $(v_{i,j,n})_{j=1}^{M_2}$. \medskip \noindent {\bf Step 9.} Similarly to Step 8 of the proof of Proposition \ref{Prop:homo}, we can see that for all $i=1,...,M_1$, $n=1,...,\overline{N}_i$, the variable $X^{i}_{\mathcal{I}(n-1)}$ can be approximated by a ReLu DNN $\Phi_{i,n-1} \in \textbf{N}$ which satisfies for all $x \in D$ \[ \left|X_{\mathcal{I}(n-1)}^{i}-\left(\mathcal{R}(\Phi_{i,n-1})\right)(x)\right| \leq \delta_{\dist}\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}| + 1\right)^{\overline{N}_i}. \] We can now find a DNN that approximates $r_{i,n}$ in \eqref{eq:rin}. Indeed, define $\Phi_{r}^{i,n} \in \textbf{N}$ as follows: \[ \left(\mathcal{R}(\Phi_r^{i,n})\right)(x) = \left(\mathcal{R}(\Phi_{\dist}) \circ \mathcal{R}(\Phi_{i,n-1})\right)(x), \] valid for $x \in D$. In Section \ref{Sect:7p3} we show that $\Phi_{r}^{i,n}$ is in fact a ReLu DNN. For $x \in D$ we have by the triangle inequality that \[ \begin{aligned} \left|r_{i,n} - \left(\mathcal{R}(\Phi_r^{i,n})\right)(x)\right|&\leq \left|r_{i,n}-\dist\left(\left(\mathcal{R}(\Phi_{i,n-1})\right)(x),\partial D\right)\right|\\ &\qquad + \left|\dist\left(\left(\mathcal{R}(\Phi_{i,n-1})\right)(x),\partial D\right) - \left(\mathcal{R}(\Phi_r^{i,n})\right)(x)\right|. \end{aligned} \] Hypothesis \eqref{HD-1} and the fact that the function $x \mapsto \dist(x,\partial D)$ is 1-Lipschitz imply \[ \begin{aligned} \left|r_{i,n} - \left(\mathcal{R}(\Phi_r^{i,n})\right)(x)\right|&\leq \left|X_{\mathcal{I}(n-1)}^{i}-\left(\mathcal{R}(\Phi_{i,n-1})\right)(x)\right| + \delta_{\dist}\\ &\leq \delta_{\dist}\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}| + 1\right)^{\overline{N}_i} + \delta_{\dist}. \end{aligned} \] \medskip \noindent {\bf Step 10.} We will now find a DNN that approximates \begin{equation}\label{EM2in} E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right). \end{equation} Define the DNN $\Phi_f^{i,n} \in \textbf{N}$ as follows: for $x \in D$ \[ \left(\mathcal{R}(\Phi_f^{i,n})\right)(x) = \frac{1}{M_2} \sum_{j=1}^{M_2} \left(\mathcal{R}(\Phi_f) \circ \left(\mathcal{R}(\Phi_{i,n-1}) + v_{i,j,n} \mathcal{R}(\Phi_{r}^{i,n})\right)\right)(x). \] In Section \ref{Sect:7p3} we will prove that $\Phi_{f}^{i,n}$ is a ReLu DNN. We now use the assumption that $\mathcal{R}(\Phi_f)$ is an $\widetilde{L}_{f}$-Lipschitz function to obtain \[ \begin{aligned} &\left|E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right)-\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\right| \\ &\qquad \leq \frac{\widetilde{L}_f}{M_2} \sum_{j=1}^{M_2} \left(\left|X_{\mathcal{I}(n-1)}^{i} - \left(\mathcal{R}(\Phi_{i,n-1})\right)(x)\right| + \left|v_{i,j,n}\right| \left|r_{i,n} -\left(\mathcal{R}(\Phi_r^{i,n})\right)(x)\right|\right).
\end{aligned} \] Notice that for all $i=1,...,M_1$, $j=1,...,M_2$, $n=1,...,\overline{N}_i$ one has $|v_{i,j,n}|\leq 1$ ($v_{i,j,n}$ is a random variable on $B(0,1)$). Therefore, it follows from Step 9 that \[ \begin{aligned} & \left|E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right)-\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\right| \\ &~{} \qquad \leq \widetilde{L}_f\delta_{\dist} \left(2\left(\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\overline{N}_i}+1\right). \end{aligned} \] \medskip \noindent {\bf Step 11.} We want to approximate the multiplication between $r_{i,n}^{\alpha}$ and \eqref{EM2in}. For all $i=1,...,M_1$, $n=1,...,\overline{N}_i$ define the DNN $\Upsilon_{i,n} \in \textbf{N}$ as \[ \left(\mathcal{R}(\Upsilon_{i,n})\right)(x) = \left(\mathcal{R}(\Upsilon)\right)\left(\left(\mathcal{R}(\Phi_{\alpha})\circ \mathcal{R}(\Phi_{r}^{i,n})\right)(x),\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\right), \] valid for $x \in D$. In Section \ref{Sect:7p3} we show that $\Upsilon_{i,n}$ is a ReLu DNN. Note by triangle inequality that for all $x \in D$ \[ \begin{aligned} &\left|r_{i,n}^{\alpha}E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right) - \left(\mathcal{R}(\Upsilon_{i,n})\right)(x)\right| \\ &\qquad\leq \left|r_{i,n}^{\alpha}\Big(E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right)-\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\Big)\right|\\ &\qquad \quad+ \left|\left(\mathcal{R}(\Phi_f^{i,n})\right)(x) \left(r_{i,n}^{\alpha} - \left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x)\right)\right|\\ &\qquad \quad+\left|\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x) - \left(\mathcal{R}(\Upsilon_{i,n})\right)(x)\right| \end{aligned} \] For all $i=1,...,M_1$, $n=1,...,M_1$ one has $r_n^{i} < \diam(D)$. From Step 9, the first term can be bounded as \[ \begin{aligned} &\left|r_{i,n}^{\alpha}\left(E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right)-\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\right)\right| \\ &\qquad\leq \diam(D)^{\alpha}\widetilde{L}_f\delta_{\dist} \left(2\left(\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\overline{N}_i}+1\right). \end{aligned} \] For the second term of the inequality, note that \[ \left|\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\right| \leq \norm{\mathcal{R}(\Phi_f)}_{L^{\infty}(D)} \leq \delta_f + \norm{f}_{L^{\infty}(D)}. \] Also, by triangle inequality \[ \begin{aligned} \left|r_{i,n}^{\alpha} - \left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x) \right| &\leq \left|r_{i,n}^{\alpha} - \left(\mathcal{R}(\Phi_{\alpha})\right)(r_{i,n})\right| \\ &\quad+ \left|\left(\mathcal{R}(\Phi_{\alpha})\right)(r_{i,n})- \left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x)\right|. 
\end{aligned} \] From the Hypothesis \ref{HD-3} and the fact that $\mathcal{R}(\Phi_{\alpha})$ is $L_{\alpha}$-Lipschitz one has \[ \left|r_{i,n}^{\alpha} - \left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x) \right| \leq \delta_{\alpha} + L_{\alpha} \left|r_{i,n} - \left(\mathcal{R}(\Phi_r^{i,n})\right)(x)\right| \] And by Step 9 it follows that \[ \begin{aligned} &\left|\left(\mathcal{R}(\Phi_f^{i,n})\right)(x) \left(r_{i,n}^{\alpha} - \left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x)\right)\right| \\ &\quad\leq \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\left(\delta_{\alpha} + L_{\alpha} \delta_{\dist} \left(\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|+1\right)^{\overline{N}_i}+1\right)\right). \end{aligned} \] Finally, by Lemma \ref{lem:DNN_mult} for all $\delta_{\Upsilon} \in \left(0,\frac{1}{2}\right)$ the third term can be bounded by \[ \left|\left(\mathcal{R}(\Phi_f^{i,n})\right)(x)\left(\mathcal{R}(\Phi_{\alpha})\circ\mathcal{R}(\Phi_r^{i,n})\right)(x) - \left(\mathcal{R}(\Upsilon_{i,n})\right)(x)\right| \leq \delta_{\Upsilon}. \] with $\kappa$ from the Lemma \ref{lem:DNN_mult} equal to \[ \kappa = \max\left\{1+\norm{f}_{L^{\infty}(D)},1+L_{\alpha}\left(\left(\sum_{i=1}^{\overline{N}_i}|Y_{i,n}|+1\right)^{\overline{N}_i}+1\right)+\diam(D)^{\alpha}\right\} \] Therefore \begin{equation}\label{eq:step11f} \begin{aligned} &\left|r_{i,n}^{\alpha}E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right) - \left(\mathcal{R}(\Upsilon_{i,n})\right)(x)\right| \\ &\quad\leq \diam(D)^{\alpha}L_f\delta_{\dist} \left(2\left(\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\overline{N}_i}+1\right)\\ &\quad \quad+\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\left(\delta_{\alpha} + L_{\alpha} \delta_{\dist} \left(\left(\sum_{n=1}^{\overline{N}_i}|Y_{i,n}|+1\right)^{\overline{N}_i}+1\right)\right) + \delta_{\Upsilon}.\\ &\quad\leq \delta_{\Upsilon} + \delta_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right) \\ &\quad \quad+ \delta_{\dist} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(2\left(\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\overline{N}_i}+1\right). \end{aligned} \end{equation} \noindent {\bf Step 12.} For $\widetilde{\varepsilon} \in (0,1)$ define the DNN $\Psi_{2,\widetilde{\varepsilon}} \in \textbf{N}$ as follows: for any $x \in D$ \[ \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x) = \frac{1}{M_1} \sum_{i=1}^{M_1} \sum_{n=1}^{\overline{N}_i} \kappa_{d,\alpha} \left(\mathcal{R}\left(\Upsilon_{i,n}\right)\right)(x). \] This is the requested DNN. See Section \ref{Sect:7p3} for the proof that $\Psi_{2,\widetilde{\varepsilon}}$ is indeed a ReLu DNN. 
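\medskip \noindent For orientation only, the following Python sketch (not part of the proof) indicates how the realization of $\Psi_{2,\widetilde{\varepsilon}}$ just defined is assembled: it is the equally weighted average over $i$ of the sums over $n$ of the realized sub-networks $\mathcal{R}(\Upsilon_{i,n})$, scaled by $\kappa_{d,\alpha}$. The helper names and the placeholder sub-networks are hypothetical.
\begin{verbatim}
import numpy as np

def assemble_Psi2(upsilon_nets, kappa_d_alpha):
    """Return x -> (1/M1) * sum_i sum_n kappa_{d,alpha} * Upsilon_{i,n}(x).

    upsilon_nets: list of length M1 whose i-th entry is the list of callables
    realizing R(Upsilon_{i,n}) for n = 1, ..., N_bar_i (placeholders here).
    """
    M1 = len(upsilon_nets)

    def Psi2(x):
        total = 0.0
        for nets_i in upsilon_nets:           # i = 1, ..., M1
            for upsilon in nets_i:            # n = 1, ..., N_bar_i
                total += kappa_d_alpha * upsilon(x)
        return total / M1

    return Psi2

# toy usage with placeholder sub-networks (illustration only)
nets = [[lambda x: float(np.sum(x))] * 2, [lambda x: float(np.max(x))]]
Psi2 = assemble_Psi2(nets, kappa_d_alpha=0.5)
print(Psi2(np.array([0.1, -0.2])))
\end{verbatim}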
By triangle inequality and \eqref{eq:step11f} we have \[ \begin{aligned} &\left|E_{M_1}(x) - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right| \\ &\quad\leq \frac{\kappa_{d,\alpha}}{M_1}\sum_{i=1}^{M_1} \sum_{n=1}^{\overline{N}_i} \left|r_{i,n}^{\alpha}E_{M_2}^{i,n}\left( \left(\mathcal{R}(\Phi_f)\right)\left(X_{\mathcal{I}(n-1)}^{i}+r_{i,n}\cdot\right)\right) - \left(\mathcal{R}(\Upsilon_{i,n})\right)(x)\right|.\\ &\quad \leq \frac{\kappa_{d,\alpha}}{M_1} \left(\delta_{\Upsilon} + \delta_{\alpha} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \sum_{i=1}^{M_1} \overline{N}_i \\ &\quad \quad + \frac{\kappa_{d,\alpha}}{M_1} \delta_{\dist} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \sum_{i=1}^{M_1} \overline{N}_i\left(2\left(\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\overline{N}_i}+1\right). \end{aligned} \] Therefore \[ \begin{aligned} &\left|E_{M_1}(x) - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right| \\ &\quad \leq \frac{\kappa_{d,\alpha}}{M_1} \left(\delta_{\Upsilon} + \delta_{\alpha} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \sum_{i=1}^{M_1} \overline{N}_i \\ &\quad \quad + \kappa_{d,\alpha} \delta_{\dist} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(\sum_{i=1}^{M_1} \overline{N}_i\right)\ell, \end{aligned} \] with $\ell:=\left(2\left(\sum_{i=1}^{M_1}\sum_{n=1}^{\overline{N}_i} |Y_{i,n}| + 1\right)^{\sum_{i=1}^{M_1}\overline{N}_i}+1\right).$ \medskip \noindent {\bf Step 13.} We want to bound error$_f$. Notice that \[ \begin{aligned} \hbox{error}_f &\leq 2^{\frac{1}{q}}|D|^{\frac{1}{q}}\delta_f \E_x[\sigma_D] + 2^{1+\frac{1}{q}}|D|^{\frac{1}{q}} \Theta_{q,s} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right) \left(\frac{\E_x\left[\sigma_D\right]}{M_2^{1-\frac{1}{s}}} + \frac{\E_x\left[\sigma_D^2\right]^{\frac{1}{2}}}{M_1^{1-\frac{1}{s}}}\right)\\ &\qquad + \frac{2^{1+\frac{1}{q}}\Theta_{q,s}}{M_1^{1-\frac{1}{s}}} \left(1 + K(\alpha,q)\right)^{\frac{1}{q}}\left(\frac{2-\widetilde{p}}{\widetilde{p}^2}\right)^{\frac{1}{q}}. \end{aligned} \] Consider now $M \in \N$ and let $M = M_1 = M_2$. Define the constant $\widetilde{C}_1$ as \[ \widetilde{C}_1 = 2^{1+\frac{1}{q}}\Theta_{q,s}\left(|D|^{\frac{1}{q}} \left(1 + \norm{f}_{L^{\infty}(D)}\right)\left(\E_x\left[\sigma_D\right]+\E_x\left[\sigma_D^2\right]^{\frac{1}{2}}\right)+ \left(1 + K(\alpha,q)\right)^{\frac{1}{q}} \left(\frac{2-\widetilde{p}}{\widetilde{p}^{2}}\right)^{\frac{1}{q}}\right), \] and the constant $\widetilde{C}_2$ as \[ \widetilde{C}_2 = 2^{\frac{1}{q}}|D|^{\frac{1}{q}} \E_x\left[\sigma_D\right]. \] Therefore \begin{equation}\label{eq:error_f} \hbox{error}_f \leq \frac{\widetilde{C}_1}{M^{1-\frac{1}{s}}} + \widetilde{C}_2 \delta_f. \end{equation} In addition \[ \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}| \leq M \left( \hbox{error}_f + \E_x \left[\sum_{n=1}^{N} |Y_{i,n}|\right]\right) \leq M \left( \hbox{error}_f + K(\alpha,1) \frac{1}{\widetilde{p}} \right). \] Recall that $C_3 = K(\alpha,1) \frac{1}{\widetilde{p}}$. Then \[ \begin{aligned} \sum_{i=1}^{M} \sum_{n=1}^{\overline{N}_i} |Y_{i,n}| &\leq M^{\frac{1}{s}}\widetilde{C}_1 + M\left(\delta_f\widetilde{C}_2+C_3\right). \\ &\leq M^{\frac 1s}\widetilde{C}_1 + M(\widetilde{C}_2 + C_3). 
\end{aligned} \] Recall that $C_4 = \frac{1}{\widetilde{p}}$, therefore \[ \begin{aligned} \sum_{i=1}^{M} \overline{N}_i \leq M (\hbox{error}_f + \E_x[N]) &\leq M^{\frac{1}{s}} \widetilde{C}_1 +M \left(\delta_f\widetilde{C}_2+C_4\right)\\ &\leq M^{\frac 1s}\widetilde{C}_1 + M(\widetilde{C}_2+C_4). \end{aligned} \] \medskip \noindent From the Step 12 and the estimates of this step it follows that \[ \begin{aligned} &\left|E_{M_1}(x) - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right|\\ & \leq \kappa_{d,\alpha} \left(\delta_{\Upsilon} + \delta_{\alpha} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(M^{\frac{1}{s}-1}\widetilde{C}_1 + \widetilde{C}_2 + C_4\right)\\ & \quad+ \kappa_{d,\alpha} \delta_{\dist} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(M^{\frac{1}{s}}\widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)\widetilde \ell, \end{aligned} \] where $\widetilde \ell :=\left(2\left(M^{\frac{1}{s}}\widetilde{C}_1 + M(\widetilde{C}_2 + C_3) + 1\right)^{M^{\frac{1}{s}}\widetilde{C}_1 + M(\widetilde{C}_2 + C_4)}+1\right).$ \medskip \noindent {\bf Step 14.} Lemma \ref{lem:sacada2}, the inequality \eqref{eq:error_f}, Step 13 and Minkowski inequality ensure that \[ \begin{aligned} &\left(\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right] - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right|^q dx\right)^{\frac{1}{q}}\\ &\leq \left(\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right] - E_{M_1}(x)\right|^q dx\right)^{\frac{1}{q}} \\ &~{} \quad + \left(\int_D \left|E_{M_1}(x)-\left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right|^q dx\right)^{\frac{1}{q}}\\ &\leq \frac{\widetilde{C}_1}{M^{1-\frac{1}{s}}} + \widetilde{C}_2 \delta_f + |D|^{\frac{1}{q}}\kappa_{d,\alpha} \left(\delta_{\Upsilon} + \delta_{\alpha} \left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(M^{\frac{1}{s}-1}\widetilde{C}_1 + \widetilde{C}_2 + C_4\right)\\ &\quad +|D|^{\frac{1}{q}}\kappa_{d,\alpha} \delta_{\dist} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(\delta_f + \norm{f}_{L^{\infty}(D)}\right)\right) \left(M^{\frac{1}{s}}\widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)\widetilde{\ell}. \end{aligned} \] For $\widetilde{\varepsilon} \in (0,1)$, let $M \in \N$ large enough such that \[ M = \left\lceil \left(\frac{5\widetilde{C}_1}{\widetilde{\varepsilon}}\right)^{\frac{s}{s-1}} \right\rceil, \] and from the choice of $M$ let $\delta_{\Upsilon} \in \left(0,\frac{1}{2}\right)$ $\delta_{\dist},\delta_{\alpha} \in (0,1)$ small enough such that \begin{align*} \delta_{\Upsilon} &= \frac{\widetilde{\varepsilon}}{5|D|^{\frac 1q} \kappa_{d,\alpha}} \left(M^{\frac{1}{s}-1}\widetilde{C}_1 + \widetilde{C}_2 + C_4\right)^{-1},\\ \delta_{\alpha}&= \frac{\widetilde{\varepsilon}}{5|D|^{\frac 1q}\kappa_{d,\alpha}} \left(1 + \norm{f}_{L^{\infty}(D)}\right)^{-1} \left(M^{\frac{1}{s} - 1}\widetilde{C}_1 + \widetilde{C}_2 + C_4\right)^{-1},\\ \delta_{\dist} &= \frac{\widetilde{\varepsilon}}{5|D|^{\frac 1q}\kappa_{d,\alpha}\widetilde{\ell}} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(1 + \norm{f}_{L^{\infty}(D)}\right)\right)^{-1} \left(M^{\frac 1s} \widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)^{-1}. \end{align*} Finally we choose $\delta_{f} \in (0,1)$ small enough such that \[ \delta_f = \frac{\widetilde{\varepsilon}}{5\widetilde{C}_2}. 
\] Therefore \[ \left(\int_D \left|\E_x\left[\sum_{n=1}^{N}r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f\left(X_{\mathcal{I}(n-1)}+r_n\cdot\right)\right]\right] - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right|^q dx\right)^{\frac{1}{q}} \leq \widetilde{\varepsilon}. \] We conclude that for all $\widetilde{\varepsilon} \in (0,1)$ there exists $\Psi_{2,\widetilde{\varepsilon}} \in \textbf{N}$ that approximates \eqref{eq:7.1} with accuracy $\widetilde{\varepsilon}$. \subsection{Proof of Proposition \ref{Prop:6p1}: quantification of DNNs}\label{Sect:7p3} In this section we prove that $\Psi_{2,\widetilde{\varepsilon}}$ is a ReLu DNN that overcomes the curse of dimensionality. \medskip \noindent {\bf Step 15.} We now study the DNN $\Psi_{2,\widetilde{\varepsilon}}$ using the definitions and lemmas of Section \ref{Sect:3}. Let \[ \beta_{\dist} = \mathcal{D}(\Phi_{\dist}) \quad \hbox{and} \quad H_{\dist} = \dim(\beta_{\dist})-2. \] Recall from Step 12 in the proof of Proposition \ref{Prop:homo} that for all $i=1,...,M$ \[ \mathcal{D}(\Phi_{i,1}) = d\mathfrak{n}_{H_{\dist}+2} \boxplus \widetilde{\beta}_{\dist}, \qquad \dim(\mathcal{D}(\Phi_{i,1})) = H_{\dist} +2, \] where \[ \widetilde{\beta}_{\dist} = \left(\beta_{\dist,0}, ...,\beta_{\dist,H_{\dist}},d\right) \in \N^{H_{\dist}+2}, \] and for all $n=2,...,\overline{N}_i$ \[ \mathcal{D}(\Phi_{i,n}) = \overunderset{n}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}), \quad \dim(\mathcal{D}(\Phi_{i,n})) = (H_{\dist}+1)n+1, \] with \[ \vertiii{\mathcal{D}(\Phi_{i,n})} \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}. \] Denote \[ \beta_f = \mathcal{D}(\Phi_f) \quad \hbox{and} \quad H_f = \dim(\beta_f) - 2. \] Define $\widetilde{\Phi}_{i,j,n} \in \textbf{N}$ as follows: \[ \left(\mathcal{R}(\widetilde{\Phi}_{i,j,n})\right)(x) = x + v_{i,j,n}\left(\mathcal{R}(\Phi_{\dist})\right)(x). \] As in the case of $\Phi_{i,1}$, we have that $\widetilde{\Phi}_{i,j,n}$ is a ReLu DNN such that \[ \mathcal{D}(\widetilde{\Phi}_{i,j,n}) = d\mathfrak{n}_{H_{\dist}+2} \boxplus \widetilde{\beta}_{\dist}, \quad \dim(\mathcal{D}(\widetilde{\Phi}_{i,j,n})) = H_{\dist} + 2. \] Moreover \[ \vertiii{\mathcal{D}(\widetilde{\Phi}_{i,j,n})} \leq 2d + \vertiii{\mathcal{D}(\Phi_{\dist})}. \] Using the DNN $\widetilde{\Phi}_{i,j,n}$ we have that \[ \mathcal{R}(\Phi_{i,n-1}) + v_{i,j,n}\mathcal{R}(\Phi_r^{i,n}) = \mathcal{R}(\widetilde{\Phi}_{i,j,n}) \circ \mathcal{R}(\Phi_{i,n-1}). \] Therefore, by Lemma \ref{lem:DNN_comp} it follows that \[ \mathcal{R}(\Phi_f) \circ \left(\mathcal{R}(\Phi_{i,n-1}) + v_{i,j,n} \mathcal{R}(\Phi_{r}^{i,n})\right) \in \mathcal{R} \left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi)= \beta_{f} \odot \left(\overunderset{n}{m=1}{\odot}\left(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}\right)\right)\right\}\right), \] with $(H_{\dist}+1)n+H_{f} + 2$ the total number of layers. Note that this ReLu DNN is continuous from $D$ to $\R$. Let \[ \beta_{\alpha} = \mathcal{D}(\Phi_{\alpha}), \quad \hbox{and} \quad H_{\alpha} = \dim(\beta_{\alpha}) -2.
\] For $n=1,...,\overline{N}_i$ we compound the previous DNN with the identity with $(H_{\dist}+1)(\sum_{i=1}^{M_1}\overline{N}_i-n) + H_{\alpha} + 1$ layers to obtain a DNN $\widehat{\Phi}_{i,j,n} \in \textbf{N}$ such that \[ \mathcal{R}(\widehat{\Phi}_{i,j,n}) = \mathcal{R}(\Phi_f) \circ \mathcal{R}(\widetilde{\Phi}_{i,j,n}) \circ \mathcal{R}(\Phi_{i,n}), \] with \[ \mathcal{D}(\widehat{\Phi}_{i,j,n}) =\left( \mathfrak{n}_{(H_{\dist}+1)(\sum_{i=1}^{M_1}\overline{N}_i-n) + H_{\alpha} + 1} \right)\odot \beta_{f} \odot \left(\overunderset{n}{m=1}{\odot}\left(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}\right)\right), \] and \[ \dim(\mathcal{D}(\widehat{\Phi}_{i,j,n})) = (H_{\dist}+1)\sum_{i=1}^{M_1} \overline{N}_i + H_{\alpha} + H_{f} +2. \] Note from the definition of DNN $\Phi_{f}^{i,n}$ that \[ \mathcal{R}(\Phi_{f}^{i,n}) = \sum_{j=1}^{M_2} \mathcal{R}(\widehat{\Phi}_{i,j,n}). \] Therefore, Lemma \ref{lem:DNN_sum} implies that $\Phi_{f}^{i,n}$ is a ReLu DNN with \[ \mathcal{D}(\Phi_{f}^{i,n}) = \overunderset{M_2}{j=1}{\boxplus}\left( \mathfrak{n}_{(H_{\dist}+1)(\sum_{i=1}^{M_1}\overline{N}_i-n) +H_{\alpha}+ 1}\odot \beta_{f} \odot \left(\overunderset{n}{m=1}{\odot}\left(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}\right)\right)\right), \] and \[ \dim(\mathcal{D}(\Phi_f^{i,n})) = (H_{\dist}+1)\sum_{i=1}^{M_1} \overline{N}_i +H_{\alpha}+ H_{f} + 2. \] Moreover \[ \vertiii{\mathcal{D}(\Phi_{f}^{i,n})} \leq \sum_{j=1}^{M_2} \max\{\vertiii{\mathcal{D}(\Phi_f)},2d+\vertiii{\mathcal{D}(\Phi_{\dist})} \} = M_2 \max \{\vertiii{\mathcal{D}(\Phi_f)},2d+\vertiii{\mathcal{D}(\Phi_{\dist})} \}. \] On the other hand side, note that \[ \mathcal{D}(\Phi_{r}^{i,n}) = \beta_{\dist} \odot \mathcal{D}(\Phi_{i,n-1}), \quad \dim(\mathcal{D}(\Phi_{r}^{i,n})) = (H_{\dist}+1)n+ 1, \] and \[ \vertiii{\mathcal{D}(\Phi_{r}^{i,n})} \leq \max \{2d,\vertiii{\mathcal{D}(\Phi_{\dist})},2d+\vertiii{\mathcal{D}(\Phi_{\dist})}\}=2d + \vertiii{\mathcal{D}(\Phi_{\dist})}. \] Therefore by Lemma \ref{lem:DNN_comp} \[ \mathcal{R}(\Phi_{\alpha}) \circ \mathcal{R}(\Phi_{r}^{i,n}) \in \mathcal{R}\left(\left\{\Phi \in \textbf{N}: \mathcal{D}(\Phi) = \beta_{\alpha} \odot \beta_{\dist} \odot \left(\overunderset{n}{m=1}{\odot}\left(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}\right)\right)\right\}\right), \] with $(H_{\dist}+1)n + H_{\alpha} + 2$ number of layers. Like before, we compound the previous DNN with the identity with $(H_{\dist}+1)(\sum_{i=1}^{M_1} \overline{N}_i - n ) + H_{f} + 1$ to obtain, by Lemma \ref{lem:DNN_comp} a DNN $\widehat{\Phi}_{i,n} \in \textbf{N}$ such that \[ \mathcal{R}(\widehat{\Phi}_{i,n}) = \mathcal{R}(\Phi_{\alpha}) \circ \mathcal{R}(\Phi_{r}^{i,n}), \] with \[ \mathcal{D}(\widehat{\Phi}_{i,n}) = \mathfrak{n}_{(H_{\dist}+1)(\sum_{i=1}^{M_1}\overline{N}_i-n) + H_{f} + 1} \odot \beta_{\alpha} \odot \beta_{\dist} \odot \left(\overunderset{n}{m=1}{\odot}\left(d\mathfrak{n}_{H_{\dist}+2}\boxplus\widetilde{\beta}_{\dist}\right)\right), \] and \[ \dim(\mathcal{D}(\widehat{\Phi}_{i,n})) = (H_{\dist}+1)\sum_{i=1}^{M_1} \overline{N}_i +H_{\alpha}+ H_{f} + 2. \] Moreover \[ \vertiii{\mathcal{D}(\widehat{\Phi}_{i,n})} \leq \max \{\vertiii{\mathcal{D}(\Phi_{\alpha})},2d + \vertiii{\mathcal{D}(\Phi_{\dist})}\} . \] Define $H \in \N$ as \[ H = (H_{\dist}+1)\sum_{i=1}^{M_1}\overline{N}_i + H_{\alpha} + H_f. \] We now realize a parallelization between the DNNs $\widehat{\Phi}_{i,n}$ and $\Phi_f^{i,n}$. 
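\medskip \noindent For the reader's convenience, the following sketch illustrates, purely schematically, the bookkeeping behind the operations $\odot$ (composition) and $\boxplus$ (parallelization) on architecture vectors. It is written under the assumption, consistent with the layer and width counts used above but not taken from Section \ref{Sect:3}, that composition identifies the output layer of the inner network with the input layer of the outer one, while parallelization keeps the depth and adds the remaining widths; it is an illustration, not the definition used in the proof.
\begin{verbatim}
def compose(beta_outer, beta_inner):
    # architecture of R(outer) o R(inner): the interface layer is identified,
    # so the depth is dim(beta_outer) + dim(beta_inner) - 1 (assumed convention)
    assert beta_outer[0] == beta_inner[-1]
    return beta_inner[:-1] + beta_outer

def parallelize(beta1, beta2):
    # architecture carrying the pair (R(Phi1), R(Phi2)) with a shared input:
    # same depth, hidden and output widths add (assumed convention)
    assert len(beta1) == len(beta2) and beta1[0] == beta2[0]
    return [beta1[0]] + [a + b for a, b in zip(beta1[1:], beta2[1:])]

beta_inner = [2, 8, 8, 1]                 # d = 2 inputs, two hidden layers
beta_outer = [1, 4, 1]
print(compose(beta_outer, beta_inner))    # [2, 8, 8, 1, 4, 1]
print(parallelize([2, 8, 1], [2, 6, 1]))  # [2, 14, 2]
\end{verbatim}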
By Lemma \ref{lem:DNN_para}, there exists a ReLu DNN $\overline{\Phi}_{i,n} \in \textbf{N}$ such that \[ \mathcal{R}(\overline{\Phi}_{i,n}) = (\mathcal{R}(\widehat{\Phi}_{i,n}),\mathcal{R}(\Phi_{f}^{i,n})), \] with \[ \mathcal{D}(\overline{\Phi}_{i,n}) = \mathcal{D}(\widehat{\Phi}_{i,n}) \boxplus \mathcal{D}(\Phi_{f}^{i,n}) + e_{H+2}, \] and \[ \dim(\mathcal{D}(\overline{\Phi}_{i,n})) = (H_{\dist}+1)\sum_{i=1}^{M_1} \overline{N}_i +H_{\alpha}+ H_{f} + 2, \] where \[ e_{H+2} = (0,...,0,1) \in \R^{H+2}. \] Moreover, \[ \vertiii{\mathcal{D}(\overline{\Phi}_{i,n})} \leq \vertiii{\mathcal{D}(\widehat{\Phi}_{i,n})} + \vertiii{\mathcal{D}(\Phi_{f}^{i,n})}, \] and thus \[ \vertiii{\mathcal{D}(\overline{\Phi}_{i,n})} \leq \vertiii{\mathcal{D}(\Phi_{\alpha})} + M_2\vertiii{\mathcal{D}(\Phi_{f})} + (M_2 + 1)(2d+\vertiii{\mathcal{D}(\Phi_{\dist})}). \] Let \[ \beta_{\Upsilon} = \mathcal{D}(\Upsilon), \quad \hbox{and} \quad H_{\Upsilon} = \dim(\beta_{\Upsilon}) + 2. \] Therefore, by Lemma \ref{lem:DNN_comp} it follows that \[ \mathcal{D}(\Upsilon_{i,n}) = \beta_{\Upsilon} \odot \left(\mathcal{D}(\widehat{\Phi}_{i,n}) \boxplus \mathcal{D}(\Phi_{f}^{i,n}) + e_{H+2}\right), \] and \[ \dim(\mathcal{D}(\Upsilon_{i,n})) =H+H_{\Upsilon} + 3. \] Moreover \[ \begin{aligned} \vertiii{\mathcal{D}(\Upsilon_{i,n})} &\leq \max\{\vertiii{\mathcal{D}(\Upsilon)},\vertiii{\mathcal{D}(\overline{\Phi}_{i,n})}\}\\ &\leq \vertiii{\mathcal{D}(\Upsilon)} + \vertiii{\mathcal{D}(\Phi_{\alpha})} + M_2\vertiii{\mathcal{D}(\Phi_{f})} + (M_2 + 1)(2d+\vertiii{\mathcal{D}(\Phi_{\dist})}). \end{aligned} \] Finally, from Lemma \ref{lem:DNN_sum} it follows that \[ \mathcal{D}(\Psi_{2,\widetilde{\varepsilon}}) = \overunderset{M_1}{i=1}{\boxplus}\overunderset{\overline{N}_i}{n=1}{\boxplus} \left(\beta_{\Upsilon} \odot \left(\mathcal{D}(\widehat{\Phi}_{i,n}) \boxplus \mathcal{D}(\Phi_{f}^{i,n}) + e_{H+2}\right)\right), \] and \[ \dim(\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})) = H+H_{\Upsilon} + 3. \] Moreover \begin{equation}\label{eq:psi2} \vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})} \leq \Big(\vertiii{\mathcal{D}(\Upsilon)} + \vertiii{\mathcal{D}(\Phi_{\alpha})} + M_2\vertiii{\mathcal{D}(\Phi_{f})} + (M_2 + 1)(2d+\vertiii{\mathcal{D}(\Phi_{\dist})})\Big) \sum_{i=1}^{M_1} \overline{N}_i . \end{equation} Notice from the definition of $\widetilde{C}_1$ and $\widetilde{C}_2$ that both constants are multiple of $|D|^{\frac 1q}$. Therefore, by choice of $M_1$ and $M_2$ we have \begin{equation}\label{eq:Mfinalf} M_2 = M_1 \leq B_1 |D|^{\frac{s}{q(s-1)}}\widetilde{\varepsilon}^{-\frac{s}{s-1}}, \end{equation} where $B_1>0$ is a generic constant. The choice of $\delta_f$ and the constant $\widetilde{C}_2$ implies that \begin{equation}\label{eq:deltaffinal} \delta_{f}^{-a} \leq B_2|D|^{\frac aq} \widetilde{\varepsilon}^{-a}, \end{equation} for some constant $B_2 >0$. From the choice of $\delta_{\Upsilon}$, $\delta_{\alpha}$ and \eqref{eq:Mfinalf} it follows that \begin{equation}\label{eq:deltaupsfinal} \log(\delta_{\Upsilon}^{-1}) \leq \delta_{\Upsilon}^{-1} \leq B_3|D|^{\frac 2q} \widetilde{\varepsilon}^{-1}, \end{equation} and \begin{equation} \label{eq:deltaalpfinal} \delta_{\alpha}^{-a} \leq B_4 |D|^{\frac{2a}q} \widetilde{\varepsilon}^{-a}. 
\end{equation} where $B_3,B_4>0$ are generic constants, and from the choice of $\delta_{\dist}$ and the properties of the logarithm function we have \[ \begin{aligned} \log(\delta_{\dist}^{-1}) &\leq 5|D|^{\frac 1q}\kappa_{d,\alpha}\left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(1 + \norm{f}_{L^{\infty}(D)}\right)\right)\left(M^{\frac 1s} \widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)\widetilde{\varepsilon}^{-1}\\ &\qquad+ 4\left(M^{\frac 1s} \widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)\left(M^{\frac 1s} \widetilde{C}_1 + M(\widetilde{C}_2 + C_3)+1\right). \end{aligned} \] Therefore from \eqref{eq:Mfinalf} \begin{equation}\label{eq:deltadistfinalf} \lceil \log(\delta_{\dist}^{-1})\rceil^{a} \leq B_5 |D|^{\frac{2a}q\left(1+\frac{s}{s-1}\right)} \widetilde{\varepsilon}^{-a-\frac{2as}{s-1}}, \end{equation} for some generic constant $B_5 > 0$. Note also that \begin{equation}\label{eq:Nfinalf} \sum_{i=1}^{M_1}\overline{N}_i \leq B_6 |D|^{\frac{1}{q} \left(1 + \frac{s}{s-1}\right)}\widetilde{\varepsilon}^{-\frac{s}{s-1}}, \end{equation} with $B_6 >0$. Now, from Assumptions \ref{Sup:D} and \ref{Sup:f} and inequalities \eqref{eq:psi2} and \eqref{eq:Nfinalf}, we get \[ \begin{aligned} &\vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})} \\ &\leq B_6 |D|^{\frac{1}{q} \left(1 + \frac{s}{s-1}\right)}\widetilde{\varepsilon}^{-\frac{s}{s-1}}\Big(\log(\delta_{\Upsilon}^{-1})+\delta_{\alpha}^{-a}+M_2Bd^b\delta_f^{-a} + (M_2 + 1)(2d + Bd^b\lceil\log(\delta_{\dist}^{-1})\rceil^{a})\Big). \end{aligned} \] Finally, from inequalities \eqref{eq:Mfinalf}, \eqref{eq:deltaffinal}, \eqref{eq:deltaupsfinal}, \eqref{eq:deltaalpfinal} and \eqref{eq:deltadistfinalf} we conclude that there exists $\widetilde{B}>0$ such that \[ \vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})} \leq \widetilde{B} |D|^{\frac{1}{q}\left(1 +2a+\frac{2s}{s-1}(1+a)\right)}d^b\widetilde{\varepsilon}^{-a-\frac{2s}{s-1}(1+a)}. \] This completes the proof of Proposition \ref{Prop:6p1}. \section{Proof of the Main Result}\label{Sect:8} This final section is devoted to the proof of Theorem \ref{Main}. By gathering Propositions \ref{Prop:homo} and \ref{Prop:6p1}, we now prove Theorem \ref{Main}.\\ \noindent {\bf Step 1.} Let $\alpha \in (1,2)$, $p,s \in (1,\alpha)$ such that $s< \frac{\alpha}{p}$ and $q \in \left[s,\frac{\alpha}{p}\right)$. Let Assumptions \eqref{Hg0} and \eqref{Hf0} be satisfied. Recall from Theorem \ref{teo:sol} and Lemma \ref{lem:sol2} that the solution $u$ of \eqref{eq:1.1} takes the form of \eqref{eq:solfinal}, namely \[ u(x) = \E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] + \E_x \left[ \sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)} \left[f\left(X_{\mathcal{I}(n-1)}+r_n \cdot\right)\right]\right], \qquad x \in D. \] From Propositions \ref{Prop:homo} and \ref{Prop:6p1}, for all $\varepsilon, \widetilde{\varepsilon} \in (0,1)$ there exist ReLu DNNs $\Psi_{1,\varepsilon}$ and $\Psi_{2,\widetilde{\varepsilon}}$ that satisfy \eqref{eq:2.3} and \eqref{eq:prop_f}.
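\medskip \noindent Purely as an illustration of this stochastic representation (a sketch, not the DNN construction of the paper), a plain walk-on-spheres Monte Carlo evaluation of $u(x)$ can be organized as follows; the helpers \texttt{in\_domain}, \texttt{dist\_to\_boundary}, \texttt{sample\_exit\_unit\_ball} (a sampler for the law of $X_{\sigma_{B(0,1)}}$) and \texttt{sample\_mu} (a sampler for the law $\mu$ on $B(0,1)$) are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def wos_estimate(x, g, f, alpha, kappa, in_domain, dist_to_boundary,
                 sample_exit_unit_ball, sample_mu, M1=1000, M2=10,
                 rng=np.random.default_rng(0)):
    """Schematic Monte Carlo estimate of
       u(x) = E_x[g(X_I(N))]
            + E_x[sum_n r_n^alpha * kappa * E_mu[f(X_I(n-1) + r_n .)]]."""
    total = 0.0
    for _ in range(M1):
        y = np.array(x, dtype=float)      # y plays the role of X_I(n-1)
        acc = 0.0
        while in_domain(y):
            r = dist_to_boundary(y)       # r_n = dist(X_I(n-1), boundary of D)
            inner = np.mean([f(y + r * sample_mu(rng)) for _ in range(M2)])
            acc += (r ** alpha) * kappa * inner
            y = y + r * sample_exit_unit_ball(rng)   # next point X_I(n)
        total += g(y) + acc               # the walk has left D after N steps
    return total / M1
\end{verbatim}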
For the right approximation properties of the ReLu DNNs, $\delta_{\dist}$ and $M$ will be chosen so that \[ \delta_{\dist} \leq \min\{\ell_1, \ell_2\}, \qquad M \geq \max \left\{\left\lceil\left(\frac{5C_1}{\varepsilon}\right)^{\frac{s}{s-1}}\right\rceil, \left\lceil\left(\frac{5\widetilde{C}_1}{\widetilde{\varepsilon}}\right)^{\frac{s}{s-1}}\right\rceil\right\}, \] where \[ \ell_1 := \frac{\varepsilon}{5|D|^{\frac{1}{q}}L_g} \left( 1 + M^{\frac{1}{s}}C_1 + M(C_2 + C_3)\right)^{-\left(M^{\frac{1}{s}}C_1 + M(C_2 + C_4)\right)}, \] and \[ \ell_2 := \frac{\widetilde{\varepsilon}}{5|D|^{\frac{1}{q}}\widetilde{\ell}} \left(\diam(D)^{\alpha}L_f + L_{\alpha}\left(1+\norm{f}_{L^{\infty}(D)}\right)\right)^{-1} \left(M^{\frac{1}{s}}\widetilde{C}_1 + M(\widetilde{C}_2 + C_4)\right)^{-1}. \] Recall that the constants in $\ell_1$ and $\ell_2$ are defined in Propositions \ref{Prop:homo} and \ref{Prop:6p1}. Let $\epsilon \in (0,1)$ and define the ReLu DNN $\Psi_{\epsilon}$ that satisfies for all $x \in D$ \[ \left(\mathcal{R}(\Psi_{\epsilon})\right)(x) = \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x) + \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x), \] where $\varepsilon = \widetilde{\varepsilon} = \frac{\epsilon}{2}$. From the Minkowski inequality one has \[ \begin{aligned} &\left(\int_D \left|u(x) - \left(\mathcal{R}(\Psi_{\epsilon})\right)(x)\right|^q dx\right)^{\frac{1}{q}} \\ &\qquad\leq \left(\int_D \left|\E_x \left[g\left(X_{\mathcal{I}(N)}\right)\right] - \left(\mathcal{R}(\Psi_{1,\varepsilon})\right)(x)\right|^q dx\right)^{\frac{1}{q}}\\ &\qquad \quad + \left(\int_D \left| \E_x \left[\sum_{n=1}^{N} r_n^{\alpha} \kappa_{d,\alpha} \E^{(\mu)}\left[f(X_{\mathcal{I}(n-1)}+r_n \cdot)\right]\right] - \left(\mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})\right)(x)\right|^q dx\right)^{\frac{1}{q}}\\ &\qquad\leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \end{aligned} \] \medskip \noindent {\bf Step 2.} We now study the ReLu DNN $\Psi_{\epsilon}$. For $i=1,...,M$, let $\overline{N}_{i,1}, \overline{N}_{i,2}$ be the random variables $\overline{N}_i$ found in Propositions \ref{Prop:homo} and \ref{Prop:6p1}, respectively. Recall that the DNN $\Psi_{1,\varepsilon}$ satisfies \[ \mathcal{D}(\Psi_{1,\varepsilon}) = \overset{M}{\underset{i=1}{\boxplus}} \left(\mathfrak{n}_{H_i+2} \odot \beta_g \odot \left(\overunderset{\overline{N}_{i,1}}{m=1}{\odot}(d\mathfrak{n}_{H_{\dist}+2}\boxplus \widetilde{\beta}_{\dist})\right)\right), \] where $H_{i}$ is defined as \[ H_{i} = (H_{\dist}+1)\left(\sum_{j=1}^{M} \overline{N}_{j,1} - \overline{N}_{i,1}\right) - 1, \] and \[ \dim(\mathcal{D}(\Psi_{1,\varepsilon})) = (H_{\dist}+1)\sum_{i=1}^M \overline{N}_{i,1} + H_g + 2. \] Recall also that $\Psi_{2,\widetilde{\varepsilon}}$ satisfies \[ \mathcal{D}(\Psi_{2,\widetilde{\varepsilon}}) = \overunderset{M_1}{i=1}{\boxplus}\overunderset{\overline{N}_{i,2}}{n=1}{\boxplus} \left(\beta_{\Upsilon} \odot \left(\mathcal{D}(\widehat{\Phi}_{i,n}) \boxplus \mathcal{D}(\Phi_{f}^{i,n}) + e_{H+2}\right)\right), \] where $e_{H+2}=(0,...,0,1) \in \R^{H+2}$ with $H$ defined as \[ H = (H_{\dist}+1)\sum_{i=1}^{M_1}\overline{N}_{i,2} + H_{\alpha} + H_f, \] and \[ \dim(\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})) = H+H_{\Upsilon} + 3. \] To use Lemma \ref{lem:DNN_sum}, the ReLu DNNs $\Psi_{1,\varepsilon}$ and $\Psi_{2,\widetilde{\varepsilon}}$ must have the same number of layers. We compound each DNN with a suitable ReLu DNN that represents the identity function.
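\medskip \noindent As a brief illustration of this identity trick (a sketch only, with hypothetical helper names): since $x=\max(x,0)-\max(-x,0)$, a ReLu block with $2d$ neurons reproduces the identity on $\R^d$ exactly, so such blocks can be prepended to equalize depths without changing the realization; this is also why the width bounds above only pick up terms of the form $2d$.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def identity_block(x):
    # x = relu(x) - relu(-x): an exact ReLU representation of the identity
    return relu(x) - relu(-x)

def pad_depth(network, extra_layers):
    # prepend identity blocks: more layers, same realization
    def padded(x):
        y = np.asarray(x, dtype=float)
        for _ in range(extra_layers):
            y = identity_block(y)
        return network(y)
    return padded

net = lambda x: float(np.sum(x))          # placeholder realization
deep_net = pad_depth(net, extra_layers=3)
x = np.array([1.0, -2.5])
assert np.isclose(net(x), deep_net(x))    # identical outputs
\end{verbatim}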
Define then the ReLu DNN $\overline{\Psi}_{1,\varepsilon}$ that satisfy $\mathcal{R}(\overline{\Psi}_{1,\varepsilon})=\mathcal{R}(\Psi_{1,\varepsilon})$ with\[ \mathcal{D}(\overline{\Psi}_{1,\varepsilon}) = \mathfrak{n}_{H+H_{\Upsilon}+3} \odot \mathcal{D}(\Psi_{1,\varepsilon}), \] and Define the ReLu DNN $\overline{\Psi}_{2,\widetilde{\varepsilon}}$ that satisfy $\mathcal{R}(\overline{\Psi}_{2,\widetilde{\varepsilon}}) = \mathcal{R}(\Psi_{2,\widetilde{\varepsilon}})$ with \[ \mathcal{D}(\overline{\Psi}_{2,\widetilde{\varepsilon}}) = \mathfrak{n}_{(H_{\dist}+1)\sum_{i=1}^{M}\overline{N}_{i,1}+H_g+2} \odot \mathcal{D}(\Psi_{2,\widetilde{\varepsilon}}), \] Therefore we have that $\dim(\mathcal{D}(\overline{\Psi}_{1,\varepsilon}))=\dim(\mathcal{D}(\overline{\Psi}_{2,\widetilde{\varepsilon}}))$. Moreover \[ \dim(\mathcal{D}(\overline{\Psi}_{1,\varepsilon}))= (H_{\dist}+1)\sum_{i=1}^{M}(\overline{N}_{i,1} + \overline{N}_{i,2}) +H_{\alpha}+ H_{f}+ H_g +H_{\Upsilon} + 4. \] Therefore we can use Lemma \ref{lem:DNN_sum} to obtain that $\Psi_{\epsilon}$ is a ReLu DNN such that \[ \mathcal{D}(\Psi_{\epsilon}) = \mathcal{D}(\overline{\Psi}_{1,\varepsilon}) \boxplus \mathcal{D}(\overline{\Psi}_{2,\widetilde{\varepsilon}}), \] and \[ \dim(\mathcal{D}(\Psi_{\epsilon})) = (H_{\dist}+1)\sum_{i=1}^{M}(\overline{N}_{i,1} + \overline{N}_{i,2}) +H_{\alpha}+ H_{f}+ H_g +H_{\Upsilon} + 4. \] Moreover \[ \vertiii{\mathcal{D}(\Psi_{\epsilon})} \leq \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} + \vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})}. \] Recall from Propositions \ref{Prop:homo} and \ref{Prop:6p1} that there exists $\widetilde{B}>0$ such that \[ \vertiii{\mathcal{D}(\Psi_{1,\varepsilon})} \leq \widetilde{B}|D|^{\frac{1}{q}\left(2a + ap+\frac{s}{s-1}(1+2a+ap)\right)} d^{b+2ap+2ap^2 + \frac{ps}{s-1}(1+2a + ap) }\varepsilon^{-a-\frac{s}{s-1}(1+2a+ap)}. \] and \[ \vertiii{\mathcal{D}(\Psi_{2,\widetilde{\varepsilon}})} \leq \widetilde{B} |D|^{\frac{1}{q}\left(1 +2a+\frac{2s}{s-1}(1+a)\right)}d^b\widetilde{\varepsilon}^{-a-\frac{2s}{s-1}(1+a)}. \] Therefore \[ \vertiii{\mathcal{D}(\Psi_{\epsilon})} \leq \widehat{B}|D|^{\frac{1}{q}\left(1+2a+ap+\frac{s}{s-1}(2+2a+ap)\right)}d^{b+2ap+ap^2+\frac{ps}{s-1}(1+2a + ap)} \epsilon^{-a-\frac{s}{s-1}(2+2a+ap)}, \] where $\widehat{B}>0$ is a generic constant. Theorem \ref{Main} can be concluded choosing $\eta>0$ as the maximum between $\frac{1}{q}\left(1+2a+ap+\frac{s}{s-1}(2+2a+ap)\right)$, $b+2ap+ap^2+\frac{ps}{s-1}(1+2a + ap)$ and $\frac{s}{s-1}(2+2a+ap)$. \begin{thebibliography}{99} \bibitem{Applebaum} D. Applebaum. \emph{Lévy Processes and Stochastic Calculus, 2nd ed.} Cambridge Studies in Advanced Mathematics. doi:10.1017/CBO9780511809781, 2009. \bibitem{Acosta} G. Acosta and J. P. Borthagaray. \emph{A fractional Laplace equation: regularity of solutions and finite element approximations}. SIAM Journal on Numerical Analysis, 55(2), 472-495, 2017. \bibitem{Beck1} C. Beck, S. Becker, P. Grohs, N. Jaafari and A. Jentzen. \emph{Solving the Kolmogorov PDE by Means of Deep Learning.} ournal of Scientific Computing, vol. 88, no. 3, July 2021. \url{https://doi.org/10.1007/s10915-021-01590-0}. \bibitem{Beck2} C. Beck and A. Jentzen. \emph{Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations}. Journal of Nonlinear Science, 29(4), 1563-1619, 2019. \bibitem{Berner1} J. Berner, P. Grohs and A. Jentzen. 
\emph{Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black--Scholes partial differential equations.} SIAM Journal on Mathematics of Data Science, 2(3), 631-657, 2020. \bibitem{Bertoin} J. Bertoin. \emph{Lévy Processes}. Volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge. ISBN 0-521-56243-0, 1996. \bibitem{Blumenthal1} R. Blumenthal, R. Getoor and D. Ray. \emph{On the Distribution of First Hits for the Symmetric Stable Processes}. Transactions of the American Mathematical Society, 99(3), 540–554, 1961. \url{https://doi.org/10.2307/1993561}. \bibitem{Bonito} A. Bonito, J. P. Borthagaray, R. H. Nochetto, E. Otárola and A. J. Salgado. \emph{Numerical methods for fractional diffusion}. Computing and Visualization in Science, 19(5), 19-46, 2018. \bibitem{CS} L. Caffarelli, and L. Silvestre, \emph{An extension problem related to the fractional Laplacian}, Comm. Partial Differential Equations 32 (2007), no. 7-9, 1245--1260. \bibitem{Hutz2} S. Cox, M. Hutzenthaler, A. Jentzen, J. van Neerven and T. Welti. \emph{Convergence in Hölder norms with applications to Monte Carlo methods in infinite dimensions}. IMA Journal of Numerical Analysis, 41(1), 493-548, 2021. \bibitem{beta} J, Dutka. \emph{The incomplete Beta function—a historical profile}. Archive for history of exact sciences, 11-29, 1981. \bibitem{Elb} D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei. \emph{Deep Neural Network Approximation Theory}. IEEE Transactions on Information Theory, vol. 67, no. 5, May 2021, pp. 2581–623. \url{https://doi.org/10.1109/tit.2021.3062161}. \bibitem{Getoor} R. Getoor. \emph{First Passage Times for Symmetric Stable Processes in Space}. Transactions of the American Mathematical Society, 101(1), 75–90, 1961. \url{https://doi.org/10.2307/1993412} \bibitem{Gonnon} L. Gonon and C. Schwab. \emph{Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations}. arXiv preprint arXiv:2102.11707, 2021. \bibitem{Grohs} P. Grohs and L. Herrmann. \emph{Deep neural network approximation for high-dimensional elliptic PDEs with boundary conditions}. IMA Journal of Numerical Analysis, 2021. \url{https://doi.org/10.1093/imanum/drab031}. \bibitem{Grohs2} P. Grohs, F. Hornung, A. Jentzen, and P. Von Wurstemberger. \emph{A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations}. arXiv preprint arXiv:1809.02362, 2018. \bibitem{Grohs3} P. Grohs, A. Jentzen and D. Salimova. \emph{Deep neural network approximations for Monte Carlo algorithms.} arXiv preprint arXiv:1908.10828, 2019. \bibitem{Gulian} M. Gulian, and G. Pang. \emph{Stochastic solution of elliptic and parabolic boundary value problems for the spectral fractional Laplacian.} arXiv preprint arXiv:1812.01206, 2018. \bibitem{Hornik} Kurt Hornik et al., \emph{Approximation Capabilities of Multilayer Feedforward Networks}, Neural Networks, Vol. 4, pp. 251-257. 1991. \bibitem{Hutz} M. Hutzenthaler, A. Jentzen, Thomas Kruse, Tuan Anh Nguyen, \emph{A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations}, Partial Differ. Equ. Appl. 1, 10 (2020). \bibitem{Jentz1} A. Jentzen, D. Salimova, and T. Welti. 
\emph{A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients.} Communications in Mathematical Sciences, vol. 19, no. 5, 2021, pp. 1167–205. \url{https://doi.org/10.4310/cms.2021.v19.n5.a1}. \bibitem{AK1} A. E Kyprianou, A. Osojnik and T. Shardlow, \emph{Unbiased ‘walk-on-spheres’ Monte Carlo methods for the fractional Laplacian}, IMA Journal of Numerical Analysis, Volume 38, Issue 3, July 2018, pp 1550–1578, \url{https://doi.org/10.1093/imanum/drx042}. \bibitem{AK2} A. Kyprianou and J. Pardo. \emph{Stable Lévy Processes via Lamperti-Type Representations} (Institute of Mathematical Statistics Monographs). Cambridge: Cambridge University Press, 2021. \bibitem{ProbBan} M. Ledoux and M. Talagrand. \emph{Probability in Banach Spaces: isoperimetry and processes} (Vol. 23). Springer Science \& Business Media, 1991. \bibitem{Lischke1} A. Lischke, G. Pang, M. Gulian, F. Song, C. Glusa, X. Zheng, ... and G. E. Karniadakis. \emph{What is the fractional Laplacian?}. arXiv preprint arXiv:1801.09767, 2018. \bibitem{Schilling} R. Schilling. \emph{An Introduction to L\'evy and Feller Processes}. Advanced Courses in Mathematics-CRM Barcelona 2014. arXiv preprint arXiv:1603.00251, 2016. \bibitem{ultimo} C. Sheng, B. Su and C. Xu. \emph{Efficient Monte Carlo method for integral fractional Laplacian in multiple dimensions}. arXiv preprint, arXiv:2204.08860, 2022. \bibitem{surface} D. J. Smith, and M. K. Vamanamurthy. \emph{How Small Is a Unit Ball?}. Mathematics Magazine, 62(2), 101–107, 1989. \url{https://doi.org/10.2307/2690391}. \bibitem{WE1} W. E, J. Han and A. Jentzen. \emph{Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations.} Commun. Math. Stat. 5(4), 349–380, 2017. \bibitem{WE2} W.E, J. Han and A. Jentzen. \emph{Solving high-dimensional partial differential equations using deep learning.} Proc. Natl. Acad. Sci. 115(34), 8505–8510, 2018. \end{thebibliography} \end{document}
2205.05210v5
http://arxiv.org/abs/2205.05210v5
Hilbert-type operators acting between weighted Fock spaces
\documentclass[10pt]{amsart} \usepackage{amsmath} \usepackage{amssymb} \def\beqnn{\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{problem}[theorem]{Problem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \newtheorem{remark}[theorem]{Remark} \newtheorem{claim}[theorem]{Claim} \newtheorem{statement}[theorem]{Statement} \theoremstyle{question} \newtheorem{question}[theorem]{Question} \numberwithin{equation}{section} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\blankbox}[2]{ \parbox{\columnwidth}{\centering \psetlength{\fboxsep}{0pt} \fbox{\raisebox{0pt}[#2]{\hspace{#1}}} }} \begin{document} \title{Hilbert-type operators acting between weighted Fock spaces} \author{Jianjun Jin} \address{School of Mathematics Sciences, Hefei University of Technology, Xuancheng Campus, Xuancheng 242000, P.R.China} \email{[email protected], [email protected]} \author{Shuan Tang} \address{School of Mathematics Sciences, Guizhou Normal University, Guiyang 550001, P.R.China} \email{[email protected]} \author{Xiaogao Feng} \address{College of Mathematics and Information, China West Normal University, Nanchong 637009, P.R.China} \email{[email protected]} \thanks{The first author supported by National Natural Science Foundation of China (Grant Nos. 11501157). The second author was supported by National Natural Science Foundation of China (Grant Nos. 12061022) and the foundation of Guizhou Provincial Science and Technology Department (Grant Nos. [2017]7337 and [2017]5726). The third author was supported by National Natural Science Foundation of China (Grant Nos. 11701459). } \thanks{ {\bf{Data Availability Statement:} the datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.} } \subjclass[2010]{47B35; 30H20} \keywords{Hilbert-type operators; weighted Fock spaces; boundedness of operator; compactness of operator; generalized Carleson measure} \begin{abstract} In this paper we introduce and study several new Hilbert-type operators acting between the weighted Fock spaces. We provide some sufficient and necessary conditions for the boundedness and compactness of certain Hilbert-type operators from one weighted Fock space to another. \end{abstract} \maketitle \section{Introduction and main results} In this paper we will use $C, C_1, C_2, \cdots$ to denote universal positive constants that might change from one line to another. For two positive numbers $A, B$, we write $A \preceq B$, or $A \succeq B$, if there exists a positive constant $C$ independent of $A$ and $B$ such that $A \leq C B$, or $A \geq C B$, respectively. We will write $A \asymp B$ if both $A \preceq B$ and $A \succeq B$. Let $\mathbb{C}$ be the complex plane, $\Omega$ be an open domain in $\mathbb{C}$. Let $\mathcal{H}(\Omega)$ be the class of all holomorphic functions on $\Omega$. In particular, $\mathcal{H}(\mathbb{C})$ is the class of all entire functions. The {\em Fock space}, denoted by $\mathcal{F}^{2}$, is defined as $$\mathcal{F}^{2}=\{f\in \mathcal{H}(\mathbb{C}): \|f\|_2^2:=\frac{1}{\pi}\int_{\mathbb{C}}|f(z)e^{-\frac{1}{2}|z|^2}|^2dA(z)<\infty\},$$ where $dA$ is the Lebesgue area measure, see \cite{Zhu}. For an entire function $f=\sum_{n=0}^{\infty}a_nz^n$. 
A calculation yields that $$\|f\|_2^2=2\int_{0}^{\infty}(\sum_{n=0}^{\infty}|a_n|^2r^{2n})e^{-r^2}rdr=\sum_{n=0}^{\infty}|a_n|^2 n!.$$ Thus we have $f\in \mathcal{F}^2$ if and only if $\sum_{n=0}^{\infty}|a_n|^2 n!<\infty.$ For $\theta>0$, $\alpha\in \mathbb{R}$, we introduce the {\em weighted Fock space}, which is denoted by $\mathcal{F}^{2}_{\theta, \alpha}$ and defined as $$\mathcal{F}^{2}_{\theta, \alpha}=\{f=\sum_{n=0}^{\infty} a_n z^n\in \mathcal{H}(\mathbb{C}): \|f\|_{\theta, \alpha}^2:=\sum_{n=0}^{\infty}(n+\theta)^{\alpha}|a_n|^2n!<\infty\}.$$ \begin{remark} When $\alpha=0$, $\mathcal{F}^{2}_{\theta, \alpha}$ becomes the Fock space $\mathcal{F}^{2}$. When $\alpha\in \mathbb{N}$, we know from \cite{CZ} that an entire function $f(z)$ belongs to the Fock-Sobolev space $F^{2, \alpha}$ on $\mathbb{C}$, if and only if $z^{\alpha}f(z)\in \mathcal{F}^{2}$. Then it is easy to see that $\mathcal{F}^{2}_{\theta, \alpha}$ coincides with the Fock-Sobolev space $F^{2, \alpha}$ on $\mathbb{C}$ for $\alpha\in \mathbb{N}$. \end{remark} \begin{remark} We notice that, if the coefficients of a power series $f(z)=\sum_{n=0}^{\infty} a_n z^n$ satisfy the condition $ \sum_{n=0}^{\infty}(n+\theta)^{\alpha}|a_n|^2n!<\infty$, then $f(z)$ is an entire function. \end{remark} \begin{remark} If $f(z)=\sum_{n=0}^{\infty} a_n z^n \in \mathcal{F}^{2}_{\theta, \alpha}$, then, by Cauchy-Schwartz inequality, we see that there is a positive constant $C$, depends only on $\theta, \alpha$, such that \begin{equation}|f(z)|\leq \left[\sum_{n=0}^{\infty}(n+\theta)^{\alpha}|a_n|^2n!\right]^{\frac{1}{2}}\left [\sum_{n=0}^{\infty}(n+\theta)^{-\alpha}\frac{|z|^{2n}}{n!}\right]^{\frac{1}{2}}\leq e^{C|z|^2} \|f\|_{\theta, \alpha}, \nonumber \end{equation} for all $z\in \mathbb{C}$. \end{remark} It should be pointed out that the weighted Fock space $\mathcal{F}^{2}_{\theta, \alpha}$ can be considered as a special case of the weighted Hardy spaces, which have been detailed studied in \cite{Sh}. We recall the definition of the weighted Hardy spaces. Let $\beta=\{\beta_n\}_{n=0}^{\infty}$ be a sequence of positive numbers. The weighted Hardy space $\mathcal{H}^2(\beta)$ is the class of formal complex power series $f(z)=\sum_{n=0}^{\infty}a_nz^n$ such that $$\sum_{n=0}^{\infty}|a_n|^2 \beta_n^2<\infty.$$ From \cite{Sh}, we know that, if $\beta_{n+1}/\beta_n \rightarrow 1$, when $n\rightarrow \infty,$ $\mathcal{H}^2(\beta)$ is a Hilbert space of holomorphic functions on the unit disk $\mathbb{D}$ with the inner product $$\langle f,g\rangle=\sum_{n=0}^{\infty}a_n \overline{b}_{n} \beta_n^2,$$ where $f=\sum_{n=0}^{\infty}a_nz^n$ and $g=\sum_{n=0}^{\infty}b_nz^n$. In particular, if we take $\beta_n=1$ for all $n\in \mathbb{N}_0=\mathbb{N}\cup\{0\}$, then $\mathcal{H}^2(\beta)$ reduces the classic Hardy space $H^2(\mathbb{D})$, which is a Hilbert space of holomorphic functions on the unit disk $\mathbb{D}$ with the inner product $\langle f,g\rangle=\sum_{n=0}^{\infty}a_n \overline{b}_{n},$ see \cite{Zh}. 
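As a small numerical illustration of the coefficient description of these norms (a sketch only, not needed in what follows), one may evaluate a truncation of $\|f\|_{\theta, \alpha}^2=\sum_{n=0}^{\infty}(n+\theta)^{\alpha}|a_n|^2n!$; the helper below and the test function $f(z)=e^{z/2}$ are illustrative choices.
\begin{verbatim}
import math

def weighted_fock_norm_sq(coeffs, theta, alpha):
    """Truncation of ||f||_{theta,alpha}^2 = sum (n+theta)^alpha |a_n|^2 n!
    for f(z) = sum a_n z^n, with coeffs = [a_0, a_1, ...]."""
    return sum((n + theta) ** alpha * abs(a) ** 2 * math.factorial(n)
               for n, a in enumerate(coeffs))

# f(z) = e^{z/2} has a_n = (1/2)^n / n!, so |a_n|^2 n! = 4^{-n}/n! is summable
coeffs = [0.5 ** n / math.factorial(n) for n in range(40)]
print(weighted_fock_norm_sq(coeffs, theta=1.0, alpha=0.0))  # about e^{1/4}
\end{verbatim}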
\begin{remark}Define the inner product on $\mathcal{F}_{\theta, \alpha}^{2}$ as $$\langle f,g\rangle=\sum_{n=0}^{\infty}(n+\theta)^{\alpha}a_n \overline{b}_{n} n!,$$ where $f=\sum_{n=0}^{\infty}a_nz^n$ and $g=\sum_{n=0}^{\infty}b_nz^n.$ Consider the operator $W$ on $\mathcal{F}_{\theta, \alpha}^{2}$, defined as $$Wf(z)=\sum_{n=0}^{\infty} [(n+\theta)^{\frac{\alpha}{2}}\sqrt{n!}]a_nz^n, \quad \forall f(z)=\sum_{n=0}^{\infty}a_n z^n \in \mathcal{F}_{\theta, \alpha}^{2}.$$ Then $W$ is an isometric isomorphism from $\mathcal{F}_{\theta, \alpha}^{2}$ onto the Hardy space $H^2(\mathbb{D})$. Consequently, we see that $\mathcal{F}_{\theta, \alpha}^{2}$ is a Hilbert space with the above inner product. For any nonnegative integer $n$, let $e_n(z)=(n!)^{-\frac{1}{2}}(n+\theta)^{-\frac{\alpha}{2}}z^n.$ Then we see that the set $\{e_n\}$ is an orthonormal basis of $\mathcal{F}_{\theta, \alpha}^{2}$. By the standard theory of reproducing kernel Hilbert spaces (see Theorem 2.4 in \cite{PR1}), we get that the reproducing kernel $K_{\theta, \alpha}(z,w)$ for $\mathcal{F}_{\theta, \alpha}^{2}$ is $$K_{\theta, \alpha}(z,w)=\sum_{n=0}^{\infty} e_n(z) \overline{e_n(w)}=\sum_{n=0}^{\infty}(n+\theta)^{-\alpha}\frac{(z\overline{w})^n}{n!}.$$ In particular, when $\alpha=0$, we have $K_{\theta, 0}(z,w)=e^{z\overline{w}}$; when $\theta\in \mathbb{N}, \alpha=-1$, we have $K_{\theta, -1}(z,w)=e^{z\overline{w}}(z\overline{w}+\theta).$ \end{remark} Let $\Omega$ be an open domain in $\mathbb{C}$ and $f=\sum_{k=0}^{\infty}a_k z^k \in \mathcal{H}(\Omega)$. The {\em Hilbert operator} $\mathbf{H}$, which acts on spaces of holomorphic functions through the Taylor coefficients, is defined as \begin{equation*} \mathbf{H}(f)(z):=\sum_{n=0}^\infty \left(\sum_{k=0}^{\infty} \frac{a_k}{k+n+1} \right)z^n. \end{equation*} During the last two decades, the Hilbert operator $\mathbf{H}$ and its generalizations defined on various spaces of holomorphic functions on the unit disk $\mathbb{D}$ in $\mathbb{C}$ have been investigated extensively. See, for example, [1-14], \cite{PR}, \cite{YZ1}, \cite{YZ2}. For $\lambda>0$, $f=\sum_{k=0}^{\infty}a_k z^k \in \mathcal{H}(\mathbb{C})$, we define the following {\em Hilbert-type operator $ \mathbf{H}_{\lambda}$} as \begin{equation*} \mathbf{H}_{\lambda}(f)(z):=\sum_{n=0}^\infty \left[\frac{1}{\sqrt{n!}}\sum_{k=0}^{\infty} \frac{a_k\sqrt{k!}}{(k+n)^{\lambda}+1} \right]z^n. \end{equation*} In this paper, we first study the boundedness of $\mathbf{H}_{\lambda}$ acting from one weighted Fock space to another, and obtain that \begin{theorem}\label{main-1} Let $\lambda>0, -1<\alpha, \beta<1$. Then, for any $\theta>0$, $ \mathbf{H}_{\lambda}$ is bounded from $\mathcal{F}_{\theta, \alpha}^{2}$ to $\mathcal{F}_{\theta, \beta}^{2}$ if and only if $\lambda\geq 1+\frac{1}{2}(\beta-\alpha)$. \end{theorem} Next we consider the case $\lambda=1$. We see from Theorem \ref{main-1} that $ \mathbf{H}_{1}$ is not bounded from $\mathcal{F}_{\theta, \alpha}^{2}$ to $\mathcal{F}_{\theta, \beta}^{2}$ if $\alpha<\beta$. We notice that $$\frac{1}{k+n+1}=\int_{0}^1t^{k+n}dt,\: k, n\in \mathbb{N}_0.$$ To make $ \mathbf{H}_{1}$ bounded from $\mathcal{F}_{\theta, \alpha}^{2}$ to $\mathcal{F}_{\theta, \beta}^{2}$ when $\alpha<\beta$, we replace the Lebesgue measure $dt$ in this integral representation by a general positive Borel measure $\mu$ on $[0, 1)$.
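Before doing so, we record, purely as an illustration (a truncated numerical sketch, not used in the proofs), how these operators act on Taylor coefficients; with $d\mu(t)=dt$ the moments $\mu[k+n]$ introduced below reduce to the weights $\frac{1}{k+n+1}$ of $\mathbf{H}_{1}$.
\begin{verbatim}
import math

def hilbert_lambda_coeffs(a, lam, N):
    """First N Taylor coefficients of H_lambda(f) from those of f:
    b_n = (1/sqrt(n!)) * sum_k a_k sqrt(k!) / ((k+n)^lam + 1)."""
    return [sum(a[k] * math.sqrt(math.factorial(k)) / ((k + n) ** lam + 1)
                for k in range(len(a))) / math.sqrt(math.factorial(n))
            for n in range(N)]

def hilbert_mu_coeffs(a, moment, N):
    """Same transform with weights mu[k+n] = int_0^1 t^{k+n} dmu(t)."""
    return [sum(moment(k + n) * a[k] * math.sqrt(math.factorial(k))
                for k in range(len(a))) / math.sqrt(math.factorial(n))
            for n in range(N)]

a = [1.0 / math.factorial(k) for k in range(20)]   # truncated coefficients of e^z
lebesgue_moment = lambda m: 1.0 / (m + 1)          # dmu = dt
print(hilbert_lambda_coeffs(a, lam=1.0, N=3))
print(hilbert_mu_coeffs(a, lebesgue_moment, N=3))  # coincides with the line above
\end{verbatim}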
For $f=\sum_{n=0}^{\infty}a_n z^n \in \mathcal{H}(\mathbb{C})$ and a positive Borel measure $\mu$ on $[0, 1)$, we introduce the {\em Hilbert-type operator $\mathbf{H}_{\mu}$}, which is defined as \begin{equation*} \mathbf{H}_{\mu}(f)(z):=\sum_{n=0}^\infty \left(\frac{1}{\sqrt{n!}}\sum_{k=0}^{\infty}\mu[k+n] a_k\sqrt{k!}\right)z^n, \end{equation*} where \begin{equation}\label{mu}\mu{[n]}=\int_{0}^1 t^{n}d\mu(t),\: n\in \mathbb{N}_{0}.\end{equation} We then study the problem of characterizing measures $\mu$ such that $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is bounded and prove that \begin{theorem}\label{main-2} Let $-1<\alpha, \beta<1$, $\mu$ be a finite positive Borel measure on $[0, 1)$ and $\mathbf{H}_{\mu}$ be as above. Then, for any $\theta>0$, $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is bounded if and only if $\mu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. \end{theorem} Here, for $s>0$, a positive Borel measure $\mu$ on $[0,1)$ is said to be an $s$-Carleson measure if there is a constant $C_1>0$ such that $$\mu([t, 1))\leq C_1 (1-t)^s$$ holds for all $t\in [0, 1)$. Also, we characterize measures $\mu$ such that $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is compact and show that \begin{theorem}\label{main-3} Let $-1<\alpha, \beta<1$, $\mu$ be a finite positive Borel measure on $[0, 1)$ and $\mathbf{H}_{\mu}$ be as above. Then, for any $\theta>0$, $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is compact if and only if $\mu$ is a vanishing $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. \end{theorem} Here, an $s$-Carleson measure $\mu$ on $[0, 1)$ is said to be a vanishing $s$-Carleson measure if it further satisfies $$\lim_{t\rightarrow 1^{-}}\frac{\mu([t, 1))}{(1-t)^s}=0.$$ The paper is organized as follows. The proofs of Theorems \ref{main-1}, \ref{main-2} and \ref{main-3} will be given in Sections 3, 4 and 5, respectively. Some lemmas will be proved in Section 2. Several remarks will be finally presented in the last section. \section{Some lemmas} In this section, we establish some lemmas, which will be used in the proofs of Theorems \ref{main-1}, \ref{main-2} and \ref{main-3}. \begin{lemma}\label{lem-0} For $\theta>0, -1<\alpha, \beta<1$, we define \begin{equation*} w_{\alpha, \beta}^{[1]}(n):=\sum_{k=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(n+\theta)^{\frac{1-\beta}{2}}}{(k+\theta)^{\frac{1+\alpha}{2}}},\: n\in \mathbb{N}_{0}, \end{equation*} and \begin{equation*} w_{\alpha, \beta}^{[2]}(k):=\sum_{n=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{2}}}{(n+\theta)^{\frac{1-\beta}{2}}},\: k\in \mathbb{N}_0. \end{equation*} Then \begin{equation}\label{w-1} w_{\alpha, \beta}^{[1]}(n)\leq B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})(n+\theta)^{-\beta}, n\in \mathbb{N}_{0}; \end{equation} \begin{equation}\label{w-2} w_{\alpha, \beta}^{[2]}(k)\leq B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})(k+\theta)^{\alpha}, k\in \mathbb{N}_0. \end{equation} \end{lemma} Here $B(\cdot, \cdot)$ is the Beta function, defined as $$B(u,v)=\int_{0}^{\infty}\frac{t^{u-1}}{(1+t)^{u+v}}\,dt,\: u>0,v>0.$$ It is known that $$B(u,v)=\int_{0}^{1}t^{u-1}(1-t)^{v-1}\,dt=\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}.
$$ and $B(u,v)=B(v,u)$, where $\Gamma(x) $ is the Gamma function, defined as $$\Gamma(x)=\int_{0}^{\infty}e^{-t} t^{x-1}\,dt,\: x>0.$$ For more detailed introduction to the Beta function and Gamma function, see \cite{W} . \begin{proof}[Proof of Lemma \ref{lem-0} ] Since $-1<\alpha, \beta<1$, we see that \begin{eqnarray} w_{\alpha, \beta}^{[1]}(n)\leq \int_{0}^{\infty}\frac{1}{(x+n+\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(n+\theta)^{\frac{1-\beta}{2}}}{x^{\frac{1+\alpha}{2}}}\,dx = B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})(n+\theta)^{-\beta}.\nonumber \end{eqnarray} Similarly, we can obtain that \begin{equation*} w_{\alpha, \beta}^{[2]}(k)\leq B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})(k+\theta)^{\alpha}. \end{equation*} Lemma \ref{lem-0} is proved. \end{proof} \begin{lemma}\label{lem-1} Let $\theta>0, -1<\alpha, \beta<1$. Let $\mu$ be a positive Borel measure on $[0, 1)$ and $\mu[n]$ is defined as in (\ref{mu}) for $n\in \mathbb{N}_0$. If $\mu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$, then \begin{equation}\label{mun-1}\mu[n] \preceq \frac{1}{(n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\end{equation} holds for all $n \in \mathbb{N}_0$. Furthermore, if $\mu$ is a vanishing $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$, then \begin{equation}\label{mun-2}\mu[n] =o\left(\frac{1}{(n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\right), \quad n\rightarrow \infty.\end{equation} \end{lemma} \begin{proof} For $n\in \mathbb{N}$, we get from integration by parts that \begin{eqnarray} \mu[n]=\int_{0}^1 t^{n}d\mu(t)&=&\mu([0,1))-n\int_{0}^1 t^{n-1}\mu([0, t))dt \nonumber \\ &=& n\int_{0}^1 t^{n-1}\mu([t, 1))dt.\nonumber \end{eqnarray} If $\mu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$, then we see that there is a constant $C_3>0$ such that $$\mu([t,1))\leq C_3 (1-t)^{1+\frac{1}{2}(\beta-\alpha)}$$ holds for all $t\in [0,1)$. It follows that \begin{eqnarray}\mu[n] &\leq & C_3 n\int_{0}^1 t^{n-1}(1-t)^{1+\frac{1}{2}(\beta-\alpha)}dt\nonumber \\&=&C_3 \frac{n\Gamma(n)\Gamma(2+\frac{1}{2}(\beta-\alpha))}{\Gamma(n+2+\frac{1}{2}(\beta-\alpha))} .\nonumber \end{eqnarray} By using the fact that $$\Gamma(x) = \sqrt{2\pi} x^{x-\frac{1}{2}}e^{-x}[1+r(x)],\, |r(x)|\leq e^{\frac{1}{12x}}-1,\, x>0,$$ we obtain that $$\frac{n\Gamma(n)\Gamma(2+\frac{1}{2}(\beta-\alpha))}{\Gamma(n+2+\frac{1}{2}(\beta-\alpha))} \asymp \frac{1}{n^{1+\frac{1}{2}(\beta-\alpha)}}.$$ Consequently, it is easy to see that $$\mu[n] \preceq \frac{1}{(n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}$$ holds for all $n \in \mathbb{N}_0$. By minor modifications of above arguments, we can similarly show that (\ref{mun-2}) holds if $\mu$ is a vanishing $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. Lemma \ref{lem-1} is proved. \end{proof} \begin{lemma}\label{lem-2} Let $\theta>0, -1<\alpha, \beta<1$. For $f=\sum_{k=0}^{\infty}a_k z^k \in \mathcal{H}(\mathbb{C})$, we define \begin{equation*}\mathbf{\check{H}}_{\theta}(f)(z):=\sum_{n=0}^\infty \left[\frac{1}{\sqrt{n!}}\sum_{k=0}^{\infty} \frac{a_k\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}} \right]z^n. 
\end{equation*} Then $\mathbf{\check{H}}_{\theta}$ is bounded from $\mathcal{F}^{2}_{\theta, \alpha}$ to $\mathcal{F}^{2}_{\theta, \beta}.$ \end{lemma} \begin{proof} For $f=\sum_{k=0}^{\infty}{a_k}z^k \in \mathcal{F}_{\theta, \alpha}^2$, $n\in \mathbb{N}_0$, by Cauchy's inequality, we have \begin{eqnarray}\label{g1} \lefteqn{\left |\sum_{k=0}^{\infty}\frac{a_k\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\right| \leq \sum_{k=0}^{\infty}\frac{|a_k|\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}}\nonumber \\ &=&\sum_{k=0}^{\infty} \left\{[\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}]^{\frac{1}{2}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{4}}}{(n+\theta)^{\frac{1-\beta}{4}}}\cdot |a_k|\sqrt{k!}\right\} \nonumber \\ && \quad\quad \times \left\{[\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}]^{\frac{1}{2}}\cdot \frac{(n+\theta)^{\frac{1-\beta}{4}}}{(k+\theta)^{\frac{1+\alpha}{4}}}\right\} \nonumber \\ &\leq & \left[ \sum_{k=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{2}}}{(n+\theta)^{\frac{1-\beta}{2}}}\cdot |a_k|^2k!\right]^{\frac{1}{2}} \nonumber \\ && \quad\quad \times \left[\sum_{k=0}^{\infty} \frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot\frac{(n+\theta)^{\frac{1-\beta}{2}}}{(k+\theta)^{\frac{1+\alpha}{2}}}\right]^{\frac{1}{2}}. \nonumber \end{eqnarray} Then, in view of Lemma \ref{lem-0}, we get that \begin{eqnarray} \lefteqn{\left |\sum_{k=0}^{\infty}\frac{a_k\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\right|}\nonumber \\ &\leq & [w_{\alpha, \beta}^{[1]}(n)]^{\frac{1}{2}}\left[ \sum_{k=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{2}}}{(n+\theta)^{\frac{1-\beta}{2}}}\cdot |a_k|^2k!\right]^{\frac{1}{2}}\nonumber \\ &=& [B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})]^{\frac{1}{2}} (n+\theta)^{-\frac{\beta}{2}}\nonumber \\ &&\quad\quad\quad\quad\times\left[ \sum_{k=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{2}}}{(n+\theta)^{\frac{1-\beta}{2}}}\cdot |a_k|^2k!\right]^{\frac{1}{2}}.\nonumber \end{eqnarray} Consequently, it follows from (\ref{w-2}) that \begin{eqnarray} \lefteqn{\|\mathbf{\check{H}}_{\theta}f\|_{\theta, \beta}^2= \sum_{n=0}^{\infty}(n+\theta)^{\beta}\left |\sum_{k=0}^{\infty}\frac{a_k\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\right|^2} \nonumber \\ &&\leq B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\frac{1}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}\cdot \frac{(k+\theta)^{\frac{1+\alpha}{2}}}{(n+\theta)^{\frac{1-\beta}{2}}}\cdot |a_k|^2k! \nonumber \\ &&=B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})\sum_{k=0}^{\infty}w_{\alpha, \beta}^{[2]}(k)|a_k|^{2} k!\leq [B(\frac{1+\beta}{2}, \frac{1-\alpha}{2})]^2\|f\|_{\theta, \alpha}^2. \nonumber \end{eqnarray} This means that $\mathbf{\check{H}}_{\theta}: \mathcal{F}^{2}_{\theta, \alpha} \rightarrow \mathcal{F}^{2}_{\theta, \beta}$ is bounded and Lemma \ref{lem-2} is proved. \end{proof} \section{Proof of Theorem \ref{main-1}} We first show the "if" part. 
For $f=\sum_{k=0}^{\infty}{a_k}z^k \in \mathcal{F}_{\theta, \alpha}^2$, $n\in \mathbb{N}_0$, when $\lambda\geq 1+\frac{1}{2}(\beta-\alpha)$, we have \begin{equation*} \left |\sum_{k=0}^{\infty}\frac{a_k\sqrt{k!}}{(k+n)^{\lambda}+1}\right|\leq \sum_{k=0}^{\infty}\frac{|a_k|\sqrt{k!}}{(k+n)^{1+\frac{1}{2}(\beta-\alpha)}+1}\preceq\sum_{k=0}^{\infty}\frac{|a_k|\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}. \end{equation*} Consequently, by Lemma \ref{lem-2}, we obtain that $\mathbf{{H}}_{\lambda}: \mathcal{F}^{2}_{\theta, \alpha} \rightarrow \mathcal{F}^{2}_{\theta, \beta}$ is bounded. This proves the "if" part of Theorem \ref{main-1}. Next, we shall prove the "only if" part. We will show that, if $\lambda<1+\frac{1}{2}(\beta-\alpha)$, then, for any $\theta>0$, $ \mathbf{H}_{\lambda}: \mathcal{F}_{\theta, \alpha}^{2} \rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is not bounded. Indeed, let $\varepsilon>0$ and set $f_{\varepsilon}=\sum_{k=0}^{\infty}a_k z^k$ with $$a_0=0,\, a_k=\sqrt{\varepsilon\theta^\varepsilon}(k+\theta)^{-\frac{\alpha+1+\varepsilon}{2}}\frac{1}{\sqrt{k!}},\, k\in \mathbb{N}.$$ It is easy to see that $$\|f_\varepsilon\|_{\theta, \alpha}^2=\varepsilon\theta^\varepsilon\sum_{k=1}^{\infty}(k+\theta)^{-1-\varepsilon}\leq \varepsilon\theta^\varepsilon \int_{0}^{\infty}(x+\theta)^{-1-\varepsilon}dx=1.$$ Then we have \begin{eqnarray}\label{eq--1}\|\mathbf{H}_{\lambda}f_\varepsilon\|_{\theta,\beta}^2&=&\varepsilon\theta^\varepsilon\sum_{n=0}^{\infty}(n+\theta)^{\beta}\left|\sum_{k=1}^{\infty}\frac{1}{(k+n)^{\lambda}+1}\cdot (k+\theta)^{-\frac{\alpha+1+\varepsilon}{2}}\right|^2 \\ &\succeq &\varepsilon\theta^\varepsilon\sum_{n=0}^{\infty}(n+\theta)^{\beta}\left|\sum_{k=1}^{\infty}\frac{1}{(k+n+2\theta)^{\lambda}}\cdot (k+\theta)^{-\frac{\alpha+1+\varepsilon}{2}}\right|^2 \nonumber \end{eqnarray} On the other hand, we notice that \begin{eqnarray}\label{eq--2}\lefteqn{ \sum_{k=1}^{\infty}\frac{1}{(k+n+2\theta)^{\lambda}}\cdot (k+\theta)^{-\frac{\alpha+1+\varepsilon}{2}}} \\ &\geq& \int_{1}^{\infty} \frac{1}{(x+\theta+n+\theta)^{\lambda}}\cdot (x+\theta)^{-\frac{\alpha+1+\varepsilon}{2}}\,dx\nonumber \\ &=& \int_{1+\theta}^{\infty} \frac{1}{(s+n+\theta)^{\lambda}}\cdot s^{-\frac{\alpha+1+\varepsilon}{2}}\,ds\nonumber \\ &=& (n+\theta)^{1-\lambda-\frac{\alpha+1+\varepsilon}{2}}\int_{\frac{1+\theta}{n+\theta}}^{\infty} \frac{1}{(1+t)^{\lambda}}\cdot t^{-\frac{\alpha+1+\varepsilon}{2}}\,dt \nonumber \end{eqnarray} Combining (\ref{eq--1}) and (\ref{eq--2}), we get that \begin{eqnarray}\|\mathbf{H}_{\lambda}f_\varepsilon\|_{\theta,\beta}^2&\succeq&\varepsilon\theta^\varepsilon\sum_{n=0}^{\infty}(n+\theta)^{(\beta-\alpha)+2(1-\lambda)-1-\varepsilon}\int_{\frac{1+\theta}{n+\theta}}^{\infty} \frac{1}{(1+t)^{\lambda}}\cdot t^{-\frac{\alpha+1+\varepsilon}{2}}\,dt\nonumber \\ &\geq &\varepsilon\theta^\varepsilon\sum_{n=0}^{\infty}(n+\theta)^{(\beta-\alpha)+2(1-\lambda)-1-\varepsilon}\int_{1+\frac{1}{\theta}}^{\infty} \frac{1}{(1+t)^{\lambda}}\cdot t^{-\frac{\alpha+1+\varepsilon}{2}}\,dt \nonumber \end{eqnarray} If $\lambda< 1+\frac{1}{2}(\beta-\alpha)$, we see that $(\beta-\alpha)+2(1-\lambda)>0$.
Suppose that $\mathbf{H}_{\lambda}: \mathcal{F}_{\theta, \alpha}^2 \rightarrow \mathcal{F}_{\theta, \beta}^2$ is bounded. Then there exists a constant $C_4>0$ such that \begin{eqnarray}\label{c-1}C_4&\geq&\frac{\|\mathbf{H}_{\lambda}f_{\varepsilon}\|_{\theta, \beta}^2}{\|f_{\varepsilon}\|_{\theta,\alpha}^2}\\ &\geq& \varepsilon\theta^\varepsilon\sum_{n=0}^{\infty}(n+\theta)^{(\beta-\alpha)+2(1-\lambda)-1-\varepsilon}\int_{1+\frac{1}{\theta}}^{\infty} \frac{1}{(1+t)^{\lambda}}\cdot t^{-\frac{\alpha+1+\varepsilon}{2}}\,dt. \nonumber \end{eqnarray} But when $\varepsilon<(\beta-\alpha)+2(1-\lambda)$, we have $$\sum_{n=0}^{\infty}(n+\theta)^{(\beta-\alpha)+2(1-\lambda)-1-\varepsilon}=\infty.$$ Hence (\ref{c-1}) leads to a contradiction. This implies that $\mathbf{H}_{\lambda}: \mathcal{F}_{\theta, \alpha}^2 \rightarrow \mathcal{F}_{\theta, \beta}^2$ cannot be bounded when $\lambda< 1+\frac{1}{2}(\beta-\alpha)$. This proves the "only if" part of Theorem \ref{main-1}. Theorem \ref{main-1} is now proved. \section{Proof of Theorem \ref{main-2}} \begin{proof}[Proof of "if" part of Theorem \ref{main-2}] By combining Lemma \ref{lem-1} and Lemma \ref{lem-2}, we easily see that, for any $\theta>0$, $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is bounded if $\mu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. The "if" part of Theorem \ref{main-2} is proved. \end{proof} \begin{proof}[Proof of "only if" part of Theorem \ref{main-2}] In our proof, we need the following well-known estimate (see [18, Page 54]). Let $0<w<1$. For any $c>0$, we have \begin{equation}\label{est}\sum_{n=1}^{\infty}n^{c-1}w^{2n}\asymp \frac{1}{(1-w^2)^c}. \end{equation} For any $0<w<1$, we set ${f}_w=\sum_{k=0}^{\infty} {a}_k z^k \in \mathcal{H}(\mathbb{C})$ with $${a}_k=(1-w^2)^{\frac{1}{2}}(k+\theta)^{-\frac{\alpha}{2}}w^{k}\frac{1}{\sqrt{k!}},\; k\in \mathbb{N}_0.$$ Then we see from (\ref{est}) that $\|{f}_w\|_{\theta, \alpha}^2\asymp 1.$ In view of the boundedness of $\mathbf{H}_{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$, we obtain that \begin{eqnarray}\label{boun-1} 1 &\succeq & \|\mathbf{H}_{\mu}{f}_w\|_{\theta, \beta}^2 \\ &=&\sum_{n=0}^{\infty}(n+\theta)^{\beta}\left [\sum_{k=0}^{\infty}{a}_k\sqrt{k!}\int_{0}^1t^{k+n}d\mu(t) \right]^2 \nonumber \\ &=&(1-w^2)\sum_{n=0}^{\infty}(n+\theta)^{\beta}\left[\sum_{k=0}^{\infty}(k+\theta)^{-\frac{\alpha}{2}}w^{k}\int_{0}^{1}t^{k+n}d\mu(t)\right]^{{2}} \nonumber \\ &\succeq &(1-w^2)\sum_{n=1}^{\infty}n^{\beta}\left[\sum_{k=1}^{\infty}k^{-\frac{\alpha}{2}}w^{k}\int_{w}^{1}t^{k+n}d\mu(t)\right]^{{2}}.\nonumber \end{eqnarray} On the other hand, again by (\ref{est}), we have \begin{eqnarray}\label{boun-2} \lefteqn{\sum_{n=1}^{\infty}n^{\beta}\left[\sum_{k=1}^{\infty}k^{-\frac{\alpha}{2}}w^{k}\int_{w}^{1}t^{k+n}d\mu(t)\right]^{{2}} }\\ &\geq &[\mu([w, 1))]^{2}\sum_{n=1}^{\infty}n^{\beta}\left[\sum_{k=1}^{\infty}k^{-\frac{\alpha}{2}}w^{k}\cdot w^{k+n}\right]^{{2}}\nonumber \\ &=&[\mu([w, 1))]^{2}\left[\sum_{n=1}^{\infty}n^{\beta}w^{2n}\right]\left[\sum_{k=1}^{\infty}k^{-\frac{\alpha}{2}}w^{2k}\right]^{{2}}\nonumber \\ &\asymp &[\mu([w, 1))]^{2}\cdot \frac{1}{(1-w^2)^{1+\beta}} \cdot\frac{1}{(1-w^2)^{2-\alpha}}. \nonumber \end{eqnarray} We see from (\ref{boun-1}) and (\ref{boun-2}) that $$\mu([w, 1))\preceq (1-w^2)^{1+\frac{1}{2}(\beta-\alpha)}$$ holds for all $0<w<1$.
It follows that $\mu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$ and the "only if" part is proved. \end{proof} Now, the proof of Theorem \ref{main-2} is finished. \section{Proof of Theorem \ref{main-3}} We first show the "if" part. For $f=\sum_{k=0}^{\infty}a_k z^k \in \mathcal{F}_{\theta, \alpha}^2$ and $\mathfrak{N}\in \mathbb{N}$, we consider \begin{equation*} \mathbf{H}_{\mu}^{[\mathfrak{N}]}(f)(z):=\sum_{n=0}^{\mathfrak{N}} \left[\frac{1}{\sqrt{n!}}\sum_{k=0}^{\infty}\mu[k+n]a_k\sqrt{k!}\right]z^n, z\in \mathbb{C}. \end{equation*} Then we see that $ \mathbf{H}_{\mu}^{[\mathfrak{N}]}$ is a finite rank operator and hence it is compact from $\mathcal{F}_{\theta, \alpha}^2$ to $\mathcal{F}_{\theta, \beta}^2$. By Lemma \ref{lem-1}, we see that, for any $\epsilon>0$, there is an $N_0 \in \mathbb{N}$ such that $$\mu[n] \preceq \frac{\epsilon}{(n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}}$$ holds for all $n>N_0$. Note that \begin{eqnarray}\|(\mathbf{H}_{\mu}-\mathbf{H}_{\mu}^{[\mathfrak{N}]})(f)\|_{\theta, \beta}^2=\sum_{n=\mathfrak{N}+1}^{\infty}(n+\theta)^{\beta} \left|\sum_{k=0}^{\infty}\mu[k+n]a_k\sqrt{k!}\right|^2.\nonumber \end{eqnarray} When $\mathfrak{N}>N_0$, we get that \begin{eqnarray}\|(\mathbf{H}_{\mu}-\mathbf{H}_{\mu}^{[\mathfrak{N}]})(f)\|_{\theta, \beta}^2\preceq {\epsilon}^2 \sum_{n=\mathfrak{N}+1}^{\infty}(n+\theta)^{\beta} \left|\sum_{k=0}^{\infty}\frac{a_k\sqrt{k!}}{(k+n+2\theta)^{1+\frac{1}{2}(\beta-\alpha)}} \right|^2.\nonumber\end{eqnarray} Consequently, by Lemma \ref{lem-2}, we see that, for any $\epsilon>0$, there is an $N_0 \in \mathbb{N}$ such that \begin{eqnarray}\|(\mathbf{H}_{\mu}-\mathbf{H}_{\mu}^{[\mathfrak{N}]})(f)\|_{\theta, \beta}^2\preceq {\epsilon}^2 \|f\|^2_{\theta, \alpha}\nonumber\end{eqnarray} holds for all $\mathfrak{N}>N_0$. We conclude that $\mathbf{H}_{\mu}$ is compact from $\mathcal{F}_{\theta, \alpha}^2$ to $\mathcal{F}_{\theta, \beta}^2$. This proves the "if" part. Next, we show the "only if" part. For $0<w<1$, we set $\widetilde{f}_w=\sum_{k=0}^{\infty}\widetilde{a}_k z^k$ with $$\widetilde{a}_k=(1-w^2)^{\frac{1+\alpha}{2}}w^{k} \frac{1}{\sqrt{k!}},\: k\in \mathbb{N}_0. $$ We see from (\ref{est}) that $\|\widetilde{f}_w\|_{\theta, \alpha}\asymp1$, and we conclude that $\widetilde{f}_w$ converges weakly to $0$ in $\mathcal{F}_{\theta, \alpha}^2$ as $w\rightarrow 1^{-}$. Since $\mathbf{H}_{\mu}$ is compact from $\mathcal{F}_{\theta, \alpha}^2$ to $\mathcal{F}_{\theta, \beta}^2$, we get \begin{equation}\label{com-0}\lim_{w\rightarrow {1^{-}}} \|\mathbf{H}_{\mu}(\widetilde{f}_w)\|_{\theta, \beta}=0.\end{equation} On the other hand, we have \begin{eqnarray} \|\mathbf{H}_{\mu}(\widetilde{f}_w)\|_{\theta, \beta}^2 &=&(1-w^2)^{1+\alpha}\sum_{n=0}^{\infty}(n+\theta)^{\beta}\left[\sum_{k=0}^{\infty}w^{k}\int_{0}^{1}t^{k+n}d\mu(t)\right]^{{2}}\nonumber \\ &\succeq &(1-w^2)^{1+\alpha}\sum_{n=1}^{\infty}n^{\beta}\left[\sum_{k=1}^{\infty}w^{k}\int_{w}^{1}t^{k+n}d\mu(t)\right]^{{2}}\nonumber \\ &\geq &(1-w^2)^{1+\alpha}[\mu([w, 1))]^{2}\left[\sum_{n=1}^{\infty}n^{\beta}w^{2n}\right]\left[\sum_{k=1}^{\infty}w^{2k}\right]^{{2}}\nonumber \\ &\succeq &(1-w^2)^{1+\alpha}[\mu([w, 1))]^{2}\cdot \frac{1}{(1-w^2)^{\beta+1}}\cdot \frac{1}{(1-w^2)^2}.\nonumber \end{eqnarray} Then we get that \begin{eqnarray} \mu([w, 1))\preceq \|\mathbf{H}_{\mu}(\widetilde{f}_w)\|_{\theta, \beta}(1-w^2)^{1+\frac{1}{2}(\beta-\alpha)}\nonumber.
\end{eqnarray} It follows from (\ref{com-0}) that $\mu$ is a vanishing $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. This proves the "only if" part of Theorem \ref{main-3}, and the proof of Theorem \ref{main-3} is completed. \section{Final remarks} \begin{remark} For $\lambda>0$ and $f=\sum_{k=0}^{\infty}a_k z^k\in \mathcal{H}(\mathbb{D})$, we define the {\em Hilbert-type operator} $\mathbf{\widehat{H}}_{\lambda}$ as \begin{equation*} \mathbf{\widehat{H}}_{\lambda}(f)(z):=\sum_{n=0}^\infty \left(\sum_{k=0}^{\infty} \frac{a_k}{(k+n+1)^{\lambda}} \right)z^n. \end{equation*} When $\lambda=1$, we get the classical Hilbert operator. We will next study the boundedness of $\mathbf{\widehat{H}}_{\lambda}$ acting on certain spaces of holomorphic functions on $\mathbb{D}$. Let $p$ be a positive number and $X_p$ be a Banach space of holomorphic functions on $\mathbb{D}$. For any $f\in X_p$, we assume that the norm $\|f\|_{X_{p}}$ of $f$ is determined by $f$, $p$ and finitely many other parameters $\beta_1, \beta_2, \cdots, \beta_m$, where $m\in \mathbb{N}_0$ ($m=0$ means that there is no extra parameter). For any $f=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{H}(\mathbb{D})$ such that $\{a_n\}_{n=0}^{\infty}$ is a decreasing sequence of non-negative real numbers, we say that $X_p$ has the {\em property $(\star)$} if there exists a function $\mathbf{G}_X=\mathbf{G}_{X}(p, \beta_1, \beta_2, \cdots, \beta_m)$ with $\mathbf{G}_X>-1$ such that $f \in X_p$ if and only if $$\sum_{n=1}^{\infty} n^{\mathbf{G}_X}a_n^{p}<\infty.$$ We list several classical spaces of holomorphic functions on $\mathbb{D}$, which have the property $(\star)$. Let $f=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{H}(\mathbb{D})$, where $\{a_n\}_{n=0}^{\infty}$ is a decreasing sequence of non-negative real numbers. For example: (1) For the Hardy space $H^p(\mathbb{D})$, $1<p<\infty$, we know that $f \in H^p(\mathbb{D})$ if and only if $$\sum_{n=1}^{\infty} n^{p-2}a_n^{p}<\infty.$$ See \cite{Pa}. (2) For $1<p<\infty$, let $p-2<\alpha\leq p-1$. It holds that $f \in D^p_{\alpha}(\mathbb{D})$ if and only if $$\sum_{n=1}^{\infty} n^{2p-3-\alpha}a_n^{p}<\infty.$$ Here $D^p_{\alpha}(\mathbb{D})$ is the Dirichlet-type space, defined as $$D^p_{\alpha}(\mathbb{D})=\left\{f\in \mathcal{H}({\mathbb{D}}): \|f\|_{D^p_{\alpha}}=|f(0)|+\left[\int_{\mathbb{D}}|f'(z)|^p(1-|z|^2)^{\alpha}dA(z)\right]^{\frac{1}{p}}<\infty\right\}.$$ See \cite{GM-2}. (3) For $1<p<\infty$, when $-1<\alpha< p-2$, we have $f \in A^p_{\alpha}(\mathbb{D})$ if and only if $$\sum_{n=1}^{\infty} n^{2p-3-\alpha}a_n^{p}<\infty.$$ Here $A^p_{\alpha}(\mathbb{D})$ is the Bergman space, defined as $$A^p_{\alpha}(\mathbb{D})=\left\{f\in \mathcal{H}({\mathbb{D}}): \|f\|_{A^p_{\alpha}}=\left[(\alpha+1)\int_{\mathbb{D}}|f(z)|^p(1-|z|^2)^{\alpha}dA(z)\right]^{\frac{1}{p}}<\infty\right\}.$$ See \cite{GM-2}. We obtain the following result. \begin{proposition}\label{rem} Let $\lambda$, $p$ be two positive numbers, $X_p$ be a Banach space of holomorphic functions on $\mathbb{D}$ having the property $(\star)$ and $\mathbf{\widehat{H}}_{\lambda}$ be as above. Then a necessary condition for $\mathbf{\widehat{H}}_{\lambda}: X_p\rightarrow X_p$ to be bounded is that $\lambda\geq 1$. \end{proposition} \begin{proof} It is enough to prove that $\mathbf{\widehat{H}}_{\lambda}: X_p\rightarrow X_p$ cannot be bounded if $0<\lambda<1$.
Let $\varepsilon>0$ and set $\widehat{f}_{\varepsilon}=\sum_{k=0}^{\infty}\widehat{a}_k z^k$ with $\widehat{a}_0=(\frac{\varepsilon}{1+\varepsilon})^{\frac{1}{p}},\, \widehat{a}_k=(\frac{\varepsilon}{1+\varepsilon})^{\frac{1}{p}}k^{-\frac{\mathbf{G}_X+1+\varepsilon}{p}},\, k\geq 1.$ It is easy to see that $\{\widehat{a}_k\}_{k=0}^{\infty}$ is a decreasing sequence and $\sum_{k=1}^{\infty}k^{\mathbf{G}_{X}}\widehat{a}_k^{p}<\infty.$ Hence $\widehat{f}_\varepsilon \in X_p$. Set $$b_n=\sum_{k=0}^{\infty}\frac{\widehat{a}_k}{(k+n+1)^{\lambda}}, \, n\in \mathbb{N}_0.$$ We suppose that $\mathbf{\widehat{H}}_{\lambda}: X_p\rightarrow X_p$ is bounded. Then, by the fact that $\{b_n\}_{n=0}^{\infty}$ is a decreasing sequence, we see that $\widehat{f}=\sum_{n=0}^{\infty}b_n z^n \in X_p$ and hence $$\sum_{n=1}^{\infty} n^{\mathbf{G}_X} b_n^p <\infty.$$ Then \begin{eqnarray}\label{e-1} \infty &>& \sum_{n=1}^{\infty} n^{\mathbf{G}_X} \left[\sum_{k=0}^{\infty}\frac{\widehat{a}_k}{(k+n+1)^{\lambda}}\right]^p \\ &\geq& \frac{\varepsilon}{1+\varepsilon} \sum_{n=1}^{\infty} n^{\mathbf{G}_X} \left[\sum_{k=1}^{\infty}\frac{1}{(k+n+1)^{\lambda}} \cdot k^{-\frac{\mathbf{G}_X+1+\varepsilon}{p}}\right]^p\nonumber \end{eqnarray} On the other hand, from the fact that $k+n+1\leq 2(k+n)$ for all $k ,n \in \mathbb{N}$, we see that \begin{eqnarray}\label{e-2} \lefteqn{\sum_{k=1}^{\infty}\frac{1}{(k+n+1)^{\lambda}} \cdot k^{-\frac{\mathbf{G}_X+1+\varepsilon}{p}}}\\ &\geq& \frac{1}{2^{\lambda}}\sum_{k=1}^{\infty}\frac{1}{(k+n)^{\lambda}} \cdot k^{-\frac{\mathbf{G}_X+1+\varepsilon}{p} }\nonumber \\ &\geq& \frac{1}{2^{\lambda}}\int_{1}^{\infty} \frac{1}{(x+n)^{\lambda}} \cdot x^{-\frac{\mathbf{G}_X+1+\varepsilon}{p} }\,dx \nonumber \\ & =& n^{(1-\lambda)-\frac{\mathbf{G}_X+1+\varepsilon}{p}} \cdot \frac{1}{2^{\lambda}}\int_{\frac{1}{n}}^{\infty} \frac{1}{(1+s)^{\lambda}}\cdot s^{-\frac{\mathbf{G}_X+1+\varepsilon}{p} }\,ds \nonumber \\ &\geq& n^{(1-\lambda)-\frac{\mathbf{G}_X+1+\varepsilon}{p}}\cdot \frac{1}{2^{\lambda}} \int_{1}^{\infty} \frac{1}{(1+s)^{\lambda}}\cdot s^{-\frac{\mathbf{G}_X+1+\varepsilon}{p} }\,ds \nonumber \end{eqnarray} Combining (\ref{e-1}) and (\ref{e-2}), we get that \begin{eqnarray}\label{e-3} \infty &>& \frac{\varepsilon}{1+\varepsilon} \left[\sum_{n=1}^{\infty}n^{p(1-\lambda)-1-\varepsilon}\right]\cdot \left[\frac{1}{2^{\lambda}} \int_{1}^{\infty} \frac{1}{(1+s)^{\lambda}}\cdot s^{-\frac{\mathbf{G}_X+1+\varepsilon}{p} }\,ds\right]^p.\end{eqnarray} If $\lambda< 1$ and $\varepsilon<p(1-\lambda)$, we have $\sum_{n=1}^{\infty}n^{p(1-\lambda)-1-\varepsilon}=\infty.$ Thus (\ref{e-3}) is a contradiction. This means that $\mathbf{\widehat{H}}_{\lambda}: X_p\rightarrow X_p$ cannot be bounded if $\lambda< 1$. Proposition \ref{rem} is proved. \end{proof} \end{remark} \begin{remark} From Theorem \ref{main-1}, we know that $ \mathbf{H}_{\lambda}$ is not bounded from $\mathcal{F}_{\theta, \alpha}^{2}$ to $\mathcal{F}_{\theta, \beta}^{2}$ if $0<\lambda <1+\frac{1}{2}(\beta-\alpha)$. On the other hand, we notice that, for $\lambda>0$, it holds that $$\frac{1}{(k+n)^{\lambda}+1}\asymp \frac{1}{(k+n+1)^{\lambda}}\asymp\int_{0}^1t^{k+n}(1-t)^{\lambda-1}dt$$ for all $k, n\in \mathbb{N}_0$. To make $\mathbf{H}_{\lambda}$ bounded from $\mathcal{F}_{\theta, \alpha}^{2}$ to $\mathcal{F}_{\theta, \beta}^{2}$ when $0<\lambda <1+\frac{1}{2}(\beta-\alpha)$, we modify the operator as follows.
For $\lambda>0, f=\sum_{n=0}^{\infty}a_n z^n \in \mathcal{H}(\mathbb{C})$, let $\mu$ be a positive Bore measure on $[0, 1)$, we define a new {\em Hilbert-type operator $\mathbf{H}_{\lambda}^{\mu}$} as \begin{equation*} \mathbf{H}_{\lambda}^{\mu}(f)(z):=\sum_{n=0}^\infty \left(\frac{1}{\sqrt{n!}}\sum_{k=0}^{\infty}\mu_{\lambda}[k+n] a_k\sqrt{k!}\right)z^n. \end{equation*} Here \begin{equation*}\mu_{\lambda}{[k+n]}=\int_{0}^1 t^{k+n}(1-t)^{\lambda-1}d\mu(t),\: k, n\in \mathbb{N}_{0}.\end{equation*} In view of the proofs of Theorem \ref{main-2} and \ref{main-3}, we can get the following result, which is a generalization of Theorem \ref{main-2} and \ref{main-3}. \begin{proposition} Let $\lambda>0, -1<\alpha, \beta<1$. Let $\mu$ be a positive Bore measure on $[0, 1)$ such that $d\nu(t)=(1-t)^{\lambda-1}d\mu(t)$ is a finite measure on $[0, 1)$, and $\mathbf{H}_{\lambda}^{\mu}$ be as above. Then, for any $\theta>0$, it holds that (1) $\mathbf{H}_{\lambda}^{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is bounded if and only if $\nu$ is a $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$; (2) $\mathbf{H}_{\lambda}^{\mu}: \mathcal{F}_{\theta, \alpha}^{2}\rightarrow \mathcal{F}_{\theta, \beta}^{2}$ is compact if and only if $\nu$ is a vanishing $[1+\frac{1}{2}(\beta-\alpha)]$-Carleson measure on $[0, 1)$. \end{proposition} \end{remark} \section*{Acknowledgement} We thank the referee for invaluable comments and useful suggestions on this paper. \begin{thebibliography}{99} \bibitem{BW} Bao G., Wulan H., {\em Hankel matrices acting on Dirichlet spaces}, J. Math. Anal. Appl., 409(2014), pp. 228-235. \bibitem{BK} Bo\v{z}in V., Karapetrovi\'{c}, B., {\em Norm of the Hilbert matrix on Bergman spaces}, J. Funct. Anal., 274 (2018), no. 2, pp. 525-543. \bibitem{CGP}Chatzifountas C., Girela D., Pel\'{a}ez J., {\em A generalized Hilbert matrix acting on Hardy spaces}, J. Math. Anal. Appl., 413(2014), pp. 154-168. \bibitem{CZ}Cho H., Zhu K., {\em Fock–Sobolev spaces and their Carleson measures}, Journal of Functional Analysis, 263 (2012), pp. 2483-2506. \bibitem{D-1} Diamantopoulos E., {\em Operators induced by Hankel matrices on Dirichlet spaces}, Analysis (Munich), 24 (2004), no. 4, pp. 345-360. \bibitem{D}Diamantopoulos E., {\em Hilbert matrix on Bergman spaces}. Illinois J. Math., 48 (2004), no. 3, pp. 1067-1078. \bibitem{DS}Diamantopoulos E., Siskakis Aristomenis G., {\em Composition operators and the Hilbert matrix}, Studia Math., 140 (2000), no. 2, pp. 191-198. \bibitem{DJV} Dostani\'{c} M., Jevti\'{c}, M., Vukoti\'{c}, D., {\em Norm of the Hilbert matrix on Bergman and Hardy spaces and a theorem of Nehari type}, J. Funct. Anal., 254 (2008), no. 11, pp. 2800-2815. \bibitem{GGPS}Galanopoulos P., Girela D., Pel\'{a}ez J., Siskakis Aristomenis G., {\em Generalized Hilbert operators}, Ann. Acad. Sci. Fenn. Math., 39 (2014), no. 1, pp. 231-258. \bibitem{GP}Galanopoulos P., Pel\'{a}ez J., {\em A Hankel matrix acting on Hardy and Bergman spaces}, Studia Math., 200(2010), pp. 201-220. \bibitem{GM}Girela D., Merch\'{a}n N., {\em A generalized Hilbert operator acting on conformally invariant spaces}, Banach Journal of Mathematical Analysis, 12(2018), pp. 374-398. \bibitem{GM-2}Girela D., Merch\'{a}n N., {\em Hankel matrices acting on the Hardy space $H^1$ and on Dirichlet spaces}, Revista Matematica Complutense, 32(2019), pp. 799-822. \bibitem{Ka} Karapetrovi\'{c} B., {\em Hilbert matrix and its norm on weighted Bergman spaces}, J. Geom. Anal., 31 (2021), no. 6, pp. 
5909-5940. \bibitem{LS} Li S., Stevi\'{c} S., {\em Generalized Hilbert operator and Fejér-Riesz type inequalities on the polydisc}, Acta Math. Sci. Ser. B (Engl. Ed.), 29(2009), no. 1, pp. 191-200. \bibitem{PR1}Paulsen V., Raghupathi M., {\em An Introduction to the Theory of Reproducing Kernel Hilbert Spaces}, Cambridge University Press, Cambridge, 2016. \bibitem{Pa} Pavlovi\'{c} M., {\em Introduction to function spaces on the disk}, Matemati\v{c}ki Institut SANU, Belgrade, 2004. \bibitem{PR}Pel\'{a}ez J., R\"{a}tty\"{a} J., {\em Generalized Hilbert operators on weighted Bergman spaces}. Adv. Math., 240 (2013), pp. 227-267. \bibitem{Sh} Shields A., {\em Weighted shift operators and analytic function theory}, in Topics in Operator Theory, Math. Surveys, No. 13, Amer. Math. Soc., Providence, 1974. \bibitem{W} Wang Z., Gua D., {\em An Introduction to Special Functions}, Science Press, Beijing, 1979. \bibitem{YZ1}Ye S., Zhou, Z., {\em A derivative-Hilbert operator acting on the Bloch space}, Complex Anal. Oper. Theory, 15 (2021), no. 5, Paper No. 88, 16 pp. \bibitem{YZ2}Ye S., Zhou, Z., {\em A Derivative-Hilbert operator acting on Bergman spaces}, J. Math. Anal. Appl., 506(2022), 125553, pp. 1-18. \bibitem{Zh}Zhu K., {\em Operator Theory in Function Spaces}, Marcel Dekker, New York, 1990. \bibitem{Zhu} Zhu K., {\em Analysis on Fock spaces}, Springer, New York, 2012. \end{thebibliography} \end{document}
2205.05164v1
http://arxiv.org/abs/2205.05164v1
C*-module operators which satisfy in the generalized Cauchy--Schwarz type inequality
\documentclass[12pt, reqno]{amsart} \UseRawInputEncoding \usepackage{amsmath, amsthm, amscd, amsfonts, amssymb, graphicx, color} \usepackage[bookmarksnumbered, colorlinks, plainpages]{hyperref} \input{mathrsfs.sty} \hypersetup{colorlinks=true,linkcolor=red, anchorcolor=green, citecolor=cyan, urlcolor=red, filecolor=magenta, pdftoolbar=true} \textheight 22.5truecm \textwidth 14.5truecm \setlength{\oddsidemargin}{0.35in}\setlength{\evensidemargin}{0.35in} \setlength{\topmargin}{-.5cm} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{summary}[theorem]{Summary} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{problem}[theorem]{Problem} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \numberwithin{equation}{section} \begin{document} \setcounter{page}{1} \title[$C^*$-module operators which satisfy in GCSI] {$C^*$-module operators which satisfy in the generalized Cauchy--Schwarz type inequality} \author[A.~Zamani] {Ali Zamani} \address{School of Mathematics and Computer Sciences, Damghan University, P.O.BOX 36715-364, Damghan, Iran} \email{[email protected]} \subjclass[2010]{46L08; 47A63} \keywords{Hilbert $C^*$-module, hyponormal operators, operator inequality, Cauchy--Schwarz inequality.} \begin{abstract} Let $\mathcal{L}(\mathscr{H})$ denote the $C^*$-algebra of adjointable operators on a Hilbert $C^*$-module $\mathscr{H}$. We introduce the generalized Cauchy--Schwarz inequality for operators in $\mathcal{L}(\mathscr{H})$ and investigate various properties of operators which satisfy the generalized Cauchy--Schwarz inequality. In particular, we prove that if an operator $A\in\mathcal{L}(\mathscr{H})$ satisfies the generalized Cauchy--Schwarz inequality such that $A$ has the polar decomposition, then $A$ is paranormal. In addition, we show that if for $A$ the equality holds in the generalized Cauchy--Schwarz inequality, then $A$ is cohyponormal. Among other things, when $A$ has the polar decomposition, we prove that $A$ is semi-hyponormal if and only if $\big\|\langle Ax, y\rangle\big\| \leq \big\|{|A|}^{1/2}x\big\|\big\|{|A|}^{1/2}y\big\|$ for all $x, y \in\mathscr{H}$. \end{abstract} \maketitle \section{Introduction and preliminaries} One of the most important inequalities in mathematics is the Cauchy--Schwarz inequality. Its most familiar version states that in a Hilbert space $\big(X, [\cdot, \cdot]\big)$, it holds \begin{align*} |[\zeta, \eta]| \leq \|\zeta\|\|\eta\| \qquad (\zeta, \eta \in X). \end{align*} This classical inequality has a lot of elegant applications, for instance, in classical and modern analysis, partial differential equations, multivariable calculus and probability theory. There are also several extensions of this inequality in various settings for different objects, see \cite{A.B.F.M} and references therein. Some generalizations of the Cauchy-Schwarz inequality in Hilbert spaces can be found in \cite{B.D, C.K.K, Wat}. The notion of Hilbert $C^*$-module is a generalization of that of Hilbert space in which the inner product takes its values in a $C^*$-algebra instead of the complex field. 
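For instance, every $C^*$-algebra $\mathscr{A}$ is itself a Hilbert $\mathscr{A}$-module with respect to the $\mathscr{A}$-valued inner product $\langle a, b\rangle := a^*b$, and every Hilbert space is a Hilbert $\mathbb{C}$-module.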
We use $\mathcal{L}(\mathscr{H})$ to denote the $C^*$-algebra of adjointable operators on a Hilbert $C^*$-module $\big(\mathscr{H}, \langle \cdot, \cdot\rangle\big)$. Let $\mathcal{L}(\mathscr{H})_{+}$ be the set of positive elements in $\mathcal{L}(\mathscr{H})$. For any $A\in\mathcal{L}(\mathscr{H})$, the range and the null space of $A$ are denoted by $\mathcal{R}(A)$ and $\mathcal{N}(A)$, respectively. An operator $A\in\mathcal{L}(\mathscr{H})$ is said to be normal if $AA^* = A^*A$, cohyponormal if $|A|^2 \leq |A^*|^2$, semi-hyponormal if $|A^*| \leq |A|$ and paranormal if $\|Ax\|^2 \leq \|A^2x\|\|x\|$ for every $x \in \mathscr{H}$. For details about $C^*$-algebras and Hilbert $C^*$-modules we refer the reader to \cite{Lan, Wegge}. A version of the Cauchy--Schwarz inequality in $\mathscr{H}$ appeared in \cite{Lan} as follows: \begin{align}\label{I.1.1} \|\langle x, y\rangle\|\leq \|x\|\|y\| \qquad (x, y \in \mathscr{H}). \end{align} Davis \cite{Dav}, Joi\c{t}a \cite{Joi}, Ilisevi\'c--Varo\v{s}anec \cite{I.V}, Aramba\v{s}i\'{c}--Baki\'{c}--Moslehian \cite{A.B.M} and Fujii--Fujii--Seo \cite{F.F.S} have investigated the Cauchy-Schwarz inequality and its various reverses in the framework of $C^*$-algebras and Hilbert $C^*$-modules. Although Hilbert $C^*$-modules generalize Hilbert spaces some fundamental properties of Hilbert spaces are no longer valid in Hilbert $C^*$-modules in their full generality. For instance, norm-closed or even orthogonally closed submodules may not be orthogonal summands, and an adjointable operator between Hilbert $C^*$-modules may have no polar decomposition, cf. \cite{Wegge}. Therefore, when we are studying Hilbert $C^*$-modules, it is always of interest under which conditions the results analogous to those for Hilbert spaces can be reobtained, as well as which more general situations might appear. In this paper, by using some ideas of \cite{C.K.K, Wat}, we introduce the generalized Cauchy--Schwarz inequality for operators in $\mathcal{L}(\mathscr{H})$ and investigate various properties of operators which satisfy the generalized Cauchy-Schwarz inequality. In particular, we prove that if an operator $A\in\mathcal{L}(\mathscr{H})$ satisfies the generalized Cauchy--Schwarz inequality, then $\mathcal{N}(A) = \mathcal{N}(A^2)$. In addition, we show that if for $A$ the equality holds in the generalized Cauchy--Schwarz inequality, then $A$ is cohyponormal. Among other things, when $A$ has the polar decomposition, we prove that $A$ is semi-hyponormal if and only if $\big\|\langle Ax, y\rangle\big\| \leq \big\|{|A|}^{1/2}x\big\|\big\|{|A|}^{1/2}y\big\|$ for all $x, y \in\mathscr{H}$. Some other related results are also discussed. \section{Main Results} Throughout this section, $\mathscr{A}$ is a $C^*$-algebra and $\mathscr{H}$ is a Hilbert $\mathscr{A}$-module. We start our work with the following definition. \begin{definition}\label{D.2.1} An operator $A\in \mathcal{L}(\mathscr{H})$ is said to satisfy the generalized Cauchy--Schwarz inequality if there exists $\lambda \in (0, 1)$ such that \begin{align}\label{I.D.2.1} \|\langle Ax, y\rangle\|\leq (\|Ax\|\|y\|)^{\lambda}(\|Ay\|\|x\|)^{1 - \lambda} \qquad (x, y \in \mathscr{H}). \end{align} \end{definition} \begin{notation}\label{N.2.3} The collection of adjointable operators on $\mathscr{H}$ which satisfy the generalized Cauchy--Schwarz inequality is denoted by $GCSI(\mathscr{H})$. 
\end{notation} \begin{remark}\label{R.2.2} When $A$ is the identity operator on $\mathscr{H}$, the inequality in Definition \ref{D.2.1} turns into \eqref{I.1.1}, that is, we arrive at the usual Cauchy--Schwarz inequality in $\mathscr{H}$. \end{remark} \begin{remark} Let $A\in GCSI(\mathscr{H})$. Then there exists $\lambda\in(0,1)$ such that \eqref{I.D.2.1} holds. If $\lambda'\in (\lambda, 1)$, then \eqref{I.D.2.1} is also valid for $\lambda'$. Indeed, there is $\alpha\in (0,1)$ such that $\lambda'=\alpha \lambda+(1-\alpha)$. By \eqref{I.D.2.1} we get \begin{align}\label{a11} \|\langle Ax, y\rangle\|^\alpha\leq (\|Ax\|\|y\|)^{\alpha\lambda}(\|Ay\|\|x\|)^{\alpha(1 - \lambda)} \qquad (x, y \in \mathscr{H}). \end{align} Also by \eqref{I.1.1} we have \begin{align}\label{a12} \|\langle Ax,y\rangle \|^{(1-\alpha)}\leq (\|Ax\|\|y\|)^{(1-\alpha)} \qquad (x, y \in \mathscr{H}). \end{align} From \eqref{a11} and \eqref{a12} we obtain \begin{align*} \|\langle Ax,y\rangle\|\leq (\|Ax\|\|y\|)^{\alpha\lambda+(1-\alpha)}(\|Ay\|\|x\|)^{\alpha(1-\lambda)} \qquad (x, y \in \mathscr{H}). \end{align*} Now, since $\alpha(1-\lambda) = 1 - \lambda'$, by the above inequality we have \begin{align*} \|\langle Ax, y\rangle\|\leq (\|Ax\|\|y\|)^{\lambda'}(\|Ay\|\|x\|)^{1 - \lambda'} \qquad (x, y \in \mathscr{H}). \end{align*} \end{remark} \begin{remark}\label{R.2.2.5} Let $\mathscr{A} = \mathbb{C}$, $\mathscr{H} = \mathbb{C}^3$ and put \begin{align*} A = \left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\\ \end{array}\right) \in \mathcal{L}(\mathscr{H}). \end{align*} Then simple computations show that $A\notin GCSI(\mathscr{H})$ (see also Theorem \ref{T.2.5}). \end{remark} For an arbitrary Hilbert $C^*$-module $\mathscr{H}$ there are many examples of operators in $GCSI(\mathscr{H})$. \begin{proposition}\label{T.2.4} Let $A\in \mathcal{L}(\mathscr H)$ be normal. Then \begin{align*} \|\langle Ax, y\rangle\|\leq (\|Ax\|\|y\|)^{\lambda}(\|Ay\|\|x\|)^{1-\lambda} \qquad (x, y \in \mathscr{H}), \end{align*} for every $\lambda\in (0,1)$. \end{proposition} \begin{proof} Let $\lambda\in (0,1)$. Since $A$ is normal, for every $z \in \mathscr{H}$, we have $\|A^*z\|=\|Az\|$. Therefore, \begin{align*} \|\langle Ax, y\rangle\|=\|\langle x,A^*y\rangle\|\leq \|x\|\|A^*y\|=\|x\|\|Ay\|, \end{align*} for all $x, y \in \mathscr{H}$. Thus \begin{align}\label{sc11} \|\langle Ax,y\rangle\|^{1-\lambda}\leq (\|x\|\|Ay\|)^{1-\lambda} \qquad(x, y\in \mathscr H). \end{align} Also by \eqref{I.1.1} we have \begin{align}\label{sc22} \|\langle Ax,y\rangle\|^{\lambda}\leq (\|Ax\|\|y\|)^{\lambda} \qquad(x, y\in \mathscr H). \end{align} Utilizing \eqref{sc11} and \eqref{sc22}, we deduce the desired result. \end{proof} In the following proposition, we provide some evident properties of $GCSI(\mathscr{H})$ deduced from Definition \ref{D.2.1}. \begin{proposition}\label{P.2.4.1} The following statements hold. \begin{itemize} \item[(i)] $GCSI(\mathscr{H})$ is closed in the strong topology for adjointable operators. \item[(ii)] If $A\in GCSI(\mathscr{H})$, then $\alpha A\in GCSI(\mathscr{H})$ for all $\alpha \in \mathbb{C}$. \item[(iii)] If $A\in GCSI(\mathscr{H})$ is invertible, then $A^{-1}\in GCSI(\mathscr{H})$. \end{itemize} \end{proposition} The following auxiliary lemmas are needed for our investigation. \begin{lemma}\cite[Theorem 3.2]{Lan}\label{L.2.1} Let $A\in \mathcal{L}(\mathscr{H})$.
Then \begin{itemize} \item[(i)] $\mathcal{R}(A)$ is closed if and only if $\mathcal{R}(A^*)$ is closed, and in this case, $\mathcal{R}(A)$ and $\mathcal{R}(A^*)$ are orthogonally complemented with $\mathcal{R}(A) = {\mathcal{N}(A^*)}^{\perp}$ and $\mathcal{R}(A^*) = {\mathcal{N}(A)}^{\perp}$. \item[(ii)] $\mathcal{N}(A) = \mathcal{N}(|A|)$, $\mathcal{N}(A^*) = {\mathcal{R}(A)}^{\perp}$ and ${\mathcal{N}(A^*)}^{\perp} = {\mathcal{R}(A)}^{\perp\perp} \supseteq \overline{\mathcal{R}(A)}$. \end{itemize} \end{lemma} \begin{lemma}\label{L.2.2} Let $A\in \mathcal{L}(\mathscr{H})_{+}$. Then $\mathcal{N}(A^t) = \mathcal{N}(A)$ for any $t> 0$. \end{lemma} \begin{proof} For every $t\in (0, 1)$, it is proved in \cite[Lemma 2.2]{V.M.X} that $\mathcal{N}(A^t) = \mathcal{N}(A)$. Now, let $t\geq 1$. Put $B = A^t$. Then $B\in \mathcal{L}(\mathscr{H})_{+}$ and so $\mathcal{N}\big(B^{1/t}\big) = \mathcal{N}(B)$. Thus \begin{align*} \mathcal{N}(A) = \mathcal{N}\big(B^{1/t}\big) = \mathcal{N}(B) = \mathcal{N}(A^t). \end{align*} \end{proof} Here we present one of our main results. \begin{theorem}\label{T.2.5} Let $A\in GCSI(\mathscr{H})$. Then $\mathcal{N}(A) = \mathcal{N}(A^2)$. \end{theorem} \begin{proof} Since $A\in GCSI(\mathscr{H})$, there exists $\lambda \in (0, 1)$ such that (\ref{I.D.2.1}) holds. Now, assume that $z\in\mathcal{N}(A^2)$. Thus $A^2z = 0$. Applying (\ref{I.D.2.1}) with $x= |A|^2z$ and $y = Az$, we get \begin{align*} \big\||A|^2z\big\|^2 = \big\|\langle A|A|^2z, Az\rangle\big\| \leq \big(\big\|A|A|^2z\big\|\|Az\|\big)^{\lambda}\big(\|A^2z\|\big\||A|^2z\big\|\big)^{1 - \lambda} = 0. \end{align*} Hence $|A|^2z = 0$, or equivalently, $z\in \mathcal{N}(|A|^2)$. Since, by Lemma \ref{L.2.2} and Lemma \ref{L.2.1} (ii), $\mathcal{N}(|A|^2) = \mathcal{N}(|A|) = \mathcal{N}(A)$ we reach $z\in \mathcal{N}(A)$. Thus $\mathcal{N}(A^2) \subseteq \mathcal{N}(A)$. Also, since $\mathcal{N}(A) \subseteq \mathcal{N}(A^2)$ we therefore conclude that $\mathcal{N}(A) = \mathcal{N}(A^2)$. \end{proof} \begin{corollary}\label{C.2.6.2} Let $A^*\in GCSI(\mathscr{H})$. Then ${\mathcal{R}(A)}^{\perp} = {\mathcal{R}(A^2)}^{\perp}$. \end{corollary} \begin{proof} Since $A^*\in GCSI(\mathscr{H})$, by Theorem \ref{T.2.5} and Lemma \ref{L.2.1} (ii), we have \begin{align*} {\mathcal{R}(A)}^{\perp} = \mathcal{N}(A^*) = \mathcal{N}\big((A^*)^2\big) = \mathcal{N}\big((A^2)^*\big) = {\mathcal{R}(A^2)}^{\perp}. \end{align*} \end{proof} \begin{remark}\label{R.2.7} We remark that the converse of Theorem \ref{T.2.5} is not true, in general. For example, let $A\,:\ell^2(\mathbb{Z})\longrightarrow \ell^2(\mathbb{Z})$ be the weighted bilateral shift defined by $Ae_n = \alpha_{n}e_{n+1}$ for all $n\in \mathbb{Z}$, where \begin{align*} \alpha_n : =\left\{\begin{array}{ll} 1 & n\neq 2\\ \frac{1}{2} & n=2.\end{array}\right. \end{align*} Then it is easy to see that $\mathcal{N}(A) = \mathcal{N}(A^2) = \{0\}$, but $A\notin GCSI\big(\ell^2(\mathbb{Z})\big)$. \end{remark} \begin{remark}\label{R.2.8} For an arbitrary operator $A\in GCSI(\mathscr{H})$ we have $\mathcal{N}(A) \subseteq \mathcal{N}(A^*)$. Indeed, if $y\in\mathcal{N}(A)$, then \begin{align*} \|A^*y\|^2 = \|\langle AA^*y, y\rangle\| \leq (\|AA^*y\|\|y\|)^{\lambda}(\|Ay\|\|A^*y\|)^{1 - \lambda} = 0, \end{align*} for some $\lambda \in (0, 1)$. Hence $A^*y = 0$, or equivalently, $y\in \mathcal{N}(A^*)$. \end{remark} The following result is well-known (see \cite{Ka}). \begin{lemma}\label{L.2.6.1} Let $X$ and $Y$ be Banach spaces, and let $A\in \mathcal{B}(X, Y)$. 
Suppose $M$ is a closed linear subspace of $Y$ such that $\mathcal{R}(A)\cap M =\{0\}$ and $\mathcal{R}(A) + M$ is closed in $Y$. Then $\mathcal{R}(A)$ is closed in $Y$. \end{lemma} \begin{theorem}\label{P.2.6} Let $A\in GCSI(\mathscr{H})$. If $\mathcal{R}(A) + \mathcal{N}(A)$ is closed, then $\mathcal{R}(A)$ is closed. \end{theorem} \begin{proof} By Lemma \ref{L.2.6.1}, it suffices to show that $\mathcal{R}(A) \cap \mathcal{N}(A) =\{0\}$. Since $A\in GCSI(\mathscr{H})$, there exists $\lambda \in (0, 1)$ such that (\ref{I.D.2.1}) holds. Let $y = Ax$ for some $x\in \mathscr{H}$ such that $Ay=0$. Then \begin{align*} \|\langle y, y\rangle\| = \|\langle Ax, y\rangle\| \leq (\|Ax\|\|y\|)^{\lambda}(\|Ay\|\|x\|)^{1 - \lambda} = 0, \end{align*} and hence $y = 0$. \end{proof} \begin{remark}\label{R.2.6.1} Let $A\in GCSI(\mathscr{H})$. If $\mathcal{R}(A) + \mathcal{N}(A)$ is closed, then, by Theorem \ref{P.2.6} and Lemma \ref{L.2.1} (i), $\mathcal{R}(A^*)$ is orthogonally complemented with $\mathcal{R}(A^*) = {\mathcal{N}(A)}^{\perp}$. \end{remark} Let us quote a result from \cite{F.M.X}. For any $C^*$-algebra $\mathscr{B}$, let $\mathscr{B}_+$ be the set of all positive elements of $\mathscr{B}$. \begin{lemma}\cite[Lemma 2.3]{F.M.X}\label{L.2.11.1} Let $\mathscr{B}$ be a $C^*$-algebra and $a, b \in\mathscr{B}_+$ be such that $\|a^{1/2}c\| \leq \|b^{1/2}c\|$ for all $c \in\mathscr{B}_+$. Then $a \leq b$. \end{lemma} \begin{theorem}\label{Thno} Let $A\in \mathcal{L}(\mathscr H)$. If the following conditions hold, then $A$ is normal. \begin{itemize} \item[(i)] There is $x\in\mathscr{H}$ with $Ax\neq 0$ such that for every $y\in \mathscr{H}$ the equality in \eqref{I.D.2.1} holds for some $\lambda\in (0, 1)$. \item[(ii)] $\|\langle Au,v\rangle\|\leq \|Au\|\|v\|$ for all $u, v\in \mathscr{H}$. \end{itemize} \end{theorem} \begin{proof} By (i), we have \begin{align}\label{I.11.T.2.12} \|\langle Ax, y\rangle\| = (\|Ax\|\|y\|)^{\lambda}(\|Ay\|\|x\|)^{1 - \lambda} \qquad (y\in \mathscr{H}). \end{align} We will show that for any vector $z\in\mathscr{H}$ \begin{align}\label{I.22.T.2.12} \|Az\| \leq \|A^*z\|. \end{align} Note that \eqref{I.22.T.2.12} holds if $z\in \mathcal{N}(A^*)$, since \eqref{I.11.T.2.12} gives that $Az=0$. So, let us suppose $z\notin \mathcal{N}(A^*)$. By Remark \ref{R.2.8} we know that $z\notin \mathcal{N}(A)$. From \eqref{I.11.T.2.12} it follows that \begin{align*}\label{I.33.T.2.12} \left(\frac{\|\langle Ax, z\rangle\|}{\|Ax\|\|z\|}\right)^{\lambda} \left(\frac{\|\langle Ax, z\rangle\|}{\|Az\|\|x\|}\right)^{1-\lambda} = 1. \end{align*} Thus $\frac{\|\langle Ax, z\rangle\|}{\|Az\|\|x\|}\geq 1$, because $\frac{\|\langle Ax, z\rangle\|}{\|Ax\|\|z\|} \leq 1$ by the Cauchy--Schwarz inequality in $\mathscr{H}$. Therefore, \begin{align*} \|Az\| \leq \frac{\|\langle Ax, z\rangle\|}{\|x\|} = \frac{\|\langle x, A^*z\rangle\|}{\|x\|} \leq \|A^*z\|. \end{align*} Now, let $w \in \mathscr{H}$. Put $a = \langle |A|^2w, w\rangle$ and $b = \langle |A^*|^2w, w\rangle$. Then $a, b \in \mathscr{A}_+$ and for any $c\in\mathscr{A}_+$, by \eqref{I.22.T.2.12}, we have \begin{align*} \|a^{1/2}c\| &= \sqrt{\|cac\|} = \sqrt{\|c\langle |A|^2w, w\rangle c\|} \\& = \sqrt{\|\langle A(wc), A(wc)\rangle\|} = \|A(wc)\| \leq \|A^*(wc)\| = \|b^{1/2}c\|. \end{align*} So, by Lemma \ref{L.2.11.1}, we obtain \begin{align*} \langle |A|^2w, w\rangle = a \leq b = \langle |A^*|^2w, w\rangle. \end{align*} This implies $|A|^2 \leq |A^*|^2$. In the next we show that $|A^*|^2\leq |A|^2$. 
Using (ii), we know that \begin{align*} \|\langle u,A^*v\rangle\|= \|\langle Au, v\rangle\|\leq \|Au\|\|v\| \qquad (u, v\in \mathscr{H}). \end{align*} This gives that $\|A^*v\|\leq \|Av\|$ for every $v\in \mathscr H$. Arguing in the same way as above, we get $|A^*|^2\leq |A|^2$, and the proof is completed. \end{proof} By the proof of Theorem \ref{Thno} we get the following result. \begin{theorem}\label{T.2.12} Let $A\in \mathcal{L}(\mathscr{H})$. If there is $x\in \mathscr H$ with $Ax\ne 0$ such that for every $y\in \mathscr H$ the equality in \eqref{I.D.2.1} holds for some $\lambda\in (0,1)$, then $A$ is cohyponormal. \end{theorem} The following example shows that the converse of Theorem \ref{T.2.12} is not true. \begin{example} Let $A\,:\ell^2(\mathbb{N})\longrightarrow \ell^2(\mathbb{N})$ be the adjoint of the unilateral shift operator, that is, $Ae_1=0$ and $Ae_{i+1}=e_i$ for all $i\in \mathbb{N}$, where $\{e_i:i\in \mathbb{N}\}$ is the standard orthonormal basis. Then $A$ is cohyponormal. Note that there is no $x\in \ell^2(\mathbb{N})$ with $Ax\neq 0$ for which the equality in \eqref{I.D.2.1} holds. Indeed, let $x=\sum_{i=1}^\infty\alpha_ie_i$. If the equality in \eqref{I.D.2.1} holds, then for every $k\in \mathbb{N}$ there is $\lambda_k\in (0,1)$ such that \begin{align*} |\alpha_{k+1}|=\|\langle Ax,e_k\rangle\|=\left(\sum_{i=1}^\infty|\alpha_{i+1}|^2\right)^{\frac{\lambda_k}{2}} \left(\sum_{i=1}^\infty|\alpha_i|^2\right)^{\frac{1-\lambda_k}{2}}\qquad(k=1,2,...). \end{align*} This yields that $\alpha_k=0$ for all $k\geq 2$ and so $x=\alpha_1e_1$. In this case we get $Ax=0$. \end{example} As an immediate consequence of Theorem \ref{T.2.12} we have the following result. \begin{corollary} Let $A\in \mathcal{L}(\mathscr{H})$ and let $x, x'\in \mathscr{H}$ with $Ax\neq 0$ and $A^*x'\neq0$. Suppose that there are $\lambda,\lambda'\in (0,1)$ such that \begin{align*} \|\langle Ax,y\rangle\|=(\|Ax\|\|y\|)^\lambda(\|x\|\|Ay\|)^{1-\lambda} \end{align*} and \begin{align*} \|\langle A^*x',y\rangle\|=(\|A^*x'\|\|y\|)^{\lambda'}(\|x'\|\|A^*y\|)^{1-\lambda'} \qquad (y\in \mathscr{H}). \end{align*} Then $A$ is normal. \end{corollary} Recall that an element $U$ of $\mathcal{L}(\mathscr{H})$ is said to be a partial isometry if $U^*U$ is a projection in $\mathcal{L}(\mathscr{H})$. \begin{lemma}\cite[Proposition 15.3.7]{Wegge}\label{L.2.3} Let $A\in \mathcal{L}(\mathscr{H})$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $\mathscr{H} = \mathcal{N}(|A|)\oplus \overline{\mathcal{R}(|A|)}$ and $\mathscr{H} = \mathcal{N}(A^*)\oplus \overline{\mathcal{R}(A)}$. \item[(ii)] Both $\overline{\mathcal{R}(|A|)}$ and $\overline{\mathcal{R}(A)}$ are orthogonally complemented. \item[(iii)] $A$ has the polar decomposition $A = U|A|$, where $U\in\mathcal{L}(\mathscr{H})$ is a partial isometry such that \begin{align} \mathcal{N}(U) = \mathcal{N}(A), \quad \mathcal{N}(U^*) = \mathcal{N}(A^*), \quad \mathcal{R}(U) = \overline{\mathcal{R}(A)}, \quad \mathcal{R}(U^*) = \overline{\mathcal{R}(A^*)}. \end{align} \end{itemize} \end{lemma} The following lemma will be useful in the proof of the next result. \begin{lemma}\cite[Lemma 3.12]{Liu-Luo-Xu}\label{L.2.4} Let $A = U|A|$ be the polar decomposition of $A\in \mathcal{L}(\mathscr{H})$. Then for any $t> 0$, the following statements are valid: \begin{itemize} \item[(i)] $U|A|^tU^* = (U|A|U^*)^t = |A^*|^t$. \item[(ii)] $U|A|^t = |A^*|^tU$. \item[(iii)] $U^*|A^*|^tU = (U^*|A^*|U)^t = |A|^t$. \end{itemize} \end{lemma} \begin{theorem}\label{P.2.11} Let $A\in \mathcal{L}(\mathscr{H})$ have the polar decomposition $A = U|A|$.
If $A$ is semi-hyponormal, then $A$ belongs to $GCSI(\mathscr{H})$ with $\lambda = 1/2$. \end{theorem} \begin{proof} Let $x, y \in \mathscr{H}$. Since $|A^*|\leq |A|$, we get $0 \leq \langle |A^*|y, y\rangle \leq \langle |A|y, y\rangle$ and hence \begin{align}\label{I.1.P.2.11} \big\|\langle |A^*|y, y\rangle\big\| \leq \big\|\langle |A|y, y\rangle\big\|. \end{align} Therefore, by the Cauchy--Schwarz inequality, we have \begin{align*} \big\|\langle Ax, y\rangle\big\|^2 &= \big\|\langle |A|x, U^*y\rangle\big\|^2 = \big\|\langle |A|^{1/2}x, |A|^{1/2}U^*y\rangle\big\|^2 \\& \leq \big\|\langle |A|^{1/2}x, |A|^{1/2}x\rangle\big\| \big\|\langle |A|^{1/2}U^*y, |A|^{1/2}U^*y\rangle\big\| \\& = \big\|\langle |A|x, x\rangle\big\| \big\|\langle U|A|U^*y, y\rangle\big\| \\& = \big\|\langle |A|x, x\rangle\big\| \big\|\langle |A^*|y, y\rangle\big\| \qquad \qquad \qquad \qquad \big(\mbox{by Lemma \ref{L.2.4}~(i)}\big) \\& \leq \big\|\langle |A|x, x\rangle\big\| \big\|\langle |A|y, y\rangle\big\| \qquad \qquad \qquad \qquad \,\, \big(\mbox{by}\,\,(\ref{I.1.P.2.11})\big) \\& \leq \big\||A|x\big\|\|x\|\big\||A|y\big\|\|y\| \\& = \big\|Ax\big\|\|x\|\big\|Ay\big\|\|y\| \qquad \big(\mbox{since}\,\,\big\||A|z\big\| = \big\|Az\big\| \,\,\mbox{for every}\,\,z \in \mathscr{H}\big) \\& = \left(\left(\big\|Ax\big\|\|y\|\right)^{1/2}\left(\big\|Ay\big\|\|x\|\right)^{1/2}\right)^2. \end{align*} Thus \begin{align*} \big\|\langle Ax, y\rangle\big\|\leq (\big\|Ax\big\|\|y\|)^{1/2}(\big\|Ay\big\|\|x\|)^{1/2} \end{align*} and so $A$ belongs to $GCSI(\mathscr{H})$ with $\lambda = 1/2$. \end{proof} Our next result reads as follows. \begin{theorem}\label{L.2.15} Let $A\in \mathcal{L}(\mathscr{H})$ have the polar decomposition $A = U|A|$. If $A\in GCSI(\mathscr{H})$, then $A$ is paranormal. \end{theorem} \begin{proof} Let $A\in GCSI(\mathscr{H})$. Therefore, there exists $\lambda \in (0, 1)$ such that (\ref{I.D.2.1}) holds. We first show that for any vector $z \in \mathscr{H}$ \begin{align}\label{I.2.L.2.15} \big\|AU^*z\big\|^2 \leq \big\|A^2U^*z\big\|\|U^*z\|. \end{align} If $z\in \mathcal{N}(U^*)$, then (\ref{I.2.L.2.15}) is trivially true. So, assume that $z\notin \mathcal{N}(U^*)$. It follows from Lemma \ref{L.2.3} that $\mathcal{N}(A^*) = \mathcal{N}(U^*)$, so we get $z\notin \mathcal{N}(A^*)$. Hence $\|A^*z\|>0$. Therefore, \begin{align*} \big\|AU^*z\big\|^2 &= \big\|\langle AU^*z, AU^*z\rangle\big\| \\& = \big\|\langle U|A|^2U^*z, z\rangle\big\| \\& = \big\|\langle |A^*|^2z, z\rangle\big\| \qquad \qquad \qquad \qquad \big(\mbox{by Lemma \ref{L.2.4}~(i)}\big) \\& = \big\||A^*|z\big\|^2 = \|A^*z\|^2. \end{align*} Thus \begin{align}\label{I.3.L.2.15} \big\|AU^*z\big\| = \big\||A^*|z\big\|> 0. \end{align} Again, by Lemma \ref{L.2.4}~(i), we have \begin{align*} \big\|A|A^*|z\big\|^2 &= \big\|\langle A|A^*|z, A|A^*|z\rangle\big\| = \big\|\langle AU|A|U^*z, AU|A|U^*z\rangle\big\| \\& = \big\|\langle A^2U^*z, A^2U^*z\rangle\big\| = \big\|A^2U^*z\big\|^2, \end{align*} whence \begin{align}\label{I.4.L.2.15} \big\|A|A^*|z\big\| = \big\|A^2U^*z\big\|. 
\end{align} Utilizing Lemma \ref{L.2.4}~(i), (\ref{I.D.2.1}), (\ref{I.2.L.2.15}), (\ref{I.3.L.2.15}) and (\ref{I.4.L.2.15}), we have \begin{align*} \big\|AU^*z\big\|^2 &= \big\|U|A|U^*z\big\|^2 \\& = \big\|\langle U|A|U^*z, U|A|U^*z\rangle\big\| \\& = \big\|\langle AU^*z, |A^*|z\rangle\big\| \\& \leq (\big\|AU^*z\big\|\big\||A^*|z\big\|)^{\lambda}(\big\|A|A^*|z\big\|\big\|U^*z\big\|)^{(1 - \lambda)} \\& = \big\|AU^*z\big\|^{2\lambda}(\big\|A^2U^*z\big\|\big\|U^*z\big\|)^{(1 - \lambda)}, \end{align*} and hence \begin{align*} \big\|AU^*z\big\|^{2(1-\lambda)} \leq (\big\|A^2U^*z\big\|\big\|U^*z\big\|)^{(1 - \lambda)}. \end{align*} This ensures that \begin{align}\label{I.5.L.2.15} \big\|AU^*z\big\|^2 \leq \big\|A^2U^*z\big\|\big\|U^*z\big\|. \end{align} We need only to show that $\|Aw\|^2 \leq \|A^2w\|\|w\|$ for every $w \in \mathscr{H}$. To this end, suppose that $w \in \mathscr{H}$. Since $\mathscr{H} = \mathcal{N}(U)\oplus \mathcal{R}(U^*) = \mathcal{N}(A)\oplus \mathcal{R}(U^*)$, there exist $v\in \mathcal{N}(A)$ and $z \in \mathscr{H}$ such that $w = v + U^*z$. Then \begin{align*} \|w\|^2 = \|\langle v, v\rangle + \langle U^*z, U^*z\rangle\| \geq \|\langle U^*z, U^*z\rangle\| = \|U^*z\|^2, \end{align*} and so \begin{align}\label{I.6.L.2.15} \|w\|\geq \|U^*z\|. \end{align} Employing (\ref{I.5.L.2.15}) and (\ref{I.6.L.2.15}) we obtain \begin{align*} \|Aw\|^2 = \|AU^*z\|^2 \leq \|A^2U^*z\|\|U^*z\| \leq \|A^2w\|\|w\| \end{align*} and we arrive at the desired result. \end{proof} In the following theorem, we get a characterization of semi-hyponormality of adjointable operators. \begin{theorem}\label{T.2.14} Let $A\in \mathcal{L}(\mathscr{H})$ have the polar decomposition $A = U|A|$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $A$ is semi-hyponormal. \item[(ii)] $\big\|\langle Ax, y\rangle\big\| \leq \big\|{|A|}^{1/2}x\big\|\big\|{|A|}^{1/2}y\big\|$ for all $x, y \in\mathscr{H}$. \end{itemize} \end{theorem} \begin{proof} (i)$\Rightarrow$(ii) The implication follows from the proof of Theorem \ref{P.2.11}. (ii)$\Rightarrow$(i) Suppose (ii) holds. We will show that for any vector $z\in\mathscr{H}$ \begin{align}\label{I.1.T.2.14} \big\||A^*|^{1/2}z\big\| \leq \big\||A|^{1/2}z\big\|. \end{align} Note that (\ref{I.1.T.2.14}) holds if $z\in \mathcal{N}\big(|A^*|^{1/2}\big)$. So, let us suppose $z\notin \mathcal{N}\big(|A^*|^{1/2}\big)$. By letting $x = U^*z$ and $y = z$ in (ii), we have \begin{align}\label{I.2.T.2.14} \big\|\langle AU^*z, z\rangle\big\|^2 \leq \big\|\langle |A|U^*z, U^*z\rangle\big\|\big\|\langle |A|z, z\rangle\big\|. \end{align} Therefore, \begin{align*} \big\||A^*|^{1/2}z\big\|^4 &= \big\|\langle |A^*|z, z\rangle\big\|^2 \\& = \big\|\langle U|A|U^*z, z\rangle\big\|^2 \qquad \qquad \qquad \qquad \qquad \big(\mbox{by Lemma \ref{L.2.4}~(i)}\big) \\& = \big\|\langle AU^*z, z\rangle\big\|^2 \\& \leq \big\|\langle |A|U^*z, U^*z\rangle\big\| \big\|\langle |A|z, z\rangle\big\| \qquad \qquad \qquad \qquad \qquad \big(\mbox{by (\ref{I.2.T.2.14})}\big) \\& = \big\|\langle U|A|U^*z, z\rangle\big\| \big\|\langle |A|z, z\rangle\big\| \\& = \big\|\langle |A^*|z, z\rangle\big\| \big\|\langle |A|z, z\rangle\big\| \qquad \qquad \qquad \qquad \big(\mbox{by Lemma \ref{L.2.4}~(i)}\big) \\& = \big\||A^*|^{1/2}z\big\|^2 \big\||A|^{1/2}z\big\|^2, \end{align*} which yields $\big\||A^*|^{1/2}z\big\|^2 \leq \big\||A|^{1/2}z\big\|^2$. Hence $\big\||A^*|^{1/2}z\big\| \leq \big\||A|^{1/2}z\big\|$. 
Utilizing a similar argument as in Theorem \ref{T.2.12}, we conclude that $|A^*| \leq |A|$. Thus $A$ is semi-hyponormal. \end{proof} We finish this paper with the following result. \begin{corollary}\label{C.2.15} Let $A\in \mathcal{L}(\mathscr{H})$ have the polar decomposition. If the equality in $GCSI(\mathscr{H})$ holds for $A^*$, then \begin{align*} \big\|\langle Ax, y\rangle\big\| \leq \big\|{|A|}^{1/2}x\big\|\big\|{|A|}^{1/2}y\big\| \qquad (x, y \in\mathscr{H}). \end{align*} \end{corollary} \begin{proof} Since the equality in $GCSI(\mathscr{H})$ holds for $A^*$, by Theorem \ref{T.2.12}, it follows that $A^*$ is cohyponormal. Thus $|A^*|^2 \leq |A|^2$. Hence $|A^*| \leq |A|$, that is, $A$ is semi-hyponormal. So, by Theorem \ref{T.2.14}, we deduce the desired result. \end{proof} \textbf{Acknowledgement.} The author would like to thank Prof.~M.~S.~Moslehian, Prof.~Q.~Xu and Dr.~R.~Eskandari for their invaluable suggestions while writing this paper. \bibliographystyle{amsplain} \begin{thebibliography}{99} \bibitem{A.B.F.M} J.~M.~Aldaz, S.~Barza, M.~Fujii and M.~S.~Moslehian, \textit{Advances in Operator Cauchy--Schwarz inequalities and their reverses}, Ann.~Funct.~Anal. \textbf{6} (2015), no.~3, 275--295. \bibitem{A.B.M} Lj.~Aramba\v{s}i\'{c}, D.~Baki\'{c} and M.~S.~Moslehian, \textit{A treatment of the Cauchy-Schwarz inequality in $C^*$-modules}, J.~Math.~Anal.~Appl. \textbf{381} (2011), 546-556. \bibitem{B.D} R.~Bhatia and C.~Davis, \textit{A Cauchy--Schwartz inequality for operators with applications}, Linear Algebra Appl. \textbf{223/224} (1995), 119--129. \bibitem{C.K.K} H.~Choi, Y.~Kim and E.~Ko, \textit{On operators satisfying the generalized Cauchy--Schwarz inequality}, Proc.~Amer.~Math.~Soc. \textbf{145} (2017), 3447--3453 \bibitem{Dav} C.~Davis, \textit{A Schwartz inequality for positive linear maps on $C^*$-algebras}, Illinois J.~Math. \textbf{18} (1974), 565--574. \bibitem{F.M.X} X.~Fang, M.~S.~Moslehian and Q.~Xu, \textit{On majorization and range inclusion of operators on Hilbert $C^*$-modules}, Linear Multilinear Algebra \textbf{66} (2018), no.~12, 2493--2500. \bibitem{F.F.S} J.~I.~Fujii, M.~Fujii and Y.~Seo, \textit{Operator inequalities on Hilbert $C^*$-modules via the Cauchy--Schwarz inequality}, Math.~Inequal.~Appl. \textbf{17} (2014), no. 12, 295--315. \bibitem{I.V} D.~Ilisevi\'c and S.~Varo\v{s}anec, \textit{On the Cauchy-Schwarz inequality and its reverse in semi-inner product $C^*$-modules}, Banach J.~Math.~Anal. \textbf{1}(1) (2007), 78-84. \bibitem{Joi} M.~Joi\c{t}a, \textit{On the Cauchy-Schwarz inequality in $C^*$-algebras}, Math.~Rep.~(Bucur.) (3)\textbf{53} (3) (2001), 243-246. \bibitem{Ka} T.~Kato, \textit{Perturbation Theory for Linear Operator}, 2nd edn.~SpringerVerlag, Berlin Heidelberg, New York, Tokyo, (1984). \bibitem{Lan} E.~C.~Lance, \textit{Hilbert $C^*$-modules - a toolkit for operator algebraists}, London Mathematical Society Lecture Note Series, vol.~210, Cambridge University Press, Cambridge, 1995. \bibitem{Liu-Luo-Xu} N.~Liu, W.~Luo and Q.~Xu, \textit{The polar decomposition for adjointable operators on Hilbert $C^*$-modules and centered operators}, Adv.~Oper.~Theory \textbf{3} (2018), no. 4, 855--867. \bibitem{V.M.X} M.~Vosough, M.~S. Moslehian and Q.~Xu,, \textit{Closed range and nonclosed range adjointable operators on Hilbert $C^*$-modules}, Positivity \textbf{22} (2018), 701--710. 
\bibitem{Wat} H.~Watanabe, \textit{Operators characterized by certain Cauchy--Schwarz type inequalities}, Publ.~Res.~Inst.~Math.~Sci., \textbf{30} (1994), no.~2, 249-259. \bibitem{Wegge} N.~E.~Wegge-Olsen, \textit{$K$-theory and $C^*$-algebras: A friendly approach}, Oxford Univ.~Press, Oxford, England, 1993. \end{thebibliography} \end{document}
2205.05118v2
http://arxiv.org/abs/2205.05118v2
On the intersection density of the Kneser Graph $K(n,3)$
\documentclass[10pt,reqno]{elsarticle} \usepackage{graphicx,amssymb,amstext,amsmath} \usepackage{enumerate} \usepackage{amsthm} \usepackage{xcolor} \usepackage[colorlinks=true, linkcolor=blue, urlcolor=blue, citecolor=blue]{hyperref} \usepackage[margin = 2.3cm]{geometry} \usepackage{listings}\lstset{ basicstyle=\ttfamily, mathescape } \usepackage{float} \newtheoremstyle{plainsl} {\topsep} {\topsep} {\slshape} {} {\normalfont\bfseries} {.} { } {} \theoremstyle{plainsl} \newtheorem{theorem}{Theorem}[section] \newtheorem{thm}[theorem]{Theorem} \newtheorem{obs}[theorem]{Observation} \newtheorem{defn}[theorem]{Definition} \newtheorem{lem}[theorem]{Lemma} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{quest}[theorem]{Question} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{example}[theorem]{Example} \newtheorem{cond}{Condition} \newtheorem{rmk}[theorem]{Remark} \newtheorem{prob}[theorem]{Problem} \DeclareMathOperator\Aut{Aut} \DeclareMathOperator\Ind{ind} \DeclareMathOperator\one{\bf{1}} \DeclareMathOperator\sym{Sym} \DeclareMathOperator\alt{Alt} x}{fix} \DeclareMathOperator\Res{res} \DeclareMathOperator\Alt{Alt} \DeclareMathOperator\Der{Der} \DeclareMathOperator\fld{\mathbb{F}} \DeclareMathOperator\flde{\mathbb{E}} \DeclareMathOperator\elsm{sum} \DeclareMathOperator\rank{rank} \DeclareMathOperator\spn{span} \DeclareMathOperator\tr{tr} \DeclareMathOperator\der{der} \DeclareMathOperator\id{id} \newcommand\cx{{\mathbb C}}\newcommand\ip[2]{\langle#1,#2\rangle} \newcommand{\sqr}{\: \Box\: } \newcommand{\dprime}{{\prime\prime}} \newcommand{\agl}[2]{\operatorname{AGL}_#1(#2)} \newcommand{\pgl}[2]{\operatorname{PGL}_#1(#2)} \newcommand{\psl}[2]{\operatorname{PSL}_#1(#2)} \newcommand{\asl}[2]{\operatorname{ASL}_#1(#2)} \newcommand{\asigmal}[2]{\operatorname{A\Sigma L}_#1(#2)} \newcommand{\agammal}[2]{\operatorname{A\Gamma L}_#1(#2)} \newcommand{\pgammal}[2]{\operatorname{P\Gamma L}_#1(#2)} \newcommand{\psigmal}[2]{\operatorname{P\Sigma L}_#1(#2)} \newcommand{\mathieu}[1]{\operatorname{M}_{#1}} \newcommand\vphi{\varphi} \newcommand\nts[1]{ {\color{blue} \centerline{#1}}} \begin{document} \title{On the Intersection Density of the Kneser Graph $K(n,3)$} \author[Uregina]{Karen Meagher\fnref{fn1}} \ead{[email protected]} \author[address1,address2]{Andriaherimanana Sarobidy Razafimahatratra\corref{cor1}} \cortext[cor1]{Corresponding author} \address[Uregina]{Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan S4S 0A2, Canada} \address[address1]{University of Primorska, UP FAMNIT, Glagolja\v{s}ka 8, 6000 Koper, Slovenia\\ }\ead{[email protected]} \address[address2]{University of Primorska, UP IAM, Muzejski trg 2, 6000 Koper, Slovenia } \fntext[fn1]{Research supported by NSERC Discovery Research Grant, Application No.: RGPIN-2018-03952.} \date{\today} \begin{abstract} A set $\mathcal{F} \subset \sym(V)$ is \textsl{intersecting} if any two of its elements agree on some element of $V$. Given a finite transitive permutation group $G\leq \sym(V)$, the \textsl{intersection density} $\rho(G)$ is the maximum ratio $\frac{|\mathcal{F}||V|}{|G|}$ where $\mathcal{F}$ runs through all intersecting sets of $G$. The \textsl{intersection density} $\rho(X)$ of a vertex-transitive graph $X = (V,E)$ is equal to $\max \left\{ \rho(G) : G \leq \Aut(X), \mbox{ $G$ transitive} \right\}$. In this paper, we study the intersection density of the Kneser graph $K(n,3)$, for $n\geq 7$. 
The intersection density of $K(n,3)$ is determined whenever its automorphism group contains $\psl{2}{q}$, with some exceptional cases depending on the congruence of $q$. We also briefly consider the intersection density of $K(n,2)$ for values of $n$ where $\psl{2}{q}$ is a subgroup of its automorphism group. \end{abstract} \begin{keyword} Derangement graphs, cocliques, Erd\H{o}s-Ko-Rado theorem, $3$-homogeneous groups, Kneser graphs \MSC{Primary 05C35; Secondary 05C69, 20B05} \end{keyword} \maketitle \section{Introduction} There have been many recent papers looking at the size of the largest set of intersecting permutations in a transitive permutation group, see for example~\cite{ ellis2011intersecting, HKKMMO, HKMMwreath, HMMK, li2020erd, meagher2016erdHos, 2oddprimes, spiga2019erdHos}. In these works, two permutations $g,h \in G \leq \sym(V)$ are said to be \textsl{intersecting} if $g(v) =h(v)$ for some element $v \in V$, and the main problem is to determine the largest set of permutations in which any two are intersecting. If the subgroup $H \leq G$ is the stabilizer of a point under the action of $G$ on $V$, then this action is equivalent to $G$ acting on the cosets of $H$. Clearly with this action, the subgroup $H$ is an intersecting set. If there are no intersecting sets of cardinality larger than $|H|$, then the group $G$ is said to have the ``Erd\H{o}s-Ko-Rado (EKR) Property". Indeed, such results are often considered to be generalizations of the famous Erd\H{o}s-Ko-Rado Theorem. The most recent work in this area has turned to looking for groups that do not have the EKR-property, this has led to trying to measure how far a group can be from having this property. One way to do this is to use the \textsl{intersection density of a group}. This is a group parameter, introduced in~\cite{li2020erd}, defined for a transitive group $G\leq \sym(n)$, to be the rational number \begin{align*} \rho(G) := \max\left\{ \frac{|\mathcal{F}| \, n}{|G|} \ :\ \mathcal{F} \subset G \mbox{ is intersecting} \right\}. \end{align*} Since $G$ is transitive, the orbit-stabilizer lemma implies that the stabilizer of a point in $G$ has order $\frac{|G|}{n}$. Since the stabilizer of a point is an intersecting set, $\rho(G) \geq 1$ for any transitive group $G$. Further, a transitive permutation group has intersection density 1 if and only if it has the EKR property. In~\cite{HKKMcyclic} the authors initiate a program aimed at obtaining a deeper understanding of the intersection density of transitive permutation groups, with a focus on groups not having the EKR-property. They find many interesting examples using actions with a cyclic point stabilizer. In~\cite{BidySym}, the concept of intersection density was extended to vertex-transitive graphs. A graph $X$ is \textsl{vertex transitive} if its automorphism group $\Aut(X)$ acts transitively on the vertex set of $X$. The \textsl{intersection density} $\rho(X)$ of a vertex-transitive graph $X$ is the largest intersection density among the transitive subgroups of the automorphism group of the graph. Specifically, the intersection density of a graph $X$ is defined to be the rational number \begin{align} \rho(X) := \max\left\{ \rho(H) \ :\ H\leq \Aut(X), \mbox{ $H$ transitive} \right\}. 
\end{align} We note here that the intersection density parameter for vertex-transitive graphs only measures the largest possible intersection density of a transitive subgroup; it does not take into account any smaller intersection densities from the other transitive subgroups of the automorphism group. In \cite{KMP}, the intersection density of vertex-transitive graphs was further refined into the \emph{intersection density array}. Given a vertex-transitive graph $X= (V,E)$, the intersection density array of $X$ is the increasing sequence of rational numbers \begin{align*} \overline{\rho}(X) := [\rho_1,\rho_2,\ldots,\rho_t], \end{align*} for some integer $t\geq 1$, such that for any $i\in\{1,2,\ldots,t\}$, there exists a transitive subgroup $K\leq \Aut(X)$ such that $\rho_i = \rho(K)$ and for any transitive subgroup $G\leq \Aut(X)$, there exists $i\in \{1,2,\ldots,t\}$ such that $\rho(G) = \rho_i.$ This array gives a more robust way of viewing the intersection property of the automorphism group. For example, the Petersen graph has intersection density $2$, whereas its intersection density array is $[1,2]$. Another interesting example is the Tutte-Coxeter graph; its intersection density is equal to $\frac{3}{2}$ and its intersection density array is $\left[\frac{3}{2}\right]$. That is, every transitive subgroup of the automorphism group of the Tutte-Coxeter graph has intersection density equal to $\frac{3}{2}$. A vertex-transitive graph $X=(V,E)$ exhibiting this property, i.e., $\overline{\rho}(X) = [\rho_1]$, is called \emph{intersection density stable}. In this paper, we continue the work in~\cite{BidySym} to determine the intersection density, and, if possible, the intersection density array of the \textsl{Kneser graphs}. These are a well-known family of vertex-transitive graphs. For integers $n$ and $k$, with $n \geq 2k$, the Kneser graph $K(n,k)$ has all the $k$-subsets of $\{1,2,\dots,n\}$ as its vertex set and two vertices are adjacent if they are disjoint. For $n >2k$, it is well-known that $\sym(n)$ is the automorphism group of $K(n,k)$ (this is implied by the EKR theorem, see~\cite[Section 7.8]{godsilroyle}). Since $\sym(n)$ is transitive on the $k$-sets of $\{1,2,\dots,n\}$, the graph $K(n,k)$ is vertex transitive. We want to determine the largest intersection density over all subgroups of $\sym(n)$ that are transitive on the $k$-subsets of $\{1, \dots,n\}$ with $n>2k$. There has already been much work done to determine the intersection density of $\sym(n)$ with its action on $k$-sets. The most general result is given by Ellis in~\cite{Ellis}, where it is shown that if $n$ is large relative to $k$, then $\sym(n)$ has intersection density 1 under this action. Ellis conjectured that the requirement that $n$ be large relative to $k$ is not necessary. Indeed, this has been confirmed for the smallest values of $k$; for $k=2$ this conjecture is proven in~\cite{2setwise}, and for $k=3$, it is proven in~\cite{3setwise}. It is shown in~\cite{2oddprimes} that the alternating group $\alt(n)$ acting on 2-sets has intersection density 1, provided $n\geq 16$. Using \verb*|Sagemath|, it is not hard to verify that this still holds for $6\leq n\leq 15$; but the group $\alt(5)$ acting on the $2$-sets does not have the EKR property; in fact, it has intersection density $2$.
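This particular computation is small enough to reproduce directly. The following Python sketch (using the \texttt{itertools} and \texttt{networkx} libraries rather than \verb|Sagemath|; it is an illustration, not the code used for the verifications above) builds the graph on the elements of $\alt(5)$ in which two permutations are adjacent exactly when they agree on some $2$-set, and computes a maximum clique, that is, a maximum intersecting set.

\begin{lstlisting}[language=Python]
# Sketch: intersection density of Alt(5) acting on the 2-subsets of {0,...,4}.
from itertools import combinations, permutations
import networkx as nx

def is_even(p):
    # parity of a permutation given as a tuple (p[i] = image of i)
    inversions = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
    return inversions % 2 == 0

alt5 = [p for p in permutations(range(5)) if is_even(p)]   # 60 elements
two_sets = list(combinations(range(5), 2))                 # the 10 points of the action

def intersecting(g, h):
    # g and h agree on some 2-set, i.e. g({a,b}) = h({a,b}) for some {a,b}
    return any({g[a], g[b]} == {h[a], h[b]} for a, b in two_sets)

X = nx.Graph()
X.add_nodes_from(alt5)
X.add_edges_from((g, h) for g, h in combinations(alt5, 2) if intersecting(g, h))

max_clique = max(nx.find_cliques(X), key=len)   # a maximum intersecting set
stabilizer_order = len(alt5) // len(two_sets)   # = 6, by the orbit-stabilizer lemma
print(len(max_clique), len(max_clique) / stabilizer_order)   # expected: 12 2.0
\end{lstlisting}

For larger groups a direct search of this kind is impractical, which is why the eigenvalue and linear-programming bounds described in the next section are used instead.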
Further, in~\cite{BidySym} the authors prove that the alternating group acting on $k$-sets with $k =3,4,5$ also has intersection density 1 for $n > 2k$ (they also determine the intersection density of some sporadic groups that are also transitive on the $k$-sets). To determine the intersection density of the graph $K(n,k)$, it is necessary to determine the largest intersection density over all subgroups that are transitive on the $k$-sets. The next proposition follows from Theorem 14.6.2 of~\cite{godsil2016erdos} and shows that the largest intersection density will be achieved by a minimal (by inclusion) transitive subgroup. \begin{prop} Let $G$ be a transitive group and let $H$ be a transitive subgroup of $G$ (where $H$ has the same action as $G$). The intersection density of $G$ is bounded above by the intersection density of $H$. \end{prop} As stated above, the alternating group acting on $3$-sets has intersection density 1 (see~\cite{BidySym}), so we have the following result. \begin{thm} Let $n\geq 7$. If $\alt(n)$ is the minimal transitive subgroup of $\sym(n)$ under its action on 3-sets, then $K(n,3)$ has intersection density equal to $1$. \end{thm} In this paper, we will consider values of $n$ for which the alternating group is not the minimal transitive group on the 3-sets. A group that is transitive on the $k$-sets is called $k$-homogeneous, clearly any group that is 3-transitive is also 3-homogeneous. These groups have been classified and we state two results on 3-homogeneous groups that motivate the choice of groups in this work. The first result is taken from~\cite{MR1373655}. \begin{thm} Let $G\leq \sym(n)$ be $3$-transitive. If $G$ is not equal to $\alt(n)$ with $n \geq 5$ nor $\sym(n)$ with $n \geq 7$, then $G$ is one of: $\asl{d}{2}$, $V_{16}.\alt(7)$ (of degree $16$), $\mathieu{11}$ (of degree $12$), $\mathieu{22}, \Aut(\mathieu{22})$, $\mathieu{23}$, $\mathieu{24}$ or \[ \psl{2}{q} \leq G \leq \pgammal{2}{q}, \] with degree $q+1$, where $q$ is a prime power. \label{thm:classification-transitive} \end{thm} \begin{thm}[Kantor~\cite{MR306296}] Suppose that $G$ is $3$-homogeneous on a set of size $n\geq 6$. Then $G$ is $2$-transitive. Moreover, $G$ is $3$-transitive with the exception of \[ \psl{2}{q} \leq G \leq \operatorname{P\Sigma L}(2,q), \] where $n-1 = q \equiv 3\ (\operatorname{mod} 4)$; and \[ G \in \{ \agl{1}{8},\agammal{1}{8},\agammal{1}{32} \}. \] \label{thm:classification-homogeneous} \end{thm} For many values of $n$, the minimal transitive group on the $2$-sets and 3-sets is $\psl{2}{q}$ or contains $\psl{2}{q}$ as a proper subgroup. In this paper we focus on these groups with their actions on 2 and 3-sets. We note that some of the minimal subgroups in these two theorems are not transitive on the $3$-subsets. In particular $G = \psl{2}{q}$, where $q \equiv 1 \pmod{4}$ in Theorem~\ref{thm:classification-homogeneous} is not 3-homogeneous. \section{Background Results} Our approach to this problem is to build a graph for each group that has the property that the cocliques (independent sets) in the graph correspond exactly to the intersecting sets in the group. Then the size of the maximum cocliques can be determined using algebraic techniques. Given a group $G$ and a subset $C\subset G\setminus \{1\}$ with the property that $x^{-1} \in C$ whenever $x\in C$, recall that the \emph{Cayley graph} $\operatorname{Cay}(G,C)$ is the graph whose vertex set is $G$ and two group elements $g$ and $h$ are adjacent if and only if $hg^{-1} \in C$. 
If $C$ has the additional property that $gxg^{-1} \in C$ for all $g\in G$ and $x\in C$, then we say that the Cayley graph $\operatorname{Cay}(G,C)$ is a \emph{normal Cayley graph}. For any permutation group $G$, define the \textsl{derangement graph}, denoted by $\Gamma_G$, to have the elements of $G$ as its vertices, and two vertices are adjacent if they are not intersecting. Then the maximum cocliques in $\Gamma_G$ are exactly the maximum intersecting sets in $G$. We can also consider the complement of the derangement graph (denoted $\overline{\Gamma_G}$); clearly, the maximum cliques in $\overline{\Gamma_G}$ are the maximum intersecting sets. The derangement graph is a Cayley graph with connection-set equal to the set $\der(G)$ of all derangements (i.e., fixed-point-free permutations) of $G$. Further, since the connection-set $\der(G)$ of $\Gamma_{G}$ is a union of conjugacy classes, $\Gamma_{G}$ is a normal Cayley graph. The derangement graph is also a graph in an \textsl{association scheme}, namely the \textsl{conjugacy class association scheme}. Details are given in~\cite[Chapter 3]{godsil2016erdos}. Since $\Gamma_{G}$ is a normal Cayley graph, its eigenvalues can be calculated using the complex irreducible characters of the group $G$. For details see~\cite[Chapter 14]{godsil2016erdos} or~\cite{BidyThesis}; we only state the formula here. \begin{theorem} Let $G$ be a permutation group. The eigenvalues of $\Gamma_G$ are \begin{align}\label{eq:evalues} \lambda_\chi = \frac{1}{\chi(1)} \sum_{g \in \der(G)} \chi(g) \end{align} where $\chi$ is taken over all irreducible characters of $G$. \end{theorem} A fascinating aspect of this approach is that frequently the eigenvalues of the derangement graph can be used to determine very effective upper bounds on the size of cocliques and cliques in the derangement graphs. The next results are two such bounds; before stating them we need some notation. For a graph $X$ on $n$ vertices, a real symmetric $n \times n$ matrix $M$ is \textsl{compatible} with $X$ if $M_{u,v}=0$ whenever $u$ and $v$ are non-adjacent vertices in $X$. The \textsl{adjacency matrix} of a graph $X$ on $n$ vertices is an $n \times n$ 01-matrix with $(u,v)$-entry equal to 1 if and only if $u$ and $v$ are adjacent in $X$. The adjacency matrix is denoted by $A(X)$ and is an example of a matrix compatible with $X$. The sum of all the entries of a matrix $M$ will be denoted by $\elsm(M)$ and the trace by $\tr(M)$. In a graph $X$, the size of the largest coclique is denoted by $\alpha(X)$, and the size of the largest clique by $\omega(X)$. The all ones vector will be denoted by $\mathbf{1}$, and $J$ will represent the all ones matrix (the sizes will be clear from context). \begin{thm} [Weighted Ratio Bound (Theorem~2.4.2 \cite{godsil2016erdos})]~\label{thm:wtRatio} Let $X$ be a connected graph. Let $A$ be a matrix compatible with $X$ that has constant row and column sum $d$. If the least eigenvalue of $A$ is $\tau$, then \[ \alpha(X) \le\frac{|V(X)|}{1-\frac{d}{\tau}}. \] \end{thm} \begin{theorem}[Theorem~3.7.1 \cite{godsil2016erdos}]\label{thm:LP-clq} Let $\mathcal{A} = \{A_0=I, A_1, \dots, A_d\}$ be an association scheme with $d$ classes and let $X$ be a graph with adjacency matrix $A(X) = \sum_{i \in T} A_i$, where $T \subset \{1,2,\ldots,d\}$. If $C$ is a clique in $X$, then \[ |C| \le \max_{M \in \mathcal{M} } \frac{\elsm(M)}{\tr(M)} \] where $\mathcal{M}$ is the set of all positive semidefinite matrices in $\mathbb{C}[\mathcal{A}]$ that are compatible with $X$.
\end{theorem} The next result gives a simple method to test if a subgroup is an intersecting set. \begin{lem}\label{lem:intersectingsubgroup} If $H \leq \sym(n)$ is a subgroup with no derangements, then $H$ is an intersecting set. \end{lem} \begin{proof} If $g, h \in H$, then $g h^{-1} \in H$ and is not a derangement. This means that $gh^{-1}$ has a fixed point, thus $g$ and $h$ are intersecting. \end{proof} Finally, we state a well-known result about the action of $\psl{2}{q}$ on the 3-sets which shows that it is not $3$-homogeneous when $q \equiv 1 \pmod{4}$. \begin{lem} If $q \equiv 1 \pmod{4}$, then $\psl{2}{q}$ has two orbits on the 3-sets from $\{1,\dots, q+1\}$. \end{lem} In the next three sections we consider groups containing $\psl{2}{q}$ acting on 3-sets of $\{1,2,\dots, q+1\}$. The first of these sections considers $\pgl{2}{q}$ where $q$ is even (in this case $\psl{2}{q} =\pgl{2}{q}$). Section~\ref{sec:1mod4} considers two groups that contain $\psl{2}{q}$ with $q \equiv 1 \pmod 4$. In Section~\ref{psl}, we consider $\psl{2}{q}$ for $q \equiv 3 \pmod{4}$, since this is when $\psl{2}{q}$ is transitive. Section~\ref{2sets} discusses $\psl{2}{q}$ on the 2-sets. Section~\ref{intrans} briefly considers the intransitive action of $\psl{2}{q}$ on the 3-sets when $q \equiv 1 \pmod{4}$. \section{$\pgl{2}{q}$ acting on $K(q+1,3)$ where $q$ is even} For $q$ even, we can determine the exact intersection density of $\pgl{2}{q}$ acting on the 3-sets from $\{1,\dots, q+1\}$. \begin{thm}\label{thm:qeven} Let $q$ be even. The intersection density of $\pgl{2}{q}$ acting on the $3$-sets is \begin{enumerate} \item $\frac{q}{6}$, if $q = 2^{2 \ell + 1}$; \item $\frac{q}{2}$, if $q = 2^{2\ell}$. \end{enumerate} \end{thm} We prove this result in two lemmas. The first is a construction of an intersecting set of the required size. \begin{lem} Consider the action of $\pgl{2}{q}$ on the 3-sets from $\{ 1,\dots, q+1\}$. If $q =2^{2\ell}$, there is an intersecting set of size $3q$, and if $q =2^{2\ell + 1}$, there is an intersecting set of size $q$. \label{lem:max-intersecting-set-PGL-even} \end{lem} \begin{proof} For all $q = 2^n$ the subgroup of $\pgl{2}{q}$ generated by the matrices of the form \[ \begin{pmatrix} 1 & a \\ 0 & 1\end{pmatrix}, \] with $a \in \mathbb{F}_q$, is a subgroup in which all non-identity elements have order 2. Thus, by Lemma~\ref{lem:intersectingsubgroup}, these form an intersecting set under this action. If $q = 2^{2\ell}$ then there is an $x \in \mathbb{F}_q$ with $x \neq 1$ and $x^3=1$. The set of all matrices of the forms \[ \begin{pmatrix} 1 & a \\ 0 & 1 \\ \end{pmatrix}, \quad \begin{pmatrix} x & a \\ 0 & x^2 \\ \end{pmatrix}, \quad \begin{pmatrix} x^2 & a \\ 0 & x \\ \end{pmatrix} \] with $a \in \mathbb{F}_q$, is a subgroup with size $3q$. Each non-identity element of this subgroup either has order 3, or has order 2 and fixes a point, so each fixes a 3-set. Thus, by Lemma~\ref{lem:intersectingsubgroup}, these matrices form an intersecting set under this action. \end{proof} Using Theorem~\ref{thm:LP-clq}, we can show that the sets given in Lemma~\ref{lem:max-intersecting-set-PGL-even} are the largest possible intersecting sets under the action of $\pgl{2}{q}$ on the 3-sets. We note that the stabilizer of a 3-set in $\pgl{2}{q}$, acting on the 3-sets, is isomorphic to $\sym(3)$ for any prime power $q$. Henceforth, we denote the stabilizer of a 3-set under the action of $\pgl{2}{q}$ by $H_q$. Let $X_q$ be the complement of the derangement graph under this action.
So the vertices of $X_q$ are the elements of $\pgl{2}{q}$ and two vertices $g, h$ are adjacent if $gh^{-1}$ is conjugate to an element in $H_q$. A clique in this graph is an intersecting set under the action on the 3-sets. The graph $X_q$ is in the conjugacy class association scheme of $\pgl{2}{q}$; we denote this association scheme by $\mathcal{A}$. The matrix in $\mathcal{A}$ that corresponds to the conjugacy class of order 2 elements in $H_q$ will be denoted by $A_1$, and $A_2$ will denote the matrix corresponding to the conjugacy class of the order 3 elements. This means that $A(X_q) = A_1 + A_2$. By Theorem~\ref{thm:LP-clq}, any clique in $X_q$ is bounded by the maximum of \[ \frac{\elsm(M)}{\tr(M)} \] taken over all positive semi-definite matrices $M$ of the form $M = dI + aA_1 + bA_2$. \begin{lem}\label{lem:even} Consider the action of $\pgl{2}{q}$ on the 3-sets of $\{1,2,\dots,q+1\}$. If $q =2^{2\ell}$, then an intersecting set under this action has at most $3q$ elements, and if $q =2^{2\ell + 1}$ an intersecting set has at most $q$ elements. \end{lem} \begin{proof} We will apply Theorem~\ref{thm:LP-clq}. Let $\mathcal{A}$ be the conjugacy class association scheme for $\pgl{2}{q}$. We will first find a positive semi-definite matrix $M \in \mathbb{C} [\mathcal{A}]$ that is compatible with the complement of the derangement graph of $\pgl{2}{q}$ under this action, and then we show that $\frac{\elsm(M)}{\tr(M)}$ equals the bounds in the lemma. Let $X_q$ be the complement of the derangement graph, so $X_q$ is the graph with the elements of $\pgl{2}{q}$ as its vertices and two vertices $g, h$ are adjacent if $gh^{-1}$ is conjugate to an element in $H_q$ (where $H_q$ is the stabilizer of a point under this action). Let $C_1$ be the conjugacy class of $\pgl{2}{q}$ that contains the elements in $H_q$ of order two, and $C_2$ the conjugacy class that contains the elements of order 3. Define $A_1$ to be the matrix with rows and columns indexed by the element of $\pgl{2}{q}$ and the entry $(g,h)$ equal to 1 if $gh^{-1} \in C_1$ and 0 otherwise; a matrix $A_2$ is defined similarly, but for $C_2$. Both $A_1$ and $A_2$ are matrices in $\mathcal{A}$. The adjacency matrix of $X_q$ is equal to $A_1 + A_2$, and any matrix in $\mathbb{C}[\mathcal{A}] $ compatible with $X_q$ has the form $M=dI + aA_1 +bA_2$. If we set $v = | \pgl{2}{q} | $, then \[ \frac{\elsm(M)}{\tr(M)} = \frac{v (d + a |C_1| + b |C_2|) }{vd} = 1+ \frac{a}{d} |C_1| + \frac{b}{d} |C_2|. \] So we need to find values of $\frac{a}{d}$ and $\frac{b}{d}$ so that the eigenvalues of $M$ are non-negative and $\frac{\elsm(M)}{\tr(M)} $ is maximized. The eigenvalues of $A_1$ and $A_2$ can be calculated easily from the character table of $\pgl{2}{q}$, as the eigenvalue of $A_i$ is simply \[ \lambda_{\chi} (A_i) = \frac{ |C_i| \chi(c_i) } {\chi(\id)} \] where $c_i \in C_i$ and $\chi$ an irreducible character of $\pgl{2}{q}$. First consider when $q =2^{2\ell}$. The value of all the irreducible characters of $\pgl{2}{q}$ on these two conjugacy classes are known, and recorded in the table below using the notation of~\cite{adams2002character}. 
\def\arraystretch{1.3} \begin{table}[h] \begin{center} \begin{tabular}{|c|cc ccc|} \hline Character & $\rho(1)$ & $\rho(\alpha)$ & $\overline{\rho}(1)$ & $ \rho'(1)$ & $\pi(\chi)$ \\ Degree & $q+1$ & $q+1$ & $q$ & $1$ & $q-1$ \\ \hline value on $C_1$ (order 2) & $1$ & $1$ & $0$ & $1$ & $-1$ \\ eigenvalue of $A_1$ & $q-1$ & $q-1$& $0$ & $q-1$ & $-(q+1)$\\ \hline value on $C_2$ (order 3) & $2$ & $-1$ & $1$ & $1$ & $0$ \\ eigenvalue of $A_2$ & $2q$ & $-q$ & $q+1$ & $q(q+1)$ & $0$ \\ \hline \end{tabular} \end{center} \caption{Partial character table for $\pgl{2}{2^{2\ell}}$, with eigenvalues for $A_1$ and $A_2$.} \end{table} By Theorem~\ref{thm:LP-clq}, a bound for the size of the cliques in $X_q$ is given by the solution to the following linear program (we use $x$ and $y$ in place of $\frac{a}{d}$ and $\frac{b}{d}$). \begin{align} \begin{split} \mathsf{Maximize: } &\ \ \ 1 + x (q^2-1) + y q(q+1),\\ \mathsf{Subject \, to } & \\ & \begin{aligned} & -1 \leq x (q-1) +2yq \\ & -1 \leq x (q-1) - y q \\ & -1 \leq y(q+1) \\ & -1 \leq -x(q+1) \\ \end{aligned} \end{split} \end{align} It is straight-forward to see that this is maximized at $x = \frac{1}{q+1}$ and $y =\frac{2}{q+1}$ to give a maximum value of $3q$. For $q = 2^{2\ell+1}$, the values that the irreducible characters take on the conjugacy classes with order 2 and 3 are given in the table below. \def\arraystretch{1.3} \begin{table}[h] \begin{center} \begin{tabular}{|c|cc ccc|} \hline Character & $\rho(\alpha)$ & $\overline{\rho}(1)$ & $\rho'(1)$ & $\pi(1)$ & $\pi(\chi)$ \\ Degree & $q+1$ & $q$ & $1$ & $q-1$ & $q-1$ \\ \hline value on $C_1$ (order 2) & $1$ & $0$ & $1$ & $-1$ & $-1$ \\ eigenvalue of $A_1$ & $q-1$ & $0$ & $q-1$ & $-(q+1)$ & $-(q+1)$\\ \hline value on $C_2$ (order 3) & $0$ & $-1$ & $1$ & $-2$ & $1$ \\ eigenvalue of $A_2$ & $0$ & $-(q-1)$ & $q(q-1)$ & $-2q$ & $q$\\ \hline \end{tabular} \end{center} \caption{Partial character table for $\pgl{2}{2^{2\ell+1}}$, with eigenvalues for $A_1$ and $A_2$.} \end{table} The solution of the following linear optimization is a bound on the size of the maximum clique in $X_q$. \begin{align} \begin{split} \mathsf{Maximize } &\ \ \ 1 + x (q^2-1) + y q(q-1),\\ \mathsf{Subject \, to } & \\ & \begin{aligned} & -1 \leq x (q-1) \\ & -1 \leq -y (q-1) \\ & -1 \leq -x (q+1) - 2yq \\ & -1 \leq -x (q+1)+ yq. \\ \end{aligned} \end{split} \end{align} It is straight-forward to solve this linear program, it is maximized at $x = \frac{1}{q+1}$ and $y=0$, giving a maximum value of $q$. \end{proof} \section{The subgroups containing $\psl{2}{q}$ on $K(q+1,3)$ when $q \equiv 1 \pmod 4$} \label{sec:1mod4} In this section, we consider the subgroups of the automorphism group of $K(q+1,3)$ containing $\psl{2}{q}$ for a prime power $q \equiv 1 \pmod 4$. In this case, $\psl{2}{q}$ is intransitive, so we consider the two minimally transitive subgroups of the automorphism group of $K(q+1,3)$ containing it. The first one of these groups of course is $\pgl{2}{q}$. The other minimally transitive group is described as follows. If $q = p^k$ for some even number $k$ and an odd prime $p$, then the outer automorphism group of $\psl{2}{q}$ is $\langle \alpha \rangle \times \langle \tau\rangle\cong C_2 \times C_k$, where $\alpha$ and $\tau$ have order $2$ and $k$, respectively. The other minimally transitive subgroup containing $\psl{2}{q}$ is the group $\psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle$. For both groups the stabilizer of a 3-set has size 6 and is isomorphic to $\sym(3)$. 
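For concreteness, this count follows from the orbit-stabilizer lemma: each of these two groups is transitive on the $3$-sets and has order $(q-1)q(q+1)$, while the number of $3$-sets from $\{1,\dots,q+1\}$ is $\binom{q+1}{3}$, so the stabilizer of a $3$-set has order
\[
\frac{(q-1)q(q+1)}{\binom{q+1}{3}} = \frac{(q-1)q(q+1)}{\tfrac{(q+1)q(q-1)}{6}} = 6 .
\]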
We start this section with a note about the structure of the derangement graph $\Gamma_{G}$ where $G$ is either $\pgl{2}{q}$, or $\psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle$. A graph $X = (V(X),E(X))$ is a \textsl{join} of two vertex-disjoint graphs $Y$ and $Z$, denoted $X = Y \vee Z$, if $V(X) = V(Y) \cup V(Z)$ and $E(X) = E(Y) \cup E(Z) \cup \left\{ (y,z) \mid y \in V(Y), z \in V(Z) \right\}$. That is, $X$ is obtained by taking the disjoint union of $Y$ and $Z$, and adding all the possible edges between the vertices of $Y$ and $Z$. We will show that $\Gamma_{G}$ is a join, to prove this we need to define an operation on the 3-sets. The group $\psl{2}{q}$ acts on the lines (i.e., $1$-dimensional subspaces of $\mathbb{F}_q^2$). These lines can be represented as homogeneous coordinates of the form $u = (u_1,u_2)$. For any two vectors $u, v$ we can define $D(u,v) = u_1 v_2 - u_2 v_1$ and for any ordered triple of lines $(u,v,w)$ consider \[ D(u,v,w) : = D(u,v) D(v,w) D(w,u). \] Note that \[ D(v,u,w) = D(v,u) D(u,w) D(w,v) = -D(u,v) D(v,w) D(w,u) = -D(u,v,w). \] This value is invariant under scalar multiplication and the action of $\psl{2}{q}$. If $q \equiv 1 \pmod{4}$ then $-1$ is a square and for every triple this value will be either a square or a non-square--this shows that the action of $\psl{2}{q}$ has 2 orbits (of equal length by normality) on the 3-sets when $q \equiv 1 \pmod{4}$. Consider $G \in \{ \pgl{2}{q}, \psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle \}$. Any element of $G \backslash \psl{2}{q}$ will map 3-sets from one orbit to the other orbit. This implies that the elements in $G \backslash \psl{2}{q}$ are all derangements, and all the elements of $G$ that fix a 3-set are contained in the subgroup $\psl{2}{q}$. Of these elements, the ones of order 2 are always contained in a single conjugacy class of $\psl{2}{q}$, that we denote by $C_1$; this is also a conjugacy class in $G$. The elements of order 3 are also contained in a single conjugacy class denoted $C_2$, unless $q=3^\ell$. If $q =3^\ell$, then the elements of order 3 are split between two conjugacy classes, $C_2'$ and $C_2''$ in $\psl{2}{q}$ and these conjugacy classes are closed under inversion if and only if $\ell$ is even. For all values of $q$, the elements of order 3 are all contained in a single conjugacy class $C_2$ of $G$. In the case that $q$ is a power of 3, $C_2 = C_2' \cup C_2''$. For any $g \in \psl{2}{q}$ and any $h \in G \backslash \psl{2}{q}$, it follows that $g h^{-1} \in G \backslash \psl{2}{q}$. Since all the elements in $G \backslash \psl{2}{q}$ are derangements, this means that the vertices corresponding to $g$ and $h$ are adjacent in $\Gamma_G$. From this we can see $\Gamma_G = X \vee X$ where $X$ is the subgraph induced by permutations $\psl{2}{q}$. In $X$, vertices $g, h$ are adjacent if $gh^{-1}$ is not conjugate, in $G$, to an element in the stabilizer of a 3-set. So vertices $g, h \in X$ are adjacent if $gh^{-1}$ is not in one of the conjugacy classes, $C_1$ or $C_2$ in $G$. This shows that $\Gamma_{\pgl{2}{q}}$ and $\Gamma_{\psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle }$ are isomorphic since they are both isomorphic to a join of two copies of $X$. Further, since a coclique of $\Gamma_{G} = X \vee X$ must lie in a copy of $X$, the cocliques in $\Gamma_{G}$ are exactly the cocliques in the subgraph induced by the elements of $\psl{2}{q}$. \begin{lem} Let $q \equiv 1 \pmod{4}$ and $G \in \{ \pgl{2}{q}, \psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle\}$. 
Then $\Gamma_{G}$, with the action on the 3-sets, is the join of two copies of a graph $X$. The vertices of $X$ are the elements of $\psl{2}{q}$ and two elements $g,h$ are adjacent if $gh^{-1}$ is not conjugate, in $G$, to an element in the stabilizer of a 3-set. \end{lem} Note that if $q = 3^{2\ell}$, then $X$ is not the same as the derangement graph of $\psl{2}{q}$ acting on the 3-sets (which is the action considered in the next section). In the graph $\Gamma_{\psl{2}{q}}$, with the action on the 3-sets, two vertices $g,h$ are adjacent if $gh^{-1}$ is in $C_1$ or $C_2'$ (but not in $C_2''$). We will consider the intersection density of the intransitive subgroup $\psl{2}{q}$ in Section~\ref{intrans}. For the remainder of this section, we only consider $\pgl{2}{q}$; since the derangement graphs are isomorphic, the same results will hold for $\psl{2}{q} \langle \alpha \tau^{\frac{k}{2}}\rangle$. The size of $\pgl{2}{q}$ is $(q-1)q(q+1)$, and the stabilizer of a 3-set is $H_q \cong \sym(3)$. We start with the case where $q \equiv 1 \pmod 4$ is a power of $3$. We note that this implies that $q = 3^{2 \ell}$. \begin{thm}\label{thm:powerof3} If $q =3^{2\ell}$, for some positive integer $\ell$, then the intersection density of $\pgl{2}{q}$ with its action on 3-sets of $\{1, \dots, q+1\}$ is $\frac{q}{3}$. \end{thm} \begin{proof} First, the set of matrices of the forms \[ \begin{pmatrix} 1 & x \\ 0 & 1\end{pmatrix}, \quad \begin{pmatrix} 1 & x \\ 0 & -1 \end{pmatrix}, \] with $x \in \fld_q$, forms a subgroup that is an intersecting set of size $2q$ under the action of $\pgl{2}{q}$ on the 3-sets. Next we will use Theorem~\ref{thm:LP-clq}, with the method of the previous section, to show that $2q$ is an upper bound on the size of a clique in $X_q$. Again, define a graph $X_q$ whose vertices are the elements of $\pgl{2}{q}$ and two vertices $g, h$ are adjacent if $gh^{-1}$ is in one of the conjugacy classes that contain elements from $H_q$. Using the same notation as in the proof of Lemma~\ref{lem:even}, this implies that $A(X_q) = A_1 + A_2$. Table~\ref{tab:pgl32l} gives the values of all the irreducible characters of $\pgl{2}{q}$ on these two conjugacy classes; this is taken from~\cite{adams2002character} and we use the same notation.
\def\arraystretch{1.3} \begin{table} \begin{center} \begin{tabular}{|c| cc cc c|} \hline Character & $\rho(\alpha)$ & $\rho(\alpha)$ & $\overline{\rho}(\alpha)$ & $\rho'(\alpha)$ & $\pi(\chi)$ \\ Dimension & $q+1$ & $q+1$ & $q$ & $1$ & $q-1$ \\ \hline value on $C_1$ & $2$ & $-2$ & $1$ & $1$ & $0$ \\ eigenvalue of $A_1$ & $ q$ & $-q$ & $\frac{q+1}{2}$ & $\frac{q(q+1)}{2} $ & $0$ \\ \hline value on $C_2$ & $1$ & $1$ & $0$ & $1$ & $-1$ \\ eigenvalue $A_2$ & $q-1$ & $q-1$ & $0$ & $q^2-1$ & $-(q+1)$ \\ \hline \end{tabular} \end{center} \caption{Partial character table for $\pgl{2}{3^{2\ell} }$, with eigenvalues for $A_1$ and $A_2$.}\label{tab:pgl32l} \end{table} The linear optimization problem we need to solve is the following: \bigskip \begin{align} \begin{split} \mathsf{Maximize } &\ \ \ 1 + x (q^2-1) + y \frac{q(q+1)}{2},\\ \mathsf{Subject \, to } & \\ & \begin{aligned} & -1 \leq x \frac{(q^2-1)}{q+1} + 2 y \frac{q(q+1)}{2(q+1)} = x (q-1) + y q \\ & -1 \leq x \frac{(q^2-1)}{q+1} - 2 y \frac{q(q+1)}{2(q+1)} = x (q-1) - y q \\ & -1\leq y \frac{q(q+1)}{2q} = y \frac{q+1}{2} \\ & -1 \leq x (q^2-1) + y \frac{q(q+1)}{2} \\ & -1 \leq x \frac{ -(q^2-1)}{q-1} = -x(q+1) \\ \end{aligned} \end{split} \end{align} This linear program can be easily solved to see that the objective function is maximized at $x = \frac{1}{q+1}$ and $y =\frac{2}{q+1}$ with a value of \[ 1 + \frac{1}{q+1} (q^2-1) + \frac{2}{q+1} \frac{q(q+1)}{2} = 1 + (q-1) + q = 2q.\qedhere \] \end{proof} Next, we show that there is an intersecting set of size twice the order of the stabilizer of a 3-set in $\pgl{2}{q}$, for any $q \equiv 1 \pmod 4$. \begin{lem}\label{lem:alt} If $q \equiv 1 \pmod{4}$, then there is an intersecting set of size $12$ in $\pgl{2}{q}$ with the action on the 3-sets of $\{1, \dots ,q+1\}$. \end{lem} \begin{proof} The group $\pgl{2}{q}$ has a subgroup isomorphic to $\alt(4)$. Each of the elements in $\alt(4)$ have order either 2 or 3, so all the element of order 3 fix at least one 3-set. Further, since $q \equiv 1 \pmod{4}$, any element with order 2 has $\frac{q-1}{2}$ 2-cycles. This means any such permutation has two fixed points, so it will also fix a 3-set. Thus by Lemma~\ref{lem:intersectingsubgroup}, this subgroup is an intersecting set. \end{proof} We conjecture that the intersecting sets in Lemma~\ref{lem:alt} are the largest. \begin{conj} If $q$ is not a power of $2$ or $3$, and $q \equiv 1 \pmod{4}$, then the intersection density of $\pgl{2}{q}$ acting on the 3-sets from $\{1, \dots, q+1\}$ is $2$. \end{conj} \section{$\psl{2}{q}$ on $K(q+1,3)$ when $q \equiv 3 \pmod 4$} \label{psl} Next we will consider the group $\psl{2}{q}$ acting on 3-sets from $\{1,\dots, q+1\}$ where $q \equiv 3 \pmod 4$. This action is transitive, and the stabilizer of a 3-set in $\psl{2}{q}$ has size $3$ and is isomorphic to $\mathbb{Z}_3$. First we consider when $q$ is a power of $3$. Since we are only considering $q \equiv 3 \pmod{4}$ in this section, $q$ must be an odd power of $3$. \begin{thm}\label{thm:oddpowerof3} If $q = 3^{2\ell+1}$ the intersection density of $\psl{2}{q}$ under the action on $3$-sets is $\frac{q}{3}$. \end{thm} \begin{proof} First there is an intersecting set of size $q$ given by the subgroup of matrices with the form \[ \begin{pmatrix} 1 & x \\ 0 & 1\end{pmatrix} \] where $x \in \fld_q$. We define the derangement graph $\Gamma_{\psl{2}{q}}$, as usual, with the elements of $\psl{2}{q}$ as its vertices, and two vertices $g, h$ are adjacent if $gh^{-1}$ is a derangement. 
A maximum intersecting set of the group is a maximum coclique in this graph. Using Theorem~\ref{thm:wtRatio}, we will show a coclique in $\Gamma_{\psl{2}{q}}$ is no larger than $q$, so the subgroup above is the largest possible intersecting set. In~\cite{adams2002character} the conjugacy classes of $\psl{2}{q}$ are grouped into families; the families denoted by $c_3(x)$ (with $x \neq 1$ and $x^2 \neq -1$) and $c_4(x)$ are exactly the derangements under this action. Below, we record the sums of the values of the irreducible characters over the conjugacy classes in each family. \begin{table}[h] \def\arraystretch{1.3} \begin{center} \begin{tabular}{|c |ccc ccc|} \hline Character & $\rho'(1)$ & $\overline{\rho}(1)$ & $\rho(\alpha)$ & $\pi(\chi)$& $\pi(\chi)$ & $\omega_0^{\pm}$ \\ Dimension & 1 & $q$ & $q+1$ & $q-1$ & $q-1$ & $\frac{q-1}{2}$ \\ \hline value on $c_3 (x)$ & $\frac{q-3}{4}$ & $\frac{q-3}{4}$ & $-1$ & 0 & 0 & 0 \\ eigenvalue & $\frac{q(q+1)(q-3)}{4}$ & $\frac{(q+1)(q-3)}{4}$ & $-q$ & 0 & 0 & 0 \\ \hline value on $c_4(x)$ & $\frac{q-3}{4}$ & $-\frac{q-3}{4}$ & 0 & 0 & 2 & 0 \\ eigenvalue & $\frac{q(q-1)(q-3)}{4}$ & $-\frac{(q-1)(q-3)}{4}$ & 0 & 0 & $2q$ & 0 \\ \hline \end{tabular} \end{center} \caption{Partial character table $\psl{2}{3^{2\ell+1}}$.} \end{table} From this, it is straight-forward to find the eigenvalues of a matrix compatible with $\Gamma_{\psl{2}{q}}$. If we set the weight on the conjugacy classes of type $c_3(x)$ to be $a= \frac{1}{q}$ and weight of the conjugacy class of type $c_4(z)$ to be $b = \frac{(q+3) }{q(q-3) } $, then the largest eigenvalue of the weighted adjacency matrix is \[ \frac{1}{q} \frac{q(q+1)(q-3)}{4} + \frac{(q+3) }{q(q-3) } \frac{q(q-1)(q-3)}{4} =\frac{q^2-1}{2} - 1 , \] and the smallest is $-1$ (from both $\overline{\rho}(1)$ and $\rho(\alpha)$). Then the ratio bound gives the size of the largest coclique \[ \frac{(q-1)q(q+1)}{2(\frac{q^2-1}{2} )} = q.\qedhere \] \end{proof} This group action has also been considered in~\cite{HKKMcyclic}, where an exact value has been determined for some values of $q$. \begin{thm}[Theorem 6.1, \cite{HKKMcyclic}] \label{thm:q1mod3} Consider $\psl{2}{q}$ with its action on 3-sets. If $q = p^\ell$ with $q \equiv 1 \pmod{3}$ then \[ \rho(G) = \begin{cases} 4/3 & \textrm{ if } p\neq5, \\ 2 & \textrm{ if } p = 5. \\ \end{cases} \] \end{thm} \medskip If $q$ is such that $q \equiv 3 \pmod 4$ and $q \equiv 2 \pmod{3}$ then $q \equiv 11 \pmod{12}$. For $q \equiv 11 \pmod{12}$, a simple calculation shows that $q^2 \equiv 1 \pmod{5}$ or $q^2 \equiv 4 \pmod{5}$. \begin{lem} If $q^2 \equiv 1 \pmod{5}$ then the density of $\psl{2}{q}$ with its action on 3-sets is at least $4/3$. \end{lem} \begin{proof} If $q^2 \equiv 1 \pmod{5}$ then $\psl{2}{q}$ contains a copy of $\alt(5)$. The subgroup $\alt(5)$ has an intersecting set of size 4. An example of such a set is \[ \{id, (1,2,3), (1,2,4), (1,2,5) \}. \] The subgraph induced by this subset of $\psl{2}{q}$ is a clique of size 4 under this action. \end{proof} We end this section with a conjecture on the intersection density of the group $\psl{2}{q}$ with its action on the 3-sets, when $q^2 \equiv \pm 1\pmod{5}$. \begin{conj} If $q^2 \equiv 1 \pmod{5}$ then the intersection density of $\psl{2}{q}$ with its action on 3-sets is $4/3$; if $q^2 \equiv 4 \pmod{5}$ then the intersection density of this action is $1$. 
\end{conj} \section{$\psl{2}{q}$ acting on the Kneser graph $K(q+1,2)$} \label{2sets} Both the groups $\pgl{2}{q}$ and $\psl{2}{q}$ are transitive subgroups of the automorphism group of $K(q+1,2)$ where $q$ is a prime power. We will only consider $\psl{2}{q}$ since it would have the larger intersection density of the two. If $q$ is odd, the size of the stabilizer of a point on $\psl{2}{q}$ under the action on 2-sets has size \[ \frac{(q-1)q(q+1)}{2} \binom{q+1}{2} ^{-1} = q-1, \] and if $q$ is even it is \[ (q-1)q(q+1) \binom{q+1}{2} ^{-1} = 2(q-1). \] \begin{lem} For $q$ even, the group $\psl{2}{q}$ acting on 2-sets from $\{1,\dots,q+1\}$ has intersection density $\frac{q}{2}$. \end{lem} \begin{proof} Since $q$ is even, any element that fixes exactly one point has order 2, and also fixes a 2-set. Clearly any element with two fixed points also fixes a 2-set. Thus every element in the stabilizer of a point in $\psl{2}{q}$, in its action on the points $\{1,\dots , q+1\}$, also fixes a 2-set from $\{1,\dots , q+1\}$. So the stabilizer of a point in the natural action is also an intersecting set under the action on 2-sets and it has size $q(q-1)$. We will use the ratio bound, Theorem~\ref{thm:wtRatio}, to show that this is the largest possible intersecting set. Still using the notation of~\cite{adams2002character}, only the conjugacy classes of type $c_4(z)$ are derangements with this action and the eigenvalues of the derangement graph are \[ \left\{ \frac{q^2(q-1)}{2}, \quad 0, \quad -\frac{q(q-1)}{2}, \quad q \right \}. \] The ratio between the largest and the smallest eigenvalue is $-q$, so by the ratio bound, Theorem~\ref{thm:wtRatio}, the size of a maximum coclique cannot be any larger than \[ \frac{(q-1)q(q+1) }{1 - (-q) } =(q-1)q . \] This implies the intersection density is \[ \frac{q^2-q}{ 2(q-1) } = \frac{q}{2}.\qedhere \] \end{proof} For values of $q$ smaller than $32$ we have done some calculations of the intersection density, and based on these calculations we make the following conjecture. \begin{conj} For $q \equiv 1 \pmod{4}$ the action of the group $\psl{2}{q}$ on 2-sets has intersection density 1. \end{conj} The case for $q \equiv 3 \pmod{4}$ seems more complicated, we did find groups that had larger intersecting sets, but we did not find a general construction. \begin{lem}\label{lem:counters} For $q = 7$ the group $\alt(4)$ is an intersecting subgroup in $\psl{2}{q}$ and the intersection density is at least $2$. For $q = 31$ the group $\alt(5)$ is an intersecting subgroup in $\psl{2}{q}$ and the intersection density is at least $2$. \end{lem} The second part of Lemma~\ref{lem:counters} is Example 2.2 in~\cite{li2020erd}, in this paper the authors show that there are no larger intersecting subgroups, but there may be larger intersecting subsets. There are larger values of $q$ for which $\psl{2}{q}$ contains $\alt(4)$ or $\alt(5)$ as intersecting subgroup, but in these groups, the stabilizer under the action on 2-sets is larger than either $\alt(4)$ or $\alt(5)$; so these subgroups do not give an intersection density above $1$. This leaves us with the following question. \begin{quest} For $q \equiv 3 \pmod{4}$ and $q>31$, does the group $\psl{2}{q}$ have intersection density larger than 1 with its action on the 2-sets? \end{quest} Finally, we also make a conjecture about the intersection density of $\pgl{2}{q}$, even though the maximum intersection density of the Kneser graph would be given by the group $\psl{2}{q}$. 
From our computations, this conjecture seems true for $q\leq 27$. \begin{conj} For $q$ odd the group $\pgl{2}{q}$ with its action on the 2-sets of $\{1,\dots ,q+1\}$ has intersection density 1. \end{conj} \section{The intransitive action of $\psl{2}{q}$ on $K(q+1,3)$} \label{intrans} If $q \equiv 1 \pmod{4}$, then $\psl{2}{q}$ is not transitive on the 3-sets, but we can consider the action of $\psl{2}{q}$ on one of its orbits of 3-sets. The stabilizer of a 3-set under this action is a subgroup $H_q$, isomorphic to $\sym(3)$. If $q = 3^\ell$ then the conjugacy class of elements of order three splits into 2 conjugacy classes in $\psl{2}{q}$. If $\ell$ is even then these classes are closed under inverses, so all the order three elements of $H_q$ belong to only one of these conjugacy classes; if $\ell$ is odd these conjugacy classes are not closed under inverses. We have done some computer searches for intersecting sets under this action. Our results are recorded in Table~\ref{tab:intrans}. \begin{table}[h] \begin{center} \begin{tabular}{|c | c| c |c|} \hline $q$ & Max. intersecting & Lower bound on density & Intersection density \\ & set found & of $\psl{2}{q}$ on one orbit & of $\pgl{2}{q}$ on 3-sets \\ \hline 5 & 12 & 2 & 2\\ 9 & 15 & 5/2 & 3 \\ 13 & 12 & 2 & 2\\ 17 & 12 & 2 & 2\\ 25 &12 & 2 & 2\\ \hline \end{tabular} \end{center} \caption{Calculations for intersection density of $\psl{2}{q}$ acting on a single orbit of 3-sets when $q \equiv 1 \pmod{4}$.} \label{tab:intrans} \end{table} We consider the case $q=9$ more carefully; $\psl{2}{9}$ is an example where there are 2 conjugacy classes of elements of order three and each such class is closed under inverses. \begin{example} The group $\psl{2}{9}$, acting on one of its orbits of 3-sets, has an intersecting set of size 15. The elements in this set are the following: \begin{center} \begin{tabular}{cc} $id$ & \\ $(1,2)(5,10)(6,9)(7,8)$ & $(1,2)(3,4)(5,7)(8,10)$ \\ $(1,10)(2,7)(3,6)(5,8)$ & $(1,7)(2,10)(4,9)(5,8)$ \\ $(1,5)(2,8)(4,6)(7,10)$ & $(1,8)(2,5)(3,9)(7,10)$ \\ $(1,4,2)(5,8,6)(7,10,9)$ & $(1,2,4)(5,6,8)(7,9,10)$ \\ $(1,6,2)(3,10,7)(4,5,8)$ & $(1,2,6)(3,7,10)(4,8,5)$ \\ $(1,3,2)(5,9,8)(6,10,7)$ & $(1,2,3)(5,8,9)(6,7,10)$ \\ $(1,9,2)(3,8,5)(4,7,10)$ & $(1,2,9)(3,5,8)(4,10,7)$\\ \end{tabular} \end{center} Any pair of order 2 elements in a single row generates a subgroup isomorphic to $C_2 \times C_2$. Any other pair of order 2 elements generates a subgroup isomorphic to $\sym(3)$. Any pair of order 3 elements in a single row are inverses, so they generate a copy of $C_3$; any pair of order 3 elements not in the same row generates a subgroup isomorphic to $\alt(4)$. Finally, an element of order 2 and an element of order 3 generate a subgroup isomorphic to either $\sym(3)$ or $\alt(4)$. If an element of order two generates a subgroup isomorphic to $\sym(3)$ with an element of order 3, then the other element of order two in the same row generates a subgroup isomorphic to $\alt(4)$ with the element of order 3. \end{example} \section{Conclusions and Further Work} In this paper, we initiated the study of the intersection density of the Kneser graphs $K(n,3)$ and $K(n,2)$. Our main focus is on the $3$-homogeneous groups containing the almost simple group $\psl{2}{q}$, for some prime power $q$. The group $\psl{2}{q}$ is not transitive in its action on the $3$-sets when $q \equiv 1 \pmod 4$; for other values of $q$ modulo $4$, it is transitive on $3$-sets.
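As a small illustration of this dichotomy, the following Python sketch (again an illustration, not the code used for the computations in this paper) generates $\psl{2}{5}$ from the M\"obius transformations $z\mapsto z+1$ and $z\mapsto -1/z$ acting on the projective line over $\mathbb{F}_5$, and checks that it has two orbits, each of size $10$, on the $3$-sets.

\begin{lstlisting}[language=Python]
# Sketch: PSL(2,5) acting on the projective line PG(1,5) has two orbits on 3-sets.
from itertools import combinations

points = [0, 1, 2, 3, 4, "inf"]    # the q + 1 = 6 points of the projective line

def moebius_t(z):                  # z -> z + 1
    return "inf" if z == "inf" else (z + 1) % 5

def moebius_s(z):                  # z -> -1/z  (the inverse of z is z^3, since z^4 = 1)
    if z == "inf":
        return 0
    if z == 0:
        return "inf"
    return (-pow(z, 3, 5)) % 5

def as_perm(f):                    # encode a map on the points as a tuple of indices
    return tuple(points.index(f(p)) for p in points)

gens = [as_perm(moebius_t), as_perm(moebius_s)]
identity = tuple(range(6))

# close the generators under composition to obtain the whole group
group = {identity}
frontier = [identity]
while frontier:
    g = frontier.pop()
    for s in gens:
        h = tuple(s[i] for i in g)          # h = s o g
        if h not in group:
            group.add(h)
            frontier.append(h)
assert len(group) == 60                     # |PSL(2,5)| = 60

# orbits of the group on the 3-subsets of the 6 points (indices 0,...,5)
unseen = {frozenset(c) for c in combinations(range(6), 3)}
orbit_sizes = []
while unseen:
    start = next(iter(unseen))
    orbit = {frozenset(g[i] for i in start) for g in group}
    orbit_sizes.append(len(orbit))
    unseen -= orbit
print(sorted(orbit_sizes))                  # expected: [10, 10], so two orbits
\end{lstlisting}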
From Theorems~\ref{thm:classification-transitive} and~\ref{thm:classification-homogeneous}, in the case $n = 2^\ell+1$ for some $\ell$, the minimal 3-transitive subgroup on $\{1, \dots, n\}$ is $\pgl{2}{2^{\ell}}$. Theorem~\ref{thm:qeven} gives the exact values of the intersection density of $K(n,3)$ in this case. \begin{theorem} The intersection density of $K(2^\ell+1,3)$ is $2^{\ell-1}$ if $\ell$ is even, and $2^{\ell}/6$ if $\ell$ is odd. \end{theorem} Further, provided that $3^\ell+1 \neq 2^d$ for any $d$, the smallest 3-transitive subgroup is $\pgl{2}{3^{\ell}}$; this, with Theorem~\ref{thm:powerof3} and Theorem~\ref{thm:oddpowerof3}, gives the intersection density of $K(3^{\ell} + 1 , 3)$. In 2004, Mihăilescu \cite{Catalan} proved the Catalan conjecture, which asserts that the only solution to the equation $x^a = y^b+1$, where $(x,y) \in \mathbb{N} \times \mathbb{N}$ and $a,b>1$, is the pair $(x,y)= (3,2)$ with $a =2$ and $b =3$. Therefore, we can see that $3^\ell+1 \neq 2^d$ unless $\ell =1$ and $d=2$. \begin{theorem} If $\ell >1$, then the intersection density of $K(3^{\ell} + 1 , 3)$ is $3^{\ell-1}$. \end{theorem} The next project is to calculate the intersection density of $\asl{d}{2}$ with its action on the 3-sets. In particular, we would like to determine if there exists a weighted adjacency matrix for the action of $\asl{d}{2}$ on the 3-sets, for all values of $d$, for which the ratio bound can be used to prove that the stabilizers are the largest intersecting sets. This means the question of the intersection density of $K(2^\ell,3)$ is still open, as is the question of which groups achieve the largest density. \begin{quest} Does the subgroup $\pgl{2}{q}$ give the maximum intersection density of $K(q+1,3)$ for $q$ a prime power with $q \equiv 1 \pmod{4}$? \end{quest} In the case that $q \equiv 3 \pmod{4}$, we conjecture that the group $\psl{2}{q}$ gives the maximum intersection density. \begin{conj} For $q$ a prime power with $q \equiv 3 \pmod{4}$, the group $\psl{2}{q}$ gives the maximum intersection density among all transitive subgroups of the automorphism group of $K(q+1,3)$. If $q^2 \equiv 1 \pmod{5}$ then the intersection density of $K(q+1,3)$ is $4/3$, and if $q^2 \equiv 4 \pmod{5}$ then the intersection density of $K(q+1,3)$ is $1$. \end{conj} We would like to determine the intersection density of the other groups acting transitively on the 3-sets. Using \verb|Sagemath|~\cite{sagemath}, we verified that $\agl{1}{8}$ and $\agammal{1}{8}$ with their actions on the $3$-sets both have intersection density 1. The group $V_{16}.\alt(7)$ has intersection density 1 (via \verb|Sagemath|~\cite{sagemath} and \verb|Gurobi| \cite{gurobi}), and $\agammal{1}{32}$ has intersection density 1, as it is regular. The group $\mathieu{11}(12)$ has intersection density 1 (this was verified by determining that the maximum coclique in the subgraph of the derangement graph induced by the non-neighbours of the identity has size $35$). Using \verb|Gurobi|~\cite{gurobi}, we determined that there exists a weighted adjacency matrix for the derangement graph so that the ratio bound holds with equality for the groups (with the action on the 3-sets) $\asl{3}{2}$, $\asl{4}{2}$ and $M_{24}$. It can be further determined that the derangement graphs of the groups $M_{11}$, $M_{22}$ and $M_{23}$ do not have such a weighted adjacency matrix. We have yet to determine the intersection density of the Mathieu groups $M_{22}$ and $M_{23}$, and of the group $\Aut(\mathieu{22})$, with their action on the 3-sets.
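For the explicit linear programs appearing earlier in the paper, this type of verification can also be reproduced with standard open-source solvers. As an illustration (a sketch using \texttt{scipy} rather than \verb|Gurobi|; it is not the code used for the computations above), the following snippet solves the first linear program in the proof of Lemma~\ref{lem:even} for $q=16$ and recovers the clique bound $3q = 48$.

\begin{lstlisting}[language=Python]
# Sketch: solve the first LP from the proof of Lemma lem:even for q = 16.
# Maximize 1 + x(q^2 - 1) + y q(q + 1), where each constraint below has the
# form  -1 <= (an eigenvalue of x A_1 + y A_2).
from scipy.optimize import linprog

q = 16
# linprog minimizes, so negate the objective; the constant 1 is added back at the end
c = [-(q**2 - 1), -q * (q + 1)]
# constraints rewritten in the form  A_ub @ [x, y] <= b_ub
A_ub = [
    [-(q - 1), -2 * q],    # -1 <= x(q-1) + 2yq
    [-(q - 1), q],         # -1 <= x(q-1) - yq
    [0, -(q + 1)],         # -1 <= y(q+1)
    [q + 1, 0],            # -1 <= -x(q+1)
]
b_ub = [1, 1, 1, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2, method="highs")
print(res.x, 1 - res.fun)   # expected optimum at x = 1/(q+1), y = 2/(q+1), value 3q = 48
\end{lstlisting}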
Putting these calculations together, we get the following lemma. \begin{lem} The groups $\agl{1}{8}$, $\agammal{1}{8}$, $\agammal{1}{32}$, $\asl{3}{2}$, $\asl{4}{2}$, $V_{16}.\alt(7)$, $\mathieu{11} (12)$ and $M_{24}$ acting on the 3-sets have intersection density 1. \end{lem} As stated in the introduction, the notion of intersection density has been generalized by Kutnar, Maru{\v{s}}i{\v{c}} and Pujol in~\cite{KMP} to an intersection density array. For a vertex-transitive graph, this array consists of the intersection densities of all transitive subgroups of the automorphism group of the graph. The intersection density of a graph is the largest entry in this array, but the entire intersection density array for the Kneser graphs is unknown in general. If $\Alt(n)$ is the smallest transitive subgroup of the automorphism group of $K(n,k)$, then the intersection density array of $K(n,k)$ is $[1]$; in this case $K(n,k)$ is intersection density stable and has the EKR-property. In this paper we give several examples of Kneser graphs that have intersection density greater than $1$ and hence do not have the EKR property. For these graphs we would like to determine the entire intersection density array. For small Kneser graphs, using \texttt{GAP}~\cite{GAP4}, we were able to compute the entire array; these are recorded in Table~\ref{ArrayTable}. In the examples we checked, with the exception of $q=8$, for all the groups $G$ with $\psl{2}{q} \leq G \leq \pgammal{2}{q}$ the size of the largest coclique in $G$ is the same as the size of the largest coclique in $\psl{2}{q}$. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|} \hline Kneser Graph & Array \\ \hline \hline $K(7,3)$ & $[1]$ \\ \hline $K(8,3)$ & $[1,4/3]$ \\ \hline $K(9,3)$ & $[1,4/3]$ \\ \hline $K(10,3)$ & $[1,3]$ \\ \hline \hline $K(17,3)$ & $[1,2,4,8]$ \\ \hline $K(26,3)$ & $[1,2]$ \\ \hline $K(28,3)$ & $[1,3,9]$ \\ \hline \end{tabular} \end{center} \caption{Intersection density arrays for some Kneser graphs.\label{ArrayTable}} \end{table} \section{Acknowledgements} We wish to thank the anonymous referee for their valuable comments which greatly improved this paper. This research was done while the second author was a Ph.D. student under the supervision of Dr. Karen Meagher and Dr. Shaun Fallat at the Department of Mathematics and Statistics, University of Regina. \begin{thebibliography}{10} \bibitem{adams2002character} J.~Adams. \newblock Character tables for $GL(2)$, $SL(2)$, $PGL(2)$ and $PSL(2)$ over a finite field. \newblock {\em Lecture Notes, University of Maryland}, 25:26--28, 2002. \bibitem{3setwise} A.~Behajaina, R.~Maleki, A.~T.~Rasoamanana, and A.~S.~Razafimahatratra. \newblock 3-setwise intersecting families of the symmetric group. \newblock {\em Discrete Math.}, {\bf 344}(8):112467, 2021. \bibitem{BidySym} A.~Behajaina, R.~Maleki, and A.~S.~Razafimahatratra. \newblock On the intersection density of the symmetric group acting on uniform subsets of small size. \newblock {\em Linear Algebra Appl.}, {\bf 664}:61--103, 2023. \bibitem{dixon1996permutation} J.~D. Dixon and B.~Mortimer. \newblock {\em Permutation {G}roups}, volume~{\bf 163}. \newblock Springer Science \& Business Media, 1996. \bibitem{Ellis} D.~Ellis. \newblock Setwise intersecting families of permutations. \newblock {\em J. Combin. Theory Ser. A}, {\bf 119}(4):825--849, 2012. \bibitem{ellis2011intersecting} D.~Ellis, E.~Friedgut, and H.~Pilpel. \newblock Intersecting families of permutations. \newblock {\em J. Amer. Math. Soc.}, {\bf 24}(3):649--682, 2011.
\bibitem{GAP4} The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming, Version 4.12.2}; 2022, \url{https://www.gap-system.org}. \bibitem{godsil2016erdos} C.~Godsil and K.~Meagher. \newblock {\em {E}rd\H{o}s-{K}o-{R}ado {T}heorems: {A}lgebraic {A}pproaches}. \newblock Cambridge University Press, 2016. \bibitem{godsil1980more} C.~D.~Godsil. \newblock More odd graph theory. \newblock {\em Discrete Math.}, {\bf 32}(2):205--207, 1980. \bibitem{godsilroyle} C.~Godsil and G.~Royle. \newblock {\em Algebraic {G}raph {T}heory}, volume 207 of {\em Graduate Texts in Mathematics}. \newblock Springer-Verlag, New York, 2001. \bibitem {MR1373655} Handbook of {C}ombinatorics. {V}ol. 1 \newblock Edited by R. L. Graham, M. Gr\"otschel and L. Lov\'asz. \newblock Elsevier Science B.V., Amsterdam; MIT Press, Cambridge, MA, 1995. \bibitem{gurobi} Gurobi Optimization, LLC \newblock Gurobi Optimizer Reference Manual. \newblock \url{https://www.gurobi.com}. \newblock 2022 \bibitem{HKKMcyclic} A.~Hujdurovi\'c, I.~Kov\'acs, K.~Kutnar and D.~Maru\v{s}i\v{c}. \newblock Intersection density of transitive groups with cyclic point stabilizers. \newblock \url{https://arxiv.org/abs/2201.11015v1} \bibitem{HKKMMO} A. Hujdurovi\'c, K. Kutnar, B. Kuzma, D. Maru\v{s}i\v{c}, \v{S}. Miklavi\v{c}, and M. Orel. \newblock On intersection density of transitive groups of degree a product of two odd primes. \newblock {\em Finite Fields Appl.}, 78:101975, 2022. \bibitem{HKMMwreath} A. Hujdurovi\'c, K. Kutnar, D. Maru\v{s}i\v{c}, and \v{S}. Miklavi\v{c}. \newblock On maximum intersecting sets in direct and wreath product of groups. \newblock {\em European J. Combin.} 103:103523, 2022. \bibitem{HMMK} A. Hujdurovi\'c, D. Maru\v{s}i\v{c}, \v{S}. Miklavi\v{c}, and K. Kutnar. \newblock Intersection density of transitive groups of certain degrees. \newblock {\em Algebraic Combin.}, {\bf 5}(2), 289-297, 2022. \bibitem{MR306296} W.~M.~Kantor. \newblock {$k$}-homogeneous groups. \newblock {\em Math. Z.} 124:261--265, 1972. \bibitem{KMP} K.~Kutnar, D.~Maru{\v{s}}i{\v{c}}, and C~.Pujol. \newblock Intersection density of cubic symmetric graphs. \newblock {\em J. Algebraic Combin.} {\bf 57}(4):1313--1326, 2023. \bibitem{li2020erd} C.~H. Li, S.~J. Song, and V.~Pantangi. \newblock {E}rd{\H{o}}s-{K}o-{R}ado problems for permutation groups. \newblock { arXiv preprint arXiv:2006.10339}, 2020. \bibitem{meagher2016erdHos} K.~Meagher, P.~Spiga, and P.~H. Tiep. \newblock An {E}rd{\H{o}}s--{K}o--{R}ado theorem for finite 2-transitive groups. \newblock {\em European J. Combin.}, 55:100--118, 2016. \bibitem{2setwise} K.~Meagher and A.~S.~Razafimahatratra. \newblock The Erd\H{o}s-Ko-Rado Theorem for 2-pointwise and 2-setwise intersecting permutations. \newblock {\em Electron. J. Combin.}, {\bf 28}(4):P4--10, 2021. \bibitem{meagher180triangles} K.~Meagher, A.~S. Razafimahatratra, and P.~Spiga. \newblock On triangles in derangement graphs.A \newblock {\em J. Combin. Theory Ser. A }, 180:105390, 2021. \bibitem{Catalan} P.~Mihăilescu. \newblock Primary Cyclotomic Units and a Proof of Catalan's Conjecture. \newblock {\em J. Reine Angew. Math.}, 572:167-195, 2004. \bibitem{BidyThesis} A. S. Razafimahatratra. \newblock The Erd\H{o}s-Ko-Rado Theorem for Transitive Permutation Groups. \newblock PhD Thesis, University of Regina, 2022. \newblock \url{https://ourspace.uregina.ca/handle/10294/14951} \bibitem{2oddprimes} A. S. Razafimahatratra. \newblock On the intersection density of primitive groups of degree a product of two odd primes. \newblock {\em J. Combin. 
Theory Ser. A }, \newblock 194:105707, 2023. \bibitem{BidyCMP} A. S. Razafimahatratra. \newblock On complete multipartite derangement graphs. \newblock {\em Ars Math. Contemp.}, 21, P1.07, 2021. \bibitem{sagemath} \newblock The Sage Developers \newblock {S}ageMath, the {S}age {M}athematics {S}oftware {S}ystem ({V}ersion 8.9), \newblock{{\tt https://www.sagemath.org}}, {2020} \bibitem{spiga2019erdHos} P.~Spiga. \newblock The {E}rd{\H{o}}s-{K}o-{R}ado theorem for the derangement graph of the projective general linear group acting on the projective space. \newblock {\em J. Combin. Theory Ser. A}, 166:59--90, 2019. \end{thebibliography} \end{document}
2205.04953v2
http://arxiv.org/abs/2205.04953v2
Colouring Strong Products
\documentclass[11pt]{article} \usepackage[T1]{fontenc} \usepackage{lmodern,amsmath,amsthm,amsfonts,amssymb,graphicx,float,wrapfig,calc,microtype,thmtools,underscore,mathtools,paralist,thm-restate} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \usepackage{breakurl} \usepackage[unicode=true]{hyperref} \hypersetup{ colorlinks, linkcolor={black}, citecolor={black}, urlcolor={blue!60!black}, pdftitle={Colourings of Strong Products}, pdfauthor={Louis Esperet, David~R.~Wood}} \usepackage[noabbrev,capitalise]{cleveref} \crefname{lem}{Lemma}{Lemmas} \crefname{thm}{Theorem}{Theorems} \crefname{cor}{Corollary}{Corollaries} \crefname{prop}{Proposition}{Propositions} \crefname{conj}{Conjecture}{Conjectures} \crefname{qu}{Question}{Questions} \crefname{openproblem}{Open Problem}{Open Problems} \crefformat{equation}{(#2#1#3)} \Crefformat{equation}{Equation #2(#1)#3} \newcommand{\defn}[1]{\textcolor{Maroon}{\emph{#1}}} \newcommand{\refcomment}[1]{\textcolor{red}{Referee: #1 }} \newcommand{\bigchi}{\raisebox{1.55pt}{\scalebox{1.2}{\ensuremath\chi}}} \newcommand{\fomega}{\omega^f\hspace*{-0.2ex}} \newcommand{\fchi}{\bigchi^f\hspace*{-0.2ex}} \newcommand{\cchi}{\bigchi_{\star}\hspace*{-0.2ex}} \newcommand{\lchi}{\bigchi^{\ell}} \newcommand{\lcchi}{\bigchi^{\ell}_{\star}\hspace*{-0.2ex}} \newcommand{\dchi}{\bigchi\hspace*{-0.1ex}_{\Delta}\hspace*{-0.2ex}} \newcommand{\ldchi}{\bigchi\hspace*{-0.1ex}_{\Delta}^{\ell}\hspace*{-0.2ex}} \newcommand{\chigen}[1]{\bigchi\hspace*{-0.01ex}_{#1}\hspace*{-0.15ex}} \newcommand{\cfchi}{\bigchi^f_{\star}\hspace*{-0.1ex}} \newcommand{\dfchi}{\bigchi^f_{\Delta}\hspace*{-0.1ex}} \newcommand{\CartProd}{\mathbin{\square}} \renewcommand{\Pr}{\,\mathbb{P}} \newcommand{\MM}{\mathcal{M}} \newcommand{\TT}{\mathcal{T}} \newcommand{\GG}{\mathcal{G}} \newcommand{\STAR}{\mathcal{S}} \newcommand{\XX}{\mathcal{X}} \newcommand{\YY}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \newcommand{\HH}{\mathcal{H}} \newcommand{\CC}{\mathcal{C}} \usepackage[longnamesfirst,numbers,sort&compress]{natbib} \makeatletter \def\NAT@spacechar{~} \makeatother \setlength{\bibsep}{0.4ex plus 0.2ex minus 0.2ex} \usepackage[bmargin=25mm,tmargin=25mm,lmargin=35mm,rmargin=35mm]{geometry} \setlength{\footnotesep}{\baselinestretch\footnotesep} \setlength{\parindent}{0cm} \setlength{\parskip}{1.5ex} \allowdisplaybreaks \newcommand{\half}{\ensuremath{\protect\tfrac{1}{2}}} \DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \renewcommand{\ge}{\geqslant} \renewcommand{\le}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\mad}{mad} \DeclareMathOperator{\col}{col} \DeclareMathOperator{\tw}{tw} \DeclareMathOperator{\ltw}{ltw} \DeclareMathOperator{\tcn}{tcn} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \allowdisplaybreaks \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{obs}[thm]{Observation} \newtheorem{qu}[thm]{Question} \theoremstyle{definition} \newtheorem{conj}[thm]{Conjecture} \newcommand{\PP}{\mathcal{P}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\NN}{\mathbb{N}} \newcommand{\RR}{\mathbb{R}} \newcommand{\comment}[1]{ \bigskip \framebox{ \parbox{\textwidth}{ \textcolor{red}{#1} } } \smallskip} \newcommand{\scomment}[1]{\framebox{\textcolor{red}{#1}}} \def\longequation{$$\vcenter\bgroup\advance\hsize by -9em 
\noindent\ignorespaces\refstepcounter{equation}}\makeatletter\def\endlongequation{\egroup\eqno(\theequation)$$\global\@ignoretrue} \makeatother \begin{document} \author{Louis Esperet\,\footnotemark[2] \qquad David~R.~Wood\,\footnotemark[5]} \footnotetext[2]{Laboratoire G-SCOP (CNRS, Univ.\ Grenoble Alpes), Grenoble, France (\texttt{[email protected]}). Partially supported by the French ANR Projects GATO (ANR-16-CE40-0009-01), GrR (ANR-18-CE40-0032), TWIN-WIDTH (ANR-21-CE48-0014-01) and by LabEx PERSYVAL-lab (ANR-11-LABX-0025).} \footnotetext[5]{School of Mathematics, Monash University, Melbourne, Australia (\texttt{[email protected]}). Research supported by the Australian Research Council.} \sloppy \title{\textbf{Colouring Strong Products}} \maketitle \begin{abstract} Recent results show that several important graph classes can be embedded as subgraphs of strong products of simpler graph classes (paths, small cliques, or graphs of bounded treewidth). This paper develops general techniques to bound the chromatic number (and its popular variants, such as the fractional, clustered, or defective chromatic number) of the strong product of general graphs with simpler graph classes, such as paths, and more generally graphs of bounded treewidth. We also highlight important links between the study of (fractional) clustered colouring of strong products and other topics, such as asymptotic dimension in metric geometry and topology, site percolation in probability theory, and the Shannon capacity in information theory. \end{abstract} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction} \label{Introduction} The past few years have seen a renewed interest in the structure of strong products of graphs. One motivation is the Planar Graph Product Structure Theorem (see \cref{SubgraphsStrongProducts}), which shows that every planar graph is a subgraph of the strong product of a graph with bounded treewidth and a path. As a consequence, results on the structure of planar graphs can be simply deduced from the study of the structure of strong products of graphs (and in particular from the study of the strong product of a graph with a path). This theorem was preceded by several results that can also be stated as product structure theorems (where the host graph is the strong product of paths, trees, or small complete graphs); see \cref{SubgraphsStrongProducts} below. Note that grids in finite-dimensional Euclidean spaces can be expressed as the strong product of finitely many paths, and colouring properties of these grids are related to important topological or metric properties of these spaces; see \cref{sec:hex} and \cref{sec:asdim}. It turns out that the study of colouring properties of strong products of graphs has interesting connections with various problems in combinatorics, which we highlight below. In particular, in order to understand the chromatic number (or the clustered or defective chromatic number) of strong products, it is very helpful to understand the fractional versions of such colourings, which have strong ties with site percolation in probability theory (see \cref{sec:perco}) and Shannon capacity in information theory (see \cref{sec:shannon}). We start with the definitions of various graph products, as well as various graph colouring notions that are studied in this paper. We then give an overview of our main results in \cref{sec:results}.
\subsection{Definitions}\label{sec:def} The \defn{cartesian product} of graphs $A$ and $B$, denoted by $A\CartProd B$, is the graph with vertex set $V(A)\times V(B)$, where distinct vertices $(v,x),(w,y)\in V(A)\times V(B)$ are adjacent if: $v=w$ and $xy\in E(B)$, or $x=y$ and $vw\in E(A)$. The \defn{direct product} of graphs $A$ and $B$, denoted by $A\times B$, is the graph with vertex set $V(A)\times V(B)$, where distinct vertices $(v,x),(w,y)\in V(A)\times V(B)$ are adjacent if $vw\in E(A)$ and $xy\in E(B)$. The \defn{strong product} of graphs $A$ and $B$, denoted by $A\boxtimes B$, is the graph $(A\CartProd B)\cup (A\times B)$. For graph classes $\GG_1$ and $\GG_2$, let \begin{align*} \GG_1\CartProd \GG_2 & := \{G_1\CartProd G_2: G_1\in \GG_1,G_2\in\GG_2\}\\ \GG_1\times \GG_2 & := \{G_1\times G_2: G_1\in \GG_1,G_2\in\GG_2\}\\ \GG_1\boxtimes \GG_2 & := \{G_1\boxtimes G_2: G_1\in \GG_1,G_2\in\GG_2\}. \end{align*} A \defn{colouring} of a graph $G$ is simply a function $f:V(G)\to\mathcal{C}$ for some set $\mathcal{C}$ whose elements are called \defn{colours}. If $|\mathcal{C}| \leq k$ then $f$ is a \defn{$k$-colouring}. An edge $vw$ of $G$ is \defn{$f$-monochromatic} if $f(v)=f(w)$. An \defn{$f$-monochromatic component}, sometimes called a \defn{monochromatic component}, is a connected component of the subgraph of $G$ induced by $\{v\in V(G):f(v)=\alpha\}$ for some $\alpha\in \mathcal{C}$. We say $f$ has \defn{clustering} $c$ if every $f$-monochromatic component has at most $c$ vertices. The \defn{$f$-monochromatic degree} of a vertex $v$ is the degree of $v$ in the monochromatic component containing $v$. Then $f$ has \defn{defect} $d$ if every $f$-monochromatic component has maximum degree at most $d$ (that is, each vertex has monochromatic degree at most $d$). There have been many recent papers on clustered and defective colouring \citep{NSSW19,vdHW18,KO19,CE19,EJ14,EO16,DN17,LO18,HW19,MRW17,LW1,LW2,LW3,LW4,LW5,NSW22,DS20}; see \citep{WoodSurvey} for a survey. The general goal of this paper is to study defective and clustered chromatic number of graph products, with the focus on minimising the number of colours with bounded defect or bounded clustering a secondary goal. The \defn{clustered chromatic number} of a graph class $\GG$, denoted by $\cchi(\GG)$, is the minimum integer $k$ for which there exists an integer $c$ such that every graph in $\GG$ has a $k$-colouring with clustering $c$. If there is no such integer $k$, then $\GG$ has \defn{unbounded} clustered chromatic number. The \defn{defective chromatic number} of a graph class $\GG$, denoted by $\dchi(\GG)$, is the minimum integer $k$ for which there exists $c\in\NN$ such that every graph in $\GG$ has a $k$-colouring with defect $c$. If there is no such integer $k$, then $\GG$ has \defn{unbounded} defective chromatic number. Every colouring of a graph with clustering $c$ has defect $c-1$. Thus $\dchi(\GG)\leq \cchi(\GG) \leq\bigchi(\GG)$ for every class $\GG$. Obviously, for all graphs $G$ and $H$, $$\max\{ \chi(G), \chi(H) \} \leq \chi( G \boxtimes H ) \leq \chi( G) \, \chi( H ) .$$ The upper bound is tight if $G$ and $H$ are complete graphs, for example. The lower bound is tight if $G$ or $H$ has no edges. 
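The definitions above are easy to experiment with computationally. The following Python sketch (an illustration only; the graph representation and helper names are ad hoc and not part of the paper) encodes the adjacency rules of the three products and checks one small instance of the bounds above.
\begin{verbatim}
from itertools import product

def cartesian_edge(EG, EH, a, b):
    # (v,x)~(w,y) in the cartesian product: v=w and xy in E(H), or x=y and vw in E(G)
    (v, x), (w, y) = a, b
    return (v == w and frozenset((x, y)) in EH) or (x == y and frozenset((v, w)) in EG)

def direct_edge(EG, EH, a, b):
    # (v,x)~(w,y) in the direct product: vw in E(G) and xy in E(H)
    (v, x), (w, y) = a, b
    return frozenset((v, w)) in EG and frozenset((x, y)) in EH

def strong_edge(EG, EH, a, b):
    # the strong product is the union of the cartesian and the direct products
    return cartesian_edge(EG, EH, a, b) or direct_edge(EG, EH, a, b)

def strong_product(VG, EG, VH, EH):
    # vertex and edge sets of G boxtimes H (edges stored as 2-element frozensets)
    V = list(product(VG, VH))
    E = {frozenset((a, b)) for a in V for b in V
         if a != b and strong_edge(EG, EH, a, b)}
    return V, E

# Example: P_3 boxtimes P_3 is the 3 x 3 king graph, with 9 vertices and 20 edges.
VP, EP = [0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))}
V, E = strong_product(VP, EP, VP, EP)
assert len(V) == 9 and len(E) == 20
\end{verbatim}
For instance, replacing the two paths by triangles produces $K_3\boxtimes K_3\cong K_9$, attaining the upper bound $\chi(G)\,\chi(H)$.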
However, \citet{Vesztergombi78} proved that $\chi(G\boxtimes K_2)\geq \chi(G)+2$, implying that if $E(H)\neq\emptyset$ then $$ \chi( G \boxtimes H ) \geq \chi(G) + 2 .$$ More generally, \citet{KM94} proved that $$\chi( G \boxtimes H ) \geq \chi(G) + 2\omega(H) -2 .$$ \citet{Zerovnik06} studied the chromatic numbers of the strong product of odd cycles. A classical result of \citet{Sabidussi57} states that for any graphs $G$ and $H$, $\chi(G\Box H)=\max\{\chi(G),\chi(H)\}$, while a famous conjecture of Hedetniemi stated that $\chi(G\times H)=\min\{\chi(G),\chi(H)\}$. This conjecture was recently disproved by \citet{Shitov19}. The remainder of the paper focuses on the strong product $\boxtimes$ of graphs rather than $\Box$ and $\times$. \subsection{Subgraphs of Strong Products} \label{SubgraphsStrongProducts} The study of colourings of strong products is partially motivated from the following results that show that natural classes of graphs are subgraphs of certain strong products. Thus, colouring results for the product imply an analogous result for the original class. Later in the paper we use \cref{DegreeTreewidthStructure}, while \cref{KL,PlanarPartition} are not used. Nevertheless, these results provide further motivation for studying colouring of strong graph products, since they show that several classes with a complicated structure can be expressed as subgraphs of the strong product of significantly simpler graph classes. For a graph $G$ and an integer $d\ge 1$, let $\boxtimes_dG$ denote the $d$-fold strong product $G\boxtimes \dots \boxtimes G$. \begin{thm}[\citep{KL07}] \label{KL} For every $c\in \NN$ there exists $d\in O(c\log c)$, such that if $G$ is a graph with $|\{w\in V(G):\dist(v,w)\leq r\}| \leq r^c$ for every vertex $v\in V(G)$ and integer $r\geq 2$, then $G \subseteq \boxtimes_d P$. \end{thm} \begin{thm}[\citep{DO95,Wood09,DW22}] \label{DegreeTreewidthStructure} Every graph with maximum degree $\Delta\in\mathbb{N}^+$ and treewidth less than $k\in\mathbb{N}^+$ is a subgraph of $T\boxtimes K_{20k\Delta}$ for some tree $T$ with maximum degree at most $20k\Delta^2$. \end{thm} \begin{thm}[\citep{DJMMUW20,UWY22}] \label{PlanarPartition} Every planar graph is a subgraph of: \begin{compactenum}[(a)] \item $H\boxtimes P$ for some planar graph $H$ of treewidth at most $6$ and for some path $P$; \item $H\boxtimes P\boxtimes K_3$ for some planar graph $H$ of treewidth at most $3$ and for some path $P$. \end{compactenum} \end{thm} The interested reader is referred to \citep{DJMMUW20,DMW,DHHW22,HW21b,ISW,UTW} for extensions of this result to graphs of bounded genus, and other natural generalisations of planar graphs. \subsection{Hex Lemma}\label{sec:hex} The famous Hex Lemma says that the game of Hex cannot end in a draw; see \citep{HT19} for an account of the rich history of this game. As illustrated in \cref{Hex}, the Hex Lemma is equivalent to saying that in every 2-colouring of the vertices of the $n\times n$ triangulated grid, there is a monochromatic path from one side to the opposite side. \begin{figure}[!h] \centering\includegraphics{HexGraphPlayed} \caption{A Hex game.} \label{Hex} \end{figure} This result generalises to higher dimensions as follows. Let $G_n^d$ be the graph with vertex-set $\{1,\dots,n\}^d$, where distinct vertices $(v_1,\dots,v_d)$ and $(w_1,\dots,w_d)$ are adjacent in $G^d_n$ whenever $w_i\in\{v_i,v_i+1\}$ for each $i\in\{1,\dots,d\}$, or $v_i\in\{w_i,w_i+1\}$ for each $i\in\{1,\dots,d\}$. 
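The adjacency rule defining $G^d_n$ can be checked mechanically. The following Python sketch (an illustration only; names and indexing are ad hoc) encodes it and verifies, for small $n$ and $d$, that the layered colouring described in the next paragraph is proper.
\begin{verbatim}
from itertools import product

def hex_adjacent(v, w):
    # distinct v, w in {1,...,n}^d are adjacent in G_n^d iff w_i in {v_i, v_i+1}
    # for every i, or v_i in {w_i, w_i+1} for every i
    if v == w:
        return False
    return (all(wi - vi in (0, 1) for vi, wi in zip(v, w)) or
            all(vi - wi in (0, 1) for vi, wi in zip(v, w)))

# Check that colouring each vertex v by (sum of its coordinates) mod (d+1)
# is a proper colouring of G_n^d, for a few small grids.
for d, n in [(2, 4), (3, 3)]:
    vertices = list(product(range(1, n + 1), repeat=d))
    for v in vertices:
        for w in vertices:
            if hex_adjacent(v, w):
                assert sum(v) % (d + 1) != sum(w) % (d + 1)
\end{verbatim}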
Note that if each vertex $(v_1,\dots,v_d)$ is coloured $(\sum_i v_i)\bmod{(d+1)}$, then adjacent vertices $(v_1,\dots,v_d)$ and $(w_1,\dots,w_d)$ are assigned distinct colours, since $|(\sum_i v_i)-(\sum_i w_i)| \leq d$. Thus $\bigchi(G_n^d)\leq d+1$. In fact, $\bigchi(G^d_n)=d+1$ since $\{(v_1,\dots,v_d),(v_1+1,v_2,\dots,v_d),(v_1+1,v_2+1,v_3,\dots,v_d),\dots,(v_1+1,v_2+1,\dots,v_d+1)\}$ is a $(d+1)$-clique. The $d$-dimensional Hex Lemma provides a stronger lower bound: in every $d$-colouring of $G_n^d$ there is a monochromatic path from one `side' of $G^d_n$ to the opposite side \citep{Gale79}. Thus $$\cchi(\{ G^d_n: n \in \NN\})=d+1.$$ See \citep{Gale79,LMST08,MP08,BDN17,MP08,Matdinov13,Karasev13} for related results. For example, \citet{Gale79} showed that this theorem is equivalent to the Brouwer Fixed Point Theorem. These results are related to clustered colourings of strong products, as we now explain. Let $P_n$ denote the $n$-vertex path and $\boxtimes_d P_{n}$ be the $d$-dimensional grid $P_n\boxtimes \cdots \boxtimes P_n$. Then $G_n^d$ is a subgraph of $\boxtimes_d P_{n}$. So $$\cchi(\{ \boxtimes_d P_{n} : n \in\NN\})\geq \cchi(\{ G^d_n : n \in\NN\}) = d+1.$$ A corollary of our main result is that equality holds here. In particular, \cref{HexGrid} shows that there is a $(d+1)$-colouring of $P^{\boxtimes d}_{n}$ with clustering $d!$. Thus $$\cchi(\{ \boxtimes_d P_{n} : n \in\NN\}) = d+1.$$ \subsection{Asymptotic Dimension}\label{sec:asdim} Given a graph $G$ and an integer $\ell\ge 1$, $G^\ell$ is the graph obtained from $G$ by adding an edge between each pair of distinct vertices $u,v$ at distance at most $\ell$ in $G$. Note that $G^1=G$. We say that a subset $S$ of vertices of a graph $G$ has \defn{weak diameter at most $d$} in $G$ if any two vertices of $S$ are at distance at most $d$ in $G$. The asymptotic dimension of a metric space was introduced by \citet{Gro93} in the context of geometric group theory. For graph classes (and their shortest paths metric), it can be defined as follows~\citep{BBEGLPS}: the \defn{asymptotic dimension } of a graph class $\mathcal{F}$ is the minimum $m\in\NN_0$ for which there is a function $f: \NN \rightarrow \NN$ such that for every $G \in \mathcal{F}$ and $\ell \in {\mathbb N}$, $G^\ell$ has an $(m+1)$-colouring in which each monochromatic component has weak diameter at most $f(\ell)$ in $G^\ell$. (If no such integer $m$ exists, the asymptotic dimension of $\mathcal{F}$ is said to be $\infty$). Taking $\ell= 1$, we see that graphs from any graph class $\mathcal{F}$ of asymptotic dimension at most $m$ have an $(m+1)$-colouring in which each monochromatic component has bounded weak diameter. If, in addition, the graphs in $\mathcal{F}$ have bounded maximum degree, then all graphs in $\mathcal{F}$ have $(m+1)$-colourings with bounded clustering \citep{BBEGLPS}, implying $\cchi(\mathcal{F})\leq m+1$. It is well-known that the class of $d$-dimensional grids (with or without diagonals) has asymptotic dimension $d$ (see~\citep{Gro93}), and since they also have bounded degree, it directly follows from the remarks above that $d$-dimensional grids have $(d+1)$-colourings with bounded clustering. An important problem is to bound the dimension of the product of topological or metric spaces as a function of their dimensions. 
It follows from the work of \citet{BD06} and \citet{BDLM08} that if $\mathcal{F}_1$ and $\mathcal{F}_2$ are classes of asymptotic dimension $m_1$ and $m_2$, respectively, then the class $\mathcal{F}_1 \boxtimes \mathcal{F}_2 := \{G_1\boxtimes G_2\,:\, G_1\in\mathcal{F}_1, G_2\in \mathcal{F}_2\}$ has asymptotic dimension at most $m_1+m_2$. For example, that $d$-dimensional grids have asymptotic dimension at most $d$ can be deduced from this product theorem by induction, using the fact that the family of paths has asymptotic dimension 1. In particular, if two classes $\mathcal{F}_1$ and $\mathcal{F}_2$ have asymptotic dimension $m_1$ and $m_2$, respectively, and have uniformly bounded maximum degree, then the graphs in $ \mathcal{F}_1 \boxtimes \mathcal{F}_2$ have $(m_1+m_2+1)$-colourings with bounded clustering. Since graphs of bounded treewidth have asymptotic dimension at most 1 \citep{BBEGLPS}, this implies the following. \begin{restatable}{thm}{DegreeTreewidthClustered} \label{DegreeTreewidthClustered} If $G_1,\dots,G_d$ are graphs with treewidth at most $k\in\NN$ and maximum degree at most $\Delta\in\NN$, then $G_1\boxtimes \dots\boxtimes G_d$ is $(d+1)$-colourable with clustering at most some function $c(d,\Delta,k)$. \end{restatable} Similarly, using the fact that graphs excluding some fixed minor have asymptotic dimension at most 2 \citep{BBEGLPS}, we have the following. \begin{thm} \label{DegreeMinorClustered} Let $H$ be a graph. If $G_1,\dots,G_d$ are $H$-minor free graphs with maximum degree at most $\Delta\in\NN$, then $G_1\boxtimes \dots\boxtimes G_d$ is $(2d+1)$-colourable with clustering at most some function $c(d,\Delta,H)$. \end{thm} The conditions that $\mathcal{F}_1$ and $\mathcal{F}_2$ have bounded asymptotic dimension and degree are quite strong, and instead we would like to obtain conditions only based on the fact that $\mathcal{F}_1$ and $\mathcal{F}_2$ are themselves colourable with bounded clustering with few colours, and if possible, without the maximum degree assumption. \subsection{Fractional Colouring}\label{sec:frac} Let $G$ be a graph. For $p,q\in\NN$ with $p\geq q$, a \defn{$(p\!:\!q)$-colouring} of $G$ is a function $f:V(G)\to \binom{C}{q}$ for some set $C$ with $|C|=p$. That is, each vertex is assigned a set of $q$ colours out of a palette of $p$ colours. For $t\in\RR$, a \defn{fractional $t$-colouring} is a $(p\!:\!q)$-colouring for some $p,q\in\NN$ with $\frac{p}{q}\leq t$. A \defn{$(p\!:\!q)$-colouring} $f$ of $G$ is \defn{proper} if $f(v)\cap f(w)=\emptyset$ for each edge $vw\in E(G)$. The \defn{fractional chromatic number} of $G$ is $$\fchi(G) := \inf\left\{ t \in\RR \,: \, \text{$G$ has a proper fractional $t$-colouring} \right\}.$$ The fractional chromatic number is widely studied; see the textbook \citep{SU97}, which includes a proof of the fundamental property that $\fchi(G)\in\QQ$. The next result relates $\fchi(G)$ and $\alpha(G)$, the size of the largest independent set in $G$. \begin{lem}[\citep{SU97}] \label{fchiAlphaVertexTransitive} For every graph $G$, $$\fchi(G) \, \alpha(G) \geq |V(G)|,$$ with equality if $G$ is vertex-transitive. \end{lem} The following well-known observation shows an immediate connection between fractional colouring and strong products. \begin{obs} \label{pqProduct} A graph $G$ is properly $(p\!:\!q)$-colourable if and only if $G\boxtimes K_q$ is properly $p$-colourable. 
\end{obs} \cref{pqProduct} is normally stated in terms of the lexicographic product $G[K_q]$, which equals $G\boxtimes K_q$ (although $G[H]\neq G\boxtimes H$ for other graphs $H$). See \citep{KY02,Klavzar98} for results on the fractional chromatic number and the lexicographic product. Fractional 1-defective colourings were first studied by \citet{FS15}; see \citep{GX16,MOS11,Klostermeyer02} for related results. Fractional clustered colourings were introduced by \citet{DS20} and subsequently studied by \citet{NSW22} and \citet{LW5}. The notions of clustered and defective colourings introduced in \cref{sec:def} naturally extend to fractional colouring as follows. For a $(p\!:\!q)$-colouring $f:V(G)\to \binom{C}{q}$ of $G$ and for each colour $\alpha\in C$, the subgraph $G[ \{ v \in V(G): \alpha \in f(v) \} ]$ is called an \defn{$f$-monochromatic subgraph} or \defn{monochromatic subgraph} when $f$ is clear from the context. A connected component of an $f$-monochromatic subgraph is called an \defn{$f$-monochromatic component} or \defn{monochromatic component}. Note that $f$ is proper if and only if each $f$-monochromatic component has exactly one vertex. A \defn{$(p\!:\!q)$-colouring} has \defn{defect} $c$ if every monochromatic subgraph has maximum degree at most $c$. A $(p\!:\!q)$-colouring has \defn{clustering} $c$ if every monochromatic component has at most $c$ vertices. The \defn{fractional clustered chromatic number $\cfchi(\GG)$} of a graph class $\GG$ is the infimum of all $t\in\RR$ such that, for some $c\in\NN$, every graph in $\GG$ is fractionally $t$-colourable with clustering $c$. The \defn{fractional defective chromatic number $\dfchi(\GG)$} of a graph class $\GG$ is the infimum of $t>0$ such that, for some $c\in\NN$, every graph in $\GG$ is fractionally $t$-colourable with defect $c$. \citet{Dvorak16} proved that every hereditary class admitting strongly sublinear separators and bounded maximum degree has fractional clustered chromatic number 1 (see also \cite{DS20}). Using this result, \citet{LW5} proved that for every hereditary graph class $\GG$ admitting strongly sublinear separators, $$\cfchi(\GG)=\dfchi(\GG).$$ \citet{LW5} also proved that for every monotone graph class $\GG$ admitting strongly sublinear separators and with $K_{s,t}\not\in\GG$, $$\cfchi(\GG) = \dfchi(\GG) \leq \dchi(\GG) \leq s.$$ \citet{NSW22} determined $\dfchi$ and $\cfchi$ for every minor-closed class. In particular, for every proper minor-closed class $\GG$, $$\dfchi(\GG)=\cfchi(\GG)=\min\{k\in\NN:\exists n\,C_{k,n}\not\in\GG\},$$ where $C_{k,n}$ is a specific graph (see \cref{sec:ptc} for the definition of $C_{k,n}$ and for more details about this result). As an example, say $\GG_t$ is the class of $K_t$-minor-free graphs. \citet{Hadwiger43} famously conjectured that $\bigchi(\GG_t)=t-1$. It is even open whether $\fchi(\GG_t)=t-1$. The best known upper bound is $\fchi(\GG_t) \leq 2t-2$ due to \citet{RS98}. \citet{EKKOS15} proved that $$\dchi(\GG_t)=t-1.$$ It is open whether $\cchi(\GG_t)=t-1$. The best known upper bound is $\cchi(\GG_t)\leq t+1$ due to \citet{LW2}. \citet{DN17} have announced that a forthcoming paper will prove that $\cchi(\GG_t)=t-1$. The above result of \citet{NSW22} implies that $$\dfchi(\GG_t)=\cfchi(\GG_t)=t-1.$$ As another example, the result of \citet{NSW22} implies that the class of graphs embeddable in any fixed surface has fractional clustered chromatic number and fractional defective chromatic number 3.
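As a concrete illustration of these definitions, the following Python sketch (illustrative only; it is not part of the argument) verifies a proper $(5\!:\!2)$-colouring of $C_5$, witnessing $\fchi(C_5)\leq\tfrac{5}{2}$, and checks the corresponding instance of \cref{pqProduct}: the same data yields a proper $5$-colouring of $C_5\boxtimes K_2$.
\begin{verbatim}
# A proper (5:2)-colouring of the 5-cycle C5, witnessing fractional chromatic
# number at most 5/2: vertex i receives the pair of colours {2i, 2i+1} mod 5.
def c5_colour(i):
    return {(2 * i) % 5, (2 * i + 1) % 5}

edges = [(i, (i + 1) % 5) for i in range(5)]
assert all(c5_colour(u).isdisjoint(c5_colour(v)) for u, v in edges)

# Observation pqProduct in this instance: splitting each pair of colours
# between the two copies of a vertex gives a proper 5-colouring of C5 x K_2
# (strong product).  In K_2 any two copy indices are compatible, so only the
# C5 coordinate matters for adjacency.
colour = {(i, a): sorted(c5_colour(i))[a] for i in range(5) for a in (0, 1)}
V = list(colour)
Ecyc = {frozenset(e) for e in edges}
def adj(x, y):
    (u, a), (v, b) = x, y
    return x != y and (u == v or frozenset((u, v)) in Ecyc)
assert all(colour[x] != colour[y] for x in V for y in V if adj(x, y))
\end{verbatim}
Scaling up, \cref{FractionalProduct} below combines two such colourings into a $(25\!:\!4)$-colouring of $C_5\boxtimes C_5$, whereas $\fchi(C_5\boxtimes C_5)=5$, as discussed in \cref{ProductColourings}.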
\subsection{Site percolation}\label{sec:perco} There is an interesting connection between percolation and fractional clustered colouring. Consider a graph $G$ and a real number $x>0$, and let $S$ be a random subset of vertices of $G$, such that $\Pr(v\in S)\ge x$ for every $v\in V(G)$. Then $S$ is a \defn{site percolation} of density at least $x$. If the events $(v\in S)_{v\in V(G)}$ are independent and $\Pr(v\in S)= x$ for every $v\in V(G)$, then $S$ is called a \defn{Bernoulli site percolation}, but in general the events $(v\in S)_{v\in V(G)}$ can be dependent. Each connected component of $G[S]$ is called a \defn{cluster} of $S$, and $S$ has \defn{bounded clustering} (for a family of graphs $G$) if all clusters have bounded size. An important problem in percolation theory is to understand when $S$ has finite clusters (when $G$ is infinite), or when $S$ has bounded clustering, independent of the size of $G$ (when $G$ is finite). Assume that all clusters of $S$ have bounded size almost surely. Then, by discarding the vanishing proportion of sets of unbounded clustering in the support of $S$, we obtain a probability distribution over the subsets of vertices of bounded clustering in $G$, such that each $v\in V(G)$ is in a random subset (according to the distribution) with probability at least $x-\epsilon$, for any $\epsilon>0$. If all graphs $G$ in some class $\GG$ satisfy this property (with uniformly bounded clustering), this implies that $\cfchi(\GG)\le \tfrac1{x-\epsilon}$, for any $\epsilon>0$, and thus $\cfchi(\GG)\le \tfrac1{x}$. Conversely, if a class $\GG$ satisfies $\cfchi(\GG)\le \tfrac1{x}$, then this clearly gives a site percolation of bounded clustering for $\GG$, with density at least $x$. As an example, \citet{CGHV15} recently proved that in any cubic graph of sufficiently large (but constant) girth, there is a percolation of density at least $0.534$ in which all clusters are bounded almost surely. It follows that for this class of graphs, $\cfchi(\GG)\le \tfrac1{0.534}\le 1.873$. Note that percolation in finite-dimensional lattices (and in particular the critical probability at which an infinite cluster appears) is a well-studied topic in probability theory. Finite-dimensional lattices themselves are easily expressed as a strong product of paths.
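The following Monte Carlo sketch (an editorial illustration with arbitrary parameters; it makes no claim beyond the sampled instance) makes the connection concrete: it samples a Bernoulli site percolation of density $x$ on the two-dimensional grid $P_n\boxtimes P_n$ and reports the largest cluster, which is exactly the quantity that a fractional clustered colouring must keep bounded.
\begin{verbatim}
import random
from itertools import product

def largest_cluster(n, x, seed=0):
    # Sample a Bernoulli site percolation of density x on P_n boxtimes P_n
    # (the n x n king graph) and return the size of its largest cluster.
    rng = random.Random(seed)
    open_sites = {v for v in product(range(n), repeat=2) if rng.random() < x}
    seen, best = set(), 0
    for start in open_sites:
        if start in seen:
            continue
        # explore the cluster of 'start' with a simple depth-first search;
        # neighbours are the (up to eight) king moves
        comp, stack = 0, [start]
        seen.add(start)
        while stack:
            i, j = stack.pop()
            comp += 1
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    w = (i + di, j + dj)
                    if w != (i, j) and w in open_sites and w not in seen:
                        seen.add(w)
                        stack.append(w)
        best = max(best, comp)
    return best

# Illustrative run only: at this density the largest cluster is typically
# tiny compared with the grid.
print(largest_cluster(n=200, x=0.2, seed=1))
\end{verbatim}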
\begin{obs} \label{ShannonReformulation} For any graph $G$, $$\Theta(G)\ge \frac{|V(G)|}{\inf_{d } (\fchi (\boxtimes_d G))^{1/d}}=\frac{|V(G)|}{\lim_{d\rightarrow \infty } (\fchi (\boxtimes_d G))^{1/d}},$$ with equality if $G$ is vertex-transitive. \end{obs} As a consequence, results on the Shannon capacity of graphs imply lower bounds (or exact bounds) on the fractional chromatic number of strong products of graphs. \subsection{Our results}\label{sec:results} We start by recalling basic results on the chromatic number of the product $G_1\boxtimes \cdots\boxtimes G_d$ in \cref{ProductColourings}: the chromatic number of the product is at most the product of the chromatic numbers. We show that the same holds for the fractional version and for the clustered version (for graph classes). While complete graphs show that the result on proper colouring is tight in general, other constructions are needed for (fractional) clustered chromatic number. In \cref{sec:ptc}, we show that for products of tree-closures, the (fractional) defective and clustered chromatic number is equal to the product of the (fractional) defective and clustered chromatic numbers. In \cref{sec:consistent}, we introduce consistent (fractional) colouring and use it to combine any proper $(p\!:\!q)$-colouring of a graph $G$ with a $(q\!:\!r)$-colouring of a graph $H$ with bounded clustering into a $(p\!:\!r)$-colouring of $G\boxtimes H$ of bounded clustering. Using consistent $(k+1\!:\!k)$-colourings of paths (and more generally bounded degree trees) with bounded clustering, we prove general results on the clustered chromatic number of the product of a graph with a path (and more generally a bounded degree tree, or a graph of bounded treewidth and maximum degree). We also study the fractional clustered chromatic number of graphs of bounded degree, showing that the best known lower bound for the non-fractional case also holds for the fractional relaxation. In \cref{sec:param}, we prove that many of our results on the clustered colouring of graph products can be extended to the broader setting of general graph parameters. \section{Basics} \subsection{Product Colourings} \label{ProductColourings} We start with the following folklore result about proper colourings of strong products; see the informative survey about proper colouring of graph products by \citet{Klavzar96}. \begin{lem} \label{ChromaticNumber} For all graphs $G_1,\dots,G_d$, $$\chi( G_1\boxtimes \cdots\boxtimes G_d) \leq \prod_{i=1}^d \chi(G_i).$$ \end{lem} \begin{proof} Let $\phi_i$ be a $\chi(G_i)$-colouring of $G_i$. Assign each vertex $(v_1,\dots,v_d)$ of $G_1\boxtimes \cdots\boxtimes G_d$ the colour $(\phi_1(v_1),\dots,\phi_d(v_d))$. If $(v_1,\dots,v_d)(w_1,\dots,w_d)$ is an edge of $G_1\boxtimes \cdots\boxtimes G_d$, then $v_iw_i$ is an edge of $G_i$ for some $i$, implying $\phi_i(v_i)\neq \phi_i(w_i)$ and $G_1\boxtimes \cdots\boxtimes G_d$ is properly coloured with $\prod_{i=1}^d \chi(G_i)$ colours. \end{proof} We have the following similar result for fractional colouring. \begin{lem} \label{FractionalProduct} For all graphs $G_1,\dots,G_d$, $$\fchi( G_1\boxtimes \cdots\boxtimes G_d) \leq \prod_{i=1}^d \fchi(G_i).$$ \end{lem} \begin{proof} $\fchi(G_i)=\frac{p_i}{q_i}$ for some $p_i,q_i\in\NN$. By \cref{pqProduct}, $\chi(G_i\boxtimes K_{q_i} )\leq p_i$. Let $P:=\prod_i p_i$ and $Q:=\prod_i q_i$. 
By \cref{ChromaticNumber}, $\chi( ( G_1\boxtimes K_{q_1} ) \boxtimes \cdots \boxtimes ( G_d \boxtimes K_{q_d} ) ) \leq P.$ Since $( G_1\boxtimes K_{q_1} ) \boxtimes \cdots \boxtimes ( G_d \boxtimes K_{q_d} ) \cong ( G_1\boxtimes \cdots \boxtimes G_d) \boxtimes K_Q$, we have $\chi( ( G_1\boxtimes \cdots \boxtimes G_d ) \boxtimes K_Q ) \leq P$. By \cref{pqProduct} again, $G_1\boxtimes \cdots \boxtimes G_d $ is $(P\!:\!Q)$-colourable, and $\fchi( G_1\boxtimes \cdots \boxtimes G_d ) \leq P/Q = \prod_{i=1}^d \fchi(G_i)$. \end{proof} Equality holds in \cref{FractionalProduct} when $G_1,\dots,G_d$ are complete graphs, for example. However, equality does not always hold. For example, if $G=H=C_5$ then $\fchi(C_5)=\frac{5}{2}$ but $\fchi(C_5\boxtimes C_5)\leq \chi(C_5 \boxtimes C_5) \leq 5$, where a proper 5-colouring of $C_5\boxtimes C_5$ is shown in \cref{C5C5}. In fact, a simple case-analysis shows that $\alpha(C_5\boxtimes C_5)=5$, implying that $\fchi(C_5\boxtimes C_5)=\bigchi(C_5\boxtimes C_5)=5$ (since $\alpha(G)\, \fchi(G) \geq |V(G)|$ for every graph $G$). Note that using \cref{ShannonReformulation}, the classical result of \citet{Lovasz79} stating that $\Theta(C_5)=\sqrt{5}$ can be rephrased as $\fchi(\boxtimes_d C_5)=5^{d/2}$ for every even $d\ge 2$. \begin{figure}[!h] \centering \includegraphics{C5C5} \caption{A proper 5-colouring of $C_5\boxtimes C_5$} \label{C5C5} \end{figure} By \cref{FractionalProduct}, for all graphs $G_1$ and $G_2$, $$\fchi(G_1 \boxtimes G_2) \le \fchi(G_1) \fchi(G_2),$$ with equality when $G_1$ or $G_2$ is a complete graph~\citep{KY02}\footnote{\citet{KY02} proved that $\fchi(G_1 \circ G_2) = \fchi(G_1) \fchi(G_2)$ for all graphs $G_1$ and $G_2$, where $\circ$ denotes the lexicographic product. Since $G\boxtimes K_t = G \circ K_t$, we have $\fchi(G \boxtimes K_t) = t\, \fchi(G)$ for every graph $G$.}. It is tempting therefore to hope for an analogous lower bound on $\fchi(G_1 \boxtimes G_2)$ in terms of $\fchi(G_1)$ and $\fchi(G_2)$ for all graphs $G_1$ and $G_2$. The following lemma dashes that hope. \begin{lem} For infinitely many $n\in\NN$ there is an $n$-vertex graph $G$ such that $$\fchi(G \boxtimes G) \le \frac{ (256+o(1))\log^4n}{n} \, \fchi(G)^2.$$ \end{lem} \begin{proof} \citet[Theorems~5~and~6]{AO95} proved that for infinitely many $n\in\NN$, there is a (Cayley) graph $G$ on $n$ vertices with $\chi(G \boxtimes G)\le n$ and $\alpha(G)\leq (16+o(1))\log^2 n$. By \cref{fchiAlphaVertexTransitive}, \begin{align*} \fchi(G \boxtimes G) \leq \chi(G\boxtimes G) \leq n \leq\; & \frac{ (256+o(1))\log^4n}{n} \left(\frac{n}{\alpha(G)}\right)^2 \\ \le\; & \frac{ (256+o(1))\log^4n}{n}\,\fchi(G)^2.\qedhere \end{align*} \end{proof} The next lemma generalises \cref{ChromaticNumber,FractionalProduct}. \begin{lem} \label{ClusteredProductColouring} Let $G_1,\dots,G_d$ be graphs, such that $G_i$ is $(p_i\!:\!q_i)$-colourable with clustering $c_i$, for each $i\in[1,d]$. Then $G:= G_1\boxtimes \cdots\boxtimes G_d$ is $(\prod_i p_i:\prod_i q_i)$-colourable with clustering $\prod_ic_i$. \end{lem} \begin{proof} For $i\in[1,d]$, let $\phi_i$ be a $(p_i\!:\!q_i)$-colouring of $G_i$ with clustering $c_i$. Let $\phi$ be the colouring of $G$, where each vertex $v=(v_1,\dots,v_d)$ of $G$ is coloured $\phi(v) := \{ (a_1,\dots,a_d): a_i \in \phi_i(v_i), i \in [1,d] \}$. So each vertex of $G$ is assigned a set of $\prod_i q_i$ colours, and there are $\prod_i p_i$ colours in total. Let $X:= X_1\boxtimes\dots\boxtimes X_d$, where each $X_i$ is a monochromatic component of $G_i$ using colour $a_i$.
Then $X$ is a monochromatic connected induced subgraph of $G$ using colour $(a_1,\dots,a_d)$. Consider any edge $(v_1,\dots,v_d)(w_1,\dots,w_d)$ of $G$ with $(v_1,\dots,v_d)\in V(X)$ and $(w_1,\dots,w_d)\not\in V(X)$. Thus $v_iw_i\in E(G_i)$ and $w_i\not\in V(X_i)$ for some $i\in\{1,\dots,d\}$. Hence $a_i\not\in \phi_i(w_i)$, implying $(a_1,\dots,a_d)\not\in \phi(w)$. Hence $X$ is a monochromatic component of $G$ using colour $(a_1,\dots,a_d)$. As $X$ contains at most $\prod_ic_i$ vertices, it follows that $\phi$ has clustering at most $\prod_ic_i$. \end{proof} \cref{ClusteredProductColouring} implies the following analogues of \cref{ChromaticNumber} for clustered colouring. \begin{thm} \label{ClusteredProductColouringClasses} For all graph classes $\GG_1,\dots,\GG_d$ $$\cchi( \GG_1 \boxtimes \dots\boxtimes \GG_d) \leq \prod_{i=1}^d \cchi(\GG_i).$$ \end{thm} It is interesting to consider when the naive upper bound in \cref{ClusteredProductColouringClasses} is tight. \cref{FracClosureProduct} below shows that this result is tight for products of closures of high-degree trees. Finally, note that \cref{ClusteredProductColouring} implies the following basic observation. \begin{lem} \label{BlowUp} If a graph $G$ is $k$-colourable with clustering $c$, then $G\boxtimes K_t$ is $k$-colourable with clustering $ct$. \end{lem} More generally, \cref{ClusteredProductColouringClasses} implies: \begin{lem} \label{FractionalBlowUp} If a graph $G$ is $(p\!:\!q)$-colourable with clustering $c$, then $G\boxtimes K_t$ is $(p\!:\!q)$-colourable with clustering $ct$. \end{lem} \section{Products of Tree-Closures}\label{sec:ptc} This section presents examples of graphs that show that the naive upper bound in \cref{ClusteredProductColouringClasses} is tight. The \emph{depth} of a node in a rooted tree is its distance to the root plus one. For $k,n\in\NN$, let $T_{k,n}$ be the rooted tree in which every leaf is at depth $k$, and every non-leaf has $n$ children. Let $C_{k,n}$ be the graph obtained from $T_{k,n}$ by adding an edge between every ancestor and descendant (called the \defn{closure} of $T_{k,n}$). Colouring each vertex by its distance from the root gives a $k$-colouring of $C_{k,n}$, and any root-leaf path in $C_{k,n}$ induces a $k$-clique. So $\chi(C_{k,n})=k$. The class $\CC_k:= \{ C_{k,n} : n \in \NN\}$ is important for defective and clustered colouring, and is often called the `standard' example. It is well-known and easily proved (see \citep{WoodSurvey}) that $$\dchi(\CC_k)=\cchi(\CC_k)=\bigchi(\CC_k)=k.$$ \citet{NSW22} extended this result (using a result of \citet{DS20}) to the setting of defective and clustered fractional chromatic number by showing that $$\dfchi(\CC_k)= \cfchi(\CC_k) = \bigchi^f(\CC_k) = \dchi(\CC_k)=\cchi(\CC_k)=\bigchi(\CC_k)=k.$$ Here we give an elementary and self-contained proof of this result. In fact, we prove the following generalisation in terms of strong products. This shows that \cref{ClusteredProductColouringClasses} is tight, even for fractional colourings. \begin{thm} \label{FracClosureProduct} For all $k,d\in\NN$, if $\GG := \boxtimes_d \CC_{k}$ then $$ \dfchi(\GG) = \cfchi(\GG) = \fchi(\GG) =\dchi( \GG ) = \cchi( \GG ) = \bigchi( \GG ) =k^d.$$ \end{thm} \begin{proof} Let $K:=k^d$. It follows from the definitions that $\dfchi(\GG)\leq \dchi(\GG) \leq \chi(\GG)=K$ and $\dfchi(\GG)\leq \cfchi(\GG) \leq \cchi(\GG) \leq \chi(\GG)=K$ and $\dfchi(\GG)\leq \bigchi^f(\GG) \leq \chi(\GG)=K$. Thus it suffices to prove that $\dfchi(\GG)\geq K$. 
Recall that $\dfchi(\GG)$ is the infimum of all $t\in\RR$ such that, for some $c\in\NN$, for every $G\in\GG$ there exist $p,q\in\NN$ such that $p\leq tq$ and $G$ is $(p\!:\!q)$-colourable with defect $c$. Suppose for the sake of contradiction that $\dfchi(\GG)\le K-\epsilon$, for some $\epsilon>0$. It follows that there exists $c\in\NN$ such that for every integer $n$ there exist $p,q\in\NN$ such that $p\leq (K-\epsilon)q$ and $\boxtimes_d C_{k,n}$ is $(p\!:\!q)$-colourable with defect $c$. For $n\in \NN$, consider the graph $G=\boxtimes_d C_{k,n}$ and a $(p\!:\!q)$-colouring of $G$ with defect $c$ such that $p\leq (K-\epsilon)q$. We will prove that for fixed $k$, $d$, and $\epsilon$, the value of $c$ must be at least linear in $n$. Since $n$ can be chosen arbitrarily large, this immediately yields the desired contradiction. Each vertex $x$ of $G$ is a $d$-tuple $(x_1,\ldots,x_d)$, where each $x_i$ is a vertex of $C_{k,n}$. Whenever we mention ancestors, descendants, leaves and the depth of vertices in $C_{k,n}$, these terms refer to the corresponding notions in the spanning subgraph $T_{k,n}$ of $C_{k,n}$. Note that distinct vertices $x=(x_1,\ldots,x_d)$ and $y=(y_1,\ldots,y_d)$ are adjacent in $G$ if and only if for each $i\in[d]$, $x_i$ is an ancestor of $y_i$ or $y_i$ is an ancestor of $x_i$ in $C_{k,n}$ (where we adopt the convention that every vertex is an ancestor and descendant of itself). For each $k$-tuple of non-negative integers $s=(s_1,\ldots,s_k)$ such that $\sum_{i=1}^k s_i=d$, define $V_s$ to be the set of vertices $x=(x_1,\ldots,x_d)$ of $G$ such that for each $i\in [k]$, there are precisely $s_i$ indices $j\in [d]$ such that $x_j$ has depth $i$ in $C_{k,n}$. Since $C_{k,n}$ contains $n^{i-1}$ vertices at depth $i$ (for each $i\in [k]$) and since $\binom{a}{b}\leq 2^a$, $$|V_s|={d\choose s_1}{d-s_1\choose s_2}\cdots {d-s_1-\cdots -s_{k-1}\choose s_k}\cdot n^{s_2}n^{2s_3}\cdots n^{(k-1)s_k}\le 2^{dk}\cdot n^{\sum_{i=1}^k (i-1)s_i}.$$ Let $V^*:= V_{(0,\ldots,0,d)}$; that is, $V^*$ is the set of vertices $x=(x_1,\ldots,x_d)$ of $G$ such that $x_1,\ldots,x_d$ are leaves of $C_{k,n}$. Note that $|V^*|=n^{d(k-1)}$. For each vertex $x=(x_1,\ldots,x_d)\in V^*$, let $Q(x)$ be the set of vertices $y=(y_1,\ldots,y_d)$ of $G$ such that for each $i\in [d]$, $y_i$ is an ancestor of $x_i$ in $C_{k,n}$. By the definition of $G$, $Q(x)$ is a clique of size $k^d=K$ (including $x$). Let $S_1,\ldots,S_K$ be the sets of colours assigned to the elements of $Q(x)$ (so each $S_i$ is a $q$-element subset of $[p]$, with $p\le (K-\epsilon)q$). We claim that there are indices $i<j$ such that $|S_i\cap S_j|\ge \tfrac{\epsilon}{K^2}\cdot q$. If not, for each $i\in[K]$, $S_i$ has at least $q-(i-1)\tfrac{\epsilon}{K^2}\cdot q$ elements not in $S_1,S_2,\ldots,S_{i-1}$. Thus $$|\bigcup_{i=1}^K S_i| \geq \sum_{i=1}^K (q-(i-1)\tfrac{\epsilon}{K^2}\cdot q ) >Kq-\tfrac{K^2}2\cdot \tfrac{\epsilon}{K^2}\cdot q >(K-\epsilon)q\ge p,$$ which is a contradiction. Thus there exist distinct vertices $u,v\in Q(x)$ whose sets of colours intersect in at least $\tfrac{\epsilon}{K^2}\cdot q$ elements. Assume without loss of generality that $u\in V_{s}$ and $v\in V_t$, and the sequence $s$ precedes $t$ in lexicographic order. Orient the edge $uv$ from $u$ to $v$ and \emph{charge} $x$ to the arc $(u,v)$.
We now bound the number of vertices $x=(x_1,\ldots,x_d)\in V^*$ that are charged to a given arc $(u,v)$, with $u=(u_1,\ldots,u_d)\in V_s$ and $v=(v_1,\ldots,v_d)\in V_t$, where $s$ precedes $t=(t_1,\ldots,t_k)$ in lexicographic order. By definition, if $x$ is charged to $(u,v)$, each $x_i$ is a descendant of $u_i$ and $v_i$ (and in particular $u_i$ and $v_i$ are also in ancestor relationship). Each vertex at depth $i$ in $C_{k,n}$ has precisely $n^{k-i}$ descendants that are leaves in $C_{k,n}$. Thus (considering only $v_1,\dots,v_d$), at most $$ \prod_{i=1}^k n^{t_i(k-i)} =n^{\sum_{i=1}^kt_i(k-i)}$$ vertices of $V^*$ are charged to $(u,v)$. We claim that there is an index $i\in [d]$, such that $u_i$ is a strict descendant of $v_i$. Since $x_i$ is a descendant of $u_i$, it follows that the bound above can be divided by a factor $n$, and thus at most $n^{-1+\sum_{i=1}^kt_i(k-i)}$ vertices of $V^*$ are charged to $(u,v)$. To prove the claim, consider first the $t_1$ indices $i\in [d]$ such that $v_i$ has depth 1. If $u_i$ has depth greater than 1 for one of these indices, then the desired property holds. So we may assume that all the corresponding $u_i$'s also have depth 1. Since $s$ precedes $t$ in lexicographic order, $s_1\le t_1$ and thus $s_1=t_1$. It follows that for each $i\in [d]$, $u_i$ has depth 1 if and only if $v_i$ has depth 1. By considering the $t_2$ indices $i$ such that $v_i$ has depth 2, the same reasoning shows that for each $i\in [d]$, $v_i$ has depth 2 if and only if $u_i$ has depth 2. By iterating this argument, for each $i\in [d]$ and $j\in [k]$, $v_i$ has depth $j$ if and only if $u_i$ has depth $j$. Since $u_i$ and $v_i$ are both ancestors of $x_i$ for each $i\in [d]$, we have that $u=v$, which is a contradiction. It follows that some $u_i$ is a strict descendant of $v_i$, and thus (as argued above) at most $n^{-1+\sum_{i=1}^kt_i(k-i)}$ vertices of $V^*$ are charged to $(u,v)$. Each vertex of $V^*$ is charged to some arc $(u,v)$, where $u$ and $v$ share at least $\tfrac{\epsilon}{K^2}\cdot q$ colours. We claim that for each vertex $v$ of $G$, there are at most $cK^2/\epsilon$ such arcs $(u,v)$. If not, the $q$ colours of $v$ must appear (with repetition) more than $\tfrac{\epsilon}{K^2}\cdot q \cdot cK^2/\epsilon=cq$ times in the neighbourhood of $v$. By the pigeonhole principle some colour of $v$ appears more than $c$ times in the neighbourhood of $v$, which contradicts the assumption that the colouring has defect at most $c$. For each vertex $v\in V_t$, where $t=(t_1,\ldots,t_k)$, and for each of the at most $cK^2/\epsilon$ arcs $(u,v)$ as above, we have proved that at most $n^{-1+\sum_{i=1}^kt_i(k-i)} $ vertices of $V^*$ are charged to $(u,v)$. Since $|V_t|\le 2^{dk}\cdot n^{\sum_{i=1}^k (i-1)t_i}$ and $\sum_{i=1}^kt_i=d$, at most $$c\tfrac{K^2}{\epsilon}\cdot2^{dk}\cdot n^{\sum_{i=1}^k (i-1)t_i}\cdot n^{-1+\sum_{i=1}^kt_i(k-i)} =c\tfrac{K^2}{\epsilon}\cdot 2^{dk}\cdot n^{-1+\sum_{i=1}^k(k-1)t_i}=c\tfrac{K^2}{\epsilon}\cdot2^{dk}\cdot n^{d(k-1)-1}$$ vertices of $V^*$ are charged to an arc $(u,v)$. Since there are at most $(d+1)^k$ possible $k$-tuples of integers $t=(t_1,\ldots,t_k)$ with $\sum_{i=1}^kt_i=d$, it follows that $$n^{d(k-1)}=|V^*|\le (d+1)^k c\cdot \tfrac{K^2}{\epsilon}\cdot2^{dk}\cdot n^{d(k-1)-1} ,$$ and thus $c\ge n\cdot \tfrac{\epsilon}{k^{2d} (d+1)^k 2^{dk}}$, as desired. \end{proof} Let $\STAR$ be the class of all star graphs; that is, $\STAR:=\{K_{1,n}:n\in\NN\}$.
Since $K_{1,n} \cong C_{2,n}$, \cref{FracClosureProduct} implies: \begin{cor} \label{StarProduct} For $d\in\mathbb{N}$, let $\STAR^d$ be the class of all $d$-dimensional strong products of star graphs. Then $$\dfchi(\STAR^d) = \cfchi(\STAR^d) = \fchi(\STAR^d) = \dchi( \STAR^d) = \cchi( \STAR^d ) = \bigchi( \STAR^d ) = 2^d.$$ \end{cor} \section{Consistent colourings}\label{sec:consistent} A $(p\!:\!q)$-colouring $\alpha$ of a graph $G$ is \defn{consistent} if for each vertex $x\in V(G)$, there is an ordering $\alpha_x^1,\dots,\alpha_x^q$ of $\alpha(x)$, such that $\alpha_x^{i} \neq \alpha_y^{j}$ for each edge $xy\in E(G)$ and for all distinct $i,j\in[1,q]$. For example, the following $(4\!:\!3)$-colouring of a path is consistent: $$\begin{array}{c} 0\\ 1\\ 2\end{array}, \begin{array}{c} 0\\ 1\\ 3\end{array}, \begin{array}{c} 0\\ 2\\ 3\end{array}, \begin{array}{c} 1\\ 2\\ 3\end{array}, \begin{array}{c} 1\\ 2\\ 0\end{array}, \begin{array}{c} 1\\ 3\\ 0\end{array}, \begin{array}{c} 2\\ 3\\ 0\end{array}, \begin{array}{c} 2\\ 3\\ 1\end{array}, \begin{array}{c} 2\\ 0\\ 1\end{array}, \begin{array}{c} 3\\ 0\\ 1\end{array}, \begin{array}{c} 3\\ 0\\ 2\end{array}, \begin{array}{c} 3\\ 1\\ 2\end{array}, \begin{array}{c} 0\\ 1\\ 2\end{array},\dots$$ \begin{lem} \label{Consistent} If a graph $G$ has a consistent $(p\!:\!q)$-colouring with clustering $c_1$, and a graph $H$ has a $(q\!:\!r)$-colouring with clustering $c_2$, then $G\boxtimes H$ has a $(p\!:\!r)$-colouring with clustering $c_1c_2$. \end{lem} \begin{proof} Let $\alpha$ be a consistent $(p\!:\!q)$-colouring of $G$ with clustering $c_1$; that is, each vertex $x\in V(G)$ has colours $\alpha^1_x,\ldots,\alpha^q_x$ such that $\alpha_x^{i} \neq \alpha_y^{j}$ for each edge $xy\in E(G)$ and for all distinct $i,j\in[1,q]$. Let $\beta$ be a $(q\!:\!r)$-colouring of $H$ with clustering $c_2$, with colours from $[q]$. Colour each vertex $(x,v)$ of $G\boxtimes H$ by $\{\alpha_x^i:i\in \beta(v)\} \in \binom{[p]}{r}$. Let $Z$ be a monochromatic component of $G\boxtimes H$ defined by colour $c\in[p]$. Let $(x,v)$ be any vertex in $Z$. So $c=\alpha_x^i$ for some $i\in \beta(v)$. Let $A_x$ be the $\alpha$-component of $G$ defined by $c$ and containing $x$. Let $B_v$ be the $\beta$-component of $H$ defined by $i$ and containing $v$. Consider an edge $(x,v)(y,w)$ of $Z$. Thus $c=\alpha_y^j$ for some $j\in \beta(w)$. Since $A_x$ is an $\alpha$-component containing $x$ and ($xy\in E(G)$ or $x=y$), the vertex $y$ is also in $A_x$. If $xy\in E(G)$, then $i=j$ since $\alpha$ is consistent. Otherwise, $x=y$ and $\alpha_x^i=c=\alpha_y^j =\alpha_x^j$, again implying $i=j$. In both cases $i=j \in \beta(w)$. Since $B_v$ is a $\beta$-component defined by $i$ and containing $v$ and ($vw\in E(H)$ or $v=w$), the vertex $w$ is also in $B_v$. For every edge $(x,v)(y,w)$ of $Z$, we have shown that $y\in V(A_x)$ and $w\in V(B_v)$. Thus $A_y=A_x$ and $B_w=B_v$. Since $Z$ is connected, for all vertices $(x,v)$ and $(y,w)$ of $Z$, we have $A_x=A_y$ and $B_v=B_w$. Hence $Z\subseteq A_x\boxtimes B_v$ for any $(x,v)\in V(Z)$. Since $A_x$ is $\alpha$-monochromatic, $|A_x|\leq c_1$. Since $B_v$ is $\beta$-monochromatic, $|B_v|\leq c_2$. As $Z\subseteq A_x\boxtimes B_v$, $|Z| \leq |A_x \boxtimes B_v| \leq c_1c_2$. Hence, our colouring of $G\boxtimes H$ has clustering $c_1c_2$.
\end{proof} Every proper colouring is consistent, so \cref{Consistent} implies: \begin{cor} \label{ConsistentClustered} If a graph $G$ has a proper $(p\!:\!q)$-colouring, and a graph $H$ has a $(q\!:\!r)$-colouring with clustering $c$, then $G\boxtimes H$ has a $(p\!:\!r)$-colouring with clustering $c$. \end{cor} \begin{cor} \label{ConsistentProperProper} If a graph $G$ has a proper $(p\!:\!q)$-colouring and a graph $H$ has a proper $(q\!:\!r)$-colouring, then $G\boxtimes H$ has a proper $(p\!:\!r)$-colouring. \end{cor} \cref{ClusteredProductColouring} states that a $(p_1\!:\!q_1)$-colouring of a graph $G$ (with bounded clustering) can be combined with a $(p_2\!:\!q_2)$-colouring of a graph $H$ (with bounded clustering) to produce a $(p_1p_2\!:\!q_1q_2)$-colouring of $G\boxtimes H$ (with bounded clustering). A natural question is whether this fractional colouring can be simplified; that is, is there a $(p_3\! :\!q_3)$-colouring of $G\boxtimes H$ with $\tfrac{p_3}{q_3}\le\tfrac{p_1p_2}{q_1q_2}$ and $p_3<p_1p_2$? There is no hope to obtain such a simplification in general, since if $G$ and $H$ are complete graphs and $q_1=q_2=1$, then $G\boxtimes H$ is a complete graph on $p_1p_2$ vertices. However, \cref{ConsistentClustered} shows that when the fractional colouring of $G$ is proper and $q_1=p_2$, the resulting fractional colouring of $G\boxtimes H$ can be simplified significantly. We now show another way to simplify the $(p_1p_2\!:\!q_1q_2)$-colouring of the graph $G\boxtimes H$, by allowing a small loss on the fraction $\tfrac{p_1p_2}{q_1q_2}$. Below we only consider the case $p_1=p_2$ and $q_1=q_2$ for simplicity, but the technique can be extended to the more general case. We use the Chernoff bound: For any $0 \le t \le nx$, the probability that the binomial distribution $\mathrm{Bin}(n,x) $ with parameters $n$ and $x$ differs from its expectation $nx$ by at least $t$ satisfies \[\Pr(|\mathrm{Bin}(n,x)-nx| > t) < 2\exp(-t^2/(3nx)).\] \begin{lem} Assume that $G$ has a $(p\!:\!q)$-colouring (with bounded clustering) and $H$ has a $(p\!:\!q)$-colouring (with bounded clustering). Then for any real number $0<x\le 1$, $G\boxtimes H$ has a $\Big(xp^2+O(p\sqrt{x}):\big(xq^2-O(q^{3/2}\sqrt{x\log p})\big)\Big)$-colouring (with bounded clustering). \end{lem} \begin{proof} Let $X$ be a random subset of $[p]^2$ obtained by including each element of $[p]^2$ independently with probability $x$. By the Chernoff bound, $p':=|X|\le xp^2+O(p\sqrt{x})$ with high probability (i.e., with probability tending to $1 $ as $p\to \infty$). Consider two $q$-element subsets $S,T\subseteq [p]$. Then it follows from the Chernoff bound that for any $0\le t \le xq^2$, the probability that $S\times T$ contains less than $xq^2-t$ elements of $X$ is at most $2\exp(-t^2/(3q^2x))$. By the union bound, the probability that there exist two $q$-element subsets $S,T\subseteq [p]$ with $|(S\times T)\cap X|<xq^2-t$ is at most $p^q\cdot p^q\cdot 2\exp(-t^2/(3q^2x))<2\exp(2q\log p-t^2/(3q^2x))$. By taking $t=\Theta(q^{3/2}\sqrt{x\log p})$, this quantity is less than $\tfrac12$. It follows that there exists a subset $X\subseteq [p]^2$ of at most $p'=xp^2+O(p\sqrt{x})$ elements, such that for all $q$-elements subsets $S,T\subseteq [p]$, $|(S\times T)\cap X|\ge q':=xq^2-O(q^{3/2}\sqrt{x\log p})$. \smallskip Let $c_G$ be a $(p\!:\!q)$-colouring of $G$ with colours $[p]$ and let $c_H$ be a $(p\!:\!q)$-colouring of $H$ with colours $[p]$. 
For any pair $(i,j)\in X$ define the colour class $C_{ij}=\{u\in V(G)\,:\, i \in c_G(u)\}\times \{v\in V(H)\,:\, j\in c_H(v)\} $ in $G\boxtimes H$. Let $c$ denote the resulting colouring of $G\boxtimes H$, and observe that if $c_G$ and $c_H$ are proper, so is $c$, and if $c_G$ and $c_H$ have bounded clustering, so does $c$, since each colour class in $c$ is the cartesian product of a colour class of $G$ and a colour class of $H$. Moreover $c$ uses at most $p'$ colours, and each vertex of $G\boxtimes H$ receives at least $q'$ colours. It follows that $c$ is a $(p'\!:\!q')$-colouring of $G\boxtimes H$ (with bounded clustering), as desired. \end{proof} \subsection{Paths and Cycles}\label{sec:pathscycles} The next lemma shows how to obtain a consistent colouring of a tree with small clustering. \begin{lem} \label{EdgePartitionConsistentColouring} If a tree $T$ has an edge-partition $E_1,\dots,E_k$ such that for each $i\in[1,k]$ each component of $T-E_i$ has at most $c$ vertices, then $T$ has a consistent $(k+1\!:\!k)$-colouring with clustering $c$. \end{lem} \begin{proof} Root $T$ at some leaf vertex $r$ and orient $T$ away from $r$. We now label each vertex $v$ of $T$ by a sequence $(\ell^1_v,\dots,\ell^k_v)$ of distinct elements of $\{1,\dots,k+1\}$. First label $r$ by $(1,\dots,k)$. Now label vertices in order of non-decreasing distance from $r$. Consider an arc $vw$ with $v$ labelled $(\ell^1_v,\dots,\ell^k_v)$ and $w$ unlabelled. Say $vw\in E_i$. Let $z$ be the element of $\{1,\dots,k+1\}\setminus\{\ell^1_v,\dots,\ell^k_v\}$. Then label $w$ by $(\ell^1_v,\dots,\ell^{i-1}_v,z,\ell^{i+1}_v,\dots,\ell^k_v)$. Label every vertex in $T$ by repeated application of this rule. It is immediate that this labelling determines a consistent $(k+1\!:\!k)$-colouring of $T$. Consider a monochromatic component $X$ of $T$ determined by colour $i$. If $vw$ is an edge of $T$ with $v\in V(X)$ and $w\not\in V(X)$ and $vw\in E_j$, then $\ell^j_v=i$ and $\ell^j_w\neq i$, and the only colour not assigned to $w$ is $\ell^j_v=i$. By consistency, $\ell^j_x=i$ for every $x \in V(X)$, and for every edge $xy\in E(T)$ with $x\in V(X)$ and $y\not\in V(X)$ we have $xy\in E_j$. Thus $X$ is contained in a component of $T-E_j$ and has at most $c$ vertices. \end{proof} \begin{lem} \label{ConsistentPath} For every $k\in\NN$, every path has a consistent $(k+1\!:\!k)$-colouring with clustering $k$. \end{lem} \begin{proof} Let $e_1,\dots,e_n$ be the sequence of edges in a path $P$. For $i\in[0,k-1]$ let $E_i:=\{e_j: j \equiv i \pmod{k}\}$. So $E_0,\dots,E_{k-1}$ is an edge-partition of $P$ such that for each $i\in[0,k-1]$ each component of $P-E_i$ has at most $k$ vertices. By \cref{EdgePartitionConsistentColouring}, $P$ has a consistent $(k+1\!:\!k)$-colouring with clustering $k$. \end{proof} \cref{ConsistentPath,Consistent} imply the following result. Products with paths are of particular interest because of the results in \cref{SubgraphsStrongProducts}. \begin{thm} \label{MultiplyPath} If a graph $G$ is $k$-colourable with clustering $c$ and $P$ is a path, then $G\boxtimes P$ is $(k+1)$-colourable with clustering $ck$. \end{thm} Recall that $\boxtimes_d P_{n}$ denotes the $d$-dimensional grid $P_n\boxtimes \cdots \boxtimes P_n$. \cref{MultiplyPath} implies the upper bound in the following result. As discussed in \cref{sec:hex}, the lower bound comes from the $d$-dimensional Hex Lemma~\citep{Gale79,LMST08,MP08,BDN17,MP08,Matdinov13,Karasev13}. \begin{cor} \label{HexGrid} $\boxtimes_d P_{n}$ is $(d+1)$-colourable with clustering $d!$.
Conversely, every $d$-colouring of $\boxtimes_d P_{n}$ has a monochromatic component of size at least $n$. Hence $$\cchi(\{ \boxtimes_d P_{n} : n \in\NN\})=d+1.$$ \end{cor} Note that the corollary above can also be deduced from the following simple lemma, which does not use consistent colourings (however we need this notion to prove the stronger \cref{MultiplyPath} above, and its generalisation \cref{MultiplyTree} in \cref{sec:treestreewidth}). \begin{lem}\label{lem:comment} If $G$ is $(p\!:\!q)$-colourable with clustering $c_1$ and $H$ is $(p\!:\!r)$-colourable with clustering $c_2$, and $q+r>p$, then $G\boxtimes H$ is $(p\!:\!(q+r-p))$-colourable with clustering $c_1c_2$. \end{lem} \begin{proof} Consider a $(p\!:\!q)$-colouring of $G$ and a $(p\!:\!r)$-colouring of $H$ and for each $i\in [p]$, let the colour class of colour $i$ in $G\boxtimes H$ be the product of the colour class of colour $i$ in $G$ and the colour class of colour $i$ in $H$. Clearly, monochromatic components in $G\boxtimes H$ have size at most $c_1c_2$. Moreover, a pigeonhole argument tells us that each vertex of $G\boxtimes H$ is covered by at least $q+r-p$ colours in $G\boxtimes H$, as desired. \end{proof} In particular \cref{lem:comment} shows that if $G$ is $(p\!:\!q)$-colourable with clustering $c$, and $P$ is a path, then $G\boxtimes P$ is $(p\!:\!q-1)$-colourable with clustering $(p-1)c$. Here we have used the statement of \cref{ConsistentPath}, that every path has a $(p\!:\!p-1)$-colouring with clustering $p-1$ (but we did not use the additional property that such a colouring could be taken to be consistent). By induction this easily implies \cref{HexGrid}. \medskip It is an interesting open problem to determine the minimum clustering function in a $(d+1)$-colouring of $\boxtimes_d P_{n}$. Since $\boxtimes_d P_{n}$ contains a $2^d$-clique, every $(d+1)$-colouring has a monochromatic component with at least $2^d/(d+1)$ vertices. The fractional clustered chromatic number of Hex grid graphs is very different from the clustered chromatic number. \begin{prop} \label{cfchi-hexgrid} For fixed $d\in\NN$, $$\cfchi(\{ \boxtimes_d P_{n} : n \in\NN\})=1.$$ \end{prop} \begin{proof} Fix $\epsilon\in(0,1)$ and let $k:= \ceil{2d/\epsilon}$. By \cref{ConsistentPath}, every path has a $(k+1\!:\!k)$-colouring with clustering $k$. By \cref{ClusteredProductColouring}, for every $n\in\NN$, the graph $\boxtimes_d P_n$ is $((k+1)^d\!:\!k^d)$-colourable with clustering $k^d$. For $k\geq 2d$, it is easily proved by induction on $d$ that $(k+1)^d/k^d \leq 1 + 2d/k$. Thus $(k+1)^d/k^d \leq 1+\epsilon$. This says that for any $\epsilon>0$ there exists $c$ (namely, $\ceil{2d/\epsilon}^d$) such that for every $n\in\NN$, the graph $\boxtimes_d P_n$ is fractionally $(1+\epsilon)$-colourable with clustering $c$. The result follows. \end{proof} \cref{cfchi-hexgrid} can also be deduced from a result of \citet{Dvorak16} (who proved that the conclusion holds for any class of bounded degree having sublinear separators). It can also be deduced from a result of \citet{BDLM08}, which states that classes of bounded asymptotic dimension have fractional asymptotic dimension 1 (combined with the discussion of \cref{sec:asdim} on the connection between asymptotic dimension and clustered colouring for classes of graphs of bounded degree). 
Note that the main result of \cite{BDLM08}, which states that if $\mathcal{F}_1$ and $\mathcal{F}_2$ have asymptotic dimension $m_1$ and $m_2$, respectively, then $\mathcal{F}_1\boxtimes \mathcal{F}_2$ has asymptotic dimension $m_1+m_2$, is obtained by combining the result on fractional asymptotic dimension mentioned above with an elaborate version of \cref{lem:comment}. \begin{lem} \label{ConsistentCycle} For every $k\in\NN$, every cycle has a consistent $(k+1\!:\!k)$-colouring with clustering $k^2+3k-1$. \end{lem} \begin{proof} Let $C=(v_1,\dots,v_n)$ be an $n$-vertex cycle. Consider integers $a$ and $b\in[0,k(k+1)-1]$ such that $n= ak(k+1)+b$. By \cref{ConsistentPath}, the path $(v_1,\dots,v_{n-b})$ has a consistent $(k+1\!:\!k)$-colouring with clustering $k$. Observe that the colour sequences assigned to vertices repeat every $k(k+1)$ vertices. Thus $v_1$ and $v_{n-b}$ are assigned the same sequence of $k$ colours. Give this colour sequence to all of $v_{n-b+1},\dots,v_n$. We obtain a consistent $(k+1\!:\!k)$-colouring of $C$ with clustering $k+k(k+1)-1+k=k^2+3k-1$. \end{proof} \cref{ConsistentPath,ConsistentCycle} imply that for every $\epsilon>0$ there exists $c\in\NN$, such that every graph with maximum degree 2 is fractionally $(1+\epsilon)$-colourable with clustering $c$. Thus the fractional clustered chromatic number of graphs with maximum degree 2 equals 1. The following open problem naturally arises: \begin{qu}\label{q:clusteringDelta} What is the fractional clustered chromatic number of graphs with maximum degree $\Delta$? \end{qu} We now show that the same lower bound from the non-fractional setting (see \citep{WoodSurvey}) holds in the fractional setting. \begin{prop} For every even integer $\Delta$, the fractional clustered chromatic number of the class of graphs with maximum degree $\Delta$ is at least $\frac{\Delta}{4}+\half$. \end{prop} \begin{proof} We need to prove that for any even integer $\Delta$ and any integer $c$, there is a graph $G$ of maximum degree $\Delta$ such that for any integers $p,q$, if $G$ is $(p\!:\!q)$-colourable with clustering $c$, then $\tfrac{p}{q}\ge\frac{\Delta}{4}+\half$. Fix an even integer $\Delta$ and an integer $c$, and consider a $(\frac{\Delta}{2}+1)$-regular graph $H$ with girth greater than $c$ (such a graph exists, as proved by \citet{ES63}). Let $G=L(H)$ be the line-graph of $H$. Note that $G$ is $\Delta$-regular. Let $p,q$ be such that $G$ is $(p\!:\!q)$-colourable with clustering $c$, and let $f$ be such a colouring. Consider an $f$-monochromatic subgraph of $G$ with vertex set $X$, so every component of $G[X]$ has at most $c$ vertices. Let $F_X$ be the set of edges of $H$ corresponding to the vertices of $X$ in $G$. Since $H$ has girth greater than $c$, the subgraph of $H$ determined by the edges of $F$ is a forest, and thus $|X|=|F_X|< |V(H)|$. There are $p$ such monochromatic subgraphs and each vertex of $G$ is in exactly $q$ such subgraphs. Thus $$ q \half (\tfrac{\Delta}{2}+1) |V(H)| = q |E(H)| = q |V(G)| = \sum_X |X| = \sum_{X} |F_X| < p |V(H)|.$$ Hence $\frac{p}{q} > \frac{\Delta}{4} + \half $, as desired. \end{proof} The $\Delta=3$ case of \cref{q:clusteringDelta} is an interesting problem. The line graph of the 1-subdivision of a high girth cubic graph provides a lower bound of $\frac{6}{5}$ on the fractional clustered chromatic number. 
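The consistent colouring of \cref{ConsistentPath} is completely explicit, which makes it easy to test. The following Python sketch (illustrative only; the indexing is $0$-based, unlike the proof of \cref{EdgePartitionConsistentColouring}) constructs the colouring of a path and checks both consistency and clustering $k$.
\begin{verbatim}
def consistent_path_colouring(n, k):
    # The (k+1 : k)-colouring of an n-vertex path from Lemma ConsistentPath:
    # walk along the path and, at the j-th edge, replace the entry in
    # position j mod k of the current ordered label by the missing colour.
    labels = [tuple(range(k))]           # ordered label of the first vertex
    for j in range(n - 1):
        cur = labels[-1]
        missing = (set(range(k + 1)) - set(cur)).pop()
        pos = j % k
        labels.append(cur[:pos] + (missing,) + cur[pos + 1:])
    return labels

def check(n, k):
    labels = consistent_path_colouring(n, k)
    # consistency: along each edge, entries in distinct positions never coincide
    for a, b in zip(labels, labels[1:]):
        assert all(a[i] != b[j] for i in range(k) for j in range(k) if i != j)
    # clustering: every monochromatic component of the path has at most k vertices
    for colour in range(k + 1):
        run = best = 0
        for lab in labels:
            run = run + 1 if colour in lab else 0
            best = max(best, run)
        assert best <= k

for k in (2, 3, 4):
    check(50, k)
\end{verbatim}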
\subsection{Trees and Treewidth}\label{sec:treestreewidth} \cref{ConsistentPath} is generalised to bounded degree trees as follows: \begin{lem} \label{ConsistentTree} For all $k,\Delta\in\NN$, every tree $T$ with maximum degree $\Delta\geq 3$ has a consistent $(k+1\!:\!k)$-colouring with clustering less than $2(\Delta-1)^{k-1}$. \end{lem} \begin{proof} If $k=1$ then a proper 2-colouring of $T$ suffices. Now assume that $k\geq 2$. Root $T$ at some leaf vertex $r$. Consider the edge-partition $E_0,\dots,E_{k-1}$ of $T$, where $E_i$ is the set of edges $uv$ in $T$ such that $u$ is the parent of $v$ and $\dist_T(r,u)\equiv i\pmod{k}$. Each component $X$ of $T-E_i$ has height at most $k-1$ and each vertex $v$ in $X$ has at most $\Delta-1$ children in $X$, implying $|V(X)|\leq 1+(\Delta-1)+(\Delta-1)^2+\dots+(\Delta-1)^{k-1}= \frac{(\Delta-1)^k-1}{\Delta-2} <2(\Delta-1)^{k-1}$. The result then follows from \cref{EdgePartitionConsistentColouring}. \end{proof} \cref{Consistent,ConsistentTree} imply the following generalisation of \cref{MultiplyPath}: \begin{lem} \label{MultiplyTree} If a graph $G$ is $k$-colourable with clustering $c$ and $T$ is a tree with maximum degree $\Delta\geq 3$, then $G\boxtimes T$ is $(k+1)$-colourable with clustering less than $2c(\Delta-1)^{k-1}$. \end{lem} \cref{MultiplyTree} leads to the next result. We emphasise that $T_1$ may have arbitrarily large maximum degree (if $T_1$ also has bounded degree then the result is again a simple consequence of \cref{lem:comment}, which does not use consistent colourings). \begin{thm} \label{TreeProduct} If $T_1,\dots,T_d$ are trees, such that each of $T_2,\dots,T_d$ has maximum degree at most $\Delta\geq3$, then $T_1\boxtimes \dots\boxtimes T_d$ is $(d+1)$-colourable with clustering less than $2^d(\Delta-1)^{ \binom{d}{2} }$. \end{thm} \begin{proof} We proceed by induction on $d\geq 1$. In the base case, $T_1$ is 2-colourable with clustering 1. Now assume that $T_1\boxtimes \dots \boxtimes T_{d-1}$ is $d$-colourable with clustering less than $2^{d-1}(\Delta-1)^{ \binom{d-1}{2} }$. \cref{MultiplyTree} with $G=T_1\boxtimes\dots\boxtimes T_{d-1}$ implies that $T_1\boxtimes\dots\boxtimes T_{d}$ is $(d+1)$-colourable with clustering less than $2\cdot 2^{d-1}(\Delta-1)^{ \binom{d-1}{2} } (\Delta-1)^{d-1} = 2^d(\Delta-1)^{ \binom{d}{2} }$. \end{proof} \cref{TreeProduct} is in sharp contrast with \cref{StarProduct}: for the strong product of $d$ stars we need $2^d$ colours even for defective colouring, whereas for bounded degree trees we only need $d+1$ colours in the stronger setting of clustered colouring. This highlights the power of assuming bounded degree in the above results. Let $\TT_k$ be the class of graphs with treewidth $k$. Such graphs are $k$-degenerate and $(k+1)$-colourable. Since the graph $C_{k+1,n}$ (defined in \cref{sec:ptc}) has treewidth $k$, \cref{FracClosureProduct} implies that $$\dfchi(\TT_k) = \cfchi(\TT_k) = \fchi(\TT_k) = \dchi(\TT_k) = \cchi(\TT_k) = \bigchi(\TT_k) = k+1.$$ \citet{ADOV03} showed that graphs of bounded treewidth and bounded degree are 2-colourable with bounded clustering. Note that \cref{DegreeTreewidthClustered} in \cref{sec:asdim} generalises this result ($d=1$) and generalises \cref{TreeProduct} ($k=1$). We now give a short proof of \cref{DegreeTreewidthClustered} (which we restate below for convenience) that does not use asymptotic dimension, or any results related to it.
\DegreeTreewidthClustered* \begin{proof} By \cref{DegreeTreewidthStructure}, $G_i$ is a subgraph of $T_i \boxtimes K_{20k\Delta}$ for some tree $T_i$ with maximum degree at most $20k\Delta^2$. Thus $G_1\boxtimes \dots\boxtimes G_d$ is a subgraph of $ ( T_1 \boxtimes K_{20k\Delta} ) \boxtimes \dots \boxtimes ( T_d \boxtimes K_{20k\Delta} )$, which is a subgraph of $(T_1\boxtimes \dots\boxtimes T_d ) \boxtimes K_t$, where $t:= (20k\Delta)^d$. By \cref{TreeProduct}, $T_1\boxtimes \dots\boxtimes T_d$ is $(d+1)$-colourable with clustering at most some function $c'(d,k,\Delta)$. By \cref{BlowUp}, $(T_1\boxtimes \dots\boxtimes T_d ) \boxtimes K_t$ and thus $G_1\boxtimes \dots\boxtimes G_d$ is $(d+1)$-colourable with clustering $c(d,k,\Delta) :=c'(d,k,\Delta) \cdot t$. \end{proof} The next result shows that for any sequence of non-trivial classes $\GG_1,\dots,\GG_d$, the bound on the number of colours in \cref{DegreeTreewidthClustered} is best possible. \begin{thm} If $\GG_1,\dots,\GG_d$ are graph classes with $\cchi(\GG_i)\geq 2$ for each $i\in\{1,\dots,d\}$, then $\cchi(\GG_1\boxtimes \dots \boxtimes \GG_d) \geq d+1$. \end{thm} \begin{proof} By replacing each class $\GG_i$ by its monotone closure if necessary, we can assume without loss of generality that each class $\GG_i$ is monotone (i.e., closed under taking subgraphs). If there is a constant $d$ such that every component of a graph of $\GG_i$ has maximum degree at most $d$ and diameter at most $d$, then $\cchi(\GG_i)\le 1$. It follows that for any $1\le i\le d$, the graphs of the class $\GG_i$ contain arbitrarily large degree vertices or arbitrarily long paths. By monotonicity, it follows that for some constant $1\le k \le d$, $\GG_1\boxtimes \dots \boxtimes \GG_d$ contains the class $(\boxtimes_k \mathcal{P} )\boxtimes (\boxtimes_{d-k}\mathcal{S})$, where $\mathcal{P}$ denotes the class of all paths, and $\mathcal{S}$ denotes the class of all stars. We claim that for any graph $G$, $\cchi(G\boxtimes \mathcal{P})\le \cchi(G\boxtimes \mathcal{S})$. To see this, assume that $G\boxtimes \mathcal{S}$ is $\ell$-colourable with clustering $c$, and consider an $\ell$-colouring $f$ of $G\boxtimes K_{1,n}$ with clustering $c$, with $n=c\cdot \ell^{|V(G)|}$. The graph $G\boxtimes K_{1,n}$ can be considered as the union of $n+1$ copies of $G$, one copy for the centre of the star $K_{1,n}$ (call it the central copy of $G$), and $n$ copies for the leafs of $K_{1,n}$ (call them the leaf copies of $G$). By the pigeonhole principle, at least $c$ leaf copies $G_1,\ldots,G_c$ of $G$ have precisely the same colouring, that is for each vertex $u$ of $G$, and any two copies $G_i$ and $G_j$ with $1\le i<j\le c$, the two copies of $u$ in $G_i$ and $G_j$ have the same colour in $f$. Let us denote this colouring of $G$ by $f_l$, and let us denote the restriction of $f$ to the central copy by $f_c$ (considered as a colouring of $G$). Note that for any vertex $v$ of $G$ we have $f_l(v)\ne f_c(v)$, and for any edge $uv$ of $G$ we have $f_l(u)\ne f_c(v)$, otherwise $f$ would contain a monochromatic star on $c+1$ vertices. We can now obtain a colouring of $G\boxtimes P$, for any path $P$, by alternating the colourings $f_l$ and $f_c$ of $G$ along the path. This shows that $G\boxtimes \mathcal{P}$ is $\ell$-colourable with clustering $c$. It follows from the previous paragraph that $\cchi((\boxtimes_k \mathcal{P} )\boxtimes (\boxtimes_{d-k}\mathcal{S}))\ge \cchi(\boxtimes_k \mathcal{P} )$. 
By the Hex lemma (see \cref{sec:hex}), this implies $\cchi(\GG_1\boxtimes \dots \boxtimes \GG_d) \geq d+1$, as desired. \end{proof} \subsection{Graph Parameters}\label{sec:param} We now explain how some results of this paper can be proved in a more general setting. For the sake of readability, we chose to present them (and prove them) only for the case of (clustered) colouring in the previous sections. \medskip A \defn{graph parameter} is a function $\eta$ such that $\eta(G)\in\RR\cup\{\infty\}$ for every graph $G$, and $\eta(G_1)=\eta(G_2)$ for all isomorphic graphs $G_1$ and $G_2$. For a graph parameter $\eta$ and a set of graphs $\GG$, let $\eta(\GG):=\sup\{\eta(G):G\in\GG\}$ (possibly $\infty$). For a graph parameter $\eta$, a colouring $f:V(G)\to C$ of a graph $G$ has \defn{$\eta$-defect $d$} if $\eta( G[ f^{-1}(i) ] ) \leq d$ for each $i\in C$. Then a graph class $\GG$ is \defn{$k$-colourable with bounded $\eta$} if there exists $d\in\RR$ such that every graph in $\GG$ has a $k$-colouring with $\eta$-defect $d$. Let $\bigchi_\eta(\GG)$ be the minimum integer $k$ such that $\GG$ is $k$-colourable with bounded $\eta$, called the \defn{$\eta$-bounded chromatic number}. Maximum degree, $\Delta$, is a graph parameter, and the $\Delta$-bounded chromatic number coincides with the defective chromatic number, both denoted $\dchi(\GG)$. Define $\star(G)$ to be the maximum number of vertices in a connected component of a graph $G$. Then $\star$ is a graph parameter, and the $\star$-bounded chromatic number coincides with the clustered chromatic number, both denoted $\cchi(\GG)$. These definitions also capture the usual chromatic number. For every graph $G$, define \begin{equation*} \iota(G):=\begin{cases} 1 & \text{ if $E(G)=\emptyset$}\\ \infty & \text{ otherwise }\\ \end{cases} \end{equation*} For $d\in\RR$, a colouring $f$ of $G$ has $\iota$-defect $d$ if and only if $f$ is proper. Then $\bigchi_\iota(\{G\})=\bigchi(G)$. A graph parameter $\eta$ is \defn{$g$-well-behaved} with respect to a particular graph product $\ast\in\{\square,\boxtimes\}$ if: \begin{enumerate}[(W1)] \item $\eta(H) \leq \eta(G)$ for every graph $G$ and every subgraph $H$ of $G$, \item $\eta( G_1 \cup G_2) = \max\{ \eta(G_1), \eta(G_2) \}$ for all disjoint graphs $G_1$ and $G_2$. \item $\eta( G_1 \ast G_2) \leq g( \eta(G_1), \eta(G_2))$ for all graphs $G_1$ and $G_2$. \end{enumerate} A graph parameter is \defn{well-behaved} if it is $g$-well-behaved for some function $g$. For example: \begin{compactitem} \item $\Delta$ is $g$-well-behaved with respect to $\square$, where $g(\Delta_1,\Delta_2)=\Delta_1+\Delta_2$. \item $\Delta$ is $g$-well-behaved with respect to $\boxtimes$, where $g(\Delta_1,\Delta_2)=(\Delta_1+1)(\Delta_2+1)-1$. \item $\star$ is $g$-well-behaved with respect to $\square$ or $\boxtimes$, where $g(\star_1,\star_2)=\star_1 \star_2$. \item $\iota$ is $g$-well-behaved with respect to $\square$ or $\boxtimes$, where $g(\iota_1,\iota_2)= \iota_1 \iota_2 $. \end{compactitem} On the other hand, some graph parameters are $g$-well-behaved for no function $g$. One such example is treewidth, since if $G_1$ and $G_2$ are $n$-vertex paths, then $\tw(G_1)=\tw(G_2)=1$ but $\tw(G_1\boxtimes G_2) \geq \tw(G_1 \CartProd G_2) = n$, implying there is no function $g$ for which (W3) holds. Let $\eta$ be a graph parameter. A fractional colouring of a graph $G$ has \defn{$\eta$-defect $d$} if $\eta(X)\leq d$ for each monochromatic subgraph $X$ of $G$. 
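Before turning to the fractional variants, here is a quick illustration of property (W3) for the two parameters $\Delta$ and $\star$ under the strong product (a Python sketch using the \texttt{networkx} library; it is a sanity check only and is not used in any proof):
\begin{verbatim}
import networkx as nx

def Delta(G):          # maximum degree
    return max(d for _, d in G.degree())

def star(G):           # order of a largest connected component
    return max(len(c) for c in nx.connected_components(G))

G1, G2 = nx.path_graph(6), nx.star_graph(4)      # Delta(G1) = 2, Delta(G2) = 4
P = nx.strong_product(G1, G2)
print(Delta(P), "<=", (Delta(G1) + 1) * (Delta(G2) + 1) - 1)   # 14 <= 14
print(star(P),  "<=", star(G1) * star(G2))                     # 30 <= 30
\end{verbatim}
Both bounds are attained with equality in this example, since an internal path vertex and the star centre have closed neighbourhoods of sizes $3$ and $5$, and the strong product of two connected graphs is connected.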
A graph class $\GG$ is \defn{fractionally $t$-colourable with bounded $\eta$} if there exists $d\in\RR$ such that every graph in $\GG$ has a fractional $t$-colouring with $\eta$-defect $d$. Let $\bigchi^f_\eta(\GG)$ be the infimum of all $t\in\RR^+$ such that $\GG$ is fractionally $t$-colourable with bounded $\eta$, called the \defn{fractional $\eta$-bounded chromatic number}. The next lemma generalises \cref{ChromaticNumber,FractionalProduct}. \begin{lem} \label{ProductColouringEta} Let $\eta$ be a $g$-well-behaved parameter with respect to $\boxtimes$. Let $G_1,\dots,G_d$ be graphs such that $G_i$ is $(p_i\!:\!q_i)$-colourable with $\eta$-defect $c_i$, for each $i\in[1,d]$. Then $G:= G_1\boxtimes \cdots\boxtimes G_d$ is $(\prod_i p_i\!:\!\prod_i q_i)$-colourable with $\eta$-defect $g( c_1, g( c_2, \ldots , g( c_{d-1}, c_d ) ) )$. \end{lem} \begin{proof} For $i\in[1,d]$, let $\phi_i$ be a $(p_i\!:\!q_i)$-colouring of $G_i$ with $\eta$-defect $c_i$. Let $\phi$ be the colouring of $G$, where each vertex $v=(v_1,\dots,v_d)$ of $G$ is coloured $\phi(v) := \{ (a_1,\dots,a_d): a_i \in \phi_i(v_i), i \in [1,d] \}$. So each vertex of $G$ is assigned a set of $\prod_i q_i$ colours, and there are $\prod_i p_i$ colours in total. Let $X:= X_1\boxtimes\dots\boxtimes X_d$, where each $X_i$ is a monochromatic component of $G_i$ using colour $a_i$. Then $X$ is a monochromatic connected induced subgraph of $G$ using colour $(a_1,\dots,a_d)$. Consider any edge $(v_1,\dots,v_d)(w_1,\dots,w_d)$ of $G$ with $(v_1,\dots,v_d)\in V(X)$ and $(w_1,\dots,w_d)\not\in V(X)$. Thus $v_iw_i\in E(G_i)$ and $w_i\not\in V(X_i)$ for some $i\in\{1,\dots,d\}$. Hence $a_i\not\in \phi_i(w_i)$, implying $(a_1,\dots,a_d)\not\in \phi(w)$. Hence $X$ is a monochromatic component of $G$ using colour $(a_1,\dots,a_d)$. It follows from (W3) by induction that $\eta(X) \leq g( c_1, g( c_2, \ldots , g( c_{d-1}, c_d ) ) )$. Hence $\phi$ has $\eta$-defect $g( c_1, g( c_2, \ldots , g( c_{d-1}, c_d ) ) )$. \end{proof} \cref{ProductColouringEta} implies: \begin{thm} \label{ProductColouringEtaClasses} Let $\eta$ be a well-behaved parameter with respect to $\boxtimes$. For all graph classes $\GG_1,\dots,\GG_d$, $$\bigchi_\eta( \GG_1 \boxtimes \dots\boxtimes \GG_d) \leq \prod_{i=1}^d \bigchi_\eta(\GG_i).$$ \end{thm} We have the following special case of \cref{ProductColouringEta}. \begin{lem} \label{DefectiveProductColouring} For all graphs $G_1,\dots,G_d$, if each $G_i$ is $(p_i\!:\!q_i)$-colourable with defect $c_i$, then $G_1\boxtimes \cdots\boxtimes G_d$ is $(\prod_i p_i\!:\!\prod_i q_i)$-colourable with defect $\prod_i(1+c_i)-1$. \end{lem} \cref{DefectiveProductColouring} implies the following analogue of \cref{ChromaticNumber} for defective colouring. \begin{thm} \label{DefectiveProductColouringClasses} For all graph classes $\GG_1,\dots,\GG_d$, $$\dchi( \GG_1 \boxtimes \dots\boxtimes \GG_d) \leq \prod_{i=1}^d \dchi(\GG_i).$$ \end{thm} The following is a generalised version of \cref{Consistent}. \begin{lem} \label{ConsistentGeneral} Let $\eta$ be a $g$-well-behaved graph parameter. If a graph $G$ has a consistent $(p\!:\!q)$-colouring with $\eta$-defect $c_1$ and a graph $H$ has a $(q\!:\!r)$-colouring with $\eta$-defect $c_2$, then $G\boxtimes H$ has a $(p\!:\!r)$-colouring with $\eta$-defect $g(c_1,c_2)$. \end{lem} We also have the following generalised version of \cref{ConsistentClustered}. \begin{lem} \label{ConsistentProperGeneral} Let $\eta$ be a $g$-well-behaved graph parameter.
If a graph $G$ has a proper $(p\!:\!q)$-colouring, and a graph $H$ has a $(q\!:\!r)$-colouring with $\eta$-defect $c$, then $G\boxtimes H$ has a $(p\!:\!r)$-colouring with $\eta$-defect $g(\eta(K_1),c_2)$. \end{lem} \begin{cor} \label{ConsistentDefectiveGeneral} If a graph $G$ has a proper $(p\!:\!q)$-colouring, and a graph $H$ has a $(q\!:\!r)$-colouring with defect $c$, then $G\boxtimes H$ has a $(p\!:\!r)$-colouring with defect $c$. \end{cor} \subsection*{Acknowledgements} This research was initiated at the Graph Theory Workshop held at Bellairs Research Institute in April 2019. We thank the other workshop participants for creating a productive working atmosphere (and in particular Vida Dujmovi{\'c} and Bartosz Walczak for discussions related to the paper). Thanks to both referees for several insightful comments. \def\soft#1{\leavevmode\setbox0=\hbox{h}\dimen7=\ht0\advance \dimen7 by-1ex\relax\if t#1\relax\rlap{\raise.6\dimen7 \hbox{\kern.3ex\char'47}}#1\relax\else\if T#1\relax \rlap{\raise.5\dimen7\hbox{\kern1.3ex\char'47}}#1\relax \else\if d#1\relax\rlap{\raise.5\dimen7\hbox{\kern.9ex \char'47}}#1\relax\else\if D#1\relax\rlap{\raise.5\dimen7 \hbox{\kern1.4ex\char'47}}#1\relax\else\if l#1\relax \rlap{\raise.5\dimen7\hbox{\kern.4ex\char'47}}#1\relax \else\if L#1\relax\rlap{\raise.5\dimen7\hbox{\kern.7ex \char'47}}#1\relax\else\message{accent \string\soft \space #1 not \fi\fi\fi\fi} \begin{thebibliography}{70} \providecommand{\natexlab}[1]{#1} \providecommand{\msn}[1]{MR:\,\href{http://www.ams.org/mathscinet-getitem?mr=MR{#1}}{#1}} \providecommand{\ZBL}[1]{Zbl:\,\href{https://www.zentralblatt-math.org/zmath/en/search/?q=an:#1}{#1}} \providecommand{\url}[1]{\texttt{#1}} \providecommand{\urlprefix}{} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi:\discretionary{}{}{}#1}\else \providecommand{\doi}{doi:\discretionary{}{}{}\begingroup \bibitem[{Alon(1998)}]{Alon98} \textsc{Noga Alon}. \newblock \href{https://doi.org/10.1007/PL00009824}{The {S}hannon capacity of a union}. \newblock \emph{Combinatorica}, 18(3):301--310, 1998. \bibitem[{Alon(2002)}]{Alon02} \textsc{Noga Alon}. \newblock Graph powers. \newblock In \emph{Contemporary combinatorics}, vol.~10 of \emph{Bolyai Soc. Math. Stud.}, pp. 11--28. J\'{a}nos Bolyai Math. Soc., Budapest, 2002. \bibitem[{Alon et~al.(2003)Alon, Ding, Oporowski, and Vertigan}]{ADOV03} \textsc{Noga Alon, Guoli Ding, Bogdan Oporowski, and Dirk Vertigan}. \newblock \href{https://doi.org/10.1016/S0095-8956(02)00006-0}{Partitioning into graphs with only small components}. \newblock \emph{J. Combin. Theory Ser. B}, 87(2):231--243, 2003. \bibitem[{Alon and Lubetzky(2006)}]{AL06} \textsc{Noga Alon and Eyal Lubetzky}. \newblock \href{https://doi.org/10.1109/TIT.2006.872856}{The {S}hannon capacity of a graph and the independence numbers of its powers}. \newblock \emph{IEEE Trans. Inform. Theory}, 52(5):2172--2176, 2006. \bibitem[{Alon and Orlitsky(1995)}]{AO95} \textsc{Noga Alon and Alon Orlitsky}. \newblock \href{https://doi.org/10.1109/18.412676}{Repeated communication and ramsey graphs}. \newblock \emph{{IEEE} Trans. Inf. Theory}, 41(5):1276--1289, 1995. \bibitem[{Bell and Dranishnikov(2006)}]{BD06} \textsc{G.~C. Bell and A.~N. Dranishnikov}. \newblock \href{https://doi.org/10.1090/S0002-9947-06-04088-8}{A {H}urewicz-type theorem for asymptotic dimension and applications to geometric group theory}. \newblock \emph{Trans. Amer. Math. Soc.}, 358(11):4749--4764, 2006. 
\bibitem[{Berger et~al.(2017)Berger, Dvo{\v{r}}{\'a}k, and Norin}]{BDN17} \textsc{Eli Berger, Zden{\v{e}}k Dvo{\v{r}}{\'a}k, and Sergey Norin}. \newblock \href{https://doi.org/10.1007/s00493-017-3548-5}{Treewidth of grid subsets}. \newblock \emph{Combinatorica}, 38(6):1337--1352, 2017. \bibitem[{Bonamy et~al.(2021)Bonamy, Bousquet, Esperet, Groenland, Liu, Pirot, and Scott}]{BBEGLPS} \textsc{Marthe Bonamy, Nicolas Bousquet, Louis Esperet, Carla Groenland, Chun-Hung Liu, François Pirot, and Alex Scott}. \newblock \href{http://arxiv.org/abs/2012.02435}{Asymptotic dimension of minor-closed families and {A}ssouad--{N}agata dimension of surfaces}. \newblock \emph{J. European Math. Soc. \textup{(in press)}}, 2021. \newblock arXiv:2012.02435. \bibitem[{Brodskiy et~al.(2008)Brodskiy, Dydak, Levin, and Mitra}]{BDLM08} \textsc{Nikolay Brodskiy, Jerzy Dydak, Michael Levin, and Atish Mitra}. \newblock \href{https://doi.org/10.1112/jlms/jdn005}{A {H}urewicz theorem for the {A}ssouad-{N}agata dimension}. \newblock \emph{J. Lond. Math. Soc. (2)}, 77(3):741--756, 2008. \bibitem[{Campbell et~al.(2022)Campbell, Clinch, Distel, Gollin, Hendrey, Hickingbotham, Huynh, Illingworth, Tamitegama, Tan, and Wood}]{UTW} \textsc{Rutger Campbell, Katie Clinch, Marc Distel, J.~Pascal Gollin, Kevin Hendrey, Robert Hickingbotham, Tony Huynh, Freddie Illingworth, Youri Tamitegama, Jane Tan, and David~R. Wood}. \newblock \href{http://arxiv.org/abs/2206.02395}{Product structure of graph classes with bounded treewidth}. \newblock 2022, arXiv:2206.02395. \bibitem[{Choi and Esperet(2019)}]{CE19} \textsc{Ilkyoo Choi and Louis Esperet}. \newblock \href{https://doi.org/10.1002/jgt.22418}{Improper coloring of graphs on surfaces}. \newblock \emph{J. Graph Theory}, 91(1):16--34, 2019. \bibitem[{Cs\'{o}ka et~al.(2015)Cs\'{o}ka, Gerencs\'{e}r, Harangi, and Vir\'{a}g}]{CGHV15} \textsc{Endre Cs\'{o}ka, Bal\'{a}zs Gerencs\'{e}r, Viktor Harangi, and B\'{a}lint Vir\'{a}g}. \newblock \href{https://doi.org/10.1002/rsa.20547}{Invariant {G}aussian processes and independent sets on regular graphs of large girth}. \newblock \emph{Random Structures Algorithms}, 47(2):284--303, 2015. \bibitem[{Ding and Oporowski(1995)}]{DO95} \textsc{Guoli Ding and Bogdan Oporowski}. \newblock \href{https://doi.org/10.1002/jgt.3190200412}{Some results on tree decomposition of graphs}. \newblock \emph{J. Graph Theory}, 20(4):481--499, 1995. \bibitem[{Distel et~al.(2022)Distel, Hickingbotham, Huynh, and Wood}]{DHHW22} \textsc{Marc Distel, Robert Hickingbotham, Tony Huynh, and David~R. Wood}. \newblock \href{https://doi.org/10.48550/arXiv.2112.10025}{Improved product structure for graphs on surfaces}. \newblock \emph{Discrete Math. Theor. Comput. Sci.}, 24(2):\#6, 2022. \bibitem[{Distel and Wood(2022)}]{DW22} \textsc{Marc Distel and David~R. Wood}. \newblock \href{http://arxiv.org/abs/2210.12577}{Tree-partitions with bounded degree trees}. \newblock 2022, arXiv:2210.12577. \bibitem[{Dujmovi{\'c} et~al.(2020)Dujmovi{\'c}, Joret, Micek, Morin, Ueckerdt, and Wood}]{DJMMUW20} \textsc{Vida Dujmovi{\'c}, Gwena\"{e}l Joret, Piotr Micek, Pat Morin, Torsten Ueckerdt, and David~R. Wood}. \newblock \href{https://doi.org/10.1145/3385731}{Planar graphs have bounded queue-number}. \newblock \emph{J. ACM}, 67(4):\#22, 2020. \bibitem[{Dujmovi{\'c} et~al.(2019)Dujmovi{\'c}, Morin, and Wood}]{DMW} \textsc{Vida Dujmovi{\'c}, Pat Morin, and David~R. Wood}. \newblock \href{http://arxiv.org/abs/1907.05168}{Graph product structure for non-minor-closed classes}. 
\newblock 2019, arXiv:1907.05168. \bibitem[{Dvo{\v{r}}{\'a}k(2016)}]{Dvorak16} \textsc{Zden{\v{e}}k Dvo{\v{r}}{\'a}k}. \newblock \href{https://doi.org/10.1016/j.ejc.2015.09.001}{Sublinear separators, fragility and subexponential expansion}. \newblock \emph{European J. Combin.}, 52(A):103--119, 2016. \bibitem[{Dvo{\v{r}}{\'a}k and Norin(2017)}]{DN17} \textsc{Zden{\v{e}}k Dvo{\v{r}}{\'a}k and Sergey Norin}. \newblock \href{http://arxiv.org/abs/1710.02727}{Islands in minor-closed classes. {I}. {B}ounded treewidth and separators}. \newblock 2017, arXiv:1710.02727. \bibitem[{Dvo{\v{r}}{\'a}k and Sereni(2020)}]{DS20} \textsc{Zden{\v{e}}k Dvo{\v{r}}{\'a}k and Jean-S\'ebastien Sereni}. \newblock \href{https://doi.org/10.37236/8909}{On fractional fragility rates of graph classes}. \newblock \emph{Electronic J. Combinatorics}, 27:P4.9, 2020. \bibitem[{Edwards et~al.(2015)Edwards, Kang, Kim, Oum, and Seymour}]{EKKOS15} \textsc{Katherine Edwards, Dong~Yeap Kang, Jaehoon Kim, Sang-il Oum, and Paul Seymour}. \newblock \href{https://doi.org/10.1137/141002177}{A relative of {H}adwiger's conjecture}. \newblock \emph{SIAM J. Discrete Math.}, 29(4):2385--2388, 2015. \bibitem[{Erd\H{o}s and Sachs(1963)}]{ES63} \textsc{Paul Erd\H{o}s and Horst Sachs}. \newblock Regul\"are {G}raphen gegebener {T}aillenweite mit minimaler {K}notenzahl. \newblock \emph{Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg Math.-Natur. Reihe}, 12:251--257, 1963. \bibitem[{Esperet and Joret(2014)}]{EJ14} \textsc{Louis Esperet and Gwena{\"{e}}l Joret}. \newblock \href{https://doi.org/10.1017/S0963548314000170}{Colouring planar graphs with three colours and no large monochromatic components}. \newblock \emph{Combin., Probab. Comput.}, 23(4):551--570, 2014. \bibitem[{Esperet and Ochem(2016)}]{EO16} \textsc{Louis Esperet and Pascal Ochem}. \newblock \href{https://doi.org/10.1137/140957883}{Islands in graphs on surfaces}. \newblock \emph{SIAM J. Discrete Math.}, 30(1):206--219, 2016. \bibitem[{Farber(1986)}]{Farber86} \textsc{Martin Farber}. \newblock \href{https://doi.org/10.1137/0607008}{An analogue of the {S}hannon capacity of a graph}. \newblock \emph{SIAM J. Algebraic Discrete Methods}, 7:67--72, 1986. \bibitem[{Farkasov\'a and Sot\'ak(2015)}]{FS15} \textsc{Zuzana Farkasov\'a and Roman Sot\'ak}. \newblock Fractional and circular 1-defective colorings of outerplanar graphs. \newblock \emph{Australas. J. Combin.}, 63:1--11, 2015. \bibitem[{Gale(1979)}]{Gale79} \textsc{David Gale}. \newblock \href{https://doi.org/10.2307/2320146}{The game of {H}ex and the {B}rouwer fixed-point theorem}. \newblock \emph{Amer. Math. Monthly}, 86(10):818--827, 1979. \bibitem[{Goddard and Xu(2016)}]{GX16} \textsc{Wayne Goddard and Honghai Xu}. \newblock \href{https://doi.org/10.1002/jgt.21868}{Fractional, circular, and defective coloring of series-parallel graphs}. \newblock \emph{J. Graph Theory}, 81(2):146--153, 2016. \bibitem[{Gromov(1993)}]{Gro93} \textsc{M.~Gromov}. \newblock Asymptotic invariants of infinite groups. \newblock In \emph{Geometric group theory, {V}ol. 2 ({S}ussex, 1991)}, vol. 182 of \emph{London Math. Soc. Lecture Note Ser.}, pp. 1--295. Cambridge Univ. Press, 1993. \bibitem[{Hadwiger(1943)}]{Hadwiger43} \textsc{Hugo Hadwiger}. \newblock \href{http://www.ngzh.ch/archiv/1943_88/88_2/88_17.pdf}{\"{U}ber eine {K}lassifikation der {S}treckenkomplexe}. \newblock \emph{Vierteljschr. Naturforsch. Ges. Z\"urich}, 88:133--142, 1943. \bibitem[{Hayward and Toft(2019)}]{HT19} \textsc{Ryan~B. Hayward and Bjarne Toft}. 
\newblock \href{https://doi.org/10.1201/9780429031960}{Hex, inside and out---the full story}. \newblock CRC Press, 2019. \bibitem[{Hell and Roberts(1982)}]{HR82} \textsc{Pavol Hell and Fred~S. Roberts}. \newblock Analogues of the {S}hannon capacity of a graph. \newblock In \emph{Theory and practice of combinatorics}, vol.~60 of \emph{North-Holland Math. Stud.}, pp. 155--168. North-Holland, 1982. \bibitem[{Hendrey and Wood(2019)}]{HW19} \textsc{Kevin Hendrey and David~R. Wood}. \newblock \href{https://doi.org/10.1017/S0963548319000063}{Defective and clustered colouring of sparse graphs}. \newblock \emph{Combin. Probab. Comput.}, 28(5):791--810, 2019. \bibitem[{Hickingbotham and Wood(2021)}]{HW21b} \textsc{Robert Hickingbotham and David~R. Wood}. \newblock \href{http://arxiv.org/abs/2111.12412}{Shallow minors, graph products and beyond planar graphs}. \newblock 2021, arXiv:2111.12412. \bibitem[{Illingworth et~al.(2022)Illingworth, Scott, and Wood}]{ISW} \textsc{Freddie Illingworth, Alex Scott, and David~R. Wood}. \newblock \href{http://arxiv.org/abs/2104.06627}{Product structure of graphs with an excluded minor}. \newblock 2022, arXiv:2104.06627. \bibitem[{Kang and Oum(2019)}]{KO19} \textsc{Dong~Yeap Kang and Sang-il Oum}. \newblock \href{https://doi.org/10.1017/S0963548318000548}{Improper coloring of graphs with no odd clique minor}. \newblock \emph{Combin. Probab. Comput.}, 28(5):740--754, 2019. \bibitem[{Karasev(2013)}]{Karasev13} \textsc{Roman~N. Karasev}. \newblock \href{https://doi.org/10.1007/s00454-013-9490-4}{An analogue of {G}romov's waist theorem for coloring the cube}. \newblock \emph{Discrete {\&} Computational Geometry}, 49(3):444--453, 2013. \bibitem[{Klav\v{z}ar(1993)}]{Klavzar93} \textsc{Sandi Klav\v{z}ar}. \newblock \href{https://doi.org/10.1007/BF01855874}{Strong products of {$\chi$}-critical graphs}. \newblock \emph{Aequationes Math.}, 45(2-3):153--162, 1993. \bibitem[{Klav\v{z}ar(1998)}]{Klavzar98} \textsc{Sandi Klav\v{z}ar}. \newblock \href{https://doi.org/10.1016/S0012-365X(97)00212-4}{On the fractional chromatic number and the lexicographic product of graphs}. \newblock \emph{Discrete Math.}, 185(1-3):259--263, 1998. \bibitem[{Klav\v{z}ar and Milutinovi\'{c}(1994)}]{KM94} \textsc{Sandi Klav\v{z}ar and Uro\v{s} Milutinovi\'{c}}. \newblock \href{https://doi.org/10.1016/0012-365X(94)90037-X}{Strong products of {K}neser graphs}. \newblock \emph{Discrete Math.}, 133(1-3):297--300, 1994. \bibitem[{Klav\v{z}ar and Yeh(2002)}]{KY02} \textsc{Sandi Klav\v{z}ar and Hong-Gwa Yeh}. \newblock \href{https://doi.org/10.1016/S0012-365X(01)00312-0}{On the fractional chromatic number, the chromatic number, and graph products}. \newblock \emph{Discrete Math.}, 247(1-3):235--242, 2002. \bibitem[{Klav{\v{z}}ar(1996)}]{Klavzar96} \textsc{Sandi Klav{\v{z}}ar}. \newblock Coloring graph products---a survey. \newblock \emph{Discrete Math.}, 155(1--3):135--145, 1996. \bibitem[{Klostermeyer(2002)}]{Klostermeyer02} \textsc{William Klostermeyer}. \newblock Defective circular coloring. \newblock \emph{Australas. J. Combin.}, 26:21--32, 2002. \bibitem[{Krauthgamer and Lee(2007)}]{KL07} \textsc{Robert Krauthgamer and James~R. Lee}. \newblock \href{https://doi.org/10.1007/s00493-007-2183-y}{The intrinsic dimensionality of graphs}. \newblock \emph{Combinatorica}, 27(5):551--585, 2007. \bibitem[{Linial et~al.(2008)Linial, Matou{\v{s}}ek, Sheffet, and Tardos}]{LMST08} \textsc{Nathan Linial, Ji{\v{r}}{\'i} Matou{\v{s}}ek, Or~Sheffet, and G\'abor Tardos}. 
\newblock \href{https://doi.org/10.1017/S0963548308009140}{Graph colouring with no large monochromatic components}. \newblock \emph{Combin. Probab. Comput.}, 17(4):577--589, 2008. \bibitem[{Liu and Oum(2018)}]{LO18} \textsc{Chun-Hung Liu and Sang-il Oum}. \newblock \href{https://doi.org/10.1016/j.jctb.2017.08.003}{Partitioning {$H$}-minor free graphs into three subgraphs with no large components}. \newblock \emph{J. Combin. Theory Ser. B}, 128:114--133, 2018. \bibitem[{Liu and Wood(2019{\natexlab{a}})}]{LW2} \textsc{Chun-Hung Liu and David~R. Wood}. \newblock \href{http://arxiv.org/abs/1905.09495}{Clustered coloring of graphs excluding a subgraph and a minor}. \newblock 2019{\natexlab{a}}, arXiv:1905.09495. \bibitem[{Liu and Wood(2019{\natexlab{b}})}]{LW1} \textsc{Chun-Hung Liu and David~R. Wood}. \newblock \href{http://arxiv.org/abs/1905.08969}{Clustered graph coloring and layered treewidth}. \newblock 2019{\natexlab{b}}, arXiv:1905.08969. \bibitem[{Liu and Wood(2022{\natexlab{a}})}]{LW4} \textsc{Chun-Hung Liu and David~R. Wood}. \newblock \href{http://arxiv.org/abs/2209.12327}{Clustered coloring of graphs with bounded layered treewidth and bounded degree}. \newblock 2022{\natexlab{a}}, arXiv:2209.12327. \bibitem[{Liu and Wood(2022{\natexlab{b}})}]{LW3} \textsc{Chun-Hung Liu and David~R. Wood}. \newblock \href{https://doi.org/10.1016/j.jctb.2021.09.002}{Clustered variants of {H}aj\'os' conjecture}. \newblock \emph{J. Combin. Theory, Ser. B}, 152:27--54, 2022{\natexlab{b}}. \bibitem[{Liu and Wood(tion)}]{LW5} \textsc{Chun-Hung Liu and David~R. Wood}. \newblock Fractional clustered colourings of graphs with no {$K_{s,t}$} subgaph, \newblock in preparation. \bibitem[{Lov{\'{a}}sz(1979)}]{Lovasz79} \textsc{L{\'{a}}szl{\'{o}} Lov{\'{a}}sz}. \newblock \href{https://doi.org/10.1109/TIT.1979.1055985}{On the {S}hannon capacity of a graph}. \newblock \emph{{IEEE} Trans. Inf. Theory}, 25(1):1--7, 1979. \bibitem[{Matdinov(2013)}]{Matdinov13} \textsc{Marsel Matdinov}. \newblock \href{https://doi.org/10.1007/s00454-013-9504-2}{Size of components of a cube coloring}. \newblock \emph{Discrete {\&} Computational Geometry}, 50(1):185--193, 2013. \bibitem[{Matou{\v{s}}ek and P\v{r}\'\i{v}\v{e}tiv\'{y}(2008)}]{MP08} \textsc{Ji{\v{r}}{\'i} Matou{\v{s}}ek and Ale\v{s} P\v{r}\'\i{v}\v{e}tiv\'{y}}. \newblock \href{https://doi.org/10.1137/070684112}{Large monochromatic components in two-colored grids}. \newblock \emph{SIAM J. Discrete Math.}, 22(1):295--311, 2008. \bibitem[{Mih\'ok et~al.(2011)Mih\'ok, Oravcov\'a, and Sot\'ak}]{MOS11} \textsc{Peter Mih\'ok, Janka Oravcov\'a, and Roman Sot\'ak}. \newblock \href{https://doi.org/10.7151/dmgt.1550}{Generalized circular colouring of graphs}. \newblock \emph{Discuss. Math. Graph Theory}, 31(2):345--356, 2011. \bibitem[{Mohar et~al.(2017)Mohar, Reed, and Wood}]{MRW17} \textsc{Bojan Mohar, Bruce Reed, and David~R. Wood}. \newblock \href{https://ajc.maths.uq.edu.au/pdf/69/ajc_v69_p236.pdf}{Colourings with bounded monochromatic components in graphs of given circumference}. \newblock \emph{Australas. J. Combin.}, 69(2):236--242, 2017. \bibitem[{Norin et~al.(2019)Norin, Scott, Seymour, and Wood}]{NSSW19} \textsc{Sergey Norin, Alex Scott, Paul Seymour, and David~R. Wood}. \newblock \href{https://doi.org/10.1007/s00493-019-3848-z}{Clustered colouring in minor-closed classes}. \newblock \emph{Combinatorica}, 39(6):1387--1412, 2019. \bibitem[{Norin et~al.(2023)Norin, Scott, and Wood}]{NSW22} \textsc{Sergey Norin, Alex Scott, and David~R. Wood}. 
\newblock \href{https://doi.org/10.1017/S0963548322000165}{Clustered colouring of graph classes with bounded treedepth or pathwidth}. \newblock \emph{Combin. Probab. Comput.}, 32:122--133, 2023. \bibitem[{Reed and Seymour(1998)}]{RS98} \textsc{Bruce~A. Reed and Paul Seymour}. \newblock \href{https://doi.org/10.1006/jctb.1998.1835}{Fractional colouring and {H}adwiger's conjecture}. \newblock \emph{J. Combin. Theory Ser. B}, 74(2):147--152, 1998. \bibitem[{Sabidussi(1957)}]{Sabidussi57} \textsc{Gert Sabidussi}. \newblock \href{https://doi.org/10.4153/CJM-1957-060-7}{Graphs with given group and given graph-theoretical properties}. \newblock \emph{Canad. J. Math.}, 9:515--525, 1957. \bibitem[{Scheinerman and Ullman(1997)}]{SU97} \textsc{Edward~R. Scheinerman and Daniel~H. Ullman}. \newblock Fractional graph theory. \newblock Wiley, 1997. \bibitem[{Shannon(1956)}]{Shannon56} \textsc{Claude~E. Shannon}. \newblock \href{https://doi.org/10.1109/TIT.1956.1056798}{The zero error capacity of a noisy channel}. \newblock \emph{{IRE} Trans. Inf. Theory}, 2(3):8--19, 1956. \bibitem[{Shitov(2019)}]{Shitov19} \textsc{Yaroslav Shitov}. \newblock \href{https://doi.org/10.4007/annals.2019.190.2.6}{Counterexamples to {H}edetniemi's conjecture}. \newblock \emph{Ann. of Math. (2)}, 190(2):663--667, 2019. \bibitem[{Ueckerdt et~al.(2022)Ueckerdt, Wood, and Yi}]{UWY22} \textsc{Torsten Ueckerdt, David~R. Wood, and Wendy Yi}. \newblock \href{https://doi.org/10.37236/10614}{An improved planar graph product structure theorem}. \newblock \emph{Electron. J. Combin.}, 29:P2.51, 2022. \bibitem[{van~den Heuvel and Wood(2018)}]{vdHW18} \textsc{Jan van~den Heuvel and David~R. Wood}. \newblock \href{https://doi.org/10.1112/jlms.12127}{Improper colourings inspired by {H}adwiger's conjecture}. \newblock \emph{J. London Math. Soc.}, 98:129--148, 2018. \bibitem[{Vesztergombi(7879)}]{Vesztergombi78} \textsc{Katalin Vesztergombi}. \newblock Some remarks on the chromatic number of the strong product of graphs. \newblock \emph{Acta Cybernet.}, 4(2):207--212, 1978/79. \bibitem[{Vesztergombi(1981)}]{Vesztergombi81} \textsc{Katalin Vesztergombi}. \newblock Chromatic number of strong product of graphs. \newblock In \emph{Algebraic methods in graph theory, {V}ol. {I}, {II} ({S}zeged, 1978)}, vol.~25 of \emph{Colloq. Math. Soc. J\'{a}nos Bolyai}, pp. 819--825. North-Holland, 1981. \bibitem[{\v{Z}erovnik(2006)}]{Zerovnik06} \textsc{Janez \v{Z}erovnik}. \newblock Chromatic numbers of the strong product of odd cycles. \newblock \emph{Math. Slovaca}, 56(4):379--385, 2006. \bibitem[{Wood(2009)}]{Wood09} \textsc{David~R. Wood}. \newblock \href{https://doi.org/10.1016/j.ejc.2008.11.010}{On tree-partition-width}. \newblock \emph{European J. Combin.}, 30(5):1245--1253, 2009. \bibitem[{Wood(2018)}]{WoodSurvey} \textsc{David~R. Wood}. \newblock \href{https://doi.org/10.37236/7406}{Defective and clustered graph colouring}. \newblock \emph{Electron. J. Combin.}, DS23, 2018. \newblock Version 1. \end{thebibliography} \end{document}
2205.04916v1
http://arxiv.org/abs/2205.04916v1
Coloring of zero-divisor graphs of posets and applications to graphs associated with algebraic structures
\documentclass[10]{amsart} \usepackage{amsbsy,amssymb,pstricks,pst-node,mathrsfs,MnSymbol} \usepackage{subfigure} \usepackage[utf8]{inputenc} \usepackage[left]{lineno} \setlength{\textheight}{9 in} \setlength{\textwidth}{6.2in} \usepackage{tikz} \usetikzlibrary{positioning} \theoremstyle{definition} \newtheorem*{tccproof}{\underline{Proof of Theorem \ref{tcc}}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{observation}[theorem]{Observation} \newtheorem{definition}[theorem]{Definition} \definecolor{aquamarine}{rgb}{0.5, 1.0, 0.83} \newcommand{\G}{\mbox{\hspace*{.01in}$G/$$\sim$}} \newcommand{\sG}{\mbox{\hspace*{.01in}$_{G/}$$_\sim$}} \raggedbottom \begin{document} \title{Coloring of zero-divisor graphs of posets and applications to graphs associated with algebraic structures }\maketitle \markboth{ Nilesh Khandekar and Vinayak Joshi}{ Coloring of Zero-divisor graphs and Applications }\begin{center}\begin{large} Nilesh Khandekar and Vinayak Joshi\end{large}\\\begin{small}\vskip.1in\emph{Department of Mathematics, Savitribai Phule Pune University,\\ Pune - 411007, Maharashtra, India}\\E-mail: \texttt{[email protected], [email protected], [email protected]}\end{small}\end{center}\vskip.2in \begin{abstract} In this paper, we characterize chordal and perfect zero-divisor graphs of finite posets. Also, it is proved that the zero-divisor graphs of finite posets and the complement of zero-divisor graphs of finite $0$-distributive posets satisfy the Total Coloring Conjecture. These results are applied to the zero-divisor graphs of finite reduced rings, the comaximal ideal graph of rings, the annihilating ideal graphs, the intersection graphs of ideals of rings, and the intersection graphs of subgroups of cyclic groups. In fact, it is proved that these graphs associated with a commutative ring $R$ with identity can be effectively studied via the zero-divisor graph of a specially constructed poset from $R$. \end{abstract}\vskip.2in \noindent\begin{Small}\textbf{Mathematics Subject Classification (2020)}: 05C15, 05C17, 05C25, 06E05, 06E40, 06D15, 06A07, 13A70 \\\textbf{Keywords}: Zero-divisor graphs, comaximal ideal graph, annihilating ideal graphs, intersection graphs, co-annihilating ideal graph, Total Coloring Conjecture, pseudocomplemented poset, reduced ring. \end{Small}\vskip.2in \vskip.25in \baselineskip 14truept \section{Introduction}\label{intro} The study of graphs associated with algebraic and ordered structures is an active area of research. Some of the interesting classes of such graphs are the zero-divisor graphs of rings, comaximal ideal graphs of rings, annihilating ideal graphs of rings, and intersection graphs of ideals of rings. Besides being interesting in their own right, these classes of graphs, on the one hand, have served as a testing ground for some of the conjectures in graph theory, while on the other hand demonstrate the rich interplay that comes with associating graphs to algebraic and ordered structures. One of the important aspects of graph theory is the notion of the coloring of graphs and the computation of a graph's chromatic number. It is well known that the coloring of graphs is an NP-Complete problem; see \cite{Kar72}. 
However, for perfect graphs, coloring can be done in polynomial time. A \textit{perfect graph} is a finite simple graph in which the chromatic number of every induced subgraph is equal to its clique number, that is, the order of the subgraph's largest clique. Efforts have been made to determine the classes of perfect graphs. For instance, it is known that the class of chordal graphs is perfect; see Dirac \cite{dirac}. The perfectness, weak perfectness and chordality of graphs associated with algebraic structures have been an active area of research; see \cite{aghapouramin}, \cite{azadi}, \cite{bagheri}, \cite{Be}, \cite{das}, \cite{mirghadim}, \cite{ashkan}, \cite{smith}, etc. Along the lines of the zero-divisor graphs of rings, we study the zero-divisor graphs of ordered sets to construct examples of perfect graphs and, at the same time, to demonstrate the rich interplay that naturally comes with studying questions of coloring. However, not much attention has been given to the interplay between the zero-divisor graphs of ordered sets and the graphs associated with algebraic structures. In this paper, we mainly explore these aspects in the last section and highlight this interplay. Besides the vertex and edge colorings of graphs, total coloring is another important coloring. The \textit{total coloring} of a graph $G$ is an assignment of colors to the vertices and the edges of $G$ such that every pair of adjacent vertices, every pair of adjacent edges, and every vertex and each edge incident to it receive different colors. The total chromatic number $\chi''(G)$ of a graph $G$ is the minimum number of colors needed in a total coloring of $G$; see \cite{vgv1}. Vizing \cite{vgv1, vgv2} and Behzad \cite{mbehzad} studied the total coloring of graphs. They both formulated the following conjecture, now known as the Total Coloring Conjecture. \vskip 5truept \noindent \textbf{Total Coloring Conjecture:} {\it Let $G$ be a finite simple undirected graph. Then $\chi''(G) =\Delta(G)+1$ or $\chi''(G)=\Delta(G)+2$.} \vskip 5truept One of our main results establishes the Total Coloring Conjecture for the zero-divisor graphs of finite posets and for the complements of the zero-divisor graphs of finite 0-distributive posets. As a consequence of these results, we are able to prove that the comaximal graphs and the complements of the comaximal graphs of commutative rings satisfy the Total Coloring Conjecture. We now briefly discuss the contents of each section. Section 2 deals with the preliminary results on ordered sets needed for the paper's later development. Sections 3 and 4 deal with the chordal zero-divisor graphs and the perfect zero-divisor graphs of ordered sets. In Section 5, we prove one of our main results, which establishes the Total Coloring Conjecture for the zero-divisor graphs of finite posets. The work in this section is a continuation of the earlier work in \cite{nkvj}. Section 6 focuses on applications of these ideas to the interplay between the zero-divisor graphs of ordered sets and various graphs associated with algebraic structures, such as the zero-divisor graphs of rings, the comaximal ideal graphs of rings, the annihilating ideal graphs of rings, and the intersection graphs of ideals of rings. In fact, we prove the following main results and their corollaries. One can refer to the subsequent sections for the terminology and notation used in the following results. \begin{theorem} \label{zdgchordal} Let $P$ be a finite poset such that $[P]$ is a Boolean lattice.
Then \textbf{(A)} $G(P)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $P$ has exactly one atom; \item $P$ has exactly two atoms with $|P_i|=1$ for some $i\in \{1,2\}$; \item $P$ has exactly three atoms with $|P_i|=1$ for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(B)} $G^c(P)$ is chordal if and only if the number of atoms of $P$ is at most $3$. \textbf{(C)} $G(P)$ is perfect if and only if $P$ has at most 4 atoms. \end{theorem} \begin{theorem}\label{zdgtcc} Let $P$ be a finite poset. Then $G(P)$ satisfies the Total Coloring Conjecture. Moreover, if $P$ is a finite 0-distributive poset, then $G^c(P)$ satisfies the Total Coloring Conjecture. \end{theorem} \begin{corollary} \label{cgchordperfect} Let $R= R_1 \times R_2 \times \cdots \times R_n$ be a ring with identity such that each $R_i$ is a local ring with finitely many ideals. Let $\mathbb{CG}^*(R)$ be the comaximal graph, $\mathbb{CAG}^*(R)$ be the co-annihilating ideal graph and $\mathbb{AG}^{*c}(R)$ be the complement of the annihilating ideal graph of $R$. Then \textbf{(A)} $\mathbb{CG}^*(R)=\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$. \textbf{(B)} $\mathbb{CG}^*(R)=\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $n=1$; \item $n=2$ and $R_i$ is a field for some $i\in \{1,2\}$; \item $n=3$ and $R_i$ is a field for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(C)} $\mathbb{CG}^{*c}(R)=\mathbb{CAG}^{*c}(R)=\mathbb{AG}^{*}(R)$ is chordal if and only if $n \leq 3$. \textbf{(D)} $\mathbb{CG}^*(R)=\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$ is perfect if and only if $n \leq 4$. \textbf{(E)} $\mathbb{CG}^*(R)$ and $\mathbb{CG}^{*c}(R)$ satisfy the Total Coloring Conjecture. Moreover, the edge chromatic number $\chi'(\mathbb{CG}^*(R))=\Delta(\mathbb{CG}^*(R))$. \textbf{(F)} Moreover, if each $R_i$ is a local Artinian principal ideal ring, then the statements $(B), (C), (D)$ and $(E)$ remain true if we replace $\mathbb{CG}^*(R)$ by the complement of the intersection graph $\mathbb{IG}^c(R)$ of ideals of $R$. \end{corollary} \section{Preliminaries} \par We begin with the necessary definitions and terminology given in Devhare et al. \cite{djl}. Also, we prove the key results (cf. Lemma \ref{property}) that are required to prove chordality, perfectness and the Total Coloring Conjecture. \vskip 5truept \begin{definition}[Devhare et al. \cite{djl}] \par Let $P$ be a poset. Given any $ A\subseteq P$, the \textit{upper cone} of $A$ is given by $A^u=\{b\in P$ $|$ $b\geq a$ for every $a\in A\}$ and the \textit{lower cone} of $A$ is given by $A^\ell=\{b\in P$ $|$ $b\leq a$ for every $a\in A\}$. If $a\in P$, then the sets $\{a\}^u$ and $\{a\}^\ell$ will be denoted by $a^u$ and $a^\ell$, respectively. By $A^{u\ell}$, we mean $\{A^u\}^\ell$. Dually, we have the notion of $A^{\ell u}$. A poset $P$ with 0 is called \textit{$0$-distributive} if $\{a,b\}^\ell=\{0\}=\{a,c\}^\ell$ implies $\{a,\{b,c\}^u\}^\ell=\{0\}$; see \cite{jw1}. Note that if $\{b,c\}^u=\emptyset$, then $\{b,c\}^{u\ell}=P$. A lattice $L$ with $0$ is said to be {\it $0$-distributive }if $a\wedge b=0$ and $a\wedge c=0$ implies $a\wedge(b\vee c)=0$. Hence it is clear that if a lattice $L$ is 0-distributive, then $L$, as a poset, is a 0-distributive poset. Dually, a lattice $L$ with $1$ is said to be \textit{$1$-distributive} if $a\vee b=1$ and $a\vee c=1$ implies $a\vee(b\wedge c)=1$.
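To see the definition of $0$-distributivity in action, the following brute-force check (a Python sketch; the two test posets, the divisor lattice of $12$ and the diamond $M_3$, are illustrative examples of ours and are not used elsewhere in this paper) tests the displayed implication for all triples $a,b,c$:
\begin{verbatim}
from itertools import product

def is_zero_distributive(P, leq, zero):
    """Brute-force check of the poset definition above:
    {a,b}^l = {0} = {a,c}^l  must imply  {a, {b,c}^u}^l = {0}."""
    low = lambda A: {x for x in P if all(leq(x, a) for a in A)}   # lower cone A^l
    up  = lambda A: {x for x in P if all(leq(a, x) for a in A)}   # upper cone A^u
    for a, b, c in product(P, repeat=3):
        if low({a, b}) == {zero} == low({a, c}):
            if low({a} | up({b, c})) != {zero}:
                return False
    return True

# Divisor lattice of 12 (least element 1, order = divisibility): 0-distributive.
D12 = [1, 2, 3, 4, 6, 12]
print(is_zero_distributive(D12, lambda a, b: b % a == 0, 1))    # True

# Diamond M_3 = {0, a, b, c, 1} with pairwise-incomparable a, b, c: not 0-distributive.
M3 = ['0', 'a', 'b', 'c', '1']
leq_M3 = lambda x, y: x == y or x == '0' or y == '1'
print(is_zero_distributive(M3, leq_M3, '0'))                    # False
\end{verbatim}
The diamond fails because $\{a,b\}^\ell=\{0\}=\{a,c\}^\ell$ while $\{b,c\}^u=\{1\}$ and $\{a,1\}^\ell=\{0,a\}\neq\{0\}$.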
A lattice $L$ is \textit{modular} if for all $a,b,c \in L$, $a \leq b$ implies \mbox{$(a\vee c)\wedge b=a\vee(c\wedge b)$; see \cite[page 132]{stern}.} \par Suppose that $P$ is a poset with $0$. If $\emptyset\neq A \subseteq P$, then the {\it annihilator} of $A$ is given by \linebreak $A^{\perp} = \{ b \in P ~~\vert ~~\{a, b\}^\ell = \{0\}~~ {\rm for~ all }~~ a \in A \}$, and if $A = \{a\}$, then we write $a^\perp=A^\perp$. An element $a\in P$ is an {\it atom} if $a >0$ and $\{b\in P$ $|$ $0<b<a\}=\emptyset$, and $P$ is called \emph{atomic} if for every $b\in P\setminus\{0\}$, there exists an atom $a\in P$ such that $a\leq b$. A poset $P$ with 0 is said to {\it section semi-complemented} (in brief SSC poset), if for $a \not \leq b$, there exists a nonzero element $c \in P$ such that $ c\leq a$ and $\{b,c\}^\ell =\{0\}$. An atomic, SSC poset is called an {\it atomistic} poset. Equivalently, $P$ is atomistic, if for $a\not\leq b$, then there is an atom $p\in P$ such that $p \leq a$ and $p \not \leq b$; see Joshi \cite{vjssc}. \par A poset $P$ is called \emph{bounded} if $P$ has both the least element $0$ and the greatest element $1$. An element $a'$ of a bounded poset $P$ is a \emph{complement} of $a\in P$ if $\{a,a'\}^\ell=\{0\}$ and $\{a,a'\}^u=\{1\}$. A \textit{pseudocomplement} of $a\in P$ is an element $b\in P$ such that $\{a,b\}^\ell=\{0\}$, and $x\leq b$ for every $x\in P$ with $\{a,x\}^\ell=\{0\}$; that is, $b$ is the pseudocomplement of $a$ if and only if $a^\perp=b^\ell$; see Venkatanarasimhan \cite{venkat} (see also Hala\v{s} \cite{halas}). It is straightforward to check that any element $a\in P$ has at most one pseudocomplement, and it will be denoted by $a^*$ (if it exists). A bounded poset $P$ is called \emph{complemented} (respectively, \textit{pseudocomplemented}) if every element $a$ of $P$ has a complement $a'$ (the pseudocomplement $a^*$). \par A poset $P$ is called \emph{distributive} if, $\{\{a\}\cup\{b,c\}^u\}^\ell=\{\{a,b\}^\ell\cup\{a,c\}^\ell\}^{u\ell}$ holds for all $a,b,c\in P$; see \cite{LR}. This definition generalizes the usual notion of a distributive lattice (i.e., a lattice is distributive in the usual sense if and only if it is a distributive poset). Moreover, a bounded poset $P$ is called \emph{Boolean}, if $P$ is distributive and complemented; see \cite{H}. Clearly, every Boolean algebra is a Boolean poset, but the converse is not true. \par It is well-known that in a Boolean poset, complementation is nothing but the pseudocomplementation (cf. \cite[Lemma 2.4]{JK}). In particular, if $P$ is Boolean, then $P$ is pseudocomplemented, and every element $x\in P$ has the unique complement $x'$. Further, every Boolean poset is atomistic (cf. \cite{jw}). \vskip 2truept The concept of zero-divisor graph of a poset is introduced in \cite{HJ}, which was modified later in \cite{LW}. \vskip 2truept \par Let $P$ be a poset with $0$. Define a \emph{zero-divisor} of $P$ to be any element of the set \linebreak $Z(P)=\{a\in P$ $|$ there exists $b\in P\setminus\{0\}$ such that $\{a,b\}^\ell=\{0\}\}$. As in \cite{LW}, the \emph{zero-divisor graph} of $P$ is the graph $G(P)$ whose vertices are the elements of $Z(P)\setminus\{0\}$ such that two vertices $a$ and $b$ are adjacent if and only if $\{a,b\}^\ell=\{0\}$. If $Z(P)\neq\{0\}$, then clearly $G(P)$ has at least two vertices, and $G(P)$ is connected with diameter at most three (\cite[Proposition 2.1]{LW}). 
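As a toy illustration of the definitions of $Z(P)$ and $G(P)$ just given, the following Python sketch computes the zero-divisor graph of the divisor poset of $30$, with $1$ playing the role of $0$ (this example is ours and is not used in the sequel):
\begin{verbatim}
from itertools import combinations

# Toy poset: divisors of 30 under divisibility; the least element "0" is 1.
P = [1, 2, 3, 5, 6, 10, 15, 30]
zero = 1
leq = lambda a, b: b % a == 0          # a <= b  iff  a divides b

def lower_cone(A):
    """A^l: the elements of P below every member of A."""
    return {x for x in P if all(leq(x, a) for a in A)}

# Z(P): elements a having some nonzero b with {a,b}^l = {0}.
Z = {a for a in P if any(lower_cone({a, b}) == {zero} for b in P if b != zero)}

# G(P): vertex set Z(P)\{0}, with a adjacent to b whenever {a,b}^l = {0}.
V = sorted(Z - {zero})
E = [(a, b) for a, b in combinations(V, 2) if lower_cone({a, b}) == {zero}]
print(V)   # [2, 3, 5, 6, 10, 15]  (30 is not a zero-divisor, hence not a vertex)
print(E)   # the coprime pairs of vertices, e.g. (2, 3), (3, 10), (5, 6), ...
\end{verbatim}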
By abuse of notation, we write $G^*(P)$ for the \emph{zero-divisor graph} of $P$ with the vertex set $P\setminus\{0,1\}$ if $P$ has the greatest element 1; if $P$ does not have 1, then the vertex set is $P\setminus\{0\}$. Further, two vertices $a$ and $b$ are adjacent in $G^*(P)$ if and only if $\{a,b\}^\ell=\{0\}$. So from the notation $G(P)$ or $G^*(P)$, the underlying vertex set of the zero-divisor graph is clear, and the adjacency relation remains the same in both graphs. \par We set $\mathcal{D}=P\setminus Z(P)$. The elements $d\in\mathcal{D}$ are the {\it dense elements} of $P$. \vskip 2truept \par For a poset $P$ with $0$, an equivalence relation $\sim$ is given on $P$ by $a \sim b$ if and only if $a^\perp = b^\perp$. The set of equivalence classes of $P$ will be denoted by $[P^\sim]=\bigl\{[a]~ \vert ~~a \in P~~\bigr\}$, where $[a] = \{x\in P ~\vert~ x \sim a\}$. Clearly, $[0]=\{0\}$, and if $d\in\mathcal{D}$ then $[d]=\mathcal{D}$; see Joshi et al. \cite{jwp} and Devhare, Joshi and LaGrange \cite{djl}. \vskip 2truept \par Note that $[P^\sim]$ is a poset under a partial order given by $[a] \leq [b]$ if and only if $b^\perp\subseteq a^\perp$. From the observation that $b^\perp\subseteq a^\perp$ whenever $a,b\in P$ with $a\leq b$, it follows that the canonical mapping $P\rightarrow [P^\sim]$ defined by $a\mapsto[a]$ is an order-preserving surjection. Furthermore, if $a$ is an atom of the poset $P$ then, for every $b\in P\setminus\{0\}$, either $a\leq b$, or $\{a,b\}^\ell=\{0\}$. It follows that if $a$ is an atom of $P$, then $[a]$ is an atom of $[P^\sim]$; however, the converse is not true. \par Let $P$ be a pseudocomplemented poset. If $a,b\in P$, then $a^*\leq b^*$ if and only if $a^*\in b^\perp$, if and only if $a^\perp=(a^*)^\ell\subseteq b^\perp$. That is, $a^*\leq b^*$ if and only if $[b]\leq[a]$ and, in particular, $a^*=b^*$ if and only if $[a]=[b]$. \par Note that a pseudocomplemented poset need not be Boolean. \vskip 5truept \par The {\it direct product} of posets $P^1,\dots,P^n$ is the poset $\textbf{P}=\prod\limits_{i=1}^nP^i$ with $\leq$ defined such that $a\leq b$ in $\textbf{P}$ if and only if $a_{i}\leq b_{i}$ (in $P^i$) for every $i\in\{1,\dots,n\}$. For any $\emptyset\neq A\subseteq \prod\limits_{i=1}^nP^i$, note that $A^u=\{b\in\prod\limits_{i=1}^nP^i$ $|$ $b_{i}\geq a_{i}$ for every $a\in A$ and $i\in\{1,\dots,n\}\}$. Similarly, $A^\ell=\{b\in\prod\limits_{i=1}^nP^i$ $|$ $b_{i}\leq a_{i}$ for every $a\in A$ and $i\in\{1,\dots,n\}\}$. Clearly, $\textbf{P}$ is a $0$-distributive poset if $Z(P^i)=\{0\}$ for every $i$. We set $A_{{q}_{_1}} = ({q}_{_1}^u)\setminus \mathcal{D}$ and define $A_{{q}_{_j}} = \Big({{q}_{_j}}^{u}\Big)\setminus\Big(\mathcal{D}\cup (\bigcup \limits_{i=1}^{j-1}{{q}_{_{i}}}^{u})\Big)$ for every $j\in\{2,\dots,n\}$. \vskip 10truept \par Throughout, $P$ denotes a poset with $0$ and $q_i$, $i \in\{1,2, \cdots, n\}$, are all the atoms of $P$, where $n \geq 2$. All graphs are finite simple graphs. The poset $ \mathbf{ P}$ is $\prod\limits_{i=1}^{n}P^i$, where $P^i$'s are finite bounded posets such that $Z(P^i)=\{0\}$ and $2\leq|P^i|$, $\forall i$. \vskip 10truept Afkhami et al. \cite{afkhami} partitioned the set $P\setminus \{0\}$ as follows. \par Let $1\leq i_1< i_2<\dots<i_k\leq n$. The notation $P_{i_1i_2\dots i_k}$ stands for the set: $$P_{i_1i_2\dots i_k}=\Bigg\{x\in P~\mathbin{\Big|}~x\in\biggl( \bigcap\limits_{s=1}^k\{q_{_{i_s}}\}^u\biggr) \mathbin{\Big\backslash}\biggl(\bigcup\limits_{j\neq i_1,i_2,\dots,i_k}\{q_{_j}\}^u\biggr)\Bigg\}.
\hfill{ \hspace{.2in} -------(\circledcirc)}$$ In \cite{afkhami}, the following observations are proved. \begin{enumerate} \item If the index sets $\{i_1,\dots,i_k\}$ and $\{j_1,\dots,j_{k'}\}$ of $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_{k'}}$, respectively, are distinct, that is, $\{i_1,\dots,i_k\}\neq \{j_1,\dots,j_{k'}\}$, then $(P_{i_1i_2\dots i_k})\cap (P_{j_1j_2\dots j_k'})=\emptyset$. \item $\displaystyle P\backslash \{0\}= \bigcupdot\limits_{\substack{k=1,\\ 1\leq i_1< i_2<\dots<i_k\leq n}}^{n} P_{i_1i_2\dots i_k}$. \end{enumerate} \par Define a relation $\approx$ on $P\setminus\{0\}$ as follows: $x\approx y$ if and only if $x,y\in P_{i_1i_2\dots i_k}$ for some partition $P_{i_1i_2\dots i_k}$ of $P\setminus \{0\}$. It is easy to observe that $\approx$ is an equivalence relation. The following result proves that the equivalence relations $\sim$ and $\approx$ are the same. \end{definition} \vskip 5truept \begin{lemma}\label{eqsame} The equivalence relations $\sim$ and $\approx$ are same on $P\setminus \{0\}$. \end{lemma} \begin{proof} Let $t\in P_{i_1i_2\dots i_k}=\Bigg\{x\in P~\mathbin{\Big|}~x\in\biggl( \bigcap\limits_{s=1}^k\{q_{_{i_s}}\}^u\biggr) \mathbin{\Big\backslash}\biggl(\bigcup\limits_{j\neq i_1,i_2,\dots,i_k}\{q_{_j}\}^u\biggr)\Bigg\}$. Then it is easy check that, \linebreak $t^\perp =\biggl(\bigcup\limits_{j\neq i_1,i_2,\dots,i_k}\bigl\{q_{_j}\bigr\}^u\biggr)\mathbin{\Big\backslash} \biggl(\bigcup\limits_{s=1}^k\bigl\{q_{_{i_s}}\bigr\}^u\biggr)$. From this, it is clear that, if $x,y\in P_{i_1i_2\dots i_k}$, then $x^\perp=y^\perp$. Thus $x\approx y$ implies that $x\sim y$. Conversely, assume that $x\sim y$. Then $x^\perp=y^\perp$. We have to prove that $x\approx y$, that is, $x,y\in P_{i_1i_2\dots i_k}$ for some partion $P_{i_1i_2\dots i_k}$ of $P\setminus \{0\}$. Suppose on contrary, there exist the index sets $\{i_1,\dots,i_k\}$ and $\{j_1,\dots,j_{k'}\}$ of $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_{k'}}$, respectively, that are distinct such that $x\in P_{i_1i_2\dots i_k}$ and $y\in P_{j_1j_2\dots j_{k'}}$. Clearly, $x\neq y$. Since $\{i_1,\dots,i_k\}$ and $\{j_1,\dots,j_{k'}\}$ are distinct, then $i_p\notin \{j_1,\dots,j_{k'}\}$ for some $p\in \{1,\dots,k\}$ or $j_q\notin \{i_1,\dots,i_k\}$ for some $q\in \{1,\dots,k'\}$. Without loss of generality assume that $i_p\notin \{j_1,\dots,j_{k'}\}$ for some $p\in \{1,\dots,k\}$. Therefore, the atom $q_{_{i_p}}\notin x^\perp$ but the atom $q_{_{i_p}}\in y^\perp$. Hence $x^\perp\neq y^\perp$, a contradiction. Therefore $x\approx y$. Thus $x\sim y$ implies that $x\approx y$. \end{proof} \begin{remark} From the above discussion, we conclude that the set $P_{i_1i_2\dots i_k}$ for some $\{i_1,i_2\dots,i_k\}\subseteq \{1,2,\dots,n\}$, is nothing but the equivalence class, say $[a]$, where $a\in P_{i_1i_2\dots i_k}$, under the equivalence relation $\sim$, and vice-versa. The set of equivalence classes under $\approx$ of $P\setminus \{0\}$ will be denoted by $$[P^\approx]'=\{~P_{i_1i_2\dots i_k}~ \vert~~ \{i_1,i_2\dots,i_k\}\subseteq \{1,2,\dots,n\}, \text{and } P_{i_1i_2\dots i_k}\neq\emptyset~~\}.$$ Now, we set $[P^\approx]= [P^\approx]'\cup P_0$. Define a relation $\leq$ on $[P^\approx]$ as follows. $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ if and only if $b^\perp\subseteq a^\perp$, for some $a\in P_{i_1i_2\dots i_k}$ and for some $ b\in P_{j_1j_2\dots j_m}$, where $P_{i_1i_2\dots i_k}, P_{j_1j_2\dots j_m}\in [P^\approx]' $ and $P_0\leq P_{i_1i_2\dots i_k}$ for all $\{i_1,i_2\dots,i_k\}\subseteq \{1,2,\dots,n\}$. 
It is not very difficult to prove that $([P^\approx], \leq)$ is a poset. The least element of $([P^\approx], \leq)$ is $P_0$ and if $P$ has the greatest element 1, then the greatest element of the poset $([P^\approx],\; \leq)$ is $P_{12\dots n}$ (see Proposition \ref{porder} for details). In view of Lemma \ref{eqsame}, the posets $([P^\approx], \leq)$ and $([P^\sim],\leq)$ are same. \textbf{Henceforth, without any distinction, we write $[P^\approx]=[P^\sim]$ by simply $[P]$.} \vskip 5truept Let $P$ is a poset with 0. Since $[P]$ is a poset with the least element $[0]=P_0$, and except $[d]$ (where $d\in \mathcal{D}$), every element of $[P]$ is a zero-divisor. Note that $\mathcal{D}$ may be empty. Hence the zero-divisor graphs $G([P])$ and $G^*([P])$ of the poset $[P]$ are same, that is, $G([P])=G^*([P])$. Hence afterwards, we write $G([P])$ for the zero-divisor graph of $[P]$. Clearly, $a$ and $b$ are adjacent in $G(P)$ if and only if $[a]$ and $[b]$ are adjacent in $G([P])$. More about the inter relationship between the properties of $G(P)$ and $G([P])$ are mentioned in Lemma \ref{property}. \end{remark} For this purpose, we need the following definition. \vskip 5truept \begin{definition}[West \cite{west}] A set $I$ of vertices of a graph $G$ is said to be \textit{independent} if no two vertices $u$ and $v$ in $I$ are adjacent in $G$. If $|I|=n$, then we denote $I_n$, the independent set on $n$ vertices. The maximum cardinality of an independent set of vertices of $G$ is called the \textit{vertex-independence number} and is denoted by $\alpha(G)$. \end{definition} \begin{proposition}\label{porder} Let $P$ be a poset. Then $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ in $[P]$ if and only if $\{i_1,i_2,\dots, i_k\}\subseteq \{j_1,j_2,\dots, j_m\}$. \end{proposition} \begin{proof} Let $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ in $[P]$. This gives that $b^\perp\subseteq a^\perp$, for some $a\in P_{i_1i_2\dots i_k}$, and for some $ b\in P_{j_1j_2\dots j_m}$. It is easy to verify that $q_{_{l_1}},q_{_{l_2}},\dots,q_{_{l_s}}$ are the atoms in $b^\perp$, where $\{l_1,\dots,l_s\}=\{1,\dots,n\}\setminus \{j_1,\dots,j_m\}$. Note that for every $t$, $l_t\notin\{j_1,\dots,j_m\}$, otherwise $q_{_{l_t}}\in b^\ell \cap b^\perp=\{0\},$ a contradiction. Since $q_{_{l_1}},q_{_{l_2}},\dots,q_{_{_{l_s}}}\in b^\perp\subseteq a^\perp$. From this, we have $l_1,\dots,l_s\notin \{i_1,\dots,i_k\}$. Hence, we get $\{i_1,\dots,i_k\}\subseteq \{j_1,\dots,j_m\}$. Conversely, assume that $\{i_1,\dots,i_k\}\subseteq \{j_1,\dots,j_m\}$. We want to prove that $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ in $[P]$. For this, let $a\in P_{i_1i_2\dots i_k}$ and $b\in P_{j_1j_2\dots j_m}$. Then $a^\perp =\biggl(\bigcup\limits_{l\neq i_1,i_2,\dots,i_k}\bigl\{q_{_l}\bigr\}^u\biggr)\mathbin{\Big\backslash} \biggl(\bigcup\limits_{s=1}^k\bigl\{q_{_{i_s}}\bigr\}^u\biggr)$ and $b^\perp =\biggl(\bigcup\limits_{l\neq j_1,j_2,\dots,j_k}\bigl\{q_{_l}\bigr\}^u\biggr)\mathbin{\Big\backslash} \biggl(\bigcup\limits_{s=1}^m\bigl\{q_{_{j_s}}\bigr\}^u\biggr)$. Since $\{i_1,\dots,i_k\}\subseteq \{j_1,\dots,j_m\}$. Then we get, $b^\perp\subseteq a^\perp$. Hence, $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ in $[P]$.\end{proof} \vskip 5truept \begin{corollary}\label{adj} Let $G([P])$ be the zero-divisor graph of $[P]$. Then $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ are adjacent in $G([P])$ if and only if $\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}=\emptyset$. 
\end{corollary} \begin{proof} Let $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ be adjacent in $G([P])$. Suppose on the contrary that $\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}\neq \emptyset$, i.e., there exists $l\in \{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}$. By Proposition \ref{porder}, $P_l\leq P_{i_1i_2\dots i_k}$ and $P_l\leq P_{j_1j_2\dots j_m}$. This gives that $\{P_0,P_l\}\subseteq\{P_{i_1i_2\dots i_k}, ~ P_{j_1j_2\dots j_m}\}^\ell$, a contradiction, as $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ are adjacent in $G([P])$. Thus, $\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}=\emptyset$. Conversely, assume that $\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}=\emptyset$. We want to prove that $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ are adjacent in $G([P])$, i.e., $\{P_{i_1i_2\dots i_k}, ~ P_{j_1j_2\dots j_m}\}^\ell=\{P_0\}$. Suppose not; then there exists $P_l\not=P_0$ such that $P_l\in\{P_{i_1i_2\dots i_k},~ P_{j_1j_2\dots j_m}\}^\ell$. By Proposition \ref{porder}, $l\in\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}$, a contradiction. Thus, $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ are adjacent in $G([P])$. \end{proof} \vskip 5truept \begin{lemma} Let $q$ be an atom of a $0$-distributive poset $P$. If $\{q,x_i\}^\ell=\{0\}$ for every $i$, $1\leq i\leq n$, then $\{q,\{x_1,\dots,x_n\}^u\}^\ell=\{0\}$. Moreover, there exists $d\in\{x_1,\dots,x_n\}^u$ such that $q\nleq d$. \end{lemma} \begin{proof} For $n=2$, the result follows from the definition of 0-distributivity. \par Now, we prove the result for $n=3$. For this, let $\{q,x_1\}^\ell=\{q,x_2\}^\ell=\{q,x_3\}^\ell=\{0\}$. By 0-distributivity, this gives that $\{q,\{x_1,x_2\}^u\}^\ell=\{0\}$. Hence $q\notin \{x_1,x_2\}^{u\ell}$. Therefore, there exists $d_1\in \{x_1,x_2\}^u$ such that $q\nleq d_1$. Thus, $\{q,d_1\}^\ell=\{0\}$. Similarly, there exists $d_2\in \{x_1,x_3\}^u$ such that $q\nleq d_2$ and $\{q,d_2\}^\ell=\{0\}$. Since $P$ is $0$-distributive, we have $\{q,\{d_1,d_2\}^u\}^\ell=\{0\}$. Therefore, there exists $d_3\in \{d_1,d_2\}^u$ such that $q\nleq d_3$. Hence $\{q,d_3\}^\ell=\{0\}$. Clearly, $x_1,x_2\leq d_1\leq d_3$ and $x_1,x_3\leq d_2\leq d_3$. This together implies that $x_1,x_2,x_3\leq d_3$. Thus, $d_3\in\{x_1,x_2,x_3\}^u$ and $q\nleq d_3$. Therefore we have $q\notin \{x_1,x_2,x_3\}^{u\ell}$ and $\{q,d_3\}^\ell=\{0\}$. This gives that $\{0\}=q^\ell\cap\{x_1,x_2,x_3\}^{u\ell}$, that is, $\{q,\{x_1,x_2,x_3\}^u\}^\ell=\{0\}$. In this case, $d=d_3$, which proves the moreover part. \par Repeating this procedure, we can prove the result for any finite $n$. \end{proof} \vskip 5truept \begin{corollary}\label{pseudoatom} Let $ q_{_1} , q_{_2} ,\dots, q_{_n} $ be all the atoms of a $0$-distributive poset $P$.\\ Then $\{ q_{_i}, \{ q_{_1}, \dots, q_{_{i-1}}, \; q_{_{i+1}},\dots, q_{_n}\}^u\}^\ell=\{0\}$ for all $i$. Moreover, $P_{1\dots(i-1)(i+1)\dots n}\neq\emptyset$ for all $i$. \end{corollary} \vskip 5truept \begin{remark} If $P$ is a $0$-distributive poset with $n$ atoms, then $P_{i_1i_2\dots i_k}$ may be empty for $2\leq k\leq n-2$, where $\{i_1,i_2,\dots, i_k\}\subseteq \{1,\dots,n\}$. Let $P$ be a $0$-distributive poset with $4$ atoms (as shown in Figure \ref{exa1}). Then $P_{ij}=\emptyset$ for all $1\leq i<j\leq 4$.
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =0.7] \centering \draw [fill=black] (-1.5,0) circle (.1); \draw [fill=black] (-.5,0) circle (.1); \draw [fill=black] (1.5,0) circle (.1); \draw [fill=black] (.5,0) circle (.1); \draw [fill=black] (-1.5,1.5) circle (.1); \draw [fill=black] (-.5,1.5) circle (.1); \draw [fill=black] (1.5,1.5) circle (.1); \draw [fill=black] (.5,1.5) circle (.1); \draw [fill=black] (0,-1.5) circle (.1); \draw [fill=black] (0,3) circle (.1); \draw (0,-1.5)--(-1.5,0)--(-1.5,1.5)--(0,3)--(-0.5,1.5)--(-0.5,1.5)--(-0.5,0)--(0,-1.5)--(0.5,0)--(0.5,1.5)--(0,3)--(1.5,1.5)--(1.5,0)--(0,-1.5); \draw (-1.5,0)--(-0.5,1.5)--(1.5,0)--(1.5,1.5)--(.5,0)--(0.5,1.5); \draw (-1.5,0)--(0.5,1.5); \draw (-.5,0)--(1.5,1.5); \draw (0.5,1.5)--(1.5,0); \draw (-0.5,0)--(-1.5,1.5)--(0.5,0); \draw node [above] at (-1.5,1.5) {$q_{4}^*$}; \draw node [above] at (-0.6,1.5) {$q_{3}^*$}; \draw node [above] at (0.6,1.5) {$q_{2}^*$}; \draw node [above] at (1.5,1.5) {$q_{1}^*$}; \draw node [below] at (-0.6,0) {$q_{2}$}; \draw node [below] at (-1.5,-0.05) {$q_{1}$}; \draw node [below] at (0.65,0) {$q_{3}$}; \draw node [below] at (1.5,-0.05) {$q_{4}$}; \draw node [below] at (0,-1.7) {$0$}; \draw node [above] at (0,3) {$1$}; \end{tikzpicture} \end{center} \caption{0-distributive poset $P$}\label{exa1} \end{figure} \end{remark} \begin{proposition}\label{atompseudo} If $P$ is a $0$-distributive poset, then $P_1,\dots,P_n$ are the atoms of $[P]$ and $ P_{1\dots(i-1)(i+1)\dots n}$ is the pseudocomplement of $P_i$ in $[P]$ for all $i, 1\leq i\leq n$. \end{proposition} \begin{proof} By Proposition \ref{porder}, $P_1,\dots,P_n$ are the atoms of $[P]$. We prove that $ P_{1\dots(i-1)(i+1)\dots n}$ is the pseudocomplement of $P_i$ in $[P]$, for all $i$. Clearly, $\Bigl\{P_i, \; P_{1\dots(i-1)(i+1)\dots n}\Bigr\}^\ell =\{P_0\}$. Assume that $P_{j_1j_2\dots j_m}\in [P]$ such that $\Bigl\{P_i,P_{j_1j_2\dots j_m}\Bigr\}^\ell=\{P_0\}$. By Corollary \ref{adj}, $i\notin \{j_1,j_2,\dots,j_m\}$. Since $\{i\}\cup\{1,\dots,i-1,i+1,\dots,n\}=\{1,\dots,n\}$ and $\{j_1,j_2,\dots,j_m\}\subseteq \{1,\dots,n\}$. Then $\{j_1,j_2,\dots,j_m\}\subseteq \{1,\dots,i-1,i+1,\dots,n\}$. By Proposition \ref{porder}, we have $P_{j_1j_2\dots j_m}\leq P_{1\dots(i-1)(i+1)\dots n}$. This proves that $ P_{1\dots(i-1)(i+1)\dots n}$ is pesudocomplement of $P_i$ in $[P]$, for all $i$.\end{proof} \vskip 5pt The following statements 1-4 are essentially proved in \cite{djl} (see Lemma 4.5, Lemma 4.2), statements 5-7 are essentially proved in \cite{nkvj} (see Lemma 2.1 (6), (8), respectively). We write these statements in terms of $P_{i_1i_2\dots i_k}$. These properties will be used frequently in the sequel. \begin{lemma}\label{property} The following statements are true. \begin{enumerate} \item If $q_{_1},q_{_2},\dots,q_{_n}$ are distinct atoms of ${ P}$, then $[ {q}_{_1} ],\dots,[ {q}_{_n} ]$ are distinct atoms of $[{ P}]$. Note that $[q_{_i}]=P_{i}$, for every $i\in\{1,\dots,n\}$ . \vskip 5truept \item If $ {a}\leq {b}$ in ${ P}$, then $[{a} ]\leq [{b} ]$ in $[{ P}]$. Moreover, $P_{i_1i_2\dots i_k}\leq P_{j_1j_2\dots j_m}$ in $[P]$ if and only if $\{i_1,i_2,\dots, i_k\}\subseteq \{j_1,j_2,\dots, j_m\}$. \vskip 5truept \vskip 5truept \item $\{{a,b} \}^\ell=\{{0} \}$ in ${ P}$ if and only if $\{[{a} ],[{b} ]\}^\ell=\{[{0} ]\}$ in $[{ P}]$. Note that the lower cones are taken in the respective posets. 
Moreover, $P_{i_1i_2\dots i_k}$ and $P_{j_1j_2\dots j_m}$ are adjacent in $G([P])$ if and only if $\{i_1,i_2,\dots, i_k\}\cap \{j_1,j_2,\dots, j_m\}=\emptyset$. Further, $ {a} \in V(G({ P}))$ if and only if $[{a} ] \in V(G([{ P}]))$. \vskip 5truept \item Let $[{a} ]\in V(G([{ P}]))$. Then for any ${x,y}\in [{a} ]$, $\{{x,y} \}^\ell\neq \{{0} \}$ in ${ P}$. Hence the vertices of $[{a} ]$ form an independent set in $G({ P})$. Further, if $\{[{a} ],[{b} ]\}^\ell=\{[{0} ]\}$ in $[{ P}]$, then for any ${x}\in [{a} ]$ and any $ {y}\in [{b} ]$, $\{{x,y} \}^\ell=\{{0} \}$ in ${ P}$. In particular, if $[{a} ]$ and $ [{b}]$ are adjacent in $G({ [P]})$ with $|[{a} ]|=m$ and $|[{b} ]|=n$, then the vertices of $[{a} ]$ and $[{b} ]$ form an induced complete bipartite subgraph $K_{m,n}$ of $G({ P})$. Moreover, for any $x, y \in [a]$, deg$_{G(P)}(x)= $ deg$_{G(P)}(y)$. \vskip 5truept \item If ${q}_{_1},\dots, {q}_{_n}$ are the atoms of ${ P}$, then $A_{{ q}_{_i}}$ is an independent set of vertices of $G({P})$ for every $i\in\{1,2,\dots,n\}$, and $V(G({P}))=\bigcupdot A_{{q}_{_{i}} }$. Also, $q_{_i}^u\setminus \mathcal{D}$ is an independent set of vertices of $G(P)$ for every $i\in\{1,\dots,n\}$. \vskip 5truept \item The induced subgraph of $G([P])$ on the set $\{P_1,P_2,\dots,P_n\}$ is a complete graph on $n$ vertices and the induced subgraph of $G(P)$ on the set $\bigcup\limits_{i=1}^{n}P_i$ is a complete $n$-partite graph. Therefore the induced subgraph of $G([P])$ on the set $\{P_1,\dots,P_n,P_{23\dots n},\dots, P_{12\dots (n-1)}\}$ consists of a complete graph $K_n$ on $n$ vertices together with exactly one pendent vertex attached to each vertex of $K_n$ (see Figure \ref{fig1} in the case $n=3$), provided $P_{23\dots n},\dots, P_{12\dots (n-1)}$ are nonempty sets. \item Let $P$ be a $0$-distributive poset. Then $P_i$ is an atom of $[P]$ and $P_{1\dots(i-1)(i+1)\dots n}$ is the pseudocomplement of $P_i$ in $[P]$ for all $i$. Moreover, in $G([P])$ the vertex $P_{1\dots(i-1)(i+1)\dots n}$ is adjacent only to $P_i$, that is, $P_{1\dots(i-1)(i+1)\dots n}$ is the unique pendent vertex attached to $P_i$ in $G([P])$, for all $i$. Therefore the set $\{P_{23\dots n}, P_{13\dots n},\dots, P_{12\dots(n-1)}\}$ is an independent set of vertices of $G([P])$, that is, the induced subgraph of $G^c([P])$ on the set $\{P_{23\dots n}, P_{13\dots n},\dots, P_{12\dots(n-1)}\}$ is a complete graph on $n$ vertices. \end{enumerate} \end{lemma} \noindent\textbf{Notation:} Let $q_{_1},q_{_2},\dots,q_{_n}$ be all the atoms of a $0$-distributive poset $P$. Then $[q_1]=P_1,\dots,[q_n]=P_n$ are the atoms of $[P]$ and $[q_i]^*= P_{1\dots(i-1)(i+1)\dots n}$ is the pseudocomplement of $[q_i]=P_i$ in $[P]$ for all $i$. Moreover, one can check that if $[P]$ has the greatest element, then $[q_i]^*= P_{1\dots(i-1)(i+1)\dots n}$ is the complement of $[q_i]=P_i$ in $[P]$ for all $i$. \vskip 5truept The following result is useful for proving that a finite lattice $L$ is pseudocomplemented: one only needs to check whether all the atoms have pseudocomplements. \begin{lemma}[{Chameni-Nembua and Monjardet \cite[Lemma 3]{cm}}]\label{mon} A finite lattice $L$ is pseudocomplemented if and only if each atom of $L$ has a pseudocomplement. \end{lemma} \begin{lemma}[{Joshi and Mundlik \cite[Lemma 2.5]{jm}}]\label{ssc} The poset $[P]$ of all equivalence classes of a poset $P$ with 0 is an SSC poset. \end{lemma} \begin{theorem}[Janowitz \cite{jan}]\label{boolean} Every pseudocomplemented, SSC lattice is Boolean.
\end{theorem} \begin{theorem}\label{nboolean} Let $L$ be a $0$-distributive bounded lattice with finitely many atoms. Then $[L]$ is a Boolean lattice. \end{theorem} \begin{proof} Let $L$ be a $0$-distributive bounded lattice with finitely many atoms $q_{_1},q_{_2},\dots,q_{_n}$. Then, by Lemma \ref{ssc}, $[L]$ is SSC with $P_1,P_2,\dots,P_n$ as atoms of $[L]$. Clearly, $[L]$ is an atomic lattice. Since $[L]$ is also SSC, we observe that $[L]$ is an atomistic lattice with $n$ atoms. Since every element of an atomistic lattice is a join of the atoms below it, $[L]$ has at most $2^n$ elements. Thus $[L]$ is a finite atomistic lattice. By Corollary \ref{pseudoatom}, the sets $P_{23\dots n},P_{13\dots n}, \dots, P_{12\dots(n-1)}$ are nonempty. Hence these sets are elements of $[L]$. By Lemma \ref{property}(7), $P_{1\dots (i-1)(i+1)\dots n}$ is the pseudocomplement of the atom $P_i$ for all $i$. By Lemma \ref{mon}, $[L]$ is pseudocomplemented. Hence, by Theorem \ref{boolean}, $[L]$ is a Boolean lattice. \end{proof} \vskip 5truept \begin{remark} Note that the above result fails if we remove the condition that the number of atoms is finite. For this, consider the set $L=\{X \subseteq \mathbb{N} ~|~ |X| < \infty\} \cup \{\mathbb{N}\}$. Then $L$ is a bounded $0$-distributive lattice under set inclusion as the partial order. Since $L$ is a $0$-distributive, atomistic lattice, $[L] \cong L$ (cf. \cite[Theorem 2.4]{jm}) is a lattice. However, $[L]$ is not Boolean. \end{remark} The following remark gives the relation between $G(P)$ and $G^*(P)$. For this purpose, we need the following definitions. \begin{definition}[\cite{west}] The \textit{join} of two graphs $G$ and $H$ is the graph formed from disjoint copies of $G$ and $H$ by connecting each vertex of $G$ to each vertex of $H$. We denote the join of the graphs $G$ and $H$ by $G\vee H$. The \textit{disjoint union} of two or more graphs is the graph whose vertex set is the disjoint union of their vertex sets and whose edge set is the disjoint union of their edge sets; it is also called the graph sum. Any disjoint union of two or more nonempty graphs is necessarily disconnected. If $G$ and $H$ are two graphs, then $G+H$ denotes their disjoint union. \end{definition} \begin{remark} \label{zdgczdg} Let $P$ be a poset. Then $G^*(P)=G(P)+I_m$ and $G^{*c}(P)=G^c(P)\vee K_m$, where $m=|\mathcal{D}\setminus \{1\}|$ if $P$ has the greatest element 1, and $m=|\mathcal{D}|$ otherwise. If $P$ has $n$ atoms, then $|\mathcal{D}|=|P_{12\dots n}|$. \end{remark} \vskip 5truept \section{Chordal zero-divisor graphs} A \textit{chord} of a cycle $C$ of a graph $G$ is an edge that is not in $C$ but has both its end vertices in $C$. A graph $G$ is \textit{chordal} if every cycle of length at least $4$ has a chord, \textit{i.e.}, $G$ is chordal if and only if it does not contain an induced cycle of length at least $4$. We assume that a null graph (without vertices and edges) is chordal. Let $G$ be a finite graph. The set $\{u\in V(G)~~|~~u-v\in E(G)\}$ is called the neighborhood of a vertex $v$ in the graph $G$ and is denoted by $N_G(v)$. If there is no ambiguity about the graph under consideration, we write $N(v)$. Define a relation on $G$ such that $u\simeq v$ if and only if either $u=v$, or $u-v\in E(G)$ and $N(u)\setminus \{v\}= N(v)\setminus \{u\}$.
Clearly, $\simeq$ is an equivalence relation on $V(G)$. The equivalence class of $v$ is the set $\{u\in V(G)~~|~~u\simeq v\}$, denoted by $[v]^\simeq$. Denote the set $\{[v]^\simeq~~|~~ v\in V(G)\}$ by $G_{red}$. Define $[u]^\simeq-[v]^\simeq$ to be an edge in $E(G_{red})$ if and only if $u-v\in E(G)$, where $[u]^\simeq\neq [v]^\simeq$. D. F. Anderson and John LaGrange \cite{adlag} studied a few equivalence relations on graphs and rings, in particular on zero-divisor graphs of rings. One of them is defined on a finite simple graph $G$ as follows: $u~\Theta~ v$ if and only if $N(u)\setminus \{v\}= N(v)\setminus \{u\}$. Note that the relations $\simeq$ and $\Theta$ on a finite simple graph are different. \begin{remark}\label{gred} It is easy to observe that if $G^c(P)$ is the complement of the zero-divisor graph $G(P)$, then $(G^c(P))_{red}=G^c([P])$. \end{remark} \begin{theorem} \label{gchord} Let $G$ be a finite graph. Then $G$ is chordal if and only if $G_{red}$ is chordal. \end{theorem} \begin{proof} Let $G$ be a chordal graph. Suppose, on the contrary, that $G_{red}$ is not chordal. Hence $G_{red}$ contains an induced cycle of length at least $4$. Let $[a_1]-[a_2]-\dots-[a_n]-[a_1]$, where $n\geq 4$, be an induced cycle of length $n$ in $G_{red}$. By the definition of $G_{red}$, the vertices $a_1-a_2-\dots-a_n-a_1$ form an induced cycle of length $n \geq 4$ in $G$, a contradiction. Thus $G_{red}$ is chordal. \par Conversely, suppose that $G_{red}$ is a chordal graph. Suppose, on the contrary, that $G$ is not chordal. Hence, $G$ contains an induced cycle of length at least $4$. Let $a_1-a_2-\dots-a_n-a_1$ be a smallest induced cycle of length $n\geq 4$ in $G$. This gives that $\{a_{(i-1)\bmod n},a_{(i+1)\bmod n}\}\subseteq N(a_i)$. We first prove that $[a_i]\neq[a_j]$ for $i\neq j$. Clearly, $a_i\not= a_j$. If $[a_i]=[a_j]$ for some $i\neq j$, then by the definition of the equivalence relation, $a_i-a_j\in E(G)$ and $N(a_i)\setminus \{a_j\} =N(a_j)\setminus \{a_i\}$. If $a_j \notin\{a_{i-1}, a_{i+1}\}$, then this implies that $\{a_{(i-1)\bmod n},a_{(i+1)\bmod n}\}\subseteq N(a_j)$, that is, $a_j$ is adjacent to $a_i$, $a_{(i-1)\bmod n}$ and $a_{(i+1)\bmod n}$, a contradiction to the minimality of the length of the cycle. Thus, in this case, $[a_i]\neq[a_j]$ for $i\neq j$. Now, assume that $a_j \in\{a_{i-1}, a_{i+1}\}$. Without loss of generality, $a_j=a_{i+1}$. Since $\{a_{i-1},a_{j}\}\subseteq N(a_i)$ and $N(a_i)\setminus \{a_j\} =N(a_j)\setminus \{a_i\}$, we have $a_{i-1}\in N(a_j)$, that is, $a_j$ is adjacent to both $a_i$ and $a_{i-1}$, a contradiction to the minimality of the length of the cycle. Thus, in this case also, $[a_i]\neq[a_j]$ for $i\neq j$. Therefore, in both cases, $[a_i]\neq[a_j]$ for $i\neq j$. By the definition of $G_{red}$, we have an induced cycle $[a_1]-[a_2]-\dots-[a_n]-[a_1]$ of length $n\geq 4$ in $G_{red}$, a contradiction. Thus, $G$ is a chordal graph. \end{proof} In view of Remark \ref{gred} and Theorem \ref{gchord}, we have the following corollary. \begin{corollary} Let $P$ be a finite poset. Then $G^c(P)$ is a chordal graph if and only if $G^c([P])$ is a chordal graph. \end{corollary} \begin{remark}\label{obs1} It is easy to observe that $G+ I_m$ is a chordal graph if and only if $G$ is chordal if and only if $G\vee K_m$ is chordal. In particular, $G(P)$ is chordal if and only if $G^*(P)$ is chordal. Also, $G^c(P)$ is chordal if and only if $G^{*c}(P)$ is a chordal graph.
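One way to see this: an isolated vertex lies on no cycle, and a vertex of the joined $K_m$ is adjacent to every other vertex of $G\vee K_m$ and hence cannot lie on an induced cycle of length at least $4$; thus every induced cycle of length at least $4$ in $G+I_m$ or in $G\vee K_m$ lies entirely in $G$. Conversely, $G$ is an induced subgraph of both $G+I_m$ and $G\vee K_m$, so any induced cycle of length at least $4$ in $G$ survives in these graphs.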
\end{remark} It should be noted that if a poset $P$ has exactly one atom, then $G^*(P)$ is an empty graph (without edges) on $|P\setminus \{0,1\}|$ vertices if $P$ has the greatest element, and on $|P\setminus \{0\}|$ vertices otherwise. However, if $P$ has exactly one atom, the zero-divisor graph $G(P)$ is a null graph (without vertices and edges), which we assume to be a chordal graph. Thus, with this preparation, we are ready to prove statements $\textbf{(A)}$ and $\textbf{(B)}$ of our first main result. \begin{proof}[\underline{\underline{Proof of Theorem \ref{zdgchordal}}}] \textbf{(A)} Since $P$ is a finite poset such that $[P]$ is Boolean, we have $P_{i_1i_2\dots i_k}\neq \emptyset$ for any nonempty set $\{i_1,i_2,\dots, i_k\}\subsetneqq \{1,2,\dots,n\}$. Suppose that $G(P)$ is chordal. We prove that the number $n$ of atoms of $P$ is $\leq 3$. Suppose, on the contrary, that $n\geq 4$. Then, using statements (3) and (5) of Lemma \ref{property}, the vertices $q_1\in P_1$, $q_2\in P_2$, $x_{14}\in P_{14}$ and $x_{23}\in P_{23}$ yield an induced cycle of length $4$, as shown in Figure \ref{chordal1}(A), a contradiction. Thus $n\leq 3$. We discuss the following three cases. If $n=1$, then $G(P)$ is a null graph, and hence $G(P)$ is chordal, by our assumption on null graphs. Now, assume that $n=2$. Then $G(P)$ is a complete bipartite graph $K_{m_1,m_2}$, where $|P_i|=m_i$ for $i\in \{1,2\}$. We show that one of $m_1$ and $m_2$ is $1$. If not, then $m_i\geq 2$ for all $i\in \{1,2\}$. Let $x_{11},x_{12}\in P_1$ and let $x_{21},x_{22}\in P_2$. Figure \ref{chordal1}(B) shows an induced cycle $x_{11} - x_{21} - x_{12} - x_{22} - x_{11}$ of length $4$, a contradiction. This proves that one of $m_1$ and $m_2$ is $1$. Let $n=3$. We have to show that $|P_i|=1$ for all $i\in \{1,2,3\}$. If not, then $|P_i|\geq 2$ for some $i\in \{1,2,3\}$. Without loss of generality, we assume that $|P_1|\geq 2$. Let $x_{11}, x_{12}\in P_1$, $x_{23}\in P_{23}$, and $q_2\in P_2$. Figure \ref{chordal1}(C) shows an induced cycle $x_{11} - x_{23} - x_{12} - q_2 - x_{11}$ of length $4$, a contradiction. This proves that $|P_i|=1$ for all $i\in \{1,2,3\}$.
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =.8] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1); \draw [fill=black] (-1,2) circle (.1); \draw [fill=black] (1,2) circle (.1); \draw (-1,0) --(1,0)--(1,2)--(-1,2)--(-1,0); \draw node [below] at (-1,0) {$q_1$}; \draw node [below] at (1,0) {$q_2$}; \draw node [above] at (1,2) {$x_{14}$}; \draw node [above] at (-1,2) {$x_{23 }$}; \draw node [below] at (0,-1) {$(A)$}; \begin{scope}[shift={(4,0)}] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1); \draw [fill=black] (-1,2) circle (.1); \draw [fill=black] (1,2) circle (.1); \draw (-1,0) --(1,0)--(1,2)--(-1,2)--(-1,0); \draw node [below] at (-1,0) {$x_{11}$}; \draw node [below] at (1,0) {$x_{21}$}; \draw node [above] at (1,2) {$x_{12}$}; \draw node [above] at (-1,2) {$x_{22 }$}; \draw node [below] at (0,-1) {$(B)$}; \end{scope} \begin{scope}[shift={(8,0)}] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1); \draw [fill=black] (-1,2) circle (.1); \draw [fill=black] (1,2) circle (.1); \draw (-1,0) --(1,0)--(1,2)--(-1,2)--(-1,0); \draw node [below] at (-1,0) {$x_{11}$}; \draw node [below] at (1,0) {$x_{23}$}; \draw node [above] at (1,2) {$x_{12}$}; \draw node [above] at (-1,2) {$q_2$}; \draw node [below] at (0,-1) {$(C)$}; \end{scope} \begin{scope}[shift={(13,0)}] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1);\draw [fill=black] (0,2) circle (.1); \draw [fill=black] (2,.5) circle (.1); \draw [fill=black] (-2,.5) circle (.1); \draw [fill=black] (-0.5,3) circle (.1); \draw [fill=black] (2,-.5) circle (.1); \draw [fill=black] (-2,-.5) circle (.1); \draw [fill=black] (0.5,3) circle (.1); \draw [fill=black] (0.2,3) circle (.025); \draw [fill=black] (-0.2,3) circle (.025); \draw [fill=black] (0,3) circle (.025); \draw [fill=black] (-2,0.2) circle (.025); \draw [fill=black] (-2,-0.2) circle (.025); \draw [fill=black] (-2,0) circle (.025); \draw [fill=black] (2,0.2) circle (.025); \draw [fill=black] (2,-0.2) circle (.025); \draw [fill=black] (2,0) circle (.025); \draw (-1,0) --(1,0)--(0,2)--(-1,0)--(-2,0.5); \draw (-1,0)--(-2,-0.5); \draw (2,-0.5)-- (1,0)--(2,0.5); \draw (0.5,3)--(0,2)--(-0.5,3); \draw (-2,0) ellipse (.2 and 1); \draw (2,0) ellipse (.2 and 1); \draw (0,3) ellipse (1 and 0.2); \draw node [below] at (-1,-.1) {$q_1$}; \draw node [below] at (1,-.1) {$q_2$}; \draw node [right] at (0,2) {$q_3$}; \draw node [below] at (0,-1) { $(D)$}; \draw node [left] at (-2.2,0) {$P_{23}$}; \draw node [right] at (2.2,0) {$P_{13}$}; \draw node [above] at (0,3.2) {$P_{12}$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{chordal1} \end{figure} Conversely, suppose that one of the condition $(1), (2), (3)$ is satisfied. One can see that Condition $(1)$ implies that $G(P)$ is a null graph, and thus $G(P)$ is chordal. If Condition $(2)$ holds, then $G(P)$ is a complete bipartite graph $K_{m_1, m_2}$, where $m_1=1$ or $m_2=1$. This implies that $G(P)$ is chordal. If Condition $(3)$ satisfied, then the vertex set of $G([P])$ is the set $\{P_1, P_2, P_3, P_{12}, P_{13}, P_{23}\}$ and $|P_i|=1$ for all $i\in \{1,2,3\}$. Then the zero-divisor graph $G(P)$ is shown in Figure \ref{chordal1}(D). Further, the vertices of $P_{ij}$ forms an independent set for $i, j \in \{1,2,3\}$ and $i <j$. Clearly, in this case also $G(P)$ is chordal. \textbf{(B)} Suppose $G^c(P)$ is a chordal graph. We have to prove that $P$ has at most 3 atoms. Suppose on contrary the number $n$ of atoms is $\geq 4$. 
Choose $x_{12}\in P_{12}$, $x_{14}\in P_{14}$, $x_{34}\in P_{34}$, $x_{23}\in P_{23}$. Then we can have an induced cycle $x_{12} - x_{14} - x_{34} - x_{23} - x_{12}$ of length $4$ as shown in Figure \ref{chordal2}(A), a contradiction to the fact that $G^c(P)$ is chordal. Thus $n\leq 3$. Conversely, suppose that the number $n$ of atoms in $P$ is at most $3$. We must prove that $G^c(P)$ is a chordal graph. If $n=1$, then $G^c(P)$ is a null graph. Hence $G^c(P)$ is a chordal graph. Now, assume that $n=2$. Then $G^c(P)$ is a union of two complete graphs $K_{m_1}+K_{m_2}$, where $|P_i|=m_i$ for $i\in \{1,2\}$. This implies that $G^c(P)$ is chordal graph. Lastly, assume that $n=3$. By Remark \ref{gred} and Theorem \ref{gchord}, $G^c(P)$ is chordal if and only if $G^{c}([P])$ is chordal. From Figure \ref{chordal2}(B), it is easy to observe that $G^{c}([P])$ is chordal. Therefore $G^c(P)$ is chordal. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =.71] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1); \draw [fill=black] (-1,2) circle (.1); \draw [fill=black] (1,2) circle (.1); \draw (-1,0) --(1,0)--(1,2)--(-1,2)--(-1,0); \draw node [below] at (-1,0) {$x_{12}$}; \draw node [below] at (1,0) {$x_{14}$}; \draw node [above] at (1,2) {$x_{34}$}; \draw node [above] at (-1,2) {$x_{23 }$}; \draw node [below] at (0,-1.7) {$(A)$}; \begin{scope}[shift={(5,0)}] \draw [fill=black] (-1,0) circle (.1); \draw [fill=black] (1,0) circle (.1);\draw [fill=black] (0,2) circle (.1); \draw [fill=black] (1.3,1.5) circle (.1); \draw [fill=black] (-1.3,1.5) circle (.1); \draw [fill=black] (0,-1) circle (.1); \draw (-1,0) --(1,0)--(0,-1)--(-1,0)-- (-1.3,1.5)--(0,2)--(-1,0)--(1,0)--(1.3,1.5)--(0,2)--(1,0); \draw node [left] at (-1,0) {$P_{13}$}; \draw node [right] at (1,0) {$P_{12}$}; \draw node [above] at (-1.3,1.5) {$P_{3}$}; \draw node [above] at (1.3,1.5) {$P_{2}$}; \draw node [above] at (0,2) {$P_{23}$}; \draw node [below] at (0,-1.1) {$P_{1}$}; \draw node [below] at (0,-1.7) {(B) ~~The graph $G^{c}([P])$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{chordal2} \end{figure} \end{proof} In the following remark, we provide the class of posets $P$ such that $[P]$ is a Boolean lattice. \begin{remark}\label{bolrem} We observe that a finite 0-distributive lattice $L$ is pseudocomplemented. Hence in view of Theorem \ref{nboolean}, $[L]$ is Boolean. Another class of posets $P$ for which $[P]$ is a Boolean lattice is $\textbf{P}=\prod\limits_{i=1}^nP^i$, where $P^i$ be a finite bounded poset with $Z(P^i)=\{0\}$ for every $i$. This follows from Lemma \ref{product}. \end{remark} \begin{lemma} [{Khandekar and Joshi \cite[Lemma 2.1(3)]{nkvj}}] \label{product} Let $\textbf{P}=\prod\limits_{i=1}^nP^i$, where $P^i$ be a finite bounded poset with $Z(P^i)=\{0\}$ for every $i$. Then $[\textbf{P}]$ is a Boolean lattice and $|P_{i_1\dots i_k}|= \prod\limits_{i=i_1}^{i_k}\big(|P^i|-1\big)$, where $\{i_1, \dots, i_k\}\subseteq \{1,\dots, n\}$. \end{lemma} In view of Theorem \ref{zdgchordal}, Theorem \ref{nboolean}, and Lemma \ref{product}, we have the following corollaries. \begin{corollary} \label{zdgchordlattice} Let $L$ be a finite $0$-distributive lattice. Then \textbf{(A)} $G(L)$ is chordal if and only if one of the following hold: \begin{enumerate} \item $L$ has exactly one atom; \item $L$ has exactly two atoms with $|L_i|=1$ for some $i\in \{1,2\}$ (see $(\circledcirc)$); \item $L$ has exactly three atoms with $|L_i|=1$ for all $i\in \{1,2,3\}$ (see $(\circledcirc)$). 
\end{enumerate} \textbf{(B)} $G^{c}(L)$ is chordal if and only if the number of atoms of $L$ is at most $3$. \end{corollary} \begin{corollary} \label{zdgchordproduct} Let $\textbf{P}=\prod\limits_{i=1}^nP^i$, where each $P^i$ is a finite bounded poset with $Z(P^i)=\{0\}$. Then \textbf{(A)} $G(\mathbf{P})$ is chordal if and only if one of the following holds: \begin{enumerate} \item $n=1$; \item $n=2$ with $|P^i|=2$ for some $i\in \{1,2\}$, i.e., $P^i=C_2$ for some $i\in \{1,2\}$; \item $n=3$ with $|P^i|=2$ for all $i\in \{1,2,3\}$, i.e., $\textbf{P}=C_2\times C_2\times C_2$. \end{enumerate} \textbf{(B)} $G^c(\mathbf{P})$ is chordal if and only if the number of atoms of $\mathbf{P}$ is at most $3$. \end{corollary} \vskip 25truept \section{Perfect zero-divisor graphs} In this section, we prove the characterizations of perfect zero-divisor graphs of ordered sets. A key result used to prove the perfectness of zero-divisor graphs of ordered sets is the Strong Perfect Graph Theorem due to Chudnovsky et al. \cite{strongperfect}. \begin{theorem} [Strong Perfect Graph Theorem \cite{strongperfect}]\label{strongperfect} A graph $G$ is perfect if and only if neither $G$ nor $G^c$ contains an induced odd cycle of length at least $5$. \end{theorem} In view of Theorem \ref{strongperfect}, we have the following corollary. The statement $(1)$ is nothing but the Perfect Graph Theorem due to Lov\'asz \cite{perfectlovasz}. \begin{corollary} \label{corsperfect} Let $G$ be a graph. Then the following statements hold: \begin{enumerate} \item $G$ is a perfect graph if and only if $G^c$ is a perfect graph. \item If $G$ is a complete bipartite graph, then $G$ is a perfect graph. \end{enumerate} \end{corollary} The following result gives the relation between chordal graphs and perfect graphs. \begin{theorem} [Dirac \cite{dirac}] \label{choedalperfect} Every chordal graph is perfect. \end{theorem} \begin{theorem} \label{redperfect} Let $G$ be a finite graph. Then $G$ is perfect if and only if $G_{red}$ is perfect. \end{theorem} \begin{proof} Let $G$ be a perfect graph. By the Strong Perfect Graph Theorem, neither $G$ nor $G^c$ contains an induced odd cycle of length at least $5$. Suppose, on the contrary, that $G_{red}$ is not perfect. By the Strong Perfect Graph Theorem, either $G_{red}$ or $G^c_{red}$ contains an induced odd cycle of length at least $5$. Without loss of generality, we assume that $G_{red}$ contains an induced odd cycle of length at least $5$. Let $[a_1]-[a_2]-\dots-[a_n]-[a_1]$ be an induced cycle of odd length $n$, where $n\geq 5$, in $G_{red}$. By the definition of the equivalence relation $\simeq$ and of $G_{red}$, we get an induced odd cycle $a_1-a_2-\dots-a_n-a_1$ of length $n\geq 5$ in $G$, a contradiction. Thus $G_{red}$ is perfect. \par Conversely, assume that $G_{red}$ is a perfect graph. By the Strong Perfect Graph Theorem, neither $G_{red}$ nor $G^c_{red}$ contains an induced odd cycle of length at least $5$. Suppose, on the contrary, that $G$ is not perfect. Then $G$ or $G^c$ contains an induced odd cycle of length at least $5$. Assume that $G$ contains an induced odd cycle $a_1-a_2-\dots-a_n-a_1$ of length $n$, where $n\geq 5$. In view of the proof of Theorem \ref{gchord}, we have $[a_i]\neq[a_j]$ for $i\neq j$. By the definition of the equivalence relation $\simeq$ and of $G_{red}$, we have an induced odd cycle $[a_1]-[a_2]-\dots-[a_n]-[a_1]$ of length $n \geq 5$ in $G_{red}$, a contradiction. Assume that $G^c$ contains an induced odd cycle of length at least $5$.
Let $a_1-a_2-\dots-a_n-a_1$ be an induced odd cycle of length $n \geq 5$ in $G^c$. This gives that $a_{i-1}, a_{i+1}\notin N_G(a_i)$ for every $i$. We first prove that $[a_i]\neq[a_j]$ for $i\neq j$, $i, j \in \{1,2, \dots, n\}$. Clearly, $a_i\not= a_j$. If $[a_i]=[a_j]$ for some $i\neq j$, $i, j \in \{1,2, \dots, n\}$, then, by the definition of the equivalence relation $\simeq$, $a_i-a_j\in E(G)$ and $N_G(a_i)\setminus \{a_j\} =N_G(a_j)\setminus \{a_i\}$. If $a_j \notin\{a_{i-1}, a_{i+1}\}$, then this implies that $a_{i-1}, a_{i+1}\notin N_G(a_j)$, that is, $a_j$ is adjacent to $a_{i-1}$ and $a_{i+1}$ in $G^c$. This contradicts the fact that $a_1-a_2-\dots-a_n-a_1$ is an induced odd cycle in $G^c$. Thus, in this case, $[a_i]\neq[a_j]$ for $i\neq j$. Now, assume that $a_j \in\{a_{i-1}, a_{i+1}\}$. Without loss of generality, $a_j=a_{i+1}$. This gives that $a_i$ and $a_j$ are not adjacent in $G$, a contradiction, since $[a_i]=[a_j]$ requires $a_i-a_j\in E(G)$. Thus, in this case also, $[a_i]\neq[a_j]$ for $i\neq j$. Therefore $[a_i]\neq[a_j]$ for $i\neq j$. By the definitions of the equivalence relation $\simeq$ and $G_{red}$, we have an induced odd cycle $[a_1]-[a_2]-\dots-[a_n]-[a_1]$ of length $n\geq 5$ in $G^c_{red}$, a contradiction. Thus, in both cases, we get a contradiction. Therefore neither $G$ nor $G^c$ contains an induced odd cycle of length at least $5$. Thus, $G$ is a perfect graph. \end{proof} Bagheri et al. \cite{bagheri} considered the following relation on a graph $G$: $u\approxeq v$ if and only if $N_G(u)= N_G(v)$. Clearly, $\approxeq$ is an equivalence relation on $V(G)$. The equivalence class of $v$ is the set $\{u\in V(G)~~|~~u\approxeq v\}$, denoted by $[v^\approxeq]$. Denote the set $\{[v^\approxeq]~~|~~ v\in V(G)\}$ by $[V(G)]$. Define $[u^\approxeq]-[v^\approxeq]$ to be an edge in $E([G])$ if and only if $u-v\in E(G)$, where $[u^\approxeq]\neq [v^\approxeq]$. Let $[G]$ be the simple graph whose vertex set is $[V(G)]$ and whose edge set is $E([G])$. \begin{remark}\label{rem4.5} It is easy to observe that if $G(P)$ is the zero-divisor graph of a poset $P$, then $[G(P)]=G([P])$. \end{remark} In view of Corollary \ref{corsperfect} and Theorem \ref{redperfect}, we have the following result. \begin{corollary} [{Bagheri et al. \cite[Corollary 3.2]{bagheri}}] \label{corsqperfect} $G$ is perfect if and only if $[G]$ is perfect. \end{corollary} \begin{remark}\label{obs2} $G+ I_m$ is a perfect graph if and only if $G$ is a perfect graph if and only if $G\vee K_m$ is a perfect graph. In particular, $G(P)$ is a perfect graph if and only if $G^*(P)$ is a perfect graph. \end{remark} With this preparation, we are ready to prove statement $\textbf{(C)}$ of our main result. \begin{proof}[\underline{\underline{Proof of Theorem \ref{zdgchordal}}}] \textbf{(C)} Let $P$ be a finite poset with $n$ atoms such that $[P]$ is a Boolean lattice. This implies that $P_{i_1i_2\dots i_k}\neq \emptyset$ for any nonempty set $\{i_1,i_2,\dots, i_k\}\subseteq \{1,2,\dots,n\}$. Suppose $G(P)$ is a perfect graph. We have to prove that the number $n$ of atoms of $P$ is $\leq 4$. Suppose, on the contrary, that $n\geq 5$. We show that $G([P])$ contains an induced $5$-cycle. For this, we choose the set of vertices $X=\{P_{13}, P_{14}, P_{24}, P_{25}, P_{35}\}$. By Lemma \ref{property}(3), the elements of the set $X$ induce the $5$-cycle $P_{14}- P_{25}- P_{13}- P_{24}- P_{35}- P_{14}$ in $G([P])$. This gives that $G([P])$ is not a perfect graph. By Remark \ref{rem4.5} and Corollary \ref{corsqperfect}, $G(P)$ is not a perfect graph, a contradiction. Thus, $n\leq 4$. Conversely, suppose that $n\leq 4$.
We have to show that $G(P)$ is perfect. First, we observe that the graph $G(P)$ is perfect for $n\leq 3$. By Theorem \ref{zdgchordal} \textbf{(B)}, $G^{c}(P)$ is a chordal graph , if $n\leq 3$. Thus, by Theorem \ref{choedalperfect}, $G^{c}(P)$ is a perfect graph, if $n\leq 3$. By Corollary \ref{corsperfect}(1), $G(P)$ is a perfect graph, if $n\leq 3$. Now, we prove that $G(P)$ is a perfect graph, if $n=4$. By Remark \ref{rem4.5} and Corollary \ref{corsqperfect}, it is enough to prove that $G([P])$ is perfect. Since $n=4$, then $[P]$ is a Boolean lattice with $2^4$ elements, as shown in Figure \ref{b24}(a). The Figure \ref{b24}(b) shows that the zero-divisor graph $G([P])$ of $[P]$. It is not very difficult to verify that, $G([P])$ does not contains an induced odd cycle of length $\geq 5$ and its complement. By Theorem \ref{strongperfect}, $G([P])$ is a perfect graph, if $n=4$. This completes the proof. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale = 1] \node [right] at (0,-2.5) {(a) $ \mathbf{ [P]}=\mathbf{2}^4$ }; \draw [fill=black] (1,-1) circle (.05); \node [below] at (1,-1) {$0$}; \draw [fill=black] (1.5,0) circle (.05); \node [below] at (1.6,0) {$q_3$}; \draw [fill=black] (2.5,0) circle (.05); \node [below] at (2.6,0) {$q_4$}; \draw [fill=black] (0.5,0) circle (.05);\node [below] at (0.4,0) {$q_2$}; \draw [fill=black] (-0.5,0) circle (.05); \node [below] at (-.6,0) {$q_1$}; \draw [fill=black] (-1,1.25) circle (.05); \node [left] at (-1,1.25) {$q_{12}$}; \draw [fill=black] (-0.2,1.25) circle (.05); \node [left] at (-.2,1.25) {$q_{13}$}; \draw [fill=black] (0.6,1.25) circle (.05); \node [below] at (0.65,1.2) {$q_{14}$}; \draw [fill=black] (1.4,1.25) circle (.05); \node [right] at (1.4,1.25) {$q_{23}$}; \draw [fill=black] (2.2,1.25) circle (.05); \node [right] at (2.2,1.25) {$q_{24}$}; \draw [fill=black] (3,1.25) circle (.05); \node [right] at (3,1.25) {$q_{34}$}; \draw [fill=black] (1.5,2.5) circle (.05); \node [above] at (1.6,2.5) {$q_2^*$}; \draw [fill=black] (2.5,2.5) circle (.05); \node [above] at (2.5,2.5) {$q_1^*$}; \draw [fill=black] (0.5,2.5) circle (.05); \node [above] at (.4,2.5) {$q_3^*$}; \draw [fill=black] (-0.5,2.5) circle (.05); \node [above] at (-.5,2.5) {$q_4^*$}; \draw [fill=black] (1,3.5) circle (.05);\node [above] at (1,3.5) {$1$}; \draw (1,3.5) -- (-0.5,2.5);\draw (1,3.5) -- (0.5,2.5);\draw (1,3.5) -- (1.5,2.5);\draw (1,3.5) -- (2.5,2.5); \draw (1,-1) -- (-0.5,0);\draw (1,-1) -- (0.5,0);\draw (1,-1) -- (1.5,0);\draw (1,-1) -- (2.5,0); \draw (-1,1.25) -- (-0.5,0);\draw (-0.2,1.25) -- (-0.5,0);\draw (0.6,1.25) -- (-0.5,0);\draw (-1,1.25) -- (0.5,0);\draw (1.4,1.25) -- (0.5,0);\draw (2.2,1.25) -- (0.5,0);\draw (-0.2,1.25) -- (1.5,0); \draw (1.4,1.25) -- (1.5,0);\draw (3,1.25) -- (1.5,0);\draw (3,1.25) -- (2.5,0);\draw (2.2,1.25) -- (2.5,0);\draw (0.6,1.25) -- (2.5,0); \draw (-1,1.25) -- (-0.5,2.5);\draw (-0.2,1.25) -- (-0.5,2.5);\draw (1.4,1.25) -- (-0.5,2.5);\draw (-1,1.25) -- (0.5,2.5);\draw (0.6,1.25) -- (0.5,2.5);\draw (2.2,1.25) -- (0.5,2.5);\draw (-.2,1.25) -- (1.5,2.5);\draw (0.6,1.25) -- (1.5,2.5);\draw (3,1.25) -- (1.5,2.5); \draw (1.4,1.25) -- (2.5,2.5);\draw (2.2,1.25) -- (2.5,2.5);\draw (3,1.25) -- (2.5,2.5); \begin{scope}[shift={(7,0)}] \node [right] at (0.5,-2.5) {(b) $G(\mathbf{ [P]})$ }; \draw (0,0) -- (2,0); \draw (0,0) -- (0,2);\draw (0,0) -- (-1,1); \draw (0,0) -- (2,2); \draw (0,0) -- (-0.5,-0.5);\draw (0,0) -- (1,-1);\draw (0,0) -- (3.5,-1.5); \draw (2,0) -- (2.5,-0.5); \draw (2,0) -- (2,2); \draw (2,0) -- (0,2); \draw (2,0) -- (3,1); 
\draw (2,0) -- (3.5,3.5);\draw (2,0) -- (1,-1); \draw (2,2) -- (0,2); \draw (2,2) -- (2.5,2.5);\draw (2,2) -- (3,1); \draw (2,2) -- (3.5,-1.5); \draw (2,2) -- (1,3); \draw (0,2) -- (-1,1); \draw (0,2) -- (-0.5,2.5);\draw (0,2) -- (1,3); \draw (0,2) -- (3.5,3.5); \draw (3.5,-1.5) -- (3.5,3.5); \draw (-1,1) -- (3,1);\draw (1,-1) -- (1,3); \draw [fill=black] (0,0) circle (.1); \draw [fill=black] (2,0) circle (.1); \draw [fill=black] (2,2) circle (.1); \draw [fill=black] (0,2) circle (.1); \draw [fill=black] (-0.5,-0.5) circle (.1); \node [below] at (-.5,-.5) {$q_2^*$}; \draw [fill=black] (-0.5,2.5) circle (.1);\node [above] at (-.5,2.5) {$q_1^*$}; \draw [fill=black] (2.5,-0.5) circle (.1); \node [below] at (2.5,-.5) {$q_3^*$}; \draw [fill=black] (2.5,2.5) circle (.1);\node [above] at (2.5,2.5) {$q_4^*$}; \draw [fill=black] (-1,1) circle (.1);\node [above] at (-1,1) {$q_{34}$}; \draw [fill=black] (1,3) circle (.1);\node [above] at (1,3) {$q_{23}$}; \draw [fill=black] (3.5,-1.5) circle (.1);\node [below] at (3.5,-1.5) {$q_{13}$}; \draw [fill=black] (1,-1) circle (.1);\node [below] at (1,-1) {$q_{14}$}; \draw [fill=black] (3,1) circle (.1);\node [below] at (3.15,1) {$q_{12}$}; \draw [fill=black] (3.5,3.5) circle (.1);\node [above] at (3.5,3.5) {$q_{24}$}; \node [left] at (-0.05,0) {$q_{2}$};\node [right] at (2.05,0) {$q_{3}$};\node [left] at (-0.05,2) {$q_{1}$};\node [right] at (2.05,2) {$q_{4}$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{b24} \end{figure} \end{proof} \vskip 5truept \begin{theorem}[{Joshi \cite[Corollary 2.11]{joshi}}]\label{weaklyperfectatomic} Let $G(P)$ be the zero-divisor graph of an atomic poset $P$. Then $\chi(G(P))=\omega(G(P))=n$, where $n$ is the finite number of atoms of $P$. \end{theorem} Thus, Theorem \ref{zdgchordal}(C) extends the following result. \begin{corollary}[{Patil et al. \cite[Theorem 2.6]{awj}}]\label{t8} Let $L$ be a lattice with the least element $0$ such that $G(L)$ is finite. If $G(L)$ is not perfect, then $\omega(G(L))\geq 5$. Moreover, if $L$ is $0$-distributive, then $G(L)$ contains an induced $5$-cycle. \end{corollary} In view of Proof of Theorem \ref{zdgchordal}, Theorem \ref{gchord}, Theorem \ref{redperfect}, Remark \ref{rem4.5} and Corollary \ref{corsqperfect}, we have the following result. \begin{theorem}\label{t3} Let $P$ be a finite poset such that $[P]$ is a Boolean lattice. Then the following statements are equivalent. \begin{enumerate} \item $G(P)$ is perfect. \item $G([P])$ is perfect. \item The number of atoms of $P$ is $\leq 4$. \item $G(P)$ does not contain an induced 5-cycle. \item $G([P])$ does not contain an induced 5-cycle. \item $[P]$ does not contain a meet sub-semilattice isomorphic to the meet sub-semilattice containing the least element 0 of $L$ as depicted in Figure \ref{sc}. 
\end{enumerate} \end{theorem} \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=.6] \draw (-1.5,0)--(-.5,2)--(0.5,0)--(1.5,2)--(2.5,0)--(3.5,2)--(4.5,0)--(5.5,2)--(6.5,0)--(7.5,2)--cycle; \draw (-1.5,0)--(2.5,-2);\draw (.5,0)--(2.5,-2);\draw (2.5,0)--(2.5,-2);\draw (4.5,0)--(2.5,-2);\draw (6.5,0)--(2.5,-2); \node [left] at(-1.5,0) { $P_{1}$};\node [above] at(-.5,2) {$P_{12}$};\node [left] at(0.6,0) { $P_{2}$};\node [above] at(1.5,2) {$P_{23}$};\node [right] at(2.4,0) { $P_{3}$};\node [above] at(3.5,2) {$P_{34}$};\node [right] at(4.4,0) { $P_{4}$};\node [above] at(5.5,2) {$P_{45}$};\node [right] at(6.4,0) { $P_{5}$};\node [above] at(7.5,2) {$P_{15}$};\node [left] at (2.5,-2.1) {$P_{0}$}; \draw[fill=white](-1.5,0) circle(.06); \draw[fill=white](-.5,2) circle(.06);\draw[fill=white](.5,0) circle(.06);\draw[fill=white](1.5,2) circle(.06);\draw[fill=white](2.5,0) circle(.06);\draw[fill=white](3.5,2) circle(.06);\draw[fill=white](4.5,0) circle(.06);\draw[fill=white](5.5,2) circle(.06);\draw[fill=white](6.5,0) circle(.06);\draw[fill=white](7.5,2) circle(.06);\draw[fill=white](2.5,-2) circle(.06); \end{tikzpicture} \end{center} \caption{Meet sub-semilattice}\label{sc} \end{figure} Using Theorem \ref{zdgchordal}, Theorem \ref{nboolean}, Lemma \ref{product} and Theorem \ref{weaklyperfectatomic}, we have the following results. \begin{corollary}\label{perfect0lattice} Let $L$ be a finite $0$-distributive lattice with $n$ atoms. Then $G(L)$ is perfect if and only if $\omega(G(L))=n\leq 4$. \end{corollary} \begin{corollary} \label{zdgperfectproduct} Let $\textbf{P}=\prod\limits_{i=1}^nP^i$, where $P^i$ be a finite bounded poset such that $Z(P^i)=\{0\}$ for every $i$. Then $G(\textbf{P})$ is perfect graph if and only if $\omega(G(\textbf{P}))=n\leq 4$. \end{corollary} \begin{remark} The condition on poset $P$ with $[P]$ is Boolean lattice in our first main Theorem \ref{zdgchordal} is necessary. Let $P$ be a uniquely complemented distributive poset with $5$ atoms and $5$ dual atoms (as shown in Figure \ref{condition}(A)) such that $[P]$ is a Boolean poset but not a lattice. Also, $P_{ij}\not= \emptyset$ for all $i, j \in \{1,2,3,4,5\}$. The zero-divisor graph $G(P)$ of $P$ and its complement $G^c(P)$ are shown in Figure \ref{condition}(B), Figure \ref{condition}(C), respectively. Then $G(P)$ and $G^c(P)$ does not contains an induced cycle of length $\geq 4$. Thus, $G(P)$ and $G^c(P)$ are the chordal graphs. Hence, $G(P)$ is a perfect graph though number of atoms $\nleq 4$, contrary to Theorem \ref{zdgchordal}. In fact, we can construct a Boolean poset with $n$ atoms and $n$ dual atoms ($n$- arbitrarily large) similar to as shown in Figure \ref{condition}(A) such that $G(P)$ and $G^c(P)$ are chordal graphs, hence perfect graphs. 
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =0.85] \centering \draw [fill=black] (0,0) circle (.1); \draw [fill=black] (-2,1.5) circle (.1); \draw [fill=black] (-1,1.5) circle (.1); \draw [fill=black] (0,1.5) circle (.1); \draw [fill=black] (1,1.5) circle (.1); \draw [fill=black] (2,1.5) circle (.1); \draw [fill=black] (-2,3) circle (.1); \draw [fill=black] (-1,3) circle (.1); \draw [fill=black] (0,3) circle (.1); \draw [fill=black] (1,3) circle (.1); \draw [fill=black] (2,3) circle (.1); \draw [fill=black] (-2,1.5) circle (.1); \draw [fill=black] (-1,1.5) circle (.1); \draw [fill=black] (0,1.5) circle (.1); \draw [fill=black] (1,1.5) circle (.1); \draw [fill=black] (0,4.5) circle (.1); \draw (0,0)--(-2,1.5)--(-2,3)--(-1,1.5)--(0,0)--(0,1.5)--(-2,3)--(1,1.5)--(0,0)--(2,1.5)--(-1,3)--(-2,1.5)--(0,3)--(-1,1.5); \draw(1,3)--(0,4.5)--(2,3)--(2,1.5)--(0,3)--(0,4.5)--(-1,3)--(0,1.5)--(1,3)--(2,1.5); \draw (-2,3)--(0,4.5)--(2,3)--(1,1.5); \draw (-1,1.5)--(-1,3); \draw (-2,1.5)--(1,3)--(1,1.5)--(0,3); \draw (-1,1.5)--(2,3)--(0,1.5); \draw node [above] at (0,4.6) {$1$}; \draw node [above] at (0,3.1) {$q^*_3$}; \draw node [above] at (-2,3.1) {$q^*_5$}; \draw node [above] at (-1,3.1) {$q^*_4$}; \draw node [above] at (1,3.1) {$q^*_2$}; \draw node [above] at (2,3.1) {$q^*_1$}; \draw node [below] at (0,1.4) {$q_3$}; \draw node [below] at (-2,1.4) {$q_1$}; \draw node [below] at (-1,1.4) {$q_2$}; \draw node [below] at (1,1.4) {$q_4$}; \draw node [below] at (2,1.4) {$q_5$}; \draw node [below] at (0,-0.1) {$0$}; \draw node [above] at (0,-1.5) {A. $P$}; \begin{scope}[shift={(4,1.)}] \draw [fill=black] (0,0) circle (.1); \draw [fill=black] (0,2) circle (.1); \draw [fill=black] (2.5,0) circle (.1); \draw [fill=black] (2.5,2) circle (.1); \draw [fill=black] (1.25,3.5) circle (.1); \draw [fill=black] (-0.75,3) circle (.1); \draw [fill=black] (3.25,3.3) circle (.1); \draw [fill=black] (-1.5,1) circle (.1); \draw [fill=black] (4,1) circle (.1); \draw [fill=black] (1.25,-1.5) circle (.1); \draw (0,0)--(0,2)--(2.5,2)--(2.5,0)--(0,0)--(1.25,3.5)--(2.5,2)--(0,0)--(1.25,-1.5); \draw (2.5,0)--(0,2)--(1.25,3.5)--(2.5,0)--(4,1); \draw (0,2)--(-1.5,1); \draw (1.25,3.5)--(-0.75,3); \draw (3.25,3.3)--(2.5,2); \draw node [below] at (0,0) {$q_1$}; \draw node [below] at (2.5,0) {$q_2$}; \draw node [above] at (2.5,2) {$q_3$}; \draw node [above] at (1.25,3.5) {$q_4$}; \draw node [above] at (0,2) {$q_5$}; \draw node [above] at (-1.5,1) {$q^*_5$}; \draw node [above] at (1.25,-1.5) {$q^*_1$}; \draw node [above] at (4,1) {$q^*_2$}; \draw node [above] at (3.25,3.3) {$q^*_3$}; \draw node [above] at (-0.75,3) {$q^*_4$}; \draw node [above] at (1.25,-2.5) {B. 
$G(P)$}; \end{scope} \begin{scope}[shift={(10,1)}] \draw [fill=black] (0,0) circle (.1); \draw [fill=black] (0,2) circle (.1); \draw [fill=black] (2.5,0) circle (.1); \draw [fill=black] (2.5,2) circle (.1); \draw [fill=black] (1.25,3.5) circle (.1); \draw [fill=black] (-0.75,3) circle (.1); \draw [fill=black] (3.25,3) circle (.1); \draw [fill=black] (-1.5,1) circle (.1); \draw [fill=black] (4,1) circle (.1); \draw [fill=black] (1.25,-1.5) circle (.1); \draw (0,0)--(0,2)--(2.5,2)--(2.5,0)--(0,0)--(1.25,3.5)--(2.5,2)--(0,0)--(1.25,-1.5)--(2.5,0)--(0,2)--(1.25,3.5)--(3.25,3)--(2.5,2)--(4,1)--(2.5,0)--(1.25,-1.5)--(0,0)--(-1.5,1)--(2.5,0)--(3.25,3)--(0,2)--(-0.75,3)--(2.5,2); \draw (-1.5,1)--(0,2)--(1.25,-1.5)--(2.5,2)--(4,1)--(0,0)--(-0.75,3)--(1.25,3.5)--(2.5,0); \draw (-1.5,1)--(1.25,3.5)--(4,1); \draw node [below] at (0,0) {$q^*_1$}; \draw node [below] at (2.5,0) {$q^*_2$}; \draw node [above] at (2.5,2) {$q^*_3$}; \draw node [above] at (1.25,3.5) {$q^*_4$}; \draw node [above] at (0,2) {$q^*_5$}; \draw node [above] at (-1.5,1) {$q_3$}; \draw node [above] at (1.25,-1.5) {$q_4$}; \draw node [above] at (4,1) {$q_5$}; \draw node [above] at (3.25,3.1) {$q_1$}; \draw node [above] at (-0.75,3) {$q_2$}; \draw node [above] at (1.25,-2.5) {C. $G^c(P)$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{condition} \end{figure} \end{remark} \newpage \section{Coloring of zero-divisor graphs} In this section, we prove that the zero-divisor graphs of finite posets satisfy the Total Coloring Conjecture. Further, we prove that the complement of the zero-divisor graph of finite 0-distributive posets satisfies the Total Coloring Conjecture. Recently, Srinivasa Murthy \cite[Revision 3]{murthy} claims the proof of Total Coloring Conjecture for the finite simple graphs. However, this claim is not verified. \vskip 5truept We quote the following definition and results needed in the sequel. \vskip 5truept The \textit{vertex chromatic number} $\chi(G)$ of a graph $G$ is the minimum number of colors required to color the vertices of $G$ such that no two adjacent vertices receive the same color, whereas the \textit{edge chromatic number} $\chi'(G)$ of $G$ is the minimum number of colors required to color the edges of $G$ such that incident edges receive different colors. A graph $G$ is {\it class one}, if $\chi'(G)=\Delta(G)$ and is {\it class two}, if $\chi'(G)=\Delta(G)+1$. Note that every graph $G$ requires at least $\Delta(G)+1$ colors for the total coloring. A graph is said to be {\it type I}, if $\chi''(G) =\Delta(G)+1$ and is said to be {\it type II}, if $\chi''(G) =\Delta(G)+2$. \begin{theorem}[Behzad et al. {\cite[Theorem 1]{behzad}}]\label{complete} The following statements hold for the complete graph $K_n$. \begin{enumerate} \item $\chi(K_{n})=n$. \item $\chi{'}(K_{n})= \begin{cases} n & \text{ for $n$ odd $n\geq 3;$} \\ n-1 & \text{for $n$ even.} \end{cases}$ \item $\chi{''}(K_{n})= \begin{cases} n & \text{for $n$ odd;}\\ n+1 & \text{ for $n$ even.} \end{cases}$ \end{enumerate} \end{theorem} \vskip 5truept \begin{theorem}[Behzad et al. {\cite[Theorem 2]{behzad}}]\label{bipartite} The following statements hold for the complete bipartite graph $K_{m,n}$. \begin{enumerate} \item $\chi(K_{m,n})=2$; \item $\chi{'}(K_{m,n})=$ max$\{m,n\}$; \item $\chi{''}(K_{m,n})=$ max$\{m,n\}+1+\delta_{mn}$, where $\delta_{mn}= \begin{cases} 0 & \text{if $m\neq n;$}\\ 1 & \text{if $m= n.$} \end{cases}$ \end{enumerate} \end{theorem} \vskip 5truept \begin{theorem}[Yap {\cite[Theorem 2.6, p. 
8]{tcyap}}]\label{zdg} Let $G$ be a graph of order $N$ and let $\Delta(G)$ be the maximum degree of the graph $G$. If $G$ contains an independent set $X$ of vertices with $|X|\geq N-\Delta(G)-1$, then $\chi''(G)\leq\Delta(G)+2$, that is, $G$ satisfies the Total Coloring Conjecture. \end{theorem} \begin{theorem}[Yap {\cite[Theorem 5.15, p. 52]{tcyap}}]\label{czdg} If $G$ is a graph with $\Delta(G)\geq\frac{3}{4}|G|$, then \linebreak $\chi''(G)\leq\Delta(G)+2$, that is, $G$ satisfies the Total Coloring Conjecture. \end{theorem} \begin{theorem}[Khandekar and Joshi {\cite[Theorem 1.2, p.1]{nkvj}}]\label{tcc} Let $ \mathbf{ P}=\prod\limits_{i=1}^{n}P^i$, $(n\geq 2)$, where the $P^i$'s are finite bounded posets such that $Z(P^i)=\{0\}$ for all $i$ and $2\leq|P^1|\leq|P^2|\leq\dots\leq|P^n|$. Then $G(\mathbf{ P})$ satisfies the Total Coloring Conjecture. In particular, \\ $\chi{''}(G(\mathbf{ P}))=\begin{cases} \Delta(G(\mathbf{ P}))+2 & \text{if $n=2$ and $|P^1|=|P^2|;$}\\ \Delta(G(\mathbf{ P}))+1 & \text{ otherwise.}\\ \end{cases}$ \end{theorem} \begin{theorem}[Khandekar and Joshi {\cite[Theorem 3.5, p.6]{nkvj}}]\label{edge chromatic number} Let $ \mathbf{ P}=\prod\limits_{i=1}^{n}P^i$, $(n\geq 2)$, where the $P^i$'s are finite bounded posets such that $Z(P^i)=\{0\}$ for all $i$ and $2\leq|P^1|\leq|P^2|\leq\dots\leq|P^n|$. Let $G(\mathbf{ P})$ be the zero-divisor graph of the poset $\mathbf{ P}$. Then $\chi{'}(G(\mathbf{ P}))=\Delta(G(\mathbf{ P}))$. \end{theorem} \begin{remark} [Khandekar and Joshi {\cite[Remark 3.9, p.13]{nkvj}}]\label{type2} Let $ \mathbf{ P}=\prod\limits_{i=1}^{n}P^i$, $(n\geq 2)$, where the $P^i$'s are finite bounded posets such that $Z(P^i)=\{0\}$ for all $i$ and $2\leq|P^1|\leq|P^2|\leq\dots\leq|P^n|$. The zero-divisor graph of $\mathbf{P}$ is of type II if and only if $\mathbf{P}$ is a direct product of exactly two posets $P^1$, $P^2$ with $Z(P^i)=\{0\}$ for $i=1,2$ and $|P^1|=|P^2|$. \end{remark} With this preparation, we are ready to prove our second main result of the paper, which follows from Theorems \ref{zdgtcc} and \ref{czdgtcc}. \begin{theorem}\label{zdgtcc} Let $P$ be a finite poset. Then $G(P)$ satisfies the Total Coloring Conjecture. \end{theorem} \begin{proof} Let $q_1, q_2, \dots, q_n$ be all the atoms of $P$. Let $G(P)$ be the zero-divisor graph of $P$ of order $N$, i.e., $|V(G(P))|=N$, and let $\Delta(G(P))$ be the maximum degree of $G(P)$. Then $q_i^u\cap V(G(P))$ is an independent set of $G(P)$ for every $i\in\{1,\dots,n\}$. Hence $\alpha(G(P))\geq |q_i^u\cap V(G(P))|$. It is easy to see that if $x\in V(G(P))$ and $x\notin q_i^u$, then $x$ and $q_i$ are adjacent in $G(P)$. Therefore, the degree of $q_i$ is equal to $|V(G(P))\setminus q_i^u|$ for every $i\in\{1,\dots,n\}$. Hence, $\Delta(G(P))\geq \deg(q_i)$ for all $i$. Note that if $x\in V(G(P))$, then either $x\in q_i^u\cap V(G(P))$ or $x\in V(G(P))\setminus q_i^u$. Thus, $V(G(P))$ is the disjoint union of $q_i^u\cap V(G(P))$ and $V(G(P))\setminus q_i^u$, for all $i$. In particular, $V(G(P))$ is the disjoint union of $q_1^u\cap V(G(P))$ and $V(G(P))\setminus q_1^u$. Let $|q_1^u\cap V(G(P))|=m$ and $|V(G(P))\setminus q_1^u|=k$. We have $m+k=N$, and hence $m=N-k\geq N-k-1$. Since $\Delta(G(P))\geq \deg(q_1)=k$, this implies that $m\geq N-\Delta(G(P))-1$. By Theorem \ref{zdg}, $G(P)$ satisfies the Total Coloring Conjecture. \end{proof} Now, we prove that if $P$ is a finite $0$-distributive poset, then $G^c(P)$ satisfies the Total Coloring Conjecture.
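Before doing so, we illustrate the estimate used in the proof of Theorem \ref{zdgtcc} with a small example. Consider the Boolean lattice $\mathbf{P}=C_2\times C_2\times C_2$ with atoms $q_1,q_2,q_3$. Here $V(G(\mathbf{P}))$ consists of the three atoms and the three dual atoms, so $N=6$; the independent set $q_1^u\cap V(G(\mathbf{P}))=\{q_1,\, q_1\vee q_2,\, q_1\vee q_3\}$ has $m=3$ elements, while $\deg(q_1)=k=3=\Delta(G(\mathbf{P}))$. Thus $m=3\geq N-\Delta(G(\mathbf{P}))-1=2$, and Theorem \ref{zdg} gives $\chi''(G(\mathbf{P}))\leq \Delta(G(\mathbf{P}))+2=5$.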
Since the proof of this result (Theorem \ref{czdgtcc} below) is a little lengthy, we first give a skeleton of it using Figure \ref{example}. Consider the $0$-distributive poset $P$ and its zero-divisor graph shown in Figure \ref{example}(a) and (b), respectively. This is shown symbolically in Figure \ref{example}(c), where $I_4$ is an independent set on $4$ vertices and the edges between two such sets form the join $I_4\vee I_4$. One can compare this graph with Figure \ref{fig1}(a). With the same idea, the complement of the zero-divisor graph of $P$ is shown in Figure \ref{example}(d) (see also Figure \ref{fig1}(b)). Now, we describe the skeleton of the proof of Theorem \ref{czdgtcc}. If a poset has exactly two atoms, then $G^c(P)$ is a disjoint union of two complete graphs and hence satisfies the conjecture. Further, if $n\geq 4$, then we use Theorem \ref{czdg} to prove that $G^c(P)$ satisfies the Total Coloring Conjecture. The case $n=3$ is a little more involved. First, we totally color the subgraph $G'$ of $G^c(P)$ induced by the vertices of $[q_1]^*\cup[q_2]^*\cup[q_3]^*\cup[q_3]$, where $|[q_i]|=l_i$ and $|[q_i]^*| =m_i$. We denote $[q_i]=\{q_{i1},q_{i2},\dots,q_{il_i}\}$ and $[q_i]^*=\{q_{i1}^*,q_{i2}^*,\dots,q_{im_i}^*\}$. Without loss of generality, assume that $l_3\geq l_2\geq l_1$. Then we totally color the subgraph induced by $[q_2]$, followed by the one induced by $[q_1]$. Afterwards, we color the edges between $[q_3]^*$ and $[q_1]$, which form a complete bipartite graph $K_{|[q_3]^*|,\,|[q_1]|}$. We denote $V_1=\{q_{21},q_{22},\dots,q_{2l_2}\}$ and $V_2=\{q_{11}^*,\dots,q_{1m_1}^*,~ q_{31}^*,\dots,q_{3m_3}^*\}$, and we then complete the edge coloring of the complete bipartite graph $K_{s,t}$ on the sets $V_1$ and $V_2$, where $s=l_2$ and $t=m_1+m_3$. Similarly, we color the edges of the complete bipartite graph $K'_{r',s'}$ on the sets $V_1'=\{q_{11},q_{12},\dots,q_{1l_1}\}$ and $V_2'=\{q_{21}^*,\dots,q_{2m_2}^*\}$, where $r'=l_1$ and $s'=m_2$. Finally, we verify that this yields the required total coloring of $G^c(P)$.
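For the poset of Figure \ref{example}, each of the six classes $[q_1],[q_2],[q_3],[q_1]^*,[q_2]^*,[q_3]^*$ has four elements, so in the above notation $l_1=l_2=l_3=m_1=m_2=m_3=4$ and $|V(G^c(P))|=24$. In this case the two complete bipartite graphs appearing in the skeleton are $K_{s,t}=K_{4,8}$ (between $[q_2]$ and $[q_1]^*\cup[q_3]^*$) and $K'_{r',s'}=K_{4,4}$ (between $[q_1]$ and $[q_2]^*$).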
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =1] \draw[fill=pink] (-1.5,1.8) ellipse(0.2 and 1.2); \draw[fill=pink] (0,1.8) ellipse(0.2 and 1.2); \draw[fill=pink] (1.5,1.8) ellipse(0.2 and 1.2); \draw[fill=aquamarine] (-1.5,4.8) ellipse(0.2 and 1.2); \draw[fill=aquamarine] (0,4.8) ellipse(0.2 and 1.2); \draw[fill=aquamarine] (1.5,4.8) ellipse(0.2 and 1.2); \node [left] at (-1.6,1.8) {$[q_1]$}; \node [left] at (-0.1,1.8) {$[q_2]$}; \node [right] at (1.6,1.8) {$[q_3]$}; \node [left] at (-1.6,4.8) {$[q_3]^*$}; \node [left] at (-0.1,4.8) {$[q_2]^*$}; \node [right] at (1.6,4.8) {$[q_1]^*$}; \draw [fill=black] (0,-0.5) circle (0.08);\draw [fill=black] (0,7) circle (0.08); \draw [fill=black] (0,1) circle (0.08);\draw [fill=black] (0,1.5) circle (0.08);\draw [fill=black] (0,2.5) circle (0.08);\draw [fill=black] (0,2) circle (0.08); \draw [fill=black] (-1.5,5.5) circle (0.08);\draw [fill=black] (-1.5,4) circle (0.08);\draw [fill=black] (-1.5,4.5) circle (0.08);\draw [fill=black] (-1.5,5) circle (0.08); \draw [fill=black] (-1.5,1) circle (0.08);\draw [fill=black] (-1.5,1.5) circle (0.08);\draw [fill=black] (-1.5,2.5) circle (0.08);\draw [fill=black] (-1.5,2) circle (0.08); \draw [fill=black] (1.5,5.5) circle (0.08);\draw [fill=black] (1.5,4) circle (0.08);\draw [fill=black] (1.5,4.5) circle (0.08);\draw [fill=black] (1.5,5) circle (0.08); \draw [fill=black] (1.5,1) circle (0.08);\draw [fill=black] (1.5,1.5) circle (0.08);\draw [fill=black] (1.5,2.5) circle (0.08);\draw [fill=black] (1.5,2) circle (0.08); \draw [fill=black] (1.5,5.5) circle (0.08);\draw [fill=black] (1.5,4) circle (0.08);\draw [fill=black] (1.5,4.5) circle (0.08);\draw [fill=black] (1.5,5) circle (0.08); \draw [fill=black] (0,5.5) circle (0.08);\draw [fill=black] (0,4) circle (0.08);\draw [fill=black] (0,4.5) circle (0.08);\draw [fill=black] (0,5) circle (0.08); \draw (0,-0.5) -- (-1.5,1);\draw (0,-0.5) -- (0,1);\draw (0,-0.5) -- (1.5,1); \draw (-1.5,5.5) -- (-1.5,1);\draw (0,2.5) -- (0,1);\draw (1.5,5.5) -- (1.5,1); \draw (-1.5,2.5) -- (0,4);\draw (0,2.5) -- (0,1);\draw (1.5,2.5) -- (1.5,1);\draw (1.5,2.5) -- (0,4);\draw (0,2.5) -- (1.5,4); \draw (0,2.5) -- (-1.5,4); \draw (0,5.5) -- (0,4);\draw (0,7) -- (0,4); \draw (0,7) -- (-1.5,5.5);\draw (0,7) -- (1.5,5.5); \node [above] at (0,7) {$1$}; \node [below] at (0,-0.6) {$0$}; \node [below] at (-0.2,-1) {$(a)~ P$}; \begin{scope}[shift={(7,2.5)}] \node [left] at (2.5,0) {$[q_1]$}; \node [left] at (2.7,-2) {$[q_1]^*$}; \node [left] at (2,3.8) {$[q_2]$}; \node [left] at (-0.2,3.8) {$[q_3]$}; \node [left] at (4.8,3.8) {$[q_2]^*$}; \node [left] at (-3.3,3.8) {$[q_3]^*$}; \draw[fill=pink] (0.7,0) ellipse(1 and 0.5); \draw[fill=aquamarine] (0.7,-2) ellipse(1 and 0.5); \begin{scope}[rotate=50]{\tiny \draw[fill=pink] (1.3,2.5) ellipse(1 and 0.5); \draw[fill=aquamarine] (1.,4.45) ellipse(1 and 0.5);}\end{scope} \begin{scope}[rotate=125]{\tiny \draw[fill=pink] (1.,-3.2) ellipse(1 and 0.5); \draw[fill=aquamarine] (1.,-4.8) ellipse(1 and 0.5);}\end{scope} \draw [fill=black] (0,0) circle (.05);\draw [fill=black] (0.5,0) circle (.05);\draw [fill=black] (1,0) circle (.05);\draw [fill=black] (1.5,0) circle (.05); \draw [fill=black] (0,-2) circle (.05);\draw [fill=black] (0.5,-2) circle (.05);\draw [fill=black] (1,-2) circle (.05);\draw [fill=black] (1.5,-2) circle (.05); \draw [fill=black] (-1.5,2) circle (.05); \draw [fill=black] (-1.2,2.4) circle (.05);\draw [fill=black] (-0.9,2.8) circle (.05);\draw [fill=black] (-0.6,3.2) circle (.05); \draw [fill=black] (-3.3,3) circle (.05); \draw 
[fill=black] (-3.0,3.4) circle (.05);\draw [fill=black] (-2.7,3.8) circle (.05);\draw [fill=black] (-2.4,4.2) circle (.05); \draw [fill=black] (2.5,2) circle (.05); \draw [fill=black] (2.2,2.4) circle (.05);\draw [fill=black] (1.9,2.8) circle (.05);\draw [fill=black] (1.6,3.2) circle (.05); \draw [fill=black] (3.8,3) circle (.05); \draw [fill=black] (3.5,3.4) circle (.05);\draw [fill=black] (3.2,3.8) circle (.05);\draw [fill=black] (2.9,4.2) circle (.05); \draw (0,0) -- (0,-2);\draw (0.5,0) -- (0,-2);\draw (1,0) -- (0,-2);\draw (1.5,0) -- (0,-2); \draw (0,0) -- (0.5,-2);\draw (0.5,0) -- (0.5,-2);\draw (1,0) -- (0.5,-2);\draw (1.5,0) -- (0.5,-2); \draw (0,0) -- (1,-2);\draw (0.5,0) -- (1,-2);\draw (1,0) -- (1,-2);\draw (1.5,0) -- (1,-2); \draw (0,0) -- (1.5,-2);\draw (0.5,0) -- (1.5,-2);\draw (1,0) -- (1.5,-2);\draw (1.5,0) -- (1.5,-2); \draw (2.5,2) -- (3.8,3);\draw (0.5,0) -- (2.5,2);\draw (1,0) -- (2.5,2);\draw (1.5,0) -- (2.5,2); \draw (0,0) -- (2.2,2.4);\draw (0.5,0) -- (2.2,2.4);\draw (1,0) -- (2.2,2.4);\draw (1.5,0) -- (2.2,2.4); \draw (0,0) -- (1.9,2.8);\draw (0.5,0) -- (1.9,2.8);\draw (1,0) -- (1.9,2.8);\draw (1.5,0) -- (1.9,2.8); \draw (0,0) -- (1.6,3.2);\draw (0.5,0) -- (1.6,3.2);\draw (1,0) -- (1.6,3.2);\draw (1.5,0) -- (1.6,3.2); \draw (0,0) -- (2.5,2);\draw (3.5,3.4) -- (2.5,2);\draw (3.2,3.8) -- (2.5,2);\draw (2.9,4.2) -- (2.5,2); \draw (3.8,3) -- (2.2,2.4);\draw (3.5,3.4) -- (2.2,2.4);\draw (3.2,3.8) -- (2.2,2.4);\draw (2.9,4.2) -- (2.2,2.4); \draw (3.8,3) -- (1.9,2.8);\draw (3.5,3.4) -- (1.9,2.8);\draw (3.2,3.8) -- (1.9,2.8);\draw (2.9,4.2) -- (1.9,2.8); \draw (3.8,3) -- (1.6,3.2);\draw (3.5,3.4) -- (1.6,3.2);\draw (3.2,3.8) -- (1.6,3.2);\draw (2.9,4.2) -- (1.6,3.2); \draw (-1.5,2) -- (2.5,2);\draw (-1.2,2.4) -- (2.5,2);\draw (-0.9,2.8) -- (2.5,2);\draw (-0.6,3.2) -- (2.5,2); \draw (-1.5,2) -- (2.2,2.4);\draw (-1.2,2.4) -- (2.2,2.4);\draw (-0.9,2.8) -- (2.2,2.4);\draw (-0.6,3.2) -- (2.2,2.4); \draw (-1.5,2) -- (1.9,2.8);\draw (-1.2,2.4) -- (1.9,2.8);\draw (-0.9,2.8) -- (1.9,2.8);\draw (-0.6,3.2) -- (1.9,2.8); \draw (-1.5,2) -- (1.6,3.2);\draw (-1.2,2.4) -- (1.6,3.2);\draw (-0.9,2.8) -- (1.6,3.2);\draw (-0.6,3.2) -- (1.6,3.2); \draw (-1.5,2) -- (0,0);\draw (-1.2,2.4) -- (0,0);\draw (-0.9,2.8) -- (0,0);\draw (-0.6,3.2) -- (0,0); \draw (-1.5,2) -- (0.5,0);\draw (-1.2,2.4) -- (0.5,0);\draw (-0.9,2.8) -- (0.5,0);\draw (-0.6,3.2) -- (0.5,0); \draw (-1.5,2) -- (1,0);\draw (-1.2,2.4) -- (1,0);\draw (-0.9,2.8) -- (1,0);\draw (-0.6,3.2) -- (1,0); \draw (-1.5,2) -- (1.5,0);\draw (-1.2,2.4) -- (1.5,0);\draw (-0.9,2.8) -- (1.5,0);\draw (-0.6,3.2) -- (1.5,0); \draw (-1.5,2) -- (-3.3,3);\draw (-1.2,2.4) -- (-3.3,3);\draw (-0.9,2.8) -- (-3.3,3);\draw (-0.6,3.2) -- (-3.3,3); \draw (-1.5,2) -- (-3,3.4);\draw (-1.2,2.4) -- (-3,3.4);\draw (-0.9,2.8) -- (-3,3.4);\draw (-0.6,3.2) -- (-3,3.4); \draw (-1.5,2) -- (-2.7,3.8);\draw (-1.2,2.4) -- (-2.7,3.8);\draw (-0.9,2.8) -- (-2.7,3.8);\draw (-0.6,3.2) -- (-2.7,3.8); \draw (-1.5,2) -- (-2.4,4.2);\draw (-1.2,2.4) -- (-2.4,4.2);\draw (-0.9,2.8) -- (-2.4,4.2);\draw (-0.6,3.2) -- (-2.4,4.2); \node [below] at (0.5,-3.2) {$(b)~ G(P)$}; \end{scope} \begin{scope}[shift={(0,-6)}] \draw [fill=pink] (0,0) circle (0.5);\draw [fill=pink] (-1.5,2) circle (0.5);\draw [fill=pink] (1.5,2) circle (0.5);\draw [fill=aquamarine] (0,-2) circle (0.5);\draw [fill=aquamarine] (3,3) circle (0.5);\draw [fill=aquamarine] (-3,3) circle (0.5); \draw (-0.25,-0.4) -- (-0.25,-1.6);\draw (0.25,-0.4) -- (0.25,-1.6);\draw (0.5,0.1) -- (1.6,1.5);\draw (0.1,0.5) -- (1.1,1.7);\draw 
(-0.5,0.1) -- (-1.6,1.5);\draw (-0.1,0.5) -- (-1.1,1.7); \draw (-1.05,1.8) -- (1.05,1.8);\draw (-1.2,2.4) -- (1.2,2.4); \draw (-2,1.85) -- (-2.9,2.5);\draw (-1.8,2.4) -- (-2.5,2.9); \draw (2,1.85) -- (2.9,2.5);\draw (1.8,2.4) -- (2.5,2.9); \node [right] at (-.5,0) {$I_{|[q_1]|}$}; \node [right] at (-.6,-2) {$I_{|[q_1]^*|}$};\node [right] at (-2,2) {$I_{|[q_3]|}$};\node [right] at (1.,2) {$I_{|[q_2]|}$}; \node [right] at (2.4,3) {$I_{|[q_2]^*|}$};\node [right] at (-3.6,3) {$I_{|[q_3]^*|}$}; \node [right] at (-1.05,2.05) {$I_{|[q_3]|}\vee I_{|[q_2]|}$}; \node [below] at (0.5,-3.5) {$(c)~ G(P)$}; \end{scope} \begin{scope}[shift={(8,-6.5)}] \draw [fill=aquamarine] (0,4) circle (0.6); \draw [fill=aquamarine] (-2,0) circle (0.6);\draw [fill=aquamarine] (2,0) circle (0.6);\draw [fill=pink] (2.8,3.1) circle (0.6);\draw [fill=pink] (-2.8,3.1) circle (0.6);\draw [fill=pink] (0,-2) circle (0.6); \node [right] at (-.7,4) {$K_{|[q_3]^*|}$};\node [right] at (-2.65,0) {$K_{|[q_1]^*|}$};\node [right] at (1.4,0) {$K_{|[q_2]^*|}$};\node [right] at (2.2,3.1) {$K_{|[q_1]|}$};\node [right] at (-3.4,3.1) {$K_{|[q_2]|}$};\node [right] at (-.6,-2) {$K_{|[q_3]|}$}; \node [right] at (-1.35,-0.05) {$K_{|[q_1]^*|}\vee K_{|[q_2]^*|}$}; \draw (-1.5,0.3) -- (1.5,0.3);\draw (-1.5,-0.3) -- (1.5,-0.3); \draw (-1.55,0.35) -- (-0.05,3.4);\draw (1.55,0.35) -- (0.05,3.4);\draw (-2.1,0.55) -- (-0.45,3.6);\draw (2.1,0.55) -- (0.45,3.6); \draw (-2.15,0.55) -- (-2.5,2.6);\draw (2.15,0.55) -- (2.5,2.6); \draw (-2.6,0.05) -- (-3.,2.55);\draw (2.6,0.05) -- (3.,2.55); \draw (-2.2,3.1) -- (-.6,3.85);\draw (-2.6,3.65) -- (-.2,4.6);\draw (2.2,3.1) -- (.6,3.85);\draw (2.6,3.65) -- (.2,4.6); \draw (-1.65,-.45) -- (-.45,-1.65);\draw (-2.3,-.5) -- (-.5,-2.3);\draw (1.65,-.45) -- (.45,-1.65);\draw (2.3,-.5) -- (.5,-2.3); \node [below] at (0.5,-3.0) {$(d)~ G^c(P)$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{example} \end{figure} \begin{theorem}\label{czdgtcc} Let $P$ be a finite $0$-distributive poset. Then $G^c(P)$ satisfies the Total Coloring Conjecture. \end{theorem} \begin{proof} Let $q_1, q_2, \dots, q_n$ be all atoms of $P$. Then using the statements (1) and (7) of Lemma \ref{property}, $[q_1],[q_2],\dots,[q_n]$ are $n$ atoms of $[P]$ and $[q_i]^*$ is the pseudocomplement of $[q_i]$ in $[P]$ for every $i\in\{1,2,\dots,n\}$. Clearly, $[q_k]^*$ is a pendent vertex of $G([P])$ adjacent to $[q_k]$ only (see Figure \ref{fig1}(a), if $n=3$). Let $G^c(P)$ be the complement of the zero-divisor graph of $P$ and let $G^c([P])$ be the complement of the zero-divisor graph of $[P]$. To get $\Delta (G^c(P))$, we calculate $\delta(G(P))$. We claim that $\delta(G(P))= $ deg$(x)=|[q_i]|$ for some $i\in \{1,\dots, n\}$ and $x\in [q_i]^*$. If $[a]\in V(G([P]))\setminus\{[q_1]^*,\dots,[q_n]^*\}$, then $[a]$ is adjacent to at least two $[q_i]$ in $G([P])$ for $n\geq 3$. Otherwise, if $[a]$ is adjacent to exactly one $[q_i]$ in $G([P])$, then $[a]=[q_i]^*$, a contradiction. Hence, we assume that $[a]$ is adjacent of $[q_i]$ and $[q_j]$ for $i, j$ in $G([P])$. Hence if $y\in [a]$, then $y$ will be adjacent to every vertex of $[q_i]$ and $[q_j]$. Hence deg$_{G(P)}(y)\geq|[q_i]|+|[q_j]|$ for $i,j\in\{1,\dots,n\}$. Since $[q_i]^*$ is adjacent to $[q_i]$ only (Lemma \ref{property}(7)) in $G([P])$, by Lemma \ref{property}(4), it is easy to prove that deg$_{G(P)}(x)=|[q_i]|$ for $x\in [q_i]^*$. This together with deg$_{G(P)}(y)\geq|[q_i]|+|[q_j]|$ gives that $\delta(G(P))=$deg$(x)=|[q_i]|$, for some $i\in \{1,\dots, n\}$ and $x\in [q_i]^*$. 
Therefore,\\ \underline{\underline{ $\Delta(G^c(P))=$ deg$_{G^c(P)}(x)=|V(G^c(P))|-|[q_i]|-1$ for some $i\in \{1,\dots, n\}$ and $x\in [q_i]^*$-------- (A) }} \vskip 5truept We consider the following cases on $n$: \noindent \textbf{Case-I:} $n=2$.\\ If $n=2$, then $[P]$ is a poset with exactly two atoms $[q_1], [q_2]$ with $|[q_i]|=m_i$. In this case, the complement of the zero-divisor graph $G^c([P])$ of $[P]$ is a graph having two isolated vertices. Therefore $G^c(P)$ is the graph $K_{m_{1}} + K_{m_{2}}$. Clearly, $G^c(P)$ satisfies the Total Coloring Conjecture. \vskip 5truept \noindent \textbf{Case-II:} $n=3$. Now, assume that $n=3$. Then $[P]$ is a poset with three atoms, $[q_1],[q_2]$ and $[q_3]$. Then $V(G([P]))=\{[q_1],[q_2],[q_3],[q_1]^*,[q_2]^*,[q_3]^*\}$. The zero-divisor graph $G([P])$ of $[P]$ and its complement $G^c([P])$ are shown in Figure \ref{fig1}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale =0.5] \begin{scope}[shift={(6,2)}] \draw (0,0) -- (1.5,2);\draw (0,0) -- (-1.5,-1.5); \draw (0,0) -- (3,0);\draw (1.5,4) -- (1.5,2);\draw (3,0) -- (1.5,2); \draw (3,0) -- (4.5,-1.5); \draw [fill=black] (0,0) circle (.1);\draw [fill=black] (3,0) circle (.1);\draw [fill=black] (1.5,2) circle (.1);\draw [fill=black] (1.5,4) circle (.1);\draw [fill=black] (-1.5,-1.5) circle (.1); \draw [fill=black] (4.5,-1.5) circle (.1); \node [above] at (-0.3,0) {$[q_3]$}; \node [above] at (3.2,0) {$[q_2]$};\node [above] at (1.5,4.1) {$[q_1]^*$};\node [above] at (-1.5,-1.4) {$[q_3]^*$};\node [above] at (4.6,-1.4) {$[q_2]^*$};\node [right] at (1.6,2) {$[q_1]$}; \node [below] at (1.85,-3.2) {(a) $G([P])$}; \end{scope} \begin{scope}[shift={(14,3)}] \draw [fill=black] (0,0) circle (.1);\draw [fill=black] (4,0) circle (.1);\draw [fill=black] (2,3) circle (.1);\draw [fill=black] (2,-1.75) circle (.1);\draw [fill=black] (-0.2,2.6) circle (.1);\draw [fill=black] (4.2,2.6) circle (.1); \node [left] at (0,0) {$[q_3]^*$};\node [right] at (4,0) {$[q_2]^*$};\node [above] at (2,3) {$[q_1]^*$};\node [above] at (-0.2,2.7) {$[q_2]$};\node [above] at (4.2,2.7) {$[q_3]$};\node [below] at (2,-1.8) {$[q_1]$}; \draw (0,0) -- (4,0);\draw (0,0) -- (2,3);\draw (0,0) -- (2,-1.75);\draw (0,0) -- (-0.2,2.6);\draw (2,-1.75) -- (4,0);\draw (2,3) -- (4,0);\draw (4.2,2.6) -- (4,0);\draw (2,3) -- (-0.2,2.6);\draw (2,3) -- (4.2,2.6);\node [below] at (1.85,-4.2) {(b) $G^c([P])$}; \end{scope} \end{tikzpicture} \end{center} \caption{}\label{fig1} \end{figure} Since deg$_{G^c(P)}(x)=|V(G^c(P))|-|[q_i]|-1$ for $x\in [q_i]^*$ for some $i$, by $(A)$, and $\Delta(G^c(P))\geq$ deg$_{G^c(P)}(x)$, in view of Theorem \ref{czdg}, to prove that $G^c(P)$ satisfies the Total Coloring Conjecture, it is enough to show that $|[q_i]|< \frac{1}{4} |V(G^c(P))|$ for some $i$. Suppose on the contrary that $|[q_i]|\geq \frac{1}{4} |V(G^c(P))|$ for all $i$. Let $|[q_i]|=l_i$ and $|[q_i]^*| =m_i$, where $l_i, m_i\in \mathbb{N}$. Without loss of generality, assume that $l_3\geq l_2\geq l_1$. By the assumption, $l_i\geq \frac{1}{4} |V(G^c(P))|$ for every $i$ and $l_1+l_2+l_3+m_1+m_2+m_3=|V(G^c(P))|$. Hence $m_1+m_2+m_3 \leq \frac{1}{4} |V(G^c(P))|$. \vskip 5truept \underline{\underline{This implies that both $(m_1+m_3), m_2< \frac{1}{4} |V(G^c(P))|\leq l_1, ~l_2$. \hspace{.2in} --------------- (B)}} \vskip 5truept Since $|[q_i]|=l_i$ and $|[q_i]^*| =m_i$, we denote $[q_i]=\{q_{i1},q_{i2},\dots,q_{il_i}\}$ and $[q_i]^*=\{q_{i1}^*,q_{i2}^*,\dots,q_{im_i}^*\}$. 
By Lemma \ref{property}(4), it is clear that deg$(q_{ij})=$ deg$(q_{ik})$ for $q_{ij}, q_{ik} \in [q_i]$. By $(A)$, we have $\Delta(G^c(P))=$deg$(q_{11}^*)=|V(G^c(P))|-l_1-1=l_1+l_2+l_3+m_1+m_2+m_3-l_1-1= l_2+l_3+m_1+m_2+m_3-1$. \vskip 5truept Now, we prove that $\chi''(G^c(P))\leq \Delta(G^c(P))+2= l_2+l_3+m_1+m_2+m_3+1$. \vskip 5truept Let $G'$ be the vertex-induced subgraph on the vertices of $[q_1]^*,[q_2]^*,[q_3]^*$, and $[q_3]$. First, we do the total coloring of $G'$. Put $A=\{q_{11}^*,q_{12}^*,\dots,q_{1m_1}^*,~q_{21}^*,q_{22}^*,\dots,q_{2m_2}^*,~q_{31}^*,q_{32}^*,\dots,q_{3m_3}^*, q_{31},q_{32},\dots,q_{3l_3}\}$. Consider the complete graph $K_r$ on the set $A$, where $r=m_1+m_2+m_3+l_3$. By Theorem \ref{complete}, $\chi''(K_r)\leq \Delta(K_r)+2=m_1+m_2+m_3+l_3-1+2=m_1+m_2+m_3+l_3+1$. Thus, we can do the total coloring of $K_r$ with at most $m_1+m_2+m_3+l_3+1$ colors. Now, we give the total coloring to $G'$. The vertex $x$ of $G'$ is colored by the color of the vertex $x$ given in the total coloring of $K_r$, and the edge $xy$ of $G'$ is colored by the color of the edge $xy$ given in the total coloring of $K_r$. Next, we do the total coloring of the vertex-induced subgraph on $[q_2]$. For this, the vertex $q_{2j}$ is colored by the color of the vertex $q_{3j}$ in the total coloring of $K_r$, and the edge $q_{2j}q_{2k}$ is colored by the color of the edge $q_{3j}q_{3k}$ in the total coloring of $K_r$, where $j,k\in\{1,2,\dots,l_2\}\subseteq\{1,2,\dots,l_3\}$, as $l_2 \leq l_3$. Similarly, we do the total coloring of the vertex-induced subgraph on $[q_1]$. We do the edge coloring of the edges between the vertices of $[q_3]^*$ and the vertices of $[q_1]$. For this, the edge $q_{3j}^*q_{1k}$ is colored by the color of the edge $q_{3j}^*q_{3k}$ in the total coloring of $K_r$. Note that there are no edges between $[q_3]$ and $[q_3]^*$ in $G^c([P])$. Further, $l_3 \geq l_1$. Hence this edge coloring is possible. We denote the sets $V_1=\{q_{21},q_{22},\dots,q_{2l_2}\}$ and $V_2=\{q_{11}^*,\dots,q_{1m_1}^*,~ q_{31}^*,\dots,q_{3m_3}^*\}$. Consider the complete bipartite graph $K_{s,t}$ on the sets $V_1$ and $V_2$, where $s=l_2, ~ t=m_1+m_3$. By $(B)$ and Theorem \ref{bipartite}, $\chi'(K_{s,t})=\max\{s,t\}=s=l_2$. Note that we use $l_2$ different colors, other than the colors used in the total coloring of $K_r$, to do the edge coloring of $K_{s,t}$. Thus, up to this point, we have used $m_1+m_2+m_3+l_3+1$ colors for the total coloring of $K_r$ and $l_2$ colors for the edge coloring of $K_{s,t}$. Put $V_1'=\{q_{11},q_{12},\dots,q_{1l_1}\}$ and $V_2'=\{q_{21}^*,\dots,q_{2m_2}^*\}$. We consider the complete bipartite graph $K'_{r',s'}$ on the sets $V_1'$ and $V_2'$, where $r'=l_1,~ s'=m_2$. Again by (B) and Theorem \ref{bipartite}, $\chi'(K'_{r',s'})=\max\{r',s'\}=r'=l_1$. To do the edge coloring of $K'_{r',s'}$, we choose $r'=l_1$ colors out of the $l_2$ colors used in the edge coloring of $K_{s,t}$. Combining all of the above, we obtain a total coloring of $G^c(P)$. It is not very difficult to verify that this is a proper total coloring of $G^c(P)$ with $ l_2+l_3+m_1+m_2+m_3+1=\Delta(G^c(P))+2$ colors. Thus, $\chi''(G^c(P))\leq \Delta(G^c(P))+2$. Thus, in this case, $G^c(P)$ satisfies the Conjecture. \vskip 5truept \textbf{Case-III:} $n\geq4$. In view of the discussion in Case-II, if $|[q_i]|< \frac{1}{4} |V(G^c(P))|$ for some $i$, then we are through. Suppose on the contrary that $|[q_i]|\geq \frac{1}{4} |V(G^c(P))|$ for all $i$.
Then $|V(G^c(P))|\geq |[q_1]|+|[q_2]|+|[q_3]|+|[q_4]|+|[q_1]^*|+|[q_2]^*|+|[q_3]^*|+|[q_4]^*|\geq \frac{1}{4} |V(G^c(P))|+\frac{1}{4} |V(G^c(P))|+\frac{1}{4} |V(G^c(P))|+\frac{1}{4} |V(G^c(P))|+1+1+1+1$, as $|[q_i]^*| \geq 1$. This implies that $|V(G^c(P))|\geq |V(G^c(P))|+4$, a contradiction. Thus $|[q_i]|< \frac{1}{4} |V(G^c(P))|$ for some $i$. Therefore $G^c(P)$ satisfies the Conjecture in this case too. This completes the proof. \end{proof} \begin{remark} Now, we show that the techniques used in Theorem \ref{zdgtcc} cannot be used to prove Theorem \ref{czdgtcc} and vice versa. It is easy to see that (in Figure $\ref{example} (b)$) $\Delta(G(P))=12$ and $|V(G(P))|=24$. Clearly, $\Delta(G(P))\ngeq\frac{3}{4} |V(G(P))|$. However, by Theorem \ref{zdgtcc}, $G(P)$ satisfies the Total Coloring Conjecture. From Figure $\ref{example} (d)$, $|V(G^c(P))|=24$ and $\Delta(G^c(P))=19$. Further, $\alpha(G^c(P))=3$. It is easy to verify that $|S|\ngeq |V(G^c(P))| - \Delta(G^c(P))-1$ for any independent set $S$ of $G^c(P)$. However, by Theorem \ref{czdgtcc}, $G^c(P)$ satisfies the Conjecture. \end{remark} \begin{remark}\label{tccrmk} It is easy to observe that if $G$ satisfies the Total Coloring Conjecture, then $G+ I_m$ satisfies the Total Coloring Conjecture. In view of Theorem \ref{czdg}, for any finite simple graph $G$, $G\vee K_m$ satisfies the Total Coloring Conjecture, where $m\in \mathbb{N}$. \end{remark} \section{Applications}\label{applications} In this section, we study the interplay of the zero-divisor graphs of ordered sets and the graphs associated with algebraic structures. Also, we apply our results to various graphs associated with algebraic structures such as the comaximal ideal graph of rings, the (co-)annihilating ideal graph of rings, the zero-divisor graph of reduced rings, the intersection graphs of rings, and the intersection graphs of subgroups of groups. \vskip10pt \textbf{I. The comaximal ideal graph of a ring } \vskip10pt Let $R$ be a commutative ring with identity. Then $(Id(R),\leq)$ is a modular, $1$-distributive lattice (cf. the discussion before Proposition 1.10 of Atiyah and MacDonald \cite{AM} and the fact that the ideal multiplication distributes over the ideal sum) under set inclusion as the partial order. Clearly, sup$\{I,J\}=I+J$ and inf$\{ I, J\} = I\cap J$. It is well known that the lattice $Id( R)$ is a complete lattice with the ideals $(0)$ and $R$ as its least and greatest elements, respectively. Henceforth, we denote the lattice $Id(R)$ by $L$. Let $L^\partial$ be the dual of the lattice $L$. Therefore in $L^\partial$, sup$_{L^\partial}\{I, J\} = I\cap J$ and inf$_{L^\partial}\{I, J\} = I+ J$. The ideal $R$ is the least element of $L^\partial$, and the ideal $(0)$ is the greatest element of $L^\partial$. Since modularity is a self-dual notion, $L^\partial$ is also modular. Further, by duality, $ L^\partial$ is a $0$-distributive lattice. Moreover, the maximal ideals of $R$ are nothing but the atoms of $L^\partial$. Therefore, $L^\partial$ is an atomic lattice. With this preparation, we prove that the comaximal ideal graph of $R$ is nothing but the zero-divisor graph of $L^\partial$. \begin{definition}{[Ye and Wu \cite{yewu}]} Let $R$ be a commutative ring with identity.
We associate a simple undirected graph with $R$, called the {\it comaximal ideal graph $\mathbb{CG}(R)$} of $R$, where the vertices of $\mathbb{CG}(R)$ are the proper ideals of $R$ that are not contained in the Jacobson radical $J(R)$ of $R$, and two vertices $I$ and $J$ are adjacent if and only if $I+J=R$. \end{definition} The following is the modified definition of the comaximal ideal graph. The \textit{modified comaximal ideal graph} $\mathbb{CG}^*(R)$ is the graph whose vertex set is the set of nonzero proper ideals of $R$, and two vertices $I$ and $J$ are adjacent if and only if $I$ and $J$ are comaximal. Clearly, one can see that the comaximal ideal graph $\mathbb{CG}(R)$ is a subgraph of the modified comaximal ideal graph $\mathbb{CG}^*(R)$. Moreover, $\mathbb{CG}^*(R)=\mathbb{CG}(R)+I_m$, where $m=|J(R)|-1$. \vskip 5truept \begin{theorem}\label{same} Let $R$ be a commutative ring with identity and let $L^\partial$ be the dual of the lattice $Id (R)$ of all ideals of $R$. Then $\mathbb{CG}(R)=G(L^\partial)$. \end{theorem} \begin{proof} It is clear that if $R$ is a local ring, then its comaximal ideal graph is a null graph. Hence we consider non-local commutative rings only. Further, in the case of zero-divisor graphs of lattices, it is clear that if a lattice has exactly one atom, then its zero-divisor graph is a null graph. Hence, in what follows, we assume that $R$ is non-local. Observe that $V(G(L^\partial)) =\{ I\in L^\partial\setminus \{0\}| I\wedge J=0$ for some $J\in L^\partial\setminus \{0\}\}$, where $0$ is the least element of $L^\partial$, which is $R$. Hence $V( G(L^\partial) ) =\{ I\in Id(R)\setminus \{R\}| I+ J=R$ for some $J\in Id(R)\setminus \{R\} \}$. Further, $I$ is adjacent to $ J$ in $G(L^\partial)$ if and only if $\inf_{L^\partial}\{I, J\}=\{0\}$, that is, $I+ J=R$. \par Now, we prove $V(\mathbb{CG}(R))=V(G(L^\partial))$. For this, let $I\in V(\mathbb{CG}(R))$. Hence $I\in Id(R)\setminus\{R\}$ and $ I\nsubseteq J(R)$. Therefore there exists a maximal ideal $M_i$ such that $I\nsubseteq M_i$. Clearly, $M_i \not\subseteq J(R)$ and $I+M_i=R$. Hence $I\in V(G(L^\partial))$. Thus, $V(\mathbb{CG}(R))\subseteq V(G(L^\partial))$. \par Let $I\in V(G(L^\partial))$. Hence $ I\in Id(R)\setminus \{R\}$ and there exists $J\neq R$ such that $I+ J=R$. We have to prove that $I\nsubseteq J(R)$. Suppose on the contrary that $I\subseteq J(R)$. Therefore $I\subseteq M_i$ for all $i$. Since $J\neq R$, we have $J\subseteq M_i$ for some $i$. Hence $R=I+J\subseteq M_i$, a contradiction. Hence $I\nsubseteq J(R)$. This proves that $I\in V(\mathbb{CG}(R))$ and hence $V(G(L^\partial))\subseteq V(\mathbb{CG}(R))$. Further, the adjacency in $\mathbb{CG}(R)$ and $G(L^\partial)$ is the same. Thus, the comaximal ideal graph of $R$ is the same as the zero-divisor graph of the lattice $L^\partial$. \end{proof} \vskip 5truept \begin{remark} In view of Theorem \ref{same}, it is clear that the study of the comaximal ideal graph of a commutative ring $R$ with identity is nothing but the study of the zero-divisor graph of a $0$-distributive, modular lattice.\end{remark} \begin{remark} We rewrite the vertex set of the comaximal ideal graph $\mathbb{CG}(R)$ of a commutative ring $R$ with identity as follows: $V(\mathbb{CG}(R))=\{ I\in Id(R)\setminus \{R\} ~ | ~I+ J=R$ for some $J\neq R\}$. \end{remark} \vskip 5truept Let $R$ be an Artinian ring. Then $R=\prod\limits_{i=1}^nR_i$, where $R_i$ is an Artinian local ring for every $i$.
It is easy to observe that $R$ has identity if and only if $R_i$ has identity for all $i$. Hence it follows from the result of Chajda and L\"anger \cite[Theorem 2]{cl} that $Id(R)= Id(R_1) \times Id(R_2) \times \cdots \times Id(R_n) $, where $Id(R_1) \times Id(R_2) \times \cdots \times Id(R_n) $ denotes the (lattice) direct product of the ideal lattices $Id(R_i)$ of $R_i$. Hence, $Id(R)^\partial= Id(R_1)^\partial \times Id(R_2)^\partial \times \cdots \times Id(R_n)^\partial $. \vskip 5truept The following result is immediate from Theorems \ref{edge chromatic number}, \ref{zdgtcc}, \ref{czdgtcc}, Remark \ref{type2} and Theorem \ref{same}. \vskip 5truept \begin{corollary}\label{cgtcc} Let $R$ be a commutative ring with finitely many ideals and let $\mathbb{CG}(R)$ be its comaximal ideal graph. Then $\chi'(\mathbb{CG}(R))=\Delta(\mathbb{CG}(R))$, and both $\mathbb{CG}(R)$ and $\mathbb{CG}^c(R)$ satisfy the Total Coloring Conjecture. Moreover, $\mathbb{CG}(R)$ is of type II if and only if $R=R_1\times R_2$ with $|Id(R_1)|=|Id(R_2)|$. \end{corollary} \vskip 5truept Now, we characterize the chordal comaximal ideal graphs of Artinian rings. \vskip 5truept As an immediate consequence of Corollary \ref{zdgchordproduct} and Theorem \ref{same}, we have the following result. \begin{corollary} \label{cgchord} Let $R$ be an Artinian ring with finitely many ideals. Then \textbf{(A)} $\mathbb{CG}(R)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $R$ is a local ring; \item $R=R_1\times R_2$ and $R_i$ is a field for some $i\in \{1,2\}$; \item $R=F_1\times F_2\times F_3$, where $F_i$ is a field for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(B)} $\mathbb{CG}^{c}(R)$ is chordal if and only if the number of maximal ideals of $R$ is at most $3$. \end{corollary} \vskip 5truept The following result is immediate from Corollary \ref{perfect0lattice} and Theorem \ref{same}. \begin{corollary}\label{cgperfect} Let $R$ be an Artinian ring with finitely many ideals. Then $\mathbb{CG}(R)$ is perfect if and only if $\omega(\mathbb{CG}(R))=|\text{Max}(R)|\leq 4$. \end{corollary} This extends the following result. \begin{corollary} [{\cite[Theorem 3.6]{azadi}}] Let $R$ be an Artinian ring with finitely many ideals. If $|\text{Max}(R)|\leq 4$, then $\mathbb{CG}(R)$ is a perfect graph. \end{corollary} \vskip 15truept \noindent \textbf{II. Annihilating ideal graph and co-annihilating ideal graph of a ring} \vskip 5truept Many researchers have studied annihilating ideal graphs and co-annihilating ideal graphs of rings. In this section, we characterize perfect annihilating ideal graphs and co-annihilating ideal graphs of rings. \vskip 5truept \begin{definition}[Akbari et al. \cite{akbari}] The \textit{co-annihilating-ideal graph} of $R$, denoted by $\mathbb{CAG}^*(R)$, is the graph whose vertex set is the set of all non-zero proper ideals of $R$ and two distinct vertices $I$ and $J$ are adjacent whenever Ann$(I)\cap$ Ann$(J) =(0)$. \end{definition} \begin{definition}[Visweswaran and Patel \cite{viswe}] Let $R$ be a commutative ring with identity. Associate a simple undirected graph with $R$, called the {\it annihilating ideal graph $\mathbb{AG}(R)$} of $R$, where the vertices of $\mathbb{AG}(R)$ are the nonzero annihilating ideals of $R$, and two vertices $I$ and $J$ are adjacent if and only if $I+J$ is also an annihilating ideal of $R$. \end{definition} The following is the modified definition of the annihilating ideal graph.
The modified \textit{annihilating ideal graph} $\mathbb{AG}^*(R)$ is the simple undirected graph whose vertex set $V(\mathbb{AG}^*(R))$ is the set of all nonzero proper ideals of $R$, where two distinct vertices $I, J$ are joined if and only if $I+J$ is also an annihilating ideal of $R$. \begin{lemma}[{\cite[Lemma 2.1]{aghapouramin}}] $I-J$ is an edge of $\mathbb{AG}^*(R)$ if and only if Ann$(I) ~\cap$ Ann$(J)\neq(0)$. Hence $\mathbb{AG}^{*c}(R)=\mathbb{CAG}^*(R)$. \end{lemma} \begin{observation} \label{obs3} If $R$ is an Artinian ring, then for any non-zero proper ideal $I$ of $R$, Ann$_R(I)\neq (0)$. Hence, non-zero proper ideals $I$ and $J$ of $R$ are adjacent in $\mathbb{CAG}^*(R)$ if and only if Ann$(I)\cap$ Ann$(J) =$ Ann$(I+J) =(0)=$Ann$(R)$. Thus, $I$ and $J$ are adjacent in $\mathbb{CAG}^*(R)$ if and only if $I+J=R$, whenever $R$ is an Artinian ring. \end{observation} \begin{lemma}[{\cite[Corollary 1.2]{akbari}}] \label{artiniansame1} If $R$ is an Artinian ring, then $\mathbb{CG}^*(R)=\mathbb{CAG}^*(R)$. \end{lemma} Hence, we have the following result. \begin{theorem}\label{artiniansame} If $R$ is an Artinian ring, then $\mathbb{CG}^*(R)=\mathbb{CG}(R)+I_m=\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)=\mathbb{AG}^{c}(R)$, where $m=|J(R)|-1$. \end{theorem} Hence the following result is immediate from Remark \ref{tccrmk}, Corollary \ref{cgtcc}, and Theorem \ref{artiniansame}. \begin{corollary}\label{cagtcc} Let $R$ be a commutative ring with finitely many ideals and let $\mathbb{CAG}^*(R), ~ \mathbb{AG}^*(R)$ be its co-annihilating ideal graph and annihilating ideal graph, respectively. Then $\chi'(\mathbb{CAG}^*(R))=\chi'(\mathbb{AG}^{*c}(R))=\Delta(\mathbb{CAG}^*(R))=\Delta(\mathbb{AG}^{*c}(R))$, and both $\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$ and $\mathbb{CAG}^{*c}(R)=\mathbb{AG}^{*}(R)$ satisfy the Total Coloring Conjecture. Moreover, $\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$ is of type II if and only if $R=R_1\times R_2$ with $|Id(R_1)|=|Id(R_2)|$. \end{corollary} The following result is immediate from Remark \ref{obs1}, Corollary \ref{cgchord} and Theorem \ref{artiniansame}. \begin{corollary}\label{cagchord} Let $R$ be an Artinian ring with finitely many ideals. Then \textbf{(A)} $\mathbb{CAG}^*(R)=\mathbb{AG}^{*c}(R)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $R$ is a local ring; \item $R=R_1\times R_2$ and $R_i$ is a field for some $i\in \{1,2\}$; \item $R=F_1\times F_2\times F_3$, where $F_i$ is a field for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(B)} $\mathbb{CAG}^{*c}(R)=\mathbb{AG}^*(R)$ is chordal if and only if the number of maximal ideals of $R$ is at most $3$. \end{corollary} Hence the following result is immediate from Remark \ref{obs2}, Corollary \ref{cgperfect}, and Theorem \ref{artiniansame}. The perfectness of $\mathbb{AG}(R)$ is studied in \cite[Corollary 2.3]{aghapouramin}. Corollaries \ref{cagchord} and \ref{cagperfect} are essentially proved for the co-annihilating ideal graph in \cite{mirghadim}. \begin{corollary}\label{cagperfect} Let $R$ be an Artinian ring with finitely many ideals. Then $\mathbb{CAG}^*(R)$ is perfect if and only if $\mathbb{AG}^*(R)$ is perfect if and only if $\omega(\mathbb{CG}^*(R))$, the number of maximal ideals of $R$, is at most $4$. \end{corollary} \vskip 15truept \noindent \textbf{III.
The zero-divisor graph of a reduced ring } \vskip10pt In \cite{djl}, it is mentioned that if $R$ is a reduced Artinian ring with exactly $k$ prime ideals, then there exist fields $F_1,\dots,F_k$ such that $R\cong F_1\times\cdots\times F_k$. Further, it is proved that the ring-theoretic zero-divisor graph $\Gamma(R)=\Gamma(\prod\limits_{i=1}^kF_i)$ equals the poset-theoretic zero-divisor graph of $R$ ($R$ treated as a poset under the partial order given in \cite[Lemma 3.3]{djl}), that is, $\Gamma(\prod\limits_{i=1}^kF_i)=G(\prod\limits_{i=1}^kF_i)$ (cf. \cite[Remark 3.4]{LR1}). Hence the following result is immediate from Theorems \ref{tcc}, \ref{edge chromatic number}, Remark \ref{type2}, Theorems \ref{zdgtcc}, \ref{czdgtcc}. The first part of the result, i.e., $\chi'(\Gamma(R))=\Delta(\Gamma(R))$ and $\Gamma(R)$ satisfies the Total Coloring Conjecture, and the last part of the result, i.e., $\Gamma(R)$ is of type II if and only if $R=F_1\times F_2$ with $|F_1|=|F_2|$, are proved in \cite{nkvj}. \begin{corollary} Let $R$ be a finite reduced ring and let $\Gamma(R)$ be its ring-theoretic zero-divisor graph. Then $\chi'(\Gamma(R))=\Delta(\Gamma(R))$, and both $\Gamma(R)$ and $\Gamma^c(R)$ satisfy the Total Coloring Conjecture. Moreover, $\Gamma(R)$ is of type II if and only if $R=F_1\times F_2$ with $|F_1|=|F_2|$. \end{corollary} In view of Corollary \ref{zdgchordproduct}, we have the following result. \begin{corollary} \label{zdgchordfinite} Let $R$ be a finite reduced ring. Then \textbf{(A)} $\Gamma(R)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $R$ is a field; \item $R=F_1\times F_2$ and $|F_i|=2$ for some $i\in \{1,2\}$, i.e., $F_i\cong \mathbb{Z}_2$ for some $i\in \{1,2\}$; \item $R=F_1\times F_2\times F_3$ and $|F_i|=2$ for all $i\in \{1,2,3\}$, i.e., $R\cong \mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2$. \end{enumerate} \textbf{(B)} $\Gamma^c(R)$ is chordal if and only if the number of maximal ideals of $R$ is at most $3$, i.e., $R\cong F_1\times F_2 \times F_3$. \end{corollary} In view of Corollary \ref{zdgperfectproduct}, we have the following result. \begin{corollary}[Smith \cite{smith}] \label{zdgperfectfinite} Let $R$ be a finite reduced ring. Then $\Gamma(R)$ is perfect if and only if the number of maximal ideals of $R$ is at most $ 4$, i.e., $R\cong F_1\times F_2 \times F_3 \times F_4$. \end{corollary} \vskip 5truept It is not very difficult to prove that a finite bounded poset $P$ has a unique dual atom if and only if $Z(P^\partial)=\{0\}$, where $P^\partial$ is the dual of $P$. If $\textbf{P}=\prod \limits _{i=1}^{n}P^i$, where $P^i$ is a finite bounded poset for all $i$, then it is easy to see that $\textbf{P}^\partial=\prod \limits _{i=1}^{n}(P^i)^\partial$. If a finite poset $P$ has exactly one atom, then $\{x,y\}^\ell=\{0\}$, where $0$ is the least element of $P$ (the lower cone taken in $P$), if and only if either $x=0$ or $y=0$, for all $x, y\in P$. Also, if a finite poset $P$ has exactly one dual atom, then $\{x,y\}^\ell=\{0\}$, where $0$ is the least element of $P^\partial$ (the lower cone taken in $P^\partial$), if and only if either $x=0$ or $y=0$, for all $x,y\in P^\partial$. \begin{lemma}\label{5.12} Let $\textbf{P}=\prod \limits _{i=1}^{n}P^i$, where $P^i$ is a finite bounded poset that has exactly one atom and exactly one dual atom for all $i$. Then the zero-divisor graph of $\textbf{P}$ is equal to the zero-divisor graph of $\textbf{P}^\partial$.
\end{lemma} If $C$ is a finite chain, then $\{x,y\}^\ell=\{0\}$ if and only if either $x=0$ or $y=0$, for all $x,y\in C$. Further, every finite chain has exactly one atom and exactly one dual atom. From this fact and Lemma \ref{5.12}, the following result is immediate. \begin{lemma}\label{chains} The zero-divisor graph of $\textbf{P}=\prod\limits_{i=1}^nP^i$, where $P^i$ is a finite bounded poset with $Z(P^i)=\{0\}$ for all $i$, is isomorphic to the zero-divisor graph of $\textbf{C}=\prod\limits_{i=1}^nC_i$, where $|C_i|=|P^i|$ for all $i$. Moreover, the zero-divisor graph of $\textbf{C}$ is equal to the zero-divisor graph of $\textbf{C}^\partial$. \end{lemma} \begin{remark} Note that though the zero-divisor graph of $\textbf{C}$ is equal to the zero-divisor graph of $\textbf{C}^\partial$, the zero-divisor graph of $\textbf{P}^\partial$, where $\textbf{P}=\prod\limits_{i=1}^nP^i$ with $P^i$ a finite bounded poset such that $Z(P^i)=\{0\}$ for every $i$, is not isomorphic to the zero-divisor graph of $\textbf{C}^\partial$. However, if each $P^i$ has a unique dual atom, then $G(\textbf{C}^\partial) \cong G(\textbf{P}^\partial)$. \end{remark} \vskip 15truept \noindent \textbf{IV. Intersection graphs of ideals of Artinian principal ideal rings } \vskip10pt Let $R$ be a commutative ring with identity. Then the \textit{intersection graph of ideals} $\mathbb{IG}(R)$ of $R$ is the graph whose vertices are the nonzero proper ideals of $R$ such that distinct vertices $I$ and $J$ are adjacent if and only if $I\cap J\neq\{0\}$; see \cite{chakra}. The vertex set of the zero-divisor graph ${G^*}(Id(R))$ of a lattice $Id(R)$ is $Id(R)\setminus \{0_{_{Id(R)} },1_{_{Id(R)}}\}$, that is, the set of nonzero proper ideals of $R$. Therefore $V(\mathbb{IG}(R))=V({G^*}(Id(R)))=V({G}^{*c}(Id(R)))$. The ideals $I$ and $J$ are adjacent in ${G^*}(Id(R))$ if and only if $I\wedge J=0_{_{Id(R)}}$, that is, the ideals $I$ and $J$ are adjacent in ${G^*}(Id(R))$ if and only if $I\cap J=(0)$. The ideals $I$ and $J$ are adjacent in ${G}^{*c}(Id(R))$ if and only if $I\cap J\neq (0)$. Hence we have the following result. \begin{lemma}\label{igczdg} Let $R$ be a commutative ring with identity. Then $\mathbb{IG}(R)={G}^{*c}(Id(R))$. \end{lemma} The following discussion can be found in \cite{djl}. Let $R$ be a commutative ring with identity. Recall that $R$ is a \emph{special principal ideal ring} (or, \emph{SPIR} for brevity) if $R$ is a local Artinian principal ideal ring (cf. \cite{Hung}). If $R$ is an SPIR with maximal ideal $M$, then there exists $n\in\mathbb{N}$ such that $M^n=\{0\}$, $M^{n-1}\neq\{0\}$, and if $I$ is an ideal of $R$, then $I=M^i$ for some $i\in\{0,1,\dots,n\}$ (\cite[Proposition 4]{Hung}). In this case, $M$ is nilpotent with index of nilpotency equal to $n$, and the lattice of ideals of $R$ is isomorphic to the chain $C$ of length $n$ (a chain of $n+1$ elements). By \cite[Lemma 10]{Hung}, $R$ is an Artinian principal ideal ring if and only if there exist SPIRs $R_1,\dots,R_n$ such that $R\cong R_1\times\cdots\times R_n$ (it is also a straightforward consequence of the structure theorem of Artinian rings in \cite[Theorem 8.7]{AM}). Thus, the ideal lattice $Id(R)$ of an Artinian principal ideal ring is a product of chains $\textbf{C}=\prod\limits_{i=1}^nC_i$ ($C_i$ is a chain of length $n_i$), where $R\cong R_1\times\cdots\times R_n$ and the index of nilpotency of the maximal ideal $M_i$ of $R_i$ is $n_i$.
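The product-of-chains description above can be checked computationally on small examples. The following Python sketch (an illustration added here, not part of the original development; the function names and the choice $n=360$ are ours) identifies each nonzero proper ideal $(d)$ of $\mathbb{Z}_n$ with a point of the product of chains determined by the prime factorization of $n$, and verifies that $(d)\cap(e)=(0)$ exactly when the meet of the two points is the least element, so that $\mathbb{IG}(\mathbb{Z}_n)$ is the complement of the zero-divisor graph of a product of chains, in accordance with Lemma \ref{igczdg}.

```python
from itertools import combinations
from math import gcd

def factorization(n):
    """Prime factorization of n >= 2 as a dict {prime: exponent}, by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def check_product_of_chains(n):
    """Identify the nonzero proper ideal (d) of Z_n, d = p_1^{e_1}...p_k^{e_k}, with the point
    (n_1 - e_1, ..., n_k - e_k) of the product of chains C_1 x ... x C_k (C_i of length n_i),
    and verify: the intersection of (d) and (e) is the zero ideal  <=>  the componentwise
    minimum (the meet) of the two points is the least element (0, ..., 0)."""
    primes = sorted(factorization(n).items())            # [(p_1, n_1), ..., (p_k, n_k)]
    ideals = [d for d in range(2, n) if n % d == 0]      # generators of the nonzero proper ideals
    def coords(d):
        fd = factorization(d)
        return tuple(e - fd.get(p, 0) for p, e in primes)
    for d, e in combinations(ideals, 2):
        intersection_is_zero = (d * e) // gcd(d, e) == n              # lcm(d, e) = n
        meet_is_least = all(min(a, b) == 0 for a, b in zip(coords(d), coords(e)))
        assert intersection_is_zero == meet_is_least
    return len(ideals)

# hence the edges of IG(Z_n) are exactly the non-edges of the zero-divisor graph of the
# product of chains, i.e., IG(R) = G^{*c}(Id(R)) for R = Z_n
print(check_product_of_chains(360))   # 360 = 2^3 * 3^2 * 5; prints the number of vertices of IG(Z_360)
```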
\vskip 5truept By Theorems \ref{edge chromatic number}, \ref{zdgtcc}, \ref{czdgtcc}, Remark \ref{type2}, Lemma \ref{chains} and Lemma \ref{igczdg}, the following result follows. \vskip 5truept \begin{corollary}\label{spirtcc} Let $R$ be an Artinian principal ideal ring and let $\mathbb{IG}(R)$ be the intersection graph of ideals of $R$. Then $\chi'(\mathbb{IG}^c(R))=\Delta(\mathbb{IG}^c(R))$, and both $\mathbb{IG}(R)$ and $\mathbb{IG}^c(R)$ satisfy the Total Coloring Conjecture. Moreover, $\mathbb{IG}^c(R)$ is of type II if and only if $R=R_1\times R_2$ with $|Id(R_1)|=|Id(R_2)|$. \end{corollary} \noindent The next result characterizes the chordal graphs $\mathbb{IG}^c(R)$ and $\mathbb{IG}(R)$ for an Artinian principal ideal ring $R$. \vskip 5truept Corollary \ref{zdgchordproduct}, Lemma \ref{chains} and Lemma \ref{igczdg} yield the following result. \vskip 5truept \begin{corollary}\label{spirchordal} \label{igchord} Let $R$ be an Artinian principal ideal ring. Then \textbf{(A)} $\mathbb{IG}^c(R)$ is chordal if and only if one of the following holds: \begin{enumerate} \item $R$ is a local ring; \item $R=R_1\times R_2$ and $R_i$ is a field for some $i\in \{1,2\}$; \item $R=F_1\times F_2\times F_3$, where $F_i$ is a field for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(B)} $\mathbb{IG}(R)$ is chordal if and only if the number of maximal ideals of $R$ is at most $3$. \end{corollary} \vskip 5truept From Corollary \ref{zdgperfectproduct}, Lemma \ref{chains} and Lemma \ref{igczdg}, we have: \vskip 5truept \begin{corollary}\label{spirperfect} Let $R$ be an Artinian principal ideal ring. Then $\mathbb{IG}(R)$ is perfect if and only if the number of maximal ideals of $R$ is at most $ 4$. \end{corollary} \begin{theorem}[Das \cite{das}] The intersection graph of ideals of $\mathbb{Z}_n$ is perfect if and only if $n = p_1^{n_1}p_2^{n_2}p_3^{n_3}p_4^{n_4}$, where $p_i$ are distinct primes and $n_i\in \mathbb{N}\cup \{0\}$, that is, the number of distinct prime factors of $n$ is less than or equal to $4$. \end{theorem} \vskip 5truept \noindent \textbf{ Note that Corollary \ref{cgchordperfect} follows from Corollaries \ref{cgchord}, \ref{cgperfect}, Theorem \ref{artiniansame}, and Corollaries \ref{cagtcc}, \ref{cagchord}, \ref{cagperfect}, \ref{spirtcc}, \ref{spirchordal}, \ref{spirperfect}.} \vskip15pt \noindent \textbf{V. Intersection graph of subgroups of a group } \vskip10pt Let $\mathcal{G}$ be a group. Then the \textit{intersection graph of subgroups} $\mathbb{IG}(\mathcal{G})$ of $\mathcal{G}$ is the graph whose vertices are the proper non-trivial subgroups of $\mathcal{G}$ such that distinct vertices $H$ and $K$ are adjacent if and only if $H\cap K\neq\{e\}$. Let $L(\mathcal{G})$ be the subgroup lattice of $\mathcal{G}$. The vertex set of ${G^*}(L(\mathcal{G}))$ is $L(\mathcal{G})\setminus \{0_{L(\mathcal{G})},1_{L(\mathcal{G})}\}$, that is, the set of proper non-trivial subgroups of $\mathcal{G}$. Therefore, $V(\mathbb{IG}(\mathcal{G}))=V({G^*}(L(\mathcal{G})))=V({G}^{*c}(L(\mathcal{G})))$. The subgroups $H$ and $K$ are adjacent in ${G^*}(L(\mathcal{G}))$ if and only if $H\wedge K=0_{L(\mathcal{G})}$, that is, the subgroups $H$ and $K$ are adjacent in ${G^*}(L(\mathcal{G}))$ if and only if $H\cap K=\{e\}$. The subgroups $H$ and $K$ are adjacent in ${G}^{*c}(L(\mathcal{G}))$ if and only if $H\cap K\neq \{e\}$. Hence we have the following result. \begin{lemma} Let $\mathcal{G}$ be a group and $L(\mathcal{G})$ be the subgroup lattice of $\mathcal{G}$. Then $\mathbb{IG}(\mathcal{G})={G}^{*c}(L(\mathcal{G}))$.
\end{lemma} \begin{remark}\label{groupspir} If $p_1,\dots,p_k\in\mathbb{N}$ are (not necessarily distinct) prime numbers, then the intersection graph of subgroups $\mathbb{IG}(\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}})$ of the group $\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}}$ is the same as the intersection graph of ideals of the ring $\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}}$. This is because $\mathbb{Z}_{p_i^{n_i}}$ is an SPIR whose maximal ideal has index of nilpotency equal to $n_i$ for every $i\in\{1,\dots,k\}$. \end{remark} \vskip 5truept In view of Corollaries \ref{spirtcc}, \ref{spirchordal}, \ref{spirperfect} and Remark \ref{groupspir}, we have the following corollaries. \vskip 5truept \begin{corollary} Let $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}}$ be a group, where $p_1,\dots,p_k\in\mathbb{N}$ are (not necessarily distinct) prime numbers and let $\mathbb{IG}(\mathcal{G})$ be the intersection graph of subgroups of $\mathcal{G}$. Then $\chi'(\mathbb{IG}^c(\mathcal{G}))=\Delta(\mathbb{IG}^c(\mathcal{G}))$, and both $\mathbb{IG}(\mathcal{G})$ and $\mathbb{IG}^c(\mathcal{G})$ satisfy the Total Coloring Conjecture. Moreover, $\mathbb{IG}^c(\mathcal{G})$ is of type II if and only if $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times \mathbb{Z}_{p_2^{n_2}}$ with $n_1=n_2$. \end{corollary} \begin{corollary} \label{iggchord} Let $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}}$ be a group, where $p_1,\dots,p_k\in\mathbb{N}$ are (not necessarily distinct) prime numbers. Then \textbf{(A)} $\mathbb{IG}^c(\mathcal{G})$ is chordal if and only if one of the following holds: \begin{enumerate} \item $\mathcal{G}$ is a cyclic group of order $p_1^{n_1}$; \item $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times \mathbb{Z}_{p_2^{n_2}}$, with $n_i=1$ for some $i\in \{1,2\}$; \item $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times \mathbb{Z}_{p_2^{n_2}}\times \mathbb{Z}_{p_3^{n_3}}$, with $n_i=1$ for all $i\in \{1,2,3\}$. \end{enumerate} \textbf{(B)} $\mathbb{IG}(\mathcal{G})$ is chordal if and only if the number of maximal subgroups of $\mathcal{G}$ is at most $3$ (that is, $k\leq 3$). \end{corollary} \begin{corollary} Let $\mathcal{G}=\mathbb{Z}_{p_1^{n_1}}\times\cdots\times\mathbb{Z}_{p_k^{n_k}}$ be a group, where $p_1,\dots,p_k\in\mathbb{N}$ are (not necessarily distinct) prime numbers. Then $\mathbb{IG}(\mathcal{G})$ is perfect if and only if the number of maximal subgroups of $\mathcal{G}$ is at most $4$ (that is, $k\leq 4$). \end{corollary} \vskip 5truept \noindent \textbf{Acknowledgment:} The first author is financially supported by the Council of Scientific and Industrial Research (CSIR), New Delhi, via Junior Research Fellowship Award Letter No. 09/137(0620)/2019-EMR-I. \vskip 10pt \noindent\textbf{Conflict of interest:} The authors declare that there is no conflict of interest regarding the publishing of this paper. \vskip 10pt \noindent\textbf{Authorship Contributions:} Both the authors contributed equally to the study of zero-divisor graphs of ordered sets and their applications to algebraic structures. Both the authors read and approved the final version of the manuscript. \vskip10pt \begin{thebibliography}{99} \bibitem{aghapouramin} V. Aghapouramin and M. J. Nikmehr, \emph{Perfectness of a graph associated with annihilating ideals of a ring}, Discrete Math. Algorithms Appl., 10(4) (2018), 1850047(11 pages). \bibitem{afkhami} M. Afkhami, Z. Barati and K. Khashyarmanesh, \emph{Planar zero divisor graphs of partially ordered sets}, Acta Math.
Hungar., 137(1-2) (2012), 27-35. \bibitem{akbari} Saeeid Akbari, Abbas Alilou, Jafar Amjadi, and Seyed Mahmoud Sheikholeslami, \emph{The Co-annihilating-ideal Graphs of Commutative Rings}, Canad. Math. Bull., 60(1) (2017), 3-11. \bibitem{adlag} David F. Anderson and John D. LaGrange, \emph{Some remarks on the compressed zero-divisor graph}, J. Algebra, 447 (2016), 297-321. \bibitem{azadi} Mehrdad Azadi, Zeinab Jafari and Changiz Eslahchi, \emph{On the comaximal ideal graph of a commutative ring}, Turkish J. Math., 40 (2016), 905-913. \bibitem{AM} M. F. Atiyah and I. G. MacDonald, \emph{Introduction to Commutative Algebra}, Addison-Wesley, Reading, MA, (1969). \bibitem{bagheri} Saeid Bagheri, Fatemeh Nabael, Rashid Rezaeii and Karim Samei, \emph{Reduction graph and its application on algebraic graphs}, Rocky Mountain J. Math., 48(3) (2018), 729-751. \bibitem{Be} I. Beck, \emph{Coloring of a commutative ring}, J. Algebra, 116 (1988), 208-226. \bibitem{mbehzad} M. Behzad, \emph{Graphs and their chromatic numbers}, Ph.D. Thesis, Michigan State University, East Lansing, (1965). \bibitem{behzad} M. Behzad, G. Chartrand, and J. K. Cooper, Jr., \emph{The Colour Numbers of Complete Graphs}, J. Lond. Math. Soc. (2), 42 (1967), 226-228. \bibitem{cl} Ivan Chajda, G\"unther Eigenthaler and Helmut L\"anger, \emph{Ideals of direct products of rings}, Asian-Eur. J. Math., 11 (4) (2018), 1850094(6 pages). \bibitem{chakra} Ivy Chakrabarty, Shamik Ghosh, T. K. Mukherjee, and M. K. Sen, \emph{Intersection graphs of ideals of rings}, Discrete Math., 309(17)(2009), 5381-5392. \bibitem{cm} C. Chameni-Nembua and B. Monjardet, \emph{Finite pseudocomplemented lattices}, European J. Combin., 13(2) (1992), 89-107. \bibitem{strongperfect} M. Chudnovsky, N. Robertson, P. Seymour, R. Thomas, \emph{The strong perfect graph theorem}, Ann. of Math. (2), 164(1) (2006), 51–229. \bibitem{das} A. Das, \emph{On perfectness of intersection graph of ideals of $\mathbb{Z}_n$}, Discuss. Math. Gen. Algebra Appl., 37(2) (2017), 119-126. \bibitem{djl} Sarika Devhare, Vinayak Joshi and John D. LaGrange, \emph{Eulerian and Hamiltonian complements of zero-divisor graphs of pseudocomplemented posets}, Palest. J. Math., 8(2) (2019), 30-39. \bibitem{dirac} G. A. Dirac, \emph{ On rigid circuit graphs}, Abh. Math. Semin. Univ. Hambg., 38 (1961), 18-26. \bibitem{GroLS93} M. Gr\"otschel, L. Lov\'asz and A. Schrijver, \emph{Geometric Algorithms and Combinatorial Optimization}, 2nd edn. Springer, Berlin (1993). \bibitem{Hung}T. W. Hungerford, \emph{On the structure of principal ideal rings}, Pacific J. Math., 25(3) (1968), 543-547. \bibitem{jan} M. F. Janowitz, \emph{Section semicomplemented lattices}, Math. Z., 63 (1968), 63-76. \bibitem{HJ} R. Hala\v{s} and M. Jukl, \emph{On Beck's coloring of partially ordered sets}, Discrete Math., 309 (2009), 4584-4589. \bibitem{halas} R. Hala\v{s}, \emph{ Pseudocomplemented ordered sets}, Arch. Math. (Brno), 29 (1993), 153-160. \bibitem{H} R. Hala\v{s}, \emph{Some properties of Boolean ordered sets}, Czechoslovak Math. J., 46 (1996), 93-98. \bibitem{tjbt} Tommy R. Jensen and Bjarne Toft, \emph{Graph coloring problems}, Wiley-Inter-science Series in Discrete Mathematics and optimization, (1995). \bibitem{vjssc} Vinayak Joshi, \emph{On Completion of Section Semicomplemented Posets}, Southeast Asian Bull. Math., 31 (2007), 881-892. \bibitem{joshi} Vinayak Joshi, \emph{Zero divisor graph of a poset with respect to an ideal}, Order, 29 (2012), 499-506. 
\bibitem{JK} Vinayak Joshi and Anagha Khiste, \emph{The zero-divisor graphs of Boolean posets}, Math. Slovaca, 64 (2014), 511-519. \bibitem{jm} Vinayak Joshi and Nilesh Mundlik, \emph{Baer ideals in 0-distributive posets}, Asian-Eur. J. Math., 9(3) (2016), 1650055 (16 pages). \bibitem{jwp} Vinayak Joshi, H. Y. Pourali and B. N. Waphare, \emph{The graph of equivalence classes of zero divisors}, International Scholarly Research Notices, (2014), Article ID 896270, 7 pages. \bibitem{jw1} Vinayak Joshi and B. N. Waphare, \emph{Characterization of 0-distributive posets}, Math. Bohem., 130(1) (2005), 73-80. \bibitem{Kar72} R.M. Karp, \emph{Reducibility among Combinatorial Problems. In: Miller R.E., Thatcher J.W., Bohlinger J.D. (eds) Complexity of Computer Computations}, The IBM Research Symposia Series. Springer, Boston, MA., (1972), 85–103. \bibitem{nkvj} Nilesh Khandekar and Vinayak Joshi, \emph{Zero-divisor graphs and total coloring conjecture}, Soft Comput, 24 (2020), 18273-18285. \bibitem{LR1} J. D. LaGrange and K. A. Roy, \emph{Poset graphs and the lattice of graph annihilators}, Discrete Math., 313(10) (2013), 1053-1062. \bibitem{LR}J. Larmerov\'{a} and J. Rach\r{u}nek, \emph{Translations of distributive and modular ordered sets}, Acta Univ. Palack. Olomuc. Fac. Rerum Natur. Math., 91 (1988), 13-23. \bibitem{perfectlovasz} L\'aszl\'o Lov\'asz, \textit{A characterization of perfect graphs}, J. Combin. Theory Ser. B, 13(2) (1972), 95-98. \bibitem{LW}D. Lu and T. Wu, \emph{The zero-divisor graphs of partially ordered sets and an application to semigroups}, Graphs Combin., 26 (2010), 793-804. \bibitem{mm} F. Maeda and S. Maeda, \emph{Theory of Symmetric Lattices}, Grundlehren der mathematischen Wissenschaften, 173 (1970, illustrated 2012). \bibitem{mirghadim} S. M. Mirghadim, M. J. Nikmehr and R. Nikandish, \emph{On perfect co-annihilating-ideal graph of a commutative Artinian ring}, Kragujevac J. Math., 45(1) (2021), 63-73. \bibitem{ashkan} Ashkan Nikseresht, \emph{Chordality of graphs associated to commutative rings}, Turkish J. Math., 42 (2018), 2202-2213. \bibitem{murthy} T. Srinivasa Murthy, \emph{A proof of the Total Coloring Conjecture}, arXiv (2020), arXiv:2003.09658. \bibitem{awj} Avinash Patil, B. N. Waphare and Vinayak Joshi, \emph{Perfect zero-divisor graphs}, Discrete Math., 340(4) (2017), 740-745. \bibitem{smith} Bennett Smith, \emph{Perfect Zero-Divisor Graphs of $\mathbb{Z}_n$}, Rose-Hulman Undergrad. Math J., 17(2) (2016), 111-132. \bibitem{stern} Manfred Stern, \emph{Semimodular lattices theory and applications}, Cambridge University Press, 1999. \bibitem{venkat} P. V. Venkatanarasimhan, \emph{Pseudo-complements in posets}, Proc. Amer. Math. Soc., 28(1) (1971), 9-17. \bibitem{viswe} S. Visweswaran and Hiren D. Patel, \emph{ A graph associated with the set of all nonzero annihilating ideals of a commutative ring}, Discrete Math. Algorithms Appl., 6(4) (2014), 1450047(22 pages). \bibitem{vgv1} V. G. Vizing, \emph{On an estimate of the chromatic class of a p-graph}, (in Russian), Metody Diskret. Analiz., 3 (1964), 25-30. \bibitem{vgv2} V. G. Vizing, \emph{The chromatic class of a multigraph}, (in Russian), Kibernetika(Kiev) No. 3 (1965), 29-39. English translation in cybernetics, 1 (1965), 32-41. \bibitem{yap} H. P. Yap, \emph{Some Topics in Graph Theory}, London Math. Soc. Lecture Note Ser., 108 (1986). \bibitem{tcyap} H. P. Yap, \emph{Total colorings of graphs}, Lecture Notes in Mathematics, Springer-Verlag Berlin Heidelberg, 1623 (1996). \bibitem{yewu} M. Ye and T. S. 
Wu, \emph{Comaximal ideal graphs of commutative rings}, J. Algebra Appl., 11(6) (2012), 1250114 (14 pages). \bibitem{jw} B. N. Waphare and Vinayak Joshi, \emph{On uniquely complemented posets}, Order, 22 (2005), 11-20. \bibitem{west} D. B. West, \emph{Introduction to Graph Theory}, (Prentice Hall, $2^{nd}$ edition, 2001). \end{thebibliography} \end{document} \par Clearly, every graph $G$ requires at least $\Delta(G)+1$ colors for the total coloring. The \textit{vertex chromatic number} $\chi(G)$ of a graph $G$ is the minimum number of colors required to color the vertices of $G$ such that no two adjacent vertices receive the same color, whereas the \textit{edge chromatic number} $\chi'(G)$ of $G$ is the minimum number of colors required to color the edges of $G$ such that incident edges receive different colors. Vizing \cite{vgv1, vgv2} obtained an upper bound for the edge chromatic number of a graph $G$. It should be noted that Ram Prakash Gupta \cite{gupta} also independently calculated the same bound for $\chi'(G)$ in terms of the maximum degree $\Delta(G)$ of $G$. A graph $G$ is {\it class one}, if $\chi'(G)=\Delta(G)$ and is {\it class two}, if $\chi'(G)=\Delta(G)+1$. \vskip 5truept \begin{theorem}[Vizing-Gupta Theorem \cite{mbl}]\label{vgt} {If $G$ is a finite simple graph, then either $\chi{'}(G)=\Delta(G)$ or $\chi{'}(G)=\Delta(G)+1$.} \end{theorem} \par Another famous coloring of graphs is the total coloring. A graph is said to be {\it type I}, if $\chi''(G) =\Delta(G)+1$ and is said to be {\it type II}, if $\chi''(G) =\Delta(G)+2$. The Conjecture is true for some important classes of graphs, such as bipartite graphs, most of the planar graphs except those with maximum degree $6$. Note that the Total Coloring Conjecture is still open. Recently, T. Srinivasa Murthy \cite{murthy} claimed that the Conjecture is true. However, this paper is unpublished, and now the third revised version is available on arXiv (2003.09658v1). \par For coloring of graphs, the readers are referred to Tommy R. Jensen and Bjarne Toft \cite{tjbt}, and Michael Stiebitz, Diego Scheide, Bjarne Toft, and Lene M. Favrholdt \cite{mbl}. In \cite{nkvj}, we have proved the Total Coloring Conjecture for the zero-divisor graph of a particular class of posets. We continue this study. In this paper, we prove the Total Coloring Conjecture for the zero-divisor graphs of finite posets. Further, it is proved that the complement of the zero-divisor graph of 0-distributive posets satisfies the Total Coloring Conjecture. These results are applied to the comaximal ideal graph. In fact, it is proved that the comaximal ideal graph of a commutative ring with identity $R$ is nothing but the zero-divisor graph of a specially constructed poset of ideals of $R$. Therefore comaximal ideal graph too satisfies the Total Coloring Conjecture. The coloring of the zero-divisor graph of a commutative ring with identity was studied by Beck \cite{Be} whereas for ordered sets, the coloring of zero-divisor graphs is studied by Hala\v{s} and Jukl \cite{HJ}, Lu and Wu \cite{LW}, Joshi \cite{joshi}, Joshi and Khiste \cite{JK}, Joshi, Pourali and Waphare \cite{jwp}, Devhare, Joshi and LaGrange \cite{djl2}, etc.
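The class and type terminology above can be illustrated computationally on very small graphs. The following Python sketch (added here for illustration only; it is not taken from the paper, and the exhaustive backtracking is feasible only for tiny instances) computes $\Delta(G)$, $\chi'(G)$ and $\chi''(G)$, from which the class and the type of a small graph can be read off.

```python
from itertools import combinations

def chromatic_number(elements, conflicts, lower=1):
    """Smallest k such that the elements can be k-colored with no conflicting pair sharing
    a color (exhaustive backtracking; conflicts must be given symmetrically)."""
    def colorable(k):
        color = {}
        def backtrack(i):
            if i == len(elements):
                return True
            e = elements[i]
            used = {color[f] for f in conflicts[e] if f in color}
            for c in range(k):
                if c not in used:
                    color[e] = c
                    if backtrack(i + 1):
                        return True
                    del color[e]
            return False
        return backtrack(0)
    k = lower
    while not colorable(k):
        k += 1
    return k

def edge_and_total_chromatic(vertices, edges):
    """Return (Delta, chi', chi'') of a small simple graph."""
    edges = [tuple(sorted(e)) for e in edges]
    Delta = max(sum(v in e for e in edges) for v in vertices)
    # edge coloring: two edges conflict when they share an endpoint
    edge_conf = {e: [f for f in edges if f != e and set(e) & set(f)] for e in edges}
    chi_prime = chromatic_number(edges, edge_conf, lower=Delta)
    # total coloring: adjacent vertices, incident vertex-edge pairs, and adjacent edges conflict
    elems = list(vertices) + edges
    conf = {x: [] for x in elems}
    for u, v in edges:
        conf[u] += [v, (u, v)]
        conf[v] += [u, (u, v)]
        conf[(u, v)] += [u, v]
    for e, f in combinations(edges, 2):
        if set(e) & set(f):
            conf[e].append(f)
            conf[f].append(e)
    chi_double_prime = chromatic_number(elems, conf, lower=Delta + 1)
    return Delta, chi_prime, chi_double_prime

# the 5-cycle: Delta = 2, chi' = 3 (class two) and chi'' = 4 = Delta + 2 (type II)
print(edge_and_total_chromatic([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]))
```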
2205.04881v1
http://arxiv.org/abs/2205.04881v1
Bounds for Privacy-Utility Trade-off with Per-letter Privacy Constraints and Non-zero Leakage
\documentclass[10pt,conference]{IEEEtran} \usepackage{fixltx2e} \usepackage{cite} \usepackage{url} \usepackage{mathrsfs} \usepackage{color} \usepackage{float} \usepackage[caption=false]{subfig} \ifCLASSINFOpdf \usepackage[pdftex]{graphicx} \graphicspath{{Figs/}} \DeclareGraphicsExtensions{.pdf,.jpeg,.png} \else \usepackage[cmex10]{amsmath} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{bm} \usepackage{xfrac} \usepackage{empheq} \usepackage[normalem]{ulem} \usepackage{soul} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newtheorem{theorem}{Theorem} \newtheorem*{conclusion}{Conclusion} \newtheorem{example}{Example} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{claim}{Claim} \newtheorem{notation}{Notation} \newtheorem{corollary}{Corollary} \newtheorem{remark}{Remark} \theoremstyle{definition} \newtheorem{definition}{Definition} \usepackage[a4paper,bindingoffset=0.2in,left=1in,right=1in,top=1in,bottom=1in,footskip=.25in]{geometry} \graphicspath{{figs/}} \interdisplaylinepenalty=2500 \begin{document} \newgeometry{left=0.7in,right=0.7in,top=.5in,bottom=1in} \title{Bounds for Privacy-Utility Trade-off with Per-letter Privacy Constraints and Non-zero Leakage} \vspace{-5mm} \author{ \IEEEauthorblockN{Amirreza Zamani, Tobias J. Oechtering, Mikael Skoglund \vspace*{0.5em} \IEEEauthorblockA{\\ Division of Information Science and Engineering, KTH Royal Institute of Technology \\ Email: \protect [email protected], [email protected], [email protected] }} } \maketitle \section{Introduction} The privacy mechanism design problem is recently receiving increased attention in information theory \cite{ makhdoumi, issa, Calmon2,yamamoto, sankar,borz, gun,khodam,Khodam22,kostala, dwork1, calmon4, issajoon, asoo, Total, issa2,zamani2022bounds,kosenaz}. Specifically, in \cite{makhdoumi}, the concept of a privacy funnel is introduced, where the privacy utility trade-off has been studied considering a distortion measure for utility and the log-loss as privacy measure. In \cite{issa}, the concept of maximal leakage has been introduced and some bounds on the privacy utility trade-off have been derived. Fundamental limits of the privacy utility trade-off measuring the leakage using estimation-theoretic guarantees are studied in \cite{Calmon2}. A related secure source coding problem is studied in \cite{yamamoto}. In both \cite{yamamoto} and \cite{sankar}, the privacy-utility trade-offs considering expected distortion and equivocation as a measures of utility and privacy are studied. The problem of privacy-utility trade-off considering mutual information both as measures of utility and privacy given the Markov chain $X-Y-U$ is studied in \cite{borz}. Under the perfect privacy assumption it is shown that the privacy mechanism design problem can be reduced to a linear program. This has been extended in \cite{gun} considering the privacy utility trade-off with a rate constraint on the disclosed data. Moreover, in \cite{borz}, it has been shown that information can be only revealed if the kernel (leakage matrix) between useful data and private data is not invertible. In \cite{khodam}, we generalize \cite{borz} by relaxing the perfect privacy assumption allowing some small bounded leakage. More specifically, we design privacy mechanisms with a per-letter privacy criterion considering an invertible kernel where a small leakage is allowed. 
We generalized this result to a non-invertible leakage matrix in \cite{Khodam22}.\\ In this paper, random variable (RV) $Y$ denotes the useful data and is correlated with the private data denoted by RV $X$. Furthermore, RV $U$ describes the disclosed data. Two scenarios are considered in this work; in both of them, an agent wants to disclose the useful information to a user, as shown in Fig.~\ref{ISITsys}. In the first scenario, the agent observes $Y$ and does not have direct access to $X$, i.e., the private data is hidden. The goal is to design $U$ based on $Y$ such that it reveals as much information as possible about $Y$ and satisfies a privacy criterion. In the second scenario, the agent has access to both $X$ and $Y$ and can design $U$ based on $(X,Y)$ to release as much information as possible about $Y$ while satisfying the bounded leakage constraint. In both scenarios we consider two different per-letter privacy criteria. \\ \begin{figure}[] \centering \includegraphics[scale = .15]{ISITsys.jpg} \caption{In the first scenario the agent has access only to $Y$, while in the second scenario the agent additionally has access to $X$.} \label{ISITsys} \end{figure} In \cite{kostala}, by using the Functional Representation Lemma, bounds on the privacy-utility trade-off for the two scenarios are derived. These results are derived under the perfect secrecy assumption, i.e., no leakage is allowed. The bounds are tight when the private data is a deterministic function of the useful data. In \cite{zamani2022bounds}, we generalize the privacy problems considered in \cite{kostala} by relaxing the perfect privacy constraint and allowing some leakage. More specifically, we considered bounded mutual information, i.e., $I(U;X)\leq \epsilon$, as the privacy leakage constraint. Furthermore, in the special case of perfect privacy, we found a new upper bound for the perfect privacy function by using the \emph{excess functional information} introduced in \cite{kosnane}. It has been shown that this new bound generalizes the bound in \cite{kostala}. Moreover, we have shown that the bound is tight when $|\mathcal{X}|=2$. In \cite{zamani2022bounds}, we used mutual information for measuring the privacy leakage; however, in the present work, for each scenario we use two different per-letter privacy constraints. As argued in \cite{Khodam22}, it is more desirable to protect the private data individually and not on average. By using an average constraint, a data point can exist which leaks more than the average threshold. In this work, we first derive lemmas similar to \cite[Lemma~3]{zamani2022bounds} and \cite[Lemma~4]{zamani2022bounds}, where we extended the Functional Representation Lemma and the Strong Functional Representation Lemma considering bounded leakage, i.e., $I(U;X)\leq \epsilon$, instead of independent $X$ and $U$; here, we derive similar results considering a per-letter privacy constraint rather than bounded mutual information. Using these lemmas we find a lower bound for the privacy-utility trade-off in the second scenario with the first per-letter leakage constraint. Furthermore, we provide bounds for three other problems and study a special case where $X$ is a deterministic function of $Y$. We show that the obtained upper and lower bounds in the first scenario are asymptotically optimal when $X$ is a deterministic function of $Y$. Finally, we evaluate the bounds in a numerical example.
\section{System Model and Problem Formulation} \label{sec:system} Let $P_{XY}$ denote the joint distribution of discrete random variables $X$ and $Y$ defined on finite alphabets $\mathcal{X}$ and $\mathcal{Y}$ with $|\mathcal{X}|<|\mathcal{Y}|$. We represent $P_{XY}$ by a matrix defined on $\mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$ and the marginal distributions of $X$ and $Y$ by vectors $P_X$ and $P_Y$ defined on $\mathbb{R}^{|\mathcal{X}|}$ and $\mathbb{R}^{|\mathcal{Y}|}$, given by the row and column sums of $P_{XY}$. We assume that each element in the vectors $P_X$ and $P_Y$ is non-zero. Furthermore, we represent the leakage matrix $P_{X|Y}$ by a matrix defined on $\mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$, which is assumed to be of full rank. Furthermore, for given $u\in \mathcal{U}$, $P_{X,U}(\cdot,u)$ and $P_{X|U}(\cdot|u)$ defined on $\mathbb{R}^{|\mathcal{X}|}$ are distribution vectors with elements $P_{X,U}(x,u)$ and $P_{X|U}(x|u)$ for all $x\in\mathcal{X}$ and $u\in \mathcal{U}$. The relation between $U$ and $Y$ is described by the kernel $P_{U|Y}$ defined on $\mathbb{R}^{|\mathcal{U}|\times|\mathcal{Y}|}$; furthermore, the relation between $U$ and the pair $(Y,X)$ is described by the kernel $P_{U|Y,X}$ defined on $\mathbb{R}^{|\mathcal{U}|\times|\mathcal{Y}|\times|\mathcal{X}|}$. The privacy mechanism design problems for the two scenarios can be stated as follows: \begin{align} g_{\epsilon}^1(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u} \end{array}}I(Y;U),\label{main2}\\ h_{\epsilon}^1(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y,X}: d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u} \end{array}}I(Y;U),\label{main1}\\ g_{\epsilon}^2(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u} \end{array}}I(Y;U),\label{main22}\\ h_{\epsilon}^2(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y,X}: d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u} \end{array}}I(Y;U),\label{main12} \end{align} where $d(P,Q)$ corresponds to the total variation distance between two distributions $P$ and $Q$, i.e., $d(P,Q)=\sum_x |P(x)-Q(x)|$. The functions $h_{\epsilon}^1(P_{XY})$ and $h_{\epsilon}^2(P_{XY})$ are used when the privacy mechanism has access to both the private data and the useful data. The functions $g_{\epsilon}^1(P_{XY})$ and $g_{\epsilon}^2(P_{XY})$ are used when the privacy mechanism has access only to the useful data. In this work, the privacy constraints used in \eqref{main2} and \eqref{main22}, i.e., $d(P_{X,U}(\cdot,u),P_XP_U(u))\leq\epsilon,\ \forall u,$ and $d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u,$ are called the \emph{strong privacy criterion 1} and the \emph{strong privacy criterion 2}, respectively. We call them strong since they are per-letter privacy constraints, i.e., they must hold for every $u\in\mathcal{U}$. The difference between the two privacy constraints in this work is the weight $P_U(u)$, which, as we show later, enables us to use extended versions of the Functional Representation Lemma and the Strong Functional Representation Lemma to find lower bounds in the second scenario. \begin{remark} \normalfont We have used the leakage constraint $d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u$ in \cite{Khodam22}, where we called it the \emph{strong $\ell_1$-privacy criterion}. \end{remark} \begin{remark} \normalfont For $\epsilon=0$, both \eqref{main2} and \eqref{main22} lead to the perfect privacy problem studied in \cite{borz}.
It has been shown that for a non-invertible leakage matrix $P_{X|Y}$, $g_0(P_{XY})$ can be obtained by a linear program. \end{remark} \begin{remark} \normalfont For $\epsilon=0$, both \eqref{main1} and \eqref{main12} lead to the secret-dependent perfect privacy function $h_0(P_{XY})$, studied in \cite{kostala}, where upper and lower bounds on $h_0(P_{XY})$ have been derived. In \cite{zamani2022bounds}, we have strengthened these bounds. \end{remark} \begin{remark} \normalfont The privacy problem defined in \eqref{main22} has been studied in \cite{Khodam22}, where we provide a lower bound on $g_{\epsilon}^2(P_{XY})$ using information geometry concepts. Furthermore, we have shown that without loss of optimality it is sufficient to assume $|\mathcal{U}|\leq |\mathcal{Y}|$, so that it is ensured that the supremum can be achieved. \end{remark} Intuitively, for small $\epsilon$, both privacy constraints mean that $X$ and $U$ are almost independent. As we discussed in \cite{Khodam22}, closeness of $P_{X|U}(\cdot|u)$ and $P_X$ allows us to approximate $g_{\epsilon}^2(P_{XY})$ with a series expansion and find a lower bound. In this work we show that by using a similar methodology, we can approximate $g_{\epsilon}^1(P_{XY})$ exploiting the closeness of $P_{X,U}(\cdot,u)$ and $P_XP_U(u)$. This provides us with a lower bound for $g_{\epsilon}^1(P_{XY})$. Next, we study some properties of the strong privacy criterion 1 and the strong privacy criterion 2. To this end, recall that the \emph{linkage inequality} is the property that if $\cal L$ measures the privacy leakage between two random variables and the Markov chain $X-Y-U$ holds, then we have $\mathcal{L}(X;U)\leq\mathcal{L}(Y;U)$. Since the strong privacy criterion 1 and the strong privacy criterion 2 are per-letter constraints, we define $\mathcal{L}^1(X;U=u)\triangleq \left\lVert P_{X|U}(\cdot|u)-P_X \right\rVert_1$, $\mathcal{L}^1(Y;U=u)\triangleq \left\lVert P_{Y|U}(\cdot|u)-P_Y \right\rVert_1$, $\mathcal{L}^2(X;U=u)\triangleq \left\lVert P_{X,U}(\cdot,u)-P_XP_U(u) \right\rVert_1$, $\mathcal{L}^2(Y;U=u)\triangleq \left\lVert P_{Y,U}(\cdot,u)-P_YP_U(u) \right\rVert_1$. \begin{proposition} The strong privacy criterion 1 and the strong privacy criterion 2 satisfy the linkage inequality. Thus, for each $u\in\mathcal{U}$ we have $\mathcal{L}^1(X;U=u)\leq\mathcal{L}^1(Y;U=u)$ and $\mathcal{L}^2(X;U=u)\leq\mathcal{L}^2(Y;U=u)$. \end{proposition} \begin{proof} The proof is provided in Appendix A. \end{proof} As discussed in \cite[page 4]{Total}, one benefit of the linkage inequality is that it preserves privacy across layers of private information, as discussed in the following. Assume that the Markov chain $X-Y-U$ holds and the distribution of $X$ is not known. If we can find $\tilde{X}$ such that $X-\tilde{X}-Y-U$ holds and the distribution of $\tilde{X}$ is known, then by the linkage inequality we can conclude $\mathcal{L}(X;U=u)\leq \mathcal{L}(\tilde{X};U=u)$. In other words, if the framework is designed for $\tilde{X}$, then a privacy constraint on $\tilde{X}$ leads to a constraint on $X$, i.e., it provides an upper bound for any pre-processed RV $X$. To have the Markov chain $X-\tilde{X}-Y-U$, consider the scenario where $\tilde{X}$ is the private data and $X$ is a function of the private data which is not known. For instance, let $\tilde{X}=(X_1,X_2,X_3)$ and $X=X_1$. Thus, the mechanism that is designed based on $\tilde{X}-Y-U$ preserves the leakage constraint on $X$ and $U$.
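The linkage inequality above is also easy to check numerically. The following minimal sketch (an informal illustration only, not part of the formal development; the alphabet sizes and the random draws are arbitrary choices made for this sketch) builds a random joint distribution $P_{XY}$ together with a random kernel $P_{U|Y}$, so that the Markov chain $X-Y-U$ holds by construction, and verifies $\mathcal{L}^1(X;U=u)\leq\mathcal{L}^1(Y;U=u)$ for every $u$; multiplying both sides by $P_U(u)$ then gives $\mathcal{L}^2(X;U=u)\leq\mathcal{L}^2(Y;U=u)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nu = 2, 4, 3

# random joint distribution P_XY and random kernel P_{U|Y} (so X - Y - U holds)
P_XY = rng.random((nx, ny)); P_XY /= P_XY.sum()
P_UgY = rng.random((nu, ny)); P_UgY /= P_UgY.sum(axis=0)

P_X, P_Y = P_XY.sum(axis=1), P_XY.sum(axis=0)
P_XgY = P_XY / P_Y                    # leakage matrix P_{X|Y}
P_U = P_UgY @ P_Y                     # marginal of U
P_YgU = (P_UgY * P_Y).T / P_U         # columns are P_{Y|U=u}
P_XgU = P_XgY @ P_YgU                 # columns are P_{X|U=u}, via the Markov chain

for u in range(nu):
    L1_X = np.abs(P_XgU[:, u] - P_X).sum()   # L^1(X;U=u)
    L1_Y = np.abs(P_YgU[:, u] - P_Y).sum()   # L^1(Y;U=u)
    assert L1_X <= L1_Y + 1e-12              # linkage inequality, per letter
\end{verbatim}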
As pointed out in \cite[Remark 2]{Total}, among all the $L^p$-norms ($p\geq 1$), only the $\ell_1$-norm satisfies the linkage inequality. Next, given a leakage measure $\mathcal{L}$, if the Markov chain $X-Y-U$ implies $\mathcal{L}(X;U)\leq \mathcal{L}(X;Y)$, then we say that the \emph{post-processing inequality} holds. In this work we use $\mathcal{L}^1(X;U)= \sum_u P_U(u)\mathcal{L}^1(X;U=u)$, $\mathcal{L}^2(X;U)= \sum_u \mathcal{L}^2(X;U=u)$ and $\mathcal{L}^1(Y;U)=\sum_u P_U(u)\mathcal{L}^1(Y;U=u)$, $\mathcal{L}^2(Y;U)=\sum_u \mathcal{L}^2(Y;U=u)$. \begin{proposition} The averages of the strong privacy constraints 1 and 2, with weights $1$ and $P_U(u)$, respectively, satisfy the post-processing inequality, i.e., we have $\mathcal{L}^1(X;U)\leq\mathcal{L}^1(Y;U)$ and $\mathcal{L}^2(X;U)\leq\mathcal{L}^2(Y;U)$. \end{proposition} \begin{proof} The proof is the same as the proof of \cite[Theorem 3]{Total}, which is based on the convexity of the $\ell_1$-norm. \end{proof} \begin{proposition} The strong privacy criteria 1 and 2 result in a bounded inference threat as modeled in \cite{Calmon1}. \end{proposition} \begin{proof} The strong privacy criteria 1 and 2 lead to a bounded on-average constraint $\sum_u P_U(u)\left\lVert P_{X|U=u}\!-\!P_X \right\rVert_1=2TV(X;U)\leq \epsilon$, where $TV(\cdot;\cdot)$ corresponds to the total variation distance. Thus, using \cite[Theorem 4]{Total}, we conclude that inference threats are bounded. \end{proof} Another property of the $\ell_1$ distance is the relation between the $\ell_1$-norm and the probability of error in a hypothesis test. As argued in \cite[Remark~6.5]{polyanskiy2014lecture}, for the binary hypothesis test with $H_0:X\sim P$ and $H_1:X\sim Q$, the expression $1-TV(P, Q)$ is the minimum sum of the false alarm and missed detection probabilities. Thus, we have $TV(P,Q)=1-2P_e$, where $P_e$ is the error probability under equal priors (the probability that we cannot decide the right distribution for $X$). To see a benefit, consider the scenario where we want to decide whether $X$ and $U$ are independent or correlated. Thus, let $P=P_{X,U}$, $Q=P_{X}P_{U}$, $H_0:X,U\sim P$ and $H_1:X,U\sim Q$. We have \begin{align*} TV(P_{X,U};P_{X}P_{U})&=\frac{1}{2}\sum_u P_U(u)\left\lVert P_{X|U=u}-P_X \right\rVert_1\!\\&\leq \frac{1}{2}\epsilon. \end{align*} Thus, by increasing the leakage, the probability of error of this test can decrease. \\ Finally, if we use the $\ell_1$ distance as the privacy leakage measure, then after approximating $g_{\epsilon}^1(P_{XY})$ and $g_{\epsilon}^2(P_{XY})$ we end up with linear programming problems, which are much easier to handle. \section{Main Results}\label{sec:resul} In this section, we first introduce lemmas similar to \cite[Lemma~3]{zamani2022bounds} and \cite[Lemma~4]{zamani2022bounds}, where we have replaced the mutual information constraint, i.e., $I(U;X)=\epsilon$, with a per-letter constraint. In the remaining part of this work $d(\cdot,\cdot)$ corresponds to the total variation distance, i.e., $d(P,Q)=\sum_x |P(x)-Q(x)|$.
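Before stating the lemmas, the following short sketch (an informal illustration under the notation of Section~\ref{sec:system}; the helper function \texttt{strong\_criteria} is hypothetical and not part of the formal development) shows how, for a candidate kernel $P_{U|Y}$ satisfying the Markov chain $X-Y-U$, the two per-letter criteria and the hypothesis-testing quantities discussed above can be evaluated.
\begin{verbatim}
import numpy as np

def strong_criteria(P_XY, P_UgY):
    # P_XY: |X| x |Y| joint distribution; P_UgY: |U| x |Y| kernel P_{U|Y}
    P_X, P_Y = P_XY.sum(axis=1), P_XY.sum(axis=0)
    P_XgY = P_XY / P_Y                        # leakage matrix P_{X|Y}
    P_U = P_UgY @ P_Y                         # marginal of U
    P_YgU = (P_UgY * P_Y).T / P_U             # columns are P_{Y|U=u}
    P_XgU = P_XgY @ P_YgU                     # columns are P_{X|U=u} (X - Y - U)
    crit2 = np.abs(P_XgU - P_X[:, None]).sum(axis=0)  # d(P_{X|U=u}, P_X)
    crit1 = P_U * crit2                       # d(P_{X,U}(.,u), P_X P_U(u))
    TV = 0.5 * crit1.sum()                    # TV(P_{X,U}, P_X P_U)
    P_e = (1 - TV) / 2                        # error prob. of the independence test
    return crit1, crit2, P_e
\end{verbatim}
A kernel $P_{U|Y}$ is feasible for \eqref{main2} exactly when every entry of the first returned vector is at most $\epsilon$, and it is feasible for \eqref{main22} when every entry of the second one is at most $\epsilon$; in the latter case the returned error probability is at least $\frac{1}{2}-\frac{\epsilon}{4}$, in line with the discussion above.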
\begin{lemma}\label{lemma11} For any $0\leq\epsilon< \sqrt{2I(X;Y)}$ and any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite, there exists a RV $U$ supported on $\mathcal{U}$ such that $X$ and $U$ satisfy the strong privacy criterion 1, i.e., we have \begin{align}\label{c11} d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u, \end{align} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align} H(Y|U,X)=0,\label{c21} \end{align} and \begin{align} |\mathcal{U}|\leq |\mathcal{X}|(|\mathcal{Y}|-1)+1.\label{c31} \end{align} \end{lemma} \begin{proof} The proof is provided in Appendix B. \end{proof} \begin{lemma}\label{lemma22} For any $0\leq\epsilon< \sqrt{2I(X;Y)}$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite with $I(X,Y)< \infty$, there exists a RV $U$ supported on $\mathcal{U}$ such that $X$ and $U$ satisfy the strong privacy criterion 1, i.e., we have \begin{align*} d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u, \end{align*} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align*} H(Y|U,X)=0, \end{align*} $I(X;U|Y)$ can be upper bounded as follows \begin{align} I(X;U|Y)\!\leq \alpha H(X|Y)\!+\!(1-\alpha)\!\left[ \log(I(X;Y)+1)+4\right],\label{bala} \end{align} and $ |\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+2\right]\left[|\mathcal{X}|+1\right], $ where $\alpha =\frac{\epsilon^2}{2H(X)}$. \end{lemma} \begin{proof} Let $U$ be found by ESFRL as in \cite[Lemma~4]{zamani2022bounds}, where we let the leakage be $\frac{\epsilon^2}{2}$. The first constraint in this statement can be obtained by using the same proof as Lemma \ref{lemma11} and \eqref{bala} can be derived using \cite[Lemma~4]{zamani2022bounds}. \end{proof} In the next proposition we find a lower bound on $h_{\epsilon}^1(P_{XY})$ using Lemma~1 and Lemma~2. \begin{proposition}\label{prop111} For any $0\leq \epsilon< \sqrt{2I(X;Y)}$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align}\label{prop12} h_{\epsilon}^1(P_{XY})\geq \max\{L_{h^1}^{1}(\epsilon),L_{h^1}^{2}(\epsilon)\}, \end{align} where \begin{align*} L_{h_1}^{1}(\epsilon) &= H(Y|X)-H(X|Y)+\frac{\epsilon^2}{2},\\ L_{h_1}^{2}(\epsilon) &= H(Y|X)-\alpha H(X|Y)+\frac{\epsilon^2}{2}\\&\ -(1-\alpha)\left( \log(I(X;Y)+1)+4 \right),\\ \end{align*} with $\alpha=\frac{\epsilon^2}{2H(X)}$. \end{proposition} \begin{proof} For deriving $L_{h_1}^{1}(\epsilon)$ let $U$ be produced by Lemma \ref{lemma11}. Thus, $I(X;U)=\frac{\epsilon^2}{2}$ and $U$ satisfies \eqref{c11} and \eqref{c21}. We have \begin{align*} h_{\epsilon}^1(P_{XY})&\geq\\ I(U;Y)&=I(X;U)\!+\!H(Y|X)\!-\!I(X;U|Y)\!-\!H(Y|X,U)\\&=\frac{\epsilon^2}{2}+H(Y|X)-H(X|Y)+H(X|Y,U)\\ &\geq \frac{\epsilon^2}{2}+H(Y|X)-H(X|Y). \end{align*} Next for deriving $L_{h_1}^{2}(\epsilon)$ let $U$ be produced by Lemma \ref{lemma22}. Hence, $I(X;U)=\frac{\epsilon^2}{2}$ and $U$ satisfies \eqref{c11}, \eqref{c21} and \eqref{bala}. We obtain \begin{align*} h_{\epsilon}^1(P_{XY})&\geq I(U;Y) = \frac{\epsilon^2}{2}+H(Y|X)-I(X;U|Y)\\ &\geq \frac{\epsilon^2}{2}+H(Y|X)-\alpha H(X|Y)\\&-(1-\alpha)\left( \log(I(X;Y)+1)+4 \right). 
\end{align*} \end{proof} In the next subsection, we provide a lower bound on $g_{\epsilon}^1(P_{XY})$ by following the same approach as in \cite{Khodam22}. For more details about the proofs and the steps of the approximation see \cite[Section III]{Khodam22}. \subsection{Lower bound on $g_{\epsilon}^1(P_{XY})$} In \cite{Khodam22}, we show that $g_{\epsilon}^2(P_{XY})$ can be approximated by a linear program. Using this result we can derive a lower bound for $g_{\epsilon}^2(P_{XY})$. In this part, we follow a similar approach to approximate $g_{\epsilon}^1(P_{XY})$, which results in a lower bound. Similar to \cite{Khodam22}, for sufficiently small $\epsilon$, by using the leakage constraint in $g_{\epsilon}^1(P_{XY})$, i.e., the strong privacy criterion 1, we can rewrite the distribution $P_{X,U}(\cdot,u)$ as a perturbation of $P_XP_U(u)$. Thus, for any $u$ we can write $P_{X,U}(\cdot,u)=P_XP_U(u)+\epsilon J_u$, where $J_u\in \mathbb{R}^{|\mathcal{X}|}$ is a perturbation vector and satisfies the following properties: \begin{align} \bm{1}^T\cdot J_u&=0,\ \forall u, \label{koon1}\\ \sum_u J_u&=\bm{0}\in \mathbb{R}^{|\mathcal{X}|},\label{koon2}\\ \bm{1}^T\cdot |J_u|&\leq 1,\ \forall u, \label{koon3} \end{align} where $|\cdot|$ corresponds to the element-wise absolute value of the vector. We define the matrix $M\in \mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$, which is used in the remaining part, as follows: Let $V$ be the matrix of right singular vectors of $P_{X|Y}$, i.e., $P_{X|Y}=U\Sigma V^T$ and $V=[v_1,\ v_2,\ ... ,\ v_{|\mathcal{Y}|}]$; then $M$ is defined as \begin{align*} M \triangleq \left[v_1,\ v_2,\ ... ,\ v_{|\mathcal{X}|}\right]^T. \end{align*} Similar to \cite[Proposition~2]{Khodam22}, we have the following result. \begin{proposition}\label{prop222} In \eqref{main2}, it suffices to consider $U$ such that $|\mathcal{U}|\leq|\mathcal{Y}|$. Moreover, the supremum in \eqref{main2} is achieved, hence we can replace the supremum by a maximum. \end{proposition} \begin{proof} The proof follows similar lines as the proof of \cite[Proposition~2]{Khodam22}. The only difference is that the new convex and compact set is as follows \begin{align*} \Psi\!=\!\left\{\!y\in\mathbb{R}_{+}^{|\mathcal{Y}|}|My\!=\!MP_Y\!+\!\frac{\epsilon}{P_U(u)} M\!\!\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\!\!,J_u\in\mathcal{J} \!\right\}\!, \end{align*} where $\mathcal{J}=\{J\in\mathbb{R}^{|\mathcal{X}|}|\left\lVert J\right\rVert_1\leq 1,\ \bm{1}^{T}\cdot J=0\}$ and $\mathbb{R}_{+}$ corresponds to the non-negative real numbers. Only non-zero weights $P_U(u)$ are considered, since otherwise the corresponding $P_{Y|U}(\cdot|u)$ does not appear in $H(Y|U)$. \end{proof} \begin{lemma}\label{madar1} If the Markov chain $X-Y-U$ holds, for sufficiently small $\epsilon$ and every $u\in\mathcal{U}$, the vector $P_{Y|U}(\cdot|u)$ lies in the following convex polytope \begin{align*} \mathbb{S}_{u} = \left\{y\in\mathbb{R}_{+}^{|\mathcal{Y}|}|My=MP_Y+\frac{\epsilon}{P_U(u)} M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\right\}, \end{align*} where $J_u$ satisfies \eqref{koon1}, \eqref{koon2} and \eqref{koon3}. Furthermore, $P_U(u)>0$, otherwise $P_{Y|U}(\cdot|u)$ does not appear in $I(Y;U)$. \end{lemma} \begin{proof} Using the Markov chain $X-Y-U$, we have \begin{align*} P_{X|U=u}-P_X=P_{X|Y}[P_{Y|U=u}-P_Y]=\epsilon \frac{J_u}{P_U(u)}.
\end{align*} Thus, by following similar lines as \cite[Lemma~2]{Khodam22} and using the properties of Null($M$) as in \cite[Lemma~1]{Khodam22}, we have \begin{align*} MP_{Y|U}(\cdot|u)=MP_Y+\frac{\epsilon}{P_U(u)} M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}. \end{align*} \end{proof} By using the same arguments as in \cite[Lemma~3]{Khodam22}, it can be shown that any vector inside $\mathbb{S}_{u}$ is a standard probability vector. Thus, by using \cite[Lemma~3]{Khodam22} and Lemma~\ref{madar1} we have the following result. \begin{theorem} We have the following equivalence \begin{align}\label{equii} \min_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ d(P_{X,U}(\cdot,u),P_XP_U(u))\leq\epsilon,\ \forall u\in\mathcal{U}} \end{array}}\! \! \! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U) =\!\!\!\!\!\!\!\!\! \min_{\begin{array}{c} \substack{P_U,\ P_{Y|U=u}\in\mathbb{S}_u,\ \forall u\in\mathcal{U},\\ \sum_u P_U(u)P_{Y|U=u}=P_Y,\\ J_u \text{ satisfies}\ \eqref{koon1},\ \eqref{koon2},\ \text{and}\ \eqref{koon3}} \end{array}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U). \end{align} \end{theorem} Furthermore, similar to \cite[Proposition~3]{Khodam22}, it can be shown that the minimum of $H(Y|U)$ occurs at the extreme points of the sets $\mathbb{S}_{u}$, i.e., for each $u\in \mathcal{U}$, the distribution $P_{Y|U}^*(\cdot|u)$ that minimizes $H(Y|U)$ must belong to the extreme points of $\mathbb{S}_{u}$. To find the extreme points of $\mathbb{S}_{u}$, let $\Omega$ be a set of indices which correspond to $|\mathcal{X}|$ linearly independent columns of $M$, i.e., $|\Omega|=|\mathcal{X}|$ and $\Omega\subset \{1,..,|\mathcal{Y}|\}$. Let $M_{\Omega}\in\mathbb{R}^{|\mathcal{X}|\times|\mathcal{X}|}$ be the submatrix of $M$ with columns indexed by the set $\Omega$. Assume that $\Omega = \{\omega_1,..,\omega_{|\mathcal{X}|}\}$, where $\omega_i\in\{1,..,|\mathcal{Y}|\}$ and all elements are arranged in increasing order. The $\omega_i$-th element of the extreme point $V_{\Omega}^*$ can be found as the $i$-th element of $M_{\Omega}^{-1}(MP_Y+\frac{\epsilon}{P_U(u)} M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})$, i.e., for $1\leq i \leq |\mathcal{X}|$ we have \begin{align}\label{defin1} V_{\Omega}^*(\omega_i)= \left(M_{\Omega}^{-1}MP_Y+\frac{\epsilon}{P_U(u)} M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}\right)(i). \end{align} The other elements of $V_{\Omega}^*$ are set to zero. Now we approximate the entropy of $V_{\Omega}^*$. \begin{proposition}\label{koonkos} Let $V_{\Omega_u}^*$ be an extreme point of the set $\mathbb{S}_u$. Then we have \begin{align*} H(P_{Y|U=u}) &=\sum_{y=1}^{|\mathcal{Y}|}-P_{Y|U=u}(y)\log(P_{Y|U=u}(y))\\&=-(b_u+\frac{\epsilon}{P_{U}(u)} a_uJ_u)+o(\epsilon), \end{align*} with $b_u = l_u \left(M_{\Omega_u}^{-1}MP_Y\right),\ a_u = l_u\left(M_{\Omega_u}^{-1}M(1\!\!:\!\!|\mathcal{X}|)P_{X|Y_1}^{-1}\right)\in\mathbb{R}^{1\times|\mathcal{X}|},\ l_u = \left[\log\left(M_{\Omega_u}^{-1}MP_{Y}(i)\right)\right]_{i=1:|\mathcal{X}|}\in\mathbb{R}^{1\times|\mathcal{X}|}, $ and $M_{\Omega_u}^{-1}MP_{Y}(i)$ stands for the $i$-th ($1\leq i\leq |\mathcal{X}|$) element of the vector $M_{\Omega_u}^{-1}MP_{Y}$. Furthermore, $M(1\!\!:\!\!|\mathcal{X}|)$ stands for the submatrix of $M$ with the first $|\mathcal{X}|$ columns. \end{proposition} \begin{proof} The proof follows similar lines as \cite[Lemma~4]{Khodam22} and is based on the first-order Taylor expansion of $\log(1+x)$. \end{proof} By using Proposition \ref{koonkos} we can approximate \eqref{main2} as follows.
\begin{proposition}\label{baghal} For sufficiently small $\epsilon$, the minimization problem in \eqref{equii} can be approximated as follows \begin{align}\label{kospa} &\min_{P_U(.),\{J_u, u\in\mathcal{U}\}} -\left(\sum_{u=1}^{|\mathcal{Y}|} P_U(u)b_u+\epsilon a_uJ_u\right)\\\nonumber &\text{subject to:}\\\nonumber &\sum_{u=1}^{|\mathcal{Y}|} P_U(u)V_{\Omega_u}^*=P_Y,\ \sum_{u=1}^{|\mathcal{Y}|} J_u=0,\ P_U\in \mathbb{R}_{+}^{|\cal Y|},\\\nonumber &\bm{1}^T |J_u|\leq 1,\ \bm{1}^T\cdot J_u=0,\ \forall u\in\mathcal{U}, \end{align} where $a_u$ and $b_u$ are defined in Proposition~\ref{koonkos}. \end{proposition} By using the vector $\eta_u=P_U(u)\left(M_{\Omega_u}^{-1}MP_Y\right)+\epsilon \left(M_{\Omega_u}^{-1}M(1:|\mathcal{X}|)P_{X|Y_1}^{-1}\right)(J_u)$ for all $u\in \mathcal{U}$, where $\eta_u\in\mathbb{R}^{|\mathcal{X}|}$, we can write \eqref{kospa} as a linear program. The vector $\eta_u$ corresponds to multiple of non-zero elements of the extreme point $V_{\Omega_u}^*$, furthermore, $P_U(u)$ and $J_u$ can be uniquely found as \begin{align*} P_U(u)&=\bm{1}^T\cdot \eta_u,\\ J_u&=\frac{P_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}[\eta_u\!-\!(\bm{1}^T \eta_u)M_{\Omega_u}^{-1}MP_Y]}{\epsilon}. \end{align*} By solving the linear program we obtain $P_U$ and $J_u$ for all $u$, thus, $P_{Y|U}(\cdot|u)$ can be computed using \eqref{defin1}. \begin{lemma}\label{lg1} Let $P_{U|Y}^*$ be found by the linear program which solves \eqref{kospa} and let $I(U^*;Y)$ be evaluated by this kernel. Then we have \begin{align*} g_{\epsilon}^1(P_{XY})\geq I(U^*;Y) = L_{g_1}^1(\epsilon). \end{align*} \end{lemma} \begin{proof} The proof follows since the kernel $P_{U|Y}^*$ that achieves the approximate solution satisfies the constraints in \eqref{main2}. \end{proof} In the next result we present lower and upper bounds of $g_{\epsilon}^1(P_{XY})$ and $h_{\epsilon}^1(P_{XY})$. \begin{theorem}\label{choon1} For sufficiently small $\epsilon\geq 0$ and any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} L_{g_1}^1(\epsilon)\leq g_{\epsilon}^1(P_{XY}), \end{align*} and for any $\epsilon\geq 0$ we obtain \begin{align*} g_{\epsilon}^1(P_{XY})&\leq \frac{\epsilon|\mathcal{Y}||\mathcal{X}|}{\min P_X}+H(Y|X)=U_{g_1}(\epsilon),\\ g_{\epsilon}^1(P_{XY})&\leq h_{\epsilon}^1(P_{XY}). \end{align*} Furthermore, for any $0\leq \epsilon\leq \sqrt{2I(X;Y)}$ we have \begin{align*} \max\{L_{h_1}^{1}(\epsilon),L_{h_1}^{2}(\epsilon)\}\leq h_{\epsilon}^1(P_{XY}), \end{align*} where $L_{h^1}^{1}(\epsilon)$ and $L_{h^1}^{2}(\epsilon)$ are defined in Proposition~\ref{prop111}. \end{theorem} \begin{proof} Lower bounds on $ g_{\epsilon}^1(P_{XY})$ and $h_{\epsilon}^1(P_{XY})$ are derived in Lemma~\ref{lg1} and Proposition~\ref{prop111}, respectively. Furthermore, inequality $g_{\epsilon}^1(P_{XY})\leq h_{\epsilon}^1(P_{XY})$ holds since $h_{\epsilon}^1(P_{XY})$ has less constraints. To prove the upper bound on $g_{\epsilon}^1(P_{XY})$, i.e., $U_{g_1}(\epsilon)$, let $U$ satisfy $X-Y-U$ and $d(P_{X,U}(\cdot,u),P_XP_U(u))\leq\epsilon$, then we have \begin{align*} I(U;Y) &= I(X;U)\!+\!H(Y|X)\!-\!I(X;U|Y)\!-\!H(Y|X,U)\\ &\stackrel{(a)}{=} I(X;U)\!+\!H(Y|X)-H(Y|X,U)\\&\leq I(X;U)\!+\!H(Y|X)\\&=\sum_u P_U(u)D(P_{X|U}(\cdot|u),P_X)+H(Y|X)\\ &\stackrel{(b)}{\leq}\!\sum_u\!\! P_U(u)\frac{\left(d(P_{X|U}(\cdot|u),\!P_X)\right)^2}{\min P_X}\!+\!H(Y|X) \\&\stackrel{(c)}{\leq}\!\sum_u\!\! 
P_U(u)\frac{d(P_{X|U}(\cdot|u),\!P_X)}{\min P_X}|\mathcal{X}|\!+\!H(Y|X) \\&=\sum_u \!\!\frac{d(P_{X|U}(\cdot,u),\!P_XP_U(u))}{\min P_X}|\mathcal{X}|\!+\!H(Y|X)\\&\stackrel{(d)}{\leq} \frac{\epsilon|\mathcal{Y}||\mathcal{X}|}{\min P_X}+H(Y|X), \end{align*} where (a) follows by the Markov chain $X-Y-U$, (b) follows by the reverse Pinsker inequality \cite[(23)]{verdu} and (c) holds since $d(P_{X|U}(\cdot|u),\!P_X)=\sum_{i=1}^{|\mathcal{X}|} |P_{X|U}(x_i|u)-P_X(x_i)|\leq |\mathcal{X}|$. Latter holds since for each $u$ and $i$, $|P_{X|U}(x_i|u)-P_X|\leq 1$. Moreover, (d) holds since by Proposition~\ref{prop222} without loss of optimality we can assume $|\mathcal{U}|\leq |\mathcal{Y}|$. In other words (d) holds since by Proposition~\ref{prop222} we have \begin{align} g_{\epsilon}^1(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u} \end{array}}I(Y;U)\nonumber\\&= \max_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U).\label{koontala} \end{align} \end{proof} In the next section we provide bounds for $g_{\epsilon}^2(P_{XY})$ and $h_{\epsilon}^2(P_{XY})$. \subsection{Lower and Upper bounds on $g_{\epsilon}^2(P_{XY})$ and $h_{\epsilon}^2(P_{XY})$} As we mentioned earlier in \cite{Khodam22}, we have provided an approximate solution for $g_{\epsilon}^2(P_{XY})$ using local approximation of $H(Y|U)$ for sufficiently small $\epsilon$. Furthermore, in \cite[Proposition~8]{Khodam22} we specified permissible leakages. By using \cite[Proposition~8]{Khodam22}, we can write \begin{align} g_{\epsilon}^2(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u} \end{array}}I(Y;U)\nonumber\\&= \max_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U).\label{antar} \end{align} In the next lemma we find a lower bound for $g_{\epsilon}^2(P_{XY})$, where we use the approximate problem for \eqref{main22}. \begin{lemma}\label{lg2} Let the kernel $P_{U^*|Y}$ achieve the optimum solution in \cite[Theorem~2]{Khodam22}. Thus, $I(U^*;Y)$ evaluated by this kernel is a lower bound for $g_{\epsilon}^2(P_{XY})$. In other words, we have \begin{align*} g_{\epsilon}^2(P_{XY})\geq I(U^*;Y) = L_{g_2}^1(\epsilon). \end{align*} \end{lemma} \begin{proof} The proof follows since the kernel $P_{U|Y}^*$ that achieves the approximate solution satisfies the constraints in \eqref{main22}. \end{proof} Next we provide upper bounds for $g_{\epsilon}^2(P_{XY})$. To do so, we first bound the approximation error in \cite[Theorem~2]{Khodam22}. Let $\Omega^1$ be the set of all $\Omega_i\subset\{1,..,|\mathcal{Y}|\},\ |\Omega_i|=|\cal X|$, such that each $\Omega_i$ produces a valid standard distribution vector $M_{\Omega_i}^{-1}MP_Y$, i.e., all elements in the vector $M_{\Omega_i}^{-1}MP_Y$ are positive. \begin{proposition}\label{mos} Let the approximation error be the distance between $H(Y|U)$ and the approximation derived in \cite[Theorem~2]{Khodam22}. Then, for all $\epsilon<\frac{1}{2}\epsilon_2$, we have \begin{align*} |\text{Approximation\ error}|<\frac{3}{4}. \end{align*} Furthermore, for all $\epsilon<\frac{1}{2}\frac{\epsilon_2}{\sqrt{|\mathcal{X}|}}$ the upper bound can be strengthened as follows \begin{align*} |\text{Approximation\ error}|<\frac{1}{2(2\sqrt{|\mathcal{X}|}-1)^2}+\frac{1}{4|\mathcal{X}|}. 
\end{align*} where $\epsilon_2=\frac{\min_{y,\Omega\in \Omega^1} M_{\Omega}^{-1}MP_Y(y)}{\max_{\Omega\in \Omega^1} |\sigma_{\max} (H_{\Omega})|}$, $H_{\Omega}=M_{\Omega}^{-1}M(1:|\mathcal{X}|)P_{X|Y_1}^{-1}$ and $\sigma_{\max}(\cdot)$ denotes the largest singular value. \end{proposition} \begin{proof} The proof is provided in Appendix~C. \end{proof} As a result we can find an upper bound on $g_{\epsilon}^2(P_{XY})$. To do so, let $\text{approx}(g_{\epsilon}^2)$ be the value that the kernel $P_{U^*|Y}$ in Lemma~\ref{lg2} achieves, i.e., the approximate value in \cite[(7)]{Khodam22}. \begin{corollary}\label{ghahve} For any $0\leq\epsilon<\frac{1}{2}\epsilon_2$ we have \begin{align*} g_{\epsilon}^2(P_{XY})\leq \text{approx}(g_{\epsilon}^2)+\frac{3}{4}=U_{g_2}^1(\epsilon), \end{align*} furthermore, for any $0\leq\epsilon<\frac{1}{2}\frac{\epsilon_2}{\sqrt{|\mathcal{X}|}}$ the upper bound can be strengthened as \begin{align*} g_{\epsilon}^2(P_{XY})\!\leq \text{approx}(g_{\epsilon}^2)+\frac{1}{2(2\sqrt{|\mathcal{X}|}-1)^2}\!+\frac{1}{4|\mathcal{X}|}\!=\!U_{g_2}^2(\epsilon). \end{align*} \end{corollary} In the next theorem we summarize the bounds for $g_{\epsilon}^2(P_{XY})$ and $h_{\epsilon}^2(P_{XY})$; furthermore, a new upper bound for $h_{\epsilon}^2(P_{XY})$ is derived. \begin{theorem}\label{koontala1} For any $0\leq\epsilon<\frac{1}{2}\epsilon_2$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} L_{g_2}^1(\epsilon)\leq g_{\epsilon}^2(P_{XY})\leq U_{g_2}^1(\epsilon), \end{align*} and for any $0\leq\epsilon<\frac{1}{2}\frac{\epsilon_2}{\sqrt{|\mathcal{X}|}}$ we get \begin{align*} L_{g_2}^1(\epsilon)\leq g_{\epsilon}^2(P_{XY})\leq U_{g_2}^2(\epsilon), \end{align*} furthermore, for any $\epsilon\geq 0$ \begin{align*} g_{\epsilon}^2(P_{XY})\leq h_{\epsilon}^2(P_{XY})\leq \frac{\epsilon^2}{\min P_X}+H(Y|X)=U_{h_2}(\epsilon). \end{align*} \end{theorem} \begin{proof} It is sufficient to show that the upper bound on $h_{\epsilon}^2(P_{XY})$, i.e., $U_{h_2}(\epsilon)$, holds. To do so, let $U$ satisfy $d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u$; then we have \begin{align*} I(U;Y) &= I(X;U)\!+\!H(Y|X)\!-\!I(X;U|Y)\!-\!H(Y|X,U)\\ &\leq I(X;U)\!+\!H(Y|X)\\ &\stackrel{(a)}{\leq}\!\sum_u\!\! P_U(u)\frac{\left(d(P_{X|U}(\cdot|u),\!P_X)\right)^2}{\min P_X}\!+\!H(Y|X) \\& \leq \frac{\epsilon^2}{\min P_X}+H(Y|X), \end{align*} where (a) follows by the reverse Pinsker inequality. \end{proof} In the next subsection we study the special case where $X$ is a deterministic function of $Y$, i.e., $H(X|Y)=0$. \subsection{Special case: $X$ is a deterministic function of $Y$} In this case we have \begin{align} h_{\epsilon}^1(P_{XY})&=g_{\epsilon}^1(P_{XY})\label{khatakar}\\&= \max_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U)\nonumber\\&= \sup_{\begin{array}{c} \substack{P_{U|Y}: d(P_{X,U}(\cdot,u),P_XP_{U}(u))\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U),\nonumber\\ h_{\epsilon}^2(P_{XY})&=g_{\epsilon}^2(P_{XY})\label{khatakoon}\\&= \max_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U)\nonumber\\&= \sup_{\begin{array}{c} \substack{P_{U|Y}: d(P_{X|U}(\cdot|u),P_X)\leq\epsilon,\ \forall u\\ |\mathcal{U}|\leq |\mathcal{Y}|} \end{array}}I(Y;U),\nonumber \end{align} since the Markov chain $X-Y-U$ holds whenever $X$ is a deterministic function of $Y$.
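The closed-form bounds appearing in Theorem~\ref{choon1} and Theorem~\ref{koontala1} are straightforward to evaluate directly from $P_{XY}$. The following minimal sketch collects them in one place (an informal aid only; entropies are computed in bits and all entries of $P_{XY}$ are assumed positive, both of which are assumptions of this sketch rather than requirements of the theorems).
\begin{verbatim}
import numpy as np

def closed_form_bounds(P_XY, eps):
    # P_XY: |X| x |Y| joint distribution with positive entries; eps: leakage level
    P_X, P_Y = P_XY.sum(axis=1), P_XY.sum(axis=0)
    H_YgX = -np.sum(P_XY * np.log2(P_XY / P_X[:, None]))   # H(Y|X) in bits
    H_XgY = -np.sum(P_XY * np.log2(P_XY / P_Y[None, :]))   # H(X|Y) in bits
    nx, ny = P_XY.shape
    U_g1 = eps * nx * ny / P_X.min() + H_YgX      # upper bound U_{g_1}(eps)
    U_h2 = eps**2 / P_X.min() + H_YgX             # upper bound U_{h_2}(eps)
    L_h1_1 = H_YgX - H_XgY + eps**2 / 2           # lower bound L_{h_1}^1(eps),
                                                  # stated for eps < sqrt(2 I(X;Y))
    return U_g1, U_h2, L_h1_1
\end{verbatim}
In the special case above, where $X$ is a deterministic function of $Y$, we have $H(X|Y)=0$, so the returned lower bound reduces to $H(Y|X)+\frac{\epsilon^2}{2}$ and, as $\epsilon\rightarrow 0$, approaches $H(Y|X)$ together with $U_{g_1}(\epsilon)$.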
Consequently, by using Theorem~\ref{choon1} and \eqref{khatakar} we have the next corollary. \begin{corollary}\label{chooni} For any $0\leq \epsilon\leq \sqrt{2I(X;Y)}$ we have \begin{align*} \max\{L_{h_1}^{1}(\epsilon),L_{h_1}^{2}(\epsilon),L_{g_1}^1(\epsilon)\}\leq g_{\epsilon}^1(P_{XY})\leq U_{g_1}(\epsilon). \end{align*} \end{corollary} We can see that the bounds in Corollary~\ref{chooni} are asymptotically optimal. The latter follows since in the high privacy regime, i.e., as the leakage tends to zero, $U_{g_1}(\epsilon)$ and $L_{h_1}^{1}(\epsilon)$ both tend to $H(Y|X)$, which is the optimal value of $g_{0}(P_{XY})$ when $X$ is a deterministic function of $Y$ \cite[Theorem~6]{kostala}. Furthermore, by using Theorem~\ref{koontala1} and \eqref{khatakoon} we obtain the next result. \begin{corollary} For any $0\leq\epsilon<\frac{1}{2}\epsilon_2$ we have \begin{align*} L_{g_2}^1(\epsilon)\leq g_{\epsilon}^2(P_{XY}) \leq \min\{U_{g_2}^1(\epsilon),U_{h_2}(\epsilon)\}. \end{align*} \end{corollary} \begin{remark} For deriving the upper bound $U_{h_2}(\epsilon)$ and the lower bounds $L_{h_1}^1(\epsilon)$ and $L_{h_1}^2(\epsilon)$ we do not use the assumption that the leakage matrix $P_{X|Y}$ is of full row rank. Thus, these bounds hold for all $P_{X|Y}$ and all $\epsilon\geq 0$. \end{remark} In the next part, we study a numerical example to illustrate the new bounds. \subsection{Example} Let us consider RVs $X$ and $Y$ with joint distribution $P_{XY}=\begin{bmatrix} 0.693 & 0.027 &0.108& 0.072\\0.006 & 0.085 & 0.004 & 0.005 \end{bmatrix}$. Using the definition of $\epsilon_2$ in Proposition~\ref{mos} we obtain $\epsilon_2 = 0.0341$. Fig.~\ref{kir12} illustrates the lower bound and upper bounds for $g_{\epsilon}^2$ derived in Theorem~\ref{koontala1}. As shown in Fig.~\ref{kir12}, the upper bounds $U_{g_2}^1(\epsilon)$ and $U_{g_2}^2(\epsilon)$ are valid for $\epsilon< 0.0171$ and $\epsilon < 0.0121$, respectively, whereas the upper bound $U_{h_2}(\epsilon)$ is valid for all $\epsilon\geq 0$. In this example, we can see that for any $\epsilon$ the upper bound $U_{h_2}(\epsilon)$ is the smallest upper bound. \begin{figure}[] \centering \includegraphics[scale = .18]{mj1} \caption{Comparing the upper bound and lower bound for $g_{\epsilon}^1$.} \label{kir11} \end{figure} Furthermore, Fig.~\ref{kir11} shows the lower bound $L_{g_1}^1(\epsilon)$ and the upper bound $U_{g_1}(\epsilon)$ obtained in Theorem~\ref{choon1}. \begin{figure}[] \centering \includegraphics[scale = .18]{kosnanat2} \caption{Comparing the upper bound and lower bound for $g_{\epsilon}^2$. The upper bounds $U_{g_2}^1(\epsilon)$ and $U_{g_2}^2(\epsilon)$ are valid for $\epsilon< 0.0171 $ and $\epsilon < 0.0121$, respectively. On the other hand, the upper bound $U_{h_2}(\epsilon)$ is valid for all $\epsilon\geq 0$.} \label{kir12} \end{figure} \begin{figure}[] \centering \includegraphics[scale = .18]{goh2} \caption{Comparing the upper bound and lower bound for $g_{\epsilon}^1$.} \label{kir111} \end{figure} \begin{figure}[] \centering \includegraphics[scale = .18]{koskesh2} \caption{Comparing the upper bound and lower bound for $g_{\epsilon}^2$. The upper bounds $U_{g_2}^1(\epsilon)$ and $U_{g_2}^2(\epsilon)$ are valid for $\epsilon<0.0997 $ and $\epsilon <0.0705$, respectively. However, the upper bound $U_{h_2}(\epsilon)$ is valid for all $\epsilon\geq 0$.} \label{kir122} \end{figure} Next, let $P_{XY}=\begin{bmatrix} 0.350 & 0.025 &0.085& 0.040\\0.025 & 0.425 & 0.035 & 0.015 \end{bmatrix}$. In this case, $\epsilon_2=0.1994$.
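For reference, a minimal numerical sketch of how $\epsilon_2$ can be computed from its definition in Proposition~\ref{mos} is given below (an informal aid only; following \cite{Khodam22}, we take $P_{X|Y_1}$ to be the submatrix of $P_{X|Y}$ formed by its first $|\mathcal{X}|$ columns, which is an assumption of this sketch).
\begin{verbatim}
import itertools
import numpy as np

def epsilon_2(P_XY):
    # epsilon_2 = min_{y, Omega in Omega^1} (M_Omega^{-1} M P_Y)(y)
    #             / max_{Omega in Omega^1} sigma_max(H_Omega)
    P_Y = P_XY.sum(axis=0)
    P_XgY = P_XY / P_Y                        # leakage matrix P_{X|Y}
    nx, ny = P_XgY.shape
    _, _, Vt = np.linalg.svd(P_XgY)
    M = Vt[:nx, :]                            # first |X| right singular vectors as rows
    H_base = M[:, :nx] @ np.linalg.inv(P_XgY[:, :nx])   # M(1:|X|) P_{X|Y_1}^{-1}
    num, den = np.inf, 0.0
    for omega in itertools.combinations(range(ny), nx):
        M_om = M[:, list(omega)]
        if abs(np.linalg.det(M_om)) < 1e-12:  # skip singular choices of Omega
            continue
        v = np.linalg.solve(M_om, M @ P_Y)    # candidate vector M_Omega^{-1} M P_Y
        if np.all(v > 0):                     # Omega belongs to Omega^1
            num = min(num, v.min())
            H_om = np.linalg.solve(M_om, H_base)
            den = max(den, np.linalg.svd(H_om, compute_uv=False).max())
    return num / den

P_XY = np.array([[0.693, 0.027, 0.108, 0.072],
                 [0.006, 0.085, 0.004, 0.005]])
print(epsilon_2(P_XY))  # expected to be close to the value 0.0341 reported above
\end{verbatim}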
Fig.~\ref{kir122} illustrates the lower bound and upper bounds for $g_{\epsilon}^2$. We can see that for $\epsilon<0.0705$, $U_{g_2}^2(\epsilon)$ is the smallest upper bound and for $\epsilon>0.0705$, $U_{h_2}(\epsilon)$ is the smallest bound. Furthermore, Fig.~\ref{kir111} shows the lower bound $L_{g_1}(\epsilon)$ and upper bound $U_{g_1}(\epsilon)$. \section*{Appendix A} For each $u\in\mathcal{U}$ we have \begin{align*} \mathcal{L}^1(X;U=u)&= \left\lVert P_{X|U}(\cdot|u)-P_X \right\rVert_1\\&=\left\lVert P_{X|Y}(P_{Y|U}(\cdot|u)-P_Y)\right\rVert_1\\&=\sum_x |\sum_y P_{X|Y}(x,y)(P_{Y|u}(y)\!-\!P_Y(y))|\\ & \stackrel{(a)}{\leq} \sum_x\sum_y P_{X|Y}(x,y)|P_{Y|u}(y)-P_Y(y)|\\&=\sum_y\sum_x P_{X|Y}(x,y)|P_{Y|u}(y)-P_Y(y)|\\&=\sum_y |P_{Y|u}(y)-P_Y(y)|\\&= \left\lVert P_{Y|U=u}\!-\!P_Y \right\rVert_1=\mathcal{L}^1(Y;U=u), \end{align*} where (a) follows from the triangle inequality. Furthermore, we can multiply all the above expressions by the term $P_U(u)$ and we obtain \begin{align*} \mathcal{L}^2(X;U=u)\leq \mathcal{L}^2(Y;U=u). \end{align*} \section*{Appendix B} Let $U$ be found by EFRL as in \cite[Lemma~3]{zamani2022bounds}, where we let the leakage be $\frac{\epsilon^2}{2}$. Thus, we have \begin{align*} \frac{\epsilon^2}{2}=I(U;X)&=\sum_u P_U(u)D(P_{X|U}(\cdot|u),P_X)\\ &\stackrel{(a)}{\geq} \sum_u \frac{P_U(u)}{2}\left( d(P_{X|U}(\cdot|u),P_X)\right)^2\\&\stackrel{(b)}{\geq} \sum_u \frac{P_U(u)^2}{2}\left( d(P_{X|U}(\cdot|u),P_X)\right)^2\\ &\geq \frac{P_U(u)^2}{2}\left( d(P_{X|U}(\cdot|u),P_X)\right)^2\\ &= \frac{\left( d(P_{X,U}(\cdot,u),P_XP_U(u))\right)^2}{2}, \end{align*} where $D(\cdot,\cdot)$ corresponds to KL-divergence. Furthermore, (a) follows by the Pinsker’s inequality \cite{verdu} and (b) follows since $0\leq P_U(u)\leq 1$. Using the last line we obtain \begin{align*} d(P_{X,U}(\cdot,u),P_XP_U(u))\leq \epsilon,\ \forall u. \end{align*} The other constraints can be obtained by using \cite[Lemma~3]{zamani2022bounds}. \section*{Appendix C} By using \cite[Proposition~2]{Khodam22}, it suffices to assume $|\mathcal{U}|\leq|\mathcal{Y}|$. Using \cite[Proposition~3]{Khodam22}, let us consider $|\mathcal{Y}|$ extreme points that achieves the minimum in \cite[Theorem~2]{Khodam22} as $V_{\Omega_j}$ for $j\in\{1,..,|\mathcal{Y}|\}$. Let $|\mathcal{X}|$ non-zero elements of $V_{\Omega_j}$ be $a_{ij}+\epsilon b_{ij}$ for $i\in\{1,..,|\mathcal{X}|\}$ and $j\in\{1,..,|\mathcal{Y}|\}$, where $a_{ij}$ and $b_{ij}$ can be found in \cite[(6)]{Khodam22}. As a summary for $i\in\{1,..,|\mathcal{X}|\}$ and $j\in\{1,..,|\mathcal{Y}|\}$ we have $\sum_i a_{ij}=1$, $\sum_i b_{ij}=0$, $0\leq a_{ij}\leq 1$, and $0\leq a_{ij}+\epsilon b_{ij}\leq1.$ We obtain \begin{align*} \max I(U;Y)&=H(Y)\\&+\sum_jP_j\sum_i (a_{ij}+\epsilon b_{ij})\log(a_{ij}+\epsilon b_{ij}),\\ &=H(Y)+\sum_jP_j\times\\&\sum_i (a_{ij}+\epsilon b_{ij})(\log(a_{ij})+\log(1+\epsilon\frac{b_{ij}}{a_{ij}})). \end{align*} In \cite[Theorem~2]{Khodam22}, we have used the Taylor expansion to derive the approximation of the equivalent problem. From the Taylor's expansion formula we have \begin{align*} f(x)&=f(a)+\frac{f'(a)}{1!}(x-a)+\frac{f''(a)}{2!}(x-a)^2+...\\&+\frac{f^{(n)}(a)}{n!}(x-a)^n+R_{n+1}(x), \end{align*} where \begin{align}\label{kosss} R_{n+1}(x)&=\int_{a}^{x}\frac{(x-t)^n}{n!}f^{(n+1)}(t)dt\\&=\frac{f^{(n+1)}(\zeta)}{(n+1)!}(x-a)^{n+1}, \end{align} for some $\zeta\in[a,x]$. In \cite{Khodam22} we approximated the terms $\log(1+\frac{b_{ij}}{a_{ij}}\epsilon)$ by $\frac{b_{ij}}{a_{ij}}\epsilon+o(\epsilon)$. 
Using \eqref{kosss}, there exists a $\zeta_{ij}\in[0,\epsilon]$ such that the error of approximating the term $\log(1+\epsilon\frac{b_{ij}}{a_{ij}})$ is as follows \begin{align*} R_2^{ij}(\epsilon)=-\frac{1}{2}\left(\frac{\frac{b_{ij}}{a_{ij}}}{1+\frac{b_{ij}}{a_{ij}}\zeta_{ij}}\right)^2\epsilon^2=-\frac{1}{2}\left( \frac{b_{ij}}{a_{ij}+b_{ij}\zeta_{ij}}\right)^2\epsilon^2. \end{align*} Thus, the error of approximation is as follows \begin{align}\label{chos} &\text{Approximation\ error}\\&=\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})R_2^{ij}(\epsilon)+\sum_{ij}P_j\frac{b_{ij}^2}{a_{ij}}\epsilon^2\nonumber \\ &= -\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})\frac{1}{2}\left( \frac{b_{ij}}{a_{ij}+b_{ij}\zeta_{ij}}\right)^2\!\!\epsilon^2\!+\!\sum_{ij}P_j\frac{b_{ij}^2}{a_{ij}}\epsilon^2 \end{align} An upper bound on the approximation error can be obtained as follows \begin{align}\label{antala} &|\text{Approximation\ error}|\\&\leq |\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})\frac{1}{2}\left( \frac{b_{ij}}{a_{ij}+b_{ij}\zeta_{ij}}\right)^2\epsilon^2|\\&+|\sum_{ij}P_j\frac{b_{ij}^2}{a_{ij}}\epsilon^2|. \end{align} By using the definition of $\epsilon_2$ in Proposition~\ref{mos}, $\epsilon<\epsilon_2$ implies $\epsilon<\frac{\min_{ij} a_{ij}}{\max_{ij} |b_{ij}|}$, since $\min_{ij} a_{ij}=\min_{y,\Omega\in \Omega^1} M_{\Omega}^{-1}MP_Y(y)$ and $\max_{ij} |b_{ij}|<\max_{\Omega\in \Omega^1} |\sigma_{\max} (H_{\Omega})|$. By using the upper bound $\epsilon<\frac{\min_{ij} a_{ij}}{\max_{ij} |b_{ij}|}$ we can bound the second term in \eqref{antala} by $1$, since we have \begin{align*} |\sum_{ij}P_j\frac{b_{ij}^2}{a_{ij}}\epsilon^2|&<|\sum_{ij}P_j \frac{b_{ij}^2}{a_{ij}}\left(\frac{\min_{ij} a_{ij}}{\max_{ij} |b_{ij}|}\right)^2|\\&<|\sum_{ij} P_j\min_{ij} a_{ij}|=|\mathcal{X}|\min_{ij} a_{ij}\stackrel{(a)}{<} 1, \end{align*} where (a) follows from $\sum_{i} a_{ij}=1,\ \forall j\in\{1,..,|\mathcal{Y}|\}$. In particular, for $\epsilon<\frac{1}{2}\epsilon_2$ the same chain of inequalities gains an extra factor of $\frac{1}{4}$, so the second term in \eqref{antala} is bounded by $\frac{1}{4}$. \\If we use $\frac{1}{2}\epsilon_2$ as an upper bound on $\epsilon$, we have $\epsilon<\frac{1}{2}\frac{\min_{ij} a_{ij}}{\max_{ij} |b_{ij}|}$. We show that by using this upper bound the first term in \eqref{antala} can be upper bounded by $\frac{1}{2}$. We have \begin{align*} &\frac{1}{2}|\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})\left( \frac{b_{ij}}{a_{ij}+b_{ij}\zeta_{ij}}\right)^2\epsilon^2|\\&\stackrel{(a)}{<}\frac{1}{2}|\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})\left(\frac{|b_{ij}|}{a_{ij}-\epsilon|b_{ij}|}\epsilon\right)^2|\\&\stackrel{(b)}{<}\frac{1}{2}|\sum_{ij} P_j(a_{ij}+\epsilon b_{ij})|<\frac{1}{2}, \end{align*} where (a) follows from $0\leq\zeta_{ij}\leq \epsilon,\ \forall i,\ \forall j,$ and (b) follows from $\frac{|b_{ij}|}{a_{ij}-\epsilon|b_{ij}|}\epsilon<1$ for all $i$ and $j$. The latter can be shown as follows \begin{align*} \frac{|b_{ij}|}{a_{ij}-\epsilon|b_{ij}|}\epsilon<\frac{|b_{ij}|}{a_{ij}-\frac{1}{2}\frac{\min_{ij} a_{ij}}{\max_{ij} |b_{ij}|}|b_{ij}|}\epsilon<\frac{|b_{ij}|}{\frac{1}{2}\min_{ij}a_{ij}}\epsilon<1. \end{align*} For $\epsilon<\frac{1}{2}\epsilon_2$ the term $a_{ij}-\epsilon|b_{ij}|$ is positive and there is no need for an absolute value on this term. Thus, $\epsilon<\frac{1}{2}\epsilon_2$ implies the following upper bound \begin{align*} |\text{Approximation\ error}|<\frac{3}{4}.
\end{align*} Furthermore, by following similar steps if we use the upper bound $\epsilon<\frac{1}{2}\frac{\epsilon_2}{\sqrt{|\mathcal{X}|}}$ instead of $\epsilon<\frac{1}{2}\epsilon_2$, the upper bound on error can be strengthened by \begin{align*} |\text{Approximation\ error}|<\frac{1}{2(2\sqrt{|\mathcal{X}|}-1)^2}+\frac{1}{4|\mathcal{X}|}. \end{align*} \end{proof} \begin{lemma} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite, there exists RV $U$ such that it satisfies \eqref{c1}, \eqref{c2}, and \begin{align*} H(U)\leq \sum_{x\in\mathcal{X}}H(Y|X=x)+\epsilon+h(\alpha) \end{align*} with $\alpha=\frac{\epsilon}{H(X)}$ and $h(\cdot)$ denotes the binary entropy function. \end{lemma} \begin{proof} Let $U=(\tilde{U},W)$ where $W$ is the same RV used in Lemma~\ref{lemma3} and $\tilde{U}$ is produced by FRL which has the same construction as used in proof of \cite[Lemma~1]{kostala}. Thus, by using \cite[Lemma~2]{kostala} we have \begin{align*} H(\tilde{U})\leq \sum_{x\in\mathcal{X}} H(Y|X=x), \end{align*} therefore, \begin{align*} H(U)&=H(\tilde{U},W)\leq H(\tilde{U})+H(W),\\&\leq\sum_{x\in\mathcal{X}} H(Y|X=x)+H(W), \end{align*} where, \begin{align*} H(W)\! &= -(1-\alpha)\log(1-\alpha)\!-\!\!\sum_{x\in \mathcal{X}} \alpha P_X(x)\log(\alpha P_X(x)),\\&=h(\alpha)+\alpha H(X), \end{align*} which completes the proof. \end{proof} Before stating the next theorem we derive an expression for $I(Y;U)$. We have \begin{align} I(Y;U)&=I(X,Y;U)-I(X;U|Y),\nonumber\\&=I(X;U)+I(Y;U|X)-I(X;U|Y),\nonumber\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y).\label{key} \end{align} As argued in \cite{kostala}, \eqref{key} is an important observation to find lower and upper bounds for $h_{\epsilon}(P_{XY})$ and $g_{\epsilon}(P_{XY})$. \begin{theorem} For any $0\leq \epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, if $h_{\epsilon}(P_{XY})>\epsilon$ then we have \begin{align*} H(Y|X)>0. \end{align*} Furthermore, if $H(Y|X)-H(X|Y)=H(Y)-H(X)>0$, then \begin{align*} h_{\epsilon}(P_{XY})>\epsilon. \end{align*} \end{theorem} \begin{proof} For proving the first part let $h_{\epsilon}(P_{XY})>\epsilon$. Using \eqref{key} we have \begin{align*} \epsilon&< h_{\epsilon}(P_{XY})\leq H(Y|X)+\sup_{U:I(X;U)\leq\epsilon} I(X;U)\\&=H(Y|X)+\epsilon \Rightarrow 0<H(Y|X). \end{align*} For the second part assume that $H(Y|X)-H(X|Y)>0$. Let $U$ be produced by EFRL. Thus, using the construction of $U$ as in Lemma~\ref{lemma3} we have $I(X,U)=\epsilon$ and $H(Y|X,U)=0$. Then by using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq \epsilon\!+\!H(Y|X)\!-\!H(X|Y)+H(X|Y,U)\\&\geq \epsilon\!+\!H(Y|X)\!-\!H(X|Y)>\epsilon. \end{align*} \end{proof} In the next theorem we provide a lower bound on $h_{\epsilon}(P_{XY})$. 
\begin{theorem}\label{th.1} For any $0\leq \epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align}\label{th2} h_{\epsilon}(P_{XY})\geq \max\{L_1^{\epsilon},L_2^{\epsilon},L_3^{\epsilon}\}, \end{align} where \begin{align*} L_1^{\epsilon} &= H(Y|X)-H(X|Y)+\epsilon=H(Y)-H(X)+\epsilon,\\ L_2^{\epsilon} &= H(Y|X)-\alpha H(X|Y)+\epsilon\\&\ -(1-\alpha)\left( \log(I(X;Y)+1)+4 \right),\\ L_3^{\epsilon} &= \epsilon\frac{H(Y)}{I(X;Y)}+g_0(P_{XY})\left(1-\frac{\epsilon}{I(X;Y)}\right), \end{align*} and $\alpha=\frac{\epsilon}{H(X)}$. The lower bound in \eqref{th2} is tight if $H(X|Y)=0$, i.e., $X$ is a deterministic function of $Y$. Furthermore, if the lower bound $L_1^{\epsilon}$ is tight, then we have $H(X|Y)=0$. \end{theorem} \begin{proof} $L_3^{\epsilon}$ can be derived by using \cite[Remark~2]{shahab}, since we have $h_{\epsilon}(P_{XY})\geq g_{\epsilon}(P_{XY})\geq L_3^{\epsilon}$. For deriving $L_1^{\epsilon}$, let $U$ be produced by the EFRL. Thus, using the construction of $U$ as in Lemma~\ref{lemma3} we have $I(X;U)=\epsilon$ and $H(Y|X,U)=0$. Then, using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq I(U;Y)\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y)\\&=\epsilon+H(Y|X)-H(X|Y)+H(X|Y,U)\\ &\geq\epsilon+H(Y|X)-H(X|Y)=L_1^{\epsilon}. \end{align*} For deriving $L_2^{\epsilon}$, let $U$ be produced by the ESFRL. Thus, by using the construction of $U$ as in Lemma~\ref{lemma4} we have $I(X;U)=\epsilon$, $H(Y|X,U)=0$ and $I(X;U|Y)\leq \alpha H(X|Y)+(1-\alpha)\left(\log(I(X;Y)+1)+4\right)$. Then, by using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq I(U;Y)\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y)\\&=\epsilon+H(Y|X)-I(X;U|Y)\\&\geq\epsilon+H(Y|X)-\alpha H(X|Y)\\&\ -(1-\alpha)\left(\log(I(X;Y)+1)+4\right)=L_2^{\epsilon}. \end{align*} Let $X$ be a deterministic function of $Y$. In this case, set $\epsilon=0$ in $L_1^{\epsilon}$ so that we obtain $h_0(P_{XY})\geq H(Y|X)$. Furthermore, by using \eqref{key} we have $h_0(P_{XY})\leq H(Y|X)$. Moreover, since $X$ is a deterministic function of $Y$, the Markov chain $X-Y-U$ holds and we have $h_0(P_{XY})=g_0(P_{XY})=H(Y|X)$. Therefore, $L_3^{\epsilon}$ can be rewritten as \begin{align*} L_3^{\epsilon}&=\epsilon\frac{H(Y)}{H(X)}+H(Y|X)\left(\frac{H(X)-\epsilon}{H(X)}\right),\\&=\epsilon\frac{H(Y)}{H(X)}+(H(Y)-H(X))\left(\frac{H(X)-\epsilon}{H(X)}\right),\\&=H(Y)-H(X)+\epsilon. \end{align*} $L_2^{\epsilon}$ can be rewritten as follows \begin{align*} L_2^{\epsilon}=H(Y|X)+\epsilon-(1-\frac{\epsilon}{H(X)})(\log(H(X)+1)+4). \end{align*} Thus, if $H(X|Y)=0$, then $L_1^{\epsilon}=L_3^{\epsilon}\geq L_2^{\epsilon}$. Now we show that $L_1^{\epsilon}=L_3^{\epsilon}$ is tight. By using \eqref{key} we have \begin{align*} I(U;Y) &\stackrel{(a)}{=} I(X;U)+H(Y|X)-H(Y|U,X),\\&\leq \epsilon+H(Y|X)=L_1^{\epsilon}=L_3^{\epsilon}, \end{align*} where (a) follows since $X$ is a deterministic function of $Y$, which leads to $I(X;U|Y)=0$. Thus, if $H(X|Y)=0$, the lower bound in \eqref{th2} is tight. Now suppose that the lower bound $L_1^{\epsilon}$ is tight and $X$ is not a deterministic function of $Y$. Let $\tilde{U}$ be produced by the FRL using the construction in \cite[Lemma~1]{kostala}. As argued in the proof of \cite[Th.~6]{kostala}, there exist $x\in\cal X$ and $y_1,y_2\in\cal Y$ such that $P_{X|\tilde{U},Y}(x|\tilde{u},y_1)>0$ and $P_{X|\tilde{U},Y}(x|\tilde{u},y_2)>0$, which results in $H(X|Y,\tilde{U})>0$. Let $U=(\tilde{U},W)$ where $W$ is defined in Lemma~\ref{lemma3}.
For such $U$ we have \begin{align*} H(X|Y,U)&=(1-\alpha)H(X|Y,\tilde{U})>0,\\ \Rightarrow I(U;Y)&\stackrel{(a)}{=}\epsilon+H(Y|X)-H(X|Y)+H(X|Y,U)\\ &>\epsilon+H(Y|X)-H(X|Y). \end{align*} where in (a) we used the fact that such $U$ satisfies $I(X;U)=\epsilon$ and $H(Y|X,U)=0$. The last line is a contradiction with tightness of $L_1^{\epsilon}$, since we can achieve larger values, thus, $X$ needs to be a deterministic function of $Y$. \end{proof} In next corollary we let $\epsilon=0$ and derive lower bound on $h_0(P_{XY})$. \begin{corollary}\label{kooni} Let $\epsilon=0$. Then, for any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} h_{0}(P_{XY})\geq \max\{L^0_1,L^0_2\}, \end{align*} where \begin{align*} L^0_1 &= H(Y|X)-H(X|Y)=H(Y)-H(X),\\ L^0_2 &= H(Y|X) -\left( \log(I(X;Y)+1)+4 \right).\\ \end{align*} \end{corollary} Note that the lower bound $L^0_1$ has been derived in \cite[Th.~6]{kostala}, while the lower bound $L^0_2$ is a new lower bound. In the next two examples we compare the bounds $L_1^{\epsilon}$, $L_2^{\epsilon}$ and $L_3^{\epsilon}$ in special cases where $I(X;Y)=0$ and $H(X|Y)=0$. \begin{example} Let $X$ and $Y$ be independent. Then, we have \begin{align*} L_1^{\epsilon}&=H(Y)-H(X)+\epsilon,\\ L_2^{\epsilon}&=H(Y)-\frac{\epsilon}{H(X)}H(X)+\epsilon-4(1-\frac{\epsilon}{H(X)}),\\ &=H(Y)-4(1-\frac{\epsilon}{H(X)}). \end{align*} Thus, \begin{align*} L_2^{\epsilon}-L_1^{\epsilon}&=H(X)-4+\epsilon(\frac{4}{H(X)}-1),\\ &=(H(X)-4)(1-\frac{\epsilon}{H(X)}). \end{align*} Consequently, for independent $X$ and $Y$ if $H(X)>4$, then $L_2^{\epsilon}>L_1^{\epsilon}$, i.e., the second lower bound is dominant and $h_{\epsilon}(P_{X}P_{Y})\geq L_2^{\epsilon}$. \end{example} \begin{example} Let $X$ be a deterministic function of $Y$. As we have shown in Theorem~\ref{th.1}, if $H(X|Y)=0$, then \begin{align*} L_1^{\epsilon}&=L_3^{\epsilon}=H(Y|X)+\epsilon\\&\geq H(Y|X)+\epsilon-(1-\frac{\epsilon}{H(X)})(\log(H(X)+1)+4)\\&=L_2^{\epsilon}. \end{align*} Therefore, $L_1^{\epsilon}$ and $L_3^{\epsilon}$ become dominants. \end{example} In the next lemma we find a lower bound for $\sup_{U} H(U)$ where $U$ satisfies the leakage constraint $I(X;U)\leq\epsilon$, the bounded cardinality and $ H(Y|U,X)=0$. \begin{lemma}\label{koonimooni} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, then if $U$ satisfies $I(X;U)\leq \epsilon$, $H(Y|X,U)=0$ and $|\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+1\right]\left[|\mathcal{X}|+1\right]$, we have \begin{align*} \sup_U H(U)\!&\geq\! \alpha H(Y|X)\!+\!(1-\alpha)(\max_{x\in\mathcal{X}}H(Y|X=x))\!\\&+\!h(\alpha)\!+\!\epsilon \geq H(Y|X)\!+\!h(\alpha)\!+\!\epsilon, \end{align*} where $\alpha=\frac{\epsilon}{H(X)}$ and $h(\cdot)$ corresponds to the binary entropy. \end{lemma} \begin{proof} Let $U=(\tilde{U},W)$ where $W=\begin{cases} X,\ \text{w.p}.\ \alpha\\ c,\ \ \text{w.p.}\ 1-\alpha \end{cases}$, and $c$ is a constant which does not belong to the support of $X$, $Y$ and $\tilde{U}$, furthermore, $\tilde{U}$ is produced by FRL. Using \eqref{key} and Lemma~\ref{koss} we have \begin{align} H(\tilde{U}|Y)&=H(\tilde{U})-H(Y|X)+I(X;\tilde{U}|Y)\nonumber\\ &\stackrel{(a)}{\geq} \max_{x\in\mathcal{X}} H(Y|X=x)-H(Y|X)\nonumber\\&\ +H(X|Y)-H(X|Y,\tilde{U}),\label{toole} \end{align} where (a) follows from Lemma~\ref{koss} with $\epsilon=0$. 
Furthermore, in the first line we used $I(X;\tilde{U})=0$ and $H(Y|\tilde{U},X)=0$. Using \eqref{key} we obtain \begin{align*} H(U)&\stackrel{(a)}{=}\!H(U|Y)\!+\!H(Y|X)\!-\!H(X|Y)\!+\!\epsilon\!+\!H(X|Y,U),\\ &\stackrel{(b)}{=}H(W|Y)+\alpha H(\tilde{U}|Y,X)+(1-\alpha)H(\tilde{U}|Y)\\ & \ \ \ +H(Y|X)\!-\!H(X|Y)\!+\!\epsilon+(1\!-\!\alpha)H(X|Y,\tilde{U}),\\ &\stackrel{(c)}{=}\!(\alpha-1)H(X|Y)\!+\!h(\alpha)\!+\alpha H(\tilde{U}|Y,X)+\!\epsilon\\& \ \ +\!(1-\alpha)H(\tilde{U}|Y)\!+\!H(Y|X)\!+\!(1\!-\!\alpha)H(X|Y,\tilde{U}), \\ & \stackrel{(d)}{\geq} (\alpha-1)H(X|Y)+h(\alpha)\!+\alpha H(\tilde{U}|Y,X)\\ \ \ \ &+\!\!(1-\alpha) (\max_{x\in\mathcal{X}}H(Y|X=x)-H(Y|X)+\!H(X|Y) \\& -\!H(X|Y,\tilde{U}))+H(Y|X)\!+\!\epsilon\!+\!(1-\alpha)H(X|Y,\tilde{U})\\ &=\alpha H(Y|X)+(1-\alpha)(\max_{x\in\mathcal{X}}H(Y|X=x))\\ &\ \ \ +h(\alpha)+\epsilon. \end{align*} In step (a) we used $I(U;X)=\epsilon$ and $H(Y|X,U)=0$ and in step (b) we used $H(U|Y)=H(W|Y)+H(\tilde{U}|Y,W)=H(W|Y)+\alpha H(\tilde{U}|Y,X)+(1-\alpha)H(\tilde{U}|Y)$ and $H(X|Y,U)=H(X|Y,\tilde{U},W)=(1-\alpha)H(X|Y,\tilde{U})$. In step (c) we used the fact that $P_{W|Y}=\begin{cases} \alpha P_{X|Y}(x|\cdot)\ &\text{if}\ w=x,\\ 1-\alpha \ &\text{if} \ w=c, \end{cases}$ since $P_{W|Y}(w=x|\cdot)=\frac{P_{W,Y}(w=x,\cdot)}{P_Y(\cdot)}=\frac{P_{Y|W}(\cdot|w=x)P_W(w=x)}{P_Y(\cdot)}=\frac{P_{Y|X}(\cdot|x)\alpha P_X(x)}{P_Y(\cdot)}=\alpha P_{X|Y}(x|\cdot)$, furthermore, $P_{W|Y}(w=c|\cdot)=1-\alpha$. Hence, after some calculation we obtain $H(W|Y)=h(\alpha)+\alpha H(X|Y)$. Finally, step (d) follows from \eqref{toole}. \end{proof} \begin{remark} The constraint $|\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+1\right]\left[|\mathcal{X}|+1\right]$ in Lemma~\ref{koonimooni} guarantees that $\sup_U H(U)<\infty$. \end{remark} In the next lemma we find an upper bound for $h_{\epsilon}(P_{XY})$. \begin{lemma}\label{goh} For any $0\leq\epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} g_{\epsilon}(P_{XY})\leq h_{\epsilon}(P_{XY})\leq H(Y|X)+\epsilon. \end{align*} \end{lemma} \begin{proof} By using \eqref{key} we have \begin{align*} h_{\epsilon}(P_{XY})\leq H(Y|X)+\sup I(U;X)\leq H(Y|X)+\epsilon. \end{align*} \end{proof} \begin{corollary} If $X$ is a deterministic function of $Y$, then by using Theorem~\ref{th.1} and Lemma~\ref{goh} we have \begin{align*} g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon, \end{align*} since in this case the Markov chain $X-Y-U$ holds. \end{corollary} \begin{lemma}\label{kir} Let $\bar{U}$ be an optimizer of $h_{\epsilon}(P_{XY})$. We have \begin{align*} H(Y|X,\bar{U})=0. \end{align*} \end{lemma} \begin{proof} The proof is similar to \cite[Lemma~5]{kostala}. Let $\bar{U}$ be an optimizer of $h_{\epsilon}(P_{XY})$ and assume that $H(Y|X,\bar{U})>0$. Consequently, we have $I(X;\bar{U})\leq \epsilon.$ Let $U'$ be founded by FRL with $(X,\bar{U})$ instead of $X$ in Lemma~\ref{lemma1} and same $Y$, that is $I(U';X,\bar{U})=0$ and $H(Y|X,\bar{U},U')=0$. Using \cite[Th.~5]{kostala} we have \begin{align*} I(Y;U')>0, \end{align*} since we assumed $H(Y|X,\bar{U})>0$. Let $U=(\bar{U},U')$ and we first show that $U$ satisfies $I(X;U)\leq \epsilon$. We have \begin{align*} I(X;U)&=I(X;\bar{U},U')=I(X;\bar{U})+I(X;U'|\bar{U}),\\ &=I(X;\bar{U})+H(U'|\bar{U})-H(U'|\bar{U},X),\\ &=I(X;\bar{U})+H(U')-H(U')\leq \epsilon, \end{align*} where in last line we used the fact that $U'$ is independent of the pair $(X,\bar{U})$. 
Finally, we show that $I(Y;U)>I(Y;\bar{U})$, which is a contradiction with the optimality of $\bar{U}$. We have \begin{align*} I(Y;U)&=I(Y;\bar{U},U')=I(Y;U')+I(Y;\bar{U}|U'),\\ &=I(Y;U')+I(Y,U';\bar{U})-I(U';\bar{U})\\ &= I(Y;U')+I(Y;\bar{U})+I(U';\bar{U}|Y)-I(U';\bar{U})\\ &\stackrel{(a)}{\geq} I(Y;U')+I(Y;\bar{U})\\ &\stackrel{(b)}{>} I(Y;\bar{U}), \end{align*} where (a) follows since $I(U';\bar{U}|Y)\geq 0$ and $I(U';\bar{U})=0$. Step (b) follows since $I(Y;U')>0$. Thus, the obtained contradiction completes the proof. \end{proof} In the next theorem we generalize the equivalent statements in \cite[Th.~7]{kostala} to bounded leakage between $X$ and $U$. \begin{theorem} For any $\epsilon<I(X;Y)$, we have the following equivalences \begin{itemize} \item [i.] $g_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$, \item [ii.] $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$, \item [iii.] $h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item i $\Rightarrow$ ii: Using Lemma~\ref{goh} we have $H(Y|X)+\epsilon= g_{\epsilon}(P_{XY})\leq h_{\epsilon}(P_{XY}) \leq H(Y|X)+\epsilon$. Thus, $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$. \item ii $\Rightarrow$ iii: Let $\bar{U}$ be the optimizer of $g_{\epsilon}(P_{XY})$. Thus, the Markov chain $X-Y-\bar{U}$ holds and we have $I(X;\bar{U}|Y)=0$. Furthermore, since $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$, this $\bar{U}$ achieves $h_{\epsilon}(P_{XY})$. Thus, by using Lemma~\ref{kir} we have $H(Y|\bar{U},X)=0$ and according to \eqref{key} \begin{align} I(\bar{U};Y)&=I(X;\bar{U})\!+\!H(Y|X)\!-\!H(Y|\bar{U},X)\!\\& \ \ \ -\!I(X;\bar{U}|Y)\nonumber\\ &=I(X;\bar{U})\!+\!H(Y|X).\label{kir2} \end{align} We claim that $\bar{U}$ must satisfy $I(X;Y|\bar{U})>0$ and $I(X;\bar{U})=\epsilon$. For the first claim assume that $I(X;Y|\bar{U})=0$, hence the Markov chain $X-\bar{U}-Y$ holds. Using $X-\bar{U}-Y$ and $H(Y|\bar{U},X)=0$ we have $H(Y|\bar{U})=0$, hence $Y$ is a deterministic function of $\bar{U}$ and $I(Y;\bar{U})=H(Y)$. Using \eqref{kir2} \begin{align*} H(Y)&=I(Y;\bar{U})=I(X;\bar{U})\!+\!H(Y|X),\\ &\Rightarrow I(X;\bar{U})=I(X;Y). \end{align*} The last line is a contradiction since by assumption we have $I(X;\bar{U})\leq \epsilon < I(X;Y)$. Thus, $I(X;Y|\bar{U})>0$. For proving the second claim assume that $I(X;\bar{U})=\epsilon_1<\epsilon$. Let $U=(\bar{U},W)$ where $W=\begin{cases} Y,\ \text{w.p}.\ \alpha\\ c,\ \ \text{w.p.}\ 1-\alpha \end{cases}$, and $c$ is a constant such that $c\notin \mathcal{X}\cup \mathcal{Y}\cup \mathcal{\bar{U}}$ and $\alpha=\frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}$. We show that $\frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}<1$. By the assumption we have \begin{align*} \frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}<\frac{I(X;Y)-I(X;\bar{U})}{I(X;Y|\bar{U})}\stackrel{(a)}{\leq}1, \end{align*} where step (a) follows since $I(X;Y)-I(X;\bar{U})-I(X;Y|\bar{U})=I(X;Y)-I(X;Y,\bar{U})\leq 0$. It can be seen that such $U$ satisfies $H(Y|X,U)=0$ and $I(X;U|Y)=0$ since \begin{align*} H(Y|X,U)&=\alpha H(Y|X,\bar{U},Y)\\ & \ \ \ + (1-\alpha)H(Y|X,\bar{U})=0,\\ I(X;U|Y)&= H(X|Y)-H(X|Y,\bar{U},W)\\&=H(X|Y)\!-\!\alpha H(X|Y,\bar{U})\!\\& \ \ \ -\!(1\!-\!\alpha) H(X|Y,\bar{U})\\&=H(X|Y)-H(X|Y)=0, \end{align*} where in deriving the last line we used the Markov chain $X-Y-\bar{U}$.
Furthermore, \begin{align*} I(X;U)&=I(X;\bar{U},W)=I(X;\bar{U})+I(X;W|\bar{U})\\ &=I(X;\bar{U})+\alpha H(X|\bar{U})-\alpha H(X|\bar{U},Y)\\ &=I(X;\bar{U})+\alpha I(X;Y|\bar{U})\\ &=\epsilon_1+\epsilon-\epsilon_1=\epsilon, \end{align*} and \begin{align*} I(Y;U)&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\\& \ \ \ -\!I(X;U|Y)\\ &=\epsilon+H(Y|X). \end{align*} Thus, if $I(X;\bar{U})=\epsilon_1<\epsilon$ we can substitute $\bar{U}$ by $U$, for which $I(U;Y)>I(\bar{U};Y)$. This is a contradiction and we conclude that $I(X;\bar{U})=\epsilon$, which proves the second claim. Hence, \eqref{kir2} can be rewritten as \begin{align*} I(\bar{U};Y)=\epsilon+H(Y|X). \end{align*} As a result $h_\epsilon(P_{XY})=\epsilon+H(Y|X)$ and the proof is completed. \item iii $\Rightarrow$ i: Let $\bar{U}$ be the optimizer of $h_{\epsilon}(P_{XY})$ and $h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. Using Lemma~\ref{kir} we have $H(Y|\bar{U},X)=0$. By using \eqref{key} we must have $I(X;\bar{U}|Y)=0$ and $I(X;\bar{U})=\epsilon$. We conclude that for this $\bar{U}$, the Markov chain $X-Y-\bar{U}$ holds and as a result $\bar{U}$ achieves $g_{\epsilon}(P_{XY})$ and we have $g_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. \end{itemize} \end{proof} \subsection* {Special case: $\epsilon=0$ (Independent $X$ and $U$)} In this section we derive new lower and upper bounds for $h_0(P_{XY})$ and compare them with the previous bounds found in \cite{kostala}. We first recall the definition of the \emph{excess functional information} introduced in \cite{kosnane}, namely \begin{align*} \psi(X\rightarrow Y)=\inf_{\begin{array}{c} \substack{P_{U|Y,X}: I(U;X)=0,\ H(Y|X,U)=0} \end{array}}I(X;U|Y), \end{align*} and the lower bound on $\psi(X\rightarrow Y)$ derived in \cite[Prop.~1]{kosnane} is given in the next lemma. Since this lemma is useful for deriving the upper bound $U^0_2$ below, we state it here. \begin{lemma}\cite[Prop.~1]{kosnane}\label{haroomi} For discrete $Y$ we have \begin{align} &\psi(X\rightarrow Y)\geq\nonumber\\& -\sum_{y\in\mathcal{Y}}\!\int_{0}^{1}\!\!\! \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\nonumber\\&-I(X;Y),\label{koonsher} \end{align} where for $|\mathcal{Y}|=2$ the equality holds and it is attained by the Poisson functional representation in \cite{kosnane}. \end{lemma} \begin{remark} The lower bound in \eqref{koonsher} can be negative. For instance, let $Y$ be a deterministic function of $X$, i.e., $H(Y|X)=0$. In this case we have $-\sum_{y\in\mathcal{Y}}\!\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt-I(X;Y)=-I(X;Y)=-H(Y).$ \end{remark} In the next theorem lower and upper bounds on $h_0(P_{XY})$ are provided. \begin{theorem} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} \max\{L^0_1,L^0_2\} \leq h_{0}(P_{XY})\leq \min\{U^0_1,U^0_2\}, \end{align*} where $L^0_1$ and $L^0_2$ are defined in Corollary~\ref{kooni} and \begin{align*} &U^0_1 = H(Y|X),\\ &U^0_2 = H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} Furthermore, if $|\mathcal{Y}|=2$, then we have \begin{align*} h_{0}(P_{XY}) = U^0_2. \end{align*} \end{theorem} \begin{proof} $L^0_1$ and $L^0_2$ can be obtained by letting $\epsilon=0$ in Theorem~\ref{th.1}. $U^0_1$, which has been derived in \cite[Th.~7]{kostala}, can be obtained by \eqref{key}. $U^0_2$ can be derived as follows.
Since $X$ and $U$ are independent, \eqref{key} can be rewritten as \begin{align*} I(Y;U)=H(Y|X)-H(Y|U,X)-I(X;U|Y), \end{align*} thus, using Lemma~\ref{haroomi}, \begin{align*} h_0(P_{XY})&\leq H(Y|X)-\inf_{H(Y|U,X)=0,\ I(X;U)=0}I(X;U|Y)\\ &= H(Y|X)-\psi(X\rightarrow Y)\\&\leq H(Y|X)\\&+\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} For $|\mathcal{Y}|=2$, using Lemma~\ref{haroomi} we have $\psi(X\rightarrow Y)=-\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt-I(X;Y)$; let $\bar{U}$ be the RV that attains this bound. Thus, \begin{align*} I(\bar{U};Y)&=H(Y|X)\\&+\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} Therefore, $\bar{U}$ attains $U_2^0$ and $h_0(P_{XY})=U_2^0$. \end{proof} As mentioned before, the upper bound $U^0_1$ has been derived in \cite[Th.~7]{kostala}. The upper bound $U^0_2$ is a new upper bound. \begin{lemma}\label{ankhar} If $X$ is a deterministic function of $Y$, i.e., $H(X|Y)=0$, we have \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&+I(X;Y)=0. \end{align*} \end{lemma} \begin{proof} Since $X$ is a deterministic function of $Y$, for any $y\in \cal Y$ we have \begin{align*} P_{Y|X}(y|x)=\begin{cases} \frac{P_Y(y)}{P_X(x)},\ &x=f(y)\\ 0, \ &\text{else} \end{cases}, \end{align*} thus, \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\ &= \sum_{y\in\mathcal{Y}}\int_{0}^{\frac{P_Y(y)}{P_X(x=f(y))}} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\ &= \sum_{y\in\mathcal{Y}} \frac{P_Y(y)}{\mathbb{P}_X\{x=f(y)\}}\mathbb{P}_X\{x=f(y)\}\log(\mathbb{P}_X\{x=f(y)\})\\ &= \sum_{y\in\mathcal{Y}} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})\\ &= \sum_{x\in\mathcal{X}} P_X(x)\log(P_X(x))=-H(X)=-I(X;Y), \end{align*} where in the last line we used $\sum_{y\in\mathcal{Y}} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})=\sum_{x\in\mathcal{X}} \sum_{y:x=f(y)} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})=\sum_{x\in\mathcal{X}} P_X(x)\log(P_X(x))$. \end{proof} \begin{remark} According to Lemma~\ref{ankhar}, if $X$ is a deterministic function of $Y$, then we have $U_2^0=U_1^0$. \end{remark} In the next example we compare the bounds $U^0_1$ and $U^0_2$ for a $BSC(\theta)$. \begin{figure}[] \centering \includegraphics[scale = .2]{kirtodahan.eps} \caption{Comparing the upper bounds $U_1^0$ and $U_2^0$ for $BSC(\theta)$. The blue curve illustrates the upper bound found in \cite{kostala} and the red line shows the upper bound found in this work.} \label{fig:kir} \end{figure} \begin{example}(Binary Symmetric Channel) Let the binary RVs $X\in\{0,1\}$ and $Y\in\{0,1\}$ have the following joint distribution \begin{align*} P_{XY}(x,y)=\begin{cases} \frac{1-\theta}{2}, \ &x=y\\ \frac{\theta}{2}, \ &x\neq y \end{cases}, \end{align*} where $\theta<\frac{1}{2}$.
We obtain \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&=\int_{\theta}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(0|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(0|X)\geq t\})dt\\&+\int_{\theta}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(1|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(1|X)\geq t\})dt\\&=(1-2\theta)\left(P_X(0)\log (P_X(0))+P_X(1)\log (P_X(1))\right)\\&=-(1-2\theta)H(X)=-(1-2\theta). \end{align*} Thus, \begin{align*} U_2^0&=H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y)\\&=h(\theta)-(1-2\theta)+(1-h(\theta))=2\theta,\\ U_1^0&=H(Y|X)=h(\theta), \end{align*} where $h(\cdot)$ corresponds to the binary entropy function. As shown in Fig.~\ref{fig:kir}, we have \begin{align*} h_{0}(P_{XY})\leq U^0_2\leq U^0_1. \end{align*} \end{example} \begin{example}(Erasure Channel) Let the RVs $X\in\{0,1\}$ and $Y\in\{0,e,1\}$ have the following joint distribution \begin{align*} P_{XY}(x,y)=\begin{cases} \frac{1-\theta}{2}, \ &x=y\\ \frac{\theta}{2}, \ &y=e\\ 0, \ & \text{else} \end{cases}, \end{align*} where $\theta<\frac{1}{2}$. We have \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&=\int_{0}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(0|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(0|X)\geq t\})dt\\&+\int_{0}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(1|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(1|X)\geq t\})dt\\&=-(1-\theta)H(X)=-(1-\theta). \end{align*} Thus, \begin{align*} U_2^0&=H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y)\\&=h(\theta)-(1-\theta)+h(\theta)+1-\theta-h(\theta) \\&=h(\theta),\\ U_1^0&=H(Y|X)=h(\theta). \end{align*} Hence, in this case, $U_1^0=U_2^0=h(\theta)$. Furthermore, in \cite[Example~8]{kostala}, it has been shown that for this pair of $(X,Y)$ we have $g_0(P_{XY})=h_0(P_{XY})=h(\theta)$. \end{example} In \cite[Prop.~2]{kosnane} it has been shown that for every $\alpha\geq 0$, there exists a pair $(X,Y)$ such that $I(X;Y)\geq \alpha$ and \begin{align} \psi(X\rightarrow Y)\geq \log(I(X;Y)+1)-1.\label{kir1} \end{align} \begin{lemma}\label{choon} Let $(X,Y)$ be as in \cite[Prop.~2]{kosnane}, i.e., $(X,Y)$ satisfies \eqref{kir1}. Then for such a pair we have \begin{align*} &H(Y|X)-\log(I(X;Y)+1)-4\\&\leq h_0(P_{XY})\leq H(Y|X)-\log(I(X;Y)+1)+1. \end{align*} \begin{proof} The lower bound follows from Corollary~1. For the upper bound, we use \eqref{key} and \eqref{kir1} so that \begin{align*} I(U;Y)&\leq H(Y|X)-\psi(X\rightarrow Y)\\ &\leq H(Y|X)-\log(I(X;Y)+1)+1. \end{align*} \end{proof} \end{lemma} \begin{remark} From Lemma~\ref{choon} and Corollary~1 we can conclude that the lower bound $L_2^0=H(Y|X)-(\log(I(X;Y)+1)+4)$ is tight within $5$ bits. \end{remark} \section{conclusion}\label{concull} It has been shown that, by extending the FRL and SFRL, upper bounds for $h_{\epsilon}(P_{XY})$ and $g_{\epsilon}(P_{XY})$ as well as lower bounds for $h_{\epsilon}(P_{XY})$ can be derived. If $X$ is a deterministic function of $Y$, then the bounds are tight. Moreover, a necessary condition for an optimizer of $h_{\epsilon}(P_{XY})$ has been obtained. In the case of perfect privacy, new lower and upper bounds have been derived using the ESFRL and the excess functional information. In an example it has been shown that the new bounds dominate the previous bounds.
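As a complementary numerical check of the $BSC(\theta)$ example above, the following short script (a sketch assuming \texttt{numpy}; it is not the code used to generate Fig.~\ref{fig:kir}) evaluates the two upper bounds $U_1^0=h(\theta)$ and $U_2^0=2\theta$ on a grid of crossover probabilities.
\begin{verbatim}
import numpy as np

def h(p):
    # binary entropy in bits, with h(0) = h(1) = 0
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for theta in [0.05, 0.1, 0.2, 0.3, 0.4, 0.45]:
    U1 = h(theta)    # previous upper bound U_1^0 = H(Y|X)
    U2 = 2 * theta   # new upper bound U_2^0 for BSC(theta) with uniform X
    print(f"theta={theta:.2f}  U1={U1:.4f}  U2={U2:.4f}")
\end{verbatim}
For every $\theta<\tfrac{1}{2}$ the printed values satisfy $U_2^0\leq U_1^0$, in agreement with Fig.~\ref{fig:kir}.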
\begin{proposition}\label{prop1111} It suffices to consider $U$ such that $|\mathcal{U}|\leq|\mathcal{Y}|$. Furthermore, a maximum can be used in \eqref{privacy} since the corresponding supremum is achieved. \end{proposition} \begin{proof} The proof is based on Fenchel-Eggleston-Carath\'{e}odory's Theorem \cite{el2011network}. For more details see \cite{Amir2}. \end{proof} \section{Privacy Mechanism Design}\label{result} In this section we show that the problem defined in \eqref{1} can be approximated by a quadratic problem. Furthermore, it is shown that the quadratic problem can be converted to a standard linear program. By using \eqref{local}, we can rewrite the conditional distribution $P_{X|U=u}$ as a perturbation of $P_X$. Thus, for any $u\in\mathcal{U}$, similarly as in \cite{Amir,borade, huang}, we can write $P_{X|U=u}=P_X+\epsilon\cdot J_u$, where $J_u\in\mathbb{R}^{|\mathcal{X}|}$ is the perturbation vector that has the following three properties: \begin{align} \sum_{x\in\mathcal{X}} J_u(x)=0,\ \forall u,\label{prop1}\\ \sum_{u\in\mathcal{U}} P_U(u)J_u(x)=0,\ \forall x\label{prop2},\\ \sum_{x\in\mathcal{X}}|J_u(x)|\leq 1,\ \forall u\label{prop3}. \end{align} The first two properties ensure that $P_{X|U=u}$ is a valid probability distribution and the third property follows from \eqref{local}. In the following lemma we derive two important properties of Null$(P_{X|Y})$. \begin{lemma}\label{null} Let $\beta$ be a vector in $\mathbb{R}^{|\mathcal{X}|}$. Then, $\beta\in\text{Null}(P_{X|Y})$ if and only if $\beta\in\text{Null}(M)$, where $M\in \mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$ is constructed as follows: Let $V$ be the matrix of right eigenvectors of $P_{X|Y}$, i.e., $P_{X|Y}=U\Sigma V^T$ and $V=[v_1,\ v_2,\ ... ,\ v_{|\mathcal{Y}|}]$, then $M$ is defined as \begin{align*} M \triangleq \left[v_1,\ v_2,\ ... ,\ v_{|\mathcal{X}|}\right]^T. \end{align*} Furthermore, if $\beta\in\text{Null}(P_{X|Y})$, then $1^T\beta=0$. \end{lemma} \begin{proof} Since the rank of $P_{X|Y}$ is $|\mathcal{X}|$, every vector $\beta$ in Null$(P_{X|Y})$ can be written as linear combination of $\{v_{|\mathcal{X}|+1}$, ... ,$v_{|\mathcal{Y}|}\}$ and since each vector in $\{v_{|\mathcal{X}|+1}$, ... ,$v_{|\mathcal{Y}|}\}$ is orthogonal to the rows of $M$ we conclude that $\beta\in\text{Null}(M)$. If $\beta\in\text{Null}(M)$, $\beta$ is orthogonal to the vectors $\{v_1$,...,$v_{|\mathcal{X}|}\}$ and thus, $\beta\in \text{linspan}\{v_{|\mathcal{X}|+1},...,v_{|\mathcal{Y}|}\}=\text{Null}(P_{X|Y})$. Furthermore, for every $\beta\in\text{Null}(P_{X|Y})$, we can write $1^T\beta=1^TP_{X|Y}\beta=0.$ \end{proof} The next lemma shows that if the Markov chain $X-Y-U$ holds and $J_u$ satisfies the three properties \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, then $P_{Y|U=u}$ lies in a convex polytope. \begin{lemma}\label{null2} For sufficiently small $\epsilon>0$, for every $u\in\mathcal{U}$, the vector $P_{Y|U=u}$ belongs to the convex polytope $\mathbb{S}_{u}$ defined as \begin{align*} \mathbb{S}_{u} \triangleq \left\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix},\ y\geq 0\right\}, \end{align*} where $\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\in\mathbb{R}^{|\cal Y|}$ and $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}. \end{lemma} \begin{proof} By using the Markov chain $X-Y-U$ we have \begin{align} \epsilon J_u=P_{X|U=u}-P_X=P_{X|Y}[P_{Y|U=u}-P_Y].\label{jigar} \end{align} Let $\alpha=P_{Y|U=u}-P_Y$. 
By partitioning $\alpha$ in two parts $[\alpha_1\ \alpha_2]^T$ with sizes $|\cal X|$ and $|\cal Y|-|\cal X|$, respectively, from \eqref{jigar} we obtain \begin{align*} P_{X|Y_1}\alpha_1+P_{X|Y_2}\alpha_2=\epsilon J_u, \end{align*} which implies using invertibility of $P_{X|Y_1}$ that \begin{align*} \alpha_1=\epsilon P_{X|Y_1}^{-1}J_u-P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2, \end{align*} thus, \begin{align*} P_{Y|U=u} = P_Y+\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix} +\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}. \end{align*} Note that the vector $\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}$ belongs to Null$(P_{X|Y})$ since we have \begin{align*} P_{X|Y}\!\!\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}&\!\!=\![P_{X|Y_1}\ P_{X|Y_2}]\!\!\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}\\&\!\!=\!-P_{X|Y_1}\!P_{X|Y_1}^{-1}\!P_{X|Y_2}\alpha_2\!+\!\!P_{X|Y_2}\alpha_2\\&\!\!=\bm{0}\in\mathbb{R}^{|\cal X|}, \end{align*} and thus by Lemma~\ref{null} it belongs to Null$(M)$. So, we have \begin{align*} MP_{Y|U=u} = MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}. \end{align*} Consequently, if the Markov chain $X-Y-U$ holds and the perturbation vector $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, then we have $P_{Y|U=u}\in\mathbb{S}_u$. \end{proof} \begin{lemma}\label{44} Any vector $\alpha$ in $\mathbb{S}_u$ is a standard probability vector. Also, for any pair $(U,Y)$, for which $P_{Y|U=u}\in\mathbb{S}_u$, $\forall u\in\mathcal{U}$ with $J_u$ satisfying \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, we can have $X-Y-U$ and $P_{X|U=u}-P_X=\epsilon\cdot J_u$. \end{lemma} \begin{proof} The proof is provided in Appendix~A. \end{proof} \begin{theorem} We have the following equivalency \begin{align}\label{equi} \min_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \|P_{X|U=u}-P_X\|_1\leq\epsilon,\ \forall u\in\mathcal{U}} \end{array}}\! \! \! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U) =\!\!\!\!\!\!\!\!\! \min_{\begin{array}{c} \substack{P_U,\ P_{Y|U=u}\in\mathbb{S}_u,\ \forall u\in\mathcal{U},\\ \sum_u P_U(u)P_{Y|U=u}=P_Y,\\ J_u \text{satisfies}\ \eqref{prop1},\ \eqref{prop2},\ \text{and}\ \eqref{prop3}} \end{array}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U). \end{align} \end{theorem} \begin{proof} The proof follows directly from Lemmas~\ref{null2} and~\ref{44}. \end{proof} In next proposition we discuss how $H(Y|U)$ is minimized over $P_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$. \begin{proposition}\label{4} Let $P^*_{Y|U=u},\ \forall u\in\mathcal{U}$ be the minimizer of $H(Y|U)$ over the set $\{P_{Y|U=u}\in\mathbb{S}_u,\ \forall u\in\mathcal{U}|\sum_u P_U(u)P_{Y|U=u}=P_Y\}$, then $P^*_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$ must belong to extreme points of $\mathbb{S}_u$. \end{proposition} \begin{proof} The proof builds on the concavity property of the entropy. For more details see \cite{Amir2}. \end{proof} In order to solve the minimization problem found in \eqref{equi}, we propose the following procedure: In the first step, we find the extreme points of $\mathbb{S}_u$, which is an easy task. 
Since the extreme points of the sets $\mathbb{S}_u$ have a particular geometry, in the second step, we locally approximate the conditional entropy $H(Y|U)$ so that we end up with a quadratic problem with quadratic constraints over $P_U(.)$ and $J_u$ that can be easily solved.\subsection{Finding $\mathbb{S}_u^*$ (Extreme points of $\mathbb{S}_u$)}\label{seca} In this part we find the extreme points of $\mathbb{S}_u$ for each $u\in\mathcal{U}$. As argued in \cite{deniz6}, the extreme points of $\mathbb{S}_u$ are the basic feasible solutions of $\mathbb{S}_u$. Basic feasible solutions of $\mathbb{S}_u$ can be found in the following manner. Let $\Omega$ be the set of indices which corresponds to $|\mathcal{X}|$ linearly independent columns of $M$, i.e., $|\Omega|=|\mathcal{X}|$ and $\Omega\subset \{1,..,|\mathcal{Y}|\}$. Let $M_{\Omega}\in\mathbb{R}^{|\mathcal{X}|\times|\mathcal{X}|}$ be the submatrix of $M$ with columns indexed by the set $\Omega$. It can be seen that $M_{\Omega}$ is an invertible matrix since $rank(M)=|\cal X|$. Then, if all elements of the vector $M_{\Omega}^{-1}(MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})$ are non-negative, the vector $V_{\Omega}^*\in\mathbb{R}^{|\mathcal{Y}|}$, which is defined in the following, is a basic feasible solution of $\mathbb{S}_u$. Assume that $\Omega = \{\omega_1,..,\omega_{|\mathcal{X}|}\}$, where $\omega_i\in\{1,..,|\mathcal{Y}|\}$ and all elements are arranged in an increasing order. The $\omega_i$-th element of $V_{\Omega}^*$ is defined as $i$-th element of $M_{\Omega}^{-1}(MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})$, i.e., for $1\leq i \leq |\mathcal{X}|$ we have \begin{align}\label{defin} V_{\Omega}^*(\omega_i)= (M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})(i). \end{align} Other elements of $V_{\Omega}^*$ are set to be zero. In the next proposition we show two properties of each vector inside $V_{\Omega}^*$. \begin{proposition}\label{kos} Let $\Omega\subset\{1,..,|\mathcal{Y}|\},\ |\Omega|=|\cal X|$. For every $\Omega$ we have $1^T\left(M_{\Omega}^{-1}MP_Y \right)=1$. Furthermore, $1^T\left(M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}\right)=0$. \end{proposition} \begin{proof} The proof is provided in Appendix~B. \end{proof} Next we define a set $\mathcal{H}_{XY}$ which includes all leakage matrices $P_{X|Y}$ having a property that allows us to approximate the conditional entropy $H(Y|U)$. \begin{definition} Let $\mathcal{P}(\cal X)$ be the standard probability simplex defined as $\mathcal{P}(\mathcal {X})=\{x\in\mathbb{R}^{|\mathcal{X}|}|1^Tx=1,\ x(i) \geq 0,\ 1\leq i\leq |\cal X|\}$ and let $\mathcal{P'}(\cal X)$ be the subset of $\mathcal{P}(\cal X)$ defined as $\mathcal{P'}(\mathcal {X})=\{x\in\mathbb{R}^{|\mathcal{X}|}|1^Tx=1,\ x(i) > 0,\ 1\leq i\leq |\cal X|\}$. For every $\Omega\subset\{1,..,|\cal Y|\}$, $|\Omega|=|\mathcal{X}|$, $\mathcal{H}_{XY}$ is the set of all leakage matrices defined as follows \begin{align*} \mathcal{H}_{XY} = \{P_{X|Y}\!\in\! \mathbb{R}^{|\cal X|\times |\cal Y|}|\text{if}\ t_{\Omega}\in\mathcal{P}(\mathcal {X})\Rightarrow t_{\Omega}\in\mathcal{P'}(\mathcal {X})\}, \end{align*} where $t_{\Omega}=M_{\Omega}^{-1}MP_Y$. In other words, for any $P_{X|Y}\in\mathcal{H}_{XY}$, if $t_{\Omega}$ is a probability vector, then it consists only of nonzero elements for every $\Omega$. \end{definition} \begin{example} Let $P_{X|Y}=\begin{bmatrix} 0.3 &0.8 &0.5 \\0.7 &0.2 &0.5 \end{bmatrix} $. 
By using SVD of $P_{X|Y}$, $M$ can be found as $\begin{bmatrix} -0.5556 &-0.6016 &0.574 \\0.6742 &-0.73 &0.1125 \end{bmatrix}$. Possible sets $\Omega$ are $\Omega_1=\{1,2\},\ \Omega_2=\{1,3\},$ and $\Omega_3=\{2,3\}$. By calculating $M_{\Omega_{u_i}}^{-1}MP_Y$ we have $M_{\Omega_{1}}^{-1}MP_Y =\begin{bmatrix} 0.7667\\0.2333 \end{bmatrix},\ M_{\Omega_{2}}^{-1}MP_Y =\begin{bmatrix} 0.4167\\0.5833 \end{bmatrix}, M_{\Omega_{3}}^{-1}MP_Y =\begin{bmatrix} -0.2778\\1.2778 \end{bmatrix}.$ Since the elements of the first two probability vectors are positive, $P_{X|Y}\in \mathcal{H}_{XY}$. \end{example} \begin{remark}\label{ann} For $P_{X|Y}\in\mathcal{H}_{XY}$, $M_{\Omega}MP_Y>0$ implies $M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}>0$, furthermore, if the vector $M_{\Omega}MP_Y$ contains a negative element, the vector $M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$ contains a negative element as well (i.e., not a feasible distribution), since we assumed $\epsilon$ is sufficiently small. %Thus, the set $\Omega$ corresponds to a basic feasible solution. \end{remark} From Proposition~\ref{kos} and Remark~\ref{ann}, we conclude that each basic feasible solution of $\mathbb{S}_u$, i.e., $V_{\Omega}^*$, can be written as summation of a standard probability vector (built by $M_{\Omega}^{-1}MP_Y$) and a perturbation vector (built by $\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$). This is the key property to locally approximate $H(Y|U)$. \begin{remark} The number of basic feasible solutions of $\mathbb{S}_u$ is at most $\binom{|\mathcal{Y}|}{|\mathcal{X}|}$, thus we have at most $\binom{|\mathcal{Y}|}{|\mathcal{X}|}^{|\mathcal{Y}|}$ optimization problems with variables $P_U(.)$ and $J_u$.\end{remark} \subsection{Quadratic Optimization Problem} In this part, we approximate $H(Y|U)$ for an obtained basic feasible solution which leads to the new quadratic problem. In the following we use the Bachmann-Landau notation where $o(\epsilon)$ describes the asymptotic behaviour of a function $f:\mathbb{R}^+\rightarrow\mathbb{R}$ which satisfies that $\frac{f(\epsilon)}{\epsilon}\rightarrow 0$ as $\epsilon\rightarrow 0$. \begin{lemma}\label{5} Assume $P_{X|Y}\in\mathcal{H}_{XY}$ and $V_{\Omega_u}^*$ is an extreme point of the set $\mathbb{S}_u$, then for $P_{Y|U=u}=V_{\Omega_u}^*$ we have \begin{align*} H(P_{Y|U=u}) &=\sum_{y=1}^{|\mathcal{Y}|}-P_{Y|U=u}(y)\log(P_{Y|U=u}(y))\\&=-(b_u+\epsilon a_uJ_u)+o(\epsilon), \end{align*} with $b_u = l_u \left(M_{\Omega_u}^{-1}MP_Y\right),\ a_u = l_u\left(M_{\Omega_u}^{-1}M(1\!\!:\!\!|\mathcal{X}|)P_{X|Y_1}^{-1}\right)\in\mathbb{R}^{1\times|\mathcal{X}|},\ l_u = \left[\log\left(M_{\Omega_u}^{-1}MP_{Y}(i)\right)\right]_{i=1:|\mathcal{X}|}\in\mathbb{R}^{1\times|\mathcal{X}|}, $ and $M_{\Omega_u}^{-1}MP_{Y}(i)$ stands for $i$-th ($1\leq i\leq |\mathcal{X}|$) element of the vector $M_{\Omega_u}^{-1}MP_{Y}$. Furthermore, $M(1\!\!:\!\!|\mathcal{X}|)$ stands for submatrix of $M$ with first $|\mathcal{X}|$ columns. \end{lemma} \begin{proof} The result corresponds to first order term of Taylor expansion of $\log(1+x)$. For more details see \cite{Amir2}. \end{proof} In the following we use the notation $f(x)\cong g(x)$ which denotes $f(x) = g(x) + o(x)$, i.e., $g(x)$ is the first order term of the Taylor expansion of $f(x)$. \begin{theorem}\label{th11} Let $P_{X|Y}\in\mathcal{H}_{XY}$ and $V_{\Omega_u}^*\in\mathbb{S}_u^*,\ u\in\{1,..,|\mathcal{Y}|\}$. 
For sufficiently small $\epsilon$, the minimization problem in \eqref{equi} can be approximated as follows \begin{align}\label{minmin} &\min_{P_U(.),\{J_u, u\in\mathcal{U}\}} -\left(\sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u\right)\\\nonumber &\text{subject to:}\\\nonumber &\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y,\ \sum_{u=1}^{|\mathcal{Y}|} P_uJ_u=0,\ P_U(.)\geq 0,\\\nonumber &\sum_{i=1}^{|\mathcal{X}|} |J_u(i)|\leq 1,\ \sum_{i=1}^{|\mathcal{X}|}J_u(i)=0,\ \forall u\in\mathcal{U}. \end{align} \end{theorem} \begin{proof} The proof is based on Proposition~\ref{4} and Lemma~\ref{5}. For $P_{Y|U=u}=V_{\Omega_u}^*,\ u\in\{1,..,|\mathcal{Y}|\}$, $H(Y|U)$ can be approximated as $ H(Y|U) = \sum_u P_uH(P_{Y|U=u})\cong \sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u,$ where $b_u$ and $a_u$ are defined as in Lemma~\ref{5}. \end{proof} \begin{remark} The weights $P_u$, which satisfy the constraints $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ and $P_U(.)\geq 0$, form a standard probability distribution, since the sum of elements in each vector $V_{\Omega_u}^*\in\mathbb{S}_u^*$ equals to one. \end{remark} \begin{remark} If we set $\epsilon=0$, \eqref{minmin} becomes the same linear programming problem as presented in \cite{deniz6}. \end{remark} \begin{proposition} All constraints in \eqref{minmin} are feasible. \end{proposition} \begin{proof} Let $J_u=0$ for all $u\in{\mathcal{U}}$. In this case all sets $\mathbb{S}_u$ become the same sets named $\mathbb{S}$. Since the set $\mathbb{S}$ is an at most $|\mathcal{Y}|-1$ dimensional polytope and $P_Y\in\mathbb{S}$, $P_Y$ can be written as convex combination of some extreme points in $\mathbb{S}$. Thus, the constraint $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ is feasible. Furthermore, by choosing $J_u=0$ for all $u\in{\mathcal{U}}$, all other constraints in \eqref{minmin} are feasible too. \end{proof} In next section we show that \eqref{minmin} can be converted into a linear programming problem. \subsection{Equivalent linear programming problem}\label{c} Let $\eta_u=P_uM_{\Omega_u}^{-1}MP_Y+\epsilon M_{\Omega_u}^{-1}M(1:|\mathcal{X}|)P_{X|Y_1}^{-1}(P_uJ_u)$ for all $u\in \mathcal{U}$, where $\eta_u\in\mathbb{R}^{|\mathcal{X}|}$. $\eta_u$ corresponds to multiple of non-zero elements of the extreme point $V_{\Omega_u}^*$. $P_u$ and $J_u$ can be found as follows \begin{align*} P_u&=\bm{1}^T\cdot \eta_u,\\ J_u&=\frac{P_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}[\eta_u-(\bm{1}^T \eta_u)M_{\Omega_u}^{-1}MP_Y]}{\epsilon(\bm{1}^T\cdot \eta_u)}, \end{align*} Thus, \eqref{minmin} can be rewritten as a linear programming problem using $\eta_u$ as follows. For the cost function it can be shown \begin{align*} &-\left(\sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u\right)= -\sum_u b_u(\bm{1}^T\eta_u)\\&-\!\epsilon\!\sum_u \!a_u\! \left[P_{X|Y_1}\!M(1\!:\!|\mathcal{X}|)^{-1}\!M_{\Omega_u}\![\eta_u\!\!-\!(\bm{1}^T \eta_u)M_{\Omega_u}^{-1}M\!P_Y]\right]\!, \end{align*} which is a linear function of elements of $\eta_u$ for all $u\in\mathcal{U}$. Non-zero elements of the vector $P_uV_{\Omega_u}^*$ are equal to the elements of $\eta_u$, i.e., we have $P_uV_{\Omega_u}^*(\omega_i)=\eta_u(i).$ Thus, the constraint $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ can be rewritten as a linear function of elements of $\eta_u$. 
The constraints $\sum_{u=1}^{|\mathcal{Y}|} P_uJ_u=0$ and $P_u\geq 0\ \forall u$ as well as $\sum_{i=1}^{|\mathcal{X}|}J_u(i)=0$ are reformulated as $\sum_uP_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}\left[\eta_u-(\bm{1}^T\cdot\eta_u)M_{\Omega_u}^{-1}MP_Y\right]=0,\ \sum_i \eta_u(i)\geq 0,\ \forall u$ and $\bm{1}^T P_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}\left[\eta_u-(\bm{1}^T\cdot\eta_u)M_{\Omega_u}^{-1}MP_Y\right]=0$, respectively. Furthermore, the last constraint, i.e., $\sum_{i=1}^{|\mathcal{X}|} |J_u(i)|\leq 1$, can be rewritten as \begin{align*} &\sum_i\! \left|\left(P_{X|Y_1}\!M(1\!:\!|\mathcal{X}|)^{-1}\!M_{\Omega_u}\!\left[\eta_u\!-\!(\bm{1}^T\eta_u)M_{\Omega_u}^{-1}MP_Y\right]\right)\!(i)\right|\\&\leq\epsilon(\bm{1}^T\eta_u),\ \forall u. \end{align*} Thus, all constraints can be rewritten as linear functions of elements of $\eta_u$ for all $u$. thus we have $\mathbb{S}_u^*=\{V_{\Omega_{u_1}}^*,V_{\Omega_{u_2}}^*,V_{\Omega_{u_5}}^*,V_{\Omega_{u_6}}^*\}$ for $u\in\{1,2,3,4\}$. Since for each $u$, $|\mathbb{S}_u^*|=4$, $4^4$ optimization problems need to be considered where not all are feasible. In Step 2, we choose first element of $\mathbb{S}_1^*$, second element of $\mathbb{S}_2^*$, third element of $\mathbb{S}_3^*$ and fourth element of $\mathbb{S}_4^*$. Thus we have the following quadratic problem \begin{align*} \min\ &P_10.9097+P_20.6962+P_30.6254+P_40.9544\\-&P_1\epsilon J_1^2 2.1089+P_2\epsilon J_2^2 10.5816-P_3 \epsilon J_3^2 6.0808 \\+&P_4\epsilon J_4^2 7.3747\\ &\text{s.t.}\begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = P_1 \begin{bmatrix} 0.675+\epsilon2J_1^2\\0.325-\epsilon2J_1^2\\0\\0 \end{bmatrix}+ P_2\begin{bmatrix} 0.1875+\epsilon 5J_2^2\\0\\0.8125-\epsilon 5J_2^2\\0 \end{bmatrix}\\&+P_3\begin{bmatrix} 0\\0.1563-\epsilon 2.5J_3^2\\0\\0.8437+\epsilon 2.5J_3^2 \end{bmatrix}+P_4\begin{bmatrix} 0\\0\\0.6251-\epsilon 10J_4^2\\0.3749+\epsilon 10J_4^2 \end{bmatrix},\\ &P_1J_1^2+P_2J_2^2+P_3J_3^2+P_4J_4^2=0,\ P_1,P_2,P_3,P_4\geq 0,\\ &|J_1^2|\leq \frac{1}{2},\ |J_2^2|\leq \frac{1}{2},\ |J_3^2|\leq \frac{1}{2},\ |J_4^2|\leq \frac{1}{2}, \end{align*} where the minimization is over $P_u$ and $J_u^2$ for $u\in\{1,2,3,4\}$. Now we convert the problem to linear programming. We have \begin{align*} \eta_1&= \begin{bmatrix} 0.675P_1+\epsilon2P_1J_1^2\\0.325P_1-\epsilon2P_1J_1^2 \end{bmatrix}=\begin{bmatrix} \eta_1^1\\\eta_1^2 \end{bmatrix},\\ \eta_2&=\begin{bmatrix} 0.1875P_2+\epsilon 5P_2J_2^2\\0.8125P_2-\epsilon 5P_2J_2^2 \end{bmatrix}=\begin{bmatrix} \eta_2^1\\\eta_2^2 \end{bmatrix},\\ \eta_3&= \begin{bmatrix} 0.1563P_3-\epsilon 2.5P_3J_3^2\\0.8437P_3+\epsilon 2.5P_3J_3^2 \end{bmatrix}=\begin{bmatrix} \eta_3^1\\\eta_3^2 \end{bmatrix},\\ \eta_4&= \begin{bmatrix} 0.3749P_4-\epsilon 10P_4J_4^2\\0.6251P_4+\epsilon 10P_4J_4^2 \end{bmatrix}=\begin{bmatrix} \eta_3^1\\\eta_3^2 \end{bmatrix}. 
\end{align*} \begin{align*} \min\ &0.567\eta_1^1+1.6215\eta_1^2+2.415\eta_2^1+0.2995\eta_2^2\\ &2.6776\eta_3^1+0.2452\eta_3^2+0.6779\eta_4^1+1.4155\eta_4^2\\ &\text{s.t.}\begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = \begin{bmatrix} \eta_1^1+\eta_2^1\\\eta_1^2+\eta_3^2\\\eta_2^2+\eta_4^1\\\eta_3^2+\eta_4^2 \end{bmatrix},\ \begin{cases} &\eta_1^1+\eta_1^2\geq 0\\ &\eta_2^1+\eta_2^2\geq 0\\ &\eta_3^1+\eta_3^2\geq 0\\ &\eta_4^1+\eta_4^2\geq 0 \end{cases}\\ &\frac{0.325\eta_1^1-0.675\eta_1^2}{2}+\frac{0.8125\eta_2^1-0.1875\eta_2^2}{5}+\\ &\frac{0.1563\eta_3^2-0.8437\eta_3^1}{2.5}+\frac{0.6251\eta_4^2-0.3749\eta_4^1}{10}=0,\\ &\frac{|0.325\eta_1^1-0.675\eta_1^2|}{\eta_1^1+\eta_1^2}\leq \epsilon,\ \frac{|0.8125\eta_2^1-0.1875\eta_2^2|}{\eta_2^1+\eta_2^2}\leq 2.5\epsilon\\ &\frac{|0.1563\eta_3^2-0.8437\eta_3^1|}{\eta_3^1+\eta_3^2}\leq 1.125\epsilon,\\ &\frac{|0.6251\eta_4^2-0.3749\eta_4^1|}{\eta_4^1+\eta_4^2}\leq 5\epsilon. \end{align*} The solution to the obtained problem is as follows (we assumed $\epsilon = 10^{-2}$) \begin{align*} &P_U = \begin{bmatrix} 0.7048 \\ 0.1493 \\ 0.146 \\ 0 \end{bmatrix},\ J_1 = \begin{bmatrix} -0.0023 \\ 0.0023 \end{bmatrix},\ J_2 = \begin{bmatrix} 0.5 \\ -0.5 \end{bmatrix}\\ &J_3 = \begin{bmatrix} -0.5 \\ 0.5 \end{bmatrix},\ J_4 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \min \text{(cost)} = 0.8239, \end{align*} thus, for this combination we obtain $I(U;Y)\cong H(Y)-0.8239=1.75-0.8239 = 0.9261$. If we try all possible combinations we get the minimum cost as $0.8239$ and thus we have $\max I(U;Y)\cong 0.9261$. For feasibility of $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$, consider the following combination. We choose first extreme point from each set $\mathbb{S}_u$ for $u\in\{1,2,3,4\}$. Now consider the condition $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$, we have \begin{align*} \begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = P_1 \begin{bmatrix} 0.675+\epsilon2J_1^2\\0.325-\epsilon2J_1^2\\0\\0 \end{bmatrix}+ P_2\begin{bmatrix} 0.675+\epsilon2J_2^2\\0.325-\epsilon2J_2^2\\0\\0 \end{bmatrix} \end{align*} \begin{align*}+P_3\begin{bmatrix} 0.675+\epsilon2J_3^2\\0.325-\epsilon2J_3^2\\0\\0 \end{bmatrix}+P_4\begin{bmatrix} 0.675+\epsilon2J_4^2\\0.325-\epsilon2J_4^2\\0\\0 \end{bmatrix}, \end{align*} which is obviously not feasible since there is no $P_U$ and $J_u$ satisfying this constraint. Thus, not all combinations are feasible. Furthermore, if we set $\epsilon=0$ we obtain the solution in \cite{deniz6}. The cost function in \cite{deniz6} is equal to $0.9063$ which is lower than our result. \end{example} Condition \eqref{orth} can be interpreted as an inner product between vectors $L_u$ and $\sqrt{P_X}$, where$\sqrt{P_X}\in\mathbb{R}^{\mathcal{K}}$ is a vector with entries $\{\sqrt{P_X(x)},\ x\in\mathcal{X}\}$. Thus, condition \eqref{orth} states an orthogonality condition. Furthermore, \eqref{orth2} can be rewritten in vector form as $\sum_u P_U(u)L_u=\bm 0\in\mathcal{R}^{\mathcal{K}}$ using the assumption that $P_X(x)>0$ for all $x\in\mathcal{X}$. Therewith, the problem in corollary \eqref{corr1} can be rewritten as \begin{align} \max_{\begin{array}{c} \substack{L_u,P_U:\|L_u\|^2\leq 1,\\ L_u\perp\sqrt{P_X},\\ \sum_u P_U(u)L_u=\bm 0} \end{array}} \sum_u P_U(u)\|W\cdot L_u\|^2.\label{max} \end{align} The next proposition shows how to simplify \eqref{max}. 
\begin{proposition} Let $L^*$ be the maximizer of \eqref{max2}. Then \eqref{max} and \eqref{max2} achieve the same maximum value, and $U$ chosen as a uniform binary RV with $L_0=-L_1=L^*$ maximizes \eqref{max}. \begin{align} \max_{L:L\perp \sqrt{P_X},\ \|L\|^2\leq 1} \|W\cdot L\|^2.\label{max2} \end{align} \begin{proof} Let $\{L_u^*,P_U^*\}$ be the maximizer of \eqref{max}. Furthermore, let $u'$ be the index that maximizes $\|W\cdot L_{u}^*\|^2$, i.e., $u'=\text{argmax}_{u\in\mathcal{U}} \|W\cdot L_{u}^*\|^2$. Then we have \begin{align*} \sum_u P_U^*(u)||W\cdot L_u^*||^2\leq ||W\cdot L_{u'}^*||^2\leq||W\cdot L^*||^2, \end{align*} where the right inequality comes from the fact that $L^*$ has to satisfy one constraint fewer than $L_{u'}^*$. However, by choosing $U$ as a uniform binary RV and $L_0=-L_1=L^*$, the constraints in \eqref{max} are satisfied and the maximum in \eqref{max2} is achieved. Thus, without loss of optimality, we can choose $U$ as a uniformly distributed binary RV and \eqref{max} reduces to \eqref{max2}. \end{proof} \end{proposition} After finding the solution of \eqref{max2}, the conditional distributions $P_{X|U=u}$ and $P_{Y|U=u}$ are given by \begin{align} P_{X|U=0}&=P_X+\epsilon[\sqrt{P_X}]L^*,\\ P_{X|U=1}&=P_X-\epsilon[\sqrt{P_X}]L^*,\\ P_{Y|U=0}&=P_Y+\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\label{condis1}\\ P_{Y|U=1}&=P_Y-\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*.\label{condis2} \end{align} In the next theorem we derive the solution of \eqref{max2}. \begin{theorem}\label{th1} $L^*$, which maximizes \eqref{max2}, is the right singular vector corresponding to the largest singular value of $W$. \end{theorem} \begin{proof} The proof is provided in Appendix B. \end{proof} By using Theorem~\ref{th1}, the solution to the problem in Corollary~\ref{corr1} can be summarized as $\{P_U^*,L_u^*\}=\{U\ \text{uniform binary RV},\ L_0=-L_1=L^*\}$, where $L^*$ is the solution of \eqref{max2}. Thus, we have the following result. \begin{corollary} The maximum value in \eqref{privacy} can be approximated by $\frac{1}{2}\epsilon^2\sigma_{\text{max}}^2$ for small $\epsilon$ and can be achieved by a privacy mechanism characterized by the conditional distributions found in \eqref{condis1} and \eqref{condis2}, where $\sigma_{\text{max}}$ is the largest singular value of $W$ corresponding to the right singular vector $L^*$. \end{corollary} \begin{figure*}[] \centering \includegraphics[width = .7\textwidth]{Figs/inter.jpg} \caption{ For the privacy mechanism design, we are looking for $L^*$ in the red region (vector space A) which results in a vector with the largest Euclidean norm in vector space D. Space B and space C are probability spaces for the input and output distributions, the circle in space A represents the vectors that satisfy the $\chi^2$-privacy criterion and the red region denotes all vectors that are orthogonal to vector $\sqrt{P_X}$. } \label{fig:inter} \end{figure*} \section{discussions}\label{dissc} In Figure~\ref{fig:inter}, four spaces are illustrated. Space B and space C are probability spaces of the input and output distributions, where the points are inside a simplex. Multiplying input distributions by $P_{X|Y}^{-1}$ results in output distributions. Space A illustrates vectors $L_u$ with norm smaller than 1, which corresponds to the $\chi^2$-privacy criterion. The red region in this space includes all vectors that are orthogonal to $\sqrt{P_X}$.
For the optimal solution, with $U$ chosen to be an equiprobable binary RV, it remains to find the vector $L_u$ in the red region that results in a vector with the largest norm in space D. This is achieved by the principal right-singular vector of $W$. The mapping between space A and B is given by $[\sqrt{P_X}^{-1}]$ and the mapping between space C and D is given by $[\sqrt{P_Y}^{-1}]$. Thus $W$ is given by $[\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}]$. In the following, we provide an example illustrating the procedure of finding the mechanism that produces $U$. \begin{example} Consider the kernel matrix $P_{X|Y}=\begin{bmatrix} \frac{1}{4} & \frac{2}{5} \\ \frac{3}{4} & \frac{3}{5} \end{bmatrix}$ and let $P_Y$ be given as $[\frac{1}{4} , \frac{3}{4}]^T$. Thus we can calculate $W$ and $P_X$ as: \begin{align*} P_X&=P_{X|Y}P_Y=[0.3625, 0.6375]^T,\\ W &= [\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}] = \begin{bmatrix} -4.8166 & 4.2583 \\ 3.4761 & -1.5366 \end{bmatrix}. \end{align*} The singular values of $W$ are $7.4012$ and $1$ with corresponding right singular vectors $[0.7984, -0.6021]^T$ and $[0.6021 , 0.7984]^T$, respectively. Thus the maximum of \eqref{privacy} is approximately $\frac{1}{2}\epsilon^2\times(7.4012)^2=27.39\cdot \epsilon^2$. The maximizing vector $L^*$ in \eqref{max2} is equal to $[0.7984, -0.6021]^T$ and the mapping between $U$ and $Y$ can be calculated as follows (the approximate maximum of $I(U;Y)$ is achieved by the following conditional distributions): \begin{align*} P_{Y|U=0}&=P_Y+\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\\ &=[0.25-3.2048\cdot\epsilon , 0.75+3.2048\cdot\epsilon]^T,\\ P_{Y|U=1}&=P_Y-\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\\ &=[0.25+3.2048\cdot\epsilon , 0.75-3.2048\cdot\epsilon]^T. \end{align*} Note that the approximation is valid if $|\epsilon \frac{P_{X|Y}^{-1}J_{u}(y)}{P_{Y}(y)}|\ll 1$ holds for all $y$ and $u$. For the example above we have $\epsilon\cdot P_{X|Y}^{-1}J_0=\epsilon[-3.2048,\ 3.2048]^T$ and $\epsilon\cdot P_{X|Y}^{-1}J_1=\epsilon[3.2048,\ -3.2048]^T$ so that $\epsilon\ll 0.078$. \end{example} In the next example we consider a BSC($\alpha$) channel as the kernel. We provide an example with a constant upper bound on the approximated mutual information. \begin{example} Let $P_{X|Y}=\begin{bmatrix} 1-\alpha & \alpha \\ \alpha & 1-\alpha \end{bmatrix}$ and $P_Y$ be given by $[\frac{1}{4} , \frac{3}{4}]^T$. By following the same procedure we have: \begin{align*} P_X&=P_{X|Y}P_Y=[\frac{2\alpha+1}{4}, \frac{3-2\alpha}{4}]^T,\\ W &= [\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}] \\&= \begin{bmatrix} \frac{\sqrt{2\alpha+1}(\alpha-1)}{(2\alpha-1)} & \frac{\alpha\sqrt{3-2\alpha}}{2\alpha-1} \\ \frac{\alpha\sqrt{2\alpha+1}}{\sqrt{3}(2\alpha-1)} & \frac{\sqrt{3-2\alpha}(\alpha-1)}{\sqrt{3}(2\alpha-1)} \end{bmatrix}. \end{align*} The singular values of $W$ are $\sqrt{\frac{(2\alpha+1)(3-2\alpha)}{3(2\alpha-1)^2}}\geq 1$ for $\alpha\in[0,\frac{1}{2})$ and $1$, with corresponding right singular vectors $[-\sqrt{\frac{3-2\alpha}{4}},\ \sqrt{\frac{2\alpha+1}{4}}]^T$ and $[\sqrt{\frac{2\alpha+1}{4}}, \sqrt{\frac{3-2\alpha}{4}}]^T$, respectively.
Thus, we have $L^*=[-\sqrt{\frac{3-2\alpha}{4}},\ \sqrt{\frac{2\alpha+1}{4}}]^T$ and $\max I(U;Y)\approx \epsilon^2\frac{(2\alpha+1)(3-2\alpha)}{6(2\alpha-1)^2}$ with the following conditional distributions: \begin{align*} P_{Y|U=0}&=P_Y+\epsilon\cdot P_{X|Y}^{-1}[\sqrt{P_X}]L^*\\&=[\frac{1}{4}+\epsilon\frac{\sqrt{(3-2\alpha)(2\alpha+1)}}{4(2\alpha-1)},\ \frac{3}{4}-\epsilon\frac{\sqrt{(3-2\alpha)(2\alpha+1)}}{4(2\alpha-1)}],\\ P_{Y|U=1}&=P_Y-\epsilon\cdot P_{X|Y}^{-1}[\sqrt{P_X}]L^*\\&=[\frac{1}{4}-\epsilon\frac{\sqrt{(3-2\alpha)(2\alpha+1)}}{4(2\alpha-1)},\ \frac{3}{4}+\epsilon\frac{\sqrt{(3-2\alpha)(2\alpha+1)}}{4(2\alpha-1)}].\\ \end{align*} The approximation of $I(U;Y)$ holds when we have $|\epsilon\frac{P_{X|Y}^{-1}[\sqrt{P_X}]L^*}{P_Y}|\ll 1$ for all $y$ and $u$, which leads to $\epsilon\ll\frac{|2\alpha-1|}{\sqrt{(3-2\alpha)(2\alpha+1)}}$. If $\epsilon<\frac{|2\alpha-1|}{\sqrt{(3-2\alpha)(2\alpha+1)}}$, then the approximation of the mutual information $I(U;Y)\cong\frac{1}{2}\epsilon^2\sigma_{\text{max}}^2$ is upper bounded by $\frac{1}{6}$ for all $0\leq\alpha<\frac{1}{2}$. \end{example} Now we investigate the range of permissible $\epsilon$ in Corollary \ref{corr1}. For approximating $I(U;Y)$, we use the second-order Taylor expansion of $\log(1+x)$. Therefore we must have $|\epsilon\frac{P_{X|Y}^{-1}J_u(y)}{P_Y(y)}|<1$ for all $u$ and $y$. One sufficient condition for $\epsilon$ to satisfy this inequality is to have $\epsilon<\frac{|\sigma_{\text{min}}(P_{X|Y})|\min_{y\in\mathcal{Y}}P_Y(y)}{\sqrt{\max_{x\in{\mathcal{X}}}P_X(x)}}$, since in this case we have \begin{align*} \epsilon^2|P_{X|Y}^{-1}J_u(y)|^2&\leq\epsilon^2\left\lVert P_{X|Y}^{-1}J_u\right\rVert^2\leq\epsilon^2 \sigma_{\max}^2\left(P_{X|Y}^{-1}\right)\left\lVert J_u\right\rVert^2\\&\stackrel{(a)}\leq\frac{\epsilon^2\max_{x\in{\mathcal{X}}}P_X(x)}{\sigma^2_{\text{min}}(P_{X|Y})}<\min_{y\in\mathcal{Y}} P_Y^2(y), \end{align*} which implies $|\epsilon\frac{P_{X|Y}^{-1}J_u(y)}{P_Y(y)}|<1$. Step (a) follows from $\sigma_{\max}^2\left(P_{X|Y}^{-1}\right)=\frac{1}{\sigma_{\min}^2\left(P_{X|Y}\right)}$ and $\|J_u\|^2\leq\max_{x\in{\mathcal{X}}}P_X(x)$. The latter inequality follows from \eqref{prop3} since we have \begin{align*} \frac{\|J_u\|^2}{\max_{x\in{\mathcal{X}}}P_X(x)}\leq \sum_{x\in\mathcal{X}}\frac{J_u^2(x)}{P_X(x)}\leq 1. \end{align*} Furthermore, for approximating $I(U;X)$ we should have $|\epsilon\frac{J_u(x)}{P_X(x)}|<1$ for all $x$ and $u$. One sufficient condition is to have $\epsilon<\frac{\min_{x\in\mathcal{X}}P_X(x)}{\sqrt{\max_{x\in\mathcal{X}}P_X(x)}}$. The proof follows again from \eqref{prop3}. \section{conclusion}\label{concul} In summary, we have shown that we can use the concept of Euclidean information theory to simplify the analytical privacy mechanism design problem if a small privacy leakage $\epsilon$ under the $\chi^2$-privacy criterion is tolerated. This allows us to locally approximate the KL divergence and hence the mutual information between $U$ and $Y$. By using this approximation it is shown that the original problem can be reduced to a simple linear algebra problem. \section{conclusion}\label{concul2} It has been shown that the information-theoretic disclosure control problem can be decomposed if the leakage matrix is fat, has full row rank, and belongs to $\mathcal{H}_{XY}$.
Furthermore, information geometry can be used to approximate $H(Y|U)$, allowing us to simplify the optimization problem for sufficiently small $\ell_1$-privacy leakage. In particular, the new optimization problem can be transformed into a linear programming problem, with details as found in \cite{Amir2}. Therefore, $1$ is the smallest singular value of $W$, with $\sqrt{P_X}$ as the corresponding right singular vector. Furthermore, the right singular vector corresponding to the largest singular value is orthogonal to $\sqrt{P_X}$. Thus, the principal right-singular vector is the solution of \eqref{max2}. \section*{Appendix A}\label{appaa} For the first claim it is sufficient to show that for any $\gamma\in\mathbb{S}_u$, we have $1^T\gamma=1$. Since $\gamma\in\mathbb{S}_u$, we have $M(\gamma-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0$ and from Lemma~\ref{null} we obtain $1^T(\gamma-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0$, which yields $1^T\gamma=1$. In the last step we used the fact that $1^T\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}=0$, which is true since by using \eqref{prop1} we have $1_{|\mathcal{Y}|}^T\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\ 0 \end{bmatrix}=1_{|\mathcal{X}|}^T P_{X|Y_1}^{-1}J_u\stackrel{(a)}{=}1^TJ_u=0$, where $(a)$ comes from $1_{|\mathcal{X}|}^T P_{X|Y_1}^{-1}=1_{|\mathcal{X}|}^T\Leftrightarrow 1_{|\mathcal{X}|}^T P_{X|Y_1}=1_{|\mathcal{X}|}^T$ and the fact that the columns of $P_{X|Y_1}$ are probability vectors. If $P_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$ then we can have $X-Y-U$ and \begin{align*} &M(P_{Y|U=u}-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0, \\ \Rightarrow\ &P_{X|Y}(P_{Y|U=u}-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0,\\ \Rightarrow\ &P_{X|U=u}-P_X=\epsilon\cdot P_{X|Y}(\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=\epsilon\cdot J_u, \end{align*} where in the last line we used the Markov chain $X-Y-U$. Furthermore, $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3} for all $u\in\mathcal{U}$; thus, the privacy criterion defined in \eqref{local} holds. \section*{Appendix B} Consider the set $\mathbb{S}=\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y\}$. Any element of $\mathbb{S}$ has elements that sum to one, since $M(y-P_Y)=0\Rightarrow y-P_Y\in \text{Null}(P_{X|Y})$ and from Lemma~\ref{null} we obtain $1^T(y-P_Y)=0\Rightarrow 1^Ty=1$. The basic solutions of $\mathbb{S}$ are $W_{\Omega}^*$ defined as follows. Let $\Omega = \{\omega_1,..,\omega_{|\mathcal{X}|}\}$, where $\omega_i\in\{1,..,|\mathcal{Y}|\}$. Then \begin{align*} W_{\Omega}^*(\omega_i)=M_{\Omega}^{-1}MP_Y(i), \end{align*} and the other elements of $W_{\Omega}^*$ are zero. Thus, the sum over all elements of $M_{\Omega}^{-1}MP_Y$ equals one, since each element of $\mathbb{S}$ has elements that sum to one. For the second statement consider the set $\mathbb{S}^{'}=\left\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\right\}$. As argued before, the basic solutions of $\mathbb{S}^{'}$ are $V_{\Omega}^*$, where \begin{align*} V_{\Omega}^*(\omega_i)= (M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})(i). \end{align*} Here, the elements of $V_{\Omega}^*$ are not necessarily non-negative. From Lemma~\ref{null2}, each element of $\mathbb{S}^{'}$ has elements that sum to one.
Thus, by using the first statement of this proposition sum over all elements of the vector $ M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$ equal to zero. \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,IZS} \end{document} \documentclass[10pt,conference]{IEEEtran} \usepackage{fixltx2e} \usepackage{cite} \usepackage{url} \usepackage{mathrsfs} \usepackage{color} \usepackage{float} \usepackage[caption=false]{subfig} \ifCLASSINFOpdf \usepackage[pdftex]{graphicx} \graphicspath{{Figs/}} \DeclareGraphicsExtensions{.pdf,.jpeg,.png} \else \usepackage[cmex10]{amsmath} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{bm} \usepackage{xfrac} \usepackage{empheq} \usepackage[normalem]{ulem} \usepackage{soul} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \newtheorem{theorem}{Theorem} \newtheorem*{conclusion}{Conclusion} \newtheorem{example}{Example} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{claim}{Claim} \newtheorem{notation}{Notation} \newtheorem{corollary}{Corollary} \newtheorem{remark}{Remark} \theoremstyle{definition} \newtheorem{definition}{Definition} \usepackage[a4paper,bindingoffset=0.2in,left=1in,right=1in,top=1in,bottom=1in,footskip=.25in]{geometry} \graphicspath{{figs/}} \interdisplaylinepenalty=2500 \begin{document} \newgeometry{left=0.7in,right=0.7in,top=.5in,bottom=1in} \title{Bounds for Privacy-Utility Trade-off with Non-zero Leakage} \vspace{-5mm} \author{ \IEEEauthorblockN{Amirreza Zamani, Tobias J. Oechtering, Mikael Skoglund \vspace*{0.5em} \IEEEauthorblockA{\\ Division of Information Science and Engineering, KTH Royal Institute of Technology \\ Email: \protect [email protected], [email protected], [email protected] }} } \maketitle \begin{abstract} The design of privacy mechanisms for two scenarios is studied where the private data is hidden or observable. In the first scenario, an agent observes useful data $Y$, which is correlated with private data $X$, and wants to disclose the useful information to a user. A privacy mechanism is employed to generate data $U$ that maximizes the revealed information about $Y$ while satisfying a privacy criterion. In the second scenario, the agent has additionally access to the private data. To this end, the Functional Representation Lemma and Strong Functional Representation Lemma are extended relaxing the independence condition and thereby allowing a certain leakage. Lower bounds on privacy-utility trade-off are derived for the second scenario as well as upper bounds for both scenarios. In particular, for the case where no leakage is allowed, our upper and lower bounds improve previous bounds. \end{abstract} \section{Introduction} In this paper, random variable (RV) $Y$ denotes the useful data and is correlated with the private data denoted by RV $X$. Furthermore, disclosed data is described by RV $U$. Two scenarios are considered in this work, where in both scenarios, an agent wants to disclose the useful information to a user as shown in Fig.~\ref{ISITsys}. In the first scenario, the agent observes $Y$ and has not directly access to $X$, i.e., the private data is hidden. The goal is to design $U$ based on $Y$ that reveals as much information as possible about $Y$ and satisfies a privacy criterion. We use mutual information to measure utility and privacy leakage. In this work, some bounded privacy leakage is allowed, i.e., $I(X;U)\leq \epsilon$. 
In the second scenario, the agent has access to both $X$ and $Y$ and can design $U$ based on $(X,Y)$ to release as much information as possible about $Y$ while satisfying the bounded leakage constraint. \\ The privacy mechanism design problem has recently received increased attention in information theory. Related works can be found in \cite{Calmon2,yamamoto, sankar,borz, gun,khodam,Khodam22,kostala,issa, makhdoumi, dwork1, calmon4, issajoon, asoo, Total, issa2}. In \cite{Calmon2}, fundamental limits of the privacy-utility trade-off, measuring the leakage using estimation-theoretic guarantees, are studied. In \cite{yamamoto}, a source coding problem with secrecy is studied. \begin{figure}[] \centering \includegraphics[scale = .15]{ISITsys.jpg} \caption{The two considered scenarios, where the agent has access only to $Y$, or to both $X$ and $Y$.} \label{ISITsys} \end{figure} Privacy-utility trade-offs considering equivocation as a measure of privacy and expected distortion as a measure of utility are studied in both \cite{yamamoto} and \cite{sankar}. In \cite{borz}, the problem of the privacy-utility trade-off considering mutual information as the measure of both privacy and utility given the Markov chain $X-Y-U$ is studied. It is shown that under the perfect privacy assumption, i.e., $\epsilon=0$, the privacy mechanism design problem can be reduced to a linear program. This work has been extended in \cite{gun} considering the privacy-utility trade-off with a rate constraint for the disclosed data. Moreover, in \cite{borz}, it has been shown that information can only be revealed if $P_{X|Y}$ is not invertible. In \cite{khodam}, we designed privacy mechanisms with a per-letter privacy criterion considering an invertible $P_{X|Y}$ where a small leakage is allowed. We generalized this result to a non-invertible leakage matrix in \cite{Khodam22}. Our problem here is closely related to \cite{kostala}, where the problem of \emph{secrecy by design} is studied. Similarly, the two scenarios are considered; however, the results are derived under the perfect secrecy assumption, i.e., no leakage is allowed, which corresponds to $\epsilon=0$. Bounds on secure decomposition have been derived using the Functional Representation Lemma, and new bounds on the privacy-utility trade-off for the two scenarios are derived. The bounds are tight when the private data is a deterministic function of the useful data. In the present work, we generalize the privacy problems considered in \cite{kostala} by relaxing the perfect privacy constraint and allowing some leakage. To this end, we extend the Functional Representation Lemma by relaxing the independence condition. Additionally, we derive bounds by also extending the Strong Functional Representation Lemma, introduced in \cite{kosnane}. Furthermore, in the special case of perfect privacy we find a new upper bound for the perfect privacy function by using the \emph{excess functional information} introduced in \cite{kosnane}. We then compare our new lower and upper bounds with the bounds found in \cite{kostala} when the leakage is zero. \section{System Model and Problem Formulation} \label{sec:system} Let $P_{XY}$ denote the joint distribution of discrete random variables $X$ and $Y$ defined on alphabets $\cal{X}$ and $\cal{Y}$. We assume that the cardinality $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite.
We represent $P_{XY}$ by a matrix defined on $\mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$ and marginal distributions of $X$ and $Y$ by vectors $P_X$ and $P_Y$ defined on $\mathbb{R}^{|\mathcal{X}|}$ and $\mathbb{R}^{|\mathcal{Y}|}$ given by the row and column sums of $P_{XY}$. We represent the leakage matrix $P_{X|Y}$ by a matrix defined on $\mathbb{R}^{|\mathcal{X}|\times|\cal{Y}|}$. For both design problems we use mutual information as utility and leakage measures. The privacy mechanism design problems for the two scenarios can be stated as follows \begin{align} g_{\epsilon}(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \ I(U;X)\leq\epsilon,} \end{array}}I(Y;U),\label{main2}\\ h_{\epsilon}(P_{XY})&=\sup_{\begin{array}{c} \substack{P_{U|Y,X}: I(U;X)\leq\epsilon,} \end{array}}I(Y;U).\label{main1} \end{align} The relation between $U$ and $Y$ is described by the kernel $P_{U|Y}$ defined on $\mathbb{R}^{|\mathcal{U}|\times|\mathcal{Y}|}$, furthermore, the relation between $U$ and the pair $(Y,X)$ is described by the kernel $P_{U|Y,X}$ defined on $\mathbb{R}^{|\mathcal{U}|\times|\mathcal{Y}|\times|\mathcal{X}|}$. The function $h_{\epsilon}(P_{XY})$ is used when the privacy mechanism has access to both the private data and the useful data. The function $g_{\epsilon}(P_{XY})$ is used when the privacy mechanism has only access to the useful data. Clearly, the relation between $h_{\epsilon}(P_{XY})$ and $g_{\epsilon}(P_{XY})$ can be stated as follows \begin{align} g_{\epsilon}(P_{XY})\leq h_{\epsilon}(P_{XY}). \end{align} In the following we study the case where $0\leq\epsilon< I(X;Y)$, otherwise the optimal solution of $h_{\epsilon}(P_{XY})$ or $g_{\epsilon}(P_{XY})$ is $H(Y)$ achieved by $U=Y$. \begin{remark} \normalfont For $\epsilon=0$, \eqref{main2} leads to the perfect privacy problem studied in \cite{borz}. It has been shown that for a non-invertible leakage matrix $P_{X|Y}$, $g_0(P_{XY})$ can be obtained by a linear program. \end{remark} \begin{remark} \normalfont For $\epsilon=0$, \eqref{main1} leads to the secret-dependent perfect privacy function $h_0(P_{XY})$, studied in \cite{kostala}, where upper and lower bounds on $h_0(P_{XY})$ have been derived. \end{remark} \section{Main Results}\label{sec:resul} In this section, we first recall the Functional Representation Lemma (FRL) \cite[Lemma~1]{kostala} and Strong Functional Representation Lemma (SFRL) \cite[Theorem~1]{kosnane} for discrete $X$ and $Y$. Then we extend them for correlated $X$ and $U$, i.e., $0\leq I(U;X)=\epsilon$ and we call them Extended Functional Representation Lemma (EFRL) and Extended Strong Functional Representation Lemma (ESFRL), respectively. 
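Before recalling these lemmas, it may help to make the quantities in \eqref{main2} concrete. The following minimal sketch (assuming \texttt{numpy}; the joint distribution and the kernel below are illustrative choices, not taken from any example in this paper) evaluates the utility $I(Y;U)$ and the leakage $I(U;X)$ for a fixed kernel $P_{U|Y}$ under the Markov chain $X-Y-U$.
\begin{verbatim}
import numpy as np

def mutual_information(P):
    # I(A;B) in bits for a joint distribution P with P[a, b] = Pr(A=a, B=b)
    Pa = P.sum(axis=1, keepdims=True)
    Pb = P.sum(axis=0, keepdims=True)
    mask = P > 0
    return float(np.sum(P[mask] * np.log2(P[mask] / (Pa @ Pb)[mask])))

# Illustrative joint distribution P_XY (rows: x, columns: y) and kernel P_{U|Y};
# these numbers are made up for illustration only.
P_XY = np.array([[0.30, 0.10, 0.05],
                 [0.05, 0.20, 0.30]])
P_UgY = np.array([[0.9, 0.2, 0.1],   # P_{U|Y}(u=0 | y)
                  [0.1, 0.8, 0.9]])  # P_{U|Y}(u=1 | y)

P_Y = P_XY.sum(axis=0)
P_YU = P_Y[:, None] * P_UgY.T   # P(y,u) = P_Y(y) P_{U|Y}(u|y)
P_XU = P_XY @ P_UgY.T           # leakage distribution under X - Y - U

print("utility I(Y;U) =", mutual_information(P_YU))
print("leakage I(U;X) =", mutual_information(P_XU))
\end{verbatim}
Searching over kernels $P_{U|Y}$ whose leakage stays below a given $\epsilon$ and keeping the one with the largest utility is exactly the optimization that \eqref{main2} formalizes.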
\begin{lemma}\label{lemma1} (Functional Representation Lemma \cite[Lemma~1]{kostala}): For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite, there exists a RV $U$ supported on $\mathcal{U}$ such that $X$ and $U$ are independent, i.e., we have \begin{align}\label{c1} I(U;X)=0, \end{align} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align} H(Y|U,X)=0,\label{c2} \end{align} and \begin{align} |\mathcal{U}|\leq |\mathcal{X}|(|\mathcal{Y}|-1)+1.\label{c3} \end{align} \end{lemma} \begin{lemma}\label{lemma2} (Strong Functional Representation Lemma \cite[Theorem~1]{kosnane}): For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite with $I(X,Y)< \infty$, there exists a RV $U$ supported on $\mathcal{U}$ such that $X$ and $U$ are independent, i.e., we have \begin{align*} I(U;X)=0, \end{align*} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align*} H(Y|U,X)=0, \end{align*} $I(X;U|Y)$ can be upper bounded as follows \begin{align*} I(X;U|Y)\leq \log(I(X;Y)+1)+4, \end{align*} and $ |\mathcal{U}|\leq |\mathcal{X}|(|\mathcal{Y}|-1)+2. $ \end{lemma} \begin{remark} By checking the proof in \cite[Th.~1]{kosnane}, the term $e^{-1}\log(e)+2+\log(I(X;Y)+e^{-1}\log(e)+2)$ can be used instead of $\log(I(X;Y)+1)+4$. \end{remark} \begin{lemma}\label{lemma3} (Extended Functional Representation Lemma): For any $0\leq\epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite, there exists a RV $U$ supported on $\mathcal{U}$ such that the leakage between $X$ and $U$ is equal to $\epsilon$, i.e., we have \begin{align*} I(U;X)= \epsilon, \end{align*} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align*} H(Y|U,X)=0, \end{align*} and $ |\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+1\right]\left[|\mathcal{X}|+1\right]. $ \end{lemma} \begin{proof} Let $\tilde{U}$ be the RV found by FRL and let $W=\begin{cases} X,\ \text{w.p}.\ \alpha\\ c,\ \ \text{w.p.}\ 1-\alpha \end{cases}$, where $c$ is a constant which does not belong to the support of $X$ and $Y$ and $\alpha=\frac{\epsilon}{H(X)}$. We show that $U=(\tilde{U},W)$ satisfies the conditions. We have \begin{align*} I(X;U)&=I(X;\tilde{U},W)\\&=I(\tilde{U};X)+I(X;W|\tilde{U})\\&\stackrel{(a)}{=}H(X)-H(X|\tilde{U},W)\\&=H(X)-\alpha H(X|\tilde{U},X)-(1-\alpha)H(X|\tilde{U},c)\\&=H(X)-(1-\alpha)H(X)=\alpha H(X)=\epsilon, \end{align*} where in (a) we used the fact that $X$ and $\tilde{U}$ are independent. Furthermore, \begin{align*} H(Y|X,U)&=H(Y|X,\tilde{U},W)\\&=\alpha H(Y|X,\tilde{U})+(1-\alpha)H(Y|X,\tilde{U},c)\\&=H(Y|X,\tilde{U})=0. \end{align*} In the last line we used the fact that $\tilde{U}$ is produced by FRL. 
\end{proof} \begin{lemma}\label{lemma4} (Extended Strong Functional Representation Lemma): For any $0\leq\epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite with $I(X,Y)< \infty$, there exists a RV $U$ supported on $\mathcal{U}$ such that the leakage between $X$ and $U$ is equal to $\epsilon$, i.e., we have \begin{align*} I(U;X)= \epsilon, \end{align*} $Y$ is a deterministic function of $(U,X)$, i.e., we have \begin{align*} H(Y|U,X)=0, \end{align*} $I(X;U|Y)$ can be upper bounded as follows \begin{align*} I(X;U|Y)\leq \alpha H(X|Y)+(1-\alpha)\left[ \log(I(X;Y)+1)+4\right], \end{align*} and $ |\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+2\right]\left[|\mathcal{X}|+1\right], $ where $\alpha =\frac{\epsilon}{H(X)}$. \end{lemma} \begin{proof} Let $\tilde{U}$ be the RV found by SFRL and $W$ be the same RV which is used to prove Lemma~\ref{lemma3}. It is sufficient to show that $I(X;U|Y)\leq \alpha H(X|Y)+(1-\alpha)\left[ \log(I(X;Y)+1)+4\right]$ since all other properties are already proved in Lemma~3. We have \begin{align*} I(X;\tilde{U},W|Y)&=I(X;\tilde{U}|Y)+I(X,W|\tilde{U},Y)\\&\stackrel{(a)}{=}I(X;\tilde{U}|Y)+\alpha H(X|\tilde{U},Y)\\&=I(X;\tilde{U}|Y)+\alpha(H(X|Y)-I(X;\tilde{U}|Y))\\&=\alpha H(X|Y)+(1-\alpha)I(X;\tilde{U}|Y)\\&\stackrel{(b)}{\leq} \!\alpha H(X|Y)\!+\!(1-\alpha)\!\left[ \log(I(X;Y)\!+\!1)\!+\!4\right], \end{align*} where in step (a) we used the fact that \begin{align*} I(X,W|\tilde{U},Y) &= H(X|\tilde{U},Y)-H(X|W,\tilde{U},Y)\\&= H(X|\tilde{U},Y)-(1-\alpha)H(X|\tilde{U},Y)\\&=\alpha H(X|\tilde{U},Y), \end{align*} and (b) follows since $\tilde{U}$ is produced by SFRL. \end{proof} In next two lemmas, we first find a lower bound on $H(U)$ where $U$ satisfies \eqref{c1} and \eqref{c2}, then we show that there exists a RV $U$ that satisfies \eqref{c1}, \eqref{c2} and has bounded entropy. The next two lemmas are generalizations of \cite[Lemma~2]{kostala} and \cite[Lemma~3]{kostala} for dependent $X$ and $U$. \begin{lemma}\label{koss} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, then if $U$ satisfies $I(X;U)\leq \epsilon$ and $H(Y|X,U)=0$, we have \begin{align*} H(U)\geq \max_{x\in{\mathcal{X}}} H(Y|X=x). \end{align*} \end{lemma} \begin{proof}The proof follows similar arguments as the proof of \cite[Lemma~3]{kostala}. The only difference is that we use $H(U)\geq H(U|X=x)$ instead of $H(U)=H(U|X=x)$. \end{proof} \begin{lemma} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, where $|\mathcal{X}|$ is finite and $|\mathcal{Y}|$ is finite or countably infinite, there exists RV $U$ such that it satisfies \eqref{c1}, \eqref{c2}, and \begin{align*} H(U)\leq \sum_{x\in\mathcal{X}}H(Y|X=x)+\epsilon+h(\alpha) \end{align*} with $\alpha=\frac{\epsilon}{H(X)}$ and $h(\cdot)$ denotes the binary entropy function. \end{lemma} \begin{proof} Let $U=(\tilde{U},W)$ where $W$ is the same RV used in Lemma~\ref{lemma3} and $\tilde{U}$ is produced by FRL which has the same construction as used in proof of \cite[Lemma~1]{kostala}. 
Thus, by using \cite[Lemma~2]{kostala} we have \begin{align*} H(\tilde{U})\leq \sum_{x\in\mathcal{X}} H(Y|X=x), \end{align*} therefore, \begin{align*} H(U)&=H(\tilde{U},W)\leq H(\tilde{U})+H(W),\\&\leq\sum_{x\in\mathcal{X}} H(Y|X=x)+H(W), \end{align*} where, \begin{align*} H(W)\! &= -(1-\alpha)\log(1-\alpha)\!-\!\!\sum_{x\in \mathcal{X}} \alpha P_X(x)\log(\alpha P_X(x)),\\&=h(\alpha)+\alpha H(X), \end{align*} which completes the proof. \end{proof} Before stating the next theorem we derive an expression for $I(Y;U)$. We have \begin{align} I(Y;U)&=I(X,Y;U)-I(X;U|Y),\nonumber\\&=I(X;U)+I(Y;U|X)-I(X;U|Y),\nonumber\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y).\label{key} \end{align} As argued in \cite{kostala}, \eqref{key} is an important observation to find lower and upper bounds for $h_{\epsilon}(P_{XY})$ and $g_{\epsilon}(P_{XY})$. \begin{theorem} For any $0\leq \epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, if $h_{\epsilon}(P_{XY})>\epsilon$ then we have \begin{align*} H(Y|X)>0. \end{align*} Furthermore, if $H(Y|X)-H(X|Y)=H(Y)-H(X)>0$, then \begin{align*} h_{\epsilon}(P_{XY})>\epsilon. \end{align*} \end{theorem} \begin{proof} For proving the first part, let $h_{\epsilon}(P_{XY})>\epsilon$. Using \eqref{key} we have \begin{align*} \epsilon&< h_{\epsilon}(P_{XY})\leq H(Y|X)+\sup_{U:I(X;U)\leq\epsilon} I(X;U)\\&=H(Y|X)+\epsilon \Rightarrow 0<H(Y|X). \end{align*} For the second part assume that $H(Y|X)-H(X|Y)>0$. Let $U$ be produced by EFRL. Thus, using the construction of $U$ as in Lemma~\ref{lemma3} we have $I(X;U)=\epsilon$ and $H(Y|X,U)=0$. Then by using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq \epsilon\!+\!H(Y|X)\!-\!H(X|Y)+H(X|Y,U)\\&\geq \epsilon\!+\!H(Y|X)\!-\!H(X|Y)>\epsilon. \end{align*} \end{proof} In the next theorem we provide a lower bound on $h_{\epsilon}(P_{XY})$. \begin{theorem}\label{th.1} For any $0\leq \epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align}\label{th2} h_{\epsilon}(P_{XY})\geq \max\{L_1^{\epsilon},L_2^{\epsilon},L_3^{\epsilon}\}, \end{align} where \begin{align*} L_1^{\epsilon} &= H(Y|X)-H(X|Y)+\epsilon=H(Y)-H(X)+\epsilon,\\ L_2^{\epsilon} &= H(Y|X)-\alpha H(X|Y)+\epsilon\\&\ -(1-\alpha)\left( \log(I(X;Y)+1)+4 \right),\\ L_3^{\epsilon} &= \epsilon\frac{H(Y)}{I(X;Y)}+g_0(P_{XY})\left(1-\frac{\epsilon}{I(X;Y)}\right), \end{align*} and $\alpha=\frac{\epsilon}{H(X)}$. The lower bound in \eqref{th2} is tight if $H(X|Y)=0$, i.e., $X$ is a deterministic function of $Y$. Furthermore, if the lower bound $L_1^{\epsilon}$ is tight then we have $H(X|Y)=0$. \end{theorem} \begin{proof} $L_3^{\epsilon}$ can be derived by using \cite[Remark~2]{shahab}, since we have $h_{\epsilon}(P_{XY})\geq g_{\epsilon}(P_{XY})\geq L_3^{\epsilon}$. For deriving $L_1^{\epsilon}$, let $U$ be produced by EFRL. Thus, using the construction of $U$ as in Lemma~\ref{lemma3} we have $I(X;U)=\epsilon$ and $H(Y|X,U)=0$. Then, using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq I(U;Y)\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y)\\&=\epsilon+H(Y|X)-H(X|Y)+H(X|Y,U)\\ &\geq\epsilon+H(Y|X)-H(X|Y)=L_1^{\epsilon}. \end{align*} For deriving $L_2^{\epsilon}$, let $U$ be produced by ESFRL. Thus, by using the construction of $U$ as in Lemma~\ref{lemma4} we have $I(X;U)=\epsilon$, $H(Y|X,U)=0$ and $I(X;U|Y)\leq \alpha H(X|Y)+(1-\alpha)\left(\log(I(X;Y)+1)+4\right)$.
Then, by using \eqref{key} we obtain \begin{align*} h_{\epsilon}(P_{XY})&\geq I(U;Y)\\&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\!-\!I(X;U|Y)\\&=\epsilon+H(Y|X)-I(X;U|Y)\\&\geq\epsilon+H(Y|X)-\alpha H(X|Y)\\&\ -(1-\alpha)\left(\log(I(X;Y)+1)+4\right)=L_2^{\epsilon}. \end{align*} Let $X$ be a deterministic function of $Y$. In this case, set $\epsilon=0$ in $L_1^{\epsilon}$ so that we obtain $h_0(P_{XY})\geq H(Y|X)$. Furthermore, by using \eqref{key} we have $h_0(P_{XY})\leq H(Y|X)$. Moreover, since $X$ is a deterministic function of $Y$, the Markov chain $X-Y-U$ holds and we have $h_0(P_{XY})=g_0(P_{XY})=H(Y|X)$. Therefore, $L_3^{\epsilon}$ can be rewritten as \begin{align*} L_3^{\epsilon}&=\epsilon\frac{H(Y)}{H(X)}+H(Y|X)\left(\frac{H(X)-\epsilon}{H(X)}\right),\\&=\epsilon\frac{H(Y)}{H(X)}+(H(Y)-H(X))\left(\frac{H(X)-\epsilon}{H(X)}\right),\\&=H(Y)-H(X)+\epsilon. \end{align*} $L_2^{\epsilon}$ can be rewritten as follows \begin{align*} L_2^{\epsilon}=H(Y|X)+\epsilon-(1-\frac{\epsilon}{H(X)})(\log(H(X)+1)+4). \end{align*} Thus, if $H(X|Y)=0$, then $L_1^{\epsilon}=L_3^{\epsilon}\geq L_2^{\epsilon}$. Now we show that $L_1^{\epsilon}=L_3^{\epsilon}$ is tight. By using \eqref{key} we have \begin{align*} I(U;Y) &\stackrel{(a)}{=} I(X;U)+H(Y|X)-H(Y|U,X),\\&\leq \epsilon+H(Y|X)=L_1^{\epsilon}=L_3^{\epsilon}, \end{align*} where (a) follows since $X$ is a deterministic function of $Y$, which leads to $I(X;U|Y)=0$. Thus, if $H(X|Y)=0$, the lower bound in \eqref{th2} is tight. Now suppose that the lower bound $L_1^{\epsilon}$ is tight and $X$ is not a deterministic function of $Y$. Let $\tilde{U}$ be produced by FRL using the construction of \cite[Lemma~1]{kostala}. As argued in the proof of \cite[Th.~6]{kostala}, there exists $x\in\cal X$ and $y_1,y_2\in\cal Y$ such that $P_{X|\tilde{U},Y}(x|\tilde{u},y_1)>0$ and $P_{X|\tilde{U},Y}(x|\tilde{u},y_2)>0$ which results in $H(X|Y,\tilde{U})>0$. Let $U=(\tilde{U},W)$ where $W$ is defined in Lemma~\ref{lemma3}. For such $U$ we have \begin{align*} H(X|Y,U)&=(1-\alpha)H(X|Y,\tilde{U})>0,\\ \Rightarrow I(U;Y)&\stackrel{(a)}{=}\epsilon+H(Y|X)-H(X|Y)+H(X|Y,U)\\ &>\epsilon+H(Y|X)-H(X|Y), \end{align*} where in (a) we used the fact that such $U$ satisfies $I(X;U)=\epsilon$ and $H(Y|X,U)=0$. The last line contradicts the tightness of $L_1^{\epsilon}$, since we can achieve larger values; thus, $X$ needs to be a deterministic function of $Y$. \end{proof} In the next corollary we let $\epsilon=0$ and derive a lower bound on $h_0(P_{XY})$. \begin{corollary}\label{kooni} Let $\epsilon=0$. Then, for any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} h_{0}(P_{XY})\geq \max\{L^0_1,L^0_2\}, \end{align*} where \begin{align*} L^0_1 &= H(Y|X)-H(X|Y)=H(Y)-H(X),\\ L^0_2 &= H(Y|X) -\left( \log(I(X;Y)+1)+4 \right).\\ \end{align*} \end{corollary} Note that the lower bound $L^0_1$ has been derived in \cite[Th.~6]{kostala}, while the lower bound $L^0_2$ is a new lower bound. In the next two examples we compare the bounds $L_1^{\epsilon}$, $L_2^{\epsilon}$ and $L_3^{\epsilon}$ in special cases where $I(X;Y)=0$ and $H(X|Y)=0$. \begin{example} Let $X$ and $Y$ be independent. Then, we have \begin{align*} L_1^{\epsilon}&=H(Y)-H(X)+\epsilon,\\ L_2^{\epsilon}&=H(Y)-\frac{\epsilon}{H(X)}H(X)+\epsilon-4(1-\frac{\epsilon}{H(X)}),\\ &=H(Y)-4(1-\frac{\epsilon}{H(X)}). \end{align*} Thus, \begin{align*} L_2^{\epsilon}-L_1^{\epsilon}&=H(X)-4+\epsilon(\frac{4}{H(X)}-1),\\ &=(H(X)-4)(1-\frac{\epsilon}{H(X)}).
\end{align*} Consequently, for independent $X$ and $Y$ if $H(X)>4$, then $L_2^{\epsilon}>L_1^{\epsilon}$, i.e., the second lower bound is dominant and $h_{\epsilon}(P_{X}P_{Y})\geq L_2^{\epsilon}$. \end{example} \begin{example} Let $X$ be a deterministic function of $Y$. As we have shown in Theorem~\ref{th.1}, if $H(X|Y)=0$, then \begin{align*} L_1^{\epsilon}&=L_3^{\epsilon}=H(Y|X)+\epsilon\\&\geq H(Y|X)+\epsilon-(1-\frac{\epsilon}{H(X)})(\log(H(X)+1)+4)\\&=L_2^{\epsilon}. \end{align*} Therefore, $L_1^{\epsilon}$ and $L_3^{\epsilon}$ become dominants. \end{example} In the next lemma we find a lower bound for $\sup_{U} H(U)$ where $U$ satisfies the leakage constraint $I(X;U)\leq\epsilon$, the bounded cardinality and $ H(Y|U,X)=0$. \begin{lemma}\label{koonimooni} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$, then if $U$ satisfies $I(X;U)\leq \epsilon$, $H(Y|X,U)=0$ and $|\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+1\right]\left[|\mathcal{X}|+1\right]$, we have \begin{align*} \sup_U H(U)\!&\geq\! \alpha H(Y|X)\!+\!(1-\alpha)(\max_{x\in\mathcal{X}}H(Y|X=x))\!\\&+\!h(\alpha)\!+\!\epsilon \geq H(Y|X)\!+\!h(\alpha)\!+\!\epsilon, \end{align*} where $\alpha=\frac{\epsilon}{H(X)}$ and $h(\cdot)$ corresponds to the binary entropy. \end{lemma} \begin{proof} Let $U=(\tilde{U},W)$ where $W=\begin{cases} X,\ \text{w.p}.\ \alpha\\ c,\ \ \text{w.p.}\ 1-\alpha \end{cases}$, and $c$ is a constant which does not belong to the support of $X$, $Y$ and $\tilde{U}$, furthermore, $\tilde{U}$ is produced by FRL. Using \eqref{key} and Lemma~\ref{koss} we have \begin{align} H(\tilde{U}|Y)&=H(\tilde{U})-H(Y|X)+I(X;\tilde{U}|Y)\nonumber\\ &\stackrel{(a)}{\geq} \max_{x\in\mathcal{X}} H(Y|X=x)-H(Y|X)\nonumber\\&\ +H(X|Y)-H(X|Y,\tilde{U}),\label{toole} \end{align} where (a) follows from Lemma~\ref{koss} with $\epsilon=0$. Furthermore, in the first line we used $I(X;\tilde{U})=0$ and $H(Y|\tilde{U},X)=0$. Using \eqref{key} we obtain \begin{align*} H(U)&\stackrel{(a)}{=}\!H(U|Y)\!+\!H(Y|X)\!-\!H(X|Y)\!+\!\epsilon\!+\!H(X|Y,U),\\ &\stackrel{(b)}{=}H(W|Y)+\alpha H(\tilde{U}|Y,X)+(1-\alpha)H(\tilde{U}|Y)\\ & \ \ \ +H(Y|X)\!-\!H(X|Y)\!+\!\epsilon+(1\!-\!\alpha)H(X|Y,\tilde{U}),\\ &\stackrel{(c)}{=}\!(\alpha-1)H(X|Y)\!+\!h(\alpha)\!+\alpha H(\tilde{U}|Y,X)+\!\epsilon\\& \ \ +\!(1-\alpha)H(\tilde{U}|Y)\!+\!H(Y|X)\!+\!(1\!-\!\alpha)H(X|Y,\tilde{U}), \\ & \stackrel{(d)}{\geq} (\alpha-1)H(X|Y)+h(\alpha)\!+\alpha H(\tilde{U}|Y,X)\\ \ \ \ &+\!\!(1-\alpha) (\max_{x\in\mathcal{X}}H(Y|X=x)-H(Y|X)+\!H(X|Y) \\& -\!H(X|Y,\tilde{U}))+H(Y|X)\!+\!\epsilon\!+\!(1-\alpha)H(X|Y,\tilde{U})\\ &=\alpha H(Y|X)+(1-\alpha)(\max_{x\in\mathcal{X}}H(Y|X=x))\\ &\ \ \ +h(\alpha)+\epsilon. \end{align*} In step (a) we used $I(U;X)=\epsilon$ and $H(Y|X,U)=0$ and in step (b) we used $H(U|Y)=H(W|Y)+H(\tilde{U}|Y,W)=H(W|Y)+\alpha H(\tilde{U}|Y,X)+(1-\alpha)H(\tilde{U}|Y)$ and $H(X|Y,U)=H(X|Y,\tilde{U},W)=(1-\alpha)H(X|Y,\tilde{U})$. In step (c) we used the fact that $P_{W|Y}=\begin{cases} \alpha P_{X|Y}(x|\cdot)\ &\text{if}\ w=x,\\ 1-\alpha \ &\text{if} \ w=c, \end{cases}$ since $P_{W|Y}(w=x|\cdot)=\frac{P_{W,Y}(w=x,\cdot)}{P_Y(\cdot)}=\frac{P_{Y|W}(\cdot|w=x)P_W(w=x)}{P_Y(\cdot)}=\frac{P_{Y|X}(\cdot|x)\alpha P_X(x)}{P_Y(\cdot)}=\alpha P_{X|Y}(x|\cdot)$, furthermore, $P_{W|Y}(w=c|\cdot)=1-\alpha$. Hence, after some calculation we obtain $H(W|Y)=h(\alpha)+\alpha H(X|Y)$. Finally, step (d) follows from \eqref{toole}. 
\end{proof} \begin{remark} The constraint $|\mathcal{U}|\leq \left[|\mathcal{X}|(|\mathcal{Y}|-1)+1\right]\left[|\mathcal{X}|+1\right]$ in Lemma~\ref{koonimooni} guarantees that $\sup_U H(U)<\infty$. \end{remark} In the next lemma we find an upper bound for $h_{\epsilon}(P_{XY})$. \begin{lemma}\label{goh} For any $0\leq\epsilon< I(X;Y)$ and pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} g_{\epsilon}(P_{XY})\leq h_{\epsilon}(P_{XY})\leq H(Y|X)+\epsilon. \end{align*} \end{lemma} \begin{proof} By using \eqref{key} and dropping the non-positive terms $-H(Y|U,X)$ and $-I(X;U|Y)$ we have \begin{align*} h_{\epsilon}(P_{XY})\leq H(Y|X)+\sup_{U:I(X;U)\leq\epsilon} I(U;X)\leq H(Y|X)+\epsilon. \end{align*} \end{proof} \begin{corollary} If $X$ is a deterministic function of $Y$, then by using Theorem~\ref{th.1} and Lemma~\ref{goh} we have \begin{align*} g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon, \end{align*} since in this case the Markov chain $X-Y-U$ holds. \end{corollary} \begin{lemma}\label{kir} Let $\bar{U}$ be an optimizer of $h_{\epsilon}(P_{XY})$. We have \begin{align*} H(Y|X,\bar{U})=0. \end{align*} \end{lemma} \begin{proof} The proof is similar to that of \cite[Lemma~5]{kostala}. Let $\bar{U}$ be an optimizer of $h_{\epsilon}(P_{XY})$ and assume that $H(Y|X,\bar{U})>0$. Since $\bar{U}$ is feasible, we have $I(X;\bar{U})\leq \epsilon.$ Let $U'$ be found by FRL with $(X,\bar{U})$ instead of $X$ in Lemma~\ref{lemma1} and the same $Y$, that is, $I(U';X,\bar{U})=0$ and $H(Y|X,\bar{U},U')=0$. Using \cite[Th.~5]{kostala} we have \begin{align*} I(Y;U')>0, \end{align*} since we assumed $H(Y|X,\bar{U})>0$. Let $U=(\bar{U},U')$. We first show that $U$ satisfies $I(X;U)\leq \epsilon$. We have \begin{align*} I(X;U)&=I(X;\bar{U},U')=I(X;\bar{U})+I(X;U'|\bar{U}),\\ &=I(X;\bar{U})+H(U'|\bar{U})-H(U'|\bar{U},X),\\ &=I(X;\bar{U})+H(U')-H(U')\leq \epsilon, \end{align*} where in the last line we used the fact that $U'$ is independent of the pair $(X,\bar{U})$. Finally, we show that $I(Y;U)>I(Y;\bar{U})$, which contradicts the optimality of $\bar{U}$. We have \begin{align*} I(Y;U)&=I(Y;\bar{U},U')=I(Y;U')+I(Y;\bar{U}|U'),\\ &=I(Y;U')+I(Y,U';\bar{U})-I(U';\bar{U})\\ &= I(Y;U')+I(Y;\bar{U})+I(U';\bar{U}|Y)-I(U';\bar{U})\\ &\stackrel{(a)}{\geq} I(Y;U')+I(Y;\bar{U})\\ &\stackrel{(b)}{>} I(Y;\bar{U}), \end{align*} where (a) follows since $I(U';\bar{U}|Y)\geq 0$ and $I(U';\bar{U})=0$. Step (b) follows since $I(Y;U')>0$. Thus, the obtained contradiction completes the proof. \end{proof} In the next theorem we generalize the equivalent statements in \cite[Th.~7]{kostala} for bounded leakage between $X$ and $U$. \begin{theorem} For any $\epsilon<I(X;Y)$, we have the following equivalences \begin{itemize} \item [i.] $g_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$, \item [ii.] $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$, \item [iii.] $h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item i $\Rightarrow$ ii: Using Lemma~\ref{goh} we have $H(Y|X)+\epsilon= g_{\epsilon}(P_{XY})\leq h_{\epsilon}(P_{XY}) \leq H(Y|X)+\epsilon$. Thus, $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$. \item ii $\Rightarrow$ iii: Let $\bar{U}$ be the optimizer of $g_{\epsilon}(P_{XY})$. Thus, the Markov chain $X-Y-\bar{U}$ holds and we have $I(X;\bar{U}|Y)=0$. Furthermore, since $g_{\epsilon}(P_{XY})=h_{\epsilon}(P_{XY})$ this $\bar{U}$ achieves $h_{\epsilon}(P_{XY})$.
Thus, by using Lemma~\ref{kir} we have $H(Y|\bar{U},X)=0$ and according to \eqref{key} \begin{align} I(\bar{U};Y)&=I(X;\bar{U})\!+\!H(Y|X)\!-\!H(Y|\bar{U},X)\nonumber\\& \ \ \ -\!I(X;\bar{U}|Y)\nonumber\\ &=I(X;\bar{U})\!+\!H(Y|X).\label{kir2} \end{align} We claim that $\bar{U}$ must satisfy $I(X;Y|\bar{U})>0$ and $I(X;\bar{U})=\epsilon$. For the first claim assume that $I(X;Y|\bar{U})=0$, hence the Markov chain $X-\bar{U}-Y$ holds. Using $X-\bar{U}-Y$ and $H(Y|\bar{U},X)=0$ we have $H(Y|\bar{U})=0$, hence $Y$ is a deterministic function of $\bar{U}$ and $I(Y;\bar{U})=H(Y)$. Using \eqref{kir2} \begin{align*} H(Y)&=I(Y;\bar{U})=I(X;\bar{U})\!+\!H(Y|X),\\ &\Rightarrow I(X;\bar{U})=I(X;Y). \end{align*} The last line is a contradiction since by assumption we have $I(X;\bar{U})\leq \epsilon < I(X;Y)$. Thus, $I(X;Y|\bar{U})>0$. For proving the second claim assume that $I(X;\bar{U})=\epsilon_1<\epsilon$. Let $U=(\bar{U},W)$ where $W=\begin{cases} Y,\ \text{w.p.}\ \alpha\\ c,\ \ \text{w.p.}\ 1-\alpha \end{cases}$, and $c$ is a constant such that $c\notin \mathcal{X}\cup \mathcal{Y}\cup \bar{\mathcal{U}}$ and $\alpha=\frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}$. We show that $\frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}<1$. By the assumption we have \begin{align*} \frac{\epsilon-\epsilon_1}{I(X;Y|\bar{U})}<\frac{I(X;Y)-I(X;\bar{U})}{I(X;Y|\bar{U})}\stackrel{(a)}{\leq}1, \end{align*} where step (a) follows since $I(X;Y)-I(X;\bar{U})-I(X;Y|\bar{U})=I(X;Y)-I(X;Y,\bar{U})\leq 0$. It can be seen that such $U$ satisfies $H(Y|X,U)=0$ and $I(X;U|Y)=0$ since \begin{align*} H(Y|X,U)&=\alpha H(Y|X,\bar{U},Y)\\ & \ \ \ + (1-\alpha)H(Y|X,\bar{U})=0,\\ I(X;U|Y)&= H(X|Y)-H(X|Y,\bar{U},W)\\&=H(X|Y)\!-\!\alpha H(X|Y,\bar{U})\!\\& \ \ \ -\!(1\!-\!\alpha) H(X|Y,\bar{U})\\&=H(X|Y)-H(X|Y)=0, \end{align*} where in deriving the last line we used the Markov chain $X-Y-\bar{U}$. Furthermore, \begin{align*} I(X;U)&=I(X;\bar{U},W)=I(X;\bar{U})+I(X;W|\bar{U})\\ &=I(X;\bar{U})+\alpha H(X|\bar{U})-\alpha H(X|\bar{U},Y)\\ &=I(X;\bar{U})+\alpha I(X;Y|\bar{U})\\ &=\epsilon_1+\epsilon-\epsilon_1=\epsilon, \end{align*} and \begin{align*} I(Y;U)&=I(X;U)\!+\!H(Y|X)\!-\!H(Y|U,X)\\& \ \ \ -\!I(X;U|Y)\\ &=\epsilon+H(Y|X). \end{align*} Thus, if $I(X;\bar{U})=\epsilon_1<\epsilon$ we can substitute $\bar{U}$ by $U$ for which $I(U;Y)>I(\bar{U};Y)$. This is a contradiction and we conclude that $I(X;\bar{U})=\epsilon$, which proves the second claim. Hence, \eqref{kir2} can be rewritten as \begin{align*} I(\bar{U};Y)=\epsilon+H(Y|X). \end{align*} As a result $h_\epsilon(P_{XY})=\epsilon+H(Y|X)$ and the proof is completed. \item iii $\Rightarrow$ i: Let $\bar{U}$ be the optimizer of $h_{\epsilon}(P_{XY})$ and $h_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. Using Lemma~\ref{kir} we have $H(Y|\bar{U},X)=0$. By using \eqref{key} we must have $I(X;\bar{U}|Y)=0$ and $I(X;\bar{U})=\epsilon$. We conclude that for this $\bar{U}$, the Markov chain $X-Y-\bar{U}$ holds and as a result $\bar{U}$ achieves $g_{\epsilon}(P_{XY})$ and we have $g_{\epsilon}(P_{XY})=H(Y|X)+\epsilon$. \end{itemize} \end{proof} \subsection*{Special case: $\epsilon=0$ (Independent $X$ and $U$)} In this subsection we derive new lower and upper bounds for $h_0(P_{XY})$ and compare them with the previous bounds found in \cite{kostala}.
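As a purely illustrative aid (not part of the derivation), the following Python sketch evaluates the quantities entering the $\epsilon=0$ lower bounds $L_1^0$ and $L_2^0$ of Corollary~\ref{kooni} together with the trivial upper bound $H(Y|X)$; the joint distribution $P_{XY}$ used here is an arbitrary assumption and entropies are computed in bits.
\begin{verbatim}
import numpy as np

def H(p):
    # entropy (in bits) of a probability vector, ignoring zero entries
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# toy joint distribution P_XY (rows indexed by x, columns by y); assumed values
P_XY = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])
P_X, P_Y = P_XY.sum(axis=1), P_XY.sum(axis=0)

H_Y_given_X = sum(P_X[i] * H(P_XY[i, :] / P_X[i]) for i in range(len(P_X)))
I_XY = H(P_Y) - H_Y_given_X

L1_0 = H(P_Y) - H(P_X)                        # L_1^0 = H(Y) - H(X)
L2_0 = H_Y_given_X - (np.log2(I_XY + 1) + 4)  # L_2^0
print("L1_0:", L1_0, "L2_0:", L2_0, "upper bound H(Y|X):", H_Y_given_X)
\end{verbatim}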
We first state the definition of \emph{excess functional information} introduced in \cite{kosnane} as \begin{align*} \psi(X\rightarrow Y)=\inf_{\begin{array}{c} \substack{P_{U|Y,X}: I(U;X)=0,\ H(Y|X,U)=0} \end{array}}I(X;U|Y), \end{align*} and the lower bound on $\psi(X\rightarrow Y)$ derived in \cite[Prop.~1]{kosnane} is given in the next lemma. Since this lemma is useful for deriving our upper bound we state it here. \begin{lemma}\cite[Prop.~1]{kosnane}\label{haroomi} For discrete $Y$ we have \begin{align} &\psi(X\rightarrow Y)\geq\nonumber\\& -\sum_{y\in\mathcal{Y}}\!\int_{0}^{1}\!\!\! \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\nonumber\\&-I(X;Y),\label{koonsher} \end{align} where for $|\mathcal{Y}|=2$ the equality holds and it is attained by the Poisson functional representation in \cite{kosnane}. \end{lemma} \begin{remark} The lower bound in \eqref{koonsher} can be negative. For instance, let $Y$ be a deterministic function of $X$, i.e., $H(Y|X)=0$. In this case we have $-\sum_{y\in\mathcal{Y}}\!\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt-I(X;Y)=-I(X;Y)=-H(Y).$ \end{remark} In the next theorem lower and upper bounds on $h_0(P_{XY})$ are provided. \begin{theorem} For any pair of RVs $(X,Y)$ distributed according to $P_{XY}$ supported on alphabets $\mathcal{X}$ and $\mathcal{Y}$ we have \begin{align*} \max\{L^0_1,L^0_2\} \leq h_{0}(P_{XY})\leq \min\{U^0_1,U^0_2\}, \end{align*} where $L^0_1$ and $L^0_2$ are defined in Corollary~\ref{kooni} and \begin{align*} &U^0_1 = H(Y|X),\\ &U^0_2 = H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} Furthermore, if $|\mathcal{Y}|=2$, then we have \begin{align*} h_{0}(P_{XY}) = U^0_2. \end{align*} \end{theorem} \begin{proof} $L^0_1$ and $L^0_2$ can be obtained by letting $\epsilon=0$ in Theorem~\ref{th.1}. $U^0_1$, which has been derived in \cite[Th.~7]{kostala}, can be obtained by \eqref{key}. $U^0_2$ can be derived as follows. Since $X$ and $U$ are independent, \eqref{key} can be rewritten as \begin{align*} I(Y;U)=H(Y|X)-H(Y|U,X)-I(X;U|Y), \end{align*} thus, using Lemma~\ref{haroomi} \begin{align*} h_0(P_{XY})&\leq H(Y|X)-\inf_{H(Y|U,X)=0,\ I(X;U)=0}I(X;U|Y)\\ &= H(Y|X)-\psi(X\rightarrow Y)\\&\leq H(Y|X)\\&+\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} For $|\mathcal{Y}|=2$ using Lemma~\ref{haroomi} we have $\psi(X\rightarrow Y)=-\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt-I(X;Y)$ and let $\bar{U}$ be the RV that attains this bound. Thus, \begin{align*} I(\bar{U};Y)&=H(Y|X)\\&+\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y). \end{align*} Therefore, $\bar{U}$ attains $U_2^0$ and $h_0(P_{XY})=U_2^0$. \end{proof} As mentioned before the upper bound $U^0_1$ has been derived in \cite[Th.~7]{kostala}. The upper bound $U^0_2$ is a new upper bound. \begin{lemma}\label{ankhar} If $X$ is a deterministic function of $Y$, i.e., $H(X|Y)=0$, we have \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&+I(X;Y)=0.
\end{align*} \end{lemma} \begin{proof} Since $X$ is a deterministic function of $Y$, for any $y\in \cal Y$ we have \begin{align*} P_{Y|X}(y|x)=\begin{cases} \frac{P_Y(y)}{P_X(x)},\ &x=f(y)\\ 0, \ &\text{else} \end{cases}, \end{align*} thus, \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\ &= \sum_{y\in\mathcal{Y}}\!\int_{0}^{\frac{P_Y(y)}{P_X(x=f(y))}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\ &= \sum_{y\in\mathcal{Y}} \frac{P_Y(y)}{\mathbb{P}_X\{x=f(y)\}}\mathbb{P}_X\{x=f(y)\}\log(\mathbb{P}_X\{x=f(y)\})\\ &= \sum_{y\in\mathcal{Y}} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})\\ &= \sum_{x\in\mathcal{X}} P_X(x)\log(P_X(x))=-H(X)=-I(X;Y), \end{align*} where in the last line we used $\sum_{y\in\mathcal{Y}} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})=\sum_{x\in\mathcal{X}} \sum_{y:x=f(y)} P_Y(y)\log(\mathbb{P}_X\{x=f(y)\})=\sum_{x\in\mathcal{X}} P_X(x)\log(P_X(x))$. \end{proof} \begin{remark} According to Lemma~\ref{ankhar}, if $X$ is a deterministic function of $Y$, then we have $U_2^0=U_1^0$. \end{remark} In the next example we compare the bounds $U^0_1$ and $U^0_2$ for a $BSC(\theta)$. \begin{figure}[] \centering \includegraphics[scale = .2]{kirtodahan.eps} \caption{Comparing the upper bounds $U_1^0$ and $U_2^0$ for $BSC(\theta)$. The blue curve illustrates the upper bound found in \cite{kostala} and the red line shows the upper bound found in this work.} \label{fig:kir} \end{figure} \begin{example}(Binary Symmetric Channel) Let the binary RVs $X\in\{0,1\}$ and $Y\in\{0,1\}$ have the following joint distribution \begin{align*} P_{XY}(x,y)=\begin{cases} \frac{1-\theta}{2}, \ &x=y\\ \frac{\theta}{2}, \ &x\neq y \end{cases}, \end{align*} where $\theta<\frac{1}{2}$. We obtain \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&=\int_{\theta}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(0|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(0|X)\geq t\})dt\\&+\int_{\theta}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(1|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(1|X)\geq t\})dt\\&=(1-2\theta)\left(P_X(0)\log (P_X(0))+P_X(1)\log (P_X(1))\right)\\&=-(1-2\theta)H(X)=-(1-2\theta). \end{align*} Thus, \begin{align*} U_2^0&=H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y)\\&=h(\theta)-(1-2\theta)+(1-h(\theta))=2\theta,\\ U_1^0&=H(Y|X)=h(\theta), \end{align*} where $h(\cdot)$ corresponds to the binary entropy function. As shown in Fig.~\ref{fig:kir}, we have \begin{align*} h_{0}(P_{XY})\leq U^0_2\leq U^0_1. \end{align*} \end{example} \begin{example}(Erasure Channel) Let the RVs $X\in\{0,1\}$ and $Y\in\{0,e,1\}$ have the following joint distribution \begin{align*} P_{XY}(x,y)=\begin{cases} \frac{1-\theta}{2}, \ &x=y\\ \frac{\theta}{2}, \ &y=e\\ 0, \ & \text{else} \end{cases}, \end{align*} where $\theta<\frac{1}{2}$. We have \begin{align*} &\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt\\&=\int_{0}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(0|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(0|X)\geq t\})dt\\&+\int_{0}^{1-\theta} \mathbb{P}_X\{P_{Y|X}(1|X)\geq t\}\log (\mathbb{P}_X\{P_{Y|X}(1|X)\geq t\})dt\\&=-(1-\theta)H(X)=-(1-\theta).
\end{align*} Thus, \begin{align*} U_2^0&=H(Y|X) +\sum_{y\in\mathcal{Y}}\int_{0}^{1} \mathbb{P}_X\{P_{Y|X}(y|X)\geq t\}\times\\&\log (\mathbb{P}_X\{P_{Y|X}(y|X)\geq t\})dt+I(X;Y)\\&=h(\theta)-(1-\theta)+h(\theta)+1-\theta-h(\theta) \\&=h(\theta),\\ U_1^0&=H(Y|X)=h(\theta). \end{align*} Hence, in this case, $U_1^0=U_2^0=h(\theta)$. Furthermore, in \cite[Example~8]{kostala}, it has been shown that for this pair of $(X,Y)$ we have $g_0(P_{XY})=h_0(P_{XY})=h(\theta)$. \end{example} In \cite[Prop.~2]{kosnane} it has been shown that for every $\alpha\geq 0$, there exist a pair $(X,Y)$ such that $I(X;Y)\geq \alpha$ and \begin{align} \psi(X\rightarrow Y)\geq \log(I(X;Y)+1)-1.\label{kir1} \end{align} \begin{lemma}\label{choon} Let $(X,Y)$ be as in \cite[Prop.~2]{kosnane}, i.e. $(X,Y)$ satisfies \eqref{kir1}. Then for such pair we have \begin{align*} &H(Y|X)-\log(I(X;Y)+1)-4\\&\leq h_0(P_{XY})\leq H(Y|X)-\log(I(X;Y)+1)+1. \end{align*} \begin{proof} The lower bound follows from Corollary~1. For the upper bound, we use \eqref{key} and \eqref{kir1} so that \begin{align*} I(U;Y)&\leq H(Y|X)-\psi(X\rightarrow Y)\\ &\leq H(Y|X)-\log(I(X;Y)+1)+1. \end{align*} \end{proof} \end{lemma} \begin{remark} From Lemma~\ref{choon} and Corollary~1 we can conclude that the lower bound $L_2^0=H(Y|X)-(\log(I(X;Y)+1)+4)$ is tight within $5$ bits. \end{remark} \section{conclusion}\label{concull} It has been shown that by extending the FRL and SFRL, upper bound for $h_{\epsilon}(P_{XY})$ and $g_{\epsilon}(P_{XY})$ and lower bound for $h_{\epsilon}(P_{XY})$ can be derived. If $X$ is a deterministic function of $Y$, then the bounds are tight. Moreover, a necessary condition for an optimizer of $h_{\epsilon}(P_{XY})$ has been obtained. In the case of perfect privacy, new lower and upper bounds are derived using ESFRL and excess functional information. In an example it has been shown that new bounds are dominant compared to the previous bounds. \begin{proposition}\label{prop1111} It suffices to consider $U$ such that $|\mathcal{U}|\leq|\mathcal{Y}|$. Furthermore, a maximum can be used in \eqref{privacy} since the corresponding supremum is achieved. \end{proposition} \begin{proof} The proof is based on Fenchel-Eggleston-Carath\'{e}odory's Theorem \cite{el2011network}. For more details see \cite{Amir2}. \end{proof} \section{Privacy Mechanism Design}\label{result} In this section we show that the problem defined in \eqref{1} can be approximated by a quadratic problem. Furthermore, it is shown that the quadratic problem can be converted to a standard linear program. By using \eqref{local}, we can rewrite the conditional distribution $P_{X|U=u}$ as a perturbation of $P_X$. Thus, for any $u\in\mathcal{U}$, similarly as in \cite{Amir,borade, huang}, we can write $P_{X|U=u}=P_X+\epsilon\cdot J_u$, where $J_u\in\mathbb{R}^{|\mathcal{X}|}$ is the perturbation vector that has the following three properties: \begin{align} \sum_{x\in\mathcal{X}} J_u(x)=0,\ \forall u,\label{prop1}\\ \sum_{u\in\mathcal{U}} P_U(u)J_u(x)=0,\ \forall x\label{prop2},\\ \sum_{x\in\mathcal{X}}|J_u(x)|\leq 1,\ \forall u\label{prop3}. \end{align} The first two properties ensure that $P_{X|U=u}$ is a valid probability distribution and the third property follows from \eqref{local}. In the following lemma we derive two important properties of Null$(P_{X|Y})$. \begin{lemma}\label{null} Let $\beta$ be a vector in $\mathbb{R}^{|\mathcal{X}|}$. 
Then, $\beta\in\text{Null}(P_{X|Y})$ if and only if $\beta\in\text{Null}(M)$, where $M\in \mathbb{R}^{|\mathcal{X}|\times|\mathcal{Y}|}$ is constructed as follows: Let $V$ be the matrix of right eigenvectors of $P_{X|Y}$, i.e., $P_{X|Y}=U\Sigma V^T$ and $V=[v_1,\ v_2,\ ... ,\ v_{|\mathcal{Y}|}]$, then $M$ is defined as \begin{align*} M \triangleq \left[v_1,\ v_2,\ ... ,\ v_{|\mathcal{X}|}\right]^T. \end{align*} Furthermore, if $\beta\in\text{Null}(P_{X|Y})$, then $1^T\beta=0$. \end{lemma} \begin{proof} Since the rank of $P_{X|Y}$ is $|\mathcal{X}|$, every vector $\beta$ in Null$(P_{X|Y})$ can be written as linear combination of $\{v_{|\mathcal{X}|+1}$, ... ,$v_{|\mathcal{Y}|}\}$ and since each vector in $\{v_{|\mathcal{X}|+1}$, ... ,$v_{|\mathcal{Y}|}\}$ is orthogonal to the rows of $M$ we conclude that $\beta\in\text{Null}(M)$. If $\beta\in\text{Null}(M)$, $\beta$ is orthogonal to the vectors $\{v_1$,...,$v_{|\mathcal{X}|}\}$ and thus, $\beta\in \text{linspan}\{v_{|\mathcal{X}|+1},...,v_{|\mathcal{Y}|}\}=\text{Null}(P_{X|Y})$. Furthermore, for every $\beta\in\text{Null}(P_{X|Y})$, we can write $1^T\beta=1^TP_{X|Y}\beta=0.$ \end{proof} The next lemma shows that if the Markov chain $X-Y-U$ holds and $J_u$ satisfies the three properties \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, then $P_{Y|U=u}$ lies in a convex polytope. \begin{lemma}\label{null2} For sufficiently small $\epsilon>0$, for every $u\in\mathcal{U}$, the vector $P_{Y|U=u}$ belongs to the convex polytope $\mathbb{S}_{u}$ defined as \begin{align*} \mathbb{S}_{u} \triangleq \left\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix},\ y\geq 0\right\}, \end{align*} where $\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\in\mathbb{R}^{|\cal Y|}$ and $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}. \end{lemma} \begin{proof} By using the Markov chain $X-Y-U$ we have \begin{align} \epsilon J_u=P_{X|U=u}-P_X=P_{X|Y}[P_{Y|U=u}-P_Y].\label{jigar} \end{align} Let $\alpha=P_{Y|U=u}-P_Y$. By partitioning $\alpha$ in two parts $[\alpha_1\ \alpha_2]^T$ with sizes $|\cal X|$ and $|\cal Y|-|\cal X|$, respectively, from \eqref{jigar} we obtain \begin{align*} P_{X|Y_1}\alpha_1+P_{X|Y_2}\alpha_2=\epsilon J_u, \end{align*} which implies using invertibility of $P_{X|Y_1}$ that \begin{align*} \alpha_1=\epsilon P_{X|Y_1}^{-1}J_u-P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2, \end{align*} thus, \begin{align*} P_{Y|U=u} = P_Y+\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix} +\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}. \end{align*} Note that the vector $\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}$ belongs to Null$(P_{X|Y})$ since we have \begin{align*} P_{X|Y}\!\!\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}&\!\!=\![P_{X|Y_1}\ P_{X|Y_2}]\!\!\begin{bmatrix} -P_{X|Y_1}^{-1}P_{X|Y_2}\alpha_2\\\alpha_2 \end{bmatrix}\\&\!\!=\!-P_{X|Y_1}\!P_{X|Y_1}^{-1}\!P_{X|Y_2}\alpha_2\!+\!\!P_{X|Y_2}\alpha_2\\&\!\!=\bm{0}\in\mathbb{R}^{|\cal X|}, \end{align*} and thus by Lemma~\ref{null} it belongs to Null$(M)$. So, we have \begin{align*} MP_{Y|U=u} = MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}. \end{align*} Consequently, if the Markov chain $X-Y-U$ holds and the perturbation vector $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, then we have $P_{Y|U=u}\in\mathbb{S}_u$. \end{proof} \begin{lemma}\label{44} Any vector $\alpha$ in $\mathbb{S}_u$ is a standard probability vector. 
Also, for any pair $(U,Y)$, for which $P_{Y|U=u}\in\mathbb{S}_u$, $\forall u\in\mathcal{U}$ with $J_u$ satisfying \eqref{prop1}, \eqref{prop2}, and \eqref{prop3}, we can have $X-Y-U$ and $P_{X|U=u}-P_X=\epsilon\cdot J_u$. \end{lemma} \begin{proof} The proof is provided in Appendix~A. \end{proof} \begin{theorem} We have the following equivalency \begin{align}\label{equi} \min_{\begin{array}{c} \substack{P_{U|Y}:X-Y-U\\ \|P_{X|U=u}-P_X\|_1\leq\epsilon,\ \forall u\in\mathcal{U}} \end{array}}\! \! \! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U) =\!\!\!\!\!\!\!\!\! \min_{\begin{array}{c} \substack{P_U,\ P_{Y|U=u}\in\mathbb{S}_u,\ \forall u\in\mathcal{U},\\ \sum_u P_U(u)P_{Y|U=u}=P_Y,\\ J_u \text{satisfies}\ \eqref{prop1},\ \eqref{prop2},\ \text{and}\ \eqref{prop3}} \end{array}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!H(Y|U). \end{align} \end{theorem} \begin{proof} The proof follows directly from Lemmas~\ref{null2} and~\ref{44}. \end{proof} In next proposition we discuss how $H(Y|U)$ is minimized over $P_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$. \begin{proposition}\label{4} Let $P^*_{Y|U=u},\ \forall u\in\mathcal{U}$ be the minimizer of $H(Y|U)$ over the set $\{P_{Y|U=u}\in\mathbb{S}_u,\ \forall u\in\mathcal{U}|\sum_u P_U(u)P_{Y|U=u}=P_Y\}$, then $P^*_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$ must belong to extreme points of $\mathbb{S}_u$. \end{proposition} \begin{proof} The proof builds on the concavity property of the entropy. For more details see \cite{Amir2}. \end{proof} In order to solve the minimization problem found in \eqref{equi}, we propose the following procedure: In the first step, we find the extreme points of $\mathbb{S}_u$, which is an easy task. Since the extreme points of the sets $\mathbb{S}_u$ have a particular geometry, in the second step, we locally approximate the conditional entropy $H(Y|U)$ so that we end up with a quadratic problem with quadratic constraints over $P_U(.)$ and $J_u$ that can be easily solved.\subsection{Finding $\mathbb{S}_u^*$ (Extreme points of $\mathbb{S}_u$)}\label{seca} In this part we find the extreme points of $\mathbb{S}_u$ for each $u\in\mathcal{U}$. As argued in \cite{deniz6}, the extreme points of $\mathbb{S}_u$ are the basic feasible solutions of $\mathbb{S}_u$. Basic feasible solutions of $\mathbb{S}_u$ can be found in the following manner. Let $\Omega$ be the set of indices which corresponds to $|\mathcal{X}|$ linearly independent columns of $M$, i.e., $|\Omega|=|\mathcal{X}|$ and $\Omega\subset \{1,..,|\mathcal{Y}|\}$. Let $M_{\Omega}\in\mathbb{R}^{|\mathcal{X}|\times|\mathcal{X}|}$ be the submatrix of $M$ with columns indexed by the set $\Omega$. It can be seen that $M_{\Omega}$ is an invertible matrix since $rank(M)=|\cal X|$. Then, if all elements of the vector $M_{\Omega}^{-1}(MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})$ are non-negative, the vector $V_{\Omega}^*\in\mathbb{R}^{|\mathcal{Y}|}$, which is defined in the following, is a basic feasible solution of $\mathbb{S}_u$. Assume that $\Omega = \{\omega_1,..,\omega_{|\mathcal{X}|}\}$, where $\omega_i\in\{1,..,|\mathcal{Y}|\}$ and all elements are arranged in an increasing order. The $\omega_i$-th element of $V_{\Omega}^*$ is defined as $i$-th element of $M_{\Omega}^{-1}(MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})$, i.e., for $1\leq i \leq |\mathcal{X}|$ we have \begin{align}\label{defin} V_{\Omega}^*(\omega_i)= (M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})(i). 
\end{align} Other elements of $V_{\Omega}^*$ are set to be zero. In the next proposition we show two properties of each vector inside $V_{\Omega}^*$. \begin{proposition}\label{kos} Let $\Omega\subset\{1,..,|\mathcal{Y}|\},\ |\Omega|=|\cal X|$. For every $\Omega$ we have $1^T\left(M_{\Omega}^{-1}MP_Y \right)=1$. Furthermore, $1^T\left(M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}\right)=0$. \end{proposition} \begin{proof} The proof is provided in Appendix~B. \end{proof} Next we define a set $\mathcal{H}_{XY}$ which includes all leakage matrices $P_{X|Y}$ having a property that allows us to approximate the conditional entropy $H(Y|U)$. \begin{definition} Let $\mathcal{P}(\cal X)$ be the standard probability simplex defined as $\mathcal{P}(\mathcal {X})=\{x\in\mathbb{R}^{|\mathcal{X}|}|1^Tx=1,\ x(i) \geq 0,\ 1\leq i\leq |\cal X|\}$ and let $\mathcal{P'}(\cal X)$ be the subset of $\mathcal{P}(\cal X)$ defined as $\mathcal{P'}(\mathcal {X})=\{x\in\mathbb{R}^{|\mathcal{X}|}|1^Tx=1,\ x(i) > 0,\ 1\leq i\leq |\cal X|\}$. For every $\Omega\subset\{1,..,|\cal Y|\}$, $|\Omega|=|\mathcal{X}|$, $\mathcal{H}_{XY}$ is the set of all leakage matrices defined as follows \begin{align*} \mathcal{H}_{XY} = \{P_{X|Y}\!\in\! \mathbb{R}^{|\cal X|\times |\cal Y|}|\text{if}\ t_{\Omega}\in\mathcal{P}(\mathcal {X})\Rightarrow t_{\Omega}\in\mathcal{P'}(\mathcal {X})\}, \end{align*} where $t_{\Omega}=M_{\Omega}^{-1}MP_Y$. In other words, for any $P_{X|Y}\in\mathcal{H}_{XY}$, if $t_{\Omega}$ is a probability vector, then it consists only of nonzero elements for every $\Omega$. \end{definition} \begin{example} Let $P_{X|Y}=\begin{bmatrix} 0.3 &0.8 &0.5 \\0.7 &0.2 &0.5 \end{bmatrix} $. By using SVD of $P_{X|Y}$, $M$ can be found as $\begin{bmatrix} -0.5556 &-0.6016 &0.574 \\0.6742 &-0.73 &0.1125 \end{bmatrix}$. Possible sets $\Omega$ are $\Omega_1=\{1,2\},\ \Omega_2=\{1,3\},$ and $\Omega_3=\{2,3\}$. By calculating $M_{\Omega_{i}}^{-1}MP_Y$ for $i\in\{1,2,3\}$ we have $M_{\Omega_{1}}^{-1}MP_Y =\begin{bmatrix} 0.7667\\0.2333 \end{bmatrix},\ M_{\Omega_{2}}^{-1}MP_Y =\begin{bmatrix} 0.4167\\0.5833 \end{bmatrix}, M_{\Omega_{3}}^{-1}MP_Y =\begin{bmatrix} -0.2778\\1.2778 \end{bmatrix}.$ Since the first two vectors are probability vectors with positive elements and the third is not a probability vector, $P_{X|Y}\in \mathcal{H}_{XY}$. \end{example} \begin{remark}\label{ann} For $P_{X|Y}\in\mathcal{H}_{XY}$, $M_{\Omega}^{-1}MP_Y>0$ implies $M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}>0$; furthermore, if the vector $M_{\Omega}^{-1}MP_Y$ contains a negative element, the vector $M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$ contains a negative element as well (i.e., not a feasible distribution), since we assumed $\epsilon$ is sufficiently small.
%Thus, the set $\Omega$ corresponds to a basic feasible solution.
\end{remark} From Proposition~\ref{kos} and Remark~\ref{ann}, we conclude that each basic feasible solution of $\mathbb{S}_u$, i.e., $V_{\Omega}^*$, can be written as the sum of a standard probability vector (built by $M_{\Omega}^{-1}MP_Y$) and a perturbation vector (built by $\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$). This is the key property to locally approximate $H(Y|U)$.
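To make the construction of the basic feasible solutions concrete, the following Python sketch (an illustration only) reproduces the computation of $t_{\Omega}=M_{\Omega}^{-1}MP_Y$ for the leakage matrix of the example above; since $P_Y$ is not specified there, the vector used below is an assumed toy choice. Note that the sign ambiguity of the SVD cancels in $M_{\Omega}^{-1}M$.
\begin{verbatim}
import numpy as np
from itertools import combinations

P_XgY = np.array([[0.3, 0.8, 0.5],
                  [0.7, 0.2, 0.5]])   # leakage matrix P_{X|Y} of the example
P_Y = np.array([0.5, 0.25, 0.25])     # assumed toy P_Y (not taken from the paper)

# M = [v_1,...,v_{|X|}]^T: the first |X| right singular vectors of P_{X|Y}
_, _, Vt = np.linalg.svd(P_XgY)
M = Vt[:P_XgY.shape[0], :]

for Omega in combinations(range(P_XgY.shape[1]), P_XgY.shape[0]):
    M_Om = M[:, list(Omega)]
    t = np.linalg.solve(M_Om, M @ P_Y)   # t_Omega = M_Omega^{-1} M P_Y
    # by the Proposition above the entries of t_Omega sum to one
    print(Omega, t, "nonnegative:", bool(np.all(t >= 0)))
\end{verbatim}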
\begin{remark} The number of basic feasible solutions of $\mathbb{S}_u$ is at most $\binom{|\mathcal{Y}|}{|\mathcal{X}|}$, thus we have at most $\binom{|\mathcal{Y}|}{|\mathcal{X}|}^{|\mathcal{Y}|}$ optimization problems with variables $P_U(.)$ and $J_u$.\end{remark} \subsection{Quadratic Optimization Problem} In this part, we approximate $H(Y|U)$ for an obtained basic feasible solution which leads to the new quadratic problem. In the following we use the Bachmann-Landau notation where $o(\epsilon)$ describes the asymptotic behaviour of a function $f:\mathbb{R}^+\rightarrow\mathbb{R}$ which satisfies that $\frac{f(\epsilon)}{\epsilon}\rightarrow 0$ as $\epsilon\rightarrow 0$. \begin{lemma}\label{5} Assume $P_{X|Y}\in\mathcal{H}_{XY}$ and $V_{\Omega_u}^*$ is an extreme point of the set $\mathbb{S}_u$, then for $P_{Y|U=u}=V_{\Omega_u}^*$ we have \begin{align*} H(P_{Y|U=u}) &=\sum_{y=1}^{|\mathcal{Y}|}-P_{Y|U=u}(y)\log(P_{Y|U=u}(y))\\&=-(b_u+\epsilon a_uJ_u)+o(\epsilon), \end{align*} with $b_u = l_u \left(M_{\Omega_u}^{-1}MP_Y\right),\ a_u = l_u\left(M_{\Omega_u}^{-1}M(1\!\!:\!\!|\mathcal{X}|)P_{X|Y_1}^{-1}\right)\in\mathbb{R}^{1\times|\mathcal{X}|},\ l_u = \left[\log\left(M_{\Omega_u}^{-1}MP_{Y}(i)\right)\right]_{i=1:|\mathcal{X}|}\in\mathbb{R}^{1\times|\mathcal{X}|}, $ and $M_{\Omega_u}^{-1}MP_{Y}(i)$ stands for $i$-th ($1\leq i\leq |\mathcal{X}|$) element of the vector $M_{\Omega_u}^{-1}MP_{Y}$. Furthermore, $M(1\!\!:\!\!|\mathcal{X}|)$ stands for submatrix of $M$ with first $|\mathcal{X}|$ columns. \end{lemma} \begin{proof} The result corresponds to first order term of Taylor expansion of $\log(1+x)$. For more details see \cite{Amir2}. \end{proof} In the following we use the notation $f(x)\cong g(x)$ which denotes $f(x) = g(x) + o(x)$, i.e., $g(x)$ is the first order term of the Taylor expansion of $f(x)$. \begin{theorem}\label{th11} Let $P_{X|Y}\in\mathcal{H}_{XY}$ and $V_{\Omega_u}^*\in\mathbb{S}_u^*,\ u\in\{1,..,|\mathcal{Y}|\}$. For sufficiently small $\epsilon$, the minimization problem in \eqref{equi} can be approximated as follows \begin{align}\label{minmin} &\min_{P_U(.),\{J_u, u\in\mathcal{U}\}} -\left(\sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u\right)\\\nonumber &\text{subject to:}\\\nonumber &\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y,\ \sum_{u=1}^{|\mathcal{Y}|} P_uJ_u=0,\ P_U(.)\geq 0,\\\nonumber &\sum_{i=1}^{|\mathcal{X}|} |J_u(i)|\leq 1,\ \sum_{i=1}^{|\mathcal{X}|}J_u(i)=0,\ \forall u\in\mathcal{U}. \end{align} \end{theorem} \begin{proof} The proof is based on Proposition~\ref{4} and Lemma~\ref{5}. For $P_{Y|U=u}=V_{\Omega_u}^*,\ u\in\{1,..,|\mathcal{Y}|\}$, $H(Y|U)$ can be approximated as $ H(Y|U) = \sum_u P_uH(P_{Y|U=u})\cong \sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u,$ where $b_u$ and $a_u$ are defined as in Lemma~\ref{5}. \end{proof} \begin{remark} The weights $P_u$, which satisfy the constraints $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ and $P_U(.)\geq 0$, form a standard probability distribution, since the sum of elements in each vector $V_{\Omega_u}^*\in\mathbb{S}_u^*$ equals to one. \end{remark} \begin{remark} If we set $\epsilon=0$, \eqref{minmin} becomes the same linear programming problem as presented in \cite{deniz6}. \end{remark} \begin{proposition} All constraints in \eqref{minmin} are feasible. \end{proposition} \begin{proof} Let $J_u=0$ for all $u\in{\mathcal{U}}$. In this case all sets $\mathbb{S}_u$ become the same sets named $\mathbb{S}$. 
Since the set $\mathbb{S}$ is an at most $|\mathcal{Y}|-1$ dimensional polytope and $P_Y\in\mathbb{S}$, $P_Y$ can be written as convex combination of some extreme points in $\mathbb{S}$. Thus, the constraint $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ is feasible. Furthermore, by choosing $J_u=0$ for all $u\in{\mathcal{U}}$, all other constraints in \eqref{minmin} are feasible too. \end{proof} In next section we show that \eqref{minmin} can be converted into a linear programming problem. \subsection{Equivalent linear programming problem}\label{c} Let $\eta_u=P_uM_{\Omega_u}^{-1}MP_Y+\epsilon M_{\Omega_u}^{-1}M(1:|\mathcal{X}|)P_{X|Y_1}^{-1}(P_uJ_u)$ for all $u\in \mathcal{U}$, where $\eta_u\in\mathbb{R}^{|\mathcal{X}|}$. $\eta_u$ corresponds to multiple of non-zero elements of the extreme point $V_{\Omega_u}^*$. $P_u$ and $J_u$ can be found as follows \begin{align*} P_u&=\bm{1}^T\cdot \eta_u,\\ J_u&=\frac{P_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}[\eta_u-(\bm{1}^T \eta_u)M_{\Omega_u}^{-1}MP_Y]}{\epsilon(\bm{1}^T\cdot \eta_u)}, \end{align*} Thus, \eqref{minmin} can be rewritten as a linear programming problem using $\eta_u$ as follows. For the cost function it can be shown \begin{align*} &-\left(\sum_{u=1}^{|\mathcal{Y}|} P_ub_u+\epsilon P_ua_uJ_u\right)= -\sum_u b_u(\bm{1}^T\eta_u)\\&-\!\epsilon\!\sum_u \!a_u\! \left[P_{X|Y_1}\!M(1\!:\!|\mathcal{X}|)^{-1}\!M_{\Omega_u}\![\eta_u\!\!-\!(\bm{1}^T \eta_u)M_{\Omega_u}^{-1}M\!P_Y]\right]\!, \end{align*} which is a linear function of elements of $\eta_u$ for all $u\in\mathcal{U}$. Non-zero elements of the vector $P_uV_{\Omega_u}^*$ are equal to the elements of $\eta_u$, i.e., we have $P_uV_{\Omega_u}^*(\omega_i)=\eta_u(i).$ Thus, the constraint $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$ can be rewritten as a linear function of elements of $\eta_u$. The constraints $\sum_{u=1}^{|\mathcal{Y}|} P_uJ_u=0$ and $P_u\geq 0\ \forall u$ as well as $\sum_{i=1}^{|\mathcal{X}|}J_u(i)=0$ are reformulated as $\sum_uP_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}\left[\eta_u-(\bm{1}^T\cdot\eta_u)M_{\Omega_u}^{-1}MP_Y\right]=0,\ \sum_i \eta_u(i)\geq 0,\ \forall u$ and $\bm{1}^T P_{X|Y_1}M(1:|\mathcal{X}|)^{-1}M_{\Omega_u}\left[\eta_u-(\bm{1}^T\cdot\eta_u)M_{\Omega_u}^{-1}MP_Y\right]=0$, respectively. Furthermore, the last constraint, i.e., $\sum_{i=1}^{|\mathcal{X}|} |J_u(i)|\leq 1$, can be rewritten as \begin{align*} &\sum_i\! \left|\left(P_{X|Y_1}\!M(1\!:\!|\mathcal{X}|)^{-1}\!M_{\Omega_u}\!\left[\eta_u\!-\!(\bm{1}^T\eta_u)M_{\Omega_u}^{-1}MP_Y\right]\right)\!(i)\right|\\&\leq\epsilon(\bm{1}^T\eta_u),\ \forall u. \end{align*} Thus, all constraints can be rewritten as linear functions of elements of $\eta_u$ for all $u$. thus we have $\mathbb{S}_u^*=\{V_{\Omega_{u_1}}^*,V_{\Omega_{u_2}}^*,V_{\Omega_{u_5}}^*,V_{\Omega_{u_6}}^*\}$ for $u\in\{1,2,3,4\}$. Since for each $u$, $|\mathbb{S}_u^*|=4$, $4^4$ optimization problems need to be considered where not all are feasible. In Step 2, we choose first element of $\mathbb{S}_1^*$, second element of $\mathbb{S}_2^*$, third element of $\mathbb{S}_3^*$ and fourth element of $\mathbb{S}_4^*$. 
Thus we have the following quadratic problem \begin{align*} \min\ &P_10.9097+P_20.6962+P_30.6254+P_40.9544\\-&P_1\epsilon J_1^2 2.1089+P_2\epsilon J_2^2 10.5816-P_3 \epsilon J_3^2 6.0808 \\+&P_4\epsilon J_4^2 7.3747\\ &\text{s.t.}\begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = P_1 \begin{bmatrix} 0.675+\epsilon2J_1^2\\0.325-\epsilon2J_1^2\\0\\0 \end{bmatrix}+ P_2\begin{bmatrix} 0.1875+\epsilon 5J_2^2\\0\\0.8125-\epsilon 5J_2^2\\0 \end{bmatrix}\\&+P_3\begin{bmatrix} 0\\0.1563-\epsilon 2.5J_3^2\\0\\0.8437+\epsilon 2.5J_3^2 \end{bmatrix}+P_4\begin{bmatrix} 0\\0\\0.6251-\epsilon 10J_4^2\\0.3749+\epsilon 10J_4^2 \end{bmatrix},\\ &P_1J_1^2+P_2J_2^2+P_3J_3^2+P_4J_4^2=0,\ P_1,P_2,P_3,P_4\geq 0,\\ &|J_1^2|\leq \frac{1}{2},\ |J_2^2|\leq \frac{1}{2},\ |J_3^2|\leq \frac{1}{2},\ |J_4^2|\leq \frac{1}{2}, \end{align*} where the minimization is over $P_u$ and $J_u^2$ for $u\in\{1,2,3,4\}$. Now we convert the problem to a linear program. We have \begin{align*} \eta_1&= \begin{bmatrix} 0.675P_1+\epsilon2P_1J_1^2\\0.325P_1-\epsilon2P_1J_1^2 \end{bmatrix}=\begin{bmatrix} \eta_1^1\\\eta_1^2 \end{bmatrix},\\ \eta_2&=\begin{bmatrix} 0.1875P_2+\epsilon 5P_2J_2^2\\0.8125P_2-\epsilon 5P_2J_2^2 \end{bmatrix}=\begin{bmatrix} \eta_2^1\\\eta_2^2 \end{bmatrix},\\ \eta_3&= \begin{bmatrix} 0.1563P_3-\epsilon 2.5P_3J_3^2\\0.8437P_3+\epsilon 2.5P_3J_3^2 \end{bmatrix}=\begin{bmatrix} \eta_3^1\\\eta_3^2 \end{bmatrix},\\ \eta_4&= \begin{bmatrix} 0.6251P_4-\epsilon 10P_4J_4^2\\0.3749P_4+\epsilon 10P_4J_4^2 \end{bmatrix}=\begin{bmatrix} \eta_4^1\\\eta_4^2 \end{bmatrix}. \end{align*} \begin{align*} \min\ &0.567\eta_1^1+1.6215\eta_1^2+2.415\eta_2^1+0.2995\eta_2^2\\ &+2.6776\eta_3^1+0.2452\eta_3^2+0.6779\eta_4^1+1.4155\eta_4^2\\ &\text{s.t.}\begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = \begin{bmatrix} \eta_1^1+\eta_2^1\\\eta_1^2+\eta_3^1\\\eta_2^2+\eta_4^1\\\eta_3^2+\eta_4^2 \end{bmatrix},\ \begin{cases} &\eta_1^1+\eta_1^2\geq 0\\ &\eta_2^1+\eta_2^2\geq 0\\ &\eta_3^1+\eta_3^2\geq 0\\ &\eta_4^1+\eta_4^2\geq 0 \end{cases}\\ &\frac{0.325\eta_1^1-0.675\eta_1^2}{2}+\frac{0.8125\eta_2^1-0.1875\eta_2^2}{5}+\\ &\frac{0.1563\eta_3^2-0.8437\eta_3^1}{2.5}+\frac{0.6251\eta_4^2-0.3749\eta_4^1}{10}=0,\\ &\frac{|0.325\eta_1^1-0.675\eta_1^2|}{\eta_1^1+\eta_1^2}\leq \epsilon,\ \frac{|0.8125\eta_2^1-0.1875\eta_2^2|}{\eta_2^1+\eta_2^2}\leq 2.5\epsilon,\\ &\frac{|0.1563\eta_3^2-0.8437\eta_3^1|}{\eta_3^1+\eta_3^2}\leq 1.25\epsilon,\\ &\frac{|0.6251\eta_4^2-0.3749\eta_4^1|}{\eta_4^1+\eta_4^2}\leq 5\epsilon. \end{align*} The solution to the obtained problem is as follows (we assumed $\epsilon = 10^{-2}$) \begin{align*} &P_U = \begin{bmatrix} 0.7048 \\ 0.1493 \\ 0.146 \\ 0 \end{bmatrix},\ J_1 = \begin{bmatrix} -0.0023 \\ 0.0023 \end{bmatrix},\ J_2 = \begin{bmatrix} 0.5 \\ -0.5 \end{bmatrix}\\ &J_3 = \begin{bmatrix} -0.5 \\ 0.5 \end{bmatrix},\ J_4 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \min \text{(cost)} = 0.8239, \end{align*} thus, for this combination we obtain $I(U;Y)\cong H(Y)-0.8239=1.75-0.8239 = 0.9261$. If we try all possible combinations we get the minimum cost as $0.8239$ and thus we have $\max I(U;Y)\cong 0.9261$. Regarding the feasibility of $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$, consider the following combination, where we choose the first extreme point from each set $\mathbb{S}_u$ for $u\in\{1,2,3,4\}$.
Now consider the condition $\sum_{u=1}^{|\mathcal{Y}|} P_uV_{\Omega_u}^*=P_Y$; we have \begin{align*} \begin{bmatrix} \frac{1}{2}\\\frac{1}{4}\\ \frac{1}{8}\\ \frac{1}{8} \end{bmatrix} = P_1 \begin{bmatrix} 0.675+\epsilon2J_1^2\\0.325-\epsilon2J_1^2\\0\\0 \end{bmatrix}+ P_2\begin{bmatrix} 0.675+\epsilon2J_2^2\\0.325-\epsilon2J_2^2\\0\\0 \end{bmatrix} \end{align*} \begin{align*}+P_3\begin{bmatrix} 0.675+\epsilon2J_3^2\\0.325-\epsilon2J_3^2\\0\\0 \end{bmatrix}+P_4\begin{bmatrix} 0.675+\epsilon2J_4^2\\0.325-\epsilon2J_4^2\\0\\0 \end{bmatrix}, \end{align*} which is obviously not feasible since there are no $P_U$ and $J_u$ satisfying this constraint. Thus, not all combinations are feasible. Furthermore, if we set $\epsilon=0$ we obtain the solution in \cite{deniz6}. The minimum cost obtained in \cite{deniz6} is equal to $0.9063$, which corresponds to a lower $I(U;Y)$ than our result. \end{example} Condition \eqref{orth} can be interpreted as an inner product between the vectors $L_u$ and $\sqrt{P_X}$, where $\sqrt{P_X}\in\mathbb{R}^{\mathcal{K}}$ is a vector with entries $\{\sqrt{P_X(x)},\ x\in\mathcal{X}\}$. Thus, condition \eqref{orth} states an orthogonality condition. Furthermore, \eqref{orth2} can be rewritten in vector form as $\sum_u P_U(u)L_u=\bm 0\in\mathbb{R}^{\mathcal{K}}$ using the assumption that $P_X(x)>0$ for all $x\in\mathcal{X}$. Therewith, the problem in Corollary~\ref{corr1} can be rewritten as \begin{align} \max_{\begin{array}{c} \substack{L_u,P_U:\|L_u\|^2\leq 1,\\ L_u\perp\sqrt{P_X},\\ \sum_u P_U(u)L_u=\bm 0} \end{array}} \sum_u P_U(u)\|W\cdot L_u\|^2.\label{max} \end{align} The next proposition shows how to simplify \eqref{max}. \begin{proposition} Let $L^*$ be the maximizer of \eqref{max2}. Then \eqref{max} and \eqref{max2} achieve the same maximum value, and $U$ chosen as a uniform binary RV with $L_0=-L_1=L^*$ maximizes \eqref{max}. \begin{align} \max_{L:L\perp \sqrt{P_X},\ \|L\|^2\leq 1} \|W\cdot L\|^2.\label{max2} \end{align} \begin{proof} Let $\{L_u^*,P_U^*\}$ be the maximizer of \eqref{max}. Furthermore, let $u'$ be the index that maximizes $\|W\cdot L_{u}^*\|^2$, i.e., $u'=\text{argmax}_{u\in\mathcal{U}} \|W\cdot L_{u}^*\|^2$. Then we have \begin{align*} \sum_u P_U^*(u)||W\cdot L_u^*||^2\leq ||W\cdot L_{u'}^*||^2\leq||W\cdot L^*||^2, \end{align*} where the right inequality comes from the fact that $L^*$ has to satisfy one less constraint than $L_{u'}^*$. However, by choosing $U$ as a uniform binary RV and $L_0=-L_1=L^*$ the constraints in \eqref{max} are satisfied and the maximum in \eqref{max2} is achieved. Thus without loss of optimality we can choose $U$ as a uniformly distributed binary RV and \eqref{max} reduces to \eqref{max2}. \end{proof} \end{proposition} After finding the solution of \eqref{max2}, the conditional distributions $P_{X|U=u}$ and $P_{Y|U=u}$ are given by \begin{align} P_{X|U=0}&=P_X+\epsilon[\sqrt{P_X}]L^*,\\ P_{X|U=1}&=P_X-\epsilon[\sqrt{P_X}]L^*,\\ P_{Y|U=0}&=P_Y+\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\label{condis1}\\ P_{Y|U=1}&=P_Y-\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*.\label{condis2} \end{align} In the next theorem we derive the solution of \eqref{max2}. \begin{theorem}\label{th1} $L^*$, which maximizes \eqref{max2}, is the right singular vector corresponding to the largest singular value of $W$. \end{theorem} \begin{proof} The proof is provided in Appendix B. \end{proof} By using Theorem~\ref{th1}, the solution to the problem in Corollary~\ref{corr1} can be summarized as $\{P_U^*,L_u^*\}=\{U\ \text{uniform binary RV},\ L_0=-L_1=L^*\}$, where $L^*$ is the solution of \eqref{max2}.
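The steps above can be checked numerically. The following Python sketch (an illustrative aid under the assumptions of this section, not a definitive implementation) forms $W=[\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}]$ for the kernel used in the example below, extracts the principal right singular vector $L^*$, and evaluates the perturbed distributions in \eqref{condis1} and \eqref{condis2}; the SVD routine may return $-L^*$ instead of $L^*$, which merely swaps the roles of $U=0$ and $U=1$, and the value of $\epsilon$ below is an assumed choice.
\begin{verbatim}
import numpy as np

P_XgY = np.array([[1/4, 2/5],
                  [3/4, 3/5]])   # kernel P_{X|Y} of the example below
P_Y = np.array([1/4, 3/4])
P_X = P_XgY @ P_Y
eps = 1e-2                       # assumed small leakage level

W = np.diag(1/np.sqrt(P_Y)) @ np.linalg.inv(P_XgY) @ np.diag(np.sqrt(P_X))
_, s, Vt = np.linalg.svd(W)      # singular values in descending order
L_star = Vt[0]                   # right singular vector of sigma_max
print("approximate maximum:", 0.5 * eps**2 * s[0]**2)

delta = eps * np.linalg.inv(P_XgY) @ np.diag(np.sqrt(P_X)) @ L_star
print("P_{Y|U=0}:", P_Y + delta)   # cf. (condis1)
print("P_{Y|U=1}:", P_Y - delta)   # cf. (condis2)
\end{verbatim}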
Thus, by Theorem~\ref{th1}, we have the following result. \begin{corollary} The maximum value in \eqref{privacy} can be approximated by $\frac{1}{2}\epsilon^2\sigma_{\text{max}}^2$ for small $\epsilon$ and can be achieved by a privacy mechanism characterized by the conditional distributions found in \eqref{condis1} and \eqref{condis2}, where $\sigma_{\text{max}}$ is the largest singular value of $W$ corresponding to the right singular vector $L^*$. \end{corollary} \begin{figure*}[] \centering \includegraphics[width = .7\textwidth]{Figs/inter.jpg} \caption{ For the privacy mechanism design, we are looking for $L^*$ in the red region (vector space A) which results in a vector with the largest Euclidean norm in vector space D. Space B and space C are probability spaces for the input and output distributions; the circle in space A represents the vectors that satisfy the $\chi^2$-privacy criterion, and the red region denotes all vectors that are orthogonal to the vector $\sqrt{P_X}$. } \label{fig:inter} \end{figure*} \section{Discussions}\label{dissc} In Figure~\ref{fig:inter}, four spaces are illustrated. Space B and space C are probability spaces of the input and output distributions, where the points are inside a simplex. Multiplying input distributions by $P_{X|Y}^{-1}$ results in output distributions. Space A illustrates vectors $L_u$ with norm smaller than 1, which corresponds to the $\chi^2$-privacy criterion. The red region in this space includes all vectors that are orthogonal to $\sqrt{P_X}$. For the optimal solution with $U$ chosen to be an equiprobable binary RV, it is shown that it remains to find the vector $L_u$ in the red region that results in a vector that has the largest norm in space D. This is achieved by the principal right-singular vector of $W$. The mapping between space A and B is given by $[\sqrt{P_X}^{-1}]$ and also the mapping between space C and D is given by $[\sqrt{P_Y}^{-1}]$. Thus $W$ is given by $[\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}]$. In the following, we provide an example that illustrates the procedure of finding the mechanism that produces $U$. \begin{example} Consider the kernel matrix $P_{X|Y}=\begin{bmatrix} \frac{1}{4} & \frac{2}{5} \\ \frac{3}{4} & \frac{3}{5} \end{bmatrix}$ and $P_Y$ is given as $[\frac{1}{4} , \frac{3}{4}]^T$. Thus we can calculate $W$ and $P_X$ as: \begin{align*} P_X&=P_{X|Y}P_Y=[0.3625, 0.6375]^T,\\ W &= [\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}] = \begin{bmatrix} -4.8166 & 4.2583 \\ 3.4761 & -1.5366 \end{bmatrix}. \end{align*} The singular values of $W$ are $7.4012$ and $1$ with corresponding right singular vectors $[0.7984, -0.6021]^T$ and $[0.6021 , 0.7984]^T$, respectively. Thus the maximum of \eqref{privacy} is approximately $\frac{1}{2}\epsilon^2\times(7.4012)^2=27.39\cdot \epsilon^2$. The maximizing vector $L^*$ in \eqref{max2} is equal to $[0.7984, -0.6021]^T$ and the mapping between $U$ and $Y$ can be calculated as follows (the approximate maximum of $I(U;Y)$ is achieved by the following conditional distributions): \begin{align*} P_{Y|U=0}&=P_Y+\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\\ &=[0.25-3.2048\cdot\epsilon , 0.75+3.2048\cdot\epsilon]^T,\\ P_{Y|U=1}&=P_Y-\epsilon P_{X|Y}^{-1}[\sqrt{P_X}]L^*,\\ &=[0.25+3.2048\cdot\epsilon , 0.75-3.2048\cdot\epsilon]^T. \end{align*} Note that the approximation is valid if $|\epsilon \frac{P_{X|Y}^{-1}J_{u}(y)}{P_{Y}(y)}|\ll 1$ holds for all $y$ and $u$.
For the example above we have $\epsilon\cdot P_{X|Y}^{-1}J_0=\epsilon[-3.2048,\ 3.2048]^T$ and $\epsilon\cdot P_{X|Y}^{-1}J_1=\epsilon[3.2048,\ -3.2048]^T$ so that $\epsilon\ll 0.078$. \end{example} In the next example we consider a BSC($\alpha$) channel as the kernel. We provide an example with a constant upper bound on the approximated mutual information. \begin{example} Let $P_{X|Y}=\begin{bmatrix} 1-\alpha & \alpha \\ \alpha & 1-\alpha \end{bmatrix}$ and $P_Y$ is given by $[\frac{1}{4} , \frac{3}{4}]^T$. By following the same procedure we have: \begin{align*} P_X&=P_{X|Y}P_Y=[\frac{2\alpha+1}{4}, \frac{3-2\alpha}{4}]^T,\\ W &= [\sqrt{P_Y}^{-1}]P_{X|Y}^{-1}[\sqrt{P_X}] \\&= \begin{bmatrix} \frac{\sqrt{2\alpha+1}(\alpha-1)}{(2\alpha-1)} & \frac{\alpha\sqrt{3-2\alpha}}{2\alpha-1} \\ \frac{\alpha\sqrt{2\alpha+1}}{\sqrt{3}(2\alpha-1)} & \frac{\sqrt{3-2\alpha}(\alpha-1)}{\sqrt{3}(2\alpha-1)} \end{bmatrix}. \end{align*} The singular values of $W$ are $\sqrt{\frac{(2\alpha+1)(3-2\alpha)}{3(2\alpha-1)^2}}\geq 1$ for $\alpha\in[0,\frac{1}{2})$ and $1$ with corresponding right singular vectors $[-\sqrt{\frac{3-2\alpha}{4}},\ \sqrt{\frac{2\alpha+1}{4}}]^T$ and $[\sqrt{\frac{2\alpha+1}{4}}, \sqrt{\frac{3-2\alpha}{4}}]^T$, respectively. Thus, we have $L^*=[-\sqrt{\frac{3-2\alpha}{4}},\ \sqrt{\frac{2\alpha+1}{4}}]^T$ and $\max I(U;Y)\approx \epsilon^2\frac{(2\alpha+1)(3-2\alpha)}{6(2\alpha-1)^2}$ with the following conditional distributions: \begin{align*} P_{Y|U=0}&=P_Y+\epsilon\cdot P_{X|Y}^{-1}[\sqrt{P_X}]L^*\\&=[\frac{1}{4}\!\!+\!\epsilon\frac{\sqrt{(3\!-\!2\alpha)(2\alpha\!+\!1)}}{4(2\alpha\!-\!1)},\ \!\!\!\frac{3}{4}\!\!-\!\epsilon\frac{\sqrt{(3\!-\!2\alpha)(2\alpha\!+\!1)}}{4(2\alpha\!-\!1)}],\\ P_{Y|U=1}&=P_Y-\epsilon\cdot P_{X|Y}^{-1}[\sqrt{P_X}]L^*\\&=[\frac{1}{4}\!\!-\!\epsilon\frac{\sqrt{(3\!-\!2\alpha)(2\alpha\!+\!1)}}{4(2\alpha\!-\!1)},\ \!\!\!\frac{3}{4}\!\!+\!\epsilon\frac{\sqrt{(3\!-\!2\alpha)(2\alpha\!+\!1)}}{4(2\alpha\!-\!1)}].\\ \end{align*} The approximation of $I(U;Y)$ holds when we have $|\epsilon\frac{P_{X|Y}^{-1}[\sqrt{P_X}]L^*}{P_Y}|\ll 1$ for all $y$ and $u$, which leads to $\epsilon\ll\frac{|2\alpha-1|}{\sqrt{(3-2\alpha)(2\alpha+1)}}$. If $\epsilon<\frac{|2\alpha-1|}{\sqrt{(3-2\alpha)(2\alpha+1)}}$, then the approximation of the mutual information $I(U;Y)\cong\frac{1}{2}\epsilon^2\sigma_{\text{max}}^2$ is upper bounded by $\frac{1}{6}$ for all $0\leq\alpha<\frac{1}{2}$. \end{example} Now we investigate the range of permissible $\epsilon$ in Corollary \ref{corr1}. For approximating $I(U;Y)$, we use the second-order Taylor expansion of $\log(1+x)$. Therefore, we must have $|\epsilon\frac{P_{X|Y}^{-1}J_u(y)}{P_Y(y)}|<1$ for all $u$ and $y$. One sufficient condition for $\epsilon$ to satisfy this inequality is to have $\epsilon<\frac{|\sigma_{\text{min}}(P_{X|Y})|\min_{y\in\mathcal{Y}}P_Y(y)}{\sqrt{\max_{x\in{\mathcal{X}}}P_X(x)}}$, since in this case we have \begin{align*} \epsilon^2|P_{X|Y}^{-1}J_u(y)|^2&\leq\epsilon^2\left\lVert P_{X|Y}^{-1}J_u\right\rVert^2\leq\epsilon^2 \sigma_{\max}^2\left(P_{X|Y}^{-1}\right)\left\lVert J_u\right\rVert^2\\&\stackrel{(a)}\leq\frac{\epsilon^2\max_{x\in{\mathcal{X}}}P_X(x)}{\sigma^2_{\text{min}}(P_{X|Y})}<\min_{y\in\mathcal{Y}} P_Y^2(y), \end{align*} which implies $|\epsilon\frac{P_{X|Y}^{-1}J_u(y)}{P_Y(y)}\!|<1$. The step (a) follows from $\sigma_{\max}^2\left(P_{X|Y}^{-1}\right)=\frac{1}{\sigma_{\min}^2\left(P_{X|Y}\right)}$ and $\|J_u\|^2\leq\max_{x\in{\mathcal{X}}}P_X(x)$.
The latter inequality follows from \eqref{prop3} since we have \begin{align*} \frac{\|J_u\|^2}{\max_{x\in{\mathcal{X}}}P_X(x)}\leq \sum_{x\in\mathcal{X}}\frac{J_u^2(x)}{P_X(x)}\leq 1. \end{align*} Furthermore, for approximating $I(U;X)$, we should have $|\epsilon\frac{J_u(x)}{P_X(x)}|<1$ for all $x$ and $u$. One sufficient condition is to have $\epsilon<\frac{\min_{x\in\mathcal{X}}P_X(x)}{\sqrt{\max_{x\in\mathcal{X}}P_X(x)}}$. The proof follows again from \eqref{prop3}. \section{conclusion}\label{concul} In summary, we have shown that we can use the concept of Euclidean information theory to simplify the analytical privacy mechanism design problem if a small $\epsilon$ privacy leakage using the $\chi^2$-privacy criterion is tolerated. This allows us to locally approximate the KL divergence and hence the mutual information between $U$ and $Y$. Using this approximation, it is shown that the original problem can be reduced to a simple linear algebra problem: $1$ is the smallest singular value of $W$, with $\sqrt{P_X}$ as the corresponding right singular vector, and the right singular vector of the largest singular value is orthogonal to $\sqrt{P_X}$; thus, the principal right-singular vector is the solution of \eqref{max2}. It has also been shown that the information-theoretic disclosure control problem can be decomposed if the leakage matrix is fat, of full row rank, and belongs to $\mathcal{H}_{XY}$. In this case, information geometry can be used to approximate $H(Y|U)$, allowing us to simplify the optimization problem for sufficiently small $\ell_1$-privacy leakage; the resulting optimization problem can be transferred to a linear programming problem, with details found in \cite{Amir2}. \section*{Appendix A}\label{appaa} For the first claim, it is sufficient to show that for any $\gamma\in\mathbb{S}_u$, we have $1^T\gamma=1$. Since $\gamma\in\mathbb{S}_u$, we have $M(\gamma-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0$ and from Lemma~\ref{null} we obtain $1^T(\gamma-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0$, which yields $1^T\gamma=1$. In the last step, we used the fact that $1^T\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}=0$, which is true since by using \eqref{prop1} we have $1_{|\mathcal{Y}|}^T\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\ 0 \end{bmatrix}=1_{|\mathcal{X}|}^T P_{X|Y_1}^{-1}J_u\stackrel{(a)}{=}1^TJ_u=0$, where $(a)$ comes from $1_{|\mathcal{X}|}^T P_{X|Y_1}^{-1}=1_{|\mathcal{X}|}^T\Leftrightarrow 1_{|\mathcal{X}|}^T P_{X|Y_1}=1_{|\mathcal{X}|}^T$ and noting that the columns of $P_{X|Y_1}$ are probability vectors. If $P_{Y|U=u}\in\mathbb{S}_u$ for all $u\in\mathcal{U}$, then the Markov chain $X-Y-U$ holds and \begin{align*} &M(P_{Y|U=u}-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0, \\ \Rightarrow\ &P_{X|Y}(P_{Y|U=u}-P_Y-\epsilon\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=0,\\ \Rightarrow\ &P_{X|U=u}-P_X=\epsilon\cdot P_{X|Y}(\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix})=\epsilon\cdot J_u, \end{align*} where in the last line we use the Markovity of $X-Y-U$. Furthermore, $J_u$ satisfies \eqref{prop1}, \eqref{prop2}, and \eqref{prop3} for all $u\in\mathcal{U}$; thus, the privacy criterion defined in \eqref{local} holds. \section*{Appendix B} Consider the set $\mathbb{S}=\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y\}$.
Any element of $\mathbb{S}$ has entries summing to one, since $M(y-P_Y)=0\Rightarrow y-P_Y\in \text{Null}(P_{X|Y})$ and from Lemma~\ref{null} we obtain $1^T(y-P_Y)=0\Rightarrow 1^Ty=1$. The basic solutions of $\mathbb{S}$ are $W_{\Omega}^*$, defined as follows. Let $\Omega = \{\omega_1,\dots,\omega_{|\mathcal{X}|}\}$, where $\omega_i\in\{1,\dots,|\mathcal{Y}|\}$. Then \begin{align*} W_{\Omega}^*(\omega_i)=M_{\Omega}^{-1}MP_Y(i), \end{align*} and the other elements of $W_{\Omega}^*$ are zero. Thus, the sum over all elements of $M_{\Omega}^{-1}MP_Y$ equals one, since each element of $\mathbb{S}$ has entries summing to one. For the second statement, consider the set $\mathbb{S}^{'}=\left\{y\in\mathbb{R}^{|\mathcal{Y}|}|My=MP_Y+\epsilon M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0 \end{bmatrix}\right\}$. As argued before, the basic solutions of $\mathbb{S}^{'}$ are $V_{\Omega}^*$, where \begin{align*} V_{\Omega}^*(\omega_i)= (M_{\Omega}^{-1}MP_Y+\epsilon M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix})(i). \end{align*} Here, the elements of $V_{\Omega}^*$ are not necessarily non-negative. From Lemma~\ref{null2}, each element of $\mathbb{S}^{'}$ has entries summing to one. Thus, by using the first statement of this proposition, the sum over all elements of the vector $ M_{\Omega}^{-1}M\begin{bmatrix} P_{X|Y_1}^{-1}J_u\\0\end{bmatrix}$ equals zero. \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,IZS} \end{document}
2205.04880v1
http://arxiv.org/abs/2205.04880v1
Consensus based optimization via jump-diffusion stochastic differential equations
\documentclass[a4paper]{article} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[T1]{fontenc} \usepackage[colorinlistoftodos]{todonotes} \usepackage{tikz} \usepackage{caption} \usepackage{enumerate} \usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} \usepackage{mathrsfs, amsthm} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{assumption}{Assumption}[section] \newtheorem{condition}{Condition} \newtheorem{remarkex}{Remark}[section] \newtheorem{experiment}{Experiment}[section] \newenvironment{remark} {\pushQED{\qed}\renewcommand{\qedsymbol}{$\triangle$}\remarkex} {\popQED\endremarkex} \renewenvironment{abstract} {\small \begin{center} \bfseries \abstractname\vspace{-0.5em}\vspace{0pt} \end{center} \list{}{ \setlength{\leftmargin}{7mm} \setlength{\rightmargin}{\leftmargin} } \item\relax} {\endlist} \usepackage{mathtools} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \usepackage{algorithm2e} \RestyleAlgo{ruled} \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwInput{KwInitialize}{Initialize} \usepackage{comment} \usepackage{amsmath,amssymb} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage[english]{babel} \usepackage{bbm} \numberwithin{equation}{section} \DeclareMathOperator{\dist}{dist} \DeclareMathOperator{\diag}{Diag} \DeclareMathOperator{\bias}{Bias} \DeclareMathOperator{\var}{Var} \DeclareMathOperator{\M}{M} \usepackage{newpxtext,newpxmath} \newcommand{\rd}[1]{{\color{red} #1}} \newcommand{\Zstroke}{ \text{\ooalign{\hidewidth\raisebox{0.2ex}{--}\hidewidth\cr$Z$\cr}}} \newcommand{\zstroke}{ \text{\ooalign{\hidewidth -\kern-.3em-\hidewidth\cr$z$\cr}}} \begin{document} \title{Consensus based optimization via jump-diffusion stochastic differential equations} \author{D. Kalise\thanks{Department of Mathematics, Imperial College London, South Kensington Campus, SW7 2AZ London, UK; [email protected]} \and A. Sharma\thanks{School of Mathematical Sciences, University of Nottingham, UK; [email protected]} \and M.V. Tretyakov\thanks{School of Mathematical Sciences, University of Nottingham, UK; [email protected]}} \date{} \maketitle \begin{abstract} We introduce a new consensus based optimization (CBO) method where interacting particle system is driven by jump-diffusion stochastic differential equations. We study well-posedness of the particle system as well as of its mean-field limit. The major contributions of this paper are proofs of convergence of the interacting particle system towards the mean-field limit and convergence of a discretized particle system towards the continuous-time dynamics in the mean-square sense. We also prove convergence of the mean-field jump-diffusion SDEs towards global minimizer for a large class of objective functions. We demonstrate improved performance of the proposed CBO method over earlier CBO methods in numerical simulations on benchmark objective functions. \end{abstract} \section{Introduction} Large-scale individual-based models have become a well-established modelling tool in modern science and engineering, with applications including pedestrian motion, collective animal behaviour, swarm robotics and molecular dynamics, among many others. 
Through the iteration of basic interaction forces such as attraction, repulsion, and alignment, these complex systems exhibit a rich self-organization behaviour (see e.g. \cite{cbo23,cbo20,cbo21,cbos19,cbo22,cbo39}). Over the last decades, individual-based models have also entered the field of global optimization and its many applications in operations research, control, engineering, economics, finance, and machine learning. In many applied problems arising in the aforementioned fields, the objective function to be optimized can be non-convex and/or non-smooth, preventing the use of traditional continuous/convex optimization techniques. In such scenarios, individual-based metaheuristic models have proven surprisingly effective. Examples include genetic algorithms, ant colony optimization, particle swarm optimization, simulated annealing, etc. (see \cite{cbo26,cbo24,cbo25} and references therein). These methods are probabilistic in nature, which sets them apart from other derivative-free algorithms \cite{cbo30}. Unlike many convex optimization methods, metaheuristic algorithms are relatively simple to implement and easily parallelizable. This combination of simplicity and effectiveness has fuelled the application of metaheuristics to complex engineering problems such as shape optimization, scheduling problems, and hyper-parameter tuning in machine learning models. However, it is often the case that metaheuristics lack rigorous convergence results, a question which has become an active area of research \cite{cbo50,cbo41}. In \cite{cbo1}, the authors introduced an optimization algorithm which employs an individual-based model to frame a global minimization problem \begin{equation*} \min\limits_{x \in \mathbb{R}^{d}} f(x), \end{equation*} where $f(x)$ is a positive function from $\mathbb{R}^{d}$ to $\mathbb{R}$, as a consensus problem. In this model, each individual particle explores the energy landscape given by $f(x)$, broadcasting its current value to the rest of the ensemble through a weighted average. This iterated interaction generates trajectories which flock towards a consensus point corresponding to a global minimizer of $f(x)$, hence the name \textit{Consensus Based Optimization} (CBO). We refer to \cite{cbo40,cbo41} for two recent surveys on the topic. The dynamics of existing CBO models are governed by stochastic differential equations with Wiener noise \cite{cbo1,cbo2,cbo3}. Hence, we can resort to the toolbox of stochastic calculus and stochastic numerics to analyse these models. This amenability of CBO models to theoretical as well as numerical analysis differentiates them from other agent-based optimization algorithms. In this paper, we propose a new CBO model which is governed by jump-diffusion stochastic differential equations. This means that randomness in the dynamics of the proposed CBO model comes from Wiener noise as well as from a compound Poisson process. The following are the contributions of this paper: \begin{itemize} \item[(i)] We prove the well-posedness of the interacting-particle system and of its mean-field limit driven by jump-diffusion SDEs and convergence of the mean-field SDEs to the global minimum. The approach to studying well-posedness and convergence to the global minimum is similar to that of \cite{cbo2} but adapted to the jump-diffusion case with time-dependent coefficients.
\item[(ii)] The major contribution of the paper is that we prove mean-square convergence of the interacting particle system to the mean-field limit when the number of particles, $N$, tends to $\infty$. This also implies convergence of the particle system towards the mean-field limit in the $2$-Wasserstein metric. Let us emphasize that we prove this result for quadratically growing objective functions. We also study convergence of the implementable discretized particle system towards the jump-diffusion SDEs as the discretization step, $h$, goes to $0$. Our results can be utilized for the earlier CBO models \cite{cbo1,cbo2,cbo3}. \item[(iii)] As illustrated in the numerical experiments, the addition of a jump-diffusion process in the particle system leads to a more effective exploration of the energy landscape. This is particularly relevant when good prior knowledge of the optimal solution for initialization of the CBO is not available. \end{itemize} As was highlighted in \cite[Remark 3.2]{cbo2}, it is not straightforward to prove convergence of the interacting particle system towards its mean-field limit, even after proving a uniform-in-$N$ moment bound for the solutions of the SDEs driving the particle system. Convergence results of this type have been proved for special cases of compact manifolds (see \cite{cbo34} for compact hypersurfaces and \cite{cbo51} for Stiefel manifolds) and globally Lipschitz continuous objective functions. In this case, not only is the objective function bounded but the particles also evolve on a compact set. Under the assumptions on the objective function as in our paper, in the diffusion case weak convergence of the empirical measure of the particle system to the law of the corresponding mean field SDEs has been proved in \cite{cbo41, cbo52} exploiting Prokhorov's theorem. Here we prove convergence of the particle system to the mean-field SDEs in the mean-square sense for a quadratically growing, locally Lipschitz objective function defined on $\mathbb{R}^{d}$. Furthermore, the practical implementation of the particle system corresponding to a CBO model requires a numerical approximation in the mean-square sense. We utilize an explicit Euler scheme to implement the proposed jump-diffusion CBO model. This leads to the question of whether the Euler scheme converges to the CBO model, taking into account that the coefficients of the particle system are not globally Lipschitz and the Lipschitz constants grow exponentially when the objective function is not bounded. At the same time, the coefficients of the particle system have linear growth at infinity. In the case of jump-diffusion SDEs, earlier works either showed convergence of the Euler scheme in the case of globally Lipschitz coefficients \cite{cbo28} or proposed special schemes in the case of non-globally Lipschitz coefficients with super-linear growth, e.g. a tamed Euler scheme \cite{cbo15}. Here we prove mean-square convergence of the Euler scheme and we show that this convergence is uniform in the number of particles $N$, i.e. the choice of a discretization time-step $h$ is independent of $N$. Our convergence result also holds for earlier CBO models \cite{cbo1,cbo2,cbo3}. In Section \ref{sec_lit_rev}, we first present a review of existing CBO models and then describe our CBO model driven by jump-diffusion SDEs. We also formally introduce the mean-field limit of the new CBO model. In Section~\ref{sec_wel_pos}, we focus on well-posedness of the interacting particle system behind the new CBO model and its mean-field limit.
In Section~\ref{cbo_conv_res}, we discuss convergence of the mean field limit towards a point in $\mathbb{R}^{d}$ which approximates the global minimum, convergence of the interacting particle system towards the mean field limit, and convergence of the implementable discretized particle system towards the particle system. We present results of numerical experiments in Section~\ref{cbo_num_exp} to compare the performance of our model with that of the existing CBO models. Throughout the paper, $C$ is a floating constant which may vary from place to place. We denote by $(a\cdot b)$ the dot product of two vectors $a,b \in \mathbb{R}^{d}$. We will omit the brackets $()$ wherever this does not lead to any confusion. \section{CBO models: existing and new}\label{sec_lit_rev} In Section~\ref{sec_ex_cbo}, we review the existing CBO models. In Section~\ref{sec_our_mod}, we introduce a new CBO model driven by jump-diffusion SDEs and discuss potential advantages of adding jumps to CBO models, which are confirmed by numerical experiments in Section~\ref{cbo_num_exp}. The numerical experiments of Section~\ref{cbo_num_exp} are conducted using the Euler scheme presented in Section~\ref{sec_our_mod}. \subsection{Review of the existing CBO models}\label{sec_ex_cbo} Let $N \in \mathbb{N}$ denote the number of agents with position vectors $X^{i}_{N}(t) \in \mathbb{R}^{d}$, $i=1,\dots,N$. The following model was proposed in \cite{cbo1}: \begin{align}\label{cbos1.2} dX^{i}_{N}(t) &= -\beta(X^{i}_{N}(t) - \bar{X}^{\alpha,f}_{N}(t))H^{\epsilon}(f(X^{i}_{N}(t)) - f(\bar{X}^{\alpha,f}_{N}(t)))dt \nonumber \\ & \;\;\;\; + \sqrt{2}\sigma \vert X^{i}_{N}(t) -\bar{X}^{\alpha,f}_{N}(t)\vert dW^{i}(t),\;\;\;\;i = 1,\dots,N, \end{align} where $H^{\epsilon} : \mathbb{R} \rightarrow \mathbb{R}$ is a smooth regularization of the Heaviside function, $W^{i}(t)$, $i=1,\ldots , N,$ represent $N$ independent $d$-dimensional standard Wiener processes, $\beta> 0$, $\sigma > 0 $, and $\bar{X}^{\alpha,f}_{N}(t)$ is given by \begin{equation} \label{cbo2.2} \bar{X}^{\alpha,f}_{N}(t) = \frac{\sum_{i =1}^{N}X^{i}_{N}(t)w_{f}^{\alpha}(X^{i}_{N}(t))}{\sum_{i =1}^{N}w_{f}^{\alpha}(X^{i}_{N}(t))}, \end{equation} with $w_{f}^{\alpha}(x) = \exp{(-\alpha f(x))}$, $\alpha > 0$. Each particle $X^{i}_{N}$ at time $t$ is assigned an opinion $f(X^{i}_{N}(t))$. The smaller the value of $f$ for a particle, the greater the influence of that particle, i.e. the more weight is assigned to that particle at that time in the instantaneous weighted average (\ref{cbo2.2}). If the value $f(X^{i}_{N}(t))$ of a particle $X^{i}_{N}$ at time $t$ is greater than the value $f(\bar{X}_{N}^{\alpha,f}(t))$ at the instantaneous weighted average $\bar{X}_{N}^{\alpha, f}(t)$, then the regularised Heaviside function forces the particle $X^{i}_{N}$ to drift towards $\bar{X}_{N}^{\alpha,f}$. If the opinion of the $i$-th particle matters more among the interacting particles, i.e. the value $f(X^{i}_{N}(t))$ is less than $f(\bar{X}^{\alpha,f}_{N}(t))$, then it is not beneficial for it to move towards $\bar{X}_{N}^{\alpha, f}$. The noise term is added to explore the space $\mathbb{R}^{d}$ and to avoid non-uniform consensus. The noise intensity induced in the dynamics of the $i-$th particle at time $t$ takes into account the distance of the particle from the instantaneous weighted average, $\bar{X}_{N}^{\alpha, f}(t)$. Over a period of time, as the particles start moving towards a consensus opinion, the coefficients in (\ref{cbos1.2}) go to zero.
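For concreteness, the following is a minimal NumPy sketch (ours, not taken from \cite{cbo1}) of the weighted average (\ref{cbo2.2}) and of one Euler--Maruyama step of the dynamics (\ref{cbos1.2}); the $\tanh$-based regularization of the Heaviside function and all parameter values are illustrative choices only.
\begin{verbatim}
import numpy as np

def weighted_average(X, f, alpha):
    # Instantaneous weighted average of the particles; X has shape (N, d).
    fX = f(X)
    w = np.exp(-alpha * (fX - fX.min()))   # common shift cancels in the ratio
    return (X * w[:, None]).sum(axis=0) / w.sum()

def smooth_heaviside(r, eps=1e-1):
    # One possible smooth regularization H^eps of the Heaviside function.
    return 0.5 * (1.0 + np.tanh(r / eps))

def cbo_step(X, f, alpha, beta, sigma, h, rng):
    # One Euler-Maruyama step of the isotropic-noise CBO dynamics.
    xbar = weighted_average(X, f, alpha)
    diff = X - xbar
    fX, fbar = f(X), f(xbar[None, :])[0]
    drift = -beta * diff * smooth_heaviside(fX - fbar)[:, None]
    noise = np.sqrt(2.0) * sigma * np.linalg.norm(diff, axis=1, keepdims=True)
    return X + drift * h + noise * np.sqrt(h) * rng.normal(size=X.shape)

# Toy usage on f(x) = |x|^2 (for illustration only).
rng = np.random.default_rng(0)
f = lambda X: np.sum(X**2, axis=1)
X = rng.uniform(-3.0, 3.0, size=(50, 2))
for _ in range(200):
    X = cbo_step(X, f, alpha=30.0, beta=1.0, sigma=0.5, h=0.01, rng=rng)
print(weighted_average(X, f, alpha=30.0))
\end{verbatim}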
One can observe that the more influential opinion a particular particle has, the higher is the weight assigned to that particle in the instantaneous weighted average (\ref{cbo2.2}). Based on this logic, in \cite{cbo2} the authors dropped the regularised Heaviside function in the drift coefficient and the model (\ref{cbos1.2}) was simplified as follows: \begin{equation}\label{cbos1.3} dX^{i}_{N}(t) = -\beta (X^{i}_{N}(t) -\bar{X}_{N}^{\alpha,f}(t)) dt + \sigma \vert X^{i}_{N}(t) - \bar{X}_{N}^{\alpha,f}(t)\vert dW^{i}(t),\;\;\; i = 1,\dots,N, \end{equation} with $\beta$, $ \sigma$, $\bar{X}_{N}^{\alpha,f}$ as in (\ref{cbos1.2})-(\ref{cbo2.2}). The major drawback of the consensus based models (\ref{cbos1.2}) and (\ref{cbos1.3}) is that the parameters $\beta$ and $\sigma$ are dependent on the dimension $d$. To illustrate this fact, we replace $\bar{X}_{N}^{\alpha,f}$ in (\ref{cbos1.3}) by a fixed vector $V \in \mathbb{R}^{d}$. Then, using Ito's formula, we have \begin{equation} \frac{d}{dt}\mathbb{E}|X^{i}_{N}(t)-V|^{2} = (-2\beta + \sigma^{2}d)\mathbb{E}|X^{i}_{N}(t)-V|^{2},\;\;\;\; i = 1,\dots,N. \end{equation} As one can notice, for particles to reach the consensus point whose position vector is $V$, one needs $2\beta > d\sigma^{2}$. To overcome this deficiency, the authors of \cite{cbo3} proposed the following model which is based on component-wise noise intensity instead of isotropic noise used in (\ref{cbos1.2}) and (\ref{cbos1.3}): \begin{equation}\label{cbos1.5} dX^{i}_{N}(t) = -\beta (X^{i}_{N}(t) - \bar{X}_{N}^{\alpha,f}(t)) dt + \sqrt{2}\sigma\diag(X^{i}_{N}(t) - \bar{X}_{N}^{\alpha,f}(t)) dW^{i}(t), \;\;\;\; i =1,\dots,N, \end{equation} where $\beta, \sigma$, and $\bar{X}_{N}^{\alpha,f} $ are as in (\ref{cbos1.2})-(\ref{cbo2.2}), and $\diag(U)$ is a diagonal matrix whose diagonal is a vector $U \in \mathbb{R}^{d}$. Now, if we replace $\bar{X}_{N}^{\alpha, f}$ by a fixed vector $V$ and then use Ito's formula for (\ref{cbos1.5}), we get \begin{align} \frac{d}{dt}\mathbb{E}|X^{i}_{N}(t) -V|^{2} & = -2\beta\mathbb{E}|X^{i}_{N}(t) -V|^{2} + \sigma^{2}\mathbb{E}\sum\limits_{j=1}^{d}(X^{i}_{N}(t) - V)_{j}^{2} \nonumber \\ & =(-2\beta + \sigma^{2})\mathbb{E}|X^{i}_{N}(t) - V|^{2},\;\;\;\;i=1,\dots,N, \end{align} where $(X_{N}^{i}(t) - V)_{j} $ denotes the $j-$th component of $(X_{N}^{i}(t) -V)$. It is clear that in this model there is no dimensional restriction on $\beta$ and $\sigma$. Other CBO models \cite{cbo4,cbo5} are based on interacting particles driven by common noise. Since the same noise drives all the particles, the exploration is not effective. Therefore, they are not scalable with respect to dimension and do not perform well in contrast to the CBO models (\ref{cbos1.2}), (\ref{cbos1.3}), (\ref{cbos1.5}) and model introduced in Section~\ref{sec_our_mod}. This fact is demonstrated in experiments in Section~\ref{cbo_num_exp}. 
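To see the dimension dependence discussed above numerically, the following short Monte Carlo sketch (our illustration, with the weighted average frozen at a fixed point $V=0$ and arbitrary parameter values) estimates the mean-square decay/growth exponent of $\mathbb{E}|X^{i}_{N}(t)-V|^{2}$ under the isotropic noise of (\ref{cbos1.3}) and the component-wise noise of (\ref{cbos1.5}); the isotropic exponent grows linearly with $d$, while the component-wise exponent is essentially independent of $d$.
\begin{verbatim}
import numpy as np

def ms_rate(d, beta=1.0, sigma=0.7, isotropic=True,
            h=1e-3, n_steps=1000, n_paths=2000, seed=0):
    # Empirical exponent of E|X(t) - V|^2 with the weighted average frozen at V = 0.
    rng = np.random.default_rng(seed)
    X = np.ones((n_paths, d))                      # start at distance sqrt(d) from V
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(h), size=(n_paths, d))
        if isotropic:                              # sigma |X - V| dW (isotropic noise)
            noise = sigma * np.linalg.norm(X, axis=1, keepdims=True) * dW
        else:                                      # sqrt(2) sigma diag(X - V) dW
            noise = np.sqrt(2.0) * sigma * X * dW
        X = X - beta * X * h + noise
    msd = np.mean(np.sum(X**2, axis=1))
    return np.log(msd / d) / (n_steps * h)         # log of E|X(T)|^2 / E|X(0)|^2 over T

for d in (2, 20):
    print(d, ms_rate(d, isotropic=True), ms_rate(d, isotropic=False))
# The isotropic exponent grows linearly with d (becoming positive once
# sigma^2 d > 2 beta), while the component-wise exponent does not depend on d.
\end{verbatim}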
\subsection{Jump-diffusion CBO models}\label{sec_our_mod} Let us consider the following jump-diffusion model: \begin{align}\label{cbos1.6} dX^{i}_{N}(t) &= -\beta(t)(X^{i}_{N}(t^{}) - \bar{X}_{N}(t^{}))dt + \sqrt{2}\sigma(t) \diag(X^{i}_{N}(t^{})-\bar{X}_{N}(t^{}))dW^{i}(t) \nonumber \\ &\;\;\;\; + \gamma(t)\diag(X^{i}_{N}(t^{-}) -\bar{X}_{N}(t^{-}))dJ^{i}(t), \;\; i=1,\dots,N, \end{align} with \begin{equation} \label{cbo_neweq_2.8} J^{i}(t) = \sum\limits_{j=1}^{N^{i}(t)}Z^{i}_{j}, \end{equation} where $N^{i}(t)$, $i=1,\dots,N$, are $N$ independent Poisson processes with jump intensity $\lambda$, and $Z_{j}^{i} = (Z_{j,1}^{i},\dots,Z_{j,d}^{i})^{\top}$ are i.i.d. $d$-dimensional random variables denoting the $j$-th jump of the $i$-th particle, with $Z_{j}^{i} \sim Z$. The distribution of $Z$ is called the jump-size distribution. For the sake of convenience, we write $Z_{l}$ for the $l$-th component of the vector $Z$. We assume that the components $Z_{l}$ of $Z$ are i.i.d. random variables distributed as \begin{equation} Z_{l} \sim \Zstroke, \end{equation} where $\Zstroke $ is an $\mathbb{R}-$valued random variable whose probability density is given by $\rho_{\zstroke}(\zstroke)$ such that $\mathbb{E}(\Zstroke) = \int_{\mathbb{R}}\zstroke \rho_{\zstroke}(\zstroke)d\zstroke = 0$. We also denote the probability density of $Z$ as $\rho_{z}(z) = \prod_{l=1}^{d}\rho_{\zstroke}(z_{l}) $. Note that $\mathbb{E}(Z)$ is the $d$-dimensional zero vector, since each $Z_{l}$ is distributed as $\Zstroke$. The Wiener processes $W^{i}(t)$, the Poisson processes $N^{i}(t)$, $i = 1,\dots, N$, and the jump sizes $Z^{i}_{j}$ are assumed to be mutually independent (see further theoretical details concerning L\'{e}vy-driven SDEs in \cite{cbos11}). Also, $\beta(t)$, $\sigma(t), \gamma(t)$ are continuous functions and \begin{equation} \label{cbos1.7} \bar{X}_{N}(t) = (\bar{X}^{1}_{N}(t),\dots, \bar{X}^{d}_{N}(t)) := \frac{\sum_{i=1}^{N}X^{i}_{N}(t)e^{-\alpha f(X^{i}_{N}(t))}}{\sum_{i=1}^{N}e^{-\alpha f(X^{i}_{N}(t))}}, \end{equation} with $\alpha > 0$. Note that we have omitted $\alpha$ and $f$ from $\bar{X}_{N}^{\alpha,f}$ in the notation used in (\ref{cbos1.6}) for simplicity of writing. We recall the meaning of the jump term \begin{equation*} \int_{0}^{t}\gamma(s)\diag(X^{i}_{N}(s^{-}) -\bar{X}_{N}(s^{-}))dJ^{i}(s)= \sum_{j=1}^{N^{i}(t)}\gamma(\tau_{j})\diag(X^{i}_{N}(\tau_{j}^{-}) - \bar{X}_{N}(\tau_{j}^{-}))Z^{i}_{j} , \end{equation*}where $\tau_{j}$ denotes the time of the $j$-th jump of the Poisson process $N^{i}(t)$. Thanks to the assumption that $\mathbb{E}(\Zstroke) = 0$ \big(which in turn implies $\mathbb{E}(Z^{i}_{j,l}) = 0$, $j=1,\dots,N^{i}(t)$, $i =1,\dots,N$, $l =1,\dots,d$\big), the above integral is a martingale, and hence (similar to the Ito integral term in (\ref{cbos1.6})) it does not bias trajectories of $X_{N}^{i}(t)$, $i=1,\dots,N$. The jump-diffusion SDEs (\ref{cbos1.6}) differ from (\ref{cbos1.5}) in two ways: \begin{itemize} \item The SDEs (\ref{cbos1.6}) are a consequence of interlacing the Ito diffusion with jumps arriving according to the Poisson process whose jump intensity is given by $\lambda$.
\item We take $\beta(t)$ as a continuous positive non-decreasing function of $t$ such that $\beta(t) \rightarrow \beta > 0$ as $t \rightarrow \infty$, $\sigma(t)$ as a continuous positive non-increasing function of $t$ such that $\sigma(t) \rightarrow \sigma > 0$ as $t \rightarrow \infty$, and $\gamma(t)$ as a continuous non-negative non-increasing function of $t$ such that $\gamma(t) \rightarrow \gamma \geq 0$ as $t \rightarrow \infty$. \end{itemize} Although we analyse the CBO model (\ref{cbos1.6}) with time-dependent parameters, the decision whether to take the parameters time-dependent or not is problem specific. Note that the particles driven by the SDEs (\ref{cbos1.6}) jump at different times with different jump sizes, and the jumps arrive according to Poisson processes with intensity $\lambda$. We can also write the jump-diffusion SDEs (\ref{cbos1.6}) in terms of Poisson random measures \cite{cbos11} as \begin{align}\label{cboeq1.8} dX^{i}_{N}(t) &= -\beta(t)(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dt + \sqrt{2}\sigma(t)\diag(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dW^{i}(t) \nonumber\\ & \;\;\;\;+\int_{\mathbb{R}^{d}}\gamma(t)\diag(X^{i}_{N}(t^{-}) -\bar{X}_{N}(t^{-}))z\mathcal{N}^{i}(dt,dz), \end{align} where $\mathcal{N}^{i}(dt,dz)$, $i=1,\dots, N$, represent independent Poisson random measures with intensity measure $\nu(dz)dt$, and $\nu(dz)$ is a L\'{e}vy measure which is finite in our case (\ref{cbos1.6}). Although for simplicity we introduced our model as (\ref{cbos1.6}), in proving well-posedness and convergence results we will make use of (\ref{cboeq1.8}). We can formally write the mean field limit of the model (\ref{cbos1.6}) as the following McKean-Vlasov SDEs: \begin{align}\label{cbomfsde} dX(t) &= -\beta(t)(X(t^{}) -\bar{X}(t^{}))dt + \sqrt{2}\sigma(t) \diag(X(t^{})-\bar{X}(t^{}))dW(t) \nonumber \\ &\;\;\;\; +\gamma(t)\diag(X(t^{-}) -\bar{X}(t^{-}))dJ(t), \end{align} where $J(t) = \sum_{j=1}^{N(t)}Z_{j}$, $N(t)$ is a Poisson process with intensity $\lambda$, and \begin{align}\label{eqcbo2.12} \bar{X}(t) := \bar{X}^{\mathcal{L}_{X(t)}} = \frac{\int_{\mathbb{R}^{d}} xe^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)} = \frac{\mathbb{E}\big(X(t)e^{-\alpha f(X(t))}\big)}{\mathbb{E}\big(e^{-\alpha f(X(t))}\big)}, \end{align} with $\mathcal{L}_{X(t)} := \text{Law}(X(t))$. We can rewrite the mean field jump-diffusion SDEs (\ref{cbomfsde}) in terms of a Poisson random measure as \begin{align}\label{cbomfsdep} dX(t) &= -\beta(t)(X(t^{}) - \bar{X}(t^{}))dt + \sqrt{2}\sigma(t)\diag(X(t^{}) - \bar{X}(t^{}))dW(t) \nonumber \\ &\;\;\;\; + \gamma(t) \int_{\mathbb{R}^{d}}\diag(X(t^{-}) - \bar{X}(t^{-}))z\mathcal{N}(dt,dz). \end{align} \subsubsection{Other jump-diffusion CBO models} Although the aim of the paper is to analyse the CBO model (\ref{cboeq1.8}), we discuss three other jump-diffusion CBO models of interest. \textbf{Additional Model 1:} Writing (\ref{cbos1.6}) in terms of a Poisson random measure suggests that we can also consider an infinite-activity L\'{e}vy process, e.g. an $\alpha$-stable process, to introduce jumps in the dynamics of the particles.
We can write the CBO model as \begin{align} dX^{i}_{N}(t) &= -\beta(t)(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dt + \sqrt{2}\sigma(t)\diag(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dW^{i}(t) \nonumber\\ & \;\;\;\;+\int_{\mathbb{R}^{d}}\gamma(t)\diag(X^{i}_{N}(t^{-}) -\bar{X}_{N}(t^{-}))z\mathcal{N}^{i}(dt,dz). \end{align} However, numerical approximation of SDEs driven by infinite-activity L\'{e}vy processes is computationally more expensive (see e.g. \cite{cbo28, cbos12}); hence, it can be detrimental to the overall CBO performance. \textbf{Additional Model 2:} In the SDEs (\ref{cbos1.6}), the intensity $\lambda$ of the Poisson process is constant. If we take the jump intensity to be $\lambda(t) $, i.e. a function of $t$, then the corresponding SDEs are as follows: \begin{align}\label{cbos1.9} dX^{i}_{N}(t) &= -\beta(t)(X^{i}_{N}(t^{}) - \bar{X}_{N}(t^{}))dt + \sqrt{2}\sigma(t) \diag(X^{i}_{N}(t^{})-\bar{X}_{N}(t^{}))dW^{i}(t) \nonumber \\ &\;\;\;\; + \diag(X^{i}_{N}(t^{-}) -\bar{X}_{N}(t^{-}))dJ^{i}(t), \;\; i=1,\dots,N, \end{align} where all the notation is as in (\ref{cbos1.6}) and (\ref{cbos1.7}), except that here the intensity of the Poisson processes $N^{i}(t)$ is a time-dependent function $\lambda(t)$. It is assumed that $\lambda(t)$ is a decreasing function such that $\lambda(t) \rightarrow 0$ as $t \rightarrow \infty$. Also, in comparison with (\ref{cbos1.6}), there is no $\gamma(t)$ in the jump component of (\ref{cbos1.9}). Note that the compound Poisson process with constant jump intensity $\lambda$ is a L\'{e}vy process, but with time-dependent jump intensity $\lambda(t)$ it is not a L\'{e}vy process; rather, it is an additive process. An additive process is a generalization of a L\'{e}vy process which satisfies all the conditions of a L\'{e}vy process except stationarity of increments \cite{cbos14}. The SDEs (\ref{cbos1.9}) present another jump-diffusion CBO model, driven by an additive process. The analysis of the model (\ref{cbos1.9}) follows similar arguments, since the jump-diffusion SDEs (\ref{cbos1.9}) can also be written in terms of a Poisson random measure with intensity measure $\nu_{t}(dz)dt $, where $(\nu_{t})_{t\geq 0}$ is a family of L\'{e}vy measures. \textbf{Additional Model 3:} In the model (\ref{cboeq1.8}), the particles have idiosyncratic noise, which means they are driven by different Wiener processes and different compound Poisson processes. Instead, we can have a different jump-diffusion model in which the same Poisson noise drives the particle system but the jump sizes still vary independently across particles. This means that jumps arrive at the same time for all particles, but the particles jump with different jump sizes. We can write this CBO model as \begin{align} \label{cbo_neweq_2.17} dX^{i}_{N}(t) &= -\beta(t)(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dt + \sqrt{2}\sigma(t)\diag(X^{i}_{N}(t^{}) -\bar{X}_{N}(t^{}))dW^{i}(t) \nonumber\\ & \;\;\;\;+\int_{\mathbb{R}^{d}}\gamma(t)\diag(X^{i}_{N}(t^{-}) -\bar{X}_{N}(t^{-}))z\mathcal{N}^{}(dt,dz). \end{align} We compare the performance of the jump-diffusion CBO models (\ref{cboeq1.8}) and (\ref{cbo_neweq_2.17}) in Section~\ref{cbo_num_exp}. \subsubsection{Discussion}\label{cbo_sec_disc} Firstly, we discuss the dependence of the parameters $\beta(t)$, $\sigma(t)$, $\gamma(t)$ and $\lambda$ on the dimension $d$. The fact that the components $Z_{l}$ of $Z$ are i.i.d. results in the parameters being independent of the dimension, in a similar manner as for the model (\ref{cbos1.5}).
We illustrate this fact by fixing a vector $V \in \mathbb{R}^{d}$ and replacing $\bar{X}_{N}$ in (\ref{cboeq1.8}) by $V$. Then, using Ito's formula and the assumption made on $\rho_{\zstroke}(\zstroke)$, we have \begin{align} \frac{d}{dt}\mathbb{E}|X^{i}_{N}(t) - V|^{2} &= -2 \beta(t)\mathbb{E}|X^{i}_{N}(t) - V|^{2} + \sigma^{2}(t)\sum\limits_{j =1}^{d}\mathbb{E}(X^{i}_{N}(t) - V)_{j}^{2} \nonumber \\ & \;\;\;\; + \lambda \int_{\mathbb{R}^{d}}\big(|X^{i}_{N}(t) - V + \gamma(t)\diag(X^{i}_{N}(t) - V)z|^{2} - |X^{i}_{N}(t) -V|^{2}\big)\rho_{z}(z)dz \nonumber \\ & = (-2 \beta(t) + \sigma^{2}(t))\mathbb{E}|X^{i}_{N}(t) - V|^{2} + \lambda\int_{\mathbb{R}^{d}}\gamma^{2}(t)|\diag(X^{i}_{N}(t)-V)z|^{2}\rho_{z}(z)dz \nonumber \\ & = (-2 \beta(t) + \sigma^{2}(t))\mathbb{E}|X^{i}_{N}(t) - V|^{2} + \lambda \gamma^{2}(t)\sum\limits_{j=1}^{d}\int_{\mathbb{R}^{d}}(X^{i}_{N}(t)-V)_{j}^{2}z_{j}^{2}\prod_{l=1}^{d}\rho_{\zstroke}(z_{l})dz \nonumber \\ & = \big(-2 \beta(t) + \sigma^{2}(t) + \lambda \gamma^{2}(t)\mathbb{E}(\Zstroke^{2})\big)\mathbb{E}|X^{i}_{N}(t) - V|^{2}. \label{cboeq2.16} \end{align} We can choose $\beta(t)$, $\sigma(t)$, $\gamma(t)$, $\lambda$ and the distribution of $\Zstroke$ guaranteeing that there is a $t_{*} \geq 0$ such that $-2\beta(t) + \sigma^{2}(t)+ \lambda \gamma^{2}(t)\mathbb{E}(\Zstroke^{2}) < 0 $ for all $t \geq t_{*}$, and such a choice is independent of $d$. It is clear from (\ref{cboeq2.16}) that with this choice, $\mathbb{E}|X^{i}_{N}(t)-V|^{2}$, $i =1,\dots,N$, decay in time as $t\rightarrow \infty$. In the previous CBO models, there were only two terms, namely the drift term and the diffusion term. The drift tries to take the particles towards their instantaneous weighted average. The diffusion term helps in the exploration of the state space with the aim of finding a state with a better weighted average than the current one. The model (\ref{cbos1.6}) contains one extra term, which we call the jump term. Jumps help in intensifying the search in the search space and aid in avoiding premature convergence or trapping in local minima. This results in a more effective use of the interaction of particles. Moreover, the effect of jumps decays with time in (\ref{cbos1.6}) by virtue of the decreasing $\gamma (t)$. The reason for considering the model (\ref{cbos1.6}), where jumps affect only the initial period of time, is that we want the particles to explore more space faster at the beginning of the simulation and, as soon as the weighted average of the particles is in a vicinity of the global minimum, we do not want jumps to affect the convergence of the particles towards the consensus point lying in a close neighbourhood of the global minimum. Therefore, the time-dependent parameters and degeneracy of the coefficients help in exploiting the searched space. As a consequence, the jump-diffusion noise and degenerate time-dependent coefficients in the model (\ref{cbos1.6}) may help in keeping the balance of \textbf{\textit{exploration}} and \textbf{\textit{exploitation}} by the interacting particles over a period of time. We will continue this discussion on exploration and exploitation in Section~\ref{cbo_num_exp}, where the proposed CBO method is tested. \subsubsection{Implementation}\label{subsec_implemen} Let $0=t_{0}<\dots<t_{n}=T$ be a uniform partition of the time interval $[0,T]$ into $n$ sub-intervals such that $h:= t_{k+1} -t_{k}$, $k =0,\dots, n-1$, and $T = nh$. To approximate (\ref{cbos1.6}), we construct a Markov chain $(Y_{N}^{i}(t_{k}))$, $ k = 1,\dots, n$, using the following Euler scheme: \begin{align}\label{cbo_dis_ns} Y^{i}_{N}(t_{k+1}) &= Y_{N}^{i}(t_{k}) - \beta(t_{k})(Y^{i}_{N}(t_{k}) - \bar{Y}_{N}(t_{k}) ) h + \sigma(t_{k})\diag(Y^{i}_{N}(t_{k})- \bar{Y}_{N}(t_{k}))\Delta W^{i}(t_{k})\nonumber \\& \;\;\;\;+ \gamma(t_{k})\sum\limits_{j = N^{i}(t_{k})+1}^{N^{i}(t_{k+1})}\diag(Y^{i}_{N}(t_{k}) -\bar{Y}_{N}(t_{k})) Z^{i}_{j}, \end{align} where $\Delta W^{i}(t_{k}) = W^{i}(t_{k+1}) - W^{i}(t_{k})$ has a Gaussian distribution with mean $0$ and variance $h$, $Z^{i}_{j}$ denotes the $j$-th jump size of the $i$-th particle, $N^i(t)$ are independent Poisson processes with jump intensity $\lambda$, and \begin{align}\label{cbo_e2.18} \bar{Y}_{N}(t) = \sum\limits_{i=1}^{N}Y^{i}_{N}(t)\frac{e^{-\alpha f(Y^{i}_{N}(t))}}{\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(t))}}. \end{align} To implement the discretization scheme, we initialize the $N\times d$ matrix $Y$ at time $t_0=0$ and update it for $n$ iterations using (\ref{cbo_dis_ns}), calculating (\ref{cbo_e2.18}) at each iteration. The code implementing the above numerical scheme using an $N\times d$ matrix, which allows one to save memory and computation time, is available on \href{https://github.com/akashspace/Consensus-based-opmization}{GitHub}. A minimal illustrative sketch of the scheme is also given below. We will discuss the convergence of the scheme (\ref{cbo_dis_ns}) in Subsection~\ref{cbo_conv_ns}.
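For concreteness, the following is a minimal NumPy sketch of the scheme (\ref{cbo_dis_ns}) (our illustration, not the repository code linked above); the Gaussian jump-size distribution and the particular choices of $\beta(t)$, $\sigma(t)$, $\gamma(t)$ and $\lambda$ are assumptions made only for this example.
\begin{verbatim}
import numpy as np

def weighted_average(Y, f, alpha):
    # Instantaneous weighted average; Y has shape (N, d).
    fY = f(Y)
    w = np.exp(-alpha * (fY - fY.min()))        # common shift cancels in the ratio
    return (Y * w[:, None]).sum(axis=0) / w.sum()

def jump_cbo_euler(f, Y0, T, h, alpha, beta, sigma, gamma, lam, rng):
    # Euler scheme for the jump-diffusion CBO particle system:
    # beta, sigma, gamma are callables of time; lam is the Poisson jump intensity;
    # jump sizes are taken i.i.d. standard normal in each component (mean zero).
    Y = Y0.copy()
    N, d = Y.shape
    for k in range(int(T / h)):
        t = k * h
        ybar = weighted_average(Y, f, alpha)
        diff = Y - ybar
        dW = rng.normal(scale=np.sqrt(h), size=(N, d))
        n_jumps = rng.poisson(lam * h, size=(N, 1))        # jumps on (t_k, t_{k+1}]
        Zsum = rng.normal(size=(N, d)) * np.sqrt(n_jumps)  # sum of n_jumps N(0,1) variables
        Y = Y - beta(t) * diff * h + sigma(t) * diff * dW + gamma(t) * diff * Zsum
    return Y

# Toy usage on f(x) = |x|^2 with decaying jump strength gamma(t).
rng = np.random.default_rng(1)
f = lambda Y: np.sum(Y**2, axis=1)
Y0 = rng.uniform(-3.0, 3.0, size=(100, 5))
Y = jump_cbo_euler(f, Y0, T=5.0, h=0.01, alpha=30.0,
                   beta=lambda t: 1.0, sigma=lambda t: 0.7,
                   gamma=lambda t: np.exp(-t), lam=2.0, rng=rng)
print(weighted_average(Y, f, alpha=30.0))
\end{verbatim}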
\section{Well-posedness results}\label{sec_wel_pos} In Section~\ref{sec_well_pos_1}, we discuss well-posedness of the interacting particle system (\ref{cboeq1.8}) and prove a moment bound for this system. In Section~\ref{sec_well_pos_2}, we prove well-posedness and a moment bound for the mean field limit (\ref{cbomfsdep}) of the particle system (\ref{cboeq1.8}). \subsection{Well-posedness of the jump-diffusion particle system}\label{sec_well_pos_1} This section is focused on showing existence and uniqueness of the solution of (\ref{cboeq1.8}). We first introduce the notation required in this section. Let us denote $\textbf{x}_{N} := (x_{N}^{1},\dots,x_{N}^{N})^{\top} \in \mathbb{R}^{Nd}$, $\bar{\textbf{x}}_{N} = \sum_{i=1}^{N}x^{i}_{N}e^{-\alpha f(x^{i}_{N})}/\sum_{j=1}^{N}e^{-\alpha f(x^{j}_{N})}$, $\textbf{W}(t) := (W^{1}(t),\dots,W^{N}(t))^{\top}$, $\textbf{F}_{N}(\textbf{x}_{N}) := \big( F^{1}_{N}(\textbf{x}_{N}),\dots,F^{N}_{N}(\textbf{x}_{N})\big)^{\top} \in \mathbb{R}^{Nd}$ with $F_{N}^{i}(\textbf{x}_{N}) = (x_{N}^{i} - \bar{\textbf{x}}_{N}) \in \mathbb{R}^{d}$ for all $i = 1,\dots,N$, $\textbf{G}_{N}(\textbf{x}_{N}) : = \diag(\textbf{F}_{N}(\textbf{x}_{N})) \in \mathbb{R}^{Nd\times Nd}$ and $\textbf{J}(t) = ({J}^{1}(t),\dots,{J}^{N}(t))$, where $J^{i}(t)$ is from (\ref{cbo_neweq_2.8}), which implies $\int_{0}^{t}\gamma(t)\diag(F^{i}_{N}(\textbf{x}_{N}))d{J}^{i}(t) = \int_{0}^{t}\int_{\mathbb{R}^{d}}\gamma(t)\diag(F^{i}_{N}(\textbf{x}_{N}))z\mathcal{N}^{i}(dt,dz)$. Let $\ell(dz)$ denote the Lebesgue measure of $dz$; for the sake of convenience, we will use $dz$ in place of $\ell(dz)$ whenever there is no confusion. We can write the particle system (\ref{cboeq1.8}) using the above notation as \begin{align}\label{cboeq3.1} d\textbf{X}_{N}(t) = -\beta(t)\textbf{F}_{N}(\textbf{X}_{N}(t^{-}))dt + \sqrt{2}\sigma(t)\textbf{G}_{N}(\textbf{X}_{N}(t^{-}))d\textbf{W}(t) + \gamma(t)\textbf{G}_{N}(\textbf{X}_{N}(t^{-}))d\textbf{J}(t). \end{align} In order to show well-posedness of (\ref{cboeq3.1}), we need the following natural assumptions on the objective function $f$.
Let \begin{equation}\label{cbo_eq_fm} f_{m} := \inf f. \end{equation} \begin{assumption}\label{cboh3.1} $f_{m} > 0$. \end{assumption} \begin{assumption}\label{cboasu1.1} $f : \mathbb{R}^{d} \rightarrow \mathbb{R}$ is locally Lipschtiz continuous, i.e. there exists a positive function $L(R)$ such that \begin{equation*} |f(x) - f(y) | \leq L(R)|x-y|, \end{equation*} whenever $|x|$, $|y| \leq R$, $x$, $y \in \mathbb{R}^{d}$, $R>0$. \end{assumption} Assumption~\ref{cboasu1.1} is used for proving local Lipschitz continuity and linear growth of $F^{i}_{N}$ and $G^{i}_{N}$, $i=1,\dots,N$. Let $B(R) = \{ x\in \mathbb{R}^{d}\;;\;|x| \leq R\}$. \begin{lemma}\label{cbolemma3.1} Under Assumptions~\ref{cboh3.1}-\ref{cboasu1.1}, the following inequalities hold for any $\textbf{x}_{N}$, $\textbf{y}_{N} \in \mathbb{R}^{Nd}$ satisfying $\sup_{i=1,\dots,N}|x^{i}_{N}|, \sup_{i=1,\dots,N}|y^{i}_{N}| \leq R$ and for all $i = 1,\dots,N$: \begin{enumerate} \item $ |F^{i}_{N}(\textbf{x}_{N}) -F^{i}_{N}(\textbf{y}_{N})| \leq |x^{i}_{N} - y^{i}_{N}| + \frac{C(R)}{N^{1/2}}|\textbf{x}_{N} - \textbf{y}_{N}|,$ \item $ |F^{i}_{N}(\textbf{x}_{N})|^{2} \leq 2(|x_{N}^{i}|^{2} + |\textbf{x}_{N}|^{2}), $ \end{enumerate} where $C(R) = e^{\alpha (|f|_{L_{\infty}(B(R))} - f_{m}})\big( 1+ \alpha R L(R)+ \alpha R L(R) e^{\alpha (|f|_{L_{\infty}(B(R))} - f_{m})})$. \end{lemma} \begin{proof} Let us deal with the first inequality above. We have \begin{align*} |F^{i}_{N}(\textbf{x}_{N}) &- F^{i}_{N}(\textbf{y}_{N})| \leq |x^{i}_{N} - y^{i}_{N}| + \Bigg| \frac{\sum_{i=1}^{N}x^{i}_{N}e^{-\alpha f(x^{i}_{N})}}{\sum_{i=1}^{N}e^{-\alpha f(x^{i}_{N})}} - \frac{\sum_{i=1}^{N}y^{i}_{N}e^{-\alpha f(y^{i}_{N})}}{\sum_{i=1}^{N}e^{-\alpha f(y^{i}_{N})}}\Bigg| \\ & \leq |x^{i}_{N} - y^{i}_{N}| + \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(x^{j}_{N})}}\Bigg|\sum\limits_{i=1}^{N}\bigg(x^{i}_{N}e^{-\alpha f(x^{i}_{N})} - y^{i}_{N}e^{-\alpha f(y^{i}_{N})}\bigg)\Bigg| \\ & \;\;\;\; + \sum\limits_{i=1}^{N}|y^{i}_{N}|e^{-\alpha f(y^{i}_{N})}\Bigg| \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(x^{j}_{N})}} - \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(y^{j}_{N})}}\Bigg| \\ & \leq |x^{i}_{N} - y^{i}_{N}| + \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(x^{j}_{N})}}\Bigg(\Bigg|\sum\limits_{i=1}^{N}(x^{i}_{N} - y^{i}_{N})e^{-\alpha f(x^{i}_{N})}\Bigg| + \Bigg|\sum\limits_{i=1}^{N}y^{i}_{N}(e^{-\alpha f(x^{i}_{N})} - e^{-\alpha f(y^{i}_{N})})\Bigg|\Bigg) \\ & \;\;\;\; + \sum\limits_{i=1}^{N}|y^{i}_{N}|e^{-\alpha f(y^{i}_{N})}\Bigg| \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(x^{j}_{N})}} - \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(y^{j}_{N})}}\Bigg|. \end{align*} Using Jensen's inequality, we have \begin{align*} \frac{1}{\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(x^{i}_{N})}} &\leq e^{\alpha \frac{1}{N}\sum_{i=1}^{N}f(x^{i}_{N})}. 
\end{align*} Using the Cauchy-Bunyakowsky-Shwartz inequality, we get \begin{align*} &|F^{i}_{N}(\textbf{x}_{N}) - F^{i}_{N}(\textbf{y}_{N})| \leq |x^{i}_{N} - y^{i}_{N}| + e^{\alpha |f|_{L_{\infty}(B(R))}}e^{-\alpha f_{m}}\frac{1}{N}\sum_{i=1}^{N}\big|x^{i}_{N} - y^{i}_{N}\big| + \alpha e^{-\alpha f_{m}}e^{\alpha |f|_{L_{\infty}(B(R))}}L(R)\\ &\times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|y^{i}_{N}|^{2}\bigg)^{1/2}\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|x^{i}_{N} - y^{i}_{N}|^{2}\bigg)^{1/2} + \alpha e^{-2\alpha f_{m}}e^{2\alpha |f|_{L_{\infty}(B(R))}}\frac{L(R)}{N}\sum\limits_{i=1}^{N}|y^{i}_{N}| \sum\limits_{i=1}^{N}|x^{i}_{N} - y^{i}_{N}| \\ & \leq |x^{i}_{N} - y^{i}_{N}| + e^{\alpha |f|_{L_{\infty}(B(R))}}e^{-\alpha f_{m}}\frac{1}{N}\sum_{i=1}^{N}\big|x^{i}_{N} - y^{i}_{N}\big| + \alpha e^{-\alpha f_{m}}e^{\alpha |f|_{L_{\infty}(B(R))}}R L(R)\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|x^{i}_{N} - y^{i}_{N}|^{2}\bigg)^{1/2} \\ & + \alpha e^{-2\alpha f_{m}}e^{2\alpha |f|_{L_{\infty}(B(R))}}R L(R)\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|x^{i}_{N} - y^{i}_{N}|^{2}\bigg)^{1/2} \\ & \leq |x^{i}_{N} - y^{i}_{N}| + e^{\alpha (|f|_{L_{\infty}(B(R))} - f_{m})})\big( 1+ \alpha R L(R)+ \alpha R L(R) e^{\alpha (|f|_{L_{\infty}(B(R))} - f_{m})})\frac{1}{N^{1/2}}|\textbf{x}_{N} - \textbf{y}_{N}|. \end{align*} The second inequality directly follows from \begin{align*} |F^{i}_{N}(\textbf{x}_{N})| \leq |x^{i}_{N}| + |\textbf{x}_{N}|. \end{align*} \end{proof} \begin{theorem}\label{cbo_thrm_3.2} Let the initial condition $\textbf{X}_{N}(0)$ of the jump-diffusion SDE (\ref{cbos1.6}) satisfy $\mathbb{E}|\textbf{X}_{N}(0)|^2 < \infty$ and $\mathbb{E}|\Zstroke|^{2} < \infty$, then the $Nd-$dimensional system (\ref{cbos1.6}) has a unique strong solution $\textbf{X}_{N}(t)$ under Assumptions~\ref{cboh3.1}-\ref{cboasu1.1}. \end{theorem} \begin{proof} Note that $|G^{i}_{N}(\textbf{x}_{N}) - G^{i}_{N}(\textbf{y}_{N})| = |F^{i}_{N}(\textbf{x}_{N}) - F^{i}_{N}(\textbf{y}_{N})|$ and for all $i=1\dots,N$, \begin{align*} \int_{\mathbb{R}^{d}}|{F}^{i}_{N}(\textbf{x}_{N}){z}|^{2}\rho_{{z}}({z})d{z} &=\int_{\mathbb{R}^{d}}\sum\limits_{l=1}^{d}|(x_{N}^{i})_{l} - (y_{N}^{i})_{l}|^{2}|z^{}_{l}|^{2}\prod\limits_{k=1}^{d}\rho_{\zstroke}(z^{}_{k})d{z} \\ &= \sum\limits_{l=1}^{d}|(x_{N}^{i})_{l} - (y_{N}^{i})_{l}|^{2}\int_{\mathbb{R}^{d}}|z^{}_{l}|^{2}\prod\limits_{k=1}^{d}\rho_{\zstroke}(z^{}_{k})d{z} = |{F}^{i}_{N}(\textbf{x}_{N})|^{2} \mathbb{E}(\Zstroke)^{2}, \end{align*} where $(x^{i}_{N})_{l}$ means the $l-$th component of $d$-dimensional vector $x^{i}_{N}$ and $z^{}_{l}$ means the $l-$th component of $d-$dimensional vector $z^{}$. Therefore, from Lemma~\ref{cbolemma3.1}, we can say that we have a positive function $K(R)$ of $R > 0$ such that \begin{align*} |\textbf{F}_{N}(\textbf{x}_{N}) - \textbf{F}_{N}(\textbf{y}_{N}) |^{2} + |\textbf{G}_{N}(\textbf{x}_{N}) - \textbf{G}_{N}(\textbf{y}_{N}) |^{2}& + \sum_{i=1}^{N}\int_{\mathbb{R}^{d}}|\diag({F}^{i}_{N}(\textbf{x}_{N})-{F}^{i}_{N}(\textbf{y}_{N})){z}|^{2}\rho_{{z}}({z})d{z} \\ & \leq K(R) |\textbf{x}_{N}-\textbf{y}_{N}|, \end{align*} whenever $|\textbf{x}_{N}|$, $|\textbf{y}_{N}| \leq R$. Moreover, \begin{align*} |\textbf{F}_{N}(\textbf{x}_{N})|^{2} + |\textbf{G}_{N}(\textbf{x}_{N})|^{2} + \sum_{i=1}^{N}\int_{\mathbb{R}^{d}}|\diag({F}^{i}_{N}({x}_{N})){z}|^{2}\rho_{{z}}({z})d{z} \leq C|\textbf{x}_{N}|^{2}, \end{align*} where $C$ is some positive constant independent of $|\textbf{x}_{N}| $. Then the proof immediately follows from \cite[Theorem 1]{cbo19}. 
Consequently, by \cite[Lemma 2.3]{cbo15}, the following moment bound holds, provided $\mathbb{E}|\textbf{X}_{N}(0)|^{2p} <\infty$ and $\mathbb{E}|Z|^{2p} < \infty$: \begin{align}\label{cbo_eqn_3.2} \mathbb{E}\sup_{0\leq t\leq T}|\textbf{X}_{N}(t)|^{2p} \leq C_{N}, \end{align} where $C_{N}$ may depend on $N$ and $p \geq 1$.\end{proof} In the last step of the proof above, we highlighted that $C_{N}$ may depend on $N$. However, for the convergence analysis in later sections we need a uniform-in-$N$ bound for $\sup_{i=1,\dots,N}\mathbb{E}\big(\sup_{t\in[0,T]}|X^{i}_{N}(t)|^{2p}\big)$, $p \geq 1$, which we prove under the following assumptions, as in \cite{cbo2}. \begin{assumption}\label{cboh3.2} There exists a positive constant $K_{f}$ such that \begin{align*} |f(x) - f(y)| &\leq K_{f}(1+|x| + |y|)|x-y|, \;\;\text{for all}\;x, y \in \mathbb{R}^{d}. \end{align*} \end{assumption} \begin{assumption}\label{cboassu3.4} There is a constant $K_{u} > 0$ such that \begin{align*} f(x) - f_{m} &\leq K_{u}(1+ |x|^{2}), \;\; \text{for all}\; x \in \mathbb{R}^{d}. \end{align*} \end{assumption} \begin{assumption}\label{cboasm1.4} There exist constants $R>0$ and $K_{l} > 0$ such that \begin{equation*} f(x) - f_{m} \geq K_{l}|x|^{2},\;\; \text{for}\; |x|\geq R. \end{equation*} \end{assumption} As one can see, we need the stronger Assumption~\ref{cboh3.2}, as compared to Assumption~\ref{cboasu1.1}, to obtain a moment bound uniform in $N$. Assumptions~\ref{cboassu3.4}-\ref{cboasm1.4} ensure that the objective function $f$ has quadratic growth at infinity. From \cite[Lemma 3.3]{cbo2}, we have the following result under Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4}: \begin{align}\label{y4.2} \sum_{i=1}^{N}|x_{N}^{i}|^{2} \frac{e^{-\alpha f(x_{N}^{i})}}{\sum_{j=1}^{N}e^{-\alpha f(x_{N}^{j})}} \leq L_{1} + L_{2}\frac{1}{N}\sum_{i=1}^{N}|x_{N}^{i}|^{2}, \end{align} where $L_{1} = R^{2} + L_{2}$, $L_{2} = 2\frac{K_{u}}{K_{l}}\Big(1 + \frac{1}{\alpha K_{l} R^{2}}\Big)$, and $R$ is from Assumption~\ref{cboasm1.4}. \begin{lemma}\label{cbolemma3.3} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Let $p\geq 1$, $\sup_{i=1,\dots,N}\mathbb{E}|X^{i}_{N}(0)|^{2p} < \infty $, and $\mathbb{E}|Z|^{2p} < \infty$. Then \begin{equation*} \sup_{i\in\{1,\dots,N\}}\mathbb{E}\sup_{0\leq t\leq T}|X^{i}_{N}(t)|^{2p} \leq K_{m}, \end{equation*} where $X_{N}^{i}(t)$ is from (\ref{cboeq1.8}) and $K_{m}$ is a positive constant independent of $N$. \end{lemma} \begin{proof} Let $p$ be a positive integer. Using Ito's formula, we have \begin{align*} |X_{N}^{i}(t)|^{2p} &= |X^{i}_{N}(0)|^{2p} -2p \int_{0}^{t}\beta(s)|X_{N}^{i}(s)|^{2p-2}\big(X_{N}^{i}(s)\cdot(X_{N}^{i}(s) - \bar{X}_{N}(s))\big)ds \\ & \;\;\;\;+ 2 \sqrt{2}p\int_{0}^{t}\sigma(s)|X^{i}_{N}(s)|^{2p-2}\big(X_{N}^{i}(s) \cdot \diag(X_{N}^{i}(s) - \bar{X}_{N}(s))dW^{i}(s)\big) \\ & \;\;\;\;+4p(p-1)\int_{0}^{t}\sigma^{2}(s)|X_{N}^{i}(s)|^{2p-4}|\diag(X_{N}^{i}(s)-\bar{X}_{N}(s))X_{N}^{i}(s)|^{2}ds \\ &\;\;\;\; +2 p\int_{0}^{t}\sigma^{2}(s)|X_{N}^{i}(s)|^{2p-2}|\diag(X_{N}^{i}(s) - \bar{X}_{N}(s))|^{2}ds \\ & \;\;\;\; + \int_{0}^{t}\int_{\mathbb{R}^{d}}\big(|X_{N}^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p} - |X_{N}^{i}(s^{-})|^{2p}\big)\mathcal{N}^{i}(ds,dz).
\end{align*} First taking supremum over $0\leq t\leq T$ and then taking expectation, we get \begin{align}\label{cbo_eq_3.3} &\mathbb{E}\sup_{0 \leq t\leq T}|X^{i}_{N}(t)|^{2p} \leq \mathbb{E}|X^{i}_{N}(0)|^{2p} + C \mathbb{E}\int_{0}^{T}|X_{N}^{i}(s)|^{2p-2}\big|X_{N}^{i}(s)\cdot(X_{N}^{i}(s) - \bar{X}_{N}(s))\big|ds \nonumber \\ & \;\;\;\; + C\mathbb{E}\sup_{0 \leq t\leq T}\bigg|\int_{0}^{t}|X^{i}_{N}(s)|^{2p-2}\big(X_{N}^{i}(s) \cdot \diag(X_{N}^{i}(s) - \bar{X}_{N}(s))dW^{i}(s)\big)\bigg| \nonumber\\ & \;\;\;\;+ C\mathbb{E}\int_{0}^{T}|X_{N}^{i}(s)|^{2p-4}|\diag(X_{N}^{i}(s)-\bar{X}_{N}(s))X_{N}^{i}(s)|^{2}ds \nonumber\\ &\;\;\;\; + C\mathbb{E}\int_{0}^{T}|X_{N}^{i}(s)|^{2p-2}|\diag(X_{N}^{i}(s) - \bar{X}_{N}(s)|^{2}ds \nonumber \\ & \;\;\;\;+ C\mathbb{E}\sup_{0\leq t\leq T}\int_{0}^{t}\int_{\mathbb{R}^{d}}\big(|X_{N}^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p} - |X_{N}^{i}(s^{-})|^{2p}\big)\mathcal{N}^{i}(ds,dz). \end{align} To deal with the second term in (\ref{cbo_eq_3.3}), we use Young's inequality and obtain \begin{align*} |X_{N}^{i}(s)|^{2p-2}\big|X_{N}^{i}(s)\cdot(X_{N}^{i}(s) - \bar{X}_{N}(s))\big| &\leq |X_{N}^{i}(s)|^{2p} + |X_{N}^{i}(s)|^{2p-1}|\bar{X}_{N}(s)| \\ & \leq \frac{4p-1}{2p}|X_{N}^{i}(s)|^{2p} + \frac{1}{2p}|\bar{X}_{N}(s)|^{2p}. \end{align*} To ascertain a bound on $|\bar{X}_{N}(s)|^{2p}$, we first apply Jensen's inequality to $ |\bar{X}_{N}(s)|^{2}$ to get \begin{equation*} |\bar{X}_{N}(s)|^{2} = \Bigg|\sum_{i = 1}^{N}X_{N}^{i}(s)\frac{e^{-\alpha f(X_{N}^{i}(s))}}{\sum_{j=1}^{N}e^{-\alpha f(X_{N}^{j}(s))}}\Bigg|^{2} \leq \sum_{i=1}^{N}|X_{N}^{i}(s)|^{2}\frac{e^{-\alpha f(X_{N}^{i}(s))}}{\sum_{j=1}^{N}e^{-\alpha f(X_{N}^{j}(s))}}, \end{equation*} then using (\ref{y4.2}), we obtain $ |\bar{X}_{N}(s)|^{2} \leq L_{1} + L_{2}\frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2}, $ which on applying the elementary inequality, $ (a + b )^{p} \leq 2^{p-1}(a^{p} + b^{p}), \; a,b \in \mathbb{R}_{+}$ and Jensen's inequality, gives \begin{align*} |\bar{X}_{N}(s)|^{2p} \leq 2^{p-1}\Big(L_{1}^{p} + L_{2}^{p}\frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p}\Big). \end{align*} As a consequence of the above calculations, we get \begin{align}\label{cbo_eq_3.4} |X_{N}^{i}(s)|^{2p-2}\big|X_{N}^{i}(s)\cdot(X_{N}^{i}(s) - \bar{X}_{N}(s))\big| \leq C\Big(1 + |X^{i}_{N}(s)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p}\Big), \end{align} where $C$ is a positive constant independent of $N$. 
Using the Burkholder-Davis-Gundy inequality, we get \begin{align} \mathbb{E}&\sup_{0 \leq t\leq T}\bigg|\int_{0}^{t}|X^{i}_{N}(s)|^{2p-2}\big(X_{N}^{i}(s) \cdot \diag(X_{N}^{i}(s) - \bar{X}_{N}(s))dW^{i}(s)\big)\bigg|\nonumber \\ & \leq \mathbb{E}\bigg(\int_{0}^{T} \big(|X^{i}_{N}(s)|^{2p-2}\big(X_{N}^{i}(s) \cdot \diag(X_{N}^{i}(s) - \bar{X}_{N}(s))\big)\big)^{2}ds\bigg)^{1/2} \nonumber \\ & \leq \mathbb{E}\Bigg(\sup_{0\leq t \leq T } |X_{N}^{i}(t)|^{2p-1}\bigg(\int_{0}^{T}|X_{N}^{i}(s) - \bar{X}_{N}(s))|^{2}ds\bigg)^{1/2}\Bigg),\nonumber \end{align} which on applying generalized Young's inequality ($ab \leq (\epsilon a^{q_{1}})/q_{1} + b^{q_{2}}/(\epsilon^{q_{2}/q_{1}}q_{2}),\; \epsilon, q_{1}, q_{2} >0, 1/q_{1} + 1/q_{2} = 1$) yields \begin{align} \mathbb{E}&\sup_{0 \leq t\leq T}\bigg|\int_{0}^{t}|X^{i}_{N}(s)|^{2p-2}\big(X_{N}^{i}(s) \cdot \diag(X_{N}^{i}(s) - \bar{X}_{N}(s))dW^{i}(s)\big)\bigg|\nonumber \\ & \leq \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T}|X^{i}_{N}(t)|^{2p} + C\mathbb{E}\bigg(\int_{0}^{T}|X_{N}^{i}(s) - \bar{X}_{N}(s))|^{2}ds\bigg)^{p}\nonumber \\ & \leq \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T}|X^{i}_{N}(t)|^{2p} + C\mathbb{E}\bigg(\int_{0}^{T}|X_{N}^{i}(s) - \bar{X}_{N}(s))|^{2p}ds\bigg),\label{cbo_eq_3.5} \end{align} where in the last step we have utilized Holder's inequality. Now, we move on to obtain estimates which are required to deal with fourth and fifth term in (\ref{cbo_eq_3.3}). Using Young's inequality, we have \begin{align} A_{1} := |X_{N}^{i}(s)|^{2p-4}(|X_{N}^{i}(s)|^{2} &- (X_{N}^{i}(s)\cdot\bar{X}_{N}(s)))^{2} \leq 2|X_{N}^{i}(s)|^{2p} + 2|X_{N}^{i}(s)|^{2p-2}|\bar{X}_{N}(s)|^{2}\nonumber \\ & \leq \frac{4p-2}{p}|X_{N}^{i}(s)|^{2p} + \frac{2}{p}|\bar{X}_{N}(s)|^{2p}. \end{align} In the same way, applying Young's inequality, we obtain \begin{align} A_{2} := |X_{N}^{i}(s)|^{2p-2}|\diag(X_{N}^{i}(s) &- \bar{X}_{N}(s))|^{2} \leq 2|X_{N}^{i}(s)|^{2p} + 2|X_{N}^{i}(s)|^{2p-2}|\bar{X}_{N}(s)|^{2} \nonumber \\ & \leq \frac{4p-2}{p}|X_{N}^{i}(s)|^{2p} + \frac{2}{p}|\bar{X}_{N}(s)|^{2p}. \end{align} Following the same procedure based on (\ref{y4.2}), which we followed to obtain bound (\ref{cbo_eq_3.4}), we also get \begin{align}\label{cbo_eq_3.8} A_{1} + A_{2} \leq C\Big(1 + |X_{N}^{i}(s)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p} \Big), \end{align} where $C$ is a positive constant independent of $N$. It is left to deal with the last term in (\ref{cbo_eq_3.3}). Using the Cauchy-Bunyakowsky-Schwartz inequality, we get \begin{align*} &\mathbb{E}\sup_{0\leq t\leq T}\int_{0}^{t}\int_{\mathbb{R}^{d}}\big(|X_{N}^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p} - |X_{N}^{i}(s^{-})|^{2p}\big)\mathcal{N}^{i}(ds,dz) \\ & \leq \mathbb{E}\sup_{0\leq t\leq T}\int_{0}^{t}\int_{\mathbb{R}^{d}}\bigg(2^{2p-1}\big(|X_{N}^{i}(s^{-})|^{2p} + |\gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p}\big) - |X_{N}^{i}(s^{-})|^{2p}\bigg)\mathcal{N}^{i}(ds,dz) \\ & \leq C\mathbb{E}\int_{0}^{T}\int_{\mathbb{R}^{d}}\big(|X_{N}^{i}(s^{-})|^{2p} + |\gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p}\big) \mathcal{N}^{i}(ds,dz) \\ & \leq C\mathbb{E}\int_{0}^{T}\int_{\mathbb{R}^{d}}(|X_{N}^{i}(s)|^{2p} + |\gamma(s)\diag(X_{N}^{i}(s)-\bar{X}_{N}(s))z|^{2p}\big)\rho_{z}(z)dz \\ & \leq C\mathbb{E}\int_{0}^{T}\Big(|X_{N}^{i}(s)|^{2p} + |X_{N}^{i}(s) - \bar{X}_{N}(s)|^{2p}\int_{\mathbb{R}^{d}}|z|^{2p}\rho_{z}(z)dz\Big)ds. 
\end{align*} We have \begin{align*} |X_{N}^{i}(s) - \bar{X}_{N}(s)|^{2p} &\leq 2^{2p-1}\big(|X_{N}^{i}(s)|^{2p} + |\bar{X}_{N}^{i}(s)|^{2p}\big) \leq C\Big( 1 + |X_{N}^{i}(s)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p}\Big), \end{align*} and hence \begin{align} &\mathbb{E}\sup_{0\leq t\leq T}\int_{0}^{t}\int_{\mathbb{R}^{d}}\big(|X_{N}^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z|^{2p} - |X_{N}^{i}(s^{-})|^{2p}\big)\mathcal{N}^{i}(ds,dz) \nonumber \\ & \leq C\mathbb{E}\int_{0}^{T}\Big(1+ |X_{N}^{i}(s)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p}\Big)ds,\label{cbo_eq_3.9} \end{align} where $C >0$ does not depend on $N$. Using (\ref{cbo_eq_3.4}), (\ref{cbo_eq_3.5}), (\ref{cbo_eq_3.8}) and (\ref{cbo_eq_3.9}) in (\ref{cbo_eq_3.3}), we get \begin{align*} \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T}|X_{N}^{i}(t)|^{2p} &\leq \mathbb{E}|X_{N}^{i}(0)|^{2p} + C\mathbb{E}\int_{0}^{T}\Big(1 + |X_{N}^{i}(s)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s)|^{2p}\Big)ds \end{align*} and \begin{align*} \mathbb{E}\sup_{0\leq t\leq T}|X_{N}^{i}(t)|^{2p} &\leq 2\mathbb{E}|X_{N}^{i}(0)|^{2p} + C\mathbb{E}\int_{0}^{T}\Big(1 + \sup_{0\leq u\leq s}|X_{N}^{i}(u)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}\sup_{0\leq u\leq s}|X_{N}^{i}(u)|^{2p}\Big)ds. \end{align*} Taking supremum over $\{1,\dots,N\}$, we obtain \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\sup_{0\leq t\leq T}|X_{N}^{i}(t)|^{2p} &\leq 2\sup_{i=\{1,\dots,N\}}\mathbb{E}|X_{N}^{i}(0)|^{2p} + C \bigg(1 + \int_{0}^{T}\sup_{i = 1,\dots,N}\mathbb{E}\sup_{0\leq u \leq s}|X_{N}^{i}(u)|^{2p} ds\bigg), \end{align*} which gives our targeted result for positive integer valued $p$ by applying Gr\"{o}nwall's lemma (note that we can apply Gr\"{o}nwall's lemma due to (\ref{cbo_eqn_3.2})). We can extend the result to non-integer values of $p \geq 1$ using Holder's inequality. \end{proof} \subsection{Well-posedness of mean-field jump-diffusion SDEs} \label{sec_well_pos_2} In this section, we first introduce Wasserstein metric and state Lemma~\ref{cboblw} which is crucial for establishing well-posedness of the mean-field limit. Then, we prove existence and uniqueness of the McKean-Vlasov jump-diffusion SDEs (\ref{cbomfsde}) in Theorem~\ref{mf_wel_pos_th}. Let $\mathbb{D}([0,T];\mathbb{R}^{d})$ be the space of $\mathbb{R}^{d}$ valued c\'{a}dl\'{a}g functions and $\mathcal{P}_{p}(\mathbb{R}^{d}),\; p\geq 1$, be the space of probability measures on the measurable space $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$ such that for any $\mu \in \mathcal{P}_{p}(\mathbb{R}^{d})$, $\int_{\mathbb{R}^{d}}|x|^{p}\mu(dx)< \infty$, and which is equipped with the $p$-Wasserstein metric \begin{equation*} \mathcal{W}_{p}(\mu,\vartheta) := \inf_{\pi \in \prod(\mu,\vartheta)}\Big( \int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}|x-y|^{p}\pi(dx,dy)\Big)^{\frac{1}{p}}, \end{equation*} where $\prod(\mu,\vartheta)$ is the set of couplings of $\mu,\vartheta \in \mathcal{P}_{p}(\mathbb{R}^{d})$ \cite{cbo33}. Let $\mu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\int_{\mathbb{R}^{d}}|x|^{2}\mu(dx) \leq K$. 
Then, using Jensen's inequality, we have \begin{align*} e^{-\alpha \int_{\mathbb{R}^{d}}f(x)\mu(dx) } \leq \int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mu(dx), \end{align*} and the simple rearrangement together with Assumption~\ref{cboassu3.4}, gives \begin{align}\label{cbol3.4} \frac{e^{-\alpha f_{m}}}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mu(dx)} \leq e^{\alpha(\int_{\mathbb{R}^{d}}f(x)\mu(dx) - f_{m})} \leq e^{\alpha K_{u}\int_{\mathbb{R}^{d}}(1 + |x|^{2})\mu(dx)} \leq C_{K}, \end{align} where $C_{K} > 0$ is a constant. We will also need the following notation: \begin{align*} \bar{X}^{\mu} = \frac{\int_{\mathbb{R}^{d}} xe^{-\alpha f(x)}\mu(dx)}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mu(dx)}, \end{align*} where $\mu \in \mathcal{P}_{4}(\mathbb{R}^{d})$. The next lemma is required for proving well-posedness of the McKean-Vlasov SDEs (\ref{cbomfsdep}). Its proof is available in \cite[Lemma 3.2]{cbo2}. \begin{lemma}\label{cboblw} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} hold and there exists a constant $K>0$ such that $\int |x|^{4}\mu(dx) \leq K$ and $\int |y|^{4} \vartheta(dy) \leq K$ for all $\mu,\vartheta \in \mathcal{P}_{4}(\mathbb{R}^{d})$, then the following inequality is satisfied: \begin{equation*} |\bar{X}^{\mu} - \bar{X}^{\vartheta}| \leq C\mathcal{W}_{2}(\mu,\vartheta), \end{equation*} where $C>0$ is independent of $\mu$ and $\vartheta$. \end{lemma} \begin{theorem}\label{mf_wel_pos_th} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} hold, and let $\mathbb{E}|X(0)|^{4} < \infty $ and $\int_{\mathbb{R}^{d}}|z|^{4}\rho_{z}(z)dz < \infty$. Then, there exists a unique nonlinear process $X \in \mathbb{D}([0,T];\mathbb{R}^{d})$, $T>0$ which satisfies the McKean-Vlasov SDEs (\ref{cbomfsdep}) in the strong sense. \end{theorem} \begin{proof} Let $v \in C([0,T];\mathbb{R}^{d})$. Consider the following SDEs: \begin{align} dX_{v}(t) &= -\beta(t)(X_{v}(t) - v(t))dt + \sigma(t)\diag(X_{v}(t) - v(t))dW(t) \nonumber \\ & \;\;\;\;+ \gamma(t)\int_{\mathbb{R}^{d}}\diag(X_{v}(t^{-}) - v(t)))z\mathcal{N}(dt,dz) \label{cbo_neweq_3.14} \end{align} for any $t \in[0,T]$. Note that $v(t)$ is a deterministic function of $t$, therefore the coefficients of SDEs (\ref{cbo_neweq_3.14}) only depend on $x$ and $t$. The coefficients are globally Lipschitz continuous and have linear growth in $x$. The existence and uniqueness of a process $X_{v} \in \mathbb{D}([0,T];\mathbb{R}^{d})$ satisfying SDEs with L\'{e}vy noise (\ref{cbo_neweq_3.14}) follows from \cite[pp. 311-312]{cbos11}. We also have $\int_{\mathbb{R}^{d}}|x|^{4}\mathcal{L}_{X_{v}(t)}(dx) = \mathbb{E}|X_{v}(t)|^{4} \leq \sup_{t\in[0,T]}\mathbb{E}|X_{v}(t)|^{4} \leq K$, where $K$ is a positive constant depending on $v$ and $T$, and $\mathcal{L}_{X_{v}(t)}$ represents the law of $X_{v}(t)$. We define a mapping \begin{align} \mathbb{T} : C([0,T];\mathbb{R}^{d}) \rightarrow C([0,T];\mathbb{R}^{d}),\;\;\mathbb{T}(v) = \bar{X}_{v}, \end{align} where \begin{align*} \mathbb{T}v(t) & = \bar{X}_{v}(t) = \mathbb{E}(X_{v}(t)e^{-\alpha f(X_{v}(t))})\Big/\mathbb{E}(e^{-\alpha f(X_{v}(t))}) \\ & = \int_{\mathbb{R}^{d}}xe^{-\alpha f(x)}\mathcal{L}_{X_{v}(t)}(dx) \bigg/\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mathcal{L}_{X_{v}(t)}(dx)= \bar{X}^{\mathcal{L}_{X_{v}(t)}}(t). \end{align*} Let $\delta \in (0,1)$. 
For all $t, t+\delta \in (0,T)$, Ito's isometry provides \begin{align} \mathbb{E}|X_{v}(t + \delta) - X_{v}(t)|^{2} &\leq C\int_{t}^{t+\delta}\mathbb{E}|X_{v}(s) - v(s)|^{2}ds \nonumber \\ & \;\;\;\;+ \int_{t}^{t+\delta}\int_{\mathbb{R}^{d}}\mathbb{E}|X_{v}(s) - v(s)|^{2}|z|^{2}\rho(z)dzds \leq C \delta, \label{cbo_neweq_3.17} \end{align} where $C$ is a positive constant independent of $\delta$. Using Lemma~\ref{cboblw} and (\ref{cbo_neweq_3.17}), we obtain \begin{align*} |\bar{X}_{v}(t+\delta ) - \bar{X}_{v}(t)| &= |\bar{X}^{\mathcal{L}_{X_{v}(t+\delta)}}(t+\delta) - \bar{X}^{\mathcal{L}_{X_{v}(t)}}(t)| \leq C\mathcal{W}_{2}(\mathcal{L}_{X_{v}(t+\delta)}, \mathcal{L}_{X_{v}(t)}) \\ & \leq C\big(\mathbb{E}|X_{v}(t+\delta) - X_{v}(t)|^{2}\big)^{1/2} \leq C|\delta|^{1/2}, \end{align*} where $C$ is a positive constant independent $\delta$. This implies the H\"{o}lder continuity of the map $t \rightarrow \bar{X}_{v}(t)$. Therefore, the compactness of $\mathbb{T}$ follows from the compact embedding $C^{0,\frac{1}{2}}([0,T];\mathbb{R}^{d}) \hookrightarrow C([0,T];\mathbb{R}^{d}) $. Using Ito's isometry, we have \begin{align} \mathbb{E}|X_{v}(t)|^{2} &\leq 4\bigg(\mathbb{E}|X_{v}(0)|^{2} + \mathbb{E}\bigg|\int_{0}^{t}\beta(s)(X_{v}(s) - v(s))ds\bigg|^{2} + \mathbb{E}\bigg|\int_{0}^{t}\sigma(s)\diag(X_{v}(s) - v(s))dW(s)\bigg|^{2} \nonumber \\ & \;\;\;\; + \mathbb{E}\bigg|\int_{0}^{t}\gamma(s)\diag(X_{v}(s^-) - v(s))z\mathcal{N}(ds,dz)\bigg|^{2}\bigg) \nonumber \\ & \leq C\bigg(1 + \int_{0}^{t}\mathbb{E}|X_{v}(s) - v(s)|^{2}ds\bigg) \leq C\bigg(1+ \int_{0}^{t}(\mathbb{E}|X_{v}(s)|^{2} + |v(s)|^{2}) ds\bigg), \label{cbo_eq_3.17} \end{align} where $C$ is a positive constant independent of $v$. Moreover, we have the following result under Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} \cite[Lemma 3.3]{cbo2}: \begin{align} |\bar{X}_{v}(t)|^{2} \leq L_{1} + L_{2}\mathbb{E}|X_{v}(t)|^{2}, \label{cbo_neweq_3.18} \end{align} where $L_{1}$ and $L_{2}$ are from (\ref{y4.2}). Consider a set $\mathcal{S} = \{ v\in C([0,T];\mathbb{R}^{d}) : v = \epsilon \mathbb{T}v, \; 0\leq \epsilon \leq 1\} $. The set $\mathcal{S}$ is non-empty due to the fact that $\mathbb{T}$ is compact (see the remark after Theorem~10.3 in \cite{104}). Therefore, for any $v \in \mathcal{S}$, we have the corresponding unique process $X_{v}(t) \in \mathbb{D}([0,T];\mathbb{R}^{d})$ satisfying (\ref{cbo_neweq_3.14}), and $\mathcal{L}_{X_{v}(t)}$ represents the law of $X_{v}(t)$, such that the following holds due to (\ref{cbo_neweq_3.18}): \begin{align} |v(s)|^{2} = \epsilon^{2} |\mathbb{T}v(s)|^{2} = \epsilon^{2} |\bar{X}_{v}(s)|^{2} \leq \epsilon^{2} \big(L_{1} + L_{2}\mathbb{E}|X(s)|^{2}) \label{cbo_neweq_3.19} \end{align} for all $s \in [0,T]$. Substituting (\ref{cbo_neweq_3.19}) in (\ref{cbo_eq_3.17}), we get \begin{align*} \mathbb{E}|X_{v}(t)|^{2} \leq C\bigg(1+\int_{0}^{t}\mathbb{E}|X_{v}(s)|^{2}ds\bigg), \end{align*} which on applying Gr\"{o}nwall's lemma gives \begin{align} \mathbb{E}|X_{v}(t)|^{2} \leq C, \label{cbo_neweq_3.20} \end{align} where $C$ is independent of $v$. Due to (\ref{cbo_neweq_3.19}) and (\ref{cbo_neweq_3.20}), we can claim the boundedness of the set $\mathcal{S}$. Therefore, from the Leray-Schauder theorem \cite[Theorem~10.3]{104} there exists a fixed point of the mapping $\mathbb{T}$. This proves existence of the solution of (\ref{cbomfsdep}). 
Let $v_{1}$ and $v_{2}$ be two fixed points of the mapping $\mathbb{T}$ and let us denote the corresponding solutions of (\ref{cbo_neweq_3.14}) by $X_{v_{1}}$ and $X_{v_{2}}$. Using Ito's isometry, we get \begin{align} \mathbb{E}|X_{v_{1}}(t) - X_{v_{2}}(t)|^{2} \leq \mathbb{E}|X_{v_{1}}(0) - X_{v_{2}}(0)|^{2} + C\int_{0}^{t}\big(\mathbb{E}|X_{v_{1}}(s) -X_{v_{2}}(s)|^{2} + |v_{1}(s) - v_{2}(s)|^{2}\big)ds. \label{cbo_neweq_3.21} \end{align} Note that $\mathcal{S}$ is a bounded set and, by definition, $v_{1}$ and $v_{2}$ belong to $\mathcal{S}$. Then, we can apply Lemma~\ref{cboblw} to ascertain \begin{align*} |v_{1}(s) - v_{2}(s)|^{2} = |\bar{X}_{v_{1}}(s) - \bar{X}_{v_{2}}(s)|^{2} \leq C\mathcal{W}_{2}^{2}(\mathcal{L}_{X_{v_{1}}(s)} , \mathcal{L}_{X_{v_{2}}(s)}) \leq C \mathbb{E}|X_{v_{1}}(s) - X_{v_{2}}(s)|^{2}. \end{align*} Using the above estimate, Gr\"{o}nwall's lemma and the fact that $X_{v_{1}}(0) = X_{v_{2}}(0)$ in (\ref{cbo_neweq_3.21}), we get uniqueness of the solution of (\ref{cbomfsdep}). \end{proof} \begin{theorem}\label{cbolem3.6} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Let $p\geq 1$, $\mathbb{E}|X(0)|^{2p} < \infty $ and $\mathbb{E}|Z|^{2p}< \infty$. Then the following holds: \begin{align*} \mathbb{E} \sup_{0\leq t \leq T}|X(t)|^{2p} \leq K_{p}, \end{align*} where $X(t)$ satisfies (\ref{cbomfsdep}) and $K_{p}$ is a positive constant. \end{theorem} \begin{proof} Recall that under the assumptions of this theorem, Theorem~\ref{mf_wel_pos_th} guarantees existence of a strong solution of (\ref{cbomfsdep}). Let $p$ be a positive integer. Let us denote $ \theta_{R} = \inf\{s \geq 0\; ; \; |X(s)| \geq R\}$. Using Ito's formula, we obtain \begin{align} |X(t)|^{2p} &= |X(0)|^{2p} - 2p \int_{0}^{t}\beta(s)|X(s)|^{2p-2}\big(X(s)\cdot(X(s) - \bar{X}(s))\big)ds \nonumber \\ & \;\;\;\; + 2\sqrt{2}p \int_{0}^{t}\sigma(s)|X(s)|^{2p-2}\big(X(s)\cdot(\diag(X(s)- \bar{X}(s))dW(s))\big) \nonumber\\ & \;\;\;\; + 4p(p-1)\int_{0}^{t}\sigma^{2}(s)|X(s)|^{2p-4}|\diag(X(s) -\bar{X}(s))X(s)|^{2}ds \nonumber\\ & \;\;\;\; + 2p\int_{0}^{t}\sigma^{2}(s)|X(s)|^{2p-2}|\diag(X(s) - \bar{X}(s))|^{2}ds \nonumber\\ & \;\;\;\; + \int_{0}^{t}\int_{\mathbb{R}^{d}}(|X(s^{-}) + \gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p} - |X(s^{-})|^{2p})\mathcal{N}(ds,dz).
\nonumber \end{align} First taking suprema over $0\leq t\leq T\wedge \theta_{R}$ and then taking expectation on both sides, we get \begin{align} \mathbb{E}&\sup_{0\leq t\leq T \wedge \theta_{R}}|X(t)|^{2p} \leq \mathbb{E}|X(0)|^{2p} + C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}|X(s)|^{2p-2}\big|X(s)\cdot(X(s) - \bar{X}(s))\big|ds \nonumber \\ & \;\;\;\; + C\mathbb{E}\sup_{0\leq t\leq T\wedge\theta_{R}}\bigg|\int_{0}^{t}|X(s)|^{2p-2}\big(X(s)\cdot(\diag(X(s)- \bar{X}(s))dW(s))\big)\bigg| \nonumber \\ & \;\;\;\;+ C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}|X(s)|^{2p-4}|\diag(X(s) -\bar{X}(s))X(s)|^{2}ds\nonumber \\ & \;\;\;\; +C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}|X(s)|^{2p-2}|\diag(X(s) - \bar{X}(s))|^{2}ds \nonumber\\ & \;\;\;\;+ \mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}\int_{0}^{t}\int_{\mathbb{R}^{d}}(|X(s^{-}) + \gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p} - |X(s^{-})|^{2p})\mathcal{N}(ds,dz).\label{w3.5} \end{align} To deal with the second term in (\ref{w3.5}), we use Young's inequality and ascertain \begin{align} &|X(s)|^{2p-2}\big|X(s)\cdot(X(s) -\bar{X}(s))\big| \leq |X(s)|^{2p} + |X(s)|^{2p-1}|\bar{X}(s)| \nonumber \\& \leq \frac{4p-1}{2p}|X(s)|^{2p} + \frac{1}{2p}|\bar{X}(s)|^{2p} \leq C(|X(s)|^{2p} + |\bar{X}(s)|^{2p}).\label{cbo_eq_3.14}\end{align} Using Burkholder-Davis-Gundy inequality, we have \begin{align} \mathbb{E}&\sup_{0\leq t\leq T\wedge\theta_{R}}\bigg|\int_{0}^{t}|X(s)|^{2p-2}\big(X(s)\cdot(\diag(X(s)- \bar{X}(s))dW(s))\big)\bigg| \nonumber \\ & \leq \mathbb{E}\bigg(\int_{0}^{T\wedge \theta_{R}}|X(s)|^{4p-2}|X(s)- \bar{X}(s)|^{2}ds\bigg)^{1/2} \nonumber \\ & \leq \mathbb{E}\Bigg(\sup_{0\leq t \leq T \wedge \theta_{R}} |X(t)|^{2p-1}\bigg(\int_{0}^{T\wedge \theta_{R}}|X(s) - \bar{X}(s))|^{2}ds\bigg)^{1/2}\Bigg).\label{cbo_eq_3.15} \end{align} We apply generalized Young's inequality $\big(ab \leq (\epsilon a^{q_{1}})/q_{1} + b^{q_{2}}/(\epsilon^{q_{2}/q_{1}}q_{2}),\; \epsilon, q_{1},q_{2} >0, 1/q_{1} + 1/q_{2} = 1$\big) and Holder's inequality on the right hand side of (\ref{cbo_eq_3.15}) to get \begin{align} \mathbb{E}&\sup_{0 \leq t\leq T\wedge \theta_{R}}\bigg|\int_{0}^{t}|X(s)|^{2p-2}\big(X(s) \cdot \diag(X(s) - \bar{X}(s))dW(s)\big)\bigg|\nonumber \\ & \leq \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}|X(t)|^{2p} + C\mathbb{E}\bigg(\int_{0}^{T\wedge \theta_{R}}|X(s) - \bar{X}(s)|^{2}ds\bigg)^{p}\nonumber \\ & \leq \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}|X(t)|^{2p} + C\mathbb{E}\bigg(\int_{0}^{T\wedge \theta_{R}}|X(s) - \bar{X}(s)|^{2p}ds\bigg)\nonumber \\ & \leq \frac{1}{2}\mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}|X(t)|^{2p} + C\mathbb{E}\bigg(\int_{0}^{T\wedge \theta_{R}} \big(|X(s)|^{2p} + |\bar{X}(s)|^{2p} \big) ds\bigg).\label{cbo_eq_3.16} \end{align} We have the following estimate to use in the fourth term in (\ref{w3.5}): \begin{align} |X(s)|^{2p-4}&|\diag(X(s)- \bar{X}(s))X(s)|^{2} \leq |X(s)|^{2p-4}(|X(s)|^{2} + (X(s)\cdot\bar{X}(s)))^{2} \nonumber \\ &\leq 2|X(s)|^{2p} + 2|X(s)|^{2p-2}|\bar{X}(s)|^{2} \leq C\big(|X(s)|^{2p} + |\bar{X}(s)|^{2p}\big).\label{w3.8} \end{align} We make use of Minkowski's inequality to get \begin{align*} |X(s)|^{2p-2}|\diag(X(s) - \bar{X}(s))|^{2} = |X(s)|^{2p-2}|X(s) - \bar{X}(s)|^{2} \leq 2|X(s)|^{2p} + 2|X(s)|^{2p-2}|\bar{X}(s)|^{2}, \end{align*} then Young's inequality implies \begin{align} |X(s)|^{2p-2}|X(s) - \bar{X}(s)|^{2} \leq C(|X(s)|^{2p} + |\bar{X}(s)|^{2p}). \label{w3.9} \end{align} Now, we find an estimate for the last term in (\ref{w3.5}). 
Using the Cauchy-Bunyakowsky-Schwartz inequality, we obtain \begin{align} \mathbb{E}&\sup_{0\leq t\leq T\wedge \theta_{R}}\int_{0}^{t}\int_{\mathbb{R}^{d}}(|X(s^{-}) + \gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p} - |X(s^{-})|^{2p})\mathcal{N}(ds,dz)\nonumber \\ & \leq \mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}\int_{0}^{t}\int_{\mathbb{R}^{d}}\Big(2^{2p-1}\big(|X(s^{-})|^{2p} + |\gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p}\big)- |X(s^{-})|^{2p}\Big)\mathcal{N}(ds,dz) \nonumber \\ & \leq C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}\int_{\mathbb{R}^{d}}(|X(s^{-})|^{2p} + |\gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p})\mathcal{N}(ds,dz). \nonumber \end{align} Using Doob's optional stopping theorem \cite[Theorem 2.2.1]{cbos11}, we get \begin{align} \mathbb{E}&\sup_{0\leq t\leq T\wedge \theta_{R}}\int_{0}^{t}\int_{\mathbb{R}^{d}}(|X(s^{-}) + \gamma(s)\diag(X(s^{-})-\bar{X}(s^{-}))z|^{2p} - |X(s^{-})|^{2p})\mathcal{N}(ds,dz)\nonumber \\ & \leq C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}\int_{\mathbb{R}^{d}}(|X(s)|^{2p} + |\gamma(s)\diag(X(s)-\bar{X}(s))z|^{2p})\rho_{z}(z)dzds \nonumber \\ & \leq C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}\Big(|X(s)|^{2p} + |\bar{X}(s)|^{2p}\Big)\Big(1+\int_{\mathbb{R}^{d}}|z|^{2p}\rho_{z}(z)dz\Big)ds \nonumber \\ & \leq C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}\big(|X(s)|^{2p} + |\bar{X}(s)|^{2p}\big)ds. \label{w3.10} \end{align} We have the following result under Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} \cite[Lemma 3.3]{cbo2}: \begin{align} |\bar{X}(s)|^{2} \leq L_{1} + L_{2}\mathbb{E}|X(s)|^{2}, \label{cbo_neweq_3.29} \end{align} where $L_{1}$ and $L_{2}$ are from (\ref{y4.2}). Substituting (\ref{cbo_eq_3.14}), (\ref{cbo_eq_3.16})-(\ref{cbo_neweq_3.29}) in (\ref{w3.5}) and using H\"{o}lder's inequality, we arrive at the following bound: \begin{align*} \mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}|X(t)|^{2p} &\leq 2\mathbb{E}|X(0)|^{2p} + C \mathbb{E}\int_{0}^{T\wedge \theta_{R}}(|X(s)|^{2p} + |\bar{X}(s)|^{2p})ds \\ & \leq C + C\mathbb{E}\int_{0}^{T\wedge \theta_{R}}(1 + |X(s)|^{2p} + \mathbb{E}|X(s)|^{2p})ds \\ & \leq C + C\int_{0}^{T} \mathbb{E}\sup_{0\leq u\leq s \wedge \theta_{R}}|X(u)|^{2p} ds, \end{align*} which on using Gr\"{o}nwall's lemma gives \begin{align*} \mathbb{E}\sup_{0\leq t\leq T\wedge \theta_{R}}|X(t)|^{2p} \leq C, \end{align*} where $C$ is independent of $R$. Then, letting $R\rightarrow \infty$ and applying Fatou's lemma give the desired result. \end{proof} \section{Convergence results}\label{cbo_conv_res} In Section \ref{cbo_sec_gl_min}, we prove the convergence of $X(t)$, which is the mean-field limit of the particle system (\ref{cboeq1.8}), towards the global minimizer. This convergence proof is based on the Laplace principle. Our approach in Section~\ref{cbo_sec_gl_min} is similar to \cite[Appendix A]{cbo3}. The main result (Theorem~\ref{cbo_thrm_4.3}) of Section~\ref{cbo_sec_gl_min} differs from \cite{cbo3} in three respects. First, in our model (\ref{cboeq1.8}), the parameters are time-dependent. Second, we need to treat the jump part of (\ref{cboeq1.8}). Third, the analysis in \cite{cbo3} is done for a quadratic loss function, whereas the assumptions that we impose on the objective function here are less restrictive. In Section~\ref{cbo_sec_mf}, we prove convergence of the interacting particle system (\ref{cboeq1.8}) towards the mean-field limit (\ref{cbomfsdep}) as $N\rightarrow \infty$.
In Section~\ref{cbo_conv_ns}, we prove uniform in $N$ convergence of the Euler scheme (\ref{cbo_dis_ns}) to (\ref{cboeq1.8}) as $h \rightarrow 0$, where $h$ is the discretization step. \subsection{Convergence towards the global minimum}\label{cbo_sec_gl_min} The aim of this section is to show that the non-linear process $X(t)$ driven by the distribution dependent SDEs (\ref{cbomfsde}) converges to a point $x^{*}$ which lies in a close vicinity of the global minimum, which we denote by $x_{\min}$. To this end, we will first prove that $\var(t) := \mathbb{E}|X(t) - \mathbb{E}(X(t))|^{2} $ satisfies a differential inequality which, with a particular choice of parameters, implies exponential decay of $\var(t)$ as $t \rightarrow \infty$. We also obtain a differential inequality for $M(t) := \mathbb{E}\big(e^{-\alpha f(X(t))}\big )$. The approach that we follow in this section is along the lines of \cite{cbo2,cbo3} but with necessary adjustments for the jump term in (\ref{cbomfsde}). \begin{lemma} \label{cbo_prop_4.1} Under Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4}, the following inequality is satisfied for $\var(t)$: \begin{align} \frac{d}{dt}\var(t) &\leq - \bigg(2\beta(t) - \big(2\sigma^{2}(t) +\lambda\gamma^{2}(t)\mathbb{E}|\Zstroke|^{2}\big)\Big( 1+ \frac{e^{-\alpha f_{m}}}{M(t^{})} \Big) \bigg)\var(t^{}). \label{h4.1} \end{align} \end{lemma} \begin{proof} Using Ito's formula, we have \begin{align} |X(t) &- \mathbb{E}X(t)|^{2} = |X(0) - \mathbb{E}X(0)|^{2}-2\int_{0}^{t}\beta(s)(X(s) - \mathbb{E}X(s))\cdot(X(s) - \bar{X}(s))ds \nonumber \\ &- 2\int_{0}^{t}(X(s) - \mathbb{E}X(s))\cdot d\mathbb{E}X(s) + 2\int_{0}^{t}\sigma^{2}(s)|X(s) - \bar{X}(s)|^{2}ds \nonumber \\ & + 2\sqrt{2}\int_{0}^{t}\sigma(s)(X(s) - \mathbb{E}X(s))\cdot \big(\diag(X(s) - \bar{X}(s)) dW(s)\big)\nonumber \\ & + \int_{0}^{t}\int_{\mathbb{R}^{d}}\big\{|X(s^{-}) - \mathbb{E}X(s^{-}) + \gamma(s)\diag(X(s^{-}) - \bar{X}(s^{-}))z|^{2} - |X(s^{-})- \mathbb{E}(X(s^{-}))|^{2}\big\} \mathcal{N}(ds,dz). \nonumber \end{align} Taking expectation on both sides, we get \begin{align} \var(t) &= \var(0) -2\int_{0}^{t}\beta(s)\mathbb{E}\big((X(s^{}) - \mathbb{E}X(s^{}))\cdot(X(s^{}) - \bar{X}(s^{}))\big)ds + 2\int_{0}^{t}\sigma^{2}(s)\mathbb{E}|X(s^{}) - \bar{X}(s^{})|^{2}ds \nonumber \\ & \;\;\;\; + \lambda \int_{0}^{t}\gamma^{2}(s)\int_{\mathbb{R}^{d}}\mathbb{E}|\diag(X(s^{}) - \bar{X}(s^{}))z|^{2}\rho_{z}(z)dzds \nonumber \\ & = \var(0) -\int_{0}^{t} \Big(2\beta(s)\var(s^{}) - \big(2\sigma^{2}(s) + \lambda \gamma^{2}(s)\mathbb{E}|\Zstroke|^{2}\big)\mathbb{E}|X(s^{}) - \bar{X}(s^{})|^{2} \Big) ds, \label{cbo_neweq_4.2} \end{align} since \begin{align*} &\mathbb{E}\big((X(t^{}) - \mathbb{E}X(t^{}))\cdot(\mathbb{E}X(t^{}) - \bar{X}(t^{}))\big) = 0, \\ & |X(t^{}) - \mathbb{E}X(t^{}) + \diag(X(t^{}) -\bar{X}(t^{}))z|^{2} = |X(t^{}) - \mathbb{E}X(t^{})|^{2} + |\diag(X(t^{}) - \bar{X}(t^{}))z|^{2} \\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + 2\big((X(t^{}) - \mathbb{E}X(t^{}))\cdot\diag(X(t^{}) - \bar{X}(t^{}))z\big), \\ & \int_{\mathbb{R}^{d}}\big((X(t^{}) - \mathbb{E}X(t^{}))\cdot \diag(X(t^{})-\bar{X}(t^{}))z\big)\rho_{z}(z)dz = 0. \end{align*} Moreover, $ \int_{\mathbb{R}^{d}} \sum_{l=1}^{d} (X(t^{}) - \bar{X}(t^{}))_{l}^{2}z_{l}^{2} \rho_{z}(z)dz = \sum_{l=1}^{d} (X(t^{}) - \bar{X}(t^{}))_{l}^{2} \int_{\mathbb{R}^{d}}z_{l}^{2}\prod_{i=1}^{d}\rho_{\zstroke}(z_{i})dz = |X(t^{}) - \bar{X}(t^{})|^{2}\mathbb{E}|\Zstroke|^{2} $, since each component $Z_{l}$ of $Z$ is distributed as $\Zstroke$.
We also have \begin{align}\mathbb{E}|X(t^{}) - \bar{X}(t^{})|^{2} = \var(t) + |\mathbb{E}X(t^{}) - \bar{X}(t^{})|^{2}. \label{cbo_eq_4.2} \end{align} We estimate the term $|\mathbb{E}(X(t^{})) - \bar{X}(t^{})|^{2}$ using Jensen's inequality as \begin{align}\label{cboeq4.2} |\mathbb{E}X(t^{}) - \bar{X}(t^{})|^{2} & = \bigg| \mathbb{E}X(t^{}) - \frac{\mathbb{E}\big(X(t^{})e^{-\alpha f(X(t^{}))}\big)}{\mathbb{E}e^{-\alpha f(X(t^{}))}}\bigg|^{2} = \bigg|\mathbb{E} \bigg( \Big(\mathbb{E}X(t^{}) - X(t^{})\Big)\frac{e^{-\alpha f(X(t^{}))}}{\mathbb{E}e^{-\alpha f(X(t^{}))} }\bigg)\bigg|^{2} \nonumber\\ & = \bigg|\int_{\mathbb{R}^{d}}\big(\mathbb{E}X(t) - x\big) \vartheta_{X(t)}(dx)\bigg|^{2} \leq \int_{\mathbb{R}^{d}}\big|\mathbb{E}X(t) - x\big|^{2} \vartheta_{X(t)}(dx)\nonumber \\ & = \mathbb{E}\bigg(|X(t^{}) - \mathbb{E}(X(t^{}))|^{2} \frac{e^{-\alpha f(X(t^{}))}}{\mathbb{E}e^{-\alpha f(X(t^{}))}}\bigg)\leq \frac{e^{-\alpha f_{m}}}{\M(t^{})}\var(t^{}), \end{align} where $\vartheta_{X(t)}(dx) = \big(e^{-\alpha f(x)}/\mathbb{E}(e^{-\alpha f(X(t))})\big) \mathcal{L}_{X(t)}(dx) $, which implies $\int_{\mathbb{R}^{d}}\vartheta_{X(t)}(dx) = 1$. Using (\ref{cbo_eq_4.2}) and (\ref{cboeq4.2}) in (\ref{cbo_neweq_4.2}) gives the targeted result. \end{proof} To prove the main result of this section, we need an additional inequality, which is proved under the following assumption. \begin{assumption}\label{cbohas4.1} $f \in C^{2}(\mathbb{R}^{d})$ and there exist three constants $K_{1}, K_{2}, K_{3} > 0$ such that the following inequalities are satisfied for sufficiently large $\alpha$: \begin{itemize} \item[(i)] $(\nabla f(x) -\nabla f(y))\cdot (x-y) \geq -K_{1}|x-y|^{2}$ for all $x$, $ y \in \mathbb{R}^{d}$. \item[(ii)] $ \alpha\Big(\frac{\partial f}{\partial x_{i}}\Big)^{2} -\frac{\partial^{2} f}{\partial x_{i}^{2}} \geq -K_{2}$ for all $i = 1,\dots,d$ and $x \in \mathbb{R}^{d}$. \item[(iii)] $\mathbb{E}f(x+ \diag(x)Z) - f(x) \leq K_{3} |x|^{2}\mathbb{E}|\Zstroke|^{2} $, \\ where $Z$ is a $d$-dimensional random vector and $\Zstroke$ is the real-valued random variable introduced in Section~\ref{sec_our_mod}. \end{itemize} \end{assumption} We note that for $f(x) = 1+ |x|^{2}$, $x \in \mathbb{R}^{d}$, we have $\mathbb{E}|x+ \diag(x)Z|^{2} - |x|^{2} = \mathbb{E}|\diag(x)Z|^{2} = \sum_{l=1}^{d}\mathbb{E}(x_{l}Z_{l})^{2}$, since $\mathbb{E}\Zstroke = 0$. Moreover, each $Z_{l}$ is distributed as $\Zstroke$, and hence $\mathbb{E}|x+ \diag(x)Z|^{2} - |x|^{2} = |x|^{2}\mathbb{E}|\Zstroke|^{2}$. The conditions $(i)$ and $(ii)$ are also straightforward to verify for $1+|x|^{2}$ (see the illustration below). This ensures that the class of functions satisfying the above assumption is non-empty and is consistent with Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4}. The most important implication is that the above assumption allows $f$ to have quadratic growth, which is important for several loss functions in machine learning problems. In \cite{cbo2}, the authors assumed that $f \in C^{2}(\mathbb{R}^{d})$, that the norm of the Hessian of $f$ is bounded by a constant, and that the gradient and the Laplacian of $f$ satisfy the inequality $\Delta f \leq c_{0} + c_{1}|\nabla f|^{2}$, where $c_{0}$ and $ c_{1}$ are positive constants. Therefore, in Assumption~\ref{cbohas4.1}, we have imposed restrictions on $f$ similar in spirit to those of \cite{cbo2} as far as regularity is concerned, but adapted to our jump-diffusion case with component-wise Wiener noise.
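For the reader's convenience, we spell out the verification of $(i)$ and $(ii)$ for $f(x) = 1+|x|^{2}$: since $\nabla f(x) = 2x$, we have \begin{align*} (\nabla f(x) -\nabla f(y))\cdot (x-y) = 2|x-y|^{2} \geq -K_{1}|x-y|^{2} \;\;\text{for any } K_{1}>0, \qquad \alpha\Big(\frac{\partial f}{\partial x_{i}}\Big)^{2} -\frac{\partial^{2} f}{\partial x_{i}^{2}} = 4\alpha x_{i}^{2} - 2 \geq -2, \end{align*} so that $(ii)$ holds with $K_{2} = 2$, while, by the computation above, $(iii)$ holds with $K_{3} = 1$. Condition $(iii)$ can also be checked numerically. The following minimal Python sketch is only an illustration (it is not part of the analysis) and assumes, for concreteness, standard normal components for $Z$, so that $\mathbb{E}\Zstroke = 0$ and $\mathbb{E}|\Zstroke|^{2} = 1$; it compares both sides of $(iii)$ for $f(x) = 1+|x|^{2}$ by Monte Carlo:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, M = 3, 10**6
x = np.array([1.0, -2.0, 0.5])        # an arbitrary test point in R^d
Z = rng.standard_normal((M, d))       # mean-zero jump components, E|Z'|^2 = 1

f = lambda y: 1.0 + np.sum(y**2, axis=-1)
lhs = np.mean(f(x + x * Z) - f(x))    # E f(x + diag(x)Z) - f(x)
rhs = np.sum(x**2) * 1.0              # K_3 |x|^2 E|Z'|^2 with K_3 = 1
print(lhs, rhs)                       # the two values should agree closely
\end{verbatim}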
\begin{lemma}\label{cbo_lem_4.2} The following inequality holds under Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} and \ref{cbohas4.1}: \begin{align} \frac{d}{dt}\M^{2}(t) &\geq - 4\alpha e^{-\alpha f_{m}}\Big(\beta(t)K_{1} + \sigma^{2}(t)K_{2} + \lambda \gamma^{2}(t)K_{3}\mathbb{E}|\Zstroke|^{2}\Big)\var(t^{}).\label{h4.2} \end{align} \end{lemma} \begin{proof} Using Ito's formula, we get \begin{align*} e^{-\alpha f(X(t))}& = e^{-\alpha f(X(0))} + \int_{0}^{t}\alpha \beta(s) e^{-\alpha f(X(s))}\nabla f(X(s))\cdot (X(s) -\bar{X}(s)) ds \\ & \;\;\;\; - \sqrt{2} \int_{0}^{t}\alpha \sigma(s) e^{-\alpha f(X(s))}\nabla f(X(s))\cdot \big(\diag(X(s) -\bar{X}(s)) dW(s)\big) \\ & \;\;\;\;+ \int_{0}^{t}\sigma^{2}(s)e^{-\alpha f(X(s))}\sum\limits_{j = 1}^{d}\bigg( \big(X(s) - \bar{X}(s)\big)^{2}_{j} \Big(\alpha^{2} \Big(\frac{\partial f(X(s))}{\partial x_{j}}\Big)^{2} - \alpha\frac{\partial^{2}f(X(s))}{\partial x_{j}^{2}}\Big)\bigg)ds \\ & \;\;\;\;+ \int_{0}^{t}\int_{\mathbb{R}^{d}}\Big(e^{-\alpha f(X(s^{-}) + \gamma(s)\diag(X(s^{-}) - \bar{X}(s^{-}))z)} - e^{-\alpha f(X(s^{-}))}\Big) \mathcal{N}(ds,dz). \end{align*} Taking expectation on both sides and writing the result in differential form yield \begin{align*} d\mathbb{E}e^{-\alpha f(X(t))} & = \alpha \beta(t)\mathbb{E}\big(e^{-\alpha f(X(t^{}))}(\nabla f(X(t^{})) -\nabla f(\bar{X}(t^{})))\cdot (X(t^{}) - \bar{X}(t^{}))\big) dt \\ & +\sigma^{2}(t)\mathbb{E}\Bigg(e^{-\alpha f(X(t^{}))}\sum\limits_{j = 1}^{d}\bigg( \big(X(t^{}) - \bar{X}(t^{})\big)^{2}_{j} \Big(\alpha^{2} \Big(\frac{\partial f(X(t^{}))}{\partial x_{j}}\Big)^{2} - \alpha\frac{\partial^{2}f(X(t^{}))}{\partial x_{j}^{2}}\Big)\bigg)\Bigg)dt \\ & +\lambda \int_{\mathbb{R}^{d}}\mathbb{E}\Big(e^{-\alpha f(X(t^{}) + \gamma(t)\diag(X(t^{}) - \bar{X}(t^{}))z)} - e^{-\alpha f(X(t^{}))}\Big) \rho_{z}(z)dz dt, \end{align*} where we have used the fact that $ \mathbb{E}\big[e^{-\alpha f(X(t))}(\nabla f(\bar{X}(t))\cdot (X(t) -\bar{X}(t)))\big] = 0$. Note that $|e^{-\alpha f(x)} - e^{-\alpha f(y)}| \leq \alpha e^{-\alpha f_{m}}|f(x) - f(y)| $, which means $e^{-\alpha f(x)} - e^{-\alpha f(y)} \geq -\alpha e^{-\alpha f_{m}} |f(x) -f(y)| $. Using Assumption~\ref{cbohas4.1}, we get \begin{align*} d\mathbb{E}e^{-\alpha f(X(t))} \geq - \alpha e^{-\alpha f_{m}}\big(\beta(t)K_{1} + \sigma^{2}(t)K_{2} + \lambda \gamma^{2}(t)K_{3}\mathbb{E}|\Zstroke|^{2}\big)\mathbb{E}|X(t^{}) - \bar{X}(t^{})|^{2}dt. \end{align*} From (\ref{cbo_eq_4.2}) and (\ref{cboeq4.2}), we have \begin{align*} \mathbb{E}|X(t) - \bar{X}(t)|^{2} \leq \var(t^{}) + \frac{e^{-\alpha f_{m}}}{\M(t^{})}\var(t^{}) \leq 2 \frac{e^{-\alpha f_{m}}}{\M(t^{})}\var(t^{}). \end{align*} This implies \begin{align*} d \M(t) \geq -2 \alpha e^{-\alpha f_{m}}\big(\beta(t)K_{1} + \sigma^{2}(t)K_{2} + \lambda \gamma^{2}(t) K_{3}\mathbb{E}|\Zstroke|^{2}\big) \frac{e^{-\alpha f_{m}}}{\M(t^{})}\var(t^{}) dt. \end{align*} Multiplying both sides by $2\M(t)$ and using $e^{-\alpha f_{m}} \leq 1$ (recall that $f_{m} > 0$), we arrive at (\ref{h4.2}), which is what we aimed to prove in this lemma. \end{proof} Our next objective is to show that $\mathbb{E}(X(t))$ converges to $x^{*}$ as $t \rightarrow \infty$, where $x^{*}$ is close to $x_{\min}$, i.e. the point at which $f(x)$ attains its minimum value, $f_{m}$. Applying Laplace's method (see e.g. \cite[Chap.
3]{cbo38} and also \cite{cbo1,cbo2}), we can calculate the following asymptotics: for any compactly supported probability measure $\rho \in \mathcal{P}(\mathbb{R}^{d})$ with $x_{\min} \in \text{supp}(\rho)$, we have \begin{align} \lim\limits_{\alpha \rightarrow \infty}\Bigg(-\frac{1}{\alpha}\log\bigg(\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}d\rho(x)\bigg)\Bigg) = f_{m} > 0. \label{cbo_neweq_4.6} \end{align} Based on the above asymptotics, we aim to prove that \begin{align*} f(x^{*}) \leq f_{m} + \Gamma(\alpha) + \mathcal{O}\bigg(\frac{1}{\alpha}\bigg), \end{align*} where a function $\Gamma(\alpha) \rightarrow 0 $ as $ \alpha \rightarrow \infty$. We introduce the following function: \begin{align*} \chi(t) = 2\beta(t) - \big(2\sigma^{2}(t) +\lambda\gamma^{2}(t)\mathbb{E}|\Zstroke|^{2}\big)\Big( 1+ \frac{2e^{-\alpha f_{m}}}{M(0)} \Big). \end{align*} We choose $\alpha$, $\beta(t)$, $\sigma(t)$, $\gamma(t)$, $\lambda$, distribution of $\Zstroke$ such that \begin{itemize} \item[(i)] $\chi(t)$ is a continuous function of time $t$, \item[(ii)] $\chi(t) > 0$ for all $t \geq 0$, and \item[(iii)] $ \chi(t) $ attains its minimum which we denote as $\chi_{\min}$. \end{itemize} We also introduce \begin{align*} \eta &:= 4\alpha e^{-\alpha f_{m}}\var(0)\frac{K_{1} \beta + K_{2}\sigma^{2}(0) + K_{3}\lambda \gamma^{2}(0)\mathbb{E}|\Zstroke|^{2}}{ \M^{2}(0)\chi_{\min}}, \end{align*} where $\beta $ is introduced in Section~\ref{sec_our_mod}, and $K_{1}$, $K_{2}$ and $K_{3}$ are from Assumption~\ref{cbohas4.1}. The next theorem is the main result of this section. We will be assuming that $\eta \leq 3/4$ which can always be achieved by choosing sufficiently small $\var(0)$. \begin{theorem}\label{cbo_thrm_4.3} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} and \ref{cbohas4.1} hold. Let us also assume that $\mathcal{L}_{X(0)}$ is compactly supported and $x_{\min} \in \text{supp}(\mathcal{L}_{X(0)})$. If $\eta \leq 3/4$, then $\var(t)$ exponentially decays to zero as $t \rightarrow \infty$. Further, there exists an $x^{*} \in \mathbb{R}^{d}$ such that $X(t) \rightarrow x^{*}$ a.s., $\mathbb{E}(X(t)) \rightarrow x^{*}$, $\bar{X}(t) \rightarrow x^{*}$ as $ t \rightarrow \infty$ and the following inequality holds: \begin{align*} f(x^{*}) \leq f_{m} + \Gamma(\alpha) + \frac{\log{2}}{\alpha}, \end{align*} where function $\Gamma(\alpha) \rightarrow 0 $ as $ \alpha \rightarrow \infty$. \end{theorem} \begin{proof} Let $ T^{*} = \sup\big\{ t \;;\; \M(s) > \frac{\M(0)}{2}, \text{for all}\; s \in [0,t]\big\}. $ Observe that $T^{*} > 0$ by definition. Let us assume that $T^{*} < \infty$. We can deduce that the following holds by definition of $T^{*}$ for all $t\in [0,T^{*}]$: \begin{align*} 2\beta(t) - \big(2\sigma^{2}(t) +\lambda\gamma^{2}(t)\mathbb{E}|\Zstroke|^{2}\big)\Big( 1+ \frac{e^{-\alpha f_{m}}}{M(t^{})} \Big) \geq 2\beta(t) - \big(2\sigma^{2}(t) +\lambda\gamma^{2}(t)\mathbb{E}|\Zstroke|^{2}\big)\Big( 1+ \frac{2e^{-\alpha f_{m}}}{M(0)} \Big) = \chi(t), \end{align*} where the left hand side of the above inequality is from (\ref{h4.1}). Using Lemma~\ref{cbo_prop_4.1}, the fact that $\chi(t)$ is continuous and $\chi(t) > 0 $ for all $t \geq 0$, we get for all $t \in [0, T^{*}]$: \begin{align*} \var(t) \leq \var(0)e^{-\chi(t)t} \leq \var(0)e^{-\chi_{\min}t}. 
\end{align*} We have from Lemma~\ref{cbo_lem_4.2} for all $t \in (0,T^{*}]$: \begin{align*} \M^{2}(t) &\geq \M^{2}(0) - 4\alpha e^{-\alpha f_{m}}\int_{0}^{t} \big(K_{1} \beta(s) + K_{2}\sigma^{2}(s) + K_{3}\lambda \gamma^{2}(s) \mathbb{E}|\Zstroke|^{2}\big)\var(s)ds \\ & \geq \M^{2}(0) - 4\alpha e^{-\alpha f_{m}}\big(K_{1} \beta + K_{2}\sigma^{2}(0) + K_{3}\lambda\gamma^{2}(0)\mathbb{E}|\Zstroke|^{2}\big)\var(0) \int_{0}^{t}e^{-\chi_{\min}s}ds \\ & = \M^{2}(0) - 4\alpha e^{-\alpha f_{m}}\big(K_{1} \beta + K_{2}\sigma^{2}(0) + K_{3}\lambda\gamma^{2}(0)\mathbb{E}|\Zstroke|^{2}\big)\frac{\var(0)}{\chi_{\min}}\big(1 - e^{-\chi_{\min}t}\big)\\ & > \M^{2}(0) - 4\alpha e^{-\alpha f_{m}}\big(K_{1} \beta + K_{2}\sigma^{2}(0) + K_{3}\lambda\gamma^{2}(0)\mathbb{E}|\Zstroke|^{2}\big)\frac{\var(0)}{\chi_{\min}} \geq \frac{\M^{2}(0)}{4}, \end{align*} where in the last step we have used the fact that $\eta \leq 3/4$. This shows that $\M(t) > \M(0)/2$ on the set $(0,T^{*}]$. Moreover, since $\M(t)$ is continuous in $t$, there exists an $\epsilon > 0$ such that $\M(t) > \M(0)/2$ for all $t \in [T^{*},T^{*}+\epsilon)$. This contradicts the assumption $T^{*} < \infty$, and hence $T^{*} = \infty$. Therefore, \begin{equation} \var(t) \leq \var(0) e^{- \chi_{\min}t}\;\; \text{and}\;\; \M(t) > \M(0)/2 \; \text{ for all}\; t > 0. \label{cbo_neweq_4.7} \end{equation} This implies that $\var(t)$ decays exponentially to zero as $t \rightarrow \infty$. From (\ref{cboeq4.2}) and (\ref{cbo_neweq_4.7}), we get \begin{align} \label{cbo_eq_4.7} |\mathbb{E}X(t) - \bar{X}(t)|^{2} \leq e^{-\alpha f_{m}} \frac{\var(t)}{\M(t)} \leq Ce^{-\chi_{\min} t},\;\;\;\; t > 0, \end{align} where $C$ is a positive constant independent of $t$. Taking expectation on both sides of (\ref{cbomfsdep}) (recall that $\mathbb{E}\Zstroke = 0$), applying H\"{o}lder's inequality and using (\ref{cbo_eq_4.2}) gives \begin{align} \bigg| \frac{d}{dt}\mathbb{E}X(t)\bigg| &\leq \beta \mathbb{E}|X(t^{})- \bar{X}(t^{})| \leq \beta (\mathbb{E}|X(t^{}) - \bar{X}(t^{})|^{2})^{1/2} \leq \beta \big(\var(t) + |\mathbb{E}X(t^{}) - \bar{X}(t^{})|^{2}\big)^{1/2} \nonumber \\ & \leq Ce^{-\chi_{\min}t/2},\;\;\;\; t > 0, \label{cbo_eq_4.8} \end{align} where $C$ is a positive constant independent of $t$. It is clear from (\ref{cbo_eq_4.8}) that there exists an $x^{*} \in \mathbb{R}^{d}$ such that $ \mathbb{E}(X(t)) \rightarrow x^{*}$ as $t \rightarrow \infty$. Further, $\bar{X}(t) \rightarrow x^{*}$ as $ t \rightarrow \infty$ due to (\ref{cbo_eq_4.7}). Let $\ell > 0$. Using Chebyshev's inequality, we have \begin{align*} \mathbb{P}(|X(t) - \mathbb{E}X(t)| \geq e^{-\ell t}) \leq \frac{\var(t)}{e^{-2\ell t}} \leq Ce^{-(\chi_{\min} - 2\ell )t}, \end{align*} where $C>0$ is independent of $t$. If we choose $\ell < \chi_{\min}/2$, then the Borel-Cantelli lemma implies that $|X(t) - \mathbb{E}X(t)| \rightarrow 0$ as $t \rightarrow \infty$ a.s. This implies $X(t) \rightarrow x^{*}$ a.s. Application of the bounded convergence theorem gives the convergence result: $\mathbb{E}e^{-\alpha f(X(t))} \rightarrow e^{-\alpha f(x^{*})} $ as $t \rightarrow \infty$. Then, due to (\ref{cbo_neweq_4.7}), we obtain \begin{align*} e^{-2\alpha f(x^{*})} \geq M^{2}(0)/4 \end{align*} and hence \begin{align*} f(x^{*}) \leq - \frac{1}{\alpha}\log(\M(0)) + \frac{1}{\alpha}\log{2}.
\end{align*} Then, using the asymptotics (\ref{cbo_neweq_4.6}), we get \begin{align} \label{cbo_eqn_4.9} f(x^{*}) \leq f_{m} + \Gamma(\alpha) + \frac{1}{\alpha}\log{2}, \end{align} where the function $\Gamma(\alpha) \rightarrow 0 $ as $ \alpha \rightarrow \infty$. \end{proof} \subsection{Convergence to the mean-field SDEs}\label{cbo_sec_mf} In the previous section, we showed convergence of the non-linear process $X(t)$ from (\ref{cbomfsdep}) towards the global minimizer. However, the CBO method is based on the system (\ref{cbos1.6}) of finite particles. This means there is a missing link in the theoretical analysis which we fill in this section by showing convergence of the particle system (\ref{cbos1.6}) to the mean-field limit in mean-square sense (\ref{cbomfsdep}) as the number of particles tends to infinity. The proof of this result has some ingredients inspired from \cite{cbo36} (see also \cite{cbo37}), precisely where we partition the sample space (cf. Theorem~\ref{cbo_thrm4.5}). Further, it is clear from the proof that we need stronger moment bound result like in Lemmas~\ref{cbolemma3.3} and \ref{cbolem3.6}, as compared to \cite[Lemma 3.4]{cbo2}. We first discuss some concepts necessary for later use in this section. We introduce the following notation for the empirical measure of i.i.d. particles driven by the McKean-Vlasov SDEs (\ref{cbomfsdep}): \begin{align} \mathcal{E}_{t} : = \frac{1}{N}\sum\limits_{i=1}^{N}\delta_{X^{i}(t)}, \end{align} where $\delta_{x}$ is the Dirac measure at $x \in \mathbb{R}^{d}$. We will also need the following notation: \begin{align}\label{cboeq5.2} \bar{X}^{\mathcal{E}_{t}}(t) = \frac{\int_{\mathbb{R}^{d}}x e^{-\alpha f(x)} \mathcal{E}_{t}(dx)}{\int_{\mathbb{R}^{d}} e^{-\alpha f(x)} \mathcal{E}_{t}(dx)} = \frac{\sum_{i=1}^{N}X^{i}(t)e^{-\alpha f(X^{i}(t))}}{\sum_{i=1}^{N}e^{-\alpha f(X^{i}(t))}}. \end{align} Using discrete Jensen's inequality, we have \begin{align*} \exp{\bigg(-\alpha\frac{1}{N}\sum\limits_{i=1}^{N}f(X^{i}(t)) \bigg)} &\leq \frac{1}{N}\sum\limits_{i=1}^{N}\exp{\Big(-\alpha f(X^{i}(t))\Big)}, \end{align*} which, on rearrangement and multiplying both sides by $e^{-\alpha f_{m}}$, gives \begin{align}\label{y4.5} \frac{e^{-\alpha f_{m}}}{\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(X^{i}(t))}} &\leq \exp{\bigg(\alpha\Big(\frac{1}{N}\sum\limits_{i=1}^{N}f(X^{i}(t)) - f_{m}\Big)\bigg)} \leq e^{\alpha K_{u}} \exp{\Big(\frac{\alpha K_{u}}{N}\sum\limits_{i=1}^{N} |X^{i}(t)|^{2}\Big)}, \end{align} where we have used Assumption~\ref{cboassu3.4} for the second inequality. We recall that a random variable $ \zeta(\omega)$ is a.s. finite if there is an increasing sequence $\{e_{k}\}_{k\in \mathbb{N}}$ with $e_{k}\rightarrow \infty$ as $k \rightarrow \infty$ such that \begin{align*} \mathbb{P}\big(\cup_{k=1}^{\infty}\{\omega \; : \; |\zeta(\omega)| < e_{k} \}\big) = 1, \end{align*} which means \begin{align*} \mathbb{P}\big(\cap_{k=1}^{\infty}\{\omega \; : \; |\zeta(\omega)| \geq e_{k} \}\big) = 0, \;\;\;\;\text{i.e.}\;\;\;\;\;\; \mathbb{P}\big(\lim_{k\rightarrow \infty}\{ \omega\; : \; |\zeta(\omega)| \geq e_{k}\} \big) =0. \end{align*} Let $g(x)$ be an increasing continuous function of $x \in \mathbb{R}$ then $g(\zeta(\omega))$ is a.s. finite random variable as well. Also, if $\zeta_{1}(\omega)$ and $\zeta_{2}(\omega)$ are a.s. finite random variables then $\zeta_{1}(\omega) \vee \zeta_{2}(\omega)$ is also an a.s. finite random variable. If $\zeta(\omega)$ is a.s. 
finite, then by continuity of probability we have \cite{cbo35}: \begin{align} \lim_{k\rightarrow \infty}\mathbb{P}(\{ \omega \; : \; |\zeta(\omega)| \geq e_{k}\}) = 0. \end{align} We know that $X^{i}(t)$, governed by the McKean-Vlasov SDEs (\ref{cbomfsdep}), are i.i.d. random variables for every $t\geq 0$; therefore, using Chebyshev's inequality, we get \begin{align*} &\mathbb{P}\Big(\frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2} \geq N^{(\epsilon-1)/4}\Big) \leq \frac{\mathbb{E}\Big|\frac{1}{N}\sum_{i=1}^{N}|X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2}\Big|^{4}}{N^{(\epsilon-1)}} \\ & = \frac{\mathbb{E}\Big|\sum_{i=1}^{N}\big(|X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2}\big)\Big|^{4}}{N^{3+\epsilon}} \leq \frac{C}{N^{3+\epsilon}}\bigg(\sum_{i =1}^{N}\mathbb{E}U_{i}^{4} + \sum_{i=1}^{N}\mathbb{E}U_{i}^{2}\sum_{j=1}^{N}\mathbb{E}U_{j}^{2}\bigg) \\ & \leq \frac{C}{N^{1+\epsilon}}, \end{align*} where we have used Lemma~\ref{cbolem3.6}, $U_{i} = |X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2}$, and $C$ is independent of $N$. We take $\epsilon \in (0,1)$ and define $E_{N} = \left\{ \frac{1}{N}\sum\limits_{i =1}^{N}|X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2} > \frac{1}{N^{(1-\epsilon)/4}}\right\}$; then \begin{align*} \sum\limits_{N =1}^{\infty}\mathbb{P}(E_{N}) < \infty. \end{align*} The Borel-Cantelli lemma implies that the random variable \begin{align*} \zeta_{1}(t) := \sup_{N\in \mathbb{N}}N^{(1-\epsilon)/4}\Big(\frac{1}{N}\sum_{i=1}^{N}|X^{i}(t)|^{2} - \mathbb{E}|X(t)|^{2}\Big) \end{align*} is a.s. finite. Therefore, \begin{align} \label{y4.6} \frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}(t)|^{2} \leq \mathbb{E}|X(t)|^{2} + \zeta_{1}(t,\omega)N^{(-1 +\epsilon)/4},\;\;\;\; a.s., \end{align} for all $t \in [0,T]$. Using (\ref{y4.6}) in (\ref{y4.5}) and Lemma~\ref{cbolem3.6}, we get \begin{align}\label{cboeq5.6} \frac{e^{-\alpha f_{m}}}{\frac{1}{N}\sum\limits_{i=1}^{N}e^{-\alpha f(X^{i}(t))}} \leq e^{\alpha K_{u}(1+ K_{p}+\zeta_{1}(t,\omega)N^{(-1+\epsilon)/4}) },\;\;\;\; a.s. \end{align} This shows that \begin{align} \lim\limits_{N \rightarrow \infty}\frac{e^{-\alpha f_{m}}}{\frac{1}{N}\sum\limits_{i=1}^{N}e^{-\alpha f(X^{i}(t))}} \leq e^{\alpha K_{u}(1+ K_{p})},\;\;\;\; a.s. \end{align} \begin{lemma}\label{cbolem5.1} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Let $\mathbb{E}|X(0)|^{4} < \infty$ and $\mathbb{E}|Z|^{4} < \infty$. Then, the following bound holds for all $t\in [0,T]$ and sufficiently large $N$: \begin{align} \label{cboeq5.7} |\bar{X}^{\mathcal{E}_{t}}(t) - \bar{X}(t)| \leq \frac{\zeta(t,\omega)}{N^{(1-\epsilon)/4}}, \;\;\;\; a.s.,\end{align} where $\bar{X}^{\mathcal{E}_{t}}(t)$ is from (\ref{cboeq5.2}), $\bar{X}(t)$ is from (\ref{eqcbo2.12}), $\zeta(t,\omega)$ is an a.s. finite $\mathscr{F}_{t}$-measurable random variable and $ \epsilon \in( 0,1)$.
\end{lemma} \begin{proof} We have \begin{align} |\bar{X}^{\mathcal{E}_{t}}(t) &- \bar{X}(t)| = \bigg| \sum_{i=1}^{N}X^{i}(t)\frac{e^{-\alpha f(X^{i}(t))}}{\sum_{j=1}^{N}e^{-\alpha f(X^{j}(t))} } - \int_{\mathbb{R}^{d}}x\frac{e^{-\alpha f(x)}}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)} \mathcal{L}_{X(t)}(dx)}\mathcal{L}_{X(t)}(dx)\bigg| \nonumber\\ & \leq \bigg| \frac{1}{\sum_{j=1}^{N}e^{-\alpha f(X^{j}(t))}}\bigg( \sum_{i=1}^{N}X^{i}(t) e^{-\alpha f(X^{i}(t))} - \int_{\mathbb{R}^{d}}xe^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)\bigg)\bigg| \nonumber \\ & \;\;\;\; + \bigg|\int_{\mathbb{R}^{d}}xe^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)\bigg(\frac{1}{\sum_{j=1}^{N}e^{-\alpha f(X^{j}(t))}} - \frac{1}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)}\bigg)\bigg|. \label{cbo_eq_4.13} \end{align} Let $ Y^{i}(t) = X^{i}(t) e^{-\alpha f(X^{i}(t))} - \int_{\mathbb{R}^{d}}xe^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)$. Note that $\mathbb{E}Y^{i}(t)$ is a $d-$dimensional zero vector and $\mathbb{E}(Y^{i}(t)\cdot Y^{j}(t)) = 0$, $i\neq j$. Then, using Theorem~\ref{cbolem3.6}, we obtain \begin{align}\label{cboeq4.14} \mathbb{E}\Big|\sum_{i=1}^{N}X^{i}(t) e^{-\alpha f(X^{i}(t))} &- \int_{\mathbb{R}^{d}}xe^{-\alpha f(x)}\mathcal{L}_{X(t)}(dx)\Big|^{4} = \frac{1}{N^{4}}\mathbb{E}\Big|\sum\limits_{i=1}^{N}Y^{i}(t)\Big|^{4}\nonumber\\ & = \frac{1}{N^{4}}\mathbb{E}\bigg(\sum\limits_{i=1}^{N}|Y^{i}(t)|^{4}+ \sum_{i=1}^{N}|Y^{i}(t)|^{2}\sum_{j=1}^{N}|Y^{j}(t)|^{2}\bigg) \leq \frac{C}{N^{2}}, \end{align} where $C$ is a positive constant independent of $N$. As a consequence of above estimate and using Chebyshev's inequality, we get \begin{align*} \mathbb{P}\bigg(\Big|\sum_{i=1}^{N}X^{i}(t) e^{-\alpha f(X^{i}(t))} &- \int_{\mathbb{R}^{d}}X(t)e^{-\alpha f(X(t))}\mathcal{L}_{X(t)}(dx)\Big| \geq N^{(\epsilon-1)/4}\bigg) \leq \frac{C}{N^{1+\epsilon}}. \end{align*} Therefore, by the Borel-Cantelli lemma there exists an a.s. finite $\mathcal{F}_{t}$-measurable random variable $\zeta_{2}(t,\omega)$ such that the following bound holds: \begin{align}\label{cboeq5.8} \Big|\sum_{i=1}^{N}X^{i}(t) e^{-\alpha f(X^{i}(t))} &- \int_{\mathbb{R}^{d}}X(t)e^{-\alpha f(X(t))}\mathcal{L}_{X(t)}(dx)\Big| \leq \frac{\zeta_{2}(t,\omega)}{N^{(1-\epsilon)/4}},\;\;\;\; a.s. \end{align} In the same manner, we can ascertain \begin{align}\label{cboeq5.9} \Big|\sum_{i=1}^{N} e^{-\alpha f(X^{i}(t))} &- \int_{\mathbb{R}^{d}}e^{-\alpha f(X(t))}\mathcal{L}_{X(t)}(dx)\Big| \leq \frac{\zeta_{3}(t,\omega)}{N^{(1-\epsilon)/4}},\;\;\;\; a.s., \end{align} where $\zeta_{3}(t,\omega)$ is an a.s. finite $\mathcal{F}_{t}$-measurable random variable. Substituting (\ref{cboeq5.6}), (\ref{cboeq5.8}) and (\ref{cboeq5.9}) in (\ref{cbo_eq_4.13}), we conclude that (\ref{cboeq5.7}) is true for sufficiently large $N$. \end{proof} \begin{remark} From (\ref{y4.6}), we have $\lim_{N\rightarrow \infty}\int_{\mathbb{R}^{d}}|x|^{2}\mathcal{E}_{t}(dx) = \mathbb{E}|X(t)|^{2}$, $a.s.$, which is the strong law of large numbers for i.i.d. random variables $|X^{i}(t)|^{2}$. Also, the result of Lemma~\ref{cbolem5.1} can be treated as a law of large numbers which shows a.s. convergence of weighted average $\bar{X}^{\mathcal{E}_{t}}(t)$ (as compared to empirical average of (\ref{y4.6})) of i.i.d. particle system towards $\bar{X}(t)$ as $N \rightarrow \infty$. \end{remark} Let $R>0$ be a sufficiently large real number. Let us fix a $t \in [0,T]$. 
Let us denote \begin{align} \tau_{1,R} = \inf\Big\{ s\geq 0\; ; \; \frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}_{N}(s)|^{4} \geq R \Big\},&\;\;\;\; \tau_{2,R} = \inf\Big\{ s \geq 0\; ; \; \frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}(s)|^{4} \geq R\Big\}, \\ \tau_{R} & = \tau_{1,R}\wedge \tau_{2,R}, \label{cbo_neweq_4.23} \end{align} and \begin{align} \Omega_{1}(t) &= \{ \tau_{1,R} \leq t\} \cup \{ \tau_{2,R} \leq t \}, \label{cbo_eq_4.20}\\ \Omega_{2}(t) &= \Omega\backslash\Omega_{1}(t) = \{\tau_{1,R} > t\} \cap \{ \tau_{2,R} > t \}. \label{cbo_eq_4.21} \end{align} \begin{lemma} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Then, the following inequality holds for all $t \in [0,T]$: \begin{align} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}_{N}(s) &- \bar{X}^{\mathcal{E}_{s}}(s)|^{2} ds \leq CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}|X^{i}_{N}(s\wedge \tau_{R}) - X^{i}(s\wedge \tau_{R})|^{2}ds, \label{cbo_eq_4.23} \end{align} where $\tau_{R}$ is from (\ref{cbo_neweq_4.23}), $\bar{X}_{N}(s)$ is from (\ref{cbos1.7}), $\bar{X}^{\mathcal{E}_{s}}(s)$ is from (\ref{cboeq5.2}), $C > 0$ is independent of $N$ and $R$. \end{lemma} \begin{proof} We have \begin{align*} &|\bar{X}_{N}(s) - \bar{X}^{\mathcal{E}_{s}}(s)| = \bigg|\sum\limits_{i=1}^{N}X^{i}_{N}(s) \frac{e^{-\alpha f(X^{i}_{N}(s))}}{\sum_{j =1}^{N}e^{-\alpha f(X_{N}^{j}(s))}} - \sum\limits_{i=1}^{N}X^{i}(s) \frac{e^{-\alpha f(X^{i}(s))}}{\sum_{j =1}^{N}e^{-\alpha f(X^{j}(s))}}\bigg|\\ & \leq \Bigg|\frac{1}{N}\sum\limits_{i=1}^{N}\big(X_{N}^{i}(s) - X^{i}(s)\big)\frac{e^{-\alpha f(X_{N}^{i}(s))}}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}_{N}(s))}}\Bigg| + \Bigg|\frac{\frac{1}{N}\sum_{i=1}^{N}X^{i}(s)\big(e^{-\alpha f(X_{N}^{i}(s))} - e^{-\alpha f(X^{i}(s))}\big)}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}_{N}(s))}}\Bigg| \\ & \;\;\;\;+\Bigg|\frac{1}{N}\sum_{i=1}^{N}X^{i}(s)e^{-\alpha f(X^{i}(s))}\bigg(\frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}_{N}(s))}} - \frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}(s))}}\bigg)\Bigg|. \end{align*} Using the discrete Jensen inequality, we get \begin{align} &|\bar{X}_{N}(s) - \bar{X}^{\mathcal{E}_{s}}(s)| \leq C\Bigg(e^{\frac{\alpha}{N}\sum_{j=1}^{N}f(X^{j}_{N}(s))}\frac{1}{N}\sum_{i=1}^{N}|X^{i}_{N}(s) - X^{i}(s)|\nonumber \\ & \;\;\;\; +e^{\frac{\alpha }{N}\sum_{j=1}^{N}f(X^{j}_{N}(s))}\frac{1}{N}\sum_{i=1}^{N}|X^{i}(s)||e^{-\alpha f(X_{N}^{i}(s))} - e^{-\alpha f(X^{i}(s))}| \nonumber \\ & \;\;\;\; +e^{\frac{\alpha }{N}\sum_{j=1}^{N}(f(X^{j}_{N}(s)) + f(X^{j}(s)))}\frac{1}{N}\sum_{i=1}^{N}|X^{i}(s)|\frac{1}{N}\sum_{j=1}^{N}|e^{-\alpha f(X_{N}^{j}(s))} - e^{-\alpha f(X^{j}(s))}|\Bigg),\label{cbo_eqn_4.26} \end{align} where $C $ is a positive constant independent of $N$. 
Applying Assumptions~\ref{cboh3.2}-\ref{cboassu3.4}, the Cauchy-Bunyakowsky-Schwartz inequality and Young's inequality, $ab\leq a^{2}/2 + b^{2}/2$, $a,b>0$, we obtain \begin{align} &|\bar{X}_{N}(s) - \bar{X}^{\mathcal{E}_{s}}(s)| \leq C\Bigg(e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum_{i=1}^{N}|X^{i}_{N}(s) - X^{i}(s)| \nonumber\\ & \;\;\;\; +e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum_{i=1}^{N}|X^{i}(s)|\big(1+ |X^{i}_{N}(s)| + |X^{i}(s)| \big)|X_{N}^{i}(s) - X^{i}(s)| \nonumber\\ & \;\;\;\; +e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2})}\frac{1}{N}\sum_{i=1}^{N} |X^{i}(s)|\frac{1}{N}\sum_{j=1}^{N}\big(1+ |X^{j}_{N}(s)| + |X^{j}(s)| \big)|X_{N}^{j}(s) - X^{j}(s)| \Bigg) \nonumber\\ & \leq C\Bigg(e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum_{i=1}^{N}|X^{i}_{N}(s) - X^{i}(s)| \nonumber\\ & \;\;\;\; + e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2})}\frac{1}{N}\sum_{i=1}^{N}\big(1+|X^{i}_{N}(s)|^{2} + |X^{i}(s)|^{2}\big)|X_{N}^{i}(s) - X^{i}(s)| \nonumber \\ & \;\;\;\; + e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2})} \frac{1}{N}\sum_{i=1}^{N}|X^{i}(s)|^{2}\frac{1}{N}\sum\limits_{j=1}^{N}|X^{j}_{N}(s) - X^{j}(s)| \Bigg)\nonumber\\ & \leq C\Bigg(e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum_{i=1}^{N}|X^{i}_{N}(s) - X^{i}(s)| + e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2})}\nonumber\\ & \;\;\;\;\times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}\big(1+ |X_{N}^{i}(s)|^{2} + |X^{i}(s)|^{2}\big)^{2}\bigg)^{1/2}\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s) - X^{i}(s)|^{2}\bigg)^{1/2}\Bigg). \label{cbo_neweq_4.28} \end{align} On squaring both sides, we ascertain \begin{align*} &|\bar{X}_{N}(s) - \bar{X}^{\mathcal{E}_{s}}(s)|^{2} \leq C\Bigg(e^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum_{i=1}^{N}|X^{i}_{N}(s) - X^{i}(s)|^{2} + e^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2})}\\ & \;\;\;\;\times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}\big(1+ |X_{N}^{i}(s)|^{2} + |X^{i}(s)|^{2}\big)^{2}\bigg)\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|X_{N}^{i}(s) - X^{i}(s)|^{2}\bigg)\Bigg). \end{align*} Using Holder's inequality, we have \begin{align*} \frac{1}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |X^{j}(s)|^{2}) \leq \frac{2}{N^{1/2}}\bigg(\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{4} + |X^{j}(s)|^{4})\bigg)^{1/2}. \end{align*} Therefore, \begin{align*} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}_{N}(s) &- \bar{X}^{\mathcal{E}_{s}}(s)|^{2} ds \leq CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}|X^{i}_{N}(s\wedge \tau_{R}) - X^{i}(s\wedge \tau_{R})|^{2}ds, \end{align*} where $C > 0$ is independent of $N$ and $R$. \end{proof} \begin{lemma} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Then, the following inequality holds for all $t \in [0,T]$: \begin{align}\label{cbo_eq_4.28} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)|^{2}ds \leq C\frac{e^{2 \alpha K_{u} \sqrt{R}}}{N}, \end{align} where $\tau_{R}$ is from (\ref{cbo_neweq_4.23}), $\bar{X}^{\mathcal{E}_{s}}(s)$ is from (\ref{cboeq5.2}), $\bar{X}(s)$ is from (\ref{eqcbo2.12}), $C > 0$ is independent of $N$ and $R$. 
\end{lemma} \begin{proof} We have \begin{align*} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)| = \bigg|\sum_{i=1}^{N}X^{i}(s) \frac{e^{-\alpha f(X^{i}(s))}}{\sum_{j=1}^{N}e^{-\alpha f(X^{j}(s))}} - \int_{\mathbb{R}^{d}}x\frac{e^{-\alpha f(x)}}{\int_{\mathbb{R}^{d}}e^{-\alpha f(x)} \mathcal{L}_{X(s)}(dx)}\mathcal{L}_{X(s)}(dx)\bigg| \\ & \leq \frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}(s))}}\Bigg|\frac{1}{N}\sum_{i=1}^{N}\bigg(X^{i}(s)e^{-\alpha f(X^{i}(s))}- \int_{\mathbb{R}^{d}}x e^{-\alpha f(x)}\mathcal{L}_{X(s)}(dx)\bigg)\Bigg|\\ & \;\;\;\; + \Bigg|\int_{\mathbb{R}^{d}}x e^{-\alpha f(x)}\mathcal{L}_{X(s)}(dx)\frac{\frac{1}{N}\sum_{j=1}^{N}\Big(e^{-\alpha f(X^{j}(s))} - \int_{\mathbb{R}^{d}}e^{-\alpha f(x)} \mathcal{L}_{X(s)}(dx)\Big)}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(X^{j}(s))} \int_{\mathbb{R}^{d}}e^{-\alpha f(x)} \mathcal{L}_{X(s)}(dx)}\Bigg|. \end{align*} Using Jensen's inequality and squaring both sides, we get \begin{align*} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)|^{2} \leq Ce^{\frac{2\alpha }{N}\sum_{j=1}^{N}f(X^{j}(s))} \bigg|\frac{1}{N}\sum_{i=1}^{N}\Big(X^{i}(s)e^{-\alpha f(X^{i}(s))}- \mathbb{E}\big(X(s)e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}\\ & \;\;\;\; + Ce^{\frac{2\alpha }{N}\sum_{j=1}^{N}f(X^{j}(s))}e^{2\alpha \mathbb{E}f(X(s))}(\mathbb{E}|X(s)|)^{2}\bigg|\frac{1}{N}\sum_{j=1}^{N}\Big(e^{-\alpha f(X^{j}(s))} - \mathbb{E}\big(e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}, \end{align*} where $C$ is a positive constant independent of $N$. Applying Assumption~\ref{cboassu3.4}, we ascertain \begin{align*} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)|^{2} \leq Ce^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}(s)|^{2}} \bigg|\frac{1}{N}\sum_{i=1}^{N}\Big(X^{i}(s)e^{-\alpha f(X^{i}(s))}- \mathbb{E}\big(X(s)e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}\\ & \;\;\;\; + Ce^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}(s)|^{2}}e^{2\alpha K_{u}\mathbb{E}|X(s)|^{2}}(\mathbb{E}|X(s)|)^{2}\bigg|\frac{1}{N}\sum_{j=1}^{N}\Big(e^{-\alpha f(X^{j}(s))} - \mathbb{E}\big(e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}. \end{align*} Hence, using Theorem~\ref{cbolem3.6}, we obtain \begin{align*} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)|^{2}ds \leq Ce^{2 \alpha K_{u} \sqrt{R}} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} \bigg|\frac{1}{N}\sum_{i=1}^{N}\Big(X^{i}(s)e^{-\alpha f(X^{i}(s))}- \mathbb{E}\big(X(s)e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}ds\\ & \;\;\;\; + Ce^{2 \alpha K_{u} \sqrt{R}}\mathbb{E}\int_{0}^{t\wedge \tau_{R}}\bigg|\frac{1}{N}\sum_{j=1}^{N}\Big(e^{-\alpha f(X^{j}(s))} - \mathbb{E}\big(e^{-\alpha f(X(s))}\big)\Big)\bigg|^{2}ds \\ & \leq Ce^{2 \alpha K_{u} \sqrt{R}}\int_{0}^{t} \mathbb{E}\bigg|\frac{1}{N}\sum_{i=1}^{N}U^{i}_{1}(s\wedge \tau_{R})\bigg|^{2}ds + Ce^{2 \alpha K_{u} \sqrt{R}}\int_{0}^{t}\mathbb{E}\bigg|\frac{1}{N}\sum_{i=1}^{N} U^{i}_{2}(s\wedge \tau_{R})\bigg|^{2}ds, \end{align*} where $U_{1}^{i}(s \wedge \tau_{R}) = X^{i}(s\wedge \tau_{R})e^{-\alpha f(X^{i}(s\wedge \tau_{R}))} - \mathbb{E}\big(X(s\wedge \tau_{R})e^{-\alpha f(X(s\wedge \tau_{R}))}\big) $, $U_{2}^{i}(s\wedge \tau_{R}) = e^{-\alpha f(X^{i}(s))} - \mathbb{E}\big(e^{-\alpha f(X(s))}\big)$, and $C$ is independent of $N$ and $R$. We have \begin{align*} \mathbb{E}\bigg|\frac{1}{N}\sum_{i=1}^{N}U^{i}_{1}(s\wedge \tau_{R})\bigg|^{2} = \frac{1}{N^{2}}\sum\limits_{i=1}^{N}\mathbb{E}|U_{1}^{i}(s\wedge \tau_{R})|^{2} + \frac{1}{N^{2}}\sum_{\substack{i,j=1 ,\; i\neq j }}^{N}\mathbb{E}\big(U^{i}_{1}(s\wedge \tau_{R})\cdot U_{1}^{j}(s\wedge \tau_{R})\big). 
\end{align*} Note that $\mathbb{E}\big(U^{i}_{1}(s)\cdot U_{1}^{j}(s)\big) = 0 $ for $i\neq j$; since $s\wedge \tau_{R}$ is a bounded stopping time, Doob's optional stopping theorem \cite[Theorem 2.2.1]{cbos11} implies that $\mathbb{E}\big(U^{i}_{1}(s\wedge \tau_{R})\cdot U_{1}^{j}(s\wedge \tau_{R})\big) = 0$ for $i\neq j$ as well. Using Theorem~\ref{cbolem3.6}, we deduce \begin{align}\label{cbo_eq_4.26} \mathbb{E}\bigg|\frac{1}{N}\sum_{i=1}^{N}U^{i}_{1}(s\wedge \tau_{R})\bigg|^{2} \leq \frac{C}{N}, \end{align} where $C$ is independent of $N$. In a similar manner, we can obtain \begin{align}\label{cbo_eq_4.27} \mathbb{E}\bigg|\frac{1}{N}\sum_{i=1}^{N} U^{i}_{2}(s\wedge \tau_{R})\bigg|^{2} \leq \frac{C}{N}, \end{align} where $C$ is independent of $N$. Using (\ref{cbo_eq_4.26}) and (\ref{cbo_eq_4.27}), we get the following estimate: \begin{align*} \mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}^{\mathcal{E}_{s}}(s) &- \bar{X}(s)|^{2}ds \leq C\frac{e^{2 \alpha K_{u} \sqrt{R}}}{N}, \end{align*} where $C$ is independent of $N$ and $R$. \end{proof} \begin{theorem}\label{cbo_thrm4.5} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} be satisfied. Let $X_{N}^{i}(t)$ solve (\ref{cboeq1.8}). Let $X^{i}(t)$ represent independent processes which solve (\ref{cbomfsdep}). Let us assume that $X^{i}_{N}(0) = X^{i}(0) $, a.s., $i=1,\dots,N$. Let $\mathbb{E}|Z|^{4} \leq C$, $\sup_{i =1,\dots,N}\mathbb{E}|X^{i}(0)|^{4} \leq C$, and $\sup_{i=1,\dots,N}\mathbb{E}|X^{i}_{N}(0)|^{4} \leq C$. Then, the following mean-square convergence result holds for all $t \in [0,T]$: \begin{align} \lim\limits_{N \rightarrow \infty }\sup_{i =1,\dots,N}\mathbb{E}|X_{N}^{i}(t) - X^{i}(t)|^{2} = 0. \end{align} \end{theorem} \begin{proof} Let $t \in (0,T]$. We can write \begin{align*} \mathbb{E}|X_{N}^{i}(t) - X^{i}(t)|^{2} &= \mathbb{E}\big(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{1}(t)}\big) + \mathbb{E}\big( |X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{2}(t)}\big) \\ & =: E_{1}(t) + E_{2}(t), \end{align*} where $\Omega_{1}(t)$ and $\Omega_{2}(t)$ are from (\ref{cbo_eq_4.20}) and (\ref{cbo_eq_4.21}), respectively. Using the Cauchy-Bunyakowsky-Schwartz inequality and Chebyshev's inequality, we obtain \begin{align*} E_{1}(t) &= \mathbb{E}\big(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{1}(t)}\big) \leq \big(\mathbb{E}|X_{N}^{i}(t) - X^{i}(t)|^{4}\big)^{1/2}\big(\mathbb{E}I_{\Omega_{1}(t)}\big)^{1/2} \\ & \leq C \big(\mathbb{E}|X_{N}^{i}(t)|^{4} + \mathbb{E}|X^{i}(t)|^{4}\big)^{1/2} \bigg(\frac{1}{RN}\sum\limits_{i=1}^{N}\mathbb{E}\sup_{0\leq s \leq t}|X^{i}_{N}(s)|^{4} + \frac{1}{RN}\sum\limits_{i=1}^{N}\mathbb{E}\sup_{0\leq s\leq t}|X^{i}(s)|^{4}\bigg)^{1/2}. \end{align*} We get the following estimate for $E_{1}(t)$ by applying Lemma~\ref{cbolemma3.3} and Theorem~\ref{cbolem3.6}: \begin{align}\label{cbo_neqeq_4.33} E_{1}(t) \leq \frac{C}{R}, \end{align} where $C$ is a positive constant independent of $N$ and $R$. Now, we estimate $E_{2}(t)$. We have $\mathbb{E}(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{2}(t)}) \leq \mathbb{E}(|X_{N}^{i}(t\wedge \tau_{R}) -X^{i}(t \wedge \tau_{R})|^{2}) $.
Using Ito's formula, we have \begin{align} |X_{N}^{i}&(t\wedge \tau_{R}) - X^{i}(t\wedge \tau_{R})|^{2} = |X^{i}_{N}(0) - X^{i}(0)|^{2} \nonumber\\ & \;\;- 2\mathbb{E}\int_{0}^{t\wedge \tau_{R}}\beta(s)(X_{N}^{i}(s) - X^{i}(s))\cdot (X_{N}^{i}(s) - \bar{X}_{N}(s) - X^{i}(s) + \bar{X}(s))ds \nonumber \\ & \;\; + 2\int_{0}^{t\wedge \tau_{R}}\sigma^{2}(s)|\diag(X_{N}^{i}(s) - \bar{X}_{N}(s) -X^{i}(s) + \bar{X}(s))|^{2}ds \nonumber\\ & \;\;+2\sqrt{2}\int_{0}^{t\wedge \tau_{R}}\sigma(s)\big((X_{N}^{i}(s) -X^{i}(s))\cdot\diag(X_{N}^{i}(s) - \bar{X}_{N}(s)- X^{i}(s) +\bar{X}(s))dW^{i}(s)\big) \nonumber \\ & \;\;+\int_{0}^{t \wedge \tau_{R}}\int_{\mathbb{R}^{d}}\Big(|X_{N}^{i}(s^{-}) - X^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z \nonumber \\ & \;\;\;\;\;\;\;\;- \gamma(s)\diag(X^{i}(s^{-}) - \bar{X}(s^{-}))z|^{2}- |X^{i}_{N}(s^{-}) - X^{i}(s^{-})|^{2}\Big)\mathcal{N}^{i}(ds,dz). \label{cbo_neweq_4.34} \end{align} The Cauchy-Bunyakowsky-Schwartz inequality and Young's inequality provide the following estimates: \begin{align} &(X_{N}^{i}(s) - X^{i}(s))\cdot (X_{N}^{i}(s) - \bar{X}_{N}(s) - X^{i}(s) + \bar{X}(s)) \leq C(|X^{i}_{N}(s) - X^{i}(s)|^{2} + |\bar{X}_{N}(s) - \bar{X}(s)|^{2}), \label{cbo_neweq_4.35}\\ &|\diag(X_{N}^{i}(s) - \bar{X}_{N}(s) -X^{i}(s) + \bar{X}(s))|^{2} \leq C(|X^{i}_{N}(s) - X^{i}(s)|^{2} + |\bar{X}_{N}(s) - \bar{X}(s)|^{2}), \end{align} and \begin{align} &\Big(|X_{N}^{i}(s^{-}) - X^{i}(s^{-}) + \gamma(s)\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}))z - \gamma(s)\diag(X^{i}(s^{-})\nonumber \\ & \;\;\;\; - \bar{X}(s^{-}))z|^{2}- |X^{i}_{N}(s^{-}) - X^{i}(s^{-})|^{2}\Big) = \gamma^{2}(s)|\big((X^{i}_{N}(s^{-}) - \bar{X}_{N}(s^{-}) - X^{i}(s^{-}) + \bar{X}(s^{-}))\cdot z\big)|^{2} \nonumber \\ & \;\;\;\; + 2\gamma(s)\Big( \big(X_{N}^{i}(s^{-}) - X^{i}(s^{-})\big)\cdot \big(\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}) - X^{i}(s^{-}) + \bar{X}(s^{-}))z\big)\Big) \nonumber\\ & \leq C(|X^{i}_{N}(s^{-}) - X^{i}(s^{-})|^{2} + |\bar{X}_{N}(s^{-}) - \bar{X}(s^{-})|^{2})|z|^{2}\nonumber \\ & \;\;\;\; + 2\gamma(s)\Big( \big(X_{N}^{i}(s^{-}) - X^{i}(s^{-})\big)\cdot \big(\diag(X_{N}^{i}(s^{-}) - \bar{X}_{N}(s^{-}) - X^{i}(s^{-}) + \bar{X}(s^{-}))z\big)\Big). \label{cbo_neweq_4.38} \end{align} Taking expectations on both sides of (\ref{cbo_neweq_4.34}), using estimates (\ref{cbo_neweq_4.35})-(\ref{cbo_neweq_4.38}) and applying Doob's optional stopping theorem \cite[Theorem 2.2.1]{cbos11}, we get \begin{align} &\mathbb{E}|X_{N}^{i}(t\wedge \tau_{R}) - X^{i}(t\wedge \tau_{R})|^{2} \leq \mathbb{E}|X_{N}^{i}(0) - X^{i}(0)|^{2} \nonumber \\ & \;\;\;\; + C\mathbb{E}\int_{0}^{t\wedge \tau_{R}}\big(|X_{N}^{i}(s) - X^{i}(s)|^{2} + |\bar{X}_{N}(s) - \bar{X}(s)|^{2}\big) ds \nonumber \\ & \;\;\;\; + C\mathbb{E}\int_{0}^{t\wedge \tau_{R}}\int_{\mathbb{R}^{d}}(|X^{i}_{N}(s) - X^{i}(s)|^{2} + |\bar{X}_{N}(s) - \bar{X}(s)|^{2})|z|^{2}\rho_{z}(z)dz ds \nonumber \\ & \leq \mathbb{E}|X_{N}^{i}(0) - X^{i}(0)|^{2} + C\mathbb{E} \int_{0}^{t\wedge \tau_{R}}|X_{N}^{i}(s) - X^{i}(s)|^{2}ds \nonumber\\ & \;\;\;\; + C\mathbb{E}\int_{0}^{t\wedge \tau_{R}} |\bar{X}_{N}(s) - \bar{X}^{\mathcal{E}_{s}}(s)|^{2} ds + C\mathbb{E}\int_{0}^{t\wedge \tau_{R}}|\bar{X}^{\mathcal{E}_{s}}(s) - \bar{X}(s)|^{2} ds. 
\label{cbo_eq_4.22} \end{align} Substituting (\ref{cbo_eq_4.23}) and (\ref{cbo_eq_4.28}) in (\ref{cbo_eq_4.22}), we obtain \begin{align*} \mathbb{E}&\big(|X_{N}^{i}(t\wedge \tau_{R}) - X^{i}(t\wedge \tau_{R})|^{2}\big) \leq \mathbb{E}|X_{N}^{i}(0) - X^{i}(0)|^{2} \\ & \;\;\;\; + CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\big(|X^{i}_{N}(s\wedge \tau_{R}) - X^{i}(s\wedge \tau_{R})|^{2}\big)ds + C\frac{e^{2 \alpha K_{u} \sqrt{R}}}{N}, \end{align*} where $C>0$ is independent of $N$ and $R$. Taking supremum over $i =1,\dots, N$, we get \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\big(|&X_{N}^{i}(t\wedge \tau_{R}) - X^{i}(t\wedge \tau_{R})|^{2}\big) \leq \sup_{i=1,\dots,N}\mathbb{E}|X_{N}^{i}(0) - X^{i}(0)|^{2} \\ & + CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\sup_{i=1,\dots,N}\mathbb{E}\big(|X^{i}_{N}(s\wedge \tau_{R}) - X^{i}(s\wedge \tau_{R})|^{2}\big)ds + C\frac{e^{2 \alpha K_{u} \sqrt{R}}}{N}. \end{align*} Using Gr\"{o}nwall's inequality, we have \begin{align} \sup_{i=1,\dots,N}\mathbb{E}\big(|X_{N}^{i}(t\wedge \tau_{R})& - X^{i}(t\wedge \tau_{R})|^{2}\big) \leq \frac{C}{N}e^{CRe^{4\alpha K_{u}\sqrt{R}}}e^{2 \alpha K_{u} R} \leq \frac{C}{N}e^{e^{C_{u}\sqrt{R}}},\label{cbo_eqn_4.30} \end{align} where $C>0$ and $C_{u}>0$ are constants independent of $N$ and $R$. In the above calculations, we have used the facts that $R < e^{2\alpha K_{u}\sqrt{R}}$ and $2\alpha K_{u}\sqrt{R} < e^{2\alpha K_{u}\sqrt{R}}$ for sufficiently large $R$. We choose $R = \frac{1}{C_{u}^{2}}\big(\ln(\ln(N^{1/2}))\big)^{2} $. With this choice, $C_{u}\sqrt{R} = \ln(\ln(N^{1/2}))$, so that $e^{C_{u}\sqrt{R}} = \ln(N^{1/2})$ and $e^{e^{C_{u}\sqrt{R}}} = N^{1/2}$, and hence the right-hand side of (\ref{cbo_eqn_4.30}) becomes $C/N^{1/2}$. Therefore, \begin{align*} \sup_{i=1,\dots,N} \mathbb{E}(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{2}(t)}) \leq \sup_{i=1,\dots,N}\mathbb{E}\big(|X_{N}^{i}(t\wedge \tau_{R})& - X^{i}(t\wedge \tau_{R})|^{2}\big) \leq \frac{C}{N^{1/2}}, \end{align*} which implies \begin{align}\label{cbo_eq_4.31} \lim\limits_{N\rightarrow \infty} \sup_{i=1,\dots,N} \mathbb{E}(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{2}(t)}) = \lim\limits_{N\rightarrow \infty} \sup_{i=1,\dots,N}\mathbb{E}\big(|X_{N}^{i}(t\wedge \tau_{R})& - X^{i}(t\wedge \tau_{R})|^{2}\big) = 0. \end{align} The term (\ref{cbo_neqeq_4.33}) and the choice of $R$ provide the following estimate: \begin{align*} \mathbb{E}\big(|X_{N}^{i}(t) - X^{i}(t)|^{2}I_{\Omega_{1}(t)}\big) \leq \frac{C}{R} \leq \frac{C}{\big(\ln(\ln(N^{1/2}))\big)^{2}}, \end{align*} where $C>0$ is independent of $N$ and $R$. This yields \begin{align}\label{cbo_eq_4.24} \lim_{N\rightarrow \infty}\sup_{i = 1,\dots,N }\mathbb{E}\big(|X^{i}_{N}(t) - X^{i}(t)|^{2}I_{\Omega_{1}(t)}\big) = 0. \end{align} As a consequence of (\ref{cbo_eq_4.31}) and (\ref{cbo_eq_4.24}), we get \begin{align*} \lim_{N\rightarrow \infty}\sup_{i=1,\dots,N}\mathbb{E}|X_{N}^{i}(t) - X^{i}(t)|^{2} = 0, \end{align*} for all $t \in [0,T]$. \end{proof} \begin{remark} It is not difficult to see from the above theorem that the empirical measure of the particle system (\ref{cboeq1.8}) converges to the law of the mean-field SDEs (\ref{cbomfsdep}) in the $2$-Wasserstein metric, i.e., for all $t \in [0,T]$: \begin{align} \lim_{N\rightarrow \infty}\mathcal{W}_{2}^{2}(\mathcal{E}_{t}^{N}, \mathcal{L}_{X(t)}) = 0, \end{align} where $\mathcal{E}_{t}^{N} = \frac{1}{N}\sum_{i=1}^{N}\delta_{X^{i}_{N}(t)} $. \end{remark} \begin{remark} Theorem~\ref{cbo_thrm4.5} implies weak convergence of the empirical measure $\mathcal{E}_{t}^{N}$ of the interacting particle system towards $\mathcal{L}_{X(t)}$, the law of the mean-field limit process $X(t)$ (see \cite{cbo35,cbo29}).
\end{remark} \subsection{Convergence of the numerical scheme}\label{cbo_conv_ns} To implement the particle system (\ref{cbos1.6}), we have proposed to use the Euler scheme introduced in Section~\ref{subsec_implemen}. The jump-diffusion SDEs (\ref{cbos1.6}), governing the interacting particle system, have locally Lipschitz and linearly growing coefficients. Due to the non-global Lipschitzness of the coefficients, it is not straightforward to deduce convergence of the Euler scheme applied to (\ref{cbos1.6}). In this section, we go one step further and prove this convergence result uniformly in $N$. To this end, we introduce the function $\kappa_{h}(t) = t_{k}$, $t_{k} \leq t < t_{k+1}$, where $ 0=t_{0}<\dots<t_{n} = T$ is a uniform partition of $[0,T]$, i.e. $t_{k+1} - t_{k} = h$ for all $k=0,\dots,n-1$. We write the continuous version of the numerical scheme (\ref{cbo_dis_ns}) as follows: \begin{align}\label{cboeq5.20} dY^{i}_{N}(t) &= -\beta(t)(Y^{i}_{N}(\kappa_{h}(t)) - \bar{Y}_{N}(\kappa_{h}(t)))dt + \sqrt{2}\sigma(t)\diag(Y^{i}_{N}(\kappa_{h}(t)) - \bar{Y}_{N}(\kappa_{h}(t)))dW^{i}(t)\nonumber \\ & \;\;\;\; + \int_{\mathbb{R}^{d}}\diag(Y^{i}_{N}(\kappa_{h}(t)) - \bar{Y}_{N}(\kappa_{h}(t)))z\mathcal{N}^{i}(dt,dz). \end{align} In this section, our aim is to show mean-square convergence of $Y^{i}_{N}(t)$ to $X^{i}_{N}(t)$ uniformly in $N$, i.e. \begin{align} \lim_{h\rightarrow 0}\sup_{i=1,\dots,N}\mathbb{E}|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2} = 0, \end{align} where $h \rightarrow 0$ means that, keeping $T$ fixed, the time-step of the uniform partition of $[0,T]$ goes to zero. Let Assumptions~\ref{cboh3.1}-\ref{cboasu1.1} hold. If $\mathbb{E}|Y^{i}_{N}(0)|^{2} < \infty$ and $\mathbb{E}|Z|^{2} < \infty$, then the particle system (\ref{cboeq5.20}) is well-posed (cf. Theorem~\ref{cbo_thrm_3.2}). Moreover, if $\mathbb{E}|Y^{i}_{N}(0)|^{2p} <\infty $ and $\mathbb{E}|Z|^{2p} < \infty$ for some $p \geq 1$, then, due to Lemma~\ref{cbolemma3.3}, the following holds: \begin{align} \mathbb{E}\sup_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq K, \label{cbo_neweq_4.45} \end{align} where, at this stage, we cannot claim that $K$ is independent of $h$. However, to prove the convergence of the numerical scheme we need a moment bound that is uniform in $h$ and $N$, which we establish in the next lemma. \begin{lemma}\label{cbo_lem4.6} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} hold. Let $p \geq 1$, $\mathbb{E}|Y^{i}_{N}(0)|^{2p} < \infty$ and $\mathbb{E}|Z|^{2p} < \infty$. Then, the following holds: \begin{align} \sup_{i=1,\dots,N}\mathbb{E}\sup_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq K_{d}, \end{align} where $K_{d}$ is a positive constant independent of $h$ and $N$. \end{lemma} \begin{proof} Let $p$ be a positive integer.
Using Ito's formula, the Cauchy-Bunyakowsky-Schwartz inequality and Young's inequality, we have \begin{align*} |Y^{i}_{N}&(t)|^{2p} = |Y^{i}_{N}(0)|^{2p} - 2p\int_{0}^{t}\beta(s)|Y^{i}_{N}(s)|^{2p-2}\big(Y^{i}_{N}(s)\cdot (Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))\big)ds \\ & \;\;\;\; + 2\sqrt{2}p\int_{0}^{t}\sigma(s)|Y^{i}_{N}(s)|^{2p-2}\big(Y^{i}_{N}(s)\cdot \diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))dW^{i}(s)\big)\\ & \;\;\;\; + 4p(p-1)\int_{0}^{t}\sigma^{2}(s)|Y^{i}_{N}(s)|^{2p-4}|\diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s))) Y^{i}_{N}(s)|^{2}ds\\ & \;\;\;\;+ 2p\int_{0}^{t}\sigma^{2}(s)|Y^{i}_{N}(s)|^{2p-2}|\diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(s)|^{2}ds \\ & \;\;\;\; + \int_{0}^{t}\int_{\mathbb{R}^{d}}\Big(|Y^{i}_{N}(s^{-}) + \gamma(s)\diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))z|^{2p} - |Y^{i}_{N}(s^{-})|^{2p}\Big)\mathcal{N}^{i}(ds,dz) \\ & \leq |Y^{i}_{N}(0)|^{2p} + C \int_{0}^{t}(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p}+|\bar{Y}_{N}(\kappa_{h}(s))|^{2p})ds\\ &\;\;\;\; + 2\sqrt{2}p\int_{0}^{t}\sigma(s)|Y^{i}_{N}(s)|^{2p-2}(Y^{i}_{N}(s)\cdot\diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))dW^{i}(s)) \\ & \;\;\;\;+ C\int_{0}^{t}\int_{\mathbb{R}^{d}}\Big(|Y^{i}_{N}(s^{-})|^{2p} + (|Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p})(1+|z|^{2p})\Big)\mathcal{N}^{i}(ds,dz). \end{align*} First taking supremum over $0\leq t\leq T$ and then expectation, we obtain \begin{align*} \mathbb{E}&\sup_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq \mathbb{E}|Y^{i}_{N}(0)|^{2p} + C\mathbb{E}\int_{0}^{T}\Big(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p}\Big) ds \\ & +2\sqrt{2}p\mathbb{E}\sup_{0\leq t\leq T}\bigg|\int_{0}^{t}\sigma(s)|Y^{i}_{N}(s)|^{2p-2}(Y^{i}_{N}(s)\cdot \diag(Y^{i}_{N}(\kappa_{h}(s))-\bar{Y}_{N}(\kappa_{h}(s)))dW^{i}(s))\bigg| \\ & +C\mathbb{E}\int_{0}^{T}\int_{\mathbb{R}^{d}}\Big(|Y^{i}_{N}(s^{-})|^{2p} + (|Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p})(1+|z|^{2p})\Big)\mathcal{N}^{i}(ds,dz), \end{align*} where $C$ is independent of $h$ and $N$. 
Using the Burkholder-Davis-Gundy inequality (note that we can apply this inequality due to (\ref{cbo_neweq_4.45})) and the fact that $\mathbb{E}|Z|^{2p} < \infty$, we get \begin{align*} \mathbb{E}&\sup_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq \mathbb{E}|Y^{i}_{N}(0)|^{2p} + C\mathbb{E}\int_{0}^{T}\Big(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p}\Big) ds \\ & \;\;\;\;+C\mathbb{E}\bigg(\int_{0}^{T}|Y^{i}_{N}(s)|^{4p-4}\big(Y^{i}_{N}(s)\cdot(Y^{i}_{N}(\kappa_{h}(s))-\bar{Y}_{N}(\kappa_{h}(s)))\big)^{2}ds\bigg)^{1/2} \\ & \;\;\;\;+C\mathbb{E}\int_{0}^{T}\int_{\mathbb{R}^{d}}\Big(|Y^{i}_{N}(s)|^{2p} + (|Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p})(1+|z|^{2p})\Big)\rho_{z}(z)dzds \\ &\leq \mathbb{E}|Y^{i}_{N}(0)|^{2p} + C\mathbb{E}\int_{0}^{T}\Big(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p}\Big) ds \\ &\;\;\;\; +\mathbb{E}\sup_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p-1}\bigg(\int_{0}^{T}|Y^{i}_{N}(\kappa_{h}(s))-\bar{Y}_{N}(\kappa_{h}(s))|^{2}ds\bigg)^{1/2}.\end{align*} Applying Young's inequality and Holder's inequality, we ascertain \begin{align} &\mathbb{E}\sup\limits_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq \mathbb{E}|Y_{N}^{i}(0)|^{2p} + C\int_{0}^{T}(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p}) ds \nonumber \\ & \;\;\;\; + \frac{1}{2}\mathbb{E}\sup\limits_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} + C\mathbb{E}\Big(\int_{0}^{T}|Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s))|^{2}ds\Big)^{p} \nonumber\\ & \leq \mathbb{E}|Y_{N}^{i}(0)|^{2p} + C\int_{0}^{t}(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{2p} + |\bar{Y}_{N}(\kappa_{h}(s))|^{2p}) ds \nonumber \\ & \;\;\;\; + \frac{1}{2}\mathbb{E}\sup\limits_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} + C\mathbb{E}\int_{0}^{T}|Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s))|^{2p}ds. \label{cbo_neweq_4.47} \end{align} Using Jensen's inequality and (\ref{y4.2}), we have \begin{align} |\bar{Y}_{N}(\kappa_{h}(s))|^{2} &\leq \sum\limits_{i=1}^{N}|Y^{i}_{N}(\kappa_{h}(s))|^{2}\frac{e^{-\alpha f(Y^{i}_{N}(\kappa_{h}(s)))}}{\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(\kappa_{h}(s)))}} \leq L_{1} + \frac{L_{2}}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(\kappa_{h}(s))|^{2}. \label{cbo_neweq_4.48} \end{align} Therefore, substituting (\ref{cbo_neweq_4.48}) in (\ref{cbo_neweq_4.47}) yields \begin{align*} &\mathbb{E}\sup\limits_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq 2\mathbb{E}|Y_{N}^{i}(0)|^{2p} + C + C\mathbb{E}\int_{0}^{T}\Big(|Y^{i}_{N}(s)|^{2p} + |Y^{i}_{N}(\kappa_{h}(s))|^{p} + \frac{1}{N}\sum\limits_{i=1}^{N}|Y_{N}^{i}(\kappa_{h}(s))|^{2p}\Big)ds \\ & \leq 2\mathbb{E}|Y_{N}^{i}(0)|^{2p} +C+ C\int_{0}^{T}\Big(\mathbb{E}\sup_{0\leq u\leq s}|Y^{i}_{N}(u)|^{2p} + \frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\sup_{0\leq u\leq s} |Y_{N}^{i}(u)|^{2p}\Big)ds, \end{align*} where $C>0$ is independent of $h$ and $N$. Taking supremum over $ i =1,\dots, N$, we get \begin{align*} \sup\limits_{i=1,\dots,N}\mathbb{E}\sup\limits_{0\leq t\leq T}|Y^{i}_{N}(t)|^{2p} \leq 2\mathbb{E}|Y^{i}_{N}(0)|^{2p}+ C + C\int_{0}^{T}\sup_{i=1,\dots,N}\mathbb{E}\sup_{0\leq u\leq s}|Y^{i}_{N}(u)|^{2p}ds, \end{align*} where $C>0$ is independent of $h$ and $N$. Using Gr\"{o}nwall's lemma, we have the desired result. \end{proof} \begin{lemma}\label{cbo_lem4.7} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} hold. 
Let $\sup_{i=1,\dots,N}\mathbb{E}|X^{i}_{N}(0)|^{4} < \infty$, $ \sup_{i=1,\dots,N} \mathbb{E}|Y^{i}_{N}(0)|^{4} < \infty$, $\mathbb{E}|Z|^{4} < \infty$. Then \begin{align*} \sup_{i=1,\dots,N} \mathbb{E}|Y^{i}_{N}(t) - Y^{i}_{N}(\kappa_{h}(t))|^{2} \leq Ch, \end{align*} where $C$ is a positive constant independent of $N$ and $h$. \end{lemma} \begin{proof} We have \begin{align*} |Y^{i}_{N}(t) &- Y^{i}_{N}(\kappa_{h}(t))|^{2} \leq C\bigg(\bigg|\int_{\kappa_{h}(t)}^{t}( Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))ds\bigg|^{2} \\ & \;\;\;\; + \bigg| \int_{\kappa_{h}(t)}^{t}\diag ( Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))dW^{i}(s)\bigg|^{2}\\ & \;\;\;\; + \bigg|\int_{\kappa_{h}(t)}^{t}\int_{\mathbb{R}^{d}}\diag( Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))z^{}\mathcal{N}^{i}(ds,dz^{})\bigg|^{2}\bigg), \end{align*} where $C$ is independent of $h$ and $N$. Taking expectation and using Ito's isometry (note that we can apply Ito's isometry due to Lemma~\ref{cbo_lem4.6}), we get \begin{align*} \mathbb{E}|Y^{i}_{N}(t) &- Y^{i}_{N}(\kappa_{h}(t))|^{2} \leq C(1+\mathbb{E}|Z|^{2})\bigg(\int_{\kappa_{h}(t)}^{t}\mathbb{E}| Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s))|^{2}ds\bigg). \end{align*} Therefore, use of (\ref{cbo_neweq_4.48}) gives \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}&|Y^{i}_{N}(t) - Y^{i}_{N}(\kappa_{h}(t))|^{2} \leq C(1+\mathbb{E}|Z|^{2})\bigg(\int_{\kappa_{h}(t)}^{t} \sup_{i=1,\dots,N}\mathbb{E}|Y^{i}_{N}(\kappa_{h}(s))|^{2} \\ &\;\;\;\; + 2L_{1} + \frac{L_{2}}{N}\sum\limits_{i=1}^{N}\sup_{i=1,\dots,N}\big( \mathbb{E}|Y^{i}_{N}(\kappa_{h}(s))|^{2})ds\bigg). \end{align*} Using Lemma~\ref{cbolemma3.3} and Lemma~\ref{cbo_lem4.6}, we get \begin{align*} \sup_{i=1,\dots,N} \mathbb{E}|Y^{i}_{N}(t) - Y^{i}_{N}(\kappa_{h}(t))|^{2} \leq C(t -\kappa_{h}(t)) \leq Ch, \end{align*} where $C$ is independent of $N$ and $h$. \end{proof} \begin{theorem} Let Assumptions~\ref{cboh3.1}, \ref{cboh3.2}-\ref{cboasm1.4} hold. Let $\mathbb{E}|Z|^{4} < \infty$, $\sup_{i=1,\dots,N}\mathbb{E}|X^{i}_{N}(0)|^{4} < \infty$, $ \sup_{i=1,\dots,N} \mathbb{E}|Y^{i}_{N}(0)|^{4} < \infty$ and $Y^{i}_{N}(0) = X^{i}_{N}(0) $, $i=1,\dots, N$. Then \begin{align} \lim\limits_{h \rightarrow 0}\lim\limits_{N\rightarrow \infty}\sup_{i=1,\dots,N}\mathbb{E}|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2} = \lim\limits_{N \rightarrow \infty}\lim\limits_{h\rightarrow 0}\sup_{i=1,\dots,N}\mathbb{E}|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}= 0, \end{align} for all $t \in [0,T]$. \end{theorem} \begin{proof} Let \begin{align*} \tau_{1.R} = \inf\Big\{ t\geq 0 \; ; \; \frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}_{N}(t)|^{4} \geq R\Big\}&,\;\;\;\; \tau_{3,R} = \inf\Big\{ t \geq 0\; ; \; \frac{1}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(t)|^{4} \geq R \Big\}, \\ \tau^{h}_{R} & = \tau_{1,R} \wedge \tau_{3,R}, \end{align*} and \begin{align*} \Omega_{3}(t) & = \{ \tau_{1,R} \leq t\} \cup \{ \tau_{3,R} \leq t\}, \;\;\; \Omega_{4}(t) = \Omega \backslash \Omega_{3}(t) = \{ \tau_{1,R} \geq t\} \cap \{ \tau_{3,R} \geq t\} .\end{align*} We have \begin{align*} \mathbb{E}|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2} &= \mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{3}(t)}\big) \nonumber + \mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{4}(t)}\big)\\ & =: E_{3}(t) + E_{4}(t). \end{align*} Let us first estimate the term $E_{3}(t)$. 
Using Cauchy-Bunyakowsky-Schwartz inequality, Chebyshev's inequality, Lemma~\ref{cbolemma3.3} and Lemma~\ref{cbo_lem4.6}, we get \begin{align} \mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{3}(t)}\big) &\leq \big(\mathbb{E}|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{4}\big)^{1/2}\big(\mathbb{E}I_{\Omega_{3}(t)}\big)^{1/2} \nonumber \\ &\leq C \bigg( \frac{1}{RN}\sum\limits_{i=1}^{N}\mathbb{E}\sup_{0\leq s\leq t}|Y^{i}_{N}(s)|^{4} + \frac{1}{RN}\sum\limits_{i=1}^{N}\mathbb{E}\sup_{0\leq s\leq t}|X^{i}_{N}(s)|^{4} \bigg) \leq \frac{C}{R},\label{cbo_neweq_4.49}\end{align} where $C$ is independent of $h$, $N$ and $R$. Note that $ \mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{4}(t)}\big) \leq \mathbb{E}|Y^{i}_{N}(t \wedge \tau^{h}_{R}) - X^{i}_{N}(t \wedge \tau^{h}_{R})|^{2} $. Using Ito's formula, we obtain \begin{align*} &|Y^{i}_{N}(t \wedge \tau^{h}_{R}) - X^{i}_{N}(t \wedge \tau^{h}_{R})|^{2} = |Y^{i}_{N}(0) - X^{i}_{N}(0)|^{2} \\& - 2\int_{0}^{t\wedge \tau^{h}_{R}} \beta(s)\big((Y^{i}_{N}(s) - X^{i}_{N}(s))\cdot (Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)) - X^{i}_{N}(s) + \bar{X}_{N}(s))\big)ds \\ & +2 \sqrt{2}\int_{0}^{t \wedge \tau^{h}_{R}}\sigma(s)\big((Y^{i}_{N}(s) - X^{i}_{N}(s))\cdot \diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)) - X^{i}_{N}(s) + \bar{X}_{N}(s))dW^{i}(s)\big)\\ & + 2\int_{0}^{t\wedge \tau^{h}_{R}}\sigma^{2}(s)|Y^{i}_{N}(\kappa_{h}(s))- \bar{Y}_{N}(\kappa_{h}(s)) - X^{i}_{N}(s) + \bar{X}_{N}(s)|^{2} ds \\ & + \int_{0}^{t\wedge \tau^{h}_{R}}\int_{\mathbb{R}^{d}}\big(|Y^{i}_{N}(s^{-}) - X^{i}_{N}(s^{-}) + \diag(Y^{i}_{N}(\kappa_{h}(s)) - \bar{Y}_{N}(\kappa_{h}(s)))z - \diag(X^{i}_{N}(s) - \bar{X}_{N}(s))z|^{2} \\ & \;\;\;\;\;\;- |Y^{i}_{N}(s^{-}) - X^{i}_{N}(s^{-})|^{2}\big)\mathcal{N}^{i}(ds,dz). \end{align*} Taking expectation on both sides, and using the Cauchy-Bunyakowsky-Schwartz inequality, Young's inequality, Ito's isometry (note that we can apply Ito's isometry due to Lemma~\ref{cbo_lem4.6}) and Doob's optional stopping theorem \cite[Theorem 2.2.1]{cbos11}, we get \begin{align} \mathbb{E}\big(|Y^{i}_{N}(t\wedge \tau^{h}_{R}) - X^{i}_{N}(t\wedge \tau^{h}_{R})|^{2}\big) &\leq C h +C(1+|z|^{2})\mathbb{E}\int_{0}^{t\wedge \tau^{h}_{R}}\Big(|Y^{i}_{N}(\kappa_{h}(s)) - X^{i}_{N}(s)|^{2} \nonumber \\ & \;\;\;\; \;\;\;\;\;\;\;+ |\bar{Y}_{N}(\kappa_{h}(s)) - \bar{X}_{N}(s)|^{2}\Big)ds \nonumber \\ & \leq C\mathbb{E}\int_{0}^{t\wedge \tau^{h}_{R}} \Big(| Y^{i}_{N}(\kappa_{h}(s)) - Y^{i}_{N}(s)|^{2}+|Y^{i}_{N}(s) - X^{i}_{N}(s)|^{2} \nonumber \\ & \;\;\;\; \;\;\;\;\;\;\;+ | \bar{Y}_{N}(\kappa_{h}(s))-\bar{Y}_{N}(s)|^{2} + |\bar{Y}_{N}(s) - \bar{X}_{N}(s)|^{2}\Big) ds. \label{cbo_eq4.30} \end{align} Due to Lemma~\ref{cbo_lem4.7}, we have \begin{align} \sup_{i=1,\dots,N}\mathbb{E}|Y^{i}_{N}(\kappa_{h}(s)) - Y^{i}_{N}(s )|^{2} \leq Ch, \label{cbo_eq4.31} \end{align} where $C$ is independent of $h$ and $N$. Now, we will estimate the term $|\bar{Y}_{N}(s) - \bar{Y}_{N}(\kappa_{h}(s))| $. Recall that we used discrete Jensen's inequality, Assumptions~\ref{cboh3.2}-\ref{cboassu3.4} and Cauchy-Bunyakowsky-Schwartz inequality to obtain (\ref{cbo_neweq_4.28}). 
We apply the same set of arguments as before to get \begin{align*} |\bar{Y}_{N}(s)& - \bar{Y}_{N}(\kappa_{h}(s))| = \bigg|\sum\limits_{i=1}^{N}Y^{i}_{N}(s)\frac{e^{-\alpha f(Y^{i}_{N}(s))}}{\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(s))}} -\sum\limits_{i=1}^{N}Y^{i}_{N}(\kappa_{h}(s))\frac{e^{-\alpha f(Y^{i}_{N}(\kappa_{h}(s)))}}{\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(\kappa_{h}(s)))}}\bigg| \\ & \leq \frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(s))}}\bigg|\frac{1}{N}\sum\limits_{i=1}^{N} \big(Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))\big)e^{-\alpha f(Y^{i}_{N}(s))}\bigg| \\ & \;\;\;\; +\frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(s))}}\bigg|\frac{1}{N}\sum\limits_{i=1}^{N}Y^{i}_{N}(\kappa_{h}(s))\Big(e^{-\alpha f(Y^{i}_{N}(s))} - e^{-\alpha f(Y^{i}_{N}(\kappa_{h}(s)))}\Big)\bigg|\\ & \;\;\;\;+ \bigg|\frac{1}{N}\sum\limits_{i=1}^{N}Y^{i}_{N}(\kappa_{h}(s))e^{-\alpha f(Y^{i}_{N}(\kappa_{h}(s)))}\bigg(\frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(s))}} - \frac{1}{\frac{1}{N}\sum_{j=1}^{N}e^{-\alpha f(Y^{j}_{N}(\kappa_{h}(s)))}}\bigg)\bigg| \\ & \leq C\Bigg(e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}|Y^{j}_{N}(s)|^{2}}\frac{1}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))| \\ & \;\;\;\; +e^{\frac{\alpha K_{u}}{N}\sum_{j=1}^{N}(|Y^{j}_{N}(s)|^{2} + |Y^{j}_{N}(\kappa_{h}(s))|^{2})} \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}(1+ |Y^{i}_{N}(s)|^{2} + |Y^{i}_{N}(\kappa_{h}(s))|^{2})^{2}\bigg)^{1/2}\\ & \;\;\;\;\; \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))|^{2}\bigg)^{1/2}\Bigg), \end{align*} where $C > 0$ is independent of $h$ and $N$. Squaring both sides, we ascertain \begin{align} &|\bar{Y}_{N}(s) - \bar{Y}_{N}(\kappa_{h}(s))|^{2} \leq C\Bigg( e^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}|Y^{j}_{N}(s)|^{2}}\frac{1}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))|^{2} \nonumber \\ & \;\;\;\; +e^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}(|Y^{j}_{N}(s)|^{2} + |Y^{j}_{N}(\kappa_{h}(s))|^{2})} \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}(1+ |Y^{i}_{N}(s)|^{2} + |Y^{i}_{N}(\kappa_{h}(s))|^{2})^{2}\bigg) \nonumber \\ & \;\;\;\;\; \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))|^{2}\bigg)\Bigg). \label{cbo_eq4.32} \end{align} In the similar manner, we can obtain the following bound: \begin{align} &|\bar{X}_{N}(s) - \bar{Y}_{N}(s)|^{2} \leq C\Bigg(e^{ \frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}|X^{j}_{N}(s)|^{2}}\frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}_{N}(s) - Y^{i}_{N}(s)|^{2} \nonumber \\ & \;\;\;\; +e^{\frac{2\alpha K_{u}}{N}\sum_{j=1}^{N}(|X^{j}_{N}(s)|^{2} + |Y^{j}_{N}(s)|^{2})} \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}(1+ |X^{i}_{N}(s)|^{2} + |Y^{i}_{N}(s)|^{2})^{2}\bigg) \nonumber \\ & \;\;\;\;\; \times\bigg(\frac{1}{N}\sum\limits_{i=1}^{N}|X^{i}_{N}(s) - Y^{i}_{N}(s)|^{2}\bigg) \Bigg), \label{cbo_eq4.33} \end{align} where $C>0$ is independent of $h$ and $N$. 
We substitute (\ref{cbo_eq4.31}), (\ref{cbo_eq4.32}) and (\ref{cbo_eq4.33}) in (\ref{cbo_eq4.30}) to get \begin{align*} &\mathbb{E}\big(|Y^{i}_{N}(t\wedge \tau^{h}_{R}) - X^{i}_{N}(t\wedge \tau^{h}_{R})|^{2}\big) \leq C\mathbb{E}\int_{0}^{t\wedge \tau_{R}^{h}}\big(|X^{i}_{N}(s) - Y^{i}_{N}(s)|^{2}\big)ds + Ch \\ & \;\; + CRe^{4\alpha K_{u}\sqrt{R}}\bigg( \mathbb{E}\int_{0}^{t\wedge \tau_{R}^{h}}\frac{1}{N}\sum\limits_{i=1}^{N}\big(|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))|^{2}\big) ds + \mathbb{E}\int_{0}^{t\wedge \tau_{R}^{h}}\frac{1}{N}\sum\limits_{i=1}^{N}\big(|X^{i}_{N}(s) - Y^{i}_{N}(s)|^{2}\big) ds \bigg) \\ & \leq C\int_{0}^{t}\mathbb{E}\big(|X^{i}_{N}(s\wedge \tau_{R}^{h}) - Y^{i}_{N}(s\wedge \tau_{R}^{h})|^{2}\big)ds + Ch + CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\big(|Y^{i}_{N}(s) - Y^{i}_{N}(\kappa_{h}(s))|^{2}\big) ds \\ & \;\; + CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\big(|X^{i}_{N}(s\wedge \tau_{R}^{h}) - Y^{i}_{N}(s\wedge \tau_{R}^{h})|^{2}\big) ds, \end{align*} where $C>0$ is independent of $h$, $N$ and $R$. Taking supremum over $i=1,\dots,N$ and using Lemma~\ref{cbo_lem4.7}, we obtain \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t\wedge \tau^{h}_{R}) &- X^{i}_{N}(t\wedge \tau^{h}_{R})|^{2}\big) \leq CRe^{4\alpha K_{u}\sqrt{R}}h \\ & + CRe^{4\alpha K_{u}\sqrt{R}}\int_{0}^{t}\sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(s\wedge \tau^{h}_{R}) - X^{i}_{N}(s\wedge \tau^{h}_{R})|^{2}\big)ds \bigg), \end{align*} where $C$ is independent of $h$, $N$ and $R$. Using Gr\"{o}nwall's lemma, we get \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t\wedge \tau^{h}_{R}) &- X^{i}_{N}(t\wedge \tau^{h}_{R})|^{2}\big) \leq CRe^{4\alpha K_{u}\sqrt{R}}e^{CRe^{4\alpha K_{u}\sqrt{R}}}h \leq Ce^{e^{C_{u}\sqrt{R}}}h, \end{align*} where $C>0$ and $C_{u}>0$ are constants independent of $h$, $N$ and $R$. We choose $R= \frac{1}{C_{u}^{2}}(\ln{(\ln{(h^{-1/2})})})^{2}$. Consequently, we have \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{4}(t)}\big)\leq \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t\wedge \tau^{h}_{R}) &- X^{i}_{N}(t\wedge \tau^{h}_{R})|^{2}\big) \leq Ch^{1/2}, \end{align*} where $C>0$ is independent of $h$ and $N$. This implies \begin{align} \lim\limits_{h \rightarrow 0}\lim\limits_{N \rightarrow \infty} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{4}(t)}\big) = \lim\limits_{N \rightarrow \infty}\lim\limits_{h \rightarrow 0} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{4}(t)}\big) = 0. \label{cbo_neweq_4.54} \end{align} The term (\ref{cbo_neweq_4.49}) and the choice of $R$ provide the following estimate: \begin{align*} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{3}(t)}\big) \leq \frac{C}{(\ln{(\ln{(h^{-1/2})})})^{2}}, \end{align*} where $C$ is independent of $h$ and $N$. This gives \begin{align} \lim\limits_{h \rightarrow 0}\lim\limits_{N \rightarrow \infty} \sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{3}(t)}\big) = \lim\limits_{N \rightarrow \infty}\lim\limits_{h \rightarrow 0}\sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}I_{\Omega_{3}(t)}\big) = 0. 
\label{cbo_neweq_4.55} \end{align} As a consequence of (\ref{cbo_neweq_4.54}) and (\ref{cbo_neweq_4.55}), we get \begin{align*} \lim\limits_{h\rightarrow 0}\lim\limits_{N \rightarrow \infty}\sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}\big)= \lim\limits_{N \rightarrow \infty}\lim\limits_{h \rightarrow 0}\sup_{i=1,\dots,N}\mathbb{E}\big(|Y^{i}_{N}(t) - X^{i}_{N}(t)|^{2}\big) = 0. \end{align*} \end{proof} \section{Numerical Examples}\label{cbo_num_exp} In this section, we conduct numerical experiments on the Rastrigin and Rosenbrock functions by implementing the models (\ref{cbos1.5}), (\ref{cbos1.6}), (\ref{cbo_neweq_2.17}) and the model with common noise introduced in \cite{cbo4, cbo5}. We use the Euler scheme for implementation with $h= 0.01$. We run $100$ simulations and report the success rates. We call a run of $N$ particles a success if $|\bar{Y}_{N}(T) - x_{\min}| \leq 0.25$. Defining the success rate in this manner is consistent with earlier CBO papers. \begin{experiment} We perform the experiment with the CBO model (\ref{cbos1.5}), the JumpCBO model (\ref{cbos1.6}), the JumpCBOwCPN model (the jump-diffusion CBO model with common Poisson noise from (\ref{cbo_neweq_2.17})) and the CBOwCWN model (the CBO model with common Wiener noise of \cite{cbo4,cbo5}) for the Rastrigin function \begin{equation} f(x)= 10 + \sum_{i=1}^{d}\big((x_i - B)^2 - 10\cos(2\pi (x_i - B))\big)/d, \end{equation} where we take $d= 20$. The minimum is located at $(0,\dots,0)\in \mathbb{R}^{20}$. In this experiment for the Rastrigin function, the initial search space is $[-6,6]^{20}$ and the final time is $T= 100$. We take $\beta = 1$, $\sigma = 5.1 $ for the CBO, CBOwCWN, JumpCBO and JumpCBOwCPN models. We take $\gamma(t) = 1 $ when $t \leq 20$ and $\gamma(t) = e^{1-t/20}$ when $t > 20$ for the JumpCBO and JumpCBOwCPN models. Also, $\Zstroke$ is distributed as a standard Gaussian random variable and we choose the jump intensity $\lambda$ of the Poisson process equal to $20$. \end{experiment} \captionof{table}{Success rate for $\alpha =20$}\label{exptab8.1} \begin{center} \begingroup \setlength{\tabcolsep}{2.9pt} \begin{tabular}{c c c c c } \hline\hline $N$ & CBO & \shortstack{CBOwCWN} & JumpCBO & \shortstack{JumpCBOwCPN}\\ [0.5ex] \hline 20 & 53 & 1 & 61 & 65 \\ 50 & 62 & 0 & 69 & 72\\ 80 & 22 & 2 & 41 & 40 \\ 100 & 1 & 2 & 29 & 25 \\ [1ex] \hline \end{tabular} \endgroup \end{center} \hfill \captionof{table}{Success rate for $\alpha = 30 $ }\label{exptab8.2} \begin{center} \begingroup \setlength{\tabcolsep}{2.9pt} \begin{tabular}{c c c c c } \hline\hline $N$ & CBO & \shortstack{CBOwCWN} & JumpCBO & \shortstack{JumpCBOwCPN} \\ [0.5ex] \hline 20 & 87 & 0 & 90 & 94\\ 50 & 99 & 0 & 100 & 100 \\ 80 & 100 & 0 & 100 & 100 \\ 100 & 100 & 0 & 100 & 100 \\ [1ex] \hline \end{tabular} \endgroup \end{center} In the case of the Rastrigin function, the performance of the JumpCBO model (\ref{cbos1.6}), the JumpCBOwCPN model (\ref{cbo_neweq_2.17}) and the CBO model (\ref{cbos1.5}) is comparable. However, the CBOwCWN model of \cite{cbo4,cbo5} does not perform well. As $\alpha$ is increased from $20$ to $30$, the success rates improve considerably. We have taken constant $\beta$ and $\sigma$, and decaying $\gamma$, for the jump-diffusion CBO models. As one can see, jumps have a positive impact on the performance of CBO when $\alpha = 20$. Another noticeable fact is that the performance of the jump-diffusion models with common or independent Poisson processes is very similar.
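For concreteness, we include a minimal NumPy sketch of the type of simulation loop described above: an Euler step of length $h$ for the jump-diffusion particle system, the weighted average $\bar{Y}_{N}$, and the success criterion $|\bar{Y}_{N}(T)-x_{\min}|\leq 0.25$. It is only an illustration under our own assumptions (the function names \texttt{cbo\_jump\_run} and \texttt{consensus\_point}, the choice $B=0$, constant $\beta$ and $\sigma$, and $d$-dimensional standard Gaussian jump marks); it is not the implementation used to produce the tables.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, N, h, T = 20, 50, 0.01, 100.0
alpha, beta, sigma, lam, B = 30.0, 1.0, 5.1, 20.0, 0.0  # B = 0: minimizer at the origin

def rastrigin(x):                      # x has shape (N, d)
    return 10.0 + np.sum((x - B)**2 - 10.0*np.cos(2.0*np.pi*(x - B)), axis=1)/d

def gamma(t):                          # decaying jump coefficient
    return 1.0 if t <= 20.0 else np.exp(1.0 - t/20.0)

def consensus_point(Y, f_vals):        # weighted average with weights exp(-alpha f)
    w = np.exp(-alpha*(f_vals - f_vals.min()))   # shift for numerical stability
    return (w[:, None]*Y).sum(axis=0)/w.sum()

def cbo_jump_run(n_steps):
    Y = rng.uniform(-6.0, 6.0, size=(N, d))      # initial search space [-6, 6]^d
    for k in range(n_steps):
        t = k*h
        ybar = consensus_point(Y, rastrigin(Y))
        drift = -beta*(Y - ybar)*h
        diff = np.sqrt(2.0*h)*sigma*(Y - ybar)*rng.standard_normal((N, d))
        marks = np.zeros((N, d))                 # compound Poisson increments
        for i, k_i in enumerate(rng.poisson(lam*h, size=N)):
            if k_i > 0:
                marks[i] = rng.standard_normal((k_i, d)).sum(axis=0)
        Y = Y + drift + diff + gamma(t)*(Y - ybar)*marks
    return consensus_point(Y, rastrigin(Y))

runs = 100
successes = sum(np.linalg.norm(cbo_jump_run(int(T/h)) - B) <= 0.25 for _ in range(runs))
print("success rate (%):", 100*successes/runs)
\end{verbatim}
The explicit loop over the Poisson marks is kept only for readability; in practice one would vectorize it and reuse the same routine with the parameter choices of the Rosenbrock experiment below.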
It is also clear from the experiment that the CBOwCWN model of \cite{cbo4,cbo5} does not induce enough noise in the dynamics of the particle system for effective space exploration. \begin{experiment} We perform the experiment with the CBO model (\ref{cbos1.5}), the JumpCBO model (\ref{cbos1.6}), the JumpCBOwCPN model (\ref{cbo_neweq_2.17}) and the CBOwCWN model (the CBO model with common Wiener noise of \cite{cbo4,cbo5}) for the Rosenbrock function \begin{equation} f(x)= \sum\limits_{i=1}^{d-1}[100(x_{i+1} - x_{i}^{2})^{2} + (x_{i} - 1)^{2}]/d, \end{equation} where we take $d= 5$. The minimum is located at $(1,\dots,1)\in \mathbb{R}^{5}$. In this experiment for the Rosenbrock function, the initial search space is $[-1,3]^5$ and the final time is $T= 120$. We take $\beta = 1$, $\sigma = 5 $ for the CBO and CBOwCWN models. For the jump-diffusion models, we take $\beta(t) = 2- e^{-t/100}$, $\sigma(t) = 4 + e^{-t/90}$, and $\gamma(t) = 1 $ for $t \leq 90$ and $\gamma(t) = e^{1-t/90}$ for $t > 90$. Note that $\beta(0) = 1$ and $\sigma(0) = 5$, which are the same as the parameters $\beta$ and $\sigma$ for the CBO and CBOwCWN models. Also, $\Zstroke$ is distributed as a standard Gaussian random variable and we choose the jump intensity $\lambda$ of the Poisson process equal to $90$. \end{experiment} \captionof{table}{Success rate for $\alpha = 20 $ }\label{exptab8.3} \begin{center} \begingroup \setlength{\tabcolsep}{2.9pt} \begin{tabular}{c c c c c } \hline\hline $N$ & CBO & CBOwCWN & JumpCBO & JumpCBOwCPN \\ [0.5ex] \hline 20 & 2 & 1 & 35 & 37 \\ 50 & 3 & 1 & 75 & 76 \\ 80 & 3 & 0 & 96 & 89 \\ 100 & 4 & 4 & 85 & 94 \\ [1ex] \hline \end{tabular} \endgroup \end{center} \captionof{table}{Success rate for $\alpha = 30 $ }\label{exptab8.4} \begin{center} \begingroup \setlength{\tabcolsep}{2.9pt} \begin{tabular}{c c c c c } \hline\hline $N$ & CBO & CBOwCWN & JumpCBO & JumpCBOwCPN \\ [0.5ex] \hline 20 & 6 & 2 & 20 & 25 \\ 50 & 3 & 0 & 49 & 45 \\ 80 & 5 & 2 & 69 & 64 \\ 100 & 4 & 1 & 74 & 70 \\ [1ex] \hline \end{tabular} \endgroup \end{center} In the case of the Rosenbrock function, there is a significant improvement in finding the global minimum when using the jump-diffusion models (\ref{cbos1.6}) and (\ref{cbo_neweq_2.17}) in comparison with (\ref{cbos1.5}) and the CBOwCWN model of \cite{cbo4,cbo5}. As is the case with the Rastrigin function, for the Rosenbrock function both jump-diffusion models have similar performance. We note that the Rosenbrock function has quartic growth. We take time-dependent $\beta(t)$, $\sigma(t) $ and $\gamma(t)$ for the jump-diffusion models so that $\beta(t)$ is an increasing function, $\sigma(t)$ is a decreasing function, and $\gamma(t)$ is constant for some period of time and then starts decreasing exponentially. This experiment illustrates a good balance of \emph{exploration} and \emph{exploitation} delivered by the proposed jump-diffusion models. The particles explore the space until $t= 90$ and after that start exploiting the searched space. \section{Concluding remarks} We have developed a new CBO algorithm based on jump-diffusion SDEs, and we have studied its well-posedness both at the particle level and at the level of its mean-field approximation. The key feature of the jump-diffusion CBO is a more effective energy landscape exploration driven by the randomness introduced by both Wiener and Poisson processes. In practice, this translates into better success rates in finding the global minimizer and into robustness with respect to the initialization, which can be located far away from the global minimizer.
A natural extension of the current work is a systematic study of CBO with constraints in the search space as recently discussed in \cite{cbo50,cbo44,cbo45,cbo46}. This is particularly challenging because of the need to accurately treat boundary conditions for the SDEs (see e.g. \cite{cbo37}). Another interesting research direction is the exploration of jump-diffusion processes in the framework of kinetic-type CBO models \cite{cbo47,cbo48}. \section*{Acknowledgements} AS was supported by EPSRC grant no. EP/W52251X/1. DK was supported by EPSRC grants EP/T024429/1 and EP/V04771X/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising. \bibliographystyle{alpha} \bibliography{references} \end{document}
2205.04817v2
http://arxiv.org/abs/2205.04817v2
Trisections obtained by trivially regluing surface-knots
\documentclass[pdftex]{amsart} \address{Department of Mathematics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo, 152-8551, Japan} \email{[email protected]} \usepackage{amsmath,amssymb} \usepackage{amsfonts} \usepackage{comment} \usepackage{graphicx} \usepackage[all]{xy} \def\objectstyle{\displaystyle} \usepackage{hyperref} \sloppy \usepackage{tikz} \usetikzlibrary{intersections,calc,arrows} \usetikzlibrary{cd} \usetikzlibrary{decorations.markings} \usetikzlibrary{arrows.meta} \usepackage{here} \usepackage{caption} \usepackage[final]{pdfpages} \usepackage{amsthm} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{exam}[thm]{Example} \newtheorem*{thm*}{Theorem} \newtheorem*{cor*}{Corollary} \theoremstyle{definition} \newtheorem{dfn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{exm}[thm]{Example} \newtheorem{que}[thm]{Question} \newtheorem{nota}[thm]{Notation} \newtheorem{con}[thm]{Conjecture} \newtheorem*{que*}{Question} \newtheorem*{con*}{Conjecture} \begin{document} \title{Trisections obtained by trivially regluing surface-knots} \author{Tsukasa Isoshima} \date{} \begin{abstract} Let $S$ be a $P^2$-knot which is the connected sum of a 2-knot with normal Euler number 0 and an unknotted $P^2$-knot with normal Euler number $\pm2$ in a closed 4-manifold $X$ with trisection $T_{X}$. Then, we show that the trisection of $X$ obtained by the trivial gluing relative trisections of $\overline{\nu(S)}$ and $X-\nu(S)$ is diffeomorphic to a stabilization of $T_{X}$. It should be noted that this result is not obvious since boundary-stabilizations introduced by Kim and Miller are used to construct a relative trisection of $X-\nu(S)$. As a corollary, if $X=S^4$, the resulting trisection is diffeomorphic to a stabilization of the genus 0 trisection of $S^4$. This result is related to the conjecture that is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} In 2012, Gay and Kirby \cite{GK} introduced the notion of a trisection of a 4-manifold, which is an analogue of a Heegaard splitting of a 3-manifold. A trisection of a 4-manifold with boundary is called a relative trisection. Meier and Zupan \cite{MZ2} introduced the notion of a bridge trisection of a surface-knot, which is an analogue of a bridge decomposition of a classical knot. A surface-knot can be put in a nice position in a 4-manifold, called a bridge position, such that the surface-knot is trisected according to a trisection of the 4-manifold. Let $T=(X_1 , X_2 , X_3)$ be a trisection of a 4-manifold $X$, namely, $X=X_1 \cup X_2 \cup X_3$ and each $X_i$ is a 4-dimensional 1-handlebody. For a 2-knot $K$ in $X$ which is in 1-bridge position, the decomposition of $X-\nu(K)$ into the union of three $X_i-\nu(K)$'s is a relative trisection of $X-\nu(K)$, where $\nu(K)$ is an open tubular neighborhood of $K$. On the other hand, for a surface-knot $S$ in $X$ which is not a 2-knot, the decomposition of $X-\nu(S)$ is never a relative trisection of $X-\nu(S)$. Kim and Miller \cite{KM} introduced a new technique, called a boundary-stabilization, to change the above decomposition of $X-\nu(S)$ into a relative trisection. 
We can construct a new trisection of $X=\overline{\nu(S)} \cup_{id} X-\nu(S)$ by gluing a relative trisection of $\overline{\nu(S)}$ and that of $X-\nu(S)$ constructed above using a gluing technique given by Castro and Ozbagci \cite{CO}. In this section, the new trisection is called a trisection obtained by trivially gluing $\nu(S)$ and $X-\nu(S)$. This trisection and $T$ are stably diffeomorphic (resp. stably isotopic), namely, they are diffeomorphic (resp. isotopic) after finitely many stabilizations. However, it is not obvious whether this trisection is diffeomorphic, especially isotopic, to a stabilization of $T$ since when we construct a relative trisection of $X-\nu(S)$ from the union of three $X_i-\nu(S)$'s, we use boundary-stabilizations as mentioned above. Thus, we can think about the following question. \begin{que*}[Question \ref{que:original question}] Let $S$ be a surface-knot in a closed 4-manifold $X$ with trisection $T$. Is a trisection obtained by trivially gluing $\nu(S)$ and $X-\nu(S)$ diffeomorphic, especially isotopic, to a stabilization of $T$? In particular, if $X=S^4$, does this hold? \end{que*} The Price twist is a surgery along a $P^2$-knot $P$ in a 4-manifold $X$, which yields at most three different 4-manifolds, namely, $X$, $\Sigma_P({X})$ and a non-simply connected 4-manifold $\tau_P({X})$. The closed 4-manifold $\Sigma_P({S^4})$ is a homotopy 4-sphere. In this paper, we call the twist having $X$ the trivial Price twist. Kim and Miller \cite{KM} constructed trisections obtained by the Price twist by attaching a relative trisection of $\overline{\nu(P)}$ obtained from its Kirby diagram to a relative trisection of $X-\nu(P)$ constructed by a boundary-stabilization. In this paper, we show the following theorem for Question \ref{que:original question}. Note that a trisection obtained by the trivial Price twist along $S$ corresponds to that obtained by trivially gluing a relative trisection of $\overline{\nu(S)}$ and that of $X-\nu(S)$. \begin{thm*}[Theorem \ref{main theorem}] Let $X$ be a closed 4-manifold and $S$ the connected sum of a 2-knot $K$ with normal Euler number 0 and an unknotted $P^2$-knot with normal Euler number $\pm2$ in $X$. Also let $T_{(X,S)}$ be a bridge trisection of $(X,S)$ and $T_X$ the underlying trisection. Suppose that $S$ is in bridge position with respect to $T_X$. Also let $T_{X}^{'}$ be the underlying trisection of the bridge trisection obtained by meridionally stabilizing $T_{(X,S)}$ so that $S$ is in 2-bridge position with respect to $T_{X}^{'}$. Then, the trisection $T_S$ obtained by the trivial Price twist along $S$ is diffeomorphic to a stabilization of $T_{X}^{'}$. In particular, the trisection $T_S$ is diffeomorphic to a stabilization of $T_{X}$. \end{thm*} In the proof of Theorem \ref{main theorem}, we will perform handle slides and destabilizations many times (see also \cite{Na}). A $P^2$-knot $S$ in $S^4$ is said to be of Kinoshita type if $S$ is the connected sum of a 2-knot and an unknotted $P^2$-knot. It is conjectured that every $P^2$-knot in $S^4$ is of Kinoshita type (see Remark \ref{rem:Kisoshita conjecture}). \begin{cor*}[Corollary \ref{main corollary}] For each $P^2$-knot $S$ in $S^4$ that is of Kinoshita type, the trisection obtained by the trivial Price twist along $S$ is diffeomorphic to a stabilization of the genus 0 trisection of $S^4$. 
\end{cor*} This implies that if any two diffeomorphic trisections of $S^4$ are isotopic, the resulting trisection gives a positive evidence to the conjecture that is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings. \begin{con*}[\cite{MSZ}] Every trisection of $S^4$ is isotopic to either the genus 0 trisection or its stabilization. \end{con*} \section*{Organization} In Section \ref{sec:preliminaries}, we review trisections, relative trisections and bridge trisections. In Section \ref{sec:Price twist}, we recall a surgery along a $P^2$-knot in a 4-manifold, called the Price twist and provide a topic related to a trisection obtained by the Price twist. In Section \ref{sec:boundary-stabilization}, we review the definition of a boundary-stabilization and the way of constructing a relative trisection of the complement of a surface-knot. Finally, in Section \ref{sec:main theorem}, we raise a question on a stabilization of a trisection obtained by the trivial regluing of a surface-knot and prove our main theorem and its corollary related to the conjecture that is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings. \section*{Acknowledgement} The author would like to thank his supervisor Hisaaki Endo for his helpful comments on this research and careful reading of this paper. He also would like to thank Maggie Miller for her helpful comments on his question and David Gay for his advice on our main theorem. \section{Preliminaries}\label{sec:preliminaries} In this paper, we assume that 4-manifolds are compact, connected, oriented, and smooth unless otherwise stated and a surface-knot in a 4-manifold is a closed surface smoothly embedded in the 4-manifold. \subsection{Trisections of 4-manifolds} In this subsection, we review a definition and properties of trisections of closed 4-manifolds introduced in \cite{GK}. Let $g$, $k_1$, $k_2$ and $k_3$ be integers satisfying $0 \le k_1,k_2,k_3 \le g$. \begin{dfn}\label{def:trisection} Let $X$ be a closed 4-manifold. A $(g;k_1,k_2,k_3)$-\textit{trisection} of $X$ is a decomposition $X=X_1 \cup X_2 \cup X_3$ into three submanifolds $X_1,X_2,X_3$ of $X$ satisfying the following conditions: \begin{itemize} \item For each $i=1,2,3$, there exists a diffeomorphism $\phi_i \colon X_i \to Z_{k_i}$, where $Z_{k_i} = \natural_{k_i}S^1 \times D^3$. \item For each $i=1,2,3$, $\phi_i(X_i \cap X_{i-1}) = Y_{k_i,g}^{-}$ and $\phi_i(X_i \cap X_{i+1}) = Y_{k_i,g}^{+}$, where $Y_{k_i,g}^{\pm}$ is the genus $g$ Heegaard splitting $\partial{Z_{k_i}} = Y_{k_i,g}^{-} \cup Y_{k_i,g}^{+}$ of $\partial{Z_{k_i}}$ obtained by stabilizing the standard genus $k_i$ Heegaard splitting of $\partial{Z_{k_i}}$ $g-k_i$ times. \end{itemize} \end{dfn} Note that when $X$ admits a trisection $X=X_1 \cup X_2 \cup X_3$, we call the 3-tuple $T=(X_1, X_2, X_3)$ also a trisection of $X$. If $k_1=k_2=k_3=k$, the trisection is called a \textit{balanced} trisection, or a $(g,k)$-trisection; if not, it is called an \textit{unbalanced} trisection. For a $(g,k)$-trisection, since $\chi(X)=2+g-3k$, we simply call the trisection a \textit{genus} $g$ trisection. For example, the 4-sphere $S^4$ admits the $(0,0)$-trisection, namely genus 0 trisection. For a trisection $(X_1,X_2,X_3)$, let $H_\alpha=X_3 \cap X_1$, $H_\beta=X_1 \cap X_2$ and $H_\gamma=X_2 \cap X_3$. Then, the trisection is uniquely determined from $H_\alpha \cup H_\beta \cup H_\gamma$ \cite{LP}. The union $H_\alpha \cup H_\beta \cup H_\gamma$ is called the \textit{spine}. 
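As a quick worked check of the relation $\chi(X)=2+g-3k$ recalled above, which we add here only for the reader's convenience and which uses nothing beyond the examples appearing in this paper, one can verify
\begin{align*}
\chi(S^4) &= 2 = 2+0-3\cdot 0 \quad \text{for the $(0,0)$-trisection of $S^4$},\\
\chi(\mathbb{C}P^2) &= 3 = 2+1-3\cdot 0 \quad \text{for the $(1,0)$-trisection of $\mathbb{C}P^2$ described below.}
\end{align*}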
Given a trisection, we can define its diagram, called a trisection diagram. Note that from the definition, we see that the triple intersection $X_1 \cap X_2 \cap X_3$ is an oriented closed surface $\Sigma_g$ of genus $g$. \begin{dfn} Let $\Sigma$ be a compact, connected, oriented surface, and $\delta$, $\epsilon$ collections of disjoint simple closed curves on $\Sigma$. The 3-tuples $(\Sigma, \delta, \epsilon)$ and $(\Sigma, \delta^{'}, \epsilon^{'})$ are said to be \textit{diffeomorphism and handleslide equivalent} if there exists a self diffeomorphism $h$ of $\Sigma$ such that $h(\delta)$ and $h(\epsilon)$ are related to $\delta^{'}$ and $\epsilon^{'}$ by a sequence of handleslides, respectively. \end{dfn} \begin{dfn} A $(g;k_1,k_2,k_3)$-\textit{trisection diagram} is a 4-tuple $(\Sigma_g,\alpha,\beta,\gamma)$ satysfying the following conditions: \begin{itemize} \item $(\Sigma_g,\alpha,\beta)$ is diffeomorphism and handleslide equivalent to the standard genus $g$ Heegaard diagram of $\#_{k_1}S^1 \times S^2$. \item $(\Sigma_g,\beta,\gamma)$ is diffeomorphism and handleslide equivalent to the standard genus $g$ Heegaard diagram of $\#_{k_2}S^1 \times S^2$. \item $(\Sigma_g,\gamma,\alpha)$ is diffeomorphism and handleslide equivalent to the standard genus $g$ Heegaard diagram of $\#_{k_3}S^1 \times S^2$. \end{itemize} \end{dfn} Figure \ref{fig:Heegaarddiagramfork_iS^1×S^2} describes the standard genus $g$ Heegaard diagram of $\#_{k_i}S^1 \times S^2$. Note that given a trisection diagram $(\Sigma_g,\alpha,\beta,\gamma)$, $\alpha$, $\beta$ and $\gamma$ are respectively indicated by red, blue and green curves as in Figure \ref{fig:trisection diagram of CP^2}. \begin{figure}[h] \begin{center} \includegraphics[width=13cm, height=7cm, keepaspectratio, scale=1]{Heegaarddiagram.pdf} \end{center} \caption{The standard genus $g$ Heegaard diagram of $\#_{k_i}S^1 \times S^2$. } \label{fig:Heegaarddiagramfork_iS^1×S^2} \end{figure} \begin{exam} Figure \ref{fig:trisection diagram of CP^2} is a $(1,0)$-trisection diagram of $\mathbb{C}P^2$ (see also Figure \ref{fig:dptd of CP^2 and CP^1}). \end{exam} \begin{figure}[h] \begin{center} \includegraphics[width=5.5cm, height=7cm, keepaspectratio, scale=1]{diagramofCP2.pdf} \end{center} \caption{A $(1,0)$-trisection diagram of $\mathbb{C}P^2$. } \label{fig:trisection diagram of CP^2} \end{figure} \begin{dfn}[\cite{Is}] Let $X$ be a closed 4-manifold, and $T=(X_1,X_2,X_3)$ and $T^{'}=(X_1^{'},X_2^{'},X_3^{'})$ trisections of $X$. We say that $T$ and $T^{'}$ are \textit{diffeomorphic} if there exists a diffeomorphism $h \colon X \to X$ such that $h(X_i)=X_i^{'}$ for each $i=1,2,3$. We say that $T$ and $T^{'}$ are \textit{isotopic} if there exists an isotopy $\{h_t\}_{t \in [0,1]}$ of $X$ such that $h_0=id$ and $h_1(X_i)=X_i^{'}$ for each $i=1,2,3$. \end{dfn} Note that $T$ and $T^{'}$ are diffeomorphic if and only if trisection diagrams of $T$ and $T^{'}$ are related by handle slides on the same color curves and diffeomorphisms of a surface. As with the stabilization for a Heegaard splitting, we can define a stabilization for a trisection. \begin{dfn} Let $(X_1,X_2,X_3)$ be a trisection and $C$ a boundary-parallel arc properly embedded in $X_i \cap X_j$. We define $X_i^{'}$, $X_j^{'}$, and $X_k^{'}$ as follows, where $\{i,j,k\}=\{1,2,3\}$. \begin{itemize} \item $X_i^{'}=X_i-\nu(C)$, \item $X_j^{'}=X_j-\nu(C)$, \item $X_k^{'}=X_k \cup \overline{\nu(C)}$. 
\end{itemize} The replacement of $(X_1,X_2,X_3)$ by $(X_1^{'},X_2^{'},X_3^{'})$ is said to be the $k$-\textit{stabilization}. \end{dfn} Note that the stabilization does not depend on the choice of an arc since any two boundary-parallel arcs in a 3-dimensional 1-handlebody are isotopic. We can define a stabilization for a trisection using its trisection diagram. \begin{dfn} Let $(\Sigma,\alpha,\beta,\gamma)$ be a trisection diagram. The diagram obtainted by connect-summing $(\Sigma,\alpha,\beta,\gamma)$ with one of three diagrams depicted in Figure \ref{fig:stabilizationfordiagram} is called the \textit{stabilization} of $(\Sigma,\alpha,\beta,\gamma)$. \end{dfn} \begin{figure}[h] \begin{center} \includegraphics[width=12cm, height=7cm, keepaspectratio, scale=1]{stabilizationfordiagram.pdf} \end{center} \caption{The unbalanced trisection diagrams of $S^4$. } \label{fig:stabilizationfordiagram} \end{figure} The diagrams in Figure \ref{fig:stabilizationfordiagram} are $(1;1,0,0),(1;0,1,0),(1;0,0,1)$-trisection diagrams of $S^4$ from left to right. Note that for a $(g;k_1,k_2,k_3)$-trisection diagram $(\Sigma,\alpha,\beta,\gamma)$, the diagram obtained by connect-summing $(\Sigma,\alpha,\beta,\gamma)$ with the leftmost (resp. middle, resp. rightmost) diagram in Figure \ref{fig:stabilizationfordiagram} is a $(g+1;k_{1}+1,k_2,k_3)$ (resp. $(g+1;k_{1},k_{2}+1,k_3)$, resp. $(g+1;k_{1},k_2,k_{3}+1)$)-trisection diagram. Given a trisection diagram $(\Sigma, \alpha, \beta, \gamma)$, we can define a closed 4-manifold $X(\Sigma, \alpha, \beta, \gamma)$ as follows: We attach 2-handles to $\Sigma \times D^2$ along $\alpha \times \{1\}$, $\beta \times \{e^\frac{2{\pi}i}{3}\}$, and $\gamma \times \{e^\frac{4{\pi}i}{3}\}$, where the framing of each 2-handle is the surface framing. Then, we attach 3, 4-handles. Note that the way of attaching 3, 4-handles is unique up to diffeomorphism \cite{LP}. Gay and Kirby \cite{GK} showed that every closed 4-manifold $X$ admits a trisection with nice handle decomposition. Moreover, they showed that any two trisections of a fixed closed 4-manifold are stably isotopic. Namely, they are isotopic after finitely many stabilizations. Note that they proved it in the balanced case. In general, an $i$-stabilized trisection is not isotopic to a $j$-stabilized trisection when $i\not=j$ \cite{MSZ}. For more details on trisections of closed 4-manifolds, see \cite{GK}. \subsection{Relative trisections} In this subsection, we review trisections of 4-manifolds with boundary, called relative trisections. Before the definition, we introduce some notations. Let $g$, $k$, $p$ and $b$ be non-negative integers with $b \ge 1$ and $g+p+b-1 \ge k \ge 2p+b-1$. Also let $\Sigma_p^b$ be a compact, connected, oriented genus $p$ surface with $b$ boundary components and $l=2p+b-1$. We define $D$, $\partial^{-}D$, $\partial^{0}D$, and $\partial^{+}D$ as follows: \[D=\left\{(r,\theta) \ | \ r \in [0,1],\ \theta \in [-\frac{\pi}{3},\frac{\pi}{3}]\right\}, \ \partial^{-}D=\left\{(r,\theta) \ | \ r \in [0,1],\ \theta=-\frac{\pi}{3}\right\},\] \[\partial^{0}D=\left\{(r,\theta) \ | \ r =1,\ \theta \in [-\frac{\pi}{3},\frac{\pi}{3}]\right\},\ \partial^{+}D=\left\{(r,\theta) \ | \ r \in [0,1],\ \theta=\frac{\pi}{3}\right\}.\] Then, $\partial{D}=\partial^{-}D \cup \partial^{0}D \cup \partial^{+}D$ holds. We write $P$ for $\Sigma_p^b$ and $U$ for $D\times P$. 
Then, from the decomposition of $\partial{D}$, we have $\partial{U}=\partial^{-}{U} \cup \partial^{0}{U} \cup \partial^{+}{U}$, where \[\partial^{\pm}{U}=\partial^{\pm}{D} \times P,\ \partial^{0}{U}=P \times \partial^{0}{D} \cup \partial{P} \times D.\] For an integer $n > 0$, let $V_n=\natural_{n}S^1 \times D^3$ and $\partial{V_{n}}=\partial^{-}{V_{n}} \cup \partial^{+}{V_{n}}$ be the standard genus $n$ Heegaard splitting of $\partial{V_{n}}$. Moreover, for an integer $s \ge n$, the Heegaard splitting of $\partial{V_{n}}$ obtained by stabilizing the standard Heegaard splitting is denoted by $\partial{V_{n}}=\partial_{s}^{-}{V_{n}} \cup \partial_{s}^{+}{V_{n}}$. Henceforth, let $n=k-2p-b+1=k-l$, $s=g-k+p+b-1$ ($V_{n}=V_{k-2p-b+1}=V_{k-l}$). Lastly, we define $Z_k=U \natural V_n$, where the boundary sum is taken by identifying the neighborhood of a point in int($\partial^{-}U \cap \partial^{+}U$) with the neighborhood of a point in int($\partial_{s}^{-}V_{n} \cap \partial_{s}^{+}V_{n}$). Here, we define $Y_k=\partial{Z_k}=\partial{U} \# \partial{V_n}$. Then, from the above decomposition, we have $Y_{k}=Y_{g,k;p,b}^{-} \cup Y_{g,k;p,b}^{0} \cup Y_{g,k;p,b}^{+}$, where $Y_{g,k;p,b}^{\pm}=\partial^{\pm}U \natural \partial_{s}^{\pm}V_n$ and $Y_{g,k;p,b}^{0}=\partial^{0}U=P \times \partial^{0}{D} \cup \partial{P} \times D$. Using these notations, we can define a relative trisection as follows. \begin{dfn}\label{def: relative trisection} Let $X$ be a 4-manifold with connected boundary. The decomposition $X=X_1 \cup X_2 \cup X_3$ of $X$ satisfying the following conditions is called a $(g,k;p,b)$-\textit{relative trisection}: \begin{itemize} \item For each $i=1,2,3$, there exists a diffeomorphism $\phi_i \colon X_i \to Z_k$. \item For each $i=1,2,3$, $\phi_i(X_{i} \cap X_{i-1})=Y_{g,k;p,b}^{-}$, $\phi_i(X_{i} \cap X_{i+1})=Y_{g,k;p,b}^{+}$ and $\phi_i(X_{i} \cap \partial{X})=Y_{g,k;p,b}^{0}$, where $X_{4}=X_{1}$ and $X_{0}=X_{3}$. \end{itemize} \end{dfn} Note that this definition is that of a \textit{balanced} relative trisection. As with the definition \ref{def:trisection}, we can define an \textit{unbalanced} relative trisection. Moreover in Definition \ref{def: relative trisection}, $X_i \cap X_j \cap \partial{X} \cong \Sigma_{p}^{b}$ must be connected since $\partial{X}$ is assumed to be connected. This fact is used in Section \ref{sec:boundary-stabilization} to consider a relative trisection of the complement of a surface-knot. Given a relative trisection, we can define a relative trisection diagram. \begin{dfn} A $(g,k;p,b)$-\textit{relative trisection diagram} is a 4-tuple $(\Sigma_g^b,\alpha,\beta,\gamma)$ satysfying the following conditions: \begin{itemize} \item $\alpha$, $\beta$ and $\gamma$ are respectively $(g-p)$-tuples of curves on $\Sigma_g^b$. \item Each of the 3-tuples $(\Sigma_{g}^{b},\alpha,\beta)$, $(\Sigma_{g}^{b},\beta,\gamma)$, $(\Sigma_{g}^{b},\gamma, \alpha)$ is diffeomorphism and handleslide equivalent to the diagram described in Figure \ref{fig:model diagram of relative trisection}. \end{itemize} \end{dfn} \begin{figure}[h] \centering \includegraphics[width=11cm, height=7cm, keepaspectratio, scale=1]{modeldiagramofrelativetrisection.pdf} \setlength{\captionmargin}{50pt} \caption{The standard diagram for a relative trisection diagram. 
Note that $l=2p+b-1$.} \label{fig:model diagram of relative trisection} \end{figure} \begin{lem}[Lemma 11 in \cite{CGPC}]\label{lem:open book decomposition} A $(g,k;p,b)$-relative trisection of a 4-manifold $X$ with non-empty boundary induces an open book decomposition on $\partial{X}$ with page $\Sigma_p^b$ (hence binding $\partial{\Sigma_p^b}$). \end{lem} If we want to glue several relative trisection diagrams, we must describe a diagram with arcs, called an \textit{arced relative trisection diagram}. There exists an algorithm for drawing such arcs. \begin{lem}[Lemma 2.7 in \cite{CO}]\label{lem:gluing} For $i=1,2$, let $X_i$ be a 4-manifold with nonempty and connected boundary, and $T_i$ a relative trisection of $X_i$. Also let $\mathcal{O}X_i$ be the open book decomposition on $\partial{X_i}$ induced by $T_i$. If $f \colon \partial{X_1} \to \partial{X_2}$ is an orientation-reversing diffeomorphism which takes $\mathcal{O}X_1$ to $\mathcal{O}X_2$, then we obtain a trisection of $X=X_1 \cup_{f} X_2$ by gluing $T_1$ and $T_2$. \end{lem} Note that if there exists a diffeomorphism $f$ as above, the page of $\mathcal{O}X_1$ is diffeomorphic to the page of $\mathcal{O}X_2$ via $f$. Thus, if $T_i$ is a $(g_i,k_i;p_i,b_i)$-relative trisection, then $p_1=p_2$ and $b_1=b_2$. Let $(\Sigma(i),\alpha(i),\beta(i),\gamma(i),a(i),b(i),c(i))$ be an arced relative trisection diagram of $X_i$. If there exists $f$ as in Lemma \ref{lem:gluing}, we can obtain three kinds of new simple closed curves in $\Sigma(1) \cup_{f} \Sigma(2)$, i.e. $a(1) \cup a(2)$, $b(1) \cup b(2)$ and $c(1) \cup c(2)$, via $f$. Thus, we have the following proposition, where $\Sigma=\Sigma(1) \cup_{f} \Sigma(2)$ and $\tilde{\alpha}$ (resp. $\tilde{\beta}$, resp. $\tilde{\gamma}$) $=(a(1)_j \cup_{\partial} a(2)_j)_j$ (resp. $(b(1)_j \cup_{\partial} b(2)_j)_j$, resp. $(c(1)_j \cup_{\partial} c(2)_j)_j$). \begin{prop}[Proposition 2.12 in \cite{CO}] In addition to the assumptions in Lemma \ref{lem:gluing}, let $(\Sigma(i),\alpha(i),\beta(i),\gamma(i),a(i),b(i),c(i))$ be an arced relative trisection diagram of $X_i$. Then, the 4-tuple $(\Sigma, \alpha, \beta, \gamma)$ is a trisection diagram of $X$, where $\alpha=\alpha(1) \cup \alpha(2) \cup \tilde{\alpha}$, and $\beta$ and $\gamma$ are defined analogously. \end{prop} \begin{prop}[Theorem 5 in \cite{CGPC}] Let $(\Sigma,\alpha,\beta,\gamma)$ be a $(g,k;p,b)$-relative trisection diagram and $\Sigma_{\alpha}$ the surface obtained by performing the surgery along $\alpha$. Suppose that this operation comes with an embedding $\phi_{\alpha} \colon \Sigma-\alpha \to \Sigma_{\alpha}$. Consider the following steps. \begin{enumerate} \item Choose a collection of arcs $a$ such that $a$ is disjoint from $\alpha$ in $\Sigma$ and $\phi_{\alpha}(a)$ cuts $\Sigma_{\alpha}$ into a disk. Note that $a$ consists of $2p+b-1$ arcs. \item Choose $b$ by handle sliding $a$ over $\alpha$ so that $b$ is disjoint from $\beta$. If necessary, we slide $\beta_i$ over $\beta_j$. In this case, the resulting $\beta$ is denoted by $\beta{'}$. If handle slides are not needed, $\beta{'}=\beta$. \item Choose $c$ by handle sliding $b$ over $\beta{'}$ so that $c$ is disjoint from $\gamma$. If necessary, we slide $\gamma_i$ over $\gamma_j$. In this case, the resulting $\gamma$ is denoted by $\gamma{'}$. If handle slides are not needed, $\gamma{'}=\gamma$. \end{enumerate} Then, $(\Sigma, \alpha, \beta{'}, \gamma{'}, a, b, c)$ is an arced relative trisection diagram.
\end{prop} \begin{exam} Figure \ref{fig:D^2 bundle over S^2} is a $(2,1;0,2)$-relative trisection diagram of the $D^2$ bundle over $S^2$ with Euler number $-1$ and its arced relative trisection diagram constructed from the algorithm. \end{exam} \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=6.5cm, keepaspectratio, scale=1]{exampleofrelativetrisectiondiagram.pdf} \setlength{\captionmargin}{50pt} \caption{(Left) A $(2,1;0,2)$-relative trisection diagram of the $D^2$ bundle over $S^2$ with Euler number $-1$. (Right) Its arced relative trisection diagram.} \label{fig:D^2 bundle over S^2} \end{figure} For more details on relative trisections, see \cite{Ca, CGPC, CO}. \subsection{Bridge trisections} In this subsection, we review trisections of surface-knots, called bridge trisections. \begin{dfn} Let $V$ be a 4-dimensional 1-handlebody and $\mathcal{D}$ a collection of disks properly embedded in $V$. We say that $\mathcal{D}$ is \textit{trivial} if the disks of $\mathcal{D}$ are simultaneously isotoped into $\partial{V}$. \end{dfn} \begin{dfn}\label{def:trivial tangle} Let $H$ be a 3-dimensional 1-handlebody and $\tau=\{\tau_i\}$ a collection of arcs properly embedded in $H$. We say that $\tau$ is \textit{trivial} if $\tau_i$ is isotoped into $\partial{H}$ for each $i$. Or equivalently, there exists a collection $\Delta=\{\Delta_i\}$ of disks in $H$ with $\Delta_i \cap \Delta_j =\emptyset$ such that $\partial{\Delta_i} = \tau_i \cup \tau_i^{'}$ for some arc $\tau_i^{'} \subset \partial{H}$. We call $\tau$, $\Delta$ and $\tau_i^{'}$ \textit{trivial tangles}, \textit{bridge disks} and a \textit{shadow} of $\tau_i$ respectively. \end{dfn} \begin{dfn}[\cite{MZ2}]\label{def:bridge trisection} Let $X=X_1 \cup X_2 \cup X_3$ be a $(g;k_1,k_2,k_3)$-trisection of a closed 4-manifold $X$, and $S$ a surface-knot in $X$. A decomposition $(X,S) = (X_1,\mathcal{D}_1) \cup (X_2,\mathcal{D}_2) \cup (X_3,\mathcal{D}_3)$ is a $(g;k_1,k_2,k_3;b;c_1,c_2,c_3)$-\textit{bridge trisection} of $(X,S)$ if \begin{itemize} \item For each $i=1,2,3$, $\mathcal{D}_i$ is a collection of trivial $c_i$ disks in $X_i$. \item For $i \not= j$, $\mathcal{D}_i \cap \mathcal{D}_j$ form trivial $b$ tangles in $X_i \cap X_j$. \end{itemize} We say that $S$ is in $(b;c_1,c_2,c_3)$-\textit{bridge position} with respect to $(X_1,X_2,X_3)$ if $(X,S) = (X_1,S \cap X_1) \cup (X_2,S \cap X_2) \cup (X_3,S \cap X_3)$ is a $(g;k_1,k_2,k_3;b;c_1,c_2,c_3)$-bridge trisection. \end{dfn} We call the trisection $(X_1,X_2,X_3)$ the \textit{underlying trisection} of the bridge trisection. \begin{rem}\label{rem:bridge trisection in $S^4$} In Definition \ref{def:bridge trisection}, if $X=S^4$, then the trisection is the $(0,0)$-trisection \cite[Definition 1.2]{MZ1}. \end{rem} As with a balanced trisection, when $k_1=k_2=k_3=k$ and $c_1=c_2=c_3=c$, we say that the decomposition of $(X,S)$ is a $(g,k;b,c)$-\textit{bridge trisection} and $S$ is in $(b,c)$-\textit{bridge position}. Note that if $S$ is in $(b;c_1,c_2,c_3)$-bridge position, then $\chi(S)=c_1+c_2+c_3-b$. So, when $c_1=c_2=c_3$, we often say that $S$ is in $b$-bridge position. Meier and Zupan \cite{MZ2} showed that every pair of a 4-manifold $X$ and a surface-knot $S$ in $X$ admits a bridge trisection, using a technical operation called \textit{meridional stabilization}. 
\begin{dfn} Let $(X,S)=(X_1, \mathcal{D}_1) \cup (X_2, \mathcal{D}_2) \cup (X_3, \mathcal{D}_3)$ be a bridge trisection and $C$ an arc in $\mathcal{D}_i \cap \mathcal{D}_j$ whose endpoints are in distinct components of $\mathcal{D}_k$. We define $(X_\ell^{'},\mathcal{D}_\ell^{'})$ as follows, where $\{i,j,k\}=\{1,2,3\}$. \begin{itemize} \item $(X_i^{'},\mathcal{D}_i^{'})=(X_i - \nu(C), \mathcal{D}_i - \nu(C))$ \item $(X_j^{'},\mathcal{D}_j^{'})=(X_j - \nu(C), \mathcal{D}_j - \nu(C))$ \item $(X_k^{'},\mathcal{D}_k^{'})=(X_k \cup \overline{\nu(C)}, \mathcal{D}_k \cup (\overline{\nu(C)} \cap S))$ \end{itemize} The replacement of $(X_\ell,\mathcal{D}_\ell)$ by $(X_\ell^{'},\mathcal{D}_\ell^{'})$ for all $\ell$ is said to be a $k$-meridional stabilization. \end{dfn} Note that when we meridionally stabilize a bridge trisection of $(X,S)$, the underlying trisection of $X$ is simply stabilized. This observation is used in the proof of our main theorem. \begin{thm}[Theorem 2 in \cite{MZ2}]\label{thm:good bridge position} Let $S$ be a surface-link in a closed 4-manifold $X$ with a $(g,k)$-trisection $T$. Then, the pair $(X,S)$ admits a $(g,k;b,n)$-bridge trisection with $b=3n-\chi(S)$, where $n$ is the number of connected components of $S$. \end{thm} Note that in Theorem \ref{thm:good bridge position}, if $S$ is a 2-knot, then $S$ can be in 1-bridge position with respect to a trisection obtained by stabilizing $T$. Furthermore, if $S$ is a $P^2$-knot, then $S$ can be in 2-bridge position. A surface-knot in $S^4$ can be described by a triplane diagram introduced by Meier and Zupan \cite{MZ1}. On the other hand, it is difficult to describe a surface-knot in a general 4-manifold in the same way. Therefore, Meier and Zupan \cite{MZ2} developed another type of diagram using the shadows in Definition \ref{def:trivial tangle}. It is called a shadow diagram. \begin{dfn} Let $(X,S)=(X_1,\mathcal{D}_1) \cup (X_2,\mathcal{D}_2) \cup (X_3,\mathcal{D}_3)$ be a bridge trisection. A 4-tuple $(\Sigma, (\alpha, a), (\beta, b), (\gamma, c))$ is called a \textit{shadow diagram} if the 4-tuple $(\Sigma,\alpha,\beta,\gamma)$ is a trisection diagram of $(X_1,X_2,X_3)$, and $a$, $b$ and $c$ are shadows of $\mathcal{D}_1 \cap \mathcal{D}_2$, $\mathcal{D}_2 \cap \mathcal{D}_3$ and $\mathcal{D}_3 \cap \mathcal{D}_1$ respectively. In particular, when each of $a$, $b$ and $c$ consists of a single shadow, the shadow diagram is called a \textit{doubly pointed trisection diagram}. \end{dfn} Each 2-knot in a closed 4-manifold admits a doubly pointed trisection diagram since it can be put in 1-bridge position. Note that for a 2-knot $K$ in 1-bridge position with respect to a trisection $T$ of $X$, the underlying trisection diagram of $(X,K)$ is the diagram of $T$. For example, Figure \ref{fig:dptd of CP^2 and CP^1} describes a doubly pointed trisection diagram of $(\mathbb{C}P^2, \mathbb{C}P^1)$. We call the two black points of a doubly pointed trisection diagram \textit{base points}; they appear in the proof of our main theorem. For more details on bridge trisections, see \cite{MZ1, MZ2}. \begin{figure}[h] \begin{center} \includegraphics[width=5.5cm, height=6cm, keepaspectratio, scale=1]{dptdofCP2andCP1.pdf} \end{center} \setlength{\captionmargin}{50pt} \caption{A doubly pointed trisection diagram of $(\mathbb{C}P^2,\mathbb{C}P^1)$. The red, blue, and green curves describe a $(1,1)$-trisection diagram of $\mathbb{C}P^2$ and the arcs $a$, $b$, and $c$ describe $\mathbb{C}P^1$.
Note that in a doubly pointed trisection diagram, we do not need to draw the arcs since there is a unique way to describe them.} \label{fig:dptd of CP^2 and CP^1} \end{figure} \section{The Price twist}\label{sec:Price twist} In this section, we review a surgery along a $P^2$-knot in a closed 4-manifold, called the Price twist. Let $S$ be a $P^2$-knot, that is, a real projective plane smoothly embedded in a closed 4-manifold $X$, with normal Euler number $e(S)=\pm2$. Note that when $X=S^4$, by the Whitney-Massey theorem \cite{Ma, Wh}, each $P^2$-knot $S$ satisfies $e(S)=\pm2$. Then, for a tubular neighborhood $\nu(S)$ of $S$ in $X$, the boundary $\partial{\nu(S)}$ is a Seifert-fibered space $Q$ over $S^2$ with three singular fibers labeled $S_0$, $S_1$ and $S_{-1}$, where these indices are respectively $\pm2$, $\pm2$ and $\mp2$ when $e(S)=\pm2$. Since $\partial(X-\nu(S)) \cong Q$, $\partial(X-\nu(S))$ has the same labels as $\partial{\nu(S)}$. Price \cite{Pr} showed that there exist three kinds of self-homeomorphisms of $\partial{\nu(S)}$ up to isotopy, namely those with $S_{-1} \mapsto S_{-1}$, $S_{-1} \mapsto S_{0}$ and $S_{-1} \mapsto S_{1}$. Thus, when we remove $\nu(S)$ from $X$ and reglue it according to $\phi \colon \partial \nu(S) \to \partial(X-\nu(S))$, we obtain at most three 4-manifolds up to diffeomorphism, as follows (the notation follows \cite{KM}): \begin{itemize} \item If $\phi(S_{-1})=S_{-1}$, the resulting manifold is $X$. \item If $\phi(S_{-1})=S_{0}$, the resulting manifold is denoted by $\tau_S(X)$. \item If $\phi(S_{-1})=S_{1}$, the resulting manifold is denoted by $\Sigma_S(X)$. \end{itemize} This operation is called the \textit{Price twist} of $X$ along $S$. In particular, in this paper we call the first twist, that is, the one yielding the original manifold $X$, the \textit{trivial Price twist}. Note that $\Sigma_S(S^4)$ is a homotopy 4-sphere. Let $\Sigma_K^G(X)$ be the 4-manifold obtained by the Gluck twist along $K$, where $K$ is a 2-knot in $X$. Then, from \cite{KSTY}, we see that for a $P^2$-knot $S=K\#P_{\pm}$, $\Sigma_S(X) \cong \Sigma_K^G(X)$ holds, where $P_{\pm}$ is an unknotted $P^2$-knot with normal Euler number $\pm2$ in $X$. So, for a 2-knot $K$ satisfying $\Sigma_K^G(S^4) \cong S^4$ such as a twist spun 2-knot, we have $\Sigma_S(S^4) \cong S^4$. Thus, we can ask whether the conjecture that is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings \cite[Conjecture 3.11]{MSZ} holds for such $\Sigma_S(S^4)$ (\cite[Question 6.2]{KM}). Note that \cite[Question 6.2]{KM} is a specific case of \cite[Conjecture 3.11]{MSZ}. \begin{que*}[Question 6.2 in \cite{KM}]\label{KM's Question} Let $S$ be a $P^2$-knot in $S^4$ so that $\Sigma_S(S^4) \cong S^4$. Is a trisection of $\Sigma_S(S^4)$ obtained from the algorithm of Section 5 in \cite{KM} isotopic to a stabilization of the genus 0 trisection of $S^4$? \end{que*} \begin{con*}[Conjecture 3.11 in \cite{MSZ}] Every trisection of $S^4$ is isotopic to either the genus 0 trisection or its stabilization. \end{con*} \begin{rem} From \cite{Na}, we immediately see that the right of Figure 19 in \cite{KM}, that is, a $(6,2)$-trisection diagram of $\Sigma_{P_-}(S^4)$, is a stabilization of the $(0,0)$-trisection diagram of $S^4$ up to handle slides and diffeomorphisms. \end{rem} \begin{rem}\label{rem:Kisoshita conjecture} For a 2-knot $K$ and an unknotted $P^2$-knot $P$ in $S^4$, a $P^2$-knot $S$ that admits a decomposition $S=K\#P$ is said to be \textit{of Kinoshita type}.
It is not known whether every $P^2$-knot in $S^4$ is of Kinoshita type. This question is called the Kinoshita question or the Kinoshita conjecture. One may approach this question via $\tau_S({S^4})$ \cite{KM}. Note that in \cite[Question 6.2]{KM}, if $S$ is of Kinoshita type, then the trisections in the question are diffeomorphic to trisections obtained by the Gluck twist \cite{Na}. In particular, if $S$ is the connected sum of the unknotted $P^2$-knot and a spun or twist spun 2-knot, \cite[Question 6.2]{KM} reduces to \cite[Question 6.4]{GM} in the sense of diffeomorphic trisections.\end{rem} \begin{que*}[Question 6.4 in \cite{GM}] Is the trisection diagram constructed by \cite{Me} and \cite[Lemma 5.5]{GM} for the Gluck twist along a spun or twist spun 2-knot a stabilization of the $(0,0)$-trisection diagram of $S^4$? \end{que*} This question has not been answered even in the case of the spun trefoil, which can be regarded as the simplest nontrivial spun 2-knot. The following theorem, called Waldhausen's theorem, explains why \cite[Conjecture 3.11]{MSZ} is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings. \begin{thm}[\cite{Wal},\cite{Sch}] The 3-sphere $S^3$ admits a unique Heegaard splitting up to isotopy for each genus. \end{thm} For more details on the Price twist and a trisection obtained by the Price twist, see \cite{KM, Pr}. \section{A boundary-stabilization}\label{sec:boundary-stabilization} In this section, we review a boundary-stabilization for a 4-manifold with boundary introduced in \cite{KM}. \begin{dfn} Let $Y=Y_1 \cup Y_2 \cup Y_3$ be a 4-manifold with $\partial{Y}\not=\emptyset$, where $Y_i \cap Y_j = \partial{Y_i} \cap \partial{Y_j}$ for $i \ne j$, and $C$ an arc properly embedded in $Y_i \cap Y_j \cap \partial{Y}$ whose endpoints are in $Y_1 \cap Y_2 \cap Y_3$. Also let $\nu(C)$ be a fixed open tubular neighborhood of $C$. Then, we define $\tilde{Y_i}, \tilde{Y_j}, \tilde{Y_k}$ as follows, where $\{i,j,k\}=\{1,2,3\}$: \begin{itemize} \item $\tilde{Y_i} = Y_i - \nu(C)$, \item $\tilde{Y_j} = Y_j - \nu(C)$, \item $\tilde{Y_k} = Y_k \cup \overline{\nu(C)}$. \end{itemize} The replacement of $(Y_1,Y_2,Y_3)$ by $(\tilde{Y_i}, \tilde{Y_j}, \tilde{Y_k})$ is said to be a \textit{boundary-stabilization} along $C$. In this case, we say that $\tilde{Y_k}$ has been obtained by boundary-stabilizing $Y_k$ along $C$. \end{dfn} As we have seen in Section \ref{sec:intro}, we need a boundary-stabilization in order to construct a relative trisection of the complement of a surface-knot in a closed 4-manifold. The following explanation makes this more precise. Let $S$ be a surface-knot in a closed 4-manifold $X$ with trisection $(X_1,X_2,X_3)$. Suppose that $S$ is in $(b,c)$-bridge position with respect to $(X_1,X_2,X_3)$. Let $X_i^{'} = X_i-\nu(S)$. Then, $X-\nu(S)$ admits a natural decomposition $X-\nu(S) = X_1^{'} \cup X_2^{'} \cup X_3^{'}$. However, the decomposition $(X_1^{'}, X_2^{'}, X_3^{'})$ of $X-\nu(S)$ is a relative trisection if and only if $S$ is a 2-knot and $S$ is in 1-bridge position, that is, $b=1$. This is because if $b>1$, then the triple intersection $X_i^{'} \cap X_j^{'} \cap \partial{(X-\nu(S))}$, which is diffeomorphic to the disjoint union $\sqcup_{b} S^1 \times I$ of $b$ annuli, is disconnected. This contradicts the fact that for a relative trisection $(Y_1,Y_2,Y_3)$, if $\partial{Y}$ is connected, then $Y_i \cap Y_j \cap \partial{Y}$ must be connected. So, except when $S$ is a 2-knot in 1-bridge position, $X-\nu(S)$ does not admit $(X_1^{'}, X_2^{'}, X_3^{'})$ as a relative trisection.
However, we can refine the decomposition by boundary-stabilizing each $X_i^{'}$ so that $X-\nu(S)$ admits a relative trisection for every $S$. Briefly, the construction is as follows. Since we focus on $P^2$-knots in this paper, we first review the boundary-stabilization of the complement of a $P^2$-knot. In the above situation, suppose also that $S$ is a $P^2$-knot and $b=2$ (Theorem \ref{thm:good bridge position}). For each $i=1,2,3$ and $\{i,j,k\} = \{1,2,3\}$, we define $C_i$ to be an arc in $X_j^{'} \cap X_k^{'} \cap \partial(X-\nu(S))$ whose endpoints are in $X_1^{'} \cap X_2^{'} \cap X_3^{'}$ and lie on two distinct connected components of $\partial({X_1^{'} \cap X_2^{'} \cap X_3^{'}})$. Take $C_1$, $C_2$ and $C_3$ so that they have distinct endpoints. Then, if we boundary-stabilize $X_\ell^{'}$ along $C_\ell$, we obtain the decomposition $X-\nu(S) = \tilde{X_1} \cup \tilde{X_2} \cup \tilde{X_3}$, where $\tilde{X_\ell}$ is the submanifold of $X-\nu(S)$ obtained by boundary-stabilizing $X_\ell^{'}$ along $C_1$, $C_2$, and $C_3$. We see that $\tilde{X_i} \cap \tilde{X_j} \cap \partial(X-\nu(S))$ is connected, and by further checking the structure of the induced open book decomposition, we obtain the following proposition. \begin{prop}[\cite{KM}] The 3-tuple $(\tilde{X_1}, \tilde{X_2}, \tilde{X_3})$ is a relative trisection of $X-\nu(S)$. \end{prop} For a surface-knot $S$ other than a $P^2$-knot, we can construct a relative trisection of the complement of $S$ as in the case of a $P^2$-knot. The differences are that for each $i=1,2,3$, we take $C_i$ to be a collection of $2-\chi(S)$ arcs and take each arc in $C_i$ so that the arc is parallel to a different one in $\nu(S) \cap X_j \cap X_k$. Note that unlike a stabilization of a trisection, a boundary-stabilization depends on the choice of an arc. If $S$ is a $P^2$-knot, the type of a relative trisection of $X-\nu(S)$ obtained by boundary-stabilizations as above is either $(g,k;0,3)$ or $(g^{'},k^{'};1,1)$. In Section \ref{sec:main theorem}, since we glue a $(2,2;0,3)$-relative trisection of $\overline{\nu(S)}$ and a relative trisection of $X-\nu(S)$ obtained from boundary-stabilizations, we need to boundary-stabilize $X-\nu(S)=\bigcup_{i=1}^{3} (X_{i} - \nu(S))$ so that the type of the resulting relative trisection is $(g,k;0,3)$ for some $g$ and $k$. Kim and Miller developed an algorithm to describe a relative trisection diagram of the complement of a surface-knot using the shadow diagram; see \cite[Section 4]{KM}. For more details on boundary-stabilizations and a relative trisection of the complement of a surface-knot, see \cite{KM}. \section{Main Theorem}\label{sec:main theorem} As we have seen in Section \ref{sec:intro}, we can consider the following question. \begin{que}\label{que:original question} Let $S$ be a surface-knot in a closed 4-manifold $X$ with trisection $T$. Is a trisection obtained by trivially gluing $\nu(S)$ and $X-\nu(S)$ diffeomorphic, or even isotopic, to a stabilization of $T$? In particular, if $X=S^4$, does this hold? \end{que} For a restricted case, we answer Question \ref{que:original question} affirmatively in Theorem \ref{main theorem}, our main theorem. \begin{thm}\label{main theorem} Let $X$ be a closed 4-manifold and $S$ the connected sum of a 2-knot $K$ with normal Euler number 0 and an unknotted $P^2$-knot with normal Euler number $\pm2$ in $X$. Also let $T_{(X,S)}$ be a bridge trisection of $(X,S)$ and $T_X$ the underlying trisection.
Suppose that $S$ is in bridge position with respect to $T_X$. Also let $T_{X}^{'}$ be the underlying trisection of the bridge trisection obtained by meridionally stabilizing $T_{(X,S)}$ so that $S$ is in 2-bridge position with respect to $T_{X}^{'}$. Then, the trisection $T_S$ obtained by the trivial Price twist along $S$ is diffeomorphic to a stabilization of $T_{X}^{'}$. In particular, the trisection $T_S$ is diffeomorphic to a stabilization of $T_{X}$. \end{thm} \begin{proof}[Proof of Theorem \ref{main theorem}] Let ${\mathcal{D}}_Y$ be a relative trisection diagram of a 4-manifold $Y$. Also let $P_+$ and $P_-$ be unknotted $P^2$-knots in $X$ with normal Euler number $2$ and $-2$, respectively. Since the preferred diagrams ${\mathcal{D}}_{\nu(P_+)}$ and ${\mathcal{D}}_{S^4-\nu(P_+)}$ in \cite{KM} are the mirror images of ${\mathcal{D}}_{\nu(P_-)}$ and ${\mathcal{D}}_{S^4-\nu(P_-)}$, respectively, it suffices to prove Theorem \ref{main theorem} for $S=K \# P_-$. \subsection*{Constructing $T_S$} It follows from \cite{KM} that ${\mathcal{D}}_{X-{\nu(S)}}$ is the union of ${\mathcal{D}}_{S^4-\nu(P_-)}$ and ${\mathcal{D}}_{X-\nu(K)}$. Thus, the gluing of ${\mathcal{D}}_{\nu(P_-)}$ and ${\mathcal{D}}_{X-{\nu(S)}}$ by the trivial Price twist is described in Figure \ref{fig:gluing diagram}. Note that we construct $\mathcal{D}_{\nu(P_-)}$ in Figure \ref{fig:gluing diagram} by deforming the preferred diagram of $\nu(P_-)$ in \cite{KM} so that the gluing can be described as in Figure \ref{fig:gluing diagram}. In Figure \ref{fig:gluing diagram}, if we draw the arcs of ${\mathcal{D}}_{\nu(P_-)}$ and ${\mathcal{D}}_{S^4-\nu(P_-)}$, then we obtain Figure \ref{fig:starting diagram}. The diagram depicted in Figure \ref{fig:starting diagram} corresponds to $T_S$. It should be noted that we do not draw the curves and arcs on the surface of ${\mathcal{D}}_{X-\nu(K)}$ in Figure \ref{fig:gluing diagram}, although ${\mathcal{D}}_{X-\nu(K)}$ does have them. \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7.95cm, keepaspectratio, scale=1]{gluingdiagram.pdf} \end{center} \setlength{\captionmargin}{50pt} \caption{The gluing diagram of ${\mathcal{D}}_{\nu(P_-)}$ and ${\mathcal{D}}_{X-{\nu(S)}}$ by the trivial Price twist along $S=K\# P_-$ in $X$. We glue ${\mathcal{D}}_{\nu(P_-)}$, ${\mathcal{D}}_{S^4-\nu(P_-)}$, and ${\mathcal{D}}_{X-\nu(K)}$ along the boundary components with the corresponding labels.} \label{fig:gluing diagram} \end{figure} From now on, we deform the trisection diagrams explicitly. Note that from Figure \ref{fig:starting diagram} to Figure \ref{fig:after fifth destabilization}, the undrawn part describes ${\mathcal{D}}_{X-\nu(K)}$ with its arcs, and, if necessary, we make two arcs of $\mathcal{D}_{X-\nu(K)}$ parallel by performing handle slides. Also note that for an $\alpha$ curve $\alpha_i$, we also denote by $\alpha_i$ a curve obtained by sliding $\alpha_i$ over another $\alpha$ curve. The same applies to $\beta$ and $\gamma$ curves. \subsection*{The first destabilization} In Figure \ref{fig:starting diagram} (or Figure \ref{fig:before first destabilization}), we will destabilize $\alpha_1$, $\beta_1$ and $\gamma_1$. To do this, we slide $\gamma_{2}$ over $\gamma_{3}$ so that the geometric intersection number of $\gamma_{2}$ and $\alpha_{1}$ is 2. Then, we slide $\gamma_{2}$, $\gamma_{3}$ and $\gamma_{4}$ over $\gamma_{1}$ in this order. After that, we slide $\gamma_{2}$ over $\gamma_{4}$. As a result, $\gamma_{2}$ does not intersect $\alpha_1$.
We also slide $\gamma_{4}$ over $\gamma_{3}$ so that $\gamma_{4}$ does not intersect $\alpha_1$. Finally, we slide $\gamma_3$ over $\gamma_1$, so that all $\gamma$ curves except $\gamma_1$ are disjoint from $\alpha_1$ and $\beta_1$. Then, we obtain Figure \ref{fig:before first destabilization}. In Figure \ref{fig:before first destabilization}, by destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, that is, erasing $\gamma_1$ and surgering $\alpha_1$ or $\beta_1$ (if we choose $\alpha_1$, then we erase $\beta_1$ and vice versa), we get Figure \ref{fig:after first destabilization}. \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{startingdiagram.pdf} \end{center} \caption{Starting diagram.} \label{fig:starting diagram} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{beforefirstdestabilization.pdf} \end{center} \caption{Before the first destabilization.} \label{fig:before first destabilization} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{afterfirstdestabilization.pdf} \end{center} \caption{After the first destabilization.} \label{fig:after first destabilization} \end{figure} \subsection*{The second destabilization} In Figure \ref{fig:after first destabilization} (or Figure \ref{fig:before second destabilization}), we will destabilize $\alpha_1$, $\beta_1$ and $\gamma_1$. To do this, we first need to make $\beta_{1}$ parallel to $\gamma_1$. We slide $\beta_3$ over $\beta_4$ and $\beta_1$ over $\beta_2$. We again slide $\beta_1$ over $\beta_2$ so that $\beta_1$ is parallel to $\gamma_1$. After that, we slide $\gamma_2$ and $\gamma_3$ over $\gamma_1$ in order to remove the intersections of $\gamma_2$ and $\gamma_3$ with $\alpha_1$. As a result, we obtain Figure \ref{fig:before second destabilization}. In Figure \ref{fig:before second destabilization}, by destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, that is, erasing $\beta_1$ and $\gamma_1$ and surgering $\alpha_1$, we get Figure \ref{fig:after second destabilization}. \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{beforeseconddestabilization.pdf} \end{center} \caption{Before the second destabilization.} \label{fig:before second destabilization} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{afterseconddestabilization.pdf} \end{center} \caption{After the second destabilization.} \label{fig:after second destabilization} \end{figure} \subsection*{The third destabilization} In Figure \ref{fig:after second destabilization} (or Figure \ref{fig:before third destabilization}), we will destabilize $\alpha_1$, $\beta_1$ and $\gamma_1$. To do this, we need to make $\beta_{1}$ parallel to $\gamma_1$. We slide $\gamma_1$ over $\gamma_2$ so that $\gamma_1$ does not intersect $\alpha_2$. Then, we slide $\beta_1$ over $\beta_2$ and $\beta_3$, so that $\beta_1$ is parallel to $\gamma_1$. As a result, we obtain Figure \ref{fig:before third destabilization}. In Figure \ref{fig:before third destabilization}, by destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, we get Figure \ref{fig:after third destabilization}.
\begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{beforethirddestabilization.pdf} \end{center} \caption{Before the third destabilization.} \label{fig:before third destabilization} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{afterthirddestabilization.pdf} \end{center} \caption{After the third destabilization.} \label{fig:after third destabilization} \end{figure} \subsection*{The fourth destabilization} In Figure \ref{fig:after third destabilization} (or Figure \ref{fig:before fourth destabilization}), we will destabilize $\alpha_1$, $\beta_1$ and $\gamma_1$. To do this, we need to make $\beta_{1}$ parallel to $\alpha_1$. We slide $\beta_1$ over $\beta_2$ and $\alpha_1$ over $\alpha_2$, so that $\alpha_1$ is parallel to $\beta_1$. As a result, we obtain Figure \ref{fig:before fourth destabilization}. In Figure \ref{fig:before fourth destabilization}, by destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, we get Figure \ref{fig:after fourth destabilization}. \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{beforefourthdestabilization.pdf} \end{center} \caption{Before the fourth destabilization.} \label{fig:before fourth destabilization} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{afterfourthdestabilization.pdf} \end{center} \caption{After the fourth destabilization.} \label{fig:after fourth destabilization} \end{figure} \subsection*{The fifth destabilization} In Figure \ref{fig:after fourth destabilization}, we make $\gamma_1$ and $\alpha_1$ parallel by an isotopy. Then, we obtain Figure \ref{fig:before fifth destabilization}. In Figure \ref{fig:before fifth destabilization}, by destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, we get Figure \ref{fig:after fifth destabilization}. \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{beforefifthdestabilization.pdf} \end{center} \caption{Before the fifth destabilization.} \label{fig:before fifth destabilization} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=16cm, height=7cm, keepaspectratio, scale=1]{afterfifthdestabilization.pdf} \end{center} \caption{After the fifth destabilization.} \label{fig:after fifth destabilization} \end{figure} \subsection*{The sixth destabilization} In Figure \ref{fig:after fifth destabilization}, the trisection of $X-\nu(K)$ is 0-annular since the normal Euler number of $K$ is 0. Thus, the monodromy of the open book decomposition is the identity, that is, $\alpha_1$ and $\gamma_1$ in Figure \ref{fig:after fifth destabilization} can be made parallel. By destabilizing $\alpha_1$, $\beta_1$ and $\gamma_1$, we have a diagram $\mathcal{D}$. The diagram $\mathcal{D}$ is obtained by attaching two disks to the two boundary components of the surface of $\mathcal{D}_{X-\nu(K)}$ since we surger along $\alpha_1$ when we destabilize $\alpha_1$, $\beta_1$ and $\gamma_1$ in Figure \ref{fig:after fifth destabilization}. In fact, $\mathcal{D}_{X-\nu(K)}$ is the diagram obtained by removing open neighborhoods of the base points of the doubly pointed trisection diagram of $(X,K)$. Thus, $\mathcal{D}$ is the diagram obtained by simply deleting the base points. (Note that the surface obtained by erasing the base points has no punctures.)
In addition, the underlying trisection diagram of the doubly pointed trisection diagram of $(X,K)$ is the diagram of $T_{X}^{'}$. This can be seen from the boundary-stabilizations performed to construct a relative trisection diagram of $X-\nu(S)$ \cite{KM}. This means that $\mathcal{D}$ is just the diagram of $T_{X}^{'}$. Therefore, $T_S$ is diffeomorphic to a stabilization of $T_{X}^{'}$. Moreover, a meridional stabilization of a bridge trisection corresponds to a stabilization of the underlying trisection. Thus, $T_{X}^{'}$ is a stabilization of $T_{X}$. This completes the proof of Theorem \ref{main theorem}. \end{proof} \begin{cor}\label{main corollary} For each $P^2$-knot $S$ in $S^4$ that is of Kinoshita type, the trisection obtained by the trivial Price twist along $S$ is diffeomorphic to a stabilization of the genus 0 trisection of $S^4$. \end{cor} \begin{proof} In Theorem \ref{main theorem}, if $X=S^4$, then $T_X$ is the genus 0 trisection of $S^4$ (see Remark \ref{rem:bridge trisection in $S^4$}). \end{proof} Lastly, as we have seen in Section \ref{sec:intro}, if any two diffeomorphic trisections of $S^4$ are isotopic, it follows from Corollary \ref{main corollary} that the trisection obtained by the trivial Price twist along a $P^2$-knot which is of Kinoshita type is isotopic to a stabilization of the genus 0 trisection of $S^4$. Namely, Conjecture 3.11 in \cite{MSZ}, i.e., the conjecture that is a 4-dimensional analogue of Waldhausen's theorem on Heegaard splittings, holds for this trisection. \bibliographystyle{amsalpha} \bibliography{math} \end{document}
2205.04811v2
http://arxiv.org/abs/2205.04811v2
An example of $A_2$ Rogers-Ramanujan bipartition identities of level 3
\documentclass[reqno]{amsart} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{cases} \usepackage{mathtools} \mathtoolsset{showonlyrefs=true} \usepackage{mathrsfs} \usepackage{hyperref} \pagestyle{plain} \title{An example of $A_2$ Rogers-Ramanujan bipartition identities of level 3} \author{Shunsuke Tsuchioka} \address{Department of Mathematical and Computing Sciences, Institute of Science Tokyo, Tokyo 152-8551, Japan} \email{[email protected]} \date{Dec 4, 2024} \keywords{integer partitions, Rogers-Ramanujan identities, vertex operators, cylindric partitions, W algebras} \subjclass[2020]{Primary~11P84, Secondary~17B69} \usepackage{tikz} \def\node#1#2{\overset{#1}{\underset{#2}{\circ}}} \def\bnode#1#2{\overset{#1}{\underset{#2}{\odot}}} \def\ver#1#2{\overset{{\llap{$\scriptstyle#1$}\displaystyle\circ{\rlap{$\scriptstyle#2$}}}}{\scriptstyle\vert}} \usetikzlibrary{arrows} \tikzstyle{every picture}+=[remember picture] \tikzstyle{na} = [baseline=-.5ex] \tikzstyle{mine}= [arrows={angle 90}-{angle 90},thick] \def\Llleftarrow{\lower2pt\hbox{\begingroup \tikz \draw[shorten >=0pt,shorten <=0pt] (0,3pt) -- ++(-1em,0) (0,1pt) -- ++(-1em-1pt,0) (0,-1pt) -- ++(-1em-1pt,0) (0,-3pt) -- ++(-1em,0) (-1em+1pt,5pt) to[out=-105,in=45] (-1em-2pt,0) to[out=-45,in=105] (-1em+1pt,-5pt); \endgroup} } \newtheorem{Thm}{Theorem}[section] \newtheorem{Def}[Thm]{Definition} \newtheorem{Conj}[Thm]{Conjecture} \newtheorem{DefThm}[Thm]{Definition and Theorem} \newtheorem{Prop}[Thm]{Proposition} \newtheorem{Lem}[Thm]{Lemma} \newtheorem{SubLem}[Thm]{Sublemma} \newtheorem{Rem}[Thm]{Remark} \newtheorem{Cor}[Thm]{Corollary} \newtheorem{Ex}[Thm]{Example} \newcommand{\YY}{Y} \newcommand{\CP}[1]{\mathcal{C}_{#1}} \newcommand{\LC}[1]{\mathcal{C}^{(#1)}} \newcommand{\YP}[1]{Y_{+}(#1)} \newcommand{\YM}[1]{Y_{-}(#1)} \newcommand{\YMM}[2]{Y_{-}(#1,#2)} \newcommand{\YMMPP}[2]{Y_{\pm}(#1,#2)} \newcommand{\YMP}[1]{Y_{\pm}(#1)} \newcommand{\YPP}[2]{Y(#1,#2)} \newcommand{\princhar}[2]{\chi_{#1}(#2)} \newcommand{\GEE}{\mathfrak{g}} \newcommand{\GE}{\widetilde{\mathfrak{g}}} \newcommand{\GAA}{\widetilde{\mathfrak{a}}} \newcommand{\GAAA}{\widehat{\mathfrak{a}}} \newcommand{\GBB}{\mathfrak{b}} \newcommand{\LEXO}[1]{<^{#1}_{\mathsf{lex}}} \newcommand{\LEXXO}[1]{\leq^{#1}_{\mathsf{lex}}} \DeclareMathOperator{\LEX}{<_{\mathsf{lex}}} \DeclareMathOperator{\RLEX}{\geq_{\mathsf{lex}}} \DeclareMathOperator{\RLEXX}{>_{\mathsf{lex}}} \DeclareMathOperator{\LEXX}{\leq_{\mathsf{lex}}} \DeclareMathOperator{\SHAPE}{\mathsf{shape}} \DeclareMathOperator{\SEQ}{\mathsf{Seq}} \DeclareMathOperator{\RED}{\mathsf{Irr}} \DeclareMathOperator{\ZSEQ}{\mathsf{FS}} \DeclareMathOperator{\NSEQ}{\mathsf{FS}_0} \DeclareMathOperator{\SORT}{\mathsf{sort}} \DeclareMathOperator{\MAT}{Mat} \DeclareMathOperator{\SPAN}{\mathsf{span}} \DeclareMathOperator{\PAR}{\mathsf{Par}} \DeclareMathOperator{\TPAR}{\mathsf{Par}_{2\mathsf{color}}} \DeclareMathOperator{\IND}{\mathsf{Ind}} \DeclareMathOperator{\MAP}{\mathsf{Map}} \DeclareMathOperator{\ID}{id} \DeclareMathOperator{\SHIFT}{\mathsf{shift}} \DeclareMathOperator{\BIR}{\mathsf{R}} \DeclareMathOperator{\BIRP}{\mathsf{R}'} \DeclareMathOperator{\BIRR}{\mathsf{R}} \DeclareMathOperator{\TR}{\mathsf{tr}} \DeclareMathOperator{\THETA}{\widehat{\Theta}} \DeclareMathOperator{\DEG}{\mathsf{deg}} \DeclareMathOperator{\END}{\mathsf{End}} \DeclareMathOperator{\AUT}{\mathsf{Aut}} \newcommand{\ggeq}{\mathrel{\underline{\gg}}} \newcommand{\lleq}{\mathrel{\underline{\ll}}} \DeclareMathOperator{\NAANO}{\vartriangleleft} 
\DeclareMathOperator{\AANO}{\trianglelefteq} \DeclareMathOperator{\ANO}{\gg} \DeclareMathOperator{\ANOE}{\ggeq} \DeclareMathOperator{\AANOOO}{\vartriangleright} \DeclareMathOperator{\ANOOO}{\ll} \DeclareMathOperator{\ANOO}{\lleq} \DeclareMathOperator{\SUPP}{\mathsf{Supp}} \DeclareMathOperator{\COLOR}{\mathsf{color}} \DeclareMathOperator{\CONT}{\mathsf{cont}} \DeclareMathOperator{\CONTT}{\mathsf{cont}} \DeclareMathOperator{\LENGTH}{\ell} \DeclareMathOperator{\LM}{\mathsf{LM}} \DeclareMathOperator{\PAI}{\pi^{\bullet}} \newcommand{\HPRIN}{H^{\textrm{prin}}} \newcommand{\HPPRIN}{H^{\rm{prin}}} \DeclareMathOperator{\isom}{\to} \newcommand{\SUM}[1]{|#1|} \newcommand{\cint}[1]{{\underline{#1}}_c} \newcommand{\EXTE}[1]{\widehat{#1}} \newcommand{\REG}[1]{\mathsf{CReg}_{#1}} \newcommand{\JO}[1]{$q_{#1}$} \newcommand{\JOO}[1]{q_{#1}} \newcommand{\ENU}{N} \newcommand{\KEI}[1]{K_{#1}} \newcommand{\AOX}[1]{X(#1)} \newcommand{\AOB}[1]{B(#1)} \newcommand{\emptypar}{\emptyset} \newcommand{\mya}{a} \newcommand{\myb}{b} \newcommand{\myc}{c} \newcommand{\myd}{d} \newcommand{\mye}{e} \newcommand{\myf}{f} \newcommand{\myg}{g} \newcommand{\myh}{h} \newcommand{\myi}{i} \newcommand{\myj}{j} \newcommand{\myk}{k} \newcommand{\myl}{l} \newcommand{\mym}{m} \newcommand{\myphi}{\Psi} \newcommand{\TTE}{\widetilde{\varepsilon}} \newcommand{\TTP}{\widetilde{p}} \newcommand{\TTQ}{\widetilde{q}} \newcommand{\TTR}{\widetilde{r}} \newcommand{\TTS}{\widetilde{s}} \newcommand{\EMPTYWORD}{\varepsilon} \newcommand{\NONC}{\mathcal{E}} \newcommand{\CZ}{\cint{\mathbb{Z}}} \newcommand{\AZ}{\mathbb{Z}_{2\mathsf{color}}} \newcommand{\MP}{P^{+}} \newcommand{\VZ}{v} \newcommand{\VU}{u} \newcommand{\BUI}[1]{Y(#1)v} \newcommand{\BUII}[2]{Y(#1)v_{#2}} \newcommand{\BUIII}[1]{Y(#1)w} \newcommand{\RR}{\mathsf{RR}_i} \usepackage{graphicx} \begin{document} \maketitle \begin{abstract} We give manifestly positive Andrews-Gordon type series for the level 3 standard modules of the affine Lie algebra of type $A^{(1)}_2$. We also give corresponding bipartition identities, which have representation theoretic interpretations via the vertex operators. Our proof is based on the Borodin product formula, the Corteel-Welsh recursion for the cylindric partitions, a $q$-version of Sister Celine's technique and a generalization of Andrews' partition ideals by finite automata due to Takigiku and the author. \end{abstract} \section{Introduction} \subsection{The Rogers-Ramanujan partition identities} A partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ of a nonnegative integer $n$ is a weakly decreasing sequence of positive integers (called parts) whose sum $|\lambda|=\lambda_1+\dots+\lambda_{\ell}$ (called the size) is $n$. The celebrated Rogers-Ramanujan partition identities are stated as follows. \begin{quotation}\label{eq:RR:PT} Let $i=1,2$. The number of partitions of $n$ such that parts are at least $i$ and consecutive parts differ by at least $2$ is equal to the number of partitions of $n$ into parts congruent to $\pm i$ \mbox{modulo $5$.} \end{quotation} As $q$-series identities, the Rogers-Ramanujan identities are stated as \begin{align} \sum_{n\ge 0} \frac{q^{n^2}}{(q;q)_n} = \frac{1}{(q,q^4;q^5)_\infty}, \quad \sum_{n\ge 0} \frac{q^{n^2+n}}{(q;q)_n} = \frac{1}{(q^2,q^3;q^5)_\infty}, \label{eq:RR:q} \end{align} where, as usual, for $n\in\mathbb{Z}_{\geq0} \sqcup \{\infty\}$, we define the $q$-Pochhammer symbols by \begin{align*} (a;q)_n = \prod_{0\leq j<n} (1-aq^j), \quad (a_1,\dots,a_k;q)_{n} = (a_1;q)_n \cdots (a_k;q)_n. 
\end{align*} \subsection{The main result}\label{mainse} A bipartition of a nonnegative integer $n$ is a pair of partitions $\boldsymbol{\lambda}=(\lambda^{(1)},\lambda^{(2)})$ such that $|\lambda^{(1)}|+|\lambda^{(2)}|=n$. A 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ is a weakly decreasing sequence of positive integers and colored positive integers (called parts) with respect to the order \begin{align} \cdots>3>\cint{3}>2>\cint{2}>1>\cint{1}. \label{orde} \end{align} We define the size $|\lambda|$ of $\lambda$ by $|\lambda|=\CONT(\lambda_1)+\dots+\CONT(\lambda_{\ell})$, where $\CONT(k)=\CONT(\cint{k})=k$ for a positive integer $k$. It is evident that there exists a natural identification between the bipartitions of $n$ and the 2-colored partitions of $n$. We put $\COLOR(k)=+$ and $\COLOR(\cint{k})=-$ for a positive integer $k$. For a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$, we consider the following conditions. \begin{enumerate} \item[(D1)] If consecutive parts $\lambda_a$ and $\lambda_{a+1}$, where $1\leq a<\ell$, satisfy $\CONT(\lambda_a)-\CONT(\lambda_{a+1})\leq 1$, then $\CONT(\lambda_a)+\CONT(\lambda_{a+1})\in 3\mathbb{Z}$ and $\COLOR(\lambda_a)\ne\COLOR(\lambda_{a+1})$. \item[(D2)] If consecutive parts $\lambda_a$ and $\lambda_{a+1}$, where $1\leq a<\ell$, satisfy $\CONT(\lambda_a)-\CONT(\lambda_{a+1})=2$ and $\CONT(\lambda_a)+\CONT(\lambda_{a+1})\not\in 3\mathbb{Z}$, then $(\COLOR(\lambda_a),\COLOR(\lambda_{a+1}))\ne(-,+)$. \item[(D3)] $\lambda$ does not contain $(3k,\cint{3k},\cint{3k-2})$, $(3k+2,3k,\cint{3k})$ and $(\cint{3k+2},3k+1,3k-1,\cint{3k-2})$ for $k\geq 1$. \item[(D4)] $\lambda$ does not contain $1$, $\cint{1}$, or $\cint{2}$ as parts (i.e., $\lambda_a\ne 1,\cint{1},\cint{2}$ for $1\leq a\leq\ell$). \end{enumerate} In (D3), we say that $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ contains $\mu=(\mu_1,\dots,\mu_{\ell'})$ if there exists $0\leq i\leq \ell-\ell'$ such that $\lambda_{j+i}=\mu_j$ for $1\leq j\leq \ell'$. \begin{Thm}\label{RRbiiden} Let $\BIR$ (resp. $\BIRP$) be the set of 2-colored partitions that satisfy the conditions (D1)--(D3) (resp. (D1)--(D4)) above. Then, we have \begin{align*} \sum_{\lambda\in\BIR}q^{|\lambda|} &= \frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \sum_{\lambda\in\BIRP}q^{|\lambda|} &= \frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}. \end{align*} \end{Thm} \begin{Ex}\label{exbipar} We have \begin{align*} \BIR &= \{\emptypar, (1),(\cint{1}),(2),(\cint{2}),(3),(\cint{3}),(2,\cint{1}),(\cint{2},1), (4),(\cint{4}),(3,1),(3,\cint{1}),(\cint{3},\cint{1}),\dots\},\\ \BIRP &= \{\emptypar, (2),(3),(\cint{3}),(4),(\cint{4}),(5),(\cint{5}),(6),(\cint{6}),(4,2),(\cint{4},2),(3,\cint{3}),(7),(\cint{7}),(5,2),(\cint{5},2),\dots\}. \end{align*} \end{Ex} We propose the aforementioned bipartition identities (in terms of 2-colored partitions) as $A_2$ Rogers-Ramanujan bipartition identities of level 3. In the rest of this section, we give some justifications for our proposal.
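The conditions (D1)--(D4) are concrete enough to be tested by computer. The following Python sketch is a brute-force sanity check only; it is independent of, and not used in, any of the proofs below, and all helper names are ad hoc. It enumerates the 2-colored partitions of size at most $20$ satisfying (D1)--(D3), resp.\ (D1)--(D4), and compares the counts with the coefficients of the corresponding infinite products in Theorem \ref{RRbiiden}, truncated at $q^{20}$.

\begin{verbatim}
# Brute-force check of the two identities above for sizes <= N (sanity check only).
# A part is a pair (k, c): content k >= 1 and color c = +1 (plain) or -1 (barred),
# so the order ... > 3 > 3bar > 2 > 2bar > 1 > 1bar becomes
# (k, c) > (k', c') iff k > k' or (k == k' and c > c').
N = 20

def ok_pair(a, b):
    """Conditions (D1) and (D2) for consecutive parts a (left) and b (right)."""
    (ka, ca), (kb, cb) = a, b
    if ka - kb <= 1:                               # (D1)
        return (ka + kb) % 3 == 0 and ca != cb
    if ka - kb == 2 and (ka + kb) % 3 != 0:        # (D2)
        return not (ca == -1 and cb == 1)
    return True

def ok_D3(lam):
    """Condition (D3): no forbidden consecutive patterns."""
    pats = []
    for k in range(1, N // 3 + 2):
        pats.append([(3*k, 1), (3*k, -1), (3*k - 2, -1)])
        pats.append([(3*k + 2, 1), (3*k, 1), (3*k, -1)])
        pats.append([(3*k + 2, -1), (3*k + 1, 1), (3*k - 1, 1), (3*k - 2, -1)])
    return not any(lam[i:i + len(p)] == p for p in pats for i in range(len(lam)))

def smaller_or_equal(part):
    """Parts allowed to follow `part` (weakly decreasing in the order above)."""
    k, c = part
    return [(m, col) for m in range(k, 0, -1) for col in (1, -1)
            if m < k or col <= c]

# depth-first enumeration of all 2-colored partitions of size <= N with (D1)-(D3)
valid, stack = [[]], [([], 0)]
while stack:
    lam, size = stack.pop()
    cands = smaller_or_equal(lam[-1]) if lam else \
            [(m, col) for m in range(N, 0, -1) for col in (1, -1)]
    for p in cands:
        if size + p[0] <= N and (not lam or ok_pair(lam[-1], p)):
            new = lam + [p]
            if ok_D3(new):
                valid.append(new)
                stack.append((new, size + p[0]))

cnt_R, cnt_Rp = [0] * (N + 1), [0] * (N + 1)
for lam in valid:
    n = sum(k for k, _ in lam)
    cnt_R[n] += 1
    if all(p not in ((1, 1), (1, -1), (2, -1)) for p in lam):   # (D4)
        cnt_Rp[n] += 1

def product(factors):
    """Coefficients up to q^N of prod (1 - q^m)^e over (m, e) in factors."""
    c = [1] + [0] * N
    for m, e in factors:
        if e == -1:
            for t in range(m, N + 1):
                c[t] += c[t - m]
        else:
            for t in range(N, m - 1, -1):
                c[t] -= c[t - m]
    return c

RHS_R = product([(2 + 6*j, 1) for j in range(N)] + [(4 + 6*j, 1) for j in range(N)]
                + [(m + 6*j, -1) for j in range(N) for m in (1, 1, 3, 3, 5, 5)])
RHS_Rp = product([(m + 6*j, -1) for j in range(N) for m in (2, 3, 3, 4)])

assert cnt_R == RHS_R and cnt_Rp == RHS_Rp
print("both identities verified up to q^", N)
\end{verbatim}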
\subsection{Lie theoretic interpretations} Throughout this paper, we denote by $\GEE(A)$ the Kac-Moody Lie algebra associated with a generalized Cartan matrix (GCM, for short) $A$. For an affine GCM $A$ and a dominant integral weight $\Lambda\in\MP$, we denote by $\chi_{A}(V(\Lambda))$ the principal character of the standard module (a.k.a., the integrable highest weight module) $V(\Lambda)$ with a highest weight vector $v_{\Lambda}$ of the affine Lie algebra $\GEE(A)$. In ~\cite{LM}, Lepowsky and Milne observed a similarity between the characters of the level 3 standard modules of the affine Lie algebra of type $A^{(1)}_{1}$ \begin{align} \chi_{A^{(1)}_1}(V(2\Lambda_0+\Lambda_1)) = \frac{1}{(q,q^4;q^5)_{\infty}},\quad \chi_{A^{(1)}_1}(V(3\Lambda_0)) = \frac{1}{(q^2,q^3;q^5)_{\infty}} \label{a11level3} \end{align} and the infinite products of the Rogers-Ramanujan identities \eqref{eq:RR:q}. This was one of the motivations for inventing the vertex operators~\cite{LW1} as well as ~\cite{FK,Seg}. Subsequently, in ~\cite{LW3}, Lepowsky and Wilson promoted the observation \eqref{a11level3} to a vertex operator theoretic proof of \eqref{eq:RR:q} (see also ~\cite{LW2}), which provides a Lie theoretic interpretation of the infinite sums in \eqref{eq:RR:q}. The result was generalized to higher levels in ~\cite{LW4}, assuming the Andrews-Gordon-Bressoud identities (a generalization of the Rogers-Ramanujan identities, see ~\cite[\S3.2]{Sil}), for which Meurman and Primc gave a vertex operator theoretic proof in ~\cite{MP}. Recall the principal realization of the affine Lie algebra $\GEE(A^{(1)}_{1})$ (see ~\cite[\S7,\S8]{Kac}). Using the notation in ~\cite[\S2]{MP}, it affords a basis \begin{align*} \{\AOB{n},\AOX{n'},c,d\mid n\in\mathbb{Z}\setminus 2\mathbb{Z}, n'\in\mathbb{Z}\}, \end{align*} of $\GEE(A^{(1)}_{1})$. Note that $\{\AOB{n},c\mid n\in\mathbb{Z}\setminus 2\mathbb{Z}\}$ forms a basis of the principal Heisenberg subalgebra of $\GEE(A^{(1)}_{1})$. The following is essentially the Lepowsky-Wilson interpretation of the Rogers-Ramanujan partition identities in terms of the representation theory of $\GEE(A^{(1)}_1)$ (see also \cite[Theorem 10.4]{LW3} and \cite[Appendix]{MP}). \begin{Thm}[{\cite{LW3,MP}}] For $i=1,2$, let $\RR$ be the set of partitions such that parts are at least $i$ and consecutive parts differ by at least $2$. Then, the set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}}\AOX{-\lambda_1}\cdots\AOX{-\lambda_\ell}v_{(i+1)\Lambda_0+(2-i)\Lambda_1}\} \end{align*} forms a basis of $V((i+1)\Lambda_0+(2-i)\Lambda_1)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{2}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\RR$. \end{Thm} Here, we denote by $\REG{p}$ the set of $p$-class regular partitions for $p\geq 2$. Recall that a partition is called $p$-class regular if no parts are divisible by $p$. We show a similar interpretation for $\BIR$ and $\BIRP$. Using the notation in \S\ref{vertset}, \begin{align*} \{ B(n),x_{\alpha_1}(n'),x_{-\alpha_1}(n'),c,d \mid n\in\mathbb{Z}\setminus3\mathbb{Z},n'\in\mathbb{Z} \} \end{align*} forms a basis of $\GEE(A^{(1)}_2)$ (see also Remark \ref{sl3basis}). \begin{Thm}\label{biideninter} For $i=1$ (resp. $i=2$), the set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}} x_{\COLOR(\lambda_1)\alpha_1}(-\!\CONTT(\lambda_1))\cdots x_{\COLOR(\lambda_{\ell})\alpha_1}(-\!\CONTT(\lambda_{\ell})) v_{(2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2}\} \end{align*} forms a basis of $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{3}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\BIR$ (resp. $\BIRP$). \end{Thm} Note that Theorem \ref{biideninter} implies Theorem \ref{RRbiiden} thanks to \begin{align} \chi_{A^{(1)}_{2}}(V(\Lambda_0+\Lambda_1+\Lambda_2)) &= \frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \chi_{A^{(1)}_{2}}(V(3\Lambda_0)) &= \frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}.
\label{charcalc} \end{align} In \S\ref{maincomp} and \S\ref{cal}, we show that the set in Theorem \ref{biideninter} spans $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2)$ (see Corollary \ref{biidenintercor}). Thus, Theorem \ref{biideninter} and Theorem \ref{RRbiiden} are equivalent. \subsection{$A_2$ Rogers-Ramanujan identities}\label{agthm} A standard $q$-series technique to prove the Andrews-Gordon-Bressoud identities is the Bailey Lemma (see ~\cite[\S3]{An2} and ~\cite[\S3]{Sil}). In ~\cite{ASW}, Andrews-Schilling-Warnaar found an $A_2$ analog of it and obtained a family of Rogers-Ramanujan type identities for characters of the $W_3$ algebra. The result can be regarded as an $A^{(1)}_2$ analog of the Andrews-Gordon-Bressoud identities, whose infinite products are some of the principal characters of the standard modules of $\GEE(A^{(1)}_2)$ (see ~\cite[Theorem 5.1, Theorem 5.3, Theorem 5.4]{ASW}) after a multiplication of $(q;q)_{\infty}$. In our case of level 3, the Andrews-Schilling-Warnaar identities are stated as follows. \begin{Thm}[{\cite[Theorem 5.4 specialized to $k=2$ and $i=2,1$]{ASW}}] \begin{align*} \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2}}{(q;q)^2_{s+t}}{s+t \brack s}_{q^3} &= \frac{1}{(q;q)_{\infty}}\frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2+s+t}}{(q;q)_{s+t+1}(q;q)_{s+t}}{s+t \brack s}_{q^3} &= \frac{1}{(q;q)_{\infty}}\frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}. \end{align*} \end{Thm} We show manifestly positive Andrews-Gordon type series (in the sense of ~\cite{TT2}) for $\BIR$ and $\BIRP$ as follows. Here, the length $\ell(\lambda)$ of a 2-colored partition $\lambda$ is defined to be the number of parts. For the size $|\lambda|$ of $\lambda$, see \S\ref{mainse}. \begin{Thm}\label{RRidentAG} \begin{align*} \sum_{\lambda\in\BIR}x^{\ell(\lambda)}q^{|\lambda|} &= \sum_{a,b,c,d\geq 0}\frac{q^{a^2+b^2+3c^2+3d^2+2ab+3ac+3ad+3bc+3bd+6cd}x^{a+b+2c+2d}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d},\\ \sum_{\lambda\in\BIRP}x^{\ell(\lambda)}q^{|\lambda|} &= \sum_{a,b,c,d\geq 0}\frac{q^{a(a+1)+b(b+2)+3c(c+1)+3d(d+1)+2ab+3ac+3ad+3bc+3bd+6cd}x^{a+b+2c+2d}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}. \end{align*} \end{Thm} The result is similar to the fact that, for $i=1,2$, we have \begin{align*} \sum_{\lambda\in\RR}x^{\ell(\lambda)}q^{|\lambda|}=\sum_{n\geq 0}\frac{q^{n(n+i-1)}x^n}{(q;q)_n}. \end{align*} Recently, Kanade-Russell showed (see ~\cite[(1.8)]{KR3}) \begin{align*} \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2+t}}{(q;q)_{s+t+1}(q;q)_{s+t}}{s+t \brack s}_{q^3} = \frac{1}{(q;q)_{\infty}}\frac{1}{(q,q^2;q^3)_{\infty}}, \end{align*} where $(q,q^2;q^3)_{\infty}^{-1}=\chi_{A^{(1)}_2}(V(2\Lambda_0+\Lambda_1))$. Although it can be similarly proven that \begin{align*} \sum_{a,b,c,d\geq 0}\frac{q^{a^2+b(b+1)+3c^2+c+3d^2+2d+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d} = \frac{1}{(q,q^2;q^3)_{\infty}}, \end{align*} we do not consider the level 3 module $V(2\Lambda_0+\Lambda_1)$ in this paper. A reason is that $V(2\Lambda_0+\Lambda_1+p\delta)$ for $p\in\mathbb{Z}$ is not a submodule of $V(\Lambda_0)^{\otimes 3}$, as explained in Remark \ref{notsub}. After the groundbreaking paper ~\cite{ASW}, a vast literature has been devoted to the study of $A_2$ Rogers-Ramanujan identities (see ~\cite{CDA,CW,FFW,FW,KR3,Unc,War2,War,War3}), especially to the search for manifestly positive infinite sums such as ~\cite[Theorem 5.2]{ASW}, ~\cite[Theorem 1.1]{CW} for level 4 and ~\cite[Theorem 1.6]{CDA} for level 5.
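The series in Theorem \ref{RRidentAG} can be tested in the same brute-force way. The following Python sketch (again an illustrative sanity check only, not part of any proof) expands both quadruple sums at $x=1$ up to $q^{20}$ and compares them with the corresponding products in Theorem \ref{RRbiiden}.

\begin{verbatim}
# Sanity check of the two Andrews-Gordon type series above at x = 1, up to q^N.
N = 20

def inv_poch(n, step):
    """Coefficients up to q^N of 1/((q^step; q^step)_n)."""
    c = [1] + [0] * N
    for i in range(1, n + 1):
        m = step * i
        for t in range(m, N + 1):
            c[t] += c[t - m]
    return c

def mult(f, g):
    """Truncated product of two coefficient lists."""
    h = [0] * (N + 1)
    for i, fi in enumerate(f):
        if fi:
            for j in range(N + 1 - i):
                h[i + j] += fi * g[j]
    return h

def ag_series(exponent):
    """Quadruple sum q^exponent(a,b,c,d)/((q)_a (q)_b (q^3)_c (q^3)_d), truncated."""
    total = [0] * (N + 1)
    for a in range(N + 1):
        for b in range(N + 1):
            for c in range(N + 1):
                for d in range(N + 1):
                    e = exponent(a, b, c, d)
                    if e > N:
                        continue
                    term = [0] * (N + 1)
                    term[e] = 1
                    for f in (inv_poch(a, 1), inv_poch(b, 1),
                              inv_poch(c, 3), inv_poch(d, 3)):
                        term = mult(term, f)
                    total = [u + v for u, v in zip(total, term)]
    return total

def product(factors):
    """Coefficients up to q^N of prod (1 - q^m)^e over (m, e) in factors."""
    c = [1] + [0] * N
    for m, e in factors:
        if e == -1:
            for t in range(m, N + 1):
                c[t] += c[t - m]
        else:
            for t in range(N, m - 1, -1):
                c[t] -= c[t - m]
    return c

Q = lambda a, b, c, d: 2*a*b + 3*a*c + 3*a*d + 3*b*c + 3*b*d + 6*c*d
exp1 = lambda a, b, c, d: a*a + b*b + 3*c*c + 3*d*d + Q(a, b, c, d)
exp2 = lambda a, b, c, d: a*(a+1) + b*(b+2) + 3*c*(c+1) + 3*d*(d+1) + Q(a, b, c, d)

prod1 = product([(2 + 6*j, 1) for j in range(N)] + [(4 + 6*j, 1) for j in range(N)]
                + [(m + 6*j, -1) for j in range(N) for m in (1, 1, 3, 3, 5, 5)])
prod2 = product([(m + 6*j, -1) for j in range(N) for m in (2, 3, 3, 4)])

assert ag_series(exp1) == prod1 and ag_series(exp2) == prod2
print("Andrews-Gordon type series check passed up to q^", N)
\end{verbatim}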
After the success of vertex operator theoretic proofs of the Rogers-Ramanujan identities such as ~\cite{LW3,MP}, it has been expected that, for an affine GCM $X^{(r)}_N$ and a dominant integral weight $\Lambda\in\MP$, there should exist a Rogers-Ramanujan type identity whose infinite product is given by $\chi_A(V(\Lambda))$. It is natural to expect that the sum side is related to the $n(X^{(r)}_N)$-colored partitions, where \begin{align*} n(X^{(r)}_N)=\frac{\textrm{the number of roots of type $X_N$}}{\textrm{the $r$-twisted Coxeter number of $X_N$}} =\textrm{the size of $X^{(r)}_N$}. \end{align*} In view of the fact that $n(A^{(1)}_{r})=r$, it is expected that the $A_2$ Rogers-Ramanujan identities are related to the 2-colored partitions (and thus to the bipartitions). It would be interesting if the results in this paper could be generalized to higher levels. \hspace{0mm} \noindent{\bf Organization of the paper.} The paper is organized as follows. In \S\ref{vertset}, we recall the principal realization of $\widehat{\mathfrak{sl}_3}=\GEE(A^{(1)}_2)$ and the vertex operator realization of the basic module following ~\cite{LW3}. In \S\ref{maincomp} and \S\ref{cal}, we show that the defining conditions for $\BIR$ and $\BIRP$ are naturally deduced by calculations (similar to ~\cite{Cap0,MP,Nan}) of the vertex operators on the triple tensor product of the basic module. In \S\ref{auto}, we show that $q$-difference equations for $\BIR$ and $\BIRP$ are automatically derived by the technique developed in ~\cite{TT} as a generalization of Andrews' linked partition ideals~\cite[Chapter 8]{An1} using finite automata. In \S\ref{ags}, we briefly review certificate recurrences, which are obtained by a $q$-version of Sister Celine's technique~\cite{Koe,Rie,WZ} for a $q$-proper hypergeometric term. It automatically gives a $q$-difference equation for an Andrews-Gordon type series and thus a proof of Theorem \ref{RRidentAG}. These results, together with corresponding results for the cylindric partitions (see \S\ref{gs111}), give a proof of Theorem \ref{RRbiiden} (see \S\ref{finalsec}) when combined with standard results, such as the Corteel-Welsh recursion~\cite{CW} and the Borodin product formula~\cite{Bor}, which are reviewed in \S\ref{cylin}. \section{The vertex operators}\label{vertset} In this section, we fix $m=3$ and $\omega=\exp(2\pi\sqrt{-1}/m)$. As usual, the affine Cartan matrix \begin{align} A^{(1)}_2= \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix} \label{A12GCM} \end{align} is indexed by the set $I=\{0,1,2\}$. Note that $A_2=(A^{(1)}_2)|_{I_0\times I_0}$, where $I_0=\{1,2\}$. \subsection{The principal realization of the affine Lie algebra $\GEE(A^{(1)}_2)$}\label{prafflie} We regard \begin{align*} \mathfrak{sl}_3=\{M\in\MAT_3(\mathbb{C})\mid\TR M=0\} \end{align*} with the Cartan-Killing form $\langle M_1,M_2\rangle=\TR(M_1M_2)$ as the Kac-Moody Lie algebra $\GEE(A_2)$ with the Chevalley generators $e_1=E_{12}$, $e_2=E_{23}$, $f_1=E_{21}$, $f_2=E_{32}$, $h_1=E_{11}-E_{22}$ and $h_2=E_{22}-E_{33}$, where $E_{ij}$ is the $3\times 3$ matrix unit for $1\leq i,j\leq 3$. Take the principal automorphism \begin{align*} \nu:\mathfrak{sl}_3\to\mathfrak{sl}_3,\quad E_{ij}\mapsto \omega^{j-i}E_{ij}. \end{align*} Note that $\langle,\rangle$ is $\nu$-invariant, i.e., $\langle\nu M_1,\nu M_2\rangle=\langle M_1,M_2\rangle$ for $M_1,M_2\in\mathfrak{sl}_3$.
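These properties of $\nu$ are easy to confirm numerically. The following short Python sketch (illustrative only; it assumes the numpy library) checks on random traceless matrices that $\nu$ has order $3$, respects the Lie bracket, and preserves $\langle,\rangle$.

\begin{verbatim}
# Numerical sanity check (illustrative only) of the principal automorphism nu:
# nu(E_ij) = omega^(j-i) E_ij has order 3, respects the bracket, and preserves
# the trace form <M1, M2> = Tr(M1 M2).
import numpy as np

omega = np.exp(2j * np.pi / 3)

def nu(M):
    """Entrywise action induced by E_ij -> omega^(j-i) E_ij."""
    return np.array([[omega ** (j - i) * M[i, j] for j in range(3)]
                     for i in range(3)])

def bracket(M1, M2):
    return M1 @ M2 - M2 @ M1

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
A -= np.trace(A) / 3 * np.eye(3)          # make A and B traceless (in sl_3)
B -= np.trace(B) / 3 * np.eye(3)

assert np.allclose(nu(nu(nu(A))), A)                          # nu^3 = id
assert np.allclose(nu(bracket(A, B)), bracket(nu(A), nu(B)))  # Lie automorphism
assert np.isclose(np.trace(nu(A) @ nu(B)), np.trace(A @ B))   # <,> is nu-invariant
print("principal automorphism checks passed")
\end{verbatim}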
The $\nu$-twisted affinization is given by \begin{align*} \GE &= (\SPAN\{E_{11}-E_{22},E_{22}-E_{33}\}\otimes\mathbb{C}[t^3,t^{-3}]) \oplus(\SPAN\{E_{12},E_{23},E_{31}\}\otimes t\mathbb{C}[t^3,t^{-3}])\\ &\quad\quad \oplus(\SPAN\{E_{13},E_{21},E_{32}\}\otimes t^2\mathbb{C}[t^3,t^{-3}]) \oplus\mathbb{C}c\oplus\mathbb{C}d, \end{align*} with the Lie algebra structure \begin{align*} [M\otimes t^n,M'\otimes t^{n'}]=[M,M']\otimes t^{n+n'}+\frac{n \langle M,M'\rangle}{3}\delta_{n+n',0}c, \quad [d,M\otimes t^n]=nM\otimes t^n, \end{align*} where $M,M'\in\mathfrak{sl}_3$, $n,n'\in\mathbb{Z}$, and $c$ is central (see ~\cite[(2.17)]{LW3}). The principal realization is the Lie algebra isomorphism $\GEE(A^{(1)}_2)\isom \GE$ given by \begin{align*} \begin{array}{lll} e_0\mapsto E_{31}\otimes t, & e_1\mapsto E_{12}\otimes t, & e_2\mapsto E_{23}\otimes t,\\ f_0\mapsto E_{13}\otimes t^{-1}, & f_1\mapsto E_{21}\otimes t^{-1}, & f_2\mapsto E_{32}\otimes t^{-1},\\ h_0\mapsto (E_{33}-E_{11})\otimes 1+\frac{c}{3}, & h_1\mapsto (E_{11}-E_{22})\otimes 1+\frac{c}{3}, & h_2\mapsto (E_{22}-E_{33})\otimes 1+\frac{c}{3}, \end{array} \end{align*} in addition to $d\mapsto d$. The principal degree on $\GEE(A^{(1)}_2)$ (and thus on $\GE$) is defined by $\DEG e_i=1=-\DEG f_i$ and $\DEG h_i=0=\DEG d$ for $i\in I$. See also ~\cite[\S7,\S8]{Kac}. \subsection{The principal Cartan subalgebra} Let $E_{(3)}=E_{12}+E_{23}+E_{31}$ be the principal cyclic element due to Kostant~\cite{Kos}. Take the principal Cartan subalgebra \begin{align*} \HPRIN = \{M\in \mathfrak{sl}_3\mid [M,E_{(3)}]=O\}=\mathbb{C}E_{(3)}\oplus\mathbb{C}E^2_{(3)}. \end{align*} We have a root space decomposition (see ~\cite[Corollary 26.2.7]{DBT}) \begin{align} \mathfrak{sl}_3 = \HPRIN \oplus \bigoplus_{1\leq s\ne t\leq 3}\mathbb{C}A_{st},\quad \textrm{where}\quad A_{st}=\frac{1}{3}\sum_{1\leq i,j\leq 3}\omega^{jt-is}E_{ij}. \label{rootdec} \end{align} By ~\cite[Lemma 26.2.4]{DBT}, for $1\leq s,t\leq 3$ and $h\in\HPRIN$, we have \begin{align*} [h,A_{st}] = \langle h,B_{st} \rangle A_{st},\quad \textrm{where}\quad B_{st}=\frac{1}{3}\sum_{k=1}^{2}(\omega^{-ks}-\omega^{-kt})E_{(3)}^{3-k}.
\end{align*} In the rest of the paper, we put \begin{align} \begin{split} \alpha_1 &= B_{12} = \frac{\omega-\omega^2}{3}E_{(3)}+\frac{\omega^2-\omega}{3}E_{(3)}^2,\\ \alpha_2 &= B_{23}=\frac{\omega^2-1}{3}E_{(3)}+\frac{\omega-1}{3}E_{(3)}^2,\\ \Phi &= \{\alpha_1,\alpha_2,\alpha_1+\alpha_2,-\alpha_1,-\alpha_2,-(\alpha_1+\alpha_2)\}(\subseteq\HPRIN),\\ x_{\alpha_1} &= A_{12}=\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega & 1 & \omega^2 \\ 1 & \omega^2 & \omega \\ \omega^2 & \omega & 1 \end{pmatrix}\end{footnotesize},\quad x_{-\alpha_1} = A_{21}=\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega^2 & 1 & \omega \\ 1 & \omega & \omega^2 \\ \omega & \omega^2 & 1 \end{pmatrix}\end{footnotesize},\\ x_{\alpha_2} &= A_{23}=\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega & \omega & \omega \\ \omega^2 & \omega^2 & \omega^2 \\ 1 & 1 & 1 \end{pmatrix}\end{footnotesize},\quad x_{-\alpha_2} = A_{32} =\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega^2 & \omega & 1 \\ \omega^2 & \omega & 1 \\ \omega^2 & \omega & 1 \end{pmatrix}\end{footnotesize},\\ x_{\alpha_1+\alpha_2} &= A_{13}=\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega^2 & \omega^2 & \omega^2 \\ \omega & \omega & \omega \\ 1 & 1 & 1 \end{pmatrix}\end{footnotesize},\quad x_{-(\alpha_1+\alpha_2)} = A_{31} =\begin{footnotesize} \frac{1}{3} \begin{pmatrix} \omega & \omega^2 & 1 \\ \omega & \omega^2 & 1 \\ \omega & \omega^2 & 1 \end{pmatrix}\end{footnotesize}. \end{split} \label{rootde2} \end{align} Note that we have $\nu(\alpha_1)=\alpha_2$, $\nu(\alpha_2)=-(\alpha_1+\alpha_2)$. Note also that we have $\nu(x_{\beta})=x_{\nu\beta}$ for $\beta\in\Phi$, which is easily seen by ~\cite[(26.2.17),(26.3.4),(26.3.5)]{DBT}. \subsection{The principal Heisenberg subalgebra} For $M\in\mathfrak{sl}_3$ and $n\in\mathbb{Z}$, we put \begin{align} M(n)=\pi_{(n)}(M)\otimes t^n\in\GE, \label{emurecall} \end{align} where $\pi_{(n)}:\mathfrak{sl}_3\to\{M\in\mathfrak{sl}_3\mid\nu(M)=\omega^nM\}$ is the projection. Let $\HPRIN_{(n)}=\pi_{(n)}(\HPRIN)$. Then, $\HPRIN_{(n)}=\{0\}$ when $n\in 3\mathbb{Z}$. For $n\in\mathbb{Z}\setminus 3\mathbb{Z}$, $B(n)=E_{(3)}^n\otimes t^n$ is a basis of the 1-dimensional space $\HPRIN_{(n)}$, where $B=E_{(3)}+E^2_{(3)}$. Note that we have $[B(n),B(n')]=nc\delta_{n+n',0}$ for $n,n'\in\mathbb{Z}\setminus 3\mathbb{Z}$. \begin{Rem}\label{sl3basis} It is not difficult to see (e.g., by $\pi_{(n)}(\nu x_{\beta})=\omega^n\pi_{(n)}(x_{\beta})$) that \begin{align*} \{ B(n),x_{\alpha_1}(n'),x_{-\alpha_1}(n'),c,d \mid n\in\mathbb{Z}\setminus3\mathbb{Z},n'\in\mathbb{Z} \} \end{align*} forms a basis of $\GE$ with $\DEG B(n)=n$ and $\DEG x_{\pm\alpha_1}(n')=n'$ (see \eqref{rootdec} and \eqref{rootde2}). \end{Rem} Let $\GAAA=[\GAA,\GAA]= \GAAA_{+}\oplus\GAAA_{-}\oplus\mathbb{C}c$ be the principal Heisenberg subalgebra of $\GE$, where \begin{align*} \GAA = \GAAA_{+}\oplus\GAAA_{-}\oplus\mathbb{C}c\oplus\mathbb{C}d,\quad \GAAA_{\pm}=\bigoplus_{\pm n>0}\HPPRIN_{(n)}\otimes t^n. \end{align*} \begin{Rem}[{\cite[Proposition 5.4.(1)]{LW3}}]\label{indv} The induced $\GAA$-module \begin{align*} V=\IND_{\GAAA_{+}\oplus\mathbb{C}c\oplus\mathbb{C}d}^{\GAA}\mathbb{C}\cong U(\GAAA_-)=\mathbb{C}[B(n)\mid n\in\mathbb{Z}_{<0}\setminus3\mathbb{Z}]. \end{align*} is irreducible (even as an $\GAAA$-module), where $\GAAA_{+}\oplus\mathbb{C}d$ (resp. $c$) acts trivially (resp. as 1) on $\mathbb{C}$. \end{Rem} In this paper, for a vector space $U$, we denote by $U\{\zeta\}$ (resp. 
$U\{\zeta_1,\zeta_2\}$) the set of formal power series $\sum_{n\in\mathbb{Z}}u_n\zeta^n$ (resp. $\sum_{n,n'\in\mathbb{Z}}u_{n,n'}\zeta_1^n\zeta_2^{n'}$), where $u_n\in U$ (resp. $u_{n,n'}\in U$)~\cite[\S2]{LW3}. For $M\in\mathfrak{sl}_3$, we put (see \eqref{emurecall}) \begin{align*} M(\zeta)=\sum_{n\in\mathbb{Z}}M(n)\zeta^n\in\GE\{\zeta\}. \end{align*} \begin{Rem}\label{koukan} By ~\cite[(2.44), Theorem 2.4]{LW3}, the commutation relations are given in terms of the generating functions in $\GE\{\zeta_1,\zeta_2\}$ and $\GE\{\zeta\}$ as follows. \begin{align*} [x_{\alpha}(\zeta_1),x_{\beta}(\zeta_2)] &= \frac{1}{m}\sum_{s\in C_{-1}} \varepsilon(\nu^s\alpha,\beta)x_{\nu^s\alpha+\beta}(\zeta_2) \delta(\omega^{-s}\zeta_1/\zeta_2)\\ &\quad + \frac{\langle x_{\beta},x_{-\beta}\rangle}{m^2} \sum_{s\in C_{-2}} \left(cD\delta(\omega^{-s}\zeta_1/\zeta_2)-m\beta(\zeta_2)\delta(\omega^{-s}\zeta_1/\zeta_2)\right),\\ [\gamma(\zeta_1),x_{\beta}(\zeta_2)] &= \frac{1}{m}\sum_{s\in I}\langle\nu^s\alpha,\beta\rangle x_{\beta}(\zeta_2)\delta(\omega^{-s}\zeta_1/\zeta_2),\\ x_{\nu^p\alpha}(\zeta) &= x_{\alpha}(\omega^p\zeta), \end{align*} where $\delta(\zeta)=\sum_{n\in\mathbb{Z}}\zeta^n$, $D\delta(\zeta)=\sum_{n\in\mathbb{Z}}n\zeta^n$, $\alpha,\beta\in\Phi$, $\gamma\in\HPPRIN$, $p\in I$, $C_k=\{s\in I\mid \langle \nu^s\alpha,\beta\rangle=k\}$, and $\varepsilon(\alpha',\beta')\in\mathbb{C}^{\times}$ is defined by $[x_{\alpha'},x_{\beta'}]=\varepsilon(\alpha',\beta')x_{\alpha'+\beta'}$ if $\alpha',\beta',\alpha'+\beta'\in\Phi$ (see ~\cite[(2.34)]{LW3}). \end{Rem} For $s\in\mathbb{Z}$ and a Lie subalgebra $\mathfrak{r}\subseteq \GE$, we denote by $U(\mathfrak{r})_{s}$ the subspace of principal degree $s$ elements in $U(\mathfrak{r})$. For integers $\ell\geq 0$ and $b$, we define a subspace \begin{align*} \Theta_{\ell,b} = \sum_{\substack{0\leq\ell'\leq\ell, \beta_1,\dots,\beta_{\ell'}\in\{\pm\alpha_1\}, \\ s,t,m_1,\dots,m_{\ell'}\in\mathbb{Z}, s+m_1+\dots+m_{\ell'}+t=b}} U(\GAAA_-)_{s} x_{\beta_1}(m_1)\dots x_{\beta_{\ell'}}(m_{\ell'}) U(\GAAA_+\oplus\mathbb{C}c\oplus\mathbb{C}d)_{t} \end{align*} of the universal enveloping algebra $U(\GE)=\bigcup_{\ell\geq 0,b\in\mathbb{Z}}\Theta_{\ell,b}$. Note that $\Theta_{0,b}=U(\GAA)_{b}$. We also define $\Theta_{-1,b}=\{0\}$ for convenience. \begin{Rem}[{\cite[\S2]{MP}}]\label{sortresRM} By Remark \ref{koukan}, for $\beta\in\Phi$, $b,m\in\mathbb{Z}$, $\ell\geq -1$, we have \begin{align} x_{\beta}(m)\Theta_{\ell,b}\subseteq \Theta_{\ell+1,b+m},\quad \Theta_{\ell,b}x_{\beta}(m)\subseteq \Theta_{\ell+1,b+m} \label{thetaideal} \end{align} and $U(\GAA)_m\Theta_{\ell,b}\subseteq\Theta_{\ell,b+m}$, $\Theta_{\ell,b}U(\GAA)_m\subseteq\Theta_{\ell,b+m}$. It is also easy to see that we have \begin{align} x_{\beta_1}(m_1)\cdots x_{\beta_{\ell}}(m_{\ell})-x_{\beta_{p(1)}}(m_{p(1)})\cdots x_{\beta_{p(\ell)}}(m_{p(\ell)})\in \Theta_{\ell-1,m_1+\dots+m_{\ell}} \label{sortres} \end{align} for $\ell\geq 0$, a permutation $p\in\mathfrak{S}_{\ell}$ and $\beta_1,\dots,\beta_{\ell}\in\{\pm\alpha_1\}$, $m_1,\dots,m_{\ell}\in\mathbb{Z}$ (see ~\cite[(2.11)]{MP}). 
Thus, we have \begin{align*} \Theta_{\ell,b} = \sum_{\substack{0\leq\ell'\leq\ell, \beta_1,\dots,\beta_{\ell'}\in\{\pm\alpha_1\}, \\ s,t,m_1,\dots,m_{\ell'}\in\mathbb{Z}, s+m_1+\dots+m_{\ell'}+t=b,\\ m_1\leq\dots\leq m_{\ell'} \\ \textrm{$m_i=m_j,\beta_i\ne\beta_j$ implies $\beta_i=\alpha_1, \beta_j=-\alpha_1$ for $i<j$}}} U(\GAAA_-)_{s} x_{\beta_1}(m_1)\dots x_{\beta_{\ell'}}(m_{\ell'}) U(\GAAA_+\oplus\mathbb{C}c\oplus\mathbb{C}d)_{t} \end{align*} \end{Rem} \subsection{The principal realization of the basic module}\label{vor} In virtue of ~\cite{KKLW} (see also \cite[Theorem 8.7]{LW3}), the assignment $x_{\beta}(n')\mapsto X(\beta,n')$ by \begin{align} \begin{split} X(\beta,\zeta) &= \sum_{n'\in\mathbb{Z}}X(\beta,n')\zeta^{n'} = \Lambda_0(\pi_{(0)}(x_{\beta})) E^-(-\beta,\zeta)E^+(-\beta,\zeta), \\ E^{\pm}(\beta,\zeta) &= \sum_{\pm n\geq 0}E^{\pm}(\beta,n)\zeta^n =\exp\left(m\sum_{\pm j>0}\frac{\beta(j)}{j}\zeta^j\right), \end{split} \label{vvoorr} \end{align} where $\beta\in\Phi$ and $n'\in\mathbb{Z}$, in addition to the $\GAA$-module structure (see Remark \ref{indv}) identifies $V$ with the basic $\GE$-module $V(\Lambda_0)$ under the isomorphism $\GEE(A^{(1)}_2)\cong \GE$. By \eqref{rootde2}, it is not difficult to see (cf. ~\cite[p.104]{KKLW}, where $\varepsilon=\omega$ and $\varepsilon^i/(\varepsilon^i-1)=(1-\omega^i)/3$) \begin{align} \begin{split} \Lambda_0(\pi_{(0)}(x_{\alpha_1})) = \Lambda_0(\pi_{(0)}(x_{\alpha_2})) = \Lambda_0(\pi_{(0)}(x_{-(\alpha_1+\alpha_2)})) = \frac{1-\omega}{9},\\ \Lambda_0(\pi_{(0)}(x_{-\alpha_1})) = \Lambda_0(\pi_{(0)}(x_{-\alpha_2})) = \Lambda_0(\pi_{(0)}(x_{\alpha_1+\alpha_2})) = \frac{1-\omega^2}{9}. \end{split} \label{explicitcvalue} \end{align} \begin{Lem}[{\cite[Proposition 3.5, Proposition 3.6]{LW3}}] For $\alpha,\beta\in\Phi$, we have \begin{align*} X(\alpha,\zeta_1)E^-(\beta,\zeta_2) &= E^-(\beta,\zeta_2)X(\alpha,\zeta_1)\myphi_{\alpha,\beta}(\zeta_1/\zeta_2),\\ E^+(\alpha,\zeta_1)X(\beta,\zeta_2) &= X(\beta,\zeta_2)E^+(\alpha,\zeta_1)\myphi_{\alpha,\beta}(\zeta_1/\zeta_2), \end{align*} in $(\END V)\{\zeta_1,\zeta_2\}$, where \begin{align*} \myphi_{\alpha,\beta}(x)=\prod_{p\in I}(1-\omega^{-p}x)^{-\langle\nu^p\alpha,\beta\rangle}. \end{align*} \end{Lem} \begin{Ex}\label{exppoly} The following explicit values will be used in \S\ref{cal}. \begin{align*} \myphi_{\alpha_1,\alpha_1}(x) &= \myphi_{-\alpha_1,-\alpha_1}(x)=\frac{1-x^3}{(1-x)^3}=1+\sum_{k\geq 1}3kx^k,\\ \myphi_{-\alpha_1,\alpha_1}(x) &= \myphi_{\alpha_1,-\alpha_1}(x)=\frac{(1-x)^3}{1-x^3}=1+\sum_{k\geq 1}3(x^{3k-1}-x^{3k-2}). \end{align*} \end{Ex} \begin{Cor}[{\cite[Corollary 2.2.12]{Nan}}]\label{idounan} For $\alpha,\beta\in\Phi$, let $\myphi_{\alpha,\beta}(x)=\sum_{k\geq 0}c_kx^k$. For $n,n'\in\mathbb{Z}$, we have \begin{align*} X(\alpha,n)E^-(\beta,n') &= \sum_{k\geq 0}c_kE^{-}(\beta,n'+k)X(\alpha,n-k),\\ E^+(\alpha,n)X(\beta,n') &= \sum_{k\geq 0}c_kX(\beta,n'+k)E^+(\alpha,n-k), \end{align*} where $E^-(\beta,n')=0$ (resp. $E^+(\alpha,n)=0$) when $n'>0$ (resp. $n<0$). 
More generally, for $n_1,\dots,n_{\ell},n'_1,\dots,n'_{\ell}\in\mathbb{Z}$ and $\alpha_1,\dots,\alpha_{\ell},\beta_1,\dots,\beta_{\ell}\in\Phi$, we have \begin{align*} {} &{} X(\alpha_1,n_1)\cdots X(\alpha_{\ell},n_{\ell})E^-(\beta,n') \\ &= \sum_{j_1,\cdots,j_{\ell}\geq 0}c_{j_1}\cdots c_{j_{\ell}}E^{-}(\beta,n'+j_1+\dots+j_{\ell})X(\alpha_1,n_1-j_1)\cdots X(\alpha_{\ell},n_{\ell}-j_{\ell}),\\ {} &{} E^+(\alpha,n)X(\beta_1,n'_1)\cdots X(\beta_{\ell},n'_{\ell}) \\ &= \sum_{j_1,\cdots,j_{\ell}\geq 0}c_{j_1}\cdots c_{j_{\ell}}X(\beta_1,n'_1+j_1)\cdots X(\beta_{\ell},n'_{\ell}+j_{\ell}) E^+(\alpha,n-j_1-\cdots-j_{\ell}). \end{align*} \end{Cor} \subsection{The triple tensor product of the basic module}\label{tripletensor} For $\alpha,\beta\in\Phi$, let \begin{align*} P_{\alpha,\beta}(x)=\prod_{p\in I, \langle\nu^p\alpha,\beta\rangle<0}(1-\omega^{-p}x)^{-\langle\nu^p\alpha,\beta\rangle}. \end{align*} \begin{Ex}\label{exppoly2} The following explicit values will be used in \S\ref{cal}. \begin{align*} P_{\alpha_1,\alpha_1}(x) &= P_{-\alpha_1,-\alpha_1}(x)=1+x+x^2,\\ P_{\alpha_2,-\alpha_1}(x) &= P_{\alpha_1,\alpha_1+\alpha_2}(x)=(1-\omega x)^2,\\ P_{-(\alpha_1+\alpha_2),-\alpha_1}(x) &= P_{\alpha_1,-\alpha_2}(x)=(1-\omega^2 x)^2. \end{align*} \end{Ex} Let $W$ be a $\GE$-module in a category $\mathcal{C}_K$ for $K\in\mathbb{C}$ (see ~\cite[\S3]{LW3}). By Remark \ref{koukan}, \begin{align} \lim_{\zeta_1,\zeta_2\to\zeta}P_{\alpha,\beta}(\zeta_1/\zeta_2)x_{\alpha}(\zeta_1)x_{\beta}(\zeta_2) \label{limdef} \end{align} makes sense as an element of $(\END W)\{\zeta\}$ (see ~\cite[\S4,\S5]{MP}), and we denote it by $x_{\alpha,\beta}(\zeta)$. In the rest of this paper, we take $W=V^{\otimes 3}(\cong V(\Lambda_0)^{\otimes 3})$. \begin{Prop}\label{mpprop} We have the following relations in $(\END W)\{\zeta\}$. \begin{enumerate} \item\label{mpprop1} $x_{-\alpha_1,-\alpha_1}(\zeta)= \frac{2\Lambda_0(\pi_{(0)}(x_{-\alpha_1}))^{2}P_{-\alpha_1,-\alpha_1}(1)}{\Lambda_0(\pi_{(0)}(x_{\alpha_1}))} E^{-}(\alpha_1,\zeta)x_{\alpha_1}(\zeta)E^{+}(\alpha_1,\zeta)$. \item\label{mpprop2} $x_{\alpha_1,\alpha_1}(\zeta)= \frac{2\Lambda_0(\pi_{(0)}(x_{\alpha_1}))^{2}P_{\alpha_1,\alpha_1}(1)}{\Lambda_0(\pi_{(0)}(x_{-\alpha_1}))} E^{-}(-\alpha_1,\zeta)x_{-\alpha_1}(\zeta)E^{+}(-\alpha_1,\zeta)$. \item\label{mpprop3} $x_{\alpha_2,-\alpha_1}(\zeta)= E^{-}(\alpha_1,\zeta)x_{\alpha_1,\alpha_1+\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta)$. \item\label{mpprop4} $x_{-(\alpha_1+\alpha_2),-\alpha_1}(\zeta)= E^{-}(\alpha_1,\zeta)x_{\alpha_1,-\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta)$. \end{enumerate} \end{Prop} \begin{proof} These are established in the same way as the proof of ~\cite[Theorem 6.6]{MP}. For explicit values of the constants, see \eqref{explicitcvalue} and Example \ref{exppoly2}. \end{proof} \begin{Prop}\label{high} For $a_1,a_2,a_3\in\mathbb{C}$, we define a vector in $W$ by \begin{align*} u_{a_1,a_2,a_3} = a_1(B(-1)\otimes 1\otimes 1) + a_2(1\otimes B(-1)\otimes 1) + a_3 (1\otimes 1\otimes B(-1)). \end{align*} If $a_1+a_2+a_3=0$ and $(a_1,a_2,a_3)\ne (0,0,0)$, then $u_{a_1,a_2,a_3}$ is a highest weight vector with highest weight $3\Lambda_0-\alpha_0=\Lambda_0+\Lambda_1+\Lambda_2-\delta$. \end{Prop} \begin{proof} It is easily verified by direct calculation using $B(-1)=f_0+f_1+f_2$ in $\GE$ (see ~\S\ref{prafflie}) and $(f_0+f_1+f_2)1=f_{0}1$ in $V\cong V(\Lambda_{0})$ (see ~\cite[(10.1.1)]{Kac}).
\end{proof} \begin{Rem}\label{notsub} The level 3 module $V(2\Lambda_i+\Lambda_j+n\delta)$, where $i\ne j\in I$ and $n\in\mathbb{Z}$, is not a submodule of $W$ because the equation in the weight lattice \begin{align*} 3\Lambda_{0}-(k_0\alpha_0+k_1\alpha_1+k_2\alpha_2)\equiv p_0\Lambda_0+p_1\Lambda_1+p_2\Lambda_2\pmod{\mathbb{Z}\delta}, \end{align*} where $k_0,k_1,k_2,p_0,p_1,p_2\in\mathbb{Z}$, implies $p_a\equiv p_b\pmod{3}$ for $a\ne b\in I$. This is easily seen by the fact that any entry in $A^{(1)}_2$ is congruent to 2 modulo 3 (see \eqref{A12GCM}). \end{Rem} \section{Spanning vectors}\label{maincomp} \subsection{Colored integers} Let $\CZ=\{\cint{n}\mid n\in\mathbb{Z}\}$ be the set of colored integers and put $\AZ=\mathbb{Z}\sqcup\CZ$. Recall \eqref{orde} in \S\ref{mainse}. There, we defined the order $\geq$ for positive integers and colored positive integers for simplicity. The order $\geq$ below is a generalization of the order $\geq$ in \S\ref{mainse}. \begin{Def}\label{twoorders} On the set $\AZ$, we consider total orders $\geq$ and $\ANOE$ such that \begin{align*} \dots>2>\cint{2}>1>\cint{1}>0>\cint{0}>-1>\cint{-1}>-2>\cint{-2}>\cdots, \\ \cdots\ANO\cint{2}\ANO 2\ANO\cint{1}\ANO 1\ANO\cint{0}\ANO 0\ANO\cint{-1}\ANO -1\ANO\cint{-2}\ANO -2\ANO\cdots. \end{align*} \end{Def} Similarly, we (re)define $\COLOR(n)\in\{\pm\}$ and $\CONT(n)\in\mathbb{Z}$ for $n\in\AZ$ by \begin{align*} \COLOR(n) = \begin{cases} + & \textrm{if $n\in\mathbb{Z}$},\\ - & \textrm{if $n\in\CZ$}, \end{cases}\quad \CONT(n) = \begin{cases} n & \textrm{if $n\in\mathbb{Z}$},\\ m & \textrm{if $n=\cint{m}$ for some $m\in\mathbb{Z}$}. \end{cases} \end{align*} Let $\ZSEQ$ be the set of finite length sequences $\boldsymbol{n}=(n_1,\dots,n_{\ell})$ of $\AZ$. We put $\SHAPE(\boldsymbol{n})=(\CONT(n_1),\dots,\CONT(n_{\ell}))$, and (re)define the length $\LENGTH(\boldsymbol{n})$ and the size $\SUM{\boldsymbol{n}}$ of $\boldsymbol{n}$ by \begin{align*} \LENGTH(\boldsymbol{n})=\ell,\quad \SUM{\boldsymbol{n}}=\CONT(n_1)+\dots+\CONT(n_{\ell}). \end{align*} \begin{Ex}\label{ex2color} For $\boldsymbol{n}=(\cint{-5},-5,\cint{-5},-5,\cint{-6})$, we have $\LENGTH(\boldsymbol{n})=5$, $\SUM{\boldsymbol{n}}=-26$ and $\SHAPE(\boldsymbol{n})=(-5,-5,-5,-5,-6)$. \end{Ex} \subsection{Lexicographical orders} Let $X$ be a set with a total order $\AANO$. The result $(n'_1,\dots,n'_{\ell})$ of sorting a finite sequence $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in X^{\ell}$ so that $n'_1\AANO\cdots\AANO n'_{\ell}$ is denoted by $\SORT^{(X,\AANO)}(\boldsymbol{n})$, where $\ell\geq 0$. For example, we have $\SORT^{(\AZ,\ANOO)}(\boldsymbol{n})=(\cint{-6},-5,-5,\cint{-5},\cint{-5})$ in Example \ref{ex2color}. \begin{Def}\label{lexdef3} For two finite sequences $\boldsymbol{m}=(m_1,\dots,m_{\ell})$ and $\boldsymbol{n}=(n_1,\dots,n_{\ell})$ of $X$ that have the same length $\ell\geq 0$, we write $\boldsymbol{m}\LEXO{(X,\AANO)}\boldsymbol{n}$ if there exists $1\leq j\leq\ell$ such that $n_j\NAANO m_j$ and $m_k=n_k$ for $k<j$. \end{Def} The following four lemmata are easily proved, and we omit proofs. \begin{Lem}\label{transx} The binary relation $\LEXO{(X,\AANO)}$ (on $X^{\ell}$ for $\ell\geq 0$) is transitive (and thus $\LEXXO{(X,\AANO)}$ is a total order). \end{Lem} \begin{Lem}\label{sortx} For $\ell\geq 0$ and $\boldsymbol{n}\in X^{\ell}$, we have $\boldsymbol{n}\LEXXO{(X,\AANO)}\SORT^{(X,\AANO)}(\boldsymbol{n})$. \end{Lem} \begin{Lem}\label{mergex} Let $\boldsymbol{m},\boldsymbol{n}\in X^{\ell}$, where $\ell\geq 0$. 
If $\boldsymbol{m}\LEXO{(X,\AANO)}\boldsymbol{n}$ and $\boldsymbol{m}=\SORT^{(X,\AANO)}(\boldsymbol{m})$, then we have $\SORT^{(X,\AANO)}((\boldsymbol{h},\boldsymbol{m},\boldsymbol{t}))\LEXO{(X,\AANO)}\SORT^{(X,\AANO)}((\boldsymbol{h},\boldsymbol{n},\boldsymbol{t}))$ for any finite sequences $\boldsymbol{h},\boldsymbol{t}$ of $X$. Here, $(\boldsymbol{h},\boldsymbol{m},\boldsymbol{t})$ stands for the concatenation of $\boldsymbol{h},\boldsymbol{m}$ and $\boldsymbol{t}$. \end{Lem} For example, $(\boldsymbol{h},\boldsymbol{m},\boldsymbol{t})=(\cint{-6},\cint{-5},\cint{-6},-6,\cint{-5},\cint{-5})$ when $\boldsymbol{h}=(\cint{-6},\cint{-5})$, $\boldsymbol{m}=(\cint{-6})$ and $\boldsymbol{t}=(-6,\cint{-5},\cint{-5})$ when $X=\AZ$. \begin{Lem}\label{lexcomp3x} Let $\boldsymbol{m},\boldsymbol{n}$ be two finite sequences of $X$ such that $(\boldsymbol{m},\boldsymbol{n})=\SORT^{(X,\AANO)}((\boldsymbol{m},\boldsymbol{n}))$. We have $(\boldsymbol{m},\boldsymbol{n})\LEXO{(X,\AANO)}\SORT^{(X,\AANO)}((\boldsymbol{m}',\boldsymbol{n}'))$ for any finite sequences $\boldsymbol{m}',\boldsymbol{n}'$ of $X$ such that $\LENGTH(\boldsymbol{m})=\LENGTH(\boldsymbol{m}')$, $\LENGTH(\boldsymbol{n})=\LENGTH(\boldsymbol{n}')$ and $\boldsymbol{m}\LEXO{(X,\AANO)}\boldsymbol{m}'$. \end{Lem} \begin{Def}\label{lexdef} For $\boldsymbol{m}=(m_1,\dots,m_{\ell}),\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$, we write $\boldsymbol{m}\LEX\boldsymbol{n}$ if we have either $\SHAPE(\boldsymbol{m})\LEXO{(\mathbb{Z},\leq)}\SHAPE(\boldsymbol{n})$ (evidently, exclusive) or $\SHAPE(\boldsymbol{m})=\SHAPE(\boldsymbol{n})$ and $\boldsymbol{m}\LEXO{(\AZ,\ANOO)}\boldsymbol{n}$. \end{Def} Note that $\LEX$ is transitive, which follows from Lemma \ref{transx}. Thus, $\LEXX$ is a total order (on $\AZ^{\ell}$ for $\ell\geq 0$). \begin{Rem}\label{orderimportant} We emphasize that our definition of $\boldsymbol{m}\LEX\boldsymbol{n}$, where $\boldsymbol{m},\boldsymbol{n}\in\ZSEQ$, requires $\LENGTH(\boldsymbol{m})=\LENGTH(\boldsymbol{n})$ while we do not require $\SUM{\boldsymbol{m}}=\SUM{\boldsymbol{n}}$. \end{Rem} \begin{Ex} We have $(-5,-3,\cint{-3})\LEX (\cint{-5},-4,1)$ and $(-7,-5,\cint{-5})\LEX (-7,-5,-5)$. \end{Ex} Throughout this paper, we put $\SORT=\SORT^{(\AZ,\ANOO)}$. Note that we have \begin{align} \SHAPE(\SORT(\boldsymbol{n}))=\SORT^{(\mathbb{Z},\leq)}(\SHAPE(\boldsymbol{n})) \label{shrel} \end{align} for $\boldsymbol{n}\in\ZSEQ$ by the fact that $a\ANOO b$ implies $\CONT(a)\leq\CONT(b)$. \begin{Cor}\label{lexcomp2} For $\boldsymbol{n}\in\ZSEQ$, we have $\boldsymbol{n}\LEXX\SORT(\boldsymbol{n})$. \end{Cor} \begin{proof} It directly follows from \eqref{shrel} and Lemma \ref{sortx}. \end{proof} \begin{Cor}\label{lexcomp4} Let $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$. We have \begin{align*} \boldsymbol{n}\LEX\SORT((n_1-\varepsilon_1,\dots,n_{\ell}-\varepsilon_{\ell})) \end{align*} for nonnegative integers $\varepsilon_1,\dots,\varepsilon_{\ell}\geq 0$ such that $\varepsilon_1+\dots+\varepsilon_{\ell}\geq 1$. Here, for $n\in\AZ$ and $\varepsilon\in\mathbb{Z}$, we define $m=n-\varepsilon\in\AZ$ so that $\CONT(m)=\CONT(n)-\varepsilon$ and $\COLOR(m)=\COLOR(n)$. \end{Cor} \begin{proof} It is not difficult to show $\SHAPE(\boldsymbol{n})\LEXO{(\mathbb{Z},\leq)}\SHAPE(\SORT((n_1-\varepsilon_1,\dots,n_{\ell}-\varepsilon_{\ell})))$ by \eqref{shrel}, Lemma \ref{transx} and Lemma \ref{sortx}. \end{proof} The following two lemmata are easily deduced by \eqref{shrel} and Lemma \ref{mergex}, Lemma \ref{lexcomp3x} respectively. 
\begin{Cor}\label{lexcomp} Let $\boldsymbol{m},\boldsymbol{n}\in\ZSEQ$. If $\boldsymbol{m}\LEX\boldsymbol{n}$ and $\boldsymbol{m}=\SORT(\boldsymbol{m})$, then we have $\SORT((\boldsymbol{h},\boldsymbol{m},\boldsymbol{t}))\LEX\SORT((\boldsymbol{h},\boldsymbol{n},\boldsymbol{t}))$ for $\boldsymbol{h},\boldsymbol{t}\in\ZSEQ$. \end{Cor} \begin{Cor}\label{lexcomp3} Let $\boldsymbol{m},\boldsymbol{n}\in\ZSEQ$ be such that $(\boldsymbol{m},\boldsymbol{n})=\SORT((\boldsymbol{m},\boldsymbol{n}))$. We have $(\boldsymbol{m},\boldsymbol{n})\LEX\SORT((\boldsymbol{m}',\boldsymbol{n}'))$ for $\boldsymbol{m}',\boldsymbol{n}'\in\ZSEQ$ such that $\LENGTH(\boldsymbol{m})=\LENGTH(\boldsymbol{m}')$, $\LENGTH(\boldsymbol{n})=\LENGTH(\boldsymbol{n}')$, $\boldsymbol{m}\LEX\boldsymbol{m}'$, $\SHAPE(\boldsymbol{m})\ne\SHAPE(\boldsymbol{m}')$. \end{Cor} In \S\ref{cal}, we apply Corollary \ref{lexcomp3} under $\SUM{\boldsymbol{m}}\ne\SUM{\boldsymbol{m}'}$, which implies $\SHAPE(\boldsymbol{m})\ne\SHAPE(\boldsymbol{m}')$. \subsection{Reducibilities}\label{redchap} In what follows, we put $x_{\pm\alpha_1}(n)=\YMP{n}$ for $n\in\mathbb{Z}$ and $\YY(\boldsymbol{n})=\YY_{\COLOR(n_1)}(\CONT(n_1))\cdots\YY_{\COLOR(n_{\ell})}(\CONT(n_{\ell}))$ for $\boldsymbol{n}\in\ZSEQ$. \begin{Def}\label{defnseq} Let $\NSEQ$ be the subset of $\ZSEQ$ consisting of $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$ such that $\boldsymbol{n}=\SORT(\boldsymbol{n})$ (i.e., $\boldsymbol{n}$ is weakly increasing by the order $\ANOO$) and $\CONT(n_i)<0$ for $1\leq i\leq\ell$ (i.e., $\boldsymbol{n}$ consists of negative integers or colored negative integers). \end{Def} Note that the assignment $(\lambda_1,\dots,\lambda_{\ell})\mapsto (-\lambda_1,\dots,-\lambda_{\ell})$ is a bijection from the set of 2-colored partitions (in the sense of \S\ref{mainse}) to $\NSEQ$. \begin{Lem}\label{sortres3} For $\ell\geq 0$, $s\in\mathbb{Z}$ and a highest weight vector $\VZ\in W$, we have \begin{align*} \Theta_{\ell,s}\VZ =\sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{m})\leq\ell, s\leq\SUM{\boldsymbol{m}}}} U(\GAAA_-)_{s-\SUM{\boldsymbol{m}}}\BUI{\boldsymbol{m}}. \end{align*} \end{Lem} \begin{proof} It follows from Remark \ref{sortresRM}, the highest weight property of $\VZ$ and $U(\GAAA_-)_{t}=\{0\}$ for $t>0$. \end{proof} \begin{Def} Let $\VZ\in W$ be a highest weight vector. We say that an element $\boldsymbol{n}$ in $\ZSEQ$ is $\VZ$-reducible if $\BUI{\boldsymbol{n}}\in W_{>\boldsymbol{n}}(\VZ)$, where (see also Remark \ref{orderimportant}) \begin{align*} W_{>\boldsymbol{n}}(\VZ) = \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{n})\geq\LENGTH(\boldsymbol{m}), \SUM{\boldsymbol{n}}<\SUM{\boldsymbol{m}}}} U(\GAAA_-)_{\SUM{\boldsymbol{n}}-\SUM{\boldsymbol{m}}}\BUI{\boldsymbol{m}} + \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{n})>\LENGTH(\boldsymbol{m}), \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI{\boldsymbol{m}} + \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \boldsymbol{n}\LEX\boldsymbol{m}, \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI{\boldsymbol{m}}. \end{align*} Otherwise, we say that $\boldsymbol{n}$ is $\VZ$-irreducible. \end{Def} \begin{Rem}\label{deghomgpart2} By Lemma \ref{sortres3}, for $\boldsymbol{n}\in\ZSEQ$, we have \begin{align*} \Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ\subseteq W_{>\boldsymbol{n}}(\VZ). \end{align*} \end{Rem} \begin{Prop}\label{ensort} For $\boldsymbol{n}\in\ZSEQ\setminus\NSEQ$, $\boldsymbol{n}$ is $\VZ$-reducible.
\end{Prop} \begin{proof} Let $\boldsymbol{n}=(n_1,\dots,n_{\ell})$ and $\boldsymbol{n}'=\SORT(\boldsymbol{n})$. By \eqref{sortres}, we have $\BUI{\boldsymbol{n}}-\BUI{\boldsymbol{n}'}\in\Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ$. If $\CONT(n_i)>0$ for some $i$, we have $\BUI{\boldsymbol{n}'}=0$. If otherwise and $\CONT(n_j)= 0$ for some $j$, we have $\BUI{\boldsymbol{n}'}\in\Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ$. If otherwise, we have $\boldsymbol{n}\ne\boldsymbol{n}'$ and thus we have $\boldsymbol{n}\LEX\boldsymbol{n}'$ by Corollary \ref{lexcomp2}. \end{proof} \begin{Prop}\label{spaneq} Let $\RED(\VZ)$ be the subset of $\NSEQ$, consisting of all $\VZ$-irreducible elements $\boldsymbol{n}$. We have \begin{align*} U(\GE)\VZ = \sum_{\boldsymbol{n}\in\RED(\VZ)} U(\GAAA_-)\BUI{\boldsymbol{n}}. \end{align*} \end{Prop} \begin{proof} It is routine to show $\BUI{\boldsymbol{m}}\in\sum_{\boldsymbol{n}\in\RED(\VZ)} U(\GAAA_-)\BUI{\boldsymbol{n}}$ for any $\boldsymbol{m}\in\NSEQ$ by induction on $-\SUM{\boldsymbol{m}}$ (and on $\LENGTH(\boldsymbol{m})$ and on $\LEXX$). \end{proof} \subsection{Forbidden patterns} We say that $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$ contains $\boldsymbol{m}=(m_1,\dots,m_{\ell'})\in\ZSEQ$ if there exists $0\leq i\leq \ell-\ell'$ such that $n_{j+i}=m_j$ for $1\leq j\leq \ell'$. \begin{Def} We say that $\boldsymbol{m}\in\ZSEQ$ is a forbidden pattern if $\boldsymbol{n}\in\ZSEQ$ is $\VZ$-reducible for any highest weight vector $\VZ\in W$ whenever $\boldsymbol{n}$ contains $\boldsymbol{m}$. \end{Def} \begin{Thm}\label{forthm} For $k\in\mathbb{Z}$, the following elements in $\ZSEQ$ are forbidden patterns. \begin{enumerate} \item[(1)] $(-k,-k), (\cint{-k},\cint{-k}), (-k-1,-k), (\cint{-k-1},\cint{-k})$. \item[(2)] $(\cint{-1-3k},-3k),(-1-3k,\cint{-3k})$. \item[(3)] $(-1-3k,\cint{-1-3k}),(\cint{-2-3k},-3k)$. \item[(4)] $(-2-3k,\cint{-2-3k}),(\cint{-3-3k},-1-3k)$. \item[(5)] $(\cint{-3-3k},-2-3k),(-3-3k,\cint{-2-3k})$. \item[(6)] $(-3-3k,\cint{-3-3k},\cint{-1-3k}),(-5-3k,-3-3k,\cint{-3-3k})$. \item[(7)] $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$. \end{enumerate} \end{Thm} \begin{proof} We prove that the elements in (1),(2),(3),(4),(5),(6),(7) are forbidden patterns in \S\ref{forone}, \S\ref{fortwoA}, \S\ref{fortwoB}, \S\ref{fortwoC}, \S\ref{fortwoD}, \S\ref{forthree}, \S\ref{forfour}, respectively. \end{proof} In the rest of this section, we put $w=1\otimes 1\otimes 1\in W$. \begin{Prop}\label{initcor} An element $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ in $\NSEQ$ is $w$-reducible if $\ell\geq 1$ and $\lambda_{\ell}=-1,\cint{-1},\cint{-2}$. \end{Prop} \begin{proof} By direct calculation, we have $\BUIII{-1}=-\BUIII{\cint{-1}} \in \mathbb{C}B(-1)w$. Thus, by Remark \ref{sortresRM}, we have $\BUIII{(\lambda_1,\dots,\lambda_{\ell})}\in\Theta_{\ell-1,|\lambda|}$ if $\lambda_{\ell}=-1,\cint{-1}$. The argument for $\lambda_{\ell}=\cint{-2}$ is similar because of $\BUIII{-2}-\BUIII{\cint{-2}} \in \mathbb{C}B(-2)w$, \eqref{sortres} and $(\lambda_1,\dots,\lambda_{\ell-1},\cint{-2})\LEX\SORT((\lambda_1,\dots,\lambda_{\ell-1},-2))$ by Corollary \ref{lexcomp}. \end{proof} For a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$, we easily see that the condition that $\lambda$ satisfies (D1)--(D3) in \S\ref{mainse} is equivalent to the condition that $(-\lambda_1,\dots,-\lambda_{\ell})\in\NSEQ$ does not contain the elements (1)--(7) in Theorem \ref{forthm}. 
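The containment condition against the patterns (1)--(7) of Theorem \ref{forthm} is easy to mechanize. The following Python sketch is our own illustration (the function names and the encoding of a colored integer as a pair \texttt{(value, colored)} are ad hoc and appear nowhere in the references); it simply tests whether a given sequence avoids all of the patterns. For instance, it rejects $(-5,\cint{-5})$, which contains the pattern $(-2-3k,\cint{-2-3k})$ of (4) with $k=1$.
\begin{small}\begin{verbatim}
# Illustration only: a colored integer is encoded as (value, colored),
# with colored=False for an ordinary integer and colored=True for a colored one.
def patterns(k):
    C, P = True, False
    return [
        [(-k, P), (-k, P)], [(-k, C), (-k, C)],
        [(-k-1, P), (-k, P)], [(-k-1, C), (-k, C)],                    # (1)
        [(-1-3*k, C), (-3*k, P)], [(-1-3*k, P), (-3*k, C)],            # (2)
        [(-1-3*k, P), (-1-3*k, C)], [(-2-3*k, C), (-3*k, P)],          # (3)
        [(-2-3*k, P), (-2-3*k, C)], [(-3-3*k, C), (-1-3*k, P)],        # (4)
        [(-3-3*k, C), (-2-3*k, P)], [(-3-3*k, P), (-2-3*k, C)],        # (5)
        [(-3-3*k, P), (-3-3*k, C), (-1-3*k, C)],
        [(-5-3*k, P), (-3-3*k, P), (-3-3*k, C)],                       # (6)
        [(-5-3*k, C), (-4-3*k, P), (-2-3*k, P), (-1-3*k, C)],          # (7)
    ]

def contains(seq, pat):
    # "seq contains pat": pat occurs as a contiguous subword of seq
    return any(list(seq[i:i+len(pat)]) == list(pat)
               for i in range(len(seq) - len(pat) + 1))

def avoids_forbidden_patterns(seq):
    # scan every k that could possibly match an entry of seq
    m = max((abs(v) for v, _ in seq), default=0)
    return not any(contains(seq, p)
                   for k in range(-m-2, m+3) for p in patterns(k))

# avoids_forbidden_patterns([(-5, False), (-5, True)]) is False (pattern (4), k=1)
\end{verbatim}\end{small}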
In view of Proposition \ref{initcor} and (D4), we may restate Proposition \ref{spaneq} as follows. \begin{Cor}\label{biidenintercor} For $i=1$ (resp. $i=2$), put $v_1=u_{a_1,a_2,a_3}$ with $a_1+a_2+a_3=0$ and $(a_1,a_2,a_3)\ne (0,0,0)$ (resp. $v_2=w$). The set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}}\BUII{(-\lambda_1,\dots,-\lambda_{\ell})}{i}\} \end{align*} spans a module that is isomorphic to $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2-(2-i)\delta)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{3}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\BIR$ (resp. $\BIRP$). \end{Cor} \section{Vertex operator calculations}\label{cal} Recall $\omega=\exp(2\pi\sqrt{-1}/3)$ in \S\ref{vertset}. Throughout this section, we fix an integer $k$ and a highest weight vector $\VZ$ in $W$. We also recall $\YY(\boldsymbol{n})=\YY_{\COLOR(n_1)}(\CONT(n_1))\cdots\YY_{\COLOR(n_{\ell})}(\CONT(n_{\ell}))$ for $\boldsymbol{n}\in\ZSEQ$ (see \S\ref{redchap}). Such a product is called a $\YY$-monomial in this section. We remark that the arguments in this section are similar to those in ~\cite{Nan}. \subsection{Annihilating elements} Recall \eqref{vvoorr} and \eqref{limdef}. We denote by $R_i(n)$ the coefficient of $\zeta^n$ in $R_i(\zeta)$ for $i\in\{1,2,\pm\}$, where \begin{align*} R_{+}(\zeta) &= x_{\alpha_1,\alpha_1}(\zeta), \quad\quad R_{-}(\zeta) = x_{-\alpha_1,-\alpha_1}(\zeta), \\ R_1(\zeta) &= E^{-}(-\alpha_1,\zeta)x_{\alpha_2,-\alpha_1}(\zeta)-x_{\alpha_1,\alpha_1+\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta),\\ R_2(\zeta) &= E^{-}(-\alpha_1,\zeta)x_{-(\alpha_1+\alpha_2),-\alpha_1}(\zeta)-x_{\alpha_1,-\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta). \end{align*} We abbreviate $E^+(\alpha_1,n)$ (resp. $E^-(-\alpha_1,n)$) to $E^+(n)$ (resp. $E^-(n)$) for $n\in\mathbb{Z}$. Note that $E^\pm(n)=0$ when $\mp n>0$. \subsection{Preparations}\label{nandiprep} For an integer $\ell\geq 0$ and summable expressions (see ~\cite[\S4]{MP}) $G$ and $H$ that have the same principal degree $a\in\mathbb{Z}$, the notation $G\equiv_{\ell} H$ means that we have $G\VU-H\VU\in\Theta_{\ell,a}\VU$ for any (i.e., not necessarily highest weight) vector $\VU$ in any $\GE$-module in a category $\mathcal{C}_K$ for any $K\in\mathbb{C}$ (see ~\cite[\S3]{LW3}). \begin{Def} Let $\ell\geq 1$ and $a\in\mathbb{Z}$. In this paper, a good expression of width $\ell$ and degree $a$ is an expansion (i.e., possibly an infinite formal sum) of the form \begin{align} \xi=\sum_{\boldsymbol{p}=(p_1,\dots,p_{\ell})\in\AZ^{\ell}, \SUM{\boldsymbol{p}}=a} c_{\boldsymbol{p}}\YY(\boldsymbol{p}), \label{goodexp} \end{align} where $c_{\boldsymbol{p}}$ are complex numbers, such that $\SUPP_i(\xi)$ is finite for any $i\in\mathbb{Z}$. \end{Def} As usual (see ~\cite[(4.8),(6.19)]{LW3}), $\SUPP_i(\xi)$ is defined to be the set \begin{align*} \{\boldsymbol{p}=(p_1,\dots,p_{\ell})\in\AZ^{\ell}\mid c_{\boldsymbol{p}}\ne 0 \textrm{ and } \CONT(p_{\ell'})+\dots+\CONT(p_{\ell})\leq i \textrm{ for } 1\leq \ell'\leq\ell\}. \end{align*} A good expression stands for a good expression of width $\ell$ and degree $a$ for some $\ell\geq 1$ and $a\in\mathbb{Z}$. It is clear that a good expression is summable for any $\GE$-module in a category $\mathcal{C}_K$ for any $K\in\mathbb{C}$. We say that a good expression $\xi$ is zero if it is formally zero (i.e., $c_{\boldsymbol{p}}=0$ for all $\boldsymbol{p}$).
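To illustrate the finiteness condition on $\SUPP_i(\xi)$, consider the following toy expansion (ours, for orientation only; it is not used later): $\xi=\sum_{k\geq 0}\YP{-1-k}\YM{-1+k}$, with all coefficients equal to $1$, which is of the form \eqref{goodexp} with width $2$ and degree $-2$. For $\boldsymbol{p}=(-1-k,\cint{-1+k})$ the partial sums in the definition of $\SUPP_i$ are $-1+k$ and $-2$, so $\boldsymbol{p}\in\SUPP_i(\xi)$ exactly when $k\leq i+1$ and $i\geq -2$; hence every $\SUPP_i(\xi)$ is finite and $\xi$ is a good expression. By contrast, $\sum_{k\geq 0}\YP{-1+k}\YM{-1-k}$ also has all terms of degree $-2$, but its $\SUPP_0$ is infinite, so it is not a good expression.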
For a nonzero good expression $\xi$, we define the leading monomial $\LM(\xi)$ to be the $\YY$-monomial $\YY(\boldsymbol{p})$ such that $c_{\boldsymbol{p}}\ne 0$, $\boldsymbol{p}=\SORT(\boldsymbol{p})$ and $\boldsymbol{p}$ is minimum with respect to $\LEXX$ among such monomials in \eqref{goodexp}, if such a minimum exists (otherwise, $\LM(\xi)$ is not defined). \begin{Ex}\label{firstexpansionex} By \eqref{sortres} and $P_{\beta,\beta}(1)=3$ for $\beta=\pm\alpha_1$ (see Example \ref{exppoly2}), we have \begin{align*} R_{\pm}(-2k)/3 &\equiv_1 \YMP{-k}\YMP{-k}+2\YMP{-k-1}\YMP{-k+1}+2\YMP{-k-2}\YMP{-k+2}+\cdots, \\ R_{\pm}(-2k-1)/3 &\equiv_1 2(\YMP{-k-1}\YMP{-k}+\YMP{-k-2}\YMP{-k+1}+\YMP{-k-3}\YMP{-k+2}+\cdots), \end{align*} where the right-hand sides are good expressions of width 2 and degree $-2k$, $-2k-1$ with the leading monomials $\YMP{-k}\YMP{-k}$, $\YMP{-k-1}\YMP{-k}$, respectively. Note that the $\YY$-monomials are arranged in increasing order with respect to $\LEX$. \end{Ex} \begin{Prop}\label{forbhantei} Let $b\geq 1$ and $a\in\mathbb{Z}$. Assume that we have an expression \begin{align*} R = r + \sum_{i\geq 1}E^{-}(-i)r_{i}^{-} + \sum_{i\geq 1}r_{i}^{+}E^{+}(i), \end{align*} such that $R\equiv_{b-1}0$, where $r$ is a nonzero good expression of width $b$ and degree $a$ with $\boldsymbol{g}=\LM(r)$, and each $r_i^{\pm}$ is a good expression of width $b$ and degree $a\mp i$. If, for every $i\geq 1$, every $\YY$-monomial $\YY(\boldsymbol{m})$ appearing in $r_i^{+}$ satisfies $\boldsymbol{g}\LEX\boldsymbol{m}$, then $\boldsymbol{g}$ is a forbidden pattern. \end{Prop} \begin{proof} Take $\boldsymbol{j},\boldsymbol{h}\in\ZSEQ$ and let $\boldsymbol{n}=(\boldsymbol{j},\boldsymbol{g},\boldsymbol{h})$. By $R\equiv_{b-1}0$ and \eqref{thetaideal}, we have $\YY(\boldsymbol{j})R\BUI{\boldsymbol{h}}\in\Theta_{\ell(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ$, which is a subspace of $W_{>\boldsymbol{n}}(\VZ)$ by Remark \ref{deghomgpart2}. By Proposition \ref{ensort}, it is enough to show that $\boldsymbol{n}$ is $\VZ$-reducible assuming $\boldsymbol{n}=\SORT(\boldsymbol{n})$. Let $c_{\boldsymbol{g}}$ be the (nonzero) coefficient of $\YY(\boldsymbol{g})$ in $r$ and \begin{align*} U=\sum_{{\boldsymbol{m}\in\NSEQ, \boldsymbol{n}\LEX\boldsymbol{m}, \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI{\boldsymbol{m}}. \end{align*} By Corollary \ref{lexcomp2}, Corollary \ref{lexcomp} and \eqref{sortres}, we have \begin{align*} \YY(\boldsymbol{j})r\BUI{\boldsymbol{h}} - c_{\boldsymbol{g}}\BUI{\boldsymbol{n}} \in \Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ + U.\end{align*} Let $\boldsymbol{h}=(h_1,\dots,h_u)$. By Corollary \ref{idounan}, we have \begin{align*} E^{+}(i)\BUI{\boldsymbol{h}}\in\sum_{(i_1,\dots,i_u)\in\mathbb{Z}^u_{\geq 0}, i_1+\dots+i_u=i}\mathbb{C}\BUI{(h_1+i_1,\dots,h_u+i_u)}. \end{align*} By \eqref{sortres}, Corollary \ref{lexcomp2}, Corollary \ref{lexcomp} and Corollary \ref{lexcomp3}, we have \begin{align*} \YY(\boldsymbol{j})r_i^{+}E^{+}(i)\BUI{\boldsymbol{h}}\in \Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ + U.\end{align*} Let $\boldsymbol{j}=(j_1,\dots,j_p)$. By Corollary \ref{idounan}, $\YY(\boldsymbol{j})E^{-}(-i)r_i^{-}\BUI{\boldsymbol{h}}$ belongs to \begin{align*} \sum_{i'=1}^{i} E^{-}(-i')\Theta_{\LENGTH(\boldsymbol{n}),\SUM{\boldsymbol{n}}+i'}\VZ +\sum_{(i_1,\dots,i_p)\in\mathbb{Z}^p_{\geq 0}, i_1+\dots+i_p=i} \mathbb{C}Y((j_1-i_1,\dots,j_p-i_p))r_i^{-}\BUI{\boldsymbol{h}}.
\end{align*} By Remark \ref{sortresRM}, Corollary \ref{lexcomp4} and Corollary \ref{lexcomp3}, we see that \begin{align*} Y((j_1-i_1,\dots,j_p-i_p))r_i^{-}\BUI{\boldsymbol{h}}\in \Theta_{\LENGTH(\boldsymbol{n})-1,\SUM{\boldsymbol{n}}}\VZ+ U.\end{align*} By $\YY(\boldsymbol{j})R\BUI{\boldsymbol{h}} = c_{\boldsymbol{g}}\BUI{\boldsymbol{n}} +(\YY(\boldsymbol{j})r\BUI{\boldsymbol{h}}-c_{\boldsymbol{g}}\BUI{\boldsymbol{n}}) +\sum_{i\geq 1}\YY(\boldsymbol{j})r_i^{+}E^{+}(i)\BUI{\boldsymbol{h}} +\sum_{i\geq 1}\YY(\boldsymbol{j})E^{-}(-i)r_i^{-}\BUI{\boldsymbol{h}}$ and Lemma \ref{sortres3}, %Remark \ref{deghomgpart2}, we have $\BUI{\boldsymbol{n}}\in W_{>\boldsymbol{n}}(\VZ)$. \end{proof} \subsection{Forbiddenness of $(-k,-k)$, $(\cint{-k},\cint{-k})$, $(-k-1,-k)$ and $(\cint{-k-1},\cint{-k})$}\label{forone} By Proposition \ref{mpprop} (\ref{mpprop2}), we have (on any $\GE$-module in a category $\mathcal{C}_K$ for any $K\in\mathbb{C}$) \begin{align*} R_+(-2k) = \rho\sum_{i,j\geq 0} E^-(-\alpha_1,-i)\YM{-2k+i-j}E^+(-\alpha_1,j), \end{align*} where $\rho=2\Lambda_0(\pi_{(0)}(x_{\alpha_1}))^{2}P_{\alpha_1,\alpha_1}(1)/\Lambda_0(\pi_{(0)}(x_{-\alpha_1}))$, which is a complex number. Thus, we have $R_+(-2k)\equiv_{1}0$. By Example \ref{firstexpansionex} and Proposition \ref{forbhantei} (in this case, $r^{\pm}_i=0$ for $i\geq 1$), $(-k,-k)$ is a forbidden pattern. The other cases are similar. \subsection{Forbiddenness of $(\cint{-1-3k},-3k)$ and $(-1-3k,\cint{-3k})$}\label{fortwoA} This follows from expansions below, Proposition \ref{forbhantei} and $R_{1}(-6k-1)=R_{2}(-6k-1)=0$. {\footnotesize \begin{align*} {} &{} R_1(-6k-1)/(-3(1+2\omega)) \\ &\equiv_1 \YM{-1-3k}\YP{-3k} -(1+\omega)\YP{-1-3k}\YM{-3k} + \omega\YM{-2-3k}\YP{1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((2+\omega)\YP{-3k}\YM{-3k} + (-1+\omega)\YM{-1-3k}\YP{1-3k} - (1+2\omega)\YP{-1-3k}\YM{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}((-1+\omega)\YP{-1-3k}\YM{-1-3k} -(1+2\omega) \YM{-2-3k}\YP{-3k}+(2+\omega)\YP{-2-3k}\YM{-3k}+\cdots)E^+(1)\\ &- \cdots,\\ &{} \\ {} &{} (R_1(-6k-1)+R_2(-6k-1))/(-9) \\ &\equiv_1 \YP{-1-3k}\YM{-3k} -\YM{-2-3k}\YP{1-3k} -\YP{-2-3k}\YM{1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-3k}\YM{-3k} -\YM{-1-3k}\YP{1-3k} + 2\YP{-1-3k}\YM{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YP{-1-3k}\YM{-1-3k} + 2\YM{-2-3k}\YP{-3k} -\YP{-2-3k}\YM{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} Here, $-3(1+2\omega)=C_1(1-\omega^{2(-3k-1)})$, where $C_1=P_{\alpha_2,-\alpha_1}(1)=P_{\alpha_1,\alpha_1+\alpha_2}(1)(=-3\omega)$ by Remark \ref{koukan} and Example \ref{exppoly2}. Similarly, we see that $R_2(-6k-1)$ begins with (modulo $\equiv_1$) $C_2(1-\omega^{1(-3k-1)})\YM{-1-3k}\YP{-3k}$, where $C_2=P_{-(\alpha_1+\alpha_2),-\alpha_1}(1)=P_{\alpha_1,-\alpha_2}(1)(=3(1+\omega))$. Because of $C_1(1-\omega^{2(-3k-1)})+C_2(1-\omega^{1(-3k-1)})=0$, $R_1(-6k-1)+R_2(-6k-1)$ begins with \begin{align*} (C_1(\omega^{-3k-1}-1)+C_2(\omega^{2(-3k-1)}-1))\YP{-1-3k}\YM{-3k}=-9\YP{-1-3k}\YM{-3k}. \end{align*} Throughout this section, we do not explain such routine calculations. \subsection{Forbiddenness of $(-1-3k,\cint{-1-3k})$ and $(\cint{-2-3k},-3k)$}\label{fortwoB} This follows from expansions of $R_{1}(-6k-2)$ and $R_{2}(-6k-2)$ below and Proposition \ref{forbhantei}. 
{\footnotesize \begin{align*} {} &{} R_1(-6k-2)/(-3(2+\omega)) \\ &\equiv_1 \YP{-1-3k}\YM{-1-3k} + \omega \YM{-2-3k}\YP{-3k} -(1+\omega)\YP{-2-3k}\YM{-3k} +\cdots\\ &+ \frac{1}{3}E^-(-1)((1+2\omega)\YM{-1-3k}\YP{-3k} + (1-\omega)\YP{-1-3k}\YM{-3k} -(2+\omega)\YM{-2-3k}\YP{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}((1-\omega)\YM{-2-3k}\YP{-1-3k} -(2+\omega)\YP{-2-3k}\YM{-1-3k} + (1+2\omega)\YM{-3-3k}\YP{-3k}+\cdots)E^+(1)\\ &- \cdots,\\ &{} \\ {} &{} (R_1(-6k-2)-(1+\omega)R_2(-6k-2))/(-9\omega) \\ &\equiv_1 \YM{-2-3k}\YP{-3k} -\YP{-2-3k}\YM{-3k} -\YM{-3-3k}\YP{1-3k}+\cdots \\ &+ \frac{1}{3}E^-(-1)(2\YM{-1-3k}\YP{-3k} -\YP{-1-3k}\YM{-3k} -\YM{-2-3k}\YP{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YM{-2-3k}\YP{-1-3k} -\YP{-2-3k}\YM{-1-3k} +2\YM{-3-3k}\YP{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(-2-3k,\cint{-2-3k})$ and $(\cint{-3-3k},-1-3k)$}\label{fortwoC} This follows from expansions of $R_{1}(-6k-4)$ and $R_{2}(-6k-4)$ below and Proposition \ref{forbhantei}. {\footnotesize \begin{align*} {} &{} R_1(-6k-4)/(3(2+\omega)) \\ &\equiv_1 \YP{-2-3k}\YM{-2-3k} + \omega \YM{-3-3k}\YP{-1-3k} -(1+\omega) \YP{-3-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((-1+\omega)\YM{-2-3k}\YP{-1-3k} + (2+\omega)\YP{-2-3k}\YM{-1-3k} -(1+2\omega)\YM{-3-3k}\YP{-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-(1+2\omega)\YM{-3-3k}\YP{-2-3k} + (-1+\omega)\YP{-3-3k}\YM{-2-3k} + (2+\omega)\YM{-4-3k}\YP{-1-3k}+\cdots)E^+(1)\\ &- \cdots,\\ &{} \\ {} &{} (R_1(-6k-4)-(1+\omega)R_2(-6k-4))/(9\omega) \\ &\equiv_1 \YM{-3-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-1-3k} -\YM{-4-3k}\YP{-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YM{-2-3k}\YP{-1-3k} + \YP{-2-3k}\YM{-1-3k} -2\YM{-3-3k}\YP{-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YM{-3-3k}\YP{-2-3k} + \YP{-3-3k}\YM{-2-3k} + \YM{-4-3k}\YP{-1-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(\cint{-3-3k},-2-3k)$ and $(-3-3k,\cint{-2-3k})$}\label{fortwoD} This follows from expansions of $R_{1}(-6k-5)$ and $R_{2}(-6k-5)$ below and Proposition \ref{forbhantei}. {\footnotesize \begin{align*} {} &{} R_1(-6k-5)/(3(1+2\omega)) \\ &\equiv_1 \YM{-3-3k}\YP{-2-3k} -(1+\omega)\YP{-3-3k}\YM{-2-3k} + \omega \YM{-4-3k}\YP{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((1-\omega)\YP{-2-3k}\YM{-2-3k} + (1+2\omega)\YM{-3-3k}\YP{-1-3k} -(2+\omega)\YP{-3-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-(2+\omega)\YP{-3-3k}\YM{-3-3k} + (1-\omega)\YM{-4-3k}\YP{-2-3k} + (1+2\omega)\YP{-4-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots, \end{align*} \begin{align*} {} &{} (R_1(-6k-5)+R_2(-6k-5))/9 \\ &\equiv_1 \YP{-3-3k}\YM{-2-3k} -\YM{-4-3k}\YP{-1-3k} -\YP{-4-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-2-3k}\YM{-2-3k} -2\YM{-3-3k}\YP{-1-3k} + \YP{-3-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-3-3k}\YM{-3-3k} + \YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(-3-3k,\cint{-3-3k},\cint{-1-3k})$ and $(-5-3k,-3-3k,\cint{-3-3k})$}\label{forthree} Using Corollary \ref{idounan}, we see that \begin{align*} Z_1(k)=\YP{-3-3k}R_{-}(-4-6k)/3-(R_1(-6k-5)+R_2(-6k-5))\YM{-2-3k}/9 \end{align*} is expanded as follows. 
{\tiny \begin{align*} {} &{} Z_1(k)\\ &\equiv_2 \YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YM{-4-3k}\YM{-2-3k}\YP{-1-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-2-3k}\YM{-2-3k}\YM{-2-3k} + 2\YM{-3-3k}\YM{-2-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-2-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YP{-3-3k}\YM{-3-3k}\YM{-2-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} + 2\YP{-4-3k}\YM{-2-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} The element $(-3-3k,\cint{-3-3k},\cint{-1-3k})$ is a forbidden pattern because of $Z_1(k)\equiv_2 0$ and that any $\YY$-monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ satisfies $(-3-3k,\cint{-3-3k},\cint{-1-3k})\LEX\boldsymbol{m}$ (see Proposition \ref{forbhantei}). Similarly, using Corollary \ref{idounan}, we see that \begin{align*} Z_2(k)=R_{+}(-8-6k)\YM{-3-3k}/3+\YP{-4-3k}(R_1(-7-6k)+R_2(-7-6k))/9 \end{align*} is expanded as follows. {\tiny \begin{align*} {} &{} Z_2(k)\\ &\equiv_2 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-4-3k}\YP{-4-3k}\YM{-4-3k} -2\YM{-5-3k}\YP{-4-3k}\YP{-3-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-4-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} By applying \begin{align*} 0\equiv_1 R_+(-8-6k)/3\equiv_1 \YP{-4-3k}\YP{-4-3k}+2\YP{-5-3k}\YP{-3-3k}+\cdots, \end{align*} to the coefficient of $E^+(1)$ in $Z_2(k)$, we see that $Z_2(k)\equiv_2 Z'_2(k)$, where {\tiny \begin{align*} {} &{} Z'_2(k)\\ &\equiv_2 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-3-3k}-2\YP{-5-3k}\YM{-4-3k}\YP{-3-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-4-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} By applying the same relation to the coefficient of $E^+(2)$ and by applying \begin{align*} 0\equiv_1 R_+(-7-6k)/6\equiv_1 \YP{-4-3k}\YP{-3-3k}+\YP{-5-3k}\YP{-2-3k}+\cdots, \end{align*} to the coefficient of $E^+(1)$ in $Z'_2(k)$ in preparation for \S\ref{forfour}, we see that $Z'_2(k)\equiv_2 Z''_2(k)$, where {\tiny \begin{align*} {} &{} Z''_2(k)\\ &\equiv_2 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YP{-5-3k}\YM{-4-3k}\YP{-3-3k}+\YP{-5-3k}\YP{-4-3k}\YM{-3-3k} + 2\YP{-5-3k}\YM{-5-3k}\YP{-2-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(\YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + 4\YP{-5-3k}\YM{-5-3k}\YP{-3-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} The element $(-5-3k,-3-3k,\cint{-3-3k})$ is a forbidden pattern because of $Z'_2(k)\equiv_2 Z_2(k)\equiv_2 0$ and that any $\YY$-monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ in $Z'_2(k)$ satisfies $(-5-3k,-3-3k,\cint{-3-3k})\LEX\boldsymbol{m}$ (see Proposition \ref{forbhantei}). 
\subsection{Forbiddenness of $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$}\label{forfour} By applying \begin{align*} 0\equiv_1 R_-(-5-6k)/6 \equiv_1 \YM{-3-3k}\YM{-2-3k}+\YM{-4-3k}\YM{-1-3k}+\cdots,\end{align*} to the coefficient of $E^+(1)$ in $Z_1(k)$, we see that $Z_1(k)\equiv_2 Z'_1(k)$, where {\tiny \begin{align*} {} &{} Z'_1(k)\\ &\equiv_2 \YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YM{-4-3k}\YM{-2-3k}\YP{-1-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-2-3k}\YM{-2-3k}\YM{-2-3k} + 2\YM{-3-3k}\YM{-2-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-2-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} +2\YP{-4-3k}\YM{-2-3k}\YM{-2-3k} + 4\YM{-4-3k}\YP{-3-3k}\YM{-1-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(-\YM{-4-3k}\YP{-3-3k}\YM{-2-3k} -\YP{-4-3k}\YM{-3-3k}\YM{-2-3k} + \YP{-4-3k}\YM{-4-3k}\YM{-1-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} Again, using Corollary \ref{idounan}, we see that \begin{align*} Z_3(k)=Z''_2(k)\YM{-1-3k}-\YP{-5-3k}Z'_1(k) \end{align*} is expanded as follows. {\tiny \begin{align*} {} &{} Z_3(k)\\ &\equiv_3 \YM{-5-3k}\YP{-4-3k}\YP{-2-3k}\YM{-1-3k} -\YP{-5-3k}\YM{-4-3k}\YM{-2-3k}\YP{-1-3k} +\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k}\YM{-1-3k} +\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} -2\YP{-5-3k}\YP{-4-3k}\YM{-2-3k}\YM{-2-3k} +\cdots)E^+(1)\\ &- \frac{1}{3}(\YP{-5-3k}\YM{-4-3k}\YP{-3-3k}\YM{-2-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}\YM{-2-3k} +\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} The element $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$ is a forbidden pattern because of $Z_3(k)\equiv_3 0$ and that any $\YY$-monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ satisfies $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})\LEX\boldsymbol{m}$ (see Proposition \ref{forbhantei}). \section{An automatic derivation of $q$-difference equations via ~\cite{TT}}\label{auto} Let $\Sigma$ be a nonempty finite set, called an alphabet in the context of formal language theory. We denote by $\Sigma^\ast$ the set of finite words $\bigsqcup_{n\geq 0}\Sigma^n$ of $\Sigma$. A word $(w_1,\dots,w_n)\in\Sigma^n$ of length $n$ is written as $w_1\cdots w_n$. By the word concatenation and the empty word $\EMPTYWORD$, we regard the set $\Sigma^{\ast}$ as a free monoid generated by $\Sigma$. For $X,Y\subseteq\Sigma^{\ast}$, we put $XY=\{wv\mid w\in X,v\in Y\}$. \begin{Def} A deterministic finite automaton (DFA, for short) over $\Sigma$ is a 5-tuple $(Q,\Sigma,\delta,s,F)$, where $Q$ is a finite set (called the set of states), $\delta:Q\times \Sigma\to Q$ is a function (called the transition function), $s$ is an element of $Q$ (called the start state) and $F$ is a subset of $Q$ (called the set of accept states). \end{Def} For a DFA $M=(Q,\Sigma,\delta,s,F)$, the language recognized by $M$ is defined as \begin{align*} L(M)=\{w\in\Sigma^{\ast}\mid \EXTE{\delta}(s,w)\in F\}, \end{align*} where $\EXTE{\delta}(q,w)$ is defined inductively by $\EXTE{\delta}(q,\EMPTYWORD)=q$ and $\EXTE{\delta}(q,w_1v)=\EXTE{\delta}(\delta(q,w_1),v)$ for $q\in Q, w_1\in\Sigma$ and $w,v\in\Sigma^{\ast}$. We denote the set of 2-colored partitions (see \S\ref{mainse}) by $\TPAR$. 
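Before specializing to the alphabet $I$ and the map $\pi$ below, it may be helpful to spell out these definitions in code. The following Python sketch is a toy illustration of ours (it is independent of the GAP computation in Example \ref{gap}); it implements $\EXTE{\delta}$ and the acceptance test for a two-letter DFA recognizing $\Sigma^{\ast}\{\mathtt{bb}\}\Sigma^{\ast}$, the analogue of $I^{\ast}JI^{\ast}$ for $J=\{\mathtt{bb}\}$.
\begin{small}\begin{verbatim}
# Toy illustration: a DFA is (Q, Sigma, delta, s, F), with delta a dict.
def delta_hat(delta, state, word):
    # extended transition function: feed the word letter by letter
    for a in word:
        state = delta[(state, a)]
    return state

def accepts(dfa, word):
    Q, Sigma, delta, s, F = dfa
    return delta_hat(delta, s, word) in F

# states: 0 = no progress, 1 = just read 'b', 2 = factor 'bb' found (accept)
delta = {(0, 'a'): 0, (0, 'b'): 1,
         (1, 'a'): 0, (1, 'b'): 2,
         (2, 'a'): 2, (2, 'b'): 2}
M = ({0, 1, 2}, {'a', 'b'}, delta, 0, {2})
# accepts(M, "abba") is True; accepts(M, "ababab") is False
\end{verbatim}\end{small}
With the transition table of the minimum DFA given below, the same two functions can be used to confirm, for instance, that $\EXTE{\delta}(\JOO{0},\mym\myb)=\JOO{1}\in F$, in accordance with $\mym\myb\in J$.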
Let $I=\{\mya,\myb,\dots,\mym\}$ and consider the map $\pi:I\to \TPAR$ defined by (see also Example \ref{exbipar}) \begin{align*} \mya\mapsto\emptypar,\quad \myb\mapsto(1),\quad \myc\mapsto(\cint{1}),\quad \myd\mapsto(2),\quad \mye\mapsto(\cint{2}),\quad \myf\mapsto(3),\quad \myg\mapsto(\cint{3}),\\ \myh\mapsto(2,\cint{1}),\quad \myi\mapsto(\cint{2},1),\quad \myj\mapsto(3,1),\quad \myk\mapsto(3,\cint{1}),\quad \myl\mapsto(\cint{3},\cint{1}),\quad \mym\mapsto(3,\cint{3}). \end{align*} Let $\SEQ(I)=I^{\mathbb{Z}_{\geq 1}}(=\{(i_1,i_2,\dots)\mid \textrm{$i_j\in I$ for $j\geq 1$}\})$. We denote by $\SEQ(I,\pi)$ the subset of $\SEQ(I)$, which consists of $\boldsymbol{i}=(i_1,i_2,\dots)$ such that $\{k\geq 1\mid i_k\ne\mya\}$ is a finite set. For a nonnegative integer $t$, we define $\SHIFT_t(\lambda)=(\lambda_1+t,\dots,\lambda_{\ell}+t)$ (see Corollary \ref{lexcomp4}) for a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$. Then, the map \begin{align*} \PAI:\SEQ(I,\pi)\to \TPAR, \end{align*} which sends $\boldsymbol{i}=(i_1,i_2,\dots)\in \SEQ(I,\pi)$ to the smallest (with respect to the length) 2-colored partition that contains $\SHIFT_{3k-3}(\pi(i_k))$ for any $k\geq 1$ such that $i_k\ne\mya$ is well-defined (see ~\cite[Definition 2.5.(2)]{TT}). It is clearly an injection. \begin{Ex} $\PAI(\boldsymbol{i})=(\cint{11},10,\cint{8},3,\cint{3})$ for $\boldsymbol{i}=(\mym,\mya,\mye,\myi,\mya,\mya,\dots)$. \end{Ex} The translation result below is verified elementarily, and we omit a proof. \begin{Prop}\label{iikae} For $\boldsymbol{i}\in\SEQ(I,\pi)$, the condition $\PAI(\boldsymbol{i})\in\BIR$ is equivalent to the condition that $\boldsymbol{i}$ does not contain any of the word in $J$, where \begin{align*} J &= \{\myf\myb,\myg\myb,\myj\myb,\myk\myb,\myl\myb,\mym\myb,\myl\myc,\mym\myc,\myj\myc,\myk\myc,\myf\myc,\myg\myc, \mym\myd,\myf\mye,\myj\mye,\myk\mye,\mym\mye,\myh\myi,\myf\myi,\myg\myi,\myj\myi,\myk\myi,\myl\myi,\mym\myi,\\ &\quad\quad \myf\myh,\myg\myh,\myj\myh,\myk\myh,\myl\myh,\mym\myh,\myf\myj,\myg\myj,\myj\myj,\myk\myj,\myl\myj,\mym\myj, \myf\myk,\myg\myk,\myj\myk,\myk\myk,\myl\myk,\mym\myk,\myf\myl,\myg\myl,\myj\myl,\myk\myl,\myl\myl,\mym\myl \}. \end{align*} Moreover, any $\lambda\in\BIR$ is obtained in this way (i.e., $\lambda=\PAI(\boldsymbol{i})$ for some $\boldsymbol{i}\in\SEQ(I,\pi)$). \end{Prop} Let $M=(Q,I,\delta,s,F)$ be the minimum DFA (with respect to the number of states) over $I$ such that $L(M)=I^{\ast}JI^{\ast}$. It is computable by the standard algorithms in formal language theory (see ~\cite[Appendix A]{TT} for a review). By running them (see Example \ref{gap} below), we see automatically that $Q=\{\JOO{0},\dots,\JOO{5}\}$, $s=\JOO{0}$, $F=\{\JOO{1}\}$ and that $\delta(q,j)$ for $q\in Q, j\in I$ is given by the table below. 
\begin{center} \begin{tabular}{r|rrrrrrrrrrrrr} $q\backslash j$ & $\mya$ & $\myb$ & $\myc$ & $\myd$ & $\mye$ & $\myf$ & $\myg$ & $\myh$ & $\myi$ & $\myj$ & $\myk$ & $\myl$ & $\mym$ \\ \hline \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{5} & \JO{0} & \JO{2} & \JO{2} & \JO{4} & \JO{3} \\ \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} \\ \JO{2} & \JO{0} & \JO{1} & \JO{1} & \JO{0} & \JO{1} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{3} & \JO{0} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{4} & \JO{0} & \JO{1} & \JO{1} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{5} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{5} & \JO{1} & \JO{2} & \JO{2} & \JO{4} & \JO{3} \end{tabular} \end{center} \begin{Ex}\label{gap} An execution for obtaining the minimum DFA using a GAP package Automata~\cite{DHLM} is as follows. \begin{small} \noindent\verb|A:=RationalExpression("fbUgbUjbUkbUlbUmbUlcUmcUjcUkcUfcUgcUmdUfeUjeUkeUmeUhiUfiUgiUjiUkiUliU| \noindent\verb|miUfhUghUjhUkhUlhUmhUfjUgjUjjUkjUljUmjUfkUgkUjkUkkUlkUmkUflUglUjlUklUllUml","abcdefghijklm");| \noindent\verb|Is:=RationalExpression("(aUbUcUdUeUfUgUhUiUjUkUlUm)*","abcdefghijklm");| \noindent\verb|r:=ProductRatExp(Is,ProductRatExp(A,Is)); M:=RatExpToAut(r); Display(M);| \end{small} \end{Ex} For each $q\in Q\setminus F(=\{\JOO{0},\JOO{2},\JOO{3},\JOO{4},\JOO{5}\})$, we define \begin{align*} \LC{q}=\PAI(\SEQ(I,\pi)\cap (L(M_q)^{c})^{\wedge}), \end{align*} where, for $A\subseteq I^{\ast}$, we put (see \cite[\S3.2]{TT}) \begin{align*} A^{\wedge} = \{\boldsymbol{i}\in\SEQ(I)\mid i_1\cdots i_n\in A\textrm{ for all }n\geq 1\}, \end{align*} and, for a DFA $M=(I,\Sigma,\delta,s,F)$ and $v\in I$, we put $M_v=(I,\Sigma,\delta,v,F)$, i.e., the DFA obtained from $M$ by changing the start state to $v$ (see \cite[Definition 3.11]{TT}). \begin{Rem} By Proposition \ref{iikae}, we have $\LC{q_0}=\BIR$. Moreover, we accidentally have $\LC{q_2}=\BIRP$ thanks to the following two facts. \begin{enumerate} \item For $\boldsymbol{i}\in\SEQ(I,\pi)$ such that $\PAI(\boldsymbol{i})\in\BIR$, the condition $\PAI(\boldsymbol{i})\in\BIRP$ is equivalent to the condition $i_1\ne \myb,\myc,\mye,\myh,\myi,\myj,\myk,\myl$, where $\boldsymbol{i}=(i_1,i_2,\dots)$. \item For $t\in\Sigma$, the condition $\delta(\JOO{2},t)\not\in F$ is equivalent to the condition $t\ne \myb,\myc,\mye,\myh,\myi,\myj,\myk,\myl$. Moreover, $\delta(\JOO{2},\mya)=\JOO{0}$. \end{enumerate} \end{Rem} By ~\cite[Theorem 3.17]{TT}, we have \begin{align*} f_{\LC{q}}(x,q)=\sum_{a\in I, \delta(q,a)\not\in F} x^{\ell(\pi(a))}q^{|\pi(a)|}f_{\LC{\delta(q,a)}}(xq^3,q). \end{align*} Putting $F_i(x)=f_{\LC{\JOO{i}}}(x,q)$, we have the simultaneous $q$-difference equation \begin{align*} \begin{pmatrix} F_0(x) \\ F_2(x) \\ F_3(x) \\ F_4(x) \\ F_5(x) \end{pmatrix} = \begin{pmatrix} 1+2xq+2xq^2+x^2q^3 & xq^3+2x^2q^4 & x^2q^6 & xq^3+x^2q^4 & x^2q^3 \\ 1+xq^2 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1+2xq^2 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1+2xq+2xq^2 & xq^3+2x^2q^4 & x^2q^6 & xq^3+x^2q^4 & x^2q^3 \\ \end{pmatrix} \begin{pmatrix} F_0(xq^3) \\ F_2(xq^3) \\ F_3(xq^3) \\ F_4(xq^3) \\ F_5(xq^3) \end{pmatrix}. \end{align*} By the Murray-Miller algorithm (see ~\cite[Appendix B]{TT}), we get the following. 
\begin{Prop}\label{PropqdiffR} The generating functions $f_{\BIR}(x,q)$ and $f_{\BIRP}(x,q)$ satisfy the following $q$-difference equations respectively. \tiny \begin{align*} {} &{} (2+3xq^{4}+xq^{6})f_{\BIR}(x,q)\\ &= (2+4xq+4xq^{2}+4xq^{3}+3xq^{4}+xq^{6}+4x^{2}q^{3} +6x^{2}q^{4}+6x^{2}q^{5}+8x^{2}q^{6}+2x^{2}q^{7} +2x^{2}q^{8}+2x^{2}q^{9}+6x^{3}q^{7}+2x^{3}q^{9}+3x^{3}q^{10}+x^{3}q^{12})f_{\BIR}(xq^{3},q)\\ &- x^{2}q^{7}(2+2q+3xq+4xq^{2}+xq^{3}+4xq^{4}-xq^{5}+4xq^{6} +xq^{7}+2x^{2}q^{5}+6x^{2}q^{7}+6x^{2}q^{8}+2x^{2}q^{9}+2x^{2}q^{10}+4x^{2}q^{11}+3x^{3}q^{9} +x^{3}q^{11}\\ &+6x^{3}q^{12}+2x^{3}q^{14})f_{\BIR}(xq^{6},q)+ x^{4}q^{21}(1-xq^6)^2(2+3xq+xq^{3})f_{\BIR}(xq^{9},q),\\ {} &{} (1+xq^{5}+xq^{7})f_{\BIRP}(x,q) \\ &+(1+xq^{2}+2xq^{3}+2xq^{4}+2xq^{5}+xq^{7}+3x^{2}q^{6}+2x^{2}q^{7}+3x^{2}q^{8}+3x^{2}q^{9} +2x^{2}q^{10}+x^{2}q^{11}+x^{2}q^{12}+2x^{3}q^{11}+2x^{3}q^{13}+x^{3}q^{14}+x^{3}q^{16})f_{\BIRP}(xq^{3},q)\\ &-x^{2}q^{10}(1+q+xq^{2}+xq^{3}+2xq^{4}+2xq^{6}+xq^{7}+xq^{8}-x^{2}q^{7}+2x^{2}q^{8}+x^{2}q^{9} +2x^{2}q^{10}+3x^{2}q^{11}+x^{2}q^{12}+x^{2}q^{13}+2x^{2}q^{14}+x^{3}q^{13}+x^{3}q^{15}\\ &+2x^{3}q^{16}+2x^{3}q^{18})f_{\BIRP}(xq^{6},q)+x^{4}q^{27}(1-xq^6)(1-xq^9)(1+xq^{2}+xq^{4})f_{\BIRP}(xq^{9},q) \end{align*} \normalsize \end{Prop} \section{The cylindric partitions}\label{cylin} Let $r\geq 1$. Recall that a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$ is a family of nonnegative integers (indexed by $\mathbb{Z}/r\mathbb{Z}$). Recall also that a cylindric partition~\cite{GK} $\boldsymbol{\lambda}=(\lambda^{(i)})_{i\in\mathbb{Z}/r\mathbb{Z}}$ of a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$ is a family of partitions such that $\lambda^{(i)}_j\geq \lambda^{(i+1)}_{j+c_{i+1}}$ for $i\in\mathbb{Z}/r\mathbb{Z}$ and $j\geq 1$. As usual, we put $\mu_k=0$ for a partition $\mu$ if $k>\ell(\mu)$. We define \begin{align*} F_{\boldsymbol{c}}(x,q)=\sum_{\boldsymbol{\lambda}\in\CP{\boldsymbol{c}}}x^{\max(\boldsymbol{\lambda})}q^{\SUM{\boldsymbol{\lambda}}},\quad G_{\boldsymbol{c}}(x,q)=(xq;q)_{\infty}F_{\boldsymbol{c}}(x,q), \end{align*} where $\CP{\boldsymbol{c}}$ be the set of cylindric partitions of profile $\boldsymbol{c}$, $\max(\boldsymbol{\lambda})=\max\{\lambda^{(i)}_1,\dots,\lambda^{(i)}_{\ell(\lambda^{(i)})}\mid i\in\mathbb{Z}/r\mathbb{Z}\}$ and $\SUM{\boldsymbol{\lambda}}=\sum_{i\in\mathbb{Z}/r\mathbb{Z}}|\lambda^{(i)}|$ for a cylindric partition $\boldsymbol{\lambda}$. \begin{Thm}[{\cite[Proposition 5.1]{Bor}}]\label{borp} For a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$, we have \begin{align*} F_{\boldsymbol{c}}(1,q)=\frac{1}{(q;q)_{\infty}}\chi_{A^{(1)}_{r-1}}(\sum_{i\in\mathbb{Z}/r\mathbb{Z}}c_i\Lambda_i). \end{align*} \end{Thm} \begin{Thm}[{\cite[Proposition 3.1$+$(3.5)]{CW}}]\label{cwrec} For a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$, we have \begin{align*} G_{\boldsymbol{c}}(x,q)=\sum_{\emptyset\ne J\subseteq I_{\boldsymbol{c}}}(-1)^{|J|-1}(xq;q)_{|J|-1}G_{\boldsymbol{c}(J)}(xq^{|J|}), \end{align*} where $I_{\boldsymbol{c}}=\{i\in\mathbb{Z}/r\mathbb{Z}\mid c_i>0\}$ and $\boldsymbol{c}(J)=(c_i(J))_{i\in\mathbb{Z}/r\mathbb{Z}}$ is defined by \begin{align*} c_i(J)= \begin{cases} c_i-1 & \textrm{if $i\in J$ and $(i-1)\not\in J$},\\ c_i+1 & \textrm{if $i\not\in J$ and $(i-1)\in J$},\\ c_i & \textrm{otherwise}. 
\end{cases} \end{align*} \end{Thm} In this paper, when a profile is written in one-line format $(c_0,\dots,c_{r-1})$, the nonnegative integer $c_i$ is the one indexed by $i+r\mathbb{Z}\in\mathbb{Z}/r\mathbb{Z}$ in the profile. \begin{Prop}\label{PropqdiffG} The generating functions $G_{(3,0,0)}(x,q)$ and $G_{(1,1,1)}(x,q)$ satisfy the following $q$-difference equations respectively. \begin{align*} {} &(1+xq^5)G_{(3,0,0)}(x,q)\\ &=(1+xq^{2}+2xq^{3}+2xq^{4}+2xq^{5}+2x^{2}q^{6}+2x^{2}q^{7}+2x^{2}q^{8}+x^{2}q^{9}+x^{3}q^{11})G_{(3,0,0)}(xq^{3},q)\\ &\quad +xq^6(1+xq^2)(1-xq^4)(1-xq^5)G_{(3,0,0)}(xq^{6},q),\\ {} &(1+xq^4)G_{(1,1,1)}(x,q)\\ &= (1+2xq+2xq^{2}+2xq^{3}+xq^{4}+x^{2}q^{3}+2x^{2}q^{4}+2x^{2}q^{5}+2x^{2}q^{6}+x^{3}q^{7})G_{(1,1,1)}(xq^{3},q)\\ &\quad +xq^{3}(1+xq)(1-xq^4)(1-xq^5)G_{(1,1,1)}(xq^6,q). \end{align*} \end{Prop} \begin{proof}\label{cwrecex} By the Corteel-Welsh recursion formula (Theorem \ref{cwrec}), we have \begin{align*} G_{(3,0,0)}(x) &= G_{(2,1,0)}(xq), \\ G_{(2,1,0)}(x) &= 2G_{(2,0,1)}(xq)-(1-xq)G_{(1,1,1)}(xq^2), \\ G_{(2,0,1)}(x) &= G_{(3,0,0)}(xq)+G_{(1,1,1)}(xq)-(1-xq)G_{(2,1,0)}(xq^2), \\ G_{(1,1,1)}(x) &= 3G_{(2,1,0)}(xq)-3(1-xq)G_{(2,0,1)}(xq^2)+(1-xq)(1-xq^2)G_{(1,1,1)}(xq^3). \end{align*} Thus, we easily see \begin{align} \begin{pmatrix} G_{(3,0,0)}(x) \\ G_{(1,1,1)}(x) \end{pmatrix} = \begin{pmatrix} 2xq^3 & 1+xq^2 \\ 3xq^3(1+xq) & 1+2xq+2xq^2+x^2q^3 \end{pmatrix} \begin{pmatrix} G_{(3,0,0)}(xq^3) \\ G_{(1,1,1)}(xq^3) \end{pmatrix}. \label{Grel} \end{align} By the Murray-Miller algorithm (see ~\cite[Appendix B]{TT}), we get the result. \end{proof} \section{Andrews-Gordon type series}\label{ags} \subsection{Certificate recurrences}\label{certe} Let $r\geq 2$ and we denote the set of maps from $\mathbb{Z}^r$ to $\mathbb{Q}(q)$ by $\MAP(\mathbb{Z}^r,\mathbb{Q}(q))$, with the variables $n,k_2,\dots,k_r$ in this order. Let $f\in \MAP(\mathbb{Z}^r,\mathbb{Q}(q))$. The shift operators $\ENU$, $\KEI{2},\dots,\KEI{r}$ are defined by \begin{align*} \ENU f(n,k_2,\dots,k_r) &= f(n-1,k_2,\dots,k_r),\\ \KEI{i} f(n,k_2,\dots,k_r) &= f(n,k_2,\dots,k_{i-1},k_i-1,k_{i+1},\dots,k_r) \end{align*} for $2\leq i\leq r$. As usual, we say that $f$ is summable if $\{(k_2,\dots,k_r)\in\mathbb{Z}^{r-1}\mid f(n,k_2,\dots,k_r)\ne 0\}$ is a finite set for any $n\in\mathbb{Z}$. In this case, \begin{align} f_n=\sum_{k_2,\dots,k_r\in\mathbb{Z}}f(n,k_2,\dots,k_r) \label{efuenu} \end{align} is well-defined. Let $\NONC$ be a $\mathbb{Q}(q)$-subalgebra in $\END_{\textrm{$\mathbb{Q}(q)$-lin}}(\MAP(\mathbb{Z}^r,\mathbb{Q}(q)))$ generated by the shift operators $\ENU$, $\KEI{2},\dots,\KEI{r}$ and the multiplications $q^n$, $q^{k_2},\dots,q^{k_r}$. Note that in $\NONC$ we have \begin{align} \begin{split} \ENU\KEI{i}=\KEI{i}\ENU,\quad q^{k_i}\ENU=\ENU q^{k_i},\quad q^{n}\KEI{i}=\KEI{i} q^{n},\\ \ENU q^{n} = q^{n-1}\ENU,\quad \KEI{j} q^{k_i} = q^{k_i-\delta_{i,j}}\KEI{j},\quad q^{n}q^{k_i}=q^{k_i}q^{n}, \end{split} \label{noncrel} \end{align} for $2\leq i,j\leq r$. We denote by $\mathbb{Q}(q)[q^n]$ the $\mathbb{Q}(q)$-subalgebra generated by $q^n$ in $\NONC$, which is clearly a polynomial $\mathbb{Q}(q)$-algebra generated by $q^n$. The following is standard in deriving a recurrence relation (see ~\cite[\S3]{Rie}). For completeness, we duplicate a proof. 
\begin{Prop}\label{certprop} Assume $f\in \MAP(\mathbb{Z}^r,\mathbb{Q}(q))$ is summable and there exist $J\geq 0$, $p_0,\dots,p_J\in\mathbb{Q}(q)[q^n]$ and $C_2,\dots,C_r\in\NONC$ such that \begin{align} \Big(\sum_{j=0}^{J}p_j\ENU^j + \sum_{i=2}^{r}(1-\KEI{i})C_i\Big)f=0. \label{certr} \end{align} Then, we have a $q$-holonomic recurrence (see \eqref{efuenu}) $p_0f_n+p_1f_{n-1}+\dots+p_Jf_{n-J}=0$. \end{Prop} \begin{proof} Let $G_i=C_if$ for $2\leq i\leq r$ and note that each $G_i$ is summable. For $n\in\mathbb{Z}$, take $M>0$ so that $(k_2,\dots,k_r)\not\in\{-M,-M+1,\dots,M-1,M\}^{r-1}$ implies \begin{align*} 0=f(n,k_2,\dots,k_r)=\cdots=f(n-J,k_2,\dots,k_r)=G_2(n,k_2,\dots,k_r)=\dots=G_r(n,k_2,\dots,k_r). \end{align*} Applying the summation $\sum_{k_2=-M}^{M+1}\dots\sum_{k_r=-M}^{M+1}$ to $\sum_{j=0}^{J}p_jf(n-j,k_2,\dots,k_r)$, which is equal to $-\sum_{i=2}^{r}(G_i(n,k_2,\dots,k_i,\dots,k_r)-G_i(n,k_2,\dots,k_i-1,\dots,k_r))$, we get the result, since the right-hand side telescopes to zero. \end{proof} \subsection{Automatic derivation of $q$-difference equations via $q$-Sister Celine}\label{sister} The goal of this subsection is to give a brief review of an automatic calculation of a $q$-difference equation of an Andrews-Gordon type series \begin{align} H(x,q) =\sum_{n\in\mathbb{Z}}h_n(q)x^n =\sum_{i_1,\dots,i_{\ell}\geq 0} \frac{q^{h(i_1,\dots,i_{\ell})+f_1i_1+\dots+f_{\ell}i_{\ell}}}{(q^{d_1};q^{d_1})_{i_1}\cdots(q^{d_{\ell}};q^{d_{\ell}})_{i_{\ell}}}x^{c_1i_1+\dots+c_{\ell}i_{\ell}}, \label{agh} \end{align} where $\ell\geq 2$, $f_1,\dots,f_\ell$ are integers, $c_1,\dots,c_\ell,d_1,\dots,d_{\ell}$ are positive integers such that $c_1=1$, and $h(i_1,\dots,i_{\ell})$ is a quadratic term, which takes an integer value for any $(i_1,\dots,i_{\ell})\in\mathbb{Z}^{\ell}$. The algorithm is a typical application of the $q$-version of the Sister Celine technique~\cite{Rie} to a $q$-proper hypergeometric term~\cite[Definition 2.3]{Rie} \begin{align*} F(n,i_2,\dots,i_{\ell})= \frac{q^{G(n,i_2,\dots,i_{\ell})+z_1n+z_2i_2+\dots+z_{\ell}i_{\ell}}}{(q^{d_1};q^{d_1})_{n-(c_2i_2+\dots+c_{\ell}i_{\ell})}(q^{d_2};q^{d_2})_{i_2}\cdots(q^{d_{\ell}};q^{d_{\ell}})_{i_{\ell}}}, \end{align*} where the quadratic term $G$ and $z_1,\dots,z_{\ell}\in\mathbb{Z}$ are determined by the equation \begin{align*} G(n,i_2,\dots,i_{\ell})+z_1n+ z_2i_2+\dots+z_{\ell}i_{\ell} =h(i_1,i_2,\dots,i_{\ell})+f_1i_1+f_2i_2+\dots+f_{\ell}i_{\ell}, \end{align*} and $i_1=n-(c_2i_2+\dots+c_{\ell}i_{\ell})$. Note that $F(n,i_2,\dots,i_{\ell})$ is well-defined for any arguments by the convention $\frac{1}{(q^d;q^d)_{-m}}=0$ for positive integers $d$ and $m$ (see ~\cite[p.~3]{Yen} and ~\cite[\S2.1]{Rie}). In the following, we expand the quadratic term $G$ as \begin{align*} G(a,b_2,\dots,b_{\ell}) =\sum_{t=1}^{\ell}g_{t,t}{b_t\choose 2} +\sum_{1\leq t<u\leq \ell}g_{t,u}b_tb_u, \end{align*} where we put $b_1=a$. We also put $g_{u,t}=g_{t,u}$ for $1\leq t<u\leq\ell$ for convenience.
\begin{Def}[{\cite[Definition 2.5]{Rie}}] For a finite subset $S\subseteq \mathbb{Z}_{\geq 0}^{\ell}$ such that $(0,\dots,0)\in S$ and for $(i,j_2,\dots,j_{\ell})\in S$, let a Laurent polynomial $T^{(S)}_{i,j_2,\dots,j_{\ell}}\in\mathbb{Q}(q)[Q_1,Q_1^{-1},\dots,Q_{\ell},Q_{\ell}^{-1}]$ be \begin{align*} T^{(S)}_{i,j_2,\dots,j_{\ell}} &= q^{-z_1i-z_2j_2-z_{\ell}j_{\ell}} (q^{-M_Sd_1}Q_1^{d_1}Q_2^{-c_2d_1}\cdots Q_{\ell}^{-c_{\ell}d_1};q^{-d_1})_{i-c_2j_2-\dots -c_{\ell}j_{\ell}-M_S}\\ &\quad \Big(\prod_{t=2}^{\ell}(Q_t^{d_t};q^{-d_t})_{j_{t}}\Big) \Big(q^{\sum_{t=1}^{\ell}g_{t,t}{j_t+1\choose 2}+\sum_{1\leq t<u\leq \ell}g_{t,u}j_tj_{u}}\Big) \Big(\prod_{t=1}^{\ell}Q_t^{-\sum_{u=1}^{\ell} g_{t,u}j_{u}}\Big), \end{align*} where $j_1=i$ and $M_S=\min\{i-c_2j_2-\dots -c_{\ell}j_{\ell}\mid (i,j_2,\dots,j_{\ell})\in S\}$. \end{Def} \begin{Ex}\label{exsister} For an Andrews-Gordon type series \begin{align*} H(x,q)= \sum_{s,t\geq 0} \frac{q^{s^2+2t^2+2st}}{(q^2;q^2)_s(q^2;q^2)_t}x^{s+t}, \end{align*} we take a $q$-proper hypergeometric term \begin{align*} F(n,i_2) = \frac{q^{(n-i_2)^2+2i_2^2+2(n-i_2)i_2}}{(q^2;q^2)_{n-i_2}(q^2;q^2)_{i_2}} = \frac{q^{2{n\choose 2}+2{i_2\choose 2}}}{(q^2;q^2)_{n-i_2}(q^2;q^2)_{i_2}}q^{n}q^{i_2}, \end{align*} i.e., $z_1=z_2=1$, $g_{1,1}=g_{2,2}=2$, $g_{1,2}=0(=g_{2,1})$, and $c_1=c_2=1$, $d_1=d_2=2$. For $S=\{(0,0),(0,1),(1,0),(1,1)\}$, we have $M_S=-1$ and \begin{align*} T^{(S)}_{0,0} = 1-q^2Q_1^2Q_2^{-2},\quad T^{(S)}_{1,0} = q^{-1}(1-q^2Q_1^2Q_2^{-2})(1-Q_1^2Q_2^{-2})q^{2}Q_1^{-2},\\ T^{(S)}_{0,1} = q^{-1}(1-Q_2^2)q^2Q_2^{-2},\quad T^{(S)}_{1,1} = q^{-2}(1-q^2Q_1^2Q_2^{-2})(1-Q_2^2)q^{4}Q_1^{-2}Q_2^{-2}. \end{align*} \end{Ex} \begin{Prop}\label{kurikaeshi} Assume that, in $\mathbb{Q}(q)[Q_1^{\pm1},\dots,Q_{\ell}^{\pm1}]$, we have \begin{align} \sum_{(i,j_2,\dots,j_{\ell})\in S} \sigma_{i,j_2,\dots,j_{\ell}}T^{(S)}_{i,j_2,\dots,j_{\ell}} =0 \label{algoeqmain} \end{align} for an $S$-tuple of Laurent polynomials $(\sigma_{i,j_2,\dots,j_{\ell}})_{(i,j_2,\dots,j_{\ell})\in S}\in(\mathbb{Q}(q)[Q_1^{\pm1},Q_2^{\pm1},\dots,Q_{\ell}^{\pm1}])^{S}$. Then, in $\mathbb{Q}(q)$, we have, for $n,i_2,\dots,i_{\ell}\in\mathbb{Z}$, \begin{align*} \sum_{(i,j_2,\dots,j_{\ell})\in S}\sigma_{i,j_2,\dots,j_{\ell}}(q^n,q^{i_2},\dots,q^{i_{\ell}}) F(n-i,i_2-j_2,\dots,i_{\ell}-j_{\ell})=0. \end{align*} \end{Prop} \begin{proof} Note that the left hand side of \eqref{algoeqmain} is a slightly expanded version of $L_{F,S}$ in ~\cite{Rie} (see ~\cite[Definition 2.5]{Rie}), based on the assumptions of $S$. The claim is exactly a combination of ~\cite[Theorem 2.1]{Rie} and ~\cite[Lemma 2.1]{Rie} (and the remark following ~\cite[Lemma 2.1]{Rie}). \end{proof} For a Laurent polynomial $\tau\in\mathbb{Q}(q)[Q_1,Q_1^{-1},\dots,Q_{\ell},Q_{\ell}^{-1}]$, we expand as \begin{align*} \tau= \sum_{(p_2,\dots,p_{\ell})\in\mathbb{Z}^{\ell-1}} \tau^{(p_2,\dots,p_{\ell})}Q_2^{p_2}\dots Q_{\ell}^{p_{\ell}}, \end{align*} where $\tau^{(p_2,\dots,p_{\ell})}\in\mathbb{Q}(q)[Q_1,Q_1^{-1}]$, which is nonzero for finitely many $(p_2,\dots,p_{\ell})$. 
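The Laurent polynomials $T^{(S)}_{i,j_2,\dots,j_{\ell}}$ are easy to generate by computer. The following Python/SymPy sketch is our own illustration (the variable and function names are ad hoc); it only re-evaluates the definition above for the data of Example \ref{exsister}, and the commented line reproduces the certificate relation used in Example \ref{reicertexp} below.
\begin{small}\begin{verbatim}
# Illustration only (ad hoc names): the T^{(S)}_{i,j2} of Example exsister.
import sympy as sp

q, Q1, Q2 = sp.symbols('q Q1 Q2')
z1, z2 = 1, 1                        # z_1, z_2
g11, g22, g12 = 2, 2, 0              # quadratic form data g_{t,u}
c2, d1, d2 = 1, 2, 2
S = [(0, 0), (0, 1), (1, 0), (1, 1)]
M_S = min(i - c2*j2 for i, j2 in S)  # = -1

def qpoch(a, base, n):
    # (a; base)_n for n >= 0
    out = sp.Integer(1)
    for k in range(n):
        out *= 1 - a*base**k
    return out

def T(i, j2):
    pref = q**(-z1*i - z2*j2)
    first = qpoch(q**(-M_S*d1)*Q1**d1*Q2**(-c2*d1), q**(-d1), i - c2*j2 - M_S)
    second = qpoch(Q2**d2, q**(-d2), j2)
    quad = q**(g11*sp.binomial(i+1, 2) + g22*sp.binomial(j2+1, 2) + g12*i*j2)
    mono = Q1**(-(g11*i + g12*j2)) * Q2**(-(g12*i + g22*j2))
    return sp.expand(pref*first*second*quad*mono)

# T(0, 0) gives 1 - q**2*Q1**2/Q2**2, and
# sp.simplify(T(0,0)*q**2*(1-Q1**2)/Q1**4 - T(1,0)*q/Q1**2 - T(1,1)) == 0
# is the certificate relation appearing in Example reicertexp.
\end{verbatim}\end{small}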
\begin{Cor}\label{shubu} In Proposition \ref{kurikaeshi}, we further assume that we have \begin{align} \sum_{(j_2,\dots,j_{\ell})\in S_i} \sigma^{(p_2,\dots,p_{\ell})}_{i,j_2,\dots,j_{\ell}}q^{j_2p_2+\dots+j_{\ell}p_{\ell}}=0 \label{addass} \end{align} in $\mathbb{Q}(q)[Q_1,Q_1^{-1}]$ for each $i\in\mathbb{Z}$ and $(p_2,\dots,p_{\ell})\in\mathbb{Z}^{\ell-1}\setminus\{(0,\dots,0)\}$, where \begin{align*} S_i = \{(j_2,\dots,j_{\ell})\in\mathbb{Z}^{\ell-1}\mid (i,j_2,\dots,j_{\ell})\in S\}. \end{align*} We have a $q$-holonomic recurrence \begin{align} 0=\sum_{i\in\mathbb{Z}}h_{n-i}\sum_{(j_2,\dots,j_{\ell})\in S_i} \sigma^{(0,\dots,0)}_{i,j_2,\dots,j_{\ell}}. \label{reclocal} \end{align} \end{Cor} \begin{proof} Recall $\NONC$ in \S\ref{certe} and put $r=\ell$. By Proposition \ref{kurikaeshi}, we have \begin{align*} \sum_{(i,j_2,\dots,j_{\ell})\in S}\sigma_{i,j_2,\dots,j_{\ell}}(q^n,q^{k_2},\dots,q^{k_{\ell}}) \ENU^{i}\KEI{2}^{j_2}\cdots\KEI{\ell}^{j_{\ell}} =0 \end{align*} in $\NONC$. Note that the left hand side is equal to \begin{align*} \sum_{(i,j_2,\dots,j_{\ell})\in S} \sum_{(p_2,\dots,p_{\ell})\in \mathbb{Z}^{\ell-1}} \sigma^{(p_2,\dots,p_{\ell})}_{i,j_2,\dots,j_{\ell}}(q^n)q^{p_2k_2+\dots+p_{\ell}k_{\ell}}\ENU^{i}\KEI{2}^{j_2}\cdots\KEI{\ell}^{j_{\ell}}, \end{align*} which is, by \eqref{noncrel}, equal to \begin{align} \sum_{\substack{(i,j_2,\dots,j_{\ell})\in S \\ (p_2,\dots,p_{\ell})\in \mathbb{Z}^{\ell-1}}} \sigma^{(p_2,\dots,p_{\ell})}_{i,j_2,\dots,j_{\ell}}(q^n)q^{p_2j_2+\dots+p_{\ell}j_{\ell}}\ENU^{i}\KEI{2}^{j_2}\cdots\KEI{\ell}^{j_{\ell}}q^{p_2k_2+\dots+p_{\ell}k_{\ell}}. \label{temp9} \end{align} Thanks to the obvious identity $1-x^n=(1-x)(1+x+\dots+x^{n-1})$ for $n\geq 1$, there exist $T_2,\dots,T_{\ell}\in\NONC$ such that \eqref{temp9} is equal to \begin{align*} \Delta_2T_2+\dots+\Delta_{\ell}T_{\ell}+ \sum_{(p_2,\dots,p_{\ell})\in \mathbb{Z}^{\ell-1}} \sum_{(i,j_2,\dots,j_{\ell})\in S} \sigma^{(p_2,\dots,p_{\ell})}_{i,j_2,\dots,j_{\ell}}(q^n)q^{p_2j_2+\dots+p_{\ell}j_{\ell}}\ENU^{i}q^{p_2k_2+\dots+p_{\ell}k_{\ell}}, \end{align*} where $\Delta_j=1-\KEI{j}$ (see ~\cite[\S4]{Rie}). Thus, if \eqref{addass} holds, we get the claim by Proposition \ref{certprop} thanks to the obvious identity \begin{align*} h_n= \sum_{(i_2,\dots,i_{\ell})\in\mathbb{Z}^{\ell-1}} F(n,i_2,\dots,i_{\ell}). \end{align*} Note that, for $n\in\mathbb{Z}$, $F(n,i_2,\dots,i_{\ell})$ is nonzero for finitely many $(i_2,\dots,i_{\ell})\in\mathbb{Z}^{\ell-1}$ by the positivities of $c_2,\dots,c_{\ell}$. \end{proof} \begin{Ex}\label{reicertexp} In Example \ref{exsister}, we have \begin{align*} T^{(S)}_{0,0}q^2Q_1^{-4}(1-Q_1^2) -T^{(S)}_{1,0}qQ_1^{-2} -T^{(S)}_{1,1}=0. \end{align*} Note that the additional assumption \eqref{addass} in Corollary \ref{shubu} is vacuously true because each $\sigma_{(i,j_2)}\in\mathbb{Q}[Q_1,Q_1^{-1},Q_2,Q_2^{-1}]$ is $Q_2$-free. By the division algorithm, we have \begin{align*} q^2Q_1^{-4}(1-Q_1^2)-qQ_1^{-2}\ENU-\ENU\KEI{2} = q^2Q_1^{-4}(1-Q_1^2)-(1+qQ_1^{-2})\ENU+(1-\KEI{2})\ENU. \end{align*} By Proposition \ref{certprop} (or by Corollary \ref{shubu} without the above rewritten), we have \begin{align*} q^{2-4M}(1-q^{2M})h_M=(1+q^{-2M+1})h_{M-1} \end{align*} for $M\in\mathbb{Z}$. This is equivalent to $(1-q^{2M})h_M=(q^{2M-1}+q^{4M-2})h_{M-1}$, i.e., \begin{align*} H(x,q)=(1+xq)H(xq^2,q)+xq^2H(xq^4,q). \end{align*} \end{Ex} To summarize, the algorithm to obtain a $q$-difference equation of \eqref{agh} is as follows. 
For examples of certificate recurrences, see ~\cite{Ts1} and the proof of Proposition \ref{gs111}. Note that ~\cite[Theorem 2.2]{Rie} guarantees that (Step 3) below gives a nontrivial relation for large enough $S$ and $D=\{(0,\dots,0)\}$. For an effective bound on $S$, see ~\cite[Proof of Theorem 2.2]{Rie}. \begin{enumerate} \item[(Step 1)] As an input, give finite subsets $S\subseteq\mathbb{Z}_{\geq0}^{\ell}$ and $D\subseteq\mathbb{Z}^{\ell-1}$ such that $(0,\dots,0)\in S$ and $(0,\dots,0)\in D$ heuristically. \item[(Step 2)] Solve the equation \eqref{algoeqmain} and \eqref{addass} under the ansatz \begin{align*} \sigma_{i,j_2,\dots,j_{\ell}}=\sum_{(p_2,\dots,p_{\ell})\in D} \sigma^{(p_2,\dots,p_{\ell})}_{i,j_2,\dots,j_{\ell}}Q_2^{p_2}\cdots Q_{\ell}^{p_{\ell}} \end{align*} for $(i,j_2,\dots,j_{\ell})\in S$ (i.e., solve the simultaneous linear equation on the unknowns $(\sigma_{i,j_2,\dots,j_{\ell}}^{(p_2,\dots,p_{\ell})})_{(i,j_2,\dots,j_{\ell})\in S, (p_2,\dots,p_{\ell})\in D}$ over $\mathbb{Q}(q,Q_1)$). \item[(Step 3)] If the relation \eqref{reclocal} is nontrivial, transform it to a nontrivial $q$-difference equation of $H(x,q)$. Otherwise, try (Step 1) for different $S$ and $D$. \end{enumerate} \subsection{Andrews-Gordon type series for $G_{(1,1,1)}(x,q)$ and $G_{(3,0,0)}(x,q)$}\label{gs111} \begin{Prop}\label{g111} We have \begin{align*} G_{(1,1,1)}(x,q)=\sum_{a,b,c,d\geq 0}\frac{q^{a^2+b^2+3c^2+3d^2+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}x^{a+b+c+2d}. \end{align*} \end{Prop} \begin{proof} First, we show that the Andrews-Gordon type series in Proposition \ref{g111} satisfies the same $q$-difference equation for $G_{(1,1,1)}(x,q)$ in Proposition \ref{PropqdiffG}. Let \begin{align*} F(n,b,c,d)= \frac{q^{(n-b-c-2d)^2+b^2+3c^2+3d^2+2(n-b-c-2d)b+3(n-b-2c-2d)(c+d)+3bc+3bd+6cd}}{(q;q)_{n-b-c-2d}(q;q)_b(q^3;q^3)_c(q^3;q^3)_d} \end{align*} and put \begin{align*} p_0 &= -2q^{6n+8}-q^{3n+12}+2q^{3n+8}+q^{12}, \quad r_0 = 0,\quad s_0 = (2q^{3n+8}+q^{12})(q^b-1),\\ q_0 &= (4q^{4n+9}+2q^{4n+8}+q^{n+12}+2q^{n+10})(q^{2n}-q^{b+2c+d})+(2q^{3n+8}+q^{12})(q^{n+2c+d}-q^b),\\ p_1 &= -(2q^{6n}+3q^{3n+4}+4q^{3n+3}+4q^{3n+2}+2q^{3n+1}+2q^{6}+2q^{5}+2q^{4})q^{3n+5},\\ q_1 &= (4q^{6n+6}+2q^{6n+5}+q^{3n+9}+2q^{3n+7})(q^{3n}C+q^{2+b}+(B-Cq^b)q^{n+2+2c+d})\\ &\quad+ (4q^{7n+8}+2q^{7n+7}-q^{4n+11}+2q^{4n+9}-q^{n+12})q^{b+2c+d}\\ &\quad+ q^{n+7}(2q^{3n}+q^{4})((1+C)q^{3n}+Dq^{1+b})q^{2c+d}+ (2q^{3n+3}+2q^{3n+1}+2q^{3n}+q^{5}+2q^{4})q^{3n+6},\\ r_1 &= (2q^{9n+5}+q^{6n+9}),\quad r_2 = (2q^{9n+3}+q^{6n+4})-(2q^{7n+4}+q^{4n+8})q^{b+2c+d},\\ s_1 &= (4q^{6n+8}+2q^{6n+7}+q^{3n+11}+2q^{3n+9})(1-q^b) + (2q^{3n}+q^4)q^{8+n+b+2c+d}, \\ p_2 &= 2q^{9n+4}+2q^{9n+3}-3q^{6n+8}+2q^{6n+5}+q^{6n+4}-q^{3n+9}, \\ q_2 &= ((4q^{10n+4}+2q^{10n+3}+q^{7n+7}+2q^{7n+5})(B+q^b)+2q^{10n+3}+q^{7n+7}-q^{4n+8+b}-2q^{7n+7+b})Cq^{2c+d}\\ &\quad +(-2q^{9n+4}+q^{6n+8}-2q^{6n+5})+2(q^{7n+7}+q^{7n+4}+q^{4n+8})q^{b+2c+d} \\ &\quad +(2q^{6n+7}+q^{3n+8})(q^{3n-4}C+Bq^{n+2c+d}+q^{n+2c+d}+q^{1+b}),\\ s_2 &= (2q^{6n+7}+q^{3n+8})(q(1-q^b)-2q^{n+b+2c+d}),\quad r_3 = (4q^{3n}+2q)q^{7n+b+2c+d},\quad s_3 = 0,\\ p_3 &= -(2q^{9n+2}+q^{6n+3}), \quad q_3 = (2q^{9n}+q^{6n+1})(q^2+(1+q^b+B)Cq^{n+2c+d}). \end{align*} Let $\TTE=\varepsilon_0+\varepsilon_1N+\dots+\varepsilon_3N^3$ for $\varepsilon\in\{p,q,r,s\}$. One can check \begin{align} (\TTP+(1-B)\TTQ+(1-C)\TTR+(1-D)\TTS)F(n,b,c,d)=0, \label{onecancheck} \end{align} where $NF(n,b,c,d)=F(n-1,b,c,d)$, $BF(n,b,c,d)=F(n,b-1,c,d)$, $CF(n,b,c,d)=F(n,b,c-1,d)$ and $DF(n,b,c,d)=F(n,b,c,d-1)$. 
By Proposition \ref{certprop}, we have \begin{align*} p_0f_n+p_1f_{n-1}+p_2f_{n-2}+p_3f_{n-3}=0. \end{align*} On the other hand, the $q$-difference equation for $G_{(1,1,1)}(x,q)$ in Proposition \ref{PropqdiffG} is equivalent to the claim that \begin{align*} p'_0g_n+p'_1g_{n-1}+p'_2g_{n-2}+p'_3g_{n-3}+p'_4g_{n-4}=0 \end{align*} holds for all $n\in\mathbb{Z}$. Here, \begin{align*} p'_0 &= -1+q^{3n},\quad p'_1 = -q^{4}+2q^{3n-2}+2q^{3n-1}+2q^{3n}+q^{3n+1}+q^{6n-3},\\ p'_2 &= q^{3n-3}+2q^{3n-2}+2q^{3n-1}+2q^{3n}+q^{6n-8}-q^{6n-5}-q^{6n-4},\\ p'_3 &= q^{3n-2}-q^{6n-10}-q^{6n-9}+q^{6n-6},\quad p'_4 = q^{6n-11}, \end{align*} and $g_{n}(q)$ is defined by $G_{(1,1,1)}(x,q)=\sum_{n\in\mathbb{Z}} g_{n}(q)x^n$. One can check that \begin{align*} \frac{1}{-2q^{3n+8}-q^{12}}(p_0,p_1,p_2,p_3,0) + \frac{1}{-2q^{3n+4}-q^{8}}(0,\ENU p_0,\ENU p_1,\ENU p_2,\ENU p_3)\end{align*} is equal to $(p'_0,p'_1,p'_2,p'_3,p'_4)$, which implies that both $q$-difference equations are the same. Finally, the claim follows from $g_{n}=f_n=0$ for $n<0$, $g_0=f_0=1$ and $p_0\ne 0$, $p'_0\ne 0$ for $n\geq 1$. \end{proof} In the rest of this section, we omit the display of explicit certificate recurrences for the following four reasons: they can be obtained automatically, they are large, they seem to give us no insight, and they are provided in the first version of this paper on arXiv. Instead, we specify our input $(S,D)$ to the algorithm. This high-level specification guarantees that certificate recurrences can be reproduced if desired. For example, the certificate recurrence in Example \ref{reicertexp} is obtained by putting $(S,D)=(S_{2,2},D_{1})$, where $Y_{a_1,\dots,a_b}=\{(c_1,\dots,c_b)\in\mathbb{Z}^b\mid 0\leq c_j< a_j\}$ for $Y\in\{S,D\}$, and \eqref{onecancheck} is obtained by putting $(S,D)=(S_{4,3,2,2},D_{2,3,2})$. \begin{Prop}\label{g300} We have \begin{align*} G_{(3,0,0)}(x,q)=\sum_{a,b,c,d\geq 0}\frac{q^{a(a+1)+b(b+2)+3c(c+1)+3d(d+1)+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}x^{a+b+c+2d}. \end{align*} \end{Prop} \begin{proof} The argument is the same as in \S\ref{gs111} with $(S,D)=(S_{4,3,2,2},D_{2,3,2})$ after swapping $a$ and $b$. As declared in \S\ref{certe}, we omit the details of the automatic proof. \end{proof} \subsection{A proof of Theorem \ref{RRidentAG} for $\BIR$}\label{proof} As in \S\ref{gs111}, the key is to show that the Andrews-Gordon type series in Theorem \ref{RRidentAG} for $\BIR$ satisfies the same $q$-difference equation for $f_{\BIR}(x,q)$ in Proposition \ref{PropqdiffR}. As declared in \S\ref{certe}, we omit the details of the automatic derivation of the former, saying only that it is enough to put $(S,D)=(S_{7,3,2,2},D_{2,2,2})$ in the algorithm. \subsection{A proof of Theorem \ref{RRidentAG} for $\BIRP$}\label{proofp} The argument is the same as in \S\ref{proof} with $(S,D)=(S_{7,2,2,2},D_{2,3,3})$ after swapping $a$ and $b$. As declared in \S\ref{certe}, we omit the details of the automatic proof. \subsection{A proof of Theorem \ref{RRbiiden}}\label{finalsec} By Proposition \ref{g111}, Proposition \ref{g300} and Theorem \ref{RRidentAG}, we have $f_{\BIR}(q)=G_{(3,0,0)}(1,q)$ and $f_{\BIRP}(q)=G_{(1,1,1)}(1,q)$. Thanks to \eqref{charcalc} and Theorem \ref{borp}, we get the results.
\hspace{0mm} \noindent{\bf Acknowledgments.} The author was supported by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University, JSPS Kakenhi Grants 20K03506, 23K03051, Inamori Foundation, JST CREST Grant Number JPMJCR2113, Japan and Leading Initiative for Excellent Young Researchers, MEXT, Japan. \begin{thebibliography}{99} \bibitem{An1} G.E.~Andrews, \textit{The Theory of Partitions}, Encyclopedia of Mathematics and its Applications, vol. 2, Addison-Wesley, 1976. \bibitem{An2} G.E.~Andrews, \textit{$q$-series: their development and application in analysis, number theory, combinatorics, physics, and computer algebra}, CBMS Regional Conference Series in Mathematics, 66. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1986. \bibitem{ASW} G.E.~Andrews, A.~Schilling and S.O.~Warnaar, \textit{An $A_2$ Bailey lemma and Rogers-Ramanujan-type identities}, J.Amer.Math.Soc. \textbf{12} (1999), 677--702. \bibitem{Bor} A.~Borodin, \textit{Periodic Schur process and cylindric partitions}, Duke Math.J. \textbf{140} (2007), 391--468. \bibitem{Cap0} S.~Capparelli, \textit{On some representations of twisted affine Lie algebras and combinatorial identities}, J.Algebra \textbf{154} (1993), 335--355. \bibitem{CDA} S.~Corteel, J.~Dousse and A.~Uncu, \textit{Cylindric partitions and some new $A_2$ Rogers-Ramanujan identities}, Proc.Amer.Math.Soc. \textbf{150} (2022), 481--497. \bibitem{CW} S.~Corteel and T.~Welsh, \textit{The $A_2$ Rogers-Ramanujan identities revisited}, Ann.Comb. \textbf{23} (2019), 683--694. \bibitem{DBT} E.A.~de Kerf, G.G.A.~B\"auerle and A.P.E.~ten Kroode, \textit{Finite and Infinite Dimensional Lie Algebras and Applications in Physics. Part 2}, Studies in Mathematical Physics, vol. 7, North-Holland, 1997. \bibitem{DHLM} M.~Delgado, R.~Hoffmann, S.~Linton, and J.J.~Morais, \textit{Automata, a package on automata, Version 1.16}, https://gap-packages.github.io/automata/, Refereed GAP package, 2024. \bibitem{FFW} B.~Feigin, O.~Foda and T.~Welsh, \textit{Andrews-Gordon type identities from combinations of Virasoro characters}, Ramanujan J. \textbf{17} (2008), 33--52. \bibitem{FK} I.~Frenkel and V.~Kac, \textit{Basic representations of affine Lie algebras and dual resonance models}, Invent.Math. \textbf{62} (1980/81), 23--66. \bibitem{FW} O.~Foda and T.~Welsh, \textit{Cylindric partitions, Wr characters and the Andrews-Gordon-Bressoud identities}, J.Phys.A \textbf{49} (2016), 164004, 37 pp. \bibitem{GK} I.M.~Gessel and C.~Krattenthaler, \textit{Cylindric partitions}, Trans.Amer.Math.Soc. \textbf{349} (1997), 429--479. \bibitem{Kac} V.~Kac, \textit{Infinite Dimensional Lie Algebras}, Cambridge University Press, 1990. \bibitem{KKLW} V.~Kac, D.~Kazhdan, J.~Lepowsky and R.~Wilson, \textit{Realization of the basic representations of the Euclidean Lie algebras}, Adv.Math. \textbf{42} (1981), 83--112. \bibitem{Koe} W.~Koepf, \textit{Hypergeometric Summation. An Algorithmic Approach to Summation and Special Function Identities}, Universitext. Springer, 2014. \bibitem{Kos} B.~Kostant, \textit{The principal three-dimensional subgroup and the Betti numbers of a complex simple Lie group}, Amer.J.Math. \textbf{81} (1959), 973--1032. \bibitem{KR3} S.~Kanade and M.C.~Russell, \textit{Completing the $A_2$ Andrews-Schilling-Warnaar identities}, Int.Math.Res.Not.
IMRN 2023, no.20, 17100--17155. \bibitem{LM} J.~Lepowsky and S.~Milne, \textit{Lie algebraic approaches to classical partition identities}, Adv.Math. \textbf{29} (1978), 15--59. \bibitem{LW1} J.~Lepowsky and R.~Wilson, \textit{Construction of the affine Lie algebra $A^{(1)}_1$}, Comm.Math.Phys. \textbf{62} (1978), 43--53. \bibitem{LW2} J.~Lepowsky and R.~Wilson, \textit{A Lie theoretic interpretation and proof of the Rogers-Ramanujan identities}, Adv.Math. \textbf{45} (1982), 21--72. \bibitem{LW3} J.~Lepowsky and R.~Wilson, \textit{The structure of standard modules. I. Universal algebras and the Rogers-Ramanujan identities}, Invent.Math. \textbf{77} (1984), 199--290. \bibitem{LW4} J.~Lepowsky and R.~Wilson, \textit{The structure of standard modules. II. The case $A^{(1)}_{1}$, principal gradation}, Invent.Math. \textbf{79} (1985), 417--442. \bibitem{MP} A.~Meurman and M.~Primc, \textit{Annihilating ideals of standard modules of $sl(2,{\bold{C}})^{\sim}$ and combinatorial identities}, Adv.Math. \textbf{64} (1987), 177--240. \bibitem{Nan} D.~Nandi, \textit{Partition identities arising from the standard $A^{(2)}_2$-modules of level 4}, Ph.D. Thesis (Rutgers University), 2014. \bibitem{Rie} A.~Riese, \textit{qMultiSum -- a package for proving q-hypergeometric multiple summation identities}, J.Symbolic Comput. \textbf{35} (2003), 349--376. \bibitem{Seg} G.~Segal, \textit{Unitary representations of some infinite-dimensional groups}, Commun.Math.Phys. \textbf{80} (1981), 301--342. \bibitem{Sil} A.V.~Sills, \textit{An Invitation to the Rogers-Ramanujan Identities. With a Foreword by George E. Andrews}, CRC Press, 2018. \bibitem{Ts1} S.~Tsuchioka, \textit{A proof of the second Rogers-Ramanujan identity via Kleshchev multipartitions}, Proc.Japan Acad.Ser.A Math.Sci. \textbf{99} (2023), no.3, 23--26. \bibitem{TT} M.~Takigiku and S.~Tsuchioka, \textit{A proof of conjectured partition identities of Nandi}, Amer.J.Math. \textbf{146} (2024), 405--433. \bibitem{TT2} M.~Takigiku and S.~Tsuchioka, \textit{Andrews-Gordon type series for the level 5 and 7 standard modules of the affine Lie algebra ${A}^{(2)}_2$}, Proc.Amer.Math.Soc. \textbf{149} (2021), 2763--2776. \bibitem{Unc} A.~Uncu, \textit{Proofs of Modulo 11 and 13 Cylindric Kanade-Russell Conjectures for $A_2$ Rogers-Ramanujan Type Identities}, arXiv:2301.01359. \bibitem{War2} O.~Warnaar, \textit{Hall-Littlewood functions and the $A_2$ Rogers-Ramanujan identities}, Adv.Math. \textbf{200} (2006), 403--434. \bibitem{War} O.~Warnaar, \textit{The $A_2$ Andrews-Gordon identities and cylindric partitions}, Trans.Amer.Math.Soc.Ser.B \textbf{10} (2023), 715--765. \bibitem{War3} O.~Warnaar, \textit{An $A_2$ Bailey tree and $A^{(1)}_2$ Rogers-Ramanujan-type identities}, arXiv:2303.09069. \bibitem{WZ} H.~Wilf and D.~Zeilberger, \textit{An algorithmic proof theory for hypergeometric (ordinary and ``$q$'') multisum/integral identities}, Invent.Math. \textbf{108} (1992), 575--633. \bibitem{Yen} L.~Yen, \textit{A two-line algorithm for proving $q$-hypergeometric identities}, J.Math.Anal.Appl. \textbf{213} (1997), 1--14. \end{thebibliography} \end{document}
2205.04767v3
http://arxiv.org/abs/2205.04767v3
Instanton sheaves on projective schemes
\documentclass[a4paper,reqno]{amsart} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \usepackage[toc]{appendix} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{latexsym} \usepackage{amscd} \usepackage{xypic} \xyoption{curve} \usepackage{ifthen} \usepackage{hyperref} \usepackage{graphicx} \usepackage{enumerate} \usepackage{enumitem} \usepackage{rotating} \usepackage{xcolor} \usepackage{mathtools} \usepackage{todonotes} \usepackage{color,soul} \usepackage{float} \restylefloat{table} \numberwithin{equation}{section} \def\cocoa{{\hbox{\rm C\kern-.13em o\kern-.07em C\kern-.13em o\kern-.15em A}}} \def\change#1{\textcolor{orange}{#1}} \def\red#1{\textcolor{red}{#1}} \def\cyan#1{\textcolor{cyan}{#1}} \newcommand{\btodo}[1]{\todo[color=blue!30,bordercolor=blue]{{#1}}} \newcommand{\otodo}[1]{\todo[color=orange!30,bordercolor=orange]{{#1}}} \newtheorem{theorem}{Theorem}[section] \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{claim}[theorem]{Claim} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{hypothesis}[theorem]{Hypothesis} \newtheorem{construction}[theorem]{Construction} \newcommand {\SL}{\mathrm{SL}} \newcommand {\PGL}{\mathrm{PGL}} \newcommand {\pf}{\mathrm{pf}} \newcommand {\BL}{\mathrm{BL}} \newcommand {\Ann}{\mathrm{Ann}} \newcommand {\ord}{\mathrm{ord}} \newcommand {\emdim}{\mathrm{emdim}} \newcommand {\coker}{\mathrm{coker}} \newcommand {\gr}{\mathrm{gr}} \newcommand {\spec}{\mathrm{spec}} \newcommand {\proj}{\mathrm{proj}} \newcommand {\depth}{\mathrm{depth}} \newcommand {\reg}{\mathrm{reg}} \newcommand {\sHom}{\mathcal{H}\kern -0.25ex{\mathit om}} \newcommand {\sExt}{\mathcal{E}\kern -0.25ex{\mathit xt}} \newcommand {\sTor}{\mathcal{T}\kern -0.25ex{\mathit or}} \newcommand {\Spl}{Spl} \newcommand {\im}{\mathrm{im}} \newcommand {\rk}{\mathrm{rk}} \newcommand {\lev}{\mathrm{msdeg}} \newcommand {\cdeg}{\mathrm{cdeg}} \newcommand {\Sing}{\mathrm{Sing}} \newcommand {\Soc}{\mathrm{Soc}} \newcommand {\ldf}{\mathrm{ldf}} \newcommand {\tdf}{\mathrm{tdf}} \newcommand {\Tor}{\mathrm{Tor}} \newcommand {\Ext}{\mathrm{Ext}} \newcommand {\Hom}{\mathrm{Hom}} \newcommand {\Hilb}{\mathcal{H}\kern -0.25ex{\mathit ilb\/}} \newcommand {\Mov}{\overline{\mathrm{Mov}}} \newcommand {\quantum}{k} \newcommand {\defect}{\delta} \newcommand {\field}{\mathbf k} \newcommand {\rH}{{H}} \newcommand {\rh}{{h}} \newcommand {\fN}{\mathfrak{N}} \newcommand {\fM}{\mathfrak{M}} \newcommand {\cK}{\mathcal{K}} \newcommand {\cA}{\mathcal{A}} \newcommand {\cB}{\mathcal{B}} \newcommand {\cJ}{\mathcal{J}} \newcommand {\cU}{\mathcal{U}} \newcommand {\cV}{\mathcal{V}} \newcommand {\cW}{\mathcal{W}} \newcommand {\cP}{\mathcal{P}} \newcommand {\cQ}{\mathcal{Q}} \newcommand{\cC}{{\mathcal C}} \newcommand{\cS}{{\mathcal S}} \newcommand{\cE}{{\mathcal E}} \newcommand{\cF}{{\mathcal F}} \newcommand{\cM}{{\mathcal M}} \newcommand{\cN}{{\mathcal N}} \newcommand{\cO}{{\mathcal O}} \newcommand{\cG}{{\mathcal G}} \newcommand{\cT}{{\mathcal T}} \newcommand{\cI}{{\mathcal I}} \newcommand {\bN}{\mathbb{N}} \newcommand {\bZ}{\mathbb{Z}} \newcommand {\bQ}{\mathbb{Q}} \newcommand {\bC}{\mathbb{C}} \newcommand {\bP}{\mathbb{P}} \newcommand {\bA}{\mathbb{A}} \newcommand {\bF}{\mathbb{F}} \newcommand{\GL}{\operatorname{GL}}
\newcommand{\Bl}{\operatorname{Bl}} \newcommand{\Pic}{\operatorname{Pic}} \newcommand{\Num}{\operatorname{Num}} \newcommand{\NS}{\operatorname{NS}} \newcommand{\Cliff}{\operatorname{Cliff}} \newcommand{\lra}{\longrightarrow} \newcommand{\codim}{\operatorname{codim}} \def\p#1{{\bP^{#1}}} \def\a#1{{{\bA}^{#1}_{k}}} \def\Ofa#1{{\cO_{#1}}} \def\ga#1{{{\accent"12 #1}}} \newcommand{\xycenter}[1]{\begin{center}\mbox{\xymatrix{#1}}\end{center}} \def\mapright#1{\mathbin{\smash{\mathop{\longrightarrow} \limits^{#1}}}} \title[Instanton sheaves on projective schemes]{Instanton sheaves on projective schemes} \thanks{The authors are members of GNSAGA group of INdAM and are supported by the framework of the MIUR grant Dipartimenti di Eccellenza 2018-2022 (E11G18000350001). The first author is a postdoctoral research fellow (``titolare di Assegno di Ricerca") of Istituto Nazionale di Alta Matematica (INdAM, Italy). } \subjclass[2020]{Primary: 14F06. Secondary: 14D21, 14J45, 14J60} \keywords{Ulrich sheaf, Instanton sheaf, Fano $3$--fold, Scroll} \author[V. Antonelli, G. Casnati]{Vincenzo Antonelli, Gianfranco Casnati} \begin{document} \maketitle \begin{abstract} A $h$--instanton sheaf on a closed subscheme $X$ of some projective space endowed with an ample and globally generated line bundle $\cO_X(h)$ is a coherent sheaf whose cohomology table has a certain prescribed shape. In this paper we deal with $h$--instanton sheaves, relating them to Ulrich sheaves. Moreover, we study $h$--instanton sheaves on smooth curves and surfaces, cyclic $n$--folds, Fano $3$--folds and scrolls over arbitrary smooth curves. We also deal with a family of monads associated to $h$--instanton bundles on varieties satisfying some mild extra technical conditions. \end{abstract} \section{Introduction} Several classical problems in algebraic geometry can be stated in terms of the existence of coherent sheaves with prescribed cohomology on projective schemes defined over an algebraically closed field $\field$. For instance, if $X$ is irreducible of dimension $n$ and it is endowed with an ample and globally generated line bundle $\cO_X(h)$, then it is natural to deal with coherent sheaves defined by some particular vanishings of their cohomology groups. In particular, it may be interesting to deal with the existence and classification of {\sl Ulrich sheaves (with respect to $\cO_X(h)$)}, i.e. non--zero coherent sheaves $\cE$ on $X$ such that: \begin{itemize} \item $h^0\big(\cE(-(t+1)h)\big)=h^n\big(\cE((t-n)h)\big)=0$ if $t\ge0$; \item $\cE$ is {\sl without intermediate cohomology}, i.e. $h^i\big(X,\cE(th)\big)=0$ for $0< i<n$, $t\in\bZ$. \end{itemize} Ulrich sheaves have been the object of deep study and of several attempts at generalization, both from the algebraic and the geometric point of view: without any claim of completeness we refer the interested reader to \cite{E--S--W, Ap--Kim, C--H2, Bea6, K--M--S} for further details on such sheaves. In particular, as pointed out in \cite{E--S--W}, it is natural to ask whether Ulrich sheaves actually exist on each irreducible projective scheme. Ulrich line bundles always exist on smooth {\sl curves} (i.e. irreducible projective schemes of dimension $1$): see \cite[p. 542]{E--S--W} for further details. When the dimension is at least $2$ such a question is still wide open, despite the great number of scattered results proved in the last decade. Thus it could be reasonable to look for some kind of approximations of Ulrich sheaves, allowing more non--zero cohomology groups. E.g.
we can consider torsion--free coherent sheaves $\cE$ on a projective scheme $X$ with $\dim(X)=n$ such that: \begin{itemize} \item $h^0\big(\cE(-(t+1)h)\big)=h^n\big(\cE((t-n)h)\big)=0$ if $t\ge0$; \item $h^i\big(\cE(-(i+t+1)h)\big)=h^{n-i}\big(\cE((t-n+i)h)\big)=0$ if $1\le i\le n-2$, $t\ge0$. \end{itemize} If $X\cong\p n$ and $\cO_X(h)\cong\cO_{\p n}(1)$, such sheaves are characterized as the torsion--free ones which are the cohomology of some {\sl linear monad}, i.e. a monad of the form \begin{equation*} 0\longrightarrow\cO_{\p n}(-1)^{\oplus a}\longrightarrow\cO_{\p n}^{\oplus b}\longrightarrow\cO_{\p n}(1)^{\oplus c}\longrightarrow0 \end{equation*} (see \cite{Ja}). It is immediate to check that the above monad has a symmetric shape, i.e. $a=c$, if and only if $h^1\big(\cE(-h)\big)=h^{n-1}\big(\cE(-nh)\big)$ or, equivalently, if and only if $c_1(\cE)=0$: a sheaf $\cE$ with such properties is called an {\sl instanton sheaf} in \cite{Ja}. Classically, a {\sl mathematical instanton bundle} is a rank two vector bundle $\cE$ on $\p3$ satisfying $h^0\big(\cE\big)=h^1\big(\cE(-2)\big)=0$ and $c_1(\cE)=0$. Instanton bundles have been widely studied from different viewpoints since the discovery, through the Atiyah--Penrose--Ward transformation, of their connection with the solutions of the Yang--Mills equations (see \cite{A--W}) and therefore with the physics of particles. Because of its intrinsic interest and its physical importance, the notion of instanton bundle has been extended in several papers to different classes of projective schemes: e.g. see \cite{Ok--Sp, A--O1,Fa2, Kuz,Ja, J--MR, C--MR3}. In particular, M. Jardim and R.M. Mir\'o--Roig introduced the following definition for smooth irreducible projective schemes of dimension $n$ ({\sl $n$--folds} for short). \begin{definition}[see \cite{J--MR}] \label{dJardim} Let $X$ be an $n$--fold endowed with a very ample line bundle $\cO_X(h)$. A torsion--free coherent sheaf $\cE$ is called an instanton sheaf with quantum number $\quantum\ge0$ if it is the cohomology of a monad \begin{equation} \label{MonadJardim} 0\longrightarrow\cO_{X}(-h)^{\oplus \quantum}\longrightarrow\cO_{X}^{\oplus b}\longrightarrow\cO_{X}(h)^{\oplus \quantum}\longrightarrow0. \end{equation} \end{definition} Another possible way of generalizing mathematical instanton bundles is by considering rank two bundles supported on $3$--folds which are similar to $\p3$. Recall that the $n$--fold $X$ is {\sl Fano} if the dual of its canonical line bundle $\omega_X$ is ample. The greatest integer $i_X$ such that $\omega_X\cong\cO_X(-i_Xh)$ for some ample line bundle $\cO_X(h)$ is well--defined and called the {\sl index of $X$}. The line bundle $\cO_X(h)$ is well--defined as well and it is called the {\sl fundamental line bundle of $X$}. It is well--known that $1\le i_X\le n+1$. Moreover, $i_X=n+1$ if and only if $X\cong\p n$ and $i_X=n$ if and only if $X$ is a smooth quadric hypersurface in $\p{n+1}$: there also exists a complete classification when $X$ is also {\sl del Pezzo}, i.e. $i_X=n-1$. When $i_X\le n-2$ there are many deformation classes. From the viewpoint of Fano $3$--folds, $\cO_{\p3}(-2)$ is the square root of $\omega_{\p3}$, hence an instanton bundle $\cE$ on $\p3$ can also be viewed as the normalization of a stable bundle $\cF$ of rank $2$ such that $\cF\cong\cF^\vee\otimes\omega_{\p3}$ and $h^1\big(\cF\big)=0$. For this reason D. Faenzi introduced the following definition in \cite{Fa2} (see also \cite{Kuz} for $i_X=2$).
\begin{definition}[see \cite{Fa2}: see also \cite{Kuz}] \label{dFaenzi} Let $X$ be a Fano $3$--fold with cyclic Picard group, fundamental line bundle $\cO_X(h)$, index $i_X$ and let $ q_X:=\left[\frac{i_X}2\right] $. A vector bundle $\cE$ of rank two on $X$ is called an instanton bundle if the following properties hold: \begin{itemize} \item $c_1(\cE)=(2q_X-i_X)h$; \item $h^0\big(\cE\big)=h^1\big(\cE(-q_Xh)\big)=0$. \end{itemize} \end{definition} The above definition has been generalized also to Fano $3$--folds $X$ with non--cyclic Picard group in \cite{C--C--G--M} (see also \cite{M--M--PL}) and to any $3$--fold in \cite{A--M2}. In particular there are two different possible definitions for an instanton bundle $\cE$ on the smooth quadric hypersurface $X\subseteq\p4$ in the literature, i.e. the one according to Definition \ref{dJardim}, which satisfies $c_1(\cE)=0$, and the one according to Definition \ref{dFaenzi}, which satisfies $c_1(\cE)=-h$. A similar dichotomy of definitions has been recently explored in \cite{A--C--G} for each Fano $3$--fold (see also \cite{El--Gr} when $X\cong\p3$). In this paper we try to merge all the aforementioned different viewpoints from \cite{J--MR, Kuz, Fa2, El--Gr, A--M2} into the following purely cohomological definition. \begin{definition} \label{dMalaspinion} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. A non--zero coherent sheaf $\cE$ on $X$ is called an instanton sheaf (with respect to $\cO_X(h)$) with defect $\defect\in\{\ 0,1\ \}$ and quantum number $\quantum\in\bZ$ if the following properties hold: \begin{itemize} \item $h^0\big(\cE(-(t+1)h)\big)=h^n\big(\cE((\defect-n+t)h)\big)=0$ if $t\ge0$; \item $h^i\big(\cE(-(i+t+1)h)\big)=h^{n-i}\big(\cE((\defect+t-n+i)h)\big)=0$ if $1\le i\le n-2$, $t\ge0$; \item $\defect h^i\big(\cE(-ih)\big)=0$ for $2\le i\le n-2$; \item $h^1\big(\cE(-h)\big)=h^{n-1}\big(\cE((\defect-n)h)\big)=\quantum$; \item $\defect(\chi(\cE)-(-1)^n\chi(\cE(-nh)))=0$. \end{itemize} The instanton sheaf $\cE$ is called ordinary or non--ordinary according to whether $\defect$ is $0$ or $1$. If $\cE$ is an instanton sheaf with respect to $\cO_X(h)$ we will also briefly say that $\cE$ is an $\cO_X(h)$--instanton sheaf or a $h$--instanton sheaf or simply an instanton sheaf if the polarization is evident from the context. \end{definition} The first three properties in Definition \ref{dMalaspinion} are related to the existence of few non--zero entries in the cohomology table of $\cE$: in particular, $h^i\big(\cE(th)\big)=0$ for $t\in\bZ$ and $2\le i\le n-2$. The last two conditions imply that the cohomology $h^i\big(\cE(th)\big)$ is approximately symmetric in the range $-n\le t\le\defect-1$. Moreover, it is clear from the above definition that the $h$--instanton sheaf $\cE$ is Ulrich if $\defect=\quantum=0$. The converse is also true (see Corollary \ref{cUlrich}), hence instanton sheaves may also be viewed as some kind of approximations of Ulrich sheaves. \medbreak We briefly describe below the content of the paper. In Section \ref{sPrel} we fix the notation used in the paper and collect several general results. In Section \ref{sSpace} we recall some well--known facts from \cite{Ja} about ordinary instanton sheaves on $\p n$ with respect to $\cO_{\p n}(1)$, giving some kind of generalization to the non--ordinary case.
More precisely, when $n\ge2$, we recall how to use the Beilinson theorem for associating to each instanton sheaf on $\p n$ a monad $\cM^\bullet$ with a very precise shape (see equalities \eqref{MonadValue}). Moreover, such an $\cM^\bullet$ completely characterizes instanton sheaves on $\p n$ with respect to $\cO_{\p n}(1)$ with arbitrary defect. In Section \ref{sGeneral} we prove several properties of instanton sheaves. Notice that Definition \ref{dMalaspinion} features the notion of ordinary instanton sheaf as a natural generalization of the notion of Ulrich sheaf and they actually coincide when $n=1$. If $X$ is a projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$, then the morphism $X\to\p N$ induced by any choice of a basis of $H^0\big(\cO_X(h)\big)$ is finite. Thus its image has dimension $n$ and we can project it from $N-n$ general points obtaining a finite morphism $p\colon X\to\p n$ such that $\cO_X(h)\cong p^*\cO_{\p n}(1)$. Conversely each such morphism arises in this way. Thus, the following characterization of instanton sheaves extending \cite[Theorem 2.1]{E--S--W} seems to be quite natural. Moreover, it is clearly helpful for both reducing the infinite set of vanishings required in Definition \ref{dMalaspinion} to a finite one and relating $h$--instanton sheaves with the aforementioned monad $\cM^\bullet$ on the projective space. \begin{theorem} \label{tCharacterization} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a coherent sheaf on $X$, $\quantum$ a non--negative integer and $\defect\in\{\ 0, 1\ \}$, then the following assertions are equivalent. \begin{enumerate} \item $\cE$ is a $h$--instanton sheaf with defect $\defect$ and quantum number $\quantum$. \item $\cE$ is non--zero and the following finite set of properties holds: \begin{itemize} \item $h^0\big(\cE(-h)\big)=h^n\big(\cE((\defect-n)h)\big)=0$; \item $h^i\big(\cE(-(i+1)h)\big)=h^{n-i}\big(\cE((\defect-n+i)h)\big)=0$ if $1\le i\le n-2$; \item $\defect h^i\big(\cE(-ih)\big)=0$ for $2\le i\le n-2$; \item $h^1\big(\cE(-h)\big)=h^{n-1}\big(\cE((\defect-n)h)\big)=\quantum$; \item $\defect(\chi(\cE)-(-1)^n\chi(\cE(-nh)))=0$. \end{itemize} \item For each finite morphism $p\colon X\to\p n$ with $\cO_X(h)\cong p^*\cO_{\p n}(1)$, the sheaf $p_*\cE$ is an instanton sheaf on $\p n$ with respect to $\cO_{\p n}(1)$ with defect $\defect$ and quantum number $\quantum$. \item There is a finite morphism $p\colon X\to\p n$ with $\cO_X(h)\cong p^*\cO_{\p n}(1)$ such that $p_*\cE$ is an instanton sheaf on $\p n$ with respect to $\cO_{\p n}(1)$ with defect $\defect$ and quantum number $\quantum$.\end{enumerate} \end{theorem} Taking into account the monadic representation of instanton sheaves on $\p n$ described in Section \ref{sSpace}, it follows that instanton sheaves can still be characterized, in some sense, in terms of monads on each projective scheme $X$. Though such a description is not so easy to handle, we obtain some further results concerning the Hilbert function of instanton sheaves and their relation with Ulrich sheaves. Moreover, we also inspect the behaviour of their restrictions to general divisors in $\vert h\vert$. When we assume some further properties of the projective scheme $X$ or of the sheaf $\cE$, we can say something more. E.g. if $\cO_X(h)$ is very ample then we can deal with the minimal free resolution of $\cE$.
More precisely, for each $h$--instanton sheaf $\cE$ on $X$ with defect $\defect$, we define \begin{gather*} v(\cE):=\min\{\ t\in\bZ\ \vert\ h^0\big(\cE(th)\big)\ne0\ \},\qquad w(\cE):=h^1\big(\cE((\defect-1)h)\big)+\defect. \end{gather*} The Castelnuovo--Mumford regularity of $\cE$ is bounded from above by $w(\cE)$ (see Proposition \ref{pRegularity}) and the following result holds. \begin{theorem} \label{tResolution} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with a very ample line bundle $\cO_X(h)$ inducing an embedding $X\subseteq\p N$, let $V:=H^0\big(\cO_X(h)\big)$ and let $S$ be the symmetric $\field$--algebra of $V$. If $\cE$ is a $h$--instanton sheaf, then the minimal free resolution of $H^0_*\big(\cE\big)$ as $S$--module has the form \begin{equation} \label{Resolution} 0\longrightarrow F_{N-1}\longrightarrow F_{N-2}\longrightarrow\dots\longrightarrow F_1\longrightarrow F_0\longrightarrow H^0_*\big(\cE\big)\longrightarrow0, \end{equation} where $F_p\cong \bigoplus_{i=v(\cE)}^{w(\cE)}S(-i-p)^{\oplus \beta_{p,i}}$. \end{theorem} The property of being $h$--instanton imposes a strong condition on the first Chern class of the sheaf (at least when $X$ is smooth). Indeed, in Section \ref{sBundle} we prove the following theorem. \begin{theorem} \label{tSlope} Let $X$ be an $n$--fold endowed with an ample and globally generated line bundle $\cO_X(h)$ and assume that either the characteristic of $\field$ is $0$ or $\cO_X(h)$ is very ample. Let $\cE$ be a non--zero vector bundle on $X$ and $\defect\in\{\ 0,1\ \}$ such that the following conditions hold: \begin{itemize} \item $h^0\big(\cE(-h)\big)=h^n\big(\cE((\defect-n)h)\big)=0$; \item $h^i\big(\cE(-(i+1)h)\big)=h^{j}\big(\cE((\defect-j)h)\big)=0$ if $1\le i\le n-2$, $2\le j\le n-1$; \item $\defect h^i\big(\cE(-ih)\big)=0$ for $2\le i\le n-2$; \item $\defect h^1\big(\cE(-h)\big)=\defect h^{n-1}\big(\cE((\defect-n)h)\big)$. \end{itemize} Then $\cE$ is a $h$--instanton bundle on $X$ with defect $\defect$ if and only if \begin{equation} \label{Slope} c_1(\cE)h^{n-1}=\frac{\rk(\cE)}2((n+1-\defect)h^n+K_Xh^{n-1}). \end{equation} \end{theorem} In Proposition \ref{pSpecial}, we explain how Theorem \ref{tSlope} is simplified by the assumption that $\cE$ is a rank two vector bundle which is {\sl orientable}, i.e. $c_1(\cE)=(n+1-\defect)h+K_X$ as elements in $A^1(X)\cong\Pic(X)$. Ordinary instanton sheaves on smooth curves are exactly the Ulrich bundles, hence they always exist: the existence of non--ordinary instanton sheaves on smooth curves is similarly easy to check (see Proposition \ref{pCurve}). When $X$ is a {\sl surface} (i.e. an irreducible projective scheme with $n=2$) which is also smooth, rank two orientable instanton bundles are easy to produce, even when it is not known if $X$ supports Ulrich bundles (see Examples \ref{eMukai} and \ref{eGenus0}). As we pointed out, instanton sheaves are often defined in the literature as the cohomology of linear monads. We prove some results in this direction in Section \ref{sMonad}. More precisely, let $X$ be an $n$--fold with $n\ge3$ endowed with a very ample line bundle $\cO_X(h)$ and let $X\subseteq\p N$ be the induced embedding. A coherent sheaf $\cE$ on $X$ without intermediate cohomology is necessarily a vector bundle: in this case $\cE$ is called an {\sl aCM bundle (with respect to $\cO_X(h)$)}. If $\cO_X$ is an aCM bundle and $h^1\big(\cI_{X\vert \p N}(t)\big)=0$ for each $t\in \bZ$ we say that $X$ is {\sl aCM (with respect to $\cO_X(h)$)}. \begin{theorem} \label{tMonad} Let $X$ be an $n$--fold with $n\ge3$ endowed with a very ample line bundle $\cO_X(h)$. Assume that $X$ is aCM with respect to $\cO_X(h)$. If $\cE$ is a non--zero vector bundle on $X$, $\quantum$ a non--negative integer and $\defect\in\{\ 0, 1\ \}$, then the following assertions are equivalent.
\begin{enumerate} \item $\cE$ is a $h$--instanton bundle with defect $\defect$ and quantum number $\quantum$. \item $\cE$ satisfies $h^0\big(\cE(-h)\big)=h^n\big(\cE((\defect-n)h)\big)=0$ and it is the cohomology of a monad of the form \begin{equation} \label{Monad} 0\longrightarrow\cA\longrightarrow\cB\longrightarrow\cC\longrightarrow0, \end{equation} where $$ \cA:=\omega_X((n-\defect)h)^{\oplus \quantum}\oplus\omega_X((n+1-\defect)h)^{\oplus a},\qquad \cC:=\cO_X^{\oplus c}\oplus \cO_X(h)^{\oplus \quantum} $$ are such that \begin{equation} \label{DimensionMonad} a=\defect h^{n-1}\big(\cE(-n h)\big),\qquad c=\defect h^1\big(\cE\big) \end{equation} and $\cB$ is an aCM bundle such that \begin{gather} \label{h^iB} {\begin{gathered} h^0\big(\cB(-h)\big)=\quantum h^0\big(\omega_X((n-1-\defect)h)\big)+ah^0\big(\omega_X((n-\defect)h)\big),\\ h^n\big(\cB((\defect-n)h)\big)=\quantum h^0\big(\omega_X((n-1-\defect)h)\big)+ch^0\big(\omega_X((n-\defect)h)\big), \end{gathered}}\\ \label{ChiB} {\begin{gathered} \defect(\chi(\cB)-(-1)^n\chi(\cB(-nh)))=\defect(c-a)(\chi(\cO_X)-(-1)^n\chi(\cO_X(-nh))). \end{gathered}} \end{gather} \end{enumerate} \end{theorem} Moreover, we are also able to exploit another interesting link between ordinary instanton and Ulrich bundles for aCM $n$--folds $X$ whose adjoint linear system is not globally generated (see Corollary \ref{cMonad}). In particular, we recover the already known results for monads associated to instanton sheaves on smooth quadrics (and projective spaces) and we give some new examples of monads associated to instanton sheaves on some rational normal scrolls. In Section \ref{sCyclic} we deal with the case of instanton bundles on $n$--folds $X$ which are {\sl cyclic}, i.e. such that $\Pic(X)$ is free of rank $\varrho_X=1$. On the one hand, we deal with instanton sheaves of rank up to two on such an $X$, studying their $\mu$--(semi)stability properties. On the other hand, we compare in Proposition \ref{pFano} rank two instanton bundles with respect to the fundamental line bundle on a Fano $3$--fold in the sense of Definition \ref{dMalaspinion} with the notion of instanton bundle introduced in \cite{A--C--G}, which extends Definition \ref{dFaenzi} to each Fano $3$--fold. In particular, we show that if $\cE$ is not an ordinary rank two $\cO_X(h)$--instanton bundle on a {\sl prime Fano $3$--fold} $X$, i.e. a cyclic Fano $3$--fold with $i_X=1$, then the two notions coincide with very few exceptions, hence the existence of such bundles follows from the results proved in \cite{Fa2}. When $i_X=1$ the existence of ordinary rank two $\cO_X(h)$--instanton bundles is slightly less immediate. Recall that in this case $\cO_X(h)\cong\omega_X^{-1}$ and $h^3$ is even, say $h^3=2g_X-2$: the number $g_X$ is called the {\sl genus} of $X$. If $\cO_X(h)$ is very ample, then it induces an embedding $X\subseteq\p{g_X+1}$. It is well--known that the $3$--fold $X$ contains lines and we say that $X$ is {\sl ordinary} if it contains a line with normal bundle $\cO_{\p1}\oplus\cO_{\p1}(-1)$. The following result answers the question of the existence of $\cO_X(h)$--instanton bundles in the case $i_X=1$. \begin{theorem} \label{tPrime} Let $X$ be an ordinary prime Fano $3$--fold of genus $g_X$ with fundamental line bundle $\cO_X(h)$ and assume $\field=\bC$. For each $k\ge0$ there exists a rank two orientable, ordinary, $\mu$--stable, $h$--instanton bundle $\cE$ with quantum number $\quantum$, such that $$ h^1\big(\cE\otimes\cE^\vee\big)=4+g_X+2k,\qquad h^i\big(\cE\otimes\cE^\vee\big)=0$$ for $i\ge2$.
\end{theorem} In Section \ref{sSextic} we deal with low rank instanton bundles on del Pezzo $3$--folds $X$ of degree $6$, namely the general hyperplane section of the image of the Segre embedding $\p2\times\p2\subseteq\p8$, which we usually call {\sl flag $3$--fold}, and the image of the Segre embedding $\p1\times\p1\times\p1\subseteq\p7$. In these cases $X$ is not cyclic, hence several pathologies may appear, as we show in Examples \ref{eSegreDeform} and \ref{eSegreStable} when $X\cong\p1\times\p1\times\p1$. Nevertheless, when $X$ is the flag $3$--fold, we are able to prove the following result. \begin{theorem} \label{tFlag2} Let $X$ be the flag $3$--fold and let $\cO_X(h)$ be its fundamental line bundle. Every rank two orientable, ordinary, $h$--instanton bundle $\cE$ is $\mu$--semistable. If it is indecomposable, then it is also simple. \end{theorem} In Section \ref{sScrollCurve} we describe Construction \ref{conScrollCurve}, partially extending some results from \cite{A--M2} to scrolls over smooth curves of arbitrary genus, and we prove the following existence theorem. \begin{theorem} \label{tScrollCurve} Let $\cG$ be a vector bundle of rank $n\ge3$ on a smooth curve $B$ and set $X:=\bP(\cG)$. Assume that $\cO_X(h):=\cO_{\bP(\cG)}(1)$ is an ample and globally generated line bundle. For each integer $\quantum\ge0$ the bundle $\cE$ defined in Construction \ref{conScrollCurve} is an ordinary rank two orientable, $\mu$--semistable, $h$--instanton bundle with quantum number $\quantum$. \end{theorem} The bundles obtained via Construction \ref{conScrollCurve} are actually simple if and only if $\quantum\ge1$ (see Proposition \ref{pExt}). The existence of rank two orientable, Ulrich, simple bundles on $X$, i.e. ordinary rank two orientable, $\mu$--semistable, simple, $h$--instanton bundles with quantum number $0$, is also proved in Proposition \ref{pExt0}. \subsection{Acknowledgements} The authors express their thanks to F. Malaspina and M. Jardim for some helpful suggestions, and to A.F. Lopez for an illuminating comment on the characterization of Ulrich sheaves via finite projections. \section{Notation and first results} \label{sPrel} Throughout the whole paper we work over an algebraically closed field $\field$. The projective space of dimension $N$ over $\field$ will be denoted by $\p N$: $\cO_{\p N}(1)$ will denote the hyperplane line bundle. A projective scheme $X$ is a closed subscheme of some projective space over $\field$: $X$ is a {\sl variety} if it is also integral. A {\sl manifold} $X$ is a smooth variety: we often use the term $n$--fold to underline that $X$ is a manifold of dimension $n$. The structure sheaf of a projective scheme $X$ is denoted by $\cO_X$ and its Picard group by $\Pic(X)$. Let $X$ be a projective scheme. For each closed subscheme $Z\subseteq X$ the ideal sheaf $\cI_{Z\vert X}$ of $Z$ in $X$ fits into the exact sequence \begin{equation} \label{seqStandard} 0\longrightarrow \cI_{Z\vert X}\longrightarrow \cO_X\longrightarrow \cO_Z\longrightarrow 0. \end{equation} If $\cA$ is a coherent sheaf on a projective scheme $X$ we set $h^i\big(\cA\big):=\dim H^i\big(\cA\big)$. If $X$ is endowed with a globally generated line bundle $\cO_X(h)$, then we have an induced morphism $\phi_h\colon X\to\p N$ where $N+1=h^0\big(\cO_X(h)\big)$ and $\phi_h$ is finite if and only if $\cO_X(h)$ is also ample. For each $i\ge0$ we set $H^i_*\big(\cA\big):=\bigoplus_{t\in\bZ} H^i\big(\cA(th)\big)$. Moreover, we set $S:=\field[x_0,\dots,x_N]$, so that $H^i_*\big(\cA\big)$ is naturally an $S$--module. The morphism $\phi_h$ induces a morphism $S\to H^0_*\big(\cO_X\big)$ of $\field$--algebras whose image is denoted by $S[X]$.
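For instance, if $X=\p N$ and $\cO_X(h)\cong\cO_{\p N}(1)$, then the above morphism is an isomorphism, so that $S[X]=H^0_*\big(\cO_X\big)\cong S$ as graded $S$--modules.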
Let $\cA$ be a coherent sheaf on $X$. We say that $\cA$ has {\sl natural cohomology in shift $t$} if $h^i\big(\cA(th)\big)=0$ for all $i\in\bZ$ but one. The sheaf $\cA$ is called {\sl $m$--regular (in the sense of Castelnuovo--Mumford)} if $h^i\big(\cA((m-i)h)\big)=0$ for $i\ge1$ and the regularity $\reg(\cA)$ of $\cA$ is defined as the minimum integer $m$ such that $\cA$ is $m$--regular. We refer the reader to \cite[Chapter 4]{Ei} for further details about this notion. The following result will be used several times. \begin{proposition} \label{pNatural} Let $X$ be a projective scheme of dimension $n\ge1$ endowed with a globally generated line bundle $\cO_X(h)$. If $\cA$ is a coherent sheaf on $X$ and there is $m\le n-1$ (resp. $m\ge 1$) such that $h^i\big(\cA(-ih)\big)=0$ for each $i\le m$ (resp. $i\ge m$), then the following assertions hold. \begin{enumerate} \item $h^i\big(\cA(-th)\big)=0$ for each $t\ge i$ and $i\le m$ (resp. $t\le i$ and $i\ge m$). \item $h^i\big(\cO_Y\otimes\cA(-ih)\big)=0$ for each $i\le m-1$ (resp. $i\ge m$) and general $Y\in\vert h\vert$. \end{enumerate} \end{proposition} \begin{proof} For each general $Y\in\vert h\vert$, let $\cO_Y(h):=\cO_Y\otimes\cO_X(h)$ and $\cA_Y:=\cO_Y\otimes\cA$: notice that $\cA_Y$ is coherent and $\dim(Y)=n-1$. Assume that $\cA$ is not zero, otherwise the statement is trivially true. The set of associated points of $\cA$ is finite, hence the general $Y\in\vert h\vert$ does not contain any such point. Thus the exact sequence \begin{equation} \label{seqSection} 0\longrightarrow \cO_X(-h)\longrightarrow\cO_X\longrightarrow\cO_Y\longrightarrow0 \end{equation} tensored by $\cA(th)$ remains exact. Let $n\ge1$, $m\le n-1$ (resp. $m\ge1$) and let $\cA$ be a coherent sheaf on $X$ satisfying $h^i\big(\cA(-ih)\big)=0$ for each $i\le m$ (resp. $i\ge m$). If $m=0$ (resp. $m=n$), then the cohomology of sequence \eqref{seqSection} tensored by $\cA(th)$ implies $$ h^0\big(\cA(-(t+1)h)\big)\le h^0\big(\cA(-th)\big),\qquad (\text{resp. $h^n\big(\cA(-(t+1)h)\big)\ge h^n\big(\cA(-th)\big)$}) $$ when $t\ge 0$ (resp. $t\le 1$). Thus the statement is true in this case. We complete the proof of the statement by induction on $n\ge1$, the base case $n=1$ being a particular case of the discussion above. Thus we assume that the statement holds true for each projective scheme of dimension $n - 1 \ge 1$ and that $m\ge1$ (resp. $m\le n-1$) in what follows. The cohomology of sequence \eqref{seqSection} tensored by $\cA(-ih)$ implies $h^i\big(\cA_Y(-ih)\big)=0$ for each $i\le m-1$ (resp. $i\ge m$), hence assertion (2) is proved. By the inductive hypothesis $h^i\big(\cA_Y(-th)\big)=0$ for each $t\ge i$ and $i\le m-1$ (resp. $t\le i$ and $i\ge m$). Again sequence \eqref{seqSection} tensored by $\cA(th)$ implies $$ h^{i}\big(\cA(-(t+1)h)\big)\le h^{i}\big(\cA(-th)\big),\qquad (\text{resp. $h^i\big(\cA(-(t+1)h)\big)\ge h^i\big(\cA(-th)\big)$}) $$ for each $t\ge i-1$ and $i\le m$ (resp. $t\le i$ and $i\ge m$). We have $h^{i}\big(\cA(-ih)\big)=0$ for $i\le m$ (resp. $i\ge m$) by hypothesis, hence $h^{i}\big(\cA(-th)\big)=0$ for each $i\le m$ and $t\ge i$ (resp. $t\le i$ and $i\ge m$). \end{proof} If $\cA$ and $\cB$ are coherent sheaves on an $n$--fold $X$, then Serre duality holds \begin{equation} \label{Serre} \Ext_X^i\big(\cA,\cB\otimes\omega_X\big)\cong \Ext_X^{n-i}\big(\cB,\cA\big)^\vee \end{equation} (see \cite[Proposition 7.4]{Ha3}).
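In particular, when $\cA$ is locally free and $\cB\cong\cO_X$, the isomorphism \eqref{Serre} specializes to the more familiar equality $h^i\big(\cA^\vee\otimes\omega_X\big)=h^{n-i}\big(\cA\big)$ for each $i$.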
If $X$ is an $n$--fold, we denote by $A^r(X)$ the group of cycles on $X$ of codimension $r$ modulo rational equivalence: in particular $A^1(X)\cong\Pic(X)$ (see \cite[Proposition 1.30]{Ei--Ha2}) and we set $A(X):=\bigoplus_{r\ge0}A^r(X)$. The Chern classes of a coherent sheaf $\cA$ on $X$ are elements in $A(X)$: in particular, when $\cA$ is locally free, then $c_1(\cA)$ is identified with $\det(\cA)$ via the isomorphism $A^1(X)\cong\Pic(X)$. If $X$ is a smooth curve, a smooth surface or a $3$--fold, then the Hirzebruch--Riemann--Roch formulas for a coherent sheaf $\cA$ are \begin{gather} \label{RRcurve} \chi(\cA)=\rk(\cA)\chi(\cO_X)+\deg(c_1(\cA)),\\ \label{RRsurface} \chi(\cA)=\rk(\cA)\chi(\cO_X)+{\frac12}c_1(\cA)^2-{\frac12}\omega_Xc_1(\cA)-c_2(\cA),\\ \label{RRgeneral} \begin{aligned} \chi(\cA)&=\rk(\cA)\chi(\cO_X)+{\frac16}(c_1(\cA)^3-3c_1(\cA)c_2(\cA)+3c_3(\cA))\\ &-{\frac14}(\omega_Xc_1(\cA)^2-2\omega_Xc_2(\cA))+{\frac1{12}}(\omega_X^2c_1(\cA)+c_2(\Omega^1_{X}) c_1(\cA)), \end{aligned} \end{gather} respectively (see \cite[Theorem 14.4]{Ei--Ha2}). Let $\cA$ be a rank two vector bundle on an $n$--fold $X$ and let $s\in H^0\big(\cA\big)$. In general its zero--locus $(s)_0\subseteq X$ is either empty or its codimension is at most $2$. We can always write $(s)_0=Y\cup Z$ where $Z$ has codimension $2$ (or it is empty) and $Y$ has pure codimension $1$ (or it is empty). In particular $\cA(-Y)$ has a section vanishing on $Z$, thus we can consider its Koszul complex \begin{equation} \label{seqSerre} 0\longrightarrow \cO_X(Y)\longrightarrow \cA\longrightarrow \cI_{Z\vert X}(-Y)\otimes\det(\cA)\longrightarrow 0. \end{equation} Sequence \eqref{seqSerre} tensored by $\cO_Z$ yields $\cI_{Z\vert X}/\cI^2_{Z\vert X}\cong\cA^\vee(Y)\otimes\cO_Z$, whence the normal bundle $\cN_{Z\vert X}:=(\cI_{Z\vert X}/\cI^2_{Z\vert X})^\vee$ of $Z$ inside $X$ satisfies \begin{equation} \label{Normal} \cN_{Z\vert X}\cong\cA(-Y)\otimes\cO_Z. \end{equation} If $Y=\emptyset$, then $Z$ is a local complete intersection inside $X$, because $\rk(\cA)=2$. In particular, it has no embedded components. The Serre correspondence reverses the above construction as follows. \begin{theorem} \label{tSerre} Let $X$ be an $n$--fold with $n\ge2$ and $Z\subseteq X$ a local complete intersection subscheme of codimension $2$. If $\det(\cN_{Z\vert X})\cong\cO_Z\otimes\mathcal L$ for some $\mathcal L\in\Pic(X)$ such that $h^2\big(\mathcal L^\vee\big)=0$, then there exists a rank two vector bundle $\cA$ on $X$ satisfying the following properties. \begin{enumerate} \item $\det(\cA)\cong\mathcal L$. \item $\cA$ has a section $s$ such that $Z$ coincides with the zero locus $(s)_0$ of $s$. \end{enumerate} Moreover, if $H^1\big({\mathcal L}^\vee\big)= 0$, the above two conditions determine $\cA$ up to isomorphism. \end{theorem} \begin{proof} See \cite{Ar}. \end{proof} Let $X$ be an $n$--fold with $n\ge1$ endowed with an ample line bundle $\cO_X(h)$. If $\cA$ is any torsion--free sheaf we define the {\sl slope of $\cA$ (with respect to $\cO_X(h)$)} as $$ \mu_h(\cA):=\frac{c_1(\cA)h^{n-1}}{\rk(\cA)}. $$ The torsion--free sheaf $\cA$ is {\sl $\mu$--semistable} (resp. {\sl $\mu$--stable}) if for all proper subsheaves $\cB$ with $0<\rk(\cB)<\rk(\cA)$ we have $\mu_h(\cB) \le \mu_h(\cA)$ (resp. $\mu_h(\cB)<\mu_h(\cA)$). Each $\mu$--stable bundle $\cA$ is {\sl simple}, i.e. $h^0\big(\cA\otimes\cA^\vee\big)=1$ (see \cite[Theorem II.1.2.9]{O--S--S}).
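Notice that, for a torsion--free sheaf $\cA$, we have $c_1(\cA(th))=c_1(\cA)+t\rk(\cA)h$, hence $\mu_h(\cA(th))=\mu_h(\cA)+th^n$: in particular $\cA$ is $\mu$--(semi)stable if and only if $\cA(th)$ is $\mu$--(semi)stable for some (hence each) $t\in\bZ$.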
\section{Instanton sheaves on projective spaces} \label{sSpace} In this section we deal with instanton sheaves on $\p n$ with respect to $\cO_{\p n}(1)$. We start by quickly dealing with the case $n=1$. \begin{remark} \label{rLine} If $\cE$ is an instanton sheaf with respect to $\cO_{\p1}(1)$, then it splits in the direct sum of a vector bundle plus its torsion subsheaf $\cT$. Thus $h^0\big(\cT\big)\le h^0\big(\cE(-h)\big)=0$, hence $\cT=0$ and $\cE$ is actually a vector bundle. Thanks to Definition \ref{dMalaspinion} one immediately deduces $$ \cE\cong\left\lbrace\begin{array}{ll} \cO_{\p1}^{\oplus \chi(\cE)}\quad&\text{if $\defect=0$,}\\ (\cO_{\p1}\oplus\cO_{\p1}(-1))^{\oplus \chi(\cE)}\quad&\text{if $\defect=1$:} \end{array}\right.\\ $$ hence the quantum number of $\cE$ is $\defect \chi(\cE)=\defect\rk(\cE)/2$. \end{remark} In what follows we assume $n\ge2$. We prove below a generalization of the well--known characterization of instanton sheaves (e.g. see \cite[Sections 1 and 2]{Ja} and \cite[Section 5.2]{El--Gr}) in terms of the cohomology of a very simple monad $\cM^\bullet$, i.e. a complex $$ 0\longrightarrow \cM^{-1}\longrightarrow \cM^{0}\longrightarrow \cM^{1}\longrightarrow0 $$ which is exact everywhere but in degree $0$. To this purpose we set \begin{equation} \label{MonadValue} \begin{gathered} \cM^{-1}:=\left\lbrace\begin{array}{ll} \cO_{\p n}(-1)^{\oplus k}\quad&\text{if $\defect=0$,}\\ \cO_{\p n}(-1)^{\oplus b_1-\chi(\cE)}\quad&\text{if $\defect=1$,} \end{array}\right.\\ \cM^{0}:=\left\lbrace\begin{array}{ll} \cO_{\p n}^{\oplus \chi(\cE)+(n+1)\quantum}\quad&\text{if $\defect=0$,}\\ \begin{aligned} \cO_{\p n}^{\oplus b_0}&\oplus \Omega_{\p n}^1(1)^{\oplus \quantum} \\ &\oplus\Omega_{\p n}^{n-1}(n-1)^{\oplus \quantum}\oplus\cO_{\p n}(-1)^{\oplus b_1} \end{aligned} \quad&\text{if $\defect=1$, $n\ge3$,}\\ \begin{aligned} \cO_{\p 2}^{\oplus b_0}\oplus \Omega_{\p 2}^1(1)^{\oplus \quantum}\oplus\cO_{\p 2}(-1)^{\oplus b_1} \end{aligned} \quad&\text{if $\defect=1$, $n=2$,} \end{array}\right.\\ \cM^{1}:=\left\lbrace\begin{array}{ll} \cO_{\p n}(1)^{\oplus k}\quad&\text{if $\defect=0$,}\\ \cO_{\p n}^{\oplus b_0-\chi(\cE)}\quad&\text{if $\defect=1$.} \end{array}\right. \end{gathered} \end{equation} \begin{proposition} \label{pSpace} Let $n\ge2$. For a non--zero coherent sheaf $\cE$ on $\p n$ the following assertions hold. \begin{enumerate} \item If $\cE$ is an instanton sheaf with respect to $\cO_{\p n}(1)$ with defect $\defect$ and quantum number $\quantum$, then $\cE$ is the cohomology of a monad $\cM^\bullet$ where the $\cM^i$'s are as in equalities \eqref{MonadValue} with, if $\defect=1$, $b_0\le h^0(\cE)$ and $b_1\le h^{n}(\cE(-n))$. \item If $\cE$ is the cohomology of a monad $\cM^\bullet$ where the $\cM^i$'s are as in equalities \eqref{MonadValue} with, if $\defect=1$, $b_0,b_1\ge\chi(\cE)$, then $\cE$ is an instanton sheaf with respect to $\cO_{\p n}(1)$ with defect $\defect$ and quantum number $\quantum$. \end{enumerate} \end{proposition} \begin{proof} Assume $\cE$ is an instanton bundle on $\p n$. If $\defect=0$, then \cite[Theorem 3]{Ja} implies the existence of a monad of the form $$ 0\longrightarrow\cO_{\p n}(-1)^{\oplus a}\longrightarrow\cO_{\p n}^{\oplus b}\longrightarrow\cO_{\p n}(1)^{\oplus c}\longrightarrow0, $$ whose cohomology is $\cE$ under the additional hypothesis that it is torsion--free. Such latter hypothesis is only used for proving that sequence \eqref{seqSection} tensored by $\cE(t)$ remains exact for a general hyperplane $Y\subseteq\p n$. 
Since the general hyperplane does not contain any associated point of $\cE$, it follows that the torsion--freeness hypothesis on $\cE$ can be removed. Splitting the monad above in the two short exact sequences \begin{gather*} 0\longrightarrow\cU\longrightarrow\cO_{\p n}^{\oplus b}\longrightarrow\cO_{\p n}(1)^{\oplus c}\longrightarrow0,\\ 0\longrightarrow\cO_{\p n}(-1)^{\oplus a}\longrightarrow\cU\longrightarrow\cE\longrightarrow0. \end{gather*} and taking their cohomology, possibly twisted by $\cO_X(-h)$ and $\cO_X(-nh)$, we obtain $$ c=h^1\big(\cE(-1)\big),\qquad a=h^{n-1}\big(\cE(-n)\big),\qquad \chi(\cE)=b-(n+1)c $$ hence $a=c=\quantum$ and $b=\chi(\cE)+(n+1)\quantum$. If $\defect=1$, then $\cE$ is the cohomology of a complex $\cM^\bullet$ which is everywhere exact but in degree $0$ and with $i^{\textrm {th}}$--term of the form $$ \widehat{\cM}^i:=\bigoplus_{p+q=i}H^q\big(\cE(p)\big)\otimes \Omega_{\p n}^{-p}(-p): $$ see \cite[Beilinson Theorem (strong form)]{A--O1}. In particular $\cM^i=0$ if $\vert i\vert\ge2$ and \begin{gather*} \widehat{\cM}^{-1}\cong \cO_{\p n}(-1)^{\oplus h^{n-1}(\cE(-n))},\qquad \widehat{\cM}^{1}\cong \cO_{\p n}^{\oplus h^{1}(\cE)},\\ \widehat{\cM}^0\cong\left\lbrace\begin{array}{ll} \cO_{\p n}^{\oplus h^0(\cE)}\oplus \left(\Omega_{\p n}^1(1)\oplus \Omega_{\p n}^{n-1}(n-1)\right)^{\oplus \quantum}\oplus\cO_{\p n}(-1)^{\oplus h^{n}(\cE(-n))},\quad &\text{if $n\ge3$,}\\ \cO_{\p 2}^{\oplus h^0(\cE)}\oplus \Omega_{\p 2}^1(1)^{\oplus \quantum}\oplus\cO_{\p 2}(-1)^{\oplus h^{2}(\cE(-2))},\quad &\text{if $n=2$.} \end{array}\right. \end{gather*} The equality $$ h^{0}\big(\cE\big)- h^{1}\big(\cE\big)=\chi(\cE)=(-1)^n\chi(\cE(-n))= h^{n}\big(\cE(-n)\big)- h^{n-1}\big(\cE(-n)\big). $$ implies $h^{1}\big(\cE\big)=h^0\big(\cE\big)-\chi(\cE)$, $h^{n-1}\big(\cE(-n)\big)=h^{n}\big(\cE(-n)\big)-\chi(\cE)$. Erasing isomorphic summands in the $\widehat{\cM}^i$'s if any, we finally obtain the monad $\cM^\bullet$ where $b_0\le h^0(\cE)$ and $b_1\le h^{n}(\cE(-n))$. Conversely, regardless of the value of $\defect$, if $\cE$ is the cohomology of the monad $\cM^\bullet$ whose $\cM^i$'s are as in equalities \eqref{MonadValue}, then we have the induced exact sequences \begin{equation} \label{DisplayM} \begin{gathered} 0\longrightarrow\cU\longrightarrow\cM^0\longrightarrow\cM^1\longrightarrow0,\\ 0\longrightarrow\cM^{-1}\longrightarrow\cU\longrightarrow\cE\longrightarrow0. \end{gathered} \end{equation} Computing the cohomologies of the above sequences \eqref{DisplayM} after suitable twists, one deduces that $\cE$ is an instanton bundle with defect $\defect$ and quantum number $\quantum$. \end{proof} Let $n\ge2$. In the following remarks $\cE$ is an instanton sheaf on $\p n$ with defect $\defect$ and quantum number $\quantum$. We define $\rk(\cE)$ as the rank of the stalk of $\cE$ at the generic point of $\p n$. We will see later on in Corollary \ref{cCharacteristic} that the support of $\cE$ is $\p n$, hence $\rk(\cE)\ge1$ (see \cite[Corollary of Lemma II.1.14]{O--S--S}). \begin{remark} \label{rRankChern} If $\cE$ is ordinary we deduce that $\rk(\cE)=\chi(\cE)+(n-1)\quantum$ thanks to a direct computation via the monad $\cM^\bullet$. If $\cE$ is non--ordinary we similarly obtain $$ \rk(\cE)=\left\lbrace\begin{array}{ll} 2\chi(\cE)+2n\quantum\quad&\text{if $n\ge3$,}\\ 2\chi(\cE)+2\quantum\quad&\text{if $n=2$.} \end{array}\right. $$ Thus non--ordinary instanton sheaves on $\p n$ have necessarily even rank. 
\end{remark} \begin{remark} \label{rChernPoly} Monad $\cM^\bullet$ and Remark \ref{rRankChern} imply that $$ c_t(\cE)=\left\lbrace\begin{array}{ll} \dfrac1{(1-t^2)^\quantum}\quad&\text{if $\defect=0$,}\\ \dfrac{(1-t)^{\frac{\rk(\cE)}2+\quantum}}{(1+t)^\quantum(1-2t)^\quantum}\quad&\text{if $\defect=1$, $n\ge3$,}\\ \dfrac{(1-t)^{\frac{\rk(\cE)}2}}{(1-t^2)^\quantum}\quad&\text{if $\defect=1$, $n=2$.} \end{array}\right. $$ Thus, via the canonical identification $A^i(\p n)\cong\bZ$, we obtain \begin{equation} \label{ChernP^n} c_1(\cE)=-\defect\dfrac{\rk(\cE)}2,\qquad c_2(\cE)=\epsilon\quantum+\defect\dfrac{\rk(\cE)(\rk(\cE)-2)}8 \end{equation} where $$ \epsilon=\left\{ \begin{array}{ll} 1\quad&\text{if $n=2$,}\\ 1+\defect\quad&\text{if $n\ge3$.} \end{array}\right. $$ \end{remark} \section{Instanton sheaves on irreducible projective schemes} \label{sGeneral} In this section we characterize instanton sheaves on projective schemes. We start the section by proving some general properties of instanton sheaves. \begin{proposition} \label{pQuantum} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton sheaf on $X$ with defect $\defect$ and quantum number $\quantum$, then it has natural cohomology with respect to $\cO_X(h)$ in shifts $\defect-n\le t\le -1$. In particular \begin{equation*} \quantum=-\chi(\cE(-h))=(-1)^{n-1}\chi(\cE((\defect-n)h)), \end{equation*} \end{proposition} \begin{proof} If $\cE$ is a $h$--instanton sheaf, then $h^i\big(\cE(-th)\big)=0$ if $0\le i\le n$ and $2\le t\le n-1-\defect$ or $i\ne1$ and $t=1$ or $i=n-1$ and $t=\defect-n$ by definition. \end{proof} The proof of the proposition below follows immediately from Definition \ref{dMalaspinion}. \begin{proposition} \label{pDirectSum} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. Every extension of two instanton sheaves on $X$ with the same defect is an instanton sheaf with the same defect and whose quantum number is the sum of the quantum numbers. \end{proposition} The main result of this section is the proof of Theorem \ref{tCharacterization}. \medbreak \noindent{\it Proof of Theorem \ref{tCharacterization}.} Assertion (2) follows trivially from assertion (1). Conversely, assertion (1) follows from assertion (2) thanks to Proposition \ref{pNatural}. We now show that assertions (1), (3) and (4) are equivalent. Assume assertion (1) holds. Thanks to \cite[Corollary III.11.2 and Exercises III.8.1, III.8.3]{Ha2} we have \begin{equation} \label{PushDown} h^i\big((p_*\cE)(t)\big)=h^i\big(\cE(th)\big) \end{equation} for each $i,t\in\bZ$. It follows that $p_*\cE$ is an instanton sheaf on $\p n$ with respect to $\cO_{\p n}(1)$. Thus assertion (3) is true. Trivially, assertion (3) implies assertion (4). Assume finally that assertion (4) holds. Thanks to Proposition \ref{pSpace} we know that $p_*\cE$ is the cohomology of monad $\cM^\bullet$ where $\cM^i$ is as in equalities \eqref{MonadValue}. The cohomology of the twists of the exact sequences \eqref{DisplayM} allows us to check that the values of $h^i\big(\cE(th)\big)=h^i\big((p_*\cE)(t)\big)$ are as in Definition \ref{dMalaspinion}. \qed \medbreak We list below some immediate corollaries of Theorem \ref{tCharacterization}. From now on we set $$ {m\choose n}:=\frac1{n!}{\prod_{i=0}^{n-1}(m-i)} $$ for each integer $n\ge1$. 
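Notice that, with this convention, ${m\choose n}$ is defined for each $m\in\bZ$ and it is a polynomial of degree $n$ in $m$: e.g. ${-1\choose n}=(-1)^n$, while for an integer $t$ we have ${t+n\choose n}=0$ exactly when $-n\le t\le-1$. Such values enter the formulas for $\chi(\cE(th))$ in the corollary below.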
\begin{corollary} \label{cCharacteristic} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton sheaf on $X$ the following assertions hold. \begin{enumerate} \item The support of $\cE$ is the underlying space of $X$. \item If the defect and quantum number of $\cE$ are $\defect$ and $\quantum$ respectively, then \begin{equation*} \chi(\cE(th))=\left\lbrace\begin{array}{ll} {\begin{aligned} (\chi(\cE)&+(n+1)\quantum)\left({{t+n}\choose n}+\defect{{t+n-\defect}\choose n}\right)\\ &-\quantum\left({{t+n+1}\choose n}+{{t+n-1-\defect}\choose n}\right) \end{aligned}}\quad&{\begin{aligned} &\text{if $n\ge2$ and\ \ }\\ &\text{ \ $(n,\defect)\ne(2,1)$,} \end{aligned}} \vspace{5pt}\\ \,(\chi(\cE)+\quantum)(t+1)^2-\quantum\quad&\text{if $(n,\defect)=(2,1)$,} \vspace{5pt}\\ \,\chi(\cE)(t+1+\defect t)\quad&\text{if $n=1$.} \end{array}\right. \end{equation*} \end{enumerate} \end{corollary} \begin{proof} We first prove assertion (2). To this purpose, let $p\colon X\to \p n$ be finite with $\cO_X(h)\cong p^*\cO_{\p n}(1)$. Taking into account that $\chi(\cE(th))=\chi((p_*\cE)(t))$ by equalities \eqref{PushDown}, the statement follows by combining Proposition \ref{pSpace} and Remark \ref{rLine}. We prove below assertion (1) for the case $\defect=1$ and $n\ge3$: in the other cases the argument is similar. If $\cE$ is supported on a proper subscheme, then the coefficient of $t^n$ in $\chi(\cE(th))$ must vanish. It follows that $\chi(\cE)=-n\quantum$, hence \begin{align*} \chi(\cE(th))=-\quantum\frac{n(n-1)(2t+n)}{(t-1)t(t+1)}{{t+n-2}\choose n}. \end{align*} Since $\cE$ is not the zero sheaf, it follows that $0<h^0\big(\cE(th)\big)=\chi(\cE(th))\le 0$ for $t\gg0$, a contradiction. \end{proof} \begin{corollary} \label{cUlrich} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton sheaf on $X$ with quantum number $\quantum$ and defect $\defect$, then the following assertions hold. \begin{enumerate} \item $\quantum=\defect h^1\big(\cE\big)=\defect h^{n-1}\big(\cE(-nh)\big)=0$ if and only if $\cE$ is a sheaf without intermediate cohomology. \item $\quantum=\defect=0$ if and only if $\cE$ is an Ulrich sheaf. \end{enumerate} \end{corollary} \begin{proof} If $\cE$ is without intermediate cohomology, then we certainly have the vanishings in assertion (1). Conversely, if such vanishings hold, then $\cE$ is without intermediate cohomology thanks to Proposition \ref{pNatural}. If $\quantum=\defect=0$, then $\cE$ is trivially Ulrich. Conversely, let $\cE$ be Ulrich. Thus $\quantum=-h^1\big(\cE(-h)\big)=0$. If $p\colon X\to\p n$ is finite and $\cO_X(h)\cong p^*\cO_{\p n}(1)$, then $p_*\cE\cong\cO_{\p n}^{\oplus \chi(\cE)}$ by \cite[Proposition 2.1]{E--S--W}, hence $c_1(p_*\cE)=0$. Thus Remark \ref{rChernPoly} implies that the defect of $p_*\cE$ vanishes, hence $\defect=0$. \end{proof} If $X$ is an irreducible projective scheme of dimension $n\ge2$ and $\cO_X(h)$ is ample and globally generated, then \cite[Corollaire I.6.11 3)]{Jou} implies that each general $Y\in \vert h\vert$ is irreducible. In particular it makes sense to ask if the restriction of a $h$--instanton sheaf to $Y$ is still an instanton sheaf. \begin{corollary} \label{cRestriction} Let $X$ be a projective scheme of dimension $n\ge3$ endowed with an ample and globally generated line bundle $\cO_X(h)$. 
If $Y\in \vert h\vert$ is general and $\cE$ is an instanton sheaf with respect to $\cO_X(h)$, then $\cO_Y\otimes\cE$ is an instanton sheaf with respect to $\cO_Y\otimes\cO_X(h)$. Moreover, the following assertions hold.
\begin{enumerate}
\item The defect of $\cO_Y\otimes\cE$ coincides with the defect $\defect$ of $\cE$.
\item The quantum number of $\cO_Y\otimes\cE$ coincides with the quantum number $\quantum$ of $\cE$ if $(n,\defect)\ne(3,1)$ and with $2\quantum$ if $(n,\defect)=(3,1)$.
\end{enumerate}
\end{corollary}
\begin{proof}
Let $\cE_Y:=\cO_Y\otimes\cE$, $\cO_Y(h_Y):=\cO_Y\otimes\cO_X(h)$. Let $p\colon X\to\p n$ be a finite morphism such that $\cO_X(h)\cong p^*\cO_{\p n}(1)$. If $Y=p^{-1}(H)$ for some hyperplane $H\subseteq \p n$, then the restriction $p_Y\colon Y\to H$ of $p$ is still finite. Moreover, both $p$ and $p_Y$ are affine, hence for each open affine subset $\cU\cong\spec(A)\subseteq\p n$, let $\cV\cong\spec(B)=p^{-1}(\cU)$ and $H\cap \cU\cong\spec(A/I)$ for some ideal $I\subseteq A$, so that $Y\cap \cV\cong\spec(B\otimes_AA/I)$. If the restriction of $\cE$ to $\cV$ is $\widetilde{M}$, then the canonical isomorphism of $A$--modules $M\otimes_B(B\otimes_AA/I)\cong M\otimes_AA/I$ holds true. Glueing together all such canonical isomorphisms we obtain $(p_Y)_*\cE_Y\cong \cO_{H}\otimes(p_*\cE)$.
If $p_*\cE$ is the cohomology of monad $\cM^\bullet$ as in Proposition \ref{pSpace} and $Y\in\vert h\vert$ does not contain any associated point of $\cE$, then $(p_Y)_*\cE_Y$ is the cohomology of monad $\cO_{H}\otimes\cM^\bullet$. Notice that if $\defect=1$, then $\Omega_{\p n}^{n-1}(n-1)\cong(\Omega_{\p n}^1(2))^\vee$, hence
\begin{gather*}
\cO_{H}\otimes\Omega_{\p n}^1(1)\cong\cO_{\p{n-1}}\oplus\Omega_{\p{n-1}}^1(1),\\
\cO_{H}\otimes\Omega_{\p n}^{n-1}(n-1)\cong\cO_{\p{n-1}}(-1)\oplus\Omega_{\p{n-1}}^{n-2}(n-2).
\end{gather*}
In particular $\cO_{H}\otimes\cM^\bullet$ has the same shape as $\cM^\bullet$. Thus $(p_Y)_*\cE_Y$ is still an instanton sheaf on $H\cong\p{n-1}$ with defect $\defect$ by Proposition \ref{pSpace}, hence $\cE_Y$ is an instanton sheaf on $Y$. Moreover $\Omega_{\p{n-1}}^1(1)\not\cong\Omega_{\p{n-1}}^{n-2}(n-2)$ if and only if $n\ge4$, hence the assertion on the quantum number of $\cE_Y$ follows.
\end{proof}
The restriction $n\ge3$ is sharp. Indeed if $\cE$ is an instanton sheaf on $\p2$ with defect $\defect$, rank $(1+\defect)a$ and quantum number $\quantum\ne \defect a$, its restriction to a line cannot be an instanton sheaf due to Remark \ref{rLine}.
\section{Resolutions of instanton sheaves\\ on embedded irreducible projective schemes}
\label{sResolution}
In this section we study the resolutions of instanton sheaves on embedded irreducible projective schemes: if $V\subseteq H^0\big(\cO_X(h)\big)$ is a subspace associated to an embedding $X\subseteq\p N$, we are interested in the resolution of $H^0_*\big(\cE\big)$ as a module over the symmetric algebra $S$ of $V$ over the base field, when $\cE$ is an instanton sheaf.
As a first step, we bound $\reg(\cE)$ from above in terms of cohomology of $\cE$: see also Remark \ref{rSpace1} for a comparison of the bound below for non--ordinary instanton sheaves with the one in \cite[Theorem 3.2]{C--MR1}.
\begin{proposition}
\label{pRegularity}
Let $X$ be an irreducible projective scheme of dimension $n\ge2$ endowed with a very ample line bundle $\cO_X(h)$.
If $\cE$ is a $h$--instanton sheaf with defect $\defect$, then $\reg(\cE)\le h^1\big(\cE((\defect-1)h)\big)+\defect$.
\end{proposition}
\begin{proof}
In what follows we explain the argument only for $\defect=1$, because in the case $\defect=0$ it is similar (and well--known: see \cite[Corollary 3.3]{C--MR1}). Notice that $h^i\big(\cE((1-i)h)\big)=0$ for $i\ge2$ by definition, hence it suffices to check that also $h^1\big(\cE((w-1)h)\big)=0$ for $w=h^1\big(\cE\big)+1$.
If $p\colon X\to\p n$ is any finite morphism such that $\cO_X(h)\cong p^*\cO_{\p n}(1)$, then $h^1\big(\cE((w-1)h)\big)=h^1\big((p_*\cE)(w-1)\big)$, hence it suffices to deal with $X=\p n$, $\cO_X(h)=\cO_{\p n}(1)$ and $V=H^0\big(\cO_{\p n}(1)\big)$.
Let $\quantum$ be the quantum number of $\cE$. Thanks to Proposition \ref{pSpace}, we know that $\cE$ is the cohomology of a monad $\cM^\bullet$ whose sheaves are as in equalities \eqref{MonadValue}. If the restriction of the map $\cM^{0}\to\cM^{1}$ to the direct summand $\cO_{\p n}^{\oplus b_0}$ is non--zero, then such a morphism necessarily splits, hence we can assume that such a map vanishes. By splitting ${\cM}^\bullet$, we obtain sequences as in \eqref{DisplayM}. Their cohomologies return $h^1\big(\cE(-t)\big)=h^1\big(\cU(-t)\big)=0$ for $t\ge-1$ and
\begin{equation}
\label{seqLong}
\begin{aligned}
0\longrightarrow H^0_*\big({\cU}\big)&\longrightarrow S^{\oplus b_0}\oplus H^0_*\big(\Omega_{\p n}^1(1)\oplus \Omega_{\p n}^{n-1}(n-1)\big)^{\oplus \quantum}\oplus S(-1)^{\oplus b_1}\\
&\mapright B S^{\oplus g}\longrightarrow H^1_*\big({\cU}\big)\longrightarrow H^1_*\big(\Omega_{\p n}^1(1)\big)^{\oplus \quantum}\longrightarrow0
\end{aligned}
\end{equation}
where $g:=b_0-\chi(\cE)\le h^1\big(\cE\big)=w-1$ and the restriction of $B$ to $S^{\oplus b_0}$ is zero.
The exact sequence
\begin{equation*}\label{Eulerpn}
0 \longrightarrow \Omega_{\p n}^p(p-1) \longrightarrow \cO_{\p n}^{{n+1}\choose{p}}(-1)\longrightarrow \Omega_{\p n}^{p-1}(p-1)\longrightarrow 0
\end{equation*}
with $p=n$ and $p=2$ yields surjective maps
$$
S(-1)^{\oplus n+1}\longrightarrow H^0_*\big( \Omega_{\p n}^{n-1}(n-1)\big), \qquad S(-1)^{\oplus {{n+1\choose2}}}\longrightarrow H^0_*\big( \Omega_{\p n}^{1}(1)\big).
$$
Moreover, $H^1_*\big(\Omega_{\p n}^1(1)\big)^{\oplus \quantum}$ is isomorphic to the direct sum of $\quantum$ copies of the base field concentrated in degree $-1$. Thus the last non--zero morphism on the right in sequence \eqref{seqLong} splits and we finally obtain an exact sequence of the form
\begin{equation}
\label{seqEN}
S(-1)^{\oplus f} \mapright{A}S^{\oplus g}\longrightarrow \bigoplus_{t\ge0} H^1\big({\cU}(t)\big)\longrightarrow0,
\end{equation}
where $f:={{n+2\choose2}}+b_1$. Let $\cF:=\cO_{\p n}^{\oplus f}$, $\cG:=\cO_{\p n}(1)^{\oplus g}$. Thus we have an induced surjective morphism $\alpha\colon\cF\to\cG$, because the $S$--module on the right in sequence \eqref{seqEN} has finite length.
The Eagon--Northcott complex
\begin{align*}
0\longrightarrow\wedge^f\cF\otimes S^{f-g}\cG^\vee&\mapright{\alpha_{f-g}}\wedge^{f-1}\cF\otimes S^{f-g-1}\cG^\vee\mapright{\alpha_{f-g-1}}\dots\\
&\mapright{\alpha_2}\wedge^{g+1}\cF\otimes \cG^\vee\mapright{\alpha_1}\wedge^g\cF\mapright{\wedge^g\alpha}\wedge^g\cG\longrightarrow0
\end{align*}
is exact thanks to \cite[Proposition 3]{Ea--No}. We have $\wedge^g\cG\cong\cO_{\p n}(g)$ and $\wedge^{g+i}\cF\otimes S^{i}\cG^\vee\cong\cO_{\p n}(-i)^{\oplus e_i}$ for some integers $e_i$. Thus $H^i(\ker \alpha_i)=0$ for all $i\ge1$, hence $H^0\big(\wedge^g\cF\big)\to H^0\big(\wedge^g\cG\big)$ is surjective.
In particular, we deduce that the component $S_g\subseteq S$ of degree $g$ is generated by the maximal minors of the matrix of $A$ and it is not difficult to check that $\bigoplus_{t\ge g}S_t^{\oplus g}\subseteq S^{\oplus g}$ is contained in $\im(A)$. It follows that $h^1\big(\cE(g)\big)=h^1\big(\cU(g)\big)=0$, hence $\cE$ is certainly $(g+1)$--regular. Thus $\cE$ is also $w$--regular thanks to \cite[Corollary 4.18]{Ei} and the obvious inequality $w\ge g+1$.
\end{proof} Recall that in the introduction we defined the numbers \begin{gather*} v(\cE):=\min\{\ t\in\bZ\ \vert\ h^0\big(\cE(th)\big)\ne0\ \},\qquad w(\cE):=h^1\big(\cE((\defect-1)h)\big)+\defect. \end{gather*} The following corollary is immediate by \cite[Corollary 4.18]{Ei}. \begin{corollary} \label{cRegularity} Let $X$ be an irreducible projective scheme of dimension $n\ge2$ endowed with a very ample line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton sheaf, then $\cE(w(\cE))$ is globally generated. In particular, if $\cE$ is ordinary and its quantum number is $\quantum$, then $\cE(\quantum)$ is globally generated. \end{corollary} We prove Theorem \ref{tResolution} below. \medbreak \noindent{\it Proof of Theorem \ref{tResolution}.} Let $e_{0,1},\dots,e_{0,m_0}\in E_{0}:=H^0_*\big(\cE\big)$ be a minimal set of generators as $S$--module such that the sequence $\deg(e_{0,j})=\nu_{0,j}$ is non--decreasing. There is a surjective morphism $F_0:=\bigoplus_{j=1}^{m_0}S(-\nu_{0,j})\to E_0$: by definition we have $\nu_{0,1}\ge v(\cE)\ge0$ and Corollary \ref{cRegularity} implies $\nu_{0,m_0}\le w(\cE)$. By sheafification, we obtain a short exact sequence $$ 0\longrightarrow \cE_1\longrightarrow \cF_{0}\mapright{\varphi_{0}} \cE\longrightarrow0. $$ The definition of $\varphi_0$ yields $H^1_*\big(\cE_1\big)=0$. Moreover, the cohomology of the above sequence twisted by $\cO_{\p N}(v(\cE))$ implies $h^0\big(\cE(v(\cE))\big)=0$, hence the minimal generators of $E_{1}:=H^0_*\big(\cE_1\big)$ have degree $\nu_{0,1}+1\ge v(\cE)+1\ge1$ at least. The cohomology of the above sequence with suitable twists also yields $\reg(\cE_1)\le w(\cE)+1$. By induction, the same argument leads for $0\le p\le N-1$ to exact sequences \begin{equation} \label{seqE} 0\longrightarrow \cE_p\longrightarrow \cF_{p-1}\mapright{\varphi_{p-1}} \cE_{p-1}\longrightarrow0 \end{equation} such that $\cF_p\cong \bigoplus_{j= v(\cE)}^{w(\cE)}\cO_{\p N}(-j-p)^{\oplus \beta_{p,j}}$ and $\varphi_{p-1}$ is surjective on global sections. It follows that $H^1_*\big(\cE_p\big)=0$ for $1\le p\le N-1$. Moreover, the cohomologies of sequences \eqref{seqE} yield $H^i_*\big(\cE_p\big)=H^{i-1}_*\big(\cE_{p-1}\big)$ for $2\le i\le N-1$, hence $H^i_*\big(\cE_{N-1}\big)=H^1_*\big(\cE_{N-i}\big)=0$ for $1\le i\le N-1$. The Horrocks theorem implies that $\cE_{N-1}$ splits as a sum of line bundles. Since $H^1_*\big(\cE_p\big)=0$ for $1\le p\le N-1$, it follows that we can glue together the twisted cohomologies of sequences \eqref{seqE} obtaining sequence \eqref{Resolution}. \qed \medbreak \begin{corollary} \label{cResolutionSheaf} Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with a very ample line bundle $\cO_X(h)$. Let $V\subseteq H^0\big(\cO_X(h)\big)$ be a subspace associated to an embedding $X\subseteq\p N$. If $\cE$ is a $h$--instanton sheaf, then there exists an exact sequence of the form \begin{equation} \label{ResolutionSheaf} 0\longrightarrow \cF_{N}\longrightarrow \cF_{N-1}\longrightarrow\dots\longrightarrow \cF_1\longrightarrow \cF_0\longrightarrow \cE\longrightarrow0, \end{equation} where $\cF_p\cong \cO_{\p N}(-w(\cE)-p)^{\oplus \beta_{p}}$. \end{corollary} \begin{proof} Each hyperplane not passing through any of the associated points of $X$ corresponds to a section in $V$ which is not a zero--divisor in ${H^0_*\big(\cE\big)}$, hence in ${\bigoplus_{t\ge w(\cE)}H^0\big(\cE(th)\big)}$. In particular, the depth of the latter is at least $1$, hence the statement follows from \cite[Theorem 1.2 (1)]{Ei--Go}. 
\end{proof}
\begin{remark}
\label{rResolution}
If $\cE$ is a $h$--instanton sheaf with resolution \eqref{Resolution}, then Theorem \ref{tResolution} and Corollary \ref{cCharacteristic} yield linear equations in the $\beta_{p,i}$'s. Similar constraints can be obtained for the $\beta_p$'s in resolution \eqref{ResolutionSheaf}.\end{remark}
Let $\cE$ be a $h$--instanton sheaf with natural cohomology in positive degrees. The following corollary generalizes the results in \cite{Rah1,Rah2}.
\begin{corollary}
\label{cResolution}
Let $X$ be an irreducible projective scheme of dimension $n\ge1$ endowed with a very ample line bundle $\cO_X(h)$, let $V\subseteq H^0\big(\cO_X(h)\big)$ be a subspace associated to an embedding $X\subseteq\p N$ and let $S$ be the symmetric algebra of $V$ over the base field. If $\cE$ is a $h$--instanton sheaf with natural cohomology in positive degrees, then the minimal free resolution of $H^0_*\big(\cE\big)$ as $S$--module has the form
\begin{equation*}
0\longrightarrow F_{N-1}\longrightarrow F_{N-2}\longrightarrow\dots\longrightarrow F_1\longrightarrow F_0\longrightarrow H^0_*\big(\cE\big)\longrightarrow0,
\end{equation*}
where $F_p\cong S(-v(\cE)-p)^{\oplus \beta_{p,0}}\oplus S(-v(\cE)-p-1)^{\oplus \beta_{p,1}}$.
\end{corollary}
\begin{proof}
The statement follows from the inequality $\reg(\cE)\le v(\cE)+1$, because we know from Theorem \ref{tResolution} that the projective dimension of $H^0_*\big(\cE\big)$ over $S$ is $N-1$ at most.
\end{proof}
\section{Instanton bundles on $n$--folds}
\label{sBundle}
In this section we focus our attention on instanton bundles supported on an $n$--fold $X$, i.e. a smooth variety of dimension $n$. In particular $\omega_X\cong\cO_X(K_X)$ is a line bundle. Notice that each finite morphism $p\colon X\to \p n$ is flat, thanks to \cite[Exercise III.9.3]{Ha2}. If $\cO_X(h)\cong p^*\cO_{\p n}(1)$, then the degree of $p$ is $h^n$. Thus $\rk(p_*\cE)=\rk(\cE)h^n$ for each vector bundle $\cE$ on $X$.
We start by studying the case of smooth curves integrating Remark \ref{rLine}.
\begin{proposition}
\label{pCurve}
Let $X$ be a smooth curve endowed with an ample and globally generated line bundle $\cO_X(h)$ and let $\cE$ be a $h$--instanton sheaf on $X$ with defect $\defect$. Assume that the characteristic of the base field is not $2$. The following assertions hold.
\begin{enumerate}
\item Every instanton sheaf is a vector bundle.
\item $X$ supports ordinary instanton sheaves of each rank.
\item $X$ supports non--ordinary instanton sheaves of each even rank.
\item The quantum number of $\cE$ is $\frac\defect2\rk(\cE)\deg(X)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertion (1) can be proved as the analogous assertion in Remark \ref{rLine}.
Let $g$ be the genus of $X$. If $\theta$ is a non--effective theta--characteristic on $X$, then it is easy to check that $\cO_X(\theta+h)^{\oplus a}$ and $(\cO_X(\theta)\oplus\cO_X(\theta+h))^{\oplus a}$ are respectively an ordinary and a non--ordinary instanton sheaf. Thus assertions (2) and (3) are proved.
If $p\colon X\to \p 1$ is any finite morphism, then $p_*\cE$ is an $\cO_{\p 1}(1)$--instanton bundle by Theorem \ref{tCharacterization} with $\rk(p_*\cE)=\rk(\cE)\deg(X)$. Thanks to Remark \ref{rLine}, its quantum number is $\defect\rk(p_*\cE)/2$. Thus the quantum number of $\cE$ is as in assertion (4).
\end{proof}
Though only smooth curves of even degree can support non--ordinary $h$--instanton line bundles (and, more generally, non--ordinary $h$--instanton bundles of odd rank), the following example shows that each smooth curve can be endowed with many very ample line bundles in such a way that it supports non--ordinary $h$--instanton line bundles.
\begin{example}
\label{eCurve}
Assume $X$ is a smooth curve and let $g$ be its genus. Let $\theta\in\Pic^{g-1}(X)$ be any non--effective theta--characteristic and $\cO_X(A)$ be any effective ample line bundle such that $\cO_X(h):=\cO_X(2A)$ is very ample. Thus $\cO_X(\theta+A)$ is a non--ordinary $h$--instanton line bundle.
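For instance, if $X$ is a complex elliptic curve, $\theta\in\Pic^{0}(X)$ is a non--trivial line bundle with $\theta^{\otimes2}\cong\cO_X$ (hence a non--effective theta--characteristic, because $\omega_X\cong\cO_X$) and $A$ is any effective divisor of degree $2$, then $\cO_X(h):=\cO_X(2A)$ is very ample of degree $4$ and the quantum number of $\cO_X(\theta+A)$ is $-\chi(\cO_X(\theta-A))=2$, in accordance with Proposition \ref{pCurve}.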
\end{example} Our first general result in this section is that instanton bundles with positive quantum number on an $n$--fold must have sufficiently large rank. \begin{proposition} \label{pHorrocks} Let $X$ be an $n$--fold with $n\ge4$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton bundle with quantum number $\quantum$ on $X$, then the following assertions hold. \begin{enumerate} \item If $\rk(\cE)h^n<2\left[\frac n2\right]$, then $\cE$ is aCM. In particular, $\quantum=0$. \item If $\cE$ is ordinary, and $\quantum\ge1$, then $\rk(\cE)h^n\ge n-1$. \item If $\cE$ is ordinary, $\rk(\cE)h^n= n-1$ and $n$ is even, then $\cE$ is Ulrich. \end{enumerate} \end{proposition} \begin{proof} Let $p\colon X\to \p n$ be any finite morphism with $\cO_X(h)\cong p^*\cO_{\p n}(1)$, then $p_*\cE$ is a $\cO_{\p n}(1)$--instanton bundle on $\p n$ with $\rk(p_*\cE)=\rk(\cE)h^n$. Theorem \ref{tCharacterization} yields $H^i_*\big(p_*\cE\big)=0$ for $2\le i\le n-2$, hence \cite[Theorem 1]{MK--P--R} implies that $H^i_*\big(\cE\big)=H^i_*\big(p_*\cE\big)=0$ for $1\le i\le n-1$. The vanishings follow from Corollary \ref{cUlrich}. Thus assertion (1) holds. If $\cE$ is ordinary, then the same holds for $p_*\cE$. Thus $p_*\cE$ is the cohomology of a monad $\cM^\bullet$ as in Proposition \eqref{pSpace} and assertion (2) follows from \cite[Main Theorem]{Flo}. Assertion (3) follows by combining assertion (1) with Corollary \ref{cUlrich}. \end{proof} For instanton bundles the following partial converse of Corollary \ref{cRestriction} holds. \begin{proposition} \label{pExtension} Let $X$ be an $n$--fold with $n\ge5$ endowed with an ample and globally generated line bundle $\cO_X(h)$ and let $Y\in\vert h\vert$ be an $(n-1)$--fold. If $\cE$ is a vector bundle on $X$ such that $\cO_Y\otimes\cE$ is an instanton bundle on $Y$ with respect to $\cO_Y\otimes\cO_X(h)$, then $\cE$ is an instanton bundle with respect to $\cO_X(h)$. Moreover, the defects and the quantum numbers of $\cE$ and $\cO_Y\otimes\cE$ coincide. \end{proposition} \begin{proof} Let $\cE_Y:=\cO_Y\otimes\cE$, $\cO_Y(h_Y):=\cO_Y\otimes\cO_X(h)$ and denote by $\defect$ and $\quantum$ the defect and the quantum number of $\cE_Y$. We have $h^i\big(\cE(-(1+t)h)\big)\le h^i\big(\cE(-(2+t)h)\big)$ for $t\ge i$ and $i\le n-3$ by computing the cohomology of sequence \eqref{seqSection} tensored by $\cE(th)$, hence $h^i\big(\cE(-(1+t)h)\big)=0$ in the same range by \cite[Theorem III.5.2]{Ha2}. If $2\le i\le n-3$, then $h^i\big(\cE_Y(-ih_Y)\big)=0$, hence the cohomology of sequence \eqref{seqSection} tensored by $\cE(-ih)$ combined with the same argument used above implies $h^i\big(\cE(-ih)\big)=0$ in that range. The same argument for $\cE_Y^\vee((\defect-n) h_Y+K_Y)$ yields $h^i\big(\cE((\defect-i-t+t)h)\big)=0$ for $i\ge2$ and $t\ge 1$ and $h^{n-2}\big(\cE(-(n-2)h)\big)=0$. Since $n\ge 5$, it follows that $h^i\big(\cE(-th)\big)=0$ for $i\in \bZ$ and $2\le t\le \defect+1-n$. Thus the cohomology of sequence \eqref{seqSection} tensored by $\cE(-h)$ and $\cE((\defect+1-n)h)$ and the equality $h^1\big(\cE_Y(-h_Y)\big)=h^{n-2}\big(\cE_Y((\defect+1-n)h_Y)\big)$ finally yield $$ h^1\big(\cE(-h)\big)=h^{n-1}\big(\cE((\defect-n)h)\big). $$ The same argument yields the equality $\chi(\cE)=(-1)^n\chi(\cE(-nh))$ if $\defect=1$. \end{proof} The main result of this section is the proof of Theorem \ref{tSlope}. 
\medbreak
\noindent{\it Proof of Theorem \ref{tSlope}.}
Thanks to Theorem \ref{tCharacterization}, it suffices to check that $\cE$ also satisfies the additional condition
\begin{equation}
\label{A}
(1-\defect)(\chi(\cE(-h))-(-1)^n\chi(\cE((\defect-n)h)))+\defect(\chi(\cE)-(-1)^n\chi(\cE(-nh)))=0
\end{equation}
if and only if equality \eqref{Slope} holds.
In what follows we will provide a proof of the statement in the case $\defect=1$. The argument in the case $\defect=0$ is analogous. Thus, from now on, we will assume that the following conditions hold:
\begin{itemize}
\item $h^0\big(\cE(-h)\big)=h^n\big(\cE((1-n)h)\big)=0$;
\item $h^i\big(\cE(-(i+1)h)\big)=h^{n-i}\big(\cE((1-n+i)h)\big)=0$ if $1\le i\le n-2$;
\item $h^i\big(\cE(-ih)\big)=0$ for $2\le i\le n-2$;
\item $ h^1\big(\cE(-h)\big)= h^{n-1}\big(\cE((1-n)h)\big)$.
\end{itemize}
We first prove the statement for $1\le n\le2$ and then proceed by induction for $n\ge3$.
If $n=2$, equality \eqref{A} becomes $\chi(\cE)=\chi(\cE(-2h))$. The statement then follows by computing the two sides of this identity by means of equality \eqref{RRsurface}. If $n=1$ we can argue similarly by using equality \eqref{RRcurve} instead of \eqref{RRsurface} (see also \cite[Lemma 2.4]{C--H2}).
Let $n\ge3$. Each general $Y\in\vert h\vert$ is an $(n-1)$--fold by the Bertini theorem (see \cite[Corollaire I.6.11 2), 3)]{Jou}) and we set $\cO_Y(h_Y):=\cO_Y\otimes\cO_X(h)$, $\cE_Y:=\cO_Y\otimes\cE$. Assume that the statement holds on such a $Y$. Thus
$$
(n+1-\defect)h^n+K_Xh^{n-1}=(n-\defect)h^{n-1}_Y+K_Yh^{n-2}_Y
$$
thanks to the adjunction formula on $X$. It follows that the equality \eqref{Slope} holds for the bundle $\cE$ on $X$ if and only if it holds for the bundle $\cE_Y$ on $Y$. Moreover, the cohomology of sequence \eqref{seqSection} tensored by the shifts of $\cE$ yields that $\cE_Y$ satisfies a list of conditions similar to the one for $\cE$.
If $\cE$ is a non--ordinary $h$--instanton bundle, then $\cE_Y$ is a non--ordinary $h_Y$--instanton bundle thanks to Corollary \ref{cRestriction}, hence equality \eqref{Slope} holds for $\cE_Y$ by induction. It follows that equality \eqref{Slope} holds for $\cE$ too.
Conversely, let $\cE$ satisfy equality \eqref{Slope}. By induction we know that $\cE_Y$ is a $h_Y$--instanton bundle on $Y$, hence $\chi(\cE_Y)=\chi\big(\cE_Y((1-n)h_Y)\big)$. Thus, tensoring sequence \eqref{seqSection} by $\cE$ and $\cE((1-n)h)$, we obtain
$$
\chi(\cE)-(-1)^n\chi(\cE(-nh))=\chi(\cE(-h))-(-1)^{n-1}\chi(\cE((1-n)h))=0
$$
thanks to Proposition \ref{pQuantum}. Thus $\cE$ is a $h$--instanton sheaf on $X$.
\qed
\medbreak
\begin{remark}
\label{rSlope}
Let $\cE$ be a $h$--instanton bundle on an $n$--fold $X$ and consider any finite morphism $p\colon X\to \p n$ such that $\cO_X(h)\cong p^*\cO_{\p n}(1)$. If the characteristic of the base field is $0$, then the Bertini theorem (see \cite[Corollaire I.6.11 2), 3)]{Jou}) implies that $S:=p^{-1}(H)$ is a smooth surface such that
$$
\chi(\cO_S)=\sum_{i=0}^{n-2}(-1)^i{{n-2}\choose i}\chi(\cO_X(-ih))
$$
when $\p2\cong H\subseteq \p n$ is a general linear subspace. Since $X$ is smooth, it follows that $p$ is flat, hence $p_*\cE$ is locally free of rank $\rk(p_*\cE)=\rk(\cE)h^n$. Thus Remark \ref{rRankChern} yields
$$
\chi(\cO_S\otimes\cE)=\frac{\rk(\cE)}{1+\defect}h^n-\epsilon k
$$
where $\epsilon$ is as in Remark \ref{rChernPoly}.
Equality \eqref{RRsurface} for $\cO_S\otimes\cE$ and the adjunction formula on $H$ then imply \begin{equation*} \begin{aligned} \epsilon\quantum&=\left(c_2(\cE)-\frac12c_1(\cE)(c_1(\cE)-K_X-(n-2)h)\right)h^{n-2}\\ &+\rk(\cE)\left(\frac {1}{1+\defect}h^n-\sum_{i=0}^{n-2}(-1)^i{{n-2}\choose i}\chi(\cO_X(-ih))\right). \end{aligned} \end{equation*} When $X\cong\p n$ the above coincides with the second equality \eqref{ChernP^n}. Similarly, one can obtain equalities involving the quantum number and the first $m$ Chern classes of each $h$--instanton sheaf on $X$ when $n\ge m$. \end{remark} If $\cF$ is a vector bundle on $X$, we define its {\sl Ulrich dual (with respect to $\cO_X(h)$)} as $\cF^{U,h}:=\cF^\vee((n+1)h+K_X)$: trivially $(\cF^{U,h})^{U,h}\cong\cF$. Moreover, instanton bundles are preserved by Ulrich duality. \begin{proposition} \label{pDual} Let $X$ be an $n$--fold with $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a vector bundle on $X$, then $\cE$ is a $h$--instanton bundle with defect $\defect$ and quantum number $\quantum$ if and only if the same is true for $\cE^{U,h}(-\defect h)$. \end{proposition} \begin{proof} The statement follows from equality \eqref{Serre} because $\cE$ is a vector bundle. \end{proof} Recall that a rank two $h$--instanton bundle is called orientable if it has defect $\defect$ and $c_1(\cE)=(n+1-\defect)h+K_X$ in $A^1(X)\cong\Pic(X)$. \begin{proposition} \label{pSpecial} Let $X$ be an $n$--fold with $n\ge1$ endowed with an ample and globally generated line bundle $\cO_X(h)$. The rank two vector bundle $\cE$ on $X$ is an orientable $h$--instanton bundle with defect $\defect$ if and only if the following conditions hold: \begin{itemize} \item $c_1(\cE)=(n+1-\defect)h+K_X$; \item $h^0\big(\cE(-h)\big)=0$; \item $h^i\big(\cE(-(i+1)h)\big)=0$ if $1\le i\le n-2$; \item $\defect h^i\big(\cE(-ih)\big)=0$ for $2\le i\le n-2$. \end{itemize} \end{proposition} \begin{proof} The statement follows by combining the definition of rank two orientable $h$--instanton bundle, equality \eqref{Serre} and Theorem \ref{tSlope}. \end{proof} \begin{remark} \label{rAM2} When $n=3$, it follows from the above proposition that $\cF$ is a $h$--instanton bundle as defined in \cite{A--M2} if and only if $\cE:=\cF(h)$ is a rank two orientable, $\mu$--semistable, ordinary $h$--instanton bundle. \end{remark} The proof of the following corollary is immediate. \begin{corollary} \label{cSpecial} Let $X$ be an $n$--fold with $1\le n\le 2$ endowed with an ample and globally generated line bundle $\cO_X(h)$. The rank two vector bundle $\cE$ on $X$ is an orientable $h$--instanton bundle with defect $\defect$ if and only if $c_1(\cE)=(n+1-\defect)h+K_X$ and $h^0\big(\cE(-h)\big)=0$. \end{corollary} It is immediate to check that if $\theta$ is a non--effective theta--characteristic on a smooth curve $X$, then $\cO_X(\theta+(1-\defect)h)\oplus\cO_X(\theta+h)$ is a rank two orientable $h$--instanton bundle with defect $\defect$. In \cite[Introduction]{K--M--S} the notion of $\delta$--Ulrich sheaf on a projective scheme $X$ is defined, proving that if $X$ is a normal surface such sheaves are instanton sheaves and exist if $X$ is aCM (see \cite[Proposition 5.1 and Theorem A]{K--M--S}). Nevertheless, the existence of $\delta$--Ulrich sheaves on each smooth surface and their admissible ranks are actually open problems. In the following examples we show that smooth surfaces always support rank two orientable $h$--instanton bundles with large enough quantum number. 
If $\cE$ is such a bundle, then its intermediate cohomology does not necessarily vanish, but Propositions \ref{pRegularity}, \ref{pNatural} and equality \eqref{Serre} guarantee $h^1\big(\cE(th)\big)=0$ unless possibly when $\defect-1-w(\cE)\le t\le w(\cE)-2$.
\begin{example}
\label{eMukai}
Let $X$ be a smooth surface endowed with a very ample line bundle $\cO_X(h)$: the construction below is the same as in \cite[Proposition 6.2]{E--S--W}. Let $C\in\vert(3-\defect)h+K_X\vert$ be a smooth irreducible curve and let $D$ be a divisor on $C$: the existence of such a curve is guaranteed, for instance, when $(3-\defect)h+K_X$ is very ample and the base field is $\bC$ (see \cite[Theorem 0.1]{So--VV} and \cite[Exercise II.7.5 (d)]{Ha2}). If $\cO_C(D)$ is globally generated and $\sigma_0,\sigma_1\in H^0\big(\cO_C(D)\big)$ are two sections without common zeros, then we have a surjective morphism $\sigma\colon\cO_X^2\to\cO_C(D)$ and we set $\cE:=\ker(\sigma)^\vee$. Thus there is the exact sequence
$$
0\longrightarrow\cE^\vee\longrightarrow\cO_X^2\mapright\sigma\cO_C(D)\longrightarrow0.
$$
If we denote by $i\colon C\to X$ the inclusion, then the multiplicativity of the Chern polynomials in short exact sequences yields
$$
c_1(\cE)=c_1(i_*\cO_C(D)), \qquad c_2(\cE)=c_1(i_*\cO_C(D))^2-c_2(i_*\cO_C(D)).
$$
Thus $c_1(\cE)=(3-\defect)h+K_X$. Moreover $\chi(i_*\cO_C(D))=\chi(\cO_C(D))$, hence $c_2(\cE)=\deg(D)$ thanks to equalities \eqref{RRcurve} and \eqref{RRsurface}.
The cohomology of the above sequence tensored by $\cO_X(h+K_X)$, equality \eqref{Serre} and the Kodaira vanishing theorem imply
$$
h^0\big(\cE(-h)\big)=h^1\big(\cO_X(h+K_X)\otimes\cO_C(D)\big).
$$
Thus, if $h^1\big(\cO_X(h+K_X)\otimes\cO_C(D)\big)=0$, then $\cE$ is a rank two orientable $h$--instanton bundle with defect $\defect$, thanks to Corollary \ref{cSpecial}. In this case equality \eqref{RRsurface} and Proposition \ref{pQuantum} yield that the quantum number of $\cE$ is
\begin{equation*}
\begin{aligned}
\quantum&=-\chi(\cE(-h))=\deg(D)-2\chi(\cO_X)-\frac12\left((\defect^2-4\defect+5)h^2+(3-\defect)K_Xh\right)
\end{aligned}
\end{equation*}
\end{example}
\begin{example}
\label{eGenus0}
In the above example we assumed that $\omega_X$ is positive enough: in the following example we deal with any smooth surface $X$ with $p_g(X)=0$ and endowed with a very ample line bundle $\cO_X(h)$ such that $h^0\big(\cO_X(h)\big)=N+1$: the construction here is completely analogous to the one described in \cite{Cs4}.
Choose a $0$--dimensional subscheme $Z\subseteq X$ of degree $z\ge (1-\defect)(N+1)+1$ with $\defect\in\{\ 0,1\ \}$: if $\defect=0$ we also assume that no subscheme $Z'\subseteq Z$ of degree $N+1$ is contained in any divisor in $\vert h\vert$. Thus, the Cayley--Bacharach theorem (see \cite[Theorem 5.1.1]{Hu--Le}) yields the existence of a rank two vector bundle $\cF$ fitting into the exact sequence
\begin{equation}
\label{Genus0}
0\longrightarrow\cO_X\longrightarrow\cF\longrightarrow\cI_{Z\vert X}((1-\defect) h-K_X)\longrightarrow0.
\end{equation}
Thus, the vector bundle $\cE:=\cF(h+K_X)$ satisfies $c_1(\cE)=(3-\defect)h+K_X$ and $h^0\big(\cE(-h)\big)=p_g(X)=0$, hence $\cE$ is a rank two orientable $h$--instanton bundle on $X$. In particular $h^2\big(\cE(-h)\big)=0$, hence the cohomologies of sequences \eqref{Genus0} tensored by $\omega_X$ and \eqref{seqStandard} by $\cO_X((1-\defect)h)$ imply that its quantum number is
$$
\quantum=h^1\big(\cE(-h)\big)=z+(1+\defect)(q(X)-1)+(1-\defect)(h^1\big(\cO_X(h)\big)-N-1).
$$
\end{example}
\begin{remark}
Assume that $X\subseteq\p3$ is a smooth surface, $\cO_X(h):=\cO_X\otimes\cO_{\p3}(1)$ and that the base field is $\bC$. Then rank two, orientable, Ulrich bundles exist on the very general $X$ if and only if $\deg(X)\le 15$: see \cite{Bea, Fa1}.
Moreover, they exist on each $X$ when $\deg(X)\le4$: see \cite{Bea} for $\deg(X)\le 3$ and \cite{C--K--M, C--N} for $\deg(X)=4$. \end{remark} We close this section by generalizing the tight relation between ordinary $h$--instanton and Ulrich bundles with respect to $\cO_X(dh)$ evidenced for the first time in \cite{C--MR8} when $X\cong\p3$. \begin{corollary} \label{cVeronese} Let $X$ be an $n$--fold with $1\le n\le 3$ endowed with an ample and globally generated line bundle $\cO_X(h)$. For a vector bundle $\cE$ on $X$, the following assertions are equivalent for every positive integer $d$ such that $(n+1)(d-1)$ is even. \begin{enumerate} \item $\cE\left(\frac{n+1}2(d-1)h\right)$ is an Ulrich bundle with respect to $\cO_X(dh)$. \item $\cE$ is an ordinary $h$--instanton bundle with natural cohomology in each shift and quantum number \begin{equation} \label{Quantum} \quantum=\frac{(n-1)^n\rk(\cE)(d^2-1)}{2^nn!}h^n. \end{equation} \end{enumerate} \end{corollary} \begin{proof} Let $p\colon X\to\p n$ be a finite morphism such that $\cO_X(h)\cong p^*\cO_{\p n}(1)$. If assertion (1) holds, then $(p_*\cE)\left(\frac{n+1}2(d-1)\right)$ is an Ulrich bundle with respect to $\cO_{\p n}(d)$, thanks to Theorem \ref{tCharacterization} and Corollary \ref{cUlrich}. Moreover, $\rk(p_*\cE)=\rk(\cE)h^n$ hence \cite[Theorem 5.1]{E--S--W} and \cite[Lemma 2.6]{C--H2} imply that $p_*\cE$ has natural cohomology, \begin{equation} \label{QuantumChar} \chi\left((p_*\cE)\left(\frac{n+1}2(d-1)+t\right)\right)=\rk(\cE)d^nh^n{{\frac td+n}\choose n}, \end{equation} $h^0\big((p_*\cE)(-1)\big)=h^n\big((p_*\cE)(-n)\big)=0$ and, when $n=3$, also $h^1\big((p_*\cE)(-2)\big)=h^2\big((p_*\cE)(-2)\big)=0$. We deduce that $p_*\cE$ is an ordinary $\cO_{\p n}(1)$--instanton sheaf and equality \eqref{Quantum} is obtained by taking $t=-1-\frac{n+1}2(d-1)$ in formula \eqref{QuantumChar}. Thus assertion (2) follows from Theorem \ref{tCharacterization}. Let assertion (2) hold. Computing $\chi\left((p_*\cE)\left(\frac{n+1}2(d-1)+td\right)\right)$ via Corollary \ref{cCharacteristic}, taking into account equality \eqref{Quantum} and that $\rk(\cE)h^n=\chi(\cE)+(n-1)\quantum$ (see Remark \ref{rRankChern}) one obtains $$ \chi\left((p_*\cE)\left(\frac{n+1}2(d-1)-jd\right)\right)=0 $$ for $1\le j\le n$. It follows from the naturality of the cohomology of $\cE$ that the cohomology of $(p_*\cE)\left(\frac{n+1}2(d-1)\right)$ vanishes in the range $-n\le t\le -1$, hence it is an Ulrich sheaf with respect to $\cO_{\p n}(d)$. \end{proof} \begin{example} \label{eVeronesePlane} Thanks to \cite[Theorem 3]{C--MR7} there exist indecomposable Ulrich bundles with respect to $\cO_{\p2}(d)$ for each $r\ge2$ and $d\ge3$ such that $r(d-1)$ is even. It follows that, in the same range, there exist indecomposable ordinary $\cO_{\p2}(1)$--instanton bundles with quantum number $\quantum =\frac{r(d^2-1)}{8}$ and natural cohomology. \end{example} \begin{example} \label{eVeroneseQuadric} Let $X\subseteq\p4$ be a smooth quadric hypersurface and $\cO_X(h):=\cO_X\otimes\cO_{\p4}(1)$. Let $\cM_X(2;h,3)$ be the moduli space of stable rank two bundles $\cE$ with $c_1(\cE)=h$ and $c_2(\cE)h=3$ on $X$. The scheme $\cM_X(2;h,3)$ is irreducible, unirational, reduced of dimension $12$ and its general point represents a bundle $\cE$ with $h^0\big(\cE(h)\big)=0$ and $\reg(\cE)=1$ (see \cite[Theorem 5.2 and its proof]{Ot--Sz}), hence $h^1\big(\cE(th)\big)=0$ for $t\ge0$ and $h^0\big(\cE(th)\big)=0$ for $t\le 1$. Moreover, \cite[Corollary 2.4]{Ein--So} implies $h^1\big(\cE(-2h)\big)=0$. 
It follows that $h^3\big(\cE(-5h)\big)=h^2\big(\cE(-2h)\big)=0$, thanks to equality \eqref{Serre}, hence $h^2\big(\cE(th)\big)=0$ for $t\le-4$ and $h^3\big(\cE(th)\big)=0$ for $t\ge-5$. Thus $\cE$ is a rank two, ordinary, $h$--instanton bundle with natural cohomology in each shift. Equality \eqref{RRgeneral} finally implies that its quantum number is $\quantum=2$. Corollary \ref{cVeronese} then implies that $\cF:=\cE(2h)$ is a rank two Ulrich bundle on $X$ with respect to $\cO_X(2h)$. \end{example} \section{Monadic representation of instanton bundles on aCM $n$--folds} \label{sMonad} As pointed out in Theorem \ref{tCharacterization}, instanton sheaves on a projective scheme $X$ endowed with a very ample line bundle $\cO_X(h)$ are exactly the sheaves whose direct images via a suitable finite morphism on $\p n$ are $\cO_{\p n}(1)$--instanton sheaves or, equivalently, the cohomology of the monads $\cM^\bullet$ described in Proposition \ref{pSpace}. Several authors used the property of being cohomology of monads of a certain fixed shape for defining instanton bundles on $n$--folds: e.g. see \cite{J--MR}. The property of being cohomology of a linear monad with a certain fixed shape does not characterize $h$--instanton bundles when $X\not\cong\p n$. Nevertheless, $h$--instanton bundles can be associated to some particular monads making use of the results in \cite[Sections 2 and 3]{J--VM} as claimed in Theorem \ref{tMonad}. Recall that $\cF^{U,h}:=\cF^\vee((n+1)h+K_X)$ for each vector bundle $\cF$ on $X$. \medbreak \noindent{\it Proof of Theorem \ref{tMonad}.} If $\cE$ is a $h$--instanton bundle, then $h^p\big(\cE(\defect-p)\big)=0$ for each $p\ge2$ because $n\ge3$, hence $H^1_*\big(\cE\big)$ is generated by $\bigoplus_{i=0}^\defect H^1\big(\cE((i-1)h)\big)$ as an $S[X]$--module by \cite[Lemma 3.4]{J--VM}. Similarly, $H^1_*\big(\cE^{U,h}(-\defect h)\big)$ is generated by $\bigoplus_{i=0}^\defect H^1\big(\cE^{U,h}((i-1-\defect)h)\big)$ thanks to Proposition \ref{pDual}. Thanks to \cite[Theorem 2.3]{J--VM} we know the existence of monad \eqref{Monad} such that $H^i_*\big(\cB\big)=0$ for $1\le i\le n-1$ where $a$ and $c$ are as in equalities \eqref{DimensionMonad}. Monad \eqref{Monad} induces the two exact sequences \begin{equation} \label{Display} \begin{gathered} 0\longrightarrow\cU\longrightarrow\cB\longrightarrow\cO_X^{\oplus c}\oplus \cO_X(h)^{\oplus \quantum}\longrightarrow0,\\ 0\longrightarrow\omega_X((n-\defect)h)^{\oplus \quantum}\oplus\omega_X((n+1-\defect)h)^{\oplus a}\longrightarrow\cU\longrightarrow\cE\longrightarrow0. \end{gathered} \end{equation} The cohomology of sequences \eqref{Display} tensored by $\cO_X(-h)$ and $\cO_X((\defect-n)h)$ and equality \eqref{Serre} yield equalities \eqref{h^iB} and \eqref{ChiB}. Conversely, let $\cE$ be a vector bundle satisfying $h^0\big(\cE(-h)\big)=h^n\big(\cE((\defect-n)h)\big)=0$. Assume that $\cE$ is the cohomology of monad \eqref{Monad}, where $\cB$ is aCM and satisfies equalities \eqref{h^iB} and \eqref{ChiB}, and $a$, $c$ are as in equalities \eqref{DimensionMonad}. By hypothesis $X$ is aCM, hence $H^i_*\big(\omega_X\big)=H^i_*\big(\cO_X\big)=0$ for $1\le i\le n-1$ thanks to equality \eqref{Serre}. Thus the cohomology of sequences \eqref{Display} twisted by $\cO_X(-2h)$ yields \begin{gather*} h^1\big(\cE(-2h)\big)=h^1\big(\cU(-2h)\big)\le h^0\big(\cC(-2h)\big)+h^1\big(\cB(-2h)\big)=0. \end{gather*} Similarly $h^{n-1}\big(\cE((\defect+1-n)h)\big)=0$ and $H^i_*\big(\cE\big)=0$ for $2\le i\le n-2$. 
The same argument yields $h^1\big(\cE(-h)\big)=h^{n-1}\big(\cE((\defect-n)h)\big)=\quantum$ and $\defect(\chi(\cE)-(-1)^n\chi(\cE(-nh)))=0$ thanks to equalities \eqref{h^iB} and \eqref{ChiB}. Thus $\cE$ is an instanton bundle.
\qed
\medbreak
\begin{remark}
\label{rMonad}
Assume $h^0\big(\omega_X((n-1)h)\big)=0$. If $\cE$ is the cohomology of monad \eqref{Monad}, then the cohomology of sequences \eqref{Display} implies
\begin{gather*}
h^0\big(\cE(-h)\big)\le h^0\big(\cU(-h)\big)\le h^0\big(\cB(-h)\big)=0,\\
h^n\big(\cE((\defect-n)h)\big)\le h^0\big(\cU((\defect-n)h)\big)\le h^0\big(\cB((\defect-n)h)\big)=0,
\end{gather*}
thanks to equalities \eqref{h^iB}, because, being $X$ aCM, $H^1_*\big(\omega_X\big)=H^{n-1}_*\big(\cO_X\big)=0$.
\end{remark}
If $\cE$ is rank two orientable, then equality \eqref{Serre} yields $a=c$ in monad \eqref{Monad}.
The monad above can be put in an even more explicit form only if there is a description of aCM bundles on $X$. Unfortunately such a description is known in very few cases, but if we restrict to ordinary $h$--instanton bundles we have a more useful result.
\begin{corollary}
\label{cMonad}
Let $X$ be an $n$--fold with $n\ge3$ endowed with a very ample line bundle $\cO_X(h)$. Assume that $h^0\big(\omega_X((n-1)h)\big)=0$ and $X$ is aCM with respect to $\cO_X(h)$.
The non--zero vector bundle $\cE$ is an ordinary $h$--instanton bundle with quantum number $\quantum$ if and only if it is the cohomology of a monad of the form
\begin{equation}
\label{MonadOrdinary}
0\longrightarrow\cC^{U,h}\longrightarrow\cB\longrightarrow\cC\longrightarrow0
\end{equation}
where $\cC\cong\cO_X(h)^{\oplus \quantum}$ and $\cB$ is Ulrich.
\end{corollary}
\begin{proof}
If $\cE$ is ordinary, then it is the cohomology of monad \eqref{Monad} with $\cC\cong\cO_X(h)^{\oplus k}$ and $\cA\cong\omega_X(nh)^{\oplus k}\cong\cC^{U,h}$ thanks to Theorem \ref{tMonad}. The same theorem also yields that $\cB$ is aCM. Thanks to equalities \eqref{h^iB} we deduce that $\cB$ is actually Ulrich.
Conversely, let $\cE$ be the cohomology of monad \eqref{MonadOrdinary}. Being $\cB$ Ulrich, it is aCM and equalities \eqref{h^iB} hold. The statement follows from Theorem \ref{tMonad} and Remark \ref{rMonad}.
\end{proof}
Corollary \ref{cMonad} returns the characterization of ordinary $\cO_{\p n}(1)$--instanton bundles in terms of monad \eqref{MonadJardim} because $h^0\big(\omega_{\p n}(n-1)\big)=h^0\big(\cO_{\p n}(-2)\big)=0$.
\begin{remark}
\label{rSo--VV}
Assume that the base field is $\bC$ and $n\ge3$: thanks to \cite[Theorem 0.1]{So--VV} and simple calculations, we know that $h^0\big(\omega_X((n-1)h)\big)\ne0$ unless $X\cong\p n$ and $\cO_X(h)\cong\cO_{\p n}(1)$, or $X\subseteq\p {n+1}$ is the smooth quadric hypersurface and $\cO_X(h)\cong\cO_X\otimes\cO_{\p {n+1}}(1)$, or $X$ is a scroll over a smooth curve $B$.
\end{remark}
\begin{remark}
Assume that the base field is $\bC$ and $n\ge3$: assume also that $X\subseteq\p N$ is an $n$--fold of minimal degree $d:=N-n+1$. Thus $X$ is one of the varieties listed in Remark \ref{rSo--VV} and, if it is a scroll, $B\cong\p1$ necessarily (see \cite{Ei--Ha1} for further details). The variety $X$ is aCM with respect to $\cO_X(h)\cong\cO_X\otimes\cO_{\p {N}}(1)$, $d=h^n$ and $h^0\big(\omega_X((n-1)h)\big)=0$ (this follows from Remark \ref{rSo--VV}), hence each ordinary $h$--instanton bundle $\cE$ on $X$ fits into monad \eqref{MonadOrdinary}.
If $p\colon X\to \p n$ is finite and $p^*\cO_{\p n}(1)\cong\cO_X(h)$, then its degree is $d$ and the functors $R^ip_*$ vanish for $i\ge1$ by \cite[Corollary III.11.2]{Ha2}.
Thus, by applying $p_*$ to the exact sequences \eqref{Display}, we deduce that $p_*\cE$ is the cohomology of the monad
\begin{equation}
\label{PushForward}
0\longrightarrow p_*(\omega_X(nh))^{\oplus \quantum}\longrightarrow p_*\cB\longrightarrow p_*(\cO_X(h))^{\oplus \quantum}\longrightarrow0.
\end{equation}
Trivially $\rk(\cB)=2\quantum+\rk(\cE)$ hence $p_*\cB\cong\cO_{\p n}^{2d\quantum+d\rk(\cE)}$, because $\cB$ is Ulrich.
Notice that $p_*\cO_X\cong\bigoplus_{i=0}^{d-1}\cO_{\p n}(-\alpha_i)$ for suitable $\alpha_i\in \bZ$. Thanks to \cite[Corollary III.11.2 and Exercises III.8.1, III.8.3]{Ha2} we have $h^i\big((p_*\cO_X)(t)\big)=h^i\big(\cO_X(th)\big)$ for each $i,t\in\bZ$. Since $h^0\big(\cO_X\big)=1$, it follows that we can assume $\alpha_0=0$ and $\alpha_i\ge 1$ for $i\ge1$. Since $h^0\big(\cO_X(h)\big)=N+1$ and $d=N-n+1$, it follows that $\alpha_i= 1$ for $i\ge1$. Thus
$$
p_*(\cO_X(h))\cong\cO_{\p n}(1)\oplus\cO_{\p n}^{\oplus d-1}.
$$
Moreover, $\omega_X\cong\omega_{X\vert\p n}\otimes\cO_X(-(n+1)h)$, hence
$$
p_*(\omega_X(nh))\cong(p_*\cO_X)^\vee(-1)\cong\cO_{\p n}(-1)\oplus\cO_{\p n}^{\oplus d-1}
$$
by duality for finite flat morphisms (see \cite[Exercise III.6.10]{Ha2}). Thus we have induced morphisms $a\colon \cO_{\p n}(-1)^{\oplus\quantum}\oplus\cO_{\p n}^{\oplus (d-1)\quantum}\to \cO_{\p n}^{2d\quantum+d\rk(\cE)}$ and $b\colon \cO_{\p n}^{2d\quantum+d\rk(\cE)}\to \cO_{\p n}(1)^{\oplus\quantum}\oplus\cO_{\p n}^{\oplus (d-1)\quantum}$. Since $a$ is injective and $b$ is surjective, it is easy to check that the above morphisms split. Thus monad \eqref{PushForward} splits into monad \eqref{MonadJardim} having $p_*\cE$ as cohomology and a trivial exact sequence.
\end{remark}
In the following examples we apply Corollary \ref{cMonad} to some smooth varieties.
\begin{example}
\label{eQuadric0}
Let $X\subseteq\p{n+1}$ be a smooth quadric hypersurface and $\cO_X(h):=\cO_X\otimes\cO_{\p {n+1}}(1)$. Thus $h^0\big(\omega_X((n-1)h)\big)=h^0\big(\cO_X(-h)\big)=0$. Consider an ordinary $h$--instanton bundle $\cE$ of rank $r$ and quantum number $\quantum$. If $c_1(\cE)=\varepsilon h$, then equality \eqref{Slope} implies $c_1(\cE)=rh/2$, hence $r$ is even. If $n\ge3$, then $\cE$ is the cohomology of monad \eqref{MonadOrdinary}.
Recall that the Kn\"orrer theorem (see \cite[Corollary 6.8]{A--O1}) implies that the only indecomposable aCM bundles on $X$ are, up to shifts, $\cO_X$ and the spinor bundles: for their definition and properties we refer the reader to \cite[Definition 1.3, Theorem 2.8 and Remark 2.9]{Ott2}. In particular, if $n$ is odd, then there is exactly one spinor bundle $\cS$, while there are two non--isomorphic spinor bundles $\cS'$ and $\cS''$ if $n$ is even. Regardless of the parity of $n$ we have
\begin{equation}
\label{Spinor}
\begin{gathered}
\rk(\cS)=\rk(\cS')=\rk(\cS'')=2^{\left[\frac{n-1}2\right]},\\
c_1(\cS(h))=c_1(\cS'(h))=c_1(\cS''(h))=2^{\left[\frac{n-3}2\right]}h.
\end{gathered}
\end{equation}
Notice that $\cO_X$ is not an Ulrich bundle, while the shifted spinor bundles $\cS(h)$, $\cS'(h)$, $\cS''(h)$ are Ulrich bundles. Monad \eqref{MonadOrdinary} becomes
\begin{equation}
\label{MonadOdd}
0\longrightarrow\cO_{X}^{\oplus k}\longrightarrow\cS(h)^{\oplus s}\longrightarrow\cO_{X}(h)^{\oplus k}\longrightarrow0,
\end{equation}
if $n$ is odd and
\begin{equation}
\label{MonadEven}
0\longrightarrow\cO_{X}^{\oplus k}\longrightarrow\cS'(h)^{\oplus s'}\oplus\cS''(h)^{\oplus s''}\longrightarrow\cO_{X}(h)^{\oplus k}\longrightarrow0,
\end{equation}
if $n$ is even.
It is clear that $s=2^{-\left[\frac{n-1}2\right]}(r+2k)$ and $s'+s''=2^{-\left[\frac{n-1}2\right]}(r+2k)$, when $n$ is respectively either odd or even, thanks to equalities \eqref{Spinor}. In the former case the cohomology of sequences \eqref{Display} tensored by $\cS$ returns $$ s=h^0\big(\cE\otimes\cS\big)-h^1\big(\cE\otimes\cS\big)+2^{\left[\frac{n-1}2\right]}k, $$ thanks to \cite[Theorems 2.8 and 2.10]{Ott2}. In the latter case we have similarly \begin{gather*} s'=\left\lbrace\begin{array}{ll} h^0\big(\cE\otimes\cS'\big)-h^1\big(\cE\otimes\cS'\big)+2^{\left[\frac{n-1}2\right]}k\quad&\text{if $n\equiv 0\pmod 4$,}\\ h^0\big(\cE\otimes\cS''\big)-h^1\big(\cE\otimes\cS''\big)+2^{\left[\frac{n-1}2\right]}k\quad&\text{if $n\equiv 2\pmod 4$,} \end{array}\right.\\ s''=\left\lbrace\begin{array}{ll} h^0\big(\cE\otimes\cS''\big)-h^1\big(\cE\otimes\cS''\big)+2^{\left[\frac{n-1}2\right]}k\quad&\text{if $n\equiv 0\pmod 4$,}\\ h^0\big(\cE\otimes\cS'\big)-h^1\big(\cE\otimes\cS'\big)+2^{\left[\frac{n-1}2\right]}k\quad&\text{if $n\equiv 2\pmod 4$.} \end{array}\right. \end{gather*} Recall that $r$ is even, hence each indecomposable ordinary $h$--instanton bundle on $X$ has rank at least $4$ when $n\ge6$. Indeed if $\quantum=0$, then $\cE$ is a spinor bundle, while if $\quantum\ge1$ this follows from Proposition \ref{pHorrocks} (see also \cite[Corollary 1.2]{Mal1}). If $n\le 5$ rank two ordinary $h$--instanton bundles exist and we know what follows (see \cite{Mal1}). \begin{itemize} \item If $n=3$, then $s=k+1$: monad \eqref{MonadOdd} coincides with the one in \cite[Lemma 4]{Fa2} up to tensoring by $\cO_X(-h)$. \item If $n=4$, besides the obvious case of the $h$--instanton bundles $\cS'(h)$ and $\cS''(h)$, we must have $s'=s''=1$ and $k=1$: in this case the cohomology of monad \eqref{MonadEven} tensored by $\cO_X(-h)$ is the bundle defined in \cite[Proposition p. 205]{A--S}. \item If $n=5$, then $s=1$ and $\quantum=1$: the cohomology of monad \eqref{MonadOdd} tensored by $\cO_X(-h)$ is the Cayley bundle described in \cite{Ott3}. In particular, such an $X$ supports a rank two orientable $h$--instanton bundle, but no any rank two Ulrich sheaf. \end{itemize} The general problem of the existence of $h$--instanton bundles for odd $n$ and rank $n-1$ is discussed in \cite[Section 4]{C--MR--PL}. Conversely, if the cohomology of monads \eqref{MonadEven} and \eqref{MonadOdd} is a vector bundle $\cE$, then it is an ordinary $h$--instanton bundle of rank $r$ with quantum number $k$ thanks to Corollary \ref{cMonad}. \end{example} One can work out analogous computations on each manifold $X$ with sufficiently negative canonical line bundle and whose Ulrich bundles are completely described. In the following example we deal with the case of some particular scrolls. Let us spend a few words on scrolls for fixing the notation here and in Section \ref{sScrollCurve}. Recall that an $n$--fold $X$ endowed with a very ample line bundle $\cO_X(h)$ is a scroll over a smooth curve $B$ if there is a vector bundle $\cG$ of rank $n$ on $B$ such that $X\cong\bP(\cG)$ and $\cO_X(h):=\cO_{\bP(\cG)}(1)$: thus $\cG$ must be ample and globally generated, hence $0<h^n=\deg(\frak g)$. With the notation above we set $\cO_B(\frak g):=\det(\cG)$. In $A^1(X)$ we denote by $f$ the class of a fibre of the projection $\pi\colon\bP(\cG)\to B$ and for each divisor $\frak a\subseteq B$ we write $\frak a f$ instead of $\pi^*\frak a$. 
If the genus of $B$ satisfies $g:=p_a(B)\ge1$ such fibres are not linearly equivalent: anyhow they form an algebraic system. The Chern equation in $A(X)$ is $h^n-c_1(\cG)fh^{n-1}=0$: the intersection product in $A(X)$ is given by $$ h^n=\deg(\frak g),\qquad fh^{n-1}=1,\qquad f^2=0 $$ in $A(X)$. Moreover $K_X=-nh+(\frak g+K_B)f$. The following lemma will be repeatedly used from now on in the paper. \begin{lemma} \label{lDerived} Let $X\cong\bP(\cG)$ and $\cO_X(h):=\cO_{\bP(\cG)}(1)$. For each divisor $\frak a\subseteq B$ we have $$ h^i\big(\cO_X(th+\frak a f)\big)=\left\lbrace\begin{array}{ll} h^i\big((S^t\cG)(\frak a)\big)\quad&\text{if $t\ge0$,}\\ 0\quad&\text{if $1-n\le t\le -1$.} \end{array}\right. $$ \end{lemma} \begin{proof} Recall that $R^i\pi_*\cO_X(th)=0$ for $t\ge1-n$ and $i\ge0$, and $\pi_*\cO_X(th)\cong S^t\cG$ for $t\in\bZ$ (see \cite[Exercise III. 8.3]{Ha2}). Thus, the statement follows from the projection formula (see \cite[Exercise III.8.1]{Ha2}). \end{proof} As a first consequence of the above lemma we deduce that $h^0\big(\omega_X((n-1)h)\big)=0$. Nevertheless, it is not always true that the embedding induced by $\cO_X(h)$ is aCM, unless $B\cong\p1$. Thus we can assume $\cG\cong\bigoplus_{j=0}^{n-1}\cO_{\p1}(a_j)$ where $1\le a_0\le \dots\le a_{n-1}$. In this case $\cO_X(h)$ is actually very ample and $d:=h^n=N-n+1=\sum_{j=0}^{n-1}a_j$, i.e. $X$ is a variety of minimal degree on $\p N$. Such varieties are described in \cite{Ei--Ha1}. In particular, $\cO_X$ is aCM. Moreover the description of Ulrich bundles is known: see \cite{A--H--M--PL}. In the following examples we deal with the cases $n=3$ and $X\cong\p1\times\p3$. \begin{example} \label{e2Segre} If $n=3$, then $K_X=-3h+(d-2)f$. Let $\Omega^p_{X\vert\p1}:=\wedge^p\Omega^1_{X\vert\p1}$, where $\Omega^1_{X\vert\p1}$ is the sheaf of relative differentials of the projection $\pi\colon X\to \p1$. Recall that it fits into the relative Euler sequence \begin{equation} \label{seqEuler} 0\longrightarrow\Omega^1_{X\vert\p1}\longrightarrow\bigoplus_{j=0}^2\cO_{X}(a_jf-h)\longrightarrow\cO_X\longrightarrow0,. \end{equation} Let $\cB$ be a vector bundle on $X$. As pointed out in \cite[Theorem 4.7]{A--H--M--PL}, $\cB$ is Ulrich if and only if there is a filtration $$ 0=:\cB_0\subseteq\cB_1\subseteq\cB_2\subseteq\cB_3:=\cB $$ such that $\cB_{p+1}/\cB_p\cong\Omega^p_{X\vert\p1}((p+1)h-f)^{\oplus s_{p+1}}$ for $0\le p\le 2$. It follows that $\cB$ is Ulrich if and only if there are exact sequences \begin{gather} \label{seqUlrich} 0\longrightarrow\cB_2\longrightarrow \cB\longrightarrow\cO_X((d-1)f)^{\oplus s_3}\longrightarrow0,\\ \label{seqKernel} 0\longrightarrow\cO_X(h-f)^{\oplus s_1}\longrightarrow \cB_2\longrightarrow\Omega^1_{X\vert\p1}(2h-f)^{\oplus s_2}\longrightarrow0. \end{gather} Notice that the cohomology of the dual of sequence \eqref{seqEuler} tensored by $\cO_X(-h)$ returns \begin{equation} \label{D} h^i\big((\Omega_{X\vert\p1}^1)^\vee(-h)\big)=h^i\big(\Omega_{X\vert\p1}^1(2h-df)\big)=\left\lbrace\begin{array}{ll} d-3\quad&\text{if $i=1$,}\\ 0\quad&\text{if $i\ne1$.}\\ \end{array}\right. \end{equation} Thus $\cB_2\cong \cO_X(h-f)^{\oplus s_1}\oplus\Omega^1_{X\vert\p1}(2h-f)^{\oplus s_2}$ if $a_2=1$. On the other hand, sequence \eqref{seqKernel} could be unsplit when $a_2\ge2$. If $\cE$ is an ordinary $h$--instanton bundle with $\rk(\cE)=r$ and $\quantum(\cE)=k$, then $c_1(\cE)h=r(d-1)$. 
Moreover $\cE$ is the cohomology of monad \eqref{MonadOrdinary}, that is \begin{equation} \label{Monad2Segre} 0\longrightarrow\cO_{X}((d-2)f)^{\oplus k}\longrightarrow\cB\longrightarrow\cO_{X}(h)^{\oplus k}\longrightarrow0. \end{equation} If $\cB$ is as in sequence \eqref{seqUlrich}, then $ s_1+2s_2+s_3=r+2k. $ The cohomology of the dual of sequence \eqref{seqEuler} and its shift, the one of sequences \eqref{Display}, \eqref{seqKernel} twisted by $\cO_X(f-2h)$, $\cO_X(f-h)$, $\cO_X(-2h-f)$, $\cO_X(-3h-f)$ and Lemma \ref{lDerived} yield \begin{gather*} s_1=h^0\big(\cE(f-h)\big)-h^1\big(\cE(f-h)\big)+2k,\\ s_2=h^1\big(\cE(f-2h)\big)=h^2\big(\cE(-2h-f)\big),\\ s_3=h^3\big(\cE(-3h-f)\big)-h^2\big(\cE(-3h-f)\big)+2k. \end{gather*} Conversely, if the cohomology of monad \eqref{Monad2Segre} is a vector bundle, then it is an ordinary $h$--instanton bundle of rank $r$ with quantum number $k$ by Corollary \ref{cMonad}. In \cite[Theorem 3.5]{A--M2} the existence of completely different and more explicit monads is proved via a derived category approach under restrictive hypotheses on $\cE$. \end{example} \begin{example} \label{e3Segre} Let $X\subseteq\p7$ be the image of the Segre embedding of $\p1\times\p3$, i.e. the rational normal scroll with $a_0=a_1=a_2=a_3=1$. Notice that, besides $\pi$, we also have the second projection $p\colon X\to \p3$ and $\Omega^1_{X\vert\p1}\cong p^*\Omega^1_{\p3}$. If $\cE$ is an ordinary $h$--instanton bundle with $\rk(\cE)=r$ and $\quantum(\cE)=k$, then $c_1(\cE)h^3=3r$ and it is the cohomology of monad \eqref{MonadOrdinary}, which becomes \begin{equation*} 0\longrightarrow\cO_{X}(2f)^{\oplus k}\longrightarrow\cB\longrightarrow\cO_{X}(h)^{\oplus k}\longrightarrow0 \end{equation*} where $\cB$ fits into an exact sequence of the form \begin{align*} 0\longrightarrow \cO_{X}(h-f)^{\oplus s_1}&\oplus p^*\Omega^1_{\p3}(2h-f)^{\oplus s_2}\\ \longrightarrow\cB&\longrightarrow p^*\Omega^2_{\p3}(3h-f)^{\oplus s_3}\oplus \cO_{X}(3f)^{\oplus s_4}\longrightarrow0. \end{align*} with $s_1+3s_2+3s_3+s_4=r+2k$ (see \cite[Example 4.17]{A--H--M--PL}). The cohomology of sequences \eqref{Display}, the cohomology of the dual of the Euler sequence on $\p3$ and the K\"unneth formulas yield \begin{gather*} s_1=h^0\big(\cE(f-h)\big)-h^1\big(\cE(f-h)\big)+2k,\qquad s_2=h^1\big(\cE(f-2h)\big),\\ s_3=h^2\big(\cE(-3h-f)\big), \qquad s_4=h^4\big(\cE(-4h-f)\big)-h^3\big(\cE(-4h-f)\big)+2k. \end{gather*} Conversely, if the cohomology of monad \eqref{Monad2Segre} is a vector bundle, then it is an ordinary $h$--instanton bundle of rank $r$ with quantum number $k$ by Corollary \ref{cMonad}. The above example can be generalized to every scroll over $\p1$ of dimension $n\ge4$ along the same lines of the previous Example \ref{e2Segre}, though the description of $\cB$ becomes more and more involved as $n$ increases. \end{example} Theorem \ref{tMonad} says something even for non--ordinary instanton bundles. \begin{example} \label{eSpace1} If $\cE$ is a non--ordinary $\cO_{\p n}(1)$--instanton bundle with $r:=\rk(\cE)$, then equality \eqref{ChernFano} implies $c_1(\cE)=-r/2$. If $n\ge3$, then $\cE$ is the cohomology of Monad \eqref{Monad} where $a,c$ are as in equality \eqref{DimensionMonad}. If $\cB\cong\bigoplus_{i=1}^b\cO_{\p n}(\beta_i)$, then the vanishings $h^0\big(\cB(-1)\big)=h^n\big(\cB(1-n)\big)=0$ force $-1\le \beta_1\le0$. 
Theorem \ref{tMonad} then yields that $\cE$ is the cohomology of a quasi--linear monad of the form \begin{equation} \label{MonadSpace1} \begin{aligned} 0\longrightarrow\cO_{\p n}(-2)^{\oplus \quantum}&\oplus\cO_{\p n}(-1)^{\oplus a}\\ &\longrightarrow\cO_{\p n}(-1)^{\oplus b_0}\oplus\cO_{\p n}^{\oplus b_1}\longrightarrow\cO_{\p n}^{\oplus c}\oplus\cO_{\p n}(1)^{\oplus \quantum}\longrightarrow0, \end{aligned} \end{equation} where $a,c$ are as in equality \eqref{DimensionMonad}, $\quantum$ is as usual the quantum number of $\cE$ and $$ b_0=\frac r2+\quantum+a,\qquad b_1=\frac r2+\quantum+c. $$ When $n=3$ and $r=2$, monad \eqref{MonadSpace1} has been described and used in \cite[Section 5.2]{El--Gr}. The shape of monad \eqref{MonadSpace1} is not surprising. Indeed, if $\cA$ is any ordinary rank two $\cO_{\p n}(1)$--instanton bundle, then $\cE:=\cA(-1)\oplus\cA$ is a non--ordinary $\cO_{\p n}(1)$--instanton bundle. In this case monad \eqref{MonadSpace1} is the direct sum of the monad defining $\cA$ (which is \eqref{MonadJardim}) and its first negative shift. Conversely, if the cohomology of the monad above is a vector bundle $\cE$, then it is an instanton bundle on $\p n$ with respect to $\cO_{\p n}(1)$ thanks to a direct computation. \end{example} \begin{remark} \label{rSpace1} We use the same notation of Example \ref{eSpace1}. Let $n\ge3$ and $$ \widehat{w}(\cE):=\min\{\ \chi(\cE)+(n+2)\quantum+h^{n-1}\big(\cE(-n)\big), h^1\big(\cE\big)+2\quantum+n\ \} $$ Thanks to \cite[Theorem 3.2]{C--MR1} we know that $\reg(\cE)\le \max\{\ \widehat{w}(\cE), 2\ \}$. It is not easy to confront such a bound with the one in Proposition \ref{pRegularity}. Nevertheless, if $n=3$, $r=2$ and $\quantum\ge1$, then equalities \eqref{Serre} and \eqref{RRgeneral} yield $\widehat{w}(\cE)>w(\cE)=h^1\big(\cE\big)+1$. \end{remark} \begin{example} \label{eQuadric1} If $\cE$ is a non--ordinary instanton bundle with rank $r$ and quantum number $\quantum$ on a smooth quadric hypersurface $X\subseteq\p {n+1}$, then $c_1(\cE)=0$ by equality \eqref{ChernFano} and the picture is considerably more complicated and far from many very well--known results: e.g. see \cite[Proposition 9.2]{A--C--G} and the references therein for $n=3$ and \cite{C--MR3} for $n\ge5$. We only suggest what happens when $n$ is odd. We know that the bundle $\cB$ in Theorem \ref{tMonad} is aCM, hence we can write it as follows $$ \cB\cong\bigoplus_{i=1}^b\cO_{X}(\beta_ih)\oplus\bigoplus_{j=1}^s\cS(\sigma_jh). $$ The vanishings $h^0\big(\cB(-h)\big)=h^n\big(\cB((1-n)h)\big)=0$ force $\beta_i=0$ and $0\le \sigma_j\le1$. Since $c_1(\cE)=0$, it follows that $\cE$ is the cohomology of a monad of the form \begin{equation} \label{MonadQuadric1} \begin{aligned} 0\longrightarrow\cO_{X}(-h)^{\oplus \quantum}&\oplus\cO_{X}^{\oplus a}\\ &\longrightarrow\cO_{X}^{\oplus b}\oplus\cS^{\oplus s}\oplus\cS(h)^{\oplus s}\longrightarrow\cO_{X}^{\oplus c}\oplus\cO_{X}(h)^{\oplus \quantum}\longrightarrow0, \end{aligned} \end{equation} where $a,c$ are as in equality \eqref{DimensionMonad}. Computing $r$ from monad \eqref{MonadQuadric1} we deduce $$ s=2^{-\left[\frac{n+1}2\right]}(r+2\quantum+a+c-b). $$ Conversely, if the cohomology of monad \eqref{MonadQuadric1} is a vector bundle, then a direct computation shows that it is an $\cO_X(h)$--instanton bundle. Notice that if $s=a=c=0$, then monad \eqref{MonadQuadric1} coincides with monad \eqref{MonadJardim}. 
\end{example} \section{Instanton bundles of low rank on cyclic $n$--folds} \label{sCyclic} We briefly deal with $h$--instanton bundles on an $n$--fold $X$ which is cyclic, i.e. such that $\Pic(X)$ is free of rank $\varrho_X=1$, and let $\cO_X(H)$ be its ample generator. In what follows we focus on the case $n\ge2$, because the unique cyclic curve is $\p1$. Let $\cO_X(h)\cong\cO_X(uH)$ and $\omega_X\cong\cO_X(vH)$. The ampleness of $h$ implies $u\ge1$. Moreover, if $v<0$, then $X$ is a Fano $n$--fold, hence $v\ge-n-1$ and equality holds if and only if $X\cong\p n$ by \cite[Corollary 2]{Ka--Ko}. \begin{lemma} \label{lCyclicChern} Let $X$ be a cyclic $n$--fold with $n\ge2$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton bundle with defect $\defect$, then \begin{equation} \label{Cyclic} 2c_1(\cE)={\rk(\cE)}((n+1-\defect)h+K_X). \end{equation} \end{lemma} \begin{proof} If $c_1(\cE)=\varepsilon H$, then $ (2\varepsilon-\rk(\cE)((n+1-\defect)u+v))u^{n-1}H^n=0 $ by equality \ref{Slope}, hence the statement follows from the ampleness of $\cO_X(H)$ and the Nakai--Moishezon criterion on $X$ (see \cite[Theorem A.5.1]{Ha2}). \end{proof} In particular, if $(n+1-\defect)u+v$ is odd, then $\rk(\cE)$ must be even. In what follows we will deal with $h$--instanton bundles of small rank on a cyclic manifold $X$. \begin{proposition} \label{pCyclicLine} Let $X$ be a cyclic $n$--fold with $n\ge2$ endowed with an ample and globally generated line bundle $\cO_X(h)$. Assume also that the ample generator $\cO_X(H)$ of $\Pic(X)$ is effective. If $\mathcal L\in\Pic(X)$ is a $h$--instanton line bundle with defect $\defect$, then one of the following assertions holds. \begin{enumerate} \item $X\cong\p n$, $\cO_X(h)\cong\cO_{\p n}(1)$, $\defect=0$, $\mathcal L\cong\cO_{\p{n}}$. \item $X\cong\p{3}$, $\cO_X(h)\cong\cO_{\p{3}}(2)$, $\defect=1$, $\mathcal L\cong\cO_{\p{3}}(1)$. \item $X\subseteq\p {n+1}$ is the smooth quadric hypersurface, $\cO_X(h)\cong\cO_X\otimes\cO_{\p {n+1}}(1)$, $\defect=1$, $\mathcal L\cong\cO_X$, $n\ge3$. \end{enumerate} \end{proposition} \begin{proof} Let $\cO_X(H)$ be the ample generator of $\Pic(X)$. Equality \eqref{Cyclic} yields $$ \mathcal L^2\cong\cO_X((n+1-\defect)h+K_X)\cong\cO_X((u(n+1-\defect)+v)H). $$ It follows that $u(n+1-\defect)+v=2w$ for some integer $w$ and $\mathcal L\cong\cO_X(wH)$. The conditions $h^0\big(\mathcal L(-h)\big)=0$ and $h^0\big(\cO_X(H)\big)\ne0$ imply $w\le u-1$, hence \begin{equation} \label{CyclicLine} u(n-1-\defect)+v\le -2. \end{equation} Since $u\ge1$, it then follows $v\le -n-1+\defect$, i.e. either $X\cong\p n$ or $X\subseteq \p{n+1}$ is the smooth quadric hypersurface and $\defect=1$ by \cite[Corollary 2]{Ka--Ko}. In the latter case $v=-n$, hence inequality \eqref{CyclicLine} becomes $(n-2)(u-1)\le0$. If $n=2$, then $\varrho_X\ge2$, hence $n\ge3$ and $u=1$ necessarily. We deduce that $\mathcal L\cong\cO_X$ and $\cO_X(h)\cong\cO_X\otimes\cO_{\p{n+1}}(1)$, i.e. assertion (3) holds true. If $X\cong\p n$ and $\defect=0$, then inequality \eqref{CyclicLine} becomes $(n-1)(u-1)\le0$, hence $\cO_X(h)\cong\cO_{\p n}(1)$ and $\mathcal L\cong\cO_X$, i.e. assertion (1) holds. If $X\cong\p n$ and $\defect=1$ the same argument leads to $(n-2)(u-1)\le1$. Both the cases $u=1$ (and arbitrary $n$) and $n=2$ (and arbitrary $u$) can be excluded because $u(n+1-\defect)+v$ is not even in these cases. 
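For the reader's convenience, we spell out the parity check: since $\defect=1$ and $v=-n-1$ when $X\cong\p n$, in both cases above one has
$$
u(n+1-\defect)+v=un-n-1=n(u-1)-1,
$$
which equals $-1$ if $u=1$ and $2u-3$ if $n=2$, hence it is odd in both cases.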
Thus the only possible case is $n=3$ and $u=2$ leading to $\cO_X(h)\cong\cO_{\p3}(2)$ and $\mathcal L\cong\cO_{\p3}(1)$, which is assertion (2). \end{proof} If $\cE$ is a vector bundle on $X$, then $t_h^{\cE}:=\left[-{\mu_h(\cE)}/{h^n}\right]$ is the unique $t\in\bZ$ such that $-\rk(\cE)h^n<c_1(\cE(th))h^{n-1}\le 0$. The sheaf $\cE_{norm,h}:=\cE(t_h^{\cE}h)$ is called the {\sl normalization of $\cE$ (with respect to $\cO_X(h)$)}. \begin{lemma} \label{lHoppe} Let $X$ be a cyclic manifold. If $\cO_X(H)$ is the ample generator of $\Pic(X)$ and $\cE$ is a vector bundle on $X$ with $c_1(\cE)=\varepsilon H$, then the following assertions hold. \begin{enumerate} \item $\cE$ is $\mu$--semistable (resp. $\mu$--stable) with respect to an ample line bundle if and only if the same is true with respect to $\cO_X(H)$. \item If $\cE$ is $\mu$--semistable (resp. $\mu$--stable), then $h^0\big(\cE_{norm,H}(-H)\big)=0$ (resp. $h^0\big(\cE_{norm,H}\big)=0$). \item If $\rk(\cE)=2$ and $h^0\big(\cE_{norm,H}\big)=0$, then $\cE$ is $\mu$--stable. \item If $\rk(\cE)=2$, $\varepsilon$ is even and $h^0\big(\cE_{norm,H}(-H)\big)=0$, then $\cE$ is $\mu$--semistable. \item If $\rk(\cE)=2$ and $\varepsilon$ is odd, then $\cE$ is $\mu$--stable if and only if it is $\mu$--semistable. \end{enumerate} \end{lemma} \begin{proof} If $\cO_X(h)$ is ample, then there is a positive integer $u$ such that $\cO_X(h)\cong\cO_X(uH)$, hence $\mu_h(\cE)=u^{n-1}\mu_H(\cE)$ for each sheaf $\cE$ on $X$. Thus, the stability properties of $\cE$ with respect to $\cO_X(h)$ and $\cO_X(H)$ are the same, which proves assertion (1). The remaining assertions are the classical Hoppe criterion for rank two bundles on a manifold with cyclic Picard group, and they hold over any algebraically closed field. \end{proof} If $\cE$ is a $h$--instanton bundle on $X$, then equality \eqref{Cyclic} holds for $c_1(\cE)$. In the following proposition we deal with the semistability properties of rank two $h$--instanton bundles on a cyclic $n$--fold (see \cite[Proposition 2]{J--MR} for a similar result). \begin{proposition} \label{pCyclicStable} Let $X$ be a cyclic $n$--fold with $n\ge2$ endowed with an ample and globally generated line bundle $\cO_X(h)$. If $\cE$ is a rank two $h$--instanton bundle on $X$ with defect $\defect$, then the following assertions hold. \begin{enumerate} \item $\cE$ is $\mu$--semistable unless possibly when: \begin{enumerate} \item $X\cong\p n$, $\cO_X(h)\cong\cO_{\p n}(1)$, $\defect=1$; \item $X\cong\p2$, $\cO_X(h)\cong\cO_{\p2}(u)$, $\defect=1$. \end{enumerate} \item If $\cE$ is $\mu$--semistable, then it is also $\mu$--stable unless possibly when: \begin{enumerate} \item $X\cong\p n$, $\cO_X(h)\cong\cO_{\p n}(1)$, $\defect=0$; \item $X\cong\p3$, $\cO_X(h)\cong\cO_{\p3}(2)$, $\defect=1$; \item $X\subseteq\p{n+1}$ is a smooth quadric hypersurface, $\cO_X(h)\cong\cO_X\otimes \cO_{\p {n+1}}(1)$, $\defect=1$, $n\ge3$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Since $\varrho_X=1$, equality \eqref{Cyclic} yields $c_1(\cE)=(n+1-\defect)h+K_X=\varepsilon H$, where $\varepsilon=u(n+1-\defect)+v$. By assumption we have $h^0\big(\cE(-h)\big)=0$. Since $t_H^{\cE}:=\left[-{\varepsilon}/{2}\right]$, it follows from Lemma \ref{lHoppe} that when $\varepsilon$ is even and $t_H^{\cE}-1\le -u$ (resp. $t_H^{\cE}\le -u$), then $\cE$ is $\mu$--semistable (resp. $\mu$--stable). When $\varepsilon$ is odd, the same lemma yields the $\mu$--stability and $\mu$--semistability of $\cE$ when $t_H^{\cE}\le -u$. In what follows we will assume $n\ge3$: the computations in the case $n=2$ are similar. If $\varepsilon$ is even, then $\cE$ can fail to be $\mu$--semistable only if $u(n-1-\defect)+v+3\le 0$. If $u\ge2$, then $v\le -2n-1+2\defect$.
Since $v\ge-n-1$, it follows that $n\le 2\defect\le 2$, contradicting $n\ge3$. If $u=1$, then $v\le -n-2+\defect$, hence necessarily $\defect=1$ and $X\cong\p n$: thus $\varepsilon=-1$ which is not even, a contradiction. If $\varepsilon$ is odd, then $\cE$ can fail to be $\mu$--(semi)stable only if $u(n-1-\defect)+v+2\le 0$. If $u\ge3$, then $v\le -3n+1+3\defect$. Since $v\ge-n-1$, it follows that $2n\le 3+2\defect\le 5$, contradicting $n\ge3$. If $u=2$, the same argument leads to $\defect=1$ and $X\cong\p3$ necessarily: thus $\varepsilon=2$ which is not odd, a contradiction. Finally if $u=1$, arguing as above one deduces that the only admissible case is $\defect=1$ and $X\cong\p n$. If $\varepsilon$ is even, then $\cE$ can fail to be $\mu$--stable only if $u(n-1-\defect)+v+1\le 0$. If $u\ge4$, then $v\le -4n+3+4\defect$ and one can exclude this case arguing as above. If $u=3$, then $\defect=1$ and $X\cong\p3$ necessarily, hence $\varepsilon=5$ which is not even, a contradiction. If $u=2$, then again one obtains $\defect=1$ and $X$ is either $\p3$ or a smooth quadric hypersurface in $\p{n+1}$: computing $\varepsilon$, one deduces that only the former case is possible. Finally if $u=1$, then $X$ is either $\p n$ or a smooth quadric hypersurface in $\p{n+1}$ or $v=-n+1$: the computation of $\varepsilon$ yields either $\defect=0$ and $X\cong \p n$ or $\defect=1$ and $X$ is a smooth quadric hypersurface in $\p{n+1}$. \end{proof} \begin{remark} \label{rCyclic} Proposition \ref{pCyclicStable} is sharp. Indeed, the bundle $\cO_{\p n}(u-1)\oplus\cO_{\p n}(u-2)$ is an unstable non--ordinary $\cO_{\p n}(u)$--instanton bundle if either $u=1$ or $n=2$. Similarly, $\cO_{\p n}^{\oplus2}$ is a strictly $\mu$--semistable ordinary $\cO_{\p n}(1)$--instanton bundle and $\cO_{\p3}(1)^{\oplus2}$ is a rank two strictly $\mu$--semistable non--ordinary $\cO_{\p3}(2)$--instanton bundle. Finally, if $X\subseteq\p{n+1}$ is a smooth quadric hypersurface, then $\cO_X^{\oplus2}$ is a rank two strictly $\mu$--semistable non--ordinary $\cO_X\otimes\cO_{\p{n+1}}(1)$--instanton bundle. \end{remark} Besides projective spaces and smooth quadric hypersurfaces, other important examples of cyclic $n$--folds are provided by cyclic Fano $3$--folds. If $X$ is such a $3$--fold, then $\Pic(X)$ is generated by the fundamental line bundle $\cO_X(h)$. If $\cE$ is a $h$--instanton bundle on $X$, then equality \eqref{Cyclic} becomes \begin{equation} \label{ChernFano} 2c_1(\cE)={\rk(\cE)}(4-\defect-i_X)h \end{equation} where, as usual, $i_X$ is the index of $X$. In particular ordinary (resp. non--ordinary) $h$--instanton bundles must have even rank when $i_X=1,3$ (resp. $i_X=2,4$). We say that a rank two vector bundle $\cE$ on $X$ is a {\sl classical instanton bundle} if $c_1(\cE)=-\varepsilon h$ with $\varepsilon\in \{\ 0,1\ \}$ and $h^0\big(\cE\big)=h^1\big(\cE(-q_X^\varepsilon h)\big)=0$, where $$ q_X^\varepsilon:=\left[\frac{i_X+1-\varepsilon}2\right] $$ (see \cite[Definition 1.1]{A--C--G}). Notice that $0\le q_X^\varepsilon\le 2$. Moreover, $q_X^\varepsilon=0$ if and only if $i_X=\varepsilon=1$, and $q_X^\varepsilon=2$ if and only if either $X\cong\p3$ or $X\subseteq\p4$ is a smooth quadric hypersurface and $\varepsilon=0$. Below, we compare the notion of classical instanton bundle with Definition \ref{dMalaspinion}. \begin{proposition} \label{pFano} Let $X$ be a cyclic Fano $3$--fold with very ample fundamental line bundle $\cO_X(h)$.
If $\cE$ is a rank two vector bundle with $c_1(\cE)=(4-\defect-i_X)h$ where $\defect\in\{\ 0,1\ \}$, then the following assertions hold. \begin{enumerate} \item If $\cE$ is an $h$--instanton bundle, then its defect is $\defect$ and: \begin{enumerate} \item if $(i_X,\defect)\not\in\{\ (4,0),(4,1),(3,1)\ \}$, then $\cE_{norm,h}$ is a classical instanton bundle; \item if $(i_X,\defect)\in\{\ (4,0),(4,1),(3,1)\ \}$, then $\cE_{norm,h}$ is a classical instanton bundle if and only if $h^0\big(\cE\big)=0$. \end{enumerate} \item If $\cE_{norm,h}$ is a classical instanton bundle and: \begin{enumerate} \item $(i_X,\defect)\ne(1,0)$, then $\cE$ is a $h$--instanton bundle with defect $\defect$; \item $(i_X,\defect)=(1,0)$, then $\cE$ is a $h$--instanton bundle with defect $\defect$ if and only if $h^0\big(\cE_{norm,h}(h)\big)=0$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Let $\cE$ be a rank two vector bundle such that $c_1(\cE)=(4-\defect-i_X)h$ where $\defect\in\{\ 0,1\ \}$. Thus $\cE_{norm,h}\cong\cE(t_h^\cE h)$ where $$ t_h^\cE=\left[-\frac{\mu_h(\cE)}{h^3}\right]=\left[\frac{i_X+\defect}{2}\right]-2=q_X^{1-\defect}-2. $$ Let $\cE$ be a rank two $h$--instanton bundle. Since $c_1(\cE)=(4-\defect-i_X)h$, its defect is $\defect$. Moreover, by definition we know that $h^0\big(\cE(-h)\big)=h^1\big(\cE(-2h)\big)=0$, hence $$ h^0\big(\cE_{norm,h}((1-q_X^{1-\defect})h)\big)=h^1\big(\cE_{norm,h}(-q_X^{1-\defect} h)\big)=0. $$ Notice that $1-q_X^{1-\defect}\ge-1$ and $1-q_X^{1-\defect}\ge0$ unless $q_X^{1-\defect}=2$. The latter equality holds if and only if either $X\cong\p3$ or $X\subseteq\p{4}$ is a smooth quadric hypersurface and $\defect=1$. In these cases $\cE_{norm,h}$ is a classical instanton bundle if and only if $h^0\big(\cE_{norm,h}\big)=0$. In the remaining cases the vanishing $h^0\big(\cE_{norm,h}\big)=0$ is for free. Conversely, if $\cE_{norm,h}$ is a classical instanton bundle, then $c_1(\cE_{norm,h})=-\varepsilon h$ where $\varepsilon:=i_X+\defect-2q_X^{1-\defect}$ and $h^0\big(\cE_{norm,h}\big)=h^1\big(\cE_{norm,h}(-q_X^{1-\defect} h)\big)=0$ by definition. Notice that $$ q_X^\varepsilon=\left[\frac{i_X+1-\varepsilon}2\right]=\left[\frac{1-\defect+2q_X^{1-\defect}}2\right]=\left[\frac{1-\defect}2\right]+q_X^{1-\defect}=q_X^{1-\defect}, $$ because $\defect\in\{\ 0,1\ \}$, hence $$ h^0\big(\cE((q_X^{1-\defect}-2)h)\big)=h^1\big(\cE(-2h)\big)=0. $$ Notice that $q_X^{1-\defect}-2\ge-2$ and $q_X^{1-\defect}-2\ge-1$ unless $q_X^{1-\defect}=0$. The latter equality holds if and only if $(i_X,\defect)=(1,0)$. In this case $\cE$ is a $h$--instanton bundle if and only if $h^0\big(\cE_{norm,h}(h)\big)=0$. \end{proof} In what follows we briefly deal with the case $(i_X,\defect)=(1,0)$. In this case $\cO_X(h)\cong\omega_X^{-1}$ and $\cE_{norm,h}\cong\cE(-2h)$, hence $c_1(\cE_{norm,h})=-h$. Equality \eqref{RRgeneral} for $\cO_X(h)$ implies that $\cO_X(h)$ induces an embedding $X\subseteq\p{g_X+1}$ where $h^3=2g_X-2$, hence $g_X\ge3$ necessarily and $\cO_X(h)\cong\cO_X\otimes\cO_{\p{g_X+1}}(1)$. Thanks to equality \eqref{RRgeneral} for $\cE(-h)$, the inequality $h^1\big(\cE(-h)\big)=-\chi(\cE(-h))\ge0$ is equivalent to $c_2(\cE)h\ge 5g_X-1$, hence $c_2(\cE_{norm,h})h\ge g_X+3$. In \cite[Theorem 3.1]{B--F1} the existence of classical instanton bundles $\cF$ satisfying $g_X+3>c_2(\cF)h$ is proved when $X$ is ordinary. Thus the condition $h^0\big(\cE_{norm,h}(h)\big)=0$ is restrictive.
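For the reader's convenience, the passage from $c_2(\cE)h\ge5g_X-1$ to $c_2(\cE_{norm,h})h\ge g_X+3$ above only rests on the standard formula $c_2(\cE\otimes\mathcal L)=c_2(\cE)+c_1(\cE)c_1(\mathcal L)+c_1(\mathcal L)^2$ for a rank two bundle $\cE$ and a line bundle $\mathcal L$: since $\cE_{norm,h}\cong\cE(-2h)$, $c_1(\cE)=3h$ and $h^3=2g_X-2$, one obtains
$$
c_2(\cE_{norm,h})h=c_2(\cE)h-2h^3=c_2(\cE)h-2(2g_X-2)\ge g_X+3.
$$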
Nevertheless, $\omega_X^{-1}$--instanton bundles on ordinary prime Fano $3$--folds exist for each admissible quantum number as claimed in Theorem \ref{tPrime}. The base field $\bC$ is assumed in the results used in the proof below. For this reason, we make such a hypothesis in Theorem \ref{tPrime}. \medbreak \noindent{\it Proof of Theorem \ref{tPrime}.} We will prove the statement by induction on $k\ge0$, checking the existence of a vector bundle $\cE_k$ such that: \begin{itemize} \item $\cE_k$ is $\mu$--stable of rank $2$ with $c_1(\cE_k)=3h$, $c_2(\cE_k)h=5g_X-1+k$; \item $h^0\big(\cE_k(-h)\big)=h^1\big(\cE_k(-2h)\big)=0$; \item $h^i\big(\cE_k\otimes\cE_k^\vee\big)=0$ for $i\ge2$; \item $\cE_k\otimes\cO_L\cong\cO_{\p1}(1)\oplus\cO_{\p1}(2)$ for each general line $L\subseteq X$. \end{itemize} If such an $\cE_k$ exists, then equalities \eqref{Serre} and \eqref{RRgeneral} for $\cE_k$ imply $$ h^1\big(\cE_k(-h)\big)=-\chi(\cE_k(-h))=\quantum, $$ hence $\cE_k$ is a $h$--instanton bundle, thanks to Proposition \ref{pSpecial}. Since $\cE_k$ is $\mu$--stable, it follows that $h^0\big(\cE_k\otimes\cE_k^\vee\big)=1$, hence equality \eqref{RRgeneral} for $\cE_k\otimes\cE_k^\vee$ yields $h^1\big(\cE_k\otimes\cE_k^\vee\big)=4+g_X+2k$. Let us shortly recall how to prove the base step. If $k=0$, thanks to \cite[Lemma 3.8]{B--F1} there exists on $X$ a rank two $\mu$--stable bundle $\cE_0$ such that $h^1\big(\cE_0(-2h)\big)=0$, $c_1(\cE_0)=3h$, $c_2(\cE_0)h=5g_X-1$, $h^2\big(\cE_0\otimes\cE_0^\vee\big)=0$ and $\cE_0\otimes\cO_L\cong \cO_{\p1}(1)\oplus\cO_{\p1}(2)$ for the general line $L\subseteq X$. Looking at \cite[Proof of Theorem 3.1 for $d=g+3$ (p. 132)]{B--F1}, one also deduces that $\cE_0$ can be assumed both aCM and satisfying $h^0\big(\cE_0(-h)\big)=0$. By \cite[Theorem 3.1 (p.120)]{B--F1} we can also assume that $\cE_0$ corresponds to a smooth point in a component $M$ of dimension $4+g_X$ of its moduli space. The tangent space to $M$ at the point $\cE_0$ has dimension $h^1\big(\cE_0\otimes\cE_0^\vee\big)=4+g_X$. Since $\cE_0$ is $\mu$--stable, it follows that $h^0\big(\cE_0\otimes\cE_0^\vee\big)=1$. Thus equality \eqref{RRgeneral} finally returns $h^3\big(\cE_0\otimes\cE_0^\vee\big)=0$, hence the proof of the base step of the induction is complete. In order to prove the inductive step, we assume the existence of the vector bundle $\cE_k$ for some $k\ge0$ satisfying the list of properties above. Such properties are the same ones listed in the statement of \cite[Theorem 3.7]{B--F2}, except for the vanishing $h^0\big(\cE_k(-h)\big)=0$. The proof of the inductive step almost coincides with the proof of the inductive step in \cite[proof of Theorem 3.7]{B--F2}, the only additional property we have to check being the vanishing $h^0\big(\cE_k(-h)\big)=0$. In order to show that such a vanishing holds as well we recall the argument used in \cite{B--F2} for obtaining $\cE_{k+1}$. Let $L\subseteq X$ be a general line, hence $\cN_{L\vert X}\cong\cO_{\p1}\oplus\cO_{\p1}(-1)$. Consider the exact sequence $$ 0\longrightarrow\widehat{\cE}_{k+1}\longrightarrow{\cE_k}\longrightarrow\cO_L\otimes\cO_X(h)\longrightarrow0. $$ The sheaf $\widehat{\cE}_{k+1}$ is not a vector bundle, but it satisfies all the other required properties. Thus the same properties also hold for each general deformation $\cE_{k+1}$ of $\widehat{\cE}_{k+1}$ inside the moduli space of torsion--free coherent $\mu$--stable sheaves with Chern classes $c_1=3h$, $c_2h=c_2(\cE_k)h+1$, $c_3=c_3(\cE_k)=0$.
In the proof of \cite[Theorem 3.7]{B--F2} the authors show that $\cE_{k+1}$ is a vector bundle still satisfying the same properties. Moreover, if $h^0\big(\cE_k(-h)\big)=0$, then $h^0\big(\widehat{\cE}_{k+1}(-h)\big)=0$, hence $h^0\big({\cE}_{k+1}(-h)\big)=0$ by semicontinuity. In particular $\cE_{k+1}$ is a $h$--instanton sheaf. Notice that $h^3\big(\cE_{k+1}\otimes\cE_{k+1}^\vee\big)=0$ (e.g. see \cite[Lemma 2.1]{C--C--G--M}). Moreover, the $\mu$-stability of $\cE_{k+1}$ yields $h^0\big(\cE_{k+1}\otimes\cE_{k+1}^\vee\big)=1$. In the proof of the inductive step in \cite[proof of Theorem 3.7]{B--F2} the authors also check that $h^2\big(\cE_{k+1}\otimes\cE_{k+1}^\vee\big)=0$, thus $$ h^1\big(\cE_{k+1}\otimes\cE_{k+1}^\vee\big)=4+g_X+2k $$ (see \cite[Equalities (3.15) and (3.20)]{B--F2}). In particular the proof of the inductive step is complete, hence the statement is proved by induction. \qed \medbreak \begin{remark} The bundles defined in Theorem \ref{tPrime} are simple. Thus they correspond to smooth points in the moduli space $\cS_X(2;3h,5g_X-1+k)$ of rank two simple sheaves $\cE$ with $c_1(\cE)=3h$ and $c_2(\cE)h=5g_X-1+k$. If we denote by $\cS\cI_X(3h,5g_X-1+k)$ the locus of points in $\cS_X(2;3h,5g_X-1+k)$ representing such bundles, we deduce that every such point is contained in a generically smooth component of $\cS\cI_X(3h,5g_X-1+k)$ of dimension $4+g_X+2k$. \end{remark} \section{Instanton bundles of low rank on sextic del Pezzo $3$--folds} \label{sSextic} If $X$ is a Fano $3$--fold with fundamental line bundle $\cO_X(h)$ and such that $\varrho_X\ge2$, then the notions of classical and $h$--instanton bundle can be considerably different. In order to give examples of the possible pathologies and to complete the results of the previous section when $i_X=2$, we deal with the del Pezzo $3$--folds of degree $6$, i.e. the flag $3$--fold and the image of the Segre embedding $\p1\times\p1\times\p1\subseteq\p7$. Classical instanton bundles on the flag $3$--fold $X$ have been described in \cite{M--M--PL} (see also \cite{C--C--G--M}). Let $p_i\colon X\to\p2$, $i=1,2$, be the projections and set $\cO_X(h_i):=p_i^*\cO_{\p2}(1)$, $i=1,2$. The group $\Pic(X)$ is freely generated by such line bundles and $$ A(X)\cong\frac{\bZ[h_1,h_2]}{(h_1^3,h_2^3,h_1^2-h_1h_2+h_2^2)} $$ (for the details see \cite{M--M--PL}). The fundamental line bundle is $\cO_X(h)=\cO_X(h_1+h_2)$. \begin{proposition} \label{pFlag1} Let $X$ be the flag $3$--fold and let $\cO_X(h)$ be its fundamental line bundle. A line bundle $\mathcal L$ on $X$ is a $h$--instanton bundle with defect $\defect$ if and only if there is an integer $a\ge1$ such that $\mathcal L\cong \mathcal L_a:=\cO_X(-ah_1+(a+2-\defect)h_2)$ up to permutations of the $h_i$'s. In particular $\quantum(\mathcal L_a)=\frac{(2-\defect)}2a(a+2-\defect)$. \end{proposition} \begin{proof} Let $\cE:=\cO_X(a_1h_1+a_2h_2)$ be a $h$--instanton bundle on $X$: without loss of generality we assume $a_1\le a_2$. Then $a_1+a_2=2-\defect$ by Theorem \ref{tSlope}, hence $a_2=-a_1+2-\defect$. The vanishing $h^0\big(\cE(-h)\big)=0$ and \cite[Proposition 2.5]{C--F--M2} yields $a_1=-a$ for some $a\ge0$. The same proposition also yields that the other vanishings in Theorem \ref{tSlope} are fulfilled, hence every such line bundle is a $h$--instanton bundle. The value $\quantum(\mathcal L_a)$ is obtained via \cite[Proposition 2.5]{C--F--M2}. 
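As a consistency check, recall that $\chi\big(\cO_X(\alpha h_1+\beta h_2)\big)=\frac{(\alpha+1)(\beta+1)(\alpha+\beta+2)}2$ for all $\alpha,\beta\in\bZ$ (e.g. via the Weyl dimension formula for $\mathrm{SL}_3$, of which $X$ is the complete flag variety), whence
$$
-\chi\big(\mathcal L_a(-h)\big)=-\chi\big(\cO_X(-(a+1)h_1+(a+1-\defect)h_2)\big)=\frac{(2-\defect)}2a(a+2-\defect),
$$
in agreement with the stated value of $\quantum(\mathcal L_a)$.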
\end{proof} \begin{example} \label{eFlagExtension} If $a\ge b+2$ we have $h^1\big(\cO_X((b-a)h_1-bh_2+ah_3)\big)\ne0$ by \cite[Proposition 2.5]{C--F--M2}. It follows that there exists a non--trivial extension $$ 0\longrightarrow\cO_X(-ah_1+(a+2-\defect)h_2)\longrightarrow\cE\longrightarrow\cO_X(-bh_1+(b+2-\defect)h_2)\longrightarrow0. $$ We show below that $\cE$ is $\mu$--semistable and simple, hence indecomposable; moreover, it is non--orientable because $c_1(\cE)\ne2h$. To this purpose, we first notice that all the sheaves in the above sequence have the same slope $3(2-\defect)$. Assume that $\mathcal L\subseteq\cE$ is a destabilizing subsheaf of maximal slope, so that $\mu_h(\mathcal L)>\mu_h(\cE)$. Set $\mathcal Q:=\cE/\mathcal L$ and let $\cM$ be the kernel of the natural quotient morphism $\cE\to\mathcal Q/\mathcal T$, where $\mathcal T\subseteq\mathcal Q$ is the torsion subsheaf. Trivially $\cM$ is torsion--free and of rank $1$ and $\mu_h(\cM)\ge\mu_h(\mathcal L)>\mu_h(\cE)$. Since $\cE/\cM$ is torsion--free, it follows that $\cM$ is also normal thanks to \cite[Lemma II.1.1.16]{O--S--S}. Thus \cite[Lemma II.1.1.12]{O--S--S} implies that $\cM$ is reflexive, hence it is a line bundle by \cite[Lemma II.1.1.15]{O--S--S}. By composition we obtain a morphism $\mathcal M\to\cO_X(-bh_1+(b+2-\defect)h_2)$ which is zero by slope reasons. It follows that the inclusion $\mathcal M\subseteq\cE$ factors through an inclusion $\mathcal M\subseteq\mathcal L_a$, a contradiction again by slope reasons. We deduce that $\cE$ is $\mu$--semistable. Moreover, $\cE$ is also simple, hence indecomposable: this assertion follows from the non--triviality of the above extension and standard arguments (see \cite[Lemma 4.2]{C--H2} for a brief and explicit statement in an even more general setting). \end{example} We close our analysis of $h$--instanton bundles on the flag $3$--fold by proving Theorem \ref{tFlag2} stated in the introduction. \medbreak \noindent{\it Proof of Theorem \ref{tFlag2}.} We have $h^0\big(\cE(-h)\big)=h^1\big(\cE(-2h)\big)=0$. Moreover, $c_1(\cE)=2h$ because $\cE$ is ordinary and orientable, hence $\mu_h(\cE)=6$. If $\cE$ is not $\mu$--semistable, then the same argument used in Example \ref{eFlagExtension} yields the existence of a maximal destabilizing line bundle $\cM:=\cO_X(\alpha_1 h_1+\alpha_2 h_2)\subseteq\cE(-h)$ for some $\alpha_1,\alpha_2\in\bZ$ such that $3(\alpha_1+\alpha_2)=\mu_h(\cM)\ge1>0=\mu_h(\cE(-h))$ and $\alpha_1\le\alpha_2$. Moreover, its maximality implies the existence of an exact sequence \begin{equation} \label{seqFlag} 0\longrightarrow\cO_X(\alpha_1 h_1+\alpha_2 h_2)\longrightarrow\cE(-h)\longrightarrow\cI_{Z\vert X}(-\alpha_1 h_1-\alpha_2 h_2)\longrightarrow0 \end{equation} where either $Z\subseteq X$ has pure codimension $2$ or $Z=\emptyset$. On the one hand $h^0\big(\cE(-h)\big)=0$, hence $\alpha_1\le-1$ and $3\alpha_2\ge-3\alpha_1+1$. Thus $\alpha_2\ge2$, hence $$ h^0\big(\cI_{Z\vert X}(-(\alpha_1+1) h_1-(\alpha_2+1) h_2)\big)\le h^0\big(\cO_X(-(\alpha_1+1) h_1-(\alpha_2+1) h_2)\big)=0. $$ The cohomology of sequence \eqref{seqFlag} tensored by $\cO_X(-h)$ finally yields $$ h^1\big(\cO_X((\alpha_1-1)h_1+(\alpha_2-1) h_2)\big)\le h^1\big(\cE(-2h)\big). $$ On the other hand $\alpha_1-1\le-2$ and $(\alpha_1-1)+(\alpha_2-1)+1\ge0$. Thus \cite[Proposition 2.5]{C--F--M2} implies that the left--hand member of the above inequality is positive, hence the same is true for the right--hand member, contradicting Definition \ref{dMalaspinion}. We deduce that $\cE$ must be $\mu$--semistable.
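For the reader's convenience, the slopes used in this proof and in Example \ref{eFlagExtension} can be computed directly in $A(X)$: the relation $h_1^2+h_2^2=h_1h_2$ gives $h^2=(h_1+h_2)^2=3h_1h_2$, and $h_1^2h_2=h_1h_2^2$ is the class of a point since $h^3=6$. Hence
$$
\mu_h\big(\cO_X(\alpha_1h_1+\alpha_2h_2)\big)=(\alpha_1h_1+\alpha_2h_2)h^2=3(\alpha_1+\alpha_2),\qquad \mu_h(\cE)=\tfrac12\,c_1(\cE)h^2=6
$$
whenever $c_1(\cE)=2h$.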
Notice that if the $h$--instanton bundle $\cE$ is indecomposable, then all the hypotheses of \cite[Proposition 2.4]{A--C--G} are fulfilled by $\cE(-h)$, hence $\cE$ is also simple. \qed \medbreak There are even more pathologies when $X=\p1\times\p1\times\p1$: classical instanton bundles on $X$ have been studied in \cite{A--M1}. In this case, there are three projections $p_i\colon X\to\p1$, $i=1,2,3$, and we set $\cO_X(h_i):=p_i^*\cO_{\p1}(1)$, $i=1,2,3$. Thus $\Pic(X)$ is freely generated by such line bundles, $$ A(X)\cong\frac{\bZ[h_1,h_2,h_3]}{(h_1^2,h_2^2,h_3^2)} $$ and the fundamental line bundle is $\cO_X(h_1+h_2+h_3)$. \begin{proposition} \label{pSegre1} Let $X\cong \p1\times\p1\times\p1$ and let $\cO_X(h)$ be its fundamental line bundle. A line bundle $\mathcal L$ on $X$ is a $h$--instanton bundle if and only if there is an integer $a\ge1$ such that $\mathcal L\cong \mathcal L_a:=\cO_X(-ah_1+h_2+(2+a)h_3)$ up to permutations of the $h_i$'s. In particular, it is ordinary and $\quantum(\mathcal L_a)=a(a+2)$. \end{proposition} \begin{proof} Let $\mathcal L:=\cO_X(a_1h_1+a_2h_2+a_3h_3)$ be a $h$--instanton line bundle on $X$: without loss of generality we assume $a_1\le a_2\le a_3$. The condition on $c_1(\mathcal L)h^{2}$ in Definition \ref{dMalaspinion} implies $2(a_1+a_2+a_3)=3(2-\defect)$, hence $\defect=0$ necessarily and $a_3=3-a_1-a_2$. The vanishings $h^0\big(\mathcal L(-h)\big)=h^1\big(\mathcal L(-2h)\big)=0$ and the K\"unneth formulas imply $a_1\le0$ and $a_2\le1$. The vanishings $h^3\big(\mathcal L(-3h)\big)=h^2\big(\mathcal L(-2h)\big)=0$ and the K\"unneth formulas imply $a_3\ge2$ and $a_2\ge1$. Thus, $\mathcal L\cong\mathcal L_a$ where $a:=-a_1$. The K\"unneth formulas return the value of the quantum number. \end{proof} There exist many non--orientable $h$--instanton bundles $\cE$ on such an $X$. \begin{example} \label{eSegreExtension} If $b\ge2$ and $b\ge a\ge0$ we have $h^1\big(\cO_X((b-a)h_1-bh_2+ah_3)\big)\ne0$, hence there are non--trivial extensions of the form $$ 0\longrightarrow\cO_X(-ah_1+h_2+(1+a)h_3)\longrightarrow\cE\longrightarrow\cO_X(-bh_1+(1+b)h_2+h_3)\longrightarrow0. $$ One can check as in Example \ref{eFlagExtension} that such $\cE$'s are $\mu$--semistable and simple (hence indecomposable). \end{example} \begin{example} \label{eSegreDeform} In \cite{C--F--M1} the existence of two irreducible families of pairwise non--isomorphic rank two Ulrich bundles $\cE_0$ with $c_1(\cE_0)=h_1+2h_2+3h_3$ is proved: moreover, the general one in each family is $\mu$--stable. Such bundles are ordinary $h$--instanton bundles thanks to Corollary \ref{cUlrich}. For each positive integer $\quantum$ one can then construct by deformation ordinary, $\mu$--stable, $h$--instanton bundles $\cE$ with $c_1(\cE)=h_1+2h_2+3h_3$, starting from the aforementioned $\cE_0$ and deforming the kernel of a general morphism $\cE_0(-h)\to\cO_L$ where $L$ is a general line with class $h_1h_3\in A^2(X)$, using the same argument as in the proof of Theorem \ref{tPrime}. Notice that such bundles, being $\mu$--stable, are simple. \end{example} We show below that there are rank two, orientable, ordinary, simple, unstable $h$--instanton bundles $\cE$ on $X$ with arbitrarily large quantum number. \begin{example} \label{eSegreStable} Let $L\subseteq X$ be the complete intersection of general divisors in $\vert h_2\vert$ and $\vert h_3\vert$. We have $L\cong\p1$, because $Lh=1$, and we can find $s\ge0$ pairwise disjoint curves of this type on $X$: let $Z$ be their union.
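For the reader's convenience, the degree count for $L$: its class in $A(X)$ is $h_2h_3$, hence
$$
Lh=h_2h_3(h_1+h_2+h_3)=h_1h_2h_3=1,
$$
because $h_2^2=h_3^2=0$.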
We claim that $\det(\cN_{Z\vert X})\cong \cO_Z\otimes\cO_X(2h_2-4h_3)$. In order to prove such an isomorphism it suffices to check that the two line bundles are trivial when restricted to each connected component, because they are all isomorphic to $\p1$. Theorem \ref{tSerre} implies the existence of a rank two vector bundle $\cE$ fitting into the exact sequence \begin{equation} \label{seqSegreStable} 0\longrightarrow\cO_X(h_1+3h_3)\longrightarrow\cE\longrightarrow\cI_{Z\vert X}(h_1+2h_2-h_3)\longrightarrow0 \end{equation} We can assume that sequence \eqref{seqSegreStable} does not split. This is for free if $s>0$, while if $s=0$ it follows from the equality $h^1\big(\cO_X(-2h_2+4h_3)\big)=5$. We have $h^0\big(\cE(-h)\big)=h^1\big(\cE(-2h)\big)=0$ and equality \eqref{Cyclic} is fulfilled because $c_1(\cE)=2h$, hence $\cE$ is a rank two orientable, ordinary $h$--instanton bundle by Proposition \ref{pSpecial}. Moreover, the cohomologies of sequences \eqref{seqSegreStable} tensored by $\cO_X(-h)$ and \eqref{seqStandard} tensored by $\cO_X(h_2-2h_3)$ imply that the quantum number is $\quantum:=s+2$. Since $\mu_h(\cO_X(h_1+3h_3))=8>6=\mu_h(\cE)$, it follows that $\cE$ is not $\mu$--semistable. Thus, $\cE$ is obtained neither as in Examples \ref{eSegreExtension} and \ref{eSegreDeform}, nor possibly via the method described in the proof of Theorem \ref{tPrime} applied to $X$. We claim that $\cE$ is simple. If $s=0$ the claim is standard, because sequence \eqref{seqSegreStable} is assumed non--split (see \cite[Lemma 4.2]{C--H2}). If $s>0$, it suffices to check $h^0\big(\cE\otimes\cE^\vee\big)\le1$. Sequence \eqref{seqSegreStable} tensored by $\cE^\vee\cong\cE(-2h)$ yields \begin{equation} \label{Simple} h^0\big(\cE\otimes\cE^\vee\big)\le h^0\big(\cE(-h_1-2h_2+h_3)\big)+h^0\big(\cI_{Z\vert X}\otimes\cE(-h_1-3h_3)\big). \end{equation} Sequence \eqref{seqSegreStable} tensored by $\cO_X(-h_1-2h_2+h_3)$ yields $$ h^0\big(\cE(-h_1-2h_2+h_3)\big)\le h^0\big(\cO_X(-h_1-2h_2+h_3)\big)+h^0\big(\cI_{Z\vert X}\big)=0. $$ Sequences \eqref{seqStandard} tensored by $\cE(-h_1-3h_3)$ and \eqref{seqSegreStable} by $\cO_X(-h_1-3h_3)$ return $$ h^0\big(\cI_{Z\vert X}\otimes\cE(-h_1-3h_3)\big)\le h^0\big(\cE(-h_1-3h_3)\big)\le h^0\big(\cO_X\big)+h^0\big(\cI_{Z\vert X}(2h_2-4h_3)\big). $$ Trivially $h^0\big(\cI_{Z\vert X}(2h_2-4h_3)\big)=0$, hence $h^0\big(\cI_{Z\vert X}\otimes\cE(-h_1-3h_3)\big)\le1$. We deduce that inequality \eqref{Simple} leads to $h^0\big(\cE\otimes\cE^\vee\big)\le1$ and the claim follows. \end{example} \section{Existence of rank two orientable ordinary\\ instanton bundles on scrolls over smooth curves} \label{sScrollCurve} We already pointed out in Section \ref{sMonad} the importance of ordinary instanton bundles on scrolls over smooth curves. In this section we will prove the existence of rank two orientable, ordinary instanton bundles on such $n$--folds. For the notation we refer the reader to Section \ref{sMonad}. Let $X\subseteq \p N$ be a scroll of dimension $n$ over a smooth curve $B$ and let $g:=p_a(B)$ be its genus. It is well--known that if $\theta\in \Pic^{g-1}(B)$ is a non--effective theta--characteristic, then $\cO_X(h+\theta f)$ and $\cO_X((\frak g+\theta)f)$ are both Ulrich line bundles, hence $h$--instanton bundles with zero quantum number. In what follows we will construct $h$--instanton bundles of rank $2$ with arbitrarily large quantum number. Each hyperplane $L$ in a fibre of $\pi$ is cut on that fibre by a divisor in $\vert h\vert$.
In particular the class of $L$ inside $A(X)$ is $hf$. Thus there is an exact sequence $$ 0\longrightarrow\cO_X(-h-f)\longrightarrow\cO_X(-h)\oplus\cO_X(-f)\longrightarrow\cI_{L\vert X}\longrightarrow0, $$ it follows that $\cN_{L\vert X}\cong\cO_{\p{n-2}}\oplus\cO_{\p{n-2}}(1)$. In particular \begin{equation} \label{EZ-g-e} h^i\big(\cN_{L\vert X}\big)=\left\lbrace\begin{array}{ll} n\quad&\text{if $i=0$,}\\ 0\quad&\text{if $i\ne0$.} \end{array}\right. \end{equation} \begin{construction} \label{conScrollCurve} Let $\cG$ be a vector bundle of rank ${n}\ge3$ on a smooth curve $B$ and set $X:=\bP(\cG)$. Assume that $\cO_X(h):=\cO_{\bP(\cG)}(1)$ is an ample and globally generated line bundle. For each $k\ge0$ take general points $b_i\in B$, hyperplanes $L_i\subseteq\pi^{-1}(b_i)\cong\p{n-1}$ for $1\le i\le k$ and set $Z:=\bigcup_{i=1}^kL_i$. The definition of $Z$ implies $\det(\cN_{Z\vert X})\cong\cO_Z\otimes\cO_X(h-\frak g f)$ by the adjunction formula. Since Lemma \ref{lDerived} implies $h^i\big( \cO_X(-h+\frak g f)\big)=0$, it follows that Theorem \ref{tSerre} yields the existence of an exact sequence \begin{equation} \label{seqScrollCurve} 0\longrightarrow\cO_X((\frak g+\theta)f)\longrightarrow\cE\longrightarrow\cI_{Z\vert X}(h+\theta f)\longrightarrow0 \end{equation} where $\theta\in \Pic^{g-1}(B)$ is a non--effective theta--characteristic. \end{construction} Notice that $$ c_1(\cE)=h+(\frak g+K_B)f=(n+1)h+K_X,\qquad c_2(\cE)=c_2^{B,\cG,k}:=(\frak g+\theta+k)hf $$ for the bundles obtained via the above construction. We are ready to prove Theorem \ref{tScrollCurve} stated in the Introduction. \medbreak \noindent{\it Proof of Theorem \ref{tScrollCurve}.} Sequences \eqref{seqScrollCurve} and \eqref{seqStandard} yield $$ -\chi(\cE(-h))=-\chi(\cO_X(-h+(\frak g+\theta)f))-\chi(\cO_X(\theta f))+\chi(\cO_Z\otimes\cO_X(\theta f)). $$ Lemma \ref{lDerived} and equality \eqref{RRcurve} for $\cO_B(\theta)$ imply $$ \chi(\cO_X(-h+(\frak g+\theta)f))=\chi(\cO_X(\theta f))=0. $$ Since $\cO_Z\otimes\cO_X(\theta f)\cong\cO_Z$, it finally follows that $-\chi(\cE(-h))=k$. The cohomology of sequence \eqref{seqScrollCurve} tensored by $\cO_X(th)$ yields $$ h^i\big(\cE(th)\big)\le h^i\big(\cO_X(th+(\frak g+\theta)f)\big)+h^i\big(\cI_{Z\vert X}((t+1)h+\theta f)\big). $$ First, we show that the summands on the right are both zero if $2\le i\le n-2$ and $-n\le t\le -1$. From Lemma \ref{lDerived}, we have $h^i\big(\cO_X(th+(\frak g+\theta)f)\big)=0$ if $1-n\le t\le -1$. If $t=-n$, then equality \eqref{Serre} yields $$ h^i\big(\cO_X(th+(\frak g+\theta)f)\big)=h^i\big(\cO_X(-nh+(\frak g+\theta)f)\big)=h^{n-i}\big(\cO_X(\theta f)\big). $$ The latter dimension is zero again by Lemma \ref{lDerived} and $2\le i\le n-2$. Now consider the second summand on the right. The same argument as above also gives $h^i\big(\cO_X((t+1)h+\theta f)\big)=0$ for $i\ge2$ and $-n-1\le t\le-1$. Moreover, if $L$ is any component of $Z$, then $$ h^{i-1}\big(\cO_Z\otimes\cO_X((t+1)h+\theta f)\big)=k h^{i-1}\big(\cO_{L}(t+1)\big)=0 $$ for $2\le i\le n-2$ and $-n\le t\le -1$. Thus the cohomology of sequence \eqref{seqStandard} tensored by $\cO_X((t+1)h+\theta f)$ yields $h^i\big(\cI_{Z\vert X}((t+1)h+\theta f)\big)=0$ in the same range. Similarly one checks that $h^1\big(\cE(-2h)\big)=0$. The same argument, sequence \eqref{seqStandard} tensored by $\cO_X(\theta f)$ and the choice of $\theta$ yield $$ h^0\big(\cE(-h)\big)\le h^0\big(\cI_{Z\vert X}(\theta f)\big)\le h^0\big(\cO_X(\theta f)\big)=0. 
$$ Since $c_1(\cE)=(n+1)h+K_X$, it follows that $\cE$ is an ordinary, orientable $h$--instanton bundle by Proposition \ref{pSpecial}. Since $\mu_h(\cO_X((\frak g+\theta)f))=\mu_h(\cI_{Z\vert X}(h+\theta f))$, it follows that the assertion on the $\mu$--semistability can be proved with the same argument used in Example \ref{eFlagExtension}. Thus, the proof of the statement is then complete. \qed \medbreak We now deal with the decomposability of the bundles in Construction \ref{conScrollCurve}. \begin{proposition} \label{pSplit} Let $\cG$ be a vector bundle of rank $n\ge3$ on a smooth curve $B$ and set $X:=\bP(\cG)$. Assume that $\cO_X(h):=\cO_{\bP(\cG)}(1)$ is an ample and globally generated line bundle. If $\cE$ is the vector bundle defined in Construction \ref{conScrollCurve}, then it is decomposable if and only if $k=0$. In this case $\cE\cong\cO_X((\frak g+\theta)f)\oplus\cO_X(h+\theta f)$ and sequence \eqref{seqScrollCurve} splits. \end{proposition} \begin{proof} Let $\cE\cong\mathcal L\oplus\cM$ where $\mathcal L,\cM\in\Pic(X)$. Thanks to Theorem \ref{tScrollCurve} $\cE$ is $\mu$--semistable, then $\mu_h(\mathcal L)=\mu_h(\cM)=\mu_h(\cE)=\deg(\frak g)+g-1$. If neither $\mathcal L$ nor $\cM$ is isomorphic to $\cO_X((\frak g+\theta)f)$, then the map $\cO_X((\frak g+\theta)f)\to\cE$ should be zero. Thus we can assume $\mathcal L\cong \cO_X((\frak g+\theta)f)$. It follows the existence of a surjective morphism $\varphi\colon\cM\to\cI_{Z\vert X}(h+\theta f)$. Composing such morphism with the inclusion $\psi\colon \cI_{Z\vert X}(h+\theta f)\to\cO_X(h+\theta f)$ we deduce $\cM\cong\cO_X(h+\theta f)$. Thus $\psi\varphi$ is an isomorphism, hence $\varphi$ is also injective. Thus $\cI_{Z\vert X}(h+\theta f)=\cO_X(h+\theta f)$, i.e. $Z=\emptyset$ or, in other words, $k=0$. Conversely, if $k=0$, then the extensions of $\cO_X(h+\theta f)$ with $\cO_X((\frak g+\theta)f)$ are parameterized by the sections of $H^1\big(\cO_X(-h+\frak g f)\big)$ up to scalars. Lemma \ref{lDerived} shows that such a space is zero, hence sequence \eqref{seqScrollCurve} always splits when $k=0$. \end{proof} In what follows we deal with the parameter space of the bundles obtained via Construction \ref{conScrollCurve}. Thus we need to compute $h^i\big(\cE\otimes\cE^\vee\big)$ when $\cE$ is indecomposable. \begin{proposition} \label{pExt} Let $\cG$ be a vector bundle of rank ${n}\ge3$ on a smooth curve $B$ and set $X:=\bP(\cG)$. Assume that $\cO_X(h):=\cO_{\bP(\cG)}(1)$ is an ample and globally generated line bundle. If $\cE$ is the bundle defined in Construction \ref{conScrollCurve} and $k\ge1$, then $\cE$ is simple, $$ h^1\big(\cE\otimes\cE^\vee\big)=(n-1)\deg(\frak g)+(n+1)(g-1)+2nk, $$ and $h^i\big(\cE\otimes\cE^\vee\big)=0$ for $i\ge2$. \end{proposition} \begin{proof} Assume that $k\ge1$. Thus the definition of $Z$, the cohomology of sequence \eqref{seqScrollCurve} tensored by $\cO_X(-h-\theta f)$ and Lemma \ref{lDerived} imply \begin{equation} \label{E-h-f} h^i\big(\cE(-h-\theta f)\big)=h^i\big(\cI_{Z\vert X}\big)=\left\lbrace\begin{array}{ll} k-1\quad&\text{if $i=1$,}\\ 0\quad&\text{if $i\ne1$.} \end{array}\right. \end{equation} On the one hand $(h-\frak g f)fh^{n-2}=1$, hence $\cO_X(h-\frak g f)\not\cong\cO_X$. On the other hand $(h-\frak g f)h^{n-1}=0$: it follows that $h^0\big(\cO_X(h-\frak g f)\big)=0$, thanks to the Nakai--Moishezon criterion (see \cite[Theorem A.5.1]{Ha2}). 
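For the reader's convenience, the two intersection numbers above follow from the relations $f^2=0$ (distinct fibres of $\pi$ are disjoint), $fh^{n-1}=1$ (the restriction of $\cO_X(h)$ to a fibre of $\pi$ is $\cO_{\p{n-1}}(1)$) and $h^n=\deg(\frak g)$: indeed
$$
(h-\frak g f)fh^{n-2}=fh^{n-1}=1,\qquad (h-\frak g f)h^{n-1}=h^n-\deg(\frak g)\,fh^{n-1}=0.
$$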
Moreover, Lemma \ref{lDerived} yields $h^i\big(\cO_X(h-\frak g f)\big)=h^i\big(\cG(-\frak g)\big)$ for $i\ge1$, hence it vanishes for $i\ge2$. Equality \eqref{RRcurve} on the curve $B$ for $\cG(-\frak g)$ finally returns $h^1\big(\cO_X(h-\frak g f)\big)$. Thus \begin{equation} \label{Xh-G} h^i\big(\cO_X(h-\frak g f)\big)=\left\lbrace\begin{array}{ll} (n-1)\deg(\frak g)+n(g-1)\quad&\text{if $i=1$,}\\ 0\quad&\text{if $i\ne1$.} \end{array}\right. \end{equation} Since $Z:=\bigcup_{i=1}^kL_i$ where $L_i\cong\p{n-2}$ and $\cO_{L_i}\otimes\cO_X(h-\frak g f)\cong\cO_{\p{n-2}}(1)$ it follows that \begin{equation} \label{Zh-G} h^i\big(\cO_Z\otimes\cO_X(h-\frak g f)\big)=\left\lbrace\begin{array}{ll} (n-1)k\quad&\text{if $i=0$,}\\ 0\quad&\text{if $i\ne0$.} \end{array}\right. \end{equation} The cohomology of sequence \eqref{seqStandard} tensored by $\cO_X(h-\frak g f)$ and equalities \eqref{Xh-G} and \eqref{Zh-G} yield \begin{equation} \label{IZh-G} h^i\big(\cI_{Z\vert X}(h-\frak g f)\big)=\left\lbrace\begin{array}{ll} \begin{aligned} n&(g-1)\\ &+(n-1)(\deg(\frak g)+k) \end{aligned} \quad&\text{if $i=1$,}\\ \,0\quad&\text{if $i\ne1$.} \end{array}\right. \end{equation} The cohomology of sequence \eqref{seqScrollCurve} tensored by $\cO_X(-(\frak g+\theta) f)$ and equalities \eqref{IZh-G} yield \begin{equation} \label{E-g-G} h^i\big(\cE(-(\frak g+\theta) f)\big)=\left\lbrace\begin{array}{ll} \,1\quad&\text{if $i=0$,}\\ \begin{aligned} n&(g-1)\\ &+(n-1)(\deg(\frak g)+k)+g \end{aligned}\quad&\text{if $i=1$,}\\ \,0\quad&\text{if $i\ne0,1$.} \end{array}\right. \end{equation} Notice that $\cO_Z\otimes\cE(-(\frak g+\theta)f)\cong\cN_{Z\vert X}$ thanks to equality \eqref{Normal}. The cohomology of sequence \eqref{seqStandard} tensored by $\cE(-(\frak g+\theta) f)$ and equalities \eqref{E-g-G} and \eqref{EZ-g-e} yield \begin{equation} \label{IZE-g-e} h^i\big(\cI_{Z\vert X}\otimes\cE(-(\frak g+\theta) f)\big)=\left\lbrace\begin{array}{ll} \,x\quad&\text{if $i=0$,}\\ {\begin{aligned} x&+(n-1)\deg(\frak g)\\ &+(n+1)(g-1)\\ &+(2n-1)k \end{aligned}}\quad&\text{if $i=1$,}\\ \,0\quad&\text{if $i\ne0,1$,} \end{array}\right. \end{equation} where $x\le 1$. The cohomology of sequence \eqref{seqScrollCurve} tensored by $\cE^\vee$, Theorem \ref{tScrollCurve} and equalities \eqref{IZE-g-e} and \eqref{E-h-f} finally yield the statement. \end{proof} If $k=0$, then Construction \ref{conScrollCurve} leads to a decomposable bundle. Nevertheless also in this case there exist indecomposable $h$--instanton bundles. \begin{proposition} \label{pExt0} Let $\cG$ be a vector bundle of rank ${n}\ge3$ on a smooth curve $B$ and set $X:=\bP(\cG)$. Assume that $\cO_X(h):=\cO_{\bP(\cG)}(1)$ is an ample and globally generated line bundle. There exist bundles fitting into a non--split exact sequence \begin{equation} \label{seqScrollCurve0} 0\longrightarrow\cO_X(h+\theta f)\longrightarrow\cE\longrightarrow\cO_X((\frak g+\theta)f)\longrightarrow0 \end{equation} Each such a bundle is a rank two orientable, ordinary, $\mu$--semistable, simple $h$--instanton bundle. Moreover, $$ h^1\big(\cE\otimes\cE^\vee\big)=(n-1)\deg(\frak g)+(n+1)(g-1)+g. $$ and $h^i\big(\cE\otimes\cE^\vee\big)=0$ for $i\ge2$. \end{proposition} \begin{proof} Thanks to Lemma \ref{lDerived} and equality \eqref{RRcurve} we have $$ \chi(\cO_X(h-\frak g f))=\chi(\cG(-\frak g))=(1-n)\deg(\frak g)+n(1-g). $$ Since $h^n=\deg(\frak g)$, it follows that $h^1\big(\cO_X(h-\frak g f)\big)>0$, hence sequence \eqref{seqScrollCurve0} can be assumed non--split. 
Taking into account that $\mu_h(\cO_X(h+\theta f))=\mu_h(\cO_X((\frak g+\theta)f))$ and \cite[Lemma 4.2]{C--H2}, we deduce that $\cE$ is simple. Moreover, $\cE$ is an extension of Ulrich line bundles, hence it is an ordinary $\mu$--semistable, $h$--instanton bundle. Essentially the same argument used in the proof of Proposition \ref{pExt} also returns the values of $h^i\big(\cE\otimes\cE^\vee\big)$ for $i\ge1$. \end{proof} \begin{remark} Construction \ref{conScrollCurve} and Proposition \ref{pExt0} yield the existence of rational maps from $\Lambda^k$ and $\vert \cO_X(h-\frak g f)\vert$ to the moduli space $\cS_X$ of simple bundles on $X$ if $k\ge1$ and $k=0$ respectively. It follows that all the indecomposable bundles obtained via the above constructions represent smooth points in one and the same component $\cS^0_X$. Thus such an $\cS^0_X$ is generically smooth and its dimension is $$ (n-1)\deg(\frak g)+(n+1)(g-1)+2nk+\epsilon g $$ where $\epsilon=1$ if $k=0$ and $0$ otherwise. \end{remark} \begin{thebibliography}{44} \bibitem{A--O1} V. Ancona, G. Ottaviani: {\em Some applications of Beilinson's theorem to projective spaces and quadrics}. Forum Math. \textbf{3} (1991), 157--176. \bibitem{A--C--G} V. Antonelli, G. Casnati, O. Genc: \emph{Even and odd instanton bundles on Fano threefolds}. Available at arXiv:2105.00632 [math.AG], to appear in Asian J. Math. \bibitem{A--M1}V. Antonelli, F. Malaspina: {\em Instanton bundles on the Segre threefold with Picard number three}. Math. Nachr. \textbf{293} (2020), 1026--1043. \bibitem{A--M2} V. Antonelli, F. Malaspina: {\em $H$--instanton bundles on three-dimensional polarized projective varieties}. J. Algebra \textbf{598} (2022), 570--607. \bibitem{A--H--M--PL} M. Aprodu, S. Huh, F. Malaspina, J. Pons-Llopis: {\em Ulrich bundles on smooth projective varieties of minimal degree}. Proc. Amer. Math. Soc. \textbf{147} (2019), 5117--5129. \bibitem{Ap--Kim} M. Aprodu, Y. Kim: {\em Ulrich line bundles on Enriques surfaces with a polarization of degree four}. Ann. Univ. Ferrara Sez. VII Sci. Mat. \textbf{63} (2017), 9--23. \bibitem{Ar} E. Arrondo: \emph{A home--made Hartshorne--Serre correspondence}. Comm. Alg. \textbf{ 20} \rm (2007), 423--443. \bibitem{A--S} E. Arrondo, I. Sols: {\em Classification of smooth congruences of low degree}. J. Reine Angew. Math. \textbf{393} (1989), 199--219. \bibitem{A--W} M.F. Atiyah, R.S. Ward: \emph{Instantons and algebraic geometry}. Comm. Math. Phys. \textbf{55} (1977), 117--124. \bibitem{Bea} A. Beauville: \emph{Determinantal hypersurfaces}. Michigan Math. J. \textbf{48} \rm(2000), 39--64. \bibitem{Bea6} A. Beauville: \emph{An introduction to Ulrich bundles}. Eur. J. Math. \textbf{4} (2018), 26--36. \bibitem{B--F1} M.C. Brambilla, D. Faenzi: {\em Moduli spaces of rank--$2$ ACM bundles on prime Fano threefolds}. Michigan Math. J. \textbf{60} (2011), 113--148. \bibitem{B--F2} M.C. Brambilla, D. Faenzi: {\em Vector bundles on Fano threefolds of genus $7$ and Brill--Noether loci}. Internat. J. Math. \textbf{25} (2014), 1450023, 59 pp.. \bibitem{C--H2} M. Casanellas, R. Hartshorne, F. Geiss, F.O. Schreyer: \emph{Stable Ulrich bundles}. Int. J. of Math. \textbf{23} \rm(2012), 1250083. \bibitem{Cs4} G. Casnati: \emph{Special Ulrich bundles on non--special surfaces with $p_g=q=0$}. Int. J. Math. \textbf{28} (2017), 1750061. \emph{Erratum}. Int. J. of Math. \textbf{29} (2018), 1892001. \bibitem{C--C--G--M} G. Casnati, E. Coskun, O. Genc, F. Malaspina: \emph{Instanton bundles on the blow up of $\p3$ at a point}. Michigan Math. 
J. \textbf{70} (2021), 807--836. \bibitem{C--F--M1} G. Casnati, D. Faenzi, F. Malaspina: \emph{Rank two aCM bundles on the del Pezzo threefold with Picard number $3$}. J. Algebra \textbf{429} (2015), 413--446. \bibitem{C--F--M2} G. Casnati, D. Faenzi, F. Malaspina: \emph{Rank two aCM bundles on the del Pezzo fourfold of degree $6$ and its general hyperplane section}. J. Pure Appl. Algebra \textbf{22} (2018), 585--609. \bibitem{C--N} G. Casnati, R. Notari: \emph{Examples of rank two aCM bundles on smooth quartic surfaces in $\p3$}. Rend. Circ. Mat. Palermo (2) \textbf{66} (2017), 19--41. \bibitem{C--K--M} E. Coskun, R.S. Kulkarni, Y. Mustopa: \emph{Pfaffian quartic surfaces and representations of Clifford algebras}. Doc. Math. \textbf{17} \rm (2012), 1003--1028. \bibitem{C--MR1} L. Costa, R.M. Mir\'o--Roig: {\em Monads and regularity of vector bundles on projective varieties}. Mich. Math. J. \textbf{55} (2007), 417--436. \bibitem{C--MR3} L. Costa, R.M. Mir\'o--Roig: {\em Monads and instanton bundles on smooth hyperquadrics}. Math. Nachr. \textbf{282} (2009), 169--179. \bibitem{C--MR7} L. Costa, R.M. Mir\'o--Roig: {\em Ulrich bundles on Veronese surfaces}. In \lq Singularities, algebraic geometry, commutative algebra, and related topics\rq\ Festschrift for Antonio Campillo on the occasion of his 65th birthday (G-M. Greuel, L. Narv\'aez Macarro, S. Xamb\'o-Descamps eds.), Springer, Cham (2020), 375--381. \bibitem{C--MR8} L. Costa, R.M. Mir\'o--Roig: {\em Instanton bundles vs Ulrich bundles on projective spaces}. Beitr. Algebra. Geom. \textbf{62} (2021), 429--439. \bibitem{C--MR--PL} L. Costa, R.M. Mir\'o--Roig, J. Pons--Llopis: {\em An account of instanton bundles on hyperquadrics}. In \lq From classical to modern algebraic geometry\rq\ (G. Casnati, A. Conte, L. Gatto, L. Giacardi, M. Marchisio, A. Verra eds.), Trends Hist. Sci., Birkh\"auser/Springer (2016), 409--428. \bibitem{Ea--No} J.A. Eagon, D.G. Northcott: \emph{Ideals defined by matrices and a certain complex associated with them}. Proc. Roy. Soc. London Ser. A \textbf{269} (1962), 188--204. \bibitem{Ein--So} L. Ein, I. Sols: \emph{Stable vector bundles on quadric hypersurfaces}. Nagoya Math. J. \textbf{96} (1984), 11--22. \bibitem{Ei} D. Eisenbud: \emph{The geometry of syzygies. A second course in commutative algebra and algebraic geometry}. G.T.M. 229. Springer (2005). \bibitem{Ei--Go} D. Eisenbud, S. Goto: \emph{Linear free resolutions and minimal multiplicity}. J. Algebra \textbf{88} (1984), 89--133. \bibitem{Ei--Ha1} D. Eisenbud, J. Harris: \emph{On varieties of minimal degree (a centennial account)}. In Algebraic geometry, Bowdoin 1985), 3--13, Proceedings of symposia in pure mathematics, \textbf{46}, A.M.S., 1987. \bibitem{Ei--Ha2} D. Eisenbud, J. Harris: \emph{3264 and all that -- a second course in algebraic geometry}. Cambridge U.P. (2016). \bibitem{E--S--W} D. Eisenbud, F.O. Schreyer, J. Weyman: \emph{Resultants and Chow forms via exterior syzigies}. J. Amer. Math. Soc. \textbf{16} \rm(2003), 537--579. \bibitem{El--Gr} Ph. Ellia, L. Gruson: \emph{On the Buchsbaum index of rank two vector bundles on $\p3$}. Rend. Istit. Mat. Univ. Trieste \textbf{47} (2015), 65--79. \bibitem{Fa1} D. Faenzi: \emph{A remark on Pfaffian surfaces and aCM bundles}. In \lq Vector bundles and low codimensional subvarieties: state of the art and recent developments,\rq\ (G. Casnati, F. Catanese, R. Notari eds.), Quad. Mat., 21, Dept. Math., Seconda Univ. Napoli, Caserta (2007), 209--217. \bibitem{Fa2} D. 
Faenzi: \emph{Even and odd instanton bundles on Fano threefolds of Picard number one}. Manuscripta Math. \textbf{144} (2014), 199--239. \bibitem{Flo} G. Fl\o ystad: {\em Monads on projective spaces}. Comm. Algebra \textbf{28} (2000), 5503--5516. \bibitem{Ha2} R. Hartshorne: {\em Algebraic geometry}. G.T.M. 52, Springer \rm (1977). \bibitem{Ha3} R. Hartshorne: {\em Coherent functors}. Adv. Math. \textbf{140} (1998), 44--94. \bibitem{Hu--Le} D. Huybrechts, M. Lehn: \emph {The geometry of moduli spaces of sheaves. Second edition}. Cambridge Mathematical Library, Cambridge U.P. \rm (2010). \bibitem{Ja} M.B. Jardim: {\em Instanton sheaves on complex projective spaces}. Collect. Math. \textbf{57} (2006), 69--91. \bibitem{J--MR} M.B. Jardim, R.M. Mir\'o--Roig: {\em On the semistability of instanton sheaves over certain projective varieties}. Comm. Alg. \textbf{36} (2008), 288--298. \bibitem{J--VM} M.B. Jardim, R. Vidal Martins: {\em Linear and Steiner bundles on projective varieties}. Comm. Alg. \textbf{38} (2010), 2249--2270. \bibitem{Jou} J.P. Jouanolou: \emph{Th\'eor\ga emes de Bertini et applications}. Progress in Mathematics 42, Birkh\"auser (1983). \bibitem{Ka--Ko} Y. Kachi, J. Koll\'ar: \emph{Characterizations of $\p N$ in arbitrary characteristic}. Asian J. Math. \textbf{4} (2000), 115--21. \bibitem{K--M--S} R.S. Kulkarni, Y. Mustopa, I. Shipman: {\em Vector bundles whose restriction to a linear section is Ulrich}. Math. Z. \textbf{287} (2017), 1307--1326. \bibitem{Kuz} A. Kuznetsov: {\em Instanton bundles on Fano threefolds}. Cent. Eur. J. Math. \textbf{10} (2012), 1198--1231. \bibitem{Mal1} F. Malaspina: {\em Monads and Vector Bundles on Quadrics}. Adv. Geom. \textbf{9} (2009), 137--152. \bibitem{M--M--PL} F. Malaspina, S. Marchesi, J. Pons-Llopis: {\em Instanton bundles on the flag variety $F(0,1,2)$}. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) \textbf{20} (2020), 1469--1505. \bibitem{MK--P--R} N. Mohan Kumar, C. Peterson, A.P. Rao: {\em Monads on projective spaces}. Manuscripta Math. \textbf{112} (2003), 183--189. \bibitem{Ok--Sp} C. Okonek, H. Spindler: {\em Mathematical instanton bundles on $\p{2n+1}$}. J. Reine Angew. Math. \textbf{364} (1985) 35--50. \bibitem{O--S--S} C. Okonek, M. Schneider, H. Spindler: {\em Vector bundles on complex projective spaces}. Progress in Mathematics 3, Birkh\"auser \rm(1980). \bibitem{Ott2} G. Ottaviani: {\em Spinor bundles on quadrics}. Trans. Am. Math. Soc. \textbf{307} (1988), 301--316. \bibitem{Ott3} G. Ottaviani: {\em On Cayley bundles on the five--dimensional quadric}. Boll. Un. Mat. Ital. A \textbf{4} (1990), 87--100. \bibitem{Ot--Sz} G. Ottaviani, M. Szurek: {\em On moduli of stable $2$-bundles with small Chern classes on $Q_3$. With an appendix by Nicolae Manolache}. Ann. Mat. Pura Appl. \textbf{167} (1994), 191--241. \bibitem{Rah1} O. Rahavandrainy: \emph{R\'esolution des fibr\'es instantons g\'en\'eraux}. C. R. Acad. Sci. Paris S\'er. I Math. \textbf{325} (1997), 189--192. \bibitem{Rah2} O. Rahavandrainy: \emph{R\'esolution des fibr\'es g\'en\'eraux stables de rang $2$ sur $\p 3$ de classes de Chern $c_1=-1$, $c_2=2p\ge6$. I.} Ann. Fac. Sci. Toulouse Math. (6) \textbf{19} (2010), 231--267. \bibitem{So--VV} A. Sommese, A. Van de Ven: \emph{On the adjunction mapping}. Math. Ann. \textbf{278} \rm(1987), 593--603. 
\end{thebibliography} \bigskip \noindent Vincenzo Antonelli,\\ Dipartimento di Scienze Matematiche, Politecnico di Torino,\\ c.so Duca degli Abruzzi 24,\\ 10129 Torino, Italy\\ e-mail: {\tt [email protected]} \bigskip \noindent Gianfranco Casnati,\\ Dipartimento di Scienze Matematiche, Politecnico di Torino,\\ c.so Duca degli Abruzzi 24,\\ 10129 Torino, Italy\\ e-mail: {\tt [email protected]} \end{document}
2205.04729v5
http://arxiv.org/abs/2205.04729v5
Higher Du Bois and higher rational singularities
\documentclass[11pt]{amsart} \usepackage{latexsym,amsmath,amssymb,amsfonts,amscd,graphics,appendix,amsxtra} \usepackage[mathscr]{eucal} \usepackage[normalem]{ulem} \usepackage{fullpage} \usepackage{xr-hyper} \usepackage{hyperref} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{D}[theorem]{Definition} \newenvironment{definition}{\begin{D} \rm }{\end{D}} \newtheorem{R}[theorem]{Remark} \newenvironment{remark}{\begin{R}\rm }{\end{R}} \newtheorem{E}[theorem]{Example} \newenvironment{example}{\begin{E}\rm }{\end{E}} \makeatletter\let\@wraptoccontribs\wraptoccontribs\makeatother \newcommand{\Dis}{\displaystyle} \def\Zee{\mathbb{Z}} \def\Q{\mathbb{Q}} \def\Ar{\mathbb{R}} \def\Cee{\mathbb{C}} \def\Pee{\mathbb{P}} \def\NN{\mathbb{N}} \def\Id{\operatorname{Id}} \def\Ker{\operatorname{Ker}} \def\Coker{\operatorname{Coker}} \def\Hom{\operatorname{Hom}} \def\Ext{\operatorname{Ext}} \def\Aut{\operatorname{Aut}} \def\Pic{\operatorname{Pic}} \def\Gr{\operatorname{Gr}} \def\Sym{\operatorname{Sym}} \def\im{\operatorname{Im}} \def\irr{\operatorname{irr}} \def\Spec{\operatorname{Spec}} \def\Sp{\operatorname{Sp}} \def\Proj{\operatorname{Proj}} \def\scrO{\mathcal{O}} \def\Sing{\operatorname{Sing}} \def\codim{\operatorname{codim}} \def\spcheck{^{\vee}} \def\hX{\widehat{X}} \def\hY{\widehat{Y}} \def\cX{\mathcal{X}} \def\cC{\mathcal{C}} \def\cY{\mathcal{Y}} \def\cU{\mathcal{U}} \def\uOb{\underline\Omega^\bullet} \def\uOp{\underline\Omega^p} \def\uOnp{\underline\Omega^{n-p}} \def\Xt{\widetilde{Y}} \def\IH{\mathrm{IH}} \def\bD{\mathbb{D}} \def\uh{\underline{h}} \title{Higher Du Bois and higher rational singularities} \author[R. Friedman]{Robert Friedman} \address{Columbia University, Department of Mathematics, New York, NY 10027} \email{[email protected]} \author[R. Laza]{Radu Laza} \address{Stony Brook University, Department of Mathematics, Stony Brook, NY 11794} \email{[email protected]} \contrib[with an appendix by]{Morihiko Saito} \begin{document} \begin{abstract} We prove that the higher direct images $R^qf_*\Omega^p_{\cY/S}$ of the sheaves of relative K\"ahler differentials are locally free and compatible with arbitrary base change for flat proper families whose fibers have $k$-Du Bois local complete intersection singularities, for $p\leq k$ and all $q\geq 0$, generalizing a result of Du Bois (the case $k=0$). We then propose a definition of $k$-rational singularities extending the definition of rational singularities, and show that, if $X$ is a $k$-rational variety with either isolated or local complete intersection singularities, then $X$ is $k$-Du Bois. As applications, we discuss the behavior of Hodge numbers in families and the unobstructedness of deformations of singular Calabi-Yau varieties. In an appendix, Morihiko Saito proves that, in the case of hypersurface singularities, the $k$-rationality definition proposed here is equivalent to a previously given numerical definition for $k$-rational singularities. As an immediate consequence, it follows that for hypersurface singularities, $k$-Du Bois singularities are $(k-1)$-rational. This statement has recently been proved for all local complete intersection singularities by Chen-Dirks-Musta\c{t}\u{a}. 
\end{abstract} \thanks{Research of the second author is supported in part by NSF grant DMS-2101640}. \bibliographystyle{amsalpha} \maketitle \section{Introduction}\label{section1} The Hodge numbers are constant in a smooth family of complex projective varieties over a connected base. A powerful way of encoding this fundamental fact is Deligne's theorem \cite{Deligne-L}: If $f:\cY\to S$ is a smooth morphism of complex projective varieties, then the higher direct image sheaves $R^qf_*\Omega^p_{\cY/S}$ of the relative K\"ahler differentials are locally free and compatible with base change. This theorem fails for families of varieties which have singular fibers (and in positive characteristic). For $Y$ a singular compact complex algebraic variety, Du Bois \cite{duBois} showed that the K\"ahler-de Rham complex $\Omega_Y^\bullet$ should be replaced by the \textsl{filtered de Rham} or \textsl{Deligne-Du Bois} complex $\uOb_Y$ whose graded pieces $\uOp_Y\in D_b^{\text{\rm{coh}}}(Y)$ play the role of $\Omega_Y^p$ (see \cite[\S7.3]{PS}). Namely, the associated spectral sequence with $E_1^{p,q}=\mathbb H^q(Y;\uOp_Y)$ degenerates at $E_1$ and computes $H^{p+q}(Y,\Cee)$ together with the Hodge filtration associated to the mixed Hodge structure on $H^*(Y)$. However, the associated Hodge numbers $\uh^{p,q}:=\dim \mathbb H^q(Y;\uOp_Y)$ do not behave well in families in general. \smallskip The filtered de Rham complex is related to the complex of K\"ahler differentials via the canonical comparison K\"ahler-to-Du Bois map $\phi^p:\Omega_Y^p\to \uOp_Y$. The maps $\phi^p$ are isomorphisms for all $p$ only when $Y$ is smooth, at least when $Y$ is a local complete intersection \cite[Theorem 3.39, Theorem F]{MP-loc}. It is thus natural to consider the case when $\phi^p$ is a quasi-isomorphism in a certain range. Steenbrink \cite[\S3]{Steenbrink} introduced the notion of \textsl{Du Bois singularities}, which play a role in the study of compactifications of moduli. By definition, $Y$ is Du Bois if $\phi^0$ is a quasi-isomorphism. Following \cite{MOPW} and \cite{JKSY-duBois}, we say that $Y$ is \textsl{$k$-Du Bois} if $\phi^p$ is a quasi-isomorphism for $0\le p\le k$. Thus $0$-Du Bois singularities are exactly the Du Bois singularities in Steenbrink's terminology. A key property satisfied by Du Bois singularities is the following: \begin{theorem}[{Du Bois \cite[Thm. 4.6]{duBois}, \cite[Lem. 1]{DBJ}}] \label{Thm-duBois} Let $f:\cY\to S$ be a flat proper family of complex algebraic varieties. Assume that some fiber $Y_s$ has Du Bois singularities. Then, possibly after replacing $S$ by a neighborhood of $s$, for all $q\ge 0$, the sheaves $R^qf_*\scrO_\cY$ are locally free of finite type and compatible with base change. \end{theorem} The theorem can be interpreted (in particular) as giving a relation between the mixed Hodge structure $H^*(Y_0)$ of a singular fiber $Y_0$ with the limit mixed Hodge $H^*_{\lim}$ associated to a one-parameter smoothing $\cX/\Delta$. In this version, the theorem (for dimension $2$ slc hypersurface singularities) was independently established by Shah \cite{Shah}, and plays a key role in the study of degenerations of $K3$ surfaces (e.g.\ \cite{Shah2}) and related objects (e.g. \cite{Laza}, \cite{KLSV}). Theorem \ref{Thm-duBois} continues to have important consequences for the study of compact moduli of varieties of general type (see e.g.\ \cite{KollarBook}, esp. \S2.5 of loc. cit.). 
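Recall that compatibility with base change in Theorem \ref{Thm-duBois} means in particular that, for every point $s\in S$, the natural base change map $R^qf_*\scrO_\cY\otimes_{\scrO_S}k(s)\to H^q(Y_s,\scrO_{Y_s})$ is an isomorphism for all $q\ge 0$.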
Here, we prove the following generalization of Theorem~\ref{Thm-duBois} for the case of a local complete intersection (lci) morphism: \begin{theorem}\label{mainThm} Let $f:\cY\to S$ be a flat proper family of complex algebraic varieties and let $s\in S$. Suppose that the fiber $Y_s$ has $k$-Du Bois lci singularities. Then, possibly after replacing $S$ by a neighborhood of $s$, the higher direct image sheaves $R^qf_*\Omega^p_{\cY/S}$ of the relative K\"ahler differentials are locally free and compatible with base change for $0\le p\le k$ and all $q\ge 0$. \end{theorem} \begin{remark} We use the lci assumption to control the sheaves of K\"ahler and relative K\"ahler differentials in two ways. First, a result of \cite{MP-loc} gives an estimate on the codimension of the singular locus for $k$-Du Bois lci singularities. Using this and some results of Greuel, we prove a key technical point: under the lci assumption, the sheaves $\Omega^p_{\cX/S}$ are flat over $S$ for $p\le k$ (Theorem \ref{flatness}). In case $k=0$, both the codimension estimate and the flatness are automatic and one recovers Theorem~\ref{Thm-duBois} as a special case. \end{remark} As a corollary, we obtain: \begin{corollary}\label{mainThmcor} Let $f\colon \cY\to S$ be a flat proper family of complex algebraic varieties over an irreducible base. For $s\in S$, suppose that the fiber $Y_s$ has $k$-Du Bois lci singularities. Then, for every $t\in S$ such that the fiber $Y_t$ is smooth, $\dim \Gr_F^p H^{p+q}(Y_t) = \dim \Gr_F^p H^{p+q}(Y_s)$ for every $q$ and for $0\le p \le k$. Equivalently, $\uh^{p,q}(Y_s)=h^{p,q}(Y_t)$ for all $p\le k$. \qed \end{corollary} In the case of hypersurface singularities, results similar to Corollary~\ref{mainThmcor} were obtained by Kerr-Laza \cite{KL2, KL3} with further clarifications given by Saito (personal communications) based on \cite{SaitoV}. Results of this type are, for example, relevant to the study of the moduli of cubic fourfolds \cite{Laza}. \medskip Another application of Theorem \ref{mainThm} is the following generalization of the results of Kawamata \cite{Kawamata}, Ran \cite{Ran}, and Tian \cite{Tian} on the unobstructedness of deformations for nodal Calabi-Yau varieties in any dimension, where the special case of isolated hypersurface singularities was established in \S6 of the first version of \cite{FL}: \begin{corollary}\label{CYdef} Let $Y$ be a canonical Calabi-Yau variety (Definition~\ref{defcanonCY}) which is additionally a scheme with $1$-Du Bois lci (not necessarily isolated) singularities. Then the functor $\mathbf{Def}(Y)$ is unobstructed. \end{corollary} A better-known class of singularities is that of {\it rational singularities}. By work of Steenbrink \cite{Steenbrink}, Kov\'acs \cite{Kovacs99}, and others, a rational singularity is Du Bois. In the context of higher Du Bois singularities, it is natural to consider {\it higher rational singularities}. In the case of hypersurfaces $X$ in a smooth variety, the $k$-Du Bois singularities are characterized numerically by the condition $\tilde \alpha_X\ge k+1$ (\cite[Thm. 1.1]{MOPW}, \cite[Thm. 1]{JKSY-duBois}), where $\tilde \alpha_X$ is the {\it minimal exponent} invariant, a generalization of the log canonical threshold. Given that $\tilde \alpha_X>1$ characterizes rational hypersurface singularities (\cite{Saito-b}), it is natural to define {\it $k$-rational singularities} numerically by the strict inequality $\tilde \alpha_X> k+1$ (\cite[Def. 4.3]{KL2}); this definition also occurs implicitly in \cite{MOPW} and \cite{JKSY-duBois}.
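To illustrate these numerical conditions, consider an ordinary double point $X$ of dimension $n$; here we use the value $\tilde \alpha_X = \frac{n+1}{2}$ of the minimal exponent of a quadric cone, consistent with the weighted homogeneous examples discussed in Section~\ref{section3}. Then $$X \text{ is $k$-Du Bois} \iff \tfrac{n+1}{2}\ge k+1 \iff k\le \tfrac{n-1}{2}, \qquad X \text{ is $k$-rational (in the numerical sense)} \iff \tfrac{n+1}{2}> k+1 \iff k< \tfrac{n-1}{2}.$$ In particular, a three-dimensional ordinary double point is $1$-Du Bois but not $1$-rational.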
While results can be obtained using this numerical definition, it has the disadvantage of being restricted to hypersurfaces and of being somewhat ad hoc. A more general definition of $k$-rational isolated singularities, based on local vanishing properties (see esp.\ \cite{MP-Inv}), was given in \S3 of the first version of \cite{FL} (see also \cite{MP-rat} for further discussion). In this paper, we propose a more intrinsic definition of $k$-rational singularities in general (Definition~\ref{defkrat}) and show that it agrees with the usual definition of rational singularities for $k=0$ and with the definition of \cite[\S3]{FL} (under mild assumptions; see Corollaries~\ref{cor-rat0} and \ref{cor-olddef}). Additionally, for hypersurface singularities, M. Saito proves that the new definition proposed here is indeed equivalent to the previous numerical definition mentioned above (Theorem~\ref{Thm-A}). The main advantage of the definition of higher rational singularities given here is that it naturally factors through the higher Du Bois condition. In analogy with the case $k=0$, we conjecture that {\it $k$-rational implies $k$-Du Bois} in general. In \S3 of the first version of \cite{FL} (expanded in \cite{FL22d}), we verified this conjecture under the assumption of isolated lci singularities. Here, we generalize this in both directions, for arbitrary isolated or lci singularities: \begin{theorem}\label{thm-krat} Let $X$ have either lci singularities (not necessarily isolated) or isolated singularities (not necessarily lci). If $X$ is $k$-rational, then $X$ is $k$-Du Bois. \end{theorem} \begin{remark} Musta\c{t}\u{a} and Popa gave an independent proof of Theorem~\ref{thm-krat} for the case of lci singularities (\cite[Thm. B]{MP-rat}). \end{remark} The isolated complete intersection case (see \cite{FL22d}) sheds light on the tight relationship between higher rational and higher Du Bois singularities. In the first version of this paper, we made the following conjecture: \begin{conjecture}\label{conjDBrat} If $X$ has lci singularities and $X$ is $k$-Du Bois, then $X$ is $(k-1)$-rational. \end{conjecture} For an isolated hypersurface singularity, Conjecture~\ref{conjDBrat} is an immediate consequence of the following result: \begin{proposition}\label{hypersurfcase} Let $(X,x)$ be an isolated hypersurface singularity and let $\widetilde\alpha_{X,x} = \widetilde\alpha_X$ be the minimal exponent as defined by Saito \cite{Saito-b}. Then \begin{itemize} \item[\rm(i)] $X$ is $k$-Du Bois $\iff$ $\widetilde\alpha_X \geq k+1$. \item[\rm(ii)] $X$ is $k$-rational $\iff$ $\widetilde\alpha_X > k+1$. \qed \end{itemize} \end{proposition} Here (i) follows from \cite[Thm. 1]{JKSY-duBois} and \cite[Thm. 1.1]{MOPW} and holds true for a general, not necessarily isolated, hypersurface singularity, and (ii) is proved in \cite[Corollary 6.6]{FL22d}. In Appendix \ref{AppendixA}, M. Saito proves (ii) for the case of a general hypersurface singularity, based on the results of \cite{JKSY-duBois}. This implies Conjecture~\ref{conjDBrat} for the case of general hypersurface singularities, not necessarily isolated (Cor. \ref{Cor-A}). Musta\c{t}\u{a} and Popa have proved (ii) of Proposition~\ref{hypersurfcase}, and hence Conjecture~\ref{conjDBrat} in this case as well (cf. \cite[Thm. E, Cor. F]{MP-rat}). Additionally, in \cite[Corollary 5.5]{FL22d}, we established Conjecture~\ref{conjDBrat} for isolated lci singularities. Recently, Chen-Dirks-Musta\c{t}\u{a} \cite{ChenDirksM} have proved Conjecture~\ref{conjDBrat} in general.
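For the reader's convenience, we spell out the deduction of Conjecture~\ref{conjDBrat} from Proposition~\ref{hypersurfcase} in the isolated hypersurface case; it is simply the chain of implications $$X \text{ is $k$-Du Bois} \iff \widetilde\alpha_X\ge k+1 \implies \widetilde\alpha_X> k \iff X \text{ is $(k-1)$-rational},$$ where the first equivalence is (i) and the last is (ii) applied with $k$ replaced by $k-1$.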
Since $k$-rational singularities are milder than $k$-Du Bois singularities, one expects that more of the Hodge diamond is preserved in families with $k$-rational singularities. Indeed, this is the case as shown in \cite[Cor. 4.2]{KL2} (isolated hypersurface $k$-rational singularities) and \cite[Thm. 1]{KLS} (arbitrary rational singularities). Here, we extend these results to $k$-rational lci singularities, and clarify the difference between $k$-Du Bois and $k$-rational in this context. Essentially, for $k$-rational singularities, in addition to the preservation given by Corollary \ref{mainThmcor}, one gains Hodge symmetry in a certain range. Informally, we can say that the frontier Hodge diamond up to coniveau $k$ is preserved for deformations of $k$-rational singularities. \begin{corollary}\label{Corkrat} Let $f\colon \cY\to S$ be a flat proper family of complex algebraic varieties over an irreducible base. For $s\in S$, suppose that the fiber $Y_s$ has $k$-rational lci singularities. Then, for every fiber $t$ such that $Y_t$ is smooth, and for all $p\le k$, $$\uh^{p,q}(Y_s)=\uh^{q,p}(Y_s)=\uh^{n-p,n-q}(Y_s) =h^{p,q}(Y_t)= h^{q,p}(Y_t)=h^{n-p,n-q}(Y_t).$$ Moreover, for all $p\le k$, $$\uh^{p,q}(Y_s) = \dim \Gr_F^p\mathrm{Im}(H^{p+q}(Y_s)\xrightarrow{\pi_s^*} H^{p+q}(\hat Y_s)),$$ where $\pi_s:\hat Y_s\to Y_s$ is an arbitrary projective resolution. \end{corollary} \begin{remark}\label{rem-examples} (1) The Du Bois complex for a (connected) curve $C$ was computed in \cite[Prop. 4.9]{duBois}, \cite[Example 7.23(2)]{PS}: let $\tilde C$ be the normalization of $C$ and $C^w$ the weak normalization, so that there is a factorization $\tilde C\to C^w\to C$. Let $\nu\colon \tilde C \to C$ and $\nu'\colon C^w \to C$ be the corresponding morphisms. Then $\underline\Omega^0_C = \nu'_*\scrO_{C^w}$ and $\underline\Omega^1_C = \nu_*\Omega^1_{\tilde{C}}$. In particular, $\uh^{1,0}(C)=p_g(C)=g(\tilde C)$, while $\uh^{0,1}(C)=p_a(C^w)$. Thus the only Hodge number which is always preserved in flat proper families of curves is $\uh^{0,0}=1$. Note that $C$ has Du Bois singularities $\iff$ $C^w\cong C$, and $C$ has rational singularities $\iff$ $C$ is smooth. However, at least one of the Hodge symmetries $\uh^{1,0}\neq \uh^{0,1}=p_a(C)$ or $\uh^{1,1}\neq\uh^{0,0}=1$ will fail if the singularities of $C$ are Du Bois but not rational (depending on $C$ of compact/non-compact type), reflecting the fact that there are no rational singularities in dimension $1$. \smallskip \noindent (2) Suppose that $C$ has ordinary double points and let $\tau^\bullet_C$ be the subcomplex of ``torsion differentials" on $C$, i.e.\ those sections of $\Omega^\bullet_C$ which pull back to $0$ on $\tilde{C}$. By \cite[(1.1) and (1.5)]{F83}, $(\Omega^\bullet_C, d)$ and $(\Omega^\bullet_C/\tau^\bullet_C,d)$ are both resolutions of $\Cee$. In fact, it is easy to see directly that (1) $(\Omega^\bullet_C/\tau^\bullet_C,d)$ is a resolution of $\Cee$ and (2) at an ordinary double point locally defined by $z_1z_2 =0$, $\tau^1_C =\Cee\cdot z_1\,dz_2$, $\tau^2_C = \Cee\cdot dz_1\wedge dz_2$, and $d\colon \tau^1_C \to \tau^2_C$ is an isomorphism. Hence $(\tau^\bullet_C,d)$ is acyclic and $(\Omega^\bullet_C, d)$ is quasi-isomorphic to $(\Omega^\bullet_C/\tau^\bullet_C,d)$ and thus is a resolution of $\Cee$. In particular the spectral sequence with $E_1$ term $H^q(C; \Omega^p_C) \implies \mathbb{H}^{p+q}(C; \Omega^\bullet_C) = H^{p+q}(C;\Cee)$ does not degenerate at $E_1$ as $d=d_1$ is nonzero on $H^0(C; \tau^1_C) \subseteq H^0(\Omega^1_C)$. 
\smallskip \noindent (3) More generally, if $X$ has normal crossings, then $\uOb_X \cong \Omega^\bullet_X/\tau^\bullet_X$ (with the trivial filtration), where as above $\tau^\bullet_X$ is the subcomplex of torsion differentials, by e.g.\ \cite[Example 7.23(1)]{PS}. Moreover, by \cite[(1.1)]{F83} and generalizing the explicit calculations of (2) above, $(\Omega^\bullet_X, d)$ is also a resolution of $\Cee$ and in particular $(\Omega^\bullet_X, d)$ and $(\Omega^\bullet_X/\tau^\bullet_X, d)$ are quasi-isomorphic. For a related result, for a general variety $X$, Guill\'en showed in \cite[III(1.15)]{GNPP} that the homomorphism $\Omega^\bullet_X \to \uOb_X$ always has a section (viewed as a morphism in the derived category). However, these results do not seem to say much about the individual homomorphisms $\Omega^p_X \to \uOp_X$. \smallskip \noindent (4) Fourfolds $Y$ with ADE hypersurface singularities (such as those occurring in \cite{Laza}) are examples of $1$-rational singularities. For such fourfolds, Corollary \ref{Corkrat} implies that only $\uh^{2,2}(Y)$ can vary in small deformations. Thus, any smoothing of $Y$ will have finite monodromy (compare \cite[Cor. 1]{KLS}). \end{remark} A brief description of the contents of this paper is as follows. Section~\ref{section2} deals with some basic results about K\"ahler differentials in the lci case. These include Theorem~\ref{flatness} regarding the flatness of the relative K\"ahler differentials and Proposition~\ref{KDexact} on restricting to a generic hypersurface section. In Section~\ref{section3}, we give a quick review of the definition and the basic facts about higher Du Bois singularities (following \cite{MOPW}, \cite{JKSY-duBois}, and \cite[\S3]{FL}) and define higher rational singularities. After these preliminaries, we establish Theorem~\ref{mainThm}. Our argument is close to the original argument (\cite[Lemma 1]{DBJ}) used to establish Theorem \ref{Thm-duBois}, following a suggestion of J. Koll\'ar. Finally, in Section~\ref{section5}, we prove Theorem \ref{thm-krat} following the strategy of \cite{Kovacs99}, and deduce a consequence about the Hodge numbers of a smoothing along the lines of Corollary~\ref{mainThmcor}. An appendix section by M. Saito discusses Conjecture~\ref{conjDBrat} in the hypersurface case. Finally, beyond the conjectures and speculation we have already made, we emphasize the importance of extending these results wherever possible to the non-lci case. Along these lines, Shen, Venkatesh and Vo have recently posted a preprint \cite{SVV} proposing different definitions of $k$-Du Bois and $k$-rational singularities which agree with the previous ones in the lci case. \subsection*{Notations and Conventions} We work in the complex algebraic category (but see Remark \ref{rem-can} for some possible generalizations). Following our conventions from previous work \cite{FL}, we use $X$ (and $\cX$) in the local context (e.g. statements such as Theorem \ref{flatness} are purely local, and hold in the analytic case as well), whereas $Y$ and $\pi\colon \cY\to S$ are meant to be proper over $\Spec \Cee$ or $S$ respectively (e.g.\ in Theorem \ref{mainThm} where the properness is essential). The scheme or analytic space $X$ is an \textsl{algebraic variety} if it is reduced and irreducible. In this case, its singular locus $X_{\text{\rm{sing}}}$ is denoted by $\Sigma$. We set $n =\dim X$ and $d =\dim \Sigma$. \subsection*{Acknowledgement} We have benefited from discussions and correspondence with J. Koll\'ar, M. Musta\c{t}\u{a}, and M. 
Popa on higher Du Bois singularities while preparing \cite{FL}. The second author had several related discussions with M. Kerr and M. Saito while preparing previous joint work. We also thank J. de Jong, M. Saito and C. Schnell for some further comments related to this paper. After circulating an earlier version of this paper, M. Musta\c{t}\u{a} and M. Popa informed us of some of their recent work (\cite{MP-rat}, \cite{ChenDirksM}) related to Theorem \ref{thm-krat} and Conjecture~\ref{conjDBrat}. We are grateful to them for these communications. M. Saito kindly provided us with proofs of Proposition~\ref{hypersurfcase}(ii) and Conjecture~\ref{conjDBrat} in the hypersurface case, and agreed to include those as an appendix to our paper. Finally, we would like to thank the referees for a very careful reading of the first version of this paper and for many helpful comments. \section{Some results on K\"ahler and relative K\"ahler differentials}\label{section2} The K\"ahler differentials are coherent sheaves that are determined by certain universal properties, including compatibility with base change. For smooth families $\cY/S$, the proof of the constancy of the Hodge numbers uses in an essential way the semi-continuity of $h^q(\Omega^p_{Y_t})$, which in turn depends on the flatness of $\Omega^p_{\cY/S}$. Here, we generalize this key point, noting that, for an lci morphism, $\Omega^p_{\mathcal X/S}$ is flat over $S$ (Theorem \ref{flatness}) for $p$ satisfying a bound depending on the dimension $d$ of the singular locus. Our argument depends essentially on the lci assumption, and it is a consequence of some depth estimates for $\Omega_X^p$ for $X$ with lci singularities due to Greuel. A second result that follows by related arguments (Proposition \ref{KDexact}) is a higher adjunction type statement regarding restrictions of K\"ahler differentials to generic hypersurface sections. \subsection{A flatness result} We begin by recalling some basic notions concerning depth. Recall that, if $R$ is a local ring with maximal ideal $\mathfrak{m}$ and $M$ is an $R$-module, then $\operatorname{depth}M$ is the maximal length of a regular sequence for $M$ \cite[p.\ 120]{Matsumura}. If $\mathcal{F}$ is a coherent sheaf on a complex space $X$ and $x\in X$, let $\operatorname{depth}_x \mathcal{F} = \operatorname{depth} \mathcal{F}_x$, viewed as an $\scrO_{X,x}$-module. A key technical result we will need is then the following theorem \cite{Scheja}, \cite[Satz 1.2]{Greuel75}: \begin{theorem}[Scheja]\label{Schejathm} Suppose that $X$ is an analytic space, $A$ a closed analytic subspace, and $\mathcal{F}$ a coherent sheaf on $X$. Let $\rho\colon H^0(X; \mathcal{F}) \to H^0(X-A; \mathcal{F}|X-A)$ be the restriction map. \begin{enumerate} \item[\rm(i)] If $\operatorname{depth} \mathcal{F}_x \ge \dim A + 1$ for every $x\in X$, the homomorphism $\rho$ is injective. \item[\rm(ii)] If $\operatorname{depth} \mathcal{F}_x \ge \dim A + 2$ for every $x\in X$, the homomorphism $\rho$ is an isomorphism. \qed \end{enumerate} \end{theorem} It is well known that K\"ahler $p$-differentials on singular spaces can have torsion. For instance, this is already the case for $\Omega^1_C$ where $C$ is a nodal curve. However, we have the following result \cite[Lemma 1.8]{Greuel75}: \begin{theorem}[Greuel]\label{Greuelthm} Suppose that $X$ is an lci singularity of dimension $n$ and that $\dim \Sigma\leq d$. Then, for all $x\in X$, $\operatorname{depth}_x \Omega^p_X \geq n-p$ for $p\leq n-d$.
More generally, let $f\colon \cX \to S$ be a flat lci morphism of relative dimension $n$ over a smooth base $S$ and let $\cX_{\text{\rm{crit}}}$ denote the critical locus of $f$, i.e.\ the points of $\cX$ where $f$ is not a smooth morphism. If $\dim S = m$ and the relative dimension of $\cX_{\text{\rm{crit}}}$ is at most $d$, then, for every $x\in \cX$, $\operatorname{depth}_x \Omega^p_{\cX/S} \geq \dim \cX -p = n+m-p$ for $p\leq n-d$. \qed \end{theorem} Before stating the next corollary, we fix the following notation which will be used for the rest of the section: If $f\colon \cX \to S$ is a morphism and $s\in S$, we denote by $X_s$ the fiber $f^{-1}(s)$ and by $\Sigma_s$ the singular locus of $X_s$: $\Sigma_s = (X_s)_{\text{\rm{sing}}}$. \begin{corollary}\label{cor2} Suppose that $f\colon \cX \to S$ is a flat lci morphism of relative dimension $n$ over a smooth base $S$ of dimension $m$, and that $\dim \Sigma_s\le d$ for every point $s\in S$, with $n-d\geq 2k+1$ for some integer $k\geq 1$. Let $\cX_{\text{\rm{crit}}}$ denote the critical locus of $f$, i.e.\ the points of $\cX$ where $f$ is not a smooth morphism. Then, for every $x\in \cX$ and $p\leq k$, $$ \operatorname{depth}_x \Omega^p_{\cX/S} \geq d+m +2,$$ and, for all $p\leq k$ and every open subset $\cU$ of $\cX$, the restriction map $$H^0(\cU; \Omega^p_{\cX/S}|\cU) \to H^0(\cU- \cX_{\text{\rm{crit}}}; \Omega^p_{\cX/S}|\cU- \cX_{\text{\rm{crit}}})$$ is an isomorphism. \end{corollary} \begin{proof} By assumption, $\dim \cX_{\text{\rm{crit}}} \leq d+m$. Note that $n-d \geq 2k+1 \geq k$, and hence, if $p\leq k$, then $p\leq n-d$. Thus Theorem~\ref{Greuelthm} implies that, for all $x\in \cX$, $$ \operatorname{depth}_x \Omega^p_{\cX/S} \geq n+m-p \geq n-k+m \geq d+m +k + 1 \geq d+m+ 2\geq \dim \cX_{\text{\rm{crit}}} +2. $$ Then $H^0(\cU; \Omega^p_{\cX/S}|\cU) \to H^0(\cU- \cX_{\text{\rm{crit}}}; \Omega^p_{\cX/S}|\cU- \cX_{\text{\rm{crit}}})$ is an isomorphism, by Theorem~\ref{Schejathm}(ii). \end{proof} \begin{proposition}\label{prop4} Suppose that $f\colon \cX \to S$ is a flat lci morphism of relative dimension $n$ over a smooth base $S$ of dimension $m$, and that $\dim \Sigma_s\leq d$ for every point $s\in S$, with $n-d\geq 2k+1$ for some integer $k\geq 1$. Suppose that $\cX \subseteq A\times S$, where $A$ is smooth, and let $I$ be the defining ideal of $\cX$ in $A\times S$. Then there exists a filtration of $\bigwedge ^k\pi_1^*\Omega_A^1|\cX = \pi_1^*\Omega_A^k|\cX$ by coherent subsheaves $\mathcal{K}^a$, $0\leq a\leq k$, such that $\mathcal{K}^0 = \pi_1^*\Omega_A^k|\cX$, $\operatorname{depth}_x\mathcal{K}^a \geq d+m+2$ for all $x\in \cX$, and, for all $a\geq 0$, $$\mathcal{K}^a/\mathcal{K}^{a+1} \cong \bigwedge ^a(I/I^2) \otimes \Omega_{\cX/S}^{k-a}.$$ \end{proposition} \begin{proof}The proof is by induction on $k$. The case $k=1$ is the conormal sequence $$0 \to I/I^2 \to \pi_1^*\Omega_A^1|\cX \to \Omega_{\cX/S}^1 \to 0,$$ where we let $\mathcal{K}^1$ be the image of $I/I^2$. In general, for all $0\leq a\leq k$, let $\mathcal{K}^a$ be the image of $\bigwedge ^a(I/I^2) \otimes \pi_1^*\Omega_A^{k-a}|\cX$ in $\pi_1^*\Omega_A^k|\cX$. Thus $\{\mathcal{K}^a\}$ is the usual Koszul filtration, and the proposition is clear over all points of $\cX- \cX_{\text{\rm{crit}}}$. Moreover, $\mathcal{K}^k$ is either $0$ or $\bigwedge^k(I/I^2)$. By hypothesis $k < n=\operatorname{rank}\Omega_{\cX/S}^1$. Let $r =\operatorname{rank}I/I^2$. We can clearly assume that $r\ge 1$. If $r\geq k$, then $\mathcal{K}^k =\bigwedge^k(I/I^2)$.
If $r< k$, then $\mathcal{K}^k =0$, and the largest $p$ such that $\mathcal{K}^p\neq 0$ is $\mathcal{K}^r$, the image of $\bigwedge^r(I/I^2) \otimes \pi_1^*\Omega_A^{k-r}|\cX$. In this case, the image of $\bigwedge^r(I/I^2) \otimes(I/I^2) \otimes \pi_1^*\Omega_A^{k-r-1}|\cX$ in $\bigwedge^r(I/I^2) \otimes \pi_1^*\Omega_A^{k-r}|\cX$ maps to $0$ in $\pi_1^*\Omega_A^k|\cX$. By the inductive hypothesis, the sequence $$\bigwedge^r(I/I^2) \otimes(I/I^2) \otimes \pi_1^*\Omega_A^{k-r-1}|\cX \to \bigwedge^r(I/I^2) \otimes\pi_1^*\Omega_A^{k-r}|\cX \to \bigwedge^r(I/I^2) \otimes\Omega_{\cX/S}^{k-r} $$ is exact, so there is an induced map $\varphi_r\colon \bigwedge^r(I/I^2) \otimes \Omega_{\cX/S}^{k-r}\to \pi_1^*\Omega_A^k|\cX$. Then the image of $\varphi_r$ is contained in the torsion free sheaf $\mathcal{K}^r$ and is equal to $\mathcal{K}^r$ over $\cX- \cX_{\text{\rm{crit}}}$. Moreover $\varphi_r$ is injective over $\cX- \cX_{\text{\rm{crit}}}$. By the inductive hypothesis on $k$, $$\operatorname{depth}_x\bigwedge^r(I/I^2) \otimes \Omega_{\cX/S}^{k-r} \geq d+m+2,$$ and hence $\im \varphi_r = \mathcal{K}^r$, and $\varphi_r \colon \bigwedge^r(I/I^2) \otimes \Omega_{\cX/S}^{k-r} \cong \mathcal{K}^r = \mathcal{K}^r/\mathcal{K}^{r+1}$ is an isomorphism. For a general $a$, there is a commutative diagram $$\begin{CD} \scriptstyle 0 @>>> \scriptstyle\mathcal{J} @>>> \scriptstyle \bigwedge ^a(I/I^2) \otimes \pi_1^*\Omega_A^{k-a}|\cX @>>>\scriptstyle \bigwedge ^a(I/I^2) \otimes \Omega_{\cX/S}^{k-a} @>>> \scriptstyle0\\ @. @. @VVV @VV{\varphi_a}V @.\\ \scriptstyle0 @>>> \scriptstyle\mathcal{K}^{a+1} @>>> \scriptstyle\mathcal{K}^a @>>>\scriptstyle \mathcal{K}^a/\mathcal{K}^{a+1} @>>>\scriptstyle 0, \end{CD}$$ where by definition $\mathcal{J} = \Ker\{\bigwedge ^a(I/I^2) \otimes \pi_1^*\Omega_A^{k-a}|\cX \to \bigwedge ^a(I/I^2) \otimes \Omega_{\cX/S}^{k-a}\}$ and $\varphi_a$, which exists and is an isomorphism over $\cX - \cX_{\text{\rm{crit}}}$, is yet to be constructed over all of $\cX$. For a fixed $k$, arguing by descending induction on $a$, and starting either at $k$ or at $r$, we can assume that $\operatorname{depth}_x\mathcal{K}^{a+1} \geq d+m+2$. Moreover, $\mathcal{K}^{a+1}|\cX - \cX_{\text{\rm{crit}}}$ is a subbundle of $\mathcal{K}^a|\cX - \cX_{\text{\rm{crit}}}$, so the torsion subsheaf of $\mathcal{K}^a/\mathcal{K}^{a+1}$ is supported on $\cX_{\text{\rm{crit}}}$. If $\sigma$ is a section of $\mathcal{K}^a$ over some open subset $\cU$ and $\sigma|V \in \mathcal{K}^{a+1}$ for an open subset $\mathcal{V}=\cU-W$ of $\cU$, where $W$ is an analytic subset of $\cU$, then we may assume that $W = \cX_{\text{\rm{crit}}}$. Since $\mathcal{K}^a$ is torsion free and $\operatorname{depth}_x\mathcal{K}^{a+1} \geq d+m+2$, $\sigma \in \mathcal{K}^{a+1}(\cU)$. Thus $\mathcal{K}^a/\mathcal{K}^{a+1}$ is torsion free. Let $\mathcal{I}$ be the kernel of the map $$\bigwedge ^a(I/I^2) \otimes \pi_1^*\Omega_A^{k-a}|\cX \to \mathcal{K}^a/\mathcal{K}^{a+1}.$$ Since $\mathcal{K}^a/\mathcal{K}^{a+1}$ is torsion free, a standard argument shows that, for every open subset $\cU$ of $\cX$, the map $$H^0(\cU; \mathcal{I}|\cU) \to H^0(\cU- \cX_{\text{\rm{crit}}}; \mathcal{I}|\cU- \cX_{\text{\rm{crit}}})$$ is an isomorphism. Over $\cX - \cX_{\text{\rm{crit}}}$, there is an isomorphism $\mathcal{I}|\cX - \cX_{\text{\rm{crit}}} \cong \mathcal{J}|\cX - \cX_{\text{\rm{crit}}}$. 
Since$\bigwedge ^a(I/I^2) $ is locally free, $\operatorname{depth}_x\mathcal{J}\geq \operatorname{depth}_x\bigwedge ^a(I/I^2) \otimes \Omega_{\cX/S}^{k-a} \geq d+m+2$ by Corollary~\ref{cor2}. In particular, $$H^0(\cU; \mathcal{J}|\cU) \to H^0(\cU- \cX_{\text{\rm{crit}}}; \mathcal{J}|\cU- \cX_{\text{\rm{crit}}})$$ is also an isomorphism. Thus, $\mathcal{J}=\mathcal{I}$, and there is an induced injective homomorphism $\varphi_a$ as in the diagram. Since $\varphi_a$ is an isomorphism over $\cX- \cX_{\text{\rm{crit}}}$ and $\operatorname{depth}_x\bigwedge ^a(I/I^2) \otimes \Omega_{\cX/S}^{k-a} \geq d+m+2$, $\varphi_a$ is an isomorphism. Finally, since both $\operatorname{depth}_x\mathcal{K}^{a+1} \geq d+m+2$ and $\operatorname{depth}_x\mathcal{K}^a/\mathcal{K}^{a+1} \geq d+m+2$, $\operatorname{depth}_x\mathcal{K}^a \geq d+m+2$ as well. \end{proof} A key technical step in establishing Theorem~\ref{mainThm}, where the lci assumption seems crucial, is the following: \begin{theorem}\label{flatness} Suppose that $f\colon \cX \to S$ is a flat lci morphism, where $S$ is an arbitrary base space. Let $s\in S$ and suppose that $\codim_{X_s} \Sigma_s \geq 2k+1$. Then, possibly after replacing $S$ by a neighborhood of $s$, the sheaf of relative differentials $\Omega^p_{\cX/S}$ is flat over $S$ for all $p\leq k$. For $p= 0,1$, $\Omega^p_{\cX/S}$ is flat over $S$ with no assumption on the dimension of the singular locus. \end{theorem} \begin{remark} For $k\geq 2$, at least some assumption is needed for the conclusions of Theorem~\ref{flatness} to hold. For example, if $X$ has simple normal crossings singularities, say $X$ is the product of a nodal curve $\Spec \Cee[x,y]/(xy)$ with a smooth manifold of dimension $n-1$ and $\cX$ is the standard smoothing over $\mathbb{A}^1$ given by $xy=t$, then $\operatorname{depth}_x \Omega^1_{\cX/\mathbb{A}^1} = n$ at a singular point and $\Omega^2_{\cX/\mathbb{A}^1}$ has torsion in the fiber over the singular fiber at $t=0$, hence is not flat over $\mathbb{A}^1$. In the same spirit, with respect to Theorem~\ref{mainThm}, suppose that $\pi\colon \cC \to \Delta$ is a family of smooth projective curves acquiring a single ordinary double point over $0$, with the same local picture as above (so that $\cC$ is also smooth). Then the sheaf $\Omega^1_{\cC/\Delta}$ of relative K\"ahler differentials is flat over $\Delta$ either by the case $p=1$ of Theorem~\ref{flatness} or by the direct computation that $\Omega^1_{\cC/\Delta}$ is isomorphic in a neighborhood of the singular point $p\in \cC$ to the maximal ideal $\frak{m}_p$, and in particular is torsion free over $\scrO_\Delta$. Let $C_t$ denote the fiber over $t$. If $C_0$ is reducible, the values of $h^0(\Omega^1_{C_t})$ and $h^1(\Omega^1_{C_t})$ jump up at $t=0$. Hence $R^1\pi_*\Omega^1_{\cC/\Delta}$ has torsion at $0$, and in particular is not locally free. This follows from very general results about cohomology and base change. In our situation, since $\Delta$ is smooth of dimension one, it follows directly by applying $R^i\pi_*$ to the exact sequence $$0 \to \Omega^1_{\cC/\Delta} \xrightarrow{\times t} \Omega^1_{\cC/\Delta} \to \Omega^1_{C_0} \to 0$$ to see that the dimension of the $\scrO_\Delta$-module $R^1\pi_*\Omega^1_{\cC/\Delta}/tR^1\pi_*\Omega^1_{\cC/\Delta}$ is larger than the rank of $R^1\pi_*\Omega^1_{\cC/\Delta}$. \end{remark} \begin{proof}[Proof of Theorem~\ref{flatness}]It suffices to consider the case $p=k$. 
Since every deformation is (locally) pulled back from the versal deformation, by standard properties of flatness and wedge product under base change, we can assume that $S$ is smooth. First, with no assumption on the singular locus, $\scrO_{\cX}$ is flat over $S$ by assumption. To see that $\Omega^1_{\cX/S}$ is flat over $S$, again with no assumption on the singular locus, we have the conormal sequence $$0\to I/I^2 \xrightarrow{u} \pi_1^*\Omega^1_A|\cX \to \Omega^1_{\cX/S} \to 0.$$ Let $I_s$ be the ideal of $X_s$ in $A$. By the lci assumption, $I_s = I \otimes \scrO_{S,s}/\mathfrak{m}_s$. Then the conormal sequence for $\cX/S$ becomes the corresponding sequence for $\Omega^1_{X_s}$ after tensoring with $\scrO_{S,s}/\mathfrak{m}_s$, namely $$0\to I_s/I_s^2 \xrightarrow{\bar{u}} \Omega^1_A|X_s \to \Omega^1_{X_s} \to 0.$$ In particular, $\bar{u}$ is injective. Hence $\Omega_{\cX/S}^1 =\operatorname{Coker} u$ is flat over $S$ by the local criterion of flatness \cite[(20.E) pp.\ 150--151]{Matsumura}. For $k\geq 2$, by induction on $k$ and descending induction on $a$, $\mathcal{K}^{a+1}$ and $\mathcal{K}^a/\mathcal{K}^{a+1}$ are flat over $S$ for all $a\geq 1$, and hence so is $\mathcal{K}^a$. Let $\mathcal{K}^a_s$ be the image of $\bigwedge ^a(I_s/I_s^2) \otimes \Omega_A^{k-a}|X_s$ in $ \Omega_A^k|X_s$. Proposition~\ref{prop4} applied to the case $S =$ pt implies that $\mathcal{K}_s^a/\mathcal{K}_s^{a+1} \cong \bigwedge ^a(I_s/I_s^2) \otimes \Omega_{X_s}^{k-a}$. We have a commutative diagram $$\begin{CD} \scriptstyle0 @>>> \scriptstyle\mathcal{K}^{a+1}\otimes (\scrO_{S,s}/\mathfrak{m}_s) @>>> \scriptstyle\mathcal{K}^a\otimes (\scrO_{S,s}/\mathfrak{m}_s) @>>> \scriptstyle \mathcal{K}^a/\mathcal{K}^{a+1}\otimes (\scrO_{S,s}/\mathfrak{m}_s) @>>> \scriptstyle 0\\ @. @VVV @VVV @VV{\cong}V @.\\ \scriptstyle0 @>>> \scriptstyle\mathcal{K}_s^{a+1} @>>> \scriptstyle\mathcal{K}_s^a @>>> \scriptstyle \mathcal{K}_s^a/\mathcal{K}_s^{a+1} @>>> \scriptstyle 0 \end{CD}$$ Here, the top row is exact for $a\geq 1$ by the flatness of $\mathcal{K}^a/\mathcal{K}^{a+1}$ and induction on $k$. By descending induction on $a$, the above diagram implies that the natural map $\mathcal{K}^a\otimes (\scrO_{S,s}/\mathfrak{m}_s)\to \mathcal{K}_s^a$ is an isomorphism for all $a\geq 1$. In particular, for $a=1$, we have the map $u\colon \mathcal{K}^1 \to \pi_1^*\Omega_A^k|\cX$ with cokernel $\Omega_{\cX/S}^k$, and the reduction $\bar{u}$ of $u$ mod $\mathfrak{m}_s$ is the natural inclusion $\mathcal{K}_s^1 \to \Omega_A^k|X_s$. Since $\bar{u}$ is injective, $\Omega_{\cX/S}^k =\operatorname{Coker} u$ is flat over $S$ by the local criterion of flatness as above. This completes the proof of Theorem~\ref{flatness}. \end{proof} \subsection{K\"ahler differentials and generic hyperplane sections}\label{S-Bertini} \begin{proposition}\label{KDexact} Let $X$ be a reduced local complete intersection with $X_{\text{\rm{sing}}}= \Sigma$ and let $H$ be a Cartier divisor on $X$ which is a generic element of a base point free linear system, with ideal sheaf $\scrO_X(-H)$. If $\codim \Sigma \ge 2k-1$, then, setting $\scrO_H(-H) =\scrO_X(-H)|H$, the sequence \begin{equation*} 0 \to \Omega^{p-1}_H \otimes \scrO_H(-H) \to \Omega^p_X|H \to \Omega^p_H \to 0\tag*{$(*)_p$} \end{equation*} is exact for $p \le k$. \end{proposition} \begin{proof} We can assume that $X$ is affine and that $H =(f)$ for a function $f\colon X \to \mathbb{A}^1$ which has no critical points away from $\Sigma$.
The sequence $(*)_p$ is always exact for $p=0$, and it is exact for $p=1$ because we have the exact sequence $$\scrO_H(-H) \to \Omega^1_X|H \to \Omega^1_H \to 0,$$ with $\scrO_H(-H) \cong \scrO_H$ locally free and $H$ reduced (as it is generically reduced and lci). By induction on $k$, it suffices to consider the case $p=k$. By the remarks above, we can assume $k\ge 2$. By the de Rham lemma \cite[Lemma 1.6]{Greuel75}, we have the following: \begin{lemma} Let $d =\dim \Sigma$. For $0\le p < n-d$, the following sequence is exact: $$0 \to \scrO_X \xrightarrow{df \wedge} \Omega^1_X \xrightarrow{df \wedge} \cdots \xrightarrow{df \wedge} \Omega^{p+1}_X .$$ Equivalently, for $0\le p < n-d$, $$df \wedge \colon \Omega^p_X/ df \wedge \Omega^{p-1}_X \to \Omega^{p+1}_X$$ is injective. \end{lemma} \begin{proof} Since $f$ is generic, the critical set $C(f)$ of $f$ (in the notation of \cite{Greuel75}) is just $\Sigma$. Then the result follows immediately from \cite[Lemma 1.6]{Greuel75} (with $g=h$, $k=1$, and $C(f,g)\cap N(h) = \Sigma$). \end{proof} From the lemma, since $n-d = \codim \Sigma \ge 2k-1> k$ (because $k >1$), there is an exact sequence $$ 0 \to \Omega^{k-1}_X/ df \wedge \Omega^{k-2}_X \xrightarrow{df \wedge} \Omega^k_X \to \Omega^k_X/ df \wedge \Omega^{k-1}_X \to 0.$$ Tensoring with $\scrO_H$ gives an exact sequence $$0 \to \mathit{Tor}_1^{\scrO_X} (\Omega^k_X/ df \wedge \Omega^{k-1}_X, \scrO_H) \to \Omega^{k-1}_H \to \Omega^k_X|H \to \Omega^k_H \to 0.$$ So we have to show that, for a general choice of $f$, $\mathit{Tor}_1^{\scrO_X} (\Omega^k_X/ df \wedge \Omega^{k-1}_X, \scrO_H) = 0$. Since $\scrO_H$ has the free resolution $$0 \to \scrO_X \xrightarrow{\times f} \scrO_X \to \scrO_H \to 0,$$ it suffices to show that $\Omega^k_X/ df \wedge \Omega^{k-1}_X$ has no $f$-torsion, i.e.\ that multiplication by $f$ is injective. Since there is an inclusion $\Omega^k_X/ df \wedge \Omega^{k-1}_X \to \Omega^{k+1}_X$, it suffices to prove that $\Omega^{k+1}_X$ has no $f$-torsion. Let $\tau$ be an $f$-torsion local section of $\Omega^{k+1}_X$. Note that the support of $\tau$ must be contained in $\Sigma \cap V(f)$, since $\Omega^{k+1}_X|X-\Sigma$ is torsion free and $f$ is invertible on $X- V(f)$. By the assumption that $H$ is general, every component of $\Sigma \cap V(f)$ has dimension $\le d-1$. By \cite[Lemma 1.8]{Greuel75}, since $k+1 \le 2k-1 \leq \codim \Sigma$ as long as $k\ge 2$, we have, for every $x\in X$, $$\operatorname{depth}_x\Omega^{k+1}_X \geq n - (k+1) .$$ Using $n-d \geq 2k-1$ gives $n-k-1 \geq d+ k -2 \geq d$, since we are assuming $k\geq 2$. Thus $$\operatorname{depth}_x\Omega^{k+1}_X \geq \dim (\Sigma \cap V(f)) + 1.$$ It follows from Theorem~\ref{Schejathm}(i) that, for every open subset $U$ of $X$, the restriction map $$H^0(U; \Omega^{k+1}_X|U) \to H^0(U-(\Sigma \cap V(f)); \Omega^{k+1}_X|U-(\Sigma \cap V(f)))$$ is injective. In particular, $\tau=0$. Hence $(*)_p$ is exact for $p \le k$. \end{proof} Note that we showed the following in the course of proving Proposition~\ref{KDexact}: \begin{corollary}\label{KDexactcor} With $X$ and $H$ as in the statement of Proposition~\ref{KDexact}, $\mathit{Tor}_1^{\scrO_X} (\Omega^p_X, \scrO_H)= 0$ for all $p\le k+1$. \qed \end{corollary} \section{$k$-Du Bois and $k$-rational singularities}\label{section3} We now review the notion of Du Bois and higher Du Bois singularities, and propose a general definition for higher rational singularities.
After verifying that our definition agrees with the standard definition of rational singularities, as well as with previous definitions of higher rational singularities (under mild assumptions, expected to hold in general), we discuss the connection between higher rational and higher Du Bois singularities. \subsection{The filtered de Rham complex} On a smooth scheme $Y$ proper over $\Spec \Cee$ or a compact K\"ahler manifold, the Hodge-de Rham spectral sequence with $E_1$ page $E_1^{p,q}= H^q(Y;\Omega^p_Y)$ degenerates at $E_1$ and computes the Hodge structure on $H^{p+q}(Y,\Cee)$. For $X$ not necessarily smooth or proper over $\Spec \Cee$, Deligne showed that $H^*(X,\Cee)$ carries a canonical mixed Hodge structure. Subsequently, Du Bois \cite{duBois} introduced an object $\uOb_X = (\uOb_X, F^\bullet\uOb_X)$ in the filtered derived category whose graded pieces $\uOp_X=\Gr^p_F\uOb_X[p]$ are analogous to $\Omega_X^p$ in the smooth case (see \cite[\S7.3]{PS}). Since $\uOp_X$ is defined locally in the \'etale topology, it agrees with $\Omega_X^p$ at smooth points. Near the singular locus, if $\pi:\hX\to X$ is a log resolution with $E\subseteq \hX$ the reduced exceptional divisor, $\uOb_X$ is closely related to $R\pi_*\Omega^\bullet_{\hX}(\log E)(-E)$. More precisely, we have the following by \cite[Example 7.25]{PS} and by \cite[\href{https://stacks.math.columbia.edu/tag/05S3}{Tag 05S3}]{stacks-project}: \begin{theorem}\label{relative} If $\pi\colon \hX \to X$ is a log resolution with reduced exceptional divisor $E$, and we define $\uOb_{X,\Sigma} = R\pi_*\Omega^\bullet_{\hX}(\log E)(-E)$, where $\Sigma$ is the singular locus of $X$, then there is the distinguished triangle of relative cohomology in the filtered derived category $$\uOb_{X,\Sigma} \to \uOb_X \to \uOb_\Sigma \xrightarrow{+1} .$$ Thus, in the derived category, there is a corresponding distinguished triangle $$\uOp_{X,\Sigma} \to \uOp_X \to \uOp_\Sigma \xrightarrow{+1} .\qed$$ \end{theorem} In the proper case, there is the following fundamental result of Du Bois, based on Deligne's construction of the mixed Hodge structure on $Y$: \begin{theorem}[Du Bois]\label{thm-e1} If $Y$ is proper over $\Spec \Cee$, then the spectral sequence with $E_1$ page $\mathbb{H}^q(Y; \underline{\Omega}^p_Y) \implies \mathbb{H}^{p+q}(Y;\underline{\Omega}^\bullet_Y) = H^{p+q}(Y; \Cee)$ degenerates at $E_1$ and the corresponding filtration on $H^*(Y;\Cee)$ is the Hodge filtration associated to the mixed Hodge structure on $H^*(Y;\Cee)$. \qed \end{theorem} The result above allows us to define Hodge numbers in the singular case: \begin{definition}\label{defHodgeno} Let $Y$ be a compact complex algebraic variety. We define the {\it Hodge-Du Bois numbers} $$\uh^{p,q}(Y):=\dim\Gr^p_FH^{p+q}(Y),$$ for $0\le p,q\le n$. In particular, if $Y$ is smooth, $\uh^{p,q}(Y)=h^{p,q}(Y)$. In general, by Theorem \ref{thm-e1}, $$\uh^{p,q}(Y)=\dim \mathbb H^q(Y; \uOp_Y)=\sum_{0\le r\le q} h^{p,r}_{p+q}(Y),$$ where $h^{p,r}_{p+q}(Y):=\dim \Gr^p_F\Gr^W_{p+r} H^{p+q}(Y)$ are the {\it Hodge-Deligne numbers} associated to the mixed Hodge structure on $Y$. \end{definition} \begin{remark} As the example of nodal curves shows (Remark~\ref{rem-examples}) the Hodge-Du Bois diamond will not satisfy either of the Hodge symmetries: $\uh^{p,q}=\uh^{q,p}$ or $\uh^{p,q}=\uh^{n-p,n-q}$. Nonetheless, some vestige of these symmetries remains (in the form of inequalities) as those given by Lemma \ref{ineq-sym} below. 
\end{remark} Another key consequence for us is the following: \begin{corollary}\label{DBdegen} With notation as above, \begin{enumerate} \item[\rm(i)] The natural map $H^i(Y; \Cee) \to \mathbb{H}^i(Y;\underline{\Omega}^\bullet_Y/F^{k+1}\underline{\Omega}^\bullet_Y)$ is surjective for all $i$ and $k$. \item[\rm(ii)] The spectral sequence with $E_1$ term $$E_1^{p,q} = \begin{cases} \mathbb{H}^q(Y; \underline{\Omega}^p_Y) , &\text{for $p\leq k$;}\\ 0, &\text{for $p> k$.} \end{cases}$$ converging to $\mathbb{H}^{p+q}(Y; \underline{\Omega}^\bullet_Y/F^{k+1}\underline{\Omega}^\bullet_Y)$ degenerates at $E_1$. \qed \end{enumerate} \end{corollary} \begin{proof} This is a consequence of the following general fact: If $(\mathcal{C}^\bullet, d,\mathcal{F}^\bullet\mathcal{C}^\bullet)$ is a filtered complex, then the associated spectral sequence degenerates at the $E_1$ page $\iff$ the differential $d$ is strict with respect to the filtration $\mathcal{F}^\bullet\mathcal{C}^\bullet$. Then, assuming $E_1$-degeneration, an easy argument shows that $H^i(\mathcal{C}^\bullet,d) \to H^i(\mathcal{C}^\bullet/\mathcal{F}^{k+1}\mathcal{C}^\bullet,d)$ is surjective for every $i$ and $k$. Since the induced filtration on $\mathcal{C}^\bullet/\mathcal{F}^{k+1}\mathcal{C}^\bullet$ is also strict with respect to $d$, the corresponding spectral sequence for the complex $\mathcal{C}^\bullet/\mathcal{F}^{k+1}\mathcal{C}^\bullet$ also degenerates at $E_1$. \end{proof} \subsection{Du Bois and higher Du Bois singularities} There is a natural comparison map $$\phi^\bullet \colon (\Omega_X^\bullet, \sigma^\bullet)\to (\uOb_X, F^\bullet\uOb_X)$$ \cite[p.175]{PS}, where $\sigma^\bullet$ is the trivial or naive filtration on $\Omega_X^\bullet$. Following \cite{MOPW} and \cite{JKSY-duBois}, one defines: \begin{definition} Let $X$ be a complex algebraic variety. Then $X$ is \textsl{$k$-Du Bois} if the natural maps $$\phi^p:\Omega_X^p\to \uOp_X$$ are quasi-isomorphisms for $0\le p\le k$. Note that the case $k=0$ coincides with the usual definition of Du Bois singularities \cite[Def. 7.34]{PS}. \end{definition} \begin{example} Let $f(z_1, \dots, z_{n+1}) = z_1^{d_1} + \cdots + z_{n+1}^{d_{n+1}}$ define the weighted homogeneous singularity $X =V(f) \subseteq \mathbb{A}^{n+1}$. Then, as a consequence of Proposition~\ref{hypersurfcase}(i) and a theorem of Saito \cite[(2.5.1)]{SaitoV} (and see also \cite[Corollary 6.8]{FL22d}), $X$ is $k$-Du Bois at $0$ $\iff$ $\Dis\sum_{i=1}^{n+1} \frac1{d_i} \geq k+1$. In particular, an ordinary double point of dimension $n$ is $k$-Du Bois for all $k\leq \Dis \left[\frac{n-1}{2}\right]$. Thus an ordinary double point of dimension $3$ is $1$-Du Bois, and in fact is the unique $1$-Du Bois hypersurface singularity in dimension $3$ (e.g. \cite[Thm. 2.2]{NS}). More generally, it is expected that $k$-Du Bois singularities occur first in dimension $2k+1$. This is true at least in the lci case as the following result shows. \end{example} \begin{theorem}[{\cite{MP-loc}, \cite{MOPW}, \cite{JKSY-duBois}}]\label{thmdim} Let $X$ be a complex algebraic variety with lci singularities. Assume $X$ is $k$-Du Bois with $k\ge 1$. Then $X$ is normal and regular in codimension $2k$, i.e.\ $\codim \Sigma \ge 2k+1$. \end{theorem} \begin{proof} The normality claim is \cite[Cor. 5.6]{MP-loc}. For hypersurface singularities, various dimension bounds (covering the claim of the theorem) were obtained in both \cite{MOPW} and \cite{JKSY-duBois}. The general lci case follows from \cite[Thm. 
F]{MP-loc} (numerical characterization of $k$-Du Bois) and \cite[Cor. 3.40]{MP-loc} (bounds on $\dim \Sigma$ in terms of the relevant numerical invariant). \end{proof} \begin{corollary}\label{cor1} Suppose that $X$ has lci $k$-Du Bois singularities and that $f\colon \cX \to S$ is a flat morphism, where $S$ is arbitrary, with $X=X_s=f^{-1}(s)$ for some $s\in S$. Then possibly after replacing $S$ by a neighborhood of $s$, the sheaf of relative differentials $\Omega^p_{\cX/S}$ is flat over $S$ for all $p\leq k$. \end{corollary} \begin{proof} This is immediate from Theorem~\ref{flatness} and Theorem~\ref{thmdim}. \end{proof} \begin{remark} (i) Theorem~\ref{thmdim} and Theorem~\ref{Greuelthm} imply the following previously known results: \begin{enumerate} \item[(1)] If $X$ has hypersurface $k$-Du Bois singularities, then $\Omega_X^p$ is torsion free for $0\le p\le k$ (\cite[Prop. 2.2]{JKSY-duBois}). \item[(2)] If $X$ has lci $k$-Du Bois singularities, then $\Omega_X^p$ is reflexive for $0\le p\le k$ (\cite[Cor. 5.6]{MP-loc}). \end{enumerate} \smallskip \noindent (ii) For $p=1$ and lci singularities, the situation is well understood by a result of Kunz \cite[Prop. 9.7]{Kunz}: $\Omega_X^1$ satisfies Serre's condition $S_a$ $\iff$ $X$ satisfies $R_a$ (i.e.\ is regular in codimension $a$). \end{remark} \begin{remark} For other examples of $k$-Du Bois and $k$-rational singularities, one can consult \cite{KL2} (cf.\ Theorem 5.3 and \S6.1). \end{remark} \subsection{Higher rational singularities} The standard definition of a rational singularity involves the choice of a resolution (e.g. \cite[Def. 5.8]{KollarMori}). As we will explain below, it is possible to give an equivalent definition for rational singularities without reference to resolutions, but using instead the dualizing complex. In addition to being more intrinsic, it generalizes to higher rational singularities, and it factors naturally through the higher Du Bois condition. For a complex algebraic variety $X$ of dimension $n$, let $\omega^\bullet_X$ denote the dualizing complex. Define the {\it Grothendieck duality functor} $\mathbb D_X$ as follows: $$\mathbb D_X(-):= RHom(-, \omega_X^\bullet)[-n].$$ In particular, $\omega_X^\bullet=\mathbb D_X(\scrO_X)[n]$. \begin{lemma}\label{lem-psi} For all $p$, there exists a natural sequence of maps in the derived category $$\Omega_X^p\xrightarrow{\phi^p} \uOp_X\xrightarrow{\psi^p} \bD_X(\uOnp_{X}).$$ \end{lemma} \begin{proof} By functoriality of the filtered de Rham complex, there is a map $\uOnp_X\to R\pi_*\Omega^{n-p}_{\hX}$. Applying $\bD_X$ gives $$\bD_X(R\pi_*\Omega^{n-p}_{\hX})\to \bD_X(\uOnp_X).$$ Since $\pi$ is proper, Grothendieck duality gives $$\bD_X(R\pi_*\Omega^{n-p}_{\hX})\cong R\pi_*\bD_{\hX}(\Omega^{n-p}_{\hX})\cong R\pi_*\Omega_{\hX}^p.$$ Thus we get a sequence of maps $$\Omega_X^p\to \uOp_X\to R\pi_*\Omega_{\hX}^p \to \bD_X(R\pi_*\Omega^{n-p}_{\hX}) \to \bD_X(\uOnp_X),$$ as claimed. The map $\psi^p$ is easily seen to be independent of the choice of a resolution, by the usual factorization arguments. \end{proof} \begin{definition}\label{defkrat} The variety $X$ has \textsl{$k$-rational singularities} if the maps $$\Omega_X^p\xrightarrow{\psi^p \circ \phi^p} \bD_X(\uOnp_{X})$$ are quasi-isomorphisms for all $0\le p\le k$. \end{definition} \begin{example} Let $f(z_1, \dots, z_{n+1}) = z_1^{d_1} + \cdots + z_{n+1}^{d_{n+1}}$ define the weighted homogeneous singularity $X =V(f) \subseteq \mathbb{A}^{n+1}$. 
Then, by \cite[Corollary 6.8]{FL22d}, $X$ is $k$-rational at $0$ $\iff$ $\Dis\sum_{i=1}^{n+1} \frac1{d_i} > k+1$. In particular, an ordinary double point of dimension $n$ is $k$-rational $\iff$ $k< \Dis \frac{n-1}{2}$. Thus an ordinary double point of dimension $3$ is not $1$-rational. On the other hand, ADE singularities in dimension $4$ are $1$-rational. Conjecturally, $k$-rational singularities occur in codimension at least $2(k+1)$ (compare Theorem~\ref{thmdim}). \end{example} The following lemma connects our definition to more standard ones: \begin{lemma}\label{lemma-onp} Suppose that $\dim \Sigma \leq d$. Then, for all $p< n-d$, $$\bD_X(\uOnp_{X}) \cong R\pi_*\Omega^p_{\hX}(\log E).$$ In particular, if $\codim \Sigma \geq 2k+1$ for some $k\geq 0$, then $\bD_X(\uOnp_{X}) \cong R\pi_*\Omega^p_{\hX}(\log E)$ for all $p\leq k$. \end{lemma} \begin{proof} By Theorem~\ref{relative}, there is the distinguished triangle of relative cohomology $$\uOnp_{X,\Sigma} \to \uOnp_X \to \uOnp_\Sigma \to . $$ Since $n-p > d$, $\uOnp_\Sigma =0$ and hence $\uOnp_X \cong \uOnp_{X,\Sigma}= R\pi_*\Omega^{n-p}_{\hX}(\log E)(-E)$. Applying Grothendieck duality, it follows as in \cite[\S2.2]{MOPW} that \begin{align*} \bD_X(\uOnp_{X}) &= \bD_X(R\pi_*\Omega^{n-p}_{\hX}(\log E)(-E)) = R\pi_*\bD_{\hX}(\Omega^{n-p}_{\hX}(\log E)(-E)) \\ &= R\pi_*\Omega^p_{\hX}(\log E). \end{align*} The final statement is clear since $n-d\ge 2k+1$ $\implies$ $n-p \geq n-k \geq d+k+1 > d$. \end{proof} \begin{corollary}\label{cor-rat0} $X$ is $0$-rational $\iff$ $X$ has rational singularities. \end{corollary} \begin{proof} Since $X$ is reduced, $\dim \Sigma \leq n-1$. Thus $X$ is $0$-rational $\iff$ the natural map $\scrO_X \to R\pi_*\scrO_{\hX}$ is an isomorphism $\iff$ $X$ has rational singularities in the usual sense. \end{proof} \begin{remark} Lemma \ref{lemma-onp} and Grothendieck duality give the identification $\underline{\Omega}_X^n=R\pi_*\omega_{\hX}$, which by Grauert-Riemenschneider vanishing is in fact a single sheaf, the {\it Grauert-Riemenschneider sheaf} $\omega_X^{GR}:=\pi_*\omega_{\hX}$. Following \cite{KK20}, let us denote by $\underline{\omega}_X:=\bD_X(\underline{\Omega}_X^0)$. Then the dual form of Definition \ref{defkrat} for $k=0$ is: $X$ has rational singularities $\iff$ the composite map \begin{equation}\omega_X^{GR}\to\underline{\omega}_X\to \omega_X^\bullet \end{equation} is a quasi-isomorphism. This formulation occurs for instance in \cite{KK20}, and it is equivalent to that given by \cite[Thm. 5.10(3)]{KollarMori} (note that the quasi-isomorphism $\omega_X^{GR}\cong \omega_X^\bullet$ forces $X$ to be Cohen-Macaulay). \end{remark} In \cite[\S3]{FL}, we defined $k$-rational singularities for an isolated singularity by the condition that $R\pi_*\Omega^p_{\hX}(\log E)\cong \Omega_X^p$. This is equivalent to Definition \ref{defkrat} (under a mild assumption): \begin{corollary}\label{cor-olddef} In the above notation, suppose that $\codim \Sigma \geq 2k+1$. \begin{enumerate} \item[\rm(i)] $X$ is $k$-rational $\iff$ the natural map $\Omega_X^p \to R\pi_*\Omega^p_{\hX}(\log E)$ is an isomorphism for all $p\le k$. \item[\rm(ii)] $X$ is $k$-rational $\iff$ $X$ is $(k-1)$-rational and $\Omega_X^k \to R\pi_*\Omega^k_{\hX}(\log E)$ is an isomorphism. \qed \end{enumerate} \end{corollary} In the lci case, the assumption on $R^0\pi_*\Omega^p_{\hX}(\log E)$ is automatic: \begin{lemma}\label{case0krat} Suppose that $X$ has lci singularities and $\codim \Sigma \geq 2k+1$. 
Then, for all $p\leq k$, $\psi^p\circ \phi^p\colon \Omega_X^p \to R^0\pi_*\Omega^p_{\hX}(\log E)$ is an isomorphism. Hence $X$ is $k$-rational $\iff$ for all $p\le k$ and all $q> 0$, $R^q\pi_*\Omega^p_{\hX}(\log E) =0$. \end{lemma} \begin{proof} This follows easily from Theorem~\ref{Greuelthm} and Corollary~\ref{cor2}, as $R^0\pi_*\Omega^p_{\hX}(\log E)$ is torsion free for all $p$. \end{proof} \begin{remark}\label{case0kDB} Suppose that $X$ has an isolated singularity $x$, so that $\Sigma = \{x\}$. Then by Theorem~\ref{relative}, there is the distinguished triangle of relative cohomology $$\uOb_{ X,x} \to \uOb_X \to \Cee[0] \to , $$ where we somewhat carelessly write $\Cee[0]$ for the skyscraper sheaf $\Cee_x$, viewed as a complex in degree $0$. Hence $\uOp_{X,x} \to \uOp_X$ is an isomorphism for $p>0$, so if $X$ is lci and $n = \dim X \ge 2k+1$, by the same argument as that of Lemma~\ref{case0krat}, $ \phi^p\colon \Omega_X^p \to R^0\pi_*\Omega^p_{\hX}(\log E)(-E)$ is an isomorphism. For $p=0$, if $X$ has an isolated singularity and is normal, then the map $\pi_*\scrO_{\hX}(-E) \to \underline{\Omega}^0_X$ has cokernel $\Cee[0]$ and factors through $\mathfrak{m}_x\subseteq \scrO_X$. Hence $\scrO_X \to \mathcal{H}^0\underline{\Omega}^0_X$ is an isomorphism. Thus, if $X$ has an isolated lci singularity and $\dim X \ge 2k+1$, i.e.\ $k \le (n-1)/2$, then $X$ is $k$-Du Bois $\iff$ for all $p\le k$ and all $q> 0$, $R^q\pi_*\Omega^p_{\hX}(\log E)(-E) =0$. \end{remark} \subsection{$k$-rational vs.\ $k$-Du Bois singularities} Steenbrink \cite{Steenbrink} proved that, if $X$ is an isolated rational singularity, then $X$ is Du Bois, and Kov\'acs \cite{Kovacs99} generalized this, showing that any rational singularity is Du Bois. Saito gave a different proof in \cite[Thm.\ 5.4]{Saito2000}. The method of \cite{Steenbrink} generalizes to prove that isolated $k$-rational singularities are $k$-Du Bois (see \cite[Theorem 3.2]{FL22d}), and the method of \cite{Kovacs99} can be generalized to handle both isolated and lci singularities. Using the ideas of \cite{Kovacs99}, we shall show the following in Section~\ref{section5}: \begin{theorem}\label{kratkDB} Suppose either that $X$ has isolated singularities or that $X$ is lci. If $X$ is $k$-rational, then $X$ is $k$-Du Bois. \end{theorem} Assuming for the moment Theorem \ref{kratkDB}, we note that the $k$-rational assumption adds further Hodge symmetries to the $k$-Du Bois condition: If $X$ is $k$-Du Bois, then $X$ is $k$-rational $\iff$ the map $\psi^p \colon \uOp_X\to \bD_X(\uOnp_{X})$ is a quasi-isomorphism for all $p\le k$. If $X$ is proper, these assumptions lead to a Hodge symmetry, Serre duality for the Hodge-Du Bois numbers: \begin{corollary}\label{cor-sym1} If $Y$ is a compact complex algebraic variety of dimension $n$ with lci $k$-rational singularities, then, for $0\le p\le k$, \begin{equation*} \uh^{p,q}(Y)=\dim \Gr_F^p H^{p+q}(Y) = \dim \Gr_F^{n-p} H^{2n-(p+q)}(Y)=\uh^{n-p,n-q}(Y). \end{equation*} In particular, taking $p+q =n$ gives $\uh^{p,n-p}=\uh^{n-p,p}$ for $p\le k$. \end{corollary} \begin{proof} Using Theorem \ref{thm-e1}, we get the following identifications: $$\Gr_F^p H^{p+q}(Y) \cong \mathbb{H}^q(Y; \uOp_Y)\cong\mathbb{H}^q(Y; \bD_Y(\uOnp_{Y}))\cong \mathbb{H}^{n-q}(Y; \uOnp_{Y})^\vee,$$ where the middle isomorphism is given by the quasi-isomorphism $\psi^p$ (for $p\le k$). \end{proof} It remains to discuss the Hodge symmetry $h^{p,q}=h^{q,p}$, which is induced by complex conjugation in the smooth case.
We recall that the cohomology of a compact singular algebraic variety $Y$ carries a mixed Hodge structure $(H^*(Y),F^\bullet, W_\bullet)$. The Hodge-Deligne numbers $h^{p,r}_i=\dim\Gr^p_F\Gr_{p+r}^WH^i(Y)$ satisfy the symmetry given by conjugation: $h^{p,r}_i=h^{r,p}_i$ (as $\Gr_{p+r}^WH^i(Y)$ is a pure Hodge structure). Since $Y$ is compact, the weights on $H^i(Y)$ are at most $i$, and in fact between $2i-2n$ and $i$ if $i\geq n$. It follows that, for $i\le n$, $\uh^{p,i-p}=\sum_{r=0}^{i-p} h^{p,r}_i$. However, the Hodge-Du Bois numbers do not satisfy the same kind of symmetry as they reflect only the Hodge filtration $F^\bullet$. In fact, we note the following: \begin{lemma}\label{ineq-sym} Let $Y$ be a compact complex algebraic variety of dimension $n$. For $0\le p\le i\le n$, $$ \sum_{a=0}^p \uh^{i-a,a} \le \sum_{a=0}^p \uh^{a,i-a}.$$ Furthermore, equality holds above for all $p\le k$ $\iff$ $\uh^{p,i-p}=\uh^{i-p,p}$ for all $p\le k$ $\iff$ $\Gr_F^pW_{i-1}H^i(Y)=0$ for all $p\le k$. \end{lemma} \begin{proof} Clearly $\sum_{a=0}^p \uh^{i-a,a} = \sum_{a=0}^p \uh^{a,i-a}$ for all $p\le k$ $\iff$ $\uh^{p,i-p}=\uh^{i-p,p}$ for all $p\le k$. For a fixed $p$, we have $$\sum_{a=0}^p \uh^{i-a,a} = \sum_{a=0}^p \sum_{q\leq a}h^{i-a, q}_i = \sum_{a=0}^p \sum_{q\leq a}h^{q,i-a}_i=\sum_{\substack {i-p \le s \le i \\r+s \le i}}h^{r, s}_i,$$ by the Hodge symmetries, whereas $$\sum_{a=0}^p \uh^{a,i-a} = \sum_{a=0}^p \sum_{a+q \le i}h^{a, q}_i=\sum_{\substack { r \le p \\r+s \le i}}h^{r, s}_i.$$ Note that, if $i-p \le s \le i $ and $r+s \le i$, then $r\le i-s \le p$, so the index set of the first sum is contained in that of the second, and hence the second sum is greater than or equal to the first, giving the inequality. For a given $p$, equality holds $\iff$ $h^{r, s}_i = 0$ for $r\leq p$ and $s\leq i-p -1$. Moreover, $h^{r, s}_i = 0$ for $r\leq p$ and $s\leq i-p -1$ for all $p\le k$ $\iff$ $h^{r, s}_i = 0$ for $r\leq k$, $r+s \leq i-1$. This is equivalent to: $\Gr_F^pW_{i-1}H^i(Y)=0$ for $p\le k$. \end{proof} Recall that, for any resolution $\pi:\hat Y\to Y$, \begin{equation}\label{intrepret-w1} W_{i-1} H^i(Y)=\Ker \left(H^i(Y)\xrightarrow{\pi^*}H^i(\hat Y)\right) \end{equation} (e.g. \cite[Cor.\ 5.42]{PS}). Using this, we obtain: \begin{theorem}\label{cor-sym2} If $Y$ is a compact complex algebraic variety of dimension $n$ with either isolated or lci $k$-rational singularities, then $$\uh^{p,q}=\uh^{q,p}$$ for $0\le p\le k$ and $0\le q\le n$. \end{theorem} \begin{proof} In view of the discussion above, we define the discrepancy $$\delta_i^p:=\dim \Gr^p_F W_{i-1} H^i(Y)=\dim \Gr^p_F\Ker \left(H^i(Y)\xrightarrow{\pi^*}H^i(\hat Y)\right).$$ By Lemma~\ref{ineq-sym}, the equality $\uh^{p,i-p}=\uh^{i-p,p}$ holds for $p\le k$ $\iff$ $\delta_i^p=0$ for $p\le k$. The map $\psi^p$ occurring in the definition of higher rationality (Definition~\ref{defkrat} and Lemma~\ref{lem-psi}) factors through the resolution $\pi:\hat Y\to Y$, and at the level of cohomology corresponds to the $\Gr^p_F$ piece (see also Corollary~\ref{cor-sym1}) of the natural map $$\Psi^i:H^i(Y)\xrightarrow{\pi^*} H^i(\hat Y)\xrightarrow[PD]{\sim}H^{2n-i}(\hat Y)^\vee(-n)\xrightarrow{(\pi^*)^\vee} H^{2n-i}( Y)^\vee(-n),$$ where all spaces are endowed with the natural Hodge structures. On the graded piece $\Gr_F^pH^i(Y) = \mathbb{H}^{i-p}(Y; \uOp_Y)$, $\Gr_F^p\Psi^i$ is the map $\psi^p\colon \mathbb{H}^{i-p}(Y; \uOp_Y) \to \mathbb{H}^{n-i+p}(Y; \uOnp_Y)\spcheck$, which is an isomorphism if $Y$ is both $k$-rational and $k$-Du Bois, and in particular if $Y$ is $k$-rational and has either isolated singularities or lci singularities.
By the strictness of morphisms of mixed Hodge structures with respect to the Hodge filtration, if $\Gr_F^p\Psi^i$ is an isomorphism then $\pi^*$ is injective on $\Gr_F^p$ and hence $\delta^p_i=0$. Thus, $k$-rationality implies $\delta^p_i=0$ for $p\le k$ and all $i$, which in turn means $\uh^{p,i-p}=\uh^{i-p,p}$ in this range. \end{proof} \begin{remark} As noted by one of the referees, in the above proof as well as in the proof of Lemma~\ref{lem-psi}, the main point is to apply relative duality to a resolution of singularities. Thus the proof does not take into account the full information of a hyperresolution. \end{remark} \section{Proof of Theorem~\ref{mainThm} and Corollary~\ref{CYdef}}\label{section4} We turn now to the global setting of a deformation of a compact analytic space or proper scheme $Y$, and to the question of the local freeness of $\Omega^p_{\cY/S}$. \begin{theorem}\label{thm5} Let $f\colon \cY \to \Spec A$ be a proper morphism of complex spaces, where $A$ is an Artin local $\Cee$-algebra, with closed fiber $Y$. Let $(\mathcal{F}^\bullet, d)$ be a bounded complex of coherent sheaves on $\cY$, flat over $A$, where $d\colon \mathcal{F}^i\to \mathcal{F}^{i+1}$ is $A$-linear, but not necessarily $\scrO_{\cY}$-linear. Finally, suppose that the natural map $\mathbb{H}^i(\cY; \mathcal{F}^\bullet) = \Ar^if_*\mathcal{F}^\bullet \to \mathbb{H}^i(Y; \mathcal{F}^\bullet|Y)$ is surjective for all $i$. Then $\mathbb{H}^i(\cY; \mathcal{F}^\bullet)$ is a finite $A$-module whose length satisfies: $$\ell(\mathbb{H}^i(\cY; \mathcal{F}^\bullet)) = \ell(A)\dim \mathbb{H}^i(Y; \mathcal{F}^\bullet|Y).$$ \end{theorem} \begin{proof} This is a minor variation on very standard arguments. Note that $\mathbb{H}^i(\cY; \mathcal{F}^\bullet)$ is a finite $A$-module since it is the abutment of a spectral sequence with $E_1$ page $E_1^{p,q}= H^q(\cY; \mathcal{F}^p)$ and such that all of the differentials in the spectral sequence are $A$-module homomorphisms. Next we show that, for every finite $A$-module $M$, the natural map $$\Psi_M\colon \mathbb{H}^i(\cY; \mathcal{F}^\bullet)\otimes_AM \to \mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes _AM)$$ is surjective. The proof is via induction on $\ell(M)$, the case $\ell(M) =1$ being the hypothesis of the theorem. The inductive step follows from the following observation: given an exact sequence $0\to M' \to M\to M'' \to 0$ such that $\Psi_{M'}$ and $\Psi_{M''}$ are surjective, $\Psi_M$ is surjective as well. This follows from the commutative diagram $$\begin{CD} \mathbb{H}^i(\cY; \mathcal{F}^\bullet)\otimes_AM' @>>> \mathbb{H}^i(\cY; \mathcal{F}^\bullet)\otimes_AM @>>> \mathbb{H}^i(\cY; \mathcal{F}^\bullet)\otimes_AM'' @>>> 0\\ @VV{\Psi_{M'}}V @VV{\Psi_M}V @VV{\Psi_{M''}}V @.\\ \mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes _AM') @>>> \mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes _AM)@>>> \mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes _AM'') @. \end{CD}$$ where the top row is exact since the tensor product is right exact and the bottom row is exact since $\mathcal{F}^\bullet$ is $A$-flat. To prove the theorem, we argue by induction on $\ell(A)$. The result is clearly true if $\ell(A) = 1$. For the inductive step, write $A$ as a small extension $0 \to I \to A \to A/I \to 0$, so that $I\cong A/\mathfrak{m}$ and $\mathcal{F}^\bullet \otimes_A I \cong \mathcal{F}^\bullet \otimes_A (A/\mathfrak{m})=\mathcal{F}^\bullet|Y$.
By flatness, there is an exact sequence $$\cdots \to \mathbb{H}^i(\cY; \mathcal{F}^\bullet \otimes_A I) \to \mathbb{H}^i(\cY; \mathcal{F}^\bullet) \to \mathbb{H}^i(\cY; \mathcal{F}^\bullet \otimes_AA/I) \to \cdots $$ Then the above implies that $\mathbb{H}^i(\cY; \mathcal{F}^\bullet) \to \mathbb{H}^i(\cY; \mathcal{F}^\bullet \otimes_AA/I)$ is surjective for all $i$. Hence the long exact sequence breaks up into short exact sequences and thus \begin{align*} \ell(\mathbb{H}^i(\cY; \mathcal{F}^\bullet)) &= \ell(\mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes_AI ))+ \ell(\mathbb{H}^i(\cY; \mathcal{F}^\bullet\otimes_AA/I))\\ &= \dim \mathbb{H}^i(Y; \mathcal{F}^\bullet|Y) + \ell(A/I)\dim \mathbb{H}^i(Y; \mathcal{F}^\bullet|Y)\\ &= \ell(A)\dim \mathbb{H}^i(Y; \mathcal{F}^\bullet|Y). \end{align*} This concludes the inductive step. \end{proof} Now assume that $f\colon \cY \to \Spec A$ is a proper morphism, where $\cY$ is a scheme of finite type over $\Spec \Cee$ and $A$ is an Artin local $\Cee$-algebra, with closed fiber $Y$. Consider the complex $\Omega^\bullet_{\cY/\Spec A}$ and the quotient complex $\Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A}$. For the closed fiber $Y$, we also have the complex $\Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y$. \begin{lemma}\label{lemma2} With notation as above, suppose that $Y$ is $k$-Du Bois. Then: \begin{enumerate} \item[\rm(i)] For every $i$, the natural map $$\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A}) \to \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y)$$ is surjective. \item[\rm(ii)] The spectral sequence with $E_1$ term $$E_1^{p,q} = \begin{cases} H^q(Y; \Omega^p_Y) , &\text{for $p\leq k$;}\\ 0, &\text{for $p> k$.} \end{cases}$$ converging to $\mathbb{H}^{p+q}(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y)$ degenerates at $E_1$. Hence, for every $i$, $$\dim \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y) = \sum_{\substack {p+q = i\\ p \leq k}}\dim H^q(Y; \Omega^p_Y).$$ \end{enumerate} \end{lemma} \begin{proof} By the $k$-Du Bois condition, the natural map $\Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y \to \underline{\Omega}^\bullet_Y/F^{k+1}\underline{\Omega}^\bullet_Y$ is a quasi-isomorphism of filtered complexes. Thus there are isomorphisms \begin{align*} \mathbb{H}^i(Y;\Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y) &\cong \mathbb{H}^i(Y; \underline{\Omega}^\bullet_Y/F^{k+1}\underline{\Omega}^\bullet_Y) \\ H^q(Y;\Omega^p_Y) &\cong \mathbb{H}^q(Y; \underline{\Omega}^p_Y) \qquad \qquad (p\leq k). \end{align*} By Corollary~\ref{DBdegen}(ii), the spectral sequence with $E_1$ page $\mathbb{H}^q(Y; \underline{\Omega}^p_Y)$ for $p\leq k$ and $0$ otherwise degenerates at $E_1$. Hence the same is true for the spectral sequence in (ii). To prove (i), arguing as in \cite{DBJ}, there is a commutative diagram $$\begin{CD} H^i(\cY; \Cee) @>>> \mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}) @>>> \mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A}) \\ @| @VVV @VVV \\ H^i(Y; \Cee) @>>> \mathbb{H}^i(Y; \Omega^\bullet_Y) @>>> \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y). \end{CD}$$ By Corollary~\ref{DBdegen}(i), $H^i(Y; \Cee) \to \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y)\cong \mathbb{H}^i(Y; \underline{\Omega}^\bullet_Y/F^{k+1}\underline{\Omega}^\bullet_Y)$ is surjective. 
Hence $$\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A}) \to \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y)$$ is surjective as well. \end{proof} \begin{proof}[Proof of Theorem~\ref{mainThm}] By Corollary~\ref{cor1}, possibly after shrinking $S$, $\Omega^p_{\cY/S}$ is flat over $S$. By a standard reduction to the case of Artin local algebras, it is enough to show the following: Let $A$ be an Artin local $\Cee$-algebra and $f\colon \cY\to \Spec A$ a proper morphism whose closed fiber $Y$ is isomorphic to $Y_s$. Then $R^qf_*\Omega^p_{\cY/\Spec A}= H^q(\cY; \Omega^p_{\cY/\Spec A})$ is a free $A$-module, compatible with base change. In fact, by \cite[Theorem 12.11]{Hartshorne}, it is enough to show that, for every $q$ and quotient $A \to A/I=\overline{A}$, the natural map $H^q(\cY; \Omega^p_{\cY/\Spec A}) \to H^q(\overline{\cY}; \Omega^p_{\overline{\cY}/\Spec \overline{A}})$ is surjective, where $\overline{\cY} = \cY\times_{\Spec A}\Spec \overline{A}$. By Theorem~\ref{thm5} and Lemma~\ref{lemma2}, for every $i$, \begin{align*} \ell(\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A})) &= \ell(A) \dim \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y)\\ &= \ell(A)\sum_{\substack {p+q = i\\ p \leq k}}\dim H^q(Y; \Omega^p_Y). \end{align*} On the other hand, by analogy with Lemma~\ref{lemma2}(ii), there is the spectral sequence converging to $\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A})$ with $E_1$ page $$E_1^{p,q} = \begin{cases} H^q(\cY; \Omega^p_{\cY/\Spec A}) , &\text{for $p\leq k$;}\\ 0, &\text{for $p> k$.} \end{cases}$$ Thus $\Dis\ell(\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A})) \leq \sum_{\substack {p+q = i\\ p \leq k}}\ell(H^q(\cY; \Omega^p_{\cY/\Spec A}))$, with equality $\iff$ the spectral sequence degenerates at $E_1$. Moreover, a straightforward induction on $\ell(A)$ along the lines of the proof of Theorem~\ref{thm5} shows that $\ell(H^q(\cY; \Omega^p_{\cY/\Spec A})) \leq \ell(A) \dim H^q(Y; \Omega^p_Y)$, with equality holding for all $q$ $\iff$ for every $q$ and every quotient $A \to A/I=\overline{A}$, the natural map $H^q(\cY; \Omega^p_{\cY/\Spec A}) \to H^q(\overline{\cY}; \Omega^p_{\overline{\cY}/\Spec \overline{A}})$ is surjective. Combining, we have \begin{align*} \ell(\mathbb{H}^i(\cY;& \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1} \Omega^\bullet_{\cY/\Spec A})) \leq \sum_{\substack {p+q = i\\ p \leq k}}\ell(H^q(\cY; \Omega^p_{\cY/\Spec A})) \\ &\leq \sum_{\substack {p+q = i\\ p \leq k}}\ell(A) \dim H^q(Y; \Omega^p_Y) = \ell(A)\dim \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}\Omega^\bullet_Y) \\ &= \ell(\mathbb{H}^i(\cY; \Omega^\bullet_{\cY/\Spec A}/\sigma^{\geq k+1}\Omega^\bullet_{\cY/\Spec A})). \end{align*} Thus all of the inequalities must be equalities, and in particular $H^q(\cY; \Omega^p_{\cY/\Spec A}) \to H^q(\overline{\cY}; \Omega^p_{\overline{\cY}/\Spec \overline{A}})$ is surjective for every $q$ and for every quotient $A \to \overline{A}$. This completes the proof, by the remarks in the first paragraph of the proof. \end{proof} \begin{remark}\label{rem-can} Instead of assuming that $Y$ is a scheme of finite type proper over $\Spec \Cee$, it is enough to assume that $Y$ is a compact analytic space with isolated singularities and that there exists a resolution of singularities of $Y$ which satisfies the $\partial\bar\partial$-lemma.
Roughly speaking, the argument is as follows: The main point is to construct the analogue of the complex $\uOb_Y$ in this case, using Hironaka's resolution of singularities, and to note that all of the smooth varieties arising from a cubical hyperresolution satisfy the $\partial\bar\partial$-lemma. Such varieties arise in two different ways: first, there is one such which is a resolution of singularities of $Y$. A standard argument (cf.\ \cite[proof of Theorem 2.29]{PS}) shows that in fact every resolution of singularities of $Y$ satisfies the $\partial\bar\partial$-lemma. The remaining varieties in the construction can all be assumed to be smooth projective varieties coming from proper transforms of blowups of subvarieties of the exceptional divisors of the blowup of $Y$ along $Y_{\text{\rm{sing}}}$ or by iterating this procedure. In particular, for every such variety, its cohomology carries a pure Hodge structure and the restriction morphisms between the various cohomology groups are morphisms of Hodge structures. From this, it follows that the Hodge spectral sequence $E_1^{p,q} =\mathbb{H}^q(Y; \uOp_Y) \implies H^{p+q}(Y;\Cee)$ degenerates at $E_1$. Then the argument of Theorem~\ref{mainThm} applies. It seems likely that there is a more general result, in the non-isolated case, assuming sufficiently strong conditions on every subvariety of $Y_{\text{\rm{sing}}}$. \end{remark} We turn now to a proof of Corollary~\ref{CYdef}. First, we define canonical Calabi-Yau varieties following \cite[Definition 6.1]{FL}: \begin{definition}\label{defcanonCY} A \textsl{canonical Calabi-Yau variety} $Y$ is a scheme proper over $\Spec \Cee$ (resp.\ a compact analytic space) which is reduced, Gorenstein, with canonical singularities, and such that $\omega_Y \cong \scrO_Y$. \end{definition} \begin{proof}[Proof of Corollary~\ref{CYdef}] It suffices by \cite{Kawamata} to show that $Y$ has the $T^1$-lifting property. In fact, we show the following somewhat stronger statement: let $A$ be an Artin local $\Cee$-algebra, $\pi\colon \mathcal{Y} \to \Spec A$ a deformation of $Y$ over $A$, and $I$ an ideal of $A$; then, with $\overline{A} =A/I$ and $\overline{\cY} = \cY\times _{\Spec A}\Spec \overline{A}$, the natural map $$\Ext^1(\Omega^1_{\cY/\Spec A} , \scrO_{\cY}) \to \Ext^1(\Omega^1_{\overline{\cY}/\Spec \overline{A}} , \scrO_{\overline{\cY}})$$ is surjective. Arguing as in \cite[Lemma 3]{Kawamata}, we claim that $\omega_{\cY/\Spec A} \cong \scrO_{\cY}$. To see this, note that, as $Y$ is $1$-Du Bois, it is $0$-Du Bois, and hence by Theorem~\ref{Thm-duBois} $R^i\pi_*\scrO_{\cY}=H^i(\cY; \scrO_{\cY})$ is a free $A$-module for all $i$, compatible with base change. In particular, $R^n\pi_*\scrO_{\cY}$ is a free rank one $A$-module. An application of relative duality then shows that $(R^i\pi_*\scrO_{\cY})\spcheck \cong R^{n-i}\pi_*\omega_{\cY/\Spec A}$, and hence $R^{n-i}\pi_*\omega_{\cY/\Spec A}$ is a free $A$-module for all $i$. Thus $R^0\pi_*\omega_{\cY/\Spec A}$ is a free rank one $A$-module and the natural map $R^0\pi_*\omega_{\cY/\Spec A} \otimes_A \scrO_{\cY} \to \omega_{\cY/\Spec A}$ is an isomorphism. Hence $\omega_{\cY/\Spec A} \cong \scrO_{\cY}$.
By a similar application of relative duality, and since $R^i\pi_*\Omega^1_{\cY/\Spec A} = H^i(\cY; \Omega^1_{\cY/\Spec A})$ is a free $A$-module for all $i$, there is an isomorphism of $A$-modules \begin{align*} \Ext^1(\Omega^1_{\cY/\Spec A} , \scrO_{\cY}) &\cong \textit{Ext}_{\pi}^1(\Omega^1_{\cY/\Spec A} , \omega_{\cY/\Spec A}) \\ &\cong \Hom_A(H^{n-1}(\cY;\Omega^1_{\cY/\Spec A} ), A) . \end{align*} Then the $T^1$-lifting criterion follows from the fact that $H^{n-1}(\cY;\Omega^1_{\cY/\Spec A})$ is a free $A$-module and that the natural map $H^{n-1}(\cY;\Omega^1_{\cY/\Spec A}) \otimes _A\overline{A} \to H^{n-1}(\overline{\cY};\Omega^1_{\overline{\cY}/\Spec \overline{A}})$ is an isomorphism. \end{proof} \begin{remark} A similar argument using Remark~\ref{rem-can} shows that, if $Y$ is a compact analytic space with isolated lci $1$-Du Bois singularities such that $\omega_Y \cong \scrO_Y$ and there exists a resolution of singularities of $Y$ satisfying the $\partial\bar\partial$-lemma, then $\mathbf{Def} (Y)$ is unobstructed. \end{remark} \section{Proof of Theorem~\ref{kratkDB}}\label{section5} For $X$ an algebraic variety, let $\Sigma_k= \Sigma_k(X)$ be the set of points where $X$ is not $k$-Du Bois. Thus, for all $k$, $\Sigma_k \subseteq \Sigma_{k+1} \subseteq \Sigma$. It is easy to see that $\Sigma_k$ is a closed subvariety of $X$. In fact, completing the morphism $\phi^p\colon \Omega^p_X \to \uOp_X $ to a distinguished triangle $$ \Omega^p_X \to \uOp_X \to \mathcal{G}^p \xrightarrow{+1},$$ by definition we have $$\Sigma_k = \bigcup_{\substack{ i, p\\ 0\le p \le k}}\operatorname{Supp}\mathcal{H}^i\mathcal{G}^p.$$ In order to prove that $k$-rational singularities are $k$-Du Bois, we will need to consider situations more general than $k$-rational (as in the statement of Theorem~\ref{mainkratthm}). Recall that a \textsl{left inverse} to the map $\phi^p\colon \Omega^p_X \to \uOp_X $ is a map $h^p\colon \uOp_X \to \Omega^p_X$ such that $h^p\circ \phi^p =\Id$. Of course, left inverses need not exist in general. More generally, we consider a set of $k+1$ left inverses $\{h^p\}_{p=0}^k$. If $X$ is $k$-Du Bois, then $\phi^p$ is an isomorphism for $p\le k$ and so $\{h^p\}_{p=0}^k =\{(\phi^p)^{-1}\}_{p=0}^k$ is a set of left inverses. In this case, we shall always use $\phi^p$ to identify $\Omega^p_X$ with $\uOp_X $, and thus $h^p =\Id$ for $p \le k$. If there exists a left inverse $H_k \colon\uOb_X /F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1} $ in the filtered derived category, then $\{\Gr^p H_k\}_{p=0}^k$ defines a set of $k+1$ left inverses. The following lemma shows that, conversely, given a set of $k+1$ left inverses $\{h^p\}_{p=0}^k$, we can modify them so as to arrange a left inverse $H_k \colon\uOb_X /F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1} $ in the filtered derived category. \begin{lemma}\label{lemma5.2} If the map $\Omega^\bullet_X/\sigma^{\geq k+1} \to \uOb_X /F^{k+1}$ has a set of $k+1$ left inverses $\{h^p\}_{p=0}^k$, then there exists a left inverse $H_k \colon\uOb_X /F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1} $ in the filtered derived category. \end{lemma} \begin{remark} We do not claim that the left inverse $H_k$ constructed in the proof satisfies $h^p = \Gr^p H_k$ for all $p \le k$. \end{remark} \begin{proof}[Proof of Lemma~\ref{lemma5.2}] We argue by induction on $k$, where the case $k=0$ is obvious. There is a diagram $$\begin{CD} \underline{\Omega}_X^k[-k] @>>> \uOb_X/F^{k+1} @>>> \uOb_X/F^k @>{+1}>> \\ @VV{h^k}V @.
@VV{H_{k-1}}V \\ \Omega^k_X[-k] @>>> \Omega^\bullet_X/\sigma^{\geq k+1} @>>> \Omega^\bullet_X/\sigma^{\geq k} @>{+1}>>. \end{CD}$$ We can thus complete the diagram to find a morphism $H_k'\colon \uOb_X/F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1}$ in the filtered derived category which yields a morphism of distinguished triangles. The composition $G_k\colon \Omega^\bullet_X/\sigma^{\geq k+1} \to \uOb_X/F^{k+1} \xrightarrow{H_k'} \Omega^\bullet_X/\sigma^{\geq k+1}$ is an isomorphism since it is an isomorphism on the graded pieces. Set $H_k = G_k^{-1}\circ H_k'\colon \uOb_X/F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1}$. Then $H_k$ is a left inverse to the map $\Omega^\bullet_X/\sigma^{\geq k+1} \to \uOb_X/F^{k+1}$. \end{proof} Next we define a special class of left inverses: \begin{definition} Let $\alpha \in H^0(X; \Omega^1_X)$. Then $\alpha$ pulls back to some fixed hyperresolution and so defines a map $\alpha\wedge \colon \underline{\Omega}^{p-1}_X \to \uOp_X$. Two left inverses $h^p$ and $h^{p-1}$ are \textsl{compatible} if, for all $p\le k$ and all $\alpha \in H^0(X; \Omega^1_X)$, $$h^p\circ (\alpha\wedge) = (\alpha\wedge)\circ h^{p-1}.$$ If $X$ is $k$-Du Bois, then, for all $p\le k$, the left inverses $h^p$ and $h^{p-1}$ are isomorphisms. In fact, after identifying $\uOp_X$ and $\underline{\Omega}^{p-1}_X$ with $\Omega^p_X$ and $\Omega^{p-1}_X$ respectively, we can assume that $h^p =\Id$ for all $p \le k$. With this convention, necessarily $h^p\circ (\alpha\wedge) = (\alpha\wedge)\circ h^{p-1}$, hence $h^p$ and $h^{p-1}$ are compatible. Finally, the set of $k+1$ left inverses $\{h^p\}_{p=0}^k$ is \textsl{compatible} if, for all $p\le k$, $h^p$ and $h^{p-1}$ are compatible. \end{definition} \begin{lemma}\label{existsleftinv} If $X$ is $k$-rational, then there exists a compatible set of $k+1$ left inverses $\{h^p\}_{p=0}^k$. \end{lemma} \begin{proof} The following diagram commutes up to a sign (all vertical maps are $\alpha\wedge$): $$\begin{CD} \Omega^{p-1}_X @>>> \underline{\Omega}^{p-1}_X @>>> R\pi_*\Omega^{p-1}_{\hX} @>>> \mathbb{D}_X(\underline{\Omega}^{n-p+1}_X) \\ @VVV @VVV @VVV @VVV\\ \Omega^p_X @>>> \uOp_X @>>> R\pi_*\Omega^p_{\hX} @>>> \mathbb{D}_X(\underline{\Omega}^{n-p}_X) \end{CD}$$ For $p\le k$, let $h^p$ be the left inverse to the map $\Omega^p_X \to \uOp_X$ defined by taking the inverse to the isomorphism $\gamma^p =\psi^p\circ \phi^p\colon \Omega^p_X \to \mathbb{D}_X(\underline{\Omega}^{n-p}_X)$ and composing it with the map $\uOp_X\to \mathbb{D}_X(\underline{\Omega}^{n-p}_X)$. Since $\gamma^p\circ (\alpha\wedge) = (\alpha\wedge)\circ \gamma^{p-1}$, $(\gamma^p)^{-1}\circ (\alpha\wedge) = (\alpha\wedge)\circ (\gamma^{p-1})^{-1}$ and thus $h^p\circ (\alpha\wedge) = (\alpha\wedge)\circ h^{p-1}$. \end{proof} We now deal with the case where $\dim \Sigma_k =0$, following the method of proof of Kov\'acs \cite[Theorem 2.3]{Kovacs99}, \cite[p.\ 186]{PS}: \begin{proposition}\label{dim=0case} Suppose that $\dim \Sigma_k =0$ and that there is a left inverse $H_k\colon \uOb_X /F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1}$ to the map $\Omega^\bullet_X/\sigma^{\geq k+1} \to \uOb_X /F^{k+1}$. Then $\Omega^\bullet_X/\sigma^{\geq k+1} \cong \uOb_X /F^{k+1}$. \end{proposition} \begin{proof} Let $Y$ be some projective completion of $X$, so that $Y-X = D$. Let $Z = D \cup \Sigma_k \subseteq Y$. By hypothesis, $\Sigma_k$ is finite, hence $Z$ is closed. Let $U = X -\Sigma_k = Y - D - \Sigma_k$. Thus $U$ is $k$-Du Bois. 
Now consider the commutative diagram $$\begin{CD} \scriptstyle\mathbb{H}^{i-1}(U; \Omega^\bullet_U/\sigma^{\geq k+1}) @>>> \scriptstyle\mathbb{H}^i_Z(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) @>>> \scriptstyle\mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) @>>> \scriptstyle\mathbb{H}^i(U; \Omega^\bullet_U/\sigma^{\geq k+1})\\ @VV{\cong}V @VVV @VVV @V{\cong}VV \\ \scriptstyle\mathbb{H}^{i-1}(U; \uOb_U/F^{k+1}) @>>>\scriptstyle \mathbb{H}^i_Z(Y; \uOb_Y/F^{k+1}) @>>> \scriptstyle\mathbb{H}^i(Y; \uOb_Y/F^{k+1}) @>>> \scriptstyle\mathbb{H}^i(U; \uOb_U/F^{k+1}). \end{CD}$$ We claim that $\mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) \to \mathbb{H}^i(Y; \uOb_Y/F^{k+1})$ is surjective: this is the usual argument that $H^i(Y;\Cee) \to \mathbb{H}^i(Y; \uOb_Y/F^{k+1})$ is surjective and factors through $H^i(Y;\Cee) \to \mathbb{H}^i(Y; \Omega^\bullet_Y/\sigma^{\geq k+1})$. Then $\mathbb{H}^i_Z(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) \to \mathbb{H}^i_Z(Y; \uOb_Y/F^{k+1})$ is surjective. We have compatible direct sum decompositions $$\begin{CD} \mathbb{H}^i_Z(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) @>{\cong}>> \mathbb{H}^i_D(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) \oplus \mathbb{H}^i_{\Sigma_k}(Y; \Omega^\bullet_Y/\sigma^{\geq k+1})\\ @VVV @VVV \\ \mathbb{H}^i_Z(Y; \uOb_Y/F^{k+1}) @>{\cong}>> \mathbb{H}^i_D(Y; \uOb_Y/F^{k+1})\oplus \mathbb{H}^i_{\Sigma_k}(Y; \uOb_Y/F^{k+1}). \end{CD}$$ Hence $\mathbb{H}^i_{\Sigma_k}(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) \to \mathbb{H}^i_{\Sigma_k}(Y; \uOb_Y/F^{k+1})$ is surjective. By excision, $\mathbb{H}^i_{\Sigma_k}(Y; \Omega^\bullet_Y/\sigma^{\geq k+1}) \cong \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1})$ and similarly for $\mathbb{H}^i_{\Sigma_k}(Y; \uOb_Y/F^{k+1})$. Thus, $$ \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1}) \to \mathbb{H}^i_{\Sigma_k}(X; \uOb_X/F^{k+1})$$ is surjective. However, the existence of the left inverse $H_k$ gives a map $\mathbb{H}^i_{\Sigma_k}(X; \uOb_X/F^{k+1}) \to \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1})$ such that the composed map $ \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1}) \to \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1})$ is the identity. Hence $$ \mathbb{H}^i_{\Sigma_k}(X; \Omega^\bullet_X/\sigma^{\geq k+1}) \to \mathbb{H}^i_{\Sigma_k}(X; \uOb_X/F^{k+1})$$ is also injective, and thus an isomorphism. Since $\Omega^p_X|X-\Sigma_k \to \uOp_X|X-\Sigma_k$ is an isomorphism, it then follows from the ``Localization principle" of \cite[p.\ 186]{PS} that $\Omega^\bullet_X/\sigma^{\geq k+1} \cong \uOb_X /F^{k+1}$. \end{proof} The following corollary then deals with the case $\dim \Sigma =0$ of Theorem~\ref{kratkDB}, in fact under the somewhat weaker hypothesis that $\dim \Sigma_k =0$. \begin{corollary} Suppose that $\dim \Sigma_k =0$ and that there exists a left inverse $H_k \colon\uOb_X /F^{k+1} \to \Omega^\bullet_X/\sigma^{\geq k+1} $ to the map $\Omega^\bullet_X/\sigma^{\geq k+1} \to \uOb_X /F^{k+1}$. Then $X$ is $k$-Du Bois. In particular, if $X$ is $k$-rational and $\dim \Sigma_k =0$, then $X$ is $k$-Du Bois. \end{corollary} \begin{proof} The proof is by induction on $k$. There is a morphism of distinguished triangles $$\begin{CD} \Omega^k_X[-k] @>>> \Omega^\bullet_X/\sigma^{\geq k+1} @>>> \Omega^\bullet_X/\sigma^{\geq k} @>{+1}>> \\ @VVV @VVV @VVV \\ \underline\Omega_X^k[-k] @>>> \uOb_X/F^{k+1} @>>> \uOb_X/F^k @>{+1}>> . 
\end{CD}$$ Here, the center vertical arrow is an isomorphism by Proposition~\ref{dim=0case} and the right vertical arrow is an isomorphism by induction. Thus, the left vertical arrow is an isomorphism as well. \end{proof} To handle the case where $\dim \Sigma_k > 0$, we consider the effect of passing to a general hyperplane section. Note that, if $H$ is an effective Cartier divisor on $X$, since $\scrO_H$ is quasi-isomorphic to the complex $\scrO_X(-H) \to \scrO_X$, we can define the operation $\otimes^{\mathbb{L}}\scrO_H$ on the (bounded) derived category, and similarly for $\otimes^{\mathbb{L}}\scrO_H(-H)$. For a coherent sheaf $\mathcal{F}$ on $X$, if $\mathit{Tor}_1^{\scrO_X}(\mathcal{F}, \scrO_H) =0$, then $\mathcal{F}\otimes^{\mathbb{L}}\scrO_H = \mathcal{F}\otimes \scrO_H$. In particular, by Corollary~\ref{KDexactcor}, if $ \codim \Sigma \ge 2k-1$, then $\Omega^p_X\otimes^{\mathbb{L}}\scrO_H = \Omega^p_X\otimes \scrO_H$ for all $p\le k+1$. The following is due to Navarro Aznar \cite[V(2.2.1)]{GNPP}: \begin{proposition}[Navarro Aznar]\label{NAV} If $X$ is an algebraic variety and $H$ is a general element of a base point free linear system on $X$, there is a distinguished triangle $$\underline{\Omega}^{p-1}_H \otimes^{\mathbb{L}} \scrO_H(-H) \xrightarrow{df\wedge} \uOp_X \otimes^{\mathbb{L}}\scrO_H \to \uOp_H \xrightarrow{+1}. \qed$$ \end{proposition} Here the term $\underline{\Omega}^{p-1}_H \otimes^{\mathbb{L}} \scrO_H(-H)$ refers to the tensor product as $\scrO_H$-modules, i.e.\ to $\underline{\Omega}^{p-1}_H \otimes^{\mathbb{L}}_{\scrO_H} \scrO_H(-H)$, and we could write this as $\underline{\Omega}^{p-1}_H \otimes \scrO_H(-H)$ or simply as $\underline{\Omega}^{p-1}_H$ in case $\scrO_H(-H) \cong \scrO_H$. Recall that, in Proposition~\ref{KDexact}, we defined the (not necessarily exact) sequence \begin{equation*} 0 \to \Omega^{p-1}_H \otimes \scrO_H(-H) \to \Omega^p_X|H \to \Omega^p_H \to 0,\tag*{$(*)_p$} \end{equation*} Since the maps in the above distinguished triangle are clearly compatible with the augmentation maps $\phi^p$, we get: \begin{corollary}\label{morphtriangles} If $(*)_p$ is exact, then there is a morphism of distinguished triangles $$\begin{CD} \Omega^{p-1}_H \otimes \scrO_H(-H) @>{df \wedge}>> \Omega^p_X\otimes \scrO_H @>>> \Omega^p_H @>{+1}>> \\ @VVV @VVV @VVV @.\\ \underline{\Omega}^{p-1}_H \otimes^{\mathbb{L}} \scrO_H(-H) @>{df \wedge}>> \uOp_X \otimes^{\mathbb{L}}\scrO_H @>>>\uOp_H @>{+1}>> \end{CD}$$ In particular, if $X$ is $(k-1)$-Du Bois and lci, then $(*)_p$ is exact for $p \le k$ and hence there is a morphism of distinguished triangles as above for $p \le k$. \end{corollary} \begin{proof} We only need to check the final statement. By Theorem~\ref{thmdim}, $\codim \Sigma \geq 2k-1$. Thus, for a general $H$, by Proposition~\ref{KDexact}, the sequence $(*)_p$ is exact for all $p\le k$. \end{proof} The key step is then the following: \begin{theorem}\label{mainkratthm} Let $X$ be a reduced local complete intersection and suppose that there exists a compatible set of $k+1$ left inverses $\{h^p\}_{p=0}^k$. Then $X$ is $k$-Du Bois. \end{theorem} \begin{proof} We will assume that $\Sigma_k(X) = \Sigma_k \neq \emptyset$ and derive a contradiction. As this is a local question, we can assume that $X$ is affine. The proof is by induction both on $k$ and on $\dim \Sigma _k$. The cases $k=0$ and $\dim \Sigma _k =0$ have been proved by Kov\'acs \cite{Kovacs99} or Proposition~\ref{dim=0case}, or follow by starting the induction at $k=-1$. 
Assume inductively that the theorem has been proved for all $j < k$ and for all varieties $H$ with $\dim \Sigma_k(H) < \dim \Sigma _k$. In particular, $X$ is $(k-1)$-Du Bois, and hence $\codim \Sigma \geq 2k-1$. Choose a general $H$ as in Proposition~\ref{NAV}. Here the hypothesis that $H$ is general means in particular that (i) $\dim \Sigma_k \cap H < \dim \Sigma _k$ and (ii) if $\dim \Sigma _k >0$, then $\Sigma_k\cap H \neq \emptyset$. We claim that $\Sigma_k(H) \neq \emptyset$. For otherwise $H$ is $k$-Du Bois, hence $\Omega^p_H \cong \uOp_H$ for all $p\le k$. Then $\underline{\Omega}^k_X \otimes^{\mathbb{L}} \scrO_H\cong \Omega^k _X \otimes^{\mathbb{L}}\scrO_H$ as follows from the morphism of distinguished triangles in Corollary~\ref{morphtriangles}. On the other hand, we have the distinguished triangle $$ \Omega^k _X\otimes^{\mathbb{L}} \scrO_H\to \underline{\Omega}^k_X \otimes^{\mathbb{L}}\scrO_H\to \mathcal{G}^k\otimes^{\mathbb{L}}\scrO_H \xrightarrow{+1} .$$ Since $\scrO_H$ has the projective resolution $\scrO_X \xrightarrow{\times f} \scrO_X$, for all $p\le k$, and all $i$, there is an exact sequence $$ 0 \to Tor_1^{\scrO_X}(\mathcal{H}^i\mathcal{G}^p, \scrO_H) \to \mathcal{H}^i(\mathcal{G}^p\otimes^{\mathbb{L}}\scrO_H) \to (\mathcal{H}^i\mathcal{G}^p )\otimes \scrO_H \to 0.$$ Thus $\displaystyle\bigcup_{\substack{ i, p\\ 0\le p \le k}}\operatorname{Supp} \mathcal{H}^i(\mathcal{G}^p\otimes^{\mathbb{L}}\scrO_H) =\Sigma_k\cap H \neq \emptyset$, and hence $\underline{\Omega}^k_X \otimes^{\mathbb{L}} \scrO_H\to \Omega^k _X \otimes^{\mathbb{L}}\scrO_H$ is not an isomorphism. For all $p\le k$, we claim that we can find a compatible set of $p+1$ left inverses $\{\bar{h}^i\}_{i=0}^p $. Again, we argue by induction on $p$ and the case $p=0$ is clear. For the inductive step, assume that there exists a compatible set of $p$ left inverses $\{\bar{h}^i\}_{i=0}^{p-1}$ for $H$. By the inductive hypothesis, $H$ and $X$ are $(p-1)$-Du Bois, and thus we can take $\bar{h}^i = \Id$ for all $i \le p-1$. Also, as noted above, by Corollary~\ref{KDexactcor}, $\Omega^p_X\otimes^{\mathbb{L}} \scrO_H= \Omega^p_X\otimes \scrO_H$. In what follows, we fix the function $f$ and will omit the factor $\scrO_H(-H)$ as it is trivialized. By the assumption of compatibility, there is a commutative diagram $$\begin{CD} \Omega^{p-1}_X\otimes \scrO_H @>{df \wedge}>> \uOp_X \otimes^{\mathbb{L}}\scrO_H\\ @VV{=}V @VV{h^p\otimes \Id}V \\ \Omega^{p-1}_X\otimes \scrO_H @>{df \wedge}>> \Omega^p_X\otimes \scrO_H. \end{CD}$$ There is an induced diagram $$\begin{CD} \Omega^{p-1}_X\otimes \scrO_H @>>> \Omega^{p-1}_H @>{df \wedge}>> \uOp_X \otimes^{\mathbb{L}}\scrO_H\\ @VV{=}V @VV{=}V @VV{h^p\otimes \Id}V \\ \Omega^{p-1}_X\otimes \scrO_H @>>> \Omega^{p-1}_H @>{df \wedge}>> \Omega^p_X\otimes \scrO_H. \end{CD}$$ Here, the left hand square is commutative and the outer rectangle is commutative. Since all terms except for $\uOp_X \otimes^{\mathbb{L}}\scrO_H$ are sheaves and the morphism $\Omega^{p-1}_X\otimes \scrO_H \to \Omega^{p-1}_H$ is surjective, a straightforward argument shows that the right hand square is commutative as well. There is a diagram of distinguished triangles $$\begin{CD} \underline{\Omega}^{p-1}_H\cong \Omega^{p-1}_H @>{df \wedge}>> \uOp_X \otimes^{\mathbb{L}}\scrO_H @>>>\uOp_H @>{+1}>> \\ @VV{=}V @VV{h^p\otimes \Id}V @. 
@.\\ \Omega^{p-1}_H @>{df \wedge}>> \Omega^p_X\otimes \scrO_H @>>> \Omega^p_H @>{+1}>> \end{CD}$$ Thus, we can complete the diagram by finding $\bar{h}^p$ which makes the diagram commute, and it is automatically a left inverse to the map $\Omega^p_H \to \uOp_H$ since $\Omega^p_X\otimes \scrO_H \to \Omega^p_H$ is surjective. We claim that any such $\bar{h}^p$ is compatible with $\bar{h}^{p-1}=\Id$: Since $\underline{\Omega}^{p-1}_H \cong \Omega^{p-1}_H$, there is a commutative diagram with vertical arrows given by $\alpha\wedge$: $$\begin{CD} \underline{\Omega}^{p-1}_H \cong \Omega^{p-1}_H @>{=}>> \Omega^{p-1}_H \\ @VVV @VVV \\ \uOp_H @>{\bar{h}^p}>> \Omega^p_H \end{CD}$$ As $\bar{h}^p$ is a left inverse, for all $\varphi \in \underline{\Omega}^{p-1}_H \cong \Omega^{p-1}_H $, $\bar{h}^p(\alpha\wedge \varphi) = \alpha\wedge \varphi = \alpha\wedge \bar{h}^{p-1}(\varphi)$. It follows that $\bar{h}^p$ is compatible with $\bar{h}^{p-1}$. This completes the inductive step for $p$. But then, since $\dim \Sigma_k(H) < \dim \Sigma _k$, by the inductive hypothesis $H$ is $k$-Du Bois and hence $\Sigma_k(H) =\emptyset$. This contradicts the statement that $\Sigma_k(H) \neq\emptyset$. \end{proof} The following is then the lci case of Theorem~\ref{kratkDB}: \begin{corollary}\label{kratkDBlci} If $X$ is a $k$-rational algebraic variety with lci singularities, then $X$ is $k$-Du Bois. \end{corollary} \begin{proof} This follows from Lemma~\ref{existsleftinv} and Theorem~\ref{mainkratthm}. \end{proof} In fact, the proof also shows the following result of Musta\c{t}\u{a}-Popa \cite[Theorem 9.17]{MP-loc}: \begin{corollary} If $X$ is an algebraic variety with $k$-Du Bois singularities, then a general complete intersection $H_1\cap \cdots \cap H_a$ of $X$ is $k$-Du Bois. \qed \end{corollary} As an application of Corollary~\ref{kratkDBlci}, we have: \begin{proposition}\label{0.6} Let $f:\cY\to S$ be a flat proper family of complex algebraic varieties of relative dimension $n$ over an irreducible base $S$. For $s\in S$, suppose that the fiber $Y_s$ has $k$-rational lci singularities. Then, for every $t\in S$ such that the fiber $Y_t$ is smooth, $\dim \Gr_F^p H^{p+q}(Y_t) = \dim \Gr_F^{n-p} H^{2n-p-q}(Y_s)$ for every $q$ and for $0\le p \le k$. Equivalently, for such $p$ and $q$, $$h^{p,q}(Y_t) = h^{n-p, n-q}(Y_t) = \uh^{p,q}(Y_s) = \uh^{n-p, n-q}(Y_s).$$ \end{proposition} \begin{proof} By Corollary~\ref{kratkDBlci}, $Y_s$ is $k$-Du Bois. By Theorem~\ref{mainThm}, for $p\le k$, $R^qf_*\Omega^p_{\cY/S}$ is locally free in a neighborhood of $s$ and compatible with base change. Thus, $$\dim \Gr_F^p H^{p+q}(Y_t) = \dim H^q(Y_t; \Omega^p_{Y_t}) = \dim H^q(Y_s; \Omega^p_{Y_s}).$$ Then $\dim \Gr_F^p H^{p+q}(Y_t) = \dim \mathbb{H}^q(Y_s; \bD_{Y_s}(\underline\Omega^{n-p}_{Y_s} ))$. By Grothendieck duality, $\mathbb{H}^q(Y_s; \bD_{Y_s}(\underline\Omega^{n-p}_{Y_s} ))$ is dual to $\mathbb{H}^{n-q}(Y_s; \underline\Omega^{n-p}_{Y_s})$. Computing dimensions gives the result. \end{proof} In fact, combining the above with the Hodge symmetries given by Theorem~\ref{cor-sym2}, we obtain Corollary~\ref{Corkrat} announced in the introduction. This is modeled on \cite[Thm. 1]{KLS} (case $k=0$). Note however that loc.\ cit.\ does not assume lci singularities, and works in the analytic category.
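For instance, in the case $k=0$, Proposition~\ref{0.6} recovers the statement that, if the special fiber $Y_s$ has rational lci singularities, then for every $q$ $$h^{0,q}(Y_t)=\dim H^q(Y_t;\scrO_{Y_t})=\uh^{0,q}(Y_s)=\dim H^q(Y_s;\scrO_{Y_s}),$$ where the last equality uses that $Y_s$ is in particular Du Bois, so that $\uh^{0,q}(Y_s)=\dim \mathbb{H}^q(Y_s;\underline{\Omega}^0_{Y_s})=\dim H^q(Y_s;\scrO_{Y_s})$.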
\bigskip \bigskip \bigskip \appendix \section{Proof of Conjecture \ref{conjDBrat} for hypersurfaces}\label{AppendixA} \par\bigskip \centerline{Morihiko Saito} \smallskip \centerline{\smaller\smaller RIMS Kyoto University, Kyoto 606-8502 Japan} \par\bigskip\noindent We prove that two definitions of higher $k$-rational singularities for hypersurfaces coincide, see Theorem~\ref{Thm-A} below. (The case $k\,{=}\,0$ was treated in \cite{Saito-b}.) This implies a proof of Conjecture~ \ref{conjDBrat} for hypersurfaces using the converse of a theorem of Musta\c t\u a, Olano, Popa, and Witaszek \cite[Thm.\,1.1]{MOPW} (see \cite[Thm.\,1]{JKSY-duBois}). \par\medskip\noindent \begin{theorem}\label{Thm-A} Assume $X$ is a reduced hypersurface of a smooth complex algebraic variety $Y$. Then for $k\in{\mathbb Z}_{>0}$, we have $\widetilde{\alpha}_X>k{+}1$ if and only if $X$ has only $k$-rational singularities. \end{theorem} \begin{proof} Assume $\widetilde{\alpha}_X>k{+}1$. We may assume that $X\subset Y$ is defined by a function $f$ shrinking $X,Y$ if necessary. Since $\widetilde{\alpha}_X>k{+}1$, we have by \cite[Thm.\,2]{JKSY-duBois} the canonical isomorphism $$\underline{\Omega}_X^{d_X-k}=\sigma_{\,{\geqslant}\, 0}\bigl(\Omega_Y^{\scriptscriptstyle\bullet}[d_Y{-}k]|_X,{\rm d} f\wedge\bigr), \leqno{\rm(A1)}$$ where $d_Y:=\dim Y$. Applying the functor ${\mathbb D}$, we get the isomorphisms $$\aligned&{\mathbb D}(\underline{\Omega}_X^{d_X-k})={\mathbb D}\bigl(\sigma_{\,{\geqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[d_Y{-}k]|_X,{\rm d} f\wedge)\bigr)\\ &=C\bigl(f:\sigma_{\,{\leqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[k],{\rm d} f\wedge)\to\sigma_{\,{\leqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[k],{\rm d} f\wedge)\bigr)[d_Y{-}1],\endaligned \leqno{\rm(A2)}$$ since ${\mathbb D}(L)=L^{\vee}{\otimes}_{{\mathcal O}_Y}\omega_Y[d_Y]$ for a locally free ${\mathcal O}_Y$-module $L$ in general. \par\smallskip It is well known that $${\mathcal H}^j(\Omega_Y^{\scriptscriptstyle\bullet},{\rm d} f\wedge)=0\quad\hbox{if}\quad j<{\rm codim}_Y{\rm Sing}\,X, \leqno{\rm(A3)}$$ $$\widetilde{\alpha}_X<\tfrac{1}{2}\,{\rm codim}_Y{\rm Sing}\,X, \leqno{\rm(A4)}$$ see for instance \cite[Prop.\,1--2]{JKSY-duBois}. These imply the quasi-isomorphism $$\sigma_{\,{\leqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[k],{\rm d} f\wedge)\,\,\rlap{\hskip1.5mm\raise1.4mm\hbox{$\sim$}}\hbox{$\longrightarrow$}\,\,\Omega_Y^k/{\rm d} f{\wedge}\Omega_Y^{k-1}, \leqno{\rm(A5)}$$ together with the injection $$\Omega_Y^k/{\rm d} f{\wedge}\Omega_Y^{k-1}\hookrightarrow\Omega_Y^{k+1}, \leqno{\rm(A6)}$$ which gives $f$-torsion-freeness of $\Omega_Y^k/{\rm d} f{\wedge}\Omega_Y^{k-1}$. We thus get the canonical isomorphism $${\mathbb D}(\underline{\Omega}_X^{d_X-k})[-d_X]=\Omega_X^k. \leqno{\rm(A7)}$$ Here $k$ can be replaced by any $j\in[0,k{-}1]$. So $X$ has only $k$-rational singularities. \par\smallskip Assume now $X$ has only $k$-rational singularities. This means that the composition $$\Omega_X^k\to\underline{\Omega}_X^k\to{\mathbb D}(\underline{\Omega}_X^{d_X-k})[-d_X] \leqno{\rm(A8)}$$ is an isomorphism (hence $\Omega_X^k$ is a direct factor of $\underline{\Omega}_X^k$) and the same holds with $k$ replaced by any $j\in[0,k{-}1]$. We will consider the morphisms obtained by applying the functor ${\mathbb D}$ to these morphisms. We argue by induction on $k$. Note that $$\widetilde{\alpha}_X>k, \leqno{\rm(A9)}$$ since $X$ has only $(k{-}1)$-rational singularities by definition. 
This implies that $$k{+}1<{\rm codim}_Y{\rm Sing}\,X, \leqno{\rm(A10)}$$ using (A4), since ${\rm codim}_Y{\rm Sing}\,X\,{\geqslant}\, 2$. By the same argument as above, we then get that $${\mathbb D}(\Omega_X^k)[-d_X]=\sigma_{\,{\geqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[d_Y{-}k]|_X,{\rm d} f\wedge). \leqno{\rm(A11)}$$ \par\smallskip On the other hand we have by \cite[Thm.~4.2]{Saito2000} $$\underline{\Omega}_X^k={\rm Gr}_F^k{\rm DR}\bigl({\mathbb Q}_{h,X}[d_X]\bigr)[k{-}d_X], \leqno{\rm(A12)}$$ hence $${\mathbb D}(\underline{\Omega}_X^k)[-d_X]={\rm Gr}_F^{d_X-k}{\rm DR}\,{\mathbb D}\bigl({\mathbb Q}_{h,X}(d_X)[d_X]\bigr)[d_X{-}k]. \leqno{\rm(A13)}$$ By the theory of Hodge ideals (see \cite{MP-Mem}, \cite{SaitoV}, \cite{JKSY}, \cite{JKSY-duBois}) and using (A9), we can get the isomorphism $${\mathbb D}(\underline{\Omega}_X^k)[-d_X]\cong K^{(k),\scriptscriptstyle\bullet}\subset\sigma_{\,{\geqslant}\, 0}(\Omega_Y^{\scriptscriptstyle\bullet}[d_Y{-}k]|_X,{\rm d} f\wedge), \leqno{\rm(A14)}$$ where $K^{(k),j}:=\Omega_Y^{j+d_Y-k}|_X$ if $j\ne k$, and $K^{(k),k}:=I_k(X)\Omega_Y^{d_Y}/f\Omega_Y^{d_Y}$ with $I_k(X)$ the Hodge ideal. \par\smallskip If $\widetilde{\alpha}_X\,{\in}\,(k,k+1)$ so that $I_k(X)\ne{\mathcal O}_Y$ (see for instance \cite[Cor.\,1]{SaitoV}), then by (A11), (A14) the canonical morphism $${\mathcal H}^{k-d_X}{\mathbb D}(\underline{\Omega}_X^k)\to{\mathcal H}^{k-d_X}{\mathbb D}(\Omega_X^k) \leqno{\rm(A15)}$$ is never surjective. Indeed, since $\Omega_X^k$ is a direct factor of $\underline{\Omega}_X^k$, the mapping cone of a morphism $\phi\,{:}\,\Omega_X^k\,{\to}\,\underline{\Omega}_X^k$ is independent of $\phi$ as long as it induces an isomorphism on $X\,{\setminus}\,{\rm Sing}\,X$. Note that ${\mathcal H}^0_{{\rm Sing}\,X}\Omega_X^k\,{=}\, 0$, since the proof of Prop.\,2.2 in \cite{JKSY-duBois} holds also for $q\,{=}\, p{+}1$ (where the last inequality in the proof of Prop.\,2.2 becomes $q{+}1\,{=}\, p{+}2<{\rm codim}_Y{\rm Sing}\,X$). Hence the dual of the composition (A8) cannot be an isomorphism. \par\smallskip Assume now $\widetilde{\alpha}_X\,{=}\,k{+}1$. (Here $\Omega_X^k\,{=}\,\underline{\Omega}_X^k$, see \cite{MOPW}, \cite{JKSY-duBois}.) The canonical isomorphism (A1) and the second morphism of (A8) are induced by the canonical morphism of mixed Hodge modules $${\mathbb Q}_{h,X}[d_X]\to{\mathbb D}\bigl({\mathbb Q}_{h,X}(d_X)[d_X]\bigr), \leqno{\rm(A16)}$$ see \cite[3.1]{JKSY-duBois}. Note that this coincides with the composition of $${\mathbb Q}_{h,X}[d_X]\,{\to}\,\rho_*{\mathbb Q}_{h,\widetilde{X}}[d_X]$$ with its dual, where $\rho\,{:}\,\widetilde{X}\,{\to}\, X$ is a desingularization. Let $(M',F),(M'',F)$ be the underlying filtered ${\mathcal D}_Y$-modules of its kernel and cokernel respectively. Then the condition $\widetilde{\alpha}_X\,{=}\, k{+}1$ implies that $$\min\{p\in{\mathbb Z}\mid F_pM'\ne 0\}>d_X{-}k,\quad\min\{p\in{\mathbb Z}\mid F_pM''\ne 0\}=d_X{-}k, \leqno{\rm(A17)}$$ using \cite[(1.3.2--4)]{SaitoV} and \cite[3.1]{JKSY-duBois} together with the $N$-primitive decomposition, see for instance \cite[(2.2.4)]{KLS}. We then see that the morphism (A16) {\it cannot} induce an isomorphism $${\rm Gr}_F^{d_X-k}{\rm DR}\bigl({\mathbb Q}_{h,X}[d_X]\bigr)\,\,\rlap{\hskip1.5mm\raise1.4mm\hbox{$\sim$}}\hbox{$\longrightarrow$}\,\,{\rm Gr}_F^{d_X-k}{\rm DR}\bigl({\mathbb D}\bigl({\mathbb Q}_{h,X}(d_X)[d_X]\bigr)\bigr). \leqno{\rm(A18)}$$ This means that the composition (A8) cannot be an isomorphism in view of (A12--A13). This is a contradiction. 
We thus get $\widetilde{\alpha}_X>k{+}1$. This finishes the proof of Theorem \ref{Thm-A}. \end{proof} Combining Theorem \ref{Thm-A} with the converse of a theorem of Musta\c t\u a, Olano, Popa, and Witaszek \cite[Thm.\,1.1]{MOPW} (see \cite[Thm.\,1]{JKSY-duBois}), we obtain a positive answer to Conjecture~\ref{conjDBrat} for hypersurfaces as follows. \begin{corollary}\label{Cor-A} Assume $X$ is a reduced hypersurface of a smooth complex algebraic variety $Y$, and has only $k$-Du Bois singularities $(k\,{\geqslant}\, 1)$. Then $X$ has only $(k{-}1)$-rational singularities. \end{corollary} \begin{remark}In \cite{JKSY-duBois} an assertion slightly stronger than the converse of \cite[Thm.\,1.1]{MOPW} is proved (since it is not assumed that the isomorphism is induced by the canonical morphism). The assertion can be proved rather easily using the extension of \cite[Prop.\,2.2]{JKSY-duBois} to the case $q\,{=}\, p{+}1$ as is explained after (A15) and assuming that the restriction of the isomorphism to the smooth points of $D$ is the identity. Indeed, the argument in \cite[2.3]{JKSY-duBois} is rather complicated precisely because even this natural condition is not assumed. \end{remark} \bibliography{duBois} \end{document}
2205.04660v2
http://arxiv.org/abs/2205.04660v2
A representation-theoretic computation of the rank of $1$-intersection incidence matrices: $2$-subsets vs. $n$-subsets
\documentclass{amsart} \usepackage[utf8]{inputenc} \usepackage{mathtools, amsmath, amssymb} \usepackage{ytableau} \numberwithin{equation}{section} \setcounter{secnumdepth}{4} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}{Corollary}[section] \newcommand{\XX}{{\mathcal{X}}} \newcommand{\spechtj}[2]{{S^{(m-#1, #2)(m-#1, #1)}}} \newcommand{\specht}[1]{{S^{(m-#1, #1)}}} \newcommand{\spechtd}[1]{{D^{(m-#1, #1)}}} \newcommand{\Mperm}[1]{{M^{(m-#1, #1)}}} \newcommand{\allone}{{\mathbf{1}}} \DeclareMathOperator{\ch}{char} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\GL}{GL} \title[$1$-intersection ranks of $2$-subsets vs. $n$-subsets]{A representation-theoretic computation of the rank of $1$-intersection incidence matrices: $2$-subsets vs. $n$-subsets.} \author[Ducey]{Joshua E. Ducey} \author[Sherwood]{Colby J. Sherwood} \address{Dept.\ of Mathematics and Statistics, James Madison University, Harrisonburg, VA 22807, USA} \email{[email protected]} \email{[email protected]} \keywords{incidence matrix, Smith normal form, p-rank, representation theory} \subjclass[2020]{05E18,20C30} \begin{document} \begin{abstract} Let $W_{k,n}^{i}(m)$ denote a matrix with rows and columns indexed by the $k$-subsets and $n$-subsets, respectively, of an $m$-element set. The row $S$, column $T$ entry of $W_{k,n}^{i}(m)$ is $1$ if $|S \cap T| = i$, and is $0$ otherwise. We compute the rank of the matrix $W_{2,n}^{1}(m)$ over any field by making use of the representation theory of the symmetric group. We also give a simple condition under which $W_{k,n}^{i}(m)$ has large $p$-rank. \end{abstract} \maketitle \section{Introduction} Incidence matrices are interesting objects that encode a relation between two finite sets into a zero-one matrix that can then be studied algebraically. Properties of the matrix that are unchanged by the different ways the matrix can be constructed become properties of the relation itself. One example of such an invariant is the rank of the matrix, or more generally the elementary divisors of the matrix. In the case that the two sets are the same and the relation describes adjacency of vertices in a graph, the spectrum of the matrix is another invariant. In this paper we will be interested in the rank. Note that the incidence matrix can be defined over any field, and the answer can depend on the field's characteristic. Given nonnegative integers $k \leq n \leq m$ and a fixed set $\XX$ of size $m$, one can define an $\binom{m}{k} \times \binom{m}{n}$ incidence matrix $W_{k,n}^{i}(m)$ as follows. Let the rows of $W_{k,n}^{i}(m)$ be indexed by the $k$-element subsets ($k$-subsets, for short) of $\XX$, the columns be indexed by the $n$-subsets of $\XX$, and let the row $S$, column $T$ entry of $W_{k,n}^{i}(m)$ be $1$ if $|S \cap T| = i$, and $0$ otherwise. These subset-intersection matrices describe fundamental relations and are naturally interesting. Suppose for a moment that we have $i = k$; the matrix $W_{k,n}^{k}(m)$ is the well-studied \textit{inclusion matrix} of $k$-subsets vs.\ $n$-subsets. Mathematicians have been interested in this matrix since the 1960s, when it was shown that $W_{k,n}^{k}(m)$ has full rank over the rational numbers \cite{gottlieb}. Later, the rank of $W_{k,n}^{k}(m)$ was calculated over any field of characteristic $2$ (the $2$-rank, for short) and the $3$-rank was computed when $n=k+1$ \cite{LR}. 
The problem was completely solved when Wilson \cite{wilson:diagonal} found a beautiful diagonal form for the inclusion matrices (the $p$-rank is the number of diagonal entries of Wilson's form not divisible by $p$). Further refinements to Wilson's result were made in \cite{frankl} and \cite{bier}. The study of inclusion matrices has applications to the theory of designs. For much information and more history of these matrices see \cite[Section 10]{qing} and \cite{plaza}. For a recent application to computing integer invariants of the $n$-cube graph, see \cite{CXS:cube}. We now switch our attention to the situation where $i < k$. When $i=0$ the matrices $W_{k,n}^{0}(m)$ describe the \textit{disjointness} relation. However, note that a $k$-subset $S$ being disjoint from an $n$-subset $T$ happens precisely when $S \subseteq \XX \backslash T$. So we have $W_{k,n}^{0}(m) = W_{k,m-n}^{k}(m)$ and thus the ranks of the disjointness matrices are also known. When $0<i<k$ much less is known. In \cite[Example SNF3]{eijl} the authors solve the rank problem for the matrix $W_{2,2}^{1}(m)$ by finding a diagonal form (the Smith normal form in that case). The matrix $W_{2,2}^{1}(m)$ can also be thought of as an adjacency matrix of the triangular graph $T(m)$, and if one instead considers the Laplacian matrix, the rank problem has been solved for both $T(m)$ \cite{line} and its complement (the Kneser graph on $2$-subsets) \cite{kneser2}. In \cite{wong}, a diagonal form is found for a very general class of matrices $N_{2}(G)$. The rows of $N_{2}(G)$ are indexed by the $2$-subsets of a set of size $m$, the first column is the characteristic vector of the edges of a graph $G$ on $m$ vertices, and the rest of the columns come from the action of the symmetric group on this first column. When $G$ is the complete bipartite graph $K_{n,m-n}$ the matrix $N_{2}(G)$ becomes $W_{2,n}^{1}(m)$, and from the result \cite[Theorem 17]{wong} the rank of $W_{2,n}^{1}(m)$ follows. The purpose of this paper is to give an alternative computation of the rank of $W_{2,n}^{1}(m)$ by using the representation theory of the symmetric group $G_{m}$. There are several reasons why one would want to do this. Representation theory has already been shown to be a powerful tool for problems such as computing $p$-ranks and Smith normal forms. Some successes in this respect, including the $q$-analogue of this problem (that is, various intersection relations of subspaces of a vector space, studied via the representation theory of $\GL(n,q)$), can be found in \cite{1spaces, skew, frumkin, kneser2, jolliffe, raghu}. Whether intentionally or not, in the various statements and hypotheses of theorems in the works on subsets one can find standard Young tableaux, proper partitions, and other reflections of the representation theory of $G_{m}$. There has been particular interest in recasting the impressive matrix methods (fronts, shadows, etc.) of Wilson and Wong \cite{wilson:diagonal, wong} in the hope that some light might be shed on other incidence problems of subsets. The recent paper \cite{jolliffe} makes progress on this for the inclusion/disjointness matrices $W_{k,n}^{0}(m)$; we now give attention to the $1$-intersection matrices of $2$-subsets vs. $n$-subsets. Furthermore, representation theory provides an organized setting to frame other related problems.
Incidence matrices of subset inclusion, disjointness, intersection in a fixed size; the Laplacian matrices (and others) of Kneser-type graphs; these all represent $FG_{m}$-module homomorphisms, and understanding of ranks (or Smith normal forms) of these matrices can come from a sufficient understanding of the module structure of the domain and codomain. In particular, the incidence problems concerning $3$-subsets of a set seem to have not been handled before. Although the complexity of the modules will increase, the manner of investigation remains the same. The papers \cite{frumkin, jolliffe, kneser2} make use of the modular representation theory techniques expounded by James \cite{james}. This is the approach we will take here. The next section will review the basic ideas that we need. Section \ref{sec:calc} will provide some useful calculations that will be used repeatedly, and in Section \ref{sec:cases} we state and prove our theorem. In the final Section \ref{sec:conclude} we give a lemma that may be used to investigate the general subset-intersection matrix $W_{k,n}^{i}(m)$. We also identify a simple condition that forces this matrix to have large rank. \section{Representation Theory of Symmetric Groups} Suppose $\lambda = (\lambda_1, \lambda_2, \lambda_3,...)$ is a partition of $m$. That is, $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq ...\geq 0$ and $\sum_k \lambda_k = m$. A $\lambda$-tableau is an array of the integers from $1$ to $m$ without repeats where the $k$-th row has $\lambda_k$ entries. For example, if $\lambda = (3,2,1)$, the following are $\lambda$-tableaux. \vspace{.2 in} \begin{center} \begin{ytableau} 1 & 2 & 3 \\ 4 & 5 \\ 6 \end{ytableau} \hspace{.2 in} \begin{ytableau} 4 & 2 & 3 \\ 6 & 1 \\ 5 \end{ytableau} \hspace{.2 in} \begin{ytableau} 6 & 1 & 5 \\ 2 & 4 \\ 3 \end{ytableau} \end{center} \vspace{.2 in} For a given $\lambda$-tableau, $t$, we define its \textit{row stabilizer}, $R_t$ to be the set of $\sigma \in G_m$, where $G_m$ is the symmetric group on $m$ elements, that keep the rows of $t$ fixed set-wise. We now define the tabloid, $\{t\}$, to be the equivalence class of $t$ under the relation $t_1 \sim t_2$ if $t_1 = \sigma t_2$ for some $\sigma \in R_{t_1}$. We now as in \cite{jolliffe} define the $j$-\textit{column stabilizer} of $t$, denoted $C_t^j$, to be the set of permutations that fix each of the first $j$ columns of $t$ set-wise, and fix the remaining symbols in $t$ point-wise. Note that when $j \geq \lambda_2$, the $j$-column stabilizer fixes all columns set-wise; it is then called the \textit{column stabilizer} of $t$ and is denoted $C_t$. Now fix a field $F$ and consider the group algebra $FG_{m}$. Let $M^{\lambda}$ denote the $FG_{m}$-permutation module with basis the $\lambda$-tabloids. Define the $j$-\textit{polytabloid} for tableau $t$, denoted $e_t^j$, as follows: $$e_t^j = \kappa_t^j\{t\}$$ where $$\kappa_t^j = \sum_{\sigma \in C_t^j} (-1)^{\sigma}\sigma$$ and $(-1)^{\sigma}$ is $1$ or $-1$ when $\sigma$ is even or odd, respectively. When $C_t^j = C_t$, we call a $j$-polytabloid $e_{t}^{j}$ a \textit{polytabloid} and denote it by $e_{t}$. For example, let us consider the following tableau for the partition $\lambda=(4,2)$. 
\vspace{.2 in} $t = $ \begin{ytableau} 1 & 2 & 3 & 4 \\ 5 & 6 \\ \end{ytableau} \vspace{.2 in} $\{t\} =$ \ytableausetup {boxsize=normal,tabloids} \ytableaushort{ 1234, 56 } $=$ \ytableaushort{ 2134, 65 } $=$ \ytableaushort{ 1243, 56 } \vspace{.2 in} $e_t^0 = \{t\}$ \vspace{.2 in} $e_t^1 =$ \ytableaushort{ 1234, 56 } $-$ \ytableaushort{ 5234, 16 } \vspace{.2 in} $e_t^2 = e_t =$ \ytableaushort{ 1234, 56 } $-$ \ytableaushort{ 5234, 16 } $-$ \ytableaushort{ 1634, 52 } $+$ \ytableaushort{ 5634, 12 } \vspace{.2 in} It is important to realize that the $j$-polytabloid for $t$ depends on the tableau $t$, not the tabloid $\{t\}$. For our purposes we will restrict our attention to partitions of the form $\lambda = (m-i, i)$ where $i \leq m-i$. Any $(m-i,i)$-tabloid is determined by the $i$-subset of $\XX = \{1, 2, \cdots, m\}$ in its second row, and for convenience of notation we will often identify the two. Let $S^{(m-i, j)(m-i,i)}$ be the submodule of $M^{(m-i,i)}$ spanned by $j$-polytabloids. Notice that $\specht{i} \coloneqq S^{(m-i,i)(m-i, i)}$ is the span of polytabloids; this is the \textit{Specht module} for the partition $(m-i,i)$. We have the following filtration of $\Mperm{i}$: \[ M^{(m-i,i)} \supseteq S^{(m-i,1)(m-i,i)} \supseteq S^{(m-i,2)(m-i,i)} \supseteq \cdots \supseteq S^{(m-i,i)(m-i,i)}=S^{(m-i,i)}\supseteq 0. \] The reader can consult \cite{james}, from which our notation is taken, for the general theory. It can also be shown (\cite[Chapter 17]{james}) that successive quotients of this submodule chain are isomorphic to Specht modules. That is, \begin{equation}\label{eqn:spechtseries} S^{(m-i, j)(m-i,i)}/S^{(m-i,j+1)(m-i,i)} \cong S^{(m-j, j)}. \end{equation} The Specht modules above can be defined over any field $F$. When $\ch(F) = 0$ they are irreducible, but this is not always the case when $F$ has positive characteristic. When the indexing partition $\lambda = (m-i,i)$ has two parts and $2i < m$, it turns out that the Specht module $\specht{i}$ has a unique maximal submodule with simple quotient denoted $\spechtd{i}$. Furthermore, the only other possible composition factors of $\specht{i}$ are $\spechtd{j} (0 \leq j < i)$, and these occur with multiplicity zero or one depending on both the characteristic $p$ and $m$. We collect this important information below, which is a special case of Theorem 24.15 in \cite{james}. \begin{theorem}[\cite{james}, Theorem 24.15]\label{thm:james} Let $F$ be a field with $\ch(F) = p$. Suppose $m>2i$ and let $\spechtd{i}$ denote the simple head of the Specht module $\specht{i}$ defined over $F$. Let $[\specht{i}:\spechtd{j}]$ denote the multiplicity of $\spechtd{j}$ as a composition factor of $\specht{i}$. Then \begin{enumerate} \item $S^{(m)} = D^{(m)}$. \item $[\specht{1} : D^{(m)}] = 1$ if $m \equiv 0 \pmod{p}$, $0$ otherwise. \item $[\specht{2}:\spechtd{1}] = 1$ if $m \equiv 2 \pmod{p}$, $0$ otherwise. \item Let $p>2$. Then $[\specht{2} : D^{(m)}] = 1$ if $m \equiv 1 \pmod{p}$, $0$ otherwise. \item Let $p=2$. Then $[\specht{2} : D^{(m)}] = 1$ if $m \equiv 1 \mbox{ or } 2 \pmod{4}$, $0$ otherwise. \end{enumerate} \end{theorem} \section{Some calculations}\label{sec:calc} Let $0 \leq k \leq n \leq \frac{1}{2}m$ and consider the map $\rho_{n,k} \colon M^{(m-n, n)} \rightarrow M^{(m-k, k)}$ that sends an $(m-n,n)$-tabloid $\{t\}$ to the sum of the $(m-k,k)$-tabloids that each have in their second row exactly one symbol in common with the second row of $\{t\}$. 
In other words, an $n$-subset maps to the sum of the $k$-subsets that intersect it in a set of size $1$; we may consider this to be the map defined by the $1$-intersection matrix $W_{k,n}^{1}(m)$. Fixing a field $F$, the domain and codomain are $FG_{m}$-permutation modules and we will compute the rank of this $FG_{m}$-homomorphism when $k=2$. When a $2$-subset intersects an $n$-subset in a set of size $1$, it also intersects the complement of the $n$-subset in a set of size $1$. Thus we have $W_{2,n}^{1}(m) = W_{2,m-n}^{1}(m)$, and so we lose no generality by our assumption that $n \leq \frac{1}{2}m$. We will make this assumption throughout the paper. Let $\psi_{k,j} \colon M^{(m-k, k)} \rightarrow M^{(m-j, j)}$ denote the \textit{inclusion} map; that is, the map that sends an $(m-k,k)$-tabloid $\{s\}$ to the sum of the $(m-j,j)$-tabloids that each have as their second row a subset of the second row of $\{s\}$. The two lemmas below collect some useful calculations that will be used repeatedly. In Lemma \ref{lem:maps3} we give a more general statement of the lemma below that applies to all the incidence matrices $W_{k,n}^{i}(m)$; however, for clarity, here we state and prove the special case related to the maps we are considering. We again remind the reader that for notational convenience we often identify an $(m-k,k)$-tabloid with the $k$-subset of elements in its second row. \begin{lemma}\label{lem:maps1} Suppose $2 \leq n \leq \frac{1}{2}m$. Let $t = \ytableausetup{smalltableaux, notabloids} \begin{ytableau} b & d & \cdots & & \cdots & \\ a & c & \cdots & \end{ytableau}$ be an $(m-n,n)$-tableau. \begin{enumerate} \item \label{eqns:map0} $e_{t}^{0} = \{t\} \xmapsto{\rho_{n,2}} \sum_{|\{s\}\cap \{t\}|=1} \{s\} \xmapsto{\psi_{2,0}} n(m-n)\emptyset$. \item $e_t^{1} \xmapsto{\rho_{n,2}} \sum_{\{s\} \in \mathcal{A}} e_{s}^{1} + \sum_{\{s^{\prime}\} \in \mathcal{B}} e_{s^{\prime}}^{1} \xmapsto{\psi_{2,1}} (m-2n)e_{\begin{ytableau} b & & & & \cdots & \\ a \end{ytableau}}$ , \\where $\mathcal{A}$ consists of the $2$-subsets that meet $\{t\}$ in exactly one element and contain $a$ but not $b$, $\mathcal{B}$ consists of the $2$-subsets that meet $\{t\}$ in exactly one element and contain $b$ but not $a$, and $s$ (resp. $s^{\prime}$) is a tableau representing such a $2$-subset with $b$ (resp. $a$) in the top-left position. \item $e_{t}^{2} \xmapsto{\rho_{n,2}} 2e_{\begin{ytableau} b & c & & & \cdots & \\ a & d \end{ytableau}}$. \end{enumerate} \end{lemma} \begin{proof} For part $(1)$ of the lemma, notice that the inclusion map $\psi_{2,0}$ sends each $2$-subset to the empty set (that is, the tabloid with only one row). The number of $2$-subsets sharing exactly one element with the $n$-subset $\{t\}$ is $n(m-n)$. For part $(2)$, we have \begin{equation} \rho_{n,2}(e_{t}^{1}) = \kappa_{t}^{1}\rho_{n,2}(\{t\}) = \kappa_{t}^{1}\left ( \sum_{|\{s\}\cap \{t\}|=1} \{s\} \right ) \end{equation} and since $\kappa_{t}^{1} = \left( 1 - (ab) \right)$, the only terms in this sum not killed by $\kappa_{t}^{1}$ are those $2$-subsets in $\mathcal{A}$ or $\mathcal{B}$. If we represent such a $2$-subset with a tableau $s$ that has both $a$ and $b$ in the first column, then $\kappa_{t}^{1} = \kappa_{s}^{1}$ and so \begin{equation}\label{eqn:sum} \rho_{n,2}(e_{t}^{1}) = \sum_{\{s\} \in \mathcal{A}} e_{s}^{1} + \sum_{\{s^{\prime}\} \in \mathcal{B}} e_{s^{\prime}}^{1}.
\end{equation} Notice that if $s = \begin{ytableau} b & c & & & \cdots & \\ a & d \end{ytableau}$ then we have (omitting the first row of the tabloid): \[ e_{s}^{1} = \overline{ad} - \overline{bd} \] and $\psi_{2,1}(e_{s}^{1}) = \overline{a} - \overline{b}= e_{\begin{ytableau} b & & & & \cdots & \\ a \end{ytableau}}$. Thus each term in the first sum of Equation \ref{eqn:sum} is mapped by $\psi_{2,1}$ to $e_{\begin{ytableau} b & & & & \cdots & \\ a \end{ytableau}}$ and in the same way one sees that each term in the second sum maps to $e_{\begin{ytableau} a & & & & \cdots & \\ b \end{ytableau}}= -e_{\begin{ytableau} b & & & & \cdots & \\ a \end{ytableau}}.$ Part $(2)$ of the lemma now follows by counting the number of terms in each sum: the first sum has $m-n-1$ terms and the second has $n-1$, so the coefficient is $(m-n-1)-(n-1)=m-2n$. Finally, to prove part $(3)$ of the lemma, notice that $\kappa_{t}^{2} = (1-(ab))(1-(cd))$. Therefore \[ \rho_{n,2}(e_{t}^{2}) = (1-(ab))(1-(cd))\left ( \sum_{|\{s\}\cap \{t\}|=1} \{s\} \right ). \] One easily sees that the only terms in this sum not killed by $\kappa_{t}^{2}$ are $\overline{ad}$ and $\overline{bc}$. Thus \begin{align*} \rho_{n,2}(e_{t}^{2}) &= \kappa_{t}^{2}\left( \overline{ad} + \overline{bc} \right)\\ &= e_{\begin{ytableau} b & c & & & \cdots & \\ a & d \end{ytableau}} + e_{\begin{ytableau} a & d & & & \cdots & \\ b & c \end{ytableau}}\\ &= 2e_{\begin{ytableau} b & c & & & \cdots & \\ a & d \end{ytableau}} \end{align*} \end{proof} The lemma above gives information about the images under $\rho_{n,2}$ of $j$-polytabloids in $\Mperm{n}$. The next lemma describes what the inclusion maps $\psi_{2,j}$ do to the image of $\rho_{n,2}$ in $\Mperm{2}$. \begin{lemma}\label{lem:maps2} Let $\{t\}$ be an $n$-subset of $\XX$. Set $Y = \rho_{n,2}(\{t\})$. Note we may view $Y$ as the element of $\Mperm{2}$ represented by a column of $W_{2,n}^{1}(m)$. \begin{enumerate} \item $\psi_{2,0}(Y) = n(m-n)\emptyset$. \item $\psi_{2,1}(Y) = (m-n)\sum_{\{z\} \subset \{t\}} \{z\} + n\sum_{\{z^{\prime}\} \not\subset \{t\}} \{z^{\prime}\}$. \end{enumerate} \end{lemma} \begin{proof} Part (1) is just a restatement of part $(1)$ of Lemma \ref{lem:maps1}. To see part $(2)$, notice that if a $1$-subset $\{z\}$ is contained in the $n$-subset $\{t\}$, then there are $(m-n)$ $2$-subsets that simultaneously contain $\{z\}$ and meet $\{t\}$ in a singleton (that singleton necessarily being $\{z\}$). If $\{z\} \not\subset \{t\}$, then there are $n$ $2$-subsets that contain $\{z\}$ and meet $\{t\}$ in a singleton. \end{proof} One more result we need is a consequence of the well-known hook length formula \cite[Theorem 20.1]{james}. \begin{theorem}\label{thm:hook} Over any field, the dimension of the Specht module $\specht{j}$ is \[ \binom{m}{j} - \binom{m}{j-1}, \] where $\binom{m}{-1}=0$. \end{theorem} We now describe how we will compute the rank of $W_{2,n}^{1}(m)$. Let $F$ be a field. The matrix $W_{2,n}^{1}(m)$ represents the $FG_{m}$-module homomorphism \[ \rho_{n,2} \colon \Mperm{n} \to \Mperm{2}, \] so we will compute the dimension of $\im{\rho_{n,2}}$. As mentioned previously, there is a Specht series \[ \Mperm{2} \supseteq \spechtj{2}{1} \supseteq \spechtj{2}{2} = \specht{2} \supseteq \{0\} \] with successive quotients isomorphic to Specht modules. To be precise, the inclusion $\psi_{2,j}$ maps $\spechtj{2}{j}$ onto $\specht{j}$, and $\ker{\psi_{2,j}} \cap \spechtj{2}{j} = \spechtj{2}{j+1}$. Following the idea in \cite{jolliffe}, we set $P^{j} \coloneqq \im{\rho_{n,2}} \cap \spechtj{2}{j}$, for $0 \leq j \leq 2$.
Then \[ \im{\rho_{n,2}} = P^0 \supseteq P^{1} \supseteq P^{2} \supseteq \{0\} \] is a filtration for $\im(\rho_{n,2})$ and by the second isomorphism theorem each quotient $L^{j} \coloneqq P^{j}/P^{j+1}$ is isomorphic to a submodule of $\specht{j}$. We indicate this situation with the picture \begin{center} \begin{tabular}{lll} & & $L^{0}$ \\ $\im{\rho_{n,2}}$ & $\sim$ & $L^{1}$ \\ & & $L^{2}$ \end{tabular}. \end{center} It turns out that if we replace $\rho_{n,2}$ with the inclusion map $\psi_{n,2}$, each of these quotients is either zero or the full Specht module \cite{jolliffe}. The situation with size-$1$ intersection is a bit more subtle since $L^{j}$ can be a proper submodule of $\specht{j}$. However, we will be able to work out what is going on with the help of Theorem \ref{thm:james} that describes the composition factors of $\specht{j}$. We emphasize once more that what Lemma \ref{lem:maps1} tells us is that $j$-polytabloids in $\Mperm{n}$ are mapped by $\rho_{n,2}$ to elements of $\spechtj{2}{j}$ in $\Mperm{2}$. These images are representatives of elements of $L^{j}$, and we can identify the element by further applying $\psi_{2,j}$. \section{The Main Result}\label{sec:cases} Let $2 \leq n \leq m$. Recall that we may assume $2n \leq m$. Here is the main result of this paper. \begin{theorem} Let $2 \leq n \leq \frac{1}{2}m$. Let $F$ be a field and view the entries of $W_{2,n}^{1}(m)$ as coming from $F$. The rank of $W_{2,n}^{1}(m)$ is given in the following table: \begin{center} \begin{tabular}{l|l} Case & Rank \\ \hline $\ch{F} =0$, $2n<m$ & $m(m-1)/2$\\ $\ch{F} = 0$, $2n=m$ & $(m-1)(m-2)/2$\\ $\ch{F}=2$, $m$ odd & $m-1$\\ $\ch{F} = 2$, $m$ even, $n$ even & $m-2$\\ $\ch{F} = 2$, $m$ even, $n$ odd & $m-1$\\ $\ch{F} = p>2$, $p \nmid m-2n$, $p \nmid n(m-n)$ & $m(m-1)/2$\\ $\ch{F} = p>2$, $p \nmid m-2n$, $p \mid n(m-n)$ & $(m+1)(m-2)/2$\\ $\ch{F} = p>2$, $p \mid m-2n$, $p \mid m$ & $m(m-3)/2$\\ $\ch{F} = p>2$, $p \mid m-2n$, $p \nmid m$ & $(m-1)(m-2)/2$. \end{tabular} \end{center} \end{theorem} We will spend the rest of this section proving the theorem. For our first and simplest case we consider: \subsection{Case: $\ch{F}=0$ and $2n<m$.} \hfil \\ Since all of $n(m-n)$, $m-2n$, and $2$ are nonzero, we see from Lemma \ref{lem:maps1} that each of the $L^{j}$ contains a polytabloid that generates the full Specht module. By Theorem \ref{thm:hook} the dimension of $\im{\rho_{n,2}}$ is \begin{align*} &\dim{L^{0}} + \dim{L^{1}} + \dim{L^{2}}\\ &= \dim{S^{(m)}} + \dim{\specht{1}} + \dim{\specht{2}}\\ &= \binom{m}{0} - \binom{m}{-1}+\binom{m}{1} - \binom{m}{0}+\binom{m}{2} - \binom{m}{1}\\ &=\binom{m}{2}\\ &= \frac{m(m-1)}{2} \end{align*} and so in this case $W_{2,n}^{1}(m)$ has full rank. \subsection{Case: $\ch{F}=0$ and $2n=m$.}\hfil \\ A look at Lemma \ref{lem:maps1} shows that $L^{0} \cong S^{(m)}$ and also $L^{2} \cong \specht{2}$. Let us try to identify $L^{1}$. From Lemma \ref{lem:maps2} we see that \[ L^{1} \cong \psi_{2,1}\left(\im{\rho_{n,2}} \cap \spechtj{2}{1}\right) \subseteq \langle \allone \rangle, \] where $\allone$ denotes the all-one vector; that is, the sum of all tabloids. Since in this case $\specht{1}$ is irreducible, we must in fact have \[ \psi_{2,1}\left(\im{\rho_{n,2}} \cap \spechtj{2}{1}\right) = 0. \] Thus $W_{2,n}^{1}(m)$ has rank \[ 1 + \binom{m}{2} - \binom{m}{1} = \frac{(m-1)(m-2)}{2}. \] \subsection{Case: $\ch{F} = 2$.}\hfil \\ In this case, part $(3)$ of Lemma \ref{lem:maps1} shows that $\spechtj{n}{2}$ is in the kernel of $\rho_{n,2}$. 
Thus by considering the Specht series in $\Mperm{n}$ one sees that the composition factors of $\im{\rho_{n,2}}$ must be a sub-multiset of the composition factors of $S^{(m)}$ and $\specht{1}$. We always have that $S^{(m)} = D^{(m)}$ is the trivial module and $\specht{1}$ has $\spechtd{1}$ as a top composition factor, but whether or not $\specht{1}$ contains $D^{(m)}$ as a composition factor will depend on the parity of $m$. \subsubsection{Case: $m$ even.}\hfil \\ By Theorem \ref{thm:james}, we know that $\specht{1}$ has $\{\spechtd{1}, D^{(m)}\}$ as its set of composition factors. Notice that this implies that $\dim \spechtd{1} = \binom{m}{1} - \binom{m}{0} - 1$. Thus in this case the composition factors of $\im{\rho_{n,2}}$ are a sub-multiset of \[ \{D^{(m)},\spechtd{1}, D^{(m)}\}. \] In fact, one can deduce (\cite[Chapter $5$, Example $1$]{james}) that $\Mperm{1}$ is uniserial with unique composition series \[ \Mperm{1} \supset \specht{1} \supset \langle \allone \rangle \supset \{0\} \] and corresponding composition factors $D^{(m)}, \spechtd{1}, D^{(m)}$. We will make use of this transparent submodule structure of $\Mperm{1}$ in the following way. Since $\ch{F} = 2$, it is easy to see that \[ \rho_{n,2} = \psi_{1,2} \circ \psi_{n,1}, \] where $\psi_{1,2}$ sends a $1$-subset to the sum of the $2$-subsets containing it. This means that $\im{\rho_{n,2}}$ is isomorphic to a subquotient of $\Mperm{1}$. The image of $\psi_{n,1} \colon \Mperm{n} \to \Mperm{1}$ is easily identified with the same technique we have been applying to $\rho_{n,2}$; namely, intersect $\im{\psi_{n,1}}$ with the Specht series in $\Mperm{1}$ and focus on the successive quotients. The picture is \begin{center} \begin{tabular}{lll} $\im{\psi_{n,1}}$ & $\sim$ & $N^{0}$ \\ & & $N^{1}$ \end{tabular}. \end{center} This is done in \cite{jolliffe} for all the inclusion maps, and always the quotients are either zero or the entire Specht module. The specific result applied to this case is that $N^{j} \cong \specht{j}$ precisely when $\binom{n-j}{1-j}$ is odd. Thus the answer in this case depends on the parity of $n$. \paragraph{Case: $n$ even.} \hfil \\ Since $n$ is even, the discussion above shows that $\im{\psi_{n,1}}$ is $\specht{1}$. Furthermore, it is easy to see that $\allone$ (which generates $D^{(m)}$ in $\specht{1}$) is in the kernel of $\psi_{1,2}$. It follows that $\im{\rho_{n,2}} \cong \spechtd{1}$, and so in this case the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{1} - \binom{m}{0} - 1 = m-2. \] \paragraph{Case: $n$ odd.} \hfil \\ In this case we see that $\psi_{n,1} \colon \Mperm{n} \to \Mperm{1}$ is surjective. Again $\allone \in \ker{\psi_{1,2}}$, and so we see that the composition factors of $\im{\rho_{n,2}}$ are a subset of $\{D^{(m)}, \spechtd{1}\}$. We pick up $D^{(m)}$ as a top composition factor of $\im{\rho_{n,2}}$ from Lemma \ref{lem:maps1} part $(1)$. Clearly $G_{m}$ does not act trivially on $\im{\rho_{n,2}}$, so we must pick up $\spechtd{1}$ as a composition factor as well. Therefore in this case the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{1}-\binom{m}{0} = m-1. \] \subsubsection{Case: $m$ odd.}\hfil \\ By Theorem \ref{thm:james}, we have $\specht{1} = \spechtd{1}$. Thus the composition factors of $\im{\rho_{n,2}}$ form a subset of $\{ D^{(m)}, \spechtd{1}\}$. As in the previous case, \[ \rho_{n,2} = \psi_{1,2} \circ \psi_{n,1}, \] and so $\im{\rho_{n,2}}$ is a subquotient of $\Mperm{1}$.
Under the current hypotheses, it is easily deduced (\cite[Chapter $5$, Example $1$]{james}) that \[ \Mperm{1} = \langle \allone \rangle \oplus \specht{1}. \] Since $\allone \in \ker{\psi_{1,2}}$, and $\rho_{n,2} \neq 0$, we must have $\im{\rho_{n,2}} \cong \spechtd{1}$. So in this case the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{1} - \binom{m}{0} = m-1. \] \subsection{Case: $\ch{F} = p$, $p>2$.}\hfil \\ In these remaining cases we see from Lemma \ref{lem:maps1} part $(3)$ that $\im{\rho_{n,2}}$ contains the Specht module $\specht{2}$; that is, $L^{2} \cong \specht{2}$. \subsubsection{Case: $p \nmid m-2n$ and $p \nmid n(m-n)$.}\hfil \\ From Lemma \ref{lem:maps1} we see that $L^{0}$ and $L^{1}$ are also isomorphic to the full Specht modules. Thus in this case the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{2} = \frac{m(m-1)}{2}. \] \subsubsection{Case: $p \nmid m-2n$ and $p \mid n(m-n)$.}\hfil \\ By Lemma \ref{lem:maps1} we see that $L^{0} = \{0\}$ and $L^{1}$ is isomorphic to the full Specht module. Thus the rank of $W_{2,n}^{1}(m)$ equals \[ \binom{m}{2} - 1 = \frac{(m+1)(m-2)}{2}. \] \subsubsection{Case: $p \mid m-2n$.}\hfil \\ The rank in this case will depend on whether or not $p$ divides $m$. \paragraph{Case: $p \mid m$.} \hfil \\ We must have that $p \mid n$ as well, so by Lemma \ref{lem:maps1} part $(1)$ we have $L^{0} = \{0\}$. Part $(2)$ of Lemma \ref{lem:maps2} shows that $\psi_{2,1}$ kills $\im{\rho_{n,2}}$, so we have $L^{1} = \{0\}$. Thus the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{2} - \binom{m}{1} = \frac{m(m-3)}{2}. \] \paragraph{Case: $p \nmid m$.} \hfil \\ This implies that $p \nmid n$, so by Lemma \ref{lem:maps1} part $(1)$ we see that $L^{0} \cong D^{(m)}$. By Lemma \ref{lem:maps2} part $(2)$ we see that $\psi_{2,1}$ maps $\im{\rho_{n,2}}$ into $\langle \allone \rangle$. Thus we must have that $L^{1} = \{0\}$ or $L^{1} \cong D^{(m)}$. But Theorem \ref{thm:james} shows that $D^{(m)}$ is not a composition factor of $\specht{1}$ in this case. So we have $L^{1} = \{0\}$, and the rank of $W_{2,n}^{1}(m)$ is \[ \binom{m}{2} - \binom{m}{1} + 1 = \frac{(m-1)(m-2)}{2}. \] \section{General subset intersection matrices}\label{sec:conclude} We conclude with some observations about the general incidence matrix $W_{k,n}^{i}(m)$ mentioned in the introduction. This matrix represents the map $\tau_{n,k}^{i} \colon \Mperm{n} \to \Mperm{k}$ which sends an $n$-subset of $\XX$ to the sum of the $k$-subsets that intersect it in a set of size $i$. Our often-used Lemma \ref{lem:maps1} readily generalizes to this situation. \begin{lemma}\label{lem:maps3} Let $0 \leq i \leq k \leq n \leq \frac{1}{2}m$. Let $t$ be an $(m-n,n)$-tableau. Then, for $j \leq k$, $\tau_{n,k}^{i}$ maps any $j$-polytabloid $e_{t}^{j}$ in $\Mperm{n}$ into $\spechtj{k}{j}$. Furthermore, \[ \psi_{k,j}\left(\tau_{n,k}^{i}(e_{t}^{j})\right) = \left(\sum_{\ell \geq 0} (-1)^{j-\ell} \binom{j}{\ell} \binom{n-j}{i-\ell} \binom{m-n-j}{k-i-j+\ell}\right) e_{s}, \] where $s$ is the $(m-j,j)$-tableau obtained from $t$ by moving the last $n-j$ entries of the second row of $t$ to the end of the first row of $t$. \end{lemma} \begin{proof} Assume the hypotheses of the lemma and let $t$ be an $(m-n,n)$-tableau. Then \[ e_{t}^{j} \xmapsto{\tau_{n,k}^{i}} \kappa_{t}^{j}\sum_{\vert \{s\} \cap \{t\}\vert = i}\{s\}. \] The terms in the sum above are $k$-subsets $\{s\}$ that meet $\{t\}$ in $i$ elements of the second row of $t$; thus the other $k-i$ elements are chosen from the first row of $t$.
The terms in the sum that are not killed by $\kappa_{t}^{j}$ are precisely those $k$-subsets that contain exactly one entry from each of the first $j$ columns of $t$. We may classify such a $k$-subset by how many of those entries from the first $j$ columns of $t$ came from the second row. Call this number $\ell$. Then we see the number of terms in the sum not killed by $\kappa_{t}^{j}$ is \[ \sum_{\ell \geq 0} \binom{j}{\ell} \binom{n-j}{i-\ell} \binom{m-n-j}{k-i-j+\ell}. \] Each $k$-subset that is such a term in the sum may be represented by an $(m-k,k)$-tableau $r$ that has the same first $j$ columns as $t$, where the entries of each of the first $j$ columns may be transposed (so that elements of the $k$-subset are in the second row). Applying $\kappa_{t}^{j}$ to each of these $k$-subsets $\{r\}$ we in fact then have a sum of $j$-polytabloids $e_{r}^{j}$ in $\spechtj{k}{j}$. Finally, the inclusion map $\psi_{k,j}$ now sends any such $j$-polytabloid $e_{r}^{j}$ to $\pm e_{s}$, where $s$ is the $(m-j,j)$-tableau described in the statement of the lemma. The sign depends only on the number of transpositions $j-\ell$ that change the first $j$ columns of $r$ into those of $t$. The lemma follows. \end{proof} Using this lemma we could begin a similar analysis of any of the subset incidence matrices as we did for $W_{2,n}^{1}(m)$. We can also get some easy results such as: \begin{corollary} Let $0 \leq i \leq k \leq n \leq \frac{1}{2}m$. If $\ch{F}=0$, or $\ch{F}=p$ with $p \nmid \binom{k}{i}$, then the matrix $W_{k,n}^{i}(m)$ has rank at least \[ \binom{m}{k} - \binom{m}{k-1}. \] \end{corollary} \begin{proof} Let $t$ be an $(m-n,n)$-tableau. Applying Lemma \ref{lem:maps3} to the case when $j=k$, we see that the $k$-polytabloid $e_{t}^{k}$ is mapped by the $FG_{m}$-module homomorphism $\tau_{n,k}^{i}$ to $\pm\binom{k}{i}e_{s}$, where the tableau $s$ is described in the lemma. Thus, if $\binom{k}{i} \neq 0$, $\im{\tau_{n,k}^{i}}$ contains $\specht{k}$ and the corollary follows. \end{proof} We leave the reader with a final remark, which we hope will encourage more study of these problems through the lens of representation theory. Notice that for the inclusion relation, where $i=k$, the coefficients of $e_{s}$ in Lemma \ref{lem:maps3} are \[ \binom{n-j}{k-j},\] for $0 \leq j \leq k$. These same numbers appear in Wilson's diagonal form for the inclusion matrices, with multiplicities equal to Specht module dimensions. Furthermore, the coefficients of $e_{s}$ appearing in the case of $1$-intersection of $2$-subsets vs. $n$-subsets, which can be read from Lemma \ref{lem:maps1} (or extracted from Lemma \ref{lem:maps3}) are very similar to the numbers appearing in the diagonal form for $W_{2,n}^{1}(m)$ given in Theorem 17 of \cite{wong}. Looking at various examples, one can check that these coefficients (with multiplicities equal to Specht module dimensions) do in fact give an alternative diagonal form for this matrix in some cases (but not all). \bibliographystyle{amsplain} \bibliography{subset_bib} \end{document}
2205.04574v2
http://arxiv.org/abs/2205.04574v2
A universal heat semigroup characterisation of Sobolev and BV spaces in Carnot groups
\documentclass[11pt,a4paper]{amsart} \usepackage{amssymb,amsmath,epsfig,graphics,mathrsfs} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhead[RO,LE]{\small\thepage} \fancyhead[LO]{\small \emph{\nouppercase{\rightmark}}} \fancyhead[RE]{\small \emph{\nouppercase{\rightmark}}} \fancyfoot[L,R,C]{} \renewcommand{\headrulewidth}{1pt} \renewcommand{\footrulewidth}{0pt} \usepackage{hyperref} \hypersetup{ colorlinks = true, urlcolor = blue, linkcolor = blue, citecolor = red , bookmarksopen=true } \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsthm} \usepackage{epsfig,graphics,mathrsfs} \usepackage{graphicx} \usepackage{dsfont} \usepackage[usenames, dvipsnames]{color} \usepackage{hyperref} \textwidth = 16.1cm \textheight = 19.63cm \hoffset = -1.6cm \newcommand*\MSC[1][1991]{\par\leavevmode\hbox{\textit{#1 Mathematical subject classification:\ }}} \newcommand\blfootnote[1]{ \begingroup \renewcommand\thefootnote{}\footnote{#1} \addtocounter{footnote}{-1} \endgroup } \def \de {\partial} \def \e {\ve} \def \N {\mathbb{N}} \def \O {\Omega} \def \phi {\varphi} \def \RNu {\mathbb{R}^{n+1}} \def \RN {\mathbb{R}^N} \def \R {\mathbb{R}} \def \l {\lambda} \def \Gconv {G\left((p')^{-1}\circ p\right)} \def \Geta {G_\eta} \def \K {\mathscr{K}} \def \LL {\mathscr L_a} \def \Ga{\mathscr{G}_z} \def \G{\Gamma} \newcommand{\Ba}{\mathscr B_z^{(a)}} \newcommand{\paa}{z^a \de_z} \def \vf{\varphi} \def \S {\mathscr{S}(\R^{N+1})} \def \So {\mathscr{S}} \newcommand{\As}{(-\mathscr A)^s} \newcommand{\sA}{\mathscr A} \newcommand{\Ms}{\mathscr M^{(s)}} \newcommand{\Bpa}{\mathfrak B^\sA_{\alpha,p}} \newcommand{\Bps}{\mathfrak B_{s,p}(\bG)} \newcommand{\Ia}{\mathscr I_\alpha} \newcommand{\spp}{\sigma_p(\sA)} \newcommand{\rpp}{\rho_p(\sA)} \newcommand{\CO}{C^\infty_0( \Omega)} \newcommand{\Rn}{\mathbb R^n} \newcommand{\Rm}{\mathbb R^m} \newcommand{\Om}{\Omega} \newcommand{\Hn}{\mathbb H^n} \newcommand{\aB}{\alpha B} \newcommand{\eps}{\ve} \newcommand{\BVX}{BV_X(\Omega)} \newcommand{\p}{\partial} \newcommand{\IO}{\int_\Omega} \newcommand{\bG}{\mathbb{G}} \newcommand{\bg}{\mathfrak g} \newcommand{\bz}{\mathfrak z} \newcommand{\bv}{\mathfrak v} \newcommand{\Bux}{\mbox{Box}} \newcommand{\X}{\mathcal X} \newcommand{\Y}{\mathcal Y} \newcommand{\W}{\mathcal W} \newcommand{\la}{\lambda} \newcommand{\La}{\mathscr L} \newcommand{\rhh}{|\nabla_H \rho|} \newcommand{\Za}{Z_\beta} \newcommand{\ra}{\rho_\beta} \newcommand{\na}{\nabla_\beta} \newcommand{\vt}{\vartheta} \newcommand{\HHa}{\mathscr H_a} \newcommand{\HH}{\mathscr H} \numberwithin{equation}{section} \newcommand{\Sob}{S^{1,p}(\Omega)} \newcommand{\dgk}{\frac{\partial}{\partial x_k}} \newcommand{\Co}{C^\infty_0(\Omega)} \newcommand{\Je}{J_\ve} \newcommand{\beq}{\begin{equation}} \newcommand{\bea}[1]{\begin{array}{#1} } \newcommand{\eeq}{ \end{equation}} \newcommand{\ea}{ \end{array}} \newcommand{\eh}{\ve h} \newcommand{\dgi}{\frac{\partial}{\partial x_{i}}} \newcommand{\Dyi}{\frac{\partial}{\partial y_{i}}} \newcommand{\Dt}{\frac{\partial}{\partial t}} \newcommand{\aBa}{(\alpha+1)B} \newcommand{\GF}{\psi^{1+\frac{1}{2\alpha}}} \newcommand{\GS}{\psi^{\frac12}} \newcommand{\HFF}{\frac{\psi}{\rho}} \newcommand{\HSS}{\frac{\psi}{\rho}} \newcommand{\HFS}{\rho\psi^{\frac12-\frac{1}{2\alpha}}} \newcommand{\HSF}{\frac{\psi^{\frac32+\frac{1}{2\alpha}}}{\rho}} \newcommand{\AF}{\rho} \newcommand{\AR}{\rho{\psi}^{\frac{1}{2}+\frac{1}{2\alpha}}} \newcommand{\PF}{\alpha\frac{\psi}{|x|}} \newcommand{\PS}{\alpha\frac{\psi}{\rho}} \newcommand{\ds}{\displaystyle} 
\newcommand{\Zt}{{\mathcal Z}^{t}} \newcommand{\XPSI}{2\alpha\psi \begin{pmatrix} \frac{x}{|x|^2}\\ 0 \end{pmatrix} - 2\alpha\frac{{\psi}^2}{\rho^2}\begin{pmatrix} x \\ (\alpha +1)|x|^{-\alpha}y \end{pmatrix}} \newcommand{\Z}{ \begin{pmatrix} x \\ (\alpha + 1)|x|^{-\alpha}y \end{pmatrix} } \newcommand{\ZZ}{ \begin{pmatrix} xx^{t} & (\alpha + 1)|x|^{-\alpha}x y^{t}\\ (\alpha + 1)|x|^{-\alpha}x^{t} y & (\alpha + 1)^2 |x|^{-2\alpha}yy^{t}\end{pmatrix}} \newcommand{\norm}[1]{\lVert#1 \rVert} \newcommand{\ve}{\varepsilon} \newcommand{\Rnn}{\mathbb R^{n+1}} \newcommand{\Rnp}{\mathbb R^{N+1}_+} \newcommand{\B}{\mathbb{B}} \newcommand{\Ha}{\mathbb{H}} \newcommand{\xx}{\mathscr X} \newcommand{\Sa}{\mathbb{S}} \newcommand{\x}{\nabla_\mathscr X} \newcommand{\I}{\mathscr I_{HL}} \newcommand{\Lo}{\mathscr L^{2s,p}} \newcommand{\Ma}{\mathscr M} \newcommand{\Po}{\mathscr P} \newcommand{\Ps}{\mathfrak P_s^{\sA}} \newcommand{\In}{1_E} \newcommand{\Lp}{L^p} \newcommand{\Li}{L^\infty} \newcommand{\Lii}{L^\infty_0} \newcommand{\tr}{\operatorname{tr} B} \newcommand{\ssA}{\mathscr A^\star} \newcommand{\tA}{\tilde \sA} \newcommand{\ue}{\mathbf 1_{(-\ve,0)}} \newcommand{\ud}{\mathbf 1_{(0,\delta)}} \newcommand{\uex}{\mathbf 1_{(-\ve,0)}(g)} \newcommand{\udg}{\mathbf 1_{(0,\delta)}(g)} \newcommand{\uE}{\mathbf 1_E} \newcommand{\nh}{\nabla_H} \newcommand{\cg}{\mathrm{g}} \def \dive{\mathrm{div}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \numberwithin{equation}{section} \setcounter{tocdepth}{1} \begin{document} \title[A universal heat semigroup characterisation, etc.]{A universal heat semigroup characterisation\\of Sobolev and BV spaces in Carnot groups} \blfootnote{\MSC[2020]{35K08, 46E35, 53C17}} \keywords{Sub-Riemannian heat kernels, Integral decoupling, Folland-Stein and BV spaces} \date{} \begin{abstract} In sub-Riemannian geometry there exist, in general, no known explicit representations of the heat kernels, and these functions fail to have any symmetry whatsoever. In particular, they are not a function of the control distance, nor they are for instance spherically symmetric in any of the layers of the Lie algebra. Despite these unfavourable aspects, in this paper we establish a new heat semigroup characterisation of the Sobolev and $BV$ spaces in a Carnot group by means of an integral decoupling property of the heat kernel. \end{abstract} \author{Nicola Garofalo} \address{Dipartimento d'Ingegneria Civile e Ambientale (DICEA)\\ Universit\`a di Padova\\ Via Marzolo, 9 - 35131 Padova, Italy} \vskip 0.2in \email{[email protected]} \author{Giulio Tralli} \address{Dipartimento d'Ingegneria Civile e Ambientale (DICEA)\\ Universit\`a di Padova\\ Via Marzolo, 9 - 35131 Padova, Italy} \vskip 0.2in \email{[email protected]} \maketitle \tableofcontents \section{Introduction}\label{S:intro} For $1\le p < \infty$ and $0<s<1$ consider in $\Rn$ the Banach space $W^{s,p}$ of functions $f\in \Lp$ with finite Aronszajn-Gagliardo-Slobedetzky seminorm, \begin{equation}\label{ags} [f]^p_{s,p} = \int_{\Rn} \int_{\Rn} \frac{|f(x) - f(y)|^p}{|x-y|^{n+ps}} dx dy, \end{equation} see e.g. \cite{Ad, RS}. 
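A quick sanity check on the normalisation in \eqref{ags}, which we record only for illustration and do not use in the sequel: if for $\lambda>0$ one sets $f_\lambda(x) = f(\lambda x)$, then the change of variables $u=\lambda x$, $v=\lambda y$ gives
\[
[f_\lambda]^p_{s,p} = \lambda^{sp-n}\, [f]^p_{s,p},
\]
so that $s$ plays the role of a fractional order of differentiability for the seminorm \eqref{ags}.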
In their celebrated works \cite{BBM1, BBM2, B}, Bourgain, Brezis and Mironescu discovered a new characterisation of the spaces $W^{1,p}$ and $BV$ based on the study of the limiting behaviour of the spaces $W^{s,p}$ as $s\nearrow 1$. To state their result, consider a one-parameter family of functions $\{\rho_\ve\}_{\ve>0}\in L^1_{loc}(0,\infty)$, $\rho_\ve\geq 0$, satisfying the following assumptions \begin{equation}\label{condbbm} \int_0^\infty \rho_\ve(r)r^{n-1}dr=1,\quad\underset{\ve \to 0^+}{\lim}\int_\delta^\infty \rho_\ve(r)r^{n-1}dr = 0\ \ \mbox{for every $\delta>0$}, \end{equation} see \cite[(9)-(11)]{B}. Also, for $1\le p<\infty$ let \[ K_{p,n}=\int_{\mathbb S^{n-1}} |\langle \omega,e_n\rangle|^p d\sigma(\omega). \] \vskip 0.3cm \noindent \textbf{Theorem A.} [Bourgain, Brezis \& Mironescu]\label{T:bbm}\ \emph{ Assume $1\le p <\infty$. Let $f\in L^p(\Rn)$ and suppose that $$ \underset{\ve\to 0^+}{\liminf} \int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|^p}{|x-y|^p}\rho_\ve(|x-y|) dydx < \infty. $$ If $p>1$, then $f\in W^{1,p}$ and \begin{equation}\label{thesisp} \underset{\ve \to 0^+}{\lim} \int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|^p}{|x-y|^p}\rho_\ve(|x-y|) dydx= K_{p,n} \int_{\Rn} |\nabla f(x)|^p dx. \end{equation} If instead $p=1$, then $f\in BV$ and \begin{equation}\label{thesis1} \underset{\ve \to 0^+}{\lim} \int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|}{|x-y|}\rho_\ve(|x-y|) dydx= K_{1,n} \operatorname{Var}(f). \end{equation}} In \eqref{thesis1} we have denoted with $\operatorname{Var}(f)$ the total variation of $f$ in the sense of De Giorgi (when $f\in W^{1,1}$ one has $\operatorname{Var}(f) = \int_{\Rn} |\nabla f(x)| dx$). We also remark that for $n\ge 2$ the equality \eqref{thesis1} was proved by D\'avila in \cite{Da}. From Theorem \hyperref[T:bbm]{A} one immediately obtains the limiting behaviour of the seminorms \eqref{ags}. To see this, it is enough for $0<s<1$ to let $\ve=1-s$ and take $$ \rho_{1-s}(r)=\begin{cases} \frac{(1-s)p}{r^{n-(1-s)p}}, \qquad\,\,\,\,\,\, \ 0<r< 1, \\ 0 \qquad\quad\quad\quad\ \ \ \ \,\, \ r\geq 1. \end{cases} $$ It is easy to see that \eqref{condbbm} are satisfied and that \eqref{thesisp} gives in such case \begin{equation}\label{caso1} \underset{s \to 1^-}{\lim} (1-s)p \int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|^p}{|x-y|^{n+sp}} dydx= K_{p,n} ||\nabla f||^p_p. \end{equation} From \eqref{caso1}, and from the identity \begin{equation}\label{Kappa} K_{p,n}=2\pi^{\frac{n-1}{2}}\frac{\G\left(\frac{p+1}{2}\right)}{\G\left(\frac{n+p}{2}\right)}, \end{equation} one concludes that \begin{equation}\label{seminorm} \underset{s \to 1^-}{\lim} (1-s)\int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|^p}{|x-y|^{n+sp}} dydx= 2\pi^{\frac{n-1}{2}}\frac{\G\left(\frac{p+1}{2}\right)}{p\G\left(\frac{n+p}{2}\right)} ||\nabla f||^p_p. \end{equation} To introduce the results in this paper we now emphasise a different perspective on Theorem \hyperref[T:bbm]{A}. If, in fact, we take $\rho_\ve=\rho_{t}$, with \begin{equation}\label{rho} \rho_{t}(r)= \frac{\pi^{\frac{n}{2}}}{2^{p-1} \G\left(\frac{n+p}{2}\right)} \frac{r^{p}}{t^{\frac{p}{2}}}\frac{e^{-\frac{r^2}{4t}}}{(4\pi t)^{\frac{n}{2}}}, \end{equation} then it is easy to see that also such $\rho_t$ satisfies \eqref{condbbm}. 
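For the reader's convenience, here is a minimal sketch of this verification. The substitution $u = \frac{r^2}{4t}$ gives $\int_0^\infty r^{n+p-1} e^{-\frac{r^2}{4t}}\, dr = 2^{n+p-1}\, t^{\frac{n+p}{2}}\, \G\left(\frac{n+p}{2}\right)$, and therefore
\[
\int_0^\infty \rho_{t}(r)\, r^{n-1}\, dr = \frac{\pi^{\frac{n}{2}}}{2^{p-1}\G\left(\frac{n+p}{2}\right)}\, \frac{2^{n+p-1}\, t^{\frac{n+p}{2}}\, \G\left(\frac{n+p}{2}\right)}{(4\pi t)^{\frac{n}{2}}\, t^{\frac{p}{2}}} = 1.
\]
The same substitution yields, for every fixed $\delta>0$,
\[
\int_\delta^\infty \rho_{t}(r)\, r^{n-1}\, dr = \frac{1}{\G\left(\frac{n+p}{2}\right)} \int_{\frac{\delta^2}{4t}}^\infty u^{\frac{n+p}{2}-1} e^{-u}\, du\ \underset{t\to 0^+}{\longrightarrow}\ 0,
\]
so that both requirements in \eqref{condbbm} are met with $\ve = t$.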
Furthermore, with this choice we can write for $1\le p < \infty$ \begin{align*} & \int_{\Rn}\int_{\Rn} \frac{|f(x)-f(y)|^p}{|x-y|^p}\rho_\ve(|x-y|) dydx = \frac{\pi^{\frac{n}{2}}}{2^{p-1} \G\left(\frac{n+p}{2}\right)} \frac{1}{t^{\frac{p}{2}}}\int_{\Rn} P_t(|f-f(x)|^p)(x) dx, \end{align*} where we have denoted by $P_t f(x) = (4\pi t)^{-\frac{n}{2}}\int_{\Rn} e^{-\frac{|x-y|^2}{4t}} f(y) dy$ the heat semigroup in $\Rn$. If we combine this observation with \eqref{Kappa} and with Legendre duplication formula for the gamma function (see \cite[p.3]{Le}), which gives $2^{p-1} \G(p/2) \G\left(\frac{p+1}{2}\right) = \sqrt \pi \G(p), $ we obtain the following notable consequence of Theorem \hyperref[T:bbm]{A}. \vskip 0.3cm \noindent \textbf{Theorem B.}\label{C:bbm}\ \emph{ Assume $1\le p <\infty$. Let $f\in L^p(\Rn)$ and suppose that $$ \underset{t\to 0^+}{\liminf} \frac{1}{t^{\frac{p}{2}}}\int_{\Rn} P_t(|f-f(x)|^p)(x) dx < \infty. $$ If $p>1$, then $f\in W^{1,p}$ and \begin{equation}\label{thesispPtk} \underset{t \to 0^+}{\lim} \frac{1}{t^{\frac{p}{2}}}\int_{\Rn} P_t(|f-f(x)|^p)(x) dx = \frac{2 \G(p)}{\G(p/2)} \int_{\Rn} |\nabla f(x)|^p dx. \end{equation} If instead $p=1$, then $f\in BV$ and \begin{equation}\label{thesis11} \underset{t \to 0^+}{\lim} \frac{1}{\sqrt{t}}\int_{\Rn} P_t(|f-f(x)|)(x) dx= \frac{2}{\sqrt \pi} \operatorname{Var}(f). \end{equation}} One remarkable aspect of \eqref{thesispPtk}, \eqref{thesis11} is the dimensionless constant $\frac{2 \G(p)}{\G(p/2)}$ in the right-hand side. For the purpose of the present work it is important for the reader to keep in mind that, while we have presented Theorem \hyperref[T:bbm]{B} as a consequence of Theorem \hyperref[T:bbm]{A}, we could have derived the dimensionless heat semigroup characterisations \eqref{thesispPtk}, \eqref{thesis11} of $W^{1,p}$ and $BV$ completely independently of Theorem \hyperref[T:bbm]{A}. In fact, once Theorem \hyperref[T:bbm]{B} is independently proved, one can go full circle and easily obtain from it a dimensionless heat semigroup version of the characterisation \eqref{seminorm}. Such a perspective, which is close in spirit to M. Ledoux' approach to the isoperimetric inequality in \cite{Led}, represents the starting point of our work, to whose description we now turn. One of the main objectives of the present paper is to establish, independently of a result such as Theorem \hyperref[T:bbm]{A}, a surprising generalisation of Theorem \hyperref[T:bbm]{B} that we state as Theorems \ref{T:mainp} and \ref{T:p1} below. To provide the reader with a perspective on our results we note that if, as we have done above, one looks at Theorem \hyperref[T:bbm]{B} as a corollary of Theorem \hyperref[T:bbm]{A}, then the spherical symmetry of the approximate identities $\rho_\ve(|x-y|)$, and therefore of the Euclidean heat kernel in \eqref{rho}, seems to play a crucial role in the dimensionless characterisations \eqref{thesispPtk} and \eqref{thesis11}. With this comment in mind, we mention there has been considerable effort in recent years in extending Theorem \hyperref[T:bbm]{A} to various non-Euclidean settings, see \cite{Bar, Lud, CLL, FMPPS, KM, CMSV, Go, CDPP, ArB, HP} for a list, far from being exhaustive, of some of the interesting papers in the subject. 
In these works the approach is similar to that in the Euclidean setting, and this is reflected in the fact that the relevant approximate identities $\rho_\ve$ either depend on a distance $d(x,y)$, or are asymptotically close in small scales to the well-understood symmetric scenario of $\Rn$. The point of view of our work is different since, as we have already said, our initial motivation was to understand a result such as Theorem \hyperref[T:bbm]{B} completely independently of Theorem \hyperref[T:bbm]{A}. In this endeavour, one immediately runs into the following potentially serious obstruction. \medskip \noindent \textbf{Problem:} \emph{Are universal characterisations such as \eqref{thesispPtk} and \eqref{thesis11} even possible in a genuinely non-Riemannian ambient space, when the spherical symmetry, or any other symmetries, of the heat kernel are completely lost?} \medskip Concerning this problem a testing ground of basic interest is, for the reasons that we explain below, that of a connected, simply connected Lie group $\bG$ whose Lie algebra admits a stratification $\bg=\bg_1 \oplus \cdots \oplus \bg_r$ which is $r$-nilpotent, i.e., $[\bg_1,\bg_j] = \bg_{j+1},$ $j = 1,...,r-1$, $[\bg_j,\bg_r] = \{0\}$, $j = 1,..., r$. The study of these Lie groups presents considerable challenges, and many basic questions pertaining to their analytical and geometric properties presently remain fully open. Nowadays known as Carnot groups, they model physical systems with constrained dynamics, in which motion is only possible in a prescribed set of directions in the tangent space (sub-Riemannian, versus Riemannian geometry), see E. Cartan's seminal work \cite{Ca}. Every stratified nilpotent Lie group is endowed with an important second order partial differential operator. The idea goes back to the visionary address of E. Stein \cite{Stein}. Fix a basis $\{e_1,...,e_{m}\}$ of the generating layer $\bg_1$ of the Lie algebra (called the horizontal layer) and define left-invariant vector fields on $\bG$ by the rule $X_j(g) = dL_g(e_j)$, $g\in \bG$, where $dL_g$ is the differential of the left-translation operator $L_g(g') = g \circ g'$. We indicate with $|\nabla_H f|^2 = \sum_{i=1}^m (X_i f)^2$ the squared length of the horizontal gradient of a function $f$ with respect to the basis $\{e_1,...,e_m\}$. Associated with such \emph{carr\'e du champ} there is a natural left-invariant intrinsic distance in $\bG$ defined by \begin{equation}\label{d} d(g,g') \overset{def}{=} \sup \{f(g) - f(g')\mid f\in C^\infty(\bG),\ |\nabla_H f|^2\le 1\}. \end{equation} Such $d(g,g')$ coincides with the Carnot-Carath\'eodory distance, see Gromov's beautiful account \cite{Gro}. We respectively denote by $W^{1,p}(\bG)$ and $BV(\bG)$ the Folland-Stein Sobolev space and the space of $L^1$ functions having bounded variation with respect to the horizontal bundle, see Section \ref{S:prelim} for precise definitions and notations. The horizontal Laplacian relative to $\{e_1,...,e_m\}$ is defined as \begin{equation}\label{L} \mathscr L = \sum_{i=1}^m X_i^2. \end{equation} When the step of the stratification of $\bg$ is $r=1$, then the group is Abelian and we are back into the familiar Riemannian setting of $\Rn$, in which case $\mathscr L = \Delta$ is the standard Laplacian.
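Before proceeding, we recall the simplest non-Abelian example to keep this setup concrete (it is included here only for orientation, and we state it with one common normalisation of the group law): the first Heisenberg group $\mathbb{H}^1$. Identifying $\mathbb{H}^1$ with $\R^3$ with coordinates $(x,y,\sigma)$, the horizontal layer is spanned by the left-invariant vector fields
\[
X_1 = \partial_x - \frac{y}{2}\,\partial_\sigma, \qquad X_2 = \partial_y + \frac{x}{2}\,\partial_\sigma, \qquad [X_1,X_2] = \partial_\sigma,
\]
so that $r=2$, $m=2$, and $\mathscr L = X_1^2 + X_2^2$ involves only two of the three coordinate directions.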
In the genuinely non-Abelian situation when $r>1$, the differential operator $\mathscr L$ fails to be elliptic at every point of the ambient space $\bG$, but it possesses nonetheless a heat semigroup $P_t f(g) = e^{-t \mathscr L} f(g) = \int_{\bG} p(g,g',t) f(g') dg'$, see the construction in Folland's work \cite{Fo}. Such semigroup is positive, formally self-adjoint and stochastically complete, i.e. $P_t 1 = 1$. The heat kernel $p(g,g',t)$ satisfies appropriate Gaussian estimates with respect to the metric $d(g,g')$ (see Proposition \ref{P:gaussian} below), but this fact is of no help when it comes to a universal statement such as Theorem \hyperref[T:bbm]{B} since, in general, there is no known explicit representation of $p(g,g',t)$, and such heat kernel fails to have any symmetry whatsoever. In particular, it is not a function of the distance $d(g,g')$, nor is it, for instance, spherically symmetric in any of the layers $\bg_i$, $i=1,...,r$, of the Lie algebra (see the discussion in the opening of Section \ref{S:new}). Despite these disheartening aspects, we have the following two surprising results. \begin{theorem}\label{T:mainp} Let $1<p<\infty$. Then $$ W^{1,p}(\bG) = \{f\in L^p(\bG)\mid \underset{t\to 0^+}{\liminf}\ \frac{1}{t^{\frac{p}{2}}}\int_{\bG} P_t(|f-f(g)|^p)(g) dg <\infty\}. $$ Furthermore, if $f\in W^{1,p}(\bG)$ then \begin{equation}\label{2p} \underset{t \to 0^+}{\lim} \frac{1}{t^{\frac{p}{2}}}\int_{\bG} P_t(|f-f(g)|^p)(g) dg = \frac{2 \G(p)}{\G(p/2)} \int_{\bG} |\nabla_H f(g)|^p dg. \end{equation} \end{theorem} Concerning the case $p=1$, the following is our second main result. \begin{theorem}\label{T:p1} We have \begin{equation}\label{1uno} BV(\bG) =\left\{f\in L^1(\bG)\mid \underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t} \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg<\infty \right\}, \end{equation} and for any $f\in W^{1,1}(\bG)$ \begin{equation}\label{2unouno} \underset{t \to 0^+}{\lim} \frac{1}{\sqrt{t}}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg = \frac{2}{\sqrt{\pi}} \int_{\bG} |\nabla_H f(g)| dg. \end{equation} Furthermore, if the Carnot group $\bG$ has the property \emph{(B)}\footnote{For this property the reader should see Definition \ref{D:B} below.}, then for any $f\in BV(\bG)$ we have \begin{equation}\label{2uno} \underset{t \to 0^+}{\lim} \frac{1}{\sqrt{t}}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg = \frac{2}{\sqrt{\pi}} {\rm{Var}}_\bG(f). \end{equation} \end{theorem} We draw the reader's attention to the remarkable similarity between \eqref{2p}, \eqref{2uno} and their Euclidean predecessors \eqref{thesispPtk}, \eqref{thesis11}. The presence of the universal constant $\frac{2 \G(p)}{\G(p/2)}$ in the right-hand sides of \eqref{2p}, \eqref{2uno} underscores a remarkable general character of the heat semigroup that we next clarify. Having stated our main results, we must explain our comment on their surprising aspect. While we refer the reader to Section \ref{S:new} for a detailed discussion of this point, here we confine ourselves to mentioning that the crucial novelty in our approach is Theorem \ref{T:int} below. The latter represents an \emph{integral decoupling property} of the sub-Riemannian heat kernels. With such a result in hand we obtain the basic Lemma \ref{L:id}. It is precisely this lemma that accounts for the universal character of Theorems \ref{T:mainp} and \ref{T:p1}.
We mention that Lemma \ref{L:id} is reminiscent of two remarkable properties of the classical heat semigroup first discovered respectively by Ledoux in his approach to the isoperimetric inequality \cite{Led}, and by Huisken in his work on singularities of flow by mean curvature \cite{Hui}. It is worth remarking at this point that, as we explain in Section \ref{SS:fulvio} below, some experts in the noncommutative analysis community are familiar with the integral decoupling property in Theorem \ref{T:int}. However, the use that we make of such result is completely new. In this respect, we mention that the special case of Carnot groups of step 2 in Theorem \ref{T:p1} was treated in our recent work \cite{GTbbmd}. In that setting we were able to extract the crucial information \eqref{punoint} in Lemma \ref{L:id} from the explicit Gaveau-Hulanicki-Cygan representation formula \eqref{ournucleo} below. No such formula is available for Carnot groups of step 3 or higher, and it is precisely a result such as Theorem \ref{T:int} that allows to successfully handle this situation. As previously mentioned, in the special situation when $\bG=\Rn$ we recover Theorem \hyperref[T:bbm]{B} from Theorems \ref{T:mainp} and \ref{T:p1}, as well as a dimensionless heat semigroup formulation of the Brezis-Bourgain-Mironescu limiting behaviour \eqref{seminorm}. We next show that this comment extends to the geometric setting of the present paper. We begin by introducing the relevant function spaces. \begin{definition}\label{D:besov} Let $\bG$ be a Carnot group. For any $0<s<1$ and $1\le p<\infty$ we define the \emph{fractional Sobolev space} $\Bps$ as the collection of all functions $f\in L^p(\bG)$ such that the seminorm $$ \mathscr N_{s,p}(f) = \left(\int_0^\infty \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt\right)^{\frac 1p} < \infty. $$ \end{definition} The norm \[ ||f||_{\Bps} = ||f||_{\Lp(\bG)} + \mathscr N_{s,p}(f) \] turns $\Bps$ into a Banach space. We stress that the space $\Bps$ is nontrivial since, for instance, it contains $W^{1,p}(\bG)$ (see Lemma \ref{L:inclus} below). We also emphasise that, when the step $r=1$ and $\bG\cong \R^n$ is Abelian, then the space $\Bps$ coincides with the classical Aronszajn-Gagliardo-Slobedetzky space of fractional order $W^{s,p}(\R^n)$ of the functions $f\in L^p$ with finite seminorm $[f]^p_{s,p}$ in \eqref{ags}. It is in fact an exercise to recognise in this case that \[ \mathscr N_{s,p}(f)^p = \frac{2^{sp} \G(\frac{n+sp}2)}{\pi^{\frac n2}}\ [f]_{s,p}^p. \] Concerning the spaces $\Bps$ our main result is the following. It provides a sub-Riemannian dimensionless version of the above mentioned limiting phenomenon \eqref{seminorm}. \begin{theorem}\label{T:bbmG} Let $\bG$ be a Carnot group. Then \begin{equation}\label{1sp} W^{1,p}(\bG) = \{f\in L^p(\bG)\mid \underset{s\to 1^-}{\liminf}\ (1-s) \mathscr N_{s,p}(f)^p <\infty\}\qquad \mbox{ for }1< p<\infty, \end{equation} and \begin{equation}\label{1suno} BV(\bG) =\left\{f\in L^1(\bG)\mid \underset{s\to 1^-}{\liminf}\ (1-s) \mathscr N_{s,1}(f) <\infty \right\}. \end{equation} For any $1\leq p<\infty $ and $f\in W^{1,p}(\bG)$, one has \begin{equation}\label{2sp} \underset{s\to 1^-}{\lim}\ (1-s) \mathscr N_{s,p}(f)^p = \frac{4 \G(p)}{p\G(p/2)} \int_{\bG} |\nabla_H f(g)|^p dg. 
\end{equation} Furthermore, if the Carnot group $\bG$ has the property \emph{(B)}, then for any $f\in BV(\bG)$ we have \begin{equation}\label{2suno} \underset{s\to 1^-}{\lim}\ (1-s) \mathscr N_{s,1}(f) = \frac{4}{\sqrt{\pi}} {\rm{Var}}_\bG(f). \end{equation} \end{theorem} Our last result concerns the asymptotic behaviour in $s$ of the seminorms $\mathscr N_{s,p}(f)$ at the other end-point of interval $(0,1)$. Such result provides a dimensionless generalisation of that proved by Maz'ya and Shaposhnikova in \cite{MS}. \begin{theorem}\label{T:MS} Let $\bG$ be a Carnot group, and $1\leq p <\infty$. Suppose that $f\in \underset{0<s<1}{\bigcup}\Bps$. Then, $$ \underset{s\to 0^+}{\lim} s \mathscr N_{s,p}(f)^p = \frac{4}{p} ||f||_p^p. $$ \end{theorem} In closing, we briefly discuss the structure of the paper. In Section \ref{S:prelim} we recall the geometric setup. In Section \ref{S:prep} we present some basic preparatory results that will be needed in the rest of the paper. Section \ref{S:new} is central to the rest of our work, but we also feel that it has an independent interest with consequences that go beyond those in the present work. We establish Theorem \ref{T:int} which, as we have said, represents a key property of the heat kernel in a Carnot group. With such result in hand, we obtain the crucial Lemma \ref{L:id}. Section \ref{S:proofs} is devoted to proving Theorems \ref{T:mainp} and \ref{T:p1}. Finally, in Section \ref{S:seminorms} we prove Theorems \ref{T:bbmG} and \ref{T:MS}. \section{Background}\label{S:prelim} In the last decades various aspects of analysis and geometry in Carnot groups have attracted a lot of attention, and we refer to the monographs \cite{FS, V, CG, VSC, Gro, Ri, BLU, Gparis} for insightful perspectives. For the reader's convenience we have collected in this section the background material which is needed in the present paper. Although the relevant geometric setting has been introduced in Section \ref{S:intro}, we recall it here. \begin{definition}\label{D:carnot} Given $r\in \mathbb N$, a \emph{Carnot group} of step $r$ is a simply-connected real Lie group $(\bG, \circ)$ whose Lie algebra $\bg$ is stratified and $r$-nilpotent. This means that there exist vector spaces $\bg_1,...,\bg_r$ such that \begin{itemize} \item[(i)] $\bg=\bg_1\oplus \dots\oplus\bg_r$; \item[(ii)] $[\bg_1,\bg_j] = \bg_{j+1}$, $j=1,...,r-1,\ \ \ [\bg_1,\bg_r] = \{0\}$. \end{itemize} \end{definition} We assume that $\bg$ is endowed with a scalar product $\langle\cdot,\cdot\rangle$ with respect to which the layers $\bg_j's$, $j=1,...,r$, are mutually orthogonal. We let $m_j =$ dim\ $\bg_j$, $j= 1,...,r$, and denote by $N = m_1 + ... + m_r$ the topological dimension of $\bG$. From the assumption (ii) on the Lie algebra it is clear that any basis of the first layer $\bg_1$ bracket generates the whole Lie algebra $\bg$. Because of such special role $\bg_1$ is usually called the horizontal layer of the stratification. For ease of notation we henceforth write $m = m_1$. In the case in which $r =1$ we are in the Abelian situation in which $\bg = \bg_1$, and thus $\bG$ is isomorphic to $\R^m$, where $m =$ dim\ $\bg_1$. We are thus back in $\R^m$, there is no sub-Riemannian geometry involved and everything is classical. We are of course primarily interested in the genuinely non-Riemannian setting $r>1$. The exponential map $\exp : \bg \to \bG$ defines an analytic diffeomorphism of the Lie algebra $\bg$ onto $\bG$, see e.g. \cite[Sec. 2.10 forward]{V}. 
Using such diffeomorphism, whenever convenient we will routinely identify a point $g = \exp \xi \in \bG$ with its logarithmic image $\xi = \exp^{-1} g\in \bg$. With such identification, if $\xi = \xi_1+...+\xi_r$, we let $\xi_1 = z_{1} e_{1} +...+z_{m}e_{m}\in \bg_1$, and $\xi_j = \sigma_{j,1} e_{j,1} +...+\sigma_{j,m_j}e_{j,m_j}\in \bg_j$, $j = 1,...,r$. Whenever convenient, see for instance the important expression \eqref{Xi} below, we will routinely identify the vector $\xi_1\in \bg_1$ with the point $z = (z_1,...,z_m)\in \Rm$, and the vector $\xi_2+...+\xi_r$ with the point $\sigma = (\sigma_{2},...,\sigma_{r})\in \R^{N-m}$, where $\sigma_j = (\sigma_{j,1},...,\sigma_{j,m_j}) \in \mathbb R^{m_j}$. Given $\xi, \eta\in \bg$, the Baker-Campbell-Hausdorff formula reads \begin{equation}\label{BCH} \exp(\xi) \circ \exp(\eta) = \exp{\bigg(\xi + \eta + \frac{1}{2} [\xi,\eta] + \frac{1}{12} \big\{[\xi,[\xi,\eta]] - [\eta,[\xi,\eta]]\big\} + ...\bigg)}, \end{equation} where the dots indicate commutators of order four and higher, see \cite[Sec. 2.15]{V}. Furthermore, since by (ii) in Definition \ref{D:carnot} all commutators of order higher than $r$ are trivial, in every Carnot group the Baker-Campbell-Hausdorff series in the right-hand side of \eqref{BCH} is finite. Using \eqref{BCH}, with $g = \exp \xi, g' = \exp \xi'$, one can recover the group law $g \circ g'$ in $\bG$ from the knowledge of the algebraic commutation relations between the elements of its Lie algebra. We respectively denote by $L_g(g') = g \circ g'$ and $R_g(g') = g'\circ g$ the left- and right-translation operator by an element $g\in \bG$. We indicate by $dg$ the bi-invariant Haar measure on $\bG$ obtained by lifting via the exponential map the Lebesgue measure on $\bg$. The stratification (ii) induces in $\bg$ a natural one-parameter family of non-isotropic dilations by assigning to each element of the layer $\bg_j$ the formal degree $j$. Accordingly, if $\xi = \xi_1 + ... + \xi_r \in \bg$, with $\xi_j\in \bg_j$, one defines dilations on $\bg$ by the rule $\Delta_\lambda \xi = \lambda \xi_1 + ... + \lambda^r \xi_r,$ and then use the exponential map to transfer such anisotropic dilations to the group $\bG$ as follows \begin{equation}\label{dilG} \delta_\lambda(g) = \exp \circ \Delta_\lambda \circ \exp^{-1} g. \end{equation} The homogeneous dimension of $\bG$ with respect to \eqref{dilG} is the number $Q = \sum_{j=1}^r j m_j.$ Such number plays an important role in the analysis of Carnot groups. The motivation for this name comes from the equation $$(d\circ\delta_\lambda)(g) = \lambda^Q dg.$$ In the non-Abelian case $r>1$, one clearly has $Q>N$. We will use the non-isotropic gauge in $\bg$ defined in the following way $|\xi| = \left(\sum_{j=1}^r ||\xi_j||^{2r!/j}\right)^{1/2r!}$, see \cite{Fo}. It is obvious that $|\Delta_\lambda \xi| = \lambda |\xi|$ for $\la>0$. One defines a non-isotropic gauge in the group $\bG$ by letting $|g| = |\xi|$ for $g = \exp \xi$. Clearly, $|\cdot|\in C^\infty(\bG\setminus\{e\})$, and moreover $|\delta_\la g| = \la |g|$ for every $g\in \bG$ and $\la>0$. The pseudodistance generated by such gauge is equivalent to the intrinsic distance \eqref{d} on $\bG$, i.e., there exists a universal constant $c_1>0$ such that for every $g, g'\in \bG$ $$c_1 |(g')^{-1} \circ g| \le d(g,g') \le c_1^{-1} |(g')^{-1} \circ g|,$$ see \cite{Fo}. 
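As an illustration of these notions (not needed in what follows), in a group of step $r=2$ one has $2r!=4$, so that the gauge takes the form $|\xi| = \left(||\xi_1||^{4} + ||\xi_2||^{2}\right)^{1/4}$, while the homogeneous dimension is $Q = m_1 + 2m_2$; for the first Heisenberg group this gives $Q=4$, whereas the topological dimension is $N=3$.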
Given a orthonormal basis $\{e_1,...,e_m\}$ of the horizontal layer $\bg_1$ one associates corresponding left-invariant $C^\infty$ vector fields on $\bG$ by the formula $X_i(g) = (L_g)_\star(e_i)$, $i=1,...,m$, where $(L_g)_\star$ indicates the differential of $L_g$. We note explicitly that, given a smooth function $u$ on $\bG$, the derivative of $u$ in $g\in \bG$ along the vector field $X_i$ is given by the Lie formula \begin{equation}\label{lie} X_i u(g) = \frac{d}{ds} u(g \exp s e_i)\big|_{s=0}. \end{equation} \subsection{Stratified mean-value formula}\label{SS:stratified} Using the Baker-Campbell-Hausdorff formula \eqref{BCH} we can express \eqref{lie} in the logarithmic coordinates, obtaining the following representation that will be useful subsequently in this paper, see \cite[Prop. (1.26)]{FS} or also \cite[Remark 1.4.6]{BLU}. As previously mentioned, the layer $\bg_j$, $j=1,...,r,$ in the stratification of $\bg$ is assigned the formal degree $j$. Correspondingly, each homogeneous monomial $\xi_1^{\alpha_1} \xi_2^{\alpha_2}...\xi_r^{\alpha_r}$, with multi-indices $\alpha_j = (\alpha_{j,1},...,\alpha_{j,m_j}),\ j=1,...,r,$ is said to have \emph{weighted degree} $k$ if \[ \sum_{j=1}^r j (\sum_{s=1}^{m_j} \alpha_{j,s}) = k. \] Then for each $i = 1,...,m$ we have \begin{align}\label{Xi} X_i & = \frac{\partial }{\partial{z_i}} + \sum_{j=2}^{r}\sum_{s=1}^{m_j} b^s_{j,i}(z_1,...,\sigma_{{j-1},m_{(j-1)}}) \frac{\partial }{\partial{\sigma_{j,s}}} \\ & = \frac{\partial }{\partial{z_i}} + \sum_{j=2}^{r}\sum_{s=1}^{m_j} b^s_{j,i}(\xi_1,...,\xi_{j-1}) \frac{\partial }{\partial{\sigma_{j,s}}}, \notag \end{align} where each $b^s_{j,i}$ is a homogeneous polynomial of weighted degree $j-1$. Since we assume that $\bG$ is endowed with a left-invariant Riemannian metric with respect to which the vector fields $\{X_1,...,X_m\}$ are orthonormal, then given a smooth function $u$ on $\bG$ we denote by $\nh u = \sum_{i=1}^m X_i u X_i$ its horizontal gradient, and denote $|\nh u|^2 = \sum_{i=1}^m (X_i u)^2$. The quasi-metric open ball centered at $g$ and with radius $r>0$ with respect to the non-isotropic gauge $|\cdot|$ will be denoted by $B(g,r)$. We will need the following special case of the stratified Taylor inequality, see \cite[Theor. 1.42]{FS}. \begin{proposition}\label{P:taylor} Let $f\in C^1(\bG)$. There exist universal constants $C, b>0$ such that for every $g\in \bG$ and $r>0$ one has for $g'\in B(g,r)$ \[ |f(g') - f(g) - \langle\nh f(g),z'-z\rangle| \leq C r \underset{g''\in B(g, b r)}{\sup} |\nh f(g'') - \nh f(g)|. \] \end{proposition} \subsection{The heat kernel}\label{SS:heat} The horizontal Laplacian $\mathscr L$ relative to $\{e_1,...,e_m\}$ is defined as in \eqref{L}, and we denote by $P_t f(g) = e^{-t \mathscr L} f(g) = \int_{\bG} p(g,g',t) f(g') dg'$ the corresponding heat semigroup constructed by Folland in \cite{Fo}. Since by \eqref{Xi} we have $X_i^\star = - X_i$, $i=1,...,m$, the heat kernel is symmetric $p(g,g',t)=p(g',g,t)$. Furthermore, it satisfies the following properties. Hereafter, we denote with $e\in \bG$ the identity element. \begin{proposition}\label{P:prop} For every $g, g', g''\in \bG$ and $t>0$, one has \begin{itemize} \item[(i)] $p(g,g',t)=p(g''\circ g,g''\circ g',t)$; \item[(ii)] $p(g,e,t)=t^{-\frac{Q}{2}}p(\delta_{1/\sqrt{t}}g,e,1)$; \item[(iii)] $P_t 1(g) = \int_\bG p(g,g',t) dg'=1$. \end{itemize} \end{proposition} In addition, one has the following basic Gaussian estimates, see \cite{JS, VSC}. Such estimates play a ubiquitous role in the present work. 
\begin{proposition}\label{P:gaussian} There exist universal constants $\alpha, \beta>0$ and $C>1$ such that for every $g, g' \in \bG$, $t > 0$, and $j\in\{1,\ldots,m\}$ \begin{equation}\label{gauss0} \frac{C^{-1}}{t^{\frac Q2}} \exp \bigg(-\alpha\frac{|(g')^{-1}\circ g|^2}{t}\bigg)\leq p(g,g',t) \leq \frac{C}{t^{\frac Q2 }} \exp \bigg(-\beta\frac{ |(g')^{-1}\circ g|^2}{t}\bigg), \end{equation} \begin{equation}\label{gauss1} \left|X_{j}p(g,g',t)\right|\ \leq\ \frac{C}{t^{\frac{Q+1}{2}}} \exp \bigg(-\beta\frac{ |(g')^{-1}\circ g|^2}{t}\bigg), \end{equation} \begin{equation}\label{gauss2} \left|X^2_{j}p(g,g',t)\right| + \left|\partial_t p(g,g',t)\right|\ \leq\ \frac{C}{t^{\frac Q2 +1}} \exp \bigg(-\beta\frac{ |(g')^{-1}\circ g|^2}{t}\bigg). \end{equation} \end{proposition} We next introduce the relevant functional spaces for the present work. If $1\le p<\infty$, the Folland-Stein Sobolev space of order one is $W^{1,p}(\bG) = \{f\in L^p(\bG)\mid X_i f\in L^p(\bG), i=1,...,m\}$. Endowed with the norm \[ ||f||_{W^{1,p}(\bG)} = ||f||_{L^p(\bG)} + ||\nh f||_{L^p(\bG)} \] this is a Banach space, which is reflexive when $p>1$. Such latter property will be used in Lemma \ref{L:inf} below. We will need the following approximation property \begin{equation}\label{dense} C^\infty_0(\bG)\ \text{is dense in}\ W^{1,p}(\bG)\quad\ \ \ \ \ 1\le p<\infty, \end{equation} see \cite[Theor. 4.5]{Fo} and also \cite[Theor. A.2]{GN}. \subsection{Bounded variation and coarea}\label{SS:bv} In connection with Theorem \ref{T:p1} we need to recall the space of functions with horizontal bounded variation introduced in \cite{CDG}. Let $\mathscr F = \{\zeta = (\zeta_1,...,\zeta_m)\in C^1_0(\bG,\Rm)\mid ||\zeta||_\infty = \underset{g\in \bG}{\sup} (\sum_{i=1}^m \zeta_i(g)^2)^{1/2} \le 1\}$. Then $BV(\bG) = \{f\in L^1(\bG)\mid \operatorname{Var}_\bG(f)<\infty\}$, where \begin{equation}\label{var} \operatorname{Var}_\bG(f) = \underset{\zeta\in \mathscr F}{\sup} \int_{\bG} f \sum_{i=1}^m X_i \zeta_i dg. \end{equation} Endowed with the norm $||f||_{L^1(\bG)} + \operatorname{Var}_\bG(f)$ the space $BV(\bG)$ is a Banach space. When $f\in W^{1,1}(\bG)$, then $f\in BV(\bG)$ and one has $\operatorname{Var}_\bG(f) = \int_{\bG} |\nh f| dg$, but the inclusion is strict. In some formulas we indicate with $d\operatorname{Var}_\bG(f)$ the differential of horizontal total variation associated with a function $f\in BV(\bG)$, so that $\operatorname{Var}_\bG(f) = \int_\bG d\operatorname{Var}_\bG(f)(g)$. It is well-known that there exists a $\operatorname{Var}_\bG(f)$-measurable function $\sigma_f:\bG\to \Rm$, such that $||\sigma_f(g)|| = 1$ for $\operatorname{Var}_\bG(f)$-a.e. $g\in \bG$, and for which one has for any $\zeta\in \mathscr F$ \begin{equation}\label{sigmaf} \int_{\bG} f \sum_{i=1}^m X_i \zeta_i dg = \int_\bG \langle\sigma_f(g),\zeta(g)\rangle d\operatorname{Var}_\bG(f)(g). \end{equation} A measurable set $E\subset \bG$ is said to have finite $\bG$-perimeter if $1_E\in BV(\bG)$, and in this case we denote $P_\bG(E) = \operatorname{Var}_\bG(1_E)$. The following two basic results are special cases of respectively \cite[Theor. 1.14]{GN} and \cite[Theor. 5.2]{GN}. \begin{theorem}\label{T:dense} Given $f\in BV(\bG)$, there exists $f_k\in C^\infty_0(\bG)$ such that as $k\to \infty$ one has $f_k \longrightarrow f$ in $L^1(\bG)$, and $\int_{\bG} |\nh f_k| dg \longrightarrow \operatorname{Var}_\bG(f)$. \end{theorem} \begin{theorem}\label{T:coarea} Let $f\in BV(\bG)$. Then $P_\bG(E^f_\tau)<\infty$ for a.e. 
$\tau \in \R$ and \begin{equation}\label{coarea} \operatorname{Var}_\bG(f) = \int_\R P_\bG(E^f_\tau) d\tau, \end{equation} where we have let $E^f_\tau=\{g\in\bG\mid f(g)>\tau\}$. Vice versa, if $f\in L^1(\bG)$ and the right-hand side of \eqref{coarea} is finite, then $f\in BV(\bG)$. \end{theorem} Let $E\subset \bG$ be a set having finite perimeter. We denote by $\p^\star E$ its reduced boundary, see \cite[Def. 2.8]{BMP}. Given a point $g\in \p^\star E$ we indicate by $\nu_E(g)$ the measure-theoretic horizontal normal at $g$. Also, given a point $g_0\in \bG$ we let $E_{g_0,r} = \delta_{1/r} L_{g_0^{-1}}(E)$. The vertical planes and half-spaces associated with a unit vector $\nu\in \Rm$ are respectively defined by \[ T_\bG(\nu) = \{(z,\sigma)\in \bG\mid \langle z,\nu\rangle = 0\},\ \ \ \ S_\bG^+(\nu) = \{(z,\sigma)\in \bG\mid \langle z,\nu\rangle \ge 0\}. \] We denote by $T_\nu$ the $(m-1)$-dimensional subspace of $\R^m$ defined as $\left(\mathrm{span}\{\nu\}\right)^{\perp}$, and indicate with $P_\nu:\Rm\to \Rm$ the orthogonal projection onto $T_\nu$, i.e. $P_\nu z=z-\langle z,\nu\rangle \nu$. Clearly, its range is $R(P_\nu) =T_\nu$, and we have $P_\nu^2=P_\nu=P_\nu^*$. One has \begin{equation}\label{Tnu} T_\bG(\nu)=T_\nu\times\R^{N-m}=\{(\hat z,\sigma)\in \bG\mid \hat z\in\R^{m},\ \sigma\in \R^{N-m}, \mbox{ such that } P_\nu \hat z=\hat z\}. \end{equation} \begin{definition}\label{D:B} We say that a Carnot group $\bG$ satisfies the property \emph{(B)} if for every set of finite perimeter $E\subset \bG$, and for every $g_0\in \p^\star E$, one has in $L^1_{loc}(\bG)$ \[ 1_{E_{g_0,r}}\ \longrightarrow\ 1_{S_\bG^+(\nu_E(g_0))}\quad\mbox{ as }r\to 0^+. \] \end{definition} It is presently not known whether every Carnot group satisfies the property (B), but in \cite[Theor. 3.1]{FSS} it was shown that groups of step two possess this property. The paper \cite{Ma} contains a class of groups, which includes Carnot groups of step two, that satisfy the property (B) (see in this respect \cite[Theor. 4.12]{Ma}). In the proof of Theorem \ref{T:p1} we will need the following result, which is \cite[Theor. 2.14]{BMP}. We stress that although in that paper the authors assume that $\bG$ has step 2, as they themselves emphasise, their result continues to hold in any Carnot group with the property (B). \begin{theorem}\label{T:bmpB} Let $\bG$ be a Carnot group satisfying property \emph{(B)}. If $f\in BV(\bG)$ one has \[ \underset{t\to 0^+}{\lim} \frac{1}{\sqrt t} \int_{\bG} P_t(|f - f(g)|)(g) dg = 4 \int_{\bG} \int_{T_\bG(\sigma_f(g))} p(g',e,1) dg' d \operatorname{Var}_\bG(f)(g), \] where $\sigma_f$ is the function defined by \eqref{sigmaf} above. \end{theorem} \section{Preparatory results}\label{S:prep} This section contains some preparatory results that will be used in the main body of the paper. \subsection{A basic preliminary estimate}\label{SS:basic} We begin with a useful consequence of the property \eqref{dense} and of the upper Gaussian estimate in \eqref{gauss0}. \begin{lemma}\label{L:diffquot} Let $1\le p<\infty$. There exists a universal constant $C_p>0$ such that, for all $f\in W^{1,p}(\bG)$ and $t>0$, one has \begin{equation}\label{suppose} t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg \le C_p \int_{\bG} |\nh f(g)|^p dg. \end{equation} \end{lemma} \begin{proof} Suppose first that $f\in C^\infty_0(\bG)$. For any given $g, g'\in \bG$ we write \[ |f(g\circ g')- f(g)| = \left|\int_0^1 \frac{d}{d\la} f(g\circ \delta_\la g') d\la\right|, \] where $\delta_\la$ are the group anisotropic dilations \eqref{dilG}.
Recalling the notion of Pansu differential $Df(g)(g')$ in \cite{P}, when the target Carnot group is $\bG' = (\R,+)$ one has \[ Df(g)(g') = \lim_{\lambda\to 0^+} \frac{f(g \circ \delta_\lambda g') - f(g)}{\lambda} = \frac{d}{d\la} f(g \circ \delta_\lambda g')\big|_{\la=0}, \] with the limit being locally uniform in $g'\in \bG$. By the Baker-Campbell-Hausdorff formula one finds that when $f$ is smooth then \[ Df(g)(g') = \langle\nh f(g),z' \rangle. \] Substituting in the above equation, integrating in $g\in \bG$, and using the invariance of Lebesgue measure on $\bG$ with respect to right-translations, after changing variable $g\to g'' = g \circ \delta_\lambda g'$, we easily find \begin{equation}\label{mezzo} \int_{\bG} |f(g\circ g')- f(g)|^p dg \le ||z'||^p \int_{\bG} |\nh f(g'')|^p dg''. \end{equation} Multiplying both sides of \eqref{mezzo} by $p(e,g',t)$, and integrating with respect to $g'$, we obtain \begin{align*} &\int_{\bG} p(e,g',t)||z'||^p dg' \int_{\bG} |\nh f(g'')|^p dg''\\ &\geq \int_{\bG}\int_{\bG} p(e,g',t)|f(g\circ g')- f(g)|^p dg dg' =\int_{\bG}\int_{\bG} p(e,g^{-1}\circ g''',t)|f(g''')- f(g)|^p dg''' dg \\ &=\int_{\bG}\int_{\bG} p(g,g''',t)|f(g''')- f(g)|^p dg''' dg, \end{align*} where in the last equality we have used (i) in Proposition \ref{P:prop}. Exploiting also (ii) in Proposition \ref{P:prop} and \eqref{gauss0}, we find $$ t^{-\frac{p}{2}}\int_{\bG} p(e,g',t)||z'||^p dg'= \int_{\bG} p(g,e,1)||z||^p dg \leq C \int_{\bG} e^{-\beta |g|^2 }||z||^p dg =:C_p <\infty. $$ The previous two inequalities yield \begin{align*} t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg &= t^{-\frac{p}{2}} \int_{\bG}\int_{\bG} p(g,g''',t)|f(g''')- f(g)|^p dg''' dg\\ &\leq C_p \int_{\bG} |\nh f(g'')|^p dg''. \end{align*} This shows \eqref{suppose} for $f\in C^\infty_0(\bG)$. Appealing to the density result \eqref{dense} it is easy to see that \eqref{suppose} continues to be valid for functions $f\in W^{1,p}(\bG)$. \end{proof} \subsection{Heat semigroup estimates for BV functions}\label{SS:heatBV} The following basic result combines the case $p=1$ of Lemma \ref{L:diffquot}, with Theorems \ref{T:dense} and \ref{T:coarea}. It displays the typical character of the geometric Sobolev embedding. Whatever constant works for \eqref{minchietto} below, that same constant works for \eqref{minchiettof}, and vice-versa. \begin{proposition}\label{P:minchietto} There exists a universal constant $C_1>0$ such that if $E\subset \bG$ has finite $\bG$-perimeter, then for every $t>0$ one has\ \footnote{A different proof of \eqref{minchietto} is given in \cite[Prop. 4.2]{BMP}.} \begin{equation}\label{minchietto} \frac{1}{\sqrt t} ||P_t\In - \In||_{L^1(\bG)} \le C_1\ P_\bG(E). \end{equation} Furthermore, if $f\in BV(\bG)$, then for every $t>0$ one has \begin{equation}\label{minchiettof} \frac 1{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg \le C_1\ \operatorname{Var}_\bG(f). \end{equation} \end{proposition} \begin{proof} In view of Theorem \ref{T:dense} there exists a sequence $f_k\in C^\infty_0(\bG)$ such that as $k\to \infty$ one has $f_k \longrightarrow 1_E$ in $L^1(\bG)$, and $\int_{\bG} |\nh f_k| dg \longrightarrow \operatorname{Var}_\bG(1_E)= P_\bG(E)$. By the case $p=1$ of Lemma \ref{L:diffquot} one has \[ \frac{1}{\sqrt t} \int_{\bG} P_t\left(|f_k - f_k(g)|\right)(g) dg \le C_1 \int_{\bG} |\nh f_k(g)| dg. \] Since, by possibly passing to a subsequence, we can always assume that $f_k\to 1_E$ a.e. in $\bG$, we have $P_t(|f_k - f_k(g)|)(g)\longrightarrow P_t(|1_E - 1_E(g)|)(g)$ for a.e. 
$g\in \bG$. This observation and the theorem of Fatou give \begin{align*} & \frac{1}{\sqrt t} \int_{\bG} P_t(|1_E - 1_E(g)|)(g) dg \le \underset{k\to \infty}{\liminf} \frac{1}{\sqrt t}\int_{\bG} P_t\left(|f_k - f_k(g)|\right)(g) dg \\ & \le C_1\ \underset{k\to \infty}{\liminf} \int_{\bG} |\nh f_k(g)| dg = C_1\ P_\bG(E). \end{align*} To finish the proof of \eqref{minchietto}, all we are left with is observing that from $P_t1=1$ (see (iii) in Proposition \ref{P:prop}) one has \begin{equation}\label{unoE} ||P_t\In - \In||_{L^1(\bG)} = \int_{\bG}{P_t(|\In-\In(g)|)(g)dg}. \end{equation} To prove \eqref{minchiettof}, we observe that if $f\in L^1(\bG)$, then for a.e. $g, g'\in \bG$ we trivially have \begin{equation}\label{riesz} |f(g')-f(g)|=\int_\R \left|1_{E^f_\tau} (g')-1_{E^f_\tau} (g)\right| d\tau. \end{equation} In view of \eqref{riesz}, Fubini's theorem allows us to write \begin{equation}\label{slices} \int_{\bG} P_t(|f-f(g)|)(g) dg = \int_\R \int_{\bG} P_t(|1_{E^f_\tau}-1_{E^f_\tau}(g)|)(g) dg d\tau. \end{equation} We now note that if $f\in BV(\bG)$, then by Theorem \ref{T:coarea} we know that $P_\bG(E^f_\tau)<\infty$ for a.e. $\tau \in \R$. With this in mind, if we combine \eqref{slices} with \eqref{unoE}, we obtain \begin{align*} & \frac{1}{\sqrt t} \int_{\bG} P_t(|f - f(g)|)(g) dg = \frac{1}{\sqrt t} \int_\R \int_{\bG} P_t(|1_{E^f_\tau}-1_{E^f_\tau}(g)|)(g) dg d\tau \\ & = \frac{1}{\sqrt t} \int_\R ||P_t1_{E^f_\tau} - 1_{E^f_\tau}||_{L^1(\bG)} d\tau \le C_1\ \int_\R P_\bG(E^f_\tau) d\tau = C_1\ \operatorname{Var}_\bG(f), \end{align*} where in the last two passages we have first used \eqref{minchietto}, and then \eqref{coarea} in Theorem \ref{T:coarea}. The latter estimate establishes \eqref{minchiettof}. \end{proof} \subsection{Fractional Sobolev spaces and heat semigroup}\label{SS:fracheat} In this subsection we establish some basic properties of the fractional Sobolev spaces $\Bps$ introduced in Definition \ref{D:besov}. We refer the interested reader to \cite{GTfi, GTiso, BGT} for a different but related setting. \begin{lemma}\label{L:inclus} Fix $1\leq p<\infty$. For any $f\in L^p$ we have \begin{equation}\label{inbesov} \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg <\infty \quad \Longrightarrow \quad f\in \Bps \mbox{ for every}\ s\in (0,1). \end{equation} Moreover, the following holds \begin{equation}\label{chain} W^{1,p}(\bG) \subset \Bps \subset\mathfrak B_{\sigma,p}(\bG)\,\,\, \mbox{ for every }\,0<\sigma \leq s <1, \end{equation} where the inclusions denote continuous embeddings with respect to the relevant topologies. \end{lemma} \begin{proof} Fix $1\leq p<\infty$ and $0<s<1$, and consider any $f\in L^p$ and $\ve>0$. Using $|a-b|^p\leq 2^{p-1} (a^p+b^p)$ together with $P_t1=P^*_t 1=1$, we have \begin{equation}\label{Ndopo1} \int_{\ve}^\infty \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \leq 2^p \|f\|^p_p \int_{\ve}^\infty \frac{dt}{t^{\frac{s p}2 +1}}=\frac{2^{p+1}}{sp}\ve^{-\frac{s p}2}\|f\|^p_p. \end{equation} If we assume that $$L \overset{def}{=} \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg <\infty,$$ then there exists $\ve_0>0$ such that \[ \underset{\tau\in (0,\ve_0)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \leq L+1.
\] This yields \begin{align*} & \int_0^{\ve_0} \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \\ &\leq \left(\underset{\tau\in (0,\ve_0)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \right) \int_0^{\ve_0} \frac{dt}{t^{\frac{p}{2}(s-1) +1}}\leq (L+1) \frac{2\ve_0^{\frac{p}{2}(1-s)}}{p(1 -s)}. \end{align*} Combining the previous estimate with \eqref{Ndopo1} (with $\ve=\ve_0$) we deduce $$ \mathscr N_{s,p}(f)^p=\int_0^{\infty} \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \leq (L+1) \frac{2\ve_0^{\frac{p}{2}(1-s)}}{p(1 -s)} +\frac{2^{p+1}}{sp}\ve_0^{-\frac{s p}2}\|f\|^p_p $$ which shows the validity of \eqref{inbesov}. On the other hand, if $f\in W^{1,p}(\bG)$ we can exploit the uniform bound in \eqref{suppose} and, by arguing in the same way as before (using \eqref{Ndopo1} with $\ve=1$), obtain \begin{equation}\label{wunoinbesov} \mathscr N_{s,p}(f)^p \leq \frac{2C_p}{p(1 -s)}\|\nh f\|^p_p +\frac{2^{p+1}}{sp}\|f\|^p_p\qquad \forall\, f\in W^{1,p}(\bG). \end{equation} Finally, if $0<\sigma \leq s$ we can use $t^{-\frac{\sigma p}2 -1}\leq t^{-\frac{s p}2 -1}$ for $t\in (0,1)$ and again \eqref{Ndopo1} with $\ve=1$ in order to infer \begin{equation}\label{besovinbesov} \mathscr N_{\sigma,p}(f)^p \leq \mathscr N_{s,p}(f)^p +\frac{2^{p+1}}{\sigma p}\|f\|^p_p\qquad \forall\, f\in \Bps. \end{equation} The estimates \eqref{wunoinbesov} and \eqref{besovinbesov} complete the proof of \eqref{chain}. \end{proof} The next density result will be used in the proof of Theorem \ref{T:MS}. \begin{lemma}\label{L:dens} For every $0<s<1$ and $1\le p<\infty$, we have $$\overline{C^\infty_0}^{\Bps}=\Bps.$$ \end{lemma} \begin{proof} Following a standard pattern one can first show that \begin{equation}\label{firstdens} \overline{C^\infty\cap \Bps}^{\Bps}=\Bps. \end{equation} Once \eqref{firstdens} is accomplished, one can conclude the desired density result via multiplication with a sequence of smooth cut-off functions approximating 1 in a pointwise sense. For details we refer the reader to \cite[Prop. 3.2]{BGT}, but we mention that, in the present situation, the proof of \eqref{firstdens} is easier with respect to that in \cite[Prop. 3.2]{BGT}. Here, given any $f\in \Bps$, in order to show that $\mathscr N_{s,p}(f-f_\ve)\to 0$ as $\ve\to 0^+$ we can use the group mollifiers $f_\ve = \rho_\ve \star f$ (see the proof of Lemma \ref{L:inf}, and in particular \eqref{unifeps}). \end{proof} \section{An integral decoupling property of the heat semigroup}\label{S:new} In this section we establish some basic properties of the heat semigroup $P_t$ that will provide the backbone of Theorems \ref{T:mainp} and \ref{T:p1}. Let $1\le m < N\in \mathbb N$ and denote points in $\R^N$ by $(z,\sigma)$, where $z\in \Rm$ and $\sigma \in \R^{N-m}$. Consider two decoupled parabolic operators: $L_1 - \p_t$ in $\Rm\times(0,\infty)$, and $L_2 - \p_t$ in $\R^{N-m}\times (0,\infty)$, and assume that their associated semigroups are stochastically complete. Denote by $p_m(z,t)$ and $p_{N-m}(\sigma,t)$ their respective fundamental solutions. It is a classical fact that the fundamental solution of the parabolic operator $L_1 + L_2 - \p_t$ in $\RN\times (0,\infty)$ is given by \begin{equation}\label{pd} p_N((z,\sigma),t) = p_m(z,t) \times p_{N-m}(\sigma,t). 
\end{equation} The remarkable pointwise decoupling formula \eqref{pd}, combined with the stochastic completeness of $p_{N-m}(\sigma,t)$, immediately implies the following integral decoupling property \begin{equation}\label{se} \int_{\R^{N-m}} p_N((z,\sigma),t) d\sigma = p_m(z,t). \end{equation} Let us now move to sub-Riemannian geometry and consider the ``simplest'' situation of a Carnot group $\bG$ with step $r=2$. In such a framework there exists the following famous formula for the heat kernel with pole at $(e,0)\in \bG\times\R$ that goes back to Gaveau-Hulanicki-Cygan, see e.g. \cite[Theor. 4.6]{GTpotan}, \begin{align}\label{ournucleo} & p(g,e,t) = \frac{2^{m_2}}{(4\pi t)^{\frac Q2}} \int_{\R^{m_2}} e^{-\frac{i}{t} \langle \sigma,\la\rangle} \left(\det j(\sqrt{- J(\la)^2})\right)^{1/2} \\ & \times \exp\bigg\{-\frac{1}{4t}\langle j(\sqrt{- J(\la)^2}) \cosh \sqrt{- J(\la)^2} z,z\rangle \bigg\} d\la,\notag \end{align} where, according to our agreement in Section \ref{S:prelim}, we have identified $g\in \bG$ with its logarithmic coordinates $(z,\sigma)$, where $z\in \Rm$, and $\sigma, \la\in \R^{m_2}$. Given an $m\times m$ matrix $C$ with real coefficients, we have indicated by $j(C)$ the matrix obtained by evaluating at $C$ the power series of the function $j(\tau)=\frac{\tau}{\sinh(\tau)}$. Finally, we have denoted by $J: \bg_2 \to \operatorname{End}(\bg_1)$ the Kaplan mapping defined by $$\langle J(\sigma)z,\zeta\rangle = \langle [z,\zeta],\sigma\rangle = - \langle J(\sigma)\zeta,z\rangle.$$ Clearly, $J(\sigma)^\star = - J(\sigma)$, and one has $- J(\sigma)^2\ge 0$. Formula \eqref{ournucleo} underscores how badly the sub-Riemannian heat kernel fails to have a pointwise decoupling property such as \eqref{pd} and, in general, to have any symmetry. In particular, it is not even spherically symmetric with respect to the horizontal layer $\bg_1$ of the Lie algebra. In contrast, we have the following result. \begin{theorem}[Integral decoupling property]\label{T:int} Let $\bG$ be any Carnot group. For any $z\in \R^m$, $t>0$, and $(z',\sigma')\in \bG$, we have $$ \int_{\R^{N-m}} p((z,\sigma),(z',\sigma'),t)d\sigma = (4\pi t)^{-\frac{m}{2}}e^{-\frac{||z-z'||^2}{4t}}. $$ \end{theorem} \begin{proof} We observe that in view of (i) in Proposition \ref{P:prop} one has $p((z,\sigma),(z',\sigma'),t)=p((z',\sigma')^{-1}\circ (z,\sigma),e,t)$. On the other hand, from the Baker-Campbell-Hausdorff formula \eqref{BCH} we know that (see \cite[formula (1.22)]{FS} or also p. 19 in \cite{CG}) the differential of left-translation is a lower-triangular matrix with $1$'s on the main diagonal. Therefore, if we write $(z-z',\sigma'') = (z',\sigma')^{-1}\circ (z,\sigma)$, then the change of variable $\sigma \to \sigma''$ in $\R^{N-m}$ has Jacobian determinant one. It follows that \begin{equation*} \int_{\R^{N-m}} p((z,\sigma),(z',\sigma'),t)d\sigma = \int_{\R^{N-m}} p((z-z',\sigma''),e,t)d\sigma'' = \int_{\R^{N-m}} p((z'-z,\sigma''),e,t)d\sigma''. \end{equation*} If we thus define \begin{equation}\label{defkernelh} h(z,z',t)=\int_{\R^{N-m}} p((z-z',\sigma),e,t)d\sigma, \end{equation} it is obvious that one has for every $z, z'\in\R^m$ and $t>0$ $$ h(z,z',t)=h(z',z,t)\qquad h(z,z',t)=h(z-z',0,t). $$ Moreover, in view of the Gaussian estimates \eqref{gauss0}-\eqref{gauss1}-\eqref{gauss2} it is clear that $h$ is a (smooth) positive function.
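We now show that, for every fixed $z'\in\R^m$, the function $(z,t)\mapsto h(z,z',t)$ coincides with the classical Gauss-Weierstrass kernel centred at $z'$. The strategy is the natural one: we verify in \eqref{fauno}, \eqref{fadelta} and \eqref{facaldo} below that $h$ has unit mass, that it concentrates at $z'$ as $t\to 0^+$, and that it solves the classical heat equation in the variables $(z,t)$, and we then appeal to the uniqueness of bounded solutions of the Cauchy problem.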
From property (iii) of Proposition \ref{P:prop} we have \begin{equation}\label{fauno} \int_{\R^m} h(z,z',t)dz'=\int_{\R^m}\int_{\R^{N-m}} p((z'-z,\sigma),e,t)d\sigma dz'=\int_\bG p(g,e,t) dg=1. \end{equation} Since the very definition of non-isotropic gauge yields $|(z-z',\sigma)|^2\geq ||z-z'||^2$, from \eqref{gauss0} we infer $$ 0< h(z,z',t) \leq C t^{-\frac{Q}{2}} e^{-\frac{\beta }{2} \frac{||z-z'||^2}{t} }\int_{\R^{N-m}} e^{-\frac{\beta}{2} \frac{|(z-z',\sigma)|^2}{t}}d\sigma. $$ Using the validity of $\left(\sum_{s=2}^r ||\sigma_s||^{2r!/s}\right)^{2/(2r!)} \ge c_1 \sum_{s=2}^r ||\sigma_s||^{2/s}$ for some positive $c_1$, we also have \begin{align*} & \int_{\R^{N-m}} e^{-\frac{\beta}{2} \frac{|(z-z',\sigma)|^2}{t}}d\sigma \le \int_{\R^{N-m}} e^{-\frac{\beta c_1}{2} \sum_{s=2}^r \frac{||\sigma_s||^{2/s}}{t}} d\sigma \\ & = \prod_{s=2}^r \int_{\R^{m_s}} e^{-\frac{\beta c_1}{2} \frac{||\sigma_s||^{2/s}}{t}} d\sigma_s = \prod_{s=2}^r t^{\frac{s m_s}{2}} \int_{\R^{m_s}} e^{-\frac{\beta c_1}{2} ||\xi_s||^{2/s}} d\xi_s = A\ t^{\frac{Q-m}2}, \end{align*} where in the second-to-last equality we have made the change of variable $\sigma_s = t^{\frac s2} \xi_s$ in the integral over $\R^{m_s}$, and we have then let $A = \prod_{s=2}^r \int_{\R^{m_s}} e^{-\frac{\beta c_1}{2} ||\xi_s||^{2/s}} d\xi_s > 0$. Substituting in the above estimate, we obtain for every $\delta>0$ and $z\in \Rm$ \begin{align*} & \int_{\{z'\in\R^m\,:\, ||z'-z||\geq \delta\}} h(z,z',t)dz' \le C A\ t^{-\frac{m}{2}} \int_{\{z'\in\R^m\,:\, ||z'-z||\geq \delta\}} e^{-\frac{\beta }{2} \frac{||z-z'||^2}{t}} dz' \\ & = C A \int_{\{z'\in\R^m\,:\, ||z'||\geq \frac{\delta}{\sqrt{t}}\}} e^{-\frac{\beta }{2} ||z'||^2} dz'. \end{align*} From this estimate we conclude that for every $\delta>0$ and $z\in\Rm$ \begin{equation}\label{fadelta} \underset{t \to 0^+}{\lim} \int_{\{z'\in\R^m\,:\, ||z'-z||\geq \delta\}} h(z,z',t)dz'=0. \end{equation} Finally, we claim that \begin{equation}\label{facaldo} \Delta_z h(z,z',t)-\partial_t h(z,z',t)=0\qquad\mbox{ for all }z,z'\in\R^m \mbox{ and }t>0. \end{equation} On the one hand, \eqref{gauss1}-\eqref{gauss2} allow us to differentiate under the integral sign, obtaining \begin{equation}\label{derh} \begin{cases} \partial_t h(z,z',t) =\int_{\R^{N-m}} \partial_t p((z-z',\sigma),e,t)d\sigma, \\ \partial_{z_i} h(z,z',t)=\int_{\R^{N-m}} \partial_{z_i} p((z-z',\sigma),e,t)d\sigma. \end{cases} \end{equation} On the other hand, in the notation of Section \ref{S:prelim} we obtain from \eqref{Xi} \[ \partial_{z_i} = X_i - \sum_{j=2}^{r}\sum_{s=1}^{m_j} b^s_{j,i}(z,...,\sigma_{j-1}) \partial_{\sigma_{j,s}}. \] This gives \begin{align*} \partial_{z_i} h(z,z',t) & = \int_{\R^{N-m}} X_i p((z-z',\sigma),e,t)d\sigma \\ & - \sum_{j=2}^{r}\sum_{s=1}^{m_j} \int_{\R^{N-m}} b^s_{j,i}(z,...,\sigma_{j-1}) \partial_{\sigma_{j,s}} p((z-z',\sigma),e,t) d\sigma. \end{align*} However, recalling that $b^s_{j,i}$ are homogeneous polynomials and the derivatives of $p$ have exponential decay, we now have for $j=2,...,r$ and $s=1,...,m_j$ \begin{align*} & \int_{\R^{N-m}} b^s_{j,i}(z,...,\sigma_{j-1}) \partial_{\sigma_{j,s}} p((z-z',\sigma),e,t) d\sigma \\ &=\int_{\R^{N-m-1}} b^s_{j,i}(z,...,\sigma_{j-1}) \left(\int_\R \partial_{\sigma_{j,s}} p((z-z',\sigma),e,t) d\sigma_{j,s}\right) d\hat{\sigma} \\ &=\int_{\R^{N-m-1}} b^s_{j,i}(z,...,\sigma_{j-1}) \bigg\{p((z-z',\sigma),e,t)\bigg\}^{\sigma_{j,s}=\infty}_{\sigma_{j,s}=-\infty} d\hat{\sigma}=0, \end{align*} where the last equality is again a consequence of \eqref{gauss0}.
This implies the remarkable identity $$ \partial_{z_i} h(z,z',t) = \int_{\R^{N-m}} X_i p((z-z',\sigma),e,t)d\sigma. $$ Arguing in the same way for $\partial_{z_i z_i} h$ and summing in $i\in\{1,\ldots,m\}$, we obtain \begin{equation}\label{der2h} \Delta_z h(z,z',t)=\int_{\R^{N-m}} \mathscr L p((z-z',\sigma),e,t)d\sigma, \end{equation} where $\mathscr L$ is as in \eqref{L}. Since $\left(\mathscr L -\partial_t\right) p=0$, the combination of \eqref{derh} and \eqref{der2h} shows the validity of the claim \eqref{facaldo}. Let now $\vf \in C(\R^m)\cap L^{\infty}(\R^m)$. From the crucial properties \eqref{fauno},\eqref{fadelta} and \eqref{facaldo} (and the ubiquitous Gaussian estimates \eqref{gauss0}-\eqref{gauss1}-\eqref{gauss2}) we infer in a standard fashion that the bounded function $$ u(z,t)\overset{def}{=}\int_{\R^m} h(z,z',t)\vf(z')dz' $$ is a classical solution to the Cauchy problem in $\R^m\times(0,\infty)$ \begin{equation}\label{cp} \begin{cases} \Delta_z u - \partial_t u = 0, \\ u(z,0) = \vf(z). \end{cases} \end{equation} From the uniqueness of the bounded solutions to \eqref{cp} we infer that $$ \int_{\R^m} h(z,z',t)\vf(z')dz'= \int_{\R^m}(4\pi t)^{-\frac{m}{2}}e^{-\frac{||z-z'||^2}{4t}}\vf(z')dz'. $$ Since this identity holds for arbitrary $z\in\R^m$, $t>0$, and $\vf \in C(\R^m)\cap L^{\infty}(\R^m)$, we conclude that for every $z, z'\in\R^m$ and $t>0$ $$ h(z,z',t)=(4\pi t)^{-\frac{m}{2}}e^{-\frac{||z-z'||^2}{4t}}. $$ Recalling \eqref{defkernelh}, we have thus completed the proof of the theorem. \end{proof} \subsubsection{{\bf Nihil sub sole novum (Ecclesiaste), or also ``Chi cerca trova, e chi ricerca... ritrova" (E. De Giorgi)}}\label{SS:fulvio} We thank Fulvio Ricci for kindly bringing to our attention that a different, more abstract proof of the integral decoupling formula in Theorem \ref{T:int} can be extracted from the following result of D. M\"uller in \cite[Proposition 1.1]{Mu}. Let $C_\infty(\R^+)$ be the space of continuous functions on $\R^+$ which vanish at infinity. \begin{proposition}[D. M\"uller] Let $\bG$ be a homogeneous Lie group and let $L$ be a left-invariant, positive Rockland differential operator on $\bG$. Let $\pi$ be an irreducible unitary representation of $\bG$. If $m\in C_\infty(\R^+)$, then \[ \pi(m(L)) = m(d\pi(L)). \] \end{proposition} \subsection{A sub-Riemannian Ledoux-Huisken lemma}\label{SS:LH} We next use Theorem \ref{T:int} to establish another remarkable property of the heat semigroup $P_t$. To motivate it, we consider the standard heat kernel in $\Rm$, $p(z,t) = (4\pi t)^{-\frac m2} e^{-\frac{|z|^2}{4t}}$, and let $\mathbb S^{m-1} = \{\nu\in \Rm\mid ||\nu|| = 1\}$. It is an elementary (and beautiful) exercise to show that for any $\nu\in \mathbb S^{m-1}$, $t>0$, and $1\le p<\infty$ one has \begin{equation}\label{led} \frac{1}{t^{\frac p2}} \int_{\Rm} p(z,t) |\langle \nu,z\rangle|^p dz = \frac{2 \G(p)}{\G(p/2)}. \end{equation} The reader should note the appearance in the right-hand side of \eqref{led} of the same dimensionless constant in the right-hand side of \eqref{thesispPtk}. As far as we are aware of, the identity \eqref{led} was first used by Ledoux in the case $p=1$ in his approach to the isoperimetric inequality based on the heat semigroup, see \cite{Led} and also \cite{MPPP}. Another elementary (and equally beautiful) property of the standard heat kernel in $\Rm$ is expressed by the following identity which represents a special case of a deep monotonicity formula discovered by Huisken, see \cite[Theor. 
3.1]{Hui}, \begin{equation}\label{hui} \sqrt{4\pi t} \int_{\{z\in\R^m\mid\left\langle z,\nu\right\rangle =0\}} p(z,t)\ dz = 1, \end{equation} where again $\nu\in \mathbb S^{m-1}$ is an arbitrary direction and $t>0$. We note that the hyperplane $\{z\in\R^m\mid\left\langle z,\nu\right\rangle =0\}$ is a global minimal surface in $\Rm$. In both \eqref{led} and \eqref{hui} the symmetries of the heat kernel seem to play an essential role. Quite surprisingly, the next result shows that, in fact, \eqref{led} and \eqref{hui} have a universal character, in the sense that they hold unchanged when the Gauss-Weierstrass kernel is replaced by the very complicated and non-explicit heat kernel $p(g,e,t)$ in any Carnot group! As agreed in the opening of the section, we will identify a point $g\in \bG$ with $(z,\sigma)$, where $z\in \Rm$ and $\sigma\in \R^{N-m}$. \begin{lemma}[sub-Riemannian Ledoux-Huisken lemma]\label{L:id} Let $\bG$ be a Carnot group, and let $1\le p<\infty$. For any $\nu\in \mathbb S^{m-1}$ and $t>0$ we have \begin{equation}\label{Inucostante} \frac{1}{t^{\frac p2}} \int_{\bG} p(g,e,t) |\langle \nu,z\rangle|^p dg = \frac{2 \G(p)}{\G(p/2)}, \end{equation} and \begin{equation}\label{punoint} \sqrt{4\pi t} \int_{\{z\in\R^m\mid\left\langle z,\nu\right\rangle =0\}\times \R^{N-m}} p((z,\sigma),e,t)\ dzd\sigma = 1. \end{equation} \end{lemma} \begin{proof} The proofs of \eqref{Inucostante} and \eqref{punoint} follow immediately by an application of Theorem \ref{T:int} with $(z',\sigma')=e$. To see this, if we let $g = (z,\sigma)$, then by Tonelli's theorem and Theorem \ref{T:int} we have \begin{align*} & \frac{1}{t^{\frac p2}} \int_{\bG} p((z,\sigma),e,t) |\langle \nu,z\rangle|^p dzd\sigma = \frac{1}{t^{\frac p2}} \int_{\R^m} |\langle \nu,z\rangle|^p \left(\int_{\R^{N-m}} p((z,\sigma),e,t) d\sigma\right)dz\\ &=\frac{1}{t^{\frac p2}} \int_{\R^m}|\langle \nu,z\rangle|^p (4\pi t)^{-\frac m2} e^{-\frac{||z||^2}{4t}} dz = \frac{2 \G(p)}{\G(p/2)}, \end{align*} where in the last equality we have used \eqref{led}. In a similar fashion and using \eqref{hui}, we have \begin{align*} &\int_{\{z\in\R^m\mid\left\langle z,\nu\right\rangle =0\}\times \R^{N-m}} p((z,\sigma),e,t)dzd\sigma\\ &=\int_{\{z\in\R^m\mid\left\langle z,\nu\right\rangle =0\}} (4\pi t)^{-\frac m2}e^{-\frac{||z||^2}{4t}} dz =\frac{1}{\sqrt{4\pi t}}. \end{align*}\end{proof} In connection with \eqref{punoint} it is worth remarking here that in the Heisenberg group $\mathbb H^1$ the vertical planes $\{z\in\R^2\mid\left\langle z,\nu\right\rangle =0\}\times \R$ are the only stable $H$-minimal entire graphs with empty characteristic locus, see \cite[Theor. 1.8]{DGNP}. \section{Proof of Theorems \ref{T:mainp} and \ref{T:p1}}\label{S:proofs} This section is devoted to proving Theorems \ref{T:mainp} and \ref{T:p1}. In order to establish Theorem \ref{T:mainp} we are going to show in the next two lemmas a ``limsup'' and a ``liminf'' inequality in the spirit of the arguments in \cite[Sec. 2]{B}, where the case of a spherically symmetric approximate identity $\{\rho_\ve\}$ is treated. One should also see the generalisation to Carnot groups in \cite[Sec. 3]{Bar}. As the reader will see, in our situation the accomplishment of this task is possible thanks to the remarkable identity \eqref{Inucostante} in Lemma \ref{L:id}, which ultimately hinges on Theorem \ref{T:int}. \begin{lemma}\label{L:sup} Let $f\in W^{1,p}(\bG)$ with $1\leq p<\infty$.
Then \begin{equation}\label{charlimsup} \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg\leq \frac{2 \G(p)}{\G(p/2)} \|\nabla_H f\|^p_p. \end{equation} \end{lemma} \begin{proof} We begin by proving \eqref{charlimsup} for functions in $C^\infty_0(\bG)$. For a fixed $f\in C_0^\infty(\bG)$ we set $K=\overline \Om$, where $\Om$ is a bounded open set such that $\operatorname{supp} f\subset \Om$. Since $\left| (g')^{-1}\circ g\right|\ge \gamma>0$ for every $g\in \operatorname{supp} f$ and $g'\in \bG\setminus K$, the estimate \eqref{gauss0} in Proposition \ref{P:gaussian} easily implies \begin{align*} & \underset{t \to 0^+}{\lim}\ t^{-\frac{p}{2}}\ \int_{\bG\smallsetminus K}\int_{\operatorname{supp} f} p(g,g',t)|f(g') - f(g)|^p dg'dg \\ &=\underset{t \to 0^+}{\lim}\ t^{-\frac{p}{2}}\ \int_{\operatorname{supp} f} \int_{\bG\smallsetminus K} p(g,g',t)|f(g') - f(g)|^p dg'dg=0. \end{align*} Since this gives $$ \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg=\underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{K}\int_{K} p(g,g',t)|f(g') - f(g)|^p dg'dg, $$ we are thus left with proving that \begin{equation}\label{claimlimsup} \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}} \int_{K}\int_{K} p(g,g',t)|f(g') - f(g)|^p dg'dg \leq \frac{2 \G(p)}{\G(p/2)} \|\nabla_H f\|^p_p. \end{equation} From the uniform continuity and boundedness of $\nh f$, it is clear from Proposition \ref{P:taylor} that for every $g, g'\in K$ one has $$f(g') - f(g) = \langle\nh f(g),z'-z\rangle + \omega(g,g')$$ where \begin{equation}\label{opiccolo} |\omega(g,g')| = o(d(g,g'))\mbox{ as }d(g,g')\to 0\,\,\mbox{ and }\,\, \frac{|\omega(g,g')|}{d(g,g')} \mbox{ is bounded uniformly in }K. \end{equation} Hence, for any $\theta>0$ there exists $c_\theta>0$ such that $$ |f(g') - f(g)|^p\leq (1+\theta) \left| \left\langle \nabla_{H} f(g), z'-z \right\rangle \right|^p + c_\theta\, |\omega(g,g')|^p. $$ This gives \begin{align}\label{daqui} &t^{-\frac{p}{2}}\int_{K}\int_{K} p(g,g',t)|f(g') - f(g)|^p dg'dg\\ &\leq (1+\theta) t^{-\frac{p}{2}}\int_{K}\int_{K} p(g,g',t)\left| \left\langle \nabla_{H} f(g), z'-z \right\rangle \right|^p dg'dg +\notag\\ &+ c_\theta\, t^{-\frac{p}{2}}\int_{K}\int_{K} p(g,g',t) |\omega(g,g')|^pdg'dg = I(t) + II(t).\notag \end{align} Keeping in mind that by (i) in Proposition \ref{P:prop} we have $p(g,g',t) = p(g',g,t) = p(g^{-1}\circ g',e,t)$, if we now use the change of variable $g'' = g^{-1}\circ g'$, we obtain \begin{align*} I(t) & = (1+\theta)t^{-\frac{p}{2}}\int_{K}\int_{K} p(g,g',t)|\langle \nabla_{H} f(g), z'-z\rangle|^p dg' dg \\ & \le (1+\theta)t^{-\frac{p}{2}}\int_{\bG}\int_{\bG} p(g^{-1}\circ g',e,t)|\langle \nabla_{H} f(g), z'-z\rangle|^p dg' dg \\ & = (1+\theta)t^{-\frac{p}{2}}\int_{\{g\in \bG\,:\, |\nabla_{H} f(g)|\not=0\}}\int_{\bG} p(g'',e,t)|\langle \nabla_{H} f(g), z''\rangle|^p dg'' dg\\ &= (1+\theta)\frac{2 \G(p)}{\G(p/2)} \int_{\bG}|\nabla_{H} f(g)|^p dg, \end{align*} where in the second step we have enlarged the domain of integration (using that the integrand is nonnegative and that $\nabla_{H} f$ vanishes outside $K$), and in the last equality we have used \eqref{Inucostante} in Lemma \ref{L:id}.
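We note in passing that, since $\G(1)=1$ and $\G(1/2)=\sqrt{\pi}$, for $p=1$ the dimensionless constant $\frac{2 \G(p)}{\G(p/2)}$ reduces to $\frac{2}{\sqrt{\pi}}$, which is the constant appearing in the proof of Theorem \ref{T:p1} below.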
On the other hand, if we now fix a sufficiently small $\delta>0$, we have from \eqref{opiccolo} \begin{align*} &II(t) \le c_\theta\, t^{-\frac{p}{2}}\int_{K}\int_{B(g,\delta)} p(g,g',t) |o(d(g,g'))|^p dg'dg +\\ &+ \bar{C} t^{-\frac{p}{2}}\int_{K}\int_{K\setminus B(g,\delta)} p(g,g',t) \left| (g')^{-1}\circ g\right|^p dg'dg\\ &\leq c_\theta\,|\omega(t)|\int_{K}\int_{\bG} p(e,g'',1)|g''|^p dg'' dg + \bar{C} \int_{K}\int_{\bG\smallsetminus B\left(e,\frac{\delta}{\sqrt{t}}\right)} p(e,g'',1)|g''|^p dg'' dg \\ & \leq c_\theta\,|\omega(t)| |K| C\int_{\bG} e^{-\beta |g''|^2}|g''|^p dg'' + \bar{C} |K| C e^{-\frac{\beta\delta^2}{2t}} \int_{\bG} e^{-\frac{1}{2}\beta |g''|^2}|g''|^p dg'', \end{align*} where $|\omega(t)|\to 0$ as $t\to 0^+$, $\bar{C}$ is a suitable positive constant, and in the relevant integrals we have used Propositions \ref{P:prop} and \ref{P:gaussian}. The analysis of $I(t)$ and $II(t)$ thus implies \[ I(t) \le (1+\theta) \frac{2 \G(p)}{\G(p/2)} \int_{\bG}|\nabla_{H} f(g)|^p dg,\ \ \ \ \ \ \underset{t \to 0^+}{\lim}\ II(t) =0. \] Substituting these relations in \eqref{daqui} we conclude that $$\underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}} \int_{K}\int_{K} p(g,g',t)|f(g') - f(g)|^p dg'dg \leq (1+\theta) \frac{2 \G(p)}{\G(p/2)} \int_{\bG}|\nabla_{H} f(g)|^p dg.$$ The arbitrariness of $\theta>0$ leads to the validity of \eqref{claimlimsup} (and therefore of \eqref{charlimsup}) for $f\in C^\infty_0(\bG)$. In order to complete the proof we are going to use the density property \eqref{dense}. Given $f\in W^{1,p}(\bG)$, there exists $f_k\in C_0^\infty(\bG)$ such that $||f-f_k||_{W^{1,p}(\bG)} \to 0$. By Minkowski's inequality, applied in $L^p$ with respect to the measure $p(g,g',t) dg' dg$, for every $t>0$ we have \begin{align*} &\left(t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg\right)^{\frac 1p} \le \left(t^{-\frac{p}{2}} \int_{\bG} P_t\left(|(f-f_k) - (f(g)-f_k(g))|^p\right)(g) dg\right)^{\frac 1p} \\ & + \left(t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f_k - f_k(g)|^p\right)(g) dg\right)^{\frac 1p}. \end{align*} Taking the $\limsup$ as $t\to 0^+$, and applying \eqref{suppose} in Lemma \ref{L:diffquot} to $f-f_k\in W^{1,p}(\bG)$ and \eqref{charlimsup} to $f_k\in C^\infty_0(\bG)$, we obtain \[ \underset{t \to 0^+}{\limsup}\ \left(t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg\right)^{\frac 1p} \le C_p^{\frac 1p}\, ||\nabla_H f_k - \nabla_H f||_{L^p(\bG)} + \left(\frac{2 \G(p)}{\G(p/2)}\right)^{\frac 1p} ||\nabla_H f_k||_{L^p(\bG)}. \] Letting $k\to \infty$, and raising to the power $p$, we conclude that \eqref{charlimsup} does hold also for $f\in W^{1,p}(\bG)$. This completes the proof. \end{proof} The next lemma provides the converse implication of Lemma \ref{L:sup}, and it requires $p>1$. \begin{lemma}\label{L:inf} Let $p>1$ and $f\in L^p(\bG)$. Suppose that $$\underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg <\infty.$$ Then $f\in W^{1,p}(\bG)$, and one has \begin{equation}\label{charliminf} \frac{2 \G(p)}{\G(p/2)} \|\nabla_H f\|^p_p\leq \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg. \end{equation} \end{lemma} \begin{proof} Fix a nonnegative function $\rho\in C_0^\infty(\bG)$ such that $\int_{\bG}\rho(g) dg =1$, and denote by $$ \rho_\ve (g)=\ve^{-Q} \rho(\delta_{\ve^{-1}}(g)), $$ the approximate identity associated with $\rho$. Let $1\le p<\infty$ (the reader should note that we are not excluding the case $p=1$ at this moment). For any $f\in L^p(\bG)$ we also let $$ f_\ve (g)= \rho_\ve \star f(g) = \int_{\bG} \rho_\ve(g') f( (g')^{-1}\circ g) dg' = \int_{\bG} \rho_\ve(g\circ (g')^{-1}) f(g') dg'. $$ It is well-known, see e.g. \cite[Prop. 1.20]{FS} and \cite[Theor. 2.3]{Ri}, that $f_\ve \in L^p(\bG)\cap C^{\infty}(\bG)$ and $f_\ve \to f$ in $L^p(\bG)$ as $\ve\to 0^+$.
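We also note, as a small fact used repeatedly below, that $\int_{\bG}\rho_\ve(g) dg = 1$ for every $\ve>0$. This follows from the scaling of the Lebesgue (Haar) measure under the anisotropic dilations, $d(\delta_\ve g)=\ve^{Q} dg$; indeed, the change of variable $g=\delta_\ve g'$ gives \[ \int_{\bG}\rho_\ve(g) dg = \ve^{-Q}\int_{\bG}\rho(\delta_{\ve^{-1}}(g)) dg = \int_{\bG}\rho(g') dg' = 1. \]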
The following simple, yet critical, fact holds for every $\ve >0$ and $t>0$ \begin{equation}\label{unifeps} {t}^{-\frac{p}{2}}\ \int_{\bG} P_{t}\left(|f_\ve - f_\ve(g)|^p\right)(g) dg \leq {t}^{-\frac{p}{2}}\ \int_{\bG} P_{t}\left(|f - f(g)|^p\right)(g) dg. \end{equation} To see \eqref{unifeps} note that H\"older's inequality and (i) of Proposition \ref{P:prop} imply for any $t>0$ \begin{align*} &\int_{\bG} P_t\left(|f_\ve - f_\ve(g)|^p\right)(g) dg= \int_{\bG}\int_{\bG} p(g,g',t) |f_\ve(g') - f_\ve(g)|^p dg'dg\\ &\leq \int_{\bG}\int_{\bG} p(g,g',t)\int_{\bG} \rho_\ve(g'') |f( (g'')^{-1}\circ g') - f( (g'')^{-1}\circ g)|^p dg'' dg'dg\\ &= \int_{\bG} \rho_\ve(g'')\left(\int_{\bG}\int_{\bG} p((g'')^{-1}\circ g,(g'')^{-1}\circ g',t) |f( (g'')^{-1}\circ g') - f( (g'')^{-1}\circ g)|^p dg'dg\right)dg''\\ &= \int_{\bG} \rho_\ve(g'') \left(\int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg\right) dg'' = \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg. \end{align*} We now claim that for every $1\le p < \infty$ and $\ve>0$ we have \begin{equation}\label{evvivaevviva} \frac{2 \G(p)}{\G(p/2)} \int_\bG |\nabla_{H} f_\ve(g)|^p dg \leq \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg<\infty. \end{equation} The advantage now is that we can use smoothness and therefore argue similarly to the proof of Lemma \ref{L:sup}. We fix $\ve>0$ and let $K\subset \bG$ be a given compact set. From the continuity of $\nh f_\ve$ and Proposition \ref{P:taylor} we know that for every $g\in K$ \[ \langle\nh f_\ve(g),z'-z\rangle = f_\ve(g') - f_\ve(g) + \omega_\ve(g,g'), \] where $|\omega_\ve(g,g')| = o(d(g,g'))$ as $g'\to g$ uniformly for $g\in K$. As in Lemma \ref{L:sup}, for any $\theta>0$ there exists $c_\theta>0$ such that \begin{equation}\label{unactheta} |\langle \nabla_{H} f_\ve(g),z'-z\rangle|^p\leq (1+\theta)|f_\ve(g') - f_\ve(g)|^p + c_\theta |\omega_\ve(g,g')|^p. \end{equation} From \eqref{Inucostante} in Lemma \ref{L:id} we see that for any $0<\rho<\frac{2 \G(p)}{\G(p/2)}$ there exists $M=M(\rho)>0$ such that for any $\nu\in \mathbb S^{m-1}$ one has \begin{equation}\label{inpartgauss} \frac{2 \G(p)}{\G(p/2)}-\rho \leq \int_{B(e,M)} p(g'',e,1) |\langle \nu,z''\rangle|^p dg'', \end{equation} where the uniformity of the constant $M$ with respect to $\nu$ is ensured by \eqref{gauss0}. Assuming that $g\in K$ is a point at which $|\nabla_{H} f_\ve(g)|\not= 0$, applying \eqref{inpartgauss} with $\nu= \frac{\nabla_{H} f_\ve(g)}{|\nabla_{H} f_\ve(g)|}$, we find \begin{align*} & \left(\frac{2 \G(p)}{\G(p/2)}-\rho\right) |\nabla_{H} f_\ve(g)|^p \le \int_{B(e,M)} p(g'',e,1) |\langle \nabla_{H} f_\ve(g),z''\rangle|^p dg''. \end{align*} Since such an inequality continues to be trivially true at points $g\in K$ where $|\nabla_{H} f_\ve(g)|= 0$, we can assume that it holds for every $g\in K$. Integration in $g\in K$, together with the change of variable $g''\mapsto g'=g\circ \delta_{\sqrt{t}}g''$ and the estimate \eqref{unactheta}, gives \begin{align*} & \left(\frac{2 \G(p)}{\G(p/2)}-\rho\right) \int_K |\nabla_{H} f_\ve(g)|^p dg \le \int_K \int_{B(e,M)} p(g'',e,1) |\langle \nabla_{H} f_\ve(g),z''\rangle|^p dg'' dg
\\ &=t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t) |\langle \nabla_{H} f_\ve(g),z'-z\rangle|^p dg'dg\\ &\leq (1+\theta)t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|f_\ve(g') - f_\ve(g)|^p dg'dg \\ & + c_\theta t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|\omega_\ve(g,g')|^p dg'dg \\ &\leq (1+\theta) t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f_\ve - f_\ve(g)|^p\right)(g) dg + c_\theta t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|\omega_\ve(g,g')|^p dg'dg \\ &\leq (1+\theta) t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg + c_\theta t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|\omega_\ve(g,g')|^p dg'dg, \end{align*} where in the last inequality we used the key property \eqref{unifeps}. Taking the $\underset{t \to 0^+}{\liminf}$ of both sides of the latter inequality, we find \begin{align*} & \left(\frac{2 \G(p)}{\G(p/2)}-\rho\right) \int_K |\nabla_{H} f_\ve(g)|^p dg\ \le\ (1+\theta)\ \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg \\ & + c_\theta\ \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|\omega_\ve(g,g')|^p dg'dg. \end{align*} By \eqref{gauss0} in Proposition \ref{P:gaussian} and the properties of $\omega_\ve$, it is now easy to infer that, for any compact $K$ and any fixed $M>0$, we have $$ \underset{t \to 0^+}{\lim}\ t^{-\frac{p}{2}}\ \int_K\int_{B(g,\sqrt{t}M)} p(g,g',t)|\omega_\ve(g,g')|^p dg'dg = 0. $$ We have thus proved that $$ \left(\frac{2 \G(p)}{\G(p/2)}-\rho\right) \int_K |\nabla_{H} f_\ve(g)|^p dg\ \le\ (1+\theta)\ \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg. $$ By the arbitrariness of $K, \theta, \rho$, we conclude that \eqref{evvivaevviva} does hold. For later use in the proof of Theorem \ref{T:p1} below, we reiterate at this point that \eqref{evvivaevviva} is valid for any $1\le p<\infty$. We can now complete the proof of the lemma. By the Banach-Alaoglu theorem (this is the only place where we are using $p>1$!) we know that (up to a subsequence) for every $i=1,...,m$, $X_i f_\ve$ is weakly convergent as $\ve\to 0^+$ to a function $g_i\in L^p(\bG)$. On the other hand, since $f_\ve\to f$ in $L^p(\bG)$, for every $i=1,...,m$, we have that $X_i f_\ve$ converges to $X_i f$ in $\mathscr D'(\bG)$. This shows $X_i f=g_i$ as $L^p$-functions, thus proving that $f\in W^{1,p}(\bG)$. Once we know this crucial information, we use the fact that, from the definition of $f_\ve$, we have $X_i(f_\ve) = \rho_\ve\star(X_i f)= (X_i f)_\ve$ for $i=1,...,m$, and therefore $\nabla_{H}(f_\ve)=\left(\nabla_{H} f\right)_\ve\to \nabla_{H} f$ in $L^p(\bG)^m$ as $\ve\to 0^+$. This allows us to infer from \eqref{evvivaevviva} that $$ \frac{2 \G(p)}{\G(p/2)}\int_\bG |\nabla_{H} f(g)|^p dg \le \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg, $$ which finally proves \eqref{charliminf}. \end{proof} We are then in a position to provide the \begin{proof}[Proof of Theorem \ref{T:mainp}] It follows immediately by combining Lemmas \ref{L:sup} and \ref{L:inf}. \end{proof} We next turn our attention to the \begin{proof}[Proof of Theorem \ref{T:p1}] We begin by proving that, for $f\in L^1(\bG)$, we have \begin{equation}\label{infuno} \underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg<\infty\ \Longrightarrow\ f\in BV(\bG).
\end{equation} Keeping in mind that \eqref{evvivaevviva} in the proof of Lemma \ref{L:inf} does hold also when $p=1$, we obtain from it for all $\ve>0$ $$ \frac{2}{\sqrt{\pi}} \int_\bG |\nabla_{H} f_\ve(g)| dg \leq \underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg<\infty, $$ where as before $f_\ve=\rho_\ve\star f \in C^\infty(\bG)\cap L^1(\bG)$ denotes the family of group mollifications of $f$. This shows in particular that $f_\ve \in W^{1,1}(\bG)$. Recalling the definition \eqref{var}, if we now fix $\zeta = (\zeta_1,...,\zeta_m)\in \mathscr F$, we clearly have for all $\ve >0$ \begin{align}\label{evvivaevviva1} &\int_{\bG} f_\ve (g) \sum_{i=1}^m X_i \zeta_i (g) dg = - \int_{\bG} \left\langle \nh f_\ve (g), \zeta(g) \right\rangle dg \\ &\leq \int_\bG |\nabla_{H} f_\ve(g)| dg \leq \frac{\sqrt{\pi}}{2}\underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg.\notag \end{align} Using the fact that $f_\ve \to f$ in $L^1(\bG)$ as $\ve \to 0^+$, together with \eqref{evvivaevviva1}, we obtain \begin{align*} & \int_{\bG} f(g) \sum_{i=1}^m X_i \zeta_i (g) dg = \underset{\ve \to 0^+}{\lim} \int_{\bG} f_\ve (g) \sum_{i=1}^m X_i \zeta_i (g) dg \\ & \leq \frac{\sqrt{\pi}}{2}\underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg. \end{align*} The arbitrariness of $\zeta \in \mathscr F$ and \eqref{var} yield the validity of \eqref{infuno}, and we conclude that \begin{equation}\label{infunosharp} \frac{2}{\sqrt{\pi}} \operatorname{Var}_\bG(f) \leq \underset{t \to 0^+}{\liminf}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg. \end{equation} The opposite implication, \begin{equation}\label{supuno} f\in BV(\bG)\ \Longrightarrow\ \underset{t \to 0^+}{\limsup}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg<\infty, \end{equation} is a trivial consequence of Proposition \ref{P:minchietto}. The two implications \eqref{infuno} and \eqref{supuno} prove \eqref{1uno}. If, in addition, $f\in W^{1,1}(\bG)$, then keeping in mind that $\operatorname{Var}_\bG(f) = \int_{\bG} |\nh f(g)| dg$, and by combining \eqref{infunosharp} with the case $p=1$ of Lemma \ref{L:sup}, we obtain $$ \exists\underset{t \to 0^+}{\lim}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg = \frac{2}{\sqrt{\pi}} \int_\bG |\nabla_{H} f(g)| dg. $$ This proves \eqref{2unouno}. We are left with the proof of \eqref{2uno}. We first apply Theorem \ref{T:bmpB}, which gives for $f\in BV(\bG)$ \begin{align}\label{olliop} &\underset{t \to 0^+}{\lim}\ \frac{1}{\sqrt t}\ \int_{\bG} P_t\left(|f - f(g)|\right)(g) dg = 4 \int_{\bG} \int_{T_\bG(\sigma_f(g))} p(g',e,1) dg' d \operatorname{Var}_\bG(f)(g) \\ &= 4 \int_{\bG} \int_{T_{\sigma_f(g)}\times \R^{N-m}} p(g',e,1) dg' d \operatorname{Var}_\bG(f)(g), \notag \end{align} where in the last equality we have used \eqref{Tnu} with $\nu = \sigma_f(g)$. It is at this point that the crucial identity \eqref{punoint} in Lemma \ref{L:id} enters the stage. Using it we find that for $\operatorname{Var}_\bG(f)$-a.e. $g\in \bG$ one has \begin{equation}\label{wow} 4 \int_{T_{\sigma_f(g)}\times \R^{N-m}} p(g',e,1) dg' = \frac{2}{\sqrt \pi}. \end{equation} Inserting \eqref{wow} in \eqref{olliop} we obtain \eqref{2uno}, thus completing the proof of the theorem. \end{proof} \section{Limiting behaviour of Besov seminorms}\label{S:seminorms} In this section we prove Theorems \ref{T:bbmG} and \ref{T:MS}.
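Before proceeding, we recall the representation of the seminorm $\mathscr N_{s,p}$ in terms of the heat semigroup that was already used in the proof of Lemma \ref{L:inclus}, namely \[ \mathscr N_{s,p}(f)^p=\int_0^{\infty} \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt, \] together with the elementary tail bound \eqref{Ndopo1}; both will be invoked repeatedly in this section.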
As we have implicitly mentioned in the introduction, the gist of the proof of Theorem \ref{T:bbmG} is to connect the limit as $t\searrow 0^+$ of the heat semigroup in Theorem \ref{T:mainp} with that as $s\nearrow 1^-$ of the desingularised Besov seminorms $(1-s) \mathscr N_{s,p}(f)^p$. This connection is expressed by the following proposition (see also \cite[Section 3]{GTbbmd} for the case $p=1$). \begin{proposition}\label{P:stars} Let $1\leq p<\infty$ and $f\in L^p(\bG)$. One has \begin{align}\label{chaininfsup} &\frac{2}{p}\ \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg \leq \underset{s\to 1^-}{\liminf}\ (1 - s)\ \mathscr N_{s,p}(f)^p \leq \\ &\leq \underset{s\to 1^-}{\limsup}\ (1 - s)\ \mathscr N_{s,p}(f)^p \leq \frac{2}{p}\ \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg.\notag \end{align} \end{proposition} \begin{proof} We start by proving \begin{equation}\label{ve2} \underset{s\to 1^-}{\limsup}\ (1 - s)\ \mathscr N_{s,p}(f)^p \leq \frac{2}{p}\ \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg, \end{equation} and we denote $$L \overset{def}{=} \underset{t \to 0^+}{\limsup}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg.$$ If $L=+\infty$, then \eqref{ve2} is trivially valid. Therefore, we might as well assume that $L<\infty$. By \eqref{inbesov} we already know that $f$ belongs to the fractional Sobolev space $\Bps$ for every $0<s<1$. Fix $\ve_0>0$ such that \[ \underset{\tau\in (0,\ve_0)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg <\infty \] and consider any $\ve\in(0,\ve_0)$. Hence we can write \begin{align*} \mathscr N_{s,p}(f)^p & = \int_0^\ve \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt + \int_\ve^\infty \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt. \end{align*} On one hand, by \eqref{Ndopo1} we know \[ \int_\ve^\infty \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \leq \frac{2^{p+1}}{sp} \|f\|^p_p \ve^{-\frac{sp}{2}}. \] On the other hand, one has \[ \int_0^\ve \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \leq \left(\underset{\tau\in (0,\ve)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \right) \frac{2\ve^{\frac{p}{2}(1-s)}}{p(1 - s)}. \] We thus infer that for every $\ve\in (0,\ve_0)$ we have \begin{equation}\label{ve} \mathscr N_{s,p}(f)^p \leq \frac{2\ve^{\frac{p}{2}(1-s)}}{p(1 - s)} \left(\underset{\tau\in (0,\ve)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \right) + \frac{2^{p+1}}{sp} \|f\|^p_p \ve^{-\frac{sp}{2}}. \end{equation} Multiplying by $(1 - s)$ in \eqref{ve} and taking the $\underset{s \to 1^-}{\limsup}$, we find for any $\ve\in (0,\ve_0)$, \begin{equation}\label{lim12} \underset{s\to 1^-}{\limsup}\, (1 - s) \mathscr N_{s,p}(f)^p \le \frac{2}{p} \underset{\tau\in (0,\ve)}{\sup}\ \tau^{-\frac{p}{2}}\ \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg. \end{equation} Passing to the limit as $\ve\to 0^+$ in \eqref{lim12}, we reach the desired conclusion \eqref{ve2}.\\ We are left with the proof of \begin{equation}\label{ve3} \underset{s\to 1^-}{\liminf}\ (1 - s)\ \mathscr N_{s,p}(f)^p \geq \frac{2}{p}\ \underset{t \to 0^+}{\liminf}\ t^{-\frac{p}{2}}\ \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg. 
\end{equation} To do this, for every $0<s<1$ and any $\ve>0$, we write \begin{align*} & (1 - s)\ \mathscr N_{s,p}(f)^p \geq (1-s) \int_0^\ve \frac{1}{t^{\frac{s p}2 +1}} \int_{\bG} P_t\left(|f - f(g)|^p\right)(g) dg dt \\ & \ge (1 -s) \left(\underset{0<\tau<\ve}{\inf}\ \tau^{-\frac{p}{2}} \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \right) \int_0^\ve t^{\frac{p}{2}(1-s)-1} dt \\ & = \frac{2}{p}\left(\underset{0<\tau<\ve}{\inf}\ \tau^{-\frac{p}{2}} \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg \right)\ \ve^{\frac{p}{2}(1-s)}. \end{align*} Taking the $\underset{s \to 1^-}{\liminf}$ in the latter inequality, yields \[ \underset{s\nearrow 1}{\liminf}\, (1-s) \mathscr N_{s,p}(f)^p \geq \underset{0<\tau<\ve}{\inf}\ \frac{2}{p} \tau^{-\frac{p}{2}} \int_{\bG} P_\tau\left(|f - f(g)|^p\right)(g) dg. \] If we now take the limit as $\ve\to 0^+$, we reach the desired inequality \eqref{ve3}. \end{proof} We are then ready to provide the \begin{proof}[Proof of Theorem \ref{T:bbmG}] The characterisations \eqref{1sp} and, respectively, \eqref{1suno} follow easily from \eqref{charlimsup}, \eqref{charliminf}, and \eqref{chaininfsup} in case $p>1$ and, respectively, from \eqref{supuno}, \eqref{infuno}, and \eqref{chaininfsup} if $p=1$. Moreover, for $f\in W^{1,p}(\bG)$, the limiting behaviour \eqref{2sp} is a trivial consequence of \eqref{2p}, \eqref{2unouno}, and \eqref{chaininfsup}.\\ Finally, if $\bG$ has the property (B) and $f\in BV(\bG)$, the limiting behaviour \eqref{2suno} is a consequence of \eqref{2uno} and \eqref{chaininfsup}. \end{proof} We finally close the paper with the \begin{proof}[Proof of Theorem \ref{T:MS}] We observe that the heat semigroup $P_t$ satisfies the following three properties: \begin{itemize} \item[(a)] $P_t 1=P^*_t1=1$ for all $t>0$ (which is a consequence of (iii) in Proposition \ref{P:prop} and of the symmetry of the heat kernel), and thus in particular $$ \|P_t f\|_q\leq \|f\|_q \quad \forall\, f\in L^q,\, t>0,\mbox{ and }1\leq q\leq \infty; $$ \item[(b)] (ultracontractivity) for every $1<q\leq \infty$ there exists a constant $C_q$ such that $$ \|P_t f\|_q\leq \frac{C_q}{t^{\frac{Q}{2} \left(1-\frac{1}{q}\right)}} \|f\|_1 \quad \forall\,f\in C_0^\infty\mbox{ and } t>0, $$ (this is a consequence of Minkowski's integral inequality and the upper Gaussian estimate in \eqref{gauss0}); \item[(c)] the density property in Lemma \ref{L:dens} and the estimate \eqref{besovinbesov} of the embedding $\Bps \subset\mathfrak B_{\sigma,p}(\bG)$. \end{itemize} We emphasise that property (a) implies for the spaces $\Bps$ the same asymptotic behaviour as $s\to 0^+$ of the case ${\rm tr} B=0$ of the H\"ormander semigroup treated in \cite[Theorem 1.1]{BGT}. With properties (a)-(c) we can now follow verbatim the semigroup approach in \cite{BGT} to reach the desired conclusion. \end{proof} \bibliographystyle{amsplain} \begin{thebibliography}{10} \bibitem{Ad} R. A. Adams, \emph{Sobolev Spaces}, Academic Press, New York, 1975. \bibitem{ArB} P. Alonso Ruiz \& F. Baudoin, \emph{Yet another heat semigroup characterization of BV functions on Riemannian manifolds}, ArXiv preprint 2010.12131. \bibitem{Bar} D. Barbieri, \emph{Approximations of Sobolev norms in Carnot groups}, Commun. Contemp. Math. \textbf{13}~(2011), no. 5, 765-794. \bibitem{BLU} A. Bonfiglioli, E. Lanconelli, F. Uguzzoni, \textit{Stratified Lie groups and potential theory for their sub-Laplacians}, Springer Monographs in Mathematics, Springer, Berlin, 2007. \bibitem{BBM1} J. Bourgain, H. Brezis \& P. 
Mironescu, \emph{Another look at Sobolev spaces}, in ``Optimal control and partial differential equations'', IOS, Amsterdam, 2001, 439-455. \bibitem{BBM2} J. Bourgain, H. Brezis \& P. Mironescu, \emph{Limiting embedding theorems for $W^{s,p}$ when $s\nearrow 1$ and applications}, J. Anal. Math. \textbf{87}~(2002), 77-101. \bibitem{BMP} M. Bramanti, M. Jr. Miranda \& D. Pallara, \emph{Two characterization of $\operatorname{BV}$ functions on Carnot groups via the heat semigroup}, Int. Math. Res. Not. IMRN \textbf{17}~(2012), 3845-3876. \bibitem{B} H. Brezis, \emph{How to recognize constant functions. A connection with Sobolev spaces}, Uspekhi Mat. Nauk \textbf{57}~(2002), 59-74; translation in Russian Math. Surveys \textbf{57}~(2002), 693-708. \bibitem{BGT} F. Buseghin, N. Garofalo \& G. Tralli, {\it On the limiting behaviour of some nonlocal seminorms: a new phenomenon}, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) \textbf{23}~(2022), 837-875. \bibitem{CDG} L. Capogna, D. Danielli \& N. Garofalo, \emph{The geometric Sobolev embedding for vector fields and the isoperimetric inequality}, Comm. Anal. Geom. \textbf{2}~(1994), no. 2, 203-215. \bibitem{CMSV} M. Capolli, A. Maione, A. M. Salort \& E. Vecchi, \emph{Asymptotic behaviours in fractional Orlicz-Sobolev spaces on Carnot groups}, J. Geom. Anal. \textbf{31}~(2021), 3196-3229. \bibitem{CDPP} A. Carbotti, S. Don, D. Pallara \& A. Pinamonti, \emph{Local minimizers and gamma-convergence for nonlocal perimeters in Carnot groups}, ESAIM Control Optim. Calc. Var. \textbf{27}~(2021), S11. \bibitem{Ca} E. Cartan, \emph{Sur la repr\'esentation g\'eom\'etrique des syst\`emes mat\'eriels non holonomes}, Proc. Internat. Congress Math., vol.4, Bologna, 1928, 253-261. \bibitem{CG} L. Corwin \& F. P. Greenleaf, \emph{Representations of nilpotent Lie groups and their applications, Part I: basic theory and examples}, Cambridge Studies in Advanced Mathematics 18, Cambridge University Press, Cambridge (1990). \bibitem{CLL} X. Cui, N. Lam \& G. Lu, \emph{New characterizations of Sobolev spaces on the Heisenberg group}, J. Funct. Anal. \textbf{267}~(2014), 2962-2994. \bibitem{DGNP} D. Danielli, N. Garofalo, D.-M. Nhieu \& S. D. Pauls, \emph{Instability of graphical strips and a positive answer to the Bernstein problem in the Heisenberg group $\mathbb H^1$}, J. Differential Geom. \textbf{81}~(2009), no. 2, 251-295. \bibitem{Da} J. D\'avila, \emph{On an open question about functions of bounded variation}, Calc. Var. Partial Differential Equations \textbf{15}~(2002), no. 4, 519-527. \bibitem{FMPPS} F. Ferrari, M. Miranda, Jr., D. Pallara, A. Pinamonti \& Y. Sire, \emph{Fractional Laplacians, perimeters and heat semigroups in Carnot groups}, Discrete Contin. Dyn. Syst. Ser. S \textbf{11}~(2018), no. 3, 477-491. \bibitem{Fo} G. B. Folland, \emph{Subelliptic estimates and function spaces on nilpotent Lie groups}, Ark. Mat. \textbf{13}~(1975), no. 2, 161-207. \bibitem{FS} G. B. Folland \& E. M. Stein, \emph{Hardy spaces on homogeneous groups}, Mathematical Notes, \textbf{28}, Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1982. xii+285 pp. \bibitem{FSS} B. Franchi, R. Serapioni \& F. Serra Cassano, \emph{On the structure of finite perimeter sets in step 2 Carnot groups}, J. Geom. Anal. \textbf{13}~(2003), 421-466. \bibitem{GN} N. Garofalo \& D.-M. Nhieu, \emph{Isoperimetric and Sobolev inequalities for Carnot-Carath\'eodory spaces and the existence of minimal surfaces}, Comm. Pure Appl. Math. \textbf{49}~(1996), 1081-1144. 
\bibitem{Gparis} N. Garofalo, \emph{Hypoelliptic operators and some aspects of analysis and geometry of sub-Riemannian spaces}, Geometry, analysis and dynamics on sub-Riemannian manifolds, vol. 1, 123-257, EMS Ser. Lect. Math., Eur. Math. Soc., Z\"urich, 2016. \bibitem{GTfi} N. Garofalo \& G. Tralli, \emph{Functional inequalities for class of nonlocal hypoelliptic equations of H\"ormander type}, Nonlinear Anal. \textbf{193}~(2020), special issue `Nonlocal and Fractional Phenomena', 111567. \bibitem{GTiso} N. Garofalo \& G. Tralli, \emph{Nonlocal isoperimetric inequalities for Kolmogorov-Fokker-Planck operators}, J. Funct. Anal. \textbf{279}~(2020), 108591. \bibitem{GTbbmd} N. Garofalo \& G. Tralli, \emph{A Bourgain-Brezis-Mironescu-D\'avila theorem in Carnot groups of step two}, to appear in Comm. Anal. Geom. (ArXiv preprint 2004.08529). \bibitem{GTpotan} N. Garofalo \& G. Tralli, \emph{Heat kernels for a class of hybrid evolution equations}, to appear in Potential Anal., DOI: \verb|10.1007/s11118-022-10003-2|. \bibitem{Go} W. G\'{o}rny, \emph{Bourgain-Brezis-Mironescu approach in metric spaces with Euclidean tangents}, J. Geom. Anal. \textbf{32}~(2022), 128. \bibitem{Gro} M. Gromov, \emph{Carnot-Carath\'eodory spaces seen from within}, p. 79-323 in \emph{Sub-Riemannian geometry}, Progr. Math., 144, Birkh\"auser, Basel, 1996. \bibitem{HP} B.-X. Han \& A. Pinamonti, \emph{On the asymptotic behaviour of the fractional Sobolev seminorms in metric measure spaces: Bourgain-Brezis-Mironescu's theorem revisited}, ArXiv preprint 2110.05980. \bibitem{Hui} G. Huisken, \emph{Asymptotic behavior for singularities of the mean curvature flow}, J. Differential Geom. \textbf{31}~(1990), no. 1, 285-299. \bibitem{JS} D. S. Jerison \& A. S\'anchez-Calle, \emph{Estimates for the heat kernel for a sum of squares of vector fields}, Indiana Univ. Math. J. \textbf{35}~(1986), no. 4, 835-854. \bibitem{KM} A. Kreuml and O. Mordhorst, \emph{Fractional Sobolev norms and BV functions on manifolds}, Nonlinear Anal. \textbf{187}~(2019), 450-466. \bibitem{Le} N. N. Lebedev, \emph{Special functions and their applications}, revised edition, translated from the Russian and edited by R. A. Silverman. Unabridged and corrected republication, Dover Publications, Inc., New York, 1972. \bibitem{Led} M. Ledoux, \emph{Semigroup proofs of the isoperimetric inequality in Euclidean and Gauss space}, Bull. Sci. Math. \textbf{118}~(1994), no. 6, 485-510. \bibitem{Lud} M. Ludwig, \emph{Anisotropic fractional Sobolev norms}, Adv. Math. \textbf{252}~(2014), 150-157. \bibitem{Ma} M. Marchi, \emph{Regularity of sets with constant intrinsic normal in a class of Carnot groups}, Ann. Inst. Fourier (Grenoble) \textbf{64}~(2014), 429-455. \bibitem{MS} V. Maz'ya and T. Shaposhnikova, \emph{On the Bourgain, Brezis, and Mironescu theorem concerning limiting embeddings of fractional Sobolev spaces}, J. Funct. Anal. \textbf{195}~(2002), 230-238 (\emph{Erratum}-ibid. \textbf{201}~(2003), 298-300). \bibitem{MPPP} M. Jr. Miranda, D. Pallara, F. Paronetto \& M. Preunkert, \emph{Short-time heat flow and functions of bounded variation in {$\bold R^N$}}, Ann. Fac. Sci. Toulouse Math. (6) \textbf{16}~(2007), 125-145. \bibitem{Mu} D. M\"uller, \emph{A restriction theorem for the Heisenberg group}, Ann. of Math. (2) \textbf{131}~(1990), no. 3, 567-587. \bibitem{P} P. Pansu, \emph{M\'etriques de Carnot-Carath\'eodory et quasi-isom\'etries des espaces sym\'etriques de rang un}, Ann. of Math. (2)\textbf{129}~(1989),1, 1-60. \bibitem{Ri} F. 
Ricci, \emph{Sub-Laplacians on nilpotent Lie groups}, notes of a course held at the Scuola Normale Superiore, Pisa, 2002-03. \bibitem{RS} T. Runst \& W. Sickel, \emph{Sobolev spaces of fractional order, Nemytskij operators, and nonlinear partial differential equations}, De Gruyter Series in Nonlinear Analysis and Applications, 3. Walter de Gruyter \& Co., Berlin, 1996. x+547 pp. \bibitem{Stein} E. M. Stein, \emph{Some problems in harmonic analysis suggested by symmetric spaces and semi-simple groups}, Actes du Congr\`es International des Math\'ematiciens (Nice, 1970), Tome 1, pp. 173-189. Gauthier-Villars, Paris, 1971. \bibitem{V} V. S. Varadarajan, \emph{Lie groups, Lie algebras, and their representations}, Graduate Texts in Mathematics 102, Springer-Verlag, New York, 1984, reprint of the 1974 edition. \bibitem{VSC} N. Th. Varopoulos, L. Saloff-Coste \& T. Coulhon, \emph{Analysis and geometry on groups}, Cambridge Tracts in Mathematics 100, Cambridge University Press, Cambridge, 1992. \end{thebibliography} \end{document}
2205.04560v1
http://arxiv.org/abs/2205.04560v1
Hasselmann's Paradigm for Stochastic Climate Modelling based on Stochastic Lie Transport
\documentclass[12pt, a4paper, twoside, notitlepage]{article} \textwidth 17cm \textheight 21cm \topmargin 0cm \oddsidemargin 0cm \evensidemargin 0cm \usepackage{amssymb,amssymb} \usepackage{enumerate} \usepackage{amsthm} \usepackage{framed} \usepackage{comment,url} \usepackage[centertags]{amsmath} \usepackage[english]{babel} \usepackage{multimedia} \usepackage[english]{babel} \usepackage[latin1]{inputenc} \usepackage{times} \usepackage[T1]{fontenc} \usepackage{color} \usepackage{graphicx} \usepackage{graphics} \usepackage{pgf} \usepackage{amssymb} \usepackage{bbm} \usepackage[centertags]{amsmath} \usepackage[english]{babel} \usepackage{multimedia} \usepackage[english]{babel} \usepackage[latin1]{inputenc} \usepackage{times} \usepackage[T1]{fontenc} \usepackage{picinpar} \usepackage{color} \usepackage{graphicx} \usepackage{graphics} \usepackage{pgf} \usepackage{todonotes} \newcommand{\bb}{\mathbb} \newcommand{\cali}{\mathcal} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand {\J}{\mathcal{J}} \newcommand {\Jo}{\mathcal{J}_{obs}} \newcommand {\M}{\mathcal{M}} \newcommand {\Bmod}{\mathcal{B}} \newcommand {\Bprime}{\mathcal{\widetilde{B}}'} \newcommand {\Robs}{\mathcal{R}} \newcommand {\Hobs}{}\newcommand {\ROTATION}{{\bf \Omega}} \newcommand {\U}{{\bf u}} \newcommand {\V}{{\bf v}} \newcommand {\h}{\theta} \newcommand {\vecV}{\overrightarrow{\bf v}} \newcommand {\vecU}{\overrightarrow{\bf u}} \newcommand {\Div}{{ div}} \newcommand {\LV}{{ V}} \newcommand {\LU}{{ U}} \newcommand {\LH}{{\Theta}} \newcommand {\SOAU}{\bar{ U}} \newcommand {\SOAV}{\bar{ V}} \newcommand {\SOAH}{\bar{ \Theta}} \newcommand {\AU}{\widetilde{\LU}} \newcommand {\AV}{\widetilde{\LV}} \newcommand {\AH}{\widetilde{\LH}} \newcommand {\APsi}{\widetilde{\Psi}} \newcommand {\Deriv}{\mathcal{D}^\alpha} \newcommand {\TDeriv}{\mathcal{D}^l_t} \newcommand {\XDeriv}{\mathcal{D}^k_x} \newcommand{\dxran}{{\rm d}\mathbf{x}_t} \newcommand{\dXran}{{\color{blue}{\rm d}\mathbf{X}_t}} \newcommand{\rmd}{{\rm d}} \newcommand{\scp}[2]{{\big\langle {#1}\, , \, {#2}\big\rangle}} \newcommand{\Scp}[2]{{\Big\langle {#1}\, , \, {#2}\Big\rangle}} \newcommand{\SCP}[2]{{\left\langle {#1}\, , \, {#2}\right\rangle}} \newcommand{\ob}[1]{\overline{#1}} \newcommand{\wt}[1]{\widetilde{#1}} \newcommand{\mb}[1]{\mbox{\boldmath{$#1$}}} \newcommand{\mbs}[1]{\footnotesize{\mbox{\boldmath{$#1$}}}} \newcommand{\mbb}[1]{\mbox{\mathbb{$#1$}}} \newcommand{\mcal}[1]{\mathcal{#1}} \DeclareMathOperator{\dx}{{d\!}{x}} \def\bs#1{\boldsymbol{#1}} \def\wh#1{\widehat{#1}} \newcommand{\sym}[1]{\boldsymbol{#1}} \newcommand{\pp}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\dede}[2]{\frac{\delta #1}{\delta #2}} \newcommand{\dd}[2]{\frac{{d\!} #1}{{d\!} #2}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newenvironment{remark}[1][Remark]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \def\NOTE#1{{\textcolor{blue}{\bf [#1]}}} \def\UNSURE#1{{\textcolor{magenta}{\bf [#1]}}} \def\DEL#1{{\textcolor{green}{\bf [#1]}}} \def\ADD#1{{\textcolor{red}{\bf [#1]}}} \begin{document} \begin{titlepage} \title{Hasselmann's Paradigm for Stochastic Climate Modelling based on Stochastic Lie Transport} \author{D. Crisan, D. D. Holm, P. 
Korn} \maketitle \begin{center} {\it Dedicated to the memory of Charlie Doering} \end{center} \vskip0.5cm \begin{abstract} A generic approach to stochastic climate modelling is developed for the example of an idealized Atmosphere-Ocean model that rests upon Hasselmann's paradigm for stochastic climate models. Namely, stochasticity is incorporated into the fast-moving atmospheric component of an idealized coupled model by means of stochastic Lie transport, while the slow-moving ocean model remains deterministic. More specifically, the stochastic model SALT (stochastic advection by Lie transport) is constructed by introducing stochastic transport into the material loop in Kelvin's circulation theorem. The resulting stochastic model preserves circulation, as does the underlying deterministic climate model. A variant of SALT called LA-SALT (Lagrangian-Averaged SALT) is introduced in this paper. In LA-SALT, we replace the drift velocity of the stochastic vector field by its expected value. The remarkable property of LA-SALT is that the evolution of its higher moments is governed by linear deterministic equations. Our modelling approach is substantiated by establishing local existence results, first, for the deterministic climate model that couples the compressible atmospheric equations to the incompressible ocean equations, and second, for the two stochastic SALT and LA-SALT models. \end{abstract} \newpage \tableofcontents \end{titlepage} \section{Introduction}\label{sect_INTRO} Prediction of climate dynamics is one of the great societal and intellectual challenges of our time. The complexity of this task has prompted the formulation of {\it idealized} climate models that target the representation of selected spatio-temporal characteristics, instead of representing the full bandwidth of physical processes ranging from seconds to millennia and from centimeters to thousands of kilometers. A climate model of full complexity would couple, for example, an atmospheric model, described by the compressible three-dimensional Navier-Stokes equations and a set of advection-diffusion equations for temperature and humidity, to an oceanic model, given by the three-dimensional incompressible Navier-Stokes equations and advection-diffusion equations for temperature and salinity. Each model would be completed by thermodynamic relationships and physical parametrizations to account for non-resolved processes, and the boundary conditions would represent the physics of the air-sea interface and the ocean's mixed layer. In contrast, idealized models tend to simplify these equations by reducing the number of state variables, terms in the equations, or spatial dimensions. The amount of simplification needed in each climate model is dictated by the climate processes under investigation and is often rationalized by heuristic scaling considerations. In this paper we formulate a framework for deriving stochastic idealized climate models. The deterministic version of the two types of stochastic climate model we derive here belongs to a class of idealized climate models that target the study of the El Ni\~no-Southern Oscillation (ENSO). ENSO is an instability of the coupled atmosphere-ocean system that occurs with a quasi-periodic frequency of 5-7 years. The fundamental instability mechanism for ENSO can be, and has been, investigated with idealized models that consist of {\it two-dimensional} coupled atmosphere-ocean equations (see e.g. \cite{ENSO_THEORY}).
We provide more details on this class of deterministic models in Section \ref{subsect_DetModel}. A conceptual picture of the integration of stochasticity into a climate model was formulated by Hasselmann \cite{Hasselmann1976}. In Hasselmann's paradigm, the atmosphere acts with high frequency on short time scales, represented as a stochastic white-noise forcing of the ocean. The integration of the atmosphere's stochastic white-noise forcing over long time scales produces a low-frequency response in the ocean. As a result of the back-reaction, a red spectrum of the atmosphere's climate fluctuations is produced, which complies with a variety of observations of the internal variability of the climate system \cite{Hasselmann1976}. For a description of Hasselmann's program in probabilistic terms we refer to \cite{Arnold2001}. It is common practice in climate modelling to incorporate Hasselmann's paradigm as a stochastic perturbation of the initial conditions, or as a stochastic forcing to the right-hand side of the dynamical equations of a deterministic climate model, and then to model the range of stochastic effects by creating an ensemble of simulations (see e.g., \cite{Palmer_2019}). The particular choice of the stochastic perturbation must be based on the modelling objectives in the case at hand. A concise mathematical framework for stochastic climate modelling was developed in \cite{MajdaTimoEinden}. This approach also relies on a scale separation into fast and slow dynamics, following Hasselmann's paradigm, but it differs methodologically from ours: it models the nonlinear self-interaction of the fast variables by means of a linear stochastic model such as an Ornstein-Uhlenbeck process and incorporates multiplicative noise, while at the same time maintaining energy conservation. While following Hasselmann's view in a general sense, in this paper we incorporate the stochasticity in a novel way, applied in two stages, both of which deviate from established practice in several respects. First, the path of a fluid element in the Lagrangian sense is assumed to be stochastic. This assumption injects stochasticity directly into the transport velocity of the atmospheric fluid dynamics, thereby transforming the governing equations into stochastic PDEs. Second, although our stochasticity is introduced \emph{ab initio} and not \emph{a posteriori} via external forcing, both stages of our stochastic models are transparently related to the deterministic model by the Kelvin Circulation Theorem. This fundamental connection facilitates the physical interpretation of the two stages of the stochastic models. Our modelling approach could be seen as an implementation of Hasselmann's program, since we couple the fast and stochastic atmosphere model to the slow and deterministic ocean model and we implement this through a new coupling mechanism that passes the {\it expectation} of the atmospheric wind forcing to the ocean. However, Hasselmann's paradigm discussed in \cite{Hasselmann1976,FrankignoulHasselmann1977} now has more than three thousand citations, so the present paper could equally well be considered a footnote to Hasselmann's program. The two stages of our stochastic approach are called Stochastic Advection by Lie Transport (SALT) and Lagrangian Averaged Stochastic Advection by Lie Transport (LA-SALT). The two stages of our approach represent two different viewpoints or modelling philosophies depending on the time scales of the intended application.
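In its simplest caricature, the mechanism in Hasselmann's paradigm recalled above amounts to a slow, weakly damped component integrating fast white-noise forcing, which concentrates variance at low frequencies and thereby produces a red spectrum. The following minimal Python sketch of a scalar Ornstein-Uhlenbeck process serves as an illustration of this point only; it is not part of the SALT or LA-SALT models introduced below, and all parameter values are arbitrary.
\begin{verbatim}
import numpy as np

# Scalar Ornstein-Uhlenbeck caricature of Hasselmann's mechanism:
# a slow, weakly damped component x driven by fast white noise,
#     dx = -lam * x dt + sigma dW,
# has spectrum ~ sigma^2 / (lam^2 + omega^2), i.e. a "red" spectrum.
rng = np.random.default_rng(0)
lam, sigma = 0.05, 1.0        # weak damping (slow component), noise amplitude
dt, n_steps = 0.1, 200_000

x = np.zeros(n_steps)
for n in range(1, n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    x[n] = x[n - 1] - lam * x[n - 1] * dt + sigma * dW   # Euler-Maruyama step

# Periodogram: the power is concentrated at low frequencies.
power = np.abs(np.fft.rfft(x - x.mean()))**2 / n_steps
print("mean power, lowest 50 nonzero frequencies :", power[1:51].mean())
print("mean power, highest 50 frequencies        :", power[-50:].mean())
\end{verbatim}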
For SALT, atmospheric `weather' produces uncertainty in advection arising from motion on unresolved time scales. In LA-SALT, atmospheric `climate' is taken as the baseline, and the atmospheric `weather' is treated as a field of fluctuations around the climate baseline, as discussed in Ed Lorenz's famous lecture \cite{Lorenz1995}. The LA-SALT approach brings us back to Hasselmann's paradigm, which decomposes a general climate model into deterministic and stochastic parts. In LA-SALT, in addition, the ideas of McKean, Vlasov and Kac \cite{McKean}, \cite{Vlasov}, \cite{Kac}\footnote{See also the seminal discussion in Sznitman \cite{sas} of the ``propagation of chaos'' introduced in \cite{Kac}.} are applied to the deterministic climate description. Namely, the LA-SALT approach results in deterministic linear fluctuation equations that govern the dynamics of the climate statistics themselves, including variance, covariance and higher statistical moments. Within our framework these higher order statistical moments are governed by linear equations. This result offers potential computational advantages and opens new perspectives for the theoretical analysis of these moments. In summary, this paper formulates two complementary stochastic idealized climate models called SALT and LA-SALT. The SALT climate model couples a stochastic PDE for the atmospheric circulation to a deterministic PDE for the circulation of the ocean. The stochasticity is incorporated by assuming that Lagrangian particles in the atmosphere follow a stochastic path given by a Stratonovich process which appears in the motion of the material loop in Kelvin's circulation theorem. The stochastic Lagrangian path of the material loop is a semimartingale stochastic process in the SALT approach and is a McKean-Vlasov process in the LA-SALT approach. Both the SALT and LA-SALT approaches are related to an underlying deterministic model via Kelvin's circulation theorem. We substantiate our modelling choices by anchoring them within an established class of idealized climate models, as well as by providing a mathematical analysis that demonstrates, by proving local well-posedness results, that the proposed stochastic climate models rest on a firm mathematical basis. The numerical simulations that would demonstrate the capabilities of the SALT and LA-SALT stochastic models, however, are beyond the scope of the present paper. The key element of such a numerical experiment is the sensible specification of the stochastic process. For the purpose of the mathematical analysis carried out here, though, it is sufficient to assume that the stochastic process is of Stratonovich \emph{type}. In contrast, a numerical experiment would require one to choose a specific Stratonovich process by incorporating externally obtained information either from observations or from high-resolution simulations. For an example of the latter procedure in the context of the Euler fluid equations in two dimensions, we refer to \cite{COTTERetal2019}. \color{black} \bigskip In the remainder of the introduction we detail our modelling approach for the deterministic model in Section \ref{subsect_DetModel} and for the stochastic model in Section \ref{subsect_StochModel}. \bigskip \begin{enumerate} \item Main content of the paper \begin{enumerate} \item Adaptation of the deterministic Gill-Matsuno \cite{GILL,MATSUNO} class of ocean-atmosphere climate models (OACM) to the geometric variational framework.
This adaptation produces a Kelvin circulation theorem that retains the transformation properties that are the basis for the remainder of the paper. These transformation properties are inherited from the variational framework. They enable the formulation of the deterministic and stochastic models in terms of the same type of Kelvin circulation theorem. \item Derivations of the SALT and LA-SALT stochastic versions of the OACM, whose flows all possess the same geometric transformation properties. This shared geometric structure enables the analysis to develop sequentially from deterministic to stochastic models. \item Mathematical analysis for the deterministic, SALT and LA-SALT versions of the OACM. Specifically, we prove existence and uniqueness of local solutions for the deterministic OACM, the existence of a martingale solution for the SALT version of the OACM and the existence and uniqueness of a local solution for the LA-SALT version. \item Outlook -- open problems, including further pursuit of the predictive equations derived here for the dynamics of OACM statistics. \item Two appendices provide details of the derivations of the deterministic and stochastic models using Hamilton's variational principle. \end{enumerate} \item Plan of the paper \begin{enumerate}[(1)] \item The introduction in Section \ref{sect_INTRO} explains that the present work is based on Hasselmann's program of fast-slow decomposition of the climate into deterministic and stochastic components. It also introduces the deterministic climate model upon which we implement Hasselmann's program and it compares the deterministic and stochastic models we treat in terms of their individual Kelvin circulation theorems. \item Section \ref{SectSectMathAnalysis1} proves the local existence and uniqueness properties of our variational geometric adaptation of the deterministic Gill-Matsuno climate model. \item Section \ref{Sec3} discusses the analytical properties of the SALT (Section \ref{sec-SALT}) and LA-SALT (Section \ref{LA-SALT-eqns-sec}) stochastic models. \item Section \ref{Sec4} provides a concluding summary and a specification of open problems for the SALT and LA-SALT OACM. \end{enumerate} \end{enumerate} \color{black} \bigskip \subsection{The Deterministic Climate Model}\label{subsect_DetModel} The atmospheric component of our idealized climate model consists of the compressible 2D Navier-Stokes equations coupled to an advection-diffusion equation for the temperature $\theta^a$. The atmospheric velocity field $\U^a$ transports the temperature, which in turn provides the gradient term in the velocity equation. The ocean component of the coupled system consists of the 2D incompressible Navier-Stokes equations and an equation for the oceanic temperature variable $\theta^o$ that is passively advected by the ocean velocity field $\U^o$. Here the pressure acts as a Lagrange multiplier to impose incompressibility.
More specifically, the deterministic coupled PDEs for the ocean and the atmosphere are given by \begin{align} \text{Atmosphere: }\quad&\frac{\partial \U^a}{\partial t} +(\U^a\cdot\nabla)\U^a +\frac{1}{Ro^a}\U^{a\bot} +\frac{1}{Ro^a}\nabla \theta^a = \frac{1}{Re^a}\triangle\U^a,\label{COUPLED_SWE_VELOC_A} \\ & \frac{\partial \theta^a}{\partial t} + (\U^a\cdot\nabla)\theta^a = \gamma(\theta^a - \theta^o) +\frac{1}{Pe^a}\triangle \theta^a.\label{COUPLED_SWE_T_A}\\ \text{Ocean: }\quad&\frac{\partial \U^o}{\partial t} +(\U^o\cdot\nabla)\U^o +\frac{1}{Ro^o}\U^{o\bot} +\frac{1}{Ro^o}\nabla (p^o+q^a) = \sigma(\U^o-\bar{\U}^a_{sol}) +\frac{1}{Re^o}\triangle\U^o,\label{COUPLED_SWE_VELOC_O}\\ &\frac{\partial \theta^o}{\partial t} +(\U^o\cdot\nabla)\theta^o =\frac{1}{Pe^o}\triangle\theta^o,\label{COUPLED_SWE_T_O}\\ &\Div(\U^o)=0,\label{COUPLED_SWE_INCOMPRESS_O}\\ \text{with }& \text{ initial conditions }\nonumber\\ &\U^a(t_0)=\U^a_0,\ \theta^a(t_0)=\theta^a_0, \ \U^o(t_0)=\U^o_0,\ \theta^o(t_0)=\theta^o_0\nonumber. \end{align} In these equations, the ocean velocity $\U^o$ is coupled to the atmospheric velocity $\U^a$ and the atmospheric temperature $\theta^a$ is coupled to the oceanic temperature $\theta^o$. The {\it coupling constants} $\gamma,\sigma<0$ regulate the strength of the interaction between the two components. The velocity coupling between the compressible atmosphere and the incompressible ocean model deserves some consideration. To preserve the incompressibility of the oceanic velocity field during the coupling we apply the Leray-Helmholtz Theorem to decompose the atmospheric velocity $\U^a$ into a solenoidal component $\U^a_{sol}$ and a gradient part $\nabla q^a$ such that $\U^a=\U^a_{sol}+\nabla q^a$. The gradient part is combined with the oceanic pressure. In a second step we remove the space average via $\bar{\U}:=\U-\frac{1}{|\Omega|}\int_\Omega \U dx$ such that the oceanic velocity field remains in the space of periodic flows with vanishing average. This property allows one to determine the oceanic pressure. Physically, this step removes the rapid mean velocity of the atmosphere relative to the slower ocean velocity in the frame of motion of the Earth's rotation. This means that the ocean momentum responds to the shear force, which is proportional to the difference between the local ocean velocity at a given time and the local deviation of the atmospheric velocity away from its mean velocity. The model above belongs to the class of {\it intermediate coupled models}. These models are much simpler than the coupled general circulation models of the atmosphere-ocean system that are used for climate research. Intermediate coupled models allow one to study fundamental aspects of the atmosphere-ocean interaction. The most prominent example is the {\it El Ni\~{n}o-Southern Oscillation (ENSO)} in the tropical Pacific. As originally hypothesized by Bjerknes in 1969 \cite{BJERKNES_1969}, this climate phenomenon crucially depends on the coupled interaction of both ocean and atmosphere. According to Bjerknes, stronger trade winds increase the upwelling in the east Pacific, thereby creating a gradient in the sea-surface temperature that amplifies the trade winds. This interaction between the trade winds and sea surface temperature in the tropical Pacific generates a quasi-periodic oscillation between the three ENSO-phases: the neutral phase, El Ni\~{n}o and La Ni\~{n}a.
Intermediate coupled models have been used successfully to shed light on the fundamental principle of ENSO, thereby confirming Bjerknes' hypothesis. \begin{figure}[h] \begin{center} \includegraphics*[width=0.75\textwidth, height=0.3\textheight]{ElNino_uk_L.pdf}\end{center} \caption{\it\small Illustration of the dynamics and feedbacks of the atmosphere-ocean model that generate the El Ni\~{n}o-Southern Oscillation (ENSO). The trade winds that are part of the Walker circulation in the tropical Pacific interact with the cold/warm pools of the sea surface temperature. Local heating creates wind anomalies that in turn change the thermocline and the upwelling. During these processes Rossby and Kelvin waves are emitted. The feedback of the ocean weakens the trade winds. } \label{fig:coupledSystem} \end{figure} The story of intermediate coupled models began with (uncoupled) models to study equatorial waves and their response to external forcing. Matsuno \cite{MATSUNO} investigated an (uncoupled) divergent barotropic model (single layer of incompressible fluid of homogeneous density, with a free surface, on the beta plane) \begin{equation}\begin{split}\label{MATSUNO} &\frac{\partial \U}{\partial t} +\frac{1}{Ro^a}\U^{\bot} +\frac{1}{Ro^a}\nabla \theta =0,\\ &\frac{\partial \theta}{\partial t} +H\Div(\U)= Q. \end{split}\end{equation} Matsuno \cite{MATSUNO} refers to $\theta$ as {\it surface elevation} above a mean depth $H$, and in this context $Q$ appears as a source/sink of mass. Gill \cite{GILL} studied the steady response to heating anomalies of a tropical atmosphere, as described by the Matsuno model. Systems of equations in the following class are often called {\it Gill models}: \begin{equation}\begin{split}\label{GILL} &\frac{\partial \U}{\partial t} +\frac{1}{Ro^a}\U^{\bot} +\frac{1}{Ro^a}\nabla \theta + a\U =0,\\ &\frac{\partial \theta}{\partial t} +H\Div(\U) +b\theta= Q, \end{split}\end{equation} where $a,b$ are the Rayleigh friction and Newtonian cooling coefficients and where $Q$ is a heating term. In Gill's work $\theta$ is proportional to the {\it surface pressure}. Since surface hydrostatic pressure is proportional to surface height, this identification is consistent with Matsuno's interpretation. Atmospheric models of Gill-Matsuno type are often used to understand the atmospheric response to observed sea surface temperature anomalies during an El Ni\~{n}o. Zebiak \cite{ZEBIAK1982} parametrized the heat flux $Q$ from the ocean to the atmosphere in terms of the ocean sea-surface temperature (SST) as $Q=\alpha\, SST$. This relation can be motivated by a linearization of the Clausius-Clapeyron relation (see \cite{ZEBIAK1986} and also \cite{HANEY}). As a next step, intermediate coupled models were constructed with an atmospheric model either of Gill-Matsuno type \cite{GILL} or given by a statistical model of the atmosphere, constructed for example through a statistical analysis of atmospheric data (e.g. by Empirical Orthogonal Functions). The atmospheric model is then coupled to a one- or two-layer ocean model. Nonlinear terms are omitted. The famous {\it Cane-Zebiak model} \cite{CANE_1985} applied a steady-state atmosphere following Gill (\ref{GILL}) and a two-layer ocean model, with two equations for layer thickness and two equations for temperature. This model was used to issue the first ENSO forecast \cite{CANE_1986}. An overview can be found in chapter 7 of \cite{DIJKSTRA_BOOK}, or in \cite{ENSO_THEORY}.
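For concreteness, the linear Gill model (\ref{GILL}) can be integrated on a doubly periodic grid with a few lines of code. The following Python sketch is purely illustrative and is not used anywhere in the analysis below; the grid size, the parameter values, the heating anomaly $Q$ and the convention $\U^{\bot}=(-v,u)$ are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

# Illustrative time stepping of the linear Gill model on a periodic square.
N, Lbox = 64, 2 * np.pi
dx = Lbox / N
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

Ro, a, b, H = 0.3, 0.1, 0.1, 1.0                  # illustrative parameters
dt, n_steps = 1e-3, 2000
Q = np.exp(-((X - Lbox/2)**2 + (Y - Lbox/2)**2))  # localized heating anomaly

def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)

def rhs(state):
    u, v, th = state                              # velocity (u, v) and theta
    du = (1/Ro) * v - (1/Ro) * ddx(th) - a * u    # -(1/Ro) u_perp, u_perp = (-v, u)
    dv = -(1/Ro) * u - (1/Ro) * ddy(th) - a * v
    dth = -H * (ddx(u) + ddy(v)) - b * th + Q
    return np.array([du, dv, dth])

state = np.zeros((3, N, N))
for _ in range(n_steps):
    k1 = rhs(state)
    k2 = rhs(state + dt * k1)
    state = state + 0.5 * dt * (k1 + k2)          # Heun (RK2) step
u, v, theta = state
print("max |theta| of the response:", np.abs(theta).max())
\end{verbatim}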
We have modified the equations in the model above by including the nonlinear terms in the velocity and temperature equations. First, we have replaced the damping terms due to Rayleigh friction and Newtonian cooling in (\ref{GILL}) by Laplace operators for velocity and temperature. Furthermore, we interpret the atmospheric variable $\theta=h$ as surface elevation or equivalent depth and combine this with {\it Charles' law} of thermodynamics (see e.g. \cite{DUTTON_BOOK}), according to which the volume $V$ is proportional to the temperature $T$, $V=c\, T$, with a constant $c>0$. Since the volume is also proportional to the surface elevation, $V=\tilde{c}\, h$ with $\tilde{c}>0$, one obtains $h\propto T$. This identification allows one to interpret $\theta=T$ as the atmospheric temperature variable. This interpretation also relates the Matsuno and Gill equations (\ref{MATSUNO}) and (\ref{GILL}) to the atmospheric $\theta^a$-equation and allows one to interpret $\theta^a$ as a temperature variable.\footnote{For horizontal 2D models, the difference between potential and absolute temperature disappears.} A distinct feature of ENSO is its pronounced irregularity: ENSO extremes occur irregularly in time and with vastly differing amplitudes of sea-surface temperature anomalies. For the explanation of this irregular behaviour, two theories exist. Both theories rely on the separation of time scales shown by observations of the spectrum of variability in the tropical atmosphere-ocean system. Namely, a distinct time-scale separation is observed between the subseasonal and interannual oscillations. These observations suggest a natural decomposition of the dynamics in the tropics into short and long time scales. One theory explains the ENSO irregularity as a result of chaotic dynamics exhibited by a nonlinear dynamical system through the interaction of the slow components, with the fast components playing a minor role. The other theory attributes the irregularity to a stochastic forcing of the slow modes by the fast modes, with applications to the Madden-Julian oscillation and westerly/easterly wind bursts. The latter approach leads immediately to the question of which type of stochastic forcing, additive or multiplicative, is appropriate \cite{Perez_2005}. We do not aim to resolve this debate here. Rather, we suggest a new modelling approach for the stochastic theory of ENSO's irregularity. The new modelling approach is based on stochastic transport along the Lagrangian paths of advected fluid properties. \subsection{The Stochastic Atmospheric Climate Model}\label{subsect_StochModel} The fundamental principle in modelling stochastic fluid advection is the Kelvin circulation theorem. As we shall see, each component of the deterministic atmosphere-ocean model in equations \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O} above possesses its own Kelvin theorem, and the two components are coupled together by their relative velocity. The model \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O} describes their interaction as the exchange of circulation between the atmosphere and ocean. Later, we treat the atmospheric component of the model as being stochastic either in the sense of weather (SALT) or in the sense of climate (LA-SALT). In either case, the stochastic modification of the atmospheric dynamics will retain a Kelvin circulation theorem.
\begin{theorem}[Kelvin theorem for the deterministic atmospheric model in \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O}]$ \label{KelvinThm_atmosphere_determnistic}\,$\\ The deterministic model for atmospheric dynamics satisfies the following Kelvin theorem for circulation around a loop $c(u^a)$ moving with the flow of the atmospheric velocity $\bs{u}^a$. Namely, \begin{align*} \frac{d}{dt}\oint_{c(u^a)} (\bs{u}^a + \frac{1}{Ro^a}\bs{R}(\bs{x}))\cdot d\bs{x} = \frac{1}{Re^a}\oint_{c(u^a)} \triangle\U^a \cdot d\bs{x} \,, \end{align*} where ${\rm curl}\bs{R} = 2\bs{\hat{z}}\Omega(\bs{x})$ is the Coriolis parameter in nondimensional units. \end{theorem} \begin{proof} By direct calculation, one shows that the deterministic atmospheric dynamics in the model above satisfies the relation in the Kelvin circulation theorem, \begin{align*} \frac{d}{dt}\oint_{c(u^a)} (\bs{u}^a + \frac{1}{Ro^a}\bs{R}(\bs{x}))\cdot d\bs{x} &= \oint_{c(u^a)} (\partial_t + \mathcal{L}_{u^a}) \big((\bs{u}^a + \frac{1}{Ro^a}\bs{R}(\bs{x}))\cdot d\bs{x}\big) \\&= \oint_{c(u^a)} \Big( \partial_t \bs{u}^a + (\bs{u}^a\cdot\nabla)\bs{u}^a + u_j^a\nabla {u^a}^j \\& \qquad- \bs{u}^a\times {\rm curl}\frac{1}{Ro^a}\bs{R}(\bs{x}) + \nabla (\bs{u}^a\cdot\frac{1}{Ro^a}\bs{R}) \Big)\cdot d\bs{x} \\\hbox{By the model}\quad& = \oint_{c(u^a)}\Big(- \frac{1}{Ro^a}\nabla\theta^a + \frac12\nabla |\bs{u}^a|^2 + \nabla (\bs{u}^a\cdot\frac{1}{Ro^a}\bs{R}) + \triangle\U^a \Big)\cdot d\bs{x} \\& = \frac{1}{Re^a}\oint_{c(u^a)} \triangle\U^a \cdot d\bs{x} \,. \end{align*} \end{proof} \begin{remark} In the proof above, $\mathcal{L}_u$ represents Lie derivative with respect to the vector field $u^a=\bs{u}^a \cdot\nabla$ with components $\bs{u}^a(\bs{x},t)$ and $c(u^a)$ denotes a material loop moving with the atmospheric Lagrangian transport velocity $\bs{u}^a(\bs{x},t)$. Consequently, in the absence of viscosity, atmospheric circulation is conserved by the deterministic model because the viscous term is absent then and the loop integrals of gradients such as $u_j\nabla u^j=\frac12\nabla |\bs{u}|^2$ vanish on the right-hand side of the equation in the proof. \\ \end{remark} Likewise, the dynamics of the ocean component of the model above satisfies the following Kelvin circulation theorem. \begin{theorem}[Kelvin theorem for the deterministic oceanic model in \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O}]$\,$\\ The circulation dynamics around a loop $c(u^o)$ moving with the flow of the oceanic velocity $\bs{u}^o$ is given by \begin{align*} \frac{d}{dt}\oint_{c(u^o)} (\bs{u}^o + \frac{1}{Ro^o}\bs{R}(\bs{x}))\cdot d\bs{x} = \oint_{c(u^o)} \Big( \sigma(\U^o-\bar{\U}^a) + \frac{1}{Re^o} \triangle\U^o \Big)\cdot d\bs{x} \,. \end{align*} \end{theorem} \begin{proof} The proof follows analogously to the proof of Theorem \ref{KelvinThm_atmosphere_determnistic}. \\\end{proof} \subsection*{Stochastic Advection by Lie Transport (SALT) atmospheric model.} Let $(\Xi ,\mathcal{F},(\mathcal{F}_{t})_{t},\mathbb{P})$ be a filtered probability space on which we have defined a sequence of independent Brownian motions $(W^{i})_{i}$. Let $(\xi_{i})_{i}$ be a given sequence of sufficiently smooth vector fields that satisfies the condition in (\ref{xiassumpt}) below. In this work we assume the vector fields $(\xi_{i})_{i}$ to be given. For numerical simulations one defines these vector fields by extracting information from observational data. For an example we refer to \cite{COTTERetal2020c}. 
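The stochastic Lagrangian paths introduced below are driven by the modes $\xi_i$ and the Brownian motions $W^i$ in the Stratonovich sense. For orientation only, the following Python sketch indicates how one realization of such a path, $\rmd\mathbf{x}_t=\U\,dt+\sum_i\xi_i\circ dW^i_t$ (up to the sign convention for the noise, which can be absorbed into the $\xi_i$), could be sampled with the Stratonovich-consistent Heun predictor-corrector rule. The drift field and the modes used here are placeholders chosen purely for illustration; they are not the data-derived vector fields that a genuine simulation would employ.
\begin{verbatim}
import numpy as np

# One realization of a Stratonovich stochastic Lagrangian path
#     dx_t = u(x,t) dt + sum_i xi_i(x) o dW^i_t
# sampled with the Heun rule, which is consistent with Stratonovich calculus.
rng = np.random.default_rng(1)

def u(x, t):                      # placeholder smooth drift velocity
    return np.array([np.sin(x[1]), np.cos(x[0])])

xis = [lambda x: np.array([0.1 * np.cos(x[0]), 0.0]),
       lambda x: np.array([0.0, 0.1 * np.sin(x[1])])]   # placeholder modes xi_i

dt, n_steps = 1e-2, 1000
x = np.array([0.5, 0.5])
for n in range(n_steps):
    t = n * dt
    dW = np.sqrt(dt) * rng.standard_normal(len(xis))

    def increment(y):             # drift plus noise increment per unit time
        return u(y, t) + sum(dWi / dt * xi(y) for dWi, xi in zip(dW, xis))

    x_pred = x + dt * increment(x)                          # Euler predictor
    x = x + 0.5 * dt * (increment(x) + increment(x_pred))   # Heun corrector

print("final position of the Lagrangian particle:", x)
\end{verbatim}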
The derivation of the SALT atmospheric model introduces the stochastic Lagrangian path \begin{align} \dxran := \U^a(\bs{x},t)dt-\sum_i\xi_i^a(\bs{x})\circ dW_i(t) \,.\label{Atmos-SALT-Lag-Path} \end{align} Following \cite{Holm2015}, Appendix \ref{Appendix-SALT} discusses the introduction of the stochastic Lagrangian paths in \eqref{Atmos-SALT-Lag-Path} into Hamilton's variational principle for the atmospheric model equations. This step leads to the SALT version of the idealized deterministic climate model comprising equations \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O}. Namely, the SALT model is specified by the system of stochastic differential equations below:\\[3mm] \noindent Atmosphere:\footnote{As in the deterministic case we will write the Coriolis parameter as $\mathrm{curl}\,\mathbf{R}(% \mathbf{x})=2\Omega \mathbf{(x}).$} \begin{align} & d\mathbf{u}^{a}+(d\mathbf{x}_{t}^{a}\cdot \nabla )\mathbf{u}^{a}+\frac{1}{Ro^{a}}d\mathbf{x}_{t}^{a\bot }+{\sum_{i}\Big(u_{j}^{a}\nabla \xi _{i}^{j}+\frac{1}{Ro^{a}}\nabla \Big(R_{j}\mathbf{(x})\xi _{i}^{j}\Big)\Big)\circ dW_{t}^{i}} \notag \\ & \hspace{3cm}+\frac{1}{Ro^{a}}\nabla \theta ^{a}=\frac{1}{Re^{a}}\triangle \mathbf{u}^{a}, \label{COUPLED_SWE_VELOC_A_STOCH} \\ & d\theta ^{a}+d\mathbf{x}_{t}^{a}\cdot \nabla \theta ^{a}=-\gamma (\theta ^{o}-\theta ^{a})+\frac{1}{Pe^{a}}\triangle \theta ^{a}, \label{COUPLED_SWE_T_A_STOCH} \\ & d\mathbf{x}_{t}^{a}=\mathbf{u}^{a}dt+{\sum_{i}\xi _{i}\circ dW_{l}^{i}} \label{stochasticpath} \end{align} \noindent Ocean: \begin{align} \frac{\partial \mathbf{u}^{o}}{\partial t}+(\mathbf{u}^{o}\cdot \nabla )\mathbf{u}^{o}&+\frac{1}{Ro^{o}}\mathbf{u}^{o\bot }+\frac{1}{Ro^{o}}\nabla p^{o} \notag \\ & =\sigma (\mathbf{u}^{o}-\mathbb{E}\bar{\mathbf{u}^{a}})+\frac{1}{Re^{o}}\triangle \mathbf{u}^{o}, \label{COUPLED_SWE_VELOC_O_STOCH} \\ \frac{\partial \theta ^{o}}{\partial t}+(\mathbf{u}^{o}\cdot \nabla )\theta ^{o}&=\frac{1}{Pe^{o}}\triangle \theta ^{o}, \label{COUPLED_SWE_T_O_STOCH} \\ {\ div}(\mathbf{u}^{o})&=0, \label{COUPLED_SWE_INCOMPRESS_O_STOCH} \end{align} \begin{theorem}Kelvin theorem for the SALT version of the atmospheric model in equations \eqref{COUPLED_SWE_VELOC_A_STOCH} - \eqref{stochasticpath} \begin{align} {\rm d}\oint_{c(\dxran)} (\bs{u}^a + \frac{1}{Ro^a}\bs{R}(\bs{x}))\cdot d\bs{x} = \frac{1}{Re}\oint_{c(\dxran)} \triangle\U^a \,dt \cdot d\bs{x} \,,\label{Atmos-SALT-Kel} \end{align} where $c(\dxran)$ denotes any closed material loop whose line elements follow stochastic Lagrangian paths as in \eqref{Atmos-SALT-Lag-Path}. \end{theorem} \begin{proof} Upon suppressing the superscript $a$ in the velocity $\bs{u}^a$ for brevity of notation, we calculate \begin{align*} {\rm d}\oint_{c(\dxran)} (\bs{u}+ \bs{R}(\bs{x}))\cdot d\bs{x} &= \oint_{c(\dxran)} ({\rm d} + \mathcal{L}_{\dxran}) \big((\bs{u} + \bs{R}(\bs{x}))\cdot d\bs{x}\big) \\&= \oint_{c(\dxran)} \Big( {\rm d} \bs{u} + (\rmd\bs{x}_t\cdot\nabla)\bs{u} + u_j \nabla {\rm d}x_t^j \\& \qquad - \dxran\times {\rm curl}\bs{R}(\bs{x}) + \nabla (\dxran\cdot\bs{R}) \Big)\cdot d\bs{x} \\\hbox{[By motion equation \eqref{COUPLED_SWE_VELOC_A_STOCH}]}\quad& = \oint_{c(\dxran)}\Big(- \nabla\theta dt + \frac12\nabla |\bs{u}|^2dt - \dxran\times {\rm curl}\bs{R}(\bs{x}) + \nabla (\bs{u}\cdot\bs{R})dt \\& \qquad + { u_j \nabla \sum \xi^j\circ dW(t) + \sum\nabla \big(\bs{\xi}\circ dW(t)\cdot\bs{R}\big)} \Big)\cdot d\bs{x} \\&= \frac{1}{Re}\oint_{c(\dxran)} \triangle\U\,dt \cdot d\bs{x} \,. 
\end{align*} \end{proof} \begin{remark} The stochastic equation for the potential temperature $\theta^a$ in the atmospheric model inherits the stochasticity of the Lagrangian trajectories $\dxran$ in \eqref{Atmos-SALT-Lag-Path}, as a scalar tracer transport equation, \begin{align} {\rm d}\theta^a + (\rmd\bs{x}_t\cdot\nabla)\theta^a = \big[ \gamma(\theta^a - \theta^o) +\frac{1}{Pe^a}\triangle \theta^a\big]dt. \label{Stoch_THETA_A} \end{align} \end{remark} \subsection{Lagrangian-Averaged Stochastic Advection by Lie Transport (LA-SALT) atmospheric model.} \color{black} We next modify the SALT approach to the two-dimensional atmospheric component of the climate system in the previous section to make it non-local in probability space, in the sense that the expected velocity will replace the drift velocity in the semimartingale for the SALT transport velocity of the stochastic fluid flow. This stochastic fluid model is derived by exploiting a novel idea introduced in \cite{DH2020} and developed further in \cite{AdLHT2020,DHL2020}, of applying Lagrangian-averaging (LA) in probability space to the fluid equations governing stochastic advection by Lie transport (SALT), which were introduced in \cite{Holm2015}. The LA-SALT approach achieves three results of potential interest in climate modelling. These results address three different components of the climate change problem. \begin{itemize} \item First, the LA-SALT approach introduces a sense of determinism into climate science, by replacing the drift velocity of the stochastic vector field for material transport by its expected value in equation \eqref{Atmos-SALT-Lag-Path}. In this step, the expected fluid velocity becomes deterministic. \item Second, the LA-SALT approach reduces the dynamical equations for the fluctuations to a \emph{linear} stochastic transport problem with a deterministic drift velocity. Such problems are well-posed. We prove here that the LA-SALT version of the SALT climate model possesses local weak solutions. \item Third, the LA-SALT approach addresses the dynamics of the variances of the fluctuations. In particular, the third result enables the variances and higher moments of the fluctuation statistics to be found deterministically, as they are driven by a certain set of correlations of the fluctuations among themselves. \end{itemize} In summary, the first LA-SALT result makes the distinction between climate and weather for the case at hand. Namely, the LA-SALT fluid equations for the 2D atmosphere-ocean climate model system may be regarded as a dissipative system akin to the Navier-Stokes equations for the expected motion (climate), which is embedded into a larger conservative system that includes the statistics of the fluctuation dynamics (weather). The second result provides a set of linear stochastic transport equations for predicting the fluctuations (weather) of the physical variables, as they are driven by the deterministic expected motion. The third result produces closed deterministic evolutionary equations for the dynamics of the variances and covariances of the stochastic fluctuations. Thus, the LA-SALT approach to investigating the 2D atmosphere-ocean climate model system treated here reveals that its statistical properties are fundamentally dynamical. Specifically, the LA-SALT analysis of the 2D atmosphere-ocean model presented here defines its climate, climate change, weather, and change of weather statistics, in the context of a hierarchical system of PDEs and SPDEs with unique local weak solutions.
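In any numerical realization of LA-SALT, the expectation $\mathbb{E}[\U^a]$ appearing in the drift must itself be approximated; the standard device, in the spirit of the McKean-Vlasov interpretation of the LA-SALT Lagrangian path, is an interacting ensemble in which the expectation is replaced by the ensemble mean over $N$ realizations that all share the same mean drift. The Python fragment below is only a schematic of this idea: the one-step update, the noise modes and all numerical values are placeholders and are not tied to the concrete equations or discretizations used elsewhere in this paper.
\begin{verbatim}
import numpy as np

# Schematic ensemble (particle) approximation of the LA-SALT drift:
# the expectation E[u] is replaced by the mean over N_ens members; each
# member uses its own Brownian increments but the SAME mean drift.
rng = np.random.default_rng(2)
N_ens, n_modes, n_grid = 50, 4, 128
dt, n_steps = 1e-3, 100

u_ens = rng.standard_normal((N_ens, n_grid))         # placeholder velocity fields
xi = 0.05 * rng.standard_normal((n_modes, n_grid))   # placeholder noise modes

def one_step(u, u_mean, dW):
    # Placeholder update: deterministic transport by the ensemble mean
    # (the LA-SALT feature), plus the member's own stochastic increment.
    transport = -u_mean * np.gradient(u)
    noise = sum(dW[i] * xi[i] for i in range(n_modes))
    return u + dt * transport + noise

for _ in range(n_steps):
    u_mean = u_ens.mean(axis=0)                       # Monte Carlo proxy for E[u]
    dW = np.sqrt(dt) * rng.standard_normal((N_ens, n_modes))
    u_ens = np.stack([one_step(u_ens[k], u_mean, dW[k]) for k in range(N_ens)])

fluct = u_ens - u_ens.mean(axis=0)                    # fluctuations u' = u - E[u]
print("ensemble variance of the fluctuations:", fluct.var())
\end{verbatim}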
\color{black} \paragraph{LA-SALT} The expectation terms in LA-SALT induce another modification of the model which preserves the Kelvin circulation theorem, whose expectation yields a deterministic equation, \begin{theorem}[Kelvin theorem for the LA-SALT atmospheric model] \begin{align} d\oint_{c(\dXran)} \big(\bs{u}+ \bs{R}(\bs{x})\big)\cdot d\bs{x} &= \frac{1}{Re}\oint_{c(\dXran)} \triangle\U \,dt\cdot d\bs{x} \label{KelThm-LASALT} \end{align} where \[\dXran^a := \mathbb{E}[\U^a](\bs{x},t)dt+\sum_i\xi_i^a\circ dW_i(t).\] \end{theorem} \noindent {\bf Proof.} The proof of the Kelvin theorem for LA-SALT follows the same lines as for SALT. $\Box$ \paragraph{LA-SALT atmospheric equations in Stratonovich form.} As shown in Appendix \ref{Appendix-LASALT}, the \emph{Stratonovich} LA-SALT equations are given in the standard notation for stochastic fluid dynamics by expanding out Kelvin's theorem in \eqref{KelThm-LASALT} to find \begin{align} d\mathbf{u}^{a}+({d\mathbf{X}_{t}}^{a}\cdot \nabla )\mathbf{u}^{a}+\frac{1}{Ro^{a}}{d\mathbf{X}_{t}}^{a\bot }& +{ \sum_{i}\Big(u_{j}^{a}\nabla \xi _{i}^{j}+\frac{1}{Ro^{a}}\nabla \Big(R_{j}\mathbf{(x})\xi _{i}^{j}\Big)\Big)\circ dW_{t}^{i}} \notag\\ & \hspace{-22mm}{+\,u_{j}^{a}\nabla \mathbb{E}[{u^{a}}^{j}]dt+\frac{1}{Ro^{a}}\nabla (\mathbb{E}[{\mathbf{u}}^{a}]\cdot \mathbf{R})dt}+\frac{1}{Ro^{a}}\nabla \theta ^{a}\,dt=\frac{1}{Re^{a}}\triangle \mathbf{u}^{a}\,dt\,, \notag\\ & d\theta ^{a}+{d\mathbf{X}_{t}}^{a}\cdot \nabla \theta ^{a}=-\gamma (\theta ^{o}-\theta ^{a})\,dt + \frac{1}{Pe^{a}}\triangle \theta ^{a}\,dt\,. \label{COUPLED_SWE_T_A_STOCH_LA-Strat-Intro} \end{align} where the \emph{Stratonovich} stochastic Lagrangian trajectory for LA-SALT is given by \begin{equation} {{\sf d}\mathbf{X}_{t}^{a}} := {\mathbb{E}[\mathbf{u}^{a}]}(\bs{x},t)dt + \sum_{i}\xi _{i}^{a}(\bs{x})\circ dW_{i}(t). \label{Lag-path-Strat} \end{equation} \paragraph{LA-SALT atmospheric equations in It\^o form.} Likewise, the \emph{It\^o} LA-SALT equations are given in the standard notation for stochastic fluid dynamics in Appendix \ref{Appendix-LASALT} by \begin{align} & d\mathbf{u}^{a}+({d\mathbf{\wh{X}}_{t}}^{a}\cdot \nabla )\mathbf{u}^{a}+\frac{1}{Ro^{a}}{d\mathbf{\wh{X}}_{t}}^{a\bot } + { \sum_{i}\Big(u_{j}^{a}\nabla \xi _{i}^{j}+\frac{1}{Ro^{a}}\nabla \Big(R_{j}\mathbf{(x})\xi _{i}^{j}\Big)\Big) dW_{t}^{i}} \nonumber \\ &+\frac12 \bigg[ \mathbf{\hat{z}}\times \xi \Big( {\rm div}\Big(\xi\,\big(\,\mathbf{\hat{z}}\cdot{\rm curl}\,(\,\mathbb{E}[{\mathbf{u}}^{a}] + \frac{1}{Ro^{a}}\mathbf{R}(\mathbf{x}) \big)\Big) \,\,\Big) - \nabla \bigg( \xi\cdot\nabla\Big(\xi \cdot\big(\mathbb{E}[{\mathbf{u}}^{a}] + \frac{1}{Ro^{a}}\mathbf{R}(\mathbf{x})\big) \Big) \bigg)\bigg]dt \nonumber \\& \hspace{22mm}{+\,u_{j}^{a}\nabla \mathbb{E}[{u^{a}}^{j}]dt+\frac{1}{Ro^{a}}\nabla (\mathbb{E}[{\mathbf{u}}^{a}]\cdot \mathbf{R})dt}+\frac{1}{Ro^{a}}\nabla \theta ^{a}\,dt = \frac{1}{Re^{a}}\triangle \mathbf{u}^{a}\,dt\,, \label{COUPLED_SWE_VELOC_A_STOCH_LA-Ito-Intro} \\ & d\theta ^{a}+{d\mathbf{\wh{X}}_{t}}^{a}\cdot \nabla \theta ^{a} - \frac12 \Big(\xi\cdot\nabla(\xi\cdot\nabla \theta^a) \Big)dt =-\gamma (\theta ^{o}-\theta ^{a})\,dt + \frac{1}{Pe^{a}}\triangle \theta ^{a}\,dt \,, \label{COUPLED_SWE_T_A_STOCH_LA-Ito-Intro} \end{align} where the \emph{It\^o} stochastic Lagrangian trajectory for LA-SALT is given by \begin{equation} {{\sf d}\mathbf{\wh{X}}_{t}^{a}} := {\mathbb{E}[\mathbf{u}^{a}]}(\bs{x},t)dt + \sum_{i}\xi _{i}^{a}(\bs{x})dW_{i}(t). 
\label{Lag-path-Ito} \end{equation} \begin{remark}[Expected LA-SALT atmospheric equations.] Taking the expectation of equations \eqref{COUPLED_SWE_VELOC_A_STOCH_LA-Ito-Intro} and \eqref{COUPLED_SWE_T_A_STOCH_LA-Ito-Intro} yields a closed set of deterministic PDEs for the expectations $\mathbb{E}[{\mathbf{u}}^{a}]$ and $\mathbb{E}[\theta ^{a}]$. Subtracting the expectations from equations \eqref{COUPLED_SWE_VELOC_A_STOCH_LA-Ito-Intro} and \eqref{COUPLED_SWE_T_A_STOCH_LA-Ito-Intro} yields \emph{linear equations} for the differences, \begin{align} {\mathbf{u}}^{a'}:= {\mathbf{u}}^{a} - \mathbb{E}[{\mathbf{u}}^{a}] \quad\hbox{and}\quad \theta ^{a'} := \theta ^{a} - \mathbb{E}[\theta ^{a}] \,. \label{fluctuations_u_theta} \end{align} Since ${\mathbf{u}}^{a'}$ and $\theta ^{a'}$ satisfy $\mathbb{E}[{\mathbf{u}}^{a'}]=0$ and $\mathbb{E}[\theta ^{a'}]=0$, one may regard these difference variables as fluctuations of ${\mathbf{u}}^{a}$ and $\theta ^{a}$ away from their expected values. From here one can calculate the dynamical equations for the statistics of the atmospheric model, e.g., its variances and its other tensor moments, as detailed in \cite{AdLHT2020,DH2020,DHL2020}. Further details of these equations can be found in Section \ref{LA-SALT-eqns-sec}. \end{remark} \paragraph{Oceanic part of the LA-SALT model.} The oceanic part of the LA-SALT model \emph{coincides} with the oceanic part of the SALT model (\ref{COUPLED_SWE_VELOC_O_STOCH})-(\ref{COUPLED_SWE_INCOMPRESS_O_STOCH}). \newpage \section{Local Existence and Uniqueness of the Deterministic Climate Model}\label{SectSectMathAnalysis1} In the treatment below, we compactify the notation for the dynamics of the two-component system \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O}, as follows. The state of the system is described by a state vector $\psi:=(\psi^a,\psi^o)$ with atmospheric component $\psi^a:=(\U^a,\theta^a)$ and oceanic component $\psi^o:=(\U^o,\theta^o)$. The initial state is denoted by $\psi(t_0)=\psi_0$, where $\psi_0=(\U^a_0,\theta^a_0,\U^o_0,\theta^o_0)$. In this notation, equations \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O} take the operator form \begin{equation} d_t\psi+B\left( \psi,\psi\right) +C \psi +D(\psi^a,\psi^o)=L\psi, \label{eq:CoupledOperator} \end{equation} where one defines \begin{itemize} \item $B:=(B^a,B^o)$ is the usual bilinear transport operator, with $B^a( \psi^a,\psi^a):=(\U^a\cdot\nabla\U^a,\U^a\cdot\nabla\theta^a)$ and $B^o( \psi^o,\psi^o):=(\U^o\cdot\nabla\U^o,\U^o\cdot\nabla\theta^o)$. \item $C:=(C^a,C^o)$ with $C^a\psi^a:=(\frac{1}{Ro^a}\U^{a\bot}+\nabla\theta^a,0)$ and $C^o\psi^o:=(\frac{1}{Ro^o}\U^{o\bot}+\nabla p^o,0)$. \item $L:=(L^a,L^o)$ denotes the dissipation/diffusion operator for velocity and temperature with $L^a\psi^a:=(\frac{1}{Re^a}\triangle\U^a,\frac{1}{Pe^a}\triangle\theta^a)$ and $L^o\psi^o:=(\frac{1}{Re^o}\triangle\U^o,\frac{1}{Pe^o}\triangle\theta^o)$. \item $D(\psi^a,\psi^o):=(0, \gamma(\theta^a - \theta^o),\sigma(\U^o-\bar{\U}^a_{sol}), 0)$ is the coupling operator. \end{itemize} \emph{Domain and Boundary Conditions:} The spatial domain is a two-dimensional square $\Omega:=[0,L]\times [0,L]$ with $L\in \mathbb{R}^+$. We assume periodic boundary conditions.
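Because the coupling operator $D$ involves the mean-free solenoidal part $\bar{\U}^a_{sol}$ of the atmospheric velocity, and because the boundary conditions are periodic, both the Leray-Helmholtz projection and the removal of the spatial mean can be realized conveniently in Fourier space. The following Python sketch is a minimal illustration of these two operations on a uniform grid, assuming a numpy-based discrete Fourier transform; it is not the discretization used in the analysis of this section.
\begin{verbatim}
import numpy as np

# Leray-Helmholtz projection and mean removal on the periodic square [0,L)^2:
#     u = u_sol + grad q,     u_bar := u - (1/|Omega|) int_Omega u dx.
# In Fourier space, u_sol(k) = u(k) - k (k . u(k)) / |k|^2 for k != 0.
def leray_project(u, v, L=2 * np.pi):
    N = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                 # avoid division by zero at k = 0
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = (kx * uh + ky * vh) / k2               # (k . u_hat) / |k|^2
    return (np.fft.ifft2(uh - kx * div_h).real,
            np.fft.ifft2(vh - ky * div_h).real)

def remove_mean(f):
    return f - f.mean()                            # subtract the spatial average

# usage on a placeholder atmospheric velocity field
rng = np.random.default_rng(3)
N = 64
ua, va = rng.standard_normal((N, N)), rng.standard_normal((N, N))
ua_sol, va_sol = leray_project(ua, va)
ua_bar, va_bar = remove_mean(ua_sol), remove_mean(va_sol)
\end{verbatim}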
\noindent \emph{Operators and Spaces:} By $W^s(\Omega)$ we denote the $L^2$-Sobolev space of order $s\in\bb{Z}_+\cup \{0\}$ that is defined as the set of functions $f\in L^2(\Omega)$ such that its derivatives in the distributional sense $\Deriv f(x,y)=\partial^{\alpha_1}_{x}\partial^{\alpha_2}_{y} f(x,y)$ are in $L^2(\Omega)$ for all $|\alpha|\leq s$, with multi-index $\alpha=(\alpha_1,\alpha_2)\in\mathbb{Z}_+^2$, and degree $\abs{\alpha}:=\alpha_1+\alpha_2$. The scalar product in $W^s(\Omega)$ is defined by \begin{equation}\label{SOBOLEV_SCALATR_PROD} \big<f,g\big>_{W^s}:=\sum_{|\alpha|\leq s}\int_\Omega \Deriv f\cdot\Deriv g\, dx. \end{equation} The vectorial counterpart of the Sobolev space $W^s(\Omega)$ is denoted by $\mathbf{W}^s(\Omega)$. More information about Sobolev spaces can be found for example in \cite{Evans}, \cite{Mazja}. We define the scalar space \begin{equation}\begin{split}\label{V_def_scal} {V}:=\{f:\mathbb{R}^2\to\mathbb{R}:\ &f\text{ is a trigonometric polynomial with period L}\}, \end{split}\end{equation} and its vector-valued equivalents for atmosphere and ocean component \begin{equation}\begin{split}\label{V_def_vec} &\mathbf{ V}^a:=\{u:\mathbb{R}^2\to\mathbb{R}^2:\ u\text{ is a vector-valued trigonometric polynomial with period L}\}\\ &\mathbf{ V}^o:=\{u:\mathbb{R}^2\to\mathbb{R}^2:\ u\text{ is a vector-valued trigonometric polynomial with period L}\\ &\hskip2.5cm \text{and }\int_\Omega u\, dx=0\}. \end{split}\end{equation} We define now the following function spaces \begin{equation}\begin{split}\label{SOBOLEV} &H^s(\Omega) := \text{the closure of }V\ \text{in }W^s(\Omega),\quad \mathbf{H}^{s,a}(\Omega):= \text{the closure of }\mathbf{ V}^a\ \text{in }\mathbf{W}^s(\Omega),\\ &\mathbf{H}^{s,o}(\Omega):= \text{the closure of }\mathbf{ V}^o\ \text{in }\mathbf{W}^s(\Omega),\quad \mathbf{H}^s_{div}(\Omega):=\{u\in {\bf H}^{s,o}(\Omega): \Div(u)=0\}. \end{split}\end{equation} In this notation, we define for $s\in\bb{N}\cup\{0\}$ the {\it Sobolev space of state vectors} by \begin{equation}\label{Sobolev_StateSpace} \mathcal{H}^s(\Omega):= \mathbf{H}^{s,a}(\Omega)\times H^{s}(\Omega) \times \mathbf{H}^{s,o}_{div}(\Omega)\times H^{s}(\Omega), \end{equation} in which the norm of $\psi=(\U^a,\theta^a,\U^o,\theta^o)\in \mathcal{H}^{{s}}$ is given by \begin{equation}\label{Sobolev_StateSpaceNorm} ||\psi||_{\mathcal{H}^{s}}:= (||\U^a||_{\mathbf{H}^{s}}^2 + ||\theta^a||_{H^{s}}^2+ ||\U^o||_{\mathbf{H}^{s}}^2 + ||\theta^o||_{ H^{s}}^2)^{1/2}. \end{equation} We use an analogous notation for the Lebesgue spaces and denote by $L^2, {\bf L}^2, \mathcal{L}^2$ the sets of square-integrable scalar functions, vector fields and state vectors, respectively. \begin{definition}\label{REGULAR_SOL} Let $s\in\bb{N}$. 
A state vector $\psi:=(\U^a,\theta^a,\U^o,\theta^o)$ is said to be a local regular solution of (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}) on the time interval $T:=[t_0,t_1]$ if it satisfies (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}) with initial condition $\psi(t_0)=\psi_0$ and if \begin{equation}\begin{split}\label{REGULAR_SOL1} \psi\in C(T,\mathcal{H}^s(\Omega))\cap L^2(T,\mathcal{H}^{s+1}(\Omega)), \qquad {\frac{d\psi}{dt}}\in \mathcal{H}^0 (T,\mathcal{H}^{s-1}(\Omega)) \end{split}.\end{equation} \end{definition} We define a cut-off function as follows: \begin{equation}\begin{split} \label{COUPLED_SWE_GALERKIN_TRUNC} g_R(x):= \begin{cases} &1,\quad \text{if }0\leq x\leq R,\\ &0,\quad \text{if } x\geq R+\delta,\\ &\text{smoothly decaying}\quad \text{if }R<x<R+\delta \end{cases} \end{split}.\end{equation} Next, we define a finite-dimensional approximate system of equations for (\ref{eq:CoupledOperator}). Because of our assumption of periodic boundary conditions, we may write the finite-dimensional approximation in terms of the Fourier basis $w_{\bf n}({\bf x}):=e^{\frac{2\pi i {\bf n}\cdot {\bf x}}{L}}$. We remark that the specific form of the basis does not play a role in our proofs. We must take into account the incompressibility of the ocean flow. This is imposed through the Leray projection, which projects the ocean equation onto the space of divergence-free vector fields. For periodic boundary conditions, the Leray projection commutes with the Laplace operator, so the Stokes operator coincides with the Laplacian. The Galerkin approximations for the atmospheric component $\psi^a=(\U^a,\theta^a)$ of the state vector are given by \begin{equation}\begin{split}\label{GALERKIN_U} & {\bf P}_m^a\U^a(x,t):=\sum_{{\bf n}\in\{\mathbb{Z}^2, |{\bf n}|\leq m\}} \widehat{\U}^a_{\bf n}(t) w_{\bf n}(x)\\ \text{ and }&\ P_m^a\theta^a(x,t):=\sum_{{\bf n}\in\{\mathbb{Z}^2, |{\bf n}|\leq m\}} \hat{\theta}_{\bf n}^a(t) w_{\bf n}(x), \end{split}\end{equation} with $\widehat{\U}^a_{\bf n}:=\int_\Omega \U^a(x,t)w_{\bf n}(x)\, dx$, $\hat{\theta}_{\bf n}^a:=\int_\Omega \theta^a(x,t)w_{\bf n}(x)\, dx$. Incompressibility must be taken into account for the oceanic component $\psi^o=(\U^o,\theta^o)$, so the basis combines the Galerkin approximation with the Leray projection onto the space of divergence-free vector fields \begin{equation}\begin{split}\label{GALERKIN_O} & {\bf P}_m^o\U^o(x,t):=\sum_{{\bf n}\in\{\mathbb{Z}^2\setminus \{ 0\}, |{\bf n}|\leq m\}} \big(\widehat{\U}^o_{\bf n}(t)-\frac{\widehat{\U}^o_{\bf n}(t)\cdot{\bf n}}{|{\bf n}|^2}{\bf n} \big) w_{\bf n}(x), \\ \text{ and }&\ P_m^o\theta^o(x,t):=\sum_{{\bf n}\in\{\mathbb{Z}^2, |{\bf n}|\leq m\}} \hat{\theta}^o_{\bf n}(t) w_{\bf n}(x), \end{split}\end{equation} where $\widehat{\U}^o_{\bf n}:=\int_\Omega \U^o(x,t)w_{\bf n}(x)\, dx$, $\hat{\theta}_{\bf n}^o:=\int_\Omega \theta^o(x,t)w_{\bf n}(x)\, dx$. The Galerkin approximation of the state vector $\psi=(\psi^a,\psi^o)$ is defined as \begin{equation}\begin{split}\label{GALERKIN_STATE} P_m\psi:=({\bf P}_m^a\psi^a, {\bf P}^o_m\psi^o):=({\bf P}_m^a\U^a,P_m^a\theta^a, {\bf P}^o_m\U^o, P_m^o\theta^o). \end{split}\end{equation} We also use the notation \begin{equation}\begin{split}\label{GALERKIN_NOTATATION} \psi_m:=P_m\psi, \text{ with } \U^a_m:={\bf P}_m^a\U^a, \theta_m^a:=P_m^a\theta^a, \text{ and }\U^o_m:={\bf P}^o_m\U^o, \theta_m^o:=P_m^o\theta^o.
\end{split}\end{equation} In preparation for establishing the local existence of the stochastic version of the coupled model, we prove the following theorem on the global existence in time of the truncated approximation to the coupled model. \begin{theorem}[Global well-posedness of the truncated coupled model] \label{THM_DETERMINISTIC_Galerkin} For the time interval $[0,T]$, let $s\geq 2$ and suppose the initial conditions of (\ref{eq:CoupledOperatorGalerkin}) satisfy $\psi_{0}=(\U^a_{},\theta^a_{0},\U^o_{0},\theta^o_{0})\in \mathcal{H}^{s}(\Omega)$. The truncation of the coupled model (\ref{eq:CoupledOperator}) is given by \begin{equation}\label{eq:CoupledOperatorTruncated} d_t\psi+g_R(||\psi||_{H^s})B\left( \psi,\psi\right) +C \psi +D(\psi^a,\psi^o)=L\psi. \end{equation}Then there exists a unique solution to (\ref{eq:CoupledOperatorTruncated}) in the sense of Definition \ref{REGULAR_SOL}. This solution depends continuously with respect to the $\mathcal{L}^2$-norm on the initial conditions. \end{theorem} \begin{proof} We first show the local existence in time of solutions for the truncated Galerkin system. Next, we prove the global existence via $\mathcal{H}^s$-estimates. Then, we pass to the limit and prove the corresponding assertions for the truncated system (\ref{eq:CoupledOperatorTruncated}). Finally, we show uniqueness and continuous dependency on the initial condition.\\ The truncated Galerkin system is given by \begin{equation}\label{eq:CoupledOperatorGalerkin} d_t\psi_m+g_R(||\psi_m||_{H^s})\mathbb{P}_m B\left( \psi_m,\psi_m\right) +C \psi_m +D(\psi^a_m,\psi^o_m)=L\psi_m, \end{equation}where ${P}_m$ was defined in (\ref{GALERKIN_STATE}). \noindent{\it Step 1: Local existence of the truncated Galerkin approximation.}\\ The truncated Galerkin system can be written as \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof1} &d_t\psi_m=K(\psi_m)\\ \text{with }&K(\psi_m):=-g_R(||\psi_m||_{H^s})P_mB\left( \psi_m,\psi_m\right) -C \psi_m -D(\psi^a_m,\psi^o_m)+L\psi_m. \end{split}\end{equation} The right-hand side $K$ of (\ref{eq:CoupledOperatorGalerkin_Proof1}) is a Lipschitz continuous mapping from $\mathcal{H}^{{s}}$ into itself. It follows from the Picard Theorem that a unique solution $\psi_m\in C^1([t_0,t_1^m], \mathcal{H}^{{s}})$ of (\ref{eq:CoupledOperatorGalerkin}) exists on time intervals $[t_0^m,t_1^m]$ that depend on $m$. \noindent{\it Step 2: Global existence of the truncated Galerkin approximation.}\\ We show now that a solution exists globally in time. We apply the derivative $\Deriv $ to (\ref{eq:CoupledOperatorGalerkin}) and then take $L^2$-scalar product of equation (\ref{eq:CoupledOperatorGalerkin}) with $\Deriv\psi_m$. This yields \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof_s1_0} &\frac{1}{2}d_t||\Deriv\psi_m||^2_{L^2} + \big<g_R(||\psi_m||_{H^1})\Deriv P_mB\left( \psi_m,\psi_m\right),\Deriv\psi_m\big>_{L^2} \\ &+\big<C \Deriv\psi_m,\Deriv\psi_m\big>_{L^2} +\big<D(\Deriv\psi^a_m,\Deriv\psi^o_m),\Deriv\psi_M\big>_{L^2} -\big<L\Deriv\psi_m,\Deriv\psi_m\big>_{L^2}=0. 
\end{split}\end{equation} \paragraph*{The nonlinear transport operator $B$.} We rewrite the operator $B$ \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof_s1_4} &| \big<g_R(||\psi_m||_{H^s})\Deriv P_m\left( \psi_m\cdot\nabla\right )\psi_m,\Deriv\psi_m\big>_{L^2}\\ &=g_R(||\psi_m||_{H^s})| \big<\left( \Deriv\psi_m\cdot\nabla\right )\psi_m+ \left( \psi_m\cdot\nabla\right )\Deriv\psi_m,\Deriv\psi_m\big>_{L^2} \end{split}\end{equation} For the oceanic component the second term on the right-hand side vanishes as a consequence of the incompressibility. For the (compressible) atmospheric model the second term can be estimated with the inequalities of H\"older and Young \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof_s1_5} &g_R(||\psi_m||_{H^s})| \big< \left( \psi_m\cdot\nabla\right )\Deriv\psi_m,\Deriv\psi_m\big>_{L^2}| \leq g_R(||\psi_m||_{H^s})|| \psi_m||_{L^6} ||\nabla\Deriv\psi_m||_{L^2} ||\Deriv\psi_m||_{L^3}\\ &\leq g_R(||\psi_m||_{H^s})|| \psi_m||_{H^1} ||\mathcal{D}^{\alpha +1}\psi_m||_{L^2} ||\Deriv\psi_m||_{L^2}^{1/2}||\mathcal{D}^{\alpha +1}\psi_m||_{L^2}^{1/2}\\ &\leq g_R(||\psi_m||_{H^s})||\Deriv\psi_m||_{L^2}^{3/2}||\mathcal{D}^{\alpha +1}\psi_m||_{L^2}^{3/2}\\ &\leq \frac{c}{2\epsilon_1}g_R^4(||\psi_m||_{H^s})||\Deriv\psi_m||_{L^2}^{6}+ \frac{\epsilon_1}{2}||\mathcal{D}^{\alpha +1}\psi_m||_{L^2}^{2}. \end{split}\end{equation} For the first term in (\ref{eq:CoupledOperatorGalerkin_Proof_s1_4}) the inequalities of H\"older and Young imply that both the atmospheric and oceanic components satisfy \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof_s1_6} &g_R(||\psi_m||_{H^s})| \big<\left( \Deriv\psi_m\cdot\nabla\right )\psi_m,\Deriv\psi_m\big>_{L^2} \leq g_R(||\psi_m||_{H^s})||\Deriv\psi_m||_{L ^2}||\nabla\psi_m||_{L^6}||\Deriv\psi_m||_{L^3}\\ &\leq g_R(||\psi_m||_{H^s})||\Deriv\psi_m||_{L ^2}||\nabla\psi_m||_{H^1}||\Deriv\psi_m||_{L^2}^{1/2}||\mathcal{D}^{\alpha+1}\psi_m||_{L^2}^{1/2}\\ &\leq \frac{\epsilon_2}{2}||\Deriv\psi_m||_{L ^2}^2 +\frac{1}{\epsilon_2}g_R^2(||\psi_m||_{H^s})||\psi_m||_{H^2}^2||\Deriv\psi_m||_{L^2}||\mathcal{D}^{\alpha+1}\psi_m||_{L^2}\\ &\leq \frac{\epsilon_2}{2}||\Deriv\psi_m||_{L ^2}^2 +\frac{1}{\epsilon_2\epsilon_3}g_R^4(||\psi_m||_{H^s})||\psi_m||_{H^2}^4||\Deriv\psi_m||_{L^2}^2 +\frac{\epsilon_3}{2}||\mathcal{D}^{\alpha+1}\psi_m||_{L^2}^2. \end{split}\end{equation} \paragraph*{The linear operator $C$.} We obtain using the inequalities of Cauchy-Schwarz and Young that \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof5} &\int_\Omega \big(C\Deriv\psi_m\big)\cdot\Deriv\psi_m\, dx = \int_\Omega \big(C^a\Deriv\psi^a_m\big)\cdot\Deriv\psi^a_m\, dx + \int_\Omega \big(C^o\Deriv\psi^o_m\big)\cdot\Deriv\psi^o_m\, dx\\ &=\int_\Omega \big(\frac{1}{Ro^a}\Deriv\U^{a\bot}_m+\nabla\Deriv\theta^a_m\big)\cdot\Deriv\U^a_m\, dx +\int_\Omega \big(\frac{1}{Ro^o}\Deriv\U^{o\bot}_m+\nabla\Deriv p^o_m\big)\cdot\Deriv\U^o_m\, dx\\ &\leq |\int_\Omega \nabla\Deriv\theta^a_m\cdot\Deriv\U^a_m\, dx| \leq \frac{1}{Ro^a}||\Deriv \theta^a_m||_{L^2}||\Deriv\nabla\U^a_m||_{L^2} \leq \frac{1}{2\epsilon _4Ro^a}||\Deriv \theta^a_m||_{L^2}^2+\frac{\epsilon_4}{2}||\mathcal{D}^{\alpha+1}\U^a_m||_{L^2}^2, \end{split}\end{equation} in which the ocean component of the pressure term has vanished, upon using incompressibility of the ocean flow. 
\paragraph*{The coupling operator $D$.} For the coupling term in the atmospheric temperature equation we find by using the inequalities of Cauchy-Schwarz and Young that \begin{equation}\begin{split}\label{CoupledModel_Estimate04} & |\gamma\int_\Omega\Deriv\big(\theta^o_m-\theta^a_m\big)\cdot\Deriv\theta^a_m\,dx| \leq |\gamma|\,(||\Deriv\theta^o_m||_{L^2}+||\Deriv\theta^a_m||_{L^2})||\Deriv\theta^a_m||_{L^2}\\ &\leq \frac{3|\gamma|}{2}(||\Deriv\theta^o_m||_{L^2}^2+||\Deriv\theta^a_m||_{L^2}^2). \end{split}\end{equation} The coupling term in the oceanic velocity equation can be estimated as follows \begin{equation}\begin{split}\label{DERIV_MOMENTUM_O_12} &|\sigma\int_\Omega\Deriv(\U^o_m-\bar{\U}^a_{sol,m})\cdot\Deriv\U^o_m\,dx| = |\sigma\int_\Omega\Deriv(\U^o_m-\U^a_{sol,m}+\frac{1}{|\Omega|}\int_{\Omega}\U^a_{sol,m}\,dx )\cdot\Deriv\U^o_m\,dx|\\ &\leq |\sigma|\big(\,||\Deriv\U^o_m||_{{\bf L}^2}^2 + ||\Deriv\U^o_{sol,m}||_{\bf L^2}||\U^a_{sol,m}||_{\bf L^2} +\int_\Omega\frac{1}{|\Omega|}\int_{\Omega}|\Deriv\U^a_{sol,m}(x)|\,dx\big)\, |\Deriv\U^o_m(y)|\,dy\\ &\leq |\sigma|\big(\,||\Deriv\U^o_m||_{{\bf L}^2}^2 +||\Deriv\U^o_m||_{\bf L^2}^2+||\U^a_{sol,m}||_{\bf L^2}^2 +\,\int_\Omega\frac{1}{|\Omega|}||\Deriv\U^a_{sol,m}||_{L^2}|\Omega| |\Deriv\U^o_m|\,dx\big)\\ &\leq C|\sigma|\,(||\Deriv\U^o_m||_{L^2}^2+||\Deriv\U^a_{m}||_{L^2}^2). \end{split}\end{equation} This estimate implies for the coupling operator \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof6} &\int_\Omega \big(D(\Deriv\psi^a_m,\Deriv\psi^o_m)\big)\cdot\Deriv\psi\, dx \leq C(|\gamma|+|\sigma|)(||\Deriv\psi^o_m||_{L^2}^2+||\Deriv\psi^a_m||_{L^2}^2). \end{split}\end{equation} After summing over $|\alpha|$ up to $s$ this yields for (\ref{eq:CoupledOperatorGalerkin_Proof_s1_0}) \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof12} \frac{1}{2}d_t||\psi_m||_{\mathcal{H}^s}^2 +\frac{1}{P}||\nabla\psi_m||^2_{\mathcal{H}^s} &\leq Cg_R^4(||\psi_m||_{\mathcal{H}^s})||\psi_m||^6_{\mathcal{H}^s} +C(|\gamma|+|\sigma|)(||\psi^o_m||_{\mathcal{H}^s}^2+||\psi^a_m||_{\mathcal{H}^s}^2). \end{split}\end{equation} where $\frac{1}{P}:=\min\{\frac{1}{Re^a},\frac{1}{Re^o}, \frac{1}{Pe^a}, \frac{1}{Pe^o}\}$. Upon using the truncation $g_R$ it follows \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof13} \frac{1}{2}d_t||\psi_m||_{\mathcal{H}^s}^2 +\frac{1}{P}||\nabla\psi_m||^2_{\mathcal{H}^s} &\leq CR||\psi_m||^2_{\mathcal{H}^s}. \end{split}\end{equation} With Gronwall's inequality it follows that \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof14} ||\psi_m(t)||_{\mathcal{H}^{{s}}}^2 &\leq ||\psi_m(t_0)||_{\mathcal{H}^{{s}}}^2e^{ CR (t-t_0)} \leq ||\psi(t_0)||_{\mathcal{H}^{{s}}}^2e^{ CR (t-t_0)}, \end{split}\end{equation} where $\psi(t_0)$ denotes the initial condition of the coupled equations (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}). This estimate implies in particular that $||\psi_m(t)||_{\mathcal{H}^{{s}}}^2$ is bounded uniformly in $m$. Integrating (\ref{eq:CoupledOperatorGalerkin_Proof13}) over the time interval $[t_0,t]$ yields with (\ref{eq:CoupledOperatorGalerkin_Proof14}) \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_Proof15} \frac{1}{R}\int_{t_0}^t||\nabla\psi_m(s)||_{\mathcal{H}^{{s}}}^2ds &\leq CR\int_{t_0}^t||\psi_m(s)||_{{\bf H}^{s}}^2ds+||\psi_m(t_0)||_{\mathcal{H}^{{s}}}^2\\ &\leq CR||\psi(t_0)||_{\mathcal{H}^{{s}}}^2e^{ 2CR (t-t_0)}+||\psi(t_0)||_{\mathcal{H}^{{s}}}^2. 
\end{split}\end{equation} From (\ref{eq:CoupledOperatorGalerkin_Proof14}) and (\ref{eq:CoupledOperatorGalerkin_Proof15}) it follows that $(\psi_m)_m$ is uniformly bounded in $L^\infty(T,\mathcal{H}^{{s}})\cap L^2(T,\mathcal{H}^{{s}+1})$ with time derivative $(\frac{d\psi_m}{dt})_m$ which is according to (\ref{eq:CoupledOperatorGalerkin_Proof13}) uniformly bounded in $L^2(T,\mathcal{H}^{{s}})$. The oceanic pressure $p$ can be recovered analogously to the Navier-Stokes Equations by solving the elliptic equation \begin{equation}\label{pressure_eq} \Delta p_m=div\big((\U^o_m\cdot\nabla)\U^o_m+\nabla q^a_m\big), \end{equation} where $q^a_m$ is the gradient part of the Leray-Helmholtz decomposition of the atmospheric velocity $\U^a_m$. \noindent{\it Step 3:} {\it Passage to the limit.}\\ The uniform boundedness of $(\psi_m)_m$ in $L^2(T,\mathcal{H}^{{s}+1})$ implies with the compact embedding of $L^2(T,\mathcal{H}^{{s}+1})$ into $L^2(T,\mathcal{H}^{{s}})$ that a subsequence $(\psi_{k})_k$ exists that converges strongly to $\psi\in L^2(T,\mathcal{H}^s)$. This subsequence converges also weakly in $L^\infty(T,\mathcal{H}^{{s}})$. We show now that the limit $\psi$ satisfies the truncated equations (\ref{eq:CoupledOperatorTruncated}). For the coupling term it holds for all $\phi\in [H^2(\Omega)]^6$ \begin{equation}\begin{split}\label{Coupled_CONV0} &\lim_{k\to\infty}\int_T\big<D(\psi^a_k,\psi^o_k)-D(\psi^a,\psi^o) ,\phi\big>_{L^2}dt\\ &= \lim_{k\to\infty}\int_T\big<\gamma\big((\theta^a_k- \theta^a)+(\theta^o- \theta^o_k)\big)+\sigma\big((\U^o_k-\U^o)+(\bar{\U}^a_{sol}-\bar{\U}^a_{sol,k})\big) ,\phi\big>_{L^2}dt\\ \end{split}\end{equation} For the velocity coupling involving the solenoidal part of the atmospheric velocity field in the ocean component it follows by using the Cauchy-Schwarz inequality that \begin{equation}\begin{split}\label{Coupled_CONV01} &|\int_T\big<\sigma\big(\bar{\U}^a_{sol}-\bar{\U}^a_{sol,k}\big) ,\phi\big>_{L^2}dt| =|\sigma\int_T\int_\Omega (\bar{\U}^a_{sol}(x,t)-\bar{\U}^a_{sol,k}(x,t)) \cdot\phi(x)\, dxdt|\\ =&|\sigma\int_T\int_\Omega\big(\U^a_{sol}(x,t)- \U^a_{sol,k}(x,t) + \frac{1}{|\Omega|}\int_\Omega \U^a_{sol,k}(z,t)- \U^a_{sol}(z,t)\, dz\big) \cdot\phi(x)\, dxdt|\\ \leq& |\sigma|\,\int_T\int_\Omega| \U^a_{sol}(x,t)-\U^a_{sol,k}(x,t)|\, |\phi(x)|\,dxdt\\ \left.\right.&+|\sigma|\,\frac{1}{|\Omega|}\int_\Omega(\int_T\int_\Omega| \U^a_{sol,k}(z,t)- \U^a_{sol}(z,t)|\, dzdt) |\phi(x)|\,dx\\ \leq& |\sigma|\,\int_T ||\U^a_{sol}(t)-\U^a_{sol,k}(t)||_{L^2}||\phi||_{L^2}dt +|\sigma|\,\int_\Omega(\int_T|| \U^a_{sol,k}(t)- \U^a_{sol}(t)||_{L^2}dt) |\phi(x)|\,dx. \end{split}\end{equation} From (\ref{Coupled_CONV01}) and the convergence of $(\psi_{k})_k$ in $\psi\in L^2(T,\mathcal{H}^s)$, there follows the convergence of the integral in (\ref{Coupled_CONV0}). The convergence of the remaining linear terms in the equations is obvious. Next, we focus on the nonlinear terms for which we have to show that \begin{equation}\begin{split}\label{Coupled_CONV1} \lim_{k\to\infty}\int_T\big< g_R(||\psi_k||_{H^s})P_m B(\psi_k,\psi_k)-g_R(||\psi||_{H^s})P_mB(\psi,\psi),\phi\big>_{L^2}dt=0,\quad \text{for all } \phi\in [C^\infty(\Omega)]^6. 
\end{split}\end{equation} The integral above can be written as \begin{equation}\begin{split}\label{Coupled_CONV2} &\int_T\big< g_R(||\psi_k||_{\mathcal{H}^s})P_m B(\psi_k,\psi_k)-g_R(||\psi||_{\mathcal{H}^s})P_m B(\psi,\psi),\phi\big>_{L^2}dt\\ &= \int_T \big(g_R(||\psi_k||_{H^s})-g_R(||\psi||_{\mathcal{H}^s})\big)\big<P_m B(\psi_k,\psi_k),\phi\big>_{L^2}dt\\ &+\int_T g_R(||\psi||_{H^s})\big<P_m B(\psi_k,\psi_k) -P_mB(\psi,\psi),\phi\big>_{L^2}dt \end{split}\end{equation} For the first integral on the right-hand side it follows with the H\"older inequality \begin{equation}\begin{split}\label{Coupled_CONV3} &\int_T \big(g_R(||\psi_k||_{\mathcal{H}^s})-g_R(||\psi||_{\mathcal{H}^s})\big)\big<P_mB(\psi_k,\psi_k),\phi\big>_{L^2}dt\\ &\leq \int_T \big(g_R(||\psi_k||_{\mathcal{H}^s})-g_R(||\psi||_{\mathcal{H}^s})\big)||\psi_k||_{L^3} ||\nabla\psi_k||_{L^2} \,||\phi||_{L^6} dt\\ &\leq c\int_T \big(g_R(||\psi_k||_{\mathcal{H}^s})-g_R(||\psi||_{\mathcal{H}^s})\big)||\psi_k||_{H^1}^2 \,||\phi||_{H^1} dt\\ &\leq c\sup_{t\in T} \big(g_R(||\psi_k(t)||_{\mathcal{H}^s})-g_R(||\psi(t)||_{\mathcal{H}^s})\big)\int_T||\psi_k||_{H^1}^2 \,||\phi||_{H^1} dt. \end{split}\end{equation} The sequence $(\psi_k)_k$ converges weakly to $\psi$ in $L^\infty(T,\mathcal{H}^{{s}})$, i.e. $||\psi_k||_{H^s}$ converges to $||\psi||_{\mathcal{H}^s}$ and with the continuity of the truncation function $g_R$ follows that the first term on the right-hand side converges to zero. Since $(\psi_k)_k$ is bounded in $L^\infty(T,\mathcal{H}^{{s}})\cap L^2(T,\mathcal{H}^{{s}+1})$ the right-hand side of (\ref{Coupled_CONV3}) converges for $k\to\infty$ to zero. For the second integral in (\ref{Coupled_CONV2}) it follows with H\"older's inequality that \begin{equation}\begin{split}\label{Coupled_CONV4} &\int_Tg_R(||\psi||_{\mathcal{H}^s})\big<P_mB(\psi_k,\psi_k) -P_mB(\psi,\psi),\phi\big>_{L^2}dt\\ &= \int_Tg_R(||\psi||_{\mathcal{H}^s})\big<P_mB(\psi_k-\psi,\psi_k) +P_mB(\psi,\psi_k-\psi),\phi\big>_{L^2}dt\\ &\leq \int_Tg_R(||\psi||_{\mathcal{H}^s})|| \psi_k-\psi||_{L^4}||\nabla\psi_k||_{L^2}\, ||\phi||_{L^4} \,dt +\int_Tg_R(||\psi||_{\mathcal{H}^s})||\psi||_{L^4}||\nabla(\psi_k-\psi)||_{L^2}||\phi||_{L^4} \,dt\\ &\leq \int_Tg_R(||\psi||_{\mathcal{H}^s})|| \psi_k-\psi||_{H^1}||\nabla\psi_k||_{L^2}\, ||\phi||_{H^1} \,dt + \int_Tg_R(||\psi||_{\mathcal{H}^s})||\psi||_{H^1}||\psi_k-\psi||_{H^1}||\phi||_{H^1} \,dt, \end{split}\end{equation} where the right-hand side tends to zero for $k\to\infty$ as a consequence of the boundedness of the sequence $(\psi_k)_k$ in $L^\infty(T,\mathcal{H}^s)\cap L^2(T,\mathcal{H}^{s+1})$ and its converges in $L^2(T,\mathcal{H}^s)$. \noindent{\it Step 4: Uniqueness of solutions of the truncated system}\\ Let $\psi_1,\psi_2$ be two solutions of (\ref{eq:CoupledOperatorTruncated}) with respective initial conditions $\psi_1(t_0=0)$ and $\psi_2(t_0=0)$. We assume that $||\psi_1(t)||_{\mathcal{H}^s},||\psi_2(t)||_{\mathcal{H}^s}\leq R$ for $t\in T$. This implies for the difference by $\widehat{\psi}:=\psi_1-\psi_2$ that $||\psi(t)||_{\mathcal{H}^s}\leq R$. The difference $\widehat{\psi}$ satisfies the following equation \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq0} &d_t\widehat{\psi} +g_R(||\psi_{1}||_{\mathcal{H}^s}) \big( B\left( \psi_{1},\psi_{1}\right) - B\left( \psi_{2},\psi_{2}\right)\big)\\ &+B\left( \psi_{2},\psi_{2}\right)\big(g_R(||\psi_{1}||_{\mathcal{H}^s})-g_R(||\psi_{2}||_{\mathcal{H}^s})\big) +C \widehat{\psi} +D(\widehat{\psi}^a,\widehat{\psi}^o) =L\widehat{\psi}. 
\end{split}\end{equation} Taking the $L^2$-inner product with $\widehat{\psi}$ yields \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq1} &d_t||\widehat{\psi}||_{\mathcal{L}^2}^2 +\int_\Omega g_R(||\psi_{1}||_{\mathcal{H}^s}) \big( B\left( \psi_{1},\psi_{1}\right) - B\left( \psi_{2},\psi_{2}\right)\big)\cdot \widehat{\psi}dx\\ &+\int_\Omega B\left( \psi_{2},\psi_{2}\right)\cdot\widehat{\psi}\big(g_R(||\psi_{1}||_{\mathcal{H}^s})-g_R(||\psi_{2}||_{\mathcal{H}^s})\big)dx\\ & +\int_\Omega \big(C \widehat{\psi} +D(\widehat{\psi}^a,\widehat{\psi}^o)\big)\cdot\widehat{\psi}dx =\int_\Omega (L\widehat{\psi})\cdot\widehat{\psi} dx. \end{split}\end{equation} For the difference of the two nonlinear terms we have \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq3} &\int_\Omega g_R(||\psi_{1}||_{\mathcal{H}^s}) \big(B\left( \psi_{1},\psi_{1}\right) -B\left( \psi_{2},\psi_{2}\right)\big)\cdot\widehat{\psi}\, dx = \int_\Omega g_R(||\psi_{1}||_{H^s}) \big( B( \widehat{\psi},\psi_{1})+B(\psi_{2},\widehat{\psi})\big)\cdot\widehat{\psi}\, dx \end{split}\end{equation} In order to estimate the right-hand side consider the atmospheric velocity component of (\ref{eq:CoupledOperatorGalerkin_uniq3}), with the inequalities of H\"older, Agmon and Young follows \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq4} &\int_\Omega g_R(||\U_{1}^a||_{{\bf H}^s}) \big( (\widehat{\U}^a\cdot\nabla)\U_{1}^a) +( \U^a_{2}\cdot\nabla)\widehat{\U}^a)\big)\cdot\widehat{\U}^a\, dx\\ &\leq cg_R(||\U_{1}^a||_{{\bf H}^s}) ||\nabla\U_{1}^a ||_{{\bf L}^6}||\widehat{\U}^a||_{{\bf L}^3}||\widehat{\U}^a||_{{\bf L}^2} +||\U_{2}^a ||_{{\bf L}^\infty}||\nabla\widehat{\U}^a||_{{\bf L}^2}||\widehat{\U}^a||_{{\bf L}^2}\\ &\leq cg_R(||\U_{1}^a||_{{\bf H}^s}) ||\U_{1}^a ||_{{\bf H}^2}||\widehat{\U}^a||_{{\bf H}^1}||\widehat{\U}^a||_{{\bf L}^2} +c||\U_{2}^a ||_{{\bf H}^2}||\widehat{\U}^a||_{{\bf H}^1}||\widehat{\U}^a||_{{\bf L}^2}\\ &\leq \frac{cg_R(||\U_{1}^a||_{{\bf H}^s})}{2\epsilon_1} ||\U_{1}^a ||_{{\bf H}^2}^2||\widehat{\U}^a||_{{\bf L}^2}^2 +\frac{c}{\epsilon_2}||\U_{2}^a ||_{{\bf H}^2}^2||\widehat{\U}^a||_{{\bf L}^2}^2 +(\frac{\epsilon_1}{2}+\frac{\epsilon_2}{2})||\widehat{\U}^a||_{{\bf H}^1}^2\\ &\leq M_0 ||\widehat{\U}^a||_{{\bf L}^2}^2 +(\frac{\epsilon_1}{2}+\frac{\epsilon_2}{2})||\widehat{\U}^a||_{{\bf H}^1}^2, \end{split}\end{equation} where $M_0=M_0(||\U_{1}^a ||_{{\bf H}^2}^2,||\U_{2}^a ||_{{\bf H}^2}^2)$. Similarly we derive for the oceanic velocity component \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq5} &\int_\Omega \big( (\widehat{\U}^o\cdot\nabla)\U_{1}^o) +( \U^o_{2}\cdot\nabla)\widehat{\U}^o)\big)\cdot\widehat{\U}^o\, dx \leq M_1||\widehat{\U}^o||_{L^2}^2, \end{split}\end{equation} where $M_1=M_1(||\U_{1}^o ||_{H^2}^2)$, because the dependency on $||\U^o_2||_{H^s}$ vanishes due to the incompressibility. Analogous estimates hold for the atmospheric and oceanic temperature transport terms, such that the nonlinear operator difference in (\ref{eq:CoupledOperatorGalerkin_uniq3}) can be bounded by \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq6} &\int_\Omega g_R(||\psi_{1}||_{\mathcal{H}^s})\big(B\left( \psi_{1},\psi_{1}\right) -B\left( \psi_{2},\psi_{2}\right)\big)\cdot\widehat{\psi}\, dx\leq K_0||\widehat{\psi}||_{\mathcal{L}^2}^2, \end{split}\end{equation} where $K_0=K_0(||\psi_1||_{\mathcal{H}^s}^2,(||\psi_2||_{\mathcal{H}^s}^2)$. 
The second term in (\ref{eq:CoupledOperatorGalerkin_uniq1}) can be estimated in the same way as above \begin{equation}\begin{split}\label{eq:CoupledOperatorGalerkin_uniq6a} &\int_\Omega B\left( \psi_{2},\psi_{2}\right)\cdot\widehat{\psi}\big(g_R(||\psi_{1}||_{\mathcal{H}^s})-g_R(||\psi_{2}||_{\mathcal{H}^s})\big)dx\\ &\leq c||\psi_2||_{\mathcal{H}^2}||\psi_2||_{\mathcal{H}^1}||\widehat{\psi}||_{\mathcal{L}^2}\big|||\psi_{1}||_{\mathcal{H}^s}-||\psi_{2}||_{\mathcal{H}^s}\big| \leq K_1||\widehat{\psi}||_{\mathcal{L}^2}^2, \end{split}\end{equation} where $K_1=K_1(||\psi_2||_{\mathcal{H}^2})$ and where we have used the reverse triangle inequality in the last step. For the (linear) coupling operator it holds that \begin{equation}\label{eq:CoupledOperatorGalerkin_uniq7} |\int_\Omega D(\widehat{\psi}^a,\widehat{\psi}^o)\cdot \widehat{\psi}\, dx| = |\int_\Omega \gamma(\hat{\theta}^a - \hat{\theta}^o)^2+\sigma(\widehat{\U}^o-\overline{\widehat{\U}^a})^2\, dx| \leq K_3||\widehat{\psi}||_{\mathcal{L}^2}^2, \end{equation} where we have used that $\bar{\U}^a_1-\bar{\U}^a_2=\overline{\U^a_1-\U^a_2}=\overline{\widehat{\U}^a}$ and where $K_3$ depends on the coupling constants. This implies the following estimate for the difference equation (\ref{eq:CoupledOperatorGalerkin_uniq1}) \begin{equation}\label{eq:CoupledOperatorGalerkin_uniq8} \frac{1}{2}d_t||\widehat{\psi}||^2_{\mathcal{L}^2} +\frac{1}{R}||\nabla\widehat{\psi}||^2_{\mathcal{L}^2} \leq K||\widehat{\psi}||_{\mathcal{L}^2}^2, \end{equation} where $K=K(||\psi_1||_{\mathcal{H}^2},||\psi_2||_{\mathcal{H}^2},\sigma,\gamma)$. From Gronwall's inequality we obtain \begin{equation}\label{eq:CoupledOperatorGalerkin_uniq9} ||\widehat{\psi}(t)||^2_{\mathcal{L}^2} \leq ||\widehat{\psi}(t_0)||^2_{\mathcal{L}^2}e^{\int_{t_0}^t K(s)ds}. \end{equation} Since $\psi_1,\psi_2\in L^2(T,\mathcal{H}^2(\Omega))$, the function $K$ is integrable and the right-hand side is bounded. This proves the continuous dependence on the initial conditions. If the two solutions have the same initial condition, then they coincide on $T$ and uniqueness follows. \end{proof} The following theorem is the main result for the deterministic version of the coupled model. \begin{theorem}[Local well-posedness of the coupled model]\label{THM_DETERMINISTIC_LOCAL} Let $s\geq 2$ and suppose the initial condition of the coupled equations (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}) satisfies $\psi_0=(\U^a_0,\theta^a_0,\U^o_0,\theta^o_0)\in \mathcal{H}^{s}(\Omega)$. Then there exists a unique time $t_1^{*}\in (t_{0}, \infty]$ such that a local regular solution $\psi$ of (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}) in the sense of Definition \ref{REGULAR_SOL} exists and is unique on any interval $T:=[t_0,t_1],$ where $t_0<t_1<t_1^*$, and that, if $t_1^{*}< \infty$, then \begin{equation}\label{explosionede} \lim_{t\nearrow t_1^{*}} ||\psi(t)||_{\mathcal{H}^{s}}=\infty. \end{equation} \end{theorem} \begin{proof} We define $t_R:=\inf\{t\geq t_0: ||\psi(t)||_{\mathcal{H}^s}>R \}$ for $R>0$ and $\tau:=\lim_{R\to\infty} t_R$. By $\psi_R$ we denote the solution of (\ref{eq:CoupledOperatorTruncated}) with initial condition $\psi_0$. We define $\psi(t):=\psi_R(t)$ for $t\in [t_0, t_R]$. On any time interval $[t_0,t_1]$ with $t_0<t_1< t_R$ the solutions $\psi$ and $\psi_R$ coincide, as a consequence of the uniqueness of solutions of the truncated equations (\ref{eq:CoupledOperatorTruncated}). If $\tau=\infty$ then $\psi$ is a global solution of (\ref{eq:CoupledOperator}).
If $\tau<\infty$ then $||\psi(\tau)||_{H^s}=R$ and $[t_0,\tau]$ is the maximal interval of existence of the solution $\psi$. \end{proof} \section{The Stochastic Idealized Atmospheric Climate Model}\label{Sec3} \subsection{SALT atmospheric climate model}\label{sec-SALT} Recall that the state of the system is described by a state vector $\psi :=(\psi ^{a},\psi ^{o})$ with atmospheric component $\psi ^{a}:=(\mathbf{u}^{a},\theta ^{a})$ and oceanic component $\psi ^{o}:=(\mathbf{u}^{o},\theta ^{o})$. The initial state is denoted by $\psi (t_{0})=\psi _{0}$. where $\psi _{0}=(\mathbf{u}_{0}^{a},\theta _{0}^{a},\mathbf{u}_{0}^{o},\theta _{0}^{o})$, with six entries. We summarize equations \eqref{COUPLED_SWE_VELOC_A_STOCH}-\eqref{COUPLED_SWE_T_O_STOCH} for the SALT version of the idealized climate model as \begin{equation} d\psi _{t}+(B\left( \psi _{t},\psi _{t}\right) +C\left( \psi _{t}\right) +D\mathbb{E}[\bar{\psi}])dt+\sum_{i=1}^{\infty }E_{i}(\psi _{t})\circ dW_{t}^{i}=\nu \Delta \psi _{t}dt, \label{eq:CMSE} \end{equation}where \begin{itemize} \item The process $\psi $ gathers all variables in \eqref{COUPLED_SWE_VELOC_A_STOCH}-\eqref{COUPLED_SWE_T_O_STOCH}, i.e., \[ \psi _{t}=\{\psi _{t}^i\}_{i=1}^6:=(\mathbf{u}_{t}^{a},\theta _{t}^{a},\mathbf{u}_{t}^{o},\theta _{t}^{o})=({u}_{t}^{a,1},{u}_{t}^{a,2},\theta _{t}^{a},{u}_{t}^{o,1},{u}_{t}^{o,2},\theta _{t}^{o}), \] as in the deterministic case. \item $B$ is the usual bilinear transport operator, \item $C$ comprises all the linear terms (including the pressure term in the equation for the components of $\psi $ corresponding to $\bar{\mathbf{u}}% ^{o} $ as well as the term $-\gamma(\theta^a-\theta^o$) from \eqref{COUPLED_SWE_T_A_STOCH}). \item $D=\{D_{ij}\}_{i,j=1}^6$ is a $6\times 6$-matrix that captures the influence of $\mathbb{E}[\bar{\psi}]$ on the various components of $\psi _{t}$. More precisely, $D_{ij}$ is the coefficient appearing in front of $\mathbb{E}[\bar{\psi^j}]$ in the equation satisfied by $\psi^i _{t}$. For the SALT equations \eqref{COUPLED_SWE_VELOC_A_STOCH}-\eqref{COUPLED_SWE_T_O_STOCH}, we have $D_{41}=D_{52}=-\sigma $, with all the other entries equal to 0. Therefore the pair $({u}_{t}^{o,1},{u}_{t}^{o,2})$ is affected by $(\mathbb E [\bar{u}_{t}^{a,1}], \mathbb E [\bar{u}_{t}^{a,2}]) $. \item $E_{i}$ are diagonal operators given by \begin{equation*} E_{i}(\mathbf{u}^{a},\theta ^{a},\mathbf{u}^{o},\theta ^{o})=\mathrm{diag}(\xi _{i}\cdot \nabla \mathbf{u}^{a}+\frac{1}{Ro^{a}}\xi _{i}+{u}_{j}^{a}\nabla \xi _{i}^{j}+\frac{1}{Ro^{a}}\nabla (R_{j}(\mathbf{x})\xi _{i}^{j}),\ \xi _{i}\cdot \nabla \theta ^{a},\ 0,\ 0) \end{equation*} \item $\mathrm{curl}R({\mathbf{x}})=2\Omega ({\mathbf{x}})$. \item $\bar{\psi}:=\psi -\frac{1}{|\Omega |}\int_{\Omega }\psi dx$. \footnote{Recall that subtraction of the mean $\frac{1}{|\Omega |}\int_{\Omega }\psi dx$ places the oceanic and atmospheric variables all into the same frame of motion relative to the Earth's rotation. Note that this subtraction is only applied to $\mathbf{u}% ^{o}\,$. This is ensured by the multiplication by the matrix $D$.} \end{itemize} \noindent We start by giving a rigorous definition of the solution of \eqref{eq:CMSE}. Let $(\Xi ,\mathcal{F},(\mathcal{F}_{t})_{t},\mathbb{P},(W^{i})_{i})$ be a fixed stochastic basis. 
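Purely to fix the index convention for the matrix $D$ introduced above, the following short Python sketch (not part of the model itself; the values of $\sigma$ and of $\mathbb{E}[\bar{\psi}]$ are placeholders) spells out how $\mathbb{E}[\bar{\psi}]$ enters only the oceanic velocity components.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: the coupling matrix D for the state ordering
# psi = (u^{a,1}, u^{a,2}, theta^a, u^{o,1}, u^{o,2}, theta^o).
# Only D_{41} = D_{52} = -sigma are nonzero, so E[bar(psi)] enters the two
# oceanic velocity equations through the mean-free atmospheric velocity.
sigma = 0.1                      # placeholder value of the coupling constant

D = np.zeros((6, 6))
D[3, 0] = -sigma                 # D_{41}: u^{o,1} is forced by E[bar(u)^{a,1}]
D[4, 1] = -sigma                 # D_{52}: u^{o,2} is forced by E[bar(u)^{a,2}]

E_bar_psi = np.array([0.3, -0.1, 0.0, 0.0, 0.0, 0.0])  # placeholder E[bar(psi)]
print(D @ E_bar_psi)             # nonzero only in the oceanic velocity slots
\end{verbatim}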
In addition to the Sobolev spaces defined above we introduce $C^{m}(\Omega ;\mathbb{R}^{p})$ to be the (vector) space of all $% \mathbb{R}^{p}$-valued functions $f$ which are continuous on $\Omega $ with continuous partial derivatives $D^{\alpha }f$ of orders $|\alpha |\leq m$, for fixed $m\geq 0$. Notice that on the torus all continuous functions are bounded. The space $C^{m}(\Omega ;\mathbb{R}^{p})$ is a Banach space when endowed with the usual supremum norm \begin{equation*} \Vert f\Vert _{m,\infty }=\sum_{|\alpha |\leq m}\Vert D^{\alpha }f\Vert _{\infty }. \end{equation*}The space $C^{\infty }(\Omega ;\mathbb{R}^{p})$ is regarded as the intersection of all spaces $C^{m}(\Omega ;\mathbb{R}^{p})$. Let $(\xi _{i})_{i}$ a sequence of vector fields which satisfy the following condition: \begin{equation} \sum_{i=1}^{\infty }\Vert \xi _{i}\Vert _{s+3,\infty }^{2}<\infty . \label{xiassumpt} \end{equation}We will work directly with the It\^{o} version of (\ref{eq:CMSE}). In this version, the Stratonovich integrals in (\ref{eq:CMSE}) are recast as It\^{o}~integrals with the required It\^{o} correction added in the drift term of the equation. The equivalence between the two versions is straightforward, see e.g. \cite{cfh} for details. \begin{definition} \label{def:localglobalsolution}$\left. \right. $ \begin{enumerate} \item[a.] A pathwise \textsl{local}{\ solution} of the system (\ref{eq:CMSE}) is given by a pair $(\psi ,\tau ),$ where $\tau :\Xi \rightarrow \lbrack t_{0},\infty ]$ is a strictly positive stopping time and $\psi:\Omega \times \lbrack t_{0},\infty ]\rightarrow \mathcal{H}^{s}(\Omega )$, is an $\mathcal{F}_{t}$-adapted process with initial condition $\psi _{t_{0}}\in \mathcal{H}^{s}(\Omega )$ such that \begin{equation*} \psi \in L^{2}\left( \Xi ;C\left( [{t_{0}},T ];\mathcal{H}^{s}(\Omega )\right) \right) \ \ \psi 1_{[t_0,\tau]} \in L^{2}\left( \Xi ;L^{2}\left( [{t_{0}},T ];\mathcal{H}^{s+1}(\Omega )\right) \right) \end{equation*}for any $T\ge t_0$ and the system (\ref{eq:CMSE}) is satisfied locally i.e., the following identity\begin{equation} \psi _{t}=\psi _{{t_{0}}}-\int_{{t_{0}}}^{t\wedge \tau}\left( F(\psi _{s})+D\mathbb{E}\left[ \bar{\psi}_{s}\right] \right) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t\wedge \tau}E_{i}(\psi _{s})dW_{s}^{i}, \label{eq:CMSElocal} \end{equation}holds $\mathbb{P}$-almost surely, as an identity in $L^{2}\left( \Xi ;% \mathcal{H}^{0}(\Omega )\right) $ for any $t\in [t_0,\infty)$. In (\ref{eq:CMSElocal}), the mapping $F(\psi _{s})$ is defined as \begin{equation} F(\psi _{s})=B\left( \psi _{s},\psi _{s}\right) +C\left( \psi _{s}\right) -\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{2}(\psi _{s})-\nu \Delta \psi _{s}. \label{eq:Fofpsi} \end{equation} \item[b.] 
A \emph{martingale local} solution of equation \eqref{eq:CMSE} is a triple $(\check\Omega, \mathcal{\check F}, \check{\mathbb{P}}), (\mathcal{\check F}_t)_t, (\check\psi, \check\tau, (\check W^i)_i)$ such that\\ $(\check\Omega, \mathcal{\check F}, \check{\mathbb{P}})$ is a probability space, $(\mathcal{\check F}_t)_t$ is a filtration defined on this space, $\check\tau :\check\Omega \rightarrow \lbrack t_{0},\infty ]$ is a strictly positive $\mathcal{\check F}_{t}$-stopping time and $\check\psi:\check\Omega \times \lbrack t_{0},\infty ]\rightarrow \mathcal{H}^{s}(\Omega )$ is an $\mathcal{\check F}_{t}$-adapted process with initial condition $\check\psi _{t_{0}}\in \mathcal{H}^{s}(\Omega )$ such that \begin{equation*} \check\psi \in L^{2}\left( \check\Omega ;C\left( [{t_{0}},T ];\mathcal{H}^{s}(\Omega )\right) \right) \ \ \check\psi 1_{[t_0,\check\tau]} \in L^{2}\left( \check\Omega ;L^{2}\left( [{t_{0}},T ];\mathcal{H}^{s+1}(\Omega )\right) \right) \end{equation*}for any $T\ge t_0$ and which satisfies equation \eqref{eq:CMSElocal}+\eqref{eq:Fofpsi} with $\psi$ replaced by $\check\psi$\footnote{We use the ``check'' notation $(\check{\,\phantom\,})$ in the description of the various components of a martingale solution, to emphasize that the existence of a martingale solution does not guarantee that, for a \emph{given} set of Brownian motions $(W^i)_i$ defined on the (possibly different) probability space $(\Xi, \mathcal{F}, {\mathbb{P}})$, a solution of \eqref{eq:CMSE} will exist. Clearly the existence of a strong solution implies the existence of a martingale solution.}. \item[c.] If $\tau=\infty $, then we say that the system (\ref{eq:CMSE}) has a \emph{global}{\ solution}. In this case we can remove the stopping time from equation (\ref{eq:CMSElocal}). In other words, we have that\begin{equation} \psi _{t}=\psi _{{t_{0}}}-\int_{{t_{0}}}^{t}\left( F(\psi _{s})+D\mathbb{E}[\bar{\psi}_{s}]\right) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}E_{i}(\psi _{s})dW_{s}^{i}, \label{eq:CMSEglobal} \end{equation}holds as an identity in $L^{2}\left( \Xi ;\mathcal{H}^{0}(\Omega )\right) $.\end{enumerate} \end{definition} \begin{remark}\label{impl} Observe that $\psi_t=\psi_\tau$ for any $t\ge \tau$; in other words, the solution remains constant once it hits the defining stopping time. We require this to be able to make sense of the quantity $\mathbb{E}[\bar{\psi}_{s}]$ even for temporal values $s$ larger than $\tau$. In fact, equation (\ref{eq:CMSElocal}) can be re-written as \begin{equation} \psi _{t}=\psi _{{t_{0}}}-\int_{{t_{0}}}^{t\wedge \tau}\left( F(\psi _{s})+D\mathbb{E}\left[ \bar{\psi}_{s\wedge \tau}\right] \right) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t\wedge \tau}E_{i}(\psi _{s})dW_{s}^{i}. \label{eq:CMSERlocalwrong} \end{equation}\end{remark} {\bf \noindent Roadmap of the section} In the following we show that equation (\ref{eq:CMSE}) has a martingale solution provided the additional condition \eqref{adp} is satisfied. To do so, we follow the same route as in the deterministic case. In Theorem \ref{truncatedgalerkinscm} we show that the Galerkin approximations of a truncated version of equation (\ref{eq:CMSElocal}) are well defined globally. Moreover, we show that we can control the Sobolev norms of these approximations uniformly in the level of approximation, see \eqref{eq:goodboundG} and \eqref{lpc}.
The truncation is done by multiplying each of the coefficients of (\ref{eq:CMSElocal}) by the function $g_{R,\delta}(\left\vert \left\vert \cdot\right\vert \right\vert _{\mathcal{H}^{s}(\Omega )})$, where, as in the deterministic case, we use the cut-off function $g_{R,\delta}:\mathbb{R}_{+}\rightarrow \lbrack 0,1]$ \begin{equation*} g_{R,\delta}(x):=\begin{cases} & 1,\quad \text{if }0\leq x\leq R \\ & 0,\quad \text{if }x\geq R+\delta \\ & \text{smoothly decaying}\quad \text{if }R<x<R+\delta\end{cases}\end{equation*}for arbitrary $R\ge 0$ and $\delta \in [0,1]$. We then show that the laws of the Galerkin approximations are relatively compact. Using this we deduce that equation (\ref{eq:truncatedsystem}), which is the truncated version of equation (\ref{eq:CMSElocal}), has a global solution, see Theorem \ref{truncatedscm}. We also show that (\ref{eq:truncatedsystem}) has a unique solution. In the deterministic case, the existence of a solution of the truncated equation would immediately imply the existence of a local solution of the original equation on the interval $[t_0,\tau_R]$, where $\tau_R$ is the first time when $\left\vert \left\vert \psi _{s}^{R,\delta}\right\vert \right\vert _{\mathcal{H}^{s}(\Omega )}$ reaches the value $R$. We cannot do this here as the solution of the truncated equation (\ref{eq:truncatedsystem}) as well as that of the original equation (\ref{eq:CMSElocal}) depend on temporal values $s$ larger than $\tau_R$ through the quantity $\mathbb{E}[\bar{\psi}_{s}^{R,\delta}]$, respectively, $\mathbb{E}[\bar{\psi}_{s}]$. A final convergence argument is required: we consider a sequence $\psi^{R,\delta_n}$ with $\delta_n$ tending to $0$. Then the laws of the elements of this sequence are relatively compact and we deduce from here that any limit point of the sequence that satisfies the additional property \eqref{adp} is a martingale solution of the original equation (\ref{eq:CMSE}). With regard to the uniqueness of the solutions of (\ref{eq:CMSE}): If $(\psi_1 ,\tau_1 )$ and $(\psi_2 ,\tau_2 )$ are local solutions that are defined with \emph{different} stopping times, then we cannot deduce that $\psi_1=\psi_2$ on the common interval of existence $[t_0,\tau_1\wedge\tau_2]$. The reason for this is that, in contrast with the deterministic case, the choice of the stopping time influences $\psi_s$ even for values $s\le \tau$ as a result of the expectation term in (\ref{eq:CMSE}). This lack of consistency between local solutions prevents us from constructing a pathwise local solution of equation (\ref{eq:CMSE}), and hence also a corresponding maximal solution of (\ref{eq:CMSE}). \begin{remark} The assumption $\psi\in L^{2}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) $ for any $T\ge 0$ ensures that the term \begin{equation*} \int_{{t_{0}}}^{t\wedge \tau}\mathbb{E}\left[ \bar{\psi}_{s}\right] ds \end{equation*}is well defined as an element of $\mathcal{H}^{0}(\Omega )$. Observe that, since $\bar{\psi}:=\psi -\frac{1}{|\Omega |}\int_{\Omega }\psi dx$, we have that, for $t\in [t_0,T],$ \begin{eqnarray*} \left\vert \int_{{t_{0}}}^{t\wedge \tau}\mathbb{E}\left[ \bar{\psi}_{s}\right] ds \right\vert &\leq& \left( t-t_{0}\right) \mathbb{E}\left[ \sup_{s\in \left[ t_{0},t\wedge\tau \right] }\left\vert \left\vert \bar{\psi}_{s}\right\vert \right\vert _{\mathcal{H}^{0 }(\Omega )}\right] \\ &\leq& 2\left( T-t_{0}\right) \mathbb{E}\left[ \sup_{s\in \left[ t_{0},T \right] }\left\vert \left\vert \psi _{s}\right\vert \right\vert _{\mathcal{H}^{s}(\Omega )}\right] <\infty .
\end{eqnarray*}Moreover, for $0<\left\vert \alpha \right\vert \leq s$, we have that \begin{eqnarray*} \mathcal{D}^{\alpha }\mathbb{E}\left[ \bar{\psi}_{s}\right] &=&\mathbb{E}\left[\mathcal{D}^{\alpha } \psi_{s}\right] \\ \left\vert \left\vert \mathbb{E}\left[ \mathcal{D}^{\alpha }\psi_{s} \right] \right\vert \right\vert _{\mathcal{H}^{0}(\Omega )}^{2} &\leq &\mathbb{E}\left[ \left\vert \left\vert \mathcal{D}^{\alpha }\psi_{s} \right\vert \right\vert _{\mathcal{H}^{0}(\Omega )}^{2}\right] \leq \mathbb{E}\left[ \left\vert \left\vert \psi _{s}\right\vert \right\vert _{\mathcal{H}^{s}(\Omega )}^{2}\right] \end{eqnarray*}since the centering term vanishes when differentiated. \end{remark} The proof of the existence and uniqueness of a local solution for the system (\ref{eq:CMSE}) shares many steps with that of the existence and uniqueness proof in the deterministic case. It uses the same truncation procedure as in the deterministic case and the same Galerkin approximations. However, several technical difficulties need to be overcome. In the arguments below we will emphasize these difficulties and the methodology used to resolve them and omit the arguments that coincide with the deterministic case. We begin by introducing the Galerkin approximation to a truncated version of the system (\ref{eq:CMSE}). More precisely, let $\psi ^{m,R,\delta}=\left\{ \psi _{t}^{m,R,\delta},t\geq 0\right\} $ be the solution of the following stochastic differential system\begin{equation} \psi _{t}^{m,R,\delta}=P_{m}\left[ \psi _{{t_{0}}}\right] -\int_{{t_{0}}}^{t} F^{m,R,\delta}(\psi _{s}^{m,R,\delta}) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}g_{R,\delta}(||\psi _{s}^{m,R,\delta}||_{\mathcal{H}^{s}(\Omega )})P_mE_{i}(\psi _{s}^{m,R,\delta})dW_{s}^{i}. \label{eq:CMSEGalerkinlocal} \end{equation}Just as in (\ref{eq:CMSElocal}), the identity in (\ref{eq:CMSEGalerkinlocal}) is assumed to hold, $\mathbb{P}$-almost surely, in $L^{2}\left( \Xi ;\mathcal{H}^{0}(\Omega )\right) $. In (\ref{eq:CMSEGalerkinlocal}), the mapping $F^{m,R,\delta}(\psi _{s}^{m,R,\delta})$ is defined as \begin{eqnarray} F^{m,R,\delta}(\psi _{s}^{m,R,\delta})&=&g_{R,\delta}(||\psi _{s}^{m,R,\delta}||_{\mathcal{H}^{s}(\Omega )})\bigg( P_{m}\left[ B\left( \psi _{s}^{m,R,\delta},\psi _{s}^{m,R,\delta}\right) \right] +C\left( \psi _{s}^{m,R,\delta}\right) \nonumber\\ &&\left.+\frac{1}{2}\sum_{i=1}^{\infty }P_mE_{i}^{2}(\psi _{s}^{m,R,\delta})-\nu \Delta \psi _{s}^{m,R,\delta} + D\mathbb{E}\left[ \bar{\psi}_{s}^{m,R,\delta}\right] \right). \label{eq:FofGalerkinpsi} \end{eqnarray}The projection operator $P_m$ is defined as in \eqref{GALERKIN_U}+\eqref{GALERKIN_O}. Recall that, when defining the projection corresponding to the ocean velocity component, we have taken the incompressibility into account and projected onto the space of divergence-free vector fields. We then have the following: \begin{theorem} \label{truncatedgalerkinscm} Assume that $\psi _{t_{0}}\in \mathcal{H}^{s}(\Omega )$. Then the stochastic differential system (\ref{eq:CMSEGalerkinlocal}) admits a unique global solution with values in the space \begin{equation*} L^{2}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) \cap L^{2}\left( \Xi ;L^{2}\left( [{t_{0}},T];\mathcal{H}^{s+1}(\Omega )\right) \right) \end{equation*}for any $T>0$.
Moreover, there exists a constant $C=C\left( R,T\right) $ independent of $m$ and $\delta$ such that\begin{equation} \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\psi _{s}^{m,R,\delta}||_{\mathcal{H}^{s}(\Omega )}^{2}\right] +\mathbb{E}\left[ \int_{{t_{0}}}^{T}||\psi _{s}^{m,R,\delta}||_{\mathcal{H}^{s+1}(\Omega )}^{2}\right] \leq C \label{eq:goodboundG} \end{equation}for any $T>0$. \end{theorem} \begin{proof} Similar to the deterministic case, the system (\ref{eq:CMSEGalerkinlocal}) is equivalent to a finite dimensional system of stochastic differential equations of McKean-Vlasov type with Lipschitz continuous coefficients. The same holds true for the system satisfied by $\left( \mathcal{D}^{\alpha }\psi ^{m,R,\delta}\right) _{\left\vert \alpha \right\vert \leq s}$ which involves $\psi ^{m,R,\delta}$ as well as all of its partial derivatives up to order $s$. The existence and uniqueness of a solution of the system system (\ref{eq:CMSEGalerkinlocal}) follows for example from \cite{sas}. The bound (\ref{eq:goodboundG}) is obtained, as in the deterministic case, via a Gronwall type argument. \end{proof} \begin{remark} In addition, one can prove that there exists a constant $C=C\left( p,R,T\right) $ independent of $m$ such that% \begin{equation} \label{lpc} \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\psi _{s}^{m,R,\delta}||_{\mathcal{H}^{s}(\Omega )}^{p}\right] \leq C \end{equation}for any $T>0$ and $p\geq 2$. This bound is useful to show the continuity of the limit of the Galerkin approximation. \end{remark} Introduce next $\psi ^{R,\delta}=\left\{ \psi _{t}^{R,\delta},t\geq 0\right\} $ to be the solution of the following stochastic differential system\begin{equation} \psi _{t}^{R,\delta}= \psi _{{t_{0}}}-\int_{{t_{0}}}^{t} F^{R,\delta}(\psi _{s}^{R,\delta}) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}g_{R,\delta}(||\psi _{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )})E_{i}(\psi _{s}^{R,\delta})dW_{s}^{i}, \label{eq:truncatedsystem} \end{equation}Just as in (\ref{eq:CMSElocal}), the identity (\ref{eq:truncatedsystem}) is assumed to hold, $% \mathbb{P}$-almost surely, in $L^{2}\left( \Xi ;\mathcal{H}^{0}(\Omega )\right) $. In (\ref{eq:truncatedsystem}), the mapping $F^{R,\delta}(\psi _{s}^{R,\delta})$ is defined as \begin{eqnarray} F^{R,\delta}(\psi _{s}^{R,\delta})&=&g_{R,\delta}(||\psi _{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )})\bigg( B\left( \psi _{s}^{R,\delta},\psi _{s}^{R,\delta}\right) +C\left( \psi _{s}^{R,\delta}\right) \nonumber\\ &&\hspace{3cm}\left.+\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{2}(\psi _{s}^{R,\delta})-\nu \Delta \psi _{s}^{R,\delta} + D\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta}\right] \right). \label{eq:FofTpsi} \end{eqnarray} In the following, we will also need the weak version of the systems (\ref{eq:CMSEGalerkinlocal}) and (\ref{eq:truncatedsystem}). 
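As an aside, and purely as an illustration of the two ingredients just used -- one concrete admissible choice of the smooth cut-off $g_{R,\delta}$ and the McKean--Vlasov structure of the finite-dimensional system invoked in the proof of Theorem \ref{truncatedgalerkinscm} -- the following hedged Python sketch implements a cubic ``smoothstep'' cut-off and an Euler--Maruyama particle approximation of a scalar toy equation in which the law-dependence $\mathbb{E}[\bar{\psi}]$ is replaced by an empirical mean. All drift and noise terms, and all numerical values, are placeholders; they are not the operators $B$, $C$, $E_i$ of the model.
\begin{verbatim}
import numpy as np

def g_R_delta(x, R, delta):
    """One admissible smooth cut-off: equal to 1 on [0, R], to 0 on
    [R + delta, infinity), with a C^1 'smoothstep' decay in between."""
    if x <= R:
        return 1.0
    if x >= R + delta:
        return 0.0
    t = (x - R) / delta                    # rescale the transition zone to (0, 1)
    return 1.0 - t * t * (3.0 - 2.0 * t)   # cubic smoothstep

# Toy Euler-Maruyama particle approximation of a scalar McKean-Vlasov SDE:
# the law-dependence E[psi] is replaced by the empirical mean over N particles
# and every coefficient is multiplied by the cut-off, as in the truncated system.
rng = np.random.default_rng(0)
N, n_steps, dt = 500, 200, 1e-2
R, delta, noise_amp = 2.0, 0.5, 0.3
psi = rng.standard_normal(N)               # N scalar "particles"

for _ in range(n_steps):
    mean_field = psi.mean()                # Monte Carlo surrogate for E[psi]
    cutoff = np.array([g_R_delta(abs(p), R, delta) for p in psi])
    drift = cutoff * (-psi + 0.5 * mean_field)             # placeholder drift
    noise = cutoff * noise_amp * np.sqrt(dt) * rng.standard_normal(N)
    psi = psi + dt * drift + noise
\end{verbatim}
In the model itself the cut-off is of course evaluated on the $\mathcal{H}^{s}(\Omega )$-norm of the whole Galerkin state rather than particle-wise, and the drift and diffusion coefficients are those of (\ref{eq:CMSEGalerkinlocal}).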
These are standard: For example, the weak version of (\ref{eq:truncatedsystem}) reads as\begin{eqnarray} \left\langle \psi _{t}^{R,\delta},\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}} &=&\left\langle \psi _{{t_{0}}},\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}-\int_{{t_{0}}}^{t}\left( \left\langle F^{R,\delta }(\psi _{s}^{R,\delta}) ,\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}\right) ds \notag \\ &-&\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}g_{R,\delta}(||\psi _{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )})\left\langle \psi _{s}^{R,\delta},E_{i}^{\ast }\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}dW_{s}^{i},~~~\varphi \in \left( \mathcal{H}^{2}(\Omega )\right) ^{6}, \label{eq:truncatedsystemweak} \end{eqnarray}where, for $\psi _{t}=(\mathbf{u}_{t}^{a},\theta _{t}^{a},\mathbf{u}_{t}^{o},\theta _{t}^{o})$ and $\varphi =\left( \varphi _{i}\right) _{i=1}^{6}$, $\varphi _{i}\in \mathcal{H}^{2}(\Omega )$, we have the composite inner product \begin{eqnarray*} \left\langle \psi _{{t}},\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}} &=&\left\langle \mathbf{u}_{t}^{a,1},\varphi _{1}\right\rangle _{\mathcal{H}^{0}(\Omega )}+\left\langle \mathbf{u}_{t}^{a,2},\varphi _{2}\right\rangle _{\mathcal{H}^{0}(\Omega )}+\left\langle \theta _{t}^{a},\varphi _{3}\right\rangle _{\mathcal{H}^{0}(\Omega )} \\ &&+\left\langle \mathbf{u}_{t}^{o,1},\varphi _{4}\right\rangle _{\mathcal{H}^{0}(\Omega )}+\left\langle \mathbf{u}_{t}^{o,2},\varphi _{5}\right\rangle _{\mathcal{H}^{0}(\Omega )}+\left\langle \theta _{t}^{o},\varphi _{6}\right\rangle _{\mathcal{H}^{0}(\Omega )}. \end{eqnarray*}As the ocean velocity component takes values in the space of divergence-free vector fields, we take the corresponding pair of test functions $(\varphi_4,\varphi_5)$ with values in the same space. The operators $E_{i}^{\ast }$ are the adjoints of the operators $E_{i}$, so that \begin{equation*} \left\langle E_{i}\psi _{s}^{R,\delta},\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}=\left\langle \psi _{s}^{R,\delta},E_{i}^{\ast }\varphi \right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}. \end{equation*}In other words, $E_{i}^{\ast }$ are diagonal operators given by \begin{eqnarray*} E_{i}^{\ast }\varphi &:&=\mathrm{diag}(-\bar{E}^{1}\left( \varphi _{1},\varphi _{2}\right) ,\ -\bar{E}^{2}\left( \varphi _{1},\varphi _{2}\right) ,-\mathrm{div}\left( \xi _{i}\varphi _{3}\right) ,\ 0,\ 0,0) \\ \bar{E}^{k}\left( \varphi _{1},\varphi _{2}\right) &:&=\mathrm{div}\left( \xi _{i}\varphi _{k}\right) +\frac{1}{Ro^{a}}\xi _{i}^{k}+{\varphi }_{j}^{a}\partial _{k}\xi _{i}^{j}+\frac{1}{Ro^{a}}\partial ^{k}(R_{j}(\mathbf{x})\xi _{i}^{j}). \end{eqnarray*}The identity in (\ref{eq:truncatedsystemweak}) holds, $\mathbb{P}$-almost surely, in $L^{2}\left( \Xi ;\mathbb{R}\right) $.
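For instance, for the temperature component the adjoint relation can be verified directly by integration by parts on the torus (no boundary terms arise):
\begin{equation*}
\left\langle \xi _{i}\cdot \nabla \theta ^{a},\varphi _{3}\right\rangle _{\mathcal{H}^{0}(\Omega )}=\int_{\Omega }(\xi _{i}\cdot \nabla \theta ^{a})\,\varphi _{3}\,dx=-\int_{\Omega }\theta ^{a}\,\mathrm{div}\left( \xi _{i}\varphi _{3}\right) dx=\left\langle \theta ^{a},-\mathrm{div}\left( \xi _{i}\varphi _{3}\right) \right\rangle _{\mathcal{H}^{0}(\Omega )},
\end{equation*}
which is precisely the third component of $E_{i}^{\ast }\varphi $ above; the velocity components are obtained in the same way.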
For the following theorem we need to introduce the additional space $\mathcal{W}^{\beta ,p}\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) $, where $\beta\in (0,1)$ and $p>2$ with $\beta p>1$, defined as \[ \mathcal{W}^{\beta ,p}\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) :=\left\{ a\in L^{p}\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) \,|~~~\Vert a\Vert _{\mathcal{W}^{\beta ,p}(0,T;\mathcal{H}^{0}(\Omega ))}<\infty \right\} \]where the norm $\Vert \cdot \Vert _{\mathcal{W}^{\beta ,p}(0,T;\mathcal{H}^{0}(\Omega ))}$ is defined as \[ \Vert a\Vert _{\mathcal{W}^{\beta ,p}(0,T;\mathcal{H}^{0}(\Omega ))}^{p}:=\int_{0}^{T}\left\Vert a_{t}\right\Vert _{\mathcal{H}^{0}(\Omega )}^{p}dt+\int_{0}^{T}\int_{0}^{T}\frac{\left\Vert a_{t}-a_{s}\right\Vert _{\mathcal{H}^{0}(\Omega )}^{p}}{|t-s|^{1+\beta p}}dtds. \] \begin{theorem} \label{truncatedscm}Assume that $\psi _{t_{0}}\in \mathcal{H}^{s}(\Omega )$. Then the stochastic differential system (\ref{eq:truncatedsystem}) admits a unique global solution (in the sense of Definition \ref{def:localglobalsolution}) with values in the space \begin{equation*} L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) \cap L^{2}\left( \Xi ;L^{2}\left( [{t_{0}},T];\mathcal{H}^{s+1}(\Omega )\right) \right) \end{equation*}for any $T>0$ and any $p\geq 2$. Moreover, there exists a constant $C=C\left( R,T\right) $ independent of $\delta$ such that \begin{equation} \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\psi _{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )}^{p}\right] +\mathbb{E}\left[ \int_{{t_{0}}}^{T}||\psi _{s}^{R,\delta}||_{\mathcal{H}^{s+1}(\Omega )}^{2}\right] \leq C \label{q:goodboundp} \end{equation}for any $T>0$. \end{theorem} \begin{proof}$\left. \right. $\newline \textbf{Existence. }We follow the same steps as in \cite{cfh}, underlining the main differences below. Let $\left\{ Q^{m}\right\} $ be the family of the probability laws of the processes $\left\{ \psi ^{m,R,\delta}\right\} .$ These laws are supported on the space\begin{eqnarray*} \mathcal{E}_{0} &:&=\mathcal{E}_{1}\cap \mathcal{E}_{2} \\ \mathcal{E}_{1} &:&= L^{p}\left( \Xi ; \mathcal{W}^{\beta ,p}\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) \right) \\ \mathcal{E}_{2} &:&=L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) \cap L^{2}\left( \Xi ;L^{2}\left( [{t_{0}},T];\mathcal{H}^{s+1}(\Omega )\right) \right) , \end{eqnarray*}where $\beta $ is an arbitrary positive constant such that $\beta <1/2-1/p$ and $p>2$. Since $\mathcal{E}_{0}$ is compactly embedded in $L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) \right) $, we deduce that these laws are relatively compact in the space of probability measures over $L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right)\right) .$ We add to the processes $\left\{ \psi ^{m,R,\delta}\right\} $ the driving Brownian motions $\mathcal{W}=\left\{ W^{i}\right\} _{i=1}^{\infty }$. Then the pairs $\{\psi ^{m,R,\delta},\mathcal{W}\}$ have probability laws $\left\{ \widetilde{Q}^{m}\right\} $ that are relatively compact in the space of probability measures over the state space \begin{equation*} L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) \right) \times L^{2}\left( \Xi ;C\left( [{t_{0}},T];\mathbb{R}\right) ^{\infty }\right) . \end{equation*}Let $\widetilde{Q}$ be a limit point and $\left\{ \widetilde{Q}^{m_{n}}\right\} $ be a subsequence of measures converging to $\widetilde{Q}$.
Also let $\left\{ \varphi _{k}\right\} _{k}$ be a countable dense set of $\left( \mathcal{H}^{2}(\Omega )\right) ^{6}.$ By Theorem 2.2 in \cite{kurtzprotter}, it follows that the processes \begin{equation*} \{\psi ^{m_n,R,\delta},\int_{{t_{0}}}^{\cdot }\left\langle \psi _{s}^{{m_n,R,\delta}},E_{i}^{\ast }P_{m_{n}}\varphi _{k}\right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}dW_{s}^{i},i,k=1,...\infty ,\mathcal{W}\} \end{equation*}converge in distribution. By using the Skorohod representation theorem, there exists a probability space $\left( \widetilde{\Xi},\mathcal{\widetilde{F}},% \widetilde{P}\right) $ on which we can find \begin{itemize} \item a sequence \begin{equation*} \widetilde{A}^{m_{n}}=\left\{ \widetilde{\psi}^{m_n,R,\delta},\int_{{t_{0}}}^{\cdot }\left\langle \widetilde{\psi}_{s}^{m_n,R,\delta},E_{i}^{\ast }P_{m_{n}}\varphi _{k}\right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}d\widetilde{W}_{s}^{m_{n},i},~~i,k=1,...\infty ,~~\mathcal{\widetilde{W}}^{m_{n}}\right\} \end{equation*}such that $\left\{ \widetilde{\psi}^{m_n,R,\delta},\mathcal{\widetilde{W}}% ^{m_{n}}\right\} $ has law $\widetilde{Q}^{m_{n}}$ \item a process \begin{equation*} \widetilde{A}=\left\{ \widetilde{\psi}^{R,\delta},\int_{{t_{0}}}^{\cdot }\left\langle \widetilde{\psi}_{s}^{R,\delta},E_{i}^{\ast }\varphi _{k}\right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}d\widetilde{W}_{s}^{i},~~~i,k=1,...\infty ,~~\mathcal{\widetilde{W}}\right\} \end{equation*}with values in the product space \begin{equation*} \widetilde{E}:=L^{2}\left( \widetilde{\Xi};C(\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right) \right) \times L^{2}\left( \widetilde{\Xi};C(\left( [{t_{0}},T];\mathbb{R}\right) ^{\mathbb{N}\times \mathbb{N}}\right) \times L^{2}\left( \widetilde{\Xi};C(\left( [{t_{0}},T];\mathbb{R}\right) ^{\mathbb{N}}\right) . \end{equation*}such that the component $\left\{ \widetilde{\psi}^{R},\mathcal{\widetilde{W}}\right\} $ from $\widetilde{A}$ has law $\widetilde{Q}.$ \item The sequence $\left( \widetilde{A}^{m_{n}}\right) _{n}$ converges to $% \widetilde{A}$ as elements in the space $\widetilde{E}$. \item Since $Q^{m_{m}}$, the law of $\widetilde{\psi}^{m_n,R,\delta}$ is supported on the space \begin{equation*} L^{p}\left( \widetilde{\Xi};L^{\infty }(\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) \cap L^{2}\left( \widetilde{\Xi};L^{2}(\left( [{t_{0}},T];\mathcal{H}^{s+1}(\Omega )\right) \right) , \end{equation*}the law of $\widetilde{\psi}^{R,\delta}$ has the same property. \end{itemize} Next we take limits of all the terms in the weak version of the equation (\ref{eq:CMSEGalerkinlocal}) and show that $\widetilde{\psi}^{R}$ satisfies (\ref{eq:truncatedsystemweak})\footnote{In (\ref{eq:truncatedsystemweak}), the set of original Brownian motions $% \mathcal{W}=\left\{ W^{i}\right\} _{i=1}^{\infty }$ is replaced by $\mathcal{\widetilde{W}}=\left\{ \widetilde{W}^{i}\right\} _{i=1}^{\infty }$ $.$} for any $\varphi _{k}$ in a countable dense set of $\left( \mathcal{H}^{2}(\Omega )\right) ^{6}$, and therefore, by a density argument, for an arbitrary $\varphi $ $\in \left( \mathcal{H}^{2}(\Omega )\right) ^{6}$. The convergence of all the linear terms is straightforward. 
We only discuss the convergence of the nonlinear term, in other words, the limit\begin{eqnarray*} &&\lim_{n\rightarrow \infty }\int_{{t_{0}}}^{t}\left\langle g_{R,\delta}(||\widetilde{\psi}_{s}^{m_{n},R,\delta}||_{\mathcal{H}^{s}(\Omega )})P_{m_{n}}B\left( \widetilde{\psi}_{s}^{m_n,R,\delta},\widetilde{\psi}_{s}^{m_n,R,\delta}\right) ,\varphi _{k}\right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}ds \\ &&\hspace{3cm}=\int_{{t_{0}}}^{t}\left\langle g_{R,\delta}(||\widetilde{\psi}_{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )})B\left( \widetilde{\psi}_{s}^{R,\delta},\widetilde{\psi}_{s}^{R,\delta}\right) ,\varphi _{k}\right\rangle _{\mathcal{H}^{0}(\Omega )^{6}}ds. \end{eqnarray*}Recall that we took the corresponding pair of test functions $(\varphi_4,\varphi_5)$ to be divergence-free, so we do not need to apply the Leray projection to the corresponding ocean velocity component of the bilinear form $B\left( \widetilde{\psi}_{s}^{R,\delta},\widetilde{\psi}_{s}^{R,\delta}\right) $. The convergence follows via a Sobolev space interpolation argument from the following bounds \begin{eqnarray*} \lim_{n\rightarrow \infty }\mathbb{E}\left[ \sup_{s\in \left[ t_{0},T\right] }\left\vert \left\vert \widetilde{\psi}_{s}^{m_n,R,\delta}-\widetilde{\psi}_{s}^{R,\delta}\right\vert \right\vert _{\mathcal{H}^{0}(\Omega )^{6}}^{2}\right] &=&0 \\ \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\widetilde{\psi}_{s}^{m_n,R,\delta}||_{\mathcal{H}^{s}(\Omega )}^{p}\right] +\mathbb{E}\left[ \int_{{t_{0}}}^{T}||\widetilde{\psi}_{s}^{m_n,R,\delta}||_{\mathcal{H}^{s+1}(\Omega )}^{2}\right] &\leq &C,~~T\geq t_{0} \\ \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\widetilde{\psi}_{s}^{R,\delta}||_{\mathcal{H}^{s}(\Omega )}^{p}\right] +\mathbb{E}\left[ \int_{{t_{0}}}^{T}||\widetilde{\psi}_{s}^{R,\delta}||_{\mathcal{H}^{s+1}(\Omega )}^{2}\right] &\leq &C,~~T\geq t_{0}, \end{eqnarray*}where $C=C\left( p,R,T\right) $ is independent of $m_{n}.$ The above argument justifies the existence of a solution of (\ref{eq:truncatedsystem}) which is weak in the probabilistic sense. From this and the pathwise uniqueness (see the argument below) of (\ref{eq:truncatedsystem}), by the Yamada--Watanabe theorem (see e.g. \cite{Rockner}), we deduce the existence of a solution on the original space driven by the original set of Brownian motions $\mathcal{W}=\left\{ W^{i}\right\} _{i=1}^{\infty }$. Note that both the solution of the equation (\ref{eq:truncatedsystem}) on the original space and $\widetilde{\psi}^{R,\delta}$ have the same distribution.
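For the reader's convenience, the Sobolev interpolation step used above can be based on the standard inequality (valid on the torus for $0<\sigma<s$, with a constant depending only on the choice of equivalent norms)
\begin{equation*}
\Vert u\Vert _{\mathcal{H}^{\sigma }(\Omega )}\leq c\,\Vert u\Vert _{\mathcal{H}^{0}(\Omega )}^{1-\sigma /s}\,\Vert u\Vert _{\mathcal{H}^{s}(\Omega )}^{\sigma /s},
\end{equation*}
so that the strong convergence in $\mathcal{H}^{0}(\Omega )$ stated in the first limit above, combined with the uniform $\mathcal{H}^{s}$-bounds, yields convergence in every intermediate norm needed to pass to the limit in the bilinear term.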
Since the law of $% \widetilde{\psi}^{R,\delta}$ has support on the space \begin{equation*} L^{p}\left( \widetilde{\Xi};C(\left( [{t_{0}},T];\mathcal{H}^{s}(\Omega )\right) \right) \cap L^{2}\left( \widetilde{\Xi};L^{2}(\left( [{t_{0}},T];\mathcal{H}^{s+1}(\Omega )\right) \right) , \end{equation*}it follows that the solution of the equation (\ref{eq:truncatedsystem}) satisfies (\ref{eq:goodboundG}).\newline \textbf{Uniqueness.} Let $\psi ^{R,\delta,1}$ and $\psi ^{R,\delta,2}$ be two solutions of (\ref{eq:truncatedsystem}), in other words, \[ \psi _{t}^{R,\delta,i}= \psi _{{t_{0}}}-\int_{{t_{0}}}^{t} F^{R,\delta,i}(\psi _{s}^{R,\delta,i}) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}g_{R,\delta,i}(||\psi _{s}^{R,\delta,i}||_{\mathcal{H}^{s}(\Omega )})E_{i}(\psi _{s}^{R,\delta,i})dW_{s}^{i},\ \ i=1,2 \] where $F^{R,\delta,i}(\psi _{s}^{R,\delta,i})$ are defined as \begin{eqnarray} F^{R,\delta,i}(\psi _{s}^{R,\delta,i})&=&g_{R,\delta}(||\psi _{s}^{R,\delta,i}||_{\mathcal{H}^{s}(\Omega )})\bigg( B\left( \psi _{s}^{R,\delta,i},\psi _{s}^{R,\delta,i}\right) +C\left( \psi _{s}^{R,\delta,i}\right) \nonumber\\ &&\hspace{3cm}\left.+\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{2}(\psi _{s}^{R,\delta,i})-\nu \Delta \psi _{s}^{R,\delta,i} + D\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta,i}\right] \right). \label{eq:FofTpsi2} \end{eqnarray} The uniqueness argument is now standard: We use a Gronwall argument. We introduce the following notation \begin{eqnarray*} \psi ^{R,\delta,1,2} &=&\psi ^{R,\delta,1}-\psi ^{R,\delta,2},~~~~\bar{F}^{R,\delta,1,2}=\bar{F}^{R}(\psi _{s}^{R,\delta,1})-\bar{F}^{R}(\psi _{s}^{R,\delta,2}) \\ \bar{\psi}_{s}^{R,\delta,1,2} &=&\bar{\psi}_{s}^{R,\delta,1}-\bar{\psi}_{s}^{R,\delta,2}. \end{eqnarray*}Then \begin{equation*} \psi _{t}^{R,\delta,1,2}=\psi _{{t_{0}}}^{1,2}-\int_{{t_{0}}}^{t}\left( \bar{F}^{R,\delta,1,2}+\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{2}(\psi _{s}^{R,\delta,1,2})+D\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta,1,2}\right] \right) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}E_{i}(\psi _{s}^{R,\delta,1,2})dW_{s}^{i}, \end{equation*}from which we deduce that \begin{eqnarray} \mathbb{E}\left[ ||\psi _{t}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] &=&\mathbb{E}\left[ ||\psi _{t_{0}}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] -\int_{{t_{0}}}^{t}\mathbb{E}\left[ \left\langle \psi _{s}^{R,\delta,1,2},\bar{F}^{R,\delta,1,2}+D\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta,1,2}\right] \right\rangle _{\mathcal{H}^{0}(\Omega )}\right] ds \notag \\ &&+\sum_{i=1}^{\infty }\frac{1}{2}\int_{{t_{0}}}^{t}\mathbb{E}\left[ \left\langle \psi _{s}^{R,\delta,1,2},E_{i}^{2}(\psi _{s}^{R,\delta,1,2})\right\rangle _{\mathcal{H}^{0}(\Omega )}+||E_{i}(\psi _{s}^{R,\delta,1,2})||_{\mathcal{H}^{0}(\Omega )}^{2}\right] ds. 
\label{eq:e1} \end{eqnarray}Observe that \begin{eqnarray} \sum_{i=1}^{\infty }\left( \left\langle \psi _{s}^{R,\delta,1,2},E_{i}^{2}(\psi _{s}^{R,\delta,1,2})\right\rangle _{\mathcal{H}^{0}(\Omega )}+||E_{i}(\psi _{s}^{R,\delta,1,2})||_{\mathcal{H}^{0}(\Omega )}^{2}\right) &\leq &C||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2} \label{eq:e2} \\ \mathbb{E}\left[ \left( \left\langle \psi _{s}^{R,\delta,1,2},D\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta,1,2}\right] \right\rangle _{\mathcal{H}^{0}(\Omega )}\right) \right] &\leq &C\mathbb{E}\left[ ||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}||\mathbb{E}\left[ \bar{\psi}_{s}^{R,\delta,1,2}\right] ||_{\mathcal{H}^{0}(\Omega )}\right] \notag \\ &\leq &C\mathbb{E}\left[ ||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}\right] ^{2} \notag \\ &\leq &C\mathbb{E}\left[ ||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] \label{eq:e3} \end{eqnarray}Finally, similar to the deterministic case, we deduce that \begin{equation} \left\langle \psi _{s}^{R,\delta,1,2},\bar{F}^{R,\delta,1,2}\right\rangle _{\mathcal{H}^{0}(\Omega )}\leq C\left( R\right) ||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}. \label{eq:e4} \end{equation}From (\ref{eq:e1}), (\ref{eq:e2}), (\ref{eq:e3}) and (\ref{eq:e4}), we deduce that there exists a constant $C\left( R,T\right) $ such that \begin{equation*} \mathbb{E}\left[ ||\psi _{t}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] \leq \mathbb{E}\left[ ||\psi _{t_{0}}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] +C\int_{{t_{0}}}^{t}\mathbb{E}\left[ ||\psi _{s}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] ds,~~~t\in \left[ t_{0},T\right] , \end{equation*}and, by Gronwall's inequality, we deduce that \begin{equation*} \mathbb{E}\left[ ||\psi _{t}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] \leq e^{Ct}\mathbb{E}\left[ ||\psi _{t_{0}}^{R,\delta,1,2}||_{\mathcal{H}^{0}(\Omega )}^{2}\right] ,~~~t\in \left[ t_{0},T\right] . \end{equation*}The continuous dependence on the initial condition implies the uniqueness of the solution of (\ref{eq:truncatedsystem}). \end{proof} We choose next a sequence $\psi ^{R,\delta_n}=\left\{ \psi _{t}^{R,\delta_n},t\geq 0\right\} $ of solutions of the truncated equation \eqref{eq:truncatedsystem} such that $\lim \delta_n=0$. Using arguments similar to those applied to the sequence of Galerkin approximations $\psi ^{m, R,\delta}$ one shows that the laws of the elements of the sequence are relatively compact in the space of probability measures over $L^{p}\left( \Xi ;C\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right)\right) .$ Via the Skorohod representation theorem, there exists a probability space $\left( \widetilde{\Xi},\mathcal{\widetilde{F}},\widetilde{P}\right) $ on which we can find a sequence $\{ \widetilde{\psi}^{R,\delta_n}\}$ with the same laws as the original sequence, which converges in $L^{p}\left( \widetilde{\Xi} ;C\left( [{t_{0}},T];\mathcal{H}^{0}(\Omega )\right)\right)$ to a process $\{ \widetilde{\psi}^{R}\}$ that satisfies \[ \mathbb{E}\left[ \sup_{t\in \left[ t_{0},T\right] }||\widetilde{\psi}_{s}^{R}||_{\mathcal{H}^{s}(\Omega )}^{p}\right] +\mathbb{E}\left[ \int_{{t_{0}}}^{T}||\widetilde{\psi}_{s}^{R}||_{\mathcal{H}^{s+1}(\Omega )}^{2}\right] \leq C,~~T\geq t_{0}, \]where $C=C\left( p,R,T\right) $.
Via a Sobolev interpolation argument we can also deduce that \begin{eqnarray*} \lim_{n\rightarrow \infty }\mathbb{E}\left[ \sup_{s\in \left[ t_{0},T\right] }\left\vert \left\vert \widetilde{\psi}_{s}^{R,\delta_n}-\widetilde{\psi}_{s}^{R}\right\vert \right\vert _{\mathcal{H}^{s-1}(\Omega )^{6}}^{p}\right] &=&0 \\ \lim_{n\rightarrow \infty }\mathbb{E}\left[ \int_{t_{0}}^T\left\vert \left\vert \widetilde{\psi}_{s}^{R,\delta_n}-\widetilde{\psi}_{s}^{R}\right\vert \right\vert _{\mathcal{H}^{s}(\Omega )^{6}}^{p}ds\right] &=&0 \end{eqnarray*} Next let $g_{R}:\mathbb{R}_{+}\rightarrow \lbrack 0,1]$ be the cut-off function as follows \begin{equation*} g_{R}(x):=\begin{cases} & 1,\quad \text{if }0\leq x\leq R \\ & 0,\quad \text{if }x\geq R\\ \end{cases}.\end{equation*}and assume that \begin{equation}\label{adp} \lim_{n\rightarrow \infty }\mathbb{E}\left[ \int_{t_0}^T \left(g_{R,\delta_n}(\| \widetilde{\psi}_{s}^{R,\delta_n}\| _{\mathcal{H}^{s}(\Omega )^{6}}) - g_{R}(\| \widetilde{\psi}_{s}^{R}\| _{\mathcal{H}^{s}(\Omega )^{6}})\right)^p dt \right] =0. \end{equation} If condition \eqref{adp} is satisfied then, by taking the limit of each term in equation \eqref{eq:truncatedsystemweak}, we can deduce that the limiting process $ \widetilde{\psi}^{R}$ solves the following stochastic differential system\begin{equation} \widetilde\psi _{t}^{R}= \widetilde\psi _{{t_{0}}}-\int_{{t_{0}}}^{t} F^{R}(\widetilde\psi _{s}^{R}) ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}g_{R}(||\widetilde\psi _{s}^{R}||_{\mathcal{H}^{s}(\Omega )})E_{i}(\widetilde\psi _{s}^{R})dW_{s}^{i}. \label{eq:truncatedsystem0} \end{equation}In (\ref{eq:truncatedsystem0}), the mapping $F^{R}(\widetilde\psi _{s}^{R})$ is defined as \begin{eqnarray} F^{R}(\widetilde\psi _{s}^{R})&=&g_{R}(||\widetilde\psi _{s}^{R}||_{\mathcal{H}^{s}(\Omega )})\bigg( B\left( \widetilde\psi _{s}^{R},\widetilde\psi _{s}^{R}\right) +C\left( \widetilde\psi _{s}^{R}\right) \nonumber\\ &&\hspace{3cm}\left.+\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{2}(\widetilde\psi _{s}^{R})-\nu \Delta \widetilde\psi _{s}^{R} + D\mathbb{E}\left[ \bar{\widetilde\psi}_{s}^{R}\right] \right). \label{eq:FofTpsi3} \end{eqnarray} It is then immediate that equation \eqref{eq:truncatedsystem0} is equivalent to \eqref{eq:CMSElocal} where we choose the stopping time \[ \tau_R := \inf\{t\ge t_0 | \ \ \|\widetilde{\psi}_{s}^{R}\|_{\mathcal{H}^{s}(\Omega )^{6}} \ge R \} \] \begin{remark}By using standard Sobolev interpolation results, one can prove that \begin{equation}\label{adp'} \lim_{n\rightarrow \infty }\mathbb{E}\left[ \int_{t_0}^T \left |\| \widetilde{\psi}_{s}^{R,\delta_n}\| _{\mathcal{H}^{s}(\Omega )^{6}} - \| \widetilde{\psi}_{s}^{R}\| _{\mathcal{H}^{s}(\Omega )^{6}}\right | dt \right] =0. \end{equation} This implies that the sequence $\widetilde{\psi}^{R,\delta_n}$ has a subsequence that converges to $\widetilde{\psi}^{R}$ on a set $$ \{(\xi\times t)\in \Xi\times [t_0,T]\} $$ of full $\mathbb P \otimes \ell_{[t_0,T]}$-measure\footnote{That is the complement of the set has null measure}, where $\mathbb \ell_{[t_0,T]}$ is the Lebesgue measure on the interval $[t_0,T]$ . 
From the definition of $\widetilde{\psi}_{s}^{R,\delta_n}$ we can deduce that, for each $n$, the set $$ \{(\xi, t)\in \Xi\times [t_0,T]\,|\, \| \widetilde{\psi}_{t}^{R,\delta_n}(\xi)\| _{\mathcal{H}^{s}(\Omega )^{6}}\le R+\delta_n \} $$ has full $\mathbb P \otimes \ell_{[t_0,T]}$-measure, and therefore, using the almost everywhere convergence along the subsequence and $\delta_n\to 0$, that \begin{eqnarray*} \Xi\times [t_0,T]&\supseteq&\{(\xi,t) | \| \widetilde{\psi}_{t}^{R}(\xi)\| _{\mathcal{H}^{s}(\Omega )^{6}}\le R \}\\ &&= \{(\xi,t) | \| \widetilde{\psi}_{t}^{R}(\xi)\| _{\mathcal{H}^{s}(\Omega )^{6}}< R \} \cup \{(\xi,t) | \| \widetilde{\psi}_{t}^{R}(\xi)\| _{\mathcal{H}^{s}(\Omega )^{6}}= R \}\\ &&=:\mathcal A_{<R}\cup\mathcal A_{=R} \end{eqnarray*} has full $\mathbb P \otimes \ell_{[t_0,T]}$-measure. We can deduce from here, via Egorov's theorem, that \eqref{adp} holds true provided $\mathcal A_{=R}$ is a set of null $\mathbb P \otimes \ell_{[t_0,T]}$-measure.\footnote{We thank Tom Kurtz for pointing out this remark.} \end{remark} \begin{remark} The argument so far only shows the existence of a martingale solution (equivalently, a probabilistically weak solution). To prove the existence of a (probabilistically) strong solution one would need to show the uniqueness of equation \eqref{eq:truncatedsystem0}. This cannot be done as the cut-off function is no longer Lipschitz over the positive half-line. One can try to control the difference between two solutions $\psi^{R,1}$ and $\psi^{R,2}$ up to the minimum of their corresponding hitting times $\tau^1_R \wedge \tau^2_R$. On the interval $[t_0,\tau^1_R \wedge \tau^2_R]$, $g_{R}(||\psi _{s}^{R,1}||_{\mathcal{H}^{s}(\Omega)})=g_{R}(||\psi _{s}^{R,2}||_{\mathcal{H}^{s}(\Omega)})$, so we can avoid the difficulty raised by $g_{R}$ being non-Lipschitz. However, due to the expectation terms in \eqref{eq:truncatedsystem0}, the solution depends on temporal values beyond $\tau^1_R \wedge \tau^2_R$, which we cannot control. Overcoming this difficulty is beyond the scope of the current paper. \end{remark} \subsection{Lagrangian-Averaged Stochastic Advection by Lie Transport Climate \\Model }\label{LA-SALT-eqns-sec} Using the same notation as in Section \ref{sec-SALT}, the state of the system is described by a state vector $\psi :=(\psi ^{a},\psi ^{o})$ with atmospheric component $\psi ^{a}:=(\mathbf{u}^{a},\theta ^{a})$ and oceanic component $\psi ^{o}:=(\mathbf{u}^{o},\theta ^{o})$. The initial state is denoted by $\psi (t_{0})=\psi _{0}$, where $\psi _{0}=(\mathbf{u}_{0}^{a},\theta _{0}^{a},\mathbf{u}_{0}^{o},\theta _{0}^{o})$. The LA-SALT equations for the oceanic component $\psi ^{o}$ are the same as the corresponding SALT equations, in other words $(\mathbf{u}^{o},\theta^{o})$ satisfy equations \eqref{COUPLED_SWE_VELOC_O_STOCH} + \eqref{COUPLED_SWE_T_O_STOCH}. The LA-SALT equations for the atmospheric component $\psi ^{a}$ differ from the corresponding SALT equations. More precisely, $ \psi ^{a}:=(\mathbf{u}^{a},\theta ^{a})$ satisfy equations \eqref{COUPLED_SWE_VELOC_A_STOCH_LA-Ito-Intro} + \eqref{COUPLED_SWE_T_A_STOCH_LA-Ito-Intro}. In the following we work with the same stochastic basis $(\Xi ,\mathcal{F},(\mathcal{F}_{t})_{t},\mathbb{P},(W^{i})_{i})$ and use the same sequence of vector fields $(\xi _{i})_{i}$ as in Section \ref{sec-SALT}.
Similar to (\ref{eq:CMSE}), we summarize the LA-CMSE equations \eqref{COUPLED_SWE_VELOC_O_STOCH},\eqref{COUPLED_SWE_T_O_STOCH}, \eqref{COUPLED_SWE_VELOC_A_STOCH_LA-Ito-Intro} and \eqref{COUPLED_SWE_T_A_STOCH_LA-Ito-Intro} as \begin{equation} d\psi _{t}+(B^{L}\left( \psi _{t},\psi _{t}\right) +C^{L}\left( \psi _{t}\right) )dt+\sum_{i=1}^{\infty }E_{i}^{L}(\psi _{t})\circ dW_{t}^{i}=\nu \Delta \psi _{t}dt, \label{eq:LACMSE} \end{equation}where the process $\psi $ gathers all variables in the LASALT model, i.e., $ \psi _{t}:=(\mathbf{u}_{t}^{a},\theta _{t}^{a},\mathbf{u}_{t}^{o},\theta _{t}^{o})$ (as in the deterministic and SALT cases), $\mathrm{curl}R({\mathbf{x}})=2\Omega ({\mathbf{x}})$, $\bar{\psi}:=\psi -\frac{1}{|\Omega |}\int_{\Omega }\psi dx$ and:\footnote{ As in Section \ref{sec-SALT}, the notation $\left( \cdot \right) ^{T}$ indicates that the operators $B^{L},C^{L},E_{i}^{L}$ are column vectors.} \begin{itemize} \item $B^{L}$ is the bilinear transport operator\begin{equation*} B^{L}\left( \psi _{t},\psi _{t}\right) =(\mathbb{E}[\mathbf{u}^{a}]\cdot \nabla \mathbf{u}^{a}+{u}_{j}^{a}\nabla \mathbb{E}[\mathbf{u}^{a}]^{j},\ \mathbb{E}[\mathbf{u}^{a}]\cdot \nabla \theta ^{a},\ \mathbf{u}^{o}\cdot \nabla \mathbf{u}^{o},\ \mathbf{u}^{o}\cdot \nabla \theta ^{o})^{T}. \end{equation*} \item $C^{L}$ comprises all the linear terms (including the pressure term in the equation for the components of $\psi $ corresponding to $\bar{\mathbf{u}}% ^{o}$)% \begin{eqnarray*} C^{L}\left( \psi _{t}\right) &=&\left( \frac{1}{Ro^{a}}\mathbb{E}[\mathbf{u}^{a}]+\frac{1}{Ro^{a}}\nabla (\mathbb{E}[\mathbf{u}^{a}]\cdot R(\mathbf{x}))+\frac{1}{Ro^{a}}\nabla \theta ^{a},\ -\gamma \left( \theta ^{o}-\theta ^{a}\right) ,\right. \ \\ &&\left. \frac{1}{Ro^{o}}\left( \mathbf{u}^{o}\right) ^{\perp }+\frac{1}{Ro^{o}}\nabla p^{o}+\sigma \left( \mathbf{u}^{o}-\mathbb{E}[\mathbf{\bar{u}}^{a}]\right) ,\ 0\right) ^{T} \end{eqnarray*} \item $E_{i}^{L}$ are operators given by \begin{equation*} E_{i}^{L}\left( \psi _{t}\right) =(\xi _{i}\cdot \nabla \mathbf{u}^{a}+\frac{1}{Ro^{a}}\xi _{i}+{u}_{j}^{a}\nabla \xi _{i}^{j}+\frac{1}{Ro^{a}}\nabla (R_{j}(\mathbf{x})\xi _{i}^{j}),\ \xi _{i}\cdot \nabla \theta ^{a},\ 0,\ 0)^{T} \end{equation*} \end{itemize} The treatment of equation (\ref{eq:LACMSE}) differs slightly from that of (\ref{eq:CMSE}). The reason is that the expected value of state vector $\psi$ satisfies a closed form equation. 
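Indeed, the mechanism behind this closure can be spelled out directly: in the It\^{o} form of (\ref{eq:LACMSE}) the stochastic integrals are (under the stated integrability) mean-zero martingales, and the operators $B^{L}$, $C^{L}$ and $(E_{i}^{L})^{2}$ act on the atmospheric variables in an affine way with deterministic coefficients (they involve only $\mathbb{E}[\mathbf{u}^{a}]$, $\xi _{i}$ and $R(\mathbf{x})$), so taking expectations commutes with them. For the atmospheric transport term, for instance,
\begin{equation*}
\mathbb{E}\big[ \mathbb{E}[\mathbf{u}^{a}]\cdot \nabla \mathbf{u}^{a}+{u}_{j}^{a}\nabla \mathbb{E}[\mathbf{u}^{a}]^{j}\big] =\widehat{\mathbf{u}}^{a}\cdot \nabla \widehat{\mathbf{u}}^{a}+\widehat{u}_{j}^{a}\nabla (\widehat{\mathbf{u}}^{a})^{j},
\end{equation*}
which is the velocity part of the operator $B^{L,a}$ appearing below.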
More precisely, as the oceanic component $\psi ^{o}$ is not random, we only need to take expectation in the equations satisfied by the atmospheric component and deduce that $\widehat\psi^{a}:=\mathbb{E}[\psi ^{a}]=(\mathbb{E}[\mathbf{u}^{a}], \mathbb{E}[\theta^{a}])^{T}=:(\widehat{\mathbf{u}}^{a}, \widehat{\theta}^{a})^{T}$ satisfies \begin{equation} d_t\widehat\psi _{t}^a+B^{L,a}( \widehat\psi _{t}^a,\widehat\psi _{t}^a) +C^{L}( \widehat\psi _{t}^a) =\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{L,a,2}(\widehat\psi _{t}^a)+\nu \Delta \widehat\psi _{t}^a, \label{eq:LACMSEE} \end{equation}where \begin{eqnarray*} B^{L,a}\left( \widehat\psi _{t}^a,\widehat\psi _{t}^a\right) &=&(\widehat{\mathbf{u}}^{a}\cdot \nabla \widehat{\mathbf{u}}^{a}+\widehat{u}_{j}^{a}\nabla (\widehat{\mathbf{u}}^{a})^{j},\ \widehat{\mathbf{u}}^{a}\cdot \nabla \widehat{\theta}^{a})^{T}.\\ C^{L}\left( \widehat{\psi}_{t}\right) &=& \left(\frac{1}{Ro^{a}} \widehat{\mathbf{u}}^{a} +\frac{1}{Ro^{a}}\nabla (\widehat{\mathbf{u}}^{a}\cdot R(\mathbf{x}))+\frac{1}{Ro^{a}}\nabla \widehat{\theta} ^{a}],\ -\gamma \left( \theta ^{o}-\widehat{\theta} ^{a}\right) \right)^{T}\\ E_{i}^{L,a,2}(\widehat\psi _{t}^a)&=& \bigg( \frac12 \mathbf{\widehat{z}}\times \xi \Big( {\rm div}\Big(\xi\,\big(\,\mathbf{\widehat{z}}\cdot{\rm curl}\,(\,\widehat{\mathbf{u}}^{a} + \frac{1}{Ro^{a}}\mathbf{R}(\mathbf{x}) \big)\Big) \,\,\Big) \\ &&\ \ \ \hspace{2mm}- \nabla \bigg( \xi\cdot\nabla\Big(\xi \cdot\big(\widehat{\mathbf{u}}^{a} + \frac{1}{Ro^{a}}\mathbf{R}(\mathbf{x})\big) \Big) \bigg), \\ &&\ \ \ \hspace{2mm}- \xi\cdot\nabla(\xi_i\cdot\nabla \widehat{\theta}^a) ) \bigg)^{T}. \end{eqnarray*} It is immediate that the pair $(\widehat{\psi} ^{a}, \psi ^{o})$ satisfies a system of equations of the form (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}). More precisely, the only difference between the system of equations satisfied by the pair $(\widehat\psi ^{a}, \psi ^{o})$ and the system of equations (\ref{COUPLED_SWE_VELOC_A})-(\ref{COUPLED_SWE_INCOMPRESS_O}) is the linear term $\frac{1}{2}\sum_{i=1}^{\infty }E_{i}^{L,a,2}(\widehat\psi _{t}^a)$. This term does not hinder (or help) the analysis of the system \eqref{eq:LACMSEE}+\eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH}, where, in \eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH} we replace $\mathbb{E}[\bar\psi ^{a}]$ by $\overline{\widehat{\psi}^{a}}:=\widehat{\psi} -\frac{1}{|\Omega |}\int_{\Omega }\widehat{\psi} dx$. As for the deterministic case in Theorem \ref{THM_DETERMINISTIC_LOCAL} we have the following \begin{theorem}\label{THM_EXPECTATION} Let $s\geq 2$ and suppose the initial condition of the system \eqref{eq:LACMSEE}+\eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH} satisfies $\psi_0=(\widehat{\psi}_0^a,\psi_0^o)=(\psi_0^a,\psi_0^o)\in \mathcal{H}^{s}(\Omega)$. Then there exists a unique time $t_{e,1}^{*}\in (t_{0}, \infty]$ such that a local regular solution $(\widehat\psi^a,\widehat\psi^o)$ of \eqref{eq:LACMSEE}+\eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH} in the sense of Definition \ref{REGULAR_SOL} exists and is unique on any interval $T:=[t_0,t_1],$ where $t_0<t_1<t_{e,1}^*$ and that, if $t_{e,1}^{*}< \infty$, then \begin{equation}\label{explosione} \lim_{t\nearrow t_{e,1}^{*}} ||(\widehat\psi^a,\widehat\psi^o)||_{\mathcal{H}^{s}}=\infty. 
\end{equation} \end{theorem} The fact that the equation satisfied by the coupled system $(\mathbb E [\psi ^{a}], \psi ^{o})$ has a closed form and, following Theorem \ref{THM_EXPECTATION}, has a local regular solution enables us to show that there exists a solution of the system (\ref{eq:LACMSE}) up to a time $t_{e,2}^{*}\in (t_{0}, \infty]$. We have the following \begin{theorem}\label{th:LAglobal} Let $s\geq 2$ and suppose the initial condition of the system (\ref{eq:LACMSE}) satisfies $\psi_0\in \mathcal{H}^{s}(\Omega)$. Then there exists a unique time $t_{e,2}^{*}\in (t_{0}, \infty]$, such that on any interval $T:=[t_0,t_1],$ where $t_0<t_1<t_{e,2}^*$, the system (\ref{eq:LACMSE}) has a unique solution with the property that \begin{equation}\label{pair0} \psi^a \in L^{2}\left( \Xi ;C(T;\mathbf{H}^{s-2,a}(\Omega)\times H^{s-2}(\Omega) \right) \cup L^{2}\left( \Xi ;L^{2}(T;\mathbf{H}^{s-1,a}(\Omega)\times H^{s-1}(\Omega) \right), \end{equation} \begin{equation}\label{pair} (\mathbb E[\psi^a],\psi^o) \in C(T;\mathcal{H}^{s}(\Omega ) \cup L^{2}(T;\mathcal{H}^{s+1}(\Omega ) . \end{equation}and that, if $t_{e,2}^{*}< \infty$, then \begin{equation}\label{explosionew} \lim_{t\nearrow t_{e,2}^{*}} ||(\mathbb E[\psi^a],\psi^o)||_{\mathcal{H}^{s}}=\infty. \end{equation} \end{theorem} \begin{remark}The loss of regularity in the atmospheric component $\psi^a$ as compared to the regularity of the pair $(\mathbb E[\psi^a],\psi^o)$ is an artifact of our proof. We use Theorems 1 and 2 in \cite{Rozovskii}, Chapter 4, to justify the existence of $\psi^a$ which require additional regularity on the coefficients. This can only be ensured by defining the solution in a lower Sobolev space. \end{remark} \begin{proof}$\left.\right.$\\ {\bf Existence.} From Theorem \ref{THM_EXPECTATION}, we have that there exists a solution of the system of equations \eqref{eq:LACMSEE}+\eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH} up to $t_{e,1}^{*}\in (t_{0}, \infty]$ and that, if $t_{e,1}^{*}< \infty$, then \eqref{explosione} holds. We consider next the system of equations (\ref{eq:LACMSE}) where we replace $\mathbb{E}[\psi ^{a}]$ by the ``atmospheric'' component $\widehat\psi^a$ which is part of the solution of the system \eqref{eq:LACMSEE}+\eqref{COUPLED_SWE_VELOC_O_STOCH}+\eqref{COUPLED_SWE_T_O_STOCH} . The resulting system $(\check\psi^a,\check\psi^o)$ is linear in the stochastic component $\check\psi^a$ and nonlinear in the deterministic component $\check\psi^o$. Moreover, the oceanic component $\check\psi^o$ is decoupled from the atmospheric component as we have replaced the dependence on the atmospheric component $\mathbb{E}[\bar\psi ^{a}]$ by $\overline{\widehat{\psi}^{a}}:=\widehat{\psi} -\frac{1}{|\Omega |}\int_{\Omega }\widehat{\psi} dx$. Similar to the proof of Theorem \ref{THM_DETERMINISTIC_LOCAL} we deduce that there exists a unique time $t_{e,2}^{*}\in (t_{0}, \infty]$, $t_{e,2}^{*}\le t_{e,1}^{*}$ such that on any interval $T:=[t_0,t_1],$ where $t_0<t_1<t_{e,2}^*$, the equation satisfied by the oceanic component $\check\psi^o$ has a unique solution with the property that \begin{equation*} \check\psi^o \in C(T;\mathbf{H}^{s,o}_{div}(\Omega)\times H^{s}(\Omega)) \cup L^{2}(T;\mathbf{H}^{s+1,o}_{div}(\Omega)\times H^{s+1}(\Omega)) . \end{equation*}and, if $t_{e,2}^{*}< t_{e,1}^{*}$, then \begin{equation}\label{explosiono} \lim_{t\nearrow t_{e,2}^{*}} ||\check\psi^o||_{\mathbf{H}^{s,o}_{div}(\Omega)\times H^{s}(\Omega)}=\infty. 
\end{equation} Crucially, we have that $t_{e,2}^{*}\le t_{e,1}^{*}$ (beyond $t_{e,1}^{*}$ the coefficients of the system satisfied by $(\check\psi^a,\check\psi^o)$ may not be defined, because of the blow-up exhibited in \eqref{explosione}). The linear equation satisfied by the \emph{stochastic component} $\check\psi^a$ is a particular case of the equation $(1.1)-(1.2)$ in Chapter 4, Section 4.1, pp.129 in \cite{Rozovskii}. It is easy to check that all assumptions required by Theorem 1 and Theorem 2 in \cite{Rozovskii}, Chapter 4, are fulfilled. Therefore the equation satisfied by the stochastic component $\check\psi^a$ has a unique solution defined on the same interval $[t_0,t_{e,2}^{*})$ such that, on any interval $T:=[t_0,t_1]$ with $t_0<t_1<t_{e,2}^*$, $\check\psi^a$ belongs to the space stated in \eqref{pair0}. Because of the linearity of the equation, no blow-up of $\check\psi^a$ is possible before $t_{e,2}^{*}$. Moreover, $\mathbb E[\check\psi^a]$ satisfies a deterministic linear equation which has a unique solution on the interval $[t_0,t_{e,2}^{*})$ in the same space as $\check\psi^a$. However, this equation is also satisfied by $\widehat\psi^a$. It follows that $\mathbb E[\check\psi^a]\equiv\widehat\psi^a$. Moreover, the pair $(\mathbb E[\check\psi^a],\check\psi^o)$ is a solution of the system (\ref{eq:LACMSE}) and it also satisfies \eqref{pair} (because $(\widehat\psi^a,\widehat\psi^o)$ does). If $t_{e,2}^{*}< t_{e,1}^{*}$, the blow-up at $t_{e,2}^{*}$ holds because of \eqref{explosiono}. If $t_{e,2}^{*}= t_{e,1}^{*}<\infty$, then the blow-up at $t_{e,2}^{*}$ holds because of \eqref{explosione}. Hence \eqref{explosionew} holds if $t_{e,2}^{*}<\infty$. {\bf \noindent Uniqueness}. Assume that we have another time $\tilde t_{e,2}^{*}\in (t_{0}, \infty]$, such that on any interval $T:=[t_0,t_1],$ where $t_0<t_1<\tilde t_{e,2}^*$, the system (\ref{eq:LACMSE}) has a unique solution $\tilde\psi$ with the property that \begin{equation}\label{pair0dd} \tilde\psi^a \in L^{2}\left( \Xi ;C(T;\mathbf{H}^{s-2,a}(\Omega)\times H^{s-2}(\Omega)) \right) \cap L^{2}\left( \Xi ;L^{2}(T;\mathbf{H}^{s-1,a}(\Omega)\times H^{s-1}(\Omega)) \right), \end{equation} \begin{equation}\label{pairdd} (\mathbb E[\tilde\psi^a],\tilde\psi^o) \in C(T;\mathcal{H}^{s}(\Omega )) \cap L^{2}(T;\mathcal{H}^{s+1}(\Omega )) , \end{equation}and that, if $\tilde t_{e,2}^{*}< \infty$, then \begin{equation}\label{explosionewt} \lim_{ t\nearrow \tilde t_{e,2}^{*}} ||(\mathbb E[\tilde\psi^a],\tilde\psi^o)||_{\mathcal{H}^{s}}=\infty. \end{equation} If $\tilde t_{e,2}^{*}< t_{e,2}^{*}$, then the system (\ref{eq:LACMSE}) has a unique solution in the interval $[t_0,\tilde t_{e,2}^{*}]$, hence $\tilde\psi=\psi$ on $[t_0,\tilde t_{e,2}^{*}]$ and it must be that \begin{equation}\label{explosionewtt} \lim_{ t\nearrow \tilde t_{e,2}^{*}} ||(\mathbb E[\tilde \psi^a],\tilde\psi^o)||_{\mathcal{H}^{s}}=\lim_{ t\nearrow \tilde t_{e,2}^{*}} ||(\mathbb E[\psi^a],\psi^o)||_{\mathcal{H}^{s}}<\infty, \end{equation} which contradicts \eqref{explosionewt}. Similarly, we cannot have $\tilde t_{e,2}^{*}> t_{e,2}^{*}$, so we must have $\tilde t_{e,2}^{*}= t_{e,2}^{*}$ and, by the local uniqueness of the system (\ref{eq:LACMSE}), we also get that $\tilde\psi=\psi$ on the maximal interval of existence.
\end{proof} From (\ref{eq:LACMSE}) and \eqref{eq:LACMSEE}, we can deduce the equation satisfied by the fluctuations of the system, i.e., \begin{equation*} \widetilde{\psi}_t:=\psi_t -\mathbb{E}\left[ \psi_t\right] =\psi_t-\widehat \psi_t, \ \ \ t_0\le t< t_{e,1}^{*}. \end{equation*}Then \begin{equation} \widetilde{\psi}_{t}=\widetilde{\psi}_{{t_{0}}}-\int_{{t_{0}}}^{t}\widetilde{F}^{L}(\psi _{s})ds-\sum_{i=1}^{\infty }\int_{{t_{0}}}^{t}E_{i}^{L}(\psi _{s})dW_{s}^{i}, \label{globalfluctuations} \end{equation}where $\widetilde{F}^{L}(\psi _{s})=F^{L}(\psi _{s})-\mathbb{E}\left[ F^{L}(\psi _{s})\right].$ Since only the atmospheric component is random, we can restrict (\ref{globalfluctuations}) to $\widetilde{\psi}_{t}^{a}:=(\mathbf{\widetilde{u}}_{t}^{a},\ \widetilde{\theta}_{t}^{a})^{T}$ to deduce that \begin{equation*} d\widetilde{\psi}_{t}^{a}=-\widetilde{F}^{L,a}\widetilde{\psi}_{t}^{a}dt-\sum_{i=1}^{\infty }E_{i}^{L,a}\psi _{t}^{a}dW_{t}^{i} \end{equation*} where \begin{eqnarray*} \widetilde{F}^{L,a}\widetilde{\psi}_{t}^{a} &=&B^{L,a}(\mathbf{\widetilde{u}}^{a},\ \widetilde{\theta}^{a})+C^{L,a}(\mathbf{\widetilde{u}}^{a},\ \widetilde{\theta}^{a})-\frac{1}{2}\sum_{i=1}^{\infty }\left( E_{i}^{L,a}\right) ^{2}(\mathbf{\widetilde{u}}^{a},\ \widetilde{\theta}^{a})-\nu \Delta (\mathbf{\widetilde{u}}^{a},\ \widetilde{\theta}^{a})^{T} \\ B^{L,a}\widetilde{\psi}_{t}^{a} &=&\Big(\mathbb{E}[\mathbf{u}^{a}]\cdot \nabla \mathbf{\widetilde{u}}^{a}+{\widetilde{u}}_{j}^{a}\nabla \mathbb{E}[\mathbf{u}^{a}]^{j},\ \mathbb{E}[\mathbf{u}^{a}]\cdot \nabla \widetilde{\theta}^{a}\Big)^{T}\\ &\simeq& \big(\mathcal{L}_{\mathbb{E}[\mathbf{u}^{a}]} (\mathbf{\widetilde{u}}^{a}\cdot d\mathbf{x})\,,\,\mathcal{L}_{\mathbb{E}[\mathbf{u}^{a}]}\widetilde{\theta}^{a}\big)^{T}\\ C^{L,a}\widetilde{\psi}_{t}^{a} &=&\left( \frac{1}{Ro^{a}}\nabla \widetilde{\theta}^{a},\ \gamma \widetilde{\theta}^{a}\right) ^{T} \simeq \left( \frac{1}{Ro^{a}}d\widetilde{\theta}^{a} ,\ \gamma \widetilde{\theta}^{a}\right) ^{T} \\ E_{i}^{L,a}\psi _{t}^{a} &=&(\xi _{i}\cdot \nabla \mathbf{u}^{a}+{u}_{j}^{a}\nabla \xi _{i}^{j},\ \xi _{i}\cdot \nabla \theta ^{a})^{T} \simeq \big(\mathcal{L}_{\xi _{i}} (\mathbf{u}^{a}\cdot d\mathbf{x})\,,\,\mathcal{L}_{\xi _{i}}\theta^{a}\big)^{T}\\ \left( E_{i}^{L,a}\right) ^{2}\widetilde{\psi}_{t}^{a} &=&\big(\xi _{i}\cdot \nabla \left( \xi _{i}\cdot \nabla \mathbf{\widetilde{u}}^{a}+{\widetilde{u}}_{j}^{a}\nabla \xi _{i}^{j}\right) +\left( \xi _{i}\cdot \nabla \widetilde{u}_j^{a}+{\widetilde{u}}_{k}^{a}\partial_j \xi _{i}^{k}\right) \nabla \xi _{i}^{j}\,\,\,, \ \xi_{i}\cdot \nabla ( \xi _{i}\cdot \nabla \widetilde{\theta}^{a}) \big)^{T} \\ &\simeq& \big( \mathcal{L}_{\xi _{i}}( \mathcal{L}_{\xi _{i}} (\mathbf{\widetilde{u}}^{a}\cdot d\mathbf{x}))\,,\,\mathcal{L}_{\xi _{i}}( \mathcal{L}_{\xi _{i}}\widetilde{\theta}^{a} )\big)^{T}.\end{eqnarray*} Here the symbol $\simeq$ recalls the geometric meanings of the coefficients appearing in the fluid equations above. Namely, the left component in the pairs $(\cdot\,,\,\cdot)$ above is understood as the Lie derivative of a 1-form, while the right component is understood as a scalar function. For more discussion of the geometric meanings of these equations, see equation \eqref{EPSD-geom-LASAM-Ito} in Appendix \ref{Appendix-LASALT}.
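As an aside (not part of the derivation above, and recorded here only to make the structure of the coefficients more transparent), the appearance of the operators $E_{i}^{L,a}$ and $\left( E_{i}^{L,a}\right)^{2}$ follows the usual Stratonovich-to-It\^o conversion for transport noise. For a single advected scalar, writing $\mathcal{L}_{\xi_i}\theta=\xi_i\cdot\nabla\theta$ and suppressing the drift terms, one has \begin{equation*} {\sf d}\theta + \sum_{i=1}^{\infty}\mathcal{L}_{\xi_i}\theta\circ dW_t^{i}=0 \qquad\Longleftrightarrow\qquad {\sf d}\theta + \sum_{i=1}^{\infty}\mathcal{L}_{\xi_i}\theta\, dW_t^{i} -\frac12\sum_{i=1}^{\infty}\mathcal{L}_{\xi_i}\big(\mathcal{L}_{\xi_i}\theta\big)\,dt=0\,, \end{equation*} so that, upon taking expectations, the martingale term drops out and the second-order drift $\frac12\sum_{i}\mathcal{L}_{\xi_i}^{2}$ survives; this is the role played by $\frac{1}{2}\sum_{i}E_{i}^{L,a,2}$ in \eqref{eq:LACMSEE} and by $\frac{1}{2}\sum_{i}\left( E_{i}^{L,a}\right)^{2}$ in the operator $\widetilde{F}^{L,a}$ above.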
Define next the variance of the atmospheric component $\Theta ^{a}=\left\{\Theta _{t}^{a},\ t\geq 0\right\} $ as \begin{equation*} \Theta _{t}^{a}=\mathbb{E}\left[ \left\vert \left\vert \widetilde{\psi}_{t}^{a}\right\vert \right\vert _{\mathcal{H}^{0}}^{2}\right] \,.\end{equation*}Then the previous set of equations implies the following dynamics of the variance: \begin{equation} \frac{d\Theta _{t}^{a}}{dt}=2\mathbb{E}\left[ \left\langle \widetilde{\psi}_{t}^{a},\widetilde{F}^{L,a}\widetilde{\psi}_{t}^{a}\right\rangle _{\mathcal{H}^{0}}\right] +\sum_{i=1}^{\infty }\mathbb{E}\left[ \left\vert \left\vert E_{i}^{L,a}\psi _{t}^{a}\right\vert \right\vert _{\mathcal{H}^{0}}^{2}\right] . \label{variancel} \end{equation}Formula (\ref{variancel}), along with the definitions from above, is important as it can be used to simulate the dynamics of the variance of the models. In other words, equation \eqref{variancel} enables one to compute the statistical dynamics of the deviations of the fluctuations of the weather that are consistent with the climatological expectation dynamics. In spite of the brevity of formula (\ref{variancel}), the variance of the fluctuations of the atmospheric component is influenced by all the components of the coupled system $\psi$, as can be seen from the explicit description of the operators $\widetilde{F}^{L,a}$ and $E_{i}^{L,a}$. This section has distinguished between the stochastic atmospheric weather model described by the SALT equations in \eqref{COUPLED_SWE_VELOC_A_STOCH} - \eqref{COUPLED_SWE_INCOMPRESS_O_STOCH}, as summarised formally in equation \eqref{eq:CMSE}, and the atmospheric expectation-fluctuation climate model described by the LA-SALT dynamical equations which are formulated in (\ref{eq:LACMSE}). The latter system of equations has led to the evolution equation \eqref{variancel}, which predicts how the evolution of the variances of the atmospheric fluctuations is affected by statistical correlations in their fluctuating dynamics. Our analysis for both the SALT and LA-SALT models has determined the local well-posedness properties for the dynamics of the corresponding physical variables. In this well-posed mathematical setting, we have shown that the LA-SALT expectation dynamics can combine with an intricate array of correlations in the fluctuation dynamics to determine the evolution of the mean statistics of the LA-SALT atmospheric climate model. \section{Summary conclusion and outlook}\label{Sec4} \begin{enumerate} \item We have shown that the Ocean-Atmosphere Climate Model (OACM) in equation set \eqref{COUPLED_SWE_VELOC_A} - \eqref{COUPLED_SWE_INCOMPRESS_O} analysed here for the well-known Gill-Matsuno class of models is simple enough to successfully admit the mathematical analysis required to prove the local well-posedness of these models. The physics underlying these models can be improved, of course. For example, one could naturally include heating by the Greenhouse Effect, and this heating would drive the statistical properties of the climate model. \item In addition to proving well-posedness for both the deterministic and stochastic OACM, we have developed a new tool for climate science for predicting the \emph{evolution of climate statistics} such as the variance.
Indeed, the application of LA-SALT to the OACM here has established a method for also predicting the evolution of climate statistics such as the expectation of tensor moments of the fluctuations. We believe the new tools introduced here in the context of LA-SALT show considerable promise for future applications. \item We also expect that the shared geometric structure of these OACM will facilitate the parallel development of their numerical simulations; for example, in undertaking stochastic ENSO simulations. These stochastic simulations would be excellent candidates for the Data Analysis, Uncertainty Quantification and Particle Filtering methods for Data Assimilation which are already under development for SALT. See, e.g., \cite{COTTERetal2019,COTTERetal2020a,COTTERetal2020b,COTTERetal2020c}. \end{enumerate} \appendix \section{Variational derivations of the deterministic and \\ stochastic atmospheric models} \label{sec: Appendix} \paragraph{Summary.} In this appendix we explain the Euler-Poincar\'e variational principle and use it to rederive the equations of the standard ideal 2D compressible atmospheric and incompressible oceanic flows. These ideal models separately conserve their corresponding energy and potential vorticity. Next, we couple these models using the reduced Lagrange-d'Alembert method. Finally, we introduce stochasticity into the atmospheric flow and derive the full stochastic SALT and LA-SALT climate models which we analyse in the text in both their Stratonovich and It\^o forms.\\ \subsection{Mathematical setting}\label{Appendix-GeoMech} \begin{definition}[Fluid trajectory] A fluid trajectory starting from $X\in M$ in the flow domain manifold $M$ at time $t=0$ is given by $x(t)=g_t(X)=g(X,t)$, with $g:M\times\mathbb{R}^+\to M$ being a smooth one-parameter submanifold (i.e., a curve parameterised by time $t$) in the manifold of diffeomorphisms acting on $M$, denoted ${\rm Diff}(M)$. In the deterministic case, computing the time derivative, i.e., the tangent to the curve with initial data $g(X,0)=X$ along $g_t(X)$, leads to the following \emph{reconstruction equation}, given by \begin{equation} {\partial_t}g_t(X) = u(g_t(X),t), \label{eq:reconstructiondeterministic} \end{equation} where $u_t(\,\cdot\,)=u(\,\cdot\,,t)$ is a time-dependent vector field whose flow, $g_t(\,\cdot\,)=g(\,\cdot\,,t)$, is defined by the characteristic curves of the vector field $u_t(\,\cdot\,)\in \mathfrak{X}(M)$. The vector fields in $\mathfrak{X}(M)$ comprise the Lie algebra associated to the class of time-dependent maps $g_t\in {\rm Diff}(M)$. \end{definition} \begin{definition}[Advected quantities and Lie derivatives] A fluid variable $a\in V^*$ defined in a vector space $V^*$ is said to be \emph{advected}, if it keeps its value $a_t=a_0$ along the fluid trajectories. Advected quantities are sometimes called \emph{tracers}, because the histories of scalar advected quantities with different initial values (labels) trace out the Lagrangian trajectories of each label, or initial value, via the \emph{push-forward} by the flow group, i.e., $a_t=g_{t\,*}a_0= a_0g_t^{-1}$, where $g_t\in {\rm Diff}(M)$ is the time-dependent curve on the manifold of diffeomorphisms whose action represents the evolution of the fluid trajectory by push-forward. 
An advected quantity $a_t$ satisfies an evolutionary partial differential equation (PDE) obtained from the time derivative of the pull-back relation $a_0 = g_t^*a_t$, as follows \[ 0 = {\partial_t}a_0 = {\partial_t}(g_t^*a_t )= g_t^*\big({\partial_t}a_t + \mathcal{L}_u a_t\big) \quad\Longrightarrow\quad {\partial_t}a_t + \mathcal{L}_u a_t = 0 \,, \] where $\mathcal{L}_{u_t} (\,\cdot\,)$ denotes the \emph{Lie derivative} by the vector field $u_t$ whose characteristic curves comprise the fluid trajectories. \end{definition} \begin{definition}[Stochastic advection by Lie transport (SALT)\cite{Holm2015}] In the setting of stochastic advection by Lie transport (SALT) the deterministic reconstruction equation in \eqref{eq:reconstructiondeterministic} is replaced by the semimartingale \begin{equation} {\sf d}g(X,t) = u(g_t(X),t)dt + \sum_{i=1}^M \xi_i(g_t(X))\circ dW_t^i, \label{eq:reconstructionstochastic} \end{equation} where the symbol $\circ$ means that the stochastic integral is taken in the Stratonovich sense. The initial data is given by $g(X,0)=X$. The $W_t^i$ are independent, identically distributed Brownian motions, defined with respect to the standard stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$. The $\xi_i(\,\cdot\,)\in\mathfrak{X}$ are prescribed vector fields which are meant to represent the uncertainty in the advection due to unknown effects with rapid time dependence. A stochastically advected quantity $a_t$ satisfies an evolutionary stochastic partial differential equation (SPDE) obtained as a semimartingale relation via the pull-back relation $a_0 = g_t^*a_t$, as follows \begin{equation} 0 = {\sf d} a_0 = {\sf d}(g_t^*a_t )= g_t^*\big({\sf d} a_t + \mathcal{L}_{{\sf d}x_t} a_t\big) \quad\Longrightarrow\quad {\sf d} a_t + \mathcal{L}_{{\sf d}x_t} a_t = 0 \,, \label{eq:KIWformula} \end{equation} where $\mathcal{L}_{{\sf d}x_t} (\,\cdot\,)$ denotes the \emph{Lie derivative} by the vector field ${\sf d}x_t$ whose characteristic curves comprise the stochastic fluid trajectories in equation \eqref{eq:reconstructionstochastic}. \end{definition} \begin{remark}[Kunita-It\^o-Wentzell (KIW) formula.] Equation \eqref{eq:KIWformula} is called the \emph{KIW formula}, after its discovery by Kunita as an extension of the It\^o-Wentzell formula to define a stochastic Lie derivative for tensors and differential $k$-forms. For references and a discussion of its recent role in stochastic advection for fluid dynamics, see \cite{AdLHLT2020}. \end{remark} \begin{remark}[The deterministic limit.] In what follows, any of the stochastic fluid equations derived from the Euler-Poincar\'e variational approach will reduce to the corresponding deterministic fluid equations by simply setting $\xi_i\to0$ in the reconstruction equation for the stochastic fluid trajectory in \eqref{eq:reconstructionstochastic}. \end{remark} \begin{definition}[The diamond operator] The \emph{diamond operator} is defined for $a\in V^*$, $u\in\mathfrak{X}$ and fixed $b\in V$ as \begin{equation} \langle b\diamond a, u\rangle_{\mathfrak{X}^{*}\times\mathfrak{X}} := -\langle b,\mathcal{L}_u a\rangle_{V^*\times V}. \end{equation} Here, $\langle\,\cdot\,,\,\cdot\,\rangle_{\mathfrak{X}^{*}\times\mathfrak{X}}$ and $\langle\,\cdot\,,\,\cdot\,\rangle_{V^{*}\times V}$ denote the real-valued, non-degenerate, symmetric pairings between corresponding dual spaces, which can be defined on a case-by-case basis.
The diamond operator provides a map dual to the Lie derivative, as $\mathcal{L}_{(\,\cdot\,)}b:\mathfrak{X}\to V$ and $b\diamond(\,\cdot\,):V^*\to\mathfrak{X}^{*}$. This duality is crucial in defining the Euler-Poincar\'e variational principle. \end{definition} \begin{definition}[The variational derivative] The variational derivative of a functional $F:\mathcal{B}\to\mathbb{R}$, where $\mathcal{B}$ is a Banach space, is denoted $\delta F/\delta \rho$ with $\rho\in\mathcal{B}$. The variational derivative $\delta F/\delta \rho$ can be defined via the linearisation of the functional $F$ with respect to the following infinitesimal deformation \begin{equation} \delta F[\rho]:= \frac{d}{d\epsilon}\Big|_{\epsilon=0} F[\rho+\epsilon \delta\rho] = \int \frac{\delta F}{\delta \rho}(x)\delta\rho(x)\,dx =: \left\langle\frac{\delta F}{\delta \rho},\delta \rho\right\rangle. \end{equation} In the definition above, $\epsilon\in\mathbb{R}$ with $\epsilon\ll1$ is a parameter, $\delta\rho\in\mathcal{B}$ is an arbitrary function and the variation can be understood as a Fr\'echet derivative. With the definition of the functional derivative in place, the following theorem can be formulated. \end{definition} \begin{theorem}[Stochastic Euler-Poincar\'e theorem]\label{thm:SEP} With the notation as above, the following statements are equivalent. \begin{enumerate}[i)] \item The constrained variational principle \begin{equation} \delta\int_{t_1}^{t_2}\ell(u,a)\,dt = 0 \end{equation} holds on $\mathfrak{X}\times V^*$, using variations $\delta u$ and $\delta a$ of the form \begin{equation} \delta u = {\sf d}w - [{\sf d}x_t,w], \qquad \delta a = -\mathcal{L}_w a, \label{EPstoch-var} \end{equation} where $w(t)\in \mathfrak{X}$ is arbitrary and vanishes at the endpoints in time for arbitrary times $t_1,t_2$. \item The stochastic Euler-Poincar\'e equations \begin{equation} {\sf d}\frac{\delta \ell}{\delta u} + \mathcal{L}_{{\sf d}x_t}\frac{\delta \ell}{\delta u} = \frac{\delta \ell}{\delta a}\diamond a\,dt\,, \label{eq:stochep} \end{equation} hold on $\mathfrak{X}^*$ and the stochastic advection equations \begin{equation} {\sf d}a + \mathcal{L}_{{\sf d}x_t}a = 0\,, \label{eq:stochadv} \end{equation} hold on $V^*$. \end{enumerate} \end{theorem} \begin{proof} Using integration by parts and the endpoint conditions $w(t_1)=0=w(t_2)$, the variation can be computed to be \begin{equation} \begin{aligned} \delta\int_{t_1}^{t_2}\ell(u,a)\,dt &= \int_{t_1}^{t_2}\left\langle\frac{\delta\ell}{\delta u},\delta u\right\rangle + \left\langle\frac{\delta\ell}{\delta a},\delta a\right\rangle\,dt\\ &= \int_{t_1}^{t_2}\left\langle\frac{\delta\ell}{\delta u},{\sf d}w-[{\sf d}x_t,w]\right\rangle + \left\langle\frac{\delta\ell}{\delta a}\,dt,-\mathcal{L}_w a\right\rangle\\ &= \int_{t_1}^{t_2}\left\langle -{\sf d}\frac{\delta\ell}{\delta u} - \mathcal{L}_{{\sf d}x_t}\frac{\delta\ell}{\delta u} + \frac{\delta\ell}{\delta a}\diamond a\,dt,w\right\rangle\\ &= 0\,. \end{aligned} \label{eq:StochEPeqns} \end{equation} Since the vector field $w$ is arbitrary, one obtains the stochastic Euler-Poincar\'e equations. Finally, the advection equation \eqref{eq:stochadv} follows by applying the KIW formula to $a(t)=g_{t*}a_0$. \end{proof} \begin{remark} This version of the stochastic Euler-Poincar\'e theorem is equivalent to the version presented in \cite{Holm2015}, which uses stochastic Clebsch constraints. In \cite{Holm2015} one can also find an investigation of the It\^o formulation of the stochastic Euler-Poincar\'e equation.
\end{remark} \begin{theorem}[Stochastic Kelvin-Noether Theorem] \label{thm:Kelvin} Let $c$ denote a compact embedded one-dimensional smooth submanifold of $M$ and denote $c_t=g_t(c)$ for all $t\in [0,T]$. If the mass density $D_0$ (a top-form) is initially non-vanishing, then \[ {\sf d}\oint_{c_t}\frac{1}{D_t}\frac{{\delta} \ell}{{\delta} u}(u_t,a_t) = \oint_{c_t}\frac{1}{D_t}\frac{{\delta} \ell}{{\delta} a}(u_t,a_t)\diamond a_t\,. \] The integrated form of this relation is \[ \oint_{c_t}\frac{1}{D_t}\frac{{\delta} \ell}{{\delta} u}(u_t,a_t) =\oint_{c_0}\frac{1}{D_0}\frac{{\delta} \ell}{{\delta} u}(u_0,a_0) + \int_0^t\oint_{c_s}\frac{1}{D_s}\frac{{\delta} \ell}{{\delta} a}(u_s,a_s)\diamond a_s \rmd s. \] \end{theorem} \begin{proof} The KIW formula \eqref{eq:KIWformula} -- also known as the \emph{Lie chain rule} -- implies that \begin{align*} {\sf d}\oint_{c_t}\frac{1}{D_t}\frac{{\delta} \ell}{{\delta} u}(u_t,a_t) &= \oint_{c_0} {\sf d}\, g_t^*\Big(\frac{1}{D_t} \frac{\delta\ell}{\delta u}\Big) = \oint_{c_0} g_t^*\bigg(\big({\sf d} + \mathcal{L}_{{\sf d}x_t} \big)\Big(\frac{1}{D_t} \frac{\delta\ell}{\delta u}\Big)\bigg) \\&= \oint_{c_t} \big({\sf d} + \mathcal{L}_{{\sf d}x_t} \big)\Big(\frac{1}{D_t} \frac{\delta\ell}{\delta u}\Big) = \oint_{c_t}\frac{1}{D_t}\frac{{\delta} \ell}{{\delta} a}(u_t,a_t)\diamond a_t\,, \end{align*} where the first step also uses the stochastic advection equation for $D_t$ in \eqref{eq:stochadv} and the final step follows by substituting the stochastic Euler-Poincar\'e equations in \eqref{eq:stochep}. \end{proof} \subsection{2D SALT Stochastic Atmospheric Model (SAM) }\label{Appendix-SALT} \paragraph{Summary.} This part of the appendix derives the SALT atmospheric model (SAM) for the isothermal ideal gas in equations \eqref{Atmos-SALT-Kel} and \eqref{Stoch_THETA_A}. \\ In the present notation, the Lagrangian for the deterministic 2D Compressible Atmospheric Model (DAM) in Eulerian $(x,y)$ coordinates is, \begin{align} \ell\big[\bs{u}, D,\theta \big] = \int_{\Omega} \frac{D}{2} |\bs{u}|^2 + D \bs{u}\cdot\bs{R}(\bs{x}) - c_vD\theta\Pi \,dx\,dy, \label{CAM-Lag} \end{align} where $\bs{u}$ denotes 2D fluid velocity, $D$ is mass density, $\theta$ denotes the potential temperature, the function $\bs{R}(\bs{x})$ with ${\rm curl}\, \bs{R}=2\bs{\Omega}$ denotes the vector potential for the Coriolis parameter, $c_v$ is specific heat at constant volume, and $\Pi$ is the well-known Exner function, given by \[ \Pi = \left( \frac{p}{p_0} \right)^{R/c_p}, \] in which $p_0$ is a reference pressure level, $c_p$ is specific heat at constant pressure and $R=c_p-c_v$ is the gas constant. In these variables, the equation of state for an ideal gas in 2D with $n$ degrees of freedom is expressed as \[ \Pi = \left(\frac{RD\theta}{p_0}\right)^{R/c_p} = \left(\frac{RD\theta}{p_0}\right)^{1-\gamma^{-1}} = \left(\frac{RD\theta}{p_0}\right)^{2/(n+2)} \,, \] since the specific heat ratio $\gamma=c_p/c_v = 1+2/n$ for ideal gases whose molecules possess $n$ degrees of freedom, comprising spatial translations, rotations and oscillations. Ideal gases of diatomic molecules in 3D have three translational and two rotational degrees of freedom (their vibrational modes being neglected), so $n=5$ and $\gamma=7/5$ in that case.
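Before specialising the Lagrangian in the next display, it may be helpful to record the elementary computation behind the constants that appear there; this is only a consistency check and introduces nothing beyond the equation of state above. Substituting $\Pi=(RD\theta/p_0)^{2/(n+2)}$ into the internal-energy term of \eqref{CAM-Lag} gives \begin{equation*} c_v D\theta\,\Pi = c_v D\theta\left(\frac{R D\theta}{p_0}\right)^{2/(n+2)} = c_v\left(\frac{R}{p_0}\right)^{2/(n+2)}(D\theta)^{1+\frac{2}{n+2}} = \kappa\,(D\theta)^{\alpha}, \end{equation*} with $\kappa=c_v (R/p_0)^{2/(n+2)}$ and $\alpha=1+\tfrac{2}{n+2}=\tfrac{n+4}{n+2}=2-\gamma^{-1}$, since $\gamma^{-1}=n/(n+2)$.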
Accordingly, the Lagrangian in \eqref{CAM-Lag} specialises for an ideal gas in 2D to \begin{align} \ell\big[\bs{u}, D,\theta \big] = \int_{\Omega} \frac{D}{2} |\bs{u}|^2 + D \bs{u}\cdot\bs{R}(\bs{x}) - \kappa (D\theta)^\alpha \,dx\,dy, \label{2DAM-Lag} \end{align} where the constants $(\kappa,\alpha)$ take the values, \[ \kappa=c_v (R/p_0)^{2/(n+2)} \quad\hbox{and}\quad \alpha = \frac{n+4}{n+2} = 1 + \frac{2}{n+2} = 2 -\gamma^{-1} \,. \] We obtain the following variational derivatives of the Lagrangian in \eqref{2DAM-Lag}, \begin{align} \begin{split} \frac1D\dede{\ell}{\bs{u}} &= \bs{u} + \bs{R}(\bs{x}) \,, \\ \dede{\ell}{D} & = \frac{1}{2} |\bs{u}|^2 + \bs{u}\cdot\bs{R}(\bs{x}) - \kappa\alpha(D\theta)^{\alpha-1}\theta \,, \\ \dede{\ell}{\theta} &= -\, \kappa\alpha(D\theta)^{\alpha-1}D \,. \end{split} \label{CAM-Lag-vars} \end{align} Substitution of the variational derivatives (\ref{CAM-Lag-vars}) of the Lagrangian (\ref{2DAM-Lag}) into the stochastic Euler-Poincar\'e equations with SALT in \eqref{eq:stochep} gives the SAM system \begin{align} \begin{split} \left({\sf d} + \mathcal{L}_{{\sf d}x_t}\right) \left( \big( \bs{u} + \bs{R}(\bs{x}) \big) \cdot d\bs{x} \right) &= \,{d\,}\left(\frac{1}{2} |\bs{u}|^2 + \bs{u}\cdot\bs{R}(\bs{x}) -(\kappa/\gamma) (D\theta)^{\alpha} \right) dt \,,\\ ({\sf d} + \mathcal{L}_{{\sf d}x_t}) (D\theta \,dxdy) & = 0 \,. \end{split} \label{EPSD-geom-CAM} \end{align} Consequently, we recover the Kelvin circulation conservation law for the SAM in the compact form \begin{align} {\sf d}\oint_{c_t}\hspace{-2mm} \big( \bs{u} + \bs{R}(\bs{x}) \big) \cdot d \bs{x} = 0 \,, \label{SAM-circons} \end{align} where $c_t=g_t(c_0)$ for all $t\in [0,T]$ denotes the push-forward by the SAM flow of the initial $c_0$, a compact embedded one-dimensional smooth submanifold of $M$. \begin{corollary}\label{CAM-PV} The system of SAM equations in \eqref{EPSD-geom-CAM} implies that potential vorticity $q:= \omega / (D\theta)$ is conserved along flow lines of the stochastic fluid trajectory ${\sf d}x_t$, \begin{align} {\sf d} q +{\sf d}\bs{x}_t \cdot\nabla q = 0 \quad\hbox{with potential vorticity }q:= \omega / (D\theta) \quad\hbox{and} \quad \omega := \bs{\widehat{z}}\cdot{\rm curl}\big( \bs{u} + \bs{R}(\bs{x}) \big) \,. \label{SAM-vorticitythm} \end{align} In turn, this formula implies that the following infinite family of integral quantities is conserved \begin{align} C_\Phi = \int_\Omega (D\theta)\Phi(q)\,dxdy \,, \label{SAM-enstrophy-thm} \end{align} for any differentiable function $\Phi$. \end{corollary} \begin{corollary} The \emph{Deterministic} AM equations in (\ref{EPSD-geom-CAM}) with $\bs{\xi}_i\to0$ are Hamiltonian, with conserved energy\footnote{The energy $E$ in \eqref{SAM-erg} is not conserved for $\bs{\xi}_i\ne0$, though, because $\bs{\xi}_i\ne0$ injects the explicit time dependence of stochastic Lagrangian trajectories into the Euler-Poincar\'e variations \eqref{EPstoch-var} in Hamilton's principle.} \begin{align} E = \int_{\Omega} \frac{D}{2} |\bs{u}|^2 + \kappa (D\theta)^\alpha \,dxdy. \label{SAM-erg} \end{align} The Lie-Poisson Hamiltonian structure of deterministic fluid equations is discussed in \cite{HMR1998} from the viewpoint of the Euler-Poincar\'e formulation of Hamilton's principle for fluid dynamics.
\end{corollary} \begin{remark} The system of SAM equations in \eqref{EPSD-geom-CAM} may also be written equivalently in \emph{standard} fluid dynamics notation as \begin{align} \begin{split} {\sf d} \bs{u} + {\sf d}\bs{x}_t \cdot\nabla \bs{u} - {\sf d}\bs{x}_t \times 2 \bs{\Omega} + \sum_{i=1}^M \Big( u_j \nabla \xi_i^j + \nabla \big(\bs{\xi}_i \cdot \bs{R}\big)\Big) \circ dW_t^i &= -\,(\kappa/\gamma) \nabla (D\theta)^\alpha dt \,, \\ {\sf d} (D\theta) + \nabla \cdot \big( D\theta\, {\sf d}\bs{x}_t \big) & = 0 \,.\end{split} \label{Eady-EPSDeqns2} \end{align} \end{remark} If one assumes low Mach number, so that $D\approx1$, and then adds viscosity and diffusion of heat, this set of equations will reproduce the SALT atmospheric model in equations \eqref{Atmos-SALT-Kel} and \eqref{Stoch_THETA_A} when one also sets $\alpha=1=\gamma$, which holds for the isothermal case of the ideal gas. \begin{thebibliography}{99} \bibitem{AdLHT2020} D. Alonso-Or\'an, A. Bethencourt de Le\'on, D. D. Holm \& S. Takao, Modelling the Climate and Weather of a 2D Lagrangian-Averaged Euler--Boussinesq Equation with Transport Noise. J. Stat. Phys. 179, 1267-1303 (2020). \url{https://doi.org/10.1007/s10955-019-02443-9} \bibitem{Arnold2001} L. Arnold, Hasselmann's program revisited: the analysis of stochasticity in deterministic climate models, In: P. Imkeller, JS von Storch (eds) Stochastic Climate Models. Progress in Probability, vol 49. Birkh\"auser, Basel. \url{https://doi.org/10.1007/978-3-0348-8287-3_5} \bibitem{AdLHLT2020} A. Bethencourt de L\'eon, D. D. Holm, E, Luesink, S. Takao.\\ Implications of Kunita--It\^o--Wentzell Formula for k-Forms in Stochastic Fluid Dynamics. J. Nonlin. Sci. (2020) 30:1421-1454, \url{https://doi.org/10.1007/s00332-020-09613-0} \bibitem{BJERKNES_1969} J. Bjerknes, Atmospheric teleconnections from the equatorial Pacific, Mon. Weather Rev., 97, 163-172, 1969. \bibitem{CANE_1985} M.A. Cane, S. E. Zebiak, A theory for El Nino and the Southern Oscillation, Science, 228, 1084-1087, 1985. \bibitem{CANE_1986} M.A. Cane, S. E. Zebiak, S.C. Dolan, Experimental forecasts of El Nino, Nature, 321, 827-832, 1986. \bibitem{COTTERetal2019} C. Cotter, Dan Crisan. D. D. Holm, W. Pan, I. Shevchenko, Numerical Modelling Stochastic Lie Transport in Fluid Dynamics, Multiscale Model. Simul., 17, 192-232, 2019 \bibitem{COTTERetal2020a} C Cotter, D Crisan, DD Holm, W Pan, I Shevchenko, A Particle Filter for Stochastic Advection by Lie Transport: A Case Study for the Damped and Forced Incompressible Two-Dimensional Euler Equation, SIAM/ASA Journal on Uncertainty Quantification 8 (4), 1446-1492, 2020 \bibitem{COTTERetal2020b} C Cotter, D Crisan, D Holm, W Pan, I Shevchenko, Data assimilation for a quasi-geostrophic model with circulation-preserving stochastic transport noise, Journal of Statistical Physics 179 (5), 1186-1221, 2020. \bibitem{COTTERetal2020c} C Cotter, D Crisan, D Holm, W Pan, I Shevchenko, Modelling uncertainty using stochastic transport noise in a 2-layer quasi-geostrophic model, Foundations of Data Science 2 (2), 173, 2020. \bibitem{cfh} D. Crisan, F. Flandoli, D. Holm, Solution properties of a 3D stochastic Euler fluid equation, J. Nonlinear Sci. 29 (2019), no. 3, 813-870. \bibitem{DIJKSTRA_BOOK} H. A. Dijkstra, Nonlinear Physical Oceanography, Springer, 2005 \bibitem{DH2020} T. D. Drivas and D. D. Holm, Circulation and Energy Theorem Preserving Stochastic Fluids. Proc. Roy. Soc. Edinburgh A: Mathematics, 150 (6) 2776-2814 \url{https://doi.org/10.1017/prm.2019.43} \bibitem{DHL2020} T.D. 
Drivas, D. D. Holm \& J. Leahy, Lagrangian averaged stochastic advection by Lie transport for fluids, J. Stat. Phys. 179, 1304-1342 (2020) \url{https://doi.org/10.1007/s10955-020-02493-4} \bibitem{DUTTON_BOOK} J.A. Dutton, The Ceaseless Wind, Dover, 1986 \bibitem{Evans} L. C. Evans: Partial Differential Equations, American Mathematical Society, 1998. \bibitem{FoiasConstantin} P. Constantin, C. Foias, Navier-Stokes Equations, University of Chicago Press, 1988 \bibitem{FrankignoulHasselmann1977} C. Frankignoul and K. Hasselmann, 1977. Stochastic climate models, Part II Application to sea-surface temperature anomalies and thermocline variability. Tellus, 29(4), pp.289-305. \bibitem{GILL} A. E. Gill, Some Simple Solutions for heat-induced tropical circulation, Q. J. R. M. S. {\bf 106}, 447-462, (1980) \bibitem{HANEY} R.L. Haney, Surface Thermal Boundary Conditions for Ocean Circulation Models, J. Phys. Ocean. {\bf 1} 241-248, (1971) \bibitem{Hasselmann1976} K. Hasselmann, Stochastic Cllmate Models. Part 1. Theory, Tellus 28, 473-485, 1976 \bibitem{Holm2015} D. D. Holm, Variational principles for stochastic fluid dynamics, Proc Roy Soc A, 471: 20140963, 2015. \url{http://dx.doi.org/10.1098/rspa.2014.0963} \bibitem{Kac} M. Kac., Probability and Related Topics in Physical Sciences, volume 1. American Mathematical Soc., 1959. \bibitem{HMR1998} D. D. Holm, J. E. Marsden and T. S. Ratiu, The Euler--Poincar\'e equations and semidirect products with applications to continuum theories, {\it Adv. in Math.}, {\bf 137} (1998) 1-81, \url{https://doi.org/10.1006/aima.1998.1721} \bibitem{KlainermanMajda} S. Klainermann, A. Majda, Singular limits of quasilinear hyperbolic systems with large parameters and the incompressible limit of compressible fluids, Communications on Pure and Applied Mathematics, {\bf 34} (1981), 481-524 \bibitem{Kleemann_2008} Stochastic theories for the irregularity of ENSO, Phil. Trans. R. Soc. A (2008) 366, 2511-2526 \bibitem{kurtzprotter} T. G. Kurtz and P. Protter. Weak limit theorems for stochastic integrals and stochastic differential equations. Ann. Probab., 19(3):1035 - 1070, 1991. \bibitem{LauChan_1985} K.M. Lau, P.H. Chan, The 40-50 day oscillation and the El Nino-Southern Oscilation: A new perspective. Bull. Amer. Meteor. Soc., 67, 533-534, 1985 \bibitem{LauShen_1988} K.M. Lau, S. Shen, On the dynamics of intraseasonal oscillations and ENSO. J. Atmos. Sci., 45, 1781-1797, 1988 \bibitem{Lorenz1995}Lorenz, E.N.: Climate is what you expect. (unpublished) (1995). \url{http://eaps4.mit.edu/research/Lorenz/Climate_expect.pdf}. \bibitem{MATSUNO} T. Matsuno, Quasi-Geostrophic Motions in the Equatorial Area, J. Met. Soc. Japan, {\bf 43} 25-43, (1966) \bibitem{MajdaTimoEinden} A. J. Majda, I. Timofeyev, E. Vanden Eijnden, A Mathematical Framework for Stochastic Climate Models, Comm. Pure Appl. Math. {\bf 54}, 891-974, 2001 \bibitem{Mazja} V. Maz'ya, Sobolev spaces: with applications to elliptic partial differential equations, Grundlehren der mathematischen Wissenschaften {\bf 341}, Springer, 2011 \bibitem{McKean}H. P. McKean Jr., A class of {M}arkov processes associated with nonlinear parabolic equations. Proceedings of the National Academy of Sciences of the United States of America, 56(6):1907, 1966 \bibitem{ENSO_THEORY} J. D. Neelin, D. S. Battisti, A. C. Hirst, F.-F. Jin, Y. Wakata, T. Yamagata, S. E. Zebiak, ENSO Theory, J. Geophys. Res., {\bf 103}, 14,261-14,290, 1998 \bibitem{Palmer_2019} T. 
Palmer, Stochastic weather and climate models, Nat Rev Phys 1, 463-471 (2019) \bibitem{Perez_2005} C. L. Perez, A. M. Moore, J. Zavala-Garay, R. Kleeman, A comparison of the influence of additive and multiplicative stochastic forcing on a coupled model of ENSO. J. Clim. 18, 5066-5085, 2005 \bibitem{Rozovskii} R. L. Rozovskii, Stochastic Evolution Systems, Kluwer Academic Publishers, 1990. \bibitem{PZ} G. Da Prato, J Zabczyk, Stochastic equations in infinite dimensions. Encyclopedia of Mathematics and its Applications, 44. Cambridge University Press, Cambridge, 1992. \bibitem{Rockner} {\ R\"{o}ckner, M., Schmuland, B., Zhang, X., Yamada-Watanabe theorem for stochastic evolution equations in infinite dimensions, Condensed Matter Physics 2008, Vol. 11, No 2(54), pp. 247-259.} \bibitem{sas} A-S Sznitman, Topics in propagation of chaos. Ecole d'Ete de Probabilites de Saint-Flour XIX-1989, 165-251, Lecture Notes in Math., 1464, Springer, Berlin, 1991. \bibitem{Vallis} G. Vallis, Atmospheric and Oceanic Fluid Dynamics, Cambridge University Press, 2nd ed. 2017. \bibitem{Vlasov} A. A. Vlasov., The vibrational properties of an electron gas. Soviet Physics Uspekhi, 10(6):721, 1968. \bibitem{ZEBIAK1982} S. Zebiak, A Simple Atmospheric Model of Relevance to El Nino, J. Atmos. Sci. {\bf 39}, 2017-2027, (1982) \bibitem{ZEBIAK1986} S. Zebiak, Atmospheric Convergence Feedback in A Simple Model for El Nino, Mon Wea. Rev. {\bf 114}, 1263-1271, (1986) \bibitem{CANE_ZEBIAK} S. Zebiak, M. Cane, A Model El Nino-Southern Oscillation, Mon Wea. Rev. {\bf 115}, 2262-2278, (1987) \end{thebibliography} \end{document}
2205.04456v1
http://arxiv.org/abs/2205.04456v1
A quadratically enriched count of lines on a degree 4 del Pezzo surface
\documentclass[11pt, oneside]{article} \usepackage{geometry} \geometry{letterpaper} \usepackage{graphicx} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{enumerate} \usepackage{tikz-cd} \usepackage{mathrsfs} \usepackage{bbm} \usepackage{cite} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{dfn}{Definition} \newtheorem{rmk}{Remark}[section] \newtheorem{hw}{Problem} \newtheorem{conv}{Convention} \newtheorem{for}{Formula} \DeclareMathOperator{\msh}{mesh} \DeclareMathOperator{\Exp}{Exp} \DeclareMathOperator{\injrad}{injrad} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\GCurv}{GCurv} \DeclareMathOperator{\MCurv}{MCurv} \DeclareMathOperator{\Area}{Area} \DeclareMathOperator{\length}{length} \DeclareMathOperator{\two}{II} \DeclareMathOperator{\Sym}{Sym} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\vol}{vol} \DeclareMathOperator{\spn}{span} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\range}{range} \DeclareMathOperator{\colim}{colim} \DeclareMathOperator{\module}{-mod} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\op}{op} \DeclareMathOperator{\Set}{Set} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\ev}{ev} \DeclareMathOperator{\pre}{pre} \DeclareMathOperator{\premod}{-premod} \DeclareMathOperator{\Vect}{Vect} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\re}{Re} \DeclareMathOperator{\im}{Im} \DeclareMathOperator{\ad}{ad} \DeclareMathOperator{\Ad}{Ad} \DeclareMathOperator{\Aut}{Aut} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\fun}{fun} \DeclareMathOperator{\Nil}{Nil} \DeclareMathOperator{\adj}{adj} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\ind}{ind} \DeclareMathOperator{\Jac}{Jac} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\Lines}{Lines} \DeclareMathOperator{\mult}{mult} \title{A quadratically enriched count of lines on a degree 4 del Pezzo surface.} \author{Cameron Darwin} \date{} \begin{document} \maketitle \abstract{ Over an algebraically closed field $k$, there are 16 lines on a degree 4 del Pezzo surface, but for other fields the situation is more subtle. In order to improve enumerative results over perfect fields, Kass and Wickelgren introduce a method analogous to counting zeroes of sections of smooth vector bundles using the Poincar{\'e}-Hopf theorem in \cite{index}. However, the technique of Kass-Wickelgren requires the enumerative problem to satisfy a certain type of orientability condition. The problem of counting lines on a degree 4 del Pezzo surface does not satisfy this orientability condition, so most of the work of this paper is devoted to circumventing this problem. We do this by restricting to an open set where the orientability condition is satisfied, and checking that the count obtained is well-defined, similarly to an approach developed by Larson and Vogt in \cite{larsonvogt}. } \section{Introduction} \begin{conv} Throughout, we will assume that $k$ is a perfect field of characteristic not equal to 2. In statements of propositions, this will be explicitly reiterated when needed. \end{conv} There are 16 lines on a smooth degree 4 del Pezzo surface $\Sigma$ over an algebraically closed field $k$ of characteristic not equal to 2---that is to say, there are 16 linear embeddings $\mathbb{P}^1_k \to \Sigma$ up to reparametrization. 
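Before discussing what happens over a general field, it may be worth recalling (as a standard aside, not needed for the arguments below) one way to see the number 16: over an algebraically closed field of characteristic not equal to 2, a smooth degree 4 del Pezzo surface is the blow-up of $\mathbb{P}^2_k$ at 5 points in general position, and its lines are exactly the $(-1)$-curves, namely the 5 exceptional curves, the strict transforms of the $\binom{5}{2}$ lines through pairs of the 5 points, and the strict transform of the conic through all 5 points, so that \[ 5 + \binom{5}{2} + 1 = 5 + 10 + 1 = 16. \]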
When $k$ is not algebraically closed, the situation is more subtle. For starters, one must allow ``lines'' to include linear embeddings $\mathbb{P}^1_{k'} \to \Sigma$, for finite extensions $k'/k$. Moreover, there may not be 16 such embeddings. To see why, it is useful to recall how the count is done. A common strategy for solving enumerative problems is linearization---that is, one seeks to express the solution set as the zero locus of a section of a vector bundle $E$ over some ambient moduli space $X$. In the case of counting lines on a degree 4 del Pezzo, $X$ is $\Gr_k(2,5)$, the Grassmannian of lines in $\mathbb{P}^4_k$, and $E$ is $\Sym^2(S^\vee)\oplus\Sym^2(S^\vee)$, where $S$ is the canonical subplane bundle over $\Gr_k(2,5)$. $\Sigma$ can be written as the complete intersection of two quadrics $f_1$ and $f_2$ in $\mathbb{P}^4$ (pg. 100 of \cite{wittenberg}). Composing a line $\mathbb{P}^1_{k'} \to \Sigma$ with the embedding $\Sigma = Z(f_1, f_2) \to \mathbb{P}^4_k$ determines a linear embedding $\mathbb{P}^1_{k'} \to \mathbb{P}^4_k$, which can itself be identified with a closed point in $\Gr_k(2,5)$ with residue field $k'$. To identify which closed points in $\Gr_k(2,5)$ correspond to lines on $\Sigma$, one notices that for each line in $\mathbb{P}^4_k$, i.e. each linear embedding $L : \mathbb{A}^2_{k'} \to \mathbb{A}^5_k$, $f_1$ and $f_2$ pull back to degree 2 polynomials on $\mathbb{A}^2_{k'}$, i.e. to elements of $\Sym^2(S_L^\vee)$. Thus $f_1$ and $f_2$ determine two sections, $\sigma_1$ and $\sigma_2$ respectively, of $\Sym^2(S^\vee)$, and the set of lines on $\Sigma$ is precisely the zero locus $Z(\sigma_1 \oplus \sigma_2)$. For general $f_1$ and $f_2$, $Z(\sigma_1 \oplus \sigma_2)$ consists of finitely many closed points (Theorem 2.1 of \cite{debarremanivel}). The most na{\"i}ve count of lines on $\Sigma$---a literal count of the number of linear embeddings $\mathbb{P}^1_{k'} \to \Sigma$---would simply be $\#Z(\sigma_1 \oplus \sigma_2)$, but this number does not always come out to 16. To achieve an invariant answer, one could weight the lines on $\Sigma$ by the degree of the field extension $\kappa(L)/k$, and then one would have that \[ \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} [\kappa(L):k] = 16. \] However, this is not a genuine improvement over the count for algebraically closed $k$: Fix an algebraic closure $\overline{k}$ of $k$. Then $\overline{X} := \Gr_{\overline{k}}(2,5)$ is the base change of $X$ from $k$ to $\overline{k}$, and $\overline{E} := \Sym^2(\overline{S}^\vee)\oplus\Sym^2(\overline{S}^\vee)$ (where $\overline{S}$ is the canonical subplane bundle over $\Gr_{\overline{k}}(2,5)$) is the base change of $E$ from $k$ to $\overline{k}$. Letting $\overline{f}_1$ and $\overline{f}_2$ denote the base changes of $f_1$ and $f_2$, the section $\overline{\sigma}_1 \oplus \overline{\sigma}_2$ of $\overline{E}$ over $\overline{X}$ corresponding to $\overline{f}_1$ and $\overline{f}_2$, as described earlier, is itself the base change of $\sigma_1 \oplus \sigma_2$. Moreover, the zero locus $\overline{\Sigma} = Z(\overline{f}_1, \overline{f}_2)$ is a smooth degree 4 del Pezzo over $\overline{k}$, and hence the zero locus of $\overline{\sigma}_1 \oplus \overline{\sigma}_2$ consists precisely of the lines on $\overline{\Sigma}$, of which there are 16. To prove that the weighted sum of lines on $\Sigma$ is 16, one considers the fact that $Z(\overline{\sigma}_1 \oplus \overline{\sigma}_2)$ is the base change of $Z(\sigma_1 \oplus \sigma_2)$.
Considering the base change projection \[ c : Z(\overline{\sigma}_1 \oplus \overline{\sigma}_2) \to Z(\sigma_1 \oplus \sigma_2), \] one has that, for each $L \in Z(\sigma_1 \oplus \sigma_2)$, that $[\kappa(L) : k] = \#c^{-1}(L)$, and consequently \[ \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} [\kappa(L):k] = \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \#c^{-1}(L) = \# Z(\overline{\sigma}_1 \oplus \overline{\sigma}_2) = 16. \] Thus, while weighting the lines on $\Sigma$ by $[\kappa(L) : k]$ achieves a consistent count of 16, this is really nothing more than the original count that there are 16 lines on a smooth degree 4 del Pezzo surface over an algebraically closed field. To improve upon this count, we will use an approach introduced by Kass and Wickelgren in \cite{index} to count lines on smooth cubic surface: Consider for a moment the classical case of a vector bundle $E$ of rank $r$ over a smooth closed manifold $X$ of dimension $r$, and consider a section $s$ of $E$ with only isolated zeroes. One might ask whether the number of zeroes of $s$ can change as $s$ is changed by a small homotopy. The answer, of course, is yes. If one studies how this can happen, one discovers two phenomena: a single zero can split into multiple zeroes, or two zeroes can cancel each other out. The former problem is analogous to the situation of a solution to an enumerative problem over $k$ splitting into multiple solutions over a field extension $k'/k$. To account for this problem, one can define a local multiplicity: \begin{dfn}[local multiplicity]\label{mult} Let $E$ be a smooth rank $r$ vector bundle over a smooth, closed manifold $X$ of dimension $r$. Let $s$ be a section of $E$ and $z$ an isolated zero of $s$. By choosing an open $r$-ball around $z$ and trivializing $E$ over that ball, one obtains a map $\mathbb{R}^r \to \mathbb{R}^r$ which vanishes only at 0, hence inducing a map $S^r \to S^r$ whose degree is well-defined up to a sign. Define the local multiplicity at $z$ to be the absolute value of this degree, which we will denote $\mult_z s$. \end{dfn} In some sense, the local multiplicity at $z$ is the ``expected'' number of zeroes $z$ will split into if $s$ is homotoped to be transversal to the zero section. Consequently, one might hope that counting local multiplicities is sufficient, in the sense that the sum \[ \sum_{z \in Z(s)} \mult_z s \] is independent of $s$. However, this does not deal with the possibility of two zeroes canceling each other out: for a section $s$ of $E$ which is already transversal to the zero section, every zero has multiplicity 1 (in the sense of Definition \ref{mult}), and hence weighting zeroes by their multiplicity simply obtains the set theoretic size of the zero set of $s$---but, as is well known, this number is still not well-defined. The upshot of this discussion is that there is a way to weight the zeroes of a section of a smooth vector bundle which is defined purely in terms of local data, namely the multiplicity, which is analogous to weighting zeroes by the degree of the extension $\kappa(z)/k$. In the algebraic case, the latter weighting does give a well-defined count, although an unsatisfying one, while in the topological case, it does not even give a well-defined count. Now we will recall how the problem of giving a well-defined count is solved in the topological case, in order to motivate, by analogy, Kass-Wickelgren's approach to giving a more nuanced count in the algebraic case: \begin{dfn}[orientation] Let $V$ be a real vector space. 
Then we will think of an orientation on $V$ as a choice of a positive half of $\det V$. More generally, for a vector bundle $E$, if removing the zero section disconnects the total space of $\det E$, then an orientation on $\det E$ is a choice of a positive half of $\det E \smallsetminus \{zero\ section\}$. Note that this is equivalent to trivializing $\det E$. \end{dfn} The topological problem is classically solved by making an orientability assumption on $E$ and $X$. In the simplest case, one assumes that both $E$ and $X$ are oriented. Then the differential $ds$ induces a well-defined isomorphism $T_z X \to E_z$ at every zero $z$ of $s$, and $z$ can be given a sign $\sgn_zs \in \{\pm 1\}$ according to whether $ds$ preserves orientation or reverses orientation. The Poincar{\'e}-Hopf theorem then says that the sum \[ \sum_{z \in Z(s)} \sgn_zs \] is independent of the section $s$. The calculation of the local signs $\sgn_zs$ is both straightforward and informative: an orientation on $X$ induces an orientation of $T_zX$, and an orientation of $E$ induces an orientation of $E_z$. Now one can choose a neighborhood $U$ containing $z$ and coordinates $\{u^i\}$ on $U$ so that \[ \frac{\partial}{\partial u^1} \wedge \cdots \wedge \frac{\partial}{\partial u^r} \] is in the positive half of $\det T_z X$. Next, one chooses a trivialization $\{e_j\}$ of $E|_U$ so that \[ e_1 \wedge \cdots \wedge e_r \] is in the positive half of $\det E_z$. Together, these express $s|_U$ as a map $\{f^i\} : \mathbb{R}^r \to \mathbb{R}^r$ which has a zero at $z$. The determinant of the Jacobian matrix of first partial derivatives \[ \left( \frac{\partial f^i}{\partial u^j}\right) \] at $z$, which we will denote $\Jac_z (s; u,e)$, depends on the choice of coordinates $\{u^i\}$, and on the trivialization $\{e_j\}$, but its sign does not. One then computes that \[ \sgn_z s = \left\{ \begin{array}{lcl} +1 & \ \ \ \ \ & \Jac_z(s;u,e) > 0 \\ -1 & \ \ \ \ \ & \Jac_z(s;u,e) < 0 \end{array} \right.. \] Unpacking this a bit more, we should note that counting the sign of the determinant has a rather straightforward homotopical interpretation: consider any linear isomorphism $\phi : \mathbb{R}^r \to \mathbb{R}^r$. Considering $S^r$ as the one point compactification of $\mathbb{R}^r$, $\phi$ determines a homeomorphism $\widetilde{\phi} : S^r \to S^r$, and it is precisely the sign of $\det \phi$ which determines the homotopy class of $\widetilde{\phi}$. Moreover, the identification of the sign of $\Jac_z(s;u,e)$ with a homotopy class of maps $S^r \to S^r$ underlies a rather direct approach to proving the Poincar{\'e}-Hopf theorem, and is also an easy way to motivate the approach taken by Kass and Wickelgren: Stably, a homotopy class of self-homeomorphisms of a sphere corresponds to an element of $\pi_0^S$, which is isomorphic to $\mathbb{Z}$. In the stable motivic homotopy category over $k$, $\pi^S_0$ is isomorphic to $GW(k)$, the Grothendieck-Witt group\footnote{More precisely, $GW(k)$ is obtained by beginning with the semiring of isomorphism classes of symmetric non-degenerate bilinear forms over $k$, with tensor product as multiplication and direct sum as addition, and group-completing the addition.} of isomorphism classes of symmetric non-degenerate bilinear forms over $k$ \cite{morel}. An explicit description of $GW(k)$ in terms of generators and relations can be given (this is Lemma 2.9 of \cite{algtop}; see \cite{mh} Ch.
III.5 for discussion), which it will be convenient for us to record: \begin{prop}\label{presentation} Let $k$ be a field with characteristic not equal to 2, and consider the abelian group $GW^{pr}(k)$ generated by symbols $\langle a \rangle $ for all $a \in k^\times$ subject to the relations \begin{enumerate}[i.] \item $\langle uv^2 \rangle = \langle u \rangle$ \item $ \langle u \rangle + \langle - u \rangle = \langle 1 \rangle + \langle -1 \rangle $ \item $\langle u \rangle + \langle v \rangle = \langle u + v \rangle + \langle (u + v)uv \rangle$ if $u + v \neq 0$ \end{enumerate} $GW^{pr}(k)$ becomes a ring under the multiplication $\langle u \rangle \cdot \langle v \rangle = \langle uv \rangle$, and sending $\langle a \rangle$ to the bilinear form $k \otimes k \to k$ given by $x \otimes y \mapsto axy$ extends to a ring isomorphism $GW^{pr}(k) \to GW(k)$. We will implicitly assume this identification, and simply use $\langle a\rangle$ to refer to the corresponding bilinear form. \end{prop} Now consider a linear isomorphism $\psi : k^r \to k^r$. In the motivic homotopy category, this determines a map $\widetilde{\psi} : \mathbb{P}^r_k/\mathbb{P}^{r-1}_k \to \mathbb{P}^r_k/\mathbb{P}^{r-1}_k$, analogously to how a linear isomorphism $\mathbb{R}^r \to \mathbb{R}^r$ determined a map $S^r \to S^r$. Moreover, motivically, $\mathbb{P}^r_k/\mathbb{P}^{r-1}_k$ is a sphere, and hence the homotopy class of $\widetilde{\psi}$ represents an element of $GW(k)$, which turns out to precisely be the rank one bilinear form $\langle \det \psi \rangle$. Viewed this way, the isomorphism class $\langle \det ds \rangle$ is the motivic analog of the sign of the determinant $\det ds$, at least when used to assign a local index to a zero of a section of a vector bundle\footnote{And also note that the multiplicative group of rank one non-degenerate bilinear forms over $\mathbb{R}$ is precisely the group of signs, i.e. the multiplicative group $\{\pm 1\}$}. In \cite{index}, Kass and Wickelgren use this idea to develop a fairly broad technique for counting zeroes of vector bundles over smooth schemes. Underlying their technique is the following orientability requirement: \begin{dfn}[relative orientation] Let $p : X \to \Spec k$ be a smooth scheme, and $E$ a vector bundle over $X$. Then $E$ is said to be relatively orientable if there is an isomorphism \[ \rho : \det E \otimes \omega_{X/k} \to L^{\otimes 2} \] for some line bundle $L$ over $X$. The isomorphism $\rho$ is called a relative orientation, and the pair $(E, \rho)$ will be called a relatively oriented vector bundle. \end{dfn} Now continuing the notation in the statement of the definition, and assuming that $\rk E = \dim X = r$, suppose $s$ is a section of $E$ whose zero locus consists of finitely many closed points. Consider some zero $z$ of $s$, and suppose that there is a neighborhood $U$ of $z$ and an isomorphism $u: U \cong \mathbb{A}^r_k$ (or an isomorphism with an open subset of $\mathbb{A}^r_k$). Note that the coordinate vector fields on $\mathbb{A}^r_k$ determine a basis $\{\partial_{u_1}|_z, \ldots, \partial_{u_r}|_z\}$ for $(T_X)_z$. Next, suppose that there is a trivialization of $E|_U$ by sections $\{e_1, \ldots, e_r\}$ such that the map $\det (T_X)_z \to \det E_z$ defined by \[ \partial_{u_1}|_z \wedge \cdots \wedge \partial_{u_r}|_z\longmapsto e_1 \wedge \cdots \wedge e_r \] is a square in $(\omega_X)_z \otimes \det E_z \cong (L_z)^{\otimes 2}$. 
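As an aside on the condition just described (a sanity check under the stated assumptions, not needed in what follows): when $k=\mathbb{R}$, the squares in the one-dimensional real vector space $(L_z)^{\otimes 2}$ form a canonical half-line, \[ \{\text{squares in } (L_z)^{\otimes 2}\} = \{\mu\otimes\mu \,:\, \mu\in L_z\}, \] so asking that the element of $(\omega_X)_z \otimes \det E_z \cong (L_z)^{\otimes 2}$ corresponding to the displayed map be a square singles out one of the two possible compatibilities between the coordinates $u$ and the trivialization $\{e_j\}$; this is the algebraic counterpart of the orientation-compatibility used in the classical computation of $\sgn_z s$ above.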
Then we make the following definiton: \begin{dfn}[good parametrization] In the notation of the preceding paragraphs, and the conditions described, suppose also that the map $s_{u,e}:\mathbb{A}^r_k \to \mathbb{A}^r_k$ corresponding to $s$ over $U$ is {\'e}tale at $z$. Then we will refer to the coordinates $u: U \to \mathbb{A}^r_k$ (allowing this notation to also include the case of an isomorphism between $U$ and an open subset of $\mathbb{A}^r_k$) and the trivialization $\{e_1, \ldots, e_r\}$ of $E|_U$ together as a good parametrization near $z$. \end{dfn} Continuing with the same notation and assumptions, we consider two cases: first, suppose $z$ is $k$-rational, i.e. $\kappa(z) = k$. Then evaluating the Jacobian matrix $\left(\frac{\partial (s_{u,e})_i}{\partial u_j}\right)$ at $z$ yields a matrix of elements of $k$. This matrix has a determinant in $k$, which depends, as in the case of a section of a vector bundle over a manifold, on the choice of coordinates and trivialization. However, again analogous to the classical case, Kass and Wickelgren show in \cite{index} that provided that a good parametrization is used to compute the determinant, the bilinear form \[ \left \langle \det \left(\frac{\partial (s_{u,e})_i}{\partial u_j}\right) \right \rangle \] is well-defined up to isomorphism. When $z$ is not $k$-rational, we need to work a bit harder. Evaluating the Jacobian matrix $\left(\frac{\partial (s_{u,e})_i}{\partial u_j}\right)$ at $z$ on the nose yields a matrix of linear maps $\kappa(z) \to k$. However, by base changing the map $s_{u,e}$ to a map $s'_{u,e} : \mathbb{A}^r_{\kappa(z)} \to \mathbb{A}^r_{\kappa(z)}$ and then evaluating at $z$ one obtains a matrix $\left(\frac{\partial (s'_{u,e})_i}{\partial u'_j}\right)$ of elements of $\kappa(z)$, and this matrix now has a determinant in $\kappa(z)$. We would like to try to use the bilinear form \[ \left \langle \det \left(\frac{\partial (s'_{u,e})_i}{\partial u'_j}\right) \right \rangle \] to define our local sign, but we immediately run into the problem that this is a bilinear form over $\kappa(z)$, not over $k$. If we make the additional assumption that $\kappa(z)/k$ is separable---which is automatically guaranteed if, for example, $k$ is perfect---then we can use the trace map $\Tr_{\kappa(z)/k} : \kappa(z) \to k$. This map is surjective, and hence for any vector space $V$ over $\kappa(z)$, and any non-degenerate symmetric bilinear form $b : V \otimes V \to \kappa(z)$, composing $b$ with $\Tr_{\kappa(z)/k}$ and viewing $V$ as a vector space over $k$ produces a non-degenerate symmetric bilinear form $\Tr_{\kappa(z)/k} b$. In \cite{index}, Kass and Wickelgren show that, provided that a good parametrization is used, the bilinear form \[ \Tr_{\kappa(z)/k} \left \langle \det \left(\frac{\partial (s'_{u,e})_i}{\partial u'_j}\right) \right \rangle \] is well-defined. Moreover, this recovers the same bilinear form that would have been defined if $z$ were $k$-rational, because $\Tr_{k/k}$ is the identity map. Consequently, we make the following definition: \begin{dfn}[Jacobian form]\label{jacform} Let $(E,\rho)$ be a relatively oriented vector bundle over a smooth scheme $X \to \Spec k$ for $k$ a perfect field, and assume that $\rk E = \dim X = r$. Let $s$ be a section of $E$ whose zero locus consists of finitely many closed points. Assume also that there is a good parametrization at every zero $z$ of $s$. 
Then we define the Jacobian form \[ \Tr_{\kappa(z)/k} \langle \Jac_z (s;\rho)\rangle \] at $z$ to be the well-defined bilinear form $k \otimes k \to k$ given by computing \[ \Tr_{\kappa(z)/k} \left \langle \det \left(\frac{\partial (s'_{u,e})_i}{\partial u'_j}\right) \right \rangle \] in any good parametrization around $z$. Note that this bilinear form has rank $[\kappa(z) : k]$. \end{dfn} Now return to the situation of lines on a degree 4 del Pezzo surface. Then $X = \Gr_k(2,5)$ and $E = \Sym^2(S^\vee) \oplus \Sym^2(S^\vee)$, and we have that $X$ admits a cover by open sets which are isomorphic to $\mathbb{A}^6_k$. Moreover, for general $f_1$ and $f_2$, $Z(\sigma_1 \oplus \sigma_2)$ consists of finitely many closed points, and is itself {\'e}tale over $k$. For finite $k$, this can be refined to saying that there is a Zariski open subset of the space of sections of $E$ whose closed points correspond to degree 4 del Pezzos over a finite extension of $k$ where $Z(\sigma_1 \oplus \sigma_2)$ is finite {\'e}tale over $k$. Thus, for general $f_1$ and $f_2$, $\sigma_1 \oplus \sigma_2$ is a section whose zero set consists of finitely many closed points, at each of which there is a good parametrization. We would thus like to try to count lines on a del Pezzo by assigning each line its Jacobian form. But we run into a problem: $E$ is not relatively orientable. To explain how we get around this problem, it is useful to explain why $E$ fails to admit a relative orientation: Consider the Pl{\"u}cker embedding $X \hookrightarrow \mathbb{P}^9_k$. The Picard group of $X$ is generated by the restriction of $\mathcal{O}_{\mathbb{P}^9_k}(1)$ to $X$, which we will denote $\mathcal{O}_X(1)$. Moreover, the tautological line bundle on $\mathbb{P}^9_k$ restricts on $X$ to the determinant of $S$, so that $\det S = \mathcal{O}_X(-1)$. The tautological short exact sequence \begin{center} \begin{tikzcd} 0 \arrow{r}& S\arrow{r} & \mathscr{O}_X^{\oplus 5} \arrow{r}& Q \arrow{r}& 0 \end{tikzcd}, \end{center} together with the isomorphism $T_{X/k} \cong S^\vee \otimes Q$, implies that $\omega_{X/k} = \mathcal{O}_X(-5)$. We also have that $\det \Sym^2(S^\vee) = (\det S^\vee)^{\otimes 3}$, and hence $\det \Sym^2(S^\vee) = \mathcal{O}_X(3)$. Taken all together, we thus compute that, in the Picard group, \[ \det E \otimes \omega_{X/k} = \mathcal{O}_X(1), \] and hence $E$ is not relatively orientable. The Pl{\"u}cker embedding exhibits the zero locus of $\sigma_1 \oplus \sigma_2$ as closed points in $\mathbb{P}^9_k$. Provided that $|k| > 16$, we will show (Proposition \ref{nondeg one form}) that there is a section $s$ of $\mathcal{O}_{\mathbb{P}^9_k}(1)$, and hence a corresponding section of $\mathcal{O}_X(1)$, whose zero locus is disjoint from $Z(\sigma_1 \oplus \sigma_2)$. \begin{dfn}[non-degenerate on lines]\label{nondeg} We will refer to a section $s$ of $\mathcal{O}(1)$ whose zero locus is disjoint from $Z(\sigma_1 \oplus \sigma_2)$ as a ``one form\footnote{Our terminology ``one form'' refers not to K{\"a}hler one forms, but to the fact that a section of $\mathcal{O}_{\mathbb{P}^n_k}(1)$ corresponds to a one form on $\mathbb{A}^{n+1}_k$, i.e. 
a degree one homogeneous polynomial} non-degenerate on lines.'' \end{dfn} Letting $U$ denote the complement of $Z(s)$ in $X$, the fiber-wise map \[ \alpha \oplus \beta \mapsto s \otimes \alpha \oplus s^{\otimes 2} \otimes \beta \] determines an isomorphism between $E|_U$ and the restriction of \[ \widetilde{E} := \mathcal{O}_X(1) \otimes \Sym^2(S^\vee) \oplus \mathcal{O}_X(2) \otimes \Sym^2(S^\vee) \] to $U$. By chasing through the same type of computation we used to show that $E$ is not relatively orientable, but this time for $\widetilde{E}$, we obtain a canonical relative orientation $\rho$ on $\widetilde{E}$. We now make the following definition: \begin{dfn}[twisted Jacobian form]\label{twjacform} With notation as in the preceding paragraphs, consider some $z \in Z(\sigma_1 \oplus \sigma_2)$, and let $\widetilde{\sigma}$ denote the section \[ s \otimes \sigma_1 \oplus s^{\otimes 2} \otimes \sigma_2. \] We define \[ \Tr_{\kappa(z)/k} \langle \widetilde{\Jac}_z (f_1,f_2; s)\rangle := \Tr_{\kappa(z)/k} \langle \Jac_z (\widetilde{\sigma}; \rho)\rangle, \] where the right side is defined as in Definition \ref{jacform}. \end{dfn} We are now prepared to state our main result in the case that $|k| > 16$: \begin{thm}[Main Result] Let $\Sigma = Z(f_1,f_2) \subset \mathbb{P}^4_k$ be a general smooth degree 4 del Pezzo surface over a perfect field $k$ of characteristic not equal to 2, and assume that $|k| > 16$. Let $s$ be a one-form non-degenerate on the lines on $\Sigma$ (see Definition \ref{nondeg}). Let $\Lines(\Sigma)$ denote the set of linear embeddings $\mathbb{P}^1_{k'} \to \Sigma$, where $k'$ ranges over all finite extensions of $k$. Then \begin{equation} \label{result} \sum_{L \in \Lines(\Sigma)} \Tr_{\kappa(L)/k} \langle \widetilde{\Jac}_L (f_1, f_2;s)\rangle = 8H, \end{equation} where $H = \langle 1 \rangle + \langle -1\rangle \in GW(k)$, and each summand is the twisted Jacobian form of Definition \ref{twjacform}. \end{thm} \begin{rmk}\label{general} For an infinite field, this result automatically applies to infinitely many degree 4 del Pezzo surfaces over $k$. For any particular finite field, it is conceivable that the result as stated does not apply to any degree 4 del Pezzo surface over $k$. However, the proof shows that there is a Zariski open subset in $\Spec \Sym^{\bullet} \Gamma\left(\Sym^2(S^\vee) \oplus \Sym^2(S^\vee)\right)$, every closed point of which corresponds to a degree 4 del Pezzo surface over a finite extension of $k$ where equation (\ref{result}) holds. \end{rmk} In the course of the paper, we will explain how to extend this result to the case where $|k| \leqslant 16$. \section{Relation To Other Work} In addition to the work of Kass-Wickelgren and Bachmann-Wickelgren in defining enriched Euler classes, enrichment of enumerative geometry to take values in quadratic forms has been investigated by Levine in \cite{aspects}, and enriched results under the assumption of relative orientability have been obtained for various classical problems (see, e.g., \cite{pauli}, \cite{bezout}, \cite{cdh}, \cite{wendt}, \cite{srinivasan}). The problem of enriching enumerative results in the case $k = \mathbb{R}$ has been studied in detail by Finashin and Kharlamov, among others---see, for example, \cite{real}. The problem of quadratically enriched counts when relative orientability fails has been dealt with by Larson and Vogt in the case of bitangents to a smooth plane quartic \cite{larsonvogt}.
Larson and Vogt define the notion of ``relatively orientable relative to a divisor,'' where a vector bundle is relatively orientable away from a certain effective divisor. They then show, essentially, that counts of zeroes are homotopy invariant as long as the homotopy does not move a zero across the chosen divisor, and use this to define counts dependent on choosing a connected component of the space of sections with isolated zeroes. Our approach is similar in that we rely on restricting to an open subset where the relevant vector bundle is relatively orientable. However, rather than dealing with homotopy invariance of counts directly, we use axiomatic properties of Chow-Witt groups and the Barge-Morel Euler class to obtain a well-defined count. \section{Acknowledgments} Thanks to Kirsten Wickelgren for support and extensive discussions, without which this paper could not have been written. Thank you to Olivier Wittenberg for helpful comments and correspondence. The author was partly supported by NSF DMS-2001890 and by a scholarship from the Sloan Foundation. \section{Oriented Intersection Theory}\label{oriented intersection} In order to prove our result, we will need to give a manifestly parametrization-invariant interpretation of the Jacobian form \[ \Tr_{\kappa(L)/k} \langle \Jac_L(\sigma;\rho)\rangle. \] To do this, it will prove most convenient to first recall the classical intersection theory behind counting zeroes of sections of vector bundles, for which a good reference is \cite{intersection}, particularly Chapter 6. Let $p : X \to \Spec k$ be smooth and proper of dimension $d$, and $E$ a vector bundle over $X$ of rank $d$. Let $z :X \to E$ be the zero section, and $s : X \to E$ any section such that $Z(s) \to X$ is a regular embedding. Then consider the fiber square \begin{center} \begin{tikzcd} Z(s) \arrow{r}{j}\arrow{d}{i} & X \arrow{d}{s} \\ X \arrow{r}{z} & E \end{tikzcd} \end{center} Because the bottom arrow is a regular embedding of codimension $d$, the refined Gysin homomorphism $z^! : CH^*(X) \to CH^{* + d - \codim Z(s)}(Z(s))$ is defined by $z^![V] = [X\cdot V]$, where $X \cdot V$ is the intersection product. By composing $z^!$ with proper push-forward $i_* : CH^*(Z(s)) \to CH^{*+\codim Z(s)}(X)$, one obtains a homomorphism $\phi : CH^*(X) \to CH^{*+d}(X)$, which turns out not to depend on $s$. Moreover, this homomorphism can be given a rather explicit description: Because $Z(s) \to X$ is a regular embedding, it consists of regularly embedded clopen components $j_m : Z_m \to X$, and each has a normal bundle $N_{Z_m} X$. Because the normal bundle of $X$ in $E$ along $z$ is simply $E$ itself, $N_{Z_m}X$ is naturally mapped into\footnote{The normal cone of $Z$ in $X$ will be included into $i^*N_XE$ quite generally, and in the case that $Z \hookrightarrow X$ is a regular embedding, as it is here, the normal cone is a vector bundle, namely the normal bundle. For proof, see the beginning of section 6.1 of \cite{intersection}.} $j_m^*E$ and the excess bundle $\mathcal{E}_m$ on $Z_m$ is defined by the short exact sequence \begin{center} \begin{tikzcd} 0 \arrow{r} & N_{Z_m}X \arrow{r} & j_m^*E \arrow{r} & \mathcal{E}_m \arrow{r} & 0. \end{tikzcd} \end{center} Now let $e(\mathcal{E}_m)$ denote the top Chern class of $\mathcal{E}_m$, considered as a homomorphism $CH^*(Z_m) \to CH^{*+\rk \mathcal{E}_m}(Z_m)$. Then the excess intersection formula computes $\phi$: for any $\alpha \in CH^*(X)$, one has \[ \phi(\alpha) = \sum_m i_*(e(\mathcal{E}_m)(j_m^*(\alpha))). 
\] At one extreme, where $s = z$, one has that $Z(s) = X$, and $\mathcal{E} = E$, and so also, in fact, that $\phi = e(E)$. At the other extreme, when $Z(s)$ consists of simple zeroes, each $\mathcal{E}_m$ has rank zero, and hence $e(\mathcal{E}_m)$ is simply the identity on $CH^*(Z_m)$. Now letting $[X]$ denote the fundamental cycle of $X$, one has that \[ p_*(e(E)([X])) = \sum_m \ind_{z_m}(s), \] where we define \[ \ind_{z_m}(s) = p_*(i_*(j_m^*([X]))) \in CH^0(\Spec k) = \mathbb{Z}. \] When $k$ is algebraically closed, $\ind_{z_m}(s)$ is 1 at every simple zero, and by building on this and considering the fibers of a base change morphism as we described in the first section, one can actually compute more generally that $\ind_{z_m}(s) = [\kappa(z_m):k]$. As we explained in the first section, we need to take orientability data into account if we wish to refine this procedure in the case that $k$ is not algebraically closed. This orientability data can be captured using the Chow-Witt groups---also known as oriented Chow groups---of Barge-Morel and Fasel (see \cite{bargemorel} and \cite{chowwitt}). To explain how, we briefly return to the topological case: Regardless of orientability assumptions, an Euler class can be defined for a vector bundle $E$ over a topological manifold $X$, at the cost of using not ordinary cohomology, but cohomology twisted by a local system. More precisely, letting $\mathfrak{o}(E)$ denote the local system whose fiber at $x \in X$ is $H_n(E|_x, E|_x \smallsetminus \{0\}; \mathbb{Z})$, an Euler class $e(E) \in H^n(X; \mathfrak{o}(E))$ can always be defined. Moreover, when $\mathfrak{o}(E)$ is isomorphic to the orientation sheaf of $X$, twisted Poincar{\'e} duality provides an isomorphism \[ H^n(X; \mathfrak{o}(E)) \cong H^n(X; \mathfrak{o}(X)) \to H_0(X; \mathbb{Z}), \] providing $E$ with an Euler number. Returning to the motivic case, a similar story can be told. The Chow-Witt groups of Barge-Morel and Fasel give an enriched version of the ordinary Chow groups which can be twisted by any line bundle. Moreover, for any vector bundle $E$ over a smooth $k$-scheme $X$, an Euler class $\widetilde{e}(E)$ is defined by Barge-Morel in \cite{bargemorel} and Fasel in \cite{chowwitt}, which recovers the top Chern class in the sense that the diagram \begin{center} \begin{tikzcd} \widetilde{CH}^*(X) \arrow{r}{\widetilde{e}(E)} \arrow{d} & \widetilde{CH}^{* + r}(X; \det E^\vee) \arrow{d} \\ CH^*(X) \arrow{r}{e(E)} & CH^{* + r}(X) \end{tikzcd} \end{center} commutes (see \cite{chowwitt}), where the vertical arrows are the natural comparison morphisms from the Chow-Witt groups $\widetilde{CH}^*$ to ordinary Chow groups. Note that the top right entry is an analog of the classical Euler class being valued in cohomology twisted by $\mathfrak{o}(E)$. The cost of being able to twist by line bundles is that pull-back and push-forward become more complicated. For our purposes, it will be sufficient to note that for a regular embedding $i : X \to Y$, and any line bundle $L$ on $Y$, there is an induced pull-back \[ i^* : \widetilde{CH}^*(Y; L) \to \widetilde{CH}^*(X; i^*L), \] while for a proper morphism $f : X \to Y$ of relative dimension $d$, and any line bundle $L$ on $Y$, there is an induced push-forward \[ f_* : \widetilde{CH}^*(X; f^*L \otimes \omega_{X/k}) \to \widetilde{CH}^{*-d}(Y; L \otimes \omega_{Y/k}). \] Together, these allow an oriented version of the excess intersection formula, due to Fasel in \cite{excess}, to be used.
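In particular, specializing the push-forward above to the structure morphism $p : X \to \Spec k$ (this is merely the stated formula with $Y = \Spec k$ and $L$ trivial, together with the fact that $\omega_{\Spec k/k}$ is trivial) gives \[ p_* : \widetilde{CH}^{d}(X; \omega_{X/k}) \to \widetilde{CH}^{0}(\Spec k), \] so a degree-$d$ class can be pushed forward to the base only once its twist has been identified with $\omega_{X/k}$, at least up to tensoring with the square of a line bundle; this is exactly the role the relative orientation plays below.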
First, considering the fundamental class $[X] \in \widetilde{CH}^0(X)$, $j_m^*([X])$ is defined in $\widetilde{CH}^0(Z_m)$. Then $\widetilde{e}(\mathcal{E}_m)(j_m^*([X]))$ is defined in $\widetilde{CH}^{d - \codim Z_m}(Z_m; \det\mathcal{E}_m^\vee)$. But now the short exact sequence \begin{center} \begin{tikzcd} 0 \arrow{r} & N_{Z_m}X \arrow{r} & j_m^*E \arrow{r} & \mathcal{E}_m \arrow{r} & 0 \end{tikzcd} \end{center} defining the excess normal bundle affords an isomorphism \[ \det \mathcal{E}_m^\vee \cong j_m^*\det E^\vee \otimes \det N_{Z_m}X, \] while the normal short exact sequence \begin{center} \begin{tikzcd} 0 \arrow{r} & T_{Z_m/k} \arrow{r} & j_m^*T_{X/k} \arrow{r} & N_{Z_m}X \arrow{r} & 0 \end{tikzcd} \end{center} affords an isomorphism \[ \det N_{Z_m}X \cong j_m^*\omega_{X/k}^\vee \otimes \omega_{Z_m/k}. \] Consequently, $\widetilde{e}(\mathcal{E}_m)(j_m^*([X]))$ is actually defined in the Chow-Witt group \[ \widetilde{CH}^{d - \codim Z_m}(Z_m; j_m^*(\det E^\vee \otimes \omega_{X/k}^\vee) \otimes \omega_{Z_m/k}), \] and considering the fact that $i|_{Z_m} = j_m$, we then have that $i_*(\widetilde{e}(\mathcal{E}_m)(j_m^*([X])))$ is defined in $\widetilde{CH}^d(X; \det E^\vee)$, and the excess intersection formula has an oriented analog due to Fasel (see Theorem 32 of \cite{excess} and Remark 5.22 from \cite{bw}): \begin{for}[oriented excess intersection formula] \[ \widetilde{e}(E)([X]) = \sum_m i_*(\widetilde{e}(\mathcal{E}_m)(j_m^*([X]))). \] \end{for} Now recall that in the ordinary Chow case, we were able to recover the count by pushing forward along $p_* : CH^d(X) \to CH^0(\Spec k)$. The difficulty now lies in the restrictions on when push-forward is defined for Chow-Witt groups. The only line bundles on $\Spec k$ are trivial, and so there is not necessarily a push-forward \[ p_* : \widetilde{CH}^d(X; \det E^\vee) \to \widetilde{CH}^0(\Spec k) \] defined. It is here that the need for relative orientability enters the story: The Chow-Witt groups are SL-orientable, meaning that for two line bundles $L'$ and $L''$ over $X$, and an isomorphism $\psi : L' \to L'' \otimes L^{\otimes 2}$ for some third line bundle $L$, there is an induced isomorphism \[ \psi : \widetilde{CH}^*(X; L') \to \widetilde{CH}^*(X; L''). \] For a relatively orientable vector bundle $E$, then, the relative orientation furnishes an isomorphism $\widetilde{CH}^*(X; \det E^\vee) \to \widetilde{CH}^*(X; \omega_{X/k})$, and hence we can consider an augmented pushforward $p_*^\rho : \widetilde{CH}^*(X; \det E^\vee) \to \widetilde{CH}^{* -d}(\Spec k)$ given as the composition \begin{center} \begin{tikzcd} \widetilde{CH}^*(X; \det E^\vee) \arrow{r}{\rho} & \widetilde{CH}^*(X; \omega_{X/k}) \arrow{r}{p_*} & \widetilde{CH}^{*-d}(\Spec k). \end{tikzcd} \end{center} Now we can define the oriented local index as follows: \begin{dfn} Let $\rho : \det E^\vee \otimes \omega_{X/k} \to L^{\otimes 2}$ be a relative orientation on $E$, and let $s$ be a section of $E$ whose zero locus is regularly embedded $i : Z(s) \to X$. Let $z \in Z(s)$ be an isolated zero (i.e. a closed point which is itself a clopen component of $Z(s)$). Let $j_z : \{z\} \to X$ be the inclusion. Then we define the oriented index to be \[ \ind^{or}_{z}(s;\rho):= p^\rho_*(i_*(\widetilde{e}(\mathcal{E}_z)(j_z^*([X])))). \] \end{dfn} Because $\widetilde{CH}^0(\Spec k) = GW(k)$ (see \cite{algtop}), this actually assigns an index valued in $GW(k)$ to each zero.
Moreover, this index agrees with the one of Kass-Wickelgren, as proven in Sections 2.3 and 2.4 of \cite{bw}, in particular Proposition 2.31: \begin{prop}\label{oriented index} Let $X \to \Spec k$ be smooth, and $E$ a vector bundle over $X$, with $\rho : \omega_{X/k} \otimes \det E \to L^{\otimes 2}$ a relative orientation. Let $s$ be a section, and $z$ a simple zero of $s$ admitting a good parametrization, and such that $\kappa(z)/k$ is separable (e.g. if $k$ is perfect). Then \[ \Tr_{\kappa(z)/k} \langle \Jac_z(s;\rho)\rangle = \ind^{or}_z(s;\rho). \] \end{prop} We now have the notation and definitions needed to prove our main result in the next section. \section{Proof of Main Result} We will first prove the following: \begin{prop}\label{nondeg one form} Let $|k| > 16$. Then there is a one-form non-degenerate on lines. \end{prop} \noindent We will require the following lemma, which is well-known, but which we include for the sake of completeness: \begin{lem}\label{non-vanishing} Let $k$ be a field such that $|k| > n$, and let $z_1, \ldots, z_n$ be points in $\mathbb{P}^d_k$ for any $d$. Then there is a section $s$ of $\mathcal{O}_{\mathbb{P}^d_k}(1)$ such that $Z(s)$ does not contain any of the points $z_1, \ldots, z_n$. \end{lem} \begin{proof}[Proof of lemma] Sections of $\mathcal{O}_{\mathbb{P}^d_k}(1)$ are precisely degree one homogeneous polynomials on $\mathbb{A}^{d+1}_k$, and hence $\Gamma(\mathcal{O}_{\mathbb{P}^d_k}(1))$ is isomorphic as a vector space over $k$ to $(k^{d+1})^\vee$. Choose an identification. Now for each $i = 1, \ldots, n$, consider the evaluation map $\ev_i : (k^{d+1})^\vee \to \kappa(z_i)$. This is $k$-linear and non-zero, and hence $\ker \ev_i$ is a subspace of $(k^{d+1})^\vee$ of positive codimension. Now if $z_i \in Z(s)$ for some section $s$ of $\mathcal{O}_{\mathbb{P}^d_k}(1)$, then $s \in \ker \ev_i$. Consequently, to find some $s$ such that $Z(s)$ does not contain any of the points $z_i$, it suffices to show that \[ \bigcap_i \left((k^{d+1})^\vee \smallsetminus \ker \ev_i\right) \] is non-empty, or, equivalently, that \[ \bigcup_i \ker \ev_i \] is not all of $(k^{d+1})^\vee$. But because each $\ker \ev_i$ has positive codimension, this follows from the fact that $|k| > n$. \end{proof} \begin{proof} By the lemma, we are reduced to proving that $Z(\sigma_1 \oplus \sigma_2)$ consists of at most 16 closed points. But this follows from the result stated in the introduction, that summing over the closed points in $Z(\sigma_1 \oplus \sigma_2)$, with each point $z$ given weight $[\kappa(z) : k]$, gives 16. Because each such weight is $\geqslant 1$, there must be at most 16 points. \end{proof} Returning to the Pl{\"u}cker embedding $X \to \mathbb{P}^9_k$, we have that there is a section $s$ of $\mathcal{O}_{\mathbb{P}^9_k}(1)$ such that $Z(s)$ is disjoint from $Z(\sigma_1 \oplus \sigma_2)$. Now $s$ restricts on $X$ to a section of $\mathcal{O}_X(1)$. We define $U:= X \smallsetminus Z(s)$, and, because $Z(s)$ is disjoint from $Z(\sigma_1 \oplus \sigma_2)$, $U$ is a neighborhood of $Z(\sigma_1 \oplus \sigma_2)$. Moreover, $s$ is non-vanishing on $U$, and hence the assignment $\alpha \oplus \beta \mapsto s \otimes \alpha \oplus s^{\otimes 2} \otimes \beta$ determines an isomorphism \[ \phi : E|_U \to \left(\mathcal{O}_X(1) \otimes \Sym^2(S^\vee) \oplus \mathcal{O}_X(2) \otimes \Sym^2(S^\vee)\right)|_U.
\] Now let $\widetilde{E} := \mathcal{O}_X(1) \otimes \Sym^2(S^\vee) \oplus \mathcal{O}_X(2) \otimes \Sym^2(S^\vee)$, so that this isomorphism can be rewritten as $\phi : E|_U \to \widetilde{E}|_U$. We easily compute, again in the Picard group, that \[ \det \widetilde{E} \otimes \omega_{X/k} = \mathcal{O}_X(10), \] whence $\widetilde{E}$ is relatively orientable. Moreover, chasing through the steps in the computation yields a canonical isomorphism $\rho : \det \widetilde{E} \otimes \omega_{X/k} \to (\mathcal{O}_X(5))^{\otimes 2}$. Through $\phi$, then, this also provides a relative orientation $\varrho$ of $E|_U$. We are now prepared to prove the main result: \begin{proof}[Proof of Main Result and Remark \ref{general}] Consider the section \[ \widetilde{\sigma} := \phi(\sigma) = s \otimes \sigma_1 \oplus s^{\otimes 2} \otimes \sigma_2. \] \noindent By Theorem 2.1 of \cite{debarremanivel}, we choose $f_1$ and $f_2$ general so that $Z(\sigma_1 \oplus \sigma_2)$ is finite {\'e}tale over $k$. In the case of a finite field, this may correspond to a finite extension of the base field; we will now denote this extension again by $k$ (see Remark \ref{general}). By construction (see Definition \ref{twjacform}), we have, for each $L \in Z(\sigma_1 \oplus \sigma_2)$, that \[ \Tr_{\kappa(L)/k} \langle \widetilde{\Jac}_L (f_1,f_2; s)\rangle := \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho)\rangle. \] Hence it suffices to show that \[ \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho)\rangle = 8H. \] But by Proposition \ref{oriented index}, we have that \[ \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \Jac_L(\widetilde{\sigma}; \rho) \rangle = \sum_{z \in Z(\widetilde{\sigma}) \cap U} \ind^{or}_z(\widetilde{\sigma}; \rho). \] We will consider both sides of this equation, and check two facts: \begin{enumerate}[(i)] \item The left side has rank 16. \item The right side is an integral multiple of $H$. \end{enumerate} \noindent To check (i), first note that \[ \rk \left( \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho)\rangle \right) = \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \rk \left(\Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho) \rangle\right). \] Moreover, almost by construction (see Definition \ref{jacform}), \[ \rk \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho) \rangle = [\kappa(L):k]. \] Hence \[ \rk \left( \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho)\rangle \right) = \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} [\kappa(L):k], \] and we explained in the first section why the right side is equal to 16. To check (ii), we first describe the structure of $Z(\widetilde{\sigma})$. Prima facie, it is given by \[ Z(\widetilde{\sigma}) = Z(s \otimes \sigma_1) \cap Z(s^{\otimes 2} \otimes \sigma_2) = \left(Z(s) \cup Z(\sigma_1)\right) \cap \left(Z(s^{\otimes 2}) \cup Z(\sigma_2)\right). \] But because $Z(s)$ and $Z(s^{\otimes 2})$ are both disjoint from $Z(\sigma_1) \cap Z(\sigma_2) = Z(\sigma_1 \oplus \sigma_2)$ by assumption, this simplifies to \[ Z(\widetilde{\sigma}) = \left(Z(s) \cap Z(s^{\otimes 2})\right) \coprod Z(\sigma_1 \oplus \sigma_2).
\] But $Z(s) \cap Z(s^{\otimes 2}) = Z(s)$, so we finally obtain \[ Z(\widetilde{\sigma}) = Z(s) \coprod Z(\sigma_1 \oplus \sigma_2), \] expressing the zero scheme of $\widetilde{\sigma}$ as the disjoint union\footnote{It is the appearance of $Z(s)$ as a component of the zero locus which motivates the appearance of the $s^{\otimes 2}$ factor in the second summand of $\widetilde{\sigma}$.} of $Z(s)$, which is regularly embedded because it is locally given by a regular sequence consisting of the single element $s$, and $Z(\sigma_1 \oplus \sigma_2)$, which is regularly embedded by assumption. Hence $Z(\widetilde{\sigma})$ is regularly embedded, so now for each clopen component $Z_k$ of $Z(\widetilde{\sigma})$, let $\mathcal{E}_k$ denote the excess normal bundle on $Z_k$ described in Section \ref{oriented intersection}, let $j_k : Z_k \to X$ be the inclusion, and let $i : Z \to X$ be the inclusion of the whole zero locus. Recall that the oriented excess intersection formula (see Section \ref{oriented intersection}, particularly Formula 1 and the preceding discussion) computes \[ \widetilde{e}(\widetilde{E})([X]) = \sum_k i_*(\widetilde{e}(\mathcal{E}_k)(j_k^*([X]))), \] where $\widetilde{e}$ is the Chow-Witt Euler class of Barge-Morel and Fasel (again see Section \ref{oriented intersection}). Now letting $Z_0 = Z(s)$ and $Z_1, \ldots, Z_m$ denote the closed points making up $Z(\sigma_1\oplus \sigma_2)$, we have \[ \sum_{k =1}^m i_*(\widetilde{e}(\mathcal{E}_k)(j_k^*([X]))) = \widetilde{e}(\widetilde{E})([X]) - i_*(\widetilde{e}(\mathcal{E}_0)(j_0^*([X]))), \] and hence (see Section \ref{oriented intersection} for notation) \[ \sum_{z \in Z(\sigma_1 \oplus \sigma_2)} \ind_z^{or}(\widetilde{\sigma};\rho) = p^\rho_*(\widetilde{e}(\widetilde{E})([X])) - p^\rho_*(i_*(\widetilde{e}(\mathcal{E}_0)(j_0^*([X])))). \] Because $\widetilde{E}$ has an odd-rank summand, $p^\rho_*(\widetilde{e}(\widetilde{E})([X]))$ is an integer multiple of $H$ by a result of Ananyevskiy (Theorem 7.4 of \cite{sloriented}). Moreover, because $Z_0 = Z(s)$ has codimension 1 in $X$, and $\dim X = 6$, we have that $\mathcal{E}_0$ is itself of odd rank, so by the same result of Ananyevskiy, $p^\rho_*(i_*(\widetilde{e}(\mathcal{E}_0)(j_0^*([X]))))$ is also an integer multiple of $H$. Thus we have that the sum \[ \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \widetilde{\Jac}_L (f_1,f_2; s)\rangle = \sum_{L \in Z(\sigma_1 \oplus \sigma_2)} \Tr_{\kappa(L)/k} \langle \Jac_L (\widetilde{\sigma}; \rho)\rangle \] is an integral multiple of $H$ in $GW(k)$ of rank 16, and hence \[ \sum_{L \in \Lines(\Sigma)} \Tr_{\kappa(L)/k} \langle \widetilde{\Jac}_L (f_1,f_2; s)\rangle = 8H. \] \end{proof} \section{Extension of Main Result} As we stated at the beginning, this result can also be extended to the case where $|k| \leqslant 16$. Recall that the reason for making the requirement $|k| > 16$ was to ensure that we could find a section of $\mathcal{O}_X(1)$ which did not vanish on $Z(\sigma_1 \oplus \sigma_2)$. Our strategy for extending the main result to smaller fields is to choose an appropriate extension of $k$ with enough elements. We first explain how to choose such an extension. For any field extension $K/k$, extension of scalars induces a map $GW(k) \to GW(K)$. In the case where $K$ and $k$ are finite fields of odd characteristic, this map is strongly controlled by its behavior on rank-one forms, in the following sense: \begin{prop} Let $K/k$ be an extension of finite fields of odd characteristic.
Suppose that $k \to K$ induces an isomorphism $k^\times/(k^\times)^2 \to K^\times/(K^\times)^2$. Then the map $GW(k) \to GW(K)$ induced by extension of scalars is an isomorphism. \end{prop} \noindent This result is well-known, but we include an outline of a proof for the sake of completeness: \begin{proof} It suffices to show that extension of scalars induces a bijection between isomorphism classes of non-degenerate quadratic forms over $k$ and over $K$. Let $Q$ be a non-degenerate quadratic form over $K$. By Theorem II.3.5 (in particular, the proof of part (1)) and Proposition II.3.4 of \cite{forms}, $Q$ can be represented by a diagonal matrix with entries all $1$ except for the last entry, $d(Q)$. Moreover, $d(Q)$ is uniquely determined up to multiplication by a square, i.e. the class of $d(Q)$ in $K^\times/(K^\times)^2$ is well-defined, and depends only on the isomorphism class of $Q$. Finally, $Q$ is itself completely determined up to isomorphism by its rank and $d(Q)$. The same applies to quadratic forms over $k$, and because extension of scalars is determined at the level of matrices by regarding a matrix over $k$ as a matrix over $K$, it is clear that extension of scalars sends a quadratic form $Q$ over $k$ of rank $r$ to a quadratic form $Q'$ over $K$ of rank $r$, with $d(Q')$ being the image in $K^\times/(K^\times)^2$ of $d(Q)$ under the map $k^\times/(k^\times)^2 \to K^\times/(K^\times)^2$. The claim follows immediately. \end{proof} Now our goal in selecting an appropriate extension $K$ of $k$ is to choose an extension with more than 16 elements, but one such that the induced map $k^\times/(k^\times)^2 \to K^\times/(K^\times)^2$ is an isomorphism, so that the map $GW(k) \to GW(K)$ is an isomorphism as well. When $k$ is infinite, no extension need be chosen, so it suffices only to consider finite $k$. Moreover, we are only considering fields of characteristic not equal to 2, so it suffices to consider $k = \mathbb{F}_{p^q}$ for odd $p$. Now recall that there is an extension $\mathbb{F}_{p^{q'}}/\mathbb{F}_{p^q}$ precisely when $q|q'$, and in this case $[\mathbb{F}_{p^{q'}}: \mathbb{F}_{p^q} ] = q'/q$. Moreover, it is clearly always possible to find some $q'$ such that $p^{q'} > 16$ and such that $q'/q$ is odd. Hence our strategy comes down to the following fact: \begin{prop}\label{GW iso} Let $q'/q$ be odd. Then the inclusion $\mathbb{F}_{p^q} \to \mathbb{F}_{p^{q'}}$ induces an isomorphism \[ \mathbb{F}_{p^q}^\times/(\mathbb{F}_{p^q}^\times)^2 \to \mathbb{F}_{p^{q'}}^\times/(\mathbb{F}_{p^{q'}}^\times)^2 \] \end{prop} \noindent As before, this is standard, but we include the proof for completeness: \begin{proof} To check injectivity, first suppose there is some $a \in \mathbb{F}_{p^q}^\times$ representing a non-trivial class in the kernel. Then $a$ does not have a square root in $\mathbb{F}_{p^q}$, but has a square root in $\mathbb{F}_{p^{q'}}$. But then the splitting field of $x^2 - a$ over $\mathbb{F}_{p^q}$ would be an intermediate field between $\mathbb{F}_{p^q}$ and $\mathbb{F}_{p^{q'}}$ whose degree over $\mathbb{F}_{p^q}$ is 2, which is impossible because $[\mathbb{F}_{p^{q'}} : \mathbb{F}_{p^q} ]$ is odd. To check surjectivity, recall that the multiplicative groups $\mathbb{F}_{p^q}^\times$ and $\mathbb{F}_{p^{q'}}^\times$ are cyclic of orders $p^q - 1$ and $p^{q'} - 1$, respectively, and that the former is a subgroup of the latter. Hence, to check that an element of $\mathbb{F}_{p^{q'}}^\times$ is actually an element of $\mathbb{F}_{p^q}^\times$, it suffices to check that it has order dividing $p^q - 1$.
Now consider any $a \in \mathbb{F}_{p^{q'}}^\times$. To show surjectivity, we will check that $a$ can be written as $a = \alpha \beta^{2}$ for some $\alpha \in \mathbb{F}_{p^q}^\times$ and $\beta \in \mathbb{F}_{p^{q'}}^\times$. To do this, set \[ \ell := \frac{p^{q'} - 1}{p^q - 1} = 1 + (p^q) + \cdots + (p^q)^{q'/q - 1}, \] and notice that $a^\ell$ has order dividing $p^q - 1$, and hence is in $\mathbb{F}_{p^q}^\times$. Moreover, because $q'/q$ is odd by assumption, $\ell$ is a sum of an odd number of odd numbers, and hence is odd, say $\ell = 2m + 1$. But now setting $\alpha = a^{\ell}$ and $\beta = a^{-m}$, we have that $\alpha \in \mathbb{F}_{p^q}^\times$ and $a = \alpha \beta^2$. \end{proof} Now consider some field $k$, with characteristic not equal to 2, and with $|k| \leqslant 16$ (such a field is finite, and hence automatically perfect). After choosing an identification $k = \mathbb{F}_{p^q}$, there is a canonical way to choose some $q'$ such that $p^{q'} > 16$ and $q'/q$ is odd, namely we simply pick the smallest such $q'$. We now formalize this: \begin{dfn} Let $p \neq 2$ be prime, and $q$ some positive integer such that $p^q \leqslant 16$. Then $\hat{q}$ denotes the smallest multiple of $q$ such that $\hat{q}/q$ is odd and $p^{\hat{q}} > 16$. \end{dfn} This canonical choice also leads to a canonical extension $\mathbb{F}_{p^{\hat{q}}} / \mathbb{F}_{p^q}$, and thus after choosing an identification $k = \mathbb{F}_{p^q}$, there is a canonical extension $K/k$ such that $k \to K$ induces an isomorphism $GW(k) \to GW(K)$, namely we choose $K = \mathbb{F}_{p^{\hat{q}}}$. Note that the isomorphism $GW(k) \to GW(K)$ is now also canonical, because the inclusion $k \to K$ is canonical. Now let $\hat{X}$ denote the base change of $X$ from $k$ to $K$, $\hat{f}_1$ and $\hat{f}_2$ the base changes of $f_1$ and $f_2$, and $\hat{\sigma}_1 \oplus \hat{\sigma}_2$ the base change of $\sigma_1 \oplus \sigma_2$. Then $Z(\hat{\sigma}_1 \oplus \hat{\sigma}_2)$ is itself the base change of $Z(\sigma_1 \oplus \sigma_2)$. Now let $\pi : Z(\hat{\sigma}_1 \oplus \hat{\sigma}_2) \to Z(\sigma_1 \oplus \sigma_2)$ denote the base change projection. Because $|K| > 16$, there is some section $\hat{s}$ of $\mathcal{O}_{\hat{X}}(1)$ whose zero locus misses $Z(\hat{\sigma}_1 \oplus \hat{\sigma}_2)$, and so we can define an extended version of the twisted Jacobian form: \begin{dfn}[extension of twisted Jacobian form] With the notation as in the preceding paragraphs, consider some $z \in Z(\sigma_1 \oplus \sigma_2)$. We define \[ \widehat{\Tr}_{\kappa(z)/k} \langle \widetilde{\Jac}_z(f_1, f_2; \hat{s}) \rangle := \sum_{y \in \pi^{-1}(z)} \Tr_{\kappa(y)/K} \langle \widetilde{\Jac}_y (\hat{f}_1, \hat{f}_2; \hat{s})\rangle, \] which we regard as an element of $GW(k)$ through the canonical isomorphism $GW(k) \to GW(K)$. \end{dfn} Applying our main result to $\hat{\Sigma} = Z(\hat{f}_1,\hat{f}_2)$, we obtain the following extension directly from the definitions: \begin{thm}[Extension of Main Result when $|k| \leqslant 16$] Let $k = \mathbb{F}_{p^q}$, with $p \neq 2$ and $p^q \leqslant 16$, and consider a general degree 4 del Pezzo surface $\Sigma = Z(f_1, f_2)$ in $\mathbb{P}^4_k$. Choose a section $\hat{s}$ of $\mathcal{O}_{\hat{X}}(1)$ which does not vanish on $Z(\hat{\sigma}_1 \oplus \hat{\sigma}_2)$ (see the preceding paragraphs for notation). Then \[ \sum_{L \in \Lines(\Sigma)} \widehat{\Tr}_{\kappa(L)/k} \langle \widetilde{\Jac}_L(f_1, f_2; \hat{s}) \rangle = 8H. \] \end{thm} \bibliographystyle{plain} \bibliography{bibliography.bib}{} \end{document}
2205.04380v2
http://arxiv.org/abs/2205.04380v2
Automorphisms and real structures for a $Π$-symmetric super-Grassmannian
\documentclass[a4paper]{amsart} \usepackage{amsmath,amsthm,amssymb,latexsym,epic,bbm,comment,color} \usepackage{graphicx,enumerate,stmaryrd} \usepackage[all,2cell]{xy} \xyoption{2cell} \usepackage{mathtools} \usepackage{color} \definecolor{purple}{RGB}{128,0,128} \newcommand{\mik}[1]{{\color{blue}#1}} \newcommand{\mmm}[1]{{\color{magenta}#1}} \newcommand{\liza}[1]{{\color{red}#1}} \def\H{{\mathbb H}} \def\ov{\overline} \def\ii{\textbf{\itshape i}} \def\jj{\textbf{\itshape j}} \def\kk{\textbf{\itshape k}} \def\Stab{{\rm Stab}} \newcommand{\ps}{{\Psi^{\rm st}_{-1}}} \newcommand{\g}{{\mathfrak g}} \newcommand{\Lie}{{\rm Lie}} \newcommand{\PiG}{{\Pi\!\Gr}} \newcommand{\id}{{\rm id}} \usepackage{dsfont} \renewcommand{\mathbb}{\mathds} \newcommand{\Z}{\mathbb Z} \newcommand{\C}{\mathbb C} \newcommand{\R}{{\mathbb R}} \newcommand{\mcA}{\mathcal A} \newcommand{\E}{\mathbb E} \newcommand{\gr}{\mathrm{gr}} \newcommand{\Gr}{\operatorname{Gr}} \newcommand{\GL}{\operatorname{GL}} \newcommand{\Q}{\operatorname{Q}} \newcommand{\PGL}{\operatorname{PGL}} \newcommand{\ord}{\textsf{ord}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem*{theorem*}{Theorem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \usepackage[all]{xy} \usepackage[active]{srcltx} \usepackage[parfill]{parskip} \newcommand{\mcJ}{\mathcal J} \newcommand{\mcM}{\mathcal M} \newcommand{\mcN}{\mathcal N} \newcommand{\mcO}{\mathcal O} \newcommand{\mcE}{\mathcal E} \newcommand{\mcH}{\mathcal H} \newcommand{\al}{\alpha} \newcommand{\tto}{\twoheadrightarrow} \font\sc=rsfs10 \newcommand{\cC}{\sc\mbox{C}\hspace{1.0pt}} \newcommand{\cG}{\sc\mbox{G}\hspace{1.0pt}} \newcommand{\cM}{\sc\mbox{M}\hspace{1.0pt}} \newcommand{\cR}{\sc\mbox{R}\hspace{1.0pt}} \newcommand{\cI}{\sc\mbox{I}\hspace{1.0pt}} \newcommand{\cJ}{\sc\mbox{J}\hspace{1.0pt}} \newcommand{\cS}{\sc\mbox{S}\hspace{1.0pt}} \newcommand{\cH}{\sc\mbox{H}\hspace{1.0pt}} \newcommand{\cT}{\sc\mbox{T}\hspace{1.0pt}} \newcommand{\cD}{\sc\mbox{D}\hspace{1.0pt}} \newcommand{\cL}{\sc\mbox{L}\hspace{1.0pt}} \newcommand{\cP}{\sc\mbox{P}\hspace{1.0pt}} \newcommand{\cA}{\sc\mbox{A}\hspace{1.0pt}} \newcommand{\cB}{\sc\mbox{B}\hspace{1.0pt}} \newcommand{\cU}{\sc\mbox{U}\hspace{1.0pt}} \font\scc=rsfs7 \newcommand{\ccC}{\scc\mbox{C}\hspace{1.0pt}} \newcommand{\ccD}{\scc\mbox{D}\hspace{1.0pt}} \newcommand{\ccP}{\scc\mbox{P}\hspace{1.0pt}} \newcommand{\ccA}{\scc\mbox{A}\hspace{1.0pt}} \newcommand{\ccJ}{\scc\mbox{J}\hspace{1.0pt}} \newcommand{\ccS}{\scc\mbox{S}\hspace{1.0pt}} \newcommand{\ccG}{\scc\mbox{G}\hspace{1.0pt}} \theoremstyle{plain} \newtheorem{prop}{Proposition}[section] \newtheorem{lem}[prop]{Lemma} \newtheorem{thm}[prop]{Theorem} \newtheorem{cor}[prop]{Corollary} \theoremstyle{definition} \newtheorem{subsec}[prop]{} \newtheorem{rem}[prop]{Remark} \newcommand{\M}{{\mathcal M}} \newcommand{\into}{\hookrightarrow} \newcommand{\isoto}{\overset{\sim}{\to}} \newcommand{\onto}{\twoheadrightarrow} \newcommand{\labelto}[1]{\xrightarrow{\makebox[1.5em]{\scriptsize ${#1}$}}} \newcommand{\longisoto}{{\labelto\sim}} \newcommand{\hs}{\kern 0.8pt} \newcommand{\hssh}{\kern 1.2pt} \newcommand{\hshs}{\kern 1.6pt} \newcommand{\hssss}{\kern 2.0pt} \newcommand{\hm}{\kern -0.8pt} \newcommand{\hmm}{\kern -1.2pt} 
\newcommand{\emm}{\bfseries} \newcommand{\mO}{{\mathcal O}} \newcommand{\uprho}{\hs^\rho\hm} \newcommand{\Aut}{{\rm Aut}} \newcommand{\G}{{\Gamma}} \newcommand{\SmallMatrix}[1]{\text{\tiny\arraycolsep=0.4\arraycolsep\ensuremath {\begin{pmatrix}#1\end{pmatrix}}}} \newcommand{\Mat}[1]{\text{\SMALL\arraycolsep=0.4\arraycolsep\ensuremath {\begin{pmatrix}#1\end{pmatrix}}}} \def\H{{\mathbb H}} \def\ov{\overline} \def\ii{\textbf{\itshape i}} \def\jj{\textbf{\itshape j}} \def\kk{\textbf{\itshape k}} \def\Stab{{\rm Stab}} \begin{document} \title[$\Pi$-symmetric super-Grassmannian] {Automorphisms and real structures for\\ a $\Pi$-symmetric super-Grassmannian} \author{Elizaveta Vishnyakova\\ {\Tiny appendix by}\\ Mikhail Borovoi} \begin{abstract} Any complex-analytic vector bundle $\E$ admits naturally defined homotheties $\phi_{\al}$, $\al\in \C^*$, i.e. $\phi_{\al}$ is the multiplication of a local section by a complex number $\al$. We investigate the question when such automorphisms can be lifted to a non-split supermanifold corresponding to $\E$. Further, we compute the automorphism supergroup of a $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$, and, using Galois cohomology, we classify the real structures on $\Pi\!\Gr_{n,k}$ and compute the corresponding supermanifolds of real points. \end{abstract} \date{\today} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} Let $\E$ be a complex-analytic vector bundle over a complex-analytic manifold $M$. There are natural homotheties $\phi_{\al}$, $\al\in \C^*$, defined on local sections as the multiplication by a complex number $\al\ne 0$. Any automorphism $\phi_{\al}: \E\to \E$ may be naturally extended to an automorphism $\wedge \phi_{\al}$ of $\bigwedge\E$. Let $\mcE$ be the locally free sheaf corresponding to $\E$. Then the ringed space $(M,\bigwedge\mcE)$ is a split supermanifold equipped with the supermanifold automorphisms $(id,\wedge \phi_{\al})$, $\al\in \C^*$. Let $\mcM$ be any non-split supermanifold with retract $(M,\bigwedge\mcE)$. We investigate the question whether the automorphism $\wedge \phi_{\al}$ can be lifted to $\mcM$. We show that this question is related to the notion of the order of the supermanifold $\mcM$ introduced in \cite{Rothstein}; see Section \ref{sec Order of a supermanifold}. Let $\M=\Pi\!\Gr_{n,k}$ be a $\Pi$-symmetric super-Grassmannian; see Section \ref{sec charts on Gr} for the definition. We use obtained results to compute the automorphism group $\operatorname{Aut} \mathcal M$ and the automorphism supergroup, given in terms of a super-Harish-Chandra pair. \begin{theorem*}[Theorem \ref{t:Aut}] {\bf (1)} If $\mathcal M = \Pi\!\Gr_{n,k}$, where $n\ne 2k$, then $$ \operatorname{Aut} \mathcal M\simeq \PGL_n(\mathbb C) \times \{\id, \Psi^{st}_{-1} \} . $$ The automorphism supergroup is given by the Harish-Chandra pair $$ ( \PGL_n(\mathbb C) \times \{\id, \Psi^{st}_{-1} \}, \mathfrak{q}_{n}(\mathbb C)/\langle E_{2n}\rangle). $$ {\bf (2)} If $\mathcal M = \Pi\!\Gr_{2k,k}$, where $k\geq 2$, then $$ \operatorname{Aut} \mathcal M\simeq \PGL_{2k}(\mathbb C) \rtimes \{\id, \Theta, \Psi^{st}_{-1}, \Psi^{st}_{-1}\circ \Theta \}, $$ where $\Theta^2 = \Psi^{st}_{-1}$, $\Psi^{st}_{-1}$ is a central element of $\Aut\,\M$, and $\Theta \circ g\circ \Theta^{-1} = (g^t)^{-1}$ for $g\in \PGL_{2k}(\mathbb C)$. 
The automorphism supergroup is given by the Harish-Chandra pair $$ (\PGL_{2k}(\mathbb C) \rtimes \{\id, \Psi^{st}_{-1}, \Theta, \Psi^{st}_{-1}\circ \Theta \}, \mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle), $$ where $\Theta \circ C\circ \Theta^{-1} = - C^{t_i}$ for $C\in \mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle$ and $ \Psi^{st}_{-1}\circ C \circ (\Psi^{st}_{-1})^{-1} = (-1)^{\tilde{C}} C$, where $\tilde C\in\Z/2\Z$ is the parity of $C$. {\bf (3)} If $\mathcal M = \Pi\!\Gr_{2,1}$, then $$ \operatorname{Aut} \mathcal M\simeq \PGL_{2}(\mathbb C) \times \mathbb C^*. $$ The automorphism supergroup is given by the Harish-Chandra pair $$ ( \PGL_{2}(\mathbb C) \times \mathbb C^*, \mathfrak g \rtimes \langle z\rangle). $$ Here $\mathfrak g$ is a $\Z$-graded Lie superalgebra described in Theorem \ref{teor vector fields on supergrassmannians}, and $z$ is the grading operator of $\mathfrak{g}$. The action of $\PGL_{2}(\mathbb C) \times \mathbb C^*$ on $z$ is trivial, and $\phi_{\al}\in \C^*$ multiplies $X\in \mathfrak v(\Pi\!\Gr_{2,1})_k$ by $\al^k$. \end{theorem*} Here $\ps=(\id,\psi^{st}_{-1})\in \operatorname{Aut} \mathcal M$, where $\psi^{st}_{-1}$ is an automorphism of the structure sheaf $\mcO$ of $\mcM$ defined by $\psi^{st}_{-1}(f) = (-1)^{\tilde f} f$ for a homogeneous local section $f$ of $\mcO$, where we denote by $\tilde f\in\Z/2\Z$ the parity of $f$. We denote by $C^{t_i}$ the $i$-transposition of the matrix $C$, see (\ref{eq i transposition}). The automorphism $\Theta$ is constructed in Section \ref{sec construction of Theta}. We denote by $g^t$ the transpose of $g$. We classify the real structures on a $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$ using Galois cohomology. \begin{theorem*}[Theorem \ref{c:Pi}] The number of equivalence classes of real structures $\mu$ on $\mcM$, and representatives of these classes, are given in the list below: \begin{enumerate} \item[\rm (i)] If $n$ is odd, then there are two equivalence classes with representatives $$ \mu^o, \quad (1,\ps)\circ\mu^o. $$ \item[\rm (ii)] If $n$ is even and $n\neq 2k$, then there are four equivalence classes with representatives $$ \mu^o,\quad (1,\ps)\circ\mu^o, \quad (c_J,1)\circ\mu^o, \quad (c_J,\ps)\circ\mu^o. $$ \item[\rm (iii)] If $n=2k\ge 4$, then there are $k+3$ equivalence classes with representatives $$ \mu^o,\quad (c_J,1)\circ\mu^o, \quad (c_r,\Theta)\circ\mu^o, \,\, r= 0,\ldots, k. $$ \item[\rm (iv)] If $(n,k)= (2,1)$, then there are two equivalence classes with representatives $$ \mu^o,\quad (c_J,1)\circ\mu^o. $$ \end{enumerate} Here $\mu^o$ denotes the standard real structure on $\M=\PiG_{n,k}$, see Section \ref{ss:real-structures}. Moreover, $c_J\in\PGL_n(\C)$ and $c_r\in\PGL_{2k}(\C)$ for $r= 0,\ldots, k$ are certain elements constructed in Proposition \ref{p:H1} and Subsection \ref{ss:cp}, respectively. \end{theorem*} Further, we describe the corresponding real subsupermanifolds when they exist. Let $\mu$ be a real structure on $\mcM=\PiG_{n,k}$, and assume that the set of fixed points $ M^{\mu_0}$ is non-empty. Consider the ringed space $\M^{\mu}:= (M^{\mu_0}, \mcO^{\mu^*})$, where $\mcO^{\mu^*}$ is the sheaf of fixed points of $\mu^*$ over $M^{\mu_0}$. Then $\M^{\mu}$ is a real supermanifold. We describe this supermanifold in Theorem \ref{theor real main}.
\textbf{Acknowledgments:} The author was partially supported by Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - Brasil (CAPES) -- Finance Code 001, (Capes-Humboldt Research Fellowship), by FAPEMIG, grant APQ-01999-18, Rede Mineira de Matemática-RMMAT-MG, Projeto RED-00133-21. We thank Peter \linebreak Littelmann for hospitality and the wonderful working atmosphere at the University of Cologne and we thank Dmitri Akhiezer for helpful comments. We also thank Mikhail Borovoi for suggesting to write this paper and for writing the appendix. \section{Preliminaries} \subsection{Supermanifolds} This paper is devoted to the study of the automorphism supergroup of a $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$, and to a classification of real structures on $\Pi\!\Gr_{n,k}$. Details about the theory of supermanifolds can be found in \cite{Bern,Leites,BLMS}. As usual, the superspace $\mathbb C^{n|m}:= \mathbb C^{n}\oplus \mathbb C^{m}$ is a $\Z_2$-graded vector space over $\mathbb C$ of dimension $n|m$. A {\it superdomain in $\mathbb{C}^{n|m}$} is a ringed space $\mathcal U:=(U,\mathcal F_U\otimes \bigwedge (\mathbb C^{m})^*)$, where $U\subset \mathbb C^{n}$ is an open set and $\mathcal F_U$ is the sheaf of holomorphic functions on $U$. If $(x_a)$ is a system of coordinates in $U$ and $(\xi_b)$ is a basis in $(\mathbb C^{m})^*$, we call $(x_a,\xi_b)$ a system of coordinates in $\mathcal U$. Here $(x_a)$ are called even coordinates of $\mathcal U$, while $(\xi_b)$ are called odd ones. A {\it supermanifold} $\mcM = (M,\mathcal{O})$ of dimension $n|m$ is a $\mathbb{Z}_2$-graded ringed space that is locally isomorphic to a super\-domain in $\mathbb{C}^{n|m}$. Here the underlying space $M$ is a complex-analytic manifold. A {\it morphism} $F:(M,\mcO_{\mcM}) \to (N,\mcO_{\mcN})$ of two supermanifolds is, by definition, a morphism of the corresponding $\mathbb{Z}_2$-graded locally ringed spaces. In more detail, $F = (F_{0},F^*)$ is a pair, where $F_{0}:M\to N$ is a holomorphic map and $F^*: \mathcal{O}_{\mathcal N}\to (F_{0})_*(\mathcal{O}_{\mathcal M})$ is a homomorphism of sheaves of $\mathbb{Z}_2$-graded local superalgebras. We see that the morphism $F$ is even, that is, $F$ preserves the $\mathbb{Z}_2$-gradings of the sheaves. A morphism $F: \mcM\to \mcM$ is called an {\it automorphism of $\mcM$} if $F$ is an automorphism of the corresponding $\mathbb{Z}_2$-graded ringed spaces. The automorphisms of $\mcM$ form a group, which we denote by $\operatorname{Aut} \mathcal M$. Note that in this paper we also consider the automorphism supergroup; see the definition below. A supermanifold $\mcM=(M,\mcO)$ is called {\it split} if its structure sheaf is isomorphic to $\bigwedge \mathcal E$, where $\mathcal E$ is a sheaf of sections of a holomorphic vector bundle $\mathbb E$ over $M$. In this case the structure sheaf of $\mcM$ is $\mathbb Z$-graded, not only $\Z_2$-graded. There is a functor assigning to any supermanifold a split supermanifold. Let us briefly recall this construction. Let $\mcM=(M,\mathcal O)$ be a supermanifold. Consider the following filtration in $\mathcal O$ $$ \mathcal O = \mathcal J^0 \supset \mathcal J \supset \mathcal J^2 \supset\cdots \supset \mathcal J^p \supset\cdots, $$ where $\mathcal J$ is the subsheaf of ideals in $\mcO$ locally generated by odd elements of $\mcO$. We define $$ \mathrm{gr} \mathcal M := (M,\mathrm{gr}\mathcal O),\quad \text{where} \quad \mathrm{gr}\mathcal O: = \bigoplus_{p \geq 0} \mathcal J^p/\mathcal J^{p+1}.
$$ The supermanifold $\mathrm{gr} \mathcal M$ is split, and it is called the {\it retract} of $\mcM$. The underlying space of $\mathrm{gr} \mathcal M$ is the complex-analytic manifold $(M,\mathcal O/\mathcal J)$, which coincides with $M$. The structure sheaf $\mathrm{gr}\mathcal O$ is isomorphic to $\bigwedge \mathcal E$, where $\mathcal E= \mathcal J/\mathcal J^{2}$ is a locally free sheaf of $\mathcal O/\mathcal J$-modules on $M$. Further, let $\mcM =(M,\mcO_{\mcM})$ and $\mathcal{N}= (N,\mcO_{\mcN})$ be two supermanifolds, and let $\mathcal J_{\mcM}$ and $\mathcal J_{\mcN}$ be the subsheaves of ideals in $\mcO_{\mcM}$ and $\mcO_{\mcN}$, which are locally generated by odd elements in $\mcO_{\mcM}$ and in $\mcO_{\mcN}$, respectively. Any morphism $F:\mcM \to \mathcal{N}$ preserves these sheaves of ideals, that is, $F^*(\mcJ_{\mcN}) \subset (F_{0})_*(\mathcal{J}_{\mathcal M})$, and more generally $F^*(\mcJ^p_{\mcN}) \subset (F_{0})_*(\mathcal{J}^p_{\mathcal M})$ for any $p$. Therefore $F$ naturally induces a morphism $\mathrm{gr}(F): \mathrm{gr} \mathcal M\to \mathrm{gr} \mathcal N$. Summing up, the functor $\gr$ is defined. \subsection{A classification theorem for supermanifolds}\label{sec A classification theorem} Let $\mathcal M=(M,\mathcal O)$ be a (non-split) supermanifold. Recall that we denoted by $\operatorname{Aut} \mathcal M$ the group of all (even) automorphisms of $\mathcal M$. Denote by $\mathcal{A}ut \mathcal O$ the sheaf of automorphisms of $\mcO$. Consider the following subsheaf of $\mathcal{A}ut \mathcal O$ \begin{align*} \mathcal{A}ut_{(2)} \mathcal O := \{F\in \mathcal{A}ut \mathcal O\,\,|\,\,\, \gr (F) =id\}. \end{align*} This sheaf plays an important role in the classification of supermanifolds, see below. The sheaf $\mathcal{A}ut\mathcal{O}$ has the following filtration \begin{equation*}\mathcal{A}ut \mathcal{O}=\mathcal{A}ut_{(0)}\mathcal{O} \supset \mathcal{A}ut_{(2)}\mathcal{O}\supset \ldots \supset \mathcal{A}ut_{(2p)}\mathcal{O} \supset \ldots , \end{equation*} where $$ \mathcal{A}ut_{(2p)}\mathcal{O} = \{a\in\mathcal{A}ut\mathcal{O}\mid a(u)\equiv u\mod \mathcal{J}^{2p} \,\, \text{for any}\,\,\, u\in \mcO\}. $$ Recall that $\mathcal J$ is the subsheaf of ideals generated by odd elements in $\mathcal O$. Let $\E$ be the bundle corresponding to the locally free sheaf $\mcE=\mcJ/\mcJ^2$ and let $\operatorname{Aut} \E$ be the group of all automorphisms of $\E$. Clearly, any automorphism of $\E$ gives rise to an automorphism of $\gr \mcM$, and thus we get a natural action of the group $\operatorname{Aut} \E$ on the sheaf $\mathcal{A}ut (\gr\mathcal{O})$ by $Int: (a,\delta)\mapsto a\circ \delta\circ a^{-1}$, where $\delta\in\mathcal{A}ut (\gr\mathcal{O})$ and $a\in \operatorname{Aut} \E$. Clearly, the group $\operatorname{Aut} \E$ leaves invariant the subsheaves $\mathcal{A}ut_{(2p)} \gr\mathcal{O}$. Hence $\operatorname{Aut} \E$ acts on the cohomology sets $H^1(M,\mathcal{A}ut_{(2p)} \gr\mathcal{O})$. The unit element $\epsilon\in H^1(M,\mathcal{A}ut_{(2p)} \gr\mathcal{O})$ is fixed under this action. We denote by $H^1(M,\mathcal{A}ut_{(2p)}\gr\mathcal{O})/ \operatorname{Aut} \E$ the set of orbits of the action in $H^1(M,\mathcal{A}ut_{(2p)}\gr\mathcal{O})$ induced by $Int$. Denote by $[\mcM]$ the class of supermanifolds which are isomorphic to $\mcM= (M,\mcO)$. (Here we consider complex-analytic supermanifolds up to isomorphisms inducing the identical isomorphism of the base spaces.) The following theorem was proved in \cite{Green}.
\begin{theorem}[{\bf Green}]\label{Theor_Green} Let $(M,\bigwedge \mcE)$ be a fixed split supermanifold. Then $$ \begin{array}{c} \{[\mcM ] \mid \gr\mathcal{O} \simeq\bigwedge \mcE\} \stackrel{1:1}{\longleftrightarrow} H^1(M,\mathcal{A}ut_{(2)}\gr\mathcal{O})/ \operatorname{Aut} \E. \end{array} $$ The split supermanifold $(M,\bigwedge \mcE)$ corresponds to the fixed point $\epsilon$. \end{theorem} \subsection{Tangent sheaf of $\mcM$ and $\gr \mcM$} Let again $\mathcal M=(M,\mathcal O)$ be a (non-split) supermanifold. The {\it tangent sheaf} of a supermanifold $\mcM$ is by definition the sheaf $\mathcal T = \mathcal{D}er\mcO$ of derivations of the structure sheaf $\mcO$. Sections of the sheaf $\mathcal T$ are called {\it holomorphic vector fields} on $\mcM$. The vector superspace $\mathfrak v(\mcM) = H^0(M, \mathcal T)$ of all holomorphic vector fields is a complex Lie superalgebra with the bracket $$ [X,Y]= X\circ Y- (-1)^{\tilde X\tilde Y} Y\circ X,\quad X,Y\in \mathfrak v(\mcM), $$ where $\tilde Z$ is the parity of an element $Z\in \mathfrak v(\mcM)$. The Lie superalgebra $\mathfrak v(\mcM)$ is finite dimensional if $M$ is compact. Let $\dim \mcM=n|m$. The tangent sheaf $\mathcal T$ possesses the following filtration: $$ \mathcal T=\mathcal T_{(-1)} \supset \mathcal T_{(0)} \supset \mathcal T_{(1)} \supset \cdots \supset \mathcal T_{(m)} \supset \mathcal T_{(m+1)}=0, $$ where $$ \mathcal T_{(p)} = \{ v\in \mathcal T \,\,|\,\, v(\mcO) \subset \mcJ^p,\,\, v(\mcJ) \subset \mcJ^{p+1} \},\quad p\geq 0. $$ Denote by $\mathcal T_{\gr}$ the tangent sheaf of the retract $\gr \mcM$. Since the structure sheaf $\gr \mcO$ of $\gr \mcM$ is $\Z$-graded, the sheaf $\mathcal T_{\gr}$ has the following induced $\Z$-grading $$ \mathcal T_{\gr} = \bigoplus_{p\geq -1} (\mathcal T_{\gr})_{p}, $$ where $$ (\mathcal T_{\gr})_{p}= \{\, v\in \mathcal T_{\gr} \,\,|\,\, v(\gr\mcO_q) \subset \gr\mcO_{q+p}\,\, \text{for any}\,\, q\in \mathbb Z \}. $$ We have the following exact sequence of sheaves of groups \begin{equation}\label{eq exact sequence} e \to \mathcal{A}ut_{(2p+2)}\mathcal{O} \to \mathcal{A}ut_{(2p)}\mathcal{O} \to (\mathcal T_{\gr})_{2p}\to 0 \end{equation} for any $p\geq 1$, see \cite{Rothstein}. More details about this sequence can also be found in \cite[Proposition 3.1]{COT}. \subsection{Order of a supermanifold}\label{sec Order of a supermanifold} Let again $\mathcal M=(M,\mathcal O)$ be a (non-split) supermanifold. According to Theorem \ref{Theor_Green}, a supermanifold corresponds to an element $[\gamma]\in H^1(M,\mathcal{A}ut_{(2)}\gr\mathcal{O})/ \operatorname{Aut} \E$. Furthermore, for any $p\geq 1$ we have the following natural embedding of sheaves $$ \mathcal{A}ut_{(2p)}\mathcal{O} \hookrightarrow \mathcal{A}ut_{(2)} \mathcal{O}, $$ which induces a map of $1$-cohomology sets $$ H^1(M,\mathcal{A}ut_{(2p)}\mathcal{O}) \to H^1(M, \mathcal{A}ut_{(2)} \mathcal{O}). $$ (Note that our sheaves are not abelian.) Denote by $H_{2p}$ the image of $H^1(M,\mathcal{A}ut_{(2p)}\mathcal{O})$ in $H^1(M, \mathcal{A}ut_{(2)} \mathcal{O})$. We get the following $\operatorname{Aut} \E$-invariant filtration \begin{align*} H^1(M, \mathcal{A}ut_{(2)} \mathcal{O})= H_{2} \supset H_{4} \supset H_{6} \supset \cdots . \end{align*} Let $\gamma \in [\gamma]$ be any representative. As in \cite{Rothstein} we define the order $o(\gamma)$ of the cohomology class $\gamma\in H^1(M, \mathcal{A}ut_{(2)} \mathcal{O})$ to be the maximal number $2p$ such that $\gamma\in H_{2p}$.
The order of the supermanifold $\mcM$ is by definition the order of the corresponding cohomology class $\gamma$. We put $o(\mcM):=\infty$ if $\mcM$ is a split supermanifold. \subsection{The automorphism supergroup of a complex-analytic compact supermanifold} Let us recall a description of a Lie supergroup in terms of a super-Harish-Chandra pair. A {\it Lie supergroup} $\mathcal G$ is a group object in the category of supermanifolds, see for example \cite{Vish_funk,V} for details. Any Lie supergroup can be described using a super-Harish-Chandra pair, see \cite{Bern} and also \cite{BCC,V}, due to the following theorem; see \cite{V} for the complex-analytic case. \begin{theorem}\label{theor Harish-Chandra} The category of complex Lie supergroups is equivalent to the category of complex super Harish-Chandra pairs. \end{theorem} A {\it complex super Harish-Chandra pair} is a pair $(G,\mathfrak{g})$ that consists of a complex-analytic Lie group $G$ and a Lie superalgebra $\mathfrak{g}=\mathfrak{g}_{\bar 0}\oplus\mathfrak{g}_{\bar 1}$ over $\mathbb C$, where $\mathfrak{g}_{\bar 0}=\Lie (G)$, endowed with a representation $\operatorname{Ad}: G\to \operatorname{Aut} \mathfrak{g}$ of $G$ in $\mathfrak{g}$ such that \begin{itemize} \item $\operatorname{Ad}$ preserves the parity and induces the adjoint representation of $G$ in $\mathfrak{g}_{\bar 0}$, \item the differential $(\operatorname{d} \operatorname{Ad})_e$ at the identity $e\in G$ coincides with the adjoint representation $\operatorname{ad}$ of $\mathfrak g_{\bar 0}$ in $\mathfrak g$. \end{itemize} Super Harish-Chandra pairs form a category. (The definition of a morphism is natural; see \cite{Bern} or \cite{V}.) A supermanifold $\mcM=(M,\mcO)$ is called compact if its base space $M$ is compact. If $\mcM$ is a compact complex-analytic supermanifold, the Lie superalgebra of vector fields $\mathfrak v(\mcM)$ is finite dimensional. For a compact complex-analytic supermanifold $\mcM$ we define the {\it automorphism supergroup} as the super-Harish-Chandra pair \begin{equation}\label{eq def of automorphism supergroup} (\operatorname{Aut} \mcM, \mathfrak v(\mcM)). \end{equation} \section{Super-Grass\-mannians and $\Pi$-symmetric super-Grassmannians}\label{sec charts on Gr} \subsection{Complex-analytic super-Grass\-mannians and complex-analytic\\ $\Pi$-symmetric super-Grassmannians}\label{sec def of a supergrassmannian} A super-Grassmannian $\Gr_{m|n,k|l}$ is the supermanifold that parameterizes all $k|l$-dimen\-sional linear subsuperspaces in $\mathbb C^{m|n}$. Here $k\leq m$, $l\leq n$ and $k+l< m+n$. The underlying space of $\Gr_{m|n,k|l}$ is the product of two usual Grassmannians $\Gr_{m,k}\times \Gr_{n,l}$. The structure of a supermanifold on $\Gr_{m|n,k|l}$ can be defined in the following way. Consider the following $(m+n)\times (k+l)$-matrix $$ \mathcal L=\left( \begin{array}{cc} A & B\\ C&D\\ \end{array} \right). $$ Here $A=(a_{ij})$ is an $(m\times k)$-matrix, whose entries $a_{ij}$ can be regarded as (even) coordinates in the domain of all complex $(m\times k)$-matrices of rank $k$. Similarly $D=(d_{sr})$ is an $(n\times l)$-matrix, whose entries $d_{sr}$ can be regarded as (even) coordinates in the domain of all complex $(n\times l)$-matrices of rank $l$. Further, $B=(b_{pq})$ and $C=(c_{uv})$ are $(m\times l)$ and $(n\times k)$-matrices, respectively, whose entries $b_{pq}$ and $c_{uv}$ can be regarded as generators of a Grassmann algebra.
The matrix $\mathcal L$ determines the following open subsuperdomain in $\mathbb C^{mk+nl|ml+nk}$ $$ \mathcal V =(V,\mathcal F_V\otimes \bigwedge (b_{pq},c_{uv})), $$ where $V$ is the product of the domain of complex $(m\times k)$-matrices of rank $k$ and the domain of complex $(n\times l)$-matrices of rank $l$, $\mathcal F_V$ is the sheaf of holomorphic functions on $V$ and $\bigwedge (b_{pq},c_{uv})$ is the Grassmann algebra with generators $(b_{pq},c_{uv})$. Let us define an action $\mu:\mathcal V\times \GL_{k|l}(\mathbb C) \to \mathcal V$ of the Lie supergroup $\GL_{k|l}(\mathbb C)$ on $\mathcal V$ on the right in the natural way, that is, by matrix multiplication. The quotient space under this action is called the {\it super-Grassmannian} $\Gr_{m|n,k|l}$. Now consider the case $m=n$. A {\it $\Pi$-symmetric super-Grassmannian} $\Pi\!\Gr_{n,k}$ is a subsupermanifold in $\Gr_{n|n,k|k}$, which is invariant under the odd involution $\Pi: \mathbb C^{n|n}\to \mathbb C^{n|n}$, see below. Let us describe $\Gr_{m|n,k|l}$ and $\Pi\!\Gr_{n,k}$ using charts and local coordinates \cite{Manin}. First of all, let us recall a construction of an atlas for the usual Grassmannian $\Gr_{m,k}$. Let $e_1,\ldots, e_m$ be the standard basis in $\mathbb C^m$. Consider a complex $(m\times k)$-matrix $C=(c_{ij})$, where $i=1,\ldots, m$ and $j=1,\ldots, k$, of rank $k$. Such a matrix determines a $k$-dimensional subspace $W$ in $\mathbb C^m$ with basis $\sum\limits_{i=1}^mc_{i1}e_i,\ldots, \sum\limits_{i=1}^mc_{ik}e_i$. Let $I\subset\{1,\ldots,m\}$ be a subset of cardinality $k$ such that the square submatrix $L=(c_{ij})$, $i\in I$ and $j=1,\ldots, k$, of $C$ is non-degenerate. (There exists such a subset since $C$ is of rank $k$.) Then the matrix $C':= C\cdot L^{-1}$ determines the same subspace $W$ and contains the identity submatrix $E_k$ in the lines with numbers $i\in I$. Let $U_I$ denote the set of all $(m\times k)$-complex matrices $C'$ with the identity submatrix $E_k$ in the lines with numbers $i\in I$. Any point $x\in U_I$ determines a $k$-dimensional subspace $W_x$ in $\mathbb C^m$ as above; moreover, if $x_1,x_2\in U_I$, $x_1\ne x_2$, then $W_{x_1}\ne W_{x_2}$. Therefore, the set $U_I$ is a subset of $\Gr_{m,k}$. We can verify that $U_I$ is open in the natural topology on $\Gr_{m,k}$ and that it is homeomorphic to $\mathbb C^{(m-k)k}$. Therefore $U_I$ can be regarded as a chart on $\Gr_{m,k}$. Further, any $k$-dimensional vector subspace in $\mathbb C^m$ is contained in some $U_J$ for a subset $J\subset\{1,\ldots,m\}$ of cardinality $|J|=k$. Hence the collection $\{U_I\}_{|I| =k}$ is an atlas on $\Gr_{m,k}$. Now we are ready to describe an atlas $\mathcal A$ on $\Gr_{m|n,k|l}$. Let $I=(I_{\bar 0},I_{\bar 1})$ be a pair of sets, where $$ I_{\bar 0}\subset\{1,\ldots,m\}\quad \text{and} \quad I_{\bar 1}\subset\{1,\ldots,n\}, $$ with $|I_{\bar 0}| = k$ and $|I_{\bar 1}| = l$. As above, to such an $I$ we can assign a chart $U_{I_{\bar 0}} \times U_{I_{\bar 1}}$ on $\Gr_{m,k}\times \Gr_{n,l}$. Let $\mathcal A = \{\mathcal U_{I}\}$ be a family of superdomains parametrized by $I=(I_{\bar 0},I_{\bar 1})$, where $$ \mathcal U_I:= (U_{I_{\bar 0}}\times U_{I_{\bar 1}}, \mathcal F_{U_{I_{\bar 0}}\times U_{I_{\bar 1}}}\otimes \bigwedge ((m-k)l+ (n-l)k)). $$ Here $\bigwedge (r)$ is a Grassmann algebra with $r$ generators and $\mathcal F_{U_{I_{\bar 0}}\times U_{I_{\bar 1}}}$ is the sheaf of holomorphic functions on $U_{I_{\bar 0}}\times U_{I_{\bar 1}}$. Let us describe the superdomain $\mathcal U_I$ in a different way.
First of all assume for simplicity that $I_{\bar 0}=\{m-k+1,\ldots, m\}$, $I_{\bar 1}=\{n-l+1,\ldots, n\}$. Consider the following matrix
$$
\mathcal Z_{I} =\left( \begin{array}{cc} X&\Xi\\ E_{k}&0\\ H&Y\\0&E_{l}\end{array} \right),
$$
where $E_{s}$ is the identity matrix of size $s$. We assume that the entries of $X=(x_{ij})$ and $Y=(y_{rs})$ are coordinates in the domain $U_{I_{\bar 0}}$ and the domain $U_{I_{\bar 1}}$, respectively. We also assume that the entries of $\Xi=(\xi_{ab})$ and of $H=(\eta_{cd})$ are generators of the Grassmann algebra $\bigwedge ((m-k)l+ (n-l)k)$. We see that the matrix $\mathcal Z_I$ determines a superdomain
$$
\mathcal U_I:= (U_{I_{\bar 0}}\times U_{I_{\bar 1}}, \mathcal F_{U_{I_{\bar 0}}\times U_{I_{\bar 1}}}\otimes \bigwedge (\xi_{ab},\eta_{cd}))
$$
with even coordinates $x_{ij}$ and $y_{rs}$, and odd coordinates $\xi_{ab}$ and $\eta_{cd}$.

Let us describe $\mathcal U_I$ for any $I=(I_{\bar 0},I_{\bar 1})$. Consider the following $(m+n)\times (k+l)$-matrix
$$
\mathcal Z_{I} =\left( \begin{array}{cc} X'&\Xi'\\ H'&Y'\\ \end{array} \right).
$$
Here the blocks $X'$, $Y'$, $\Xi'$ and $H'$ are of size $m\times k$, $n\times l$, $m\times l$ and $n\times k$, respectively. We assume that this matrix contains the identity submatrix in the lines with numbers $i\in I_{\bar 0}$ and $i\in \{m+j\,\, |\,\, j\in I_{\bar 1}\} $. Further, non-trivial entries of $X'$ and $Y'$ can be regarded as coordinates in $U_{I_{\bar 0}}$ and $U_{I_{\bar 1}}$, respectively, and non-trivial entries of $\Xi'$ and $H'$ are identified with generators of the Grassmann algebra $\bigwedge ((m-k)l+ (n-l)k)$, see the definition of $\mathcal U_I$. Summing up, we have obtained another description of $\mathcal U_I$.

The last step is to define the transition functions in $\mathcal U_I\cap \mathcal U_J$. To do this we need the matrices $\mathcal Z_I$ and $\mathcal Z_J$. We put $\mathcal Z_{J} =\mathcal Z_{I}C_{IJ}^{-1}$, where $C_{IJ}$ is an invertible submatrix in $\mathcal Z_{I}$ that consists of the lines with numbers $i\in J_{\bar 0}$ and $m + i,$ where $i\in J_{\bar 1}$. This equation gives us a relation between the coordinates of $\mathcal U_I$ and $\mathcal U_J$, in other words the transition functions in $\mathcal U_I\cap \mathcal U_J$. The supermanifold obtained by gluing these charts together is called the super-Grassmannian $\Gr_{m|n,k|l}$. The supermanifold $\Pi\!\Gr_{n,k}$ is the subsupermanifold in $\Gr_{n|n,k|k}$ given in $\mathcal Z_I$ by the equations $X'=Y'$ and $\Xi'=H'$.

We can also define $\Pi\!\Gr_{n,k}$ as the set of all fixed points of the automorphism of $\Gr_{n|n,k|k}$ induced by an odd linear involution $\Pi:\mathbb C^{n|n}\to \mathbb C^{n|n}$, given by
$$
\left( \begin{array}{cc} 0&E_n\\ E_n&0\\ \end{array} \right) \left( \begin{array}{c} V\\ W\\ \end{array} \right) = \left( \begin{array}{c} W\\ V\\ \end{array} \right),
$$
where $\left( \begin{array}{c} V\\ W\\ \end{array} \right)$ is the column of right coordinates of a vector in $\mathbb C^{n|n}$. In our charts $\Pi\!\Gr_{n,k}$ is defined by the following equation
$$
\left( \begin{array}{cc} 0&E_n\\ E_n&0\\ \end{array} \right) \left( \begin{array}{cc} X&\Xi\\ H&Y\\ \end{array} \right) \left( \begin{array}{cc} 0&E_k\\ E_k&0\\ \end{array} \right) = \left( \begin{array}{cc} X&\Xi\\ H&Y\\ \end{array} \right),
$$
or equivalently,
$$
X= Y,\quad H=\Xi.
$$
An atlas $\mathcal A^{\Pi}$ on $\Pi\!\Gr_{n,k}$ contains local charts $\mathcal U_I^{\Pi}$ parameterized by $I\subset \{ 1,\ldots, n\}$ with $|I|=k$.
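As an illustration, and only for orientation, consider the smallest case $n=2$, $k=1$. Then the atlas $\mathcal A^{\Pi}$ consists of the two charts $\mathcal U^{\Pi}_{\{1\}}$ and $\mathcal U^{\Pi}_{\{2\}}$; for instance, the chart $\mathcal U^{\Pi}_{\{2\}}$ is described by the matrix
$$
\mathcal Z_{\{2\}}=\left( \begin{array}{cc}
x&\xi\\
1&0\\
\xi&x\\
0&1\\
\end{array} \right)
$$
with one even coordinate $x$ and one odd coordinate $\xi$. In particular, $\dim \Pi\!\Gr_{2,1}=(1|1)$.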
The retract $\gr\Pi\!\Gr_{n,k}$ of $\Pi\!\Gr_{n,k}$ is isomorphic to $(\Gr_{n,k}, \bigwedge \Omega)$, where $\Omega$ is the sheaf of $1$-forms on $\Gr_{n,k}$. More information about super-Grassmannians and $\Pi$-symmetric super-Grassmannians can be found in \cite{Manin}, see also \cite{COT,Vish_Pi sym}.

\subsection{$\Pi$-symmetric super-Grassmannians over $\mathbb R$ and $\mathbb H$}\label{symmetric super-Grassmannians over R and H}
We will also consider $\Pi$-symmetric super-Grassmannians $\Pi\!\Gr_{n,k}(\mathbb R)$ and $\Pi\!\Gr_{n,k}(\mathbb H)$ over $\mathbb R$ and $\mathbb H$. These supermanifolds are defined in a similar way to $\Pi\!\Gr_{n,k}$, assuming that all coordinates are real or quaternionic. In more detail, to define $\Pi\!\Gr_{n,k}(\mathbb R)$ we just repeat the construction of local charts and transition functions above assuming that we work over $\mathbb R$. The case of $\Pi\!\Gr_{n,k}(\mathbb H)$ is slightly more complicated. Indeed, we consider charts $\mathcal Z_I$ as above with even and odd coordinates $X=(x_{ij})$ and $\Xi= (\xi_{ij})$, respectively, where by definition
$$
x_{ij}:= \left(\begin{array}{cc} x^{ij}_{11}& x^{ij}_{12}\\ -\bar x^{ij}_{12}& \bar x^{ij}_{11} \end{array} \right),\quad \xi_{ij}:=\left(\begin{array}{cc} \xi_{11}^{ij}& \xi^{ij}_{12}\\ -\bar \xi^{ij}_{12}& \bar \xi^{ij}_{11} \end{array} \right).
$$
Here $x^{ij}_{ab}$ are even complex variables and $\bar x^{ij}_{ab}$ is the complex conjugation of $x^{ij}_{ab}$. Further, any $\xi_{ab}^{ij}$ is an odd complex variable and $\bar\xi_{ab}^{ij}$ is its complex conjugation. (Recall that a complex conjugation of a complex odd variable $\eta=\eta_1+i\eta_2$ is $\bar \eta :=\eta_1-i\eta_2$, where $\eta_i$ is a real odd variable.) To obtain $\Pi\!\Gr_{n,k}(\mathbb H)$ we repeat step by step the construction above.

\subsection{The order of a $\Pi$-symmetric super-Grassmannian}
We start this subsection with the following theorem proved in \cite[Theorem 5.1]{COT}.

\begin{theorem}\label{theor PiGr is splitt iff}
A $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$ is split if and only if $(n,k) = (2,1)$.
\end{theorem}

From \cite[Theorem 4.4]{COT} it follows that for the $\Pi$-symmetric super-Grassmannian $\mcM= \Pi\!\Gr_{n,k}$ we have $H^1(M, (\mathcal T_{\gr})_{p})=\{0\}$ for $p\geq 3$. This implies the following statement.

\begin{proposition}\label{prop o(PiGR)}
A $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$ is a supermanifold of order $2$ for $(n,k)\ne (2,1)$. The order of $\Pi\!\Gr_{2,1}$ is $\infty$, since this supermanifold is split.
\end{proposition}

\begin{proof}
To show the statement consider the exact sequence (\ref{eq exact sequence}) for $\mcM = \Pi\!\Gr_{n,k}$ and the corresponding exact sequence of cohomology sets
\begin{align*}
\to H^1(M,\mathcal{A}ut_{(2p+2)}\mathcal{O} )\to H^1(M,\mathcal{A}ut_{(2p)}\mathcal{O}) \to H^1(M, (\mathcal T_{\gr})_{2p}) \to .
\end{align*}
Since $H^1(M, (\mathcal T_{\gr})_{p})=\{0\}$ for $p\geq 3$, see \cite[Theorem 4.4]{COT}, and $\mathcal{A}ut_{(2q)}\mathcal{O} = \id$ for sufficiently large $q$, we have by induction $H^1(M, \mathcal{A}ut_{(2q)}\mathcal{O}) =\{\epsilon\}, \,\,\, q\geq 2.$ Therefore $H_{2p}=\{ \epsilon\}$ for $p\geq 2$. Since, by Theorem \ref{theor PiGr is splitt iff}, the $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{n,k}$ is not split for $(n,k)\ne (2,1)$, the cohomology class $\gamma$ corresponding to $\Pi\!\Gr_{n,k}$ with $(n,k)\ne (2,1)$ is not trivial. Therefore, $\gamma\in H_2\setminus H_4 = H_2\setminus \{\epsilon\} $.
This completes the proof.
\end{proof}

\section{Lifting of homotheties on a non-split supermanifold}

\subsection{Lifting of an automorphism in terms of Green's cohomology}
On any vector bundle $\E$ over $M$ we can define a natural automorphism $\phi_{\alpha}$, where $\alpha\in \mathbb C^*=\mathbb C\setminus \{0\}$. In more detail, $\phi_{\alpha}$ multiplies any local section by the complex number $\al$. Let $r$ be the minimal positive integer $k$ such that $\alpha^k=1$. The number $r$ is called the {\it order} $\textsf{ord}(\phi_{\al})$ of the automorphism $\phi_{\alpha}$. If such a number does not exist we put $\textsf{ord}(\phi_{\alpha}) = \infty$. In this section we study the possibility of lifting $\phi_{\alpha}$ to a non-split supermanifold corresponding to $\E$.

The possibility of lifting an automorphism (or an action of a Lie group) to a non-split supermanifold was studied in \cite{Oni_lifting}, see also \cite[Proposition 3.1]{Bunegina} for a proof of a particular case. In particular, the following result was obtained there. Denote by $\underline{\operatorname{Aut}} \E$ the group of automorphisms of $\E$, which are not necessarily identical on $M$. Clearly, we have $\operatorname{Aut}\E\subset \underline{\operatorname{Aut}} \E$.

\begin{proposition}\label{prop lift of gamma}
Let $\gamma\in H^1(M,\mathcal{A}ut_{(2)}\gr\mathcal{O})$ be a Green cohomology class of $\mathcal M$. Then ${\sf B}\in \underline{\operatorname{Aut}} \E$ lifts to $\mathcal M$ if and only if for the induced map in the cohomology group we have ${\sf B}(\gamma)=\gamma$.
\end{proposition}

Consider the case ${\sf B}= \phi_{\al}$ in detail. Let us choose an acyclic covering $\mathcal U = \{U_{a}\}_{a\in I}$ of $M$. Then by the Leray theorem, we have an isomorphism $H^1(M,\mathcal{A}ut_{(2)}\gr\mathcal{O}) \simeq H^1(\mathcal U,\mathcal{A}ut_{(2)}\gr\mathcal{O})$, where $H^1(\mathcal U,\mathcal{A}ut_{(2)}\gr\mathcal{O})$ is the \v{C}ech 1-cohomology set corresponding to $\mathcal U$. Let $(\gamma_{ab})$ be a \v{C}ech cocycle representing $\gamma$ with respect to this isomorphism. Then
\begin{align*}
\gamma = \phi_{\al}(\gamma) \,\,\, \Longleftrightarrow \,\,\, \gamma_{ab}= u_{a} \circ \phi_{\al}(\gamma_{ab}) \circ u_{b}^{-1} = u_{a} \circ\phi_{\al} \circ \gamma_{ab} \circ \phi^{-1}_{\al} \circ u_{b}^{-1},
\end{align*}
where $u_{c}\in \mathcal{A}ut_{(2)}\gr\mathcal{O} (U_c)$. In Theorem \ref{theor main} we will show that we can always find a \v{C}ech cocycle $(\gamma_{ab})$ representing the cohomology class $\gamma$ such that
\begin{equation}\label{eq cocycle exact form}
\gamma_{ab}= \phi_{\al}(\gamma_{ab}) = \phi_{\al} \circ \gamma_{ab} \circ \phi^{-1}_{\al}.
\end{equation}

\subsection{Natural gradings in a superdomain}\label{sec aut theta}
Let us consider a superdomain $\mathcal U:= (U, \mcO)$, where $\mcO= \mathcal F\otimes \bigwedge(\xi_1,\ldots, \xi_m)$ and $\mathcal F$ is the sheaf of holomorphic functions on $U$, with local coordinates $(x_a, \xi_b)$. For any $\al\in \mathbb C^*$ we define an automorphism $\theta_{\al}: \mcO\to \mcO$ of order $r= \textsf{ord}(\theta_{\al})$ given by $\theta_{\al} (x_a) =x_a$ and $\theta_{\al} (\xi_b) = \al\xi_b $. Clearly $\theta_{\al}$ defines the following $\mathbb Z_{r}$-grading (or $\Z$-grading if $r=\infty$) in $\mcO$:
\begin{equation}\label{eq decomposition al}
\mcO= \bigoplus_{\tilde k\in \mathbb Z_{r}} \mcO^{\tilde k}, \quad \text{where}\quad \mcO^{\tilde k} = \{f\in \mcO \,\,|\,\, \theta_{\al}(f) = \al^{\tilde k} f \}.
\end{equation}
If $r=2$, the decomposition (\ref{eq decomposition al}) coincides with the standard decomposition of $\mcO=\mcO_{\bar 0}\oplus \mcO_{\bar 1}$ into even and odd parts
$$
\mcO_{\bar 0} = \mcO^{\tilde 0}, \quad \mcO_{\bar 1} = \mcO^{\tilde 1}.
$$

\subsection{Lifting of an automorphism $\phi_{\al}$, local picture}\label{sec Automorphism psi_al}
Let $\E$ be a vector bundle, $\mcE$ be the sheaf of sections of $\E$, $(M,\bigwedge\mcE)$ be the corresponding split supermanifold, and $\mcM=(M,\mcO)$ be a (non-split) supermanifold with the retract $\gr\mcM\simeq (M,\bigwedge\mcE)$. Recall that the automorphism $\phi_{\alpha}$ of $\E$ multiplies any local section of $\E$ by the complex number $\al$. We say that $\psi_{\al}\in H^0(M, \mathcal{A}ut\mathcal{O})$ is a {\it lift} of $\phi_{\al}$ if $\gr(id,\psi_{\al})= (id,\wedge\phi_{\al})$.

Let $\mathcal B=\{\mathcal V_{a}\}$ be any atlas on $\mcM$ and let $\mathcal V_{a}\in \mathcal B$ be a chart with even and odd coordinates $(x_i,\xi_j)$, respectively. In any such $\mathcal V_{a}\in \mathcal B$ we can define an automorphism $\theta_{\al}^a = \theta_{\al}^a (\mathcal V_{a})$ as in Section \ref{sec aut theta} depending on $\mathcal V_{a}$. That is, $\theta^a_{\al}(x_i)=x_i$ and $\theta^a_{\al}(\xi_j)=\al\xi_j$.

\begin{proposition}\label{prop new coordinates}
Let $\psi_{\alpha}$ be a lift of the automorphism $\phi_{\alpha}$ of order $r= \textsf{ord}(\phi_{\alpha})$.
\begin{enumerate}
\item If $r$ is even, then there exists an atlas $\mathcal A=\{\mathcal U_{a}\}$ on $\mcM$ with local coordinates $(x^{a}_i,\xi^{a}_j)$ in $\mathcal U_{a}=(U_{a}, \mcO|_{U_a})$ such that
$$
\theta_{\al}^a(\psi_{\alpha} (x_i^{a})) = \psi_{\alpha} (x_i^{a}), \quad \theta_{\al}^a (\psi_{\alpha} (\xi_k^{a})) = \alpha \psi_{\alpha} (\xi_k^{a}).
$$
\item If $r>1$ is odd or if $r=\infty$, then there exists an atlas $\mathcal A=\{\mathcal U_{a}\}$ on $\mcM$ with local coordinates $(x^{a}_i,\xi^{a}_j)$ in $\mathcal U_{a}=(U_{a}, \mcO|_{U_a})$ such that
$$
\psi_{\alpha} (x_i^{a}) = x_i^{a} ,\quad \psi_{\alpha} (\xi_j^{a}) = \al \xi_j^{a}.
$$
\end{enumerate}
\end{proposition}

\begin{proof}
Let $\mathcal A$ be any atlas on $\mcM$ and let us fix a chart $\mathcal U\in \mathcal A$ with coordinates $(x_i,\xi_j)$. In local coordinates any lift $\psi_{\alpha}$ of $\phi_{\alpha}$ can be written in the following form
\begin{align*}
\psi_{\alpha}(x_i) = x_i + F_{2}+F_4+\cdots;\quad \psi_{\alpha}(\xi_j) = \alpha (\xi_j + G_3+ G_5+\cdots),
\end{align*}
where $F_s=F_s(x_i,\xi_j)$ is a homogeneous polynomial in variables $\{\xi_j\}$ of degree $s$, and the same for $G_q=G_q(x_i,\xi_j)$ for odd $q$. We note that
$$
\psi_{\alpha}(F_{s})\in\alpha^s F_{s}+\mcJ^{s+1}, \quad \psi_{\alpha}(G_{q})\in\alpha^q G_{q}+\mcJ^{q+1}
$$
for any even $s$ and odd $q$. The idea of the proof is to use successively the following coordinate change
\begin{equation}\label{eq change x'= x+, xi'=xi}
\begin{split}
&(I)\quad x'_i= x_i+ \frac{1}{1-\alpha^{2p}} F_{2p}(x_i,\xi_j),\quad \xi'_j = \xi_j;\\
&(II)\quad x''_i= x'_i,\quad \xi''_j = \xi'_j+ \frac{1}{1-\alpha^{2p}} G_{2p+1}(x'_i,\xi'_j),
\end{split}
\end{equation}
where $p=1,2,3,\ldots$, in the following way. If $r=2$ there is nothing to check. If $r>2$, first of all we apply (\ref{eq change x'= x+, xi'=xi})(I) and (\ref{eq change x'= x+, xi'=xi})(II) successively for $p=1$.
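(To illustrate the effect of (\ref{eq change x'= x+, xi'=xi})(I) in the simplest possible situation -- this computation is only an illustration and is not used below -- suppose that there are one even coordinate $x$ and two odd coordinates $\xi_1,\xi_2$, and that $\psi_{\alpha}(x) = x + c\,\xi_1\xi_2$, $\psi_{\alpha}(\xi_j) = \al\xi_j$, where $c\in \mathbb C$ is a constant and $\al^2\ne 1$. Then for $x' = x + \frac{c}{1-\al^{2}}\,\xi_1\xi_2$ we get
$$
\psi_{\alpha}(x') = x + c\,\xi_1\xi_2 + \frac{c\,\al^{2}}{1-\al^{2}}\,\xi_1\xi_2 = x + \frac{c}{1-\al^{2}}\,\xi_1\xi_2 = x',
$$
so the quadratic term is removed; in the general case it is removed only modulo $\mcJ^{3}$.)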
After coordinate changes (\ref{eq change x'= x+, xi'=xi})(I) we have
\begin{align*}
\psi_{\alpha} (x'_i) = \psi_{\alpha} (x_i+ \frac{1}{1-\alpha^2} F_2) = x_i + F_2 + \frac{\alpha^2}{1-\alpha^2} F_2 +\cdots=\\
x_i + \frac{1}{1-\alpha^2} F_2 +\cdots = x'_i +\cdots \in x'_i + \mathcal J^3;\quad \psi_{\alpha} (\xi'_j) \in \al \xi'_j + \mathcal J^3.
\end{align*}
After coordinate changes (\ref{eq change x'= x+, xi'=xi})(II) similarly we will have
\begin{equation}\label{eq after change p=1}
\psi_{\alpha} (x''_i) \in x''_i + \mathcal J^4,\quad \psi_{\alpha} (\xi''_j) \in \al\xi''_j + \mathcal J^4.
\end{equation}
Now we change notation: $x_i:=x''_i$ and $\xi_j:=\xi''_j$. Further, since (\ref{eq after change p=1}) holds, we have
\begin{align*}
\psi_{\alpha}(x_i) = x_i + F_4+F_6+\cdots;\quad \psi_{\alpha}(\xi_j) = \alpha (\xi_j + G_5 + G_7+\cdots).
\end{align*}
Here we used the same notation $F_s$ and $G_q$ as above; however, after the first step these functions may change. Now we continue to change coordinates successively in this way. If $\al^{2p}\ne 1$ for any $p\in \mathbb N$, that is, if the order $r= \textsf{ord}(\phi_{\alpha})$ is odd or infinite, we can continue this procedure and obtain the required coordinates. This proves the second statement.

If $r$ is even we continue our procedure for $p<r/2$. Now in our new coordinates $\psi_{\al}$ has the following form
\begin{align*}
\psi_{\alpha}(x_i) = x_i + F_{r}+F_{r+2}+\cdots ;\quad \psi_{\alpha}(\xi_j) = \alpha \xi_j + \al G_{r+1} + \al G_{r+3} +\cdots.
\end{align*}
For any $p$ such that $\al^{2p}\ne 1$, the changes of variables inverse to (\ref{eq change x'= x+, xi'=xi})(I) and (\ref{eq change x'= x+, xi'=xi})(II) have the following form
\begin{equation}\label{eq inverse of coordinate change}
\begin{split}
&(I)\quad x_a= x'_a+ F'(x'_i,\xi'_j)_{(2p)}, \quad \xi_b= \xi'_b ;\\
&(II)\quad x'_a= x''_a, \quad \xi'_b= \xi''_b + G'(x''_i,\xi''_j)_{(2p+1)},
\end{split}
\end{equation}
where $F'(x'_i,\xi'_j)_{(2p)}\in \mcJ^{2p}$ and $G'(x''_i,\xi''_j)_{(2p+1)} \in \mcJ^{2p+1}$. Now we use again the coordinate change (\ref{eq change x'= x+, xi'=xi})(I) and (\ref{eq change x'= x+, xi'=xi})(II) for $2p= r+2$, successively. Explicitly, after coordinate changes (\ref{eq change x'= x+, xi'=xi})(I), using (\ref{eq inverse of coordinate change}) for $2p= r+2$ we have
\begin{align*}
\psi_{\alpha} (x'_i) = \psi_{\alpha} (x_i+ \frac{1}{1-\alpha^{r+2}} F_{r+2}(x_i,\xi_j)) = x_i + F_r(x_i,\xi_j)+ F_{r+2}(x_i,\xi_j) +\\
\frac{\alpha^{r+2}}{1-\alpha^{r+2}} F_{r+2}(x_i,\xi_j) +\cdots= x_i + \frac{1}{1-\alpha^{r+2}} F_{r+2}(x_i,\xi_j) + F_r(x_i,\xi_j) +\cdots =\\
x'_i + F_r(x_i,\xi_j)+\cdots \in x'_i +F_r(x'_i,\xi'_j) +\mathcal J^{r+3};\\
\psi_{\alpha} (\xi'_j) \in \al \xi'_j +\al G_{r+1}(x'_i,\xi'_j) + \mathcal J^{r+3}.
\end{align*}
After the coordinate change (\ref{eq change x'= x+, xi'=xi})(II), we will have
\begin{align*}
\psi_{\alpha} (x''_i) \in x''_i + F_r(x''_i,\xi''_j)+ \mathcal J^{r+4},\quad \psi_{\alpha} (\xi''_j) \in \al\xi''_j + \al G_{r+1}(x''_i,\xi''_j) + \mathcal J^{r+4}.
\end{align*}
Repeating this procedure for $2p= r+4, \ldots, 2r-2$ and so on for $2p\ne kr$, $k\in \mathbb N$, we obtain the result.
\end{proof}

\subsection{Lifting of an automorphism $\phi_{\al}$, global picture}
Now we will show that a supermanifold with an automorphism $\psi_{\al}$ has very special transition functions in an atlas $\mathcal A=\{\mathcal U_{a}\}$ as in Proposition \ref{prop new coordinates}.
Recall that in any $\mathcal U_{a}\in \mathcal A$ with coordinates $(x_i,\xi_j)$ we can define an automorphism $\theta_{\al}^a = \theta_{\al}^a (\mathcal U_{a})$ as in Section \ref{sec aut theta} by $\theta^a_{\al}(x_i)=x_i$ and $\theta^a_{\al}(\xi_j)=\al\xi_j$.

\begin{theorem}\label{theor main}
Let $\mathcal A=\{\mathcal U_{a}\}$ be an atlas as in Proposition \ref{prop new coordinates} and assume that there exists a lift $\psi_{\al}$ of the automorphism $\phi_{\al}$ of order $r= \textsf{ord}(\phi_{\alpha})$. Let us take two charts $\mathcal U_{a},\, \mathcal U_{b}\in \mathcal A $ such that $U_{a}\cap U_{b}\ne \emptyset$ with coordinates $(x^{a}_s, \xi^{a}_t)$ and $(x^{b}_i, \xi^{b}_j)$, respectively, with the transition functions $\Psi_{a b}: \mathcal U_{b}\to \mathcal U_{a}$.
\begin{enumerate}
\item[(I)] If $r$ is even, then we have
\begin{equation}\label{eq transition functions}
\theta_{\al}^b(\Psi_{a b}^* (x^{a}_s)) = \Psi_{a b}^* (x^{a}_s);\quad \theta_{\al}^b (\Psi_{a b}^* (\xi^{a}_t)) = \alpha \Psi_{a b}^* (\xi^{a}_t).
\end{equation}
Or more generally,
\begin{equation}\label{eq transition functions new}
\theta_{\al}^b \circ \Psi_{a b}^* = \Psi_{a b}^* \circ \theta_{\al}^a.
\end{equation}
\item[(II)] If we can find an atlas $\mathcal A$ with transition functions satisfying (\ref{eq transition functions}), then the automorphism $\phi_{\al}$ possesses a lift $\psi_{\al}$.
\item[(III)] If $r>1$ is odd or $r=\infty$, then $\mcM$ is split.
\end{enumerate}
\end{theorem}

\begin{proof}
{\it (III)} Let $\Psi_{a b}^* (x^{a}_s) :=L(x^{b}_i, \xi^{b}_j)= \sum\limits_{k}L_{2k}$, where $L_{2k}$ are homogeneous polynomials of degree $2k$ in variables $\{\xi^{b}_j\}$. Then, if $r>1$ is odd or $r=\infty$, by Proposition \ref{prop new coordinates} we have
\begin{align*}
\psi_{\al}\circ \Psi^*_{a b}(x^{a}_s)& = \psi_{\al} (\sum_{k}L_{2k}) = L_0 + \al^2L_{2} + \al^4L_{4} + \cdots ;\\
\Psi^*_{a b}\circ \psi_{\al}(x^{a}_s) &= \Psi^*_{a b} ( x^{a}_s) = L_0 + L_{2} +L_4 +\cdots.
\end{align*}
Since $\psi_{\al}$ is globally defined on $\mcM$, we have the following equality
\begin{equation}\label{eq equality for psi_al}
\psi_{\al}\circ \Psi^*_{a b} = \Psi^*_{a b}\circ \psi_{\al},
\end{equation}
which implies that $L_{2q} = 0$ for any $q\geq 1$. Similarly, the equality $\psi_{\al}\circ \Psi^*_{a b}(\xi^{a}_t) = \Psi^*_{a b}\circ \psi_{\al}(\xi^{a}_t)$ implies that $\Psi^*_{a b}(\xi^{a}_t)$ is linear in $\{\xi^{b}_j\}$. In other words, $\mcM$ is split.

{\it (I)} Now assume that $r$ is even. Similarly to the above we have
\begin{align*}
\psi_{\al}\circ \Psi^*_{a b}(x^{a}_s)& = \psi_{\al} (\sum_{k}L_{2k}) = L_0 + \al^2L_{2} + \cdots + \al^{r-2}L_{r-2} + L' ;\\
\Psi^*_{a b}\circ \psi_{\al}(x^{a}_s) &= \Psi^*_{a b} ( x^{a}_s + F_r+F_{2r}+\cdots ) = L_0 + L_{2} +\cdots + L_{r-2} + L'',
\end{align*}
where $L',L''\in \mcJ^{r}$. Again the equality (\ref{eq equality for psi_al}) implies that $L_2=\cdots = L_{r-2}=0$. Similarly, we can show that
$$
\Psi^*_{a b} (\xi^{a}_t) = M_1+ M_{r+1} + M_{r+3}+\cdots ,
$$
where $M_{2k+1}$ are homogeneous polynomials of degree $2k+1$ in variables $\{\xi^{b}_j\}$. Now if $T=T_0+T_1+T_2+\ldots$ is a decomposition of a super-function into homogeneous polynomials in $\{\xi^{b}_j\}$, denote by $[T]_q:= T_q$ its degree $q$ part. Using that $\psi_{\al} (L_{sr})$, where $s\in \mathbb N$, is $\theta_{\al}^b$-invariant, we have
\begin{align*}
[\psi_{\al}\circ \Psi^*_{a b}(x^{a}_s)]_{2p} = \al^{2p} L_{2p},\quad 2p=r+2,\ldots, 2r-2.
\end{align*}
Further, using that $\Psi^*_{a b} (F_r)$ is $\theta_{\al}^b$-invariant $\operatorname{mod} \mcJ^{2r}$, we have
\begin{align*}
[\Psi^*_{a b}\circ \psi_{\al}(x^{a}_s)]_{2p} = L_{2p}, \quad 2p=r+2,\ldots, 2r-2.
\end{align*}
This result implies that $L_{r+2}=\cdots= L_{2r-2}= 0$. Similarly we work with $M(x^{b}_i, \xi^{b}_j)$. In the same way we show that $L_{p}=0$ for any $p\ne sr$, where $s=0,1,2,\ldots$.

{\it (II)} If $\mcM$ possesses an atlas $\mathcal A$ with transition functions satisfying (\ref{eq transition functions new}), a lift $\psi_{\al}$ can be defined in the following way for any chart $\mathcal U_{a}$
\begin{equation}\label{eq psi standatd any al}
\psi_{\al}(x^{a}_i) = x^{a}_i;\quad \psi_{\al}(\xi^{a}_j) = \al \xi^{a}_j.
\end{equation}
Formulas (\ref{eq transition functions}) show that $\psi_{\al}$ is well-defined. The proof is complete.
\end{proof}

\begin{remark}
Now we can show that (\ref{eq cocycle exact form}) is equivalent to Theorem \ref{theor main} (I). Let again $\Psi_{a b}: \mathcal U_{b}\to \mathcal U_{a}$ be the transition function defined in $\mathcal U_a\cap \mathcal U_b$. In \cite[Section 2]{Bunegina} it was shown that we can decompose these transition functions in the following way
$$
\Psi^*_{ab} = \gamma_{ab} \circ \gr \Psi^*_{ab},
$$
where $(\gamma_{ab})$ is a \v{C}ech cocycle corresponding to the covering $\mathcal A=\{\mathcal U_a\}$ representing $\mcM$, see Theorem \ref{Theor_Green}, and $\gamma_{ab}$ is written in coordinates of $\mathcal U_b$. In other words, the transition functions $\Psi_{ab}$ may be obtained from the transition functions $\gr\Psi_{a b}: \mathcal U_{b}\to \mathcal U_{a}$ of $\gr \mcM$ by applying the automorphism $\gamma_{ab}$. (Here we identified $\gr \mathcal U_c$ and $\mathcal U_c$ in a natural way.) In the structure sheaf of $\mathcal U_a$ (respectively $\mathcal U_b$) there is an automorphism $\theta_{\al}^a$ (respectively $\theta_{\al}^b$) defined as above. Since $\gr\mathcal U_c= \mathcal U_c$, we get $\theta_{\al}^a = \phi_{\al}|_{\mathcal U_a}$. Recall that the statement of Theorem \ref{theor main} (I) can be reformulated in the following way
$$
\Psi^*_{ab}\circ \phi_{\al} = \phi_{\al} \circ \Psi^*_{ab}.
$$
Further, since $\gr\Psi^*_{ab}\circ \phi_{\al} = \phi_{\al}\circ \gr \Psi^*_{ab}$, we get $\phi_{\al} \circ \gamma_{ab} = \gamma_{ab}\circ \phi_{\al}$. Conversely, if $\gamma_{ab}$ is $\phi_{\al}$-invariant, then applying $\Psi^*_{ab} = \gamma_{ab} \circ \gr \Psi^*_{ab}$ we get Theorem \ref{theor main} (I).
\end{remark}

\begin{remark}
In the case $r=\infty$ the result of Theorem \ref{theor main} can be deduced from an observation made in \cite{Koszul} about lifting of graded operators.
\end{remark}

Now we can formulate several corollaries of Theorem \ref{theor main}.

\begin{corollary}
Let $r= \textsf{ord}(\phi_{\al})>1$ and assume that there exists a lift $\psi_{\al}$ of $\phi_{\al}$ on $\mcM$. Then there exists another lift, denoted by $\psi'_{\al}$, of $\phi_{\al}$ and an atlas $\mathcal A=\{\mathcal U_{a}\}$ with local coordinates $(x^{a}_i,\xi^{a}_j)$ in $\mathcal U_{a}$ such that
$$
\psi'_{\alpha} (x_i^{a}) = x_i^{a} ,\quad \psi'_{\alpha} (\xi_k^{a}) = \al \xi_k^{a}.
$$
Indeed, if $r>1$ is odd or $r=\infty$, we can use Proposition \ref{prop new coordinates}(2). If $r$ is even, the statement follows from Formulas (\ref{eq psi standatd any al}).
\end{corollary}

\begin{corollary}\label{cor psi_-1 exists}
Any supermanifold $\mcM$ possesses a lift of the automorphism $\phi_{-1}$.
Indeed, since the transition functions of any atlas preserve parity, $\mcM$ possesses an atlas satisfying (\ref{eq transition functions}) for $\al=-1$. Therefore in (any) local coordinates $(x_a,\xi_b)$ of $\mcM$ we can define an automorphism $\psi^{st}_{-1}$ by the following formulas
$$
\psi^{st}_{-1}(x_a)=x_a;\quad \psi^{st}_{-1}(\xi_b)=-\xi_b.
$$
We will call this automorphism {\it standard}. We can also define this automorphism in the following coordinate-free way
$$
\psi^{st}_{-1}(f)=(-1)^{i}f, \quad f\in \mathcal O_{\bar i}.
$$
\end{corollary}

\begin{corollary}\label{cor phi can be lifted iff}
Let $r= \textsf{ord}(\phi_{\al})>1$ be odd or $\infty$. Then the automorphism $\phi_{\al}$ can be lifted to a supermanifold $\mcM$ if and only if $\mcM$ is split.
\end{corollary}

\begin{corollary}\label{cor order of smf and order of al}
If the automorphism $\phi_{\al}$ can be lifted to a supermanifold $\mcM$, then $o(\mcM)\geq \textsf{ord}(\phi_{\al})$, where $o(\mcM)$ is the order of a supermanifold $\mcM$, see Section \ref{sec Order of a supermanifold}. In particular, if $o(\mcM)=2$, the automorphism $\phi_{\al}$ can be lifted to $\mcM$ only for $\al=\pm 1$.
\end{corollary}

\subsection{Lifting of the automorphism $\phi_{1}$ and consequences}
By definition any lift $\psi_{1}$ of the automorphism $\phi_{1}=\id$ is a global section of the sheaf $\mathcal{A}ut_{(2)}\mathcal{O}$, that is, an element of $H^0(M,\mathcal{A}ut_{(2)}\mathcal{O})$, see Section \ref{sec A classification theorem}. The $0$-cohomology group $H^0(M,\mathcal{A}ut_{(2)}\mathcal{O})$ can be computed using the following exact sequence
\begin{align*}
\{e\} \to \mathcal{A}ut_{(2q+2)}\mathcal{O} \to \mathcal{A}ut_{(2q)}\mathcal{O} \to (\mathcal T_{\gr})_{2q}\to 0, \quad q\geq 1,
\end{align*}
see (\ref{eq exact sequence}). Further, let $\psi_{\al}$ and $\psi'_{\al}$ be two lifts of $\phi_{\al}$. Then the composition $\Psi_1:=(\psi_{\al})^{-1}\circ \psi'_{\al}$ is a lift of $\phi_{1}$. Therefore any lift $\psi'_{\al}$ is equal to the composition $\psi_{\al} \circ \Psi_1$ of a fixed lift $\psi_{\al}$ and an element $\Psi_1\in H^0(M,\mathcal{A}ut_{(2)}\mathcal{O})$. In particular, according to Corollary \ref{cor psi_-1 exists} there always exists the standard lift $\psi^{st }_{-1}$ of $\phi_{-1}$. Therefore for any lift $\psi'_{-1}$ we have $\psi'_{-1} = \psi^{st}_{-1} \circ \Psi_1$, where $\Psi_1\in H^0(M,\mathcal{A}ut_{(2)}\mathcal{O})$.

\section{Automorphisms of the structure sheaf of $\Pi\!\Gr_{n,k}$ }
Let $\mathcal M=\Pi\!\Gr_{n,k}$ be a $\Pi$-symmetric super-Grassmannian. Recall that the retract $\gr\Pi\!\Gr_{n,k}$ of $\Pi\!\Gr_{n,k}$ is isomorphic to $(\Gr_{n,k}, \bigwedge \Omega)$, where $\Omega$ is the sheaf of $1$-forms on the usual Grassmannian $\Gr_{n,k}$. The sheaf $\Omega$ is the sheaf of sections of the cotangent bundle $\textsf{T}^*(M)$ over $M=\Gr_{n,k}$. In the next subsection we recover a well-known result about the automorphism group $\operatorname{Aut}\textsf{T}^*(M)$ of $\textsf{T}^*(M)$.

\subsection{Automorphisms of the cotangent bundle over a Grassmannian}
Let $M= \Gr_{n,k}$ be the usual Grassmannian, i.e. the complex manifold that parameterizes all $k$-dimensional linear subspaces in $\mathbb C^n$, and let $\textsf{T}^*(M)$ be its cotangent bundle. It is a well-known result that $\operatorname{End} \textsf{T}^*(M) \simeq \mathbb C$. Therefore, $\operatorname{Aut}\textsf{T}^*(M) \simeq \mathbb C^*$. For completeness we will prove this fact using the Borel-Weil-Bott Theorem, see for example \cite{ADima} for details.
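For the smallest Grassmannian the statement can also be checked directly; we include this computation only as a sanity check, it is not needed for the general argument. For $M=\Gr_{2,1}\simeq \mathbb{CP}^1$ we have $\textsf{T}^*(M)\simeq \mathcal O(-2)$, hence
$$
\operatorname{End} \textsf{T}^*(M)\simeq \mathcal O(-2)^{*}\otimes \mathcal O(-2)\simeq \mathcal O_{\mathbb{CP}^1},
\qquad
H^0(\mathbb{CP}^1,\mathcal O_{\mathbb{CP}^1})\simeq \mathbb C,
$$
so the global endomorphisms of $\textsf{T}^*(M)$ are exactly the scalars and $\operatorname{Aut}\textsf{T}^*(M)\simeq \mathbb C^*$. We now give the general argument.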
Let $G=\GL_{n}(\mathbb C)$ be the general linear group, $P$ be a parabolic subgroup in $G$, $R$ be the reductive part of $P$ and let $\E_{\chi}\to G/P$ be the homogeneous vector bundle corresponding to a representation $\chi$ of $P$ in the fiber $E=(\E_{\chi})_{P}$. Denote by $\mathcal E_{\chi}$ the sheaf of holomorphic sections of $\E_{\chi}$. In the Lie algebra $\mathfrak{gl}_{n}(\mathbb C)=\operatorname {Lie}(G)$ we fix the Cartan subalgebra $\mathfrak t= \{\operatorname{diag}(\mu_1,\dots,\mu_n)\}$, the following system of positive roots
$$
\Delta^+=\{\mu_i-\mu_j\,\,|\,\, \,\,1\leq i<j \leq n\},
$$
and the following system of simple roots $ \Phi= \{\alpha_1,..., \alpha_{n-1}\}, \,\,\, \alpha_i=\mu_i-\mu_{i+1}$, where $i=1,\ldots , n-1$. Denote by $\mathfrak t^*(\mathbb R)$ the real subspace in $\mathfrak t^*$ spanned by the $\mu_j$. Consider the scalar product $( \,,\, )$ in $\mathfrak t^*(\mathbb R)$ such that the vectors $\mu_j$ form an orthonormal basis. An element $\gamma\in \mathfrak t^*(\mathbb R)$ is called {\it dominant} if $(\gamma, \alpha)\ge 0$ for all $\alpha \in \Delta^+$. We assume that $B^-\subset P$, where $B^-$ is the Borel subgroup corresponding to $\Delta^-$.

\begin{theorem}[Borel-Weil-Bott] \label{teor borel}
Assume that the representation $\chi: P\to \GL(E)$ is completely reducible and $\lambda_1,..., \lambda_s$ are highest weights of $\chi|R$. Then the $G$-module $H^0(G/P,\mathcal E_{\chi})$ is isomorphic to the sum of irreducible $G$-modules with highest weights $\lambda_{i_1},..., \lambda_{i_t}$, where the $\lambda_{i_a}$ are the dominant highest weights of $\chi|R$.
\end{theorem}

Now we apply this theorem to the case of the usual Grassmannian $\Gr_{n,k}$. We have $\Gr_{n,k}\simeq G/P$, where $G= \GL_n(\mathbb C)$ and $P\subset G$ is given by
$$
P= \left\{ \left( \begin{array}{cc} A&0\\ B&C \end{array} \right) \right\},
$$
where $A$ is a complex $k\times k$-matrix. We see that $R= \GL_k(\mathbb C)\times\GL_{n-k}(\mathbb C)$. The isotropy representation $\chi$ of $P$ can be computed in a standard way, see for instance \cite[Proposition 5.2]{COT}. The representation $\chi$ is completely reducible and it is equal to $\rho_1\otimes \rho^*_2$, where $\rho_1$ and $\rho_2$ are the standard representations of the Lie groups $\GL_k(\mathbb C)$ and $\GL_{n-k}(\mathbb C)$, respectively.

\begin{proposition}\label{prop automorphisms of T^*(M)}
For the usual Grassmannian $M= \Gr_{n,k}$, where $n-k,k>0$, we have
$$
\operatorname{End} \textsf{T}^*(M) \simeq \mathbb C,\quad \operatorname{Aut}\textsf{T}^*(M) \simeq \mathbb C^*.
$$
\end{proposition}

\begin{proof}
The cotangent bundle $\textsf{T}^*(M)$ over $M$ is homogeneous and the corresponding representation is the dual of the isotropy representation $\chi$. Let us compute the representation $\omega$ of $P$ corresponding to the homogeneous bundle
$$
\operatorname{End} \textsf{T}^*(M)\simeq \textsf{T}(M) \otimes \textsf{T}^*(M).
$$
The representation $\omega$ is completely reducible and we have
$$
\omega|R= \rho_1\otimes \rho^*_2\otimes\rho_1^*\otimes \rho_2 \simeq \rho_1\otimes \rho^*_1\otimes\rho_2\otimes \rho^*_2.
$$
Therefore, we have
\begin{enumerate}
\item $\omega|R = 1+ ad_{1}+ ad_2 + ad_1\otimes ad_2$ for $k>1$ and $n-k>1$;
\item $\omega|R = 1 + ad_2$ for $k=1$ and $n-k>1$;
\item $\omega|R = 1 + ad_1$ for $k>1$ and $n-k=1$;
\item $\omega|R = 1$ for $k=n-k=1$,
\end{enumerate}
where $1$ is the trivial one-dimensional representation, $ad_1$ and $ad_2$ are adjoint representations of $\GL_k(\mathbb C)$ and $\GL_{n-k}(\mathbb C)$, respectively.
Then the highest weights of the representation $\omega|R$ are
\begin{enumerate}
\item $0,$ $\mu_1-\mu_k$, $\mu_{k+1}-\mu_{n}$, $\mu_1-\mu_k+ \mu_{k+1}-\mu_{n}$ for $k>1$ and $n-k>1$;
\item $0,$ $\mu_{2}-\mu_{n}$ for $k=1$ and $n-k>1$;
\item $0,$ $\mu_1-\mu_{n-1}$ for $k>1$ and $n-k=1$;
\item $0$ for $k=n-k=1$,
\end{enumerate}
respectively. We see that the unique dominant weight is $0$ in any case. By the Borel-Weil-Bott Theorem we obtain the result.
\end{proof}

\subsection{The group $H^0(M,\mathcal{A}ut \mathcal O)$}
Recall that $\mathcal M=(M,\mcO)=\Pi\!\Gr_{n,k}$ is a $\Pi$-symmetric super-Grassmannian. To compute the automorphisms of $\mcO$ we use the following exact sequence of sheaves
\begin{equation}\label{eq exact sec sheaves 1}
e\to \mathcal{A}ut_{(2)} \mathcal O \xrightarrow[]{\iota} \mathcal{A}ut \mathcal O \xrightarrow[]{\sigma} \mathcal{A}ut (\Omega) \to e,
\end{equation}
where $\mathcal{A}ut (\Omega)$ is the sheaf of automorphisms of the sheaf of $1$-forms $\Omega$. Here the map $\iota$ is the natural inclusion and $\sigma$ maps any $\delta:\mcO\to \mcO$ to the induced automorphism $\sigma(\delta)$ of $\mcJ/\mcJ^2\simeq \Omega$, where $\mcJ$ is again the sheaf of ideals generated by odd elements in $\mcO$. Consider the exact sequence of $0$-cohomology groups corresponding to (\ref{eq exact sec sheaves 1})
\begin{equation}\label{eq exact seq automorphisms}
\{e\} \to H^0(M, \mathcal{A}ut_{(2)} \mathcal O )\longrightarrow H^0(M, \mathcal{A}ut \mathcal O) \longrightarrow \operatorname{Aut} \textsf{T}^*(M),
\end{equation}
and the exact sequence of $0$-cohomology groups corresponding to (\ref{eq exact sequence})
\begin{equation}\label{eq exact seq automorphisms 3}
\{e\} \to H^0(M, \mathcal{A}ut_{(2p+2)}\mathcal{O}) \to H^0(M,\mathcal{A}ut_{(2p)}\mathcal{O}) \to H^0(M,(\mathcal T_{\gr})_{2p}),\quad p\geq 1.
\end{equation}
In \cite[Theorem 4.4]{COT} it has been proven that
\begin{equation}\label{eq Oni Theorem 4.4}
H^0(M, (\mathcal T_{\gr})_s)=\{0\}\quad \text{for}\,\,\, s\geq 2.
\end{equation}
(For $\mathcal M=\Pi\!\Gr_{2,1}$ this statement follows for dimensional reasons.) Therefore,
\begin{equation}\label{eq H^0()Aut_(2)}
H^0(M, \mathcal{A}ut_{(2)} \mathcal O) =\{e\}.
\end{equation}
Recall that the automorphism $\psi^{st}_{-1}$ of the structure sheaf was defined in Corollary \ref{cor psi_-1 exists}.

\begin{theorem}\label{theor Aut O for Pi symmetric}
Let $\mathcal M=\Pi\!\Gr_{n,k}$ be a $\Pi$-symmetric super-Grassmannian and $(n,k)\ne (2,1)$. Then
$$
H^0(\Gr_{n,k},\mathcal{A}ut \mathcal O) =\{id, \psi^{st}_{-1} \}.
$$
For $\mathcal M=\Pi\!\Gr_{2,1}$ we have
$$
H^0(\Gr_{2,1},\mathcal{A}ut \mathcal O)\simeq \mathbb C^*.
$$
\end{theorem}

\begin{proof}
From (\ref{eq exact seq automorphisms}), (\ref{eq H^0()Aut_(2)}) and Proposition \ref{prop automorphisms of T^*(M)}, it follows that
$$
\{e\} \to H^0(M, \mathcal{A}ut \mathcal O) \to \{\phi_{\alpha}\,\,|\,\, \al\in \mathbb C^* \}\simeq \mathbb C^*.
$$
Now the statement follows from Proposition \ref{prop o(PiGR)} and Corollary \ref{cor order of smf and order of al}. In more detail, for $(n,k)\ne (2,1)$ we have $o(\mcM) =2$, therefore $\phi_{\alpha}$ can be lifted to $\mcM$ if and only if $\ord(\phi_{\alpha})=1$ or $2$, in other words, if and only if $\al=\pm 1$. In the case $\mathcal M=\Pi\!\Gr_{2,1}$, we have $\dim \mcM = (1|1)$. Therefore, $\mathcal{A}ut_{(2)} \mathcal O = id$ and any $\phi_{\alpha}$ can be lifted to $\mcM$. The proof is complete.
\end{proof}

We finish this section with the following theorem.
\begin{theorem}\label{theor Aut gr O for Pi symmetric}
Let $\gr \mathcal M=(M,\gr \mcO)=\gr \Pi\!\Gr_{n,k}$, where $\Pi\!\Gr_{n,k}$ is a $\Pi$-symmetric super-Grassmannian. Then
$$
H^0(\Gr_{n,k},\mathcal{A}ut (\gr \mathcal O))= \operatorname{Aut} \textsf{T}^*(M) \simeq \mathbb C^*.
$$
\end{theorem}

\begin{proof}
In the sequence (\ref{eq exact seq automorphisms 3}) we can replace $\mcO$ by $\gr \mcO$. (This sequence is exact for any $\mcO'$ such that $\gr\mcO'\simeq \gr \mcO$.) By (\ref{eq Oni Theorem 4.4}) as above we get
$$
H^0(M, \mathcal{A}ut_{(2)} (\gr\mathcal O)) =\{e\}.
$$
By (\ref{eq exact seq automorphisms}), applied to $\gr\mcO$, we have
$$
\{e\} \to H^0(M, \mathcal{A}ut (\gr \mathcal O)) \longrightarrow \operatorname{Aut} \textsf{T}^*(M)\simeq \mathbb C^*.
$$
Since any automorphism from $\operatorname{Aut} \textsf{T}^*(M)$ induces an automorphism of $\gr \mathcal O$, we obtain the result.
\end{proof}

\section{The automorphism supergroup $\operatorname{Aut}\Pi\!\Gr_{n,k}$ of a $\Pi$-symmetric super-Grassmannian}

\subsection{The automorphism group of $\Gr_{n,k}$}\label{sec The automorphism group of Gr}
The following theorem can be found for example in \cite[Chapter 3.3, Theorem 1, Corollary 2]{ADima}.

\begin{theorem}\label{theor autom group of usual grassmannian}
The automorphism group $\operatorname{Aut} (\Gr_{n,k})$ is isomorphic to $\PGL_n(\mathbb C)$ if $n\ne 2k$ or if $(n,k)=(2,1)$; and $\PGL_n(\mathbb C)$ is a normal subgroup of index $2$ in $\operatorname{Aut} (\Gr_{n,k})$ for $n=2k$, $k\ne 1$. More precisely, in the case $n=2k\geq 4$ we have
$$
\operatorname{Aut} (\Gr_{2k,k}) = \PGL_n(\mathbb C) \rtimes \{\id, \Phi \},
$$
where $\Phi^2 =\id$ and $\Phi\circ g\circ \Phi^{-1} = (g^t)^{-1}$ for $g\in \PGL_n(\mathbb C)$.
\end{theorem}

An additional automorphism $\Phi$ can be described geometrically. (Note that an additional automorphism is not unique.) It is well-known that $\Gr_{n,k}\simeq \Gr_{n,n-k}$ and this isomorphism is given by $\Gr_{n,k} \ni V \mapsto V^{\perp} \in \Gr_{n,n-k}$, where $V^{\perp}$ is the orthogonal complement of $V\subset \mathbb C^n$ with respect to a non-degenerate bilinear form $B$. In the case $n=2k$ we clearly have $\Gr_{n,k} = \Gr_{n,n-k}$, hence the map $V \mapsto V^{\perp}$ induces an automorphism of $\Gr_{2k,k}$, which we denote by $\Phi_B$. This automorphism is not an element of $\PGL_n(\mathbb C)$ for $(n,k)\ne (2,1)$. Assume that $B$ is the symmetric bilinear form given in the standard basis of $\mathbb C^n$ by the identity matrix. Denote the corresponding automorphism by $\Phi$.

Let us describe $\Phi$ in the standard coordinates on $\Gr_{2k,k}$, given in Section \ref{sec def of a supergrassmannian}. Recall that the chart $U_I$ on $\Gr_{2k,k}$, where $I=\{k+1, \ldots, 2k\}$, corresponds to the following matrix $ \left(\begin{array}{c} X\\ E\\ \end{array} \right), $ where $X$ is a $k\times k$-matrix of local coordinates and $E$ is the identity matrix. We have
$$
\left(\begin{array}{c} X\\ E\\ \end{array} \right) \xrightarrow{\Phi} \left(\begin{array}{c} E\\ -X^t\\ \end{array} \right),
$$
since
$$
\left(\begin{array}{c} E\\ -X^t\\ \end{array} \right)^t \cdot \left( \begin{array}{c} X\\ E\\ \end{array} \right) = \left(\begin{array}{cc} E& -X\\ \end{array} \right) \cdot \left( \begin{array}{c} X\\ E\\ \end{array} \right) =0.
$$
More generally, let $U_I$, where $|I|= k$, be another chart on $\Gr_{2k,k}$ with coordinates $(x_{ij})$, $i,j=1,\ldots, k$, as described in Section \ref{sec def of a supergrassmannian}. Denote $J:= \{ 1,\ldots, 2k\}\setminus I$.
Then $U_J$ is again a chart on $\Gr_{2k,k}$ with coordinates $(y_{ij})$, $i,j=1,\ldots, k$. Then the automorphism $\Phi$ is given by $y_{ij} = -x_{ji}$. \begin{remark} In case $(n,k)= (2,1)$ the automorphism $\Phi$ described above is defined as well, however it coincides with the following automorphism from $\PGL_2(\mathbb C)$ \begin{align*} \left(\begin{array}{cc} 0&1\\ -1&0\\ \end{array} \right)\cdot \left(\begin{array}{c} x\\ 1\\ \end{array} \right) = \left(\begin{array}{c} 1\\ -x\\ \end{array} \right) = \left(\begin{array}{c} 1\\ -x^t\\ \end{array} \right). \end{align*} The same in another chart. \end{remark} Let us discuss properties of $\Phi$ mentioned in Theorem \ref{theor autom group of usual grassmannian}. Clearly $\Phi^2 = \id$. Further, for $g\in \PGL_n(\mathbb C)$ we have \begin{align*} \left[(g^t)^{-1}\cdot \left(\begin{array}{c} E\\ -X^t\\ \end{array} \right)\right]^t \cdot \left[g \cdot \left( \begin{array}{c} X\\ E\\ \end{array} \right)\right] = \left(\begin{array}{cc} E& -X\\ \end{array} \right) \cdot g^{-1}\cdot g \cdot \left( \begin{array}{c} X\\ E\\ \end{array} \right) =0. \end{align*} (In other charts $U_I$ the argument is the same.) In other words, if $V\subset \mathbb C^{2k}$ is a linear subspace of dimension $k$, then $(g \cdot V)^{\perp} = (g^t)^{-1} \cdot V^{\perp}$. Hence, \begin{align*} V \xmapsto[]{\Phi^{-1}}V^{\perp} \xmapsto[]{\text{\,\,\,} g\text{\,\,\,} } g\cdot V^{\perp} \xmapsto[]{\text{\,\,}\Phi\text{\,\,}} (g^t)^{-1} \cdot V. \end{align*} Therefore, $\Phi\circ g\circ \Phi^{-1} = (g^t)^{-1}$. \subsection{About lifting of the automorphism $\Phi$}\label{sec lifting of exeptional hom} \subsubsection{Lifting of the automorphism $\Phi$ to $\gr \Pi\!\Gr_{2k,k}$} Recall that we have $$ \gr \Pi\!\Gr_{n,k}\simeq (\Gr_{n,k}, \bigwedge \Omega), $$ where $\Omega$ is the sheaf of $1$-forms on $\Gr_{n,k}$. Therefore any automorphism of $\Gr_{n,k}$ can be naturally lifted to $\gr \mcM=\gr \Pi\!\Gr_{n,k}$. Indeed, the lift of an automorphism $F$ of $\Gr_{n,k}$ is the automorphism $(F,\wedge \operatorname{d} (F))$ of $(\Gr_{n,k}, \bigwedge \Omega)$. Further, by Theorem \ref{theor Aut gr O for Pi symmetric} we have $$ \{e\} \to H^0(M, \mathcal{A}ut (\gr \mathcal O)) \simeq \mathbb C^* \longrightarrow \operatorname{Aut}( \gr \mcM) \longrightarrow \operatorname{Aut} (\Gr_{n,k}). $$ Hence, $$ \operatorname{Aut} (\gr \mcM )\simeq \mathbb C^* \rtimes \operatorname{Aut} (\Gr_{n,k}) . $$ Now we see that $\operatorname{Aut} (\gr \mcM )$ is isomorphic to the group of all automorphisms $\underline{\operatorname{Aut}} \textsf{T}^*(M)$ of $\textsf{T}^*(M)$. An automorphism $\phi_{\al} \in \mathbb C^*$ commutes with any $(F,\wedge \operatorname{d} (F))\in \operatorname{Aut} (\Gr_{n,k})$. Hence we obtain the following result. \begin{theorem}\label{theor aut gr mcM} If $\gr\mathcal M = \gr \Pi\!\Gr_{n,k}$, then $$ \operatorname{Aut} (\gr \mcM )\simeq \underline{\operatorname{Aut}} \textsf{T}^*(M)\simeq \operatorname{Aut} (\Gr_{n,k})\times \mathbb C^*. $$ In other words, any automorphism of $\gr \mcM $ is induced by an automorphism of $\textsf{T}^*(M)$. More precisely, {\bf (1)} If $\gr\mathcal M = \gr \Pi\!\Gr_{2k,k}$, where $k\geq 2$, then $$ \operatorname{Aut} (\gr\mathcal M)\simeq (\PGL_{2k}(\mathbb C) \rtimes \{\id, (\Phi, \wedge d(\Phi)) \})\times \mathbb C^*, $$ where $(\Phi, \wedge d(\Phi)) \circ g\circ (\Phi, \wedge d(\Phi))^{-1} = (g^t)^{-1}$ for $g\in \PGL_{2k}(\mathbb C)$. 
{\bf (2)} For other $(n,k)$, we have
$$
\operatorname{Aut} (\gr\mathcal M)\simeq \PGL_n(\mathbb C) \times \mathbb C^*.
$$
\end{theorem}

\begin{corollary}
We see from Theorem \ref{theor aut gr mcM} that any lift of the automorphism $\Phi$ to $\gr \Pi\!\Gr_{2k,k}$ has the following form
$$
\phi_{\al} \circ (\Phi, \wedge d(\Phi)),\quad \al\in \mathbb C^*.
$$
\end{corollary}

\subsubsection{An explicit construction of lifts of the automorphism $\Phi$ to $\gr \Pi\!\Gr_{2k,k}$}\label{sec explicit Phi}
In Section \ref{sec charts on Gr} we constructed the atlas $\mathcal A^{\Pi}=\{\mathcal U_I^{\Pi}\}$ on $\Pi\!\Gr_{n,k}$. Therefore, $\gr\mathcal A^{\Pi}:=\{\gr\mathcal U_I^{\Pi}\}$ is an atlas on $\gr \Pi\!\Gr_{n,k}$. For the sake of completeness, we describe a lift $(\Phi, \wedge d(\Phi))$ of $\Phi$ from Section \ref{sec The automorphism group of Gr} in our local charts. First consider the following two coordinate matrices, see Section \ref{sec charts on Gr}:
\begin{equation}\label{eq two standard charts}
\mathcal Z_{1}= \left(\begin{array}{cc} X&\Xi \\ E&0\\ \Xi& X\\ 0&E \end{array} \right), \quad \mathcal Z_{2} = \left(\begin{array}{cc} E&0 \\ Y&H\\ 0& E\\ H&Y \end{array} \right),
\end{equation}
where $X = (x_{ij})$, $Y= (y_{ij})$ are $k\times k$-matrices of local even coordinates and $\Xi = (\xi_{st})$, $H = (\eta_{st})$ are $k\times k$-matrices of local odd coordinates on $\Pi\!\Gr_{2k,k}$. Denote by $\mathcal V_i\in \mathcal A^{\Pi}$ the superdomain corresponding to $\mathcal Z_i$. Then $\gr \mathcal V_1$ and $\gr \mathcal V_2$ are superdomains in $\gr\mathcal A^{\Pi}$ with coordinates $(\gr (x_{ij}), \gr (\xi_{st}))$ and $(\gr (y_{ij}), \gr (\eta_{st}))$, respectively. (Note that we can consider any superfunction $f$ as a morphism between supermanifolds; therefore $\gr f$ is defined.) We can easily check that the coordinate $\gr (\xi_{ij})$ (or $\gr (\eta_{ij})$) can be identified with the $1$-form $d(\gr(x_{ij}))$ (or $d(\gr(y_{ij}))$, respectively) for any $(ij)$. Using this fact we can describe the automorphism $(\Phi, \wedge d(\Phi))$ on $\gr \Pi\!\Gr_{2k,k}$. We get in our local charts
\begin{equation*}\left(\begin{array}{cc} \gr X& \gr \Xi \\ E&0\\ \gr \Xi&\gr X\\ 0&E \end{array} \right) \xrightarrow{(\Phi, \wedge d(\Phi))} \left(\begin{array}{cc} E&0\\ -\gr X^t& -\gr \Xi^t \\ 0&E\\ - \gr \Xi^t& - \gr X^t\\ \end{array} \right).
\end{equation*}
We can describe the automorphism $(\Phi, \wedge d(\Phi))$ in any other charts of $\gr\mathcal A^{\Pi}$ in a similar way. Clearly, $(\Phi, \wedge d(\Phi))\circ (\Phi, \wedge d(\Phi)) =id$.

\subsubsection{About lifting of the automorphism $\Phi$ to $\Pi\!\Gr_{2k,k}$}
In this subsection we use results obtained in \cite{COT}. Recall that $\Omega$ is the sheaf of $1$-forms on $\Gr_{n,k}$ and $\mathcal T_{\gr} = \bigoplus_{p\in \Z} (\mathcal T_{\gr})_p$ is the tangent sheaf of $\gr\Pi\!\Gr_{n,k}$. We have the following isomorphism
$$
(\mathcal T_{\gr})_2 \simeq \bigwedge^3 \Omega\otimes \Omega^* \oplus \bigwedge^2 \Omega\otimes \Omega^*,
$$
see \cite[Formula 2.13]{COT}. (This isomorphism holds for any supermanifold with the retract $(M,\bigwedge \Omega)$.) Therefore,
\begin{equation}\label{eq H^1-1}
H^1(\Gr_{n,k},(\mathcal T_{\gr})_2) \simeq H^1(\Gr_{n,k},\bigwedge^3 \Omega\otimes \Omega^*) \oplus H^1(\Gr_{n,k},\bigwedge^2 \Omega\otimes \Omega^*).
\end{equation}
By \cite[Proposition 4.10]{COT} we have
\begin{equation}\label{eq H^1-2}
H^1(\Gr_{n,k},\bigwedge^3 \Omega\otimes \Omega^*) =\{0\}.
\end{equation}
Further, by the Dolbeault-Serre theorem we have
\begin{equation}\label{eq H^1-3}
H^1(\Gr_{n,k}, \bigwedge^2 \Omega\otimes \Omega^*) \simeq H^{2,1} (\Gr_{n,k}, \Omega^*).
\end{equation}
Combining Formulas (\ref{eq H^1-1}), (\ref{eq H^1-2}) and (\ref{eq H^1-3}) we get
\begin{equation*}H^1(\Gr_{n,k},(\mathcal T_{\gr})_2) \simeq H^{2,1} (\Gr_{n,k}, \Omega^*).
\end{equation*}
Consider the exact sequence (\ref{eq exact sequence}) for the sheaf $\gr \mcO$
$$
e \to \mathcal{A}ut_{(2p+2)}\gr\mathcal{O} \to \mathcal{A}ut_{(2p)}\gr\mathcal{O} \to (\mathcal T_{\gr})_{2p}\to 0.
$$
Since $H^1(M, (\mathcal T_{\gr})_{p})=\{0\}$ for $p\geq 3$, see \cite[Theorem 4.4]{COT}, we have
$$H^1(\Gr_{n,k},\mathcal{A}ut_{(2p)} \gr\mathcal{O}) = \{\epsilon\}\quad \text{for}\,\,\, p\geq 2.
$$
Hence we have the following inclusion
\begin{equation}\label{eq inclusion}
H^1(\Gr_{n,k}, \mathcal{A}ut_{(2)}\gr\mathcal{O}) \hookrightarrow H^1(\Gr_{n,k}, (\mathcal T_{\gr})_2)\simeq H^{2,1} (\Gr_{n,k}, \Omega^*).
\end{equation}
Let $\gamma\in H^1(\Gr_{2k,k}, \mathcal{A}ut_{(2)}\gr\mathcal{O})$ be the Green cohomology class of the supermanifold $\Pi\!\Gr_{2k,k}$, see Theorem \ref{Theor_Green}. Denote by $\eta$ the image of $\gamma$ in $H^{2,1} (\Gr_{2k,k}, \Omega^*)$. (We borrow the notation $\eta$ from \cite{COT}.)

\begin{theorem}\label{theor lift of Phi}
The automorphism $\phi_{\al} \circ (\Phi, \wedge d(\Phi))$, where $\al\in \mathbb C^*$, can be lifted to $\Pi\!\Gr_{2k,k}$, where $k\geq 2$, if and only if $\al = \pm i$. The $\Pi$-symmetric super-Grassmannian $\Pi\!\Gr_{2,1}$ is split, in other words, $\Pi\!\Gr_{2,1}\simeq \gr \Pi\!\Gr_{2,1}$. Therefore any $\phi_{\al} \circ (\Phi, \wedge d(\Phi))$ is an automorphism of $\Pi\!\Gr_{2,1}$.
\end{theorem}

\begin{proof}
This statement can be deduced from results of \cite{COT}. Indeed, by \cite[Theorem 5.2 (1)]{COT}, $\Pi\!\Gr_{2k,k}$, where $k\geq 2$, corresponds to the $(2,1)$-form $\eta \neq 0$ defined by \cite[Formula (4.19)]{COT}. Further, the inclusion (\ref{eq inclusion}) is $\underline {\operatorname{Aut}} \textsf{T}^*(M)$-equivariant. In \cite[Lemma 3.2]{COT} it was shown that $\phi_{\al} (\eta) = \al^2 \eta$ and in the proof of \cite[Theorem 4.6 (2)]{COT} it was shown that $(\Phi, \wedge d(\Phi))(\eta) = -\eta \in H^{2,1} (\Gr_{n,k}, \Omega^*)$. This implies that
$$
[\phi_{\al} \circ (\Phi, \wedge d(\Phi))] (\eta) =\eta
$$
if and only if $\al = \pm i$. Hence,
$$
[\phi_{\al} \circ (\Phi, \wedge d(\Phi))](\gamma) =\gamma\in H^1(M, \mathcal{A}ut_{(2)}\gr\mathcal{O})
$$
if and only if $\al = \pm i$. Hence, by Proposition \ref{prop lift of gamma} we obtain the result.
\end{proof}

\subsubsection{A geometric construction of a lift of $\Phi$ to $\Pi\!\Gr_{2k,k}$}\label{sec construction of Theta}
Together with the supermanifold $\Pi\!\Gr_{n,k}$ we can consider a $\Pi^L$-symmetric super-Grassmannian $\Pi\!\Gr^L_{n,k}$. A construction of $\Pi\!\Gr^L_{n,k}$ is similar to the construction of $\Pi\!\Gr_{n,k}$ given in Section \ref{sec def of a supergrassmannian}. The difference is that we write even and odd coordinates in rows, not columns.
For example, a coordinate matrix $\mathcal Z_{I}$, where $I=\{1,\ldots, k\}$, of $\Pi\!\Gr^L_{n,k}$ has the following form
\begin{align*}
\left(\begin{array}{cccc} E& X & 0 & \Xi\\ 0&\Xi & E& X\\ \end{array} \right).
\end{align*}
The supermanifold $\Pi\!\Gr^L_{n,k}$ is a super-Grassmannian of $\Pi^L$-symmetric $k$-dimensional superspaces in $\C^{n|n}$, where $\Pi^L$ is an odd involution, which is linear with respect to left coordinates in $\C^{n|n}$. (If we write our involution $\Pi$ in left coordinates, it will be superlinear, not linear, see \cite{Leites}.)

In $\C^{2k|2k}$ we can consider a bilinear form given in the standard basis by the identity matrix of size $4k$. This form is not super-symmetric. The form induces a map $\Pi\!\Gr_{2k,k} \to \Pi\!\Gr^L_{2k,k}$ given in charts by
\begin{equation}\label{eq Z to Z perp}
\left(\begin{array}{cc} X& \Xi \\ E&0\\ \Xi&X\\ 0&E \end{array} \right) \longrightarrow \left(\begin{array}{cccc} E& -X & 0 & -\Xi\\ 0&-\Xi & E& -X\\ \end{array} \right).
\end{equation}
For other charts, let $\mathcal U_I$, where $|I|= k$, be another chart on $\Pi\Gr_{2k,k}$ with coordinates $(x_{ij}, \xi_{st})$, $i,j,s,t=1,\ldots, k$, as described in Section \ref{sec def of a supergrassmannian}. Denote $J:= \{ 1,\ldots, 2k\}\setminus I$. Then $\mathcal U^L_J$ is a chart on $\Pi\Gr^L_{2k,k}$ with coordinates $(y_{ij},\eta_{st})$, $i,j,s,t=1,\ldots, k$. Then our map is given by $y_{ij} = -x_{ij}$, $\eta_{st} = -\xi_{st}$. Since
\begin{align*}
\left(\begin{array}{cccc} E& -X & 0 & -\Xi\\ 0&-\Xi & E& -X\\ \end{array} \right) \cdot \left(\begin{array}{cc} X& \Xi \\ E&0\\ \Xi&X\\ 0&E \end{array} \right) = 0,
\end{align*}
and the same holds for coordinate matrices of the charts $\mathcal U_I$ and $\mathcal U^L_J$, $J:= \{ 1,\ldots, 2k\}\setminus I$, we see that the map is independent of the choice of a chart.

Further, for $\Pi$-symmetric even matrices an $i$-transposition is defined, see (\ref{eq i transposition}),
$$
\left(\begin{array}{cc} A & B\\ B & A\\ \end{array} \right)^{t_i} = \left(\begin{array}{cc} A^t & iB^t\\ iB^t & A^t\\ \end{array} \right),
$$
which satisfies $(C_1\cdot C_2)^{t_i} = C_2^{t_i} \cdot C_1^{t_i}$. (Here $i B^t$ is the matrix $B^t$, the transpose of the matrix $B$, multiplied by the complex number $i=\sqrt{-1}$.) Now we can define an automorphism of $\Pi\!\Gr_{2k,k}$ by
\begin{align*}
\left(\begin{array}{cc} X& \Xi \\ E&0\\ \Xi&X\\ 0&E \end{array} \right) \longrightarrow \left(\begin{array}{cccc} E& -X & 0 & -\Xi\\ 0&-\Xi & E& -X\\ \end{array} \right) \xrightarrow{{t_i}} \left(\begin{array}{cc} E&0\\ -X^t& -i\Xi^t \\ 0&E\\ -i\Xi^t&-X^t\\ \end{array} \right).
\end{align*}
In charts $\mathcal Z_I \to \mathcal Z_J$ on $\Pi\!\Gr_{2k,k}$, where $J:= \{ 1,\ldots, 2k\}\setminus I$, this map is given by $y_{ij} = -x_{ji}$, $\eta_{st} = -i\xi_{ts}$. In the notation of Theorem \ref{theor lift of Phi}, the obtained automorphism is the (unique) lift of $\phi_{i} \circ (\Phi, \wedge d(\Phi))$ for $(n,k)\ne (2,1)$, and it is equal to $\phi_{i} \circ (\Phi, \wedge d(\Phi))$ for $(n,k)= (2,1)$. We denote the automorphism described in this section by $\Theta$.

\subsection{The automorphism supergroup of $\Pi\!\Gr_{n,k}$}
As above we denote by $\mathfrak v(\mathcal M) := H^0(M, \mathcal T)$ the Lie superalgebra of vector fields on a supermanifold $\mcM$. In this section $\mcM= \Pi\!\Gr_{n,k}$. Recall that $\mathfrak{q}_{n}(\mathbb C)$ is a strange Lie superalgebra, see \cite{Kac} for details.
This Lie superalgebra consists of all matrices over $\C$ of the following form
$$
\left(\begin{array}{cc} A&B \\ B&A\\ \end{array} \right),
$$
where $A,B$ are complex square matrices of size $n$. For instance, the identity matrix $E_{2n}$ of size $2n$ is an element of $\mathfrak{q}_{n}(\mathbb C)$. We denote the corresponding Lie supergroup by $\Q_n(\C)$. This is a subsupergroup in $\GL_{n|n}(\C)$ which is invariant with respect to the odd involution $\Pi$. This Lie supergroup can be seen as a subsuperdomain of the following form in $\GL_{n|n}(\C)$:
\begin{align*}
\left(\begin{array}{cc} L&M \\ M&L\\ \end{array} \right),
\end{align*}
where $L$ is an $n\times n$-matrix of even coordinates, while $M$ is an $n\times n$-matrix of odd coordinates. The Harish-Chandra pair corresponding to $\Q_n(\C)$ is $(\GL_{n}(\C),\mathfrak{q}_{n}(\mathbb C))$.

In \cite{COT} the following theorem was proved, see \cite[Theorem 5.2]{COT}, \cite[Theorem 4.2]{COT} and \cite[Section 5]{Vish_Pi sym} for an explicit description of $\mathfrak v(\Pi\!\Gr_{2,1})$.

\begin{theorem}\label{teor vector fields on supergrassmannians}
{\bf (1)} If $\mathcal M = \Pi\!\Gr_{n,k}$, where $(n,k)\ne (2,1)$, then
$$
\mathfrak v(\mathcal M)\simeq \mathfrak{q}_{n}(\mathbb C)/\langle E_{2n}\rangle.
$$
{\bf (2)} If $\mathcal M = \Pi\!\Gr_{2,1}$, then
$$
\mathfrak v(\mathcal M)\simeq \mathfrak g \rtimes \langle z\rangle \simeq \mathfrak{q}_{2}(\mathbb C)/\langle E_{4}\rangle \rtimes \langle z\rangle.
$$
Here $\mathfrak g = \mathfrak g_{-1}\oplus \mathfrak g_0\oplus \mathfrak g_{1}$ is a $\mathbb Z$-graded Lie superalgebra defined in the following way:
$$
\mathfrak g_{-1}= V, \quad \mathfrak g_{0}= \mathfrak{sl}_2(\mathbb C), \quad \mathfrak g_{1}= \langle d \rangle,
$$
where $V= \mathfrak{sl}_2(\mathbb C)$ is the adjoint $\mathfrak{sl}_2(\mathbb C)$-module, $[\mathfrak g_{0}, \mathfrak g_{1}] = \{0\}$, $[d , -]$ maps $\mathfrak g_{-1}$ identically onto $\mathfrak g_{0}$, and $z$ is the grading operator of the $\mathbb Z$-graded Lie superalgebra $\mathfrak{g}$; the element $d$ corresponds to the matrix $\left(\begin{array}{cc} 0&E \\ E&0\\ \end{array} \right) \in \mathfrak{q}_{2}(\mathbb C)$.
\end{theorem}

Recall that the automorphism $\psi^{st}_{-1}$ of the structure sheaf was defined in Corollary \ref{cor psi_-1 exists}. Denote by $\Psi^{st}_{-1}$ the corresponding automorphism of the supermanifold $\mcM$. Note that for any supermanifold $(M,\mcO)$ the action of $\Psi^{st}_{-1}$ on $H^0(M,\mathcal T)$ is given by $v \mapsto \psi^{st}_{-1}\circ v\circ (\psi^{st}_{-1})^* = (-1)^{\tilde v} v$. If our supermanifold is split and $v$ is a vector field of degree $k$, we have $\phi_{\al}\circ v\circ \phi_{\al^{-1}} = \al^k v$. In particular, for the grading operator $z$ from Theorem \ref{teor vector fields on supergrassmannians} we have $\phi_{\al}\circ z\circ \phi_{\al^{-1}} = z$.

Now everything is ready to prove the following theorem. Recall that the automorphism supergroup for a compact complex supermanifold is defined in terms of super-Harish-Chandra pairs by formula (\ref{eq def of automorphism supergroup}).

\renewcommand{\AA}{{\sf A}}
\begin{theorem}\label{t:Aut}
{\bf (1)} If $\mathcal M = \Pi\!\Gr_{n,k}$, where $n\ne 2k$, then
$$
\AA\coloneqq\operatorname{Aut} \mathcal M\simeq \PGL_n(\mathbb C) \times \{\id, \Psi^{st}_{-1} \}.
$$
The automorphism supergroup is given by the Harish-Chandra pair
$$
( \PGL_n(\mathbb C) \times \{\id, \Psi^{st}_{-1} \}, \mathfrak{q}_{n}(\mathbb C)/\langle E_{2n}\rangle).
$$ {\bf (2)} If $\mathcal M = \Pi\!\Gr_{2k,k}$, where $k\geq 2$, then $$ \AA\coloneqq\operatorname{Aut} \mathcal M\simeq \PGL_{2k}(\mathbb C) \rtimes \{\id, \Theta, \Psi^{st}_{-1}, \Psi^{st}_{-1}\circ \Theta \}, $$ where $\Theta^2 = \Psi^{st}_{-1}$, $\Psi^{st}_{-1}$ is a central element of $\AA$, and $\Theta \circ g\circ \Theta^{-1} = (g^t)^{-1}$ for $g\in \PGL_{2k}(\mathbb C)$. The automorphism supergroup is given by the Harish-Chandra pair $$ (\PGL_{2k}(\mathbb C) \rtimes \{\id, \Psi^{st}_{-1}, \Theta, \Psi^{st}_{-1}\circ \Theta \}, \mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle), $$ where $\Theta \circ C\circ \Theta^{-1} = - C^{t_i}$ for $C\in \mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle$ and $\Psi^{st}_{-1} \circ C\circ (\Psi^{st}_{-1})^{-1} = (-1)^{\tilde C} C$. {\bf (3)} If $\mathcal M = \Pi\!\Gr_{2,1}$, then $$ \AA\coloneqq \operatorname{Aut} \mathcal M\simeq \PGL_{2}(\mathbb C)\times \mathbb C^*. $$ The automorphism supergroup is given by the Harish-Chandra pair $$ ( \PGL_{2}(\mathbb C)\times \mathbb C^*, \mathfrak g \rtimes \langle z\rangle). $$ Here $\mathfrak g$ is a $\Z$-graded Lie superalgebra described in Theorem \ref{teor vector fields on supergrassmannians}, the action of $ \PGL_{2}(\mathbb C)\times \mathbb C^*$ on $z$ is trivial, and $\phi_{\al}\in \C^*$ multiplies $X\in \mathfrak v(\Pi\!\Gr_{2,1})_k$ by $\al^k$. \end{theorem} \begin{proof} We use the following exact sequence of groups: $$ e \to H^0(M,\mathcal{A}ut_{(2)}\mathcal{O}) \to \operatorname{Aut} \mathcal M \to \operatorname{Aut} (\gr \mathcal M). $$ By (\ref{eq H^0()Aut_(2)}) we have $H^0(M,\mathcal{A}ut_{(2)}\mathcal{O})=e$. Therefore, $\operatorname{Aut} \mathcal M$ is a subgroup in $\operatorname{Aut} (\gr \mathcal M)$ and the group $\operatorname{Aut} (\gr \mathcal M)$ was computed in Theorem \ref{theor aut gr mcM}. If $(n,k) =(2,1)$, we have $ \Pi\!\Gr_{2,1}\simeq \gr \Pi\!\Gr_{2,1}$. Hence, the result about the automorphism group follows from Theorem \ref{theor aut gr mcM}. In \cite{Manin} it was proven that $\Pi\!\Gr_{n,k}$ possesses an effective action of $\PGL_n(\mathbb C)$, which is compatible with the natural action of $\PGL_n(\mathbb C)$ on $\gr\mcM$ and $M$ for any $(n,k)$. Assume that $(n,k) \ne (2,1)$. By Theorem \ref{theor Aut O for Pi symmetric} we see that $H^0(M,\mathcal{A}ut \mathcal O) =\{id, \psi^{st}_{-1} \}$ (automorphisms which are identical on the base space $M$). In other words, $\phi_{\al} $ can be lifted to $\Pi\!\Gr_{n,k}$ if and only if $\al =\pm 1$. Note that the automorphism $\Psi^{st}_{-1}=(\id, \psi^{st}_{-1})$ is defined on any supermanifold and it always commutes with any other automorphism. This implies the result for $n\ne 2k$. For $n=2k$, $k\geq 2$, by Theorem \ref{theor lift of Phi}, the automorphism $\phi_{\al} \circ (\Phi, \wedge d(\Phi))$ can be lifted to $\Pi\!\Gr_{n,k}$ if and only if $\al =\pm i$. Above we denoted the lift of $\phi_{i} \circ (\Phi, \wedge d(\Phi))$ by $\Theta$. We check that $ \{\id, \Psi^{st}_{-1}, \Theta, \Psi^{st}_{-1}\circ \Theta \}$ is a subgroup and $\PGL_{2k}(\mathbb C)$ is a normal subgroup in $\operatorname{Aut} \mathcal M$. Indeed, $\Theta^2 = \Psi^{st}_{-1}$ since \begin{align*} \gr (\Theta^2) = \gr (\Theta)^2 = \phi_{i} \circ (\Phi, \wedge d(\Phi)) \circ \phi_{i} \circ (\Phi, \wedge d(\Phi)) = \phi_{-1} = \gr (\Psi^{st}_{-1}). \end{align*} Further, $(\Psi^{st}_{-1})^2 =\id$ and $\Psi^{st}_{-1}$ is central. 
Moreover, $\PGL_{2k}(\mathbb C)$ is normal in $\operatorname{Aut}(\gr \mathcal M)$, hence it is normal in $\operatorname{Aut}\mathcal M$ as well. We also have $\Theta \circ g\circ \Theta^{-1} = (g^t)^{-1}$ for $g\in \PGL_{2k}(\mathbb C)$, since by Theorem \ref{theor aut gr mcM} we have \begin{align*} \gr (\Theta \circ g\circ \Theta^{-1}) = (\Phi, \wedge d(\Phi)) \circ g\circ (\Phi, \wedge d(\Phi))^{-1} = (g^t)^{-1}. \end{align*} Let us define the action of $\AA$ on the Lie superalgebra $\mathfrak v(\Pi\!\Gr_{n,k})$. The actions of $\Psi^{st}_{-1}$ and $(\id,\phi_{\al})$ were defined above. Further, since the automorphism $(\Phi, \wedge d(\Phi))$ preserves the $\Z$-grading, its action on $z$ is trivial. Let us compute the action of $\Theta$ on $\mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle$. Let $C$ be a homogeneous element in $\mathfrak{q}_{2k}(\mathbb C)/\langle E_{4k}\rangle$, then $E_{4k}+ t C$ is a one-parameter subgroup in $\Q_{2k}(\C)/\langle E_{4k}\rangle$. Here the parity of $t$ is the same as the parity of $C$. We need to compute $\Theta \circ (E_{4k}+ t C)\circ \Theta^{-1}$. We have \begin{align*} \left(\begin{array}{cc} X& \Xi \\ E&0\\ \Xi&X\\ 0&E \end{array} \right) \xrightarrow{\Theta^{-1}} \left(\begin{array}{cc} E&0\\ -X^t& i\Xi^t \\ 0&E\\ i\Xi^t&-X^t\\ \end{array} \right)\xrightarrow{(E_{4k}+ t C)} (E_{4k}+ t C) \cdot \left(\begin{array}{cc} E&0\\ -X^t& i\Xi^t \\ 0&E\\ i\Xi^t&-X^t\\ \end{array} \right)\\ \xrightarrow{\Theta} ((E_{4k} + t C)^{-1})^{t_i} \cdot \left(\begin{array}{cc} X& \Xi \\ E&0\\ \Xi&X\\ 0&E \end{array} \right). \end{align*} Therefore, \begin{align*} \Theta \circ C\circ \Theta^{-1} = \frac{d}{dt}\Big|_{t=0} (\Theta \circ (E_{4k}+ t C)\circ \Theta^{-1}) = \frac{d}{dt}\Big|_{t=0} ((E_{4k} + t C)^{-1})^{t_i} = - C^{t_i}. \end{align*} The proof is complete. \end{proof} \parskip=5pt \parindent=12pt \section{Real structures on a supermanifold} \subsection{Real structures on commutative superalgebras} Let us fix $\epsilon_i\in \{\pm 1\}$ for $i=1,2,3$. Following Manin \cite[Definition 3.6.2]{Manin} a {\em real structure of type $(\epsilon_1,\epsilon_2,\epsilon_3)$} on a $\C$-superalgebra $A$ is an $\mathbb R$-linear (even) automorphism $\rho$ of $A$ such that \[ \uprho(\uprho(a))=\epsilon_1^{\tilde a} a, \qquad\uprho(ab)= \epsilon_3\epsilon_2^{\tilde a\hs\tilde b}\hs\uprho b\,\uprho\hm a, \qquad \uprho(\lambda a)=\bar\lambda\hs\uprho\hm a ,\] for $\lambda\in\C$ and for homogeneous elements $a,b\in A$, where $\tilde a$ and $\tilde b$ denote the parities of $a$ and $b$, respectively. To any real structure $\rho$ of type $(\epsilon_1,\epsilon_2,\epsilon_3)$ on $A$, we can assign a real structure $\rho'$ of type $(\epsilon_1,\epsilon_2,-\epsilon_3)$ as follows: we put $\rho'= -\rho$. Moreover, we can assign a real structure $\rho''$ of type $(\epsilon_1,-\epsilon_2,\epsilon_3)$ as follows: we put $\rho''(a)=\rho(a)$ if $a$ is even, and $\rho''(a)= i \rho(a)$ when $a$ is odd, where $i=\sqrt{-1}$. See \cite[Section 1.11.2]{BLMS}. Thus in order to classify real structures on $A$ of all types $(\epsilon_1,\epsilon_2,\epsilon_3)$, it suffices to classify real structures of the types $(1,-1,1)$ and $(-1,-1,1)$. In this paper we consider real structures of the type $(1,-1,1)$. 
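For instance (a standard illustration, recorded here only to fix the conventions), let $A=\Lambda(\xi_1,\xi_2)$ be the complex Grassmann algebra on two odd generators $\xi_1,\xi_2$ and let $\rho$ be the coefficient-wise complex conjugation,
$$
\rho\big(\lambda_0+\lambda_1\xi_1+\lambda_2\xi_2+\lambda_{12}\,\xi_1\xi_2\big)= \ov{\lambda_0}+\ov{\lambda_1}\,\xi_1+\ov{\lambda_2}\,\xi_2+\ov{\lambda_{12}}\,\xi_1\xi_2, \qquad \lambda_0,\lambda_1,\lambda_2,\lambda_{12}\in\C.
$$
Then $\rho^2=\id$, $\rho(\lambda a)=\ov{\lambda}\,\rho(a)$ and $\rho(ab)=\rho(a)\,\rho(b)$ for all homogeneous $a,b\in A$, so $\rho$ is a real structure of type $(1,-1,1)$ on $A$; it is not of type $(1,1,1)$, since $\rho(\xi_1\xi_2)=\xi_1\xi_2\ne \rho(\xi_2)\rho(\xi_1)$. Its fixed point superalgebra is the real Grassmann algebra $\Lambda_{\mathbb R}(\xi_1,\xi_2)$.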
For a commutative superalgebra $A$ and a real structure $\rho$ of type $(1,-1,1)$ on $A$ the defining relations thus become \[\uprho(\uprho(a))= a, \qquad\uprho(ab)= \uprho\hm a\,\uprho b, \qquad \uprho(\lambda a)=\bar\lambda\uprho\hm a.\] For a definition of a real structure on a complex Lie superalgebra see for instance \cite{Serganova}. \def\Hm{{\mathcal H}} \subsection{Real structures on supermanifolds}\label{sec complex conjugation} Recall that a {\it real structure} on a complex-analytic manifold $M$ is an anti-holomorphic involution $\mu:M\to M$, see for instance \cite{ADima-Stefani}. This definition is equivalent to the following one. Let $(M,\mathcal F)$ be a complex-analytic manifold, that is, $\mathcal F$ is the sheaf of holomorphic functions on $M$. A homomorphism $\rho: \mathcal F\to \mathcal F$ of sheaves of real local algebras is called a {\it real structure} on $M$ if $$ \rho^2=id, \quad \rho(\lambda f) = \ov{\lambda} \rho(f), $$ where $\lambda\in \mathbb C$ and $f\in \mathcal F$. Let us check that these two definitions are indeed equivalent. If $\mu$ is an anti-holomorphic involution, we put $$ \rho(f) =\ov{\mu^*(f)}\in \mathcal F. $$ Conversely, if $\rho$ satisfies the second definition of a real structure, we put $$ \mu^*(f) = \ov{\rho(f)},\quad \mu^*(\ov{f}) = \rho(f)\quad \text{for}\,\,\, f\in \mathcal F. $$ Let us define a real structure on a supermanifold. A {\em real structure} of type $(1,-1,1)$ on a complex-analytic supermanifold $\M=(M,\mO)$ is a homomorphism $\rho: \mcO\to \mcO$ of sheaves of real local superalgebras such that \[ \rho^2=id, \quad \rho(\lambda f) = \ov{\lambda} \rho(f) \qquad\text{for}\ \ \lambda\in\C,\ f\in \mO.\] Since $\rho$ preserves the parity of elements in $\mcO$, it preserves the sheaf of ideals $\mcJ\subset \mcO$ generated by odd elements. Clearly, the induced sheaf homomorphism $\rho' : \mcO/\mcJ \to \mcO/\mcJ$ is a real structure on the underlying manifold $M$. As in the case of complex-analytic manifolds, let us give another definition of a real structure. To any complex-analytic supermanifold $\M=(M,\mO)$ of dimension $(n|m)$, we can assign a real supermanifold $\M^{\mathbb R}$ of dimension $(2n|2m)$. This procedure is described for instance in \cite[Section. From complex to real]{Kal}. There, a definition of the complex conjugation of a super-function was given; it can be described as follows. Any super-function $f\in \mcO$ is at the same time a morphism from $\mcM$ to $\mathbb C^{1|1}=\mathbb C^{1|0}\oplus \mathbb C^{0|1}$. More precisely, an even function $f\in \mcO_{\bar 0}$ can be regarded as a morphism $f:\mcM\to \mathbb C^{1|0}$ and an odd element $f\in \mcO_{\bar 1}$ can be regarded as a morphism $f:\mcM\to \mathbb C^{0|1}$. Further, let $(z,\eta)$ be the standard complex coordinates in $\mathbb C^{1|1}$, where $z=z_1+iz_2$ and $\eta=\eta_1+i\eta_2$, and $z_i$, $\eta_j$ are standard real even and odd coordinates in $\mathbb R^{2|2}$, respectively. In $\mathbb C^{1|1}$ we define a complex conjugation by the following formulas $$ \ov{z}= z_1-iz_2\quad \text{and} \quad \ov{\eta}= \eta_1-i\eta_2. $$ Then for any $f:\mcM \to \mathbb C^{1|1}$ its complex conjugation $\ov{f}$ can be regarded as the composition of the morphism $f$ with the complex conjugation in $\mathbb C^{1|1}$. We also have \begin{equation*} \ov{f_1 \cdot f_2} = \ov{f_1} \cdot \ov{f_2},\quad f_1,\,f_2\in \mcO. \end{equation*} If $f:\mcM\to \mathbb C^{1|1}$ is a morphism (or a super-function), we have for any morphism $F=(F_0,F^*):\mcM\to \mcM$ the following formula \begin{equation}\label{eq fucntion as a morphism} F^*(f) = f\circ F.
\end{equation} Now a {\it real structure $\mu$} on a supermanifold $\M$ is an anti-holomorphic involutive automorphism $\mu=(\mu_0,\mu^*)$ of the supermanifold $\M^{\mathbb R}$. That is $\mu^*$ maps $\mcO$ to $\ov{\mcO}$ and $\mu^2=\id$. \begin{proposition} Two definitions of a real structure on $\mcM$ coincide. \end{proposition} \begin{proof} Let $\mu=(\mu_0,\mu^*)$ be an anti-holomorphic involutive automorphism of $\mcM^{\mathbb R}$. Then we define $$ \rho(f):= \ov {\mu^* (f)}\quad \text{for} \,\,\, f\in \mcO. $$ Let us check that $\rho\circ \rho =\id$. Using (\ref{eq fucntion as a morphism}) we have \begin{align*} \rho (\rho(f)) = \rho (\ov {f\circ \mu}) = \ov {\mu^* (\ov {f\circ \mu})} = \ov { \ov {f\circ \mu \circ \mu}} = f. \end{align*} Other properties of $\rho$ are clear. On the other hand, let $\rho:\mcO\to \mcO$ be a real structure according to the first definition. By definition we put \begin{align*} \mu^*(f) = \ov {\rho (f)}\quad \text{for} \,\,\, f\in \mcO;\quad \mu^*(f) = \rho (\ov{f})\quad \text{for} \,\,\, f\in \ov{\mcO}. \end{align*} Clearly $\mu$ is anti-holomorphic. Further, we have \begin{align*} &\mu^*(\mu^*(f)) = \mu^*(\ov {\rho (f)}) = \rho( {\rho (f)}) = f\quad \text{for} \,\,\, f\in \mcO;\\ &\mu^*(\mu^*(f)) = \mu^*({\rho (\ov {f})}) = \ov{\rho( {\rho (\ov {f})}) }= f\quad \text{for} \,\,\, f\in \ov{\mcO}. \end{align*} The proof is complete. \end{proof} \def\upsig{{}^\gamma\hm} Two real structures $\mu,\mu'$ on $\M$ are called {\em equivalent} if the pairs $(\M,\mu)$ and $(\M,\mu')$ are isomorphic, that is, if there exists a (complex-analytic) isomorphism of supermanifolds $\beta\colon\M\to \M$ such that $\mu'\circ\beta=\beta\circ\mu$. \subsection{Isotropic $\Pi$-symmetric Hermitian super-Grassmannians}\label{sec isotropic Pi-symmetric super-Grassmannians} Let $A$ be a commutative superalgebra. For even matrices over $A$ an $i$-transposition is defined \begin{equation}\label{eq i transposition general} \left(\begin{array}{cc} A & B\\ C & D\\ \end{array} \right)^{t_i} = \left(\begin{array}{cc} A^t & iC^t\\ iB^t & D^t\\ \end{array} \right), \end{equation} which satisfies $(C_1\cdot C_2)^{t_i} = C_2^{t_i} \cdot C_1^{t_i}$. (Here $i B^t$ is the matrix $B^t$, transpose to the matrix $B$, multiplied by the complex number $i=\sqrt{-1}$.) The $i$-transposition maps a $\Pi$-symmetric even matrix to a $\Pi$-symmetric even matrix \begin{equation}\label{eq i transposition} \left(\begin{array}{cc} A & B\\ B & A\\ \end{array} \right)^{t_i} = \left(\begin{array}{cc} A^t & iB^t\\ iB^t & A^t\\ \end{array} \right). \end{equation} We denote an isotropic $\Pi$-symmetric Hermitian super-Grassmannian by $\Pi\operatorname {I}\!\Gr^{\mathbf{H}}_{2k,k}$, where $\mathbf{H}$ is the super-Hermitian form given by the following matrix in the standard basis of $\mathbb C^{2k|2k}$ \begin{align*} H= \left(\begin{array}{cc} b_k & 0\\ 0 & b_k\\ \end{array} \right), \quad b_k=\left(\begin{array}{cc} 0 & -iE_k\\ iE_k & 0\\ \end{array} \right), \end{align*} Compare with Section \ref{append matrix b_k}. We see that the restriction of the form $\mathbf{H}$ to the vector subspace of even $\Pi$-symmetric vectors is equal to the form $\mathcal F(-\,,-)$, see \ref{append matrix b_k}. 
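For orientation, and as an elementary consistency check with the conventions just fixed (the computation is immediate and is included only for the reader's convenience), in the smallest case $k=1$ the matrix of $\mathbf{H}$ reads
$$
H=\left(\begin{array}{cccc} 0&-i&0&0\\ i&0&0&0\\ 0&0&0&-i\\ 0&0&i&0 \end{array} \right), \qquad \ov{H}^{\,t}=H, \qquad H^2=E_4,
$$
so $H$ is a Hermitian matrix which coincides with its own inverse; the analogous identities $\ov{b}_k^{\,t}=b_k$ and $b_k^{-1}=b_k$ for arbitrary $k$ are recorded in Section \ref{append matrix b_k}.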
The isotropic $\Pi$-symmetric Hermitian super-Grassmannian $\Pi\operatorname {I}\!\Gr^{\mathbf{H}}_{2k,k}$ is a real subsupermanifold in $\Pi\!\Gr_{2k,k}$ defined by the following isotropy condition \begin{equation}\label{eq isotropy condition} (\ov{\mathcal Z}_I)^{t_i} \cdot H\cdot \mathcal Z_I = 0, \end{equation} where $\mathcal Z_I = \left( \begin{array}{cc} X'&\Xi'\\ \Xi'&X'\\ \end{array} \right)$ is a $\Pi$-symmetric coordinate matrix of $\Pi\!\Gr_{2k,k}$, $i$-transposition is as in (\ref{eq i transposition}) and $\ov{\mathcal Z}_I$ is complex conjugation of $\mathcal Z_I$, see Section \ref{sec complex conjugation}. Let us write this condition (\ref{eq isotropy condition}) explicitly for the coordinate matrix \begin{align*} \mathcal Z_{I}= \left(\begin{array}{cc} X&\Xi \\ E&0\\ \Xi& X\\ 0&E \end{array} \right). \end{align*} We get after a direct computation \begin{equation}\label{eq isotropy condition easy chart} X= (\ov{X})^t,\quad - i \Xi = (\ov{\Xi})^t. \end{equation} The base space of this supermanifold is described in Corollary (\ref{cor iso gr}). \section{Real structures on a $\Pi$-symmetric super-Grassmannian} \subsection{The action of the complex conjugation on $ \operatorname{Aut} \PiG_{n,k}$ }\label{ss:action of bar on Aut mcM} If $a\in \operatorname{Aut} \mathcal M$, we denote by $\upsig a\in \operatorname{Aut} \mathcal M$ the element satisfying $(\upsig a)^*(f) = \overline{a^*(\overline f)}$ for any local function $f$ on $\mathcal M$. (Recall that $\overline f$ is defined in Section \ref{sec complex conjugation}.) Let us compute $\upsig a$ for any $a\in \operatorname{Aut} \PiG_{n,k}$. The group $\operatorname{Aut} \PiG_{n,k}$ was computed in Theorem \ref{t:Aut}. \begin{proposition}\label{prop action of bar on Aut mcM} For any $(n,k)$ we have $$ \upsig g = \ov{g}, \quad g\in \PGL_n(\mathbb C),\quad \upsig\hs(\Psi^{st}_{-1}) = \Psi^{st}_{-1}. $$ For $(2k,k)$, $k\geq 2$, we have $$ \upsig\hs\Theta = \Theta^{-1}= \Psi^{st}_{-1}\circ \Theta. $$ For $(2,1)$ we have $$ \upsig\phi_{\al} = \phi_{\ov{\al}}. $$ \end{proposition} \begin{proof} First of all, $ \Psi^{st}_{-1}$ multiplies an odd local function by $-1$ and it is identical on an even function. Therefore, for any supermanifold we have $\upsig\hs(\Psi^{st}_{-1}) = \Psi^{st}_{-1}$, see Section \ref{sec complex conjugation}. Further, let $\mathcal Z_I$ be a coordinate matrix of $\PiG_{n,k}$, see Section \ref{sec def of a supergrassmannian} and $g\in \PGL_n(\mathbb C)$. Then the action of $\PGL_n(\mathbb C)$ on $\PiG_{n,k}$ is defined by the matrix multiplication $g\cdot \mathcal Z_I$, see \cite{Manin}. Therefore, $\overline{g\cdot \overline{\mathcal Z_I}} = \overline{g} \cdot \mathcal Z_I$. Hence, $\upsig g = \ov{g}$. The automorphism $\Phi$ of $\Gr_{n,k}$ is described in local coordinates in Section \ref{sec The automorphism group of Gr}. Clearly, $\upsig\hs\Phi = \Phi$. Note that $\upsig\hs(\Phi, \wedge d(\Phi))$ is the lift of $\upsig\hs\Phi$ to $ \textsf{T}^*(\Gr_{n,k})$. The result follows. Recall that $\phi_{\al}$ is defined for split supermanifolds and it multiplies a local section of the corresponding bundle by $\al$. Hence, $\upsig \phi_{\al}$ multiplies a local section of the corresponding bundle by $\ov{\al}$. To compute $\upsig\hs\Theta$ we note that $\gr (\ov{f}) = \ov{\gr (f)}$ for any function on $\PiG_{n,k}$. 
Therefore, $$\gr (\upsig\hs\Theta) = \upsig\hs(\gr \Theta) = \upsig\hs(\phi_{i} \circ (\Phi, \wedge d(\Phi))) = \phi_{-i} \circ (\Phi, \wedge d(\Phi)) = \gr (\Theta^{-1}).$$ Recall that the map $\gr \operatorname{Aut} \PiG_{n,k} \to \operatorname{Aut} (\gr\PiG_{n,k})$ is injective, see the proof of Theorem \ref{t:Aut}. The result follows. \end{proof} \subsection{Real structures on $\PiG_{n,k}$}\label{ss:real-structures} Let $\mu^o=(\mu^o_0,(\mu^o)^*)$ denote the standard real structure on $\M=\PiG_{n,k}$. Namely, the anti-holomorphic involution $\mu^o$ of $\PiG_{n,k}$ is induced by the complex conjugation in $\C^{n|n}$. More precisely, let us describe $\mu^o$ in our charts. For instance, in the chart $\mathcal V_{1}$, see (\ref{eq two standard charts}), with local coordinates $x_{ij} = x^1_{ij} + ix^2_{ij} $, $\xi_{ab} = \xi^1_{ab} + i \xi^2_{ab}$, where $x^1_{ij},\, x^2_{ij};\, \xi^1_{ab},\,\xi^2_{ab}$ are real even and odd coordinates, the real structure $\mu^o$ is given by $$ (\mu^o)^*(x^1_{ij} + ix^2_{ij}) = x^1_{ij} - ix^2_{ij},\quad (\mu^o)^*(\xi^1_{ab} + i \xi^2_{ab}) = \xi^1_{ab} -i \xi^2_{ab}. $$ In other charts the idea is similar. If $n$ is even, write \[ a_J={\rm diag}(J,\dots,J)\ \,\text{($n/2$ times),\ \ where}\quad J=\SmallMatrix{0 &1\\-1&0}.\] Set $c_J:=\pi(a_J)\in \PGL_n(\mathbb C)$, where $\pi\colon \GL_n(\mathbb C)\to\PGL_n(\mathbb C)$ is the canonical homomorphism. Due to Theorem \ref{t:Aut}, the element $c_J\in \PGL_n(\mathbb C)$ can be regarded as a holomorphic automorphism of $\mcM$. \begin{theorem}\label{c:Pi} The number of equivalence classes of real structures $\mu$ on $\mcM$, and representatives of these classes, are given in the list below: \begin{enumerate} \item[\rm (i)] If $n$ is odd, then there are two equivalence classes with representatives $$ \mu^o, \quad (1,\ps)\circ\mu^o. $$ \item[\rm (ii)] If $n$ is even and $n\neq 2k$, then there are four equivalence classes with representatives $$ \mu^o,\quad (1,\ps)\circ\mu^o, \quad (c_J,1)\circ\mu^o, \quad (c_J,\ps)\circ\mu^o. $$ \item[\rm (iii)] If $n=2k\ge 4$, then there are $k+3$ equivalence classes with representatives $$ \mu^o,\quad (c_J,1)\circ\mu^o, \quad (c_r,\Theta)\circ\mu^o, \,\, r= 0,\ldots, k. $$ \item[\rm (iv)] If $(n,k)= (2,1)$, then there are two equivalence classes with representatives $$ \mu^o,\quad (c_J,1)\circ\mu^o. $$ \end{enumerate} Here $\mu^o$ denotes the standard real structure on $\M=\PiG_{n,k}$ as above. Moreover, $c_J\in\PGL_n(\C)$ and $c_r\in\PGL_{2k}(\C)$ for $r= 0,\ldots, k$ are certain elements constructed in Proposition \ref{p:H1} and Subsection \ref{ss:cp}, respectively. \end{theorem} \begin{proof} By Proposition \ref{p:Serre-reduction} in Appendix \ref{app:cohomology}, the theorem follows immediately from Theorem \ref{t:Aut-M}. \end{proof} \subsection{Ringed space of real points of $\PiG_{n,k}$} We wish to describe the ringed spaces of real points of $\M=(M,\mO)$ corresponding to the real structures $\mu=(\mu_0,\mu^*)$ from Theorem \ref{c:Pi}. In more detail, the {\it ringed space of real points} is by definition the ringed space $\M^{\mu}:= (M^{\mu_0}, \mcO^{\mu^*})$, where $M^{\mu_0}$ is the set of fixed points of $\mu_0$ and $\mcO^{\mu^*}$ is the sheaf of fixed points of $\mu^*$ over $M^{\mu_0}$. Let us first describe the real points corresponding to the real structures $\mu^o$ and $\ps\circ \mu^o$.
\begin{proposition}\label{prop PiGr(R)} \begin{enumerate} \item The ringed space of real points of $\PiG_{n,k}$ corresponding to the real structure $\mu^o$ can be identified with $\PiG_{n,k}(\mathbb R)$. \item The ringed space of real points of $\PiG_{n,k}$ corresponding to the real structure $\ps\circ \mu^o$ can be identified with a real subsupermanifold, which we denote by $\PiG'_{n,k}(\mathbb R)$. \end{enumerate} \end{proposition} \begin{proof} The first statement is obvious. Let us describe the supermanifold $\PiG'_{n,k}(\mathbb R)$ using charts and local coordinates. An atlas $\mathcal A$ of $\PiG'_{n,k}(\mathbb R)$ contains charts $\mathcal U_{I}$, where $I\subset\{1,\ldots,n\}$ with $|I| = k$. To any $I$ we assign the following $2n\times 2k$-matrix $$ \mathcal Z_{I} =\left( \begin{array}{cc} X' & \Xi'\\ \Xi' & X' \end{array} \right), $$ We assume that $\mathcal Z_{I}$ contains the identity submatrix $E_{2k}$ of size $2k$ in the lines with numbers $i$ and $n+i$, where $i\in I$. We also assume that non-trivial elements $x_{ij}$ of $X'$ are real numbers, i.e. $\ov{x}_{ij} = x_{ij}$, while non-trivial elements $\xi_{ij}$ of $\Xi'$ are pure imaginary odd variables, i.e. $\ov{\xi}_{ij} = - \xi_{ij}$. Transition functions are defined as above. \end{proof} \begin{remark} Note that for $n= 2k\geq 4$, compare with Theorem \ref{c:Pi}, the real structure $\ps\circ \mu^o$ is equivalent to the real structure $\mu^o$. Indeed, we have $\Theta \circ(\ps\circ \mu^o )\circ \Theta^{-1} = \mu^o$. \end{remark} In Proposition \ref{p:real-points} in Appendix \ref{app:cohomology} a description of real points of the base space $\Gr_{2n,2k}$ corresponding to the real structure $(c_J,1)\circ \mu^o$ was given. Indeed, we have a natural embedding $\GL_{n}(\mathbb H)\hookrightarrow \GL_{2n}(\mathbb C)$ defined as follows. If $(a_{ij})\in \GL_{n}(\mathbb H)$, we replace any quaternion entry $a_{ij}$ by the complex matrix $\left(\begin{array}{cc} a^{ij}_{11}& a^{ij}_{12}\\ -\bar a^{ij}_{12}& \bar a^{ij}_{11} \end{array} \right)$, see (\ref{e:ijk}). This inclusion leads to an inclusion of homogeneous spaces \begin{equation}\label{eq inclusion of homog spaces} \Gr_{n,k}(\mathbb H)\simeq \GL_{n}(\mathbb H)/P(\mathbb H)\hookrightarrow \GL_{2n}(\mathbb C)/P(\mathbb C) \simeq \Gr_{2n,2k}, \end{equation} where $P(\mathbb H)\subset \GL_{n}(\mathbb H)$ contains all matrices in the form $\left(\begin{array}{cc} S& 0\\ T& U \end{array} \right)$ over $\mathbb H$, where $S\in \operatorname{Mat}_{n-k}(\mathbb H)$, and $P(\mathbb C)\subset \GL_{2n}(\mathbb C)$ contains all matrices in the form $\left(\begin{array}{cc} S'& 0\\ T'& U' \end{array} \right)$ over $\mathbb C$, where $S'\in \operatorname{Mat}_{2n-2k}(\mathbb C)$. Let us describe this embedding in terms of local charts on $\Gr_{n,k}(\mathbb H)$ and $\Gr_{n,k}(\mathbb C)$ as above. Let $U_I$, where $I\subset \{1,\ldots, n\}$ with $|I|=k$, be a local chart on $\Gr_{n,k}(\mathbb H)$. Denote by $I'\subset \{1,\ldots, 2n\}$ the following set \begin{equation}\label{eq I'} I':= \{ 2i,\,\,2i-1\,\,| i\in I \}. \end{equation} and by $U'_{I'}$ the corresponding to $I'$ chart on $\Gr_{n,k}(\mathbb C)$. Now the embedding (\ref{eq inclusion of homog spaces}) in charts $U_{I}\to U'_{I'}$ is given by $$ \mathbb H\ni x_{ij} \mapsto \left(\begin{array}{cc} x^{ij}_{11}& x^{ij}_{12}\\ -\bar x^{ij}_{12}& \bar x^{ij}_{11} \end{array} \right), $$ where $(x_{ij})$ are standard coordinates in $U_I$. 
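To make the chart description of this embedding more concrete (an elementary observation, included only for the reader's convenience), write $u=x^{ij}_{11}$ and $v=x^{ij}_{12}$ for the two complex components of a single quaternionic coordinate $x_{ij}$. Its image is the matrix
$$
\left(\begin{array}{cc} u& v\\ -\bar v& \bar u \end{array} \right) \quad \text{with}\quad \det \left(\begin{array}{cc} u& v\\ -\bar v& \bar u \end{array} \right)=|u|^{2}+|v|^{2},
$$
so a non-zero quaternionic entry is sent to an invertible $2\times 2$-matrix, and the image of the chart $U_{I}$ consists precisely of those points of $U'_{I'}$ all of whose $2\times 2$-blocks are of this shape.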
For instance we see that the image of $\Gr_{n,k}(\mathbb H)$ in $\Gr_{n,k}(\mathbb C)$ is covered by chart of the form $U'_{I'}$, where $I'$ is as in (\ref{eq I'}). \begin{proposition}\label{prop PiGr(H)2} Let $\M=\PiG_{2n,k}$. \begin{enumerate} \item If $k$ is odd, there are no real points corresponding to the real structure $(c_J,1)\circ \mu^o$. \item If $k=2k'$, the real points of $\M$ corresponding to the real structure $(c_J,1)\circ \mu^o$ can be identified with $\PiG_{n,k'}(\mathbb H)$. \end{enumerate} \end{proposition} \begin{proof} The first statement follows from Proposition \ref{p:real-points} in Appendix \ref{app:cohomology}. Indeed, in this case there is no real point on the base space. Assume that $k=2k'$. We have to compute the fixed points of $(c_J,1)\circ \mu^o$. Let us do that for $n=2$ and $k'=1$; the general case in charts $U'_{I'}$, see (\ref{eq I'}), is similar. We apply the standard real structure $\mu^o$ to the coordinates $Z_1$, see (\ref{eq two standard charts}), and now we apply $c_J$ \begin{align*} {\rm diag}(J,J,J,J) \left(\begin{array}{cccc} \bar x_{11}& \bar x_{12}&\bar \xi_{11}& \bar \xi_{12}\\ \bar x_{21}& \bar x_{22}& \bar \xi_{21} & \bar \xi_{22} \\ 1&0&0&0\\ 0&1&0&0\\ \bar \xi_{11}& \bar \xi_{12} &\bar x_{11}& \bar x_{12} \\ \bar \xi_{21} & \bar \xi_{22} &\bar x_{21}& \bar x_{22} \\ 0&0& 1&0\\ 0&0& 0&1\\ \end{array} \right) = \left(\begin{array}{cccc} \bar x_{21}& \bar x_{22}&\bar \xi_{21}& \bar \xi_{22}\\ -\bar x_{11}& -\bar x_{12}& -\bar \xi_{11} & \bar \xi_{12} \\ 0&1&0&0\\ -1&0&0&0\\ \bar \xi_{21}& \bar \xi_{22} &\bar x_{21}& \bar x_{22} \\ -\bar \xi_{11} & -\bar \xi_{12} &-\bar x_{11}& -\bar x_{12} \\ 0&0& 0&1\\ 0&0& -1&0\\ \end{array} \right) \end{align*} Therefore, in the standard coordinates, we get \begin{align*} \left(\begin{array}{cccc} \bar x_{21}& \bar x_{22}&\bar \xi_{21}& \bar \xi_{22}\\ -\bar x_{11}& -\bar x_{12}& -\bar \xi_{11} & \bar \xi_{12} \\ 0&1&0&0\\ -1&0&0&0\\ \bar \xi_{21}& \bar \xi_{22} &\bar x_{21}& \bar x_{22} \\ -\bar \xi_{11} & -\bar \xi_{12} &-\bar x_{11}& -\bar x_{12} \\ 0&0& 0&1\\ 0&0& -1&0\\ \end{array} \right) \left(\begin{array}{cccc} 0&-1 & 0& 0\\ 1&0 & 0& 0\\ 0& 0 &0&-1\\ 0& 0 & 1&0 \end{array} \right) =\\ \left(\begin{array}{cccc} \bar x_{22}& -\bar x_{21}&\bar \xi_{22}& -\bar \xi_{21}\\ -\bar x_{12}& \bar x_{11}& -\bar \xi_{12} & \bar \xi_{11} \\ 1&0&0&0\\ 0&1&0&0\\ \bar\xi_{22}& -\bar \xi_{21} &\bar x_{22}& -\bar x_{21} \\ -\bar \xi_{12} & \bar \xi_{11} &-\bar x_{12}& \bar x_{11} \\ 0&0& 1&0\\ 0&0& 0&1\\ \end{array} \right) = \left(\begin{array}{cccc} x_{11}& x_{12}& \xi_{11}& \xi_{12}\\ x_{21}& x_{22}& \xi_{21} & \xi_{22} \\ 1&0&0&0\\ 0&1&0&0\\ \xi_{11}& \xi_{12} & x_{11}& x_{12} \\ \xi_{21} & \xi_{22} & x_{21}& x_{22} \\ 0&0& 1&0\\ 0&0& 0&1\\ \end{array} \right) \end{align*} Therefore, $x_{11} = \bar x_{22}$, $ x_{12} = -\bar x_{21}$, $\xi_{11} = \bar \xi_{22}$, $ \xi_{12} = -\bar \xi_{21}$. In the matrix form we have $$ \left(\begin{array}{cccc} x_{11}& x_{12}& \xi_{11}& \xi_{12}\\ -\bar x_{12}& \bar x_{11}& -\bar \xi_{12} & \bar \xi_{11} \\ 1&0&0&0\\ 0&1&0&0\\ \xi_{11}& \xi_{12} & x_{11}& x_{12} \\ -\bar \xi_{12} & \bar \xi_{11} & -\bar x_{12}& \bar x_{11} \\ 0&0& 1&0\\ 0&0& 0&1\\ \end{array} \right) $$ Now we can regard the matrix $ z:=\left(\begin{array}{cc} x_{11}& x_{12}\\ -\bar x_{12}& \bar x_{11} \end{array} \right)$ as a new quaternion even variable, while $ \zeta:=\left(\begin{array}{cc} \xi_{11}& \xi_{12}\\ -\bar \xi_{12}& \bar \xi_{11} \end{array} \right)$ is a new quaternion odd variable. 
In short, the matrix $$ \left(\begin{array}{cc} z&\zeta\\ 1&0\\ \zeta& z\\ 0&1 \end{array} \right) $$ is a standard coordinate matrix of a chart on the $\Pi$-symmetric quaternionic super-Grassmannian $\PiG_{2,1}(\mathbb H)$, see above. Clearly, our computation does not depend on the choice of the local chart $U'_{I'}$, and it is similar for other $n$ and $k'$. \end{proof} \begin{proposition}\label{prop PiG'(H)} \begin{enumerate} \item If $n$ is even but $k$ is odd, then there are no real points corresponding to the real structure $(c_J,\ps)\circ \mu^o$. \item If $n=2n'$ and $k=2k'$, then the real points of $\M$ corresponding to the real structure $(c_J,\ps)\circ \mu^o$ form a real supermanifold, which we denote by $\PiG'_{2n',k'}(\mathbb H)$. \end{enumerate} \end{proposition} \begin{proof} The first statement follows from Proposition \ref{p:real-points} in Appendix \ref{app:cohomology}, since in this case there is no real point on the base space. If $k=2k'$ is even, we repeat an argument similar to that of Proposition \ref{prop PiGr(R)}. For simplicity we assume that $k'=1$. Applying $(c_J,\ps)\circ \mu^o$, we get for the odd coordinates (for the even coordinates the equation is the same as above): $$ \left(\begin{array}{cc} \xi_{11}& \xi_{12}\\ \xi_{21}& \xi_{22} \end{array} \right) = \left(\begin{array}{cc} -\ov{\xi}_{22}& \ov{\xi}_{21}\\ \ov{\xi}_{12}& -\ov{\xi}_{11} \end{array} \right). $$ Therefore, $\xi_{11} = -\ov{\xi}_{22}$ and $\xi_{21} = \ov{\xi}_{12}$. Now $ \zeta:=\left(\begin{array}{cc} \xi_{11}& \xi_{12}\\ \bar \xi_{12}& -\bar \xi_{11} \end{array} \right)$ is a new quaternion odd variable. In this way local charts on $\PiG'_{2n',k'}(\mathbb H)$ are defined. \end{proof} \begin{remark} Note that for $n= 2k\geq 4$, compare with Theorem \ref{c:Pi}, the real structure $(c_J,\ps)\circ \mu^o$ is equivalent to the real structure $(c_J,1)\circ \mu^o$. Indeed, we have $\Theta \circ((c_J,\ps)\circ \mu^o)\circ \Theta^{-1} = (c_J,1)\circ \mu^o$. \end{remark} Let us prove the following proposition. \begin{proposition}\label{prop Pi Isotropic_super_Gra} Let $n=2k$. \begin{enumerate} \item There are no real points corresponding to the real structure $(c_r,\Theta)\circ\mu^o, \,\, r= 0,\ldots, k-1$. \item Let $k\geq 2$. The real points of $\M$ corresponding to the real structure $(c_k,\Theta)\circ\mu^o$ can be identified with $\Pi\operatorname {I}\!\Gr^{\mathbf{H}}_{2k,k}$. \end{enumerate} \end{proposition} \begin{proof} The first statement follows from Corollary \ref{p:X}. In this case there are no real points on the base space. Let us prove the second statement. We take a coordinate matrix $\mathcal Z_I$ of $\PiG_{2k,k}$. The idea is to prove that the isotropy condition (\ref{eq isotropy condition}), see also (\ref{eq isotropy condition easy chart}), is equivalent to the condition $(c_k,\Theta)\circ\mu^o (\mathcal Z_I) =\mathcal Z_I$. Indeed, by Section \ref{append matrix b_k}, the cocycle $c_k$ may be represented by the matrix \begin{align*} H = \left(\begin{array}{cc} b_k & 0\\ 0 & b_k\\ \end{array} \right), \quad b_k=\left(\begin{array}{cc} 0 & -iE_k\\ iE_k & 0\\ \end{array} \right). \end{align*} We have \begin{align*} (c_k, \Theta) \circ\mu^o (\mathcal Z_I) = H \cdot \Theta (\ov{\mathcal Z}_I) = H \cdot (\ov{\mathcal Z}_I^{\perp})^{t_i}. \end{align*} Here we denoted by $\mathcal Z^{\perp}_I$ the result of application of the map (\ref{eq Z to Z perp}).
Further, we have \begin{align*} (\ov{H \cdot (\ov{\mathcal Z}_I^{\perp})^{t_i}})^{t_i} \cdot H\cdot \mathcal Z_I= (\mathcal Z_I^{\perp}) \ov{H}^{t} \cdot H \cdot \mathcal Z_I= - \mathcal Z_I^{\perp}\cdot \mathcal Z_I =0. \end{align*} Therefore the fixed point condition $H \cdot (\ov{\mathcal Z}_I^{\perp})^{t_i} = \mathcal Z_I$ is equivalent to the isotropy condition (\ref{eq isotropy condition}) \end{proof} From Propositions \ref{prop PiGr(H)2}, \ref{prop PiGr(R)}, \ref{prop PiG'(H)}, \ref{prop Pi Isotropic_super_Gra} and Theorem \ref{c:Pi}, we obtain the following result. \begin{theorem}\label{theor real main} Let $\mcM = \PiG_{n,k}$. The ringed space of real points $\mcM^{\mu}$ corresponding to the real structures $\mu$ are as follows. \begin{enumerate} \item[\rm (i)] If $n$ is odd, then $$ \mcM^{\mu^o} =\PiG_{n,k}(\mathbb R)\quad \text{and} \quad \mcM^{\ps\circ\mu^o} = \PiG'_{n,k}(\mathbb R). $$ \item[\rm (ii)] For $(n,k) = (2,1)$ we have $$ \mcM^{\mu^o} =\PiG_{2,1}(\mathbb R)\quad \text{and} \quad \mcM^{(c_J,1)\circ\mu^o} = \emptyset. $$ \item[\rm (iii)] If $n$ is even, $n\ne 2k$ and $k$ is odd, we have \begin{align*} \mcM^{\mu^o} =\PiG_{n,k}(\mathbb R), \quad &\mcM^{\ps\circ\mu^o} = \PiG'_{n,k}(\mathbb R), \quad \mcM^{(c_J,1)\circ\mu^o} = \emptyset,\\ &\mcM^{(c_J,\ps)\circ\mu^o} = \emptyset. \end{align*} \item[\rm (iv)] If $n,k$ are even and $n\ne 2k$, we have \begin{align*} \mcM^{\mu^o} =\PiG_{n,k}(\mathbb R), \quad &\mcM^{\ps\circ\mu^o} = \PiG'_{n,k}(\mathbb R), \quad \mcM^{(c_J,1)\circ\mu^o} = \PiG_{n/2,k/2}(\mathbb H),\\ &\mcM^{(c_J,\ps)\circ\mu^o} = \PiG'_{n/2,k/2}(\mathbb H). \end{align*} \item[\rm (v)] If $k$ is even and $n= 2k$, we have \begin{align*} \mcM^{\mu^o} =\PiG_{2k,k}(\mathbb R), \quad \mcM^{(c_J,1)\circ\mu^o} = \PiG_{k,k/2}(\mathbb H), \quad \mcM^{(c_r,\Theta)\circ\mu^o} = \emptyset, \,\,\, r=0,\ldots, k-1,\\ \mcM^{(c_k,\Theta)\circ\mu^o} = \Pi\operatorname {I}\!\Gr^{\mathbf{H}}_{2k,k}. \end{align*} \item[\rm (vi)] If $k\geq 3$ is odd and $n= 2k$, we have \begin{align*} \mcM^{\mu^o} =\PiG_{2k,k}(\mathbb R), \quad \mcM^{(c_J,1)\circ\mu^o} = \emptyset, \quad \mcM^{(c_r,\Theta)\circ\mu^o} = \emptyset, \,\,\, r=0,\ldots, k-1,\\ \mcM^{(c_k,\Theta)\circ\mu^o} = \Pi\operatorname {I}\!\Gr^{\mathbf{H}}_{2k,k}. \end{align*} \end{enumerate} \end{theorem} \newcommand{\mm}{{\mu}} \newcommand{\Am}{{\sf A}} \newcommand{\am}{{a}} \newcommand{\cm}{{{c}}} \newcommand{\Pm}{{P}} \newcommand{\Pmm}{{P\hskip 0.4pt}} \newcommand{\upgam}{{\hs^\gamma}} \newcommand{\gm}{{\gamma}} \newcommand{\sm}{{\sigma}} \renewcommand{\cG}{{\hs_\cm G}} \newcommand{\U}{{\rm U}} \newcommand{\PU}{{\rm PU}} \newcommand{\diag}{{\rm diag}} \def\GmR{{\mathbb G}_{{\rm m},\R}} \def\ctil{{\breve c}} \def\gbar{{\bar g}} \def\cF{{\mathcal F}} \def\db{{\breve{d}}} \def\ii{{\boldsymbol{i}}} \newcommand{\GrI}{{\text{\rm IGr}}} \appendix \section{Real Galois cohomology} \label{app:cohomology} \medskip \centerline{\em by Mikhail Borovoi} \medskip In this appendix we prove Theorem \ref{t:Aut-M} computing $H^1(\R, \Aut\,\M)$ where $M=\Pi\Gr_{n,k}(\C)$. This result gives us Theorem \ref{c:Pi} classifying real structures on $\Pi\Gr_{n,k}(\C)$. After that, we state and prove Proposition \ref{p:real-points}, Proposition \ref{p:X}, and Corollary \ref{cor iso gr}, which compute the set of real points of certain twisted forms of the real Grassmannian $\Gr_{n,\hs k,\hs\R}$\hs. \begin{subsec} Let $\Gamma={\rm Gal}(\C/\R)$ denote the Galois group of $\C$ over $\R$. Then $\Gamma=\{1,\gamma\}$, where $\gamma$ is the complex conjugation. 
Let $(\Am,\sm)$ be a pair where $\Am$ is a group (not necessarily abelian) and $\sm\colon \Am\to \Am$ is an automorphism such that $\sm^2=\id_\Am$. We define a left action of $\Gamma$ on $\Am$ by \[ (\gm, a)\mapsto \upgam\hm a\coloneqq\sm(a)\ \, \text{for}\ a\in \Am.\] We say that $(\Am,\sm)$ is a {\em $\Gamma$-group.} We consider the set of $1$-{\em cocycles} \[ Z^1(\Am,\sm)=\{\cm\in \Am\,\mid\,\cm\cdot\hm\upgam\hm\cm =1\},\] where $\upgam\hm c=\sm(c)$. The group $\Am$ acts on the left on $Z^1(\Am,\sm)$ by \[\am\star\cm=\am\cdot\cm\cdot\hm\upgam\hm \am^{-1}\ \ \text{for}\ \am\in \Am,\ \cm\in Z^1(\Am,\sm).\] As in \cite{Serre}, Section I.5.1, we define the 1-{\em cohomology set} \[H^1(\Am,\sm)=Z^1(\Am,\sm)/\Am,\] to be the set of orbits of $\Am$ in $Z^1(\Am,\sm)$. We shall write $Z^1\Am$ for $Z^1(\Am,\sm)$ and $H^1 \Am$ for $H^1(\Am,\sm)$. We denote by $[\cm]\in H^1 \Am$ the cohomology class of a $1$-cocycle $\cm\in Z^1 \Am$. Note that $1\in Z^1 \Am$; we denote its class in $H^1 \Am$ by $[1]$ or just by $1$. In general (when $\Am$ is nonabelian) the cohomology set $H^1\Am$ has no natural group structure, but it has a canonical neutral element $[1]$. \end{subsec} \begin{rem}\label{r:H1-obvious} \begin{enumerate} \item If $(\Am,\sm)=(\Am',\sm')\times(\Am'',\sm'')$, then \begin{align*} Z^1(\Am,\sm)=Z^1(\Am',\sm')\times Z^1(\Am'',\sm'')\ \ \text{and}\ \ H^1(\Am,\sm)=H^1(\Am',\sm')\times H^1(\Am'',\sm''). \end{align*} \item $H^1(\{\pm 1\}, \id)=Z^1(\{\pm 1\}, \id)=\{\pm 1\}$. \item $H^1\big(\C^*,(z\mapsto \bar z)\big)=\{1\}$ (exercise); this is a special case of Hilbert's Theorem 90. \end{enumerate} \end{rem} \subsec{} Let $G$ be a linear algebraic group defined over $\R$ (we write ``an $\R$-group"). We denote by $G(\R)$ and $G(\C)$ the groups of $\R$-points and of $\C$-points of $G$, respectively. The nontrivial element $\gamma\in\Gamma$ acts on $G(\C)$ by the complex conjugation $\sm\colon g\mapsto\bar g$. We write $H^1(\R,G)=H^1(G(\C),\sm)$. For simplicity, we write $H^1\hs G$ or $H^1\hs G(\C)$ for $H^1(\R,G)$. \begin{subsec} Consider the short exact sequence of $\G$-groups \begin{equation}\label{e:exact-GL} 1\to \C^*\to\GL(n,\C)\labelto\pi\PGL_n(\mathbb C)\to 1, \end{equation} where $\gamma\in\G={\rm Gal}(\C/\R)$ acts on $g\in\GL(n,\C)$ by the complex conjugation $g\mapsto \bar g$. This exact sequence induces a cohomology exact sequence \[1=H^1\GL(n,\C)\to H^1\PGL_n(\mathbb C)\labelto\Delta H^2\, \C^*;\] see Serre's book \cite{Serre}, Section I.2.2 for the definition of $H^2$, and Proposition 43 in Section I.5.7 for the exact sequence. See also \cite{BTb}, Section 1.1 and Construction 4.4. \end{subsec} \begin{prop}[well-known] \label{p:H1} \[\#H^1\PGL_n= \begin{cases} &\!\!1\qquad\text{if $n$ is odd}\\&\!\!2 \qquad\text{if $n$ is even} \end{cases} \] If $n$ is even, write \[ a_J={\rm diag}(J,\dots,J)\ \,\text{($n/2$ times)\ \ where}\quad J=\SmallMatrix{0 &1\\-1&0}.\] Then $\cm:=\pi(a_J)\in \PGL_n(\mathbb C)$ is a cocycle representing the nontrivial cohomology class in $H^1\PGL_n$. \end{prop} \begin{proof} The cardinality $\#H^1 \PGL_n$ can be computed, for instance, using \cite[Corollary 13.6]{BT}. If $n$ is even, then we have $a_J\hs\cdot \hm\upgam\hm a_J=a_J^2=-1$. It follows that $\cm\cdot\hm\upgam\hm\cm=\pi(-1)=1$. These formulas mean that $\cm$ is a 1-cocycle and that $\Delta[\cm]=[-1]\neq [1]$, whence $[\cm]\neq [1]$, as required. 
\end{proof} \begin{subsec}\label{ss:cp} Consider the short exact sequence of real algebraic groups \[1\to \U_1\to \U_n\labelto\pi\PU_n\to 1,\] where $\U_n$ is the unitary group, that is, $\GL(n,\C)$ with complex conjugation $\sigma$ acting by $g\mapsto (\bar g^t)^{-1}$ where $\bar g^t$ denotes the transposed matrix to the complex conjugate matrix $\bar g$. This exact sequence is the exact sequence \eqref{e:exact-GL} of complex algebraic groups, but with another complex conjugation. Consider the cocycles $a_p=\diag(-1,\dots,-1,1,\dots,1)\in Z^1\U_n$ where $-1$ appears $p$ times and $1$ appears $n-p$ times, $0\le p\le n$. Set $c_p=\pi(a_p)\in Z^1\PU_n$. \end{subsec} \begin{prop}[well-known] \label{p:H1-U} \[H^1\PU_n=\{\hs[c_p]\ |\ 0\le p\le n/2\}.\] In particular, when $n=2k$, we have $\# H^1 \PU_n=k+1$. \end{prop} \begin{proof}[Idea of proof] Since $\PU_n$ is a compact group, one can compute $\#H^1\PU_n$ by the method of Borel and Serre. See Serre \cite[Section III.4.5, Example (a) for Theorem 6]{Serre}, which says that $H^1\PU_n=T_2/W$, where $T$ is the diagonal maximal torus, $T_2$ is the set of its elements of order dividing 2, and $W=W(\PU_n, T)\cong S_n$ is the corresponding Weyl group (which is isomorphic to the symmetric group on $n$ symbols). \end{proof} \begin{subsec}\label{ss:semi-direct} Consider the cyclic group $C_4$ of order $4$ with generator $\Theta$. Define an action of $C_4$ on $\GL(n,\C)$ by the formula \begin{equation*} \Theta(g)=g^{-t}\quad\ \text{for}\ \, g\in \GL(n,\C) \end{equation*} where $g^t$ denotes the transposed matrix to $g$ and we write \[g^{-t}\coloneqq (g^{-1})^t=(g^t)^{-1}.\] Note that $\Theta^2$ acts on $\GL(n,\C)$ trivially. Set \[\tilde\AA=\GL(n,\C)\rtimes C_4.\] Define an action of $\Gamma=\{1,\gamma\}$ on $C_4$ by $\upgam (\Theta^r)=\Theta^{-r}$ for $r\in \Z$, and let $\Gamma$ act on $\GL(n,\C)$ by the usual formula $\upgam\hm g=\bar g$, the bar denoting complex conjugation. Then \[\upgam(\Theta(g))=\ov{(g^{-t})}=\bar g\,^{-t}=\Theta(\upgam\hm g)=\Theta^{-1}(\upgam\hm g)=\upgam \Theta(\upgam\hm g),\] which shows that the actions of $\gamma$ on $C_4$ and $\GL(n,\C)$ are compatible, and so $\Gamma$ naturally acts on $\tilde \AA$. The center $Z=Z(\GL(n,\C))$ (consisting of the scalar matrices) is clearly $\Theta$-invariant and $\Gamma$-invariant. We set \[\AA=\tilde\AA/Z=\PGL(n,\C)\rtimes C_4\hs.\] \end{subsec} \begin{lem}\label{l:H1:n=2k} Consider the $\Gamma$-group $\AA=\PGL(n,\C)\rtimes C_4$ as above, where $n=2k\ge 4$. Then $\#H^1\AA=k+3$ with cocycles \begin{equation*} (1,1),\ (c_J,1),\ (c_0,\Theta),\ (c_1,\Theta),\ \dots,\ (c_k,\Theta) \end{equation*} where $c_J\in Z^1 \PGL_{n,\R}\subset\PGL(n,\C)$ is as in Proposition \ref{p:H1}, and $c_p\in Z^1\PU_n\subset\PGL(n,\C)$ for $p=0,1\dots,k$ is as in Proposition \ref{p:H1-U}. \end{lem} \begin{proof} Consider the short exact sequence \begin{equation}\label{e:exact-A} 1\to \AA_0\to\AA\labelto\pi C_4\to 1 \end{equation} where $\AA_0=\PGL(n,\C)$, and consider the induced cohomology exact sequence \begin{equation}\label{e:cohom-A} H^1\AA_0\to H^1\AA\labelto{\pi_*} H^1 C_4\hs. \end{equation} We see from the definition of $H^1$ that $H^1 C_4=C_4/\langle\Theta^2\rangle\cong C_2$ with cocycles $1,\Theta$. Since the exact sequence \eqref{e:exact-A} splits, we see that the map $\pi_*$ in \eqref{e:cohom-A} is surjective. Thus \[H^1\AA=\pi_*^{-1}[1]\cup\pi_*^{-1}[\Theta].\] We compute $\pi_*^{-1}[1]=\ker\pi_*$. 
By Serre \cite[Section I.5.5, Corollary 1 of Proposition 39]{Serre}, the inclusion map \[\AA_0\into\AA,\quad a\mapsto (a,1)\quad\text{for}\ \, a\in \AA_0=\PGL(n,\C)\] induces a bijection \[(H^1\AA_0)/C_4^\Gamma\isoto\ker\pi_*\hs.\] Here $C_4^\Gamma=\{1,\Theta^2\}$. It acts on the right on $H^1\AA_0$ as follows. Let $a\in Z^1 \AA_0$; then \[ [a,1]*\Theta^2=\big[(1,\Theta^2)^{-1}\cdot (a,1)\cdot\upgam(1,\Theta^2)\big].\] Since $(1,\Theta^2)$ is of order 2, central in $\AA$, and $\Gamma$-fixed, we have \begin{equation*} (1,\Theta^2)^{-1}\cdot (a,1)\cdot\upgam(1,\Theta^2) =(a,1). \end{equation*} Thus $C_4^\Gamma$ acts on $H^1\AA_0$ trivially, and $\ker\pi_*\cong H^1\AA_0$. By Proposition \ref{p:H1} we obtain that \[\#\ker\pi_*=2 \quad\ \text{with cocycles}\ \,(1,1),\ (c_J,1).\] We compute $\pi_*^{-1}[\Theta]$. By Serre \cite[Section I.5.5, Corollary 2 of Proposition 39]{Serre}, the inclusion map (not a homomorphism) \[\AA_0\into\AA,\quad\ a\mapsto (a,\Theta)\ \,\text{for}\ \,a\in\AA_0\] induces a bijection \[(H^1\,_\Theta\AA_0)\hs/\,(_\Theta C_4)^\Gamma\,\isoto\,\pi_*^{-1}[\Theta].\] Since $C_4$ is an abelian group, we have $_\Theta C_4=C_4$ and $(_\Theta C_4)^\Gamma= C_4^\Gamma=\{1,\Theta^2\}$. It acts on the right on $H^1\hs_\Theta\AA_0$ as follows. Let $a\in Z^1 \hs_\Theta\AA_0$; then \[ [a,\Theta]*\Theta^2=\big[\hs(1,\Theta^2)^{-1}\cdot (a,\Theta)\cdot\upgam(1,\Theta^2)\hs\big].\] We calculate: \begin{equation*} (1,\Theta^2)^{-1}\cdot (a,\Theta)\cdot\upgam(1,\Theta^2)= (1,\Theta^2)^{-1}\cdot (a,\Theta)\cdot(1,\Theta^2)=(\Theta^{-2}(a),\Theta)=(a,\Theta). \end{equation*} Thus $(\hs_\Theta C_4)^\Gamma$ acts on $H^1\hs_\Theta\AA_0$ trivially, and $\pi_*^{-1}[\Theta]\cong H^1\hs_\Theta A_0$. By Proposition \ref{p:H1-U} we obtain that \[\#\pi_*^{-1}[\Theta]=k+1 \quad\ \text{with cocycles}\ \,(c_0,\Theta),\ \dots,\ (c_k,\Theta).\] which completes the proof of the lemma. \end{proof} \begin{subsec}\label{ss:cohom-vs-eq} Let $\M$ be a complex supermanifold, and $\mm$ be a fixed real structure on $\M$ of the type $(1,-1,1)$. Then $\mu^2=\id_\M$. Write $\Am=\Aut\, \M$, the group of even holomorphic automorphisms of $\M$. For $a\in\Am$, we write \[\sm(a)=\mu a\mu^{-1}=\mu a \mu.\] Then \[\sm(\sm(a))=a\quad \text{for all}\ a\in\Am.\] We obtain a $\Gamma$-group $(\Am,\sm)$, where $\Gamma=\{1,\gamma\}$ acts on $\Am$ by \[\upgam\hm a=\sm(a)=\mu a\mu^{-1}\quad \text{for}\ a\in\Am.\] Let $\mm'$ be any real structure on $\M$ of the type $(1,-1,1)$. Write $\mm'=\cm\circ\mm$; then $\cm\in\Aut\,\M$ is an (even) holomorphic automorphism. We have \[1=\mm^{\prime\hs\hs2}=(\cm\mm)^2=\cm\mm\cm\mm =\cm\cdot\mm\cm\mm^{-1}\cdot \mm^2=\cm\cdot\hm\upgam\hm \cm.\] We see that the condition $\mm^{\prime\hs\hs2}=1$ implies the {\em cocycle condition} \begin{equation}\label{e:cocycle} \cm\cdot\hm\upgam\hm \cm=1. \end{equation} Conversely, if an automorphism $\cm\in\Aut\,\M$ satisfies \eqref{e:cocycle}, and $\mm'=\cm\mm$, then \[\mm^{\prime\hs\hs2}=\cm\mm\hs\cm\mm=\cm\cdot\mm\cm\mm^{-1}\cdot\mm^2 =(\cm\cdot\hm \upgam \cm)\cdot\mm^2=1,\] and hence $\mm'$ is a real structure on $\M$. Now let $\mm_1=\cm_1\circ\mm$ and $\mm_2=\cm_2\circ\mm$ be two real structures on $\M$. Assume that the real structures $\mm_1$ and $\mm_2$ are {\em equivalent}, that is, $(\M,\mm_1)\simeq (\M,\mm_2)$. This means that there exists an automorphism $\am\colon \M\to\M$ such that \begin{equation}\label{e:rho2-rho1} \mm_2=\am\circ\mm_1\circ\am^{-1}. 
\end{equation} Then \[\cm_2\hs\mm=\am\cdot\cm_1\hs\mm\cdot \am^{-1} =\am\cdot\cm_1\cdot\hm\sm(\am)^{-1}\cdot\mm,\] whence \begin{equation}\label{e:equiv} \cm_2=\am\cdot\cm_1\cdot\hm\sm(\am)^{-1}=\am\cdot\cm_1\cdot\hm\upgam \am^{-1}. \end{equation} Conversely, if \eqref{e:equiv} holds for some $\am\in\Aut\,\M$, then \eqref{e:rho2-rho1} holds. This means that $\am$ is an isomorphism $(\M,\mm_1)\isoto (\M,\mm_2)$, and therefore the real structures $\mm_1$ and $\mm_2$ are equivalent. We can state the results of this subsection as follows: \end{subsec} \begin{prop}\label{p:Serre-reduction} For $\M$, $\mm$, and the $\Gamma$-group $(\Am,\sm)$ as in Subsection \ref{ss:cohom-vs-eq}, define an action of the group $\Gamma=\{1,\gamma\}$ on $\Am$ by \[\upgam\hm \am=\sm(\am)\coloneqq \mm\cdot\am\cdot\mm^{-1}\ \, \text{for}\ \am\in \Am.\] Then: \begin{enumerate} \item[(i)] The map \[\cm\mapsto \cm\circ \mm\quad \text{for}\ \cm\in Z^1(\Am,\sm)\] is a bijection between the set of $1$-cocycles $Z^1(\Am,\sm)$ and the set of real structures on $\M$. \item[(ii)] This map induces a bijection between $H^1(\Am,\sm)$ and the set of equivalence classes of real structures on $\M$, sending $[1]\in H^1(\Am,\sm)$ to the equivalence class of $\mm$. \end{enumerate} \end{prop} This proposition is similar to Proposition 5 in Section III.1.3 of Serre's book \cite{Serre}. Now let $\M$ be the $\Pi$-symmetric Grassmannian $\PiG_{n,k}$. Theorem \ref{t:Aut} describes the automorphism group $\Aut\hs\M=\Aut(\PiG_{n,k})$. We wish to compute $H^1\hs\Aut(\PiG_{n,k})$ and to classify the real structures on $\PiG_{n,k}$. We use the notation of Propositions \ref{p:H1} and \ref{p:H1-U}. \begin{thm}\label{t:Aut-M} The list below gives the cardinality $\#H^1\hs\Aut(\PiG_{n,k})$ and a set of representing cocycles for all cohomology classes in $H^1\hs\Aut(\PiG_{n,k})$: \begin{enumerate} \item[\rm (i)] Case $n$ is odd: $\#H^1=2$ with representatives: \[(1,1), (1,\ps)\,\in\, \PGL_n(\C)\times \{\id,\ps\}.\] \item[\rm (ii)] Case $n$ is even, $n\neq 2k$: $\#H^1=4$ with representatives: \[(1,1),(1,\ps),(\cm_J,1),(\cm_J,\ps)\,\in\, \PGL_n(\C)\times \{\id,\ps\}.\] \item[\rm (iii)] Case $n=2k\ge 4$: $\#H^1=k+3$ with representatives: \[(1,1),(\cm_J, 1),\,(c_0,\Theta), (c_1,\Theta),\dots,(c_k,\Theta)\,\in\,\PGL_{2k}(\C)\rtimes \{\id,\Theta,\ps,\ps\circ\Theta\}.\] \item[\rm (iv)] Case $n=2$, $k=1$: $\#H^1=2$ with representatives: \[(1,1),(\cm_J,1)\,\in\,\PGL_2(\C)\times\C^*.\] \end{enumerate} \end{thm} \begin{proof} (i) and (ii) follow from Theorem \ref{t:Aut}(1), Remark \ref{r:H1-obvious}(1), and Proposition \ref{p:H1}. (iii) follows from Theorem \ref{t:Aut}(2) and Lemma \ref{l:H1:n=2k}. (iv) for $n=2$, $k=1$ by Theorem \ref{t:Aut}(3) we have $\Aut(\PiG_{n,k})\cong\GmR\times\PGL_{2,\R}$. By Hilbert's Theorem 90, we have $H^1(\R,\GmR)=1$. It is well known that $\#H^1(\R,\PGL_{2,\R})=2$ with cocycles $1,\cm_J$; see, for instance, \cite{Borovoi22-CiM}, Theorem 3.1. \end{proof} We denote by $\GL_{n,\R}$\hs, $\GL_{n'\!,\hs\H}$\hs, etc. algebraic $\R$-groups, and by $\GL(n,\R)$, $\GL(n'\!,\hs\H)$, etc. their groups of $\R$-points. Here $\H$ is the division algebra of Hamilton's quaternions. \begin{lem}[well-known] Let $G=\GL_{n,\R}$ and assume that $n$ is even, $n=2n'$. Let $\cm=\pi(a_J)\in Z^1\big(\R,\PGL_n(\mathbb C)\big)$ be as in Proposition \ref{p:H1}. Then the twisted group $_\cm G$ is isomorphic to $\GL_{n'\!,\hs\H}$. 
In other words, \[\big\{g\in\GL(n,\C)\ \big|\ \cm\cdot\bar g\cdot \cm^{-1}=g\big\}\cong \GL(n',\H).\] \end{lem} \begin{proof} Let $M_n(\R)$ denote the algebra of $n\times n$-matrices over $\R$. Then $G(\R)=\GL(n,\R)=M_n(\R)^*$. In order to compute $_\cm G$, it suffices to compute the twisted algebra $_\cm M_n$. Let $X\in M_n(\C)$. We write $X$ as a block matrix \[ X=(X_{ij})_{1\le i,j\le n'}\hs,\quad\text{where}\ X_{ij}\in M_2(\C).\] Then $X\in \hs_\cm M_n(\R)$ if and only if \[a_J\cdot\ov X\cdot a_J^{-1}=X,\] that is \begin{equation}\label{e:Xij} J\cdot\ov X_{ij}\cdot J^{-1}=X_{ij}\quad\text{for all}\ i,j. \end{equation} We write $Y$ for $X_{ij}$. Then condition \eqref{e:Xij} is equivalent to \begin{equation}\label{e:Y} J\cdot \ov Y=Y\cdot J. \end{equation} An easy calculation shows that condition \eqref{e:Y} on $Y$ means that \[Y=\Mat{u &v\\ -\bar v &\bar u},\quad\text{where}\ u,v\in\C.\] In other words, \[ Y=\lambda_1 +\lambda_2 \ii+\lambda_3 \jj+\lambda_4 \kk,\] where $\lambda_1, \dots,\lambda_4\in\R$ and \begin{equation}\label{e:ijk} \ii=\Mat{0&1\\-1&0},\quad \jj=\Mat{0&i\\i&0},\quad \kk=\Mat{i&0\\0&-i}. \end{equation} One checks that \[\ii^2=-1=\jj^2,\qquad \ii\jj=\kk=-\jj\ii.\] Thus \[(\hs_\cm M_2)(\R)\cong \H,\quad (\hs_\cm M_n)(\R)\cong M_{n'}(\H), \quad (\hs_\cm\!\GL_n)(\R)\cong\GL_{n'}(\H),\] and hence $_\cm\!\GL_n\cong \GL_{n'\!,\hs\H}$\hs, as required. \end{proof} \begin{prop}\label{p:real-points} Let $n=2n'$ be an even natural number. Consider the Grassmann variety $X=\Gr_{n,k,\R}$ over $\R$ and its twisted form $_\cm X\coloneqq(\Gr_{n,k}\hs,\hs\cm\circ\mm)$, where $\cm=\pi(a_J)\in Z^1\PGL_n(\mathbb C)$ is as in Proposition \ref{p:H1}, and $\mm$ is the standard complex conjugation in $\Gr_{n,k,\C}$\hs. \begin{enumerate} \item[\rm(i)] if $k$ is even, $k=2k'$, then the set of real points $(_\cm X)(\R)$ is in a canonical bijection with the set $\Gr_{n'\!,\hs k'}(\H)$ of quaternionic $k'$-dimensional subspaces in the quaternionic $n'$-dimensional space $\H^{\hs n'}$. In other words, \[\big\{x\in\Gr_{n,k}(\C)\ \big|\ \cm\cdot \bar x=x\big\}\cong \Gr_{n'\!,\hs k'}(\H).\] \item[\em(ii)] If $k$ is odd, then the set $(_\cm X)(\R)$ is empty. \end{enumerate} \end{prop} \begin{proof} \label{ss:real-points} Write $V=\R^n$ with canonical basis $e_1,\dots,e_n$. Write $G=\GL_{n,\R}$, $X=\Gr_{n,k,\R}$. To any complex point $x\in X(\C)$ we assign its stabilizer $\Pm_x\subset G_\C$. Let $x_0\in X(\R)$ denote the point corresponding to the $k$-dimensional subspace $W_0=\langle e_1,\dots,e_k\rangle\subset V$, and write $\Pm_0=\Stab_{G_\C}(x_0)$. Then \[\Pm_0=\left\{\Mat{A&B\\0&D}\ \bigg|\ A\in M_k(\C)\right\}.\] We have a canonical {\em $\G$-equivariant} bijection \[x\mapsto \Pm_x,\quad\ x\in X(\C),\ \Pm_x\subset G_\C,\ \text{$\Pm_x$ is conjugate to $\Pm_0$.}\] We twist $G$ and $X$ with the same cocycle $\cm=\pi(a_J)\in Z^1\PGL_n(\mathbb C)$; then the map $x\mapsto \Pm_x$ is $\G$-equivariant also with respect to the twisted real structures corresponding to the twists $_\cm X$ and $_\cm G$. If $k$ is even, $k=2k'$, we consider the subgroup \[ \Pmm_{\H,0}=\left\{\Mat{A&B\\0&D}\ \bigg|\ A\in M_{k'}(\H)\right\}\subset \GL_{n'\hm,\H}.\] We embed $\H$ into $\M_2(\C)$ using the formulas \eqref{e:ijk}. In this way we obtain a canonical isomorphism \[\GL_{n'\hm,\H}\times_\R\hs\C\isoto \GL_{n,\C}\] sending $\Pmm_{\H,0}\times_\R\C$ to $\Pm_0$. Note that $H^1\hs \Pmm_{\H,0}=1$. 
Since the subgroup $\Pmm_{\H,0}\subset\hs_\cm G$ is defined over $\R$, we see that the corresponding point $x_0\in X(\C)=(\hs_\cm X)(\C)$ is defined over $\R$. We conclude that $(\hs_\cm X)(\R)$ is nonempty. To $x_0$ we assign the subspace \[W_{\H,0}=\langle e_1,\dots,e_{k'}\rangle \subset \H^{\hs n'}.\] For any $k'$-dimensional quaternionic subspace $W_\H\subset \H^{n'}$ there exists an invertible matrix $g\in \GL(n',\H)$ such that $W_\H=g\cdot W_{\H,0}$, and we obtain a subgroup $\Pm:=g\hs \Pm_0\hs g^{-1}\subset (\hs_\cm G)$ that is defined over $\R$ and is conjugate over $\R$ to $\Pm_0$. To this subgroup we assign the real point $x=g\cdot x_0\in (\hs_\cm X)(\R)$ with stabilizer $\Pm$. Conversely, if $x\in(\hs_\cm X)(\R)$, we set $\Pm_x=\Stab\hs_{_\cm G}\hs(x)\subset \hs_\cm G$. Then $\Pm_x\subset \hs_\cm G$ is a subgroup defined over $\R$ and conjugate to $\Pm_0=\Pmm_{\H,0}$ over $\C$. Set \[T_x=\{g\in (\hs_\cm G)(\C)\mid g\hs \Pm_0\hs g^{-1}=\Pm_x\}.\] This variety is clearly defined over $\R$ and nonempty over $\C$. Moreover, it is a torsor (principal homogeneous space) under the normalizer $N$ of $\Pmm_{\H,0}$ in $\hs_\cm G$. Since $N=\Pmm_{\H,0}$ and $H^1\hs \Pmm_{\H,0}=1$, we conclude that $T_x$ has an $\R$-point $g_x$; see Serre \cite[Section I.5.2]{Serre}. Thus there exists $g_x\in (\hs_\cm G)(\R)$ such that $\Pm_x=g_x\cdot \Pmm_{\H,0}\cdot g_x^{-1}$. To $x$ we assign the subspace $W_x=g_x\cdot W_{\H, 0}\subset \H^{\hs n'}$ (which does not depend on the choice of $g_x\in T_x(\R)$). Thus we obtain a bijection \[(\hs_\cm X)(\R)\to \Gr_{n'\!,\hs k'}(\H),\quad\ x\mapsto (\Pm_x,\hs g_x)\mapsto g_x\cdot W_{\H,0}\hs,\] which proves assertion (i) of Proposition \ref{p:real-points}. If $k$ is odd, let $x\in\Gr_{2n'\hm, k\hs}(\C)$ be a $\C$-point. The stabilizer $\Pm_x$ of $x$ is a parabolic subgroup of {\em odd} codimension $k(2n'-k)$. On the other hand, by Lemma \ref{l:even} below, any parabolic subgroup of $_\cm G$ that is {\em defined over $\R$} has {\em even} codimension. It follows that the parabolic subgroup $\Pm_x$ is not defined over $\R$, and hence $x$ is not an $\R$-point. We conclude that when $k$ is odd, $_\cm X$ has no real points, which proves assertion (ii) of Proposition \ref{p:real-points}. \end{proof} \begin{lem}\label{l:even} Let $Q\subseteq \cG\cong\GL_{n'\hm,\hs\H}$ be a parabolic $\R$-subgroup. Then the codimension of $Q$ in $\cG$ is divisible by 4. \end{lem} \begin{proof}[Idea of proof] Let $S\subset\cG$ be the $\R$-subtorus such that \[S(\R)=\big\{{\rm diag}(\lambda_1,\dots,\lambda_{n'})\ |\ \lambda_i\in\R^*\big\}.\] Then $S$ is a maximal split $\R$-torus in $\cG$. Consider the relative root system $R(\cG, S)$. It is easy to see that the root subspace $\g_\beta$ for any root $\beta\in R(\cG,S)$ has dimension 4. Let $P\subset \cG$ be the $\R$-parabolic (parabolic $\R$-subgroup) such that $P(\R)$ is the group of upper triangular quaternionic matrices in $\GL(n',\H)$; then $P$ is a minimal $\R$-parabolic in $\cG$. Let $P'\supseteq P$ be any {\em standard $\R$-parabolic} in $\cG$ with respect to $S$ and $P$ in the sense of Borel and Tits \cite[Section 5.12]{Borel-Tits}; then \[\Lie(\cG)=\Lie(P')\oplus\bigoplus_{\beta\in\Xi}\g_\beta\hs,\] where $\Xi\subseteq R(\cG,S)$ is some subset. It follows that the codimension of $P'$ is divisible by 4. By \cite[Proposition 5.14]{Borel-Tits}, any $\R$-parabolic $Q$ of $\cG$ is conjugate (over $\R$) to a standard $\R$-parabolic, and the lemma follows.
\end{proof} \begin{subsec} If $V$ is a complex vector space, $W\subset V$ a subspace, and $B$ is a bilinear form (or a Hermitian form) on $V$, then we write \[ W^{\bot B}=\{x\in V\ |\ B(x,y)=0\ \,\text{for all}\ \,y\in W\}\] for the annihilator of $W$ in $V$ with respect to $B$. Let $n=2k$. The group $\AA=\PGL_{n,\R}\rtimes C_4$ of Subsection \ref{ss:semi-direct}, where $C_4=\{1,\Theta,\Theta^2,\Theta^3\}$, acts by conjugation on its identity component $G=\PGL_{n,\R}$. Therefore, for any cocycle $c\in Z^1\AA$ we obtain a twisted group $_c G$. Moreover, the group $\AA$ naturally acts on $X\coloneqq \Gr_{n,k}(\C)$. Namely, for a $k$-dimensional subspace $W$ of $V=\C^n$, and $[g]\in\PGL(n,\C)$, the class of $g\in\GL(n,\C)$, we have $[g]*W=g(W)$. Furthermore, the generator $\Theta$ of $C_4$ sends $W$ to $W^{\bot B_0}$, the annihilator of $W$ in $V=\C^n$ with respect to the symmetric bilinear form $B_0$ with matrix $\diag(1,\dots,1)$. We show that this action of $\AA$ on $X(\C)$ is well defined. For $g\in \PGL(n,\C)$ we compute \begin{align*} \Theta(g(W))&=g(W)^{\bot B_0}\\ &=\{x\in V\ |\ B_0(x,gW)=0\}\\ &=\{x\in V\ |\ B_0(g^t x,W)=0\}\\ &=\{g^{-t}y\ |\ B_0(y,W)=0\} \qquad y=g^t x,\ \,x=g^{-t}y\\ &=g^{-t} \cdot W^{\bot B_0}\\ &=\Theta(g)\cdot\Theta(W). \end{align*} Thus \[\Theta(g(W))=\Theta(g)\cdot\Theta(W)\] as required. For any cocycle $c\in Z^1\AA$ we obtain a twisted variety $_c X$. \end{subsec} \begin{lem} Consider an element $c=(F,\Theta)$ where $F\in \PGL(n,\C)$. \begin{enumerate} \item[\rm (i)] $c$ is a cocycle if and only if $\ov F\hs^t=F$. \item[\rm (ii)] If $c=(F,\Theta)$ is a cocycle and $c'=(F',\Theta)$ with $F'=gF\gbar^t$, then $c'\sim c$. \end{enumerate} \end{lem} \begin{proof} (i) We write the cocycle condition $c\bar c=1$: \[(F,\Theta)\cdot(\ov F,\Theta^{-1})=(1,1),\] that is, \[F\cdot \ov F\hs^{-t}=1.\] Thus \[ \ov F\hs^t=F,\] as required. (ii) We consider the equivalent cocycle \begin{align*} (g,1)\cdot (F,\Theta)\cdot\ov{(g,1)}\hs^{-1} &= (g,1)\cdot \big(F\cdot (\gbar\hs^{-1})^{-t}, \Theta\big)\\ &=\big(g\!\cdot\! F\!\cdot\!\gbar\hs ^t,\Theta\big)=(F',\Theta), \end{align*} as required. \end{proof} Let $c=(F,\Theta)$ be a cocycle. We compute the twisted group $_c G$ for $G=\PGL_{n,\R}$ and the twisted manifold $_c X$ for $X=\Gr_{n,k,\R}$. \begin{lem}\label{l:cG(R)} For a cocycle $c=(F,\Theta)$ consider the twisted group $_c G$ with $G=\PGL_{n,\R}$. Then \[(\hs_c G)(\R)=\{g\in \PGL(n,\C)\ |\ g\hs F\hs\bar g\hs^t=F\}. \] \end{lem} \begin{proof} First, note that $c^{-1}=(F^t,\Theta^{-1})$. Indeed, \[(F,\Theta)\cdot(F^t,\Theta^{-1})=(F\cdot(F^t)^{-t},1)=(F\cdot F^{-1},1)=(1,1).\] Now, \begin{align*} (\hs_c G)(\R) &=\big\{g\ |\ c(\gbar,1) c^{-1}=(g,1)\big\}\\ &=\big\{g\ |\ (F,\Theta)\cdot(\gbar,1)\cdot(F^t, \Theta^{-1})= (g,1)\big\}. \end{align*} We calculate: \begin{align*} (F,\Theta)\cdot(\gbar,1)\cdot(F^t, \Theta^{-1}) &=(F,\Theta)\cdot (\gbar F^t,\Theta^{-1})\\ &=(F\cdot(\gbar F^t)^{-t},1)\\ &=(F\hs\gbar\hs^{-t} F^{-1},1). \end{align*} We obtain \begin{align*} (\hs_c G)(\R)&=\{g\in\PGL(n,\C)\ |\ g=F\hs\gbar\hs^{-t} F^{-1}\}\\ &=\{g\in \PGL(n,\C)\ |\ g F \gbar^t=F\}, \end{align*} as required. \end{proof} \begin{lem}\label{l:cX(R)} Let $c=(F,\Theta)$, $\ov F\hs^t=F$ (then $F,\Theta)$ is a 1-cocycle). 
Then $$(\hs_c X)(\R)=\{W\subset V\ |\ \ \dim W=k\ \ \text{and}\ \ \bar x\hs^t\hs F^{-1}y=0\ \text{for all}\ \,x,y\in W\}.$$ \end{lem} \begin{proof} We write \[(\hs_c X)(\R) = \{W\in\Gr_k(V)\ \,|\ \,(F,\Theta)\cdot\ov W=W\}.\] We have \[ (F,\Theta)\cdot\ov W=F\cdot \ov W^{\bot B_0}.\] Thus the condition for $W$ to lie in $(\hs_c X)(\R)$ is \begin{equation}\label{e:F-W-bar} F\cdot \ov W^{\bot B_0}=W, \end{equation} or, equivalently, \begin{equation}\label{e:Wbar-bot--B0} \ov W^{\bot B_0}=F^{-1}\cdot W. \end{equation} This implies \[B_0(\bar x,F^{-1}\cdot y)=0\quad\ \text{for all}\ \, x,y\in W,\] that is, \begin{equation}\label{e:xF-1y} \bar x \hs^t F^{-1} y=0\quad\ \text{for all}\ \, x,y\in W. \end{equation} Conversely, if \eqref{e:xF-1y} holds, then \[\ov W^{\bot B_0}\hs\supseteq F^{-1}\cdot W,\] and comparing the dimensions, we obtain in turn the equalities \eqref{e:Wbar-bot--B0} and \eqref{e:F-W-bar}, which gives $W\in (\hs_cX)(\R)$, as required. \end{proof} \begin{cor} \label{l:G} Let $G=\PGL_{n,\R}$. For $0\le p\le n$, let the cocycle $\ctil_p=(\pi(a_p),\Theta)\in Z^1\AA$ be as in Lemma \ref{l:H1:n=2k}. Write $_p G=\hs_{\ctil_p}G$. Then $_pG\cong \PU(p,n-p)$; namely, \[(\hs_pG)(\R)\cong \{g\in\PGL(n,\C)\ |\ \bar g^t\cdot c_p\cdot g=c_p\}.\] \end{cor} \begin{proof} The corollary follows from Lemma \ref{l:cG(R)} with $F=c_p\coloneqq\pi(a_p)$. \end{proof} \begin{cor}\label{p:X} Let $X=\Gr_{n,k,\R}$ where $n=2k\ge 4$. For $0\le p\le k$, let the cocycle $\ctil_p=(\pi(a_p),\Theta)\in Z^1\AA$ be as in Lemma \ref{l:H1:n=2k}. Write $_p X=\hs_{\ctil_p}X$. \begin{enumerate} \item[\rm (i)] The set of $\R$-points $(\hs_p X)(\R)$ is in a canonical $(_p G)(\R)$-equivariant bijection with the {\em isotropic Grassmannian} $\GrI(n,k,p)$, that is, the set of $k$-dimensional subspaces $W\subset V=\C^n$ that are totally isotropic with respect to the Hermitian form \[\mcH_p(x,y)=\bar x^t\cdot c_p\cdot y,\] where $x,y\in\C^n$ are column vectors. \item[\rm (ii)] This set $(\hs_p X)(\R)$ is non-empty if and only if $p=k$. \end{enumerate} \end{cor} Here we say that $W$ is totally isotropic with respect to $\mcH_p$ if $\mcH_p(x,y)=0$ for all $x,y\in W$. \begin{proof} Assertion (i) follows from Lemma \ref{l:cX(R)} with $F=c_p$. Note that $(c_p)^{-1}=c_p$\hs. If $p<k$, then the Hermitian form $\mcH_p$ admits no $k$-dimensional isotropic subspaces. If $p=k$, then $\mcH_k$ admits a $k$-dimensional isotropic subspace with basis $e_1-e_{k+1},\ e_2-e_{k+2},\ \dots,\ e_k-e_{2k}$. This proves (ii). \end{proof} \begin{rem} By the Witt theorem for Hermitian forms, the group $(\hs_k G)(\R)=\PU(k,k)$ acts on the isotropic Grassmannian $(\hs_k X)(\R)=\GrI(k,k)$ transitively; see, for instance, Dieudonn\'e \cite[Assertion (3) in Section I.11]{Dieudonne}. \end{rem} \begin{subsec}\label{append matrix b_k} Set \[b_k=\ii\begin{pmatrix}0& -E_k\\ E_k&0\end{pmatrix} \in\GL(2k,\C)\quad\ \text{where}\ \,\ii^2=-1.
\] Then \[\bar b_k^t=b_k\hs, \ \,b_k^{-1}=b_k\hs.\] Set \[d_k=\pi(b_k)\in\PGL(2k,\C), \ \ \db_k=(d_k,\Theta)\in Z^1(\Gamma,\AA).\] Consider the Hermitian form $\cF$ on $\C^{2k}$ given by \[ \cF(x,y)=\bar x\hs^t\hs b_k\hs y\quad\ \text{for}\ \, x,y\in \C^{2k}.\] \end{subsec} \begin{cor}\label{cor iso gr} \begin{enumerate} \item[\rm (i)] $(\hs_{\db_k}\hm G)(\R)=\{g\in \PGL(2k,\C)\ |\ g\hs d_k\hs \bar g^t=d_k\}\hs.$ \item[\rm (ii)] $(\hs_{\db_k}\hm X)(\R)=\{W\in \Gr_{2k,k}(\C)\ |\ \cF|_W=0\}.$ \end{enumerate} The set $(\hs_{\db_k}\hm X)(\R)$ is nonempty: it contains the $k$-dimensional subspace $W_0$ with basis $\{e_1\hs,\dots,e_k\}$. \end{cor} \begin{proof} Assertion (i) follows from Lemma \ref{l:cG(R)}, and assertion (ii) follows from Lemma \ref{l:cX(R)}. \end{proof} \begin{thebibliography}{999999} \bibitem[A]{ADima} {\it Akhiezer D. N.} Lie Group Actions in Complex Analysis. Aspects of Mathematics, E27. Friedr. Vieweg \& Sohn, Braunschweig, 1995. \bibitem[ACF]{ADima-Stefani} {\it Akhiezer D. N., Cupit-Foutou S.} On the canonical real structure on wonderful varieties, Journal f\"{u}r die reine und angewandte Mathematik (Crelles Journal), vol. 2014, no. 693, 2014, pp. 231-244. \bibitem[BCC]{BCC} {\it Balduzzi L., Carmeli C., Cassinelli G.} Super $G$-spaces. Symmetry in mathematics and physics, 159176, Contemp. Math., 490, AMS Providence, RI, 2009. \bibitem[BLMS]{BLMS} {\it J. Bernstein, D. Leites, V. Molotkov, V. Shander.} {Seminar of Supersymmetries: volume 1 (edited by D. Leites)} Moscow, Russia: Moscow Center of Continuous Mathematical Education (MCCME). \bibitem[BO]{Bunegina} {\it Bunegina, V. A., Onishchik, A. L.} Homogeneous supermanifolds associated with the complex projective line, preprint no. 33, Inst. Math. Univ. Oslo, Oslo 1993 \bibitem[Ber]{Bern} {\it Deligne P., Morgan J.W.} Notes on supersymmetry (following Joseph Bernstein), Quantum Fields and Strings: A Course for Mathematicians, Vols. 1,2 (Princeton, NJ, 1996/1997), 41-97. American Mathematical Society. Providence, R.I. 1999. \bibitem[Brl-T]{Borel-Tits} {\it Borel A., Tits J.} Groupes r\'eductifs. Inst. Hautes \'Etudes Sci. Publ. Math. No. 27 (1965), 55--150. \bibitem[Bor22]{Borovoi22-CiM} {\em Borovoi, M.} Galois cohomology of reductive algebraic groups over the field of real numbers, Commun. Math. 30 (2022), no. 3, 191--201. \bibitem[BT1]{BT} {\it Borovoi M., Timashev D.A.} Galois cohomology of real semisimple groups via Kac labelings. Transform. Groups {\bf 26} (2021), 433--477. \bibitem[BT2]{BTb} {\it Borovoi M., Timashev D.A.} Galois cohomology and component group of a real reductive group. To appear in Israel J. Math., arXiv:2110.13062 [math.GR]. \bibitem[Die71]{Dieudonne} {\it Dieudonné, J.A.} La g\'eom\'etrie des groupes classiques. Troisième édition. Ergebnisse der Mathematik und ihrer Grenzgebiete , Band 5. Springer-Verlag, Berlin-New York, 1971. \bibitem[Gr]{Green} {\it Green P.} On holomorphic graded manifolds. Proc. Amer. Math. Soc. 85 (1982), no. 4, 587-590. \bibitem[Kac]{Kac} {\it Kac V.G.} Lie superalgebras. Advances in Mathematics Volume 26, Issue 1, October 1977, Pages 8-96. \bibitem[Kal]{Kal} {\it Kalus M.} Almost complex structures on real Lie supergroups, Canad. Math. Bull. 58 (2015), no. 2, 281 - 284. \bibitem[Kos]{Koszul} {\it Koszul, J.-L.} Connections and splittings of supermanifolds. Differential Geom. Appl. 4 (2) (1994), 151-61. \bibitem[L]{Leites} {\it Leites D.A.} Introduction to the theory of supermanifolds. Uspekhi Mat. 
Nauk, 1980, Volume 35, Issue 1(211), 3-57 \bibitem[Man]{Manin} {\it Manin Yu.I.} Gauge Field Theory and Complex Geometry, Springer-Verlag, Berlin e.a., 1988. \bibitem[Oni1]{COT} {\it Onishchik A.L.} Non-split supermanifolds associated with the cotangent bundle. Universit\'{e} de Poitiers, D\'{e}partement de Math., N 109. Poitiers, 1997. \bibitem[Oni2]{Oni_lifting} {\it Onishchik A.L.} Lifting of Holomorphic Actions on Complex Supermanifolds, Advanced Studies in Pure Mathematics, 2002: 317-335 (2002). \bibitem[Roth]{Rothstein}{\it Rothstein M.J.} Deformations of complex supermanifolds, Proc. Amer. Math. Soc. 95 (1985), 255-260. \bibitem[S]{Serganova} {\it Serganova V. V.} Classification of real simple Lie superalgebras and symmetric superspaces, Funct Anal Its Appl 17, 200-207 (1983). \bibitem[Ser]{Serre} {\it Serre J.-P.} Galois Cohomology, Springer-Verlag, Berlin, 1997. \bibitem[Vi1]{Vish_funk} {\it Vishnyakova E.G.} On holomorphic functions on a compact complex homogeneous supermanifold. Journal of Algebra 350 (1), 2012, 174-196. \bibitem[Vi2]{Vish_Pi sym} {\it Vishnyakova E.G.} Vector fields on $\Pi$-symmetric flag supermanifolds. Sao Paulo Journal of Mathematical Sciences 10 (1), 20-35. \bibitem[Vi3]{V} {\it Vishnyakova E.G.} On complex Lie supergroups and split homogeneous supermanifolds. Transformation groups, Vol. 16, Issue 1, 2011. P. 265 - 285. \end{thebibliography} \noindent \noindent E.~V.: Departamento de Matem{\'a}tica, Instituto de Ci{\^e}ncias Exatas, Universidade Federal de Minas Gerais, Av. Ant{\^o}nio Carlos, 6627, CEP: 31270-901, Belo Horizonte, Minas Gerais, BRAZIL, \noindent email: {\tt VishnyakovaE\symbol{64}googlemail.com} \bigskip \noindent M.~B.: Raymond and Beverly Sackler School of Mathematical Sciences, Tel Aviv University, 6997801 Tel Aviv, ISRAEL, \noindent email: {\tt [email protected]} \end{document}
2205.04194v1
http://arxiv.org/abs/2205.04194v1
Computational issues by interpolating with inverse multiquadrics: a solution
\documentclass[10pt]{article} \input{SetUpForArticle} \pgfplotsset{compat=1.17} \begin{document} \title{Computational issues by interpolating with inverse multiquadrics: a solution} \author{Stefano De Marchi$^1$ \thanks{Electronic address: \texttt{[email protected]}}, Nadaniela Egidi$^2$ \thanks{Electronic address: \texttt{[email protected]}}, Josephin Giacomini$^2$ \thanks{Electronic address: \texttt{[email protected]}}, Pierluigi Maponi$^2$ \thanks{Electronic address: \texttt{[email protected]}}, Alessia Perticarini$^2$ \thanks{Electronic address: \texttt{[email protected]}; Corresponding author}} \affil{$^1$ Department of Mathematics “Tullio Levi-Civita”, University of Padua, via Trieste 63, 35121 Padova, Italy} \affil{$^2$ School of Science and Technology - Mathematics Division, University of Camerino, via Madonna delle Carceri 9, 62032 Camerino, Italy} \date{} \maketitle \begin{abstract} We consider the interpolation problem with the inverse multiquadric radial basis function. The problem usually produces a large dense linear system that has to be solved by iterative methods. The efficiency of such methods is strictly related to the computational cost of the multiplication between the coefficient matrix and the vectors computed by the solver at each iteration. We propose an efficient technique for the calculation of the product of the coefficient matrix and a generic vector. This computation is mainly based on the well-known spectral decomposition in spherical coordinates of the Green's function of the Laplacian operator. We also show the efficiency of the proposed method through numerical simulations. \end{abstract} \keywords{IMQ radial basis functions; Green’s function; iterative method; translation technique.} \section{Introduction} Radial basis functions (RBFs) are efficient tools in approximation of functions and data, in the solution of many engineering problems~\cite{hardy1990theory}, including applications in image processing~\cite{flusser1992adaptive,carr1997surface}. They are a suitable tool for scattered data interpolation problems~\cite{lazzaro2002radial}, solution of differential equations~\cite{franke1998solving}, machine learning techniques~\cite{poggio1990networks} and other applications~\cite{hardy1990theory}. In particular, the solution of scattered data interpolation problems leads to a linear system that has a dense matrix of coefficients. We can identify three main difficulties in solving the corresponding linear system: (i) high computational cost when a large number of interpolation points is considered, (ii) numerical instability and (iii) a clever choice of the shape parameter. In the present paper, we deal with the computational cost of the solution of the linear systems in the case of large interpolation problems. For these linear systems, iterative methods are the standard solvers and the efficiency of these methods can be improved in several ways, for instance, by improving the efficiency of the iterate and/or by providing an effective preconditioner~\cite{Beatson1999} or decreasing the computational cost of each iteration. The {\it fast multipole method}~\cite{greengard1987fast} has been proposed to reduce the computational cost {{in the solution algorithms for}} integral equations arising in classical scattering theory. This gave rise to a set of effective techniques centred around the principle of \textit{divide et impera}. 
Among the many existing variants of this idea, several schemes have been developed to provide efficient solution strategies for structured linear systems~\cite{xi2014superfast,xia2012superfast}, eigenvalue problems~\cite{benner2013preconditioned,xi2014fast},{ interpolation problems by RBFs~\cite{cherrie2002fast,Beatson1998,Beatson1992}}, integral equations~\cite{egidi2009efficient,martinsson2005fast,gillman2012direct}, and partial differential equations~\cite{le2006h}. A review of similar techniques can be found in~\cite{cai2018smash}. In this work, we propose a method aimed to decrease the computational cost of the product of the coefficient matrix and a generic vector. The method is based on a local low-rank representation of the interpolation matrix $A$. This representation resembles ones used in the fast multipole method and is conceptually based on a simple remark: the computational cost of the product of a matrix $A$ of order $N$ and a generic vector is proportional to $N^2$, but if a matrix $A$ admits a decomposition of the form $A=UV$, where $U$ and $V$ are rectangular matrices $N \times p$ and $p \times N$, respectively, the action of $A$ has a computational cost proportional to $2\, N\, p$, so, this decomposition can be profitably used when $p\ll N$. The efficiency of the proposed technique is also shown through numerical experiments. The paper is organised as follows. In Section~\ref{sec:translation}, we describe the proposed method, in particular, we define the interpolation matrix decomposition, which is based on the spectral expansion of the fundamental solution of the Laplacian operator, and we introduce the translation strategy for the efficient computation of the matrix action. In Section~\ref{sec:simulations}, we show the results of some numerical experiments. Finally, we give some conclusions and future developments in Section~\ref{sec:conclusions}. \section{The interpolation problem with inverse multiquadric RBFs}\label{sec:translation} We consider the interpolation problem where the interpolation function is expressed in terms of inverse multiquadric RBFs. In the following, we focus on iterpolation problem on $\bbr^2$ but the results can be easily generalized in $\bbr ^s$, for different $s$. Initially, we summarise some preliminary concepts and fix notations. Let $\Omega$ be a subset of $\bbr ^2$, $\mathcal{X}=\{\bsx_1,\bsx_2, \dots, \bsx_N\} \subset \Omega$ be the set of $N$ distinct points, usually called \textit{data sites}, and $f_i \in \bbr$ be the \textit{data values}. Moreover, data values are supposed to be obtained from some unknown function $f:\Omega \rightarrow \bbr$ evaluated at the data sites, i.e. \begin{equation} \label{interpolation_cond} f(\bsx_i)=f_i, \qquad i=1, 2, \dots, N. \end{equation} Given a RBF defined through $\phi :[0,\infty )\rightarrow \bbr$, $\phi(r_j)=\phi(\Vert\bx-\bsy_j\Vert)$ for $r_j=\Vert\bsx-\bsy_j\Vert$, the interpolation problem consists in finding the approximation function $P_f(\bsx)$ as follows \begin{equation}\label{interp_eq} P_f(\bsx)=\sum_{j=1}^N c_j\phi(||\bsx-\bsy_j||), \qquad \bsx \in \Omega, \end{equation} where $||\cdot ||$ is the Euclidean norm, $\bsy_j$, $j=1, \dots, N$, are the centres of the RBFs and coincide with the data sites $\bsy_j=\bsx_j$, and $c_j$, $j=1, \dots, N$, are unknown coefficients. We want to determine these unknowns in such a way that: \begin{equation}\label{P_fiEqfi} P_f(\bsx_i)=f_i,\qquad i=1,2,\ldots,N. 
\end{equation} In order to compute $P_f$ in~\eqref{interp_eq}, we have to solve the system of linear equations~\eqref{P_fiEqfi} which has the form: \begin{equation}\label{interp_system} A\bc=\bbf, \end{equation} where the interpolation matrix $A$ has entries $a_{ij}=\phi(||\bsx_i-\bsy_j||)$, $i,j=1, \dots, N$, $\bc=(c_1, \dots, c_N)^T$ is the unknown coefficient vector, and $\bbf=(f_1,\dots,f_N)^T$ is the known term given by the interpolation condition~\eqref{P_fiEqfi}. The linear system~\eqref{interp_system} has a dense coefficients matrix, moreover, for large values of $N$ direct methods usually cannot be used to calculate its numerical solution due to the memory and CPU-time resources required. Thus, iterative methods remain the unique numerical alternative for dealing with such kind of linear systems. In general, the efficiency of such methods depends on the spectral and sparsity properties of the coefficient matrix. The former usually influence the number of iterations required to achieve a given accuracy in the numerical solution, while the latter influence the computational cost of each iteration, which is strictly dependent on the iterative method considered but, for large dense linear systems, most of this cost is due to the computation of the product of the corresponding coefficient matrix and the generic tentative solution. We point out that, from standard arguments on numerical linear algebra, the computational cost of this matrix-vector product is proportional to $N^2$, when $N$ is the order of the matrix, so we need to supply a more efficient technique for this computation. For the description of the proposed method, we consider the interpolation problem in $\rtwo$ with inverse multiquadric (IMQ-) RBFs, even if, this study can be extended to other families of RBFs and in other dimensions. More precisely, let $\bsy \in \rtwo$ be the center of the IMQ-RBFs, $\bsx \in \rtwo$, and let $t\in \bbr $ be the shape parameter, we consider: \begin{equation}\label{IMQ_def} \phi(||\bsx-\bsy||)=\dfrac{1}{\sqrt{t^2+||\bsx-\bsy||^2}}. \end{equation} In the following sections, we introduce the proposed technique. In particular, in Section~\ref{sec:decomposition}, we present the spectral expansion in spherical coordinates of the Green's function of the Laplacian operator and its use in the local decomposition of the matrix $A$. In Section~\ref{sec:translationTec}, we discuss the translation strategy for the efficient computation of the product of the matrix $A$ and a generic vector. Then, in Section~\ref{sec:costoComp}, we analyse the computational cost of the proposed technique. \subsection{The Green's function of the Laplacian operator }\label{sec:decomposition} The Green's function, $G(\bsx;\bsy)$, of a linear differential operator $L$ is a solution of the equation $ L G(\bsx;\bsy)=\delta(\bsx-\bsy)$, where $\delta$ is the Dirac delta. From standard arguments, we have that \begin{equation}\label{G_def} G(\bsX;\bsY)=\frac1{|| \bsX-\bsY||}, \qquad \bsX, \bsY \in \bbr^3, \bsX\not=\bsY, \end{equation} is the Green's function of the Laplacian operator $L=\Delta$~\cite{carslaw1959conduction}, moreover, $G(\bsX;\bsY)$ has a well-known spectral decomposition in spherical coordinates. 
In particular if $(\rho_x,\theta_x,\omega_x),$ $(\rho_y,\theta_y,\omega_y)$ denote the spherical coordinates of $\bsX$ and $\bsY$, respectively, where $\rho_x,\rho_y \in [0,+\infty), \theta_x,\theta_y \in [0,\pi], \omega_x,\omega_y \in [0,2\pi)$, we have the following spectral expansion of $G(\bsX;\bsY)$ when $\rho_x\ne \rho_y$ \begin{equation}\label{GreenDecompositionG} \begin{aligned} G(\bsX;\bsY)= &\sum_{n=0}^\infty \sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos\theta_y\right) P_n^m\left(\cos\theta_x\right) \cdot \\ & \cdot \cos\left(m(\omega_x-\omega_y)\right) \begin{cases} \rho_x^n/\rho_y^{n+1}, & \text{if } \rho_y>\rho_x, \\ \rho_y^n/\rho_x^{n+1}, & \text{if } \rho_x>\rho_y, \end{cases} \end{aligned} \end{equation} where $\epsilon_m$ is the Neumann factor, that is $\epsilon_0=1,\epsilon_m=2,\,m>0$, and $P_n^m$ is the associated Legendre function of order $ m $ and degree $ n $, see~\cite{morse1953methods} for more details. Furthermore, in a practical application of~\eqref{GreenDecompositionG}, we have to consider the truncated series, that is \begin{equation}\label{serieTroncata} \begin{aligned} G(\bsX;\bsY)\approx &\sum_{n=0}^M \sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos\theta_y\right) P_n^m\left(\cos\theta_x\right) \cdot \\ & \cdot \cos\left(m(\omega_x-\omega_y)\right) \begin{cases} \rho_x^n/\rho_y^{n+1}, & \text{if } \rho_y>\rho_x, \\ \rho_y^n/\rho_x^{n+1}, & \text{if } \rho_x>\rho_y, \end{cases} \end{aligned} \end{equation} where $M\in \bbn$ is the truncation parameter for the index $n$ in~\eqref{GreenDecompositionG}. The accuracy of approximation~\eqref{serieTroncata} strongly depends on the particular choice of $\bsX$, $\bsY \in \bbr ^3$. In particular, it depends on the ratio between $\rho_y$ and $\rho_x$, as stated in the following theorem. \begin{theorem}\label{thm:series_error} Let $\bsX,\bsY\in \bbr^3$ be such that $\rho_y<\rho_x$ and let $r=\dfrac{\rho_y}{\rho_x}$. For the absolute error $E$ in the truncated series~\eqref{serieTroncata}, that is \begin{equation*}\label{ErrDecomp} E(\bsX,\bsY)=\Bigg|G(\bsX;\bsY) - \sum_{n=0}^M \sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos\theta_y\right) P_n^m\left(\cos\theta_x\right)\cos\left(m(\omega_x-\omega_y)\right)\dfrac{1}{\rho_x}r^n\Bigg| , \end{equation*} we have the following bound \begin{equation}\label{maggiorazioneErr} E(\bsX,\bsY)\leq \dfrac{1}{\rho_x}\dfrac{r^{M+1}}{(1-r)}. 
\end{equation} \begin{proof} Let $\alpha$ be the angle between $\bsX$ and $\bsY$, we have: \begin{equation} \begin{aligned} E(\bsX,\bsY)=&\Bigg| \sum_{n=M+1}^\infty \sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos\theta_y\right) P_n^m\left(\cos\theta_x\right)\cos\left(m(\omega_x-\omega_y)\right)\dfrac{1}{\rho_x}r^n\Bigg|\\ &= \Bigg| \dfrac{1}{\rho_x}\sum_{n=M+1}^\infty P_n\left(\cos\alpha\right)r^n \Bigg|\leq \dfrac{1}{\rho_x}\sum_{n=M+1}^\infty\Bigg| P_n\left(\cos\alpha\right)\Bigg|r^n \leq \dfrac{1}{\rho_x}\sum_{n=M+1}^\infty r^n =\\ &=\dfrac{1}{\rho_x}\Bigg(\sum_{n=0}^\infty r^n-\sum_{n=0}^M r^n \Bigg) = \dfrac{1}{\rho_x}\Bigg(\dfrac{1}{1-r}-\dfrac{1-r^{M+1}}{1-r}\Bigg) = \dfrac{1}{\rho_x}\dfrac{r^{M+1}}{1-r} \end{aligned} \end{equation} where $P_n$ is the Legendre polynomial of degree $n$ and the second equality holds because \begin{equation*} \displaystyle{P_n(\cos \alpha)=\sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos\theta_y\right) P_n^m\left(\cos\theta_x\right)\cos\left(m(\omega_x-\omega_y)\right)}, \end{equation*} see~\cite[Formula (10.3.38)]{morse1953methods} for more details, and the last inequality holds because $|P_n(x)|\leq 1$ for all $\vert x \vert \leq 1$~\cite[Formula (28.35)]{spiegelschaum}.\end{proof} \end{theorem} We note that a similar theorem holds in the case $\rho_x<\rho_y$ and $r=\dfrac{\rho_x}{\rho_y}$. As a consequence of Theorem~\ref{thm:series_error}, we have that the series in~\eqref{GreenDecompositionG} converges uniformly on every compact set of the domain $||\bsX||>||\bsY||$ or on the domain $||\bsX||<||\bsY||$. In Figure~\ref{fig:ErrorTrend}, we report the result of a numerical experiment showing that, when $\bsX$ is sufficiently far from $\bsY$, formula~\eqref{serieTroncata} is accurate even with moderate values of $M$, while it produces large errors when $||\bsX||\approx ||\bsY||$, even if a large truncation index $M$ is used. More precisely, in Figure~\ref{fig:ErrorTrend}, $\rho_x\in \bigg [\dfrac{11}{10},21\bigg ]$ and $\rho_y=1$; in addition, $\theta_x=\theta_y=\omega_x=\omega_y=\dfrac{\pi}{3}$ in Figure~\ref{fig:ErrorTrend}\subref{subf:err_trend_a}, whereas $\theta_x=\pi$, $\theta_y=\dfrac{\pi}{4}$, $\omega_x=\dfrac{\pi}{3}$ and $\omega_y=\dfrac{\pi}{2}$ in Figure~\ref{fig:ErrorTrend}\subref{subf:err_trend_b}. \begin{figure}[!hbt] \centering \subfloat[\label{subf:err_trend_a}]{ \includegraphics[width=.48\textwidth]{images/Err_decomp0.eps}}\hfill \subfloat[\label{subf:err_trend_b}]{ \includegraphics[width=.48\textwidth]{images/Err_decomp1.eps} } \caption{The error $E(\bsX,\bsY)$ in the representation formula~\eqref{serieTroncata} with truncation index $M = 5$, $M=10$, $M = 20$, and $\rho_y=1$, when $\theta_x=\theta_y=\omega_x=\omega_y=\dfrac{\pi}{3}$~\protect\subref{subf:err_trend_a}, $\theta_x=\pi$, $\theta_y=\dfrac{\pi}{4}$, $\omega_x=\dfrac{\pi}{3}$, $\omega_y=\dfrac{\pi}{2}$~\protect\subref{subf:err_trend_b}. We note that the logarithmic scale is used for the axis of the ordinates.} \label{fig:ErrorTrend} \end{figure} \subsection{The translation technique}\label{sec:translationTec} If we consider $\bsX=(x_1,x_2,t/2)$, $\bsY=(y_1,y_2,-t/2)$, where $t$ is the shape parameter in~\eqref{IMQ_def}, $\bsx=(x_1,x_2)\in\bbr^2$, and $\bsy=(y_1,y_2)\in\bbr^2$, we have that \begin{equation}\label{G_eq_phi} G(\bsX;\bsY)=\frac1{|| \bsX- \bsY||}=\frac1{\sqrt{||\bsx-\bsy ||^2+t^2}}=\phi(||\bsx-\bsy ||) \end{equation} and Formula~\eqref{GreenDecompositionG} gives the spectral decomposition for the IMQ-RBFs with shape parameter $t$. 
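As a quick sanity check of \eqref{G_eq_phi} and of the truncated expansion \eqref{serieTroncata}, the following short Python sketch (ours, purely illustrative: the helper names \texttt{spherical} and \texttt{truncated\_G} are not taken from the FORTRAN implementation used in Section~\ref{sec:simulations}) lifts a pair of planar points to $\bbr^3$, evaluates the truncated series through \texttt{scipy.special.lpmv}, and compares the resulting error with the bound of Theorem~\ref{thm:series_error}.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv      # associated Legendre functions P_n^m

def spherical(v):
    """Spherical coordinates (rho, theta, omega) of a vector in R^3."""
    rho = np.linalg.norm(v)
    return rho, np.arccos(v[2] / rho), np.arctan2(v[1], v[0])

def truncated_G(X, Y, M):
    """Truncated expansion (serieTroncata) of 1/||X-Y||, valid for ||Y|| < ||X||."""
    rx, tx, ox = spherical(X)
    ry, ty, oy = spherical(Y)
    s = 0.0
    for n in range(M + 1):
        for m in range(n + 1):
            eps = 1.0 if m == 0 else 2.0          # Neumann factor
            s += (eps * factorial(n - m) / factorial(n + m)
                  * lpmv(m, n, np.cos(ty)) * lpmv(m, n, np.cos(tx))
                  * np.cos(m * (ox - oy)) * ry**n / rx**(n + 1))
    return s

t, M = 1.0, 10                                    # shape and truncation parameters
x, y = np.array([2.0, 1.0]), np.array([0.2, 0.1])
X, Y = np.append(x, t / 2), np.append(y, -t / 2)  # lift to R^3

phi   = 1.0 / np.sqrt(t**2 + np.linalg.norm(x - y)**2)  # IMQ value
exact = 1.0 / np.linalg.norm(X - Y)                     # Green's function value
r     = np.linalg.norm(Y) / np.linalg.norm(X)
bound = r**(M + 1) / ((1.0 - r) * np.linalg.norm(X))    # bound of the theorem above
print(abs(phi - exact))                            # zero up to rounding: phi equals G
print(abs(exact - truncated_G(X, Y, M)), bound)    # truncation error below the bound
\end{verbatim}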
We note that Formula~\eqref{GreenDecompositionG} resembles the usual representation of degenerate kernels, in fact, it expresses $\phi(||{\bsx}-{\bsy}||)$ as a series, where each term is a product of two functions of only one variable, i.e., a function of $\bsx$ and the other of $\bsy$. In addition, it identifies two regions where we have two different representations. In fact, the structure of~\eqref{GreenDecompositionG} requires us to divide the elements of $\bA$ into three sets: the elements where $||\bsx||<||\bsy||$, those where $||\bsx||>||\bsy||$, and those where $||\bsx|| = ||\bsy||$ for which~\eqref{GreenDecompositionG} cannot be used. This is an interesting formula resembling a low-rank decomposition of matrix $A$, but unfortunately it does not provide a low-rank representation of the matrix $A$ due to the aforementioned partition of $A$. However, formula~\eqref{GreenDecompositionG} provides a local decomposition for $\phi(||\bsx - \bsy||)$ that can be profitably used in the solution of~\eqref{interp_system}. When the ratio $\rho_y/\rho_x \approx 1$ (but $\bsX$ is substantially different from $\bsY$), a simple translation operation can be used in formula~\eqref{serieTroncata}. Let $\bsz=(z_1,z_2)\in \Omega \subset \bbr^2$ be such that $||\bsy-\bsz||<||\bsx-\bsz||$, and let $\bsZ=(z_1,z_2,-t/2)$, then from~\eqref{serieTroncata} we have: \begin{equation}\label{dec_phi_trasl} \begin{aligned} &\phi(||{\bsx}-{\bsy}||)= \phi(||({\bsx}-{\bsz})-({\bsy}-{\bsz})||)= G(\bsX-\bsZ;\bsY-\bsZ)\approx\\ &\approx\sum_{n=0}^M \sum_{m=0}^n \epsilon_m \frac{(n-m)!}{(n+m)!}P_n^m\left(\cos(\theta_{y-z})\right) P_n^m\left(\cos(\theta_{x-z})\right) \cos\left(m(\omega_{x-z}-\omega_{y-z})\right) \dfrac{\rho_{y-z}^n}{\rho_{x-z}^{n+1}}, \end{aligned} \end{equation} where $(\rho_{x-z},\theta_{x-z},\omega_{x-z}), \ (\rho_{y-z},\theta_{y-z},\omega_{y-z})$ are the spherical coordinates of vectors $\bsX-\bsZ$, $\bsY-\bsZ$ $\in \bbr^3$, respectively. The translation technique for the computation of $A\bc$ consists in a convenient use of~\eqref{dec_phi_trasl}. For simplicity of notation, let us define the following quantities: $d_{n,m}=\epsilon_m \dfrac{(n-m)!}{(n+m)!}$, $h_{n,m}(\bsX-\bsZ)=\dfrac{P_n^m\left(\cos(\theta_{x-z})\right)}{\rho_{x-z}^{n+1}} $, $j_{n,m}(\bsY-\bsZ)=P_n^m\left(\cos(\theta_{y-z})\right)\rho_{y-z}^n$. Thus, when $||\bsy-\bsz||<||\bsx-\bsz||$, we can separate the contribution of ${\bsx}$ and ${\bsy}$ in the following way: \begin{equation}\label{dec_separata} \begin{aligned} \phi(||{\bsx}-{\bsy}||)=& \sum_{n=0}^M \sum_{m=0}^n d_{n,m}h_{n,m}(\bsX-\bsZ)\cos(m\omega_{x-z})j_{n,m}(\bsY-\bsZ)\cos(m\omega_{y-z})+\\+&\sum_{n=0}^M \sum_{m=0}^n d_{n,m}h_{n,m}(\bsX-\bsZ)\sin(m\omega_{x-z})j_{n,m}(\bsY-\bsZ)\sin(m\omega_{y-z}). \end{aligned} \end{equation} We can suppose, without losing generality, that the domain $\Omega$ is contained in a square $D\subset \rtwo$. The proposed strategy considers a set of partitions of $D$: the first partition is obtained by dividing $D$ into $4\times 4$ equivalent blocks, the successive are obtained by bisecting the edges of the previous blocks. More precisely, the partition considered in $D$ depends on a recursive index $l$, with $l=1,2,\ldots,L$. For each level $l$, the square D is partitioned in $2^{l+1}\times2^{l+1}$ blocks like in Figure~\ref{fig:trasla}, where the first two levels of these partitions are considered. 
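Before describing how these blocks are used, the next illustrative Python fragment (ours; the chosen points and the helper names are arbitrary) shows the effect of the translation in \eqref{dec_phi_trasl}: for a pair with $||\bsx||\approx||\bsy||$ the untranslated truncated series is unreliable, while after shifting by a centre $\bsz$ with $||\bsy-\bsz||\ll||\bsx-\bsz||$ the same truncation index $M$ recovers $\phi(||\bsx-\bsy||)$ to nearly machine precision.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv

def spherical(v):
    rho = np.linalg.norm(v)
    return rho, np.arccos(v[2] / rho), np.arctan2(v[1], v[0])

def truncated_G(X, Y, M):
    """Truncated expansion (serieTroncata) of 1/||X-Y||, for ||Y|| < ||X||."""
    rx, tx, ox = spherical(X)
    ry, ty, oy = spherical(Y)
    s = 0.0
    for n in range(M + 1):
        for m in range(n + 1):
            eps = 1.0 if m == 0 else 2.0
            s += (eps * factorial(n - m) / factorial(n + m)
                  * lpmv(m, n, np.cos(ty)) * lpmv(m, n, np.cos(tx))
                  * np.cos(m * (ox - oy)) * ry**n / rx**(n + 1))
    return s

t, M = 1.0, 10
x, y = np.array([0.9, 0.1]), np.array([0.1, 0.9])   # ||x|| = ||y||: worst case
z    = np.array([0.15, 0.85])                       # centre chosen close to y
X, Y, Z = np.append(x, t / 2), np.append(y, -t / 2), np.append(z, -t / 2)

phi = 1.0 / np.sqrt(t**2 + np.linalg.norm(x - y)**2)
print(abs(phi - truncated_G(X, Y, M)))          # inaccurate: ||X|| equals ||Y|| here
print(abs(phi - truncated_G(X - Z, Y - Z, M)))  # tiny error after the translation
\end{verbatim}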
The idea of the proposed strategy is the following: when ${\bsy}\in \mathcal{X}$ is in a set $S$ (the dark gray squares in Figure~\ref{fig:trasla}) of the considered partition, we associate to ${\bsy}$ the center ${\bsz}$ of $S$ and we use formula~\eqref{dec_phi_trasl} only for ${\bsx}\in \mathcal{X}$ belonging to well-separated squares of the considered partition (the light gray squares in Figure~\ref{fig:trasla}). In this way, we obtain a high accuracy of formula~\eqref{dec_phi_trasl}, even for small values of the truncation index $M$. \begin{figure}[!hbt] \subfloat[\label{subf:liv1}]{ \includegraphics[width=0.45\textwidth]{images/Livello1_gray.eps} }\hfill \subfloat[\label{subf:liv2}]{ \includegraphics[width=0.45\textwidth]{images/livello2_dotted2.eps}} \caption{Example of the translation technique at levels $l=1$~\protect\subref{subf:liv1} and $l=2$~\protect\subref{subf:liv2}.} \label{fig:trasla} \end{figure} In more detail, for each point ${\bsy}$ in a block at level $1$, we choose ${\bsz}$ as the center of this block as shown in the dark gray block in Figure~\ref{fig:trasla}\subref{subf:liv1}. Then, we define the well-separated blocks, with respect to the {{ previously selected block that contains $\bsy$}}, as those blocks of points $\bsx$ such that $\dfrac{\rho_{y-z}}{\rho_{x-z}}<\dfrac{1}{2}$. In Figure~\ref{fig:trasla}\subref{subf:liv1}, the well-separated blocks are the light gray ones. So, {{for each ${\bsy}$,}} in a given block of the partition we apply formula~\eqref{dec_phi_trasl} for points ${\bsx}$ in the corresponding well-separated blocks of the same partition. In this way,{{ if $t=1$ and $D$ has unitary edge, we have an error $E<2.86 \cdot 10^{-9}$ when $M= 10$, and an error $E<4.41 \cdot 10^{-17}$ when $M= 20$. In fact, it is easy to verify that $\rho_{y-z}\leq \dfrac{\sqrt{2}}{8}$ and $\rho_{x-z}\geq\dfrac{\sqrt{73}}{8}$, so that $\dfrac{\rho_{y-z}}{\rho_{x-z}}\leq 0.1655$.}} At the second level, i.e. $l=2$, each block of level $1$ is partitioned in $2\times2$ equivalent squares, thus obtaining $64$ blocks, as in Figure~\ref{fig:trasla}\subref{subf:liv2}. At level $2$, the same procedure is applied to the part of the domain $D$ that has not already been considered at level $1$. So, for each point ${\bsy}$ in a smaller block, as the dark gray block in Figure~\ref{fig:trasla}\subref{subf:liv2}, we choose ${\bsz}$ as the center of the block, and we apply formula~\eqref{dec_phi_trasl} for each point ${\bsx}$ in the new well-separated blocks, i.e., the light gray blocks in Figure~\ref{fig:trasla}\subref{subf:liv2}. {{Also at this level we have the same accuracy of previous level.}} The dotted region in Figure~\ref{fig:trasla}\subref{subf:liv2} highlights the region where formula~\eqref{dec_phi_trasl} has been already applied and so it does not need to be treated at this level. We repeat this procedure for all the blocks until the finest level $L$ is reached, where the contributions of the remaining white regions are directly computed by using formula~\eqref{IMQ_def} in the matrix action. In Algorithm~\ref{algo}, we report the translation technique for computing the{{ matrix-vector product $\bb=A\bu$, when $\bu \in \bbr^N$ is a generic vector. In this algorithm, we use the following notation: $P_l$ is the set of all blocks at level $l$, $p$ is one of these blocks, $S_p$ is the set of the blocks well-separated from the block $p$, and ${NS}_p$ is the set of the blocks which are not well-separated from the block $p$. 
}} \begin{algorithm}[!hbt] \SetAlgoLined $\bb=\left(0,\dots,0\right)^T$\; \For{ $l=1,\dots,L$}{ \For{ $p\in P_l$}{ ${\bsz}$ is the center of $p$\\ \For{${\bsy}_j \in p$}{ $\bv=\left(0,\dots,0\right)^T$\; $\bw=\left(0,\dots,0\right)^T$\; Compute the contribution of ${\bsy}_j$ from formula~\eqref{dec_separata}\\ \For{$n=0,\dots,M$}{ \For{$m=0,\dots,n$}{ $k=\dfrac{n(n+1)}{2}+m+1$\\ $\bv_k=\bv_k + j_{n,m}(\bsY_j-\bsZ)\sin(m\omega_{y_j-z})\bu_j$\\ $\bw_k=\bw_k + j_{n,m}(\bsY_j-\bsZ)\cos(m\omega_{y_j-z})\bu_j$ } } } \For{${\bsx}_i \in S_p$}{ Compute the contribution of ${\bsx}_i$ from formula~\eqref{dec_separata}\\ \For{$n=0,\dots,M$}{ \For{$m=0,\dots,n$}{ $k=\dfrac{n(n+1)}{2}+m+1$\\ $\bb_i=\bb_i + d_{n,m} h_{n,m}(\bsX_i-\bsZ)\big[\sin(m\omega_{x_i-z})\bv_k +\cos(m\omega_{x_i-z})\bw_k\big]$ } } } } } \For{$p\in P_L$}{ \For{${\bsx}_i \in NS_p$}{ \For{${\bsy}_j \in p$}{ Compute the contribution of ${\bsx}_i$ and ${\bsy}_j$ from formula~\eqref{interp_system}\\ $\bb_i=\bb_i + A_{i,j}\bu_j$ } } } \caption{Given $\bu \in \bbr ^N$, computes $\bb=A\bu \in \bbr ^N$ as follows.}\label{algo} \end{algorithm} \subsection{The computational cost}\label{sec:costoComp} The recursive procedure described in the previous section gives an efficient method for computing $A\bu$, where $\bu\in\bbr^N$ is a generic vector and the data sites{{ in $\mathcal{X}$ are generic points}} of the domain $\Omega$. However, for the sake of simplicity, we suppose a uniform distribution of points, so at the generic level $l$, each block $p$ contains $\dfrac{N}{4^{l+1}}$ data sites. As illustrated in Algorithm~\ref{algo}, we have that \begin{equation}\label{MatxVec} \begin{aligned} \sum_{j=1}^N A_{i,j}\bu_j=& \scaleto{\sum_{l=1}^L \sum_{\substack{p\in P_l\\i\in S_p}}}{35pt} \left\{ \sum_{n=0}^M \sum_{m=0}^n \Bigl(d_{n,m}h_{n,m}(\bsX_i-\bsZ)\cos(m\omega_{x_i-z})\Bigr) \left( \sum_{j\in p} j_{n,m}(\bsY_j-\bsZ)\cos(m\omega_{y_j-z})\bu_j\right)+\right.\\ &\phantom{\{}+\left.\sum_{n=0}^M \sum_{m=0}^n\left( d_{n,m}h_{n,m}(\bsX_i-\bsZ)\sin(m\omega_{x_i-z})\right)\left( \sum_{j\in p} j_{n,m}(\bsY_j-\bsZ)\sin(m\omega_{y_j-z})\bu_j\right)\right\}+\\ &+\sum_{\substack{p\in P_L\\i\in {NS}_p}} \left\{\sum_{j\in p} A_{i,j}\bu_j\right\}, \qquad \qquad i=1,\dots,N, \end{aligned} \end{equation} where $i\in S_p$ denotes that $\bsx_i$ is in well-separated blocks $S_p$, while $j\in p$ denotes that $\bsy_j$ is in $p$. This shows that the computation consists of two main addenda. The first addendum in~\eqref{MatxVec}, i.e., the one for $i\in S_p$, $p\in P_l$, $l=1,\dots,L$, can be computed efficiently since the part depending on the row indices $i$ is independent of the part depending on the column indices $j$; while the second addendum has to be evaluated only at the last level $L$. The computational cost $c$ of the matrix-vector product in~\eqref{MatxVec} can be obtained by summing the cost of the various steps. We retrace the computations in Algorithm~\ref{algo} and we evaluate the overall computational cost by counting only the multiplication operations. The computation of the coefficients $d_{n,m}$ and functions $h_{n,m}, j_{n,m}$ as well as trigonometric functions in formula~\eqref{MatxVec} are not considered since they are computed only once at the beginning of the solution process and stored in arrays. 
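Before counting operations, the following minimal Python sketch (ours; all names are illustrative and unrelated to the FORTRAN implementation) reproduces the heart of Algorithm~\ref{algo} for a single source block $p$ with centre $\bsz$ and a batch of well-separated targets: the moments $\bv_k,\bw_k$ are accumulated once over the sources $\bsy_j\in p$ and then reused for every target $\bsx_i$, as prescribed by \eqref{dec_separata}, and the result is compared with the direct sums $\sum_j \phi(||\bsx_i-\bsy_j||)u_j$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv

def spherical(v):
    rho = np.linalg.norm(v)
    return rho, np.arccos(v[2] / rho), np.arctan2(v[1], v[0])

t, M = 1.0, 10
rng = np.random.default_rng(1)
z  = np.array([0.125, 0.125])                        # centre of the source block p
ys = z + rng.uniform(-0.1, 0.1, (50, 2))             # sources y_j in p
xs = np.array([0.875, 0.875]) + rng.uniform(-0.1, 0.1, (50, 2))  # far targets x_i
u  = rng.uniform(-1.0, 1.0, len(ys))                 # entries u_j of a generic vector

Z = np.append(z, -t / 2)
pairs = [(n, m) for n in range(M + 1) for m in range(n + 1)]   # k = n(n+1)/2 + m + 1

# moments over the sources, independent of the targets (source loop of Algorithm 1)
v, w = np.zeros(len(pairs)), np.zeros(len(pairs))
for yj, uj in zip(ys, u):
    ry, ty, oy = spherical(np.append(yj, -t / 2) - Z)
    for k, (n, m) in enumerate(pairs):
        jnm = lpmv(m, n, np.cos(ty)) * ry**n
        v[k] += jnm * np.sin(m * oy) * uj
        w[k] += jnm * np.cos(m * oy) * uj

# far-field evaluation: the moments are reused for every well-separated target
b = np.zeros(len(xs))
for i, xi in enumerate(xs):
    rx, tx, ox = spherical(np.append(xi, t / 2) - Z)
    for k, (n, m) in enumerate(pairs):
        d = (1.0 if m == 0 else 2.0) * factorial(n - m) / factorial(n + m)
        h = lpmv(m, n, np.cos(tx)) / rx**(n + 1)
        b[i] += d * h * (np.sin(m * ox) * v[k] + np.cos(m * ox) * w[k])

direct = np.array([np.sum(u / np.sqrt(t**2 + np.sum((xi - ys)**2, axis=1)))
                   for xi in xs])
print(np.max(np.abs(b - direct)))                    # agreement up to the truncation error
\end{verbatim}
Only the contributions of well-separated pairs are handled in this way; as in the last loop of Algorithm~\ref{algo}, the remaining near-field pairs are summed directly with \eqref{IMQ_def}.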
Let $c_1$ be the cost of lines $5-16$ in Algorithm~\ref{algo}, so $c_1$ is equal to the cost of \begin{equation}\label{cost:part1} \bv_{n,m}(\bsZ)=\sum_{j\in p} j_{n,m}(\bsY_j-\bsZ)\sin(m\omega_{y_j-z})\bu_j \quad \text{ and} \quad \bw_{n,m}(\bsZ)=\sum_{j\in p} j_{n,m}(\bsY_j-\bsZ)\cos(m\omega_{y_j-z})\bu_j, \end{equation} for all $n=0,1,\dots,M,\ m=0,1,\dots,n$, that is $4KN_p$, where $N_p$ are the number of points of $\mathcal{X}$ contained in $p$ and $K=(M+1)(M+2)/2$. Thus, considering a block $p$ at a generic level $l$, we have \begin{equation*} c_1\leq 4\frac{N}{4^{l+1}}K, \end{equation*} Let $c_2$ be the cost of lines $2-27$ in Algorithm~\ref{algo}, that is the cost of \begin{equation}\label{cost:part2} \sum_{n=0}^M \sum_{m=0}^n d_{n,m}h_{n,m}(\bsX_i-\bsZ)\sin(m\omega_{x_i-z})\bv_{n,m}(\bsZ) \quad \text{and} \quad \sum_{n=0}^M \sum_{m=0}^n d_{n,m}h_{n,m}(\bsX_i-\bsZ)\cos(m\omega_{x_i-z})\bw_{n,m}(\bsZ), \end{equation} for all $n=0,1,\dots,M,\ m=0,1,\dots,n$, $p\in P_l,l=1,2,\dots,L,$ $\bsz$ the centre of $p$, $i\in S_p$, where $\bv_{n,m},\bw_{n,m}$ contain the contributions of the points $\bsy_j$ for the sine and cosine part, respectively, already calculated in~\eqref{cost:part1}; $\bv_{n,m},\bw_{n,m}$ correspond respectively to $\bv_k,\bw_k$ in Algorithm~\ref{algo}. In the following discussion, the cost of the multiplication by the factor $d_{n,m}$ is neglected since such factor can be included in $h_{n,m}$ or, equivalently, in $j_{n,m}$ during the construction of the data structures. We analyse the cost $c_2$ as function of the level $l$. At the first level, we have $N/16$ points of $\mathcal{X}$ in each block $p\in P_1$ and at most there are $12$ well-separated blocks from $p$, that are blocks in $S_p$. Thus, the cost $c_2^{(1)}$ at the first level is \begin{equation*} c_2^{(1)}\leq 4K 12 \frac{N}{16}=3KN, \end{equation*} where $K$ is defined above and gives the number of addenda in each sum appearing in~\eqref{cost:part2}. At level $l\geq2$, each block of the level $l-1$ is divided into 4 blocks, so fixing the block $p$ of the first level, $p$ is divided into $4^{l-1}$ blocks at level $l$. For each small block there are at most $27$ well-separated blocks. Thus, the cost $c_2^{(l)}$ at level $l$ is \begin{equation*} c_2^{(l)}\leq 4K 4^{l-1} 27 \frac{N}{4^{l+1}}=\frac{27}{4}KN. \end{equation*} Since in the costs $c_2^{(l)},l=1,2,\dots,L,$ we fixed a block of the first level and we referred to it during the calculations, by considering also that there are $16$ different blocks at the first level, we have \begin{equation*} c_2\leq 16\sum_{l=1}^L c_2^{(l)}\leq 108 KNL. \end{equation*} Let $c_3$ be the cost of lines $28-35$ in Algorithm~\ref{algo}, that is the cost of \begin{equation*} \sum_{j\in p} A_{i,j}\bu_j, \end{equation*} for all $i\in {NS}_p$ and $p\in P_L$. With analogous arguments to the ones used above, i.e., the number of blocks in the first level, the number of blocks obtained at level $L$ from the subdivision of a first-level block into smaller blocks and the number of points into a block, considering also that the maximum number of non well-separated blocks is at most $9$, we obtain \begin{equation*} c_3\leq 16 \, 4^{L-1} \frac{N}{4^{L+1}}9\frac{N}{4^{L+1}}=9\frac{N^2}{4^{L+1}}. \end{equation*} So, the total cost of the procedure is \begin{equation}\label{total_cost} \begin{aligned} c=&c_1+c_2+c_3\leq\\ \leq&N\left(\frac{4K}{4^{l+1}}+108KL+\frac{9N}{4^{L+1}}\right)\leq\\ \leq&N\left(109KL+\frac{9N}{4^{L+1}}\right). 
\end{aligned} \end{equation} In formula~\eqref{total_cost}, the parameters $K$ and $N$ are fixed before the procedure, whereas the number of levels $L$ should be chosen in order to obtain the minimum $c$. Now, if we consider the upper bound for $c$, we should search for the minimum of this upper bound. In more detail, let $\alpha=109K, \beta=9N/4, g(\lambda)=\displaystyle{\alpha {{\lambda}}+\frac{\beta}{4^\lambda}}$, where $\lambda$ is the real extension of the integer variable $L$. Then, it can be shown that $g$ has a minimum in $\lambda_{min}=\displaystyle{\log_4\left(\frac{\beta\log_{10}(4)}{\alpha}\right)}$, where $g(\lambda_{min})=\displaystyle{\frac{\alpha}{\log_{10}(4)}\left(\log_{10}\left(\frac{\beta\log_{10}(4)}{\alpha}\right)+1\right)}$. A similar result can be obtained without the extension of the integer variable $L$, because of the convexity of the function $g$. In conclusion, the upper bound of $c$ is proportional to $N\log_{10}(N)$. \section{Numerical simulations}\label{sec:simulations} We present a numerical experiment performed with the technique described in the previous section, in order to show the accuracy and the efficiency of the proposed method. The experiment consists in computing the product of the matrix $A$ and a random vector whose components range is $[-1,1]$, by using two different techniques: the standard matrix-vector multiplication, and the translation technique described in Section~\ref{sec:translation}. We fix the shape parameter $t=1$, and we consider the set of quasi-random $N$ Halton points~\cite{HALTON1960} in $\Omega=[0,1]^2$, with $N=2(4), 4(4), 6(4), 8(4), 1(5)$, where $x(y)$ denotes the real number $x \cdot 10^y$. Furthermore, in the translation technique, the truncation index in~\eqref{dec_phi_trasl} is fixed to $M = 10$ and the maximum number of levels is $L = 1,2,3$. The results are reported in Table~\ref{tab:results} and Figure~\ref{fig:tempi}. Table~\ref{tab:results} shows the elapsed time $T$ in the computation of the product of $A$ and a random vector by using a standard row-column product, the elapsed time in the same computation by using the translation technique, the relative error $E$ in infinity-norm between the result computed by the translation technique and the one obtained by the standard row-column product, and the number of levels $L$ that gives the best efficiency result among $L=1,2,3$. In addition, Figure~\ref{fig:tempi} reports a comparison of the execution times of the different methods. In the $x-$axis the number of interpolation points is reported, while the vertical axis gives the normalised execution time, that is the execution time divided by the number of interpolation points, i.e., $\dfrac{T}{N}$. The line with circular markers represents the usual matrix-vector multiplication, while the line with squared markers represents the proposed technique. In Figure~\ref{fig:tempi}, the substantial reduction of the execution time can be easily appreciated, as well as the logarithmic trend of the proposed strategy. The results in Table~\ref{tab:results} and Figure~\ref{fig:tempi} have been obtained on a Workstation equipped with an Intel(R) Xeon(R) CPU E5-2620 v3 @2.40GHz, operative system Red Hat Enterprise Linux, release 7.5. All computations have been made in double precision and the FORTRAN codes have been compiled by the NAGWare f95 Compiler. 
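For the experiment just described, one can make the choice of the number of levels concrete by minimising the upper bound \eqref{total_cost} directly over the integers, as in the following small Python sketch (ours, purely indicative). Since \eqref{total_cost} is only an upper bound, the minimiser it suggests need not coincide with the empirically best $L$ reported in Table~\ref{tab:results}.
\begin{verbatim}
# Evaluate the bound c(L) <= N * (109*K*L + 9*N/4^(L+1)) and pick the best integer L.
def cost_bound(N, M, L):
    K = (M + 1) * (M + 2) // 2          # K = (M+1)(M+2)/2
    return N * (109 * K * L + 9 * N / 4**(L + 1))

M = 10
for N in (20_000, 40_000, 60_000, 80_000, 100_000):
    best_L = min(range(1, 11), key=lambda L: cost_bound(N, M, L))
    print(N, best_L, cost_bound(N, M, best_L) / N**2)  # bound relative to the N^2 product
\end{verbatim}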
\begin{table}[!hb] \centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lllll} \toprule \multirow{2}{*}{$N$} & \text{Standard} & \multicolumn{3}{c}{\text{Translation}} \\ \cmidrule(lr){2-2}\cmidrule(lr){3-5} & $T$ \, [\text{\SI{}{\second}}] & $T$ \, [\text{\SI{}{\second}}] & $E$ & $L$ \\ \midrule \vspace{2pt} $2(4)$ & $1.62(0)$ & $1.58(0)$ & $2.67(-9)$ & $1$ \\ \vspace{2pt} $4(4)$ & $1.79 (1)$ & $5.23(0)$ & $4.61(-9)$ & $2$ \\ \vspace{2pt} $6(4)$ & $3.21 (1)$ & $9.78(0)$ & $6.62(-9)$ & $2$ \\ \vspace{2pt} $8(4)$ & $8.38(1)$ & $1.58(1)$ & $8.72(-9)$ & $2$ \\ \vspace{2pt} $1(5)$ & $1.28(2)$ & $2.33(1)$ & $1.06(-8)$ & $2$ \\ \bottomrule \end{tabular*} \caption{Execution times $T$ (in seconds), relative error $E$ and number of levels $L$ by varying the number of points $N$, for the truncation index $M=10$.\label{tab:results}} \end{table}
\begin{figure}[!hbt] \centering \begin{tikzpicture} \begin{axis} [scale only axis,height=.3\textheight,width=.7\textwidth, xmin=20000,xmax=100000,ymin=0,ymax=0.0013,xlabel={$N$},ylabel={$\dfrac{T}{N}$},xtick={20000,40000,60000,80000,100000},legend style={at={(0,1)},anchor=north west}] \addplot+ [color=black, mark=square, line width=.5pt] table[x index=0, y index=1] {tempiEsecuzione/fast.txt}; \addlegendentry{Translation} \addplot+ [color=black, mark=o, line width=.5pt] table[x index=0, y index=1] {tempiEsecuzione/classico.txt}; \addlegendentry{Standard} \end{axis} \end{tikzpicture} \caption{Trends of the normalised execution times with the two strategies, when $M=10$.} \label{fig:tempi} \end{figure}
From these results, we can observe a substantially reduced computational time $T$ of the translation technique with respect to the standard computation, especially for large values of $N$. Moreover, the error $E$ shows the high accuracy provided by the proposed method despite the small value of the truncation index $M$. \section{Conclusions}\label{sec:conclusions} We considered a scattered interpolation problem with the IMQ-RBFs and we proposed an efficient strategy for computing the product between the coefficient matrix and a generic vector. This method is based on the well-known decomposition formula in spherical coordinates for the Laplacian operator and a simple translation technique. The presented numerical experiments show strongly promising results, both for the efficiency and the accuracy of the proposed technique. In fact, such a computational strategy has a smaller computational cost than the standard matrix-vector multiplication, and the computed numerical solutions are almost the same. These results encourage further investigations about this method. In particular, such a strategy should be implemented into an iterative method for the solution of the interpolation problem. Then, it could be interesting to study the generalization of this technique to the interpolation problem with different choices of RBFs, as well as to the collocation problem for the solution of differential equations. Finally, the decomposition of $A$ could be profitably exploited for the construction of ad-hoc preconditioners for the interpolation problem. \vskip 0.2in {\bf Acknowledgments} This research has been accomplished within Rete ITaliana di Approssimazione (RITA), the thematic group on ``Approximation Theory and Applications'' of the Italian Mathematical Union and partially funded by GNCS-INdAM.
\printbibliography \end{document} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{microtype} \usepackage[a4paper, margin=0.8in]{geometry} \usepackage[italian,english]{babel} \usepackage[babel]{csquotes} \usepackage{emptypage} \usepackage{indentfirst} \usepackage{booktabs} \usepackage[dvipsnames]{xcolor} \usepackage{tabularx} \usepackage{multirow} \usepackage{subfig} \usepackage{graphicx} \graphicspath{{./Figures/}} \usepackage[export]{adjustbox} \usepackage{pgfplots} \pgfplotsset{legend pos=south east, every axis/.append style={font=\small}} \usepackage{caption} \usepackage{listings} \usepackage[font=small]{quoting} \usepackage{amsmath,amssymb,amsthm,mathrsfs,scalerel} \usepackage[euler]{textgreek} \usepackage{mathtools} \usepackage[linesnumbered,ruled,vlined]{algorithm2e} \SetKw{Break}{break} \usepackage{siunitx} \usepackage[english]{varioref} \usepackage{mparhack,relsize} \usepackage[style=numeric-comp,citestyle=ieee,hyperref,backref,natbib,backend=biber]{biblatex} \bibliography{Bibliography} \usepackage[affil-it]{authblk} \usepackage{eurosym} \usepackage[bookmarks=true]{hyperref} \usepackage{bookmark} \usepackage{empheq} \usepackage{lineno} \usepackage{comment} \newcommand{\bu}{\mathbf{u}} \newcommand{\bx}{\mathbf{x}} \newcommand{\by}{\mathbf{y}} \newcommand{\bz}{\mathbf{z}} \newcommand{\bn}{\mathbf{n}} \newcommand{\ba}{\mathbf{a}} \newcommand{\bb}{\mathbf{b}} \newcommand{\bc}{\mathbf{c}} \newcommand{\bv}{\mathbf{v}} \newcommand{\bw}{\mathbf{w}} \newcommand{\bp}{\mathbf{p}} \newcommand{\bq}{\mathbf{q}} \newcommand{\be}{\mathbf{e}} \newcommand{\bg}{\mathbf{g}} \newcommand{\bj}{\mathbf{j}} \newcommand{\bbf}{\mathbf{f}} \newcommand{\bT}{\mathbf{T}} \newcommand{\bK}{\mathbf{K}} \newcommand{\bD}{\mathbf{D}} \newcommand{\bP}{\mathbf{P}} \newcommand{\bF}{\mathbf{F}} \newcommand{\bA}{\mathbf{A}} \newcommand{\bX}{\mathbf{X}} \newcommand{\bY}{\mathbf{Y}} \newcommand{\pt}{\partial} \newcommand{\car}{\mathcal{R}} \newcommand{\cc}{\mathcal{C}} \newcommand{\cu}{\mathcal{U}} \newcommand{\cl}{\mathcal{L}} \newcommand{\cp}{\mathcal{P}} \newcommand{\ct}{\mathcal{T}} \newcommand{\ca}{\mathcal{A}} \newcommand{\cf}{\mathcal{F}} \newcommand{\cn}{\mathcal{N}} \newcommand{\ck}{\mathcal{K}} \newcommand{\cm}{\mathcal{M}} \newcommand{\ce}{\mathcal{E}} \newcommand{\cs}{\mathcal{S}} \newcommand{\cj}{\mathcal{J}} \newcommand{\rn}{\mathbb{R}^n} \newcommand{\rtwo}{\mathbb{R}^2} \newcommand{\bbr}{\mathbb{R}} \newcommand{\bbn}{\mathbb{N}} \newcommand{\alf}{\alpha} \newcommand{\tu}{\tilde{u}} \newcommand{\tit}{\tilde{T}} \newcommand{\tdt}{\tilde{t}} \newcommand{\tp}{\tilde{p}} \newcommand{\ve}{\varepsilon} \newcommand{\te}{\text} \newcommand{\bsx}{\boldsymbol{x}} \newcommand{\bsy}{\boldsymbol{y}} \newcommand{\bsz}{\boldsymbol{z}} \newcommand{\bsX}{\boldsymbol{X}} \newcommand{\bsY}{\boldsymbol{Y}} \newcommand{\bsZ}{\boldsymbol{Z}} \newcommand{\bsa}{\boldsymbol{a}} \newcommand{\bsb}{\boldsymbol{b}} \newcommand{\bsc}{\boldsymbol{c}} \newcommand{\ul}[1]{\underline {#1} } \providecommand{\keywords}[1]{\textit{Keywords ---} #1} \theoremstyle{definition} \newtheorem{defin}{Definition} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{remark} \newtheorem*{remarks}{Remarks} \newtheorem*{remark}{Remark} \newtheorem{ex}{Example} \newcommand\restr[2]{{ \left.\kern-\nulldelimiterspace #1 \right|_{#2} }} \DeclareMathOperator\erf{erf}
2205.04082v1
http://arxiv.org/abs/2205.04082v1
On the number of maximal independent sets: From Moon-Moser to Hujter-Tuza
\documentclass[12pt,a4paper]{amsart} \usepackage{amssymb,amsmath,amsthm} \usepackage{graphicx} \usepackage{hyperref} \usepackage[color=green!40]{todonotes} \newcommand\cA{{\mathcal A}} \newcommand\cB{{\mathcal B}} \newcommand\cC{{\mathcal C}} \newcommand\cD{{\mathcal D}} \newcommand\cE{{\mathcal E}} \newcommand\cF{{\mathcal F}} \newcommand\cS{{\mathcal S}} \newcommand\x{{\mathbf x}} \newcommand\y{{\mathbf y}} \newcommand\z{{\mathbf z}} \newcommand{\abs}[1]{\left\lvert{#1}\right\rvert} \newcommand{\floor}[1]{\left\lfloor{#1}\right\rfloor} \newcommand{\ceil}[1]{\left\lceil{#1}\right\rceil} \newcommand\bC{\mathbf C} \newcommand\cG{{\mathcal G}} \newcommand\cH{{\mathcal H}} \newcommand\cI{{\mathcal I}} \newcommand\cJ{{\mathcal J}} \newcommand\cK{{\mathcal K}} \newcommand\cL{{\mathcal L}} \newcommand\cM{{\mathcal M}} \newcommand\cN{{\mathcal N}} \newcommand\cP{{\mathcal P}} \newcommand\cY{{\mathcal Y}} \newcommand\cQ{{\mathcal Q}} \newcommand\cR{{\mathcal R}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{{\mathcal V}} \newcommand\PP{{\mathbb P}} \newcommand\coF{{\overline F}} \newcommand\coG{{\overline G}} \newcommand\De{\Delta} \newcommand\cBi{\textnormal{Bi}} \newcommand\G{\Gamma} \newcommand\eps{{\varepsilon}} \makeatletter \newtheorem*{rep@theorem}{\rep@title} \newcommand{\newreptheorem}[2]{\newenvironment{rep#1}[1]{ \def\rep@title{#2~\ref{##1}} \begin{rep@theorem}} {\end{rep@theorem}}} \makeatother \theoremstyle{plain} \newtheorem{theorem}{Theorem}\newreptheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{observation}[theorem]{Observation} \newtheorem{problem}[theorem]{Problem} \newtheorem{claim}[theorem]{Claim} \theoremstyle{definition} \newtheorem{eg}[theorem]{Example} \newtheorem{defn}[theorem]{Definition} \newtheorem{con}[theorem]{Construction} \newtheorem{fact}[theorem]{Fact} \newtheorem{question}[theorem]{Question} \newtheorem*{rem}{Remark} \newcommand\lref[1]{Lemma~\ref{lem:#1}} \newcommand\tref[1]{Theorem~\ref{thm:#1}} \newcommand\cref[1]{Corollary~\ref{cor:#1}} \newcommand\clref[1]{Claim~\ref{clm:#1}} \newcommand\cnref[1]{Construction~\ref{con:#1}} \newcommand\cjref[1]{Conjecture~\ref{conj:#1}} \newcommand\sref[1]{Section~\ref{sec:#1}} \newcommand\pref[1]{Proposition~\ref{prop:#1}} \newcommand\rref[1]{Remark~\ref{rem:#1}} \newcommand\fref[1]{Fact~\ref{fact:#1}} \newcommand{\ex}{\mathop{}\!\mathrm{ex}} \DeclareMathOperator{\mis}{mis} \textheight=8.2in \textwidth=7in \topmargin=0.1in \oddsidemargin=-0.25in \evensidemargin=-0.25in \def\marrow{{\boldmath {\marginpar[\hfill$\rightarrow \rightarrow$]{$\leftarrow \leftarrow$}}}} \def\corys#1{{\sc CORY: }{\marrow\sf #1}} \def\pb#1{{\sc BALAZS: }{\marrow\sf #1}} \title{On the number of maximal independent sets: From Moon-Moser to Hujter-Tuza} \author{Cory Palmer} \address{University of Montana} \email{[email protected]} \thanks{Palmer's research is supported by a grant from the Simons Foundation \#712036} \author{Bal\'azs Patk\'os} \address{Alfr\'ed R\'enyi Institute of Mathematics} \email{[email protected]} \thanks{Patk\'os's research is partially supported by NKFIH grants SNN 129364 and FK 132060} \date{} \begin{document} \maketitle \begin{abstract} We connect two classical results in extremal graph theory concerning the number of maximal independent sets. 
The maximum number $\mis(n)$ of maximal independent sets in an $n$-vertex graph was determined by Moon and Moser. The maximum number $\mis_\bigtriangleup(n)$ of maximal independent sets in an $n$-vertex triangle-free graph was determined by Hujter and Tuza. We determine the maximum number $\mis_t(n)$ of maximal independent sets in an $n$-vertex graph containing no induced triangle matching of size $t+1$. We also reprove a stability result of Kahn and Park on the maximum number $\mis_{\bigtriangleup,t}(n)$ of maximal independent sets in an $n$-vertex triangle-free graphs containing no induced matching of size $t+1$. \end{abstract} \section{Introduction} Let $\mis(G)$ denote the of maximal independent sets in the graph $G$. The classic result determining $\mis(n)$, the maximum of $\mis(G)$ over all graphs on $n$ vertices is: \begin{theorem}[Miller, Muller \cite{MiMu}, Moon, Moser \cite{MM}]\label{mm} For any $n\ge 3$, we have \begin{eqnarray*} \mis(n)=\left\{ \begin{array}{cc} 3^{n/3} & \textnormal{if} ~n ~\textnormal{is divisible by 3}, \\ 4\cdot 3^{(n-4)/3}& \textnormal{if $n\equiv 1$ (mod 3)}, \\ 2\cdot 3^{(n-2)/3}& \textnormal{if $n\equiv 2$ (mod 3)}. \end{array} \right. \end{eqnarray*} \end{theorem} For graphs $F,G$ and positive integers $a,b$, we denote by $aF+bG$ the vertex-disjoint union of $a$ copies of $F$ and $b$ copies of $G$. Then the constructions giving the lower bound of Theorem~\ref{mm} are $\frac{n}{3}K_3$, $\frac{n-4}{3}K_3+K_4$ or $\frac{n-4}{3}K_3+2K_2$, and $\frac{n-2}{3}K_3+K_2$ in the three respective cases. As these constructions contain many triangles, one can ask the natural question to maximize $\mis(G)$ over all triangle-free graphs. The maximum over all such $n$-vertex graphs, denoted by $\mis_\bigtriangleup(n)$, was determined by Hujter and Tuza~\cite{HT}. \begin{theorem}[Hujter, Tuza \cite{HT}]\label{ht} For any $n\ge 4$, we have \begin{eqnarray*} \mis_{\bigtriangleup}(n)=\left\{ \begin{array}{cc} 2^{n/2} & \textnormal{if} ~n ~\textnormal{is even}, \\ 5\cdot 2^{(n-5)/2}& \textnormal{if $n$ is odd}. \end{array} \right. \end{eqnarray*} \end{theorem} The parameter $\mis(G)$ has been determined for connected graphs (see~\cite{F,Gr}) and for trees (see~\cite{Wi,Sa}). The value of $\mis(n)$ has implications for the runtime of various graph-coloring algorithms (see \cite{W} for several references). Answering a question of Rabinovich, Kahn and Park~\cite{KP} proved stability versions of both Theorem~\ref{mm} and Theorem~\ref{ht}. An {\it induced triangle matching} is an induced subgraph that is a vertex disjoint union of triangles; its {\it size} is the number of triangles. \begin{theorem}[Kahn, Park \cite{KP}]\label{kp} For any $\varepsilon >0$, there is a $\delta=\delta(\varepsilon) = \Omega(\varepsilon)$ such that for any $n$-vertex graph $G$ that does not contain an induced triangle matching of size $(1-\varepsilon)\frac{n}{3}$, we have $\log \mis(G)<(\frac{1}{3}\log 3-\delta)n$. \end{theorem} An {\it induced matching} is an induced subgraph that is a matching; its {\it size} is the number of edges \begin{theorem}[Kahn, Park \cite{KP}]\label{kp2} For any $\varepsilon >0$, there is a $\delta=\delta(\varepsilon) = \Omega(\varepsilon)$ such that for any $n$-vertex triangle-free graph $G$ that does not contain an induced matching of size $(1-\varepsilon)\frac{n}{2}$, we have $\log \mis(G)<(\frac{1}{2}-\delta)n$. 
\end{theorem} Let $\mis_t(n)$ denote the maximum number of maximal independent sets in an $n$-vertex graph that does not contain an induced triangle matching of size $t+1$. With this notation we have $\mis_0(n) = \mis_\bigtriangleup(n)$ and Theorem~\ref{kp} gives $\mis_t(n) < 3^{(1/3-\delta')n}$ when $t+1 = (1-\epsilon)\frac{n}{3}$. The primary result of this note is the following common generalization of Theorems~\ref{mm} and \ref{ht} which gives a strengthening of Theorem~\ref{kp} as it determines $\mis_t(n)$ for all $n$ and $t\leq n/3$. \begin{theorem}\label{main} For any $0\le t \le n/3$ put $m=n-3t$. Then we have \begin{eqnarray*} \mis_{t}(n)=\left\{ \begin{array}{cc} 3^t\cdot 2^{m/2} & \textnormal{if} ~m ~\textnormal{is even}, \\ 3^{t-1}\cdot 2^{(m+3)/2}& \textnormal{if $m$ is odd and $t>0$}, \\ 5\cdot 2^{(n-5)/2}& \textnormal{if $m$ is odd and $t=0$}. \end{array} \right. \end{eqnarray*} \end{theorem} Constructions showing the lower bounds are $tK_3+\frac{m}{2}K_2$, $(t-1)K_3+\frac{m+3}{2}K_2$, and $C_5+\frac{n-5}{2}K_2$, respectively. Let $\mis_{\bigtriangleup,t}(n)$ denote the maximum of $\mis(G)$ over all $n$-vertex triangle-free graphs that do not contain an induced matching of size $t+1$. The secondary result of this note is the following short reproof of Theorem~\ref{kp2}. \begin{theorem}\label{kp2+} Let $c$ denote the largest real root of the equation $x^6-2x^2-2x-1=0$, $c=1.40759\ldots<\sqrt{2}$. Then $\mis_{\bigtriangleup,t}(n)\le 2^tc^{n-2t}$. \end{theorem} \section{Proofs} In our proofs we shall use an observation due to Wood~\cite{W}. It follows from the fact that any maximal independent set in $G$ must meet the closed neighborhood $N[v]=N(v) \cup \{v\}$ of any vertex $v$ of $G$. \begin{observation}[Wood \cite{W}]\label{obs} For any graph $G$ and vertex $v\in V(G)$ we have \[ \mis(G)\le \sum_{w\in {N[v]}}\mis(G\setminus N[w]). \] \end{observation} We begin with some inequalities involving the bound, denoted $g_t(n)$, in Theorem~\ref{main}. \begin{fact}\label{fact} For $t>0$, we have $\frac{g_t(n-3)}{g_t(n)}\le 3/8$, $\frac{g_t(n-2)}{g_t(n)}=1/2$, and $\frac{g_t(n-4)}{g_t(n)}=1/4$. \end{fact} Observe that if $n$ is odd and $t=0$, then the bounds in Fact~\ref{fact} may not hold and, in particular, Case III of the following argument will not work. Fortunately, we may assume that $t >0$ as the $t=0$ case is exactly Theorem~\ref{ht}. In the proof, we will always compare $g_t(n-k)$ to $g_t(n)$, and it might happen that $n-k$ drops below $3t$. In this case, we consider $g_t(n-k)$ to be $g_{\lfloor \frac{n-k}{3}\rfloor}(n-k)$. Fortunately, all inequalities in Fact~\ref{fact} remain true as $g_t(n)$ is non-decreasing in $t$. \begin{proof}[Proof of Theorem~\ref{main}] By the discussion above we may assume $t>0$. We proceed by induction on $m$. Observe that cases $m=0,1,2$ are covered by Theorem~\ref{mm}. Let $G$ be a graph on $n$ vertices not containing an induced triangle matching of size $t+1$. We distinguish cases according to the minimum degree of $G$. \medskip \textsc{Case I:} $G$ has a vertex $x$ of degree $1$. \medskip Then by applying Observation~\ref{obs} with $v=x$ and Fact~\ref{fact}, we obtain $\mis(G)\le 2\mis_t(n-2) \leq 2g_t(n-2) \leq g_t(n)$. \medskip \textsc{Case II:} $G$ has a component $C$ of minimum degree $d \geq 3$. \medskip Then by applying Observation~\ref{obs} to any $v\in C$ and Fact~\ref{fact}, we obtain $\mis(G)\le (d+1) \mis_t(n-d-1)\le g_t(n)$. 
\medskip \textsc{Case III:} $G$ has component $C$ with a vertex $x$ of degree $2$ and a vertex of degree at least $3$. \medskip We may assume that $x$ is adjacent to a vertex $y$ of degree $d(y) \geq 3$. Applying Observation~\ref{obs} with $v=x$ and Fact~\ref{fact}, we obtain $\mis(G)\le 2 \mis_t(n-3)+\mis_t(n-4)\le 2g_t(n-3) + g_t(n-4)\leq g_t(n)$. \medskip \textsc{Case IV:} $G$ is $2$-regular, i.e., a cycle factor. \medskip It is not hard to verify (see for example \cite{F}) that $\mis(C_3)=3, \mis(C_4)=2, \mis(C_5)=5$ and $\mis(C_n)=\mis(C_{n-2})+\mis(C_{n-3})$. In particular, if $n\neq 3$, then $\mis(C_n)^{1/n}$ is maximized for $n=5$ with value $5^{1/5}$. Thus, for cycle factors containing at most $t$ triangles, we have $\mis(G)\le 3^t\cdot 5^{(n-3t)/5}\le g_t(n)$. \end{proof} Before the proof of Theorem \ref{kp2+}, we gather facts about the bound $h_t(n):=2^t\cdot c^{n-2t}$. \begin{fact}\label{fact2} For the largest real root $c=1.40759\ldots<\sqrt{2}$ of $x^6-2x^2-2x-1=0$, we have \begin{enumerate} \item $2+c\le 2c^2$ and so $h_t(n-2)+h_{t-1}(n-3)\le h_t(n)$, \item for any $d\ge 4$, we have $(d+1)\le c^{d+1}$ and so $(d+1)h_t(n-d-1)\le h_t(n)$, \item $3c+1\le c^5$ and so $3h_t(n-4)+h_t(n-5)\le h_t(n)$, \item $2c+1\le c^4$ and so $2h_t(n-3)+h_t(n-4)\le h_t(n)$. \end{enumerate} \end{fact} Just as in Fact \ref{fact}, $n-k$ might drop below $2t$, in which case we consider $h_t(n-k)$ to be $h_{\lfloor \frac{n-k}{2}\rfloor}(n-k)$ and again the inequalities in Fact~\ref{fact2} remain true. \begin{proof}[Proof of Theorem~\ref{kp2+}] We use double induction. First on $m:=n-2t$ and then on $t$. Theorem~\ref{ht} yields the statement when $m=0,1$. For any $m$, if $t=0$, then the only graph $G$ on $m-2t=m$ vertices that does not contain an induced matching of size one is the empty graph and in this case $\mis(G)=1\le h_0(m)=c^{m}$. Let $G$ be an $n$-vertex triangle-free graph that contains no induced matching of size $t+1$. We again distinguish cases according to the minimum degree of $G$. \medskip \textsc{Case I:} $G$ has a vertex $x$ of degree 1. \medskip Let $y$ be the neighbor of $x$. If $xy$ is an isolated edge, then $G\setminus \{x,y\}$ does not contain induced matchings of size $t$. Applying Observation~\ref{obs} and Fact~\ref{fact2} yields $\mis(G)\le 2\mis_{\bigtriangleup,t-1}(n-2) \leq 2 h_{t-1}(n-2)=h_t(n)$. If $y$ has further neighbors, then $G\setminus {N[y]}$ does not contain induced matchings of size $t$. Applying Observation~\ref{obs} and Fact~\ref{fact2} yields $\mis(G)\le \mis_{\bigtriangleup,t}(n-2) + \mis_{\bigtriangleup,t-1}(n-3) \leq h_t(n-2)+h_{t-1}(n-3)\le h_t(n)$. \medskip \textsc{Case II:} $G$ has minimum degree $d \geq 4$. \medskip Applying Observation~\ref{obs} and Fact~\ref{fact2} yields $\mis(G)\le (d+1)h_{t}(n-d-1)\le h_t(n)$. \medskip \textsc{Case III:} $G$ has a component $C$ of minimum degree $3$ with a vertex of degree at least $4$. \medskip Let $x$ be a vertex of $C$ of degree $3$ and $y\in N(x)$ of degree at least $4$. Applying Observation~\ref{obs} and Fact~\ref{fact2} yields $\mis(G)\le 3h_t(n-4)+h_t(n-5)\le h_t(n)$. \medskip \textsc{Case IV:} $G$ is $3$-regular. \medskip Suppose first that there exist vertices $x,y$ with $N(x)=N(y)$. As $G$ is triangle-free, $x$ and $y$ cannot be adjacent. Moreover, for any maximal independent set $X$, we have $x\in X$ if and only if $y\in X$. 
Therefore, when applying Observation \ref{obs}, the number of maximal independent sets containing $x$ can be bounded by $\mis(G\setminus ({N[x]} \cup \{y\}))$ instead of $\mis(G\setminus {N[x]})$. We obtain $\mis(G)\le 3h_t(n-4)+h_t(n-5)\le h_t(n)$. Suppose next that $G$ does not contain two vertices $x,y$ with $N(x)=N(y)$. Let $v$ be an arbitrary vertex of $G$ and $N(v)=\{a,b,c\}$. We apply Observation \ref{obs} in a slightly modified form: we keep $\mis(G\setminus {N[v]})$ and $\mis(G\setminus {N[a]})$, but to bound the number of maximal independent sets $X$ that contain $b$ or $c$, we use $\mis(G\setminus ({N[b]}\cup \{c\}))+\mis(G\setminus ({N[c]}\cup \{b\}))+\mis(G\setminus ({N[b]}\cup {N[c]}))$. The three terms bound the number of $X$s that contain only $b$, only $c$, or both $b$ and $c$, respectively. Observe that by the triangle-free property, $b$ and $c$ are not adjacent. Also, as $N(b)\neq N(c)$, we have $|{N[b]}\cup {N[c]}|\ge 6$. Therefore, we obtain $\mis(G)\le 2h_t(n-4)+2h_t(n-5)+h_t(n-6) \leq h_t(n)$ by the definition of $h_t(n)$ and the equation defining $c$ (indeed, $2c^2+2c+1=c^6$). \medskip \textsc{Case V}: $G$ has a component $C$ of minimum degree $2$ with a vertex of degree at least $3$. \medskip Let $x$ be a vertex of $C$ of degree $2$, and $y\in N(x)$ of degree at least $3$. Then applying Observation~\ref{obs} and Fact~\ref{fact2}, we obtain $\mis(G)\le 2h_t(n-3)+h_t(n-4)\le h_t(n)$. \medskip \textsc{Case VI}: $G$ is 2-regular, i.e., a cycle factor. \medskip As in Case IV of Theorem \ref{main}, we have $\mis(G)\le 5^{n/5}\le h_t(n)$. \end{proof} \begin{thebibliography}{99} \bibitem{F} Z. F\"uredi, The number of maximal independent sets in connected graphs. \textit{Journal of Graph Theory}, 11(4) (1987) 463--470. \bibitem{Gr} J.M. Griggs, C.M. Grinstead, D.R. Guichard, The number of maximal independent sets in a connected graph. \textit{Discrete Math.}, 68(2--3) (1988) 211--220. \bibitem{HT} M. Hujter, Zs. Tuza, The number of maximal independent sets in triangle-free graphs. \textit{SIAM Journal on Discrete Mathematics}, 6(2) (1993) 284--288. \bibitem{KP} J. Kahn, J. Park, Stability for Maximal Independent Sets. \textit{The Electronic Journal of Combinatorics}, 27(1) (2020) P1.59. \bibitem{MiMu} R.E. Miller, D.E. Muller, A problem of maximum consistent subsets, IBM Research Report RC-240, Thomas J. Watson Research Center, New York, USA, 1960. \bibitem{MM} J.W. Moon, L. Moser, On cliques in graphs. \textit{Israel Journal of Mathematics}, 3(1) (1965) 23--28. \bibitem{Sa} B. Sagan, A note on independent sets in trees. \textit{SIAM J. Discrete Math.}, 1(1) (1988) 105--108. \bibitem{Wi} H.S. Wilf, The number of maximal independent sets in a tree. \textit{SIAM J. Algebraic Discrete Methods}, 7 (1986) 125--130. \bibitem{W} D.R. Wood, On the number of maximal independent sets in a graph. \textit{Discrete Mathematics \& Theoretical Computer Science}, 13(3) (2011) 17--20. \end{thebibliography} \end{document}
2205.04071v1
http://arxiv.org/abs/2205.04071v1
Existence and uniqueness in critical spaces for the magnetohydrodynamical system in $\mathbb{R}^n$
\documentclass[11pt,a4paper]{article} \usepackage[utf8]{inputenc} \addtolength{\textwidth}{4.2cm} \addtolength{\oddsidemargin}{-2.1cm} \addtolength{\evensidemargin}{-2.1cm} \parskip2pt \usepackage{amssymb,amscd,amsfonts,amsbsy,amsmath,amsthm,amsrefs} \usepackage{dsfont} \usepackage{enumerate} \usepackage{mathrsfs} \usepackage{epsf,epsfig,esint} \usepackage{pdfsync} \usepackage[all]{xy} \usepackage{pdfpages} \usepackage{xcolor} \usepackage{stmaryrd} \usepackage[colorlinks=true,citecolor=blue,linkcolor=cyan]{hyperref} \usepackage{euscript} \newtheorem{proposition}{Proposition}[section] \newtheorem{theorem}[proposition]{Theorem} \newtheorem{lemma}[proposition]{Lemma} \newtheorem{corollary}[proposition]{Corollary} \newtheorem{conjecture}[proposition]{Conjecture} \newtheorem{note}[proposition]{Note} \theoremstyle{definition} \newtheorem{definition}[proposition]{Definition} \newtheorem{notation}[proposition]{Notation} \theoremstyle{remark} \newtheorem{remark}[proposition]{Remark} \numberwithin{equation}{section} \newcommand{\Ni}{\mathds{N}} \newcommand{\Qi}{\mathds{Q}} \newcommand{\Ri}{\mathds{R}} \newcommand{\Ci}{\mathds{C}} \newcommand{\Ti}{\mathds{T}} \newcommand{\Zi}{\mathds{Z}} \newcommand{\Fi}{\mathds{F}} \newcommand{\Ki}{\mathds{K}} \newcommand{\Leray}{\mathds{P}} \newcommand{\e}{\varepsilon} \newcommand{\less}{\lesssim} \newcommand{\ca}{{\cal A}} \newcommand{\cb}{{\cal B}} \newcommand{\CB}{{\mathscr{B}}} \newcommand{\cc}{{\cal C}} \newcommand{\CC}{{\mathscr{C}}} \newcommand{\cd}{{\cal D}} \newcommand{\ce}{{\cal E}} \newcommand{\cf}{{\cal F}} \newcommand{\ch}{{\cal H}} \newcommand{\chs}{{\cal HS}} \newcommand{\ci}{{\cal I}} \newcommand{\ck}{{\cal K}} \newcommand{\cl}{{\cal L}} \newcommand{\cm}{{\cal M}} \newcommand{\cn}{{\cal N}} \newcommand{\co}{{\cal O}} \newcommand{\cp}{{\cal P}} \newcommand{\cs}{{\cal S}} \newcommand{\CS}{{\mathscr{S}}} \newcommand{\ct}{{\cal T}} \newcommand{\cu}{{\cal U}} \newcommand{\CU}{{\mathscr{U}}} \newcommand{\cx}{{\cal X}} \newcommand{\cy}{{\cal Y}} \newcommand{\cz}{{\cal Z}} \newcommand{\Schwartz}{\eus{S}} \newcommand{\gotb}{\gothic{b}} \newcommand{\gotu}{\gothic{u}} \newcommand{\curl}{\mathop{\rm curl}} \newcommand{\divv}{\mathop{\rm div}} \newcommand{\Sh}{S} \newcommand{\Mh}{M} \newcommand{\Dh}{D} \newcommand{\Lh}{\Delta} \newcommand{\RRe}{\mathop{\rm Re}} \newcommand{\IIm}{\mathop{\rm Im}} \newcommand{\tr}{{\mathop{\rm Tr \,}}} \newcommand{\Vol}{{\mathop{\rm Vol}}} \newcommand{\rank}{\mathop{\rm rank}} \newcommand{\supp}{\mathop{\rm supp}} \newcommand{\sgn}{\mathop{\rm sgn}} \newcommand{\essinf}{\mathop{\rm ess\,inf}} \newcommand{\esssup}{\mathop{\rm ess\,sup}} \newcommand{\dd}{\, {\rm d}} \newcommand{\contr}{\lrcorner \,} \newcommand{\Nn}{{\rm{\sf N}}} \newcommand{\Rr}{{\rm{\sf R}}} \newcommand{\Dd}{{\rm{\sf D}}} \newcommand{\demi}[1]{[#1[} \newcommand{\ouvert}[1]{]#1[} \color{black} \author{Cl\'ement Denis \\ Aix-Marseille University, I2M} \date{} \title{Existence and uniqueness in critical spaces for the magnetohydrodynamical system in $\Ri^n$} \begin{document} \maketitle \begin{abstract} We give a description of a magnetohydrodynamical system in $n$ dimension using the exterior derivative. We then prove existence of global solutions for small initial data and local existence for arbitrary large data in two classes of critical spaces - $L^q_tL^p_x$ and $\CC_tL^p_x$, as well as uniqueness for solutions in $\CC_tL^p_x$. 
\end{abstract} \section{Introduction} In $\Ri^3$, the magnetohydrodynamical system on a time interval $\ouvert{0,T}$ ($0<T\le+\infty$) as considered in \cite{ST83} and \cite{Mo21} is written as \begin{equation} \label{mhd} \left\{ \begin{array}{rclcl} \partial_t u-\Delta u+\nabla \pi-u\times({\rm curl}\,u)&=&({\rm curl}\,b)\times b&\mbox{ in }&\ouvert{0,T}\times\Ri^3\\ \partial_t b-\Delta b&=&{\rm curl}\,(u\times b)&\mbox{ in }&\ouvert{0,T}\times\Ri^3\\ {\rm div}\,u&=&0&\mbox{ in }&\ouvert{0,T}\times\Ri^3\\ {\rm div}\,b&=&0&\mbox{ in }&\ouvert{0,T}\times\Ri^3\\ \end{array} \right. \end{equation} where the {\sl velocity} of the (incompressible homogeneous) fluid is denoted by $u:\,\ouvert{0,T}\times\Ri^3\to{\mathds{R}}^3$, the {\sl magnetic field} is denoted by $b:\,\ouvert{0,T}\times\Ri^3\to{\mathds{R}}^3$ and the (dynamic) {\sl pressure} of the fluid is denoted by $\pi:\,\ouvert{0,T}\times\Ri^3\to{\mathds{R}}$. The first equation in \eqref{mhd} corresponds to the {\sl Navier-Stokes} equations with the fluid subject to the {\sl Laplace force} $({\rm curl}\,b)\times b$ applied by the magnetic field $b$. The second equation of \eqref{mhd} describes the evolution of the magnetic field following the so-called {\sl induction} equation. The condition $\divv u=0$ corresponds to the incompressibility of the fluid, while the divergence-free condition on the magnetic field $b$ comes from the fact that $b$ is in the range of the ${\rm curl}$ operator. Our aim in this paper is to study the same system in higher dimensions - i.e. $\Ri^n$, $n\ge 3$. This requires us to rewrite the system \eqref{mhd} using the exterior and interior derivative (see \cite{Mo21}) - an added benefit of this formulation is that it makes it easy to generalise the results of this paper to Riemannian manifolds. Interpreting the scalar function $\pi$ as a $0$-form, $u$ as $1$-form and $b$ as a $2$-form, we can write \eqref{mhd} as: \begin{equation} \label{mhd1diff}\tag{MHD} \left\{ \begin{array}{rclcl} \partial_t u+\Sh u+d \pi+ u\lrcorner\,du&=&-d^*b\lrcorner\, b&\mbox{ in }&\ouvert{0,T}\times\Ri^n\\ \partial_t b+\Mh b&=&-d(u\lrcorner\, b)&\mbox{ in }&\ouvert{0,T}\times\Ri^n\\ u(t,\cdot)&\in&{\rm{\sf N}}(d^*)_{|_{\Lambda^1}}&\mbox{ for all }&t\in\ouvert{0,T}\\ b(t,\cdot)&\in&{\rm{\sf R}}(d)_{|_{\Lambda^2}}&\mbox{ for all }&t\in\ouvert{0,T},\\ \end{array} \right. \end{equation} Where $d$ is the exterior derivative, $d^*$ is its adjoint, $\Sh$ is the Stokes operator and $\Mh$ is the Maxwell operator. We detail the signification of those notations in section \ref{tools}, but for now let us add the following remarks: \begin{remark} \begin{itemize} \item All the terms in the first equation are 1-forms, while all the terms in the second equation are 2-forms. \item $\Nn(d)$ is the null of $d$ ; $\Rr(d^*)$ is the range of $d^*$. \item In $\Ri^3$ the magnetic field $b$ is a 2-form, but can be identified as a 1-form in $\Ri^3$ using the Hodge-star operator. This is however impossible in dimension $n$. \end{itemize} \end{remark} As for the Navier-Stokes system, the system \eqref{mhd1diff} with $T=\infty$ is invariant under the scaling \begin{align*} u_\lambda(t,x)&=\lambda u(\lambda^2t,\lambda x) \\ b_\lambda(t,x)&=\lambda b(\lambda^2t,\lambda x) \\ \pi_\lambda(t,x)&=\lambda^2 \pi(\lambda^2t,\lambda x), \end{align*} for $\lambda>0$, $t>0$ and $x\in\Ri^n$. 
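Indeed, a quick change of variables makes the role of these scalings explicit: for the velocity, say, one has
\begin{align*}
\|u_\lambda(t,\cdot)\|_{L^p(\Ri^n)}&=\lambda^{1-\frac{n}{p}}\,\|u(\lambda^2t,\cdot)\|_{L^p(\Ri^n)},\\
\|u_\lambda\|_{L^q(\demi{0,\infty};L^p(\Ri^n))}&=\lambda^{1-\frac{n}{p}-\frac{2}{q}}\,\|u\|_{L^q(\demi{0,\infty};L^p(\Ri^n))},
\end{align*}
so the space-time norm is invariant under the scaling exactly when $\frac{n}{p}+\frac{2}{q}=1$, while the spatial $L^n$-norm is invariant at every fixed time; the same computation applies to $b$.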
This suggests two possible critical spaces for $(u,b)$: either \begin{equation*} L^q\left(\demi{0,\infty}; L^p(\Ri^n, \Lambda^1)\right)\times L^q(\demi{0,\infty}; L^p(\Ri^n, \Lambda^2)), \end{equation*} with $\frac{n}{p}+\frac{2}{q}=1$, or \begin{equation*} \CC(\demi{0,\infty};L^n(\Ri^n,\Lambda^1))\times \CC(\demi{0,\infty};L^n(\Ri^n,\Lambda^2)). \end{equation*} The purpose of this paper is to prove existence and uniqueness of mild solutions (as defined in Definition~\ref{def_mild}) of the \eqref{mhd1diff} system in $\Ri^n$. Section \ref{LqLp} is devoted to $L^q_tL^p_x$ spaces, with Theorem \ref{thm:mhd1global} and Theorem \ref{thm:mhd1local} proving respectively the global existence (in time) of mild solutions for small initial data and the local existence for arbitrarily large initial data. Sections \ref{CtLn} and \ref{Uniqueness} are devoted to $\CC_tL^n_x$ spaces. In Section \ref{CtLn} we prove the existence of mild solutions (Theorems \ref{thm:mhd1global_2} and \ref{thm:mhd1local_2}), while in Section \ref{Uniqueness} we prove in Theorem \ref{thm:uniqueness} that those mild solutions are in fact unique. \section{Tools and notations} \label{tools} In this section we gather notations and results about differential forms as well as the Laplacian, Stokes and Maxwell operators. Most of it is directly taken from \cite{Mo21} (which however focuses on bounded domains), while the proofs of the different results stated can be found in \cite{McIM18} as well as \cite{MM09a} and \cite{MM09b}. \begin{notation} Let $A$ be an (unbounded) operator on a Banach space $X$. We denote by ${\rm{\sf D}}(A)$ its domain, ${\rm{\sf R}}(A)$ its range and ${\rm{\sf N}}(A)$ its null space. \end{notation} We also denote by $\CS(\Ri^n)$ the usual Schwartz space on $\Ri^n$. \subsection{Differential forms} \paragraph{Exterior algebra} We consider the exterior algebra $\Lambda=\Lambda^0\oplus\Lambda^1\oplus\dots\oplus\Lambda^n$ of ${\mathbb{R}}^n$, and we denote by $\{e_I, I\subset \llbracket 1,n \rrbracket \}$ the canonical basis of $\Lambda$, where $e_I=e_{j_1}\wedge e_{j_2}\wedge\dots\wedge e_{j_\ell}$ for $I=\{j_1,\dots,j_\ell\}$ with $\ j_1<j_2<\dots<j_{\ell}$. \\ Note that $\Lambda^0$ is in fact $\Ri$, and that for $\ell<0$ or $\ell>n$ we set $\Lambda^\ell=\{0\}$. \\ The basic operations on the exterior algebra $\Lambda$ are \begin{enumerate}[$(i)$ ] \item the exterior product $\wedge:\Lambda^k\times\Lambda^\ell\to\Lambda^{k+\ell}$, \item the interior product $\lrcorner\,:\Lambda^k\times\Lambda^\ell\to\Lambda^{\ell-k}$, \item the inner product $\langle\cdot,\cdot\rangle:\Lambda^\ell\times\Lambda^\ell \to{\mathbb{R}}$. \end{enumerate} These correspond to the following operations in $\Ri^3$: let $u$ be a vector, interpreted as a 1-form: \begin{itemize} \item[-] for $\varphi$ scalar, interpreted as a 0-form: $u\wedge \varphi=\varphi u$, $u\lrcorner\, \varphi=0$; \item[-] for $\varphi$ scalar, interpreted as a 3-form: $u\wedge \varphi=0$, $u\lrcorner\, \varphi=\varphi u$; \item[-] for $v$ a vector, interpreted as a 1-form: $u\wedge v=u\times v$, $u\lrcorner\, v=u\cdot v$; \item[-] for $v$ a vector, interpreted as a 2-form: $u\wedge v=u\cdot v$, $u\lrcorner\, v =-u\times v$. \end{itemize} \paragraph{Exterior and interior derivatives} We denote the {\sl exterior derivative} by $d:=\nabla\wedge=\sum_{j=1}^n \partial_j e_j\wedge$ and the {\sl interior derivative} (or co-derivative) by $\delta:=-\nabla\lrcorner\,=-\sum_{j=1}^n \partial_j e_j\lrcorner\,$.
They act on {\sl differential forms} from $\Ri^n$ to the exterior algebra $\Lambda=\Lambda^0\oplus\Lambda^1\oplus\dots\oplus\Lambda^n$ of ${\mathbb{R}}^n$, and satisfy $d^2=d\circ d=0$ and $\delta^2=\delta\circ\delta=0$. In $\Ri^3$ they correspond to the following operators: \begin{align} d&: \Lambda^0=\Ri \overset{\nabla}{\longrightarrow} \Lambda^1=\Ri^3 \overset{\curl}{\longrightarrow} \Lambda^2=\Ri^3 \overset{\divv}{\longrightarrow} \Lambda^3=\Ri \\ \delta&: \Lambda^0=\Ri \overset{-\divv}{\longleftarrow} \Lambda^1=\Ri^3 \overset{\curl}{\longleftarrow} \Lambda^2=\Ri^3 \overset{-\nabla}{\longleftarrow} \Lambda^3=\Ri \end{align} We denote by ${\rm{\sf D}}(d)$ the domain of (the differential operator) $d$ and by ${\rm{\sf D}}(\delta)$ the domain of $\delta$. They are defined by \begin{equation} {\rm{\sf D}}(d):=\bigl\{u\in L^2(\Ri^n,\Lambda); du\in L^2(\Ri^n,\Lambda)\bigr\} \quad \mbox{and}\quad {\rm{\sf D}}(\delta):=\bigl\{u\in L^2(\Ri^n,\Lambda); \delta u\in L^2(\Ri^n,\Lambda)\bigr\}. \end{equation} Similarly, their domains in $L^p$ are: \begin{equation} {\rm{\sf D}}^p(d):=\bigl\{u\in L^p(\Ri^n,\Lambda); du\in L^p(\Ri^n,\Lambda)\bigr\} \ \mbox{ and }\ {\rm{\sf D}}^p(\delta):=\bigl\{u\in L^p(\Ri^n,\Lambda); \delta u\in L^p(\Ri^n,\Lambda)\bigr\}. \end{equation} We also consider the maximal adjoint operator of $d$ in $L^2(\Ri^n,\Lambda)$, denoted by $d^*$. In $\Ri^n$, $\delta=d^*$, and we will use $d^*$ in the rest of this paper. For more details on $d$ and $\delta$, we refer to \cite[Section~2]{AMcI04} and \cite[Section~2]{CMcI10}. Both these papers also contain some historical background. \subsection{Laplacian, Stokes and Maxwell operators} \begin{definition} The {\sl Dirac operator} on $\Ri^n$ is \[ \Dh:=d+d^*=d+\delta. \] The {\sl Laplacian operator} on $\Ri^n$ is defined as \begin{equation*} -\Lh :=\Dh^2=d d^*+ d^*d=d \delta+ \delta d. \end{equation*} \end{definition} \begin{remark} For $1$-forms in dimension $3$, this last equation corresponds to the well-known identity $-\Delta = \curl \curl - \nabla\divv$. \end{remark} $\Dh$ is a closed densely defined operator on $L^2(\Ri^n,\Lambda)$ and we have the following Hodge decomposition (see \cite[Section 4, proof of Proposition 2.2]{AKMcI06Invent}): \begin{align} \label{H2}\tag{$H_2$} L^2(\Ri^n,\Lambda)=&\overline{{\rm{\sf R}}(d)}\stackrel{\bot}{\oplus} \overline{{\rm{\sf R}}(d^*)}\stackrel{\bot}{\oplus}{\rm{\sf N}}(\Dh)\\ =&\overline{{\rm{\sf R}}(d)}\stackrel{\bot}{\oplus}{\rm{\sf N}}(d^*) \label{H2Rd}\\ =&{\rm{\sf N}}(d)\stackrel{\bot}{\oplus}\overline{{\rm{\sf R}}(d^*)}. \label{H2Nd} \end{align} Note that the harmonic forms in $L^2$ on $\Ri^n$ are trivial, so ${\rm{\sf N}}(\Dh)= {\rm{\sf N}}(d)\cap{\rm{\sf N}}(d^*) ={\rm{\sf N}}\bigl(\Lh \bigr)=\{0\}$. The orthogonal projection from $L^2(\Ri^n,\Lambda)$ onto ${\rm{\sf N}}(d^*)$ (see \eqref{H2Rd}), restricted to $1$-forms, is the well-known Helmholtz (or Leray) projection denoted by ${\mathbb{P}}$. The Hodge decompositions also exist in $L^p$ (see \cite[Theorems~2.4.2 and 2.4.14]{Schw95}): \begin{align} \label{Hp}\tag{$H_p$} L^p(\Ri^n,\Lambda)=&\overline{{\rm{\sf R}}^p(d)}\oplus \overline{{\rm{\sf R}}^p(d^*)}\oplus{\rm{\sf N}}(\Dh)\\ =&\overline{{\rm{\sf R}}^p(d)} \oplus{\rm{\sf N}}^p(d^*)\label{HpRd}\\ =&{\rm{\sf N}}^p(d) \oplus\overline{{\rm{\sf R}}^p(d^*)}\label{HpNd} \end{align} for all $p\in\ouvert{1,\infty}$ and the projection ${\mathbb{P}}:L^p(\Ri^n,\Lambda^1)\to {\rm{\sf N}}^p(d^*)_{|_{\Lambda^1}}$ extends accordingly.
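To make the role of ${\mathbb{P}}$ concrete, the following short computational sketch (given only as an illustration, on a periodic grid rather than on $\Ri^n$, and written in Python with NumPy; the function name \texttt{leray\_projection} is ours) applies the classical Fourier-multiplier form of the Helmholtz--Leray projection, $\widehat{{\mathbb{P}}u}(\xi)=\hat u(\xi)-\xi\,\bigl(\xi\cdot\hat u(\xi)\bigr)/|\xi|^2$, to a sampled vector field identified with a $1$-form:
\begin{verbatim}
# Illustration only: discrete Helmholtz--Leray projection of a periodic
# vector field (a 1-form) onto divergence-free fields, i.e. onto N(d^*),
# via the Fourier multiplier (Pu)^(xi) = u^(xi) - xi (xi . u^(xi))/|xi|^2.
import numpy as np

def leray_projection(u):
    # u has shape (n, N, ..., N): the n components of the field on a grid.
    n = u.shape[0]
    axes = tuple(range(1, n + 1))
    u_hat = np.fft.fftn(u, axes=axes)
    xi = np.stack(np.meshgrid(*[np.fft.fftfreq(m) for m in u.shape[1:]],
                              indexing="ij"))
    xi2 = np.sum(xi ** 2, axis=0)
    xi2[(0,) * n] = 1.0          # avoid 0/0 at the zero frequency
    dot = np.sum(xi * u_hat, axis=0)
    return np.real(np.fft.ifftn(u_hat - xi * dot / xi2, axes=axes))
\end{verbatim}
The projected field has vanishing (discrete) divergence, which mirrors the fact that ${\mathbb{P}}$ annihilates exact forms $d\pi$; this is the reason the pressure does not appear in the mild formulation below.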
\begin{definition} \begin{itemize} \item We denote by $\Sh$ the Stokes operator: \begin{equation} \Sh:=\Dh^2=d^*d \ {\rm in} \ N^2(d^*)_{|\Lambda^1}, \end{equation} where $N^2(d^*)_{|\Lambda^1}$ is the restriction of $N^2(d^*)$ to the space of 1-forms $\Lambda^1$. \item We denote by $\Mh$ the Maxwell operator: \begin{equation} \Mh:=\Dh^2=dd^* \ {\rm in} \ N^2(d)_{|\Lambda^2}, \end{equation} where $N^2(d)_{|\Lambda^2}$ is the restriction of $N^2(d)$ to the space of 2-forms $\Lambda^2$. \end{itemize} \end{definition} \begin{remark} On $\Ri^n$, $\Sh=-\Lh_{|N^2(d^*)_{|\Lambda^1}}$ and $\Mh=-\Lh_{|N^2(d)_{|\Lambda^2}}$. This means that the two following theorems, written for the Laplacian operator $\Lh$, are also true for both the Stokes and Maxwell operators. \end{remark} First, these operators are sectorial and admit a bounded holomorphic functional calculus. \begin{theorem} \label{thm:HodgeL&S} \begin{enumerate} \item The Laplacian operator $-\Lh $ is sectorial of angle $0$ in $L^p(\Ri^n,\Lambda)$ and for all $\mu\in \ouvert{0,\frac{\pi}{2}}$, $-\Lh $ admits a bounded $S^\circ_{\mu+}$ holomorphic functional calculus in $L^p(\Ri^n,\Lambda)$. \end{enumerate} \end{theorem} Secondly, they satisfy the maximal regularity property, which is crucial for our proofs: \begin{theorem}[Maximal regularity]\label{Sylvie_disciple} Let $1<p,q<\infty$ and let $R$ be the operator defined for $f\in L^1_{{\rm loc}}(]0,\infty[;\CS'(\Ri^n))$ by \begin{equation} Rf(t)=\int_0^t e^{(t-s)\Lh }f(s)\dd s, \quad \forall t>0. \end{equation} This operator is bounded from $L^q(\ouvert{0,\infty};L^p(\Ri^n))$ to $\dot{W}^{1,q}(\ouvert{0,\infty};L^p(\Ri^n))\cap L^q(\ouvert{0,\infty};\dot{W}^{2,p}(\Ri^n))$. In particular, the operator $\Lh R$ is bounded in $L^q(\ouvert{0,\infty};L^p(\Ri^n))$. Moreover, there exists a constant $C_{q,p}$ such that \begin{equation}\label{regmax} \|\frac{\dd}{\dd t}Rf\|_{L^q_tL^p_x}+\|\Lh Rf\|_{L^q_tL^p_x}+\|(-\Lh )^\alpha(\frac{\dd}{\dd t})^{1-\alpha}Rf\|_{L^q_tL^p_x} \le C_{q,p}\|f\|_{L^q_tL^p_x}, \end{equation} for all $\alpha\in \ouvert{0,1}$. \end{theorem} The proof can be found in \cite[Chapter IV, \S 3]{LSU68}. \subsection{The magnetohydrodynamical system} Let us recall the magnetohydrodynamical system \eqref{mhd1diff}: \begin{equation} \tag{MHD} \left\{ \begin{array}{rclcl} \partial_t u+\Sh u+d \pi+ u\lrcorner\,du&=&-d^*b\lrcorner\, b&\mbox{ in }&\ouvert{0,T}\times\Ri^n\\ \partial_t b+\Mh b&=&-d(u\lrcorner\, b)&\mbox{ in }&\ouvert{0,T}\times\Ri^n\\ u(t,\cdot)&\in&{\rm{\sf N}}(d^*)_{|_{\Lambda^1}}&\mbox{ for all }&t\in\ouvert{0,T}\\ b(t,\cdot)&\in&{\rm{\sf R}}(d)_{|_{\Lambda^2}}&\mbox{ for all }&t\in\ouvert{0,T}.\\ \end{array} \right. \end{equation} \begin{definition}\label{def_mild} A mild solution of the system \eqref{mhd1diff} with initial conditions $u_0\in N(d^*)_{|\Lambda^1}$ and $b_0\in R(d)_{|\Lambda^2}$ is a pair $(u,b)$ such that $u$ is a $1$-form on $\Ri^n$, $b$ is a $2$-form on $\Ri^n$, and $(u,b)$ satisfies \begin{align} \label{mildsolmhd1u} u(t)=&e^{-t\Sh}u_0+\int_0^te^{-(t-s)\Sh}{\mathbb{P}}\bigl(-u(s)\lrcorner\, du(s)\bigr)\,{\rm d}s +\int_0^te^{-(t-s)\Sh}{\mathbb{P}}\bigl(-d^*b(s)\lrcorner\,b(s)\bigr)\,{\rm d}s, \\ =& a_1(t) + B_1(u,u)(t)+B_2(b,b)(t)\\ \label{mildsolmhd1b} b(t)=&e^{-t \Mh}b_0+\int_0^te^{-(t-s) \Mh}\Bigl(-d\bigl(u(s)\lrcorner\,b(s)\bigr)\Bigr)\,{\rm d}s\\ =&a_2(t)+B_3(u,b)(t) \end{align} for all $t\in \ouvert{0,T}$. \end{definition} In the formalism we use, it is easy to see that the bilinear terms $B_1$ and $B_2$ are almost identical.
In fact, we will focus on $B_1$ and skip the details for $B_2$ altogether. The bilinear form $B_3$ for the magnetic field is different, however, and in Sections \ref{CtLn} and \ref{Uniqueness} we will need the following Leibniz-style inequality: \begin{lemma} Let $\alpha$, $\beta$, $\alpha'$, $\beta'$ and $\gamma$ be such that $\frac{1}{\alpha}+\frac{1}{\beta}=\frac{1}{\alpha'}+\frac{1}{\beta'}=\frac{1}{\gamma}$. There exists a constant $C_p$ such that \begin{equation}\label{Leibniz} \|d(\omega_1\lrcorner\,\omega_2)\|_{\gamma}\le C_p \bigl(\|\Dh\omega_1\|_\alpha\|\omega_2\|_\beta +\|\omega_1\|_{\alpha'}\|\Dh\omega_2\|_{\beta'}\bigr), \end{equation} for all $\omega_1\in {\rm{\sf D}}^\alpha(\Dh)\cap L^{\alpha'}(\Ri^n,\Lambda^1)$ and all $\omega_2\in {\rm{\sf D}}^{\beta'}(\Dh)\cap L^{\beta}(\Ri^n,\Lambda^2)$. \end{lemma} \begin{proof} On $\Ri^n$ we have $-\Delta = \Dh^2$, so $\nabla = [\nabla (-\Delta)^{-1} \Dh] \Dh = [\nabla(-\Delta)^{-1/2}] [(-\Delta)^{-1/2}\Dh] \Dh$. Since $\nabla(-\Delta)^{-1/2}$ (the Riesz transforms) and $(-\Delta)^{-1/2}\Dh$ are bounded on $L^r(\Ri^n)$ for every $r\in\ouvert{1,\infty}$, the gradient $\nabla$ is controlled by $\Dh$ in every such $L^r$. The estimate \eqref{Leibniz} then follows from the pointwise Leibniz rule for $d(\omega_1\lrcorner\,\omega_2)$ together with H\"older's inequality. \end{proof} \begin{remark} This estimate is an open question for low-regularity domains and in particular for Lipschitz domains as discussed in \cite{Mo21}. \end{remark} \section{Existence in $L^q_tL^p_x$ spaces}\label{LqLp} In this section we consider solutions which are $L^q$ in time and $L^p$ in space. We prove global existence of those solutions for small initial data and local existence for arbitrarily large initial data. Our proofs are based on the classical Picard fixed point theorem, already used for the Navier-Stokes equations by Fujita and Kato \cite{FK64} (see also \cite{M06}) and in \cite{BM20} (see also \cite{BH20}) for the Boussinesq system. Our most recent inspiration is a paper by Monniaux \cite{Mo21} on the 3-dimensional \eqref{mhd1diff} system. Most of the tools used here appeared in the paper \cite{MM09b}; see also \cite{McIM18}. Let us now state our main theorems: \begin{theorem}[Global existence] \label{thm:mhd1global} Let $(p,q)$ be such that $\frac{n}{p}+\frac{2}{q}=1$, $p>n$, and $q>3$. Then there exists $\varepsilon>0$ such that for all $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$ with \begin{align} \|u_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} &+ \|u_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}\le \e \\ {\rm and } \ \|b_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} &+ \|b_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}\le \e, \end{align} the system \eqref{mhd1diff} admits a mild solution $(u,b) \in L^q([0,\infty[;L^p(\Ri^n,\Lambda^1))\times L^q([0,\infty[;L^p(\Ri^n,\Lambda^2))$. \end{theorem} \begin{theorem}[Local existence] \label{thm:mhd1local} Let $(p,q)$ be such that $\frac{n}{p}+\frac{2}{q}=1$, $p>n$, and $q>3$. Then for all $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$ there exists $T>0$ such that the system \eqref{mhd1diff} admits a mild solution $u \in L^q([0,T[;L^p(\Ri^n,\Lambda^1))$ and $b\in L^q([0,T[;L^p(\Ri^n,\Lambda^2))$.
\end{theorem} \begin{proof} For $0<T\le \infty$, let us consider the spaces \begin{equation} \CU_T:=\left\{u\in L^q([0,T[;\Nn^p(d^*)_{|\Lambda^1}); \ du\in L^{\frac{q}{2}}([0,T[;L^{\frac{p}{2}}(\Ri^n,\, \Lambda^2))\right\} \end{equation} and \begin{equation} \CB_T:=\left\{b\in L^q([0,T[;\Rr^p(d)_{|\Lambda^2}); \ d^* b\in L^{\frac{q}{2}}([0,T[;L^{\frac{p}{2}}(\Ri^n,\, \Lambda^1))\right\} \end{equation} endowed with their natural norms \begin{align*} \|u\|_{\CU_T} &= \|u\|_{L^q([0,T[;L^p(\Ri^n,\Lambda^1))}+\|du\|_{L^{\frac{q}{2}}([0,T[;L^{\frac{p}{2}}(\Ri^n,\Lambda^2))} \\ \|b\|_{\CB_T} &= \|b\|_{L^q([0,T[;L^p(\Ri^n,\Lambda^2))}+\|d^*b\|_{L^{\frac{q}{2}}([0,T[;L^{\frac{p}{2}}(\Ri^n,\Lambda^1))}. \end{align*} The proof relies on the Picard fixed-point theorem (see \cite[Theorem 15.1]{Lem02}): the system \begin{align} \label{eq:fixedpoint} u=a_1+B_1(u,u)+B_2(b,b)\quad\mbox{and}\quad b=a_2+B_3(u,b), \quad (u,b)\in{\mathscr{U}}_T \end{align} can be reformulated as \begin{equation} \label{eq:picard1} {\boldsymbol{U}}={\boldsymbol{a}}+{\boldsymbol{\cb}}({\boldsymbol{U}},{\boldsymbol{U}}) \end{equation} where ${\boldsymbol{U}}=(u,b)\in {\mathscr{U}}_T\times{\mathscr{B}}_T$, ${\boldsymbol{a}}=(a_1,a_2)$ and ${\boldsymbol{\cb}}({\boldsymbol{U}},{\boldsymbol{U'}})=(B_1(u,u')+B_2(b,b'),B_3(u,b'))$ if ${\boldsymbol{U}}=(u,b)$ and ${\boldsymbol{U'}}=(u',b')$. On ${\mathscr{U}}_T\times{\mathscr{B}}_T$ we choose the norm $\|(u,b)\|_{{\mathscr{U}}_T\times{\mathscr{B}}_T}:=\|u\|_{{\mathscr{U}}_T}+\|b\|_{{\mathscr{B}}_T}$. We split the proof into two lemmas: Lemma~\ref{lem:initcond1} concerns the linear part, while Lemma~\ref{lem_Bilin1} concerns the bilinear operator $\boldsymbol{\cb}$. \begin{lemma}\label{lem:initcond1} For $u_0\in \dot{B}^{-\frac{2}{q}}_{p,q}(\Ri^n,\Lambda^1)\cap \dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}(\Ri^n,\Lambda^1)$ with $d^* u_0=0$ and $b_0\in \dot{B}^{-\frac{2}{q}}_{p,q}(\Ri^n,\Lambda^2)\cap\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}(\Ri^n,\Lambda^2)$ with $d b_0=0$, then \begin{enumerate} \item $a_1: \, t\mapsto e^{-t\Sh }u_0 \in \CU_T$ \item $a_2: \, t\mapsto e^{-t\Mh}b_0 \in \CB_T$, \end{enumerate} for all $T\in ]0,+\infty]$. Besides for all $\varepsilon>0$, there exists $T>0$ such that \begin{equation}\label{a1a2_le_epsilon} \|a_1\|_{\CU_T}+\|a_2\|_{\CB_T} \le \varepsilon \end{equation} \end{lemma} \begin{lemma}\label{lem_Bilin1} The bilinear operators $B_1$, $B_2$ and $B_3$ are bounded in the following spaces: \begin{enumerate} \item $B_1:\CU_T\times\CU_T \rightarrow \CU_T$, \label{B_1} \item $B_2:\CB_T\times \CB_T \rightarrow \CU_T$, \label{B_2} \item $B_3:\CU_T\times \CB_T\rightarrow \CB_T$ \label{B_3} \end{enumerate} with norms independent from $T>0$. \end{lemma} The boundedness of the operator ${\boldsymbol{\cb}}$ is now obvious: let $\boldsymbol{U}=(u,b)\in \CU_T\times\CB_T$ and $\boldsymbol{U'}=(u',b')\in \CU_T\times \CB_T$. Then \begin{align*} \left\| {\boldsymbol{\cb}}({\boldsymbol{U}},{\boldsymbol{U'}}) \right\|_{\CU_T\times\CB_T} &= \left\| B_1(u,u')+B_2(b,b')\right\|_{\CU_T} + \left\| B_3(u,b')\right\|_{\CB_T} \\ &\le K\left( \|u\|_{\CU_T} \|u'\|_{\CU_T} + \|b\|_{\CB_T}\|b'\|_{\CB_T} + \|u\|_{\CU_T}\|b'\|_{\CB_T} \right) \\ &\le K \|\boldsymbol{U}\|_{\CU_T\times\CB_T} \|\boldsymbol{U'}\|_{\CU_T\times\CB_T}, \end{align*} where $K$ is a constant independent from $T>0$. Let then $\varepsilon=\frac{1}{4K}$. 
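For the reader's convenience, let us recall the standard bilinear fixed-point lemma behind this choice of $\varepsilon$ (this is the form in which we use the Picard fixed-point theorem cited above, cf.\ \cite[Theorem~15.1]{Lem02}): if $X$ is a Banach space, ${\boldsymbol{a}}\in X$ and ${\boldsymbol{\cb}}:X\times X\to X$ is bilinear with
\[
\|{\boldsymbol{\cb}}(x,y)\|_X\le K\,\|x\|_X\|y\|_X \quad\mbox{and}\quad \|{\boldsymbol{a}}\|_X\le\frac{1}{4K},
\]
then the equation $x={\boldsymbol{a}}+{\boldsymbol{\cb}}(x,x)$ admits a solution satisfying $\|x\|_X\le\frac{1}{2K}$ (obtained as a limit of Picard iterates), and this solution is unique in the open ball of radius $\frac{1}{2K}$ of $X$.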
By Lemma~\ref{lem:initcond1}, for $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$, there exists $T\le \infty$ such that $\|a_1\|_{\CU_T}+\|a_2\|_{\CB_T} \le \varepsilon$ holds for $\varepsilon=\frac{1}{4K}$. Then by Picard's fixed point theorem the system \eqref{eq:picard1} admits a unique solution ${\boldsymbol{U}}=(u,b)\in {\mathscr{U}}_T\times{\mathscr{B}}_T$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:initcond1}] Let $\e>0$. Let $u_0\in \dot{B}^{-\frac{2}{q}}_{p,q}(\Ri^n,\Lambda^1)\cap \dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}(\Ri^n,\Lambda^1)$ with $d^* u_0=0$ and $b_0\in \dot{B}^{-\frac{2}{q}}_{p,q}(\Ri^n,\Lambda^2)\cap\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}(\Ri^n,\Lambda^2)$ with $d b_0=0$. \begin{enumerate} \item First we prove that the semigroups $t\mapsto a_1(t)=e^{-t\Sh}u_0$ and $t\mapsto a_2(t)=e^{-t\Mh}b_0$ are respectively in $\CU_T$ and $\CB_T$. Let $T=+\infty$. Thanks to \cite[Lemma~2.34]{BCD11} we have \begin{align*} \|u_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} &\sim_{p,q} \left\|t\mapsto \|t^\frac{1}{q}e^{t\Lh}u_0\|_{L^p_x}\right\|_{L^q(\demi{0,\infty},\frac{\dd t}{t})} \\ &\sim_{p,q} \left(\int_{0}^{+\infty}\|e^{t\Delta}u_0\|_{L^p_x}^q\dd t\right)^\frac{1}{q}\\ &\sim_{p,q} \|t\mapsto e^{-t\Sh}u_0\|_{L^q_tL^p_x}. \end{align*} Now, using the fact that $(-\Lh)^\frac{1}{2}: \ \dot{B}^s_{\frac{p}{2},\frac{q}{2}} \rightarrow \dot{B}^{s-1}_{\frac{p}{2},\frac{q}{2}}$ is an isomorphism, we get \begin{equation*} \|u_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}} \sim_{p,q} \|(-\Lh)^\frac{1}{2}u_0\|_{\dot{B}^{-\frac{4}{q}}_{\frac{p}{2},\frac{q}{2}}}. \end{equation*} Then using \cite[Lemma~2.34]{BCD11} again we get \begin{align*} \|u_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}} &\sim_{p,q} \left\|t\mapsto \|t^\frac{2}{q}(-\Lh)^{\frac{1}{2}}e^{t\Lh}u_0\|_{L^\frac{p}{2}_x}\right\|_{L^\frac{q}{2}(\demi{0,\infty},\frac{\dd t}{t})} \\ &\sim_{p,q} \| t\mapsto (-\Lh)^\frac{1}{2}e^{t\Lh}u_0 \|_{L^\frac{q}{2}_t L^\frac{p}{2}_x} \\ &\sim_{p,q} \| de^{-t\Sh}u_0 \|_{L^\frac{q}{2}_t L^\frac{p}{2}_x}, \end{align*} where the last line comes from the fact that $d^*u_0=0$ and $\| \Dh \cdot \|_\frac{p}{2} \sim \| (-\Lh)^\frac{1}{2}\cdot\|_\frac{p}{2}$. Hence for $T\in \Ri^+$, \begin{equation} \|a_1\|_{\CU_T}\le \|a_1\|_{\CU_{\infty}}\less_{p,q} \|u_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} + \|u_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}. \end{equation} The estimate for $a_2$ is proven in a similar way: \begin{equation} \|a_2\|_{\CB_T} \le \|a_2\|_{\CB_\infty} \less_{p,q} \|b_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} + \|b_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}. \end{equation} \item If $\|u_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} + \|u_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}\le \e$ and $\|b_0\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} + \|b_0\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}}\le \e$, then by the previous point we immediately have $\|a_1\|_{\CU_T}+\|a_2\|_{\CB_T} \le K_{p,q}\, \varepsilon$ for all $T\in ]0,+\infty]$. Otherwise, let $T\in \Ri^+$ and let $u_0^\e\in \CS(\Ri^n,\Lambda^1)$ and $b_0^\e \in \CS(\Ri^n,\Lambda^2)$ be such that $d^*u_0^\e=0$, $db_0^\e=0$, and \begin{align*} \|u_0-u_0^\e\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}} &\le \e \; {\rm and } \; \|u_0-u_0^\e\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} \le \e, \\ \|b_0-b_0^\e\|_{\dot{B}^{-\frac{4}{q}+1}_{\frac{p}{2},\frac{q}{2}}} &\le \e \; {\rm and } \; \|b_0-b_0^\e\|_{\dot{B}^{-\frac{2}{q}}_{p,q}} \le \e.
\end{align*} Let us denote for $t\in [0,T]$ $a_1^\e(t) = e^{-t\Sh}u_0^\e$ and $a_2^\e(t) = e^{-t\Mh}b_0^\e$, and write \begin{equation*} \|a_1\|_{\CU_T}\le \|a_1-a_1^\e\|_{\CU_T}+\|a_1^\e\|_{\CU_T} \le K_{p,q}\, \e + \|a_1^\e\|_{\CU_T}. \end{equation*} By definition $\|a_1^\e\|_{\CU_T}=\|a_1^\e\|_{L^q_tL^p_x}+\|d a_1^\e\|_{L^\frac{q}{2}_t L^\frac{p}{2}_x}$. Let us consider $\| a_1^\e\|_{L^q_tL^p_x}$ first: let $s\in \Ri$ be such that $1-qs>0$. Then we get \begin{align*} \| a_1^\e\|_{L^q_tL^p_x} &\le \| t\mapsto t^s\|_{L^\infty([0,T])} \left\|t\mapsto t^{-s}\|e^{-t\Sh}u_0^\e\|_{L^p_x}\right\|_{L^q([0,T])} \\ &\less_{p,q} T^s \left\| t \mapsto \|t^{\frac{1-sq}{q}}e^{t\Lh}u_0^\e\|_{L^p_x}\right\|_{L^q([0,T],\frac{\dd t}{t})} \\ &\less_{p,q} T^s \|u_0^\e\|_{\dot{B}^{-2\frac{1-sq}{q}}_{p,q}}. \end{align*} Similarly let $s\in \Ri$ be such that $1-\frac{sq}{2}>0$. Then \begin{align*} \|d a_1^\e \|_{L^\frac{q}{2}_t L^{\frac{p}{2}}_x} &\less_{p,q} \left\|t\mapsto e^{t\Lh}(-\Lh)^\frac{1}{2}u_0^\e \right\|_{L^\frac{q}{2}_t L^{\frac{p}{2}}_x} \\ &\less_{p,q} T^s \| (-\Lh)^\frac{1}{2}u_0^\e \|_{\dot{B}^{-2\frac{2-sq}{q}}_{\frac{p}{2},\frac{q}{2}}} \\ &\less_{p,q} T^s \| u_0^\e \|_{\dot{B}^{-2\frac{2-sq}{q}+1}_{\frac{p}{2},\frac{q}{2}}}. \end{align*} Since $u_0^\e \in \CS(\Ri^n,\Lambda^1)$, both $\| u_0^\e \|_{\dot{B}^{-2\frac{2-sq}{q}+1}_{\frac{p}{2},\frac{q}{2}}}$ and $\|u_0^\e\|_{\dot{B}^{-2\frac{1-sq}{q}}_{p,q}}$ are well-defined and finite, although they can be arbitrarily large depending on $u_0$. However taking $T$ small enough we get \begin{equation*} \| a_1^\e \|_{\CU_T} \le K_{p,q} \, \e. \end{equation*} And in a similar way we can prove that \begin{equation*} \| a_2^\e \|_{\CB_T} \le K_{p,q} \, \e, \end{equation*} which concludes our proof. \end{enumerate} \end{proof} \begin{proof}[Lemma~\ref{lem_Bilin1}] Recall the relations on $n$, $p$ and $q$: \begin{equation*} n<p, \quad 3<q \quad {\rm and} \quad \frac{n}{p}+\frac{2}{q}=1 \end{equation*} \begin{enumerate} \item Recall that $B_1(u,v)(t)=\int_0^t e^{-(t-s)\Sh} \Leray(u(s)\contr dv(s))\dd s$. \begin{itemize} \item Let $\theta = \frac{n}{p}$. Then $\frac{3}{p}-\frac{2\theta}{n}=\frac{1}{p}$, so the Sobolev injection $W^{2\theta,\frac{p}{3}}\hookrightarrow L^p$ holds. For almost every $t>0$, we compute the norm in $L^p_x$ of $B_1(u,v)(t)$ in the following way: \begin{align*} \left\|B_1(u,v)(t)\right\|_{L^p_x}&=\left\| \int_0^t \Sh ^\theta e^{-(t-s)\Sh } \Sh ^{-\theta}\Leray \left(u(s)\contr dv(s)\right)\dd s \right\|_{L^p_x} \\ (1) \qquad &\lesssim \int_0^t \left\| \Sh ^\theta e^{-(t-s)\Sh }\right\|_{L^p\rightarrow L^p} \left\|\Sh ^{-\theta}\Leray \big(u(s)\contr dv(s)\big) \right\|_{L^p_x} \dd s \\ (2) \qquad &\lesssim \int^t_0 (t-s)^{-\theta} \left\|\Sh ^{-\theta}\Leray \big(u(s)\contr dv(s)\big) \right\|_{W^{2\theta,\frac{p}{3}}_x} \dd s \\ (3) \qquad &\lesssim \int^t_0 (t-s)^{-\theta} \left\|\Leray \big(u(s)\contr dv(s)\big) \right\|_{L^\frac{p}{3}_x} \dd s \\ (4) \qquad &\lesssim \int^t_0 (t-s)^{-\theta} \left\| u(s)\contr dv(s) \right\|_{L^\frac{p}{3}_x} \dd s \\ (5) \qquad &\lesssim \int^t_0 (t-s)^{-\theta} \|u(s)\|_{L^p_x}\| dv(s)\|_{L^\frac{p}{2}_x} \dd s, \end{align*} where (1) uses the operator norm of $\Sh^\theta e^{-(t-s)\Sh}$, (2) uses the Sobolev injection $W^{2\theta,\frac{p}{3}}\hookrightarrow L^p$, (3) uses the continuity of $\Sh^{-\theta}$ from $L^\frac{p}{3}_x$ to $W^{2\theta,\frac{p}{3}}$, (4) uses the continuity of the Leray projector $\Leray$ on $L^\frac{p}{3}_x$ and finally (5) is simply Hölder's inequality. 
Since $s\mapsto s^{-\theta}=s^{-\frac{n}{p}}$ is in $L^{\frac{p}{n},\infty}$ (see \cite[Definition~1.1.5]{Gr08}) and $s\mapsto \|u(s)\|_{L^p_x}\| dv(s)\|_{L^\frac{p}{2}_x}$ is in $L^\frac{q}{3}_t$ by Hölder's inequality, the convolution inequality $\|f\star g\|_{L^q}\less_{n,p,q} \|f\|_{L^{\frac{p}{n},\infty}}\|g\|_{L^\frac{q}{3}}$ (see \cite[Theorem~1.2.13 ]{Gr08}) yields \begin{equation} \left\|B_1(u,v)\right\|_{L^q_t L^p_x}\less_{n,p,q} \, \| u\|_{\CU_T} \|v\|_{\CU_T}. \end{equation} \item We now compute the norm of $dB_1(u,v)$. Let $\theta=\frac{n}{2p}$ be such that $\frac{3}{p}-\frac{2\theta}{n}=\frac{2}{p}$, so that the Sobolev injection $W^{2\theta, \frac{p}{3}}\hookrightarrow L^\frac{p}{2}$ holds. Then, following the same steps as for $\| B_1(u,v)(t)\|_{L^p_x}$, we get: \begin{align*} \left\|dB_1(u,v)(t)\right\|_{L^\frac{p}{2}_x}&=\left\| \int_0^t d \Sh ^\theta e^{-(t-s)\Sh } \Sh ^{-\theta}\Leray \left(u(s)\contr dv(s)\right)\dd s \right\|_{L^\frac{p}{2}_x} \\ (1) \qquad &\less \int_0^t \left\| d \Sh ^\theta e^{-(t-s)\Sh }\right\|_{L^\frac{p}{2}\rightarrow L^\frac{p}{2}} \left\|\Sh ^{-\theta}\Leray \big(u(s)\contr dv(s)\big) \right\|_{L^\frac{p}{2}_x} \dd s \\ (2) \qquad &\less \int^t_0 (t-s)^{-\theta-\frac{1}{2}} \left\|\Sh ^{-\theta}\Leray \big(u(s)\contr dv(s)\big) \right\|_{W^{2\theta,\frac{p}{3}}_x} \dd s \\ (3) \qquad &\less \int^t_0 (t-s)^{-\theta-\frac{1}{2}} \left\|\Leray \big(u(s)\contr dv(s)\big) \right\|_{L^\frac{p}{3}_x} \dd s \\ (4) \qquad &\less \int^t_0 (t-s)^{-\theta-\frac{1}{2}} \left\|u(s)\contr dv(s) \right\|_{L^\frac{p}{3}_x} \dd s \\ (5) \qquad &\less \int^t_0 (t-s)^{-\frac{n+p}{2p}} \|u(s)\|_{L^p_x}\| dv(s)\|_{L^\frac{p}{2}_x} \dd s, \end{align*} where (1) uses the operator norm of $\Sh^\theta e^{-(t-s)\Sh}$, (2) uses the Sobolev injection $W^{2\theta,\frac{p}{3}}\hookrightarrow L^\frac{p}{2}$, (3) uses the continuity of $\Sh^{-\theta}$ from $L^\frac{p}{3}_x$ to $W^{2\theta,\frac{p}{3}}$, (4) uses the continuity of the Leray projector $\Leray$ on $L^\frac{p}{3}_x$ and finally (5) is simply (again!) Hölder's inequality. Since $s\mapsto s^{-\frac{n+p}{2p}}$ is in $L^{\frac{2p}{n+p},\infty}$ (see \cite[Definition~1.1.5]{Gr08}) and $s\mapsto \|u(s)\|_{L^p_x}\| dv(s)\|_{L^\frac{p}{2}_x}$ is in $L^\frac{q}{3}_t$ by Hölder's inequality, the convolution inequality $\|f\star g\|_{L^\frac{q}{2}}\less_{n,p,q} \|f\|_{L^{\frac{2p}{n+p},\infty}}\|g\|_{L^\frac{q}{3}}$ (see \cite[Theorem~1.4.24]{Gr08}) yields \begin{equation} \left\|dB_1(u,v)\right\|_{L^q_t L^p_x}\lesssim \| u\|_{\CU_T} \|v\|_{\CU_T}. \end{equation} And combining both estimates yields \begin{equation} \left\|B_1(u,v)\right\|_{\CU_T}\lesssim \| u\|_{\CU_T} \|v\|_{\CU_T}. \end{equation} \end{itemize} \item The boundedness of $B_2:\CB_T\times \CB_T \rightarrow \CU_T$ is proved in the exact same way. \item \begin{itemize} \item The estimates on $B_3(u,b)$ is obtained in a similar way: taking $\theta=\frac{n}{2p}$ we get \begin{align*} \|B_3(u,b)(t)\|_{L^p_x}&=\left\| \int_0^t \Mh^\theta e^{-(t-s) \Mh}\Mh^{-\theta}\Big(-d\big(u(s)\contr b(s)\big)\Big)\dd s \right\|_{L^p_x} \\ (1) \qquad &=\left\| \int_0^t d \Sh^\theta e^{-(t-s) \Sh}\Sh^{-\theta} \big(u(s)\contr b(s)\big)\Big)\dd s \right\|_{L^p_x} \\ (2) \qquad &\lesssim \int^t_0 \left\|d\Sh^\theta e^{-(t-s)\Sh}\right\|_{L^p\rightarrow L^p} \|\Sh^{-\theta}u(s)\contr b(s)\|_{L^p_x} \dd s \\ (3) \qquad &\lesssim \int^t_0 (t-s)^{-\theta-\frac{1}{2}} \left\| \Sh^{-\theta} u(s)\contr b(s)\right\|_{W^{2\theta,\frac{p}{2}}_x} \dd s. 
\\ (4) \qquad &\lesssim \int^t_0 (t-s)^{-\frac{p+n}{2p}} \|u(s)\contr b(s)\|_{L^\frac{p}{2}_x} \dd s. \\ (5) \qquad &\lesssim \int^t_0 (t-s)^{-\frac{p+n}{2p}} \|u(s)\|_{L^p_x} \|b(s)\|_{L^p_x} \dd s, \end{align*} where (1) uses the fact that $\Mh d=d\Sh$, (2) uses the operator norm of $d\Sh^\theta e^{-(t-s)\Sh}$, (3) uses the Sobolev injection $W^{2\theta,\frac{p}{2}}\hookrightarrow L^p$, (4) uses the continuity of $\Sh^{-\theta}$ from $L^\frac{p}{2}_x$ to $W^{2\theta,\frac{p}{2}}$, and (5) is again Hölder's inequality. Then as before \cite[Theorem~1.4.24]{Gr08} gives us \begin{equation} \left\|B_3(u,b)\right\|_{L^q_t L^p_x}\lesssim \| u\|_{\CU_T} \|b\|_{\CB_T}. \end{equation} \item To compute $\|d^* B_3(u,b)(t)\|_\frac{p}{2}$, we can then apply the maximal regularity theorem: \begin{align*} \|d^* B_3(u,b)\|_{L^\frac{q}{2}_t L^\frac{p}{2}_x} &= \left\| t\mapsto \int_0^t d^*e^{-(t-s)\Mh}\Big(-d\big(u(s)\contr b(s)\big)\Big)\dd s \right\|_{L^\frac{q}{2}_t L^\frac{p}{2}_x}\\ (1)\qquad &=\left\| t\mapsto \int_0^t \Sh e^{-(t-s)\Sh}\big(-u(s)\contr b(s)\big)\dd s \right\|_{L^\frac{q}{2}_t L^\frac{p}{2}_x}\\ (2)\qquad &\lesssim \| u\contr b\|_{L^\frac{q}{2}_t L^\frac{p}{2}_x} \\ (3)\qquad &\lesssim \|u\|_{L^q_t L^p_x} \|b\|_{L^q_t L^p_x} \\ &\lesssim \| u\|_{\CU_T} \|b\|_{\CB_T}, \end{align*} where (1) uses $\Mh d=d\Sh$, (2) is Theorem~\ref{Sylvie_disciple} applied to $\Sh$, and (3) is Hölder's inequality. \end{itemize} \end{enumerate} This last estimate concludes our proof. \end{proof} \section{Existence in $\CC_tL^p_x$ spaces}\label{CtLn} In this section we prove the global and local existence of mild solutions for the magnetohydrodynamic system \eqref{mhd1diff}, following closely \cite{Mo21}. The method is roughly the same as in the last section - using Picard's fixed point theorem and maximal regularity - with the main difference being that we rely heavily on the Leibnitz estimate \eqref{Leibniz}, which makes it difficult to generalize our results to low-regularity domains. However, contrary to the $L^q_tL^p_x$ case, we were able to prove the uniqueness of mild solutions of the system \eqref{mhd1diff} in Section \ref{Uniqueness}. Let us start by stating our two theorems: \begin{theorem}[Global existence] \label{thm:mhd1global_2} There exists $\varepsilon>0$ such that for all $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$ with $\|u_0\|_{L^n(\Ri^n,\Lambda^1)}+\|b_0\|_{L^n(\Ri^n,\Lambda^2)}\le \varepsilon$, the system \eqref{mhd1diff} admits a mild solution $u \in \CC([0,\infty[;L^n(\Ri^n,\Lambda^1))$ and $b\in \CC([0,\infty[;L^n(\Ri^n,\Lambda^2))$. \end{theorem} \begin{theorem}[Local existence] \label{thm:mhd1local_2} For all $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$ there exists $T>0$ such that the system \eqref{mhd1diff} admits a mild solution $u \in \CC(\demi{0,T};L^n(\Ri^n,\Lambda^1))$ and $b\in \CC(\demi{0,T};L^n(\Ri^n,\Lambda^2))$. 
\end{theorem} Let $p\in \ouvert{n,2n}$ and $\alpha=1-\frac{n}{p}$, and define the following Banach spaces for $0<T\le +\infty$: \begin{align} \label{eq:UT} \CU_T:=&\bigl\{u\in{\mathscr{C}}(\ouvert{0,T};{\rm{\sf N}}^p(d^*)_{|_{\Lambda^1}}); du\in{\mathscr{C}}(\ouvert{0,T};L^p(\Ri^n,\Lambda^2)) :\\ \nonumber &\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|u(t)\|_{L^p_x}+t^{\frac{1+\alpha}{2}}\|du(t)\|_{L^p_x}\bigr)<\infty\bigr\}, \end{align} endowed with the norm \begin{equation} \label{eq:normUT} \|u\|_{\CU_T}:=\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|u(t)\|_{L^p_x} +t^{\frac{1+\alpha}{2}}\|du(t)\|_{L^p_x}\bigr), \end{equation} and \begin{align} \label{eq:BT} \CB_T:=&\bigl\{b\in{\mathscr{C}}(\ouvert{0,T};{\rm{\sf R}}^p(d)_{|_{\Lambda^2}}); d^*b\in{\mathscr{C}}(\ouvert{0,T};L^p(\Ri^n,\Lambda^1)) :\\ \nonumber &\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|b(t)\|_{L^p_x}+t^{\frac{1+\alpha}{2}}\|d^*b(t)\|_{L^p_x}\bigr)<\infty\bigr\}, \end{align} endowed with the norm \begin{equation} \label{eq:normBT} \|b\|_{\CB_T}:=\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|b(t)\|_{L^p_x} +t^{\frac{1+\alpha}{2}}\|d^*b(t)\|_{L^p_x}\bigr). \end{equation} We split the proof into three lemmas: in Lemma \ref{lem:initcond2} we study the action of the Stokes and Maxwell semi-groups on initial data $u_0\in \Nn^n(d^*)_{|\Lambda^1}$ and $b_0\in \Rr^n(d)_{|\Lambda^2}$. In Lemma \ref{lem_Bilin2}, we prove bilinear estimates on $B_1$, $B_2$ and $B_3$, and in Lemma \ref{lem:UTBTmild} we show that solutions from the working spaces $\CU_T$ and $\CB_T$ are in fact continuous on $\demi{0,T}$ and in $L^n$ in space. \begin{lemma} \label{lem:initcond2} For $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$, we have \begin{enumerate} \item $a_1:t\mapsto e^{-t\Sh}u_0\in\CU_T$, \item $a_2:t\mapsto e^{-t\Mh}b_0\in\CB_T$, \end{enumerate} for all $T>0$. Moreover, for all $\varepsilon>0$, there exists $T>0$ such that \begin{equation} \label{eq:a1a2} \|a_1\|_{\CU_T}+\|a_2\|_{\CB_T}\le \varepsilon. \end{equation} \end{lemma} As in Section \ref{LqLp}, the second lemma gives us estimates for the bilinear operator: \begin{lemma}\label{lem_Bilin2} The bilinear operators $B_1$, $B_2$ and $B_3$ are bounded in the following spaces: \begin{enumerate} \item $B_1:\CU_T\times\CU_T \rightarrow \CU_T$, \item $B_2:\CB_T\times \CB_T \rightarrow \CU_T$, \item $B_3:\CU_T\times \CB_T\rightarrow \CB_T$ \end{enumerate} with norms independent from $T>0$. \end{lemma} Our last lemma gives us additional regularity for mild solutions of \eqref{mhd1diff}: \begin{lemma} \label{lem:UTBTmild} Let $T>0$. Assume that $(u,b)\in {\mathscr{U}}_T\times{\mathscr{B}}_T$ is a mild solution of \eqref{mhd1diff} with initial conditions $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$. Then $u\in {\mathscr{C}}_b([0,T[;{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})$ and $b\in {\mathscr{C}}_b([0,T[;{\rm{\sf R}}^n(d)_{|_{\Lambda^2}})$.
\end{lemma} \begin{proof}[Proof of Theorems~\ref{thm:mhd1global_2} and \ref{thm:mhd1local_2}] The system \begin{align} \label{eq:fixedpoint} u=a_1+B_1(u,u)+B_2(b,b)\quad\mbox{and}\quad b=a_2+B_3(u,b), \quad (u,b)\in{\mathscr{U}}_T \end{align} can be reformulated as \begin{equation} \label{eq:picard} {\boldsymbol{u}}={\boldsymbol{a}}+{\boldsymbol{\cb}}({\boldsymbol{u}},{\boldsymbol{u}}) \end{equation} where ${\boldsymbol{u}}=(u,b)\in {\mathscr{U}}_T\times{\mathscr{B}}_T$, ${\boldsymbol{a}}=(a_1,a_2)$ and ${\boldsymbol{\cb}}({\boldsymbol{u}},{\boldsymbol{v}})=(B_1(u,v)+B_2(b,b'),B_3(u,b'))$ if ${\boldsymbol{u}}=(u,b)$ and ${\boldsymbol{v}}=(v,b')$. On ${\mathscr{U}}_T\times{\mathscr{B}}_T$ we choose the norm $\|(u,b)\|_{{\mathscr{U}}_T\times{\mathscr{B}}_T}:=\|u\|_{{\mathscr{U}}_T}+\|b\|_{{\mathscr{B}}_T}$. As in Section \ref{LqLp}, one can easily check, using Lemma~\ref{lem_Bilin2}, that \[ \|{\boldsymbol{\cb}}({\boldsymbol{u}},{\boldsymbol{v}})\|_{{\mathscr{U}}_T\times{\mathscr{B}}_T} \le C \|{\boldsymbol{u}}\|_{{\mathscr{U}}_T\times{\mathscr{B}}_T} \|{\boldsymbol{v}}\|_{{\mathscr{U}}_T\times{\mathscr{B}}_T} \] where $C$ is a constant independent from $T>0$. We can then apply Picard's fixed point theorem to prove that for $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$, with $T\le \infty$ such that \eqref{eq:a1a2} holds for $\varepsilon=\frac{1}{4C}$, the system \eqref{eq:picard} admits a unique solution ${\boldsymbol{u}}=(u,b)\in {\mathscr{U}}_T\times{\mathscr{B}}_T$. By Lemma~\ref{lem:UTBTmild}, this provides a mild solution $(u,b)\in{\mathscr{C}}_b(\demi{0,T};{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})\times {\mathscr{C}}_b(\demi{0,T};{\rm{\sf R}}^n(d)_{|_{\Lambda^2}})$ of \eqref{mhd1diff}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:initcond2}] Let $\varepsilon>0$ and let $u_0\in \Nn^n(d^*)_{\Lambda^1}$ and $b_0\in \Rr^n(d)_{\Lambda^2}$. By Theorem \ref{thm:HodgeL&S}, the semi-groups $e^{-t\Sh}$ and $e^{-t\Mh}$ are bounded and there exist constants $c_{\alpha,p}^S$ and $c_{\alpha,p}^M$ such that for all $T>0$ \begin{equation}\label{eq:données_petites} \| a_1\|_{\CU_T}+\|a_2\|_{\CB_T}\le c_{\alpha,p}^S\|u_0\|_{L^n_x} +c_{\alpha,p}^M\|b_0\|_{L^n_x}. \end{equation} Hence if $\|u_0\|_{L^n_x}$ and $\|b_0\|_{L^n_x}$ are small enough, the inequality \eqref{eq:a1a2}, namely $\|a_1\|_{\CU_T}+\|a_2\|_{\CB_T}\le \varepsilon$, holds. For any $u_0\in \Nn^n(d^*)_{\Lambda^1}$ and $b_0 \in \Rr^n(d)_{\Lambda^2} $, with arbitrary norms, let $u_0^\e\in \Nn^p(d^*)_{\Lambda^1}$ and $b_0^\e \in \Rr^p(d)_{\Lambda^2} $ be such that \begin{align*} \|u_0-u_0^\e \|_{L^n_x} &\le \e \\ \|b_0-b_0^\e \|_{L^n_x} &\le \e. \end{align*} Let us write $a_1^\e(t)=e^{-t\Sh}u_0^\e$ and $a_2^\e(t)=e^{-t\Mh}b_0^\e$. Then \begin{equation} \|a_1\|_{\CU_T}\le \| a_1-a_1^\e\|_{\CU_T} + \|a_1^\e\|_{\CU_T}. \end{equation} Since $\Sh$ generates a bounded semi-group, $\| a_1-a_1^\e\|_{\CU_T}\le K_{\alpha,p}\, \e$, and $ \|a_1^\e\|_{\CU_T}=\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|e^{-t\Sh}u_0^\e \|_{L^p_x} +t^{\frac{1+\alpha}{2}}\|de^{-t\Sh}u_0^\e\|_{L^p_x}\bigr)\le K_{\alpha,p} T^\frac{\alpha}{2}\|u_0^\e\|_{L^p_x}$. We get the same estimates for $b_0$, and combining them we get \begin{equation*} \| a_1\|_{\CU_T}+\|a_2 \|_{\CB_T}\le K_{\alpha,p}\left( T^\frac{\alpha}{2} (\|u_0^\e\|_{L^p_x}+\|b_0^\e\|_{L^p_x})+\e\right).
\end{equation*} Choosing $T$ small enough, such that $T^\frac{\alpha}{2} (\|u_0^\e\|_{L^p_x}+\|b_0^\e\|_{L^p_x})\le \e$, we get \begin{equation} \| a_1\|_{\CU_T}+\|a_2 \|_{\CB_T}\le 2K_{\alpha,p}\, \e. \end{equation} \end{proof} The proof of Lemma~\ref{lem_Bilin2} proceeds similarly to the proof of Lemma~\ref{lem_Bilin1}, except for the third estimate $B_3$. \begin{proof}[Proof of Lemma~\ref{lem_Bilin2}] Recall that $\alpha=1-\frac{n}{p}$. \begin{enumerate} \item Let $\theta=\frac{1-\alpha}{2}$. Then $\frac{2}{p}-\frac{2\theta}{n}=\frac{1}{p}$, so the Sobolev inclusion $W^{2\theta,\frac{p}{2}}\hookrightarrow L^p$ holds. For $u,v\in\CU_T$, by definition of $\CU_T$, $s\mapsto s^{\frac{1}{2}+\alpha}u(s)\lrcorner\,dv(s) \in {\mathscr{C}}_b(\demi{0,T};L^{\frac{p}{2}}(\Ri^n,\Lambda^1)$ with bounded $L^\infty$-norm in time. The Leray projector $\Leray$ is bounded from $L^{\frac{p}{2}}(\Ri^n,\Lambda^1)$ to ${\rm{\sf N}}^{\frac{p}{2}}(d^*)_{|_{\Lambda^1}}$, and for $\theta>0$ the operator $\Sh^{-\theta}$ is bounded from $W^{2\theta,\frac{p}{2}}$ to $L^{\frac{p}{2}}$. Let $t\in \ouvert{0,T}$. Then we get: \begin{align*} \left\|B_1(u,v)(t)\right\|_{L^p_x}&=\left\| \int_0^t s^{-\alpha-\frac{1}{2}}\Sh^\theta e^{-(t-s)\Sh} \Sh^{-\theta}\Leray \left(-s^{\frac{\alpha}{2}}u(s)\contr s^{\frac{\alpha+1}{2}}dv(s)\right)\dd s \right\|_{L^p_x} \\ (1)\qquad &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}}\left\|\Sh^\theta e^{-(t-s)\Sh}\right\|_{L^p\rightarrow L^p} \left\| \Sh^{-\theta}\Leray \left(-s^{\frac{\alpha}{2}}u(s)\contr s^{\frac{\alpha+1}{2}}dv(s)\right)\right\|_{L^p_x} \dd s \\ (2)\qquad &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}}(t-s)^{-\theta} s^\frac{\alpha}{2} \|u(s)\|_{L^p_x} \ s^\frac{\alpha+1}{2} \|dv(s)\|_{L^\frac{p}{2}_x} \dd s \\ (3)\qquad &\lesssim \left(\int_0^t s^{-\alpha-\frac{1}{2}}(t-s)^{-\theta} \dd s \right) \|u\|_{\CU_T}\|v\|_{\CU_T} \\ (4)\qquad &\lesssim t^{-\alpha-\frac{1}{2}-\theta+1}\int^1_0 \sigma^{-\alpha-\frac{1}{2}}(1-\sigma)^{-\theta} \dd \sigma \|u\|_{\CU_T}\|v\|_{\CU_T} \\ (5)\qquad &\less t^{-\frac{\alpha}{2}}\|u\|_{\CU_T}\|v\|_{\CU_T}, \end{align*} where (1) uses the operator norm of $S^\theta e^{-(t-s)\Sh}$, (2) uses successively the Sobolev injection $W^{2\theta,\frac{p}{2}}\hookrightarrow L^p$, the continuity of $S^{-\theta}$ from $W^{2\theta,\frac{p}{2}}$ to $L^{\frac{p}{2}}$, the continuity of the Leray projector $\Leray$ and the Hölder inequality - the same steps as for Lemma~\ref{lem_Bilin1}. (3) uses simply the definition of the $\CU_T$ norm and (4) and (5) are straightforward integral computations - since $n<p<2n$, both $\alpha+\frac{1}{2}$ and $\theta$ are strictly lower than $1$. This gives us our first estimate $\sup_{0<t<T}\bigl(t^{\frac{\alpha}{2}}\|B_1(u,v)(t)\|_{L^p_x}\bigl)<+\infty$. The second estimate proceeds similarly: taking $\theta=\frac{n}{2p}=\frac{1-\alpha}{2}$ as before, we get \begin{align*} \left\|dB_1(u,v)(t)\right\|_{L^p_x}&=\left\| \int_0^t s^{-\alpha-\frac{1}{2}}d \Sh^\theta e^{-(t-s)\Sh} \Sh^{-\theta}\Leray \left(-s^{\frac{\alpha}{2}}u(s)\contr s^{\frac{\alpha+1}{2}}dv(s)\right)\dd s \right\|_{L^p_x} \\ &\lesssim \left(\int_0^t s^{-\alpha-\frac{1}{2}}(t-s)^{-\theta-\frac{1}{2}} \dd s \right) \|u\|_{\CU_T}\|v\|_{\CU_T} \\ &\lesssim t^{-\frac{1+\alpha}{2}}\|u\|_{\CU_T}\|v\|_{\CU_T}, \end{align*} with a multiplicative constant independent from $T$. This gives us our second estimate $\sup_{0<t<T}\bigl(t^{\frac{1+\alpha}{2}}\|dB_1(u,v)(t)\|_{L^p_x}\bigl)<+\infty$ \item As for Lemma~\ref{lem_Bilin1}, the proof of point 2. 
proceeds exactly as in the previous point. \item Let $u\in \CU_T$ and $b\in \CB_T$, and set again $\theta=\frac{n}{2p}=\frac{1-\alpha}{2}$. Taking the $L^p$ norm of $B_3(u,b)(t)$ now yields \begin{align*} \|B_3(u,b)(t)\|_{L^p_x}&=\left\| \int_0^t s^{-\alpha} \Mh^\theta e^{-(t-s) \Mh}s^{\alpha}\Mh^{-\theta}\Bigl(-d\bigl(u(s)\contr b(s)\bigr)\Bigr)\,{\rm d}s \right\|_{L^p_x} \\ &\lesssim \int_0^t s^{-\alpha} \left\| d \Sh^\theta e^{-(t-s)\Sh} \right\|_{L^p\rightarrow L^p} \left\|\Sh^{-\theta}\bigl(s^\frac{\alpha}{2}u(s)\contr s^\frac{\alpha}{2}b(s)\bigr)\right\|_{L^p_x} \dd s \\ &\lesssim \left(\int^t_0 s^{-\alpha}(t-s)^{-\theta-\frac{1}{2}} {\rm d}s\right) \|u\|_{\CU_T}\|b\|_{\CB_T} \\ &\lesssim t^{-\frac{\alpha}{2}} \|u\|_{\CU_T}\|b\|_{\CB_T}. \end{align*} For the last term $d^*B_3(u,v)$ we use the Leibniz inequality \eqref{Leibniz} to get \begin{equation} \left\|d\bigl(u(s)\lrcorner\,b(s)\bigr)\right\|_{\frac{p}{2}}\less \|u(s)\|_{L^p_x}\|d^*b(s)\|_{L^p_x} + \|b(s)\|_{L^p_x}\|du(s)\|_{L^p_x}. \end{equation} We can now use this estimate to compute $\|d^*B_3(u,b)(t)\|_{L^p_x}$, using the same methods as before: \begin{align*} \|d^*B_3(u,b)(t)\|_{L^p_x}&=\left\| \int_0^t s^{-\alpha-\frac{1}{2}} d^* \Mh^\theta e^{-(t-s) \Mh}s^{\alpha+\frac{1}{2}} \Mh^{-\theta}\Bigl(-d\bigl(u(s)\lrcorner\,b(s)\bigr)\Bigr)\,{\rm d}s \right\|_{L^p_x} \\ &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}} \left\|d^* \Mh^\theta e^{-(t-s) \Mh}\right\|_{L^p\rightarrow L^p} s^{\alpha+\frac{1}{2}} \left\| \Mh^{-\theta}\Bigl(-d\bigl(u(s)\lrcorner\,b(s)\bigr)\Bigr)\right\|_{L^p_x}\,{\rm d}s \\ &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}} (t-s)^{-\theta-\frac{1}{2}} s^{\alpha+\frac{1}{2}}\left\|\Mh^{-\theta}\Bigl(-d\bigl(u(s)\lrcorner\,b(s)\bigr)\Bigr)\right\|_{W^{2\theta,\frac{p}{2}}}\,{\rm d}s \\ &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}} (t-s)^{-\theta-\frac{1}{2}} s^{\alpha+\frac{1}{2}} \left\|-d\bigl(u(s)\lrcorner\,b(s)\bigr)\right\|_{\frac{p}{2}}\,{\rm d}s \\ &\lesssim \int_0^t s^{-\alpha-\frac{1}{2}} (t-s)^{-\theta-\frac{1}{2}} s^{\alpha+\frac{1}{2}} \left(\|u(s)\|_{L^p_x}\|d^*b(s)\|_{L^p_x} + \|b(s)\|_{L^p_x}\|du(s)\|_{L^p_x}\right) \,{\rm d}s \\ &\lesssim \left(\int^t_0 s^{-\alpha-\frac{1}{2}}(t-s)^{-\theta-\frac{1}{2}} {\rm d}s\right) \|u\|_{\CU_T}\|b\|_{\CB_T} \\ &\lesssim t^{-\frac{1+\alpha}{2}}\|u\|_{\CU_T}\|b\|_{\CB_T} \end{align*} \end{enumerate} Which concludes our proof of Lemma~\ref{lem_Bilin2}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:UTBTmild}] To prove this lemma, first observe that if $u_0\in {\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}}$ and $b_0\in {\rm{\sf R}}^n(d)_{|_{\Lambda^2}}$, then for all $T>0$, $t\mapsto e^{-t\Sh}u_0 \in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})$ and $t\mapsto e^{-t\Mh}b_0\in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf R}}^n(d)_{|_{\Lambda^2}})$. It remains to show that if $u\in{\mathscr{U}}_T$ and $b\in{\mathscr{B}}_T$, then $B_1(u,u)\in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})$, $B_2(b,b)\in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})$ and $B_3(u,b)\in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf R}}^n(d)_{|_{\Lambda^2}})$. To prove boundedness, we use the same method as in the previous lemma: recall that $\alpha=1-\frac{n}{p}$ and chose $\varphi=\frac{n}{p}-\frac{1}{2}$, so that $W^{2\varphi,\frac{p}{2}}\hookrightarrow L^n$. 
Then we can proceed similarly: \begin{align*} \left\|B_1(u,v)(t)\right\|_n&=\left\| \int_0^t s^{-\alpha-\frac{1}{2}}\Sh^\varphi e^{-(t-s)\Sh} \Sh^{-\varphi}\Leray \left(-s^{\frac{\alpha}{2}}u(s)\contr s^{\frac{\alpha+1}{2}}du(s)\right)\dd s \right\|_{L^n_x} \\ &\lesssim \left(\int_0^t s^{-\alpha-\frac{1}{2}}(t-s)^{-\varphi} \dd s \right) \|u\|_{\CU_T}^2 \\ &\lesssim t^{-\alpha-\frac{1}{2}-\varphi+1}\int^1_0 \sigma^{-\alpha-\frac{1}{2}}(1-\sigma)^{-\varphi} \dd \sigma \|u\|_{\CU_T}^2\\ &\lesssim \|u\|_{\CU_T}^2. \end{align*} The continuity in $0$ is then straightforward, and the terms $B_2$ and $B_3$ can be treated similarly. \end{proof} \section{Uniqueness}\label{Uniqueness} \begin{theorem}\label{thm:uniqueness} Let $T\in [0,\infty]$ and assume there exist two solutions $(u_i,b_i)$, $i=1,2$ of \eqref{mhd1diff} with the same initial data $(u_0,b_0)$, and such that \begin{align*} d u_i &\in{\mathscr{C}}_b(\demi{0,T};L^\frac{n}{2}(\Ri^n,\Lambda^2)) \\ d^* b_i &\in{\mathscr{C}}_b(\demi{0,T};L^\frac{n}{2}(\Ri^n,\Lambda^1)). \end{align*} Then $(u_1,b_1)=(u_2,b_2)$. \end{theorem} \begin{remark} The condition $(du,d^*b)\in {\mathscr{C}}_b(\demi{0,T};L^{\frac{n}{2}}(\Ri^n,\Lambda^2))\times {\mathscr{C}}_b(\demi{0,T};L^{\frac{n}{2}}(\Ri^n,\Lambda^1)))$ in fact implies $(u,b)\in {\mathscr{C}}_b(\demi{0,T};{\rm{\sf N}}^n(d^*)_{|_{\Lambda^1}})\times {\mathscr{C}}_b(\demi{0,T};{\rm{\sf R}}^n(d)_{|_{\Lambda^2}})$. \end{remark} \begin{proof} Assume that there exists $t^*\in \demi{0,\infty}$ such that $(u_1,b_1)=(u_2,b_2)$ on $[0,t^*]$. We write $(u_i,b_i)(t^*,\cdot)=(u_*)$. Let $u=u_1-u_2$ and $b=b_1-b_2$. For $i=1,2$, since $(u_i,b_i)$ is a solution of $\eqref{mhd1diff}$, we have \begin{align*} u_i &= a_1 + B_1(u_i,u_i) + B_2(b_i,b_i) \\ b_i &= a_2 + B_3(u_i,b_i). \end{align*} Hence \begin{align} u =& B_1(u,u_1)+B_1(u_2,u)+B_2(b,b_1)+B_2(b_2,b) \\ b =& B_3(u,b_1)+B_3(u_2,b)\,. \end{align} Let $\e>0$. let $(u_i^\e, b_i^\e)$ be such that $u_i^\e$ and $b_i^\e$ are in $\CC^2[0,T];\CS(\Ri^n))$ with \begin{align} &\|d(u_i-u_i^\e)\|_{L^\infty_t(L^\frac{n}{2}_x)} \le \e \label{du_epsilon}\\ &\|d(b_i-b_i^\e)\|_{L^\infty_t(L^\frac{n}{2}_x)} \le \e \, . \label{db_epsilon} \end{align} Note that this in particular implies \begin{align} &\|u_i-u_i^\e \|_{L^\infty_t(L^n_x)} \le \e \label{u_epsilon} \\ &\|b_i-b_i^\e \|_{L^\infty_t(L^n_x)} \le \e. \label{b_epsilon} \end{align} We want to prove that there exists some $r>1$ and $\tau>0$ such that \begin{align} \|du\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)} &\le K_{r,n,u_i,b_i}\, \e \left( \|du\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)}+\|d^*b\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)} \right)\\ \|d^*b\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)} &\le K_{r,n,u_i,b_i}\, \e \left( \|d^*b\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)}+\|du\|_{L^r([t^*,t^*+\tau],L^\frac{n}{2}_x)}\right) \end{align} Let $\tau>0$ and $t\in [t^*,t^*+\tau]$. \begin{enumerate} \item Let us look at $dB_1(u,u_1)$ first. We can write $dB_1(u,u_1)= dB_1(u,u_1^\e)+ dB_1(u,u_1-u_1^\e)$. \begin{itemize} \item Let us begin with $dB_1(u,u_1-u_1^\e)$. Since $\Leray$ is the projection on $\Nn(d^*)$, \begin{equation*} dB_1(u,u_1-u_1^\e)(t)= \int_{t^*}^t \Dh e^{-(t-t^*-s)\Sh} \Leray\big(u(s)\contr d(u_1-u_1^\e)(s)\big) \dd s = \Dh B_1(u,u_1-u_1^\e)(t). \end{equation*} Using the fact that $\|\Dh f\|_r \sim \|\Sh^\frac{1}{2}f\|_r$ for all $r\in ]1,\infty[$, we can estimate $\Sh^\frac{1}{2}B_1(u,u_1-u_1^\e)$ instead of $dB_1(u,u_1-u_1^\e)$. 
Using the maximal regularity Theorem~\ref{Sylvie_disciple} we get: \begin{equation*} \int_{t^*}^t \Sh^\frac{1}{2} e^{-(t-s)\Sh} \Leray\big(u(s)\contr d(u_1-u_1^\e)(s)\big) \dd s=\int_{t^*}^t \Sh e^{-(t-s)\Sh} \Sh^{-\frac{1}{2}}\Leray\big(u(s)\contr d(u_1-u_1^\e)(s)\big) \dd s, \end{equation*} so that \begin{align*} \left\|\int_{t^*}^t \Sh^\frac{1}{2} e^{-(t-s)\Sh} \Leray\big(u(s)\contr d(u_1-u_1^\e)(s)\big) \dd s \right\|_{L^r(\ouvert{t^*,t^*+\tau}; L^\frac{n}{2}_x)} &\less_{r,n} \left\|\Sh^{-\frac{1}{2}} \Leray\big(u\contr d(u_1-u_1^\e)\big)\right\|_{L^r_tL^\frac{n}{2}_x} \\ &\less_{r,n} \left\| \Leray\big(u\contr d(u_1-u_1^\e)\big)\right\|_{L^r_t W^{1,\frac{n}{3}}_x} \\ &\less_{r,n} \|u\|_{L^r_tL^n_x} \|d(u_1-u_1^\e)\|_{L^\infty_t L_x^\frac{n}{2}} \\ &\less_{r,n} \e \|du\|_{L^r_t L^\frac{n}{2}_x}. \end{align*} Hence there exists some constant $K_{r,n}$ such that \begin{equation}\label{B11} \|dB_1(u,u_1-u_1^\e)\|_{L^r_tL^\frac{n}{2}_x}\le K_{r,n} \e \|du\|_{L^r_t L^\frac{n}{2}_x}. \end{equation} \item The second term does not require maximal regularity: \begin{align*} \left\| dB_1(u,u_1^\e)(t) \right\|_{\frac{n}{2}}&\less_n \int_{t^*}^t \left\| d e^{-(t-s)\Sh } \right\|_{L^{\frac{n}{2}}\rightarrow L^{\frac{n}{2}}} \left\| \Leray \big( u(s)\contr du_1^\e(s)\big) \right\|_{\frac{n}{2}} \dd s \\ &\less \int_{t^*}^t \frac{1}{\sqrt{t-s}} \|u(s)\|_n \| du_1^\e(s) \|_n \dd s\, . \end{align*} Using the convolution injection $L^1\star L^r\hookrightarrow L^r$ we get: \begin{equation}\label{B12} \left\| dB_1(u,u_1^\e)\right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})}\less_{r,n} 2\sqrt{\tau}\, \|d u\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \|d u_1^\e \|_{L^\infty(L^n_x)}. \end{equation} \end{itemize} Now $\|d u_1^\e \|_{L^\infty(L^n)}$ is well defined but not necessarily bounded as $\e$ goes to $0$. However, we can always pick $\tau$ small enough to ensure that $\sqrt{\tau} \|d u_1^\e \|_{L^\infty(L^n)}\le \e$. Therefore, combining estimates \eqref{B11} and \eqref{B12}, there exists a constant $K_{r,n,u_i}$ such that \begin{equation}\label{B1_uniqueness} \left\| dB_1(u,u_1) \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \le K_{r,n,u_i} \e \|d u\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})}. \end{equation} \item Let us write $dB_1(u_2,u)=dB_1(u_2-u_2^\e,u)+dB_1(u_2^\e,u)$. Then by maximal regularity we get: \begin{align*} \|dB_1(u_2-u_2^\e,u)\|_{L^r_t L^\frac{n}{2}}&\less_{r,n} \left\|\Sh^{-\frac{1}{2}} \Leray\big((u_2-u_2^\e)\contr du\big)\right\|_{L^r_tL^\frac{n}{2}_x} \\ &\less_{r,n} \|du\|_{L^r_tL^\frac{n}{2}_x} \|u_2-u_2^\e\|_{L^\infty_t L^n} \\ &\less_{r,n} \e \|du\|_{L^r_t L^\frac{n}{2}_x}. \end{align*} Besides, \begin{align*} \left\| dB_1(u_2^\e,u)(t) \right\|_{\frac{n}{2}}&\less_{n} \int_{t^*}^t \left\| d e^{-(t-s)\Sh } \right\|_{L^{\frac{n}{2}}\rightarrow L^{\frac{n}{2}}} \left\| \Leray \big( u_2^\e(s)\contr du(s)\big) \right\|_{\frac{n}{2}} \dd s \\ &\less_n \int_{t^*}^t \frac{1}{\sqrt{t-s}} \|u_2^\e(s)\|_\infty \| du(s) \|_\frac{n}{2} \dd s \, , \end{align*} and by convolution we get \begin{equation} \left\| dB_1(u_2^\e,u) \right\|_{L^r_tL^\frac{n}{2}_x}\less_{r,n} \sqrt{\tau} \|u_2^\e\|_{L^\infty_tL^\infty_x} \| du \|_{L^r_tL^\frac{n}{2}_x}. \end{equation} Setting $\tau$ such that $\sqrt{\tau} \|u_2^\e\|_{L^\infty_tL^\infty_x}\le \e$, we finally get \begin{equation} \left\| dB_1(u_2,u) \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \le K_{r,n,u_i} \e \|d u\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})}. \end{equation} \item The next terms $B_2(b,b_1)$ and $B_2(b_2,b)$ are treated in the exact same way.
\item Recall that $b=B_3(u,b_1)+B_3(u_2,b)$. We start by writing $d^*B_3(u,b_1)=d^*B_3(u,b_1-b_1^\e)+d^*B_3(u,b_1^\e)$. \begin{itemize} \item Let us recall from Section \ref{LqLp} that $d^*e^{-(t-s)\Mh} d=\Sh e^{-(t-s)\Sh}$. Using the maximal regularity Theorem~\ref{Sylvie_disciple} we then get: \begin{equation}\label{UD1} \| d^*B_3(u,b_1-b_1^\e) \|_{L^r_tL^\frac{n}{2}_x} \less_{r,n} \| u\contr (b_1-b_1^\e)\|_{L^r_tL^\frac{n}{2}_x} \less_{r,n} \|u\|_{L^r_tL^n_x} \|(b_1-b_1^\e)\|_{L^\infty_tL^n_x} \less_{r,n} \e \|du\|_{L^r_tL^\frac{n}{2}_x}. \end{equation} \item Using the Leibniz rule \eqref{Leibniz}, we get: \begin{align*} \|d^*B_3(u,b_1^\e)(t)\|_\frac{n}{2} &\less_n \int_{t^*}^t \frac{1}{\sqrt{t-s}} \|d\bigl(u(s)\contr b_1^\e(s)\bigr)\|_{\frac{n}{2}}\,\dd s\\ &\less_n \int_{t^*}^t \frac{1}{\sqrt{t-s}} \left(\|u(s)\|_n\|db_1^\e(s)\|_n+\|du(s)\|_{\frac{n}{2}}\|b_1^\e(s)\|_\infty\right)\dd s, \end{align*} and by convolution we conclude that \begin{equation}\label{U=2} \|d^*B_3(u,b_1^\e)\|_{L^r_tL^\frac{n}{2}_x} \less_{r,n} \sqrt{\tau} \left(\|b_1^\e\|_{L^\infty_tL^\infty_x}+\|db_1^\e\|_{L^\infty_tL^n_x}\right) \|du\|_{L^r_tL^\frac{n}{2}_x}. \end{equation} \end{itemize} Choosing $\tau$ such that $\sqrt{\tau} \left(\|b_1^\e\|_{L^\infty_tL^\infty_x}+\|db_1^\e\|_{L^\infty_tL^n_x}\right)\le \e$, we finally get \begin{equation} \|d^*B_3(u,b_1)\|_{L^r_tL^\frac{n}{2}_x} \le K_{r,n,u_i,b_i}\, \e \|du\|_{L^r_tL^\frac{n}{2}_x}. \end{equation} A similar computation shows that \begin{equation} \|d^*B_3(u_2,b)\|_{L^r_tL^\frac{n}{2}_x} \le K_{r,n,u_i,b_i}\, \e \|d^*b\|_{L^r_tL^\frac{n}{2}_x}, \end{equation} and with this last estimate we have proven that \begin{align} \left\| du \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} &\le K_{r,n,u_i,b_i}\, \e \left( \|d u\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})}+ \|d^* b\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \right)\label{contraction}\\ \left\| d^* b \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} &\le K_{r,n,u_i,b_i} \e \left(\|d u\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} + \|d^* b\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \right), \nonumber \end{align} where $K_{r,n,u_i,b_i}$ is a constant independent of $u$, and $\e$ can be chosen arbitrarily small. Then, letting $\e$ be such that $K_{r,n,u_i,b_i}\e \le \frac{1}{4}$, we get \begin{equation*} \left\| du \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} + \left\| d^* b \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} \le \frac{1}{2} \left( \left\| du \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})} + \left\| d^* b \right\|_{L^r([t^*,t^*+\tau[;L^\frac{n}{2})}\right), \end{equation*} which proves that $du$ and $d^*b$ (and hence $u$ and $b$) are equal to $0$ on $[t^*,t^*+\tau[$. Let \begin{equation*} I=\left\{t^*\in[0,T[ \,:\, \text{the system \eqref{mhd1diff} has a unique solution on } [0,t^*[\,\right\}. \end{equation*} Then $I$ is nonempty and open by the above argument, and it is closed by continuity. Thus $I=[0,T[$ by connectedness. \end{enumerate} \end{proof}
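\begin{remark} For the reader's convenience, we record the elementary scaling identity behind the time integrals appearing in the bilinear estimates above: whenever $a<1$ and $b<1$, the change of variables $s=t\sigma$ gives
\begin{equation*}
\int_0^t s^{-a}(t-s)^{-b}\,{\rm d}s = t^{1-a-b}\int_0^1 \sigma^{-a}(1-\sigma)^{-b}\,{\rm d}\sigma = \mathrm{B}(1-a,1-b)\, t^{1-a-b},
\end{equation*}
where $\mathrm{B}$ denotes the Euler Beta function. For instance, taking $a=\alpha$ and $b=\theta+\frac{1}{2}$ with $\theta=\frac{1-\alpha}{2}$ yields $1-a-b=-\frac{\alpha}{2}$, which is how the bound $\lesssim t^{-\frac{\alpha}{2}}$ for $\|B_3(u,b)(t)\|_{L^p_x}$ is obtained. \end{remark}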
\end{document}
2205.04019v1
http://arxiv.org/abs/2205.04019v1
Wiener filters on graphs and distributed polynomial approximation algorithms
\documentclass[onecolumn, journal]{IEEEtran} \usepackage{rotating} \usepackage{amsmath} \usepackage{graphicx,amssymb} \usepackage{algorithm} \usepackage{algorithmic} \usepackage{amsthm,color} \usepackage{diagbox} \usepackage{multirow} \usepackage{subcaption} \usepackage{makecell} \usepackage{hyperref} \usepackage{soul} \usepackage{mdframed} \usepackage{algorithm} \usepackage{makecell} \usepackage[affil-it]{authblk} \makeatletter \renewcommand\thealgorithm{\thesection.\arabic{algorithm}} \@addtoreset{algorithm}{section} \makeatother \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{proposition}[thm]{Proposition} \newtheorem{definition}[thm]{Definition} \newtheorem{corollary}[thm]{Corollary} \newtheorem{problem}[thm]{Problem} \newtheorem{exercises}[thm]{Exercises} \newtheorem{exercise}[thm]{Exercise} \newtheorem{remark}[thm]{Remark} \newtheorem{claim}{Claim} \newtheorem{example}[thm]{Example} \newtheorem{question}[thm]{Question} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{assumption}[thm]{Assumption} \def\Zd{{\ZZ}^d} \def\Rd{{\RR}^d} \def\Cd{{\CC}^d} \renewcommand{\theequation} {\thesection.\arabic{equation}} \def\C{ {\mathbb C} } \def\R{ {\mathbb R} } \def\Z{ {\mathbb Z} } \def\N{ {\mathbb N} } \numberwithin{equation}{section} \newcommand{\cc}{{\bf \color{blue}CC}: \textcolor{red}} \renewcommand{\thesection}{\arabic{section}} \setstcolor{red} \begin{document} \title {Wiener filters on graphs and distributed polynomial approximation algorithms} \author{Cong Zheng, Cheng Cheng, and Qiyu Sun \thanks{Zheng and Sun are with the Department of Mathematics, University of Central Florida, Orlando, Florida 32816; Cheng is with the School of Mathematics, Sun Yat-sen University, Guangzhou, Guangdong 510275, China. Emails: [email protected]; [email protected]; [email protected]. This work is partially supported by the National Science Foundation (DMS-1816313), National Nature Science Foundation of China (12171490) and Guangdong Province Nature Science Foundation (2022A1515011060) } } \maketitle \begin{abstract} In this paper, we consider Wiener filters to reconstruct deterministic and (wide-band) stationary graph signals from their observations corrupted by random noises, and we propose distributed algorithms to implement Wiener filters and inverse filters on networks in which agents are equipped with a data processing subsystem for limited data storage and computation power, and with a one-hop communication subsystem for direct data exchange only with their adjacent agents. The proposed distributed polynomial approximation algorithm is an exponentially convergent quasi-Newton method based on Jacobi polynomial approximation and Chebyshev interpolation polynomial approximation to analytic functions on a cube. Our numerical simulations show that the Wiener filtering procedure performs better on denoising (wide-band) stationary signals than the Tikhonov regularization approach does, and that the proposed polynomial approximation algorithms converge faster than the Chebyshev polynomial approximation algorithm and the gradient descent algorithm do in the implementation of an inverse filtering procedure associated with a polynomial filter of commutative graph shifts.
\end{abstract} \vskip-1mm {\bf Keywords:} {Wiener filter, inverse filter, polynomial filter, stationary graph signals, distributed algorithm, quasi-Newton method, gradient descent algorithm} \section{Introduction} Massive data sets on networks are collected in numerous applications, such as (wireless) sensor networks, smart grids and social networks \cite{Wass94}-\cite{Cheng19}. Graph signal processing provides an innovative framework to extract knowledge from (noisy) data sets residing on networks \cite{sandryhaila13}-\cite{ncjs22}. Graphs ${\mathcal G} = (V, E)$ are widely used to model the complicated topological structure of networks in engineering applications, where a vertex in $V$ may represent an agent of the network and an edge in $E$ between vertices could indicate that the corresponding agents have a peer-to-peer communication link between them and/or that they are within a certain range of each other in space. In this paper, we consider the distributed implementation of the Wiener filtering procedure and the inverse filtering procedure on simple graphs (i.e., unweighted undirected graphs containing no loops or multiple edges) of large order $N\ge 1$. Many data sets on a network can be considered as signals ${\bf x}=(x_i)_{i\in V}$ residing on the graph ${\mathcal G}$, where $x_i$ represents the real/complex/vector-valued data at the vertex/agent $i\in V$. In this paper, the data $x_i$ at each vertex $i\in V$ is assumed to be real-valued. The filtering procedure for signals on a network is a linear transformation \begin{equation}\label{filtering.def} {\bf x}\longmapsto {\bf y}={\bf H}{\bf x}, \end{equation} which maps a graph signal ${\bf x}$ to another graph signal ${\bf y}={\bf H}{\bf x}$, and ${\bf H}=(H(i,j))_{ i,j\in V}$ is known as a {\em graph filter}. In this paper, we assume that graph filters are real-valued. We say that a matrix ${\bf S}=(S(i,j))_{i,j\in V}$ on the graph ${\mathcal G}=(V, E)$ is a {\em graph shift} if $S(i,j)\ne 0$ only if either $j=i$ or $(i,j)\in E$. The graph shift is a basic concept in graph signal processing, and illustrative examples are the adjacency matrix ${\bf A}$, the Laplacian matrix ${\bf L}={\bf D}-{\bf A}$, and the symmetrically normalized Laplacian ${\bf L}^{\rm sym}:={\bf D}^{-1/2} {\bf L}{\bf D}^{-1/2}$, where ${\bf D}$ is the degree matrix of the graph \cite{sandryhaila13}, \cite{ncjs22}-\cite{jiang19}. In \cite{ncjs22}, the notion of multiple commutative graph shifts ${\bf S}_1, \dots, {\bf S}_d$ is introduced, \begin{equation}\label{commutativityS} {\bf S}_k{\bf S}_{k'}={\bf S}_{k'}{\bf S}_k,\ 1\le k,k'\le d, \end{equation} and some multiple commutative graph shifts on circulant/Cayley graphs and on Cartesian product graphs are constructed with physical interpretations. An important property of commutative graph shifts ${\mathbf S}_1, \ldots, {\mathbf S}_d$ is that they can be upper-triangularized simultaneously, \begin{equation} \label{upperdiagonalization} \widehat{\bf S}_k={\bf U}^{\rm H}{\bf S}_k{\bf U},\ 1\le k\le d, \end{equation} where ${\bf U}$ is a unitary matrix, ${\bf U}^{\rm H}$ is the Hermitian transpose of the matrix ${\bf U}$, and $\widehat{\bf S}_k=(\widehat S_{k}(i,j))_{1\le i, j\le N}, 1\le k\le d$, are upper triangular matrices \cite[Theorem 2.3.3]{horn1990matrix}.
As $\widehat{S}_k(i, i), 1\le i\le N$, are eigenvalues of ${\bf S}_k, 1\le k\le d$, we call the set \begin{equation}\label{jointspectrum.def} \Lambda=\big\{\pmb \lambda_i=\big(\widehat{S}_1(i,i), ..., \widehat{ S}_d(i,i)\big), 1\le i\le N\big\}\end{equation} the {\em joint spectrum} of ${\bf S}_1, \ldots, {\bf S}_d$ \cite{ncjs22}. For the case that the graph shifts ${\bf S}_1, \ldots, {\bf S}_d$ are symmetric, one may verify that their joint spectrum is contained in some cube, \begin{equation}\label{jointspectralcubic.def} \Lambda\subset [{\pmb \mu}, {\pmb \nu}]:=[\mu_1,\nu_1] \times \cdots \times[\mu_d,\nu_d]\subset {\mathbb R}^d. \end{equation} A popular family of graph filters contains {\em polynomial graph filters} of commutative graph shifts ${\bf S}_1, \dots, {\bf S}_d$, \begin{equation}\label{MultiShiftPolynomial} {\bf H}=h({\bf S}_1, \ldots, {\bf S}_d)=\sum_{ l_1=0}^{L_1} \cdots \sum_{ l_d=0}^{L_d} h_{l_1,\dots,l_d}{\bf S}_1^{l_1}\cdots {\bf S}_d^{l_d}, \end{equation} where $h$ is a multivariate polynomial in variables $t_1,\cdots,t_d$, $$h(t_1, \ldots, t_d)=\sum_{ l_1=0}^{L_1} \cdots \sum_{ l_d=0}^{L_d} h_{l_1,\dots,l_d} t_1^{l_1} \ldots t_d^{l_d}$$ \cite{ncjs22, segarra17}, \cite{Leus17}-\cite{David2019}. Commutative graph shifts ${\bf S}_1, \dots, {\bf S}_d$ are building blocks for polynomial graph filters, and they play a similar role in graph signal processing to the one-step delays $z_1^{-1}, \ldots, z_d^{-1}$ in multi-dimensional digital signal processing \cite{ncjs22}. For polynomial graph filters in \eqref{MultiShiftPolynomial}, a significant advantage is that the corresponding filtering procedure \eqref{filtering.def} can be implemented at the vertex level when each vertex is equipped with a {\bf one-hop communication subsystem}, i.e., each agent has direct data exchange only with its adjacent agents, see \cite[Algorithms 1 and 2]{ncjs22}. The inverse filtering procedure associated with a polynomial filter has been widely used in denoising, non-subsampled filter banks, signal reconstruction, graph semi-supervised learning and many other applications \cite{jiang19, Leus17, Lu18}-\cite{Emirov19}, \cite{siheng_inpaint15}-\cite{cheng2021}. In Sections \ref{stochasticwienerfilter.section} and \ref{worst-casewienerfilter.section}, we consider the scenario that the filtering procedure \eqref{filtering.def} is associated with a polynomial filter, its inputs ${\bf x}$ are either (wide-band) stationary signals or deterministic signals with finite energy, and its outputs ${\bf y}$ are corrupted by random noises which have mean zero and whose covariance matrix is a polynomial filter of the graph shifts \cite{bi2009}-\cite{yagan2020}. We show that the corresponding stochastic/worst-case Wiener filters are essentially the product of a polynomial filter and the inverse of another polynomial filter, see Theorems \ref{wienerfiltermse.thm}, \ref{widebandwienerfilter.thm} and \ref{wienerfilterworsecase.thm}. Numerical demonstrations in Sections \ref{randomsignal.demo} and \ref{denoisingwideband.demo} indicate that the Wiener filtering procedure has better performance on denoising (wide-band) stationary signals than the conventional Tikhonov regularization approach does \cite{ncjs22, Shi15}.
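To fix ideas, we include a small centralized sketch (ours, for illustration only, and not the distributed implementation of \cite[Algorithms 1 and 2]{ncjs22}) of the filtering procedure \eqref{filtering.def} with a polynomial filter of a single graph shift ${\bf S}$, evaluated by Horner's rule so that only matrix--vector products with ${\bf S}$ are needed:
\begin{verbatim}
# Illustrative sketch: apply y = h(S) x with h(t) = h_0 + h_1 t + ... + h_L t^L,
# using only matrix-vector products with the graph shift S (Horner's rule).
import numpy as np

def apply_polynomial_filter(S, coeffs, x):
    # coeffs = [h_0, h_1, ..., h_L]
    y = coeffs[-1] * x
    for h_l in reversed(coeffs[:-1]):
        y = S @ y + h_l * x
    return y
\end{verbatim}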
Given a polynomial filter ${\bf H}$ of graph shifts, one of the main challenges in the corresponding inverse filtering procedure \begin{equation}\label{inverseprocedure} {\bf y}\longmapsto {\bf x}={\bf H}^{-1}{\bf y} \end{equation} is its distributed implementation, as the inverse filter ${\bf H}^{-1}$ is usually not a polynomial filter of small degree even if ${\bf H}$ is. The last two authors of this paper proposed the following exponentially convergent quasi-Newton method \begin{equation}\label{Approximationalgorithm} {\bf e}^{(m)}= {\bf H}{\bf x}^{(m-1)}-{\bf y}\ {\rm and} \ {\bf x}^{(m)}={\bf x}^{(m-1)}-{\bf G}{\bf e}^{(m)}, \ m\ge 1, \end{equation} with an arbitrary initial ${\bf x}^{(0)}$ to implement the inverse filtering procedure, where the polynomial approximation filter ${\bf G}$ to the inverse ${\bf H}^{-1}$ is so chosen that the spectral radius of ${\bf I}-{\bf G}{\bf H}$ is strictly less than $1$ \cite{ncjs22, Emirov19, cheng2021}. More importantly, each iteration in \eqref{Approximationalgorithm} includes mainly two filtering procedures associated with the polynomial filters ${\bf H}$ and ${\bf G}$. In this paper, the quasi-Newton method \eqref{Approximationalgorithm} is used to implement the Wiener filtering procedure and the inverse filtering procedure associated with a polynomial filter on networks whose agents are equipped with a one-hop communication subsystem, see \eqref{jacobiapproximation.def} and Algorithms \ref{Wiener2.algorithm} and \ref{Wiener1.algorithm}. An important problem not discussed yet is how to select the polynomial approximation filter ${\bf G}$ appropriately for the fast convergence of the quasi-Newton method \eqref{Approximationalgorithm}. The above problem has been well studied when ${\bf H}$ is a polynomial filter of the graph Laplacian (and, more generally, of a single graph shift) \cite{ Leus17, Emirov19, Shi15, sihengTV15, Shuman18, isufi19}. For a polynomial filter ${\bf H}$ of multiple graph shifts, optimal/Chebyshev polynomial approximation filters are introduced in \cite{ncjs22}. The construction of Chebyshev polynomial approximation filters is based on the exponential approximation property of Chebyshev polynomials to the reciprocal of a multivariate polynomial on the cube containing the joint spectrum of multiple graph shifts. Chebyshev polynomials form a special family of Jacobi polynomials. In Section \ref{Jacobiapproximation.section}, based on the exponential approximation property of Jacobi polynomials and Chebyshev interpolation polynomials to analytic functions on a cube, we introduce Jacobi polynomial filters and Chebyshev interpolation polynomial filters to approximate the inverse filter ${\bf H}^{-1}$, and we use the corresponding quasi-Newton algorithm \eqref{jacobiapproximation.def} to implement the inverse filtering procedure \eqref{inverseprocedure}. Numerical experiments in Section \ref{circulantgraph.demo} indicate that the proposed Jacobi polynomial approach with an appropriate selection of parameters and the Chebyshev interpolation polynomial approach have better performance than the Chebyshev polynomial approach and the gradient descent method with optimal step size do \cite{ncjs22,jiang19, Leus17, Waheed18, Shi15, sihengTV15, Shuman18, isufi19}. Notation: Let ${\mathbb Z}_+$ be the set of all nonnegative integers and set $\mathbb{Z}_+^d=\{(n_1, \ldots, n_d), \ n_k\in {\mathbb Z}_+, 1\le k\le d\}$.
Define $\|{\bf x}\|_2=(\sum_{i\in V} |x_i|^2)^{1/2}$ for a graph signal ${\bf x}=(x_i)_{i\in V}$ and $\|{\bf A}\|=\sup_{\|{\bf x}\|_2=1} \|{\bf A}{\bf x}\|_2$ for a graph filter ${\bf A}$. Denote the transpose of a matrix ${\bf A}$ by ${\bf A}^T$ and the trace of a square matrix ${\bf A}$ by ${\rm tr}({\bf A})$. As usual, we use ${\bf O}, {\bf I}, {\bf 0}, {\bf 1}$ to denote the zero matrix, identity matrix, zero vector and vector of all $1$s of appropriate sizes respectively. \section{Preliminaries on Jacobi polynomials and Chebyshev interpolating polynomials} Let $\alpha, \beta>-1$, $[{\pmb \mu}, {\pmb \nu}]=[\mu_1,\nu_1] \times \cdots \times[\mu_d,\nu_d]$ be a cube in ${\mathbb R}^d$ with its volume denoted by $|[{\pmb \mu}, {\pmb \nu}]| $, and let $h$ be a multivariate polynomial satisfying \begin{equation}\label{polynomial.assump} h({\bf t})\neq 0\ \text{ for all }\ {\bf t}\in [{\pmb \mu}, {\pmb \nu}]. \end{equation} In this section, we recall the definitions of multivariate Jacobi polynomials and interpolation polynomials at Chebyshev nodes, and their exponential approximation property to the reciprocal of the polynomial $h$ on the cube $[{\pmb \mu}, {\pmb \nu}]$ \cite{Ismail2009}-\cite{xiang2012}. Our numerical simulations indicate that Jacobi polynomials with appropriate selection of parameters $\alpha$ and $\beta$ and interpolation polynomials at Chebyshev points provide better approximation to the reciprocal of a polynomial on a cube than Chebyshev polynomials do \cite{ncjs22}, see Figure \ref{approximation.fig} and Table \ref{MaxAppErr.tab}. Define standard {\em univariate Jacobi polynomials} $P_n^{(\alpha, \beta)}(t), n=0, 1$ on $[-1, 1]$ by \begin{equation*}\label{jacobipolynomiala.def} P_0^{(\alpha, \beta)}(t)=1, \ P_1^{(\alpha, \beta)}(t)=\frac{\alpha+\beta+2}{2}t+\frac{\alpha-\beta}{2}, \end{equation*} and $P_n^{(\alpha, \beta)}(t), n\ge 2$, by the following three-term recurrence relation, \begin{equation*}\label{jacobipolynomialb.def} P_n^{(\alpha, \beta)}(t)= \big(a_{n, 1}^{(\alpha,\beta)}t- a_{n, 2}^{(\alpha,\beta)}\big)P_{n-1}^{(\alpha, \beta)}(t)-a_{n, 3}^{(\alpha,\beta)}P_{n-2}^{(\alpha, \beta)}(t),\end{equation*} where $$ \begin{aligned} a_{n, 1}^{(\alpha,\beta)}&=\frac{(2n+\alpha+\beta-1)(2n+\alpha+\beta)}{2n(n+\alpha+\beta)},\\ a_{n, 2}^{(\alpha,\beta)}&=\frac{(\beta^2-\alpha^2)(2n+\alpha+\beta-1)}{2n(n+\alpha+\beta)(2n+\alpha+\beta-2)},\\ a_{n, 3}^{(\alpha,\beta)}&=\frac{(n+\alpha-1)(n+\beta-1)(2n+\alpha+\beta)}{n(n+\alpha+\beta)(2n+\alpha+\beta-2)}. \end{aligned} $$ The Jacobi polynomials $P_n^{(\alpha, \beta)}, n\ge 0$, with $\alpha=\beta$ are also known as Gegenbauer polynomials or ultraspherical polynomials. The Legendre polynomials $P_n$, Chebyshev polynomials $T_n$ and Chebyshev polynomial of the second kind $U_n, n\ge 0$, are Jacobi polynomials with $\alpha=\beta=0, -1/2, 1/2$ respectively \cite{Ismail2009, Shen2011}. 
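As an illustration of the three-term recurrence relation above, the following sketch (ours, for illustration only) evaluates the univariate Jacobi polynomials $P_0^{(\alpha, \beta)},\ldots,P_N^{(\alpha, \beta)}$ at points of $[-1, 1]$; the quantities a1, a2, a3 in the code are $a_{n,1}^{(\alpha,\beta)}, a_{n,2}^{(\alpha,\beta)}, a_{n,3}^{(\alpha,\beta)}$:
\begin{verbatim}
# Sketch: evaluate P_0^{(a,b)}(t), ..., P_N^{(a,b)}(t) by the three-term recurrence.
import numpy as np

def jacobi_polynomials(N, a, b, t):
    t = np.asarray(t, dtype=float)
    P = np.zeros((N + 1,) + t.shape)
    P[0] = 1.0
    if N >= 1:
        P[1] = 0.5 * (a + b + 2) * t + 0.5 * (a - b)
    for n in range(2, N + 1):
        c = 2 * n + a + b
        a1 = (c - 1) * c / (2 * n * (n + a + b))
        a2 = (b ** 2 - a ** 2) * (c - 1) / (2 * n * (n + a + b) * (c - 2))
        a3 = (n + a - 1) * (n + b - 1) * c / (n * (n + a + b) * (c - 2))
        P[n] = (a1 * t - a2) * P[n - 1] - a3 * P[n - 2]
    return P
\end{verbatim}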
In order to construct polynomial filters to approximate the inverse of a polynomial filter of multiple graph shifts, we next define multivariate Jacobi polynomials $P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)}, {\bf n}\in\mathbb{Z}_+^d$, and Jacobi weights $w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}$ on the cube $[{\pmb \mu}, {\pmb \nu}]$ by $$P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)}({\bf t})= \prod_{i=1}^d P_{n_i}^{(\alpha, \beta)}\left(\frac{2t_i-\mu_i-\nu_i}{\nu_i-\mu_i}\right)$$ and $$w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}({\bf t})=\prod_{i=1}^d w^{(\alpha, \beta)}\left(\frac{2t_i-\mu_i-\nu_i}{\nu_i-\mu_i}\right), $$ where ${\bf t}=(t_1, \ldots, t_d) \in [{\pmb \mu}, {\pmb \nu}]$, ${\bf n}=(n_1,\cdots, n_d)\in {\mathbb Z}_+^d$, and $w^{(\alpha, \beta)}(t) := (1-t)^{\alpha} (1+t)^{ \beta},\ -1<t<1$. Let $L^2(w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}})$ be the Hilbert space of all square-integrable functions with respect to the Jacobi weight $ w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}$ on $[{\pmb \mu}, {\pmb \nu}]$ and denote its norm by $\|\cdot\|_{2, w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}}$. Following the argument in \cite{Ismail2009, Shen2011, trefethen2013} for univariate Jacobi polynomials, we can show that multivariate Jacobi polynomials $P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)}, {\bf n}\in\mathbb{Z}_+^d$, form a complete orthogonal system in $L^2(w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}})$ with \begin{equation*} \big\|P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)}\big\|_{2, w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}}^2 = 2^{-d} |[{\pmb \mu}, {\pmb \nu}]| \gamma_{\bf n}^{(\alpha, \beta)}, \end{equation*} where $\Gamma(s)=\int_0^\infty t^{s-1} e^{-t} dt, s>0$, is the Gamma function, and for ${\bf n}=(n_1,\cdots, n_d)\in\mathbb{Z}_+^d$, $$\gamma_{\bf n}^{(\alpha, \beta)}=\prod_{i=1}^d \frac{2^{\alpha+\beta}}{2n_i+\alpha+\beta+1}\frac{\Gamma(n_i+\alpha+1)\Gamma(n_i+\beta+1)}{\Gamma(n_i+\alpha+\beta+1)\Gamma(n_i+1)}.$$ For ${\bf n}=(n_1,\cdots, n_d)\in {\mathbb Z}^d_+$, we set $\|{\bf n}\|_\infty=\sup_{1\le i\le d} |n_i|$ and define \begin{equation} \label{Fouriercoefficient.def} c_{\bf n}= \frac{2^d}{|[{\pmb \mu}, {\pmb \nu}]| \gamma_{\bf n}^{(\alpha, \beta)}} \int_{[{\pmb \mu}, {\pmb \nu}]} \frac {P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)} ({\bf t})} {h({\bf t})}w^{(\alpha, \beta)}_{{\pmb \mu}, {\pmb \nu}}({\bf t}) d{\bf t}. \end{equation} As $1/h$ is an analytic function on the cube $[{\pmb \mu}, {\pmb \nu}]$ by \eqref{polynomial.assump}, following the argument in \cite[Theorem 2.2]{xiang2012} we can show that the partial summation \begin{equation} \label{partialsum.def} g_M^{(\alpha, \beta)}({\bf t})=\sum_{\|{\bf n}\|_\infty\le M} c_{\bf n} P_{{\bf n}; {\pmb \mu}, {\pmb \nu}}^{(\alpha, \beta)} ({\bf t}),\ M\ge 0 \end{equation} of its Fourier expansion converges to $1/h$ exponentially in the uniform norm, see \cite[Theorem 8.2]{trefethen2013} for Chebyshev polynomial approximation and \cite[Theorem 2.5]{wang2011} for Legendre polynomial approximation. This together with the boundedness of the polynomial $h$ on the cube $[{\pmb \mu}, {\pmb \nu}]$ implies the existence of positive constants $ D_0\in (0, \infty)$ and $r_0\in (0, 1)$ such that \begin{equation}\label{bN.def} b_M^{(\alpha, \beta)}:=\sup_{{\bf t}\in [{\pmb \mu}, {\pmb \nu}]} |1-g_M^{(\alpha, \beta)}({\bf t}) h({\bf t})|\leq D_0 r_0^M,\ M\geq 0.
\end{equation} Shown in Figure \ref{approximation.fig}, except the figure on the bottom right, are the approximation error $1-h_1(t)g_M^{(\alpha, \beta)}(t), 0\le t\le 2$, where $g_M^{(\alpha, \beta)}, 0\le M\le 4$, are the partial summation in \eqref{partialsum.def} to approximate the reciprocal $1/h_1$ of the univariate polynomial \begin{equation}\label{h1.def} h_1(t) = (9/4-t)(3 + t), \ t\in [0, 2] \end{equation} in \cite[Eqn. (5.4)]{ncjs22}. Presented in Table \ref{MaxAppErr.tab}, except the last row, are the maximal approximation errors measured by $b_M^{(\alpha, \beta)}, 0\le M\le 4$. This demonstrates that Jacobi polynomials have exponential approximation property \eqref{bN.def} and also that with appropriate selection of parameters $\alpha, \beta>-1$, they have better approximation property than Chebyshev polynomials (the Jacobi polynomials with $\alpha=\beta=-1/2$) do, see the figure plotted on the top left of Figure \ref{approximation.fig} and the maximal approximation errors listed in the first row of Table \ref{MaxAppErr.tab}, and also the numerical simulations in Section \ref{circulantgraph.demo}. \begin{figure}[t] \begin{center} \includegraphics[width=43mm, height=30mm] {NHNH.jpg} \includegraphics [width=43mm, height=30mm]{HH.jpg}\\ \includegraphics[width=43mm, height=30mm] {ZZ.jpg} \includegraphics [width=43mm, height=30mm]{OneO.jpg} \\ \includegraphics[width=43mm, height=30mm] {NHH.jpg} \includegraphics [width=43mm, height=30mm]{HNH.jpg}\\ \includegraphics [width=43mm, height=30mm]{ZNH.jpg} \includegraphics [width=43mm, height=30mm]{Chebyint.jpg} \caption{Plotted on the top three rows and the left of bottom row are the approximation error functions $1-h_1(t)g_M^{(\alpha, \beta)}(t), t\in [0, 2], 0\le M\le 4$ for pairs $(\alpha, \beta)=(-1/2, -1/2)$ (top row left), $(1/2, 1/2)$ (top row right), $(0, 0)$ (second row left), $(1, 1)$ (second row right), $(-1/2, 1/2)$ (third row left), $(1/2, -1/2)$ (third row right) and $(0, -1/2)$ (bottom row left). On the bottom row right is the approximation error function $1-h_1(t)C_M(t), t\in [0, 2], 0\le M\le 4$, between the Chebyshev interpolation polynomial $C_M(t)$ and the reciprocal of the polynomial $h_1(t)$.} \label{approximation.fig} \end{center} \end{figure} \begin{table}[t] \renewcommand\arraystretch{1.2} \centering \caption{ Shown in the first seven rows are the maximal approximation error $b_M^{(\alpha, \beta)}, 0\le M\le 4$, of Jacobi polynomial approximations to $1/h_1$ on $[0, 2]$, while in the last row is the maximal approximation error $\tilde b_M, 0\le M\le 4$, of Chebyshev interpolation approximation to $1/h_1$ on $[0, 2]$. 
} \begin{tabular} {|c|c|c|c|c|c|} \hline \backslashbox{$(\alpha, \beta)$}{M}& 0 & 1 & 2 & 3 & 4 \\ \hline (-.5, -.5) & 1.0463 & 0.5837 & 0.2924 & 0.1467 & 0.0728 \\ \hline (.5, .5) & 0.7014 & 0.5904 & 0.3897 & 0.2505 & 0.1517 \\ \hline (0, 0) & 0.7409 & 0.6153 & 0.3667 & 0.2146 & 0.1202 \\ \hline (1, 1) & 0.7140 & 0.5626 & 0.3927 & 0.2686 & 0.1720 \\ \hline (-.5, .5) & 1.8612 & 1.8855 & 1.3522 & 0.8937 & 0.5534 \\ \hline (.5, -.5) & 0.7720 & 0.5603 & 0.3563 & 0.2184 & 0.1289 \\ \hline (0, -.5) & 0.7356 & 0.4760 & 0.2749 & 0.1548 & 0.0850 \\ \hline {ChebyInt} & 0.7500 & 0.4497 & 0.2342 & 0.1186 & 0.0595 \\ \hline \end{tabular} \label{MaxAppErr.tab} \end{table} Another excellent method of approximating the reciprocal of the polynomial $h$ on the cube $[{\pmb \mu}, {\pmb \nu}]$ is polynomial interpolation \begin{equation}\label{Chebyshevinterpolation.def} C_M({\bf t})=\sum_{\|{\bf n}\|_\infty\le M} d_{\bf n}{\bf t}^{\bf n}\end{equation} at rescaled Chebyshev points ${\bf t}_{{\bf j}; {\pmb \mu}, {\pmb \nu} }= ({t}_{j_1, M}, \ldots, {t}_{j_d, M})$, i.e., \begin{equation} \label{chebyshevinterpolation.def} C_M({\bf t}_{{\bf j}; {\pmb \mu}, {\pmb \nu} })= 1/h( {\bf t}_{{\bf j}; {\pmb \mu}, {\pmb \nu} }), \end{equation} where $${ t}_{j_k, M}= \frac{\nu_k+\mu_k}{2}+ \frac{\nu_k-\mu_k}{2} \cos \frac{(j_k-1/2)\pi}{M+1}$$ for $1\le j_k\le M+1, 1\le k\le d$. Recall that the Lebesgue constant for the above polynomial interpolation at rescaled Chebyshev points is of the order $(\ln (M+2))^d$. This together with the exponential convergence of Chebyshev polynomial approximation, see \cite[Theorem 8.2]{trefethen2013} and \cite[Theorem 2.2]{xiang2012}, implies that \begin{equation}\label{exponentialapproximationerror.interpolation} \tilde b_M:=\sup_{{\bf t}\in [{\pmb \mu}, {\pmb \nu}]} |1-h({\bf t})C_M({\bf t})|\le D_1 r_1^M, \ M\ge 0, \end{equation} for some positive constants $D_1\in (0, \infty)$ and $r_1\in (0, 1)$. Shown in the bottom right of Figure \ref{approximation.fig} is our numerical demonstration of the above approximation property of the Chebyshev interpolation polynomial $C_M$ (ChebyInt for abbreviation) to the function $1/h_1$, see the bottom row of Table \ref{MaxAppErr.tab} for the maximal approximation error $\tilde b_M, 0\le M\le 4$, in \eqref{exponentialapproximationerror.interpolation} and also the numerical simulations in Section \ref{circulantgraph.demo}. \section{Polynomial approximation algorithm for inverse filtering} \label{Jacobiapproximation.section} Let ${\mathbf S}_1, \ldots, {\bf S}_d$ be commutative graph shifts whose joint spectrum $\Lambda$ in \eqref{jointspectrum.def} is contained in a cube $[{\pmb \mu}, {\pmb \nu}]$, i.e., \eqref{jointspectralcubic.def} holds. The joint spectrum $\Lambda$ of commutative graph shifts ${\bf S}_1, \ldots, {\bf S}_d$ plays a critical role in \cite{ncjs22} in constructing optimal/Chebyshev polynomial approximations to the inverse of a polynomial filter. In this section, based on the exponential approximation property of Jacobi polynomials and Chebyshev interpolation polynomials to the reciprocal of a nonvanishing multivariate polynomial, we propose an iterative Jacobi polynomial approximation algorithm and a Chebyshev interpolation approximation algorithm to implement the inverse filtering procedure associated with a polynomial graph filter at the vertex level with one-hop communication.
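As a concrete illustration of the interpolation conditions \eqref{chebyshevinterpolation.def} in the univariate case $d=1$, the following sketch (ours, for illustration only) interpolates $1/h_1$ at the rescaled Chebyshev points for $h_1$ in \eqref{h1.def} and evaluates the approximation error $1-h_1 C_M$:
\begin{verbatim}
# Univariate (d = 1) sketch of the Chebyshev interpolation C_M to 1/h on [mu, nu].
import numpy as np

def chebyshev_interpolant(h, mu, nu, M):
    j = np.arange(1, M + 2)
    t = 0.5 * (nu + mu) + 0.5 * (nu - mu) * np.cos((j - 0.5) * np.pi / (M + 1))
    # degree-M polynomial through the M+1 conditions C_M(t_j) = 1 / h(t_j)
    return np.polyfit(t, 1.0 / h(t), M)       # coefficients, highest degree first

h1 = lambda t: (2.25 - t) * (3.0 + t)          # h_1(t) = (9/4 - t)(3 + t) on [0, 2]
coeffs = chebyshev_interpolant(h1, 0.0, 2.0, 4)
err = lambda t: 1.0 - h1(t) * np.polyval(coeffs, t)
\end{verbatim}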
Let $\alpha, \beta>-1$, $h$ be a multivariate polynomial satisfying \eqref{polynomial.assump}, and let $g_M^{(\alpha, \beta)}$ and $C_M, M\ge 0$, be the Jacobi polynomial approximation and Chebyshev interpolation polynomial approximation to $1/h$ in \eqref{partialsum.def} and \eqref{chebyshevinterpolation.def} respectively. Set ${\bf H}=h({\bf S}_1, \ldots, {\bf S}_d)$, ${\bf G}_M^{(\alpha, \beta)}= g_M^{(\alpha, \beta)}({\bf S}_1, \ldots, {\bf S}_d)$ and ${\bf C}_M=C_M({\bf S}_1, \ldots, {\bf S}_d), M\ge 0$. By the spectral assumption \eqref{jointspectralcubic.def}, the spectral radii of ${\bf I}-{\bf G}_M^{(\alpha, \beta)}{\bf H}$ and ${\bf I}-{\bf C}_M{\bf H}$ are bounded by $b_M^{(\alpha, \beta)}$ in \eqref{bN.def} and $\tilde b_M$ in \eqref{exponentialapproximationerror.interpolation} respectively, i.e., \begin{equation} \rho({\bf I}-{\bf G}_M^{(\alpha, \beta)}{\bf H})\leq b^{(\alpha, \beta)}_M\ {\rm and} \ \rho({\bf I}-{\bf C}_M{\bf H})\leq \tilde b_M, \ M\ge 0. \end{equation} Therefore with appropriate selection of the polynomial degree $M$, applying the arguments used in \cite[Theorem 3.1]{ncjs22}, we obtain the exponential convergence of the following iterative algorithm for inverse filtering, \begin{equation} \label{jacobiapproximation.def} \left\{\begin{array}{l} {\bf e}^{(m)} = {\bf H} {\bf x}^{(m-1)} - {\bf y}\\ {\bf x}^{(m)} = {\bf x}^{(m-1)} -{\bf G}_M {\bf e}^{(m)}, \ m\ge 1 \end{array} \right. \end{equation} with arbitrary initials ${\bf x}^{(0)}$, where ${\bf G}_M$ is either ${\bf G}_M^{(\alpha, \beta)}$ or ${\bf C}_M$, and the input ${\bf y}$ of the inverse filtering procedure is obtained via the filtering procedure \eqref{filtering.def}. \begin{thm}\label{exponentialconvergence.thm} Let ${\mathbf S}_1, \ldots, {\bf S}_d$ be commutative graph shifts satisfying \eqref{jointspectralcubic.def}, $h$ be a multivariate polynomial satisfying \eqref{polynomial.assump}, and let $b_M^{(\alpha, \beta)}$ and $\tilde b_M$ be given in \eqref{bN.def} and \eqref{exponentialapproximationerror.interpolation} respectively. If \begin{equation} b_M^{(\alpha, \beta)}<1 \ \ ({\rm resp.} \ \tilde b_M<1),\end{equation} then for any input ${\bf y}$, the sequence ${\bf x}^{(m)}, m\ge 0$, in the iterative algorithm \eqref{jacobiapproximation.def} with ${\bf G}_M={\bf G}_M^{(\alpha, \beta)}$ (resp. ${\bf G}_M={\bf C}_M$) converges to the output ${\bf H}^{-1}{\bf y}$ of the inverse filtering procedure \eqref{inverseprocedure} exponentially. In particular, there exist constants $C\in (0, \infty)$ and $r\in (\rho({\bf I}-{\bf G}_M^{(\alpha, \beta)}{\bf H}), 1)$ (resp. $r\in (\rho({\bf I}-{\bf C}_M{\bf H}), 1)$) such that \begin{equation} \| {\bf x}^{(m)}- {\bf H}^{-1}{\bf y}\|_2 \le C \|{\bf y}\|_2 r^m, \ m\ge 0. \end{equation} \end{thm} We call the algorithm \eqref{jacobiapproximation.def} with ${\bf G}_M={\bf G}_M^{(\alpha, \beta)}$ as {\em Jacobi polynomial approximation algorithm}, JPA$(\alpha, \beta)$ for abbreviation, and the iterative algorithm \eqref{jacobiapproximation.def} with ${\bf G}_M={\bf C}_M$ as {\em Chebyshev interpolation polynomial approximation algorithm}, CIPA for abbreviation. By Theorem \ref{exponentialconvergence.thm}, the exponential convergence rates of the JPA$(\alpha, \beta)$ and CIPA are $\rho({\bf I}-{\bf G}_M^{(\alpha, \beta)} {\bf H})$ and $\rho({\bf I}-{\bf C}_M {\bf H})$ respectively. 
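For intuition, the iteration \eqref{jacobiapproximation.def} may be sketched in centralized form as follows; the names apply_H and apply_G are ours and stand for the filtering procedures with ${\bf H}$ and ${\bf G}_M$, which in the distributed setting are realized by \cite[Algorithms 1 and 2]{ncjs22}:
\begin{verbatim}
# Centralized sketch of x^(m) = x^(m-1) - G_M (H x^(m-1) - y).
import numpy as np

def inverse_filtering(apply_H, apply_G, y, num_iter=20, x0=None):
    x = np.zeros_like(y) if x0 is None else x0.copy()
    for _ in range(num_iter):
        e = apply_H(x) - y          # residual e^(m)
        x = x - apply_G(e)          # quasi-Newton update
    return x                        # approximates H^{-1} y when rho(I - G_M H) < 1
\end{verbatim}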
In addition to the exponential convergence, each iteration in the JPA$(\alpha, \beta)$ and CIPA contains essentially two filtering procedures associated with the polynomial filters ${\bf G}_M$ and ${\bf H}$, and hence it can be implemented at the vertex level with one-hop communication, see \cite[Algorithm 4] {ncjs22}. Therefore the JPA$(\alpha, \beta)$ and CIPA algorithms can be implemented on a network with each agent equipped with limited storage and data processing ability, and a one-hop communication subsystem. More importantly, the memory, computational cost and communication expense for each agent of the network are {\bf independent} of the size of the whole network. \begin{remark} {\rm We remark that the JPA$(\alpha, \beta)$ with $\alpha=\beta=-1/2$ was introduced in \cite{ncjs22} as the iterative Chebyshev polynomial approximation algorithm. For a positive definite polynomial filter ${\bf H}$, replacing the approximation filter ${\bf G}_M$ in the quasi-Newton algorithm \eqref{jacobiapproximation.def} by $\gamma_{\rm opt} {\bf I}$, we obtain the traditional gradient descent method \begin{equation} \label{gd0.def} {\bf x}^{(m)} = {\bf x}^{(m-1)} -\gamma_{\rm opt} ({\bf H} {\bf x}^{(m-1)} - {\bf y}), \ m\ge 1 \end{equation} with the optimal step size $\gamma_{\rm opt}={2}/{(\lambda_{\min}({\bf H})+\lambda_{\max}({\bf H}))}$, where $\lambda_{\max} ({\bf H})$ and $\lambda_{\min} ({\bf H})$ are the maximal and minimal eigenvalues of the matrix ${\bf H}$, respectively \cite{jiang19, Leus17, Waheed18, Shi15, sihengTV15, Shuman18, isufi19}. Numerical comparisons with the JPA$(\alpha, \beta)$ and CIPA algorithms to implement inverse filtering on circulant graphs will be given in Section \ref{circulantgraph.demo}. } \end{remark} \section{Wiener filters for stationary graph signals} \label{stochasticwienerfilter.section} Let ${\mathbf S}_1, \ldots, {\mathbf S}_d$ be real commutative symmetric graph shifts on a simple graph ${\mathcal G}=(V, E)$ of order $N\ge 1$ and assume that their joint spectrum is contained in some cube $[{\pmb \mu}, {\pmb \nu}]$, i.e., \eqref{jointspectralcubic.def} holds. In this section, we consider the scenario that the filtering procedure \eqref{filtering.def} has the filter \begin{subequations}\label{wiener2.eq} \begin{equation} \label{wiener2.eqb} {\bf H}=h({\mathbf S}_1, \ldots, {\bf S}_d) \end{equation} being a polynomial filter of ${\bf S}_1, \ldots, {\bf S}_d$, the inputs ${\bf x}$ are {\em stationary} signals with the correlation matrix \begin{equation} \label{wiener2.eq1} {\bf R}= r({\mathbf S}_1, \ldots, {\mathbf S}_d) \end{equation} being a polynomial of the graph shifts ${\bf S}_1, \ldots, {\mathbf S}_d$ \cite{perraudin17, segarrat2017, yagan2020}, and the outputs \begin{equation}\label{wiener2.eq2} {\bf y}={\bf H} {\bf x}+{\pmb \epsilon}\end{equation} are corrupted by some random noise ${\pmb \epsilon}$ which is independent of the input signal ${\bf x}$, has zero mean, and has covariance matrix ${\bf G}$ being a polynomial of the graph shifts ${\bf S}_1, \ldots, {\mathbf S}_d$, i.e., \begin{equation} \label{wiener2.eq3} {\mathbb E}{\pmb \epsilon}={\bf 0}, \ {\mathbb E}{\pmb \epsilon}{\bf x}^T={\bf 0} \ {\rm and}\ {\bf G}=g({\bf S}_1, \ldots, {\bf S}_d) \end{equation} \end{subequations} for some multivariate polynomial $g$.
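A minimal centralized simulation of the observation model \eqref{wiener2.eq} for a single symmetric shift ${\bf S}$ might read as follows; the function names and the use of an eigendecomposition to draw samples with prescribed second moments are our illustrative choices and are not part of the model:
\begin{verbatim}
# Illustrative sketch: x stationary with correlation R = r(S), noise covariance
# G = g(S), and observation y = H x + eps, where S is a symmetric graph shift.
import numpy as np

def psd_sqrt(A):
    w, V = np.linalg.eigh(A)                    # A symmetric positive semidefinite
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def simulate(S, h, r, g, rng=np.random.default_rng(0)):
    H, R, G = h(S), r(S), g(S)                  # polynomial filters of the shift S
    N = S.shape[0]
    x = psd_sqrt(R) @ rng.standard_normal(N)    # E[x x^T] = R
    eps = psd_sqrt(G) @ rng.standard_normal(N)  # E[eps] = 0, E[eps eps^T] = G
    return x, H @ x + eps
\end{verbatim}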
In this section, we find the optimal reconstruction filter ${\bf W}_{\rm mse}$ with respect to the stochastic mean squared error $F_{{\rm mse},P, {\bf K}}$ in \eqref{mse.objectivefunction}, and we propose a distributed algorithm to implement the stochastic Wiener filtering procedure ${\bf y}\longmapsto {\bf W}_{\rm mse} {\bf y}$ at the vertex level with one-hop communication. In this section, we also consider optimal unbiased reconstruction filters for the scenario that the input signals ${\bf x}$ are {\em wide-band stationary}, i.e., \begin{equation}\label{wiener2.eq4} {\mathbb E}{\bf x}=c {\bf 1} \ \ {\rm and} \ \ {\mathbb E}({\bf x}-{\mathbb E}{\bf x}) ({\bf x}-{\mathbb E}({\bf x}))^T =\widetilde {\bf R}= \tilde r({\bf S}_1, \ldots, {\bf S}_d), \end{equation} for some $0\ne c\in {\mathbb R}$ and some multivariate polynomial $\tilde r$. The concept of (wide-band) stationary signals was introduced in \cite[Definition 3]{perraudin17} in which the graph Laplacian is used as the graph shift. For a probability measure $P=(p(i))_{i\in V}$ on the graph ${\mathcal G}$ and a regularization matrix ${\bf K}$, we define the {\em stochastic mean squared error} of a reconstruction filter ${\bf W}$ by \begin{equation}\label{mse.objectivefunction} F_{{\rm mse}, P, {\bf K}}({\bf W}) ={\mathbb E} ({\bf W}{\bf y}-{\bf x})^T {\bf P} ({\bf W}{\bf y}-{\bf x}) + {\mathbb E}\, {\bf y}^T {\bf W}^T {\bf K} {\bf W} {\bf y}, \end{equation} where ${\bf P}$ is the diagonal matrix with diagonal entries $p(i), i\in V$. The stochastic mean squared error $F_{{\rm mse}, P, {\bf K}}({\bf W})$ in \eqref{mse.objectivefunction} contains the regularization term ${\mathbb E} {\bf y}^T {\bf W}^T {\bf K} {\bf W} {\bf y}$ and the fidelity term ${\mathbb E} ({\bf W}{\bf y}-{\bf x})^T {\bf P} ({\bf W}{\bf y}-{\bf x})= \sum_{i\in V} p(i) {\mathbb E} | ({\bf W} {\bf y})(i)-{ x}(i)|^2$. This setting is discussed in \cite{perraudin17} for the case that the filter ${\bf H}$, the covariance ${\bf G}$ of noises and the regularizer ${\bf K}$ are polynomials of the graph Laplacian ${\bf L}$, and that the probability measure $P$ is the uniform probability measure $P_U$, i.e., $p_U(i)=1/N, i\in V$. In the following theorem, we provide an explicit solution to the minimization $\min_{\bf W} F_{{\rm mse}, P, {\bf K} }({\bf W})$, see Appendix \ref{wienerfiltermsd.prof} for the proof. \begin{thm} \label{wienerfiltermse.thm} Let the filter ${\bf H}$, the input signal ${\bf x}$, the noisy output signal ${\bf y}$ and additive noise ${\pmb \epsilon}$ be as in \eqref{wiener2.eq}, and let the stochastic mean squared error $F_{{\rm mse}, P, {\bf K}}$ be as in \eqref{mse.objectivefunction}. Assume that ${\bf H}{\bf R}{\bf H}^T+{\bf G}$ and ${\bf P}+{\bf K}$ are strictly positive definite, and define \begin{equation}\label{wienerfiltermse.eq2} {\bf W}_{{\rm mse}}= ({\bf P}+{\bf K})^{-1}{\bf P} {\bf R}{\bf H}^T \big( {\bf H} {\bf R}{\bf H}^T+{\bf G}\big)^{-1}. \end{equation} Then ${\bf W}_{{\rm mse}}$ is the unique minimizer of the minimization problem \begin{equation} \label{wienerfiltermse.eq0} {\bf W}_{{\rm mse}}=\arg \min_{\bf W} F_{{\rm mse}, P, {\bf K} }({\bf W}),\end{equation} and \begin{equation} \label{wienerfiltermse.eq1} F_{{\rm mse}, P, {\bf K} }({\bf W}_{\rm mse}) = {\rm tr}\big({\bf P}({\bf I}- {\bf W}_{\rm mse} {\bf H}){\bf R}\big). \end{equation} \end{thm} We call the optimal reconstruction filter ${\bf W}_{\rm mse}$ in \eqref{wienerfiltermse.eq2} the {\em stochastic Wiener filter}.
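For small graphs, the closed form \eqref{wienerfiltermse.eq2} can be evaluated directly by dense linear algebra; the following sketch (ours, for verification on small examples only, and not the distributed procedure of Algorithm \ref{Wiener2.algorithm}) does exactly that:
\begin{verbatim}
# Dense evaluation of W_mse = (P+K)^{-1} P R H^T (H R H^T + G)^{-1}  (small N only).
import numpy as np

def wiener_filter_mse(H, R, G, P, K):
    B = R @ H.T @ np.linalg.inv(H @ R @ H.T + G)
    return np.linalg.solve(P + K, P @ B)        # (P+K)^{-1} P R H^T (H R H^T+G)^{-1}

# reconstructed signal: x_mse = wiener_filter_mse(H, R, G, P, K) @ y
\end{verbatim}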
For the case that the stochastic mean squared error does not take the regularization term into account, i.e., ${\bf K}={\bf 0}$, we obtain from \eqref{wienerfiltermse.eq2} that the corresponding stochastic Wiener filter ${\bf W}_{\rm mse}$ becomes \begin{equation}\label{wmse0.def} {\bf W}_{\rm mse}^0= {\bf R}{\bf H}^T \big( {\bf H} {\bf R}{\bf H}^T+{\bf G}\big)^{-1},\end{equation} which is independent of the probability measure $P=(p(i))_{i\in V}$ on the graph ${\mathcal G}$. If we further assume that the probability measure $P$ is the uniform probability measure $P_U$ and the input signals ${\bf x}$ are i.i.d.\ with mean zero and variance $\delta_1^2$, the stochastic Wiener filter becomes $${\bf W}_{\rm mse}^0= \delta_1^2 {\bf H}^T (\delta_1^2 {\bf H}{\bf H}^T+{\bf G})^{-1}$$ and the corresponding stochastic mean squared error is given by \begin{equation}\label{fmsepu} F_{{\rm mse}, P_U}({\bf W}_{\rm mse}^0)= \frac{\delta_1^2}{N} {\rm tr} \left ( (\delta_1^2 {\bf H}{\bf H}^T +{\bf G})^{-1} {\bf G}\right), \end{equation} cf. \eqref{wienerfilterworsecase.eq2} and \eqref{wienerfilterworsecase.error2}, and \cite[Eqn. 16]{perraudin17}. Denote the reconstructed signal via the stochastic Wiener filter ${\bf W}_{\rm mse}$ by \begin{equation}\label{stochasticestimator} {\bf x}_{\rm mse}={\bf W}_{\rm mse} {\bf y},\end{equation} where ${\bf y}$ is given in \eqref{wiener2.eq2}. The above estimator via the stochastic Wiener filter ${\bf W}_{\rm mse}$ is {\bf biased} in general. For the case that ${\bf G}, {\bf H}, {\bf K}$ and ${\bf R}$ are polynomials of commutative symmetric graph shifts ${\mathbf S}_1, \ldots, {\mathbf S}_d$, one may verify that the matrices ${\bf H}^T, {\bf H}, {\bf G}, {\bf R}, {\bf K}$ are commutative, and \begin{eqnarray} {\mathbb E}({\bf x}-{\bf x}_{\rm mse}) & \hskip-0.08in = & \hskip-0.08in ({\bf P}+{\bf K})^{-1} \big( {\bf H} {\bf R}{\bf H}^T+{\bf G}\big)^{-1} {\bf R}{\bf H}^T {\bf H}{\bf K} {\mathbb E}{\bf x}\nonumber\\ & \hskip-0.08in & \hskip-0.08in + \big( {\bf H} {\bf R}{\bf H}^T+{\bf G}\big)^{-1} {\bf G} {\mathbb E}{\bf x}. \end{eqnarray} Therefore the estimator \eqref{stochasticestimator} is {\bf unbiased} if \begin{equation}\label{stochasticestimator.unbiased} {\mathbf K}{\mathbb E}{\bf x}={\mathbf G}{\mathbb E}{\bf x}={\bf 0}.
\end{equation} \begin{remark}\label{stochasticwiener.remark} {\rm By \eqref{wienerfiltermse.eq2} and \eqref{wmse0.def}, the reconstructed signal ${\bf x}_{\rm mse}$ in \eqref{stochasticestimator} can be obtained in two steps, \begin{subequations} \label{stochasticrecovery.eq} \begin{equation} \label{stochasticrecovery.eqa} {\bf w}= {\bf W}_{\rm mse}^0{\bf y}= {\bf R} {\bf H}^T \big({\bf H} {\bf R} {\bf H}^T +{\bf G}\big)^{-1} {\bf y}, \end{equation} and \begin{equation} \label{stochasticrecovery.eqb} {\bf x}_{\rm mse} = {\bf P}^{-1/2} \big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1} {\bf P}^{1/2} {\bf w}, \end{equation} \end{subequations} where the first step \eqref{stochasticrecovery.eqa} is the Wiener filtering procedure without the regularization term taken into account, and the second step \eqref{stochasticrecovery.eqb} is the solution of the following Tikhonov regularization problem, \begin{equation}\label{tihhonov.eq0} {\bf x}_{\rm mse}=\arg\min_{\bf x}\ ({\bf x}-{\bf w})^T {\bf P} ({\bf x}-{\bf w})+{\bf x}^T {\bf K} {\bf x}.\end{equation} } \end{remark} By the symmetry and commutativity assumptions on the graph shifts ${\mathbf S}_1, \ldots, {\mathbf S}_d$, and the polynomial assumptions \eqref{wiener2.eqb}, \eqref{wiener2.eq1} and \eqref{wiener2.eq3}, the Wiener filter ${\bf W}_{\rm mse}^0$ in \eqref{wmse0.def} is the product of a polynomial filter ${\bf R} {\bf H}^T=(hr)({\mathbf S}_1, \ldots, {\mathbf S}_d)$ and the inverse of another polynomial filter $ {\bf H}{\bf R} {\bf H}^T +{\bf G}= (h^2 r +g)({\mathbf S}_1, \ldots, {\bf S}_d)$. Set ${\bf z}_1 = ({\bf H} {\bf R} {\bf H}^T +{\bf G})^{-1} {\bf y}$. Therefore, using \cite[Algorithms 1 and 2]{ncjs22}, the filtering procedure ${\bf w}= {\bf R} {\bf H}^T {\bf z}_1$ can be implemented at the vertex level with one-hop communication. We also observe that the Jacobi polynomial approximation algorithm and the Chebyshev interpolation polynomial approximation algorithm in Section \ref{Jacobiapproximation.section} can be applied to the inverse filtering procedure ${\bf y}\longmapsto {\bf z}_1$, when \begin{equation}\label{stochasticwienerfilter.filtercondition} h^2({\bf t}) r({\bf t})+g({\bf t})>0 \ {\rm for \ all} \ {\bf t}\in [{\pmb \mu}, {\pmb \nu}],\end{equation} see Part I of Algorithm \ref{Wiener2.algorithm} for the implementation of the Wiener filtering procedure \eqref{stochasticrecovery.eqa} without regularization at the vertex level. \smallskip Set ${\bf z}_2= {\bf P}^{1/2} {\bf w}$ and $ {\bf z}_3=\big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1} {\bf z}_2$. As ${\bf P}$ is a diagonal matrix, the rescaling procedures ${\bf z}_2= {\bf P}^{1/2} {\bf w}$ and ${\bf x}_{\rm mse}={\bf P}^{-1/2} {\bf z}_3$ can be implemented at the vertex level. Then it remains to find a distributed algorithm to implement the inverse filtering procedure \begin{equation}\label{inversez3} {\bf z}_3=\big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1} {\bf z}_2\end{equation} at the vertex level. As ${\bf P}^{-1/2}$ {\bf may not commute} with the graph shifts ${\bf S}_1, \ldots, {\bf S}_d$, the filter ${\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}$ is not necessarily a polynomial filter of some commutative graph shifts even if ${\bf K}=k({\bf S}_1, \ldots, {\bf S}_d)$ is, and hence the polynomial approximation algorithm proposed in Section \ref{Jacobiapproximation.section} {\bf does not apply} to the above inverse filtering procedure directly.
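For completeness, we indicate the standard computation behind the second step \eqref{stochasticrecovery.eqb}: setting the gradient of the objective function in \eqref{tihhonov.eq0} to zero gives ${\bf P}({\bf x}-{\bf w})+{\bf K}{\bf x}={\bf 0}$, i.e., $({\bf P}+{\bf K}){\bf x}={\bf P}{\bf w}$, so that
\begin{equation*}
{\bf x}_{\rm mse}=({\bf P}+{\bf K})^{-1}{\bf P}{\bf w}
={\bf P}^{-1/2}\big({\bf I}+{\bf P}^{-1/2}{\bf K}{\bf P}^{-1/2}\big)^{-1}{\bf P}^{1/2}{\bf w},
\end{equation*}
which is \eqref{stochasticrecovery.eqb} and explains why the inverse filtering procedure \eqref{inversez3} is the remaining step to be implemented.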
Next we propose a {\em novel} exponentially convergent algorithm to implement the inverse filtering procedure \eqref{inversez3} at the vertex level when the positive semidefinite regularization matrix ${\bf K}=k({\bf S}_1, \ldots, {\bf S}_d)$ is a polynomial of the graph shifts ${\bf S}_1, \ldots, {\bf S}_d$. Set $$K=\sup_{{\bf t}\in [{\pmb \mu}, {\pmb \nu}]} k({\bf t}) \ {\rm and} \ p_{\min}=\min_{i\in V} p(i).$$ Then one may verify that \begin{equation}\label{PKinverse.eq0} {\bf I}\preceq {\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2} \preceq \frac{ K+p_{\min}}{p_{\min}} {\bf I},\end{equation} where for symmetric matrices ${\bf A}$ and ${\bf B}$, we use ${\bf A}\preceq {\bf B}$ to denote the positive semidefiniteness of ${\bf B}-{\bf A}$. Applying the Neumann series expansion $ (1-t)^{-1}=\sum_{n=0}^\infty t^n$ with $t$ replaced by ${\bf I}- \frac{p_{\min}} { K+p_{\min}} ({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2})$, we obtain \begin{eqnarray*}&\hskip-0.08in & \hskip-0.08in \big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1}\nonumber\\ &\hskip-0.08in = & \hskip-0.08in \frac{p_{\min}} { K+p_{\min}} \sum_{n=0}^\infty \left(\frac{ K {\bf I}- p_{\min}{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}} { K+p_{\min}} \right)^n.\qquad \end{eqnarray*} Therefore the sequence ${\bf w}_m, m\ge 0$, defined by \begin{eqnarray} \label{PKinverse.eq2} {\bf w}_{m+1} &\hskip-0.08in = & \hskip-0.08in \frac{p_{\min}}{K+p_{\min}} {\bf w}_0+ \frac{K}{K+p_{\min}} {\bf w}_m\nonumber \\ & &\hskip-0.08in -\frac{p_{\min}}{K+p_{\min}} {\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}{\bf w}_m, \ m\ge 0 \quad \end{eqnarray} with initial value ${\bf w}_0={\bf z}_2$ converges to ${\bf z}_3$ exponentially, since \begin{eqnarray*} & \hskip-0.08in & \hskip-0.08in \|{\bf w}_m- {\bf z}_3\|_2\\ & \hskip-0.08in = & \hskip-0.08in \frac{p_{\min}} { K+p_{\min}} \left\|\sum_{n=m+1}^\infty \left(\frac{ K {\bf I}- p_{\min}{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}} { K+p_{\min}} \right)^n {\bf z}_2\right\|_2\\ & \hskip-0.08in \le & \hskip-0.08in \frac{p_{\min}\|{\bf z}_2\|_2} { K+p_{\min}} \sum_{n=m+1}^\infty \left\|\frac{ K {\bf I}- p_{\min}{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}} { K+p_{\min}} \right\|^n \\ & \hskip-0.08in \le & \hskip-0.08in \frac{p_{\min}\|{\bf z}_2\|_2} { K+p_{\min}} \sum_{n=m+1}^\infty \left(\frac{K} { K+p_{\min}}\right)^n\nonumber\\ & \hskip-0.08in = & \hskip-0.08in \left(\frac{K} { K+p_{\min}}\right)^{m+1} \|{\bf z}_2\|_2,\ m\ge 1, \end{eqnarray*} where the last inequality follows from \eqref{PKinverse.eq0}. More importantly, each iteration in the algorithm to implement the inverse filtering procedure \eqref{inversez3} contains mainly two rescaling procedures and a filtering procedure associated with the polynomial filter ${\bf K}$, which can be implemented by \cite[Algorithms 1 and 2]{ncjs22}. Hence the regularization procedure \eqref{stochasticrecovery.eqb} can be implemented at the vertex level with one-hop communication, see Part II of Algorithm \ref{Wiener2.algorithm} and the illustrative sketch given below. \begin{algorithm}[t] \caption{Polynomial approximation algorithm to implement the Wiener filtering procedure ${\bf x}_{\rm mse}={\bf W}_{\rm mse} {\bf y}$ at a vertex $i\in V$.
\begin{algorithm}[t] \caption{Polynomial approximation algorithm to implement the Wiener filtering procedure ${\bf x}_{\rm mse}={\bf W}_{\rm mse} {\bf y}$ at a vertex $i\in V$. } \label{Wiener2.algorithm} \begin{algorithmic} \STATE {\em Inputs}: Polynomial coefficients of polynomial filters ${\bf H}, {\bf G}, {\bf K}, {\bf R}$ and ${\bf G}_M$ (either the Jacobi polynomial approximation filter ${\bf G}_M^{(\alpha, \beta)}$ or the Chebyshev interpolation approximation filter ${\bf C}_M$ to the inverse filter $({\bf H}^2 {\bf R} +{\bf G})^{-1}$), entries $S_k(i,j), j\in {\mathcal N}_i$ in the $i$-th row of the shifts ${\bf S}_k, 1\le k\le d$, the value $y(i)$ of the input signal ${\bf y}=(y(i))_{i\in V}$ at the vertex $i$, the probability $p(i)$ at the vertex $i$, and the numbers $L_1$ and $L_2$ of the first and second iterations. \STATE {\bf Part I}: \ Implementation of the Wiener filtering procedure \eqref{stochasticrecovery.eqa} at the vertex $i$ \STATE {\em Pre-processing}:\ Find the polynomial coefficients of the polynomial filters ${\bf H}^2 {\bf R} +{\bf G}$ and ${\bf R}{\bf H}$. \STATE {\em Initialization}: \ $m=0$ and zero initial value $x^{(0)}(i)=0$. \STATE {\em Iteration}: \ Use \cite[Algorithms 1 and 2]{ncjs22} to implement the filtering procedures ${\bf e}^{(m)}=({\bf H} {\bf R} {\bf H}^T +{\bf G}) {\bf x}^{(m-1)}-{\bf y}$ and ${\bf x}^{(m)}= {\bf x}^{(m-1)}-{\bf G}_M {\bf e}^{(m)}, 1\le m\le L_1$, at the vertex $i$. \STATE {\em Output of the iteration}: \ Denote the output of the $L_1$-th iteration by $ z_1^{(L_1)}(i)$, which is the approximate value of the output data of the inverse filtering procedure ${\bf z}_1 = ({\bf H}^2 {\bf R} +{\bf G})^{-1} {\bf y}$ at the vertex $i$. \STATE {\em Post-processing after the iteration}:\ Use \cite[Algorithms 1 and 2]{ncjs22} to implement the filtering procedure ${\bf w}= {\bf R} {\bf H} {\bf z}_1={\bf W}_{\rm mse}^0 {\bf y}$ at the vertex $i$, where the input is $z_1^{(L_1)}(i)$ and the output, denoted by $w^{(L_1)}(i)$, is the approximate value of the output data of the above filtering procedure. \STATE {\bf Part II}: \ Implementation of the regularization procedure \eqref{stochasticrecovery.eqb} at the vertex $i$ \STATE {\em Pre-processing}: \ Rescaling $z_2^{(L_1)}(i)= p(i)^{1/2} w^{(L_1)}(i)$, the approximate value of the output data of the rescaling procedure ${\bf z}_2= {\bf P}^{1/2} {\bf w}$. \STATE {\em Iteration}: \ Start from ${\bf w}_0(i)=z_2^{(L_1)}(i)$, and use \cite[Algorithms 1 and 2]{ncjs22} and the rescaling ${\bf P}^{-1/2}$ to implement the procedure \eqref{PKinverse.eq2} for $0\le m\le L_2$, with the output, denoted by $z_3^{(L_1, L_2)}(i)$, being the approximate value of the output data of the inverse filtering procedure ${\bf z}_3=\big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1} {\bf z}_2$ at the vertex $i$. \STATE {\em Post-processing}:\ ${ x}_{\rm mse}^{(L_1, L_2)}(i)=p(i)^{-1/2} z_3^{(L_1, L_2)}(i)$. \STATE {\em Output}: $ x_{\rm mse}(i)\approx { x}_{\rm mse}^{(L_1, L_2)}(i)$, the approximate value of the output data of the Wiener filtering procedure $ {\bf x}_{\rm mse}= ({\bf P}+{\bf K})^{-1} {\bf P}{\bf w}={\bf W}_{\rm mse} {\bf y}$ at the vertex $i$.
\end{algorithmic} \end{algorithm} \begin{remark} {\rm We remark that for the case that the probability measure $P$ is uniform \cite{perraudin17}, ${\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}={\bf I}+N {\bf K}$ is a polynomial filter of ${\bf S}_1, \ldots, {\bf S}_d$ if ${\bf K}=k({\bf S}_1, \ldots, {\bf S}_d)$ is, and hence the JPA$(\alpha, \beta)$ and CIPA algorithms proposed in Section \ref{Jacobiapproximation.section} can be applied to the inverse filtering procedure ${\bf z}_3=\big({\bf I}+{\bf P}^{-1/2} {\bf K}{\bf P}^{-1/2}\big)^{-1} {\bf z}_2$ if $ 1+ N k({\bf t})>0 \ {\rm for \ all} \ {\bf t}\in [{\pmb \mu}, {\pmb \nu}]$. }\end{remark} We finish this section with optimal {\bf unbiased} Wiener filters for the scenario that the input signals ${\bf x}$ are wide-band stationary, i.e., ${\bf x}$ satisfies \eqref{wiener2.eq4}, the filtering procedure satisfies \eqref{wiener2.eqb} and \begin{equation}\label{wbs.reuqirement1} {\bf H}{\bf 1}=\tau {\bf 1}\end{equation} for some $\tau\ne 0$, the outputs ${\bf y}$ in \eqref{wiener2.eq2} are corrupted by some noise ${\pmb \epsilon}$ satisfying \eqref{wiener2.eq3}, and the covariance matrix ${\bf G}$ of the noise and the regularization matrix ${\bf K}$ satisfy \begin{equation}\label{KG.condition} {\bf G}{\bf 1}={\bf K}{\bf 1}={\bf 0}. \end{equation} In the above setting, the random variable $\tilde {\bf x}={\bf x}-{\mathbb E}{\bf x}={\bf x}- c {\bf 1}$ satisfies \begin{equation}\label{wiener2.eq4b} {\mathbb E}\tilde {\bf x}={\bf 0}, {\mathbb E} {\tilde {\bf x}}{\pmb \epsilon}^T={\bf 0}\ {\rm and} \ {\mathbb E} \tilde {\bf x} \tilde {\bf x}^T= \widetilde{\bf R}=\tilde r({\mathbf S}_1, \ldots, {\mathbf S}_d). \end{equation} For any unbiased reconstruction filter ${\bf W}$, we have $${\bf W}{\bf H} {\bf 1}={\bf 1},$$ since ${\mathbb E}({\bf W}{\bf y})={\bf W}{\bf H}\,{\mathbb E}{\bf x}=c\,{\bf W}{\bf H}{\bf 1}$ must coincide with ${\mathbb E}{\bf x}=c{\bf 1}$ for all $c\ne 0$. This together with \eqref{KG.condition} implies that \begin{eqnarray*}{\bf W} {\bf y}-{\bf x} & \hskip-0.08in = & \hskip-0.08in c ({\bf W}{\bf H} {\bf 1}-{\bf 1})+ ({\bf W}{\bf H}-{\bf I})\tilde {\bf x}+{\bf W}{\pmb \epsilon}\nonumber\\ & \hskip-0.08in = & \hskip-0.08in ({\bf W}{\bf H}-{\bf I})\tilde {\bf x}+{\bf W}{\pmb \epsilon} \end{eqnarray*} and \begin{eqnarray*}{\bf y}^T {\bf W}^T {\bf K} {\bf W}{\bf y} & \hskip-0.08in = & \hskip-0.08in ({\bf H}\tilde {\bf x}+{\pmb \epsilon})^T{\bf W}^T{\bf K} {\bf W} ({\bf H}\tilde {\bf x}+{\pmb \epsilon})+c^2\,{\bf 1}^T {\bf K}{\bf 1}\nonumber\\ & \hskip-0.08in & \hskip-0.08in +c\,{\bf 1}^T {\bf K} {\bf W}({\bf H}\tilde {\bf x}+{\pmb \epsilon}) + c\,({\bf H}\tilde {\bf x}+{\pmb \epsilon})^T {\bf W}^ T {\bf K}{\bf 1} \nonumber\\ & \hskip-0.08in = & \hskip-0.08in ({\bf H}\tilde {\bf x}+{\pmb \epsilon})^T{\bf W}^T{\bf K} {\bf W} ({\bf H}\tilde {\bf x}+{\pmb \epsilon}). \end{eqnarray*} Therefore following the argument used in the proof of Theorem \ref{wienerfiltermse.thm} with the signal ${\bf x}$ and the polynomial $r$ replaced by $\tilde{\bf x}$ and $\tilde r$ respectively, and applying \eqref{stochasticestimator.unbiased}, \eqref{wbs.reuqirement1} and \eqref{wiener2.eq4b}, we can show that the stochastic Wiener filter $\widetilde{\bf W}_{\rm mse}$ in \eqref{stationarywienerfiltermse.eq2} is an optimal unbiased filter to reconstruct wide-band stationary signals.
\begin{thm}\label{widebandwienerfilter.thm} Let the input signal ${\bf x}$, the noisy output signal ${\bf y}$ and the additive noise ${\pmb \epsilon}$ be as in \eqref{wiener2.eq4}, \eqref{wiener2.eq2} and \eqref{wiener2.eq3} respectively, let the covariance matrix ${\bf G}$ of the noise and the regularization matrix ${\bf K}$ satisfy \eqref{KG.condition}, and let the filtering procedure associated with the filter ${\bf H}$ satisfy \eqref{wiener2.eqb} and \eqref{wbs.reuqirement1}. Assume that ${\bf H}\widetilde {\bf R}{\bf H}^T+{\bf G}$ and ${\bf P}+{\bf K}$ are strictly positive definite. Then \begin{equation} \label{stationarywienerfiltermse.eq1} F_{{\rm mse}, P, {\bf K}}({\bf W})\ge F_{{\rm mse}, P, {\bf K}}(\widetilde {\bf W}_{\rm mse}) \end{equation} holds for all unbiased reconstructing filters ${\bf W}$, where $F_{{\rm mse}, P, {\bf K}}({\bf W})$ is the stochastic mean squared error in \eqref{mse.objectivefunction} and \begin{equation} \label{stationarywienerfiltermse.eq2} \widetilde{\bf W}_{\rm mse}=({\bf P}+{\bf K})^{-1}{\bf P}\widetilde {\bf R}{\bf H}^T ({\bf H}\widetilde {\bf R}{\bf H}^T+{\bf G})^{-1}. \end{equation} Moreover, $\tilde {\bf x}_{\rm mse}=\widetilde {\bf W}_{\rm mse}{\bf y}$ is an unbiased estimator of the wide-band stationary signal ${\bf x}$. \end{thm} Following the distributed algorithm used to implement the stochastic Wiener filtering procedure, the unbiased estimate $\tilde {\bf x}_{\rm mse}=\widetilde {\bf W}_{\rm mse}{\bf y}$ can be implemented at the vertex level with one-hop communication when \begin{equation*} h^2({\bf t}) \tilde r({\bf t})+g({\bf t})>0 \ {\rm for \ all} \ {\bf t}\in [{\pmb \mu}, {\pmb \nu}].\end{equation*} Numerical demonstrations to denoise wide-band stationary signals are presented in Section \ref{denoisingwideband.demo}. \section{Wiener filters for deterministic graph signals} \label{worst-casewienerfilter.section} Let ${\mathbf S}_1, \ldots, {\mathbf S}_d$ be real commutative symmetric graph shifts on a simple graph ${\mathcal G}=(V, E)$ and let their joint spectrum be contained in some cube $[{\pmb \mu}, {\pmb \nu}]$, i.e., \eqref{jointspectralcubic.def} holds. In this section, we consider the scenario that the filtering procedure \eqref{filtering.def} has the filter ${\bf H}$ given in \eqref{wiener2.eqb}, its inputs ${\bf x}=(x(i))_{i\in V}$ are deterministic signals with their energy bounded by some $\delta_0>0$, \begin{equation} \label{wiener1.eqa1} \|{\bf x}\|_2\le \delta_0, \end{equation} and its outputs \begin{equation}\label{wiener1.eqc} {\bf y}={\bf H} {\bf x}+{\pmb \epsilon}\end{equation} are corrupted by some random noise ${\pmb \epsilon}$ which has mean zero and covariance matrix ${\bf G}={\rm cov}({\pmb \epsilon})$ being a polynomial of the graph shifts ${\bf S}_1, \ldots, {\mathbf S}_d$, \begin{equation} \label{wiener1.eqd} {\mathbb E}{\pmb \epsilon}={\bf 0}\ \ {\rm and}\ {\bf G}=g({\bf S}_1, \ldots, {\bf S}_d) \end{equation} for some multivariate polynomial $g$. For the above setting of the filtering procedure, we introduce the {\em worst-case mean squared error} of a reconstruction filter ${\bf W}$ by \begin{equation}\label{wcms.objectivefunction} F_{{\rm wmse}, P}({\bf W})= \sum_{i\in V} p(i) \max_{\|{\bf x}\|_2\le \delta_0} {\mathbb E} | ({\bf W} {\bf y})(i)-{ x}(i)|^2, \end{equation} where $P=(p(i))_{i\in V}$ is a probability measure on the graph ${\mathcal G}$ \cite{bi2009, eldar2006}.
In this section, we discuss the optimal reconstruction filter ${\bf W}_{\rm wmse}$ with respect to the worst-case mean squared error $F_{{\rm wmse}, P}$ in \eqref{wcms.objectivefunction}, and we propose a distributed algorithm to implement the worst-case Wiener filtering procedure at the vertex level with one-hop communication. First, we provide a {\bf universal} solution to the minimization problem \begin{equation} \label{wcms.minimization} \min_{\bf W} F_{{\rm wmse}, P}({\bf W}), \end{equation} which is independent of the probability measure $P$, see Appendix \ref{wienerfilterworsecase.thm.pfappendix} for the proof. \begin{thm} \label{wienerfilterworsecase.thm} Let the filter ${\bf H}$, the input ${\bf x}$, the noisy output ${\bf y}$, the noise $\pmb \epsilon$, and the worst-case mean squared error $F_{{\rm wmse}, P}$ be as in \eqref{wiener2.eqb}, \eqref{wiener1.eqa1}, \eqref{wiener1.eqc}, \eqref{wiener1.eqd} and \eqref{wcms.objectivefunction} respectively. Assume that $\delta_0^2 {\bf H}{\bf H}^T+{\bf G}$ is strictly positive definite. Then \begin{eqnarray} \label{wienerfilterworsecase.eq1} F_{{\rm wmse}, P}({\bf W}) & \hskip-0.08in \ge & \hskip-0.08in F_{{\rm wmse}, P}({\bf W}_{\rm wmse})\nonumber\\ & \hskip-0.08in = & \hskip-0.08in \delta_0^2- \delta_0^4 {\rm tr} \big( (\delta_0^2 {\bf H} {\bf H}^T +{\bf G})^{-1} {\bf H} {\bf P} {\bf H}^T \big)\qquad \end{eqnarray} holds for all reconstructing filters ${\bf W}$, where ${\bf P}$ is the diagonal matrix with diagonal entries $p(i), i\in V$, and \begin{equation}\label{wienerfilterworsecase.eq2} {\bf W}_{{\rm wmse}}= \delta_0^2 {\bf H}^T \big(\delta_0^2 {\bf H}{\bf H}^T+{\bf G}\big)^{-1}. \end{equation} Moreover, the reconstruction filter ${\bf W}_{\rm wmse}$ is the unique solution of the minimization problem \eqref{wcms.minimization} if ${\bf P}$ is invertible, i.e., the probability $p(i)$ at every vertex $i\in V$ is positive. \end{thm} We call the optimal reconstruction filter ${\bf W}_{\rm wmse}$ in \eqref{wienerfilterworsecase.eq2} the {\em worst-case Wiener filter}. Denote the order of the graph ${\mathcal G}$ by $N$. For the case that the probability measure $P$ is the uniform probability measure $P_U$, we can simplify the estimate \eqref{wienerfilterworsecase.eq1} as follows: \begin{equation}\label{wienerfilterworsecase.error2} F_{{\rm wmse}, P_U}({\bf W}_{\rm wmse})=\frac{\delta_0^2}{N} {\rm tr} \big( (\delta_0^2 {\bf H} {\bf H}^T +{\bf G})^{-1} {\bf G}\big), \end{equation} cf. \eqref{fmsepu}. If the entries of the random noise ${\pmb \epsilon}$ are further assumed to be i.i.d. with mean zero and variance $\sigma^2$, we can use the singular values $\mu_i({\bf H}), 1\le i\le N$, of the filter ${\bf H}$ to estimate the worst-case mean squared error for the worst-case Wiener filter ${\bf W}_{\rm wmse}$, \begin{equation}\label{wienerfilterworsecase.error3} F_{{\rm wmse}, P_U}({\bf W}_{\rm wmse})=\frac{\delta_0^2 \sigma^2}{N} \sum_{i=1}^N \frac{1} {\delta_0^2 \mu_i({\bf H})^2+\sigma^2}. \end{equation} Denote the reconstructed signal via the worst-case Wiener filter ${\bf W}_{\rm wmse}$ by \begin{equation} {\bf x}_{\rm wmse}={\bf W}_{\rm wmse} {\bf y},\end{equation} where ${\bf y}$ is given in \eqref{wiener1.eqc}.
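For the uniform probability measure, the closed form \eqref{wienerfilterworsecase.error3} can be checked against \eqref{wienerfilterworsecase.eq1} on a small dense example. The sketch below is for illustration only; the random symmetric filter ${\bf H}$ and the parameters $\delta_0, \sigma$ are assumptions made for the example.
\begin{verbatim}
# Minimal NumPy sketch: for a uniform probability measure and i.i.d. noise,
# compare the worst-case error given by the trace formula with the
# singular-value expression.  The random symmetric filter H and the
# parameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
N, delta0, sigma = 64, 1.5, 0.3
H = rng.standard_normal((N, N)); H = (H + H.T) / 2
G = sigma**2 * np.eye(N)
P = np.eye(N) / N

A = delta0**2 * H @ H.T + G
F_trace = delta0**2 - delta0**4 * np.trace(np.linalg.solve(A, H @ P @ H.T))

mu = np.linalg.svd(H, compute_uv=False)
F_svd = delta0**2 * sigma**2 / N * np.sum(1.0 / (delta0**2 * mu**2 + sigma**2))

print(F_trace, F_svd)      # the two values agree up to rounding
\end{verbatim}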
By \eqref{wienerfilterworsecase.eq2}, the reconstructed signal ${\bf x}_{\rm wmse}$ can be obtained by the combination of an inverse filtering procedure \begin{subequations} \label{worstcaserecovery.eq} \begin{equation} \label{worstcaserecovery.eqb} {\bf z} = \big(\delta_0^2 {\bf H} {\bf H}^T +{\bf G}\big)^{-1} {\bf y} \end{equation} and a filtering procedure \begin{equation} \label{worstcaserecovery.eqa} {\bf x}_{\rm wmse}= \delta_0^2 {\bf H}^T {\bf z}, \end{equation} \end{subequations} where the noisy observation ${\bf y}$ is the input and $\delta_0^2 {\bf H} {\bf H}^T +{\bf G}$ is a polynomial filter. As the graph shifts ${\mathbf S}_1, \ldots, {\mathbf S}_d$ are symmetric and commutative, ${\bf H}$ is the polynomial graph filter in \eqref{wiener2.eqb} and \eqref{wiener1.eqd} holds, we have ${\bf H}^T={\bf H}=h({\mathbf S}_1, \ldots, {\mathbf S}_d)$ and $\delta_0^2 {\bf H} {\bf H}^T +{\bf G}= \delta_0^2 {\bf H}^2 +{\bf G}= (\delta_0^2 h^2+g)({\mathbf S}_1, \ldots, {\bf S}_d)$ are polynomial filters of ${\bf S}_1, \ldots, {\bf S}_d$. Therefore, using \cite[Algorithms 1 and 2]{ncjs22}, the filtering procedure \eqref{worstcaserecovery.eqa} can be implemented at the vertex level with one-hop communication. By Theorem \ref{exponentialconvergence.thm}, the polynomial approximation algorithm \eqref{jacobiapproximation.def} proposed in the last section can be applied to the inverse filtering procedure \eqref{worstcaserecovery.eqb} if the following requirement is met, \begin{equation*}\label{worsecasewienerfilter.filtercondition} \delta_0^2 h^2({\bf t})+g({\bf t})>0 \ {\rm for \ all} \ {\bf t}\in [{\pmb \mu}, {\pmb \nu}].\end{equation*} Hence the worst-case Wiener filtering procedure \eqref{worstcaserecovery.eq} can be implemented at the vertex level with one-hop communication, see Algorithm \ref{Wiener1.algorithm} for the implementation at a vertex. \begin{algorithm}[t] \caption{Polynomial approximation algorithm to implement the worst-case Wiener filtering procedure $ {\bf x}_{\rm wmse}={\bf W}_{\rm wmse} {\bf y}$ at a vertex $i\in V$. } \label{Wiener1.algorithm} \begin{algorithmic} \STATE {\em Inputs}: Polynomial coefficients of polynomial filters ${\bf H}, {\bf G}$ and ${\bf G}_M$ (either the Jacobi polynomial approximation filter ${\bf G}_M^{(\alpha, \beta)}$ or the Chebyshev interpolation approximation filter ${\bf C}_M$), entries $S_k(i,j), j\in {\mathcal N}_i$ in the $i$-th row of the shifts ${\bf S}_k, 1\le k\le d$, the value $y(i)$ of the input signal ${\bf y}=(y(i))_{i\in V}$ at the vertex $i$, and the number $L$ of iterations. \STATE {\em Pre-iteration}: Find the polynomial coefficients of the polynomial filter $\delta_0^2 {\bf H}^2+{\bf G}$. \STATE {\em Initialization}: $m=0$ and zero initial value $x^{(0)}(i)=0$. \STATE{\em Iteration}: Use \cite[Algorithms 1 and 2]{ncjs22} to implement the filtering procedures ${\bf e}^{(m)}=(\delta_0^2 {\bf H}^2+{\bf G}) {\bf x}^{(m-1)}-{\bf y}$ and ${\bf x}^{(m)}= {\bf x}^{(m-1)}-{\bf G}_M {\bf e}^{(m)}$, $1\le m\le L$, at the vertex $i$, with the output of the $L$-th iteration denoted by $ x^{(L)}(i)$. \STATE {\em Post-iteration}: Use \cite[Algorithms 1 and 2]{ncjs22} to implement the filtering procedure $ {\bf x}_{\rm wmse}=\delta_0^2 {\bf H} {\bf x}^{(L)}$ at the vertex $i$, with the output denoted by $ x_{\rm wmse}^{(L)}(i)$. \STATE {\em Output}: $ x_{\rm wmse}(i)\approx x^{(L)}_{\rm wmse}(i)$, the approximate value of the output data of the Wiener filtering procedure $ {\bf x}_{\rm wmse}={\bf W}_{\rm wmse} {\bf y}$ at the vertex $i$.
\end{algorithmic} \end{algorithm} For a probability measure $P=(p(i))_{i\in V}$ on the graph ${\mathcal G}$ and a reconstruction filter ${\bf W}$, \begin{eqnarray}\label{wcms.objectivefunction2} \widetilde F_{{\rm wmse}, P}({\bf W})= \max_{\|{\bf x}\|_2\le \delta_0} \sum_{i\in V} p(i) {\mathbb E} | ({\bf W} {\bf y})(i)-{\bf x}(i)|^2 \end{eqnarray} is another natural worst-case mean squared error measurement, cf. \eqref{wcms.objectivefunction}. By \eqref{wiener1.eqc} and \eqref{wiener1.eqd}, we obtain \begin{eqnarray*} & \hskip-0.08in & \hskip-0.08in \widetilde F_{{\rm wmse}, P}({\bf W}) \nonumber\\ & \hskip-0.08in =& \hskip-0.08in \sup_{\|{\bf x}\|_2\le \delta_0} {\bf x}^T ({\bf H}^T {\bf W}^T-{\bf I}) {\bf P} ({\bf W}{\bf H}-{\bf I}) {\bf x}\nonumber\\ & & + {\rm tr} \big({\bf P} {\bf W} {\mathbb E}({\pmb \epsilon} {\pmb \epsilon}^T){\bf W}^T\big)\nonumber\\ & \hskip-0.08in = & \hskip-0.08in \delta_0^2 \lambda_{\max} \left(({\bf H}^T {\bf W}^T-{\bf I}) {\bf P} ({\bf W}{\bf H}-{\bf I})\right) + {\rm tr} ({\bf P} {\bf W} {\bf G}{\bf W}^T)\nonumber\\ & \hskip-0.08in \le & \hskip-0.08in \delta_0^2 {\rm tr} \left(({\bf H}^T {\bf W}^T-{\bf I}) {\bf P} ({\bf W}{\bf H}-{\bf I})\right) + {\rm tr} ({\bf P} {\bf W} {\bf G}{\bf W}^T)\nonumber\\ & \hskip-0.08in = & \hskip-0.08in {\rm tr} \left( {\bf P} \big(\delta_0^2 ({\bf W}{\bf H}-{\bf I})({\bf H}^T {\bf W}^T-{\bf I})+ {\bf W} {\bf G}{\bf W}^T\big)\right)\nonumber\\ & \hskip-0.08in = & \hskip-0.08in F_{{\rm wmse}, P}({\bf W}), \end{eqnarray*} where the inequality holds as the matrix $({\bf H}^T {\bf W}^T-{\bf I}) {\bf P} ({\bf W}{\bf H}-{\bf I})$ is positive semidefinite. Similarly, we have the following lower bound estimate, \begin{eqnarray*} \hskip-0.18in \widetilde F_{{\rm wmse}, P}({\bf W}) & \hskip-0.08in \ge & \hskip-0.08in \frac{\delta_0^2}{N} {\rm tr} \big(({\bf H}^T {\bf W}^T-{\bf I}) {\bf P} ({\bf W}{\bf H}-{\bf I})\big) \nonumber\\ \hskip-0.18in & \hskip-0.08in & \hskip-0.08in + {\rm tr} ({\bf P} {\bf W} {\bf G}{\bf W}^T)\ge \frac{F_{{\rm wmse}, P}({\bf W})}{N}. \end{eqnarray*} For the case that the probability measure is uniform and the random noise vector $\pmb \epsilon$ has i.i.d. entries with mean zero and variance $\sigma^2$, we get \begin{eqnarray*} \widetilde F_{{\rm wmse}, P_U}({\bf W}_{\rm wmse}) & \hskip-0.08in = & \hskip-0.08in \frac{\delta_0^2 \sigma^2}{N} \max_{1\le i\le N} \frac{\sigma^2}{(\delta_0^2 \mu_i({\bf H})^2+\sigma^2)^2}\nonumber\\ & & \hskip-0.08in + \frac{\delta_0^2\sigma^2}{N} \sum_{i=1}^N \frac{\delta_0^2 \mu_i({\bf H})^2}{ (\delta_0^2 \mu_i({\bf H})^2+\sigma^2)^2},\qquad \quad \end{eqnarray*} where $\mu_i({\bf H}), 1\le i\le N$, are the singular values of the filter ${\bf H}$, cf. \eqref{wienerfilterworsecase.error3} for the estimate for $F_{{\rm wmse}, P_U}({\bf W}_{\rm wmse})$. \section{Simulations} \label{simulation.section} Let $N\ge 1$ and we say that $a=b\ {\rm mod }\ N$ if $(a-b)/N$ is an integer. The {\em circulant graph} ${\mathcal C}(N, Q)$ generated by $Q=\{q_1, \ldots, q_L\}$ is a simple graph with the vertex set $V_N=\{0, 1, \ldots, N-1\}$ and the edge set $E_N(Q)=\{(i, i\pm q\ {\rm mod}\ N),\ i\in V_N, q\in Q\}$, where $q_l, 1\le l\le L$, are integers contained in $[1, N/2)$ \cite{ncjs22, ekambaram13}-\cite{dragotti19}.
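As a small illustration of this definition, the adjacency matrix of ${\mathcal C}(N,Q)$ can be assembled directly from the edge rule; the following sketch is written only to make the construction concrete.
\begin{verbatim}
# Minimal sketch of the circulant graph C(N, Q): build its adjacency matrix
# from the edge rule (i, i +/- q mod N), q in Q.  Pure NumPy, for illustration.
import numpy as np

def circulant_adjacency(N, Q):
    A = np.zeros((N, N))
    for i in range(N):
        for q in Q:
            A[i, (i + q) % N] = 1
            A[i, (i - q) % N] = 1
    return A

A = circulant_adjacency(1000, [1, 2, 5])      # the generating set Q_0 used below
print(A.sum() / 2)                            # number of edges: N * |Q_0| = 3000 here
\end{verbatim}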
In Section \ref{circulantgraph.demo}, we demonstrate the theoretical result in Theorem \ref{exponentialconvergence.thm} on the exponential convergence of the Jacobi polynomial approximation algorithm (JPA($\alpha, \beta$)) and the Chebyshev interpolation polynomial algorithm (CIPA) on the implementation of inverse filtering procedures on circulant graphs. Our numerical results show that the CIPA and the JPA($\alpha, \beta$) with appropriate selection of the parameters $\alpha$ and $\beta$ perform better in implementing the inverse filtering procedure than the Chebyshev polynomial approximation algorithm in \cite{ncjs22} and the gradient descent method in \cite{Shi15} do. Let ${\mathcal G}_{N}=(V_{N}, E_{N}), N\ge 2$, be random geometric graphs with vertices randomly deployed on $[0, 1]^2$ and an undirected edge between two vertices if their physical distance is not larger than $\sqrt{2/N}$ \cite{ncjs22, jiang19, Nathanael2014}. In Sections \ref{randomsignal.demo} and \ref{denoisingwideband.demo}, we consider denoising (wide-band) stationary signals via the Wiener procedures with/without regularization taken into account, and we compare their performance with that of the Tikhonov regularization method \eqref{tik.def}. It is observed that the Wiener filtering procedures with/without regularization taken into account have better performance on denoising (wide-band) stationary signals than the conventional Tikhonov regularization approach does. \subsection{Polynomial approximation algorithms on circulant graphs} \label{circulantgraph.demo} In the simulations of this subsection, we take circulant graphs ${\mathcal C}(N,Q_0)$, polynomial filters ${\bf H}_1$, input signals ${\bf x}$ of the filtering procedure ${\bf x}\longmapsto {\bf H}_1{\bf x}$, and input signals ${\bf y}$ of the inverse filtering procedure ${\bf y}\longmapsto {\bf H}_1^{-1}{\bf y}$ as in \cite{ncjs22}, that is, the circulant graphs ${\mathcal C}(N,Q_0)$ are generated by $Q_0 = \{1, 2, 5\}$, ${\bf H}_1=h_1( {\bf L}_{C(N,Q_0)}^{\rm sym})$ is a polynomial filter of the symmetric normalized Laplacian ${\bf L}_{C(N,Q_0)}^{\rm sym}$ on the circulant graph ${\mathcal C}(N,Q_0)$ with $ h_1(t)=(9/4-t)(3+t)$ given in \eqref{h1.def}, the input signal ${\bf x}$ has i.i.d. entries randomly selected in $[-1, 1]$, and the input signal ${\bf y}={\bf H}_1{\bf x}$ of the inverse filtering procedure is the output of the filtering procedure. Shown in Table \ref{CirculantGraphICPA.Table} are averages of the relative iteration error $${\rm E}(m)=\frac{ \|{\bf x}^{(m)}-{\bf x}\|_2}{\|{\bf x}\|_2},\ m\ge 1,$$ over 1000 trials to implement the inverse filtering procedure ${\bf y}\longmapsto {\bf H}_1^{-1} {\bf y}$ via the JPA($\alpha, \beta$) and CIPA with zero initial value ${\bf x}^{(0)}={\bf 0}$, where ${\bf x}^{(m)}, m\ge 1$, are the outputs of the polynomial approximation algorithm \eqref{jacobiapproximation.def} at the $m$-th iteration and $M$ is the degree of the polynomials in the Jacobi (Chebyshev interpolation) polynomial approximation. The JPA($\alpha, \beta$) with $\alpha=\beta=-1/2$ is the Chebyshev polynomial approximation algorithm, ICPA for abbreviation, introduced in \cite{ncjs22}, and the relative iteration error presented in Table \ref{CirculantGraphICPA.Table} for the JPA($-1/2, -1/2$) is copied from \cite[Table 1]{ncjs22}.
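The skeleton of this experiment is sketched below. The update in the sketch is a plain Richardson iteration with a fixed step size, used only as a stand-in for the JPA($\alpha,\beta$) and CIPA approximations of the inverse filter, which are not reproduced here; the seed and the step size are assumptions made for the example.
\begin{verbatim}
# Skeleton of the experiment on C(N, Q_0): build the graph, the filter
# H_1 = h_1(L^sym) with h_1(t) = (9/4 - t)(3 + t), and report the relative
# error E(m).  The update below is a plain Richardson iteration with a fixed
# step size; it is only a stand-in for the JPA/CIPA approximations of the
# inverse filter, which are not reproduced here.
import numpy as np

N, Q0 = 1000, [1, 2, 5]
idx = np.arange(N)
A = np.zeros((N, N))
for q in Q0:
    A[idx, (idx + q) % N] = 1
    A[idx, (idx - q) % N] = 1

L_sym = np.eye(N) - A / 6.0                      # every vertex has degree 2*len(Q0) = 6
H1 = (2.25 * np.eye(N) - L_sym) @ (3 * np.eye(N) + L_sym)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, N)
y = H1 @ x                                       # input of the inverse filtering procedure

step = 0.25                  # roughly 2/(min+max) of h_1 on [0, 2], since 1.25 <= h_1(t) <= 6.75
xm = np.zeros(N)             # zero initial value
for m in range(1, 6):
    xm = xm - step * (H1 @ xm - y)
    print(m, np.linalg.norm(xm - x) / np.linalg.norm(x))     # relative error E(m)
\end{verbatim}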
We observe that the CIPA and the JPA($\alpha, \beta$) with appropriate selection of the parameters $\alpha$ and $\beta$ have better performance on the implementation of the inverse filtering procedure than the ICPA in \cite{ncjs22} does, and they perform much better if we select approximation polynomials with higher order $M$. As the filter ${\bf H}_1$ is a positive definite matrix, the inverse filtering procedure ${\bf y}\longmapsto {\bf H}_1^{-1}{\bf y}$ can also be implemented by the gradient descent method with optimal step size \eqref{gd0.def}, GD0 for abbreviation \cite{Shi15}. Shown in the sixth row of Table \ref{CirculantGraphICPA.Table}, which is copied from \cite[Table 1]{ncjs22}, is the relative iteration error of the GD0 method to implement the inverse filtering ${\bf y}\longmapsto {\bf H}_1^{-1}{\bf y}$. It indicates that the CIPA and the JPA($\alpha, \beta$) with appropriate selection of the parameters $\alpha$ and $\beta$ perform better in implementing the inverse filtering procedure than the gradient descent method does. \begin{table}[t] \centering \caption{ Average relative iteration errors ${\rm E}(m)$ to implement the inverse filtering ${\bf y}\longmapsto {\bf H}_1^{-1} {\bf y}$ on the circulant graph ${\mathcal C}(1000, Q_0)$ via polynomial approximation algorithms and the gradient descent method with zero initial value. } \label{CirculantGraphICPA.Table} \begin{tabular} {|c|c|c|c|c|c|} \hline \hline \backslashbox{Alg.} {Iter. $m$} & 1 & 2 & 3 & 4 & 5 \\ \hline \hline \multicolumn{6}{c}{\multirow{1}{*}{$M=0$}}\\ \hline \hline JPA(-${1}/{2}$, -${1}/{2}$) & 0.5686 & 0.4318 & 0.3752 & 0.3521 & 0.3441\\ \hline JPA(${1}/{2}$, ${1}/{2}$) & 0.3007 & 0.1307 & 0.0677 & 0.0379 & 0.0219 \\ \hline JPA(${1}/{2}$,-${1}/{2}$)& 0.2298 & 0.0955& 0.0452 & 0.0223 & 0.0113 \\ \hline JPA(0,-${1}/{2}$)& 0.2296 & 0.0833 & 0.0337 & 0.0141 & 0.0060 \\ \hline CIPA & 0.2189 & 0.0822 & 0.0347 & 0.0154 & 0.0070 \\ \hline GD0 & 0.2350 & 0.0856 & 0.0349 & 0.0147 & 0.0063\\ \hline \hline \multicolumn{6}{c}{\multirow{1}{*}{$M=1$}}\\ \hline \hline JPA(-${1}/{2}$, -${1}/{2}$) & 0.4494 & 0.2191 & 0.1103 & 0.0566 & 0.0295 \\ \hline JPA(${1}/{2}$, ${1}/{2}$) & 0.2056 & 0.0769 & 0.0390 & 0.0213 & 0.0119\\ \hline JPA(${1}/{2}$, -${1}/{2}$) & 0.1624 & 0.0297 & 0.0056 & 0.0011 & 0.0002 \\ \hline JPA(0, -${1}/{2}$) & 0.2580 & 0.0754 & 0.0225 & 0.0068 & 0.0021 \\ \hline CIPA &0.2994 & 0.1010 & 0.0349 & 0.0122 & 0.0043 \\ \hline \hline \multicolumn{6}{c}{\multirow{1}{*}{$M=2$}}\\ \hline \hline JPA(-${1}/{2}$, -${1}/{2}$) & 0.1860 & 0.0412 & 0.0098 & 0.0024 & 0.0006\\ \hline JPA(${1}/{2}$, ${1}/{2}$) & 0.1079 & 0.0271 & 0.0093 & 0.0034 & 0.0012\\ \hline JPA(${1}/{2}$, -${1}/{2}$) & 0.0603 & 0.0056 & 0.0006 & 0.0001 & 0.0000 \\ \hline JPA(0, -${1}/{2}$) & 0.0964 & 0.0123 & 0.0017 & 0.0003& 0.0000 \\ \hline CIPA & 0.1173 & 0.0193 & 0.0035 & 0.0007 & 0.0001 \\ \hline\hline \multicolumn{6}{c}{\multirow{1}{*}{$M=3$}}\\ \hline \hline JPA(-${1}/{2}$, -${1}/{2}$) & 0.0979 & 0.0113 & 0.0014 & 0.0002 & 0.0000 \\ \hline JPA(${1}/{2}$, ${1}/{2}$) & 0.0581 & 0.0096 & 0.0022 & 0.0005 & 0.0001 \\ \hline JPA(${1}/{2}$, -${1}/{2}$) & 0.0424 & 0.0021 & 0.0001 & 0.0000 & 0.0000 \\ \hline JPA(0, -${1}/{2}$) & 0.0636 & 0.0046 & 0.0003 & 0.0000& 0.0000 \\ \hline CIPA & 0.0761 & 0.0067 & 0.0006 & 0.0001 & 0.0000 \\ \hline \hline \end{tabular} \end{table} \subsection{Denoising stationary signals on random geometric graphs} \label{randomsignal.demo} Let ${\bf L}^{\rm sym}$ be the normalized Laplacian on the random geometric graph ${\mathcal G}_N$ with $N=256$.
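A minimal construction of such a random geometric graph and its symmetric normalized Laplacian is sketched below; it is written only for illustration, the seed is arbitrary, and isolated vertices (which may occur with small probability) are simply kept with a zero row in the adjacency matrix.
\begin{verbatim}
# Minimal sketch of the random geometric graph G_N: N points uniform on
# [0,1]^2, an edge whenever the distance is at most sqrt(2/N), and the
# symmetric normalized Laplacian L^sym = I - D^{-1/2} A D^{-1/2}.
import numpy as np

rng = np.random.default_rng(4)
N = 256
pts = rng.uniform(0, 1, (N, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = ((dist <= np.sqrt(2.0 / N)) & (dist > 0)).astype(float)

deg = A.sum(axis=1)
d_isqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
L_sym = np.eye(N) - d_isqrt[:, None] * A * d_isqrt[None, :]
ev = np.linalg.eigvalsh(L_sym)
print(ev.min(), ev.max())       # spectrum contained in [0, 2]
\end{verbatim}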
In the simulations of this subsection, we consider stationary signals ${\bf x}$ on the random geometric graph ${\mathcal G}_{256}$ with correlation matrix ${\mathbb E} {\bf x} {\bf x}^T={\bf I}+{\bf L}^{\rm sym}/2$, and noisy observations ${\bf y}={\bf x}+\pmb \epsilon$ being the inputs ${\bf x}$ corrupted by additive noise $\pmb \epsilon$ which is independent of the input signal ${\bf x}$ and whose entries are i.i.d. random variables with normal distribution ${\mathcal N}(0, \varepsilon^2)$ for some $\varepsilon>0$, and we select the uniform probability measure $P$ in the stochastic mean squared error \eqref{mse.objectivefunction}. In other words, we consider the Wiener filtering procedure \eqref{stochasticestimator} in the scenario that $${\bf H}={\bf I}, {\bf R}={\bf I}+{\bf L}^{\rm sym}/2, {\bf P}=N^{-1} {\bf I}\ {\rm and} \ {\bf G}=\varepsilon^2 {\bf I}.$$ For the input signals ${\bf x}$ in our simulations, one may verify ${\mathbb E}\|{\bf x}\|_2^2= {\rm tr} ({\mathbb E}({\bf x}{\bf x}^T))=3N/2$, ${\mathbb E}\|{\pmb \epsilon}\|_2^2 =N \varepsilon^2$, and \begin{eqnarray*} {\mathbb E}{\bf x}^T {\bf L}^{\rm sym}{\bf x} & \hskip-0.08in = & \hskip-0.08in {\rm tr}\big( {\bf L}^{\rm sym} ({\bf I}+{\bf L}^{\rm sym}/2)\big) \in (3N/2, 2N]. \end{eqnarray*} Based on the above observations, we use ${\bf K}=\varepsilon^2 {\bf L}^{\rm sym}/(4N)$ as the regularization matrix to balance the fidelity and regularization terms in \eqref{mse.objectivefunction}. Therefore \begin{eqnarray*} {\bf x}_{\rm W0} & \hskip-0.08in := & \hskip-0.08in {\bf W}_{\rm mse}^0 {\bf y}={\bf R} \big({\bf R}+{\bf G}\big)^{-1} {\bf y}\nonumber\\ & \hskip-0.08in = & \hskip-0.08in ({\bf I}+{\bf L}^{\rm sym}/2)\big( (1+\varepsilon^2){\bf I}+ {\bf L}^{\rm sym}/2\big)^{-1}{\bf y}\end{eqnarray*} and \begin{eqnarray*} {\bf x}_{\rm W} & \hskip-0.08in := & \hskip-0.08in {\bf W}_{\rm mse} {\bf y} = ({\bf P}+{\bf K})^{-1}{\bf P} {\bf R} \big({\bf R}+{\bf G}\big)^{-1} {\bf y}\nonumber\\ & \hskip-0.08in = & \hskip-0.08in ({\bf I}+\varepsilon^2 {\bf L}^{\rm sym}/4)^{-1} {\bf x}_{\rm W0} \end{eqnarray*} are the signals reconstructed from the noisy observation ${\bf y}$ via the Wiener procedures \eqref{stochasticrecovery.eqa} and \eqref{wienerfiltermse.eq2} without/with regularization taken into account respectively. Define the input signal-to-noise ratio (ISNR) and the output signal-to-noise ratio (SNR) by $$ {\rm ISNR}= -20 \log_{10} \frac{\|{\pmb \epsilon}\|_2}{\|{\bf x}\|_2}\ {\rm and} \ {\rm SNR}=-20 \log_{10} \frac{\|\widehat{\bf x}-{\bf x}\|_2}{\|{\bf x}\|_2} $$ respectively, where $\widehat{\bf x}$ is either the reconstructed signal ${\bf x}_{\rm W0}$ via the Wiener procedure \eqref{stochasticrecovery.eqa} without regularization, or the reconstructed signal ${\bf x}_{\rm W}$ via the Wiener procedure \eqref{wienerfiltermse.eq2} with regularization, or the reconstructed signal \begin{eqnarray}\label{tik.def} {\bf x}_{\rm Tik} & \hskip-0.08in = & \hskip-0.08in ({\bf P}+{\bf K})^{-1}{\bf P} {\bf y}= ({\bf I}+\varepsilon^2 {\bf L}^{\rm sym}/4)^{-1} {\bf y}\nonumber\\ &\hskip-0.08in = & \hskip-0.08in \arg\min_{\bf x} \ ({\bf x}-{\bf y})^T {\bf P} ({\bf x}-{\bf y})+ {\bf x}^T {\bf K} {\bf x} \end{eqnarray} via the Tikhonov regularization approach. It is observed from Figure \ref{denoise_random.fig} that the Wiener procedure without regularization has the best performance on denoising stationary signals.
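A compact dense-matrix sketch of this comparison is given below; it reproduces only the skeleton of the simulation (one graph, one noise realization), and the seed, the noise level and the graph realization are assumptions made for illustration.
\begin{verbatim}
# Sketch of the denoising comparison: x_W0 (Wiener, no regularization),
# x_W (Wiener with regularization) and x_Tik (Tikhonov), with H = I,
# R = I + L/2, P = I/N, G = eps^2 I and K = eps^2 L / (4N) as above.
import numpy as np

rng = np.random.default_rng(5)
N, eps = 256, 1.0
pts = rng.uniform(0, 1, (N, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = ((dist <= np.sqrt(2.0 / N)) & (dist > 0)).astype(float)
deg = np.maximum(A.sum(axis=1), 1e-12)
L = np.eye(N) - A / np.sqrt(np.outer(deg, deg))          # symmetric normalized Laplacian

lam, U = np.linalg.eigh(np.eye(N) + L / 2)               # R = I + L/2
x = U @ (np.sqrt(lam) * (U.T @ rng.standard_normal(N)))  # stationary signal, E xx^T = R
y = x + eps * rng.standard_normal(N)                     # noisy observation

x_W0 = (np.eye(N) + L / 2) @ np.linalg.solve((1 + eps**2) * np.eye(N) + L / 2, y)
x_W = np.linalg.solve(np.eye(N) + eps**2 * L / 4, x_W0)
x_Tik = np.linalg.solve(np.eye(N) + eps**2 * L / 4, y)

snr = lambda xh: -20 * np.log10(np.linalg.norm(xh - x) / np.linalg.norm(x))
print(snr(x_W0), snr(x_W), snr(x_Tik))
\end{verbatim}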
\begin{figure}[t] \centering \includegraphics[width=43mm, height=30mm]{randomsignal.jpg} \includegraphics[width=43mm, height=30mm] {snr_random.jpg} \\ \includegraphics[width=42mm, height=30mm]{piecewise.jpg} \includegraphics[width=43mm, height=30mm] {snr_piecewise.jpg} \caption{ Plotted are the stationary signal ${\bf x}$ with correlation matrix ${\bf I}+{\bf L}^{\rm sym}/2$ (top left), the four-strip signal ${\bf x}_{\rm pp}$ in \cite{jiang19} (bottom left), and the averages of the input signal-to-noise ratio ${\rm ISNR}$ and the output signal-to-noise ratio ${\rm SNR}$ of denoising the stationary signals ${\bf x}$ (top right) and the four-strip signal ${\bf x}_{\rm pp}$ (bottom right) via the Wiener procedures without/with regularization and the Tikhonov regularization approach over 1000 trials for different noise levels $0.5 \le \varepsilon\le 2$. } \label{denoise_random.fig} \end{figure} Graph signals ${\bf x}$ in many applications exhibit some smoothness, which is widely measured by the ratio ${\bf x}^T {\bf L}^{\rm sym} {\bf x}/ \|{\bf x}\|_2^2$. Observe that the stationary signals ${\bf x}$ in the above simulations do not have good regularity, as ${\mathbb E}{\bf x}^T {\bf L}^{\rm sym} {\bf x}/ {\mathbb E}\|{\bf x}\|_2^2\in (1, 4/3]$. We believe that this could be the reason why the Wiener procedure with regularization performs slightly worse on denoising than the Wiener procedure without regularization does. Let ${\bf x}_{\rm pp}$ be the four-strip signal on the random geometric graph that takes the polynomial value $0.5-2 c_x$ on the first and third diagonal strips and $0.5+c_x^2+c_y^2$ on the second and fourth strips respectively, where $(c_x,c_y)$ are the coordinates of the vertices \cite[Fig. 2]{jiang19}. We also perform simulations on denoising the four-strip signal ${\bf x}_{\rm pp}$, i.e., we apply the same Tikhonov regularization and Wiener procedures with/without regularization except that the stationary signal ${\bf x}$ is replaced by ${\bf x}_{\rm pp}$, see Figure \ref{denoise_random.fig}. This indicates that the Wiener procedure with regularization may have the best performance on denoising signals with certain regularity. \subsection{Denoising wide-band stationary signals on random geometric graphs} \label{denoisingwideband.demo} In this subsection, we consider denoising wide-band stationary signals ${\bf x}$ in \eqref{wiener2.eq4} on a random geometric graph ${\cal G}_{256}$ with $${\mathbb E}{\bf x}= c{\bf 1}\ \ {\rm and}\ \ {\mathbb E} ({\bf x}-{\mathbb E}{\bf x}) ({\bf x}-{\mathbb E}{\bf x}) ^T={\bf I}+{\bf L}^{\rm sym}/2,$$ where $c\ne 0$ is not necessarily given in advance. The observations ${\bf y}={\bf x}+\pmb \epsilon$ are the inputs ${\bf x}$ corrupted by additive noise $\pmb \epsilon$ which is independent of the input signal ${\bf x}$ and whose covariance matrix is ${\bf G}=\varepsilon^2{\bf L}^{\rm sym}$ for some $\varepsilon>0$, and we select the uniform probability measure $P$ in the stochastic mean squared error. In other words, we consider the Wiener filtering procedure \eqref{stochasticestimator} in the scenario that $${\bf H}={\bf I}, \widetilde {\bf R}={\bf I}+{\bf L}^{\rm sym}/2, {\bf P}=N^{-1} {\bf I} \ {\rm and} \ {\bf G}=\varepsilon^2 {\bf L}^{\rm sym}.$$ Similar to the simulations in Section \ref{randomsignal.demo}, we test the performance of the Wiener procedures with/without regularization and the Tikhonov regularization on denoising wide-band stationary signals.
From the simulation results presented in Figure \ref{stationary.fig}, we see that the Wiener procedure with regularization performs slightly worse on denoising than the Wiener procedure without regularization does, but both perform better than the Tikhonov regularization approach does. \begin{figure}[t] \centering \includegraphics[width=43mm, height=30mm] {widebandC1.jpg} \includegraphics[width=43mm, height=30mm] {widebandC5.jpg} \caption{Plotted are the averages of the input signal-to-noise ratio ${\rm ISNR}$ and the output signal-to-noise ratio ${\rm SNR}$ obtained by the Wiener procedures without/with regularization and the Tikhonov regularization approach over 1000 trials for different noise levels $0.5\le \varepsilon\le 2$, in which the original signal is wide-band stationary with $c=1$ (left) and $c=5$ (right) on the random geometric graph ${\cal G}_{256}$. } \label{stationary.fig} \end{figure} \appendices \setcounter{equation}{0} \setcounter{thm}{0} \setcounter{section}{0} \setcounter{subsection}{0} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\thesubsection}{A.\arabic{subsection}} \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \section{Proof of Theorem \ref{wienerfiltermse.thm}} \label{wienerfiltermsd.prof} By \eqref{wiener2.eq1}, \eqref{wiener2.eq2} and \eqref{wiener2.eq3}, we have \begin{equation} \label{wienerfiltermse.thmpf.eq0} {\mathbb E} {\bf y}{\bf y}^T= {\bf H} {\bf R} {\bf H}^T+{\bf G} \ \ {\rm and} \ \ {\mathbb E} {\bf y}{\bf x}^T={\bf H}{\bf R}. \end{equation} By \eqref{wiener2.eq1}, \eqref{mse.objectivefunction} and \eqref{wienerfiltermse.thmpf.eq0}, we obtain \begin{eqnarray} \label{wienerfiltermse.thmpf.eq1} & \hskip-0.08in & \hskip-0.08in F_{{\rm mse}, P, {\bf K}}({\bf W})\nonumber\\ & \hskip-0.08in = & \hskip-0.08in {\rm tr}\left( {\bf P}\, {\mathbb E}\big( ({\bf W}{\bf y}-{\bf x})({\bf W}{\bf y}-{\bf x})^T\big) \right) +{\rm tr}\big( {\bf W}^T {\bf K}{\bf W}{\mathbb E}({\bf y} {\bf y}^T)\big) \nonumber\\ & \hskip-0.08in = & \hskip-0.08in {\rm tr} \big({\bf W}^T ({\bf P}+{\bf K}) {\bf W} ({\bf H} {\bf R} {\bf H}^T+{\bf G})\big) +{\rm tr} ({\bf P} {\bf R}) \nonumber\\ & & \hskip-0.08in - {\rm tr} ( {\bf H}{\bf R}{\bf P}{\bf W}) - {\rm tr} ({\bf W}^T {\bf P}{\bf R} {\bf H}^T ). \end{eqnarray} Substituting ${\bf W}$ in \eqref{wienerfiltermse.thmpf.eq1} by ${\bf W}_{\rm mse}$ proves \eqref{wienerfiltermse.eq1}.
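The identity \eqref{wienerfiltermse.thmpf.eq1} can be checked by simulation on a small random model; the sketch below compares a Monte Carlo estimate of ${\mathbb E}\big[({\bf W}{\bf y}-{\bf x})^T{\bf P}({\bf W}{\bf y}-{\bf x})+{\bf y}^T{\bf W}^T{\bf K}{\bf W}{\bf y}\big]$ with the closed-form trace expression. The Gaussian model and all matrix sizes are assumptions made only for this illustration.
\begin{verbatim}
# Monte Carlo check of the trace identity above: the expectation of
# (Wy - x)^T P (Wy - x) + y^T W^T K W y is compared with the closed-form
# trace expression; the small random model is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(6)
N = 6
P = np.diag(rng.uniform(0.5, 1.5, N))
B = rng.standard_normal((N, N)); K = 0.1 * B @ B.T
H = rng.standard_normal((N, N))
C = rng.standard_normal((N, N)); R = C @ C.T
G = 0.3 * np.eye(N)
W = rng.standard_normal((N, N))

# Monte Carlo estimate
T = 100000
X = rng.multivariate_normal(np.zeros(N), R, size=T)
Eps = rng.multivariate_normal(np.zeros(N), G, size=T)
Y = X @ H.T + Eps
D = Y @ W.T - X
WY = Y @ W.T
F_mc = np.mean(np.einsum('ti,ij,tj->t', D, P, D) + np.einsum('ti,ij,tj->t', WY, K, WY))

# closed form from the displayed computation
S = H @ R @ H.T + G
F_tr = (np.trace(W.T @ (P + K) @ W @ S) + np.trace(P @ R)
        - np.trace(H @ R @ P @ W) - np.trace(W.T @ P @ R @ H.T))
print(F_mc, F_tr)     # close up to Monte Carlo error
\end{verbatim}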
By \eqref{wienerfiltermse.eq2} and \eqref{wienerfiltermse.thmpf.eq1}, we obtain \begin{eqnarray} \label{wienerfiltermse.thmpf.eq2} & \hskip-0.08in & F_{{\rm mse}, P, {\bf K}}({\bf W}) \nonumber\\ & \hskip-0.08in = & \hskip-0.08in F_{{\rm mse}, P, {\bf K}}({\bf W}_{\rm mse}) +{\rm tr} \big({\bf V}^T ({\bf P}+{\bf K}) {\bf V} ({\bf H} {\bf R} {\bf H}^T+{\bf G})\big) \nonumber\\ & & + {\rm tr} \big({\bf V}^T ({\bf P}+{\bf K}) {\bf W}_{\rm mse} ({\bf H} {\bf R} {\bf H}^T+{\bf G})-{\bf V}^T {\bf P}{\bf R} {\bf H}^T\big) \nonumber\\ & & + {\rm tr} \big({\bf W}_{\rm mse}^T ({\bf P}+{\bf K}) {\bf V} ({\bf H} {\bf R} {\bf H}^T+{\bf G})- {\bf H}{\bf R}{\bf P}{\bf V}\big) \nonumber\\ & \hskip-0.08in = & \hskip-0.08in F_{{\rm mse}, P, {\bf K}}({\bf W}_{\rm mse}) +{\rm tr} \big(({\bf H} {\bf R} {\bf H}^T+{\bf G})^{1/2}\nonumber\\ & & \qquad\qquad \times {\bf V}^T ({\bf P}+{\bf K}) {\bf V} ({\bf H} {\bf R} {\bf H}^T+{\bf G})^{1/2}\big) \nonumber\\ & \hskip-0.08in \ge & \hskip-0.08in F_{{\rm mse}, P, {\bf K}}({\bf W}_{\rm mse}), \end{eqnarray} where ${\bf V}={\bf W}-{\bf W}_{\rm mse}$, the first and second equalities follow from \eqref{wienerfiltermse.thmpf.eq1} and \eqref{wienerfiltermse.eq2} respectively, and the inequality holds as $({\bf H} {\bf R} {\bf H}^T+{\bf G})^{1/2}{\bf V}^T ({\bf P}+{\bf K}) {\bf V} ({\bf H} {\bf R} {\bf H}^T+{\bf G})^{1/2}$ is positive semidefinite for all matrices ${\bf V}$. This proves that ${\bf W}_{\rm mse}$ is a minimizer of the minimization problem $\min_{\bf W} F_{{\rm mse}, P, {\bf K}}({\bf W})$. The conclusion that ${\bf W}_{\rm mse}$ is the unique minimizer of the minimization problem $\min_{\bf W} F_{{\rm mse}, P, {\bf K}}({\bf W})$ follows from \eqref{wienerfiltermse.thmpf.eq2} and the assumptions that ${\bf P}+{\bf K}$ and ${\bf H} {\bf R} {\bf H}^T+{\bf G}$ are strictly positive definite. \section{Proof of Theorem \ref{wienerfilterworsecase.thm}} \label{wienerfilterworsecase.thm.pfappendix} Define the worst-case mean squared error of a reconstruction vector ${\bf w}$ with respect to a given unit vector ${\bf u}$ by \begin{equation} \label{wienerfilterworsecase.thm.pfeq1} f_{{\rm wmse}, {\bf u}}({\bf w})=\max_{\|{\bf x}\|_2\le \delta_0} {\mathbb E} | {\bf w}^T {\bf y}-{\bf u}^T {\bf x}|^2\end{equation} and set \begin{equation} \label{wienerfilterworsecase.thm.pfeq2} {\bf w}_{{\rm wmse}, {\bf u}}= {\bf W}_{{\rm wmse}}^T {\bf u}. \end{equation} By direct computation, we have \begin{equation} \label{wienerfilterworsecase.thm.pfeq1+} F_{{\rm wmse}, P}({\bf W})=\sum_{i\in V} p(i) f_{{\rm wmse}, {\bf e}_i}( {\bf W}^T {\bf e}_i), \end{equation} where ${\bf e}_i, i\in V$, are the delta signals taking value one at the vertex $i$ and zero at all other vertices. Then it suffices to show that ${\bf w}_{{\rm wmse}, {\bf u}}$ is the optimal reconstruction vector with respect to the measurement $f_{{\rm wmse}, {\bf u}}({\bf w})$, i.e., \begin{equation} \label{wienerfilterworsecase.thm.pfeq3} {\bf w}_{{\rm wmse}, {\bf u}}=\arg\min_{\bf w} f_{{\rm wmse}, {\bf u}}({\bf w}).
\end{equation} By \eqref{wiener1.eqc}, \eqref{wiener1.eqd} and the assumption $\|{\bf u}\|_2=1$, we have \begin{eqnarray*} \label{wienerfilterworsecase.thm.pfeq4} \hskip-0.18in f_{{\rm wmse}, {\bf u}}({\bf w}) & \hskip-0.08in = & \hskip-0.08in \max_{\|{\bf x}\|_2\le \delta_0} {\mathbb E} | ({\bf w}^T {\bf H}-{\bf u}^T) {\bf x} + {\bf w}^T {\pmb \epsilon}|^2\nonumber \\ \hskip-0.18in &\hskip-0.08in = & \hskip-0.08in \max_{\|{\bf x}\|_2\le \delta_0}\big| ({\bf w}^T {\bf H}-{\bf u}^T) {\bf x} \big|^2+ {\mathbb E} | {\bf w}^T {\pmb \epsilon}|^2\nonumber\\ \hskip-0.18in & \hskip-0.08in = & \hskip-0.08in \delta_0^2 ({\bf w}^T {\bf H}-{\bf u}^T) ({\bf H}^T {\bf w}-{\bf u}) + {\bf w}^T {\bf G}{\bf w} \nonumber\\ \hskip-0.18in & \hskip-0.08in = & \hskip-0.08in {\bf w}^T \big(\delta_0^2 {\bf H}{\bf H}^T+{\bf G}\big) {\bf w}-2 \delta_0^2 {\bf w}^T {\bf H} {\bf u}+\delta_0^2. \end{eqnarray*} Therefore \begin{eqnarray} \label{wienerfilterworsecase.thm.pfeq5} \hskip-0.18in f_{{\rm wmse}, {\bf u}}({\bf w}) & \hskip-0.08in = &\hskip-0.08in f_{{\rm wmse}, {\bf u}}({\bf w}_{{\rm wmse}, {\bf u}})+ {\bf v}^T \big(\delta_0^2 {\bf H}{\bf H}^T+{\bf G}\big) {\bf v}\nonumber\\ \hskip-0.18in & \hskip-0.08in & + 2 {\bf v}^T \Big( \big(\delta_0^2 {\bf H}{\bf H}^T+{\bf G}\big) {\bf w}_{{\rm wmse}, {\bf u}}-\delta_0^2{\bf H} {\bf u}\Big)\nonumber\\ \hskip-0.18in & \hskip-0.08in = &\hskip-0.08in f_{{\rm wmse}, {\bf u}}({\bf w}_{{\rm wmse}, {\bf u}})+ {\bf v}^T \big(\delta_0^2 {\bf H}{\bf H}^T+{\bf G}\big) {\bf v}\nonumber\\ \hskip-0.18in & \hskip-0.08in \ge &\hskip-0.08in f_{{\rm wmse}, {\bf u}}({\bf w}_{{\rm wmse}, {\bf u}}), \end{eqnarray} where ${\bf v}={\bf w}-{\bf w}_{{\rm wmse}, {\bf u}}$ and the last inequality holds as $\delta_0^2 {\bf H}{\bf H}^T+{\bf G}$ is strictly positive definite. This proves \eqref{wienerfilterworsecase.thm.pfeq3} and hence that ${\bf W}_{\rm wmse}$ is a minimizer of the minimization problem \eqref{wcms.minimization}, i.e., the inequality in \eqref{wienerfilterworsecase.eq1} holds. By \eqref{wienerfilterworsecase.thm.pfeq2}, \eqref{wienerfilterworsecase.thm.pfeq1+} and \eqref{wienerfilterworsecase.thm.pfeq3}, we have \begin{eqnarray*} &\hskip-0.08in & \hskip-0.08in F_{{\rm wmse}, P}({\bf W}_{\rm wmse}) = \sum_{i\in V} p(i) f_{{\rm wmse}, {\bf e}_i}({\bf w}_{{\rm wmse}, {\bf e}_i})\nonumber\\ & \hskip-0.08in = & \hskip-0.08in \sum_{i\in V} p(i) \big(-\delta_0^4 {\bf e}_i^T {\bf H}^T (\delta_0^2 {\bf H} {\bf H}^T +{\bf G})^{-1} {\bf H} {\bf e}_i+\delta_0^2\big) \nonumber\\ & \hskip-0.08in = & \hskip-0.08in \delta_0^2- \delta_0^4 {\rm tr} \big( {\bf P} {\bf H}^T (\delta_0^2 {\bf H} {\bf H}^T +{\bf G})^{-1} {\bf H}\big)\nonumber\\ & \hskip-0.08in = & \hskip-0.08in \delta_0^2- \delta_0^4 {\rm tr} \big( (\delta_0^2 {\bf H} {\bf H}^T +{\bf G})^{-1} {\bf H} {\bf P} {\bf H}^T \big). \end{eqnarray*} This proves the equality in \eqref{wienerfilterworsecase.eq1} and hence completes the proof of the conclusion \eqref{wienerfilterworsecase.eq1}. The uniqueness of the solution to the minimization problem \eqref{wcms.minimization} follows from \eqref{wienerfilterworsecase.thm.pfeq1+} and \eqref{wienerfilterworsecase.thm.pfeq5}, and the strict positive definiteness of the matrices $\bf P$ and $\delta_0^2 {\bf H}{\bf H}^T+{\bf G}$. \medskip {\bf Acknowledgement}\ The authors would like to thank Professors Xin Li, Zuhair Nashed, Paul Nevai and Yuan Xu, and Dr. Nazar Emirov for their help during the preparation of this manuscript. \bibliographystyle{ieeetr} \begin{thebibliography}{99} \bibitem{Wass94} S. Wasserman and K.
Faust, {\em Social Network Analysis: Methods and Applications}, Cambridge University Press, 1994. \bibitem{chong2003} C. Chong and S. Kumar, ``Sensor networks: evolution, opportunities, and challenges," {\em Proc. IEEE}, vol. 91, pp. 1247-1256, Aug. 2003. \bibitem{mfa07} G. Mao, B. Fidan, and B. D. O. Anderson, ``Wireless sensor network localization techniques," {\em Comput. Netw.}, vol. 51, no. 10, pp. 2529-2553, July 2007. \bibitem{Yick08} J. Yick, B. Mukherjee, and D. Ghosal, ``Wireless sensor network survey," {\em Comput. Netw.}, vol. 52, no. 12, pp. 2292-2330, Aug. 2008. \bibitem{Motee17} N. Motee and Q. Sun, ``Sparsity and spatial localization measures for spatially distributed systems," {\em SIAM J. Control Optim.}, vol. 55, no. 1, pp. 200-235, Jan. 2017. \bibitem{Hebner17} R. Hebner, ``The power grid in 2030," {\em IEEE Spectrum}, vol. 54, no. 4, pp. 50-55, Apr. 2017. \bibitem{Cheng19} C. Cheng, Y. Jiang, and Q. Sun, ``Spatially distributed sampling and reconstruction," {\em Appl. Comput. Harmon. Anal.}, vol. 47, no. 1, pp. 109-148, July 2019. \bibitem{sandryhaila13} A. Sandryhaila and J. M. F. Moura, ``Discrete signal processing on graphs," {\em IEEE Trans. Signal Process.}, vol. 61, no. 7, pp. 1644-1656, Apr. 2013. \bibitem{shuman13} D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, ``The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," {\em IEEE Signal Process. Mag.}, vol. 30, no. 3, pp. 83-98, May 2013. \bibitem{sandryhaila14} A. Sandryhaila and J. M. F. Moura, ``Discrete signal processing on graphs: Frequency analysis," {\em IEEE Trans. Signal Process.}, vol. 62, no. 12, pp. 3042-3054, June 2014. \bibitem{bronstein17} M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, ``Geometric deep learning: Going beyond Euclidean data,'' {\em IEEE Signal Process. Mag.}, vol. 34, no. 4, pp. 18-42, 2017. \bibitem{Ortega18} A. Ortega, P. Frossard, J. Kova{\v{c}}evi{\'c}, J. M. F. Moura, and P. Vandergheynst, ``Graph signal processing: Overview, challenges, and applications," {\em Proc. IEEE}, vol. 106, no. 5, pp. 808-828, May 2018. \bibitem{stankovic2019introduction} L. Stankovi{\'c}, M. Dakovi{\'c}, and E. Sejdi{\'c}, ``{Introduction to graph signal processing}," In {\em Vertex-Frequency Analysis of Graph Signals}, Springer, pp. 3-108, 2019. \bibitem{dong20} X. Dong, D. Thanou, L. Toni, M. Bronstein, and P. Frossard, ``Graph signal processing for machine learning: A review and new perspectives," {\em IEEE Signal Process. Mag.}, vol. 37, no. 6, pp. 117-127, 2020. \bibitem{ncjs22} N. Emirov, C. Cheng, J. Jiang, and Q. Sun, ``Polynomial graph filter of multiple shifts and distributed implementation of inverse filtering," {\em Sampl. Theory Signal Process. Data Anal.}, vol. 20, Article No. 2, 2022. \bibitem{segarra17} S. Segarra, A. G. Marques, and A. Ribeiro, ``Optimal graph-filter design and applications to distributed linear network operators," {\em IEEE Trans. Signal Process.}, vol. 65, no. 15, pp. 4117-4131, Aug. 2017. \bibitem{Coutino17} A. Gavili and X. Zhang, ``On the shift operator, graph frequency, and optimal filtering in graph signal processing," {\em IEEE Trans. Signal Process.}, vol. 65, no. 23, pp. 6303-6318, Dec. 2017. \bibitem{jiang19} J. Jiang, C. Cheng, and Q. Sun, ``Nonsubsampled graph filter banks: Theory and distributed algorithms," {\em IEEE Trans. Signal Process.}, vol. 67, no. 15, pp. 3938-3953, Aug. 2019. \bibitem{horn1990matrix} R. A. Horn and C. 
R. Johnson, {\em Matrix Analysis}, Cambridge University Press, 2012. \bibitem{Leus17} E. Isufi, A. Loukas, A. Simonetto, and G. Leus, ``Autoregressive moving average graph filtering," {\em IEEE Trans. Signal Process.}, vol. 65, no. 2, pp. 274-288, Jan. 2017. \bibitem{Waheed18} W. Waheed and D. B. H. Tay, ``Graph polynomial filter for signal denoising," {\em IET Signal Process.}, vol. 12, no. 3, pp. 301-309, Apr. 2018. \bibitem{Lu18} K. Lu, A. Ortega, D. Mukherjee, and Y. Chen, ``Efficient rate-distortion approximation and transform type selection using Laplacian operators," in {\em 2018 Picture Coding Symposium (PCS)}, San Francisco, CA, June 2018, pp. 76-80. \bibitem{shuman18} D. I. Shuman, P. Vandergheynst, D. Kressner, and P. Frossard, ``Distributed signal processing via Chebyshev polynomial approximation,'' {\em IEEE Trans. Signal Inf. Process. Netw.}, vol. 4, no. 4, pp. 736-751, Dec. 2018. \bibitem{mario19} M. Coutino, E. Isufi, and G. Leus, ``Advances in distributed graph filtering," {\em IEEE Trans. Signal Process.}, vol. 67, no. 9, pp. 2320-2333, May 2019. \bibitem{Emirov19} C. Cheng, J. Jiang, N. Emirov, and Q. Sun, ``Iterative Chebyshev polynomial algorithm for signal denoising on graphs," in {\em Proc. 13th Int. Conf. on SampTA}, Bordeaux, France, Jul. 2019, pp. 1-5. \bibitem{David2019} J. Jiang, D. B. Tay, Q. Sun, and S. Ouyang, ``Design of nonsubsampled graph filter banks via lifting schemes," {\em IEEE Signal Process. Lett.}, vol. 27, pp. 441-445, Feb. 2020. \bibitem{siheng_inpaint15} S. Chen, A. Sandryhaila, and J. Kova{\v{c}}evi{\'c}, ``Distributed algorithm for graph signal inpainting," in {\em 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, Brisbane, QLD, Apr. 2015, pp. 3731-3735. \bibitem{Shi15} X. Shi, H. Feng, M. Zhai, T. Yang, and B. Hu, ``Infinite impulse response graph filters in wireless sensor networks," {\em IEEE Signal Process. Lett.}, vol. 22, no. 8, pp. 1113-1117, Aug. 2015. \bibitem{sihengTV15} S. Chen, A. Sandryhaila, J. M. F. Moura, and J. Kova{\v{c}}evi{\'c}, ``Signal recovery on graphs: variation minimization," {\em IEEE Trans. Signal Process.}, vol. 63, no. 17, pp. 4609-4624, Sept. 2015. \bibitem{Onuki16} M. Onuki, S. Ono, M. Yamagishi, and Y. Tanaka, ``Graph signal denoising via trilateral filter on graph spectral domain," {\em IEEE Trans. Signal Inf. Process. Netw.}, vol. 2, no. 2, pp. 137-148, June 2016. \bibitem{cheng2021} C. Cheng, N. Emirov, and Q. Sun, ``Preconditioned gradient descent algorithm for inverse filtering on spatially distributed networks," {\em IEEE Signal Process. Lett.}, vol. 27, pp. 1834-1838, Oct. 2020. \bibitem{bi2009} N. Bi, M. Z. Nashed, and Q. Sun, ``Reconstructing signals with finite rate of innovation from noisy samples," {\em Acta Appl. Math.}, vol. 107, no. 1, pp. 339-372, July 2009. \bibitem{girault2015} B. Girault, ``Stationary graph signals using an isometric graph translation,'' in {\em Proc. 23rd Eur. Signal Process. Conf.}, 2015, pp. 1516-1520. \bibitem{perraudin17} N. Perraudin and P. Vandergheynst, ``Stationary signal processing on graphs", {\em IEEE Trans. Signal Process.}, vol. 65, no. 13, pp. 3462-3477, July 2017. \bibitem{segarrat2017} S. Segarra, A. G. Marques, G. Leus, and A. Ribeiro, ``Stationary graph processes: parametric power spectral estimation," in {\em 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, New Orleans, LA, USA, pp. 4099-4103, Mar. 2017. \bibitem{yagan2020} A. C. Yagan and M. T.
Ozgen, ``Spectral graph based vertex-frequency Wiener filtering for image and graph signal denoising," {\em IEEE Trans. Signal Inf. Process.}, vol. 6, pp. 226-240, Feb. 2020. \bibitem{Shuman18} D. I. Shuman, P. Vandergheynst, D. Kressner, and P. Frossard, ``Distributed signal processing via Chebyshev polynomial approximation,'' {\em IEEE Trans. Signal Inf. Process. Netw.}, vol. 4, no. 4, pp. 736-751, Dec. 2018. \bibitem{isufi19} E. Isufi, A. Loukas, N. Perraudin, and G. Leus, ``Forecasting time series with VARMA recursions on graphs," {\em IEEE Trans. Signal Process.}, vol. 67, no. 18, pp. 4870-4885, Sept. 2019. \bibitem{Ismail2009} M. E. H. Ismail, {\em Classical and Quantum Orthogonal Polynomials in One Variable}, Cambridge University Press, Aug. 2009. \bibitem{Shen2011} J. Shen, T. Tang, and L.-L. Wang, {\em Spectral Methods: Algorithms, Analysis and Applications}, Springer, Aug. 2011. \bibitem{trefethen2013} L. N. Trefethen, {\em Approximation Theory and Approximation Practice}, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2013. \bibitem{wang2011} H. Wang and S. Xiang, ``On the convergence rates of Legendre approximation," {\em Math. Comput.}, vol. 81, no. 278, pp. 861-877, Apr. 2011. \bibitem{xiang2012} S. Xiang, ``On error bounds for orthogonal polynomial expansions and Gauss-type quadrature," {\em SIAM J. Numer. Anal.}, vol. 50, no. 3, pp. 1240-1263, 2012. \bibitem{eldar2006} Y. Eldar and M. Unser, ``Nonideal sampling and interpolation from noisy observations in shift-invariant spaces," {\em IEEE Trans. Signal Process.}, vol. 54, no. 7, pp. 2636-2651, June 2006. \bibitem{ekambaram13} V. N. Ekambaram, G. C. Fanti, B. Ayazifar, and K. Ramchandran, ``Circulant structures and graph signal processing," in {\em Proc. IEEE Int. Conf. Image Process.}, 2013, pp. 834-838. \bibitem{vnekambaram13} V. N. Ekambaram, G. C. Fanti, B. Ayazifar, and K. Ramchandran, ``Multiresolution graph signal processing via circulant structures," in {\em Proc. IEEE Digital Signal Process./Signal Process. Educ. Meeting (DSP/SPE)}, 2013, pp. 112-117. \bibitem{dragotti19a} M. S. Kotzagiannidis and P. L. Dragotti, ``Splines and wavelets on circulant graphs," {\em Appl. Comput. Harmon. Anal.}, vol. 47, no. 2, pp. 481-515, Sept. 2019. \bibitem{dragotti19} M. S. Kotzagiannidis and P. L. Dragotti, ``Sampling and reconstruction of sparse signals on circulant graphs -- an introduction to graph-FRI," {\em Appl. Comput. Harmon. Anal.}, vol. 47, no. 3, pp. 539-565, Nov. 2019. \bibitem{Nathanael2014} N. Perraudin, J. Paratte, D. Shuman, L. Martin, V. Kalofolias, P. Vandergheynst, and D. K. Hammond, ``GSPBOX: A toolbox for signal processing on graphs,'' {\em arXiv}:1408.5781, Aug. 2014. \end{thebibliography} \end{document}
2205.03965v1
http://arxiv.org/abs/2205.03965v1
Connected size Ramsey numbers of matchings versus a small path or cycle
\documentclass[11pt]{article} \usepackage{amsmath,amsfonts,amssymb,amsthm,fancyhdr,bm,mathtools,enumitem} \usepackage[hidelinks]{hyperref} \usepackage{fullpage} \usepackage[dvipsnames]{xcolor} \usepackage[capitalize]{cleveref} \setlength{\marginparwidth}{2cm} \usepackage[color=Purple!50!white]{todonotes} \usepackage[numbers, square,comma,sort&compress]{natbib} \usepackage[affil-it]{authblk} \linespread{1.3} \setlength\headheight{15pt} \setlength\headsep{10pt} \newtheorem{thm}{Theorem}[section] \newtheorem*{thm*}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{claim}[thm]{Claim} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{qu}[thm]{Question} \theoremstyle{definition} \newtheorem{Def}[thm]{Definition} \newtheorem*{rem}{Remark} \crefname{equation}{equation}{equations} \crefname{lem}{Lemma}{Lemmas} \crefname{thm}{Theorem}{Theorems} \newlist{lemenum}{enumerate}{1} \setlist[lemenum]{label=(\alph*), ref=\thelem(\alph*)} \crefalias{lemenumi}{lemma} \DeclareMathOperator\pr{Pr} \DeclareMathOperator\ext{ext} \DeclareMathOperator\ex{ex} \DeclareMathOperator\codeg{codeg} \DeclareMathOperator\bin{Bin} \newcommand\ol[1]{\overline{#1}} \newcommand\dd{\mathrm{d}} \newcommand\up[1]{^{(#1)}} \newcommand\wh[1]{\widehat{#1}} \newcommand\wt[1]{\widetilde{#1}} \newcommand\ab[1]{\lvert#1\rvert} \newcommand\inv{^{-1}} \newcommand{\flo}[1]{\lfloor #1 \rfloor} \newcommand{\cei}[1]{\lceil #1 \rceil} \newcommand{\pardiff}[2]{\mathchoice{\frac{\partial #1}{\partial #2}}{\partial #1/\partial #2}{\partial #1/\partial #2}{\partial #1/\partial #2}} \newcommand\E{\mathbb{E}} \newcommand\N{\mathbb{N}} \newcommand\cL{\mathcal{L}} \newcommand{\X}{\mathbf{X}} \newcommand{\Y}{\mathbf{Y}} \newcommand{\Z}{\mathbf{Z}} \newcommand{\A}{\mathbf{A}} \newcommand\rhat{\hat r} \newcommand\Beta{\mathrm{B}} \newcommand{\rc}{{\hat{r}}_c} \title{Connected size Ramsey numbers of matchings versus \\a small path or cycle} \author{Sha Wang$^{1,2}$,\, Ruyu Song$^{1,2}$,\, Yixin Zhang$^{1,2}$,\, Yanbo Zhang$^{1,2,}$\thanks{Corresponding author: {\tt [email protected]}. Research supported by NSFC (No.\ 11601527 and 11971011).}} \affil{ { \small {$^1$School of Mathematical Sciences, Hebei Normal University, Shijiazhuang 050024, P.R.~China}}\\ { \small {$^2$Hebei International Joint Research Center for Mathematics and Interdisciplinary Science,\\ Shijiazhuang 050024, P.R.~China}} } \date{} \begin{document} \maketitle \begin{abstract} Given two graphs $G_1, G_2$, the connected size Ramsey number $\rc(G_1,G_2)$ is defined to be the minimum number of edges of a connected graph $G$, such that for any red-blue edge colouring of $G$, there is either a red copy of $G_1$ or a blue copy of $G_2$. Concentrating on $\rc(nK_2,G_2)$ where $nK_2$ is a matching, we generalise and improve two previous results as follows. Vito, Nabila, Safitri, and Silaban obtained the exact values of $\rc(nK_2,P_3)$ for $n=2,3,4$. We determine its exact values for all positive integers $n$. Rahadjeng, Baskoro, and Assiyatun proved that $\rc(nK_2,C_4)\le 5n-1$ for $n\ge 4$. We improve the upper bound from $5n-1$ to $\lfloor (9n-1)/2 \rfloor$. In addition, we show a result which has the same flavour and has exact values: $\rc(nK_2,C_3)=4n-1$ for all positive integers $n$. \end{abstract} \section{Introduction} Graph Ramsey theory is currently among the most active areas in combinatorics. Two of the main parameters in the theory are Ramsey number and size Ramsey number, which are defined as follows. 
Given two graphs $G_1$ and $G_2$, we write $G\to (G_1,G_2)$ if for any edge colouring of $G$ such that each edge is coloured either red or blue, the graph $G$ always contains either a red copy of $G_1$ or a blue copy of $G_2$. The \emph{Ramsey number} $r(G_1,G_2)$ is the smallest possible number of vertices in a graph $G$ satisfying $G\to (G_1,G_2)$. The \emph{size Ramsey number} $\hat{r}(G_1,G_2)$ is the smallest possible number of edges in a graph $G$ satisfying $G\to (G_1,G_2)$. That is to say, $r(G_1,G_2)=\min\{|V(G)|:G\to (G_1,G_2)\}$, and $\hat{r}(G_1,G_2)=\min\{|E(G)|:G\to (G_1,G_2)\}$. The size Ramsey number was introduced by Erd\H os, Faudree, Rousseau, and Schelp \cite{erdos1978size} in 1978. Some variants have also been studied since then. In 2015, Rahadjeng, Baskoro, and Assiyatun \cite{rahadjeng2015connected} initiated the study of such a variant, called the connected size Ramsey number, by adding the condition that $G$ is connected. Formally speaking, the \emph{connected size Ramsey number} $\rc(G_1,G_2)$ is the smallest possible number of edges in a connected graph $G$ satisfying $G\to (G_1,G_2)$. It is easy to see that $\hat{r}(G_1,G_2)\le \rc(G_1,G_2)$, and equality holds when both $G_1$ and $G_2$ are connected graphs. But the latter parameter seems more tricky when $G_1$ or $G_2$ is disconnected. The previous results are mainly concerned with the connected size Ramsey numbers of a matching versus a sparse graph such as a path, a star, or a cycle. Let $nK_2$ be a matching with $n$ edges, and $P_m$ a path with $m$ vertices. Vito, Nabila, Safitri, and Silaban \cite{vito2021size} gave an upper bound for $\rc(nK_2,P_m)$, and the exact values of $\rc(nK_2,P_3)$ for $n=2,3,4$. \begin{thm}\cite{vito2021size} \label{thm:pm} For $n\ge 1$, $m\ge 3$, $\rc(nK_2,P_m)\le \begin{cases}n(m+2)/2-1, & \text{if}\ n\ \text{is even}; \\ (n+1)(m+2)/2-3, & \text{if}\ n\ \text{is odd}. \end{cases}$\\ Equality holds for $m=3$ and $1\le n\le 4$. \end{thm} If $m$ is much larger than $n$, this upper bound cannot be tight, because Erd\H os and Faudree \cite{erdos1984size} constructed a connected graph which implies $\rc(nK_2,P_m)\le m+c\sqrt{m}$, where $c$ is a constant depending on $n$. But for small $m$, the above upper bound can be tight. Our first result determines the exact values of $\rc(nK_2,P_3)$ for all positive integers $n$, which generalises the equality case of \cref{thm:pm}. \begin{thm} \label{thm:P3} For all positive integers $n$, we have $\rc(nK_2,P_3)=\flo{(5n-1)/2}$. \end{thm} Rahadjeng, Baskoro, and Assiyatun \cite{rahadjeng2017connected} proved that $\rc(nK_2,C_4)\le 5n-1$ for $n\ge 4$. This upper bound can be improved from $5n-1$ to $\lfloor (9n-1)/2 \rfloor$. \begin{thm} \label{thm:C4} For all positive integers $n$, we have $\rc(nK_2,C_4)\le \lfloor (9n-1)/2 \rfloor$. \end{thm} We now prove \cref{thm:C4} by constructing a connected graph with $\lfloor (9n-1)/2 \rfloor$ edges. Let $K_{3,3}-e$ be the graph $K_{3,3}$ with one edge deleted. It is easy to check that $K_{3,3}-e\to (2K_2,C_4)$. We use $nG$ to denote $n$ disjoint copies of $G$. If $n$ is even, then $\frac{n}{2}(K_{3,3}-e)\to (nK_2,C_4)$. The graph $\frac{n}{2}(K_{3,3}-e)$ has $n/2$ components and can be connected by adding $n/2-1$ edges. If $n$ is odd, then $\frac{n-1}{2}(K_{3,3}-e)\cup C_4\to (nK_2,C_4)$. The graph $\frac{n-1}{2}(K_{3,3}-e)\cup C_4$ has $(n+1)/2$ components and can be connected by adding $(n-1)/2$ edges. In both cases, we obtain a connected graph with $\lfloor (9n-1)/2 \rfloor$ edges and hence the upper bound follows.
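The building block $K_{3,3}-e\to (2K_2,C_4)$ can also be confirmed by an exhaustive computer check over all $2^8$ red-blue colourings of its edges; the short script below (with an arbitrary labelling of the six vertices and an arbitrary choice of the deleted edge) is included only as an illustration of such a verification.
\begin{verbatim}
# Brute-force check (for illustration only) that K_{3,3}-e arrows (2K_2, C_4):
# every red-blue colouring of its 8 edges yields a red matching of size 2 or a
# blue 4-cycle.  Vertex labels and the deleted edge are chosen arbitrarily.
from itertools import combinations, permutations, product

left, right = [0, 1, 2], [3, 4, 5]
edges = [(u, v) for u in left for v in right if (u, v) != (0, 3)]   # K_{3,3} minus one edge

def has_red_matching(red):                     # two vertex-disjoint red edges
    return any(not (set(e) & set(f)) for e, f in combinations(red, 2))

def has_blue_c4(blue):                         # a 4-cycle all of whose edges are blue
    b = {frozenset(e) for e in blue}
    for quad in permutations(range(6), 4):
        cyc = [frozenset((quad[i], quad[(i + 1) % 4])) for i in range(4)]
        if all(c in b for c in cyc):
            return True
    return False

ok = all(
    has_red_matching([e for e, c in zip(edges, col) if c == 'r'])
    or has_blue_c4([e for e, c in zip(edges, col) if c == 'b'])
    for col in product('rb', repeat=len(edges))
)
print(ok)   # True
\end{verbatim}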
It seems likely that the determination of $\rc(nK_2,C_4)$ for all $n$ is tricky. We believe the upper bound is tight and pose the following conjecture. \begin{conj} \label{conj:C4} For all positive integers $n$, $\rc(nK_2,C_4)=\lfloor (9n-1)/2 \rfloor$. \end{conj} Even though solving the above conjecture seems out of our reach, we show a result which has the same flavour and has exact values: $\rc(nK_2,C_3)=4n-1$. \begin{thm} \label{thm:C3} For all positive integers $n$, we have $\rc(nK_2,C_3)=4n-1$. \end{thm} Proofs of \cref{thm:P3} and \cref{thm:C3} will be presented in Section \ref{section2} and Section \ref{section3}, respectively. To prove the lower bounds, we need to discuss the connectivity of a graph $G$. If $G$ is not 2-connected, the basic properties of blocks and end blocks are needed, which can be found in Bondy and Murty \cite[Chap.~5.2]{bondy2008graph}. Moreover, the following terminology is used frequently in the proofs. We say $G$ has a \emph{$(G_1,G_2)$-colouring} if there is a red-blue edge colouring of $G$ such that $G$ contains neither a red $G_1$ nor a blue $G_2$. Thus, the existence of a $(G_1,G_2)$-colouring of $G$ is equivalent to $G\not\to (G_1,G_2)$. \section{A matching versus $P_3$}\label{section2} For the upper bound, we know that $C_4\to (2K_2,P_3)$. If $n$ is even, then $\frac{n}{2}C_4\to (nK_2,P_3)$. The graph $\frac{n}{2}C_4$ has $n/2$ components and can be connected by adding $n/2-1$ edges. If $n$ is odd, then $\frac{n-1}{2}C_4\cup P_3\to (nK_2,P_3)$. The graph $\frac{n-1}{2}C_4\cup P_3$ has $(n+1)/2$ components and can be connected by adding $(n-1)/2$ edges. In both cases, we obtain a connected graph with $\flo{(5n-1)/2}$ edges and hence the upper bound follows. For the lower bound, we use induction on $n$. The result is obvious for $n=1,2$. Assume that for every $k<n$ and any connected graph $G$ with at most $\flo{(5k-3)/2}$ edges, we have $G\not\to (kK_2,P_3)$. Now consider $G$ to be a connected graph with the minimum number of edges such that $G\to (nK_2,P_3)$. Thus, for any proper connected subgraph $G'$ of $G$, we have $G'\not\to (nK_2,P_3)$. Since $n\ge 3$, $G$ has at least six edges. Suppose to the contrary that $G$ has at most $\flo{(5n-3)/2}$ edges. We will deduce a contradiction and hence $\rc(nK_2,P_3)\ge \flo{(5n-1)/2}$. An edge set $E_1$ of a connected graph $G$ is called \emph{deletable} if $E_1$ satisfies the following conditions: \begin{lemenum} \item $E_1$ can be partitioned into two edge sets $E_2$ and $E_3$, where $E_2$ forms a star and $E_3$ forms a matching; \item any edge of $E(G)\setminus E_1$ is nonadjacent to $E_3$; \item the graph induced by $E(G)\setminus E_1$ is still connected. \end{lemenum} Note that for a deletable edge set $E_1$, the graph $G-E_1$ may have some isolated vertices, but all edges of $G-E_1$ belong to the same connected component. We have the following property of a deletable edge set. \begin{claim}\label{clm: deletableedge} Every deletable edge set has size at most two. \end{claim} \begin{proof} Let $E_1$ be a deletable edge set. If $|E_1|\ge 3$, then the graph induced by $E(G)\setminus E_1$ has at most $\flo{(5n-3)/2}-3$ edges and hence an $((n-1)K_2, P_3)$-colouring by induction. We then colour all edges of $E_2$ red and all edges of $E_3$ blue. This is an $(nK_2, P_3)$-colouring of $G$, a contradiction. \end{proof} A \emph{non-cut vertex} of a connected graph is a vertex whose deletion still results in a connected graph. Thus, every vertex of a nontrivial connected graph is either a cut vertex or a non-cut vertex.
Since $E_3$ in the definition of a deletable edge set can be empty, the edges incident to a non-cut vertex form a deletable edge set. We have the following direct corollary. \begin{claim}\label{clm: noncut} Every non-cut vertex has degree at most two. \end{claim} If $G$ is a 2-connected graph, then every vertex is a non-cut vertex, so by \cref{clm: noncut}, $G$ is a cycle. Beginning from any edge of $G$, we may colour all edges consecutively along the cycle. We alternately colour two edges red and one edge blue, until all edges of $G$ have been coloured. Obviously $G$ contains no blue $P_3$. From $(5n-3)/2\le 3(n-1)$ and the colouring of $G$ we see that $G$ contains no red matching with $n$ edges. Thus, $G\not\to (nK_2,P_3)$. Now we assume that $G$ is connected but not 2-connected. Recall that a \emph{block} of a graph is a subgraph that is nonseparable and is maximal with respect to this property. An \emph{end block} is a block that contains exactly one cut vertex of $G$. We have the following observation. \begin{claim}\label{clm: endblock} Every end block is either a $K_2$ or a cycle. \end{claim} \begin{proof} Let $B$ be an end block with at least three vertices, and let $v$ be the single cut vertex of $G$ that is contained in $B$. Since $B$ is 2-connected, the subgraph $B-v$ is still connected. By \cref{clm: noncut}, every non-cut vertex has degree at most two. It follows that $B-v$ is either a path or a cycle. We see that $v$ has two neighbours in $B$. If not, $v$ has at least three neighbours in $B$, each of which has degree one in $B-v$. Since a path has two vertices of degree one and a cycle has no vertex of degree one, $B-v$ is neither a path nor a cycle, a contradiction. Hence, every vertex of $B$ has two neighbours in $B$. Since $B$ is 2-connected, it must be a cycle. \end{proof} Since $G$ is not 2-connected, there is at least one cut vertex. Choose any cut vertex as a \emph{root}, denoted by $r$. For a vertex $u$ of $G$, if any path from $u$ to $r$ must pass through a cut vertex $v$, then $u$ is called a \emph{(vertex) descendant} of $v$. For any edge $e$ of $G$, if both ends of $e$ are descendants of $v$, then $e$ is called an \emph{edge descendant} of $v$. For a cut vertex $v$, the block containing $v$ but no other descendant of $v$ is called a \emph{parent block} of $v$. It is obvious that every cut vertex has a unique parent block, except that the root $r$ has no parent block. If $v$ is a cut vertex but no descendant of $v$ is a cut vertex of $G$, we call $v$ an \emph{end-cut}. It is obvious that $G$ has at least one end-cut. We have the following property of end-cuts. \begin{claim}\label{clm: end-cut} Every end-cut is contained in a unique end block, which is $K_2$. Moreover, if an end-cut is not the root of $G$, its parent block is also $K_2$. \end{claim} \begin{proof} Let $v$ be an end-cut. If $v$ is not the root $r$, by the definition of end-cut, every block containing $v$ is an end block, except for its parent block. If we delete $v$ and all descendants of $v$ from $G$, the induced subgraph is still connected, denoted by $G'$. This is because no vertex of $G'$ is a descendant of $v$. For any two vertices of $G'$, there is a path joining them in $G$ without passing through $v$. So the path still exists in $G'$ and hence $G'$ is connected. In the following, regardless of whether $v$ is the root or not, we first colour all edges incident to $v$ red, then give a colouring of all edge descendants of $v$. After that, we find a colouring of $G'$ by the inductive hypothesis.
We prove that this edge colouring of $G$ is an $(nK_2,P_3)$-colouring under certain conditions. By \cref{clm: endblock}, every end block is either a $K_2$ or a cycle. Assume that $v$ has $t_1$ neighbours in its parent block. Note that if $v$ is the root of $G$, then $t_1=0$. Assume $v$ is contained in $t_2$ blocks which are $K_2$, and in $t$ blocks which are cycles. Let $p_1+2, p_2+2, \dots, p_t+2$ be the cycle lengths of these $t$ cycles. If we remove $v$ from $G$, the cycles become $t$ disjoint paths with lengths $p_1, p_2, \dots, p_t$, respectively. We colour all edges incident to $v$ red. For each path with length $p_i$, where $1\le i\le t$, we colour all edges from one leaf to the other leaf consecutively along the path, alternately with one edge blue and two edges red. Now we have coloured $x:=t_1+t_2+2t+p_1+p_2+\dots+p_t$ edges, no blue $P_3$ appears, and the maximum red matching has $y:=1+\flo{(p_1+1)/3}+\dots+\flo{(p_t+1)/3}$ edges. If $v$ is the root, then we have already coloured all edges of $G$. So $x\le \flo{(5n-3)/2}$, and we need to check that $y\le n-1$. Since $G$ has at least six edges, we have $6t_2+7t+p_1\ge 11$. Thus, $n\ge (2x+3)/5\ge (2t_2+4t+2p_1+\dots+2p_t+3)/5\ge (t+p_1+\dots+p_t+4)/3>y$. This implies that $G\not\to (nK_2,P_3)$. If $v$ is not the root, recall that $G'$ is formed by the remaining edges and is connected. We can use the inductive hypothesis. The graph $G'$ has at most $\flo{(5n-3)/2}-x$ edges, which is $\flo{(5(n-2x/5)-3)/2}\le \flo{(5(n-\flo{2x/5})-3)/2}$. So $G'$ has an $((n-\flo{2x/5})K_2,P_3)$-colouring. It is not difficult to check that $G$ has no blue $P_3$, and the maximum red matching has at most $y+n-\flo{2x/5}-1$ edges. It remains to show under which conditions $y+n-\flo{2x/5}-1$ is less than $n$, from which we deduce a contradiction. Since $v$ has at least one neighbour in its parent block, it follows that $t_1\ge 1$. If $t\ge 1$, then $1+(t-1)/3\le (4t+1)/5$, and $\flo{(p_1+1)/3}\le (2p_1+1)/5$. Thus, \begin{align*} y&=1+\flo{(p_1+1)/3}+\flo{(p_2+1)/3}+\dots+\flo{(p_t+1)/3}\\ &\le 1+\flo{(p_1+1)/3}+(p_2+1)/3+\dots+(p_t+1)/3\\ &\le 1+\flo{(p_1+1)/3}+(t-1)/3+2(p_2+\dots+p_t)/5\\ &\le (4t+1)/5+(2p_1+1)/5+2(p_2+\dots+p_t)/5\\ &\le 2(t_1+t_2+2t+p_1+p_2+\dots+p_t)/5=2x/5. \end{align*} If $t=0$ and $t_1+t_2\ge 3$, then $y=1<6/5\le 2x/5$. In both cases, we have $y\le 2x/5$ and hence $y+n-\flo{2x/5}-1<n$. Now we consider the remaining case when $t=0$ and $t_1+t_2\le 2$. Since $v$ is an end-cut, we have $t_2\ge 1$. Thus, $t_1=t_2=1$. It follows from $t=0$ and $t_2=1$ that $v$ is contained in a unique end block which is $K_2$. It follows from $t_1=1$ that $v$ has only one neighbour in its parent block. Since each block is either a $K_2$ or a 2-connected subgraph, the parent block of $v$ must be $K_2$. \end{proof} Let $v$ be an end-cut. By \cref{clm: end-cut}, it cannot be the root of $G$. Let $u$ be the other end of its parent block, and $v^+$ the descendant of $v$. If $u$ is contained in an end block which is an edge $uu^+$, then $uv, uu^+, vv^+$ form a deletable edge set. If $u$ is contained in an end block which is not an edge, by \cref{clm: endblock}, the end block is a cycle. Let $u^+$ be a neighbour of $u$ on the cycle. Then $uv, uu^+, vv^+$ form a deletable edge set. If $u$ has at least two end-cuts as its descendants, let $w$ be another end-cut and $uw, ww^+$ edge descendants of $u$. Then $uv, vv^+, uw, ww^+$ form a deletable edge set.
If $u$ has only one end-cut as its descendant, which is $v$, then all edges incident to $u$ and the edge $vv^+$ form a deletable edge set. By \cref{clm: deletableedge}, each of the above cases leads to a contradiction, since in each case we have exhibited a deletable edge set with at least three edges. This completes the proof of the lower bound. \section{A matching versus $C_3$}\label{section3} Now we prove \cref{thm:C3}. The graph $nC_3$ has $n$ components and can be connected by adding $n-1$ edges. Denote this connected graph by $H$. It follows from $nC_3\to (nK_2,C_3)$ that $H\to (nK_2,C_3)$. Thus, $\rc(nK_2,C_3)\le 4n-1$. Set $\mathcal{G}=\{G: G\ \text{is connected},\ |E(G)|\le 4n-2,\ \text{and}\ G\to (nK_2,C_3)\}$. We will prove that $\mathcal{G}$ is an empty set and hence the lower bound follows. Suppose not. Taking $n$ to be the smallest positive integer for which $\mathcal{G}$ is nonempty, choose a graph $G$ from $\mathcal{G}$ with minimum order and, subject to its order, minimum size. Thus, for any positive integer $k\le n$ and any proper connected subgraph $G'$ of $G$ with at most $4k-2$ edges, we have $G'\not\to (kK_2,C_3)$. We present the proof through a sequence of claims. \begin{claim}\label{clm:minimumdegree} The minimum degree of $G$ is at least two. \end{claim} \begin{proof} Suppose that $G$ has a vertex $v$ of degree one. Then $G-v$ has an $(nK_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring the edge incident to $v$ blue, which contradicts our assumption that $G\to (nK_2,C_3)$. Thus, $\delta(G)\ge 2$. \end{proof} \begin{claim}\label{clm:nocutedge} The graph $G$ has no cut edge. \end{claim} \begin{proof} Suppose that $G$ has a cut edge $e$. Then $G-e$ has two connected components $X$ and $Y$. Let $k_1,k_2$ be the integers such that $4k_1-5\le |E(X)|\le 4k_1-2$ and $4k_2-5\le |E(Y)|\le 4k_2-2$ respectively. Then $X$ has a $(k_1K_2,C_3)$-colouring and $Y$ has a $(k_2K_2,C_3)$-colouring. It can be extended to an $((k_1+k_2-1)K_2,C_3)$-colouring of $G$ by colouring $e$ blue. So the maximum red matching has at most $k_1+k_2-2$ edges. From $(4k_1-5)+(4k_2-5)+1\le |E(X)|+|E(Y)|+1\le 4n-2$ we deduce that $k_1+k_2-2\le n-1/4$, so the colouring is an $(nK_2,C_3)$-colouring of $G$, a contradiction which implies our claim. \end{proof} \begin{claim}\label{clm:2-connected} The graph $G$ is 2-connected. \end{claim} \begin{proof} If $G$ is not 2-connected, let $v$ be a cut vertex of $G$, and $B_1,B_2,\dots,B_\ell$ the blocks containing $v$, where $\ell\ge 2$. If $v$ has only one neighbour in $B_i$ for some $i$ with $1\le i\le \ell$, then $B_i$ is not 2-connected. Since any block is either 2-connected or a $K_2$, $B_i$ has to be $K_2$. Hence, $B_i$ is a cut edge \cite[Chap.~4.1.18]{west2001introduction}, which contradicts \cref{clm:nocutedge}. Thus, for each $B_i$ with $1\le i\le \ell$, $v$ has at least two neighbours in $B_i$. The vertex set $V(G)$ can be partitioned into two parts $X$ and $Y$ as follows. If any path from $u$ to $v$ has to pass through a vertex of $B_1$ other than $v$, then $u\in X$; otherwise $u\in Y$. Let $G_1$ and $G_2$ be the subgraphs induced by $X\cup \{v\}$ and $Y$ respectively. It is obvious that $G_1$ contains $B_1$ and $G_2$ contains $B_2\cup \dots\cup B_\ell$. And they share only one vertex, which is $v$. Let $k_1,k_2$ be the integers such that $4k_1-5\le |E(G_1)|\le 4k_1-2$ and $4k_2-5\le |E(G_2)|\le 4k_2-2$ respectively. Then $G_1$ has a $(k_1K_2,C_3)$-colouring and $G_2$ has a $(k_2K_2,C_3)$-colouring. Combining the two colourings we have an $((k_1+k_2-1)K_2,C_3)$-colouring of $G$. So the maximum red matching has at most $k_1+k_2-2$ edges. From $(4k_1-5)+(4k_2-5)\le |E(G_1)|+|E(G_2)|\le 4n-2$ we deduce that $k_1+k_2-2\le n$.
If $k_1+k_2-2<n$, then this colouring is an $(nK_2,C_3)$-colouring of $G$. If $k_1+k_2-2=n$, then we have $|E(G_1)|=4k_1-5$ and $|E(G_2)|=4k_2-5$. We obtain an $(nK_2,C_3)$-colouring of $G$ as follows. If $\ell=2$, then both $B_1-v$ and $B_2-v$ are connected. Hence, both $G_1-v$ and $G_2-v$ are connected, which implies that they have a $((k_1-1)K_2,C_3)$-colouring and a $((k_2-1)K_2,C_3)$-colouring, respectively. It can be extended to an $((k_1+k_2-2)K_2,C_3)$-colouring of $G$ by colouring all edges incident to $v$ red. Thus, $G\not\to (nK_2,C_3)$. If $\ell\ge 3$, then for each $i$ with $2\le i\le \ell$, we delete an edge $vv_i$ from $B_i$. Both $G_1-v$ and $G_2-\{vv_2,\dots,vv_\ell\}$ are connected. So they have a $((k_1-1)K_2,C_3)$-colouring and a $((k_2-1)K_2,C_3)$-colouring, respectively. It can be extended to an $((k_1+k_2-2)K_2,C_3)$-colouring of $G$ by colouring the remaining edges red. Again, $G\not\to (nK_2,C_3)$. \end{proof} \begin{claim}\label{clm:maximumdegree} The maximum degree of $G$ is at most three. \end{claim} \begin{proof} For any vertex $v$ of $G$, by \cref{clm:2-connected}, $G-v$ is still connected. If $d(v)\ge 4$, $G-v$ has at most $4(n-1)-2$ edges and hence an $((n-1)K_2,C_3)$-colouring by the choice of $G$. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring all edges incident to $v$ red, a contradiction. Thus, the maximum degree of $G$ is at most three. \end{proof} \begin{claim}\label{clm:3-regular} The graph $G$ is 3-regular. \end{claim} \begin{proof} By \cref{clm:minimumdegree} and \cref{clm:maximumdegree}, $2\le d(v)\le 3$ for any vertex $v$ of $G$. If $G$ is 2-regular, by \cref{clm:2-connected}, $G$ is a cycle. If $G$ is a triangle, then $n\ge 2$. We colour all edges of $G$ red, which is a $(nK_2,C_3)$-colouring. If $G$ is not a triangle, then we colour all edges of $G$ blue, which is a $(K_2,C_3)$-colouring. Thus, $G$ cannot be 2-regular. If $G$ is not 3-regular, then $G$ has some vertices with degree two and some with degree three. There exist two adjacent vertices with degrees two and three respectively, denoted by $v_1$ and $v_2$. Since $G-v_2$ is connected, and $v_1$ has only one neighbour in $G-v_2$, it follows that $G-\{v_1,v_2\}$ is connected. This graph has at most $4(n-1)-2$ edges and hence an $((n-1)K_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring all edges incident to $v_2$ red and the remaining edge incident to $v_1$ blue, a contradiction which implies our claim. \end{proof} \begin{claim}\label{clm:triangle} Each edge of $G$ is contained in at least one triangle. \end{claim} \begin{proof} Suppose that $G$ has an edge $e$ which is not contained in any triangle. By \cref{clm:nocutedge}, $G-e$ is connected. It follows from the choice of $G$ that $G-e$ has an $(nK_2,C_3)$-colouring. It can be extended to an $(nK_2,C_3)$-colouring of $G$ by colouring $e$ blue, a contradiction. \end{proof} Consider a triangle $v_1v_2v_3$. By \cref{clm:3-regular}, each of $v_1,v_2,v_3$ has another neighbour, denoted by $v_4,v_5,v_6$ respectively. If $v_4,v_5,v_6$ are the same vertex, then $v_1,v_2,v_3,v_4$ forms a $K_4$. Since $G$ is a 3-regular 2-connected graph, the whole graph $G$ is a $K_4$ and $n\ge 2$. We colour the triangle $v_1v_2v_3$ red, and the other three edges blue. This is a $(2K_2,C_3)$-colouring of $G$, a contradiction. Thus, at least two of $v_4,v_5,v_6$ are distinct, say, $v_4$ and $v_5$ are two distinct vertices. 
The vertex $v_3$ cannot be adjacent to both $v_4$ and $v_5$, since otherwise $d(v_3)\ge 4$. Without loss of generality, assume that $v_3$ is not adjacent to $v_4$. Moreover, $v_4$ is not adjacent to $v_2$, since otherwise $d(v_2)\ge 4$. By \cref{clm:triangle}, $v_1v_4$ is contained in a triangle, denoted by $v_1v_4w$. Since $v_4$ is adjacent to neither $v_2$ nor $v_3$, the vertex $w$ is different from $v_2$ and $v_3$, so $d(v_1)\ge 4$, a final contradiction. This shows that $\mathcal{G}$ is empty, which completes the proof of \cref{thm:C3}. \begin{thebibliography}{1} \bibitem{bondy2008graph} J.~A. Bondy and U.~S.~R. Murty, \emph{Graph theory}, Springer, 2008. \bibitem{erdos1984size} P.~Erd{\H{o}}s and R.~Faudree, \emph{Size Ramsey numbers involving matchings}, Finite and Infinite Sets, Elsevier, 1984, pp.~247--264. \bibitem{erdos1978size} P.~Erd{\H{o}}s, R.~Faudree, C.~C. Rousseau, and R.~H. Schelp, \emph{The size Ramsey number}, Period. Math. Hungar. \textbf{9} (1978), 145--161. \bibitem{rahadjeng2015connected} B.~Rahadjeng, E.~T. Baskoro, and H.~Assiyatun, \emph{Connected size Ramsey numbers for matchings versus cycles or paths}, Procedia Comput. Sci. \textbf{74} (2015), 32--37. \bibitem{rahadjeng2017connected} B.~Rahadjeng, E.~T. Baskoro, and H.~Assiyatun, \emph{Connected size Ramsey number for matchings vs. small stars or cycles}, Proc. Indian Acad. Sci.: Math. Sci. \textbf{127} (2017), 787--792. \bibitem{vito2021size} V.~Vito, A.~C. Nabila, E.~Safitri, and D.~R. Silaban, \emph{The size Ramsey and connected size Ramsey numbers for matchings versus paths}, J. Phys. Conf. Ser. \textbf{1725} (2021), p.~012098. \bibitem{west2001introduction} D.~B. West, \emph{Introduction to graph theory}, vol.~2, Prentice Hall, Upper Saddle River, 2001. \end{thebibliography} \end{document}
2205.03928v1
http://arxiv.org/abs/2205.03928v1
Number of complete subgraphs of Peisert graphs and finite field hypergeometric functions
\documentclass[reqno]{amsart} \usepackage{amsmath,amsthm,amssymb,amscd} \newcommand{\E}{\mathcal E} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{result}[theorem]{Result} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \newtheorem{problem}[theorem]{Problem} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{conj}[theorem]{Conjecture} \numberwithin{equation}{section} \allowdisplaybreaks \begin{document} \title[number of complete subgraphs of Peisert graphs] {number of complete subgraphs of Peisert graphs and finite field hypergeometric functions} \author{Anwita Bhowmik} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \email{[email protected]} \author{Rupam Barman} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \email{[email protected]} \subjclass[2020]{05C25; 05C30; 11T24; 11T30} \date{9th May 2022} \keywords{Peisert graphs; clique; finite fields; character sums; hypergeometric functions over finite fields} \begin{abstract} For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$. The Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$. We provide a formula, in terms of finite field hypergeometric functions, for the number of complete subgraphs of order four contained in $P^\ast(q)$. We also give a new proof for the number of complete subgraphs of order three contained in $P^\ast(q)$ by evaluating certain character sums. The computations for the number of complete subgraphs of order four are quite tedious, so we further give an asymptotic result for the number of complete subgraphs of any order $m$ in Peisert graphs. \end{abstract} \maketitle \section{introduction and statements of results} The arithmetic properties of Gauss and Jacobi sums have a very long history in number theory, with applications in Diophantine equations and the theory of $L$-functions. Recently, number theorists have obtained generalizations of classical hypergeometric functions that are assembled with these sums, and these functions have recently led to applications in graph theory. Here we make use of these functions, as developed by Greene, McCarthy, and Ono \cite{greene, greene2,mccarthy3, ono2} to study substructures in Peisert graphs, which are relatives of the well-studied Paley graphs. \par The Paley graphs are a well-known family of undirected graphs constructed from the elements of a finite field. Named after Raymond Paley, they were introduced as graphs independently by Sachs in 1962 and Erd\H{o}s \& R\'enyi in 1963, inspired by the construction of Hadamard matrices in Paley's paper \cite{paleyp}. Let $q\equiv 1\pmod 4$ be a prime power. Then the Paley graph of order $q$ is the graph with vertex set as the finite field $\mathbb{F}_q$ and edges defined as, $ab$ is an edge if $a-b$ is a non-zero square in $\mathbb{F}_q$. \par It is natural to study the extent to which a graph exhibits symmetry. 
A graph is called \textit{symmetric} if, given any two edges $xy$ and $x_1y_1$, there exists a graph automorphism sending $x$ to $x_1$ and $y$ to $y_1$. Another kind of symmetry occurs if a graph is isomorphic to its complement, in which case the graph is called \textit{self-complementary}. While Sachs studied the self-complementarity properties of the Paley graphs, Erd\H{o}s \& R\'enyi were interested in their symmetries. It turns out that the Paley graphs are both self-complementary and symmetric. \par It is a natural question to ask for the classification of all self-complementary and symmetric (SCS) graphs. In this direction, Chao's classification in \cite{chao} shows that the only such graphs of prime order are the Paley graphs. Zhang \cite{zhang} gave an algebraic characterization of SCS graphs using the classification of finite simple groups, although it did not settle whether there exist such graphs other than the Paley graphs. In 2001, Peisert gave a full description of SCS graphs as well as their automorphism groups in \cite{peisert}. He showed that there is another infinite family of SCS graphs apart from the Paley graphs, and, in addition, one more graph not belonging to either of these two families. He constructed the $P^\ast$-graphs (which are now known as \textit{Peisert graphs}) as follows. For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$, that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$. Then the Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$. It is shown in \cite{peisert} that the definition is independent of the choice of $g$. It turns out that adjacency is well defined (that is, symmetric), since $q\equiv 1\pmod 8$ implies that $-1\in\langle g^4\rangle$. \par We know that a complete subgraph, or a clique, in an undirected graph is a set of vertices such that every two distinct vertices in the set are adjacent. The number of vertices in the clique is called the order of the clique. Let $G^{(n)}$ denote a graph on $n$ vertices and let $\overline{G^{(n)}}$ be its complement. Let $k_m(G)$ denote the number of cliques of order $m$ in a graph $G$. Let $T_m(n)=\text{min}\left(k_m(G^{(n)})+ k_m(\overline{G^{(n)}})\right)$, where the minimum is taken over all graphs on $n$ vertices. Erd\H{o}s \cite{erdos}, Goodman \cite{goodman} and Thomason \cite{thomason} studied $T_m(n)$ for different values of $m$ and $n$. Here we note that the study of $T_m(n)$ can be linked to Ramsey theory. This is because the diagonal Ramsey number $R(m,m)$ is the smallest positive integer $n$ such that $T_m(n)$ is positive. Also, for the function $k_m(G^{(n)})+ k_m(\overline{G^{(n)}})$ on graphs with $n=p$ vertices, $p$ being a prime, Paley graphs are minimal in certain ways; for example, in order to show that $R(4,4)$ is at least $18$, one needs a graph $G^{(17)}$ with $k_4(G^{(17)})+ k_4(\overline{G^{(17)}})=0$, and the Paley graph with $17$ vertices is the only such graph (up to isomorphism). What followed was a study of $k_m(G)$ for Paley graphs $G$. Evans et al. \cite{evans1981number} and Atanasov et al. \cite{atanasov2014certain} gave formulae for $k_4(G)$, where $G$ is a Paley graph whose number of vertices is a prime and a prime power, respectively.
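Counts of this kind are straightforward to explore by direct computation for small $q$. The following short Python sketch (an illustrative addition, not used in any proof; it assumes the concrete model $\mathbb{F}_9=\mathbb{F}_3(i)$ with $i^2=-1$ and takes $g=1+i$, which has multiplicative order $8$ and is therefore primitive) builds the Peisert graph $P^\ast(9)$ directly from the definition above and counts its triangles by brute force.
\begin{verbatim}
from itertools import combinations

p = 3
q = p ** 2                         # q = 9, the smallest Peisert graph

def mul(x, y):                     # multiplication in GF(9) = GF(3)[i]/(i^2 + 1)
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def sub(x, y):
    return ((x[0] - y[0]) % p, (x[1] - y[1]) % p)

g = (1, 1)                         # g = 1 + i, a primitive element of GF(9)
powers = [(1, 0)]
for _ in range(q - 2):
    powers.append(mul(powers[-1], g))

# H = <g^4> union g<g^4> consists of the powers g^k with k = 0, 1 (mod 4)
H = {powers[k] for k in range(q - 1) if k % 4 in (0, 1)}

vertices = [(a, b) for a in range(p) for b in range(p)]

def adjacent(u, v):
    return sub(u, v) in H          # symmetric, because -1 lies in <g^4>

triangles = sum(1 for u, v, w in combinations(vertices, 3)
                if adjacent(u, v) and adjacent(u, w) and adjacent(v, w))
print(triangles)                   # prints 6 for q = 9
\end{verbatim}
The printed value $6$ agrees with $q(q-1)(q-5)/48$ for $q=9$ (see Theorem \ref{thm1} below), and the analogous count over $4$-element subsets of vertices can be used to check the values of $k_4(P^\ast(q))$ reported in Table \ref{Table-1}.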
A further step was the introduction of generalized Paley graphs by Lim and Praeger \cite{lim2006generalised}, and the computation of the number of cliques of orders $3$ and $4$ in those graphs by Dawsey and McCarthy \cite{dawsey}. Very recently, we \cite{BB} have defined \emph{Paley-type} graphs of order $n$ as follows. For a positive integer $n$, the Paley-type graph $G_n$ has the finite commutative ring $\mathbb{Z}_n$ as its vertex set, and $ab$ is an edge if and only if $a-b\equiv x^2\pmod{n}$ for some unit $x$ of $\mathbb{Z}_n$. For primes $p\equiv 1\pmod{4}$ and any positive integer $\alpha$, we have also found the number of cliques of orders $3$ and $4$ in the Paley-type graphs $G_{p^{\alpha}}$. \par The Peisert graphs lie in the class of SCS graphs along with the Paley graphs, so it is natural to study the number of cliques in Peisert graphs as well. There is no known formula for the number of cliques of order $4$ in the Peisert graph $P^{\ast}(q)$. The main purpose of this paper is to provide a general formula for $k_4(P^\ast(q))$. In \cite{jamesalex2}, Alexander found the number of cliques of order $3$ using the properties that Peisert graphs are edge-transitive and that any pair of vertices connected by an edge has the same number of common neighbors (a graph being edge-transitive means that, given any two edges in the graph, there exists a graph automorphism sending one edge to the other). In this article, we follow a character-sum approach to compute the number of cliques of orders $3$ and $4$ in Peisert graphs. In the following theorem, we give a new proof for the number of cliques of order $3$ in Peisert graphs by evaluating certain character sums. \begin{theorem}\label{thm1} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Then, the number of cliques of order $3$ in the Peisert graph $P^{\ast}(q)$ is given by $$k_3(P^\ast(q))=\dfrac{q(q-1)(q-5)}{48}.$$ \end{theorem} Next, we find the number of cliques of order $4$ in Peisert graphs. In this case, the character sums are difficult to evaluate. We use finite field hypergeometric functions to evaluate some of the character sums. Before we state our result on $k_4(P^\ast(q))$, we recall Greene's finite field hypergeometric functions from \cite{greene, greene2}. Let $p$ be an odd prime, and let $\mathbb{F}_q$ denote the finite field with $q$ elements, where $q=p^r, r\geq 1$. Let $\widehat{\mathbb{F}_q^{\times}}$ be the group of all multiplicative characters on $\mathbb{F}_q^{\times}$. We extend the domain of each $\chi\in \widehat{\mathbb{F}_q^{\times}}$ to $\mathbb{F}_q$ by setting $\chi(0)=0$, including for the trivial character $\varepsilon$. For multiplicative characters $A$ and $B$ on $\mathbb{F}_q$, the binomial coefficient ${A \choose B}$ is defined by \begin{align*} {A \choose B}:=\frac{B(-1)}{q}J(A,\overline{B}), \end{align*} where $J(A, B)=\displaystyle \sum_{x \in \mathbb{F}_q}A(x)B(1-x)$ denotes the Jacobi sum and $\overline{B}$ is the character inverse of $B$. For a positive integer $n$, and $A_0,\ldots, A_n, B_1,\ldots, B_n\in \widehat{\mathbb{F}_q^{\times}}$, Greene \cite{greene, greene2} defined the ${_{n+1}}F_n$ finite field hypergeometric function over $\mathbb{F}_q$ by \begin{align*} {_{n+1}}F_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right) :=\frac{q}{q-1}\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi} \cdots {A_n\chi \choose B_n\chi}\chi(x).
\end{align*} For $n=2$, we recall the following result from \cite[Corollary 3.14]{greene}: $${_{3}}F_{2}\left(\begin{array}{ccc} A, & B, & C \\ & D, & E \end{array}| \lambda\right)=\sum\limits_{x,y\in\mathbb{F}_q}A\overline{E}(x)\overline{C}E(1-x)B(y)\overline{B}D(1-y)\overline{A}(x-\lambda y).$$ Some of the biggest motivations for studying finite field hypergeometric functions have been their connections with Fourier coefficients and eigenvalues of modular forms and with counting points on certain kinds of algebraic varieties. For example, Ono \cite{ono} gave formulae for the number of $\mathbb{F}_p$-points on elliptic curves in terms of special values of Greene's finite field hypergeometric functions. In \cite{ono2}, Ono wrote a beautiful chapter on finite field hypergeometric functions and mentioned several open problems on hypergeometric functions and their relations to modular forms and algebraic varieties. In recent times, many authors have studied and found solutions to some of the problems posed by Ono. \par Finite field hypergeometric functions are useful in the study of Paley graphs, see for example \cite{dawsey, wage}. In the following theorem, we express the number of cliques of order $4$ in Peisert graphs in terms of finite field hypergeometric functions. \begin{theorem}\label{thm2} Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. If $\chi_4$ is a character of order $4$, then the number of cliques of order $4$ in the Peisert graph $P^{\ast}(q)$ is given by \begin{align*} k_4(P^\ast(q))=\frac{q(q-1)}{3072}\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc} \hspace{-.12cm}\chi_4, &\hspace{-.14cm} \chi_4, &\hspace{-.14cm} \chi_4^3 \\ & \hspace{-.14cm}\varepsilon, &\hspace{-.14cm} \varepsilon \end{array}| 1\right) \right]. \end{align*} \end{theorem} Using Sage, we numerically verify Theorem $\ref{thm2}$ for certain values of $q$. We list some of the values in Table \ref{Table-1}. We denote by ${_{3}}F_{2}(\cdot)$ the hypergeometric function appearing in Theorem \ref{thm2}. \begin{table}[ht] \begin{center} \begin{tabular}{|c |c | c | c | c | c | c|} \hline $p$ &$q$ & $k_4(P^\ast(q))$ & $u$ & $q^2 \cdot {_{3}}F_{2}(\cdot)$ & $k_4(P^\ast(q))$ &${_{3}}F_{2}(\cdot)$\\ && (by Sage) & & (by Sage) & (by Theorem \ref{thm2}) &\\\hline $3$ &$9$ & $0$ & $-1$ & $10$ & $0$& $0.1234\ldots$ \\ $7$ &$49$ & $2156$ & $7$ & $-30$ & $2156$& $-0.0123\ldots$\\ $3$ &$81$ & $21060$ & $7$ & $-62$ & $21060$& $-0.0094\ldots$\\ $11$ &$121$ & $116160$ & $7$ & $42$ & $116160$& $0.0028\ldots$\\ $19$ &$361$ & $10515930$ & $-17$ & $522$ & $10515930$& $0.0040\ldots$\\ $23$ &$529$ & $49135636$ & $23$ & $930$ & $49135636$& $0.0033\ldots$\\ \hline \end{tabular} \caption{Numerical data for Theorem \ref{thm2}} \label{Table-1} \end{center} \end{table} \par We note that the number of $3$-order cliques in the Peisert graph of order $q$ equals the number of $3$-order cliques in the Paley graph of the same order. The computations for the number of cliques of order $4$ are quite tedious, so we further give an asymptotic result in the following theorem, for the number of cliques of order $m$ in Peisert graphs, $m\geq 1$ being an integer. \begin{theorem}\label{asym} Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. 
For $m\geq 1$, let $k_m(P^\ast(q))$ denote the number of cliques of order $m$ in the Peisert graph $P^\ast(q)$. Then $$\lim\limits_{q\to\infty}\dfrac{k_m(P^\ast(q))}{q^m}=\dfrac{1}{2^{\binom{m}{2}}\,m!}.$$ \end{theorem} \section{preliminaries and some lemmas} We begin by fixing some notation. For a prime $p\equiv 3\pmod{4}$ and positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$, that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$. Now, we fix a multiplicative character $\chi_4$ on $\mathbb{F}_q$ of order $4$ (which exists since $q\equiv 1\pmod 4$). Let $\varphi$ be the unique quadratic character on $\mathbb{F}_q$. Then, we have $\chi_4^2=\varphi$. Let $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Since $H$ is the union of two cosets of $\langle g^4\rangle $ in $\langle g\rangle $, we see that $|H|=2\times \frac{q-1}{4}=\frac{q-1}{2}$. We recall that a vertex-transitive graph is a graph in which, given any two vertices in the graph, there exists some graph automorphism sending one of the vertices to the other. Peisert graphs, being symmetric, are vertex-transitive. Also, the subgraphs induced by $\langle g^4\rangle$ and $g\langle g^4\rangle$ are both vertex-transitive: if $s,t$ are two elements of $\langle g^4\rangle$ (or $g\langle g^4\rangle$) then the map on the vertex set of $\langle g^4\rangle$ (or $g\langle g^4\rangle$) given by $x\longmapsto \frac{t}{s} x$ is an automorphism of the induced subgraph sending $s$ to $t$. The subgraph of $P^\ast(q)$ induced by $H$ is denoted by $\langle H\rangle$. \par Throughout the article, we fix $h=1-\chi_4(g)$. For $x\in\mathbb{F}_q^\ast$, we have the following: \begin{align}\label{qq} \frac{2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)}{4} = \left\{ \begin{array}{lll} 1, & \hbox{if $\chi_4(x)\in\{1,\chi_4(g)\}$;} \\ 0, & \hbox{\text{otherwise.}} \end{array} \right. \end{align} We note here that for $x\neq 0$, $x\in H$ if and only if $\chi_4(x)=1$ or $\chi_4(x)=\chi_4(g)$. \par We have the following lemma which will be used in proving the main results. \begin{lemma}\label{rr} Let $q=p^{2t}$ where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a multiplicative character of order $4$ on $\mathbb{F}_q$, and let $\varphi$ be the unique quadratic character. Then, we have $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=-(-p)^t$. \end{lemma} \begin{proof} By \cite[Proposition 1]{katre}, we have $J(\chi_4,\chi_4)=-(-p)^t$. We also note that, by Theorem 2.1.4 and Theorem 3.2.1 of \cite{berndt} (whose statements remain valid when the prime is replaced by a prime power), we have $J(\chi_4,\varphi)=\chi_4(4)J(\chi_4,\chi_4)=a_4+ib_4$, where $a_4^2+b_4^2=q$ and $a_4\equiv -(\frac{q+1}{2})\pmod 4$. Since $q\equiv 1\pmod 8$, this gives $a_4\equiv 3\pmod 4$, and it follows that $b_4=0$ and $a_4=-(-p)^t$. Thus, we obtain $J(\chi_4,\varphi)=J(\chi_4,\chi_4)=-(-p)^t$. \end{proof} Next, we evaluate certain character sums in the following lemmas. \begin{lemma}\label{lem1} Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$, and let $\varphi$ be the unique quadratic character. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$.
Then, $$\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\varphi(a-1)J(\chi_4,\chi_4).$$ \end{lemma} \begin{proof} We have \begin{align*} &\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\sum_{y'\in\mathbb{F}_q}\chi_4(y'(y'+1-a))\\ &=\sum_{y''\in\mathbb{F}_q}\chi_4((1-a)y'')\chi_4((1-a)(y''+1)) =\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(y''(y''+1))\\ &=\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(-y''(-y''+1))\\ &=\varphi(1-a)J(\chi_4,\chi_4), \end{align*} where we used the substitutions $y-1=y'$, $y''=y'(1-a)^{-1}$, and replaced $y''$ by $-y''$. \end{proof} \begin{lemma}\label{lem2} Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$. Then, $$\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=-1.$$ \end{lemma} \begin{proof} We have \begin{align*} &\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=\sum_{y'\in\mathbb{F}_q}\chi_4(ay')\overline{\chi_4}(a-ay')\\ &=\sum_{y'\in\mathbb{F}_q}\chi_4(y')\overline{\chi_4}(1-y')\\ &=\sum_{y'\in\mathbb{F}_q}\chi_4\left(y'(1-y')^{-1}\right)\\ &=\sum_{y''\in\mathbb{F}_q, y''\neq -1}\chi_4(y'') =-1, \end{align*} where we used the substitutions $y' =y a^{-1}$ and $y'' =y'(1-y')^{-1}$, respectively. \end{proof} \begin{lemma}\label{lem3} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then, \begin{align}\label{koro} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\chi_4(x-y)=-2\rho \end{align} and \begin{align}\label{koro1} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\overline{\chi_4}(x-y)=1-\rho. \end{align} \end{lemma} \begin{proof} By Lemma \ref{lem2}, we have \begin{align*} \sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\chi_4(x-y) &=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[-1-\chi_4(y-1) \right]\\ &=-\rho-\sum_y \chi_4(y)\varphi(1-y)=-2\rho, \end{align*} which proves \eqref{koro}. Next, using the substitution $x'=xy^{-1}$, we have \begin{align}\label{sum-new} \sum_x \overline{\chi_4}(x)\overline{\chi_4}(x-y)&=\sum_{x'} \overline{\chi_4}(x'y)\overline{\chi_4}(x'y-y)\notag \\ &=\varphi(y)\rho. \end{align} So, using \eqref{sum-new}, we find that \begin{align*} &\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\overline{\chi_4}(x-y)\\ &=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[\varphi(y)\rho-\overline{\chi_4}(y-1) \right]\\ &=\rho\sum_y \overline{\chi_4}(y)\chi_4(1-y)-\sum_y \chi_4(y)\\ &=-\rho+1. \end{align*} This completes the proof of the lemma. \end{proof} We need to evaluate several analogous character sums as in Lemma \ref{lem3}. To this end, we have the following two lemmas whose proofs merely involve Lemmas \ref{lem1} and \ref{lem2} (as in Lemma \ref{lem3}). \begin{lemma}\label{lema1} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. 
Then, we have \begin{align*} &\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1} \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)\\ &=\left\{ \begin{array}{lll} -2\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, 1), (-1, -1, -1)\};$} \\ 2, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, -1), (-1, -1, 1)\};$} \\ 1-\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1)\}$.} \end{array} \right. \end{align*} \end{lemma} \begin{lemma}\label{corr} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then, for $i_1,i_2,i_3\in\{\pm 1\}$, we have the following tabulation of the values of the expression given below: \begin{align}\label{new-eqn1} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}A_x \cdot \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y). \end{align} For $w\in\{1,2,\ldots,8\}$ and $z\in \{1,2,\ldots,7\}$, the $(w,z)$-th entry in the table corresponds to \eqref{new-eqn1}, where $A_x$ is either $\chi_4(x),\overline{\chi_4}(x),\chi_4(1-x)$ or $\overline{\chi_4}(1-x)$ and the tuple $(i_1,i_2,i_3)$ depends on $w$. \begin{align*} \begin{array}{|l|l|l|l|l|l|l|} \cline {4 - 7 } \multicolumn{3}{c|}{} & \multicolumn{4}{|c|}{A_{x}} \\ \hline i_{1} & i_{2} & i_{3} & \chi_4(x) & \overline{\chi_4}(x) & \chi_4(1-x) & \overline{\chi_4}(1-x) \\ \hline 1 & 1 & 1 & -2 \rho & -2 \rho & -2 \rho & -2 \rho \\ 1 & 1 & -1 & 1-\rho & 1-\rho & 1- \rho & 1-\rho \\ 1 & -1 & 1 & {\rho}^2+1 & 2 & {\rho}^2-\rho & 1-\rho \\ 1 & -1 & -1 & 1-\rho & {\rho}^2-\rho & 2 & {\rho}^2+1 \\ -1 & 1 & 1 &{\rho}^2-\rho & 1-\rho &{\rho}^2+1 & 2 \\ -1 & 1 & -1 &2 & {\rho}^2+1 &1-\rho & {\rho}^2-\rho \\ -1 & -1 & 1 &1-\rho & 1-\rho &1-\rho & 1-\rho \\ -1 & -1 & -1 &-2\rho & -2\rho &-2\rho & -2\rho\\ \hline \end{array} \end{align*} For example, the $(3,6)$-th position contains the value ${\rho}^2-\rho$. Here $w=3$ corresponds to $i_1=1,i_2=-1,i_3=1$; $z=6$ corresponds to the column $A_x=\chi_4(1-x)$. So, $$\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\chi_4(1-x)\chi_4(y) \overline{\chi_4}(1-y)\chi_4(x-y)={\rho}^2-\rho.$$ \end{lemma} \begin{proof} The calculations follow along the lines of Lemma \ref{lem1} and Lemma \ref{lem2}. For example, in Lemma \ref{lem3}, one can take $\chi_4(x),~\chi_4(x-1)$ or $\overline{\chi_4}(x-1)$ in place of $\overline{\chi_4}(x)$ in $\eqref{koro}$ and $\eqref{koro1}$ (which we denote by $A_x$), and easily evaluate the corresponding character sum. \end{proof} \begin{lemma}\label{lem4} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character of order $4$. Let $\varphi$ and $\varepsilon$ be the quadratic and the trivial characters, respectively. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. Then, \begin{align*} &{_{3}}F_2\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon \end{array}\mid 1 \right)= {_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon \end{array}\mid 1\right)\\ &={_{3}}F_2\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}\mid 1\right) ={_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}\mid 1\right)\\ &=\frac{1}{q^2}[-2u(-p)^t]. 
\end{align*} \end{lemma} \begin{proof} Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Now, Proposition 1 in \cite{katre} tells us that $J(\chi_4,\chi_4)=-(-p)^t$ and hence it is real. Again, by Theorem 3.3.3 and the paragraph preceeding Theorem 3.3.1 in \cite{berndt}, $J(\chi_8,\chi_8^2)=\chi_8(-4)J(\chi_4,\chi_4)$, where $\chi_8(4)=\pm 1$ and thus, is also real. By \cite[Theorem 4.37]{greene}, we have \begin{align}\label{doe} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\binom{\chi_8}{\chi_8^2}\binom{\chi_8}{\chi_8^3}+\binom{\chi_8^5}{\chi_8^2}\binom{\chi_8^5}{\overline{\chi_8}}\notag \\ &=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8^6)J(\chi_8,\chi_8^5)+J(\chi_8^5,\chi_8^6)J(\chi_8^5,\chi_8)]. \end{align} Using Theorems 2.1.5 and 2.1.6 in \cite{berndt} we obtain \begin{align*} &J(\chi_8,\chi_8^6)=\chi_8(-1)J(\chi_8,\chi_8),\\ &J(\chi_8,\chi_8^5)=\chi_8(-1)J(\chi_8,\chi_8^2),\\ &J(\chi_8^5,\chi_8^6)=\chi_8(-1)\overline{J(\chi_8,\chi_8)}. \end{align*} Substituting these values in $\eqref{doe}$ and using \cite[Lemma 3.6 (2)]{dawsey}, we find that \begin{align}\label{real} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8)J(\chi_8,\chi_8^2)+\overline{J(\chi_8,\chi_8)}J(\chi_8,\chi_8^2)]\notag \\ &=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\notag \\ &=\frac{1}{q^2}[-2u(-p)^t]. \end{align} Since ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$ is the conjugate of ${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$, so both are equal as the value given in \eqref{real} is a real number. Using Lemma 4.37 in \cite{greene} again, we have \begin{align}\label{jack} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\binom{\overline{\chi_8}}{\chi_8^2}\binom{\overline{\chi_8}}{\chi_8}+\binom{\chi_8^3}{\chi_8^2}\binom{\chi_8^3}{\overline{\chi_8}^3}\notag \\ &=\frac{\chi_8(-1)}{q^2}[J(\overline{\chi_8},\overline{\chi_8}^2)J(\overline{\chi_8},\overline{\chi_8})+J(\chi_8^3,\overline{\chi_8}^2)J(\chi_8^3,\chi_8^3)]. \end{align} Recalling Theorem 2.1.6 in \cite{berndt} gives $J(\chi_8,\chi_8)=J(\chi_8^3,\chi_8^3)$. Also, Theorem 2.1.5 in \cite{berndt} gives $J(\chi_8^3,\overline{\chi_8}^2)=\overline{J(\chi_8^5,\chi_8^2)}=\overline{J(\chi_8,\chi_8^2)}=J(\chi_8,\chi_8^2)$. Hence, $\eqref{jack}$ yields \begin{align*} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\\ &=\frac{1}{q^2}[-2u(-p)^t], \end{align*} which is the same real number we found in $\eqref{real}$. Hence, its complex conjugate, namely ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}| 1\right)$ is also real and has the same value. This completes the proof of the lemma. \end{proof} Next, we note the following observations given in the beginning of the sixth section in \cite{dawsey}. We state it as a lemma since we shall use it in proving Theorem \ref{thm2}. Greene \cite{greene, greene2} gave some transformation formulae which we list here as follows. 
Let $A,B,C,D,E$ be characters on $\mathbb{F}_q$. Then, we have \begin{align} &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & A\overline{D}, & C\overline{D}\\ & \overline{D}, & E\overline{D}\end{array}| 1\right),\label{1}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=ABCDE(-1)\cdot {_{3}}F_{2}\left(\begin{array}{ccc}A, & A\overline{D}, & A\overline{E}\\ & A\overline{B}, & A\overline{C}\end{array}| 1\right),\label{2}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}|1\right)=ABCDE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & B, & B\overline{E}\\ & B\overline{A}, & B\overline{C}\end{array}| 1\right),\label{3}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & E\overline{C}\\ & AB\overline{D}, & E\end{array}| 1\right),\label{4}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AD(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & D\overline{B}, & C\\ & D, & AC\overline{E} \end{array}| 1\right),\label{5}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=B(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & B, & C\\ & D, & BC\overline{E}\end{array}| 1\right),\label{6}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=AB(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & \overline{B}D, & C\\ & D, & DE\overline{AB}\end{array}| 1\right).\label{7} \end{align} Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. To each of the transformations in $\eqref{1}$ to $\eqref{7}$, Dawsey and McCarthy in \cite{dawsey} associated a map on $X$; for example, the transformation in $\eqref{1}$ gives that $${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_2-t_4}, &\chi_4^{t_1-t_4} , & \chi_4^{t_3-t_4}\\ & \chi_4^{-t_4}, &\chi_4^{t_5-t_4}\end{array}| 1\right),$$ so it induces a map $f_1: X\rightarrow X$ given by $$f_1(t_1,t_2,t_3,t_4,t_5)=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4).$$ Similarly, the other transformations in $\eqref{2}$ to $\eqref{7}$ led to the construction of the maps $f_2$ to $f_7$. \begin{lemma}\label{dlemma} Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. Define the functions $f_i:X\rightarrow X,~i\in\{1,2,\ldots,7\}$ in the following manner: \begin{align*} f_1(t_1,t_2,t_3,t_4,t_5)&=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4),\\ f_{2}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{1}-t_{4}, t_{1}-t_{5}, t_{1}-t_{2}, t_{1}-t_{3}\right),\\ f_{3}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{2}-t_{4}, t_{2}, t_{2}-t_{5}, t_{2}-t_1,t_2-t_3\right),\\ f_{4}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{2}, t_{5}-t_{3}, t_{1}+t_{2}-t_{4}, t_{5}\right),\\ f_{5}\left(t_1, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{4}-t_{2}, t_{3}, t_{4}, t_{1}+t_{3}-t_{5}\right),\\ f_{6}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{4}-t_{1}, t_{2}, t_{3}, t_{4}, t_{2}+t_{3}-t_{5}\right),\\ f_{7}\left(t_{1},t_{2}, t_{3},t_{4}, t_{5}\right)&=\left(t_{4}-t_{1},t_{4}-t_{2},t_{3}, t_{4}, t_{4}+t_{5}-t_{1}-t_{2}\right). 
\end{align*} Then the group generated by $f_1,\ldots,f_7$, under composition of functions, is the set $$\mathcal{F}=\{f_0,f_i,f_j \circ f_l,f_4\circ f_1,f_6\circ f_2,f_5\circ f_3,f_1\circ f_4\circ f_1: 1\leq i\leq 7,~1\leq j\leq 3,~4\leq l\leq 7\},$$ where $f_0$ is the identity map. \\ Moreover, the group $\mathcal{F}$ acts on the set $X$. If we associate the $5$-tuple $(t_1,t_2,\ldots,t_5)\in X$ to the hypergeometric function ${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)$, then each orbit of the group action consists of a number of $5$-tuples $(t_1,t_2,\ldots,t_5)$, and the corresponding ${}_3 F_{2}$ terms have the same value. \end{lemma} \begin{proof} For a proof, see Section $6$ of \cite{dawsey}. \end{proof} In order to prove Theorem \ref{asym}, the following famous theorem, due to Andr\'e Weil, serves as the crux. We state it here. \begin{theorem}[Weil's estimate]\label{weil} Let $\mathbb{F}_q$ be the finite field of order $q$, and let $\chi$ be a character of $\mathbb{F}_q$ of order $s$. Let $f(x)$ be a polynomial of degree $d$ over $\mathbb{F}_q$ such that $f(x)$ cannot be written in the form $c\cdot {h(x)}^s$ for any $c\in\mathbb{F}_q$ and any polynomial $h(x)$ over $\mathbb{F}_q$. Then $$\Bigl\lvert\sum_{x\in\mathbb{F}_q}\chi(f(x))\Bigr\rvert\leq (d-1)\sqrt{q}.$$ \end{theorem} The rest of the article is organized as follows. In Section $3$, we prove Theorem \ref{thm1}. In Section $4$, we prove Theorem \ref{thm2}. Finally, in Section $5$ we prove the asymptotic formula for the number of cliques of any order in Peisert graphs. To count the number of cliques in Peisert graphs, we note that since the graph is vertex-transitive, any two vertices in the graph are contained in the same number of cliques of a given order. We will also use the following notation throughout the proofs. For an induced subgraph $S$ of a Peisert graph and a vertex $v\in S$, we denote by $k_3(S)$ and $k_3(S,v)$ the number of cliques of order $3$ in $S$ and the number of cliques of order $3$ in $S$ containing $v$, respectively. \section{number of $3$-order cliques in $P^\ast(q)$} In this section, we prove Theorem \ref{thm1}. Recall that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Also, $\langle H\rangle$ is the subgraph induced by $H$ and $h=1-\chi_4(g)$. \begin{proof}[Proof of Theorem \ref{thm1}] Using the vertex-transitivity of $P^\ast(q)$, we find that \begin{align}\label{trian} k_3(P^\ast(q))&=\frac{1}{3}\times q\times k_3(P^\ast(q),0)\notag \\ &=\frac{q}{3}\times \text{number of edges in }\langle H\rangle . \end{align} Now, \begin{align}\label{ww-new} \text{the number of edges in~} \langle H\rangle =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1, \end{align} where the first sum is taken over all $x$ such that $\chi_4(x)\in\{1,\chi_4(g)\}$ and the second sum is taken over all $y\neq x$ such that $\chi_4(y)\in\{1,\chi_4(g)\}$. Hence, using \eqref{qq} in \eqref{ww-new}, we find that \begin{align}\label{ww} &\text{the number of edges in~}\langle H\rangle \notag \\ &=\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))\notag\\ &\hspace{1.5cm}\times \sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))].
\end{align} We expand the inner summation in $\eqref{ww}$ to obtain \begin{align}\label{ee} &\sum\limits_{y\neq 0,x}[4+2h\chi_4(y)+2\overline{h}\overline{\chi_4}(y)+2h\chi_4(x-y)+2\overline{h}\overline{\chi_4}(x-y)+2\chi_4(y)\overline{\chi_4}(x-y)\notag \\ & +2\overline{\chi_4}(y)\chi_4(x-y)-2\chi_4(g)\chi_4(y(x-y))+2\chi_4(g)\overline{\chi_4}(y(x-y))]. \end{align} We have \begin{align}\label{new-eqn3} \sum\limits_{y\neq 0,x}\chi_4(y(x-y))=\sum\limits_{y\neq 0,1}\chi_4(xy)\chi_4(x-xy)=\varphi(x) J(\chi_4,\chi_4). \end{align} Using Lemma \ref{lem2} and \eqref{new-eqn3}, \eqref{ee} yields \begin{align}\label{new-eqn2} &\sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))]\notag \\ &=4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x)-2\chi_4(g)\varphi(x)J(\chi_4,\chi_4)+2\chi_4(g)\varphi(x)\overline{J(\chi_4,\chi_4)}. \end{align} Now, putting \eqref{new-eqn2} into \eqref{ww}, and then using Lemma \ref{rr}, we find that \begin{align*} &\text{the number of edges in }\langle H\rangle\\ =&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x))]\\ =&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[8(q-5)+(4h(q-3)-8h)\chi_4(x)+(4\overline{h}(q-3)-8\overline{h})\overline{\chi_4}(x)]\\ =&\frac{(q-1)(q-5)}{16}. \end{align*} Substituting this value in $\eqref{trian}$ gives us the required result. \end{proof} \section{number of $4$-order cliques in $P^\ast(q)$} In this section, we prove Theorem \ref{thm2}. First, we recall again that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g \langle g^4\rangle$. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where the value of $\rho$ is given by Lemma \ref{rr}. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Note that in the proof we shall use the fact that $\chi_4(-1)=1$ multiple times. Recall that $h=1-\chi_4(g)$. \begin{proof}[Proof of Theorem \ref{thm2}] Noting again that $P^\ast(q)$ is vertex-transitive, we find that \begin{align}\label{tt} k_4(P^\ast(q)) &=\frac{q}{4}\times \text{ number of $4$-order cliques in $P^\ast(q)$ containing }0\notag \\ &=\frac{q}{4}\times k_3(\langle H\rangle). \end{align} Let $a, b\in H$ be such that $\chi_4(ab^{-1})=1$. We note that \begin{align}\label{new-eqn4} k_3(\langle H\rangle, a) =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1, \end{align} where the 1st sum is taken over all $x$ such that $\chi_4(x), \chi_4(a-x)\in\{1,\chi_4(g)\}$ and the 2nd sum is taken over all $y\neq x$ such that $\chi_4(y), \chi_4(a-y)\in\{1,\chi_4(g)\}$. Hence, using \eqref{qq} in \eqref{new-eqn4}, we find that \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{y\neq 0,a,x}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(a-y)+\overline{h}\overline{\chi_4}(a-y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))]. 
\end{align*} Using the substitution $Y=ba^{-1}y$, the sum indexed by $y$ in the above yields \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{Y\neq 0,b,ba^{-1}x} [(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b))(2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\ &=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{x\neq 0,a,ab^{-1}Y}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b)) (2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))]. \end{align*} Again, using the substitution $X=ba^{-1}x$ yields \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{X\neq 0,b,Y}[(2+h\chi_4(b-X)+\overline{h}\overline{\chi_4}(b-X))\\ &\times(2+h\chi_4(b-Y)+\overline{h}\overline{\chi_4}(b-Y))(2+h\chi_4(X-Y)+\overline{h}\overline{\chi_4}(X-Y))\\ &\times (2+h\chi_4(X)+\overline{h}\overline{\chi_4}(X))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\ &=k_3(\langle H\rangle,b). \end{align*} Thus, if $a, b\in H$ are such that $\chi_4(ab^{-1})=1$, then \begin{align}\label{cond} k_3(\langle H\rangle,a)=k_3(\langle H\rangle,b). \end{align} Let $\langle g^4\rangle =\{x_1,\ldots,x_{\frac{q-1}{4}}\}$ with $x_1=1$ and $g\langle g^4\rangle=\{y_1,\ldots, y_{\frac{q-1}{4}}\}$ with $y_1=g$. Then, \begin{align}\label{pick} \sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,x_i)+\sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,y_i)=3\times k_3(\langle H\rangle). \end{align} By $\eqref{cond}$, we have $$k_3(\langle H\rangle,x_1)=k_3(\langle H\rangle,x_2)=\cdots=k_3(\langle H\rangle,x_{\frac{q-1}{4}})$$ and $$k_3(\langle H\rangle,y_1)=k_3(\langle H\rangle,y_2)=\cdots=k_3(\langle H\rangle,y_{\frac{q-1}{4}}).$$ Hence, \eqref{pick} yields \begin{align}\label{1g} k_3(\langle H\rangle)=\frac{q-1}{12}[k_3(\langle H\rangle, 1)+ k_3(\langle H\rangle, g)]. \end{align} Thus, we need to find only $k_3(\langle H\rangle, 1)$ and $k_3(\langle H\rangle, g)$. We first find $k_3(\langle H\rangle, 1)$. \par We have \begin{align}\label{xandy} &k_3(\langle H\rangle,1)\notag \\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,1}[ (2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))]\notag\\ &\hspace{1.5cm} \sum_{y\neq 0,1,x}[(2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)) (2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)) \notag \\ &\hspace{2.5cm}\times (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))]. \end{align} Let $i_1,i_2,i_3\in\{\pm 1\} $ and let $F_{i_1,i_2,i_3}$ denote the term $\chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)$. Using this notation, we expand and evaluate the inner summation in \eqref{xandy}. 
We have \begin{align}\label{sun} &\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag\\ &=\sum_{y\neq 0,1,x}[8+4h\chi_4(y)+4\overline{h}\overline{\chi_4}(y)+4h\chi_4(1-y)+4\overline{h}\overline{\chi_4}(1-y)+4h\chi_4(x-y)\notag\\&+4\overline{h}\overline{\chi_4}(x-y) +4\chi_4(y)\overline{\chi_4}(1-y)+4\overline{\chi_4}(y)\chi_4(1-y)+4\chi_4(y)\overline{\chi_4}(x-y)\notag\\ &+4\overline{\chi_4}(y)\chi_4(x-y) +4\chi_4(1-y)\overline{\chi_4}(x-y)+4\overline{\chi_4}(1-y)\chi_4(x-y)\notag\\ &+2h^2\chi_4(y)\chi_4(1-y)+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(1-y)+2h^2\chi_4(y)\chi_4(x-y)\notag\\ &+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(x-y) +2h^2\chi_4(1-y)\chi_4(x-y)+2{\overline{h}}^2\overline{\chi_4}(1-y)\overline{\chi_4}(x-y)\notag\\ &+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1} \notag\\ &+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}]. \end{align} Now, referring to Lemmas \ref{lem1} and \ref{lem2}, we can easily check that any term of the form $\sum\limits_{y}\chi_4(\cdot)\overline{\chi_4}(\cdot)$ gives $-1$, $\sum\limits_y \chi_4((y-1)(y-x))$ gives $\varphi(x-1)\rho$ and $\sum\limits_y \chi_4(y(y-x))$ gives $\varphi(x)\rho$. Hence, $\eqref{sun}$ yields \begin{align}\label{yonly} &\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag \\ &=A+B\chi_4(x)+\overline{B}\overline{\chi_4}(x)+B\chi_4(x-1)+\overline{B}\overline{\chi_4}(x-1)-4\chi_4(x)\overline{\chi_4}(x-1)\notag \\ &-4\overline{\chi_4}(x)\chi_4(x-1)-2h^2\chi_4(x)\chi_4(x-1)-2{\overline{h}}^2\overline{\chi_4}(x)\overline{\chi_4}(x-1)\notag \\ &+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1}\notag \\&+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}\notag\\ &=:\mathcal{I}, \end{align} where $A=8(q-8)$ and $B=-12h$. \par Next, we introduce some notations. Let \begin{align*} B_1&=16(q-9)+6B+\overline{B}h^2,\\ D_1&=2\overline{B}-8\overline{h}+Bh^2-4h^3,\\ E_1&=8(q-9)+4Bh,\\ F_1&=16(q-9)+4 Re(B\overline{h}). \end{align*} For $i\in\{1,2,3,4\}$ and $j\in\{1,2,\ldots,8\}$, we define the following character sums. \begin{align*} T_j&:=\sum_{x\neq 0,1}\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\ U_{ij}&:=\sum_{x\neq 0,1}\chi_4^l(m)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\ V_{ij}&:=\sum_x\chi_4^{l_1}(x)\chi_4^{l_2}(1-x)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y), \end{align*} where \begin{align*} l = \left\{ \begin{array}{lll} 1, & \hbox{if $i$ is odd,} \\ -1, & \hbox{\text{otherwise};} \end{array} \right. \end{align*} \begin{align*} m = \left\{ \begin{array}{lll} x, & \hbox{if $i\in\{1,2\}$,} \\ 1-x, & \hbox{\text{otherwise;}} \end{array} \right. \end{align*} and \begin{align*} (l_1,l_2) = \left\{ \begin{array}{lll} (1,1), & \hbox{if $i=1$,} \\ (1,-1), & \hbox{if $i=2$,} \\ (-1,1), & \hbox{if $i=3$,} \\ (-1,-1), & \hbox{if $i=4$.} \end{array} \right. \end{align*} Also, corresponding to each $j$, let $(i_1,i_2,i_3)$ take the value according to the following table: \begin{table}[h!] 
\begin{center} \begin{tabular}{ |c| c| c| c| } \hline $j$ & $i_1$ & $i_2$ & $i_3$ \\ \hline $1$ & $1$ & $1$ & $1$ \\ $2$ & $1$ & $1$ & $-1$ \\ $3$ & $1$ & $-1$ & $1$\\ $4$ & $1$ & $-1$ & $-1$\\ $5$ & $-1$ & $1$ & $1$\\ $6$ & $-1$ & $1$ & $-1$\\ $7$ & $-1$ & $-1$ & $1$\\ $8$ & $-1$ & $-1$ & $-1$\\ \hline \end{tabular} \end{center} \end{table}\\ Then, using $\eqref{yonly}$ and the notations we just described, $\eqref{xandy}$ yields \begin{align*} &k_3(\langle H\rangle,1)=\frac{1}{2048}\sum_{x\neq 0,1}[2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)][2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x)]\times \mathcal{I}\\ =&\frac{1}{2048}\sum_{x\neq 0,1}\Big[ 32(q-15)+B_1\chi_4(x)+\overline{B_1}\overline{\chi_4}(x)+B_1\chi_4(x-1)+\overline{B_1}\overline{\chi_4}(x-1)\\ &+4 Re(Bh)\varphi(x)+4 Re(Bh)\varphi(x-1)+D_1\chi_4(x)\varphi(x-1)+\overline{D_1}\overline{\chi_4}(x)\varphi(x-1)\\ &+D_1\varphi(x)\chi_4(x-1)+\overline{D_1}\varphi(x)\overline{\chi_4}(x-1)+E_1\chi_4(x)\chi_4(x-1)+\overline{E_1}\overline{\chi_4}(x)\overline{\chi_4}(x-1) \\ &+F_1\chi_4(x)\overline{\chi_4}(1-x)+\overline{F_1}\overline{\chi_4}(x)\chi_4(x-1)\Big]\\ &+\frac{1}{2\times 4^5}\Big[ 4h^3T_1+8hT_2+8h T_3+8\overline{h}T_4+8h T_5+8\overline{h}T_6+8\overline{h}T_7+4{\overline{h}}^3 T_8\\ &+2h^4 U_{11}+4h^2 U_{12}+4h^2 U_{13}+8 U_{14}+4h^2 U_{15}+8 U_{16}+8 U_{17}+4{\overline{h}}^2 U_{18}\\ &+4h^2 U_{21} +8 U_{22}+8 U_{23}+4{\overline{h}}^2 U_{24}+8 U_{25}+4{\overline{h}}^2 U_{26}+4{\overline{h}}^2 U_{27}+2{\overline{h}}^4 U_{28}\\ &+2h^4 U_{31}+4h^2 U_{32}+4h^2 U_{33}+8 U_{34}+4h^2 U_{35}+8 U_{36}+8 U_{37}+4{\overline{h}}^2 U_{38}\\ &+4h^2 U_{41} +8 U_{42}+8 U_{43}+4{\overline{h}}^2 U_{44}+8 U_{45}+4{\overline{h}}^2 U_{46}+4{\overline{h}}^2 U_{47}+2{\overline{h}}^4 U_{48}\\ &+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\\ &+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\\ &+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\\ &+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48} \Big]. \end{align*} Using Lemmas \ref{lem3}, \ref{lema1} and \ref{corr}, we find that \begin{align}\label{bigex} &k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81) \right.\notag \\ &+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\notag \\ &+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\notag \\ &+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\notag \\ &\left.+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48}\right]. \end{align} Now, we convert each term of the form $V_{i j}$ $[i \in\{1,2,3,4\}, j\in\{1,2, \ldots, 8\}]$ into its equivalent $q^{2}\cdot {_{3}}F_{2}$ form. We use the notation $(t_{1}, t_{2}, \ldots, t_{5})\in \mathbb{Z}_4^5$ for the term $q^{2}\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_{1}}, & \chi_4^{t_{2}}, & \chi_4^{t_{3}}\\ & \chi_4^{t_{4}}, & \chi_4^{t_{5}}\end{array}| 1\right)$. 
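For instance, since $\chi_4^{0}=\varepsilon$, $\chi_4^{2}=\varphi$ and $\chi_4^{3}=\overline{\chi_4}$, the tuple $(3,1,1,2,2)$ stands for $q^{2}\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varphi\end{array}| 1\right)$, while $(1,1,1,0,0)$ stands for $q^{2}\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}| 1\right)$.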
Then, $\eqref{bigex}$ yields
\begin{align}\label{bigexp} &k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81)\notag \right. \\ &\hspace{.5cm}+h^{5}(3,1,1,2,2)+2 h^{3}(1,1,3,2,0)+2 h^{3}(3,1,1,0,2)+4 h(1,1,3,0,0)\notag \\ &\hspace{.5cm}+2h^{3}(3,3,1,0,2)+4 h(1,3,3,0,0)+4 h(3,3,1,2,2)+4 \overline{h}(1,3,3,2,0) \notag\\ &\hspace{.5cm}+2 h^{3}(3,1,3,2,2)+4 h(1,1,1,2,0)+4 h(3,1,3,0,2)+4 \overline{h}(1,1,1,0,0)\notag\\ &\hspace{.5cm}+4 h(3,3,3,0,2)+4 \overline{h}(1,3,1,0,0)+4 \overline{h}(3,3,3,2,2)+2 {\overline{h}}^{3}(1,3,1,2,0)\notag \\ &\hspace{.5cm}+2 h^{3}(3,1,3,2,0)+4 h(1,1,1,2,2)+4 h(3,1,3,0,0)+4 \overline{h}(1,1,1,0,2)\notag\\ &\hspace{.5cm}+4 h(3,3,3,0,0)+4 \overline{h}(1,3,1,0,2)+4 \overline{h}(3,3,3,2,0)+2 {\overline{h}}^{3}(1,3,1,2,2) \notag \\ &\hspace{.5cm}+4 h(3,1,1,2,0)+4 \overline{h}(1,1,3,2,2)+4 \overline{h}(3,1,1,0,0)+2{\overline{h}}^{3}(1,1,3,0,2)\notag\\ &\hspace{.5cm}\left. +4 \overline{h}(3,3,1,0,0)+2 \overline{h}^{3}(1,3,3,0,2)+ 2\overline{h}^{3}(3,3,1,2,0)+{\overline{h}}^{5}(1,3,3,2,2)\right]. \end{align}
Next, we use Lemma \ref{dlemma} along with the notations therein. We list the tuples $(t_1,t_2,\ldots, t_5)$ in each orbit of the group action of $\mathcal{F}$ on $X$, and then group the corresponding terms in $\eqref{bigexp}$ together. The orbit representatives $(1,1,1,0,0)$, $(3,3,3,0,0)$, $(1,3,3,2,0)$, $(3,1,1,2,0)$ and $(1,1,3,0,0)$ given in the proof of Corollary 2.7 in \cite{dawsey} are the ones whose orbits exhaust the hypergeometric terms in $\eqref{bigexp}$. We denote the $q^2\cdot {_{3}}F_{2}$ terms corresponding to these orbit representatives by $M_1,M_2,\ldots,M_5$, respectively. Then, $\eqref{bigexp}$ yields
\begin{align}\label{mex} &k_3(\langle H\rangle,1)=\frac{1}{2048} \left[32(q^2-20q+81)\right. \notag \\ &\hspace{.5cm}+h^{5}M_4+2 h^{3}M_1+2 h^{3}M_1+4 hM_5 +2h^{3}M_1+4 hM_5+4 hM_1+4 \overline{h}M_3 \notag\\ &\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_2+4 \overline{h}M_1+4 hM_5+4 \overline{h}M_5+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3\notag \\ &\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_5+4 \overline{h}M_5 +4 hM_2+4 \overline{h}M_1+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3 \notag \\ &\hspace{.5cm}+4 hM_4+4 \overline{h}M_3+4 \overline{h}M_5+2{\overline{h}}^{3}M_2 +4 \overline{h}M_5+2 \overline{h}^{3}M_2+\left. 2\overline{h}^{3}M_2+{\overline{h}}^{5}M_3\right]. \end{align}
Using Lemma \ref{lem4} (note that we could not reduce $M_5$), $\eqref{mex}$ yields
\begin{align}\label{mexp} k_3(\langle H\rangle,1)=\frac{1}{128}\left[2(q^2-20q+81)+2 u(-p)^t +3q^2\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right) \right]. \end{align}
Returning to $\eqref{1g}$, it remains to calculate $k_3(\langle H\rangle,g)$. Again, we have
\begin{align}\label{gxandy} &k_3(\langle H\rangle,g)\notag\\ &=\frac{1}{2048}\sum_{x\neq 0,g}\sum_{y\neq 0,g,x}\left[ (2+h\chi_4(g-x)+\overline{h}\overline{\chi_4}(g-x)) (2+h\chi_4(g-y)+\overline{h}\overline{\chi_4}(g-y))\notag \right.\\ &\times\left. (2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)) (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))\right].
\end{align}
Using the substitutions $Y=yg^{-1}$ and $X=xg^{-1}$, and then using the fact that $h\chi_4(g)=\overline{h}$, \eqref{gxandy} yields
\begin{align*} &k_3(\langle H\rangle,g)\\ &=\frac{1}{2048}\sum_{x\neq 0,1}\sum_{y\neq 0,1,x}\left[ (2+\overline{h}\chi_4(1-x)+h\overline{\chi_4}(1-x)) (2+\overline{h}\chi_4(1-y)+h\overline{\chi_4}(1-y))\notag \right.\\ &\times\left. (2+\overline{h}\chi_4(x-y)+h\overline{\chi_4}(x-y)) (2+\overline{h}\chi_4(x)+h\overline{\chi_4}(x))(2+\overline{h}\chi_4(y)+h\overline{\chi_4}(y))\right]. \end{align*}
Comparing this with $\eqref{xandy}$, we see that the expansion of the expression inside this summation consists of the same summation terms as in $\eqref{xandy}$, with the coefficient of each summation replaced by its complex conjugate. Hence, the analogue of $\eqref{mex}$ for $\eqref{gxandy}$ is obtained by replacing each coefficient in $\eqref{mex}$ by its complex conjugate. Now, $\eqref{mexp}$ is the final expression obtained from $\eqref{mex}$, and it consists of three summands: two real numbers and a ${}_3F_2$ term whose coefficient is also real. By the foregoing argument, $\eqref{gxandy}$ therefore yields the same value as given in $\eqref{mexp}$, that is, $k_3(\langle H\rangle,g)=k_3(\langle H\rangle,1)$. Thus, $\eqref{1g}$ gives that
\begin{align*} k_3(\langle H\rangle)=\frac{q-1}{768}&\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right)\right]. \end{align*}
Substituting the above value in $\eqref{tt}$, we complete the proof of the theorem. \end{proof}
\section{proof of theorem $\ref{asym}$}
Let $m\geq 1$ be an integer. We have seen that the calculations involved in computing the number of $4$-order cliques in $P^\ast(q)$ are already very tedious. However, we can obtain an asymptotic result on the number of cliques of order $m$ in $P^\ast(q)$ as $q\rightarrow\infty$. The method follows along the lines of \cite{wage}, and we proceed by induction on $m$.
\begin{proof}[Proof of Theorem \ref{asym}] Let $\mathbb{F}_q^\ast=\langle g\rangle$. We fix a formal ordering $a_1<\cdots<a_q$ of the elements of $\mathbb{F}_q$. Let $\chi_4$ be a fixed character on $\mathbb{F}_q$ of order $4$ and let $h=1-\chi_4(g)$. First, we note that the result holds for $m=1,2$, and so let $m\geq 3$ and assume that the result holds for $m-1$. We shall use the notation `$a_m\neq a_i$' to mean $a_m\neq a_1,\ldots,a_{m-1}$. Recalling \eqref{qq}, we see that
\begin{align}\label{ss} k_m(P^\ast(q))&=\mathop{\sum\cdots\sum}_{a_1<\cdots<a_m}\prod_{1\leq i<j\leq m} \frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\notag \\ &=\frac{1}{m}\mathop{\sum\cdots\sum}_{a_1<\cdots<a_{m-1}}\left[ \prod_{1\leq i<j\leq m-1}\frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\right.\notag \\ &\left.\frac{1}{4^{m-1}}\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\right]. \end{align}
In order to use the induction hypothesis, we try to bound the expression $$\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}$$ in terms of $q$ and $m$.
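Observe that, for $z\neq 0$, the factor $2+h\chi_4(z)+\overline{h}\chi_4^{3}(z)$ equals $4$ if $\chi_4(z)\in\{1,\chi_4(g)\}$, that is, if $z\in H$, and equals $0$ otherwise. Hence the above expression is $4^{m-1}$ times the number of common neighbours of $a_1,\ldots,a_{m-1}$ in $P^\ast(q)$, and we now estimate it uniformly in $a_1,\ldots,a_{m-1}$.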
We find that
\begin{align}\label{dd} \mathcal{J}&:=\sum\limits_{a_m\neq a_i} \prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\notag \\ &=2^{m-1}(q-m+1)\notag \\ &+\sum\limits_{a_m\neq a_i}\left[\text{the sum of the remaining } (3^{m-1}-1)\text{ terms of the expansion, each containing expressions in }\chi_4\right]. \end{align}
Each term in \eqref{dd} containing $\chi_4$ is of the form $$2^f h^{i'}\overline{h}^{j'}\chi_4((a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}),$$ where
\begin{equation}\label{asy} \left.\begin{array}{l} 0\leq f\leq m-2,\\ 0\leq i',j'\leq m-1,\\ i_1,\ldots,i_s \in \{1,2,\ldots,m-1\},\\ j_1,\ldots,j_s \in \{1,3\},\text{ and}\\ 1\leq s\leq m-1. \end{array}\right\} \end{equation}
Let us consider one such term containing $\chi_4$. Excluding the constant factor $2^fh^{i'}\overline{h}^{j'}$, the argument of $\chi_4$ is a polynomial in the variable $a_m$; to avoid confusion with the generator $g$, we denote it by $G(a_m)=(a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}$. Using Weil's estimate (Theorem \ref{weil}), we find that
\begin{align}\label{asy1} \Big|\sum\limits_{a_m\in\mathbb{F}_q}\chi_4(G(a_m))\Big|\leq (j_1+\cdots+j_s-1)\sqrt{q}. \end{align}
Then, using \eqref{asy1}, we have
\begin{align}\label{asy2} \Big|2^fh^{i'}\overline{h}^{j'} \sum\limits_{a_m}\chi_4(G(a_m))\Big|&\leq 2^{f+i'+j'}(j_1+\cdots+j_s-1)\sqrt{q}\notag \\ &\leq 2^{3m-4}(3m-4)\sqrt{q}\notag \\ &\leq 2^{3m}\cdot 3m\sqrt{q}. \end{align}
Noting that the values of $\chi_4$ are roots of unity, that $|h|=|\overline{h}|\leq 2$, and that $f+i'+j'=m-1$ (each of the $m-1$ factors of the product contributes exactly one of its three summands), so that $|2^fh^{i'}\overline{h}^{j'}|\leq 2^{m-1}$ and the $m-1$ excluded terms contribute at most $(m-1)2^{m-1}\leq 2^{2m-3}$ in absolute value for $m\geq 3$, we use \eqref{asy2} to obtain
\begin{align*} &\Big| 2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(G(a_m))\Big|\\ &=\Big| 2^fh^{i'}\overline{h}^{j'}\left\lbrace \sum\limits_{a_m}\chi_4(G(a_m))-\chi_4(G(a_1))-\cdots-\chi_4(G(a_{m-1})) \right\rbrace \Big|\\ &\leq 2^{3m}\cdot 3m\sqrt{q}+2^{2m-3}\\ &\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}), \end{align*}
and in particular,
$$-2^{2m}(1+2^m\cdot 3m\sqrt{q})\leq Re\left(2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(G(a_m))\right)\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}).$$
Since each factor $2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)$ is real, $\mathcal{J}$ is real, and so \eqref{dd} yields
\begin{align*} &2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)\\ &\leq \mathcal{J}\\ &\leq 2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1), \end{align*}
and thus, since each factor $\frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}$ is nonnegative, \eqref{ss} yields
\begin{align}\label{asy3} &[2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q))\notag\\ &\leq k_m(P^\ast(q))\notag \\ &\leq [2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q)). \end{align}
Dividing throughout by $q^m$ in \eqref{asy3} and letting $q\rightarrow \infty$, we have
\begin{align}\label{ff} &\lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}\notag \\ &\leq \lim_{q\rightarrow \infty}\frac{k_m(P^\ast(q))}{q^m}\notag \\ &\leq \lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}. \end{align}
Now, using the induction hypothesis and noting that
\begin{align*} &\lim\limits_{q\to\infty}\frac{2^{m-1}(q-m+1)\pm 2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}q}\\ &=\frac{1}{m\times 4^{m-1}}2^{m-1}\\ &=\frac{1}{m\times 2^{m-1}}, \end{align*}
we find that the limits on the left-hand side and the right-hand side of \eqref{ff} are equal, and hence $\lim\limits_{q\to\infty}\frac{k_m(P^\ast(q))}{q^m}$ equals this common value. This completes the proof of the result.
\end{proof}
Taking $m=3$ in Theorem \ref{asym}, we find that $$\lim\limits_{q\to\infty}\dfrac{k_3(P^\ast(q))}{q^3}=\frac{1}{48}.$$ We obtain the same limiting value from Theorem \ref{thm1} as well.
\par Taking $m=4$ in Theorem \ref{asym} and combining it with Theorem \ref{thm2}, we obtain the following corollary, which is also evident from Table \ref{Table-1}.
\begin{corollary}\label{cor1} We have \begin{align*} \lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4^3 \\ & \varepsilon, & \varepsilon \end{array}| 1\right)=0. \end{align*} \end{corollary}
\begin{proof} Putting $m=4$ in Theorem \ref{asym}, we have \begin{align}\label{eqn1-cor1} \lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}. \end{align} On the other hand, Theorem \ref{thm2} yields \begin{align}\label{eqn2-cor1} \lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}+3\times \lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4^3 \\ & \varepsilon, & \varepsilon \end{array}| 1\right). \end{align} Combining \eqref{eqn1-cor1} and \eqref{eqn2-cor1}, we complete the proof. \end{proof}
\section{Acknowledgements}
We are extremely grateful to Ken Ono for reading a preliminary version of this paper and for his helpful comments.
\begin{thebibliography}{99}
\bibitem{jamesalex2} J. Alexander, {\it Selected results in combinatorics and graph theory}, Thesis (Ph.D.), University of Delaware, 2016.
\bibitem{atanasov2014certain} R. Atanasov, M. Budden, J. Lambert, K. Murphy and A. Penland, {\it On certain induced subgraphs of Paley graphs}, Acta Univ. Apulensis Math. Inform. 40 (2014), 51--65.
\bibitem{berndt} B. C. Berndt, K. S. Williams and R. J. Evans, {\it Gauss and Jacobi sums}, Canadian Mathematical Society Series of Monographs and Advanced Texts, A Wiley-Interscience Publication, John Wiley \& Sons, Inc., New York, 1998.
\bibitem{BB} A. Bhowmik and R. Barman, {\it On a Paley-type graph on $\mathbb{Z}_n$}, Graphs and Combinatorics 38 (2022), no. 2, Paper No. 41, 25 pp.
\bibitem{chao} C. Chao, {\it On the classification of symmetric graphs with a prime number of vertices}, Transactions of the American Mathematical Society 158 (1971), 247--256.
\bibitem{dawsey} M. L. Dawsey and D. McCarthy, {\it Generalized Paley graphs and their complete subgraphs of orders three and four}, Research in the Mathematical Sciences 8 (2021), no. 2, Paper No. 18, 23 pp.
\bibitem{erdos} P. Erd\H{o}s, {\it On the number of complete subgraphs contained in certain graphs}, Magyar Tud. Akad. Mat. Kutat{\'o} Int. K{\"o}zl. 7 (1962), 459--464.
\bibitem{evans1981number} R. J. Evans, J. R. Pulham and J. Sheehan, {\it On the number of complete subgraphs contained in certain graphs}, Journal of Combinatorial Theory, Series B 30 (1981), 364--371.
\bibitem{goodman} A. W. Goodman, {\it On sets of acquaintances and strangers at any party}, The American Mathematical Monthly 66 (1959), 778--783.
\bibitem{greene} J. Greene, {\it Hypergeometric functions over finite fields}, Trans. Amer. Math. Soc. 301 (1987), 77--101.
\bibitem{greene2} J. Greene, {\it Character Sum Analogues for Hypergeometric and Generalized Hypergeometric Functions over Finite Fields}, Ph.D. thesis, Univ. of Minnesota, Minneapolis, 1984.
\bibitem{katre} S. A. Katre and A. R. Rajwade, {\it Resolution of the sign ambiguity in the determination of the cyclotomic numbers of order 4 and the corresponding Jacobsthal sum}, Mathematica Scandinavica 60 (1987), 52--62.
\bibitem{lim2006generalised} T. K. Lim and C. E.
Praeger, {\it On generalised Paley graphs and their automorphism groups}, Michigan Mathematical Journal 58 (2009), 293--308.
\bibitem{mccarthy3} D. McCarthy, {\it Transformations of well-poised hypergeometric functions over finite fields}, Finite Fields and Their Applications 18 (2012), no. 6, 1133--1147.
\bibitem{ono} K. Ono, {\it Values of Gaussian hypergeometric series}, Trans. Amer. Math. Soc. 350 (1998), no. 3, 1205--1223.
\bibitem{ono2} K. Ono, {\it The web of modularity: arithmetic of the coefficients of modular forms and $q$-series}, CBMS Regional Conference Series in Mathematics, 102, Amer. Math. Soc., Providence, RI, 2004.
\bibitem{paleyp} R. E. A. C. Paley, {\it On orthogonal matrices}, Journal of Mathematics and Physics 12 (1933), 311--320.
\bibitem{peisert} W. Peisert, {\it All self-complementary symmetric graphs}, Journal of Algebra 240 (2001), 209--229.
\bibitem{thomason} A. G. Thomason, {\it Partitions of Graphs}, Ph.D. thesis, Cambridge University (1979).
\bibitem{wage} N. Wage, {\it Character sums and Ramsey properties of generalized Paley graphs}, Integers 6 (2006), article number 18.
\bibitem{zhang} H. Zhang, {\it Self-complementary symmetric graphs}, Journal of Graph Theory 16 (1992), 1--5.
\end{thebibliography}
\end{document}